@Chapter
    @Title { Resource Solvers }
    @Tag { resource_solvers }
@Begin
@LP
A @I { resource solver } assigns resources to tasks, or changes
existing resource assignments.  This chapter presents the resource
solvers packaged with KHE.
@BeginSections

@Section
    @Title { Specification }
    @Tag { resource_solvers.spec }
@Begin
@LP
The recommended interface for resource solvers, defined in
@C { khe_solvers.h }, is
@ID @C {
typedef bool (*KHE_TASKING_SOLVER)(KHE_TASKING tasking,
  KHE_OPTIONS options);
}
It assigns resources to some of the tasks of @C { tasking },
influenced by @C { options }, returning @C { true } if it changes, or
at least usually changes, the solution.  Taskings were defined in
Section {@NumberOf extras.taskings}.  The @C { options } parameter is as
in Section {@NumberOf general_solvers.options}; by convention, options
consulted by resource solvers have names beginning with @C { rs_ }.
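@PP
For concreteness, here is a skeleton of a solver conforming to this
interface.  It is a sketch only:  @C { KheMySolver } is a hypothetical
name, and the Boolean option accessor shown stands in for whatever
Section {@NumberOf general_solvers.options} actually provides:
@ID @C {
bool KheMySolver(KHE_TASKING tasking, KHE_OPTIONS options)
{
  bool resource_invariant;
  resource_invariant = KheOptionsGetBool(options, "rs_invariant", false);
  ... assign resources to some tasks of tasking ...
  return changed;
}
}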
@PP
A resource solver could focus on the initial @I construction of a
resource assignment, or on the @I repair of an existing resource
assignment.  It is not wise, however, to try to classify solvers
rigidly in this way, because some can be used for both.  A construction
solver can be converted into a repair solver by prefixing it with
some unassignments, and a repair solver can be converted into a
construction solver by including missing assignments among the
defects that it is able to repair.
@PP
Except for preassignments, there is no reason to assign resources,
at least in large numbers, before times are assigned.  Accordingly,
a resource solver may choose to assume that all meets have been
assigned times.  It may alter time assignments in its quest for
resource assignments.
@PP
The usual way to convert preassignments in the instance into
assignments in the solution is to call @C { KheTaskTreeMake }
(Section {@NumberOf resource_structural.task_tree.construction});
this is one of several routine jobs that it carries out.
@C { KheTaskTreeMake } does not fix these assignments, although
it does reduce the domains of the affected tasks to singletons.
So other solvers should not be able to move preassigned tasks to
other resources, but they can unassign them, which will produce
errors if any preassigned tasks are unassigned when the solution
is written.
@PP
A @I { split assignment } is an assignment of two or more distinct
resources to the tasks monitored by an avoid split assignments monitor.
A @I { partial assignment } is an assignment of resources to some of these
same tasks, but not all.  An assignment can be both split and partial.
@End @Section

@Section
    @Title { The resource assignment invariant }
    @Tag { resource_solvers.invt }
@Begin
@LP
If all tasks have duration 1, then the matching defines an assignment
of resources to tasks which maximizes the number of assignments.
Although larger durations are common, and maximizing the number of
assignments is not the only objective, still it is clear from this
fact that the matching deserves a central place in resource assignment.
@PP
Accordingly, the author's work in resource assignment
@Cite { $kingston2008resource } emphasizes algorithms that preserve the
following condition, called the @I { resource assignment invariant }:
@ID @I {
The number of unmatchable demand tixels equals its initial value.
}
Assignments are permitted only when the number of unmatchable demand
tixels does not increase.  This keeps the algorithms on a path that
cannot lead to new violations of required avoid clashes constraints,
avoid unavailable times constraints, limit busy times constraints,
and limit workload constraints.  In practice, most tasks can be
assigned while preserving this invariant.
@PP
The Boolean option @F rs_invariant is used to tell resource solvers
whether they should preserve the resource assignment invariant or
not.  In principle, every resource solver should consult and obey
this option; in practice, many do but not all.  A reasonable strategy
is to preserve the invariant for most of the solve, but to relax it
near the end, to allow as many assignments as possible to be made.
This strategy is followed by KHE's high-level resource solvers
(Section {@NumberOf resource_solvers.all_together}).  They set this
option, so it is futile for the end user to set it when using these functions.
@PP
The invariant is not usually checked after each individual operation.
Rather, a sequence of related operations is carried out, and then
the number of unmatchable demand tixels at the end of the sequence
is compared with the number at the start.  If it has increased, the
sequence of operations needs to be undone.  Such sequences were called
@I { atomic sequences } in Section {@NumberOf solutions.marks},
where the following code (using a mark object) was recommended
for obtaining them:
@ID @C {
mark = KheMarkBegin(soln);
success = SomeSequenceOfOperations(...);
KheMarkEnd(mark, !success);
}
When preserving the resource invariant, this needs to be changed to
@ID @C {
mark = KheMarkBegin(soln);
init_count = KheSolnMatchingDefectCount(soln);
success = SomeSequenceOfOperations(...);
if( KheSolnMatchingDefectCount(soln) > init_count )
  success = false;
KheMarkEnd(mark, !success);
}
This works without the matching too, since then
@C { KheSolnMatchingDefectCount } returns 0.
@PP
As a simple but effective aid to getting this right, this code is
encapsulated in functions
@ID @C {
void KheAtomicOperationBegin(KHE_SOLN soln, KHE_MARK *mark,
  int *init_count, bool resource_invariant);
bool KheAtomicOperationEnd(KHE_SOLN soln, KHE_MARK *mark,
  int *init_count, bool resource_invariant, bool success);
}
which may be placed before and after a sequence of operations, like
this:
@ID @C {
KheAtomicOperationBegin(soln, &mark, &init_count, resource_invariant);
success = SomeSequenceOfOperations(...);
KheAtomicOperationEnd(soln, &mark, &init_count, resource_invariant,
  success);
}
Here @C { mark } and @C { init_count } are variables of type
@C { KHE_MARK } and @C { int }, not used for anything else,
@C { resource_invariant } is @C { true } if the operations must
preserve the resource invariant to be considered successful, and
@C { success } is their diagnosis of their own success, not including
checking the resource invariant.  @C { KheAtomicOperationEnd }
returns @C { true } if @C { success } is @C { true } and
(if @C { resource_invariant } is @C { true }) the number of
unmatchable demand tixels did not increase:
@IndentedList

@LI -1px @Break @C {
void KheAtomicOperationBegin(KHE_SOLN soln, KHE_MARK *mark,
  int *init_count, bool resource_invariant)
{
  *mark = KheMarkBegin(soln);
  *init_count = KheSolnMatchingDefectCount(soln);
}
}

@LI -1px @Break @C {
bool KheAtomicOperationEnd(KHE_SOLN soln, KHE_MARK *mark,
  int *init_count, bool resource_invariant, bool success)
{
  if( resource_invariant &&
      KheSolnMatchingDefectCount(soln) > *init_count )
    success = false;
  KheMarkEnd(*mark, !success);
  return success;
}
}

@EndList
The code is trivial, but useful because it encapsulates a common but
slightly confusing pattern.
@PP
If the resource invariant is being enforced, there may be no need
to include the cost of demand monitors in the solution cost, since
their cost cannot increase.  They must continue to monitor the
solution, however, so detaching is not appropriate.  Function
@ID {0.98 1.0} @Scale @C {
void KheDisconnectAllDemandMonitors(KHE_SOLN soln, KHE_RESOURCE_TYPE rt);
}
disconnects all demand monitors (or all demand monitors which
monitor entities of type @C { rt }, if @C { rt } is non-@C { NULL })
from all their parents, including the solution object if it is
a parent.  Thus, as required, they continue to monitor the
solution, but the costs they compute are not added to the
cost of any group monitor.  @C { KheSolnMatchingDefectCount }
still works, however, and there is nothing to prevent them from
being made children of other group monitors later.
@End @Section

@Section
    @Title { Unchecked, checked, ejecting, and Kempe task and task set moves }
    @Tag { resource_solvers.kempe }
@Begin
@LP
The operation of assigning a resource to a task is fundamental to resource
solving.  This section defines four variants of this operation (unchecked,
checked, ejecting, and Kempe), and presents functions for applying them to
individual tasks and to task sets (Section {@NumberOf extras.task_sets}).
@PP
In all cases, the task or tasks to be moved can be assigned or unassigned
initially; either way, they are reassigned to the given resource.  If the
given resource is @C { NULL }, that's fine too; it means unassignment,
even for the Kempe functions, where it would be more natural, arguably,
for the operation to be undefined.  The functions all return @C { false }
when they either cannot carry out the requested changes, or they can but
that changes nothing.  Failed operations leave the solution in its state
at the point of failure, so calls on these functions (except
@C { KheTaskMoveResource }) should be enclosed in @C { KheMarkBegin }
and @C { KheMarkEnd }, to undo failed attempts properly.
@PP
An @I { unchecked task move } is just a call on platform function
@ID @C {
bool KheTaskMoveResource(KHE_TASK task, KHE_RESOURCE r);
}
Although it makes the checks described in Section {@NumberOf solutions.tasks},
it is called `unchecked' here, because it does not check whether
the move introduces any incompatible tasks (defined below).
@PP
An @I { unchecked task set move } is a set of unchecked task moves
to the same resource, as implemented by function
@ID @C {
bool KheTaskSetMove(KHE_TASK_SET ts, KHE_RESOURCE r);
}
defined here.  It moves the tasks of @C { ts } to @C { r } using
calls to @C { KheTaskMoveResource }.  It returns @C { true } when
@C { ts } is non-empty and the individual moves all succeed.
@PP
An @I { ejecting task move } is a task move which both moves a
resource to a task and @I ejects (that is, unassigns) the resource
from all incompatible tasks.  This is done by function
@ID {0.97 1.0} @Scale @C {
bool KheEjectingTaskMove(KHE_TASK task, KHE_RESOURCE r, bool allow_eject);
}
when @C { allow_eject } is @C { true }.
It moves @C { task } to @C { r }, unassigning @C { r } from all
incompatible tasks (defined below), and returning @C { true } if it
succeeds.  Failure can be due to @C { task } being fixed, or @C { r }
not lying in the domain of @C { task }, or @C { r } being already
assigned to @C { task }, or because some incompatible task cannot
be unassigned, or it can be but @C { allow_eject } is @C { false },
meaning that ejection is not allowed (this is called a checked
task move above).
@PP
@C { KheEjectingTaskMove } considers two tasks to be incompatible
when they overlap in time.  However, in nurse rostering, two tasks
are often considered incompatible when they occur on the same day,
so another function is offered which handles such cases using
frames (Section {@NumberOf extras.frames}):
@ID @C {
bool KheEjectingTaskMoveFrame(KHE_TASK task, KHE_RESOURCE r,
  bool allow_eject, KHE_FRAME frame);
}
This is the same as @C { KheEjectingTaskMove } except that two
tasks are considered incompatible if any time that one task is
running lies in the same time group of @C { frame } as some time 
that the other task is running.  Here @C { frame } may not be
a null frame.
@PP
Unlike the corresponding function for ejecting meet moves,
@C { KheEjectingTaskMove } and @C { KheEjectingTaskMoveFrame }
do not consult the matching or use a group monitor.  Instead,
when @C { r } is non-@C { NULL }, they use @C { r }'s
timetable monitor to find the tasks assigned @C { r } that are
incompatible with @C { task } and unassign them, returning
@C { false } if any cannot be unassigned, because they are
fixed or preassigned.  Then they call @C { KheTaskMoveResource }
and return what it returns.
@PP
It is unlikely that some incompatible task @C { task2 } cannot be
unassigned because it is fixed.  This is because it is
@C { KheTaskFirstUnFixed(task2) } (Section {@NumberOf solutions.tasks.asst})
that is unassigned, not @C { task2 } itself.
@PP
An @I { ejecting task set move } is a set of ejecting task moves to
the same resource.  This operation is carried out by functions
@ID @C {
bool KheEjectingTaskSetMove(KHE_TASK_SET ts, KHE_RESOURCE r,
  bool allow_eject);
bool KheEjectingTaskSetMoveFrame(KHE_TASK_SET ts, KHE_RESOURCE r,
  bool allow_eject, KHE_FRAME frame);
}
which perform ejecting task moves on the elements of @C { ts },
without or with a frame, returning @C { true } when @C { ts } is
non-empty and all of the individual ejecting task moves succeed.
@PP
A @I { Kempe task move } is carried out by functions
@ID {0.96 1.0} @Scale @C {
bool KheKempeTaskMove(KHE_TASK task, KHE_RESOURCE r);
bool KheKempeTaskMoveFrame(KHE_TASK task, KHE_RESOURCE r, KHE_FRAME frame);
}
If @C { r } is @C { NULL }, this is just an unassignment as usual.
Otherwise, if @C { task } is initially unassigned, or assigned @C { r },
@C { false } is returned.  Otherwise, let @C { r2 } be the
resource initially assigned to @C { task }.  @C { KheKempeTaskMove }
performs a sequence of ejecting task moves, first of @C { task }
to @C { r }, then of the tasks ejected by this move to @C { r2 },
then of the tasks ejected by those moves to @C { r }, and so on
until there are no ejected tasks.  It fails if any of these ejecting
task moves fails, or if it tries to move some task twice.  There is no
@C { allow_eject } parameter because it is inherent in the Kempe
idea to keep going until all tasks are assigned.
@PP
A @I { Kempe task set move } is approximately a set of Kempe
task moves, carried out by
@ID @C {
bool KheKempeTaskSetMove(KHE_TASK_SET ts, KHE_RESOURCE r);
bool KheKempeTaskSetMoveFrame(KHE_TASK_SET ts, KHE_RESOURCE r,
  KHE_FRAME frame);
}
The tasks must initially be assigned the same resource.
This is not exactly like moving the tasks one by one,
because the rule about not moving a task twice applies to the
operation as a whole.
@PP
Finally, there is a way to select the kind of move to make on
the fly, defined by type
@ID @C {
typedef enum {
  KHE_MOVE_UNCHECKED,
  KHE_MOVE_CHECKED,
  KHE_MOVE_EJECTING,
  KHE_MOVE_KEMPE,
} KHE_MOVE_TYPE;
}
The usual four functions are offered:
@ID @C {
bool KheTypedTaskMove(KHE_TASK task, KHE_RESOURCE r, KHE_MOVE_TYPE mt);
bool KheTypedTaskMoveFrame(KHE_TASK task, KHE_RESOURCE r,
  KHE_MOVE_TYPE mt, KHE_FRAME frame);
bool KheTypedTaskSetMove(KHE_TASK_SET ts, KHE_RESOURCE r,
  KHE_MOVE_TYPE mt);
bool KheTypedTaskSetMoveFrame(KHE_TASK_SET ts, KHE_RESOURCE r,
  KHE_MOVE_TYPE mt, KHE_FRAME frame);
}
These switch on @C { mt }, then call one of the functions
above.  There is also
@ID @C {
char *KheMoveTypeShow(KHE_MOVE_TYPE mt);
}
which returns the obvious one-word description of @C { mt }:
@C { "unchecked" } and so on.
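@PP
Although the actual implementation may differ in detail, the dispatch
amounts to the following sketch, in which a checked move is an
ejecting move with @C { allow_eject } set to @C { false }:
@ID @C {
bool KheTypedTaskMove(KHE_TASK task, KHE_RESOURCE r, KHE_MOVE_TYPE mt)
{
  switch( mt )
  {
    case KHE_MOVE_UNCHECKED:  return KheTaskMoveResource(task, r);
    case KHE_MOVE_CHECKED:    return KheEjectingTaskMove(task, r, false);
    case KHE_MOVE_EJECTING:   return KheEjectingTaskMove(task, r, true);
    case KHE_MOVE_KEMPE:      return KheKempeTaskMove(task, r);
    default:                  return false;
  }
}
}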
@End @Section

# obsolete
# @Section
#     @Title { Frame operations for resource solvers }
#     @Tag { resource_solvers.frame }
# @Begin
# @LP
# This section presents some operations on frames
# (Section {@NumberOf extras.frames}) of interest to resource solvers.
# These operations mostly take a frame and a resource as parameters,
# and operate on the set of tasks lying in the frame which are currently
# assigned the resource.  We sometimes refer to this combination of a
# frame and a resource as a @I { resource frame }, but usually we just
# call it a frame.
# @PP
# There is a natural connection between resource frames and task sets,
# made manifest by
# @ID @C {
# KHE_TASK_SET KheFrameTaskSet(KHE_FRAME frame, KHE_RESOURCE r);
# }
# It returns a new task set containing the proper roots of all tasks
# which are currently assigned @C { r } and which lie in meets that
# overlap in time with the time groups of @C { frame }.  No task
# appears twice.  Expressions such as `the tasks of the frame' below
# refer to the tasks of this task set.
# @PP
# The usual visit operations are available:
# @ID @C {
# void KheFrameSetVisitNum(KHE_FRAME frame, KHE_RESOURCE r, int num);
# int KheFrameVisitNum(KHE_FRAME frame, KHE_RESOURCE r);
# bool KheFrameVisited(KHE_FRAME frame, KHE_RESOURCE r, int slack);
# void KheFrameVisit(KHE_FRAME frame, KHE_RESOURCE r);
# void KheFrameUnVisit(KHE_FRAME frame, KHE_RESOURCE r);
# }
# These do to the frame's tasks what the corresponding operations
# on task sets do.
# @PP
# As Section {@NumberOf extras.frames} explains, each time group of a
# frame comes with a polarity, as used in cluster busy times and limit
# active intervals constraints.  This means that each time group of a
# resource frame can be classified as either active or inactive,
# depending on its polarity and whether the resource is busy during
# its times or not, in the usual way.  Functions
# @ID @C {
# bool KheFrameIsActive(KHE_FRAME frame, KHE_RESOURCE r);
# bool KheFrameIsInactive(KHE_FRAME frame, KHE_RESOURCE r);
# }
# return @C { true } if @C { frame } is active (if all its time
# groups are active), and if @C { frame } is inactive (if all
# its time groups are inactive).  Functions
# @ID @C {
# KHE_FRAME KheFrameActiveAtLeft(KHE_FRAME frame, KHE_RESOURCE r);
# KHE_FRAME KheFrameInactiveAtLeft(KHE_FRAME frame, KHE_RESOURCE r);
# KHE_FRAME KheFrameActiveAtRight(KHE_FRAME frame, KHE_RESOURCE r);
# KHE_FRAME KheFrameInactiveAtRight(KHE_FRAME frame, KHE_RESOURCE r);
# }
# return the largest active or inactive slice of @C { frame } lying
# within @C { frame } at its left or right end.
# @PP
# File @C { khe_solvers.h } contains a declaration
# @ID @C {
# typedef struct khe_frame_iterator_rec {
#   ...
# } *KHE_FRAME_ITERATOR;
# }
# which can be used for visiting the active intervals of a frame,
# calling functions
# @ID @C {
# void KheFrameIteratorInit(KHE_FRAME_ITERATOR fi, KHE_FRAME frame,
#   KHE_RESOURCE r, int extra);
# bool KheFrameIteratorNext(KHE_FRAME_ITERATOR fi, KHE_FRAME *res);
# }
# The basic code for visiting each active interval of resource frame
# @C { (frame, r) } is
# @ID @C {
# struct khe_frame_iterator_rec fi_rec;
# KheFrameIteratorInit(&fi_rec, frame, f, 0);
# while( KheFrameIteratorNext(&fi_rec, &active_frame) )
# {
#   ... visit active_frame ...
# }
# }
# Each value of @C { active_frame } is a maximal active slice of @C { frame }.
# @PP
# Parameter @C { extra } may be any non-negative integer.  It is used
# to introduce diversity:  the iteration starts at the first active
# interval which begins after position @C { extra } in the frame,
# modulo the frame length, wraps around at the end, and ends just
# before that active interval.
# @PP
# As an alternative to @C { KheFrameIteratorNext }, one can call
# @ID @C {
# bool KheFrameIteratorNextPair(KHE_FRAME_ITERATOR fi, KHE_FRAME *res);
# }
# This visits one pair of active frames on each step:  @C { *res } starts
# with one or more active time groups, continues with one or more inactive
# time groups, and ends with one or more active time groups.  The second
# active slice of one value of @C { *res } will be the first active slice
# of the next.
# @End @Section

@Section
    @Title { Resource assignment algorithms }
    @Tag { resource_solvers.assignment }
@Begin
@LP
This section presents four algorithms for constructing initial
assignments of resources to tasks.  The next section documents another.
@PP
As explained at the start of this chapter, it is not wise to
emphasise the distinction between construction and repair.
Although the author has not found any uses for these algorithms
in repair, there may be some; and later in this chapter there
is another algorithm (resource matching) which is useful for
both.  Indeed, the time sweep algorithm built on resource
matching is the author's method of choice for constructing an
initial assignment in nurse rostering.
@BeginSubSections

@SubSection
    @Title { Satisfying requested task assignments }
    @Tag { resource_solvers.assignment.requested }
@Begin
@LP
When an event resource must be assigned a particular resource,
that should appear in the instance as a preassignment.  Such
preassignments in the instance are converted to assignments in
the solution by function @C { KheSolnAssignPreassignedResources }
(Section {@NumberOf solutions.complete}).
@PP
When the assignment is merely a preference, it will be included as a
request, in the form of a constraint, not as a preassignment.  Function
@ID @C {
bool KheSolnAssignRequestedResources(KHE_SOLN soln,
  KHE_RESOURCE_TYPE rt, KHE_OPTIONS options);
}
may be used to make these requested assignments.  It returns
@C { true } if it changes the solution.
@PP
It is quite likely that some of the requested assignments will be
incompatible with finding a good solution.  That's fine:  the
assignments made by @C { KheSolnAssignRequestedResources } are not
fixed in any sense; they are open to change by repair algorithms later.
@PP
@C { KheSolnAssignRequestedResources } works as follows.  First, it
finds all limit busy times and cluster busy times monitors which
monitor resources of type @C { rt }, have non-zero cost, and have a
non-zero minimum limit with @C { false } for @C { allow_zero }.  For the cluster
busy times constraint, a non-trivial maximum limit can also be
used if there are negative time groups, using the transformation
given at the end of Section {@NumberOf constraints.clusterbusy}.
We'll assume a minimum limit and positive time groups here, but
the equivalent case of a maximum limit and negative time groups
is also handled.
@PP
These monitors all require a resource to be assigned one or
more tasks.  In some cases, which we call @I { forcing } cases,
they force the resource to be assigned a task at a particular
time.  For limit busy times constraints, this is true for each
time in each time group whose cardinality is not larger than the
minimum limit.  For cluster busy times constraints, it is true
for each time in time groups of cardinality one, when the number
of time groups is not larger than the minimum limit.  In all
other cases, which we call @I { non-forcing } cases, they force
the resource to be assigned a task, but not at a particular time.
@PP
Sort the monitors into decreasing combined weight order.  Make
two passes over the monitors, handling forcing cases on the
first pass, and non-forcing cases on the second.
@PP
To handle forcing cases, find each particular time that the
resource has to be busy.  Try assigning the resource to each
task of its type running at that time, and keep the assignment
which produces the smallest solution cost.
@PP
To handle non-forcing cases, determine a set of times such
that one of those times has to be busy (for cluster busy times
monitors this will be the set of all times in all time groups
that are not already busy), try assigning the resource to
each task of its type running at any of those times, and keep
the assignment which produces the smallest solution cost.
@PP
A monitor may need several repeats of this treatment to
reduce its cost to 0.  It is important to, in effect, start
again on the monitor after keeping an assignment, since it
is possible for one assignment to affect several times or
time groups, especially when tasks have been grouped.
@PP
@C { KheSolnAssignRequestedResources } consults two options:
@TaggedList

@DTI { @F rs_requested_off } {
A Boolean option which, when @C { true }, causes
@C { KheSolnAssignRequestedResources } to do nothing.
}

@DTI { @F rs_requested_nonforced_on } {
A Boolean option which, when @C { true }, causes
@C { KheSolnAssignRequestedResources } to carry out its
second pass over the monitors (the pass that handles
non-forced requests).  By default this pass is omitted,
because it is harder to justify and less obviously
useful than the first (forced) pass.
}

@EndList
It also uses the @C { gs_event_timetable_monitor } option
(Section {@NumberOf general_solvers.general}), to find the
events running at each time.  It aborts if this option
is not in @C { options }.
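@PP
A typical call, with the non-forced pass switched on, might look as
follows.  This is a sketch only; @C { KheOptionsSetBool } stands in
for the option-setting function of
Section {@NumberOf general_solvers.options}:
@ID @C {
KheOptionsSetBool(options, "rs_requested_nonforced_on", true);
if( KheSolnAssignRequestedResources(soln, rt, options) )
  ... the solution has changed ...
}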
@PP
There is another function, closely related to
@C { KheSolnAssignRequestedResources }:
@ID @C {
bool KheMonitorRequestsSpecificBusyTimes(KHE_MONITOR m);
}
It returns @C { true } if @C { m } requests that a resource be busy
at one or more specific times, triggering a forcing case for
@C { KheSolnAssignRequestedResources }.  Precisely, it returns
@C { true } when given:
@NumberedList

@LI {
A limit busy times monitor with a non-zero minimum limit, with
@C { false } for @C { allow_zero }, and with one or more time
groups whose cardinality is at most the minimum limit.
}

@LI {
A cluster busy times monitor with a minimum limit equal to or
greater than its number of time groups, with @C { false } for
@C { allow_zero }, whose time groups are all positive, with one
or more of them containing just one time.
}

@LI {
A cluster busy times monitor such that the transformation documented
by the theorem at the end of Section {@NumberOf constraints.clusterbusy}
produces the previous case.
}

@EndList
The correspondence with @C { KheSolnAssignRequestedResources } is not
quite exact, as it turns out, but the differences are insignificant,
practically speaking.
@End @SubSection

@SubSection
    @Title { Most-constrained-first assignment }
    @Tag { resource_solvers.assignment.most_constrained_first }
@Begin
@LP
When each unfixed task has no followers, so that each demands a
resource for a single interval of time, as is usual with room
assignment, a simple `most constrained first' heuristic
assignment algorithm that maintains the resource assignment
invariant is usually sufficient to obtain a virtually optimal
assignment (in high school timetabling, not nurse rostering).  Function
@ID @C {
bool KheMostConstrainedFirstAssignResources(KHE_TASKING tasking,
  KHE_OPTIONS options);
}
does this.  It tries to assign each unassigned
unfixed task of @C { tasking }, leaving assigned ones untouched.  For
each such task, it maintains the set of resources that can currently
be assigned to the task without increasing the number of unmatchable
demand tixels.  It selects a task with the fewest such resources,
assigns it if possible, and repeats until all tasks have been handled.
@PP
Each assignment preserves the resource assignment invariant.  If
no assignment can do that, the task remains unassigned.  Among all
resources that preserve it, as a first priority an assignment that
minimizes @C { KheSolnCost } is chosen, and as a second priority,
resources that have already been assigned to other tasks of the
event resources of the task and the tasks assigned to it are
preferred.  So even when an avoid split assignments constraint is
not present, the algorithm favours assigning the same resource to
all the tasks of a given event resource, for regularity.
@PP
In fact, @C { KheMostConstrainedFirstAssignResources } assigns
task groups (Section {@NumberOf resource_structural.task_groups}),
not individual tasks.  Each task of a task group is assignable
by the same resources, so one list of suitable resources is kept
per task group.  At each step, a task group is selected for
assignment for which the number of suitable resources minus the
number of unassigned tasks is minimal.
@PP
When a resource is assigned to a task, it becomes less available, so
its suitability for assignment to its other task groups is rechecked.
If it proves to be no longer assignable to some of them, their
priorities are changed.  The task groups are held in a priority queue
(Section {@NumberOf modules.priqueue}), which allows their queue
positions to be updated efficiently when their priorities change.
@End @SubSection

@SubSection
    @Title { Resource packing }
    @Tag { resource_solvers.assignment.pack }
@Begin
@LP
To @I pack a resource means to find assignments of tasks to the
resource that make the solution cost as small as possible, while
preserving the resource assignment invariant, in effect utilizing
the resource as much as possible @Cite { $kingston2008resource }.
Following the recommended interface for resource assignment
functions (Section {@NumberOf resource_solvers.spec}), function
@ID @C {
bool KheResourcePackAssignResources(KHE_TASKING tasking,
  KHE_OPTIONS options);
}
assigns resources to the unassigned tasks of @C { tasking } using
resource packing, as follows.
@PP
The tasks are clustered into task groups
(Section {@NumberOf resource_structural.task_groups}).  Two numbers
help to estimate the difficulty of utilizing a resource effectively:
the @I { demand duration } and the @I { supply duration }.  A
resource's demand duration is the total duration of the task groups
it is assignable to.  Its supply duration is the number of times it
is available for assignment:  the cycle length, minus the number of
its workload demand monitors, minus the total duration of any tasks
it is already assigned to.
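@PP
In other words, with hypothetical helper names standing in for
quantities obtainable from the platform, the two durations and the
resulting priority of a resource @C { r } are
@ID @C {
supply_duration(r) = cycle_length - workload_monitor_count(r)
  - assigned_duration(r);
priority(r) = demand_duration(r) - supply_duration(r);
}
where a smaller value of @C { priority(r) } means that @C { r } is
packed sooner.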
@PP
The resources are placed in a priority queue, ordered by
increasing demand duration minus supply duration.  That is,
the less demand there is for the resource, or the more supply,
the more important it is to pack it sooner rather than later.
In practice, part-time teachers come first in this order, which
is good, because they are difficult to utilize effectively.
@PP
The main loop of the algorithm removes a resource of minimum
priority from the priority queue and packs it.  If this causes
any task groups to become completely assigned, they are unlinked
from the resources assignable to them, reducing those resources'
demand durations and thus altering their position in the priority
queue.  This is repeated until the queue is empty.
@PP
Each resource @C { r } is packed using a binary tree search:  at
each tree node, one available task group is either assigned to
@C { r }, or not.  The task groups are taken in decreasing order
of the maximum, over all tasks @C { t } of the task group, of
@C { KheMeetDemand(m) }, where @C { m } is the first unfixed meet
on the chain of assignments out of the meet containing @C { t }.
This gives preference to tasks whose meets are hard to move,
reasoning that the leftovers will be given split assignments, and
repairing them may require moving their meets.  The search tree
has a moderate depth limit.  At the limit, the algorithm switches
to a simple heuristic which assigns as many tasks as it can.
@End @SubSection

#@SubSection
#    @Title { Consecutive packing }
#    @Tag { resource_solvers.assignment.consec }
#@Begin
#@LP
#Consecutive packing is a variant of resource packing which is
#specialized for nurse rostering.  The basic idea is the same:
#build a timetable for each nurse in turn, hardest first,
#utilizing the nurse as much as possible.  However, consecutive
#packing understands that nurses take one shift per day, and that
#runs of the same shift type are often best.  The function is
#@ID @C {
#bool KheResourcePackConsecutive(KHE_TASKING tasking,
#  KHE_OPTIONS options);
#}
#It assigns resources to the unassigned tasks of @C { tasking },
#one resource at a time, as follows.
#@PP
#The resources are sorted so that those with larger workload limits
#precede those with smaller limits, on the principle that it is harder
#to utilize a resource effectively when its workload is larger.  Each
#resource @M { r } in turn is then assigned a complete timetable.  This
#timetable is kept, so that later resources have fewer choices of tasks.
#So the timetables of the last few resources are likely to be quite poor.
#@PP
#The timetable of @M { r } is viewed as a sequence of adjacent
#@I { same-shift runs }.  A same-shift run is a maximal sequence of
#shifts of the same type on adjacent days.  For present purposes,
#having the day off is considered to be a shift type, so sequences
#of free days are also same-shift runs.
#@PP
#For each resource @M { r }, a tree search is used.  At each step,
#one same-shift run is assigned, immediately following the previous
#run.  There are choices available at each step, of shift type and
#run length, which creates the tree structure.  The aim is to
#utilize the resource as much as possible, while minimizing
#the total cost of the constraints applicable to @M { r }.
#@PP
#Any shift type (including free time) could be chosen, provided that
#it is different from the type of the previous run.  Shift types which
#are plentiful on the day that the run is to start are preferred.
#Another preference is for runs that begin on a day when the number of
#shifts with the wanted shift type has just increased, and that end
#on a day when that number is about to decrease.  This idea, analogous
#to profile grouping, helps to keep demand even across the cycle.
#Sometimes the choice of shift type is forced, because of history,
#or because free time is required.
#@PP
#The choice of run length could be anything in the range permitted by
#a consec solver (Section {@NumberOf resource_structural.consec}) for
#@M { r }.  However, larger run lengths are preferred for true shifts,
#and smaller run lengths are preferred for free time, so as to utilize
#the resource as much as possible.
#@PP
#The algorithm also prefers tasks with smaller domains (but always
#containing @M { r }) to tasks with larger domains, and tasks which span
#several days (owing to grouping done previously) to tasks which don't
#(but always within the wanted run length).  Grouped tasks are not disturbed.
#@PP
#All these preferences are combined in the search for some runs to
#try at each step.  They don't contribute directly to solution cost,
#but they are taken into account by choosing runs that satisfy them.
#Up to six different runs are tried, with a variety of shift types
#and lengths.  This makes the solve into a search tree with up to
#six-way branching.
#@PP
#To reduce the size of the tree, the most likely of the six choices is
#tried first, and then after the first path is complete, branch and bound
#is used to prune the others.  The only constraints included in the cost
#used by the branch and bound search are those applicable to @M { r }
#whose cost cannot decrease as the number of assignments increases.
#This covers most constraints.  The exceptions are mainly complete
#weekends and minimum workload limits, which are included in the
#cost only at the end of each path.
#@End @SubSection

@SubSection
    @Title { Split assignments }
    @Tag { resource_solvers.assignment.split }
@Begin
@LP
After solver functions such as
@C { KheMostConstrainedFirstAssignResources }
(Section {@NumberOf resource_solvers.assignment.most_constrained_first})
and @C { KheEjectionChainRepairResources }
(Section {@NumberOf resource_solvers.ejection }) have assigned
resources to most tasks, some tasks may remain unassigned.  These will
have to receive split assignments.  Function
@ID @C {
bool KheFindSplitResourceAssignments(KHE_TASKING tasking,
  KHE_OPTIONS options);
}
reduces the cost of the solution as much as it can, by making split
assignments to the unassigned tasks of @C { tasking } while maintaining
the resource assignment invariant.  Any tasks which were unassigned to
begin with are replaced in @C { tasking } by their child tasks.
@PP
At the core of @C { KheFindSplitResourceAssignments } is a procedure
which takes every pair of resources capable of constituting a split
assignment to some task and tries to assign them greedily to the task,
keeping the assignment that produces the lowest solution cost.  However,
before doing that, @C { KheFindSplitResourceAssignments }
eliminates resources that cannot be assigned even to one child task,
makes assignments that are forced because there is only one available
resource (not forgetting that one forced assignment might lead
to another, or that once a resource has been assigned to one
child task it makes sense to assign it to as many others as
possible), and divides each task into independent components
(in the sense that no resource is assignable to two components).
In practice, much of what it does is more or less forced.
@End @SubSection

@EndSubSections
@End @Section

@Section
    @Title { Single resource assignment using dynamic programming }
    @Tag { resource_solvers.single }
@Begin
@LP
This section presents a polynomial-time dynamic programming algorithm
that finds an optimal timetable for a single resource, assuming that
time assignments are fixed, and that the resource's timetable can be
built up step by step in chronological order, as in nurse rostering.
@PP
Let the single resource be @M { r }.  The algorithm finds one
timetable for @M { r } for each distinct total number of assigned
times.  Each timetable minimizes the total cost of the resource
constraints that monitor the timetable of @M { r }, among all
timetables with that number of assigned times.  The caller is then
free to adopt any one of these timetables.  The algorithm does not
minimize other costs, such as the cost of assigning or not assigning
tasks, or costs that depend on the timetables of two resources.  It
chooses unassigned tasks whose assignment minimizes these costs at the moment
they are chosen, but that is not the same as minimizing them overall.
@BeginSubSections

@SubSection
    @Title { Running the algorithm }
    @Tag { resource_solvers.single.running }
@Begin
@LP
To run the algorithm, the first step is to create a
@I { single resource solver }, by calling
@ID @C {
KHE_SINGLE_RESOURCE_SOLVER KheSingleResourceSolverMake(KHE_SOLN soln,
  KHE_OPTIONS options);
}
Among other things, parameter @C { options } is used to access
the common frame, defining the days of the cycle.  The solver can
be deleted when it is no longer wanted, by calling
@ID @C {
void KheSingleResourceSolverDelete(KHE_SINGLE_RESOURCE_SOLVER srs);
}
To solve for a particular resource @C { r }, call
@ID @C {
void KheSingleResourceSolverSolve(KHE_SINGLE_RESOURCE_SOLVER srs,
  KHE_RESOURCE r, KHE_SRS_DOM_KIND dom_kind, int min_assts,
  int max_assts, KHE_COST cost_limit);
}
This does not change the solution.  Instead, it carries out the solve
and finds a number of distinct timetables.  The timetables vary in the
number of assignments they contain, as explained above.
@PP
The type of parameter @C { dom_kind } is defined in @C { khe_solvers.h }
as
@ID @C {
typedef enum {
  KHE_SRS_DOM_WEAK,
  KHE_SRS_DOM_MEDIUM,
  KHE_SRS_DOM_STRONG,
  KHE_SRS_DOM_TRIE
} KHE_SRS_DOM_KIND;
}
This determines whether the solve uses weak dominance, medium
dominance, strong dominance, or trie dominance.  These terms
are explained below.  Solutions of minimum cost are found in
any case; there may be some difference in running time.
@PP
The solve only finds timetables whose number of assignments is at least
@C { min_assts } and at most @C { max_assts }; if these restrictions
are not wanted, simply pass @C { 0 } and @C { INT_MAX }.  The result of
@C { KheResourceMaxBusyTimes } (Section {@NumberOf solutions.avail})
would be a good starting point for constructing more interesting values
for @C { min_assts } and @C { max_assts }.
@PP
The solve only finds timetables whose cost is no larger than
@C { cost_limit }.  A reasonable value for this in nurse rostering
would be @C { KheCost(0, INT_MAX) }, since hard constraint violations
are unacceptable.  To have no cost limit at all, use
@C { KheCost(INT_MAX, INT_MAX) }.
#@PP
#After @C { KheSingleResourceSolverMake } there may be any number of
#calls to
#@ID {0.98 1.0} @Scale @C {
#void KheSingleResourceSolverAddCostCutoff(KHE_SINGLE_RESOURCE_SOLVER srs,
#  KHE_COST cost_cutoff);
#}
#Then each subsequent call to @C { KheSingleResourceSolverSolve } will
#carry out one solve for each @C { cost_cutoff }, in the order they are
#passed in (which should therefore be increasing order), stopping when
#@C { KheSingleResourceSolverTimetableCount } (see below) returns
#a non-zero result, or when the cutoffs are exhausted.  Partial
#solutions whose cost is at least @C { cost_cutoff } will be dropped
#from the search.  To request an unlimited cutoff, call
#@ID @C {
#KheSingleResourceSolverAddCostCutoff(srs, KheCost(INT_MAX, INT_MAX));
#}
#To be precise, this drops solutions whose cost is
#@C { KheCost(INT_MAX, INT_MAX) }, but there are none of those in practice.
#If @C { KheSingleResourceSolverAddCostCutoff } is not called, then
#@ID @C {
#KheSingleResourceSolverAddCostCutoff(srs, KheCost(1, 0));
#}
#is called automatically.  This drops solutions with a non-zero hard
#cost, which is usually appropriate for nurse rostering.
@PP
To find out about the timetables produced by
@C { KheSingleResourceSolverSolve }, call
@ID {0.98 1.0} @Scale @C {
int KheSingleResourceSolverTimetableCount(KHE_SINGLE_RESOURCE_SOLVER srs);
void KheSingleResourceSolverTimetable(KHE_SINGLE_RESOURCE_SOLVER srs,
  int i, int *asst_count, KHE_COST *r_cost);
}
afterwards.
@C { KheSingleResourceSolverTimetableCount } returns the number of
timetables that were found, and @C { KheSingleResourceSolverTimetable }
reports on the @C { i }th timetable, for @C { i } in the range 0 to
@C { KheSingleResourceSolverTimetableCount(srs) - 1 }.  It reports
the number of assignments, and the total cost of the resource monitors
of @C { r } (the quantity that is optimized).  The timetables are
returned in increasing order of @C { *asst_count }.
@PP
To actually change the original solution, call
@ID {0.98 1.0} @Scale @C {
void KheSingleResourceSolverAdopt(KHE_SINGLE_RESOURCE_SOLVER srs, int i);
}
This will change the solution to include the @C { i }th timetable.
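@PP
For orientation, here is a hedged sketch (not code from KHE itself) of
how the calls of this section might fit together for one resource.  It
assumes variables @C { soln }, @C { options }, and @C { r } of the
appropriate types, and that @C { KHE_COST } values may be compared
directly; it adopts a timetable of minimum cost:
@ID @C {
KHE_SINGLE_RESOURCE_SOLVER srs;
int i, asst_count, best_i;  KHE_COST r_cost, best_cost;

srs = KheSingleResourceSolverMake(soln, options);
KheSingleResourceSolverSolve(srs, r, KHE_SRS_DOM_STRONG, 0, INT_MAX,
  KheCost(0, INT_MAX));
best_i = -1;  best_cost = KheCost(INT_MAX, INT_MAX);
for( i = 0;  i < KheSingleResourceSolverTimetableCount(srs);  i++ )
{
  KheSingleResourceSolverTimetable(srs, i, &asst_count, &r_cost);
  if( r_cost < best_cost )
    best_i = i, best_cost = r_cost;
}
if( best_i >= 0 )
  KheSingleResourceSolverAdopt(srs, best_i);
KheSingleResourceSolverDelete(srs);
}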
@PP
To move on to another resource, call @C { KheSingleResourceSolverSolve }
again.  It saves some time (not a huge amount) to use one solver on many
resources.  All memory is reclaimed by @C { KheSingleResourceSolverDelete }.
Finally,
@ID @C {
void KheSingleResourceSolverDebug(KHE_SINGLE_RESOURCE_SOLVER srs,
  int verbosity, int indent, FILE *fp);
}
produces a debug print of @C { srs } onto @C { fp } with the given
verbosity and indent; and
@ID @C {
void KheSingleResourceSolverTest(KHE_SOLN soln, KHE_OPTIONS options,
  KHE_RESOURCE r);
}
creates a single resource solver and tests it by finding optimal
timetables for @C { r }.  It produces some debug output, including
graphs (Section {@NumberOf general_solvers.stats.graphs}) in subdirectory
@C { stats } of the current directory.  The user must create this
subdirectory before @C { KheSingleResourceSolverTest } is called.
@End @SubSection

@SubSection
    @Title { Overview of the algorithm }
    @Tag { resource_solvers.single.overview }
@Begin
@LP
This section gives an overview of the algorithm,
omitting implementation details.
@PP
Let the days of the cycle (that is, the time groups of the common
frame) be @M { langle d sub 1 ,..., d sub n rangle } in chronological
order.  Let @M { lbrace s sub 0 , s sub 1 ,..., s sub a rbrace } be
the shift types, where @M { s sub 0 } is a special shift type
denoting non-assignment (a free day).  As a step towards the dynamic
programming algorithm, consider first a tree search algorithm which
assigns one of the @M { s sub i } to @M { d sub 1 }, then one to
@M { d sub 2 }, and so on.  Each day has @M { a + 1 } choices, so
the tree tries @M { (a + 1) sup n } timetables altogether.  Clearly
this will find the best timetable, but at the cost of exploring an
infeasibly large number of alternatives in practice.
@PP
The dynamic programming algorithm explores the same search tree,
but it prunes large parts of it.  Let a @I { partial timetable }
be an assignment of shift types to the first @M { m } days, for
some @M { m }.  (This is where we assume that timetables can be
built up in chronological order.)  For example, consider these
two partial timetables for the first 5 days:
@CD {
@Tbl
    rule { yes }
    aformat { @Cell A | @Cell B | @Cell C | @Cell D | @Cell E }
{
@MarkRowa
  A { @M { s sub 1 } }
  B { @M { s sub 1 } }
  C { @M { s sub 1 } }
  D { @M { s sub 0 } }
  E { @M { s sub 0 } }
}
&2c
@Tbl
    rule { yes }
    aformat { @Cell A | @Cell B | @Cell C | @Cell D | @Cell E }
{
@MarkRowa
  A { @M { s sub 2 } }
  B { @M { s sub 2 } }
  C { @M { s sub 2 } }
  D { @M { s sub 0 } }
  E { @M { s sub 0 } }
}
}
If there is no constraint on the total number of @M { s sub 1 } shifts
that may be assigned, and none on the total number of @M { s sub 2 }
shifts that may be assigned, then these two partial timetables are
indistinguishable from this point onwards.  If, say, the second has
a smaller cost than the first (for example if there is an upper limit
of 2 on the number of consecutive @M { s sub 1 } shifts, but not
on the number of consecutive @M { s sub 2 } shifts), then
it is safe not to explore the search tree rooted at the first
partial timetable, because any timetable it leads to will be
worse than the corresponding timetable in the search tree rooted
at the second partial timetable.  The dynamic programming algorithm
exploits this idea.
@PP
Of course, one cannot simply ignore all but one partial timetable
of length @M { m }.  For example, if there @I are upper limits on
the number of @M { s sub 1 } or @M { s sub 2 } shifts that may be
assigned, then in the example above, the search trees rooted at
both partial timetables must be explored, because the different
numbers of @M { s sub 1 } and @M { s sub 2 } shifts will very
likely lead to different costs later.
@PP
One partial solution is said to @I { dominate } another if its
presence means that the other can be dropped.  Although we have
some way to go yet before we can define dominance concretely, it
is clear that we need to be able to test pairs of partial solutions
for dominance, and that although the result depends on the two
solutions, it also depends crucially on what constraints there are.
@PP
The dynamic programming algorithm is as follows.  Search the search
tree in a breadth-first fashion, first finding partial timetables
that end on @M { d sub 1 }, then partial timetables that end on
@M { d sub 2 }, and so on.  For each day @M { d sub m }, maintain
a set of partial timetables @M { P sub m } found so far that end
on @M { d sub m }.  As each new partial timetable @M { x } ending
on @M { d sub m } is created, see whether @M { x } is dominated by
any existing partial timetable in @M { P sub m } and drop it if so.
If not, add @M { x } to @M { P sub m } and drop from @M { P sub m }
any existing partial timetables that are dominated by @M { x }.  In
this way, instead of eventually containing all timetables ending on
@M { d sub m }, @M { P sub m } eventually contains all @I undominated
timetables ending on @M { d sub m }.
@PP
Suppose @M { P sub m } is complete.  To extend the search beyond
@M { d sub m }, for
each partial timetable @M { x } in @M { P sub m }, and for each task
@M { t } beginning on day @M { d sub {m+1} }, create a new partial
timetable consisting of @M { x } plus @M { t }.  This will end on
@M { d sub {m+1} }, or later if @M { t } is a grouped task covering
several consecutive days.  (Also try an artificial task which
represents doing nothing on @M { d sub {m+1} }.)  In principle this
means trying every task in every event that begins running on
@M { d sub {m+1} }, but actually all these tasks are examined
before beginning the algorithm proper, and the right ones to try
are decided in advance, as follows.
@PP
First of all, if one of these tasks is already assigned @M { r },
then, since we have decided not to remove any existing assignments
of @M { r }, it is the only task tried beginning on
@M { d sub {m+1} }, or on any other day that the task is running.  Otherwise,
first omit each task that begins on an earlier day, or that is
assigned already, or for which @C { KheTaskAssignResourceCheck }
(Section {@NumberOf solutions.tasks.cycle}) returns @C { false }.
Then assign the remaining tasks to groups, one group for each
distinct (@I {start time}, @I {end time}) pair.  Keep one task from
each group, one whose assignment reduces the cost of its event
resources the most, according to @C { KheTaskAssignmentCostReduction }
(Section {@NumberOf solutions.tasks.asst}).  If there are two or
more equally good tasks, one whose domain is smaller is chosen.
@PP
A task with a given start time and end time begins on the day
containing the start time, and ends on the day containing the
end time.  It is added to a set of all chosen tasks associated
with its start day.  The partial timetables that are produced by
assigning that task end on its end day; they are stored in the
set of partial solutions associated with the end day.  Although
the sets of partial solutions are extended in strict chronological
order, they can be built up several days ahead.
@PP
That completes the algorithm.  But we still need a definition of
dominance which allows the algorithm to safely discard a large
number of partial timetables.
#@PP
#The number of partial timetables at level @M { m } will be at most
#the product, over all constraints @M { c } in category (3) at that
#level, of @M { D(c) }, the number of distinct possible values of
#the summary for @M { c }.  The product could be a fairly large number,
#although for most constraints (all except constraints on total
#workload) @M { D(c) } is a small constant.  For constraints on the
#number of consecutive days assigned a particular shift type, at most
#one of the summaries can be non-zero.  So there is no reason to fear
#an infeasibly large number of partial timetables.  If we want to
#maximize the utilization of resources, it might pay to cull partial
#timetables whose total number of assignments is much less than the
#largest number at that level.
#@PP
#The number of edges leading out of each partial timetable is basically
#@M { a + 1 } as for the tree search, or more if some tasks are grouped
#(by combinatorial grouping, say).  The total running time is
#proportional to this times the total number of partial timetables.
#How fast this is is a question for empirical testing, but there is
#no reason to think that it will be infeasibly large.
#@PP
#@I { obsolete below here }
#@PP
#for each partial timetable, which consists of a sequence of summaries,
#one for each constraint which is in category (3) at that level.  We
#want one timetable for each distinct total number of assignments, so
#we also include the total number of assignments in the signature.
#For all partial timetables with the same signature, continue to
#explore only out of one, one with minimum cost among all partial
#timetables with that signature.
#@PP
#Two partial timetables are called @I { equivalent } if it is safe
#to continue searching out of only one of them (one with minimum
#cost).  It is not hard to see what makes two partial timetables
#equivalent.  First, they must be timetables for the same days,
#@M { d sub 1 ,..., d sub m } for some @M { m }.  Second, consider
#each constraint @M { c } that applies to the given resource
#@M { r }.  There are three cases:
#@ParenNumberedList
#
#@LI @OneRow {
#If the value of @M { c } is affected only by days
#@M { d sub 1 ,..., d sub m }, then that value is final in both
#timetables and does not prevent the timetables from being
#equivalent.  It does help to decide which of the two has
#the smaller cost.
#@LP
#This also applies to any @I part of the cost of a constraint that
#depends only on @M { d sub 1 ,..., d sub m }.  For example, if
#some constraint limits the number of consecutive @M { s sub 1 }
#assignments, then that may produce a cost for the first timetable
#above, but that part of the cost is finalized.
#}
#
#@LI @OneRow {
#If the value of @M { c } is affected only by days
#@M { d sub {m+1} ,..., d sub n }, then that value is unknown in both
#timetables and does not prevent the two timetables from being
#equivalent.  Again, this could apply to a part of the constraint.
#}
#
#@LI @OneRow {
#Finally, suppose the value of @M { c } is partly determined by
#@M { d sub 1 ,..., d sub m }, and partly by
#@M { d sub {m+1} ,..., d sub n }.  Make a summary of the effect
#on the value of @M { c } of @M { d sub 1 ,..., d sub m }.  If this
#summary is the same in both timetables, they are equivalent as far
#as @M { c } is concerned; if it is different, they are not equivalent
#and the search trees rooted at both timetables must be explored.
#}
#
#@EndList
#We have already seen an example of the last case:  a constraint
#@M { c } which depends on the total number of assignments of shift
#@M { s sub 1 }, and hence depends on both @M { d sub 1 ,..., d sub m }
#and @M { d sub {m+1} ,..., d sub n }.  The summary in that
#case is the number of assignments of shift @M { s sub 1 } on days
#@M { d sub 1 ,..., d sub m }.
#@PP
#This summary is basically what one sees in implementations
#of history in nurse rostering.  The only difference is that it
#summarises what has happened up to and including @M { d sub m },
#rather than what has happened before @M { d sub 1 }.  It is
#usually an integer, such as a number of shifts, although we
#will see cases where more than that is required.  When
#@M { m = n }, all constraints fall into case (1).
#@PP
#The dynamic programming algorithm is as follows.  Search the search
#tree in a breadth-first fashion, first finding partial timetables up
#to @M { d sub 1 }, then partial timetables up to @M { d sub 2 }, and
#so on.  At each of these @I { levels }, construct a @I { signature }
#for each partial timetable, which consists of a sequence of summaries,
#one for each constraint which is in category (3) at that level.  We
#want one timetable for each distinct total number of assignments, so
#we also include the total number of assignments in the signature.
#For all partial timetables with the same signature, continue to
#explore only out of one, one with minimum cost among all partial
#timetables with that signature.
#@PP
#The implementation uses a hash table at each level.  Each entry's
#value is one partial timetable, and its key is that partial timetable's
#signature.  As each partial timetable is created, calculate its
#signature, and retrieve that signature.  If there is no existing
#partial timetable with that signature in the hash table, add the
#new partial timetable to the hash table.  If there is such a
#timetable and its cost is greater than the new partial timetable's
#cost, replace it in the hash table by the new partial timetable.
#Otherwise forget the new partial timetable.
#@PP
#To extend to the next level, visit every partial timetable in the
#hash table for the current level, and try each possible next assignment.
#In principle this means every task in every event that begins running
#on the next day, but actually the algorithm checks all these tasks
#before beginning the real work, and decides in advance which ones
#to try, as follows.
#@PP
#First of all, if any of these tasks are already assigned @M { r },
#then, since we have decided not to remove any existing assignments
#of @M { r }, this is the only task tried at this level.  Otherwise,
#first omit each task that begins on an earlier day, or that is
#assigned already, or for which @C { KheTaskAssignResourceCheck }
#(Section {@NumberOf solutions.tasks.cycle}) returns @C { false }.
#Then assign the remaining tasks to groups, one group for each
#distinct (start time, end time) pair.  Keep one task from each group.
#This will be one whose assignment reduces the cost of its event
#resources the most, according to @C { KheTaskAssignmentCostReduction }
#(Section {@NumberOf solutions.tasks.asst}).  If there are two or
#more equally good tasks, one whose domain is smaller is chosen.
#@PP
#A task with a given start time and end time begins on the day
#containing the start time, and is one of a list of all chosen
#tasks that begin on that day and are listed under that day.
#The partial timetable that is produced by assigning that task
#ends on the day containing the time's end day, and it is stored
#in the hash table associated with the end day.  So although hash
#tables are extended in strict chronological order, they can be
#built up several days ahead of the time sweep.
#@PP
#It remains to define the summary for each of the six
#constraint types that apply to resources in XESTT.  In fact there
#is one summary for each monitor, not for each constraint.  The sole
#condition on the choice of summary for a monitor @M { m } is that
#two summaries must be different when the assignments so far (in
#@M { d sub 1 ,..., d sub m }) differ in such a way that the same
#future assignments (in @M { d sub {m+1} ,..., d sub n }) could
#generate different cost increments for @M { m }.  This rule applies
#to all monitors, but it is satisfied by the empty summary in cases
#(1) and (2), as is easily seen.  So what follows only applies to
#case (3) monitors.
## The summary is
## only needed in case (3), for constraints that are affected by both
## @M { d sub 1 ,..., d sub m } and @M { d sub {m+1} ,..., d sub n }.
#@PP
#@I { Avoid clashes monitors }.  If the cost function is linear,
#each time contributes an independent value to the cost, and the
#constraint can be divided into a case (1) part and a case (2) part,
#so that (3) does not arise and no summary for this constraint is
#added to the signature.  If the cost function is not linear, the
#summary is the deviation over @M { d sub 1 ,..., d sub m }.
#@PP
#@I { Avoid unavailable times monitors }.  These are similar to avoid
#clashes monitors:  if the cost function is linear, each time is
#independent and no summary is needed.  If the cost function is not
#linear, the summary is the deviation over @M { d sub 1 ,..., d sub m }.
#@PP
#@I { Limit idle times monitors }.  These are not used in nurse
#rostering.  Handling them is future work (quite feasible); at present
#they are omitted from the resource cost that is optimized.
#@PP
#@I { Cluster busy times monitors }.  The summary always contains the
#number of active time groups @M { x }, including any history value.  In
#addition, for each time group that spans both @M { d sub 1 ,..., d sub m }
#and @M { d sub {m+1} ,..., d sub n }, one Boolean value is added to the
#summary, with value @C { true } when that time group is active in
#@M { d sub 1 ,..., d sub m }.
#@PP
#For an example where Booleans are needed, consider a constraint on
#the number of busy weekends, and suppose we are up to a Saturday.
#It is not enough to know the number of busy weekends so far; we also
#need to know whether Saturday is busy, since that determines whether
#assigning a shift on the immediately following Sunday increases the
#number of busy weekends.
#@PP
#If the cost function is linear, the monitor has an upper limit, and
#@M { x } exceeds the upper limit, then @M { x } is reduced to the
#upper limit.  There is no need to distinguish how far over the
#limit we are, since every additional violation will add the same
#cost to the solution.  Alternatively, if the monitor has no upper
#limit, and @M { x } exceeds the minimum limit, it is reduced to
#the minimum limit.  This reduces the number of distinct values
#that the summary can take; see below for why this is useful.
#@PP
#@I { Limit busy times monitors }.  First of all, if the cost
#function is linear, then each time group makes an independent
#contribution to the cost.  So each time group that lies entirely
#in @M { d sub 1 ,..., d sub m } or @M { d sub {m+1} ,..., d sub n }
#is ignored.  Very often, all time groups will be ignored and the
#monitor needs no summary at all.
#@PP
#Otherwise, the summary begins with the deviation over
#@M { d sub 1 ,..., d sub m }.  In addition, for each individual
#time group that spans both @M { d sub 1 ,..., d sub m } and
#@M { d sub {m+1} ,..., d sub n }, one integer is appended to the
#summary, whose value is the number of busy times in that time group.
#A Boolean would be insufficient in this case.
#@PP
#@I { Limit workload monitors }.  @I { still to do }
#@PP
#@I { Limit active intervals monitors }.  Limit active intervals
#constraints are defined in such a way that each active interval
#contributes a separate cost, even if the cost function is not linear,
#because the cost function is applied to each active interval
#separately.  So the summary does not need to be influenced by active
#intervals in @M { d sub 1 ,..., d sub m } that have definitely ended.
#@PP
#In principle, some quite strange combinations of partial
#intervals could be present.  Consider a limit active intervals
#monitor whose sequence of time groups consists of all Mondays,
#then all Tuesdays, all Wednesdays, all Thursdays, and all
#Fridays.  There could be 5 partial intervals.  In practice,
#however, the time groups are always chronologically increasing.
#We define a summary that is space-efficient only for that
#case, although it works in all cases.
#@PP
#In the order that the time groups of the monitor appear, find the
#first time group @M { T sub i } that is not covered completely by
#@M { d sub 1 ,..., d sub m }.  If there is no such time group,
#this is case (1), so make an empty summary and stop.  Otherwise find
#the last time group @M { T sub j } that is not covered completely
#by @M { d sub {m+1} ,..., d sub n }.  If there is no such time
#group, this is (2), so make an empty summary and stop.
#@PP
#Otherwise we have @M { T sub i } and @M { T sub j }.  Consider
#any time group @M { T sub k } such that @M { i > k > j }.  Since
#@M { k < i }, @M { T sub k } is covered completely by
#@M { d sub 1 ,..., d sub m }.  Since @M { k > j }, @M { T sub k }
#is covered completely by @M { d sub {m+1} ,..., d sub n }.  This
#is a contradiction, so there can be no such @M { T sub k }, and
#so @M { i <= j + 1 }.
#@PP
#The case @M { i = j + 1 } is quite normal.  It means that
#@M { T sub i } is the first time group that is not covered completely
#by @M { d sub 1 ,..., d sub m }, and @M { T sub {i-1} }, which is
#@M { T sub j }, is the last time group that is not covered completely by
#@M { d sub {m+1} ,..., d sub n }.  In other words, @M { T sub {i-1} }
#is the last time group covered completely by @M { d sub 1 ,..., d sub m },
#and @M { T sub i } is the first time group covered completely by
#@M { d sub {m+1} ,..., d sub n }.  So there are no time groups
#covered partly by @M { d sub 1 ,..., d sub m } and partly by
#@M { d sub {m+1} ,..., d sub n }, as there are when @M { i < j + 1 }.
#@PP
#Now make a signature which consists of the number of consecutive
#active time groups directly to the left of @M { T sub i } (possibly 0),
#plus one Boolean value for each time group in the range @M { T sub i }
#to @M { T sub j } inclusive (none when @M { i = j + 1 }), with value
#@C { true } when the time group is active.
#@PP
#It should be clear that as @M { m } increases, the indexes @M { i }
#and @M { j } that select @M { T sub i } and @M { T sub j } cannot
#decrease.  This fact can be used to speed up the determination of
#@M { i } and @M { j } for all @M { m }.
@End @SubSection

@SubSection
    @Title { Signatures }
    @Tag { resource_solvers.single.signatures }
@Begin
@LP
Before we can define dominance we need another concept.  The
@I signature of a constraint @M { c } on day @M { d sub m },
written @M { s sub m (c, x) }, is a concise representation of a
given partial timetable @M { x } for days @M { d sub 1 ,..., d sub m }
as it affects @M { c }, excluding parts of the timetable for which
@M { c } has already yielded a cost.  It will turn out that these
signatures are mainly what we need when performing dominance testing.
@PP
Let us consider some examples.  Suppose @M { c } is a constraint on the
total number of shifts worked.  Then for @M { s sub m (c, x) } we may
choose the number of shifts worked during @M { d sub 1 ,..., d sub m }.
It does not matter to @M { c } which shifts they are.
@PP
Suppose @M { c } is a constraint on the total number of busy weekends
(weekends where at least one shift is worked).  Then for
@M { s sub m (c, x) } we may choose the number of busy weekends during
@M { d sub 1 ,..., d sub m }, but there is a catch when @M { d sub m }
is a Saturday:  the signature must record whether a shift is worked on
that day, because that determines whether working a shift on the
immediately following Sunday, @M { d sub {m+1} }, adds to the number
of busy weekends or not.  So @M { s sub m (c, x) } is an integer plus
a Boolean when @M { d sub m } is a Saturday, and an integer on other days.
@PP
Suppose @M { c } is a constraint on the number of consecutive night
shifts.  Then @M { s sub m (c, x) } is the number of consecutive night shifts
ending at @M { d sub m } inclusive (0 if @M { r } is not assigned a night
shift on @M { d sub m }).  Consecutive sequences that terminated earlier
have already yielded a cost and are excluded from the signature.
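@PP
To make this concrete, here is a small C sketch (not KHE code; the
array encoding and function name are ours) that computes this
signature from scratch:

```c
#include <assert.h>

/* Sketch only:  night[i] is 1 when the resource works a night shift
   on day i (0-based).  The signature of the consecutive night shifts
   constraint after day m is the length of the run of night shifts
   ending at day m; runs that ended earlier have already yielded their
   cost and are excluded. */
int consecutive_night_signature(const int *night, int m)
{
  int run = 0, i;
  for( i = 0; i <= m; i++ )
    run = night[i] ? run + 1 : 0;  /* any other kind of day resets the run */
  return run;
}
```

In the real solver this value would of course be maintained
incrementally, day by day, rather than recomputed from scratch.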
@PP
The reader familiar with history in nurse rostering will have
noticed a strong connection between signatures and history values.
A history value records what @M { r } did that affects @M { c } before
@M { d sub 1 }, whereas a signature records what @M { r } did that
affects @M { c } before and during @M { d sub m }.  The general idea
is the same, although the author's previous work on history explicitly
assumes that cases like the Saturday treated above do not occur at
the start.  Here, we cannot assume such cases away.
@PP
It is common for @M { s sub m (c, x) } to be empty.  Suppose
@M { c } is affected only by what @M { r } is doing on days
@M { d sub i ,..., d sub j }.  Then @M { s sub m (c, x) } is
empty for @M { m < i }, because nothing relevant to @M { c }
is happening then.  Also, @M { s sub m (c, x) } is empty for
@M { m >= j }, because @M { c }'s final cost is calculated and
reported on day @M { d sub j }, so that again there is nothing
relevant to @M { c } to remember.
@PP
The signature of a partial timetable @M { x } on day @M { d sub m },
written @M { s sub m (x) }, is the concatenation, over all constraints
@M { c } in an arbitrary but fixed order, of @M { s sub m (c, x) }.
The cost of a partial timetable @M { x } on day @M { d sub m },
written @M { c sub m (x) }, is the sum of all costs already yielded
by constraints up to and including day @M { d sub m }.  As it turns
out, these two values, @M { s sub m (x) } and @M { c sub m (x) },
are all that the dynamic programming algorithm needs to remember
about a partial timetable @M { x } in order to do its work.
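@PP
A sketch of this state as a C record may help fix ideas (the field
names and the fixed signature capacity are illustrative; KHE's actual
record differs):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical sketch of the dynamic programming state:  all that the
   algorithm remembers about a partial timetable x for days d_1..d_m is
   the partial timetable it was derived from, the task added to it, its
   cost c_m(x), and its signature s_m(x). */
typedef struct partial_soln {
  struct partial_soln *prev;   /* partial timetable this one extends */
  int task;                    /* task (or free day) added to prev */
  long cost;                   /* c_m(x), sum of costs already yielded */
  int sig_len;                 /* length of signature s_m(x) */
  int sig[32];                 /* s_m(x); a fixed cap, for the sketch only */
} PARTIAL_SOLN;

PARTIAL_SOLN *partial_soln_make(PARTIAL_SOLN *prev, int task, long cost)
{
  PARTIAL_SOLN *res = malloc(sizeof(*res));
  res->prev = prev;  res->task = task;
  res->cost = cost;  res->sig_len = 0;
  return res;
}
```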
@PP
To show that the dynamic programming algorithm works with any nurse
rostering model, we need signatures for all resource constraints.
We could define them separately for each of the seven XESTT resource
constraints.  But instead, we are going to represent them by
expression trees, and derive the signatures from the trees.  This
approach is more general and insightful.
#@PP
#@I { omit below here }
#@PP
#It is important for efficiency to not process every constraint on
#every day.  One can always tell, by examining a constraint @M { c },
#which days it is affected by.  For example, a constraint prohibiting
#a day shift on the day following a night shift on the first Monday
#is affected by what @M { r } is doing on those two days.  A constraint
#on the total number of shifts worked is affected by what @M { r }
#is doing on all days.  And so on.
#@PP
#Let @M { f sub c } be the first day that constraint @M { c } is affected
#by, and let @M { l sub c } be the last day that @M { c } is affected by.
#It may be that some days between @M { f sub c } and @M { l sub c }
#do not affect @M { c }.  For example, if @M { c } concerns weekends
#it will be unaffected by the weekdays in between.  However, as it
#turns out, the implementation has nothing to gain by recognizing
#these cases.  So it considers each constraint @M { c } to be affected
#by a sequence of consecutive days, beginning at @M { f sub c } and
#ending at @M { l sub c }.  These days are said to be @I { active }
#for @M { c }; @M { f sub c } is @M { c }'s @I { initial } day, and
#@M { l sub c } is @M { c }'s @I { final } day.  The initial and
#final days could be the same, but they always exist, because @M { c }
#must be affected by at least one day, otherwise it is useless and
#can be dropped.
#@PP
#Suppose a constraint @M { c } depends only on the assignments on days
#@M { d sub 1 ,..., d sub m }.  Then @M { c } yields its final cost when
#@M { d sub m } is assigned, and so @M { s sub m (c, x) } is empty.  Or
#suppose @M { c } depends only on the assignments on days
#@M { d sub {m+1} ,..., d sub n }.  Then there is nothing to remember
#about @M { d sub 1 ,..., d sub m }, and again @M { s sub m (c, x) }
#is empty.  So the only constraints for which @M { s sub m (c, x) } is
#non-empty are those which are affected both by the assignments on at
#least one of the days @M { d sub 1 ,..., d sub m }, and also by the
#assignments on at least one of the days @M { d sub {m+1} ,..., d sub n }.
#Let @M { C sub m } be the set of these constraints; we say that they are
#@I active on day @M { d sub m }.
#@PP
#There is an issue here:  what do we mean by a single constraint?
#Consider a constraint which specifies three days that @M { r } wants
#to keep free.  We prefer to break this into three constraints,
#one for each day, and in general we prefer our constraints to be
#as fine-grained as possible, since in that way we maximize the
#chance that they can be omitted from @M { C sub m }, which is useful, as
#it turns out.  But suppose that the cost of not giving @M { r } the
#requested free days is the square of the number of unwanted busy
#days.  Then the constraint cannot be broken into independent
#constraints, and it will be active on each day @M { d sub m } lying between
#the first requested day off inclusive and the last exclusive.
@End @SubSection

@SubSection
    @Title { Expression trees }
    @Tag { resource_solvers.single.trees }
@Begin
@LP
This section introduces expression trees and explains how signatures
are derived from them.  We start with an example of an expression
tree for a constraint on the number of busy weekends:
@CD @Diag
  treehsep { 1.5c }
  blabelprox { NE }
  # blabelfont { -3p }
{
  ||0.8c
  @HTree {
    @Node blabel { 7-28 } { @M { INT_SUM_COMB } }
    @FirstSub {
      @Node blabel { 6-7 } { @M { OR } }
      @FirstSub to { W } { @Node blabel { 6-6 } @M { BUSY_TIME(1Sat1) } }
      @NextSub  to { W } { @Node blabel { 6-6 } @M { BUSY_TIME(1Sat2) } }
      @NextSub  to { W } { @Node blabel { 7-7 } @M { BUSY_TIME(1Sun1) } }
      @NextSub  to { W } { @Node blabel { 7-7 } @M { BUSY_TIME(1Sun2) } }
    }
    @NextSub pathstyle { noline } {
      @Node outlinestyle { noline } { ... }
    }
    @NextSub {
      @Node blabel { 27-28 } { @M { OR } }
      @FirstSub to { W } { @Node blabel { 27-27 } @M { BUSY_TIME(4Sat1) } }
      @NextSub  to { W } { @Node blabel { 27-27 } @M { BUSY_TIME(4Sat2) } }
      @NextSub  to { W } { @Node blabel { 28-28 } @M { BUSY_TIME(4Sun1) } }
      @NextSub  to { W } { @Node blabel { 28-28 } @M { BUSY_TIME(4Sun2) } }
    }
  }
}
We assume here that the instance has 28 days, starting on a Monday,
with two shifts per day.  A @M { BUSY_TIME(x) } node has value 1
when the resource is busy at time @M { x }.  An @M { OR } node
has value 1 when at least one of its children has value 1.  An
@M { INT_SUM_COMB } node sums the values of its children, compares
the result with the limits, and reports any cost.
@PP
Descriptions of each of the node types used by the implementation are
given in Section {@NumberOf resource_solvers.single.node}, and
expression trees for each of the XESTT resource constraints are
given in Section {@NumberOf resource_solvers.single.monitors}.
Here we are concerned with features that all node types share.
@PP
Associated with each node @M { v } is a non-empty sequence of
consecutive days, called @M { v }'s @I { active days }.  They are
stored in @M { v } as a pair of integers, the index of the first
(or @I { initial }) active day and of the last (or @I { final })
active day.  These indexes are shown in the diagram above, adjacent
to their nodes.  We'll explain how they are defined first, and
then discuss their significance.
@PP
A leaf node is concerned with what @M { r } is doing at a given
time.  Its active days consist of just the day containing that time.
For example, @M { BUSY_TIME(1Sat2) } is concerned with what @M { r }
is doing at time @M { 1Sat2 }, which lies in day @M { d sub 6 }, so
@M { d sub 6 } is its sole active day.  It is both initial and final.
@PP
An internal node, like @M { INT_SUM_COMB } or @M { OR } above, has
a value that depends on its children's values.  Its active days
also depend on its children's active days.  One might guess that
they are the smallest sequence that includes all its children's
active days, but that is not quite right.  Instead, its active
days are the smallest sequence that includes all the final
active days of its children.  For example, the initial active day
of node @M { INT_SUM_COMB } is @M { d sub 7 }, even though it
has a child whose initial active day is @M { d sub 6 }.
@PP
The significance of active days is that they are the days when
calculations need to be performed at their nodes.  Consider
the first @M { OR } node above.  On day @M { d sub 6 }, the
first Saturday, it needs to work out whether @M { r } is busy
on that day.  The same is true for day @M { d sub 7 }, the
first Sunday; and on that day its value becomes final and
needs to be reported to its parent.  Before and after those
two days, there is nothing for the @M { OR } node to do.
@PP
The same argument applies to the @M { INT_SUM_COMB } node,
and now we see why @M { d sub 6 } is not one of its active
days.  On @M { d sub 6 }, its first child is active working
out whether @M { r } is busy that day; but that child has
nothing to report to its parent until @M { d sub 7 }, when
it knows whether @M { r } is busy on the first weekend.
So there is nothing for the @M { INT_SUM_COMB } node to
do on @M { d sub 6 }.
@PP
For efficiency, our implementation visits nodes only on their
active days.  Before the main algorithm begins we find the active
days for all nodes, following the rules just given.  Then, for
each day, we construct a list of all nodes in all expression
trees that are active on that day.  That list visits the nodes in
postorder, to ensure that each active node is visited after all its
active children are visited, something that will be important later.
@PP
We are now ready to define concretely the value of the signature
of a partial timetable for day @M { d sub m }.  It consists of
a sequence of values, one for each node which is active but not
final on @M { d sub m }.  For example, both the @M { INT_SUM_COMB }
node and the last @M { OR } node above are active but not final
on @M { d sub 27 }, explaining why the signature contains two
values for this constraint on that day.  The values may appear
in any fixed order; our implementation uses postorder, since for
other reasons it visits the active nodes in this order anyway.
@PP
When a node calculates a value, on one of its active days, that
value is needed and must not be forgotten.  On non-final active
days, it is added to the signature, as we have just stated.  On
final active days, it is final and is passed to its parent instead
of to the signature.
@PP
The signature has one extra value (at the start):  the number
of busy days in the timetable.  This is to support reporting one
timetable for each distinct number of busy days
(Section {@NumberOf resource_solvers.single.running}).
@End @SubSection

@SubSection
    @Title { From one partial timetable to the next }
    @Tag { resource_solvers.single.part }
@Begin
@LP
Each partial timetable @M { y } contains four fields:  the partial
timetable @M { x } that it is derived from; the task (or free day)
@M { t } which is added to @M { x } to produce @M { y };  its
cost; and its signature.  Nothing more is required, although
the reader will have to take that fact on trust for the moment.
@PP
In this section we explain how to combine a partial timetable
@M { x }, ending on day @M { d sub {m-1} }, with a task @M { t }
running on day @M { d sub m } (or a decision to keep that day free),
to produce a new partial timetable @M { y }.  That is, we explain
how to construct one edge of the search tree:
@CD @Diag {
X:: @Node @M { x } |2c Y:: @Node @M { y }
//
@Arrow ylabel { @M { t } } from { X } to { Y }
}
A free day always occupies exactly one day, but a task may occupy
several consecutive days, if it is grouped.  In that case we
analyse the task into its daily elements, and apply the algorithm
we are about to describe repeatedly.  Then, at the end, we reset
the final partial timetable @M { y }'s predecessor to @M { x } and
delete the intermediate partial timetables.  We won't mention this
again; from now on we will consider extending by only a single day.
@PP
Let @M { c sub {m-1} (x) } and @M { s sub {m-1} (x) } be the
cost and signature of @M { x }, and let @M { c sub m (y) } and
@M { s sub m (y) } be the cost and signature of @M { y }.  To create
@M { y }, first obtain a new partial solution object.  Initialize
its previous partial timetable to @M { x } and its task to @M { t },
initialize its cost @M { c sub m (y) } to @M { c sub {m-1} (x) },
and initialize its signature @M { s sub m (y) } to the empty sequence.
Then add the new number of busy days (either the old number of
busy days, or that plus 1, depending on @M { t }) to @M { s sub m (y) }
as its first element.  The old number of busy days is easily found:
it is just the first element of @M { s sub {m-1} (x) }.  In general,
it will turn out that everything we need to know about the state
of @M { x } may be retrieved very simply from @M { s sub {m-1} (x) }.
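@PP
The initialization just described can be sketched in C (a hedged
illustration using a minimal record of our own devising, not KHE's):

```c
#include <assert.h>
#include <stdbool.h>

/* Minimal partial timetable record, for the sketch only */
typedef struct psoln {
  struct psoln *prev;
  int task;                 /* task added, or -1 for a free day */
  long cost;
  int sig_len;
  int sig[32];
} PSOLN;

/* begin y = x extended by a task on day d_m:  copy the cost across,
   and seed the signature with the new busy-day count, which is read
   from the front of x's signature */
void psoln_begin(PSOLN *y, PSOLN *x, int task, bool task_is_busy)
{
  y->prev = x;
  y->task = task;
  y->cost = x->cost;                      /* c_m(y) = c_{m-1}(x) so far */
  y->sig_len = 0;
  y->sig[y->sig_len++] = x->sig[0] + (task_is_busy ? 1 : 0);
}
```

The traversal of the active nodes for @M { d sub m }, described next,
then fills in the rest of the cost and signature.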
@PP
Now traverse the list of active nodes for @M { d sub m }.  At
each node @M { v }, carry out the appropriate calculation (we'll
see what this is shortly).  If @M { d sub m } is not @M { v }'s
final active day, the result of this calculation is added to
the end of @M { s sub m (y) }.  If @M { d sub m } is @M { v }'s
final active day, the result is held in @M { v } for its parent
to retrieve later in the traversal, unless it is a cost, in which
case there is no parent and the value is added to @M { c sub m (y) }.
At the end of this traversal, @M { y } is complete.
@PP
In general terms, the calculation carried out at each node @M { v }
looks like this:
@ID @OneRow {0.9 1.0} @Scale @F @Verbatim {
void NodeDoDay(Node v, PartialSoln x, PartialSoln y, Day today)
{
  /* initialize v's value to its value before today */
  if( today is v's initial active day )
    v->value = some standard initial value (e.g. 0, or a history value);
  else
    v->value = v's old value (stored in x's signature);

  /* update v's value to include today */
  for( each child w of v for which today is w's final active day )
    obtain the final value w->value of w and update v->value with it;

  /* save v's updated value, or just keep it in v */
  if( today is not v's final active day )
    add v->value to the end of y's signature;
  else if( v produces a cost )
    add v's cost to y's cost field;
}
}
For example, at an @M { INT_SUM } node this would initialize
@C { v->value } to 0 on the initial active day, and retrieve its
previous value on subsequent days.  It would then visit all the
children @M { w } of @M { v } whose values have become final today
(the nodes are visited in postorder, so these children have all
completed their own @C { NodeDoDay } calls), and add @C { w->value }
to @C { v->value }.  Finally, if today is not @M { v }'s final active
day, its value is added to @M { y }'s signature, where it can be
retrieved next time.
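@PP
Concretely, an @M { INT_SUM } version of this calculation might look
as follows (a sketch under our own simplified node record, with the
active-day bookkeeping reduced to bare fields; it is not KHE's
implementation):

```c
#include <assert.h>
#include <stddef.h>

#define MAX_CHILDREN 8

/* Simplified expression tree node, for the sketch only */
typedef struct node {
  int value;                         /* working value */
  int initial_day, final_day;        /* range of active days */
  int sig_pos;                       /* position of value in signature */
  int child_count;
  struct node *children[MAX_CHILDREN];
} NODE;

/* NodeDoDay for an INT_SUM node:  sig_in holds x's signature, and
   sig_out (with length *sig_out_len) accumulates y's signature */
void int_sum_do_day(NODE *v, const int *sig_in, int *sig_out,
  int *sig_out_len, int today)
{
  int i;

  /* initialize v's value to its value before today */
  if( today == v->initial_day )
    v->value = 0;
  else
    v->value = sig_in[v->sig_pos];

  /* update v's value with children whose values become final today */
  for( i = 0; i < v->child_count; i++ )
    if( v->children[i]->final_day == today )
      v->value += v->children[i]->value;

  /* save v's updated value, or just keep it in v for v's parent */
  if( today != v->final_day )
    sig_out[(*sig_out_len)++] = v->value;
}
```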
@PP
The algorithm for leaf nodes is slightly different.  The value on the
initial active day depends on the task:  for example, in a
@M { BUSY_TIME } node it is 1 if the task runs at the node's time,
and 0 otherwise.  There are no children, so the update step of
@C { NodeDoDay } is omitted.  And since a leaf has just one active
day, its value is never added to the signature; it is held in the
node for its parent to retrieve later in the traversal.
@PP
For efficiency, each node must know, for each of its non-final
active days, the position in the signature for that day of its
value, and for each of its active days, the children for which
that day is the final active day.  This information is calculated
in advance, before the main algorithm begins, in a straightforward
manner, taking a negligible amount of time.  It also helps with
efficiency that on each day, only the active nodes are visited,
and each is visited only once.
@End @SubSection

@SubSection
    @Title { Dominance testing }
    @Tag { resource_solvers.single.dominance }
@Begin
@LP
This section is concerned with dominance testing.  As mentioned
in Section {@NumberOf resource_solvers.single.running}, the
algorithm offers two kinds of dominance testing, which we call
@I { weak } and @I { strong } dominance.  They find solutions
with the same (optimal) cost, but they differ in running time.
@PP
Given two partial timetables @M { x } and @M { y } for the same set
of days @M { d sub 1 ,..., d sub m }, @M { x } @I { weakly dominates }
@M { y } when @M { c sub m (x) <= c sub m (y) } and
@M { s sub m (x) = s sub m (y) } (that is, the signatures are identical).
@PP
If @M { x } weakly dominates @M { y }, then @M { y } may be discarded,
and it is easy to see why.  The key point is that as the timetables
are extended, equal assignments produce equal changes in cost, because
all constraints have equal signatures in @M { x } and @M { y } and those
determine all future costs.  So since @M { c sub m (x) <= c sub m (y) },
every timetable starting from @M { x } has cost less than or equal to the
cost of the corresponding timetable starting from @M { y }.  So @M { y }
can be discarded.
@PP
If @M { c sub m (x) = c sub m (y) } and @M { s sub m (x) = s sub m (y) },
then the two partial solutions weakly dominate each other.  Of course,
that does not mean that both may be discarded.  Either one of them may
be discarded, introducing some nondeterminism into the algorithm.
@PP
Weak dominance can be implemented very efficiently.  The set
@M { P sub m } of timetables that end on @M { d sub m }, defined
in Section {@NumberOf resource_solvers.single.overview}, is
implemented by a hash table whose keys are signatures and whose
values are partial timetables.  Whenever a new partial timetable
ending on @M { d sub m } is created, before adding it to the
table we look up its signature in the hash table.  If another
partial timetable with the same signature is already present, only
one of the two is kept:  whichever has the minimum cost.
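@PP
The lookup can be sketched in C as follows (a linear scan over
@M { P sub m } stands in for the hash table here; the record and names
are illustrative):

```c
#include <assert.h>
#include <string.h>

/* Minimal partial timetable record, for the sketch only */
typedef struct {
  long cost;
  int  sig_len;
  int  sig[8];
} PT;

/* add *cand to pool (of current size *n), unless weakly dominated */
void weak_insert(PT *pool, int *n, const PT *cand)
{
  int i;
  for( i = 0; i < *n; i++ )
    if( pool[i].sig_len == cand->sig_len &&
        memcmp(pool[i].sig, cand->sig, cand->sig_len * sizeof(int)) == 0 )
    {
      if( cand->cost < pool[i].cost )
        pool[i] = *cand;  /* cand weakly dominates the incumbent */
      return;             /* otherwise the incumbent dominates cand */
    }
  pool[(*n)++] = *cand;   /* no equal signature present; keep cand */
}
```

With a hash table keyed on signatures, each insertion costs expected
constant time rather than time proportional to the size of
@M { P sub m }.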
@PP
We turn now to strong dominance; the rest of this section is
devoted to it.  Its basic idea is that the elements of the
signature do not always have to be equal.  Consider this example,
and assume that the only constraint @M { c } is a maximum limit
of 3 on consecutive @M { s sub 1 } shifts:
@CD {
@Tbl
    rule { yes }
    aformat { @Cell A | @Cell B | @Cell C | @Cell D |
              @Cell E | @Cell F | @Cell G | @Cell H }
{
@MarkRowa
  A { @M { s sub 1 } }
  B { @M { s sub 1 } }
  C { @M { s sub 1 } }
  D { @M { s sub 0 } }
  E { @M { s sub 0 } }
  F { @M { s sub 1 } }
  G { @M { s sub 1 } }
  H { @M { s sub 1 } }
}
&1.5c
@Tbl
    rule { yes }
    aformat { @Cell A | @Cell B | @Cell C | @Cell D |
              @Cell E | @Cell F | @Cell G | @Cell H }
{
@MarkRowa
  A { @M { s sub 1 } }
  B { @M { s sub 1 } }
  C { @M { s sub 0 } }
  D { @M { s sub 1 } }
  E { @M { s sub 1 } }
  F { @M { s sub 0 } }
  G { @M { s sub 1 } }
  H { @M { s sub 1 } }
}
}
Neither partial timetable weakly dominates the other, because @M { c } has signature
3 in the first and 2 in the second.  But the second does dominate
the first:  each of its timetables will be at least as good as the
first's corresponding timetable, and indeed better when the next
assignment is @M { s sub 1 }.
@PP
Given two partial timetables @M { x } and @M { y }, @M { x }
@I { strongly dominates } @M { y } when @M { c sub m (x) <= c sub m (y) },
and for each constraint @M { c }, the signature of
@M { c } in @M { x } strongly dominates the signature of @M { c } in
@M { y }.  So far this follows the definition of weak dominance,
but now we wish to allow `@M { non <= }' instead of equality.
@PP
Strong dominance is more complicated than weak dominance in that there
is no simple general rule for it.  Each case (each node of the
expression tree for each constraint) must be analysed separately,
and with care.
@PP
Here is an example of this analysis.  Suppose cost is a monotone
non-decreasing function of the determinant (the value compared with the
limits) when there is a maximum limit, and a monotone non-increasing
function of the determinant when there is a minimum limit.
@FootNote {
In XESTT, this assumption fails only when a constraint's `allow-zero'
flag is set.  The leading nurse rostering example is
the `complete weekends' constraint, which requires a nurse to be
busy on both days of a weekend or neither.  Cost increases as we
go from 0 to 1 busy days, then falls as we go from 1 to 2.  In
such cases we will fall back on equality.
}
Suppose the determinant cannot decrease over time.  When there is a
maximum limit, determinant @M { v sub x } strongly dominates determinant
@M { v sub y } when @M { v sub x <= v sub y }, because of the monotonicity.
When there is a minimum limit, @M { v sub x } strongly dominates
@M { v sub y } when @M { v sub x >= v sub y }, for the same reason.
When both limits are present, both rules apply and equality is needed.
@PP
We can loosen this slightly:  if both values are at or over the
minimum limit, each strongly dominates the other as far as the minimum
limit is concerned.  This is because both constraints have cost 0 and
will continue to have cost 0 as the determinant increases over time.
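@PP
This analysis can be sketched as a C predicate (hedged:  it covers
only a determinant that cannot decrease over time, with cost monotone
in the deviation, and it follows the @C { KHE_SRS_DOM_GE_LOOSE },
@C { KHE_SRS_DOM_LE }, and @C { KHE_SRS_DOM_EQ_LOOSE } tests
tabulated below):

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch:  does value vx (from x) strongly dominate vy (from y)?
   The minimum-limit rule is loosened:  once vx is at or over the
   minimum limit, x can never again incur cost from that limit. */
bool value_strongly_dominates(int vx, int vy, bool has_min, int min_limit,
  bool has_max)
{
  bool min_ok = !has_min || vx >= vy || vx >= min_limit;
  bool max_ok = !has_max || vx <= vy;
  return min_ok && max_ok;
}
```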
@PP
Even with the analysis all done, strong dominance is harder to
implement efficiently than weak dominance.  The hash table no longer
works.  Papers known to the author are silent about this, implying
that strong dominance checking is carried out by comparing each
incoming partial timetable at level @M { m } with each partial
timetable already present at level @M { m }.  This multiplies the
running time at level @M { m } by the number of partial timetables
at level @M { m }.  One author has said that dominance checking is
the most time consuming part of their algorithm, so this may be
important.  They mention a few ideas from the literature for
speeding it up, but they are not striking.
@PP
Of course, strong dominance reduces the number of partial timetables
that need to be kept in each @M { P sub m }.  (It is easy to show
this, using induction on @M { m } and the fact that every case of
weak dominance is also a case of strong dominance.)  Whether this
is sufficient to compensate for the increased cost of dominance
checking is not clear, and requires empirical investigation.  Weak
dominance alone is enough to achieve polynomial time.
@PP
The following tests are used by strong dominance.  For each test,
this table shows its name, its abbreviation (used in the expression
trees to follow), and the condition specified by it:
@CD @Tbl
    aformat { @Cell ml { 0i } A | @Cell indent { ctr } B | @Cell mr { 0i } C }
    mv { 0.55vx }
{
@Rowa
    ma { 0i }
    rb { yes }
    A { Test name }
    B { Abbr. }
    C { Condition specifying when @M { v sub 1 } dominates @M { v sub 2 } }
@Rowa
    A { @C { KHE_SRS_DOM_UNUSED } }
    B { }
    C { This test should never be made }
@Rowa
    A { @C { KHE_SRS_DOM_GE } }
    B { @M { non >= } }
    C { @M { v sub 1 >= v sub 2 } }
@Rowa
    A { @C { KHE_SRS_DOM_LE } }
    B { @M { non <= } }
    C { @M { v sub 1 <= v sub 2 } }
@Rowa
    A { @C { KHE_SRS_DOM_EQ } }
    B { @M { non = } }
    C { @M { v sub 1 = v sub 2 } }
@Rowa
    A { @C { KHE_SRS_DOM_GE_LOOSE } }
    B { @M { non >= * } }
    C { @M { v sub 1 >= v sub 2 @B " or " v sub 1 >= min_limit } }
@Rowa
    A { @C { KHE_SRS_DOM_EQ_LOOSE } }
    B { @M { non = * } }
    C { @C { KHE_SRS_DOM_GE_LOOSE } @B " and " @C { KHE_SRS_DOM_LE } }
    rb { yes }
    mb { 0i }
}
In the vicinity of minimum limits, maximum limits, and allow zero
flags, two combinations of these tests, which we call @M { alpha }
and @M { beta }, are commonly needed:
@ID @Tbl
   aformat { @Cell ml { 0i } A | @Cell B | @Cell mr { 0i } C }
   mv { 0.5vx }
{
@Rowa
    ma { 0i }
    B { @M { alpha } }
    C { @M { beta } }
    rb { yes }
@Rowa
    A { If there is a non-trivial allow zero flag }
    B { @M { non = } }
    C { @M { non = } }
@Rowa
    A { Otherwise, if there is a non-trivial minimum limit only }
    B { @M { non >= * } }
    C { @M { non >= } }
@Rowa
    A { Otherwise, if there is a non-trivial maximum limit only }
    B { @M { non <= } }
    C { @M { non <= } }
@Rowa
    A { Otherwise (there are non-trivial minimum and maximum limits) }
    B { @M { non = * } }
    C { @M { non = } }
    rb { yes }
    mb { 0i }
}
If @C { allow_zero } is @C { true } and @C { min_limit } is 0 or
1, we set @C { min_limit } to 0 and @C { allow_zero } to @C { false }
before doing anything else.  Then, by a non-trivial allow zero flag
we mean a @C { true } value for @C { allow_zero } and a value of at
least 2 for @C { min_limit }.  By a non-trivial minimum limit we mean
one which is greater than the minimum possible value, which is always
0.  By a non-trivial maximum limit we mean one which is less than
the maximum possible value.  What this is depends on the context; it
might be the number of time groups in the constraint, for example.
Constraints that do not have either a non-trivial minimum limit or
a non-trivial maximum limit cannot generate a cost and are passed
over by the algorithm.
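@PP
The normalization and the selection of the @M { alpha } test can be
sketched in C (the enum names follow the table above; the function
itself is our illustration, not KHE's):

```c
#include <assert.h>
#include <stdbool.h>

typedef enum {
  KHE_SRS_DOM_UNUSED, KHE_SRS_DOM_GE, KHE_SRS_DOM_LE,
  KHE_SRS_DOM_EQ, KHE_SRS_DOM_GE_LOOSE, KHE_SRS_DOM_EQ_LOOSE
} KHE_SRS_DOM_TEST;

/* normalize allow_zero and min_limit, then return the alpha test;
   max_possible is the maximum possible value of the determinant */
KHE_SRS_DOM_TEST alpha_test(bool *allow_zero, int *min_limit,
  int max_limit, int max_possible)
{
  bool non_trivial_min, non_trivial_max;

  /* allow_zero with min_limit 0 or 1 changes nothing; clear it first */
  if( *allow_zero && *min_limit <= 1 )
  {
    *min_limit = 0;
    *allow_zero = false;
  }

  non_trivial_min = *min_limit > 0;
  non_trivial_max = max_limit < max_possible;
  if( *allow_zero )
    return KHE_SRS_DOM_EQ;
  else if( non_trivial_min && !non_trivial_max )
    return KHE_SRS_DOM_GE_LOOSE;
  else if( !non_trivial_min && non_trivial_max )
    return KHE_SRS_DOM_LE;
  else if( non_trivial_min && non_trivial_max )
    return KHE_SRS_DOM_EQ_LOOSE;
  else
    return KHE_SRS_DOM_UNUSED;  /* cannot generate a cost; passed over */
}
```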
@PP
The single resource solver also offers two further forms of
dominance testing, called @I { medium dominance } and
@I { trie dominance }.  Medium dominance is intermediate
between strong and weak dominance and offers some of the
advantages of both.  Trie dominance is an implementation of
strong dominance which uses a trie data structure to carry
out dominance testing more efficiently.  For further details,
consult Jeff Kingston's paper about single resource solving.
@End @SubSection

@SubSection
    @Title { Expression tree node types }
    @Tag { resource_solvers.single.node }
@Begin
@LP
In this section we present the complete set of node types
needed for the XESTT constraints.  Here are the leaf nodes:
@TaggedList

@DTI { @M { BUSY_TIME(t) } }
@OneRow {
A leaf node with attribute @M { t }, a time.  Its value is 1
when @M { r } is busy at @M { t }, otherwise 0.
# This value becomes known when the time sweep reaches @M { t }'s day.
}

@DTI { @M { FREE_TIME(t) } }
@OneRow {
Like @M { BUSY_TIME(t) } except that the value is 1 when
@M { r } is free at @M { t }, otherwise 0.
}

@DTI { @M { WORK_TIME(t) } }
@OneRow {
Like @M { BUSY_TIME(t) } except that the value is a @C { float }
workload when @M { r } is busy at @M { t }, otherwise 0.  The
value depends on which task @M { r } is assigned during @M { t }.
# not just on whether @M { r } is busy.
}

@DTI { @M { BUSY_DAY(d) } }
@OneRow {
A leaf node with attribute @M { d }, a day.  Its value is 1
when @M { r } is busy on day @M { d }, otherwise 0.  This node
could be implemented as the @M { OR } of a set of @M { BUSY_TIME }
nodes; using it instead of that is an optimization.
# This value becomes known when the time sweep reaches @M { d }.
}

@DTI { @M { FREE_DAY(d) } }
@OneRow {
Like @M { BUSY_DAY(d) } except that the value is 1 when
@M { r } is free on day @M { d }, otherwise 0.
}

@DTI { @M { WORK_DAY(d) } }
@OneRow {
Like @M { BUSY_DAY(d) } except that the value is a @C { float }
workload when @M { r } is busy on day @M { d }, otherwise 0.  The
value depends on which task @M { r } is assigned during @M { d }.
}

@EndList
And here are the internal nodes:
@TaggedList

@DTI { @M { OR } }
@OneRow {
An internal node whose value is 1 when at least one of its
children has value 1.
}

@DTI { @M { AND } }
@OneRow {
Similar to @M { OR }, except that its value is 1 when all of
its children have value 1.
}

@DTI { @M { INT_SUM } }
@OneRow {
An internal node whose value is the sum of its children's values.
All values are integers.
}

@DTI { @M { FLOAT_SUM } }
@OneRow {
Like @M { INT_SUM } except that its value and its children's values
have type @C { float }.
}

@DTI { @M { INT_DEV(a, b, z) } }
@OneRow {
Here @M { a } and @M { b } are integers, and @M { z } is a Boolean.
This node has a single child whose value is an integer.  Its value
is the amount by which its child's value falls short of @M { a } or
exceeds @M { b }.  If @M { z } is true, then as a special case its
result is 0 if the child's value is 0.  Because the node has a single
child, it has only one active day (the child's last active day), so
its value is never stored in any signature.
}

@DTI { @M { FLOAT_DEV(a, b, z) } }
@OneRow {
Here @M { a } and @M { b } are integers, and @M { z } is a Boolean.
This node has a single child whose value is a @C { float }.  Its
value is the amount by which its child's value falls short of @M { a }
or exceeds @M { b }, rounded up to the nearest integer.  If @M { z }
is true, then as a special case its result is 0 if the child's value
is 0.
}

@DTI { @M { COST(f, w) } }
@OneRow {
Here @M { f } is a cost function and @M { w } is a combined weight.
This node has a single child whose value is an integer.  Its own
value is the result of applying cost function @M { f } with weight
@M { w } to the child's value.  Because the result is a cost, it
is added to the new partial solution's cost rather than being
used by any parent.
}

@DTI { @M { INT_SUM_COMB(f, w, a, b, z, h sub b , h sub a ) } }
@OneRow {
This node combines a @M { COST }, @M { INT_DEV }, and @M { INT_SUM }
node into a single node that does the work of all three.  In addition,
it handles history before (@M { h sub b }) and history after
(@M { h sub a }) values.  It is used whenever there is a @M { COST }
node whose only child is an @M { INT_DEV } node whose only child is
an @M { INT_SUM } node.  It does what those three nodes taken
together do; and when @M { b } is @C { INT_MAX } (i.e. there is
no upper limit), or @M { f } is not quadratic, it does it in a
way that reduces the amount of stored information, as follows.
@LP
To do what the three nodes do, @M { INT_SUM_COMB } merely has to
store @M { h } plus the total value of its finalized children, and
when they are all finalized, calculate a deviation, calculate a cost,
and add the cost to the new partial solution.  The stored information
is an arbitrary integer.  But in the cases just mentioned, it can
do better.
@LP
Suppose first that there is no upper limit @M { b }.  If the
value ever reaches @M { a }, then the cost drops to 0
and remains there, because the value can never decrease.  So
the signature value is reduced to an integer whose value is
at most @M { a }.  Any values above @M { a } are stored as @M { a }.
@LP
Or suppose that there is an upper limit @M { b }, and that the cost
function @M { f } is linear.  If the value ever increases beyond
@M { b }, every value of 1 above @M { b } has cost @M { w }.  So
instead of increasing the value to above @M { b }, we may leave
it at @M { b } and report cost @M { w } immediately.  The signature
value is reduced to an integer not exceeding @M { b }.
@LP
Finally, suppose that there is an upper limit @M { b }, and that the
cost function @M { f } is a step function.  Then every value above
@M { b + 1 } has the same cost as @M { b + 1 }.  So we do not need
to store higher values.  The signature value is an integer not
exceeding @M { b + 1 }.
}

@DTI { @M { INT_SEQ_COMB(f, w, a, b, h sub b , h sub a ) } }
{
Like @M { INT_SUM_COMB }, except that it stores the length of the
sequence of active time groups ending on the current day (or 0 if
there is no such sequence).  As sequences end, their cost is
calculated and added to the cost of the new partial timetable,
and they are forgotten by this node.  The optimizations used
with @M { INT_SUM_COMB } also apply here.
}

@RawEndList
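@PP
The stored-value reductions used by @M { INT_SUM_COMB } can be
sketched in C as follows.  This is an illustration written for this
description, not KHE's actual code; all names are invented.
```c
#include <assert.h>
#include <limits.h>

/* Illustrative sketch of the INT_SUM_COMB stored-value optimizations.
   The stored signature value is capped so that values which can no
   longer be distinguished by cost are stored identically. */

typedef enum { COST_FN_LINEAR, COST_FN_QUADRATIC, COST_FN_STEP } COST_FN;

/* value actually stored, given minimum limit a and maximum limit b
   (b == INT_MAX means there is no maximum limit) */
int cap_value(int value, int a, int b, COST_FN f)
{
  if (b == INT_MAX)
    return value > a ? a : value;          /* cost is 0 at or above a */
  else if (f == COST_FN_LINEAR)
    return value > b ? b : value;          /* charge w per unit immediately */
  else if (f == COST_FN_STEP)
    return value > b + 1 ? b + 1 : value;  /* all values above b+1 alike */
  else
    return value;  /* quadratic with an upper limit: store exactly */
}

/* deviation from limits when all children are finalized, as in INT_DEV */
int deviation(int value, int a, int b)
{
  return value < a ? a - value : value > b ? value - b : 0;
}
```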
@End @SubSection

@SubSection
    @Title { Expression trees for XESTT monitors }
    @Tag { resource_solvers.single.monitors }
@Begin
@LP
In this section we present expression trees for the seven XESTT
monitor types.  The labels on some nodes (@M { non <= },
@M { alpha }, and @M { beta }) define the dominance tests at
those nodes (Section {@NumberOf resource_solvers.single.dominance}).
Unlabelled nodes do not need dominance tests.
@PP
@B { Avoid clashes monitors }.
Assuming there are no clashes within individual tasks, no clashes
can occur, because at most one task is assigned on each day.  So
these monitors are ignored.
@PP
@B { Avoid unavailable times monitors }.
If the unavailable times are @M { t sub 1 , t sub 2 ,..., t sub k },
the tree is
@CD @Diag treevsep { 1.0f } treehsep { 1.0f } alabelprox { NW } {
@Tree {
  @Box @M { COST(f, w) }
  @FirstSub {
      @Box alabel { @M { non <= } } @M { INT_SUM }
	@FirstSub @Box @M { BUSY_TIME( t sub 1 ) }
	@NextSub  @Box @M { BUSY_TIME( t sub 2 ) }
	@NextSub  pathstyle { noline } @Circle outlinestyle { noline } ...
	@NextSub  @Box @M { BUSY_TIME( t sub k ) }
  }
}
}
There is an implicit maximum limit of 0, justifying the @M { non <= }
dominance test.  If the cost function is linear, or there is only one
unavailable time, each time contributes an independent value to the
total cost, and we use multiple trees instead:
@CD @Diag treevsep { 1.0f } treehsep { 1.0f } {
@Tree {
  @Box @M { COST(f, w) }
  @FirstSub @Box @M { BUSY_TIME( t sub 1 ) }
}
||1c
@Tree {
  @Box @M { COST(f, w) }
  @FirstSub @Box @M { BUSY_TIME( t sub 2 ) }
}
||1c
@OneRow { //1c ... }
||1c
@Tree {
  @Box @M { COST(f, w) }
  @FirstSub @Box @M { BUSY_TIME( t sub k ) }
}
}
We prefer this because none of these nodes needs to store a value
in the signature.
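@PP
To see why the split is valid, note that for a linear cost function
the cost of the sum equals the sum of the costs.  A minimal C check
(invented names, not KHE's code):
```c
#include <assert.h>

/* With weight w and maximum limit 0, the cost of the sum of the
   busy-time values equals the sum of the per-time costs, so one
   tree per unavailable time gives the same total. */

int combined_cost(const int *busy, int k, int w)
{
  int i, sum = 0;
  for (i = 0; i < k; i++)
    sum += busy[i];         /* one INT_SUM over all k times */
  return w * sum;
}

int separate_cost(const int *busy, int k, int w)
{
  int i, total = 0;
  for (i = 0; i < k; i++)
    total += w * busy[i];   /* k independent COST trees */
  return total;
}
```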
@PP
@B { Limit idle times monitors }.
These are not used in nurse rostering.  Handling them is future work
(feasible, but low priority); at present they are omitted from the
optimized resource cost.
@PP
@B { Cluster busy times monitors }.
We have already seen one cluster expression tree.  Here is another,
with its top three nodes optimized into one @M { INT_SUM_COMB } node:
@CD @Diag treehsep { 0.0f } alabelprox { NW } blabelprox { NE } {
@Tree {
  @Box alabel { @M { alpha } }
  @M { INT_SUM_COMB(f, w, 20, 24, false, h sub b , h sub a ) }
  @FirstSub {
    @Box blabel { @M { beta } } @M { OR }
    @FirstSub @Box {0.75 1.0} @Scale @M { BUSY_TIME(1Mon1) }
    @NextSub pathstyle { noline } @Circle outlinestyle { noline } ...
    @NextSub @Box {0.75 1.0} @Scale @M { BUSY_TIME(1Mon3) }
  }
  @NextSub pathstyle { noline } @Circle outlinestyle { noline } ...
  @NextSub {
    @Box alabel { @M { beta } } @M { OR }
    @FirstSub @Box {0.75 1.0} @Scale @M { BUSY_TIME(4Mon1) }
    @NextSub pathstyle { noline } @Circle outlinestyle { noline } ...
    @NextSub @Box {0.75 1.0} @Scale @M { BUSY_TIME(4Mon3) }
  }
}
}
Within each time group, if some day's times are all present, then
their @M { BUSY_TIME } nodes are replaced by one @M { BUSY_DAY }
node.  It saves time to visit one @M { BUSY_DAY } node rather
than several @M { BUSY_TIME } nodes.  Negative time groups are
handled like this:
@CD @Diag treehsep { 0.2f } alabelprox { NW } {
@Tree {
  @Box alabel { @M { beta } } @M { AND }
  @FirstSub @Box @M { FREE_TIME(1Mon1) }
  @NextSub pathstyle { noline } @Circle outlinestyle { noline } ...
  @NextSub @Box @M { FREE_TIME(1Mon3) }
}
}
Again, @M { FREE_DAY } nodes may replace @M { FREE_TIME } nodes.
When an @M { OR } or @M { AND } node has exactly one child, the
@M { OR } or @M { AND } node is omitted.  The reader is invited to
work through the justification of the @M { alpha } and @M { beta }
dominance tests.
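@PP
The evaluation of these nodes is simple to sketch in C (an
illustration with invented names, not KHE's code): a positive time
group is active when at least one of its times is busy, and a
negative time group is active when all of its times are free.
```c
#include <assert.h>

/* active value of a positive time group: OR of its BUSY_TIME
   (or BUSY_DAY) child values */
int or_node(const int *child, int k)
{
  int i;
  for (i = 0; i < k; i++)
    if (child[i])
      return 1;
  return 0;
}

/* active value of a negative time group: AND of its FREE_TIME
   (or FREE_DAY) child values */
int and_node(const int *child, int k)
{
  int i;
  for (i = 0; i < k; i++)
    if (!child[i])
      return 0;
  return 1;
}
```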
@PP
@B { Limit busy times monitors }.
A limit busy times monitor may monitor several time groups, like
a cluster busy times monitor, but a deviation is calculated for
each time group separately:
@CD @Diag treehsep { 0.0f } alabelprox { NW } {
@Tree {
  @Box @M { COST(f, w) }
  @FirstSub {
    @Box alabel { @M { non <= } } @M { INT_SUM }
    @FirstSub {
      @Box @M { INT_DEV(0, 1, false) }
      @FirstSub
      {
        @Box alabel { @M { alpha } } @M { INT_SUM }
	@FirstSub @Box {0.75 1.0} @Scale @M { BUSY_TIME(1Mon1) }
	@NextSub pathstyle { noline } @Circle outlinestyle { noline } ...
	@NextSub @Box {0.75 1.0} @Scale @M { BUSY_TIME(1Mon3) }
      }
    }
    @NextSub pathstyle { noline } @Circle outlinestyle { noline } ...
    @NextSub {
      @Box @M { INT_DEV(0, 1, false) }
      @FirstSub {
        @Box alabel { @M { alpha } } @M { INT_SUM }
	@FirstSub @Box {0.75 1.0} @Scale @M { BUSY_TIME(4Mon1) }
	@NextSub pathstyle { noline } @Circle outlinestyle { noline } ...
	@NextSub @Box {0.75 1.0} @Scale @M { BUSY_TIME(4Mon3) }
      }
    }
  }
}
}
This example requires a nurse to take at most one shift per day, and
something like it would be found in most instances.  It should be
optimized away, since the algorithm knows that it will be assigning
at most one shift per day, but the author has never got around to
implementing that idea.  As before, if an entire day's worth of
times appear together under an @M { INT_SUM }, they are replaced by
a single @M { BUSY_DAY } node.
@PP
The dominance tests can be understood in two steps.  First, the higher
@M { INT_SUM } does not change anything for the lower @M { INT_SUM }
nodes:  their values are still converted into deviations and then
into costs, so @M { alpha } is correct.  Second, an
@M { INT_DEV(0, 0, false) } node could be inserted below the root,
indicating @M { alpha } for the higher @M { INT_SUM } too; but for
these limits, @M { alpha } is `@M { non <= }'.
@PP
If the cost function is linear, each child of the higher @M { INT_SUM } is
made into its own tree, and @M { INT_SUM_COMB } nodes are used:
@CD @Diag treehsep { 0.0f } alabelprox { NW } {
@Tree {
  @Box alabel { @M { alpha } }
  @M { INT_SUM_COMB(f, w, 0, 1, true, 0, 0) }
  @FirstSub @Box {0.75 1.0} @Scale @M { BUSY_TIME(1Mon1) }
  @NextSub pathstyle { noline } @Circle outlinestyle { noline } ...
  @NextSub @Box {0.75 1.0} @Scale @M { BUSY_TIME(1Mon3) }
}
||0.2c
@OneRow { //1c ... }
||0.2c
@Tree {
  @Box alabel { @M { alpha } }
  @M { INT_SUM_COMB(f, w, 0, 1, true, 0, 0) }
  @FirstSub @Box {0.75 1.0} @Scale @M { BUSY_TIME(4Mon1) }
  @NextSub pathstyle { noline } @Circle outlinestyle { noline } ...
  @NextSub @Box {0.75 1.0} @Scale @M { BUSY_TIME(4Mon3) }
}
}
An @M { INT_SUM_COMB } node is also used when there is only one time group.
@PP
@B { Limit workload monitors }.
These are like limit busy times monitors, except that they keep
track of a @C { float } workload rather than an @C { int } number
of busy times:
@CD @Diag treehsep { 0.0f } alabelprox { NW } {
@Tree {
  @Box @M { COST(f, w) }
  @FirstSub {
    @Box alabel { @M { non <= } } @M { INT_SUM }
    @FirstSub {
      @Box @M { FLOAT_DEV(3, 7) }
      @FirstSub {
        @Box alabel { @M { alpha } } @M { FLOAT_SUM }
	@FirstSub @Box {0.70 1.0} @Scale @M { WORK_TIME(1Mon1) }
	@NextSub pathstyle { noline } @Circle outlinestyle { noline } ...
	@NextSub @Box {0.70 1.0} @Scale @M { WORK_TIME(1Mon3) }
      }
    }
    @NextSub pathstyle { noline } @Circle outlinestyle { noline } ...
    @NextSub {
      @Box @M { FLOAT_DEV(3, 7) }
      @FirstSub
      {
        @Box alabel { @M { alpha } } @M { FLOAT_SUM }
	@FirstSub @Box {0.70 1.0} @Scale @M { WORK_TIME(4Mon1) }
	@NextSub pathstyle { noline } @Circle outlinestyle { noline } ...
	@NextSub @Box {0.70 1.0} @Scale @M { WORK_TIME(4Mon3) }
      }
    }
  }
}
}
Again, when the cost function is linear, this may be broken into one
tree for each time group, and a @M { FLOAT_SUM_COMB } node used, like
@M { INT_SUM_COMB } (not currently implemented).
@PP
@B { Limit active intervals monitors }.
These have the same data as cluster busy times monitors, except that
there is no allow-zero value.  The active values from the time groups
are combined differently:
@CD @Diag treehsep { 0.0f } alabelprox { NW } blabelprox { NE } {
@Tree {
  @Box @M { COST(f, w) }
  @FirstSub {
    @Box @M { INT_DEV(a, b, false) }
    @FirstSub {
      @Box alabel { @M { alpha } } @M { INT_SEQ }
      @FirstSub {
	@Box blabel { @M { beta } } @M { OR }
	@FirstSub @Box {0.75 1.0} @Scale @M { BUSY_TIME(1Mon1) }
	@NextSub pathstyle { noline } @Circle outlinestyle { noline } ...
	@NextSub @Box {0.75 1.0} @Scale @M { BUSY_TIME(1Mon3) }
      }
      @NextSub pathstyle { noline } @Circle outlinestyle { noline } ...
      @NextSub {
	@Box alabel { @M { beta } } @M { OR }
	@FirstSub @Box {0.75 1.0} @Scale @M { BUSY_TIME(4Mon1) }
	@NextSub pathstyle { noline } @Circle outlinestyle { noline } ...
	@NextSub @Box {0.75 1.0} @Scale @M { BUSY_TIME(4Mon3) }
      }
    }
  }
}
}
As for cluster busy times constraints, negative time groups are
implemented with @M { AND } and @M { FREE_TIME } nodes, and times
making up complete days are replaced by @M { BUSY_DAY } nodes.
@PP
Each sequence of active time groups, once its length is finalized,
has its length compared with the limits and produces a cost
immediately.  The @M { INT_SEQ } node above is responsible for doing
that.  So it only has to remember the length of the sequence
of complete children with value 1 that includes the most recently
completed child, or 0 if there is no such interval.
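@PP
The state kept by such a node can be sketched in C as follows
(invented names, written for this description, not KHE's code):
```c
#include <assert.h>

/* Sketch of the state kept by an INT_SEQ node: the length of the
   current run of active (value 1) children, plus the cost already
   charged for runs that have ended. */

typedef struct {
  int run_len;     /* length of current run of 1's, 0 if none */
  int total_cost;  /* cost charged for completed runs so far */
} INT_SEQ_STATE;

/* cost of one completed run of length len, with limits a..b, weight w */
int seq_cost(int len, int a, int b, int w)
{
  return len < a ? w * (a - len) : len > b ? w * (len - b) : 0;
}

/* called when one child's value is finalized, in tree order */
void int_seq_child_final(INT_SEQ_STATE *s, int value, int a, int b, int w)
{
  if (value != 0)
    s->run_len++;
  else {
    if (s->run_len > 0)
      s->total_cost += seq_cost(s->run_len, a, b, w);
    s->run_len = 0;  /* the run has ended and is now forgotten */
  }
}

/* called after the last child, to close any run still open */
void int_seq_finalize(INT_SEQ_STATE *s, int a, int b, int w)
{
  if (s->run_len > 0)
    s->total_cost += seq_cost(s->run_len, a, b, w);
  s->run_len = 0;
}
```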
@PP
Actually, the three top nodes are combined into a single
@M { INT_SEQ_COMB } node, much like the @M { INT_SUM_COMB } node.
The code for @M { INT_SEQ } nodes is commented out in the
implementation, because it is unused.  The same optimizations,
depending on the maximum limit and the cost function, apply
here too, and they are implemented.
@PP
The implementation is much easier if, as the time sweep proceeds,
the @M { INT_SEQ } node's children become complete and report their
values in the same order that they appear in the expression tree.  We
guarantee this by arbitrarily increasing the range of days that
each child is considered to be active to include the last day
that the preceding child is considered to be active.  This way,
after a child's value is finalized, it continues to be treated as
active, including storing its associated information, until its
@M { INT_SEQ } parent is ready to accept its value.  In principle
this adds to the amount of information that has to be kept, but in
practice it does not, because the time groups of limit active
intervals monitors are always in chronological order, so no range
increases are needed.
@PP
When day ranges are increased and @M { OR } or @M { AND } nodes
are omitted, the @M { BUSY_TIME } and @M { FREE_TIME } nodes need
dominance tests.  They inherit the @M { beta } tests from their
omitted parents.
@PP
Two @M { BUSY_TIME } nodes which have the same time and the same
unincreased range are represented by the same object.  The same
goes for @M { FREE_TIME }, @M { WORK_TIME }, @M { BUSY_DAY },
@M { FREE_DAY }, and @M { WORK_DAY } nodes.  The implementation
is compatible with arbitrary common subexpression elimination,
but that is not implemented.
@PP
Before building any trees, the constraints are sorted so that
those with larger limits come first.  This is done for the
benefit of trie dominance:  larger limits mean larger arrays
of children in the tries, and if they come first they come
earlier in the signatures and so higher in the tries, meaning
that there are fewer of them and they contain fewer null entries.
@End @SubSection

@SubSection
    @Title { Running time }
    @Tag { resource_solvers.single.time }
@Begin
@LP
In this section we show that the dynamic programming algorithm,
using either weak or strong dominance, runs in polynomial time
in practice.
@PP
Running time is usually expressed as a function of @M { n }, the size
of the input.  We will let @M { n } be the number of days.  Taking
the conventional view that a timetabling problem consists of times,
resources, events, and constraints, it is clear (given that we only
deal with a single resource) that the input size will be proportional
to the number of days.  In effect, we are considering an infinite set
of instances, one for each value of @M { n }, each with @M { n } days,
one resource, a constant number @M { a } of shift types per day, and
some constraints.
@PP
How many constraints?  Most constraints are determined by what
happens in one small region of the timetable.  Their number is
proportional to @M { n }.  The only constraints determined by the
whole timetable are limits on the number of shifts or minutes,
in total or for a particular shift type.  Their number remains
constant as @M { n } increases.  For any particular day the number of
constraints affected by what happens on that day is a constant,
independent of @M { n }.
@PP
Let @M { W(m) } be the number of nodes in @M { P sub m } (`@M { W }'
is for `weak dominance').  Also let @M { W(0) = 1 }, representing the
root of the search tree, which exists before the first day.  Our
first task is to estimate @M { W(m) }.
@PP
Let @M { bar s sub m (c, x) bar } be the number of distinct values of
@M { s sub m (c, x) } that occur as @M { x } runs over all partial
solutions for @M { d sub 1 ,..., d sub m }.  The signature of a node in
@M { P sub m } is the concatenation of the signatures of the constraints
on day @M { m }, and the partial timetables in @M { P sub m } have
distinct signatures, so
@ID @Math { W(m) <= big prod from { c } bar s sub m (c, x) bar }
We have already argued that the number of constraints @M { c }
affected by what happens on day @M { d sub m } will be constant.
For most of these constraints, @M { bar s sub m (c, x) bar }
will be a small constant, and in particular it will be 1 if
the signature is empty.  So we can write
@ID @Math { W(m) <= big prod from { c } bar s sub m (c, x) bar = O( n sup K ) }
where @M { K } is the number of constraints @M { c } for which
@M { bar s sub m (c, x) bar }
increases with @M { n }.  (The increase is always at most linear.)
Constraints of this kind mainly limit the total number of shifts
(or shifts of a particular type) worked.  Constraints on the number
of busy weekends also have an @M { bar s sub m (c, x) bar } which
increases with @M { n }, but only slowly.  Altogether, then,
@M { K } is a small constant.
@PP
Given a partial timetable in @M { P sub m }, the running time of
assigning one more shift to it, including creating a new partial timetable
object, finding its signature and cost, and looking up the signature
in the @M { P sub {m+1} } hash table, may be taken to be 1.  This is
fair because the number of constraints affected by a particular day is
a constant, independent of @M { n }.  The KHE implementation does
indeed do all this in a small constant amount of time.
@PP
For each of the @M { W(m) } nodes in @M { P sub m } we need to generate
at most @M { a + 1 } new solutions, making a total running time of at most
@ID @Math { (a + 1)W(m)}
to generate the @M { P sub {m+1} } hash table.  (The up to @M { a }
shifts used when doing this are chosen, just once, before the main
algorithm begins, and this takes a negligible amount of time.)  So
the overall running time is at most
@ID @Math { big sum from {0 <= m < n} (a + 1)W(m) = O( n sup {K+1} ) }
where @M { K } is the number of constraints for which
@M { bar s sub m (c, x) bar } increases with @M { n }.  As explained
above, @M { K } is a small constant in practice.
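@PP
A small numeric check of this bound (illustrative only): taking
@M { W(m) } to grow like @M { m sup K }, the total work over
@M { n } days stays below @M { (a + 1) n sup {K+1} }.
```c
#include <assert.h>

/* integer power, enough for small exponents */
long ipow(long base, int exp)
{
  long r = 1;
  while (exp-- > 0)
    r *= base;
  return r;
}

/* total work (a+1) * sum of W(m) over n days, with W(m) = (m+1)^K */
long total_work(int n, int a, int K)
{
  long sum = 0;
  int m;
  for (m = 0; m < n; m++)
    sum += (a + 1) * ipow(m + 1, K);
  return sum;
}
```
For example, with a 28-day cycle, @M { a = 3 } shift types, and
@M { K = 2 }, the total work is well below the bound
@M { 4 times 28 sup 3 }.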
@PP
This analysis ignores the extra value (the number of busy days)
in each signature.  It is ignored because something like it will
usually be present anyway on every day except the last, derived
from a constraint on total workload.
@PP
In some models, shifts have durations in minutes and there is a
constraint on the total duration of the shifts taken by a nurse.
This could lead to a very large value of @M { bar s sub m (c, x) bar },
although the number should be manageable if all durations are
multiples of, say, 30 or 60 minutes.
@PP
The author has not found any way to tighten up this analysis for
strong dominance.  It is easy to see that @M { S(m) <= W(m) },
where @M { S(m) } is the size of @M { P sub m } when strong
dominance is used.  This can be proved using induction on @M { m }
and the fact that every case of weak dominance is also a case of
strong dominance.  The running time for creating one partial
timetable and inserting it into @M { P sub m } must be multiplied
by @M { S(m) }, to account for the cost of the pairwise dominance
tests.  One would think that this would be significantly slower,
but testing suggests otherwise; it seems that the cost of creating
the extra partial solutions outweighs the cost of the extra dominance
tests.
@End @SubSection

@EndSubSections
@End @Section

@Section
    @Title { Resource matching }
    @Tag { resource_solvers.matching }
@Begin
@LP
Consider the tasks running at some time @M { t }.  Each task can be
assigned at most one resource.  Assuming the resources have hard
avoid clashes constraints, each resource can be assigned to at most
one of the tasks.  So the assignments to these tasks form a matching
in the bipartite graph with tasks for demand nodes, resources for
supply nodes, and feasible assignments for edges.
@PP
Consider an initial state in which none of the tasks running at time
@M { t } is assigned.  For each edge in the bipartite graph, carry
out the indicated assignment, label the edge with the cost of the
solution after the assignment is made, and then remove the assignment.
The result is a bipartite graph with edge weights representing the
badness of each individual assignment.  
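@PP
The following toy program (written for this description, not KHE's
implementation) illustrates the idea on two tasks and two resources,
with a non-assignment supply node for each task.  It enumerates all
matchings and keeps the cheapest; KHE uses a proper weighted
bipartite matching algorithm instead.
```c
#include <assert.h>

#define NONE -1

/* weight[t][r] is the toy solution cost after assigning resource r
   to task t; weight[t][2] is the cost of leaving task t unassigned */
static long weight[2][3] = {
  { 3, 5, 9 },   /* task 0: r0 costs 3, r1 costs 5, unassigned costs 9 */
  { 4, 4, 9 },   /* task 1 */
};

/* enumerate all matchings of 2 tasks to {r0, r1, none}; a resource
   may not be shared, but any number of tasks may be unassigned */
long best_matching(int asst[2])
{
  int a0, a1;
  long best = -1;
  for (a0 = 0; a0 <= 2; a0++)
    for (a1 = 0; a1 <= 2; a1++) {
      long c;
      if (a0 == a1 && a0 != 2)
        continue;  /* clash: one resource assigned to both tasks */
      c = weight[0][a0] + weight[1][a1];
      if (best < 0 || c < best) {
        best = c;
        asst[0] = (a0 == 2 ? NONE : a0);
        asst[1] = (a1 == 2 ? NONE : a1);
      }
    }
  return best;
}
```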
@PP
Assuming that all tasks have hard assign resource constraints,
a maximum matching of minimum cost in this graph will be a very
desirable assignment.  Indeed, it will often be optimal.  This
can be seen by examining all constraint types:  each is either
unaffected by the assignment, or else its effect is independent
for each edge, so that the edge weights are valid in combination
as well as individually.  @I { Resource matching } is KHE's name
for this general idea.
@PP
There is one constraint whose effect is not independent for each
edge:  the limit resources constraint from employee scheduling.
Resource matching handles this constraint specially, as described
in Section {@NumberOf resource_solvers.matching.limit_resources}.
This special arrangement is exact (preserves optimality) in many
common cases, but in general it is merely heuristic.  The resource
assignment invariant is another problem:  it may hold for each
element of a set of assignments individually, but fail on the
whole set.  However, this does not seem to be a problem in practice.
@PP
Not all tasks have hard assign resource constraints.  In nurse
rostering, for example, a shift requiring between 3 and 5 nurses
is modelled by an event with 5 tasks, only 3 of which have assign
resource constraints.  Fortunately, missing assign resource
constraints are easily handled.  For each task, add a supply node,
linked only to that task, representing non-assignment of the task.
The edge weight is just the initial solution cost, because choosing
that edge changes nothing.
@PP
As described, resource matching constructs assignments; it does not
repair them.  However, a repair algorithm is easily made from it:
choose a time @M { t }, unassign all the tasks running at that time,
reassign them using resource matching, and then either keep the new
assignments if they improve the solution, or revert to the original
assignments if they do not.
@PP
During initial construction it may be that some tasks are already
assigned, and what is wanted is to assign the unassigned ones without
disturbing the assigned ones.  In that case, simply omit the demand
nodes for assigned tasks.
@PP
Instead of selecting all tasks running at a single time @M { t },
KHE's implementation selects all tasks whose times overlap with an
arbitrary set of times @M { T }.  For @M { bar T bar >= 2 }, this
does not make sense in general, because one resource could be
assigned to two or more of the tasks, and the rationale for using
matching is lost.  However, there are at least two cases
where it does make sense.
@PP
First, when @M { T } is a time group from the common frame
(Section {@NumberOf extras.frames}), hard limit busy times
constraints prohibit resources from being assigned to two
or more tasks that overlap @M { T }.
@PP
Second, when resource matching is used for repair, KHE's version of
it specifies that tasks which are assigned the same resource at the
start must be assigned the same resource at the end.  Of course,
this does not produce an optimal reassignment of the tasks, because
it requires some tasks to be assigned to the same resources.
However, minimum cost weighted matchings can be found in polynomial
time, whereas true optimal reassignment is NP-complete.
@BeginSubSections

@SubSection
    @Title { A solver for resource matching }
    @Tag { resource_solvers.matching.solver }
@Begin
@LP
This section presents a solver for resource matching.  It
can be used directly via the interface given in this section,
or indirectly via the applications given in the following
two sections.
@PP
One solver may be used for many solves.  To create and delete a
solver, call
@ID @C {
KHE_RESOURCE_MATCHING_SOLVER KheResourceMatchingSolverMake(
  KHE_SOLN soln, KHE_RESOURCE_GROUP rg, HA_ARENA a);
void KheResourceMatchingSolverDelete(KHE_RESOURCE_MATCHING_SOLVER rms);
}
The deletion really only happens when arena @C { a } is deleted or
recycled; but before then a call to @C { KheResourceMatchingSolverDelete }
is needed to carry out some tidying up (there are group monitors to remove).
@PP
The solves have one supply node for each resource of @C { rg }, plus
supply nodes representing non-assignment.  Typically, @C { rg } would
be @C { KheResourceTypeFullResourceGroup(rt) } for some resource type
@C { rt }, but it can be any resource group.  It is fixed for the
lifetime of the solver.
@PP
To carry out one solve, call
@ID @C {
bool KheResourceMatchingSolverSolve(KHE_RESOURCE_MATCHING_SOLVER rms,
  KHE_RESOURCE_MATCHING_DEMAND_SET rmds, bool edge_adjust1_off,
  bool edge_adjust2_off, bool edge_adjust3_off, bool edge_adjust4_off,
  bool ejection_off, KHE_OPTIONS options);
}
If this can find a way to improve the solution, it does so and returns
@C { true }.  Otherwise it leaves the solution unchanged and returns
@C { false }.  Parameter @C { rmds } is the set of demand nodes to
match against the supply nodes already present in @C { rms }; how
to construct it is explained below.  The other parameters affect
the detailed behaviour of the solver, as follows.
@PP
When @C { true }, parameters @C { edge_adjust1_off },
@C { edge_adjust2_off }, @C { edge_adjust3_off }, and
@C { edge_adjust4_off } turn off the four edge adjustments.
These adjust edge costs so that, in cases which would otherwise
be tied, resources with certain properties are preferred, as follows.
@PP
Edge adjustment 1 gives preference to resources with a larger
number of available times (Section {@NumberOf solutions.avail})
over resources with a smaller number.  This seems likely to be the
most effective form of edge adjustment, so it is given twice the
weight of the others.
@PP
Edge adjustment 2 gives preference to resources whose assignment
brings a smaller number of constraints from below their maximum
values to their maximum values.  Hopefully this will keep more
resources available for assignment for longer.
@PP
Edge adjustment 3 tracks the number of consecutive assignments
to the resource in recent solves, and favours resources for which
this number is smaller.  This encourages shorter runs of
consecutive assignments, which hopefully will give more
flexibility when repairing later.
@PP
Edge adjustment 4 tracks the time of day of the most recent
assignment to the resource, and favours assignments that
repeat that time of day.  This encourages sequences of
shifts of the same type.  These always seem to be acceptable
in nurse rostering, and they are often preferable.
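@PP
One way to fold such adjustments into edge costs is to scale the
true cost so that the adjustments can only separate edges whose true
costs are equal.  The following C sketch shows the idea; the scale
factor and all names are assumptions made for this illustration, not
KHE's actual scheme.
```c
#include <assert.h>

/* adjusted edge cost: the true cost dominates, and the four
   adjustments act only as tie-breakers (lower is better) */
long adjust_edge_cost(long cost, int avail_times, int max_avail,
  int newly_maxed, int run_len, int repeats_time_of_day)
{
  long adj = 0;
  adj += 2L * (max_avail - avail_times);  /* adjustment 1: double weight */
  adj += newly_maxed;                     /* adjustment 2 */
  adj += run_len;                         /* adjustment 3 */
  adj += repeats_time_of_day ? 0 : 1;     /* adjustment 4 */
  return cost * 1000 + adj;  /* 1000 assumed to exceed any adj total */
}
```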
@PP
At the end of the call, if limit resources monitors are
involved and any of them have non-zero cost, function
@C { KheEjectionChainRepairInitialResourceAssignment }
is called to repair them.  This call is omitted if
parameter @C { ejection_off } is @C { true }.
@PP
Three options from @C { options } are consulted.  Option
@C { rs_invariant } determines whether the resource assignment
invariant is in effect, as usual.  If it is, only individual
edges that preserve the invariant are included in the graph, and
if, when the solution is changed to reflect the minimum matching,
any of the individual assignments fail the invariant, those
assignments are omitted.  Option @C { gs_common_frame } supplies
the common frame, needed even when edge adjustment is not in
effect, for ejecting task moves.  Finally, the first time that
@C { rmds } is solved, option @C { gs_event_timetable_monitor }
(Section {@NumberOf general_solvers.general}), which must be
present, is used to obtain efficient access to the tasks which
overlap its times.
@PP
A demand set is constructed by a sequence of calls beginning with
@ID @C {
KHE_RESOURCE_MATCHING_DEMAND_SET KheResourceMatchingDemandSetMake(
  KHE_RESOURCE_MATCHING_SOLVER rms, bool preserve_existing);
}
(for @C { preserve_existing }, see below), and continuing with any
number of calls to
@ID @C {
void KheResourceMatchingDemandSetAddTime(
  KHE_RESOURCE_MATCHING_DEMAND_SET rmds, KHE_TIME t);
void KheResourceMatchingDemandSetAddTimeGroup(
  KHE_RESOURCE_MATCHING_DEMAND_SET rmds, KHE_TIME_GROUP tg);
void KheResourceMatchingDemandSetAddFrame(
  KHE_RESOURCE_MATCHING_DEMAND_SET rmds, KHE_FRAME frame);
}
in any order.  These define a set of times @M { T }:  the union of
the times @C { t }, the time groups @C { tg }, and the time groups
of @C { frame }.  @M { T } must not be empty.
@PP
A demand set may be saved, and solved multiple times.  When it
is no longer needed it may be deleted explicitly, by calling
@ID {0.91 1.0} @Scale @C {
void KheResourceMatchingDemandSetDelete(KHE_RESOURCE_MATCHING_DEMAND_SET rmds);
}
Alternatively, deleting its solver's arena will also delete it, because
it is stored in that arena.  A less drastic alternative to deletion is
@ID {0.91 1.0} @Scale @C {
void KheResourceMatchingDemandSetClear(KHE_RESOURCE_MATCHING_DEMAND_SET rmds);
}
which clears out @C { rmds } ready for a fresh lot of times.
@PP
The demand nodes of one demand set are specified in two steps:
first the tasks to include, called the @I { selected tasks }, are
specified; then those tasks are grouped into demand nodes.  A
task @C { t } is selected (its assignment may be changed) when it
satisfies these conditions:
@NumberedList

@LI @OneRow {
It has the same resource type as the solver's @C { rg } attribute;
}

@LI @OneRow {
It is either assigned directly to the cycle task of a resource
of @C { rg }, or else it is unassigned;
}

@LI @OneRow {
If @C { preserve_existing } is @C { true }, it is unassigned;
}

@LI @OneRow {
It, or some task assigned directly or indirectly to it, lies in
a meet which is assigned a time, directly or indirectly, so as
to cause the task to share at least one time with @M { T };
}

@LI @OneRow {
It is not derived from a preassigned event resource;
}

@LI @OneRow {
Its assignment is not fixed (by @C { KheTaskAssignFix }, or
because it is a cycle task);
}

@LI @OneRow {
Not assigning it might attract a cost.  All tasks subject to assign
resource constraints of non-zero cost are included.  Some tasks
subject to limit resources constraints with minimum limits are also
included, chosen heuristically, as explained in the depths of
Section {@NumberOf resource_solvers.matching.limit_resources}.
}


@EndList
The last item is a compromise.  If too few tasks are included, the
assignment will be too far from final to be useful.  But if too many
are included, including tasks for which assignment is not needed,
and the resources have minimum workload limits, then those limits
will favour assigning all of the tasks, over-using the resources
early in the cycle and causing workload shortages later.
@PP
For each resource @C { r } of @C { rg } there is one demand node
containing all selected tasks which are initially assigned @C { r }.
If there are no such tasks (for example, when @C { preserve_existing }
is @C { true }), there is no such node.  There is also one demand
node for each of the remaining selected tasks.  (We are speaking of
logical demand nodes here; as the next section explains, equivalent
logical demand nodes are grouped into single nodes by the
implementation, for efficiency.)
@PP
The supply nodes of one solve consist of one for each resource
@C { r } of @C { rg }, representing assignment of @C { r }, and
one for each demand node, representing non-assignment of its tasks.
(Again, these are logical supply nodes; in the implementation, all
supply nodes representing non-assignment are grouped into a single
supply node.)
@PP
When determining which edges are present and their weights, the
first step is to unassign every initially assigned selected task
using @C { KheTaskUnAssign }.  This must succeed, because the
selected tasks are not fixed.  Then, for each demand node @C { d },
for each supply node @C { s } representing assignment
of a resource @C { r }, draw an edge between @C { d } and
@C { s } when the tasks of @C { d } can be assigned @C { r }.
This is tested by calling @C { KheEjectingTaskMoveFrame }
for each task of @C { d }; an edge is added when all
these calls succeed.  The edge cost is the solution cost
after they are done, optionally with edge adjustment as
described above.  There is also an edge from @C { d } to the
supply node @C { s } representing non-assignment of the tasks of
@C { d }, whose cost is the (unchanged) solution cost.
# @PP
# Finally, an implementation note.  Because the weighted
# bipartite matching code is built on a min-cost flow algorithm,
# the implementation can combine equivalent nodes.  This is done
# for demand nodes containing equivalent tasks (tasks which can
# be assigned the same resources and have the same effect on the
# timetables of resources), and for the supply nodes representing
# non-assignment.  This significantly reduces the size of the
# matching graph and the running time.
@End @SubSection

@SubSection
    @Title { Implementing resource matching }
    @Tag { resource_solvers.matching.limit_resources }
@Begin
@LP
This section describes the implementation of resource matching
in detail.
@PP
As mentioned earlier, limit resources constraints (or rather
monitors) are a problem for resource matching, because they take
away the independence of the edge weights.  Suppose that on the
current day there is a requirement for at least one senior nurse.
If no special arrangements are made, every edge to a non-senior
nurse will carry a cost.  That is not right, because only one
task needs a senior nurse.  This problem strongly influences the
implementation.
@PP
Resource matching detaches all limit resources monitors that affect
the current match and replaces them by adjustments to the edge
weights.  This restores the lost independence.  These adjustments
are often @I { exact }:  they have the same effect on cost as the
monitors.  When they are not exact, resource matching loses its
local optimality, although it is still a good heuristic.
@PP
The algorithm has two parts.  The first part, @I { preparation },
has three phases; it builds the demand nodes and does a few other
things explained below.  The second part, @I { solving }, adds the
edges, finds the matching, and makes the assignments.  A demand
set may be solved repeatedly, but it is prepared only once, just
before it is solved for the first time.
@PP
@I { Preparation (first phase): find and group selected tasks }.
A @I { selected task } is a task that may be assigned by the
current match.  An @I { affected task } is a task whose assignment
is affected by the current match:  it is either selected, or it is
assigned, directly or indirectly, to a selected task (its
@I { selected task }).  For example, if a Saturday night task
is grouped with a Sunday night task, then when solving either
Saturday or Sunday, one of the tasks is affected and selected
and the other is affected but not selected.
@PP
Given the demand set's set of times @M { T }, the selected tasks are
easily found.  For each time in @M { T }, use the event timetable
monitor from option @C { gs_event_timetable_monitor } to find the
meets running at that time.  For each task of the wanted type in
each meet, follow its chain of assignments to its proper root.  By
the way it was found, the proper root must satisfy conditions 1, 2,
and 4; if it also satisfies conditions 3, 5, and 6, then make it a
selected task.  Condition 7 (concerning the cost of non-assignment)
is not checked here; that will be done later.
@PP
A selected task might be encountered more than once while doing
this.  So, to finish this step, the array of selected tasks built
so far is sorted and uniqueified.
@PP
The next step is to traverse the uniqueified array of selected
tasks, doing two things.  First, if several selected tasks
are assigned the same resource when resource matching is called,
the specification states that they should be assigned the same
resource by resource matching.  So a task grouper
(Section {@NumberOf resource_structural.task_tree.grouper}) is used
to group these tasks.  In each group, the leader task remains
selected but its followers are assigned to it, demoting them
to affected but not selected.  From now on, `selected' means
`selected after grouping'.  The grouping is removed at the end
of the solve:  the follower tasks are then assigned directly to
whatever the leader task is assigned to.  Second, each selected
task is placed into its own @I { demand node }, a node of the
bipartite graph.
@PP
@I { Preparation (second phase): find task profiles and merge
equivalent nodes }.
One selected task per node would work.  But many tasks are
@I { equivalent }:  for each resource @M { r }, assigning @M { r }
to one of these tasks has the same effect as assigning @M { r } to
another.  The following calculations are not cheap, and the weighted
bipartite matching algorithm is built on a flow algorithm, which can
treat multiple equivalent nodes as a single node whose multiplicity
is an edge capacity.  So it makes sense to merge nodes containing
equivalent tasks into a single node whose incoming edge has the
number of tasks as its capacity limit.  This phase does this.
@PP
Determining whether selected tasks are equivalent is done by
building a @I { task profile } for each, such that two tasks are
equivalent if their profiles are equal.  A selected task's profile
depends on the task itself and on the tasks assigned to it, directly or
indirectly.  It consists of the set of times occupied by those tasks,
their total workload, and a set of @I { preferences }.  A preference
is a pair @M { ( g sub i , c sub i ) }, where @M { g sub i } is a set
of resources, and @M { c sub i } is a cost.  Its meaning is that the
resources of @M { g sub i } are preferred for this task, and assigning
something not in @M { g sub i } attracts cost @M { c sub i }.
@PP
For convenience of presentation, an artificial resource @M { r sub 0 }
is defined, such that assigning @M { r sub 0 } to some task means
non-assignment of that task.  A preference's @M { g sub i } may
include @M { r sub 0 }.
# @PP
# Preferences have two rather different uses.  Here they are part of a
# task's profile, used to help decide which tasks are equivalent.  Later,
# they will be used to calculate adjustments to edge costs which replace
# detached monitors.
@PP
At the start of this phase, each node contains a single task.  It
also has a task profile attribute, which is now initialized to the
task profile for the node's sole task, @M { t } say.  This is done
by traversing @M { t }, the tasks assigned to @M { t }, the tasks
assigned to those tasks, and so on recursively.  While doing this,
the set of times occupied by those tasks, and their total workload,
are added to the profile.  Also, for each point where an assign
resource or prefer resources monitor @M { m } monitors a task
@M { t prime } which is either @M { t } or assigned directly or
indirectly to it, one preference @M { ( g sub i , c sub i ) } is added:
@BulletList

@LI {
If @M { m } is an assign resource monitor, @M { g sub i } is the full
set of resources of the task's resource type, and @M { c sub i } is
the duration of @M { t prime } multiplied by the weight of @M { m }'s
constraint.
}

@LI {
If @M { m } is a prefer resources monitor, @M { g sub i } is @M { m }'s
resource group plus @M { r sub 0 }, and @M { c sub i } is the duration
of @M { t prime } multiplied by the weight of @M { m }'s constraint.
}

@EndList
These preferences express the effect of these monitors exactly.  A
prefer resources constraint does not penalize non-assignment, which
is why @M { r sub 0 } is included.
@PP
Two preferences with the same set of resources may be merged into one,
whose cost is the sum of the two original costs.  These merges are done
as preferences are added to profiles.
@PP
After the traversal of the affected tasks of selected task @M { t }
ends, the preferences in @M { t }'s profile are sorted, to expedite
comparing profiles.  After the profiles are done, the nodes are
sorted to bring nodes with equal profiles together, then adjacent
nodes with equal profiles are merged.
@PP
@I { Preparation (third phase): add preferences representing limit
resources monitors }.  The preferences added in this phase often
represent their monitors exactly, and when they do not, they come
close.
@PP
While preferences representing assign resource and prefer resources
monitors were being added to task profiles in the previous step, a
list of all limit resources monitors that monitor affected tasks was
built.  This list is now sorted and uniqueified.  Each monitor on
it is then visited and preferences are added to represent it.
@PP
Before visiting the first limit resources monitor, all affected
tasks are visited, and the back pointer in each is set to its
selected task.  (All that is actually needed is a boolean mark
to indicate that the task is affected.)  After the last limit
resources monitor is visited, the affected tasks are visited
again and their back pointers are cleared.
@PP
Handling one limit resources monitor @M { m sub i } proceeds in two
steps.  In the first step, several quantities are calculated:  the
total duration @M { N } of the affected tasks monitored by
@M { m sub i }; lower and upper limits @M { L } and @M { U } on the
total duration of these tasks which may be assigned resources from
its resource group @M { g sub i } without incurring a penalty; and
for each selected task @M { t }, its @I { monitored duration }
@M { d sub t }:  the total duration of its affected tasks that
are monitored by @M { m sub i }.  In the second step, preferences
are added based on these quantities.
@PP
The first step proceeds as follows.  @M { N } is initialized to 0, and
@M { L } and @M { U } to @M { m sub i }'s minimum and maximum limits.
If @M { m sub i } has no minimum limit, @M { L } is set to 0.  If
@M { m sub i } has no maximum limit, @M { U } is set to a very large
number.  For each selected task @M { t }, @M { d sub t } is set to 0.
@PP
Now @M { m sub i } may monitor non-affected tasks as well as affected
ones.  In practice, limit resources monitors always limit what is
happening at a particular moment in time, so non-affected tasks might
seem not to be a live issue.  But consider the grouped Saturday and
Sunday tasks above.  While matching Saturday, there may well be a
limit resources monitor which monitors the Sunday task and also
other, non-affected Sunday tasks.
@PP
A complete traversal of the tasks monitored by @M { m sub i } is
carried out.  For each task, the back pointer set earlier tells
whether the task is affected by the current match or not.  If it is
affected, its duration is added to @M { N } and also to the monitored
duration of its selected task.  If it is not affected, there are
two cases.  If it is assigned, directly or indirectly, to a resource
from @M { g sub i }, then its duration is subtracted from both
@M { L } and @M { U }.  If it is not assigned to any resource,
directly or indirectly, then its duration is subtracted from
@M { L } only.  This is analogous to how history is handled by
cluster busy times and limit active intervals monitors.
@PP
After this, if @M { L } is negative, set it to 0, and if it exceeds
@M { N }, set it to @M { N }.  Do the same for @M { U }.  After that
we must have @M {  0 <= L <= U <= N }.  Here @M { L <= U } is an
invariant of this whole step, established by a requirement of the
limit resources constraint and preserved as the step proceeds.
# It is also clear from the algorithm that @M { N } is the sum of the
# @M { d sub t } values.
@PP
That concludes the first step in the handling of limit resources
monitor @M { m sub i }, the calculation of @M { N }, @M { L },
@M { U }, and the @M { d sub t }.  The second step adds preferences
to demand nodes, as follows.
@PP
A set of selected tasks whose total monitored duration is at least
@M { L } should be assigned resources from @M { g sub i }, so find
selected tasks @M { t } whose total monitored duration is as large as
possible without exceeding @M { L }, and add preference
@M { ( g sub i , w sub i d sub t ) }
to their nodes, to encourage these assignments.  Similarly, a set of
selected tasks whose total monitored duration is at least @M { N - U }
should not be assigned resources from @M { g sub i }, so find selected
tasks @M { t } whose total monitored duration is as large as possible
without exceeding @M { N - U },
and add preference 
@M { ( G union lbrace r sub 0 rbrace - g sub i , w sub i d sub t ) }
to each of them, to discourage these assignments.  Each demand node
receives at most one preference derived from @M { m sub i }, since
@M { L + (N - U) <= N }.
@PP
Given that all the tasks in one node share the same preferences, it
may be necessary to split nodes while doing this.  Only one of the
two resulting nodes receives the new preference.
@PP
Which demand nodes should these preferences be added to?  It is easy
to add preferences derived from assign resource and prefer resources
monitors, because the tasks and hence the nodes are determined; but
here the selected tasks must be chosen, from the selected tasks
with positive monitored duration.
@PP

Nodes need to be chosen whose preferences are as similar as possible to
the new preference that will come in.  It would be wrong to encourage
some resources with one preference and a completely different set of
resources with another.  This will be investigated further below.  For
now, it is assumed that there is a numerical @I { compatibility } for
each node with respect to @M { m sub i }, such that when compatibility
is high, adding preference @M { ( g sub i , w sub i d sub t ) } works
well, and when it is low, adding preference
@M { ( G union lbrace r sub 0 rbrace - g sub i , w sub i d sub t ) }
works well.
@PP
The algorithm for adding limit resources preferences, then, is
as follows.  Calculate the compatibility of each node, and
store it in the node.  Sort the nodes into decreasing order of
compatibility.  Consider the tasks as forming a single
sequence, beginning with the tasks in the first node, then the
second, and so on.  Ignoring tasks of zero monitored duration,
find the largest prefix of this sequence whose tasks have
total monitored duration at most @M { L }, and ensure that
preference @M { ( g sub i , w sub i d sub t ) } applies to
each task @M { t } of them and to no other tasks.  This may
involve some node splitting, as mentioned earlier.  Then find
the largest suffix of this sequence whose tasks have monitored
duration at most @M { N - U }, and ensure that preference
@M { ( G union lbrace r sub 0 rbrace - g sub i , w sub i d sub t ) }
applies to each task @M { t } of them and to no other tasks.
Again, this may require some node splitting.
@PP
The implementation is slightly different.  At each node @M { n },
it builds a set @M { A } of tasks from @M { n }.  First it adds
to @M { A } the first task it can find whose monitored duration
is non-zero and would not cause the target (initially either
@M { L } or @M { N - U }) to be exceeded.  Then it adds to @M { A }
as many more tasks from @M { n } as it can, subject to them all
having the same monitored duration as the first, and collectively
not exceeding the target.  After this is done, if @M { A } is
empty it proceeds to the next node.  If @M { A } contains every
task of @M { n } it adds the new preference to @M { n }, updates
the target, and proceeds to the next node.  Otherwise it makes
a new node holding copies of @M { n }'s preferences plus the
new preference, moves the tasks of @M { A } to it, updates the
target, then restarts on @M { n }.  It does the @M { N - U }
preferences before the @M { L } ones, because this is slightly
simpler to implement, given that new nodes go on the end of the
sorted sequence of nodes.
@PP
A formula is needed for the compatibility of a preference
@M { ( g sub i , w sub i d sub t ) } with a node @M { n sub j }.
Let the intersection of the resource groups of the preferences
already present in @M { n sub j } be @M { G sub j }.  If there
are no preferences, @M { G sub j = G union lbrace r sub 0 rbrace },
where @M { G } is the full set of resources.  Finding a suitable
formula is a rather puzzling problem; the author's current choice is
@ID @Math {
{ bar G sub j intersection g sub i bar } over { bar G sub j bar }
}
or 0 if @M { bar G sub j bar = 0 } (unlikely).  This
reaches its maximum value, 1, when @M { G sub j subseteq g sub i },
which is reasonable since adding @M { ( g sub i , w sub i d sub t ) }
to @M { n sub j } does not reduce @M { G sub j }, and its minimum
value, 0, when @M { G sub j ` intersection g sub i } is empty.
@PP
Although the result will in general be heuristic, not exact, the
difficulty should not be overstated.  A typical example might be
(a) 5 or 6 nurses, with (b) at least one senior nurse and (c) at
most two trainee nurses.  For cases like this, a simple heuristic
should do very well.
@PP
Let @M { S } be the senior nurses and @M { T } be the trainee nurses.
Constraint (a) adds preferences of the form @M { ( G , w sub i ) },
where @M { G } is the full set of resources, to 5 tasks, leaves one
task untouched, and adds preferences of the form
@M { ( lbrace r sub 0 rbrace , w sub i ) } to the remaining tasks;
(b) adds one preference of the form @M { ( S , w sub i ) };
and (c) adds preferences of the form
@M { ( G union lbrace r sub 0 rbrace - T , w sub i ) }
to all but two of the tasks.  It is easy to verify that
(b) will prefer nodes containing @M { ( G , w sub i ) },
while (c) will prefer nodes containing 
@M { ( lbrace r sub 0 rbrace , w sub i ) }, and also nodes
containing @M { ( S , w sub i ) }, since
@M { S intersection (G union lbrace r sub 0 rbrace - T) = S },
because @M { S } and @M { T } are disjoint.
@PP
Set operations are slow, so four optimizations are used.  First,
in preferences derived from assign resource constraints,
@M { g sub i } is in fact @C { NULL }.  This is because it
has no effect on intersections (except by omitting @M { r sub 0 },
but @M { r sub 0 } is handled separately).  Only non-@C { NULL }
sets of resources need to be intersected.  Second, an intersection
is only performed when it is actually needed:  when a task profile
already contains at least two preferences with non-@C { NULL } sets
of resources, and a third is being considered for adding to it.
That makes three non-@C{ NULL } sets---quite unlikely in practice.
Third, only the size of the intersection in the formula is calculated,
not the actual set.  And fourth, intersections are stored as resource
sets (Section {@NumberOf extras.resource_sets}), which are cheaper
than resource groups.


@PP
Consider any demand node @M { d }.  Suppose that every preference in
its profile contains @M { r sub 0 }.  This means that, after careful
consideration, preparation has concluded that not assigning @M { d }'s
tasks would not incur a cost.  As explained earlier, it is better
not to include such tasks at all, because they could over-use resources
early in the cycle.  Accordingly, such nodes @M { d } are now deleted.
@PP
Finally, preparation ends with the deletion of preferences derived
from assign resource and prefer resources monitors (they are not
needed by solving, as explained below).  The results of preparation
are stored in the demand set object:  the demand nodes with their
tasks, profiles, and preferences; the uniqueified list of relevant
limit resources monitors; and the task grouper recording which tasks
have to be assigned the same resource.
@PP
@I { Solving }.  Solving is much easier to describe than preparation.
Group tasks as indicated by the task grouper.  Detach the limit
resources monitors.  From each demand node, add an edge of capacity
1 to each supply node representing a resource, and an edge of unlimited
capacity to the supply node representing non-assignment.  Find a
maximum matching of minimum cost and make the assignments indicated
by it.  Reattach the limit resources monitors.  Ungroup the task grouper.
If parameter @C { ejection_off } is @C { false } and limit resources
monitors were involved and any of them have non-zero cost, call
@C { KheEjectionChainRepairInitialResourceAssignment } to repair
them.  After that, if the solution has not improved, a mark is used
to undo the assignments.
@PP
The cost of the edge from demand node @M { d } to the supply node
for resource @M { r } (possibly @M { r sub 0 }) is the cost of
the solution after that one assignment is made.  In addition, to
compensate for the detached limit resources monitors, for each
preference @M { ( g sub i , c sub i ) } in @M { d } derived from a
limit resources constraint such that @M { g sub i } does not contain
@M { r }, @M { c sub i } is added to the edge cost.  Edge costs are
not affected by preferences derived from assign resource and prefer
resources monitors, because those monitors are not detached.  Their
preferences are needed for task equivalence and to guide the placement
of preferences derived from limit resources monitors, but they are
not used when solving, so they can be and are deleted at the end of
preparation.  There are also the separate adjustments described
earlier, the ones controlled by parameters @C { edge_adjust1_off },
@C { edge_adjust2_off }, @C { edge_adjust3_off }, and
@C { edge_adjust4_off }.
@PP
When there are no limit resources monitors, the algorithm does not
waste time on work inspired by them:  grouping equivalent tasks
using task profiles is a valuable optimization in any case, and the
third phase of preparation does nothing.  Whether the preparation
time spent on limit resources monitors is significant is a question
that can only be answered definitively by testing, but the running
time is probably dominated by solving, in which case the answer is no.
@PP
This section sheds light on how event resource constraints should
be modelled.  It is best in principle to use assign resource and
prefer resources constraints, because they affect each task independently.
But if they are replaced by equivalent limit resources constraints,
this algorithm will produce the same matching graph.  This opens
a path to a useful generalization---the expression of all event
resource constraints by limit resources constraints---by showing
that the efficiency advantage of assign resource and prefer
resources constraints need not be lost.
@End @SubSection

@SubSection
    @Title { Time sweep resource assignment }
    @Tag { resource_solvers.matching.time.sweep }
@Begin
@LP
In a planning timetable whose columns represent times and whose
rows represent resources, resource packing proceeds vertically:
it assigns one row after another.   @I { Time sweep } proceeds
horizontally, assigning one time (that is, the tasks running at
that time) after another.  This is likely to be useful in nurse
rostering, where many constraints link nearby times.
@PP
KHE offers this function for time sweep resource assignment:
@ID @C {
bool KheTimeSweepAssignResources(KHE_SOLN soln, KHE_RESOURCE_GROUP rg,
  KHE_OPTIONS options);
}
Using resource matching, it assigns resources to those tasks of
@C { soln } whose resource type is that of @C { rg }, and which are
initially unassigned.  It does not disturb any existing assignments.
For how it handles fixed and preassigned tasks, and other such
details, see Section {@NumberOf resource_solvers.matching.solver}.
@PP
@C { KheTimeSweepAssignResources } obtains a frame from @C { KheFrameOption }
(Section {@NumberOf extras.frames}).  It visits each time group of the
frame in chronological order, and uses one resource matching to assign
or reassign the tasks which overlap this time group.  It is influenced
indirectly by the resource matching options, and directly by these options:
@TaggedList

# @DTI { @F rs_time_sweep_matching_off } {
# A Boolean option which, when @C { true }, causes a constructive
# heuristic to be used instead of matching, by passing @C { true }
# to @C { KheResourceMatchingSolverSolve } for @C { matching_off }.
# }

@DTI { @F rs_time_sweep_daily_time_limit } {
A string option defining a soft time limit for each day.  The format
is the one accepted by @C { KheTimeFromString }
(Section {@NumberOf general_solvers.runningtime}):  @F { secs }, or
@F { mins:secs }, or @F { hrs:mins:secs }.  There is also the special
value @F { - }, meaning `set no limit', and this is the default value.
}

@DTI { @F rs_time_sweep_edge_adjust1_off } {
A Boolean option which, when @C { true }, causes edge adjustment 1 to be
turned off, by passing @C { true } to @C { KheResourceMatchingSolverSolve }
for @C { edge_adjust1_off }.
}

@DTI { @F rs_time_sweep_edge_adjust2_off } {
A Boolean option which, when @C { true }, causes edge adjustment 2 to be
turned off, by passing @C { true } to @C { KheResourceMatchingSolverSolve }
for @C { edge_adjust2_off }.
}

@DTI { @F rs_time_sweep_edge_adjust3_off } {
A Boolean option which, when @C { true }, causes edge adjustment 3 to be
turned off, by passing @C { true } to @C { KheResourceMatchingSolverSolve }
for @C { edge_adjust3_off }.
}

@DTI { @F rs_time_sweep_edge_adjust4_off } {
A Boolean option which, when @C { true }, causes edge adjustment 4 to be
turned off, by passing @C { true } to @C { KheResourceMatchingSolverSolve }
for @C { edge_adjust4_off }.
}

@DTI { @F rs_time_sweep_ejection_off } {
A Boolean option which, when @C { true }, causes ejection chain repair
to be turned off, by passing @C { true } to
@C { KheResourceMatchingSolverSolve } for @C { ejection_off }.
}

# @DTI { @F rs_time_sweep_nocost_off } {
# A Boolean option which, when @C { true }, includes tasks for which
# non-assignment has no cost in the sweep, by passing @C { true } to
# @C { KheResourceMatchingSolverSolve } for @C { nocost_off }.
# }

@DTI { @F rs_time_sweep_lookahead } {
An integer option which, when it has a positive value @M { k },
causes time sweep to look ahead @M { k } time groups when
calculating edge costs.  A full description appears below
(Section {@NumberOf resource_solvers.matching.lookahead}).
The default value, 0, produces no lookahead.
}

@DTI { @F rs_time_sweep_preserve_existing_off } {
A Boolean option which, when @C { true }, causes existing assignments
not to be preserved, by passing @C { false } to
@C { KheResourceMatchingSolverSolve } for @C { preserve_existing }.
}

@DTI { @F rs_time_sweep_cutoff_off } {
A Boolean option which, when @C { true }, causes cutoff times to be
omitted.  When @C { false }, cutoff times are installed in all cluster
busy times and limit active intervals monitors for the resources of
@C { rg }, making them ignore all time groups after the largest time of
the current time group.  Cluster busy times monitors that request their
resources to be busy at specific times, as reported by
@C { KheMonitorRequestsSpecificBusyTimes }
(Section {@NumberOf resource_solvers.assignment.requested}), are excepted:
they are not cut off.  Cutoff times are removed after the last time group.
}

@DTI { @F rs_time_sweep_redo_off } {
A Boolean option which, when @C { true }, causes redoing to be
omitted.  When @C { false }, after the last time group is assigned,
the algorithm returns to the first time group and reassigns it using
resource matching with the same options.  The result may be different,
because the following time groups are assigned now, and there are no
cutoffs.  It sweeps through all the time groups in this way.  At the
end, it checks whether the cost improved, and if so it does another
redo sweep, continuing until a complete redo sweep has no effect on cost.
}

@DTI { @F rs_time_sweep_rematch_off } {
A Boolean option which, when @C { true }, causes rematching to be
omitted.  When @C { false }, after each time group is assigned during
the initial sweep, the most recently assigned 2, 3, and so on up to
@F { rs_time_sweep_rematch_max_groups } time groups are reassigned,
using resource matching with the same options.  This rematching is
omitted during redoing.
}

@DTI { @F rs_time_sweep_rematch_max_groups } {
The maximum number of time groups rematched (see just above).  The
default value is 7.
}

@DTI { @F rs_time_sweep_two_phase } {
A Boolean option which, when @C { true }, causes time sweep to run
twice.  The first run assigns the resources of @C { rg } with the largest
workload limits according to @C { KheClassifyResourcesByWorkload }
(Section {@NumberOf resource_structural.classify_by_workload}).
The second run assigns the rest.
}

@EndList
On one instance, cutoff times and redoing had a very significant
effect.  Without redoing, cutoff times reduced final cost from 185
to 149.  With redoing, they reduced final cost from 95 to 72.  Edge
adjustment produced mixed results.  Rematching during time sweep also
produced mixed results, reducing one solution cost by 40 (from 107 to
67), but increasing another by 20.
@End @SubSection

@SubSection
    @Title { Time sweep with lookahead }
    @Tag { resource_solvers.matching.lookahead }
@Begin
@LP
If option @F rs_time_sweep_lookahead has value @M { k > 0 },
@C { KheTimeSweepAssignResources } looks ahead @M { k } time groups when
calculating edge costs, as follows.
@PP
Lookahead is similar to combinatorial grouping
(Section {@NumberOf resource_structural.constraints.combinatorial}).
Suppose that while we are matching time group @M { i } we need to
determine the cost of the edge @M { e } that connects task @M { t } to
resource @M { r }.  Set the cutoffs of resource monitors so that
they monitor everything up to and including time group @M { i + k }.
Detach all assign resource and limit resources monitors that monitor
tasks running during time groups after time group @M { i }.  Then try
all combinations of assignments which include assigning @M { t } to
resource @M { r } during time group @M { i }, and assigning any task
(in fact, the first task in each demand node, since the others are
equivalent) or nothing during time groups @M { i + 1 `` ,..., `` i + k }.
Take the minimum of these values and use it as the cost of @M { e },
plus edge adjustments as usual.
@PP
The point of including resource monitors up to time group @M { i + k }
is to include the cost of the minimum-cost combination for the
resource in the edge cost.  The point of excluding assign resource
and limit resources monitors after time group @M { i } is that if
some combination leaves some event resource unassigned, that does not
matter because some other resource might eventually be assigned to it.
For limit resources monitors it would be ideal to `detach' any minimum
limit but leave any maximum limit `attached'; but we don't do that.
In any case a maximum limit will be at least 1 in practice, and
we are only assigning one resource.
# @PP
# Without lookahead, the cost @M { C } of @M { e } is the cost of the
# current solution altered so that @M { t } is assigned @M { r }, plus
# edge adjustments.  With lookahead, the first step is to calculate
# @M { C }, the cost without lookahead, as before.  At the same time,
# @M { c }, the total cost of all monitors of resource constraints
# involving @M { r }, is calculated.  Then, for each combination
# @M { i } of one assignment or non-assignment of a task to @M { r } on
# each of the @M { k } subsequent days, the total cost @M { c sub i } of
# the monitors of resource constraints involving @M { r } is calculated.
# The edge cost is then changed from @M { C } to @M { C - c + c sub m },
# where @M { c sub m } is the minimum of the @M { c sub i }.
# @PP
# This value is not a solution cost:  it includes the resource costs of
# certain assignments but not their event resource costs.  But it is a
# good estimate of the true cost of choosing to assign @M { r } to
# @M { t }.  For example, if @M { t } should really begin a sequence of
# assignments, but @M { r } wants to be free the next day, the cost of
# either not having the sequence or not allowing @M { r } to be free
# will be included.  When @M { k = 0 }, there is just one combination
# (the empty sequence of assignments), its cost is @M { c }, and so
# the edge costs reduce to the costs without lookahead.
# @PP
# When monitor cutoffs are in use, the cutoffs for monitors of @M { r }
# are moved forward @M { k } time groups while the @M { c sub i } are
# being calculated, and moved back again afterwards.
@PP
To support lookahead, a variant of
@C { KheResourceMatchingSolverSolve } is offered:
@ID @C {
bool KheResourceMatchingSolverSolveWithLookahead(
  KHE_RESOURCE_MATCHING_SOLVER rms,
  ARRAY_KHE_RESOURCE_MATCHING_DEMAND_SET *rmds_array,
  int first_index, int last_index, bool edge_adjust1_off,
  bool edge_adjust2_off, bool edge_adjust3_off,
  bool edge_adjust4_off, bool ejection_off, KHE_OPTIONS options);
}
The matched demand set is in @C { *rmds_array } at @C { first_index }.
The lookahead demand sets follow, ending at @C { last_index }.  So
@C { last_index == first_index } means no lookahead,
@C { last_index == first_index + 1 } means one day's
worth, and so on.
The other parameters are as for @C { KheResourceMatchingSolverSolve }.
@C { ARRAY_KHE_RESOURCE_MATCHING_DEMAND_SET } is defined alongside
@C { KHE_RESOURCE_MATCHING_DEMAND_SET } in @C { khe_solvers.h }.
@End @SubSection

@SubSection
    @Title { Resource rematching repair }
    @Tag { resource_solvers.matching.rematch }
@Begin
@LP
@I { Resource rematching } repairs a solution using resource
matching.  KHE's function for this is
@ID @C {
bool KheResourceRematch(KHE_SOLN soln, KHE_RESOURCE_GROUP rg,
  KHE_OPTIONS options, int variant);
}
It creates a resource matching solver for @C { soln } and @C { rg }
and calls it on many sets of times.
@PP
Parameter @C { variant } may be any integer; changing its value
changes the behaviour in some way.  At present, depending on whether it
is odd or even, the time sets rematched are traversed in forward
or reverse order.  This can be significant, especially when a time
limit prevents all of them from being visited.
@PP
@C { KheResourceRematch } is influenced indirectly by the resource
matching solver options, and directly by these options:
@TaggedList

@DTI { @F rs_rematch_off }
{
A Boolean option which, when @C { true }, causes @C { KheResourceRematch }
to do nothing.
}

@DTI { @F rs_rematch_select }
{
This determines how @C { KheResourceRematch } selects sets of times for
solving.  Its values are @C { "none" }, @C { "defective_tasks" },
@C { "frame" }, @C { "intervals" }, and @C { "auto" }, for which see below.
}

@DTI { @F rs_rematch_max_groups }
{
An integer option which instructs @C { KheResourceRematch } to
try sequences of adjacent time groups of length 1, 2, and so on
up to its value.  Its default value is 7.  It is only consulted
when @F rs_rematch_select is @C { "frame" } or @C { "intervals" }.
}

# @DTI { @F rs_rematch_matching_off } {
# A Boolean option which, when @C { true }, causes a constructive
# heuristic to be used instead of matching, by passing @C { true }
# to @C { KheResourceMatchingSolverSolve } for @C { matching_off }.
# }

@DTI { @F rs_rematch_edge_adjust1_off } {
A Boolean option which, when @C { true }, causes edge adjustment 1 to be
turned off, by passing @C { true } to @C { KheResourceMatchingSolverSolve }
for @C { edge_adjust1_off }.
}

@DTI { @F rs_rematch_edge_adjust2_off } {
A Boolean option which, when @C { true }, causes edge adjustment 2 to be
turned off, by passing @C { true } to @C { KheResourceMatchingSolverSolve }
for @C { edge_adjust2_off }.
}

@DTI { @F rs_rematch_edge_adjust3_off } {
A Boolean option which, when @C { true }, causes edge adjustment 3 to be
turned off, by passing @C { true } to @C { KheResourceMatchingSolverSolve }
for @C { edge_adjust3_off }.
}

@DTI { @F rs_rematch_edge_adjust4_off } {
A Boolean option which, when @C { true }, causes edge adjustment 4 to be
turned off, by passing @C { true } to @C { KheResourceMatchingSolverSolve }
for @C { edge_adjust4_off }.
}

@DTI { @F rs_rematch_ejection_off } {
A Boolean option which, when @C { true }, causes ejection chain repair
to be turned off, by passing @C { true } to
@C { KheResourceMatchingSolverSolve } for @C { ejection_off }.
}

# @DTI { @F rs_rematch_nocost_off } {
# A Boolean option which, when @C { true }, includes tasks for which
# non-assignment has no cost in the rematch, by passing @C { true } to
# @C { KheResourceMatchingSolverSolve } for @C { nocost_off }.
# }

@EndList
The choices for @F rs_rematch_select are as follows.  In each
case, a set of times may be selected several times over, but
each distinct set is solved only once.  As explained above
at the end of the introduction to resource matching, when the
selected tasks are initially assigned (as is assumed here),
tasks which share a resource initially will share one finally.
@PP
If @F rs_rematch_select is @C { "none" }, rematching is turned off,
like @F { rs_rematch_off }.
@PP
If @F rs_rematch_select is @C { "defective_tasks" }, sets of times
suited to repairing high school timetables are selected.  Find the
first tasking of @C { soln } whose resource type is the resource
type of @C { rg }.  For each task @M { t } of that tasking which
is unassigned or assigned a resource from @C { rg }, and which is
defective (unassigned, assigned an unpreferred resource, part of
a split assignment, or involved in a clash), make one set of times
equal to the set of times that @M { t } is running, including the
times of all tasks connected with @M { t } by assignments not
involving a cycle task.
@PP
If @F rs_rematch_select is @C { "frame" }, sets of times suitable for
repairing nurse rostering timetables are selected.  For each index in
the common frame (Section {@NumberOf extras.frames}), the time group
at that index, plus @M { m-1 } immediately following time groups, are
united to form one of the sets of times.  There is one set for each
value of @M { m } between 1 and @C { rs_rematch_max_groups } inclusive.
@PP
If @F rs_rematch_select is @C { "intervals" }, then for each limit
active intervals constraint in the instance, for each index into the
sequence of time groups of that constraint, the time group at that
index, plus @M { m-1 } immediately following time groups, are united
to form one of the sets of times.  There is one set for each value
of @M { m } between 1 and @C { rs_rematch_max_groups } inclusive.
To these are added the sets of times solved when @F rs_rematch_select
is @C { "frame" }.
@PP
Finally, if @F rs_rematch_select is @C { "auto" } (the default
value), then @C { "defective_tasks" } is chosen when the model
is high school timetabling, otherwise @C { "frame" } is chosen.
The author had high hopes for @C { "intervals" }, but his tests
showed an improvement in only one instance, from 107 to 105,
which did not justify the increased running time, averaging
one or two seconds.
@End @SubSection

@EndSubSections
@End @Section

@Section
    @Title { Ejection chain repair }
    @Tag { resource_solvers.ejection }
@Begin
@LP
Function
@ID @C {
bool KheEjectionChainRepairResources(KHE_TASKING tasking,
  KHE_OPTIONS options);
}
uses ejection chains (Chapter {@NumberOf ejection}) to improve the
solution by changing the assignments of the tasks of @C { tasking }.
It is influenced by many options, including
@TaggedList

@DTI { @F rs_eject_off } {
A Boolean option which, when @C { true }, causes this
function to do nothing.
}

@EndList
For full details, consult Section {@NumberOf ejection.repair}.
@End @Section

@Section
    @Title { Resource pair repair }
    @Tag { resource_solvers.pair }
@Begin
@LP
One idea for repairing resource assignments is to unassign all tasks
assigned to two resources, then try to reassign those tasks to the
same two resources in a better way---an example of very large-scale
neighbourhood (VLSN) search @Cite { $ahuja2002, $meyers2007 }.  The
search space, although formally exponential in size, is often small
enough to search completely, giving an optimal result.
@PP
This section is devoted to function @C { KheResourcePairReassign },
which carries out this idea while trying to save time by detecting
symmetries.  Section {@NumberOf resource_solvers.reassign} offers
another way of reassigning resources.  It does not detect symmetries,
but it is more general in several respects.
@BeginSubSections

@SubSection
    @Title { The basic function }
    @Tag { resource_solvers.pair.basic }
@Begin
@LP
The basic function for carrying out this kind of repair is
@ID @C {
bool KheResourcePairReassign(KHE_SOLN soln, KHE_RESOURCE r1,
  KHE_RESOURCE r2, bool resource_invariant, bool fix_splits);
}
It knows that when one task is assigned to another, the two tasks
must be assigned the same resource; and it believes that tasks that
overlap in time must be assigned different resources.  It does not
change task domains, fixed assignments, or assignments of tasks to
non-cycle tasks.  If it can find a reassignment to @C { r1 } and
@C { r2 } of the tasks currently assigned to @C { r1 } and @C { r2 }
which satisfies these conditions and gives @C { soln } a lower cost,
it makes it and returns @C { true }; otherwise it changes nothing and
returns @C { false }.  If @C { resource_invariant } is @C { true }, only
changes that preserve the resource assignment invariant are allowed.
@C { KheResourcePairReassign } accepts any resources, but it is most
likely to succeed on resources with similar capabilities that are
involved in defects.
@PP
If @C { fix_splits } is @C { true }, the algorithm focuses on repairing
split assignments, by forcing tasks unassigned by the algorithm which
are linked by avoid split assignments constraints of non-zero cost to
be assigned the same resource in the reassignment.  This runs faster,
because it has fewer choices to try, but it may overlook other kinds
of improvements.
@PP
Within the set of tasks assigned to @C { r1 } and @C { r2 } originally,
there may be subsets which are not assignable to two resources
without introducing clashes.  Clashes in the original assignments
can cause this, as can split assignments when @C { fix_splits } is
set.  Such subsets are ignored by @C { KheResourcePairReassign };
their original assignments are left unchanged.
@End @SubSection

@SubSection
    @Title { A resource pair solver }
    @Tag { resource_solvers.pair.solver }
@Begin
@LP
Resource solver
@ID @C {
bool KheResourcePairRepair(KHE_TASKING tasking, KHE_OPTIONS options);
}
calls @C { KheResourcePairReassign } for many pairs of resources.  The
@C { resource_invariant } arguments of all these calls are set to the
@C { rs_invariant } option of @C { options }.  Two other options
control the behaviour of @C { KheResourcePairRepair }:
@TaggedList

@DTI { @F rs_pair_off } {
A Boolean option which, when @C { true }, turns resource pair
repair off.
}

@DTI { @F rs_pair_select } {
This option determines which pairs of resources are tried.  If it
is @C { "none" }, no pairs are tried, giving another way to turn
this repair off.  If it is @C { "splits" } (the default),
then for all pairs of resources involved in all split
assignments of @C { tasking }, @C { KheResourcePairRepair }
calls @C { KheResourcePairReassign } for those two resources, with
the @C { fix_splits } parameter set to @C { true }.  This focuses
the solver on repairing split assignments.  If it is @C { "partitions" },
then @C { KheResourcePairRepair } calls @C { KheResourcePairReassign }
for each pair of resources in each partition of the resource type of
@C { tasking }, or in all resource types if @C { tasking } has no
resource type, with @C { fix_splits } set to @C { false }.
Each resource type with no partitions is treated as though all resources
lie in a single shared partition.  This focuses the solver on
improving resources' assignments generally.  However, the search space
is often larger, increasing the chance that the search will be cut
short, losing optimality.  Value @C { "all" } is the same as
@C { "partitions" } except that partitions are ignored,
so that there is a call on @C { KheResourcePairReassign } for every
pair of distinct resources of the types involved.
}

@EndList
@C { KheResourcePairRepair } collects statistics about its calls to
@C { KheResourcePairReassign }, held in the @C { rs_pair_calls },
@C { rs_pair_successes }, and @C { rs_pair_truncs } options.  Each
time @C { KheResourcePairReassign } is called, @C { rs_pair_calls } is
incremented.  Each time it returns @C { true }, @C { rs_pair_successes }
is incremented.  And each time it truncates an overlong search (at most
once per call), @C { rs_pair_truncs } is incremented.  The caller must
initialize and retrieve these options at the right moments, using the usual
options functions (Section {@NumberOf general_solvers.options}).
@End @SubSection

@SubSection
    @Title { Partition graphs }
    @Tag { resource_solvers.pair.partition }
@Begin
@LP
Resource pair repair is essentially about two-colouring a clash
graph whose nodes are tasks and whose edges join pairs of tasks
that overlap in time.  Although the basic idea is simple enough,
the details become quite complicated, especially when optimizing
by removing symmetries in the search.  It has proved convenient to
build on a separate @I { partition graph } module, which is the
subject of this section.  It finds the connected components of a
graph (called @I { components } here), and, if requested,
partitions components into two @I { parts } by two-colouring them.
@PP
The module stores a graph whose nodes are represented by values
of type @C { void * }.  There are operations for creating a graph
in a given arena, adding nodes to it, and visiting those nodes:
@ID @C {
KHE_PART_GRAPH KhePartGraphMake(KHE_PART_GRAPH_REL_FN rel_fn,
  HA_ARENA a);
void KhePartGraphAddNode(KHE_PART_GRAPH graph, void *node);
int KhePartGraphNodeCount(KHE_PART_GRAPH graph);
void *KhePartGraphNode(KHE_PART_GRAPH graph, int i);
}
Deleting the arena deletes the graph, including its components and
parts, but not its nodes.  These functions and the others in this
section are declared in include file @C { khe_part_graph.h }.
@PP
To define the edges, the user passes in a @I { relation function }
of type @C { KHE_PART_GRAPH_REL_FN } which the module calls back
whenever it needs to know whether two nodes are connected by an
edge.  As the user would define it, this function looks like this:
@ID @C {
KHE_PART_GRAPH_REL RelationFn(void *node1, void *node2)
{
  ...
}
}
where type @C { KHE_PART_GRAPH_REL } is
@ID @C {
typedef enum {
  KHE_PART_GRAPH_UNRELATED,
  KHE_PART_GRAPH_DIFFERENT,
  KHE_PART_GRAPH_SAME
} KHE_PART_GRAPH_REL;
}
Values @C { KHE_PART_GRAPH_UNRELATED } and @C { KHE_PART_GRAPH_DIFFERENT }
are the usual options for clash graphs, the first saying that there is
no edge between the two nodes, the second that there is an edge which
requires the two nodes to be coloured with different colours.  The
third value, @C { KHE_PART_GRAPH_SAME }, says that the two nodes must
be coloured the same colour.  It is used, for example, when the two
nodes represent tasks which are linked by an avoid split assignments
constraint, and the @C { fix_splits } option is in force.
@PP
After all nodes have been added, the user may call
@ID @C {
void KhePartGraphFindConnectedComponents(KHE_PART_GRAPH graph);
}
to find the connected components, which may then be visited by
@ID {0.95 1.0} @Scale @C {
int KhePartGraphComponentCount(KHE_PART_GRAPH graph);
KHE_PART_GRAPH_COMPONENT KhePartGraphComponent(KHE_PART_GRAPH graph, int i);
}
The graph that a component is a component of may be found by
@ID {0.98 1.0} @Scale @C {
KHE_PART_GRAPH KhePartGraphComponentGraph(KHE_PART_GRAPH_COMPONENT comp);
}
and the nodes of a component may be visited by
@ID @C {
int KhePartGraphComponentNodeCount(KHE_PART_GRAPH_COMPONENT comp);
void *KhePartGraphComponentNode(KHE_PART_GRAPH_COMPONENT comp, int i);
}
@C { KhePartGraphFindConnectedComponents } considers two nodes to
be connected when @C { rel_fn } returns @C { KHE_PART_GRAPH_SAME }
or @C { KHE_PART_GRAPH_DIFFERENT } when passed those nodes.
@PP
If requested, the module will partition the nodes of a component
into two sets, such that two-colouring the component will give
the nodes in one set one colour, and the nodes in the other set
the other colour.  This gives exactly two ways to two-colour the
component, which is all there are, since once a colour is assigned
to one node, its neighbours must be assigned the other colour, their
neighbours must be assigned the first colour, and so on.  To carry
out this partitioning, call
@ID @C {
void KhePartGraphComponentFindParts(KHE_PART_GRAPH_COMPONENT comp);
}
After that, to retrieve the two parts, call
@ID @C {
bool KhePartGraphComponentParts(KHE_PART_GRAPH_COMPONENT comp,
  KHE_PART_GRAPH_PART *part1, KHE_PART_GRAPH_PART *part2);
}
If @C { KhePartGraphComponentFindParts } was able to partition
the component into two parts, @C { KhePartGraphComponentParts }
returns @C { true } and sets @C { *part1 } and @C { *part2 }
to non-@C { NULL } values; otherwise it returns @C { false }
and sets them to @C { NULL }.  To find a part's enclosing component,
call
@ID @C {
KHE_PART_GRAPH_COMPONENT KhePartGraphPartComponent(
  KHE_PART_GRAPH_PART part);
}
The nodes of a part may be visited by
@ID @C {
int KhePartGraphPartNodeCount(KHE_PART_GRAPH_PART part);
void *KhePartGraphPartNode(KHE_PART_GRAPH_PART part, int i);
}
as usual.
@End @SubSection

@SubSection
    @Title { The implementation of resource pair reassignment }
    @Tag { resource_solvers.pair.implementation }
@Begin
@LP
This section describes the implementation of @C { KheResourcePairReassign }.
It builds two partition graphs altogether, a @I { first graph } which
does the basic analysis, and a @I { second graph } which is used to
find and remove symmetries in the first graph.
@PP
The same node type is used in both graphs.  A node holds a set of
tasks.  A resource is @I { assignable to a node } when it is assignable
to each task of the node.  A resource is assignable to a fixed task
when it is assigned to that task (fixed tasks are never unassigned).
A resource is assignable to an unfixed task when it lies in the
domain of that task.  It is possible for neither, one, or both
resources to be assignable to a node.  If neither is assignable,
the node is @I { unassignable }, otherwise it is @I { assignable }.
@PP
When a resource is assignable to a node, there are operations for
assigning and unassigning it.  To assign it, assign it to each
unfixed task of the node.  To unassign it, unassign it from each
unfixed task of the node.
@PP
The first graph contains one node for each task initially assigned
@C { r1 } or @C { r2 }, containing just that task.  Thus, in the
first graph there are no unassignable nodes.  Given two nodes, the
first graph's relation function first checks which resources are
assignable to each.  If there is no way to assign the same resource
to both nodes, it returns @C { KHE_PART_GRAPH_DIFFERENT }.
Otherwise, if there is no way to assign different resources to the
nodes, it returns @C { KHE_PART_GRAPH_SAME }.  Otherwise, if
@C { fix_splits } is @C { true } and the two nodes share an
avoid split assignments monitor of non-zero cost, it returns
@C { KHE_PART_GRAPH_SAME }.  Otherwise, if the two nodes overlap
in time, it returns @C { KHE_PART_GRAPH_DIFFERENT }.  Otherwise
it returns @C { KHE_PART_GRAPH_UNRELATED }.
@PP
Next, the graph's connected components are found and partitioned.
It is easy to see, referring to the relation function, that if a
component was successfully partitioned there must be at least one
way (and possibly two ways) to assign @C { r1 } to the nodes of
one part and @C { r2 } to the nodes of the other part.  So a
component of the first graph is called @I { assignable } if it
was successfully partitioned, and @I { unassignable } otherwise.
@PP
For each assignable component, the nodes of one part are merged
into one node, and the nodes of the other are merged into a second
node.  These two nodes are assignable to different resources in one
or two ways.  For each unassignable component, all the nodes are
merged into a single node.  It does not matter whether this node
is assignable or not; it is never assigned.
@PP
Next, the assignable components are sorted into increasing order
of number of possible assignments.  Each of the @M { C } assignable
components has 1 or 2 possible assignments.  A tree search is carried
out which tries each of these on each component in turn.  The total
search space size is at most @M { 2 sup C }.  This is often small
enough to search completely.  For safety, the search only explores
both assignments until 512 tree nodes have been visited; after that
it tries only one assignment for each component.  In the usual way,
each time the tree search reaches a leaf it compares its solution
cost with the best so far, and if it is better (and if the resource
assignment invariant is preserved, if required) it takes a copy of
its decisions.  At the end, the cost of the best solution found is
compared with the initial solution cost, and if the best solution
is better it is installed; otherwise the initial solution is restored.
@PP
The search space often has symmetries which, if not removed, would
waste time and cause the node limit to be reached often enough to
compromise optimality in practice.  The rest of
this section describes them and how @C { KheResourcePairReassign }
removes them.
@PP
Suppose @C { r1 } and @C { r2 } are Mathematics teachers assigned to
two Mathematics courses from the same form, each split into 4
meets of the same durations, running simultaneously.  This gives 4
components and a search space of size @M { 2 sup 4 }, yet clearly
this could be reduced safely to 1.  If two of the simultaneous meets
are made not simultaneous, the search space size can still be reduced
safely, to 2.  If @C { fix_splits } is @C { true }, each set of 4
meets is related, making 1 component and a search space of size
2---still unnecessarily large when the meets are simultaneous.
@PP
A component is @I { symmetrical } if it makes no difference which
of its two assignments is chosen.  In that case, its assignment
choices can be reduced from 2 to 1 by arbitrarily removing one,
halving the search space size.  But note the complicating factor
in the Mathematics example:  one cannot arbitrarily remove one
choice from each component, because some combinations of choices
lead to split assignments and others do not.  Instead, a way must
be found to first merge the four components into one, which can
then be assigned arbitrarily.
@PP
Symmetry arises when the two assignment choices of a component
affect monitors in the same way.  They need to have the same
effect on the state of monitors, so that no difference arises
when the monitors change state again later in response to
changes outside the component.
@PP
The two choices always have the same effect on the state of event
monitors (no effect at all), and on the state of assign resource
monitors, which care only whether tasks are assigned resources,
not which resources.  As far as these kinds of monitors are
concerned, all components are symmetrical.  Classify the remaining
monitors into three groups:  resource monitors, prefer resources
monitors, and avoid split assignments monitors.  (This description
was written before the advent of limit resources monitors, and
does not take them into account.)
@PP
A component is @I { r-symmetrical }, @I { p-symmetrical }, or
@I { s-symmetrical } when it is assignable both ways and they
affect in the same way all resource, prefer resources, or avoid
split assignments monitors that monitor tasks of the component.
(In particular, if there are no monitors of some type, the
component is vacuously symmetrical in that type.)  Combinations
of prefixes denote conjunctions of these conditions.  For example,
@I { symmetrical } is shorthand for @I { rps-symmetrical }.
@PP
Although these definitions are clear in principle, they are rather
abstract.  An algorithm needs concrete, easily computable conditions
that imply the abstract ones and are likely to hold in practice.  Here
are the concrete conditions used by @C { KheResourcePairReassign },
assuming that the component is assignable both ways.
@PP
Suppose that some component's two parts run at the same times and have
the same total workload.  Then the component is r-symmetrical, because
apart from clashes, busy times and workload are the only things that
affect resource monitors; and the component's assignments have no
clashes in themselves, while, since the two parts run at the same
times, they have the same clashes with tasks outside the component.
@PP
Suppose that, for every prefer resources monitor of non-zero cost
which monitors any task of some component, either @C { r1 } and
@C { r2 } are both preferred by the monitor's constraint, or they
are both not preferred.  Then the component is p-symmetrical.
@PP
Suppose that, for each task in some component @M { c } which is
monitored by an avoid split assignments monitor of non-zero cost,
every task monitored by that monitor either was not assigned
@C { r1 } or @C { r2 } originally, or else it lies in @M { c }.
Then the component is s-symmetrical.
@PP
To prove this, take one avoid split assignments monitor, and partition
the set of tasks monitored by it into those that were not assigned
@C { r1 } or @C { r2 } originally, and so are beyond the scope of the
reassignment (call them @M { S sub 1 }), and those that were (call
them @M { S sub 2 }).  If the tasks of @M { S sub 2 } lie within two
or more components, then which way those components are assigned does
matter.  But if they lie within one component, then the cost of the
monitor will be the same whichever assignment is chosen.  This is
because @C { r1 } and @C { r2 } do not appear among the resources
assigned to the tasks of @M { S sub 1 } (if they did, those tasks
would be in @M { S sub 2 }), so the assignments to @M { S sub 2 }
introduce fresh resources to the monitor.  If all the tasks of
@M { S sub 2 } lie in one part of the component, one fresh resource
is introduced by both assignments; if some lie in one part and the
others in the other, two fresh resources are introduced by both
assignments.  Either way, the effect on the monitor is the same.
@PP
When @C { fix_splits } is @C { true }, all tasks which share an
avoid split assignments monitor lie in the same part, and hence in
the same component, so every component is s-symmetrical in that case.
@PP
It is easy to check whether a component is rp-symmetrical.  This
is done as each component is partitioned.  Merely checking for
s-symmetry is not enough:  as illustrated by the Mathematics
example, several components may need to be merged (by merging
their parts) to produce one s-symmetrical component.  This is
done using the second partitioning graph, as follows.
@PP
The second-graph nodes are the merged nodes from the first-graph
components.  When two nodes come from the same first-graph component,
@C { KHE_PART_GRAPH_DIFFERENT } is returned by the relation function.
Otherwise, if they share an avoid split assignments monitor of non-zero
cost, it returns @C { KHE_PART_GRAPH_SAME }.  Otherwise it returns
@C { KHE_PART_GRAPH_UNRELATED }.
@PP
Two nodes representing the two parts of a first-graph component must
lie in the same second-graph component, because there is an edge
between them.  So each second-graph component is a set of first-graph
components linked by avoid split assignments constraints.
@PP
For each second-graph component, its first-graph components may be
merged if it does not contain an unassignable first-graph component, at
most one of its first-graph components is not rp-symmetrical, and it is
partitionable.  The two nodes of the merged component are built by
merging the nodes of each part of the second-graph component.  If all
the first-graph components being merged are rp-symmetrical, the resulting
component is rps-symmetrical, so either one of its assignments may be
removed.  But component merges are valuable even without rps-symmetry.
@End @SubSection

#@SubSection
#    @Title { A simpler resource pair repair }
#    @Tag { resource_solvers.pair.simple }
#@Begin
#@LP
#This section describes @C { KheResourcePairSimpleReassign }, a
#simpler resource pair repair function, which is suited to nurse
#rostering applications.  It does not attempt to find symmetries,
#which are rare in instances whose events all have duration 1.
#Instead it offers a way of limiting the search space to just
#one segment of the timetables of the two resources:
#@ID @C {
#bool KheResourcePairSimpleReassign(KHE_SOLN soln,
#  KHE_RESOURCE r1, KHE_RESOURCE r2, KHE_FRAME frame, int fi, int li,
#  bool resource_invariant, int max_assignments);
#}
##bool KheResourcePairSimpleReassign(KHE_SOLN soln, KHE_RESOURCE r1,
##  KHE_RESOURCE r2, KHE_TIME_PAR TITION tp, int first_part_index,
##  int last_part_index, bool resource_invariant, int max_assignments);
#It tries all combinations of reassignments of @C { r1 } and @C { r2 }
#to the tasks assigned to them in @C { frame } between indexes @C { fi }
#and @C { li } inclusive.  If @C { resource_invariant } is @C { true },
#only assignments satisfying the resource invariant are acceptable.
#@PP
#Each resource is assigned to at most one task per time group of
#@C { frame }, so there are two choices at each time group (or 1
#when neither resource is assigned), and the size of the search
#space is at most 2 to the power @C { KheFrameTimeGroupCount(frame) }.
#However, the function stops after making @C { max_assignments }
#assignments in total.  If it finds an improvement, it changes
#@C { soln } to it and returns @C { true }, otherwise it keeps
#the original assignments and returns @C { false }.
#@PP
#A convenient way to invoke @C { KheResourcePairSimpleReassign }
#repeatedly is
#@ID @C {
#bool KheResourcePairSimpleRepair(KHE_SOLN soln, KHE_OPTIONS options);
#}
#It returns @C { true } if any of its calls to
#@C { KheResourcePairSimpleReassign} return true.  It obtains @C { frame }
#and @F { resource_invariant } from @C { options }.  The following
#options are also consulted and determine the other parameters:
#@TaggedList
#
#@DTI { @F rs_pair_off } {
#A Boolean option which, when @C { true }, turns resource simple
#pair repair off.
#}
#
#@DTI { @F rs_pair_select } {
#This option determines which pairs of resources are tried.  Its value
#may be @C { "none" }, meaning that no pairs are tried, giving another
#way to turn this repair off; or @C { "all" }, meaning that for each
#resource type, all pairs are tried; or @C { "adjacent" }, meaning
#that each adjacent pair of resources (0 and 1, 1 and 2, 2 and 3,
#and so on) in each resource type is tried; or @C { "defective" }
#(the default), meaning that for each resource type, all pairs of
#resources for which at least one of the resources has a defective
#resource monitor are tried.
#}
#
#@DTI { @F rs_pair_parts } {
#The value of @C { KheFrameTimeGroupCount(frame) } on each call.
#For example, setting this value to 7 (the default) reassigns one week.
#}
#
#@DTI { @F { rs_pair_start }, @F { rs_pair_increment } } {
#The value of @C { frame }'s start index on the first call, and how
#much it is incremented by on each subsequent call.  The default
#values are 0 and @C { rs_pair_parts }.
#}
#
#@DTI { @F rs_pair_max } {
#The value of @C { max_assignments }.  Its default value is
#1000000, which is fine when @F rs_pair_parts is 7 but may need
#to be reduced when it is larger.
#}
#
#@EndList
#Resource pair repair runs very quickly when the default values are
#used, as would be expected given that the search space has size at
#most @M { 2 sup 7 } per pair.  On tests run by the author it found
#several improvements, enough to justify its modest running time.
#Setting @F rs_pair_parts to 14 gave some further improvement, but
#it also slowed the solves down noticeably.
#@PP
#Another way to invoke @C { KheResourcePairSimpleReassign }
#repeatedly is
#@ID {0.97 1.0} @Scale @C {
#bool KheResourcePairSimpleBusyRepair(KHE_SOLN soln, KHE_OPTIONS options);
#}
#This is like @C { KheResourcePairSimpleRepair } except that it
#makes a different set of calls to @C { KheResourcePairSimpleReassign},
#and its options have different names:  @F { rs_bpair_off },
#@F { rs_bpair_select }, @F { rs_bpair_parts }, @F { rs_bpair_start },
#@F { rs_bpair_increment }, and @F { rs_bpair_max }.  These are like
#the corresponding options of @C { KheResourcePairSimpleRepair },
#except @F { rs_bpair_select } has no effect, and the default
#value of @F { rs_bpair_parts } is 14.
#@PP
#For each resource, @C { KheResourcePairSimpleBusyRepair } obtains its
#number of busy times and maximum busy times from the functions
#documented in Section {@NumberOf solutions.top.avail}.
## @C { KheResourceTimetableMonitorB usyTimes }
## (Section {@NumberOf monitoring_timetables_resource}), and its limit from
## @C { KheFrameResource MaxBusyTimes } (Section {@NumberOf extras.frames}).
#It pairs the most overloaded resource with the most underloaded one, the
#second most overloaded with the second most underloaded, and so on, and
#calls @C { KheResourcePairSimpleReassign } on each pair.  Each resource
#participates in at most one call to @C { KheResourcePairSimpleReassign },
#so the number of these calls is very much smaller than the number made by
#@C { KheResourcePairSimpleRepair }, and accordingly @F { rs_bpair_parts }
#can reasonably be larger than @F { rs_pair_parts }.
#@PP
#When tested by the author, @C { KheResourcePairSimpleBusyRepair }
#produced very promising pairs, but failed to improve the solution,
#both when @F { rs_bpair_parts } had the default value 14, and when
#it was increased to 21, with @F { rs_bpair_increment } set to 7.
#At that point a small but noticeable amount of time was being
#consumed by some pairs.  Pairing each overloaded resource with
#the most underloaded resource also failed to find any improvement.
#Accordingly, the default value of @F { rs_bpair_off } has been
#set to @C { true }.
#@PP
#Also available are
#@ID @C {
#bool KheResourceTripleSimpleReassign(KHE_SOLN soln,
#  KHE_RESOURCE r1, KHE_RESOURCE r2, KHE_RESOURCE r3, KHE_FRAME frame,
#  int fi, int li, bool resource_invariant, int max_assignments);
#}
#and
#@ID @C {
#bool KheResourceTripleSimpleRepair(KHE_SOLN soln, KHE_OPTIONS options);
#}
#These are just like @C { KheResourcePairSimpleReassign } and
#@C { KheResourcePairSimpleRepair }, only reassigning three resources
#rather than two.  @C { KheResourceTripleSimpleRepair } consults options
#{@F rs_triple_off}, {@F rs_triple_select}, {@F rs_triple_parts},
#{@F rs_triple_start}, {@F rs_triple_increment}, and {@F rs_triple_max}.
#These are like the corresponding options for resource pairs, except
#that {@F rs_triple_select} has default value @C { "none" },
#not @C { "defective" }.
#@PP
#The author undertook one test of resource triple repair.  It ran
#after resource pair repair, but still it found five improvements.
#The search space has size @M { 6 sup 7 } per resource triple when
#{@F rs_triple_parts} is 7.  This is not impossibly large, but
#trying all triples containing at least one defective resource is
#very slow.  The author's test took several hours (the instance
#was @C { INRC1-LH03 }, with about 50 nurses) and convinced him
#that resource triple repair is not suitable for routine use.  It
#may be suitable for repairing a few very bad resources.
#@End @SubSection

#@SubSection
#    @Title { Resource pair run reassignment }
#    @Tag { resource_solvers.pair.run }
#@Begin
#@LP
#This section describes yet another resource pair reassignment
#algorithm, again for nurse rostering.  The basic function is
#@ID @C {
#bool KheResourcePairRunReassign(KHE_SOLN soln, KHE_OPTIONS options,
#  KHE_RESOURCE r1, KHE_RESOURCE r2, int fi, int li,
#  bool resource_invariant, int max_assignments);
#}
#This is similar to earlier functions in that it tries to reassign
#@C { r1 } and @C { r2 } to the tasks currently assigned those
#resources in the range of days @C { fi } to @C { li } so as to
#improve the solution, returning @C { true } if it succeeds.  As
#before, @C { resource_invariant } says whether the resource invariant
#is to apply, and @C { max_assignments } limits the number of
#assignments that may be tried.
#@PP
#The difference is that this function calls @C { KheFindTaskRunRight }
#(Section {@NumberOf resource_structural.task_finding.other})
#repeatedly, to group the tasks assigned a given resource into runs of
#tasks on consecutive days.  Each run is taken to be indivisible, as
#though its tasks were grouped together.  This makes for a smaller
#search space, allowing @C { fi } and @C { li } to cover larger
#intervals, possibly the whole cycle.
#@PP
#Again similarly to earlier functions,
#@ID @C {
#bool KheResourcePairRunRepair(KHE_SOLN soln, KHE_OPTIONS options);
#}
#may be called to make many calls on @C { KheResourcePairRunReassign }
#for different pairs of resources @C { r1 } and @C { r2 }.  There
#is also a time saving because @C { KheResourcePairRunRepair }
#finds the runs for each resource just once, unless one of its
#calls to @C { KheResourcePairRunReassign } is successful, in
#which case it finds the runs for the two resources again.
#@PP
#@C { KheResourcePairRunRepair } consults options {@F rs_rpair_off},
#{@F rs_rpair_select}, {@F rs_rpair_parts}, {@F rs_rpair_start},
#{@F rs_rpair_increment}, and {@F rs_rpair_max}.  These are as
#for @C { KheResourcePairSimpleRepair } above except that the
#default value of @F rs_rpair_parts is 28, much larger than for
#{@F rs_pair_parts}.  This is because @C { fi } and @C { li }
#can reasonably cover much larger intervals when reassigning runs,
#as explained earlier.
#@End @SubSection

#@SubSection
#    @Title { Resource pair swapping }
#    @Tag { resource_solvers.pair.swap }
#@Begin
#@LP
#Yet another way to repair two resources is to
#simply swap their timetables:
#@ID @C {
#bool KheResourcePairSwap(KHE_SOLN soln,
#  KHE_RESOURCE r1, KHE_RESOURCE r2, KHE_FRAME frame, int fi, int li,
#  bool resource_invariant);
#}
#It attempts to move all tasks of @C { soln } lying (partly or wholly)
#between indexes @C { fi } and @C { li } inclusive in @C { frame }
#initially assigned @C { r1 } to @C { r2 } and vice versa.  If
#@C { resource_invariant } is @C { true }, only assignments
#satisfying the resource invariant are acceptable.  If all these
#assignments succeed and the solution cost is reduced,
#@C { KheResourcePairSwap } leaves @C { soln } in the new
#state and returns @C { true }.  Otherwise it leaves @C { soln }
#unchanged and returns @C { false }.
#@PP
#A convenient way to invoke @C { KheResourcePairSwap } repeatedly is
#@ID @C {
#bool KheResourcePairSwapRepair(KHE_SOLN soln, KHE_OPTIONS options);
#}
#It returns @C { true } if any of its calls to @C { KheResourcePairSwap }
#return true.  It obtains @C { frame } and @F { resource_invariant }
#from @C { options }.  The following options are also consulted and
#determine the other parameters:
#@TaggedList
#
#@DTI { @F rs_swap_off } {
#A Boolean option which, when @C { true }, turns resource swap
#repair off.
#}
#
#@DTI { @F rs_swap_select } {
#This option determines which pairs of resources are tried.  Its
#values are the same as those for @C { rs_pair_select }.
#}
#
#@DTI { @F rs_swap_parts } {
#The value of @C { li - fi + 1 } on each call.  For example, setting
#this value to 7 reassigns one week.  The default value is
#@C { KheFrameTimeGroupCount(frame) }, which means that the
#entire timetables of the two resources are swapped.
#}
#
#@DTI { @F { rs_swap_start }, @F { rs_swap_increment } } {
#The value of @C { fi } on the first call, and how much it
#is incremented by on each subsequent call.  The default
#values are 0 and @C { rs_swap_parts }.
#}
#
#@EndList
#@I { report on its effectiveness still to do }
#@End @SubSection

@EndSubSections
@End @Section

#@Section
#    @Title { Resource run reassignment }
#    @Tag { resource_solvers.run }
#@Begin
#@LP
#This section describes a variant of @C { KheResourcePairRunReassign }
#from Section {@NumberOf resource_solvers.pair.run} which in principle
#can optimally reassign the runs of any number of resources.  We say
#`in principle' because as the number of resources increases the running
#time increases dramatically, so that in practice it can only reassign
#3 resources, or 4 at most.
#@PP
#The first step in resource run reassignment is to create a resource
#run solver, by calling
#@ID @C {
#KHE_RESOURCE_RUN_SOLVER KheResourceRunSolverMake(KHE_SOLN soln,
#  KHE_OPTIONS options, bool resource_invariant, int max_assignments);
#}
#Here @C { resource_invariant } says whether to enforce the resource
#invariant; @C { max_assignments } limits the number of alternatives
#tried on each call to @C { KheResourceRunSolverSolve } below.
#@PP
#A solver object may be deleted by calling
#@ID @C {
#void KheResourceRunSolverDelete(KHE_RESOURCE_RUN_SOLVER rrs);
#}
#after resource run solving is completed.  This must also be done if
#any changes to the solution are made other than those carried out by
#@C { rrs }.  This is because @C { rrs } keeps information between
#solves, so it will go wrong if the solution changes in ways that
#it does not know about.
#@PP
#To say which resources are to be involved in the next solve, call
#@ID @C {
#bool KheResourceRunSolverAddResource(KHE_RESOURCE_RUN_SOLVER rrs,
#  KHE_RESOURCE r);
#bool KheResourceRunSolverDeleteResource(KHE_RESOURCE_RUN_SOLVER rrs,
#  KHE_RESOURCE r);
#void KheResourceRunSolverClearResources(KHE_RESOURCE_RUN_SOLVER rrs);
#}
#@C { KheResourceRunSolverAddResource } adds @C { r } to the solver,
#returning @C { false } and changing nothing when @C { r } is already
#present.  @C { KheResourceRunSolverDeleteResource } deletes @C { r },
#returning @C { false } and changing nothing when
#@C { r } is not present.  @C { KheResourceRunSolverClearResources }
#removes all resources from the solver.
#@PP
#To carry out an actual solve, call
#@ID {0.95 1.0} @Scale @C {
#bool KheResourceRunSolverSolve(KHE_RESOURCE_RUN_SOLVER rrs, int fi, int li);
#}
#This optimally reassigns the solver's resources to the tasks assigned
#those resources in the range of days @C { fi } to @C { li } while
#keeping runs together, returning @C { true } if it improves the solution.
#@PP
#If there are @M { k } resources and initially those resources are
#assigned @M { R } runs, then each run could be assigned any one
#of the @M { k } resources, making a search space of size
#@M { k sup R }.  However there is some pruning.  Before starting
#the search, the solver checks the @M { (k-1)R } possible new
#assignments to see which succeed, and does not try failed ones
#again.  Any run that cannot move at all is omitted from the
#search, and runs that overlap with that run have its resource
#removed from their list of resources to try.  And any assignment
#which would give a resource two tasks on the same day is not tried.
#The test for this is carried out efficiently using intervals.
#@PP
#Unavailable times are taken into account when solution costs are
#reported, but they are not taken as a reason to exclude a resource
#from being assigned to a run.  It is not unusual for an optimal
#solution to contain a few assignments of resources to tasks at
#unavailable times.
#@PP
#A convenient way to call @C { KheResourceRunSolverSolve }
#repeatedly is
#@ID @C {
#bool KheResourceRunRepair(KHE_SOLN soln, KHE_RESOURCE_TYPE rt,
#  KHE_OPTIONS options);
#}
#This creates a resource run solver, tries a variety of sets of
#resources of type @C { rt } and intervals @C { fi .. li }, and
#ends by deleting the solver and returning @C { true } if any of
#the solves improved the solution.  @C { KheResourceRunRepair }
#consults these options:
#@TaggedList
#
#@DTI { @F rs_run_off } {
#A Boolean option which, when @C { true }, causes
#@C { KheResourceRunRepair } to do nothing.
#}
#
#@DTI { @F rs_run_resources } {
#This integer option says how many resources from @C { rt } to
#select for each solve.  Values 0 and 1 are useless; 2 is the
#default; and larger values (even 3) can produce long run times.
#}
#
#@DTI { @F rs_run_select } {
#This option determines which sets of resources are tried.  Its value
#may be @C { "none" }, meaning that no sets are tried, giving another
#way to turn @C { KheResourceRunRepair } off; or @C { "all" }, meaning
#that all sets of @C { rs_run_resources } resources of the specified
#type are tried; or @C { "adjacent" }, meaning that each set of
#@C { rs_run_resources } resources which are adjacent to each other
#in the specified type are tried; or @C { "defective" } (the default),
#meaning that all sets of @C { rs_run_resources } resources of the
#specified type for which at least one of the resources has a
#defective resource monitor are tried.
#}
#
#@DTI { @F rs_run_parts } {
#The value of @C { li - fi + 1 } on each call.  For example, setting this
#value to 28 (the default) reassigns four weeks.  If this is larger than
#the total number of days, it is silently reduced to the total number of days.
#}
#
#@DTI { @F { rs_run_start }, @F { rs_run_increment } } {
#The value of @C { fi } on the first call, and how much it is
#incremented by on each subsequent call for the same set of
#resources.  The default values are 0 and @C { rs_run_parts }.
#}
#
#@DTI { @F rs_run_max } {
#The value of @C { max_assignments } on each call.  Its default value
#is 1000000, which is fine for small values of @F rs_run_resources
#and {@F rs_run_parts}, but may need to be reduced as the problem
#size increases.
#}
#
#@EndList
#These options are similar to the options for
#@C { KheResourcePairSimpleRepair } above, except that
#@F rs_run_resources is different and reflects the ability of
#@C { KheResourceRunRepair } to handle any number of resources,
#in principle.
#@End @Section

@Section
    @Title { Resource reassignment }
    @Tag { resource_solvers.reassign }
@Begin
@LP
This section describes an operation called @I { resource reassignment }
which in principle can optimally reassign the tasks assigned to an
arbitrary number of resources.  We say `in principle' because as the
number of resources increases the running time increases dramatically,
so that in practice it can only handle 3 resources, or 4 at most; or
alternatively it can handle all resources, but only when the tasks
are running at a very limited range of times.
@PP
The first step in resource reassignment is to create a
@I { reassign solver }, by calling
@ID @C {
KHE_REASSIGN_SOLVER KheReassignSolverMake(KHE_SOLN soln,
  KHE_RESOURCE_TYPE rt, KHE_OPTIONS options);
}
The solver uses @C { gs_common_frame } and @C { rs_invariant } from
@C { options }.  These values, along with @C { soln } and @C { rt },
are fixed for the lifetime of the solver.
@PP
A solver object may be deleted by calling
@ID @C {
void KheReassignSolverDelete(KHE_REASSIGN_SOLVER rs);
}
This should be done after solving is completed, and also if any changes
to the solution are made other than those carried out by @C { rs }.
This is because @C { rs } keeps information between solves, so it
will go wrong if the solution changes in ways that it does not know about.
@PP
To say which resources are to be involved in the next solve, call
@ID {0.94 1.0} @Scale @C {
bool KheReassignSolverAddResource(KHE_REASSIGN_SOLVER rs, KHE_RESOURCE r);
bool KheReassignSolverDeleteResource(KHE_REASSIGN_SOLVER rs, KHE_RESOURCE r);
void KheReassignSolverClearResources(KHE_REASSIGN_SOLVER rs);
}
@C { KheReassignSolverAddResource } adds @C { r } to the solver, returning
@C { false } and changing nothing when @C { r } is already present.
@C { KheReassignSolverDeleteResource } deletes @C { r }, returning
@C { false } and changing nothing when @C { r } is not present.  Both
abort if @C { r } does not have the resource type @C { rt } passed to
@C { KheReassignSolverMake }.  @C { KheReassignSolverClearResources }
deletes all resources.
@PP
One of the resources may be @C { NULL }.  This causes the solver to
select as many non-overlapping unassigned tasks of type @C { rt } as
it can easily find, and try assigning them, and also unassigning other
tasks, as though unassigned tasks were assigned a resource called
@C { NULL }.  Only unassigned tasks in need of assignment according to
@C { KheTaskNeedsAssignment } (Section {@NumberOf solutions.tasks.asst})
are included.
@PP
To carry out an actual solve, call
@ID @C {
bool KheReassignSolverSolve(KHE_REASSIGN_SOLVER rs, int first_index,
  int last_index, KHE_REASSIGN_GROUPING grouping, bool ignore_partial,
  KHE_REASSIGN_METHOD method, int max_assignments);
}
This optimally reassigns the solver's resources to the tasks assigned
those resources in the @I { target interval } (@C { first_index } to
@C { last_index } inclusive), returning @C { true } if it improves
the solution.
@PP
When deciding whether a task @C { t } lies in the target interval,
@C { t }'s own interval, as returned by @C { KheTaskFinderTaskInterval }
(Section {@NumberOf resource_structural.task_finding.task_finder}),
is used to determine which days it is running.  These include days
when tasks assigned directly or indirectly to @C { t } are running.
@PP
Tasks are organized into groups during the solve, and parameter
@C { grouping } determines how these groups are made.  Two tasks are
only eligible to be in the same group if they are assigned the same
resource (possibly @C { NULL }) initially.  The tasks of each group
are assigned the same resource throughout the solve.  The type of
@C { grouping } is
@ID @C {
typedef enum {
  KHE_REASSIGN_MINIMAL,
  KHE_REASSIGN_RUNS,
  KHE_REASSIGN_MAXIMAL
} KHE_REASSIGN_GROUPING;
}
@C { KHE_REASSIGN_MINIMAL } produces no grouping beyond the initial
grouping of the tasks (which is not disturbed); @C { KHE_REASSIGN_RUNS }
groups sequences of tasks assigned the same resource on adjacent days
(@I { runs }); and @C { KHE_REASSIGN_MAXIMAL } groups all tasks
initially assigned the same resource which participate in the solve.
The meaning of `optimal reassignment' is relative to these groupings;
only @C { KHE_REASSIGN_MINIMAL } produces true optimal reassignment.
@PP
When @C { ignore_partial } is @C { true }, tasks that lie partly
inside and partly outside the target interval are ignored, just
as though they were not there.  When @C { grouping } is
@C { KHE_REASSIGN_RUNS }, this causes some runs to be shorter
than they otherwise would be.
@PP
When @C { ignore_partial } is @C { false }, tasks that lie partly
inside and partly outside the target interval are included in the
solve.  Furthermore, when @C { grouping } is @C { KHE_REASSIGN_RUNS },
tasks that lie entirely outside the target interval are included when
they are part of a run that lies partly within the target interval.
@PP
We say that a group @I { needs assignment } when at least one of its
tasks needs assignment, according to @C { KheTaskNeedsAssignment }
(Section {@NumberOf solutions.tasks.asst}).  If a group does not
need assignment, then, in addition to trying to assign it to the
resources of the solve, @C { KheReassignSolverSolve } will also
try unassigning it.  When @C { grouping } is @C { KHE_REASSIGN_RUNS },
runs are built so as to ensure that all tasks in any given run have
the same value for @C { KheTaskNeedsAssignment }.  As previously
stated, for the @C { NULL } resource this value must be @C { true };
but for non-@C { NULL } resources it may be @C { true } or @C { false }.
@PP
The type of @C { method } is
@ID @C {
typedef enum {
  KHE_REASSIGN_EXHAUSTIVE,
  KHE_REASSIGN_MATCHING
} KHE_REASSIGN_METHOD;
}
It determines the algorithm used for solving:  exhaustive search
or weighted bipartite matching.  The latter is reasonable only
when there is one group of tasks per resource:  when @C { grouping }
is @C { KHE_REASSIGN_MAXIMAL }, or the target interval is narrow.
Parameter @C { max_assignments } limits the number of alternatives
tried on each call to @C { KheReassignSolverSolve } when
@C { method } is @C { KHE_REASSIGN_EXHAUSTIVE }.  It is not
consulted when @C { method } is @C { KHE_REASSIGN_MATCHING }.
@PP
If there are @M { k } resources and initially those resources are
assigned @M { R } groups of tasks, then each group could be assigned
any one of the @M { k } resources, making a search space of size
@M { k sup R } when @C { method } is @C { KHE_REASSIGN_EXHAUSTIVE }.
However there is some pruning.  Before starting the search, the solver
checks the @M { (k-1)R } possible new assignments to see which succeed,
and does not try failed ones again.  Any group that cannot move at all
is omitted from the search, and groups that overlap with that group
have its resource removed from their list of resources to try.  And any
assignment which would give a resource two tasks on the same day is not
tried.  The test for this is carried out efficiently using intervals.
@PP
Unavailable times are taken into account when solution costs are
reported, but they are not taken as a reason to exclude a resource
from being assigned to a group.  It is not unusual for an optimal
solution to contain a few assignments of resources to tasks at
unavailable times.
@PP
Function
@ID @C {
void KheReassignSolverDebug(KHE_REASSIGN_SOLVER rs,
  int verbosity, int indent, FILE *fp);
}
produces a debug print of @C { rs } onto @C { fp } with the
given verbosity and indent.  Between solves there is not much
to display, mainly the resources.
@PP
A convenient way to call @C { KheReassignSolverSolve }
repeatedly is
@ID @C {
bool KheReassignRepair(KHE_SOLN soln, KHE_RESOURCE_TYPE rt,
  KHE_OPTIONS options);
}
This creates a reassign solver, tries a variety of sets of resources
of type @C { rt } and target intervals, and ends by deleting the
solver and returning @C { true } if any of the solves improved the
solution.  @C { KheReassignRepair } consults these options:
@TaggedList

@DTI { @F rs_reassign_resources } {
This integer option says how many non-@C { NULL } resources from
@C { rt } to select for each solve.  The default value is 2.  The
special value @C { "all" } selects all resources of type @C { rt }.  This
is only feasible when @C { method } is @C { KHE_REASSIGN_MATCHING }.
When @C { method } is @C { KHE_REASSIGN_EXHAUSTIVE }, larger
values (sometimes even 3) can produce long run times.
}

@DTI { @F rs_reassign_select } {
All sets of resources tried contain @C { rs_reassign_resources }
non-@C { NULL } resources.  All have type @C { rt }.  This option
determines which of these sets are tried.  Its value may be
@C { "none" } (the default), meaning that no sets are tried,
turning @C { KheReassignRepair } off; @C { "all" }, meaning that
all sets are tried; @C { "adjacent" }, meaning that each set of
resources which are adjacent to each other in @C { rt } are tried;
or @C { "defective" }, meaning that all sets in which at least one
of the resources has a defective resource monitor are tried.
@LP
For experimental use there is also @C { constraint:xxx } where
@C { xxx } stands for any non-empty string.  The cluster busy
times constraints of the instance whose names contain @C { xxx }
are found, and then all sets of resources are selected such that
one resource violates one of these constraints, and the rest are
slack (strictly below the maximum) for all of them.  The hope is
that optimal reassignment might move tasks from the violating
resource to the slack ones.
}

@DTI { @F rs_reassign_null } {
When @C { true }, this Boolean option says to include @C { NULL }
in the set of resources passed to the solver on each call.  This
is in addition to the @F rs_reassign_resources non-@C { NULL }
resources selected by @C { rs_reassign_select }.  The default
value is @C { false } as usual.
}

@DTI { @F rs_reassign_parts } {
The value of @C { last_index - first_index + 1 } on each call.  For
example, setting this value to 14 (the default) reassigns two weeks.
If @F rs_reassign_parts is larger than the total number of days, it
is silently reduced to the total number of days.
@LP
For experimental use there is also @C { constraint:xxx } where
@C { xxx } stands for any non-empty string.  The cluster busy
times constraints of the instance whose names contain @C { xxx }
are found, and for each time group of each of them, the smallest
target interval covering that time group is one of the target
intervals tried.  A target interval may be found several times
over in this way, but it is only tried once.  For this value
of @F { rs_reassign_parts }, options @F { rs_reassign_start }
and @F { rs_reassign_increment } (just below) are not consulted.
}

@DTI { @F { rs_reassign_start }, @F { rs_reassign_increment } } {
The value of @C { first_index } on the first call for a given set
of resources, and how much it is incremented by on each subsequent
call for that set of resources.  The default values are 0 and
@C { rs_reassign_parts }.  Only intervals lying entirely within
the legal range are tried.
}

@DTI { @F rs_reassign_grouping } {
Determines the @C { grouping } argument of each call (see above).  It
may be @C { "minimal" } (the default), @C { "runs" }, or @C { "maximal" }.
}

@DTI { @F rs_reassign_ignore_partial } {
A Boolean option which determines the @C { ignore_partial }
argument of each call (see above).  The default value is @C { false }.
}

@DTI { @F rs_reassign_method } {
Determines the @C { method } argument of each call (see above).
Its value may be either @C { "exhaustive" } (the default) or
@C { "matching" }.
}

@DTI { @F rs_reassign_max_assignments } {
An integer option which determines the @C { max_assignments }
argument of each call (see above).  Its default value is 1000000.
}

@EndList
To allow for up to three calls to @C { KheReassignRepair } with separate
options, there are also
@ID @C {
bool KheReassign2Repair(KHE_SOLN soln, KHE_RESOURCE_TYPE rt,
  KHE_OPTIONS options);
bool KheReassign3Repair(KHE_SOLN soln, KHE_RESOURCE_TYPE rt,
  KHE_OPTIONS options);
}
@C { KheReassign2Repair } is the same except that it consults options
@C { rs_reassign2_resources }, @C { rs_reassign2_select }, and so on;
@C { KheReassign3Repair } is the same except that it consults options
@C { rs_reassign3_resources }, @C { rs_reassign3_select }, and so on.
@End @Section

@Section
    @Title { Trying unassignments }
    @Tag { resource_solvers.unassignments }
@Begin
@LP
KHE's solvers assume that it is always a good thing to assign a
resource to a task.  However, occasionally there are cases where
cost can be reduced by unassigning a task, because the cost of the
resulting assign resource defect is less than the cost of the defects
introduced by the assignment.  As some acknowledgement of these
anomalous cases, KHE offers
@ID @C {
bool KheSolnTryTaskUnAssignments(KHE_SOLN soln, KHE_OPTIONS options);
}
for use at the end.  It tries unassigning each proper root task of
@C { soln }.  If an unassignment reduces the cost of @C { soln },
the task is left unassigned; otherwise it is reassigned.  The result
is @C { true } if any unassignments were kept.
@PP
Restricting @C { KheSolnTryTaskUnAssignments } to proper root
tasks ensures that it does no task ungrouping.  By the end there
will probably be no groups anyway, but it seems best to keep
the ideas of ungrouping and unassigning distinct.
@PP
It might pay to unassign two or more adjacent tasks.
@C { KheSolnTryTaskUnAssignments } consults an option for this:
@TaggedList

@DTI { @F rs_max_unassign }
{
This integer option determines the maximum number of adjacent
tasks to try unassigning.  The default value is 1.
}

@EndList
For example, setting @C { rs_max_unassign } to 2 will try unassigning
entire weekends (among other things), which might pay off if the
resource is working on too many weekends.
@End @Section

@Section
    @Title { Putting it all together }
    @Tag { resource_solvers.all_together }
@Begin
@LP
This section presents functions which assemble the pieces described
in previous sections.
@PP
Three structural decisions face a resource solver.  Should it work
with split assignments?  Should it preserve the resource assignment
invariant?  Should it respect the domains of tasks?  It is easy to
write solvers that can be used with any combination of these
decisions, as follows.
@PP
Get unsplit assignments by building a task tree with avoid split
assignments jobs.  Allow split assignments by calling
@C { KheTaskingAllowSplitAssignments }
(Section {@NumberOf resource_structural.task_tree.reorganization}).
Either way, a solver assigns resources to unfixed tasks, without
knowing or caring if they have followers.
@PP
By enclosing each attempt to change the solution in
@C { KheAtomicTransactionBegin } and @C { KheAtomicTransactionEnd }
(Section {@NumberOf resource_solvers.invt}), a solver can preserve the
resource assignment invariant, or not, depending on the value of a
Boolean parameter.
@PP
If domains are to be respected, do nothing; if not, then before
running the solver, call @C { KheTaskingEnlargeDomains }
(Section {@NumberOf resource_structural.task_tree.reorganization })
to enlarge them to the full set of resources.
@PP
A sequence of three functions,
@ID @C {
bool KheTaskingAssignResourcesStage1(KHE_TASKING tasking,
  KHE_OPTIONS options);
bool KheTaskingAssignResourcesStage2(KHE_TASKING tasking,
  KHE_OPTIONS options);
bool KheTaskingAssignResourcesStage3(KHE_TASKING tasking,
  KHE_OPTIONS options);
}
packages this chapter's ideas into a three-stage solver which assigns
resources to the tasks of @C { tasking }.  Called in order, they take
a `progressive corruption' approach to the decisions just described:
they are spotless at first, but they slide into the gutter towards the
end.
@PP
@C { KheTaskingAssignResourcesStage1 } begins by setting option
@C { "rs_invariant" } to @C { true }.  Then it assigns resources
to the unassigned unfixed tasks of @C { tasking }, using the
assignment algorithm indicated by the @C { rs_constructor }
option, as detailed below.  This is followed by a call to a private
function, called the `repair part' here, which tries several
kinds of repairs, including @C { KheResourceRematch }
(Section {@NumberOf resource_solvers.matching.rematch}),
@C { KheEjectionChainRepairResources }
(Section {@NumberOf resource_solvers.ejection}), and, in
the employee scheduling model, @C { KheReassignRepair }
(Section {@NumberOf resource_solvers.reassign}).
@PP
After this, the great majority of the tasks have probably been
assigned resources.  There are no split assignments, the resource
assignment invariant is preserved, and domains are respected.
@PP
@C { KheTaskingAssignResourcesStage2 } does nothing if the instance
contains no avoid split assignments constraints.  Otherwise, it calls
@C { KheFindSplitResourceAssignments } to build split assignments, and
@C { KheTaskingAllowSplitAssignments } to permit all tasks, assigned
or not, to be split.  It then calls the repair part.  Ejection chain
repair will try to remove split assignments (it has always been able
to, but there has been nothing to trigger it until now), and it also
tries to assign unassigned tasks, even at the cost of splitting
assignments that were previously unsplit.
@PP
@C { KheTaskingAssignResourcesStage3 } is very corrupt indeed.
It turns the resource assignment invariant off, enlarges domains
by calling @C { KheTaskingEnlargeDomains }, then runs the repair
part yet again.  Enlarging domains makes sense only at the very
end, and will help only if any resource is better than none.
Because the resource assignment invariant is removed, this stage
should be run only after the first two stages have been run
@I { for each resource type }.
@PP
The options consulted by the three functions directly are
@TaggedList

@DTI { @F rs_constructor }
{
This option determines which resource solver
@C { KheTaskingAssignResourcesStage1 } calls to construct the initial
resource assignment.  Its possible values are:
@LeftList

@LI {
@C { "none" }:  no solver is called, so the repair stages have to find
assignments as well as repair them.  This is not likely to work well,
although it makes a worthwhile test.
}

@LI {
@C { "most_constrained" }:
@C { KheMostConstrainedFirstAssignResources }
(Section {@NumberOf resource_solvers.assignment.most_constrained_first}).
}

@LI {
@C { "resource_packing" }:
@C { KheResourcePackAssignResources }
(Section {@NumberOf resource_solvers.assignment.pack}).
}

# @LI {
# @C { "consec_packing" }:  @C { KheResourcePackConsecutive }
# (Section {@NumberOf resource_solvers.assignment.consec}).
# }

@LI {
@C { "time_sweep" }:  @C { KheTimeSweepAssignResources }
(Section {@NumberOf resource_solvers.matching.time.sweep}).
}

@LI {
@C { "auto" } (the default):  one of the functions just listed is
called, depending on the model and whether there are avoid split
assignments constraints.
}

@LI {
@C { "requested_only" }:  only @C { KheSolnAssignRequestedResources }
(Section {@NumberOf resource_solvers.assignment.requested}) is called.
# If time sweep is called and
# there are limit resources constraints, @F rs_time_sweep_matching_off
# is set to @C { "true" }.
}

@LI {
@C { "single_test" }:  only @C { KheSingleResourceSolverTest }
(Section {@NumberOf resource_solvers.single.running}) is called,
once for each non-empty resource type.  This also sets
@F { rs_repair_off } (see below) to @C { true }, turning off
all repair.  This value does not give a serious solver; it is for
testing single resource solving.
}

@RawEndList
}

@DTI { @F rs_group_by_resource }
{
This option, when @C { true }, causes the repair part of
@C { KheTaskingAssignResourcesStage1 } to be executed twice,
first in the usual way, and then with the tasks grouped by
resource using @C { KheTaskingGroupByResource }
(Section {@NumberOf resource_structural.task_tree.group.by.resource}).
The grouping is then removed.
}

@DTI { @F rs_repair_off }
{
This option, when @C { true }, causes the repair part to do nothing
in all three stages, leaving just the initial construction, including
any repair steps within the construction algorithms.
}

@DTI { {@F rs_repair1_off}, {@F rs_repair2_off}, {@F rs_repair3_off} }
{
These three options, when @C { true }, cause stage 1, 2, or 3 of the
repair part to do nothing.
}

@DTI { @F rs_repair_rematch_off }
{
This option, when @C { true }, turns off rematching repair in the
repair parts.
}

@DTI { @F rs_repair_ejection_off }
{
This option, when @C { true }, turns off ejection chain repair in the
repair parts.
}

@DTI { @F rs_multiplier }
{
A string option which when present causes @C { KheSetMonitorMultipliers }
(Section {@NumberOf general_solvers.monitor.multiplier}) to be called
once at the start of Stage 1, and again at the end of Stage 1.  Its
value is @C { val:str }, where @C { val } is an integer and @C { str }
is an arbitrary non-empty string.  These two values are passed to
the first call to @C { KheSetMonitorMultipliers }, and cause the
multipliers of all cluster busy times monitors derived from constraints
whose names or Ids include @C { str } to be multiplied by @C { val }.
The second call resets the multipliers in those same monitors to 1.
}

@DTI { @F rs_repair_time_limit }
{
A string option defining a soft time limit for the repair part of each
stage.  The format is the one accepted by @C { KheTimeFromString }
(Section {@NumberOf general_solvers.runningtime}):  @F { secs }, or
@F { mins:secs }, or @F { hrs:mins:secs }, or the special value
@F { - }, meaning `no limit', which is the default value.
}

@EndList
Many other options influence the solvers called by the three functions.
All three functions set the @C { rs_invariant } option themselves,
so it is futile for the user to set it when they are used.
@End @Section

@EndSections
@End @Chapter
