@Chapter
    @Title { Resource-Structural Solvers }
    @Tag { resource_structural }
@Begin
@LP
This chapter documents the solvers packaged with KHE that modify
the resource structure of a solution:  group and ungroup tasks,
and so on.  These solvers may alter resource assignments, but they
only do so occasionally and incidentally to their structural work.
# We also include here one solver which adjusts resource monitors.
@BeginSections

# @Section
#     @Title { Task bound groups }
#     @Tag { resource_structural.task_bound_groups }
# @Begin
# @LP
# Task domains are reduced by adding task bound objects to tasks
# (Section {@NumberOf solutions.tasks.domains}).  Frequently, task
# bound objects need to be stored somewhere where they can be found and
# deleted later.  The required data structure is trivial---just an array
# of task bounds---but it is convenient to have a standard for it, so
# KHE defines a type @C { KHE_TASK_BOUND_GROUP } with suitable operations.
# @PP
# To create a task bound group, call
# @ID @C {
# KHE_TASK_BOUND_GROUP KheTaskBoundGroupMake(KHE_SOLN soln);
# }
# To add a task bound to a task bound group, call
# @ID @C {
# void KheTaskBoundGroupAddTaskBound(KHE_TASK_BOUND_GROUP tbg,
#   KHE_TASK_BOUND tb);
# }
# To visit the task bounds of a task bound group, call
# @ID {0.96 1.0} @Scale @C {
# int KheTaskBoundGroupTaskBoundCount(KHE_TASK_BOUND_GROUP tbg);
# KHE_TASK_BOUND KheTaskBoundGroupTaskBound(KHE_TASK_BOUND_GROUP tbg, int i);
# }
# To delete a task bound group, including deleting all the task
# bounds in it, call
# @ID @C {
# bool KheTaskBoundGroupDelete(KHE_TASK_BOUND_GROUP tbg);
# }
# This function returns @C { true } when every call it makes to
# @C { KheTaskBoundDelete } returns @C { true }.
# @End @Section

@Section
    @Title { Task trees }
    @Tag { resource_structural.task_trees }
@Begin
@LP
In this section we consider building a tree of tasks, analogous
to the layer tree of meets, for structuring the assignment of
tasks to other tasks and to resources.
@BeginSubSections

@SubSection
    @Title { Discussion }
    @Tag { resource_structural.task_trees.discussion }
@Begin
@LP
What meets do for time, tasks do for resources.  A meet has a time
domain and assignment; a task has a resource domain and assignment.
Link events constraints cause meets to be assigned to other meets;
avoid split assignments constraints cause tasks to be assigned to
other tasks.
@PP
There are differences.  Tasks lie in meets, but meets do not lie
in tasks.  Task assignments do not have offsets, because there is
no ordering of resources like chronological order for times.
@PP
Since the layer tree is successful in structuring meets for
time assignment, let us see what an analogous tree for structuring
tasks for resource assignment would look like.  A layer tree is
a tree, whose nodes each contain a set of meets.  The root node
contains the cycle meets.  A meet's assignment, if present, lies
in the parent of its node.  By convention, meets lying outside
nodes have fixed assignments to meets lying inside nodes, and
those assignments do not change.
@PP
A @I { task tree }, then, is a tree whose nodes each contain a set of
tasks.  The root node contains the cycle tasks (or there might be
several root nodes, one for each resource type).  A task's
assignment, if present, lies in the parent of its node.  By
convention, tasks lying outside nodes have fixed assignments to
tasks lying inside nodes, and those assignments do not change.
@PP
Type @C { KHE_TASKING } is KHE's nearest equivalent to a task
tree node.  It holds an arbitrary set of tasks, but there is
no support for organizing taskings into a tree structure, since
that does not seem to be needed.  It is useful, however, to look
at how tasks are structured in practice, and to relate this to
task trees, even though they are not explicitly supported by KHE.
@PP
To implement an avoid split assignments constraint, a task is
assigned to a non-cycle task and fixed.  Such tasks would therefore
lie outside nodes (if there were any).  When a solver assigns a
task to a cycle task, the task would have to lie in a child node
of a node containing the cycle tasks (again, if there were any).
So there are three levels:  a first level of nodes containing
the cycle tasks; a second level of nodes containing unfixed tasks
wanting to be assigned resources; and a third level of fixed,
assigned tasks that do not lie in nodes.
@PP
This shows that the three-way classification of tasks presented
in Section {@NumberOf solutions.tasks.asst}, into cycle tasks,
unfixed tasks, and fixed tasks, is a proxy for the missing task
tree structure.  Cycle tasks are first-level tasks, unfixed tasks
are second-level tasks, and fixed tasks are third-level tasks.
@C { KHE_TASKING } is only needed for representing second-level
nodes, since tasks at the other levels do not require assignment.
By convention, then, taskings will contain only unfixed tasks.
@End @SubSection

@SubSection
    @Title { Task tree construction }
    @Tag { resource_structural.task_trees.construction }
@Begin
@LP
KHE offers a solver for building a task tree holding the tasks
of a given solution:
@ID @C {
bool KheTaskTreeMake(KHE_SOLN soln, KHE_OPTIONS options);
}
As usual, this solver returns @C { true } if it changes the
solution.  Like any good solver, this function has no special
access to data behind the scenes.  Instead, it works by calling
basic operations and helper functions:
@BulletList

@LI {
It calls @C { KheTaskingMake } to make one tasking for each resource
type of @C { soln }'s instance, and it calls @C { KheTaskingAddTask }
to add the unfixed tasks of each type to the tasking it made for that type.
These taskings may be accessed by calling @C { KheSolnTaskingCount }
and @C { KheSolnTasking } as usual, and they are returned in an order
suited to resource assignment, as follows.  Taskings for which
@C { KheResourceTypeDemandIsAllPreassigned(rt) } is @C { true }
come first.  Their tasks will be assigned already if
@C { KheSolnAssignPreassignedResources } has been called, as it
usually has been.  The remaining taskings are sorted by decreasing
order of @C { KheResourceTypeAvoidSplitAssignmentsCount(rt) }.
These functions are described in Section {@NumberOf resource_types}.
Of course, the user is not obliged to follow this ordering.  It is
a precondition of @C { KheTaskTreeMake } that @C { soln } must have
no taskings when it is called.
}

@LI {
It calls @C { KheTaskAssign } to convert resource preassignments into
resource assignments, and to satisfy avoid split assignments constraints,
as far as possible.  Existing assignments are preserved (no calls to
@C { KheTaskUnAssign } are made).
}

@LI {
It calls @C { KheTaskAssignFix } to fix the assignments it makes
to satisfy avoid split assignments constraints.  These may be removed
later.  At present it does not call @C { KheTaskAssignFix } to fix
assignments derived from preassignments, although it probably should.
}

@LI {
It calls @C { KheTaskSetDomain } to set the domains of tasks to
satisfy preassigned resources, prefer resources constraints, and
other influences on task domains, as far as possible.
@C { KheTaskTreeMake } never adds a resource to any domain, however;
it either leaves a domain unchanged, or reduces it to a subset of
its initial value.
}

@EndList
These elements interact in ways that make them impossible to
separate.  For example, a prefer resources constraint that
applies to one task effectively applies to all the tasks that
are linked to it, directly or indirectly, by avoid split
assignments constraints.
@PP
@C { KheTaskTreeMake } does not refer directly to any options.
However, it calls function @C { KheTaskingMakeTaskTree }, described
below, and so it is indirectly influenced by its options.
@PP
The implementation of @C { KheTaskTreeMake } has two stages.  The
first creates one tasking for each resource type of @C { soln }'s
instance, in the order described, and adds to each the unfixed tasks
of its type.  This stage can be carried out separately by repeated
calls to
@ID @C {
KHE_TASKING KheTaskingMakeFromResourceType(KHE_SOLN soln,
  KHE_RESOURCE_TYPE rt);
}
which makes a tasking containing the unfixed tasks of @C { soln } of
type @C { rt }, or of all types if @C { rt } is @C { NULL }.  It
aborts if any of these unfixed tasks already lies in a tasking.
@PP
The second stage is more complex.  It applies public function
@ID @C {
bool KheTaskingMakeTaskTree(KHE_TASKING tasking,
  KHE_SOLN_ADJUSTER sa, KHE_OPTIONS options);
}
to each tasking made by the first stage.  When @C { KheTaskingMakeTaskTree }
is called from within @C { KheTaskTreeMake }, its @C { options } parameter
is inherited from @C { KheTaskTreeMake }.
@PP
As described for @C { KheTaskTreeMake }, @C { KheTaskingMakeTaskTree }
assigns tasks and tightens domains; it does not unassign tasks or
loosen domains.  Only tasks in @C { tasking } are affected.  If
@C { sa } is non-@C { NULL }, any task bounds created while tightening
domains are added to @C { sa }, which allows them to be deleted
later if required.  Tasks assigned to non-cycle tasks have their
assignments fixed, and so are deleted from @C { tasking }.
@PP
The implementation of @C { KheTaskingMakeTaskTree } imitates the layer
tree construction algorithm:  it applies @I jobs in decreasing priority
order.  There are fewer kinds of jobs, but the situation is more complex
in another way:  sometimes, some kinds of jobs are wanted but not others.
The three kinds of jobs of highest priority install existing domains and
task assignments, and assign resources to unassigned tasks derived from
preassigned event resources.  These jobs are always included; the first
two always succeed, and so does the third unless the user has made
peculiar task or domain assignments earlier.  The other kinds of jobs
are optional, and whether they are included or not depends on the
options (other than @C { rs_invariant }) described next.
@PP
@C { KheTaskingMakeTaskTree } consults the following options.
# Those other
# than @F rs_invariant apply only to constraints @C { c } such that
# @C { KheConstraintCombinedWeight(c) } is not minimal take part.
# This is a simple attempt to limit structural changes to
# cases that make a significant difference.
@TaggedList

@DTI { @F rs_invariant } {
A Boolean option which, when @C { true }, causes @C { KheTaskTreeMake }
to omit assignments and domain tightenings which violate the resource
assignment invariant (Section {@NumberOf resource_solvers.invt}).
}

@DTI { @F rs_task_tree_prefer_hard_off } {
A Boolean option which, when @C { false }, causes @C { KheTaskTreeMake }
to make a job for each point of application of each hard prefer
resources constraint of non-zero weight.  The priority of the
job is the combined weight of its constraint, and it attempts
to reduce the domains of the tasks of @C { tasking } monitored
by the constraint's monitors so that they are subsets of the
constraint's domain.
}

@DTI { @F rs_task_tree_prefer_soft } {
Like @F rs_task_tree_prefer_hard_off except that it applies to
soft prefer resources constraints instead of hard ones, and its sense
is reversed so that the default value (@C { false } as usual) omits
these jobs.  The author has encountered cases where reducing domains
to enforce soft prefer resources constraints is harmful.
}

@DTI { @F rs_task_tree_split_hard_off } {
A Boolean option which, when @C { false }, causes @C { KheTaskTreeMake }
to make a job for each point of application of each hard avoid split
assignments constraint of non-zero weight.  Its priority is the
combined weight of its constraint, and it attempts to assign the
tasks of @C { tasking } to each other so that all the tasks of
the job's point of application of the constraint are assigned,
directly or indirectly, to the same root task.
}

@DTI { @F rs_task_tree_split_soft_off } {
Like @F rs_task_tree_split_hard_off except that it applies to
soft avoid split assignments constraints rather than hard ones.
}

@DTI { @F rs_task_tree_limit_busy_hard_off } {
A Boolean option which, when @C { false }, causes @C { KheTaskTreeMake }
to make a job for each point of application of each limit busy times
constraint with non-zero weight and maximum limit 0.  Its priority is
the combined weight of its constraint, and it attempts to reduce the
domains of those tasks of @C { tasking } which lie in events
preassigned the times of the constraint, to eliminate its resources,
since assigning them to these tasks must violate this constraint.
However, the resulting domain must have at least two elements; if
not, the reduction is undone, reasoning that it is too severe
and it is better to allow the constraint to be violated.
@LP
This flag also applies to cluster busy times constraints with
maximum limit 0, or rather to their positive time groups.
These are essentially the same as the time groups of limit
busy times constraints when the maximum limit is 0.
}

@DTI { @F rs_task_tree_limit_busy_soft_off } {
Like @F rs_task_tree_limit_busy_hard_off except that it applies to
soft limit busy times constraints rather than hard ones.
}

@EndList
By default, all of these kinds of jobs are run, except those
controlled by @F rs_task_tree_prefer_soft.
@End @SubSection

@EndSubSections
@End @Section

@Section
    @Title { Resource supply and demand }
    @Tag { resource_structural.supply_and_demand }
@Begin
@LP
This section covers several topics which are not closely related,
except that, in a general way, they are all concerned with the
supply of and demand for resources.
@BeginSubSections

@SubSection
    @Title { Accounting for supply and demand }
    @Tag { resource_structural.supply_and_demand.accounting }
@Begin
@LP
This section aims to understand the supply and demand for
resources in practice.
@PP
Let @M { S }, the @I { supply }, be the sum, over all resources
@C { r } of type @C { rt }, of the number of times that @C { r }
could be busy without violating any resource constraints, as
calculated by @C { KheResourceMaxBusyTimes }
(Section {@NumberOf solutions.avail.functions}).  Let @M { D }, the 
@I { demand }, be the total duration of tasks of type @C { rt }
for which there are assign resource constraints of non-zero weight.
@M { S } and @M { D } depend only on the instance; they are the
same for every solution.
@PP
Let the @I { excess supply } of resource type @C { rt } be
@M { S - D }, the amount by which the supply of resources of that
type exceeds the demand for them.  This could be negative, in which
case unassigned tasks or overloaded resources are inevitable.
@PP
Other considerations arise when we try to understand how supply
and demand play out in a solution.  Some resources may be
@I { overloaded }:  their actual number of busy times is larger
than the value calculated by @C { KheResourceMaxBusyTimes }.
Let @M { O } be the sum, over all overloaded resources, of the
excess.  Other resources may be @I { underloaded }:  their actual
number of busy times is smaller than the value calculated by
@C { KheResourceMaxBusyTimes }.  Let @M { U } be the sum,
over all underloaded resources, of the amount by which each
underloaded resource falls short.  HSEval prints @M { O } (in
fact @M { minus O }) and @M { U } below each planning timetable.
It should be clear that in a given solution, the number of
busy times that resources actually supply is @M { S + O - U }.
@PP
There are also adjustments needed on the demand side.  Some
tasks that require assignment may in fact not be assigned.
Let @M { X } be their total duration.  HSEval prints these tasks
in the Unassigned row at the bottom of the planning timetable.
Also, some tasks that do not require assignment may in fact
be assigned.  Let @M { Y } be their total duration.  HSEval
prints these tasks in italics in planning timetables, and prints
their total duration at the bottom of the timetables.  In a
given solution, the total duration of the tasks that are actually
assigned is @M { D - X + Y }.
@PP
But now, each unit of duration of each task that is actually assigned
consumes one unit of resource supply, and vice versa, so we must have
@ID @Math { D - X + Y = S + O - U }
and rearranging gives
@ID @Math { S - D = U - O + Y - X }
@M { S - D }, the excess supply, depends only on the instance.  So
the quantity on the right is constant over all solutions for a given
instance.
@PP
Now each unit of @M { O + X } incurs a cost, but each unit of
@M { U + Y } incurs no cost.  Nevertheless, minimizing @M { O + X }
is the same as minimizing @M { U + Y }, because their difference
is a constant.
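@PP
As a check on this accounting, the identity can be expressed in a few
lines of self-contained C.  The function names and the numbers in the
example below are inventions of this sketch, not part of KHE or HSEval.

```c
#include <assert.h>

/* Instance-level supply S and demand D, and solution-level overload O,
   underload U, and unassigned needed duration X, determine Y, the total
   duration of assigned tasks that did not need assignment, via the
   identity D - X + Y = S + O - U. */
int implied_extra_assigned(int S, int D, int O, int U, int X)
{
  return (S + O - U) - (D - X);
}

/* The rearranged identity S - D = U - O + Y - X, equivalently
   (U + Y) - (O + X) = S - D:  the cost-free total U + Y and the
   cost-bearing total O + X differ by the constant excess supply. */
int excess_supply_gap(int S, int D, int O, int U, int X)
{
  int Y = implied_extra_assigned(S, D, O, U, X);
  assert( S - D == U - O + Y - X );
  return (U + Y) - (O + X);
}
```

For example, with @M { S = 120 }, @M { D = 110 }, @M { O = 3 },
@M { U = 15 }, and @M { X = 4 }, the implied value of @M { Y } is 2,
and the gap between the two totals is the excess supply, 10.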
@End @SubSection

@SubSection
    @Title { Classifying resources by available workload }
    @Tag { resource_structural.supply_and_demand.classify_by_workload }
@Begin
@LP
Resources with high workload limits, as indicated by functions
@C { KheResourceMaxBusyTimes } and @C { KheResourceMaxWorkload }
(Section {@NumberOf solutions.avail}), may be harder to exploit
than resources with lower workload limits, so it may make sense
to timetable them first.  Function
@ID @C {
bool KheClassifyResourcesByWorkload(KHE_SOLN soln,
  KHE_RESOURCE_GROUP rg, KHE_RESOURCE_GROUP *rg1,
  KHE_RESOURCE_GROUP *rg2);
}
helps with that.  It partitions @C { rg } into two resource groups,
@C { rg1 } and @C { rg2 }, such that the highest workload resources
are in @C { rg1 }, and the rest are in @C { rg2 }.  It returns
@C { true } if it succeeds, and @C { false } if not, which happens
only when the resources of @C { rg } all have equal workload limits.
@PP
If @C { KheClassifyResourcesByWorkload } returns @C { true }, every
resource in @C { rg1 } has a maximal value of @C { KheResourceMaxBusyTimes }
and a maximal value of @C { KheResourceMaxWorkload }, and every element
of @C { rg2 } has a non-maximal value of @C { KheResourceMaxBusyTimes }
or a non-maximal value of @C { KheResourceMaxWorkload }.  If it returns
@C { false }, then @C { rg1 } and @C { rg2 } are @C { NULL }.
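@PP
The classification rule can be sketched independently of KHE.  In this
illustration the type and function names are invented, and the sketch
simply returns @C { false } whenever the split would leave either side
empty.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative resource record:  the two limits reported by
   KheResourceMaxBusyTimes and KheResourceMaxWorkload. */
typedef struct { int max_busy_times; int max_workload; } RES;

/* Partition res[0..n-1]:  high[i] becomes true exactly when res[i]
   is maximal in both limits over the whole group.  Returns false
   (no useful split) when either side of the partition would be empty. */
bool classify_by_workload(const RES *res, int n, bool *high)
{
  int i, high_count = 0;
  int best_busy = res[0].max_busy_times, best_load = res[0].max_workload;
  for( i = 1;  i < n;  i++ )
  {
    if( res[i].max_busy_times > best_busy ) best_busy = res[i].max_busy_times;
    if( res[i].max_workload > best_load ) best_load = res[i].max_workload;
  }
  for( i = 0;  i < n;  i++ )
  {
    high[i] = (res[i].max_busy_times == best_busy
      && res[i].max_workload == best_load);
    if( high[i] ) high_count++;
  }
  return high_count > 0 && high_count < n;
}
```

A group in which every resource has the same two limits produces no
split, mirroring the @C { false } case described above.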
@End @SubSection

@SubSection
    @Title { Limits on consecutive days, and rigidity }
    @Tag { resource_structural.supply_and_demand.consec }
@Begin
@LP
Nurse rostering instances typically place minimum and maximum
limits on the number of consecutive days that a resource can
be free, busy, or busy working a particular shift.  These limits
are scattered through constraints and may be hard to find.  This
section makes that easy.
@PP
An object called a @I { consec solver } is used for this.  To
create one, call
@ID @C {
KHE_CONSEC_SOLVER KheConsecSolverMake(KHE_SOLN soln, KHE_FRAME frame);
}
It uses memory from an arena taken from @C { soln }.  Its
attributes may be retrieved by calling
@ID @C {
KHE_SOLN KheConsecSolverSoln(KHE_CONSEC_SOLVER cs);
KHE_FRAME KheConsecSolverFrame(KHE_CONSEC_SOLVER cs);
}
The frame must contain at least one time group; otherwise
@C { KheConsecSolverMake } aborts.
@PP
To delete a solver when it is no longer needed, call
@ID @C {
void KheConsecSolverDelete(KHE_CONSEC_SOLVER cs);
}
This works by returning the arena to the solution.
@PP
To find the limits for a particular resource, call
@ID {0.98 1.0} @Scale @C {
void KheConsecSolverFreeDaysLimits(KHE_CONSEC_SOLVER cs, KHE_RESOURCE r,
  int *history, int *min_limit, int *max_limit);
void KheConsecSolverBusyDaysLimits(KHE_CONSEC_SOLVER cs, KHE_RESOURCE r,
  int *history, int *min_limit, int *max_limit);
void KheConsecSolverBusyTimesLimits(KHE_CONSEC_SOLVER cs, KHE_RESOURCE r,
  int offset, int *history, int *min_limit, int *max_limit);
}
For any resource @C { r }, these return the history (see below), the
minimum limit, and the maximum limit on, respectively, the number of
consecutive free days, the number of consecutive busy days, and the
number of consecutive busy times which appear @C { offset } places
into each time group of @C { frame }.  Setting @C { offset } to 0 might
return the history and limits on the number of consecutive early
shifts, setting it to 1 might return the limits on the number of
consecutive day shifts, and so on.  The largest offset acceptable
to @C { KheConsecSolverBusyTimesLimits } is returned by
@ID @C {
int KheConsecSolverMaxOffset(KHE_CONSEC_SOLVER cs);
}
An @C { offset } larger than this, or negative, produces an abort.
@PP
The @C { *history } values return history:  the number of consecutive
free days, consecutive busy days, and consecutive busy times with the
given @C { offset } in the timetable of @C { r } directly before the
timetable proper begins.  They are taken from the history values of the
same constraints that determine the @C { *min_limit } and @C { *max_limit }
values.
@PP
All these results are based on the frame passed to
@C { KheConsecSolverMake }, which would always be the common frame.
They are calculated by finding all limit active intervals constraints
with non-zero weight, comparing their time groups with the frame
time groups, and checking their polarities.  In effect this reverse
engineers what programs like NRConv do when they convert specialized
nurse rostering formats to XESTT.
@PP
If no constraint applies, @C { *history } is set to 0, @C { *min_limit }
is set to 1 (a sequence of length 0 is not a sequence at all), and
@C { *max_limit } is set to @C { KheFrameTimeGroupCount(frame) }.
In the unlikely event that more than one constraint applies,
@C { *history } and @C { *min_limit } are set to the largest of the
values from the separate constraints, and @C { *max_limit } is set
to the smallest of the values from the separate constraints.
@PP
The @I { rigidity } of a resource is the degree to which it is
constrained to follow a particular pattern of busy and free days,
assuming that it is utilized to the maximum extent that the
constraints allow, as
reported by @C { KheResourceMaxBusyTimes }
(Section {@NumberOf solutions.avail.functions}).  Rigidity
takes account of constraints on the number of consecutive busy
days and consecutive free days, plus history.
@PP
It is hard to see how a local repair method, for example ejection
chains (Section {@NumberOf resource_solvers.ejection}), can just
stumble on a good timetable for a rigid resource (although it often
does).  Something more targeted, like optimal assignment using
dynamic programming (Section {@NumberOf resource_solvers.dynamic}),
seems indicated.
@PP
Suppose that resource @M { r } has 20 available busy times, that
the cycle has 28 days, that @M { r }'s busy days are limited to
at most 5 consecutive days, and that its free days are limited
to at least 2 consecutive days.  Then to reach the 20 busy days
economically we need runs of 5 consecutive busy days, separated
by runs of 2 consecutive free days.  A typical pattern would be
@ID { (5 busy, 2 free, 5 busy, 2 free, 5 busy, 2 free, 5 busy, 2 free) }
The only freedoms here are to move the last two free days to other
points in the cycle, or else to move two or more busy times to the end.
# There are less than
# @M { 28 times 27 slash 2 } ways to do this, making the resource
# very rigid.
@PP
Resources with few available times can also be rigid.  Suppose
that @M { r } has 6 available busy times, that the cycle has
28 days, that @M { r }'s busy days are limited to at least 2
consecutive days, and that its free days are limited to at most
7 consecutive days.  (This is an actual example, from an INRC2
instance.)  A typical pattern would be
@ID { (7 free, 2 busy, 7 free, 2 busy, 7 free, 2 busy, 1 free) }
The only freedom here is to move up to 6 free days to the end,
another rigid case.  We've just shown, for example, that @M { r }'s
first and last days must be free.
@PP
For an example of a resource which is @I not rigid, let @M { r }
have 15 available busy times, subject to the same constraints as
the two previous resources.  A typical pattern would be
@ID {
(5 busy, 2 free, 5 busy, 2 free, 5 busy, 9 free)
}
This is not quite legal because the last run of free days
is too long, but it's close, and there are many choices
for moving two or more of those 9 free days forward, and
for regrouping the busy sequences, for example into three
runs of 4 days and one run of 3 days.
@PP
The ideal measure of rigidity (actually non-rigidity) would be the
number of distinct zero cost patterns of busy and free days.  But
that seems impracticable to calculate, and anyway we do not need a
precise measure.  The measure we choose is inspired by the examples
given above.  It is a weighted sum of two parts, @M { m sub 1 } and
@M { m sub 2 }:
@BulletList

@LI {
First, we ask what is the smallest number of runs of consecutive
busy days that we can have and still reach our desired number of
busy days without violating any minimum or maximum limits on
consecutive busy or free days?  And what is the largest number?
The difference is @M { m sub 1 }, our first measure of non-rigidity.
(Other measures are correlated with this one.  For example, if the
number of runs can vary, their lengths can vary as well.)
}

@LI {
Second, we ask what choices there are for placing the first
run of consecutive busy days, consistent with history.  For
example, if there are 2 busy days from history, and the
minimum limit is 3, then there is no choice for the first
run of busy days:  it must start on the first day.  Or if
there are 5 free days in history, and the maximum number of
consecutive free days is 7, then the first run of busy days
must start on the first, second, or third day.  The number of
choices here is @M { m sub 2 }, our second measure of non-rigidity.
}

@EndList
We weight the first measure by 10 and the second by 1.
@PP
For the resource with 20 available times above, at least 4 runs
are required, because each run can have at most 5 busy times.
At most 5 runs can be used, because if 6 runs are used there are
5 gaps between runs, each containing at least 2 times, leaving
at most 18 places for busy times.  So @M { m sub 1 = 5 - 4 = 1 }.
@PP
For the resource with 6 available times, at most 3 runs are
possible, because each run has at least 2 busy times.  And
2 runs doesn't work, because it leaves only three free runs,
each with at most 7 free times, to hold the 22 free times.
So @M {  m sub 1 = 3 - 3 = 0 }.
@PP
For the resource with 15 available times it is a little harder
to see what the possibilities are.  A somewhat rough and ready
general method works like this.  Suppose all busy runs have
length @M { x }, except possibly one run that is shorter,
and all free runs have length @M { y }.  If the number of
busy times we want is @M { a }, then the number of busy runs
is @M { c = lceil a slash x rceil }.
We must place one free run of length @M { y } between each
adjacent pair of busy runs, and optionally we can place one
free run of length @M { y } before the first run and after
the last run.  This gives a total number of times (busy plus
free) of between @M { a + y(c - 1) } and @M { a + y(c + 1) }.
If the total number of times in the cycle is between these
limits, then @M { x } and @M { y } are workable choices
and @M { c } is a workable number of busy runs.
@PP
Now @M { x } and @M { y } are bounded by limits set by
constraints.  So we try each combination of one legal choice
for @M { x } and one for @M { y } and see what workable
values for @M { c } we get.  The first measure of non-rigidity,
@M { m sub 1 }, is the difference between the largest and
smallest workable values for @M { c }.
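@PP
The rough and ready method just described can be sketched in a few
lines of self-contained C.  This is an illustration only, not KHE's
implementation; the function name is invented, and an absent limit is
represented by 1 or by the cycle length, as appropriate.

```c
#include <assert.h>
#include <limits.h>

/* First, rough and ready, measure of non-rigidity.  Here a is the
   desired number of busy times, cycle the number of days in the cycle,
   and [b_min, b_max] and [f_min, f_max] the limits on runs of
   consecutive busy and free days.  For each legal busy run length x
   and free run length y, c = ceil(a/x) busy runs occupy between
   a + y*(c - 1) and a + y*(c + 1) days, and c is workable when the
   cycle length lies between these bounds.  The result is the
   difference between the largest and smallest workable c. */
int non_rigidity_m1(int a, int cycle, int b_min, int b_max,
  int f_min, int f_max)
{
  int x, y, c, c_min = INT_MAX, c_max = INT_MIN;
  for( x = b_min;  x <= b_max && x <= a;  x++ )
    for( y = f_min;  y <= f_max;  y++ )
    {
      c = (a + x - 1) / x;  /* ceil(a/x) */
      if( a + y * (c - 1) <= cycle && cycle <= a + y * (c + 1) )
      {
        if( c < c_min ) c_min = c;
        if( c > c_max ) c_max = c;
      }
    }
  return c_max >= c_min ? c_max - c_min : 0;
}
```

Applied to the three worked examples above, this sketch gives 1, 0,
and 2 respectively, matching the hand calculations for the first two.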
@PP
A general method of calculating the second measure of non-rigidity
goes like this.  Suppose that the minimum length of a run of
consecutive busy times is @M { b sub "min" }, and the maximum
length is @M { b sub "max" }.  Suppose that the minimum length
of a run of consecutive free times is @M { f sub "min" }, and
the maximum length is @M { f sub "max" }.  And suppose that the
number of consecutive busy days from history is @M { b }, and
the number of consecutive free days from history is @M { f }.
At most one of @M { b } and @M { f } can be non-zero, and we
also have @M { 1 <= b sub "min" <= b sub "max" }, and
@M { 1 <= f sub "min" <= f sub "max" }.
@PP
If @M { b = f = 0 }, then the first day could be busy, contributing
1 to @M { m sub 2 }, or else any number of initial days from
@M { f sub "min" } to @M { f sub "max" } inclusive could be free,
contributing a further @M { f sub "max" - f sub "min" + 1 } to
@M { m sub 2 }.
@PP
If @M { b > 0 }, then @M { f = 0 }.  If @M { b < b sub "min" },
then the first day must be busy, so @M { m sub 2 = 1 }.  If
@M { b sub "min" <= b < b sub "max" }, then the first day
could be busy, contributing @M { 1 } to @M { m sub 2 }, or
free, contributing @M { f sub "max" - f sub "min" + 1 }
to @M { m sub 2 }.  If @M { b >= b sub "max" }, then the first day
must be free, and @M { m sub 2 = f sub "max" - f sub "min" + 1 }.
@PP
If @M { f > 0 }, then @M { b = 0 }.  If @M { f < f sub "min" },
then the first day must be free, and the number of initial free
days may be between @M { f sub "min" - f } and @M { f sub "max" - f }
inclusive, making @M { m sub 2 = f sub "max" - f sub "min" + 1 }
choices altogether.  If @M { f sub "min" <= f < f sub "max" },
then the first day could be busy, contributing 1 to @M { m sub 2 },
or free, contributing a further @M { f sub "max" - f } to
@M { m sub 2 }.  If @M { f sub "max" <= f }, then the first
day must be busy, so @M { m sub 2 = 1 }.
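@PP
The case analysis just given translates directly into a small
self-contained C function (an illustration with an invented name,
not KHE's implementation):

```c
#include <assert.h>

/* Second measure of non-rigidity:  the number of choices for placing
   the first run of consecutive busy days.  b and f are the consecutive
   busy and free days from history (at most one is non-zero), and
   [b_min, b_max] and [f_min, f_max] are the run length limits. */
int non_rigidity_m2(int b, int f, int b_min, int b_max,
  int f_min, int f_max)
{
  int free_choices = f_max - f_min + 1;
  if( b == 0 && f == 0 )
    return 1 + free_choices;           /* busy now, or f_min..f_max free */
  else if( b > 0 )
  {
    if( b < b_min ) return 1;          /* must continue the busy run */
    else if( b < b_max ) return 1 + free_choices;
    else return free_choices;          /* busy run full; must go free */
  }
  else
  {
    if( f < f_min ) return free_choices;  /* must extend the free run */
    else if( f < f_max ) return 1 + (f_max - f);
    else return 1;                     /* free run full; must go busy */
  }
}
```

The two small examples given earlier come out as expected:  2 busy
days of history against a minimum limit of 3 gives 1 choice, and
5 free days of history against a maximum limit of 7 gives 3 choices.
The overall non-rigidity is then the weighted sum, 10 times the first
measure plus the second.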
@PP
Function
@ID @C {
int KheConsecSolverNonRigidity(KHE_CONSEC_SOLVER cs, KHE_RESOURCE r);
}
returns the non-rigidity as we have defined it here.  There is no
precise threshold separating non-rigidity from rigidity, but for
the first measure a value of 0 is very rigid, 1 is somewhat rigid,
and 2 is non-rigid, arguably.  For the second measure a similar
statement is reasonable.  Rather than worrying about thresholds it
may be better to sort the resources by increasing non-rigidity and
treat, say, the first 20% or 30% of them as rigid.
@PP
Finally,
@ID @C {
void KheConsecSolverDebug(KHE_CONSEC_SOLVER cs, int verbosity,
  int indent, FILE *fp);
}
produces the usual debug print of @C { cs } onto @C { fp } with the
given verbosity and indent.  When @C { verbosity >= 2 }, this prints all
results for all resources, using format @C { history|min-max }.  For
efficiency, these are calculated all at once by @C { KheConsecSolverMake }.
@End @SubSection

@SubSection
    @Title { Tighten to partition }
    @Tag { resource_structural.supply_and_demand.partition }
@Begin
@LP
Suppose we are dealing with teachers, and that they have partitions
(Section {@NumberOf resource_types}) which are their faculties
(English, Mathematics, Science, and so on).  Some partitions may
be heavily loaded (that is, required to supply teachers for tasks
whose total workload approaches the total available workload of
their resources) while others are lightly loaded.
@PP
Some tasks may be taught by teachers from more than one partition.
These @I { multi-partition tasks } should be assigned to teachers from
lightly loaded partitions, and so should not overlap in time with other
tasks from these partitions.  @I { Tighten to partition } tightens the
domain of each multi-partition task in a given tasking to one partition,
returning @C { true } if it changes anything:
@ID {0.95 1.0} @Scale @C {
bool KheTaskingTightenToPartition(KHE_TASKING tasking,
  KHE_SOLN_ADJUSTER sa, KHE_OPTIONS options);
}
The choice of partition is explained below.  All changes are additions
of task bounds to tasks, and if @C { sa } is non-@C { NULL }, all
these task bounds are also added to @C { sa }, so that they can
be removed later if desired.
@PP
It is best to call @C { KheTaskingTightenToPartition } after
preassigned meets are assigned, but before general time
assignment.  The tightened domains encourage time assignment to
avoid the undesirable overlaps.  After time assignment, the
changes should be removed, since otherwise they constrain
resource assignment unnecessarily.
# This is what the task bound
# group is for:
# @ID @C {
# tighten_tbg = KheTaskBoundGroupMake(soln);
# for( i = 0;  i < KheSolnTaskingCount(soln);  i++ )
#   KheTaskingTightenToPartition(KheSolnTasking(soln, i),
#     tighten_tbg, options);
# ... assign times ...
# KheTaskBoundGroupDelete(tighten_tbg);
# }
# The rest of this section explains how @C { KheTaskingTightenToPartition }
# works in detail.
@PP
@C { KheTaskingTightenToPartition } does nothing when the tasking has
no resource type, or @C { KheResourceTypeDemandIsAllPreassigned }
(Section {@NumberOf resource_types}) says that the resource type's
tasks are all preassigned, or the resource type has no partitions,
or its number of partitions is less than four or more than one-third
of its number of resources.  No good can be done in these cases.
@PP
Tasks whose domains lie entirely within one partition are not touched.
The remaining multi-partition tasks are sorted by decreasing combined
weight then duration, except that tasks with a @I { dominant partition }
come first.  A task with an assigned resource has a dominant partition,
namely the partition that its assigned resource lies in.  An unassigned
task has a dominant partition when at least three-quarters of the
resources of its domain come from that partition.
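@PP
The three-quarters test can be carried out in exact integer arithmetic.
The following plain-C fragment is ours, not part of the KHE API; it
merely illustrates the rule just stated:

```c
/* An unassigned task has a dominant partition when at least
   three-quarters of the resources of its domain lie in that
   partition.  Writing the test as 4*k >= 3*n keeps it exact,
   avoiding floating-point rounding. */
int has_dominant_partition(int in_partition, int domain_size)
{
  return domain_size > 0 && 4 * in_partition >= 3 * domain_size;
}
```

For example, 3 resources out of a domain of 4 make a dominant
partition, but 2 out of 4 do not.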
@PP
For each task in turn, an attempt is made to tighten its domain so
that it is a subset of one partition.  If the task has a dominant
partition, only that partition is tried.  Otherwise, the partitions
that the task's domain intersects with are tried one by one, stopping
at the first success, after sorting them by decreasing average
available workload (defined next).
@PP
Define the @I { workload supply } of a partition to be the sum, over
the resources @M { r } of the partition, of the number of times in
the cycle minus the number of workload demand monitors for @M { r }
in the matching.  Define the @I { workload demand } of a partition
to be the sum, over all tasks @M { t } whose domain is a subset of
the partition, of the workload of @M { t }.  Then the
@I { average available workload } of a partition is its workload
supply minus its workload demand, divided by its number of resources.
Evidently, if this is large, the partition is lightly loaded.
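@PP
The arithmetic can be made concrete with invented numbers.  This is
plain C, not the KHE API; all names and figures here are illustrative
only:

```c
/* Average available workload of a partition, following the
   definitions above: supply is summed over the partition's
   resources as (times in the cycle minus workload demand
   monitors for that resource); demand is the total workload
   of tasks whose domains lie within the partition. */
int avg_available_workload(int num_resources, int times_in_cycle,
  const int *monitors_per_resource, int workload_demand)
{
  int i, supply = 0;
  for (i = 0; i < num_resources; i++)
    supply += times_in_cycle - monitors_per_resource[i];
  return (supply - workload_demand) / num_resources;
}
```

With a 40-time cycle, five resources each carrying 8 workload demand
monitors, and tasks of total workload 120 confined to the partition,
the workload supply is 160 and the average available workload is
(160 - 120) / 5 = 8.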
@PP
Each successful tightening increases the workload demand of its
partition.  This ensures that equally lightly loaded partitions
share multi-partition tasks equally.
@PP
In a task with an assigned resource, the dominant partition is the
only one compatible with the assignment.  In a task without an
assigned resource, preference is given to a dominant partition, if
there is one, for the following reason.  Schools often have a few
@I { generalist teachers } who are capable of teaching junior
subjects from several faculties.  These teachers are useful for
fixing occasional problems, smoothing out workload imbalances,
and so on.  But the workload that they can give to faculties other
than their own is limited and should not be relied on.  For
example, suppose there are five Science teachers plus one
generalist teacher who can teach junior Science.  That should
not be taken by time assignment as a licence to routinely schedule
six Science meets simultaneously.  Domain tightening to a dominant
partition avoids this trap.
@PP
Tightening by partition works best when the @C { rs_invariant }
option of @C { options } is @C { true }.  For example, in a case like
Sport where there are many simultaneous multi-partition tasks, it
will then not tighten more of them to a lightly loaded partition
than there are teachers in that partition.  Assigning preassigned
meets beforehand improves the effectiveness of this check.
@End @SubSection

@SubSection
    @Title { Balancing supply and demand }
    @Tag { resource_structural.supply_and_demand.balance }
@Begin
@LP
This section presents a solver for investigating the balance between
supply of and demand for resources of a given type.  Its main aim is
to answer this question: if some resource is not used up to its full
capacity, what cost will that have in terms of tasks not assigned?
@PP
To create a balance solver, call
@ID @C {
KHE_BALANCE_SOLVER KheBalanceSolverMake(KHE_SOLN soln,
  KHE_RESOURCE_TYPE rt, KHE_FRAME days_frame, HA_ARENA a);
}
It makes a solver for the supply of and demand for resources of type
@C { rt } in @C { soln }, using memory from arena @C { a }.  There is
no deletion operation; the solver is deleted when @C { a } is freed.
@PP
To find the total supply of resources of type @C { rt }, call
@ID @C {
int KheBalanceSolverTotalSupply(KHE_BALANCE_SOLVER bs);
}
This calls @C { KheResourceMaxBusyTimes(soln, r, &res) }
for each resource @C { r } of type @C { rt }, and returns the
sum of the @C { res } values.  As documented in
Section {@NumberOf solutions.avail}, @C { res } is an
upper limit on @C { r }'s number of busy times (as imposed by
constraints) minus its current number of busy times.
@PP
To find the total demand for resources of type @C { rt }, call
@ID @C {
int KheBalanceSolverTotalDemand(KHE_BALANCE_SOLVER bs);
}
This is the sum, over all unassigned tasks @C { t } of type @C { rt }, of
the total duration of @C { t }, as returned by @C { KheTaskTotalDuration(t) }
(Section {@NumberOf solutions.tasks.asst}).
@PP
The balance solver analyses this demand by cost reduction.  For each
task @C { t } that contributes to @C { KheBalanceSolverTotalDemand(bs) },
it calls @C { KheTaskAssignmentCostReduction }
(Section {@NumberOf solutions.tasks.asst}) on @C { t }, and groups tasks
with equal cost reductions.  To access these groups, call
@ID @C {
int KheBalanceSolverDemandGroupCount(KHE_BALANCE_SOLVER bs);
void KheBalanceSolverDemandGroup(KHE_BALANCE_SOLVER bs, int i,
  KHE_COST *cost_reduction, int *total_durn);
}
@C { KheBalanceSolverDemandGroup } returns the information kept about
the @C { i }th group:  the cost reduction of each of its tasks, and
their total duration.  @C { KheBalanceSolverTotalDemand } returns the
sum of these total durations.  The groups are visited in order of
decreasing cost reduction.
@PP
Using this information it is easy to work out the marginal cost of
not utilizing a resource @C { r } to its full capacity.  Suppose
that tasks are assigned in order of decreasing cost reduction,
until all resources are used to capacity.  The cost reduction of
the last task assigned is the marginal cost of not fully utilizing
@C { r }.  This value is returned by
@ID @C {
KHE_COST KheBalanceSolverMarginalCost(KHE_BALANCE_SOLVER bs);
}
If supply exceeds demand, there is no marginal cost, and so the
value returned is 0.  Finally,
@ID @C {
void KheBalanceSolverDebug(KHE_BALANCE_SOLVER bs, int verbosity,
  int indent, FILE *fp);
}
produces the usual debug print of @C { bs } onto @C { fp } with
the given verbosity and indent.
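@PP
The marginal cost rule is simple enough to sketch in standalone C.
The struct and function below are ours, imitating the values that
the balance solver's operations report; they are not KHE code:

```c
/* One demand group: tasks with equal cost reduction, plus their
   total duration.  Groups are assumed sorted by decreasing cost
   reduction, as the balance solver's groups are. */
typedef struct { int cost_reduction; int total_durn; } DEMAND_GROUP;

/* Assign tasks in decreasing cost reduction order until supply
   runs out; the cost reduction of the last task assigned is the
   marginal cost, or 0 when supply meets or exceeds demand. */
int marginal_cost(const DEMAND_GROUP *g, int n, int supply)
{
  int i, total = 0, last = 0;
  for (i = 0; i < n; i++)
    total += g[i].total_durn;
  if (total <= supply)
    return 0;                      /* supply covers all demand */
  for (i = 0; i < n && supply > 0; i++) {
    last = g[i].cost_reduction;    /* last group reached so far */
    supply -= g[i].total_durn;
  }
  return last;
}
```

With groups of cost reduction 10 (duration 5), 7 (duration 4) and
3 (duration 6), a supply of 7 exhausts itself inside the second
group, so the marginal cost is 7; a supply of 20 covers everything
and the marginal cost is 0.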
@End @SubSection

@SubSection
    @Title { Resource flow }
    @Tag { resource_structural.supply_and_demand.resource_flow }
@Begin
@LP
It is arguably too simple to just compare the total supply of
resources with the total demand for them.  The tasks which
constitute the demand have prefer resources monitors (hard and
soft) which restrict which resources can be used.  There could
be enough supply overall but not enough of a particular kind:
enough nurses but not enough senior nurses, enough rooms but
not enough Science laboratories, and so on.
@PP
We can detect such problems now using the global tixel matching.
However, here we build a @I { flow graph } (a directed graph in
which we will find a maximum flow) that is much smaller than the
global tixel matching.  This graph gives a clearer view of the
overall situation than one can get from a bipartite matching.  We
call this general idea @I { resource flow }, or just @I { flow }.
@PP
A flow graph is for a given resource type @C { rt }.  It is
built from a set of @I { admissible resources } and a set of
@I { admissible tasks }.  The admissible resources are just the
resources of type @C { rt }.  A task is admissible when
all of these conditions hold:
@NumberedList

@LI {
It has type @C { rt }.
}

@LI {
It is a proper root task.
}

@LI {
It is derived from an event resource (needed because we use
the event resource's domain).
}

@LI {
It is not preassigned.
}

@LI {
Its assignment is not fixed.
}

@LI {
@C { KheTaskNonAsstAndAsstCost }
(Section {@NumberOf resource_structural.mtask_finding.ops})
gives it a positive non-assignment cost.
}

@LI {
It is not assigned a resource.  This condition is optional; it
is present when parameter @C { preserve_assts } of function
@C { KheFlowMake } below has value @C { true }.
}

@EndList
As usual, the @I { total duration } of a proper root task is the
duration of the task plus the durations of all the tasks assigned
to it, directly or indirectly.
@PP
The flow graph contains a source node, some @I { resource nodes }
(each containing a set of one or more admissible resources), some
@I { task nodes } (each containing a set of one or more admissible
tasks), and a sink node.
@PP
For each admissible resource, define a resource node @M { x } containing
just that resource.  Add an edge from the source node to @M { x },
whose capacity @M { c(x) } is the number of times that the resource
is available, according to @C { KheResourceMaxBusyTimes }
(Section {@NumberOf solutions.avail.functions}).  If the resource
is currently assigned to any inadmissible proper root tasks, then
reduce @M { c(x) } by the total duration of those tasks (but not
below 0) to compensate for their omission.
@PP
For each distinct set @M { R sub y } of resources preferred by at least
one task, define a @I { task node } @M { y } containing all tasks that
prefer @M { R sub y }, and add an edge from @M { y } to the sink node,
whose capacity @M { c(y) } is the total duration of those tasks.  Then
for each resource node @M { x } and each task node @M { y }, draw an
edge from @M { x } to @M { y } of infinite capacity whenever @M { x }'s
resource lies in @M { R sub y }.
@PP
Before solving this graph, we compress it by merging resource nodes
that are connected to the same task nodes.  Each such merged node
has incoming capacity equal to the total capacity of the nodes it
replaces, and outgoing edges like the edges it replaces.
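@PP
The compression step can be sketched as follows.  In this standalone
fragment (ours, not KHE's), each resource node's connections are
encoded as a bitmask over task nodes:

```c
/* Merge resource nodes that connect to the same task nodes.
   adj[i] is a bitmask giving the task nodes that resource node i
   has edges to, and cap[i] is its incoming capacity.  Merged
   nodes are written to merged_adj and merged_cap (arrays of at
   least n elements); a merged node keeps the shared connections
   and the total capacity of the nodes it replaces.  Returns the
   number of merged nodes. */
int merge_resource_nodes(const unsigned *adj, const int *cap, int n,
  unsigned *merged_adj, int *merged_cap)
{
  int i, j, m = 0;
  for (i = 0; i < n; i++) {
    for (j = 0; j < m; j++)
      if (merged_adj[j] == adj[i]) {
        merged_cap[j] += cap[i];   /* same connections: add capacity */
        break;
      }
    if (j == m) {
      merged_adj[m] = adj[i];      /* new combination of connections */
      merged_cap[m] = cap[i];
      m++;
    }
  }
  return m;
}
```

Two nodes with adjacency mask 1 and capacities 196 and 54 merge
into one node of capacity 250; a third node with mask 3 stays
separate.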
@PP
Here is an example of a flow graph from a real instance
(INRC2-4-100-0-1108):
@CD @Diag {
@Tbl
   aformat { @Cell A | @Cell B | @Cell | @Cell C | @Cell D }
   mh { 0.8c }
   iv { ctr }
   mv { 0.4c }
{
@Rowa ma { 0i }
  B { DA:: @Box HN_* }
  C { SA:: @Box HeadNurse }
@Rowa
  A { SS:: @Circle }
  B { DB:: @Box NU_* }
  C { SB:: @Box Nurse }
  D { SK:: @Circle }
@Rowa mb { 0i }
  B { DC:: @Box CT_* }
  C { SC:: @Box Caretaker }
}
//
@Arrow from { SS } to { DA@W } ylabel { 196 }
@Arrow from { SS } to { DB@W } ylabel { 250 }
@Arrow from { SS } to { DC@W } ylabel { 537 }

@Arrow from { DA } to { SA } ylabel { +4p @Font @M { infty } }
@Arrow from { DA } to { SB } ylabel { +4p @Font @M { infty } }
@Arrow from { DB } to { SB } ylabel { +4p @Font @M { infty } }
@Arrow from { DB } to { SC } ylabel { +4p @Font @M { infty } }
@Arrow from { DC } to { SC } ylabel { +4p @Font @M { infty } }

@Arrow from { SA@E } to { SK } ylabel { 91 }
@Arrow from { SB@E } to { SK } ylabel { 239 }
@Arrow from { SC@E } to { SK } ylabel { 669 }
}
Node HN_* holds the head nurses, node HeadNurse holds the tasks
that require a head nurse, and so on.  This example substantiates
our claim about the clarity of flow graphs:  it shows that head
nurses can do the work of ordinary nurses as well as their own,
and ordinary nurses can do the work of caretaker nurses as well as
their own.  This is just as well, because, as the graph also shows,
head nurses have a superfluity of available workload and caretakers
have a shortage.
@PP
This flow graph can answer many questions.  Each resource node is
the answer to the question `What kind of resource is this?',
although that answer does not come with a simple name in general.
(We will compare the sets of resources we get with existing resource
groups, so that we can give the nodes familiar names whenever possible.
But the algorithm deals with sets of resources that it defines itself,
not with sets defined previously as resource groups.)
# @PP
# The basic question we answer with flows is `does a maximum flow
# exist which includes a non-zero flow from @M { r } to @M { s }?'
# Let @M { f(r, s) } be the answer to this question (a boolean).
# To find @M { f(r, s) }, we subtract 1 from @M { c(r) } and
# @M { c(s) } and find a maximum flow.  If this flow is just 1
# less than the original maximum flow, then a maximum flow that
# uses this edge exists:  take this flow and add one unit of
# flow from the source to @M { r } to @M { s } to the sink.
@PP
Call a maximum flow in this graph the @I { original flow }.
By changing the graph and seeing whether the new maximum flow
is less than the original, we can answer questions like these:
@BulletList

@LI {
Can at least one of the tasks of task node @M { y } be assigned a
resource from resource node @M { x }?  Subtract 1 from @M { c(x) }
and @M { c(y) } and find a maximum flow.  If this flow is just 1
less than the original flow, then a maximum flow that uses this
edge exists:  take this flow and add one unit of flow from the
source to @M { x } to @M { y } to the sink.  If the answer is no,
we might as well delete the edge from @M { x } to @M { y }.  This
may interest callers since it simplifies the situation.
}

@LI {
Must the resources of @M { x } be used exclusively by @M { y }?
Yes, if the previous question has answer no for every task node
other than @M { y } that @M { x } is connected to.
}

@LI {
Can the tasks of @M { y } be limited to resources from @M { x }?
Remove all edges into @M { y } other than the one from @M { x }
and find a maximum flow.  The answer is yes if this equals the
original flow.
}

@EndList
There are many possible questions; our plan is to implement them
as we need them.
# we can choose any one of `there exists', `for all', and `how many'
# in several places; wherever we ask a question about @M { x } we
# can ask the same question about @M { y }, and vice versa;
# wherever a condition occurs we can negate it; and so on.
@PP
The implementation defines three types.  Type @C { KHE_FLOW }
represents the entire flow graph; type @C { KHE_FLOW_RESOURCE_NODE }
represents one resource node; and type @C { KHE_FLOW_TASK_NODE }
represents one task node.
@PP
We start with type @C { KHE_FLOW_RESOURCE_NODE }.  Its operations are
@ID @C {
KHE_RESOURCE_SET KheFlowResourceNodeResources(
  KHE_FLOW_RESOURCE_NODE frn);
bool KheFlowResourceNodeResourceGroup(KHE_FLOW_RESOURCE_NODE frn,
  KHE_RESOURCE_GROUP *rg);
int KheFlowResourceNodeCapacity(KHE_FLOW_RESOURCE_NODE frn);
bool KheFlowResourceNodeFlow(KHE_FLOW_RESOURCE_NODE frn,
  KHE_FLOW_TASK_NODE *ftn, int *flow);
void KheFlowResourceNodeDebug(KHE_FLOW_RESOURCE_NODE frn,
  int verbosity, int indent, FILE *fp);
}
@C { KheFlowResourceNodeResources } returns the set of resources
represented by flow resource node @C { frn }.  If
@C { KheFlowResourceNodeResourceGroup } returns @C { true },
then the  pre-existing resource group @C { *rg } contains exactly these
resources.
@C { KheFlowResourceNodeCapacity }
returns the total capacity of those resources (the sum of their
individual capacities, defined above).
@C { KheFlowResourceNodeFlow } reports the results of a max flow
solve on the graph.  It is to be called repeatedly, and each
time it returns @C { true } it reports one edge with flow
@C { *flow } from @C { frn } to @C { *ftn }.  So it should be
called like this:
@ID @C {
while( KheFlowResourceNodeFlow(frn, &ftn, &flow) )
  ... there is a non-zero flow from frn to ftn ...
}
Finally, @C { KheFlowResourceNodeDebug } produces a debug print of
@C { frn } in the usual way.
@PP
The operations on type @C { KHE_FLOW_TASK_NODE } are
@ID @C {
KHE_TASK_SET KheFlowTaskNodeTasks(KHE_FLOW_TASK_NODE ftn);
KHE_RESOURCE_GROUP KheFlowTaskNodeDomain(KHE_FLOW_TASK_NODE ftn);
int KheFlowTaskNodeCapacity(KHE_FLOW_TASK_NODE ftn);
void KheFlowTaskNodeDebug(KHE_FLOW_TASK_NODE ftn, int verbosity,
  int indent, FILE *fp);
}
@C { KheFlowTaskNodeTasks } returns the set of tasks represented by
@C { ftn }.  @C { KheFlowTaskNodeDomain } returns the domain they
share.  @C { KheFlowTaskNodeCapacity } returns their capacity (their
total duration); and @C { KheFlowTaskNodeDebug } produces a debug
print of @C { ftn } in the usual way.
@PP
Now for the operations on type @C { KHE_FLOW }.  A flow object is
created and deleted by calling
@ID @C {
KHE_FLOW KheFlowMake(KHE_SOLN soln, KHE_RESOURCE_TYPE rt,
  bool preserve_assts, bool include_soft);
void KheFlowDelete(KHE_FLOW f);
}
@C { KheFlowMake } builds the flow object in an arena taken from
@C { soln }, including creating its resource and task nodes as
defined above, and finding a maximum flow.   @C { KheFlowDelete }
returns the arena to @C { soln }, making @C { f }, its nodes, and
the resource sets and task sets from its nodes undefined.  (The task
sets are not created within @C { f }'s arena, because the task set
interface does not offer that option.  But @C { KheFlowDelete }
explicitly deletes them.)
# @PP
# The resources included are all resources of type @C { rt }.  The
# capacity of each resource @C { r } is @C { KheResourceMaxBusyTimes }
# (Section {@NumberOf solutions.avail.functions}), minus the total
# duration of any tasks assigned @C { r } when @C { KheResourceFlowMake }
# is called and omitted according to the rules given next.
# @PP
# The tasks included are all proper root tasks of type @C { rt },
# with three exceptions:  tasks not derived from an event resource
# are omitted; fixed tasks are omitted; and if @C { preserve_assts }
# is @C { true }, then proper root tasks that are assigned resources
# when @C { KheResourceFlowMake } is called are omitted.  The capacity
# of each task is its duration, including the durations of tasks
# assigned to it, directly or indirectly.  If the task is derived from
# event resource @C { er }, the set of resources assignable to it is
# @C { KheEventResourceHardAndSoftDomain(er) } if @C { include_soft }
# is @C { true }, and @C { KheEventResourceHardDomain(er) } otherwise.
# @C { KheTaskDomain } is not called.
@PP
The flow object returned by @C { KheFlowMake } accepts a variety of
queries.  Its resource nodes may be visited (sorted by increasing
index of their resource sets' first resources) by
@ID @C {
int KheFlowResourceNodeCount(KHE_FLOW f);
KHE_FLOW_RESOURCE_NODE KheFlowResourceNode(KHE_FLOW f, int i);
}
Its task nodes may be visited (in an unspecified order) by
@ID @C {
int KheFlowTaskNodeCount(KHE_FLOW f);
KHE_FLOW_TASK_NODE KheFlowTaskNode(KHE_FLOW f, int i);
}
There is also
@ID @C {
KHE_FLOW_RESOURCE_NODE KheFlowResourceToResourceNode(KHE_FLOW f,
  KHE_RESOURCE r);
KHE_FLOW_TASK_NODE KheFlowTaskToTaskNode(KHE_FLOW f, KHE_TASK task);
}
These return the resource node containing @C { r } and the task node
containing @C { task }, or @C { NULL } if there is no such node (if
@C { r } or @C { task } is not admissible).  Finally,
@ID @C {
void KheFlowDebug(KHE_FLOW f, int verbosity, int indent, FILE *fp);
}
produces a debug print of @C { f } onto @C { fp } with the
given verbosity and indent.
@End @SubSection

@SubSection
    @Title { Workload packing }
    @Tag { resource_structural.supply_and_demand.workload_packing }
@Begin
@LP
The solver in this section is inspired by instance @C { COI-WHPP },
in which each resource has maximum workload 70, day shifts have
workload 7, night shifts have workload 10, and the total supply
of workload is just sufficient to meet the total demand.  Given
these conditions, and the absence of significant other conditions,
it is not hard to see that in the best solutions, some resources
will be assigned 10 day shifts only, and the rest will be assigned
7 night shifts only.  It is not a real-world scenario, but it is
the scenario in this instance.
@PP
Function
@ID @C {
bool KheWorkloadPack(KHE_SOLN soln, KHE_OPTIONS options,
  KHE_RESOURCE_TYPE rt, KHE_SOLN_ADJUSTER sa);
}
checks to see whether a scenario like the one above occurs in
@C { soln } in the tasks and resources of type @C { rt }.  If so,
it installs task bounds into the tasks of type @C { rt } to enforce
this kind of partitioned solution.  This involves making heuristic
decisions about which resources will get day shifts and which will
get night shifts.  If all goes well, it adds the task bounds it
created to @C { sa } (so that they can be removed later if desired)
and returns @C { true }.  Otherwise it changes nothing and returns
@C { false }.
# @PP
# If task bounds were added, a call to @C { KheTaskBoundGroupDelete(*tbg) }
# can be used to remove them again.  This deletes the task bound
# group, including deleting any task bounds in it, which in turn
# removes them from the tasks they were added to.
@PP
@C { KheWorkloadPack } does not assign resources to tasks.  It leaves
that to other solvers.  They are forced by the task bounds to do it
in the way that @C { KheWorkloadPack } has decided on.
@PP
The rest of this section presents the details of how
@C { KheWorkloadPack } works.  We begin with the conditions
under which it acts.
@PP
Let @M { S } be the set of event resources of type @C { rt } with
non-zero workload for which assign resource constraints with
non-zero weight are present.  (Event resources with zero workload
can be assigned freely without affecting the workload packing
calculation.  Event resources without assign resource constraints
of non-zero weight do not need to be assigned at all.)  Over all
elements of @M { S } there must be exactly two distinct workloads,
@M { w sub 1 } and @M { w sub 2 } say.  Each is a workload, not a
workload per time, and so is a positive integer.  We require
@M { w sub 1 } and @M { w sub 2 } to be relatively prime.
@PP
Now suppose that for some resource @M { r } the workload limit is
@M { W = w sub 1 w sub 2 }.  Then the only way to assign @M { r }
to event resources from @M { S } that completely exhausts @M { r }'s
workload is for all of the event resources assigned @M { r } to
have the same workload, say @M { w sub i }, and for @M { r } to be
assigned @M { W "/" w sub i } such event resources.  The proof
of this is by contradiction, as follows.
@PP
Any other arrangement leads to a total workload for @M { r } of the form
@ID @Math {
a sub 1 w sub 1 + a sub 2 w sub 2 = W = w sub 1 w sub 2
}
where @M { a sub 1 } and @M { a sub 2 } are positive integers.
Dividing through by @M { w sub 1 } shows that @M { w sub 1 }
divides @M { a sub 2 } (because @M { w sub 1 } and @M { w sub 2 }
are relatively prime), and similarly @M { w sub 2 } divides
@M { a sub 1 }.  So let @M { a sub 1 = b sub 1 w sub 2 } and
@M { a sub 2 = b sub 2 w sub 1 } where @M { b sub 1 } and
@M { b sub 2 } are positive integers.  This gives
@ID @Math {
b sub 1 w sub 2 w sub 1 + b sub 2 w sub 1 w sub 2 = w sub 1 w sub 2
}
Dividing by @M { w sub 1 w sub 2 } gives @M { b sub 1 + b sub 2 = 1 },
a contradiction, because @M { b sub 1 } and @M { b sub 2 } are positive
integers.
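@PP
The claim is easy to confirm by brute force for particular workloads.
This standalone fragment (not KHE code) searches for a mixed
exhaustion of @M { W }:

```c
/* Return 1 if a1*w1 + a2*w2 == w1*w2 has a solution with a1 >= 1
   and a2 >= 1, and 0 otherwise.  By the argument above, no such
   solution exists when w1 and w2 are relatively prime. */
int has_mixed_exhaustion(int w1, int w2)
{
  int a1, W = w1 * w2;
  for (a1 = 1; a1 * w1 < W; a1++)
    if ((W - a1 * w1) % w2 == 0)   /* then a2 = (W - a1*w1)/w2 >= 1 */
      return 1;
  return 0;
}
```

It finds no solution for the relatively prime pair 7 and 10 from
@C { COI-WHPP }, but it does find one for the non-coprime pair 4
and 6 (three shifts of workload 4 plus two of workload 6 exhaust
@M { W = 24 }).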
@PP
Each resource of type @C { rt } must have limit workload constraints
of non-zero weight which give it maximum workload
@M { W = w sub 1 w sub 2 }, according to @C { KheResourceMaxWorkload }
(Section {@NumberOf solutions.avail.functions}).  If there are
@M { n } resources of type @C { rt }, then the total workload supply
is @M { nW }.  The total workload of the elements of @M { S } must be
at least @M { nW }, so that workload demand equals or exceeds supply.
@PP
Finally, we need to decide which resources to assign to the event
resources with workload @M { w sub 1 }, and which to assign to the
event resources with workload @M { w sub 2 }.  We do this as follows.
@PP
Let @M { S sub 1 } be the set of event resources from @M { S }
whose workload is @M { w sub 1 }, and let @M { S sub 2 } be the set
of event resources from @M { S } whose workload is @M { w sub 2 }.
Let @M { R = lbrace r sub 1 ,..., r sub n rbrace } be the resources
of type @C { rt }.  We need to partition @M { R } into @M { R sub 1 },
the resource group of the task bound applied to the event resources
of @M { S sub 1 }, and @M { R sub 2 }, the resource group of the
task bound applied to the event resources of @M { S sub 2 }.
@PP
Each event resource of @M { S sub 1 } has workload @M { w sub 1 },
making a total workload of @M { bar S sub 1 bar w sub 1 }.  From
the work done above, each resource has maximum workload
@M { W = w sub 1 w sub 2 }, so the number of resources needed
to cover the event resources of @M { S sub 1 } is
@ID @Math {
c sub 1 = bar S sub 1 bar w sub 1 ` "/" w sub 1 w sub 2
= bar S sub 1 bar ` "/" w sub 2
}
Similarly, @M { c sub 2 = bar S sub 2 bar ` "/" w sub 1 } resources
are needed to cover the event resources of @M { S sub 2 }.  Suitable
resources can be selected using a maximum flow in this graph:
@CD @Diag {
@Tbl
   i { ctr }
   mh { 1.2c }
   mv { 0.0c }
   aformat { @Cell A | @Cell B | @Cell C | @Cell D }
{
@Rowa
    B { R1:: @Circle @M { r sub 1 } }
@Rowa
    C { S1:: @Circle @M { S sub 1 } }
@Rowa
    B { R2:: @Circle @M { r sub 2 } }
@Rowa
    A { SOURCE:: @Circle {} }
    D { SINK:: @Circle {} }
@Rowa
    B { ... }
@Rowa
    C { S2:: @Circle @M { S sub 2 } }
@Rowa
    B { RN:: @Circle @M { r sub n } }
}
//
@Arrow from { SOURCE } to { R1 } ylabel { 1 }
@Arrow from { SOURCE } to { R2 } ylabel { 1 }
@Arrow from { SOURCE } to { RN } ylabel { 1 }
@Arrow from { R1 } to { S1 } ylabel { 1 }
@Arrow from { R2 } to { S1 } ylabel { 1 }
@Arrow from { R2 } to { S2 } ylabel { 1 }
@Arrow from { RN } to { S2 } ylabel { 1 }
@Arrow from { S1 } to { SINK } ylabel { @M { c sub 1 } }
@Arrow from { S2 } to { SINK } ylabel { @M { c sub 2 } }
}
The flow along each edge is an integral number of resources.
Each resource @M { r sub i } is represented by a node at the end
of an edge of capacity 1 from the source, ensuring that each
resource is utilized at most once.  Each set of event resources
@M { S sub j } is represented by a node at the start of an edge
of capacity @M { c sub j } to the sink, ensuring that at most
@M { c sub j } resources are utilized by the event resources
of @M { S sub j }.  An edge of capacity 1 joins each @M { r sub i }
to each @M { S sub j } such that @M { r sub i } is qualified
for @M { S sub j }, in the sense that @M { r sub i } lies in
the domain of sufficiently many elements of @M { S sub j } to
consume its entire maximum workload.
@PP
We don't actually build this flow graph, although we could.
Instead, we find all the @M { r sub i } which are qualified
for @M { S sub 1 } only and place them into @M { R sub 1 },
taking care not to add more than @M { c sub 1 } resources
to @M { R sub 1 }.  Then we find all the @M { r sub i }
which are qualified for @M { S sub 2 } only and place them
into @M { R sub 2 }, taking care not to add more than
@M { c sub 2 } resources to @M { R sub 2 }.  Finally we
make arbitrary assignments of the remaining resources to
@M { R sub 1 } or @M { R sub 2 }, again taking care not
to exceed the @M { c sub 1 } and @M { c sub 2 } limits.
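@PP
The three passes can be sketched in standalone C.  The encoding
below is ours (1 means qualified for @M { S sub 1 } only, 2 for
@M { S sub 2 } only, 3 for both), and every resource is assumed
qualified for at least one side:

```c
/* Greedy stand-in for the maximum flow described above.  side[i]
   receives 1 or 2, partitioning the resources into R1 and R2.
   Returns 1 on success, or 0 when some resource cannot be placed
   within the c1 and c2 limits, in which case workload packing
   would be abandoned. */
int assign_resources(const int *qual, int n, int c1, int c2, int *side)
{
  int i, pass, n1 = 0, n2 = 0;
  for (pass = 1; pass <= 3; pass++)
    for (i = 0; i < n; i++) {
      if (qual[i] != pass)
        continue;                  /* pass 3 places the flexible ones */
      if ((qual[i] & 1) != 0 && n1 < c1) {
        side[i] = 1; n1++;
      } else if ((qual[i] & 2) != 0 && n2 < c2) {
        side[i] = 2; n2++;
      } else
        return 0;                  /* cannot utilize this resource */
    }
  return 1;
}
```

For instance, with four resources qualified for 1, 2, 3 and 3 and
limits @M { c sub 1 = c sub 2 = 2 }, the two flexible resources
are split between the two sides; with three resources qualified
for @M { S sub 1 } only and @M { c sub 1 = 2 }, the third cannot
be placed and the function fails.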
@PP
At various points in this algorithm we may find that we are
unable to utilize some resource.  In that case the maximum
flow is less than @M { n }, so we abandon workload packing.
@End @SubSection

#@SubSection
#    @Title { Another form of resource similarity }
#    @Tag { resource_structural.supply_and_demand.similarity }
#@Begin
#@LP
#Function @C { KheResourceSimilar } (Section {@NumberOf resources_infer})
#is offered by the KHE platform for deciding whether two resources
#are similar.  This section offers a different form of the same idea:
#@ID @C {
#bool KheResourceSimilarDomains(KHE_RESOURCE r1, KHE_RESOURCE r2,
#  float frac);
#}
#Here @C { r1 } and @C { r2 } are distinct non-@C { NULL } resources,
#and @C { frac } is a floating-point number between 0.0 and 1.0
#inclusive.  @C { KheResourceSimilarDomains } returns @C { true }
#when at least @C { frac } of the tasks currently assigned @C { r1 }
#could also be assigned @C { r2 }, in the sense that their domains
#allow that assignment, and at least @C { frac } of the tasks
#currently assigned @C { r2 } could also be assigned @C { r1 }.
#@End @SubSection

@EndSubSections
@End @Section

@Section
    @Title { Solution adjustments }
    @Tag { resource_structural.adjust }
@Begin
@LP
This section presents solution adjustments
(Section {@NumberOf general_solvers.adjust}) for resource-structural
applications.
@BeginSubSections

@SubSection
    @Title { Changing the multipliers of cluster busy times monitors }
    @Tag { resource_structural.adjust.multiplier }
@Begin
@LP
Cluster busy times monitors formerly had a @I { multiplier }, which
was an integer that their true costs were multiplied by.  Multipliers
have been made redundant by @C { KheMonitorSetCombinedWeight }
(Section {@NumberOf monitoring_monitors}), but the solver they
supported is still available, with a change of interface:
@ID @C {
void KheSetClusterMonitorMultipliers(KHE_SOLN soln,
  KHE_SOLN_ADJUSTER sa, char *str, int val);
}
This finds each cluster busy times constraint @C { c } whose name
or Id contains @C { str }, and uses calls to
@C { KheSolnAdjusterMonitorChangeWeight } to multiply the combined weight of
each monitor derived from @C { c } by @C { val }.  If @C { sa != NULL },
then the monitors can easily be returned to their previous state later:
@ID @C {
sa = KheSolnAdjusterMake(soln);
KheSetClusterMonitorMultipliers(soln, sa, str, val);
do_something;
KheSolnAdjusterDelete(sa);
}
The multipliers are in place while @C { do_something } is running,
and removed afterwards.
@End @SubSection

@SubSection
    @Title { Tilting the plateau }
    @Tag { resource_structural.adjust.tilting }
@Begin
@LP
This section documents a rather left-field idea, which we call
@I { tilting the plateau }.  The idea is to consider a defect
near the start of the timetable to be worse than an equally
bad defect near the end of the timetable.  A local search method
like ejection chains will then believe that it has succeeded
when it moves a defect towards the end of the timetable.  The
hope is that over the course of several repairs, defects will
move all the way to the end and disappear.
@PP
The function for this is
@ID @C {
void KheTiltPlateau(KHE_SOLN soln, KHE_SOLN_ADJUSTER sa);
}
For each monitor @M { m } of @C { soln } whose combined weight @M { w }
satisfies @M { w > 0 }, @C { KheMonitorSetCombinedWeight }
(Section {@NumberOf monitoring_monitors}) is called to change the combined
weight of @M { m } from @M { w } to @M { wT - t }, where @M { T }
is the number of times in the instance, and @M { t } is the index of the
first time monitored by @M { m }, as returned by @C { KheMonitorTimeRange }
(Section {@NumberOf monitoring.sweep_times}), or 0 if
@C { KheMonitorTimeRange } returns @C { false }.  Multiplying
every monitor's weight by @M { T } does not really change the instance,
but subtracting @M { t } makes monitors near the end of the timetable
less costly than monitors near the start.
@PP
When @M { m } is a limit active intervals monitor whose combined
weight @M { w } satisfies @M { w > 0 }, the procedure is somewhat
different.  The new combined weight is @M { wT }, not @M { wT - t };
but then @M { m } itself is informed that tilting is in force, by
a call to @C { KheLimitActiveIntervalsMonitorSetTilt }
(Section {@NumberOf monitoring.limitactive}).  This causes @M { m }
to perform its own subtraction of @M { t } from each cost it reports,
but using a different value of @M { t } for each defective interval,
namely the index of the first time in that interval.  In this way,
defective intervals near the end cost less than defective intervals
near the start.
@PP
@C { KheTiltPlateau } may be used in conjunction with a solution adjuster:
@ID @C {
sa = KheSolnAdjusterMake(soln);
KheTiltPlateau(soln, sa);
do_something;
KheSolnAdjusterDelete(sa);
}
The tilt applies during @C { do_something }; @C { KheSolnAdjusterDelete }
removes it, including making the appropriate calls to
@C { KheLimitActiveIntervalsMonitorClearTilt }.  Alternatively,
the @C { sa } parameter of @C { KheTiltPlateau } may be @C { NULL },
but then there will be no simple way to remove the tilt.
@End @SubSection

@SubSection
    @Title { Propagating unavailable times to resource monitors }
    @Tag { resource_structural.adjust.unavail }
@Begin
@LP
A resource @M { r }'s @I { unavailable times }, @M { U sub r }, is a
set of times taken from certain monitors of non-zero weight that apply
to @M { r }:  all times in avoid unavailable times monitors, all times
in limit busy times monitors with maximum limit 0, and all times
in positive time groups of cluster busy times constraints with
maximum limit 0.  In this section we do not care about the weight of
these monitors, provided it is non-zero.  We simply combine all these
times into @M { U sub r }.
@PP
Suppose that @M { r } has a cluster busy times or limit active intervals
monitor @M { m } with a time group @M { T } such that @M { T subseteq U sub r }.
Then, although @M { T } could be busy, it is not likely to be busy,
and it is reasonable to let @M { m } know this, by calling
@C { KheClusterBusyTimesMonitorSetNotBusyState }
(Section {@NumberOf monitoring.clusterbusy}) or
@C { KheLimitActiveIntervalsMonitorSetNotBusyState }
(Section {@NumberOf monitoring.limitactive}).
@PP
KHE offers a solver that implements this idea:
@ID @C {
bool KhePropagateUnavailableTimes(KHE_SOLN soln, KHE_RESOURCE_TYPE rt);
}
For each resource @M { r } of type @C { rt } in @C { soln }'s instance
(or for each resource of the instance if @C { rt } is @C { NULL }), it
calculates @M { U sub r }, and, if @M { U sub r } is non-empty, it
checks every time group @M { T } in every cluster busy times and
limit active intervals monitor for @M { r }.  For each
@M { T subseteq U sub r }, it calls the function appropriate to
the monitor, with @C { active } set to @C { false } if @M { T }
is positive, and to @C { true } if @M { T } is negative.  It
returns @C { true } if it changed anything.
@PP
There is no corresponding function to undo these settings.  As
cutoff indexes increase, the settings become irrelevant anyway.
@End @SubSection

@SubSection
    @Title { Changing the minimum limits of cluster busy times monitors }
    @Tag { resource_structural.adjust.minimums }
@Begin
@LP
Cluster busy times monitors have a @C { KheClusterBusyTimesMonitorSetMinimum }
operation (Section {@NumberOf monitoring.clusterbusy}) which changes
their minimum limits.  This section presents a method of making these
changes which might be useful during solving.
@PP
This method calculates the demand for resources at particular times,
which only really makes sense after all times are assigned.  So it
could reasonably be classified as a resource structural solver, but
since it helps to adjust monitor limits it has been documented here.
@PP
Consider this example from nurse rostering.  Suppose each resource
has a maximum limit on the number of weekends it can be busy.  Since
each resource can work at most 2 shifts per weekend, summing up
these maximum limits and multiplying by 2 gives the maximum number
of shifts that resources can work on weekends.  We call this the
@I { supply } of weekend shifts.
@PP
Now suppose we find the number of weekend shifts that the instance
requires nurses for.  Call this the @I { demand } for weekend shifts.
@PP
If demand equals or exceeds supply, each resource needs to work its
maximum number of weekends, or else some demands will not be covered.
In that case, the resources' maximum limits are also minimum limits.
The solver described here calculates supply and demand.  It leaves it
to the user to call @C { KheClusterBusyTimesMonitorSetMinimum }, or whatever.
# record its results.  It takes all of the cluster busy times constraints
# of the instance, groups them so that constraints with the same time groups
# lie in one group, then does the calculations for those constraints.
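The weekend example can be sketched as follows, with invented numbers
not drawn from any real instance:
@ID @C {
/* 10 nurses, each with a maximum limit of 3 busy weekends; at most
   2 shifts per weekend */
int supply = 2 * (10 * 3);  /* 60 weekend shifts can be worked */
int demand = 70;            /* weekend shifts required (given) */
if( demand >= supply )
  /* each nurse must work its maximum number of weekends, so the
     maximum limits are also minimum limits */;
}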
@PP
To create a solver for doing this work, call
@ID @C {
KHE_CLUSTER_MINIMUM_SOLVER KheClusterMinimumSolverMake(HA_ARENA a);
}
It uses memory taken from arena @C { a }.  There is no operation to
delete the solver; it is deleted when @C { a } is freed.  To carry
out one solve, call
@ID @C {
void KheClusterMinimumSolverSolve(KHE_CLUSTER_MINIMUM_SOLVER cms,
  KHE_SOLN soln, KHE_OPTIONS options, KHE_RESOURCE_TYPE rt);
}
It uses @C { options } to find the common frame and event timetable
monitor.  It considers tasks and resources of type @C { rt } only.
It can be called any number of times to solve problems with unrelated
values of @C { soln }, @C { options }, and @C { rt }.
@PP
The attributes of the most recent solve may be found by calling
@ID @C {
KHE_SOLN KheClusterMinimumSolverSoln(KHE_CLUSTER_MINIMUM_SOLVER cms);
KHE_OPTIONS KheClusterMinimumSolverOptions(
  KHE_CLUSTER_MINIMUM_SOLVER cms);
KHE_RESOURCE_TYPE KheClusterMinimumSolverResourceType(
  KHE_CLUSTER_MINIMUM_SOLVER cms);
}
These will all be @C { NULL } before the first solve.  If a new
solve is begun with the same attributes as the previous solve,
it will produce the same outcome if the solution has not changed.
# When the solver is no longer needed,
# @ID @C {
# void KheClusterMinimumSolverDelete(KHE_CLUSTER_MINIMUM_SOLVER cms);
# }
# may be called to delete it (by recycling its arena back to @C { soln }).
@PP
The solve first finds the constraints suited to what it does:  all
cluster busy times constraints with non-zero cost and a non-zero number
of time groups which are pairwise disjoint (always true in practice)
and either all positive, in which case a non-trivial maximum limit
must be present, or all negative, in which case a non-trivial minimum
limit must be present.
@PP
For each maximal non-empty subset of these constraints with the same time
groups (ignoring polarity) and the same `applies to' time group, the solve
makes one @I { group }, with its own supply and demand, for each offset
of the `applies to' time group.  To visit these groups, call
@ID @C {
int KheClusterMinimumSolverGroupCount(KHE_CLUSTER_MINIMUM_SOLVER cms);
KHE_CLUSTER_MINIMUM_GROUP KheClusterMinimumSolverGroup(
  KHE_CLUSTER_MINIMUM_SOLVER cms, int i);
}
There are several operations for querying a group.  To visit its
constraints, call
@ID {0.98 1.0} @Scale @C {
int KheClusterMinimumGroupConstraintCount(KHE_CLUSTER_MINIMUM_GROUP cmg);
KHE_CLUSTER_BUSY_TIMES_CONSTRAINT KheClusterMinimumGroupConstraint(
  KHE_CLUSTER_MINIMUM_GROUP cmg, int i);
}
To retrieve its constraint offset, call
@ID {0.98 1.0} @Scale @C {
int KheClusterMinimumGroupConstraintOffset(KHE_CLUSTER_MINIMUM_GROUP cmg);
}
The time groups may be retrieved from its first constraint.  To find
its supply, call
@ID @C {
int KheClusterMinimumGroupSupply(KHE_CLUSTER_MINIMUM_GROUP cmg);
}
This is calculated as described above for weekends; here is a
fully general description.
@PP
For each constraint @C { c } of @C { cmg } we calculate a supply, as
follows.  Suppose first that the constraint has non-trivial maximum
limit @C { max } and that all its time groups are positive.  Find,
for each time group @C { tg } of @C { c }, the number of frame time
groups that @C { tg } intersects with (taking the offset into
account).  This is the maximum number of times from @C { tg } that
a resource can be busy for.  Take the @C { max } largest of these
numbers and add them to get the supply of @C { c }.
@PP
If @C { c } has a non-trivial minimum limit @C { min } and all
its time groups are negative, set @C { max } to the number of
time groups minus @C { min } and proceed as in the positive case.
(For more on this transformation, see the theorem at the end of
Section {@NumberOf constraints.clusterbusy}.)
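The positive case above can be sketched as follows (the function and
variable names are invented for illustration; @C { isect[i] } holds the
number of frame time groups that the @C { i }th time group of the
constraint intersects):
@ID @C {
/* sketch with invented names: return the sum of the max largest
   entries of isect[0 .. n-1] */
int ConstraintSupply(int *isect, int n, int max)
{
  int i, j, tmp, res;

  /* selection sort into decreasing order */
  for( i = 0;  i < n;  i++ )
    for( j = i + 1;  j < n;  j++ )
      if( isect[j] > isect[i] )
        tmp = isect[i], isect[i] = isect[j], isect[j] = tmp;

  /* add up the max largest entries */
  res = 0;
  for( i = 0;  i < max && i < n;  i++ )
    res += isect[i];
  return res;
}
}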
@PP
For each resource @C { r } of type @C { rt } we find a supply, as
follows.  If @C { r } is a point of application of at least one
constraint, its supply is the minimum of the supplies of its
constraints.  Otherwise, its supply is the sum, over all time
groups @C { tg }, of the number of frame time groups @C { tg }
intersects with.  @C { KheClusterMinimumGroupSupply } is the
sum, over all resources @C { r }, of the supply of @C { r }.
@PP
To find a group's demand, call
@ID @C {
int KheClusterMinimumGroupDemand(KHE_CLUSTER_MINIMUM_GROUP cmg);
}
This is the sum, over all times in the time groups of the group's
constraints (taking the offset into account), of the number of
tasks of type @C { rt } running at each time.
@PP
Finally,
@ID @C {
void KheClusterMinimumGroupDebug(KHE_CLUSTER_MINIMUM_GROUP cmg,
  int verbosity, int indent, FILE *fp);
}
produces a debug print of @C { cmg } onto @C { fp } with the given
verbosity and indent.
# To find the monitors associated with the group (that is, the
# monitors derived from the group's constraints, and its offset), call
# @ID @C {
# int KheClusterMinimumGroupMonitorCount(KHE_CLUSTER_MINIMUM_GROUP cmg);
# KHE_CLUSTER_BUSY_TIMES_MONITOR KheClusterMinimumGroupMonitor(
#   KHE_CLUSTER_MINIMUM_GROUP cmg, int i);
# }
@PP
There is also an operation for finding the group of a given monitor:
@ID @C {
bool KheClusterMinimumSolverMonitorGroup(KHE_CLUSTER_MINIMUM_SOLVER cms,
  KHE_CLUSTER_BUSY_TIMES_MONITOR cbtm, KHE_CLUSTER_MINIMUM_GROUP *cmg);
}
If @C { cms } has a group containing @C { cbtm }'s constraint and offset
(there can be at most one), this function returns @C { true } and sets
@C { *cmg } to that group.  Otherwise it returns @C { false } and sets
@C { *cmg } to @C { NULL }.
@PP
It is up to the caller to take it from here.  For example, after
carrying out a solve, for each cluster monitor @C { m } one could
call @C { KheClusterMinimumSolverMonitorGroup } to see whether it is
subject to a group.  Then if that group's demand equals or exceeds
its supply, a call to @C { KheClusterBusyTimesMonitorSetMinimum }
increases @C { m }'s minimum limit.  And so on.  However, the
solver does offer some convenience functions to help with this:
@ID @C {
void KheClusterMinimumSolverSetBegin(KHE_CLUSTER_MINIMUM_SOLVER cms);
void KheClusterMinimumSolverSet(KHE_CLUSTER_MINIMUM_SOLVER cms,
  KHE_CLUSTER_BUSY_TIMES_MONITOR m, int val);
void KheClusterMinimumSolverSetEnd(KHE_CLUSTER_MINIMUM_SOLVER cms,
  bool undo);
}
@C { KheClusterMinimumSolverSetBegin } begins a run of changes to
monitors' minimum limits.  @C { KheClusterMinimumSolverSet } makes a
call to @C { KheClusterBusyTimesMonitorSetMinimum }, and remembers
that the call was made.  @C { KheClusterMinimumSolverSetEnd } ends
the run of changes, and if @C { undo } is @C { true } it also undoes
them (in reverse order), returning the monitor limits to their values
when the run began.  Use of these functions is optional.
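The pattern just described might be sketched as follows, where @C { m }
ranges over the cluster busy times monitors of interest, and a function
@C { KheClusterBusyTimesMonitorMaximum } for retrieving a monitor's
maximum limit is assumed, not defined in this section:
@ID @C {
/* sketch: KheClusterBusyTimesMonitorMaximum is an assumption */
KheClusterMinimumSolverSolve(cms, soln, options, rt);
KheClusterMinimumSolverSetBegin(cms);
if( KheClusterMinimumSolverMonitorGroup(cms, m, &cmg) &&
    KheClusterMinimumGroupDemand(cmg) >= KheClusterMinimumGroupSupply(cmg) )
  KheClusterMinimumSolverSet(cms, m, KheClusterBusyTimesMonitorMaximum(m));
KheClusterMinimumSolverSetEnd(cms, false);
}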
@PP
For convenience there is also
@ID @C {
void KheClusterMinimumSolverSetMulti(KHE_CLUSTER_MINIMUM_SOLVER cms,
  KHE_RESOURCE_GROUP rg);
}
where @C { rg }'s resource type must equal @C { cms }'s.  It calls
@C { KheClusterMinimumSolverMonitorGroup } for each cluster busy
times monitor @C { m } for each resource of @C { rg }.  If that
returns @C { true } and the group's demand equals or exceeds its
supply, then @C { m }'s minimum limit is changed to its maximum
limit.  Neither @C { KheClusterMinimumSolverSetBegin } nor
@C { KheClusterMinimumSolverSetEnd } is called.  The user must
call @C { KheClusterMinimumSolverSetBegin } first, as usual, and
is free to call @C { KheClusterMinimumSolverSetEnd } immediately
with @C { undo } set to @C { false }, or later with @C { undo }
set to @C { true }.  It is probably unwise to omit the call to
@C { KheClusterMinimumSolverSetEnd } altogether, since that leaves
@C { cms } unable to accept further calls to
@C { KheClusterMinimumSolverSetBegin }.
@PP
Finally, function
@ID @C {
void KheClusterMinimumSolverDebug(KHE_CLUSTER_MINIMUM_SOLVER cms,
  int verbosity, int indent, FILE *fp);
}
produces the usual debug print of @C { cms } onto @C { fp } with
the given verbosity and indent.
@PP
Cluster minimum solvers deal only with cluster busy times constraints.
Other constraints might help to reduce supply further.  For example, if
a resource is unavailable for an entire day, that will reduce supply by
1.  At present these kinds of ideas are not taken into account.
@End @SubSection

@SubSection
    @Title { Unbalanced complete weekends }
    @Tag { resource_structural.adjust.complete_weekends }
@Begin
@LP
@I { Complete weekends } constraints, saying that each resource
should either be busy on both days of each weekend, or free on
both days, are common in nurse rostering.  They help to minimize
the number of weekends that nurses work.  But they can cause
a rather obscure problem, which we uncover and deal with in
this section.
@PP
A task is @I required (meaning that assignment of the task is
required) when it has non-zero non-assignment cost, according
to @C { KheTaskNonAsstAndAsstCost }
(Section {@NumberOf resource_structural.mtask_finding.ops}).
This has nothing to do with required constraints; the cost does
not have to be a hard cost.  A task is @I optional (meaning that
assignment of the task is optional) when it has zero non-assignment
cost, according to @C { KheTaskNonAsstAndAsstCost }.  Every task
is either required or optional.
@PP
If one day of a weekend subject to a complete weekends constraint
is @I busier than (has more required tasks than) the other, then
clearly something has to give.  There are three possibilities:
@NumberedList

@LI {
Some nurses have complete weekends defects.
}

@LI {
On the busier day, some required tasks are unassigned.
}

@LI {
On the other (less busy) day, some optional tasks are assigned.
}

@EndList
These could occur together.  The first two give rise directly to
defects---they are highly visible.  The third does not give rise
to any defects directly, but it will have a cost if there is a
general shortage of nurses, because assigning nurses to optional
tasks adds to the general overload.
@PP
Despite the absence of direct defects, the third possibility might
not be the best, in which case we want to steer solvers away from
it.  There are several ways to do this, which we'll come to shortly.
Whichever method is used, the aim is to rule out the third possibility
without biasing the solve towards either of the others.
# ; we do it by fixing the
# assignments of the optional tasks on the less busy day before they
# can become assigned.  This rules out the third possibility without
# biasing the solve towards either of the others.
@PP
Function
@ID @C {
void KheBalanceWeekends(KHE_SOLN soln, KHE_OPTIONS options,
  KHE_RESOURCE_TYPE rt, KHE_SOLN_ADJUSTER sa);
}
carries out this programme for @C { soln }'s resources and tasks
of type @C { rt }.  If @C { sa != NULL }, it uses solution adjuster
@C { sa } to record the changes it made, so that someone else
can undo them later.  Parameter @C { options } is needed for
accessing the common frame and the event timetable monitor.
There is also this option for controlling @C { KheBalanceWeekends }:
@TaggedList

@DTI { @F rs_balance_weekends_method } {
A string option that determines what is done when an unbalanced
weekend is discovered.  Value @C { "none" } turns weekend balancing off.
Value @C { "fix_optional" } fixes all optional tasks on the less busy
day.  The third and default value, @C { "fix_required" }, fixes one or
a few required tasks on the busier day, so as to equalize the number of
unfixed required tasks on the two days.  More details are given below.
}

# @DTI { @F rs_balance_weekends_no_undo } {
# A Boolean option with default value @C { false }.  When it is
# changed to @C { true }, any changes are not added to @C { sa },
# so when @C { sa } is deleted later they are not undone.
# }

@EndList
# passing
# to function @C { KheBalanceSolverMake }
# (Section {@NumberOf resource_structural.supply_and_demand.balance}),
# which uses it to find the common frame.
In detail, @C { KheBalanceWeekends } works as follows.
@PP
Cluster busy times constraints have offsets; each legal offset
defines a separate constraint.  @C { KheBalanceWeekends } handles
this, but for simplicity we will say `constraint' here when we really
mean `constraint plus offset'.
@PP
@C { KheBalanceWeekends } begins by using a balance solver
(Section {@NumberOf resource_structural.supply_and_demand.balance})
to compare the overall supply and demand for resources of type
@C { rt }.  If supply exceeds demand, case (3) above will not
necessarily generate any cost, so the solve returns early,
having changed nothing.
@PP
Next, the solve finds all cluster busy times constraints which apply
to resources of type @C { rt }, have non-zero weight, and contain
exactly two time groups (both positive), minimum limit 2, maximum
limit 2, and allow zero flag @C { true }.  These are the constraints
that request complete weekends, although no-one checks (or needs to
check) that the two time groups represent a Saturday and a Sunday.
(A check is made that the two time groups are disjoint.)
It groups these constraints into equivalence classes, placing two
constraints into the same class when they have the same time groups.
It then handles each class @M { C } separately.
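The test applied to each candidate constraint @C { c } might be
sketched as follows (the constraint query function names are
assumptions, and the checks that the two time groups are positive
and disjoint are omitted):
@ID @C {
/* sketch: query function names are assumed, not verified */
bool complete_weekends =
  KheConstraintCombinedWeight((KHE_CONSTRAINT) c) > 0 &&
  KheClusterBusyTimesConstraintTimeGroupCount(c) == 2 &&
  KheClusterBusyTimesConstraintMinimum(c) == 2 &&
  KheClusterBusyTimesConstraintMaximum(c) == 2 &&
  KheClusterBusyTimesConstraintAllowZero(c);
}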
@PP
The first step in handling @M { C } is to check that its constraints,
taken together, apply to every resource of type @C { rt }.  If not,
@M { C } is skipped, because not all resources require complete
weekends.
@PP
Let @M { R(d) } be the number of required tasks running on day @M { d }.
Let @M { d sub 1 } and @M { d sub 2 } be the two days that @M { C }
monitors.  If @M { R( d sub 1 ) = R( d sub 2 ) } there is nothing to
do and we skip @M { C }.  If @M { R( d sub 1 ) < R( d sub 2 ) } we
swap the names of the two days.  So we can assume from now on that
@M { R( d sub 1 ) > R( d sub 2 ) }.  This implies @M { R( d sub 1 ) > 0 }.
We've previously called @M { d sub 1 } the busier day, and @M { d sub 2 }
the less busy day.
@PP
Let @M { O(d) } be the number of optional tasks running on day @M { d }.
If @M { O( d sub 2 ) = 0 }, then our plan of fixing the assignments
of the optional tasks of @M { d sub 2 } changes nothing, because
there are no such tasks.  So there is nothing to do and we skip
@M { C }.  So we can assume @M { O( d sub 2 ) > 0 }.
@PP
We also check at this point whether any of the @M { O( d sub 2 ) }
optional tasks are currently assigned.  If any are, then what we are
trying to prevent has already occurred, so again we skip @M { C }.
@PP
The next step is to work out whether the costs involved are such that
case (3) above is not the best choice.  We'll return to that in a
moment.  If (3) is not the best choice, then the solution is changed
as determined by the @F rs_balance_weekends_method option defined
above.  If @C { sa != NULL }, these changes are recorded in @C { sa }
so that someone else can undo them later.
# If option @F rs_balance_weekends_no_undo is @C { false }, these
# changes are recorded in @C { sa } so that they can be undone later.
@PP
The cost calculation that decides whether to proceed is as follows.
Suppose that all required tasks are assigned except for task @M { t }
on @M { d sub 1 }, whose non-assignment cost, @M { n(t) }, is minimal.
We find the cost of carrying on for each of the three cases above.  Let
@M { c } be the cost incurred by resource constraints of assigning a task,
given that demand exceeds supply, as returned by
@C { KheBalanceSolverMarginalCost }
(Section {@NumberOf resource_structural.supply_and_demand.balance}).
The three cases and their costs are:
@NumberedList

@LI {
Assign a nurse to @M { t } only.  The cost of this (call it @M { c sub 1 })
is the initial cost, minus @M { n(t) }, plus @M { c }, plus the minimum of
the weights of the constraints of @M { C }.
}

@LI {
Leave @M { t } unassigned.  Then the cost @M { c sub 2 } is the initial
cost, since the solution does not change.
}

@LI {
Assign a nurse to @M { t } and to a task @M { t prime } on
@M { d sub 2 } that does not need assignment.  Then the cost
@M { c sub 3 } is the initial cost, minus @M { n(t) }, plus @M { 2c }.
If all of the optional tasks on @M { d sub 2 } have non-zero
assignment cost, add the minimum of those costs to @M { c sub 3 }.
}

@EndList
If @M { c sub 3 > c sub 1 } or @M { c sub 3 > c sub 2 }, then we want
to avoid the third case, so we fix the optional tasks on @M { d sub 2 }.
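The comparison can be sketched as follows, with hypothetical variables:
@C { init } is the initial cost, @C { n_t } is @M { n(t) }, @C { min_w }
is the minimum constraint weight of @M { C }, and @C { min_opt } is the
minimum assignment cost of the optional tasks on @M { d sub 2 }, or 0
if some of them have none:
@ID @C {
/* sketch with hypothetical variables, following the three cases */
KHE_COST c1 = init - n_t + c + min_w;        /* assign a nurse to t only */
KHE_COST c2 = init;                          /* leave t unassigned */
KHE_COST c3 = init - n_t + 2 * c + min_opt;  /* assign t and t' */
if( c3 > c1 || c3 > c2 )
  /* avoid case (3), as determined by rs_balance_weekends_method */;
}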
@PP
When @F rs_balance_weekends_method has value @C { "fix_optional" },
what to do is clear:  fix all optional tasks on the less busy
day.  When the value is @C { "fix_required" }, we need to fix one
or more required tasks on the busier day so that the number of
unfixed required tasks is equal on the two days.  The number
to fix is clear, but which ones?  This is decided as follows.
@PP
Build a bipartite graph whose demand nodes are the required tasks
on the busier day, and whose supply nodes are the required tasks
on the less busy day.  Join two nodes by an edge when their
tasks' domains have a non-empty intersection, weighted by
the cost of the initial solution minus the non-assignment
costs of the two endpoints (this will favour tasks with large
non-assignment costs), breaking ties by the difference
in offset in the days frame of the starting times of the two
tasks (this will favour pairs of tasks for the same shift).
Find a maximum matching in this graph and fix every demand
node that fails to match.
@End @SubSection

@SubSection
    @Title { Allowing split assignments }
    @Tag { resource_structural.adjust.allow_splits }
@Begin
@LP
A good way to minimize split assignments is to prohibit them at
first but allow them later.  To change a tasking from the first
state to the second, call
@ID @C {
bool KheTaskingAllowSplitAssignments(KHE_TASKING tasking,
  bool unassigned_only);
}
It unfixes and unassigns all tasks assigned to the tasks of
@C { tasking } and adds them to @C { tasking }, returning
@C { true } if it changed anything.  If one of the original
unfixed tasks is assigned (to a cycle task), the tasks assigned
to it are assigned to that task, so that existing resource
assignments are not forgotten.  If @C { unassigned_only } is
@C { true }, only the unassigned tasks of @C { tasking } are
affected.  (This option is included for completeness, but it
is not recommended, since it leaves few choices open.)
@C { KheTaskingAllowSplitAssignments } preserves the resource
assignment invariant.
@End @SubSection

@SubSection
    @Title { Enlarging task domains }
    @Tag { resource_structural.adjust.enlarge_domains }
@Begin
@LP
If any room or any teacher is better than none, then it will
be worth assigning any resource to tasks that remain unassigned
at the end of resource assignment.  Function
@ID { 0.98 1.0 } @Scale @C {
void KheTaskingEnlargeDomains(KHE_TASKING tasking, bool unassigned_only);
}
permits this by enlarging the domains of the tasks of @C { tasking }
and any tasks assigned to them (and so on recursively) to the full
set of resources of their resource types.  If @C { unassigned_only }
is @C { true }, only the unassigned tasks of @C { tasking } are affected.
The tasks are visited in postorder---that is, a task's domain is
enlarged only after the domains of the tasks assigned to it have
been enlarged---ensuring that the operation cannot fail.
Preassigned tasks are not enlarged.
@PP
This operation works, naturally, by deleting all task bounds from
the tasks it changes.  Any task bounds that become applicable to no
tasks as a result of this are deleted.
@End @SubSection

@EndSubSections
@End @Section

#@Section
#    @Title { Grouping by resource constraints (old) }
#    @Tag { resource_structural.constraints }
#@Begin
#@LP
#@I { Grouping by resource constraints } is KHE's term for a method
#of grouping tasks together, forcing the tasks in each group to
#be assigned the same resource, when all other ways of assigning
#resources to those tasks can be shown to have non-zero cost.  That
#does not mean that those tasks will always be assigned the same resource
#in good solutions, any more than, say, a constraint requiring nurses
#to work complete weekends is always satisfied in good solutions.
#However, in practice those tasks usually do end up being assigned the
#same resource, so it makes sense to require that, at least to begin
#with.  Later we can remove the groupings and see what happens.
#@PP
#@C { KheTaskTreeMake } also groups tasks, but its groups are based
#on avoid split assignments constraints, whereas here we make groups
#based on resource constraints.
#@PP
#The function is
#@ID @C {
#bool KheGroupByResourceConstraints(KHE_SOLN soln, KHE_RESOURCE_TYPE rt,
#  KHE_OPTIONS options, KHE_TASK_SET ts);
#}
#There is no @C { tasking } parameter because this kind of grouping
#cannot be applied to an arbitrary set of tasks, as it turns out.
#Instead, it applies to all tasks of @C { soln } whose resource
#type is @C { rt }, which lie in a meet which is assigned a time,
#and for which non-assignment may have a cost (discussed later).
#If @C { rt } is @C { NULL }, @C { KheGroupByResourceConstraints }
#applies itself to each of the resource types of @C { soln }'s
#instance in turn.  It tries to group these tasks, returning
#@C { true } if it groups any.
#@PP
#For each resource type, @C { KheGroupByResourceConstraints } finds
#whatever groups it can.  It makes each such @I { task group } by
#choosing one of its tasks as the @I { leader task } and assigning
#the others to it.  It makes assignments only to proper root tasks
#(non-cycle tasks not already assigned to other non-cycle tasks),
#so it does not disturb existing groups.  But it does take existing
#groups into account:  it will use tasks to which other tasks are
#assigned in its own groups.
#@PP
#Tasks which are initially assigned a resource participate in
#grouping.  Such a task may have its assignment changed to some
#other task, but in that case the other task will be assigned the
#resource.  In other words, if one task is assigned a resource
#initially, and it gets grouped, then its whole group will be
#assigned that resource afterwards.  Two tasks initially assigned
#different resources will never be grouped together.
#@PP
#On the other hand, tasks whose assignments are fixed are ignored.
#It is true that they could become leader tasks, since the assignments
#of leader tasks are not changed, but there are other considerations
#when choosing leader tasks, and to add fixing to the mix has been
#deemed by the author to be too much at present.
#In practice fixed tasks are fixed by @C { KheAssignByHistory }
#(Section {@NumberOf resource_solvers.assignment.history}), so they
#are already grouped (in effect) and it is reasonable to ignore them.
#@PP
#If @C { ts } is non-@C { NULL }, every task that
#@C { KheGroupByResourceConstraints } assigns to another task is added
#to @C { ts }.  So the groups can be removed when they are no longer
#wanted, by running through @C { ts } and unassigning its tasks.
#@C { KheTaskSetUnGroup } (Section {@NumberOf extras.task_sets}) does this.
## @PP
## if @C { r_ts } is non-@C { NULL }, every task that
## @C { KheGroupByResourceConstraints } assigns a resource to
## is added to @C { r_ts }.  Only @C { KheGroupByHistory }
## (Section {@NumberOf resource_structural.constraints.history})
## assigns resources to tasks.
#@PP
#@C { KheGroupByResourceConstraints } uses two kinds of grouping.
#The first, @I { combinatorial grouping }, tries all combinations of
#assignments over a few consecutive days, building a group when just
#one of those combinations has zero cost, according to the cluster
#busy times and limit busy times constraints that monitor those days.
#The second, @I { profile grouping }, uses limit active intervals
#constraints to find different kinds of groups.  All this is
#explained below.
#@PP
#@C { KheGroupByResourceConstraints } consults option
#@C { rs_invariant }, and also
#@TaggedList
#
#@DTI { @F rs_group_by_rc_off } @OneCol {
#A Boolean option which, when @C { true }, turns grouping by
#resource constraints off.
#}
#
#@DTI { @F rs_group_by_rc_max_days } @OneCol {
#An integer option which determines the maximum number of consecutive days
#(in fact, time groups of the common frame) examined by combinatorial grouping
#(Section {@NumberOf resource_structural.constraints.combinatorial}).
#Values 0 or 1 turn combinatorial grouping off.  The default value is 3.
#}
#
#@DTI { @F rs_group_by_rc_combinatorial_off } @OneCol {
#A Boolean option which, when @C { true }, turns combinatorial grouping off.
#}
#
#@DTI { @F rs_group_by_rc_profile_off } @OneCol {
#A Boolean option which, when @C { true }, turns profile grouping off.
#}
#
#@EndList
#It also calls @C { KheFrameOption } (Section {@NumberOf extras.frames})
#to obtain the common frame, and retrieves the event timetable monitor
#from option @C { gs_event_timetable_monitor }
#(Section {@NumberOf general_solvers.general}).
#@PP
#The following subsections describe how @C { KheGroupByResourceConstraints }
#works in detail.  It has several parts, which are available separately,
#as we will see.  For each resource type, it starts by building a tasker
#and adding the time groups of the common frame to it as overlap time
#groups (Section {@NumberOf resource_structural.constraints.taskers}).
#Then, using this tasker, it performs combinatorial grouping by calling
#@C { KheCombGrouping }
#(Section {@NumberOf resource_structural.constraints.applying}), and
#profile grouping by calling @C { KheProfileGrouping }
#(Section {@NumberOf resource_structural.constraints.profile}),
#first with @C { non_strict } set to @C { false }, then again with
#@C { non_strict } set to @C { true }.
#@BeginSubSections
#
#@SubSection
#  @Title { Taskers }
#  @Tag { resource_structural.constraints.taskers }
#@Begin
#@LP
#A @I { tasker } is an object of type @C { KHE_TASKER } that
#facilitates grouping by resource constraints.  We'll see how to
#create one shortly; but first, we introduce two other types that
#taskers use.
#@PP
#Taskers deal directly only with proper root tasks (tasks which are
#either unassigned, or assigned directly to a cycle task, that is,
#to a resource).  Tasks whose assignments are fixed are skipped over
#by taskers, as discussed above.  Taskers consider two (unfixed) proper
#root tasks to be equivalent when they have equal domains and assigned
#resources (possibly @C { NULL }), and they cover the same set of times.
#(A task @I covers a time when it, or some task assigned directly
#or indirectly to it, is running at that time.)  Equivalent tasks
#are interchangeable with respect to resource assignment:  they
#may be assigned the same resources, and their effect on resource
#constraints is the same.  Identifying equivalent tasks is vital
#in grouping; without it, virtually no group could be shown to
#be the only zero-cost option.
## @PP
## Taskers consider two tasks to be equivalent when @C { KheTaskEquivalent }
## (Section {@NumberOf solutions.tasks}) says that they are equivalent,
## and their assigned resources are equal (possibly @C { NULL }).  Two
## equivalent tasks are interchangeable with respect to resource
## assignment:  they may be assigned the same resources, and their
## effect on resource constraints is the same.  Identifying equivalent
## tasks is vital in grouping; without it, virtually no group could be
## shown to be the only zero-cost option.
#@PP
#A @I class is an object of type @C { KHE_TASKER_CLASS }, representing
#an equivalence class of tasks (a set of equivalent tasks).  Each task
#known to a tasker lies in exactly one class.  The user cannot create
#these classes; they are created and kept up to date by the tasker.
#@PP
#The tasks of an equivalence class may be visited by
#@ID @C {
#int KheTaskerClassTaskCount(KHE_TASKER_CLASS c);
#KHE_TASK KheTaskerClassTask(KHE_TASKER_CLASS c, int i);
#}
#There must be at least one task, because if a class becomes empty,
#it is deleted by the tasker.
#@PP
#The three attributes that equivalent tasks share may be retrieved by
#@ID @C {
#KHE_RESOURCE_GROUP KheTaskerClassDomain(KHE_TASKER_CLASS c);
#KHE_RESOURCE KheTaskerClassAsstResource(KHE_TASKER_CLASS c);
#KHE_TIME_SET KheTaskerClassTimeSet(KHE_TASKER_CLASS c);
#}
#These return the domain (from @C { KheTaskDomain }) that the tasks of
#@C { c } share, their assigned resource (from @C { KheTaskAsstResource }),
#and the set of times they each cover.  The user must not modify the
#value returned by @C { KheTaskerClassTimeSet }.  Function
#@ID @C {
#void KheTaskerClassDebug(KHE_TASKER_CLASS c, int verbosity,
#  int indent, FILE *fp);
#}
#produces a debug print of @C { c } onto @C { fp } with the given
#verbosity and indent.
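As a rough illustration of how a tasker might partition its tasks into equivalence classes, here is a self-contained toy in C.  The struct and function names are hypothetical stand-ins (the real test is @C { KheTaskEquivalent } plus equal assigned resources); only the idea of grouping tasks by shared domain, assigned resource, and time set comes from the text above.

```c
#include <assert.h>

/* Hypothetical miniature of a tasker's equivalence test: two tasks are
   equivalent when they share a domain, an assigned resource (possibly
   none), and the same set of covered times. */
typedef struct {
    int domain_id;        /* stand-in for KheTaskDomain */
    int asst_resource_id; /* stand-in for KheTaskAsstResource; -1 = none */
    unsigned time_set;    /* bit i set means the task covers time i */
} ToyTask;

static int toy_task_equivalent(const ToyTask *a, const ToyTask *b)
{
    return a->domain_id == b->domain_id &&
           a->asst_resource_id == b->asst_resource_id &&
           a->time_set == b->time_set;
}

/* Assign each task a class index; tasks with equal class indexes are
   equivalent, mimicking KHE_TASKER_CLASS membership. */
static void toy_partition(const ToyTask *tasks, int n, int *class_of)
{
    int classes = 0;
    for (int i = 0; i < n; i++) {
        class_of[i] = -1;
        for (int j = 0; j < i; j++)
            if (toy_task_equivalent(&tasks[i], &tasks[j])) {
                class_of[i] = class_of[j];
                break;
            }
        if (class_of[i] == -1)
            class_of[i] = classes++;
    }
}
```

The real tasker also deletes classes that become empty and re-classifies leader tasks as their time sets grow; this sketch covers only the initial partition.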
#@PP
#The other type that taskers use represents one time.  The type is
#@C { KHE_TASKER_TIME }.  Again, the tasker creates objects of these
#types, and keeps them up to date.  Function
#@ID @C {
#KHE_TIME KheTaskerTimeTime(KHE_TASKER_TIME t);
#}
#returns the time that @C { t } represents.
#@PP
#The tasks of an equivalence class all run at the same times, and so
#for each time, either every task of an equivalence class is running
#at that time, or none of them are.  Accordingly, to visit the tasks
#running at a particular time, we actually visit classes:
#@ID @C {
#int KheTaskerTimeClassCount(KHE_TASKER_TIME t);
#KHE_TASKER_CLASS KheTaskerTimeClass(KHE_TASKER_TIME t, int i);
#}
#Each equivalence class appears in one time object for each time
#that its tasks are running, giving a many-to-many relationship
#between time objects and class objects.  Function
#@ID @C {
#void KheTaskerTimeDebug(KHE_TASKER_TIME t, int verbosity,
#  int indent, FILE *fp);
#}
#produces a debug print of @C { t } onto @C { fp } with the given
#verbosity and indent.
#@PP
#We turn now to taskers themselves.  To create a tasker, call
#@ID {0.98 1.0} @Scale @C {
#KHE_TASKER KheTaskerMake(KHE_SOLN soln, KHE_RESOURCE_TYPE rt,
#  KHE_TASK_SET task_set, HA_ARENA a);
#}
#@C { KheTaskerMake } gathers all unfixed proper root tasks (tasks
#which are either unassigned, or assigned directly to a cycle task
#representing a resource) of @C { soln } whose resource type is
#@C { rt }, for which non-assignment may have a cost (see below),
#and which lie in meets with an assigned time.  The meets' time
#assignments are assumed to be fixed for the lifetime of the
#tasker; if they change, errors will occur.  From here on, `task'
#means one of these tasks, unless stated otherwise.
## event resources for which @C { KheEventResourceNeedsAssignment }
## (Section {@NumberOf event_resources}) returns @C { KHE_YES } are
## @PP
## If @C { include_assigned_tasks } is @C { true }, tasks assigned a
## resource are included, otherwise they are excluded.  The author
## sets this to @C { false }, so as to exclude tasks that have
## already been assigned a resource by @C { KheAssignByHistory }
## (Section {@NumberOf resource_solvers.assignment.requested}).
## History is not taken into account by grouping, which is not
## ideal, but this simple alternative to that works quite well.
#@PP
#The tasker's attributes may be accessed by
#@ID @C {
#KHE_SOLN KheTaskerSoln(KHE_TASKER tr);
#KHE_RESOURCE_TYPE KheTaskerResourceType(KHE_TASKER tr);
#KHE_TASK_SET KheTaskerTaskSet(KHE_TASKER tr);
#HA_ARENA KheTaskerArena(KHE_TASKER tr);
#}
#A tasker object remains in existence until its arena, @C { a },
#is deleted or recycled.
#@PP
#It seems wrong to group a task for which non-assignment has a cost
#with a task for which non-assignment has no cost.  But what to do
#about this issue is a puzzle.  Simply refusing to group such tasks
#would not address all the relevant issues, e.g. whether to include
#both types in profiles.  At present, if the instance contains at
#least one assign resource constraint, then only tasks derived from
#event resources for which @C { KheEventResourceNeedsAssignment }
#(Section {@NumberOf event_resources}) returns @C { KHE_YES } are
#considered for grouping.  If the instance contains no assign resource
#constraints, then only tasks derived from event resources for which
#@C { KheEventResourceNeedsAssignment } returns @C { KHE_MAYBE }
#are considered for grouping.  This is basically a stopgap.
#@PP
#Tasks are grouped by calls to @C { KheTaskMove }, each of which
#assigns one follower task to a leader task.  This removes the
#follower task from the set of tasks of interest to the tasker,
#and it usually enlarges the set of times covered by the leader task,
#placing it into a different equivalence class.  The main purpose
#of the tasker object is to keep track of these changes.
#@PP
#If @C { task_set } is non-@C { NULL }, each follower task assigned
#during grouping is added to it.  This makes it easy to remove the
#groups later, when they are no longer wanted, by running through
#@C { task_set } and unassigning each of its tasks.  @C { KheTaskSetUnGroup }
#(Section {@NumberOf extras.task_sets}) does this.
#@PP
#@C { KheTaskerMake } places its tasks into classes indexed by time.
#To visit each time, call
#@ID @C {
#int KheTaskerTimeCount(KHE_TASKER tr);
#KHE_TASKER_TIME KheTaskerTime(KHE_TASKER tr, int i);
#}
#Here @C { KheTaskerTimeTime(KheTaskerTime(tr, KheTimeIndex(t))) == t }
#for all times @C { t }.  @C { KheTaskerTimeCount(tr) } returns the same
#value as @C { KheInstanceTimeCount(ins) }, where @C { ins } is
#@C { tr }'s solution's instance.  From each @C { KHE_TASKER_TIME }
#object one can access the classes running at that time, and
#the tasks of those classes, using functions introduced above.
#@PP
#Finally,
#@ID @C {
#void KheTaskerDebug(KHE_TASKER tr, int verbosity, int indent, FILE *fp);
#}
#produces a debug print of @C { tr } onto @C { fp } with the given
#verbosity and indent.
#@End @SubSection
#
#@SubSection
#  @Title { Tasker support for grouping }
#  @Tag { resource_structural.constraints.groupings }
#@Begin
#@LP
#Taskers keep their classes up to date as tasks are grouped.  However,
#they can't know by magic that tasks are being grouped.  So it's wrong to
#call platform operations like @C { KheTaskAssign } and @C { KheTaskMove }
#directly while using a tasker.  @C { KheTaskAddTaskBound } is also out
#of bounds.  Instead, proceed as follows.
#@PP
#A @I grouping is a set of classes used for grouping tasks.  A group is
#made by taking any one task out of each class in the grouping, choosing
#one to be the leader task, assigning the others (called the followers)
#to it, and inserting the leader task into some other class appropriate
#to it, where it is available to participate in other groupings.
#@PP
#When a task is taken out of a class, the class may become empty, in
#which case the tasker deletes that class.  When follower tasks are
#assigned to a leader task, the set of times it covers usually
#changes, and the tasker may need to create a new class object to hold
#it.  So class objects may be both created and destroyed by the tasker
#when tasks are grouped.
## (The tasker holds a free list of class objects.)
#@PP
#A tasker may handle any number of groupings over its lifetime, but at
#any moment there is at most one grouping.  The operations for building
#this @I { current grouping } are:
#@ID @C {
#void KheTaskerGroupingClear(KHE_TASKER tr);
#bool KheTaskerGroupingAddClass(KHE_TASKER tr, KHE_TASKER_CLASS c);
#bool KheTaskerGroupingDeleteClass(KHE_TASKER tr, KHE_TASKER_CLASS c);
#int KheTaskerGroupingBuild(KHE_TASKER tr, int max_num, char *debug_str);
#}
#These call the platform operations, as well as keeping the tasker up
#to date.
#@PP
#@C { KheTaskerGroupingClear } starts off a grouping, clearing out
#any previous grouping.
#@PP
#@C { KheTaskerGroupingAddClass }, which may be called any number of
#times, adds @C { c } to the current grouping.  If there is a problem
#with this, it returns @C { false } and changes nothing.  These
#potential problems (there are two kinds) are explained below.
#@PP
#@C { KheTaskerGroupingDeleteClass } undoes a call to
#@C { KheTaskerGroupingAddClass } with the same @C { c } that
#returned @C { true }.  Deleting @C { c } might not be possible, since it
#might leave the grouping with no viable leader class (for which
#see below).  @C { KheTaskerGroupingDeleteClass } returns @C { false }
#in that case, and changes nothing.  This cannot happen if classes
#are deleted in stack order (last in first out), because each
#deletion then returns the grouping to a viable previous state.
#@PP
#@C { KheTaskerGroupingBuild } ends the grouping.  It makes some groups and
#returns the number it made.  Each group is either made completely, or
#not at all.  The number of groups made is the minimum of @C { max_num }
#and the @C { KheTaskerClassTaskCount } values for the classes.  It then
#removes all classes from the grouping, like @C { KheTaskerGroupingClear }
#does, understanding that some may have already been destroyed by being
#emptied out by @C { KheTaskerGroupingBuild }.
#@PP
#It is acceptable to add just one class, in which case the `groups' are
#just tasks from that class, no assignments are made, and nothing actually
#changes in the tasker's data structure.  If this is not wanted, then
#the caller should ensure that @C { KheTaskerGroupingClassCount }
#(see below) is at least 2 before calling @C { KheTaskerGroupingBuild }.
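The rule for how many groups @C { KheTaskerGroupingBuild } makes can be pictured as follows.  This is a sketch, not KHE code; @C { toy_num_groups } and its parameters are invented, with @C { class_task_count } standing in for the @C { KheTaskerClassTaskCount } values of the classes in the grouping.

```c
#include <assert.h>

/* Sketch of the rule stated above: the number of groups made is the
   minimum of max_num and the task counts of the grouping's classes,
   since each group consumes one task from every class. */
static int toy_num_groups(int max_num, const int *class_task_count, int n)
{
    int num = max_num;
    for (int i = 0; i < n; i++)
        if (class_task_count[i] < num)
            num = class_task_count[i];
    return num;
}
```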
#@PP
#Parameter @C { debug_str } is used only by debugging code, to
#say why a group was made.  For example, its value might be
#@C { "combinatorial grouping" } or @C { "profile grouping" }.
#@PP
#At any time, the classes of the current grouping may be
#accessed by calling
#@ID @C {
#int KheTaskerGroupingClassCount(KHE_TASKER tr);
#KHE_TASKER_CLASS KheTaskerGroupingClass(KHE_TASKER tr, int i);
#}
#in the usual way.  They will not usually be returned in the
#order they were added, however; in particular, the class that
#the tasker currently intends to use as the leader class has
#index 0.
#@PP
#We now describe the two problems that make
#@C { KheTaskerGroupingAddClass } return @C { false }.  The first
#problem concerns leader tasks.  Tasks are grouped by choosing one
#task as the leader and assigning the others to it.  So one of the
#classes added by @C { KheTaskerGroupingAddClass } has to be chosen as
#the one that leader tasks will be taken from (the @I { leader class }).
#The tasker does this automatically in a way that usually works well.
#(It chooses any class whose tasks are already assigned a resource,
#or if there are none of those, a class whose domain has minimal
#cardinality, and checks that the first task of each of the other
#classes can be assigned to the first task of that class without
#changing any existing resource assignment.)  But in rare cases, the
#domains of two classes may be such that neither is a subset of the
#other, or two classes may be initially assigned different resources.
#@C { KheTaskerGroupingAddClass } returns @C { false } in such cases.
#@PP
#The second problem concerns the times covered by the classes.  It
#would not do to group together two tasks which cover the same time,
#because then, when a resource is assigned to the grouped task, the
#resource would have a clash.  More generally, if a resource cannot
#be assigned to two tasks on the same day (for example), then it
#would not do to group two tasks which cover two times from the
#same day.  To help with this, the tasker has functions
#@ID @C {
#void KheTaskerAddOverlapFrame(KHE_TASKER tr, KHE_FRAME frame);
#void KheTaskerDeleteOverlapFrame(KHE_TASKER tr);
#}
## void KheTaskerAddOverlapTimeGroup(KHE_TASKER tr, KHE_TIME_GROUP tg);
## void KheTaskerClearOverlapTimeGroups(KHE_TASKER tr);
#@C { KheTaskerAddOverlapFrame } informs the tasker that a resource
#should not be assigned two tasks that cover the same time group of
#@C { frame }.  If this condition would be violated by some call to
#@C { KheTaskerGroupingAddClass }, then that call returns @C { false }
#and adds nothing.  @C { KheTaskerDeleteOverlapFrame }, which is never
#needed in practice, removes this requirement.
## @C { KheTaskerAddOverlapTimeGroup } may be called any number of times.
## It informs the tasker that a group which covers two times from @C { tg }
## (or one time twice) is not permitted.  If some call to
## @C { KheTaskerGroupingAddClass } would violate this condition, then that call
## returns @C { false } and adds nothing.  @C { KheTaskerAddOverlapFrame }
## calls @C { KheTaskerAddOverlapTimeGroup } for each time group
## of @C { frame }.  And @C { KheTaskerClearOverlapTimeGroups }, which
## is never needed in practice, clears away all overlap time groups.
#@PP
#If overlaps are prevented in this way, the same class cannot be added
#to a grouping twice.  So there is no need to prohibit that explicitly.
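The overlap test behind @C { KheTaskerAddOverlapFrame } can be sketched as follows.  All names here are hypothetical: the frame is modelled as a map from times to frame time groups (e.g. days), and @C { covered } records which frame time groups the current grouping already covers.

```c
#include <assert.h>

/* Sketch of the overlap rule: a class may join the grouping only if none
   of its times lies in a frame time group already covered by the
   grouping.  frame_group_of[t] maps time t to its frame time group. */
static int toy_class_overlaps(const int *class_times, int num_times,
    const int *frame_group_of, const int *covered)
{
    for (int i = 0; i < num_times; i++)
        if (covered[frame_group_of[class_times[i]]])
            return 1;  /* would give a resource two tasks on one day */
    return 0;
}
```

When this returns non-zero, the corresponding @C { KheTaskerGroupingAddClass } call would return @C { false }.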
## @PP
## Each time may lie in at most one overlap time group.  There is no
## logical need for this, but it simplifies the implementation, and
## it is true in practice (i.e. when overlap time groups are derived
## from frames).  @C { KheTaskerAddOverlapTimeGroup } and
## @C { KheTaskerAddOverlapFrame } may not be called when a grouping
## is under construction.
#@PP
#When @C { KheTaskerGroupingAddClass } returns @C { false }, the caller
#has two options.  One is to abandon this grouping altogether, which
#is done by not calling @C { KheTaskerGroupingBuild }.  The next call to
#@C { KheTaskerGroupingClear } will clear everything out for a fresh
#start.  The other option is to continue with the grouping, finding
#other classes to add.  This is done by making zero or more other
#calls to @C { KheTaskerGroupingAddClass }, followed by
#@C { KheTaskerGroupingBuild }.
#@PP
#After one grouping is completed, the user may start another.  The tasker
#will have been updated by the previous @C { KheTaskerGroupingBuild }
#to no longer contain the ungrouped tasks but instead to contain the
#grouped ones.  They can become elements of new groups.
#@PP
#@C { KHE_TASKER_CLASS } objects may be created by
#@C { KheTaskerGroupingBuild }, to hold the newly created groups,
#and also destroyed, because empty classes are deleted.  So
#variables of type @C { KHE_TASKER_CLASS } may become
#undefined when @C { KheTaskerGroupingBuild } is called.
#@PP
#Although @C { KheTaskerGroupingAddClass } can be used to check whether
#a class can be added, it may be convenient to check for overlap in
#advance.  For this there are functions
#@ID @C {
#bool KheTaskerTimeOverlapsGrouping(KHE_TASKER_TIME t);
#bool KheTaskerClassOverlapsGrouping(KHE_TASKER_CLASS c);
#}
#@C { KheTaskerTimeOverlapsGrouping } returns @C { true } if @C { t }
#lies in an overlap time group which is currently covered by a class of
#the current grouping.  @C { KheTaskerClassOverlapsGrouping } returns
#@C { true } if any of the times covered by @C { c } is already so covered.
## @PP
## Consider the following scenario.  A grouping is constructed which
## includes a class with an assigned resource.  Other classes in the
## grouping do not have the assigned resource, but they overlap in time
## with classes that do.  When a group is made from the grouping, there
## will be a clash.  This scenario is not explicitly prevented.  It
## underlies the importance of not just accepting the groups made by a
## grouping; one must check their cost.  These functions help with that:
## @ID @C {
## bool KheTaskerGroupingTestAsstBegin(KHE_TASKER tr, KHE_RESOURCE *r);
## void KheTaskerGroupingTestAsstEnd(KHE_TASKER tr);
## }
## @C { KheTaskerGroupingTestAsstBegin } selects a suitable resource
## and assigns it to tasks that form a group in the current grouping
## (skipping assigned tasks).  If it succeeds, it sets @C { *r } to the
## resource it used and returns @C { true }, otherwise it undoes any
## changes, sets @C { *r } to @C { NULL },  and returns @C { false }.
## @C { KheTaskerGroupingTestAsstEnd } undoes what a successful call
## to @C { KheTaskerGroupingTestAsstBegin } did.  It must be called,
## or else errors will occur in the tasker.
## @PP
## A suitable resource is either one that is already assigned to one
## or more tasks of the grouping, or else it is the first resource
## from the domain of the leader class that is free at the times
## covered by all of the classes of the grouping, taking any overlap
## frame into account.  If there is no such resource (not likely),
## @C { KheTaskerGroupingTestAsstBegin } returns @C { false }.
#@End @SubSection
#
#@SubSection
#  @Title { Tasker support for profile grouping }
#  @Tag { resource_structural.constraints.pgroupings }
#@Begin
#@LP
#Taskers also have functions which support profile grouping
#(Section {@NumberOf resource_structural.constraints.profile}).  To
#set and retrieve the @I { profile maximum length }, the calls are
#@ID @C {
#void KheTaskerSetProfileMaxLen(KHE_TASKER tr, int profile_max_len);
#int KheTaskerProfileMaxLen(KHE_TASKER tr);
#}
#The profile maximum length can only be set when there are no
#profile time groups.
#@PP
#To visit the sequence of @I { profile time groups } maintained by the
#tasker, the calls are
#@ID @C {
#int KheTaskerProfileTimeGroupCount(KHE_TASKER tr);
#KHE_PROFILE_TIME_GROUP KheTaskerProfileTimeGroup(KHE_TASKER tr, int i);
#}
#To make one profile time group and add it to the end of the tasker's
#sequence, and to delete a profile time group, the calls are
#@ID @C {
#KHE_PROFILE_TIME_GROUP KheProfileTimeGroupMake(KHE_TASKER tr,
#  KHE_TIME_GROUP tg);
#void KheProfileTimeGroupDelete(KHE_PROFILE_TIME_GROUP ptg);
#}
#The last profile time group is moved to the position of the
#deleted one, which only makes sense in practice when all
#the profile time groups are being deleted.  So a better
#function to call is
#@ID @C {
#void KheTaskerDeleteProfileTimeGroups(KHE_TASKER tr);
#}
#which deletes all of @C { tr }'s profile time groups.  They go
#into a free list in the tasker.
#@PP
#Functions
#@ID @C {
#KHE_TASKER KheProfileTimeGroupTasker(KHE_PROFILE_TIME_GROUP ptg);
#KHE_TIME_GROUP KheProfileTimeGroupTimeGroup(KHE_PROFILE_TIME_GROUP ptg);
#}
#retrieve a profile time group's tasker and time group.
#@PP
#A profile time group's @I { cover } is the number of @I { cover tasks }:
#tasks that cover the time group, ignoring tasks that cover more than
#@C { profile_max_len } profile time groups.  This is returned by
#@ID @C {
#int KheProfileTimeGroupCover(KHE_PROFILE_TIME_GROUP ptg);
#}
#The profile time group also keeps track of the @I { domain cover }:
#the number of cover tasks with a given domain.  Two domains are
#considered to be equal if @C { KheResourceGroupEqual } says that
#they are.  To visit the (distinct) domains of a profile time group,
#in increasing domain size order, the calls are
#@ID @C {
#int KheProfileTimeGroupDomainCount(KHE_PROFILE_TIME_GROUP ptg);
#KHE_RESOURCE_GROUP KheProfileTimeGroupDomain(KHE_PROFILE_TIME_GROUP ptg,
#  int i, int *cover);
#}
#@C { KheProfileTimeGroupDomain } returns the domain cover as well as the
#domain itself.  The sum of the domain covers is the cover.  There is also
#@ID @C {
#bool KheProfileTimeGroupContainsDomain(KHE_PROFILE_TIME_GROUP ptg,
#  KHE_RESOURCE_GROUP domain, int *cover);
#}
#which searches @C { ptg }'s list of domains for @C { domain },
#returning @C { true } and setting @C { *cover } to the domain
#cover if it is found.
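The behaviour of @C { KheProfileTimeGroupContainsDomain } amounts to a linear search of the profile time group's distinct domains.  The following sketch uses invented names, with @C { domain_ids } and @C { covers } standing in for the (domain, domain cover) pairs held by the profile time group; note that a stored cover may legitimately be 0, as explained below.

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch of searching a profile time group's domain list: report the
   domain cover of a given domain, or false if the domain is absent. */
static bool toy_contains_domain(const int *domain_ids, const int *covers,
    int n, int domain, int *cover)
{
    for (int i = 0; i < n; i++)
        if (domain_ids[i] == domain) {
            *cover = covers[i];
            return true;
        }
    return false;
}
```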
#@PP
#@C { KheProfileTimeGroupDomain } and
#@C { KheProfileTimeGroupContainsDomain } may return 0
#for @C { *cover }, when tasks with a given domain enter
#the profile and later leave it.
#@PP
#Profile grouping algorithms will group tasks while these functions
#are being called.  The sequence of profile time groups is unaffected
#by grouping, but covers and domain covers will change if the grouped
#tasks cover more than @C { profile_max_len } profile time groups.
#The domains of a profile time group may also change during grouping,
#when tasks with unequal domains are grouped.  Altogether it is safest
#to discontinue a partially completed traversal of the domains of a
#profile time group when a grouping occurs.
#@PP
#There are also a few functions on tasker classes that relate
#to profile time groups.  First,
#@ID @C {
#bool KheTaskerClassCoversProfileTimeGroup(KHE_TASKER_CLASS c,
#  KHE_PROFILE_TIME_GROUP ptg);
#}
#returns @C { true } if @C { c } covers @C { ptg }.  Each class
#keeps track of the times from profile time groups that it covers.
#Functions
#@ID @C {
#int KheTaskerClassProfileTimeCount(KHE_TASKER_CLASS c);
#KHE_TASKER_TIME KheTaskerClassProfileTime(KHE_TASKER_CLASS c, int i);
#}
#visit these times in an unspecified order.
#@PP
#Function
#@ID @C {
#void KheTaskerProfileDebug(KHE_TASKER tr, int verbosity, int indent,
#  FILE *fp);
#}
#prints the profile groups of @C { tr } onto @C { fp }, together with
#the classes that cover at most @C { profile_max_len } of them.
#@End @SubSection
#
#@SubSection
#  @Title { Combinatorial grouping }
#  @Tag { resource_structural.constraints.combinatorial }
#@Begin
#@LP
#Suppose that there are two kinds of shifts (tasks), day and night;
#that a resource must be busy on both days of the weekend or neither;
#and that a resource cannot work a day shift on the day after a night
#shift.  Then resources assigned to the Saturday night shift must work
#on Sunday, and so must work the Sunday night shift.  So it makes sense
#to group one Saturday night shift with one Sunday night shift, and to
#do so repeatedly until night shifts run out on one of those days.
#@PP
#Suppose that the groups just made consume all the Sunday night shifts.
#Then those working the Saturday day shifts cannot work the Sunday
#night shifts, because the Sunday night shifts are grouped with
#Saturday night shifts now, which clash with the Saturday day shifts.
#So now it is safe to group one Saturday day shift with one Sunday
#day shift, and to do so repeatedly until day shifts run out on one
#of those days.
#@PP
#Groups made in this way can be a big help to solvers.  In instance
#@C { COI-GPost.xml }, for example, each Friday night task can be
#grouped with tasks for the next two nights.  Good solutions always
#assign these three tasks to the same resource, owing to constraints
#specifying that the weekend following a Friday night shift must be
#busy, that each weekend must be either free on both days or busy on
#both, and that a night shift must not be followed by a day shift.
#A time sweep task assignment algorithm (say) cannot look ahead
#and see such cases coming.
#@PP
#@I { Combinatorial grouping } implements these ideas.  It searches
#through a space whose elements are sets of classes.  For each set of
#classes @M { S } in the search space, it calculates a cost @M { c(S) },
#defined below, and selects a set @M { S prime } such that
#@M { c( S prime ) } is zero, or minimal.  It then makes one group by
#selecting one task from each class and grouping those tasks, and then
#repeating that until as many tasks as possible or desired have been grouped.
#@PP
#As formulated here, one application of combinatorial grouping
#groups one set of classes @M { S prime }.  In the example above,
#grouping the Saturday and Sunday night shifts would be one
#application, then grouping the Saturday and Sunday day shifts
#would be another.
#@PP
#Combinatorial grouping is carried out by a
#@I { combinatorial grouping solver }, made like this:
#@ID @C {
#KHE_COMB_SOLVER KheCombSolverMake(KHE_TASKER tr, KHE_FRAME days_frame);
#}
#It deals with @C { tr }'s tasks, using memory from @C { tr }'s arena.
#Any groups it makes are made using @C { tr }'s grouping operations,
#and so are reflected in @C { tr }'s classes, and in its task set.
#Parameter @C { days_frame } would always be the common frame.  It
#is used when selecting a suitable resource to tentatively assign to
#a group of tasks, to find out what times the resource should be free.
#@PP
#Functions
#@ID @C {
#KHE_TASKER KheCombSolverTasker(KHE_COMB_SOLVER cs);
#KHE_FRAME KheCombSolverFrame(KHE_COMB_SOLVER cs);
#}
#return @C { cs }'s tasker and frame.
#@PP
#A @C { KHE_COMB_SOLVER } object can solve any number of combinatorial
#grouping problems, one after another.  The user loads the solver with
#one problem's @I requirements (these determine the search space),
#then requests a solve, then loads another problem and
#solves, and so on.
#@PP
#It is usually best to start the process of loading requirements
#into the solver by calling
#@ID @C {
#void KheCombSolverClearRequirements(KHE_COMB_SOLVER cs);
#}
#This clears away any old requirements.
#@PP
#A key requirement for most solves is that the groups it makes
#should cover a given time group.  Any number of such requirements
#can be added and removed by calling
#@ID @C {
#void KheCombSolverAddTimeGroupRequirement(KHE_COMB_SOLVER cs,
#  KHE_TIME_GROUP tg, KHE_COMB_SOLVER_COVER_TYPE cover);
#void KheCombSolverDeleteTimeGroupRequirement(KHE_COMB_SOLVER cs,
#  KHE_TIME_GROUP tg);
#}
#any number of times.  @C { KheCombSolverAddTimeGroupRequirement }
#specifies that the groups must cover @C { tg } in a manner given by
#the @C { cover } parameter, whose type is
#@ID @C {
#typedef enum {
#  KHE_COMB_SOLVER_COVER_YES,
#  KHE_COMB_SOLVER_COVER_NO,
#  KHE_COMB_SOLVER_COVER_PREV,
#  KHE_COMB_SOLVER_COVER_FREE,
#} KHE_COMB_SOLVER_COVER_TYPE;
#}
#We'll explain this in detail later.
#@C { KheCombSolverDeleteTimeGroupRequirement } removes the effect of a
#previous call to @C { KheCombSolverAddTimeGroupRequirement } with the
#same time group.  There must have been such a call, otherwise
#@C { KheCombSolverDeleteTimeGroupRequirement } aborts.
#@PP
#Any number of requirements that the groups should cover a given
#class may be added:
#@ID @C {
#void KheCombSolverAddClassRequirement(KHE_COMB_SOLVER cs,
#  KHE_TASKER_CLASS c, KHE_COMB_SOLVER_COVER_TYPE cover);
#void KheCombSolverDeleteClassRequirement(KHE_COMB_SOLVER cs,
#  KHE_TASKER_CLASS c);
#}
#These work in the same way as for time groups, except that care is
#needed because @C { c } may be rendered undefined by a solve, if
#it makes groups which empty @C { c } out.  The safest option
#after a solve whose requirements include a class is to call
#@C { KheCombSolverClearRequirements }.
#@PP
#Three other requirements of quite different kinds may be added:
#@ID @C {
#void KheCombSolverAddProfileGroupRequirement(KHE_COMB_SOLVER cs,
#  KHE_PROFILE_TIME_GROUP ptg, KHE_RESOURCE_GROUP domain);
#void KheCombSolverDeleteProfileGroupRequirement(KHE_COMB_SOLVER cs,
#  KHE_PROFILE_TIME_GROUP ptg);
#}
#and
#@ID @C {
#void KheCombSolverAddProfileMaxLenRequirement(KHE_COMB_SOLVER cs);
#void KheCombSolverDeleteProfileMaxLenRequirement(KHE_COMB_SOLVER cs);
#}
#and
#@ID @C {
#void KheCombSolverAddNoSinglesRequirement(KHE_COMB_SOLVER cs);
#void KheCombSolverDeleteNoSinglesRequirement(KHE_COMB_SOLVER cs);
#}
#Again, we'll explain the precise effect later.  These last three
#requirements can only be added once:  a second call replaces the
#first; it does not add to it.
#@PP
#There is no need to reload requirements between solves.  The
#requirements stay in effect until they are either deleted
#individually or cleared out by @C { KheCombSolverClearRequirements }.
#The only caveat concerns classes that become undefined during
#grouping, as discussed above.
#@PP
#The search space of combinatorial solving is defined by all
#these requirements.  First, we need some definitions.  A task
#@I covers a time if it, or a task assigned to it directly or
#indirectly, runs at that time.  A task covers a time group if
#it covers any of the time group's times.  A class covers a time
#or time group if its tasks do.  A class covers a class if it is
#that class.  A set of classes covers a time, time group, or class
#if any of its classes covers that time, time group, or class.
#@PP
#Now a set @M { S } of classes lies in the search space for a run
#of combinatorial grouping if:
#@NumberedList
#
#@LI @OneRow {
#Each class in @M { S } covers at least one of the time groups and
#classes passed to the solver by the calls to
#@C { KheCombSolverAddTimeGroupRequirement } and
#@C { KheCombSolverAddClassRequirement }.
#}
#
#@LI @OneRow {
#For each time group @C { tg } or class @C { c } passed to the solver by
#@C { KheCombSolverAddTimeGroupRequirement } or
#@C { KheCombSolverAddClassRequirement },
#if the accompanying @C { cover } is @C { KHE_COMB_SOLVER_COVER_YES },
#then @M { S } covers @C { tg } or @C { c }; or if @C { cover } is
#@C { KHE_COMB_SOLVER_COVER_NO }, then @M { S } does not cover @C { tg }
#or @C { c }; or if @C { cover } is @C { KHE_COMB_SOLVER_COVER_PREV },
#then @M { S } covers @C { tg } or @C { c } if and only if it covers
#the time group or class immediately preceding @C { tg } or @C { c }; or
#if @C { cover } is @C { KHE_COMB_SOLVER_COVER_FREE }, then @M { S } is
#free to cover @C { tg } or @C { c }, or not.
#@LP
#If the first time group or class has cover @C { KHE_COMB_SOLVER_COVER_PREV },
#this is treated like @C { KHE_COMB_SOLVER_COVER_FREE }.
#@LP
#Time groups and classes not mentioned may be covered, or not.  The
#difference between this and passing a time group or class with cover
#@C { KHE_COMB_SOLVER_COVER_FREE } is that the classes that cover
#a free time group or class are included in the search space.
#}
#
#@LI @OneRow {
#The classes of @M { S } may be added to the tasker to form a grouping.
#There are rare cases where adding the classes in one order will
#succeed, while adding them in another order will fail.  In those
#cases, whether @M { S } is included in the search space or not will
#depend on the (unspecified) order in which the solver chooses to add
#@M { S }'s classes to the tasker.
#}
#
#@LI @OneRow {
#If @C { KheCombSolverAddProfileGroupRequirement(cs, ptg, domain) } is
#in effect, then @M { S } contains at least one class that covers
#@C { ptg }'s time group, and if @C { domain != NULL }, that class
#has that domain.
#}
#
#@LI @OneRow {
#If @C { KheCombSolverAddProfileMaxLenRequirement(cs) } is in
#effect, then @M { S } contains only classes that cover at most
#@C { profile_max_len } times from profile time groups.
#}
#
#@LI @OneRow {
#If @C { KheCombSolverAddNoSinglesRequirement(cs) } is in effect,
#then @M { S } contains at least two classes.  Otherwise @M { S }
#contains at least one class.
#}
#
#@EndList
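The cover conditions in the list above (condition 2 in particular) can be sketched as a self-contained checker.  The names here are invented: @C { covers[i] } records whether a candidate set @M { S } covers requirement @C { i }, and @C { ToyCoverType } mirrors @C { KHE_COMB_SOLVER_COVER_TYPE }.

```c
#include <assert.h>
#include <stdbool.h>

typedef enum {
    COVER_YES, COVER_NO, COVER_PREV, COVER_FREE
} ToyCoverType;  /* mirrors KHE_COMB_SOLVER_COVER_TYPE */

/* Sketch of condition 2: given, for each requirement i, whether the
   candidate set S covers it (covers[i]) and its cover type (types[i]),
   decide whether S satisfies the cover requirements.  COVER_PREV on the
   first requirement is treated as COVER_FREE, as stated above. */
static bool toy_cover_ok(const bool *covers, const ToyCoverType *types, int n)
{
    for (int i = 0; i < n; i++)
        switch (types[i]) {
        case COVER_YES:
            if (!covers[i]) return false;
            break;
        case COVER_NO:
            if (covers[i]) return false;
            break;
        case COVER_PREV:
            if (i > 0 && covers[i] != covers[i - 1]) return false;
            break;
        case COVER_FREE:
            break;
        }
    return true;
}
```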
#That fixes the search space.  We now define the cost @M { c(S) }
#of each set of classes @M { S } in that space.
#@PP
#The first step is to identify a suitable resource @M { r }.  Take the
#first class of the tasker grouping made from @M { S }; this is the
#class that leader tasks will come from.  If it already has an assigned
#resource (as returned by @C { KheTaskerClassAsstResource }), use that
#resource for @M { r }.  Otherwise search the class's domain (as
#returned by @C { KheTaskerClassDomain }) for a resource which is free at
#all of the time groups of the current frame which overlap with the time
#groups added by calls to @C { KheCombSolverAddTimeGroupRequirement }.
#If no such resource can be found, ignore @M { S }.
#@PP
#The second step is to assign @M { r } to one task from each class
#of @M { S }, except in classes where @M { r } is already assigned
#to a task.  This is done without informing the tasker, but after
#the cost is determined these assignments are undone, so the
#tasker's integrity is not compromised in the end.  The cost
#@M { c(S) } of a set of classes @M { S } is determined while the
#assignments are in place.  It is the total cost of all cluster busy
#times and limit busy times monitors which monitor @M { r } and have
#times lying entirely within the times covered by the time groups
#added by calls to @C { KheCombSolverAddTimeGroupRequirement }.
#This second condition is included because we don't want @M { r }'s
#global workload, for example, to influence the outcome.
## The cost @M { c(S) } of a set of classes @M { S } is the change
## in solution cost caused by assigning a suitable resource (as
## defined for @C { KheTaskerGroupingTestAsstBegin } in
## Section {@NumberOf resource_structural.constraints.groupings})
## to one task from each class of @M { S }, taking into account only
## avoid clashes, cluster busy times, and limit busy times constraints
## which apply to every resource of the type of the tasks being
## grouped.  Furthermore, the times of the cluster busy times and
## limit busy times constraints must lie entirely within the times
## covered by the classes from which @M { S } is chosen; we don't
## want changes in a resource's global workload, for example, to
## influence the outcome.
#@PP
#After all the requirements are added, an actual solve is carried
#out by calling
#@ID @C {
#int KheCombSolverSolve(KHE_COMB_SOLVER cs, int max_num,
#  KHE_COMB_SOLVER_COST_TYPE ct, char *debug_str);
#}
#@C { KheCombSolverSolve } searches the space of all sets of classes
#@M { S } that satisfy the six conditions, and selects one set
#@M { S prime } of minimal cost @M { c( S prime ) }.  Using
#@M { S prime }, it makes as many groups as it can, up to
#@C { max_num }, and returns the number it actually made,
#between @C { 0 } and @C { max_num }.  If @M { S prime }
#contains a single class, no groups are made and the value
#returned is 0.
#@PP
#Parameter @C { ct } has type
#@ID @C {
#typedef enum {
#  KHE_COMB_SOLVER_COST_MIN,
#  KHE_COMB_SOLVER_COST_ZERO,
#  KHE_COMB_SOLVER_COST_SOLE_ZERO
#} KHE_COMB_SOLVER_COST_TYPE;
#}
#If @C { ct } is @C { KHE_COMB_SOLVER_COST_MIN }, then @M { c( S prime ) }
#must be minimal among all @M { c(S) }.
#If @C { ct } is @C { KHE_COMB_SOLVER_COST_ZERO }
#or @C { KHE_COMB_SOLVER_COST_SOLE_ZERO }, then @M { c( S prime ) } must
#also be 0, and in the second case there must be no other @M { S } in
#the search space such that @M { c(S) } is 0.  If these conditions are
#not met, no groups are made.
#@PP
#Parameter @C { debug_str } is passed on to @C { KheTaskerGroupingBuild }.
#It might be @C { "combinatorial grouping" }, for example.
#@PP
#An awkward question raised by combinatorial grouping is what to do about
#@I { singles }:  classes whose tasks already satisfy the requirements,
#without any grouping.  The answer seems to vary depending on why
#combinatorial grouping is being called, so the combinatorial solver
#does not have a single way of dealing with singles.  Instead it
#offers three features that help with them.
#@PP
#First, as we have seen, if the set of classes @M { S prime } with
#minimum or zero cost contains only one class, @C { KheCombSolverSolve }
#accepts that it is the best but makes no groups from it, returning 0
#for the number of groups made.
#@PP
#Second, as we have also seen, @C { KheCombSolverAddNoSinglesRequirement }
#allows the user to declare that a set @M { S } consisting
#of a single class which satisfies all the requirements (a single)
#should be excluded from the search space.  But adding this requirement
#is not a magical solution to the problem of singles.  For one thing,
#when we need a unique zero-cost set of classes, we may well want to
#include singles in the search space, to show that grouping is better
#than doing nothing.  For another, there may still be an @M { S }
#containing one single and another class which covers a time group or
#class with cover type @C { KHE_COMB_SOLVER_COVER_FREE }.
#@PP
#Third, after setting up a problem ready to call
#@C { KheCombSolverSolve }, one can call
#@ID @C {
#int KheCombSolverSingleTasks(KHE_COMB_SOLVER cs);
#}
#This searches the same space as @C { KheCombSolverSolve } does, but
#it does no grouping.  Instead, it returns the total number of tasks in
#sets of classes @M { S } in that space such that @M { bar S bar = 1 }.
#Quite correctly, it returns 0 if @C { KheCombSolverAddNoSinglesRequirement }
#is in effect when it is called.
#@PP
#Finally,
#@ID @C {
#void KheCombSolverDebug(KHE_COMB_SOLVER cs, int verbosity,
#  int indent, FILE *fp);
#}
#produces the usual debug print of @C { cs } onto @C { fp }
#with the given verbosity and indent.
#@End @SubSection
#
#@SubSection
#  @Title { Applying combinatorial grouping }
#  @Tag { resource_structural.constraints.applying }
#@Begin
#@LP
#This section describes one way in which the general idea of
#combinatorial grouping, as just presented, may be applied in
#practice.  This way is implemented by function
#@ID @C {
#int KheCombGrouping(KHE_COMB_SOLVER cs, KHE_OPTIONS options);
#}
#@C { KheCombGrouping } does what this section describes, and
#returns the number of groups it made.  Before it is called,
#the common frame should be loaded into @C { cs }'s tasker as
#overlap time groups.
#@PP
#Let @M { m } be the value of the @F rs_group_by_rc_max_days option
#of @C { options }.  Iterate over all pairs @M { (f, c) }, where
#@M { f } is a subset of the common frame containing @M { k }
#adjacent time groups, for all @M { k } such that @M { 2 <= k <= m },
#and @M { c } is a class that covers @M { f }'s first or last time group.
#@PP
#For each pair, set up and run combinatorial grouping with one `yes'
#class, namely @M { c }, and one `free' time group for each of the
#@M { k } time groups of @M { f }.  Set @C { max_num } to @C { INT_MAX },
#and set @C { ct } to @C { KHE_COMB_SOLVER_COST_SOLE_ZERO }.  If there
#is a unique zero-cost way to group a task of @M { c } with tasks on
#the following @M { k - 1 } days, this call will find it and carry out
#as many groupings as it can.
## , and set @C { allow_single } to @C { false }.
#@PP
#If @M { f } has @M { k } time groups, each with @M { n } classes,
#say, there are up to @M { (n + 1) sup {k - 1} } combinations for
#each run, so @C { rs_group_by_rc_max_days } must be small, say 3,
#or 4 at most.  In any case, unique zero-cost groupings typically
#concern weekends, so larger values are unlikely to yield anything.
#@PP
#If one @M { (f, c) } pair produces some grouping, then
#@C { KheCombGrouping } returns to the first pair containing @M { f }.
#This handles cases like the one described earlier, where a grouping
#of Saturday and Sunday night shifts opens the way to a grouping of
#Saturday and Sunday day shifts.
#@PP
#The remainder of this section describes @I { combination elimination }.
#This is a refinement that @C { KheCombGrouping } uses to make
#unique zero-cost combinations more likely in some cases.
#@PP
#Some combinations examined by combinatorial grouping may have zero
#cost as far as the monitors used to evaluate it are concerned, but
#have non-zero cost when evaluated in a different way, involving the
#overall supply of and demand for resources.  Such combinations can
#be ruled out, leaving fewer zero-cost combinations, and potentially
#more task grouping.
#@PP
#For example, suppose there is a maximum limit on the number of
#weekends each resource can work.  If this limit is tight
#enough, it will force every resource to work complete weekends,
#even without an explicit constraint, if that is the only way
#that the available supply of resources can cover the demand
#for weekend shifts.  This example fits the pattern to be given
#now, setting @M { C } to the constraint that limits the number
#of busy weekends, @M { T } to the times of all weekends,
#@M { T sub i } to the times of the @M { i }th weekend, and
#@M { f tsub i } to the number of days in the @M { i }th weekend.
#@PP
#Take any set of times @M { T }.  Let @M { S(T) }, the
#@I { supply during @M { T } }, be the sum over all resources
#@M { r } of the maximum number of times that @M { r } can be busy
#during @M { T } without incurring a cost.  Let @M { D(T) }, the
#@I { demand during @M { T } }, be the sum over all tasks @M { x }
#for which non-assignment would incur a cost, of the number of times
#@M { x } is running during @M { T }.  Then @M { S(T) >= D(T) }
#or else a cost is unavoidable.
#@PP
#In particular, take any cluster busy times constraint @M { C } which
#applies to all resources, has time groups which are all positive, and
#has a non-trivial maximum limit @M { M }.  (The analysis also applies
#when the time groups are all negative and there is a non-trivial
#minimum limit, setting @M { M } to the number of time groups minus
#the minimum limit.)  Suppose there are @M { n } time groups
#@M { T sub i }, for @M { 1 <= i <= n }, and let their union be @M { T }.
#@PP
#Let @M { f tsub i } be the number of time groups from the common
#frame with a non-empty intersection with @M { T sub i }.  This is
#the maximum number of times from @M { T sub i } during which any one
#resource can be busy without incurring a cost, since a resource can
#be busy for at most one time in each time group of the common frame.
#@PP
#Let @M { F } be the sum of the largest @M { M } @M { f tsub i }
#values.  This is the maximum number of times from @M { T } that
#any one resource can be busy without incurring a cost:  if it is
#busy for more times than this, it must either be busy for more
#than @M { f tsub i } times in some @M { T sub i }, or else it
#must be busy for more than @M { M } time groups, violating the
#constraint's maximum limit.
#@PP
#If there are @M { R } resources altogether, then the supply during
#@M { T } is bounded by
#@ID @Math { S(T) <= RF }
#since @M { C } is assumed to apply to every resource.
#@PP
#As explained above, to avoid cost the demand must not exceed the
#supply, so
#@ID @M { D(T) <= S(T) <= RF }
#Furthermore, if @M { D(T) >= RF }, then any failure to maximize
#the use of workload will incur a cost.  That is, every resource
#which is busy during @M { T sub i } must be busy for the full
#@M { f tsub i } times in @M { T sub i }.
#@PP
#So the effect on grouping is this:  if @M { D(T) >= RF }, a resource
#that is busy in one time group of the common frame that overlaps
#@M { T sub i } should be busy in every time group of the common
#frame that overlaps @M { T sub i }.  @C { KheCombGrouping } searches
#for constraints @M { C } that have this effect, and informs its
#combinatorial grouping solver about what it found by changing the
#cover types of some time groups from `free' to `prev'.  When
#searching for groups, the option of covering some of these time
#groups but not others is removed.  With fewer options, there is
#more chance that some combination might be the only one with
#zero cost, allowing more task grouping.
#@PP
#Instance @C { CQ14-05 } has two constraints that limit busy weekends.
#One applies to 10 resources and has maximum limit 2; the other applies
#to the remaining 6 resources and has maximum limit 3.  So combination
#elimination actually takes sets of constraints with the same time
#groups that together cover every resource once.  Instead of @M { RF }
#(above), it uses the sum over the set's constraints @M { c sub j }
#of @M { R sub j F sub j }, where @M { R sub j } is the number of
#resources that @M { c sub j } applies to, and @M { F sub j } is the
#sum of the largest @M { M sub j } of the @M { f tsub i } values,
#where @M { M sub j } is the maximum limit of @M { c sub j }.  The
#@M { f tsub i } are the same for all @M { c sub j }.
#@End @SubSection
#
#@SubSection
#  @Title { Profile grouping }
#  @Tag { resource_structural.constraints.profile }
#@Begin
#@LP
#Suppose 6 nurses are required on the Monday, Tuesday, Wednesday,
#Thursday, and Friday night shifts, but only 4 are required on the
#Saturday and Sunday night shifts.  Consider any division of the
#night shifts into sequences of one or more shifts on consecutive
#days.  However these sequences are made, at least two must begin
#on Monday, and at least two must end on Friday.
#@PP
#Now suppose that the intention is to assign the same resource to
#each shift of any one sequence, and that a limit active intervals
#constraint, applicable to all resources, specifies that night shifts
#on consecutive days must occur in sequences of at least 2 and at most
#3.  Then the two sequences of night shifts that must begin on Monday
#must contain a Monday night and a Tuesday night shift at least, and the
#two that end on Friday must contain a Thursday night and a Friday night
#shift at least.  So here are two groupings, of Monday and Tuesday
#nights and of Thursday and Friday nights, for each of which we can
#build two task groups.
#@PP
#Suppose that we already have a task group which contains a sequence
#of 3 night shifts on consecutive days.  This group cannot be grouped
#with any night shifts on days adjacent to the days it currently
#covers.  So for present purposes the tasks of this group can be
#ignored.  This can change the number of night shifts running on
#each day, and so change the amount of grouping.  For example, in
#instance @C { COI-GPost.xml }, all the Friday, Saturday, and Sunday
#night shifts get grouped into sequences of 3, and 3 is the maximum,
#so those night shifts can be ignored here, and so every Monday night
#shift begins a sequence, and every Thursday night shift ends one.
#@PP
#We now generalize this example, ignoring for the moment a few
#issues of detail.  Let @M { C } be any limit active intervals
#constraint which applies to all resources, and whose time groups
#@M { T sub 1 ,..., T sub k } are all positive.  Let @M { C }'s
#limits be @M { C sub "max" } and @M { C sub "min" }, and suppose
#@M { C sub "min" } is at least 2 (if not, there can be no grouping
#based on @M { C }).  What follows is relative to @M { C }, and is
#repeated for each such constraint.  Constraints with the same
#time groups are notionally merged, allowing the minimum limit
#to come from one constraint and the maximum limit from another.
#@PP
#A @I { long task } is a task which covers at least @M { C sub "max" }
#adjacent time groups from @M { C }.  Long tasks can have no influence
#on grouping to satisfy @M { C }'s minimum limit, so they may be ignored,
#that is, profile grouping may run as though they are not there.  This
#applies both to tasks which are present at the start, and tasks which
#are constructed along the way.  
#@PP
#A task is @I { admissible for profile grouping }, or just
#@I { admissible }, if it satisfies the following conditions:
#@NumberedList
#
#@LI {
#The task is a proper root task lying within an mtask created by the
#mtask finder made available to profile grouping when
#@C { KheProfileGrouping } (see below) is called.
#}
#
#@LI {
#The task is not assigned a resource, and its assignment is not fixed.
#}
#
#@LI {
#The task is not a long task.
#}
#
#@EndList
#These conditions imply that if one task lying within an mtask is
#admissible for profile grouping, then every unassigned task in
#that mtask is also admissible.
#@PP
#Let @M { n sub i } be the number of admissible tasks that cover
#@M { T sub i }.  The @M { n sub i } together make up the
#@I profile of @M { C }.  The tasker operations from
#Section {@NumberOf resource_structural.constraints.taskers }
#which support profile grouping make it easy to find the profile.
#@PP
#For each @M { i } such that @M { n sub {i-1} < n sub i },
#@M { n sub i - n sub {i-1} } groups of length at least
#@M { C sub "min" } must start at @M { T sub i } (more precisely,
#they must cover @M { T sub i } but not  @M { T sub {i-1} }).  They may
#be constructed by combinatorial grouping, passing in time groups
#@M { T sub i ,..., T sub { i + C sub "min" - 1 } } with cover type
#`yes', and @M { T sub {i-1} } and @M { T sub { i + C sub "max" } } with
#cover type `no', asking for @M { m = n sub i - n sub {i-1} - c sub i }
#groups, where @M { c sub i } is the number of existing tasks (not
#including long ones) that satisfy these conditions already (as
#returned by @C { KheCombSolverSingleTasks }).  The new groups must group
#at least 2 tasks each.  Some of the time groups may not exist; in
#that case, omit the non-existent ones but still do the grouping,
#provided there are at least 2 `yes' time groups.  The case for
#sequences ending at @M { j } is symmetrical.
#@PP
#If @M { C } has no history, we may set @M { n sub 0 } and
#@M { n sub {k+1} } to 0, allowing groups to begin at @M { T sub 1 }
#and end at @M { T sub k }.  If @M { C } has history, we do not know
#how many tasks are running outside @M { C }, so we set @M { n sub 0 }
#and @M { n sub {k+1} } to infinity, preventing groups from beginning
#at @M { T sub 1 } and ending at @M { T sub k }.
#@PP
#Groups made by one round of profile grouping may participate in later
#rounds.  Suppose @M { C sub "min" = 2 }, @M { C sub "max" = 3 },
#@M { n sub 1 = n sub 5 = 0 }, and @M { n sub 2 = n sub 3 = n sub 4 = 4 }.
#Profile grouping builds 4 groups of length 2 beginning at @M { T sub 2 },
#then 4 groups of length 3 ending at @M { T sub 4 }, incorporating the
#length 2 groups.
## @PP
## The general aim is to pack blocks of size freely chosen between
## @M { C sub "min" } and @M { C sub "max" } into a given profile, and
## group wherever it can be shown that the packing can only take one
## form.  But we are not interested in optimal solutions (ones with
## the maximum amount of grouping), so we do not search for other
## cases.  However, some apparently different cases are actually
## already covered.  For example, suppose @M { C sub "min" = 2 } and
## @M { C sub "max" = 3 }, with @M { n sub 1 = n sub 5 = 0 } and
## @M { n sub 2 = n sub 3 = n sub 4 = 4 }.  Then 4 groups of length 3
## can be built.  But the function does this:  it first builds 4
## groups of length 2 begining at @M { T sub 2 }, then 4 groups of
## length 3 ending at @M { T sub 4 }, incorporating the length 2 groups.
#@PP
#We turn now to three issues of detail.
## @PP
## @B { History. }  How to handle history is the subject of
## Section {@NumberOf resource_structural.constraints.history}.
## For each resource @M { r sub i } with a history value @M { x sub i }
## such that @M { x sub i < C sub "min" }, use combinatorial grouping with
## one `yes' time group for each of the first @M { C sub "min" -  x sub i }
## time groups of @M { C } (when these all exist), build one group, and
## assign @M { r sub i } to it.  (This idea is not yet implemented;
## none of the instances available at the time of writing need it.)
## , and one `no' time group for the next time group of @M { C }
#@PP
#@B { Singles. }  We need to consider how singles affect profile
#grouping.  Singles of length @M { C sub "max" } or more are
#ignored, but there may be singles of length @M { C sub "min" }
#when @M { C sub "min" < C sub "max" }.
#@PP
#The @M { n sub i - n sub {i-1} } groups that must start at
#@M { T sub i } include singles.  Singles are already present,
#which amounts to saying that they must be made first.  So before
#calling @C { KheCombSolverSolve } we call @C { KheCombSolverSingleTasks }
#to determine @M { c sub i }, the number of singles that satisfy the
#requirements, and then we pass @M { n sub i - n sub {i-1} - c sub i }
#to @C { KheCombSolverSolve }, not @M { n sub i - n sub {i-1} }, and
#exclude singles from its search space.
#@PP
#@B { Varying task domains. }  Suppose that one senior nurse is wanted
#each night, four ordinary nurses are wanted each week night, and two
#ordinary nurses are wanted each weekend night.  Then the two groups
#starting on Monday nights should group demands for ordinary nurses,
#not senior nurses.  Nevertheless, tasks with different domains are
#not totally unrelated.  A senior nurse could very well act as an
#ordinary nurse on some shifts.
#@PP
#We still aim to build @M { M = n sub i - n sub {i-1} - c sub i }
#groups as before.  However, we do this by making several calls on
#combinatorial grouping.  For each resource group @M { g } appearing
#as a domain in any class running at time @M { T sub i }, find
#@M { n sub gi }, the number of tasks (not including long ones) with
#domain @M { g } running at @M { T sub i }, and @M { n sub { g(i-1) } },
#the number at @M { T sub {i-1} }.  For each @M { g } such that
#@M { n sub gi > n sub { g(i-1) } }, call combinatorial grouping,
#insisting (by calling @C { KheCombSolverAddProfileRequirement })
#that @M { T sub i } be covered by a class whose domain is @M { g },
#passing @M { m = min( M, n sub gi - n sub { g(i-1) } ) }, then
#subtract from @M { M } the number of groups actually made.
#Stop when @M { M = 0 } or the list of domains is exhausted.
## @PP
## We still aim to build @M { M = n sub i - n sub {i-1} - c sub i }
## groups as before.  However, we do this by making several calls on
## combinatorial grouping, utilizing the @C { domain } parameter, which
## we call @M { g } here.  For each @M { g } appearing as a domain in
## any class running at time @M { T sub i }, find @M { n sub gi }, the
## number of tasks (not including long ones) with domain @M { g }
## running at @M { T sub i }, and @M { n sub { g(i-1) } }, the number
## at @M { T sub {i-1} }.  For each @M { g } such that
## @M { n sub gi > n sub { g(i-1) } }, add @M { g } and
## @M { M sub g = n sub gi - n sub { g(i-1) } } to a list.
## Then re-traverse the list.  For each @M { g } on it, call
## combinatorial grouping, passing @M { m = min( M, M sub g ) } and
## @M { g }, then subtract from @M { M } the number of groups actually
## made.  Stop when @M { M = 0 } or the list is exhausted.
## @End @SubSection
## 
## @SubSection
##   @Title { Applying profile grouping }
##   @Tag { resource_structural.constraints.applying2 }
## @Begin
## @LP
#@PP
#@B { Non-uniqueness of zero-cost groupings. }
#The main problem with profile grouping is that there may be
#several zero-cost groupings in a given situation.  For example,
#a profile might show that a group covering Monday, Tuesday, and
#Wednesday may be made, but give no guidance on which shifts on
#those days to group.
#@PP
#One reasonable way of dealing with this problem is the following.
#First, do not insist on unique zero-cost groupings; instead, accept
#any zero-cost grouping.  This ensures that a reasonable amount of
#profile grouping will happen.  Second, to reduce the chance of
#making poor choices of zero-cost groupings, limit profile grouping
#to two cases.
#@PP
#The first case is when each time group @M { T sub i } contains a
#single time, as at the start of this section, where each
#@M { T sub i } contained the time of a night shift.  Although we do
#not insist on unique zero-cost groupings, we are likely to get them
#in this case, so we call this @I { strict profile grouping }.
#@PP
#The second case is when @M { C sub "min" = C sub "max" }.  It is
#very constraining to insist, as this does, that every sequence of
#consecutive busy days (say) away from the start and end of the cycle
#must have a particular length.  Indeed, it changes the problem into a
#combinatorial one of packing these rigid sequences into the profile.
#Local repairs cannot do this well, because to increase
#or decrease the length of one sequence, we must decrease or increase
#the length of a neighbouring sequence, and so on all the way back to
#the start or forward to the end of the cycle (unless there are
#shifts nearby which can be assigned or not without cost).
#So we turn to profile grouping to find suitable groups before
#assigning any resources.  Some of these groups may be less than
#ideal, but still the overall effect should be better than no
#grouping at all.  We call this @I { non-strict profile grouping }.
## No profile grouping of this kind is done until
## all cases where the time groups are singletons have been tried.
#@PP
#When @M { C sub "min" = C sub "max" }, all singles are off-profile.
#This is easy to see:  by definition, a single covers @M { C sub "min" }
#time groups, so it covers @M { C sub "max" } time groups, but
#@C { profile_max_len } is @M { C sub "max" - 1 }.
#@PP
#These ideas are implemented by function
#@ID @C {
#int KheProfileGrouping(KHE_COMB_SOLVER cs, bool non_strict);
#}
#It carries out some profile grouping, as follows, and returns
#the number of groups it makes.
#@PP
#Find all limit active intervals constraints @M { C } whose time
#groups are all positive and which apply to all resources.  Notionally
#merge pairs of these constraints that share the same time groups when
#one has a minimum limit and the other has a maximum limit.  Let
#@M { C } be one of these (possibly merged) constraints such that
#@M { C sub "min" >= 2 }.  Furthermore, if @C { non_strict } is
#@C { false }, then @M { C }'s time groups must all be singletons,
#while if @C { non_strict } is @C { true }, then @M { C sub "min" = C sub "max" }
#must hold.
#@PP
#A constraint may qualify for both strict and non-strict processing.
#This is true, for example, of a constraint that imposes equal lower
#and upper limits on the number of consecutive night shifts.  Such a
#constraint will be selected in both the strict and non-strict cases,
#which is fine.
#@PP
#For each of these constraints, proceed as follows.  Set the profile
#time groups in the tasker to @M { T sub 1 ,..., T sub k }, the time
#groups of @M { C }, and set the @C { profile_max_len } attribute to
#@M { C sub "max" - 1 }.  The tasker will then report the values
#@M { n sub i } needed for @M { C }.
#@PP
#Traverse the profile repeatedly, looking for cases where
#@M { n sub i > n sub {i-1} } and @M { n sub j > n sub {j+1} }, and
#use combinatorial grouping (aiming to find zero-cost groups, not
#unique zero-cost groups) to build groups which cover @M { C sub "min" }
#time groups starting at @M { T sub i } (or ending at @M { T sub j }).  This
#involves loading @M { T sub i ,..., T sub {i + C sub "min" - 1} } as `yes'
#time groups, and @M { T sub {i-1} } and @M { T sub { i + C sub "max" } }
#as `no' time groups, as explained above.
#@PP
#The profile is traversed repeatedly until no points which allow
#grouping can be found.  In the strict grouping case, it is then
#time to stop, but in the non-strict case it is better to keep
#grouping, as follows.  From among all time groups @M { T sub i }
#where @M { n sub i > 0 }, choose one which has been the starting
#point for a minimal number of groups (to spread out the starting
#points as much as possible) and make a group there if combinatorial
#grouping allows it.  Then return to traversing the profile
#repeatedly:  there should now be @M { n sub j > n sub {j+1} }
#cases just before the latest group and @M { n sub i > n sub {i-1} }
#cases just after it.  Repeat until there is no @M { T sub i } where
#@M { n sub i > 0 } and combinatorial grouping can build a group.
#@End @SubSection
#
## replaced by Assign by history
## @SubSection
##   @Title { Grouping by history }
##   @Tag { resource_structural.constraints.history }
## @Begin
## @LP
## This section continues with grouping based on limit active
## intervals constraint @M { C } with limits @M { C sub "min" } and
## @M { C sub "max" }.  We focus here on the start of the cycle, which
## is special because several of @M { C }'s resources may have history,
## and groups of tasks of unusual length may be needed for them.
## @PP
## The algorithm presented here is called @I { grouping by history }.
## What it actually does, though, is assign resources to tasks rather
## than group tasks.  It does this because it is not enough to create
## groups of tasks of unusual length; it is also necessary to reserve
## them for the resources they were created for.  Assigning the resources
## to them is the obvious way to do that.  Strictly speaking, this makes
## grouping by history a resource solver rather than a resource-structural
## solver.
## @PP
## This raises the question of why KHE's other resource assignment
## solvers can't be left to handle history themselves.  The two solvers
## in question are @C { KheTimeSweepAssignResources }
## (Section {@NumberOf resource_solvers.matching.time.sweep}) and
## @C { KheDynamicResourceSequentialSolve }
## (Section {@NumberOf resource_solvers.dynamic.initial}).
## The answer, at least in part, is that they both run after grouping,
## which will not work well if grouping does not take history into account.
## Also, @C { KheDynamicResourceSequentialSolve } may make arbitrary
## choices for the resources it assigns which cause problems for
## other resources that are not assigned until later, including
## problems satisfying history requirements.
## @PP
## @BI { Constraints }.  Each constraint @M { C } that the algorithm
## handles must satisfy these conditions:
## @NumberedList
## 
## @LI {
## @M { C } is a limit active intervals constraint with at least one time
## group.
## }
## 
## # @LI {
## # @M { C } has a non-zero history value for at least one
## # resource of the given resource type @C { rt }.
## # }
## 
## @LI {
## Each time group of @M { C } is positive.
## }
## 
## @LI {
## Each time group @M { g } of @M { C } is a subset of the times of one
## day (that is, one time group of the common frame), called @M { g }'s
## @I { associated day }.
## }
## 
## @LI {
## As we proceed from one time group of @M { C } to the next,
## the associated days are consecutive.
## }
## 
## @LI {
## The associated day of the first time group of @M { C } is
## the first day of the cycle.
## }
## 
## @EndList
## These conditions are checked, and if any fail to hold, @M { C }
## is ignored.  The first two conditions just ensure that @M { C }
## is relevant to history, so they don't really count as restrictions.
## The last three are less restrictive in practice than they seem.
## The most likely case of a real-world constraint that fails them
## is a limit on the number of consecutive busy
## weekends.  However, this may not matter, because limits on
## consecutive busy weekends do not seem to occur in practice, and
## it is not clear what the algorithm could do with them if they
## did, given the 5-day gaps between weekends.
## @PP
## Let the sequence of time groups of @M { C } be
## @M { G sub 0 ,..., G sub {n-1} }, where @M { n >= 1 }.  The
## first time group is @M { G sub 0 } rather than @M { G sub 1 }
## to agree with the C language convention.  We use 0-origin
## indexing generally.
## @PP
## One limit active intervals constraint may have several offsets,
## each representing a different instantiation of the constraint.
## We treat each offset as a distinct constraint, but for simplicity
## of presentation we say `constraint' when we should, strictly,
## say `constraint plus offset'.
## @PP
## We assume here that @M { C }'s cost function is not a step function.
## In the rare cases where it is a step function, our analysis does not
## always hold---but we apply our algorithm anyway.
## # @PP
## # There may be costs in assigning or not assigning certain tasks, but
## # those do not matter to us here.  Our sole concern is with the
## # requirements placed on resources' timetables by history.
## # @PP
## # The basic idea is to build an unweighted matching graph in which
## # each demand node is a resource, each supply node is a set of grouped
## # tasks, and each edge joins a resource to a set of grouped tasks
## # that satisfies the history needs of that resource.  We use a maximum
## # matching in this graph to define an assignment of resources to sets
## # of grouped tasks which satisfies the history requirements of as many
## # resources as possible.  Here now are the details.
## @PP
## @BI { Resources }.  We are only interested in resources that must be
## busy during @M { C }'s first time group in order to avoid a cost for
## @M { C } caused by history.  Let @M { h(r) } be @M { C }'s history value
## for resource @M { r }.
## @BulletList
## 
## @LI {
## If @M { h(r) = 0 }, or equivalently if @M { C } contains no value
## for @M { h(r) }, then there is no constraint on @M { r }'s timetable
## at the start of the cycle, so we are not interested in @M { r }.
## }
## 
## @LI {
## If @M { C sub "min" <= h(r) <= C sub "max" }, there is no need
## to extend the existing sequence of @M { h(r) } tasks, since as
## it stands it generates zero cost.  If @M { C sub "max" < h(r) },
## then it would be a bad idea to extend it, because it is already
## generating a cost which will increase if we extend it further.
## So we are not interested in @M { r } in these cases.
## }
## 
## @EndList
## So the set @M { R } of resources of interest consists of those
## resources @M { r } such that @M { 0 < h(r) < C sub "min" }.
## @PP
## We are not going to worry about @M { r } having history in two
## constraints @M { C sub 1 } and @M { C sub 2 }, or more.  If
## @M { C sub 1 } monitors night shifts and @M { C sub 2 } monitors
## day shifts, then we cannot have @M { h(r) > 0 } in both.  The
## only practical possibility is for @M { C sub 1 } to monitor night
## shifts (or any other single shift type) and @M { C sub 2 } to
## monitor busy days.  We'll be sorting the constraints so that those
## with smaller time groups come first, and ignoring occurrences of
## a given resource @M { r } in history lists after its first occurrence.
## @PP
## @BI { Admissible tasks }.
## We want to assign resources with non-zero history to tasks running
## at the start of the cycle.  Each task @M { t } used for this must
## satisfy these conditions:
## @ParenAlphaList
## 
## @LI @OneRow {
## Task @M { t } has the given resource type @C { rt }.
## }
## 
## @LI @OneRow {
## Task @M { t } is a proper root task.
## }
## 
## @LI @OneRow {
## The times that @M { t } is running (including the times of any
## tasks assigned, directly or indirectly, to @M { t }) include
## at least one time.
## }
## 
## @LI @OneRow {
## The times that @M { t } is running (including the times of any
## tasks assigned, directly or indirectly, to @M { t }) include
## at most one time from each day.
## }
## 
## @LI @OneRow {
## Every time that @M { t } is running is a time monitored by @M { C }.
## }
## 
## @LI @OneRow {
## The days of the times that @M { t } is running are consecutive.
## }
## 
## @EndList
## Tasks satisfying these conditions are called @I { admissible tasks }.
## @PP
## The first four conditions are not really restrictions.  The fifth
## condition is needed because if @M { t } is running at a time not
## monitored by @M { C }, then assigning @M { t } to a resource will
## make that resource busy on the day of that time, preventing it from
## being busy at a time needed to satisfy @M { C }.
## @PP
## Condition (f) allows us to represent the days that @M { t } is
## running as an interval:  a pair of integer indexes @M { (a, b) }
## satisfying @M { 0 <= a <= b } which we call @M { i(t) }.  This is
## both an interval in the sequence of days of the cycle and an interval
## in the sequence of time groups of @M { C }, given the restrictions
## above on how these two sequences of time groups are related.  We
## write @M { l(t) } for the length of @M { i(t) }.
## @PP
## The algorithm relies on sets @M { T sub i }, each of which contains
## all admissible tasks @M { t } such that @M { i(t) = (i, k) } for
## some @M { k >= i }; that is, all admissible tasks whose first day
## has index @M { i }.  Building @M { T sub i } is a straightforward
## matter of retrieving from the event timetable monitor all meets
## running at the times of @M { G sub i }, finding all the tasks of
## type @C { rt } lying within those meets, finding their proper root
## tasks, then building their intervals and omitting those tasks that
## do not satisfy all the conditions.  Each @M { T sub i } is built
## only when it is needed.
## @PP
## @BI { Admissible task-sets }.
## As we build larger sets of tasks to assign to a resource @M { r },
## we don't want the tasks to overlap in time, or be separated by
## unused days.  So we define an @I { admissible task-set } to be a
## non-empty set of tasks such that each task is admissible, the
## tasks run on disjoint days, those days include the first day of
## the cycle, and there are no unused days between tasks.
## @PP
## The days that an admissible task-set @M { s } is running form an
## interval @M { i(s) } which begins on the first day of the cycle.
## As usual we define the length @M { l(s) } to be the length of
## @M { i(s) }.  We also define the @I domain @M { d(s) } to be the
## intersection of the domains of @M { s }'s tasks.  This is the set
## of resources that can be assigned to all of the tasks of @M { s }.
## @PP
## @BI { The algorithm }.
## As an initial idea, suppose we have somehow come up with a set
## @M { S } of admissible task-sets @M { s }.  Then we can solve
## our problem by building a bipartite graph and finding a maximum
## matching in it.  Each demand node is a resource @M { r } from
## @M { R }, each supply node is a task-set @M { s } from @M { S },
## and each edge joins an @M { r } to an @M { s } when
## @NumberedList
## 
## @LI {
## @M { r in d(s) };
## }
## 
## @LI {
## @M { C sub "min" <= h(r) + l(s) };
## }
## 
## @LI {
## @M { h(r) + l(s) <= C sub "max" }.
## }
## 
## @EndList
## A maximum matching in this graph can be used to decide which assignments
## to make.
## @PP
## Although this initial idea helps to clarify the problem, the real
## issue is how to group tasks into a set @M { S } of admissible
## task-sets so that the resulting maximum matching is as large as
## possible.  There does not seem to be an efficient algorithm for
## this problem (it resembles three-dimensional matching, which
## is NP-complete), so we proceed heuristically, as follows.
## @PP
## The algorithm builds a sequence of
## minimum-cost bipartite matchings.  We represent an instance
## of the minimum-cost bipartite matching problem in the usual way,
## as a triple @M { ( V sub 1 , V sub 2 , E ) }, where @M { V sub 1 }
## is a set of @I { demand nodes } that want to be matched,
## @M { V sub 2 } is a set of @I { supply nodes } that are available
## to match with demand nodes, and @M { E } is a set of weighted edges.
## Each edge @M { e = ( v sub 1 , v sub 2 , w ) } joins one
## demand node @M { v sub 1 } to one supply node @M { v sub 2 } by
## an edge of weight @M { w }.
## @PP
## The algorithm alternates between two kinds of minimum-cost bipartite
## matchings.  For each kind, we first present the demand nodes, then
## the supply nodes, then the edges.  We then explain how the matching
## is used, and only after that do we define the edge weights.  We do
## it this way because the weights are easier to understand once we
## know how the matching is used.
## @PP
## In the first kind of matching, which we call an @I { X-graph matching },
## the graph has the form @M { X sub i = (R, S, E) }
## where @M { R } is a set of resources of interest and
## @M { S } is a set of admissible task-sets, each of which has
## interval @M { i(s) = (0, j) } for some @M { j >= i }.  In other
## words, each task-set of @M { S } covers the first @M { i + 1 }
## time groups of @M { C } and possibly more.  The particular resources
## included in @M { R } and task-sets included in @M { S } depend on
## the progress of the algorithm and will be given later.
## @PP
## # @M { R prime } is a set of dummy supply nodes, one for each resource.  In
## # other words, for each @M { r in R } there is one @M { r prime in R prime }.
## # For each @M { r } there is an edge from @M { r } to @M { r prime };
## # this is the only edge entering @M { r prime }.  This arrangement
## # ensures that @M { r } always matches with something; i
## # @PP
## Some (not all) of the edges @M { (r, s) } in a minimum-cost matching
## in @M { X sub i } will be interpreted as decisions to assign @M { r }
## to the tasks of @M { s }.  Accordingly, an edge is drawn between
## demand node @M { r } and supply node @M { s } when conditions (1)
## and (3) above hold.
## @PP
## After finding a minimum-cost matching in @M { X sub i } we
## divide the @M { r in R } into three categories:
## @BulletList
## 
## @LI @OneCol {
## If @M { r } did not match, it is dropped (removed from @M { R }).
## It is not assigned to any tasks, and grouping by history will
## not assign it to any tasks.
## }
## 
## @LI @OneCol {
## If @M { r } matched with some @M { s in S }, and (2) above happens
## to hold for this @M { r } and @M { s }, then assigning @M { r } to
## the tasks of @M { s } gives @M { r } everything it needs.  So those
## assignments are made, then @M { r } is dropped (removed from
## @M { R }), and @M { s } is dropped (removed from @M { S }).
## }
## 
## @LI @OneCol {
## If @M { r } matched with some @M { s in S }, but (2)
## above does not hold for this @M { r } and @M { s }, then
## the quest to satisfy @M { r } must continue, so @M { r }
## remains in @M { R } and @M { s } remains in @M { S }.
## No assignments are made.
## }
## 
## @EndList
## Say something profound here.
## @PP
## When defining the edge weights, it helps to remember that X-graph
## matching is similar to resource matching
## (Section {@NumberOf resource_solvers.matching}).  Both use weighted
## bipartite matching to match resources with tasks.  The weight of an
## edge in resource matching is the solution cost after @M { r } is
## assigned to @M { s }.  But to do that here would probably not work
## well, because only some of the resources of type @C { rt } are being
## assigned.  So here, to each edge @M { (r, s) } we assign a weight
## @M { w(r, s) } which approximates the change in solution cost
## (that is, cost after minus cost before) when @M { r } is assigned
## to @M { s }.
## @PP
## Solution cost is affected by many constraints as grouping by
## history proceeds, but we are going to focus here on just two kinds:
## the limit active intervals constraint @M { C } that started all
## this, and the event resource constraints that are affected by the
## assignment or non-assignment of @M { s }.
## @PP
## Taking only @M { C } into account, let @M { a(r) } be the cost to
## @M { r } of assigning @M { r }, and let @M { n(r) } be the cost to
## @M { r } of not assigning @M { r }.  Similarly, taking event
## resource constraints relevant to @M { s } into account, let
## @M { a(s) } be the cost to @M { s } of assigning @M { s }, and let
## @M { n(s) } be the cost to @M { s } of not assigning @M { s }.  Then
## @ID @Math {
## w(r, s) = a(r) - n(r) + a(s) - n(s)
## }
## is a suitable weight.  The more the cost of non-assignment exceeds the
## cost of assignment, the smaller this will be (very likely it will be
## negative, but that does not matter), and the greater the chance will be
## of choosing this edge and thus avoiding the expensive non-assignment.
## @PP
## Concretely, @M { a(r) } is 0 and @M { n(r) } is the cost due to
## @M { h(r) } being smaller than @M { C sub "min" }.  The values for
## @M { n(s) } and @M { a(s) } are sums of the values returned by
## @C { KheTaskNonAsstAndAsstCost }
## (Section {@NumberOf resource_structural.mtask_finding.ops}).
## @PP
## This whole operation changes @M { R } and @M { S }.  So we notate it as
## @ID @M {
## (R, S) = XMatch(R, S);
## }
## This does not show the assignments that occur in the second case
## above, but it does show the two sets that the X-graph works with,
## and it shows that they have new values after the match.
## @PP
## In the second kind of matching, which we call a @I { Y-graph matching },
## the graph has the form @M { Y sub i = (S, T sub i , E) }, where
## @M { i >= 1 }, @M { S } is a set of admissible task-sets @M { s }
## such that @M { i(s) = (0, j) } for some @M { j >= i-1 }, and
## @M { T sub i } is (as above) the set of all admissible tasks @M { t }
## such that @M { i(t) = (i, k) } for some @M { k >= i }.
## @PP
## Each edge @M { (s, t) } in a minimum-cost matching in @M { Y sub i }
## will be interpreted as a decision to add @M { t } to @M { s },
## producing a larger admissible task-set.
## Accordingly, we draw an edge from @M { s in S } to each
## @M { t in T sub i } whenever @M { i(s) = (1, i-1) }.  We can't
## match an admissible task-set @M { s } with @M { i(s) = (1, j) }
## for some @M { j > i-1 } with a task @M { t } from @M { T sub i }
## with @M { i(t) = (i, k) }, because they would overlap at index @M { i }.
## @PP
## After finding a minimum-cost matching in @M { Y sub i } we
## divide the @M { s in S } into three categories:
## @BulletList
## 
## @LI {
## If @M { i(s) = (1, j) } for some @M { j > i-1 }, then @M { s } cannot
## match, but it is retained as is in @M { S }.
## }
## 
## @LI {
## If @M { i(s) = (1, i-1) } and @M { s } matches with some
## @M { t in T sub i }, then @M { s } is retained in @M { S }
## with @M { t } added to it.
## }
## 
## @LI {
## If @M { i(s) = (1, i-1) } and @M { s } does not match with any
## @M { t in T sub i }, then @M { s } is dropped (removed from @M { S }).
## }
## 
## @EndList
## Say something profound here.
## @PP
## For edge weights, we can't be guided by solution cost, since that
## is not directly affected by adding @M { t } to @M { s }.  Instead,
## we ask what makes a good choice.  The answer seems to have two parts.
## @PP
## First, we want the domain of @M { s cup lbrace t rbrace } to be as
## large as possible, since that will maximize our options in later
## matchings.  For example, we don't want to add a task requiring
## a senior nurse to a set of tasks requiring a trainee nurse:
## the result might be a set of tasks that no-one can be assigned to.
## Accordingly, we want to include
## @ID @Math {
## w sub 1 = minus bar ` d( s cup lbrace t rbrace ) ` bar
## }
## (where @M { bar ... bar } is set cardinality) in the weight of the
## edge from @M { s } to @M { t }.
## @PP
## Second, we don't want to add a task with a high non-assignment cost
## to a set of tasks with a high assignment cost (or vice versa), since
## that produces a set of tasks whose cost is high whether we assign
## it or not.  We want to match tasks with a high assignment cost
## together, and tasks with a high non-assignment cost together.  Let
## @M { n(s) } and @M { n(t) } be the non-assignment costs of @M { s }
## and @M { t }, and @M { a(s) } and @M { a(t) } be the assignment costs
## of @M { s } and @M { t }.  We can get what we want by including
## @ID @Math { 
## w sub 2 = bar n(s) - n(t) bar + bar a(s) - a(t) bar
## }
## (where @M { bar ... bar } is absolute value) in the weight of the
## edge from @M { s } to @M { t }.
## @PP
## How should we combine these two weights?  We could add them together,
## but that does not really make sense, because @M { w sub 1 } is a number
## of resources and @M { w sub 2 } is a cost.  Or we could declare one to
## be more important than the other, and use a weight which is an ordered
## pair:  @M { ( w sub 1 , w sub 2 ) } or @M { ( w sub 2 , w sub 1 ) }.
## The trouble with this is that it is hard to argue that either is more
## important than the other.
## @PP
## @I { remainder still to do }
## @PP
## This whole operation uses @M { T sub i } to change the admissible task-sets
## @M { S }.  So we notate it as
## @ID @M {
## S = YMatch(S, T sub i );
## }
## This shows the two sets that the Y-graph works with, and the fact that
## @M { S } changes its value.
## @PP
## Here is the main algorithm.  @M { R } is a set of resources of
## interest, and @M { S } is a set of admissible task-sets.  The
## value assigned to @M { S } at the start of the iteration of the
## loop with index value @M { i } is a set of admissible task-sets
## @M { s }, all of which satisfy @M { i(s) = (0, j) } for some @M { j >= i }.
## @ID @OneCol lines @Break {
## @M { R } = the set of all resources of interest;
## @B {for}( @M { i } = 0;  @M { i < n @B " and " bar R bar > 0 };  @M { i } = @M { i + 1 } )
## "{"
##     @B {if}( @M { i } == 0 )
##         @M { S = lbrace lbrace t rbrace `` bar `` t in T sub i rbrace };
##     @B {else}
##         @M { S = YMatch(S, T sub i ) };
## 
##     @M { (R, S) = XMatch(R, S) };
## "}"
## }
## In words, each iteration first builds a current set of admissible
## task-sets @M { S }, from scratch on the first iteration, and by
## extending the previous set on subsequent iterations.  It then matches
## @M { S } with the remaining resources of interest, and repeats until
## all resources have been handled.
## @PP
## @BI { Concluding points }.
## Although this algorithm works off limit active intervals constraints,
## it is quite different from profile grouping.  It needs to run before
## other kinds of grouping are run.  There is one point of potential
## overlap, however.  As described here, for the most part we build
## task-sets @M { s } such that @M { h(r) + l(s) = C sub "min" }.  We
## could choose to build larger sets than this, as long as we respect
## the upper limit @M { h(r) + l(s) <= C sub "max" }.  This might be
## useful if regular profile grouping determines that a set has to end
## where the larger @M { s } ends.  At present we are not doing this;
## we are relying on other parts of the overall solve to extend
## @M { s } if needed.
## @PP
## A review of this section will show that the algorithm still works if
## different resources have different values for @M { C sub "min" } and
## @M { C sub "max" }, as long as the time groups of @M { C } are the
## same for all resources.  So we start by finding all limit active
## intervals constraints that have the properties given above, then
## partition them into equivalence classes.  Two constraints lie in the
## same class when they have the same time groups in the same order.
## We then treat each class like a single constraint.  The resources
## of interest are all resources with non-zero history in any of the
## class's constraints, and @M { C sub "min" } and @M { C sub "max" },
## as well as the constraint weight and cost function, can differ
## between resources.
## @PP
## As mentioned earlier, we sort the constraint classes so that classes
## with smaller time groups come first.  A resource is of interest only
## in the first class where it has non-zero history.
## @PP
## All this is done, independently of any tasker or other solver,
## by function
## @ID @C {
## int KheGroupByHistory(KHE_SOLN soln, KHE_RESOURCE_TYPE rt,
##   KHE_OPTIONS options, KHE_TASK_SET r_ts);
## }
## Strictly speaking, this does no task grouping at all; rather,
## it assigns some resources to some tasks.  It returns the number of
## distinct resources that it assigns to tasks, adding to @C { r_ts }
## (if non-@C { NULL }) all tasks assigned a resource.  It is
## called by @C { KheGroupByResourceConstraints }
## (Section {@NumberOf resource_structural.constraints}) before
## other grouping functions.
## @PP
## A question is how to incorporate information about the cost of
## assigning or not assigning certain tasks.  We prefer to assign
## tasks for which non-assignment has a cost, and we prefer to not
## assign tasks for which assignment has a cost, but at present we
## are not doing anything to make that happen.
## @PP
## The algorithm has one undesirable property:  for each resource
## @M { r }, it either reduces @M { c(r) } all the way to 0, or else
## it does not reduce it at all.  There should be some way of handling
## resources for which the best outcome is somewhere in between.
## @End @SubSection
#
#@EndSubSections
#@End @Section

@Section
    @Title { Task finding }
    @Tag { resource_structural.task_finding }
@Begin
@LP
@I { Task finding } is KHE's name for some operations, based on
@I { task finder } objects, that find sets of tasks which are to be
moved all together from one resource to another.  Task finding is
used by only a few solvers, because it has been replaced by
@I { mtask finding }, the subject of
Section {@NumberOf resource_structural.mtask_finding}.  Only old
code uses task finding now; it may eventually be removed altogether.
@PP
Task finding is concerned with which days tasks are running.  A @I day
is a time group of the common frame.  The days that a task @C { t }
is running are the days containing the times that @C { t } itself is
running, plus the days containing the times that the tasks assigned
to @C { t }, directly or indirectly, are running.  The days that a
task set is running are the days that its tasks are running.
@PP
Task finding represents the days that a task or task set is running
by a @I { bounding interval }, a pair of integers:  @C { first_index },
the index in the common frame of the first day that the task or task
set is running, and @C { last_index }, the index of the last day that
the task or task set is running.  So task finding is unaware of cases
where a task runs twice on the same day, or has a @I gap (a day within
the bounding interval when it is not running).  Neither is likely in
practice.  Task finding considers the duration of a task or task set
to be the length of its bounding interval.
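@PP
To make the representation concrete, here is a hedged sketch using
the interval operations of Section {@NumberOf general_solvers.intervals}
(the helper names @C { KheIntervalMake } and @C { KheIntervalLength }
are assumptions, not necessarily the platform's exact names):
@ID @C {
/* a task running on days 2, 3, and 4 has bounding interval (2, 4) */
KHE_INTERVAL in = KheIntervalMake(2, 4);
int duration = KheIntervalLength(in);   /* 3, the length of the interval */
}
A task running only on days 2 and 4 has the same bounding interval,
and hence the same duration; the gap on day 3 is invisible, as
explained above.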
@PP
Task finding operations typically find a set of tasks, often
stored in a task set object (Section {@NumberOf extras.task_sets}).
In some cases these tasks form a @I { task run }, that is, they
satisfy these conditions:
@NumberedList

@LI {
The set is non-empty.  An empty run would be useless.
}

@LI {
Every task is a proper root task.  The tasks are being found in
order to be moved from one resource to another, and this ensures
that the move will not break up any groups.
}

@LI {
No two tasks run on the same day.  This is more or less automatic
when the tasks are all assigned the same resource initially, but it
holds whether the tasks are assigned or not.  If it didn't, then
when the tasks are moved to a common resource there would be clashes.
}

@LI {
The days that the tasks are running are consecutive.  In other words,
between the first day and the last there are no @I { gaps }:  days
when none of the tasks is running.
}

@EndList
The task finder does not reject tasks which run twice on the same
day or which have gaps.  As explained above, it is unaware of these
cases.  So the last two conditions should really say that the task
finder does not introduce any @I new clashes or gaps when it groups
tasks into runs.
@PP
Some runs are @I { unpreassigned runs }, meaning that all of their
tasks are unpreassigned.  Only unpreassigned runs can be moved from
one resource to another.  And some runs are @I { maximal runs }:
they cannot be extended, either to left or right.  We mainly deal
with maximal runs, but just what we mean by `maximal' depends on
circumstances.  For example, we may want to exclude preassigned
tasks from our runs.  So our definition does @I not take the
arguably reasonable extra step of requiring all runs to be maximal.
@PP
Some task finding operations find all tasks assigned a particular
resource in a particular interval.  In these cases, only conditions
2 and 3 must hold; the result need not be a task run.
@PP
Task finding treats non-assignment like the assignment of a special
resource (represented by @C { NULL }).  This makes it equally at home
finding assigned and unassigned tasks.
@PP
A task @C { t } @I { needs assignment } if @C { KheTaskNeedsAssignment(t) }
(Section {@NumberOf solutions.tasks.asst}) returns @C { true },
meaning that non-assignment of a resource to @C { t } would incur
a cost, because of an assign resource constraint, or a limit
resources constraint which is currently at or below its minimum
limit, that applies to @C { t }.  Task finding never includes
tasks that do not need assignment when it searches for unassigned
tasks, because assigning resources to such tasks is not a high
priority.  It does include them when searching for assigned tasks.
@PP
A resource is @I { effectively free } during some set of days if it
is @C { NULL }, or it is not @C { NULL } and the tasks it is assigned
to on those days do not need assignment.  The point is that it
is always safe to move some tasks to a resource on days when it is
effectively free:  if the resource is @C { NULL }, they are simply
unassigned, and if it is non-@C { NULL }, any tasks running on those
days do not need assignment, and can be unassigned, at no cost, before
the move is made.  Task finding utilizes the effectively free concept and
offers move operations that work in this way.
@BeginSubSections

@SubSection
    @Title { Task finder objects }
    @Tag { resource_structural.task_finding.task_finder }
@Begin
@LP
To create a task finder object, call
@ID @C {
KHE_TASK_FINDER KheTaskFinderMake(KHE_SOLN soln, KHE_OPTIONS options,
  HA_ARENA a);
}
This returns a pointer to a private struct in arena @C { a }.  Options
@C { gs_common_frame } (Section {@NumberOf extras.frames}) and
@C { gs_event_timetable_monitor } (Section {@NumberOf general_solvers.general})
are taken from @C { options }.  If either is @C { NULL },
@C { KheTaskFinderMake } returns @C { NULL }, since it cannot
do its work without them.
@PP
Ejection chain repair code can obtain a task finder from the ejector
object, by calling
@ID @C {
KHE_TASK_FINDER KheEjectorTaskFinder(KHE_EJECTOR ej);
}
This saves time and memory compared with creating new task finders
over and over.  Once again the return value is @C { NULL } if the
two options are not both present.
@PP
The days tasks are running (the time groups of the common frame) are
represented in task finding by their indexes, as explained above.
The first legal index is 0; the last is returned by
@ID @C {
int KheTaskFinderLastIndex(KHE_TASK_FINDER tf);
}
This is just @C { KheFrameTimeGroupCount(frame) - 1 }, where @C { frame }
is the common frame.  Also,
@ID @C {
KHE_FRAME KheTaskFinderFrame(KHE_TASK_FINDER tf);
}
may be called to retrieve the frame itself.
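@PP
For example, this hedged sketch visits every day known to a task
finder @C { tf }; function @C { KheFrameTimeGroup }, assumed here,
would return the frame's time group with a given index:
@ID @C {
KHE_FRAME frame = KheTaskFinderFrame(tf);
for( int i = 0;  i <= KheTaskFinderLastIndex(tf);  i++ )
{
  KHE_TIME_GROUP day = KheFrameTimeGroup(frame, i);
  /* ... examine day i ... */
}
}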
@PP
As defined earlier, the bounding interval of a task or task set
is the smallest interval containing all the days that the task
or task set is running.  It is returned by these functions:
@ID @C {
KHE_INTERVAL KheTaskFinderTaskInterval(KHE_TASK_FINDER tf,
  KHE_TASK task);
KHE_INTERVAL KheTaskFinderTaskSetInterval(KHE_TASK_FINDER tf,
  KHE_TASK_SET ts);
}
These return an interval (Section {@NumberOf general_solvers.intervals})
holding the indexes in the common frame of the first and last days that
@C { task } or @C { ts } is running.  If @C { ts } is empty, the
interval is empty.  There is also
@ID @C {
KHE_INTERVAL KheTaskFinderTimeGroupInterval(KHE_TASK_FINDER tf,
  KHE_TIME_GROUP tg);
}
which returns an interval holding the first and last days that
@C { tg } overlaps with.  If @C { tg } is empty, the interval
is empty.
@PP
These three operations find task sets and runs:
@ID @C {
void KheFindTasksInInterval(KHE_TASK_FINDER tf,
  KHE_INTERVAL in, KHE_RESOURCE_TYPE rt, KHE_RESOURCE from_r,
  bool allow_preassigned, bool allow_partial,
  KHE_TASK_SET res_ts, KHE_INTERVAL *res_in);
bool KheFindFirstRunInInterval(KHE_TASK_FINDER tf,
  KHE_INTERVAL in, KHE_RESOURCE_TYPE rt, KHE_RESOURCE from_r,
  bool allow_preassigned, bool allow_partial, bool sep_need_asst,
  KHE_TASK_SET res_ts, KHE_INTERVAL *res_in);
bool KheFindLastRunInInterval(KHE_TASK_FINDER tf,
  KHE_INTERVAL in, KHE_RESOURCE_TYPE rt, KHE_RESOURCE from_r,
  bool allow_preassigned, bool allow_partial, bool sep_need_asst,
  KHE_TASK_SET res_ts, KHE_INTERVAL *res_in);
}
All three functions clear @C { res_ts }, which must have been
created previously, then add to it some tasks which are assigned
@C { from_r } (or are unassigned if @C { from_r } is @C { NULL }).
They set @C { *res_in } to the bounding interval of the tasks of
@C { res_ts }.
@PP
Call @C { in } the @I { target interval }.  A task @C { t }
@I { overlaps } the target interval when at least one of the days
on which @C { t } is running lies in it.  Subject to the following
conditions, @C { KheFindTasksInInterval } finds all tasks
that overlap the target interval; @C { KheFindFirstRunInInterval }
finds the first (leftmost) run containing a task that overlaps the
target interval, or returns @C { false } if there is no such run;
and @C { KheFindLastRunInInterval } finds the last (rightmost) run
containing a task that overlaps the target interval, or returns
@C { false } if there is no such run.
@PP
When @C { from_r } is @C { NULL }, only unassigned tasks that need
assignment (as discussed above) are added.  The first could be any
unassigned task of type @C { rt } (it is this that @C { rt } is
needed for), but the others must be compatible with the first, in
that we expect these tasks to be assigned some single resource,
and it would not do for them to have widely different domains.
@PP
Some tasks are @I { ignored }, which means that the operation
behaves as though they are simply not there.  Subject to this
ignoring feature, the runs found are maximal.  A task is ignored in
this way when it runs on a day when some task already added to
@C { res_ts } is running.  Preassigned
tasks are allowed when @C { allow_preassigned } is @C { true }.
Tasks that are running partly or wholly outside the target
interval are allowed when @C { allow_partial } is @C { true }.
When @C { allow_partial } is @C { true }, a run can extend
an arbitrary distance beyond the target interval, and contain
some tasks that do not overlap the target interval at all.
@PP
If @C { sep_need_asst } is @C { true }, all tasks @C { t }
in the run found by @C { KheFindFirstRunInInterval } or
@C { KheFindLastRunInInterval } have the same value of
@C { KheTaskNeedsAssignment(t) }.  This value could be @C { true }
or @C { false }, but it is the same for all tasks in the run.
If @C { sep_need_asst } is @C { false }, there is no requirement
of this kind.
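@PP
To tie the parameters together, here is a hedged sketch of one
plausible call of @C { KheFindFirstRunInInterval }.  It uses only
functions documented in this section, except that
@C { KheIntervalMake } is an assumption, and @C { res_ts } must have
been created previously as stated above:
@ID @C {
/* find the first run of tasks of type rt assigned from_r that
   overlaps days 5 to 9:  no preassigned tasks, no tasks running
   outside the target interval, uniform need for assignment */
KHE_INTERVAL res_in;
if( KheFindFirstRunInInterval(tf, KheIntervalMake(5, 9), rt, from_r,
    false, false, true, res_ts, &res_in) )
{
  /* res_ts now holds the run, whose bounding interval is res_in */
}
}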
@End @SubSection

@SubSection
    @Title { Daily schedules }
    @Tag { resource_structural.task_finding.daily }
@Begin
@LP
Sometimes more detailed information is needed about when a
task is running than just the bounding interval.  In those
cases, task finding offers @I { daily schedules }, which
calculate both the bounding interval and what is going on
on each day:
@ID @C {
KHE_DAILY_SCHEDULE KheTaskFinderTaskDailySchedule(
  KHE_TASK_FINDER tf, KHE_TASK task);
KHE_DAILY_SCHEDULE KheTaskFinderTaskSetDailySchedule(
  KHE_TASK_FINDER tf, KHE_TASK_SET ts);
KHE_DAILY_SCHEDULE KheTaskFinderTimeGroupDailySchedule(
  KHE_TASK_FINDER tf, KHE_TIME_GROUP tg);
}
These return a @I { daily schedule }:  a representation of
what @C { task }, @C { ts }, or @C { tg } is doing on each
day, including tasks assigned directly or indirectly to
@C { task } or @C { ts }.  Also,
@ID @C {
KHE_DAILY_SCHEDULE KheTaskFinderNullDailySchedule(
  KHE_TASK_FINDER tf, KHE_INTERVAL in);
}
returns a daily schedule representing doing nothing during
the given interval.
@PP
A @C { KHE_DAILY_SCHEDULE } is an object which uses memory
taken from its task finder's arena.  It can be deleted (which
actually means being added to a free list in its task finder)
by calling
@ID @C {
void KheDailyScheduleDelete(KHE_DAILY_SCHEDULE ds);
}
It has these attributes:
@ID @C {
KHE_TASK_FINDER KheDailyScheduleTaskFinder(KHE_DAILY_SCHEDULE ds);
bool KheDailyScheduleNoOverlap(KHE_DAILY_SCHEDULE ds);
KHE_INTERVAL KheDailyScheduleInterval(KHE_DAILY_SCHEDULE ds);
}
# int KheDailyScheduleFirstDayIndex(KHE_DAILY_SCHEDULE ds);
# int KheDailyScheduleLastDayIndex(KHE_DAILY_SCHEDULE ds);
@C { KheDailyScheduleTaskFinder } returns @C { ds }'s task finder;
@C { KheDailyScheduleNoOverlap } returns @C { true } when no two
of the schedule's times occur on the same day, and @C { false }
otherwise; and @C { KheDailyScheduleInterval } returns the interval
of day indexes of the schedule's days.  For each day between the
interval's first and last inclusive,
@ID @C {
KHE_TASK KheDailyScheduleTask(KHE_DAILY_SCHEDULE ds, int day_index);
}
returns the task running in @C { ds } on day @C { day_index }.
It may be a task assigned directly or indirectly to @C { task }
or @C { ts }, not necessarily @C { task } or a task from
@C { ts }.  @C { NULL } is returned if no task is running
on that day.  This is certain for schedules created by
@C { KheTaskFinderTimeGroupDailySchedule } and
@C { KheTaskFinderNullDailySchedule }, but it is also possible
for schedules created by @C { KheTaskFinderTaskDailySchedule }
and @C { KheTaskFinderTaskSetDailySchedule }.  If there are two
or more tasks running on that day, an arbitrary one of them is
returned; this cannot happen when @C { KheDailyScheduleNoOverlap }
returns @C { true }.  Similarly,
@ID @C {
KHE_TIME KheDailyScheduleTime(KHE_DAILY_SCHEDULE ds, int day_index);
}
returns the time in @C { ds } that is busy on day @C { day_index }.
This will be @C { NULL } if there is no time in the schedule on that
day, which is always the case when the schedule was created by a
call to @C { KheTaskFinderNullDailySchedule }.
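@PP
As a hedged example, this sketch visits each day of a daily schedule
@C { ds }; the interval accessors @C { KheIntervalFirst } and
@C { KheIntervalLast } are assumptions:
@ID @C {
KHE_INTERVAL in = KheDailyScheduleInterval(ds);
for( int day = KheIntervalFirst(in);  day <= KheIntervalLast(in);  day++ )
{
  KHE_TASK task = KheDailyScheduleTask(ds, day);  /* may be NULL */
  KHE_TIME time = KheDailyScheduleTime(ds, day);  /* may be NULL */
  /* ... examine what ds is doing on this day ... */
}
}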
@End @SubSection

@EndSubSections
@End @Section

@Section
    @Title { Multi-task finding }
    @Tag { resource_structural.mtask_finding }
@Begin
@LP
The author has made several attempts over the years to define an
equivalence relation on tasks and use it to group equivalent tasks
together into classes.  The purpose is to avoid symmetrical
assignments, in which a resource is assigned to several tasks in
turn which are in fact equivalent, wasting time.  This section
describes what he hopes and believes will be his final attempt.
@PP
It could be argued that equivalence classes of tasks are only needed
because XHSTT, and following it the KHE platform, allow at most one
resource to be assigned to each task at any given moment during solving.
If several could be assigned, equivalence would be guaranteed because
the `tasks' thus grouped would be indistinguishable.  This would
probably work for nurse rostering, but in high school timetabling it
would not handle tasks that become equivalent when their meets are
assigned the same time---requests for ordinary classrooms, for example.
@PP
Still, `a task to which several resources can be assigned' is a
valuable abstraction, better for the user than a set of equivalent
tasks.  So instead of defining a task group or task class (as in the
author's previous attempts), we define a @I { multi-task } or @I { mtask }
to be a task to which several resources can be assigned simultaneously.
Behind the scenes, an mtask is a set of equivalent proper root tasks,
but the user does not know or care which tasks those are, or which are
assigned which resources:  the mtask handles that, in a provably best
possible way, as we'll see.
@PP
The idea, then, is to group tasks into mtasks and to write resource
assignment algorithms that assign resources to mtasks rather than to
tasks.  Assigning resources to mtasks is somewhat harder to do than
assigning them to tasks, because mtasks accept multiple assignments,
but it should run faster because assignment symmetries are avoided.
@PP
Three types are defined here.  Type @C { KHE_MTASK } represents one
mtask.  @C { KHE_MTASK_SET } represents a simple set of mtasks.  And
@C { KHE_MTASK_FINDER } creates mtasks and holds them.  All older
attempts at task equivalencing have been removed from the KHE platform
and solvers.
# @PP
# The author has removed several older attempts at task equivalencing
# from the KHE platform and solvers.  However, one type has not been
# removed:  @C { KHE_TASKER_CLASS } from
# Section {@NumberOf resource_structural.constraints.taskers}.  It
# could be unified with @C { KHE_MTASK }, but its implementation,
# supporting combinatorial and profile grouping, would add a lot of
# complexity to @C { KHE_MTASK }.  For now, anyway, it remains separate.
@BeginSubSections

@SubSection
    @Title { Multi-tasks }
    @Tag { resource_structural.mtask_finding.ops }
@Begin
@LP
A @I { multi-task } or @I mtask is a task to which several resources
can be assigned simultaneously.  Behind the scenes, it is a non-empty set
of proper root tasks which are equivalent to one another in a sense to be
defined in Section {@NumberOf resource_structural.mtask_finding.similarity}.
This section presents the operations on mtasks.
@PP
There is no operation to create one mtask, because mtasks need to
be made together all at once, which is what @C { KheMTaskFinderMake }
(Section {@NumberOf resource_structural.mtask_finding.solver}) does.
After that, any changes to individual tasks which affect their
equivalence will render these mtasks out of date.  This includes
assignments of one task to another task, changes to task domains,
changes to whether a task assignment is fixed or not, meet splits
and merges, and attaching and detaching event resource monitors.
Because of this, it is best to create mtasks at the beginning of a
call on some resource solver, after any such changes have been made,
and delete them (by deleting the mtask finder's arena) at the end
of that call, before later calls on other solvers can change things.
# KHE's solvers do this.
@PP
However, several of these `forbidden' operations have mtask versions.
These do what the forbidden operations do (indeed, each calls one
forbidden operation), but they also update the mtasks to take account
of the change.  For example, the mtask version of assigning one task
to another will cause the two tasks to be removed from their mtasks,
and then the combined entity will be added to another mtask.  This
could make one or two mtasks disappear (since there are no empty
mtasks), and it could bring a new mtask into existence.  Operations
of this type are too slow to call from the inner loops of solvers,
but they can be called from less time-critical code.
@PP
Here now are the operations on mtasks.  One advantage of the
mtask abstraction is that we can model these operations on the
corresponding task operations---although there are some
differences, such as that we cannot assign one mtask to another.
@PP
First come some general operations:
@ID @C {
char *KheMTaskId(KHE_MTASK mt);
}
This returns an Id for mtask @C { mt }, just the task Id of its first task.
@ID @C {
KHE_RESOURCE_TYPE KheMTaskResourceType(KHE_MTASK mt);
bool KheMTaskIsPreassigned(KHE_MTASK mt, KHE_RESOURCE *r);
bool KheMTaskAssignIsFixed(KHE_MTASK mt);
KHE_RESOURCE_GROUP KheMTaskDomain(KHE_MTASK mt);
int KheMTaskTotalDuration(KHE_MTASK mt);
float KheMTaskTotalWorkload(KHE_MTASK mt);
}
Again, these come from @C { mt }'s first task; they must be the same
for all @C { mt }'s tasks, otherwise those tasks would not have been
placed into the same mtask.  A preassigned task is the only member of
its mtask, except in the unlikely case of equivalent tasks preassigned
the same resource.  A task with a fixed assignment is the only
member of its mtask.
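@PP
For example, a solver might summarize an mtask using these queries,
as in this sketch (not itself part of KHE; @C { KheResourceId } is
the usual resource Id accessor, and @C { KheMTaskTaskCount } appears
later in this section):
@ID @C {
void MTaskSummary(KHE_MTASK mt, FILE *fp)
{
  KHE_RESOURCE r;
  fprintf(fp, "mtask %s: %d tasks, duration %d, workload %.1f\n",
    KheMTaskId(mt), KheMTaskTaskCount(mt), KheMTaskTotalDuration(mt),
    KheMTaskTotalWorkload(mt));
  if( KheMTaskIsPreassigned(mt, &r) )
    fprintf(fp, "  preassigned %s\n", KheResourceId(r));
}
}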
# There is also
# @ID @C {
# float KheMTaskWorkloadPerTime(KHE_MTASK mt);
# }
# This returns @C { KheMTaskTotalWorkload(mt) } divided by
# @C { KheMTaskTotalDuration(mt) }.  This will differ from
# any individual @C { KheTaskWorkloadPerTime } if tasks with
# different workloads are grouped together, but that does not
# seem likely to be a problem in practice.
@PP
The proper root tasks of an mtask can come from the same meet, or
from different meets.  When they come from the same meet, function
@ID @C {
bool KheMTaskHasSoleMeet(KHE_MTASK mt, KHE_MEET *meet);
}
sets @C { *meet } to that meet and returns @C { true }.  Otherwise
it sets @C { *meet } to @C { NULL } and returns @C { false }.
@PP
KHE allows the user to create tasks which are not derived from any
event resource or meet.  These are intended for use as proper root
tasks to which ordinary tasks are assigned.  However, if no ordinary
tasks are assigned to them, the result is a task with duration 0.
This is awkward, but careful examination (which we'll do later)
shows that it is not really a special case.
# are true vacuously when there are no atomic tasks.
# which we call here a @I { degenerate proper root task }, or just a
# @I { degenerate task }.  Degenerate tasks are awkward, useless, and
# unlikely to occur, but still we have to allow for the possibility
# that there will be some.  So even degenerate tasks lie in mtasks,
# which we call @I { degenerate mtasks }.
@PP
An mtask @I { has fixed times } when none of its tasks (including
tasks assigned, directly or indirectly, to those tasks) lie in meets
with unassigned times, and the call to @C { KheMTaskFinderMake }
that created the mtask had @C { fixed_times } set to @C { true },
meaning that there is an assumption that assigned times will not
change.  To check this condition, call
@ID @C {
bool KheMTaskHasFixedTimes(KHE_MTASK mt);
}
When it returns @C { true }, these functions provide access to the times:
@ID @C {
KHE_INTERVAL KheMTaskInterval(KHE_MTASK mt);
KHE_TIME KheMTaskDayTime(KHE_MTASK mt, int day_index,
  float *workload_per_time);
KHE_TIME_SET KheMTaskTimeSet(KHE_MTASK mt);
}
# int KheMTaskFirstDayIndex(KHE_MTASK mt);
# int KheMTaskLastDayIndex(KHE_MTASK mt);
@C { KheMTaskInterval } returns the smallest interval of days in the
days frame of @C { mt }'s task finder that contains @C { mt }'s times.
In mtask finding generally, a value of type @C { KHE_INTERVAL }
(defined in Section {@NumberOf general_solvers.intervals}) always
denotes an interval of days.  For each index @C { day_index } in
this interval, @C { KheMTaskDayTime } returns the time that @C { mt }
is busy on the day of @C { days_frame } with index @C { day_index }, or
@C { NULL } if @C { mt } does not run that day, as well as @C { mt }'s
workload per time on that day.  Finally, @C { KheMTaskTimeSet } returns
the set of times that the tasks of @C { mt } are running.
@PP
Many mtask operations use @C { KheMTaskInterval(mt) } as their
representation of when @C { mt } is running.  This representation
is convenient, but it does not recognize days within the interval
on which an mtask runs twice, or not at all.  Two functions help
to identify such cases:
@ID @C {
bool KheMTaskNoOverlap(KHE_MTASK mt);
bool KheMTaskNoGaps(KHE_MTASK mt);
}
@C { KheMTaskNoOverlap } returns @C { true } when no two
of @C { mt }'s busy times lie on the same day, and
@C { KheMTaskNoGaps } returns @C { true } when none of
the calls to @C { KheMTaskDayTime } return @C { NULL }.
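@PP
Putting these together, one can walk the days of an mtask with
fixed times, as in this sketch (not itself part of KHE; it again
assumes interval accessors @C { KheIntervalFirst } and
@C { KheIntervalLast }, whose exact names may vary):
@ID @C {
void MTaskDayWalk(KHE_MTASK mt, FILE *fp)
{
  int day_index;  KHE_TIME time;  float workload_per_time;
  KHE_INTERVAL in;
  if( !KheMTaskHasFixedTimes(mt) )
    return;  /* times are not fixed, so there is nothing to walk */
  in = KheMTaskInterval(mt);
  for( day_index = KheIntervalFirst(in);
       day_index <= KheIntervalLast(in);  day_index++ )
  {
    time = KheMTaskDayTime(mt, day_index, &workload_per_time);
    if( time != NULL )
      fprintf(fp, "  day %d: %s (workload %.2f)\n", day_index,
        KheTimeId(time), workload_per_time);
  }
}
}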
@PP
Returning to functions that do not need fixed times,
to visit the tasks of an mtask we have
@ID @C {
int KheMTaskTaskCount(KHE_MTASK mt);
KHE_TASK KheMTaskTask(KHE_MTASK mt, int i,
  KHE_COST *non_asst_cost, KHE_COST *asst_cost);
}
@C { KheMTaskTask } returns the @C { i }th task @C { t }, plus a cost
@C { *non_asst_cost } which will be included in the solution cost
whenever @C { t } is unassigned (as reported by assign resource
monitors) and a cost @C { *asst_cost } which will be included in
the solution cost whenever @C { t } is assigned (as reported by
prefer resources monitors with empty sets of preferred resources).
Actually, these costs can vary depending on other task assignments;
the costs returned here are lower bounds that do not depend on other
assignments.  The tasks are returned so that those most in need of
assignment come first, that is, in order of decreasing
@C { *non_asst_cost - *asst_cost }.  Tasks for which this order is
not certain lie in different mtasks.  All this is explained in detail
in Section {@NumberOf resource_structural.mtask_finding.similarity}.
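@PP
For example, to visit just those unassigned tasks of @C { mt }
whose assignment would reduce the solution cost, a solver could
write the following sketch (not itself part of KHE; it uses
@C { KheMTaskAssignedTaskCount } from later in this section,
relying on the fact that assigned tasks come first):
@ID @C {
int i;  KHE_TASK t;  KHE_COST non_asst_cost, asst_cost;
for( i = KheMTaskAssignedTaskCount(mt);
     i < KheMTaskTaskCount(mt);  i++ )
{
  t = KheMTaskTask(mt, i, &non_asst_cost, &asst_cost);
  if( non_asst_cost - asst_cost <= 0 )
    break;  /* later tasks need assignment even less */
  /* ... t is unassigned and worth assigning ... */
}
}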
@PP
For the convenience of solvers that need these costs but not mtasks,
there is also
@ID @C {
void KheTaskNonAsstAndAsstCost(KHE_TASK task, KHE_COST *non_asst_cost,
  KHE_COST *asst_cost);
}
It returns these costs, as defined above, for @C { task },
quite independently of mtask finding.  Here @C { task } would
usually be a proper root task, but it does not need to be; the
costs depend on @C { task } itself and on all tasks assigned,
directly or indirectly, to @C { task }.
@PP
Next come operations concerned with resource assignment.  Each
mtask has a set of resources currently assigned to it (that is,
assigned to some of its tasks).  This set is in fact a multi-set:
a resource may be currently assigned to a given mtask more than
once.  Assigning a resource more than once to a given mtask inevitably
causes clashes, but it is better to let it happen than to waste time
preventing it.  The resource assignment operations are
@ID @C {
bool KheMTaskMoveResourceCheck(KHE_MTASK mt, KHE_RESOURCE from_r,
  KHE_RESOURCE to_r, bool disallow_preassigned);
bool KheMTaskMoveResource(KHE_MTASK mt, KHE_RESOURCE from_r,
  KHE_RESOURCE to_r, bool disallow_preassigned);
}
@C { KheMTaskMoveResourceCheck } returns @C { true } when changing one
of @C { mt }'s assignments from @C { from_r } to @C { to_r } would succeed,
and @C { false } when it would not succeed.  Here @C { from_r } could be
@C { NULL }, in which case the request is to add @C { to_r } to the set
of resources assigned to @C { mt }, that is, to increase the multiplicity
of its assignments to @C { mt } by one.  We call this an @I { assignment },
although we have not provided a @C { KheMTaskAssignResourceCheck }
operation for it.  And @C { to_r } could be @C { NULL }, in which case
the request is to remove @C { from_r } from the set of resources
assigned to @C { mt }, that is, to reduce the multiplicity of its
assignments to @C { mt } by one.  We call this an @I { unassignment },
although again
there is no @C { KheMTaskUnAssignResourceCheck } operation.
@C { KheMTaskMoveResource } actually makes the change, returning
@C { true } if it was successful, and @C { false } if it wasn't
(in that case, nothing is changed).
@PP
Parameter @C { disallow_preassigned } is concerned with the awkward
question of what to do with preassigned mtasks.  The corresponding
functions for tasks allow a preassigned task to be assigned, unassigned,
and moved to another task which is preassigned the same resource.  If
@C { disallow_preassigned } is @C { false }, the equivalent behaviour
is permitted here, allowing a preassigned mtask to be assigned and
unassigned.  However, in practice callers of these functions are more
likely to want all changes to preassigned tasks to be disallowed:
such tasks will already be assigned their preassigned resources,
and changes to those assignments are not wanted.  This is what
happens when @C { disallow_preassigned } is @C { true }.
@PP
Here is the full list of reasons why an mtask move might not succeed:
@BulletList

@LI @OneRow {
@C { from_r == to_r }, so the move would change nothing.
}

@LI @OneRow {
@C { mt } contains only fixed tasks; their assignments cannot change.
}

@LI @OneRow {
@C { mt } contains only preassigned tasks, and either the
@C { disallow_preassigned } parameter is @C { true }, so
that their assignments cannot change, or else it is @C { false },
and @C { to_r } is neither of the two permitted values (the
preassigned resource and @C { NULL }).
}

@LI @OneRow {
@C { to_r != NULL } and the domain of @C { mt } (the same for all
its tasks) does not contain @C { to_r }.
}

@LI @OneRow {
@C { from_r != NULL } and @C { from_r } is not one of the resources
assigned to @C { mt }.
}

@LI @OneRow {
@C { from_r == NULL } (and therefore @C { to_r != NULL }) and
@C { mt } does not contain at least one unassigned task to
assign @C { to_r } to.
}

@EndList
As usual, returning @C { false } when the reassignment changes nothing
reflects the practical reality that no solver wants to waste time
on such changes.
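@PP
In terms of these operations, assignment and unassignment are just
moves with one end @C { NULL }.  For example (a sketch, not itself
part of KHE):
@ID @C {
/* assign r to mt, leaving preassigned mtasks untouched */
bool MTaskAssign(KHE_MTASK mt, KHE_RESOURCE r)
{
  return KheMTaskMoveResource(mt, NULL, r, true);
}

/* unassign r from mt */
bool MTaskUnAssign(KHE_MTASK mt, KHE_RESOURCE r)
{
  return KheMTaskMoveResource(mt, r, NULL, true);
}
}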
@PP
This next function may be useful for suggesting a suitable resource for
assignment:
@ID @C {
bool KheMTaskResourceAssignSuggestion(KHE_MTASK mt, KHE_RESOURCE *to_r);
}
It returns @C { true } with @C { *to_r } set to a suggestion for
an assignment to @C { mt }, if one can be found, and @C { false }
if no suggestion can be made.  The suggestion comes by looking for
tasks which share an event resource with the next unassigned task
of @C { mt } and are already assigned a resource:  if that resource
can be assigned to @C { mt }, then it becomes the suggestion.  The
idea here is to promote resource constancy (assigning the same
resource to all the tasks of a given event resource) even when it
is not required by an avoid split assignments constraint.
@PP
For visiting the assignments to @C { mt } there is
@ID @C {
int KheMTaskAsstResourceCount(KHE_MTASK mt);
KHE_RESOURCE KheMTaskAsstResource(KHE_MTASK mt, int i);
}
which return the number of non-@C { NULL } resources in the
multi-set of resources assigned to @C { mt }, and the @C { i }th
resource, in the usual way.  There are also
@ID @C {
int KheMTaskAssignedTaskCount(KHE_MTASK mt);
int KheMTaskUnassignedTaskCount(KHE_MTASK mt);
}
which return the number of assigned tasks in @C { mt }, and the
number of unassigned tasks in @C { mt }.  Naturally, they sum to
@C { KheMTaskTaskCount(mt) }.  @C { KheMTaskAssignedTaskCount }
is a synonym for @C { KheMTaskAsstResourceCount }.  The assigned
tasks always come first in an mtask, so the first unassigned task
(if there is one) is
@ID {0.95 1.0} @Scale @C {
KheMTaskTask(mt, KheMTaskAssignedTaskCount(mt), &non_asst_cost, &asst_cost);
}
There is also
@ID @C {
bool KheMTaskNeedsAssignment(KHE_MTASK mt);
}
which returns @C { true } when @C { mt } contains at least one
unassigned task such that the costs returned by @C { KheMTaskTask }
satisfy @C { *non_asst_cost - *asst_cost > 0 }.  In other words, the
cost of the solution would be reduced if this task was assigned, as
far as the event resource monitors that determine @C { *non_asst_cost }
and @C { *asst_cost } are concerned.  Also,
@ID @C {
int KheMTaskNeedsAssignmentTaskCount(KHE_MTASK mt);
KHE_TASK KheMTaskNeedsAssignmentTask(KHE_MTASK mt, int i,
  KHE_COST *non_asst_cost, KHE_COST *asst_cost);
}
return the number of tasks in @C { mt } that need assignment,
as just defined, and the @C { i }th of these tasks, counting
from 0.  One could write
@ID @C {
KheMTaskNeedsAssignmentTaskCount(mt) > 0
}
instead of @C { KheMTaskNeedsAssignment(mt) }.  And
@ID @C {
bool KheMTaskContainsNeedlessAssignment(KHE_MTASK mt);
}
returns @C { true } if @C { mt } contains a task which is assigned
but does not need to be.  This means that a call to
@C { KheMTaskNeedsAssignment(mt) } would return @C { false }, but
furthermore, after any one resource is unassigned from @C { mt },
@C { KheMTaskNeedsAssignment(mt) } would still return @C { false }.
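@PP
A solver might use this function to trim needless assignments, as
in the following sketch (not itself part of KHE), which tries to
drop one of the resources assigned to @C { mt }:
@ID @C {
bool MTaskTrimOne(KHE_MTASK mt)
{
  int i;  KHE_RESOURCE r;
  if( !KheMTaskContainsNeedlessAssignment(mt) )
    return false;
  for( i = KheMTaskAsstResourceCount(mt) - 1;  i >= 0;  i-- )
  {
    r = KheMTaskAsstResource(mt, i);
    if( KheMTaskMoveResource(mt, r, NULL, true) )
      return true;  /* one unassignment carried out */
  }
  return false;
}
}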
@PP
Any given set of resources is always assigned to the tasks of an
mtask in a best possible (least cost) way.  When a resource is
unassigned from an mtask, the remaining assignments may no longer
have this property.  In that case, they are adjusted to make them best
possible again.
@PP
A similar issue arises when an mtask is constructed:  if the
initial resource assignments are not best possible, they will
be moved from one task to another within the mtask until they
are.  So there may be calls on task assignment operations while
@C { KheMTaskFinderMake } is running.  These are guaranteed to
not increase the cost of the solution.  They might decrease it.
@PP
An mtask's tasks all have the same domain, making the following
operations well-defined:
@ID @C {
bool KheMTaskAddTaskBoundCheck(KHE_MTASK mt, KHE_TASK_BOUND tb);
bool KheMTaskAddTaskBound(KHE_MTASK mt, KHE_TASK_BOUND tb);
bool KheMTaskDeleteTaskBoundCheck(KHE_MTASK mt, KHE_TASK_BOUND tb);
bool KheMTaskDeleteTaskBound(KHE_MTASK mt, KHE_TASK_BOUND tb);

int KheMTaskTaskBoundCount(KHE_MTASK mt);
KHE_TASK_BOUND KheMTaskTaskBound(KHE_MTASK mt, int i);
}
@C { KheMTaskAddTaskBound } adds its bound to each task.  It
returns @C { false } and changes nothing if any of the underlying
@C { KheTaskAddTaskBound } operations would return @C { false }.
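@PP
As usual, the check function supports a safe pattern such as this
sketch (not itself part of KHE), which tightens the domain of each
mtask in a set where possible, using mtask set operations from the
following subsection:
@ID @C {
int i;  KHE_MTASK mt;
for( i = 0;  i < KheMTaskSetMTaskCount(mts);  i++ )
{
  mt = KheMTaskSetMTask(mts, i);
  if( KheMTaskAddTaskBoundCheck(mt, tb) )
    KheMTaskAddTaskBound(mt, tb);
}
}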
@PP
If the domain of an mtask is changed in this way, its tasks could
become equivalent to the tasks of some other mtask that already
have the new domain.  However, no attempt is made to find and
merge such mtasks.  It does no harm, apart from wasting solve
time, to have two mtasks on hand which could be merged into one.
@PP
Mtasks work correctly with marks and paths.  Operations on
mtasks are not stored in paths, but the underlying operations on
tasks are, and that is enough to make everything work.
@PP
Finally,
@ID @C {
void KheMTaskDebug(KHE_MTASK mt, int verbosity, int indent, FILE *fp);
}
produces a debug print of @C { mt } onto @C { fp }.  It calls
@C { KheTaskDebug } for each task of @C { mt }.
@End @SubSection

@SubSection
    @Title { Multi-task sets }
    @Tag { resource_structural.mtask_finding.sets }
@Begin
@LP
Just as type @C { KHE_TASK_SET } represents a simple set of tasks,
so @C { KHE_MTASK_SET } represents a simple set of mtasks.  The
only wrinkle is that an mtask set remembers the interval that it
covers (the union of the values of @C { KheMTaskInterval(mt) }
for each of its mtasks @C { mt }).  This is done to make function
@C { KheMTaskSetInterval }, presented below, very efficient.
@PP
The operations on mtask sets follow those on task sets, with a few
adjustments.  To create and delete an mtask set, call
@ID @C {
KHE_MTASK_SET KheMTaskSetMake(KHE_MTASK_FINDER mtf);
void KheMTaskSetDelete(KHE_MTASK_SET mts, KHE_MTASK_FINDER mtf);
}
Deleted mtask sets are held in a free list in @C { mtf }, and
freed when @C { mtf }'s arena is freed.
@PP
Three operations are offered for reducing the size of an
mtask set:
@ID @C {
void KheMTaskSetClear(KHE_MTASK_SET mts);
void KheMTaskSetClearFromEnd(KHE_MTASK_SET mts, int count);
void KheMTaskSetDropFromEnd(KHE_MTASK_SET mts, int n);
}
@C { KheMTaskSetClear } clears @C { mts } back to the empty set.
@C { KheMTaskSetClearFromEnd } removes mtasks from the end
until @C { count } mtasks remain.  If @C { count } is larger
than the number of mtasks in @C { mts }, none are removed.
@C { KheMTaskSetDropFromEnd } removes the last @C { n } mtasks
from @C { mts }.  If @C { n } is larger than the number of mtasks
in @C { mts }, all are removed.
@PP
Two operations are offered for adding mtasks to an mtask set:
@ID @C {
void KheMTaskSetAddMTask(KHE_MTASK_SET mts, KHE_MTASK mt);
void KheMTaskSetAddMTaskSet(KHE_MTASK_SET mts, KHE_MTASK_SET mts2);
}
@C { KheMTaskSetAddMTask } adds @C { mt } to the end of @C { mts };
@C { KheMTaskSetAddMTaskSet } appends the elements of @C { mts2 }
to the end of @C { mts } without disturbing @C { mts2 }.
@PP
Here are two operations for deleting one mtask:
@ID @C {
void KheMTaskSetDeleteMTask(KHE_MTASK_SET mts, KHE_MTASK mt);
KHE_MTASK KheMTaskSetLastAndDelete(KHE_MTASK_SET mts);
}
@C { KheMTaskSetDeleteMTask } deletes @C { mt } from @C { mts }
(it must be present).  Assuming that @C { mts } is not empty,
@C { KheMTaskSetLastAndDelete } deletes and returns the last
mtask of @C { mts }.
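@PP
Together these operations support a familiar stack-like idiom, as
in this sketch (not itself part of KHE):
@ID @C {
KHE_MTASK mt;
while( KheMTaskSetMTaskCount(mts) > 0 )
{
  mt = KheMTaskSetLastAndDelete(mts);
  /* ... handle mt, possibly adding further mtasks to mts ... */
}
}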
@PP
To find out whether an mtask set contains a given mtask, call
@ID {0.98 1.0} @Scale @C {
bool KheMTaskSetContainsMTask(KHE_MTASK_SET mts, KHE_MTASK mt, int *pos);
}
If found, this sets @C { *pos } to @C { mt }'s index in @C { mts }.
To visit the mtasks of an mtask set, call
@ID @C {
int KheMTaskSetMTaskCount(KHE_MTASK_SET mts);
KHE_MTASK KheMTaskSetMTask(KHE_MTASK_SET mts, int i);
}
in the usual way.  There is also
@ID @C {
KHE_MTASK KheMTaskSetFirst(KHE_MTASK_SET mts);
KHE_MTASK KheMTaskSetLast(KHE_MTASK_SET mts);
}
which return the first and last elements when @C { mts } is non-empty.
@PP
For sorting an mtask set there is
@ID @C {
void KheMTaskSetSort(KHE_MTASK_SET mts,
  int(*compar)(const void *, const void *));
}
where @C { compar } compares mtasks.  There is also
@ID @C {
void KheMTaskSetUniqueify(KHE_MTASK_SET mts);
}
which uses a call to @C { HaArraySortUnique } with a suitable
comparison function to uniqueify @C { mts }, that is, to ensure
that each mtask in @C { mts } appears there at most once.  The
mtasks are sorted by increasing starting time, with ties
broken by increasing order of @C { KheTaskSolnIndex }
applied to each mtask's first task.  This does what is wanted,
given that every mtask contains at least one task, and no task
appears in two mtasks.
@PP
When @C { mts }'s mtasks all have fixed times, function
@ID @C {
KHE_INTERVAL KheMTaskSetInterval(KHE_MTASK_SET mts);
}
returns the smallest interval containing the indexes in the days
frame of the days of all of their times.  As mentioned earlier,
this interval is kept up to date as mtasks are added and removed,
ensuring that @C { KheMTaskSetInterval } just has to return a
field of @C { mts }, making it very fast.
@PP
Next come operations for changing the assignments of resources
to an mtask set:
@ID @C {
bool KheMTaskSetMoveResourceCheck(KHE_MTASK_SET mts,
  KHE_RESOURCE from_r, KHE_RESOURCE to_r, bool disallow_preassigned,
  bool unassign_extreme_unneeded);
bool KheMTaskSetMoveResource(KHE_MTASK_SET mts,
  KHE_RESOURCE from_r, KHE_RESOURCE to_r, bool disallow_preassigned,
  bool unassign_extreme_unneeded);
}
@C { KheMTaskSetMoveResource } calls @C { KheMTaskMoveResource } for each
mtask @C { mt } of @C { mts }, and @C { KheMTaskSetMoveResourceCheck }
checks whether this would succeed, without doing it.
@PP
When @C { to_r != NULL } and @C { unassign_extreme_unneeded } is
@C { true }, the first and last mtasks in @C { mts } are treated
differently.  For each, if there is a needless assignment in the
mtask, according to @C { KheMTaskContainsNeedlessAssignment }
(Section {@NumberOf resource_structural.mtask_finding.ops}),
the mtask is unassigned instead of moved.  Over the course
of a solve this reduces the number of needless assignments,
reducing resource workloads and generally improving solutions,
as the author's tests have shown.
@PP
Two similar functions are
@ID {0.95 1.0} @Scale @C {
bool KheMTaskSetMoveResourcePartialCheck(KHE_MTASK_SET mts,
  int first_index, int last_index, KHE_RESOURCE from_r, KHE_RESOURCE to_r,
  bool disallow_preassigned, bool unassign_extreme_unneeded);
bool KheMTaskSetMoveResourcePartial(KHE_MTASK_SET mts,
  int first_index, int last_index, KHE_RESOURCE from_r, KHE_RESOURCE to_r,
  bool disallow_preassigned, bool unassign_extreme_unneeded);
}
These are like @C { KheMTaskSetMoveResourceCheck } and
@C { KheMTaskSetMoveResource } except that they only apply
to some of the mtasks of @C { mts }, those whose index in
@C { mts } lies between @C { first_index } and @C { last_index }
inclusive---just as though these were the only mtasks in @C { mts }.
@PP
Finally we have
@ID @C {
void KheMTaskSetDebug(KHE_MTASK_SET mts, int verbosity, int indent,
  FILE *fp);
}
which produces a debug print of @C { mts } onto @C { fp } with the
given verbosity and indent.
@End @SubSection

@SubSection
    @Title { Multi-task finders }
    @Tag { resource_structural.mtask_finding.solver }
@Begin
@LP
The operation for creating mtasks is
@ID @C {
KHE_MTASK_FINDER KheMTaskFinderMake(KHE_SOLN soln, KHE_RESOURCE_TYPE rt,
  KHE_FRAME days_frame, bool fixed_times, HA_ARENA a);
}
Using memory from arena @C { a }, this makes a @C { KHE_MTASK_FINDER }
object containing mtasks such that every proper root task of
@C { soln } whose type is @C { rt } lies in exactly one mtask.
Or @C { rt } may be @C { NULL }, and then mtasks are created for
every resource type.  Parameter @C { days_frame } holds the common
frame and influences the operations below that depend on days.
An mtask finder is deleted when its arena is deleted, along with
its mtasks and mtask sets.
@PP
If @C { fixed_times } is @C { true }, the finder assumes that any
times currently assigned to meets will remain as they are for its
entire lifetime.  (This is not checked, so care is needed
here.)  This allows it to treat tasks from different meets as
equivalent, if they run at the same times and satisfy all other
requirements.  If @C { fixed_times } is @C { false }, the finder
does not make this assumption.  Instead, equivalent tasks must come
from the same meet, so that they always run at the same times, even
if those times change or are unassigned.  For full details, consult
Section {@NumberOf resource_structural.mtask_finding.similarity}.
# @PP
# If @C { make_group_monitors } is @C { true }, @C { KheMTaskFinderMake }
# groups certain event resource monitors together, as described in
# detail in Section {@NumberOf resource_structural.mtask_finding.eject}.
# In effect, this hides monitors of individual tasks inside monitors
# for mtasks, just as mtasks hide the tasks themselves.  It is
# recommended when using mtasks with ejection chains.
@PP
These simple queries return the attributes passed in:
@ID @C {
KHE_SOLN KheMTaskFinderSoln(KHE_MTASK_FINDER mtf);
KHE_FRAME KheMTaskFinderDaysFrame(KHE_MTASK_FINDER mtf);
bool KheMTaskFinderFixedTimes(KHE_MTASK_FINDER mtf);
HA_ARENA KheMTaskFinderArena(KHE_MTASK_FINDER mtf);
}
To find out which resource types the mtask finder is handling,
there are functions
@ID {0.98 1.0} @Scale @C {
int KheMTaskFinderResourceTypeCount(KHE_MTASK_FINDER mtf);
KHE_RESOURCE_TYPE KheMTaskFinderResourceType(KHE_MTASK_FINDER mtf, int i);
bool KheMTaskFinderHandlesResourceType(KHE_MTASK_FINDER mtf,
  KHE_RESOURCE_TYPE rt);
}
The first two allow you to visit the resource types handled by
@C { mtf }; the third tells you whether @C { mtf } handles a
given resource type.  These functions are arguably overkill,
since @C { mtf } either handles one resource type or all resource
types; but in principle it could handle any subset of the resource
types, so this approach has seemed best.
# KHE_RESOURCE_TYPE KheMTaskFinderResourceType(KHE_MTASK_FINDER mtf);
# There is no @C { KheMTaskFinderResourceType } function because
# @C { NULL } may be passed for @C { rt }.
# the mtask finder handles multiple resource types.
@PP
When dealing with mtasks, the days of the common frame that they
are running on loom large.  These days are often represented by
their indexes in the common frame (parameter @C { days_frame }
of @C { KheMTaskFinderMake }).  The index of the first day is 0,
and of the last day is
@ID @C {
int KheMTaskFinderLastIndex(KHE_MTASK_FINDER mtf);
}
This is one less than the number of time groups in @C { days_frame }.
@PP
To visit the mtasks of a @C { KHE_MTASK_FINDER } object, the calls are
@ID @C {
int KheMTaskFinderMTaskCount(KHE_MTASK_FINDER mtf);
KHE_MTASK KheMTaskFinderMTask(KHE_MTASK_FINDER mtf, int i);
}
as usual.  The order in which the mtasks appear is arbitrary,
unless one first calls
@ID @C {
void KheMTaskFinderMTaskSort(KHE_MTASK_FINDER mtf,
  int (*compar)(const void *, const void *));
}
to sort the mtasks using function @C { compar }.  One comparison
function is provided:
@ID @C {
int KheMTaskDecreasingDurationCmp(const void *, const void *);
}
@C { KheMTaskFinderMTaskSort(mtf, &KheMTaskDecreasingDurationCmp) }
sorts the mtasks by decreasing duration, which might be a good
heuristic for ordering them for assignment.
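@PP
Putting the pieces together, a typical use of an mtask finder
looks like this sketch (not itself part of KHE; the arena
@C { a } is assumed to come from the caller):
@ID @C {
void VisitMTasksByDuration(KHE_SOLN soln, KHE_RESOURCE_TYPE rt,
  KHE_FRAME days_frame, HA_ARENA a, FILE *fp)
{
  KHE_MTASK_FINDER mtf;  int i;
  mtf = KheMTaskFinderMake(soln, rt, days_frame, true, a);
  KheMTaskFinderMTaskSort(mtf, &KheMTaskDecreasingDurationCmp);
  for( i = 0;  i < KheMTaskFinderMTaskCount(mtf);  i++ )
    KheMTaskDebug(KheMTaskFinderMTask(mtf, i), 1, 2, fp);
}
}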
# @PP
# When an mtask is non-degenerate, each of its proper root tasks is,
# or is assigned (directly or indirectly) at least one task derived
# from an event resource and meet.  These tasks are called the
# @I { atomic tasks } of the proper root task.  In a non-degenerate
# mtask, each proper root task contains at least one atomic task.
@PP
When mtasks are in use, it is best to deal only with them and not
access tasks directly.  When a task is returned by some function
and has to be dealt with, the right course is to call
@ID @C {
KHE_MTASK KheMTaskFinderTaskToMTask(KHE_MTASK_FINDER mtf, KHE_TASK t);
}
to move from task @C { t } to its proper root task and from there
to the mtask containing that proper root task.  This function will
abort if there is no such mtask.  That should never happen, provided
the resource type of @C { t } is the resource type, or one of the
resource types, handled by @C { mtf }.
# since even degenerate tasks lie in mtasks.
@PP
When the @C { fixed_times } parameter of @C { KheMTaskFinderMake }
is @C { true }, one can call
@ID @C {
KHE_MTASK_SET KheMTaskFinderMTasksInTimeGroup(KHE_MTASK_FINDER mtf,
  KHE_RESOURCE_TYPE rt, KHE_TIME_GROUP tg);
KHE_MTASK_SET KheMTaskFinderMTasksInInterval(KHE_MTASK_FINDER mtf,
  KHE_RESOURCE_TYPE rt, KHE_INTERVAL in);
}
These return the set of mtasks of resource type @C { rt } that are
running at any time of @C { tg } (which must be non-empty), or at
any time of any time group of interval @C { in } of @C { mtf }'s
days frame (again, @C { in } must be non-empty).  Each set is built
on demand (except that for singleton time groups @C { tg } the sets
are built when @C { mtf } itself is built), sorted by increasing
start time, uniqueified by @C { KheMTaskSetUniqueify } when
necessary, and cached within @C { mtf } so that subsequent requests
for it run quickly.  The caller must not modify these mtask sets.
A similar function is
@ID @C {
void KheMTaskFinderAddResourceMTasksInInterval(KHE_MTASK_FINDER mtf,
  KHE_RESOURCE r, KHE_INTERVAL in, KHE_MTASK_SET mts);
}
This adds to @C { mts } the mtasks that @C { r } is assigned to
that lie wholly within interval @C { in } in the current frame,
in chronological order.  These mtasks can change as resource
assignments change, so there is no caching of the results.  One
can also do a similar job avoiding mtasks by calling
@ID @C {
void KheAddResourceProperRootTasksInInterval(KHE_RESOURCE r,
  KHE_INTERVAL in, KHE_SOLN soln, KHE_FRAME days_frame,
  KHE_TASK_SET ts);
}
to add to @C { ts } the proper root tasks assigned @C { r } in
@C { soln } that lie wholly within @C { in } of @C { days_frame }.
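@PP
For example, here is a sketch (not itself part of KHE) that tries
one resource move in every mtask of type @C { rt } running in
interval @C { in }:
@ID @C {
void TryMovesInInterval(KHE_MTASK_FINDER mtf, KHE_RESOURCE_TYPE rt,
  KHE_INTERVAL in, KHE_RESOURCE from_r, KHE_RESOURCE to_r)
{
  KHE_MTASK_SET mts;  int i;
  mts = KheMTaskFinderMTasksInInterval(mtf, rt, in);
  for( i = 0;  i < KheMTaskSetMTaskCount(mts);  i++ )
    KheMTaskMoveResource(KheMTaskSetMTask(mts, i), from_r, to_r, true);
  /* mts is cached inside mtf; it must not be modified or deleted */
}
}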
@PP
When @C { fixed_times } is @C { false }, or tasks lie in unassigned
meets, the functions just given aren't really useful.  But there are
other ways to visit mtasks.  @C { KheMTaskFinderMTaskCount } and
@C { KheMTaskFinderMTask } will visit them all, for example.
Another option is to visit the tasks of a given meet and use
@C { KheMTaskFinderTaskToMTask } to find the mtasks containing
those tasks.
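@PP
Because several tasks of one meet may map to the same mtask, a
traversal via @C { KheMTaskFinderTaskToMTask } can encounter an mtask
more than once, so duplicates must be skipped.  Here is a minimal
sketch of that deduplicating visit, with plain @C { int } values
standing in for @C { KHE_MTASK } handles:

```c
#include <assert.h>
#include <stdbool.h>

/* Visit each distinct value of mtask_of_task[0 .. task_count-1],
   skipping values already seen; returns the number of distinct
   mtasks visited.  The ints stand in for KHE_MTASK handles obtained
   by calling KheMTaskFinderTaskToMTask on each task of one meet. */
int VisitMeetMTasks(const int mtask_of_task[], int task_count)
{
    int visited[64];  /* assumes at most 64 distinct mtasks per meet */
    int visited_count = 0;
    for (int i = 0; i < task_count; i++) {
        bool seen = false;
        for (int j = 0; j < visited_count; j++)
            if (visited[j] == mtask_of_task[i])
                seen = true;
        if (!seen) {
            visited[visited_count++] = mtask_of_task[i];
            /* ... handle one mtask here ... */
        }
    }
    return visited_count;
}
```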
# are not really
# available, although they can be called and will then give empty
# results.  There is still an mtask for every task, however; the
# same condition determines whether two tasks are similar and thus
# belong in the same mtask, except for one change:  instead of
# requiring the same times, the tasks must come from the same
# meet.  There are no functions for accessing subsets of these
# mtasks, there are just the functions for accessing them all,
# given earlier.  To access the mtasks derived from a given meet,
# one can traverse the tasks of the meet, and for each task of the
# appropriate resource type, call @C { KheMTaskFinderTaskToMTask }.
# Some mtasks may be visited more than once by this procedure.
# Now for functions that are available when @C { fixed_times } is
# @C { false }, or when there are tasks whose atomic tasks do not
# all have assigned times.  One can visit the mtasks that lack
# times, or rather the non-degenerate ones, indexed by the meet
# with the lowest index:
# @ID @C {
# int KheMTaskFinderMTaskFromMeetCount(KHE_MTASK_FINDER mtf,
#   KHE_MEET meet);
# KHE_MTASK KheMTaskFinderMTaskFromMeet(KHE_MTASK_FINDER mtf,
#   KHE_MEET meet, int i);
# }
# These return the number of non-degenerate mtasks whose first meet is
# @C { meet }, and the @C { i }th of these mtasks.
@PP
We return now to functions that are available irrespective of the
value of @C { fixed_times }.  It was mentioned at the start of
this section that several operations on tasks which are forbidden
(because they would change the mtask structure) have mtask
versions which both carry out the forbidden operation and
change the mtask structure, possibly creating or destroying
some mtasks as they do so.  These operations are
@ID @C {
bool KheMTaskFinderTaskMove(KHE_MTASK_FINDER mtf, KHE_TASK task,
  KHE_TASK target_task);
bool KheMTaskFinderTaskAssign(KHE_MTASK_FINDER mtf, KHE_TASK task,
  KHE_TASK target_task);
bool KheMTaskFinderTaskUnAssign(KHE_MTASK_FINDER mtf, KHE_TASK task);
bool KheMTaskFinderTaskSwap(KHE_MTASK_FINDER mtf, KHE_TASK task1,
  KHE_TASK task2);
void KheMTaskFinderTaskAssignFix(KHE_MTASK_FINDER mtf, KHE_TASK task);
void KheMTaskFinderTaskAssignUnFix(KHE_MTASK_FINDER mtf, KHE_TASK task);
}
@C { KheMTaskFinderTaskMove } (for example) calls @C { KheTaskMove },
and it also updates @C { mtf }'s data structures so that the right
results continue to be returned by
# the various query functions:
@C { KheMTaskFinderMTaskCount },
{0.95 1.0} @Scale @C { KheMTaskFinderMTask },
# @C { KheMTaskFinderMTaskAtTimeCount },
# @C { KheMTaskFinderMTaskAtTime },
{0.95 1.0} @Scale @C { KheMTaskFinderMTaskFromMeetCount },
{0.95 1.0} @Scale @C { KheMTaskFinderMTaskFromMeet },
and also by the functions
{0.95 1.0} @Scale @C { KheMTaskFinderTaskToMTask },
{0.95 1.0} @Scale @C { KheMTaskFinderMTasksInTimeGroup }, and
{0.95 1.0} @Scale @C { KheMTaskFinderMTasksInInterval }.
Mtasks held by the user, either directly or in user-defined mtask
sets, may become undefined when mtasks are created and destroyed.
@PP
Because of these updates, @C { KheMTaskFinderTaskMove } and the other
functions above are too slow to be called from within time-critical
code; but they are fine for other applications.  Structural solvers,
for example, are usually not time-critical.  The related checking
and query functions (@C { KheTaskMoveCheck } and so on) are safe to
call directly, since they change nothing.
@PP
As explained in Section {@NumberOf resource_structural.task_grouping},
to @I group some tasks means to move them to a common
@I { leader task }, forcing solvers to assign the same resource to
each task in the group (by assigning a resource to the leader task).
If any of them are assigned before grouping, they must all be assigned
the same resource, and the leader task will have that assignment after grouping.
@PP
The mtask finder contains a task grouper object
(Section {@NumberOf resource_structural.task_grouping.task_grouper})
and offers functions based on the task grouper functions:
@ID @C {
void KheMTaskFinderTaskGrouperClear(KHE_MTASK_FINDER mtf);
bool KheMTaskFinderTaskGrouperAddTask(KHE_MTASK_FINDER mtf,
  KHE_TASK task);
void KheMTaskFinderTaskGrouperDeleteTask(KHE_MTASK_FINDER mtf,
  KHE_TASK task);
KHE_TASK KheMTaskFinderTaskGrouperMakeGroup(KHE_MTASK_FINDER mtf,
  KHE_SOLN_ADJUSTER sa);
}
Each of these calls the corresponding task grouper function, but it
also updates @C { mtf }'s data structures appropriately, as for the
other `forbidden' operations.
# @ID {0.95 1.0} @Scale @C {
# void KheMTaskFinderGroupBegin(KHE_MTASK_FINDER mtf, KHE_TASK leader_task);
# bool KheMTaskFinderGroupAddTask(KHE_MTASK_FINDER mtf, KHE_TASK task);
# void KheMTaskFinderGroupEnd(KHE_MTASK_FINDER mtf, KHE_SOLN_ADJUSTER sa);
# }
# @C { KheMTaskFinderGroupBegin } clears out any previous task grouping
# information and sets the leader task (a proper root task).  Then any
# number of calls to @C { KheMTaskFinderGroupAddTask } set the tasks (also
# proper root tasks) to be assigned to the leader task, without actually
# carrying out those assignments.  The return value is @C { true } if
# @C { task } can be included; if it is @C { false }, @C { task } is
# omitted from the grouped tasks, either because it cannot be moved to
# @C { leader_task }, or because it is assigned a resource and some
# other task in the group is assigned a different resource.  Finally,
# @C { KheMTaskFinderGroupEnd } actually carries out the moves.  If
# @C { sa != NULL } these are recorded in solution adjuster @C { sa },
# allowing them to be undone later if desired.
# @PP
# A sequence of calls to @C { KheMTaskFinderTaskAssign } would do
# what these calls do.  But these calls are faster because they build
# only the final mtask which reflects all the assignments.
# @PP
# To speed up `forbidden' operations and grouping operations,
# it may help to call
# @ID @C {
# void KheMTaskFinderClearCachedMTaskSets(KHE_MTASK_FINDER mtf);
# }
# when the mtask sets currently being cached
# (returned by @C { KheMTaskFinderMTasksInTimeGroup } and
# @C { KheMTaskFinderMTasksInInterval }) are not likely to be
# needed any time soon.  This would be the case, for example,
# during profile grouping, when moving from one limit active
# intervals constraint to another one with different time groups.
# With these mtask sets cleared out, @C { mtf } does not have
# to spend time updating them when mtasks are created and deleted.
@PP
Finally,
@ID @C {
void KheMTaskFinderDebug(KHE_MTASK_FINDER mtf, int verbosity,
  int indent, FILE *fp);
}
produces a debug print of @C { mtf } onto @C { fp } with
the given verbosity and indent.
@End @SubSection

@SubSection
    @Title { Behind the scenes 1:  defining task similarity }
    @Tag { resource_structural.mtask_finding.similarity }
@Begin
@LP
It is now time to look behind the scenes, and see how mtasks
guarantee that symmetrical assignments will be avoided, and at the
same time that nothing useful will be missed.
# @PP
# The specification states that meet splits and merges render the
# solver and its mtasks out of date.  So the set of proper root
# tasks to be distributed into mtasks is fixed and definite.
@PP
Behind the scenes, then, an mtask is a sequence (not a set)
of proper root tasks, each optionally assigned a resource.
When @M { m } resources are assigned to an mtask, they are
assigned to the first @M { m } proper root tasks in the
sequence.  Each mtask contains the proper root tasks of one
equivalence class of an equivalence relation between proper
root tasks that we call @I { task similarity }.  To turn
this set into a sequence we sort the elements into non-decreasing
order of an attribute of each task called its @I { task cost }.
@PP
It is easy to see how mtasks avoid many assignments.  Suppose we
have @M { n } unassigned tasks, and that we decide to assign @M { m }
resources to these tasks, where @M { m <= n }.  For the first
resource there are @M { n } unassigned tasks to choose from,
for the second there are @M { n - 1 } to choose from, and so on,
giving @M { n(n-1)...(n-m+1) } choices altogether.  This could be
a very large number.  But now suppose that these @M { n } tasks
are grouped into an mtask.  Then the mtask tries just one of these
choices, the one which assigns the first resource that comes along
to the first task, the second to the second, and so on.  So there
is a large reduction in the number of choices.  The question is
whether anything useful has been missed.
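@PP
The counting argument above is easily made concrete.  The function
below (a small illustration, not part of KHE) computes the number of
distinct assignment orders that an mtask avoids:

```c
#include <assert.h>

/* The reduction in choices described above:  assigning m resources,
   one by one, to n interchangeable unassigned tasks gives
   n(n-1)...(n-m+1) distinct outcomes, while an mtask tries exactly
   one of them. */
long ChoicesWithoutMTask(int n, int m)
{
    long choices = 1;
    for (int i = 0; i < m; i++)
        choices *= n - i;   /* n, then n-1, ..., then n-m+1 */
    return choices;
}
```

For example, with @M { n = 10 } and @M { m = 3 } there are
@M { 10 times 9 times 8 = 720 } choices without mtasks, and one with.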
@PP
`Missing something useful' is really an appeal to a dominance
relation between solutions (Appendix {@NumberOf dynamic_theory}).
We claim that any solution containing assignments of any @M { m }
resources to the @M { n } tasks is dominated by the solution
containing the assignments chosen by the mtask.  The proof
will go like this.  Limit all consideration to the @M { m }
resources and @M { n } tasks of interest.  If a resource is
assigned to a task that appears later in the mtask's sequence
than some other task which is unassigned, then we can move the
resource to that earlier unassigned task, and the move will not
increase the cost of the solution; in fact, it might decrease it.
And then, exchanging the assignments of any two resources can be
done and will not change solution cost.  These two facts, if we
can prove them, will together show that we can transform our
solution into the mtask's solution with no increase in cost.
# @PP
# From one point of view, two different assignments are just that,
# different, and so there is no symmetry.  What makes symmetry
# possible is that many monitors do not depend on exactly which
# task is assigned to which resource; instead, they depend on
# properties of the task.  If two different tasks have equal
# properties, symmetry is possible.  So uncovering symmetry is basically
# about carefully examining the effect of assignments on monitors.
# Even if two tasks are monitored by different monitors, those
# monitors could be symmetrical.  It will get complicated, but
# nothing we do will be approximate.  If we cannot prove that some
# situation is symmetrical, the tasks involved should and will go
# into different mtasks.
# # We may end up with more mtasks than
# # we actually need, but within any mtask the tasks will definitely
# # be symmetrical.
# @PP
# This section defines two key things.  First is @I { task similarity },
# an equivalence relation between proper root tasks.  Each equivalence
# class of this relation supplies the proper root tasks of one mtask.
# Second is one @I { task cost } for each proper root task.  The
# members of each mtask are ordered by non-decreasing task cost,
# which will ensure that, within each mtask, assigning earlier tasks is
# not worse than assigning later ones.
@PP
We call a task, considered independently of any tasks that may be
assigned to it, an @I { atomic task }.  We view one proper root
task as the set of all the atomic tasks assigned to it, directly
or indirectly, including itself.  Apart from domains, preassignments,
and fixed assignments, which relate specifically to the root task,
only this set matters, not which tasks are assigned to which.
# From now on, the term `task' will refer to this set of atomic tasks.
@PP
As mentioned earlier, KHE allows tasks to be created that are not
derived from any meet.  These would typically serve as proper root
tasks to which tasks derived from meets could be assigned.  Such tasks
are consulted to find domains, preassignments, and fixed assignments
when they are proper root tasks, but since they do not run at any
times and have no effect on any monitors they are ignored otherwise:
they are not included among the atomic tasks.  This means that the
set of atomic tasks could be empty.  However, we do not treat this
case as special:  conditions of the form `for each atomic task, ...'
are then vacuously true.
# In that case the proper root
# task is considered to be @I degenerate and the mtask containing it
# is also said to be degenerate.
@PP
A proper root task is said to have fixed times if each of its
atomic tasks lies in a meet with an assigned time, and the
@C { fixed_times } parameter of @C { KheMTaskFinderMake } is
@C { true }, allowing us to assume that these assigned times
will not change.  In that case, similarity is based on the
assigned times of the tasks' meets.  Otherwise, things are
handled as though none of the tasks have assigned times, and
similarity is based on their meets.
# Early in the mtask construction process, the atomic tasks of each
# proper root task are found and sorted.  Atomic tasks without assigned
# times come before atomic tasks with assigned times.  Two atomic tasks
# without assigned times are sorted by increasing meet index.  Two
# atomic tasks with assigned times are sorted by increasing time
# index.  Either way, ties are broken arbitrarily.
# @PP
# There is one wrinkle.  If @C { fixed_times } is @C { false },
# assigned times cannot be relied upon to remain constant throughout
# the lifetime of the mtasks.  So in that case we treat all tasks as
# though they have no assigned times, using only their meet indexes
# in the sorting.
@PP
Two proper root tasks are similar when they satisfy these conditions:
@ParenNumberedList

@LI @OneRow {
They have equal domains.
}

@LI @OneRow {
They are either both unpreassigned, or both preassigned the same
resource.  This second possibility inevitably causes clashes, which
means that in practice a preassigned task will usually not be similar
to any other task, making it the only member of its mtask.
}

@LI @OneRow {
Neither task has a fixed assignment.  In other words, a
task whose assignment is fixed is always the only member of its mtask.
}

@LI @OneRow {
The number of atomic tasks must be the same for both tasks, and
taking them in a canonical order based on their assigned times
and meets, corresponding atomic tasks must be similar, according
to a definition to be given below.  This condition is vacuously
true when both tasks have no atomic tasks.
}

@EndList
Assuming that the similarity relation for atomic tasks is an
equivalence relation, this evidently defines an equivalence
relation on proper root tasks, as required.
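@PP
Forming mtasks from this equivalence relation amounts to partitioning
the proper root tasks into classes of equal signature.  A minimal
sketch follows, in which a single @C { int } stands in for the full
similarity test (domain, preassignment, fixedness, and atomic tasks):

```c
#include <assert.h>

/* Partition tasks into mtask-like classes:  tasks with equal
   signatures go into the same class.  class_of[i] receives the class
   index of task i; the return value is the number of distinct
   classes (mtasks).  Assumes n <= 64. */
int PartitionBySignature(const int sig[], int n, int class_of[])
{
    int reps[64];        /* the signature of each class found so far */
    int num_classes = 0;
    for (int i = 0; i < n; i++) {
        int c = -1;
        for (int j = 0; j < num_classes; j++)
            if (reps[j] == sig[i]) { c = j; break; }
        if (c < 0) { reps[num_classes] = sig[i]; c = num_classes++; }
        class_of[i] = c;
    }
    return num_classes;
}
```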
@PP
Two atomic tasks are similar when they satisfy these conditions:
@ParenNumberedList

@LI @OneRow {
They have equal durations and workloads.
}

@LI @OneRow {
Either they both have an assigned time, in which case those
times are equal, or they both don't, in which case their meet
indexes are equal.  This second case is always followed when
@C { fixed_times } is @C { false }, consistent with what was
said about this above.  It is also followed when at least
one of the atomic tasks in question has no assigned time.
}

@LI @OneRow {
They are similar in their effects on monitors.  There are many
details to cover here; these are tackled below.
}

@EndList
Once again, this is clearly an equivalence relation, provided
that (3) is an equivalence relation.
@PP
These rules could be improved on.  For example, if there are
no limit workload monitors, then task workloads do not matter.
Still, what we have is simple and works well in practice.
@PP
The rest of this section is concerned with similarity of two
atomic tasks in their effect on monitors.  The general idea is that
this similarity holds when, for all resources @M { r }, assigning
@M { r } to one of the tasks has the same effect on monitors as
assigning it to the other task.  But there are complications in
making this general idea concrete, as we are about to see.
# We are already
# assuming that the two tasks have the same domain.  If we can show
# that, for all resources @M { r } in this domain, assigning @M { r }
# to one of the atomic tasks affects monitors in the same way as
# assigning it to the other, then it does not matter which of the
# two tasks @M { r } is assigned to, so the tasks are similar in
# their effect on monitors, and there is nothing here to prevent
# them from being placed into the same mtask.
@PP
We can safely ignore unattached monitors and monitors with weight 0.
A monitor can be an @I { event monitor }, monitoring the times assigned
to a specified set of events, or an @I { event resource monitor },
monitoring the resources assigned to a specified set of tasks, or
a @I { resource monitor }, monitoring the busy times or workload
of a specified resource.  We'll take each kind in turn.
# @PP
# Before we start, though, we have to introduce a caveat.  If we move
# from tasks to mtasks, the data structures encountered by time and
# resource repair algorithms change, and that can lead to changes in
# the repairs tried.  For example, we might end up doing fewer task
# moves, and that might lead to different random numbers being passed
# to time assignment repair operations, giving different outcomes.
# So we can't expect algorithms built on mtasks to produce solutions
# identical to algorithms built on tasks.  But this is not the fault
# of the mtasks, and there is no reason to think that such changes
# will be systematically for the worse.
@PP
@I { Event monitors } are unaffected by the assignments of resources
to tasks.  They depend only on the times assigned to meets.  So we
can ignore them here.
@PP
@I { Resource monitors } are not directly concerned with which
tasks a resource is assigned to, but rather with those tasks'
busy times and workloads.  We have already required similar tasks
to be equal in those respects, so that moving a resource from one
similar task to another leaves its resource monitors unaffected.
This is true whether or not times are assigned.
@PP
@I { Event resource monitors } (assign resource, prefer resources,
avoid split assignments, and limit resources monitors) are where
things get harder.  The tests we have so far included in the
similarity condition do not guarantee that event resource monitors
will be unaffected when a resource is moved from one task to
another---far from it.
@PP
Before we delve into event resource monitors, there is a special
case we need to dispose of.  Consider an avoid split assignments
monitor @M { m } whose monitored tasks are all assigned to each
other (have the same proper root).  At most one distinct resource
can be assigned to these tasks, so @M { m } must have cost 0.  It
can be and is ignored.  This case is quite likely to arise in
practice, although @M { m } might be detached when it does.  It
includes the case where @M { m } monitors a single task.
@PP
The author spent some time considering what happens with other
kinds of event resource monitors when their tasks have the same
proper root.  These monitors monitor a single task, in effect,
which is helpful for similarity.  However these cases seem
unlikely to arise in practice, and some of their details are
not obvious, so nothing special has been done about them.
@PP
Event resource monitors explicitly name the tasks (always atomic)
that they @I monitor (are affected by).  We divide them into two
groups.  A @I { separable monitor } is one whose cost may be apportioned
to the tasks it monitors, each portion depending only on the assignment
of that one task.  An @I { inseparable monitor } is one whose cost cannot
be apportioned in this way.
@PP
A monitor that monitors just one task is separable, because all its cost
can be apportioned to that task.  But there are less trivial examples.
Consider an assign resource constraint with a linear cost function.
Its cost is its weight times the total duration of its unassigned
tasks, and this may be apportioned to the individual unassigned
tasks, making the monitor a separable one.  But if the cost function
is not linear, one cannot apportion the cost in this way.
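@PP
The contrast can be shown in a few lines of C.  The two functions
below are illustrations only (a quadratic cost function stands in for
any non-linear one):

```c
#include <assert.h>

/* Why linearity gives separability:  with a linear cost function,
   the monitor's total cost is a sum of independent per-task
   portions (weight * duration for each unassigned task), so it can
   be apportioned to the individual tasks. */
int LinearMonitorCost(const int durations[], const int assigned[],
    int n, int weight)
{
    int total = 0;
    for (int i = 0; i < n; i++)
        if (!assigned[i])
            total += weight * durations[i];  /* this task's portion */
    return total;
}

/* A non-linear (here quadratic) cost function has no such split:
   the cost depends on the whole set of unassigned tasks jointly. */
int QuadraticMonitorCost(const int durations[], const int assigned[],
    int n, int weight)
{
    int dev = 0;
    for (int i = 0; i < n; i++)
        if (!assigned[i])
            dev += durations[i];
    return weight * dev * dev;
}
```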
# Some monitors monitor several atomic tasks but are nevertheless
# classified as single-task monitors:  assign resource and prefer
# resources monitors with linear cost functions, and limit resources
# monitors with maximum limits 0 and linear cost functions.  These
# monitors can be and are divided (notionally) into one single-task
# monitor for each monitored atomic task.
# The second is when all the tasks monitored
# by @M { m } are assigned to one another (when they have the same
# proper root).  In this case the tasks behave like a single task.
# This is quite likely when @M { m } is an avoid split assignments monitor.
@PP
We analyse inseparable monitors first.  If task @M { t } is monitored
by inseparable monitor @M { m }, the cost of assigning a resource to
@M { t } cannot be apportioned to @M { t }.  This indeterminacy in
cost prevents us from saying definitely what the effect on @M { m }
of a resource assignment is.  So in this case, @M { t } cannot be
considered similar to any other task.
@PP
There is however an exception to this rule.  Consider two tasks both
monitored by @M { m }.  An examination of the event resource constraints
will show that, provided the two tasks have equal durations, the effect
on @M { m } of assigning a given resource @M { r } to either task must
be the same.  So @M { m } does not prevent the two tasks from being
declared similar.  Altogether, then, for two atomic tasks to be similar
they must have the same inseparable monitors---not monitors with the same
attributes, but the exact same monitors.
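@PP
In code, `the exact same monitors' is an identity test on the monitor
objects, not an attribute comparison.  Assuming each task's
inseparable monitors are held in a canonically ordered array (say by
address), the test is element-wise pointer equality, as this sketch
shows:

```c
#include <assert.h>
#include <stdbool.h>

/* True when the two canonically ordered monitor arrays contain
   identically the same monitor objects.  Pointer comparison is
   deliberate:  identity, not attribute equality. */
bool SameInseparableMonitors(void *const a[], int a_count,
    void *const b[], int b_count)
{
    if (a_count != b_count)
        return false;
    for (int i = 0; i < a_count; i++)
        if (a[i] != b[i])
            return false;
    return true;
}
```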
@PP
We turn now to separable monitors.  Each task has its own individually
apportionable cost, dependent only on its own assignment.  Again we
divide these monitors into two groups:
@I { resource-dependent separable monitors }, for which the cost
depends on the choice of resource, and
@I { resource-independent separable monitors }, for which the cost
depends only on whether the task is assigned or not, not on the choice
of resource.
@PP
For example, a separable prefer resources monitor will usually be
resource-dependent, because the cost depends on whether the
assigned resource is a preferred one or not.  But if the set of
preferred resources is empty, assigning any resource produces
the same cost, and the monitor is resource-independent.
@PP
To analyse the resource-dependent separable monitors, consider
the usual kind of separable prefer resources monitor.  The
cost depends on which resource is assigned, so the permutations
of resource assignments that mtasks rely on could produce virtually
any cost changes.  So we require, for similarity, that the
resource-dependent separable monitors of the two tasks can
be put into one-to-one correspondence such that corresponding
monitors have the same attributes (type, hardness, cost function,
weight, preferred resources, and limits where present).
# @PP
# Limit resources monitors that monitor a single task never appear
# in practice, so they hardly matter.  However, it is easy to follow
# the path made by prefer resources monitors, and require a one-to-one
# correspondence between these monitors such that corresponding
# monitors have the same hardness, cost function, weight, preferred
# resources, and minimum and maximum limits.
@PP
We are left with just the resource-independent separable monitors,
whose cost depends only on whether each task is assigned or not,
not on which resource is assigned.
# These are
# single-task assign resource monitors, and single-task prefer
# resources and limit resources monitors whose set of preferred
# resources is either empty or contains every resource of the
# relevant resource type.  Single-task avoid split assignments
# monitors were disposed of earlier.
# @PP
We could repeat the previous work and require a one-to-one
correspondence between these monitors such that corresponding
monitors have the same attributes.  But we can do better.
@PP
Consider three tasks, @M { t sub 1 }, @M { t sub 2 }, and
@M { t sub 3 }, that are similar according to the rules so far.
Suppose @M { t sub 1 } is monitored by a separable assign
resource monitor with weight 20, @M { t sub 2 } is not monitored,
and @M { t sub 3 } is monitored by a separable prefer resources
monitor with an empty set of resources and weight 10.  Assuming
duration 1, assigning any resource to @M { t sub 1 } reduces the
cost of the solution by 20; assigning any resource to @M { t sub 2 }
does not change the cost; and assigning any resource to @M { t sub 3 }
increases the cost by 10.  Examples like this are common in nurse
rostering, to place limits on the number of nurses assigned to a
shift.  Here, at least one nurse is wanted, but three is too many.
@PP
Let the @I { task cost } of a task @M { t } be the sum, over all
resource-independent separable monitors @M { m } that monitor
@M { t }, of the change in cost reported by @M { m } when @M { t }
goes from being unassigned to being assigned.  In the example
above, assuming duration 1, the task costs are @M { minus 20 }
for @M { t sub 1 }, @M { 0 } for @M { t sub 2 }, and @M { 10 }
for @M { t sub 3 }.  These values are independent of all other
assignments, and also of which resource is being assigned, and
so they can be calculated in advance of any solving, while
mtasks are being constructed.  When adding an assignment to an
mtask, it will always be better to choose a remaining unassigned
task with minimum task cost.  So the mtask sorts its tasks by
non-decreasing task cost at the start, and assigns them in sorted order.
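@PP
With the task costs from the example above (@M { minus 20 } for
@M { t sub 1 }, @M { 0 } for @M { t sub 2 }, and @M { 10 } for
@M { t sub 3 }), the ordering step is just a sort.  A sketch, with
precomputed integer task costs standing in for KHE's cost values:

```c
#include <assert.h>
#include <stdlib.h>

/* Comparison function for sorting task costs into non-decreasing
   order (safe here because example costs are small integers). */
static int CmpTaskCost(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;
}

/* Sort an mtask's precomputed task costs so that assigning the
   tasks in order always picks a remaining task of minimum cost. */
void SortByTaskCost(int task_costs[], int n)
{
    qsort(task_costs, n, sizeof(int), CmpTaskCost);
}
```

After sorting, the first resource to come along goes to @M { t sub 1 }
(cost @M { minus 20 }), the next to @M { t sub 2 }, and only a third
would incur the cost of @M { t sub 3 }.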
@PP
It remains to state, for each monitor type, the conditions under
which it is separable, and if separable, resource-independent.  The
examples given earlier cover most of these cases.
@PP
An assign resource monitor is separable when it monitors a single task,
or its cost function is linear, or both.  It is then always
resource-independent.  Otherwise it is inseparable.
@PP
A prefer resources monitor is ignored when its set of preferred
resources includes every resource of its resource type, since
its cost is always 0 then.  Otherwise, it is separable when it
monitors a single task, or its cost function is linear, or both.
It is then resource-independent when its set of preferred resources
is empty.  Otherwise it is inseparable.
@PP
An avoid split assignments monitor is ignored when its tasks all
have the same proper root (including when it monitors a single
task).  Otherwise it is always considered to be inseparable.
@PP
It would not be unreasonable to declare all limit resources
monitors to be inseparable, since in practice they apply to
multiple tasks and have non-trivial limits.  However, they can
also be used to do what assign resource monitors do, by selecting
all resources and setting the minimum limit to the total duration
of the tasks monitored.  They can also be used to do what prefer
resources monitors do, by selecting those resources that are not
selected by the prefer resources monitor, and setting the maximum
limit to 0.  In these cases, we want a limit resources monitor to be
classified in the same way that the assign resource or prefer resources
monitor would be.
@PP
If a limit resources monitor is equivalent to an assign resource or
prefer resources monitor as just described, it is classified as that
other monitor would be.  Otherwise, it is separable when its cost function
is linear and its maximum limit is 0.  It is then resource-independent
when its set of preferred resources contains every resource of its
type.  It is also separable when it monitors a single task.  In that
case it is resource-independent when its set of preferred resources
contains every resource of its type, in which case the assignment
cost depends on how the duration of the task compares with the
monitor's limits.  (Curiously, if the task's duration is less than
the minimum limit, there will be both a non-assignment cost and an
assignment cost, because the minimum limit is not reached whether
the task is assigned or not.)  Otherwise it is inseparable.
# ignored when its set of selected
# resources is empty (its cost must be 0 then).  Otherwise, if it
@PP
To recapitulate, then, two proper root tasks are similar when they have
equal domains and preassignments, neither has a fixed assignment, and they
have similar atomic tasks.  Two atomic tasks are similar when they
have equal durations, workloads, and start times (or meets), their
inseparable monitors are the same, and their resource-dependent
separable monitors have equal attributes.  Their resource-independent
separable monitors (usually assign resource monitors, and prefer
resources monitors with no preferred resources) may differ:  instead
of influencing similarity, they determine the task's position in the
sequence of tasks of its mtask.
@End @SubSection

@SubSection
    @Title { Behind the scenes 2:  accessing mtasks and mtask sets }
    @Tag { resource_structural.mtask_finding.impl }
@Begin
@LP
This section describes the mtask finder's moderately efficient data
structure for accessing mtasks by signature, and for finding the
mtask sets returned by @C { KheMTaskFinderMTasksInTimeGroup } and
@C { KheMTaskFinderMTasksInInterval }.  It has been written to
clarify the ideas of its somewhat confused author, and is not likely
to be of any value to the user.
@PP
Quite a few objects are created and deleted in the operations that
follow.  Deleted objects are added to free lists in the mtask finder,
where they are available for future creations.
@PP
The data structure allows proper root tasks to be inserted and
deleted at any moment, not just during initialization.  This
flexibility is needed to support the `forbidden' operations,
which work by deleting from the data structure the proper
root tasks they affect, carrying out the operation requested, and
then inserting the result tasks back into the data structure.
@PP
Actually there are three data structures.  First, there is an array
of all mtasks, included to support @C { KheMTaskFinderMTaskCount }
and @C { KheMTaskFinderMTask }.  Each mtask contains its index in
this array.  To add a new mtask we add it to the end and set its
index; to delete it we use its index to find its position, and
move the last element to that position, changing its index.
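@PP
This is the usual constant-time add and swap-with-last delete.  A
minimal sketch (the @C { MT } and @C { MT_ARRAY } types are invented
stand-ins, not KHE's actual declarations):

```c
#include <assert.h>

/* Each element records its own index in the array, so deletion can
   move the last element into the vacated slot in constant time. */
typedef struct { int index; /* ... other mtask fields ... */ } MT;

typedef struct { MT *items[64]; int count; } MT_ARRAY;  /* <= 64 mtasks */

void MTArrayAdd(MT_ARRAY *a, MT *mt)
{
    mt->index = a->count;
    a->items[a->count++] = mt;
}

void MTArrayDelete(MT_ARRAY *a, MT *mt)
{
    MT *last = a->items[--a->count];
    a->items[mt->index] = last;   /* move last into the vacated slot */
    last->index = mt->index;      /* and update its recorded index   */
}
```

Note that deleting the last element moves it onto itself, which is
harmless.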
@PP
Second, there is an array of mtasks indexed by task index (function
@C { KheTaskSolnIndex } from the KHE platform).  For each task
handled by the mtask finder (each proper root task of a suitable
resource type), the value at its index is its mtask.  Other indexes
have @C { NULL } values and are never accessed.  This supports a
trivial implementation of @C { KheMTaskFinderTaskToMTask }.  When a
task is added to an mtask or removed from it, the value at its index
is changed.
@PP
We won't mention these two arrays again, although they are kept up
to date as the structure changes.  All subsequent data structure
descriptions relate to the third data structure.
@PP
Every task has a resource type, and every mtask has one too, because its
tasks all have the same domain.  @C { KheMTaskFinderMTasksInTimeGroup }
and @C { KheMTaskFinderMTasksInInterval } have a resource type parameter
and return sets of mtasks which all have that resource type.
@PP
So all operations that we are concerned with here have a parameter
which is a non-@C { NULL } resource type; call it @C { rt }.
Each operation traverses a short list of tables (this list is the
entry point for the third data structure), one table for each
resource type supported by the mtask finder, to find the table for
@C { rt }.  The rest takes place in that table; everything
in it has resource type @C { rt }.
@PP
@BI { Task insertion }.
To add a proper root task to the structure, first we
build its @I { signature }.  This is an object containing everything
needed to decide whether two proper root tasks are similar, as defined
in Section {@NumberOf resource_structural.mtask_finding.similarity},
including one @I { atomic signature } for each atomic task assigned,
directly or indirectly, to the proper root task.  Atomic signatures
are sorted into a canonical order for ease of comparison.  The
non-assignment and assignment costs, as returned by @C { KheMTaskTask },
are calculated at the same time as the signature but are not part of
it and are stored separately.
@PP
The tasks of an mtask have equal signatures.  This shared signature
is stored in the mtask.  A task belongs in an mtask if its signature
is equal to the mtask's stored signature.
@PP
So after calculating the signature of the new task, the second
step is to search the appropriate table to see if it contains an
mtask with the same signature as the signature of the new task.
There are three different ways to do this, depending on the
@I { type } of the signature:
@TaggedList

@DTI { @C { KHE_SIG_FIXED_TIMES } } {
A task's signature has this type when the @C { fixed_times } parameter
of @C { KheMTaskFinderMake } is @C { true }, each of its atomic tasks
derived from a meet has an assigned time, and there is at least one
such atomic task.  So the task has a chronologically first assigned
time, and we use that as an index into the table.  We'll explain how
this is done later on.
}

@DTI { @C { KHE_SIG_MEETS } } {
A task's signature has this type when the @C { fixed_times } parameter
of @C { KheMTaskFinderMake } is @C { false }, or not every atomic task
derived from a meet has an assigned time, and there is at least one
atomic task derived from a meet.  We use any one of these meets to
find other tasks with the same signature:  we traverse the set of
all tasks of the meet, and for each of those of the right resource
type that has an mtask, we compare the mtask's signature with the
new task's signature.  So there is no third data structure for this
case; the meet itself provides a suitable structure.  This would
not work for @C { KHE_SIG_FIXED_TIMES }, because fixed-time tasks
with the same signatures can come from different meets.
}

@DTI { @C { KHE_SIG_OTHER } } {
A task's signature has this type when neither of the other two
cases applies.  This means that the task has no atomic tasks
derived from meets; its duration is therefore zero and it is
basically useless.  Still, for uniformity it must lie in an
mtask.  These mtasks are likely to be very few, so they are
stored in a separate list in the table, and this list is
searched to find the mtask (if any) with this signature.
}

@EndList
Whichever way the search is done, if it finds an existing mtask
whose signature is equal to the new task's signature, all we have
to do is add the new task to that mtask and throw away the new
task's signature.  If it does not find an existing mtask with
that signature, we have to create a new mtask with that signature,
add the new task to it as its first task, and insert the new
mtask into the data structure.  This insertion does nothing
if the signature type is @C { KHE_SIG_MEETS }, and it is a simple
addition to the end of the table's separate list if the signature
type is @C { KHE_SIG_OTHER }.  How an mtask is inserted when its
type is @C { KHE_SIG_FIXED_TIMES } is a subject for later.
@PP
@BI { Task deletion }.
To delete a task, we first delete it from its mtask, obtained
by the usual call to @C { KheMTaskFinderTaskToMTask }.  If
the mtask becomes empty, we then have to delete the mtask
(we don't allow empty mtasks).  We do this in one of three
ways depending on the signature type.  If the type is
@C { KHE_SIG_MEETS } there is nothing to do;  if it is
@C { KHE_SIG_OTHER } we search the appropriate table's
separate list of mtasks of this type and delete the
mtask from there.  If the type is @C { KHE_SIG_FIXED_TIMES }
we use the first assigned time to index the table, as
for insertion, and carry on as described below.
@PP
@BI { The third data structure }.  The third data structure supports
five operations:  mtask retrieval by signature, mtask insertion,
mtask deletion, @C { KheMTaskFinderMTasksInTimeGroup }, and
@C { KheMTaskFinderMTasksInInterval }.  The last two operations
are supposed to cache their results so that multiple calls with
the same parameters run quickly.  These cached values must be
kept up to date as mtasks are inserted and deleted.
@PP
We've already shown how the first three operations are done when
the signature type is @C { KHE_SIG_MEETS } or @C { KHE_SIG_OTHER }.
The last two, @C { KheMTaskFinderMTasksInTimeGroup } and
@C { KheMTaskFinderMTasksInInterval }, do not deal in these
two types of mtasks anyway.  So we need to consider here only
mtasks whose signatures have type @C { KHE_SIG_FIXED_TIMES }.
@PP
One entry in the third data structure has type
@ID @C {
typedef struct khe_entry_rec {
  KHE_TIME_GROUP		tg;
  KHE_INTERVAL			in;
  KHE_MTASK_SET			mts;
} *KHE_ENTRY;
}
Entry @C { e } means:
`the value of @C { KheMTaskFinderMTasksInTimeGroup(mtf, rt, tg) }
is @C { e->mts } when @C { tg == e->tg }, and the value of
@C { KheMTaskFinderMTasksInInterval(mtf, rt, in) } is @C { e->mts }
when @C { in == e->in }.'  The @C { rt } parameter is not mentioned
because @C { e } lies within one table of the third data structure,
as defined above, and @C { rt } is taken care of when this table
is selected.
@PP
One table of the third data structure, then, consists of an
array indexed by time, where each element contains a list
of these entries.  An entry appears once in each list indexed
by a time that is one of the times of its time group or interval
(considered as a set of time groups).  This means that an entry
appears in the table as many times as its time group or interval
has times.
@PP
As we will see, from time to time it will be necessary to add
an entry to a table.  However, we never delete an entry.  Once
we begin keeping track of the mtasks of a particular time group
or interval, we continue doing that until the mtask finder is
deleted.  This is arguably wasteful, but the perennial caching
question (is this cache entry still needed?) has no easy answer
here, and we expect to receive queries for only a moderate
number of distinct time groups and intervals.
@PP
Let us see now how to implement the five operations.
@PP
To retrieve an mtask by signature, we take the chronologically
first time of the signature (call it @C { t }), and we take the
first entry of the list indexed by @C { t }.  As we'll see later,
this entry is always present and its mtask set contains every
mtask whose signature includes @C { t }.  So we search that
mtask set for an mtask containing the signature we are looking for.
@PP
To insert a new mtask @C { mt }, we have to find every mtask set
that @C { mt } belongs in and add it.  So for each of @C { mt }'s
fixed times we traverse the list of entries indexed by that time
and add @C { mt } to the mtask set in each entry.  It is easy to
see that these are exactly the mtask sets that @C { mt } needs to
be added to.  An entry can appear in several lists, so we only
add @C { mt } to an mtask set when it is not already present.
If it is present it will be at the end, so that condition can
be checked quickly.
@PP
To delete an mtask @C { mt } we have to find every mtask set
that @C { mt } is currently in and remove it.  So for each of
@C { mt }'s fixed times we traverse the list of entries indexed
by that time and delete @C { mt } from the mtask set in each entry.
Because an entry can appear in several lists, we only attempt to
delete @C { mt } from an entry's mtask set when it is present.
@PP
To implement @C { KheMTaskFinderMTasksInTimeGroup(mtf, rt, tg) },
we first need to check whether the @C { rt } table contains an
entry for @C { tg }.  We do this by searching the list of entries
indexed by the first time of @C { tg } (it is a precondition that
@C { tg } cannot be empty) for an entry containing @C { tg }.
If we find one, we return its mtask set and we are done.
@PP
If there is no entry containing @C { tg }, we have to make one
and add it to each list indexed by a time of @C { tg }, which
is straightforward.  The hard part is that we also have to build
the mtask set of all mtasks whose fixed times have a non-empty
intersection with @C { tg }, so that we can add it to the new
entry and also return it to the caller.  We could do this from
scratch, by finding all tasks running at the relevant times, then
building and uniqueifying the set of all these tasks' mtasks.
But we do it in a faster way, as follows.
@PP
As we saw when inserting and deleting mtasks, once an entry is
present it is kept up to date as mtasks come and go.  So during
the initialization of the mtask finder, before any mtasks have
been created, we add one entry to the start of each list.  If
the list is for time @C { t }, the entry contains a time group
containing just @C { t } (as returned by platform function
@C { KheTimeSingletonTimeGroup }) and an empty mtask set.  As
mtasks are inserted and deleted, this mtask set will always hold
the set of all fixed-time mtasks whose times include @C { t }.
This entry will always be first in its list.
@PP
So to build the new mtask set, we take the union of the mtask sets in
the first entries of the lists indexed by the times of the new time
group.  We call @C { KheMTaskSetAddMTaskSet } repeatedly to build
the union, then we call @C { KheMTaskSetUniqueify } to uniqueify it.
@PP
@C { KheMTaskFinderMTasksInInterval } is similar to
@C { KheMTaskFinderMTasksInTimeGroup }.  Its
@C { in } parameter is just a shorthand for the union of the time
groups of @C { in }'s days.
@PP
Where then is the confusion?  The author was not sure whether
each entry had to be added to multiple lists.  Suppose each
entry was added to just one list, the one for its time group's
first time.  @C { KheMTaskFinderMTasksInTimeGroup } and
@C { KheMTaskFinderMTasksInInterval } at least would be fine:
they use only that first time to access the table.  Would
anything go wrong?
@PP
Just one thing would go wrong, as it turns out.  When a new mtask is
added, it would be added to the mtask set of each entry whose time
group's first time is one of the mtask's fixed times.  But that is
not enough.  For example, an mtask holding tasks of the Wednesday
night shift would not be added to the mtask set holding all mtasks
running on Wednesday, because that mtask set's entry would lie only
in the list indexed by the first time on Wednesday.
@PP
The mtask finder's similarity rule must be complicated, but are
the complications just described necessary?  The author believes
that they are.  @C { KheMTaskFinderMTasksInTimeGroup } and
@C { KheMTaskFinderMTasksInInterval } are used frequently by
the ejection chain solver, so they must run quickly.  The symmetry
elimination provided by mtasks is essential for grouping by resource
constraints (Section {@NumberOf resource_structural.grouping_by_rc}),
and that solver also needs the `forbidden' operations.  We don't want
multiple multi-task software modules, so one module has to do it all.
@End @SubSection

@EndSubSections
@End @Section

@Section
    @Title { Task grouping }
    @Tag { resource_structural.task_grouping }
@Begin
@LP
To @I group some tasks means to add an unbreakable requirement that
either they are all unassigned, or else they are all assigned to the
same @I { parent task }.  The parent task is usually a cycle task,
representing a resource, although it need not be.
# Put another way,
# assigning a parent task to a grouped task is the same as assigning
# it to every member of the group.
@PP
Concretely, task grouping is carried out by selecting one of
the tasks to be the @I { leader task }, and assigning the others,
called @I { follower tasks }, to the leader task.  Assigning
the group to a parent task is done by assigning the leader
task to the parent task.
@PP
The first of the following subsections presents the task grouper,
used throughout KHE to perform the actual grouping.  The other
subsections present two applications of task grouping.  They
describe old code that would turn out rather differently if it
was written today.
@BeginSubSections

@SubSection
    @Title { The task grouper }
    @Tag { resource_structural.task_grouping.task_grouper }
@Begin
@LP
Different solvers group tasks for different reasons, but the
actual grouping should always be done in the same way, as follows.
The first step is to create a @I { task grouper object } by calling
@ID @C {
KHE_TASK_GROUPER KheTaskGrouperMake(HA_ARENA a);
}
This object remains available until arena @C { a } is deleted or
recycled.  It can be used repeatedly to make many groups, although
only one at a time.  To begin making a group, call
@ID @C {
void KheTaskGrouperClear(KHE_TASK_GROUPER tg);
}
This clears away any remnants of previous groups.  To add one
task to the growing group, call
@ID @C {
bool KheTaskGrouperAddTask(KHE_TASK_GROUPER tg, KHE_TASK task);
}
If @C { task } is compatible with the tasks already added (concerning
which see below), this stores @C { task } in @C { tg } and returns
@C { true }.  Otherwise it stores nothing and returns @C { false }.
Either way, no task assignments or moves are made at this stage.
@C { KheTaskGrouperAddTask } aborts if @C { task } is @C { NULL }
or a cycle task.  It returns @C { false } if @C { task } is
already stored in @C { tg }.
# It also returns @C { false } if adding @C { task } would mean that
# the group contains one member which is assigned to another.
@PP
It is also possible to delete the record of a previously stored task,
by calling
@ID @C {
void KheTaskGrouperDeleteTask(KHE_TASK_GROUPER tg, KHE_TASK task);
}
However, due to issues with finding leader tasks, only the most
recently added but not deleted task may be deleted in this way.
Finally,
@ID @C {
KHE_TASK KheTaskGrouperMakeGroup(KHE_TASK_GROUPER tg,
  KHE_SOLN_ADJUSTER sa);
}
makes one group from the currently stored tasks.  Concretely, it
chooses a leader task from these stored tasks and assigns the
other stored tasks to it.  It returns the leader task.  The call to
@C { KheTaskGrouperMakeGroup } cannot fail, given that incompatible
tasks have already been rejected by @C { KheTaskGrouperAddTask },
although it will abort if no tasks are stored, and do nothing (correctly)
if just one is stored.  If @C { sa != NULL }, the changes are saved
in @C { sa } so that they can be undone later.  The task grouper
itself does not offer an undo operation.  But @C { sa } can record
any number of grouping operations, and then deleting @C { sa } will
undo them all.
@PP
@C { KheTaskGrouperMakeGroup } does not clear the grouper.  One can call
it, then do some evaluation of the result, then use @C { sa } to undo the
grouping (this undo will be exact unless some tasks of the group are
assigned initially and others are unassigned), and then carry on just
as though @C { KheTaskGrouperMakeGroup } had not been called.  Together
with @C { KheTaskGrouperDeleteTask } this means that a tree search for
the best group (in any sense chosen by the caller) is supported.
@PP
The task grouper keeps a list of the tasks that have been added, each
with some associated information.  When memory for this is no longer
needed (when @C { KheTaskGrouperClear } or @C { KheTaskGrouperDeleteTask }
is called), it is recycled through a free list in the task grouper.
So it is much better to re-use one task grouper than to create many.
@PP
All this may sound simple, but we now have a long list of issues to
ponder, to make task grouping robust and able to interact appropriately
with other solvers.  This is why task groupers are needed:  there is
a lot more to it than just assigning followers to a leader task.
# ).  Task grouping is
# part of structural solving, and so we have to consider what undoing
# it means, and its interactions with other structural solvers and
# ordinary solvers.
# ---all important in practice, because task grouping
# has many applications and many interactions.
@PP
@BI { Finding a leader task. }
The first problem is to find a suitable leader task.  We choose
a task to be leader to which every other stored task can be moved.
This mainly means that the domain of the chosen leader task must be
a subset of the domain of every stored task.  If an attempt is made
to store a task which prevents this (for example, if the new task's
domain is disjoint from some already stored task's domain), then
the new task is rejected and @C { KheTaskGrouperAddTask } returns
@C { false }.  We calculate a leader task each time a task is
stored, and keep them all.  If the last task is deleted we return
to the previous leader task without having to re-calculate it.
@PP
A more general approach is to find the best candidate for leader task
and then reduce its domain until all the followers can be assigned to
it (recall that assignment requires the parent's domain to be a
subset of the child's).  One disadvantage is that the reduced
domain could be empty, but the approach was rejected for a different
reason:  when many groups are being tried, many resource groups could
be created, which would be expensive in running time and memory.
@PP
For the record, here is a pass through the conditions imposed
by @C { KheTaskMoveCheck }, which every task moved to the chosen
leader task must satisfy.  First, the task's assignment cannot
be fixed.  We will be circumventing this, by unfixing beforehand
and re-fixing afterwards, as explained below.  Second, the task
must not be a cycle task.  @C { KheTaskGrouperAddTask } aborts in
this case; it also aborts when its @C { task } is @C { NULL }.
Third, the move must actually change something.  This is
guaranteed if no stored task is assigned to another stored
task; @C { KheTaskGrouperAddTask } returns @C { false } if it
is passed a task that causes this problem.  Fourth and last, the
domain of @C { task } must be a superset of the domain of the
leader task.  We've just explained how we handle that.
@PP
@BI { Undoing a grouping. }
Suppose that the stored tasks are unassigned initially.  A
structural solver groups them by assigning the followers to the
chosen leader task, then an ordinary solver assigns a resource to
the leader task, and then we need to undo the grouping.  An exact
undo would unassign the follower tasks, since they were unassigned
initially; but that is quite wrong.  In fact, the follower tasks'
assignments are moved from the leader task to whatever the leader
task is assigned to at the time of the undo.  We see here that an
overly literal interpretation of undo fails to capture the true
meaning, which is that a previously imposed requirement has to be
removed, without disturbing other requirements.  Function
@C { KheSolnAdjusterTaskGroup }
(Section {@NumberOf general_solvers.adjust.adjuster}) is offered
by the solution adjuster module to support this kind of undo.
@PP
@BI { Tasks which are leaders of their own groups. }
A stored task could be the leader task of a previously created
group.  This is not a problem, because task grouping concerns the
task's relationship with its parent, not its children.  If the
task is chosen to be the leader task of the new group, its
children will be partly from the old group and partly from the
new group.  When we use @C { sa } to remove the group, it unassigns
only the children from the new group, not all the children.
@PP
@BI { Assigned tasks. }
A stored task could have a parent task (possibly a cycle task,
denoting a resource).  Stored tasks with different parents
cannot be grouped, because grouping requires tasks to have the
same parent.  For example, two tasks assigned different resources
@M { r sub 1 } and @M { r sub 2 } cannot be grouped.  So this
is another case where @C { KheTaskGrouperAddTask } might
return @C { false }.
@PP
We cannot just declare that assigned tasks may not participate
in grouping, because there is an application where such a rule
would pose a major problem:  interval grouping, where the
assigned tasks come from assign by history.  Instead, the rule
is that assigned tasks are permitted provided they share the
same parent.  If the chosen leader task @M { l } is assigned
that parent, we move the others to @M { l }.  If @M { l } is
unassigned, we assign @M { l } the common parent and move the
others to @M { l }.  Either way, every task is now assigned the
common parent, albeit indirectly (via @M { l }).  Here is yet
another reason why @C { KheTaskGrouperAddTask } might return
@C { false }:  there might be a common parent whose domain is
not a subset of the domain of the chosen leader task @M { l }.
# @PP
# @BI { Unassigned tasks that become grouped with assigned tasks. }
# An obscure issue arises here.  Let @M { t } be an unassigned task
# that becomes assigned when it is grouped.  Suppose the grouping
# is done by a structural solver, and then an ordinary solver is
# run which assigns a resource to every task, and then the
# structural solver's work is undone.  If the undo is exact,
# @M { t } will be unassigned.  If no other solvers are run
# afterwards, @M { t } will be unassigned at the end of solving,
# probably quite unexpectedly.  Without grouping it would have
# been assigned, by the ordinary solver.
# @PP
# Fortunately, undo is not exact.  Instead, as explained above, it
# moves each follower task's assignment to whatever the leader task is
# assigned to at the moment of the undo.  So if the leader task is
# assigned then, @M { t } will also be assigned after the grouping
# is removed.  So this is not really a separate case; it is documented
# here because the author initially thought that it was.
# # To prevent this, when some stored tasks are assigned and others
# # are not, the task grouper divides what it does into two phases.
# # In the first phase, the unassigned tasks are assigned the parent
# # task.  In the second phase, the tasks (all of which are now assigned
# # the common parent) are grouped.  When the grouping is undone,
# # only the second phase is undone.  (This is implemented by carrying
# # out the second phase using @C { sa }, but carrying out the first
# # phase without using @C { sa }.)  The first phase has the feel of an
# # ordinary solve, whose work is not undone but may be altered.
@PP
@BI { Fixed task assignments. }
A task assignment may be @I { fixed }, meaning that it may not be
changed.  Interpreted literally, a task with a fixed assignment
cannot participate in task grouping unless it is chosen to be the
leader task.  But we will view task fixing as a logical requirement
that does not necessarily prevent a task from being grouped.
@PP
First, suppose that the task @M { t } whose assignment is fixed
is assigned to a task @M { u }.  Then if we ignore the fixing, in
the grouped task @M { t } will either keep its assignment (if it
is chosen to be the leader task) or else it will be assigned to
the leader task and the leader task will be assigned to @M { u }.
We regard this as acceptable for a fixed @M { t }, because @M { t }
is still assigned to @M { u }, indirectly.  So when building the
group, if @M { t } is not the leader task, we unfix it, move
it to the leader task, and re-fix it.
@PP
In the grouped state, the assignment of @M { t } to the leader
task could equally well be fixed or not fixed.  It does not
matter, because no-one is going to change it until the time
comes to undo it.  But we prefer to fix it.  What does matter
is that the assignment of the leader task to @M { u } must be
fixed, otherwise some ordinary solver could change it and thus
violate the fix on @M { t }.
@PP
Second, suppose that the task @M { t } whose assignment is
fixed is unassigned.  We interpret this as saying that @M { t }
may not be assigned.  Once again, we need to fix the assignment
of the leader task, but now we require that the leader task be
unassigned, since otherwise we have violated the fix on @M { t }.
So @M { t } cannot share a group with an assigned task, and we
have yet another case where @C { KheTaskGrouperAddTask }
might return @C { false }.
@PP
It takes some case-by-case analysis to prove it, but undoing the
grouping of a task whose assignment is initially fixed is
straightforward.  It is always correct to move the task's
assignment to the leader task's parent, and fix that assignment.
@PP
@BI { Summary of the task grouping algorithm. }
Given a set of tasks which have passed the checks made by
@C { KheTaskGrouperAddTask }, together with two values
calculated while adding them (the leader task, and the
shared parent), the actual grouping is done as follows.
@PP
First, move every task except the leader task to the leader
task.  Fixed tasks are unfixed before their move and re-fixed
after it.  Second, if the leader task is not currently assigned
to the shared parent, move it to the shared parent.  (If this
move is needed, then the leader task is not fixed.  This is
because the only way that its parent can differ from the
shared parent is for its parent to be @C { NULL } and the
shared parent to be non-@C { NULL }; and in that case, if
it was fixed it would be a fixed unassigned task which was
being grouped with an assigned task, which is not allowed.)
Third and last, if the leader task has at least one fixed
follower (which we determine as we move the followers), and
its assignment is not fixed, then fix its assignment.
@PP
Undoing is not exact, but we can approximate it by carrying out
in reverse order the reverse of each step above, and then adjust
the algorithm we get.  This produces the following.  A record of
what happened during grouping is held in @C { sa }; this undo
algorithm relies on that record.  There is not enough information
in the tasks themselves to determine what to do.
@PP
First, if the leader task was fixed during grouping, unfix it.
Second, irrespective of whether the leader task was moved to a
shared parent, its parent after the undo has to be its
parent at the time of the undo, so do nothing.  Third,
move every follower task from the leader task to the
leader task's parent at the time of the undo (possibly
@C { NULL }).  If the follower task was (and so is) fixed,
unfix it before the move and re-fix it afterwards.
@PP
@BI { Another interface to task grouping. }
There is a different way to access task grouping.  It offers
the same semantics (indeed, behind the scenes it runs the
same code), but for certain applications (interval grouping,
for example) it can save a lot of time and memory.
@PP
Instead of type @C { KHE_TASK_GROUPER }, this interface uses type
@C { KHE_TASK_GROUPER_ENTRY }.  This type holds one task of the growing
group, some information about the group (its leader task and shared
parent, mainly), and a pointer back to the previous entry, holding
the previous task and information.  This pointer will be @C { NULL }
in the entry holding the first task.  This makes a singly linked
list of tasks and information, accessed from the last (most recently
added) entry.
@PP
The advantage of the linked structure is that if we are trying
two sequences of tasks, @M { angleleft a, b, c angleright } and
@M { angleleft a, b, d angleright }, then the first part of the
two sequences, @M { angleleft a, b angleright }, can be shared.
This is where the time and memory savings can be made.
@C { KheTaskGrouperDeleteTask } offers analogous savings
(delete @M { c } then add @M { d }), but it does not allow
the two proto-groups to exist simultaneously.
@PP
Just two operations make up this interface to task grouping:
@ID @C {
bool KheTaskGrouperEntryAddTask(KHE_TASK_GROUPER_ENTRY prev,
  KHE_TASK task, KHE_TASK_GROUPER_ENTRY next);
KHE_TASK KheTaskGrouperEntryMakeGroup(KHE_TASK_GROUPER_ENTRY last,
  KHE_SOLN_ADJUSTER sa);
}
@C { KheTaskGrouperEntryAddTask } is semantically the same as
@C { KheTaskGrouperAddTask }, but here the previously added
tasks are represented by @C { prev }.  This will be @C { NULL }
when @C { task } is the first task; in that case the result
must be @C { true }.  The result of the
addition (if @C { true } is returned) is represented by
@C { next }, which will contain @C { task } and related
information.  @C { KheTaskGrouperEntryMakeGroup } is semantically
the same as @C { KheTaskGrouperMakeGroup }, but here the tasks
to be grouped are the task stored in @C { last }, the task
stored in its predecessor entry, and so on.
@PP
Any number of calls to @C { KheTaskGrouperEntryAddTask } with
the same @C { prev } may be made.  This is how sequences come
to share subsequences, as described above.  A group is defined
by its last entry.  There is no ambiguity, because there is
only one path going backwards.
@PP
This form of task grouping does not allocate any memory.
The memory pointed to by @C { prev } (if non-@C { NULL }) and
@C { next } (always non-@C { NULL }) must be allocated by
the caller, using code such as
@ID @C {
struct khe_task_grouper_entry_rec new_entry_rec;
KheTaskGrouperEntryAddTask(prev, task, &new_entry_rec);
}
Here @C { struct khe_task_grouper_entry_rec } is the struct that
@C { KHE_TASK_GROUPER_ENTRY } points to; it is defined (with
its fields) in @C { khe_solvers.h } alongside
@C { KHE_TASK_GROUPER_ENTRY }.  @C { KheTaskGrouperEntryAddTask }
overwrites the memory pointed to by @C { next }.
# @C { KHE_TASK_GROUPER_ENTRY } is a
# @C { struct }, not a pointer type.  Its definition appears in file
# @C { khe_solvers.h }, although it is better if the user treats it
# as a private type.  When calling @C { KheTaskGrouperEntryAddTask },
# both @C { prev } (if non-@C { NULL }) and @C { next } (always
# non-@C { NULL }) must point to memory that the caller has made
# available to hold values of this type.  The memory pointed to by
# @C { next } will be overwritten by @C { KheTaskGrouperEntryAddTask }.
# When used this way, task grouping does not itself allocate any memory.
@PP
Actually there is a third function, added to fix an issue in
interval grouping:
@ID @C {
void KheTaskGrouperEntryAddDummy(KHE_TASK_GROUPER_ENTRY prev,
  KHE_TASK_GROUPER_ENTRY next);
}
Like @C { KheTaskGrouperEntryAddTask }, this adds @C { next } as a
successor to @C { prev }, but here the entry is a @I { dummy }:
it changes nothing.  The fields of @C { next } are copied from
@C { prev }, which must be non-@C { NULL }.  The new entry is
marked so that @C { KheTaskGrouperEntryMakeGroup } knows to ignore it.
@End @SubSection

@SubSection
    @Title { The task multi-grouper }
    @Tag { resource_structural.task_grouping.multi }
@Begin
@LP
A @I { task multi-grouper } is a structural solver that makes
multiple task groups.  Actually any structural solver that makes
task groups will do this; the task multi-grouper does it for
certain simple cases.  To make one, call
@ID {0.96 1.0} @Scale @C {
KHE_TASK_MULTI_GROUPER KheTaskMultiGrouperMake(KHE_SOLN soln,
  KHE_FRAME days_frame, HA_ARENA a);
}
It remains available until @C { a } is deleted.  The days frame
defines the days; it may be @C { NULL } in some cases (see below).
The multi-grouper can be cleared (returned to its initial state)
by calling
@ID @C {
void KheTaskMultiGrouperClear(KHE_TASK_MULTI_GROUPER tmg);
}
The basic way to add a task is
@ID {0.96 1.0} @Scale @C {
bool KheTaskMultiGrouperAddTask(KHE_TASK_MULTI_GROUPER tmg, KHE_TASK task);
}
If @C { task } is a proper root task, this adds @C { task } to
@C { tmg } and returns @C { true }.  Otherwise it returns
@C { false } and adds nothing.  For convenience there is also
@ID {0.96 1.0} @Scale @C {
void KheTaskMultiGrouperAddResourceTypeTasks(KHE_TASK_MULTI_GROUPER tmg,
  KHE_RESOURCE_TYPE rt);
}
This calls @C { KheTaskMultiGrouperAddTask } for each task of type
@C { rt }.  Other convenience functions for adding tasks could easily
be added.
@PP
Once the tasks are all present, a call to
@ID @C {
void KheTaskMultiGrouperMakeGroups(KHE_TASK_MULTI_GROUPER tmg,
  KHE_TASK_MULTI_GROUPER_GROUP_TYPE group_type, KHE_SOLN_ADJUSTER sa);
}
carries out the actual grouping.  There are many possible rules for
determining which tasks get grouped with which, and @C { group_type }
says which of these rules is followed:
@ID @C {
typedef enum {
  KHE_TASK_MULTI_GROUPER_GROUP_SAME_RESOURCE,
  KHE_TASK_MULTI_GROUPER_GROUP_SAME_RESOURCE_CONSECUTIVE
} KHE_TASK_MULTI_GROUPER_GROUP_TYPE;
}
At present two rules are implemented.
@C { KHE_TASK_MULTI_GROUPER_GROUP_SAME_RESOURCE }
places all tasks assigned the same non-@C { NULL }
resource into one group.  The other possibility is
@C { KHE_TASK_MULTI_GROUPER_GROUP_SAME_RESOURCE_CONSECUTIVE }.
It places all tasks assigned the same non-@C { NULL } resource
and running on consecutive days of @C { days_frame } into one
group.  In the second case, parameter @C { days_frame } of
@C { KheTaskMultiGrouperMake } must have been non-@C { NULL }; it
defines the days.  In both cases, unassigned tasks become the only
members of their group.  To ignore them altogether, don't add them.
@PP
Every task added to the multi-grouper will lie in a group, even if
(as for unassigned tasks) it is its group's only member.  If,
when making one group, the task grouper
(Section {@NumberOf resource_structural.task_grouping.task_grouper})
refuses to add some task to a growing group, a new group is begun
with that task for its first member.
@PP
If @C { sa != NULL }, the task group operations used to build the
groups are saved in @C { sa }, so that the groups can be removed
later if desired.  One can call @C { KheSolnAdjusterUndo(sa) }
in the usual way to remove the groups, @C { KheSolnAdjusterRedo(sa) }
to reinstate them, and so on.
@PP
After the groups have been made, functions
@ID @C {
int KheTaskMultiGrouperGroupCount(KHE_TASK_MULTI_GROUPER tmg);
KHE_TASK KheTaskMultiGrouperGroup(KHE_TASK_MULTI_GROUPER tmg, int i);
}
can be used to visit the groups' leader tasks.  This is valid even
if the grouping has been undone, although in that case the leader
tasks will not be assigned their follower tasks.
@End @SubSection

# @SubSection
#     @Title { Grouping by resource }
#     @Tag { resource_structural.task_grouping.resource }
# @Begin
# @LP
# @I { Grouping by resource } is a kind of task grouping,
# obtained by calling
# @ID @C {
# bool KheGroupByResource(KHE_SOLN soln, KHE_RESOURCE_TYPE rt,
#   KHE_OPTIONS options, KHE_SOLN_ADJUSTER sa);
# }
# # bool KheTaskingGroupByResource(KHE_TASKING tasking,
# #   KHE_OPTIONS options, KHE_TASK_SET ts);
# Similarly to grouping by resource constraints, to be described in
# Section {@NumberOf resource_structural.grouping_by_rc}, it groups
# tasks of resource type @C { rt } which lie in adjacent time groups
# of the common frame, and records each adjustment it makes in
# @C { sa } (if @C { sa } is non-@C { NULL }) so that it can be
# undone later.  However, the tasks are chosen in quite a
# different way:  each group consists of a maximal sequence of
# tasks which lie in adjacent time groups of the frame and are
# currently assigned to the same resource.  The thinking is that
# if the solution is already of good quality, it may be advantageous
# to keep these runs of tasks together while trying to assign them
# to different resources using an arbitrary repair algorithm.
# @PP
# It is also possible to pass @C { NULL } for @C { rt }.  In that
# case the algorithm is run for each resource type of @C { soln }'s
# instance in turn.
# @PP
# There are rare cases where incompatibilities between tasks
# prevent them from being grouped.  In those cases, what should
# be one group may turn out to be two or more groups.
# # @PP
# # When a grouping made by @C { KheTaskingGroupByResource } and
# # recorded in a task set is no longer needed, function
# # @C { KheTaskSetUnGroup } (Section {@NumberOf extras.task_sets})
# # may be used to remove it.
# @End @SubSection

# @SubSection
#     @Title { The task resource grouper }
#     @Tag { resource_structural.task_grouping.task_resource_grouper }
# @Begin
# @LP
# A @I { task resource grouper } supports a form of task grouping
# which allows the grouping to be done, undone, and redone at will.
# @PP
# The first step is to create a task resource grouper object, by calling
# @ID @C {
# KHE_TASK_RESOURCE_GROUPER KheTaskResourceGrouperMake(
#   KHE_RESOURCE_TYPE rt, HA_ARENA a);
# }
# This makes a task resource grouper for tasks of type @C { rt }.
# It is deleted when @C { a } is deleted.  Also,
# @ID @C {
# void KheTaskResourceGrouperClear(KHE_TASK_RESOURCE_GROUPER trg);
# }
# clears @C { trg } back to its state immediately after
# @C { KheTaskResourceGrouperMake }.
# @PP
# To add tasks to a task resource grouper, make any number of calls to
# @ID @C {
# bool KheTaskResourceGrouperAddTask(KHE_TASK_RESOURCE_GROUPER trg,
#   KHE_TASK t);
# }
# Each task passed to @C { trg } in this way must be assigned directly
# to the cycle task for some resource @C { r } of type @C { rt }.  The
# tasks passed to @C { trg } by @C { KheTaskResourceGrouperAddTask } which are
# assigned @C { r } at the time they are passed are placed in one group.
# No assignments are made.
# @PP
# If @C { true } is returned by @C { KheTaskResourceGrouperAddTask },
# @C { t } is the @I { leader task } for its group:  the first
# task assigned @C { r } passed to @C { trg }.  If @C { false }
# is returned, @C { t } is not the leader task.
# @PP
# Adding the same task twice is legal but is the same as adding it
# once.  If the task is the leader task, it is reported to be so
# only the first time it is passed.
# @PP
# Importantly, although the grouping is determined by which resources
# the tasks are assigned to, it is only the grouping that the grouper
# cares about, not the resources.  Once the groups are made, the resources
# that determined the grouping become irrelevant to the grouper.
# @PP
# At any time one may call
# @ID @C {
# void KheTaskResourceGrouperGroup(KHE_TASK_RESOURCE_GROUPER trg);
# void KheTaskResourceGrouperUnGroup(KHE_TASK_RESOURCE_GROUPER trg);
# }
# @C { KheTaskResourceGrouperGroup } ensures that, in each group, the
# tasks other than the leader task are assigned directly to the leader
# task.  It does not change the assignment of the leader task.
# @C { KheTaskResourceGrouperUnGroup } ensures that, for each group,
# the tasks other than the leader task are assigned directly to
# whatever the leader task is assigned to (possibly nothing).  As
# mentioned, the resources which defined the groups originally
# are irrelevant to these operations.
# @PP
# If @C { KheTaskResourceGrouperGroup } cannot assign some task to its
# leader, it adds the task's task bounds to the leader and tries again.
# If it cannot add these bounds, or the assignment still does not succeed,
# it aborts.  As well as ungrouping, @C { KheTaskResourceGrouperUnGroup }
# removes any task bounds that were added by
# @C { KheTaskResourceGrouperGroup }.  In detail,
# @C { KheTaskResourceGrouperGroup } records the number of task bounds present
# when it is first called, and @C { KheTaskResourceGrouperUnGroup } removes
# task bounds from the end of the leader task until this number is reached.
# @PP
# A task grouper's tasks may be grouped and ungrouped at will.  This is
# more general than using a solution adjuster, since after ungrouping
# that way there is no way to regroup.
# # The extra power comes from the fact that a task grouper contains,
# # in effect, a task set for each group.
# @PP
# The author has encountered one case where @C { KheTaskResourceGrouperUnGroup }
# fails to remove the task bounds added by @C { KheTaskResourceGrouperGroup }.
# The immediate problem has probably been fixed, although it is hard to
# be sure that it will not recur.  So instead of aborting in that case,
# @C { KheTaskResourceGrouperUnGroup } prints a debug message and stops
# removing bounds for that task.
# @End @SubSection

@EndSubSections
@End @Section

@Section
    @Title { Task grouping by resource constraints }
    @Tag { resource_structural.grouping_by_rc }
@Begin
@LP
@I { Task grouping by resource constraints }, or @I { TGRC }, is
KHE's term for grouping tasks together, forcing the tasks in each
group to be assigned the same resource, based on analyses of
resource constraints which suggest that solutions in which the
tasks in each group are not assigned the same resource are likely
to be inferior.  That does not mean that those tasks will always
be assigned the same resource in good solutions, any more than,
say, a constraint requiring nurses to work complete weekends is
always satisfied in good solutions.  However, in practice those
tasks usually do end up being assigned the same resource, so it
makes sense to require it, at least to begin with.  Later we can
remove the groups and see what happens.
@PP
@C { KheTaskTreeMake } also groups tasks, but its groups are based
on avoid split assignments constraints, whereas here we make groups
based on resource constraints.
@PP
The function is
@ID @C {
bool KheGroupByResourceConstraints(KHE_SOLN soln, KHE_RESOURCE_TYPE rt,
  KHE_OPTIONS options, KHE_SOLN_ADJUSTER sa);
}
There is no @C { tasking } parameter because this kind of grouping
cannot be applied to an arbitrary set of tasks, as it turns out.
Instead, it applies to all tasks of @C { soln } whose resource
type is @C { rt }, which lie in a meet which is assigned a time,
with some exceptions, discussed below.  If @C { rt } is @C { NULL },
@C { KheGroupByResourceConstraints } applies itself to each of the
resource types of @C { soln }'s instance in turn.  It tries to group
these tasks, returning @C { true } if it groups any.  If
@C { sa != NULL }, it saves any changes in solution adjuster
@C { sa } (Section {@NumberOf general_solvers.adjust.adjuster}),
so that they can be undone later.
@PP
@C { KheGroupByResourceConstraints } finds whatever groups it can
among these tasks.  It makes each such @I { task group } by
choosing one of its tasks as the @I { leader task } and assigning
the others to it.  It makes assignments only to proper root tasks
(non-cycle tasks not already assigned to other non-cycle tasks),
so it does not disturb existing groups.  But it does take existing
groups into account:  it will use tasks to which other tasks are
assigned in its own groups.
@PP
Tasks initially assigned a resource participate in TGRC.  Two
tasks can be put into the same group only if they are not
assigned different resources initially; and if any of the grouped
tasks are assigned a resource initially, the whole group is
assigned that resource finally.
# {0.97 1.0} @Scale @C { KheMTaskFinderGroupBegin },
# {0.97 1.0} @Scale @C { KheMTaskFinderGroupAddTask }, and
# {0.97 1.0} @Scale @C { KheMTaskFinderGroupEnd }
# from Section {@NumberOf resource_structural.mtask_finding.solver}
# follow this rule.
# @PP
# However, in practice, when @C { KheGroupByResourceConstraints }
# is called the only tasks assigned a resource have been assigned
# by @C { KheAssignByHistory }
# (Section {@NumberOf resource_solvers.assignment.history}).  In
# effect, those tasks are already grouped.  Given that
# @C { KheGroupByResourceConstraints } does not take account of
# history (ideally it would, but it does not at present), the
# practical way forward is for it to ignore tasks which are
# assigned a resource, just as though they were not there.
# @PP
# Tasks which are initially assigned a resource participate in
# grouping.  Such a task may have its assignment changed to some
# other task, but in that case the other task will be assigned the
# resource.  In other words, if one task is assigned a resource
# initially, and it gets grouped, then its whole group will be
# assigned that resource afterwards.  Two tasks initially assigned
# different resources will never be grouped together.
@PP
Tasks whose assignments are fixed (even to @C { NULL }) are
usually ignored.  They cannot join groups, because joining a group
would change their assignments, unless they happen to be chosen
as leader tasks.  At present there is an awkward
workaround in place to allow task grouping to cooperate
with assign by history, in which tasks with fixed assignments
to non-@C { NULL } resource values participate in grouping.
Their assignments are unfixed then refixed to other tasks,
but without changing the resources they are assigned to.
# It is true that they could become leader tasks, since
# the assignments of leader tasks are not changed, but there are
# other considerations when choosing leader tasks, and to add fixing
# to the mix has seemed to the author to be a bridge too far.  In
# any case there are not likely to be any fixed unassigned proper
# root tasks when @C { KheGroupByResourceConstraints } is called.
# In practice fixed tasks are fixed by @C { KheAssignByHistory }
# (Section {@NumberOf resource_solvers.assignment.history}), so they
# are already grouped (in effect) and it is reasonable to ignore them.
# @PP
# If @C { ts } is non-@C { NULL }, every task that
# @C { KheGroupByResourceConstraints } assigns to another task is added
# to @C { ts }.  So the groups can be removed when they are no longer
# wanted, by running through @C { ts } and unassigning its tasks.
# @C { KheTaskSetUnGroup } (Section {@NumberOf extras.task_sets}) does this.
@PP
Most of the tasks that participate in grouping are tasks for which
non-assignment has a non-zero cost.  In practice only a few tasks
for which non-assignment has cost zero (and assignment has cost
zero or greater) participate in TGRC, and only when there seems
to be no other way to build the needed groups.
@PP
To summarize, then, @C { KheGroupByResourceConstraints } applies
to each proper root task of @C { soln } whose resource type is
@C { rt } (or any type if @C { rt } is @C { NULL }), which lies
in a meet which is assigned a time, and (usually) for which
non-assignment has a non-zero cost.
@PP
@C { KheGroupByResourceConstraints } uses two kinds of grouping.
The first, @I { combinatorial grouping }, tries all combinations of
assignments over a few consecutive days, building a group when just
one of those combinations has zero cost, according to the cluster
busy times and limit busy times constraints that monitor those days.
The second, @I { interval grouping }, uses limit active intervals
constraints to find different kinds of groups.  All this is
explained below.
@PP
@C { KheGroupByResourceConstraints } consults option
@C { rs_invariant }, and also
@TaggedList

@DTI { @F rs_group_by_rc_off } @OneCol {
A Boolean option which, when @C { true }, turns task grouping by
resource constraints off.
}

@DTI { @F rs_group_by_rc_max_days } @OneCol {
An integer option which determines the maximum number of consecutive days
(in fact, time groups of the common frame) examined by combinatorial
grouping (Section {@NumberOf resource_structural.grouping_by_rc.applying}).
Values 0 or 1 turn combinatorial grouping off.  The default value is 3.
}

@DTI { @F rs_group_by_rc_combinatorial_off } @OneCol {
A Boolean option which, when @C { true }, turns combinatorial grouping off.
}

@DTI { @F rs_group_by_rc_interval_off } @OneCol {
A Boolean option which, when @C { true }, turns interval grouping off.
}

@EndList
It also calls @C { KheFrameOption } (Section {@NumberOf extras.frames})
to obtain the common frame.
@PP
The following subsections describe the algorithms used behind the
scenes for TGRC.  There are many details; some have been omitted.
The last subsections document the interface used by the TGRC
modules to communicate with each other, as found in header file
@C { khe_sr_tgrc.h }.
# in more detail than the user
# is likely to need.  Types and functions mentioned in these subsections
# are declared in header file @C { khe_sr_tgrc.h }, which is not
# included in file @C { khe_solvers.h }.  So although TGRC is
# implemented over multiple source files, its internal details are not
# made available to users.
# There are two main kinds:  combinatorial
# grouping and profile grouping.
# The following subsections describe @C { KheGroupByResourceConstraints }
# in detail.  It has several parts, which are available separately, as we
# will see.  For each resource type, it first calls @C { KheMTaskFinderMake }
# (Section {@NumberOf resource_structural.mtask_finding.solver})
# to make an mtask finder, and @C { KheCombGrouperMake } (see below) to
# make a combinatorial grouper object @C { cg }.  Then, using @C { cg },
# it calls @C { KheCombGrouping } to perform combinatorial grouping, and
# then @C { KheProfileGrouping } to perform profile grouping, first with
# @C { non_strict } set to @C { false }, then again with @C { non_strict }
# set to @C { true }.
@BeginSubSections

@SubSection
    @Title { Combinatorial grouping }
    @Tag { resource_structural.grouping_by_rc.combinatorial }
@Begin
@LP
Suppose that there are two kinds of shifts, day and night; that each
nurse must be busy on both days of the weekend or neither; and
that nurses cannot work a day shift on the day after a night shift.
Then nurses assigned to the Saturday night shift must work on
Sunday, and so must work the Sunday night shift.  So it makes sense
to group one Saturday night shift with one Sunday night shift, and to
do so repeatedly until night shifts run out on one of those days.
@PP
Suppose that the groups just made consume all the Sunday night shifts.
Then nurses working the Saturday day shifts cannot work the Sunday
night shifts, because the Sunday night shifts are grouped with
Saturday night shifts now, which clash with the Saturday day shifts.
So now it is safe to group one Saturday day shift with one Sunday
day shift, and to do so repeatedly until day shifts run out on one
of those days.
@PP
Groups made in this way can be a big help to solvers.  In instance
@C { COI-GPost.xml }, for example, each Friday night task can be
grouped with tasks for the next two nights.  Good solutions always
assign these three tasks to the same resource, owing to constraints
specifying that the weekend following a Friday night shift must be
busy, that each weekend must be either free on both days or busy on
both, and that a night shift must not be followed by a day shift.
# A time sweep task assignment algorithm (say) cannot look ahead
# and see such cases coming.
@PP
@I { Combinatorial grouping } realizes these ideas.  It enumerates
a space whose elements are sets of mtasks
(Section {@NumberOf resource_structural.mtask_finding.ops}).  The space
is defined by @I { requirements } supplied by the caller.  As explained
in Section {@NumberOf resource_structural.grouping_by_rc.impl2},
the requirements could state that the sets must
cover a given time group or mtask, or must not cover a given
time group or mtask, and so on.  For each set of mtasks
@M { S } in the search space, it calculates a cost @M { c(S) },
by evaluating the resource constraints that apply to one
resource in the part of the cycle covered by @M { S },
and selects a set @M { S prime } such that @M { c( S prime ) }
is minimal, or zero.  It then makes one group by selecting one
task from each mtask of @M { S prime } and grouping those tasks,
and then repeating that until as many tasks as possible or
desired have been grouped.
@PP
As formulated here, combinatorial grouping is a low-level
algorithm which finds and groups one set of mtasks @M { S prime }.
It is called on by higher-level algorithms to do their actual
grouping.  For example, a higher-level algorithm might try
combinatorial grouping at various points through the cycle,
or even try it repeatedly at the same points, as in the
example above, where grouping the Saturday and Sunday night
shifts would be one application of combinatorial grouping, then
grouping the Saturday and Sunday day shifts would be another.
# @PP
# As formulated here, one application of combinatorial grouping
# groups one set of mtasks @M { S prime }.  In the example above,
# grouping the Saturday and Sunday night shifts would be one
# application, then grouping the Saturday and Sunday day shifts
# would be another.
@PP
The number of sets of mtasks tried by combinatorial grouping will
usually be exponential in the number of days involved in the search.
So the number of days has to be small, unless the choices on each
day are very limited.
# In practice that should be
# enough anyway, given that most groups involve weekends.
@End @SubSection

@SubSection
  @Title { Using combinatorial grouping with combination reduction }
  @Tag { resource_structural.grouping_by_rc.applying }
@Begin
@LP
This section describes one way in which the general idea of
combinatorial grouping, as just presented, is applied by TGRC.
# This way is implemented by function
# @ID @C {
# int KheCombGrouping(KHE_COMB_GROUPER cg, KHE_OPTIONS options,
#   KHE_SOLN_ADJUSTER sa);
# }
# It does what this section describes, and returns the number of
# groups it makes.  If @C { sa != NULL }, any task assignments it
# makes are saved in @C { sa }, so that they can be undone later.
@PP
Let @M { m } be the value of the @F rs_group_by_rc_max_days option
described earlier.  Iterate over all pairs @M { (f, t) }, where
@M { f } is a subset of the common frame containing @M { k } adjacent
time groups, for all @M { k } such that @M { 2 <= k <= m }, and
@M { t } is an mtask that covers @M { f }'s first or last time group.
@PP
For each @M { (f, t) } pair, run combinatorial grouping, set up
to require that @M { t } be covered and that each of the @M { k }
time groups of @M { f } be free to be either covered or not, and
only doing grouping when there is a unique zero-cost grouping
satisfying these requirements.
# with one mtask requirement with cover `yes' for @M { t }, and one
# time group requirement with cover `free' for each of the @M { k }
# time groups of @M { f }.  Set @C { max_num } to @C { INT_MAX },
# and set @C { cg_variant } to @C { KHE_COMB_VARIANT_SOLE_ZERO }.
# If there is a unique zero-cost way to group a task of @M { t }
# with tasks on the preceding or following @M { k - 1 } days,
# this call will find it and build as many groups as it can.
# @PP
# For each @M { (f, t) } pair, run @C { KheCombGrouperSolve }, set up
# with one mtask requirement with cover `yes' for @M { t }, and one
# time group requirement with cover `free' for each of the @M { k }
# time groups of @M { f }.  Set @C { max_num } to @C { INT_MAX },
# and set @C { cg_variant } to @C { KHE_COMB_VARIANT_SOLE_ZERO }.
# If there is a unique zero-cost way to group a task of @M { t }
# with tasks on the preceding or following @M { k - 1 } days,
# this call will find it and build as many groups as it can.
@PP
If @M { f } has @M { k } time groups, each with @M { n } mtasks,
say, there are up to @M { (n + 1) sup {k - 1} } combinations for
each run, so @C { rs_group_by_rc_max_days } must be small, say 3,
or 4 at most.  In any case, unique zero-cost groupings typically
concern weekends, so larger values are unlikely to yield anything.
@PP
If one @M { (f, t) } pair produces some grouping, then return to
the first pair containing @M { f }.  This handles cases like the
one described earlier, where a grouping of Saturday and Sunday night
shifts opens the way to a grouping of Saturday and Sunday day shifts.
@PP
The remainder of this section describes @I { combination reduction }.
This is a refinement that TGRC uses to make unique zero-cost
combinations more likely in some cases.
@PP
Some combinations examined by combinatorial grouping may have zero
cost as far as the monitors used to evaluate it are concerned, but
have non-zero cost when evaluated in a different way, involving the
overall supply of and demand for resources.  Such combinations can
be ruled out, leaving fewer zero-cost combinations, and potentially
more task grouping.
@PP
For example, suppose there is a maximum limit on the number of
weekends each resource can work.  If this limit is tight
enough, it will force every resource to work complete weekends,
even without an explicit constraint, if that is the only way
that the available supply of resources can cover the demand
for weekend shifts.  This example fits the pattern to be given
now, setting @M { C } to the constraint that limits the number
of busy weekends, @M { T } to the times of all weekends,
@M { T sub i } to the times of the @M { i }th weekend, and
@M { f tsub i } to the number of days in the @M { i }th weekend.
@PP
Take any any set of times @M { T }.  Let @M { S(T) }, the
@I { supply during @M { T } }, be the sum over all resources
@M { r } of the maximum number of times that @M { r } can be busy
during @M { T } without incurring a cost.  Let @M { D(T) }, the
@I { demand during @M { T } }, be the sum over all tasks @M { x }
for which non-assignment would incur a cost, of the number of times
@M { x } is running during @M { T }.  Then @M { S(T) >= D(T) }
or else a cost is unavoidable.
@PP
In particular, take any cluster busy times constraint @M { C } which
applies to all resources, has time groups which are all positive, and
has a non-trivial maximum limit @M { M }.  (The analysis also applies
when the time groups are all negative and there is a non-trivial
minimum limit, setting @M { M } to the number of time groups minus
the minimum limit.)  Suppose there are @M { n } time groups
@M { T sub i }, for @M { 1 <= i <= n }, and let their union be @M { T }.
@PP
Let @M { f tsub i } be the number of time groups from the common
frame with a non-empty intersection with @M { T sub i }.  This is
the maximum number of times from @M { T sub i } during which any one
resource can be busy without incurring a cost, since a resource can
be busy for at most one time in each time group of the common frame.
@PP
Let @M { F } be the sum of the largest @M { M } of the @M { f tsub i }
values.  This is the maximum number of times from @M { T } that
any one resource can be busy without incurring a cost:  if it is
busy for more times than this, it must either be busy for more
than @M { f tsub i } times in some @M { T sub i }, or else it
must be busy for more than @M { M } time groups, violating the
constraint's maximum limit.
@PP
If there are @M { R } resources altogether, then the supply during
@M { T } is bounded by
@ID @Math { S(T) <= RF }
since @M { C } is assumed to apply to every resource.
@PP
As explained above, to avoid cost the demand must not exceed the
supply, so
@ID @M { D(T) <= S(T) <= RF }
Furthermore, if @M { D(T) >= RF }, then any failure to maximize
the use of workload will incur a cost.  That is, every resource
which is busy during @M { T sub i } must be busy for the full
@M { f tsub i } times in @M { T sub i }.
@PP
So the effect on grouping is this:  if @M { D(T) >= RF }, a resource
that is busy in one time group of the common frame that overlaps
@M { T sub i } should be busy in every time group of the common
frame that overlaps @M { T sub i }.  TGRC searches for constraints
@M { C } that have this effect, and informs its combinatorial
grouping solver about what it found by changing the requirements
for some time groups from `a group is free to cover this time group,
or not' to `a group must cover this time group if and only if it
covers the previous time group'.  When searching for groups, the
option of covering some of these time groups but not others is removed.
With fewer options, there is more chance that some combination
might be the only one with zero cost, allowing more task grouping.
@PP
Instance @C { CQ14-05 } has two constraints that limit busy weekends.
One applies to 10 resources and has maximum limit 2; the other applies
to the remaining 6 resources and has maximum limit 3.  So combination
reduction actually takes sets of constraints with the same time
groups that together cover every resource once.  Instead of @M { RF }
(above), it uses the sum over the set's constraints @M { c sub j }
of @M { R sub j F sub j }, where @M { R sub j } is the number of
resources that @M { c sub j } applies to, and @M { F sub j } is the
sum of the largest @M { M sub j } of the @M { f tsub i } values,
where @M { M sub j } is the maximum limit of @M { c sub j }.  The
@M { f tsub i } are the same for all @M { c sub j }.
@End @SubSection

# @SubSection
#     @Title { Profile grouping }
#     @Tag { resource_structural.grouping_by_rc.profile }
# @Begin
# @LP
# Suppose 6 nurses are required on the Monday, Tuesday, Wednesday,
# Thursday, and Friday night shifts, but only 4 are required on the
# Saturday and Sunday night shifts.  Consider any division of the
# night shifts into sequences of one or more shifts on consecutive
# days.  However these sequences are made, at least two must begin
# on Monday, and at least two must end on Friday.
# @PP
# Now suppose that the intention is to assign the same resource to
# each shift of any one sequence, and that a limit active intervals
# constraint, applicable to all resources, specifies that night shifts
# on consecutive days must occur in sequences of at least 2 and at most
# 3.  Then the two sequences of night shifts that must begin on Monday
# must contain a Monday night and a Tuesday night shift at least, and the
# two that end on Friday must contain a Thursday night and a Friday night
# shift at least.  So here are two groupings, of Monday and Tuesday
# nights and of Thursday and Friday nights, for each of which we can
# build two task groups.
# @PP
# Suppose that we already have a task group which contains a sequence
# of 3 night shifts on consecutive days.  This group cannot be grouped
# with any night shifts on days adjacent to the days it currently
# covers.  So for present purposes the tasks of this group can be
# ignored.  This can change the number of night shifts running on
# each day, and so change the amount of grouping.  For example, in
# instance @C { COI-GPost.xml }, all the Friday, Saturday, and Sunday
# night shifts get grouped into sequences of 3, and 3 is the maximum,
# so those night shifts can be ignored here, and so every Monday night
# shift begins a sequence, and every Thursday night shift ends one.
# @PP
# We now generalize this example, ignoring for the moment a few
# issues of detail.  Let @M { C } be any limit active intervals
# constraint which applies to all resources, and whose time groups
# @M { T sub 1 ,..., T sub k } are all positive.  Let @M { C }'s
# limits be @M { C sub "max" } and @M { C sub "min" }, and suppose
# @M { C sub "min" } is at least 2 (if not, there can be no grouping
# based on @M { C }).  What follows is relative to @M { C }, and is
# repeated for each such constraint.  Constraints with the same
# time groups are notionally merged, allowing the minimum limit
# to come from one constraint and the maximum limit from another.
# @PP
# Let @M { n sub i } be the number of tasks of interest that cover
# @M { T sub i }.  The @M { n sub i } make up the @I profile of @M { C }.
# @PP
# A @I { long task } is a task which covers at least @M { C sub "max" }
# adjacent time groups from @M { C }.  Long tasks can have no influence
# on grouping to satisfy @M { C }'s minimum limit, so they may be ignored,
# that is, profile grouping may run as though they are not there.  This
# applies both to tasks which are present at the start, and tasks which
# are constructed along the way.  
# @PP
# # A task is @I { admissible } (for profile grouping) if it satisfies
# # the following conditions:
# # @NumberedList
# # 
# # @LI {
# # The task is a proper root task lying within an mtask created by the mtask
# # finder passed to profile grouping when @C { KheProfileGrouping }
# # (see below) is called.
# # }
# # 
# # @LI {
# # The task is not fixed, not assigned a resource, and it needs assignment.
# # }
# # 
# # @LI {
# # The task is not a long task.
# # }
# # 
# # @EndList
# # If a task is admissible, then every unassigned task in that task's
# # mtask is also admissible.
# # @PP
# # For the definition of `cover' see
# # Section {@NumberOf resource_structural.grouping_by_rc.combinatorial}.
# # @PP
# As profile grouping proceeds, some tasks become grouped into larger
# tasks which are no longer relevant because they are long.  This causes
# some of the @M { n sub i } values to decrease.  We always base our
# decisions on the current profile, not the original profile.
# @PP
# For each @M { i } such that @M { n sub {i-1} < n sub i },
# @M { n sub i - n sub {i-1} } groups of length at least
# @M { C sub "min" } must start at @M { T sub i } (more precisely,
# they must cover @M { T sub i } but not  @M { T sub {i-1} }).  They may
# be constructed by combinatorial grouping, passing in time groups
# @M { T sub i ,..., T sub { i + C sub "min" - 1 } } with cover type
# `yes', and @M { T sub {i-1} } and @M { T sub { i + C sub "max" } } with
# cover type `no', asking for @M { m = n sub i - n sub {i-1} - c sub i }
# tasks, where @M { c sub i } is the number of existing tasks (not
# including long ones) that satisfy these conditions already.
# # (as returned by @C { KheCombSolverSingles }).
# The new groups must group at least 2 tasks each.  Some of the time
# groups may not exist; in that case, omit them, but still do the
# grouping if there are at least 2 `yes' time groups.  The case for
# sequences ending at @M { j } is symmetrical.
# @PP
# If @M { C } has no history, we set @M { n sub 0 } and
# @M { n sub {k+1} } to 0, encouraging groups to begin at @M { T sub 1 }
# and end at @M { T sub k }.  If @M { C } has history, we still
# set @M { n sub 0 } to 0, reasoning that assign by history
# (Section {@NumberOf resource_solvers.assignment.history}) has
# taken care of history at that end; but we set @M { n sub {k+1} } to
# +2p @Font @M { infty }, preventing groups from being formed to
# end at @M { T sub k }.
# # we do not know
# # how many tasks are running outside @M { C }, so we set @M { n sub 0 }
# # and @M { n sub {k+1} } to infinity, preventing groups from beginning
# # at @M { T sub 1 } and ending at @M { T sub k }.
# @PP
# Groups made by one round of profile grouping may participate in later
# rounds.  Suppose @M { C sub "min" = 2 }, @M { C sub "max" = 3 },
# @M { n sub 1 = n sub 5 = 0 }, and @M { n sub 2 = n sub 3 = n sub 4 = 4 }.
# Profile grouping builds 4 groups of length 2 beginning at @M { T sub 2 },
# then 4 groups of length 3 ending at @M { T sub 4 }, incorporating the
# length 2 groups.
# @PP
# We turn now to some issues of detail.
# @PP
# @B { Singles. }  A @I single is a set of mtasks that satisfies the
# requirements of combinatorial grouping but contains only one mtask.
# We need to consider how singles affect profile grouping.  Singles
# of length @M { C sub "max" } or more are ignored, but there may be
# singles of smaller length.
# @PP
# The @M { n sub i - n sub {i-1} } groups that must start at
# @M { T sub i } include singles.  Singles are already present, just
# as though they were made first.  The combinatorial grouping solver
# has a variant that applies the given requirements, but instead of
# doing any grouping, returns @M { c sub i }, the number of tasks of
# interest that lie in the mtasks of singles.  Then we ask combinatorial
# grouping to make up to @M { n sub i - n sub {i-1} - c sub i } groups,
# not @M { n sub i - n sub {i-1} }, with an extra requirement that
# singles are to be excluded.  If @M { n sub i - n sub {i-1} - c sub i <= 0 }
# we skip the call; the sequences that need to start at @M { T sub i }
# are already present.
# @PP
# @B { Varying task domains. }  Suppose that one senior nurse is wanted
# each night, four ordinary nurses are wanted each week night, and two
# ordinary nurses are wanted each weekend night.  Then two groups still
# need to start on Monday nights, but they should group demands for
# ordinary nurses, not senior nurses.  Nevertheless, tasks with
# different domains are not totally unrelated.  A senior nurse
# could very well act as an ordinary nurse on some shifts.
# @PP
# We still aim to build @M { M = n sub i - n sub {i-1} - c sub i }
# groups as before.  However, we do this by making several calls on
# combinatorial grouping.  For each resource group @M { g } appearing
# as a domain in any mtask running at time @M { T sub i }, find
# @M { n sub gi }, the number of tasks (not including long ones) with
# domain @M { g } running at @M { T sub i }, and @M { n sub { g(i-1) } },
# the number at @M { T sub {i-1} }.  For each @M { g } such that
# @M { n sub gi > n sub { g(i-1) } }, call combinatorial grouping,
# with a requirement expressing a preference for domain @M { g },  
# # insisting that @M { T sub i } be covered by an mtask whose domain
# # is @M { g },
# and asking for @M { min( M, n sub gi - n sub { g(i-1) } ) } groups.
# Then subtract from @M { M } the number of groups actually made.
# Stop when @M { M = 0 } or the list of domains is exhausted.
# @PP
# @B { Varying task costs. }  The tasks participating in profile
# grouping might well differ in their non-assignment cost.  It feels
# wrong to group tasks with very different costs.  Although this
# is not currently prevented, it is likely to be fairly harmless,
# for two reasons.
# @PP
# First, in grouping generally we only consider tasks which
# need assignment---tasks whose cost of non-assignment exceeds
# their cost of assignment.  So we won't be grouping a task
# that needs assignment with a task that doesn't.
# @PP
# Second, the most cost-reducing tasks in each mtask are assigned
# first.  That should encourage task groups to contain tasks of
# similar cost.
# # Some might be compulsory
# # (assigning them reduces the hard cost of the solution), others might
# # be deprecated (assigning them increases cost), others might be
# # neutral.  These costs are visible as the @C { non_asst_cost } and
# # @C { asst_cost } values returned by @C { KheMTaskTask }
# # (Section {@NumberOf resource_structural.mtask_finding.ops}).
# # @PP
# # Mtasks ensure that the
# # most cost-reducing tasks are assigned first, which should help
# # task groups to contain tasks of similar cost.  But if the best
# # remaining unassigned task in one mtask has very different cost
# # to the best in another, they will be grouped.
# # @PP
# # There are other possibilities.  We could easily ignore deprecated
# # tasks altogether during profile grouping, for example.  The
# # author has not yet given serious thought to this subject.
# @PP
# @B { Non-uniqueness of zero-cost groupings. }
# The main problem with profile grouping is that there may be
# several zero-cost groupings in a given situation.  For example,
# a profile might show that a group covering Monday, Tuesday, and
# Wednesday may be made, but give no guidance on which shifts on
# those days to group.
# @PP
# There are various ways to deal with this problem.  At present
# we are limiting profile grouping to constraints @M { C } whose
# time groups all contain a single time.  Thus profile grouping
# will group sequences of day shifts, sequences of night shifts,
# and so on, but it will not group sequences of days, even when
# there is a constraint limiting the number of consecutive busy
# days whose profile shows that sequences must begin on a certain day.
# An exception to this is the case @M { C sub "min" = C sub "max" },
# discussed below.
# @PP
# @B { An overall algorithm. }
# We are now in a position to present an overall algorithm for
# profile grouping.  Find all limit active intervals constraints
# @M { C } which apply to all resources and whose time groups are
# all singletons and all positive.  Notionally merge constraints
# that share the same time groups; for example, we could take
# @M { C sub "min" } from one and @M { C sub "max" } from another.
# For each of these merged constraints @M { C } such that
# @M { C sub "min" >= 2 }, proceed as follows.
# # Furthermore, if @C { non_strict }
# # is @C { false }, then @M { C }'s time groups must all be
# # singletons, while if @C { non_strict } is @C { true }, then
# # @M { C sub "min" = C sub "max" } must hold.
# # @PP
# # A constraint may qualify for both strict and non-strict processing.
# # This is true, for example, of a constraint that imposes equal lower
# # and upper limits on the number of consecutive night shifts.  Such a
# # constraint will be selected in both the strict and non-strict cases,
# # which is fine.
# @PP
# # For each of these constraints, proceed as follows.
# # Set the profile
# # time groups in the tasker to @M { T sub 1 ,..., T sub k }, the time
# # groups of @M { C }, and set the @C { profile_max_len } attribute to
# # @M { C sub "max" - 1 }.  The tasker will then report the values
# # @M { n sub i } needed for @M { C }.
# # @PP
# Traverse the profile repeatedly, looking for cases where
# @M { n sub i > n sub {i-1} } and @M { n sub j < n sub {j+1} }, and
# use combinatorial grouping (aiming to find zero-cost groups, not
# unique zero-cost groups) to build groups which cover between
# @M { C sub "min" } and @M { C sub "max" } time groups starting
# at @M { T sub i } (or ending at @M { T sub j }).
# # This
# # involves loading @M { T sub i ,..., T sub {i + C sub "min" - 1} } as `yes'
# # time groups, and @M { T sub {i-1} } and @M { T sub { i + C sub "max" } }
# # as `no' time groups, as explained above.
# Continue traversing the profile until no points which allow
# grouping can be found.
# @PP
# As groups are made, the @M { n sub i } will often decrease.  At some
# point they might all be zero, or the @M { n sub i - n sub {i-1} - c sub i }
# might all be zero.  Alternatively, they might all be non-zero but all
# equal, and we need to think about what to do then.  Further grouping
# is possible but would involve arbitrary choices, making whether to
# go further a matter of experience and experiment.
# @PP
# One case where going further is worthwhile is when
# @M { C sub "min" = C sub "max" }.  It is
# very constraining to insist, as this does, that every sequence of
# consecutive busy days (say) away from the start and end of the cycle
# must have a particular length.  Indeed, it changes the problem into a
# combinatorial one of packing these rigid sequences into the profile.
# Local repairs cannot do this well, because to increase
# or decrease the length of one sequence, we must decrease or increase
# the length of a neighbouring sequence, and so on all the way back to
# the start or forward to the end of the cycle (unless there are
# shifts nearby which can be assigned or not without cost).
# So we turn to profile grouping to find suitable groups before
# assigning any resources.  Some of these groups may be less than
# ideal, but still the overall effect should be better than no
# grouping at all.
# @PP
# Another case for going further is when
# @M { C sub "min" + 1 = C sub "max" } and the time groups are
# singletons.  This case arises in instance @F { INRC2-4-100-0-1108 },
# where night shifts preferably come in sequences of length 4 or 5.
# The author's other solvers struggle with this requirement, making
# it very tempting to build these sequences before doing any assignment.
# @PP
# If we do decide to keep going, one way to do that is as follows.
# From among all time groups @M { T sub i }
# where @M { n sub i > 0 }, choose one which has been the starting
# point for a minimal number of groups (to spread out the starting
# points as much as possible) and make a group there if combinatorial
# grouping allows it.  Then return to traversing the profile
# repeatedly.  There should now be an @M { n sub i > n sub {i-1} }
# case just before the latest group, and an @M { n sub j < n sub {j+1} }
# case just after it.  Repeat until there is no @M { T sub i } where
# @M { n sub i > 0 } and combinatorial grouping can build a group.
# @PP
# Another way to keep going is to use the dynamic programming
# algorithm from the next section.  Although it is not globally
# optimum, it is an efficient way to find high-quality groups.
# # It reduces every @M { n sub i }
# # by one, so it only applies when every @M { n sub i >= 1 }.  It
# # is an efficient way to find high-quality groups.
# # One reasonable way of dealing with this problem is the following.
# # First, do not insist on unique zero-cost groupings; instead, accept
# # any zero-cost grouping.  This ensures that a reasonable amount of
# # profile grouping will happen.  Second, to reduce the chance of
# # making poor choices of zero-cost groupings, limit profile grouping
# # to two cases.
# # @PP
# # The first case is when each time group @M { T sub i } contains a
# # single time, as at the start of this section, where each
# # @M { T sub i } contained the time of a night shift.  Although we do
# # not insist on unique zero-cost groupings, we are likely to get them
# # in this case.  We call this @I { Type A profile grouping }.
# # @PP
# # The second case is when @M { C sub "min" = C sub "max" }.  It is
# # very constraining to insist, as this does, that every sequence of
# # consecutive busy days (say) away from the start and end of the cycle
# # must have a particular length.  Indeed, it changes the problem into a
# # combinatorial one of packing these rigid sequences into the profile.
# # Local repairs cannot do this well, because to increase
# # or decrease the length of one sequence, we must decrease or increase
# # the length of a neighbouring sequence, and so on all the way back to
# # the start or forward to the end of the cycle (unless there are
# # shifts nearby which can be assigned or not without cost).
# # So we turn to profile grouping to find suitable groups before
# # assigning any resources.  Some of these groups may be less than
# # ideal, but still the overall effect should be better than no
# # grouping at all.  We call this @I { Type B profile grouping }.
# # @PP
# # @PP
# # When @M { C sub "min" = C sub "max" }, no singles are counted in
# # the profile.  This is easy to see:  by definition, a single covers
# # @M { C sub "min" } time groups, so it covers @M { C sub "max" }
# # time groups, but we are omitting existing groups of this length
# # or greater from the profile.
# # # @C { profile_max_len } is @M { C sub "max" - 1 }.
# # @PP
# # These ideas are implemented by function
# # @ID @C {
# # int KheProfileGrouping(KHE_COMB_GROUPER cg, bool non_strict,
# #   KHE_SOLN_ADJUSTER sa);
# # }
# # It carries out some profile grouping, as follows, and returns
# # the number of groups it makes.  If @C { sa != NULL }, any task
# # assignments it makes are saved in @C { sa }, so that they can
# # be undone later.
# # 
# # In the strict grouping case, it is then
# # time to stop, but in the non-strict case we keep
# # grouping, as follows.  From among all time groups @M { T sub i }
# # where @M { n sub i > 0 }, choose one which has been the starting
# # point for a minimal number of groups (to spread out the starting
# # points as much as possible) and make a group there if combinatorial
# # grouping allows it.  Then return to traversing the profile
# # repeatedly.  There should now be an @M { n sub i > n sub {i-1} }
# # case just before the latest group, and an @M { n sub j < n sub {j+1} }
# # case just after it.  Repeat until there is no @M { T sub i } where
# # @M { n sub i > 0 } and combinatorial grouping can build a group.
# @End @SubSection

# @SubSection
#     @Title { A dynamic programming algorithm for profile grouping }
#     @Tag { resource_structural.grouping_by_rc.dynamic }
# @Begin
# @LP
# This section presents a dynamic programming algorithm for profile
# grouping which can be applied to any subsequence @M { [a, b] } of
# the profile such that @M { n sub i > 0 } for all @M { i } in the
# range @M { a <= i <= b }, and @M { n sub {a-1} = n sub {b+1} = 0 }.
# The algorithm reduces each @M { n sub i } in the range by one,
# using groups of minimum total cost.  Applied repeatedly, it can
# produce many very good groups, although there is no suggestion
# that they are globally optimum.
# # Profile grouping is able to begin a group at position @M { i } when
# # @M { n sub i > n sub {i-1} }, and end a group at position @M { i } when
# # @M { n sub i > n sub {i+1} }.  Where these cases occur it is clearly
# # correct to begin or end a group of minimal length there, given that
# # it can be extended later if needed.  But if all the @M { n sub i }
# # are equal, this provides no guidance.  In that case it might be better
# # not to group.  Above, we carry on grouping in that case only when
# # @M { C sub "min" = C sub "max" }, arguing that the tightness of the
# # situation warrants it.
# # @PP
# # This same argument could be made when @M { C sub "min" + 1 = C sub "max" },
# # as for example in the constraint on consecutive night shifts in
# # instance @F { INRC2-4-100-0-1108 }, where night shifts should be
# # taken in sequences of length 4 or 5.
# # @PP
# # However, this section is not concerned with when further grouping
# # is needed:  that question must be answered by experience.  Instead,
# # when it is decided on, this section offers an optimal method of
# # carrying it out, assuming that the @M { n sub i } are all non-zero
# # across the cycle, and that we want to build groups such that exactly
# # one group covers each time group of @M { C }.  This will mainly be
# # useful when the @M { n sub i } are all equal, but we do not require
# # them to be equal.
# @PP
# We have one hard constraint and one soft constraint.  The hard
# constraint is that we require the algorithm to produce a set of
# groups, each of length between @M { C sub "min" } and
# @M { C sub "max" } inclusive,  such that every position in the
# range is covered by exactly one group.  The last group, however,
# may have length less than @M { C sub "min" } when it is the last
# time group of @M { C } and history (i.e. a future) is present,
# since short sequences at the end do not violate @M { C } in that
# case.  The soft constraint is that the total cost of the groups
# (as reported by combinatorial grouping) should be minimized.
# @PP
# One could ask whether there will be any cost:  a sequence of
# night shifts (say) whose length satisfies @M { C } is not
# likely to violate any other constraints.  In practice this
# is largely true.  The main exception is that complete
# weekend constraints may combine with unwanted pattern
# constraints to cause sequences that end on a Saturday
# or begin on a Sunday to have non-zero cost.
# # This is because the complete weekend
# # constraint requires something on Sunday, but the sequence has
# # ended so another night shift is excluded, and the other shifts
# # are often prohibited by unwanted pattern constraints.  On
# # the other hand, a sequence beginning on Sunday could follow
# # a day shift on Saturday.
# @PP
# Our dynamic programming algorithm finds a solution @M { S(i) }
# which is optimal among all solutions which cover the first
# @M { i } time groups of @M { [a, b] }, for each @M { i } such
# that @M { 0 <= i <= b - a + 1 }.
# @PP
# The first of these optimal solutions, @M { S(0) }, is required to
# cover no time groups, so it is the empty set of sequences, with
# cost 0.  Assume inductively that we have found @M { S(k) } for each
# @M { k } such that @M { 0 <= k < i }.  We need to find @M { S(i) }.
# @PP
# To do this, for each @M { j } such that
# @M { C sub "min" <= j <= C sub "max" },
# find the solution which consists of @M { S(i - j) } plus
# a single sequence covering time groups
# @M { T sub {i - j + 1} ... T sub i }.  The cost of this
# solution is the cost of @M { S(i - j) } plus the cost of
# the additional sequence, as reported by combinatorial
# grouping, tasked with finding a sequence of minimum cost
# covering @M { T sub {i - j + 1} ... T sub i } but not
# @M { T sub {i - j} } and not @M { T sub {i+1} }.  Find
# the solution of minimum cost over all @M { j } and declare
# that to be @M { S(i) }.
# @PP
# As explained above, the last group may have length less than
# @M { C sub "min" } when history is present.  In that case, we
# allow the last sequence to have any length @M { j } such that
# @M { 1 <= j <= C sub "max" }.
# @PP
# The main problem with this algorithm is that there may be
# no @M { S(i) } at all.  For example, @M { S(1) } does not
# exist because there are no legal sequences of length 1;
# legal sequences start only with @M { S( C sub "min" ) }.
# Even after that, there may be gaps.  For example, if
# every sequence must have length 4 or 5, there is no
# @M { S(6) } or @M { S(7) }.  There is also the possibility
# that sequences of the right lengths might exist but
# combinatorial grouping finds no way to group their tasks,
# even though we ask it only for sequences of minimum, not
# necessarily zero, cost.  We treat missing solutions of
# this kind as though they had cost +2p @Font @M { infty }.
# We also do this when we need an @M { S(i - j) } but
# @M { i - j < 0 }.
# @PP
# Another problem is that if @M { C sub "max" } is relatively
# large, combinatorial grouping could be too slow.  This has not
# been a problem in practice, but it is probably safest to limit
# dynamic programming to cases where either the time groups each
# contain a single time, or else @M { C sub "max" <= 4 }.
# @PP
# Normally, we remove a sequence from the profile only when it has
# length @M { C sub "max" }, because only then is it unable to
# participate in further grouping.  However, after one round of
# dynamic programming we remove every sequence in the optimal
# solution from the profile, reasoning that collectively they are
# finished and should not participate further.  We can repeat this,
# reducing each @M { n sub i } by one on each round, until some
# @M { n sub i = 0 } or the round fails to find a solution.
# @PP
# Although the dynamic programming algorithm finds an optimal way to reduce
# each @M { n sub i } by one, the @I { general profile grouping problem },
# which is to find an
# optimal way to fill an arbitrary profile with minimum-cost sequences
# of length between @M { C sub "min" } and @M { C sub "max" }, remains
# unsolved.  Even when the @M { n sub i } are equal there is no
# proof that a sequence of rounds, each of which finds an optimal way
# to reduce them all by one, is guaranteed to find an optimal solution
# overall.  (It is true that an optimal solution in this case can be
# divided into a sequence of rounds, each of which reduces all the
# @M { n sub i } by one, but that does not prove that our sequence of
# rounds is optimal.)  When arbitrary task domains are added, it is easy
# to see that the problem includes the NP-hard multi-dimensional matching
# problem.  However, task domains do not seem to be a problem in practice.
# @PP
# The author has considered using dynamic programming for the general
# profile grouping problem, inspired by the dynamic programming
# algorithm for optimal resource assignment
# (Section {@NumberOf resource_solvers.dynamic}).  Such an algorithm
# seems to be possible, but it would be complicated, especially since
# it would need to take into account any task grouping that has already
# occurred.  The optimal resource assignment algorithm treats grouped
# tasks heuristically; that would not suffice here.
# @End @SubSection

@SubSection
  @Title { Implementation notes 1:  mtask groups }
  @Tag { resource_structural.grouping_by_rc.impl1 }
@Begin
@LP
File @C { khe_sr_tgrc.h } contains the interfaces that
the TGRC source files use to communicate with each other.
It declares a type @C { KHE_MTASK_GROUP } representing one
@I { mtask group }.  This is an mtask set with additional
features relevant to grouping:  it keeps track of which mtask
will be the leader mtask, and of the cost to a resource of
being assigned to the group.
@PP
For creating and deleting an mtask group object there are
@ID @C {
KHE_MTASK_GROUP KheMTaskGroupMake(KHE_COMB_GROUPER cg);
void KheMTaskGroupDelete(KHE_MTASK_GROUP mg);
}
Here @C { KHE_COMB_GROUPER } is another type defined in
@C { khe_sr_tgrc.h }.  It is mainly concerned with running
combinatorial grouping, but it also holds a free list of
mtask group objects.
@PP
There are operations for clearing an mtask group object
and overwriting its contents with the contents of another
mtask group object:
@ID @C {
void KheMTaskGroupClear(KHE_MTASK_GROUP mg);
void KheMTaskGroupOverwrite(KHE_MTASK_GROUP dst_mg,
  KHE_MTASK_GROUP src_mg);
}
For visiting its mtasks there are
@ID @C {
int KheMTaskGroupMTaskCount(KHE_MTASK_GROUP mg);
KHE_MTASK KheMTaskGroupMTask(KHE_MTASK_GROUP mg, int i);
}
as usual, along with
@ID @C {
bool KheMTaskGroupIsEmpty(KHE_MTASK_GROUP mg);
}
which is the same as testing whether the count is 0.
For adding and deleting mtasks there are
@ID @C {
bool KheMTaskGroupAddMTask(KHE_MTASK_GROUP mg, KHE_MTASK mt);
void KheMTaskGroupDeleteMTask(KHE_MTASK_GROUP mg, KHE_MTASK mt);
}
@C { KheMTaskGroupAddMTask } adds @C { mt } to @C { mg } and
returns @C { true }, or if the addition cannot be carried out
(because @C { mt } runs on the same day as one of the mtasks that
is already present, or because no leader mtask can be found that
suits both the existing mtasks and @C { mt }), it changes nothing
and returns @C { false }.  @C { KheMTaskGroupDeleteMTask } deletes
@C { mt } from @C { mg }.  Owing to issues around calculating
leader mtasks, @C { mt } must be the most recently added but not
deleted mtask, otherwise @C { KheMTaskGroupDeleteMTask }
will abort.  Function
@ID @C {
bool KheMTaskGroupContainsMTask(KHE_MTASK_GROUP mg, KHE_MTASK mt);
}
returns @C { true } when @C { mg } contains @C { mt }.
@PP
An mtask group @C { mg } has a cost, which is the cost of the
resource monitors of some resource @C { r } when @C { r } is
assigned to one task from each mtask of @C { mg }.  Not all monitors
are included, only cluster busy times and limit busy times monitors
whose monitoring is limited to the days during which the mtasks of
@C { mg } are running, plus one extra day on each side.  (We do not
want wider issues, such as global workload limits, to influence this
cost.)  The mtask group module is responsible for finding a suitable
resource, making the assignments, measuring the cost, and taking the
assignments away again, all of which is done by
@ID @C {
bool KheMTaskGroupHasCost(KHE_MTASK_GROUP mg, KHE_COST *cost);
}
If a cost can be calculated, @C { KheMTaskGroupHasCost } sets
@C { *cost } to its value and returns @C { true }.  If a cost
cannot be calculated, because @C { mg } is empty, or a suitable
resource @C { r } cannot be found, or cannot be assigned to every
mtask of @C { mg } (none of these conditions is likely to occur
in practice), then @C { false } is returned.  There is also
@ID @C {
bool KheMTaskGroupIsBetter(KHE_MTASK_GROUP new_mg,
  KHE_MTASK_GROUP old_mg);
}
which returns @C { true } when @C { old_mg } is empty or else
both @C { new_mg } and @C { old_mg } have a cost, and the cost
of @C { new_mg } is smaller than the cost of @C { old_mg }.
@PP
Calculating the cost is slow, so mtask group objects cache the
most recently calculated cost, and only recalculate it when the
set of mtasks has changed since it was last calculated.
@PP
To actually carry out some grouping, the function is
@ID {0.95 1.0} @Scale @C {
int KheMTaskGroupExecute(KHE_MTASK_GROUP mg, int max_num,
  KHE_SOLN_ADJUSTER sa, char *debug_str);
}
By making calls to functions @C { KheMTaskFinderTaskGrouperClear },
@C { KheMTaskFinderTaskGrouperAddTask }, and
@C { KheMTaskFinderTaskGrouperMakeGroup }
(Section {@NumberOf resource_structural.mtask_finding.solver}),
it makes up to @C { max_num } groups from the mtasks
of @C { mg }.  It returns the number of groups actually made.
If @C { sa != NULL } the task assignments made are
recorded in @C { sa } so that they can be undone later.
# If @C { fix_leaders_sa != NULL }, the @C { NULL } assignments
# in the leader tasks of the groups are fixed and stored in
# @C { fix_leaders_sa } so that they can be undone later.
# The point of this is that fixing their assignments removes
# them from the profile, which is what is wanted when finding
# groups using dynamic programming.
Parameter @C { debug_str }
is used for debugging only, and should contain some indication
of how the group came to be formed:  @C { "combinatorial grouping" },
@C { "interval grouping" }, or whatever.
@PP
Finally,
@ID @C {
void KheMTaskGroupDebug(KHE_MTASK_GROUP mg,
  int verbosity, int indent, FILE *fp);
}
produces a debug print of @C { mg } onto @C { fp } with
the given verbosity and indent.  This includes the cost,
if currently known, and it highlights the leader mtask.
@End @SubSection

@SubSection
  @Title { Implementation notes 2:  the combinatorial grouper }
  @Tag { resource_structural.grouping_by_rc.impl2 }
@Begin
@LP
Combinatorial grouping is a low-level solve algorithm that provides
services to higher-level grouping solvers.  It allows those solvers
to load a variety of requirements, and it will then search for
groups that satisfy those requirements.
@PP
This is done by a @I { combinatorial grouper } object, made like this:
@ID @C {
KHE_COMB_GROUPER KheCombGrouperMake(KHE_MTASK_FINDER mtf,
  KHE_RESOURCE_TYPE rt, HA_ARENA a);
}
It finds groups of @C { mtf }'s mtasks of type @C { rt }, using memory
from arena @C { a }.  There is no @C { Delete } operation; the grouper
is deleted when @C { a } is freed.  It calls @C { KheMTaskGroupExecute }
from Section {@NumberOf resource_structural.grouping_by_rc.impl1} to
actually make its groups, and this updates @C { mtf }'s mtasks, so
that @C { mtf } does not go out of date as grouping proceeds.
Functions
@ID @C {
KHE_MTASK_FINDER KheCombGrouperMTaskFinder(KHE_COMB_GROUPER cg);
KHE_SOLN KheCombGrouperSoln(KHE_COMB_GROUPER cg);
KHE_RESOURCE_TYPE KheCombGrouperResourceType(KHE_COMB_GROUPER cg);
HA_ARENA KheCombGrouperArena(KHE_COMB_GROUPER cg);
}
return various attributes of @C { cg }; the solution comes
from @C { mtf }.
@PP
The resource type passed to @C { KheCombGrouperMake } must be
non-@C { NULL }, and it must be one of the resource types handled
by @C { mtf }.  An mtask finder is able to handle either one
resource type or all resource types, but a comb grouper can
only handle one resource type.
@PP
In addition to its main functions, a @C { KHE_COMB_GROUPER }
object holds a free list of mtask group objects.  Functions
@ID @C {
KHE_MTASK_GROUP KheCombGrouperGetMTaskGroup(KHE_COMB_GROUPER cg);
void KheCombGrouperPutMTaskGroup(KHE_COMB_GROUPER cg,
  KHE_MTASK_GROUP mg);
}
get an object from this list (returning @C { NULL } if the list is
empty) and put an object onto the list.
@PP
A @C { KHE_COMB_GROUPER } object can solve any number of combinatorial
grouping problems for a given @C { mtf }, one after another.  The user
loads the grouper with one problem's requirements, then requests a
solve, then loads another lot of requirements and solves, and so on.
@PP
We now present, informally, the functions which load requirements.
Precise descriptions of what each requirement does are given at the
end of this section.  These requirements make a rather eclectic
bunch.  They are all needed, however, to support the various kinds
of grouping.
@PP
It is usually best to start the process of loading requirements by calling
@ID @C {
void KheCombGrouperClearRequirements(KHE_COMB_GROUPER cg);
}
This clears away any old requirements.
@PP
A key requirement for most solves is that the groups made
should cover a given time group.  Any number of such requirements
can be added and removed by calling
@ID @C {
void KheCombGrouperAddTimeGroupRequirement(KHE_COMB_GROUPER cg,
  KHE_TIME_GROUP tg, KHE_COMB_COVER_TYPE cover);
void KheCombGrouperDeleteTimeGroupRequirement(KHE_COMB_GROUPER cg,
  KHE_TIME_GROUP tg);
}
any number of times.  @C { KheCombGrouperAddTimeGroupRequirement }
specifies that the groups must cover @C { tg } in a manner given
by the @C { cover } parameter, whose type is
@ID @C {
typedef enum {
  KHE_COMB_COVER_YES,
  KHE_COMB_COVER_NO,
  KHE_COMB_COVER_PREV,
  KHE_COMB_COVER_FREE
} KHE_COMB_COVER_TYPE;
}
We'll explain this fully later, but just briefly, @C { KHE_COMB_COVER_YES }
means that we are only interested in sets of mtasks that cover the
time group, @C { KHE_COMB_COVER_NO } means that we are not interested
in sets of mtasks that cover the time group, and so on.
@PP
@C { KheCombGrouperDeleteTimeGroupRequirement } undoes a previous call to
@C { KheCombGrouperAddTimeGroupRequirement } with the same time group.  If
there has been no such call, @C { KheCombGrouperDeleteTimeGroupRequirement }
aborts.
@PP
Any number of requirements that the groups should cover a given
mtask may be added:
@ID @C {
void KheCombGrouperAddMTaskRequirement(KHE_COMB_GROUPER cg,
  KHE_MTASK mt, KHE_COMB_COVER_TYPE cover);
void KheCombGrouperDeleteMTaskRequirement(KHE_COMB_GROUPER cg,
  KHE_MTASK mt);
}
These work in the same way as for time groups.  Care is needed
because @C { mt } may be rendered undefined if groups are made
that leave @C { mt } empty afterwards.  The safest option after
a solve whose requirements include an mtask is to call
@C { KheCombGrouperClearRequirements }.
@PP
Next we have
@ID @C {
void KheCombGrouperAddNoSinglesRequirement(KHE_COMB_GROUPER cg);
void KheCombGrouperDeleteNoSinglesRequirement(KHE_COMB_GROUPER cg);
}
These functions are concerned with whether mtask sets that contain
a single mtask are acceptable---an awkward question, as we'll see.  And
@ID {0.98 1.0} @Scale @C {
void KheCombGrouperAddPreferredDomainRequirement(KHE_COMB_GROUPER cg,
  KHE_RESOURCE_GROUP rg);
void KheCombGrouperDeletePreferredDomainRequirement(KHE_COMB_GROUPER cg);
}
specifies that mtasks whose domains resemble @C { rg } are preferred.
We'll return to all these requirements later.
@PP
There is no need to reload requirements between solves.  Requirements
stay in effect until they are either deleted individually or cleared
out by @C { KheCombGrouperClearRequirements }.
@PP
After all the requirements are added, an actual solve is carried
out by calling
@ID @C {
int KheCombGrouperSolve(KHE_COMB_GROUPER cg, int max_num,
  KHE_COMB_VARIANT_TYPE cg_variant, KHE_SOLN_ADJUSTER sa,
  char *debug_str);
}
@C { KheCombGrouperSolve } searches the space of all sets of mtasks
@M { S } that satisfy the requirements passed in by the user, and
selects one set @M { S prime } of minimal cost @M { c( S prime ) }.
Using @M { S prime }, it makes as many groups as it can, up to
@C { max_num } groups, and returns the number it actually made,
between @C { 0 } and @C { max_num }.  If @M { S prime } contains
a single mtask, no groups are made and the value returned is 0.
@PP
@C { KheCombGrouperSolve } offers several variants of the algorithm
just described, selected by parameter @C { cg_variant }, which we'll
describe later.  If parameter @C { sa } is non-@C { NULL }, any
task assignments made by @C { KheCombGrouperSolve } are stored in
@C { sa }, so that they can be undone later.  Parameter @C { debug_str }
is used only by debug code, to say how the grouping came about.  It
might be @C { "combinatorial grouping" } or @C { "interval grouping" },
for example.
@PP
One variant of @C { KheCombGrouperSolve } is different and
has been given its own interface:
@ID @C {
int KheCombGrouperSolveSingles(KHE_COMB_GROUPER cg);
}
It makes no groups.  Instead, it counts the number of tasks
needing assignment that lie in mtasks which satisfy the
requirements by themselves (not grouped with any other
mtasks).  These are the tasks we called singles above.
@PP
Our tour of the interface of @C { KHE_COMB_GROUPER } ends with function
@ID @C {
void KheCombGrouperDebug(KHE_COMB_GROUPER cg, int verbosity,
  int indent, FILE *fp);
}
This produces the usual debug print of @C { cg } onto @C { fp }
with the given verbosity and indent.
@PP
The rest of this section is devoted to a precise description of
@C { KheCombGrouperSolve }.  There are three things to do here.
First, we need to specify how the search space of mtask sets is
determined.  Second, for each mtask set @M { S } in the search
space we need to define a cost @M { c(S) }.  And third, we need
to explain the algorithm variants selected by @C { cg_variant }.
@PP
For the search space we need some definitions.  A task @I covers a
time if it, or a task assigned to it directly or indirectly, runs
at that time (and possibly at other times).  A task covers a time
group if it covers one or more of the time group's times.  An mtask
covers a time or time group if its tasks do (they run at the same
times).  An mtask covers an mtask if it is that mtask.  An mtask
covers a time group or mtask requirement if it covers that
requirement's time group or mtask.
# A set of mtasks covers a time, time group, or
# mtask if any of its mtasks covers that time, time group, or mtask.
@PP
A set of mtasks @M { S } lies in the search space if it satisfies
all of the following conditions.  The letters in parentheses at
the end of each condition will be explained afterwards.
# The solver has three opportunities to make tests which delimit
# the search space:  when it is considering whether to include
# an mtask @C { mt } in the search generally; when it
# is considering whether to add an mtask @C { mt } to its current
# set of mtasks @M { S }; and when it has a complete set @M { S }
# and is considering whether it should be considered part of the
# search space.  The earlier something can be ruled out, the
# faster the solve runs.  Anyway, we'll take each of these
# opportunities in order.
# @PP
# First then, before the solving proper begins, @C { KheCombGrouperSolve }
# finds the full set of mtasks which could possibly occur in mtask sets 
# of interest.  These are all mtasks @C { mt } that satisfy all of
# these conditions:
@NumberedList

@LI {
Each mtask in @M { S } covers at least one time group or mtask
requirement whose @C { cover } is not @C { KHE_COMB_COVER_NO }.
This condition allows for a generate-and-test approach to building the
search space:  find the set @M { X } of all mtasks that satisfy this
condition, then use the usual recursive algorithm to generate all
subsets @M { S } of @M { X }, then test each @M { S } against each
of the following conditions.
(a)
}

@LI {
For each @C { mt } in @M { S },
@C { mt } does not cover any time group or mtask requirement
whose @C { cover } is @C { KHE_COMB_COVER_NO }.
(a)
}

# @LI {
# For each @C { mt } in @M { S },
# @C { KheMTaskAssignIsFixed(mt) } is @C { false }, that is, @C { mt }
# is not a set of tasks whose assignments are fixed.
# (a)
# }

@LI {
For each @C { mt } in @M { S }, @C { mt } contains at least one
task which is not fixed, not assigned, and for which non-assignment
has a cost.  That is, @C { KheMTaskAssignIsFixed(mt) } must be
@C { false } and @C { KheMTaskNeedsAssignment(mt) } must be
@C { true }.  Only tasks with these properties participate in
grouping, as discussed above.
# @C { KheMTaskUnassignedTaskCount(mt) > 0 }, that is, @C { mt }
# contains at least one unassigned task.  Any assigned tasks in
# @C { mt } are ignored throughout the solve, in accordance with
# the principle that combinatorial solving ignores assigned tasks.
(a)
}

# @LI @OneRow {
# If @C { KheCombGrouperAddMTaskFnRequirement } was called, then
# @C { mtask_fn(mt, impl) } is @C { true }.  Here @C { impl }
# is set to @C { KheCombGrouperAddMTaskFnRequirement }'s @C { impl }
# parameter.  There may be at most one call to
# @C { KheCombGrouperAddMTaskFnRequirement } per solve.  If the
# user has several conditions to test, they must be packaged into
# one @C { mtask_fn }.
# (a)
# }

# @EndList
# Second we need to consider testing whether a given mtask
# @C { mt } can be added to a growing set of mtasks @M { S },
# that is, whether @M { S } plus @C { mt } could be an element
# of the search space, or a subset of an element of the search
# space.  The conditions here are:
# @NumberedList

@LI @OneRow {
For each pair of distinct mtasks @C { mt1 } and @C { mt2 } in @M { S },
@C { KheMTaskInterval(mt1) } and @C { KheMTaskInterval(mt2) } are
disjoint.  We intend to assign some resource to one task from each
mtask of @M { S }, so no two of those tasks can run on the same day.
(b)
}

@LI @OneRow {
If @M { S } is non-empty then it contains a @I { leader mtask },
that is, an mtask containing tasks that can serve as leader tasks
for the tasks in the other mtasks of @M { S }.  This rules out
sets @M { S } whose mtasks have incompatible domains.
(b)
}

@LI @OneRow {
If @C { cg_variant == KHE_COMB_VARIANT_SINGLES }, then @M { S }
contains at most one mtask.  We say more about this below.
(b)
}

@LI @OneRow {
If @C { KheCombGrouperAddNoSinglesRequirement } was called,
then @M { S } contains at least two mtasks.  Otherwise @M { S }
contains at least one mtask.
(c)
}

# @EndList
# Then the solve proper generates, potentially, all subsets of
# this full set of mtasks, checking the following conditions
# along the way.  For each subset @M { S } the following
# conditions are checked before @M { S } is admitted to
# the search space:
# @NumberedList

# @LI @OneRow {
# If {0.95 1.0} @Scale @C { KheCombSolverAddMTaskSetFnRequirement }
# was called, then {0.95 1.0} @Scale @C { mtask_set_fn(S, impl) } is
# @C { true }.  Here @C { impl } is set to
# @C { KheCombGrouperAddMTaskSetFnRequirement }'s
# @C { impl } parameter.  There may be at most one call to
# @C { KheCombGrouperAddMTaskFnRequirement } per solve.  If the
# user has several conditions to test, they must be packaged into
# one @C { mtask_set_fn }.
# # @LP
# # It is more efficient to exclude unwanted tasks using @C { mtask_fn }
# # than to wait until an entire set of mtasks is made and exclude
# # the set by calling @C { mtask_set_fn }.  But there are cases where
# # mtasks are acceptable individually but not together, and
# # @C { mtask_set_fn } is useful then.
# (c)
# }

@LI @OneRow {
Each time group or mtask requirement @M { C } must be satisfied.  What
this means depends on the value of @M { C }'s @C { cover } parameter,
as follows:
@TaggedList

@DTI { @C { KHE_COMB_COVER_YES } }
{
At least one of the mtasks
of @M { S } covers @M { C }'s time group or mtask.
}

@DTI { @C { KHE_COMB_COVER_NO } }
{
None of the mtasks of @M { S } cover @M { C }'s time group or mtask.
}

@DTI { @C { KHE_COMB_COVER_PREV } }
{
This is interpreted like @C { KHE_COMB_COVER_YES } if the preceding time
group or mtask requirement is covered, and like @C { KHE_COMB_COVER_NO }
if the preceding time group or mtask requirement is not covered.
}

@DTI { @C { KHE_COMB_COVER_FREE } }
{
@M { C } is free to be covered by @M { S }'s mtasks, or not.
}

@EndList
If the first time group or mtask requirement has cover
@C { KHE_COMB_COVER_PREV }, it is treated like @C { KHE_COMB_COVER_FREE }.
(c)
}

@EndList
Time groups and mtasks not mentioned in any requirement may be
covered, or not.  The difference between these and a time group
or mtask with cover @C { KHE_COMB_COVER_FREE } is that mtasks
that cover a free time group or mtask may be included in the
search space by the first condition above, whereas covering an
unmentioned time group or mtask does not by itself admit an
mtask to the search space.
@PP
We have so far given the impression that @C { KheCombGrouperSolve }
generates all subsets @M { S } of the set @M { X } defined in
condition (1) above, and then tests each @M { S } against these
conditions.  In fact, it does better.  The letter at the end of
each condition says when that condition is evaluated:
@ParenAlphaList

@LI {
This condition is evaluated just once for each mtask @C { mt },
at the start of the solve.  If it does not hold, then @C { mt } is
omitted from the set @M { X } of mtasks that we find all subsets of.
}

@LI {
When some set @M { S } does not satisfy this condition, every
superset of @M { S } also does not satisfy it.  So it is evaluated
each time we add an mtask to @M { S } when generating all subsets.
If it fails, that path of the recursive generation of all subsets is
truncated immediately.
}

@LI {
This condition is (and can only be) evaluated when a complete subset has
been generated.
}

@EndList
In addition, for each mtask @C { mt } a list is kept of all time group
and mtask requirements @M { C } with cover @C { KHE_COMB_COVER_YES } for
which @C { mt } is the last mtask that covers @M { C }.  Before trying
the branch of the recursion that omits @C { mt }, the list is traversed
and if there are any requirements in it that are not yet covered, that
branch is not taken.
# @PP
# Mtasks @C { mt } for which @C { KheMTaskAssignIsFixed(mt) } is
# @C { true } are of no use in grouping, since their assignments
# cannot be changed.  It is true that they could be leader mtasks,
# since leader tasks' assignments are not changed.  But that allows
# at most one fixed task per group, and there are the tasks' domains
# to consider too.  Altogether fixed tasks don't go well with grouping.
# @PP
# Ignoring assigned tasks is harder to justify.  A task assigned
# resource @C { r } could be grouped with some unassigned tasks,
# leaving all of them assigned @C { r }.  The author might revisit
# this rule in the future, if practice demands it.  A key issue
# is the interaction between grouping and assign by history (the
# usual source of assignments during this early stage of the solve).
@PP
There is no prohibition on passing in a requirement with cover
@C { KHE_COMB_COVER_YES } for an mtask which cannot be part of
any @M { S } because it fails to satisfy one of the (a) conditions.
For example, we could
require the solve to cover an mtask whose tasks are all assigned.
This condition is impossible to satisfy, so the result will be
that @C { KheCombGrouperSolve } finds no groups and returns 0.
@PP
We said above that the first step is to build the set @M { X } of all
mtasks that satisfy the first condition.  Before doing anything further,
this set is sorted so that mtasks whose first busy day is earlier
come before mtasks whose first busy day is later.  If there is a
preferred domain (if @C { KheCombGrouperAddPreferredDomainRequirement }
was called), then as a second priority, mtasks whose domain is a
superset of the preferred domain come before mtasks whose domain
is not a superset of the preferred domain, and as a third priority,
mtasks whose domain is smaller come before mtasks whose domain
is larger.  This ensures that mtasks with preferred domains are
tried first, which means that sets of mtasks with preferred domains
are tested first, making them more likely to be chosen, but without
actually ruling out any set of mtasks.
@PP
The second thing we need to do is to explain how the cost @M { c(S) }
of each set of mtasks @M { S } is defined.  By the conditions above,
@M { S } is non-empty and contains a leader mtask.
@PP
Let @M { I } be the smallest interval of days such that all the mtasks
in @M { X }, as defined by conditions (1) and (a) above, run entirely
within those days, plus (for safety) one extra day on each side.  This
is the grouper's idea of the part of the cycle affected by the current
solve.  Take the leader mtask of @M { S } and search its domain (as 
returned by @C { KheMTaskDomain }) for a resource @M { r } which is
free and available throughout @M { I }.  Most resources are free during
grouping, and most resources are available (not subject to avoid
unavailable times constraints) most of the time, so @M { r } should
be easy to find; but if there is no such @M { r }, ignore @M { S }.
@PP
Assign @M { r } to each mtask of @M { S }.  The cost @M { c(S) } of
@M { S } is determined while the assignments are in place.  It is the
total cost of all cluster busy times and limit busy times monitors
which monitor @M { r } and have times lying entirely within the times
of the days @M { I }.  We limit ourselves to monitors within @M { I }
because we don't want @M { r }'s global workload, for example, to
influence the outcome.  We add one day on each side so as not to miss
monitors that prohibit certain local patterns, such as incomplete
weekends.  This is admittedly ad hoc, but it seems to work.  After
the cost is worked out, the assignments of @M { r } added to the
mtasks of @M { S } are removed.
# covered by the time groups added by calls to
# @C { KheCombGrouperAddTimeGroupRequirement }.
# This second condition is included because we don't want @M { r }'s
# global workload, for example, to influence the outcome.
# @PP
@PP
The third and last thing we need to do is to explain the
@C { cg_variant } parameter.  It has type
@ID @C {
typedef enum {
  KHE_COMB_VARIANT_MIN,
  KHE_COMB_VARIANT_ZERO,
  KHE_COMB_VARIANT_SOLE_ZERO,
  KHE_COMB_VARIANT_SINGLES
} KHE_COMB_VARIANT_TYPE;
}
and allows the user to select one of four variants of the basic
algorithm, as follows.
@PP
If @C { cg_variant } is @C { KHE_COMB_VARIANT_MIN }, then
a subset @M { S prime } is chosen such that @M { c( S prime ) }
is minimal among all @M { c(S) }, as described above.  This
will be possible as long as the search space contains at
least one @M { S } satisfying the conditions.  If it
doesn't, no groups are made.
@PP
If @C { cg_variant } is @C { KHE_COMB_VARIANT_ZERO } or
@C { KHE_COMB_VARIANT_SOLE_ZERO }, then @M { c( S prime ) } must
also be 0, and in the second case there must be no other @M { S }
satisfying the conditions such that @M { c(S) } is 0.  If these
conditions are not met, no groups are made.
@PP
If @C { cg_variant } is @C { KHE_COMB_VARIANT_SINGLES },
the behaviour is different.  No groups are made.  Instead,
@C { KheCombGrouperSolve } returns the number of individual,
ungrouped tasks which satisfy the given requirements.  (If
the requirements include `no singles', this will be 0.)
This variant is accessed by calling @C { KheCombGrouperSolveSingles },
not @C { KheCombGrouperSolve }.
@PP
Let us call an mtask that satisfies the requirements without
any grouping a @I { single }.  Singles raise some awkward questions
for combinatorial grouping.  What to do about them seems to vary
depending on why combinatorial grouping is being called, so
instead of dealing with them in a fixed way, the grouper
offers three features that help with them.
@PP
First, if the set of mtasks @M { S prime } with minimum or zero
cost contains only one mtask, @C { KheCombGrouperSolve } accepts
it as best but makes no groups from it, returning 0 for
the number of groups made.  It is natural not to make any task
assignments, because each assignment runs from a task in one
mtask of @M { S prime } to a task in another mtask of
@M { S prime }, which is impossible when @M { S prime }
contains only one mtask.  But it is arguable that each
unassigned task from that one mtask is a satisfactory group
which should be reported.  However, the value returned here
is 0, as we said.
@PP
Second, by calling @C { KheCombGrouperAddNoSinglesRequirement },
the user may declare that a set @M { S } containing just one
mtask should be excluded from the search space.  But this
is not a magical solution to the problem of singles.  For
example, when we need a unique zero-cost set of mtasks, we
may want to include singles in the search space, to show that
grouping is better than doing nothing.  We need to think
about the significance of singles in the current context.
# And there may still be an
# @M { S } containing one single and another mtask which covers a time
# group or mtask with cover type @C { KHE_COMB_COVER_FREE }.
@PP
Third, after setting up a problem, one can call
@C { KheCombGrouperSolveSingles }.  This searches the requested space, but,
as we have seen, it does no grouping, instead returning the total number
of tasks lying in singles.  If our aim is to produce a certain number of
groups, we can treat these singles as pre-existing groups, subtract
their number from our target, and run again with `no singles' on.
@End @SubSection

@SubSection
  @Title { Implementation notes 3:  interval grouping by dynamic programming }
  @Tag { resource_structural.grouping_by_rc.impl3 }
@Begin
@LP
@C { KheGroupByResourceConstraints } uses a dynamic programming algorithm
to carry out optimal interval grouping based on limit active intervals
constraints, in cases where the number of choices is fairly limited.
A good example is Constraint 17 from instance INRC2-4-100-0-1108, which
limits the number of consecutive night shifts to between 4 and 5.
# Other solvers are likely to struggle to satisfy such limited choices,
# but they actually help dynamic programming, since they ensure that
# it will not have to handle an excessively large number of states.
@PP
The aim is to group tasks into sequences of suitable @I length
(total duration).  In the example, these lengths are 4 and 5,
although shorter lengths are also accepted (with a penalty), to
ensure that every instance of the problem has a solution.  The whole
process is driven by a single limit active intervals constraint
@M { C }.  (If there are several constraints with the same time
groups, they are conceptually merged into a single constraint.  We
continue to refer to the merged entity as @M { C }.)
@PP
The first step is to decide which tasks are @I { admissible },
meaning wanted for inclusion in the groups.  A task @C { t }
is admissible if it satisfies these three conditions:
@NumberedList

@LI {
Task @C { t } is returned by @C { KheMTaskTask } from
Section {@NumberOf resource_structural.mtask_finding.ops}.
}

@LI {
According to @C { KheTaskNonAsstAndAsstCost }, not assigning
@C { t } costs more than assigning @C { t }.
}

@LI {
The busy times of @C { t }, taken chronologically, appear
in consecutive time groups of @M { C }.  In the example, these are
tasks whose times consist of one or more consecutive night shifts.
}

@EndList
Let @M { T sub 1 ,..., T sub n } be the time groups of @M { C }.
For each @M { T sub i }, two sets of tasks are important to us:
@BulletList

@LI {
@M { X sub i }, the set of admissible tasks @M { s } such that
the first time @M { t } that @M { s } is running satisfies
@M { t in T sub i }.
}

@LI {
@M { Y sub i }, the set of admissible tasks @M { s } such that
some time @M { t } that @M { s } is running satisfies
@M { t in T sub i }.
}

@EndList
We have @M { X sub i subset Y sub i }, and also the curious result
@M { X sub 1 cup cdots cup X sub i = Y sub 1 cup cdots cup Y sub i }.
The @M { X sub i } are pairwise disjoint; the @M { Y sub i } may not be,
since each @M { s } appears in one @M { Y sub i } for each time that
it is running.
@PP
A @I group is a set @M { g } of one or more admissible tasks for
which there exists a set of consecutive time groups
@M { T sub a ,..., T sub b } such that for each @M { T sub i } in
the set, exactly one task @M { s in g } is running during @M { T sub i }.  A
@I solution for a set of admissible tasks @M { S } is a set of groups,
such that each @M { s in S } is assigned (appears in) exactly one group.
For example, one way to make a solution is to place each admissible task
into its own group.
@PP
Let @M { G sub i } be a solution for @M { X sub 1 cup cdots cup X sub i }.
For example, @M { G sub 7 } might look like this:
@CD @Diag paint { lightgrey } margin { 0c } { @VContract {
1.5c @Wide @M { T sub 1 } |
1.5c @Wide @M { T sub 2 } |
1.5c @Wide @M { T sub 3 } |
1.5c @Wide @M { T sub 4 } |
1.5c @Wide @M { T sub 5 } |
1.5c @Wide @M { T sub 6 } |
1.5c @Wide @M { T sub 7 } |
1.5c @Wide @M { T sub 8 } |
1.5c @Wide @M { T sub 9 } |
//0.2f
@Box { 6c @Wide 0.5c @High } |
@Box { 4.5c @Wide 0.5c @High } |
//
@Box paint { white } outlinestyle { noline } { 3c @Wide 0.5c @High } |
@Box { 7.5c @Wide 0.5c @High }
//
@Box { 7.5c @Wide 0.5c @High } |
@Box paint { white } outlinestyle { noline } { 1.5c @Wide 0.5c @High } |
@Box { 3c @Wide 0.5c @High }
} }
where each grey rectangle represents one group.  Every task which
begins at or before @M { T sub 7 } is present in a group of @M { G sub 7 }.
# Groups that
# end before @M { T sub 7 } are finished, and will usually have
# a suitable length (4 or 5 in the example); other groups may
# still be forming and are likely to have smaller length.
@PP
Within a given @M { G sub i }, a @I finished group @M { g } is a
group that cannot be extended by adding tasks.  If @M { g } is
not finished it is @I { unfinished }.  There are three ways in
which @M { g } can come to be finished:
@NumberedList

@LI {
If @M { g } does not include a task running at @M { T sub i },
it is finished because it is now too late to add such a task,
and adding a task from a later time group would create a gap
in @M { g } at @M { T sub i }.
}

@LI { 
If @M { g }'s total duration is equal to or larger than @M { C }'s
maximum limit, then @M { g } is finished because adding any task
would give @M { g } a total duration which is too large and is
not permitted.
}

@LI {
If @M { G sub i } is @M { G sub n }, the last time group, then
@M { g } is finished because there are no tasks to add to it.
}

@EndList
A finished group @M { g } has a cost @M { c(g) }, which is its
cost as returned by combinatorial grouping plus any cost arising
from falling short of the minimum limit from @M { C }, or
exceeding the maximum limit from @M { C }.  The cost of solution
@M { G sub i }, written @M { c( G sub i ) }, is the sum of the
costs of its finished groups.  Our aim is to find a solution of
minimal cost for the whole set of admissible tasks.
@PP
To move from a solution @M { G sub {i-1} } for
@M { X sub 1 cup cdots cup X sub {i-1} } to a solution @M { G sub i }
for @M { X sub 1 cup cdots cup X sub i }, we need to assign each
task of @M { X sub i } to a group:  either to an existing unfinished
group of @M { G sub {i-1} } containing a task running at @M { T sub {i-1} }
but not at @M { T sub i }, or to a new group.  After this,
any unfinished groups from @M { G sub {i-1} } that are not
running at @M { T sub i } are declared finished.
To find all solutions, we do this in all ways and for all
@M { i }, starting from @M { G sub 0 }, the empty set of groups
which is the sole solution for no time groups.
@PP
This process would be hopelessly exponential, but for the fact that
in many cases, one solution for @M { X sub 1 cup cdots cup X sub i }
can be shown to @I dominate another, meaning that for each complete
solution derived from the second solution, there is a complete solution
derived from the first solution whose cost is equal or less.  We can
drop the dominated solutions.
@PP
The @I signature of a solution @M { G sub i } is that part of
@M { G sub i } relevant to dominance testing.  In this case it
is its cost, @M { c( G sub i ) }, plus a set of 4-tuples
@M { langle l(g), e(g), d(g), a(g) rangle }, one for each unfinished
group @M { g }:
@NumberedList

@LI @OneRow {
The @I { length } (total duration) of @M { g }, denoted @M { l(g) }.
If @M { g } is assigned a resource @M { r } (see below) and covers
the first time group, then the length is increased by the history
value of @M { r }.
}

@LI @OneRow {
The @I { extension } of @M { g }, denoted @M { e(g) }, which is
that part of the length which lies strictly to the right of
@M { T sub i }.  It satisfies @M { 0 <= e(g) < l(g) }.
}

@LI @OneRow {
The @I { domain } of @M { g }, denoted @M { d(g) }, which is
the set of resources that could be assigned to @M { g }.  It
is the intersection of the domains of the tasks of @M { g }
(although see below).
}

@LI @OneRow {
The @I { assignment } of @M { g }, denoted @M { a(g) }, a
resource.  If at least one task is assigned a resource, that
resource is @M { a(g) }.  If no tasks are assigned a resource,
then @M { a(g) } is @C { NULL }.
}

@EndList
The order of the 4-tuples does not matter.  Strictly speaking the
set is a multiset:  if there are two unfinished groups with the
same 4-tuple, that 4-tuple appears twice.
@PP
@M { G sub i } dominates @M { G prime tsub i } when
@M { c( G sub i ) <= c( G prime tsub i ) } and the 4-tuples in the
two sets can be permuted so that for corresponding groups @M { g }
and @M { g prime } we have @M { l(g) = l( g prime ) },
@M { e(g) = e( g prime ) }, @M { d(g) supseteq d( g prime ) },
and @M { a(g) = a( g prime ) }.  The permuting can be effected by
sorting the 4-tuples by increasing length, then extension size, then
domain size, and then comparing corresponding 4-tuples.  This is not
be perfect (it may miss some cases of dominance), but it is good enough.
@PP
At the end, in @M { G sub n }, every group is finished, as we
explained above, so there are no 4-tuples and dominance depends
only on cost.  There will therefore be just one undominated
solution, and that is the solution of minimum cost that we
are seeking.
@PP
@BI { Grouping tasks with similar domains. }
At first sight, the algorithm does not seem to include anything
which favours grouping tasks with similar domains.  However, the
dominance test does favour such groups, as we now show.
@PP
Suppose that there are two groups, @M { g sub 1 } and
@M { g sub 2 }, in some solution, and that their domains satisfy
@M { d( g sub 1 ) subseteq d( g sub 2 ) }.  Suppose that there
are two tasks @M { t sub 1 } and @M { t sub 2 } running on the
next day, and that their domains happen to be
@M { d( t sub 1 ) = d( g sub 1 ) } and 
@M { d( t sub 2 ) = d( g sub 2 ) }.  If we group @M { t sub 1 }
with @M { g sub 1 } and @M { t sub 2 } with @M { g sub 2 }, the
domains of the new groups will be @M { d( g sub 1 ) } and
@M { d( g sub 2 ) }.  If we group @M { t sub 1 } with @M { g sub 2 }
and @M { t sub 2 } with @M { g sub 1 }, their domains will both
be @M { d( g sub 1 ) cap d( g sub 2 ) = d( g sub 1 ) }.  But
@M { d( g sub 1 ) subseteq d( g sub 2 ) }, so this second solution
is dominated by the first.
@PP
@BI { Assigned tasks and history. }
Some admissible tasks may be assigned a resource initially.  Such
tasks are handled correctly, without changing their assignments.
This is needed in practice to preserve the results of assign by
history.
@PP
A task @M { t } with an assigned resource @M { r } may be grouped
with a task assigned @M { r }, or with a task assigned no resource.
(The resulting group will then be assigned @M { r }.)  But @M { t }
may not be grouped with a task assigned some other resource, nor
with an unassigned task running on the same day as some other task
assigned @M { r }.
@PP
To avoid the last possibility, interval grouping has an initial step
which groups tasks running on adjacent days that are assigned
the same resources.  These groups are different in principle
from the other groups made by interval grouping, but they
are also recorded in @C { sa } so that they too can be removed
later if desired.  As usual, any grouping already present is
handled correctly.
@PP
Any group with an assigned resource which covers the first time
group of the limit active intervals constraint that instigated
this whole process has its length increased by the history value
of the resource.  For example, suppose that we are grouping
night shifts with minimum limit 4 and maximum limit 5.  Suppose
that resource @M { r } with history value 2 has been assigned
two night shifts at the start of the cycle by assign by history.
These two shifts will be grouped in the initial step just
described, but furthermore, this group will be assigned length
4, to account for the 2 from history.  This means that the dynamic
programming algorithm will be free either to leave it as is or
to add one task, making length 5, but not more, which is just
what is needed.
@PP
@BI { Two restrictions. }
The author has imposed two rules which restrict the search space,
so they could in principle cause the algorithm to miss the optimal
solution, although that is unlikely.
@PP
First, the algorithm only constructs groups whose domain
is not only the intersection of the domains of its tasks, but
also equal to one or more of those domains.  This avoids
spending time constructing new resource groups to serve as
domains.  It has the incidental advantage of ensuring
that all domains are non-empty, when the domains of the
individual tasks are non-empty.  This restriction could
be removed, but then it would be necessary to build in
a cache of resource groups which are non-trivial intersections,
to avoid creating potentially thousands of these resource
groups as the solve proceeds.
@PP
Second, we apply the following rule:
# @ID @I {
# If in some solution @M { S } there is a group @M { g } such that
# @M { g }'s length is less than the minimum limit and @M { g }
# could have included (at the end, not at the start) a task lying
# in some mtask @C { mt }, then none of the tasks of @C { mt }
# start a new group in @M { S }.
# }
@ID @I {
Suppose that solution @M { S } contains two groups, @M { g sub 1 }
and @M { g sub 2 }, such that the length of @M { g sub 1 } is less
than the minimum limit, and @M { g sub 2 } starts immediately
after @M { g sub 1 } ends, and moving the first task @M { t } of
@M { g sub 2 ` } from the start of @M { g sub 2 } to the end of
@M { g sub 1 } would be legal.  (By this we mean that @M { t }'s
domain is compatible with @M { g sub 1 }'s domain, and the move
would not cause the length of @M { g sub 1 } to exceed the
maximum limit.)  Then @M { S } is excluded from the search space.
}
Expressed less formally, we give preference to growing undersized
groups over starting new ones.  For example, suppose groups of
length 4 or 5 are wanted, and there is one task (of length 1)
available to be grouped at each of six consecutive time groups.
Then one possible solution, placing the first task in a group by
itself and the other five in a group together, is ruled out, as is
grouping the first two and the last four, and the first three and
the last three.  In each of these cases the first task of the
second group could have been included in the undersized first
group.  The point of this restriction is that it greatly reduces
the size of the search space, but is unlikely to cause the
algorithm to miss the optimal solution, given that undersized
groups have a cost.  (Admittedly, there is a problem here when
the cost function of the limit active intervals constraint is
quadratic.)
# @PP
# There are cases where using a task to extend an undersized group
# turns out to be impossible owing to incompatible domains.  As a
# patch for this problem, whenever a task is not able to be added
# to any existing group, it is always allowable to use it to start
# a new group.
@PP
@BI { Including optional tasks. }
The dynamic programming algorithm never produces a group whose
length exceeds the maximum limit.  But it does produce groups
whose length falls short of the minimum limit, when it cannot
avoid it.
@PP
In good solutions one frequently sees these short sequences
extended by the addition of @I { optional tasks }, that is,
tasks whose non-assignment cost does not exceed their assignment
cost.  Our algorithm could do this simply by dropping the
second condition above, making optional tasks admissible.
However, their inclusion is not free.  For one thing, when
workload is tight, every optional task assigned a resource
adds to the overall workload overload.  For another, there
is a danger that the algorithm could be swamped by optional
tasks and run too slowly.
@PP
We handle this as follows.  First, we run the algorithm without
admitting optional tasks.  At the end, for each undersized group
in the solution we identify up to two optional tasks that could
be used to lengthen that group, one running just before it and
one running just after it.  We then make all these optional
tasks admissible and run the algorithm a second time.
@PP
There has to be a cost associated with choosing an optional task,
otherwise the best solution will contain more of them than it
needs.  At present the algorithm uses the cost returned
by @C { KheBalanceSolverMarginalCost }
(Section {@NumberOf resource_structural.supply_and_demand.balance})
multiplied by the duration of the task.
@PP
A group containing only optional tasks is assigned cost 0, since
although it must be considered to be a group (because of our rule
that every admissible task ends up in exactly one group), there
is no need for any resource to be assigned to it, and so its
existence does not foreshadow any cost in actual solutions.
@PP
On one run, including optional tasks reduced the total duration
of undersized groups from 15 to 4, while introducing 10
optional tasks grouped with non-optional tasks.
@PP
@BI { Time complexity. }
The key to finding the time complexity of this algorithm is to
estimate the number of undominated @M { G sub i } for each @M { i }.
We ignore extensions, domains, and assignments, because few tasks
have assignments or non-zero
extensions, and there are likely to be only a few distinct domains.
Suppose @M { G sub i } has @M { n } unfinished groups, each of which
has length @M { l(g) } in the range @M { 1 <= l(g) <= K }.  Then
each undominated @M { G sub i } has a distinct multiset of lengths,
since given two solutions with equal multisets of lengths, one
always dominates the other.  So the number of undominated solutions
is at most @M { p(n, K) }, the number of distinct multisets of
cardinality @M { n } whose elements are integers in the range
@M { [1, K] }.
For example, if @M { n = 4 } and @M { K = 3 } there are 15 of
these multisets:
@CD @OneRow @F lines @Break {
3 3 3 3  &2c  3 3 1 1  &2c  2 2 2 2
3 3 3 2  &2c  3 2 2 2  &2c  2 2 2 1
3 3 3 1  &2c  3 2 2 1  &2c  2 2 1 1
3 3 2 2  &2c  3 2 1 1  &2c  2 1 1 1
3 3 2 1  &2c  3 1 1 1  &2c  1 1 1 1
}
In general, we can argue as follows.  Divide these multisets into
two parts.  In the first part place those multisets that contain
at least one @M { K }.  This fixes one of the values in the multiset
but places no new constraints on the others, so the number of such
multisets is @M { p(n-1, K) }.  In the second part place those
multisets that contain no @M { K }.  There are
@M { p(n, K-1) } of those.  So
@ID @M { p(n, K) ``=`` p(n-1, K) ``+`` p(n, K-1) }
We can have @M { n = 0 }, but the smallest valid @M { K } is @M { 1 },
so the bases of this recurrence are @M { p(n, 1) = 1 }, since there is
just one such multiset (a sequence of @M { n } ones), and
@M { p(0, K) = 1 }, since again there is just one (the empty
multiset).
#This gives the table
#@ID @Tbl
#    indent { ctr }
#    aformat { @Cell rr { yes } A | @Cell B | @Cell C | @Cell D | @Cell E
#    | @Cell F }
#{
#@Rowa
#    rb { yes }
#    A { @M { p(n, k) } }
#    B { @M { n = 0 } }
#    C { @M { n = 1 } }
#    D { @M { n = 2 } }
#    E { @M { n = 3 } }
#    F { @M { n = 4 } }
#@Rowa
#    A { @M { K = 1 } }
#    B { @M { 1 } }
#    C { @M { 1 } }
#    D { @M { 1 } }
#    E { @M { 1 } }
#    F { @M { 1 } }
#@Rowa
#    A { @M { K = 2 } }
#    B { @M { 1 } }
#    C { @M { 2 } }
#    D { @M { 3 } }
#    E { @M { 4 } }
#    F { @M { 5 } }
#@Rowa
#    A { @M { K = 3 } }
#    B { @M { 1 } }
#    C { @M { 3 } }
#    D { @M { 6 } }
#    E { @M { 10 } }
#    F { @M { 15 } }
#    rb { yes }
#}
Although the base is not the usual one, the recurrence is
familiar and tells us that @M { p(n, K) } is a combination:
@ID @M {
p(n, K) `` = `` pmatrix { row col n + K-1 row col {K-1} }
}
For example, @M { p(4, 3) = "-2p" @Font pmatrix { row col 6 row col 2 } = 15 }.  For
a fixed @M { K } this is polynomial in @M { n }, of order @M { n sup K }.
@PP
If we elected not to sort the tuples of lengths, each of the
@M { n } elements would have a value in the range @M { [1, K] }
independently of the others, making @M { K sup n } distinct
sequences altogether.  This is exponential in @M { n }, and larger
in practice.  For example, @M { 3 sup 4 = 81 }, compared with 15.
@PP
In instance INRC2-4-100-0-1108, the maximum length of an unfinished
group is @M { K = 4 }, and there are at most about 25 nurses on
the night shift.  So the value that interests us is
@M { p(25, 4) = "-2p" @Font pmatrix { row col @R "28" row col 3 } = 3276 }.
This is a manageable number.  In practice, our preference for
extending undersized groups rather than starting new ones should
reduce it considerably.
@End @SubSection

@EndSubSections
@End @Section

@EndSections
@End @Chapter
