@Chapter
    @Title { Resource-Structural Solvers }
    @Tag { resource_structural }
@Begin
@LP
This chapter documents the solvers packaged with KHE that modify
the resource structure of a solution:  solvers that build task
trees, analyse resource supply and demand, and so on.  These
solvers may alter resource assignments, but they do so only
occasionally, incidentally to their structural work.
# We also include here one solver which adjusts resource monitors.
@BeginSections

# @Section
#     @Title { Task bound groups }
#     @Tag { resource_structural.task_bound_groups }
# @Begin
# @LP
# Task domains are reduced by adding task bound objects to tasks
# (Section {@NumberOf solutions.tasks.domains}).  Frequently, task
# bound objects need to be stored somewhere where they can be found and
# deleted later.  The required data structure is trivial---just an array
# of task bounds---but it is convenient to have a standard for it, so
# KHE defines a type @C { KHE_TASK_BOUND_GROUP } with suitable operations.
# @PP
# To create a task bound group, call
# @ID @C {
# KHE_TASK_BOUND_GROUP KheTaskBoundGroupMake(KHE_SOLN soln);
# }
# To add a task bound to a task bound group, call
# @ID @C {
# void KheTaskBoundGroupAddTaskBound(KHE_TASK_BOUND_GROUP tbg,
#   KHE_TASK_BOUND tb);
# }
# To visit the task bounds of a task bound group, call
# @ID {0.96 1.0} @Scale @C {
# int KheTaskBoundGroupTaskBoundCount(KHE_TASK_BOUND_GROUP tbg);
# KHE_TASK_BOUND KheTaskBoundGroupTaskBound(KHE_TASK_BOUND_GROUP tbg, int i);
# }
# To delete a task bound group, including deleting all the task
# bounds in it, call
# @ID @C {
# bool KheTaskBoundGroupDelete(KHE_TASK_BOUND_GROUP tbg);
# }
# This function returns @C { true } when every call it makes to
# @C { KheTaskBoundDelete } returns @C { true }.
# @End @Section

@Section
    @Title { Task trees }
    @Tag { resource_structural.task_trees }
@Begin
@LP
In this section we consider building a tree of tasks, analogous
to the layer tree of meets, for structuring the assignment of
tasks to other tasks and to resources.
@BeginSubSections

@SubSection
    @Title { Discussion }
    @Tag { resource_structural.task_trees.discussion }
@Begin
@LP
What meets do for time, tasks do for resources.  A meet has a time
domain and assignment; a task has a resource domain and assignment.
Link events constraints cause meets to be assigned to other meets;
avoid split assignments constraints cause tasks to be assigned to
other tasks.
@PP
There are differences.  Tasks lie in meets, but meets do not lie
in tasks.  Task assignments do not have offsets, because there is
no ordering of resources like chronological order for times.
@PP
Since the layer tree is successful in structuring meets for
time assignment, let us see what an analogous tree for structuring
tasks for resource assignment would look like.  A layer tree is
a tree, whose nodes each contain a set of meets.  The root node
contains the cycle meets.  A meet's assignment, if present, lies
in the parent of its node.   By convention, meets lying outside
nodes have fixed assignments to meets lying inside nodes, and
those assignments do not change.
@PP
A @I { task tree }, then, is a tree whose nodes each contain a set of
tasks.  The root node contains the cycle tasks (or there might be
several root nodes, one for each resource type).  A task's
assignment, if present, lies in the parent of its node.  By
convention, tasks lying outside nodes have fixed assignments to
tasks lying inside nodes, and those assignments do not change.
@PP
Type @C { KHE_TASK_SET } is KHE's nearest equivalent to a task
tree node.  It holds an arbitrary set of tasks, but there is
no support for organizing task sets into a tree structure, since
that does not seem to be needed.  It is useful, however, to look
at how tasks are structured in practice, and to relate this to
task trees, even though they are not explicitly supported by KHE.
@PP
To implement an avoid split assignments constraint, a task is
assigned to a non-cycle task and fixed.  Such tasks would therefore
lie outside nodes (if there were any).  When a solver assigns a
task to a cycle task, the task would have to lie in a child node
of a node containing the cycle tasks (again, if there were any).
So there are three levels:  a first level of nodes containing
the cycle tasks; a second level of nodes containing unfixed tasks
wanting to be assigned resources; and a third level of fixed,
assigned tasks that do not lie in nodes.
@PP
This shows that the three-way classification of tasks presented
in Section {@NumberOf solutions.tasks.asst}, into cycle tasks,
unfixed tasks, and fixed tasks, is a proxy for the missing task
tree structure.  Cycle tasks are first-level tasks, unfixed tasks
are second-level tasks, and fixed tasks are third-level tasks.
A task set is only needed for representing second-level
nodes, since tasks at the other levels do not require assignment.
# By convention, then, taskings will contain only unfixed tasks.
@End @SubSection

@SubSection
    @Title { Task tree construction }
    @Tag { resource_structural.task_trees.construction }
@Begin
@LP
KHE offers a solver for building a task tree holding the tasks
of a given solution:
@ID @C {
bool KheTaskTreeMake(KHE_SOLN soln, KHE_SOLN_ADJUSTER sa,
  KHE_OPTIONS options);
}
As usual, this solver returns @C { true } if it changes the
solution.  If @C { sa != NULL }, any changes it makes are
stored in @C { sa }, where they can be undone later if desired.
Like any good solver, this function has no special access to
data behind the scenes.  Instead, it works by calling basic
operations and helper functions:
@BulletList

# @LI {
# It calls @C { KheTaskingMake } to make one tasking for each resource
# type of @C { soln }'s instance, and it calls @C { KheTaskingAddTask }
# to add the unfixed tasks of each type to the tasking it made for that type.
# These taskings may be accessed by calling @C { KheSolnTaskingCount }
# and @C { KheSolnTasking } as usual, and they are returned in an order
# suited to resource assignment, as follows.  Taskings for which
# @C { KheResourceTypeDemandIsAllPreassigned(rt) } is @C { true }
# come first.  Their tasks will be assigned already if
# @C { KheSolnAssignPreassignedResources } has been called, as it
# usually has been.  The remaining taskings are sorted by decreasing
# order of @C { KheResourceTypeAvoidSplitAssignmentsCount(rt) }.
# These functions are described in Section {@NumberOf resource_types}.
# Of course, the user is not obliged to follow this ordering.  It is
# a precondition of @C { KheTaskTreeMake } that @C { soln } must have
# no taskings when it is called.
# }

@LI {
It notionally sorts the resource types so that resource types @C { rt }
for which @C { KheResourceTypeDemandIsAllPreassigned(rt) } is @C { true }
come first, and then so that the resource types appear in decreasing
order of @C { KheResourceTypeAvoidSplitAssignmentsCount(rt) }.  These
functions are described in Section {@NumberOf resource_types}.  Then
it handles all tasks of each resource type in turn, in this order.
}

@LI {
It calls @C { KheTaskAssign } to convert resource preassignments into
resource assignments, and to satisfy avoid split assignments constraints,
as far as possible.  Existing assignments are preserved (no calls to
@C { KheTaskUnAssign } are made).
}

@LI {
It calls @C { KheTaskAssignFix } to fix the assignments it makes
to satisfy avoid split assignments constraints.  These may be removed
later.  At present it does not call @C { KheTaskAssignFix } to fix
assignments derived from preassignments, although it probably should.
}

@LI {
It calls @C { KheTaskSetDomain } to set the domains of tasks to
satisfy preassigned resources, prefer resources constraints, and
other influences on task domains, as far as possible.
@C { KheTaskTreeMake } never adds a resource to any domain, however;
it either leaves a domain unchanged, or reduces it to a subset of
its initial value.
}

@EndList
These elements interact in ways that make them impossible to
separate.  For example, a prefer resources constraint that
applies to one task effectively applies to all the tasks that
are linked to it, directly or indirectly, by avoid split
assignments constraints.
@PP
# @C { KheTaskTreeMake } does not refer directly to any options.
# However, it calls function @C { KheTaskingMakeTaskTree }, described
# below, and so it is indirectly influenced by its options.
# @PP
# The implementation of @C { KheTaskTreeMake } has two stages.  The
# first creates one tasking for each resource type of @C { soln }'s
# instance, in the order described, and adds to each the unfixed tasks
# of its type.  This stage can be carried out separately by repeated
# calls to
# @ID @C {
# KHE_TASKING KheTaskingMakeFromResourceType(KHE_SOLN soln,
#   KHE_RESOURCE_TYPE rt);
# }
# which makes a tasking containing the unfixed tasks of @C { soln } of
# type @C { rt }, or of all types if @C { rt } is @C { NULL }.  It
# aborts if any of these unfixed tasks already lies in a tasking.
# @PP
# The second stage is more complex.  It applies public function
# @ID @C {
# bool KheTaskingMakeTaskTree(KHE_TASKING tasking,
#   KHE_SOLN_ADJUSTER sa, KHE_OPTIONS options);
# }
# to each tasking made by the first stage.  When @C { KheTaskingMakeTaskTree }
# is called from within @C { KheTaskTreeMake }, its @C { options } parameter
# is inherited from @C { KheTaskTreeMake }.
# @PP
# As described for @C { KheTaskTreeMake }, @C { KheTaskingMakeTaskTree }
# assigns tasks and tightens domains; it does not unassign tasks or
# loosen domains.  Only tasks in @C { tasking } are affected.  If
# @C { sa } is non-@C { NULL }, any task bounds created while tightening
# domains are added to @C { sa }, which allows for them to be deleted
# later if required.  Tasks assigned to non-cycle tasks have their
# assignments fixed, so are deleted from @C { tasking }.
# @PP
The implementation of @C { KheTaskTreeMake } imitates the layer
tree construction algorithm:  it applies @I jobs in decreasing priority
order.  There are fewer kinds of jobs, but the situation is more complex
in another way:  sometimes, some kinds of jobs are wanted but not others.
The three kinds of jobs of highest priority install existing domains and
task assignments, and assign resources to unassigned tasks derived from
preassigned event resources.  These jobs are always included; the first
two always succeed, and so does the third unless the user has made
peculiar task or domain assignments earlier.  The other kinds of jobs
are optional, and whether they are included or not depends on the
options (other than @C { rs_invariant }) described next.
@PP
@C { KheTaskTreeMake } consults the following options.
# Those other
# than @F rs_invariant apply only to constraints @C { c } such that
# @C { KheConstraintCombinedWeight(c) } is not minimum take part.
# This is a simple attempt to limit structural changes to
# cases that make a significant difference.
@TaggedList

@DTI { @F rs_invariant } {
A Boolean option which, when @C { true }, causes @C { KheTaskTreeMake }
to omit assignments and domain tightenings which violate the resource
assignment invariant (Section {@NumberOf resource_solvers.invt}).
}

@DTI { @F rs_task_tree_prefer_hard_off } {
A Boolean option which, when @C { false }, causes @C { KheTaskTreeMake }
to make a job for each point of application of each hard prefer
resources constraint of non-zero weight.  The priority of the
job is the combined weight of its constraint, and it attempts
to reduce the domains of the tasks of @C { soln } monitored
by the constraint's monitors so that they are subsets of the
constraint's domain.
}

@DTI { @F rs_task_tree_prefer_soft } {
Like @F rs_task_tree_prefer_hard_off except that it applies to
soft prefer resources constraints instead of hard ones, and its sense
is reversed so that the default value (@C { false } as usual) omits
these jobs.  The author has encountered cases where reducing domains
to enforce soft prefer resources constraints is harmful.
}

@DTI { @F rs_task_tree_split_hard_off } {
A Boolean option which, when @C { false }, causes @C { KheTaskTreeMake }
to make a job for each point of application of each hard avoid split
assignments constraint of non-zero weight.  Its priority is the
combined weight of its constraint, and it attempts to assign the
tasks of @C { soln } to each other so that all the tasks of
the job's point of application of the constraint are assigned,
directly or indirectly, to the same root task.
}

@DTI { @F rs_task_tree_split_soft_off } {
Like @F rs_task_tree_split_hard_off except that it applies to
soft avoid split assignments constraints rather than hard ones.
}

@DTI { @F rs_task_tree_limit_busy_hard_off } {
A Boolean option which, when @C { false }, causes @C { KheTaskTreeMake }
to make a job for each point of application of each limit busy times
constraint with non-zero weight and maximum limit 0.  Its priority is
the combined weight of its constraint, and it attempts to reduce the
domains of those tasks of @C { soln } which lie in events
preassigned the times of the constraint, to eliminate its resources,
since assigning them to these tasks must violate this constraint.
However, the resulting domain must have at least two elements; if
it would not, the reduction is undone, on the grounds that it is too
severe and it is better to allow the constraint to be violated.
@LP
This flag also applies to cluster busy times constraints with
maximum limit 0, or rather to their positive time groups.
These are essentially the same as the time groups of limit
busy times constraints when the maximum limit is 0.
}

@DTI { @F rs_task_tree_limit_busy_soft_off } {
Like @F rs_task_tree_limit_busy_hard_off except that it applies to
soft limit busy times constraints rather than hard ones.
}

@EndList
By default, all of these jobs except @F rs_task_tree_prefer_soft are run.
@End @SubSection

@EndSubSections
@End @Section

@Section
    @Title { Resource supply and demand }
    @Tag { resource_structural.supply_and_demand }
@Begin
@LP
This section covers several topics which are not closely related,
except that, in a general way, they are all concerned with the
supply of and demand for resources.
@BeginSubSections

@SubSection
    @Title { Accounting for supply and demand }
    @Tag { resource_structural.supply_and_demand.accounting }
@Begin
@LP
This section aims to understand the supply and demand for
resources in practice.
@PP
Let @M { S }, the @I { supply }, be the sum, over all resources
@C { r } of type @C { rt }, of the number of times that @C { r }
could be busy without violating any resource constraints, as
calculated by @C { KheResourceMaxBusyTimes }
(Section {@NumberOf solutions.avail.functions}).  Let @M { D }, the 
@I { demand }, be the total duration of tasks of type @C { rt }
for which there are assign resource constraints of non-zero weight.
@M { S } and @M { D } depend only on the instance; they are the
same for every solution.
@PP
Let the @I { excess supply } of resource type @C { rt } be
@M { S - D }, the amount by which the supply of resources of that
type exceeds the demand for them.  This could be negative, in which
case unassigned tasks or overloaded resources are inevitable.
@PP
Other considerations arise when we try to understand how supply
and demand play out in a solution.  Some resources may be
@I { overloaded }:  their actual number of busy times is larger
than the value calculated by @C { KheResourceMaxBusyTimes }.
Let @M { O } be the sum, over all overloaded resources, of the
excess.  Other resources may be @I { underloaded }:  their actual
number of busy times is smaller than the value calculated by
@C { KheResourceMaxBusyTimes }.  Let @M { U } be the sum,
over all underloaded resources, of the amount by which each
underloaded resource falls short.  HSEval prints @M { O } (in
fact @M { minus O }) and @M { U } below each planning timetable.
It should be clear that in a given solution, the number of
busy times that resources actually supply is @M { S + O - U }.
@PP
There are also adjustments needed on the demand side.  Some
tasks that require assignment may in fact not be assigned.
Let @M { X } be their total duration.  HSEval prints these tasks
in the Unassigned row at the bottom of the planning timetable.
Also, some tasks that do not require assignment may in fact
be assigned.  Let @M { Y } be their total duration.  HSEval
prints these tasks in italics in planning timetables, and prints
their total duration at the bottom of the timetables.  In a
given solution, the total duration of the tasks that are actually
assigned is @M { D - X + Y }.
@PP
But now, each task that is actually assigned consumes one unit
of resource supply, and vice versa, so we must have
@ID @Math { D - X + Y = S + O - U }
and rearranging gives
@ID @Math { S - D = U - O + Y - X }
@M { S - D }, the excess supply, depends only on the instance.  So
the quantity on the right is constant over all solutions for a given
instance.
@PP
Now each unit of @M { O + X } incurs a cost, but each unit of
@M { U + Y } incurs no cost.  Nevertheless, minimizing @M { O + X }
is the same as minimizing @M { U + Y }, because their difference
is a constant.
@End @SubSection

@SubSection
    @Title { Classifying resources by available workload }
    @Tag { resource_structural.supply_and_demand.classify_by_workload }
@Begin
@LP
Resources with high workload limits, as indicated by functions
@C { KheResourceMaxBusyTimes } and @C { KheResourceMaxWorkload }
(Section {@NumberOf solutions.avail}), may be harder to exploit
than resources with lower workload limits, so it may make sense
to timetable them first.  Function
@ID @C {
bool KheClassifyResourcesByWorkload(KHE_SOLN soln,
  KHE_RESOURCE_GROUP rg, KHE_RESOURCE_GROUP *rg1,
  KHE_RESOURCE_GROUP *rg2);
}
helps with that.  It partitions @C { rg } into two resource groups,
@C { rg1 } and @C { rg2 }, such that the highest workload resources
are in @C { rg1 }, and the rest are in @C { rg2 }.  It returns
@C { true } if it succeeds, and @C { false } if not, which happens
when the resources of @C { rg } all have equal maximum workloads.
@PP
If @C { KheClassifyResourcesByWorkload } returns @C { true }, every
resource in @C { rg1 } has a maximal value of @C { KheResourceMaxBusyTimes }
and a maximal value of @C { KheResourceMaxWorkload }, and every element
of @C { rg2 } has a non-maximal value of @C { KheResourceMaxBusyTimes }
or a non-maximal value of @C { KheResourceMaxWorkload }.  If it returns
@C { false }, then @C { rg1 } and @C { rg2 } are @C { NULL }.
@End @SubSection

@SubSection
    @Title { Limits on consecutive days, and rigidity }
    @Tag { resource_structural.supply_and_demand.consec }
@Begin
@LP
Nurse rostering instances typically place minimum and maximum
limits on the number of consecutive days that a resource can
be free, busy, or busy working a particular shift.  These limits
are scattered through constraints and may be hard to find.  The
solver described in this section gathers them into one place.
@PP
An object called a @I { consec solver } is used for this.  To
create one, call
@ID @C {
KHE_CONSEC_SOLVER KheConsecSolverMake(KHE_SOLN soln, KHE_FRAME frame);
}
It uses memory from an arena taken from @C { soln }.  Its
attributes may be retrieved by calling
@ID @C {
KHE_SOLN KheConsecSolverSoln(KHE_CONSEC_SOLVER cs);
KHE_FRAME KheConsecSolverFrame(KHE_CONSEC_SOLVER cs);
}
The frame must contain at least one time group, otherwise
@C { KheConsecSolverMake } will abort.
@PP
To delete a solver when it is no longer needed, call
@ID @C {
void KheConsecSolverDelete(KHE_CONSEC_SOLVER cs);
}
This works by returning the arena to the solution.
@PP
To find the limits for a particular resource, call
@ID {0.98 1.0} @Scale @C {
void KheConsecSolverFreeDaysLimits(KHE_CONSEC_SOLVER cs, KHE_RESOURCE r,
  int *history, int *min_limit, int *max_limit);
void KheConsecSolverBusyDaysLimits(KHE_CONSEC_SOLVER cs, KHE_RESOURCE r,
  int *history, int *min_limit, int *max_limit);
void KheConsecSolverBusyTimesLimits(KHE_CONSEC_SOLVER cs, KHE_RESOURCE r,
  int offset, int *history, int *min_limit, int *max_limit);
}
For any resource @C { r }, these return the history (see below), the
minimum limit, and the maximum limit on the number of consecutive
free days, the number of consecutive busy days, and the number of
consecutive busy times which appear @C { offset } places into each
time group of @C { frame }.  Setting @C { offset } to 0 might
return the history and limits on the number of consecutive early
shifts, setting it to 1 might return the limits on the number of
consecutive day shifts, and so on.  The largest offset acceptable
to @C { KheConsecSolverBusyTimesLimits } is returned by
@ID @C {
int KheConsecSolverMaxOffset(KHE_CONSEC_SOLVER cs);
}
An @C { offset } larger than this, or negative, produces an abort.
@PP
The @C { *history } values return history:  the number of consecutive
free days, consecutive busy days, and consecutive busy times with the
given @C { offset } in the timetable of @C { r } directly before the
timetable proper begins.  They are taken from the history values of the
same constraints that determine the @C { *min_limit } and @C { *max_limit }
values.
@PP
All these results are based on the frame passed to
@C { KheConsecSolverMake }, which would always be the common frame.
They are calculated by finding all limit active intervals constraints
with non-zero weight, comparing their time groups with the frame
time groups, and checking their polarities.  In effect this reverse
engineers what programs like NRConv do when they convert specialized
nurse rostering formats to XESTT.
@PP
If no constraint applies, @C { *history } is set to 0, @C { *min_limit }
is set to 1 (a sequence of length 0 is not a sequence at all), and
@C { *max_limit } is set to @C { KheFrameTimeGroupCount(frame) }.
In the unlikely event that more than one constraint applies,
@C { *history } and @C { *min_limit } are set to the largest of the
values from the separate constraints, and @C { *max_limit } is set
to the smallest of the values from the separate constraints.
@PP
The @I { rigidity } of a resource is how constrained it is to
follow a particular pattern of busy and free days, assuming that
it is utilized to the maximum extent that constraints allow, as
reported by @C { KheResourceMaxBusyTimes }
(Section {@NumberOf solutions.avail.functions}).  Rigidity
takes account of constraints on the number of consecutive busy
days and consecutive free days, plus history.
@PP
It is hard to see how a local repair method, for example ejection
chains (Section {@NumberOf resource_solvers.ejection}), can just
stumble on a good timetable for a rigid resource (although it often
does).  Something more targeted, like optimal assignment using
dynamic programming (Section {@NumberOf resource_solvers.dynamic}),
seems indicated.
@PP
Suppose that resource @M { r } has 20 available busy times, that
the cycle has 28 days, that @M { r }'s busy days are limited to
at most 5 consecutive days, and that its free days are limited
to at least 2 consecutive days.  Then to reach the 20 busy times
economically we need runs of 5 consecutive busy days, separated
by runs of 2 consecutive free days.  A typical pattern would be
@ID { (5 busy, 2 free, 5 busy, 2 free, 5 busy, 2 free, 5 busy, 2 free) }
The only freedoms here are to move the last two free days to other
points in the cycle, or else to move two or more busy times to the end.
# There are less than
# @M { 28 times 27 slash 2 } ways to do this, making the resource
# very rigid.
@PP
Resources with few available times can also be rigid.  Suppose
that @M { r } has 6 available busy times, that the cycle has
28 days, that @M { r }'s busy days are limited to at least 2
consecutive days, and that its free days are limited to at most
7 consecutive days.  (This is an actual example, from an INRC2
instance.)  A typical pattern would be
@ID { (7 free, 2 busy, 7 free, 2 busy, 7 free, 2 busy, 1 free) }
The only freedom here is to move up to 6 free days to the end,
another rigid case.  We've just shown, for example, that @M { r }'s
first and last days must be free.
@PP
For an example of a resource which is @I not rigid, let @M { r }
have 15 available busy times, subject to the same constraints as
the two previous resources.  A typical pattern would be
@ID {
(5 busy, 2 free, 5 busy, 2 free, 5 busy, 9 free)
}
This is not quite legal because the last run of free days
is too long, but it's close, and there are many choices
for moving two or more of those 9 free days forward, and
for regrouping the busy sequences, for example into three
runs of 4 days and one run of 3 days.
@PP
The ideal measure of rigidity (actually non-rigidity) would be the
number of distinct zero cost patterns of busy and free days.  But
that seems impracticable to calculate, and anyway we do not need a
precise measure.  The measure we choose is inspired by the examples
given above.  It is a weighted sum of two parts, @M { m sub 1 } and
@M { m sub 2 }:
@BulletList

@LI {
First, we ask what is the smallest number of runs of consecutive
busy days that we can have and still reach our desired number of
busy days without violating any minimum or maximum limits on
consecutive busy or free days?  And what is the largest number?
The difference is @M { m sub 1 }, our first measure of non-rigidity.
(Other measures are correlated with this one.  For example, if the
number of runs can vary, their lengths can vary as well.)
}

@LI {
Second, we ask what choices there are for placing the first
run of consecutive busy days, consistent with history.  For
example, if there are 2 busy days from history, and the
minimum limit is 3, then there is no choice for the first
run of busy days:  it must start on the first day.  Or if
there are 5 free days in history, and the maximum number of
consecutive free days is 7, then the first run of busy days
must start on the first, second, or third day.  The number of
choices here is @M { m sub 2 }, our second measure of non-rigidity.
}

@EndList
We weight the first measure by 10 and the second by 1.
@PP
For the resource with 20 available times above, at least 4 runs
are required, because each run can have at most 5 busy times.
At most 5 runs can be used, because if 6 runs are used there are
5 gaps between runs, each containing at least 2 times, leaving
at most 18 places for busy times.  So @M { m sub 1 = 5 - 4 = 1 }.
@PP
For the resource with 6 available times, at most 3 runs are
possible, because each run has at least 2 busy times.  And
2 runs doesn't work, because it leaves only three free runs,
each with at most 7 free times, to hold the 22 free times.
So @M {  m sub 1 = 3 - 3 = 0 }.
@PP
For the resource with 15 available times it is a little harder
to see what the possibilities are.  A somewhat rough and ready
general method works like this.  Suppose all busy runs have
length @M { x }, except possibly one run that is shorter,
and all free runs have length @M { y }.  If the number of
busy times we want is @M { a }, then the number of busy runs
is @M { c = lceil a slash x rceil }.
We must place one free run of length @M { y } between each
adjacent pair of busy runs, and optionally we can place one
free run of length @M { y } before the first run and after
the last run.  This gives a total number of times (busy plus
free) of between @M { a + y (c - 1) } and @M { a + y(c + 1) }.
If the total number of times in the cycle is between these
limits, then @M { x } and @M { y } are workable choices
and @M { c } is a workable number of busy runs.
@PP
Now @M { x } and @M { y } are bounded by limits set by
constraints.  So we try each combination of one legal choice
for @M { x } and one for @M { y } and see what workable
values for @M { c } we get.  The first measure of non-rigidity,
@M { m sub 1 }, is the difference between the largest and
smallest workable values for @M { c }.
@PP
A general method of calculating the second measure of non-rigidity
goes like this.  Suppose that the minimum length of a run of
consecutive busy times is @M { b sub "min" }, and the maximum
length is @M { b sub "max" }.  Suppose that the minimum length
of a run of consecutive free times is @M { f sub "min" }, and
the maximum length is @M { f sub "max" }.  And suppose that the
number of consecutive busy days from history is @M { b }, and
the number of consecutive free days from history is @M { f }.
At most one of @M { b } and @M { f } can be non-zero, and we
also have @M { 1 <= b sub "min" <= b sub "max" }, and
@M { 1 <= f sub "min" <= f sub "max" }.
@PP
If @M { b = f = 0 }, then the first day could be busy, contributing
1 to @M { m sub 2 }, or else any number of initial days from
@M { f sub "min" } to @M { f sub "max" } inclusive could be free,
contributing a further @M { f sub "max" - f sub "min" + 1 } to
@M { m sub 2 }.
@PP
If @M { b > 0 }, then @M { f = 0 }.  If @M { b < b sub "min" },
then the first day must be busy, so @M { m sub 2 = 1 }.  If
@M { b sub "min" <= b < b sub "max" }, then the first day
could be busy, contributing @M { 1 } to @M { m sub 2 }, or
free, contributing @M { f sub "max" - f sub "min" + 1 }
to @M { m sub 2 }.  If @M { b >= b sub "max" }, then the first day
must be free, and @M { m sub 2 = f sub "max" - f sub "min" + 1 }.
@PP
If @M { f > 0 }, then @M { b = 0 }.  If @M { f < f sub "min" },
then the first day must be free, and the number of initial free
days may be between @M { f sub "min" - f } and @M { f sub "max" - f }
inclusive, making @M { m sub 2 = f sub "max" - f sub "min" + 1 }
choices altogether.  If @M { f sub "min" <= f < f sub "max" },
then the first day could be busy, contributing 1 to @M { m sub 2 },
or free, contributing a further @M { f sub "max" - f } to
@M { m sub 2 }.  If @M { f sub "max" <= f }, then the first
day must be busy, so @M { m sub 2 = 1 }.
@PP
Function
@ID @C {
int KheConsecSolverNonRigidity(KHE_CONSEC_SOLVER cs, KHE_RESOURCE r);
}
returns the non-rigidity as we have defined it here.  There is no
precise threshold separating non-rigidity from rigidity, but for
the first measure a value of 0 is very rigid, 1 is somewhat rigid,
and 2 is non-rigid, arguably.  For the second measure a similar
statement is reasonable.  Rather than worrying about thresholds it
may be better to sort the resources by increasing non-rigidity and
treat, say, the first 20% or 30% of them as rigid.
@PP
Finally,
@ID @C {
void KheConsecSolverDebug(KHE_CONSEC_SOLVER cs, int verbosity,
  int indent, FILE *fp);
}
produces the usual debug print of @C { cs } onto @C { fp } with the
given verbosity and indent.  When @C { verbosity >= 2 }, this prints all
results for all resources, using format @C { history|min-max }.  For
efficiency, these are calculated all at once by @C { KheConsecSolverMake }.
@End @SubSection

@SubSection
    @Title { Tighten to partition }
    @Tag { resource_structural.supply_and_demand.partition }
@Begin
@LP
Suppose we are dealing with teachers, and that they have partitions
(Section {@NumberOf resource_types}) which are their faculties
(English, Mathematics, Science, and so on).  Some partitions may
be heavily loaded (that is, required to supply teachers for tasks
whose total workload approaches the total available workload of
their resources) while others are lightly loaded.
@PP
Some tasks may be taught by teachers from more than one partition.
These @I { multi-partition tasks } should be assigned to teachers from
lightly loaded partitions, and so should not overlap in time with other
tasks from these partitions.  @I { Tighten to partition } tightens the
domain of each multi-partition task of a given resource type to one partition,
returning @C { true } if it changes anything:
@ID {0.95 1.0} @Scale @C {
bool KheTightenToPartition(KHE_SOLN soln, KHE_RESOURCE_TYPE rt,
  KHE_SOLN_ADJUSTER sa, KHE_OPTIONS options);
}
The choice of partition is explained below.  All changes are additions
of task bounds to tasks, and if @C { sa } is non-@C { NULL }, all
these task bounds are also added to @C { sa }, so that they can
be removed later if desired.
@PP
It is best to call @C { KheTightenToPartition } after
preassigned meets are assigned, but before general time
assignment.  The tightened domains encourage time assignment to
avoid the undesirable overlaps.  After time assignment, the
changes should be removed, since otherwise they constrain
resource assignment unnecessarily.
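@PP
For example, the task bounds can be held in a soln adjuster and
removed after time assignment.  This is only a sketch, and it assumes
that deleting the adjuster undoes the changes recorded in it:
@ID @C {
sa = KheSolnAdjusterMake(soln);
KheTightenToPartition(soln, rt, sa, options);
... assign times ...
KheSolnAdjusterDelete(sa);
}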
# This is what the task bound
# group is for:
# @ID @C {
# tighten_tbg = KheTaskBoundGroupMake(soln);
# for( i = 0;  i < KheSolnTaskingCount(soln);  i++ )
#   KheTightenToPartition(KheSolnTasking(soln, i),
#     tighten_tbg, options);
# ... assign times ...
# KheTaskBoundGroupDelete(tighten_tbg);
# }
# The rest of this section explains how @C { KheTightenToPartition }
# works in detail.
@PP
@C { KheTightenToPartition } does nothing when @C { rt } is
@C { NULL }, or @C { KheResourceTypeDemandIsAllPreassigned }
(Section {@NumberOf resource_types}) says that @C { rt }'s
tasks are all preassigned, or @C { rt } has no partitions,
or its number of partitions is less than four or more than one-third
of its number of resources.  No good can be done in these cases.
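@PP
The partition-count condition amounts to this small predicate
(a sketch only; @C { p } and @C { n } stand for the numbers of
partitions and resources of @C { rt }):

```c
#include <stdbool.h>

/* Sketch only: proceed when there are at least four partitions and
   at most one-third as many partitions as resources. */
bool PartitionCountsOk(int p, int n)
{
  return p >= 4 && 3 * p <= n;
}
```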
@PP
Tasks whose domains lie entirely within one partition are not touched.
The remaining multi-partition tasks are sorted by decreasing combined
weight then duration, except that tasks with a @I { dominant partition }
come first.  A task with an assigned resource has a dominant partition,
namely the partition that its assigned resource lies in.  An unassigned
task has a dominant partition when at least three-quarters of the
resources of its domain come from that partition.
@PP
For each task in turn, an attempt is made to tighten its domain so
that it is a subset of one partition.  If the task has a dominant
partition, only that partition is tried.  Otherwise, the partitions
that the task's domain intersects with are tried one by one, stopping
at the first success, after sorting them by decreasing average
available workload (defined next).
@PP
Define the @I { workload supply } of a partition to be the sum, over
the resources @M { r } of the partition, of the number of times in
the cycle minus the number of workload demand monitors for @M { r }
in the matching.  Define the @I { workload demand } of a partition
to be the sum, over all tasks @M { t } whose domain is a subset of
the partition, of the workload of @M { t }.  Then the
@I { average available workload } of a partition is its workload
supply minus its workload demand, divided by its number of resources.
Evidently, if this is large, the partition is lightly loaded.
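@PP
As a sketch of this arithmetic (the names are illustrative only):

```c
/* Average available workload of one partition, as defined above:
   workload supply summed over its resources, minus its workload
   demand, divided by its number of resources. */
double AvgAvailableWorkload(const int *resource_supply, int n_resources,
  int workload_demand)
{
  int i, supply = 0;
  for( i = 0;  i < n_resources;  i++ )
    supply += resource_supply[i];
  return (double) (supply - workload_demand) / n_resources;
}
```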
@PP
Each successful tightening increases the workload demand of its
partition.  This ensures that equally lightly loaded partitions
share multi-partition tasks equally.
@PP
In a task with an assigned resource, the dominant partition is the
only one compatible with the assignment.  In a task without an
assigned resource, preference is given to a dominant partition, if
there is one, for the following reason.  Schools often have a few
@I { generalist teachers } who are capable of teaching junior
subjects from several faculties.  These teachers are useful for
fixing occasional problems, smoothing out workload imbalances,
and so on.  But the workload that they can give to faculties other
than their own is limited and should not be relied on.  For
example, suppose there are five Science teachers plus one
generalist teacher who can teach junior Science.  That should
not be taken by time assignment as a licence to routinely schedule
six Science meets simultaneously.  Domain tightening to a dominant
partition avoids this trap.
@PP
Tightening by partition works best when the @C { rs_invariant }
option of @C { options } is @C { true }.  For example, in a case like
Sport where there are many simultaneous multi-partition tasks, it
will then not tighten more of them to a lightly loaded partition
than there are teachers in that partition.  Assigning preassigned
meets beforehand improves the effectiveness of this check.
@End @SubSection

@SubSection
    @Title { Balancing supply and demand }
    @Tag { resource_structural.supply_and_demand.balance }
@Begin
@LP
This section presents a function for working out whether the
demand for resources of a given type exceeds their supply.
If it does, the function also answers these two questions:
@NumberedList

@LI {
The shortfall could be handled by not assigning tasks that
would prefer to be assigned.  What is the minimum event resource
constraint cost of omitting one task assignment at one time?
We call this the @I { task cost }.
}

@LI {
The shortfall could be handled by overloading resources.
What is the minimum resource constraint cost of overloading one
resource by one time?  We call this the @I { resource cost }.
}

@EndList
The function is
@ID {0.98 1.0} @Scale @C {
bool KheResourceDemandExceedsSupply(KHE_SOLN soln, KHE_RESOURCE_TYPE rt,
  int *demand, int *supply, KHE_COST *task_cost, KHE_COST *resource_cost);
}
It returns @C { true } when the demand for resources of type
@C { rt } exceeds the supply.  Irrespective of the value
returned, it sets @C { *demand } to the demand, @C { *supply } to the
supply, @C { *task_cost } to the answer to the first question above,
and @C { *resource_cost } to the answer to the second question.  The
two questions make sense whether demand exceeds supply or not.
@PP
The rest of this section says how @C { *demand }, @C { *supply },
@C { *task_cost }, and @C { *resource_cost } are calculated.  The
return value is just @C { *demand > *supply }.
@PP
The demand is the sum, over all tasks @C { t } of type @C { rt }
satisfying certain conditions, of the value of @C { KheTaskTotalDuration(t) }
(Section {@NumberOf solutions.tasks.asst}).  The conditions are:
@C { t } must be a proper root task, it must satisfy
@C { KheTaskTotalDuration(t) > 0 }, it must be unassigned, and
@ID @C { KheTaskNonAsstAndAsstCost(t, &non_asst_cost, &asst_cost) }
(Section {@NumberOf resource_structural.mtask_finding.ops})
must yield values satisfying @C { non_asst_cost > asst_cost }.  This
last condition excludes tasks which are happy to remain unassigned.
@PP
The supply is the sum, over all resources @C { r } of type
@C { rt }, of the value of @C { res } in expression
@ID @C {
KheResourceMaxBusyTimes(soln, r, &res)
}
As documented in Section {@NumberOf solutions.avail}, @C { res } is
an upper limit on @C { r }'s number of busy times (as imposed by
constraints) minus its current number of busy times.
@PP
The task cost is the minimum, over all tasks @C { t } that contribute
to the demand, of the value of @C { non_asst_cost - asst_cost } for
@C { t } divided by @C { KheTaskTotalDuration(t) }.  We have required
both numbers to be positive.  In the unlikely event of there being
no included tasks, the task cost is undefined, so we define it to be 0.
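@PP
The calculation can be sketched like this (the struct and names are
illustrative; real code would use @C { KHE_COST } values):

```c
/* Sketch of the task cost: the minimum, over contributing tasks, of
   (non_asst_cost - asst_cost) / total duration, or 0 when there are
   no contributing tasks.  Both fields are required to be positive. */
typedef struct {
  int cost_diff;   /* non_asst_cost - asst_cost */
  int total_durn;  /* total duration of the task */
} DEMAND_TASK;

int TaskCost(const DEMAND_TASK *tasks, int n)
{
  int i, best = -1;
  for( i = 0;  i < n;  i++ )
  {
    int c = tasks[i].cost_diff / tasks[i].total_durn;
    if( best < 0 || c < best )
      best = c;
  }
  return best < 0 ? 0 : best;
}
```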
@PP
The resource cost is the minimum, over all resources @C { r }
that contribute to the supply, of the @I { overload cost } of
@C { r }:  the minimum cost in resource constraints of overloading
@C { r } by one time.  For each @C { r }, this cost is found by
using functions @C { KheAvailSolverMaxBusyTimesAvailNodeCount }
and @C { KheAvailSolverMaxBusyTimesAvailNode }
(Section {@NumberOf solutions.avail.query})
to visit the monitors that contribute to the value calculated by
@C { KheResourceMaxBusyTimes }.  The minimum of the weights of
those monitors (as returned by @C { KheMonitorCombinedWeight })
is taken to be @C { r }'s overload cost.  This method is not as
naive as it appears to be at first blush.
@PP
Resources @C { r } for which there are no contributing monitors
have no overload cost and do not participate in the calculation
of the resource cost.  In the unlikely event of there being no
participating resources, the resource cost is undefined, so we
define it to be 0.
@End @SubSection

# @SubSection
#     @Title { Balancing supply and demand (old) }
#     @Tag { resource_structural.supply_and_demand.balance_old }
# @Begin
# @LP
# This section presents the @I { balance solver }, used for
# investigating the balance between supply of and demand for
# resources of a given type.  It aims to answer these two
# complementary questions:
# @NumberedList
# 
# @LI {
# If some resource is not used to its capacity, what cost
# will that have in tasks not assigned?
# }
# 
# @LI {
# If all tasks are assigned that need to be, what cost will that have in
# overloaded resources?
# }
# 
# @EndList
# To create a balance solver, call
# @ID @C {
# KHE_BALANCE_SOLVER KheBalanceSolverMake(KHE_SOLN soln,
#   KHE_RESOURCE_TYPE rt, KHE_FRAME days_frame, HA_ARENA a);
# }
# It makes a solver for the supply of and demand for resources of type
# @C { rt } in @C { soln }, using memory from arena @C { a }.  There is
# no deletion operation; the solver is deleted when @C { a } is freed.
# @PP
# To find the total supply of resources of type @C { rt }, call
# @ID @C {
# int KheBalanceSolverTotalSupply(KHE_BALANCE_SOLVER bs);
# }
# This calls @C { KheResourceMaxBusyTimes(soln, r, &res) }
# for each resource @C { r } of type @C { rt }, and returns the
# sum of the @C { res } values.  As documented in
# Section {@NumberOf solutions.avail}, @C { res } is an
# upper limit on @C { r }'s number of busy times (as imposed by
# constraints) minus its current number of busy times.
# @PP
# To find the total demand for resources of type @C { rt }, call
# @ID @C {
# int KheBalanceSolverTotalDemand(KHE_BALANCE_SOLVER bs);
# }
# This is the sum, over all unassigned tasks @C { t } of type @C { rt }, of
# the total duration of @C { t }, as returned by @C { KheTaskTotalDuration(t) }
# (Section {@NumberOf solutions.tasks.asst}).
# @PP
# The balance solver analyses this demand by cost reduction.  For each
# task @C { t } that contributes to @C { KheBalanceSolverTotalDemand(bs) },
# it calls @C { KheTaskAssignmentCostReduction }
# (Section {@NumberOf solutions.tasks.asst}) on @C { t }, and groups tasks
# with equal cost reductions.  To access these groups, call
# @ID @C {
# int KheBalanceSolverDemandGroupCount(KHE_BALANCE_SOLVER bs);
# void KheBalanceSolverDemandGroup(KHE_BALANCE_SOLVER bs, int i,
#   KHE_COST *cost_reduction, int *total_durn);
# }
# @C { KheBalanceSolverDemandGroup } returns the information kept about
# the @C { i }th group:  the cost reduction of each of its tasks, and
# their total duration.  @C { KheBalanceSolverTotalDemand } returns the
# sum of these total durations.  The groups are visited in order of
# decreasing cost reduction.
# @PP
# Using this information it is easy to work out the marginal cost of
# not utilising a resource @C { r } to its full capacity.  Suppose
# that tasks are assigned in order of decreasing cost reduction,
# until all resources are used to capacity.  The cost reduction of
# the last task assigned is the marginal cost of not fully utilizing
# @C { r }.  This value is returned by
# @ID @C {
# KHE_COST KheBalanceSolverMarginalCost(KHE_BALANCE_SOLVER bs);
# }
# If supply exceeds demand, there is no marginal cost, and so the
# value returned is 0.  Finally,
# @ID @C {
# void KheBalanceSolverDebug(KHE_BALANCE_SOLVER bs, int verbosity,
#   int indent, FILE *fp);
# }
# produces the usual debug print of @C { bs } onto @C { fp } with
# the given verbosity and indent.
# @End @SubSection

@SubSection
    @Title { Resource flow }
    @Tag { resource_structural.supply_and_demand.resource_flow }
@Begin
@LP
It is arguably too simple to just compare the total supply of
resources with the total demand for them.  The tasks which
constitute the demand have prefer resources monitors (hard and
soft) which restrict which resources can be used.  There could
be enough supply overall but not enough of a particular kind:
enough nurses but not enough senior nurses, enough rooms but
not enough Science laboratories, and so on.
@PP
We can detect such problems now using the global tixel matching.
However, here we build a @I { flow graph } (a directed graph in
which we will find a maximum flow) that is much smaller than the
global tixel matching.  This graph gives a clearer view of the
overall situation than one can get from a bipartite matching.  We
call this general idea @I { resource flow }, or just @I { flow }.
@PP
A flow graph is for a given resource type @C { rt }.  It is
built from a set of @I { admissible resources } and a set of
@I { admissible tasks }.  The admissible resources are just the
resources of type @C { rt }.  A task is admissible when
all of these conditions hold:
@NumberedList

@LI {
It has type @C { rt }.
}

@LI {
It is a proper root task.
}

@LI {
It is derived from an event resource (needed because we use
the event resource's domain).
}

@LI {
It is not preassigned.
}

@LI {
Its assignment is not fixed.
}

@LI {
@C { KheTaskNonAsstAndAsstCost }
(Section {@NumberOf resource_structural.mtask_finding.ops})
gives it a positive non-assignment cost.
}

@LI {
It is not assigned a resource.  This condition is optional; it
is present when parameter @C { preserve_assts } of function
@C { KheFlowMake } below has value @C { true }.
}

@EndList
As usual, the @I { total duration } of a proper root task is the
duration of the task plus the durations of all the tasks assigned
to it, directly or indirectly.
@PP
The flow graph contains a source node, some @I { resource nodes }
(each containing a set of one or more admissible resources), some
@I { task nodes } (each containing a set of one or more admissible
tasks), and a sink node.
@PP
For each admissible resource, define a resource node @M { x } containing
just that resource.  Add an edge from the source node to @M { x },
whose capacity @M { c(x) } is the number of times that the resource
is available, according to @C { KheResourceMaxBusyTimes }
(Section {@NumberOf solutions.avail.functions}).  If the resource
is currently assigned to any inadmissible proper root tasks, then
reduce @M { c(x) } by the total duration of those tasks (but not
below 0) to compensate for their omission.
@PP
For each distinct set @M { R sub y } of resources preferred by at least
one task, define a @I { task node } @M { y } containing all tasks that
prefer @M { R sub y }, and add an edge from @M { y } to the sink node,
whose capacity @M { c(y) } is the total duration of those tasks.  Then
for each resource node @M { x } and each task node @M { y }, draw an
edge from @M { x } to @M { y } of infinite capacity whenever @M { x }'s
resource lies in @M { R sub y }.
@PP
Before solving this graph, we compress it by merging resource nodes
that are connected to the same task nodes.  Each such merged node
has incoming capacity equal to the total capacity of the nodes it
replaces, and outgoing edges like the edges it replaces.
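@PP
The merging step can be sketched by representing each resource
node's set of adjacent task nodes as a bitmask (an assumed
representation, not KHE's):

```c
/* Merge resource nodes with identical task-node adjacency (adj[i] is
   a bitmask), summing their capacities.  Returns the number of merged
   nodes; adj_out and cap_out receive their adjacencies and capacities. */
int MergeResourceNodes(const unsigned *adj, const int *cap, int n,
  unsigned *adj_out, int *cap_out)
{
  int i, j, m = 0;
  for( i = 0;  i < n;  i++ )
  {
    for( j = 0;  j < m && adj_out[j] != adj[i];  j++ );
    if( j == m )
    {
      adj_out[m] = adj[i];
      cap_out[m] = 0;
      m++;
    }
    cap_out[j] += cap[i];
  }
  return m;
}
```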
@PP
Here is an example of a flow graph from a real instance
(INRC2-4-100-0-1108):
@CD @Diag {
@Tbl
   aformat { @Cell A | @Cell B | @Cell | @Cell C | @Cell D }
   mh { 0.8c }
   iv { ctr }
   mv { 0.4c }
{
@Rowa ma { 0i }
  B { DA:: @Box HN_* }
  C { SA:: @Box HeadNurse }
@Rowa
  A { SS:: @Circle }
  B { DB:: @Box NU_* }
  C { SB:: @Box Nurse }
  D { SK:: @Circle }
@Rowa mb { 0i }
  B { DC:: @Box CT_* }
  C { SC:: @Box Caretaker }
}
//
@Arrow from { SS } to { DA@W } ylabel { 196 }
@Arrow from { SS } to { DB@W } ylabel { 250 }
@Arrow from { SS } to { DC@W } ylabel { 537 }

@Arrow from { DA } to { SA } ylabel { +4p @Font @M { infty } }
@Arrow from { DA } to { SB } ylabel { +4p @Font @M { infty } }
@Arrow from { DB } to { SB } ylabel { +4p @Font @M { infty } }
@Arrow from { DB } to { SC } ylabel { +4p @Font @M { infty } }
@Arrow from { DC } to { SC } ylabel { +4p @Font @M { infty } }

@Arrow from { SA@E } to { SK } ylabel { 91 }
@Arrow from { SB@E } to { SK } ylabel { 239 }
@Arrow from { SC@E } to { SK } ylabel { 669 }
}
Node HN_* holds the head nurses, node HeadNurse holds the tasks
that require a head nurse, and so on.  This example substantiates
our claim about the clarity of flow graphs:  it shows that head
nurses can do the work of ordinary nurses as well as their own,
and ordinary nurses can do the work of caretaker nurses as well as
their own.  This is just as well, because, as the graph also shows,
head nurses have a superfluity of available workload and caretakers
have a shortage.
@PP
This flow graph can answer many questions.  Each resource node is
the answer to the question `What kind of resource is this?',
although that answer does not come with a simple name in general.
(We will compare the sets of resources we get with existing resource
groups, so that we can give the nodes familiar names whenever possible.
But the algorithm deals with sets of resources that it defines itself,
not with sets defined previously as resource groups.)
# @PP
# The basic question we answer with flows is `does a maximum flow
# exist which includes a non-zero flow from @M { r } to @M { s }?'
# Let @M { f(r, s) } be the answer to this question (a boolean).
# To find @M { f(r, s) }, we subtract 1 from @M { c(r) } and
# @M { c(s) } and find a maximum flow.  If this flow is just 1
# less than the original maximum flow, then a maximum flow that
# uses this edge exists:  take this flow and add one unit of
# flow from the source to @M { r } to @M { s } to the sink.
@PP
Call a maximum flow in this graph the @I { original flow }.
By changing the graph and seeing whether the new maximum flow
is less than the original, we can answer questions like these:
@BulletList

@LI {
Can at least one of the tasks of task node @M { y } be assigned a
resource from resource node @M { x }?  Subtract 1 from @M { c(x) }
and @M { c(y) } and find a maximum flow.  If this flow is just 1
less than the original flow, then a maximum flow that uses this
edge exists:  take this flow and add one unit of flow from the
source to @M { x } to @M { y } to the sink.  If the answer is no,
we might as well delete the edge from @M { x } to @M { y }.  This
may interest callers since it simplifies the situation.
}

@LI {
Must the resources of @M { x } be used exclusively by @M { y }?
Yes, if the previous question has answer no for every task node
other than @M { y } that is connected to @M { x }.
}

@LI {
Can the tasks of @M { y } be limited to resources from @M { x }?
Remove all edges into @M { y } other than the one from @M { x }
and find a maximum flow.  The answer is yes if this equals the
original flow.
}

@EndList
There are many possible questions; our plan is to implement them
as we need them.
# we can choose any one of `there exists', `for all', and `how many'
# in several places; wherever we ask a question about @M { x } we
# can ask the same question about @M { y }, and vice versa;
# wherever a condition occurs we can negate it; and so on.
@PP
The implementation defines three types.  Type @C { KHE_FLOW }
represents the entire flow graph; type @C { KHE_FLOW_RESOURCE_NODE }
represents one resource node; and type @C { KHE_FLOW_TASK_NODE }
represents one task node.
@PP
We start with type @C { KHE_FLOW_RESOURCE_NODE }.  Its operations are
@ID @C {
KHE_RESOURCE_SET KheFlowResourceNodeResources(
  KHE_FLOW_RESOURCE_NODE frn);
bool KheFlowResourceNodeResourceGroup(KHE_FLOW_RESOURCE_NODE frn,
  KHE_RESOURCE_GROUP *rg);
int KheFlowResourceNodeCapacity(KHE_FLOW_RESOURCE_NODE frn);
bool KheFlowResourceNodeFlow(KHE_FLOW_RESOURCE_NODE frn,
  KHE_FLOW_TASK_NODE *ftn, int *flow);
void KheFlowResourceNodeDebug(KHE_FLOW_RESOURCE_NODE frn,
  int verbosity, int indent, FILE *fp);
}
@C { KheFlowResourceNodeResources } returns the set of resources
represented by flow resource node @C { frn }.  If
@C { KheFlowResourceNodeResourceGroup } returns @C { true },
then the  pre-existing resource group @C { *rg } contains exactly these
resources.
@C { KheFlowResourceNodeCapacity }
returns the total capacity of those resources (the sum of their
individual capacities, defined above).
@C { KheFlowResourceNodeFlow } reports the results of a max flow
solve on the graph.  It is to be called repeatedly, and each
time it returns @C { true } it reports one edge with flow
@C { *flow } from @C { frn } to @C { *ftn }.  So it should be
called like this:
@ID @C {
while( KheFlowResourceNodeFlow(frn, &ftn, &flow) )
  ... there is a non-zero flow from frn to ftn ...
}
Finally, @C { KheFlowResourceNodeDebug } produces a debug print of
@C { frn } in the usual way.
@PP
The operations on type @C { KHE_FLOW_TASK_NODE } are
@ID @C {
KHE_TASK_SET KheFlowTaskNodeTasks(KHE_FLOW_TASK_NODE ftn);
KHE_RESOURCE_GROUP KheFlowTaskNodeDomain(KHE_FLOW_TASK_NODE ftn);
int KheFlowTaskNodeCapacity(KHE_FLOW_TASK_NODE ftn);
void KheFlowTaskNodeDebug(KHE_FLOW_TASK_NODE ftn, int verbosity,
  int indent, FILE *fp);
}
@C { KheFlowTaskNodeTasks } returns the set of tasks represented by
@C { ftn }.  @C { KheFlowTaskNodeDomain } returns the domain they
share.  @C { KheFlowTaskNodeCapacity } returns their capacity (their
total duration); and @C { KheFlowTaskNodeDebug } produces a debug
print of @C { ftn } in the usual way.
@PP
Now for the operations on type @C { KHE_FLOW }.  A flow object is
created and deleted by calling
@ID @C {
KHE_FLOW KheFlowMake(KHE_SOLN soln, KHE_RESOURCE_TYPE rt,
  bool preserve_assts, bool include_soft);
void KheFlowDelete(KHE_FLOW f);
}
@C { KheFlowMake } builds the flow object in an arena taken from
@C { soln }, including creating its resource and task nodes as
defined above, and finding a maximum flow.   @C { KheFlowDelete }
returns the arena to @C { soln }, making @C { f }, its nodes, and
the resource sets and task sets from its nodes undefined.  (The task
sets are not created within @C { f }'s arena, because the task set
interface does not offer that option.  But @C { KheFlowDelete }
explicitly deletes them.)
# @PP
# The resources included are all resources of type @C { rt }.  The
# capacity of each resource @C { r } is @C { KheResourceMaxBusyTimes }
# (Section {@NumberOf solutions.avail.functions}), minus the total
# duration of any tasks assigned @C { r } when @C { KheResourceFlowMake }
# is called and omitted according to the rules given next.
# @PP
# The tasks included are all proper root tasks of type @C { rt },
# with three exceptions:  tasks not derived from an event resource
# are omitted; fixed tasks are omitted; and if @C { preserve_assts }
# is @C { true }, then proper root tasks that are assigned resources
# when @C { KheResourceFlowMake } is called are omitted.  The capacity
# of each task is its duration, including the durations of tasks
# assigned to it, directly or indirectly.  If the task is derived from
# event resource @C { er }, the set of resources assignable to it is
# @C { KheEventResourceHardAndSoftDomain(er) } if @C { include_soft }
# is @C { true }, and @C { KheEventResourceHardDomain(er) } otherwise.
# @C { KheTaskDomain } is not called.
@PP
The flow object returned by @C { KheFlowMake } accepts a variety of
queries.  Its resource nodes may be visited (sorted by increasing
index of their resource sets' first resources) by
@ID @C {
int KheFlowResourceNodeCount(KHE_FLOW f);
KHE_FLOW_RESOURCE_NODE KheFlowResourceNode(KHE_FLOW f, int i);
}
Its task nodes may be visited (in an unspecified order) by
@ID @C {
int KheFlowTaskNodeCount(KHE_FLOW f);
KHE_FLOW_TASK_NODE KheFlowTaskNode(KHE_FLOW f, int i);
}
There is also
@ID @C {
KHE_FLOW_RESOURCE_NODE KheFlowResourceToResourceNode(KHE_FLOW f,
  KHE_RESOURCE r);
KHE_FLOW_TASK_NODE KheFlowTaskToTaskNode(KHE_FLOW f, KHE_TASK task);
}
These return the resource node containing @C { r } and the task node
containing @C { task }, or @C { NULL } if there is no such node (if
@C { r } or @C { task } is not admissible).  Finally,
@ID @C {
void KheFlowDebug(KHE_FLOW f, int verbosity, int indent, FILE *fp);
}
produces a debug print of @C { f } onto @C { fp } with the
given verbosity and indent.
@End @SubSection

@SubSection
    @Title { Workload packing }
    @Tag { resource_structural.supply_and_demand.workload_packing }
@Begin
@LP
The solver in this section is inspired by instance @C { COI-WHPP },
in which each resource has maximum workload 70, day shifts have
workload 7, night shifts have workload 10, and the total supply
of workload is just sufficient to meet the total demand.  Given
these conditions, and the absence of significant other conditions,
it is not hard to see that in the best solutions, some resources
will be assigned 10 day shifts only, and the rest will be assigned
7 night shifts only.  It is not a real-world scenario, but it is
the scenario in this instance.
@PP
Function
@ID @C {
bool KheWorkloadPack(KHE_SOLN soln, KHE_OPTIONS options,
  KHE_RESOURCE_TYPE rt, KHE_SOLN_ADJUSTER sa);
}
checks to see whether a scenario like the one above occurs in
@C { soln } in the tasks and resources of type @C { rt }.  If so,
it installs task bounds into the tasks of type @C { rt } to enforce
this kind of partitioned solution.  This involves making heuristic
decisions about which resources will get day shifts and which will
get night shifts.  If all goes well, it adds the task bounds it
created to @C { sa } (so that they can be removed later if desired)
and returns @C { true }.  Otherwise it changes nothing and returns
@C { false }.
# @PP
# If task bounds were added, a call to @C { KheTaskBoundGroupDelete(*tbg) }
# can be used to remove them again.  This deletes the task bound
# group, including deleting any task bounds in it, which in turn
# removes them from the tasks they were added to.
@PP
@C { KheWorkloadPack } does not assign resources to tasks.  It leaves
that to other solvers.  They are forced by the task bounds to do it
in the way that @C { KheWorkloadPack } has decided on.
@PP
The rest of this section presents the details of how
@C { KheWorkloadPack } works.  We begin with the conditions
under which it acts.
@PP
Let @M { S } be the set of event resources of type @C { rt } with
non-zero workload for which assign resource constraints with
non-zero weight are present.  (Event resources with zero workload
can be assigned freely without affecting the workload packing
calculation.  Event resources without assign resource constraints
of non-zero weight do not need to be assigned at all.)  Over all
elements of @M { S } there must be exactly two distinct workloads,
@M { w sub 1 } and @M { w sub 2 } say.  Each is a workload, not a
workload per time, and so is a positive integer.  We require
@M { w sub 1 } and @M { w sub 2 } to be relatively prime.
@PP
Now suppose that for some resource @M { r } the workload limit is
@M { W = w sub 1 w sub 2 }.  Then the only way to assign @M { r }
to event resources from @M { S } that completely exhausts @M { r }'s
workload is for all of the event resources assigned @M { r } to
have the same workload, say @M { w sub i }, and for @M { r } to be
assigned @M { W "/" w sub i } such event resources.  The proof
of this is by contradiction, as follows.
@PP
Any other arrangement leads to a total workload for @M { r } of the form
@ID @Math {
a sub 1 w sub 1 + a sub 2 w sub 2 = W = w sub 1 w sub 2
}
where @M { a sub 1 } and @M { a sub 2 } are positive integers.
Dividing through by @M { w sub 1 } shows that @M { w sub 1 }
divides @M { a sub 2 } (because @M { w sub 1 } and @M { w sub 2 }
are relatively prime), and similarly @M { w sub 2 } divides
@M { a sub 1 }.  So let @M { a sub 1 = b sub 1 w sub 2 } and
@M { a sub 2 = b sub 2 w sub 1 } where @M { b sub 1 } and
@M { b sub 2 } are positive integers.  This gives
@ID @Math {
b sub 1 w sub 2 w sub 1 + b sub 2 w sub 1 w sub 2 = w sub 1 w sub 2
}
Dividing by @M { w sub 1 w sub 2 } gives @M { b sub 1 + b sub 2 = 1 },
a contradiction, because @M { b sub 1 } and @M { b sub 2 } are positive
integers.
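@PP
The claim is also easy to confirm by brute force for particular
workloads, for example the COI-WHPP values 7 and 10 (this check is
ours, not part of the solver):

```c
#include <stdbool.h>

/* Search for positive integers a1 and a2 with a1*w1 + a2*w2 equal to
   w1*w2.  By the proof above, none exist when w1 and w2 are
   relatively prime; when they are not, solutions may exist. */
bool MixedSolutionExists(int w1, int w2)
{
  int a1, a2, W = w1 * w2;
  for( a1 = 1;  a1 * w1 < W;  a1++ )
    for( a2 = 1;  a1 * w1 + a2 * w2 <= W;  a2++ )
      if( a1 * w1 + a2 * w2 == W )
        return true;
  return false;
}
```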
@PP
Each resource of type @C { rt } must have limit workload constraints
of non-zero weight which give it maximum workload
@M { W = w sub 1 w sub 2 }, according to @C { KheResourceMaxWorkload }
(Section {@NumberOf solutions.avail.functions}).  If there are
@M { n } resources of type @C { rt }, then the total workload supply
is @M { nW }.  The total workload of the elements of @M { S } must be
at least @M { nW }, so that workload demand equals or exceeds supply.
@PP
Finally, we need to decide which resources to assign to the event
resources with workload @M { w sub 1 }, and which to assign to the
event resources with workload @M { w sub 2 }.  We do this as follows.
@PP
Let @M { S sub 1 } be the set of event resources from @M { S }
whose workload is @M { w sub 1 }, and let @M { S sub 2 } be the set
of event resources from @M { S } whose workload is @M { w sub 2 }.
Let @M { R = lbrace r sub 1 ,..., r sub n rbrace } be the resources
of type @C { rt }.  We need to partition @M { R } into @M { R sub 1 },
the resource group of the task bound applied to the event resources
of @M { S sub 1 }, and @M { R sub 2 }, the resource group of the
task bound applied to the event resources of @M { S sub 2 }.
@PP
Each event resource of @M { S sub 1 } has workload @M { w sub 1 },
making a total workload of @M { bar S sub 1 bar w sub 1 }.  From
the work done above, each resource has maximum workload
@M { W = w sub 1 w sub 2 }, so the number of resources needed
to cover the event resources of @M { S sub 1 } is
@ID @Math {
c sub 1 = bar S sub 1 bar w sub 1 ` "/" w sub 1 w sub 2
= bar S sub 1 bar ` "/" w sub 2
}
Similarly, @M { c sub 2 = bar S sub 2 bar ` "/" w sub 1 } resources
are needed to cover the event resources of @M { S sub 2 }.  Suitable
resources can be selected using a maximum flow in this graph:
@CD @Diag {
@Tbl
   i { ctr }
   mh { 1.2c }
   mv { 0.0c }
   aformat { @Cell A | @Cell B | @Cell C | @Cell D }
{
@Rowa
    B { R1:: @Circle @M { r sub 1 } }
@Rowa
    C { S1:: @Circle @M { S sub 1 } }
@Rowa
    B { R2:: @Circle @M { r sub 2 } }
@Rowa
    A { SOURCE:: @Circle {} }
    D { SINK:: @Circle {} }
@Rowa
    B { ... }
@Rowa
    C { S2:: @Circle @M { S sub 2 } }
@Rowa
    B { RN:: @Circle @M { r sub n } }
}
//
@Arrow from { SOURCE } to { R1 } ylabel { 1 }
@Arrow from { SOURCE } to { R2 } ylabel { 1 }
@Arrow from { SOURCE } to { RN } ylabel { 1 }
@Arrow from { R1 } to { S1 } ylabel { 1 }
@Arrow from { R2 } to { S1 } ylabel { 1 }
@Arrow from { R2 } to { S2 } ylabel { 1 }
@Arrow from { RN } to { S2 } ylabel { 1 }
@Arrow from { S1 } to { SINK } ylabel { @M { c sub 1 } }
@Arrow from { S2 } to { SINK } ylabel { @M { c sub 2 } }
}
The flow along each edge is an integral number of resources.
Each resource @M { r sub i } is represented by a node at the end
of an edge of capacity 1 from the source, ensuring that each
resource is utilized at most once.  Each set of event resources
@M { S sub j } is represented by a node at the start of an edge
of capacity @M { c sub j } to the sink, ensuring that at most
@M { c sub j } resources are utilized by the event resources
of @M { S sub j }.  An edge of capacity 1 joins each @M { r sub i }
to each @M { S sub j } such that @M { r sub i } is qualified
for @M { S sub j }, in the sense that @M { r sub i } lies in
the domain of sufficiently many elements of @M { S sub j } to
consume its entire maximum workload.
@PP
We don't actually build this flow graph, although we could.
Instead, we find all the @M { r sub i } which are qualified
for @M { S sub 1 } only and place them into @M { R sub 1 },
taking care not to add more than @M { c sub 1 } resources
to @M { R sub 1 }.  Then we find all the @M { r sub i }
which are qualified for @M { S sub 2 } only and place them
into @M { R sub 2 }, taking care not to add more than
@M { c sub 2 } resources to @M { R sub 2 }.  Finally we
make arbitrary assignments of the remaining resources to
@M { R sub 1 } or @M { R sub 2 }, again taking care not
to exceed the @M { c sub 1 } and @M { c sub 2 } limits.
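@PP
The greedy placement can be sketched as follows (all names are
illustrative; @C { qual1[i] } and @C { qual2[i] } say whether
resource @M { i } is qualified for @M { S sub 1 } and @M { S sub 2 }):

```c
#include <stdbool.h>

/* Place each resource on side 1 or side 2, respecting the limits c1
   and c2: first resources qualified for one side only, then the rest.
   side[i] receives 1 or 2.  Returns false if some resource cannot be
   placed, in which case workload packing would be abandoned. */
bool PlaceResources(const bool *qual1, const bool *qual2, int n,
  int c1, int c2, int *side)
{
  int i, n1 = 0, n2 = 0;
  for( i = 0;  i < n;  i++ )
    side[i] = 0;
  for( i = 0;  i < n;  i++ )  /* one-sided resources first */
    if( qual1[i] && !qual2[i] && n1 < c1 )
      side[i] = 1, n1++;
    else if( qual2[i] && !qual1[i] && n2 < c2 )
      side[i] = 2, n2++;
  for( i = 0;  i < n;  i++ )  /* then the remaining resources */
    if( side[i] == 0 )
    {
      if( qual1[i] && n1 < c1 )
        side[i] = 1, n1++;
      else if( qual2[i] && n2 < c2 )
        side[i] = 2, n2++;
      else
        return false;
    }
  return true;
}
```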
@PP
At various points in this algorithm we may find that we are
unable to utilize some resource.  In that case the maximum
flow is less than @M { n }, so we abandon workload packing.
@End @SubSection

#@SubSection
#    @Title { Another form of resource similarity }
#    @Tag { resource_structural.supply_and_demand.similarity }
#@Begin
#@LP
#Function @C { KheResourceSimilar } (Section {@NumberOf resources_infer})
#is offered by the KHE platform for deciding whether two resources
#are similar.  This section offers a different form of the same idea:
#@ID @C {
#bool KheResourceSimilarDomains(KHE_RESOURCE r1, KHE_RESOURCE r2,
#  float frac);
#}
#Here @C { r1 } and @C { r2 } are distinct non-@C { NULL } resources,
#and @C { frac } is a floating-point number between 0.0 and 1.0
#inclusive.  @C { KheResourceSimilarDomains } returns @C { true }
#when at least @C { frac } of the tasks currently assigned @C { r1 }
#could also be assigned @C { r2 }, in the sense that their domains
#allow that assignment, and at least @C { frac } of the tasks
#currently assigned @C { r2 } could also be assigned @C { r1 }.
#@End @SubSection

@EndSubSections
@End @Section

@Section
    @Title { Solution adjustments }
    @Tag { resource_structural.adjust }
@Begin
@LP
This section presents solution adjustments
(Section {@NumberOf general_solvers.adjust}) for resource-structural
applications.
@BeginSubSections

@SubSection
    @Title { Changing the multipliers of cluster busy times monitors }
    @Tag { resource_structural.adjust.multiplier }
@Begin
@LP
Cluster busy times monitors formerly had a @I { multiplier }, which
was an integer that their true costs were multiplied by.  Multipliers
have been made redundant by @C { KheMonitorSetCombinedWeight }
(Section {@NumberOf monitoring_monitors}), but the solver they
supported is still available, with a change of interface:
@ID @C {
void KheSetClusterMonitorMultipliers(KHE_SOLN soln,
  KHE_SOLN_ADJUSTER sa, char *str, int val);
}
This finds each cluster busy times constraint @C { c } whose name
or Id contains @C { str }, and uses calls to
@C { KheSolnAdjusterMonitorChangeWeight } to multiply the combined weight of
each monitor derived from @C { c } by @C { val }.  If @C { sa != NULL },
then the monitors can easily be returned to their previous state later:
@ID @C {
sa = KheSolnAdjusterMake(soln);
KheSetClusterMonitorMultipliers(soln, sa, str, val);
do_something;
KheSolnAdjusterDelete(sa);
}
The multipliers are in place while @C { do_something } is running,
and removed afterwards.
@End @SubSection

@SubSection
    @Title { Tilting the plateau }
    @Tag { resource_structural.adjust.tilting }
@Begin
@LP
This section documents a rather left-field idea, which we call
@I { tilting the plateau }.  The idea is to consider a defect
near the start of the timetable to be worse than an equally
bad defect near the end of the timetable.  A local search method
like ejection chains will then believe that it has succeeded
when it moves a defect towards the end of the timetable.  The
hope is that over the course of several repairs, defects will
move all the way to the end and disappear.
@PP
The function for this is
@ID @C {
void KheTiltPlateau(KHE_SOLN soln, KHE_SOLN_ADJUSTER sa);
}
For each monitor @M { m } of @C { soln } whose combined weight @M { w }
satisfies @M { w > 0 }, @C { KheMonitorSetCombinedWeight }
(Section {@NumberOf monitoring_monitors}) is called to change the combined
weight of @M { m } from @M { w } to @M { wT - t }, where @M { T }
is the number of times in the instance, and @M { t } is the index of the
first time monitored by @M { m }, as returned by @C { KheMonitorTimeRange }
(Section {@NumberOf monitoring.sweep_times}), or 0 if
@C { KheMonitorTimeRange } returns @C { false }.  Multiplying
every monitor's weight by @M { T } does not really change the instance,
but subtracting @M { t } makes monitors near the end of the timetable
less costly than monitors near the start.
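@PP
For example (an illustrative calculation with invented numbers), suppose
the instance has @M { T = 30 } times, and two monitors each have combined
weight @M { w = 2 }, one first monitoring the time with index 5, the other
the time with index 25.  Their new combined weights are
@M { 2 times 30 - 5 = 55 } and @M { 2 times 30 - 25 = 35 }, so a defect
reported by the later monitor costs less than the same defect reported
by the earlier one.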
@PP
When @M { m } is a limit active intervals monitor whose combined
weight @M { w } satisfies @M { w > 0 }, the procedure is somewhat
different.  The new combined weight is @M { wT }, not @M { wT - t };
but then @M { m } itself is informed that tilting is in force, by
a call to @C { KheLimitActiveIntervalsMonitorSetTilt }
(Section {@NumberOf monitoring.limitactive}).  This causes @M { m }
to perform its own subtraction of @M { t } from each cost it reports,
but using a different value of @M { t } for each defective interval,
namely the index of the first time in that interval.  In this way,
defective intervals near the end cost less than defective intervals
near the start.
@PP
@C { KheTiltPlateau } may be used in conjunction with a solution adjuster:
@ID @C {
sa = KheSolnAdjusterMake(soln);
KheTiltPlateau(soln, sa);
do_something;
KheSolnAdjusterDelete(sa);
}
The tilt applies during @C { do_something }; @C { KheSolnAdjusterDelete }
removes it, including making the appropriate calls to
@C { KheLimitActiveIntervalsMonitorClearTilt }.  Alternatively,
the @C { sa } parameter of @C { KheTiltPlateau } may be @C { NULL },
but then there will be no simple way to remove the tilt.
@End @SubSection

@SubSection
    @Title { Propagating unavailable times to resource monitors }
    @Tag { resource_structural.adjust.unavail }
@Begin
@LP
A resource @M { r }'s @I { unavailable times }, @M { U sub r }, is a
set of times taken from certain monitors of non-zero weight that apply
to @M { r }:  all times in avoid unavailable times monitors, all times
in limit busy times monitors with maximum limit 0, and all times
in positive time groups of cluster busy times constraints with
maximum limit 0.  In this section we do not care about the weight of
these monitors, provided it is non-zero.  We simply combine all these
times into @M { U sub r }.
@PP
Suppose that @M { r } has a cluster busy times or limit active intervals
monitor @M { m } with a time group @M { T } such that @M { T subseteq U sub r }.
Then, although @M { T } could be busy, it is not likely to be busy,
and it is reasonable to let @M { m } know this, by calling
@C { KheClusterBusyTimesMonitorSetNotBusyState }
(Section {@NumberOf monitoring.clusterbusy}) or
@C { KheLimitActiveIntervalsMonitorSetNotBusyState }
(Section {@NumberOf monitoring.limitactive}).
@PP
KHE offers a solver that implements this idea:
@ID @C {
bool KhePropagateUnavailableTimes(KHE_SOLN soln, KHE_RESOURCE_TYPE rt);
}
For each resource @M { r } of type @C { rt } in @C { soln }'s instance
(or for each resource of the instance if @C { rt } is @C { NULL }), it
calculates @M { U sub r }, and, if @M { U sub r } is non-empty, it
checks every time group @M { T } in every cluster busy times and
limit active intervals monitor for @M { r }.  For each
@M { T subseteq U sub r }, it calls the function appropriate to
the monitor, with @C { active } set to @C { false } if @M { T }
is positive, and to @C { true } if @M { T } is negative.  It
returns @C { true } if it changed anything.
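@PP
As a usage sketch, the idea may be applied to every resource of the
instance, whatever its type, by passing @C { NULL } for @C { rt }:
@ID @C {
KhePropagateUnavailableTimes(soln, NULL);
}
The result merely reports whether anything changed, and may safely
be ignored.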
@PP
There is no corresponding function for undoing these settings.  As
cutoff indexes increase, the settings become irrelevant anyway.
@End @SubSection

@SubSection
    @Title { Changing the minimum limits of cluster busy times monitors }
    @Tag { resource_structural.adjust.minimums }
@Begin
@LP
Cluster busy times monitors have a @C { KheClusterBusyTimesMonitorSetMinimum }
operation (Section {@NumberOf monitoring.clusterbusy}) which changes
their minimum limits.  This section presents a method of making these
changes which might be useful during solving.
@PP
This method calculates the demand for resources at particular times,
which only really makes sense after all times are assigned.  So it
could reasonably be classified as a resource structural solver, but
since it helps to adjust monitor limits it has been documented here.
@PP
Consider this example from nurse rostering.  Suppose each resource
has a maximum limit on the number of weekends it can be busy.  Since
each resource can work at most 2 shifts per weekend, summing up
these maximum limits and multiplying by 2 gives the maximum number
of shifts that resources can work on weekends.  We call this the
@I { supply } of weekend shifts.
@PP
Now suppose we find the number of weekend shifts that the instance
requires nurses for.  Call this the @I { demand } for weekend shifts.
@PP
If demand equals or exceeds supply, each resource needs to work its
maximum number of weekends, or else some demands will not be covered.
In that case, the resources' maximum limits are also minimum limits.
The solver described here calculates supply and demand, leaving it to
the user to act on the results, for example by calling
@C { KheClusterBusyTimesMonitorSetMinimum }.
# record its results.  It takes all of the cluster busy times constraints
# of the instance, groups them so that constraints with the same time groups
# lie in one group, then does the calculations for those constraints.
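@PP
To make this concrete (with invented figures), suppose there are 10
nurses, each subject to a maximum limit of 3 busy weekends.  The supply
of weekend shifts is then @M { 10 times 3 times 2 = 60 }.  If the roster
requires 6 nurses on each of Saturday and Sunday across 5 weekends, the
demand is @M { 6 times 2 times 5 = 60 }.  Demand equals supply, so every
nurse must work 3 weekends, and each maximum limit of 3 is also a
minimum limit.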
@PP
To create a solver for doing this work, call
@ID @C {
KHE_CLUSTER_MINIMUM_SOLVER KheClusterMinimumSolverMake(HA_ARENA a);
}
It uses memory taken from arena @C { a }.  There is no operation to
delete the solver; it is deleted when @C { a } is freed.  To carry
out one solve, call
@ID @C {
void KheClusterMinimumSolverSolve(KHE_CLUSTER_MINIMUM_SOLVER cms,
  KHE_SOLN soln, KHE_OPTIONS options, KHE_RESOURCE_TYPE rt);
}
It uses @C { options } to find the common frame and event timetable
monitor.  It considers tasks and resources of type @C { rt } only.
It can be called any number of times to solve problems with unrelated
values of @C { soln }, @C { options }, and @C { rt }.
@PP
The attributes of the most recent solve may be found by calling
@ID @C {
KHE_SOLN KheClusterMinimumSolverSoln(KHE_CLUSTER_MINIMUM_SOLVER cms);
KHE_OPTIONS KheClusterMinimumSolverOptions(
  KHE_CLUSTER_MINIMUM_SOLVER cms);
KHE_RESOURCE_TYPE KheClusterMinimumSolverResourceType(
  KHE_CLUSTER_MINIMUM_SOLVER cms);
}
These will all be @C { NULL } before the first solve.  If a new
solve is begun with the same attributes as the previous solve,
it will produce the same outcome if the solution has not changed.
# When the solver is no longer needed,
# @ID @C {
# void KheClusterMinimumSolverDelete(KHE_CLUSTER_MINIMUM_SOLVER cms);
# }
# may be called to delete it (by recycling its arena back to @C { soln }).
@PP
The solve first finds the constraints that it can use:  all cluster
busy times constraints with non-zero cost and at least one time group,
whose time groups are pairwise disjoint (always true in practice) and
either all positive, in which case the constraint must have a
non-trivial maximum limit, or all negative, in which case it must
have a non-trivial minimum limit.
@PP
For each maximal non-empty subset of these constraints with the same time
groups (ignoring polarity) and the same `applies to' time group, the solve
makes one @I { group }, with its own supply and demand, for each offset
of the `applies to' time group.  To visit these groups, call
@ID @C {
int KheClusterMinimumSolverGroupCount(KHE_CLUSTER_MINIMUM_SOLVER cms);
KHE_CLUSTER_MINIMUM_GROUP KheClusterMinimumSolverGroup(
  KHE_CLUSTER_MINIMUM_SOLVER cms, int i);
}
There are several operations for querying a group.  To visit its
constraints, call
@ID {0.98 1.0} @Scale @C {
int KheClusterMinimumGroupConstraintCount(KHE_CLUSTER_MINIMUM_GROUP cmg);
KHE_CLUSTER_BUSY_TIMES_CONSTRAINT KheClusterMinimumGroupConstraint(
  KHE_CLUSTER_MINIMUM_GROUP cmg, int i);
}
To retrieve its constraint offset, call
@ID {0.98 1.0} @Scale @C {
int KheClusterMinimumGroupConstraintOffset(KHE_CLUSTER_MINIMUM_GROUP cmg);
}
The time groups may be retrieved from its first constraint.  To find
its supply, call
@ID @C {
int KheClusterMinimumGroupSupply(KHE_CLUSTER_MINIMUM_GROUP cmg);
}
This is calculated as described above for weekends; here is a
fully general description.
@PP
For each constraint @C { c } of @C { cmg } we calculate a supply, as
follows.  Suppose first that the constraint has non-trivial maximum
limit @C { max } and that all its time groups are positive.  Find,
for each time group @C { tg } of @C { c }, the number of frame time
groups that @C { tg } intersects with (taking the offset into
account).  This is the maximum number of times from @C { tg } that
a resource can be busy for.  Take the @C { max } largest of these
numbers and add them to get the supply of @C { c }.
@PP
If @C { c } has a non-trivial minimum limit @C { min } and all
its time groups are negative, set @C { max } to the number of
time groups minus @C { min } and proceed as in the positive case.
(For more on this transformation, see the theorem at the end of
Section {@NumberOf constraints.clusterbusy}.)
@PP
For each resource @C { r } of type @C { rt } we find a supply, as
follows.  If @C { r } is a point of application of at least one
constraint, its supply is the minimum of the supplies of its
constraints.  Otherwise, its supply is the sum, over all time
groups @C { tg }, of the number of frame time groups @C { tg }
intersects with.  @C { KheClusterMinimumGroupSupply } is the
sum, over all resources @C { r }, of the supply of @C { r }.
@PP
To find a group's demand, call
@ID @C {
int KheClusterMinimumGroupDemand(KHE_CLUSTER_MINIMUM_GROUP cmg);
}
This is the sum, over all times in the time groups of the group's
constraints (taking the offset into account), of the number of
tasks of type @C { rt } running at each time.
Finally,
@ID @C {
void KheClusterMinimumGroupDebug(KHE_CLUSTER_MINIMUM_GROUP cmg,
  int verbosity, int indent, FILE *fp);
}
produces a debug print of @C { cmg } onto @C { fp } with the given
verbosity and indent.
# To find the monitors associated with the group (that is, the
# monitors derived from the group's constraints, and its offset), call
# @ID @C {
# int KheClusterMinimumGroupMonitorCount(KHE_CLUSTER_MINIMUM_GROUP cmg);
# KHE_CLUSTER_BUSY_TIMES_MONITOR KheClusterMinimumGroupMonitor(
#   KHE_CLUSTER_MINIMUM_GROUP cmg, int i);
# }
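@PP
As a sketch, the groups of the most recent solve may be examined like
this (the fragment debug-prints each group whose demand equals or
exceeds its supply):
@ID @C {
for( i = 0;  i < KheClusterMinimumSolverGroupCount(cms);  i++ )
{
  cmg = KheClusterMinimumSolverGroup(cms, i);
  if( KheClusterMinimumGroupDemand(cmg) >= KheClusterMinimumGroupSupply(cmg) )
    KheClusterMinimumGroupDebug(cmg, 2, 4, stderr);
}
}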
@PP
There is also an operation for finding the group of a given monitor:
@ID @C {
bool KheClusterMinimumSolverMonitorGroup(KHE_CLUSTER_MINIMUM_SOLVER cms,
  KHE_CLUSTER_BUSY_TIMES_MONITOR cbtm, KHE_CLUSTER_MINIMUM_GROUP *cmg);
}
If @C { cms } has a group containing @C { cbtm }'s constraint and offset
(there can be at most one), this function returns @C { true } and sets
@C { *cmg } to that group.  Otherwise it returns @C { false } and sets
@C { *cmg } to @C { NULL }.
@PP
It is up to the caller to take it from here.  For example, after
carrying out a solve, for each cluster monitor @C { m } one could
call @C { KheClusterMinimumSolverMonitorGroup } to see whether it is
subject to a group.  Then if that group's demand equals or exceeds
its supply, a call to @C { KheClusterBusyTimesMonitorSetMinimum }
increases @C { m }'s minimum limit.  And so on.  However, the
solver does offer some convenience functions to help with this:
@ID @C {
void KheClusterMinimumSolverSetBegin(KHE_CLUSTER_MINIMUM_SOLVER cms);
void KheClusterMinimumSolverSet(KHE_CLUSTER_MINIMUM_SOLVER cms,
  KHE_CLUSTER_BUSY_TIMES_MONITOR m, int val);
void KheClusterMinimumSolverSetEnd(KHE_CLUSTER_MINIMUM_SOLVER cms,
  bool undo);
}
@C { KheClusterMinimumSolverSetBegin } begins a run of changes to
monitors' minimum limits.  @C { KheClusterMinimumSolverSet } makes a
call to @C { KheClusterBusyTimesMonitorSetMinimum }, and remembers
that the call was made.  @C { KheClusterMinimumSolverSetEnd } ends
the run of changes, and if @C { undo } is @C { true } it also undoes
them (in reverse order), returning the monitor limits to their values
when the run began.  Use of these functions is optional.
@PP
For convenience there is also
@ID @C {
void KheClusterMinimumSolverSetMulti(KHE_CLUSTER_MINIMUM_SOLVER cms,
  KHE_RESOURCE_GROUP rg);
}
where @C { rg }'s resource type must equal @C { cms }'s.  It calls
@C { KheClusterMinimumSolverMonitorGroup } for each cluster busy
times monitor @C { m } for each resource of @C { rg }.  If that
returns @C { true } and the group's demand equals or exceeds its
supply, then @C { m }'s minimum limit is changed to its maximum
limit.  Neither @C { KheClusterMinimumSolverSetBegin } nor
@C { KheClusterMinimumSolverSetEnd } is called by
@C { KheClusterMinimumSolverSetMulti } itself.  The user must
call @C { KheClusterMinimumSolverSetBegin } first, as usual, and
is free to call @C { KheClusterMinimumSolverSetEnd } immediately
with @C { undo } set to @C { false }, or later with @C { undo }
set to @C { true }.  Omitting the call to
@C { KheClusterMinimumSolverSetEnd } altogether is probably not a
good idea, since it will leave @C { cms } unable to accept further
calls to @C { KheClusterMinimumSolverSetBegin }.
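@PP
Putting all this together, a typical solve might run as follows (a
sketch; whether to undo at the end is up to the user):
@ID @C {
cms = KheClusterMinimumSolverMake(a);
KheClusterMinimumSolverSolve(cms, soln, options, rt);
KheClusterMinimumSolverSetBegin(cms);
KheClusterMinimumSolverSetMulti(cms, rg);
do_something;
KheClusterMinimumSolverSetEnd(cms, true);
}
Here @C { rg } is a resource group of type @C { rt }.  The tightened
minimum limits are in force while @C { do_something } is running, and
are removed afterwards.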
@PP
Finally, function
@ID @C {
void KheClusterMinimumSolverDebug(KHE_CLUSTER_MINIMUM_SOLVER cms,
  int verbosity, int indent, FILE *fp);
}
produces the usual debug print of @C { cms } onto @C { fp } with
the given verbosity and indent.
@PP
Cluster minimum solvers deal only with cluster busy times constraints.
Other constraints might help to reduce supply further.  For example, if
a resource is unavailable for an entire day, that will reduce supply by
1.  At present these kinds of ideas are not taken into account.
@End @SubSection

@SubSection
    @Title { Allowing split assignments }
    @Tag { resource_structural.adjust.allow_splits }
@Begin
@LP
A good way to minimize split assignments is to prohibit them at
first but allow them later.  To change a tasking from the first
state to the second, call
@ID @C {
bool KheAllowSplitAssignments(KHE_SOLN soln, KHE_RESOURCE_TYPE rt,
  bool unassigned_only);
}
It unfixes and unassigns all tasks assigned to the tasks of
@C { soln } whose resource type is @C { rt }, returning
@C { true } if it changed anything.  If one of the original
unfixed tasks is assigned to a cycle task, the tasks unassigned
from it are reassigned to that cycle task, so that existing
resource assignments are not forgotten.  If @C { unassigned_only }
is @C { true }, only the unassigned tasks are affected.  (This
option is included for completeness, but it is not recommended,
since it leaves few choices open.)
@C { KheAllowSplitAssignments } preserves the resource
assignment invariant.
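@PP
For example, the prohibit-then-allow pattern might look like this
(a sketch; @C { do_initial_assignment } and @C { do_repair } stand
for whatever assignment and repair steps the user prefers):
@ID @C {
do_initial_assignment;
KheAllowSplitAssignments(soln, rt, false);
do_repair;
}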
@End @SubSection

@SubSection
    @Title { Enlarging task domains }
    @Tag { resource_structural.adjust.enlarge_domains }
@Begin
@LP
If any room or any teacher is better than none, then it will
be worth assigning any resource to tasks that remain unassigned
at the end of resource assignment.  Function
@ID @C {
void KheEnlargeDomains(KHE_SOLN soln, KHE_RESOURCE_TYPE rt,
  bool unassigned_only);
}
permits this by enlarging the domains of the tasks of @C { soln }
whose resource type is @C { rt } and any tasks assigned to them
(and so on recursively) to the full set of resources of @C { rt }.
If @C { unassigned_only } is true, only unassigned tasks are
affected.  The tasks are visited in postorder---that is, a task's
domain is enlarged only after the domains of the tasks assigned to
it have been enlarged---ensuring that the operation cannot fail.
The domains of preassigned tasks are not enlarged.
@PP
This operation works, naturally, by deleting all task bounds from
the tasks it changes.  Any task bounds that become applicable to no
tasks as a result of this are deleted.
@End @SubSection

@EndSubSections
@End @Section

#@Section
#    @Title { Grouping by resource constraints (old) }
#    @Tag { resource_structural.constraints }
#@Begin
#@LP
#@I { Grouping by resource constraints } is KHE's term for a method
#of grouping tasks together, forcing the tasks in each group to
#be assigned the same resource, when all other ways of assigning
#resources to those tasks can be shown to have non-zero cost.  That
#does not mean that those tasks will always be assigned the same resource
#in good solutions, any more than, say, a constraint requiring nurses
#to work complete weekends is always satisfied in good solutions.
#However, in practice those tasks usually do end up being assigned the
#same resource, so it makes sense to require that, at least to begin
#with.  Later we can remove the groupings and see what happens.
#@PP
#@C { KheTaskTreeMake } also groups tasks, but its groups are based
#on avoid split assignments constraints, whereas here we make groups
#based on resource constraints.
#@PP
#The function is
#@ID @C {
#bool KheGroupByResourceConstraints(KHE_SOLN soln, KHE_RESOURCE_TYPE rt,
#  KHE_OPTIONS options, KHE_TASK_SET ts);
#}
#There is no @C { tasking } parameter because this kind of grouping
#cannot be applied to an arbitrary set of tasks, as it turns out.
#Instead, it applies to all tasks of @C { soln } whose resource
#type is @C { rt }, which lie in a meet which is assigned a time,
#and for which non-assignment may have a cost (discussed later).
#If @C { rt } is @C { NULL }, @C { KheGroupByResourceConstraints }
#applies itself to each of the resource types of @C { soln }'s
#instance in turn.  It tries to group these tasks, returning
#@C { true } if it groups any.
#@PP
#For each resource type, @C { KheGroupByResourceConstraints } finds
#whatever groups it can.  It makes each such @I { task group } by
#choosing one of its tasks as the @I { leader task } and assigning
#the others to it.  It makes assignments only to proper root tasks
#(non-cycle tasks not already assigned to other non-cycle tasks),
#so it does not disturb existing groups.  But it does take existing
#groups into account:  it will use tasks to which other tasks are
#assigned in its own groups.
#@PP
#Tasks which are initially assigned a resource participate in
#grouping.  Such a task may have its assignment changed to some
#other task, but in that case the other task will be assigned the
#resource.  In other words, if one task is assigned a resource
#initially, and it gets grouped, then its whole group will be
#assigned that resource afterwards.  Two tasks initially assigned
#different resources will never be grouped together.
#@PP
#On the other hand, tasks whose assignments are fixed are ignored.
#It is true that they could become leader tasks, since the assignments
#of leader tasks are not changed, but there are other considerations
#when choosing leader tasks, and to add fixing to the mix has been
#deemed by the author to be too much at present.
#In practice fixed tasks are fixed by @C { KheAssignByHistory }
#(Section {@NumberOf resource_solvers.assignment.history}), so they
#are already grouped (in effect) and it is reasonable to ignore them.
#@PP
#If @C { ts } is non-@C { NULL }, every task that
#@C { KheGroupByResourceConstraints } assigns to another task is added
#to @C { ts }.  So the groups can be removed when they are no longer
#wanted, by running through @C { ts } and unassigning its tasks.
#@C { KheTaskSetUnGroup } (Section {@NumberOf extras.task_sets}) does this.
## @PP
## if @C { r_ts } is non-@C { NULL }, every task that
## @C { KheGroupByResourceConstraints } assigns a resource to
## is added to @C { r_ts }.  Only @C { KheGroupByHistory }
## (Section {@NumberOf resource_structural.constraints.history})
## assigns resources to tasks.
#@PP
#@C { KheGroupByResourceConstraints } uses two kinds of grouping.
#The first, @I { combinatorial grouping }, tries all combinations of
#assignments over a few consecutive days, building a group when just
#one of those combinations has zero cost, according to the cluster
#busy times and limit busy times constraints that monitor those days.
#The second, @I { profile grouping }, uses limit active intervals
#constraints to find different kinds of groups.  All this is
#explained below.
#@PP
#@C { KheGroupByResourceConstraints } consults option
#@C { rs_invariant }, and also
#@TaggedList
#
#@DTI { @F rs_group_by_rc_off } @OneCol {
#A Boolean option which, when @C { true }, turns grouping by
#resource constraints off.
#}
#
#@DTI { @F rs_combinatorial_grouping_max_days } @OneCol {
#An integer option which determines the maximum number of consecutive days
#(in fact, time groups of the common frame) examined by combinatorial grouping
#(Section {@NumberOf resource_structural.constraints.combinatorial}).
#Values 0 or 1 turn combinatorial grouping off.  The default value is 3.
#}
#
#@DTI { @F rs_group_by_rc_combinatorial_off } @OneCol {
#A Boolean option which, when @C { true }, turns combinatorial grouping off.
#}
#
#@DTI { @F rs_group_by_rc_profile_off } @OneCol {
#A Boolean option which, when @C { true }, turns profile grouping off.
#}
#
#@EndList
#It also calls @C { KheFrameOption } (Section {@NumberOf extras.frames})
#to obtain the common frame, and retrieves the event timetable monitor
#from option @C { gs_event_timetable_monitor }
#(Section {@NumberOf general_solvers.general}).
#@PP
#The following subsections describe how @C { KheGroupByResourceConstraints }
#works in detail.  It has several parts, which are available separately,
#as we will see.  For each resource type, it starts by building a tasker
#and adding the time groups of the common frame to it as overlap time
#groups (Section {@NumberOf resource_structural.constraints.taskers}).
#Then, using this tasker, it performs combinatorial grouping by calling
#@C { KheCombGrouping }
#(Section {@NumberOf resource_structural.constraints.applying}), and
#profile grouping by calling @C { KheProfileGrouping }
#(Section {@NumberOf resource_structural.constraints.profile}),
#first with @C { non_strict } set to @C { false }, then again with
#@C { non_strict } set to @C { true }.
#@BeginSubSections
#
#@SubSection
#  @Title { Taskers }
#  @Tag { resource_structural.constraints.taskers }
#@Begin
#@LP
#A @I { tasker } is an object of type @C { KHE_TASKER } that
#facilitates grouping by resource constraints.  We'll see how to
#create one shortly; but first, we introduce two other types that
#taskers use.
#@PP
#Taskers deal directly only with proper root tasks (tasks which are
#either unassigned, or assigned directly to a cycle task, that is,
#to a resource).  Tasks whose assignments are fixed are skipped over
#by taskers, as discussed above.  Taskers consider two (unfixed) proper
#root tasks to be equivalent when they have equal domains and assigned
#resources (possibly @C { NULL }), and they cover the same set of times.
#(A task @I covers a time when it, or some task assigned directly
#or indirectly to it, is running at that time.)  Equivalent tasks
#are interchangeable with respect to resource assignment:  they
#may be assigned the same resources, and their effect on resource
#constraints is the same.  Identifying equivalent tasks is vital
#in grouping; without it, virtually no group could be shown to
#be the only zero-cost option.
## @PP
## Taskers consider two tasks to be equivalent when @C { KheTaskEquivalent }
## (Section {@NumberOf solutions.tasks}) says that they are equivalent,
## and their assigned resources are equal (possibly @C { NULL }).  Two
## equivalent tasks are interchangeable with respect to resource
## assignment:  they may be assigned the same resources, and their
## effect on resource constraints is the same.  Identifying equivalent
## tasks is vital in grouping; without it, virtually no group could be
## shown to be the only zero-cost option.
#@PP
#A @I class is an object of type @C { KHE_TASKER_CLASS }, representing
#an equivalence class of tasks (a set of equivalent tasks).  Each task
#known to a tasker lies in exactly one class.  The user cannot create
#these classes; they are created and kept up to date by the tasker.
#@PP
#The tasks of an equivalence class may be visited by
#@ID @C {
#int KheTaskerClassTaskCount(KHE_TASKER_CLASS c);
#KHE_TASK KheTaskerClassTask(KHE_TASKER_CLASS c, int i);
#}
#There must be at least one task, because if a class becomes empty,
#it is deleted by the tasker.
#@PP
#The three attributes that equivalent tasks share may be retrieved by
#@ID @C {
#KHE_RESOURCE_GROUP KheTaskerClassDomain(KHE_TASKER_CLASS c);
#KHE_RESOURCE KheTaskerClassAsstResource(KHE_TASKER_CLASS c);
#KHE_TIME_SET KheTaskerClassTimeSet(KHE_TASKER_CLASS c);
#}
#These return the domain (from @C { KheTaskDomain }) that the tasks of
#@C { c } share, their assigned resource (from @C { KheTaskAsstResource }),
#and the set of times they each cover.  The user must not modify the
#value returned by @C { KheTaskerClassTimeSet }.  Function
#@ID @C {
#void KheTaskerClassDebug(KHE_TASKER_CLASS c, int verbosity,
#  int indent, FILE *fp);
#}
#produces a debug print of @C { c } onto @C { fp } with the given
#verbosity and indent.
#@PP
#The other type that taskers use represents one time.  The type is
#@C { KHE_TASKER_TIME }.  Again, the tasker creates objects of these
#types, and keeps them up to date.  Function
#@ID @C {
#KHE_TIME KheTaskerTimeTime(KHE_TASKER_TIME t);
#}
#returns the time that @C { t } represents.
#@PP
#The tasks of an equivalence class all run at the same times, and so
#for each time, either every task of an equivalence class is running
#at that time, or none of them are.  Accordingly, to visit the tasks
#running at a particular time, we actually visit classes:
#@ID @C {
#int KheTaskerTimeClassCount(KHE_TASKER_TIME t);
#KHE_TASKER_CLASS KheTaskerTimeClass(KHE_TASKER_TIME t, int i);
#}
#Each equivalence class appears in one time object for each time
#that its tasks are running, giving a many-to-many relationship
#between time objects and class objects.  Function
#@ID @C {
#void KheTaskerTimeDebug(KHE_TASKER_TIME t, int verbosity,
#  int indent, FILE *fp);
#}
#produces a debug print of @C { t } onto @C { fp } with the given
#verbosity and indent.
#@PP
#We turn now to taskers themselves.  To create a tasker, call
#@ID {0.98 1.0} @Scale @C {
#KHE_TASKER KheTaskerMake(KHE_SOLN soln, KHE_RESOURCE_TYPE rt,
#  KHE_TASK_SET task_set, HA_ARENA a);
#}
#@C { KheTaskerMake } gathers all unfixed proper root tasks (tasks
#which are either unassigned, or assigned directly to a cycle task
#representing a resource) of @C { soln } whose resource type is
#@C { rt }, for which non-assignment may have a cost (see below),
#and which lie in meets with an assigned time.  The meets' time
#assignments are assumed to be fixed for the lifetime of the
#tasker; if they change, errors will occur.  From here on, `task'
#means one of these tasks, unless stated otherwise.
## event resources for which @C { KheEventResourceNeedsAssignment }
## (Section {@NumberOf event_resources}) returns @C { KHE_YES } are
## @PP
## If @C { include_assigned_tasks } is @C { true }, tasks assigned a
## resource are included, otherwise they are excluded.  The author
## sets this to @C { false }, so as to exclude tasks that have
## already been assigned a resource by @C { KheAssignByHistory }
## (Section {@NumberOf resource_solvers.assignment.requested}).
## History is not taken into account by grouping, which is not
## ideal, but this simple alternative to that works quite well.
#@PP
#The tasker's attributes may be accessed by
#@ID @C {
#KHE_SOLN KheTaskerSoln(KHE_TASKER tr);
#KHE_RESOURCE_TYPE KheTaskerResourceType(KHE_TASKER tr);
#KHE_TASK_SET KheTaskerTaskSet(KHE_TASKER tr);
#HA_ARENA KheTaskerArena(KHE_TASKER tr);
#}
#A tasker object remains in existence until its arena, @C { a },
#is deleted or recycled.
#@PP
#It seems wrong to group a task for which non-assignment has a cost
#with a task for which non-assignment has no cost.  But what to do
#about this issue is a puzzle.  Simply refusing to group such tasks
#would not address all the relevant issues, e.g. whether to include
#both types in profiles.  At present, if the instance contains at
#least one assign resource constraint, then only tasks derived from
#event resources for which @C { KheEventResourceNeedsAssignment }
#(Section {@NumberOf event_resources}) returns @C { KHE_YES } are
#considered for grouping.  If the instance contains no assign resource
#constraints, then only tasks derived from event resources for which
#@C { KheEventResourceNeedsAssignment } returns @C { KHE_MAYBE }
#are considered for grouping.  This is basically a stopgap.
#@PP
#Tasks are grouped by calls to @C { KheTaskMove }, each of which
#assigns one follower task to a leader task.  This removes the
#follower task from the set of tasks of interest to the tasker,
#and it usually enlarges the set of times covered by the leader task,
#placing it into a different equivalence class.  The main purpose
#of the tasker object is to keep track of these changes.
#@PP
#If @C { task_set } is non-@C { NULL }, each follower task assigned
#during grouping is added to it.  This makes it easy to remove the
#groups later, when they are no longer wanted, by running through
#@C { task_set } and unassigning each of its tasks.  @C { KheTaskSetUnGroup }
#(Section {@NumberOf extras.task_sets}) does this.
#@PP
#@C { KheTaskerMake } places its tasks into classes indexed by time.
#To visit each time, call
#@ID @C {
#int KheTaskerTimeCount(KHE_TASKER tr);
#KHE_TASKER_TIME KheTaskerTime(KHE_TASKER tr, int i);
#}
#Here @C { KheTaskerTimeTime(KheTaskerTime(tr, KheTimeIndex(t))) == t }
#for all times @C { t }.  @C { KheTaskerTimeCount(tr) } returns the same
#value as @C { KheInstanceTimeCount(ins) }, where @C { ins } is
#@C { tr }'s solution's instance.  From each @C { KHE_TASKER_TIME }
#object one can access the classes running at that time, and
#the tasks of those classes, using functions introduced above.
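#@PP
#For example, the times of a tasker may be visited like this (a
#sketch only; variables @C { i } and @C { tt } are assumed to be
#declared by the caller):
#@ID @C {
#for( i = 0;  i < KheTaskerTimeCount(tr);  i++ )
#{
#  tt = KheTaskerTime(tr, i);
#  /* visit the classes running at tt here */
#}
#}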
#@PP
#Finally,
#@ID @C {
#void KheTaskerDebug(KHE_TASKER tr, int verbosity, int indent, FILE *fp);
#}
#produces a debug print of @C { tr } onto @C { fp } with the given
#verbosity and indent.
#@End @SubSection
#
#@SubSection
#  @Title { Tasker support for grouping }
#  @Tag { resource_structural.constraints.groupings }
#@Begin
#@LP
#Taskers keep their classes up to date as tasks are grouped.  However,
#they can't know by magic that tasks are being grouped.  So it's wrong to
#call platform operations like @C { KheTaskAssign } and @C { KheTaskMove }
#directly while using a tasker.  @C { KheTaskAddTaskBound } is also out
#of bounds.  Instead, proceed as follows.
#@PP
#A @I grouping is a set of classes used for grouping tasks.  A group is
#made by taking any one task out of each class in the grouping, choosing
#one to be the leader task, assigning the others (called the followers)
#to it, and inserting the leader task into some other class appropriate
#to it, where it is available to participate in other groupings.
#@PP
#When a task is taken out of a class, the class may become empty, in
#which case the tasker deletes that class.  When follower tasks are
#assigned to a leader task, the set of times covered by it usually
#changes, and the tasker may need to create a new class object to hold
#it.  So class objects may be both created and destroyed by the tasker
#when tasks are grouped.
## (The tasker holds a free list of class objects.)
#@PP
#A tasker may handle any number of groupings over its lifetime, but at
#any moment there is at most one grouping.  The operations for building
#this @I { current grouping } are:
#@ID @C {
#void KheTaskerGroupingClear(KHE_TASKER tr);
#bool KheTaskerGroupingAddClass(KHE_TASKER tr, KHE_TASKER_CLASS c);
#bool KheTaskerGroupingDeleteClass(KHE_TASKER tr, KHE_TASKER_CLASS c);
#int KheTaskerGroupingBuild(KHE_TASKER tr, int max_num, char *debug_str);
#}
#These call the platform operations, as well as keeping the tasker up
#to date.
#@PP
#@C { KheTaskerGroupingClear } starts off a grouping, clearing out
#any previous grouping.
#@PP
#@C { KheTaskerGroupingAddClass }, which may be called any number of
#times, adds @C { c } to the current grouping.  If there is a problem
#with this, it returns @C { false } and changes nothing.  These
#potential problems (there are two kinds) are explained below.
#@PP
#@C { KheTaskerGroupingDeleteClass } undoes a call to
#@C { KheTaskerGroupingAddClass } with the same @C { c } that
#returned @C { true }.  Deleting @C { c } might not be possible, since it
#might leave the grouping with no viable leader class (for which
#see below).  @C { KheTaskerGroupingDeleteClass } returns @C { false }
#in that case, and changes nothing.  This cannot happen if classes
#are deleted in stack order (last in first out), because each
#deletion then returns the grouping to a viable previous state.
#@PP
#@C { KheTaskerGroupingBuild } ends the grouping.  It makes some groups and
#returns the number it made.  Each group is either made completely, or
#not at all.  The number of groups made is the minimum of @C { max_num }
#and the smallest @C { KheTaskerClassTaskCount } value among the
#grouping's classes.  It then removes all classes from the grouping, like
#@C { KheTaskerGroupingClear } does, understanding that some may have
#already been destroyed by being emptied out by @C { KheTaskerGroupingBuild }.
#@PP
#It is acceptable to add just one class, in which case the `groups' are
#just tasks from that class, no assignments are made, and nothing actually
#changes in the tasker's data structure.  If this is not wanted, then
#the caller should ensure that @C { KheTaskerGroupingClassCount }
#(see below) is at least 2 before calling @C { KheTaskerGroupingBuild }.
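#@PP
#Putting these operations together, a typical grouping might run as
#follows.  This is a sketch only; classes @C { c1 } and @C { c2 } are
#assumed to come from some selection procedure not shown here:
#@ID @C {
#KheTaskerGroupingClear(tr);
#if( KheTaskerGroupingAddClass(tr, c1) &&
#    KheTaskerGroupingAddClass(tr, c2) &&
#    KheTaskerGroupingClassCount(tr) >= 2 )
#  count = KheTaskerGroupingBuild(tr, INT_MAX, "example grouping");
#}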
#@PP
#Parameter @C { debug_str } is used only by debugging code, to
#say why a group was made.  For example, its value might be
#@C { "combinatorial grouping" } or @C { "profile grouping" }.
#@PP
#At any time, the classes of the current grouping may be
#accessed by calling
#@ID @C {
#int KheTaskerGroupingClassCount(KHE_TASKER tr);
#KHE_TASKER_CLASS KheTaskerGroupingClass(KHE_TASKER tr, int i);
#}
#in the usual way.  They will not usually be returned in the
#order they were added, however; in particular, the class that
#the tasker currently intends to use as the leader class has
#index 0.
#@PP
#We now describe the two problems that make
#@C { KheTaskerGroupingAddClass } return @C { false }.  The first
#problem concerns leader tasks.  Tasks are grouped by choosing one
#task as the leader and assigning the others to it.  So one of the
#classes added by @C { KheTaskerGroupingAddClass } has to be chosen as
#the one that leader tasks will be taken from (the @I { leader class }).
#The tasker does this automatically in a way that usually works well.
#(It chooses any class whose tasks are already assigned a resource,
#or if there are none of those, a class whose domain has minimum
#cardinality, and checks that the first task of each of the other
#classes can be assigned to the first task of that class without
#changing any existing resource assignment.)  But in rare cases, the
#domains of two classes may be such that neither is a subset of the
#other, or two classes may be initially assigned different resources.
#@C { KheTaskerGroupingAddClass } returns @C { false } in such cases.
#@PP
#The second problem concerns the times covered by the classes.  It
#would not do to group together two tasks which cover the same time,
#because then, when a resource is assigned to the grouped task, the
#resource would have a clash.  More generally, if a resource cannot
#be assigned to two tasks on the same day (for example), then it
#would not do to group two tasks which cover two times from the
#same day.  To help with this, the tasker has functions
#@ID @C {
#void KheTaskerAddOverlapFrame(KHE_TASKER tr, KHE_FRAME frame);
#void KheTaskerDeleteOverlapFrame(KHE_TASKER tr);
#}
## void KheTaskerAddOverlapTimeGroup(KHE_TASKER tr, KHE_TIME_GROUP tg);
## void KheTaskerClearOverlapTimeGroups(KHE_TASKER tr);
#@C { KheTaskerAddOverlapFrame } informs the tasker that a resource
#should not be assigned two tasks that cover the same time group of
#@C { frame }.  If this condition would be violated by some call to
#@C { KheTaskerGroupingAddClass }, then that call returns @C { false }
#and adds nothing.  @C { KheTaskerDeleteOverlapFrame }, which is never
#needed in practice, removes this requirement.
## @C { KheTaskerAddOverlapTimeGroup } may be called any number of times.
## It informs the tasker that a group which covers two times from @C { tg }
## (or one time twice) is not permitted.  If some call to
## @C { KheTaskerGroupingAddClass } would violate this condition, then that call
## returns @C { false } and adds nothing.  @C { KheTaskerAddOverlapFrame }
## calls @C { KheTaskerAddOverlapTimeGroup } for each time group
## of @C { frame }.  And @C { KheTaskerClearOverlapTimeGroups }, which
## is never needed in practice, clears away all overlap time groups.
#@PP
#If overlaps are prevented in this way, the same class cannot be added
#to a grouping twice.  So there is no need to prohibit that explicitly.
## @PP
## Each time may lie in at most one overlap time group.  There is no
## logical need for this, but it simplifies the implementation, and
## it is true in practice (i.e. when overlap time groups are derived
## from frames).  @C { KheTaskerAddOverlapTimeGroup } and
## @C { KheTaskerAddOverlapFrame } may not be called when a grouping
## is under construction.
#@PP
#When @C { KheTaskerGroupingAddClass } returns @C { false }, the caller
#has two options.  One is to abandon this grouping altogether, which
#is done by not calling @C { KheTaskerGroupingBuild }.  The next call to
#@C { KheTaskerGroupingClear } will clear everything out for a fresh
#start.  The other option is to continue with the grouping, finding
#other classes to add.  This is done by making zero or more other
#calls to @C { KheTaskerGroupingAddClass }, followed by
#@C { KheTaskerGroupingBuild }.
#@PP
#After one grouping is completed, the user may start another.  The tasker
#will have been updated by the previous @C { KheTaskerGroupingBuild }
#to no longer contain the ungrouped tasks but instead to contain the
#grouped ones.  They can become elements of new groups.
#@PP
#@C { KHE_TASKER_CLASS } objects may be created by
#@C { KheTaskerGroupingBuild }, to hold the newly created groups,
#and also destroyed, because empty classes are deleted.  So
#variables of type @C { KHE_TASKER_CLASS } may become
#undefined when @C { KheTaskerGroupingBuild } is called.
#@PP
#Although @C { KheTaskerGroupingAddClass } can be used to check whether
#a class can be added, it may be convenient to check for overlap in
#advance.  For this there are functions
#@ID @C {
#bool KheTaskerTimeOverlapsGrouping(KHE_TASKER_TIME t);
#bool KheTaskerClassOverlapsGrouping(KHE_TASKER_CLASS c);
#}
#@C { KheTaskerTimeOverlapsGrouping } returns @C { true } if @C { t }
#lies in an overlap time group which is currently covered by a class of
#the current grouping.  @C { KheTaskerClassOverlapsGrouping } returns
#@C { true } if any of the times covered by @C { c } is already so covered.
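#@PP
#For example, a caller might screen a class for overlap before
#trying to add it (a sketch only):
#@ID @C {
#if( !KheTaskerClassOverlapsGrouping(c) &&
#    KheTaskerGroupingAddClass(tr, c) )
#{
#  /* c is now part of the current grouping */
#}
#}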
## @PP
## Consider the following scenario.  A grouping is constructed which
## includes a class with an assigned resource.  Other classes in the
## grouping do not have the assigned resource, but they overlap in time
## with classes that do.  When a group is made from the grouping, there
## will be a clash.  This scenario is not explicitly prevented.  It
## underlies the importance of not just accepting the groups made by a
## grouping; one must check their cost.  These functions help with that:
## @ID @C {
## bool KheTaskerGroupingTestAsstBegin(KHE_TASKER tr, KHE_RESOURCE *r);
## void KheTaskerGroupingTestAsstEnd(KHE_TASKER tr);
## }
## @C { KheTaskerGroupingTestAsstBegin } selects a suitable resource
## and assigns it to tasks that form a group in the current grouping
## (skipping assigned tasks).  If it succeeds, it sets @C { *r } to the
## resource it used and returns @C { true }, otherwise it undoes any
## changes, sets @C { *r } to @C { NULL },  and returns @C { false }.
## @C { KheTaskerGroupingTestAsstEnd } undoes what a successful call
## to @C { KheTaskerGroupingTestAsstBegin } did.  It must be called,
## or else errors will occur in the tasker.
## @PP
## A suitable resource is either one that is already assigned to one
## or more tasks of the grouping, or else it is the first resource
## from the domain of the leader class that is free at the times
## covered by all of the classes of the grouping, taking any overlap
## frame into account.  If there is no such resource (not likely),
## @C { KheTaskerGroupingTestAsstBegin } returns @C { false }.
#@End @SubSection
#
#@SubSection
#  @Title { Tasker support for profile grouping }
#  @Tag { resource_structural.constraints.pgroupings }
#@Begin
#@LP
#Taskers also have functions which support profile grouping
#(Section {@NumberOf resource_structural.constraints.profile}).  To
#set and retrieve the @I { profile maximum length }, the calls are
#@ID @C {
#void KheTaskerSetProfileMaxLen(KHE_TASKER tr, int profile_max_len);
#int KheTaskerProfileMaxLen(KHE_TASKER tr);
#}
#The profile maximum length can only be set when there are no
#profile time groups.
#@PP
#To visit the sequence of @I { profile time groups } maintained by the
#tasker, the calls are
#@ID @C {
#int KheTaskerProfileTimeGroupCount(KHE_TASKER tr);
#KHE_PROFILE_TIME_GROUP KheTaskerProfileTimeGroup(KHE_TASKER tr, int i);
#}
#To make one profile time group and add it to the end of the tasker's
#sequence, and to delete a profile time group, the calls are
#@ID @C {
#KHE_PROFILE_TIME_GROUP KheProfileTimeGroupMake(KHE_TASKER tr,
#  KHE_TIME_GROUP tg);
#void KheProfileTimeGroupDelete(KHE_PROFILE_TIME_GROUP ptg);
#}
#The last profile time group is moved to the position of the
#deleted one, which only makes sense in practice when all
#the profile time groups are being deleted.  So a better
#function to call is
#@ID @C {
#void KheTaskerDeleteProfileTimeGroups(KHE_TASKER tr);
#}
#which deletes all of @C { tr }'s profile time groups.  They go
#into a free list in the tasker.
#@PP
#Functions
#@ID @C {
#KHE_TASKER KheProfileTimeGroupTasker(KHE_PROFILE_TIME_GROUP ptg);
#KHE_TIME_GROUP KheProfileTimeGroupTimeGroup(KHE_PROFILE_TIME_GROUP ptg);
#}
#retrieve a profile time group's tasker and time group.
#@PP
#A profile time group's @I { cover } is the number of @I { cover tasks }:
#tasks that cover the time group, ignoring tasks that cover more than
#@C { profile_max_len } profile time groups.  This is returned by
#@ID @C {
#int KheProfileTimeGroupCover(KHE_PROFILE_TIME_GROUP ptg);
#}
#The profile time group also keeps track of the @I { domain cover }:
#the number of cover tasks with a given domain.  Two domains are
#considered to be equal if @C { KheResourceGroupEqual } says that
#they are.  To visit the (distinct) domains of a profile time group,
#in increasing domain size order, the calls are
#@ID @C {
#int KheProfileTimeGroupDomainCount(KHE_PROFILE_TIME_GROUP ptg);
#KHE_RESOURCE_GROUP KheProfileTimeGroupDomain(KHE_PROFILE_TIME_GROUP ptg,
#  int i, int *cover);
#}
#@C { KheProfileTimeGroupDomain } returns the domain cover as well as the
#domain itself.  The sum of the domain covers is the cover.  There is also
#@ID @C {
#bool KheProfileTimeGroupContainsDomain(KHE_PROFILE_TIME_GROUP ptg,
#  KHE_RESOURCE_GROUP domain, int *cover);
#}
#which searches @C { ptg }'s list of domains for @C { domain },
#returning @C { true } and setting @C { *cover } to the domain
#cover if it is found.
#@PP
#@C { KheProfileTimeGroupDomain } and
#@C { KheProfileTimeGroupContainsDomain } may return 0
#for @C { *cover }, when tasks with a given domain enter
#the profile and later leave it.
#@PP
#Profile grouping algorithms will group tasks while these functions
#are being called.  The sequence of profile time groups is unaffected
#by grouping, but covers and domain covers will change if the grouped
#tasks cover more than @C { profile_max_len } profile time groups.
#The domains of a profile time group may also change during grouping,
#when tasks with unequal domains are grouped.  Altogether it is safest
#to discontinue a partially completed traversal of the domains of a
#profile time group when a grouping occurs.
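#@PP
#For example, the distinct domains of a profile time group and their
#domain covers may be visited like this (a sketch only, assuming no
#grouping occurs during the traversal):
#@ID @C {
#for( i = 0;  i < KheProfileTimeGroupDomainCount(ptg);  i++ )
#{
#  domain = KheProfileTimeGroupDomain(ptg, i, &cover);
#  /* domain is a KHE_RESOURCE_GROUP covered by cover tasks */
#}
#}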
#@PP
#There are also a few functions on tasker classes that relate
#to profile time groups.  First,
#@ID @C {
#bool KheTaskerClassCoversProfileTimeGroup(KHE_TASKER_CLASS c,
#  KHE_PROFILE_TIME_GROUP ptg);
#}
#returns @C { true } if @C { c } covers @C { ptg }.  Each class
#keeps track of the times from profile time groups that it covers.
#Functions
#@ID @C {
#int KheTaskerClassProfileTimeCount(KHE_TASKER_CLASS c);
#KHE_TASKER_TIME KheTaskerClassProfileTime(KHE_TASKER_CLASS c, int i);
#}
#visit these times in an unspecified order.
#@PP
#Function
#@ID @C {
#void KheTaskerProfileDebug(KHE_TASKER tr, int verbosity, int indent,
#  FILE *fp);
#}
#prints the profile groups of @C { tr } onto @C { fp }, with the
#classes that cover not more than @C { profile_max_len } of them.
#@End @SubSection
#
#@SubSection
#  @Title { Combinatorial grouping }
#  @Tag { resource_structural.constraints.combinatorial }
#@Begin
#@LP
#Suppose that there are two kinds of shifts (tasks), day and night;
#that a resource must be busy on both days of the weekend or neither;
#and that a resource cannot work a day shift on the day after a night
#shift.  Then resources assigned to the Saturday night shift must work
#on Sunday, and so must work the Sunday night shift.  So it makes sense
#to group one Saturday night shift with one Sunday night shift, and to
#do so repeatedly until night shifts run out on one of those days.
#@PP
#Suppose that the groups just made consume all the Sunday night shifts.
#Then those working the Saturday day shifts cannot work the Sunday
#night shifts, because the Sunday night shifts are grouped with
#Saturday night shifts now, which clash with the Saturday day shifts.
#So now it is safe to group one Saturday day shift with one Sunday
#day shift, and to do so repeatedly until day shifts run out on one
#of those days.
#@PP
#Groups made in this way can be a big help to solvers.  In instance
#@C { COI-GPost.xml }, for example, each Friday night task can be
#grouped with tasks for the next two nights.  Good solutions always
#assign these three tasks to the same resource, owing to constraints
#specifying that the weekend following a Friday night shift must be
#busy, that each weekend must be either free on both days or busy on
#both, and that a night shift must not be followed by a day shift.
#A time sweep task assignment algorithm (say) cannot look ahead
#and see such cases coming.
#@PP
#@I { Combinatorial grouping } implements these ideas.  It searches
#through a space whose elements are sets of classes.  For each set of
#classes @M { S } in the search space, it calculates a cost @M { c(S) },
#defined below, and selects a set @M { S prime } such that
#@M { c( S prime ) } is zero or minimum.  It then repeatedly makes a
#group by selecting one task from each class of @M { S prime } and
#grouping those tasks, until as many tasks as possible or desired
#have been grouped.
#@PP
#As formulated here, one application of combinatorial grouping
#groups one set of classes @M { S prime }.  In the example above,
#grouping the Saturday and Sunday night shifts would be one
#application, then grouping the Saturday and Sunday day shifts
#would be another.
#@PP
#Combinatorial grouping is carried out by a
#@I { combinatorial grouping solver }, made like this:
#@ID @C {
#KHE_COMB_SOLVER KheCombSolverMake(KHE_TASKER tr, KHE_FRAME days_frame);
#}
#It deals with @C { tr }'s tasks, using memory from @C { tr }'s arena.
#Any groups it makes are made using @C { tr }'s grouping operations,
#and so are reflected in @C { tr }'s classes, and in its task set.
#Parameter @C { days_frame } would always be the common frame.  It
#is used when selecting a suitable resource to tentatively assign to
#a group of tasks, to find out what times the resource should be free.
#@PP
#Functions
#@ID @C {
#KHE_TASKER KheCombSolverTasker(KHE_COMB_SOLVER cs);
#KHE_FRAME KheCombSolverFrame(KHE_COMB_SOLVER cs);
#}
#return @C { cs }'s tasker and frame.
#@PP
#A @C { KHE_COMB_SOLVER } object can solve any number of combinatorial
#grouping problems, one after another.  The user loads the solver with
#one problem's @I requirements (these determine the search space),
#then requests a solve, then loads another problem and solves,
#and so on.
#@PP
#It is usually best to start the process of loading requirements
#into the solver by calling
#@ID @C {
#void KheCombSolverClearRequirements(KHE_COMB_SOLVER cs);
#}
#This clears away any old requirements.
#@PP
#A key requirement for most solves is that the groups it makes
#should cover a given time group.  Any number of such requirements
#can be added and removed by calling
#@ID @C {
#void KheCombSolverAddTimeGroupRequirement(KHE_COMB_SOLVER cs,
#  KHE_TIME_GROUP tg, KHE_COMB_SOLVER_COVER_TYPE cover);
#void KheCombSolverDeleteTimeGroupRequirement(KHE_COMB_SOLVER cs,
#  KHE_TIME_GROUP tg);
#}
#any number of times.  @C { KheCombSolverAddTimeGroupRequirement }
#specifies that the groups must cover @C { tg } in a manner given by
#the @C { cover } parameter, whose type is
#@ID @C {
#typedef enum {
#  KHE_COMB_SOLVER_COVER_YES,
#  KHE_COMB_SOLVER_COVER_NO,
#  KHE_COMB_SOLVER_COVER_PREV,
#  KHE_COMB_SOLVER_COVER_FREE,
#} KHE_COMB_SOLVER_COVER_TYPE;
#}
#We'll explain this in detail later.
#@C { KheCombSolverDeleteTimeGroupRequirement } removes the effect of a
#previous call to @C { KheCombSolverAddTimeGroupRequirement } with the
#same time group.  There must have been such a call, otherwise
#@C { KheCombSolverDeleteTimeGroupRequirement } aborts.
#@PP
#Any number of requirements that the groups should cover a given
#class may be added:
#@ID @C {
#void KheCombSolverAddClassRequirement(KHE_COMB_SOLVER cs,
#  KHE_TASKER_CLASS c, KHE_COMB_SOLVER_COVER_TYPE cover);
#void KheCombSolverDeleteClassRequirement(KHE_COMB_SOLVER cs,
#  KHE_TASKER_CLASS c);
#}
#These work in the same way as for time groups, except that care is
#needed because @C { c } may be rendered undefined by a solve, if
#it makes groups which empty @C { c } out.  The safest option
#after a solve whose requirements include a class is to call
#@C { KheCombSolverClearRequirements }.
#@PP
#Three other requirements of quite different kinds may be added:
#@ID @C {
#void KheCombSolverAddProfileGroupRequirement(KHE_COMB_SOLVER cs,
#  KHE_PROFILE_TIME_GROUP ptg, KHE_RESOURCE_GROUP domain);
#void KheCombSolverDeleteProfileGroupRequirement(KHE_COMB_SOLVER cs,
#  KHE_PROFILE_TIME_GROUP ptg);
#}
#and
#@ID @C {
#void KheCombSolverAddProfileMaxLenRequirement(KHE_COMB_SOLVER cs);
#void KheCombSolverDeleteProfileMaxLenRequirement(KHE_COMB_SOLVER cs);
#}
#and
#@ID @C {
#void KheCombSolverAddNoSinglesRequirement(KHE_COMB_SOLVER cs);
#void KheCombSolverDeleteNoSinglesRequirement(KHE_COMB_SOLVER cs);
#}
#Again, we'll explain the precise effect later.  These last three
#requirements can only be added once:  a second call replaces the
#first, it does not add to it.
#@PP
#There is no need to reload requirements between solves.  The
#requirements stay in effect until they are either deleted
#individually or cleared out by @C { KheCombSolverClearRequirements }.
#The only caveat concerns classes that become undefined during
#grouping, as discussed above.
#@PP
#The search space of combinatorial solving is defined by all
#these requirements.  First, we need some definitions.  A task
#@I covers a time if it, or a task assigned to it directly or
#indirectly, runs at that time.  A task covers a time group if
#it covers any of the time group's times.  A class covers a time
#or time group if its tasks do.  A class covers a class if it is
#that class.  A set of classes covers a time, time group, or class
#if any of its classes covers that time, time group, or class.
#@PP
#Now a set @M { S } of classes lies in the search space for a run
#of combinatorial grouping if:
#@NumberedList
#
#@LI @OneRow {
#Each class in @M { S } covers at least one of the time groups and
#classes passed to the solver by the calls to
#@C { KheCombSolverAddTimeGroupRequirement } and
#@C { KheCombSolverAddClassRequirement }.
#}
#
#@LI @OneRow {
#For each time group @C { tg } or class @C { c } passed to the solver by
#@C { KheCombSolverAddTimeGroupRequirement } or
#@C { KheCombSolverAddClassRequirement },
#if the accompanying @C { cover } is @C { KHE_COMB_SOLVER_COVER_YES },
#then @M { S } covers @C { tg } or @C { c }; or if @C { cover } is
#@C { KHE_COMB_SOLVER_COVER_NO }, then @M { S } does not cover @C { tg }
#or @C { c }; or if @C { cover } is @C { KHE_COMB_SOLVER_COVER_PREV },
#then @M { S } covers @C { tg } or @C { c } if and only if it covers
#the time group or class immediately preceding @C { tg } or @C { c }; or
#if @C { cover } is @C { KHE_COMB_SOLVER_COVER_FREE }, then @M { S } is
#free to cover @C { tg } or @C { c }, or not.
#@LP
#If the first time group or class has cover @C { KHE_COMB_SOLVER_COVER_PREV },
#this is treated like @C { KHE_COMB_SOLVER_COVER_FREE }.
#@LP
#Time groups and classes not mentioned may be covered, or not.  The
#difference between this and passing a time group or class with cover
#@C { KHE_COMB_SOLVER_COVER_FREE } is that the classes that cover
#a free time group or class are included in the search space.
#}
#
#@LI @OneRow {
#The classes of @M { S } may be added to the tasker to form a grouping.
#There are rare cases where adding the classes in one order will
#succeed, while adding them in another order will fail.  In those
#cases, whether @M { S } is included in the search space or not will
#depend on the (unspecified) order in which the solver chooses to add
#@M { S }'s classes to the tasker.
#}
#
#@LI @OneRow {
#If @C { KheCombSolverAddProfileGroupRequirement(cs, ptg, domain) } is
#in effect, then @M { S } contains at least one class that covers
#@C { ptg }'s time group, and if @C { domain != NULL }, that class
#has that domain.
#}
#
#@LI @OneRow {
#If @C { KheCombSolverAddProfileMaxLenRequirement(cs) } is in
#effect, then @M { S } contains only classes that cover at most
#@C { profile_max_len } times from profile time groups.
#}
#
#@LI @OneRow {
#If @C { KheCombSolverAddNoSinglesRequirement(cs) } is in effect,
#then @M { S } contains at least two classes.  Otherwise @M { S }
#contains at least one class.
#}
#
#@EndList
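#@PP
#The cover conditions in condition 2 above can be made concrete with a
#small self-contained function.  This is an illustration only, not part
#of KHE:  @C { cover_ok } is a hypothetical helper which checks whether
#a pattern of covered requirements satisfies a sequence of cover types,
#treating @C { KHE_COMB_SOLVER_COVER_PREV } on the first requirement
#like @C { KHE_COMB_SOLVER_COVER_FREE }, as in the text:

```c
#include <stdbool.h>

/* reproduced from the text */
typedef enum {
  KHE_COMB_SOLVER_COVER_YES,
  KHE_COMB_SOLVER_COVER_NO,
  KHE_COMB_SOLVER_COVER_PREV,
  KHE_COMB_SOLVER_COVER_FREE,
} KHE_COMB_SOLVER_COVER_TYPE;

/* return true if covered[0 .. n-1] satisfies cover[0 .. n-1];        */
/* PREV on the first requirement is treated like FREE, as in the text */
static bool cover_ok(KHE_COMB_SOLVER_COVER_TYPE *cover, bool *covered,
  int n)
{
  int i;
  for( i = 0;  i < n;  i++ )
    switch( cover[i] )
    {
      case KHE_COMB_SOLVER_COVER_YES:
        if( !covered[i] ) return false;
        break;

      case KHE_COMB_SOLVER_COVER_NO:
        if( covered[i] ) return false;
        break;

      case KHE_COMB_SOLVER_COVER_PREV:
        if( i > 0 && covered[i] != covered[i - 1] ) return false;
        break;

      case KHE_COMB_SOLVER_COVER_FREE:
        break;
    }
  return true;
}
```

#With these definitions, cover types `yes', `prev', `no' accept the
#pattern (covered, covered, not covered), but reject (covered, not
#covered, not covered), where the `prev' requirement differs from
#its predecessor.
#@PP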
#That fixes the search space.  We now define the cost @M { c(S) }
#of each set of classes @M { S } in that space.
#@PP
#The first step is to identify a suitable resource @M { r }.  Take the
#first class of the tasker grouping made from @M { S }; this is the
#class that leader tasks will come from.  If it already has an assigned
#resource (as returned by @C { KheTaskerClassAsstResource }), use that
#resource for @M { r }.  Otherwise search the class's domain (as
#returned by @C { KheTaskerClassDomain }) for a resource which is free at
#all of the time groups of the current frame which overlap with the time
#groups added by calls to @C { KheCombSolverAddTimeGroupRequirement }.
#If no such resource can be found, ignore @M { S }.
#@PP
#The second step is to assign @M { r } to one task from each class
#of @M { S }, except in classes where @M { r } is already assigned
#to a task.  This is done without informing the tasker, but after
#the cost is determined these assignments are undone, so the
#tasker's integrity is not compromised in the end.  The cost
#@M { c(S) } of a set of classes @M { S } is determined while the
#assignments are in place.  It is the total cost of all cluster busy
#times and limit busy times monitors which monitor @M { r } and have
#times lying entirely within the times covered by the time groups
#added by calls to @C { KheCombSolverAddTimeGroupRequirement }.
#This second condition is included because we don't want @M { r }'s
#global workload, for example, to influence the outcome.
## The cost @M { c(S) } of a set of classes @M { S } is the change
## in solution cost caused by assigning a suitable resource (as
## defined for @C { KheTaskerGroupingTestAsstBegin } in
## Section {@NumberOf resource_structural.constraints.groupings})
## to one task from each class of @M { S }, taking into account only
## avoid clashes, cluster busy times, and limit busy times constraints
## which apply to every resource of the type of the tasks being
## grouped.  Furthermore, the times of the cluster busy times and
## limit busy times constraints must lie entirely within the times
## covered by the classes from which @M { S } is chosen; we don't
## want changes in a resource's global workload, for example, to
## influence the outcome.
#@PP
#After all the requirements are added, an actual solve is carried
#out by calling
#@ID @C {
#int KheCombSolverSolve(KHE_COMB_SOLVER cs, int max_num,
#  KHE_COMB_SOLVER_COST_TYPE ct, char *debug_str);
#}
#@C { KheCombSolverSolve } searches the space of all sets of classes
#@M { S } that satisfy the six conditions, and selects one set
#@M { S prime } of minimum cost @M { c( S prime ) }.  Using
#@M { S prime }, it makes as many groups as it can, up to
#@C { max_num }, and returns the number it actually made,
#between @C { 0 } and @C { max_num }.  If @M { S prime }
#contains a single class, no groups are made and the value
#returned is 0.
#@PP
#Parameter @C { ct } has type
#@ID @C {
#typedef enum {
#  KHE_COMB_SOLVER_COST_MIN,
#  KHE_COMB_SOLVER_COST_ZERO,
#  KHE_COMB_SOLVER_COST_SOLE_ZERO
#} KHE_COMB_SOLVER_COST_TYPE;
#}
#If @C { ct } is @C { KHE_COMB_SOLVER_COST_MIN }, then @M { c( S prime ) }
#must be minimum among all @M { c(S) }.
#If @C { ct } is @C { KHE_COMB_SOLVER_COST_ZERO }
#or @C { KHE_COMB_SOLVER_COST_SOLE_ZERO }, then @M { c( S prime ) } must
#also be 0, and in the second case there must be no other @M { S } in
#the search space such that @M { c(S) } is 0.  If these conditions are
#not met, no groups are made.
#@PP
#Parameter @C { debug_str } is passed on to @C { KheTaskerGroupingBuild }.
#It might be @C { "combinatorial grouping" }, for example.
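#@PP
#Putting the pieces together, one solve might look like this sketch,
#where @C { tg1 } and @C { tg2 } are time groups chosen by the caller:
#@ID @C {
#cs = KheCombSolverMake(tr, days_frame);
#KheCombSolverClearRequirements(cs);
#KheCombSolverAddTimeGroupRequirement(cs, tg1, KHE_COMB_SOLVER_COVER_YES);
#KheCombSolverAddTimeGroupRequirement(cs, tg2, KHE_COMB_SOLVER_COVER_FREE);
#count = KheCombSolverSolve(cs, INT_MAX, KHE_COMB_SOLVER_COST_SOLE_ZERO,
#  "example solve");
#}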
#@PP
#An awkward question raised by combinatorial grouping is what to do about
#@I { singles }:  classes whose tasks already satisfy the requirements,
#without any grouping.  The answer seems to vary depending on why
#combinatorial grouping is being called, so the combinatorial solver
#does not have a single way of dealing with singles.  Instead it
#offers three features that help with them.
#@PP
#First, as we have seen, if the set of classes @M { S prime } with
#minimum or zero cost contains only one class, @C { KheCombSolverSolve }
#accepts that it is the best but makes no groups from it, returning 0
#for the number of groups made.
#@PP
#Second, as we have also seen, @C { KheCombSolverAddNoSinglesRequirement }
#allows the user to declare that a set @M { S } whose classes consist
#of a single class which satisfies all the requirements (a single)
#should be excluded from the search space.  But adding this requirement
#is not a magical solution to the problem of singles.  For one thing,
#when we need a unique zero-cost set of classes, we may well want to
#include singles in the search space, to show that grouping is better
#than doing nothing.  For another, there may still be an @M { S }
#containing one single and another class which covers a time group or
#class with cover type @C { KHE_COMB_SOLVER_COVER_FREE }.
#@PP
#Third, after setting up a problem ready to call
#@C { KheCombSolverSolve }, one can call
#@ID @C {
#int KheCombSolverSingleTasks(KHE_COMB_SOLVER cs);
#}
#This searches the same space as @C { KheCombSolverSolve } does, but
#it does no grouping.  Instead, it returns the total number of tasks in
#sets of classes @M { S } in that space such that @M { bar S bar = 1 }.
#It returns 0 if @C { KheCombSolverAddNoSinglesRequirement } is in
#effect when it is called, quite correctly.
#@PP
#Finally,
#@ID @C {
#void KheCombSolverDebug(KHE_COMB_SOLVER cs, int verbosity,
#  int indent, FILE *fp);
#}
#produces the usual debug print of @C { cs } onto @C { fp }
#with the given verbosity and indent.
#@End @SubSection
#
#@SubSection
#  @Title { Applying combinatorial grouping }
#  @Tag { resource_structural.constraints.applying }
#@Begin
#@LP
#This section describes one way in which the general idea of
#combinatorial grouping, as just presented, may be applied in
#practice.  This way is implemented by function
#@ID @C {
#int KheCombGrouping(KHE_COMB_SOLVER cs, KHE_OPTIONS options);
#}
#@C { KheCombGrouping } does what this section describes, and
#returns the number of groups it made.  Before it is called,
#the common frame should be loaded into @C { cs }'s tasker as
#an overlap frame, using @C { KheTaskerAddOverlapFrame }.
#@PP
#Let @M { m } be the value of the @F rs_combinatorial_grouping_max_days option
#of @C { options }.  Iterate over all pairs @M { (f, c) }, where
#@M { f } is a subset of the common frame containing @M { k }
#adjacent time groups, for all @M { k } such that @M { 2 <= k <= m },
#and @M { c } is a class that covers @M { f }'s first or last time group.
#@PP
#For each pair, set up and run combinatorial grouping with one `yes'
#class, namely @M { c }, and one `free' time group for each of the
#@M { k } time groups of @M { f }.  Set @C { max_num } to @C { INT_MAX },
#and set @C { ct } to @C { KHE_COMB_SOLVER_COST_SOLE_ZERO }.  If there
#is a unique zero-cost way to group a task of @M { c } with tasks on
#the following @M { k - 1 } days, this call will find it and carry out
#as many groupings as it can.
## , and set @C { allow_single } to @C { false }.
#@PP
#If @M { f } has @M { k } time groups, each with @M { n } classes,
#say, there are up to @M { (n + 1) sup {k - 1} } combinations for
#each run, so @F { rs_combinatorial_grouping_max_days } must be small, say 3,
#or 4 at most.  In any case, unique zero-cost groupings typically
#concern weekends, so larger values are unlikely to yield anything.
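#@PP
#The size of this search space can be computed as follows.  This is an
#illustrative C sketch only (the name is hypothetical, and it is not
#part of the KHE API):
```c
#include <assert.h>

/* Hypothetical sketch, not part of the KHE API:  the number of
   combinations examined per run, up to (n + 1)^(k - 1), when f has
   k time groups each containing n classes */
long CombCountSketch(int n, int k)
{
  long total;  int i;
  total = 1;
  for( i = 1;  i < k;  i++ )
    total *= n + 1;
  return total;
}
```
#With @M { n = 5 } classes per day, @M { k = 3 } gives 36 combinations
#per run, but @M { k = 6 } already gives 7776, which is why only small
#values of @M { m } are practical.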
#@PP
#If one @M { (f, c) } pair produces some grouping, then
#@C { KheCombGrouping } returns to the first pair containing @M { f }.
#This handles cases like the one described earlier, where a grouping
#of Saturday and Sunday night shifts opens the way to a grouping of
#Saturday and Sunday day shifts.
#@PP
#The remainder of this section describes @I { combination elimination }.
#This is a refinement that @C { KheCombGrouping } uses to make
#unique zero-cost combinations more likely in some cases.
#@PP
#Some combinations examined by combinatorial grouping may have zero
#cost as far as the monitors used to evaluate it are concerned, but
#have non-zero cost when evaluated in a different way, involving the
#overall supply of and demand for resources.  Such combinations can
#be ruled out, leaving fewer zero-cost combinations, and potentially
#more task grouping.
#@PP
#For example, suppose there is a maximum limit on the number of
#weekends each resource can work.  If this limit is tight
#enough, it will force every resource to work complete weekends,
#even without an explicit constraint, if that is the only way
#that the available supply of resources can cover the demand
#for weekend shifts.  This example fits the pattern to be given
#now, setting @M { C } to the constraint that limits the number
#of busy weekends, @M { T } to the times of all weekends,
#@M { T sub i } to the times of the @M { i }th weekend, and
#@M { f tsub i } to the number of days in the @M { i }th weekend.
#@PP
#Take any set of times @M { T }.  Let @M { S(T) }, the
#@I { supply during @M { T } }, be the sum over all resources
#@M { r } of the maximum number of times that @M { r } can be busy
#during @M { T } without incurring a cost.  Let @M { D(T) }, the
#@I { demand during @M { T } }, be the sum over all tasks @M { x }
#for which non-assignment would incur a cost, of the number of times
#@M { x } is running during @M { T }.  Then @M { S(T) >= D(T) }
#or else a cost is unavoidable.
#@PP
#In particular, take any cluster busy times constraint @M { C } which
#applies to all resources, has time groups which are all positive, and
#has a non-trivial maximum limit @M { M }.  (The analysis also applies
#when the time groups are all negative and there is a non-trivial
#minimum limit, setting @M { M } to the number of time groups minus
#the minimum limit.)  Suppose there are @M { n } time groups
#@M { T sub i }, for @M { 1 <= i <= n }, and let their union be @M { T }.
#@PP
#Let @M { f tsub i } be the number of time groups from the common
#frame with a non-empty intersection with @M { T sub i }.  This is
#the maximum number of times from @M { T sub i } during which any one
#resource can be busy without incurring a cost, since a resource can
#be busy for at most one time in each time group of the common frame.
#@PP
#Let @M { F } be the sum of the largest @M { M } @M { f tsub i }
#values.  This is the maximum number of times from @M { T } that
#any one resource can be busy without incurring a cost:  if it is
#busy for more times than this, it must either be busy for more
#than @M { f tsub i } times in some @M { T sub i }, or else it
#must be busy for more than @M { M } time groups, violating the
#constraint's maximum limit.
#@PP
#If there are @M { R } resources altogether, then the supply during
#@M { T } is bounded by
#@ID @Math { S(T) <= RF }
#since @M { C } is assumed to apply to every resource.
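#@PP
#As a self-contained illustration (the names are hypothetical, and
#this is not part of the KHE API), the following C sketch computes
#@M { F } as the sum of the largest @M { M } of the @M { f tsub i }
#values and returns the bound @M { RF }:
```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical sketch, not part of the KHE API:  compare two ints
   for sorting in descending order */
static int CmpDesc(const void *a, const void *b)
{
  return *(const int *)b - *(const int *)a;
}

/* Return R * F, where F is the sum of the largest M of f[0 .. n-1],
   as in the bound S(T) <= RF from the text */
int SupplyBoundSketch(const int *f, int n, int M, int R)
{
  int i, F;  int *copy;
  copy = (int *) malloc(n * sizeof(int));
  for( i = 0;  i < n;  i++ )
    copy[i] = f[i];
  qsort(copy, n, sizeof(int), CmpDesc);
  if( M > n )
    M = n;
  for( F = 0, i = 0;  i < M;  i++ )
    F += copy[i];
  free(copy);
  return R * F;
}
```
#For the weekend example above, every @M { f tsub i } is 2 and the
#maximum limit is @M { M = 2 }, so @M { F = 4 }:  each resource can
#work at most 4 weekend days without incurring a cost.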
#@PP
#As explained above, to avoid cost the demand must not exceed the
#supply, so
#@ID @M { D(T) <= S(T) <= RF }
#Furthermore, if @M { D(T) >= RF }, then any failure to maximize
#the use of workload will incur a cost.  That is, every resource
#which is busy during @M { T sub i } must be busy for the full
#@M { f tsub i } times in @M { T sub i }.
#@PP
#So the effect on grouping is this:  if @M { D(T) >= RF }, a resource
#that is busy in one time group of the common frame that overlaps
#@M { T sub i } should be busy in every time group of the common
#frame that overlaps @M { T sub i }.  @C { KheCombGrouping } searches
#for constraints @M { C } that have this effect, and informs its
#combinatorial grouping solver about what it found by changing the
#cover types of some time groups from `free' to `prev'.  When
#searching for groups, the option of covering some of these time
#groups but not others is removed.  With fewer options, there is
#more chance that some combination might be the only one with
#zero cost, allowing more task grouping.
#@PP
#Instance @C { CQ14-05 } has two constraints that limit busy weekends.
#One applies to 10 resources and has maximum limit 2; the other applies
#to the remaining 6 resources and has maximum limit 3.  So combination
#elimination actually takes sets of constraints with the same time
#groups that together cover every resource once.  Instead of @M { RF }
#(above), it uses the sum over the set's constraints @M { c sub j }
#of @M { R sub j F sub j }, where @M { R sub j } is the number of
#resources that @M { c sub j } applies to, and @M { F sub j } is the
#sum of the largest @M { M sub j } of the @M { f tsub i } values,
#where @M { M sub j } is the maximum limit of @M { c sub j }.  The
#@M { f tsub i } are the same for all @M { c sub j }.
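#@PP
#The following sketch (hypothetical names, not part of the KHE API)
#computes this generalized bound in the uniform case where every
#@M { f tsub i } equals the same value @M { f }, so that
#@M { F sub j = M sub j f } when @M { c sub j } has at least
#@M { M sub j } time groups:
```c
#include <assert.h>

/* Hypothetical sketch, not part of the KHE API.  Combination
   elimination bound for a set of c constraints with the same time
   groups that together cover every resource once, in the uniform
   case where every f_i equals f (so F_j = M[j] * f, assuming each
   constraint has at least M[j] time groups):  the bound is the sum
   over j of R[j] * F_j. */
int UniformMultiBoundSketch(const int *R, const int *M, int c, int f)
{
  int j, total;
  total = 0;
  for( j = 0;  j < c;  j++ )
    total += R[j] * M[j] * f;
  return total;
}
```
#With the @C { CQ14-05 } figures above (10 resources with limit 2,
#6 resources with limit 3, and 2-day weekends), the bound is
#@M { 10 times 4 + 6 times 6 = 76 }.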
#@End @SubSection
#
#@SubSection
#  @Title { Profile grouping }
#  @Tag { resource_structural.constraints.profile }
#@Begin
#@LP
#Suppose 6 nurses are required on the Monday, Tuesday, Wednesday,
#Thursday, and Friday night shifts, but only 4 are required on the
#Saturday and Sunday night shifts.  Consider any division of the
#night shifts into sequences of one or more shifts on consecutive
#days.  However these sequences are made, at least two must begin
#on Monday, and at least two must end on Friday.
#@PP
#Now suppose that the intention is to assign the same resource to
#each shift of any one sequence, and that a limit active intervals
#constraint, applicable to all resources, specifies that night shifts
#on consecutive days must occur in sequences of at least 2 and at most
#3.  Then the two sequences of night shifts that must begin on Monday
#must contain a Monday night and a Tuesday night shift at least, and the
#two that end on Friday must contain a Thursday night and a Friday night
#shift at least.  So here are two groupings, of Monday and Tuesday
#nights and of Thursday and Friday nights, for each of which we can
#build two task groups.
#@PP
#Suppose that we already have a task group which contains a sequence
#of 3 night shifts on consecutive days.  This group cannot be grouped
#with any night shifts on days adjacent to the days it currently
#covers.  So for present purposes the tasks of this group can be
#ignored.  This can change the number of night shifts running on
#each day, and so change the amount of grouping.  For example, in
#instance @C { COI-GPost.xml }, all the Friday, Saturday, and Sunday
#night shifts get grouped into sequences of 3, and 3 is the maximum,
#so those night shifts can be ignored here, and so every Monday night
#shift begins a sequence, and every Thursday night shift ends one.
#@PP
#We now generalize this example, ignoring for the moment a few
#issues of detail.  Let @M { C } be any limit active intervals
#constraint which applies to all resources, and whose time groups
#@M { T sub 1 ,..., T sub k } are all positive.  Let @M { C }'s
#limits be @M { C sub "max" } and @M { C sub "min" }, and suppose
#@M { C sub "min" } is at least 2 (if not, there can be no grouping
#based on @M { C }).  What follows is relative to @M { C }, and is
#repeated for each such constraint.  Constraints with the same
#time groups are notionally merged, allowing the minimum limit
#to come from one constraint and the maximum limit from another.
#@PP
#A @I { long task } is a task which covers at least @M { C sub "max" }
#adjacent time groups from @M { C }.  Long tasks can have no influence
#on grouping to satisfy @M { C }'s minimum limit, so they may be ignored,
#that is, profile grouping may run as though they are not there.  This
#applies both to tasks which are present at the start, and tasks which
#are constructed along the way.  
#@PP
#A task is @I { admissible for profile grouping }, or just
#@I { admissible }, if it satisfies the following conditions:
#@NumberedList
#
#@LI {
#The task is a proper root task lying within an mtask created by the
#mtask finder made available to profile grouping when
#@C { KheProfileGrouping } (see below) is called.
#}
#
#@LI {
#The task is not assigned a resource, and its assignment is not fixed.
#}
#
#@LI {
#The task is not a long task.
#}
#
#@EndList
#These conditions imply that if one task lying within an mtask is
#admissible for profile grouping, then every unassigned task in
#that mtask is also admissible.
#@PP
#Let @M { n sub i } be the number of admissible tasks that cover
#@M { T sub i }.  The @M { n sub i } together make up the
#@I profile of @M { C }.  The tasker operations from
#Section {@NumberOf resource_structural.constraints.taskers }
#which support profile grouping make it easy to find the profile.
#@PP
#For each @M { i } such that @M { n sub {i-1} < n sub i },
#@M { n sub i - n sub {i-1} } groups of length at least
#@M { C sub "min" } must start at @M { T sub i } (more precisely,
#they must cover @M { T sub i } but not  @M { T sub {i-1} }).  They may
#be constructed by combinatorial grouping, passing in time groups
#@M { T sub i ,..., T sub { i + C sub "min" - 1 } } with cover type
#`yes', and @M { T sub {i-1} } and @M { T sub { i + C sub "max" } } with
#cover type `no', asking for @M { m = n sub i - n sub {i-1} - c sub i }
#groups, where @M { c sub i } is the number of existing tasks (not
#including long ones) that satisfy these conditions already (as
#returned by @C { KheCombSolverSingleTasks }).  The new groups must group
#at least 2 tasks each.  Some of the time groups may not exist; in
#that case, omit the non-existent ones but still do the grouping,
#provided there are at least 2 `yes' time groups.  The case for
#sequences ending at @M { j } is symmetrical.
#@PP
#If @M { C } has no history, we may set @M { n sub 0 } and
#@M { n sub {k+1} } to 0, allowing groups to begin at @M { T sub 1 }
#and end at @M { T sub k }.  If @M { C } has history, we do not know
#how many tasks are running outside @M { C }, so we set @M { n sub 0 }
#and @M { n sub {k+1} } to infinity, preventing groups from beginning
#at @M { T sub 1 } and ending at @M { T sub k }.
#@PP
#Groups made by one round of profile grouping may participate in later
#rounds.  Suppose @M { C sub "min" = 2 }, @M { C sub "max" = 3 },
#@M { n sub 1 = n sub 5 = 0 }, and @M { n sub 2 = n sub 3 = n sub 4 = 4 }.
#Profile grouping builds 4 groups of length 2 beginning at @M { T sub 2 },
#then 4 groups of length 3 ending at @M { T sub 4 }, incorporating the
#length 2 groups.
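#@PP
#The profile scan that drives these rounds can be sketched as follows
#(hypothetical names, not part of the KHE API):
```c
#include <assert.h>

/* Hypothetical sketch, not part of the KHE API.  n[0 .. k+1] is the
   profile with sentinels n[0] and n[k+1] (0 in the no-history case).
   On return, starts[i] is the number of groups that must cover T_i
   but not T_{i-1}, and ends[i] the number that must cover T_i but
   not T_{i+1}, for 1 <= i <= k. */
void ProfileStartsEndsSketch(const int *n, int k, int *starts, int *ends)
{
  int i;
  for( i = 1;  i <= k;  i++ )
  {
    starts[i] = n[i] > n[i-1] ? n[i] - n[i-1] : 0;
    ends[i]   = n[i] > n[i+1] ? n[i] - n[i+1] : 0;
  }
}
```
#For the example just given (@M { n sub 2 = n sub 3 = n sub 4 = 4 },
#the rest 0), the scan reports 4 group starts at @M { T sub 2 } and
#4 group ends at @M { T sub 4 }.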
## @PP
## The general aim is to pack blocks of size freely chosen between
## @M { C sub "min" } and @M { C sub "max" } into a given profile, and
## group wherever it can be shown that the packing can only take one
## form.  But we are not interested in optimal solutions (ones with
## the maximum amount of grouping), so we do not search for other
## cases.  However, some apparently different cases are actually
## already covered.  For example, suppose @M { C sub "min" = 2 } and
## @M { C sub "max" = 3 }, with @M { n sub 1 = n sub 5 = 0 } and
## @M { n sub 2 = n sub 3 = n sub 4 = 4 }.  Then 4 groups of length 3
## can be built.  But the function does this:  it first builds 4
## groups of length 2 beginning at @M { T sub 2 }, then 4 groups of
## length 3 ending at @M { T sub 4 }, incorporating the length 2 groups.
#@PP
#We turn now to three issues of detail.
## @PP
## @B { History. }  How to handle history is the subject of
## Section {@NumberOf resource_structural.constraints.history}.
## For each resource @M { r sub i } with a history value @M { x sub i }
## such that @M { x sub i < C sub "min" }, use combinatorial grouping with
## one `yes' time group for each of the first @M { C sub "min" -  x sub i }
## time groups of @M { C } (when these all exist), build one group, and
## assign @M { r sub i } to it.  (This idea is not yet implemented;
## none of the instances available at the time of writing need it.)
## , and one `no' time group for the next time group of @M { C }
#@PP
#@B { Singles. }  We need to consider how singles affect profile
#grouping.  Singles of length @M { C sub "max" } or more are
#ignored, but there may be singles of length @M { C sub "min" }
#when @M { C sub "min" < C sub "max" }.
#@PP
#The @M { n sub i - n sub {i-1} } groups that must start at
#@M { T sub i } include singles.  Singles are already present, which
#in effect means that they are made first.  So before
#calling @C { KheCombSolverSolve } we call @C { KheCombSolverSingleTasks }
#to determine @M { c sub i }, the number of singles that satisfy the
#requirements, and then we pass @M { n sub i - n sub {i-1} - c sub i }
#to @C { KheCombSolverSolve }, not @M { n sub i - n sub {i-1} }, and
#exclude singles from its search space.
#@PP
#@B { Varying task domains. }  Suppose that one senior nurse is wanted
#each night, four ordinary nurses are wanted each week night, and two
#ordinary nurses are wanted each weekend night.  Then the two groups
#starting on Monday nights should group demands for ordinary nurses,
#not senior nurses.  Nevertheless, tasks with different domains are
#not totally unrelated.  A senior nurse could very well act as an
#ordinary nurse on some shifts.
#@PP
#We still aim to build @M { M = n sub i - n sub {i-1} - c sub i }
#groups as before.  However, we do this by making several calls on
#combinatorial grouping.  For each resource group @M { g } appearing
#as a domain in any class running at time @M { T sub i }, find
#@M { n sub gi }, the number of tasks (not including long ones) with
#domain @M { g } running at @M { T sub i }, and @M { n sub { g(i-1) } },
#the number at @M { T sub {i-1} }.  For each @M { g } such that
#@M { n sub gi > n sub { g(i-1) } }, call combinatorial grouping,
#insisting (by calling @C { KheCombSolverAddProfileRequirement })
#that @M { T sub i } be covered by a class whose domain is @M { g },
#passing @M { m = min( M, n sub gi - n sub { g(i-1) } ) }, then
#subtract from @M { M } the number of groups actually made.
#Stop when @M { M = 0 } or the list of domains is exhausted.
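#@PP
#The control structure of this loop over domains can be sketched as
#follows (hypothetical names, not part of the KHE API):
```c
#include <assert.h>

/* Hypothetical sketch, not part of the KHE API.  excess[g] holds
   n_gi - n_g(i-1) for each domain g (non-positive entries are
   skipped), and can_make[g] stands in for the number of groups that
   a call on combinatorial grouping could actually build for domain
   g.  Starting from M groups wanted, try each domain in turn,
   asking for m = min(M, excess[g]) groups, and return the number of
   groups still wanted at the end. */
int DomainRoundsSketch(int M, const int *excess, const int *can_make,
  int num_domains)
{
  int g, want, made;
  for( g = 0;  g < num_domains && M > 0;  g++ )
    if( excess[g] > 0 )
    {
      want = excess[g] < M ? excess[g] : M;
      made = can_make[g] < want ? can_make[g] : want;
      M -= made;
    }
  return M;
}
```
#A return value of 0 means that all @M { M } wanted groups were made
#before the list of domains was exhausted.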
## @PP
## We still aim to build @M { M = n sub i - n sub {i-1} - c sub i }
## groups as before.  However, we do this by making several calls on
## combinatorial grouping, utilizing the @C { domain } parameter, which
## we call @M { g } here.  For each @M { g } appearing as a domain in
## any class running at time @M { T sub i }, find @M { n sub gi }, the
## number of tasks (not including long ones) with domain @M { g }
## running at @M { T sub i }, and @M { n sub { g(i-1) } }, the number
## at @M { T sub {i-1} }.  For each @M { g } such that
## @M { n sub gi > n sub { g(i-1) } }, add @M { g } and
## @M { M sub g = n sub gi - n sub { g(i-1) } } to a list.
## Then re-traverse the list.  For each @M { g } on it, call
## combinatorial grouping, passing @M { m = min( M, M sub g ) } and
## @M { g }, then subtract from @M { M } the number of groups actually
## made.  Stop when @M { M = 0 } or the list is exhausted.
## @End @SubSection
## 
## @SubSection
##   @Title { Applying profile grouping }
##   @Tag { resource_structural.constraints.applying2 }
## @Begin
## @LP
#@PP
#@B { Non-uniqueness of zero-cost groupings. }
#The main problem with profile grouping is that there may be
#several zero-cost groupings in a given situation.  For example,
#a profile might show that a group covering Monday, Tuesday, and
#Wednesday may be made, but give no guidance on which shifts on
#those days to group.
#@PP
#One reasonable way of dealing with this problem is the following.
#First, do not insist on unique zero-cost groupings; instead, accept
#any zero-cost grouping.  This ensures that a reasonable amount of
#profile grouping will happen.  Second, to reduce the chance of
#making poor choices of zero-cost groupings, limit profile grouping
#to two cases.
#@PP
#The first case is when each time group @M { T sub i } contains a
#single time, as at the start of this section, where each
#@M { T sub i } contained the time of a night shift.  Although we do
#not insist on unique zero-cost groupings, we are likely to get them
#in this case, so we call this @I { strict profile grouping }.
#@PP
#The second case is when @M { C sub "min" = C sub "max" }.  It is
#very constraining to insist, as this does, that every sequence of
#consecutive busy days (say) away from the start and end of the cycle
#must have a particular length.  Indeed, it changes the problem into a
#combinatorial one of packing these rigid sequences into the profile.
#Local repairs cannot do this well, because to increase
#or decrease the length of one sequence, we must decrease or increase
#the length of a neighbouring sequence, and so on all the way back to
#the start or forward to the end of the cycle (unless there are
#shifts nearby which can be assigned or not without cost).
#So we turn to profile grouping to find suitable groups before
#assigning any resources.  Some of these groups may be less than
#ideal, but still the overall effect should be better than no
#grouping at all.  We call this @I { non-strict profile grouping }.
## No profile grouping of this kind is done until
## all cases where the time groups are singletons have been tried.
#@PP
#When @M { C sub "min" = C sub "max" }, all singles are off-profile.
#This is easy to see:  by definition, a single covers @M { C sub "min" }
#time groups, so it covers @M { C sub "max" } time groups, but
#@C { profile_max_len } is @M { C sub "max" - 1 }.
#@PP
#These ideas are implemented by function
#@ID @C {
#int KheProfileGrouping(KHE_COMB_SOLVER cs, bool non_strict);
#}
#It carries out some profile grouping, as follows, and returns
#the number of groups it makes.
#@PP
#Find all limit active intervals constraints @M { C } whose time
#groups are all positive and which apply to all resources.  Notionally
#merge pairs of these constraints that share the same time groups when
#one has a minimum limit and the other has a maximum limit.  Let
#@M { C } be one of these (possibly merged) constraints such that
#@M { C sub "min" >= 2 }.  Furthermore, if @C { non_strict } is
#@C { false }, then @M { C }'s time groups must all be singletons,
#while if @C { non_strict } is @C { true }, then @M { C sub "min" = C sub "max" }
#must hold.
#@PP
#A constraint may qualify for both strict and non-strict processing.
#This is true, for example, of a constraint that imposes equal lower
#and upper limits on the number of consecutive night shifts.  Such a
#constraint will be selected in both the strict and non-strict cases,
#which is fine.
#@PP
#For each of these constraints, proceed as follows.  Set the profile
#time groups in the tasker to @M { T sub 1 ,..., T sub k }, the time
#groups of @M { C }, and set the @C { profile_max_len } attribute to
#@M { C sub "max" - 1 }.  The tasker will then report the values
#@M { n sub i } needed for @M { C }.
#@PP
#Traverse the profile repeatedly, looking for cases where
#@M { n sub i > n sub {i-1} } and @M { n sub j < n sub {j+1} }, and
#use combinatorial grouping (aiming to find zero-cost groups, not
#unique zero-cost groups) to build groups which cover @M { C sub "min" }
#time groups starting at @M { T sub i } (or ending at @M { T sub j }).  This
#involves loading @M { T sub i ,..., T sub {i + C sub "min" - 1} } as `yes'
#time groups, and @M { T sub {i-1} } and @M { T sub { i + C sub "max" } }
#as `no' time groups, as explained above.
#@PP
#The profile is traversed repeatedly until no points which allow
#grouping can be found.  In the strict grouping case, it is then
#time to stop, but in the non-strict case it is better to keep
#grouping, as follows.  From among all time groups @M { T sub i }
#where @M { n sub i > 0 }, choose one which has been the starting
#point for a minimum number of groups (to spread out the starting
#points as much as possible) and make a group there if combinatorial
#grouping allows it.  Then return to traversing the profile
#repeatedly:  there should now be @M { n sub i > n sub {i-1} }
#cases just before the latest group and @M { n sub j < n sub {j+1} }
#cases just after it.  Repeat until there is no @M { T sub i } where
#@M { n sub i > 0 } and combinatorial grouping can build a group.
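#@PP
#The choice of starting point in the non-strict case can be sketched
#as follows (hypothetical names, not part of the KHE API):
```c
#include <assert.h>

/* Hypothetical sketch, not part of the KHE API.  In the non-strict
   case, choose a time group T_i with n[i] > 0 that has been the
   starting point of a minimum number of groups so far
   (starts_made[i]), to spread the starting points out; return its
   index, or -1 if every n[i] is 0.  Indexes run from 1 to k. */
int ChooseStartSketch(const int *n, const int *starts_made, int k)
{
  int i, best;
  best = -1;
  for( i = 1;  i <= k;  i++ )
    if( n[i] > 0 && (best == -1 || starts_made[i] < starts_made[best]) )
      best = i;
  return best;
}
```
#Combinatorial grouping is then asked to build a group starting at
#the chosen time group, and the repeated traversal resumes.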
#@End @SubSection
#
## replaced by Assign by history
## @SubSection
##   @Title { Grouping by history }
##   @Tag { resource_structural.constraints.history }
## @Begin
## @LP
## This section continues with grouping based on limit active
## intervals constraint @M { C } with limits @M { C sub "min" } and
## @M { C sub "max" }.  We focus here on the start of the cycle, which
## is special because several of @M { C }'s resources may have history,
## and groups of tasks of unusual length may be needed for them.
## @PP
## The algorithm presented here is called @I { grouping by history }.
## What it actually does, though, is assign resources to tasks rather
## than group tasks.  It does this because it is not enough to create
## groups of tasks of unusual length; it is also necessary to reserve
## them for the resources they were created for.  Assigning the resources
## to them is the obvious way to do that.  Strictly speaking, this makes
## grouping by history a resource solver rather than a resource-structural
## solver.
## @PP
## This raises the question of why KHE's other resource assignment
## solvers can't be left to handle history themselves.  The two solvers
## in question are @C { KheTimeSweepAssignResources }
## (Section {@NumberOf resource_solvers.matching.time.sweep}) and
## @C { KheDynamicResourceSequentialSolve }
## (Section {@NumberOf resource_solvers.dynamic.initial}).
## The answer, at least in part, is that they both run after grouping,
## which will not work well if grouping does not take history into account.
## Also, @C { KheDynamicResourceSequentialSolve } may make arbitrary
## choices for the resources it assigns which cause problems for
## other resources that are not assigned until later, including
## problems satisfying history requirements.
## @PP
## @BI { Constraints }.  Each constraint @M { C } that the algorithm
## handles must satisfy these conditions:
## @NumberedList
## 
## @LI {
## @M { C } is a limit active intervals constraint with at least one time
## group.
## }
## 
## # @LI {
## # @M { C } has a non-zero history value for at least one
## # resource of the given resource type @C { rt }.
## # }
## 
## @LI {
## Each time group of @M { C } is positive.
## }
## 
## @LI {
## Each time group @M { g } of @M { C } is a subset of the times of one
## day (that is, one time group of the common frame), called @M { g }'s
## @I { associated day }.
## }
## 
## @LI {
## As we proceed from one time group of @M { C } to the next,
## the associated days are consecutive.
## }
## 
## @LI {
## The associated day of the first time group of @M { C } is
## the first day of the cycle.
## }
## 
## @EndList
## These conditions are checked, and if any fail to hold, @M { C }
## is ignored.  The first two conditions just ensure that @M { C }
## is relevant to history, so they don't really count as restrictions.
## The last three are less restrictive in practice than they seem.
## The most likely case of a real-world constraint that fails them
## is a limit on the number of consecutive busy
## weekends.  However, this may not matter, because limits on
## consecutive busy weekends do not seem to occur in practice, and
## it is not clear what the algorithm could do with them if they
## did, given the 5-day gaps between weekends.
## @PP
## Let the sequence of time groups of @M { C } be
## @M { G sub 0 ,..., G sub {n-1} }, where @M { n >= 1 }.  The
## first time group is @M { G sub 0 } rather than @M { G sub 1 }
## to agree with the C language convention.  We use 0-origin
## indexing generally.
## @PP
## One limit active intervals constraint may have several offsets,
## each representing a different instantiation of the constraint.
## We treat each offset as a distinct constraint, but for simplicity
## of presentation we say `constraint' when we should, strictly,
## say `constraint plus offset'.
## @PP
## We assume here that @M { C }'s cost function is not a step function.
## In the rare cases where it is a step function, our analysis does not
## always hold---but we apply our algorithm anyway.
## # @PP
## # There may be costs in assigning or not assigning certain tasks, but
## # those do not matter to us here.  Our sole concern is with the
## # requirements placed on resources' timetables by history.
## # @PP
## # The basic idea is to build an unweighted matching graph in which
## # each demand node is a resource, each supply node is a set of grouped
## # tasks, and each edge joins a resource to a set of grouped tasks
## # that satisfies the history needs of that resource.  We use a maximum
## # matching in this graph to define an assignment of resources to sets
## # of grouped tasks which satisfies the history requirements of as many
## # resources as possible.  Here now are the details.
## @PP
## @BI { Resources }.  We are only interested in resources that must be
## busy during @M { C }'s first time group in order to avoid a cost for
## @M { C } caused by history.  Let @M { h(r) } be @M { C }'s history value
## for resource @M { r }.
## @BulletList
## 
## @LI {
## If @M { h(r) = 0 }, or equivalently if @M { C } contains no value
## for @M { h(r) }, then there is no constraint on @M { r }'s timetable
## at the start of the cycle, so we are not interested in @M { r }.
## }
## 
## @LI {
## If @M { C sub "min" <= h(r) <= C sub "max" }, there is no need
## to extend the existing sequence of @M { h(r) } tasks, since as
## it stands it generates zero cost.  If @M { C sub "max" < h(r) },
## then it would be a bad idea to extend it, because it is already
## generating a cost which will increase if we extend it further.
## So we are not interested in @M { r } in these cases.
## }
## 
## @EndList
## So the set @M { R } of resources of interest consists of those
## resources @M { r } such that @M { 0 < h(r) < C sub "min" }.
## @PP
## We are not going to worry about @M { r } having history in two
## constraints @M { C sub 1 } and @M { C sub 2 }, or more.  If
## @M { C sub 1 } monitors night shifts and @M { C sub 2 } monitors
## day shifts, then we cannot have @M { h(r) > 0 } in both.  The
## only practical possibility is for @M { C sub 1 } to monitor night
## shifts (or any other single shift type) and @M { C sub 2 } to
## monitor busy days.  We'll be sorting the constraints so that those
## with smaller time groups come first, and ignoring occurrences of
## a given resource @M { r } in history lists after its first occurrence.
## @PP
## @BI { Admissible tasks }.
## We want to assign resources with non-zero history to tasks running
## at the start of the cycle.  Each task @M { t } used for this must
## satisfy these conditions:
## @ParenAlphaList
## 
## @LI @OneRow {
## Task @M { t } has the given resource type @C { rt }.
## }
## 
## @LI @OneRow {
## Task @M { t } is a proper root task.
## }
## 
## @LI @OneRow {
## The times that @M { t } is running (including the times of any
## tasks assigned, directly or indirectly, to @M { t }) include
## at least one time.
## }
## 
## @LI @OneRow {
## The times that @M { t } is running (including the times of any
## tasks assigned, directly or indirectly, to @M { t }) include
## at most one time from each day.
## }
## 
## @LI @OneRow {
## Every time that @M { t } is running is a time monitored by @M { C }.
## }
## 
## @LI @OneRow {
## The days of the times that @M { t } is running are consecutive.
## }
## 
## @EndList
## Tasks satisfying these conditions are called @I { admissible tasks }.
## @PP
## The first four conditions are not really restrictions.  The fifth
## condition is needed because if @M { t } is running at a time not
## monitored by @M { C }, then assigning @M { t } to a resource will
## make that resource busy on the day of that time, preventing it from
## being busy at a time needed to satisfy @M { C }.
## @PP
## Condition (f) allows us to represent the days that @M { t } is
## running as an interval:  a pair of integer indexes @M { (a, b) }
## satisfying @M { 0 <= a <= b } which we call @M { i(t) }.  This is
## both an interval in the sequence of days of the cycle and an interval
## in the sequence of time groups of @M { C }, given the restrictions
## above on how these two sequences of time groups are related.  We
## write @M { l(t) } for the length of @M { i(t) }.
## @PP
## The algorithm relies on sets @M { T sub i }, each of which contains
## all admissible tasks @M { t } such that @M { i(t) = (i, k) } for
## some @M { k >= i }; that is, all admissible tasks whose first day
## has index @M { i }.  Building @M { T sub i } is a straightforward
## matter of retrieving from the event timetable monitor all meets
## running at the times of @M { G sub i }, finding all the tasks of
## type @C { rt } lying within those meets, finding their proper root
## tasks, then building their intervals and omitting those tasks that
## do not satisfy all the conditions.  Each @M { T sub i } is built
## only when it is needed.
## @PP
## @BI { Admissible task-sets }.
## As we build larger sets of tasks to assign to a resource @M { r },
## we don't want the tasks to overlap in time, or be separated by
## unused days.  So we define an @I { admissible task-set } to be a
## non-empty set of tasks such that each task is admissible, the
## tasks run on disjoint days, those days include the first day of
## the cycle, and there are no unused days between tasks.
## @PP
## The days that an admissible task-set @M { s } is running form an
## interval @M { i(s) } which begins on the first day of the cycle.
## As usual we define the length @M { l(s) } to be the length of
## @M { i(s) }.  We also define the @I domain @M { d(s) } to be the
## intersection of the domains of @M { s }'s tasks.  This is the set
## of resources that can be assigned to all of the tasks of @M { s }.
## @PP
## @BI { The algorithm }.
## As an initial idea, suppose we have somehow come up with a set
## @M { S } of admissible task-sets @M { s }.  Then we can solve
## our problem by building a bipartite graph and finding a maximum
## matching in it.  Each demand node is a resource @M { r } from
## @M { R }, each supply node is a task-set @M { s } from @M { S },
## and each edge joins an @M { r } to an @M { s } when
## @NumberedList
## 
## @LI {
## @M { r in d(s) };
## }
## 
## @LI {
## @M { C sub "min" <= h(r) + l(s) };
## }
## 
## @LI {
## @M { h(r) + l(s) <= C sub "max" }.
## }
## 
## @EndList
## A maximum matching in this graph can be used to decide which assignments
## to make.
## @PP
## Although this initial idea helps to clarify the problem, the real
## issue is how to group tasks into a set @M { S } of admissible
## task-sets so that the resulting maximum matching is as large as
## possible.  There does not seem to be an efficient algorithm for
## this problem (it resembles three-dimensional matching, which
## is NP-complete), so we proceed heuristically, as follows.
## @PP
## The algorithm builds a sequence of
## minimum-cost bipartite matchings.  We represent an instance
## of the minimum-cost bipartite matching problem in the usual way,
## as a triple @M { ( V sub 1 , V sub 2 , E ) }, where @M { V sub 1 }
## is a set of @I { demand nodes } that want to be matched,
## @M { V sub 2 } is a set of @I { supply nodes } that are available
## to match with demand nodes, and @M { E } is a set of weighted edges.
## Each edge @M { e = ( v sub 1 , v sub 2 , w ) } joins one
## demand node @M { v sub 1 } to one supply node @M { v sub 2 } by
## an edge of weight @M { w }.
## @PP
## The algorithm alternates between two kinds of minimum-cost bipartite
## matchings.  For each kind, we first present the demand nodes, then
## the supply nodes, then the edges.  We then explain how the matching
## is used, and only after that do we define the edge weights.  We do
## it this way because the weights are easier to understand once we
## know how the matching is used.
## @PP
## In the first kind of matching, which we call an @I { X-graph matching },
## the graph has the form @M { X sub i = (R, S, E) }
## where @M { R } is a set of resources of interest and
## @M { S } is a set of admissible task-sets, each of which has
## interval @M { i(s) = (0, j) } for some @M { j >= i }.  In other
## words, each task-set of @M { S } covers the first @M { i + 1 }
## time groups of @M { C } and possibly more.  The particular resources
## included in @M { R } and task-sets included in @M { S } depend on
## the progress of the algorithm and will be given later.
## @PP
## # @M { R prime } is a set of dummy supply nodes, one for each resource.  In
## # other words, for each @M { r in R } there is one @M { r prime in R prime }.
## # For each @M { r } there is an edge from @M { r } to @M { r prime };
## # this is the only edge entering @M { r prime }.  This arrangement
## # ensures that @M { r } always matches with something; i
## # @PP
## Some (not all) of the edges @M { (r, s) } in a minimum-cost matching
## in @M { X sub i } will be interpreted as decisions to assign @M { r }
## to the tasks of @M { s }.  Accordingly, an edge is drawn between
## demand node @M { r } and supply node @M { s } when conditions (1)
## and (3) above hold.
## @PP
## After finding a minimum-cost matching in @M { X sub i } we
## divide the @M { r in R } into three categories:
## @BulletList
## 
## @LI @OneCol {
## If @M { r } did not match, it is dropped (removed from @M { R }).
## It is not assigned to any tasks, and grouping by history will
## not assign it to any tasks.
## }
## 
## @LI @OneCol {
## If @M { r } matched with some @M { s in S }, and (2) above happens
## to hold for this @M { r } and @M { s }, then assigning @M { r } to
## the tasks of @M { s } gives @M { r } everything it needs.  So those
## assignments are made, then @M { r } is dropped (removed from
## @M { R }), and @M { s } is dropped (removed from @M { S }).
## }
## 
## @LI @OneCol {
## If @M { r } matched with some @M { s in S }, but (2)
## above does not hold for this @M { r } and @M { s }, then
## the quest to satisfy @M { r } must continue, so @M { r }
## remains in @M { R } and @M { s } remains in @M { S }.
## No assignments are made.
## }
## 
## @EndList
## Say something profound here.
## @PP
## When defining the edge weights, it helps to remember that X-graph
## matching is similar to resource matching
## (Section {@NumberOf resource_solvers.matching}).  Both use weighted
## bipartite matching to match resources with tasks.  The weight of an
## edge in resource matching is the solution cost after @M { r } is
## assigned to @M { s }.  But to do that here would probably not work
## well, because only some of the resources of type @C { rt } are being
## assigned.  So here, to each edge @M { (r, s) } we assign a weight
## @M { w(r, s) } which approximates the change in solution cost
## (that is, cost after minus cost before) when @M { r } is assigned
## to @M { s }.
## @PP
## Solution cost is affected by many constraints as grouping by
## history proceeds, but we are going to focus here on just two kinds:
## the limit active intervals constraint @M { C } that started all
## this, and the event resource constraints that are affected by the
## assignment or non-assignment of @M { s }.
## @PP
## Taking only @M { C } into account, let @M { a(r) } be the cost to
## @M { r } of assigning @M { r }, and let @M { n(r) } be the cost to
## @M { r } of not assigning @M { r }.  Similarly, taking event
## resource constraints relevant to @M { s } into account, let
## @M { a(s) } be the cost to @M { s } of assigning @M { s }, and let
## @M { n(s) } be the cost to @M { s } of not assigning @M { s }.  Then
## @ID @Math {
## w(r, s) = a(r) - n(r) + a(s) - n(s)
## }
## is a suitable weight.  The more the cost of non-assignment exceeds the
## cost of assignment, the smaller this will be (very likely it will be
## negative, but that does not matter), and the greater the chance will be
## of choosing this edge and thus avoiding the expensive non-assignment.
## @PP
## Concretely, @M { a(r) } is 0 and @M { n(r) } is the cost due to
## @M { h(r) } being smaller than @M { C sub "min" }.  The values for
## @M { n(s) } and @M { a(s) } are sums of the values returned by
## @C { KheTaskNonAsstAndAsstCost }
## (Section {@NumberOf resource_structural.mtask_finding.ops}).
## @PP
## This whole operation changes @M { R } and @M { S }.  So we notate it as
## @ID @M {
## (R, S) = XMatch(R, S);
## }
## This does not show the assignments that occur in the second case
## above, but it does show the two sets that the X-graph works with,
## and it shows that they have new values after the match.
## @PP
## In the second kind of matching, which we call a @I { Y-graph matching },
## the graph has the form @M { Y sub i = (S, T sub i , E) }, where
## @M { i >= 1 }, @M { S } is a set of admissible task-sets @M { s }
## such that @M { i(s) = (0, j) } for some @M { j >= i-1 }, and
## @M { T sub i } is (as above) the set of all admissible tasks @M { t }
## such that @M { i(t) = (i, k) } for some @M { k >= i }.
## @PP
## Each edge @M { (s, t) } in a minimum-cost matching in @M { Y sub i }
## will be interpreted as a decision to add @M { t } to @M { s },
## producing a larger admissible task-set.
## Accordingly, we draw an edge from @M { s in S } to each
## @M { t in T sub i } whenever @M { i(s) = (0, i-1) }.  We can't
## match an admissible task-set @M { s } with @M { i(s) = (0, j) }
## for some @M { j > i-1 } with a task @M { t } from @M { T sub i }
## with @M { i(t) = (i, k) }, because they would overlap at index @M { i }.
## @PP
## After finding a minimum-cost matching in @M { Y sub i } we
## divide the @M { s in S } into three categories:
## @BulletList
## 
## @LI {
## If @M { i(s) = (0, j) } for some @M { j > i-1 }, then @M { s } cannot
## match, but it is retained as is in @M { S }.
## }
## 
## @LI {
## If @M { i(s) = (0, i-1) } and @M { s } matches with some
## @M { t in T sub i }, then @M { s } is retained in @M { S }
## with @M { t } added to it.
## }
## 
## @LI {
## If @M { i(s) = (0, i-1) } and @M { s } does not match with any
## @M { t in T sub i }, then @M { s } is dropped (removed from @M { S }).
## }
## 
## @EndList
## Say something profound here.
## @PP
## For edge weights, we can't be guided by solution cost, since that
## is not directly affected by adding @M { t } to @M { s }.  Instead,
## we ask what makes a good choice.  The answer seems to have two parts.
## @PP
## First, we want the domain of @M { s cup lbrace t rbrace } to be as
## large as possible, since that will maximize our options in later
## matchings.  For example, we don't want to add a task requiring
## a senior nurse to a set of tasks requiring a trainee nurse:
## the result might be a set of tasks that no-one can be assigned to.
## Accordingly, we want to include
## @ID @Math {
## w sub 1 = minus bar ` d( s cup lbrace t rbrace ) ` bar
## }
## (where @M { bar ... bar } is set cardinality) in the weight of the
## edge from @M { s } to @M { t }.
## @PP
## Second, we don't want to add a task with a high non-assignment cost
## to a set of tasks with a high assignment cost (or vice versa), since
## that produces a set of tasks whose cost is high whether we assign
## it or not.  We want to match tasks with a high assignment cost
## together, and tasks with a high non-assignment cost together.  Let
## @M { n(s) } and @M { n(t) } be the non-assignment costs of @M { s }
## and @M { t }, and @M { a(s) } and @M { a(t) } be the assignment costs
## of @M { s } and @M { t }.  We can get what we want by including
## @ID @Math { 
## w sub 2 = bar n(s) - n(t) bar + bar a(s) - a(t) bar
## }
## (where @M { bar ... bar } is absolute value) in the weight of the
## edge from @M { s } to @M { t }.
## @PP
## How should we combine these two weights?  We could add them together,
## but that does not really make sense, because @M { w sub 1 } is a number
## of resources and @M { w sub 2 } is a cost.  Or we could declare one to
## be more important than the other, and use a weight which is an ordered
## pair:  @M { ( w sub 1 , w sub 2 ) } or @M { ( w sub 2 , w sub 1 ) }.
## The trouble with this is that it is hard to argue that either is more
## important than the other.
## @PP
## @I { remainder still to do }
## @PP
## This whole operation uses @M { T sub i } to change the admissible task-sets
## @M { S }.  So we notate it as
## @ID @M {
## S = YMatch(S, T sub i );
## }
## This shows the two sets that the Y-graph works with, and the fact that
## @M { S } changes its value.
## @PP
## Here is the main algorithm.  @M { R } is a set of resources of
## interest, and @M { S } is a set of admissible task-sets.  The
## value assigned to @M { S } at the start of the iteration of the
## loop with index value @M { i } is a set of admissible task-sets
## @M { s }, all of which satisfy @M { i(s) = (0, j) } for some @M { j >= i }.
## @ID @OneCol lines @Break {
## @M { R } = the set of all resources of interest;
## @B {for}( @M { i } = 0;  @M { i < n @B " and " bar R bar > 0 };  @M { i } = @M { i + 1 } )
## "{"
##     @B {if}( @M { i } == 0 )
##         @M { S = lbrace lbrace t rbrace `` bar `` t in T sub i rbrace };
##     @B {else}
##         @M { S = YMatch(S, T sub i ) };
## 
##     @M { (R, S) = XMatch(R, S) };
## "}"
## }
## In words, each iteration first builds a current set of admissible
## task-sets @M { S }, from scratch on the first iteration, and by
## extending the previous set on subsequent iterations.  It then matches
## @M { S } with the remaining resources of interest, and repeats until
## all resources have been handled.
## @PP
## @BI { Concluding points }.
## Although this algorithm works off limit active intervals constraints,
## it is quite different from profile grouping.  It needs to run before
## other kinds of grouping are run.  There is one point of potential
## overlap, however.  As described here, for the most part we build
## task-sets @M { s } such that @M { h(r) + l(s) = C sub "min" }.  We
## could choose to build larger sets than this, as long as we respect
## the upper limit @M { h(r) + l(s) <= C sub "max" }.  This might be
## useful if regular profile grouping determines that a set has to end
## where the larger @M { s } ends.  At present we are not doing this;
## we are relying on other parts of the overall solve to extend
## @M { s } if needed.
## @PP
## A review of this section will show that the algorithm still works if
## different resources have different values for @M { C sub "min" } and
## @M { C sub "max" }, as long as the time groups of @M { C } are the
## same for all resources.  So we start by finding all limit active
## intervals constraints that have the properties given above, then
## partition them into equivalence classes.  Two constraints lie in the
## same class when they have the same time groups in the same order.
## We then treat each class like a single constraint.  The resources
## of interest are all resources with non-zero history in any of the
## class's constraints, and @M { C sub "min" } and @M { C sub "max" },
## as well as the constraint weight and cost function, can differ
## between resources.
## @PP
## As mentioned earlier, we sort the constraint classes so that classes
## with smaller time groups come first.  A resource is of interest only
## in the first class where it has non-zero history.
## @PP
## All this is done, independently of any tasker or other solver,
## by function
## @ID @C {
## int KheGroupByHistory(KHE_SOLN soln, KHE_RESOURCE_TYPE rt,
##   KHE_OPTIONS options, KHE_TASK_SET r_ts);
## }
## Strictly speaking, this does no task grouping at all; rather,
## it assigns some resources to some tasks.  It returns the number of
## distinct resources that it assigns to tasks, adding to @C { r_ts }
## (if non-@C { NULL }) all tasks assigned a resource.  It is
## called by @C { KheGroupByResourceConstraints }
## (Section {@NumberOf resource_structural.constraints}) before
## other grouping functions.
## @PP
## A question is how to incorporate information about the cost of
## assigning or not assigning certain tasks.  We prefer to assign
## tasks for which non-assignment has a cost, and we prefer to not
## assign tasks for which assignment has a cost, but at present we
## are not doing anything to make that happen.
## @PP
## The algorithm has one undesirable property:  for each resource
## @M { r }, it either reduces @M { c(r) } all the way to 0, or else
## it does not reduce it at all.  There should be some way of handling
## resources for which the best outcome is somewhere in between.
## @End @SubSection
#
#@EndSubSections
#@End @Section

@Section
    @Title { Constraint classes }
    @Tag { resource_structural.constraint_classes }
@Begin
@LP
Some solvers, notably task groupers, are driven by constraints:
complete weekends constraints, consecutive night shifts
constraints, and so on.  But dealing with
such constraints one by one has a problem:  there could be two
constraints that constrain the same thing.  For example, there
could be one constraint specifying a minimum workload and
another specifying a maximum workload; or one constraint could
apply to some resources, and another to the rest.
@PP
What we need to do in these cases is to group the constraints into
@I { classes }:  sets of constraints that constrain the same thing.
Then solvers can handle each constraint class in turn, rather than
each constraint.  This section presents the
@I { constraint class finder }.  It finds these classes for cluster
busy times and limit active intervals constraints.
@PP
The first step is to build a constraint class finder object,
by calling
@ID @C {
KHE_CONSTRAINT_CLASS_FINDER KheConstraintClassFinderMake(
  KHE_RESOURCE_TYPE rt, KHE_FRAME days_frame, HA_ARENA a);
}
This object remains available until arena @C { a } is deleted or
recycled.  Any number of cluster busy times and limit active
intervals constraints may then be added, by calling
@ID @C {
void KheConstraintClassFinderAddConstraint(
  KHE_CONSTRAINT_CLASS_FINDER ccf, KHE_CONSTRAINT c, int offset);
}
repeatedly.  Only constraints whose points of application include
at least one resource of type @C { rt } are accepted; any others
are silently omitted.  The test used for this is
@ID @C {
KheClusterBusyTimesConstraintResourceOfTypeCount(c, rt) > 0
}
and similarly for limit active intervals constraints.  Here @C { rt }
is the resource type passed to @C { KheConstraintClassFinderMake }.
@PP
Requiring constraints to be added one by one means that the user
has to iterate over the constraints and their offsets and pass
in the ones to be included, like this for example:
@ID @C {
for( i = 0;  i < KheInstanceConstraintCount(ins);  i++ )
{
  c = KheInstanceConstraint(ins, i);
  if( KheConstraintTag(c) == KHE_CLUSTER_BUSY_TIMES_CONSTRAINT_TAG &&
      KheConstraintWeight(c) > 0 )
  {
    cbtc = (KHE_CLUSTER_BUSY_TIMES_CONSTRAINT) c;
    count = KheClusterBusyTimesConstraintAppliesToOffsetCount(cbtc);
    for( j = 0;  j < count;  j++ )
    {
      offset = KheClusterBusyTimesConstraintAppliesToOffset(cbtc, j);
      KheConstraintClassFinderAddConstraint(ccf, c, offset);
    }
  }
}
}
It is done this way because only the user knows which constraints plus
offsets are relevant.  For example, the user might want only cluster busy
times constraints whose time groups are all positive, in which case the
user can call @C { KheClusterBusyTimesConstraintAllPositive } from the
KHE platform before calling @C { KheConstraintClassFinderAddConstraint }.
@PP
There is one function that performs this kind of iteration for you:
@ID @C {
void KheConstraintClassFinderAddCompleteWeekendsConstraints(
  KHE_CONSTRAINT_CLASS_FINDER ccf, bool exact_days);
}
It searches the instance for @I { complete weekends } constraints of
the appropriate resource type and adds them to @C { ccf } using calls
to @C { KheConstraintClassFinderAddConstraint }.  A complete weekends
constraint specifies that a resource must be busy on both days of a
weekend or neither.  Concretely, it is a cluster busy times constraint
with positive weight, exactly two time groups (both positive), minimum
limit 2, maximum limit 2, and allow zero flag @C { true }.  The two time
groups must each be a subset of one of the days of @C { ccf }'s days
frame, and the two days thus defined must be adjacent in the days frame.
There is nothing special about this function; it has been included
only because KHE offers two solvers that need these constraints,
and placing the code for finding them here means that it is
written only once.
@PP
As just mentioned, the two time groups must be subsets of time
groups of the days frame.  If @C { exact_days } is @C { true },
the two time groups must also be equal to days of the days frame.
@PP
Packaged with the constraint class finder are these test functions:
@ID @C {
bool KheConstraintTimeGroupsAllSingletons(KHE_CONSTRAINT c);
bool KheConstraintTimeGroupsEqualFrame(KHE_CONSTRAINT c, int offset,
  KHE_FRAME days_frame);
bool KheConstraintTimeGroupsSubsetFrame(KHE_CONSTRAINT c, int offset,
  KHE_FRAME days_frame);
}
@C { KheConstraintTimeGroupsAllSingletons } returns @C { true }
when the time groups of @C { c } are all singletons.
@C { KheConstraintTimeGroupsEqualFrame } returns @C { true } when
@C { c } plus @C { offset } has the same time groups as
@C { days_frame } in the same order.
@C { KheConstraintTimeGroupsSubsetFrame } returns @C { true }
when @C { c } plus @C { offset } has the same number of
time groups as @C { days_frame }, and each time group
of @C { c } is a subset of the corresponding time group
of @C { days_frame }.  No constraint class finder object
is needed when calling these functions.
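@PP
As an illustration of the last test (with time groups modelled as
bitmasks of times, which is a hypothetical simplification, not the
KHE representation), the subset-of-frame check amounts to this:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative sketch only:  time groups are modelled as bitmasks of
   times, not the KHE representation.  The constraint must have the same
   number of time groups as the days frame, and each of its time groups
   must be a subset of the corresponding frame time group. */
static bool TimeGroupsSubsetFrame(const unsigned *c_tgs, int c_count,
  const unsigned *frame_tgs, int frame_count)
{
  int i;
  if( c_count != frame_count )
    return false;
  for( i = 0;  i < c_count;  i++ )
    if( (c_tgs[i] & ~frame_tgs[i]) != 0 )  /* times outside frame group */
      return false;
  return true;
}
```

The equal-to-frame test is the same except that each time group must
equal, not merely be a subset of, the corresponding frame time group.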
@PP
At any point, the user may call
@ID @C {
int KheConstraintClassFinderClassCount(KHE_CONSTRAINT_CLASS_FINDER ccf);
KHE_CONSTRAINT_CLASS KheConstraintClassFinderClass(
  KHE_CONSTRAINT_CLASS_FINDER ccf, int i);
}
to iterate over the constraint classes of @C { ccf }.  And
@ID @C {
void KheConstraintClassFinderDebug(KHE_CONSTRAINT_CLASS_FINDER ccf,
  int verbosity, int indent, FILE *fp);
}
produces a debug print of @C { ccf } onto @C { fp } with the given
verbosity and indent.
@PP
The basic rule is that two constraints lie in the same class when
they have the same type (cluster busy times or limit active intervals)
and the same time groups (after offsets are applied) in the same
order with the same polarities.  Within one class, constraint
weights and minimum and maximum limits may vary, as may the
resources that the constraints apply to.
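@PP
The rule can be sketched in C.  This is an illustration only, using
hypothetical simplified types rather than the KHE API:  a class key is
a tag plus an ordered sequence of (time group, polarity) pairs, and two
constraints lie in the same class exactly when their keys are equal.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical simplified model of a constraint's class key:  its tag
   plus its sequence of (time group, polarity) pairs after offsets are
   applied.  Weights, limits, and resources are not part of the key. */
typedef enum { TAG_CLUSTER_BUSY_TIMES, TAG_LIMIT_ACTIVE_INTERVALS } TAG;
typedef struct { int time_group_id; bool positive; } TG;
typedef struct { TAG tag; int tg_count; TG tgs[8]; } CLASS_KEY;

/* two constraints lie in the same class when their keys are equal */
static bool SameClass(const CLASS_KEY *a, const CLASS_KEY *b)
{
  int i;
  if( a->tag != b->tag || a->tg_count != b->tg_count )
    return false;
  for( i = 0;  i < a->tg_count;  i++ )
    if( a->tgs[i].time_group_id != b->tgs[i].time_group_id ||
        a->tgs[i].positive != b->tgs[i].positive )
      return false;
  return true;
}
```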
@PP
We turn now to functions which report attributes of constraint
classes.  Functions
@ID @C {
int KheConstraintClassConstraintCount(KHE_CONSTRAINT_CLASS cc);
KHE_CONSTRAINT KheConstraintClassConstraint(KHE_CONSTRAINT_CLASS cc,
  int i, int *offset);
}
return the constraints (with offsets) that make up class @C { cc }.
This is not often useful, because we are trying to use the class
instead of its constraints.  Function
@ID @C {
bool KheConstraintClassCoversResourceType(KHE_CONSTRAINT_CLASS cc);
}
returns @C { true } when every resource of @C { cc }'s constraint
class finder's resource type is a point of application of at least
one constraint from @C { cc }.
# This test only makes sense for the class as a whole, not for
# individual constraints.
@PP
Most constraint class operations mimic the operations of individual
constraints.  The hope is that these operations will make classes as
easy to work with as individual constraints, or indeed easier, given
that offsets are taken care of.  We present these operations now.
@PP
All constraints of a given class have the same type, whose tag
is returned by
@ID @C {
KHE_CONSTRAINT_TAG KheConstraintClassTag(KHE_CONSTRAINT_CLASS cc);
}
There is also
@ID @C {
char *KheConstraintClassId(KHE_CONSTRAINT_CLASS cc);
}
which returns the Id of any one of the constraints of @C { cc }.
@PP
All constraints of a given class have the same time groups with the
same polarities in the same order, ensuring that these functions
are well-defined:
@ID @C {
int KheConstraintClassTimeGroupCount(KHE_CONSTRAINT_CLASS cc);
KHE_TIME_GROUP KheConstraintClassTimeGroup(KHE_CONSTRAINT_CLASS cc,
  int i, KHE_POLARITY *po);
}
There is no @C { offset } parameter because each class consists
of constraint plus offset values, not constraints alone.  Other
functions which mimic those of individual constraints are
@ID {0.98 1.0} @Scale @C {
bool KheConstraintClassAllPositive(KHE_CONSTRAINT_CLASS cc);
bool KheConstraintClassAllNegative(KHE_CONSTRAINT_CLASS cc);
bool KheConstraintClassTimeGroupsDisjoint(KHE_CONSTRAINT_CLASS cc);
bool KheConstraintClassTimeGroupsCoverWholeCycle(KHE_CONSTRAINT_CLASS cc);
}
Again, these make sense because @C { cc }'s constraints have the
same time groups and polarities.  At present, the last two work
only for classes of cluster busy times constraints, because the
corresponding functions for limit active intervals constraints
are not implemented.  Also,
@ID {0.98 1.0} @Scale @C {
bool KheConstraintClassHasUniformLimits(KHE_CONSTRAINT_CLASS cc);
}
returns @C { true } if all the resources of type @C { rt } have
the same minimum limit, and also the same maximum limit, as
reported by functions @C { KheConstraintClassResourceMinimum } and
@C { KheConstraintClassResourceMaximum } below.  This is not the
same as every constraint having the same limits.  For example, even
when there is only one constraint, if that constraint does not apply
to all resources of type @C { rt } the result will be @C { false }.
@PP
The following functions are somewhat problematic, because the
values they report can vary from one constraint or resource to
another within the class.  The author has done his best to return
useful values here, but they need to be used with caution:
@ID @C {
bool KheConstraintClassAllowZero(KHE_CONSTRAINT_CLASS cc);
int KheConstraintClassMinimum(KHE_CONSTRAINT_CLASS cc);
int KheConstraintClassMaximum(KHE_CONSTRAINT_CLASS cc);
KHE_COST KheConstraintClassCombinedWeight(KHE_CONSTRAINT_CLASS cc);
KHE_COST KheConstraintClassDeterminantToCost(KHE_CONSTRAINT_CLASS cc,
  int determinant, bool at_end);
}
# KHE_COST KheConstraintClassDevToCost(KHE_CONSTRAINT_CLASS cc, int dev);
@C { KheConstraintClassAllowZero } returns @C { true } when all of the
constraints allow zero (@C { false } if they are limit active intervals
constraints).  Only then will a value of 0 have cost 0.
@PP
@C { KheConstraintClassMinimum } returns the largest of the minimum
limits of the constraints of @C { cc }.  Anything smaller will
violate one of @C { cc }'s constraints and will have a cost.
Similarly, @C { KheConstraintClassMaximum } returns the smallest
of the maximum limits of @C { cc }'s constraints.
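@PP
As a sketch (using hypothetical helper functions, not part of KHE),
these two rules are just a maximum of minimum limits and a minimum of
maximum limits:

```c
#include <assert.h>

/* Hypothetical sketch:  any value below the largest of the minimum
   limits, or above the smallest of the maximum limits, violates at
   least one constraint of the class.  Assumes count >= 1. */
static int ClassMinimum(const int *min_limits, int count)
{
  int i, res = min_limits[0];
  for( i = 1;  i < count;  i++ )
    if( min_limits[i] > res )
      res = min_limits[i];
  return res;
}

static int ClassMaximum(const int *max_limits, int count)
{
  int i, res = max_limits[0];
  for( i = 1;  i < count;  i++ )
    if( max_limits[i] < res )
      res = max_limits[i];
  return res;
}
```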
@PP
@C { KheConstraintClassCombinedWeight } returns the sum of the
combined weights of the constraints of @C { cc }.
# Similarly,
# @C { KheConstraintClassDevToCost } returns the sum, over the
# constraints @C { c } of @C { cc }, of @C { KheConstraintDevToCost(c, dev) }.
# This value is somewhat peculiar, because it basically assumes that
# all the constraints have the same limits.  More realistically,
@C { KheConstraintClassDeterminantToCost } returns the sum, over
the constraints of @C { cc }, of the cost produced by @C { determinant }.
It compares @C { determinant } with each constraint's limits and allow
zero flag separately, producing a deviation and then a cost.
@PP
If @C { cc }'s constraints are cluster busy times constraints, the
@C { at_end } argument is unused; but if they are limit active intervals
constraints, it is used, to say whether we are at the end of the
sequence of time groups or not.  If we are at the end, then for
each constraint that has history after, a violation of a minimum limit
produces no cost.
Arguably there should also be an @C { at_start } parameter, to include
history in the limit active intervals case.  History is not included
in these calculations, so the user who needs it must include a history
value in @C { determinant }.
# @C { KheConstraintClassResourceDeterminantToCost } (below) is more
# likely to need this than @C { KheConstraintClassDeterminantToCost }.
@PP
One way to make these numbers less problematic is to get them
for a specific resource @C { r }:
@ID {0.98 1.0} @Scale @C {
bool KheConstraintClassResourceAllowZero(KHE_CONSTRAINT_CLASS cc,
  KHE_RESOURCE r);
int KheConstraintClassResourceMinimum(KHE_CONSTRAINT_CLASS cc,
  KHE_RESOURCE r);
int KheConstraintClassResourceMaximum(KHE_CONSTRAINT_CLASS cc,
  KHE_RESOURCE r);
KHE_COST KheConstraintClassResourceCombinedWeight(KHE_CONSTRAINT_CLASS cc,
  KHE_RESOURCE r);
KHE_COST KheConstraintClassResourceDeterminantToCost(
  KHE_CONSTRAINT_CLASS cc, KHE_RESOURCE r, int determinant, bool at_end);
}
# KHE_COST KheConstraintClassResourceDevToCost(KHE_CONSTRAINT_CLASS cc,
#   KHE_RESOURCE r, int dev);
The same calculations are made (indeed, using the same code), but
limited to those constraints of @C { cc } whose points of application
include @C { r }.  It is possible that none of the constraints of
@C { cc } apply to @C { r }.  In that case the values returned are
@C { true }, @C { 0 }, @C { INT_MAX }, @C { 0 }, and @C { 0 }.
@PP
The first time that any one of these last five functions is called
for any @C { r }, or the first time that
@C { KheConstraintClassCoversResourceType }
is called, a fairly large amount of work is done which prepares
@C { cc } for answering these queries for all @C { r }.  This is
to work out, for each @C { r }, which of @C { cc }'s constraints
have @C { r } as a point of application.  The return values are
not stored, so users who expect to be asking for the same values
repeatedly might want to cache them on their side.
@PP
There is also
@ID @C {
int KheConstraintClassResourceHistory(KHE_CONSTRAINT_CLASS cc,
  KHE_RESOURCE r);
}
This returns the maximum, over the constraints @C { c } of @C { cc }, of
@C { c }'s history value for @C { r }.  And
@ID @C {
int KheConstraintClassResourceMaximumMinusHistory(
  KHE_CONSTRAINT_CLASS cc, KHE_RESOURCE r);
}
returns the minimum, over the constraints @C { c } of @C { cc }, of
@C { max(0, m - h) },
where @C { m } is @C { c }'s maximum limit, and @C { h }
is @C { c }'s history for @C { r } (or 0 as usual if @C { c }
has no history for @C { r }).  This is the largest number of
time groups of @C { cc } (initial time groups for a limit active
intervals class) that @C { r } can be busy for, taking
history into account, without incurring a cost.
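@PP
The calculation can be sketched as follows.  This is an illustration
only, with hypothetical arrays standing in for the KHE types:  each
constraint contributes @C { max(0, m - h) }, and the class reports
the minimum of these contributions.

```c
#include <assert.h>

/* Hypothetical sketch of the MaximumMinusHistory calculation:  for each
   constraint take max(0, m - h), where m is its maximum limit and h is
   its history value for the resource (0 if none), then return the
   minimum over the constraints of the class.  Assumes count >= 1. */
static int MaximumMinusHistory(const int *max_limits,
  const int *histories, int count)
{
  int i, val, res = -1;
  for( i = 0;  i < count;  i++ )
  {
    val = max_limits[i] - histories[i];
    if( val < 0 )
      val = 0;
    if( res == -1 || val < res )
      res = val;
  }
  return res;
}
```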
@PP
Finally,
@ID @C {
void KheConstraintClassDebug(KHE_CONSTRAINT_CLASS cc,
  int verbosity, int indent, FILE *fp);
}
produces a debug print of @C { cc } onto @C { fp } with the given
verbosity and indent.
@End @Section

@Section
    @Title { Task finding }
    @Tag { resource_structural.task_finding }
@Begin
@LP
@I { Task finding } is KHE's name for some operations, based on
@I { task finder } objects, that find sets of tasks which are to be
moved all together from one resource to another.  Task finding is
used by only a few solvers, because it has been replaced by
@I { mtask finding }, the subject of
Section {@NumberOf resource_structural.mtask_finding}.  Only old
code uses task finding now; it may eventually be removed altogether.
@PP
Task finding is concerned with which days tasks are running.  A @I day
is a time group of the common frame.  The days that a task @C { t }
is running are the days containing the times that @C { t } itself is
running, plus the days containing the times that the tasks assigned
to @C { t }, directly or indirectly, are running.  The days that a
task set is running are the days that its tasks are running.
@PP
Task finding represents the days that a task or task set is running
by a @I { bounding interval }, a pair of integers:  @C { first_index },
the index in the common frame of the first day that the task or task
set is running, and @C { last_index }, the index of the last day that
the task or task set is running.  So task finding is unaware of cases
where a task runs twice on the same day, or has a @I gap (a day within
the bounding interval when it is not running).  Neither is likely in
practice.  Task finding considers the duration of a task or task set
to be the length of its bounding interval.
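@PP
As an illustration (hypothetical types, not the KHE task finder), a
bounding interval can be computed from the frame day indexes of a task
set's tasks like this:

```c
#include <assert.h>

/* Hypothetical sketch:  the bounding interval of a non-empty task set
   is the pair (first_index, last_index) over the frame day indexes of
   its tasks, and its duration is the interval's length.  Gaps inside
   the interval are invisible to this representation. */
typedef struct { int first_index, last_index; } INTERVAL;

static INTERVAL BoundingInterval(const int *day_indexes, int count)
{
  INTERVAL res;
  int i;
  res.first_index = res.last_index = day_indexes[0];
  for( i = 1;  i < count;  i++ )
  {
    if( day_indexes[i] < res.first_index )
      res.first_index = day_indexes[i];
    if( day_indexes[i] > res.last_index )
      res.last_index = day_indexes[i];
  }
  return res;
}

static int IntervalLength(INTERVAL in)
{
  return in.last_index - in.first_index + 1;
}
```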
@PP
Task finding operations typically find a set of tasks, often
stored in a task set object (Section {@NumberOf extras.task_sets}).
In some cases these tasks form a @I { task run }, that is, they
satisfy these conditions:
@NumberedList

@LI {
The set is non-empty.  An empty run would be useless.
}

@LI {
Every task is a proper root task.  The tasks are being found in
order to be moved from one resource to another, and this ensures
that the move will not break up any groups.
}

@LI {
No two tasks run on the same day.  This is more or less automatic
when the tasks are all assigned the same resource initially, but it
holds whether the tasks are assigned or not.  If it didn't, then
when the tasks are moved to a common resource there would be clashes.
}

@LI {
The days that the tasks are running are consecutive.  In other words,
between the first day and the last there are no @I { gaps }:  days
when none of the tasks is running.
}

@EndList
The task finder does not reject tasks which run twice on the same
day or which have gaps.  As explained above, it is unaware of these
cases.  So the last two conditions should really say that the task
finder does not introduce any @I new clashes or gaps when it groups
tasks into runs.
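@PP
Conditions 1, 3, and 4 can be sketched in C.  This is an illustration
only, with each task reduced to a single frame day index (hypothetical,
since real tasks may run on several days), and condition 2 omitted
because it needs solution data:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical sketch of the task run conditions:  the set is
   non-empty (1), no two tasks share a day (3), and the occupied days
   are consecutive, i.e. the span equals the task count (4). */
static bool IsRun(const int *day_indexes, int count)
{
  int i, j, min, max;
  if( count == 0 )
    return false;                       /* condition 1: non-empty */
  min = max = day_indexes[0];
  for( i = 1;  i < count;  i++ )
  {
    for( j = 0;  j < i;  j++ )
      if( day_indexes[i] == day_indexes[j] )
        return false;                   /* condition 3: no clashes */
    if( day_indexes[i] < min ) min = day_indexes[i];
    if( day_indexes[i] > max ) max = day_indexes[i];
  }
  return max - min + 1 == count;        /* condition 4: no gaps */
}
```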
@PP
Some runs are @I { unpreassigned runs }, meaning that all of their
tasks are unpreassigned.  Only unpreassigned runs can be moved from
one resource to another.  And some runs are @I { maximal runs }:
they cannot be extended, either to left or right.  We mainly deal
with maximal runs, but just what we mean by `maximal' depends on
circumstances.  For example, we may want to exclude preassigned
tasks from our runs.  So our definition does @I not take the
arguably reasonable extra step of requiring all runs to be maximal.
@PP
Some task finding operations find all tasks assigned a particular
resource in a particular interval.  In these cases, only conditions
2 and 3 must hold; the result need not be a task run.
@PP
Task finding treats non-assignment like the assignment of a special
resource (represented by @C { NULL }).  This makes it equally at home
finding assigned and unassigned tasks.
@PP
A task @C { t } @I { needs assignment } if @C { KheTaskNeedsAssignment(t) }
(Section {@NumberOf solutions.tasks.asst}) returns @C { true },
meaning that non-assignment of a resource to @C { t } would incur
a cost, because of an assign resource constraint, or a limit
resources constraint which is currently at or below its minimum
limit, that applies to @C { t }.  Task finding never includes
tasks that do not need assignment when it searches for unassigned
tasks, because assigning resources to such tasks is not a high
priority.  It does include them when searching for assigned tasks.
@PP
A resource is @I { effectively free } during some set of days if it
is @C { NULL }, or it is not @C { NULL } and the tasks it is assigned
to on those days do not need assignment.  The point is that it
is always safe to move some tasks to a resource on days when it is
effectively free:  if the resource is @C { NULL }, they are simply
unassigned, and if it is non-@C { NULL }, any tasks running on those
days do not need assignment, and can be unassigned, at no cost, before
the move is made.  Task finding utilizes the effectively free concept and
offers move operations that work in this way.
@BeginSubSections

@SubSection
    @Title { Task finder objects }
    @Tag { resource_structural.task_finding.task_finder }
@Begin
@LP
To create a task finder object, call
@ID @C {
KHE_TASK_FINDER KheTaskFinderMake(KHE_SOLN soln, KHE_OPTIONS options,
  HA_ARENA a);
}
This returns a pointer to a private struct in arena @C { a }.  Options
@C { gs_common_frame } (Section {@NumberOf extras.frames}) and
@C { gs_event_timetable_monitor } (Section {@NumberOf general_solvers.general})
are taken from @C { options }.  If either is @C { NULL },
@C { KheTaskFinderMake } returns @C { NULL }, since it cannot
do its work without them.
@PP
Ejection chain repair code can obtain a task finder from the ejector
object, by calling
@ID @C {
KHE_TASK_FINDER KheEjectorTaskFinder(KHE_EJECTOR ej);
}
This saves time and memory compared with creating new task finders
over and over.  Once again the return value is @C { NULL } if the
two options are not both present.
@PP
The days tasks are running (the time groups of the common frame) are
represented in task finding by their indexes, as explained above.
The first legal index is 0; the last is returned by
@ID @C {
int KheTaskFinderLastIndex(KHE_TASK_FINDER tf);
}
This is just @C { KheFrameTimeGroupCount(frame) - 1 }, where @C { frame }
is the common frame.  Also,
@ID @C {
KHE_FRAME KheTaskFinderFrame(KHE_TASK_FINDER tf);
}
may be called to retrieve the frame itself.
@PP
As defined earlier, the bounding interval of a task or task set
is the smallest interval containing all the days that the task
or task set is running.  It is returned by these functions:
@ID @C {
KHE_INTERVAL KheTaskFinderTaskInterval(KHE_TASK_FINDER tf,
  KHE_TASK task);
KHE_INTERVAL KheTaskFinderTaskSetInterval(KHE_TASK_FINDER tf,
  KHE_TASK_SET ts);
}
These return an interval (Section {@NumberOf general_solvers.intervals})
holding the indexes in the common frame of the first and last days that
@C { task } or @C { ts } is running.  If @C { ts } is empty, the
interval is empty.  There is also
@ID @C {
KHE_INTERVAL KheTaskFinderTimeGroupInterval(KHE_TASK_FINDER tf,
  KHE_TIME_GROUP tg);
}
which returns an interval holding the first and last days that
@C { tg } overlaps with.  If @C { tg } is empty, the interval
is empty.
@PP
These three operations find task sets and runs:
@ID @C {
void KheFindTasksInInterval(KHE_TASK_FINDER tf,
  KHE_INTERVAL in, KHE_RESOURCE_TYPE rt, KHE_RESOURCE from_r,
  bool allow_preassigned, bool allow_partial,
  KHE_TASK_SET res_ts, KHE_INTERVAL *res_in);
bool KheFindFirstRunInInterval(KHE_TASK_FINDER tf,
  KHE_INTERVAL in, KHE_RESOURCE_TYPE rt, KHE_RESOURCE from_r,
  bool allow_preassigned, bool allow_partial, bool sep_need_asst,
  KHE_TASK_SET res_ts, KHE_INTERVAL *res_in);
bool KheFindLastRunInInterval(KHE_TASK_FINDER tf,
  KHE_INTERVAL in, KHE_RESOURCE_TYPE rt, KHE_RESOURCE from_r,
  bool allow_preassigned, bool allow_partial, bool sep_need_asst,
  KHE_TASK_SET res_ts, KHE_INTERVAL *res_in);
}
All three functions clear @C { res_ts }, which must have been
created previously, then add to it some tasks which are assigned
@C { from_r } (or are unassigned if @C { from_r } is @C { NULL }).
They set @C { *res_in } to the bounding interval of the tasks of
@C { res_ts }.
@PP
Call @C { in } the @I { target interval }.  A task @C { t }
@I { overlaps } the target interval when at least one of the days
on which @C { t } is running lies in it.  Subject to the following
conditions, @C { KheFindTasksInInterval } finds all tasks
that overlap the target interval; @C { KheFindFirstRunInInterval }
finds the first (leftmost) run containing a task that overlaps the
target interval, or returns @C { false } if there is no such run;
and @C { KheFindLastRunInInterval } finds the last (rightmost) run
containing a task that overlaps the target interval, or returns
@C { false } if there is no such run.
@PP
When @C { from_r } is @C { NULL }, only unassigned tasks that need
assignment (as discussed above) are added.  The first could be any
unassigned task of type @C { rt } (it is this that @C { rt } is
needed for), but the others must be compatible with the first, in
that we expect these tasks to be assigned some single resource,
and it would not do for them to have widely different domains.
@PP
Some tasks are @I { ignored }, which means that the operation
behaves as though they are simply not there.  Subject to this
ignoring feature, the runs found are maximal.  A task is ignored in
this way when it runs on a day on which some task already added to
@C { res_ts } is running.  Preassigned
tasks are allowed when @C { allow_preassigned } is @C { true }.
Tasks that are running partly or wholly outside the target
interval are allowed when @C { allow_partial } is @C { true }.
When @C { allow_partial } is @C { true }, a run can extend
an arbitrary distance beyond the target interval, and contain
some tasks that do not overlap the target interval at all.
@PP
If @C { sep_need_asst } is @C { true }, all tasks @C { t }
in the run found by @C { KheFindFirstRunInInterval } or
@C { KheFindLastRunInInterval } have the same value of
@C { KheTaskNeedsAssignment(t) }.  This value could be @C { true }
or @C { false }, but it is the same for all tasks in the run.
If @C { sep_need_asst } is @C { false }, there is no requirement
of this kind.
@End @SubSection

@SubSection
    @Title { Daily schedules }
    @Tag { resource_structural.task_finding.daily }
@Begin
@LP
Sometimes more detailed information is needed about when a
task is running than just the bounding interval.  In those
cases, task finding offers @I { daily schedules }, which
calculate both the bounding interval and what is going on
on each day:
@ID @C {
KHE_DAILY_SCHEDULE KheTaskFinderTaskDailySchedule(
  KHE_TASK_FINDER tf, KHE_TASK task);
KHE_DAILY_SCHEDULE KheTaskFinderTaskSetDailySchedule(
  KHE_TASK_FINDER tf, KHE_TASK_SET ts);
KHE_DAILY_SCHEDULE KheTaskFinderTimeGroupDailySchedule(
  KHE_TASK_FINDER tf, KHE_TIME_GROUP tg);
}
These return a @I { daily schedule }:  a representation of
what @C { task }, @C { ts }, or @C { tg } is doing on each
day, including tasks assigned directly or indirectly to
@C { task } or @C { ts }.  Also,
@ID @C {
KHE_DAILY_SCHEDULE KheTaskFinderNullDailySchedule(
  KHE_TASK_FINDER tf, KHE_INTERVAL in);
}
returns a daily schedule representing doing nothing during
the given interval.
@PP
A @C { KHE_DAILY_SCHEDULE } is an object which uses memory
taken from its task finder's arena.  It can be deleted (which
actually means being added to a free list in its task finder)
by calling
@ID @C {
void KheDailyScheduleDelete(KHE_DAILY_SCHEDULE ds);
}
It has these attributes:
@ID @C {
KHE_TASK_FINDER KheDailyScheduleTaskFinder(KHE_DAILY_SCHEDULE ds);
bool KheDailyScheduleNoOverlap(KHE_DAILY_SCHEDULE ds);
KHE_INTERVAL KheDailyScheduleInterval(KHE_DAILY_SCHEDULE ds);
}
# int KheDailyScheduleFirstDayIndex(KHE_DAILY_SCHEDULE ds);
# int KheDailyScheduleLastDayIndex(KHE_DAILY_SCHEDULE ds);
@C { KheDailyScheduleTaskFinder } returns @C { ds }'s task finder;
@C { KheDailyScheduleNoOverlap } returns @C { true } when no two
of the schedule's times occur on the same day, and @C { false }
otherwise; and @C { KheDailyScheduleInterval } returns the interval
of day indexes of the schedule's days.  For each day between the
interval's first and last inclusive,
@ID @C {
KHE_TASK KheDailyScheduleTask(KHE_DAILY_SCHEDULE ds, int day_index);
}
returns the task running in @C { ds } on day @C { day_index }.
It may be a task assigned directly or indirectly to @C { task }
or @C { ts }, not necessarily @C { task } or a task from
@C { ts }.  @C { NULL } is returned if no task is running
on that day.  This is certain for schedules created by
@C { KheTaskFinderTimeGroupDailySchedule } and
@C { KheTaskFinderNullDailySchedule }, but it is also possible
for schedules created by @C { KheTaskFinderTaskDailySchedule }
and @C { KheTaskFinderTaskSetDailySchedule }.  If there are two
or more tasks running on that day, an arbitrary one of them is
returned; this cannot happen when @C { KheDailyScheduleNoOverlap }
returns @C { true }.  Similarly,
@ID @C {
KHE_TIME KheDailyScheduleTime(KHE_DAILY_SCHEDULE ds, int day_index);
}
returns the time in @C { ds } that is busy on day @C { day_index }.
This will be @C { NULL } if there is no time in the schedule on that
day, which is always the case when the schedule was created by a
call to @C { KheTaskFinderNullDailySchedule }.
@End @SubSection

@EndSubSections
@End @Section

@Section
    @Title { Multi-task finding }
    @Tag { resource_structural.mtask_finding }
@Begin
@LP
The author has made several attempts over the years to define an
equivalence relation on tasks and use it to group equivalent tasks
together into classes.  The purpose is to avoid symmetrical
assignments, in which a resource is assigned to several tasks in
turn which are in fact equivalent, wasting time.  This section
describes what he hopes and believes will be his final attempt.
@PP
It could be argued that equivalence classes of tasks are only needed
because XHSTT, and following it the KHE platform, allow at most one
resource to be assigned to each task at any given moment during solving.
If several could be assigned, equivalence would be guaranteed because
the `tasks' thus grouped would be indistinguishable.  This would
probably work for nurse rostering, but in high school timetabling it
would not handle tasks that become equivalent when their meets are
assigned the same time---requests for ordinary classrooms, for example.
@PP
Still, `a task to which several resources can be assigned' is a
valuable abstraction, better for the user than a set of equivalent
tasks.  So instead of defining a task group or task class (as in the
author's previous attempts), we define a @I { multi-task } or @I { mtask }
to be a task to which several resources can be assigned simultaneously.
Behind the scenes, an mtask is a set of equivalent proper root tasks,
but the user does not know or care which tasks those are, or which are
assigned which resources:  the mtask handles that, in a provably best
possible way, as we'll see.
@PP
The idea, then, is to group tasks into mtasks and to write resource
assignment algorithms that assign resources to mtasks rather than to
tasks.  Assigning resources to mtasks is somewhat harder to do than
assigning them to tasks, because mtasks accept multiple assignments,
but it should run faster because assignment symmetries are avoided.
@PP
Three types are defined here.  Type @C { KHE_MTASK } represents one
mtask.  @C { KHE_MTASK_SET } represents a simple set of mtasks.  And
@C { KHE_MTASK_FINDER } creates mtasks and holds them.  All older
attempts at task equivalencing have been removed from the KHE platform
and solvers.
# @PP
# The author has removed several older attempts at task equivalencing
# from the KHE platform and solvers.  However, one type has not been
# removed:  @C { KHE_TASKER_CLASS } from
# Section {@NumberOf resource_structural.constraints.taskers}.  It
# could be unified with @C { KHE_MTASK }, but its implementation,
# supporting combinatorial and profile grouping, would add a lot of
# complexity to @C { KHE_MTASK }.  For now, anyway, it remains separate.
@BeginSubSections

@SubSection
    @Title { Multi-tasks }
    @Tag { resource_structural.mtask_finding.ops }
@Begin
@LP
A @I { multi-task } or @I mtask is a task to which several resources
can be assigned simultaneously.  Behind the scenes, it is a non-empty set
of proper root tasks which are equivalent to one another in a sense to be
defined in Section {@NumberOf resource_structural.mtask_finding.similarity}.
This section presents the operations on mtasks.
@PP
There is no operation to create one mtask, because mtasks need to
be made together all at once, which is what @C { KheMTaskFinderMake }
(Section {@NumberOf resource_structural.mtask_finding.solver}) does.
After that, any changes to individual tasks which affect their
equivalence will render these mtasks out of date.  This includes
assignments of one task to another task, changes to task domains,
changes to whether a task assignment is fixed or not, meet splits
and merges, and attaching and detaching event resource monitors.
Because of this, it is best to create mtasks at the beginning of a
call on some resource solver, after any such changes have been made,
and delete them (by deleting the mtask finder's arena) at the end
of that call, before later calls on other solvers can change things.
# KHE's solvers do this.
@PP
However, several of these `forbidden' operations have mtask versions.
These do what the forbidden operations do (indeed, each calls one
forbidden operation), but they also update the mtasks to take account
of the change.  For example, the mtask version of assigning one task
to another will cause the two tasks to be removed from their mtasks,
and then the combined entity will be added to another mtask.  This
could make one or two mtasks disappear (since there are no empty
mtasks), and it could bring a new mtask into existence.  Operations
of this type are too slow to call from the inner loops of solvers,
but they can be called from less time-critical code.
@PP
Here now are the operations on mtasks.  One advantage of the
mtask abstraction is that we can model these operations on the
corresponding task operations---although there are some
differences, such as that we cannot assign one mtask to another.
@PP
First come some general operations:
@ID @C {
char *KheMTaskId(KHE_MTASK mt);
}
This returns an Id for mtask @C { mt }:  the Id of its first task.
@ID @C {
KHE_RESOURCE_TYPE KheMTaskResourceType(KHE_MTASK mt);
bool KheMTaskIsPreassigned(KHE_MTASK mt, KHE_RESOURCE *r);
bool KheMTaskAssignIsFixed(KHE_MTASK mt);
KHE_RESOURCE_GROUP KheMTaskDomain(KHE_MTASK mt);
int KheMTaskTotalDuration(KHE_MTASK mt);
float KheMTaskTotalWorkload(KHE_MTASK mt);
}
Again, these come from @C { mt }'s first task; they must be the same
for all @C { mt }'s tasks, otherwise those tasks would not have been
placed into the same mtask.  A preassigned task is the only member of
its mtask, except in the unlikely case of equivalent tasks preassigned
the same resource.  A task with a fixed assignment is the only
member of its mtask.
# There is also
# @ID @C {
# float KheMTaskWorkloadPerTime(KHE_MTASK mt);
# }
# This returns @C { KheMTaskTotalWorkload(mt) } divided by
# @C { KheMTaskTotalDuration(mt) }.  This will differ from
# any individual @C { KheTaskWorkloadPerTime } if tasks with
# different workloads are grouped together, but that does not
# seem likely to be a problem in practice.
@PP
The proper root tasks of an mtask can come from the same meet, or
from different meets.  When they come from the same meet, function
@ID @C {
bool KheMTaskHasSoleMeet(KHE_MTASK mt, KHE_MEET *meet);
}
sets @C { *meet } to that meet and returns @C { true }.  Otherwise
it sets @C { *meet } to @C { NULL } and returns @C { false }.
@PP
KHE allows the user to create tasks which are not derived from any
event resource or meet.  These are intended for use as proper root
tasks to which ordinary tasks are assigned.  However, if no ordinary
tasks are assigned to them, the result is a task with duration 0.
This is awkward, but careful examination (which we'll do later)
shows that it is not really a special case.
# are true vacuously when there are no atomic tasks.
# which we call here a @I { degenerate proper root task }, or just a
# @I { degenerate task }.  Degenerate tasks are awkward, useless, and
# unlikely to occur, but still we have to allow for the possibility
# that there will be some.  So even degenerate tasks lie in mtasks,
# which we call @I { degenerate mtasks }.
@PP
An mtask @I { has fixed times } when none of its tasks (including
tasks assigned, directly or indirectly, to those tasks) lie in meets
with unassigned times, and the call to @C { KheMTaskFinderMake }
that created the mtask had @C { fixed_times } set to @C { true },
meaning that there is an assumption that assigned times will not
change.  To check this condition, call
@ID @C {
bool KheMTaskHasFixedTimes(KHE_MTASK mt);
}
When it returns @C { true }, these functions provide access to the times:
@ID @C {
KHE_INTERVAL KheMTaskInterval(KHE_MTASK mt);
KHE_TIME KheMTaskDayTime(KHE_MTASK mt, int day_index,
  float *workload_per_time);
KHE_TIME_SET KheMTaskTimeSet(KHE_MTASK mt);
}
# int KheMTaskFirstDayIndex(KHE_MTASK mt);
# int KheMTaskLastDayIndex(KHE_MTASK mt);
@C { KheMTaskInterval } returns the smallest interval of days in the
days frame of @C { mt }'s mtask finder that contains @C { mt }'s times.
In mtask finding generally, a value of type @C { KHE_INTERVAL }
(defined in Section {@NumberOf general_solvers.intervals}) always
denotes an interval of days.  For each index @C { day_index } in
this interval, @C { KheMTaskDayTime } returns the time that @C { mt }
is busy on the day of the days frame with index @C { day_index }, or
@C { NULL } if @C { mt } does not run that day, as well as @C { mt }'s
workload per time on that day.  Finally, @C { KheMTaskTimeSet } returns
the set of times that the tasks of @C { mt } are running.
@PP
There is also
@ID @C {
void KheTaskAddTimesToTimeSet(KHE_TASK task, KHE_TIME_SET ts);
}
which adds to @C { ts } the times that @C { task } is running,
including the times of tasks assigned to @C { task }, directly
or indirectly.  It does not start by clearing @C { ts }; it
adds these times to whatever times are already there.
@PP
Many mtask operations utilize @C { KheMTaskInterval(mt) } as their
representation of when @C { mt } is running.  This representation
is convenient but it does not recognize days within the interval
where an mtask runs twice, or not at all.  Two functions help to
identify such cases:
@ID @C {
bool KheMTaskNoOverlap(KHE_MTASK mt);
bool KheMTaskNoGaps(KHE_MTASK mt);
}
@C { KheMTaskNoOverlap } returns @C { true } when no two
of @C { mt }'s busy times lie on the same day, and
@C { KheMTaskNoGaps } returns @C { true } when none of
the calls to @C { KheMTaskDayTime } return @C { NULL }.
@PP
Returning to functions that do not need fixed times,
to visit the tasks of an mtask we have
@ID @C {
int KheMTaskTaskCount(KHE_MTASK mt);
KHE_TASK KheMTaskTask(KHE_MTASK mt, int i,
  KHE_COST *non_asst_cost, KHE_COST *asst_cost);
}
@C { KheMTaskTask } returns the @C { i }th task @C { t }, plus a cost
@C { *non_asst_cost } which will be included in the solution cost
whenever @C { t } is unassigned (as reported by assign resource
monitors) and a cost @C { *asst_cost } which will be included in
the solution cost whenever @C { t } is assigned (as reported by
prefer resources monitors with empty sets of preferred resources).
Actually, these costs can vary depending on other task assignments;
the costs returned here are lower bounds that do not depend on other
assignments.  The tasks are returned so that those most in need of
assignment come first, that is, in order of decreasing
@C { *non_asst_cost - *asst_cost }.  Tasks for which this order is
not certain lie in different mtasks.  All this is explained in detail
in Section {@NumberOf resource_structural.mtask_finding.similarity}.
@PP
For the convenience of solvers that need these costs but not mtasks,
there is also
@ID @C {
void KheTaskNonAsstAndAsstCost(KHE_TASK task, KHE_COST *non_asst_cost,
  KHE_COST *asst_cost);
}
It returns these costs, as defined above, for @C { task },
quite independently of mtask finding.  Here @C { task } would
usually be a proper root task, but it does not need to be; the
costs depend on @C { task } itself and on all tasks assigned,
directly or indirectly, to @C { task }.
@PP
Next come operations concerned with resource assignment.  Each
mtask has a set of resources currently assigned to it (that is,
assigned to some of its tasks).  This set is in fact a multi-set:
a resource may be currently assigned to a given mtask more than
once.  Assigning a resource more than once to a given mtask inevitably
causes clashes, but it is better to let it happen than to waste time
preventing it.  The resource assignment operations are
@ID @C {
bool KheMTaskMoveResourceCheck(KHE_MTASK mt, KHE_RESOURCE from_r,
  KHE_RESOURCE to_r, bool disallow_preassigned);
bool KheMTaskMoveResource(KHE_MTASK mt, KHE_RESOURCE from_r,
  KHE_RESOURCE to_r, bool disallow_preassigned);
}
@C { KheMTaskMoveResourceCheck } returns @C { true } when changing one
of @C { mt }'s assignments from @C { from_r } to @C { to_r } would succeed,
and @C { false } when it would not succeed.  Here @C { from_r } could be
@C { NULL }, in which case the request is to add @C { to_r } to the set
of resources assigned to @C { mt }, that is, to increase the multiplicity
of its assignments to @C { mt } by one.  We call this an @I { assignment },
although we have not provided a @C { KheMTaskAssignResourceCheck }
operation for it.  And @C { to_r } could be @C { NULL }, in which case
the request is to remove @C { from_r } from the set of resources assigned
@C { mt }, that is, to reduce the multiplicity of its assignments to
@C { mt } by one.  We call this an @I { unassignment }, although again
there is no @C { KheMTaskUnAssignResourceCheck } operation.
@C { KheMTaskMoveResource } actually makes the change, returning
@C { true } if it was successful, and @C { false } if it wasn't
(in that case, nothing is changed).
@PP
Parameter @C { disallow_preassigned } is concerned with the awkward
question of what to do with preassigned mtasks.  The corresponding
functions for tasks allow a preassigned task to be assigned, unassigned,
and moved to another task which is preassigned the same resource.  If
@C { disallow_preassigned } is @C { false }, the equivalent behaviour
is permitted here, allowing a preassigned mtask to be assigned and
unassigned.  However, in practice callers of these functions are more
likely to want all changes to preassigned tasks to be disallowed:
such tasks will already be assigned their preassigned resources,
and changes to those assignments are not wanted.  This is what
happens when @C { disallow_preassigned } is @C { true }.
@PP
Here is the full list of reasons why an mtask move might not succeed:
@BulletList

@LI @OneRow {
@C { from_r == to_r }, so the move would change nothing.
}

@LI @OneRow {
@C { mt } contains only fixed tasks; their assignments cannot change.
}

@LI @OneRow {
@C { mt } contains only preassigned tasks, and either the
@C { disallow_preassigned } parameter is @C { true }, so
that their assignments cannot change, or else it is @C { false },
and @C { to_r } is neither of the two permitted values (the
preassigned resource and @C { NULL }).
}

@LI @OneRow {
@C { to_r != NULL } and the domain of @C { mt } (the same for all
its tasks) does not contain @C { to_r }.
}

@LI @OneRow {
@C { from_r != NULL } and @C { from_r } is not one of the resources
assigned to @C { mt }.
}

@LI @OneRow {
@C { from_r == NULL } (and therefore @C { to_r != NULL }) and
@C { mt } does not contain at least one unassigned task to
assign @C { to_r } to.
}

@EndList
As usual, returning @C { false } when the reassignment changes nothing
reflects the practical reality that no solver wants to waste time
on such changes.
@PP
This next function may be useful for suggesting a suitable resource for
assignment:
@ID @C {
bool KheMTaskResourceAssignSuggestion(KHE_MTASK mt, KHE_RESOURCE *to_r);
}
It returns @C { true } with @C { *to_r } set to a suggestion for
an assignment to @C { mt }, if one can be found, and @C { false }
if no suggestion can be made.  The suggestion comes by looking for
tasks which share an event resource with the next unassigned task
of @C { mt } and are already assigned a resource:  if that resource
can be assigned to @C { mt }, then it becomes the suggestion.  The
idea here is to promote resource constancy (assigning the same
resource to all the tasks of a given event resource) even when it
is not required by an avoid split assignments constraint.
@PP
For visiting the assignments to @C { mt } there is
@ID @C {
int KheMTaskAsstResourceCount(KHE_MTASK mt);
KHE_RESOURCE KheMTaskAsstResource(KHE_MTASK mt, int i);
}
which return the number of non-@C { NULL } resources in the
multi-set of resources assigned to @C { mt }, and the @C { i }th
resource, in the usual way.  There are also
@ID @C {
int KheMTaskAssignedTaskCount(KHE_MTASK mt);
int KheMTaskUnassignedTaskCount(KHE_MTASK mt);
}
which return the number of assigned tasks in @C { mt }, and the
number of unassigned tasks in @C { mt }.  Naturally, they sum to
@C { KheMTaskTaskCount(mt) }.  @C { KheMTaskAssignedTaskCount }
is a synonym for @C { KheMTaskAsstResourceCount }.  The assigned
tasks always come first in an mtask, so the first unassigned task
(if there is one) is
@ID {0.95 1.0} @Scale @C {
KheMTaskTask(mt, KheMTaskAssignedTaskCount(mt), &non_asst_cost, &asst_cost);
}
There is also
@ID @C {
bool KheMTaskNeedsAssignment(KHE_MTASK mt);
}
which returns @C { true } when @C { mt } contains at least one
unassigned task such that the costs returned by @C { KheMTaskTask }
satisfy @C { *non_asst_cost - *asst_cost > 0 }.  In other words, the
cost of the solution would be reduced if this task was assigned, as
far as the event resource monitors that determine @C { *non_asst_cost }
and @C { *asst_cost } are concerned.  Also,
@ID @C {
int KheMTaskNeedsAssignmentTaskCount(KHE_MTASK mt);
KHE_TASK KheMTaskNeedsAssignmentTask(KHE_MTASK mt, int i,
  KHE_COST *non_asst_cost, KHE_COST *asst_cost);
}
return the number of tasks in @C { mt } that need assignment,
as just defined, and the @C { i }th of these tasks, counting
from 0.  One could write
@ID @C {
KheMTaskNeedsAssignmentTaskCount(mt) > 0
}
instead of @C { KheMTaskNeedsAssignment(mt) }.  And
@ID @C {
bool KheMTaskContainsNeedlessAssignment(KHE_MTASK mt);
}
returns @C { true } if @C { mt } contains a task which is assigned
but does not need to be.  This means that a call to
@C { KheMTaskNeedsAssignment(mt) } would return @C { false }, but
furthermore, after any one resource is unassigned from @C { mt },
@C { KheMTaskNeedsAssignment(mt) } would still return @C { false }.
@PP
Any given set of resources is always assigned to the tasks of an
mtask in a best possible (least cost) way.  When a resource is
unassigned from an mtask, the remaining assignments may no longer
have this property.  In that case, they are adjusted to make them best
possible again.
@PP
A similar issue arises when an mtask is constructed:  if the
initial resource assignments are not best possible, they will
be moved from one task to another within the mtask until they
are.  So there may be calls on task assignment operations while
@C { KheMTaskSolverMake } is running.  These are guaranteed to
not increase the cost of the solution.  They might decrease it.
@PP
An mtask's tasks all have the same domain, making the following
operations well-defined:
@ID @C {
bool KheMTaskAddTaskBoundCheck(KHE_MTASK mt, KHE_TASK_BOUND tb);
bool KheMTaskAddTaskBound(KHE_MTASK mt, KHE_TASK_BOUND tb);
bool KheMTaskDeleteTaskBoundCheck(KHE_MTASK mt, KHE_TASK_BOUND tb);
bool KheMTaskDeleteTaskBound(KHE_MTASK mt, KHE_TASK_BOUND tb);

int KheMTaskTaskBoundCount(KHE_MTASK mt);
KHE_TASK_BOUND KheMTaskTaskBound(KHE_MTASK mt, int i);
}
@C { KheMTaskAddTaskBound } adds its bound to each task.  It
returns @C { false } and changes nothing if any of the underlying
@C { KheTaskAddTaskBound } operations would return @C { false }.
@PP
If the domain of an mtask is changed in this way, its tasks could
become equivalent to the tasks of some other mtask that already
have the new domain.  However, no attempt is made to find and
merge such mtasks.  It does no harm, apart from wasting solve
time, to have two mtasks on hand which could be merged into one.
@PP
Mtasks work correctly with marks and paths.  Operations on
mtasks are not stored in paths, but the underlying operations on
tasks are, and that is enough to make everything work.
@PP
Finally,
@ID @C {
void KheMTaskDebug(KHE_MTASK mt, int verbosity, int indent, FILE *fp);
}
produces a debug print of @C { mt } onto @C { fp }.  It calls
@C { KheTaskDebug } for each task of @C { mt }.
@End @SubSection

@SubSection
    @Title { Multi-task sets }
    @Tag { resource_structural.mtask_finding.sets }
@Begin
@LP
Just as type @C { KHE_TASK_SET } represents a simple set of tasks,
so @C { KHE_MTASK_SET } represents a simple set of mtasks.  The
only wrinkle is that an mtask set remembers the interval that it
covers (the union of the values of @C { KheMTaskInterval(mt) }
for each of its mtasks @C { mt }).  This is done to make function
@C { KheMTaskSetInterval }, presented below, very efficient.
@PP
The operations on mtask sets follow those on task sets, with a few
adjustments.  To create and delete an mtask set, call
@ID @C {
KHE_MTASK_SET KheMTaskSetMake(KHE_MTASK_FINDER mtf);
void KheMTaskSetDelete(KHE_MTASK_SET mts, KHE_MTASK_FINDER mtf);
}
Deleted mtask sets are held in a free list in @C { mtf }, and
freed when @C { mtf }'s arena is freed.
@PP
Three operations are offered for reducing the size of an
mtask set:
@ID @C {
void KheMTaskSetClear(KHE_MTASK_SET mts);
void KheMTaskSetClearFromEnd(KHE_MTASK_SET mts, int count);
void KheMTaskSetDropFromEnd(KHE_MTASK_SET mts, int n);
}
@C { KheMTaskSetClear } clears @C { mts } back to the empty set.
@C { KheMTaskSetClearFromEnd } removes mtasks from the end
until @C { count } mtasks remain.  If @C { count } is larger
than the number of mtasks in @C { mts }, none are removed.
@C { KheMTaskSetDropFromEnd } removes the last @C { n } mtasks
from @C { mts }.  If @C { n } is larger than the number of mtasks
in @C { mts }, all are removed.
@PP
Two operations are offered for adding mtasks to an mtask set:
@ID @C {
void KheMTaskSetAddMTask(KHE_MTASK_SET mts, KHE_MTASK mt);
void KheMTaskSetAddMTaskSet(KHE_MTASK_SET mts, KHE_MTASK_SET mts2);
}
@C { KheMTaskSetAddMTask } adds @C { mt } to the end of @C { mts };
@C { KheMTaskSetAddMTaskSet } appends the elements of @C { mts2 }
to the end of @C { mts } without disturbing @C { mts2 }.
@PP
Here are two operations for deleting one mtask:
@ID @C {
void KheMTaskSetDeleteMTask(KHE_MTASK_SET mts, KHE_MTASK mt);
KHE_MTASK KheMTaskSetLastAndDelete(KHE_MTASK_SET mts);
}
@C { KheMTaskSetDeleteMTask } deletes @C { mt } from @C { mts }
(it must be present).  Assuming that @C { mts } is not empty,
@C { KheMTaskSetLastAndDelete } deletes and returns the last
mtask of @C { mts }.
@PP
To find out whether an mtask set contains a given mtask, call
@ID {0.98 1.0} @Scale @C {
bool KheMTaskSetContainsMTask(KHE_MTASK_SET mts, KHE_MTASK mt, int *pos);
}
This returns @C { true } and sets @C { *pos } to @C { mt }'s index in
@C { mts } when @C { mt } is present, and returns @C { false } otherwise.
To visit the mtasks of an mtask set, call
@ID @C {
int KheMTaskSetMTaskCount(KHE_MTASK_SET mts);
KHE_MTASK KheMTaskSetMTask(KHE_MTASK_SET mts, int i);
}
in the usual way.  There is also
@ID @C {
KHE_MTASK KheMTaskSetFirst(KHE_MTASK_SET mts);
KHE_MTASK KheMTaskSetLast(KHE_MTASK_SET mts);
}
which return the first and last elements when @C { mts } is non-empty.
@PP
For sorting an mtask set there is
@ID @C {
void KheMTaskSetSort(KHE_MTASK_SET mts,
  int(*compar)(const void *, const void *));
}
where @C { compar } compares mtasks.  There is also
@ID @C {
void KheMTaskSetUniqueify(KHE_MTASK_SET mts);
}
which uses a call to @C { HaArraySortUnique } with a suitable
comparison function to uniqueify @C { mts }, that is, to ensure
that each mtask in @C { mts } appears there at most once.  The
mtasks are sorted by increasing starting time, with ties
broken by increasing order of @C { KheTaskSolnIndex }
applied to each mtask's first task.  This does what is wanted,
given that every mtask contains at least one task, and no task
appears in two mtasks.
@PP
When @C { mts }'s mtasks all have fixed times, function
@ID @C {
KHE_INTERVAL KheMTaskSetInterval(KHE_MTASK_SET mts);
}
returns the smallest interval containing the days frame indexes of
the days on which those times fall.  As mentioned earlier, this
interval is kept up to date as mtasks are added and removed, so
@C { KheMTaskSetInterval } merely returns a field of @C { mts },
making it very fast.
@PP
Next come operations for changing the assignments of resources
to an mtask set:
@ID @C {
bool KheMTaskSetMoveResourceCheck(KHE_MTASK_SET mts,
  KHE_RESOURCE from_r, KHE_RESOURCE to_r, bool disallow_preassigned,
  bool unassign_extreme_unneeded);
bool KheMTaskSetMoveResource(KHE_MTASK_SET mts,
  KHE_RESOURCE from_r, KHE_RESOURCE to_r, bool disallow_preassigned,
  bool unassign_extreme_unneeded);
}
@C { KheMTaskSetMoveResource } calls @C { KheMTaskMoveResource } for each
mtask @C { mt } of @C { mts }, and @C { KheMTaskSetMoveResourceCheck }
checks whether this would succeed, without doing it.
@PP
When @C { to_r != NULL } and @C { unassign_extreme_unneeded } is
@C { true }, the first and last mtasks in @C { mts } are treated
differently.  For each, if there is a needless assignment in the
mtask, according to @C { KheMTaskContainsNeedlessAssignment }
(Section {@NumberOf resource_structural.mtask_finding.ops}),
the mtask is unassigned instead of moved.  Over the course
of a solve this reduces the number of needless assignments,
reducing resource workloads and generally improving solutions,
as the author's tests have shown.
@PP
Two similar functions are
@ID {0.95 1.0} @Scale @C {
bool KheMTaskSetMoveResourcePartialCheck(KHE_MTASK_SET mts,
  int first_index, int last_index, KHE_RESOURCE from_r, KHE_RESOURCE to_r,
  bool disallow_preassigned, bool unassign_extreme_unneeded);
bool KheMTaskSetMoveResourcePartial(KHE_MTASK_SET mts,
  int first_index, int last_index, KHE_RESOURCE from_r, KHE_RESOURCE to_r,
  bool disallow_preassigned, bool unassign_extreme_unneeded);
}
These are like @C { KheMTaskSetMoveResourceCheck } and
@C { KheMTaskSetMoveResource } except that they only apply
to some of the mtasks of @C { mts }, those whose index in
@C { mts } lies between @C { first_index } and @C { last_index }
inclusive---just as though these were the only mtasks in @C { mts }.
@PP
Finally we have
@ID @C {
void KheMTaskSetDebug(KHE_MTASK_SET mts, int verbosity, int indent,
  FILE *fp);
}
which produces a debug print of @C { mts } onto @C { fp } with the
given verbosity and indent.
@End @SubSection

@SubSection
    @Title { Multi-task finders }
    @Tag { resource_structural.mtask_finding.solver }
@Begin
@LP
The operation for creating mtasks is
@ID @C {
KHE_MTASK_FINDER KheMTaskFinderMake(KHE_SOLN soln, KHE_RESOURCE_TYPE rt,
  KHE_FRAME days_frame, bool fixed_times, HA_ARENA a);
}
Using memory from arena @C { a }, this makes a @C { KHE_MTASK_FINDER }
object containing mtasks such that every proper root task of
@C { soln } whose type is @C { rt } lies in exactly one mtask.
Or @C { rt } may be @C { NULL }, and then mtasks are created for
every resource type.  Parameter @C { days_frame } holds the common
frame and influences the operations below that depend on days.
An mtask finder is deleted when its arena is deleted, along with
its mtasks and mtask sets.
@PP
If @C { fixed_times } is @C { true }, the finder assumes that any
times currently assigned to meets will remain as they are for its
entire lifetime.  (This is not checked, so care is needed
here.)  This allows it to treat tasks from different meets as
equivalent, if they run at the same times and satisfy all other
requirements.  If @C { fixed_times } is @C { false }, the finder
does not make this assumption.  Instead, equivalent tasks must come
from the same meet, so that they always run at the same times, even
if those times change or are unassigned.  For full details, consult
Section {@NumberOf resource_structural.mtask_finding.similarity}.
# @PP
# If @C { make_group_monitors } is @C { true }, @C { KheMTaskFinderMake }
# groups certain event resource monitors together, as described in
# detail in Section {@NumberOf resource_structural.mtask_finding.eject}.
# In effect, this hides monitors of individual tasks inside monitors
# for mtasks, just as mtasks hide the tasks themselves.  It is
# recommended when using mtasks with ejection chains.
@PP
These simple queries return the attributes passed in:
@ID @C {
KHE_SOLN KheMTaskFinderSoln(KHE_MTASK_FINDER mtf);
KHE_FRAME KheMTaskFinderDaysFrame(KHE_MTASK_FINDER mtf);
bool KheMTaskFinderFixedTimes(KHE_MTASK_FINDER mtf);
HA_ARENA KheMTaskFinderArena(KHE_MTASK_FINDER mtf);
}
To find out which resource types the mtask finder is handling,
there are functions
@ID {0.98 1.0} @Scale @C {
int KheMTaskFinderResourceTypeCount(KHE_MTASK_FINDER mtf);
KHE_RESOURCE_TYPE KheMTaskFinderResourceType(KHE_MTASK_FINDER mtf, int i);
bool KheMTaskFinderHandlesResourceType(KHE_MTASK_FINDER mtf,
  KHE_RESOURCE_TYPE rt);
}
The first two allow you to visit the resource types handled by
@C { mtf }; the third tells you whether @C { mtf } handles a
given resource type.  These functions are arguably overkill,
since @C { mtf } either handles one resource type or all resource
types; but in principle it could handle any subset of the resource
types, so this approach has seemed best.
# KHE_RESOURCE_TYPE KheMTaskFinderResourceType(KHE_MTASK_FINDER mtf);
# There is no @C { KheMTaskFinderResourceType } function because
# @C { NULL } may be passed for @C { rt }.
# the mtask finder handles multiple resource types.
@PP
When dealing with mtasks, the days of the common frame that they
are running on loom large.  These days are often represented by
their indexes in the common frame (parameter @C { days_frame }
of @C { KheMTaskFinderMake }).  The index of the first day is 0,
and of the last day is
@ID @C {
int KheMTaskFinderLastIndex(KHE_MTASK_FINDER mtf);
}
This is one less than the number of time groups in @C { days_frame }.
@PP
To visit the mtasks of a @C { KHE_MTASK_FINDER } object, the calls are
@ID @C {
int KheMTaskFinderMTaskCount(KHE_MTASK_FINDER mtf);
KHE_MTASK KheMTaskFinderMTask(KHE_MTASK_FINDER mtf, int i);
}
as usual.  The order in which the mtasks appear here is arbitrary,
unless one chooses to first call
@ID @C {
void KheMTaskFinderMTaskSort(KHE_MTASK_FINDER mtf,
  int (*compar)(const void *, const void *));
}
to sort the mtasks using function @C { compar }.  One comparison
function is provided:
@ID @C {
int KheMTaskDecreasingDurationCmp(const void *, const void *);
}
@C { KheMTaskFinderMTaskSort(mtf, &KheMTaskDecreasingDurationCmp) }
sorts the mtasks by decreasing duration, which might be a good
heuristic for ordering them for assignment.
# @PP
# When an mtask is non-degenerate, each of its proper root tasks is,
# or is assigned (directly or indirectly) at least one task derived
# from an event resource and meet.  These tasks are called the
# @I { atomic tasks } of the proper root task.  In a non-degenerate
# mtask, each proper root task contains at least one atomic task.
@PP
When mtasks are in use, it is best to deal only with them and not
access tasks directly.  When a task is returned by some function
and has to be dealt with, the right course is to call
@ID @C {
KHE_MTASK KheMTaskFinderTaskToMTask(KHE_MTASK_FINDER mtf, KHE_TASK t);
}
to move from task @C { t } to its proper root task and from there
to the mtask containing that proper root task.  This function will
abort if there is no such mtask.  That should never happen, provided
the resource type of @C { t } is the resource type, or one of the
resource types, handled by @C { mtf }.
# since even degenerate tasks lie in mtasks.
@PP
When the @C { fixed_times } parameter of @C { KheMTaskFinderMake }
is @C { true }, one can call
@ID @C {
KHE_MTASK_SET KheMTaskFinderMTasksInTimeGroup(KHE_MTASK_FINDER mtf,
  KHE_RESOURCE_TYPE rt, KHE_TIME_GROUP tg);
KHE_MTASK_SET KheMTaskFinderMTasksInInterval(KHE_MTASK_FINDER mtf,
  KHE_RESOURCE_TYPE rt, KHE_INTERVAL in);
}
These return the set of mtasks of resource type @C { rt } that are
running at any time of @C { tg } (which must be non-empty), or at
any time of any time group of interval @C { in } of @C { mtf }'s
days frame (again, @C { in } must be non-empty).  Each set is built
on demand (except that for singleton time groups @C { tg } the sets
are built when @C { mtf } itself is built), sorted by increasing
start time, uniqueified by @C { KheMTaskSetUniqueify } when
necessary, and cached within @C { mtf } so that subsequent requests
for them run quickly.  The caller must not modify these mtask sets.
A similar function is
@ID @C {
void KheMTaskFinderAddResourceMTasksInInterval(KHE_MTASK_FINDER mtf,
  KHE_RESOURCE r, KHE_INTERVAL in, KHE_MTASK_SET mts);
}
This adds to @C { mts } the mtasks that @C { r } is assigned to
that lie wholly within interval @C { in } in the current frame,
in chronological order.  These mtasks can change as resource
assignments change, so there is no caching of the results.  One
can also do a similar job avoiding mtasks by calling
@ID @C {
void KheAddResourceProperRootTasksInInterval(KHE_RESOURCE r,
  KHE_INTERVAL in, KHE_SOLN soln, KHE_FRAME days_frame,
  KHE_TASK_SET ts);
}
to add to @C { ts } the proper root tasks assigned @C { r } in
@C { soln } that lie wholly within @C { in } of @C { days_frame }.
This is like @C { KheResourceTimetableMonitorAddProperRootTasksInInterval }
(Section {@NumberOf monitoring_timetables_resource}) but with the
`wholly within' aspect.  Some code unification is needed here.
@PP
When @C { fixed_times } is @C { false }, or tasks lie in unassigned
meets, the functions just given aren't really useful.  But there are
other ways to visit mtasks.  @C { KheMTaskFinderMTaskCount } and
@C { KheMTaskFinderMTask } will visit them all, for example.
Another option is to visit the tasks of a given meet and use
@C { KheMTaskFinderTaskToMTask } to find the mtasks containing
those tasks.
# are not really
# available, although they can be called and will then give empty
# results.  There is still an mtask for every task, however; the
# same condition determines whether two tasks are similar and thus
# belong in the same mtask, except for one change:  instead of
# requiring the same times, the tasks must come from the same
# meet.  There are no functions for accessing subsets of these
# mtasks, there are just the functions for accessing them all,
# given earlier.  To access the mtasks derived from a given meet,
# one can traverse the tasks of the meet, and for each task of the
# appropriate resource type, call @C { KheMTaskFinderTaskToMTask }.
# Some mtasks may be visited more than once by this procedure.
# mtask contains tasks from the same meet, rather than tasks with
# the same assigned times.
# it will contain tasks
# Now for functions that are available when @C { fixed_times } is
# @C { false }, or when there are tasks whose atomic tasks do not
# all have assigned times.  One can visit the mtasks that lack
# times, or rather the non-degenerate ones, indexed by the meet
# with the lowest index:
# @ID @C {
# int KheMTaskFinderMTaskFromMeetCount(KHE_MTASK_FINDER mtf,
#   KHE_MEET meet);
# KHE_MTASK KheMTaskFinderMTaskFromMeet(KHE_MTASK_FINDER mtf,
#   KHE_MEET meet, int i);
# }
# These return the number of non-degenerate mtasks whose first meet is
# @C { meet }, and the @C { i }th of these mtasks.
@PP
We return now to functions that are available irrespective of the
value of @C { fixed_times }.  It was mentioned at the start of
this section that several operations on tasks are forbidden because
they would invalidate the mtask structure.  Each has an mtask
version which carries out the operation and also updates the mtask
structure, possibly creating or destroying some mtasks as it does
so.  These operations are
@ID @C {
bool KheMTaskFinderTaskMove(KHE_MTASK_FINDER mtf, KHE_TASK task,
  KHE_TASK target_task);
bool KheMTaskFinderTaskAssign(KHE_MTASK_FINDER mtf, KHE_TASK task,
  KHE_TASK target_task);
bool KheMTaskFinderTaskUnAssign(KHE_MTASK_FINDER mtf, KHE_TASK task);
bool KheMTaskFinderTaskSwap(KHE_MTASK_FINDER mtf, KHE_TASK task1,
  KHE_TASK task2);
void KheMTaskFinderTaskAssignFix(KHE_MTASK_FINDER mtf, KHE_TASK task);
void KheMTaskFinderTaskAssignUnFix(KHE_MTASK_FINDER mtf, KHE_TASK task);
}
@C { KheMTaskFinderTaskMove } (for example) calls @C { KheTaskMove },
and it also updates @C { mtf }'s data structures so that the right
results continue to be returned by
@C { KheMTaskFinderMTaskCount } and
{0.95 1.0} @Scale @C { KheMTaskFinderMTask },
and also by functions
{0.95 1.0} @Scale @C { KheMTaskFinderTaskToMTask },
{0.95 1.0} @Scale @C { KheMTaskFinderMTasksInTimeGroup }, and
{0.95 1.0} @Scale @C { KheMTaskFinderMTasksInInterval }.
Mtasks held by the user, either directly or in user-defined mtask
sets, may become undefined when mtasks are created and destroyed.
@PP
Because of these updates, @C { KheMTaskFinderTaskMove } and the other
functions above are too slow to be called from within time-critical
code; but they are fine for other applications.  Structural solvers,
for example, are usually not time-critical.  The related checking
and query functions (@C { KheTaskMoveCheck } and so on) are safe to
call directly, since they change nothing.
@PP
As explained in Section {@NumberOf resource_structural.task_grouping},
to @I group some tasks means to move them to a common
@I { leader task }, forcing solvers to assign the same resource to
each task in the group (by assigning a resource to the leader task).
If any of them are assigned before grouping, they must all be assigned
the same resource, and the leader task will have that assignment after
grouping.
Functions
@ID @C {
KHE_TASK KheMTaskFinderTaskGrouperMakeGroup(KHE_MTASK_FINDER mtf,
  KHE_TASK_GROUPER tg, KHE_SOLN_ADJUSTER sa);
KHE_TASK KheMTaskFinderTaskGrouperEntryMakeGroup(KHE_MTASK_FINDER mtf,
  KHE_TASK_GROUPER_ENTRY tge, KHE_SOLN_ADJUSTER sa);
}
are like @C { KheTaskGrouperMakeGroup } and @C { KheTaskGrouperEntryMakeGroup }
in making a group from the tasks stored in @C { tg } or @C { tge }.
But they also update @C { mtf }'s data structures, like the other
`forbidden' operations.  The other task grouper operations from
Section {@NumberOf resource_structural.task_grouping} do not have
mtask versions, because they do not change task assignments and
so do not make @C { mtf } out of date.
# @PP
# The mtask finder contains a task grouper object
# (Section {@NumberOf resource_structural.task_grouping.task_grouper})
# and offers functions based on the task grouper functions:
# @ID @C {
# void KheMTaskFinderTaskGrouperClear(KHE_MTASK_FINDER mtf);
# bool KheMTaskFinderTaskGrouperAddTask(KHE_MTASK_FINDER mtf,
#   KHE_TASK task);
# void KheMTaskFinderTaskGrouperDeleteTask(KHE_MTASK_FINDER mtf,
#   KHE_TASK task);
# KHE_COST KheMTaskFinderTaskGrouperCost(KHE_MTASK_FINDER mtf);
# KHE_TASK KheMTaskFinderTaskGrouperMakeGroup(KHE_MTASK_FINDER mtf,
#   KHE_SOLN_ADJUSTER sa);
# }
# These call the corresponding task grouper functions, and also
# update @C { mtf }'s data structures appropriately, as for the
# other `forbidden' operations.  @C { KheMTaskFinderTaskGrouperCost }
# has no @C { days_frame } parameter; it passes @C { mtf }'s days frame
# to the task grouper.
# @ID {0.95 1.0} @Scale @C {
# void KheMTaskFinderGroupBegin(KHE_MTASK_FINDER mtf, KHE_TASK leader_task);
# bool KheMTaskFinderGroupAddTask(KHE_MTASK_FINDER mtf, KHE_TASK task);
# void KheMTaskFinderGroupEnd(KHE_MTASK_FINDER mtf, KHE_SOLN_ADJUSTER sa);
# }
# @C { KheMTaskFinderGroupBegin } clears out any previous task grouping
# information and sets the leader task (a proper root task).  Then any
# number of calls to @C { KheMTaskFinderGroupAddTask } set the tasks (also
# proper root tasks) to be assigned to the leader task, without actually
# carrying out those assignments.  The return value is @C { true } if
# @C { task } can be included; if it is @C { false }, @C { task } is
# omitted from the grouped tasks, either because it cannot be moved to
# @C { leader_task }, or because it is assigned a resource and some
# other task in the group is assigned a different resource.  Finally,
# @C { KheMTaskFinderGroupEnd } actually carries out the moves.  If
# @C { sa != NULL } these are recorded in solution adjuster @C { sa },
# allowing them to be undone later if desired.
# @PP
# A sequence of calls to @C { KheMTaskFinderTaskAssign } would do
# what these calls do.  But these calls are faster because they build
# only the final mtask which reflects all the assignments.
# @PP
# To speed up `forbidden' operations and grouping operations,
# it may help to call
# @ID @C {
# void KheMTaskFinderClearCachedMTaskSets(KHE_MTASK_FINDER mtf);
# }
# when the mtask sets currently being cached
# (returned by @C { KheMTaskFinderMTasksInTimeGroup } and
# @C { KheMTaskFinderMTasksInInterval }) are not likely to be
# needed any time soon.  This would be the case, for example,
# during profile grouping, when moving from one limit active
# intervals constraint to another one with different time groups.
# With these mtask sets cleared out, @C { mtf } does not have
# to spend time updating them when mtasks are created and deleted.
@PP
Finally,
@ID @C {
void KheMTaskFinderDebug(KHE_MTASK_FINDER mtf, int verbosity,
  int indent, FILE *fp);
}
produces a debug print of @C { mtf } onto @C { fp } with
the given verbosity and indent.
@End @SubSection

@SubSection
    @Title { Behind the scenes 1:  defining task similarity }
    @Tag { resource_structural.mtask_finding.similarity }
@Begin
@LP
It is now time to look behind the scenes, and see how mtasks
guarantee that symmetrical assignments will be avoided, and at the
same time that nothing useful will be missed.
# @PP
# The specification states that meet splits and merges render the
# solver and its mtasks out of date.  So the set of proper root
# tasks to be distributed into mtasks is fixed and definite.
@PP
Behind the scenes, then, an mtask is a sequence (not a set)
of proper root tasks, each optionally assigned a resource.
When @M { m } resources are assigned to an mtask, they are
assigned to the first @M { m } proper root tasks in the
sequence.  Each mtask contains the proper root tasks of one
equivalence class of an equivalence relation between proper
root tasks that we call @I { task similarity }.  To turn
this set into a sequence we sort the elements into non-decreasing
order of an attribute of each task called its @I { task cost }.
@PP
It is easy to see how mtasks avoid many assignments.  Suppose we
have @M { n } unassigned tasks, and that we decide to assign @M { m }
resources to these tasks, where @M { m <= n }.  For the first
resource there are @M { n } unassigned tasks to choose from,
for the second there are @M { n - 1 } to choose from, and so on,
giving @M { n(n-1)...(n-m+1) } choices altogether.  This could be
a very large number.  But now suppose that these @M { n } tasks
are grouped into an mtask.  Then the mtask tries just one of these
choices, the one which assigns the first resource that comes along
to the first task, the second to the second, and so on.  So there
is a large reduction in the number of choices.  The question is
whether anything useful has been missed.
@PP
`Missing something useful' is really an appeal to a dominance
relation between solutions (Appendix {@NumberOf dynamic_theory}).
We claim that any solution containing assignments of any @M { m }
resources to the @M { n } tasks is dominated by the solution
containing the assignments chosen by the mtask.  The proof
will go like this.  Limit all consideration to the @M { m }
resources and @M { n } tasks of interest.  If a resource is
assigned to a task that appears later in the mtask's sequence
than some other task which is unassigned, then we can move the
resource to that earlier unassigned task, and the move will not
increase the cost of the solution, in fact it might decrease it.
And then, exchanging the assignments of any two resources can be
done and will not change solution cost.  These two facts, if we
can prove them, will together show that we can transform our
solution into the mtask's solution with no increase in cost.
# @PP
# From one point of view, two different assignments are just that,
# different, and so there is no symmetry.  What makes symmetry
# possible is that many monitors do not depend on exactly which
# task is assigned to which resource; instead, they depend on
# properties of the task.  If two different tasks have equal
# properties, symmetry is possible.  So uncovering symmetry is basically
# about carefully examining the effect of assignments on monitors.
# Even if two tasks are monitored by different monitors, those
# monitors could be symmetrical.  It will get complicated, but
# nothing we do will be approximate.  If we cannot prove that some
# situation is symmetrical, the tasks involved should and will go
# into different mtasks.
# # We may end up with more mtasks than
# # we actually need, but within any mtask the tasks will definitely
# # be symmetrical.
# @PP
# This section defines two key things.  First is @I { task similarity },
# an equivalence relation between proper root tasks.  Each equivalence
# class of this relation supplies the proper root tasks of one mtask.
# Second is one @I { task cost } for each proper root task.  The
# members of each mtask are ordered by non-decreasing task cost,
# which will ensure that, within each mtask, assigning earlier tasks is
# not worse than assigning later ones.
@PP
We call a task, considered independently of any tasks that may be
assigned to it, an @I { atomic task }.  We view one proper root
task as the set of all the atomic tasks assigned to it, directly
or indirectly, including itself.  Apart from domains, preassignments,
and fixed assignments, which relate specifically to the root task,
only this set matters, not which tasks are assigned to which.
# From now on, the term `task' will refer to this set of atomic tasks.
@PP
As mentioned earlier, KHE allows tasks to be created that are not
derived from any meet.  These would typically serve as proper root
tasks to which tasks derived from meets could be assigned.  Such tasks
are consulted to find domains, preassignments, and fixed assignments
when they are proper root tasks, but since they do not run at any
times and have no effect on any monitors they are ignored otherwise:
they are not included among the atomic tasks.  This means that the
set of atomic tasks could be empty.  However we do not treat this
case as special.  Conditions of the form `for each atomic task, ...'
are vacuously true.
# In that case the proper root
# task is considered to be @I degenerate and the mtask containing it
# is also said to be degenerate.
@PP
A proper root task is said to have fixed times if each of its
atomic tasks lies in a meet with an assigned time, and the
@C { fixed_times } parameter of @C { KheMTaskFinderMake } is
@C { true }, allowing us to assume that these assigned times
will not change.  In that case, similarity is based on the
assigned times of the tasks' meets.  Otherwise, things are
handled as though none of the tasks have assigned times, and
similarity is based on their meets.
# Early in the mtask construction process, the atomic tasks of each
# proper root task are found and sorted.  Atomic tasks without assigned
# times come before atomic tasks with assigned times.  Two atomic tasks
# without assigned times are sorted by increasing meet index.  Two
# atomic tasks with assigned times are sorted by increasing time
# index.  Either way, ties are broken arbitrarily.
# @PP
# There is one wrinkle.  If @C { fixed_times } is @C { false },
# assigned times cannot be relied upon to remain constant throughout
# the lifetime of the mtasks.  So in that case we treat all tasks as
# though they have no assigned times, using only their meet indexes
# in the sorting.
@PP
Two proper root tasks are similar when they satisfy these conditions:
@ParenNumberedList

@LI @OneRow {
They have equal domains.
}

@LI @OneRow {
They are either both unpreassigned, or both preassigned the same
resource.  This second possibility inevitably causes clashes, which
means that in practice a preassigned task will usually not be similar
to any other task, making it the only member of its mtask.
}

@LI @OneRow {
The assignments of both tasks are not fixed.  In other words, a
task whose assignment is fixed is always the only member of its mtask.
}

@LI @OneRow {
The number of atomic tasks must be the same for both tasks, and
taking them in a canonical order based on their assigned times
and meets, corresponding atomic tasks must be similar, according
to a definition to be given below.  This condition is vacuously
true when both tasks have no atomic tasks.
}

@EndList
Assuming that the similarity relation for atomic tasks is an
equivalence relation, this evidently defines an equivalence
relation on proper root tasks, as required.
@PP
Two atomic tasks are similar when they satisfy these conditions:
@ParenNumberedList

@LI @OneRow {
They have equal durations and workloads.
}

@LI @OneRow {
Either they both have an assigned time, in which case those
times are equal, or they both don't, in which case their meet
indexes are equal.  This second case is always followed when
@C { fixed_times } is @C { false }, consistent with what was
said about this above.  It is also followed when at least
one of the atomic tasks in question has no assigned time.
}

@LI @OneRow {
They are similar in their effects on monitors.  There are many
details to cover here; these are tackled below.
}

@EndList
Once again, this is clearly an equivalence relation, provided
that (3) is an equivalence relation.
@PP
These rules could be improved on.  For example, if there are
no limit workload monitors, then task workloads do not matter.
Still, what we have is simple and works well in practice.
@PP
The rest of this section is concerned with similarity of two
atomic tasks in their effect on monitors.  The general idea is that
this similarity holds when, for all resources @M { r }, assigning
@M { r } to one of the tasks has the same effect on monitors as
assigning it to the other task.  But there are complications in
making this general idea concrete, as we are about to see.
# We are already
# assuming that the two tasks have the same domain.  If we can show
# that, for all resources @M { r } in this domain, assigning @M { r }
# to one of the atomic tasks affects monitors in the same way as
# assigning it to the other, then it does not matter which of the
# two tasks @M { r } is assigned to, so the tasks are similar in
# their effect on monitors, and there is nothing here to prevent
# them from being placed into the same mtask.
@PP
We can safely ignore unattached monitors and monitors with weight 0.
A monitor can be an @I { event monitor }, monitoring the times assigned
to a specified set of events, or an @I { event resource monitor },
monitoring the resources assigned to a specified set of tasks, or
a @I { resource monitor }, monitoring the busy times or workload
of a specified resource.  We'll take each kind in turn.
# @PP
# Before we start, though, we have to introduce a caveat.  If we move
# from tasks to mtasks, the data structures encountered by time and
# resource repair algorithms change, and that can lead to changes in
# the repairs tried.  For example, we might end up doing fewer task
# moves, and that might lead to different random numbers being passed
# to time assignment repair operations, giving different outcomes.
# So we can't expect algorithms built on mtasks to produce solutions
# identical to algorithms built on tasks.  But this is not the fault
# of the mtasks, and there is no reason to think that such changes
# will be systematically for the worse.
@PP
@I { Event monitors } are unaffected by the assignments of resources
to tasks.  They depend only on the times assigned to meets.  So we
can ignore them here.
@PP
@I { Resource monitors } are not directly concerned with which
tasks a resource is assigned to, but rather with those tasks'
busy times and workloads.  We have already required similar tasks
to be equal in those respects, so that moving a resource from one
similar task to another leaves its resource monitors unaffected.
This is true whether or not times are assigned.
@PP
@I { Event resource monitors } (assign resource, prefer resources,
avoid split assignments, and limit resources monitors) are where
things get harder.  The tests we have so far included in the
similarity condition do not guarantee that event resource monitors
will be unaffected when a resource is moved from one task to
another---far from it.
@PP
Before we delve into event resource monitors, there is a special
case we need to dispose of.  Consider an avoid split assignments
monitor @M { m } whose monitored tasks are all assigned to each
other (have the same proper root).  At most one distinct resource
can be assigned to these tasks, so @M { m } must have cost 0.  It
can be and is ignored.  This case is quite likely to arise in
practice, although @M { m } might be detached when it does.  It
includes the case where @M { m } monitors a single task.
@PP
The author spent some time considering what happens with other
kinds of event resource monitors when their tasks have the same
proper root.  These monitors monitor a single task, in effect,
which is helpful for similarity.  However these cases seem
unlikely to arise in practice, and some of their details are
not obvious, so nothing special has been done about them.
@PP
Event resource monitors explicitly name the tasks (always atomic)
that they @I monitor (are affected by).  We divide them into two
groups.  A @I { separable monitor } is one whose cost may be apportioned
to the tasks it monitors, each portion depending only on the assignment
of that one task.  An @I { inseparable monitor } is one whose cost cannot
be apportioned in this way.
@PP
A monitor that monitors just one task is separable, because all its cost
can be apportioned to that task.  But there are less trivial examples.
Consider an assign resource constraint with a linear cost function.
Its cost is its weight times the total duration of its unassigned
tasks, and this may be apportioned to the individual unassigned
tasks, making the monitor a separable one.  But if the cost function
is not linear, one cannot apportion the cost in this way.
# Some monitors monitor several atomic tasks but are nevertheless
# classified as single-task monitors:  assign resource and prefer
# resources monitors with linear cost functions, and limit resources
# monitors with maximum limits 0 and linear cost functions.  These
# monitors can be and are divided (notionally) into one single-task
# monitor for each monitored atomic task.
# The second is when all the tasks monitored
# by @M { m } are assigned to one another (when they have the same
# proper root).  In this case the tasks behave like a single task.
# This is quite likely when @M { m } is an avoid split assignments monitor.
@PP
We analyse inseparable monitors first.  If task @M { t } is monitored
by inseparable monitor @M { m }, the cost of assigning a resource to
@M { t } cannot be apportioned to @M { t }.  This indeterminacy in
cost prevents us from saying definitely what the effect on @M { m }
of a resource assignment is.  So in this case, @M { t } cannot be
considered similar to any other task.
@PP
There is however an exception to this rule.  Consider two tasks both
monitored by @M { m }.  An examination of the event resource constraints
will show that, provided the two tasks have equal durations, the effect
on @M { m } of assigning a given resource @M { r } to either task must
be the same.  So @M { m } does not prevent the two tasks from being
declared similar.  Altogether, then, for two atomic tasks to be similar
they must have the same inseparable monitors---not monitors with the same
attributes, but the exact same monitors.
@PP
We turn now to separable monitors.  Each task has its own individually
apportionable cost, dependent only on its own assignment.  Again we
divide these monitors into two groups:
@I { resource-dependent separable monitors }, for which the cost
depends on the choice of resource, and
@I { resource-independent separable monitors }, for which the cost
depends only on whether the task is assigned or not, not on the choice
of resource.
@PP
For example, a separable prefer resources monitor will usually be
resource-dependent, because the cost depends on whether the
assigned resource is a preferred one or not.  But if the set of
preferred resources is empty, assigning any resource produces
the same cost, and the monitor is resource-independent.
@PP
To analyse the resource-dependent separable monitors, consider
the usual kind of separable prefer resources monitor.  The
cost depends on which resource is assigned, so the permutations
of resource assignments that mtasks rely on could change the cost
in virtually any way.  So we require, for similarity, that the
resource-dependent separable monitors of the two tasks can
be put into one-to-one correspondence such that corresponding
monitors have the same attributes (type, hardness, cost function,
weight, preferred resources, and limits where present).
# @PP
# Limit resources monitors that monitor a single task never appear
# in practice, so they hardly matter.  However, it is easy to follow
# the path made by prefer resources monitors, and require a one-to-one
# correspondence between these monitors such that corresponding
# monitors have the same hardness, cost function, weight, preferred
# resources, and minimum and maximum limits.
@PP
We are left with just the resource-independent separable monitors,
whose cost depends only on whether each task is assigned or not,
not on which resource is assigned.
# These are
# single-task assign resource monitors, and single-task prefer
# resources and limit resources monitors whose set of preferred
# resources is either empty or contains every resource of the
# relevant resource type.  Single-task avoid split assignments
# monitors were disposed of earlier.
# @PP
We could repeat the previous work and require a one-to-one
correspondence between these monitors such that corresponding
monitors have the same attributes.  But we can do better.
@PP
Consider three tasks, @M { t sub 1 }, @M { t sub 2 }, and
@M { t sub 3 }, that are similar according to the rules so far.
Suppose @M { t sub 1 } is monitored by a separable assign
resource monitor with weight 20, @M { t sub 2 } is not monitored,
and @M { t sub 3 } is monitored by a separable prefer resources
monitor with an empty set of resources and weight 10.  Assuming
duration 1, assigning any resource to @M { t sub 1 } reduces the
cost of the solution by 20; assigning any resource to @M { t sub 2 }
does not change the cost; and assigning any resource to @M { t sub 3 }
increases the cost by 10.  Examples like this are common in nurse
rostering, to place limits on the number of nurses assigned to a
shift.  Here, at least one nurse is wanted, but three is too many.
@PP
Let the @I { task cost } of a task @M { t } be the sum, over all
resource-independent separable monitors @M { m } that monitor
@M { t }, of the change in cost reported by @M { m } when @M { t }
goes from being unassigned to being assigned.  In the example
above, assuming duration 1, the task costs are @M { minus 20 }
for @M { t sub 1 }, @M { 0 } for @M { t sub 2 }, and @M { 10 }
for @M { t sub 3 }.  These values are independent of all other
assignments, and also of which resource is being assigned, and
so they can be calculated in advance of any solving, while
mtasks are being constructed.  When adding an assignment to an
mtask, it will always be better to choose a remaining unassigned
task with minimum task cost.  So the mtask sorts its tasks by
non-decreasing task cost at the start, and assigns them in sorted order.
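@PP
The sort itself is routine.  A minimal sketch in plain C (the record
type and function names are hypothetical, not the KHE API):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-in for a task: just its precomputed task cost,
   the change in solution cost when the task becomes assigned. */
typedef struct {
  int task_cost;
} EG_TASK;

/* Comparator giving non-decreasing task cost. */
static int EgTaskCmp(const void *a, const void *b)
{
  return ((const EG_TASK *) a)->task_cost - ((const EG_TASK *) b)->task_cost;
}

/* Sort once when the mtask is constructed; assignments are then
   made in this order, cheapest first. */
void EgSortByTaskCost(EG_TASK *tasks, int count)
{
  qsort(tasks, count, sizeof(EG_TASK), EgTaskCmp);
}
```

For the three tasks of the example above, the sorted order is
@M { t sub 1 }, @M { t sub 2 }, @M { t sub 3 }.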
@PP
It remains to state, for each monitor type, the conditions under
which it is separable, and if separable, resource-independent.  The
examples given earlier cover most of these cases.
@PP
An assign resource monitor is separable when it monitors a single task,
or its cost function is linear, or both.  It is then always
resource-independent.  Otherwise it is inseparable.
@PP
A prefer resources monitor is ignored when its set of preferred
resources includes every resource of its resource type, since
its cost is always 0 then.  Otherwise, it is separable when it
monitors a single task, or its cost function is linear, or both.
It is then resource-independent when its set of preferred resources
is empty.  Otherwise it is inseparable.
@PP
An avoid split assignments monitor is ignored when its tasks all
have the same proper root (including when it monitors a single
task).  Otherwise it is always considered to be inseparable.
@PP
It would not be unreasonable to declare all limit resources
monitors to be inseparable, since in practice they apply to
multiple tasks and have non-trivial limits.  However, they can
also be used to do what assign resource monitors do, by selecting
all resources and setting the minimum limit to the total duration
of the tasks monitored.  They can also be used to do what prefer
resources monitors do, by selecting those resources that are not
selected by the prefer resources monitor, and setting the maximum
limit to 0.  In these cases, we want a limit resources monitor to be
classified in the same way that the assign resource or prefer resources
monitor would be.
@PP
If a limit resources monitor is equivalent to an assign resource or
prefer resources monitor as just described, it is classified as that
other monitor would be.  Otherwise, it is separable when its cost function
is linear and its maximum limit is 0.  It is then resource-independent
when its set of preferred resources contains every resource of its
type.  It is also separable when it monitors a single task.  In that
case it is resource-independent when its set of preferred resources
contains every resource of its type, in which case the assignment
cost depends on how the duration of the task compares with the
monitor's limits.  (Curiously, if the task's duration is less than
the minimum limit, there will be both a non-assignment cost and an
assignment cost, because the minimum limit is not reached whether
the task is assigned or not.)  Otherwise it is inseparable.
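@PP
The rules for a limit resources monitor that is not equivalent to an
assign resource or prefer resources monitor can be summarized as a
decision function.  A sketch in plain C (the flag and type names are
hypothetical; they merely encode the conditions just stated):

```c
#include <assert.h>
#include <stdbool.h>

typedef enum {
  EG_INSEPARABLE,
  EG_SEPARABLE_RESOURCE_DEPENDENT,
  EG_SEPARABLE_RESOURCE_INDEPENDENT
} EG_CLASS;

/* Classify a limit resources monitor that is not equivalent to an
   assign resource or prefer resources monitor.  It is separable when
   its cost function is linear and its maximum limit is 0, or when it
   monitors a single task; in either case it is resource-independent
   exactly when its preferred resources cover its whole type. */
EG_CLASS EgClassifyLimitResources(bool linear_cost_fn, bool max_limit_zero,
  bool single_task, bool selects_all_resources)
{
  if( (linear_cost_fn && max_limit_zero) || single_task )
    return selects_all_resources ? EG_SEPARABLE_RESOURCE_INDEPENDENT :
      EG_SEPARABLE_RESOURCE_DEPENDENT;
  return EG_INSEPARABLE;
}
```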
# ignored when its set of selected
# resources is empty (its cost must be 0 then).  Otherwise, if it
@PP
To recapitulate, then, two proper root tasks are similar when they have
equal domains and preassignments, they are both not fixed, and they
have similar atomic tasks.  Two atomic tasks are similar when they
have equal durations, workloads, and start times (or meets), their
inseparable monitors are the same, and their resource-dependent
separable monitors have equal attributes.  Their resource-independent
separable monitors (usually assign resource monitors, and prefer
resources monitors with no preferred resources) may differ:  instead
of influencing similarity, they determine the task's position in the
sequence of tasks of its mtask.
@End @SubSection

@SubSection
    @Title { Behind the scenes 2:  accessing mtasks and mtask sets }
    @Tag { resource_structural.mtask_finding.impl }
@Begin
@LP
This section describes the mtask finder's moderately efficient data
structure for accessing mtasks by signature, and for finding the
mtask sets returned by @C { KheMTaskFinderMTasksInTimeGroup } and
@C { KheMTaskFinderMTasksInInterval }.  It has been written to
clarify the ideas of its somewhat confused author, and is not likely
to be of any value to the user.
@PP
Quite a few objects are created and deleted in the operations that
follow.  Deleted objects are added to free lists in the mtask finder,
where they are available for future creations.
@PP
The data structure allows proper root tasks to be inserted and
deleted at any moment, not just during initialization.  This
flexibility is needed to support the `forbidden' operations,
which work by deleting from the data structure the proper
root tasks they affect, carrying out the operation requested, and
then inserting the result tasks back into the data structure.
@PP
Actually there are three data structures.  First, there is an array
of all mtasks, included to support @C { KheMTaskFinderMTaskCount }
and @C { KheMTaskFinderMTask }.  Each mtask contains its index in
this array.  To add a new mtask we add it to the end and set its
index; to delete it we use its index to find its position, and
move the last element to that position, changing its index.
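@PP
This is the standard move-last-into-hole technique for constant-time
deletion from an unordered array.  A self-contained sketch (a generic
element type, not the KHE types):

```c
#include <assert.h>

/* A generic element that records its own index, as each mtask
   records its index in the mtask finder's array. */
typedef struct {
  int index;
} EG_ELT;

/* Add e at the end of array a, which holds *count elements. */
void EgArrayAdd(EG_ELT **a, int *count, EG_ELT *e)
{
  e->index = *count;
  a[(*count)++] = e;
}

/* Delete e in constant time: move the last element into e's
   position and update that element's recorded index. */
void EgArrayDelete(EG_ELT **a, int *count, EG_ELT *e)
{
  EG_ELT *last = a[--(*count)];
  a[e->index] = last;
  last->index = e->index;
}
```

When e is itself the last element, the move harmlessly copies it
onto itself.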
@PP
Second, there is an array of mtasks indexed by task index (function
@C { KheTaskSolnIndex } from the KHE platform).  For each task
handled by the mtask finder (each proper root task of a suitable
resource type), the value at its index is its mtask.  Other indexes
have @C { NULL } values and are never accessed.  This supports a
trivial implementation of @C { KheMTaskFinderTaskToMTask }.  When a
task is added to an mtask or removed from it, the value at its index
is changed.
@PP
We won't mention these two arrays again, although they are kept up
to date as the structure changes.  All subsequent data structure
descriptions relate to the third data structure.
@PP
Every task has a resource type, and every mtask has one too, because its
tasks all have the same domain.  @C { KheMTaskFinderMTasksInTimeGroup }
and @C { KheMTaskFinderMTasksInInterval } have a resource type parameter
and return sets of mtasks which all have that resource type.
@PP
So all operations that we are concerned with here have a parameter
which is a non-@C { NULL } resource type; call it @C { rt }.
Each operation traverses a short list of tables (this list is the
entry point for the third data structure), one table for each
resource type supported by the mtask finder, to find the table for
@C { rt }.  The rest takes place in that table; everything
in it has resource type @C { rt }.
@PP
@BI { Task insertion }.
To add a proper root task to the structure, first we
build its @I { signature }.  This is an object containing everything
needed to decide whether two proper root tasks are similar, as defined
in Section {@NumberOf resource_structural.mtask_finding.similarity},
including one @I { atomic signature } for each atomic task assigned,
directly or indirectly, to the proper root task.  Atomic signatures
are sorted into a canonical order for ease of comparison.  The
non-assignment and assignment costs, as returned by @C { KheMTaskTask },
are calculated at the same time as the signature but are not part of
it and are stored separately.
@PP
The tasks of an mtask have equal signatures.  This shared signature
is stored in the mtask.  A task belongs in an mtask if its signature
is equal to the mtask's stored signature.
@PP
So after calculating the signature of the new task, the second
step is to search the appropriate table to see if it contains an
mtask with the same signature as the signature of the new task.
There are three different ways to do this, depending on the
@I { type } of the signature:
@TaggedList

@DTI { @C { KHE_SIG_FIXED_TIMES } } {
A task's signature has this type when the @C { fixed_times } parameter
of @C { KheMTaskFinderMake } is @C { true }, each of its atomic tasks
derived from a meet has an assigned time, and there is at least one
such atomic task.  So the task has a chronologically first assigned
time, and we use that as an index into the table.  We'll explain how
this is done later on.
}

@DTI { @C { KHE_SIG_MEETS } } {
A task's signature has this type when it has at least one atomic
task derived from a meet, and either the @C { fixed_times } parameter
of @C { KheMTaskFinderMake } is @C { false } or not every such
atomic task has an assigned time.  We use any one of these meets to
find other tasks with the same signature:  we traverse the set of
all tasks of the meet, and for each of those of the right resource
type that has an mtask, we compare the mtask's signature with the
new task's signature.  So there is no third data structure for this
case; the meet itself provides a suitable structure.  This would
not work for @C { KHE_SIG_FIXED_TIMES }, because fixed-time tasks
with the same signatures can come from different meets.
}

@DTI { @C { KHE_SIG_OTHER } } {
A task's signature has this type when neither of the other two
cases applies.  This means that the task has no atomic tasks
derived from meets; its duration is therefore zero and it is
basically useless.  Still, for uniformity it must lie in an
mtask.  These mtasks are likely to be very few, so they are
stored in a separate list in the table, and this list is
searched to find the mtask (if any) with this signature.
}

@EndList
Whichever way the search is done, if it finds an existing mtask
whose signature is equal to the new task's signature, all we have
to do is add the new task to that mtask and throw away the new
task's signature.  If it does not find an existing mtask with
that signature, we have to create a new mtask with that signature,
add the new task to it as its first task, and insert the new
mtask into the data structure.  This insertion does nothing
if the signature type is @C { KHE_SIG_MEETS }, and it is a simple
addition to the end of the table's separate list if the signature
type is @C { KHE_SIG_OTHER }.  How an mtask is inserted when its
type is @C { KHE_SIG_FIXED_TIMES } is a subject for later.
@PP
@BI { Task deletion }.
To delete a task, we first delete it from its mtask, obtained
by the usual call to @C { KheMTaskFinderTaskToMTask }.  If
the mtask becomes empty, we then have to delete the mtask
(we don't allow empty mtasks).  We do this in one of three
ways depending on the signature type.  If the type is
@C { KHE_SIG_MEETS } there is nothing to do;  if it is
@C { KHE_SIG_OTHER } we search the appropriate table's
separate list of mtasks of this type and delete the
mtask from there.  If the type is @C { KHE_SIG_FIXED_TIMES }
we use the first assigned time to index the table, as
for insertion, and carry on as described below.
@PP
@BI { The third data structure }.  The third data structure supports
five operations:  mtask retrieval by signature, mtask insertion,
mtask deletion, @C { KheMTaskFinderMTasksInTimeGroup }, and
@C { KheMTaskFinderMTasksInInterval }.  The last two operations
are supposed to cache their results so that multiple calls with
the same parameters run quickly.  These cached values must be
kept up to date as mtasks are inserted and deleted.
@PP
We've already shown how the first three operations are done when
the signature type is @C { KHE_SIG_MEETS } or @C { KHE_SIG_OTHER }.
The last two, @C { KheMTaskFinderMTasksInTimeGroup } and
@C { KheMTaskFinderMTasksInInterval }, do not deal in these
two types of mtasks anyway.  So we need to consider here only
mtasks whose signatures have type @C { KHE_SIG_FIXED_TIMES }.
@PP
One entry in the third data structure has type
@ID @C {
typedef struct khe_entry_rec {
  KHE_TIME_GROUP		tg;
  KHE_INTERVAL			in;
  KHE_MTASK_SET			mts;
} *KHE_ENTRY;
}
Entry @C { e } means:
`the value of @C { KheMTaskFinderMTasksInTimeGroup(mtf, rt, tg) }
is @C { e->mts } when @C { tg == e->tg }, and the value of
@C { KheMTaskFinderMTasksInInterval(mtf, rt, in) } is @C { e->mts }
when @C { in == e->in }.'  The @C { rt } parameter is not mentioned
because @C { e } lies within one table of the third data structure,
as defined above, and @C { rt } is taken care of when this table
is selected.
@PP
One table of the third data structure, then, consists of an
array indexed by time, where each element contains a list
of these entries.  An entry appears once in each list indexed
by a time that is one of the times of its time group or interval
(considered as a set of time groups).  This means that an entry
appears in the table as many times as its time group or interval
has times.
@PP
As we will see, from time to time it will be necessary to add
an entry to a table.  However we never delete an entry.  Once
we begin keeping track of the mtasks of a particular time group
or interval, we continue doing that until the mtask finder is
deleted.  This is arguably wasteful, but the perennial caching
question (is this cache entry still needed?) has no easy answer
here, and we expect to receive queries for only a moderate
number of distinct time groups and intervals.
@PP
Let us see now how to implement the five operations.
@PP
To retrieve an mtask by signature, we take the chronologically
first time of the signature (call it @C { t }), and we take the
first entry of the list indexed by @C { t }.  As we'll see later,
this entry is always present and its mtask set contains every
mtask whose signature includes @C { t }.  So we search that
mtask set for an mtask containing the signature we are looking for.
@PP
To insert a new mtask @C { mt }, we have to find every mtask set
that @C { mt } belongs in and add it.  So for each of @C { mt }'s
fixed times we traverse the list of entries indexed by that time
and add @C { mt } to the mtask set in each entry.  It is easy to
see that these are exactly the mtask sets that @C { mt } needs to
be added to.  An entry can appear in several lists, so we only
add @C { mt } to an mtask set when it is not already present.
If it is present it will be at the end, so that condition can
be checked quickly.
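@PP
A sketch of this append-unless-last check (generic pointers, not the
KHE types):

```c
#include <assert.h>

/* Append mt to set (holding *count pointers) unless it is already
   the last element.  During one mtask insertion, a repeat visit to
   the same entry can only re-offer the element just appended, so
   checking the last element suffices. */
void EgSetAddUnique(void **set, int *count, void *mt)
{
  if( *count == 0 || set[*count - 1] != mt )
    set[(*count)++] = mt;
}
```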
@PP
To delete an mtask @C { mt } we have to find every mtask set
that @C { mt } is currently in and remove it.  So for each of
@C { mt }'s fixed times we traverse the list of entries indexed
by that time and delete @C { mt } from the mtask set in each entry.
Because an entry can appear in several lists, we only attempt to
delete @C { mt } from an entry's mtask set when it is present.
@PP
To implement @C { KheMTaskFinderMTasksInTimeGroup(mtf, rt, tg) },
we first need to check whether the @C { rt } table contains an
entry for @C { tg }.  We do this by searching the list of entries
indexed by the first time of @C { tg } (it is a precondition that
@C { tg } cannot be empty) for an entry containing @C { tg }.
If we find one, we return its mtask set and we are done.
@PP
If there is no entry containing @C { tg }, we have to make one
and add it to each list indexed by a time of @C { tg }, which
is straightforward.  The hard part is that we also have to build
the mtask set of all mtasks whose fixed times have a non-empty
intersection with @C { tg }, so that we can add it to the new
entry and also return it to the caller.  We could do this from
scratch, by finding all tasks running at the relevant times, then
building and uniqueifying the set of all these tasks' mtasks.
But we do it in a faster way, as follows.
@PP
As we saw when inserting and deleting mtasks, once an entry is
present it is kept up to date as mtasks come and go.  So during
the initialization of the mtask finder, before any mtasks have
been created, we add one entry to the start of each list.  If
the list is for time @C { t }, the entry contains a time group
containing just @C { t } (as returned by platform function
@C { KheTimeSingletonTimeGroup }) and an empty mtask set.  As
mtasks are inserted and deleted, this mtask set will always hold
the set of all fixed-time mtasks whose times include @C { t }.
This entry will always be first in its list.
@PP
So to build the new mtask set, we take the union of the mtask sets in
the first entries of the lists indexed by the times of the new time
group.  We call @C { KheMTaskSetAddMTaskSet } repeatedly to build
the union, then we call @C { KheMTaskSetUniqueify } to uniqueify it.
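@PP
Assuming that uniqueifying means sorting and removing adjacent
duplicates (one common implementation; the real
@C { KheMTaskSetUniqueify } may differ), the union-then-uniqueify
step looks like this on plain integers:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Append all src_count elements of src to dst, which already holds
   *dst_count elements (the analogue of building the union with
   repeated calls to an add-set operation). */
void EgSetAddSet(int *dst, int *dst_count, const int *src, int src_count)
{
  memcpy(dst + *dst_count, src, src_count * sizeof(int));
  *dst_count += src_count;
}

static int EgIntCmp(const void *a, const void *b)
{
  return *(const int *) a - *(const int *) b;
}

/* Uniqueify: sort, then keep only the first of each run of equals. */
void EgSetUniqueify(int *a, int *count)
{
  int i, n = 0;
  qsort(a, *count, sizeof(int), EgIntCmp);
  for( i = 0;  i < *count;  i++ )
    if( n == 0 || a[n - 1] != a[i] )
      a[n++] = a[i];
  *count = n;
}
```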
@PP
@C { KheMTaskFinderMTasksInInterval } is similar to
@C { KheMTaskFinderMTasksInTimeGroup }.  Its
@C { in } parameter is just a shorthand for the union of the time
groups of @C { in }'s days.
@PP
Where then is the confusion?  The author was not sure whether
each entry had to be added to multiple lists.  Suppose each
entry was added to just one list, the one for its time group's
first time.  @C { KheMTaskFinderMTasksInTimeGroup } and
@C { KheMTaskFinderMTasksInInterval } at least would be fine:
they use only that first time to access the table.  Would
anything go wrong?
@PP
Just one thing would go wrong, as it turns out.  When a new mtask is
added, it would be added to the mtask set of each entry whose time
group's first time is one of the mtask's fixed times.  But that is
not enough.  For example, an mtask holding tasks of the Wednesday
night shift would not be added to the mtask set holding all mtasks
running on Wednesday, because that mtask set's entry would lie only
in the list indexed by the first time on Wednesday.
@PP
The mtask finder's similarity rule must be complicated, but are
the complications just described necessary?  The author believes
that they are.  @C { KheMTaskFinderMTasksInTimeGroup } and
@C { KheMTaskFinderMTasksInInterval } are used frequently by
the ejection chain solver, so they must run quickly.  The symmetry
elimination provided by mtasks is essential for combinatorial grouping
(Section {@NumberOf resource_structural.task_grouping.combinatorial}),
and that solver also needs the `forbidden' operations.  We don't want
multiple multi-task software modules, so one module has to do it all.
@End @SubSection

@EndSubSections
@End @Section

@Section
    @Title { Task grouping }
    @Tag { resource_structural.task_grouping }
@Begin
@LP
To @I group some tasks means to add an unbreakable requirement that at
each moment @M { m } from when they are grouped until when they are
ungrouped, either they are all unassigned at @M { m }, or else they
are all assigned to the same @I { parent task } at @M { m }.  The
parent task is usually a cycle task, representing a resource, although
it need not be.
# Put another way,
# assigning a parent task to a grouped task is the same as assigning
# it to every member of the group.
@PP
Concretely, task grouping is carried out by selecting one of
the tasks to be the @I { leader task }, and assigning the others,
called @I { follower tasks }, to the leader task.  Assigning
the group to a parent task is done by assigning the leader
task to the parent task.
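@PP
A minimal sketch of this scheme (a hypothetical task record with a
single assignment pointer, not the KHE types):

```c
#include <assert.h>
#include <stddef.h>

/* A hypothetical task with at most one assignment, to another task
   or to nothing.  An unassigned end of a chain stands for the
   parent task (usually a cycle task representing a resource). */
typedef struct eg_task_rec {
  struct eg_task_rec *assigned_to;
} EG_TASK_REC;

/* Group followers under leader by assigning each follower to it. */
void EgGroup(EG_TASK_REC *leader, EG_TASK_REC **followers, int count)
{
  int i;
  for( i = 0;  i < count;  i++ )
    followers[i]->assigned_to = leader;
}

/* Follow the chain of assignments to its end; once the group is
   assigned, every member reaches the same parent task. */
EG_TASK_REC *EgEndOfChain(EG_TASK_REC *t)
{
  while( t->assigned_to != NULL )
    t = t->assigned_to;
  return t;
}
```

Assigning the leader to a parent task then assigns the whole group,
since every follower's chain passes through the leader.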
@PP
The first subsection presents a helper module for finding the domains
of leader tasks.  The two subsections after that present the task and
mtask groupers, used throughout KHE to perform the actual grouping.
The other subsections present applications of task grouping.
# @PP
# There is an obscure but important requirement, which we will call the
# @I { task grouping precondition }, that should hold at the start of
# any task grouping procedure:
# @ID @OneRow {
# Suppose there are two tasks, @M { t sub 1 } and @M { t sub 2 },
# such that (i) the first day that @M { t sub 2 } is running is the
# day following the last day that @M { t sub 1 } is running; and
# (ii) both @M { t sub 1 } and @M { t sub 2 } are assigned the
# same resource (call it @M { r }).  Then @M { t sub 1 } and
# @M { t sub 2 } should be grouped.
# }
# The reason for this is that if they are not already grouped, it
# is possible that @M { t sub 1 } will become grouped with some
# other task @M { t sub 3 } which is running on the same day as
# @M { t sub 2 } is running.  But grouping @M { t sub 3 } with
# @M { t sub 1 } effectively assigns @M { r } to @M { t sub 3 },
# giving us two tasks, @M { t sub 2 } and @M { t sub 3 }, assigned
# the same resource and running on the same day.  Similarly,
# @M { t sub 2 } could become grouped with some other task running
# on the same day as @M { t sub 1 }.  By requiring @M { t sub 1 }
# and @M { t sub 2 } to be already grouped, we avoid this danger.
# # They
# # describe old code that would turn out rather differently if it
# # was written today.
@BeginSubSections

#@SubSection
#    @Title { The task grouper (old) }
#    @Tag { resource_structural.task_grouping.task_grouper_old }
#@Begin
#@LP
#Different solvers group tasks for different reasons, but the
#actual grouping should always be done in the same way, as follows.
#The first step is to create a @I { task grouper object } by calling
#@ID @C {
#KHE_TASK_GROUPER KheTaskGrouperMake(KHE_FRAME days_frame, HA_ARENA a);
#}
#Both parameters must be non-@C { NULL }.  This object remains
#available until arena @C { a } is deleted or recycled.  It can
#be used repeatedly to make many groups, although only one at a
#time.  To begin making a group, call
#@ID @C {
#void KheTaskGrouperClear(KHE_TASK_GROUPER tg);
#}
#This clears away any remnants of previous groups.  To add one
#task to the growing group, call
#@ID @C {
#bool KheTaskGrouperAddTask(KHE_TASK_GROUPER tg, KHE_TASK task);
#}
#If @C { task } is a proper root task, and compatible with the
#tasks already added (concerning which see below), this stores
#@C { task } in @C { tg } and returns @C { true }.  Otherwise it
#stores nothing and returns @C { false }.  Either way, no task
#assignments or moves are made at this stage.
#@PP
#It is also possible to delete the record of a previously stored task,
#by calling
#@ID @C {
#void KheTaskGrouperDeleteTask(KHE_TASK_GROUPER tg, KHE_TASK task);
#}
#However, due to issues with finding leader tasks, only the most
#recently added but not deleted task may be deleted in this way.
#@PP
#Functions
#@ID @C {
#int KheTaskGrouperTaskCount(KHE_TASK_GROUPER tg);
#KHE_TASK KheTaskGrouperTask(KHE_TASK_GROUPER tg, int i);
#}
#return the number of tasks stored in @C { tg } and the @C { i }th of
#those tasks, in the usual way, and
#@ID @C {
#bool KheTaskGrouperContainsTask(KHE_TASK_GROUPER tg, KHE_TASK task);
#}
#returns @C { true } if @C { tg } contains @C { task }.
#@PP
#Function
#@ID @C {
#void KheTaskGrouperCopy(KHE_TASK_GROUPER dst_tg,
#  KHE_TASK_GROUPER src_tg);
#}
#copies the contents of @C { src_tg } into @C { dst_tg }.  It is
#equivalent to clearing @C { dst_tg } and then adding the tasks
#of @C { src_tg } to @C { dst_tg }, one by one.  There is no
#problem with adding a task to two task groupers.  There is
#a problem if you then try to make a group in both groupers.
#@PP
#Function
#@ID @C {
#KHE_INTERVAL KheTaskGrouperInterval(KHE_TASK_GROUPER tg);
#}
#returns the interval of @C { days_frame } covered by the tasks
#of @C { tg }.  This interval is kept up to date as tasks are added
#and deleted; it helps when deciding whether a task can be added.
#@PP
#Function
#@ID @C {
#KHE_COST KheTaskGrouperCost(KHE_TASK_GROUPER tg);
#}
#returns the cost of making a group out of the tasks currently present
#in the grouper, without actually doing any grouping.  This cost is
#defined as follows.
#@PP
#Find a resource @C { r } lying in the domain of every task of
#@C { tg }.  Such an @C { r } must exist, because (as we'll see
#below) only sets of tasks which have a leader task with a
#non-empty domain are accepted.  Let @C { in } be the smallest
#interval of days containing every day that the tasks of @C { tg }
#are running, plus (where present) the day before their first day
#and the day after their last day.  Find the set @C { S } of all
#cluster busy times and limit busy times monitors that monitor
#@C { r } during @C { in } but not outside @C { in }, and are
#derived from constraints that monitor every resource of @C { r }'s
#type, as returned by @C { KheResourceTimetableMonitorAddInterval }
#(Section {@NumberOf monitoring_timetables_resource}).
#Make sure that @C { r } is free on every day of @C { in }.  This
#could involve unassigning @C { r } from some tasks, which in turn
#could involve unfixing assignments.  Assign @C { r } to the tasks
#of @C { tg }.  Find the total cost of the monitors of @C { S } at
#this point; this is the result.  Finish by restoring the initial
#state (unassigning @C { r } from the tasks of @C { tg }, then
#re-fixing and reassigning other tasks as required).
#@PP
#The somewhat peculiar set of monitors included in the cost aims
#to focus on local things such as complete weekends and unwanted
#patterns.  Omitting global things like total workload makes
#sense because task grouping has nothing to do with global
#constraints.  Omitting limit active intervals monitors is
#wrong if the group violates the maximum limit of such a monitor.
#However, in practice groups are never large enough to do this.
#Omitting avoid unavailable times monitors makes sense because
#different resources are unavailable at different times, and if
#one resource is unavailable for a given group, another resource
#probably will be available.
#@PP
#Another kind of cost that it might be useful to include is the
#cost (reported by event resource monitors) of assigning or not
#assigning the tasks of @C { tg }.  The caller can easily include
#these costs, by calling @C { KheTaskNonAsstAndAsstCost }
#(Section {@NumberOf resource_structural.mtask_finding.ops})
#and adding them in.
#@PP
#Function
#@ID @C {
#KHE_TASK KheTaskGrouperMakeGroup(KHE_TASK_GROUPER tg,
#  KHE_SOLN_ADJUSTER sa);
#}
#makes one group from the currently stored tasks.  Concretely, it
#chooses a leader task from these stored tasks and assigns the
#other stored tasks to it.  It returns the leader task.  The call to
#@C { KheTaskGrouperMakeGroup } cannot fail, given that incompatible
#tasks have already been rejected by @C { KheTaskGrouperAddTask },
#although it will abort if no tasks are stored, and do nothing (correctly)
#if just one is stored.  If @C { sa != NULL }, the changes are saved
#in @C { sa } so that they can be undone later.  The task grouper
#itself does not offer an undo operation.  But @C { sa } can record
#any number of grouping operations, and then undoing @C { sa } will
#undo them all.
#@PP
#@C { KheTaskGrouperMakeGroup } does not clear the grouper.  One can
#call it, evaluate the result, then use @C { sa } to undo the grouping,
#and then carry on just as though @C { KheTaskGrouperMakeGroup } had
#not been called.
## Together with @C { KheTaskGrouperDeleteTask }
## this means that a tree search for the best group (in any sense
## chosen by the caller) is supported.
## (this undo will be exact unless some tasks of the group are
## assigned initially and others are unassigned)
#@PP
#Finally,
#@ID @C {
#void KheTaskGrouperDebug(KHE_TASK_GROUPER tg,
#  int verbosity, int indent, FILE *fp);
#}
#produces a debug print of @C { tg } onto @C { fp } with the given
#verbosity and indent.
#@PP
#The task grouper keeps a list of the tasks that have been added, each
#with some associated information.  When memory for this is no longer
#needed (when @C { KheTaskGrouperClear } or @C { KheTaskGrouperDeleteTask }
#is called), it is recycled through a free list in the task grouper.
#So it is much better to re-use one task grouper than to create many.
#@PP
#All this may sound simple, but we now have a long list of issues to
#ponder, to make task grouping robust and able to interact appropriately
#with other solvers.  This is why task groupers are needed:  there is
#a lot more to it than just assigning followers to a leader task.
## ).  Task grouping is
## part of structural solving, and so we have to consider what undoing
## it means, and its interactions with other structural solvers and
## ordinary solvers.
## ---all important in practice, because task grouping
## has many applications and many interactions.
#@PP
#@BI { Acceptable tasks. }
#Earlier we deferred a detailed explanation of what makes a task
#@C { task } acceptable to @C { KheTaskGrouperAddTask }.  We give that
#explanation now.
#@PP
#To begin with, @C { task } must be non-@C { NULL } and must be a
#proper root task (either assigned to a resource or not).  Requiring
#@C { task } to be a proper root task is not absolutely necessary,
#but it is a useful sanity measure (do we really want to group a
#task that is already in a group that it is not the leader task of?),
#and it makes @C { KheTaskGrouperCost } easier to understand.
#@PP
#@C { KheTaskGrouperAddTask } aborts when this first condition does not
#hold.  The remaining conditions merely cause @C { KheTaskGrouperAddTask }
#to return @C { false } when they do not hold:
#@NumberedList
#
#@LI @OneRow {
#@C { task }'s domain must be non-empty (so that
#@C { KheTaskGrouperCost } can be implemented).
#}
#
#@LI @OneRow {
#The interval of days that @C { task } is running must be disjoint
#from the interval of days that the other tasks of the group (taken
#together) are running.  Among other things, this prevents the same
#task from being added to the group twice.
## If @C { task } is the first task added to the group, that's all.
## The remaining conditions apply when @C { task } is not the first task.
## @C { task } must not be already in the group.
#}
#
#@LI @OneRow {
#@C { KheTaskGrouperAddTask } must be able to find a leader
#task for the group including @C { task }.  We'll explain what that
#involves in a moment.
#}
#
#@LI @OneRow {
#If @C { task } is assigned a resource, there must be no
#other task assigned a different resource.  The other tasks may
#be unassigned, or assigned the same resource, but it is not
#possible to group two tasks that are assigned different resources.
#}
#
#@LI @OneRow {
#If @C { task } is assigned a resource, then it must be possible
#to assign the leader task to that resource.  Otherwise we could
#not preserve this assignment after the tasks are grouped.
#}
#
#@LI @OneRow {
#After adding @C { task }, there cannot be one task with an assigned
#resource and another task with a fixed non-assignment.  We cannot
#preserve both conditions after the tasks are grouped.
#}
#
#@LI @OneRow {
#Adding @C { task } must not give rise to @I { interference }:  a
#situation where two tasks assigned the same resource are running
#on the same day.  Interference is explained in detail below.
#}
#
#@EndList
#In practice, only the third of these conditions is likely to
#give trouble.
#@PP
#@BI { Finding a leader task. }
#The next problem is to find a suitable leader task.  We choose
#a task to be leader to which every other stored task can be moved.
#In other words, the domain of the chosen leader task must be a
#subset of the domain of every stored task.  If an attempt is made
#to store a task which prevents this (for example, if the new task's
#domain is disjoint from some already stored task's domain), then
#the new task is rejected and @C { KheTaskGrouperAddTask } returns
#@C { false }.  We calculate a leader task each time a task is
#stored, and keep them all.  If the last task is deleted we return
#to the previous leader task without having to re-calculate it.
#@PP
#A more general approach would find the best candidate for leader task
#and then reduce its domain until all the followers can be assigned to
#it (recall that assignment requires the parent's domain to be a
#subset of the child's).  A disadvantage of this is that the reduced
#domain could be empty, but it has been rejected for another reason:
#when many groups are being tried, many resource groups could be
#created, which would be expensive in running time and memory.
#@PP
#For the record, here is a check of the conditions imposed by
#@C { KheTaskMoveCheck }, which every task @C { t } moved to the
#chosen leader task must satisfy.  First, @C { t }'s assignment
#cannot be fixed.  We will be circumventing this, by unfixing
#beforehand and re-fixing afterwards, as explained below.  Second,
#@C { t } must not be a cycle task.  @C { KheTaskGrouperAddTask }
#aborts in this case (a cycle task is never a proper root task).
#Third, the move must change the assignment.  This holds because
#@C { t } is a proper root task.  Fourth and last, the domain of
#@C { t } must be a superset of the domain of the leader task.
#We've just explained how we handle that.  So the move must succeed.
#@PP
#@BI { Undoing a grouping. }
#Suppose that the stored tasks are unassigned initially.  A
#structural solver groups them by assigning the followers to the
#chosen leader task, then an ordinary solver assigns a resource to
#the leader task, and then we need to undo the grouping.  An exact
#undo would unassign the follower tasks, since they were unassigned
#initially; but that is quite wrong.  In fact, the follower tasks'
#assignments are moved from the leader task to whatever the leader
#task is assigned to at the time of the undo.  We see here that an
#overly literal interpretation of undo fails to capture the true
#meaning, which is that a previously imposed requirement has to be
#removed, without disturbing other requirements.  Function
#@C { KheSolnAdjusterTaskGroup }
#(Section {@NumberOf general_solvers.adjust.adjuster}) is offered
#by the solution adjuster module to support this kind of undo.
#@PP
#@BI { Tasks which are leaders of their own groups. }
#A stored task could be the leader task of a previously created
#group.  This is not a problem, because task grouping concerns the
#task's relationship with its parent, not its children.  If the
#task is chosen to be the leader task of the new group, its
#children will be partly from the old group and partly from the
#new group.  When we use @C { sa } to remove the group, it unassigns
#only the children from the new group, not all the children.
#@PP
#@BI { Assigned tasks. }
#All accepted tasks are proper root tasks, which means that
#each is either unassigned or assigned directly to a resource.
#@PP
#It would be easy if we could disallow assigned tasks, but
#we can't, because there is an application where that would
#pose a major problem:  interval grouping, where the assigned
#tasks come from assign by history.  Instead, as we know, the
#rule is that assigned tasks are permitted provided they are
#assigned to the same resource @M { r }.  To implement this, if
#the chosen leader task @M { l } is assigned to @M { r }, we
#move the others to @M { l }.  Otherwise @M { l } must be
#unassigned, so we assign @M { l } to @M { r } and move
#the others to @M { l }.  Either way, every task is now assigned
#to @M { r }, albeit indirectly (via @M { l }).
#@PP
#@BI { Interference. }
#When several tasks are grouped, some of which are assigned a
#resource @M { r } and some of which are not, an obscure problem
#can arise.  Suppose that we group two tasks, @M { t sub 1 } and
#@M { t sub 2 }, and that @M { t sub 1 } is initially assigned
#resource @M { r } and @M { t sub 2 } is initially unassigned.
#Then the grouping effectively assigns @M { r } to @M { t sub 2 }.
#If there is some other task assigned @M { r } which is running
#on any of the days that @M { t sub 2 } is running, this will
#mean that @M { r } has to attend two tasks on the same day, which
#is not allowed.  We say that the other task @I interferes with
#the grouping of @M { t sub 1 } with @M { t sub 2 }.  The task
#grouper rejects all tasks whose addition to the group would
#cause interference.
#@PP
#@BI { Fixed task assignments. }
#A task assignment may be @I { fixed }, meaning that it may not be
#changed.  Interpreted literally, a task with a fixed assignment
#cannot participate in task grouping unless it is chosen to be the
#leader task.  But we will view task fixing as a logical requirement
#that does not necessarily prevent a task from being grouped.
#@PP
#First, suppose that the task @M { t } whose assignment is fixed
#is assigned to a resource @M { r }.  Then if we ignore the fixing,
#in the grouped task @M { t } will either keep its assignment (if it
#is chosen to be the leader task) or else it will be assigned to
#the leader task and the leader task will be assigned to @M { r }.
#We regard this as acceptable for a fixed @M { t }, because @M { t }
#is still assigned to @M { r }, indirectly.  So when building the
#group, if @M { t } is not the leader task, we unfix it, move
#it to the leader task, and re-fix it.
#@PP
#In the grouped state, the assignment of @M { t } to the leader
#task could equally well be fixed or not fixed.  It does not
#matter, because no-one is going to change it until the time
#comes to undo it.  But we prefer to fix it.  What does matter
#is that the assignment of the leader task to @M { r } must be
#fixed, otherwise some ordinary solver could change it and thus
#violate the fix on @M { t }.
#@PP
#Second, suppose that the task @M { t } whose assignment is
#fixed is unassigned.  We interpret this as saying that @M { t }
#may not be assigned.  Once again, we need to fix the assignment
#of the leader task, but now we require that the leader task be
#unassigned, since otherwise we have violated the fix on @M { t }.
#So @M { t } cannot share a group with an assigned task, and we
#have the sixth condition above.
#@PP
#Undoing the grouping of a task whose assignment is initially fixed
#is straightforward.  Unfix the task's assignment, move it to the
#leader task's parent, and fix that assignment.
#@PP
#@BI { Summary of the task grouping algorithm. }
#Given a set of tasks which have passed the checks made by
#@C { KheTaskGrouperAddTask }, together with two values
#calculated while adding them (the leader task, and any
#assigned resource @M { r }), the actual grouping is done as follows.
#@PP
#First, move every task except the leader task to the leader
#task.  Fixed tasks are unfixed before their move and re-fixed
#after it.  Second, if there is an @M { r } and the leader task
#is not currently assigned to it, assign it to @M { r }.  (If
#this move is needed, then the leader task is not fixed.  This
#is because the only way that its assignment can differ from
#@M { r } is for its assignment to be @C { NULL } and @M { r }
#to be non-@C { NULL }; and in that case, if it was fixed it
#would be a fixed unassigned task which was being grouped with
#an assigned task, which is not allowed.)  Third and last, if
#the leader task has at least one fixed follower (which we
#determine as we move the followers), and its assignment is
#not fixed, then fix its assignment.
#@PP
#Undoing is not exact, but we can approximate it by carrying out
#in reverse order the reverse of each step above, and then adjust
#the algorithm we get.  This produces the following.  A record of
#what happened during grouping is held in @C { sa }; this undo
#algorithm relies on that record.  There is not enough information
#in the tasks themselves to determine what to do.
#@PP
#First, if the leader task was fixed during grouping, unfix it.
#Second, irrespective of whether the leader task was moved to a
#resource, its assignment after the undo has to be its assignment
#at the time of the undo, so do nothing.  Third, move every follower
#task from the leader task to the leader task's assignment at the
#time of the undo (possibly @C { NULL }).  If the follower task was
#(and so is) fixed, unfix it before the move and re-fix it afterwards.
#@PP
#@BI { Another interface to task grouping. }
#There is another way to access task grouping.  It offers the same
#semantics:  indeed, behind the scenes it runs the same code.  It is
#less easy to use, but for certain applications (interval grouping,
#for example) it can save a lot of time and memory.
#@PP
#Instead of type @C { KHE_TASK_GROUPER }, this interface uses
#@C { KHE_TASK_GROUPER_ENTRY }.  This type holds one task of the growing
#group, some information about the group (its leader task and optional
#assigned resource, mainly), and a pointer to the previous entry,
#holding the previous task and information.  This pointer will be @C { NULL }
#in the entry holding the first task.  This makes a singly linked
#list of tasks and information, accessed from the last (most recently
#added) entry.
#@PP
#The advantage of the linked structure is that if we are trying
#two sequences of tasks, @M { angleleft a, b, c angleright } and
#@M { angleleft a, b, d angleright }, then the first part of the
#two sequences, @M { angleleft a, b angleright }, can be shared.
#This is where the time and memory savings can be made.
#@C { KheTaskGrouperDeleteTask } offers analogous savings
#(delete @M { c } then add @M { d }), but it does not allow
#the two proto-groups to exist simultaneously.
#@PP
#These operations make up this interface to task grouping:
#@ID @C {
#bool KheTaskGrouperEntryAddTask(KHE_TASK_GROUPER_ENTRY prev,
#  KHE_TASK task, KHE_FRAME days_frame, KHE_TASK_GROUPER_ENTRY next);
#void KheTaskGrouperEntryCopy(KHE_TASK_GROUPER_ENTRY dst_last,
#  KHE_TASK_GROUPER_ENTRY src_last);
#KHE_INTERVAL KheTaskGrouperEntryInterval(KHE_TASK_GROUPER_ENTRY last);
#KHE_COST KheTaskGrouperEntryCost(KHE_TASK_GROUPER_ENTRY last,
#  KHE_FRAME days_frame);
#KHE_TASK KheTaskGrouperEntryMakeGroup(KHE_TASK_GROUPER_ENTRY last,
#  KHE_SOLN_ADJUSTER sa);
#}
#@C { KheTaskGrouperEntryAddTask } is semantically the same as
#@C { KheTaskGrouperAddTask }, but here the previously added tasks are
#represented by @C { prev }.  This will be @C { NULL } when @C { task }
#is the first task.  The result of the addition (if @C { true } is
#returned) is represented by @C { next }, which will contain
#@C { task } and related information.
#@PP
#Any number of calls to @C { KheTaskGrouperEntryAddTask } with
#the same @C { prev } may be made.  This is how sequences come
#to share subsequences, as described above.  A group is defined
#by its last entry.  There is no ambiguity, because there is
#only one path going backwards.
#@PP
#@C { KheTaskGrouperEntryCopy } copies the record pointed to by
#@C { src_last } to the record pointed to by @C { dst_last }.
#@C { KheTaskGrouperEntryInterval }, @C { KheTaskGrouperEntryCost },
#and @C { KheTaskGrouperEntryMakeGroup } are semantically the same
#as @C { KheTaskGrouperInterval }, @C { KheTaskGrouperCost }, and
#@C { KheTaskGrouperMakeGroup }, but here the tasks are the task stored
#in @C { last }, the task stored in its predecessor entry, and so on.
#@PP
#This form of task grouping does not allocate any memory.
#The memory pointed to by @C { prev } (if non-@C { NULL }) and
#@C { next } (always non-@C { NULL }) must be allocated by
#the caller, using code such as
#@ID @C {
#struct khe_task_grouper_entry_rec new_entry_rec;
#KheTaskGrouperEntryAddTask(prev, task, days_frame, &new_entry_rec);
#}
#Here @C { struct khe_task_grouper_entry_rec } is the struct that
#@C { KHE_TASK_GROUPER_ENTRY } points to; it is defined (with
#its fields) in @C { khe_solvers.h } alongside
#@C { KHE_TASK_GROUPER_ENTRY }.  @C { KheTaskGrouperEntryAddTask }
#overwrites the memory pointed to by @C { next }.
## @C { KHE_TASK_GROUPER_ENTRY } is a
## @C { struct }, not a pointer type.  Its definition appears in file
## @C { khe_solvers.h }, although it is better if the user treats it
## as a private type.  When calling @C { KheTaskGrouperEntryAddTask },
## both @C { prev } (if non-@C { NULL }) and @C { next } (always
## non-@C { NULL }) must point to memory that the caller has made
## available to hold values of this type.  The memory pointed to by
## @C { next } will be overwritten by @C { KheTaskGrouperEntryAddTask }.
## When used this way, task grouping does not itself allocate any memory.
#@PP
#This function has been added to fix an issue in interval grouping:
#@ID @C {
#void KheTaskGrouperEntryAddDummy(KHE_TASK_GROUPER_ENTRY prev,
#  KHE_TASK_GROUPER_ENTRY next);
#}
#Like @C { KheTaskGrouperEntryAddTask }, this adds @C { next } as a
#successor to @C { prev }, but here the entry is a @I { dummy }:
#it changes nothing.  The fields of @C { next } are copied from
#@C { prev }, which must be non-@C { NULL }.  The new entry is marked
#so that @C { KheTaskGrouperEntryMakeGroup } knows to ignore it.  Also,
#@ID @C {
#bool KheTaskGrouperEntryIsDummy(KHE_TASK_GROUPER_ENTRY entry);
#KHE_TASK_GROUPER_ENTRY KheTaskGrouperEntryPrev(
#  KHE_TASK_GROUPER_ENTRY entry);
#KHE_TASK KheTaskGrouperEntryTask(KHE_TASK_GROUPER_ENTRY entry);
#}
#return, respectively, @C { true } when @C { entry } is a dummy entry,
#@C { entry }'s previous entry, and (if @C { entry } is not a dummy
#entry) its task.
#These functions can be used like this:
#@ID @C {
#for( e = last;  e != NULL;  e = KheTaskGrouperEntryPrev(e) )
#  if( !KheTaskGrouperEntryIsDummy(e) )
#  {
#    task = KheTaskGrouperEntryTask(e);
#    ... visit task ...
#  }
#}
#to visit the tasks of a group whose last element is @C { last }.
#@End @SubSection

@SubSection
    @Title { Finding task group domains }
    @Tag { resource_structural.task_grouping.domains }
@Begin
@LP
As just explained, building a task group involves choosing one of
the tasks to be the leader task and assigning the other tasks,
called the follower tasks, to the leader task.
@PP
When assigning one task to another, the domain of the parent
task must be a subset of the domain of the child task.  For
grouping, this means that the domain of the leader task must
be a subset of the intersection of the domains of the follower
tasks.  In practice, if we change the domain of the leader
task, then we want to reduce it, and by as little as possible.
Altogether what we need is the intersection of the domains of
all the tasks (leader and followers) of the group.
@PP
The task grouper module to be presented shortly builds groups
one task at a time.  As each task is added, a suitable domain
for the leader task is chosen, although the leader task itself
is not chosen until the tasks are actually grouped.
@PP
To choose the domain, first the existing domain and the domain
of the incoming task are compared.  If either is a subset of
the other, then the smaller of the two is the leader task
domain.  If the two domains are disjoint, the incoming task
is rejected.  These two cases cover what usually happens in
practice, and they ensure that the requirements on domains are
satisfied without any need to build new resource groups or
change the domains of any tasks.
@PP
This leaves us with the problem of what to do when neither
domain is a subset of the other, and yet they are not disjoint.
In this case we need to build a new resource group representing
the intersection of the two domains, and record that as the
leader task domain.
@PP
There is no logical problem in doing this, but there is an
efficiency problem if we are making many groups, as in interval
grouping (Section
{@NumberOf resource_structural.task_grouping.interval_grouping})
for example.  It is true that the functions that construct new
resource groups during solving (@C { KheSolnResourceGroupBegin }
and so on from Section {@NumberOf solutions.groups}) cache their
results, meaning that if the same resource group is constructed
twice, the same object is returned both times, so that there is
no problem with the amount of memory used.  But each time those
functions are used, we have to calculate intersections of sets of
resources and perform a search of a symbol table indexed by a set
of resources, taking more time than is really needed here.  So
we turn to something else.
# There
# is also a potentially serious memory problem, if resource groups
# with the same resources, and task bound objects holding those groups,
# are created over and over again.
@PP
We start by creating one object of type
@C { KHE_TASK_GROUP_DOMAIN_FINDER }, by a call to
@ID @C {
KHE_TASK_GROUP_DOMAIN_FINDER KheTaskGroupDomainFinderMake(KHE_SOLN soln,
  HA_ARENA a);
}
This object and others created by it persist until arena @C { a } is
deleted or recycled.  To make an object representing the intersection
of one or more resource groups, call
@ID @C {
KHE_TASK_GROUP_DOMAIN KheTaskGroupDomainMake(
  KHE_TASK_GROUP_DOMAIN_FINDER tgdf, KHE_TASK_GROUP_DOMAIN prev,
  KHE_RESOURCE_GROUP rg);
}
The resource group represented by the @C { KHE_TASK_GROUP_DOMAIN }
object returned here is the intersection of the resource groups
represented by @C { prev } with @C { rg }.  Or @C { prev } can be
@C { NULL }, and then the resource group represented is just
@C { rg }.  To retrieve this result the call is
@ID @C {
KHE_RESOURCE_GROUP KheTaskGroupDomainValue(KHE_TASK_GROUP_DOMAIN tgd);
}
This could be an empty resource group.
# @PP
# @C { KheTaskGroupDomainValue } also returns, in @C { *type }, a
# value of type
# @ID @C {
# typedef enum {
#   KHE_TASK_GROUP_DOMAIN_RG_ONLY,
#   KHE_TASK_GROUP_DOMAIN_RG_SUBSET_PREV,
#   KHE_TASK_GROUP_DOMAIN_PREV_SUBSET_RG,
#   KHE_TASK_GROUP_DOMAIN_PREV_INTERSECT_RG
# } KHE_TASK_GROUP_DOMAIN_TYPE;
# }
# This says how the result was determined:  either it was @C { rg }
# because @C { prev } was @C { NULL }, or else it was @C { rg }
# because @C { rg } was a subset of @C { prev }'s value, or else
# it was @C { prev }'s value because @C { prev }'s value was a
# subset of @C { rg }, or else none of those cases applied so the
# result was the intersection of @C { prev }'s value with @C { rg }
# but equal to neither.  This makes it easy to select a suitable
# leader task.
Also,
@ID @C {
KHE_TASK_BOUND KheTaskGroupDomainTaskBound(KHE_TASK_GROUP_DOMAIN tgd);
}
returns a task bound object containing the result of
@C { KheTaskGroupDomainValue }.  This is needed when setting
the domain of the leader task of the group, and storing it in
@C { tgd } avoids recreating it each time it is needed.
@PP
Finally,
@ID @C {
void KheTaskGroupDomainDebug(KHE_TASK_GROUP_DOMAIN tgd,
  int verbosity, int indent, FILE *fp);
}
prints @C { tgd } onto @C { fp } with the given verbosity and indent.
The print shows the Ids of the resource groups that @C { tgd } is the
intersection of, separated by `@C { * }' characters to denote intersection.
@PP
Resource groups and task bound objects created by the task group domain
finder are not stored in the task group domain finder's arena.  They
are created by the platform functions @C { KheSolnResourceGroupBegin }
(etc.) and @C { KheTaskBoundMake }, and their lifetime is equal to
the lifetime of the solution object.
@PP
We now explain how the task group domain finder works in detail.  This
will make clear why it is more efficient than what we already have.
@PP
Caching is used to ensure that for each distinct
@C { (prev, rg) } pair, the value returned by
@C { KheTaskGroupDomainMake } is the same.  In practice there will
only be a moderate number of distinct pairs, so there is no issue
with memory.  When @C { prev == NULL } the cache consulted lies in
@C { tgdf }; when @C { prev != NULL } the cache lies in @C { prev }.
Each cache is a simple unsorted list when it contains up to 10
elements; as it grows beyond 10 elements, it transforms into an
array of unsorted lists indexed by @C { rg }'s first element.
Each cached value has type @C { KHE_TASK_GROUP_DOMAIN } and
contains the result resource group and task bound.
@PP
When the cache cannot supply an already existing object for the pair
@C { (prev, rg) }, a new @C { KHE_TASK_GROUP_DOMAIN } object is
created and added to the cache.  Its value and task bound
are found at this time and stored in the new object.
When @C { prev == NULL } this value will be @C { rg }; when
@C { prev != NULL } and one of @C { prev }'s value and @C { rg }
is a subset of the other it will be one of the two; and otherwise
@C { KheSolnResourceGroupBegin } and its related functions
(Section {@NumberOf solutions.groups}) are used to create the
value.  This last case will be slow, but it will be done only once
for each distinct @C { (prev, rg) } pair, and only for pairs
for which @C { prev != NULL } and both subset tests fail.  Note
that even the subset tests are avoided when there is a cache hit.
@PP
Despite all these advantages, care is still needed, especially over
memory usage.  It is usually a bad idea to have many domain finders,
since each has its own cache and the memory advantages may be lost.
It is much better to share a common domain finder wherever possible.
Accordingly, the various grouping functions presented in the
following subsections take a domain finder as a parameter, allowing
a single domain finder to be created and shared among all of them.
Do-it-yourself solving (Section {@NumberOf general_solvers.yourself})
does this.
@PP
@BI { Dominance testing. }
The task group domain finder is the carrier for another feature
needed by task grouping:  @I { dominance testing } between task
groups.
@PP
Dominance testing reports cases where two task groups @M { g sub 1 }
and @M { g sub 2 } are interchangeable, so that only one needs to be
kept.  Confining our attention here to the resource domain aspect
of task grouping (i.e. ignoring task group duration, cost, etc.),
our aim is to work out whether the domains of the two groups are
such that for every way that @M { g sub 2 } can be extended by the
addition of further tasks, @M { g sub 1 } can also be extended in
that same way, and possibly in other ways.  If that is true then
@M { g sub 1 } is said to dominate @M { g sub 2 } as far as domains go.
@PP
Writing @M { d(g) } for the domain of @M { g }, we can say that
@M { g sub 1 } dominates @M { g sub 2 } when
@M { d( g sub 1 ) supseteq d( g sub 2 ) }.  We call this the
@I { superset test }.  It works because, when it holds, every set
of tasks that can be added to @M { g sub 2 } without reducing its
domain to the empty set can also be added to @M { g sub 1 } without
reducing its domain to the empty set.  Reducing a task group's domain
to the empty set is the only way that adding tasks to a task group
can fail, given that we are confining our attention here to domains.
@PP
The more dominance we can find, the fewer solutions we need to keep,
and the faster our algorithms run.  Surprisingly, we can find cases
of dominance beyond the superset rule.
@PP
Let a @I { task domain } be the domain of an individual task, as
distinguished from the domain of a task group, which is the
intersection of one or more task domains.  Let @M { D } be the
set of all distinct task domains of the tasks whose resource type
is that of @M { g sub 1 } and @M { g sub 2 }.  For example,
@M { D } might be
"{"@C { HeadNurse }, @C { Nurse }, @C { Caretaker }"}".
@PP
Define the @I { compatible set } of a task group @M { g },
@M { c(g) }, to be the set
@ID @Math {
c(g) = lbrace
x in D
`` "|" ``
d(g) cap x != emptyset
rbrace
}
That is, @M { c(g) } is the set of task domains whose intersection with
@M { d(g) } is non-empty.  Then @M { g sub 1 } dominates @M { g sub 2 }
as far as domains go when
@ID @Math {
c( g sub 1 ) supseteq c( g sub 2 )
}
If this condition (called the @I { compatible set test }) holds,
then whenever we add a task to @M { g sub 2 } which does not
cause its domain to become empty, we can add that same task to
@M { g sub 1 } without causing its domain to become empty.
@PP
It is easy to verify that when @M { g sub 1 } dominates @M { g sub 2 }
according to the superset test, it also dominates @M { g sub 2 }
according to the compatible set test.  But there are real cases
where the converse is not true, so that the compatible set test
takes us beyond the superset
test.  For example, suppose that @C { HeadNurse } and @C { Nurse }
have a non-empty intersection, @C { Nurse } and @C { Caretaker }
have a non-empty intersection, but that @C { HeadNurse } and
@C { Caretaker } have an empty intersection and none of the
three domains is a superset of any of the others.  Then given
task groups with domains @C { Nurse } and @C { Caretaker }, the
superset test does not produce any dominance, but the compatible set
test declares that a group with domain @C { Nurse } dominates a
group with domain @C { Caretaker }.
@PP
The task group domain finder supports compatible set dominance
testing, as follows:
@ID @C {
void KheTaskGroupDomainFinderDominanceClear(
  KHE_TASK_GROUP_DOMAIN_FINDER tgdf);
void KheTaskGroupDomainFinderDominanceAddTaskDomain(
  KHE_TASK_GROUP_DOMAIN_FINDER tgdf, KHE_RESOURCE_GROUP rg);
bool KheTaskGroupDomainDominanceTest(KHE_TASK_GROUP_DOMAIN tgd1,
  KHE_TASK_GROUP_DOMAIN tgd2, bool *proper);
}
@C { KheTaskGroupDomainFinderDominanceClear } is used to clear away
all record of previous dominance testing; it needs to be called when a
solver wants to initiate dominance testing.
@C { KheTaskGroupDomainFinderDominanceAddTaskDomain } may be called
any number of times and is used to define the set @M { D } of task
domains.  It is acceptable to pass the same value for @C { rg } more
than once; @C { KheTaskGroupDomainFinderDominanceAddTaskDomain }
silently ignores duplicate values.  @C { KheTaskGroupDomainDominanceTest }
returns @C { true } when @C { tgd1 }'s value dominates @C { tgd2 }'s
value, using the compatible set test.  When it returns @C { true },
it also sets @C { *proper } to @C { true } when the dominance is
proper, i.e. when the two compatible sets are not equal; when it
returns @C { false }, @C { *proper } is not relevant but is set to
@C { false }.  This operation usually runs quickly.
@PP
There is no way to remove a @C { (prev, rg) } pair from the domain
finder after it has been inserted by @C { KheTaskGroupDomainMake },
and there is no way to remove a task domain after it has been inserted
by @C { KheTaskGroupDomainFinderDominanceAddTaskDomain }, except by
calling @C { KheTaskGroupDomainFinderDominanceClear }.  However,
calls to these functions can be arbitrarily interleaved.
@End @SubSection

# @SubSection
#     @Title { The task grouper (old) }
#     @Tag { resource_structural.task_grouping.task_grouper_old }
# @Begin
# @LP
# Different solvers group tasks for different reasons, but it is best
# for the actual grouping to always be done in the same way, as follows.
# The first step is to create a @I { task grouper object } by calling
# @ID @C {
# KHE_TASK_GROUPER KheTaskGrouperMake(KHE_SOLN soln,
#   KHE_FRAME days_frame, HA_ARENA a);
# }
# All parameters must be non-@C { NULL }.  This object remains
# available until arena @C { a } is deleted or recycled.  It can
# be used repeatedly to make many groups, although only one at a
# time.  To begin making a group, call
# @ID @C {
# void KheTaskGrouperClear(KHE_TASK_GROUPER tg);
# }
# This clears away any remnants of previous groups.  To add one
# task to the growing group, call
# @ID @C {
# bool KheTaskGrouperAddTask(KHE_TASK_GROUPER tg, KHE_TASK task);
# }
# If @C { task } is a proper root task, and compatible with the
# tasks already added (concerning which see below), this stores
# @C { task } in @C { tg } and returns @C { true }.  Otherwise it
# stores nothing and returns @C { false }.  Either way, no task
# assignments or moves are made at this stage.
# @PP
# It is also possible to delete the record of a previously stored task,
# by calling
# @ID @C {
# void KheTaskGrouperDeleteTask(KHE_TASK_GROUPER tg, KHE_TASK task);
# }
# However, due to issues with finding leader task domains, only the
# most recently added but not deleted task may be deleted in this way.
# @PP
# When a task is added to a task grouper, some other information is
# stored with it.  The task plus this other information make up type
# @C { KHE_TASK_GROUPER_ENTRY }.  Accordingly, to visit the tasks
# currently present in a task grouper, one has to visit the entries:
# @ID @C {
# int KheTaskGrouperEntryCount(KHE_TASK_GROUPER tg);
# KHE_TASK_GROUPER_ENTRY KheTaskGrouperEntry(KHE_TASK_GROUPER tg, int i);
# }
# These return the number of entries stored in @C { tg } and the
# @C { i }th entry, in the usual way.  Functions for accessing
# the attributes of one entry, including its task, appear below.
# @PP
# Function
# @ID @C {
# bool KheTaskGrouperContainsTask(KHE_TASK_GROUPER tg, KHE_TASK task);
# }
# returns @C { true } if @C { tg } contains an ordinary entry
# containing @C { task }.
# # @PP
# # Function
# # @ID {0.98 1.0} @Scale @C {
# # void KheTaskGrouperCopy(KHE_TASK_GROUPER dst_tg, KHE_TASK_GROUPER src_tg);
# # }
# # copies the contents of @C { src_tg } into @C { dst_tg }.  It is
# # equivalent to clearing @C { dst_tg }, then adding copies of
# # the entries of @C { src_tg } to @C { dst_tg }.  There is no
# # problem with adding a task to two task groupers.  There is
# # a problem if you then call @C { KheTaskGrouperMakeGroup } (see
# # below) in both.
# @PP
# Function
# @ID @C {
# KHE_INTERVAL KheTaskGrouperInterval(KHE_TASK_GROUPER tg);
# }
# returns the interval of @C { days_frame } covered by the tasks
# of @C { tg }.  It is kept up to date as tasks are added
# and deleted; it helps when deciding whether a task can be added.
# # It is in fact the value of @C { KheTaskGrouperEntryInterval }
# # for @C { tg }'s last entry, or an empty interval if there
# # are no entries.
# @PP
# Function
# @ID @C {
# KHE_COST KheTaskGrouperCost(KHE_TASK_GROUPER tg);
# }
# returns the cost of making a group out of the tasks currently present
# in the grouper, without actually doing any grouping.  This cost is
# defined as follows.
# @PP
# Find a resource @C { r } lying in the domain of every task of
# @C { tg }.  Such a resource @C { r } must exist, because (as we'll
# see below) only sets of tasks whose domains have a non-empty
# intersection are accepted.  (If any of the tasks are assigned
# a resource, then set @C { r } to that resource.)  Let @C { in }
# be the smallest interval of days containing every day that the
# tasks of @C { tg } are running, plus (where present) the day
# before their first day and the day after their last day.  Find
# the set @C { S } of all cluster busy times and limit busy times
# monitors that monitor @C { r } during @C { in } but not outside
# @C { in }, and are derived from constraints that monitor every
# resource of @C { r }'s type, as returned by
# @C { KheResourceTimetableMonitorAddInterval }
# (Section {@NumberOf monitoring_timetables_resource}).
# (If any of the tasks are assigned a resource @C { r }, also
# include any avoid unavailable times monitors for @C { r } in
# @C { in }.)  Make sure that @C { r } is free on every day of
# @C { in }.  This could involve unassigning @C { r } from some
# tasks, which in turn could involve unfixing assignments.  Assign
# @C { r } to the tasks of @C { tg }.  Find the total cost of the
# monitors of @C { S } at this point; this is the result.  Finish by
# restoring the initial state (unassigning @C { r } from the tasks of
# @C { tg }, then re-fixing and reassigning other tasks as required).
# @PP
# The somewhat peculiar set of monitors included in the cost aims
# to focus on local things such as complete weekends and unwanted
# patterns.  Omitting global things like total workload makes
# sense because task grouping has nothing to do with global
# constraints.  Omitting limit active intervals monitors is
# wrong if the group violates the maximum limit of such a monitor,
# but in practice groups are never large enough to do this.
# Omitting avoid unavailable times monitors when the group is
# unassigned makes sense because different resources are unavailable
# at different times, and if one resource is unavailable for a
# given group, another resource probably will be available.
# @PP
# Another kind of cost that it might be useful to include is the
# cost (reported by event resource monitors) of assigning or not
# assigning the tasks of @C { tg }.  The caller can easily include
# these costs, by calling @C { KheTaskNonAsstAndAsstCost }
# (Section {@NumberOf resource_structural.mtask_finding.ops})
# and adding them in.
# @PP
# Function
# @ID @C {
# KHE_TASK KheTaskGrouperMakeGroup(KHE_TASK_GROUPER tg,
#   KHE_SOLN_ADJUSTER sa);
# }
# makes one group from the currently stored tasks.  Concretely, it
# chooses a leader task from these stored tasks and assigns the
# other stored tasks to it.  It returns the leader task.  The call to
# @C { KheTaskGrouperMakeGroup } cannot fail, because incompatible
# tasks have already been rejected by @C { KheTaskGrouperAddTask },
# although it will abort if no tasks are stored, and do nothing (correctly)
# if just one is stored.  If @C { sa != NULL }, the changes are saved
# in @C { sa } so that they can be undone later.  The task grouper
# itself does not offer an undo operation.  But @C { sa } can record
# any number of grouping operations, and then undoing @C { sa } will
# undo them all.
# @PP
# @C { KheTaskGrouperMakeGroup } does not clear the grouper.  One can
# call it, evaluate the result, then use @C { sa } to undo the grouping,
# and then carry on just as though @C { KheTaskGrouperMakeGroup } had
# not been called.
# # Together with @C { KheTaskGrouperDeleteTask }
# # this means that a tree search for the best group (in any sense
# # chosen by the caller) is supported.
# # (this undo will be exact unless some tasks of the group are
# # assigned initially and others are unassigned)
# @PP
# Finally,
# @ID @C {
# void KheTaskGrouperDebug(KHE_TASK_GROUPER tg,
#   int verbosity, int indent, FILE *fp);
# }
# produces a debug print of @C { tg } onto @C { fp } with the given
# verbosity and indent.
# @PP
# The task grouper keeps a list of the tasks that have been added, each
# with some associated information.  When memory for this is no longer
# needed (when @C { KheTaskGrouperClear } or @C { KheTaskGrouperDeleteTask }
# is called), it is recycled through a free list in the task grouper.
# So it is much better to re-use one task grouper than to create many.
# @PP
# All this may sound simple, but we now have a long list of issues to
# ponder, to make task grouping robust and able to interact appropriately
# with other solvers.  This is why task groupers are needed:  there is
# a lot more to it than just assigning followers to a leader task.
# # ).  Task grouping is
# # part of structural solving, and so we have to consider what undoing
# # it means, and its interactions with other structural solvers and
# # ordinary solvers.
# # ---all important in practice, because task grouping
# # has many applications and many interactions.
# @PP
# @BI { Acceptable tasks. }
# Earlier we deferred a detailed explanation of what makes a task
# @C { task } acceptable to @C { KheTaskGrouperAddTask }.  We give that
# explanation now.
# @PP
# To begin with, @C { task } must be non-@C { NULL } and must be a
# proper root task (either assigned to a resource or not).  Requiring
# @C { task } to be a proper root task is not absolutely necessary,
# but it is a useful sanity measure (do we really want to group a
# task that is already in a group that it is not the leader task of?),
# and it makes @C { KheTaskGrouperCost } easier to understand.
# @PP
# @C { KheTaskGrouperAddTask } aborts when this first condition does not
# hold.  The remaining conditions merely cause @C { KheTaskGrouperAddTask }
# to return @C { false } when they do not hold:
# @NumberedList
# 
# @LI @OneRow {
# # @C { task }'s domain must be non-empty (so that
# # @C { KheTaskGrouperCost } can be implemented).
# # }
# 
# @LI @OneRow {
# The conditions @C { KheAsstResource(task) == NULL } and
# @C { KheTaskAssignIsFixed(task) } cannot both be true.  Such an
# occurrence, called a @I { fixed non-assignment }, would prevent
# a group containing @C { task } from being assigned a resource later.
# # After adding @C { task }, there cannot be one task with an assigned
# # resource and another task with a fixed non-assignment.  We cannot
# # preserve both conditions after the tasks are grouped.
# }
# 
# @LI @OneRow {
# The interval of days that @C { task } is running must be disjoint
# from the interval of days that the other tasks of the group (taken
# together) are running.  Among other things, this prevents the same
# task from being added to the group twice.
# # If @C { task } is the first task added to the group, that's all.
# # The remaining conditions apply when @C { task } is not the first task.
# # @C { task } must not be already in the group.
# }
# 
# @LI @OneRow {
# The intersection of the domain of @C { task } and the other tasks
# must be non-empty.  Without this we could not assign a resource to the
# group later on, and @C { KheTaskGrouperCost } could not be implemented.
# # @C { KheTaskGrouperAddTask } must be able to find a leader
# # task for the group including @C { task }.  We'll explain what that
# # involves in a moment.
# }
# 
# @LI @OneRow {
# If @C { task } is assigned a resource, there must be no
# other task assigned a different resource.  The other tasks may
# be unassigned, or assigned the same resource, but it is not
# possible to group two tasks that are assigned different resources.
# }
# 
# @LI @OneRow {
# If @C { task } is assigned a resource, then the intersection of
# the domains must include that resource.  Otherwise we could not
# preserve this assignment after the tasks are grouped.
# }
# 
# @LI @OneRow {
# Adding @C { task } must not give rise to @I { interference }:  a
# situation where two tasks assigned the same resource are running
# on the same day.  Interference is explained in detail below.
# }
# 
# @EndList
# In practice, these conditions rarely fail, but users must be
# prepared for them to do so.
# @PP
# @BI { Finding a leader task and leader task domain. }
# The next problem is to find a suitable domain for the group:
# a @I { leader task domain }.  As mentioned earlier, this is the
# intersection of the domains of the tasks.  We do this
# each time a task is added, using a task group domain finder
# (Section {@NumberOf resource_structural.task_grouping.domains}).
# This ensures that time and memory is not wasted calculating
# domains over and over.  If this intersection turns out to be
# empty, then @C { KheTaskGrouperAddTask } returns @C { false }.
# # We prefer to
# # choose a task to be leader to which every other stored task can be
# # moved.  This means that the domain of the chosen leader task must
# # be a subset of the domain of every stored task.  There is usually
# # such a task, but if not, then we use a resource intersector
# # (Section {@NumberOf resource_structural.task_grouping.intersect}) to
# # build a domain which is the intersection of the current leader domain
# # and the domain of the incoming task.  If this @I { leader domain }
# # is empty then @C { KheTaskGrouperAddTask } returns @C { false }.
# # Otherwise we accept the task into the growing group, on the
# # understanding that the leader task's domain will have to be reduced
# # to the leader domain when the group is actually made.
# # @PP
# # Building a new domain is carried out by calls to
# # @C { KheSolnResourceGroupBegin } and similar functions, documented
# # in Section {@NumberOf solutions.groups}.  As explained there, these
# # resource groups are cached, so that when the same resource group is
# # built more than once, the same object is returned.  This means that,
# # in practice, memory is consumed by only a few newly created resource
# # groups.  And the fact that even this is only tried when resource
# # groups are not subsets means that building resource groups does not
# # dominate the running time.
# @PP
# We find a leader task only when we come to actually carry out
# the grouping.  The leader task domain is already known, as
# we have just seen.  The chosen leader task is any task from
# the group whose domain has minimum size.  If its domain is
# not already equal to the leader task domain it is reduced
# to that domain by the addition of a suitable task bound object.
# @PP
# For the record, here is a check of the conditions imposed by
# @C { KheTaskMoveCheck }, which every task @C { t } moved to the
# chosen leader task must satisfy.  First, @C { t }'s assignment
# cannot be fixed.  We will be circumventing this, by unfixing
# beforehand and re-fixing afterwards, as explained below.  Second,
# @C { t } must not be a cycle task.  @C { KheTaskGrouperAddTask }
# aborts in this case (a cycle task is never a proper root task).
# Third, the move must change the assignment.  This holds because
# @C { t } is a proper root task.  Fourth and last, the domain of
# @C { t } must be a superset of the domain of the leader task.
# We've just explained how we handle that.  So the move must succeed.
# @PP
# @BI { Undoing a grouping. }
# Suppose that the stored tasks are unassigned initially.  A
# structural solver groups them by assigning the followers to the
# chosen leader task, then an ordinary solver assigns a resource to
# the leader task, and then we need to undo the grouping.  An exact
# undo would unassign the follower tasks, since they were unassigned
# initially; but that is quite wrong.  In fact, the follower tasks'
# assignments are moved from the leader task to whatever the leader
# task is assigned to at the time of the undo.  We see here that an
# overly literal interpretation of undo fails to capture the true
# meaning, which is that a previously imposed requirement has to be
# removed, without disturbing other requirements.  Function
# @C { KheSolnAdjusterTaskGroup }
# (Section {@NumberOf general_solvers.adjust.adjuster}) is offered
# by the solution adjuster module to support this kind of undo.
# @PP
# @BI { Tasks which are leaders of their own groups. }
# A stored task could be the leader task of a previously created
# group.  This is not a problem, because task grouping concerns the
# task's relationship with its parent, not its children.  If the
# task is chosen to be the leader task of the new group, its
# domain may be reduced to a subset of its initial value (always
# legal for a proper root task), and its children will be partly
# from the old group and partly from the new group.  When 
# @C { sa } removes the group, it unassigns only the children
# from the new group, not all the children.
# @PP
# @BI { Assigned tasks. }
# All accepted tasks are proper root tasks, which means that
# each is either unassigned or assigned directly to a resource.
# @PP
# It would be easy if we could disallow assigned tasks, but
# we can't, because there is an application where that would
# pose a major problem:  interval grouping, where the assigned
# tasks come from assign by history.  Instead, as we know, the
# rule is that assigned tasks are permitted provided they are
# assigned to the same resource @M { r }.  To implement this, if
# the chosen leader task @M { l } is assigned to @M { r }, we
# move the others to @M { l }.  Otherwise @M { l } must be
# unassigned, so we assign @M { l } to @M { r } and move
# the others to @M { l }.  Either way, every task is now assigned
# to @M { r }, albeit indirectly (via @M { l }).
# @PP
# @BI { Interference. }
# When several tasks are grouped, some of which are assigned a
# resource @M { r } and some of which are not, an obscure problem
# can arise.  Suppose that we group two tasks, @M { t sub 1 } and
# @M { t sub 2 }, and that @M { t sub 1 } is initially assigned
# resource @M { r } and @M { t sub 2 } is initially unassigned.
# Then the grouping effectively assigns @M { r } to @M { t sub 2 }.
# If there is some other task assigned @M { r } which is running
# on any of the days that @M { t sub 2 } is running, this will
# mean that @M { r } has to attend two tasks on the same day, which
# is not allowed.  We say that the other task @I interferes with
# the grouping of @M { t sub 1 } with @M { t sub 2 }.  The task
# grouper rejects all tasks whose addition to the group would
# cause interference.
# @PP
# @BI { Fixed task assignments. }
# A task assignment may be @I { fixed }, meaning that it may not be
# changed.  Interpreted literally, a task with a fixed assignment
# cannot participate in task grouping unless it is chosen to be the
# leader task.  But we will view task fixing as a logical requirement
# that does not necessarily prevent a task from being grouped.
# @PP
# First, suppose that the task @M { t } whose assignment is fixed
# is assigned to a resource @M { r }.  Then if we ignore the fixing,
# in the grouped task @M { t } will either keep its assignment (if it
# is chosen to be the leader task) or else it will be assigned to
# the leader task and the leader task will be assigned to @M { r }.
# We regard this as acceptable for a fixed @M { t }, because @M { t }
# is still assigned to @M { r }, indirectly.  So when building the
# group, if @M { t } is not the leader task, we unfix it, move
# it to the leader task, and re-fix it.
# @PP
# In the grouped state, the assignment of @M { t } to the leader
# task could equally well be fixed or not fixed.  It does not
# matter, because no-one is going to change it until the time
# comes to undo it.  But we prefer to fix it.  What does matter
# is that the assignment of the leader task to @M { r } must be
# fixed, otherwise some ordinary solver could change it and thus
# violate the fix on @M { t }.
# @PP
# Second, suppose that the task @M { t } whose assignment is
# fixed is unassigned.  We interpret this as saying that @M { t }
# may not be assigned.  Once again, we need to fix the assignment
# of the leader task, but now we require that the leader task be
# unassigned, since otherwise we have violated the fix on @M { t }.
# So @M { t } cannot share a group with an assigned task, and we
# have the sixth condition above.
# @PP
# Undoing the grouping of a task whose assignment is initially fixed
# is straightforward.  Unfix the task's assignment, move it to the
# leader task's parent, and fix that assignment.
# @PP
# @BI { Summary of the task grouping algorithm. }
# Given a set of tasks which have passed the checks made by
# @C { KheTaskGrouperAddTask }, together with three values already
# calculated (the leader task, its domain, and any assigned resource
# @M { r }), the actual grouping is done as follows.
# @PP
# First, choose a leader task---any task whose domain has minimum size.
# Second, reduce the leader task's domain if required.  Third, move
# every task except the leader task to the leader task.  Fixed tasks are
# unfixed before their move and re-fixed after it.  Fourth, if there
# is an @M { r } and the leader task is not currently assigned to it,
# assign it to @M { r }.  (If this move is needed, then the leader
# task is not fixed.  This is because the only way that its assignment
# can differ from @M { r } is for its assignment to be @C { NULL } and
# @M { r } to be non-@C { NULL }; and in that case, if it was fixed it
# would be a fixed unassigned task which was being grouped with an
# assigned task, which is not allowed.)  Fifth and last, if the leader
# task has at least one fixed follower (which we determine as we move the
# followers), and its assignment is not fixed, then fix its assignment.
# @PP
# Undoing is not exact, but we can approximate it by carrying out
# in reverse order the reverse of each step above, and then adjust
# the algorithm we get.  This produces the following.  A record of
# what happened during grouping is held in @C { sa }; this undo
# algorithm relies on that record.  There is not enough information
# in the tasks themselves to determine what to do.
# @PP
# First, if the leader task was fixed during grouping, unfix it.
# Second, irrespective of whether the leader task was moved to a
# resource, its assignment after the undo has to be its assignment
# at the time of the undo, so do nothing.  Third, move every follower
# task from the leader task to the leader task's assignment at the
# time of the undo (possibly @C { NULL }).  If the follower task is
# fixed, unfix it before the move and re-fix it afterwards.  Fourth
# and last, restore the leader task's original domain, if required.
# @PP
# @BI { History entries. }
# A task grouper may hold three types of entries:  @I { ordinary entries },
# @I { history entries }, and @I { dummy entries }.  The description so
# far has been entirely about ordinary entries.  We now come to
# history entries and dummy entries.
# @PP
# History entries model history when dealing with resource constraints.
# For example, suppose some limit active intervals constraint places
# lower limit 4 and upper limit 5 on the number of consecutive night
# shifts that a resource can be assigned to.  When building groups we
# can expect to mainly build groups of length 4 or 5.  But if some
# resource (Smith, say) has history value 2, then Smith will want a
# group of size 2 or 3 at the start of the cycle.  We express this by
# means of a history entry of length 2 to which the other 2 or 3 tasks
# can be added as ordinary entries.
# @PP
# To add a history entry, the call is
# @ID @C {
# void KheTaskGrouperAddHistoryEntry(KHE_TASK_GROUPER tg,
#   KHE_RESOURCE r, int durn);
# }
# This adds a history entry to @C { tg }, assigned @C { r }, of duration
# @C { durn }.  It must be the first entry in @C { tg } when it is
# present at all.  Its interval is @M { [ minus d , minus 1 ] } where
# @M { d } is @C { durn }.
# @PP
# A history entry contains no task so does not participate directly
# in grouping.  What it does do is influence what tasks can be added
# to the task grouper after it:  tasks that are either assigned
# @C { r } or could be assigned @C { r }, basically.  In this respect
# it is like an ordinary entry whose task is assigned @C { r }.  Also,
# its interval is included in @C { KheTaskGrouperInterval }.
# @PP
# @BI { Dummy entries. }
# Dummy entries do not change the value that a task grouper represents.
# Unlike history entries, they may appear anywhere.  To add a dummy
# entry, the call is
# @ID @C {
# void KheTaskGrouperAddDummyEntry(KHE_TASK_GROUPER tg);
# }
# These apparently useless entries are used by interval grouping.
# There has to be a previous entry which is either an ordinary entry
# or another dummy.  This entry's task equals that entry's task.
# @PP
# @BI { Another interface to task grouping. }
# There is another way to access task grouping.  It offers the same
# semantics; indeed, behind the scenes it runs the same code.  It is
# less easy to use, but for certain applications (interval grouping,
# for example) it can save a lot of time and memory.
# @PP
# This interface bypasses @C { KHE_TASK_GROUPER }; it uses only
# @C { KHE_TASK_GROUPER_ENTRY }.  An entry holds one task of the
# growing group, some information about the group, and a pointer
# to the previous entry, holding the previous task and information.
# This pointer will be @C { NULL } in the entry holding the first
# task.  This makes a singly linked list of tasks and information,
# independent of any task grouper object, accessed from the last
# (most recently added) entry.
# @PP
# The advantage of the linked structure is that if we are trying
# two sequences of tasks, @M { angleleft a, b, c angleright } and
# @M { angleleft a, b, d angleright }, then the first part of the
# two sequences, @M { angleleft a, b angleright }, can be shared.
# This is where the time and memory savings are made.
# @C { KheTaskGrouperDeleteTask } offers analogous savings
# (delete @M { c } then add @M { d }), but it does not allow
# the two proto-groups to exist simultaneously.
# @PP
# Here are the main functions that make up this interface to task grouping:
# @ID @C {
# bool KheTaskGrouperEntryAddTask(KHE_TASK_GROUPER_ENTRY prev,
#   KHE_TASK task, KHE_FRAME days_frame,
#   KHE_TASK_GROUP_DOMAIN_FINDER tgdf, KHE_TASK_GROUPER_ENTRY next);
# bool KheTaskGrouperEntryAddTaskUnchecked(KHE_TASK_GROUPER_ENTRY prev,
#   KHE_TASK task, KHE_FRAME days_frame,
#   KHE_TASK_GROUP_DOMAIN_FINDER tgdf, KHE_TASK_GROUPER_ENTRY next);
# KHE_COST KheTaskGrouperEntryCost(KHE_TASK_GROUPER_ENTRY last,
#   KHE_FRAME days_frame, KHE_SOLN soln);
# KHE_TASK KheTaskGrouperEntryMakeGroup(KHE_TASK_GROUPER_ENTRY last,
#   KHE_SOLN_ADJUSTER sa);
# }
# # void KheTaskGrouperEntryCopy(KHE_TASK_GROUPER_ENTRY dst_last,
# #   KHE_TASK_GROUPER_ENTRY src_last);
# @C { KheTaskGrouperEntryAddTask } is semantically the same as
# @C { KheTaskGrouperAddTask }, but here the previously added tasks
# are represented by @C { prev }.  This will be @C { NULL } when
# @C { task } is the first task.  The result of the addition (if
# @C { true } is returned) is represented by @C { next }, which
# will contain @C { task } and related information.  Notice
# that @C { days_frame } and @C { tgdf } helper objects must
# be supplied explicitly.  @C { KheTaskGrouperAddTask } gets
# these objects from the task grouper object.
# @PP
# This form of task grouping does not allocate any memory.
# The memory pointed to by @C { prev } (if non-@C { NULL }) and
# @C { next } (always non-@C { NULL }) must be allocated by
# the caller, using code such as
# @ID @C {
# struct khe_task_grouper_entry_rec new_entry_rec;
# KheTaskGrouperEntryAddTask(prev, task, days_frame, tgdf, &new_entry_rec);
# }
# Here @C { struct khe_task_grouper_entry_rec } is the struct that
# @C { KHE_TASK_GROUPER_ENTRY } points to; it is defined (with
# its fields) in @C { khe_solvers.h } alongside
# @C { KHE_TASK_GROUPER_ENTRY }.  @C { KheTaskGrouperEntryAddTask }
# overwrites the memory pointed to by @C { next }.
# @PP
# Any number of calls to @C { KheTaskGrouperEntryAddTask } with
# the same @C { prev } may be made.  This is how sequences come
# to share subsequences, as described above.  A group is defined
# by its last entry.  There is no ambiguity, because there is
# only one path going backwards.
# @PP
# @C { KheTaskGrouperEntryAddTaskUnchecked } is the same as
# @C { KheTaskGrouperEntryAddTask } except that it omits
# the checks and always returns @C { true }.  The author uses
# it only in debug code for interval grouping, where the checks
# are not wanted, as it turns out.  It is better avoided.
# @PP
# @C { KheTaskGrouperEntryCopy } copies the record pointed to by
# @C { src_last } to the record pointed to by @C { dst_last }.
# @C { KheTaskGrouperEntryCost } and @C { KheTaskGrouperEntryMakeGroup }
# are semantically the same as @C { KheTaskGrouperCost } and
# @C { KheTaskGrouperMakeGroup }, but here the tasks are the task stored
# in @C { last }, the task stored in its predecessor entry, and so on.
# @PP
# History entries can be added in a similar way:
# @ID @C {
# void KheTaskGrouperEntryAddHistory(KHE_TASK_GROUPER_ENTRY prev,
#   KHE_RESOURCE r, int durn, KHE_TASK_GROUP_DOMAIN_FINDER tgdf,
#   KHE_TASK_GROUPER_ENTRY next);
# }
# # void KheTaskGrouperEntryAddDummy(KHE_TASK_GROUPER_ENTRY prev,
# #   KHE_TASK_GROUPER_ENTRY next);
# Like @C { KheTaskGrouperEntryAddTask }, this adds @C { next } as a
# successor to @C { prev }, but here the new entry is a history or
# dummy entry, as described above.  A history entry must always
# come first in its group, so the @C { prev } argument of
# @C { KheTaskGrouperEntryAddHistory } must be @C { NULL }.
# @PP
# To access the attributes of one entry, the calls are
# @ID @C {
# KHE_TASK KheTaskGrouperEntryTask(KHE_TASK_GROUPER_ENTRY tge);
# KHE_INTERVAL KheTaskGrouperEntryInterval(KHE_TASK_GROUPER_ENTRY tge);
# KHE_TASK_GROUPER_ENTRY KheTaskGrouperEntryPrev(
#   KHE_TASK_GROUPER_ENTRY tge);
# KHE_TASK_GROUP_DOMAIN KheTaskGrouperEntryDomain(
#   KHE_TASK_GROUPER_ENTRY tge);
# }
# # KHE_TASK_GROUPER_ENTRY_TYPE KheTaskGrouperEntryType(
# #   KHE_TASK_GROUPER_ENTRY tge);
# # @C { KheTaskGrouperEntryType } returns a value of type
# # @ID @C {
# # typedef enum {
# #   KHE_TASK_GROUPER_ENTRY_ORDINARY,
# #   KHE_TASK_GROUPER_ENTRY_HISTORY,
# #   KHE_TASK_GROUPER_ENTRY_DUMMY
# # } KHE_TASK_GROUPER_ENTRY_TYPE;
# # }
# # saying whether the entry is an ordinary, history, or dummy entry.
# @C { KheTaskGrouperEntryTask } returns @C { tge }'s task;
# @C { KheTaskGrouperEntryInterval } returns the interval of
# days spanned by @C { tge } plus its predecessors;
# @C { KheTaskGrouperEntryPrev } returns @C { tge }'s predecessor,
# or @C { NULL } if @C { tge } is the first entry; and
# @C { KheTaskGrouperEntryDomain } returns the domain of @C { tge },
# a value of type @C { KHE_TASK_GROUP_DOMAIN } as defined in
# Section {@NumberOf resource_structural.task_grouping.domains}.
# @PP
# For example, this code visits the tasks of a group whose last
# element is @C { last }:
# @ID @C {
# for( e = last;  e != NULL;  e = KheTaskGrouperEntryPrev(e) )
#   if( KheTaskGrouperEntryType(e) == KHE_TASK_GROUPER_ENTRY_ORDINARY )
#   {
#     task = KheTaskGrouperEntryTask(e);
#     ... visit task ...
#   }
# }
# It takes care not to visit history and dummy entries.
# @End @SubSection

@SubSection
    @Title { The task grouper }
    @Tag { resource_structural.task_grouping.task_grouper }
@Begin
@LP
Different solvers group tasks for different reasons, but it is best
for the actual grouping to always be done in the same way, as follows.
The first step is to create a @I { task grouper object } by calling
@ID @C {
KHE_TASK_GROUPER KheTaskGrouperMake(KHE_SOLN soln,
  KHE_FRAME days_frame, KHE_TASK_GROUP_DOMAIN_FINDER tgdf, HA_ARENA a);
}
All parameters must be non-@C { NULL }.  A domain finder may already
be on hand; if not, one can be made by calling
@C { KheTaskGroupDomainFinderMake } from
Section {@NumberOf resource_structural.task_grouping.domains}.
The new task grouper remains available until arena @C { a } is
deleted or recycled.
# @PP
# There are two ways to use a task grouper to make groups, which
# we call @I { internal } and @I { external }.  We'll start with
# the internal way.
@PP
The task grouper allows one task group to be stored within itself.
This task group, which we call its @I { internal task group },
can be constructed using the following functions:
@ID {0.90 1.0} @Scale @C {
void KheTaskGrouperClear(KHE_TASK_GROUPER tg);
bool KheTaskGrouperAddTaskCheck(KHE_TASK_GROUPER tg, KHE_TASK task);
bool KheTaskGrouperAddTask(KHE_TASK_GROUPER tg, KHE_TASK task);
void KheTaskGrouperDeleteTask(KHE_TASK_GROUPER tg, KHE_TASK task);
void KheTaskGrouperAddHistory(KHE_TASK_GROUPER tg, KHE_RESOURCE r, int durn);
void KheTaskGrouperDeleteHistory(KHE_TASK_GROUPER tg, KHE_RESOURCE r, int durn);
}
@C { KheTaskGrouperClear } clears away any existing internal task
group, ready for a fresh start.  @C { KheTaskGrouperAddTaskCheck }
returns @C { true } when @C { task } could be added to the current
internal task group:  that is, when it is a proper root task and is
compatible with the tasks already added (concerning which see below).
@C { KheTaskGrouperAddTask } does the same check, but if that
is successful it actually adds @C { task } to the current internal
task group.  Otherwise it adds nothing and returns @C { false }.
In any case, no task assignments or moves are made at this stage.
@PP
@C { KheTaskGrouperDeleteTask } undoes a previous call to
@C { KheTaskGrouperAddTask } which succeeded.  Due to issues with
finding leader task domains behind the scenes, only the most
recently added but not deleted task may be
deleted in this way.  That is, last-in-first-out order is required.
@PP
A task group can also contain @I { history }, which is a record of
what some resource did before the current cycle began.  For example,
suppose some limit active intervals constraint places lower limit
4 and upper limit 5 on the number of consecutive night shifts that
a resource can be assigned to.  When building groups we can expect
to mainly build groups of length 4 or 5.  But if some resource
(Smith, say) has history value 2, then Smith will want a group of
size 2 or 3 at the start of the cycle.  We express this by building
a group that includes history with duration 2, assigned Smith, to
which the other 2 or 3 tasks can be added.
@PP
@C { KheTaskGrouperAddHistory } adds history with assigned resource
@C { r } and duration @C { durn } to @C { tg }'s internal task group.
History must be added first when it is present at all.
@C { KheTaskGrouperDeleteHistory } deletes previously added history.
# Its interval is @M { [ minus d , minus 1 ] } where
# @M { d } is @C { durn }.
@PP
History contains no task, so it does not participate directly in grouping.
What it does do is influence what tasks can be added to the task
grouper after it:  tasks that are either assigned @C { r } or could
be assigned @C { r }.  In this respect it is like a task which is
assigned @C { r }.
# Also, its interval is included in @C { KheTaskGrouperInterval }.
@PP
To visit the tasks of the internal task group in the order they
were inserted, call
@ID @C {
int KheTaskGrouperTaskCount(KHE_TASK_GROUPER tg);
KHE_TASK KheTaskGrouperTask(KHE_TASK_GROUPER tg, int i);
}
as usual.  If history has been added, then @C { KheTaskGrouperTask(tg, 0) }
will return @C { NULL }.
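@PP
For example, this sketch, which follows the usual pattern, visits
each task while skipping the @C { NULL } history slot when present:
@ID @C {
for( i = 0;  i < KheTaskGrouperTaskCount(tg);  i++ )
{
  task = KheTaskGrouperTask(tg, i);
  if( task != NULL )
    ... visit task ...
}
}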
@PP
Function
@ID @C {
KHE_COST KheTaskGrouperCost(KHE_TASK_GROUPER tg);
}
returns the cost of making a group out of the tasks currently
present in @C { tg }'s internal task group, without actually
doing any grouping.  This cost is defined as follows.
@PP
Find a resource @C { r } lying in the domain of every task of
@C { tg }.  Such a resource @C { r } must exist, because (as we'll
see below) only sets of tasks whose domains have a non-empty
intersection are accepted.  (If any of the tasks are assigned
a resource, or if there is history, then set @C { r } to that
resource.)  Let @C { in } be the smallest interval of days
containing every day that the tasks of @C { tg } are running,
plus (where present) the day before their first day and the day
after their last day.  Find the set @C { S } of all cluster busy
times and limit busy times monitors that monitor @C { r } during
@C { in } but not outside @C { in }, and are derived from
constraints that monitor every resource of @C { r }'s type, as
returned by @C { KheResourceTimetableMonitorAddInterval }
(Section {@NumberOf monitoring_timetables_resource}).
(If any of the tasks are assigned a resource @C { r }, also
include any avoid unavailable times monitors for @C { r } in
@C { in }.)  Make sure that @C { r } is free on every day of
@C { in }.  This could involve unassigning @C { r } from some
tasks, which in turn could involve unfixing assignments.  Assign
@C { r } to the tasks of @C { tg }.  Find the total cost of the
monitors of @C { S } at this point; this is the result.  Finish by
restoring the initial state (unassigning @C { r } from the tasks of
@C { tg }, then re-fixing and reassigning other tasks as required).
@PP
The monitors included in the cost put the focus on local things
such as complete weekends and unwanted
patterns.  Omitting global things like total workload makes
sense because task grouping has nothing to do with global
constraints.  Omitting limit active intervals monitors is
wrong if the group violates the maximum limit of such a monitor,
but in practice groups are never large enough to do this.
Omitting avoid unavailable times monitors when the group is
unassigned makes sense because different resources are unavailable
at different times, and if one resource is unavailable for a
given group, another resource probably will be available.
@PP
Another kind of cost that it might be useful to include is the
cost (reported by event resource monitors) of assigning or not
assigning the tasks of @C { tg }.  The caller can easily include
these costs, by calling @C { KheTaskNonAsstAndAsstCost }
(Section {@NumberOf resource_structural.mtask_finding.ops})
and adding them in.
@PP
Function
@ID @C {
KHE_TASK KheTaskGrouperMakeGroup(KHE_TASK_GROUPER tg,
  KHE_SOLN_ADJUSTER sa);
}
carries out the task assignments that build a group from the
tasks of @C { tg }'s internal task group:  it chooses a leader
task from these stored tasks and assigns the other stored tasks
to it.  It returns the leader task.  @C { KheTaskGrouperMakeGroup }
cannot fail, because incompatible tasks have already been rejected
by @C { KheTaskGrouperAddTask }, although it will abort if no tasks
are stored, and do nothing (correctly) if just one is stored.  If
@C { sa != NULL }, the changes are saved in @C { sa } so that they
can be undone later.  The task grouper itself does not offer an
undo operation.  But @C { sa } can record any number of grouping
operations, and then undoing @C { sa } will undo them all.
@PP
@C { KheTaskGrouperMakeGroup } does not clear the grouper.  One can
call it, evaluate the result, then use @C { sa } to undo the grouping,
and then carry on just as though @C { KheTaskGrouperMakeGroup } had
not been called.
# Together with @C { KheTaskGrouperDeleteTask }
# this means that a tree search for the best group (in any sense
# chosen by the caller) is supported.
# (this undo will be exact unless some tasks of the group are
# assigned initially and others are unassigned)
@PP
Finally,
@ID @C {
void KheTaskGrouperDebug(KHE_TASK_GROUPER tg,
  int verbosity, int indent, FILE *fp);
}
produces a debug print of @C { tg } onto @C { fp } with the given
verbosity and indent.
@PP
The task grouper keeps a list of the tasks that have been added, each
with some associated information.  When memory for this is no longer
needed (when @C { KheTaskGrouperClear } or @C { KheTaskGrouperDeleteTask }
is called), it is recycled through a free list in the task grouper.
So it is much better to re-use one task grouper than to create many.
@PP
All this may sound simple, but we now have a long list of issues to
ponder, to make task grouping robust and able to interact appropriately
with other solvers.  This is why task groupers are needed:  there is
a lot more to it than just assigning followers to a leader task.
# ).  Task grouping is
# part of structural solving, and so we have to consider what undoing
# it means, and its interactions with other structural solvers and
# ordinary solvers.
# ---all important in practice, because task grouping
# has many applications and many interactions.
@PP
@BI { Acceptable tasks. }
Earlier we deferred a detailed explanation of what makes a task
@C { task } acceptable to @C { KheTaskGrouperAddTask }.  We give that
explanation now.
@PP
To begin with, @C { task } must be non-@C { NULL } and must be a
proper root task (either assigned to a resource or not).  Requiring
@C { task } to be a proper root task is not absolutely necessary,
but it is a useful sanity measure (do we really want to group a
task that already lies in some group without being its leader?),
and it makes @C { KheTaskGrouperCost } easier to understand.
@PP
@C { KheTaskGrouperAddTask } aborts when this first condition does not
hold.  The remaining conditions merely cause @C { KheTaskGrouperAddTask }
to return @C { false } when they do not hold:
@NumberedList

@LI @OneRow {
@C { task }'s domain must be non-empty (so that
@C { KheTaskGrouperCost } can be implemented).
}

@LI @OneRow {
The conditions @C { KheAsstResource(task) == NULL } and
@C { KheTaskAssignIsFixed(task) } cannot both be true.  Such an
occurrence, called a @I { fixed non-assignment }, would prevent
a group containing @C { task } from being assigned a resource later.
# After adding @C { task }, there cannot be one task with an assigned
# resource and another task with a fixed non-assignment.  We cannot
# preserve both conditions after the tasks are grouped.
}

@LI @OneRow {
The interval of days that @C { task } is running must be disjoint
from the interval of days that the other tasks of the group (taken
together) are running.  Among other things, this prevents the same
task from being added to the group twice.  Because of this rule,
adding tasks out of chronological order (or reverse chronological
order) is probably a bad idea.
# If @C { task } is the first task added to the group, that's all.
# The remaining conditions apply when @C { task } is not the first task.
# @C { task } must not be already in the group.
}

@LI @OneRow {
The intersection of the domain of @C { task } and the other tasks
must be non-empty.  Without this we could not assign a resource to the
group later on, and @C { KheTaskGrouperCost } could not be implemented.
# @C { KheTaskGrouperAddTask } must be able to find a leader
# task for the group including @C { task }.  We'll explain what that
# involves in a moment.
}

@LI @OneRow {
If @C { task } is assigned a resource, there must be no other task
or history assigned a different resource.  The other tasks may be
unassigned, or assigned the same resource, but it is not possible
to group two tasks that are assigned different resources.
}

@LI @OneRow {
If @C { task }, or any other task or history, is assigned a resource,
then the intersection of the domains must include that resource.
Otherwise we could not preserve this assignment after the tasks
are grouped.
}

@LI @OneRow {
Adding @C { task } must not give rise to @I { interference }:  a
situation where two tasks assigned the same resource are running
on the same day.  Interference is explained in detail below.
}

@EndList
These conditions rarely fail, but users must be prepared for them
to do so.  When @C { task } is the first task added, only conditions
(1) and (2) can possibly fail.
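@PP
Conditions (3) and (4) come down to simple interval and set tests.
This sketch shows them; @C { task_in } and @C { group_in } are the
intervals of days, and @C { DomainIntersect } and @C { DomainIsEmpty }
are hypothetical helpers, not KHE operations:
@ID @C {
if( task_in.last < group_in.first )
  disjoint = true;
else if( group_in.last < task_in.first )
  disjoint = true;
else
  disjoint = false;
acceptable = disjoint;
if( acceptable )
  acceptable = !DomainIsEmpty(DomainIntersect(task_dom, group_dom));
}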
@PP
@BI { Finding a leader task domain and leader task. }
The next problem is to find a suitable domain for the group:
a @I { leader task domain }.  As mentioned earlier, this is the
intersection of the domains of the tasks.  We do this
each time a task is added, using the task group domain finder
(Section {@NumberOf resource_structural.task_grouping.domains})
passed to @C { KheTaskGrouperMake }.  The domain finder ensures
that time and memory are not wasted calculating the same domains
over and over.  If this intersection turns out to be empty, then
@C { KheTaskGrouperAddTask } returns @C { false }.
# We prefer to
# choose a task to be leader to which every other stored task can be
# moved.  This means that the domain of the chosen leader task must
# be a subset of the domain of every stored task.  There is usually
# such a task, but if not, then we use a resource intersector
# (Section {@NumberOf resource_structural.task_grouping.intersect}) to
# build a domain which is the intersection of the current leader domain
# and the domain of the incoming task.  If this @I { leader domain }
# is empty then @C { KheTaskGrouperAddTask } returns @C { false }.
# Otherwise we accept the task into the growing group, on the
# understanding that the leader task's domain will have to be reduced
# to the leader domain when the group is actually made.
# @PP
# Building a new domain is carried out by calls to
# @C { KheSolnResourceGroupBegin } and similar functions, documented
# in Section {@NumberOf solutions.groups}.  As explained there, these
# resource groups are cached, so that when the same resource group is
# built more than once, the same object is returned.  This means that,
# in practice, memory is consumed by only a few newly created resource
# groups.  And the fact that even this is only tried when resource
# groups are not subsets means that building resource groups does not
# dominate the running time.
@PP
We find a leader task only when we come to actually carry out
the grouping.  The leader task domain is already known, as
we have just seen.  The chosen leader task is any task from
the group whose domain has minimum size.  If its domain is
not already equal to the leader task domain it is reduced
to that domain by the addition of a suitable task bound object.
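@PP
Choosing the leader task is then a minimum search over the stored
tasks.  In this sketch, @C { DomainSize } is a hypothetical helper
returning the number of resources in a task's domain:
@ID @C {
leader = NULL;
for( i = 0;  i < KheTaskGrouperTaskCount(tg);  i++ )
{
  task = KheTaskGrouperTask(tg, i);
  if( task != NULL )
  {
    if( leader == NULL )
      leader = task;
    else if( DomainSize(task) < DomainSize(leader) )
      leader = task;
  }
}
}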
@PP
For the record, here is a check of the conditions imposed by
@C { KheTaskMoveCheck }, which every task @C { t } moved to the
chosen leader task must satisfy.  First, @C { t }'s assignment
cannot be fixed.  We will be circumventing this by unfixing
beforehand and re-fixing afterwards, as explained below.  Second,
@C { t } must not be a cycle task.  @C { KheTaskGrouperAddTask }
aborts in this case (a cycle task is never a proper root task).
Third, the move must change the assignment.  This holds because
@C { t } is a proper root task.  Fourth and last, the domain of
@C { t } must be a superset of the domain of the leader task.
We've just explained how we handle that.  So the move must succeed.
@PP
@BI { Undoing a grouping. }
Suppose that the stored tasks are unassigned initially.  A
structural solver groups them by assigning the followers to the
chosen leader task, then an ordinary solver assigns a resource to
the leader task, and then we need to undo the grouping.  An exact
undo would unassign the follower tasks, since they were unassigned
initially; but that is quite wrong.  In fact, the follower tasks'
assignments are moved from the leader task to whatever the leader
task is assigned to at the time of the undo.  We see here that an
overly literal interpretation of undo fails to capture the true
meaning, which is that a previously imposed requirement has to be
removed, without disturbing other requirements.  Function
@C { KheSolnAdjusterTaskGroup }
(Section {@NumberOf general_solvers.adjust.adjuster}) is offered
by the solution adjuster module to support this kind of undo.
@PP
@BI { Tasks which are leaders of their own groups. }
A stored task could be the leader task of a previously created
group.  This is not a problem, because task grouping concerns the
task's relationship with its parent, not its children.  If the
task is chosen to be the leader task of the new group, its
domain may be reduced to a subset of its initial value (always
legal for a proper root task), and its children will be partly
from the old group and partly from the new group.  When 
@C { sa } removes the group, it unassigns only the children
from the new group, not all the children.
@PP
@BI { Assigned tasks. }
All accepted tasks are proper root tasks, which means that
each is either unassigned or assigned directly to a resource.
@PP
It would be easy if we could disallow assigned tasks, but
we can't, because there is an application where that would
pose a major problem:  interval grouping, where the assigned
tasks come from assign by history.  Instead, as we know, the
rule is that assigned tasks are permitted provided they are
assigned to the same resource @M { r }.  To implement this, if
the chosen leader task @M { l } is assigned to @M { r }, we
move the others to @M { l }.  Otherwise @M { l } must be
unassigned, so we assign @M { l } to @M { r } and move
the others to @M { l }.  Either way, every task is now assigned
to @M { r }, albeit indirectly (via @M { l }).
@PP
@BI { Interference. }
When several tasks are grouped, some of which are assigned a
resource @M { r } and some of which are not, an obscure problem
can arise.  Suppose that we group two tasks, @M { t sub 1 } and
@M { t sub 2 }, and that @M { t sub 1 } is initially assigned
resource @M { r } and @M { t sub 2 } is initially unassigned.
Then the grouping effectively assigns @M { r } to @M { t sub 2 }.
If there is some other task assigned @M { r } which is running
on any of the days that @M { t sub 2 } is running, this will
mean that @M { r } has to attend two tasks on the same day, which
is not allowed.  We say that the other task @I interferes with
the grouping of @M { t sub 1 } with @M { t sub 2 }.  The task
grouper rejects all tasks whose addition to the group would
cause interference.
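@PP
Conceptually, the test is a scan of the days that the incoming task
is running.  In this sketch, @C { task_in } is that interval, and
@C { RBusyOnDay(r, d) } is a hypothetical helper returning
@C { true } when @M { r } is assigned some task outside the group
that runs on day @C { d }:
@ID @C {
interferes = false;
for( d = task_in.first;  d <= task_in.last;  d++ )
  if( RBusyOnDay(r, d) )
    interferes = true;
}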
@PP
@BI { Fixed task assignments. }
A task assignment may be @I { fixed }, meaning that it may not be
changed.  Interpreted literally, a task with a fixed assignment
cannot participate in task grouping unless it is chosen to be the
leader task.  But we will view task fixing as a logical requirement
that does not necessarily prevent a task from being grouped.
@PP
First, suppose that the task @M { t } whose assignment is fixed
is assigned to a resource @M { r }.  Then, if we ignore the fixing,
after grouping @M { t } will either keep its assignment (if it is
chosen to be the leader task) or else be assigned to the leader
task, with the leader task assigned to @M { r }.
We regard this as acceptable for a fixed @M { t }, because @M { t }
is still assigned to @M { r }, indirectly.  So when building the
group, if @M { t } is not the leader task, we unfix it, move
it to the leader task, and re-fix it.
@PP
In the grouped state, the assignment of @M { t } to the leader
task could equally well be fixed or not fixed.  It does not
matter, because no-one is going to change it until the time
comes to undo it.  But we prefer to fix it.  What does matter
is that the assignment of the leader task to @M { r } must be
fixed, otherwise some ordinary solver could change it and thus
violate the fix on @M { t }.
@PP
Second, suppose that the task @M { t } whose assignment is
fixed is unassigned.  We interpret this as saying that @M { t }
may not be assigned.  But the whole purpose of building a
group is to assign it, as a unit, to some resource later.
So the task grouper rejects tasks with fixed non-assignments.
These will be exceedingly rare anyway.
# Once again, we need to fix the assignment
# of the leader task, but now we require that the leader task be
# unassigned, since otherwise we have violated the fix on @M { t }.
# So @M { t } cannot share a group with an assigned task, and we
# have the sixth condition above.
@PP
Undoing the grouping of a task whose assignment is initially fixed
is straightforward.  Unfix the task's assignment, move it to the
leader task's parent, and fix that assignment.
@PP
@BI { Summary of the task grouping algorithm. }
Given a set of tasks which have passed the checks made by
@C { KheTaskGrouperAddTask }, together with two values already
calculated (the leader task domain and any assigned resource
@M { r }), the actual grouping is done as follows.
@PP
First, choose a leader task---any task whose domain has minimum size.
Second, reduce the leader task's domain if required.  Third, move
every task except the leader task to the leader task.  Fixed tasks are
unfixed before their move and re-fixed after it.  Fourth, if there
is an @M { r } and the leader task is not currently assigned to it,
assign it to @M { r }.  (If this move is needed, then the leader
task is not fixed.  This is because the only way that its assignment
can differ from @M { r } is for its assignment to be @C { NULL } and
@M { r } to be non-@C { NULL }; and in that case, if it was fixed it
would be a fixed unassigned task which was being grouped with an
assigned task, which is not allowed.)  Fifth and last, if the leader
task has at least one fixed follower (which we determine as we move the
followers), and its assignment is not fixed, then fix its assignment.
@PP
Undoing is not exact, but we can approximate it by carrying out
in reverse order the reverse of each step above, and then adjusting
the algorithm we get.  This produces the following.  A record of
what happened during grouping is held in @C { sa }; this undo
algorithm relies on that record.  There is not enough information
in the tasks themselves to determine what to do.
@PP
First, if the leader task was fixed during grouping, unfix it.
Second, irrespective of whether the leader task was moved to a
resource, its assignment after the undo has to be its assignment
at the time of the undo, so do nothing.  Third, move every follower
task from the leader task to the leader task's assignment at the
time of the undo (possibly @C { NULL }).  If the follower task is
fixed, unfix it before the move and re-fix it afterwards.  Fourth
and last, restore the leader task's original domain, if required.
@PP
# @BI { External task groups. }
# So far we have been concerned with a single task group:  the
# task grouper's @I { internal task group }.  Although it can be
# varied by deleting tasks and subsequently adding others, still
# only one internal task group can be in existence at any one time.
# So now we introduce @I { external task groups }, which are created
# by the task grouper but not held inside it.  Any number of external
# task groups may be in existence simultaneously.
# @PP
# Unusually for KHE, we use a linked structure to represent a task
# group.  For example, the set of three tasks @M { lbrace a, b, c rbrace }
# will be represented by three objects linked like this:
# @CD @Diag {
# A:: @Box 1c @Wide 1c @High @M { a } ||2c
# B:: @Box 1c @Wide 1c @High @M { b } ||2c
# C:: @Box 1c @Wide 1c @High @M { c }
# //
# @Arrow from { B } to { A }
# @Arrow from { C } to { B }
# }
# Each object contains its task, a pointer to its predecessor,
# and other information that we will come to later.  The group
# as a whole is accessed by a pointer to its last object (in the
# example, the one containing @M { c }).  Doing it this way allows
# us to share parts of the structure between several task groups,
# which in practice can save us a lot of time and memory.  For
# example, to create a second task group @M { lbrace a, b, d rbrace },
# we only need one new object, the one containing @M { d }; its
# predecessor is the object containing @M { b } that we have
# already.  @C { KheTaskGrouperDeleteTask } offers something
# similar (delete @M { c } then add @M { d }), but it does not
# allow the two groups to exist simultaneously.
# @PP
# Type @C { KHE_TASK_GROUPER_ENTRY } is a pointer to any one of the
# objects in the diagram above.  Sometimes it represents just the
# one object, sometimes it represents the entire list, from the
# object back to the start.  @C { NULL } is a legal value of this
# type; it represents the empty list.
# @PP
# To make and free objects of type @C { KHE_TASK_GROUPER_ENTRY },
# the calls are
# @ID @C {
# bool KheTaskGrouperEntryMakeTask(KHE_TASK_GROUPER tg,
#   KHE_TASK_GROUPER_ENTRY prev, KHE_TASK task, bool unchecked,
#   KHE_TASK_GROUPER_ENTRY *res);
# KHE_TASK_GROUPER_ENTRY KheTaskGrouperEntryMakeHistory(
#   KHE_TASK_GROUPER tg, KHE_RESOURCE r, int durn);
# void KheTaskGrouperEntryFree(KHE_TASK_GROUPER_ENTRY tge,
#   KHE_TASK_GROUPER tg);
# }
# # Freed entries are stored in a free list in task grouper @C { tg },
# # and memory for new ones comes from @C { tg }'s arena.
# # @PP
# @C { KheTaskGrouperEntryMakeTask } makes a new entry whose
# predecessor is @C { prev } (this will be @C { NULL } when
# the new entry is the first in its group) and whose task
# is @C { task }.  If @C { task } can be added to @C { prev },
# as determined by the six conditions above, then
# @C { KheTaskGrouperEntryMakeTask } makes the new entry,
# sets @C { *res } to point to it, and returns @C { true }.
# Otherwise it changes nothing and returns @C { false }.  If
# @C { unchecked } is @C { true }, it makes the entry and returns
# @C { true } without checking first.  This saves time when the
# caller knows that these checks have been made previously.
# @PP
# One may also pass @C { NULL } for @C { res }, and then the return
# value will be as described but no task grouper entry will be made.
# @PP
# @C { KheTaskGrouperEntryMakeHistory } returns a different kind
# of new entry, one consisting of history with resource @C { r }
# and duration @C { durn }.  There is no @C { prev } because history
# must come first in a task group, and there is no need for a
# @C { bool } result because the call always succeeds.
# @PP
# @C { KheTaskGrouperEntryFree } frees @C { tge }, that is,
# it adds the object pointed to by @C { tge } to a free list held
# in @C { tg }.  From there it is re-used by future calls to
# @C { KheTaskGrouperEntryMakeTask } and @C { KheTaskGrouperEntryMakeHistory }.
# It is the user's responsibility to free entries when they
# are no longer referenced.  Given the way that entries are shared,
# this can be non-trivial.
# @PP
# We can now reveal that
# the internal task group held within a task grouper is
# a particular external task group, held and managed by that task grouper.
# Functions @C { KheTaskGrouperClear }, @C { KheTaskGrouperAddTask },
# @C { KheTaskGrouperDeleteTask }, @C { KheTaskGrouperAddHistory },
# and @C { KheTaskGrouperDeleteHistory } delegate to this external
# task group.  Function
# @ID @C {
# KHE_TASK_GROUPER_ENTRY KheTaskGrouperLastEntry(KHE_TASK_GROUPER tg);
# }
# returns a pointer to the last entry of the internal task group.
# This makes the internal task group external, which is handy for
# iterating over its elements, as we'll see shortly.  Also, functions
# @ID @C {
# KHE_COST KheTaskGrouperEntryCost(KHE_TASK_GROUPER_ENTRY tge,
#   KHE_TASK_GROUPER tg);
# KHE_TASK KheTaskGrouperEntryMakeGroup(KHE_TASK_GROUPER_ENTRY tge,
#   KHE_SOLN_ADJUSTER sa);
# }
# do for an external task group what @C { KheTaskGrouperCost } and
# @C { KheTaskGrouperMakeGroup } do for the internal one.  As we
# know, @C { tge } can represent a single object or the task group
# as a whole, accessed via its last element; these two functions
# use the second interpretation.
# @PP
# Next come five functions which give access to the attributes
# of entries:
# @ID {0.93 1.0} @Scale @C {
# KHE_TASK_GROUPER_ENTRY KheTaskGrouperEntryPrev(KHE_TASK_GROUPER_ENTRY tge);
# KHE_TASK KheTaskGrouperEntryTask(KHE_TASK_GROUPER_ENTRY tge);
# KHE_INTERVAL KheTaskGrouperEntryInterval(KHE_TASK_GROUPER_ENTRY tge);
# KHE_TASK_GROUP_DOMAIN KheTaskGrouperEntryDomain(KHE_TASK_GROUPER_ENTRY tge);
# KHE_RESOURCE KheTaskGrouperEntryAssignedResource(KHE_TASK_GROUPER_ENTRY tge);
# }
# @C { KheTaskGrouperEntryPrev } is @C { tge }'s predecessor, possibly
# @C { NULL }.  @C { KheTaskGrouperEntryTask } is @C { tge }'s task.
# A @C { NULL } task indicates that the entry is a history entry.
# # distinguishes a history entry from an ordinary entry.
# # In history entries, both values are @C { NULL }.
# @PP
# In the last three functions, @C { tge } represents the whole list.
# @C { KheTaskGrouperEntryInterval } is the interval of days
# covered by the tasks of the list.  If there is a history entry
# with duration @M { d }, then interval @M { [minus d , minus 1] } is
# also included.
# @C { KheTaskGrouperEntryDomain } is a task group domain object
# (Section {@NumberOf resource_structural.task_grouping.domains})
# representing the intersection of the domains of the tasks of the list.
# @C { KheTaskGrouperEntryAssignedResource } is @C { r } if any of
# the tasks of the lists are assigned @C { r }, or there is history
# for @C { r }, and @C { NULL } otherwise.  For efficiency, these
# three values are calculated and stored in object @C { tge } when
# it is created, but they are determined by the whole list.
# @PP
# For example, this code visits the tasks of the internal task group:
# @ID @C {
# tge = KheTaskGrouperLastEntry(tg);
# while( tge != NULL )
# {
#   task = KheTaskGrouperEntryTask(tge);
#   if( task != NULL )
#     ... visit task ...
#   tge = KheTaskGrouperEntryPrev(tge);
# }
# }
# To visit the tasks of an external task group, omit the first line.
# @PP
# Finally, function
# @ID @C {
# void KheTaskGrouperEntryDebug(KHE_TASK_GROUPER_ENTRY tge,
#   int verbosity, int indent, FILE *fp);
# }
# produces a debug print of @C { tge }, including its predecessors,
# onto @C { fp } with the given verbosity and indent.
# @PP
@BI { Separate group testing. }
At any one time the task grouper contains only one group, the internal
task group.  This can be too limiting.  For example, interval grouping
(Section {@NumberOf resource_structural.task_grouping.interval_grouping})
builds many groups simultaneously.  To support this, the task
grouper offers operations which make its essence available to
other modules, leaving them to build the actual groups separately.
@PP
By the essence we mean the answer to this question:  given a
group @C { g } and a task @C { t }, can @C { t } be added to
@C { g }?  This question depends on various attributes of
@C { t } and three attributes of @C { g }:
@BulletList

@LI {
The interval of days that the tasks of @C { g } are running;
}

@LI {
The intersection of the domains of the tasks of @C { g };
}

@LI {
Any resource that one or more tasks of @C { g } (or history)
are assigned to, or @C { NULL } if none.
}

@EndList
The key operation for separate group testing is therefore
@ID @C {
bool KheTaskGrouperSeparateAddTask(KHE_TASK_GROUPER tg,
  KHE_INTERVAL prev_interval, KHE_TASK_GROUP_DOMAIN prev_domain,
  KHE_RESOURCE prev_assigned_resource, KHE_TASK task,
  KHE_INTERVAL *new_interval, KHE_TASK_GROUP_DOMAIN *new_domain,
  KHE_RESOURCE *new_assigned_resource);
}
Here the existing group @C { g } is represented by its three
attributes, @C { prev_interval }, @C { prev_domain }, and
@C { prev_assigned_resource }.  If @C { task } can be added
to a group with these attributes, @C { true } is returned
and @C { *new_interval }, @C { *new_domain }, and
@C { *new_assigned_resource } are set to the three attributes
of the group consisting of @C { g } plus @C { task }.
Otherwise @C { false } is returned and @C { *new_interval },
@C { *new_domain }, and @C { *new_assigned_resource } are
well-defined but (depending on what the problem was) may
only imperfectly represent what happens when @C { task } is
added to @C { g }.
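@PP
On success, the new attributes relate to the old ones in a simple
way.  In this sketch, @C { IntervalUnion } returns the smallest
interval containing both its arguments; it and the other helpers
(except @C { KheAsstResource }) are hypothetical, not KHE operations:
@ID @C {
*new_interval = IntervalUnion(prev_interval, TaskInterval(task));
*new_domain = DomainIntersect(prev_domain, TaskDomain(task));
if( prev_assigned_resource != NULL )
  *new_assigned_resource = prev_assigned_resource;
else
  *new_assigned_resource = KheAsstResource(task);
}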
@PP
When building a separate group that starts with a given task,
call this cut-down version:
@ID @C {
bool KheTaskGrouperSeparateAddInitialTask(KHE_TASK_GROUPER tg,
  KHE_TASK task, KHE_INTERVAL *new_interval,
  KHE_TASK_GROUP_DOMAIN *new_domain,
  KHE_RESOURCE *new_assigned_resource);
}
As explained above for @C { KheTaskGrouperAddTask }, only
conditions (1) and (2) can fail here.
@PP
History may be included in separate group testing by calling
@ID @C {
void KheTaskGrouperSeparateAddHistory(KHE_TASK_GROUPER tg,
  KHE_RESOURCE r, int durn, KHE_INTERVAL *new_interval,
  KHE_TASK_GROUP_DOMAIN *new_domain,
  KHE_RESOURCE *new_assigned_resource);
}
There are no @C { prev_interval }, @C { prev_domain }, and
@C { prev_assigned_resource } parameters, because history
always comes first.  There is no @C { bool } result type,
because adding history always succeeds.  This function sets
@C { *new_interval }, @C { *new_domain }, and
@C { *new_assigned_resource } to the three attributes of a group
that contains history entry @C { (r, durn) }.
@PP
There is nothing equivalent to @C { KheTaskGrouperCost } and
@C { KheTaskGrouperMakeGroup } for separate groups.  To get
the effect of these functions one must load the separate group
into the task grouper, making it the internal group.  This is
reasonable, because the actual tasks of the group are needed for
these operations; the three attributes alone are not sufficient.
@End @SubSection

# @SubSection
#     @Title { The mtask grouper (old) }
#     @Tag { resource_structural.task_grouping.mtask_grouper_old }
# @Begin
# @LP
# Just as a task grouper groups tasks, so an mtask grouper groups
# mtasks.  Much that the mtask grouper does parallels the task
# grouper---but not exactly:  a task grouper builds task groups by
# assigning tasks to a leader task, but there is nothing analogous
# for mtasks.  Instead, the mtask grouper also builds task groups.
# Each task group is made by choosing one task from each mtask in
# the grouper.  (It would be wrong to build a task group from the
# tasks of one mtask, because those tasks run simultaneously and
# so are not suited to being assigned the same resource.)
# @PP
# Whether to use a task grouper or an mtask grouper depends mainly
# on whether mtasks are already present.  If they are, it is not
# safe to use the task grouper, because the mtask finder has no
# way of knowing that tasks are being grouped, so it is at risk
# of becoming out of date.  The mtask grouper is safe, because
# it keeps the mtask finder up to date with the groups it makes.
# @PP
# The main type, @C { KHE_MTASK_GROUPER }, follows type
# @C { KHE_TASK_GROUPER } closely:
# @ID @C {
# KHE_MTASK_GROUPER KheMTaskGrou perMake(KHE_SOLN soln,
#   KHE_FRAME days_frame, KHE_TASK_GROUP_DOMAIN_FINDER tgdf, HA_ARENA a);
# void KheMTaskGrouperClear(KHE_MTASK_GROUPER mtg);
# bool KheMTaskGrouperAddMTask(KHE_MTASK_GROUPER mtg, KHE_MTASK mt);
# void KheMTaskGrouperDeleteMTask(KHE_MTASK_GROUPER mtg, KHE_MTASK mt);
# int KheMTaskGrouperMTaskCount(KHE_MTASK_GROUPER mtg);
# KHE_MTASK KheMTaskGrouperMTask(KHE_MTASK_GROUPER mtg, int i);
# bool KheMTaskGrouperContainsMTask(KHE_MTASK_GROUPER mtg, KHE_MTASK mt);
# void KheMTaskGrouperCopy(KHE_MTASK_GROUPER dst_mtg,
#   KHE_MTASK_GROUPER src_mtg);
# KHE_COST KheMTaskGrouperCost(KHE_MTASK_GROUPER mtg);
# int KheMTaskGrouperMakeGroups(KHE_MTASK_GROUPER mtg,
#   int max_num, KHE_SOLN_ADJUSTER sa);
# void KheMTaskGrouperDebug(KHE_MTASK_GROUPER mtg,
#   int verbosity, int indent, FILE *fp);
# }
# One can make an mtask grouper, clear it back to empty, add an
# mtask (if compatible), delete the most recently added mtask,
# find the number of mtasks currently stored and the @C { i }th
# of those mtasks, find whether a given mtask is present in @C { mtg },
# and copy the contents of @C { src_mtg } into @C { dst_mtg }.  This
# copy operation is equivalent to clearing @C { dst_mtg } and then
# adding the mtasks of @C { src_mtg } to @C { dst_mtg } one by one.
# It does not change @C { src_mtg }.
# @PP
# The value of @C { KheMTaskGrouperCost } is the cost of
# any one task group built from the tasks of the mtasks added to
# @C { mtg }.  All these task groups have the same cost, because,
# as defined for @C { KheTaskGrouperCost }
# (Section {@NumberOf resource_structural.task_grouping.task_grouper}),
# the cost is a sum of resource monitor costs, and the effect on
# resource monitors of assigning any one task from a given mtask
# is the same as that of assigning any of the others.
# Unlike @C { KheTaskGrouperCost }, @C { KheMTaskGrouperCost }
# has no @C { days_frame } parameter; the days frame comes from
# the mtask finder that created the mtasks.
# @PP
# @C { KheMTaskGrouperMakeGroups } makes some task groups from
# the mtasks of @C { mtg }.  As we said before, each task group
# contains one task from each mtask.  The number of groups made
# is the smaller of @C { max_num } and the largest number of
# groups that @C { KheMTaskGrouperMakeGroups } can find a way
# to make; it will not exceed the minimum, over all mtasks
# @C { mt }, of the number of tasks in @C { mt }.  The value
# returned is the number of groups actually made; it could be 0.
# @PP
# Just as for the task grouper, there is a type
# @C { KHE_MTASK_GROUPER_ENTRY } which gives access
# to the same semantics, but allowing initial sequences
# of mtasks to be shared:
# @ID @C {
# bool KheMTaskGrouperEntryAddMTask(KHE_MTASK_GROUPER_ENTRY prev,
#   KHE_MTASK mt, KHE_TASK_GROUP_DOMAIN_FINDER tgdf,
#   KHE_MTASK_GROUPER_ENTRY next);
# void KheMTaskGrouperEntryAddDummy(KHE_MTASK_GROUPER_ENTRY prev,
#   KHE_MTASK_GROUPER_ENTRY next);
# void KheMTaskGrouperEntryCopy(KHE_MTASK_GROUPER_ENTRY dst_last,
#   KHE_MTASK_GROUPER_ENTRY src_last);
# KHE_COST KheMTaskGrouperEntryCost(KHE_MTASK_GROUPER_ENTRY last);
# int KheMTaskGrouperEntryMakeGroups(KHE_MTASK_GROUPER_ENTRY last,
#   int max_num, KHE_SOLN_ADJUSTER sa);
# }
# @C { KheMTaskGrouperEntryCopy } copies the record pointed to by
# @C { src_last } into the record pointed to by @C { dst_last }.
# It does not copy a whole sequence of entries.  Once again,
# @C { KheMTaskGrouperEntryCost } has no @C { days_frame } parameter.
# @PP
# Here now are the conditions which determine whether
# @C { KheMTaskGrouperAddMTask } will accept an mtask @C { mt }.
# As usual, @C { mt } must be non-@C { NULL }, otherwise
# @C { KheMTaskGrouperAddMTask } aborts, but there is no
# requirement that @C { mt } contain only proper root tasks,
# because mtasks always contain only proper root tasks.  The
# following conditions must hold, however; if not, @C { mt }
# is not added and @C { false } is returned:
# @NumberedList
# 
# @LI @OneRow {
# The interval of days that @C { mt } is running must be disjoint from
# the interval of days that the other mtasks of the group (taken
# together) are running.  Among other things, this prevents the
# same mtask from being added to the group twice.
# }
# 
# @LI @OneRow {
# The intersection of the domain of @C { mt } and the other mtasks must
# be non-empty (so that @C { KheMTaskGrouperCost } can be implemented).
# }
# 
# # @LI @OneRow {
# # @C { KheMTaskGrouperAddMTask } must be able to find a
# # @I { leader mtask } for the group including @C { mt }:
# # an mtask that all leader tasks could come from when
# # we do some task grouping later.  Any mtask whose domain is a
# # subset of the domains of all the mtasks can be the leader mtask.
# # # If there is no such mtask, @C { KheMTaskGrouperAddMTask }
# # # returns @C { false }.
# # }
# 
# @EndList
# These parallel the first two requirements for the task grouper.
# But the last four requirements for the task grouper have been
# dropped.  This is because they involve tasks that are initially
# assigned a resource, and the mtask grouper only groups unassigned
# tasks, as discussed below.
# @PP
# It remains to explain how @C { KheMTaskGrouperMakeGroups } chooses
# its groups.  To begin with, if there is just one mtask in the grouper,
# then no actual task grouping is called for, and the value returned
# by @C { KheMTaskGrouperMakeGroups } is 0.  (Arguably, the value
# should be the number of tasks in the sole mtask, or @C { max_num }
# if @C { max_num } is smaller; but that is not what we do.)
# @PP
# When there are two or more mtasks, our choice of tasks to
# group is driven by three points.
# @PP
# First, we ignore assigned tasks.  This is because different mtasks may
# be assigned different sets of resources, and working out what to do
# about that seems to be quite difficult.  There is a big difference here
# from task grouping, which handles assigned tasks carefully and well.
# @PP
# Second, tasks that appear earlier in an mtask are supposed to be
# assigned before tasks that appear later.  Since we group tasks
# with the intention of assigning them, we group tasks that appear
# earlier in each mtask before tasks that appear later.
# @PP
# Third, it helps if tasks that are grouped have similar
# non-assignment and assignment costs.  For example, to group
# a task with a large non-assignment cost with a task with a
# large assignment cost would make a large cost inevitable,
# whether the group is assigned a resource or not.
# @PP
# To take these points into consideration, we choose the first
# unassigned task in each mtask for the first group, the second
# unassigned task in each mtask for the second group, and so on, until
# we fail to build a group, or some mtask runs out of unassigned tasks,
# or we reach @C { max_num }.
# # @PP
# # As usual, we cannot group tasks which are assigned different
# # resources.  However, we have some flexibility here because
# # although resources are assigned to specific tasks behind
# # the scenes, abstractly a resource is assigned to an mtask,
# # not to any specific task of that mtask.  So we can rearrange
# # resource assignments within mtasks if necessary while building
# # the task groups.  The details are somewhat complicated, but
# # the net effect is that for each assignment of a resource
# # @C { r } to an mtask @C { mt } at the beginning, at the end
# # there will either be a new mtask holding the grouped tasks
# # and assigned @C { r }, or else @C { mt } will still exist and
# # will continue to be assigned @C { r }.
# @PP
# Tasks with fixed assignments receive no special treatment, other
# than what the task grouper gives to them.  But fixed tasks are
# usually the only tasks in their mtask, so they limit the
# number of groups that @C { KheMTaskGrouperMakeGroups } can
# make to one.  Altogether, assigned tasks and fixed tasks do
# not go well with mtask grouping.
# @PP
# The grouping of the chosen tasks is done (in the usual way) by
# @C { mtf }'s task grouper.  So the algorithm for actually building
# one group is the same as the one used for task grouping, only
# modified to keep the mtask finder up to date.
# @End @SubSection

@SubSection
    @Title { The mtask grouper }
    @Tag { resource_structural.task_grouping.mtask_grouper }
@Begin
@LP
Just as a task grouper groups tasks, so an mtask grouper groups
mtasks.  Much of what the mtask grouper does parallels the task
grouper, but not all of it.  A task grouper builds task groups by
assigning tasks to a leader task, but there is nothing analogous
for mtasks.  Instead, the mtask grouper also builds task groups.
Each task group is made by choosing one task from each mtask in
the grouper.  (It would be wrong to build a task group from the
tasks of one mtask, because those tasks run simultaneously and
so are not suited to being assigned the same resource.)
@PP
Since the mtask grouper and the task grouper both build task groups,
it is fair to ask if both are needed.  Task grouping causes an
mtask finder to become out of date, although function
@C { KheMTaskFinderTaskGrouperMakeGroup }
(Section {@NumberOf resource_structural.mtask_finding.solver})
deals with that problem.  The author has found mtask grouping
to be useful for combinatorial grouping
(Section {@NumberOf resource_structural.task_grouping.combinatorial}),
because an mtask grouping can be uniquely best when the
corresponding task groupings are not.
@PP
The main type, @C { KHE_MTASK_GROUPER }, follows type
@C { KHE_TASK_GROUPER } closely.  An mtask grouper contains an
@I { internal mtask group } just as the task grouper contains an
internal task group, and its operations are similar.  One can make
an mtask grouper, clear it back to empty, add an mtask (if
compatible), and delete the most recently added mtask:
@ID @C {
KHE_MTASK_GROUPER KheMTaskGrouperMake(KHE_SOLN soln,
  KHE_FRAME days_frame, KHE_TASK_GROUP_DOMAIN_FINDER tgdf, HA_ARENA a);
void KheMTaskGrouperClear(KHE_MTASK_GROUPER mtg);
bool KheMTaskGrouperAddMTask(KHE_MTASK_GROUPER mtg, KHE_MTASK mt);
void KheMTaskGrouperDeleteMTask(KHE_MTASK_GROUPER mtg, KHE_MTASK mt);
}
A domain finder @C { tgdf } may already be on hand; if not, one can
be made by calling @C { KheTaskGroupDomainFinderMake } from
Section {@NumberOf resource_structural.task_grouping.domains}.
There are also these functions which parallel the task grouper:
@ID @C {
int KheMTaskGrouperMTaskCount(KHE_MTASK_GROUPER mtg);
KHE_MTASK KheMTaskGrouperMTask(KHE_MTASK_GROUPER mtg, int i);
KHE_COST KheMTaskGrouperCost(KHE_MTASK_GROUPER mtg);
int KheMTaskGrouperMakeGroups(KHE_MTASK_GROUPER mtg,
  int max_num, KHE_SOLN_ADJUSTER sa);
void KheMTaskGrouperDebug(KHE_MTASK_GROUPER mtg,
  int verbosity, int indent, FILE *fp);
}
The value of @C { KheMTaskGrouperCost } is the cost of
any one task group built from the tasks of the mtasks added to
@C { mtg }.  All these task groups have the same cost, because,
as defined for @C { KheTaskGrouperCost }
(Section {@NumberOf resource_structural.task_grouping.task_grouper}),
the cost is a sum of resource monitor costs, and the effect on
resource monitors of assigning any one task from a given mtask
is the same as that of assigning any of the others.
# Unlike @C { KheTaskGrouperCost }, @C { KheMTaskGrouperCost }
# has no @C { days_frame } parameter; the days frame comes from
# the mtask finder that created the mtasks.
@PP
@C { KheMTaskGrouperMakeGroups } makes some task groups from
the mtasks of @C { mtg }.  As we said before, each task group
contains one task from each mtask.  The number of groups made
is the smaller of @C { max_num } and the largest number of
groups that @C { KheMTaskGrouperMakeGroups } can
make; it will not exceed the minimum, over all mtasks
@C { mt }, of the number of tasks in @C { mt }.  The value
returned is the number of groups actually made; it could be 0.
@PP
@C { KheMTaskGrouperDebug } produces the usual debug print of
@C { mtg } onto @C { fp } with the given verbosity and indent.
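@PP
As a sketch of typical use (the origin of the candidate mtasks, and
the decision to stop at the first incompatible one, are illustrative
assumptions, not part of the API):
@ID @C {
mtg = KheMTaskGrouperMake(soln, days_frame, tgdf, a);
... for each candidate mtask mt ...
  if( !KheMTaskGrouperAddMTask(mtg, mt) )
    break;
if( KheMTaskGrouperMTaskCount(mtg) >= 2 )
  groups_made = KheMTaskGrouperMakeGroups(mtg, max_num, sa);
}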
# @PP
# Just as for the task grouper, there is a @C { KHE_MTASK_GROUPER_ENTRY }
# type which gives access to the same semantics, but allowing
# initial sequences of mtasks to be shared:
# @ID @C {
# bool KheMTaskGrouperEntryMakeMTask(KHE_MTASK_GROUPER mtg,
#   KHE_MTASK_GROUPER_ENTRY prev, KHE_MTASK mt, bool unchecked,
#   KHE_MTASK_GROUPER_ENTRY *res);
# void KheMTaskGrouperEntryFree(KHE_MTASK_GROUPER_ENTRY mtge,
#   KHE_MTASK_GROUPER mtg);
# KHE_MTASK_GROUPER_ENTRY KheMTaskGrouperLastEntry(KHE_MTASK_GROUPER mtg);
# KHE_COST KheMTaskGrouperEntryCost(KHE_MTASK_GROUPER_ENTRY mtge,
#   KHE_MTASK_GROUPER mtg);
# int KheMTaskGrouperEntryMakeGroups(KHE_MTASK_GROUPER_ENTRY mtge,
#   int max_num, KHE_MTASK_GROUPER mtg, KHE_SOLN_ADJUSTER sa);
# }
# There are also functions for accessing the attributes of
# one entry:
# @ID @C {
# KHE_MTASK_GROUPER_ENTRY KheMTaskGrouperEntryPrev(
#   KHE_MTASK_GROUPER_ENTRY mtge);
# KHE_MTASK KheMTaskGrouperEntryMTask(KHE_MTASK_GROUPER_ENTRY mtge);
# KHE_INTERVAL KheMTaskGrouperEntryInterval(KHE_MTASK_GROUPER_ENTRY mtge);
# KHE_TASK_GROUP_DOMAIN KheMTaskGrouperEntryDomain(
#   KHE_MTASK_GROUPER_ENTRY mtge);
# }
# For example, to visit the mtasks of mtask grouper @C { mtg } the code is
# @ID @C {
# mtge = KheMTaskGrouperLastEntry(mtg);
# while( mtge != NULL )
# {
#   mt = KheMTaskGrouperEntryMTask(mtge);
#   ... visit mt ...
#   mtge = KheMTaskGrouperEntryPrev(mtge);
# }
# }
# There are no history entries, as there may be in task groups, so
# every entry contains an mtask.
@PP
Here now are the conditions which determine whether
@C { KheMTaskGrouperAddMTask } will accept an mtask @C { mt }.
As usual, @C { mt } must be non-@C { NULL }, otherwise
@C { KheMTaskGrouperAddMTask } aborts, but there is no
requirement that @C { mt } contain only proper root tasks,
because mtasks always contain only proper root tasks.  The
following conditions must hold, however; if not, @C { mt }
is not added and @C { false } is returned:
@NumberedList

@LI @OneRow {
@C { mt }'s domain must be non-empty (so that
@C { KheMTaskGrouperCost } can be implemented).
}

@LI @OneRow {
@C { KheMTaskAssignIsFixed(mt) } must be @C { false }.  As explained
below, we are only interested in grouping @C { mt }'s unassigned tasks.
If @C { mt }'s assignments were fixed, its unassigned tasks would
have fixed non-assignments, which
Section {@NumberOf resource_structural.task_grouping.task_grouper}
excludes from task grouping.
}

@LI @OneRow {
The interval of days that @C { mt } is running must be disjoint from
the interval of days that the other mtasks of the group (taken
together) are running.  Among other things, this prevents the
same mtask from being added to the group twice.
}

@LI @OneRow {
The intersection of the domain of @C { mt } with the domains of the
other mtasks must be non-empty (so that @C { KheMTaskGrouperCost }
can be implemented).
}

# @LI @OneRow {
# @C { KheMTaskGrouperAddMTask } must be able to find a
# @I { leader mtask } for the group including @C { mt }:
# an mtask that all leader tasks could come from when
# we do some task grouping later.  Any mtask whose domain is a
# subset of the domains of all the mtasks can be the leader mtask.
# # If there is no such mtask, @C { KheMTaskGrouperAddMTask }
# # returns @C { false }.
# }

@EndList
These parallel the first four requirements for the task grouper
(Section {@NumberOf resource_structural.task_grouping.task_grouper}).
But the other task grouper requirements have been dropped.  This
is because they involve tasks that are initially assigned a
resource, and the mtask grouper only groups unassigned tasks,
as explained below.
@PP
It remains to say how @C { KheMTaskGrouperMakeGroups } chooses its
groups.  To begin with, if there is just one mtask in the grouper,
then no actual task grouping is called for, and the value returned
by @C { KheMTaskGrouperMakeGroups } is 0.  (Arguably, the value
should be the number of tasks in the sole mtask, or @C { max_num }
if @C { max_num } is smaller; but that is not what we do.)
@PP
When there are two or more mtasks, our choice of tasks to
group is driven by three points.
@PP
First, we ignore assigned tasks.  This is because different mtasks may
be assigned different sets of resources, and working out what to do
about that would be awkward.  There is a big difference here from
task grouping, which handles assigned tasks carefully and well.
@PP
Second, tasks that appear earlier in an mtask are supposed to be
assigned before tasks that appear later.  Since we group tasks
with the intention of assigning them, we group tasks that appear
earlier in each mtask before tasks that appear later.
@PP
Third, it helps if tasks that are grouped have similar
non-assignment and assignment costs.  For example, to group
a task with a large non-assignment cost with a task with a
large assignment cost would make a large cost inevitable,
whether the group is assigned a resource or not.
@PP
To take these points into consideration, we choose the first
unassigned task in each mtask for the first group, the second
unassigned task in each mtask for the second group, and so on, until
we fail to build a group, or some mtask runs out of unassigned tasks,
or we reach @C { max_num }.
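@PP
The strategy just described may be pictured like this (an
illustrative sketch only, not the actual implementation; here
@C { UnassignedTask(mt, g) } is a hypothetical helper that returns
the @C { g }th unassigned task of @C { mt }, or @C { NULL } if
there is none):
@ID @C {
for( g = 0;  g < max_num;  g++ )
{
  ... begin a new group ...
  for( i = 0;  i < KheMTaskGrouperMTaskCount(mtg);  i++ )
  {
    task = UnassignedTask(KheMTaskGrouperMTask(mtg, i), g);
    if( task == NULL || ... task cannot join the group ... )
      return g;
  }
  ... finish the group ...
}
return max_num;
}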
# @PP
# As usual, we cannot group tasks which are assigned different
# resources.  However, we have some flexibility here because
# although resources are assigned to specific tasks behind
# the scenes, abstractly a resource is assigned to an mtask,
# not to any specific task of that mtask.  So we can rearrange
# resource assignments within mtasks if necessary while building
# the task groups.  The details are somewhat complicated, but
# the net effect is that for each assignment of a resource
# @C { r } to an mtask @C { mt } at the beginning, at the end
# there will either be a new mtask holding the grouped tasks
# and assigned @C { r }, or else @C { mt } will still exist and
# will continue to be assigned @C { r }.
@PP
Tasks with fixed assignments receive no special treatment, other
than what the task grouper gives to them.  But fixed tasks are
usually the only tasks in their mtask, so they limit the
number of groups that @C { KheMTaskGrouperMakeGroups } can
make to one.  Altogether, assigned tasks and fixed tasks do
not go well with mtask grouping.
@PP
The grouping of the chosen tasks is done by a task grouper held
within @C { mtg }.  So the algorithm for actually building one
group is the same as the one used for task grouping, only
modified to keep the mtask finder up to date.
@PP
The mtask grouper offers nothing corresponding to
@C { KheTaskGrouperSeparateAddTask },
@C { KheTaskGrouperSeparateAddInitialTask }, and
@C { KheTaskGrouperSeparateAddHistory } from the task grouper.
Such functions could easily be added (although not for history).
@End @SubSection

@SubSection
    @Title { Simple grouping }
    @Tag { resource_structural.task_grouping.simple }
@Begin
@LP
By @I { simple grouping } we mean a variety of simple cases of
task grouping.  These simple cases are handled by a
@I { simple grouper } object.  To make one, call
@ID {0.96 1.0} @Scale @C {
KHE_SIMPLE_GROUPER KheSimpleGrouperMake(KHE_SOLN soln,
  KHE_FRAME days_frame, KHE_TASK_GROUP_DOMAIN_FINDER tgdf, HA_ARENA a);
}
It remains available until @C { a } is deleted.  The days frame
defines the days; it may not be @C { NULL }.
A domain finder @C { tgdf } may already be on hand; if not, one can
be made by calling @C { KheTaskGroupDomainFinderMake } from
Section {@NumberOf resource_structural.task_grouping.domains}.
@PP
A simple grouper can be cleared (returned to its initial state)
by calling
@ID @C {
void KheSimpleGrouperClear(KHE_SIMPLE_GROUPER sg);
}
The basic way to add a task is
@ID @C {
bool KheSimpleGrouperAddTask(KHE_SIMPLE_GROUPER sg, KHE_TASK task);
}
If @C { task } is a proper root task, this adds @C { task } to
@C { sg } and returns @C { true }.  Otherwise it returns
@C { false } and leaves @C { sg } unchanged.  For convenience there is also
@ID @C {
void KheSimpleGrouperAddResourceTypeTasks(KHE_SIMPLE_GROUPER sg,
  KHE_RESOURCE_TYPE rt);
void KheSimpleGrouperAddAssignedResourceTypeTasks(KHE_SIMPLE_GROUPER sg,
  KHE_RESOURCE_TYPE rt);
}
@C { KheSimpleGrouperAddResourceTypeTasks } calls
@C { KheSimpleGrouperAddTask } for each task of type @C { rt }.
@C { KheSimpleGrouperAddAssignedResourceTypeTasks } is the same,
only limited to assigned tasks.  It uses
@C { KheResourceAssignedTaskCount } and @C { KheResourceAssignedTask }
(Section {@NumberOf solutions.tasks.cycle}), so it runs quickly.
Other functions like these could easily be added.
@PP
Once the tasks are all present, a call to
@ID @C {
void KheSimpleGrouperMakeGroups(KHE_SIMPLE_GROUPER sg,
  KHE_SIMPLE_GROUPER_GROUP_TYPE group_type, KHE_SOLN_ADJUSTER sa);
}
carries out the actual grouping.  There are many possible rules for
determining how the tasks are grouped, and @C { group_type } says
which rule to follow.  At present there are two options:
@ID @C {
typedef enum {
  KHE_SIMPLE_GROUPER_GROUP_SAME_RESOURCE,
  KHE_SIMPLE_GROUPER_GROUP_SAME_RESOURCE_CONSECUTIVE
} KHE_SIMPLE_GROUPER_GROUP_TYPE;
}
@C { KHE_SIMPLE_GROUPER_GROUP_SAME_RESOURCE } places all tasks
assigned the same non-@C { NULL } resource into one group.
@C { KHE_SIMPLE_GROUPER_GROUP_SAME_RESOURCE_CONSECUTIVE }
places all tasks assigned the same non-@C { NULL } resource
and running on consecutive days of @C { days_frame } into one
group.  In both cases, unassigned tasks become the only members
of their group.  To ignore them altogether, don't add them.
# In the second case, parameter @C { days_frame } of
# @C { KheSimpleGrouperMake } must have been non-@C { NULL }; it
# defines the days.
@PP
Every task added to the simple grouper will lie in a group, even if
(as for unassigned tasks) it is its group's only member.  If,
when making one group, the task grouper
(Section {@NumberOf resource_structural.task_grouping.task_grouper})
refuses to add some task to a growing group, a new group is begun
with that task for its first member.
@PP
If @C { sa != NULL }, the task group operations used to build the
groups are saved in @C { sa }, so that the groups can be removed
later if desired.  One can call @C { KheSolnAdjusterUndo(sa) }
in the usual way to remove the groups, @C { KheSolnAdjusterRedo(sa) }
to reinstate them, and so on.
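@PP
Putting the pieces together, one plausible workflow (the choice of
group type and the eventual undo are illustrative only) is
@ID @C {
sg = KheSimpleGrouperMake(soln, days_frame, tgdf, a);
KheSimpleGrouperAddAssignedResourceTypeTasks(sg, rt);
KheSimpleGrouperMakeGroups(sg, KHE_SIMPLE_GROUPER_GROUP_SAME_RESOURCE, sa);
... solve with the groups in place ...
KheSolnAdjusterUndo(sa);
}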
@PP
After the groups have been made, functions
@ID @C {
int KheSimpleGrouperGroupCount(KHE_SIMPLE_GROUPER sg);
KHE_TASK KheSimpleGrouperGroup(KHE_SIMPLE_GROUPER sg, int i);
}
can be used to visit the groups' leader tasks.  This is valid even
if the grouping has been undone, although in that case the follower
tasks will no longer be assigned to their leader tasks.
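@PP
For example, to visit the leader tasks of simple grouper
@C { sg } the code is
@ID @C {
for( i = 0;  i < KheSimpleGrouperGroupCount(sg);  i++ )
{
  leader_task = KheSimpleGrouperGroup(sg, i);
  ... visit leader_task ...
}
}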
@End @SubSection

# @SubSection
#     @Title { Grouping by resource }
#     @Tag { resource_structural.task_grouping.resource }
# @Begin
# @LP
# @I { Grouping by resource } is a kind of task grouping,
# obtained by calling
# @ID @C {
# bool KheGroupByResource(KHE_SOLN soln, KHE_RESOURCE_TYPE rt,
#   KHE_OPTIONS options, KHE_SOLN_ADJUSTER sa);
# }
# # bool KheTaskingGroupByResource(KHE_TASKING tasking,
# #   KHE_OPTIONS options, KHE_TASK_SET ts);
# Similarly to grouping by resource constraints, to be described in
# Section {@NumberOf resource_structural.grouping_by_rc}, it groups
# tasks of resource type @C { rt } which lie in adjacent time groups
# of the common frame, and records each adjustment it makes in
# @C { sa } (if @C { sa } is non-@C { NULL }) so that it can be
# undone later.  However, the tasks are chosen in quite a
# different way:  each group consists of a maximal sequence of
# tasks which lie in adjacent time groups of the frame and are
# currently assigned to the same resource.  The thinking is that
# if the solution is already of good quality, it may be advantageous
# to keep these runs of tasks together while trying to assign them
# to different resources using an arbitrary repair algorithm.
# @PP
# It is also possible to pass @C { NULL } for @C { rt }.  In that
# case the algorithm is run for each resource type of @C { soln }'s
# instance in turn.
# @PP
# There are rare cases where incompatibilities between tasks
# prevent them from being grouped.  In those cases, what should
# be one group may turn out to be two or more groups.
# # @PP
# # When a grouping made by @C { KheTaskingGroupByResource } and
# # recorded in a task set is no longer needed, function
# # @C { KheTaskSetUnGroup } (Section {@NumberOf extras.task_sets})
# # may be used to remove it.
# @End @SubSection

# @SubSection
#     @Title { The task resource grouper }
#     @Tag { resource_structural.task_grouping.task_resource_grouper }
# @Begin
# @LP
# A @I { task resource grouper } supports a form of task grouping
# which allows the grouping to be done, undone, and redone at will.
# @PP
# The first step is to create a task resource grouper object, by calling
# @ID @C {
# KHE_TASK_RESOURCE_GROUPER KheTaskResourceGrouperMake(
#   KHE_RESOURCE_TYPE rt, HA_ARENA a);
# }
# This makes a task resource grouper for tasks of type @C { rt }.
# It is deleted when @C { a } is deleted.  Also,
# @ID @C {
# void KheTaskResourceGrouperClear(KHE_TASK_RESOURCE_GROUPER trg);
# }
# clears @C { trg } back to its state immediately after
# @C { KheTaskResourceGrouperMake }.
# @PP
# To add tasks to a task resource grouper, make any number of calls to
# @ID @C {
# bool KheTaskResourceGrouperAddTask(KHE_TASK_RESOURCE_GROUPER trg,
#   KHE_TASK t);
# }
# Each task passed to @C { trg } in this way must be assigned directly
# to the cycle task for some resource @C { r } of type @C { rt }.  The
# tasks passed to @C { trg } by @C { KheTaskResourceGrouperAddTask } which are
# assigned @C { r } at the time they are passed are placed in one group.
# No assignments are made.
# @PP
# If @C { true } is returned by @C { KheTaskResourceGrouperAddTask },
# @C { t } is the @I { leader task } for its group:  the first
# task assigned @C { r } passed to @C { trg }.  If @C { false }
# is returned, @C { t } is not the leader task.
# @PP
# Adding the same task twice is legal but is the same as adding it
# once.  If the task is the leader task, it is reported to be so
# only the first time it is passed.
# @PP
# Importantly, although the grouping is determined by which resources
# the tasks are assigned to, it is only the grouping that the grouper
# cares about, not the resources.  Once the groups are made, the resources
# that determined the grouping become irrelevant to the grouper.
# @PP
# At any time one may call
# @ID @C {
# void KheTaskResourceGrouperGroup(KHE_TASK_RESOURCE_GROUPER trg);
# void KheTaskResourceGrouperUnGroup(KHE_TASK_RESOURCE_GROUPER trg);
# }
# @C { KheTaskResourceGrouperGroup } ensures that, in each group, the
# tasks other than the leader task are assigned directly to the leader
# task.  It does not change the assignment of the leader task.
# @C { KheTaskResourceGrouperUnGroup } ensures that, for each group,
# the tasks other than the leader task are assigned directly to
# whatever the leader task is assigned to (possibly nothing).  As
# mentioned, the resources which defined the groups originally
# are irrelevant to these operations.
# @PP
# If @C { KheTaskResourceGrouperGroup } cannot assign some task to its
# leader, it adds the task's task bounds to the leader and tries again.
# If it cannot add these bounds, or the assignment still does not succeed,
# it aborts.  As well as ungrouping, @C { KheTaskResourceGrouperUnGroup }
# removes any task bounds that were added by
# @C { KheTaskResourceGrouperGroup }.  In detail,
# @C { KheTaskResourceGrouperGroup } records the number of task bounds present
# when it is first called, and @C { KheTaskResourceGrouperUnGroup } removes
# task bounds from the end of the leader task until this number is reached.
# @PP
# A task grouper's tasks may be grouped and ungrouped at will.  This is
# more general than using a solution adjuster, since after ungrouping
# that way there is no way to regroup.
# # The extra power comes from the fact that a task grouper contains,
# # in effect, a task set for each group.
# @PP
# The author has encountered one case where @C { KheTaskResourceGrouperUnGroup }
# fails to remove the task bounds added by @C { KheTaskResourceGrouperGroup }.
# The immediate problem has probably been fixed, although it is hard to
# be sure that it will not recur.  So instead of aborting in that case,
# @C { KheTaskResourceGrouperUnGroup } prints a debug message and stops
# removing bounds for that task.
# @End @SubSection

# @EndSubSections
# @End @Section

# @Section
#     @Title { Task grouping by resource constraints }
#     @Tag { resource_structural.grouping_by_rc }
# @Begin
# @LP
# @I { Task grouping by resource constraints }, or @I { TGRC }, is
# KHE's term for grouping tasks together, forcing the tasks in each
# group to be assigned the same resource, based on analyses of
# resource constraints which suggest that solutions in which the
# tasks in each group are not assigned the same resource are likely
# to be inferior.  That does not mean that those tasks will always
# be assigned the same resource in good solutions, any more than,
# say, a constraint requiring nurses to work complete weekends is
# always satisfied in good solutions.  However, in practice those
# tasks usually do end up being assigned the same resource, so it
# makes sense to require it, at least to begin with.  Later we can
# remove the groups and see what happens.
# @PP
# @C { KheTaskTreeMake } also groups tasks, but its groups are based
# on avoid split assignments constraints, whereas here we make groups
# based on resource constraints.
# @PP
# The function is
# @ID @C {
# bool KheGroupByResourceConstraints(KHE_SOLN soln, KHE_RESOURCE_TYPE rt,
#   KHE_OPTIONS options, KHE_SOLN_ADJUSTER sa);
# }
# There is no @C { tasking } parameter because this kind of grouping
# cannot be applied to an arbitrary set of tasks, as it turns out.
# Instead, it applies to all tasks of @C { soln } whose resource
# type is @C { rt }, which lie in a meet which is assigned a time,
# with some exceptions, discussed below.  If @C { rt } is @C { NULL },
# @C { KheGroupByResourceConstraints } applies itself to each of the
# resource types of @C { soln }'s instance in turn.  It tries to group
# these tasks, returning @C { true } if it groups any.  If
# @C { sa != NULL }, it saves any changes in solution adjuster
# @C { sa } (Section {@NumberOf general_solvers.adjust.adjuster}),
# so that they can be undone later.
# @PP
# @C { KheGroupByResourceConstraints } finds whatever groups it can
# among these tasks.  It makes each such @I { task group } by
# choosing one of its tasks as the @I { leader task } and assigning
# the others to it.  It makes assignments only to proper root tasks
# (non-cycle tasks not already assigned to other non-cycle tasks),
# so it does not disturb existing groups.  But it does take existing
# groups into account:  it will use tasks to which other tasks are
# assigned in its own groups.
# @PP
# Tasks initially assigned a resource participate in TGRC.  Two
# tasks can be put into the same group only if they are not
# assigned different resources initially; and if any of the grouped
# tasks are assigned a resource initially, the whole group is
# assigned that resource finally.
# # {0.97 1.0} @Scale @C { KheMTaskFinderGroupBegin },
# # {0.97 1.0} @Scale @C { KheMTaskFinderGroupAddTask }, and
# # {0.97 1.0} @Scale @C { KheMTaskFinderGroupEnd }
# # from Section {@NumberOf resource_structural.mtask_finding.solver}
# # follow this rule.
# # @PP
# # However, in practice, when @C { KheGroupByResourceConstraints }
# # is called the only tasks assigned a resource have been assigned
# # by @C { KheAssignByHistory }
# # (Section {@NumberOf resource_solvers.assignment.history}).  In
# # effect, those tasks are already grouped.  Given that
# # @C { KheGroupByResourceConstraints } does not take account of
# # history (ideally it would, but it does not at present), the
# # practical way forward is for it to ignore tasks which are
# # assigned a resource, just as though they were not there.
# # @PP
# # Tasks which are initially assigned a resource participate in
# # grouping.  Such a task may have its assignment changed to some
# # other task, but in that case the other task will be assigned the
# # resource.  In other words, if one task is assigned a resource
# # initially, and it gets grouped, then its whole group will be
# # assigned that resource afterwards.  Two tasks initially assigned
# # different resources will never be grouped together.
# @PP
# Tasks whose assignments are fixed (even to @C { NULL }) are
# usually ignored.  They can't join groups, because
# that would change their assignments unless they happen to be
# chosen as leader tasks.  At present there is an awkward
# workaround in place to allow task grouping to cooperate
# with assign by history, in which tasks with fixed assignments
# to non-@C { NULL } resource values participate in grouping.
# Their assignments are unfixed then refixed to other tasks,
# but without changing the resources they are assigned to.
# # It is true that they could become leader tasks, since
# # the assignments of leader tasks are not changed, but there are
# # other considerations when choosing leader tasks, and to add fixing
# # to the mix has seemed to the author to be a bridge too far.  In
# # any case there are not likely to be any fixed unassigned proper
# # root tasks when @C { KheGroupByResourceConstraints } is called.
# # In practice fixed tasks are fixed by @C { KheAssignByHistory }
# # (Section {@NumberOf resource_solvers.assignment.history}), so they
# # are already grouped (in effect) and it is reasonable to ignore them.
# # @PP
# # If @C { ts } is non-@C { NULL }, every task that
# # @C { KheGroupByResourceConstraints } assigns to another task is added
# # to @C { ts }.  So the groups can be removed when they are no longer
# # wanted, by running through @C { ts } and unassigning its tasks.
# # @C { KheTaskSetUnGroup } (Section {@NumberOf extras.task_sets}) does this.
# @PP
# Most of the tasks that participate in grouping are tasks for which
# non-assignment has a non-zero cost.  In practice only a few tasks
# for which non-assignment has cost zero (and assignment has cost
# zero or greater) participate in TGRC, and only when there seems
# to be no other way to build the needed groups.
# @PP
# To summarize, then, @C { KheGroupByResourceConstraints } applies
# to each proper root task of @C { soln } whose resource type is
# @C { rt } (or any type if @C { rt } is @C { NULL }), which lies
# in a meet which is assigned a time, and (usually) for which
# non-assignment has a non-zero cost.
# @PP
# @C { KheGroupByResourceConstraints } uses two kinds of grouping.
# The first, @I { combinatorial grouping }, tries all combinations of
# assignments over a few consecutive days, building a group when just
# one of those combinations has zero cost, according to the cluster
# busy times and limit busy times constraints that monitor those days.
# The second, @I { interval grouping }, uses limit active intervals
# constraints to find different kinds of groups.  All this is
# explained below.
# @PP
# @C { KheGroupByResourceConstraints } consults option
# @C { rs_invariant }, and also
# @TaggedList
# 
# @DTI { @F rs_group_by_rc_off } @OneCol {
# A Boolean option which, when @C { true }, turns task grouping by
# resource constraints off.
# }
# 
# @DTI { @F rs_group_by_rc_max _days } @OneCol {
# An integer option which determines the maximum number of consecutive days
# (in fact, time groups of the common frame) examined by combinatorial
# grouping (Section {@NumberOf resource_structural.grouping_by_rc.applying}).
# Values 0 or 1 turn combinatorial grouping off.  The default value is 3.
# }
# 
# @DTI { @F rs_group_by_rc_combinatorial_off } @OneCol {
# A Boolean option which, when @C { true }, turns combinatorial grouping off.
# }
# 
# @DTI { @F rs_group_by_rc_interval_off } @OneCol {
# A Boolean option which, when @C { true }, turns interval grouping off.
# }
# 
# @EndList
# It also calls @C { KheFrameOption } (Section {@NumberOf extras.frames})
# to obtain the common frame.
# @PP
# The following subsections describe the algorithms used behind the
# scenes for TGRC.  There are many details; some have been omitted.
# The last subsections document the interface used by the TGRC
# modules to communicate with each other, as found in header file
# @C { khe_sr_tgrc.h }.
# # in more detail than the user
# # is likely to need.  Types and functions mentioned in these subsections
# # are declared in header file @C { khe_sr_tgrc.h }, which is not
# # included in file @C { khe_solvers.h }.  So although TGRC is
# # implemented over multiple source files, its internal details are not
# # made available to users.
# # There are two main kinds:  combinatorial
# # grouping and profile grouping.
# # The following subsections describe @C { KheGroupByResourceConstraints }
# # in detail.  It has several parts, which are available separately, as we
# # will see.  For each resource type, it first calls @C { KheMTaskFinderMake }
# # (Section {@NumberOf resource_structural.mtask_finding.solver})
# # to make an mtask finder, and @C { KheCombGrouperMake } (see below) to
# # make a combinatorial grouper object @C { cg }.  Then, using @C { cg },
# # it calls @C { KheCombGrouping } to perform combinatorial grouping, and
# # then @C { KheProfileGrouping } to perform profile grouping, first with
# # @C { non_strict } set to @C { false }, then again with @C { non_strict }
# # set to @C { true }.
# @BeginSubSections

@SubSection
    @Title { Combinatorial grouping }
    @Tag { resource_structural.task_grouping.combinatorial }
@Begin
@LP
Suppose that there are two kinds of shifts, day and night; that each
nurse must be busy on both days of the weekend or neither; and
that nurses cannot work a day shift on the day after a night shift.
Then nurses assigned to the Saturday night shift must work on
Sunday, and so must work the Sunday night shift.  So it makes sense
to group one Saturday night shift with one Sunday night shift, and to
do so repeatedly until night shifts run out on one of those days.
@PP
Suppose that the groups just made consume all the Sunday night shifts.
Then nurses working the Saturday day shifts cannot work the Sunday
night shifts, because the Sunday night shifts are grouped with
Saturday night shifts now, which clash with the Saturday day shifts.
So now it is safe to group one Saturday day shift with one Sunday
day shift, and to do so repeatedly until day shifts run out on one
of those days.
@PP
Groups made in this way can be a big help to solvers.  In instance
@C { COI-GPost.xml }, for example, each Friday night task can be
grouped with tasks for the next two nights.  Good solutions always
assign these three tasks to the same resource, owing to constraints
specifying that the weekend following a Friday night shift must be
busy, that each weekend must be either free on both days or busy on
both, and that a night shift must not be followed by a day shift.
# A time sweep task assignment algorithm (say) cannot look ahead
# and see such cases coming.
@PP
@I { Combinatorial grouping } realizes these ideas.  For each
mtask @C { mt } returned by the mtask finder, it enumerates all
sets of mtasks containing @C { mt } plus other mtasks from a
few adjacent days, and tries assigning an arbitrary resource
to one task from each mtask.  If it finds that only one of those
sets of mtasks leads to zero cost for the resource's relevant
resource monitors, it builds as many groups as it can by taking
one task from each mtask of the set.  For example, in
@C { COI-GPost.xml } it would discover that the only zero-cost
option is to group the Friday night mtask with the Saturday and
Sunday night mtasks, and build groups accordingly.
@PP
The function that does this is
@ID @C {
int KheCombinatorialGrouping(KHE_SOLN soln, KHE_RESOURCE_TYPE rt,
  KHE_OPTIONS options, KHE_TASK_GROUP_DOMAIN_FINDER tgdf,
  KHE_SOLN_ADJUSTER sa);
}
It uses an mtask grouper
(Section {@NumberOf resource_structural.task_grouping.mtask_grouper})
to build groups from @C { soln }'s mtasks of type @C { rt }.
A domain finder @C { tgdf } may already be on hand; if not, one can
be made by calling @C { KheTaskGroupDomainFinderMake } from
Section {@NumberOf resource_structural.task_grouping.domains}.
Any groups made are recorded in
@C { sa } so that they can be undone later, by the usual call to
@C { KheSolnAdjusterUndo }.  The return value is the number of groups made.
@PP
@C { KheCombinatorialGrouping } consults one option from @C { options }:
@TaggedList
 
@DTI { @F rs_combinatorial_grouping_max_days } @OneCol {
An integer option which determines the maximum number of consecutive days
(in fact, time groups of the common frame) examined by combinatorial
grouping.  Values 0 or 1 turn combinatorial grouping off.  The default
value is 3.
}
 
@EndList
Values of @F rs_combinatorial_grouping_max_days larger than 4 are likely
to be too slow, owing to the exponential number of combinations of mtasks tried.
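@PP
The core selection rule---build groups only when exactly one combination
has zero cost---can be sketched in isolation.  This is not KHE code; the
cost array and the function name are invented, standing in for the
evaluation of the resource's relevant monitors over each enumerated set
of mtasks.

```c
/* Return the index of the sole zero-cost combination in costs[0..n-1],
   or -1 if no combination, or more than one, has zero cost.  Grouping
   goes ahead only in the first case.  (Hypothetical sketch; in KHE the
   costs come from evaluating resource monitors over sets of mtasks.) */
int SoleZeroCostCombination(const long *costs, int n)
{
  int i, sole = -1;
  for( i = 0;  i < n;  i++ )
    if( costs[i] == 0 )
    {
      if( sole >= 0 )
        return -1;  /* a second zero-cost combination: ambiguous, no grouping */
      sole = i;
    }
  return sole;
}
```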
# It enumerates
# a space whose elements are sets of mtasks
# (Section {@NumberOf resource_structural.mtask_finding.ops}).  The space
# is defined by @I { requirements } supplied by the caller.  As explained
# in Section {@NumberOf resource_structural.grouping_by_rc.impl2},
# the requirements could state that the sets must
# cover a given time group or mtask, or must not cover a given
# time group or mtask, and so on.  For each set of mtasks
# @M { S } in the search space, it calculates a cost @M { c(S) },
# by evaluating the resource constraints that apply to one
# resource in the part of the cycle covered by @M { S },
# and selects a set @M { S prime } such that @M { c( S prime ) }
# is minimum, or zero.  It then makes one group by selecting one
# task from each mtask of @M { S prime } and grouping those tasks,
# and then repeating that until as many tasks as possible or
# desired have been grouped.
# @PP
# As formulated here, combinatorial grouping is a low-level
# algorithm which finds and groups one set of mtasks @M { S prime }.
# It is called on by higher-level algorithms to do their actual
# grouping.  For example, a higher-level algorithm might try
# combinatorial grouping at various points through the cycle,
# or even try it repeatedly at the same points, as in the
# example above, where grouping the Saturday and Sunday night
# shifts would be one application of combinatorial grouping, then
# grouping the Saturday and Sunday day shifts would be another.
# # @PP
# # As formulated here, one application of combinatorial grouping
# # groups one set of mtasks @M { S prime }.  In the example above,
# # grouping the Saturday and Sunday night shifts would be one
# # application, then grouping the Saturday and Sunday day shifts
# # would be another.
# @PP
# The number of sets of mtasks tried by combinatorial grouping will
# usually be exponential in the number of days involved in the search.
# So the number of days has to be small, unless the choices on each
# day are very limited.
# # In practice that should be
# # enough anyway, given that most groups involve weekends.
# @End @SubSection
# 
# @SubSection
# @Title { Using combinatorial grouping with combination reduction }
# @Tag { resource_structural.grouping_by_rc.applying }
# @Begin
# @LP
# This section describes one way in which the general idea of
# combinatorial grouping, as just presented, is applied by TGRC.
# This way is implemented by function
# @ID @C {
# int KheCombGrouping(KHE_COMB_GROUPER cg, KHE_OPTIONS options,
#   KHE_SOLN_ADJUSTER sa);
# }
# It does what this section describes, and returns the number of
# groups it makes.  If @C { sa != NULL }, any task assignments it
# makes are saved in @C { sa }, so that they can be undone later.
@PP
@BI { Combination reduction. }
# Let @M { m } be the value of the @F rs_group_by_rc_ma x_days option.
# Iterate over all pairs @M { (f, t) }, where @M { f } is a subset of
# the common frame containing @M { k } adjacent time groups, for all
# @M { k } such that @M { 2 <= k <= m }, and @M { t } is an mtask
# that covers @M { f }'s first or last time group.
# @PP
# For each @M { (f, t) } pair, run combinatorial grouping, set up
# to require that @M { t } be covered and that each of the @M { k }
# time groups of @M { f } be free to be either covered or not, and
# only doing grouping when there is a unique zero-cost grouping
# satisfying these requirements.
# # with one mtask requirement with cover `yes' for @M { t }, and one
# # time group requirement with cover `free' for each of the @M { k }
# # time groups of @M { f }.  Set @C { max_num } to @C { INT_MAX },
# # and set @C { cg_variant } to @C { KHE_COMB_VARIANT_SOLE_ZERO }.
# # If there is a unique zero-cost way to group a task of @M { t }
# # with tasks on the preceding or following @M { k - 1 } days,
# # this call will find it and build as many groups as it can.
# # @PP
# # For each @M { (f, t) } pair, run @C { KheCombGrouperSolve }, set up
# # with one mtask requirement with cover `yes' for @M { t }, and one
# # time group requirement with cover `free' for each of the @M { k }
# # time groups of @M { f }.  Set @C { max_num } to @C { INT_MAX },
# # and set @C { cg_variant } to @C { KHE_COMB_VARIANT_SOLE_ZERO }.
# # If there is a unique zero-cost way to group a task of @M { t }
# # with tasks on the preceding or following @M { k - 1 } days,
# # this call will find it and build as many groups as it can.
# @PP
# If @M { f } has @M { k } time groups, each with @M { n } mtasks,
# say, there are up to @M { (n + 1) sup {k - 1} } combinations for
# each run, so @C { rs_group_by_rc_max _days } must be small, say 3,
# or 4 at most.  In any case, unique zero-cost groupings typically
# concern weekends, so larger values are unlikely to yield anything.
# @PP
# If one @M { (f, t) } pair produces some grouping, then return to
# the first pair containing @M { f }.  This handles cases like the
# one described earlier, where a grouping of Saturday and Sunday night
# shifts opens the way to a grouping of Saturday and Sunday day shifts.
# @PP
The remainder of this section describes @I { combination reduction }.
This is a refinement that combinatorial grouping uses to make unique
zero-cost combinations more likely in some cases.
@PP
Some combinations examined by combinatorial grouping may have zero
cost as far as the monitors used to evaluate them are concerned, but
have non-zero cost when evaluated in a different way, involving the
overall supply of and demand for resources.  Such combinations can
be ruled out, leaving fewer zero-cost combinations, and potentially
more task grouping.
@PP
For example, suppose there is a maximum limit on the number of
weekends each resource can work.  If this limit is tight
enough, it will force every resource to work complete weekends,
even without an explicit constraint, if that is the only way
that the available supply of resources can cover the demand
for weekend shifts.  This example fits the pattern to be given
now, setting @M { C } to the constraint that limits the number
of busy weekends, @M { T } to the times of all weekends,
@M { T sub i } to the times of the @M { i }th weekend, and
@M { f tsub i } to the number of days in the @M { i }th weekend.
@PP
Take any set of times @M { T }.  Let @M { S(T) }, the
@I { supply during @M { T } }, be the sum over all resources
@M { r } of the maximum number of times that @M { r } can be busy
during @M { T } without incurring a cost.  Let @M { D(T) }, the
@I { demand during @M { T } }, be the sum over all tasks @M { x }
for which non-assignment would incur a cost, of the number of times
@M { x } is running during @M { T }.  Then @M { S(T) >= D(T) }
or else a cost is unavoidable.
@PP
In particular, take any cluster busy times constraint @M { C } which
applies to all resources, has time groups which are all positive, and
has a non-trivial maximum limit @M { M }.  (The analysis also applies
when the time groups are all negative and there is a non-trivial
minimum limit, setting @M { M } to the number of time groups minus
the minimum limit.)  Suppose there are @M { n } time groups
@M { T sub i }, for @M { 1 <= i <= n }, and let their union be @M { T }.
@PP
Let @M { f tsub i } be the number of time groups from the common
frame with a non-empty intersection with @M { T sub i }.  This is
the maximum number of times from @M { T sub i } during which any one
resource can be busy without incurring a cost, since a resource can
be busy for at most one time in each time group of the common frame.
@PP
Let @M { F } be the sum of the largest @M { M } of the @M { f tsub i }
values.  This is the maximum number of times from @M { T } that
any one resource can be busy without incurring a cost:  if it is
busy for more times than this, it must either be busy for more
than @M { f tsub i } times in some @M { T sub i }, or else it
must be busy for more than @M { M } time groups, violating the
constraint's maximum limit.
@PP
If there are @M { R } resources altogether, then the supply during
@M { T } is bounded by
@ID @Math { S(T) <= RF }
since @M { C } is assumed to apply to every resource.
@PP
As explained above, to avoid cost the demand must not exceed the
supply, so
@ID @Math { D(T) <= S(T) <= RF }
Furthermore, if @M { D(T) >= RF }, then any failure to maximize
the use of workload will incur a cost.  That is, every resource
which is busy during @M { T sub i } must be busy for the full
@M { f tsub i } times in @M { T sub i }.
@PP
So the effect on grouping is this:  if @M { D(T) >= RF }, a resource
that is busy in one time group of the common frame that overlaps
@M { T sub i } should be busy in every time group of the common
frame that overlaps @M { T sub i }.  Combination reduction searches
for constraints @M { C } that have this effect, and informs combinatorial
grouping about what it found by changing the requirements
for some time groups from `a group is free to cover this time group,
or not' to `a group must cover this time group if and only if it
covers the previous time group'.  When searching for groups, the
option of covering some of these time groups but not others is removed.
With fewer options, there is more chance that some combination
might be the only one with zero cost, allowing more task grouping.
@PP
Instance @C { CQ14-05 } has two constraints that limit busy weekends.
One applies to 10 resources and has maximum limit 2; the other applies
to the remaining 6 resources and has maximum limit 3.  So combination
reduction actually takes sets of constraints with the same time groups
that together cover every resource once.  It uses the constraint classes
module (Section {@NumberOf resource_structural.constraint_classes}) to
find these sets.  Instead of @M { RF } (above), it uses the sum over
the set's constraints @M { c sub j } of @M { R sub j F sub j }, where
@M { R sub j } is the number of resources that @M { c sub j } applies
to, and @M { F sub j } is the sum of the largest @M { M sub j } of the
@M { f tsub i } values, where @M { M sub j } is the maximum limit of
@M { c sub j }.  The @M { f tsub i } are the same for all @M { c sub j }.
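@PP
The arithmetic of the previous paragraphs can be sketched as follows.
The names are invented; in KHE the @M { f tsub i }, @M { R sub j },
and @M { M sub j } values come from the common frame and the constraint
classes module.

```c
#include <stdlib.h>

/* Comparison function for sorting int values into decreasing order */
static int DecreasingInt(const void *a, const void *b)
{
  return *(const int *) b - *(const int *) a;
}

/* Sum of the largest max_limit of the n values in f (the F value of one
   constraint).  A copy is sorted in buf so that f itself is untouched. */
long LargestSum(const int *f, int n, int max_limit, int *buf)
{
  long res = 0;  int i;
  for( i = 0;  i < n;  i++ )
    buf[i] = f[i];
  qsort(buf, n, sizeof(int), &DecreasingInt);
  for( i = 0;  i < n && i < max_limit;  i++ )
    res += buf[i];
  return res;
}

/* Supply bound for a class of constraints sharing the same f values:
   the sum over the constraints of R_j * F_j (hypothetical sketch) */
long ClassSupplyBound(const int *f, int n, const int *R, const int *M,
  int num_constraints, int *buf)
{
  long res = 0;  int j;
  for( j = 0;  j < num_constraints;  j++ )
    res += (long) R[j] * LargestSum(f, n, M[j], buf);
  return res;
}
```

With the figures quoted for @C { CQ14-05 } (10 resources with maximum
limit 2, 6 resources with maximum limit 3, two-day weekends, and, say,
4 weekends), the bound is @M { 10 * 4 + 6 * 6 = 76 } busy weekend days.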
@End @SubSection

@SubSection
    @Title { Weekend grouping }
    @Tag { resource_structural.task_grouping.weekend_grouping }
@Begin
@LP
@I { Complete weekends } constraints, saying that each resource
should either be busy on both days of each weekend, or free on
both days, are common in nurse rostering.  They help to minimize
the number of weekends that nurses work.  In this section we
carry out some @I { weekend grouping }:  task grouping inspired
by complete weekends constraints.  We also deal with an obscure
issue that arises when there are more tasks to assign on one of
the two days than on the other.  Function
@ID @C {
void KheWeekendGrouping(KHE_SOLN soln, KHE_RESOURCE_TYPE rt,
  KHE_OPTIONS options, KHE_TASK_GROUP_DOMAIN_FINDER tgdf,
  KHE_SOLN_ADJUSTER sa);
}
does this, for @C { soln }'s resources and tasks of type @C { rt }.
A domain finder @C { tgdf } may already be on hand; if not, one can
be made by calling @C { KheTaskGroupDomainFinderMake } from
Section {@NumberOf resource_structural.task_grouping.domains}.
If @C { sa != NULL }, solution adjuster @C { sa }
records the changes, so that they can be undone later.  Parameter
@C { options } is used to access the common frame and the event
timetable monitor.
@PP
The rest of this section explains in detail what
@C { KheWeekendGrouping } does.  First, it finds all cluster busy
times constraints which apply to resources of type @C { rt }, have
non-zero weight, contain exactly two time groups (both positive and
subsets of adjacent days of the common frame), and have minimum
limit 2, maximum limit 2, and allow zero flag @C { true }.  These
are the complete weekends constraints, although no-one checks, or
needs to check, that the time groups represent a Saturday and Sunday.
Cluster busy times constraints have offsets, each defining
a separate constraint.  So we really have a set of `constraint
plus offset' objects, but we'll call them constraints for
simplicity of presentation.  Using the constraint class finder
(Section {@NumberOf resource_structural.constraint_classes}),
@C { KheWeekendGrouping } groups these constraints into classes,
placing constraints into the same class when they have the same
time groups, taking offsets into account.
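@PP
The filter just described amounts to a simple predicate over constraint
attributes.  The following sketch is illustrative only; the struct and
field names are invented and do not belong to KHE's API.

```c
#include <stdbool.h>

/* Hypothetical summary of one cluster busy times constraint (plus offset) */
typedef struct {
  int weight;            /* combined weight of the constraint            */
  int time_group_count;  /* number of time groups                        */
  bool all_positive;     /* true if every time group is positive         */
  bool adjacent_days;    /* time groups lie in adjacent days of frame    */
  int min_limit, max_limit;
  bool allow_zero;
} CBT_SUMMARY;

/* True if the constraint looks like a complete weekends constraint */
bool IsCompleteWeekends(const CBT_SUMMARY *c)
{
  return c->weight > 0 && c->time_group_count == 2 && c->all_positive &&
    c->adjacent_days && c->min_limit == 2 && c->max_limit == 2 &&
    c->allow_zero;
}
```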
@PP
It then takes each class @M { C } in turn.  It checks that @M { C }'s
constraints, taken together, apply to every resource of type @C { rt }.
If not, @M { C } is skipped, because not all resources require complete
weekends.  So we can now assume that @M { C } identifies a weekend
which should be complete for all resources of type @C { rt }, and
we can carry out weekend grouping for that weekend, as follows.
@PP
@BI { Grouping required tasks. }
A task is @I required (meaning assignment of the task is
required) when @C { KheTaskNonAsstAndAsstCost }
(Section {@NumberOf resource_structural.mtask_finding.ops})
says that its non-assignment cost exceeds its assignment cost.
This has nothing to do with required constraints; the cost need
not be a hard cost.  A task is @I optional (meaning assignment
of the task is optional) when it is not required.
@PP
When complete weekends constraints are present, it makes sense to group
one required task from one weekend day with one from the other weekend
day, and to do that as many times as possible.  Combinatorial grouping
(Section {@NumberOf resource_structural.task_grouping.combinatorial})
will often group such tasks, but it is worth going beyond what it
would do, and grouping as many required tasks as possible.
@PP
Build a bipartite graph containing one left-hand node for each
required task @M { t sub 1 } on the first day, and one right-hand
node for each required task @M { t sub 2 } on the second day.
(Omit multi-day tasks that are running on both days, but include
multi-day tasks that end on the first day or begin on the second
day.)  Join each left-hand node to each right-hand node by an edge
whenever the task grouper indicates that the two tasks can be
grouped.  The edge cost is determined as follows.
@PP
The cost of an edge from @M { t sub 1 } to @M { t sub 2 } is a triple
@M { ( c sub 1 , c sub 2 , c sub 3 ) }.  Addition is defined component
by component in the obvious way, and comparisons are defined
lexicographically, that is, @M { c sub 1 } is most important, and
only if two costs have equal values for @M { c sub 1 } do we
check @M { c sub 2 }, and only if two costs have equal values for
both @M { c sub 1 } and @M { c sub 2 } do we check @M { c sub 3 }.
@PP
We want @M { c sub 1 } to be the cost of a solution in which @M { t sub 1 }
and @M { t sub 2 } are assigned the same resource, insofar as that
can be determined in isolation from other assignments.  Let @M { S }
be the solution we are starting from, and let @M { c(S) } be its
cost.  Let @M { c( t sub 1 , t sub 2 ) } be the cost of grouping
@M { t sub 1 } with @M { t sub 2 }, according to @C { KheTaskGrouperCost }
(Section {@NumberOf resource_structural.task_grouping.task_grouper}).
This is the cost to resource monitors of assigning the same resource
to both tasks.  Let @M { n( t sub i ) } and @M { a( t sub i ) } be
the non-assignment cost and assignment cost of @M { t sub i },
as returned by @C { KheTaskNonAsstAndAsstCost }
(Section {@NumberOf resource_structural.mtask_finding.ops}).  These
are the costs to event resource monitors of not assigning and of
assigning these tasks.
@PP
The cost of a solution @M { S prime } which is @M { S } plus the
assignment of some resource to @M { t sub 1 } and @M { t sub 2 } is
@ID @Math {
c( S prime ) = c(S) + c( t sub 1 , t sub 2 )
  + a( t sub 1 ) - n( t sub 1 )
  + a( t sub 2 ) - n( t sub 2 )
}
Both @M { a( t sub 1 ) - n( t sub 1 ) } and
@M { a( t sub 2 ) - n( t sub 2 ) } are negative, because
@M { t sub 1 } and @M { t sub 2 } are required.
We let @M { c sub 1 = c( S prime ) }.
@PP
We use @M { c sub 2 } to give preference to tasks which
have the same offset in the time groups of their day.  That is,
we prefer to group day shifts with day shifts, night shifts with
night shifts, and so on.  To do this, let @M { o(t) } be the
offset of task @M { t } in the time group of its day.  The
early shift has offset 0, the day shift has offset 1, and so on.
Then we set
@ID @Math {
c sub 2 = bar o( t sub 1 ) - o( t sub 2 ) bar
}
which is 0 when @M { t sub 1 } and @M { t sub 2 } have the
same offsets.
@PP
Finally, within each shift we prefer to group together tasks which
have similar domains.  So we use function
@C { KheResourceGroupSymmetricDifferenceCount }
(Section {@NumberOf resource_groups}) applied to the
domains of the two tasks as the value of @M { c sub 3 }.
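@PP
The edge cost arithmetic can be sketched like this.  The function and
type names are invented; KHE's weighted bipartite matching module has
its own cost type, and the inputs here stand in for the values returned
by @C { KheTaskGrouperCost }, @C { KheTaskNonAsstAndAsstCost }, and
@C { KheResourceGroupSymmetricDifferenceCount }.

```c
#include <stdlib.h>

/* Lexicographically compared edge cost (c1 most significant) */
typedef struct { long c1, c2, c3; } EDGE_COST;

/* c1: estimated cost of a solution in which t1 and t2 are assigned the
   same resource:  the initial cost c_S, plus the grouping cost c12 from
   the task grouper, plus a(t) - n(t) for each task (negative, since the
   tasks are required).
   c2: |o1 - o2|, preferring equal shift offsets within the day.
   c3: symmetric difference of the domains, preferring similar domains. */
EDGE_COST EdgeCost(long c_S, long c12, long a1, long n1, long a2, long n2,
  int o1, int o2, long domain_sym_diff)
{
  EDGE_COST res;
  res.c1 = c_S + c12 + (a1 - n1) + (a2 - n2);
  res.c2 = labs((long) o1 - (long) o2);
  res.c3 = domain_sym_diff;
  return res;
}

/* Lexicographic comparison:  negative, zero, or positive */
int EdgeCostCmp(EDGE_COST x, EDGE_COST y)
{
  if( x.c1 != y.c1 ) return x.c1 < y.c1 ? -1 : 1;
  if( x.c2 != y.c2 ) return x.c2 < y.c2 ? -1 : 1;
  if( x.c3 != y.c3 ) return x.c3 < y.c3 ? -1 : 1;
  return 0;
}
```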
# (KHE's weighted bipartite matching module offers edge costs which
# are triples of integers @M { ( c sub 1 , c sub 2 , c sub 3 ) },
# compared lexicographically.  So we let @M { c sub 1 } be the hard
# component of @M { c( S prime ) }, @M { c sub 2 } be the soft
# component of @M { c( S prime ) }, and @M { c sub 3 } be the
# absolute value of the difference between the offsets of the times
# of the tasks in the time groups of their day.)
@PP
We then find a minimum-cost matching in this graph and
make the indicated groups.
@PP
@BI { Unbalanced demand. }
Now for the obscure issue.  Suppose one day of a weekend identified
(as above) by a class @M { C } of complete weekends constraints
is @I busier (has more required tasks) than the other.  Then
something has to give.  There are three possibilities (they could
occur together):
@NumberedList

@LI {
Some nurses have complete weekends defects.
}

@LI {
On the busier day, some required tasks are unassigned.
}

@LI {
On the other (less busy) day, some optional tasks are assigned.
}

@EndList
The first two give rise directly to defects---they are highly visible.
The third does not give rise to any defects directly, unless the
optional tasks have non-zero assignment costs, but it will have a
cost if there is a general shortage of nurses, because assigning
nurses to optional tasks adds to the general overload.  Despite
having no direct defects, the third possibility might not be best.
In that case we want to steer solvers away from it, without
biasing them towards either of the others.
@PP
If the less busy day contains no optional tasks, there is nothing to
do.  Otherwise, we proceed as follows.  As above, let @M { S } be
the solution we are starting from, and let @M { c(S) } be its cost.
@PP
Let @M { c sub "opt" } be the minimum, over all optional tasks
@M { t prime } on the less busy day, of
@M { a( t prime ) - n( t prime ) }.  (Each
@M { a( t prime ) - n( t prime ) } is non-negative,
because @M { t prime } is optional.)  We use @M { c sub "opt" }
as our estimate of the event resource cost of assigning one of
these optional tasks.  The minimum is appropriate because in
practice few optional tasks are assigned, and those usually
have minimum assignment cost.
@PP
Suppose that the supply of resources is not sufficient to cover the
demand for resources.  Then every assignment of a resource to a task
increases the general overload, and so incurs a cost in violated
resource monitors concerned with total workload.  This cost is at
least the value returned by @C { KheResourceDemandExceedsSupply }
(Section {@NumberOf resource_structural.supply_and_demand.balance})
in its @C { *resource_cost } parameter.  We call this cost
@M { c sub "over" } and include it every time we assign a task.  If
supply is sufficient to cover demand, we let @M { c sub "over" = 0 }.
@PP
Let @M { c(C) } be the cost of violating the constraints of
@M { C }.  The deviation can only be 1, so we take @M { c(C) }
to be the total weight of the constraints of @M { C }.
(This is wrong when different constraints of @M { C }
apply to different resources, but we ignore this problem.)
# When
# @M { C } represents a set of constraints, we use the minimum of
# their weights as our estimate of the cost of violating @M { C }.
@PP
Let @M { t } be an unmatched required task with non-assignment
and assignment costs @M { n(t) } and @M { a(t) } as usual.
We now repeat the three cases above with particular reference
to @M { t }:
@NumberedList

@LI @OneRow {
A resource is assigned to @M { t } but not to any task on
the other weekend day.  The cost is
@ID @Math {
c sub 1 = c(S) + a(t) - n(t) + c sub "over" + c(C)
}
That is, the initial solution cost plus the event resource cost
of assigning @M { t } (which will be negative), plus the cost of
adding one task to total demand, plus the cost of violating @M { C }.
}

@LI @OneRow {
Leave @M { t } unassigned.  Then the cost is
@ID @Math {
c sub 2 = c(S)
}
since the initial solution remains unchanged.  This cost will
include @M { n(t) }.
}

@LI @OneRow {
A resource is assigned to @M { t } and to an optional task
on the other weekend day.  The cost is
@ID @Math {
c sub 3 = c(S) + a(t) - n(t) + c sub "opt" + 2c sub "over"
}
That is, the initial cost, plus the event resource
cost of assigning @M { t } (this will be negative), plus
the cost of assigning an optional task, plus the cost of
adding two tasks to total demand.
}

@EndList
When @M { c sub "over" = 0 }, @M { c sub 3 } is just as visible
as @M { c sub 1 } and @M { c sub 2 }, so there is no need to
steer solvers away from it; they can see it for themselves.  So
it is now clear that we should do nothing when @M { c sub "over" = 0 }.
@PP
Assuming now that @M { c sub "over" > 0 }, we want to prevent the
third case from being chosen for @M { t } when either
@M { c sub 3 > c sub 1 } or @M { c sub 3 > c sub 2 }, while
allowing the other two cases.  It seems that the only way to do
that is to fix all the unassigned optional tasks on the less busy
day, so that they cannot be assigned resources.  So if either of
these two conditions holds for any unmatched required task @M { t }
on the busier day, we fix all of the unassigned optional tasks on
the less busy day.  This will prevent the third case from being
chosen for @I all unmatched required tasks, not just for @M { t };
but we live with that.  The fixes are made via solution adjuster
@C { sa } so that they can be undone later, as usual.
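@PP
A minimal sketch of this decision, with invented names, is as follows.
The three formulas are the ones just given; when @M { c sub "over" > 0 }
and the third case is worse than one of the others, the unassigned
optional tasks on the less busy day are fixed.

```c
#include <stdbool.h>

/* Costs of the three possibilities for an unmatched required task t:
   assign t alone (violating the weekends constraints), leave t
   unassigned, or assign t plus an optional task on the other day */
typedef struct { long c1, c2, c3; } THREE_CASES;

THREE_CASES ThreeCases(long c_S, long a_t, long n_t, long c_opt,
  long c_over, long c_C)
{
  THREE_CASES res;
  res.c1 = c_S + (a_t - n_t) + c_over + c_C;
  res.c2 = c_S;
  res.c3 = c_S + (a_t - n_t) + c_opt + 2 * c_over;
  return res;
}

/* True if the unassigned optional tasks on the less busy day should be
   fixed:  c_over > 0 and the third case is worse than one of the others */
bool ShouldFixOptionalTasks(THREE_CASES t, long c_over)
{
  return c_over > 0 && (t.c3 > t.c1 || t.c3 > t.c2);
}
```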
@End @SubSection

@SubSection
  @Title { Interval grouping }
  @Tag { resource_structural.task_grouping.interval_grouping }
@Begin
@LP
@I { Interval grouping } finds task groups based on limit active
intervals constraints, when freedom of choice is limited.
A good example is Constraint:17 from instance INRC2-4-100-0-1108,
which limits the number of consecutive night tasks to between 4
and 5.  Interval grouping will use this to place all of the night
tasks into groups of 4 or 5 tasks running on consecutive days.
Function
@ID @C {
int KheIntervalGrouping(KHE_SOLN soln, KHE_RESOURCE_TYPE rt,
  KHE_OPTIONS options, KHE_TASK_GROUP_DOMAIN_FINDER tgdf,
  KHE_SOLN_ADJUSTER sa);
}
finds a minimum-cost grouping of this kind for tasks of type
@C { rt }.
A domain finder @C { tgdf } may already be on hand; if not, one can
be made by calling @C { KheTaskGroupDomainFinderMake } from
Section {@NumberOf resource_structural.task_grouping.domains}.
As usual, any groups made
are recorded in @C { sa } so that they can be removed later.
@PP
Whether interval grouping is worth doing for a particular limit
active intervals constraint @M { C } depends on how constraining
@M { C } is.  @C { KheIntervalGrouping } has already decided for
you that the only limit active intervals constraints worth handling
are those whose time groups are all positive and contain just one
time each:  constraints on consecutive day tasks or consecutive
night tasks, not constraints on consecutive busy or free days.
Beyond that it relies on these options:
@TaggedList

@DTI { @F rs_interval_grouping_min } @OneCol {
An integer-valued option with default value 4.  To be chosen for
interval grouping, @M { C }'s minimum limit must be at least the
value of this option.
}
 
@DTI { @F rs_interval_grouping_range } @OneCol {
An integer-valued option with default value 2.  To be chosen for
interval grouping, the difference between @M { C }'s minimum and
maximum limits must be at most this value.
}
 
@EndList
Limit active intervals constraints with large minimum limits and
small differences between their minimum and maximum limits are
good candidates for interval grouping because they strongly
constrain solutions, while being hard to handle by other means.
There is also
@TaggedList

@DTI { @F rs_interval_grouping_two_three } @OneCol {
A Boolean option with default value @C { false }.  When it is
set to @C { true }, then in addition to the above, @M { C }
will be chosen when its minimum limit is 2 and its maximum
limit is 3 or more; but the algorithm runs as though the
maximum limit was 3.
}
 
@EndList
This option has been included as an experiment.
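@PP
The selection test implied by these three options can be sketched as
follows; the function name is hypothetical, and in KHE the test is
applied to constraint classes rather than to a bare pair of limits:

```c
#include <stdbool.h>

/* is a constraint class with limits min_limit and max_limit chosen
   for interval grouping, given the three option values? */
static bool chosen_for_grouping(int min_limit, int max_limit,
  int opt_min,        /* rs_interval_grouping_min, default 4 */
  int opt_range,      /* rs_interval_grouping_range, default 2 */
  bool opt_two_three) /* rs_interval_grouping_two_three, default false */
{
  if (min_limit >= opt_min && max_limit - min_limit <= opt_range)
    return true;
  if (opt_two_three && min_limit == 2 && max_limit >= 3)
    return true;   /* handled as though the maximum limit were 3 */
  return false;
}
```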
@PP
We have spoken of individual constraints, but in fact
@C { KheIntervalGrouping } groups the limit active intervals constraints
into classes (Section {@NumberOf resource_structural.constraint_classes})
and does its work for each class.  So the minimum limit could come
from one constraint while the maximum limit comes from another.
@PP
The solver's implementation uses dynamic programming.  It builds
all solutions up to the end of the first day, then builds all
solutions up to the end of the second day, and so on, using
dominance testing to reduce the number of solutions kept on each
day before starting the next.
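@PP
A minimal sketch of the dominance step, reduced here to `among
solutions agreeing in state, keep only the cheapest'; the real
dominance testing is more elaborate, and these names are hypothetical:

```c
/* a solution at the end of one day, abstracted to a state and a cost */
typedef struct { int state; int cost; } SOLN;

/* reduce solns[0 .. n-1] so that each distinct state keeps only its
   cheapest solution; return the number of solutions kept */
static int dominance_prune(SOLN solns[], int n)
{
  int kept = 0;
  for (int i = 0; i < n; i++)
  {
    int j;
    for (j = 0; j < kept; j++)
      if (solns[j].state == solns[i].state)
        break;
    if (j == kept)
      solns[kept++] = solns[i];            /* first solution in this state */
    else if (solns[i].cost < solns[j].cost)
      solns[j] = solns[i];                 /* cheaper solution, same state */
  }
  return kept;
}

/* demo: three solutions in two states reduce to two solutions */
static int demo_prune(void)
{
  SOLN s[3] = { { 0, 5 }, { 1, 3 }, { 0, 2 } };
  int kept = dominance_prune(s, 3);
  return kept == 2 && s[0].cost == 2 && s[1].cost == 3 ? 1 : 0;
}
```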
@PP
Interval grouping is simple enough in principle, but there are
many details, needed partly to estimate solution cost realistically,
and partly to save time by avoiding creating
solutions that would be deleted anyway by dominance testing.  A full
description appears in Appendix {@NumberOf interval_grouping}.
@PP
@BI { Running time. }
Because interval grouping does not actually assign nurses to shifts,
it can only reasonably consume a small amount of running time---a
few seconds, say.  Despite a careful implementation, it is not always
fast enough.  The rest of this section presents ways to reduce its
running time.  These remove the guarantee of optimality, but
in practice that may not matter.
# , at the cost of losing the guarantee of
# optimality.
# This method is intimately connected with the possibility
# that some tasks might end up remaining unassigned in the final solution.
@PP
Suppose that one night task is wanted on Monday and Wednesday, and
three are wanted on Tuesday.  We can end one sequence on Tuesday and
start another on Tuesday, but that covers only two of the three
Tuesday night tasks.  The best thing to do with the third one may
be to place it in a group by itself and leave it unassigned.  There
will be a cost for this, but it may well be less than the cost of
assigning the task and thereby creating a sequence of length 1.
@PP
The good news is that interval grouping understands that a group may
be left unassigned.  It assigns to each group a cost which is the
smaller of the cost incurred when the group is assigned a resource,
and the cost incurred when the group is not assigned a resource.  It
chooses the smaller of the two costs because it anticipates that
a subsequent solver will choose to assign the group or not depending
on which cost is smaller.  In practice, only very small groups
(containing one task, or two at most) cost less when they are not
assigned a resource.
@PP
The bad news is that small groups (other than unavoidable ones, as
in the example above) consume a lot of running time and are rarely
useful.  Why have a group of length 2 followed by a group of length
3, when the minimum length is 4?  So our plan for reducing running
time is to disallow @I { undersized groups } (groups whose duration
is less than the minimum limit) except when unavoidable.  This speeds
up the solver and only rarely produces a sub-optimal solution.
@PP
When exploring all ways to continue a given solution into the next
day, each group can either be made longer or not.  To save time,
simply do not try the second option.  The solver offers three
settings for this:  one can do it for no groups (the original
algorithm), or for all undersized groups, or for most undersized
groups, which means that for each set of equivalent groups, all
but one of them must be made longer.  For example, suppose that
there are three identical undersized groups, requiring an
ordinary nurse on days 2Mon, 2Tue, and 2Wed.  Then the `most'
option would require two of these groups to be made longer but
allow the third to remain undersized.
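@PP
The `most' setting can be sketched as follows, assuming a hypothetical
array @C { class_of } that identifies each group's set of equivalent
groups:

```c
#include <stdbool.h>

#define MAX_GROUPS 16

/* within each set of equivalent undersized groups, mark all but one
   as having to be made longer; return the number marked */
static int mark_must_grow(const int class_of[], int n, bool must_grow[])
{
  bool exempt_seen[MAX_GROUPS] = { false };
  int marked = 0;
  for (int g = 0; g < n; g++)
  {
    if (!exempt_seen[class_of[g]])
    {
      exempt_seen[class_of[g]] = true;  /* one group per set may stay small */
      must_grow[g] = false;
    }
    else
    {
      must_grow[g] = true;
      marked++;
    }
  }
  return marked;
}

/* demo: three identical undersized groups; two must be made longer */
static int demo_most(void)
{
  int class_of[3] = { 0, 0, 0 };
  bool must_grow[3];
  int marked = mark_must_grow(class_of, 3, must_grow);
  return marked == 2 && !must_grow[0] && must_grow[1] && must_grow[2];
}
```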
@PP
Given a set of groups which must be made longer (either all
undersized groups, or most undersized groups), the solver
offers two ways to make them longer.  The first is to try
all ways, as in the original algorithm.  This itself can
be too slow.  The second is to build a bipartite graph
joining the groups that must be made longer and the tasks
available for making them longer, and assign the tasks to
the groups in only the one way defined by a maximum matching.
The matching maximizes the sum of the cardinalities of the
domains of the result task groups.
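@PP
An unweighted sketch of the matching idea, using Kuhn's
augmenting-path algorithm; the solver's actual matching is weighted,
maximizing the sum of domain cardinalities, and all names here are
hypothetical:

```c
#include <stdbool.h>
#include <string.h>

#define MAX_N 8

static bool adj[MAX_N][MAX_N];  /* adj[g][t]: task t can extend group g */
static int match_task[MAX_N];   /* task -> matched group, or -1 */
static bool visited[MAX_N];
static int num_tasks;

/* try to find an augmenting path giving group g a task */
static bool try_group(int g)
{
  for (int t = 0; t < num_tasks; t++)
    if (adj[g][t] && !visited[t])
    {
      visited[t] = true;
      if (match_task[t] < 0 || try_group(match_task[t]))
      {
        match_task[t] = g;
        return true;
      }
    }
  return false;
}

/* return the size of a maximum matching of groups to tasks */
static int max_matching(int num_groups, int n_tasks)
{
  int size = 0;
  num_tasks = n_tasks;
  memset(match_task, -1, sizeof match_task);
  for (int g = 0; g < num_groups; g++)
  {
    memset(visited, 0, sizeof visited);
    if (try_group(g))
      size++;
  }
  return size;
}

/* demo: group 0 can take task 0 or 1, group 1 only task 0; the
   augmenting path lets both groups be made longer */
static int demo_matching(void)
{
  memset(adj, 0, sizeof adj);
  adj[0][0] = adj[0][1] = true;
  adj[1][0] = true;
  return max_matching(2, 2);
}
```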
@PP
All this boils down to five options, expressed by this
enumerated type:
@ID @C {
typedef enum {
  KHE_IGU_NONE,
  KHE_IGU_MOST_ASSIGN,
  KHE_IGU_MOST_MATCH,
  KHE_IGU_ALL_ASSIGN,
  KHE_IGU_ALL_MATCH
} KHE_IGU_TYPE;
}
Here @C { IGU } means `interval grouping for undersized groups',
@C { NONE } means `select no undersized groups', @C { MOST } means
`select most undersized groups', and @C { ALL } means `select
all undersized groups'.  @C { ASSIGN } means `assign the selected
groups in all ways', and @C { MATCH } means `assign the selected
groups using a bipartite matching'.  As we proceed down the list, we
can expect solutions to become less optimal but to be found faster.
@PP
Rather than requiring the user to select one of these five
options, a more nuanced approach is supported, which chooses
depending on the amount of running time available, via option
@TaggedList

@DTI { @F rs_interval_grouping_daily_time_limit } @OneCol {
This option determines the amount of time that interval grouping
can consume on each day, as well as the IGU type to use, as
explained in detail now.
}
 
@EndList
In its simplest form, the value is a time limit.  For example,
@C { "5.0" } means that interval grouping can spend at most 5
seconds on any one day; after that it goes on to the next day.
The IGU type to use is @C { KHE_IGU_NONE }, that is, no special
arrangements are made for undersized groups.
@PP
In general, however, the value has the form
@ID @C { label time_limit ... label time_limit }
where @C { label } is @C { none }, @C { most_assign },
@C { most_match }, @C { all_assign }, or @C { all_match }, 
defining an IGU type, and @C { time_limit } is a time
limit.  The time limits must be non-decreasing.  For example,
@ID @C { none 3.0 most_match 5.0 }
is a reasonable value.  It says that when solving begins on any
one day, the IGU type is @C { KHE_IGU_NONE } initially.  After a
total of 3.0 seconds have been consumed, the IGU type switches to
@C { KHE_IGU_MOST_MATCH }.  Then after a total of 5.0 seconds have
been consumed, that day ends.  The value may begin with a time
limit rather than with a label, in which case label @C { none }
is inserted at the front.  The last time limit may be
@C { - }, meaning no limit as usual.
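@PP
An illustrative parser for this value format (not the one KHE uses,
and with hypothetical names) may clarify the conventions just
described:

```c
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_PHASES 8

/* one (label, time_limit) phase of the option's value */
typedef struct { char label[16]; double limit; bool unlimited; } PHASE;

/* parse a value such as "none 3.0 most_match 5.0"; a value beginning
   with a time limit gets label "none", and a final "-" means no limit */
static int parse_daily_limit(const char *value, PHASE phases[])
{
  char tok[16];
  int n = 0, pos = 0, used;
  bool have_label = false;
  while (n < MAX_PHASES && sscanf(value + pos, "%15s%n", tok, &used) == 1)
  {
    pos += used;
    if (tok[0] >= 'a' && tok[0] <= 'z')
    {
      strcpy(phases[n].label, tok);          /* an IGU type label */
      have_label = true;
    }
    else
    {
      if (!have_label)
        strcpy(phases[n].label, "none");     /* leading time limit */
      phases[n].unlimited = strcmp(tok, "-") == 0;
      phases[n].limit = phases[n].unlimited ? 0.0 : atof(tok);
      n++;
      have_label = false;
    }
  }
  return n;
}

/* demo: the default value parses into two phases */
static int demo_parse(void)
{
  PHASE p[MAX_PHASES];
  int n = parse_daily_limit("none 3.0 most_match 5.0", p);
  return n == 2 && strcmp(p[1].label, "most_match") == 0
    && p[0].limit == 3.0 && p[1].limit == 5.0;
}
```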
@PP
The default value of @F rs_interval_grouping_daily_time_limit is
@ID @C {
none 3.0 most_match 5.0
}
Five seconds per day is a reasonable hard limit; the rest
has been determined by experiment.
@PP
Although we have described these limits as applying on each day,
in fact they apply cumulatively, as follows.  Suppose that the
solve of day @M { i-1 } finishes @M { t sub {i-1} } seconds after
solving begins.  We must have @M { t sub {i-1} <= (i-1) L }, where
@M { L } is the hard time limit at the end of the option, because
@M { L } applies to each of the @M { i - 1 } days so far.  Then
the time limits for the next day are the values in the option plus
the time @M { (i-1) L - t sub {i-1} } left over from previous days.
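@PP
The cumulative rule amounts to this small calculation (function name
hypothetical):

```c
/* the limit for day i (counting from 1) is the option's value plus the
   time (i-1)L - t_prev left over from previous days, where L is the
   hard limit and t_prev is the time at which day i-1 finished */
static double day_limit(double option_limit, double hard_limit_L,
  int day_index, double t_prev)
{
  double left_over = (day_index - 1) * hard_limit_L - t_prev;
  return option_limit + left_over;
}
```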
@PP
When the time limit at the end of the option is @C { - }, meaning
unlimited, this rule for cumulative limits is not well defined.
In that case the limits apply to each day separately, although of
course the last of them is unlimited in this case.
# @PP
# @I { still to do below here }
# @PP
# @I { Undersized groups } (groups whose duration is less than
# the minimum limit) can be required to become longer; that is,
# the option of not making them longer can be skipped by the
# algorithm, except when there are actually no tasks available
# to group with them.  Interval grouping allows you to select
# which undersized groups to apply this idea to:  none of them,
# all of them, or most of them (all but one from each set of
# equivalent groups).
# @PP
# When some undersized groups are selected for this must-assign
# status, there are two ways that they can be assigned:  the usual
# way, which is to try all possible ways, or by building a bipartite
# matching between the selected groups and the tasks available for
# adding to them, and assigning the tasks to the groups in only the
# one way defined by a maximum matching.
# @PP
# Because interval grouping does not actually assign nurses to shifts,
# the amount of time that can reasonably be allocated to it is quite
# small---a few seconds, say.  Despite careful optimization, it is
# not always fast enough.  Accordingly, the following options are
# provided for making its running time smaller and more predictable,
# at the cost of losing the guarantee of optimality:
# @TaggedList
# 
# @DTI { @F rs_interval_grouping_complete } @OneCol {
# A Boolean option with default value @C { false }.  See below
# for a description.
# }
#  
# # @DTI { @F rs_interval_grouping_max_keep } @OneCol {
# # An integer-valued option with default value @C { 20000 }.  The
# # maximum number of solutions kept on each day, as detailed below.
# # }
#  
# # @DTI { @F rs_interval_grouping_max_beam } @OneCol {
# # An integer-valued option with default value @C { 1500 }.  The
# # maximum number of solutions kept on one day which does not
# # trigger `beaming' on the next day, as detailed below.
# # }
#  
# @DTI { @F rs_interval_grouping_daily_time_limit } @OneCol {
# A string-valued option representing a daily time limit,
# as explained in detail below.
# # in the format read by function @C { KheTimeFromString }
# # (Section {@NumberOf general_solvers.runningtime}).
# # This imposes a limit of @M { L } on the running time that
# # each call on interval grouping may consume each day.  More
# # precisely, it imposes a limit of @M { L } on the first day,
# # @M { 2L } on the first two days taken together, and so on,
# # so that time not used on earlier days is available for
# # later days.  If the time limit is exceeded on some day,
# # interval grouping simply stops creating new solutions on
# # that day.  The default value is @C { 0.5 }, which means
# # half a second per day.  The special value @C { - }, meaning
# # no time limit, can be used.
# # @LP
# # Most calls on interval grouping run quickly, but a few
# # consume more time than they are worth.  This time limit
# # prevents wasteful time blowouts, but has no effect on most calls.
# }
#  
# @EndList
# The @C { rs_interval_grouping_complete } option determines
# what to do in a certain problematic case.  Suppose that
# one night task is wanted on Monday and Wednesday, and three are
# wanted on Tuesday.  We can end one sequence on Tuesday and start
# another on Tuesday, but that covers only two of the three Tuesday
# night tasks.  The best thing to do with the third one may be to
# leave it unassigned.  There will be a cost for this, but it may
# be less than the cost of assigning the task and thereby creating
# a sequence of length 1.  Interval grouping understands this and
# will assign the smaller of the two costs (assigned and unassigned)
# to each group, anticipating that a subsequent solver will choose
# to assign the group or not depending on which cost is smaller.
# @PP
# Leaving a group unassigned will usually be the right choice only
# for undersized groups, of duration one or two, say.  But by default,
# the solver tries hard to avoid such undersized groups, by making
# them longer whenever it can.  (Where undersized groups are
# unavoidable, as in the example just given, they will still be found.)
# This preference for avoiding undersized groups means that sometimes
# the solution returned by the interval grouper is not the best possible.
# @PP
# This is where option @C { rs_interval_grouping_complete } comes
# in.  Setting it to @C { true } turns off the avoidance of undersized
# groups, ensuring that they get a fair trial and that the result is
# truly optimal.  This behaviour is not the default, however, because
# it slows down the algorithm significantly and provides very little
# practical benefit, as it turns out.
# # If, after dominance testing,
# # there are still more than option @F rs_interval_grouping_max_keep
# # solutions on one day, their number is reduced to this
# # number by deleting the most costly ones.
# # @PP
# # This method of reducing the number of solutions may fail to keep
# # a representative selection of solutions.  This is where option
# # @C { rs_interval_grouping_max_beam } comes in.  When the number
# # of solutions kept on one day exceeds the value of this option,
# # the behaviour on the next day changes.  For each solution on the
# # current day, only one solution is created on the next day.  This
# # keeps the number of solutions created small, but hopefully
# # sufficiently varied to include one that will lead to an
# # optimal solution.  Whether to use this `beam search' idea is
# # decided afresh on each day: doing it on one day does not mean
# # doing it on all subsequent days.
# @PP
# Testing shows that most runs of interval grouping take a few
# seconds at most, but that some
# take much longer (76 seconds for instance INRC2-4-120-1-4626, for
# example).  Accordingly, option @F rs_interval_grouping_daily_time_limit
# is offered as a way to limit the amount of time that interval grouping
# spends on any one day, by ending the generation of new solutions on
# that day when the time limit is reached.  Of course, when a solve is
# cut short in this way, the guarantee of optimality is lost.
# @PP
# The value of the option is a time limit @M { L }, in the format
# read by function @C { KheTimeFromString }
# (Section {@NumberOf general_solvers.runningtime}).
# This imposes a limit of @M { L } on the running time that each
# call on interval grouping may consume each day.  The special
# value @C { - }, meaning no time limit, is acceptable as usual.
# @PP
# The option's value can be two time limits, @M { L sub 1 } and
# @M { L sub 2 }, separated by a comma:  @C { 0.3,0.5 } for example.
# @M { L sub 2 } behaves as just described for @M { L }.
# Specifying an @M { L sub 1 }, which must be no larger than
# @M { L sub 2 }, causes the solver to start economizing on time
# when @M { L sub 1 } is reached on any day.  This is done
# by using weighted bipartite matching to assign tasks in just
# one way to the undersized task groups, and then assigning the
# remaining unused task groups and tasks in all possible ways as
# usual.  The matching maximizes the sum of the cardinalities
# of the domains of the result task groups.
# @PP
# The default value of @F rs_interval_grouping_daily_time_limit
# is @C { 0.3,0.5 }.  It starts economizing after 0.3 seconds,
# and brings the day to an end after 0.5 seconds.
# @PP
# Although we have described these limits as applying on each
# day, in fact they apply cumulatively, as follows.  Suppose that
# the solve of day @M { i-1 } finishes @M { t sub {i-1} } seconds after
# solving begins.  We must have @M { t sub {i-1} <= (i-1) L sub 2 },
# because the hard limit @M { L sub 2 } applies to each of the
# @M { i - 1 } days so far.  Then the first limit for day
# @M { i } is @M { (i-1) L sub 2 - t sub {i-1} + L sub 1 }, and
# the second limit is @M { (i-1) L sub 2 - t sub {i-1} + L sub 2 }.
# This makes unused running time from previous days available to
# subsequent days.
# @PP
# Interval grouping is simple enough in principle, but there are
# many details, needed partly to estimate solution cost realistically,
# and partly to save time by avoiding creating
# solutions that would be deleted anyway by dominance testing.  A full
# description appears in Appendix {@NumberOf interval_grouping}.
@End @SubSection

@SubSection
  @Title { Displaying grouped tasks }
  @Tag { resource_structural.task_grouping.display }
@Begin
@LP
This section documents a module that can be used to display
grouped tasks when debugging.  What is wanted is not a planning
timetable, because most of the tasks are not assigned resources.
Instead, the display shows how tasks are grouped in an
easy-to-read, timetable-like layout.
@PP
We begin by creating a grouped tasks display object for a given solution:
@ID @C {
KHE_GROUPED_TASKS_DISPLAY KheGroupedTasksDisplayMake(KHE_SOLN soln,
  char *id, int min_limit, int max_limit, KHE_COST cost,
  KHE_FRAME days_frame, HA_ARENA a);
}
There is no way to delete this object explicitly; it is deleted when
arena @C { a } is deleted or recycled.  The @C { id } parameter, which
must be non-@C { NULL }, serves as a name to give to the display.
Groups whose primary duration (see below) is less than @C { min_limit }
or greater than @C { max_limit } will be highlighted in the display,
and @C { cost } will also be printed.  The tabular print will have
one row for each time group of @C { days_frame }.
@PP
Next, use the following calls to define groups of tasks:
@ID {0.95 1.0} @Scale @C {
void KheGroupedTasksDisplayGroupBegin(KHE_GROUPED_TASKS_DISPLAY gtd,
  bool optional, int primary_durn, int index_in_soln);
void KheGroupedTasksDisplayGroupAddTask(KHE_GROUPED_TASKS_DISPLAY gtd,
  KHE_TASK task);
void KheGroupedTasksDisplayGroupAddHistory(KHE_GROUPED_TASKS_DISPLAY gtd,
  KHE_RESOURCE r, int durn);
void KheGroupedTasksDisplayGroupEnd(KHE_GROUPED_TASKS_DISPLAY gtd);
}
Call @C { KheGroupedTasksDisplayGroupBegin } to start a group, saying
whether the group is to be considered optional, what its primary duration
is (for comparing with @C { min_limit } and @C { max_limit }), and an
index number that will be printed before the task name, if non-negative.
Then call @C { KheGroupedTasksDisplayGroupAddTask } any number of times
to add tasks to the group, plus optionally one call to
@C { KheGroupedTasksDisplayGroupAddHistory }, then call
@C { KheGroupedTasksDisplayGroupEnd } to end the group.  Do
this any number of times.
@PP
@C { KheGroupedTasksDisplayGroupAddHistory } is used to say that
the current group includes tasks from before the current cycle.
These are not representable as tasks of the current solution,
but they do have an assigned resource and a duration, and these
values will be displayed.
@PP
If the tasks are already grouped, just one call to
@C { KheGroupedTasksDisplayGroupAddTask } is needed per group, passing
the leader task.  But one can also pass any number of tasks which are
intended to be grouped, before they are grouped.
@PP
To print the display, call
@ID @C {
void KheGroupedTasksDisplayPrint(KHE_GROUPED_TASKS_DISPLAY gtd,
  bool show_asst, int indent, FILE *fp);
}
This prints @C { gtd } onto @C { fp } with the given indent.  Each
group is enclosed in a box.  Tasks which are already grouped are
separated by a line containing an asterisk.  Tasks which are not yet
grouped but are intended to be grouped are separated by a blank line.
@PP
Characters in the margins of the boxes indicate the kind of group:
@F "*" means optional; @F "<" means non-optional and undersized
(compared with @C { min_limit }); @F ">" means non-optional and
oversized (compared with @C { max_limit }).  Groups adjacent to
the end of the entire cycle (not just to the end of the current
print) are never considered to be undersized.
# Optional groups have @C { * } characters in the margins of the boxes;
# undersized groups have @C { # } charact
@PP
If @C { show_asst } is @C { true }, then each entry whose task is
assigned a resource will display that resource.  If @C { show_asst }
is @C { false }, no entry displays an assigned resource; instead,
every entry shows its task's domain and non-assignment cost.
# , and @C { col_width } is the
# width of each column in characters (it should be at least about 10).
# @PP
# There is also
# @ID @C {
# bool KheGroupedTasksDisplayCompatible(KHE_GROUPED_TASKS_DISPLAY gtd1,
#   KHE_GROUPED_TASKS_DISPLAY gtd2, int frame_index, 
#   KHE_TASK_GROUP_DOMAIN_FINDER domain_finder, HA_ARENA a);
# }
# which compares display @C { gtd1 } with display @C { gtd2 } at
# position @C { frame_index }.  The two objects' solutions will
# usually be different, but they must solve the same instance.
# If the groups in the two displays that include tasks running at
# @C { frame_index } are @I { compatible }, that is, have the same
# lengths and domains, not counting anything after @C { frame_index },
# it returns @C { true }, otherwise it returns @C { false }.  A domain
# finder (Section {@NumberOf resource_structural.task_grouping.domains})
# is required, and memory for the comparison is taken from arena @C { a }.
# @PP
# Actually we require, for each group in @C { gtd1 }, that its
# domain be a superset of the domain of the corresponding group
# in @C { gtd2 }.
@End @SubSection

# @SubSection
#     @Title { Profile grouping }
#     @Tag { resource_structural.grouping_by_rc.profile }
# @Begin
# @LP
# Suppose 6 nurses are required on the Monday, Tuesday, Wednesday,
# Thursday, and Friday night shifts, but only 4 are required on the
# Saturday and Sunday night shifts.  Consider any division of the
# night shifts into sequences of one or more shifts on consecutive
# days.  However these sequences are made, at least two must begin
# on Monday, and at least two must end on Friday.
# @PP
# Now suppose that the intention is to assign the same resource to
# each shift of any one sequence, and that a limit active intervals
# constraint, applicable to all resources, specifies that night shifts
# on consecutive days must occur in sequences of at least 2 and at most
# 3.  Then the two sequences of night shifts that must begin on Monday
# must contain a Monday night and a Tuesday night shift at least, and the
# two that end on Friday must contain a Thursday night and a Friday night
# shift at least.  So here are two groupings, of Monday and Tuesday
# nights and of Thursday and Friday nights, for each of which we can
# build two task groups.
# @PP
# Suppose that we already have a task group which contains a sequence
# of 3 night shifts on consecutive days.  This group cannot be grouped
# with any night shifts on days adjacent to the days it currently
# covers.  So for present purposes the tasks of this group can be
# ignored.  This can change the number of night shifts running on
# each day, and so change the amount of grouping.  For example, in
# instance @C { COI-GPost.xml }, all the Friday, Saturday, and Sunday
# night shifts get grouped into sequences of 3, and 3 is the maximum,
# so those night shifts can be ignored here, and so every Monday night
# shift begins a sequence, and every Thursday night shift ends one.
# @PP
# We now generalize this example, ignoring for the moment a few
# issues of detail.  Let @M { C } be any limit active intervals
# constraint which applies to all resources, and whose time groups
# @M { T sub 1 ,..., T sub k } are all positive.  Let @M { C }'s
# limits be @M { C sub "max" } and @M { C sub "min" }, and suppose
# @M { C sub "min" } is at least 2 (if not, there can be no grouping
# based on @M { C }).  What follows is relative to @M { C }, and is
# repeated for each such constraint.  Constraints with the same
# time groups are notionally merged, allowing the minimum limit
# to come from one constraint and the maximum limit from another.
# @PP
# Let @M { n sub i } be the number of tasks of interest that cover
# @M { T sub i }.  The @M { n sub i } make up the @I profile of @M { C }.
# @PP
# A @I { long task } is a task which covers at least @M { C sub "max" }
# adjacent time groups from @M { C }.  Long tasks can have no influence
# on grouping to satisfy @M { C }'s minimum limit, so they may be ignored,
# that is, profile grouping may run as though they are not there.  This
# applies both to tasks which are present at the start, and tasks which
# are constructed along the way.  
# @PP
# # A task is @I { admissible } (for profile grouping) if it satisfies
# # the following conditions:
# # @NumberedList
# # 
# # @LI {
# # The task is a proper root task lying within an mtask created by the mtask
# # finder passed to profile grouping when @C { KheProfileGrouping }
# # (see below) is called.
# # }
# # 
# # @LI {
# # The task is not fixed, not assigned a resource, and it needs assignment.
# # }
# # 
# # @LI {
# # The task is not a long task.
# # }
# # 
# # @EndList
# # If a task is admissible, then every unassigned task in that task's
# # mtask is also admissible.
# # @PP
# # For the definition of `cover' see
# # Section {@NumberOf resource_structural.grouping_by_rc.combinatorial}.
# # @PP
# As profile grouping proceeds, some tasks become grouped into larger
# tasks which are no longer relevant because they are long.  This causes
# some of the @M { n sub i } values to decrease.  We always base our
# decisions on the current profile, not the original profile.
# @PP
# For each @M { i } such that @M { n sub {i-1} < n sub i },
# @M { n sub i - n sub {i-1} } groups of length at least
# @M { C sub "min" } must start at @M { T sub i } (more precisely,
# they must cover @M { T sub i } but not  @M { T sub {i-1} }).  They may
# be constructed by combinatorial grouping, passing in time groups
# @M { T sub i ,..., T sub { i + C sub "min" - 1 } } with cover type
# `yes', and @M { T sub {i-1} } and @M { T sub { i + C sub "max" } } with
# cover type `no', asking for @M { m = n sub i - n sub {i-1} - c sub i }
# tasks, where @M { c sub i } is the number of existing tasks (not
# including long ones) that satisfy these conditions already.
# # (as returned by @C { KheCombSolverSingles }).
# The new groups must group at least 2 tasks each.  Some of the time
# groups may not exist; in that case, omit them, but still do the
# grouping if there are at least 2 `yes' time groups.  The case for
# sequences ending at @M { j } is symmetrical.
# @PP
# If @M { C } has no history, we set @M { n sub 0 } and
# @M { n sub {k+1} } to 0, encouraging groups to begin at @M { T sub 1 }
# and end at @M { T sub k }.  If @M { C } has history, we still
# set @M { n sub 0 } to 0, reasoning that assign by history
# (Section {@NumberOf resource_solvers.assignment.history}) has
# taken care of history at that end; but we set @M { n sub {k+1} } to
# +2p @Font @M { infty }, preventing groups from being formed to
# end at @M { T sub k }.
# # we do not know
# # how many tasks are running outside @M { C }, so we set @M { n sub 0 }
# # and @M { n sub {k+1} } to infinity, preventing groups from beginning
# # at @M { T sub 1 } and ending at @M { T sub k }.
# @PP
# Groups made by one round of profile grouping may participate in later
# rounds.  Suppose @M { C sub "min" = 2 }, @M { C sub "max" = 3 },
# @M { n sub 1 = n sub 5 = 0 }, and @M { n sub 2 = n sub 3 = n sub 4 = 4 }.
# Profile grouping builds 4 groups of length 2 begining at @M { T sub 2 },
# then 4 groups of length 3 ending at @M { T sub 4 }, incorporating the
# length 2 groups.
# @PP
# We turn now to some issues of detail.
# @PP
# @B { Singles. }  A @I single is a set of mtasks that satisfies the
# requirements of combinatorial grouping but contains only one mtask.
# We need to consider how singles affect profile grouping.  Singles
# of length @M { C sub "max" } or more are ignored, but there may be
# singles of smaller length.
# @PP
# The @M { n sub i - n sub {i-1} } groups that must start at
# @M { T sub i } include singles.  Singles are already present, just
# as though they were made first.  The combinatorial grouping solver
# has a variant that applies the given requirements, but instead of
# doing any grouping, returns @M { c sub i }, the number of tasks of
# interest that lie in the mtasks of singles.  Then we ask combinatorial
# grouping to make up to @M { n sub i - n sub {i-1} - c sub i } groups,
# not @M { n sub i - n sub {i-1} }, with an extra requirement that
# singles are to be exluded.  If @M { n sub i - n sub {i-1} - c sub i <= 0 }
# we skip the call; the sequences that need to start at @M { T sub i }
# are already present.
# @PP
# @B { Varying task domains. }  Suppose that one senior nurse is wanted
# each night, four ordinary nurses are wanted each week night, and two
# ordinary nurses are wanted each weekend night.  Then two groups still
# need to start on Monday nights, but they should group demands for
# ordinary nurses, not senior nurses.  Nevertheless, tasks with
# different domains are not totally unrelated.  A senior nurse
# could very well act as an ordinary nurse on some shifts.
# @PP
# We still aim to build @M { M = n sub i - n sub {i-1} - c sub i }
# groups as before.  However, we do this by making several calls on
# combinatorial grouping.  For each resource group @M { g } appearing
# as a domain in any mtask running at time @M { T sub i }, find
# @M { n sub gi }, the number of tasks (not including long ones) with
# domain @M { g } running at @M { T sub i }, and @M { n sub { g(i-1) } },
# the number at @M { T sub {i-1} }.  For each @M { g } such that
# @M { n sub gi > n sub { g(i-1) } }, call combinatorial grouping,
# with a requirement expressing a preference for domain @M { g },  
# # insisting that @M { T sub i } be covered by an mtask whose domain
# # is @M { g },
# and asking for @M { min( M, n sub gi - n sub { g(i-1) } ) } groups.
# Then subtract from @M { M } the number of groups actually made.
# Stop when @M { M = 0 } or the list of domains is exhausted.
# @PP
# @B { Varying task costs. }  The tasks participating in profile
# grouping might well differ in their non-assignment cost.  It feels
# wrong to group tasks with very different costs.  Although this
# is not currently prevented, it is likely to be fairly harmless,
# for two reasons.
# @PP
# First, in grouping generally we only consider tasks which
# need assignment---tasks whose cost of non-assignment exceeds
# their cost of assignment.  So we won't be grouping a task
# that needs assignment with a task that doesn't.
# @PP
# Second, the most cost-reducing tasks in each mtask are assigned
# first.  That should encourage task groups to contain tasks of
# similar cost.
# # Some might be compulsory
# # (assigning them reduces the hard cost of the solution), others might
# # be deprecated (assigning them increases cost), others might be
# # neutral.  These costs are visible as the @C { non_asst_cost } and
# # @C { asst_cost } values returned by @C { KheMTaskTask }
# # (Section {@NumberOf resource_structural.mtask_finding.ops}).
# # @PP
# # Mtasks ensure that the
# # most cost-reducing tasks are assigned first, which should help
# # task groups to contain tasks of similar cost.  But if the best
# # remaining unassigned task in one mtask has very different cost
# # to the best in another, they will be grouped.
# # @PP
# # There are other possibilities.  We could easily ignore deprecated
# # tasks altogether during profile grouping, for example.  The
# # author has not yet given serious thought to this subject.
# @PP
# @B { Non-uniqueness of zero-cost groupings. }
# The main problem with profile grouping is that there may be
# several zero-cost groupings in a given situation.  For example,
# a profile might show that a group covering Monday, Tuesday, and
# Wednesday may be made, but give no guidance on which shifts on
# those days to group.
# @PP
# There are various ways to deal with this problem.  At present
# we are limiting profile grouping to constraints @M { C } whose
# time groups all contain a single time.  Thus profile grouping
# will group sequences of day shifts, sequences of night shifts,
# and so on, but it will not group sequences of days, even when
# there is a constraint limiting the number of consecutive busy
# days whose profile shows that sequences must begin on a certain day.
# An exception to this is the case @M { C sub "min" = C sub "max" },
# discussed below.
# @PP
# @B { An overall algorithm. }
# We are now in a position to present an overall algorithm for
# profile grouping.  Find all limit active intervals constraints
# @M { C } which apply to all resources and whose time groups are
# all singletons and all positive.  Notionally merge constraints
# that share the same time groups; for example, we could take
# @M { C sub "min" } from one and @M { C sub "max" } from another.
# For each of these merged constraints @M { C } such that
# @M { C sub "min" >= 2 }, proceed as follows.
# # Furthermore, if @C { non_strict }
# # is @C { false }, then @M { C }'s time groups must all be
# # singletons, while if @C { non_strict } is @C { true }, then
# # @M { C sub "min" = C sub "max" } must hold.
# # @PP
# # A constraint may qualify for both strict and non-strict processing.
# # This is true, for example, of a constraint that imposes equal lower
# # and upper limits on the number of consecutive night shifts.  Such a
# # constraint will be selected in both the strict and non-strict cases,
# # which is fine.
# @PP
# # For each of these constraints, proceed as follows.
# # Set the profile
# # time groups in the tasker to @M { T sub 1 ,..., T sub k }, the time
# # groups of @M { C }, and set the @C { profile_max_len } attribute to
# # @M { C sub "max" - 1 }.  The tasker will then report the values
# # @M { n sub i } needed for @M { C }.
# # @PP
# Traverse the profile repeatedly, looking for cases where
# @M { n sub i > n sub {i-1} } and @M { n sub j < n sub {j+1} }, and
# use combinatorial grouping (aiming to find zero-cost groups, not
# unique zero-cost groups) to build groups which cover between
# @M { C sub "min" } and @M { C sub "max" } time groups starting
# at @M { T sub i } (or ending at @M { T sub j }).
# # This
# # involves loading @M { T sub i ,..., T sub {i + C sub "min" - 1} } as `yes'
# # time groups, and @M { T sub {i-1} } and @M { T sub { i + C sub "max" } }
# # as `no' time groups, as explained above.
# Continue traversing the profile until no points that allow
# grouping can be found.
# @PP
# As groups are made, the @M { n sub i } will often decrease.  At some
# point they might all be zero, or the @M { n sub i - n sub {i-1} - c sub i }
# might all be zero.  Alternatively, they might all be non-zero but all
# equal, and we need to think about what to do then.  Further grouping
# is possible but would involve arbitrary choices, so whether to go
# further is a matter of experience and experiment.
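The scan just described can be sketched in C.  This is an illustrative stand-alone sketch, not KHE source: the profile is reduced to a plain array, with the boundary conditions taken directly from the text (`n` is treated as 0 outside the profile), and the two conditions scanned for are `n[i] > n[i-1]` and `n[j] < n[j+1]` as stated above.

```c
#include <assert.h>

/* Illustrative sketch only (not KHE source).  The profile is
   n[1..len], with n taken as 0 outside that range. */
static int NAt(const int *n, int len, int i)
{
  return (i < 1 || i > len) ? 0 : n[i - 1];   /* n is 0-based in C */
}

/* return the number of positions i in [1, len] where a group
   may start, i.e. where n[i] > n[i-1] */
int ProfileStartPoints(const int *n, int len)
{
  int i, res = 0;
  for( i = 1;  i <= len;  i++ )
    if( NAt(n, len, i) > NAt(n, len, i - 1) )
      res++;
  return res;
}

/* return the number of positions j in [1, len] where the other
   scanned condition holds, i.e. where n[j] < n[j+1] */
int ProfileRisePoints(const int *n, int len)
{
  int j, res = 0;
  for( j = 1;  j <= len;  j++ )
    if( NAt(n, len, j) < NAt(n, len, j + 1) )
      res++;
  return res;
}
```

As groups are made and the `n[i]` decrease, repeated calls to these scans drive the traversal until neither condition holds anywhere.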
# @PP
# One case where going further is worthwhile is when
# @M { C sub "min" = C sub "max" }.  It is
# very constraining to insist, as this does, that every sequence of
# consecutive busy days (say) away from the start and end of the cycle
# must have a particular length.  Indeed, it changes the problem into a
# combinatorial one of packing these rigid sequences into the profile.
# Local repairs cannot do this well, because to increase
# or decrease the length of one sequence, we must decrease or increase
# the length of a neighbouring sequence, and so on all the way back to
# the start or forward to the end of the cycle (unless there are
# shifts nearby which can be assigned or not without cost).
# So we turn to profile grouping to find suitable groups before
# assigning any resources.  Some of these groups may be less than
# ideal, but still the overall effect should be better than no
# grouping at all.
# @PP
# Another case for going further is when
# @M { C sub "min" + 1 = C sub "max" } and the time groups are
# singletons.  This case arises in instance @F { INRC2-4-100-0-1108 },
# where night shifts preferably come in sequences of length 4 or 5.
# The author's other solvers struggle with this requirement, making
# it very tempting to build these sequences before doing any assignment.
# @PP
# If we do decide to keep going, one way to do that is as follows.
# From among all time groups @M { T sub i }
# where @M { n sub i > 0 }, choose one which has been the starting
# point for a minimum number of groups (to spread out the starting
# points as much as possible) and make a group there if combinatorial
# grouping allows it.  Then return to traversing the profile
# repeatedly.  There should now be an @M { n sub i > n sub {i-1} }
# case just before the latest group, and an @M { n sub j < n sub {j+1} }
# case just after it.  Repeat until there is no @M { T sub i } where
# @M { n sub i > 0 } and combinatorial grouping can build a group.
# @PP
# Another way to keep going is to use the dynamic programming
# algorithm from the next section.  Although it is not globally
# optimum, it is an efficient way to find high-quality groups.
# # It reduces every @M { n sub i }
# # by one, so it only applies when every @M { n sub i >= 1 }.  It
# # is an efficient way to find high-quality groups.
# # One reasonable way of dealing with this problem is the following.
# # First, do not insist on unique zero-cost groupings; instead, accept
# # any zero-cost grouping.  This ensures that a reasonable amount of
# # profile grouping will happen.  Second, to reduce the chance of
# # making poor choices of zero-cost groupings, limit profile grouping
# # to two cases.
# # @PP
# # The first case is when each time group @M { T sub i } contains a
# # single time, as at the start of this section, where each
# # @M { T sub i } contained the time of a night shift.  Although we do
# # not insist on unique zero-cost groupings, we are likely to get them
# # in this case.  We call this @I { Type A profile grouping }.
# # @PP
# # The second case is when @M { C sub "min" = C sub "max" }.  It is
# # very constraining to insist, as this does, that every sequence of
# # consecutive busy days (say) away from the start and end of the cycle
# # must have a particular length.  Indeed, it changes the problem into a
# # combinatorial one of packing these rigid sequences into the profile.
# # Local repairs cannot do this well, because to increase
# # or decrease the length of one sequence, we must decrease or increase
# # the length of a neighbouring sequence, and so on all the way back to
# # the start or forward to the end of the cycle (unless there are
# # shifts nearby which can be assigned or not without cost).
# # So we turn to profile grouping to find suitable groups before
# # assigning any resources.  Some of these groups may be less than
# # ideal, but still the overall effect should be better than no
# # grouping at all.  We call this @I { Type B profile grouping }.
# # @PP
# # @PP
# # When @M { C sub "min" = C sub "max" }, no singles are counted in
# # the profile.  This is easy to see:  by definition, a single covers
# # @M { C sub "min" } time groups, so it covers @M { C sub "max" }
# # time groups, but we are omitting existing groups of this length
# # or greater from the profile.
# # # @C { profile_max_len } is @M { C sub "max" - 1 }.
# # @PP
# # These ideas are implemented by function
# # @ID @C {
# # int KheProfileGrouping(KHE_COMB_GROUPER cg, bool non_strict,
# #   KHE_SOLN_ADJUSTER sa);
# # }
# # It carries out some profile grouping, as follows, and returns
# # the number of groups it makes.  If @C { sa != NULL }, any task
# # assignments it makes are saved in @C { sa }, so that they can
# # be undone later.
# # 
# # In the strict grouping case, it is then
# # time to stop, but in the non-strict case we keep
# # grouping, as follows.  From among all time groups @M { T sub i }
# # where @M { n sub i > 0 }, choose one which has been the starting
# # point for a minimum number of groups (to spread out the starting
# # points as much as possible) and make a group there if combinatorial
# # grouping allows it.  Then return to traversing the profile
# # repeatedly.  There should now be an @M { n sub i > n sub {i-1} }
# # case just before the latest group, and an @M { n sub j < n sub {j+1} }
# # case just after it.  Repeat until there is no @M { T sub i } where
# # @M { n sub i > 0 } and combinatorial grouping can build a group.
# @End @SubSection

# @SubSection
#     @Title { A dynamic programing algorithm for profile grouping }
#     @Tag { resource_structural.grouping_by_rc.dynamic }
# @Begin
# @LP
# This section presents a dynamic programming algorithm for profile
# grouping which can be applied to any subsequence @M { [a, b] } of
# the profile such that @M { n sub i > 0 } for all @M { i } in the
# range @M { a <= i <= b }, and @M { n sub {a-1} = n sub {b+1} = 0 }.
# The algorithm reduces each @M { n sub i } in the range by one,
# using groups of minimum total cost.  Applied repeatedly, it can
# produce many very good groups, although there is no suggestion
# that they are globally optimum.
# # Profile grouping is able to begin a group at position @M { i } when
# # @M { n sub i > n sub {i-1} }, and end a group at position @M { i } when
# # @M { n sub i > n sub {i+1} }.  Where these cases occur it is clearly
# # correct to begin or end a group of minimum length there, given that
# # it can be extended later if needed.  But if all the @M { n sub i }
# # are equal, this provides no guidance.  In that case it might be better
# # not to group.  Above, we carry on grouping in that case only when
# # @M { C sub "min" = C sub "max" }, arguing that the tightness of the
# # situation warrants it.
# # @PP
# # This same argument could be made when @M { C sub "min" + 1 = C sub "max" },
# # as for example in the constraint on consecutive night shifts in
# # instance @F { INRC2-4-100-0-1108 }, where night shifts should be
# # taken in sequences of length 4 or 5.
# # @PP
# # However, this section is not concerned with when further grouping
# # is needed:  that question must be answered by experience.  Instead,
# # when it is decided on, this section offers an optimal method of
# # carrying it out, assuming that the @M { n sub i } are all non-zero
# # across the cycle, and that we want to build groups such that exactly
# # one group covers each time group of @M { C }.  This will mainly be
# # useful when the @M { n sub i } are all equal, but we do not require
# # them to be equal.
# @PP
# We have one hard constraint and one soft constraint.  The hard
# constraint is that we require the algorithm to produce a set of
# groups, each of length between @M { C sub "min" } and
# @M { C sub "max" } inclusive,  such that every position in the
# range is covered by exactly one group.  The last group, however,
# may have length less than @M { C sub "min" } when it is the last
# time group of @M { C } and history (i.e. a future) is present,
# since short sequences at the end do not violate @M { C } in that
# case.  The soft constraint is that the total cost of the groups
# (as reported by combinatorial grouping) should be minimized.
# @PP
# One could ask whether there will be any cost at all:  a sequence
# of night shifts (say) whose length satisfies @M { C } is unlikely
# to violate any other constraints, and in practice this is largely
# so.  The main exception is that complete weekend constraints may
# combine with unwanted pattern constraints to cause sequences that
# end on a Saturday or begin on a Sunday to have non-zero cost.
# # This is because the complete weekend
# # constraint requires something on Sunday, but the sequence has
# # ended so another night shift is excluded, and the other shifts
# # are often prohibited by unwanted pattern constraints.  On
# # the other hand, a sequence beginning on Sunday could follow
# # a day shift on Saturday.
# @PP
# Our dynamic programming algorithm finds a solution @M { S(i) }
# which is optimal among all solutions which cover the first
# @M { i } time groups of @M { [a, b] }, for each @M { i } such
# that @M { 0 <= i <= b - a + 1 }.
# @PP
# The first of these optimal solutions, @M { S(0) }, is required to
# cover no time groups, so it is the empty set of sequences, with
# cost 0.  Assume inductively that we have found @M { S(k) } for each
# @M { k } such that @M { 0 <= k < i }.  We need to find @M { S(i) }.
# @PP
# To do this, for each @M { j } such that
# @M { C sub "min" <= j <= C sub "max" },
# find the solution which consists of @M { S(i - j) } plus
# a single sequence covering time groups
# @M { T sub {i - j + 1} ... T sub i }.  The cost of this
# solution is the cost of @M { S(i - j) } plus the cost of
# the additional sequence, as reported by combinatorial
# grouping, tasked with finding a sequence of minimum cost
# covering @M { T sub {i - j + 1} ... T sub i } but not
# @M { T sub {i - j} } and not @M { T sub {i+1} }.  Find
# the solution of minimum cost over all @M { j } and declare
# that to be @M { S(i) }.
# @PP
# As explained above, the last group may have length less than
# @M { C sub "min" } when history is present.  In that case, we
# allow the last sequence to have any length @M { j } such that
# @M { 1 <= j <= C sub "max" }.
# @PP
# The main problem with this algorithm is that there may be
# no @M { S(i) } at all.  For example, when @M { C sub "min" > 1 }
# there is no @M { S(1) }, because there are no legal sequences
# of length 1; the first solution after @M { S(0) } is
# @M { S( C sub "min" ) }.
# Even after that, there may be gaps.  For example, if
# every sequence must have length 4 or 5, there is no
# @M { S(6) } or @M { S(7) }.  There is also the possibility
# that sequences of the right lengths might exist but
# combinatorial grouping finds no way to group their tasks,
# even though we ask it only for sequences of minimum, not
# necessarily zero, cost.  We treat missing solutions of
# this kind as though they had cost +2p @Font @M { infty }.
# We also do this when we need an @M { S(i - j) } but
# @M { i - j < 0 }.
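The recurrence above can be sketched in C.  This is an illustrative stand-alone implementation, not KHE source: the call to combinatorial grouping is abstracted as a cost callback that returns `INT_MAX` (standing in for infinity) when no group can be built, and the relaxed rule allowing a short last group when history is present is omitted for brevity.

```c
#include <limits.h>

#define MAX_N 64

/* hypothetical stand-in for the cost reported by combinatorial
   grouping for a group covering positions [start, start+len);
   returns INT_MAX when no group can be built there */
typedef int (*GROUP_COST_FN)(int start, int len);

/* Find groups of length c_min..c_max covering each of n positions
   exactly once, with minimum total cost; S(i) is optimal over the
   first i positions, with INT_MAX meaning "no solution".  Returns
   the optimal total cost, or INT_MAX; group_len[] receives the
   chosen group lengths in order, *group_count their number. */
int ProfileDPSolve(int n, int c_min, int c_max, GROUP_COST_FN cost,
  int *group_len, int *group_count)
{
  int S[MAX_N + 1], from[MAX_N + 1], i, j, c, cand;
  S[0] = 0;  from[0] = 0;
  for( i = 1;  i <= n;  i++ )
  {
    S[i] = INT_MAX;
    for( j = c_min;  j <= c_max && j <= i;  j++ )
    {
      if( S[i - j] == INT_MAX )
        continue;                       /* no S(i - j) exists */
      c = cost(i - j, j);
      if( c == INT_MAX )
        continue;                       /* no group fits here */
      cand = S[i - j] + c;
      if( cand < S[i] )
      {
        S[i] = cand;
        from[i] = j;                    /* length of last group */
      }
    }
  }
  *group_count = 0;
  if( S[n] == INT_MAX )
    return INT_MAX;
  for( i = n;  i > 0;  i -= from[i] )
    group_len[(*group_count)++] = from[i];
  for( i = 0;  i < *group_count / 2;  i++ )
  {
    int tmp = group_len[i];             /* reverse into left-to-right order */
    group_len[i] = group_len[*group_count - 1 - i];
    group_len[*group_count - 1 - i] = tmp;
  }
  return S[n];
}
```

With uniform group costs and `c_min = 4`, `c_max = 5`, a profile of length 9 splits into groups of lengths 4 and 5, while lengths 6 and 7 are infeasible, matching the gaps in the @M { S(i) } noted above.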
# @PP
# Another problem is that if @M { C sub "max" } is relatively
# large, combinatorial grouping could be too slow.  This has not
# been a problem in practice, but it is probably safest to limit
# dynamic programming to cases where either the time groups each
# contain a single time, or else @M { C sub "max" <= 4 }.
# @PP
# Normally, we remove a sequence from the profile only when it has
# length @M { C sub "max" }, because only then is it unable to
# participate in further grouping.  However, after one round of
# dynamic programming we remove every sequence in the optimal
# solution from the profile, reasoning that collectively they are
# finished and should not participate further.  We can repeat this,
# reducing each @M { n sub i } by one on each round, until some
# @M { n sub i = 0 } or the round fails to find a solution.
# @PP
# Although the dynamic programming algorithm finds an optimal way to reduce
# each @M { n sub i } by one, the @I { general profile grouping problem },
# which is to find an
# optimal way to fill an arbitrary profile with minimum-cost sequences
# of length between @M { C sub "min" } and @M { C sub "max" }, remains
# unsolved.  Even when the @M { n sub i } are equal there is no
# proof that a sequence of rounds, each of which finds an optimal way
# to reduce them all by one, is guaranteed to find an optimal solution
# overall.  (It is true that an optimal solution in this case can be
# divided into a sequence of rounds, each of which reduces all the
# @M { n sub i } by one, but that does not prove that our sequence of
# rounds is optimal.)  When arbitrary task domains are added, it is easy
# to see that the problem includes the NP-hard multi-dimensional matching
# problem.  However, task domains do not seem to be a problem in practice.
# @PP
# The author has considered using dynamic programming for the general
# profile grouping problem, inspired by the dynamic programming
# algorithm for optimal resource assignment
# (Section {@NumberOf resource_solvers.dynamic}).  Such an algorithm
# seems to be possible, but it would be complicated, especially since
# it would need to take into account any task grouping that has already
# occurred.  The optimal resource assignment algorithm treats grouped
# tasks heuristically; that would not suffice here.
# @End @SubSection

#@SubSection
#  @Title { Implementation notes 1:  mtask groups }
#  @Tag { resource_structural.grouping_by_rc.impl1 }
#@Begin
#@LP
#File @C { khe_sr_tgrc.h } contains the interfaces that
#the TGRC source files use to communicate with each other.
#It declares a type @C { KHE_MTASK_GROUP } representing one
#@I { mtask group }:  an mtask set with additional features
#relevant to grouping.  It keeps track of which mtask will be
#the leader mtask, and of the cost to a resource of assigning
#it to the group.
#@PP
#For creating and deleting an mtask group object there are
#@ID @C {
#KHE_MTASK_GROUP KheMTaskGroupMake(KHE_COMB_GROUPER cg);
#void KheMTaskGroupDelete(KHE_MTASK_GROUP mg);
#}
#Here @C { KHE_COMB_GROUPER } is another type defined in
#@C { khe_sr_tgrc.h }.  It is mainly concerned with running
#combinatorial grouping, but it also holds a free list of
#mtask group objects.
#@PP
#There are operations for clearing an mtask group object
#and overwriting its contents with the contents of another
#mtask group object:
#@ID @C {
#void KheMTaskGroupClear(KHE_MTASK_GROUP mg);
#void KheMTaskGroupOverwrite(KHE_MTASK_GROUP dst_mg,
#  KHE_MTASK_GROUP src_mg);
#}
#For visiting its mtasks there are
#@ID @C {
#int KheMTaskGroupMTaskCount(KHE_MTASK_GROUP mg);
#KHE_MTASK KheMTaskGroupMTask(KHE_MTASK_GROUP mg, int i);
#}
#as usual, along with
#@ID @C {
#bool KheMTaskGroupIsEmpty(KHE_MTASK_GROUP mg);
#}
#which is the same as testing whether the count is 0.
#For adding and deleting mtasks there are
#@ID @C {
#bool KheMTaskGroupAddMTask(KHE_MTASK_GROUP mg, KHE_MTASK mt);
#void KheMTaskGroupDeleteMTask(KHE_MTASK_GROUP mg, KHE_MTASK mt);
#}
#@C { KheMTaskGroupAddMTask } adds @C { mt } to @C { mg } and
#returns @C { true }, or if the addition cannot be carried out
#(because @C { mt } runs on the same day as one of the mtasks that
#is already present, or because no leader mtask can be found that
#suits both the existing mtasks and @C { mt }), it changes nothing
#and returns @C { false }.  @C { KheMTaskGroupDeleteMTask } deletes
#@C { mt } from @C { mg }.  Owing to issues around calculating
#leader mtasks, @C { mt } must be the most recently added but not
#deleted mtask, otherwise @C { KheMTaskGroupDeleteMTask }
#will abort.  Function
#@ID @C {
#bool KheMTaskGroupContainsMTask(KHE_MTASK_GROUP mg, KHE_MTASK mt);
#}
#returns @C { true } when @C { mg } contains @C { mt }.
#@PP
#An mtask group @C { mg } has a cost, which is the cost of the
#resource monitors of some resource @C { r } when @C { r } is
#assigned to one task from each mtask of @C { mg }.  Not all monitors
#are included, only cluster busy times and limit busy times monitors
#whose monitoring is limited to the days during which the mtasks of
#@C { mg } are running, plus one extra day on each side.  (We do not
#want wider issues, such as global workload limits, to influence this
#cost.)  The mtask group module is responsible for finding a suitable
#resource, making the assignments, measuring the cost, and taking the
#assignments away again, all of which is done by
#@ID @C {
#bool KheMTaskGroupHasCost(KHE_MTASK_GROUP mg, KHE_COST *cost);
#}
#If a cost can be calculated, @C { KheMTaskGroupHasCost } sets
#@C { *cost } to its value and returns @C { true }.  If a cost
#cannot be calculated, because @C { mg } is empty, or a suitable
#resource @C { r } cannot be found, or cannot be assigned to every
#mtask of @C { mg } (none of these conditions is likely to occur
#in practice), then @C { false } is returned.  There is also
#@ID @C {
#bool KheMTaskGroupIsBetter(KHE_MTASK_GROUP new_mg,
#  KHE_MTASK_GROUP old_mg);
#}
#which returns @C { true } when @C { old_mg } is empty or else
#both @C { new_mg } and @C { old_mg } have a cost, and the cost
#of @C { new_mg } is smaller than the cost of @C { old_mg }.
#@PP
#Calculating the cost is slow, so mtask group objects cache the
#most recently calculated cost, and only recalculate it when the
#set of mtasks has changed since it was last calculated.
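The caching scheme can be sketched with a dirty flag.  All names here are hypothetical, not KHE source; the expensive monitor-cost calculation is reduced to a placeholder:

```c
#include <stdbool.h>

/* hypothetical sketch of caching the most recently calculated cost,
   recalculating only after the set of mtasks has changed */
typedef struct cached_group {
  int mtask_count;   /* stands in for the set of mtasks */
  bool cost_valid;   /* true when cached_cost is up to date */
  int cached_cost;
  int calc_calls;    /* for demonstration: how often we recalculate */
} CACHED_GROUP;

static int ExpensiveCostCalc(CACHED_GROUP *g)
{
  g->calc_calls++;
  return 10 * g->mtask_count;  /* placeholder for the real monitor cost */
}

void CachedGroupAddMTask(CACHED_GROUP *g)
{
  g->mtask_count++;
  g->cost_valid = false;       /* invalidate on any change to the set */
}

int CachedGroupCost(CACHED_GROUP *g)
{
  if( !g->cost_valid )
  {
    g->cached_cost = ExpensiveCostCalc(g);
    g->cost_valid = true;
  }
  return g->cached_cost;
}
```

Repeated cost queries between changes then touch only the cached value, which matters because each real calculation involves assigning a resource and measuring monitor costs.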
#@PP
#To actually carry out some grouping, the function is
#@ID {0.95 1.0} @Scale @C {
#int KheMTaskGroupExecute(KHE_MTASK_GROUP mg, int max_num,
#  KHE_SOLN_ADJUSTER sa, char *debug_str);
#}
#By making calls to functions @C { KheMTaskFinderTaskGrouperClear },
#@C { KheMTaskFinderTaskGrouperAddTask }, and
#@C { KheMTaskFinderTaskGrouperMakeGroup }
#(Section {@NumberOf resource_structural.mtask_finding.solver}),
#it makes up to @C { max_num } groups from the mtasks
#of @C { mg }.  It returns the number of groups actually made.
#If @C { sa != NULL } the task assignments made are
#recorded in @C { sa } so that they can be undone later.
## If @C { fix_leaders_sa != NULL }, the @C { NULL } assignments
## in the leader tasks of the groups are fixed and stored in
## @C { fix_leaders_sa } so that they can be undone later.
## The point of this is that fixing their assignments removes
## them from the profile, which is what is wanted when finding
## groups using dynamic programming.
#Parameter @C { debug_str }
#is used for debugging only, and should contain some indication
#of how the group came to be formed:  @C { "combinatorial grouping" },
#@C { "interval grouping" }, or whatever.
#@PP
#Finally,
#@ID @C {
#void KheMTaskGroupDebug(KHE_MTASK_GROUP mg,
#  int verbosity, int indent, FILE *fp);
#}
#produces a debug print of @C { mg } onto @C { fp } with
#the given verbosity and indent.  This includes the cost,
#if currently known, and it highlights the leader mtask.
#@End @SubSection

#@SubSection
#  @Title { Implementation notes:  the combinatorial grouper }
#  @Tag { resource_structural.grouping_by_rc.impl2 }
#@Begin
#@LP
#Combinatorial grouping is a low-level solve algorithm that provides
#services to higher-level grouping solvers.  It allows those solvers
#to load a variety of different requirements, and it then will search
#for groups that satisfy those requirements.
#@PP
#This is done by a @I { combinatorial grouper } object, made like this:
#@ID @C {
#KHE_COMB_GROUPER KheCombGrouperMake(KHE_MTASK_FINDER mtf,
#  KHE_RESOURCE_TYPE rt, HA_ARENA a);
#}
#It finds groups of @C { mtf }'s mtasks of type @C { rt }, using memory
#from arena @C { a }.  There is no @C { Delete } operation; the grouper
#is deleted when @C { a } is freed.  It calls @C { KheMTaskGroupExecute }
#from Section {@NumberOf resource_structural.grouping_by_rc.impl1} to
#actually make its groups, and this updates @C { mtf }'s mtasks, so
#that @C { mtf } does not go out of date as grouping proceeds.
#Functions
#@ID @C {
#KHE_MTASK_FINDER KheCombGrouperMTaskFinder(KHE_COMB_GROUPER cg);
#KHE_SOLN KheCombGrouperSoln(KHE_COMB_GROUPER cg);
#KHE_RESOURCE_TYPE KheCombGrouperResourceType(KHE_COMB_GROUPER cg);
#HA_ARENA KheCombGrouperArena(KHE_COMB_GROUPER cg);
#}
#return various attributes of @C { cg }; the solution comes
#from @C { mtf }.
#@PP
#The resource type passed to @C { KheCombGrouperMake } must be
#non-@C { NULL }, and it must be one of the resource types handled
#by @C { mtf }.  An mtask finder is able to handle either one
#resource type or all resource types, but a comb grouper can
#only handle one resource type.
#@PP
#Incidentally to its other functions, a @C { KHE_COMB_GROUPER }
#object holds a free list of mtask group objects.  Functions
#@ID @C {
#KHE_MTASK_GROUP KheCombGrouperGetMTaskGroup(KHE_COMB_GROUPER cg);
#void KheCombGrouperPutMTaskGroup(KHE_COMB_GROUPER cg,
#  KHE_MTASK_GROUP mg);
#}
#get an object from this list (returning @C { NULL } if the list is
#empty) and put an object onto the list.
#@PP
#A @C { KHE_COMB_GROUPER } object can solve any number of combinatorial
#grouping problems for a given @C { mtf }, one after another.  The user
#loads the grouper with one problem's requirements, then requests a
#solve, then loads another lot of requirements and solves, and so on.
#@PP
#We'll present the functions which load requirements informally now.
#Precise descriptions of what each requirement does are given at the
#end of this section.  These requirements make a rather eclectic
#bunch.  They are all needed, however, to support the various kinds
#of grouping.
#@PP
#It is usually best to start the process of loading requirements by calling
#@ID @C {
#void KheCombGrouperClearRequirements(KHE_COMB_GROUPER cg);
#}
#This clears away any old requirements.
#@PP
#A key requirement for most solves is that the groups it makes
#should cover a given time group.  Any number of such requirements
#can be added and removed by calling
#@ID @C {
#void KheCombGrouperAddTimeGroupRequirement(KHE_COMB_GROUPER cg,
#  KHE_TIME_GROUP tg, KHE_COMB_COVER_TYPE cover);
#void KheCombGrouperDeleteTimeGroupRequirement(KHE_COMB_GROUPER cg,
#  KHE_TIME_GROUP tg);
#}
#@C { KheCombGrouperAddTimeGroupRequirement } specifies that
#the groups must cover @C { tg } in a manner given by the @C { cover }
#parameter, whose type is
#@ID @C {
#typedef enum {
#  KHE_COMB_COVER_YES,
#  KHE_COMB_COVER_NO,
#  KHE_COMB_COVER_PREV,
#  KHE_COMB_COVER_FREE
#} KHE_COMB_COVER_TYPE;
#}
#We'll explain this fully later, but just briefly, @C { KHE_COMB_COVER_YES }
#means that we are only interested in sets of mtasks that cover the
#time group, @C { KHE_COMB_COVER_NO } means that we are not interested
#in sets of mtasks that cover the time group, and so on.
#@PP
#@C { KheCombGrouperDeleteTimeGroupRequirement } undoes a previous call to
#@C { KheCombGrouperAddTimeGroupRequirement } with the same time group.  If
#there has been no such call, @C { KheCombGrouperDeleteTimeGroupRequirement }
#aborts.
#@PP
#Any number of requirements that the groups should cover a given
#mtask may be added:
#@ID @C {
#void KheCombGrouperAddMTaskRequirement(KHE_COMB_GROUPER cg,
#  KHE_MTASK mt, KHE_COMB_COVER_TYPE cover);
#void KheCombGrouperDeleteMTaskRequirement(KHE_COMB_GROUPER cg,
#  KHE_MTASK mt);
#}
#These work in the same way as for time groups.  Care is needed
#because @C { mt } may be rendered undefined if grouping leaves
#@C { mt } empty.  The safest option after a solve whose
#requirements include an mtask is to call
#@C { KheCombGrouperClearRequirements }.
#@PP
#Next we have
#@ID @C {
#void KheCombGrouperAddNoSinglesRequirement(KHE_COMB_GROUPER cg);
#void KheCombGrouperDeleteNoSinglesRequirement(KHE_COMB_GROUPER cg);
#}
#This is concerned with whether mtask sets that contain a single mtask
#are acceptable---an awkward question, as we'll see.  And
#@ID {0.98 1.0} @Scale @C {
#void KheCombGrouperAddPreferredDomainRequirement(KHE_COMB_GROUPER cg,
#  KHE_RESOURCE_GROUP rg);
#void KheCombGrouperDeletePreferredDomainRequirement(KHE_COMB_GROUPER cg);
#}
#specifies that mtasks whose domains resemble @C { rg } are preferred.
#We'll return to all these requirements later.
#@PP
#There is no need to reload requirements between solves.  Requirements
#stay in effect until they are either deleted individually or cleared
#out by @C { KheCombGrouperClearRequirements }.
#@PP
#After all the requirements are added, an actual solve is carried
#out by calling
#@ID @C {
#int KheCombGrouperSolve(KHE_COMB_GROUPER cg, int max_num,
#  KHE_COMB_VARIANT_TYPE cg_variant, KHE_SOLN_ADJUSTER sa,
#  char *debug_str);
#}
#@C { KheCombGrouperSolve } searches the space of all sets of mtasks
#@M { S } that satisfy the requirements passed in by the user, and
#selects one set @M { S prime } of minimum cost @M { c( S prime ) }.
#Using @M { S prime }, it makes as many groups as it can, up to
#@C { max_num } groups, and returns the number it actually made,
#between @C { 0 } and @C { max_num }.  If @M { S prime } contains
#a single mtask, no groups are made and the value returned is 0.
#@PP
#@C { KheCombGrouperSolve } offers several variants of the algorithm
#just described, selected by parameter @C { cg_variant }, which we'll
#describe later.  If parameter @C { sa } is non-@C { NULL }, any
#task assignments made by @C { KheCombGrouperSolve } are stored in
#@C { sa }, so that they can be undone later.  Parameter @C { debug_str }
#is used only by debug code, to say how the grouping came about.  It
#might be @C { "combinatorial grouping" } or @C { "interval grouping" },
#for example.
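The kind of search @C { KheCombGrouperSolve } performs can be illustrated by a miniature, self-contained model.  All names here are hypothetical, not KHE source: each mtask is reduced to a bitmask of the time groups it covers plus a cost, and a brute-force subset search selects a minimum-cost set satisfying `yes` and `no` cover requirements.  The real solver additionally enforces conditions such as disjoint day intervals and the existence of a leader mtask.

```c
#include <limits.h>

#define MAX_MT 16

/* miniature stand-in for an mtask: the time groups it covers
   (bit t set means it covers time group t) and a cost */
typedef struct {
  unsigned covers;
  int cost;
} MINI_MTASK;

/* yes_mask: time groups every solution must cover; no_mask: time
   groups no chosen mtask may cover.  Returns the minimum cost, or
   INT_MAX if no non-empty subset qualifies; *best_set receives the
   chosen subset as a bitmask over mt[0..count-1]. */
int MiniCombSolve(const MINI_MTASK *mt, int count,
  unsigned yes_mask, unsigned no_mask, unsigned *best_set)
{
  unsigned s, covers;
  int i, cost, ok, best = INT_MAX;
  *best_set = 0;
  for( s = 1;  s < (1u << count);  s++ )
  {
    covers = 0;  cost = 0;  ok = 1;
    for( i = 0;  i < count;  i++ )
      if( s & (1u << i) )
      {
        if( mt[i].covers & no_mask )
        {
          ok = 0;           /* covers a forbidden time group */
          break;
        }
        covers |= mt[i].covers;
        cost += mt[i].cost;
      }
    if( ok && (covers & yes_mask) == yes_mask && cost < best )
    {
      best = cost;
      *best_set = s;
    }
  }
  return best;
}
```

This generate-and-test structure, with early pruning, is the shape of the search-space definition given at the end of this section.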
#@PP
#One variant of @C { KheCombGrouperSolve } is different and
#has been given its own interface:
#@ID @C {
#int KheCombGrouperSolveSingles(KHE_COMB_GROUPER cg);
#}
#It makes no groups.  Instead, it counts the number of tasks
#needing assignment that lie in mtasks which satisfy the
#requirements by themselves (not grouped with any other
#mtasks).  These are the tasks we called singles above.
#@PP
#Our tour of the interface of @C { KHE_COMB_GROUPER } ends with function
#@ID @C {
#void KheCombGrouperDebug(KHE_COMB_GROUPER cg, int verbosity,
#  int indent, FILE *fp);
#}
#This produces the usual debug print of @C { cg } onto @C { fp }
#with the given verbosity and indent.
#@PP
#The rest of this section is devoted to a precise description of
#@C { KheCombGrouperSolve }.  There are three things to do here.
#First, we need to specify how the search space of mtask sets is
#determined.  Second, for each mtask set @M { S } in the search
#space we need to define a cost @M { c(S) }.  And third, we need
#to explain the algorithm variants selected by @C { cg_variant }.
#@PP
#For the search space we need some definitions.  A task @I covers a
#time if it, or a task assigned to it directly or indirectly, runs
#at that time (and possibly at other times).  A task covers a time
#group if it covers one or more of the time group's times.  An mtask
#covers a time or time group if its tasks do (they run at the same
#times).  An mtask covers an mtask if it is that mtask.  An mtask
#covers a time group or mtask requirement if it covers that
#requirement's time group or mtask.
## A set of mtasks covers a time, time group, or
## mtask if any of its mtasks covers that time, time group, or mtask.
#@PP
#A set of mtasks @M { S } lies in the search space if it satisfies
#all of the following conditions.  The letters in parentheses at
#the end of each condition will be explained afterwards.
## The solver has three opportunities to make tests which delimit
## the search space:  when it is considering whether to include
## an mtask @C { mt } in the search generally; when it
## is considering whether to add an mtask @C { mt } to its current
## set of mtasks @M { S }; and when it has a complete set @M { S }
## and is considering whether it should be considered part of the
## search space.  The earlier something can be ruled out, the
## faster the solve runs.  Anyway, we'll take each of these
## opportunities in order.
## @PP
## First then, before the solving proper begins, @C { KheCombGrouperSolve }
## finds the full set of mtasks which could possibly occur in mtask sets 
## of interest.  These are all mtasks @C { mt } that satisfy all of
## these conditions:
#@NumberedList
#
#@LI {
#Each mtask in @M { S } covers at least one time group or mtask
#requirement whose @C { cover } is not @C { KHE_COMB_COVER_NO }.
#This condition allows for a generate-and-test approach to building the
#search space:  find the set @M { X } of all mtasks that satisfy this
#condition, then use the usual recursive algorithm to generate all
#subsets @M { S } of @M { X }, then test each @M { S } against each
#of the following conditions.
#(a)
#}
#
#@LI {
#For each @C { mt } in @M { S },
#@C { mt } does not cover any time group or mtask requirement
#whose @C { cover } is @C { KHE_COMB_COVER_NO }.
#(a)
#}
#
## @LI {
## For each @C { mt } in @M { S },
## @C { KheMTaskAssignIsFixed(mt) } is @C { false }, that is, @C { mt }
## is not a set of tasks whose assignments are fixed.
## (a)
## }
#
#@LI {
#For each @C { mt } in @M { S }, @C { mt } contains at least one
#task which is not fixed, not assigned, and for which non-assignment
#has a cost.  That is, @C { KheMTaskAssignIsFixed(mt) } must be
#@C { false } and @C { KheMTaskNeedsAssignment(mt) } must be
#@C { true }.  Only tasks with these properties participate in
#grouping, as discussed above.
## @C { KheMTaskUnassignedTaskCount(mt) > 0 }, that is, @C { mt }
## contains at least one unassigned task.  Any assigned tasks in
## @C { mt } are ignored throughout the solve, in accordance with
## the principle that combinatorial solving ignores assigned tasks.
#(a)
#}
#
## @LI @OneRow {
## If @C { KheCombGrouperAddMTaskFnRequirement } was called, then
## @C { mtask_fn(mt, impl) } is @C { true }.  Here @C { impl }
## is set to @C { KheCombGrouperAddMTaskFnRequirement }'s @C { impl }
## parameter.  There may be at most one call to
## @C { KheCombGrouperAddMTaskFnRequirement } per solve.  If the
## user has several conditions to test, they must be packaged into
## one @C { mtask_fn }.
## (a)
## }
#
## @EndList
## Second we need to consider testing whether a given mtask
## @C { mt } can be added to a growing set of mtasks @M { S },
## that is, whether @M { S } plus @C { mt } could be an element
## of the search space, or a subset of an element of the search
## space.  The conditions here are:
## @NumberedList
#
#@LI @OneRow {
#For each pair of distinct mtasks @C { mt1 } and @C { mt2 } in @M { S },
#@C { KheMTaskInterval(mt1) } and @C { KheMTaskInterval(mt2) } are
#disjoint.  We intend to assign some resource to one task from each
#mtask of @M { S }, so no two of those tasks can run on the same day.
#(b)
#}
#
#@LI @OneRow {
#If @M { S } is non-empty then it contains a @I { leader mtask },
#that is, an mtask containing tasks that can serve as leader tasks
#for the tasks in the other mtasks of @M { S }.  This rules out
#sets @M { S } whose mtasks have incompatible domains.
#(b)
#}
#
#@LI @OneRow {
#If @C { cg_variant == KHE_COMB_VARIANT_SINGLES }, then @M { S }
#contains at most one mtask.  We say more about this below.
#(b)
#}
#
#@LI @OneRow {
#If @C { KheCombGrouperAddNoSinglesRequirement } was called,
#then @M { S } contains at least two mtasks.  Otherwise @M { S }
#contains at least one mtask.
#(c)
#}
#
## @EndList
## Then the solve proper generates, potentially, all subsets of
## this full set of mtasks, checking the following conditions
## along the way.  For each subset @M { S } the following
## conditions are checked before @M { S } is admitted to
## the search space:
## @NumberedList
#
## @LI @OneRow {
## If {0.95 1.0} @Scale @C { KheCombSolverAddMTaskSetFnRequirement }
## was called, then {0.95 1.0} @Scale @C { mtask_set_fn(S, impl) } is
## @C { true }.  Here @C { impl } is set to
## @C { KheCombGrouperAddMTaskSetFnRequirement }'s
## @C { impl } parameter.  There may be at most one call to
## @C { KheCombGrouperAddMTaskFnRequirement } per solve.  If the
## user has several conditions to test, they must be packaged into
## one @C { mtask_set_fn }.
## # @LP
## # It is more efficient to exclude unwanted tasks using @C { mtask_fn }
## # than to wait until an entire set of mtasks is made and exclude
## # the set by calling @C { mtask_set_fn }.  But there are cases where
## # mtasks are acceptable individually but not together, and
## # @C { mtask_set_fn } is useful then.
## (c)
## }
#
#@LI @OneRow {
#Each time group or mtask requirement @M { C } must be satisfied.  What
#this means depends on the value of @M { C }'s @C { cover } parameter,
#as follows:
#@TaggedList
#
#@DTI { @C { KHE_COMB_COVER_YES } }
#{
#At least one of the mtasks
#of @M { S } covers @M { C }'s time group or mtask.
#}
#
#@DTI { @C { KHE_COMB_COVER_NO } }
#{
#None of the mtasks of @M { S } cover @M { C }'s time group or mtask.
#}
#
#@DTI { @C { KHE_COMB_COVER_PREV } }
#{
#This is interpreted like @C { KHE_COMB_COVER_YES } if the preceding time
#group or mtask requirement is covered, and like @C { KHE_COMB_COVER_NO }
#if the preceding time group or mtask requirement is not covered.
#}
#
#@DTI { @C { KHE_COMB_COVER_FREE } }
#{
#@M { C } is free to be covered by @M { S }'s mtasks, or not.
#}
#
#@EndList
#If the first time group or mtask requirement has cover
#@C { KHE_COMB_COVER_PREV }, it is treated like @C { KHE_COMB_COVER_FREE }.
#(c)
#}
#
#@EndList
#Time groups and mtasks not mentioned in any requirement may be
#covered, or not.  The difference between this and a time group
#or mtask with cover @C { KHE_COMB_COVER_FREE } is that mtasks
#that cover a free time group or mtask satisfy condition (1), and
#so may be included in the search space, whereas mtasks that cover
#only unmentioned time groups or mtasks may not.
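The cover rules above can be sketched in C.  This is an illustrative stand-alone model under stated assumptions, not KHE's API:  each requirement is reduced to its `cover` value plus a flag saying whether some mtask of S covers it, and `KHE_COMB_COVER_PREV` compares against whether the immediately preceding requirement is covered (a leading PREV acting like FREE).

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-in for KHE's cover requirement checking; the real
   solver works on grouper objects, not plain arrays. */
typedef enum {
  KHE_COMB_COVER_YES,
  KHE_COMB_COVER_NO,
  KHE_COMB_COVER_PREV,
  KHE_COMB_COVER_FREE
} COVER;

/* covered[i] is true when some mtask of S covers requirement i.
   Return true when every requirement is satisfied. */
bool cover_reqs_satisfied(COVER *cover, bool *covered, int n)
{
  int i;
  for( i = 0;  i < n;  i++ )
    switch( cover[i] )
    {
      case KHE_COMB_COVER_YES:
        if( !covered[i] ) return false;
        break;
      case KHE_COMB_COVER_NO:
        if( covered[i] ) return false;
        break;
      case KHE_COMB_COVER_PREV:
        /* like YES if the preceding requirement is covered, like NO
           if it isn't; a first requirement with PREV acts like FREE */
        if( i > 0 && covered[i] != covered[i-1] ) return false;
        break;
      case KHE_COMB_COVER_FREE:
        break;
    }
  return true;
}
```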
#@PP
#We have so far given the impression that @C { KheCombGrouperSolve }
#generates all subsets @M { S } of the set @M { X } defined in
#condition (1) above, and then tests each @M { S } against these
#conditions.  In fact, it does better.  The letter at the end of
#each condition says when that condition is evaluated:
#@ParenAlphaList
#
#@LI {
#This condition is evaluated just once for each mtask @C { mt },
#at the start of the solve.  If it does not hold, then @C { mt } is
#omitted from the set @M { X } of mtasks that we find all subsets of.
#}
#
#@LI {
#When some set @M { S } does not satisfy this condition, every
#superset of @M { S } also does not satisfy it.  So it is evaluated
#each time we add an mtask to @M { S } when generating all subsets.
#If it fails, that path of the recursive generation of all subsets is
#truncated immediately.
#}
#
#@LI {
#This condition is (and can only be) evaluated when a complete subset has
#been generated.
#}
#
#@EndList
#In addition, for each mtask @C { mt } a list is kept of all time group
#and mtask requirements @M { C } with cover @C { KHE_COMB_COVER_YES } for
#which @C { mt } is the last mtask that covers @M { C }.  Before trying
#the branch of the recursion that omits @C { mt }, the list is traversed
#and if there are any requirements in it that are not yet covered, that
#branch is not taken.
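The pruned recursion just described can be sketched as follows.  Everything here is a toy model under stated assumptions, not KHE code:  mtasks are bare intervals of days, condition (b) is interval disjointness, the only (c) condition is that each required day be covered, and the omit branch is cut short when the omitted mtask is the last possible coverer of a not-yet-covered requirement.

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_MTASKS 8
#define MAX_REQS   4

typedef struct { int first, last; } INTERVAL;

typedef struct {
  INTERVAL mtasks[MAX_MTASKS];  int mtask_count;   /* the set X */
  int req_days[MAX_REQS];       int req_count;     /* YES requirements */
  int sets_found;                                  /* search space size */
} SEARCH;

static bool disjoint(INTERVAL a, INTERVAL b)
{ return a.last < b.first || b.last < a.first; }

static bool covers(INTERVAL a, int day)
{ return a.first <= day && day <= a.last; }

/* is day covered by any selected mtask? */
static bool day_covered(SEARCH *s, bool *in_S, int day)
{
  for( int i = 0;  i < s->mtask_count;  i++ )
    if( in_S[i] && covers(s->mtasks[i], day) ) return true;
  return false;
}

/* recursive generation; i is the next mtask to include or omit */
static void search(SEARCH *s, bool *in_S, int i)
{
  if( i >= s->mtask_count )
  {
    /* condition (c):  every YES requirement must be covered */
    for( int r = 0;  r < s->req_count;  r++ )
      if( !day_covered(s, in_S, s->req_days[r]) ) return;
    s->sets_found++;
    return;
  }

  /* include branch, pruned by condition (b):  disjoint intervals */
  bool ok = true;
  for( int j = 0;  j < i && ok;  j++ )
    if( in_S[j] && !disjoint(s->mtasks[i], s->mtasks[j]) ) ok = false;
  if( ok )
  {
    in_S[i] = true;
    search(s, in_S, i + 1);
    in_S[i] = false;
  }

  /* omit branch, pruned when mtask i is the last possible coverer
     of some not-yet-covered YES requirement */
  for( int r = 0;  r < s->req_count;  r++ )
  {
    bool last = covers(s->mtasks[i], s->req_days[r]);
    for( int j = i + 1;  j < s->mtask_count && last;  j++ )
      if( covers(s->mtasks[j], s->req_days[r]) ) last = false;
    if( last && !day_covered(s, in_S, s->req_days[r]) ) return;
  }
  search(s, in_S, i + 1);
}

/* number of complete sets admitted to the search space */
int count_search_space(SEARCH *s)
{
  bool in_S[MAX_MTASKS] = { false };
  s->sets_found = 0;
  search(s, in_S, 0);
  return s->sets_found;
}
```

With mtasks on days [0,1], [2,3], and [1,2] and day 2 required, the admitted sets are {[2,3]}, {[1,2]}, and {[0,1],[2,3]}.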
## @PP
## Mtasks @C { mt } for which @C { KheMTaskAssignIsFixed(mt) } is
## @C { true } are of no use in grouping, since their assignments
## cannot be changed.  It is true that they could be leader mtasks,
## since leader tasks' assignments are not changed.  But that allows
## at most one fixed task per group, and there are the tasks' domains
## to consider too.  Altogether fixed tasks don't go well with grouping.
## @PP
## Ignoring assigned tasks is harder to justify.  A task assigned
## resource @C { r } could be grouped with some unassigned tasks,
## leaving all of them assigned @C { r }.  The author might revisit
## this rule in the future, if practice demands it.  A key issue
## is the interaction between grouping and assign by history (the
## usual source of assignments during this early stage of the solve).
#@PP
#There is no prohibition on passing in a Yes cover requirement for
#an mtask which cannot be part of any @M { S } because it fails
#to satisfy one of the (a) conditions.  For example, we could
#require the solve to cover an mtask whose tasks were all assigned.
#This condition is impossible to satisfy, so the result will be
#that @C { KheCombGrouperSolve } finds no groups and returns 0.
#@PP
#We said above that the first step is to build the set @M { X } of all
#mtasks that satisfy the first condition.  Before doing anything further,
#this set is sorted so that mtasks whose first busy day is earlier
#come before mtasks whose first busy day is later.  If there is a
#preferred domain (that is, if
#@C { KheCombGrouperAddPreferredDomainRequirement } was called),
#two further sort keys apply:  as a second priority, mtasks whose
#domain is a superset of the preferred domain come before mtasks
#whose domain is not, and as a third priority, mtasks with smaller
#domains come before mtasks with larger domains.  This ensures
#that mtasks with preferred domains are tried first, which means
#that sets of mtasks with preferred domains are tested first,
#making them more likely to be chosen, but without actually ruling
#out any set of mtasks.
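The three-key sort might be sketched like this, modelling domains as bitsets of resources.  `MTASK_INFO`, `preferred_domain`, and `cmp_mtasks` are hypothetical names introduced for illustration, not part of KHE.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-in for an mtask's sort keys. */
typedef struct {
  int first_busy_day;
  unsigned domain;            /* bit i set = resource i in domain */
} MTASK_INFO;

static unsigned preferred_domain;   /* set before sorting */

static int popcount(unsigned x)
{ int n = 0;  while( x ) { n += x & 1;  x >>= 1; }  return n; }

static int cmp_mtasks(const void *a, const void *b)
{
  const MTASK_INFO *ma = a, *mb = b;

  /* first priority:  earlier first busy day */
  if( ma->first_busy_day != mb->first_busy_day )
    return ma->first_busy_day - mb->first_busy_day;

  /* second priority:  supersets of the preferred domain come first */
  int sa = ((ma->domain & preferred_domain) == preferred_domain);
  int sb = ((mb->domain & preferred_domain) == preferred_domain);
  if( sa != sb ) return sb - sa;

  /* third priority:  smaller domains come first */
  return popcount(ma->domain) - popcount(mb->domain);
}

/* convenience for the three-element demo:  sort m and report whether
   the resulting domain order is d0, d1, d2 */
int sorted_order_is(MTASK_INFO *m, unsigned pref,
  unsigned d0, unsigned d1, unsigned d2)
{
  preferred_domain = pref;
  qsort(m, 3, sizeof(MTASK_INFO), cmp_mtasks);
  return m[0].domain == d0 && m[1].domain == d1 && m[2].domain == d2;
}
```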
#@PP
#The second thing we need to do is to explain how the cost @M { c(S) }
#of each set of mtasks @M { S } is defined.  By the conditions above,
#@M { S } is non-empty and contains a leader mtask.
#@PP
#Let @M { I } be the smallest interval of days such that all the mtasks
#in @M { X }, as defined by conditions (1) and (a) above, run entirely
#within those days, plus (for safety) one extra day on each side.  This
#is the grouper's idea of the part of the cycle affected by the current
#solve.  Take the leader mtask of @M { S } and search its domain (as 
#returned by @C { KheMTaskDomain }) for a resource @M { r } which is
#free and available throughout @M { I }.  Most resources are free during
#grouping, and most resources are available (not subject to avoid
#unavailable times constraints) most of the time, so @M { r } should
#be easy to find; but if there is no such @M { r }, ignore @M { S }.
#@PP
#Assign @M { r } to each mtask of @M { S }.  The cost @M { c(S) } of
#@M { S } is determined while the assignments are in place.  It is the
#total cost of all cluster busy times and limit busy times monitors
#which monitor @M { r } and have times lying entirely within the times
#of the days @M { I }.  We limit ourselves to monitors within @M { I }
#because we don't want @M { r }'s global workload, for example, to
#influence the outcome.  We add one day on each side so as not to miss
#monitors that prohibit certain local patterns, such as incomplete
#weekends.  This is admittedly ad hoc, but it seems to work.  After
#the cost is worked out, the assignments of @M { r } added to the
#mtasks of @M { S } are removed.
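A minimal sketch of this cost calculation, with hypothetical `SPAN` stand-ins for KHE's intervals and monitors (the real solver queries live monitor costs while the trial assignments are in place):

```c
#include <assert.h>

typedef struct { int first, last; } SPAN;

/* Smallest interval containing all the mtask spans, widened by one
   (safety) day on each side:  the grouper's idea of the part of the
   cycle affected by the current solve. */
SPAN affected_interval(SPAN *mtasks, int n)
{
  SPAN I = { mtasks[0].first, mtasks[0].last };
  for( int i = 1;  i < n;  i++ )
  {
    if( mtasks[i].first < I.first ) I.first = mtasks[i].first;
    if( mtasks[i].last > I.last )   I.last = mtasks[i].last;
  }
  I.first -= 1;  I.last += 1;
  return I;
}

/* Total cost of monitors whose times lie entirely within I; monitors
   reaching outside I (e.g. global workload) are excluded. */
int local_cost(SPAN I, SPAN *monitor_span, int *monitor_cost, int n)
{
  int total = 0;
  for( int i = 0;  i < n;  i++ )
    if( I.first <= monitor_span[i].first && monitor_span[i].last <= I.last )
      total += monitor_cost[i];
  return total;
}
```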
## covered by the time groups added by calls to
## @C { KheCombGrouperAddTimeGroupRequirement }.
## This second condition is included because we don't want @M { r }'s
## global workload, for example, to influence the outcome.
## @PP
#@PP
#The third and last thing we need to do is to explain the
#@C { cg_variant } parameter.  It has type
#@ID @C {
#typedef enum {
#  KHE_COMB_VARIANT_MIN,
#  KHE_COMB_VARIANT_ZERO,
#  KHE_COMB_VARIANT_SOLE_ZERO,
#  KHE_COMB_VARIANT_SINGLES
#} KHE_COMB_VARIANT_TYPE;
#}
#and allows the user to select one of four variants of the basic
#algorithm, as follows.
#@PP
#If @C { cg_variant } is @C { KHE_COMB_VARIANT_MIN }, then
#a subset @M { S prime } is chosen such that @M { c( S prime ) }
#is minimum among all @M { c(S) }, as described above.  This
#will be possible as long as the search space contains at
#least one @M { S } satisfying the conditions.  If it
#doesn't, no groups are made.
#@PP
#If @C { cg_variant } is @C { KHE_COMB_VARIANT_ZERO } or
#@C { KHE_COMB_VARIANT_SOLE_ZERO }, then @M { c( S prime ) } must
#also be 0, and in the second case there must be no other @M { S }
#satisfying the conditions such that @M { c(S) } is 0.  If these
#conditions are not met, no groups are made.
#@PP
#If @C { cg_variant } is @C { KHE_COMB_VARIANT_SINGLES },
#the behaviour is different.  No groups are made.  Instead,
#@C { KheCombGrouperSolve } returns the number of individual,
#ungrouped tasks which satisfy the given requirements.  (If
#the requirements include `no singles', this will be 0.)
#This variant is accessed by calling @C { KheCombGrouperSolveSingles },
#not @C { KheCombGrouperSolve }.
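The way the four variants interpret the candidate costs can be sketched as follows.  `choose_set` is a hypothetical helper, not a KHE function; it works on an array of candidate costs rather than on sets of mtasks, returning the index of the chosen set or -1 when no set qualifies.

```c
#include <assert.h>

/* Restated from the text above. */
typedef enum {
  KHE_COMB_VARIANT_MIN,
  KHE_COMB_VARIANT_ZERO,
  KHE_COMB_VARIANT_SOLE_ZERO,
  KHE_COMB_VARIANT_SINGLES
} KHE_COMB_VARIANT_TYPE;

/* costs[i] is c(S) for the i-th complete set in the search space */
int choose_set(int *costs, int n, KHE_COMB_VARIANT_TYPE variant)
{
  int best = -1, zeroes = 0;
  for( int i = 0;  i < n;  i++ )
  {
    if( best == -1 || costs[i] < costs[best] ) best = i;
    if( costs[i] == 0 ) zeroes++;
  }
  switch( variant )
  {
    case KHE_COMB_VARIANT_MIN:
      return best;                            /* minimum cost, if any */
    case KHE_COMB_VARIANT_ZERO:
      return (best >= 0 && costs[best] == 0) ? best : -1;
    case KHE_COMB_VARIANT_SOLE_ZERO:
      return (zeroes == 1) ? best : -1;       /* unique zero cost */
    default:                                  /* SINGLES */
      return -1;                              /* no groups are made */
  }
}
```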
#@PP
#Let us call an mtask that satisfies the requirements without
#any grouping a @I { single }.  Singles raise some awkward questions
#for combinatorial grouping.  What to do about them seems to vary
#depending on why combinatorial grouping is being called, so
#instead of dealing with them in a fixed way, the grouper
#offers three features that help with them.
#@PP
#First, if the set of mtasks @M { S prime } with minimum or zero
#cost contains only one mtask, @C { KheCombGrouperSolve } accepts
#it as best but makes no groups from it, returning 0 for
#the number of groups made.  It is natural not to make any task
#assignments, because each of them is from a task from one
#mtask of @M { S prime } to a task from another mtask of
#@M { S prime }, which is not possible when @M { S prime }
#contains only one mtask.  But it is arguable that each
#unassigned task from that one mtask is a satisfactory group
#which should be reported.  However, the value returned here
#is 0, as we said.
#@PP
#Second, by calling @C { KheCombGrouperAddNoSinglesRequirement },
#the user may declare that a set @M { S } containing just one
#mtask should be excluded from the search space.  But this
#is not a magical solution to the problem of singles.  For
#example, when we need a unique zero-cost set of mtasks, we
#may want to include singles in the search space, to show that
#grouping is better than doing nothing.  We need to think
#about the significance of singles in the current context.
## And there may still be an
## @M { S } containing one single and another mtask which covers a time
## group or mtask with cover type @C { KHE_COMB_COVER_FREE }.
#@PP
#Third, after setting up a problem, one can call
#@C { KheCombGrouperSolveSingles }.  This searches the requested space, but,
#as we have seen, it does no grouping, instead returning the total number
#of tasks lying in singles.  If our aim is to produce a certain number of
#groups, we can treat these singles as pre-existing groups, subtract
#their number from our target, and run again with `no singles' on.
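Under that workflow the arithmetic is simply the following (a hypothetical helper, not part of KHE):

```c
#include <assert.h>

/* If the aim is `target' groups and the singles count reports
   `singles' pre-existing groups, the remaining target for the
   follow-up run with `no singles' on is the difference, never
   below zero. */
int remaining_group_target(int target, int singles)
{
  return target > singles ? target - singles : 0;
}
```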
#@End @SubSection

@EndSubSections
@End @Section

@EndSections
@End @Chapter
