@Chapter
    @Title { Resource-Structural Solvers }
    @Tag { resource_structural }
@Begin
@LP
This chapter documents the solvers packaged with KHE that modify
the resource structure of a solution:  grouping and ungrouping tasks,
and so on.  These solvers may alter resource assignments, but only
occasionally, and incidentally to their structural work.
# We also include here one solver which adjusts resource monitors.
@BeginSections

@Section
    @Title { Task bound groups }
    @Tag { resource_structural.task_bound_groups }
@Begin
@LP
Task domains are reduced by adding task bound objects to tasks
(Section {@NumberOf solutions.tasks.domains}).  Frequently, task
bound objects need to be stored somewhere they can be found and
deleted later.  The required data structure is trivial---just an array
of task bounds---but it is convenient to have a standard for it, so
KHE defines a type @C { KHE_TASK_BOUND_GROUP } with suitable operations.
@PP
To create a task bound group, call
@ID @C {
KHE_TASK_BOUND_GROUP KheTaskBoundGroupMake(KHE_SOLN soln);
}
To add a task bound to a task bound group, call
@ID @C {
void KheTaskBoundGroupAddTaskBound(KHE_TASK_BOUND_GROUP tbg,
  KHE_TASK_BOUND tb);
}
To visit the task bounds of a task bound group, call
@ID {0.96 1.0} @Scale @C {
int KheTaskBoundGroupTaskBoundCount(KHE_TASK_BOUND_GROUP tbg);
KHE_TASK_BOUND KheTaskBoundGroupTaskBound(KHE_TASK_BOUND_GROUP tbg, int i);
}
To delete a task bound group, including deleting all the task
bounds in it, call
@ID @C {
bool KheTaskBoundGroupDelete(KHE_TASK_BOUND_GROUP tbg);
}
This function returns @C { true } when every call it makes to
@C { KheTaskBoundDelete } returns @C { true }.
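@PP
Since the required data structure really is just an extensible array,
it can be sketched in a few lines of plain C.  The following is an
illustration only, not KHE source; the @C { TASK_BOUND } type and its
delete function are simplified stand-ins:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/* Simplified stand-in for KHE_TASK_BOUND; illustration only. */
typedef struct task_bound { int id; } TASK_BOUND;

/* Deleting a task bound can fail in KHE; this stand-in always succeeds. */
static bool TaskBoundDelete(TASK_BOUND *tb) { free(tb); return true; }

/* The group itself:  just an extensible array of task bounds. */
typedef struct {
  TASK_BOUND **items;
  int count, capacity;
} TASK_BOUND_GROUP;

static TASK_BOUND_GROUP *TaskBoundGroupMake(void) {
  TASK_BOUND_GROUP *tbg = malloc(sizeof *tbg);
  tbg->count = 0;  tbg->capacity = 4;
  tbg->items = malloc(tbg->capacity * sizeof *tbg->items);
  return tbg;
}

static void TaskBoundGroupAddTaskBound(TASK_BOUND_GROUP *tbg, TASK_BOUND *tb) {
  if( tbg->count == tbg->capacity ) {
    tbg->capacity *= 2;
    tbg->items = realloc(tbg->items, tbg->capacity * sizeof *tbg->items);
  }
  tbg->items[tbg->count++] = tb;
}

static int TaskBoundGroupTaskBoundCount(TASK_BOUND_GROUP *tbg) {
  return tbg->count;
}

/* Delete the group and every bound in it; true iff every deletion succeeds. */
static bool TaskBoundGroupDelete(TASK_BOUND_GROUP *tbg) {
  bool res = true;
  int i;
  for( i = 0;  i < tbg->count;  i++ )
    res = TaskBoundDelete(tbg->items[i]) && res;
  free(tbg->items);
  free(tbg);
  return res;
}
```

As in KHE, deleting the group deletes the bounds it contains, and the
result is @C { true } exactly when every individual deletion succeeds.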
@End @Section

@Section
    @Title { Task trees }
    @Tag { resource_structural.task_trees }
@Begin
@LP
What meets do for time, tasks do for resources.  A meet has a time
domain and assignment; a task has a resource domain and assignment.
Link events constraints cause meets to be assigned to other meets;
avoid split assignments constraints cause tasks to be assigned to
other tasks.
@PP
There are differences.  Tasks lie in meets, but meets do not lie
in tasks.  Task assignments do not have offsets, because there is
no ordering of resources like chronological order for times.
@PP
Since the layer tree is successful in structuring meets for
time assignment, let us see what an analogous tree for structuring
tasks for resource assignment would look like.  A layer tree is
a tree whose nodes each contain a set of meets.  The root node
contains the cycle meets.  A meet's assignment, if present, lies
in the parent of its node.  By convention, meets lying outside
nodes have fixed assignments to meets lying inside nodes, and
those assignments do not change.
@PP
A @I { task tree }, then, is a tree whose nodes each contain a set of
tasks.  The root node contains the cycle tasks (or there might be
several root nodes, one for each resource type).  A task's
assignment, if present, lies in the parent of its node.  By
convention, tasks lying outside nodes have fixed assignments to
tasks lying inside nodes, and those assignments do not change.
@PP
Type @C { KHE_TASKING } is KHE's nearest equivalent to a task
tree node.  It holds an arbitrary set of tasks, but there is
no support for organizing taskings into a tree structure, since
that does not seem to be needed.  It is useful, however, to look
at how tasks are structured in practice, and to relate this to
task trees, even though they are not explicitly supported by KHE.
@PP
A task may be assigned to a non-cycle task and fixed, to implement
an avoid split assignments constraint.  Such tasks would therefore
lie outside nodes (if there were any).  When a solver assigns a
task to a cycle task, the task would have to lie in a child node
of a node containing the cycle tasks (again, if there were any).
So there are three levels:  a first level of nodes containing
the cycle tasks; a second level of nodes containing unfixed tasks
wanting to be assigned resources; and a third level of fixed,
assigned tasks that do not lie in nodes.
@PP
This shows that the three-way classification of tasks presented
in Section {@NumberOf solutions.tasks.asst}, into cycle tasks,
unfixed tasks, and fixed tasks, is a proxy for the missing task
tree structure.  Cycle tasks are first-level tasks, unfixed tasks
are second-level tasks, and fixed tasks are third-level tasks.
@C { KHE_TASKING } is only needed for representing second-level
nodes, since tasks at the other levels do not require assignment.
By convention, then, taskings will contain only unfixed tasks.
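@PP
The proxy classification can be stated compactly in code.  The
following sketch is an illustration only; its boolean fields stand in
for queries that KHE provides in other forms:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical summary of a task's status; illustration only. */
typedef struct { bool is_cycle; bool asst_fixed; } TASK;

/* First level:  cycle tasks.  Third level:  fixed assigned tasks.
   Second level:  the remaining, unfixed tasks. */
static int TaskTreeLevel(const TASK *t) {
  if( t->is_cycle ) return 1;
  if( t->asst_fixed ) return 3;
  return 2;
}

/* By convention, only second-level (unfixed) tasks belong in a tasking. */
static bool BelongsInTasking(const TASK *t) {
  return TaskTreeLevel(t) == 2;
}
```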
@End @Section

@Section
    @Title { Task tree construction }
    @Tag { resource_structural.task_tree.construction }
@Begin
@LP
KHE offers a solver for building a task tree holding the tasks
of a given solution:
@ID @C {
bool KheTaskTreeMake(KHE_SOLN soln, KHE_OPTIONS options);
}
As usual, this solver returns @C { true } if it changes the
solution.  Like any good solver, this function has no special
access to data behind the scenes.  Instead, it works by calling
basic operations and helper functions:
@BulletList

@LI {
It calls @C { KheTaskingMake } to make one tasking for each resource
type of @C { soln }'s instance, and it calls @C { KheTaskingAddTask }
to add the unfixed tasks of each type to the tasking it made for that type.
These taskings may be accessed by calling @C { KheSolnTaskingCount }
and @C { KheSolnTasking } as usual, and they are returned in an order
suited to resource assignment, as follows.  Taskings for which
@C { KheResourceTypeDemandIsAllPreassigned(rt) } is @C { true }
come first.  Their tasks will be assigned already if
@C { KheSolnAssignPreassignedResources } has been called, as it
usually has been.  The remaining taskings are sorted by decreasing
order of @C { KheResourceTypeAvoidSplitAssignmentsCount(rt) }.
These functions are described in Section {@NumberOf resource_types}.
Of course, the user is not obliged to follow this ordering.  It is
a precondition of @C { KheTaskTreeMake } that @C { soln } must have
no taskings when it is called.
}

@LI {
It calls @C { KheTaskAssign } to convert resource preassignments into
resource assignments, and to satisfy avoid split assignments constraints,
as far as possible.  Existing assignments are preserved (no calls to
@C { KheTaskUnAssign } are made).
}

@LI {
It calls @C { KheTaskAssignFix } to fix the assignments it makes
to satisfy avoid split assignments constraints.  These may be removed
later.  At present it does not call @C { KheTaskAssignFix } to fix
assignments derived from preassignments, although it probably should.
}

@LI {
It calls @C { KheTaskSetDomain } to set the domains of tasks to
satisfy preassigned resources, prefer resources constraints, and
other influences on task domains, as far as possible.
@C { KheTaskTreeMake } never adds a resource to any domain, however;
it either leaves a domain unchanged, or reduces it to a subset of
its initial value.
}

@EndList
These elements interact in ways that make them impossible to
separate.  For example, a prefer resources constraint that
applies to one task effectively applies to all the tasks that
are linked to it, directly or indirectly, by avoid split
assignments constraints.
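@PP
The ordering of taskings described in the first list item amounts to a
sort with a two-part key.  The following comparator is a sketch, not
KHE source; a hypothetical summary struct stands in for the real
resource type queries:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/* Hypothetical summary of one resource type; the fields stand in for
   KheResourceTypeDemandIsAllPreassigned and
   KheResourceTypeAvoidSplitAssignmentsCount. */
typedef struct {
  bool all_preassigned;
  int avoid_split_count;
} RTYPE;

/* All-preassigned types come first; the rest follow in decreasing
   order of avoid split assignments count. */
static int RTypeCmp(const void *p1, const void *p2) {
  const RTYPE *a = p1, *b = p2;
  if( a->all_preassigned != b->all_preassigned )
    return a->all_preassigned ? -1 : 1;
  return b->avoid_split_count - a->avoid_split_count;
}
```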
@PP
@C { KheTaskTreeMake } does not refer directly to any options.
However, it calls function @C { KheTaskingMakeTaskTree }, described
below, and so it is indirectly influenced by the options that that
function consults.
@PP
The implementation of @C { KheTaskTreeMake } has two stages.  The
first creates one tasking for each resource type of @C { soln }'s
instance, in the order described, and adds to each the unfixed tasks
of its type.  This stage can be carried out separately by repeated
calls to
@ID @C {
KHE_TASKING KheTaskingMakeFromResourceType(KHE_SOLN soln,
  KHE_RESOURCE_TYPE rt);
}
which makes a tasking containing the unfixed tasks of @C { soln } of
type @C { rt }, or of all types if @C { rt } is @C { NULL }.  It
aborts if any of these unfixed tasks already lies in a tasking.
@PP
The second stage is more complex.  It applies public function
@ID @C {
bool KheTaskingMakeTaskTree(KHE_TASKING tasking,
  KHE_TASK_BOUND_GROUP tbg, KHE_OPTIONS options);
}
to each tasking made by the first stage.  When @C { KheTaskingMakeTaskTree }
is called from within @C { KheTaskTreeMake }, its @C { options } parameter
is inherited from @C { KheTaskTreeMake }.
@PP
As described for @C { KheTaskTreeMake }, @C { KheTaskingMakeTaskTree }
assigns tasks and tightens domains; it does not unassign tasks or
loosen domains.  Only tasks in @C { tasking } are affected.  If
@C { tbg } is non-@C { NULL }, any task bounds created while tightening
domains are added to @C { tbg }.  Tasks assigned to non-cycle tasks
have their assignments fixed, and so are deleted from @C { tasking }.
@PP
The implementation of @C { KheTaskingMakeTaskTree } imitates the layer
tree construction algorithm:  it applies @I jobs in decreasing priority
order.  There are fewer kinds of jobs, but the situation is more complex
in another way:  sometimes, some kinds of jobs are wanted but not others.
The three kinds of jobs of highest priority install existing domains and
task assignments, and assign resources to unassigned tasks derived from
preassigned event resources.  These jobs are always included; the first
two always succeed, and so does the third unless the user has made
peculiar task or domain assignments earlier.  The other kinds of jobs
are optional, and whether they are included or not depends on the
options (other than @C { rs_invariant }) described next.
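@PP
The job mechanism itself can be sketched abstractly.  The following is
an illustration of the general pattern only (collect jobs, sort by
decreasing priority, attempt each in turn), not KHE's actual
implementation:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/* A job:  a priority (e.g. a combined constraint weight), a body that
   attempts the job and reports success, and the body's data. */
typedef struct {
  int priority;
  bool (*run)(void *data);
  void *data;
} JOB;

static int JobDecreasingPriorityCmp(const void *p1, const void *p2) {
  const JOB *a = p1, *b = p2;
  return b->priority - a->priority;
}

/* Attempt every job in decreasing priority order; return the number
   of jobs that succeeded.  Failed jobs are simply skipped over. */
static int RunJobs(JOB *jobs, int count) {
  int i, succeeded = 0;
  qsort(jobs, count, sizeof *jobs, JobDecreasingPriorityCmp);
  for( i = 0;  i < count;  i++ )
    if( jobs[i].run(jobs[i].data) )
      succeeded++;
  return succeeded;
}

/* A trivial job body for demonstration:  succeeds when *data > 0. */
static bool DemoRun(void *data) { return *(int *)data > 0; }
```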
@PP
@C { KheTaskingMakeTaskTree } consults the following options.
# Those other
# than @F rs_invariant apply only to constraints @C { c } such that
# @C { KheConstraintCombinedWeight(c) } is not minimal take part.
# This is a simple attempt to limit structural changes to
# cases that make a significant difference.
@TaggedList

@DTI { @F rs_invariant } {
A Boolean option which, when @C { true }, causes @C { KheTaskTreeMake }
to omit assignments and domain tightenings which violate the resource
assignment invariant (Section {@NumberOf resource_solvers.invt}).
}

@DTI { @F rs_task_tree_prefer_hard_off } {
A Boolean option which, when @C { false }, causes @C { KheTaskTreeMake }
to make a job for each point of application of each hard prefer
resources constraint of non-zero weight.  The priority of the
job is the combined weight of its constraint, and it attempts
to reduce the domains of the tasks of @C { tasking } monitored
by the constraint's monitors so that they are subsets of the
constraint's domain.
}

@DTI { @F rs_task_tree_prefer_soft } {
Like @F rs_task_tree_prefer_hard_off except that it applies to
soft prefer resources constraints instead of hard ones, and its sense
is reversed so that the default value (@C { false } as usual) omits
these jobs.  The author has encountered cases where reducing domains
to enforce soft prefer resources constraints is harmful.
}

@DTI { @F rs_task_tree_split_hard_off } {
A Boolean option which, when @C { false }, causes @C { KheTaskTreeMake }
to make a job for each point of application of each hard avoid split
assignments constraint of non-zero weight.  Its priority is the
combined weight of its constraint, and it attempts to assign the
tasks of @C { tasking } to each other so that all the tasks of
the job's point of application of the constraint are assigned,
directly or indirectly, to the same root task.
}

@DTI { @F rs_task_tree_split_soft_off } {
Like @F rs_task_tree_split_hard_off except that it applies to
soft avoid split assignments constraints rather than hard ones.
}

@DTI { @F rs_task_tree_limit_busy_hard_off } {
A Boolean option which, when @C { false }, causes @C { KheTaskTreeMake }
to make a job for each point of application of each limit busy times
constraint with non-zero weight and maximum limit 0.  Its priority is
the combined weight of its constraint, and it attempts to reduce the
domains of those tasks of @C { tasking } which lie in events
preassigned the times of the constraint, to eliminate its resources,
since assigning them to these tasks must violate this constraint.
However, the resulting domain must have at least two elements; if
not, the reduction is undone, on the grounds that it is too severe
and that it is better to allow the constraint to be violated.
@LP
This flag also applies to cluster busy times constraints with
maximum limit 0, or rather to their positive time groups.
These are essentially the same as the time groups of limit
busy times constraints when the maximum limit is 0.
}

@DTI { @F rs_task_tree_limit_busy_soft_off } {
Like @F rs_task_tree_limit_busy_hard_off except that it applies to
soft limit busy times constraints rather than hard ones.
}

@EndList
By default, all of these kinds of jobs are included, except those
controlled by @F rs_task_tree_prefer_soft.
@End @Section

@Section
    @Title { Classifying resources by available workload }
    @Tag { resource_structural.classify_by_workload }
@Begin
@LP
Resources with high workload limits, as indicated by functions
@C { KheResourceMaxBusyTimes } and @C { KheResourceMaxWorkload }
(Section {@NumberOf solutions.avail}), may be harder to exploit
than resources with lower workload limits, so it may make sense
to timetable them first.  Function
@ID @C {
bool KheClassifyResourcesByWorkload(KHE_SOLN soln,
  KHE_RESOURCE_GROUP rg, KHE_RESOURCE_GROUP *rg1,
  KHE_RESOURCE_GROUP *rg2);
}
helps with that.  It partitions @C { rg } into two resource groups,
@C { rg1 } and @C { rg2 }, such that the highest workload resources
are in @C { rg1 }, and the rest are in @C { rg2 }.  It returns
@C { true } if it succeeds, and @C { false } if not, which will be
because the resources of @C { rg } all have equal limits.
@PP
If @C { KheClassifyResourcesByWorkload } returns @C { true }, every
resource in @C { rg1 } has a maximal value of @C { KheResourceMaxBusyTimes }
and a maximal value of @C { KheResourceMaxWorkload }, and every element
of @C { rg2 } has a non-maximal value of @C { KheResourceMaxBusyTimes }
or a non-maximal value of @C { KheResourceMaxWorkload }.  If it returns
@C { false }, then @C { rg1 } and @C { rg2 } are @C { NULL }.
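@PP
The partition rule can be sketched as follows.  This is an
illustration only; the @C { RES } struct stands in for calls to
@C { KheResourceMaxBusyTimes } and @C { KheResourceMaxWorkload }:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical availability summary of one resource, standing in for
   KheResourceMaxBusyTimes and KheResourceMaxWorkload. */
typedef struct { int max_busy_times; int max_workload; } RES;

/* Mark resource i with in_rg1[i] == true when both of its limits are
   maximal over r[0..n-1].  Return false when every resource would land
   in rg1, i.e. when all resources have equal limits. */
static bool ClassifyByWorkload(const RES *r, int n, bool *in_rg1) {
  int i, mb = r[0].max_busy_times, mw = r[0].max_workload, rg2_count = 0;
  for( i = 1;  i < n;  i++ ) {
    if( r[i].max_busy_times > mb ) mb = r[i].max_busy_times;
    if( r[i].max_workload > mw ) mw = r[i].max_workload;
  }
  for( i = 0;  i < n;  i++ ) {
    in_rg1[i] = (r[i].max_busy_times == mb && r[i].max_workload == mw);
    if( !in_rg1[i] ) rg2_count++;
  }
  return rg2_count > 0;
}
```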
@End @Section

@Section
    @Title { Limits on consecutive days }
    @Tag { resource_structural.consec }
@Begin
@LP
Nurse rostering instances typically place minimum and maximum
limits on the number of consecutive days that a resource can
be free, busy, or busy working a particular shift.  These limits
are scattered through constraints and may be hard to find.  This
section makes that easy.
@PP
An object called a @I { consec solver } is used for this.  To
create one, call
@ID @C {
KHE_CONSEC_SOLVER KheConsecSolverMake(KHE_SOLN soln, KHE_FRAME frame);
}
It uses memory from an arena taken from @C { soln }.  Its attributes
may be retrieved by calling
@ID @C {
KHE_SOLN KheConsecSolverSoln(KHE_CONSEC_SOLVER cs);
KHE_FRAME KheConsecSolverFrame(KHE_CONSEC_SOLVER cs);
}
The frame must contain at least one time group; otherwise
@C { KheConsecSolverMake } will abort.
@PP
To delete a solver when it is no longer needed, call
@ID @C {
void KheConsecSolverDelete(KHE_CONSEC_SOLVER cs);
}
This works by returning the arena to the solution.
@PP
To find the limits for a particular resource, call
@ID {0.98 1.0} @Scale @C {
void KheConsecSolverFreeDaysLimits(KHE_CONSEC_SOLVER cs, KHE_RESOURCE r,
  int *history, int *min_limit, int *max_limit);
void KheConsecSolverBusyDaysLimits(KHE_CONSEC_SOLVER cs, KHE_RESOURCE r,
  int *history, int *min_limit, int *max_limit);
void KheConsecSolverBusyTimesLimits(KHE_CONSEC_SOLVER cs, KHE_RESOURCE r,
  int offset, int *history, int *min_limit, int *max_limit);
}
For any resource @C { r }, these functions return, through their last
three parameters, the history (see below), the minimum limit, and the
maximum limit on the number of consecutive free days, consecutive busy
days, and consecutive busy times, respectively.  For the third function,
the times in question are those appearing @C { offset } places into each
time group of @C { frame }:  setting @C { offset } to 0 might return the
history and limits on the number of consecutive early shifts, setting it
to 1 might return those for consecutive day shifts, and so on.  The
largest offset acceptable to @C { KheConsecSolverBusyTimesLimits } is
returned by
@ID @C {
int KheConsecSolverMaxOffset(KHE_CONSEC_SOLVER cs);
}
An @C { offset } larger than this, or negative, produces an abort.
@PP
The @C { *history } values return history:  the number of consecutive
free days, consecutive busy days, and consecutive busy times with the
given @C { offset } in the timetable of @C { r } directly before the
timetable proper begins.  They are taken from the history values of the
same constraints that determine the @C { *min_limit } and @C { *max_limit }
values.
@PP
All these results are based on the frame passed to
@C { KheConsecSolverMake }, which would always be the common frame.
They are calculated by finding all limit active intervals constraints
with non-zero weight, comparing their time groups with the frame
time groups, and checking their polarities.  In effect this
reverse-engineers what programs like NRConv do when they convert
specialized nurse rostering formats to XESTT.
@PP
If no constraint applies, @C { *history } and @C { *min_limit } are set
to 0, and @C { *max_limit } is set to @C { KheFrameTimeGroupCount(frame) }.
In the unlikely event that more than one constraint applies,
@C { *history } and @C { *min_limit } are set to the largest of the
values from the separate constraints, and @C { *max_limit } is set
to the smallest of the values from the separate constraints.
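@PP
The merging rule can be sketched directly.  This illustration is not
KHE source; it merely restates the defaults and the max/min
combination just described:

```c
#include <assert.h>

/* The contribution of one applicable constraint; illustration only. */
typedef struct { int history, min_limit, max_limit; } LIMITS;

/* Merge n contributions:  history and min_limit take the largest value,
   max_limit the smallest.  When n == 0 the defaults apply:  history 0,
   minimum 0, maximum equal to the number of frame time groups. */
static LIMITS MergeLimits(const LIMITS *c, int n, int frame_time_group_count) {
  LIMITS res = { 0, 0, frame_time_group_count };
  int i;
  for( i = 0;  i < n;  i++ ) {
    if( c[i].history > res.history ) res.history = c[i].history;
    if( c[i].min_limit > res.min_limit ) res.min_limit = c[i].min_limit;
    if( c[i].max_limit < res.max_limit ) res.max_limit = c[i].max_limit;
  }
  return res;
}
```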
@PP
Finally,
@ID @C {
void KheConsecSolverDebug(KHE_CONSEC_SOLVER cs, int verbosity,
  int indent, FILE *fp);
}
produces the usual debug print of @C { cs } onto @C { fp } with the
given verbosity and indent.  When @C { verbosity >= 2 }, this prints all
results for all resources, using format @C { history|min-max }.  For
efficiency, these are calculated all at once by @C { KheConsecSolverMake }.
@End @Section

@Section
    @Title { Tighten to partition }
    @Tag { resource_structural.partition }
@Begin
@LP
Suppose we are dealing with teachers, and that they have partitions
(Section {@NumberOf resource_types}) which are their faculties
(English, Mathematics, Science, and so on).  Some partitions may
be heavily loaded (that is, required to supply teachers for tasks
whose total workload approaches the total available workload of
their resources) while others are lightly loaded.
@PP
Some tasks may be taught by teachers from more than one partition.
These @I { multi-partition tasks } should be assigned to teachers from
lightly loaded partitions, and so should not overlap in time with other
tasks from these partitions.  @I { Tighten to partition } tightens the
domain of each multi-partition task in a given tasking to one partition,
returning @C { true } if it changes anything:
@ID {0.95 1.0} @Scale @C {
bool KheTaskingTightenToPartition(KHE_TASKING tasking,
  KHE_TASK_BOUND_GROUP tbg, KHE_OPTIONS options);
}
The choice of partition is explained below.  All changes are additions
of task bounds to tasks, and if @C { tbg } is non-@C { NULL }, all
these task bounds are also added to @C { tbg }.
@PP
It is best to call @C { KheTaskingTightenToPartition } after
preassigned meets are assigned, but before general time
assignment.  The tightened domains encourage time assignment to
avoid the undesirable overlaps.  After time assignment, the
changes should be removed, since otherwise they constrain
resource assignment unnecessarily.  This is what the task bound
group is for:
@ID @C {
tighten_tbg = KheTaskBoundGroupMake(soln);
for( i = 0;  i < KheSolnTaskingCount(soln);  i++ )
  KheTaskingTightenToPartition(KheSolnTasking(soln, i),
    tighten_tbg, options);
... assign times ...
KheTaskBoundGroupDelete(tighten_tbg);
}
The rest of this section explains how @C { KheTaskingTightenToPartition }
works in detail.
@PP
@C { KheTaskingTightenToPartition } does nothing when the tasking has
no resource type, or @C { KheResourceTypeDemandIsAllPreassigned }
(Section {@NumberOf resource_types}) says that the resource type's
tasks are all preassigned, or the resource type has no partitions,
or its number of partitions is less than four or more than one-third
of its number of resources.  No good can be done in these cases.
@PP
Tasks whose domains lie entirely within one partition are not touched.
The remaining multi-partition tasks are sorted by decreasing combined
weight then duration, except that tasks with a @I { dominant partition }
come first.  A task with an assigned resource has a dominant partition,
namely the partition that its assigned resource lies in.  An unassigned
task has a dominant partition when at least three-quarters of the
resources of its domain come from that partition.
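@PP
The dominant partition test for an unassigned task can be sketched as
an integer computation, avoiding floating point.  This is an
illustration only; @C { counts } is a hypothetical tally of the task's
domain by partition:

```c
#include <assert.h>

/* counts[p] is the number of resources of the task's domain lying in
   partition p; a hypothetical tally, illustration only. */
static int DominantPartition(const int *counts, int parts) {
  int p, total = 0, best = 0;
  for( p = 0;  p < parts;  p++ ) {
    total += counts[p];
    if( counts[p] > counts[best] ) best = p;
  }
  /* Dominant when at least three-quarters of the domain lies in one
     partition; -1 means there is no dominant partition. */
  return total > 0 && 4 * counts[best] >= 3 * total ? best : -1;
}
```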
@PP
For each task in turn, an attempt is made to tighten its domain so
that it is a subset of one partition.  If the task has a dominant
partition, only that partition is tried.  Otherwise, the partitions
that the task's domain intersects with are tried one by one, stopping
at the first success, after sorting them by decreasing average
available workload (defined next).
@PP
Define the @I { workload supply } of a partition to be the sum, over
the resources @M { r } of the partition, of the number of times in
the cycle minus the number of workload demand monitors for @M { r }
in the matching.  Define the @I { workload demand } of a partition
to be the sum, over all tasks @M { t } whose domain is a subset of
the partition, of the workload of @M { t }.  Then the
@I { average available workload } of a partition is its workload
supply minus its workload demand, divided by its number of resources.
Evidently, if this is large, the partition is lightly loaded.
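@PP
As an illustration (not KHE source), the average available workload of
a partition might be computed like this, using a hypothetical summary
struct:

```c
#include <assert.h>

/* Hypothetical summary of one partition; illustration only. */
typedef struct {
  int resource_count;           /* number of resources in the partition */
  int cycle_time_count;         /* number of times in the cycle */
  int total_demand_monitors;    /* workload demand monitors, all resources */
  double workload_demand;       /* workload of tasks with domain inside */
} PART;

/* Workload supply is the sum over resources of (cycle times minus demand
   monitors), i.e. resource_count * cycle_time_count - total_demand_monitors;
   the average available workload is (supply - demand) / resource count. */
static double AvgAvailableWorkload(const PART *p) {
  double supply = (double)p->resource_count * p->cycle_time_count
    - p->total_demand_monitors;
  return (supply - p->workload_demand) / p->resource_count;
}
```

Each successful tightening adds the task's workload to its partition's
demand, lowering this average for subsequent choices.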
@PP
Each successful tightening increases the workload demand of its
partition.  This ensures that equally lightly loaded partitions
share multi-partition tasks equally.
@PP
In a task with an assigned resource, the dominant partition is the
only one compatible with the assignment.  In a task without an
assigned resource, preference is given to a dominant partition, if
there is one, for the following reason.  Schools often have a few
@I { generalist teachers } who are capable of teaching junior
subjects from several faculties.  These teachers are useful for
fixing occasional problems, smoothing out workload imbalances,
and so on.  But the workload that they can give to faculties other
than their own is limited and should not be relied on.  For
example, suppose there are five Science teachers plus one
generalist teacher who can teach junior Science.  That should
not be taken by time assignment as a licence to routinely schedule
six Science meets simultaneously.  Domain tightening to a dominant
partition avoids this trap.
@PP
Tightening by partition works best when the @C { rs_invariant }
option of @C { options } is @C { true }.  For example, in a case like
Sport where there are many simultaneous multi-partition tasks, it
will then not tighten more of them to a lightly loaded partition
than there are teachers in that partition.  Assigning preassigned
meets beforehand improves the effectiveness of this check.
@End @Section

#@Section
#    @Title { Grouping by resource constraints (old) }
#    @Tag { resource_structural.constraints.old }
#@Begin
#@BeginSubSections

#@SubSection
#    @Title { Time-based grouping (old) }
#    @Tag { resource_structural.constraints.timebased.old }
#@Begin
#@LP
#Resource constraints are concerned with whether resources are busy
#at particular times, not with which tasks they are busy with.  So
#grouping by resource constraints cannot discover sets of groupable
#tasks directly; rather, it discovers sets of groupable times.
## :  it establishes
## propositions of the form `for each resource @M { r }, if @M { r }
## is busy at time @M { t sub 1 }, then @M { r } must also be busy
## at times @M { t sub 2 ,..., t sub k }'.
#@PP
#Accordingly, our first problem is @I { time-based grouping }:  given
#two sets of times, @M { T sub 0 } and @M { X sub 0 }, find a set of
#tasks which together cover all of the times of @M { T sub 0 } and
#none of the times of @M { X sub 0 }, and group them.  (The need for
#@M { X sub 0 } will become clear in later sections.)  @M { T sub 0 }
#and @M { X sub 0 } must be disjoint:  if they shared a time, no set
#of tasks could satisfy the conditions.  The chosen tasks are free
#to r un at times outside @M { T sub 0 } and @M { X sub 0 }, or not.
#But because they will be assigned the same resource, no two of them
#may r un at the same time, or at two times from the same time group
#of the common frame, and @M { T sub 0 } may not contain two times
#from the same time group of the common frame.  Also, two tasks
#initially assigned different resources may not be grouped together.
#@PP
#Some tasks could be grouped already or have duration greater than
#1, so it can be non-trivial to find suitable tasks.  The algorithm
#is heuristic and simply stops early if it cannot find them.  It is
#important for the integrity of grouping by resource constraints that
#it should handle existing groups reasonably, if only because later
#stages may encounter groups made by earlier stages.
#@PP
#If, for each time @M { t } in @M { T sub 0 }, all root tasks that
#cover @M { t } are equivalent, then it is strictly correct to choose
#any one of them for grouping.  Time-based grouping proceeds even
#when the tasks are not all equivalent, so it might sometimes choose
#inappropriate tasks to group.  However, if grouping were limited
#to times when all the tasks were equivalent, there would be very
#little of it.  The algorithm does try to ensure that the tasks in
#each group have similar domains, to maximize the number of resources
#that may be assigned to them after grouping.  It also accepts an
#optional resource group parameter @M { g }; if present, the leader
#tasks of all groups must have domain @M { g }.
#@PP
#The function for time-based grouping does not make just one group;
#rather, it takes an integer parameter @M { m }, and tries to make
#up to @M { m } groups with the requested properties.  If some groups
#already have these properties when the function is called, they
#do not count towards @M { m }.  As explained below, the function
#is only interested in root task sets that cover some of the times
#of @M { T sub 0 } but not all of them, ensuring that existing
#groups with the requested properties are ignored.
## , as though
## the function had started by making them.  So the first step is to
## search the list of root task sets for any one time of @M { T sub 0 }
## for root task sets whose tasks already have the properties, and reduce
## @M { m } by the number of tasks that those root task sets contain.
#@PP
#The function chooses root task sets sequentially, then makes each
#group by taking one task from each of the chosen root task sets.
#There is a set @M { T } of times that the root task sets not chosen
#yet must cover.  Initially, @M { T } is set to @M { T sub 0 }.
#Only root task sets that cover at least one time of @M { T } are
#chosen.  As each is chosen, the times from @M { T } covered by
#it are removed from @M { T }.  The function terminates when
#@M { T } is empty, so all the times of @M { T sub 0 } must be
#covered by the chosen root task sets.
#@PP
#There is also a set @M { X } of times that the root task sets not chosen
#yet must not cover.  Initially, @M { X } is set to @M { X sub 0 }
#plus, for each time @M { t } in @M { T sub 0 }, all the times in
#the time group from the common frame containing @M { t } other
#than @M { t } itself.  (It would be a mistake to choose a root
#task set that covered such a time, since then when @M { t } came
#to be covered later, that would be two times from the same time
#group of the common frame.)  As each root task set is chosen, for
#each time @M { t } that it covers, all the times in the time group
#from the common frame containing @M { t } are added to @M { X }.
#Only root task sets that do not cover any of the times of @M { X }
#are chosen, guaranteeing not only that none of the times of
#@M { X sub 0 } will be covered, but also that no two root task
#sets cover the same time, or cover two times from the same time group
#of the common frame, fulfilling all requirements concerning time.
#@PP
#The first root task set chosen is the @I leader root task set, the
#one that the leader tasks of the groups will be taken from.  From all
#root task sets not marked as unsuitable for providing leader tasks,
#and which cover at least one of the times of @M { T } but not all
#of them, and none of the times of @M { X }, choose a root task set
#which is smallest in the priority ordering defined earlier.  If there
#is a @M { g } parameter, the chosen root task set must also have
#domain @M { g }.  The leader root task set is found by searching
#the lists of root task sets for the times of @M { T }.  Delete from
#@M { T } the times that this root task set covers, and add to @M { X }
#the times of the common frame time groups holding these times.
#@PP
#The other root task sets, the @I follower root task sets, are
#chosen as follows.  For any time @M { t } of @M { T }, from all
#root task sets on @M { t }'s list which do not cover any of the
#times of @M { X } and are assignable to the tasks of the leader
#root task set, choose one which is smallest in priority order.
#Delete from @M { T } the times that this root task set covers,
#and add to @M { X } the times of the common frame time groups
#holding these times.  Repeat until @M { T } is empty.
#@PP
#Make one group by taking one task from the leader root task set
#and assigning one task from each of the other root task sets to
#it.  Delete all these tasks from their root task sets, then
#re-insert the leader task; it will go into a different root
#task set because of all its new followers.  Repeat this while
#at least one task remains in each of the chosen root task sets
#and @M { m } is not reached.  After that, if more groups are
#needed, try again to find a leader root task set, and so on.
#@PP
#If the algorithm is unable to choose a suitable leader root task
#set, it stops early, having made fewer than the requested number
#of groups.  If it is unable to choose a suitable follower root
#task set, it marks the current leader root task set as not suitable
#for providing leader tasks, and tries again to find a leader root
#task set.  These marks are cleared when the function returns.
#@PP
#When some root task sets are initially assigned, the algorithm is
#somewhat different.  These tasks already get priority when searching
#for leader root task sets, because they come first in the priority
#ordering.  But in addition, when an initially assigned root task set
#is chosen as leader, the search for follower root task sets makes two
#passes over the uncovered times @M { T }.  On the first pass, only
#root task sets initially assigned the same resource as the leader
#task set are considered.  On the second pass, only root task sets
#with no initial assignment are considered.
#@PP
#Suppose that two tasks are initially assigned the same resource, but
#that their domains have a non-empty symmetric difference, so that
#neither task can be assigned to the other.  This is a problem for
#the algorithm as described.  There are several possible solutions.
#The data structure could be changed to allow groups to exist without
#requiring assignment to a leader task; but how such groups would
#persist over the long term is not clear.  The domains could be changed;
#but that might be awkward to undo later.  Probably the best solution
#would be to introduce a new task to lead both tasks, since that would
#naturally be undone when the grouping is removed later; but the author
#prefers to avoid introducing tasks which are not derived from meets.
#@PP
#Our algorithm's solution is as follows.  As already presented,
#it only allows a root task set with an initial assignment to be
#chosen when it is either being chosen as a leader, or as a
#follower whose leader has the same initial assignment.  Beyond
#this, if a follower with an initial assignment which is the same
#as the leader's initial assignment fails to be chosen for any
#reason, the leader is marked as unsuitable for providing leader
#tasks, just as though no follower at all could be found (even if
#there are other suitable followers).  Altogether this avoids the
#main danger, which is the creation of two overlapping groups with
#the same initial assignment.
## @PP
## Several groups may be wanted for the same @M { T } and @M { X }.
## If so, after making the first, the root task sets which supplied its
## tasks are re-used (until one becomes empty) to supply the tasks for
## subsequent groups.  The function for time-based grouping is passed
## the number of groups to make, and it keeps making groups until that
## number is reached or it cannot make any more.
## @PP
## Often one wants to repeat grouping for a particular set of times
## several times, or until no further grouping is possible.  Clearly
## the setup cost can be shared in such cases, so the algorithm
## accepts an instruction saying how many groups to make from the
## same set of times, and reports back the number it actually made.
## @PP
## In practice, what is actually demanded of time-based grouping can
## be more nuanced than to make one group covering a given set of
## times @M { T }.  For example, what if such a group already exists?
## Or is a superset of @M { T } acceptable?  We will discuss these fine
## points as they arise.
## @PP
## Often one wants to repeat grouping for a particular set of times
## several times, or until no further grouping is possible.  In
## that case, if the sets of equivalent tasks that contributed
## the tasks for one group are not exhausted, they are re-used
## to contribute the tasks for the next.
#@End @SubSection

#@SubSection
#    @Title { Combinatorial grouping (old) }
#    @Tag { resource_structural.constraints.combinatorial.old }
#@Begin
#@LP
#For each resource type, after setting up the data structure described
#earlier, the next step in grouping by resource constraints is
#@I { combinatorial grouping }, to be described now.
#@PP
#Let @M { m } be the value of the @F rs_group_by_rc_max_days option.
#Iterate over all pairs @M { (f, t) }, where @M { f } is a subset of
#the common frame containing @M { k } adjacent time groups, for all
#@M { k } such that @M { 2 <= k <= m }, and @M { t } is a time from
#@M { f }'s first time group.  Handle each pair separately, as follows.
#@PP
#Build all sets of times @M { T sub 0 } that include @M { t } from @M { f }'s
#first time group, and any one time, or none, from each of @M { f }'s
#other time groups.  If @M { f } has @M { k } time groups, each with
#@M { n } times (for example), there are @M { (n + 1) sup {k - 1} }
#combinations, so @C { rs_group_by_rc_max_days } must be small.  Let
#@M { X sub 0 } be the set of all times in the time groups of @M { f }
#which are not in @M { T sub 0 }.
#@PP
#For each @M { T sub 0 }, find a set of tasks of the given resource
#type and a resource @M { r }, such that the tasks can be grouped,
#the group can be assigned @M { r }, no two tasks cover two times
#during the same time group of the common frame, and the tasks, taken
#together, cover all of the times of @M { T sub 0 } and none of the
#times of @M { X sub 0 }.  They may cover times outside @M { f }.
#This is done by time-based grouping, although the group is only
#temporary here.  If such tasks cannot be found, ignore @M { T sub 0 }.
#@PP
#Temporarily assign @M { r } to the tasks.  Observe the cost of each
#cluster busy times and limit busy times monitor of @M { r } whose
#constraint applies to all resources of the given resource type, and
#which monitors times within @M { f } only.  (This is done by tracing
#the assignment, and for each monitor whose cost changed, checking
#whether it satisfies these conditions.)  If any of these monitors
#has non-zero cost, then assigning tasks running at @M { T sub 0 } and not
#@M { X sub 0 } has non-zero cost for all resources, so is a bad
#idea.  So again ignore @M { T sub 0 }.
#@PP
#If there is exactly one non-ignored @M { T sub 0 }, then any assignment
#of any resource @M { r } to any task that covers time @M { t } will
#incur a cost unless @M { r } is assigned tasks that cover all of the
#times of @M { T sub 0 } and none of the times of @M { X sub 0 }.  This
#justifies requiring every task of the given resource type that covers
#@M { t } to lie in a group that covers all of the times of @M { T sub 0 }
#and none of the times of @M { X sub 0 }.  So call time-based grouping,
#passing it the one non-ignored @M { T sub 0 } and its @M { X sub 0 },
#with @M { m } set to infinity and no @M { g }.
#@PP
#Sadly, @M { X sub 0 } is not reflected in the groups.  For example,
#if the successful combination ends with a free day, that is not
#recorded.  And if @M { T sub 0 } contains only one time, no groups are made.
#@PP
#If processing one @M { (f, t) } pair leads to some grouping, then
#the function starts again from the first pair containing @M { f }.
#It may find groups that were not available before.  Consider an
#instance with one constraint specifying that each weekend must be
#either free on both days or busy on both, and another specifying
#that a day shift must not follow a night shift.  First, the Saturday
#and Sunday night tasks will be grouped; then, the Saturday and
#Sunday day tasks will be grouped, because the Saturday day tasks
#will not group with any Sunday night tasks, since the Sunday night
#tasks are all already grouped with Saturday night tasks.
#@PP
#The groups made in this way can be a big help to solvers.  In
#instance @C { COI-GPost.xml }, for example, each Friday night task
#is grouped with tasks for the next two nights.  Good solutions
#always assign these three tasks to the same resource, owing to
#constraints specifying that the weekend following a Friday night
#shift must be busy, that each weekend must be either free on both
#days or busy on both, and that a night shift must not be followed
#by a day shift.  A time sweep task assignment algorithm cannot by
#itself look ahead and see such cases coming, but combinatorial
#grouping allows it to do so.
#@End @SubSection
#
#@SubSection
#    @Title { Combination elimination (old) }
#    @Tag { resource_structural.constraints.elimination.old }
#@Begin
#@LP
#Some combinations examined by combinatorial grouping may have zero
#cost as far as the monitors used to evaluate it are concerned, but
#have non-zero cost when evaluated in a different way, involving the
#overall supply of and demand for resources.  Such combinations can
#be ruled out, leaving fewer zero-cost combinations, and potentially
#more task grouping.
#@PP
#For example, suppose there is a maximum limit on the number of
#weekends each resource can work.  If this limit is tight
#enough, it will force every resource to work complete weekends,
#even without an explicit constraint, if that is the only way
#that the available supply of resources can cover the demand
#for weekend shifts.  This example fits the pattern to be given
#now, setting @M { C } to the constraint that limits the number
#of busy weekends, @M { T } to the times of all weekends,
#@M { T sub i } to the times of the @M { i }th weekend, and
#@M { f tsub i } to the number of days in the @M { i }th weekend.
#@PP
#Take any set of times @M { T }.  Let @M { S(T) }, the
#@I { supply during @M { T } }, be the sum over all resources
#@M { r } of the maximum number of times that @M { r } can be busy
#during @M { T } without incurring a cost.  Let @M { D(T) }, the
#@I { demand during @M { T } }, be the sum over all tasks @M { x }
#for which non-assignment would incur a cost, of the number of times
#@M { x } is running during @M { T }.  Then @M { S(T) >= D(T) }
#or else a cost is unavoidable.
#@PP
#In particular, take any cluster busy times constraint @M { C } which
#applies to all resources, has time groups which are all positive, and
#has a non-trivial maximum limit @M { M }.  (The analysis also applies
#when the time groups are all negative and there is a non-trivial
#minimum limit, setting @M { M } to the number of time groups minus
#the minimum limit.)  Suppose there are @M { n } time groups
#@M { T sub i }, for @M { 1 <= i <= n }, and let their union be @M { T }.
#@PP
#Let @M { f tsub i } be the number of time groups from the common
#frame with a non-empty intersection with @M { T sub i }.  This is
#the maximum number of times from @M { T sub i } during which any one
#resource can be busy without incurring a cost, since a resource can
#be busy for at most one time in each time group of the common frame.
#@PP
#Let @M { F } be the sum of the largest @M { M } @M { f tsub i }
#values.  This is the maximum number of times from @M { T } that
#any one resource can be busy without incurring a cost:  if it is
#busy for more times than this, it must either be busy for more
#than @M { f tsub i } times in some @M { T sub i }, or else it
#must be busy for more than @M { M } time groups, violating the
#constraint's maximum limit.
#@PP
#If there are @M { R } resources altogether, then the supply during
#@M { T } is bounded by
#@ID @Math { S(T) <= RF }
#since @M { C } is assumed to apply to every resource.
#@PP
#As explained above, to avoid cost the demand must not exceed the
#supply, so
#@ID @M { D(T) <= S(T) <= RF }
#Furthermore, if @M { D(T) >= RF }, then any failure to maximize
#the use of workload will incur a cost.  That is, every resource
#which is busy during @M { T sub i } must be busy for the full
#@M { f tsub i } times in @M { T sub i }.
#@PP
#So the consequence for grouping is this:  if @M { D(T) >= RF },
#we may assume that if a resource is busy in one time group of
#the common frame that overlaps @M { T sub i }, then it is busy
#in every time group of the common frame that overlaps @M { T sub i }.
#When searching for groups, the option of being assigned in some of
#these time groups but not others is removed.  With fewer options,
#there is more chance that some combination might be the only one
#with zero cost, allowing more task grouping.
## Next, ignoring tasks which do not incur a cost if they are left
## unassigned, suppose there are @M { m } tasks, and that the total
## number of times when task @M { j } is running during @M { T } is
## @M { g sub j }.  Then the total demand for workload during @M { T }
## is @ID @Math { D(T) = sum from { j = 1 } to { m } g sub j }
#@End @SubSection
#
#@EndSubSections
#@End @Section

@Section
    @Title { Grouping by resource constraints }
    @Tag { resource_structural.constraints }
@Begin
@LP
@I { Grouping by resource constraints } is KHE's term for a method
of grouping tasks together, forcing the tasks in each group to
be assigned the same resource, when all other ways of assigning
resources to those tasks can be shown to have non-zero cost.
@C { KheTaskTreeMake } also does this, but its groups are based
on avoid split assignments constraints, whereas grouping by
resource constraints makes groups based on resource constraints.
The function is
@ID @C {
bool KheGroupByResourceConstraints(KHE_SOLN soln,
  KHE_RESOURCE_TYPE rt, KHE_OPTIONS options, KHE_TASK_SET ts);
}
There is no @C { tasking } parameter because this kind of grouping
cannot be applied to an arbitrary set of tasks, as it turns out.
Instead, it applies to all tasks of @C { soln } whose resource
type is @C { rt }, which lie in a meet which is assigned a time,
and for which non-assignment may have a cost (discussed later).
If @C { rt } is @C { NULL }, it applies itself to each of the
resource types of @C { soln }'s instance in turn.  It tries to
group these tasks, returning @C { true } if it groups any.
# @PP
# Only tasks derived from event resources for which
# @C { KheEventResourceNeedsAssignment }
# (Section {@NumberOf event_resources}) returns @C { KHE_YES } are
# considered for grouping.  It would not be good to group a task for which
# non-assignment has a cost with a task for which non-assignment has no cost.
@PP
For each resource type, @C { KheGroupByResourceConstraints } finds
whatever groups it can.  It makes each such @I { task group } by
choosing one of its tasks as the @I { leader task } and assigning the
others to it.  It makes assignments only to non-cycle tasks that are
not already assigned to other non-cycle tasks, so it does not disturb
existing groups.  However, it does take existing groups into account, and
it will use tasks to which other tasks are assigned in its own groups.
@PP
Tasks which are initially assigned a resource (that is, assigned to a
cycle task) participate in grouping.  Such a task may have its
assignment changed to some
other task, but in that case the other task will be assigned the
resource.  In other words, if one task is assigned a resource
initially, and it gets grouped, then its whole group will be
assigned that resource afterwards.  Two tasks initially assigned
different resources will never be grouped together.
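@PP
These rules can be pictured with a minimal self-contained sketch.  The
@C { TASK } struct, the @C { group } function, and the use of @C { -1 }
for `no resource' below are invented for illustration; they are not the
KHE API.  A follower is assigned to a leader only when both are
unassigned roots and their initial resources do not conflict, and an
initial resource held by any member ends up held by the leader, so
the whole group is assigned it:

```c
#include <stdbool.h>

/* Illustrative model of a task in a group tree (not the KHE type):
   assigned_to is the index of the task it is assigned to (-1 for a
   proper root), and resource is the initially assigned resource
   (-1 for none). */
typedef struct {
    int assigned_to;
    int resource;
} TASK;

/* Try to group follower f under leader l.  Both must be proper roots
   (so existing groups are not disturbed), and they must not be
   initially assigned different resources.  If exactly one has a
   resource, the leader keeps it, so the whole group is assigned it. */
static bool group(TASK tasks[], int l, int f)
{
    if (tasks[l].assigned_to != -1 || tasks[f].assigned_to != -1)
        return false;   /* would disturb an existing assignment */
    if (tasks[l].resource != -1 && tasks[f].resource != -1 &&
        tasks[l].resource != tasks[f].resource)
        return false;   /* different initial assignments */
    tasks[f].assigned_to = l;
    if (tasks[l].resource == -1)
        tasks[l].resource = tasks[f].resource;
    return true;
}

/* The resource a task ends up with: follow assignments to the root. */
static int effective_resource(const TASK tasks[], int i)
{
    while (tasks[i].assigned_to != -1)
        i = tasks[i].assigned_to;
    return tasks[i].resource;
}
```

In this model, grouping two tasks initially assigned different
resources fails, exactly as the text requires.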
@PP
If @C { ts } is non-@C { NULL }, every task that
@C { KheGroupByResourceConstraints } assigns is added to
@C { ts }.  This makes it easy to remove the groups when
they are no longer wanted, by running through @C { ts } and
unassigning each of its tasks.  @C { KheTaskSetUnGroup }
(Section {@NumberOf extras.task_sets}) does this.
@PP
@C { KheGroupByResourceConstraints } consults option
@C { rs_invariant }, and also
@TaggedList

@DTI { @F rs_group_by_rc_off } {
A Boolean option which, when @C { true }, turns grouping by
resource constraints off.
}

@DTI { @F rs_group_by_rc_max_days } {
An integer option which determines the maximum number of consecutive days
(in fact, time groups of the common frame) examined by combinatorial grouping
(Section {@NumberOf resource_structural.constraints.combinatorial}).
Values 0 or 1 turn combinatorial grouping off.  The default value is 3.
}

@DTI { @F rs_group_by_rc_combinatorial_off } {
A Boolean option which, when @C { true }, turns combinatorial grouping off.
}

@DTI { @F rs_group_by_rc_profile_off } {
A Boolean option which, when @C { true }, turns profile grouping off.
}

# @DTI { @F rs_group_by_rc_nocost_off } {
# Grouping by resource constraints treats tasks for which
# @C { KheTaskNonAssignmentHasCost } returns @C { false } the same as
# free time, except that such tasks are unassigned when necessary to
# avoid clashes.  This Boolean option, when @C { true }, causes all
# tasks to be treated the same.
# }

@EndList
It also calls @C { KheFrameOption } (Section {@NumberOf extras.frames})
to obtain the common frame, and retrieves the event timetable monitor
from option @C { gs_event_timetable_monitor }
(Section {@NumberOf general_solvers.general}).
@PP
@C { KheGroupByResourceConstraints } groups tasks whenever it can
show that not assigning the same resource to all of them must incur a
cost.  That does not mean that they will always be assigned the same
resource in good solutions, any more than, say, a constraint requiring
nurses to work complete weekends is always satisfied in good solutions.
However, in practice they usually are, so it makes sense to require
them to be, and decide later whether to break up a few groups.
@PP
The following subsections describe how @C { KheGroupByResourceConstraints }
works in detail.  It has several parts, which are available separately,
as we will see.  For each resource type, it starts by building a tasker
and adding the time groups of the common frame to it as overlap time
groups (Section {@NumberOf resource_structural.constraints.taskers}).
Then, using this tasker, it performs combinatorial grouping by calling
@C { KheCombGrouping }
(Section {@NumberOf resource_structural.constraints.applying}), and
profile grouping by calling @C { KheProfileGrouping }
(Section {@NumberOf resource_structural.constraints.profile}),
first with @C { non_strict } set to @C { false }, then again with
@C { non_strict } set to @C { true }.
@BeginSubSections

@SubSection
  @Title { Taskers }
  @Tag { resource_structural.constraints.taskers }
@Begin
@LP
A @I { tasker } is an object of type @C { KHE_TASKER } that
facilitates grouping by resource constraints.  We'll see how to
create one shortly; but first, we introduce two other types that
taskers use.
@PP
Taskers deal directly only with proper root tasks (tasks which are
either unassigned, or assigned directly to a cycle task, that is,
to a resource).  Taskers consider two proper root tasks to be
equivalent when they have equal domains and assigned resources
(possibly @C { NULL }), and they cover the same set of times.
(A task @I covers a time when it, or some task assigned directly
or indirectly to it, is running at that time.)  Equivalent tasks
are interchangeable with respect to resource assignment:  they
may be assigned the same resources, and their effect on resource
constraints is the same.  Identifying equivalent tasks is vital
in grouping; without it, virtually no group could be shown to
be the only zero-cost option.
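@PP
The equivalence test amounts to comparing three attributes.  The
following sketch is a self-contained illustration only; the struct and
its fields stand in for the KHE task and are not the real types:

```c
#include <stdbool.h>

/* Illustrative stand-in for a proper root task (not the KHE type):
   a domain identifier, an assigned resource (-1 representing NULL),
   and a bitmask of the times the task covers. */
typedef struct {
    int domain_id;              /* identifies the resource domain */
    int asst_resource;          /* assigned resource, or -1 for none */
    unsigned int covered_times; /* bit i is set when time i is covered */
} TASK;

/* Two proper root tasks are equivalent when their domains, assigned
   resources, and covered time sets are all equal. */
static bool task_equivalent(const TASK *a, const TASK *b)
{
    return a->domain_id == b->domain_id
        && a->asst_resource == b->asst_resource
        && a->covered_times == b->covered_times;
}
```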
# @PP
# Taskers consider two tasks to be equivalent when @C { KheTaskEquivalent }
# (Section {@NumberOf solutions.tasks}) says that they are equivalent,
# and their assigned resources are equal (possibly @C { NULL }).  Two
# equivalent tasks are interchangeable with respect to resource
# assignment:  they may be assigned the same resources, and their
# effect on resource constraints is the same.  Identifying equivalent
# tasks is vital in grouping; without it, virtually no group could be
# shown to be the only zero-cost option.
@PP
A @I class is an object of type @C { KHE_TASKER_CLASS }, representing
an equivalence class of tasks (a set of equivalent tasks).  Each task
known to a tasker lies in exactly one class.  The user cannot create
these classes; they are created and kept up to date by the tasker.
@PP
The tasks of an equivalence class may be visited by
@ID @C {
int KheTaskerClassTaskCount(KHE_TASKER_CLASS c);
KHE_TASK KheTaskerClassTask(KHE_TASKER_CLASS c, int i);
}
There must be at least one task, because if a class becomes empty,
it is deleted by the tasker.
@PP
The three attributes that equivalent tasks share may be retrieved by
@ID @C {
KHE_RESOURCE_GROUP KheTaskerClassDomain(KHE_TASKER_CLASS c);
KHE_RESOURCE KheTaskerClassAsstResource(KHE_TASKER_CLASS c);
KHE_TIME_SET KheTaskerClassTimeSet(KHE_TASKER_CLASS c);
}
These return the domain (from @C { KheTaskDomain }) that the tasks of
@C { c } share, their assigned resource (from @C { KheTaskAsstResource }),
and the set of times they each cover.  The user must not modify the
value returned by @C { KheTaskerClassTimeSet }.  Function
@ID @C {
void KheTaskerClassDebug(KHE_TASKER_CLASS c, int verbosity,
  int indent, FILE *fp);
}
produces a debug print of @C { c } onto @C { fp } with the given
verbosity and indent.
@PP
The other type that taskers use represents one time.  The type is
@C { KHE_TASKER_TIME }.  Again, the tasker creates objects of these
types, and keeps them up to date.  Function
@ID @C {
KHE_TIME KheTaskerTimeTime(KHE_TASKER_TIME t);
}
returns the time that @C { t } represents.
@PP
The tasks of an equivalence class all run at the same times, and so
for each time, either every task of an equivalence class is running
at that time, or none of them are.  Accordingly, to visit the tasks
running at a particular time, we actually visit classes:
@ID @C {
int KheTaskerTimeClassCount(KHE_TASKER_TIME t);
KHE_TASKER_CLASS KheTaskerTimeClass(KHE_TASKER_TIME t, int i);
}
Each equivalence class appears in one time object for each time
that its tasks are running, giving a many-to-many relationship
between time objects and class objects.  Function
@ID @C {
void KheTaskerTimeDebug(KHE_TASKER_TIME t, int verbosity,
  int indent, FILE *fp);
}
produces a debug print of @C { t } onto @C { fp } with the given
verbosity and indent.
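@PP
The many-to-many relationship between time objects and class objects
can be sketched as an index built from each class's time set.  The
array layout and names below are invented for illustration and are not
the tasker's actual implementation:

```c
#define MAX_TIMES   8
#define MAX_CLASSES 4

/* Illustrative index: classes[c] is a bitmask of the times covered by
   class c.  After the call, by_time[t] lists the classes running at
   time t, and count[t] is how many there are, giving the many-to-many
   relationship between times and classes described in the text. */
static void build_time_index(const unsigned int classes[], int num_classes,
                             int by_time[][MAX_CLASSES], int count[])
{
    for (int t = 0; t < MAX_TIMES; t++) {
        count[t] = 0;
        for (int c = 0; c < num_classes; c++)
            if (classes[c] & (1u << t))
                by_time[t][count[t]++] = c;
    }
}
```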
@PP
We turn now to taskers themselves.  To create a tasker, call
@ID {0.98 1.0} @Scale @C {
KHE_TASKER KheTaskerMake(KHE_SOLN soln, KHE_RESOURCE_TYPE rt,
  KHE_TASK_SET task_set, HA_ARENA a);
}
The tasker's attributes may be accessed by
@ID @C {
KHE_SOLN KheTaskerSoln(KHE_TASKER tr);
KHE_RESOURCE_TYPE KheTaskerResourceType(KHE_TASKER tr);
KHE_TASK_SET KheTaskerTaskSet(KHE_TASKER tr);
HA_ARENA KheTaskerArena(KHE_TASKER tr);
}
A tasker object remains in existence until its arena, @C { a },
is deleted or recycled.
@PP
@C { KheTaskerMake } gathers into the tasker object all proper root
tasks (tasks which are either unassigned, or assigned directly to a
cycle task representing a resource) of @C { soln } whose resource
type is @C { rt }, for which non-assignment may have a cost (see below),
and which lie in meets that have an assigned time.  The meets' time
assignments are assumed to be fixed for the lifetime of the tasker; if
they change, errors will occur.  From here on, `task' means one of these
tasks, unless stated otherwise.
# event resources for which @C { KheEventResourceNeedsAssignment }
# (Section {@NumberOf event_resources}) returns @C { KHE_YES } are
@PP
It seems wrong to group a task for which non-assignment has a cost
with a task for which non-assignment has no cost.  But what to do
about this issue is a puzzle.  Simply refusing to group such tasks
would not address all the relevant issues, e.g. whether to include
both types in profiles.  At present, if the instance contains at
least one assign resource constraint, then only tasks derived from
event resources for which @C { KheEventResourceNeedsAssignment }
(Section {@NumberOf event_resources}) returns @C { KHE_YES } are
considered for grouping.  If the instance contains no assign resource
constraints, then only tasks derived from event resources for which
@C { KheEventResourceNeedsAssignment } returns @C { KHE_MAYBE }
are considered for grouping.  This is basically a stopgap.
@PP
Tasks are grouped by calls to @C { KheTaskMove }, each of which
assigns one follower task to a leader task.  This removes the
follower task from the set of tasks of interest to the tasker,
and it usually enlarges the set of times covered by the leader task,
placing it into a different equivalence class.  The main purpose
of the tasker object is to keep track of these changes.
@PP
If @C { task_set } is non-@C { NULL }, each follower task assigned
during grouping is added to it.  This makes it easy to remove the
groups later, when they are no longer wanted, by running through
@C { task_set } and unassigning each of its tasks.  @C { KheTaskSetUnGroup }
(Section {@NumberOf extras.task_sets}) does this.
@PP
@C { KheTaskerMake } places its tasks into classes indexed by time.
To visit each time, call
@ID @C {
int KheTaskerTimeCount(KHE_TASKER tr);
KHE_TASKER_TIME KheTaskerTime(KHE_TASKER tr, int i);
}
Here @C { KheTaskerTimeTime(KheTaskerTime(tr, KheTimeIndex(t))) == t }
for all times @C { t }.  @C { KheTaskerTimeCount(tr) } returns the same
value as @C { KheInstanceTimeCount(ins) }, where @C { ins } is
@C { tr }'s solution's instance.  From each @C { KHE_TASKER_TIME }
object one can access the classes running at that time, and
the tasks of those classes, using functions introduced above.
@PP
Finally,
@ID @C {
void KheTaskerDebug(KHE_TASKER tr, int verbosity, int indent, FILE *fp);
}
produces a debug print of @C { tr } onto @C { fp } with the given
verbosity and indent.
@End @SubSection

@SubSection
  @Title { Tasker support for grouping }
  @Tag { resource_structural.constraints.groupings }
@Begin
@LP
Taskers keep their classes up to date as tasks are grouped.  However,
they can't know by magic that tasks are being grouped.  So it's wrong to
call platform operations like @C { KheTaskAssign } and @C { KheTaskMove }
directly while using a tasker.  @C { KheTaskAddTaskBound } is also out
of bounds.  Instead, proceed as follows.
@PP
A @I grouping is a set of classes used for grouping tasks.  A group is
made by taking any one task out of each class in the grouping, choosing
one to be the leader task, assigning the others (called the followers)
to it, and inserting the leader task into some other class appropriate
to it, where it is available to participate in other groupings.
@PP
When a task is taken out of a class, the class may become empty, in
which case the tasker deletes that class.  When the follower tasks are
assigned to the leader task, the set of times covered by the leader
usually changes, and the tasker may need to create a new class object
to hold it.  So class objects may be both created and destroyed by the tasker
when tasks are grouped.
# (The tasker holds a free list of class objects.)
@PP
A tasker may handle any number of groupings over its lifetime, but at
any moment there is at most one grouping.  The operations for building
this @I { current grouping } are:
@ID @C {
void KheTaskerGroupingClear(KHE_TASKER tr);
bool KheTaskerGroupingAddClass(KHE_TASKER tr, KHE_TASKER_CLASS c);
bool KheTaskerGroupingDeleteClass(KHE_TASKER tr, KHE_TASKER_CLASS c);
int KheTaskerGroupingBuild(KHE_TASKER tr, int max_num, char *debug_str);
}
These call the platform operations, as well as keeping the tasker up
to date.
@PP
@C { KheTaskerGroupingClear } starts off a grouping, clearing out
any previous grouping.
@PP
@C { KheTaskerGroupingAddClass }, which may be called any number of
times, adds @C { c } to the current grouping.  If there is a problem
with this, it returns @C { false } and changes nothing.  These
potential problems (there are two kinds) are explained below.
@PP
@C { KheTaskerGroupingDeleteClass } undoes a call to
@C { KheTaskerGroupingAddClass } with the same @C { c } that
returned @C { true }.  Deleting @C { c } might not be possible, since it
might leave the grouping with no viable leader class (for which
see below).  @C { KheTaskerGroupingDeleteClass } returns @C { false }
in that case, and changes nothing.  This cannot happen if classes
are deleted in stack order (last in first out), because each
deletion then returns the grouping to a viable previous state.
@PP
@C { KheTaskerGroupingBuild } ends the grouping.  It makes some groups and
returns the number it made.  Each group is either made completely, or
not at all.  The number of groups made is the minimum of @C { max_num }
and the @C { KheTaskerClassTaskCount } values for the classes.  It then
removes all classes from the grouping, like @C { KheTaskerGroupingClear }
does, understanding that some may have already been destroyed by being
emptied out by @C { KheTaskerGroupingBuild }.
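@PP
Since each group takes one task from each class of the grouping, the
number of groups made works out as a simple minimum.  This one-function
sketch (a self-contained illustration, not part of KHE) assumes the
task count of each class is given in an array:

```c
/* Illustrative calculation of how many complete groups can be made:
   the minimum of the requested number and every class's task count,
   since each group removes one task from each class. */
static int groups_made(int max_num, const int class_task_count[],
                       int num_classes)
{
    int n = max_num;
    for (int c = 0; c < num_classes; c++)
        if (class_task_count[c] < n)
            n = class_task_count[c];
    return n;
}
```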
@PP
It is acceptable to add just one class, in which case the `groups' are
just tasks from that class, no assignments are made, and nothing actually
changes in the tasker's data structure.  If this is not wanted, then
the caller should ensure that @C { KheTaskerGroupingClassCount }
(see below) is at least 2 before calling @C { KheTaskerGroupingBuild }.
@PP
Parameter @C { debug_str } is used only by debugging code, to
say why a group was made.  For example, its value might be
@C { "combinatorial grouping" } or @C { "profile grouping" }.
@PP
At any time, the classes of the current grouping may be
accessed by calling
@ID @C {
int KheTaskerGroupingClassCount(KHE_TASKER tr);
KHE_TASKER_CLASS KheTaskerGroupingClass(KHE_TASKER tr, int i);
}
in the usual way.  They will not usually be returned in the
order they were added, however; in particular, the class that
the tasker currently intends to use as the leader class has
index 0.
@PP
We now describe the two problems that make
@C { KheTaskerGroupingAddClass } return @C { false }.  The first
problem concerns leader tasks.  Tasks are grouped by choosing one
task as the leader and assigning the others to it.  So one of the
classes added by @C { KheTaskerGroupingAddClass } has to be chosen as
the one that leader tasks will be taken from (the @I { leader class }).
The tasker does this automatically in a way that usually works well.
(It chooses any class whose tasks are already assigned a resource,
or if there are none of those, a class whose domain has minimal
cardinality, and checks that the first task of each of the other
classes can be assigned to the first task of that class without
changing any existing resource assignment.)  But in rare cases, the
domains of two classes may be such that neither is a subset of the
other, or two classes may be initially assigned different resources.
@C { KheTaskerGroupingAddClass } returns @C { false } in such cases.
@PP
The second problem concerns the times covered by the classes.  It
would not do to group together two tasks which cover the same time,
because then, when a resource is assigned to the grouped task, the
resource would have a clash.  More generally, if a resource cannot
be assigned to two tasks on the same day (for example), then it
would not do to group two tasks which cover two times from the
same day.  To help with this, the tasker has functions
@ID @C {
void KheTaskerAddOverlapFrame(KHE_TASKER tr, KHE_FRAME frame);
void KheTaskerDeleteOverlapFrame(KHE_TASKER tr);
}
# void KheTaskerAddOverlapTimeGroup(KHE_TASKER tr, KHE_TIME_GROUP tg);
# void KheTaskerClearOverlapTimeGroups(KHE_TASKER tr);
@C { KheTaskerAddOverlapFrame } informs the tasker that a resource
should not be assigned two tasks that cover the same time group of
@C { frame }.  If this condition would be violated by some call to
@C { KheTaskerGroupingAddClass }, then that call returns @C { false }
and adds nothing.  @C { KheTaskerDeleteOverlapFrame }, which is never
needed in practice, removes this requirement.
# @C { KheTaskerAddOverlapTimeGroup } may be called any number of times.
# It informs the tasker that a group which covers two times from @C { tg }
# (or one time twice) is not permitted.  If some call to
# @C { KheTaskerGroupingAddClass } would violate this condition, then that call
# returns @C { false } and adds nothing.  @C { KheTaskerAddOverlapFrame }
# calls @C { KheTaskerAddOverlapTimeGroup } for each time group
# of @C { frame }.  And @C { KheTaskerClearOverlapTimeGroups }, which
# is never needed in practice, clears away all overlap time groups.
@PP
If overlaps are prevented in this way, the same class cannot be added
to a grouping twice.  So there is no need to prohibit that explicitly.
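@PP
The overlap condition can be sketched with bitmask time sets.  The
function and representation below are invented for illustration (they
are not the tasker's implementation):  a candidate class is rejected
when it covers a time in a frame time group that the grouping already
covers:

```c
#include <stdbool.h>

/* Illustrative overlap check.  frame[] holds one time bitmask per
   frame time group (for example, one per day); covered is the union
   of the times already covered by the classes of the grouping; cand
   is the candidate class's time set.  The candidate overlaps when
   some frame time group contains both a covered time and a candidate
   time, so a resource assigned to the resulting group would be busy
   twice in that time group. */
static bool class_overlaps_grouping(const unsigned int frame[],
    int num_groups, unsigned int covered, unsigned int cand)
{
    for (int g = 0; g < num_groups; g++)
        if ((frame[g] & covered) != 0 && (frame[g] & cand) != 0)
            return true;
    return false;
}
```

In particular, a class whose times are already in the grouping always
overlaps, which is why the same class cannot be added twice.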
# @PP
# Each time may lie in at most one overlap time group.  There is no
# logical need for this, but it simplifies the implementation, and
# it is true in practice (i.e. when overlap time groups are derived
# from frames).  @C { KheTaskerAddOverlapTimeGroup } and
# @C { KheTaskerAddOverlapFrame } may not be called when a grouping
# is under construction.
@PP
When @C { KheTaskerGroupingAddClass } returns @C { false }, the caller
has two options.  One is to abandon this grouping altogether, which
is done by not calling @C { KheTaskerGroupingBuild }.  The next call to
@C { KheTaskerGroupingClear } will clear everything out for a fresh
start.  The other option is to continue with the grouping, finding
other classes to add.  This is done by making zero or more other
calls to @C { KheTaskerGroupingAddClass }, followed by
@C { KheTaskerGroupingBuild }.
@PP
After one grouping is completed, the user may start another.  The tasker
will have been updated by the previous @C { KheTaskerGroupingBuild }
to no longer contain the ungrouped tasks but instead to contain the
grouped ones.  They can become elements of new groups.
@PP
@C { KHE_TASKER_CLASS } objects may be created by
@C { KheTaskerGroupingBuild }, to hold the newly created groups,
and also destroyed, because empty classes are deleted.  So
variables of type @C { KHE_TASKER_CLASS } may become
undefined when @C { KheTaskerGroupingBuild } is called.
@PP
Although @C { KheTaskerGroupingAddClass } can be used to check whether a
class can be added, it may be convenient to check for overlap in
advance.  For this there are functions
@ID @C {
bool KheTaskerTimeOverlapsGrouping(KHE_TASKER_TIME t);
bool KheTaskerClassOverlapsGrouping(KHE_TASKER_CLASS c);
}
@C { KheTaskerTimeOverlapsGrouping } returns @C { true } if @C { t }
lies in an overlap time group which is currently covered by a class of
the current grouping.  @C { KheTaskerClassOverlapsGrouping } returns
@C { true } if any of the times covered by @C { c } is already so covered.
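These overlap rules can be illustrated with a small self-contained toy
model in C.  It is a sketch only, not KHE code: all names are invented,
times are small integers, and @C { overlap_group[t] } maps each time to
the index of its overlap time group, or @C { -1 } if it lies in none.
A class whose times lie in a group already covered by the current
grouping is rejected, which is also what prevents adding the same class
twice.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Toy model of overlap checking (invented names, not KHE code).
   overlap_group[t] is the overlap time group containing time t, or -1.
   covered[g] is true while the current grouping covers group g. */
#define MAX_TIMES 100
#define MAX_GROUPS 20

static int overlap_group[MAX_TIMES];
static bool covered[MAX_GROUPS];

/* start a fresh grouping, clearing all covered marks */
static void grouping_clear(void)
{
  memset(covered, 0, sizeof(covered));
}

/* analogous to KheTaskerTimeOverlapsGrouping */
static bool time_overlaps_grouping(int t)
{
  return overlap_group[t] >= 0 && covered[overlap_group[t]];
}

/* analogous to KheTaskerGroupingAddClass: a class is a set of times;
   fail if any of its times lies in an already covered overlap group,
   otherwise mark all its overlap groups covered */
static bool grouping_add_class(const int *times, int count)
{
  int i;
  for( i = 0;  i < count;  i++ )
    if( time_overlaps_grouping(times[i]) )
      return false;
  for( i = 0;  i < count;  i++ )
    if( overlap_group[times[i]] >= 0 )
      covered[overlap_group[times[i]]] = true;
  return true;
}
```

Because a class covers its own overlap groups after it is added, a
second attempt to add it necessarily overlaps and is rejected, as the
text observes.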
# @PP
# Consider the following scenario.  A grouping is constructed which
# includes a class with an assigned resource.  Other classes in the
# grouping do not have the assigned resource, but they overlap in time
# with classes that do.  When a group is made from the grouping, there
# will be a clash.  This scenario is not explicitly prevented.  It
# underlies the importance of not just accepting the groups made by a
# grouping; one must check their cost.  These functions help with that:
# @ID @C {
# bool KheTaskerGroupingTestAsstBegin(KHE_TASKER tr, KHE_RESOURCE *r);
# void KheTaskerGroupingTestAsstEnd(KHE_TASKER tr);
# }
# @C { KheTaskerGroupingTestAsstBegin } selects a suitable resource
# and assigns it to tasks that form a group in the current grouping
# (skipping assigned tasks).  If it succeeds, it sets @C { *r } to the
# resource it used and returns @C { true }, otherwise it undoes any
# changes, sets @C { *r } to @C { NULL },  and returns @C { false }.
# @C { KheTaskerGroupingTestAsstEnd } undoes what a successful call
# to @C { KheTaskerGroupingTestAsstBegin } did.  It must be called,
# or else errors will occur in the tasker.
# @PP
# A suitable resource is either one that is already assigned to one
# or more tasks of the grouping, or else it is the first resource
# from the domain of the leader class that is free at the times
# covered by all of the classes of the grouping, taking any overlap
# frame into account.  If there is no such resource (not likely),
# @C { KheTaskerGroupingTestAsstBegin } returns @C { false }.
@End @SubSection

@SubSection
  @Title { Tasker support for profile grouping }
  @Tag { resource_structural.constraints.pgroupings }
@Begin
@LP
Taskers also have functions which support profile grouping
(Section {@NumberOf resource_structural.constraints.profile}).  To
set and retrieve the @I { profile maximum length }, the calls are
@ID @C {
void KheTaskerSetProfileMaxLen(KHE_TASKER tr, int profile_max_len);
int KheTaskerProfileMaxLen(KHE_TASKER tr);
}
The profile maximum length can only be set when there are no
profile time groups.
@PP
To visit the sequence of @I { profile time groups } maintained by the
tasker, the calls are
@ID @C {
int KheTaskerProfileTimeGroupCount(KHE_TASKER tr);
KHE_PROFILE_TIME_GROUP KheTaskerProfileTimeGroup(KHE_TASKER tr, int i);
}
To make one profile time group and add it to the end of the tasker's
sequence, and to delete a profile time group, the calls are
@ID @C {
KHE_PROFILE_TIME_GROUP KheProfileTimeGroupMake(KHE_TASKER tr,
  KHE_TIME_GROUP tg);
void KheProfileTimeGroupDelete(KHE_PROFILE_TIME_GROUP ptg);
}
Deleting a profile time group moves the last one into its position,
so individual deletion only makes sense in practice when all the
profile time groups are being deleted.  A better function to call
then is
@ID @C {
void KheTaskerDeleteProfileTimeGroups(KHE_TASKER tr);
}
which deletes all of @C { tr }'s profile time groups.  They go
into a free list in the tasker.
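The move-last-into-place behaviour is ordinary swap-with-last array
deletion.  The following hypothetical sketch (invented names, not KHE's
internal layout) shows why indices stop meaning what they did after a
deletion, and why bulk deletion by repeatedly deleting position 0 is
the safe pattern.

```c
#include <assert.h>

/* Toy sketch of swap-with-last deletion (not KHE's internal layout).
   Deleting entry i moves the last entry into slot i, so indices after
   a deletion do not mean what they did before it. */
#define MAX_PTGS 10
static int ptgs[MAX_PTGS];   /* stand-ins for profile time groups */
static int ptg_count = 0;

/* delete the entry at position i */
static void ptg_delete(int i)
{
  ptgs[i] = ptgs[--ptg_count];   /* last entry moves to position i */
}

/* safe bulk deletion: always delete position 0 until empty */
static void delete_all_ptgs(void)
{
  while( ptg_count > 0 )
    ptg_delete(0);
}
```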
@PP
Functions
@ID @C {
KHE_TASKER KheProfileTimeGroupTasker(KHE_PROFILE_TIME_GROUP ptg);
KHE_TIME_GROUP KheProfileTimeGroupTimeGroup(KHE_PROFILE_TIME_GROUP ptg);
}
retrieve a profile time group's tasker and time group.
@PP
A profile time group's @I { cover } is the number of @I { cover tasks }:
tasks that cover the time group, ignoring tasks that cover more than
@C { profile_max_len } profile time groups.  This is returned by
@ID @C {
int KheProfileTimeGroupCover(KHE_PROFILE_TIME_GROUP ptg);
}
The profile time group also keeps track of the @I { domain cover }:
the number of cover tasks with a given domain.  Two domains are
considered to be equal if @C { KheResourceGroupEqual } says that
they are.  To visit the (distinct) domains of a profile time group,
in increasing domain size order, the calls are
@ID @C {
int KheProfileTimeGroupDomainCount(KHE_PROFILE_TIME_GROUP ptg);
KHE_RESOURCE_GROUP KheProfileTimeGroupDomain(KHE_PROFILE_TIME_GROUP ptg,
  int i, int *cover);
}
@C { KheProfileTimeGroupDomain } returns the domain cover as well as the
domain itself.  The sum of the domain covers is the cover.  There is also
@ID @C {
bool KheProfileTimeGroupContainsDomain(KHE_PROFILE_TIME_GROUP ptg,
  KHE_RESOURCE_GROUP domain, int *cover);
}
which searches @C { ptg }'s list of domains for @C { domain },
returning @C { true } and setting @C { *cover } to the domain
cover if it is found.
@PP
@C { KheProfileTimeGroupDomain } and
@C { KheProfileTimeGroupContainsDomain } may return 0
for @C { *cover } when tasks with a given domain enter
the profile and later leave it.
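The domain cover bookkeeping can be modelled by a small self-contained
C sketch.  It is a toy, not KHE's implementation: domains are plain
int ids compared by value (where KHE compares with
@C { KheResourceGroupEqual }), and it shows how an entry's cover can
drop to 0 while the domain remains listed.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of domain cover tracking (invented names, not KHE code).
   Domains are int ids; two domains are equal when their ids are. */
#define MAX_DOMAINS 10
static int dom_id[MAX_DOMAINS];
static int dom_cover[MAX_DOMAINS];
static int dom_count = 0;

/* add one cover task with the given domain */
static void cover_add(int domain)
{
  int i;
  for( i = 0;  i < dom_count;  i++ )
    if( dom_id[i] == domain )
      { dom_cover[i]++;  return; }
  dom_id[dom_count] = domain;  dom_cover[dom_count] = 1;  dom_count++;
}

/* remove one cover task; the entry remains, possibly with 0 cover */
static void cover_remove(int domain)
{
  int i;
  for( i = 0;  i < dom_count;  i++ )
    if( dom_id[i] == domain )
      { dom_cover[i]--;  return; }
}

/* analogous to KheProfileTimeGroupCover: sum of the domain covers */
static int total_cover(void)
{
  int i, total = 0;
  for( i = 0;  i < dom_count;  i++ )
    total += dom_cover[i];
  return total;
}

/* analogous to KheProfileTimeGroupContainsDomain */
static bool contains_domain(int domain, int *cover)
{
  int i;
  for( i = 0;  i < dom_count;  i++ )
    if( dom_id[i] == domain )
      { *cover = dom_cover[i];  return true; }
  return false;
}
```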
@PP
Profile grouping algorithms will group tasks while these functions
are being called.  The sequence of profile time groups is unaffected
by grouping, but covers and domain covers will change if the grouped
tasks cover more than @C { profile_max_len } profile time groups.
The domains of a profile time group may also change during grouping,
when tasks with unequal domains are grouped.  Altogether it is safest
to discontinue a partially completed traversal of the domains of a
profile time group when a grouping occurs.
@PP
There are also a few functions on tasker classes that relate
to profile time groups.  First,
@ID @C {
bool KheTaskerClassCoversProfileTimeGroup(KHE_TASKER_CLASS c,
  KHE_PROFILE_TIME_GROUP ptg);
}
returns @C { true } if @C { c } covers @C { ptg }.  Each class
keeps track of the times from profile time groups that it covers.
Functions
@ID @C {
int KheTaskerClassProfileTimeCount(KHE_TASKER_CLASS c);
KHE_TASKER_TIME KheTaskerClassProfileTime(KHE_TASKER_CLASS c, int i);
}
visit these times in an unspecified order.
@PP
Function
@ID @C {
void KheTaskerProfileDebug(KHE_TASKER tr, int verbosity, int indent,
  FILE *fp);
}
prints the profile time groups of @C { tr } onto @C { fp }, along with
the classes that cover at most @C { profile_max_len } of them.
@End @SubSection

@SubSection
  @Title { Combinatorial grouping }
  @Tag { resource_structural.constraints.combinatorial }
@Begin
@LP
Suppose that there are two kinds of shifts (tasks), day and night;
that a resource must be busy on both days of the weekend or neither;
and that a resource cannot work a day shift on the day after a night
shift.  Then resources assigned to the Saturday night shift must work
on Sunday, and so must work the Sunday night shift.  So it makes sense
to group one Saturday night shift with one Sunday night shift, and to
do so repeatedly until night shifts run out on one of those days.
@PP
Suppose that the groups just made consume all the Sunday night shifts.
Then those working the Saturday day shifts cannot work the Sunday
night shifts, because the Sunday night shifts are grouped with
Saturday night shifts now, which clash with the Saturday day shifts.
So now it is safe to group one Saturday day shift with one Sunday
day shift, and to do so repeatedly until day shifts run out on one
of those days.
@PP
Groups made in this way can be a big help to solvers.  In instance
@C { COI-GPost.xml }, for example, each Friday night task can be
grouped with tasks for the next two nights.  Good solutions always
assign these three tasks to the same resource, owing to constraints
specifying that the weekend following a Friday night shift must be
busy, that each weekend must be either free on both days or busy on
both, and that a night shift must not be followed by a day shift.
A time sweep task assignment algorithm (say) cannot look ahead
and see such cases coming.
@PP
@I { Combinatorial grouping } implements these ideas.  It searches
through a space whose elements are sets of classes.  For each set of
classes @M { S } in the search space, it calculates a cost @M { c(S) },
defined below, and selects a set @M { S prime } such that
@M { c( S prime ) } is zero or minimal.  It then makes one group by
selecting one task from each class and grouping those tasks, repeating
until as many tasks as possible or desired have been grouped.
@PP
As formulated here, one application of combinatorial grouping
groups one set of classes @M { S prime }.  In the example above,
grouping the Saturday and Sunday night shifts would be one
application, then grouping the Saturday and Sunday day shifts
would be another.
@PP
Combinatorial grouping is carried out by a
@I { combinatorial grouping solver }, made like this:
@ID @C {
KHE_COMB_SOLVER KheCombSolverMake(KHE_TASKER tr, KHE_FRAME days_frame);
}
It deals with @C { tr }'s tasks, using memory from @C { tr }'s arena.
Any groups it makes are made using @C { tr }'s grouping operations,
and so are reflected in @C { tr }'s classes, and in its task set.
Parameter @C { days_frame } would always be the common frame.  It
is used when selecting a suitable resource to tentatively assign to
a group of tasks, to find out what times the resource should be free.
@PP
Functions
@ID @C {
KHE_TASKER KheCombSolverTasker(KHE_COMB_SOLVER cs);
KHE_FRAME KheCombSolverFrame(KHE_COMB_SOLVER cs);
}
return @C { cs }'s tasker and frame.
@PP
A @C { KHE_COMB_SOLVER } object can solve any number of
combinatorial grouping problems, one after another.  The user
loads the solver with one problem's @I requirements (these determine
the search space), then requests a solve, then loads another
problem and solves, and so on.
@PP
It is usually best to start the process of loading requirements
into the solver by calling
@ID @C {
void KheCombSolverClearRequirements(KHE_COMB_SOLVER cs);
}
This clears away any old requirements.
@PP
A key requirement for most solves is that the groups it makes
should cover a given time group.  Such requirements can be added
and removed, any number of times, by calling
@ID @C {
void KheCombSolverAddTimeGroupRequirement(KHE_COMB_SOLVER cs,
  KHE_TIME_GROUP tg, KHE_COMB_SOLVER_COVER_TYPE cover);
void KheCombSolverDeleteTimeGroupRequirement(KHE_COMB_SOLVER cs,
  KHE_TIME_GROUP tg);
}
@C { KheCombSolverAddTimeGroupRequirement } specifies that
the groups must cover @C { tg } in a manner given by the @C { cover }
parameter, whose type is
@ID @C {
typedef enum {
  KHE_COMB_SOLVER_COVER_YES,
  KHE_COMB_SOLVER_COVER_NO,
  KHE_COMB_SOLVER_COVER_PREV,
  KHE_COMB_SOLVER_COVER_FREE
} KHE_COMB_SOLVER_COVER_TYPE;
}
We'll explain this in detail later.
@C { KheCombSolverDeleteTimeGroupRequirement } removes the effect of a
previous call to @C { KheCombSolverAddTimeGroupRequirement } with the
same time group.  There must have been such a call, otherwise
@C { KheCombSolverDeleteTimeGroupRequirement } aborts.
@PP
Any number of requirements that the groups should cover a given
class may be added:
@ID @C {
void KheCombSolverAddClassRequirement(KHE_COMB_SOLVER cs,
  KHE_TASKER_CLASS c, KHE_COMB_SOLVER_COVER_TYPE cover);
void KheCombSolverDeleteClassRequirement(KHE_COMB_SOLVER cs,
  KHE_TASKER_CLASS c);
}
These work in the same way as for time groups, except that care is
needed because @C { c } may be rendered undefined by a solve, if
it makes groups which empty @C { c } out.  The safest option
after a solve whose requirements include a class is to call
@C { KheCombSolverClearRequirements }.
@PP
Three other requirements of quite different kinds may be added:
@ID @C {
void KheCombSolverAddProfileGroupRequirement(KHE_COMB_SOLVER cs,
  KHE_PROFILE_TIME_GROUP ptg, KHE_RESOURCE_GROUP domain);
void KheCombSolverDeleteProfileGroupRequirement(KHE_COMB_SOLVER cs,
  KHE_PROFILE_TIME_GROUP ptg);
}
and
@ID @C {
void KheCombSolverAddProfileMaxLenRequirement(KHE_COMB_SOLVER cs);
void KheCombSolverDeleteProfileMaxLenRequirement(KHE_COMB_SOLVER cs);
}
and
@ID @C {
void KheCombSolverAddNoSinglesRequirement(KHE_COMB_SOLVER cs);
void KheCombSolverDeleteNoSinglesRequirement(KHE_COMB_SOLVER cs);
}
Again, we'll explain the precise effect later.  These last three
requirements can only be added once:  a second call replaces the
first rather than adding to it.
@PP
There is no need to reload requirements between solves.  The
requirements stay in effect until they are either deleted
individually or cleared out by @C { KheCombSolverClearRequirements }.
The only caveat concerns classes that become undefined during
grouping, as discussed above.
@PP
The search space of combinatorial solving is defined by all
these requirements.  First, we need some definitions.  A task
@I covers a time if it, or a task assigned to it directly or
indirectly, runs at that time.  A task covers a time group if
it covers any of the time group's times.  A class covers a time
or time group if its tasks do.  A class covers a class if it is
that class.  A set of classes covers a time, time group, or class
if any of its classes covers that time, time group, or class.
@PP
Now a set @M { S } of classes lies in the search space for a run
of combinatorial grouping if:
@NumberedList

@LI @OneRow {
Each class in @M { S } covers at least one of the time groups and
classes passed to the solver by the calls to
@C { KheCombSolverAddTimeGroupRequirement } and
@C { KheCombSolverAddClassRequirement }.
}

@LI @OneRow {
For each time group @C { tg } or class @C { c } passed to the solver by
@C { KheCombSolverAddTimeGroupRequirement } or
@C { KheCombSolverAddClassRequirement },
if the accompanying @C { cover } is @C { KHE_COMB_SOLVER_COVER_YES },
then @M { S } covers @C { tg } or @C { c }; or if @C { cover } is
@C { KHE_COMB_SOLVER_COVER_NO }, then @M { S } does not cover @C { tg }
or @C { c }; or if @C { cover } is @C { KHE_COMB_SOLVER_COVER_PREV },
then @M { S } covers @C { tg } or @C { c } if and only if it covers
the time group or class immediately preceding @C { tg } or @C { c }; or
if @C { cover } is @C { KHE_COMB_SOLVER_COVER_FREE }, then @M { S } is
free to cover @C { tg } or @C { c }, or not.
@LP
If the first time group or class has cover @C { KHE_COMB_SOLVER_COVER_PREV },
this is treated like @C { KHE_COMB_SOLVER_COVER_FREE }.
@LP
Time groups and classes not mentioned may be covered, or not.  The
difference between this and passing a time group or class with cover
@C { KHE_COMB_SOLVER_COVER_FREE } is that the classes that cover
a free time group or class are included in the search space.
}

@LI @OneRow {
The classes of @M { S } may be added to the tasker to form a grouping.
There are rare cases where adding the classes in one order will
succeed, while adding them in another order will fail.  In those
cases, whether @M { S } is included in the search space or not will
depend on the (unspecified) order in which the solver chooses to add
@M { S }'s classes to the tasker.
}

@LI @OneRow {
If @C { KheCombSolverAddProfileGroupRequirement(cs, ptg, domain) } is
in effect, then @M { S } contains at least one class that covers
@C { ptg }'s time group, and if @C { domain != NULL }, that class
has that domain.
}

@LI @OneRow {
If @C { KheCombSolverAddProfileMaxLenRequirement(cs) } is in
effect, then @M { S } contains only classes that cover at most
@C { profile_max_len } times from profile time groups.
}

@LI @OneRow {
If @C { KheCombSolverAddNoSinglesRequirement(cs) } is in effect,
then @M { S } contains at least two classes.  Otherwise @M { S }
contains at least one class.
}

@EndList
That fixes the search space.  We now define the cost @M { c(S) }
of each set of classes @M { S } in that space.
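Condition 2, the cover-type condition, can be expressed as a small
self-contained predicate.  The following C sketch is illustrative only
(the names are invented, and KHE's search does not necessarily test
candidates this way):  @C { cover[i] } is the required cover type of
the @M { i }'th time group or class, in order, and @C { covers[i] }
records whether the candidate set @M { S } covers it.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy check of search-space condition 2 (a sketch, not KHE code) */
typedef enum {
  COVER_YES,
  COVER_NO,
  COVER_PREV,
  COVER_FREE
} COVER_TYPE;

/* return true if S's coverage pattern satisfies the requirements */
static bool satisfies_cover_reqs(const COVER_TYPE *cover,
  const bool *covers, int count)
{
  int i;
  for( i = 0;  i < count;  i++ )
    switch( cover[i] )
    {
      case COVER_YES:   if( !covers[i] ) return false;  break;
      case COVER_NO:    if( covers[i] )  return false;  break;
      case COVER_PREV:  /* the first requirement is treated as FREE */
                        if( i > 0 && covers[i] != covers[i-1] )
                          return false;
                        break;
      case COVER_FREE:  break;
    }
  return true;
}
```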
@PP
The first step is to identify a suitable resource @M { r }.  Take the
first class of the tasker grouping made from @M { S }; this is the
class that leader tasks will come from.  If it already has an assigned
resource (as returned by @C { KheTaskerClassAsstResource }), use that
resource for @M { r }.  Otherwise search the class's domain (as
returned by @C { KheTaskerClassDomain }) for a resource which is free at
all of the time groups of the current frame which overlap with the time
groups added by calls to @C { KheCombSolverAddTimeGroupRequirement }.
If no such resource can be found, ignore @M { S }.
@PP
The second step is to assign @M { r } to one task from each class
of @M { S }, except in classes where @M { r } is already assigned
to a task.  This is done without informing the tasker, but after
the cost is determined these assignments are undone, so the
tasker's integrity is not compromised in the end.  The cost
@M { c(S) } of a set of classes @M { S } is determined while the
assignments are in place.  It is the total cost of all cluster busy
times and limit busy times monitors which monitor @M { r } and have
times lying entirely within the times covered by the time groups
added by calls to @C { KheCombSolverAddTimeGroupRequirement }.
This second condition is included because we don't want @M { r }'s
global workload, for example, to influence the outcome.
# The cost @M { c(S) } of a set of classes @M { S } is the change
# in solution cost caused by assigning a suitable resource (as
# defined for @C { KheTaskerGroupingTestAsstBegin } in
# Section {@NumberOf resource_structural.constraints.groupings})
# to one task from each class of @M { S }, taking into account only
# avoid clashes, cluster busy times, and limit busy times constraints
# which apply to every resource of the type of the tasks being
# grouped.  Furthermore, the times of the cluster busy times and
# limit busy times constraints must lie entirely within the times
# covered by the classes from which @M { S } is chosen; we don't
# want changes in a resource's global workload, for example, to
# influence the outcome.
@PP
After all the requirements are added, an actual solve is carried
out by calling
@ID @C {
int KheCombSolverSolve(KHE_COMB_SOLVER cs, int max_num,
  KHE_COMB_SOLVER_COST_TYPE ct, char *debug_str);
}
@C { KheCombSolverSolve } searches the space of all sets of classes
@M { S } that satisfy the six conditions, and selects one set
@M { S prime } of minimal cost @M { c( S prime ) }.  Using
@M { S prime }, it makes as many groups as it can, up to
@C { max_num }, and returns the number it actually made,
between @C { 0 } and @C { max_num }.  If @M { S prime }
contains a single class, no groups are made and the value
returned is 0.
@PP
Parameter @C { ct } has type
@ID @C {
typedef enum {
  KHE_COMB_SOLVER_COST_MIN,
  KHE_COMB_SOLVER_COST_ZERO,
  KHE_COMB_SOLVER_COST_SOLE_ZERO
} KHE_COMB_SOLVER_COST_TYPE;
}
If @C { ct } is @C { KHE_COMB_SOLVER_COST_MIN }, then @M { c( S prime ) }
must be minimal among all @M { c(S) }.
If @C { ct } is @C { KHE_COMB_SOLVER_COST_ZERO }
or @C { KHE_COMB_SOLVER_COST_SOLE_ZERO }, then @M { c( S prime ) } must
also be 0, and in the second case there must be no other @M { S } in
the search space such that @M { c(S) } is 0.  If these conditions are
not met, no groups are made.
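The three cost types can be illustrated by a toy selection rule over
an explicit list of candidate costs.  This is a sketch of the
semantics only, with invented names; KHE does not materialize the
search space as an array like this.

```c
#include <assert.h>

/* Toy illustration of the three cost type semantics (not KHE code) */
typedef enum {
  COST_MIN,
  COST_ZERO,
  COST_SOLE_ZERO
} COST_TYPE;

/* costs[] holds c(S) for each candidate set in the search space.
   Return the index of the chosen set, or -1 if no groups are made. */
static int choose_set(const int *costs, int count, COST_TYPE ct)
{
  int i, best = -1, zeros = 0;
  for( i = 0;  i < count;  i++ )
  {
    if( best == -1 || costs[i] < costs[best] )
      best = i;
    if( costs[i] == 0 )
      zeros++;
  }
  if( best == -1 )
    return -1;                          /* empty search space */
  if( ct != COST_MIN && costs[best] != 0 )
    return -1;                          /* zero cost required */
  if( ct == COST_SOLE_ZERO && zeros != 1 )
    return -1;                          /* zero cost must be unique */
  return best;
}
```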
@PP
Parameter @C { debug_str } is passed on to @C { KheTaskerGroupingBuild }.
It might be @C { "combinatorial grouping" }, for example.
@PP
An awkward question raised by combinatorial grouping is what to do about
@I { singles }:  classes whose tasks already satisfy the requirements,
without any grouping.  The answer seems to vary depending on why
combinatorial grouping is being called, so the combinatorial solver
does not have a single way of dealing with singles.  Instead it
offers three features that help with them.
@PP
First, as we have seen, if the set of classes @M { S prime } with
minimum or zero cost contains only one class, @C { KheCombSolverSolve }
accepts that it is the best but makes no groups from it, returning 0
for the number of groups made.
@PP
Second, as we have also seen, @C { KheCombSolverAddNoSinglesRequirement }
allows the user to declare that a set @M { S } consisting of a single
class which satisfies all the requirements (a single) should be
excluded from the search space.  But adding this requirement
is not a magical solution to the problem of singles.  For one thing,
when we need a unique zero-cost set of classes, we may well want to
include singles in the search space, to show that grouping is better
than doing nothing.  For another, there may still be an @M { S }
containing one single and another class which covers a time group or
class with cover type @C { KHE_COMB_SOLVER_COVER_FREE }.
@PP
Third, after setting up a problem ready to call
@C { KheCombSolverSolve }, one can call
@ID @C {
int KheCombSolverSingleTasks(KHE_COMB_SOLVER cs);
}
This searches the same space as @C { KheCombSolverSolve } does, but
it does no grouping.  Instead, it returns the total number of tasks in
sets of classes @M { S } in that space such that @M { bar S bar = 1 }.
It returns 0 if @C { KheCombSolverAddNoSinglesRequirement } is in
effect when it is called, quite correctly.
@PP
Finally,
@ID @C {
void KheCombSolverDebug(KHE_COMB_SOLVER cs, int verbosity,
  int indent, FILE *fp);
}
produces the usual debug print of @C { cs } onto @C { fp }
with the given verbosity and indent.
@End @SubSection

@SubSection
  @Title { Applying combinatorial grouping }
  @Tag { resource_structural.constraints.applying }
@Begin
@LP
This section describes one way in which the general idea of
combinatorial grouping, as just presented, may be applied in
practice.  This way is implemented by function
@ID @C {
int KheCombGrouping(KHE_COMB_SOLVER cs, KHE_OPTIONS options);
}
@C { KheCombGrouping } does what this section describes, and
returns the number of groups it made.  Before it is called,
the common frame should be loaded into @C { cs }'s tasker as
overlap time groups.
@PP
Let @M { m } be the value of the @F rs_group_by_rc_max_days option
of @C { options }.  Iterate over all pairs @M { (f, c) }, where
@M { f } is a subset of the common frame containing @M { k }
adjacent time groups, for all @M { k } such that @M { 2 <= k <= m },
and @M { c } is a class that covers @M { f }'s first or last time group.
@PP
For each pair, set up and run combinatorial grouping with one `yes'
class, namely @M { c }, and one `free' time group for each of the
@M { k } time groups of @M { f }.  Set @C { max_num } to @C { INT_MAX },
and set @C { ct } to @C { KHE_COMB_SOLVER_COST_SOLE_ZERO }.  If there
is a unique zero-cost way to group a task of @M { c } with tasks on
the following @M { k - 1 } days, this call will find it and carry out
as many groupings as it can.
# , and set @C { allow_single } to @C { false }.
@PP
If @M { f } has @M { k } time groups, each with @M { n } classes,
say, there are up to @M { (n + 1) sup {k - 1} } combinations for
each run, so @C { rs_group_by_rc_max_days } must be small, say 3,
or 4 at most.  In any case, unique zero-cost groupings typically
concern weekends, so larger values are unlikely to yield anything.
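The @M { (n + 1) sup {k - 1} } estimate can be made concrete with a
tiny helper.  The numbers below are invented for illustration; the
point is only how quickly the count grows with @M { k }.

```c
#include <assert.h>

/* Back-of-envelope count of combinations per run, (n + 1)^(k - 1),
   where n is the number of classes per time group and k the number
   of adjacent time groups.  Illustrative only, not part of KHE. */
static long combinations(int n, int k)
{
  long result = 1;
  int i;
  for( i = 0;  i < k - 1;  i++ )
    result *= n + 1;
  return result;
}
```

With 5 classes per day, 3 days give 36 combinations and 4 days give
216, both cheap; at 7 days the count is already 46656, which is why
small values of @C { rs_group_by_rc_max_days } are recommended.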
@PP
If one @M { (f, c) } pair produces some grouping, then
@C { KheCombGrouping } returns to the first pair containing @M { f }.
This handles cases like the one described earlier, where a grouping
of Saturday and Sunday night shifts opens the way to a grouping of
Saturday and Sunday day shifts.
@PP
The remainder of this section describes @I { combination elimination }.
This is a refinement that @C { KheCombGrouping } uses to make
unique zero-cost combinations more likely in some cases.
@PP
Some combinations examined by combinatorial grouping may have zero
cost as far as the monitors used to evaluate it are concerned, but
have non-zero cost when evaluated in a different way, involving the
overall supply of and demand for resources.  Such combinations can
be ruled out, leaving fewer zero-cost combinations, and potentially
more task grouping.
@PP
For example, suppose there is a maximum limit on the number of
weekends each resource can work.  If this limit is tight
enough, it will force every resource to work complete weekends,
even without an explicit constraint, if that is the only way
that the available supply of resources can cover the demand
for weekend shifts.  This example fits the pattern to be given
now, setting @M { C } to the constraint that limits the number
of busy weekends, @M { T } to the times of all weekends,
@M { T sub i } to the times of the @M { i }th weekend, and
@M { f tsub i } to the number of days in the @M { i }th weekend.
@PP
Take any set of times @M { T }.  Let @M { S(T) }, the
@I { supply during @M { T } }, be the sum over all resources
@M { r } of the maximum number of times that @M { r } can be busy
during @M { T } without incurring a cost.  Let @M { D(T) }, the
@I { demand during @M { T } }, be the sum over all tasks @M { x }
for which non-assignment would incur a cost, of the number of times
@M { x } is running during @M { T }.  Then @M { S(T) >= D(T) }
or else a cost is unavoidable.
@PP
In particular, take any cluster busy times constraint @M { C } which
applies to all resources, has time groups which are all positive, and
has a non-trivial maximum limit @M { M }.  (The analysis also applies
when the time groups are all negative and there is a non-trivial
minimum limit, setting @M { M } to the number of time groups minus
the minimum limit.)  Suppose there are @M { n } time groups
@M { T sub i }, for @M { 1 <= i <= n }, and let their union be @M { T }.
@PP
Let @M { f tsub i } be the number of time groups from the common
frame with a non-empty intersection with @M { T sub i }.  This is
the maximum number of times from @M { T sub i } during which any one
resource can be busy without incurring a cost, since a resource can
be busy for at most one time in each time group of the common frame.
@PP
Let @M { F } be the sum of the largest @M { M } @M { f tsub i }
values.  This is the maximum number of times from @M { T } that
any one resource can be busy without incurring a cost:  if it is
busy for more times than this, it must either be busy for more
than @M { f tsub i } times in some @M { T sub i }, or else it
must be busy for more than @M { M } time groups, violating the
constraint's maximum limit.
@PP
If there are @M { R } resources altogether, then the supply during
@M { T } is bounded by
@ID @M { S(T) <= RF }
since @M { C } is assumed to apply to every resource.
@PP
As explained above, to avoid cost the demand must not exceed the
supply, so
@ID @M { D(T) <= S(T) <= RF }
Furthermore, if @M { D(T) >= RF }, then @M { D(T) = RF }, and any
failure to make full use of the supply will incur a cost.  That is,
every resource which is busy during @M { T sub i } must be busy for
the full @M { f tsub i } times in @M { T sub i }.
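The arithmetic of the bound is simple enough to sketch in
self-contained C.  The example numbers below (four weekends of two
days each) are hypothetical; the function computes @M { F } as the sum
of the largest @M { M } of the @M { f tsub i } values, and returns the
supply bound @M { RF }.

```c
#include <assert.h>

/* Compute the supply bound R * F, where F is the sum of the largest
   max_limit of the f[0..n-1] values (a sketch of the arithmetic from
   the text, not KHE code).  Assumes n <= 64. */
static long supply_bound(const int *f, int n, int max_limit, int R)
{
  int used[64] = { 0 };
  long F = 0;
  int i, j;
  for( j = 0;  j < max_limit && j < n;  j++ )
  {
    int best = -1;
    for( i = 0;  i < n;  i++ )         /* select the next largest f_i */
      if( !used[i] && (best == -1 || f[i] > f[best]) )
        best = i;
    used[best] = 1;
    F += f[best];
  }
  return (long) R * F;
}
```

For instance, with four weekends of two days each (@M { f tsub i = 2 }),
maximum limit 2, and 10 resources, @M { F = 4 } and the supply bound is
40 busy weekend times.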
@PP
So the effect on grouping is this:  if @M { D(T) >= RF }, a resource
that is busy in one time group of the common frame that overlaps
@M { T sub i } should be busy in every time group of the common
frame that overlaps @M { T sub i }.  @C { KheCombGrouping } searches
for constraints @M { C } that have this effect, and informs its
combinatorial grouping solver about what it found by changing the
cover types of some time groups from `free' to `prev'.  When
searching for groups, the option of covering some of these time
groups but not others is removed.  With fewer options, there is
more chance that some combination might be the only one with
zero cost, allowing more task grouping.
@PP
Instance @C { CQ14-05 } has two constraints that limit busy weekends.
One applies to 10 resources and has maximum limit 2; the other applies
to the remaining 6 resources and has maximum limit 3.  So combination
elimination actually takes sets of constraints with the same time
groups that together cover every resource once.  Instead of @M { RF }
(above), it uses the sum over the set's constraints @M { c sub j }
of @M { R sub j F sub j }, where @M { R sub j } is the number of
resources that @M { c sub j } applies to, and @M { F sub j } is the
sum of the largest @M { M sub j } of the @M { f tsub i } values,
where @M { M sub j } is the maximum limit of @M { c sub j }.  The
@M { f tsub i } are the same for all @M { c sub j }.
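The summed bound can be checked with a short self-contained
calculation.  The resource counts and limits (10 resources with
maximum limit 2, 6 with maximum limit 3) come from the @C { CQ14-05 }
description above; the assumption of four weekends of two days each is
hypothetical, made only so the @M { f tsub i } are concrete.

```c
#include <assert.h>

/* Sum of R_j * F_j over a set of constraints that together cover
   every resource once (a sketch of the arithmetic, not KHE code).
   All constraints share the same f_i values; here all f_i equal f,
   with n of them, so F_j is simply min(M_j, n) * f. */
static long multi_supply_bound(const int *R, const int *M,
  int num_constraints, int n, int f)
{
  long total = 0;
  int j;
  for( j = 0;  j < num_constraints;  j++ )
  {
    int m = M[j] < n ? M[j] : n;
    total += (long) R[j] * m * f;
  }
  return total;
}
```

With four 2-day weekends, the bound is 10 * 2 * 2 + 6 * 3 * 2 = 76
busy weekend times, replacing the single @M { RF } term.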
@End @SubSection

@SubSection
  @Title { Profile grouping }
  @Tag { resource_structural.constraints.profile }
@Begin
@LP
Suppose 6 nurses are required on the Monday, Tuesday, Wednesday,
Thursday, and Friday night shifts, but only 4 are required on the
Saturday and Sunday night shifts.  Consider any division of the
night shifts into sequences of one or more shifts on consecutive
days.  However these sequences are made, at least two must begin
on Monday, and at least two must end on Friday.
@PP
Now suppose that the intention is to assign the same resource to
each shift of any one sequence, and that a limit active intervals
constraint, applicable to all resources, specifies that night shifts
on consecutive days must occur in sequences of at least 2 and at most
3.  Then the two sequences of night shifts that must begin on Monday
must contain a Monday night and a Tuesday night shift at least, and the
two that end on Friday must contain a Thursday night and a Friday night
shift at least.  So here are two groupings, of Monday and Tuesday
nights and of Thursday and Friday nights, for each of which we can
build two task groups.
@PP
Suppose that we already have a task group which contains a sequence
of 3 night shifts on consecutive days.  This group cannot be grouped
with any night shifts on days adjacent to the days it currently
covers.  So for present purposes the tasks of this group can be
ignored.  This can change the number of night shifts running on
each day, and so change the amount of grouping.  For example, in
instance @C { COI-GPost.xml }, all the Friday, Saturday, and Sunday
night shifts get grouped into sequences of 3, and 3 is the maximum,
so those night shifts can be ignored here, and so every Monday night
shift begins a sequence, and every Thursday night shift ends one.
@PP
We now generalize this example, ignoring for the moment a few
issues of detail.  Let @M { C } be any limit active intervals
constraint which applies to all resources, and whose time groups
@M { T sub 1 ,..., T sub k } are all positive.  Let @M { C }'s
limits be @M { C sub "max" } and @M { C sub "min" }, and suppose
@M { C sub "min" } is at least 2 (if not, there can be no grouping
based on @M { C }).  What follows is relative to @M { C }, and is
repeated for each such constraint.  Constraints with the same
time groups are notionally merged, allowing the minimum limit
to come from one constraint and the maximum limit from another.
@PP
A @I { maximal task } is a task which covers at least @M { C sub "max" }
adjacent time groups from @M { C }.
Maximal tasks can have no influence on grouping to satisfy @M { C }'s
minimum limit, so they may be ignored, that is, profile grouping may
run as though they are not there.  This applies both to tasks which are
present at the start, and tasks which are constructed along the way.  
@PP
Let @M { n sub i } be the number of tasks that cover @M { T sub i },
not including maximal tasks.  The @M { n sub i } together make up
the @I profile of @M { C }.  The tasker operations from
Section {@NumberOf resource_structural.constraints.taskers }
which support profile grouping make it easy to find the profile.
@PP
For each @M { i } such that @M { n sub {i-1} < n sub i },
@M { n sub i - n sub {i-1} } groups of length at least
@M { C sub "min" } must start at @M { T sub i } (more precisely,
they must cover @M { T sub i } but not @M { T sub {i-1} }).  They may
be constructed by combinatorial grouping, passing in time groups
@M { T sub i ,..., T sub { i + C sub "min" - 1 } } with cover type
`yes', and @M { T sub {i-1} } and @M { T sub { i + C sub "max" } } with
cover type `no', asking for @M { m = n sub i - n sub {i-1} - c sub i }
groups, where @M { c sub i } is the number of existing tasks (not
including maximal ones) that satisfy these conditions already (as
returned by @C { KheCombSolverSingleTasks }).  The new groups must
group at least 2 tasks each.  Some of the time groups may not exist;
in that case, omit the non-existent ones but still do the grouping,
provided there are at least 2 `yes' time groups.  The case for
sequences ending at @M { T sub j } is symmetrical.
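The rule for where groups must start follows directly from the
profile.  The following self-contained C sketch (invented names, not
KHE code) computes, for each @M { T sub i }, how many groups must
start there, taking @M { n sub 0 = 0 } as in the no-history case.  The
test profile is loosely based on the nursing example that opens this
section, with the day before the first time group included so the
Monday figure is visible.

```c
#include <assert.h>

/* From the profile n[0..k-1] (tasks covering each time group,
   maximal tasks excluded), compute how many groups must start at
   each T_i:  max(0, n_i - n_{i-1}), with n_0 = 0 (no history).
   A sketch of the bookkeeping, not KHE code. */
static void group_starts(const int *n, int k, int *starts)
{
  int i, prev = 0;                     /* n_0 = 0: no history */
  for( i = 0;  i < k;  i++ )
  {
    starts[i] = n[i] > prev ? n[i] - prev : 0;
    prev = n[i];
  }
}
```

Counting ends is symmetrical: traverse the profile in reverse with
@M { n sub {k+1} = 0 }.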
@PP
If @M { C } has no history, we may set @M { n sub 0 } and
@M { n sub {k+1} } to 0, allowing groups to begin at @M { T sub 1 }
and end at @M { T sub k }.  If @M { C } has history, we do not know
how many tasks are running outside @M { C }, so we set @M { n sub 0 }
and @M { n sub {k+1} } to infinity, preventing groups from beginning
at @M { T sub 1 } and ending at @M { T sub k }.
@PP
Groups made by one round of profile grouping may participate in later
rounds.  Suppose @M { C sub "min" = 2 }, @M { C sub "max" = 3 },
@M { n sub 1 = n sub 5 = 0 }, and @M { n sub 2 = n sub 3 = n sub 4 = 4 }.
Profile grouping builds 4 groups of length 2 beginning at @M { T sub 2 },
then 4 groups of length 3 ending at @M { T sub 4 }, incorporating the
length 2 groups.
# @PP
# The general aim is to pack blocks of size freely chosen between
# @M { C sub "min" } and @M { C sub "max" } into a given profile, and
# group wherever it can be shown that the packing can only take one
# form.  But we are not interested in optimal solutions (ones with
# the maximum amount of grouping), so we do not search for other
# cases.  However, some apparently different cases are actually
# already covered.  For example, suppose @M { C sub "min" = 2 } and
# @M { C sub "max" = 3 }, with @M { n sub 1 = n sub 5 = 0 } and
# @M { n sub 2 = n sub 3 = n sub 4 = 4 }.  Then 4 groups of length 3
# can be built.  But the function does this:  it first builds 4
# groups of length 2 begining at @M { T sub 2 }, then 4 groups of
# length 3 ending at @M { T sub 4 }, incorporating the length 2 groups.
@PP
We turn now to four issues of detail.
@PP
@B { History. }  If history is present, the first step is to handle it.
For each resource @M { r sub i } with a history value @M { x sub i }
such that @M { x sub i < C sub "min" }, use combinatorial grouping with
one `yes' time group for each of the first @M { C sub "min" -  x sub i }
time groups of @M { C } (when these all exist), build one group, and
assign @M { r sub i } to it.  (This idea is not yet implemented;
none of the instances available at the time of writing need it.)
# , and one `no' time group for the next time group of @M { C }
@PP
@B { Singles. }  We need to consider how singles affect profile
grouping.  Singles of length @M { C sub "max" } or more are
ignored, but there may be singles of length @M { C sub "min" }
when @M { C sub "min" < C sub "max" }.
@PP
The @M { n sub i - n sub {i-1} } groups that must start at
@M { T sub i } include singles.  Singles are already present, which
amounts to saying that they must be made first.  So before calling
@C { KheCombSolverSolve } we call @C { KheCombSolverSingles }
to determine @M { c sub i }, the number of singles that satisfy the
requirements, and then we pass @M { n sub i - n sub {i-1} - c sub i }
to @C { KheCombSolverSolve }, not @M { n sub i - n sub {i-1} }, and
exclude singles from its search space.
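@PP
The number passed to @C { KheCombSolverSolve } can be sketched as
follows (an illustrative helper, not part of the KHE API):
```c
#include <assert.h>

/* Illustrative sketch, not KHE API: the number of new groups to request
   at T_i, after discounting the c_i singles that already satisfy the
   requirements.  Clamped at zero in case the singles alone suffice. */
static int GroupsToRequest(int n_i, int n_prev, int c_i)
{
  int m = n_i - n_prev - c_i;
  return m > 0 ? m : 0;
}
```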
@PP
@B { Varying task domains. }  Suppose that one senior nurse is wanted
each night, four ordinary nurses are wanted each week night, and two
ordinary nurses are wanted each weekend night.  Then the two groups
starting on Monday nights should group demands for ordinary nurses,
not senior nurses.  Nevertheless, tasks with different domains are
not totally unrelated.  A senior nurse could very well act as an
ordinary nurse on some shifts.
@PP
We still aim to build @M { M = n sub i - n sub {i-1} - c sub i }
groups as before.  However, we do this by making several calls on
combinatorial grouping.  For each resource group @M { g } appearing
as a domain in any class running at time @M { T sub i }, find
@M { n sub gi }, the number of tasks (not including maximal ones) with
domain @M { g } running at @M { T sub i }, and @M { n sub { g(i-1) } },
the number at @M { T sub {i-1} }.  For each @M { g } such that
@M { n sub gi > n sub { g(i-1) } }, call combinatorial grouping,
insisting (by calling @C { KheCombSolverAddProfileRequirement })
that @M { T sub i } be covered by a class whose domain is @M { g },
passing @M { m = min( M, n sub gi - n sub { g(i-1) } ) }, then
subtract from @M { M } the number of groups actually made.
Stop when @M { M = 0 } or the list of domains is exhausted.
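@PP
The per-domain loop can be sketched as follows.  This is illustrative
only, not part of the KHE API; in particular, the call to combinatorial
grouping is replaced by a stand-in that grants every request in full,
whereas the real solver may make fewer groups than requested:
```c
#include <assert.h>

/* Illustrative sketch, not KHE API: distribute at most M groups over
   domains.  need[g] holds n_gi - n_g(i-1) for domain g; a real version
   would call combinatorial grouping here. */
static int DistributeGroups(const int *need, int domain_count, int M)
{
  int g, m, made, total = 0;
  for( g = 0;  g < domain_count && M > 0;  g++ )
  {
    if( need[g] <= 0 )
      continue;
    m = need[g] < M ? need[g] : M;
    made = m;        /* stand-in for the number of groups actually made */
    M -= made;
    total += made;
  }
  return total;
}
```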
# @PP
# We still aim to build @M { M = n sub i - n sub {i-1} - c sub i }
# groups as before.  However, we do this by making several calls on
# combinatorial grouping, utilizing the @C { domain } parameter, which
# we call @M { g } here.  For each @M { g } appearing as a domain in
# any class running at time @M { T sub i }, find @M { n sub gi }, the
# number of tasks (not including maximal ones) with domain @M { g }
# running at @M { T sub i }, and @M { n sub { g(i-1) } }, the number
# at @M { T sub {i-1} }.  For each @M { g } such that
# @M { n sub gi > n sub { g(i-1) } }, add @M { g } and
# @M { M sub g = n sub gi - n sub { g(i-1) } } to a list.
# Then re-traverse the list.  For each @M { g } on it, call
# combinatorial grouping, passing @M { m = min( M, M sub g ) } and
# @M { g }, then subtract from @M { M } the number of groups actually
# made.  Stop when @M { M = 0 } or the list is exhausted.
# @End @SubSection
# 
# @SubSection
#   @Title { Applying profile grouping }
#   @Tag { resource_structural.constraints.applying2 }
# @Begin
# @LP
@PP
@B { Non-uniqueness of zero-cost groupings. }
The main problem with profile grouping is that there may be
several zero-cost groupings in a given situation.  For example,
a profile might show that a group covering Monday, Tuesday, and
Wednesday may be made, but give no guidance on which shifts on
those days to group.
@PP
One reasonable way of dealing with this problem is the following.
First, do not insist on unique zero-cost groupings; instead, accept
any zero-cost grouping.  This ensures that a reasonable amount of
profile grouping will happen.  Second, to reduce the chance of
making poor choices of zero-cost groupings, limit profile grouping
to two cases.
@PP
The first case is when each time group @M { T sub i } contains a
single time, as at the start of this section, where each
@M { T sub i } contained the time of a night shift.  Although we do
not insist on unique zero-cost groupings, we are likely to get them
in this case, so we call this @I { strict profile grouping }.
@PP
The second case is when @M { C sub "min" = C sub "max" }.  It is
very constraining to insist, as this does, that every sequence of
consecutive busy days (say) away from the start and end of the cycle
must have a particular length.  Indeed, it changes the problem into a
combinatorial one of packing these rigid sequences into the profile.
Local repairs cannot do this well, because to increase
or decrease the length of one sequence, we must decrease or increase
the length of a neighbouring sequence, and so on all the way back to
the start or forward to the end of the cycle (unless there are
shifts nearby which can be assigned or not without cost).
So we turn to profile grouping to find suitable groups before
assigning any resources.  Some of these groups may be less than
ideal, but still the overall effect should be better than no
grouping at all.  We call this @I { non-strict profile grouping }.
# No profile grouping of this kind is done until
# all cases where the time groups are singletons have been tried.
@PP
When @M { C sub "min" = C sub "max" }, all singles are off-profile.
This is easy to see:  by definition, a single covers @M { C sub "min" }
time groups, so it covers @M { C sub "max" } time groups, but
@C { profile_max_len } is @M { C sub "max" - 1 }.
@PP
These ideas are implemented by function
@ID @C {
int KheProfileGrouping(KHE_COMB_SOLVER cs, bool non_strict);
}
It carries out some profile grouping, as follows, and returns
the number of groups it makes.
@PP
Find all limit active intervals constraints @M { C } whose time
groups are all positive and which apply to all resources.  Notionally
merge pairs of these constraints that share the same time groups when
one has a minimum limit and the other has a maximum limit.  Let
@M { C } be one of these (possibly merged) constraints such that
@M { C sub "min" >= 2 }.  Furthermore, if @C { non_strict } is
@C { false }, then @M { C }'s time groups must all be singletons,
while if @C { non_strict } is @C { true }, then @M { C sub "min" = C sub "max" }
must hold.
@PP
A constraint may qualify for both strict and non-strict processing.
This is true, for example, of a constraint that imposes equal lower
and upper limits on the number of consecutive night shifts.  Such a
constraint will be selected in both the strict and non-strict cases,
which is fine.
@PP
For each of these constraints, proceed as follows.  Set the profile
time groups in the tasker to @M { T sub 1 ,..., T sub k }, the time
groups of @M { C }, and set the @C { profile_max_len } attribute to
@M { C sub "max" - 1 }.  The tasker will then report the values
@M { n sub i } needed for @M { C }.
@PP
Traverse the profile repeatedly, looking for cases where
@M { n sub i > n sub {i-1} } and @M { n sub j < n sub {j+1} }, and
use combinatorial grouping (aiming to find zero-cost groups, not
unique zero-cost groups) to build groups which cover @M { C sub "min" }
time groups starting at @M { T sub i } (or ending at @M { T sub j }).  This
involves loading @M { T sub i ,..., T sub {i + C sub "min" - 1} } as `yes'
time groups, and @M { T sub {i-1} } and @M { T sub { i + C sub "max" } }
as `no' time groups, as explained above.
@PP
The profile is traversed repeatedly until no points which allow
grouping can be found.  In the strict grouping case, it is then
time to stop, but in the non-strict case it is better to keep
grouping, as follows.  From among all time groups @M { T sub i }
where @M { n sub i > 0 }, choose one which has been the starting
point for a minimal number of groups (to spread out the starting
points as much as possible) and make a group there if combinatorial
grouping allows it.  Then return to traversing the profile
repeatedly:  there should now be @M { n sub i > n sub {i-1} }
cases just before the latest group and @M { n sub j < n sub {j+1} }
cases just after it.  Repeat until there is no @M { T sub i } where
@M { n sub i > 0 } and combinatorial grouping can build a group.
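@PP
The choice of starting point in the non-strict case can be sketched
as follows (illustrative only, not part of the KHE API):
```c
#include <assert.h>

/* Illustrative sketch, not KHE API: choose the day index for the next
   group's starting point: among indexes i with n[i] > 0, pick one that
   has been the starting point of the fewest groups so far. */
static int ChooseStart(const int *n, const int *used, int k)
{
  int i, best = -1;
  for( i = 1;  i <= k;  i++ )
    if( n[i] > 0 && (best == -1 || used[i] < used[best]) )
      best = i;
  return best;  /* -1 when every n[i] is 0 */
}
```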
@End @SubSection

@EndSubSections
@End @Section

@Section
    @Title { Grouping by resource }
    @Tag { resource_structural.task_tree.group.by.resource }
@Begin
@LP
@I { Grouping by resource } is another kind of task grouping,
obtained by calling
@ID @C {
bool KheTaskingGroupByResource(KHE_TASKING tasking,
  KHE_OPTIONS options, KHE_TASK_SET ts);
}
Like grouping by resource constraints, it groups tasks whose resource
types are covered by @C { tasking } and which lie in adjacent time
groups of the common frame, and adds each task to which it makes an
assignment to @C { ts } (if @C { ts } is non-@C { NULL }).  However, the
tasks are chosen in quite a different way:  each group consists
of a maximal sequence of tasks which lie in adjacent time groups
of the frame and are currently assigned to the same resource.
The thinking is that if the solution is already of good quality,
it may be advantageous to keep these runs of tasks together while
trying (by means of any repair algorithm whatsoever) to assign
them to different resources.
@PP
When a grouping made by @C { KheTaskingGroupByResource } and
recorded in a task set is no longer needed, function
@C { KheTaskSetUnGroup } (Section {@NumberOf extras.task_sets})
may be used to remove it.
# @PP
# @C { KheTaskingGroupByResource } and @C { KheTaskSetUnGroup }
# understand that some tasks may already be grouped.  They do not disturb
# these existing groupings.
@End @Section

#@Section
#    @Title { Enforcing work patterns }
#    @Tag { resource_structural.patterns }
#@Begin
#@LP
#@I { still to do }
#@End @Section

@Section
    @Title { The task grouper }
    @Tag { resource_structural.task_tree.grouper }
@Begin
@LP
A @I { task grouper } supports a more elaborate form of grouping, one
which allows the grouping to be done, undone, and redone at will.
@PP
The first step is to create a task grouper object, by calling
@ID @C {
KHE_TASK_GROUPER KheTaskGrouperMake(KHE_RESOURCE_TYPE rt, HA_ARENA a);
}
This makes a task grouper object for tasks of type @C { rt }.
It is deleted when @C { a } is deleted.  Also,
@ID @C {
void KheTaskGrouperClear(KHE_TASK_GROUPER tg);
}
clears @C { tg } back to its state immediately after
@C { KheTaskGrouperMake }, without changing @C { rt } or @C { a }.
@PP
To add tasks to a task grouper, make any number of calls to
@ID @C {
bool KheTaskGrouperAddTask(KHE_TASK_GROUPER tg, KHE_TASK t);
}
Each task passed to @C { tg } in this way must be assigned directly
to the cycle task for some resource @C { r } of type @C { rt }.  The
tasks passed to @C { tg } by @C { KheTaskGrouperAddTask } which are
assigned @C { r } at the time they are passed are placed in one group.
No assignments are made.
@PP
If @C { true } is returned by @C { KheTaskGrouperAddTask }, @C { t }
is the @I { leader task } for its group:  it is the first task
assigned @C { r } which has been passed to @C { tg }.
If @C { false } is returned, @C { t } is not the leader task.
@PP
Adding the same task twice is legal but is the same as adding it
once.  If the task is the leader task, it is reported to be so
only the first time it is passed.
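@PP
The leader rule can be sketched with integers standing in for KHE
objects.  This is illustrative only, not part of the KHE API:
```c
#include <assert.h>

#define MAX_RESOURCES 8

/* Illustrative sketch, not KHE API: ints stand in for resources and
   tasks (task 0 means "none").  The first task added for a resource
   becomes the leader of that resource's group; later additions,
   including re-additions of the leader itself, return 0. */
static int leader[MAX_RESOURCES];

static int AddTask(int resource, int task)
{
  if( leader[resource] == 0 )
  {
    leader[resource] = task;
    return 1;  /* task is the leader of its group */
  }
  return 0;
}
```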
@PP
Importantly, although the grouping is determined by which resources
the tasks are assigned to, it is only the grouping that the grouper
cares about, not the resources.  Once the groups are made, the resources
that determined the grouping become irrelevant to the grouper.
#There
#is also
#@ID @C {
#KHE_TASK KheTaskGrouperLeaderTask(KHE_TASK_GROUPER tg, KHE_RESOURCE r);
#}
#which returns the leader task of @C { r }'s group, or @C { NULL }
#if @C { r }'s group is empty.
@PP
At any time one may call
@ID @C {
void KheTaskGrouperGroup(KHE_TASK_GROUPER tg);
void KheTaskGrouperUnGroup(KHE_TASK_GROUPER tg);
}
@C { KheTaskGrouperGroup } ensures that, in each group, the tasks other
than the leader task are assigned directly to the leader task.  It does
not change the assignment of the leader task.  @C { KheTaskGrouperUnGroup }
ensures that, for each group, the tasks other than the leader task are
assigned directly to whatever the leader task is assigned to (possibly
nothing).  As mentioned above, the resources which defined the groups
originally are irrelevant to these operations.
@PP
If @C { KheTaskGrouperGroup } cannot assign some task to its leader
task, it adds the task's task bounds to the leader task and tries again.
If it cannot add these bounds, or the assignment still does not succeed,
it aborts.  In addition to ungrouping, @C { KheTaskGrouperUnGroup }
removes any task bounds added by @C { KheTaskGrouperGroup }.  In detail,
@C { KheTaskGrouperGroup } records the number of task bounds present
when it is first called, and @C { KheTaskGrouperUnGroup } removes task
bounds from the end of the leader task until this number is reached.
@PP
A task grouper's tasks may be grouped and ungrouped at will.  This
is more general than using @C { KheTaskSetUnGroup }, since after
ungrouping that way there is no way to regroup.  The extra power
comes from the fact that a task grouper contains, in effect, a
task set for each group.
@PP
The author has encountered one case where @C { KheTaskGrouperUnGroup }
fails to remove the task bounds added by @C { KheTaskGrouperGroup }.
The immediate problem has probably been fixed, although it is hard to
be sure that it will not recur.  So instead of aborting in that case,
@C { KheTaskGrouperUnGroup } prints a debug message and stops removing
bounds for that task.
@End @Section

@Section
    @Title { Task finding }
    @Tag { resource_structural.task_finding }
@Begin
@LP
@I { Task finding } is KHE's name for some operations, based on
@I { task finder } objects, that find sets of tasks which are to
be moved all together from one resource to another.  They are used
by several of the solvers of Chapter {@NumberOf resource_solvers},
mainly for nurse rostering.
# ejection chains when solving nurse
# rostering problems; they could be used elsewhere.
@PP
Task finding is concerned with which days tasks are running.  A @I day
is a time group of the common frame.  The days that a task @C { t }
is running are the days containing the times that @C { t } itself is
running, plus the days containing the times that the tasks assigned
to @C { t }, directly or indirectly, are running.  The days that a
task set is running are the days that its tasks are running.
@PP
Task finding represents the days that a task or task set is running
by a @I { bounding interval }, a pair of integers:  @C { first_index },
the index in the common frame of the first day that the task or task
set is running, and @C { last_index }, the index of the last day that
the task or task set is running.  So task finding is unaware of cases
where a task runs twice on the same day, or has a @I gap (a day within
the bounding interval when it is not running).  Neither is likely in
practice.  Task finding considers the duration of a task or task set
to be the length of its bounding interval.
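@PP
The bounding interval computation can be sketched as follows
(illustrative only, not part of the KHE API):
```c
#include <assert.h>
#include <limits.h>

/* Illustrative sketch, not KHE API: the bounding interval of a set of
   day indexes.  For an empty set, first > last, matching the task
   finder's convention for empty task sets and time groups. */
static void BoundingInterval(const int *days, int count,
  int *first_index, int *last_index)
{
  int i;
  *first_index = INT_MAX;
  *last_index = -1;
  for( i = 0;  i < count;  i++ )
  {
    if( days[i] < *first_index )
      *first_index = days[i];
    if( days[i] > *last_index )
      *last_index = days[i];
  }
}
```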
@PP
Task finding operations typically find a set of tasks, often
stored in a task set object (Section {@NumberOf extras.task_sets}).
In some cases these tasks form a @I { task run }, that is, they
satisfy these conditions:
@NumberedList

@LI {
The set is non-empty.  An empty run would be useless.
}

@LI {
Every task is a proper root task.  The tasks are being found in
order to be moved from one resource to another, and this ensures
that the move will not break up any groups.
}

@LI {
No two tasks run on the same day.  This is more or less automatic
when the tasks are all assigned the same resource initially, but it
holds whether the tasks are assigned or not.  If it didn't, then
when the tasks are moved to a common resource there would be clashes.
}

@LI {
The days that the tasks are running are consecutive.  In other words,
between the first day and the last there are no @I { gaps }:  days
when none of the tasks is running.
}

@EndList
The task finder does not reject tasks which run twice on the same
day or which have gaps.  As explained above, it is unaware of these
cases.  So the last two conditions should really say that the task
finder does not introduce any @I new clashes or gaps when it groups
tasks into runs.
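@PP
Conditions 1, 3, and 4 can be sketched as a check over the tasks'
bounding intervals (illustrative only, not part of the KHE API):
```c
#include <assert.h>

typedef struct { int first, last; } Interval;

/* Illustrative sketch, not KHE API: check conditions 1, 3 and 4 for a
   task run, given the tasks' bounding intervals sorted by first day.
   The intervals must tile the days consecutively: any overlap would be
   a clash (condition 3) and any hole a gap (condition 4). */
static int IsRun(const Interval *iv, int count)
{
  int i;
  if( count == 0 )
    return 0;                               /* condition 1 fails */
  for( i = 1;  i < count;  i++ )
    if( iv[i].first != iv[i-1].last + 1 )   /* clash or gap */
      return 0;
  return 1;
}
```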
@PP
Some runs are @I { unpreassigned runs }, meaning that all of their
tasks are unpreassigned.  Only unpreassigned runs can be moved from
one resource to another.  And some runs are @I { maximal runs }:
they cannot be extended, either to left or right.  We mainly deal
with maximal runs, but just what we mean by `maximal' depends on
circumstances.  For example, we may want to exclude preassigned
tasks from our runs.  So our definition does @I not take the
arguably reasonable extra step of requiring all runs to be maximal.
@PP
Some task finding operations find all tasks assigned a particular
resource in a particular interval.  In these cases, only conditions
2 and 3 must hold; the result need not be a task run.
@PP
Task finding treats non-assignment like the assignment of a special
resource (represented by @C { NULL }).  This means that task
finding is equally at home finding assigned and unassigned tasks.
@PP
A task @C { t } @I { needs assignment } if @C { KheTaskNeedsAssignment(t) }
(Section {@NumberOf solutions.tasks.asst}) returns @C { true },
meaning that non-assignment of a resource to @C { t } would incur
a cost, because of an assign resource constraint, or a limit
resources constraint which is currently at or below its minimum
limit, that applies to @C { t }.  Task finding never includes
tasks that do not need assignment when it searches for unassigned
tasks, because assigning resources to such tasks is not a high
priority.  It does include them when searching for assigned tasks.
@PP
A resource is @I { effectively free } during some set of days if it
is @C { NULL }, or it is not @C { NULL } and the tasks it is assigned
to on those days do not need assignment.  The point is that it
is always safe to move some tasks to a resource on days when it is
effectively free:  if the resource is @C { NULL }, they are simply
unassigned, and if it is non-@C { NULL }, any tasks running on those
days do not need assignment, and can be unassigned, at no cost, before
the move is made.  Task finding utilizes the effectively free concept and
offers move operations that work in this way.
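@PP
The effectively-free test can be sketched as follows (illustrative
only, not part of the KHE API; the names are invented):
```c
#include <assert.h>

/* Illustrative sketch, not KHE API: is a resource effectively free on a
   set of days?  needs_asst[d] is nonzero when the task the resource is
   assigned on day d needs assignment, and 0 when the resource is free
   that day or its task there does not need assignment. */
static int EffectivelyFree(int resource_is_null,
  const int *needs_asst, const int *days, int day_count)
{
  int i;
  if( resource_is_null )
    return 1;
  for( i = 0;  i < day_count;  i++ )
    if( needs_asst[days[i]] )
      return 0;
  return 1;
}
```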
@BeginSubSections

@SubSection
    @Title { Task finder objects }
    @Tag { resource_structural.task_finding.task_finder }
@Begin
@LP
To create a task finder object, call
@ID @C {
KHE_TASK_FINDER KheTaskFinderMake(KHE_SOLN soln, KHE_OPTIONS options,
  HA_ARENA a);
}
This returns a pointer to a private struct in arena @C { a }.  Options
@C { gs_common_frame } (Section {@NumberOf extras.frames}) and
@C { gs_event_timetable_monitor } (Section {@NumberOf general_solvers.general})
are taken from @C { options }.  If either is @C { NULL },
@C { KheTaskFinderMake } returns @C { NULL }, since it cannot
do its work without them.
@PP
Ejection chain repair code can obtain a task finder from the ejector
object, by calling
@ID @C {
KHE_TASK_FINDER KheEjectorTaskFinder(KHE_EJECTOR ej);
}
This saves time and memory compared with creating new task finders
over and over.  Once again the return value is @C { NULL } if the
two options are not both present.
@PP
The days tasks are running (the time groups of the common frame) are
represented in task finding by their indexes, as explained above.
The first legal index is 0; the last is returned by
@ID @C {
int KheTaskFinderLastIndex(KHE_TASK_FINDER tf);
}
This is just @C { KheFrameTimeGroupCount(frame) - 1 }, where @C { frame }
is the common frame.  Also,
@ID @C {
KHE_FRAME KheTaskFinderFrame(KHE_TASK_FINDER tf);
}
may be called to retrieve the frame itself.
@PP
As defined earlier, the bounding interval of a task or task set
is the smallest interval containing all the days that the task
or task set is running.  It is returned by these functions:
@ID @C {
void KheTaskFinderTaskInterval(KHE_TASK_FINDER tf,
  KHE_TASK task, int *first_index, int *last_index);
void KheTaskFinderTaskSetInterval(KHE_TASK_FINDER tf,
  KHE_TASK_SET ts, int *first_index, int *last_index);
}
These set @C { *first_index } and @C { *last_index } to the indexes
of the first and last day that @C { task } or @C { ts } is running.
If @C { ts } is empty, @C { *first_index > *last_index }.  There is also
@ID @C {
void KheTaskFinderTimeGroupInterval(KHE_TASK_FINDER tf,
  KHE_TIME_GROUP tg, int *first_index, int *last_index);
}
which sets @C { *first_index } and @C { *last_index } to the
indexes of the first and last day that @C { tg } overlaps with.
If @C { tg } is empty, @C { *first_index > *last_index }.
@PP
These three operations find task sets and runs:
@ID @C {
void KheFindTasksInInterval(KHE_TASK_FINDER tf, int first_index,
  int last_index, KHE_RESOURCE_TYPE rt, KHE_RESOURCE from_r,
  bool ignore_preassigned, bool ignore_partial,
  KHE_TASK_SET res_ts, int *res_first_index, int *res_last_index);
bool KheFindFirstRunInInterval(KHE_TASK_FINDER tf, int first_index,
  int last_index, KHE_RESOURCE_TYPE rt, KHE_RESOURCE from_r,
  bool ignore_preassigned, bool ignore_partial, bool sep_need_asst,
  KHE_TASK_SET res_ts, int *res_first_index, int *res_last_index);
bool KheFindLastRunInInterval(KHE_TASK_FINDER tf, int first_index,
  int last_index, KHE_RESOURCE_TYPE rt, KHE_RESOURCE from_r,
  bool ignore_preassigned, bool ignore_partial, bool sep_need_asst,
  KHE_TASK_SET res_ts, int *res_first_index, int *res_last_index);
}
All three functions clear @C { res_ts }, which must have been
created previously, then add to it some tasks which are assigned
@C { from_r } (or are unassigned if @C { from_r } is @C { NULL }).
They set @C { *res_first_index } and @C { *res_last_index } to
the bounding interval of the tasks of @C { res_ts }.
# The set of tasks is @I { maximal }:  it cannot be any
# larger and still satisfy the various requirements.
# All tasks added to @C { res_ts } run on
# the days between @C { first_index } to @C { last_index } inclusive;
@PP
Let the @I { target interval } be the interval from @C { first_index }
to @C { last_index } inclusive.  Say that a task @I { overlaps } the
target interval when at least one of the days on which the task is
running lies in this interval.  Subject to the following conditions,
@C { KheFindTasksInInterval } finds all tasks that overlap the target
interval; @C { KheFindFirstRunInInterval } finds the first (leftmost)
run containing a task that overlaps the target interval, or returns
@C { false } if there is no such run; and @C { KheFindLastRunInInterval }
finds the last (rightmost) run containing a task that overlaps the
target interval, or returns @C { false } if there is no such run.
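@PP
The overlap test can be sketched as the usual interval intersection
(illustrative only, not part of the KHE API):
```c
#include <assert.h>

/* Illustrative sketch, not KHE API: a task with bounding interval
   [task_first, task_last] overlaps the target interval
   [first_index, last_index] when the two intervals intersect. */
static int Overlaps(int task_first, int task_last,
  int first_index, int last_index)
{
  return task_first <= last_index && task_last >= first_index;
}
```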
@PP
When @C { from_r } is @C { NULL }, only unassigned tasks that need
assignment (as discussed above) are added.  The first could be any
unassigned task of type @C { rt } (this is why @C { rt } is needed),
but the others must be compatible with the first, in
the sense defined below for widened task sets.  The point is that
we expect these tasks to be assigned some single resource, and it
would not do for them to have widely different domains.
@PP
Some tasks are @I { ignored }, which means that the operation
behaves as though they are simply not there.  Subject to this
ignoring feature, the runs found are maximal.  A task is ignored in
this way when it is running on any of the days that the tasks that
have already been added to @C { res_ts } are running.  Preassigned
tasks are ignored when @C { ignore_preassigned } is @C { true }.
Tasks that are running partly or wholly outside the target
interval are ignored when @C { ignore_partial } is @C { true }.
When @C { ignore_partial } is @C { false }, a run can extend
an arbitrary distance beyond the target interval, and contain
some tasks that do not overlap the target interval at all.
@PP
If @C { sep_need_asst } is @C { true }, all tasks @C { t }
in the run found by @C { KheFindFirstRunInInterval } or
@C { KheFindLastRunInInterval } have the same value of
@C { KheTaskNeedsAssignment(t) }.  This value could be @C { true }
or @C { false }, but it is the same for all tasks in the run.
If @C { sep_need_asst } is @C { false }, there is no requirement
of this kind.
@End @SubSection

@SubSection
    @Title { Daily schedules }
    @Tag { resource_structural.task_finding.daily }
@Begin
@LP
Sometimes more detailed information is needed about when a
task is running than just the bounding interval.  In those
cases, task finding offers @I { daily schedules }, which
calculate both the bounding interval and what is going on
on each day:
@ID @C {
KHE_DAILY_SCHEDULE KheTaskFinderTaskDailySchedule(
  KHE_TASK_FINDER tf, KHE_TASK task);
KHE_DAILY_SCHEDULE KheTaskFinderTaskSetDailySchedule(
  KHE_TASK_FINDER tf, KHE_TASK_SET ts);
KHE_DAILY_SCHEDULE KheTaskFinderTimeGroupDailySchedule(
  KHE_TASK_FINDER tf, KHE_TIME_GROUP tg);
}
These return a @I { daily schedule }:  a representation of
what @C { task }, @C { ts }, or @C { tg } is doing on each
day, including tasks assigned directly or indirectly to
@C { task } or @C { ts }.  Also,
@ID @C {
KHE_DAILY_SCHEDULE KheTaskFinderNullDailySchedule(
  KHE_TASK_FINDER tf, int first_day_index, int last_day_index);
}
returns a daily schedule representing doing nothing from the day
with index @C { first_day_index } to the day with index
@C { last_day_index } inclusive.
@PP
A @C { KHE_DAILY_SCHEDULE } is an object which uses memory
taken from its task finder's arena.  It can be deleted (which
actually means being added to a free list in its task finder)
by calling
@ID @C {
void KheDailyScheduleDelete(KHE_DAILY_SCHEDULE ds);
}
It has these attributes:
@ID @C {
KHE_TASK_FINDER KheDailyScheduleTaskFinder(KHE_DAILY_SCHEDULE ds);
bool KheDailyScheduleNoOverlap(KHE_DAILY_SCHEDULE ds);
int KheDailyScheduleFirstDayIndex(KHE_DAILY_SCHEDULE ds);
int KheDailyScheduleLastDayIndex(KHE_DAILY_SCHEDULE ds);
}
@C { KheDailyScheduleTaskFinder } returns @C { ds }'s task finder;
@C { KheDailyScheduleNoOverlap } returns @C { true } when no two
of the schedule's times occur on the same day, and @C { false }
otherwise; and @C { KheDailyScheduleFirstDayIndex } and
@C { KheDailyScheduleLastDayIndex } return the index of the
schedule's first and last days.  For each day between the first
and last inclusive,
@ID @C {
KHE_TASK KheDailyScheduleTask(KHE_DAILY_SCHEDULE ds, int day_index);
}
returns the task running in @C { ds } on day @C { day_index }.
It may be a task assigned directly or indirectly to @C { task }
or @C { ts }, not necessarily @C { task } or a task from
@C { ts }.  @C { NULL } is returned if no task is running
on that day.  This is certain for schedules created by
@C { KheTaskFinderTimeGroupDailySchedule } and
@C { KheTaskFinderNullDailySchedule }, but it is also possible
for schedules created by @C { KheTaskFinderTaskDailySchedule }
and @C { KheTaskFinderTaskSetDailySchedule }.  If there are two
or more tasks running on that day, an arbitrary one of them is
returned; this cannot happen when @C { KheDailyScheduleNoOverlap }
returns @C { true }.  Similarly,
@ID @C {
KHE_TIME KheDailyScheduleTime(KHE_DAILY_SCHEDULE ds, int day_index);
}
returns the time in @C { ds } that is busy on day @C { day_index }.
This will be @C { NULL } if there is no time in the schedule on that
day, which is always the case when the schedule was created by a
call to @C { KheTaskFinderNullDailySchedule }.
# @ID @C {
# void KheTaskFinderTaskIntervalAndTimes(KHE_TASK_FINDER tf,
#   KHE_TASK task, int *first_index, int *last_index,
#   KHE_TIME *first_time, KHE_TIME *last_time);
# void KheTaskFinderTaskSetIntervalAndTimes(KHE_TASK_FINDER tf,
#   KHE_TASK_SET ts, int *first_index, int *last_index,
#   KHE_TIME *first_time, KHE_TIME *last_time);
# void KheTaskFinderTimeGroupIntervalAndTimes(KHE_TASK_FINDER tf,
#   KHE_TIME_GROUP tg, int *first_index, int *last_index,
#   KHE_TIME *first_time, KHE_TIME *last_time);
# }
#@ID @C {
#bool KheTaskFinderTaskIntervalAndTimes(KHE_TASK_FINDER tf,
#  KHE_TASK task, int *first_index, int *last_index,
#  ARRAY_KHE_TIME *times_by_day);
#bool KheTaskFinderTaskSetIntervalAndTimes(KHE_TASK_FINDER tf,
#  KHE_TASK_SET ts, int *first_index, int *last_index,
#  ARRAY_KHE_TIME *times_by_day);
#bool KheTaskFinderTimeGroupIntervalAndTimes(KHE_TASK_FINDER tf,
#  KHE_TIME_GROUP tg, int *first_index, int *last_index,
#  ARRAY_KHE_TIME *times_by_day);
#}
#Here @C { *first_index } and @C { *last_index } are the indexes of the
#first and last days that @C { task }, @C { ts }, or @C { tg } are
#running.  Array @C { *times_by_day }, which must have been initialized
#before these functions are called, is cleared and then the times that
#@C { *task } is running are added to it in chronological order.  One
#time per day is added, so the array length is
#@C { *last_index - *first_index + 1 }.  If there are two times
#on the same day, only one is added to the array, and @C { false } is
#returned.  If there are no times on some day, @C { NULL } is added to
#the array, but this does not cause @C { false } to be returned.  If
#@C { ts } or @C { tg } is empty, an empty array of times is returned.
@End @SubSection

@SubSection
    @Title { Widened task sets }
    @Tag { resource_structural.task_finding.widened }
@Begin
@LP
The task finder offers a @I { widened task set } type, representing
a set of tasks assigned a common resource @C { from_r }, and divided
into three parts:  the @I { core }, a task run passed to the widened
task set initially; the @I { left wing }, a task run lying just before
the core in time; and the @I { right wing }, a task run lying just
after the core in time.  Widened task sets support moving and swapping
the core tasks, plus a variable number of wing tasks, from @C { from_r }
to another resource.
@PP
To create a widened task set with a given core, call
@ID @C {
bool KheWidenedTaskSetMake(KHE_TASK_FINDER tf, KHE_RESOURCE from_r,
  KHE_TASK_SET from_r_ts, int max_left_wing_count,
  int max_right_wing_count, KHE_WIDENED_TASK_SET *wts);
}
The tasks of @C { from_r_ts } must be assigned @C { from_r }, which
may be @C { NULL }, meaning unassigned as usual.  When @C { from_r_ts }
satisfies the basic conditions given above, @C { true } is returned
and @C { *wts } is set to a widened task set with a copy of
@C { from_r_ts } as its core (@C { from_r_ts } itself is not
stored, and the user is free to change it, or delete it, at any
time), plus left and right wings containing @C { max_left_wing_count }
and @C { max_right_wing_count } tasks compatible with the core, or
fewer if @C { KheWidenedTaskSetMake } cannot find suitable tasks.
@PP
When @C { from_r } is not @C { NULL }, a task is compatible
with the core if it is assigned @C { from_r }.  When @C { from_r }
is @C { NULL }, a task is compatible with the core if it needs
assignment, is unassigned, and its domain is similar to those
of the tasks of the core, in a sense that we will not define.
(When we move the core to some resource, we want the wings to
be able to move to that resource too.)
# @PP
# The tasks of a widened task set (the core and wings) must satisfy
# the basic conditions.  The user must ensure that the core tasks
# satisfy these conditions; @C { KheWidenedTaskSetMake } then finds
# wing tasks that ensure that the core and wings, taken together,
# satisfy them.
@PP
Nothing prevents the user from creating a widened task set with
@C { max_left_wing_count } and @C { max_right_wing_count } set
to 0.  Each wing is represented by one array inside the widened
task set object.  Empty arrays generate no memory allocation
calls, so basically all that is wasted is the time spent on
rediscovering, once per function call, that the wing is empty.
@PP
An alternative way to create a widened task set is
@ID @C {
bool KheWidenedTaskSetMakeFlexible(KHE_TASK_FINDER tf,
  KHE_RESOURCE from_r, KHE_TASK_SET from_r_ts,
  int max_wing_count, KHE_WIDENED_TASK_SET *wts);
}
This builds a widened task set whose left and right wings together
contain @C { max_wing_count } tasks, or fewer if suitable tasks
cannot be found.  It tries to have half the tasks in each wing,
but if that is not possible it makes one of the wings longer.
For example, if @C { from_r_ts } is immediately preceded by a
preassigned task, or lies at the left end of the common frame,
then the left wing will be empty and the right wing will contain
up to @C { max_wing_count } tasks.
@PP
When a widened task set is no longer needed, it should be
deleted, by calling
@ID @C {
void KheWidenedTaskSetDelete(KHE_WIDENED_TASK_SET wts);
}
This recycles @C { wts } through a free list in its task finder.
@PP
Finally, a few helper functions.  Function
@ID @C {
void KheWidenedTaskSetInterval(KHE_WIDENED_TASK_SET wts,
  int left_count, int right_count, int *first_index, int *last_index);
}
sets @C { *first_index } and @C { *last_index } to the endpoints
of the bounding interval of @C { wts }'s core plus the first
@C { left_count } elements of its left wing and the first
@C { right_count } elements of its right wing---the interval
affected by a move or swap.  Function
@ID @C {
void KheWidenedTaskSetFullInterval(KHE_WIDENED_TASK_SET wts,
  int *first_index, int *last_index);
}
does the same only for the entire left and right wings.
@PP
There are also functions which search for widened task sets:
@ID @C {
bool KheFindMovableWidenedTaskSetRight(KHE_TASK_FINDER tf,
  KHE_RESOURCE from_r, KHE_RESOURCE to_r, int days_first_index,
  KHE_WIDENED_TASK_SET *res_wts);
bool KheFindMovableWidenedTaskSetLeft(KHE_TASK_FINDER tf,
  KHE_RESOURCE from_r, KHE_RESOURCE to_r, int days_last_index,
  KHE_WIDENED_TASK_SET *res_wts);
}
They search right starting at @C { days_first_index }, and left
starting at @C { days_last_index }, for the first task run whose
tasks are assigned @C { from_r } (or are unassigned with the type
of @C { to_r } when @C { from_r } is @C { NULL }) and are movable
to @C { to_r } as defined by @C { KheWidenedTaskSetMoveCheck } just
below.  If such a run is found, they return it as a widened task
set with empty wings.  When @C { from_r } is @C { NULL }, the
first task of the run may be an arbitrary unassigned task, but
subsequent tasks must be compatible with it, as defined above.
The task run is maximal subject to these conditions, given that
days before @C { days_first_index } or after @C { days_last_index }
are out of bounds.  The search ends at the first or last day; it
does not wrap around.
@PP
As an aid to debugging, function
@ID @C {
void KheWidenedTaskSetDebug(KHE_WIDENED_TASK_SET wts, int left_count,
  int right_count, int verbosity, int indent, FILE *fp);
}
prints @C { wts } onto @C { fp } with the given verbosity and indent.
Only the first @C { left_count } and @C { right_count } left and right
wing tasks are printed.  The tasks are printed in chronological order,
with the core enclosed in brackets if @C { left_count } or
@C { right_count } is non-zero.
@End @SubSection

@SubSection
    @Title { Widened task set moves }
    @Tag { resource_structural.task_finding.move }
@Begin
@LP
A widened task set move operation is offered.  To find out
whether a move is possible, call
@ID @C {
bool KheWidenedTaskSetMoveCheck(KHE_WIDENED_TASK_SET wts,
  KHE_RESOURCE to_r, bool force, int *max_left_count,
  int *max_right_count);
}
If this returns @C { true }, the core may be moved from
@C { from_r } to @C { to_r }, along with any number of initial
left and right wing tasks up to @C { *max_left_count } and
@C { *max_right_count }.  It calls @C { KheTaskMoveCheck } to
verify that the tasks will move, except that if @C { to_r } is
@C { NULL }, this is an unassignment and no checks of tasks
initially assigned @C { to_r } are needed.  If @C { wts } came
from @C { KheFindMovableWidenedTaskSetRight } or
@C { KheFindMovableWidenedTaskSetLeft }, then there is no need to
call @C { KheWidenedTaskSetMoveCheck }, since the result must be
@C { true }, with @C { *max_left_count } and @C { *max_right_count }
equal to 0.
@PP
If @C { force } is @C { false }, @C { KheWidenedTaskSetMoveCheck }
also requires @C { to_r } to be effectively free during the bounding
interval of the tasks it moves.  If @C { force } is @C { true }, this
is not required:  tasks that need assignment may be unassigned by the move.
@PP
To actually carry out a move, call
@ID @C {
bool KheWidenedTaskSetMove(KHE_WIDENED_TASK_SET wts,
  KHE_RESOURCE to_r, int left_count, int right_count,
  int *from_r_durn_change, int *to_r_durn_change);
}
This moves @C { wts }'s core tasks, plus its first @C { left_count }
and @C { right_count } left and right wing tasks, to @C { to_r },
unassigning @C { to_r }'s tasks as required.  It does not check
anything again; it just does the moves.  If
@C { KheWidenedTaskSetMoveCheck } has returned @C { true }, then
this must succeed for any @C { left_count } and
@C { right_count } such that @C { 0 <= left_count <= *max_left_count }
and @C { 0 <= right_count <= *max_right_count }.  It can be undone
using marks and paths.
@PP
If the move succeeds, @C { *from_r_durn_change } and
@C { *to_r_durn_change } are set to the change in total duration
of the tasks assigned @C { from_r } and @C { to_r }.  Tasks are
neither created nor destroyed, so @C { *from_r_durn_change } and
@C { *to_r_durn_change } will be equal in absolute value and
opposite in sign---that is, unless some of @C { to_r }'s tasks
were unassigned, since that causes the total duration of the
tasks assigned @C { from_r } and @C { to_r } to decrease.
@PP
The move can be debugged by calling
@ID @C {
void KheWidenedTaskSetMoveDebug(KHE_WIDENED_TASK_SET wts,
  KHE_RESOURCE to_r, int left_count, int right_count,
  int verbosity, int indent, FILE *fp);
}
This prints the widened task set to be moved, and @C { to_r },
in a self-explanatory format.
@PP
A second move operation is offered:
@ID @C {
bool KheWidenedTaskSetMovePartial(KHE_WIDENED_TASK_SET wts,
  KHE_RESOURCE to_r, int first_index, int last_index);
}
This is like @C { KheWidenedTaskSetMove } except that it moves only
some tasks:  the core tasks with index numbers @C { first_index } to
@C { last_index } inclusive, and no wing tasks.  The implementation
is deficient in two respects:  duration changes are not calculated,
and all of @C { to_r }'s core tasks that do not need assignment are
unassigned, not just those running at the times of the part of the
core that is moved.
# For convenience, either or both of @C { first_index }
# and @C { last_index } may be negative, meaning `from the end'.
# For example, @C { -1 } indexes the last task.
@PP
Again, this move can be debugged:
@ID @C {
void KheWidenedTaskSetMovePartialDebug(KHE_WIDENED_TASK_SET wts,
  KHE_RESOURCE to_r, int first_index, int last_index,
  int verbosity, int indent, FILE *fp);
}
These functions may provide suitable values for @C { first_index }
and @C { last_index }:
@ID @C {
bool KheWidenedTaskSetFindInitial(KHE_WIDENED_TASK_SET wts,
  int wanted_durn, int *first_index, int *last_index);
bool KheWidenedTaskSetFindFinal(KHE_WIDENED_TASK_SET wts,
  int wanted_durn, int *first_index, int *last_index);
}
@C { KheWidenedTaskSetFindInitial } searches for an initial sequence of
@C { wts }'s core tasks whose total duration is @C { wanted_durn }.  It
sets @C { *first_index } and @C { *last_index } to the index of the
first and last task in this sequence (@C { *first_index } is always
0), and it returns @C { true } when the duration of the sequence is
equal to @C { wanted_durn }.  @C { KheWidenedTaskSetFindFinal } is the
same except that it searches for a final sequence (@C { *last_index }
is always the index of the last task).
@End @SubSection

@SubSection
    @Title { Widened task set swaps }
    @Tag { resource_structural.task_finding.swap }
@Begin
@LP
A @I { widened task set swap } moves the tasks of a widened
task set from its own resource @C { from_r } to some other resource
@C { to_r } (possibly @C { NULL }), and moves any tasks initially
assigned @C { to_r } on those days back to @C { from_r }.
@PP
To check that a swap is possible, call
@ID @C {
bool KheWidenedTaskSetSwapCheck(KHE_WIDENED_TASK_SET wts,
  KHE_RESOURCE to_r, bool exact, KHE_TIME_GROUP blocking_tg,
  KHE_MONITOR blocking_m, int *max_left_count, int *max_right_count);
}
This checks whether a swap is possible between the core and
@C { to_r }'s tasks running on the same days as the core.  If
so it returns @C { true } and sets @C { *max_left_count } and
@C { *max_right_count } to the number of initial positions in the
wings where the swap is also possible.
@PP
In the core, the following checks are made, and if any of them fail,
@C { false } is returned.  First, @C { from_r }'s tasks must be movable
to @C { to_r }, and @C { to_r }'s tasks must be movable to @C { from_r }.
Then, if @C { exact } is @C { true }, @C { to_r }'s tasks must be
running on exactly the same days as @C { from_r }'s.  Furthermore,
if @C { blocking_tg != NULL }, none of @C { to_r }'s tasks may be
running during @C { blocking_tg }, and if @C { blocking_m != NULL },
none of @C { to_r }'s tasks may be monitored by @C { blocking_m }.
And finally, if @C { to_r } has at least one task, the first of
@C { from_r }'s tasks must not be equivalent to the first of
@C { to_r }'s tasks.  The reasoning here is that if these tasks
are equivalent, the swap merely exchanges equivalent tasks, and
so achieves nothing.
@PP
In each element of each wing, the following checks are made, and the
first element at which they fail determines @C { *max_left_count } and
@C { *max_right_count }.  First, @C { from_r }'s tasks must be movable
to @C { to_r }, and @C { to_r }'s tasks must be movable to @C { from_r }.
Then, if @C { exact } is @C { true }, @C { to_r }'s tasks must be
running on exactly the same days as @C { from_r }'s.
@PP
To actually carry out a swap, call
@ID @C {
bool KheWidenedTaskSetSwap(KHE_WIDENED_TASK_SET wts,
  KHE_RESOURCE to_r, int left_count, int right_count,
  int *from_r_durn_change, int *to_r_durn_change);
}
This moves the core tasks plus the first @C { left_count } and
@C { right_count } left and right wing tasks to @C { to_r }, like
moving does, but it also moves @C { to_r }'s tasks running
on the same days from @C { to_r } to @C { from_r }.  If
@C { to_r } is @C { NULL } there will be no such tasks.  It does not
check anything again; it just does the swap.  If successful it sets
@C { *from_r_durn_change } and @C { *to_r_durn_change } in the same
way as move does.  If @C { KheWidenedTaskSetSwapCheck } has returned
@C { true }, then this must succeed for any @C { left_count } and
@C { right_count } such that @C { 0 <= left_count <= *max_left_count }
and @C { 0 <= right_count <= *max_right_count }.
@PP
The return values of @C { KheWidenedTaskSetMoveCheck } and
@C { KheWidenedTaskSetSwapCheck } may differ, and when they are
both @C { true }, @C { *max_left_count } and @C { *max_right_count }
may differ.  This is because tasks assigned @C { to_r } that need
assignment may prevent the move, but not the swap unless
@C { KheTaskMoveCheck } reports that they cannot move to @C { from_r }.
@PP
# There are several aspects of all this that the author finds
# quite confusing.  One is that
If @C { to_r } is effectively free during the core days, both the
move and the swap may succeed, and are the same except for how
they treat tasks that do not need assignment (the
move unassigns them, the swap moves them to @C { from_r }).
Also, when @C { from_r == NULL }, the code searches for
unassigned tasks, whereas when @C { to_r == NULL }, it doesn't.
Altogether it seems best to try the move first, and to only try
the swap if @C { KheWidenedTaskSetMoveCheck } returns @C { false }.
@PP
Is swapping only reasonable when both resources are non-@C { NULL }?
No.  When @C { to_r } is @C { NULL }, swapping equals moving except for
tasks that do not need assignment.  But when @C { from_r } is @C { NULL },
@C { to_r } must be non-@C { NULL }, and swapping replaces some
of @C { to_r }'s tasks with different tasks that were previously
unassigned.  While this is not striking, it is different, and cases
exist where it would do good.
@PP
A widened task set is not kept up to date as the solution changes.
If it gets out of date the only option is to delete it and make a
fresh one.  The four move and swap functions share the work of
finding the tasks assigned @C { to_r } that are running on the
same days as @C { wts }'s core and wings:  if a call on one of
these functions for a given @C { wts } has the same value for
@C { to_r } as the previous call, this shared work is not redone.
Care is needed here when the solution is changing.
@PP
Finally, as usual there is a function
@ID @C {
void KheWidenedTaskSetSwapDebug(KHE_WIDENED_TASK_SET wts,
  KHE_RESOURCE to_r, int left_count, int right_count,
  int verbosity, int indent, FILE *fp);
}
which can be used to debug the swap in a readable format.
@End @SubSection

@SubSection
    @Title { Widened task set optimal moves }
    @Tag { resource_structural.task_finding.optimal }
@Begin
@LP
Widened task set moves and swaps basically move the core tasks from
@C { from_r } to @C { to_r }.  Other moves are included only to
improve the result.  This suggests the idea of moving the core tasks from
@C { from_r } to @C { to_r }, and making whatever other changes work
best.  This is the @I { optimal move }.
@PP
To check whether an optimal move is possible, call
@ID {0.98 1.0} @Scale @C {
bool KheWidenedTaskSetOptimalMoveCheck(KHE_WIDENED_TASK_SET wts,
  KHE_RESOURCE to_r, KHE_TIME_GROUP blocking_tg, KHE_MONITOR blocking_m);
}
The parameters are as for swapping, with @C { exact } fixed to @C { false }.
To carry out the move, call
@ID {0.98 1.0} @Scale @C {
bool KheWidenedTaskSetOptimalMove(KHE_WIDENED_TASK_SET wts,
  KHE_RESOURCE to_r, int *from_r_durn_change, int *to_r_durn_change);
}
# It is probably best to set @C { exact } to @C { false } here.
The search space that the operation explores is this:
@BulletList

@LI {
For the core tasks, only one possibility is tried, although there
are two cases.  If @C { to_r } is effectively free, move the core
tasks from @C { from_r } to @C { to_r } while unassigning
@C { to_r }'s tasks on core days, like a move does.  If @C { to_r }
is not effectively free, move the core tasks from @C { from_r } to
@C { to_r } while moving @C { to_r }'s tasks on core days to
@C { from_r }, like a swap does.  This second possibility is only
tried if @C { blocking_tg } and @C { blocking_m } permit it.  More
precisely, @C { KheWidenedTaskSetOptimalMoveCheck } only returns
@C { true } if they permit it.
}

@LI {
For each wing task, two possibilities are tried:  do nothing, and
move the wing task from @C { from_r } to @C { to_r } while moving
@C { to_r }'s corresponding tasks to @C { from_r }, like a swap does.
}

@EndList
There are @M { 2 sup w } combinations of possibilities, where
@M { w } is the number of wing tasks that can be swapped, and
all of them are tried (there are no tree pruning rules); so
the wings must be small.
@PP
The result will be @C { true } whenever at least one of the combinations
of possibilities could be carried out.  In that case, the solution will
be changed to the result of applying the combination of possibilities
which produced the smallest solution cost.  This solution must be different
from the initial solution because it includes moving @C { from_r }'s
core tasks to @C { to_r }.  If several combinations produce the
minimum cost, one that produces the fewest defects is chosen.
@PP
The result will be @C { false } when none of the combinations could
be carried out.  This will usually be because one or more of @C { to_r }'s
core tasks is preassigned, and so can neither be moved nor unassigned.
@C { KheWidenedTaskSetOptimalMoveCheck } leaves the solution unchanged
in that case, but @C { KheWidenedTaskSetOptimalMove } may change it.
A mark may be used to return it to the initial state, in the usual way.
@PP
@C { KheWidenedTaskSetOptimalMove } may be called repeatedly on the
same widened task set.  If two or more consecutive calls have the
same value for @C { to_r }, they are assumed to be identical calls,
starting from the same solution state.  So instead of searching for
the optimal result, the second and later calls reinstall the result
found previously, without any searching.
@PP
Finally,
@ID @C {
void KheWidenedTaskSetOptimalMoveDebug(KHE_WIDENED_TASK_SET wts,
  KHE_RESOURCE to_r, int verbosity, int indent, FILE *fp);
}
can be called to debug the operation, as usual.
@End @SubSection

#@SubSection
#    @Title { Widened task set helper functions }
#    @Tag { resource_structural.task_finding.helper }
#@Begin
#@LP
#Also,
#are useful for debugging move, partial move, swap, and optimal move
#operations.  They assume that the preparation has been done for
#@C { to_r } (they do not do it themselves, since that would be wrong
#if the operation has already been carried out successfully), and print
#@C { from_r }, the tasks initially assigned @C { from_r } that will move,
#@C { to_r }, and the tasks initially assigned @C { to_r } that will move
#(or might move in the case of @C { KheWidenedTaskSetOptimalMoveDebug }).
#@End @SubSection

#@SubSection
#    @Title { Other task finding operations }
#    @Tag { resource_structural.task_finding.other }
#@Begin
#@PP
#Functions
#@ID @C {
#bool KheFindTaskRunInitial(KHE_TASK_FINDER tf, KHE_TASK_SET ts,
#  int wanted_durn, KHE_TASK_SET res_ts);
#bool KheFindTaskRunFinal(KHE_TASK_FINDER tf, KHE_TASK_SET ts,
#  int wanted_durn, KHE_TASK_SET res_ts);
#}
#search for an initial or final sequence of @C { ts }'s tasks (that
#is, a prefix or suffix) with total duration @C { wanted_durn }.  If
#they succeed they set @C { res_ts } to this sequence and return
#@C { true }.  Otherwise, they still set @C { res_ts } to an initial
#or final sequence, but its total duration is not @C { wanted_durn }.
#@End @SubSection

@EndSubSections
@End @Section

@Section
    @Title { Other resource-structural solvers }
    @Tag { resource_structural.task_tree.reorganization }
@Begin
@LP
This section documents some miscellaneous functions that reorganize
task trees, represented by taskings.  They assume that only unfixed
tasks lie in taskings, and they preserve this condition.
@PP
A good way to minimize split assignments is to prohibit them at
first but allow them later.  To change a tasking from the first
state to the second, call
@ID @C {
bool KheTaskingAllowSplitAssignments(KHE_TASKING tasking,
  bool unassigned_only);
}
It unfixes and unassigns all tasks assigned to the tasks of
@C { tasking } and adds them to @C { tasking }, returning
@C { true } if it changed anything.  If one of the original
unfixed tasks is assigned (to a cycle task), the tasks assigned
to it are assigned to that task, so that existing resource
assignments are not forgotten.  If @C { unassigned_only } is
@C { true }, only the unassigned tasks of @C { tasking } are
affected.  (This option is included for completeness, but it
is not recommended, since it leaves few choices open.)
@C { KheTaskingAllowSplitAssignments } preserves the resource
assignment invariant.
@PP
If any room or any teacher is better than none, then it will
be worth assigning any resource to tasks that remain unassigned
at the end of resource assignment.  Function
@ID { 0.98 1.0 } @Scale @C {
void KheTaskingEnlargeDomains(KHE_TASKING tasking, bool unassigned_only);
}
permits this by enlarging the domains of the tasks of @C { tasking }
and any tasks assigned to them (and so on recursively) to the full
set of resources of their resource types.  If @C { unassigned_only }
is true, only the unassigned tasks of @C { tasking } are affected.
The tasks are visited in postorder---that is, a task's domain is
enlarged only after the domains of the tasks assigned to it have
been enlarged---ensuring that the operation cannot fail.
Preassigned tasks are not enlarged.
@PP
This operation works, naturally, by deleting all task bounds from
the tasks it changes.  Any task bounds that become applicable to no
tasks as a result of this are deleted.
@End @Section

@Section
    @Title { Task groups }
    @Tag { resource_structural.task_groups }
@Begin
@LP
There are cases where two tasks are interchangeable as far as
resource assignment is concerned, because they demand the same
kinds of resources at the same times.  The @I { task group }
embodies KHE's approach to taking advantage of interchangeable tasks.
@PP
The @I { full task set } of an unfixed task is the task itself and all
the tasks assigned to it, directly or indirectly (all its followers),
omitting tasks that do not lie in a meet.  An unfixed task is
@I { time-complete } if each task of its full task set lies in a
meet that has been assigned a time.  Two time-complete tasks are
@I { time-equal } if their full task sets have equal cardinality,
and the two sets can be sorted so that corresponding tasks have
equal starting times, durations, and workloads.  Two unfixed tasks
are @I interchangeable if they are time-complete and time-equal,
and their domains are equal.  When two resources are assigned to
two interchangeable tasks, either resource can be assigned to
either task and it does not matter which is assigned to which.
(Exception:  if a limit resources constraint contains one of
the tasks but not the other, it does matter.)
@PP
A @I { task group } is a set of pairwise interchangeable tasks.
Task groups occur naturally when there are linked events, or when
time assignments are regular.  Virtually any resource assignment
algorithm can benefit from task groups.  Assigning to a task group
rather than to a task eliminates symmetries that can slow down
searching.  A given resource can only be assigned to one task of
a task group, since its tasks overlap in time, so task groups help
with estimating realistically how many resources are available,
and how much workload is open to a resource.
@PP
Objects of type @C { KHE_TASK_GROUP } hold one set of interchangeable
tasks, and objects of type @C { KHE_TASK_GROUPS } hold a set of task
groups.  Such a set can be created by calling
@ID @C {
KHE_TASK_GROUPS KheTaskGroupsMakeFromTasking(KHE_TASKING tasking);
}
It places every task of @C { tasking } into one task group.
The task groups are maximal.
@PP
To remove a set of task groups (but not their tasks), call
@ID @C {
void KheTaskGroupsDelete(KHE_TASK_GROUPS task_groups);
}
To access the task groups, call
@ID { 0.98 1.0 } @Scale @C {
int KheTaskGroupsTaskGroupCount(KHE_TASK_GROUPS task_groups);
KHE_TASK_GROUP KheTaskGroupsTaskGroup(KHE_TASK_GROUPS task_groups, int i);
}
To access the tasks of a task group, call
@ID @C {
int KheTaskGroupTaskCount(KHE_TASK_GROUP task_group);
KHE_TASK KheTaskGroupTask(KHE_TASK_GROUP task_group, int i);
}
There must be at least one task in a task group, otherwise the task
group would not have been made.  Task groups are not kept up to date
as the solution changes, so if time assignments are being altered
the affected tasks cannot be relied upon to remain interchangeable.
@PP
The tasks of a task group have the same total duration, total
workload, and domain, and these common values are returned by
@ID @C {
int KheTaskGroupTotalDuration(KHE_TASK_GROUP task_group);
float KheTaskGroupTotalWorkload(KHE_TASK_GROUP task_group);
KHE_RESOURCE_GROUP KheTaskGroupDomain(KHE_TASK_GROUP task_group);
}
@C { KheTaskGroupTotalDuration } is the value of
@C { KheTaskTotalDuration } shared by the tasks, not the sum of
their durations; and similarly for @C { KheTaskGroupTotalWorkload }.
@PP
For the convenience of algorithms that use task groups, function
@ID @C {
int KheTaskGroupDecreasingDurationCmp(KHE_TASK_GROUP tg1,
  KHE_TASK_GROUP tg2);
}
is a comparison function that may be used to sort task groups
by decreasing duration.
@PP
Because the tasks of a task group are interchangeable, it does not
matter which of them is assigned when assigning resources to them.
This makes the following functions possible:
@ID @C {
int KheTaskGroupUnassignedTaskCount(KHE_TASK_GROUP task_group);
bool KheTaskGroupAssignCheck(KHE_TASK_GROUP task_group, KHE_RESOURCE r);
bool KheTaskGroupAssign(KHE_TASK_GROUP task_group, KHE_RESOURCE r);
void KheTaskGroupUnAssign(KHE_TASK_GROUP task_group, KHE_RESOURCE r);
}
@C { KheTaskGroupUnassignedTaskCount } returns the number of
unassigned tasks in @C { task_group }; @C { KheTaskGroupAssignCheck }
checks whether @C { r } can be assigned to a task of @C { task_group }
(by finding the first unassigned task and checking there);
@C { KheTaskGroupAssign } is the same, only it actually makes
the assignment, using @C { KheTaskAssign }, if it can; and
@C { KheTaskGroupUnAssign } finds a task of @C { task_group }
currently assigned @C { r }, and unassigns that task.
@PP
The tasks of a task group may have different constraints, in which
case assigning one may change the solution cost differently from
assigning another.  This is handled heuristically as follows.
The first time @C { KheTaskGroupAssign } returns @C { true }, it
tries assigning @C { r } to each task of the task group, notes
the solution cost after each, and sorts the tasks into increasing
order of this cost.  Then it and all later calls assign the first
unassigned task in this order.
@PP
The usual debug functions are available:
@ID { 0.97 1.0 } @Scale @C {
void KheTaskGroupDebug(KHE_TASK_GROUP task_group, int verbosity,
  int indent, FILE *fp);
void KheTaskGroupsDebug(KHE_TASK_GROUPS task_groups, int verbosity,
  int indent, FILE *fp);
}
print @C { task_group } and @C { task_groups } onto @C { fp }
with the given verbosity and indent.
@End @Section

@EndSections
@End @Chapter
