@Appendix
    @Title { Dynamic Programming Resource Reassignment: Implementation }
    @Tag { dynamic_impl }
@Begin
@LP
@I { This Appendix is rather out of date.  It should be up to date
in the next release of KHE. }
This Appendix presents the implementation of the dynamic programming
algorithm for resource assignment.  It is a companion to
Appendix {@NumberOf dynamic_theory}, which describes the
algorithm and the algebra underlying its cost calculations.
@BeginSubAppendices

@SubAppendix
    @Title { Introducing the implementation }
    @Tag { dynamic_impl.impl }
@Begin
@LP
The implementation can be found in file @C { khe_sr_dynamic_resource.c }.
This file is over 26,000 lines long, making it easily the largest of
KHE's solvers.  The code is presented here in the order that it appears
in the file, with some exceptions.
@PP
The implementation defines over sixty types, not counting array types.
We group them into ten `major categories'.  Here they are, with
examples of their types:
@CD @Tbl
    aformat { @Cell ml { 0i } A | @Cell { B } | @Cell mr { 0i } C }
    mv { 0.5vx }
{
@Rowa
    A { Category }
    B { KHE type }
    C { Dynamic resource solver (DRS) type }
    rb { yes }
@Rowa
    A { Times }
    B { @C { KHE_TIME_GROUP } (common frame) }
    C { @C { KHE_DRS_DAY } }
@Rowa
    A { Resources }
    B { @C { KHE_RESOURCE } }
    C { @C { KHE_DRS_RESOURCE } }
@Rowa
    A { Events }
    B { @C { KHE_TASK } }
    C { @C { KHE_DRS_TASK } }
@Rowa
    A { Signatures }
    B { @C { - } }
    C { @C { KHE_DRS_SIGNATURE } }
@Rowa
    A { Constraints }
    B { @C { KHE_CONSTRAINT } }
    C { @C { KHE_DRS_CONSTRAINT } }
@Rowa
    A { Expressions }
    B { @C { KHE_MONITOR } }
    C { @C { KHE_DRS_EXPR } }
@Rowa
    A { Solutions }
    B { @C { KHE_SOLN } }
    C { @C { KHE_DRS_SOLN } }
@Rowa
    A { Expansion }
    B { @C { - } }
    C { @C { KHE_DRS_EXPANDER } }
@Rowa
    A { Sets of solutions }
    B { @C { KHE_SOLN_GROUP } }
    C { @C { KHE_DRS_SOLN_SET } }
@Rowa
    A { Solvers }
    B { @C { - } }
    C { @C { KHE_DYNAMIC_RESOURCE_SOLVER } }
    rb { yes }
}
Seven of these categories resemble categories found in KHE:
times, resources, events, constraints, expressions, solutions, and
sets of solutions.  The other three have no KHE equivalents.
@PP
The solver uses two kinds of trees:  search trees, whose nodes
represent partial solutions and have type @C { KHE_DRS_SOLN }, and
expression trees, representing constraints, whose nodes have type
@C { KHE_DRS_EXPR }.  This makes the term `node' ambiguous, so it
will not be used.  Instead, search tree nodes will be called solutions,
and expression tree nodes will be called expressions.
@PP
@C { KheDynamicResourceSolverMake(soln, rt, options) } creates data
structures for all of @C { soln } relevant to resource type @C { rt }.
When there are many solves, it saves time to create as many
objects as possible just once like this, at the start.  Objects
that are created during individual solves come from free lists,
recycled from previous solves.
@PP
When @C { KheDynamicResourceSolverMake } returns, all its objects are
in the @I closed state, meaning that they are not part of any solve.
Closed objects contain values that reflect the initial solution.  At
the start of each solve, a process called @I { opening } occurs,
which identifies the DRS objects that are part of that solve.
Opening also unassigns any KHE tasks affected by the solve that
happen to be assigned initially.  Its running time depends on
the number of objects opened, not on the total number of objects.
@PP
At the end of each solve, an opposite process called @I { closing }
occurs.  It returns the open objects to the closed state, with
values that reflect the new best solution found by the solve, if
there is one, or the initial solution if there isn't.  Closing
also performs KHE task assignments to change the KHE solution
into the new best solution, or to return it to the initial solution.
@PP
In between opening and closing we build the search tree, a process
we call @I { searching }.  So the implementation has four main
operations:  @I construction of a solver object and its many
associated objects (a slow but easy job which needs little
documentation); opening; searching; and closing.  The last
three operations, carried out in sequence, make one @I { solve }.
@PP
Another key operation, part of searching, is @I { expansion }.  This
takes one @M { d sub k }-solution @M { S } as its main argument and
@I { expands } it:  in all possible ways, it adds to @M { S } one
assignment for each open resource on day @M { d sub {k+1} }, and
stores the resulting @M { d sub {k+1} }-solutions within the day
@M { d sub {k+1} } table of solutions, after checking for dominance
relations.  Searching is in fact just a sequence of expansions,
starting with the unique @M { d sub 0 }-solution, then proceeding to
the @M { d sub 1 }-solutions, then to the @M { d sub 2 }-solutions,
and so on.
@PP
The code is organized hierarchically.  At the top level are the ten
@I { major categories } given above, in the order shown above.  For
each of these, the second level contains one @I submodule for each
of the types of that major category.  Within each submodule are the
operations on its type, typically organized into construction,
simple queries, opening, closing, and other (including debugging),
in that order.  Helper functions appear wherever it seems best.
# Closing follows opening, even though during execution
# they are separated by searching, because the two
# operations are inverses and it is useful to compare them.
@PP
Searching is mostly about expansion.  The expansion algorithm is
lengthy and distributed over many types, but is best understood
as a whole.  So its code appears in special submodules distributed
through the implementation, but documented in one place here
(Appendix {@NumberOf dynamic_impl.expansion}).
@End @SubAppendix

@SubAppendix
    @Title { Exactly what gets opened }
    @Tag { dynamic_impl.open }
@Begin
@LP
Before diving into the code, we pause to explain exactly what gets
opened.  This question has puzzled the author, so here we set out
a precise, complete answer.
@PP
We view that part of the initial solution that gets opened as a
planning timetable, in which each column represents a selected day,
and each row represents a selected resource:
@CD @Tbl
  r { yes }
  aformat {@Cell A | @Cell B | @Cell C | @Cell D | @Cell E | @Cell F | @Cell G}
{
@Rowa
   A { }
   B { Day 5 }
   C { Day 6 }
   D { Day 12 }
   E { Day 13 }
   F { Day 19 }
   G { Day 20 }
   rb { yes }
@Rowa
   A { Resource 3 }
@Rowa
   A { Resource 4 }
@Rowa
   A { Resource 7 }
}
This shows six selected days (presumably three weekends) and three
selected resources.  In any solution, for each selected resource
@M { r } and each selected day @M { d }, the table cell
@M { (r, d) } either contains a single proper root task, one
that @M { r } is assigned to and that (counting all the tasks
assigned to it directly or indirectly) is running on day @M { d },
or else it is empty, indicating that @M { r } is free on @M { d }.
This is the view of the problem taken by the solver, which
proceeds from left to right across the table, filling in
the cells one day at a time.
@PP
Everything that gets opened lies within this table, but the
converse is not quite true:  most things within this table get
opened, with the following exceptions.  Let @M { r } be
any selected resource, and let @M { d } be any selected day.
The assignments referred to are in the initial solution, the
solution passed in to the solver.
@NumberedList

@LI {
If @M { r } is assigned to some proper root task on @M { d }, and
the busy day range of that task (counting all the tasks assigned to
it directly or indirectly) includes one or more unselected days,
then the cell @M { (r, d) } is not opened, that is, the solver will
leave that assignment untouched.
}

@LI {
If @M { r } is preassigned to some proper root task on @M { d }, and
also assigned to that task, then even if that assignment can be
removed, it won't be removed, and again @M { (r, d) } is not opened.
}

@LI {
If @M { r } is assigned to some proper root task on @M { d }, and that
assignment cannot be removed (because @C { KheTaskUnAssign } returns
@C { false }), then @M { (r, d) } is not opened.
}

@EndList
In the unlikely case that @M { r } is preassigned to some task on
@M { d } but not assigned to it, cell @M { (r, d) } is opened in the
usual way, but the solver will try only one option for assigning it,
namely the preassigned task.  It will not try leaving the cell unassigned.
@PP
The data structure representing solutions contains one element for
each cell @M { (r, d) }.  A cell that is not opened is given a special
@C { CLOSED } value, saying that the solver should not make and has not
made an assignment to this cell.  (The other special value, @C { NULL },
means that the solver has decided that @M { r } should be free on
@M { d }).  After opening, the table might be:
@CD @Tbl
  r { yes }
  aformat {@Cell A | @Cell B | @Cell C | @Cell D | @Cell E | @Cell F | @Cell G}
{
@Rowa
   A { }
   B { Day 5 }
   C { Day 6 }
   D { Day 12 }
   E { Day 13 }
   F { Day 19 }
   G { Day 20 }
   rb { yes }
@Rowa
   A { Resource 3 }
   B { @C { CLOSED } }
@Rowa
   A { Resource 4 }
   G { @C { CLOSED } }
@Rowa
   A { Resource 7 }
   D { @C { CLOSED } }
   E { @C { CLOSED } }
}
A @C { CLOSED } entry may appear anywhere.  It represents a point
where the solve may not change the initial state.  Conceptually, the
entire solution outside this table is filled with @C { CLOSED } entries.
@PP
An expression whose value depends directly on the timetable of some
resource always depends on what that resource is doing on one specific
day.  It is open when the cell representing that resource on that day
is not @C { CLOSED }.
@PP
An expression whose value depends directly on whether some proper root
task is assigned or not, and possibly on what it is assigned to, is
open when that task is open.  A proper root task is open when its busy
day range lies entirely within the set of selected days, its domain has
a non-empty intersection with the set of selected resources, and it is
either unassigned or else neither of the two impediments to removing
its assignment (points (2) and (3) above) apply.
@PP
An expression whose value depends directly on the values of its children
is open when at least one of its children is open.  It may have some
open and some closed children.
@End @SubAppendix

@SubAppendix
    @Title { Times }
    @Tag { dynamic_impl.days }
@Begin
@LP
This section describes the two DRS types related to times.  The first is
@C { KHE_INTERVAL }, defined in file @C { khe_solvers.h } to represent
an integer interval (Section {@NumberOf general_solvers.intervals}).
When used here it always represents an interval of days.  The integers
are indexes into the common frame, or into an array of open days.
@PP
The second time-related type is @C { KHE_DRS_DAY }.  It represents
one day, that is, one time group of the common frame:
@IndentedList

@LI @C {
typedef struct khe_drs_day_rec *KHE_DRS_DAY;
typedef HA_ARRAY(KHE_DRS_DAY) ARRAY_KHE_DRS_DAY;
}

@LI @C {
struct khe_drs_day_rec {
  int				frame_index;
  int				open_day_index;
  KHE_TIME_GROUP		time_group;
  ARRAY_KHE_DRS_SHIFT		shifts;
  KHE_DRS_SIGNER_SET		signer_set;
  KHE_DRS_SOLN_SET		soln_set;
  KHE_DRS_SOLN_LIST		soln_list;
  int				soln_made_count;
  int				solve_expand_count;
};
}

@EndList
The @C { frame_index } field is the day's time group's index in the
common frame.  It is a fixed value, set when the day is created during
@C { KheDynamicResourceSolverMake }.  The @C { open_day_index } field
is the day's index in the list of open days when it is open, and
@C { -1 } when it is closed.  So it is fixed during any one solve,
but may vary from one solve to the next.  The @C { time_group } field
holds the time group defining the day, taken from the common frame.
@PP
The @C { shifts } field holds a set of @I { shifts }.  Each shift
contains a set of similar mtasks; each mtask contains a set of
similar tasks.  The first time of every task in every mtask of
every shift lies in this day's time group.  Full details will be
given when we come to these other types.
@PP
The @C { signer_set } field holds a set of @I { signers }, which
are rather artificial objects representing templates for how to
construct signatures and perform dominance testing for the
@M { d sub k }-solutions for which @M { d sub k } is this
day.  Signatures and signers will be explained later
(Section {@NumberOf dynamic_impl.sig}).  A signer set object is always
present, but its value is set up afresh each time the day is opened.
@PP
The @C { soln_set } field is only defined when the day is open.  It
is initialized to empty at the start of each solve, but comes to
hold the set of all undominated solutions within which resources are
assigned tasks up to and including this day.  This field was called
@M { P sub k } in Appendix {@NumberOf dynamic_theory.overview}.
@PP
The @C { soln_list } field holds a simple list of solutions.  It
is created by traversing @C { soln_set } after it is completed,
and adding each solution found there to @C { soln_list }.  Once the
solutions are in the list, they are sorted by increasing cost, and
then an optional daily limit on the number of solutions may be enforced,
by removing solutions from the end until the limit is not exceeded.
@PP
The @C { soln_made_count } field holds the number of solutions
created during the current solve that end on this day (not the
number of undominated solutions).  The @C { solve_expand_count }
field is used during solving and will be explained later.
# and @C { solve_prune_cost } fields are
@PP
The operations on days are quite simple.  Creation is easy.
Opening sets @C { open_day_index } and clears the fields defined when
the day is open.  They are set properly later, when opening expressions,
as explained below.  Closing reverses opening, and also frees
the solutions stored in the day.  Searching adds solutions to the
@C { soln_set } field but leaves the day object itself untouched.
@PP
There are also two operations, @C { KheDrsDayExpandBegin } and
@C { KheDrsDayExpandEnd }, which are part of expansion.  Following
our policy, these have been placed into a separate submodule and
documented elsewhere (Appendix {@NumberOf dynamic_impl.expansion}).
@End @SubAppendix

@SubAppendix
    @Title { Resources }
    @Tag { dynamic_impl.resources }
@Begin
@LP
This section describes the DRS types related to resources.
@BeginSubSubAppendices

@SubSubAppendix
    @Title { The resource type }
    @Tag { dynamic_impl.resources.resource }
@Begin
@LP
Type @C { KHE_DRS_RESOURCE } represents one resource.
Here is its type definition:
@ID @C {
typedef struct khe_drs_resource_rec *KHE_DRS_RESOURCE;
typedef HA_ARRAY(KHE_DRS_RESOURCE) ARRAY_KHE_DRS_RESOURCE;

struct khe_drs_resource_rec {
  KHE_RESOURCE			resource;
  int				open_resource_index;
  ARRAY_KHE_DRS_RESOURCE_ON_DAY	days;
  KHE_DRS_RESOURCE_EXPAND_ROLE	expand_role;
  ARRAY_KHE_DRS_SIGNATURE	expand_signatures;
  ARRAY_KHE_DRS_MTASK_SOLN	expand_mtask_solns;
  KHE_DRS_MTASK_SOLN		expand_free_mtask_soln;
  KHE_DRS_DIM2_TABLE		expand_dom_test_cache;
};
}
The last five fields are used by expansion and will be explained
later (Appendix {@NumberOf dynamic_impl.expansion}).  The other
fields hold the corresponding KHE resource, the resource's index
in the array of open resources when open (or @C { -1 } when closed),
and an array of @C { KHE_DRS_RESOURCE_ON_DAY } objects, one for each
day of the cycle, recording what the resource is doing on that day
(Appendix {@NumberOf dynamic_impl.resources.resource_on_day}).
@PP
Resource and resource on day objects are easily built during the
initialization of the solver.  The most complex resource operation
is the one for opening a resource on the selected days:
@ID {0.95 1.0} @Scale @C {
void KheDrsResourceOpen(KHE_DRS_RESOURCE dr, int open_resource_index,
  KHE_DRS_PACKED_SOLN init_soln, KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  KHE_DRS_DAY_RANGE ddr;  int i, j, open_day_index;
  KHE_DRS_RESOURCE_ON_DAY drd;
  dr->open_resource_index = open_resource_index;
  open_day_index = 0;
  HaArrayForEach(drs->selected_day_ranges, ddr, i)
    for( j = ddr.first;  j <= ddr.last;  j++ )
    {
      /* unassign any task assigned on drd, if it lies entirely in ddr */
      drd = HaArray(dr->days, j);
      KheDrsResourceOnDayOpen(drd, open_resource_index, open_day_index,
	ddr, init_soln, drs);

      /* increase open_day_index */
      open_day_index++;
    }
}
}
The first step is to set @C { dr->open_resource_index }.  After that,
the two nested loops set @C { j } to the index in the cycle of each
selected day, so each iteration of the inner loop opens one of
@C { dr }'s resource on day objects.
@PP
The matching @C { KheDrsResourceClose } operation resets
@C { dr->open_resource_index } to @C { -1 } and calls
@C { KheDrsResourceOnDayClose } repeatedly to clear the signer
in each resource on day object.  It does not make any task
assignments, because @C { KheDrsTaskAssign } below does that,
including setting the @C { closed_dtd } fields in the affected
resource on day objects.
@PP
After the @C { KHE_DRS_RESOURCE } submodule there is another
submodule holding the part of the expansion operation that concerns
resources.  This submodule, and the @C { expand_ }-prefixed fields
that we passed over briefly earlier, are presented
in Appendix {@NumberOf dynamic_impl.expansion}.
@End @SubSubAppendix

@SubSubAppendix
    @Title { The resource on day type }
    @Tag { dynamic_impl.resources.resource_on_day }
@Begin
@LP
Type @C { KHE_DRS_RESOURCE_ON_DAY } represents what one resource
is doing on one day.  Here is its type definition:
@IndentedList

@LI @C {
typedef struct khe_drs_resource_on_day_rec *KHE_DRS_RESOURCE_ON_DAY;
typedef HA_ARRAY(KHE_DRS_RESOURCE_ON_DAY) ARRAY_KHE_DRS_RESOURCE_ON_DAY;
}

@LI @C {
struct khe_drs_resource_on_day_rec {
  KHE_DRS_RESOURCE		encl_dr;
  KHE_DRS_DAY			day;
  bool				open;
  KHE_DRS_TASK_ON_DAY		closed_dtd;
  KHE_DRS_TASK_ON_DAY		preasst_dtd;
  ARRAY_KHE_DRS_EXPR		external_today;
  KHE_DRS_SIGNER		signer;
};
}

@EndList
Here @C { encl_dr } and @C { day } hold the DRS resource and day that
this object is for; they are fixed.  The @C { open } field is @C { true }
when the resource is open on this day (when there is a solve underway,
and this resource is open to reassignment on this day by the solve).
@PP
Suppose @C { dr } is an object of type @C { KHE_DRS_RESOURCE }, and
suppose @C { drd } is one of its @C { KHE_DRS_RESOURCE_ON_DAY } objects.
When @C { open } is @C { false }, @C { drd->closed_dtd } says what
@C { dr } is doing on that day.  It will be @C { NULL } if @C { dr }
is free on that day.  When @C { open } is @C { true },
@C { drd->closed_dtd } is unused and has value @C { NULL }.
# There would be no problem adding a Boolean @C { open } field to
# make it quite clear at every moment whether @C { drd } is open
# or closed, but it turns out that that is not needed, so it has
# been omitted.
@PP
The @C { preasst_dtd } field is always defined.  If @C { dr } is
preassigned to some task on this day, its value is the task on
day that @C { dr } is preassigned to.  Otherwise its value is
@C { NULL }.
@PP
The @C { external_today } field is a fixed array of expressions
representing parts of constraints (always resource constraints)
that are affected by what @C { drd } is doing on this day.  When
what @C { drd } is doing changes, these expressions need to be
informed.  They are @I { external } expressions:  they are
leaves in their expression trees.
@PP
Similarly to day objects, the @C { signer } field contains a template
for the signatures of solutions that end at this resource on day.
These are solutions that are complete up to the day before this
day, plus they hold one assignment, for this resource on this day.
@PP
Here is the operation for opening a resource on day object:
@ID {0.95 1.0} @Scale @C {
void KheDrsResourceOnDayOpen(KHE_DRS_RESOURCE_ON_DAY drd,
  int open_resource_index, int open_day_index, KHE_INTERVAL ddr,
  KHE_DRS_PACKED_SOLN init_soln, KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  KHE_DRS_TASK dt;  KHE_DRS_EXPR e;  int i;  KHE_DRS_TASK_ON_DAY dtd;
  KHE_INTERVAL open_day_range;  KHE_RESOURCE r;

  /* unassign any affected task; possibly add the assts to init_soln */
  dtd = drd->closed_dtd;
  if( dtd != NULL )
  {
    dt = dtd->encl_dt;
    if( KheIntervalSubset(dt->encl_dmt->day_range, ddr) &&
        !KheTaskIsPreassigned(dt->task, &r) &&
        KheDrsTaskUnAssign(dt, true) )
    {
      /* dt has been successfully unassigned */
      drs->solve_start_cost -= dt->asst_cost;
      if( init_soln != NULL )
	KheDrsPackedSolnSetTaskOnDay(init_soln, open_day_index,
	  open_resource_index, dtd);
    }
  }

  /* set make_correlated field of signer */
  if( drs->solve_correlated_exprs )
    KheDrsSignerMakeCorrelated(drd->signer);

  if( drd->closed_dtd == NULL )
  {
    /* open drd and gather its expressions for opening */
    drd->open = true;
    open_day_range = KheIntervalMake(open_day_index, open_day_index);
    HaArrayForEach(drd->external_today, e, i)
    {
      e->open_children_by_day.index_range = open_day_range;
      KheDrsExprGatherForOpening(e, drs);
    }
  }
}
}
The first paragraph unassigns any task assigned to @C { dr } on
@C { drd }'s day, unless it is part of a multi-day task that
extends beyond the current day range, or is preassigned, or
fails to unassign for some other reason.  A successful unassignment
includes adding this assignment to packed solution @C { init_soln }
(see Appendix {@NumberOf dynamic_impl.packed}) so that it can be
redone later if required.
@PP
The second step informs @C { drd->signer }, when appropriate, that
it is to find correlations among expressions.  This is a subject
for later (Appendix {@NumberOf dynamic_impl.sig}).
@PP
The third step gathers the expressions dependent on @C { drd } into
a list.  These expressions need to be opened, but that is delayed
until that list is traversed later.
@PP
A potentially confusing point is that the calls to
@C { KheDrsTaskUnAssign } unassign KHE tasks and so change the cost
of the solution.  Does this cause problems for the cost accounting?
No, because the original solution cost is saved before these
unassignments are made, and the costs stored in expressions are not
affected by them:  when those expressions are opened later, they
subtract their costs from the total, and those costs do not take
these unassignments into account.
@PP
The matching @C { KheDrsResourceOnDayClose } operation just
clears the signer in the resource on day object and sets the
@C { open } flag to @C { false }.  It does not make any task
assignments, because @C { KheDrsTaskAssign } below does that,
including setting the @C { closed_dtd } fields in the affected
resource on day objects.
@End @SubSubAppendix

@SubSubAppendix
    @Title { Resource sets }
    @Tag { dynamic_impl.resources.sets }
@Begin
@LP
A @I { resource set } is a set (actually a sequence) of
resource objects:
@ID @C {
typedef struct khe_drs_resource_set_rec *KHE_DRS_RESOURCE_SET;
typedef HA_ARRAY(KHE_DRS_RESOURCE_SET) ARRAY_KHE_DRS_RESOURCE_SET;

struct khe_drs_resource_set_rec {
  ARRAY_KHE_DRS_RESOURCE	resources;
};
}
There are operations for creating a new set, adding one resource on
day to a set, iterating over the elements of a set
(macro @C { KheDrsResourceSetForEach }), and so on.
@End @SubSubAppendix

@EndSubSubAppendices
@End @SubAppendix

@SubAppendix
    @Title { Events }
    @Tag { dynamic_impl.events }
@Begin
@LP
This section presents the types related to events.  All events have
fixed times in this application, so actually it is the events' tasks that
matter.  The main DRS types are @C { KHE_DRS_TASK } representing one task,
@C { KHE_DRS_MTASK } representing one mtask, and @C { KHE_DRS_SHIFT }
representing one set of similar mtasks.
@BeginSubSubAppendices

@SubSubAppendix
    @Title { Tasks }
    @Tag { dynamic_impl.tasks }
@Begin
@LP
For each proper root task of the required resource type,
there is a corresponding DRS task:
@ID @C {
typedef struct khe_drs_task_rec *KHE_DRS_TASK;
typedef HA_ARRAY(KHE_DRS_TASK) ARRAY_KHE_DRS_TASK;

struct khe_drs_task_rec {
  KHE_DRS_MTASK			encl_dmt;
  int				index_in_encl_dmt;
  KHE_DRS_TASK_EXPAND_ROLE	expand_role;
  bool				open;
  KHE_TASK			task;
  KHE_DRS_RESOURCE		closed_dr;
  KHE_COST			non_asst_cost;
  KHE_COST			asst_cost;
  ARRAY_KHE_DRS_TASK_ON_DAY	days;
};
}
Each DRS task lies in one DRS mtask
(Appendix {@NumberOf dynamic_impl.mtasks}); @C { encl_dmt } is that
mtask, and @C { index_in_encl_dmt } is the task's index in that
mtask.  The @C { expand_role } field is used by @C { KheDrsSolnExpand }
and is explained later (Appendix {@NumberOf dynamic_impl.expansion}).
The @C { open } field is @C { true } when this task is open (when there
is a current solve and this task may be assigned or reassigned by it).
@PP
The @C { task } field is the corresponding KHE proper root task.
When a DRS task is open, its @C { closed_dr } field is @C { NULL }
and its KHE task is unassigned.  When a DRS task is closed, its
@C { closed_dr } field is set to the DRS resource corresponding to
the KHE resource assigned to the KHE task, or to @C { NULL } when
the KHE task is unassigned.
@PP
The @C { non_asst_cost } field is a constant lower bound on the cost of
not assigning @C { task }, and the @C { asst_cost } field is a constant
lower bound on the cost of assigning it.  These values come from
@C { KheMTaskTask } (Section {@NumberOf resource_structural.mtask_finding.ops}).
@PP
The @C { days } field holds one @C { KHE_DRS_TASK_ON_DAY }
object for each day the task is running.  These are
sorted chronologically, and the implementation uses this in
one place to check whether a given task on day object is for
the first day of its task.  Here is @C { KHE_DRS_TASK_ON_DAY }:
@IndentedList

@LI @C {
typedef struct khe_drs_task_on_day_rec *KHE_DRS_TASK_ON_DAY;
typedef HA_ARRAY(KHE_DRS_TASK_ON_DAY) ARRAY_KHE_DRS_TASK_ON_DAY;
}

@LI @C {
struct khe_drs_task_on_day_rec {
  KHE_DRS_TASK			encl_dt;
  KHE_DRS_DAY			day;
  KHE_TASK			task;
  KHE_TIME			time;
  KHE_DRS_RESOURCE_ON_DAY	closed_drd;
  ARRAY_KHE_DRS_EXPR		external_today;
};
}

@EndList
Here @C { encl_dt } is the enclosing DRS task, @C { day } is the day
concerned, and @C { task } is the KHE task running on this day:
either the original KHE proper root task, or some other KHE task
assigned, directly or indirectly, to that task.  Also, @C { time }
is the time within @C { day } that @C { task } is running.  The
@C { task } and @C { time } fields are always well-defined and
non-@C { NULL }, because, as specified in
Section {@NumberOf resource_solvers.dynamic}, a multi-day task
must run on every day of its busy day range, and it cannot run
twice on one day.  These conditions always hold, because if they
don't, a solver is not created.
# there is one task on day object for each of these days.  
# The solver considers a multi-day task to be running on all
# days from its first busy day to its last (inclusive).  In the
# unlikely case that there is an intermediate day when the task
# is not running, there is still a task on day object for that day;
# its @C { task } and @C { time } fields are @C { NULL }.  A
# resource which becomes assigned to the first day of a multi-day
# task will eventually be assigned to all of that task's task on
# day objects, including not-running days.  Those days will be
# evaluated correctly (as free days) but will prevent the
# resource from being assigned to another task on those days, one
# of several reasons why the optimality guarantee is lost when
# there are multi-day tasks.
# @PP
# It is not possible for a task to be running twice on one day (see
# below for why).
@PP
The @C { closed_drd } field holds the resource on day object that this
task on day is assigned to when the task is closed.  It is non-@C { NULL }
exactly when the adjacent @C { task } field and the @C { closed_dr }
field of the enclosing DRS task are both non-@C { NULL }.
# It holds a DRS resource on day object, not a DRS resource object.
@PP
Finally, @C { external_today } holds a list of all external
expressions (expressions with no child expressions) whose
value depends on what this task is doing on this day.
This is similar to the @C { external_today } field in resource on day
objects, except that these leaves lie in expression trees
representing event resource constraints (assign resource, prefer
resources, and limit resources constraints) rather than in expression
trees representing resource constraints.  When the assignment of
the task represented here changes, these expressions need to be informed.
@PP
This function makes a closed assignment of a DRS resource to a DRS task:
@ID {0.95 1.0} @Scale @C {
bool KheDrsTaskAssign(KHE_DRS_TASK dt, KHE_DRS_RESOURCE dr, bool task)
{
  KHE_DRS_TASK_ON_DAY dtd;  int i;  KHE_DRS_RESOURCE_ON_DAY drd;
  HnAssert(dt->closed_dr == NULL, "KheDrsTaskAssign internal error 1");
  HnAssert(dr != NULL, "KheDrsTaskAssign internal error 2");
  if( task && !KheTaskAssignResource(dt->task, dr->resource) )
    HnAbort("KheDrsTaskAssign internal error 3 (cannot assign %s to %s)",
      KheDrsResourceId(dr), KheTaskId(dt->task));
  dt->closed_dr = dr;
  HaArrayForEach(dt->days, dtd, i)
  {
    drd = KheDrsResourceOnDay(dr, dtd->day);
    HnAssert(dtd->closed_drd == NULL, "KheDrsTaskAssign internal error 4");
    if( drd->closed_dtd != NULL )
      return false;
    dtd->closed_drd = drd;
    drd->closed_dtd = dtd;
  }
  return true;
}
}
@C { KheDrsTaskAssign } omits calling @C { KheTaskAssignResource }
(its @C { task } parameter is @C { false }) only when the
object is first built.  @C { KheDrsResourceOnDay } returns
the resource on day object representing what @C { dr } is doing on
@C { dtd->day }.  When @C { dtd->closed_drd } or @C { drd->closed_dtd }
changes, the expressions in their @C { external_today } arrays must
be informed.  This is done separately from @C { KheDrsTaskAssign }.
@PP
At the time a DRS task is first made, if the corresponding KHE task is
assigned a resource, then @C { KheDrsTaskAssign } is called to assign
the DRS task correspondingly, which includes setting the @C { closed_drd }
fields of the resource's resource on day objects, as shown above.  At
this point, @C { KheDrsTaskAssign } could discover that one of these
fields is already set, meaning that in the initial state, the resource
is assigned to two tasks on the same day.  The solver cannot handle
this, so if it occurs, @C { KheDrsTaskAssign } returns @C { false },
and @C { KheDynamicResourceSolverMake } takes this as a signal to
discard the solver object it was initializing and return @C { NULL }.
@PP
It will become clear (Appendix {@NumberOf dynamic_impl.mtasks}) that
only unassigned tasks are ever opened, so all that needs to be done
when opening a task is to set its @C { open } field to @C { true } and
to gather for opening all the expressions in the @C { external_today }
arrays of its task on day objects:
@ID @C {
void KheDrsTaskOpen(KHE_DRS_TASK dt, KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  KHE_DRS_TASK_ON_DAY dtd;  KHE_DRS_EXPR e;  int i, j, di;
  KHE_DRS_DAY_RANGE open_day_range;

  /* open dt */
  HnAssert(!dt->open, "KheDrsTaskOpen internal error 1");
  HnAssert(dt->closed_dr == NULL, "KheDrsTaskOpen internal error 2");
  dt->open = true;

  /* gather external expressions for opening */
  HaArrayForEach(dt->days, dtd, i)
  {
    di = dtd->day->open_day_index;
    open_day_range = KheDrsDayRangeMake(di, di);
    HaArrayForEach(dtd->external_today, e, j)
    {
      e->open_day_range = open_day_range;
      KheDrsExprGatherForOpening(e, drs);
    }
  }
}
}
Closing a DRS task sets the @C { open } field to @C { false }, and
may also assign a DRS resource:
@ID @C {
void KheDrsTaskClose(KHE_DRS_TASK dt, KHE_DRS_RESOURCE dr)
{
  if( dt->open )
  {
    dt->open = false;
    if( dr != NULL )
      KheDrsTaskAssign(dt, dr, true);
  }
}
}
@C { KheDrsTaskClose } may be called on the same task several times,
but does the work only once.
@End @SubSubAppendix

@SubSubAppendix
    @Title { Multi-tasks }
    @Tag { dynamic_impl.mtasks }
@Begin
@LP
The basic idea of multi-tasks, or mtasks as we prefer to call them,
is that an instance typically contains many equivalent tasks.  To be
equivalent, two tasks must run at the same times, but they must also
be subject to the same constraints, so that assigning a resource to
one task of an mtask is really the same as assigning it to another.
When trying alternative assignments we can save a lot of time by
recognizing this and avoiding alternatives which are formally
different but in reality equivalent.
@PP
The dynamic resource solver calls on a multi-task finder from
Section {@NumberOf resource_structural.mtask_finding} to partition
the set of all the proper root tasks of the required resource type
into mtasks.  Then for each of these @C { KHE_MTASK } objects it
makes one @C { KHE_DRS_MTASK } object, and for each KHE task in
each mtask it adds one DRS task to the DRS mtask.  It calls
@C { KheMTaskNoOverlap } on each mtask, and if any of the calls
return @C { false }, no solver is created.  So it is safe for the
solver to assume that none of its tasks run twice at the same time
or on the same day.
@PP
The type declarations for @C { KHE_DRS_MTASK } are:
@ID @C {
typedef struct khe_drs_mtask_rec *KHE_DRS_MTASK;
typedef HA_ARRAY(KHE_DRS_MTASK) ARRAY_KHE_DRS_MTASK;

struct khe_drs_mtask_rec {
  KHE_MTASK			orig_mtask;
  KHE_DRS_SHIFT			encl_shift;
  KHE_DRS_DAY_RANGE		day_range;
  ARRAY_KHE_DRS_TASK		all_tasks;
  ARRAY_KHE_DRS_TASK		unassigned_tasks;
  int				expand_must_assign_count;
  int				expand_prev_unfixed;
};
}
Here @C { orig_mtask } is the @C { KHE_MTASK } that this
DRS mtask is derived from, @C { encl_shift } is the shift
(see below) that this mtask lies within, @C { day_range } says
which days the tasks of this mtask are busy (they are all busy at
the same times, hence on the same days), @C { all_tasks } contains
the DRS tasks corresponding to the KHE tasks of @C { orig_mtask },
and @C { unassigned_tasks } contains those tasks from @C { all_tasks }
which are open during the current solve.  The last two fields,
@C { expand_must_assign_count } and @C { expand_prev_unfixed }, are
used by @C { KheDrsSolnExpand } and will be explained later
(Appendix {@NumberOf dynamic_impl.expansion}).
@PP
During solving, we want @C { unassigned_tasks } to contain the open
tasks of this mtask, that is, the tasks from @C { all_tasks } which
are available for the open resources to be assigned to.  By the time
that solving starts, these tasks will all be unassigned.  When we build
@C { unassigned_tasks } at the start of each solve, there are two issues.
@PP
First, two tasks lying in the same mtask may differ in the cost
incurred by assigning (or not assigning) them.  Those which are least
costly come first in the mtask, and should be chosen for assignment
before later tasks in the mtask.  This is explained fully in
Section {@NumberOf resource_structural.mtask_finding.similarity}.
So open tasks must appear within @C { unassigned_tasks } in
the same order that they appear in @C { all_tasks }.
@PP
Second, we want the running time of opening and closing to be
proportional to the number of objects opened, not the total number
of objects.  Accordingly, we cannot build @C { unassigned_tasks } by
traversing @C { all_tasks } when opening, since @C { all_tasks }
may contain many tasks which will not be opened because they
are assigned unselected resources.
@PP
So we proceed as follows.  When the DRS mtask is created,
@C { unassigned_tasks } is initialized to contain all unassigned DRS tasks
from @C { all_tasks }.  Whenever a DRS task from @C { all_tasks } is
unassigned, its @C { encl_dts } field is followed to its enclosing
DRS mtask and it is added to @C { unassigned_tasks }.  But when it
is assigned, it is not deleted from @C { unassigned_tasks }.  So at any
moment, @C { unassigned_tasks } must contain all the unassigned tasks
from @C { all_tasks }, but it may contain some assigned tasks as well.
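@PP
A minimal sketch of this lazy-deletion scheme may make it concrete.
The names here are hypothetical, not KHE's, and we assume each task is
pushed at most once between organizing passes: unassigning a task
pushes it onto the array in constant time, assigning leaves a stale
entry behind, and a single organizing pass before each solve prunes
the stale entries and restores the @C { all_tasks } order.

```c
#include <assert.h>
#include <stdlib.h>

#define MAX_TASKS 16

typedef struct {
  int index_in_encl;  /* fixed position in all_tasks */
  int assigned;       /* non-zero when a resource is assigned */
} Task;

typedef struct {
  Task *unassigned[MAX_TASKS];  /* may contain stale assigned entries */
  int count;
} MTask;

/* called when a task becomes unassigned; O(1) */
static void mtask_on_unassign(MTask *mt, Task *t)
{
  t->assigned = 0;
  mt->unassigned[mt->count++] = t;
}

/* assigning does NOT remove the task from the array; O(1) */
static void mtask_on_assign(Task *t)
{
  t->assigned = 1;
}

static int cmp_by_index(const void *a, const void *b)
{
  const Task *ta = *(const Task *const *)a;
  const Task *tb = *(const Task *const *)b;
  return ta->index_in_encl - tb->index_in_encl;
}

/* before each solve: drop stale entries, restore all_tasks order */
static void mtask_organize(MTask *mt)
{
  int i, n = 0;
  for( i = 0;  i < mt->count;  i++ )
    if( !mt->unassigned[i]->assigned )
      mt->unassigned[n++] = mt->unassigned[i];
  mt->count = n;
  qsort(mt->unassigned, (size_t) n, sizeof(Task *), cmp_by_index);
}
```

The organizing pass traverses the @C { unassigned_tasks } array only,
never the whole of @C { all_tasks }, which is the point of the scheme.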
@PP
Each DRS mtask is stored in the @C { mtasks } field of one shift object,
and that shift is stored in the day object for the first day on which
the mtask's tasks are busy.  If that day is one of the selected days for
solving, then as part of opening the day, each mtask of each of its
shifts is visited and potentially opened by a call to @C { KheDrsMTaskOpen }:
@ID @C {
bool KheDrsMTaskOpen(KHE_DRS_MTASK dmt, KHE_DRS_DAY_RANGE ddr,
  KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  KHE_DRS_TASK dt;  int i;  bool res;
  if( KheDrsDayRangeSubset(dmt->day_range, ddr) &&
      !KheResourceSetDisjointGroup(drs->selected_resource_set,
	KheMTaskDomain(dmt->orig_mtask)) )
  {
    /* dmt can open; organize and open unassigned_tasks */
    KheDrsMTaskOrganizeUnassignedTasks(dmt);
    HaArrayForEach(dmt->unassigned_tasks, dt, i)
      KheDrsTaskOpen(dt, drs);
    dmt->expand_must_assign_count = 0;
    dmt->expand_prev_unfixed = -1;
    res = true;
  }
  else
  {
    /* dmt can't open; set dmt->expand_prev_unfixed to make that clear */
    dmt->expand_prev_unfixed = HaArrayCount(dmt->unassigned_tasks) - 1;
    res = false;
  }
  return res;
}
}
This function is slightly mis-named:  it only opens @C { dmt } if
its tasks lie entirely within open day range @C { ddr } and their
shared domain is not disjoint from the set of open resources.
@PP
Opening an mtask begins by sorting @C { unassigned_tasks } so
that the genuinely unassigned tasks come first, in their order in
@C { all_tasks } (the @C { index_in_encl_dmt } field helps with
this), and deleting any assigned tasks from the end.
@C { KheDrsMTaskOrganizeUnassignedTasks } does these two steps.
After that, the unassigned tasks are opened.
This way, the issues identified above are handled correctly.  Mtasks
are opened after resources are opened, by which time every task of the
mtask that was assigned a selected resource has been unassigned, and so
lies in @C { unassigned_tasks }, justifying the statement made earlier
that only unassigned tasks are ever opened.
@PP
The mtask submodule is followed by another submodule containing
mtask code related to expansion.  As usual, we'll return to this
later (Appendix {@NumberOf dynamic_impl.expansion}).
@End @SubSubAppendix

@SubSubAppendix
    @Title { Shifts }
    @Tag { dynamic_impl.events.shifts }
@Begin
@LP
To the solver, a @I { shift } is a set of mtasks whose tasks
all have the same busy times and workloads.  Usually this will be
the tasks of one shift (defined informally), but not always; for
example, not when some of the tasks are grouped.  When a resource
@M { r } is assigned to any of the tasks of the mtasks of a given
shift, the effect on @M { r }'s resource constraints is the same.
@PP
Here is the type declaration:
@ID @C {
typedef struct khe_drs_shift_rec *KHE_DRS_SHIFT;
typedef HA_ARRAY(KHE_DRS_SHIFT) ARRAY_KHE_DRS_SHIFT;

struct khe_drs_shift_rec {
  KHE_DRS_DAY			encl_day;
  int				open_shift_index;
  int				expand_must_assign_count;
  int				expand_max_included_free_resource_count;
  ARRAY_KHE_DRS_MTASK		mtasks;
  ARRAY_KHE_DRS_MTASK		open_mtasks;
  ARRAY_KHE_DRS_SHIFT_PAIR	shift_pairs;
  KHE_DRS_SIGNER		signer;	
  KHE_DRS_SHIFT_SOLN_TRIE	soln_trie;
};
}
Field @C { encl_day } is the day containing this shift (the day
containing the first busy time of this shift's tasks).  Field
@C { open_shift_index } is @C { -1 } when the shift is not open
for solving, and has a unique non-negative value when the shift
is open.
@PP
Fields @C { expand_must_assign_count } and
@C { expand_max_included_free_resource_count } are defined during
the expansion of a day solution from the previous day and will be
explained later.  Field @C { mtasks } holds the mtasks that contain
the tasks of the shift, and @C { open_mtasks } holds the open ones
when the shift is open for solving.  Field @C { shift_pairs }
contains objects representing all pairs of shifts from the same
day whose first element is this shift.
@PP
Field @C { signer } holds a signer for evaluating signatures
and dominance testing for the event resource monitors that
monitor the tasks of this shift, and field @C { soln_trie } 
holds a set of shift solutions, that is, objects representing
the assignment of sets of resources @M { R } to tasks of this
shift.  These will be explained later.
@PP
The operations on shifts include @C { KheDrsShiftMake } for
making a new shift, initially holding one mtask, and
@C { KheDrsShiftAcceptsMTask } for deciding whether to add
a given mtask to a shift, because its busy times and
workloads are the same as those of the mtasks already
in the shift.  There is also @C { KheDrsShiftOpen }, which
opens a shift by assigning an open shift index to it and
opening its mtasks:
@ID @C {
void KheDrsShiftOpen(KHE_DRS_SHIFT ds, KHE_DRS_DAY_RANGE ddr,
  KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  KHE_DRS_MTASK dmt;  int i;
  ds->open_shift_index = HaArrayCount(drs->open_shifts);
  HaArrayAddLast(drs->open_shifts, ds);
  HaArrayForEach(ds->mtasks, dmt, i)
    if( KheDrsMTaskOpen(dmt, ddr, drs) )
      HaArrayAddLast(ds->open_mtasks, dmt);
}
}
and @C { KheDrsShiftClose }, which closes its mtasks and
clears its signer:
@ID @C {
void KheDrsShiftClose(KHE_DRS_SHIFT ds, KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  KHE_DRS_MTASK dmt;  int i;
  HaArrayForEach(ds->open_mtasks, dmt, i)
    KheDrsMTaskClose(dmt);
  HaArrayClear(ds->open_mtasks);
  ds->open_shift_index = -1;
  KheDrsSignerClear(ds->signer, drs);
}
}
After the @C { KHE_DRS_SHIFT } submodule there is another
submodule which implements that part of the expansion operation
concerned with shifts.  That submodule is presented in
Appendix {@NumberOf dynamic_impl.expansion}.
@End @SubSubAppendix

@SubSubAppendix
    @Title { Shift pairs }
    @Tag { dynamic_impl.events.shift_pairs }
@Begin
@LP
A @I { shift pair } is a pair of shifts:
@ID @C {
typedef struct khe_drs_shift_pair_rec *KHE_DRS_SHIFT_PAIR;
typedef HA_ARRAY(KHE_DRS_SHIFT_PAIR) ARRAY_KHE_DRS_SHIFT_PAIR;

struct khe_drs_shift_pair_rec {
  KHE_DRS_SHIFT			shift[2];
  KHE_DRS_SIGNER		signer;
};
}
When the solver is created, one shift pair object is created for
each pair of distinct shifts whose tasks' first times lie within
the same day.
@PP
The main purpose of this type is to store the signer, which is used
to control the signatures of shift pair solutions.  Apart from
@C { KheDrsShiftPairMake }, the only shift pair functions are
functions related to the signer.
@End @SubSubAppendix

@EndSubSubAppendices
@End @SubAppendix

@SubAppendix
    @Title { Signatures }
    @Tag { dynamic_impl.sig }
@Begin
@LP
This section presents the types concerned with signatures
and dominance testing.
# These culminate in type @C { KHE_DRS_DOM_TEST }, which
# represents a dominance test at one point along a signature.
@BeginSubSubAppendices

@SubSubAppendix
    @Title { Signatures }
    @Tag { dynamic_impl.sig.sig }
@Begin
@LP
The author is guilty of ambiguity in the use of the term
@I { signature }.  It can mean just an array of numbers
(each an @C { int } or a @C { float }) recording the current
states of some active monitors, but it can also mean that array
together with a solution cost.  This second meaning predominates
in this section.
@PP
The type declaration for signatures is
@ID @C {
typedef struct khe_drs_signature_rec *KHE_DRS_SIGNATURE;
typedef HA_ARRAY(KHE_DRS_SIGNATURE) ARRAY_KHE_DRS_SIGNATURE;

struct khe_drs_signature_rec {
  int			reference_count;
  int			asst_to_shift_index;
  KHE_COST		cost;
  ARRAY_KHE_DRS_VALUE	states;
};
}
@C { KHE_DRS_VALUE } is an untagged union of @C { int } and @C { float }:
@ID @C {
typedef union {
  int			i;
  float			f;
} KHE_DRS_VALUE;
}
States derived from limit workload monitors are the only ones to
use floating-point values.  The context determines which field is
currently in use.
@PP
Signatures have unpredictable lifetimes, but they are widely used and
it is important to recycle them.  Accordingly, a reference counting
system is used.  The @C { reference_count } field records the number
of references to this object from other heap-allocated objects.  Those
other objects are required to register the addition or deletion of a
reference to a signature, by calling
@IndentedList

@LI @C {
void KheDrsSignatureRefer(KHE_DRS_SIGNATURE sig)
{
  sig->reference_count++;
}
}

@LI @C {
void KheDrsSignatureUnRefer(KHE_DRS_SIGNATURE sig,
  KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  sig->reference_count--;
  if( sig->reference_count == 0 )
    HaArrayAddLast(drs->signature_free_list, sig);
}
}

@EndList
@C { KheDrsSignatureUnRefer } adds @C { sig } to a free list
in @C { drs } when its reference count drops to 0.
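@PP
The recycling scheme can be sketched in a few lines.  The following is
a hypothetical miniature, not KHE's code: an object whose reference
count drops to zero goes onto a free list, and a later make operation
reuses a recycled object instead of allocating a fresh one.

```c
#include <assert.h>
#include <stdlib.h>

#define FREE_LIST_MAX 64

typedef struct {
  int reference_count;
} Sig;

typedef struct {
  Sig *free_list[FREE_LIST_MAX];
  int free_count;
} Solver;

static void sig_refer(Sig *sig)
{
  sig->reference_count++;
}

/* recycle rather than free when the last reference is dropped */
static void sig_unrefer(Sig *sig, Solver *s)
{
  sig->reference_count--;
  if( sig->reference_count == 0 )
    s->free_list[s->free_count++] = sig;
}

/* reuse a recycled object when one is available */
static Sig *sig_make(Solver *s)
{
  Sig *res;
  if( s->free_count > 0 )
    res = s->free_list[--s->free_count];
  else
    res = (Sig *) malloc(sizeof(Sig));
  res->reference_count = 0;
  return res;
}
```

Note that the count records references from other heap objects only,
so a newly made signature starts at zero and is recycled as soon as
the last registered reference is dropped.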
@PP
The @C { asst_to_shift_index } field is used to implement
caching of dominance tests.  It holds the index of the
signature in an enclosing array.
@PP
The last two fields hold the value of the signature.  Each
signature depends on a particular set of monitors; its cost is the
total cost of those monitors, and its states are state values for
the monitors, as required.  For each signature there is a signer
(Appendix {@NumberOf dynamic_impl.sig.signers}) which knows which
monitors these are.  It would make a lot of sense to store a pointer
to this signer in the signature.  However, to save space (given that
solutions contain signatures, and there can be many thousands of
solutions), this pointer has been omitted.  The functions that
operate on signatures have to use context to find the signer.
@PP
The remaining operations on signatures are straightforward and won't
be shown here.  They include @C { KheDrsSignatureMake } to make a
new signature object, @C { KheDrsSignatureAddCost } to add a cost
to a signature, and @C { KheDrsSignatureAddState } to add a state.
@End @SubSubAppendix

@SubSubAppendix
    @Title { Signature sets }
    @Tag { dynamic_impl.sig.sig_sets }
@Begin
@LP
A @I { signature set } is a set of signatures.  It is logically
the same as a signature:  it has a cost and an array of states.
It is basically an optimization:  one can build a signature set
by adding pointers to signatures more quickly than by appending
arrays of state values.  Its type declaration is
@ID @C {
typedef struct khe_drs_signature_set_rec *KHE_DRS_SIGNATURE_SET;
typedef HA_ARRAY(KHE_DRS_SIGNATURE_SET) ARRAY_KHE_DRS_SIGNATURE_SET;

struct khe_drs_signature_set_rec {
  KHE_COST			cost;
  ARRAY_KHE_DRS_SIGNATURE	signatures;
};
}
The @C { cost } field always holds the sum of the @C { cost }
fields of the individual signatures.
@PP
To save space, signature sets are stored in expanded form:
fields representing them have type
@C { struct khe_drs_signature_set_rec } rather than type
@C { KHE_DRS_SIGNATURE_SET }.  This has no drawbacks and saves
one pointer, which is significant given that every day solution
contains a signature set, and there may be many thousands of those.
@PP
Function @C { KheDrsSignatureSetInit } initializes a signature set.
This name is used, rather than @C { KheDrsSignatureSetMake }, to
indicate that the memory for the signature set is already allocated
and just needs to be initialized.  There are operations for
clearing a signature set, hashing it, testing two signature
sets for equality, and adding a signature to a signature set:
@ID @C {
void KheDrsSignatureSetAddSignature(KHE_DRS_SIGNATURE_SET sig_set,
  KHE_DRS_SIGNATURE sig, bool with_cost)
{
  if( with_cost )
    sig_set->cost += sig->cost;
  HaArrayAddLast(sig_set->signatures, sig);
  KheDrsSignatureRefer(sig);
}
}
This implements the rule given earlier, that the cost holds the
sum of the component signatures' costs.  This is omitted
(i.e. @C { false } is passed for @C { with_cost }) only in
one rather special case where the component costs are already
included.  Note the call to @C { KheDrsSignatureRefer }.
@End @SubSubAppendix

@SubSubAppendix
    @Title { Signers }
    @Tag { dynamic_impl.sig.signers }
@Begin
@LP
A @I { signer }, short for @I { signature controller }, is an object
which controls the construction of signatures and the testing of
signatures for dominance.
@PP
Different signatures may have different formats (different numbers
of states, or different meanings for their states).  Signatures
constructed using a given signer share the format defined by that
signer, allowing them to be compared for dominance.
@PP
The type of signers is
@IndentedList

@LI @C {
typedef struct khe_drs_signer_rec *KHE_DRS_SIGNER;
typedef HA_ARRAY(KHE_DRS_SIGNER) ARRAY_KHE_DRS_SIGNER;
}

@LI @C {
typedef enum {
  KHE_DRS_SIGNER_DAY,
  KHE_DRS_SIGNER_RESOURCE_ON_DAY,
  KHE_DRS_SIGNER_SHIFT,
  KHE_DRS_SIGNER_SHIFT_PAIR
} KHE_DRS_SIGNER_TYPE;
}

@LI @C {
struct khe_drs_signer_rec {
  KHE_DYNAMIC_RESOURCE_SOLVER	solver;
  ARRAY_KHE_DRS_EXPR		internal_exprs;
  ARRAY_KHE_DRS_DOM_TEST	dom_tests;
  HA_ARRAY_INT			eq_dom_test_indexes;
  int				last_hard_cost_index;
  KHE_DRS_CORRELATOR		correlator;
  KHE_DRS_SIGNER_TYPE		type;
  union {
    KHE_DRS_DAY			day;
    KHE_DRS_RESOURCE_ON_DAY	resource_on_day;
    KHE_DRS_SHIFT		shift;
    KHE_DRS_SHIFT_PAIR		shift_pair;
  } u;
};
}

@EndList
The @C { solver } field holds the solver containing this signer.
The @C { internal_exprs } field holds the internal (non-leaf)
expressions that must be evaluated when creating a signature
using this signer.  All expressions that contribute some
state or cost must enrol themselves onto this list, in
postorder (children before parents) as evaluation requires.
@PP
The @C { dom_tests } field holds the dominance tests, one for
each position in the signatures that this signer controls.
@PP
The @C { eq_dom_test_indexes } field contains a set of indexes
into the @C { dom_tests } array.  These point to those dominance
tests whose test is equality.  This is used by `medium' dominance
testing, which is obsolete but still supported.
@PP
The @C { last_hard_cost_index } field is another index into
the @C { dom_tests } array.  It points to the last dominance
test whose cost is hard (at least @C { KheCost(1, 0) }), or is
@C { -1 } when there are no such tests.  This makes it possible
to carry out the dominance testing of the hard cost elements of
a set of signatures before the soft cost ones.  This often saves
a lot of time, because when a hard cost element produces a cost,
dominance testing can end immediately (with failure).
@PP
The @C { correlator } field points to a @I { correlator },
an internal element of the signer which, when present,
allows it to detect and handle correlated expressions
(Appendix {@NumberOf dynamic_impl.sig.correlators}).
@PP
The @C { type } and @C { u } fields define a tagged union which
records whether the signer is used for day solutions, task
solutions, shift solutions, or shift pair solutions.  The main
use for this is that some @C { COUNTER } expressions
are evaluated differently depending on the type of signer.
@PP
There are operations for creating a signer, clearing it back to
the initial state (no internal expressions or dominance tests),
adding an internal expression, and adding a dominance test.  The
latter adds an entry to @C { eq_dom_test_indexes } when an
equality dominance test is added.
@PP
Function @C { KheDrsSignerAddExpr } decides whether expression
@C { e } needs to be added to signer @C { dsg }, and if so
whether a position needs to be reserved for its state in signatures
or not:
@ID @C {
bool KheDrsSignerAddExpr(KHE_DRS_SIGNER dsg, KHE_DRS_EXPR e,
  KHE_DYNAMIC_RESOURCE_SOLVER drs, int *index)
{
  KHE_DRS_DOM_TEST dom_test;
  switch( KheDrsExprEvalType(e, dsg, drs, &dom_test) )
  {
    case KHE_DRS_EXPR_EVAL_NO:

      /* do nothing */
      return *index = -1, false;

    case KHE_DRS_EXPR_EVAL_NOT_LAST:

      /* add expression and dom test */
      KheDrsSignerAddOpenExpr(dsg, e);
      return *index = KheDrsSignerAddDomTest(dsg, dom_test, drs), true;

    case KHE_DRS_EXPR_EVAL_LAST:

      /* add expression only */
      KheDrsSignerAddOpenExpr(dsg, e);
      return *index = -1, false;

    default:

      HnAbort("KheDrsSignerAddExpr internal error");
      return *index = -1, false;  /* keep compiler happy */
  }
}
}
The three-way decision is actually made by @C { KheDrsExprEvalType };
this code adds the expression, and possibly a dominance test as
well, based on that decision.
@PP
Once all the necessary expressions and dominance tests have been
added, the signer is able, first, to build a signature by visiting
and evaluating the internal expressions:
@ID {0.98 1.0} @Scale @C {
KHE_DRS_SIGNATURE KheDrsSignerEvalSignature(KHE_DRS_SIGNER dsg,
  KHE_DRS_SIGNATURE prev_sig, KHE_DYNAMIC_RESOURCE_SOLVER drs,
  bool debug)
{
  KHE_DRS_EXPR e;  int i;  KHE_DRS_SIGNATURE res;
  res = KheDrsSignatureMake(drs);
  HaArrayForEach(dsg->internal_exprs, e, i)
    KheDrsExprEvalSignature(e, dsg, prev_sig, res, drs, debug);
  return res;
}
}
and second, to compare two signatures for dominance:
@ID @C {
bool KheDrsSignerDominates(KHE_DRS_SIGNER dsg,
  KHE_DRS_SIGNATURE sig1, KHE_DRS_SIGNATURE sig2,
  KHE_COST *avail_cost)
{
  *avail_cost += (sig2->cost - sig1->cost);
  return KheDrsSignerDoDominates(dsg, sig1, sig2, KHE_DRS_SIGNER_ALL,
    0, true, avail_cost);
}
}
This function assumes that @C { *avail_cost } has already been
initialized, allowing it to become part of a larger dominance
test which has these signatures as just one part.  It calls
function @C { KheDrsSignerDoDominates } to do the actual work.
We'll start by examining its header:
@ID @C {
bool KheDrsSignerDoDominates(KHE_DRS_SIGNER dsg,
  KHE_DRS_SIGNATURE sig1, KHE_DRS_SIGNATURE sig2,
  KHE_DRS_SIGNER_TEST test, int trie_start_depth, bool stop_on_neg,
  KHE_COST *avail_cost);
}
This tests @C { sig1 } and @C { sig2 } for dominance, assuming
that @C { *avail_cost } is already initialized to the available
cost and includes @C { sig2->cost - sig1->cost }.  Parameter
@C { test } has type
@ID @C {
typedef enum {
  KHE_DRS_SIGNER_HARD,
  KHE_DRS_SIGNER_SOFT,
  KHE_DRS_SIGNER_ALL
} KHE_DRS_SIGNER_TEST;
}
and specifies that only states up to position @C { last_hard_cost_index }
are to be tested, or only states from @C { last_hard_cost_index + 1 }
onwards, or all states.  As it turns out, when testing signature sets
for dominance there are significant time savings to be made by
testing all the hard constraints before testing all the soft ones.
@PP
Parameter @C { trie_start_depth } is non-zero only when this function
is called from within the trie data structure; some dominance
testing will have already occurred and we want to continue on from
position @C { trie_start_depth }.  In this case, @C { test } will
be @C { KHE_DRS_SIGNER_ALL }.  In fact, the trie data structure
for solutions has been withdrawn, so @C { trie_start_depth } will
always be 0.
@PP
Parameter @C { stop_on_neg }, when @C { true }, says that if
@C { *avail_cost } ever goes negative, we are to stop immediately
and declare that there is no dominance.  As discussed in
Appendix {@NumberOf dynamic_theory}, this is best for efficiency,
although there are rare cases where it declares that there is
no dominance when in fact carrying on would show that there is
dominance.  It does not break anything to occasionally make this
mistake; it just means that a few more solutions are kept than
is absolutely necessary, because a slightly weaker dominance test
is being applied than could be.
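@PP
The available-cost mechanism, including @C { stop_on_neg }, can be
sketched in isolation.  The following is a hypothetical miniature,
not KHE's code: each position where the first signature fails to
dominate the second outright is charged one hard cost unit against
the available cost, and when @C { stop_on_neg } holds the test ends
as soon as the available cost goes negative.

```c
#include <assert.h>
#include <stdbool.h>

#define HARD_COST_UNIT 10000L  /* stands in for KheCost(1, 0) */

/* strong dominance at one position: smaller or equal dominates */
static bool pos_dominates(int v1, int v2)
{
  return v1 <= v2;
}

/* does sig1 (with cost1) dominate sig2 (with cost2)? */
static bool sig_dominates(const int *sig1, const int *sig2, int len,
  long cost1, long cost2, bool stop_on_neg)
{
  long avail_cost = cost2 - cost1;
  int i;
  for( i = 0;  i < len;  i++ )
  {
    if( !pos_dominates(sig1[i], sig2[i]) )
      avail_cost -= HARD_COST_UNIT;
    if( stop_on_neg && avail_cost < 0 )
      return false;  /* quit early; occasionally too pessimistic */
  }
  return avail_cost >= 0;
}
```

In the full solver some dominance tests (tradeoff and tabulated ones)
can add to the available cost as well as subtract from it, which is
why stopping at the first negative value is occasionally too
pessimistic; in this miniature nothing ever restores the balance,
so the early exit never changes the result.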
@PP
Here now is @C { KheDrsSignerDoDominates }.  It is more than
one page long, so we'll start by presenting it in outline,
then fill in each piece separately:
@ID @C {
bool KheDrsSignerDoDominates(KHE_DRS_SIGNER dsg,
  KHE_DRS_SIGNATURE sig1, KHE_DRS_SIGNATURE sig2,
  KHE_DRS_SIGNER_TEST test, int trie_start_depth, bool stop_on_neg,
  KHE_COST *avail_cost)
{
  KHE_DRS_DOM_TEST dt;  int start_index, stop_index, dt_index, sig_len;
  KHE_DRS_VALUE v1, v2;

  /* consistency checks */
  ... see first code excerpt below ...

  /* work out start_index and stop_index */
  ... see second code excerpt below ...

  /* quit immediately if no available cost */
  if( stop_on_neg && *avail_cost < 0 )
    return false;

  /* do the test from start_index inclusive to stop_index exclusive */
  for( dt_index = start_index;  dt_index < stop_index;  dt_index++ )
  {
    dt = HaArray(dsg->dom_tests, dt_index);
    v1 = HaArray(sig1->states, dt_index);
    v2 = HaArray(sig2->states, dt_index);
    switch( dt->type )
    {
      ... see third code excerpt below ...
    }

    /* quit early if *avail_cost is now negative */
    if( stop_on_neg && *avail_cost < 0 )
      return false;
  }
  return *avail_cost >= 0;
}
}
The two signatures are supposed to have been created by signer
@C { dsg }.  Because we have chosen not to store a signer within
each signature, we cannot check this.  But we can check that the
number of dominance tests in the signer equals the number of states
in each signature:
@ID @C {
/* consistency checks */
sig_len = HaArrayCount(dsg->dom_tests);
HnAssert(HaArrayCount(sig1->states) == sig_len,
  "KheDrsSignerDoDominates internal error 1 (count %d != count %d)\n",
  HaArrayCount(sig1->states), sig_len);
HnAssert(HaArrayCount(sig2->states) == sig_len,
  "KheDrsSignerDoDominates internal error 2 (count %d != count %d)\n",
  HaArrayCount(sig2->states), sig_len);
}
Next we use @C { test } to work out @C { start_index }, the
place along the states arrays to start testing, and @C { stop_index },
the index just past the stopping point:
@ID @C {
/* work out start_index and stop_index */
switch( test )
{
  case KHE_DRS_SIGNER_HARD:

    HnAssert(trie_start_depth == 0,
      "KheDrsSignerDoDominates internal error 1");
    start_index = 0;
    stop_index = dsg->last_hard_cost_index + 1;
    break;

  case KHE_DRS_SIGNER_SOFT:

    HnAssert(trie_start_depth == 0,
      "KheDrsSignerDoDominates internal error 2");
    start_index = dsg->last_hard_cost_index + 1;
    stop_index = sig_len;
    break;

  case KHE_DRS_SIGNER_ALL:

    start_index = trie_start_depth;
    stop_index = sig_len;
    break;

  default:

    HnAbort("KheDrsSignerDoDominates internal error 3");
    start_index = 0, stop_index = 0;  /* keep compiler happy */
}
}
Now returning to the original function, we see that it iterates
along the states arrays from @C { start_index } inclusive to
@C { stop_index } exclusive, extracting the dominance test
@C { dt } from the signer (this says how to test for dominance
at this position), and state values @C { v1 } and @C { v2 }
from the states arrays of the signature.
@PP
The switch on @C { dt->type } has 13 branches.  Most of them are
concerned with dominance testing involving correlated expressions,
which we will omit for now.  Here are the others:
@ID @C {
switch( dt->type )
{
  case KHE_DRS_DOM_TEST_UNUSED:

    /* unused test, should never happen */
    HnAbort("internal error in KheDrsSignerDominates (UNUSED)");
    break;

  case KHE_DRS_DOM_TEST_STRONG:

    /* strong dominance */
    if( !KheDrsDomTestDominatesStrong(dt, v1, v2) )
      *avail_cost -= KheCost(1, 0);
    break;

  case KHE_DRS_DOM_TEST_SEPARATE_INT:
    
    /* separate dominance (int) */
    if( !KheDrsDomTestDominatesSeparateInt(dt, v1.i, v2.i) )
      *avail_cost -= KheCost(1, 0);
    break;

  case KHE_DRS_DOM_TEST_SEPARATE_FLOAT:

    /* separate dominance (float) */
    if( !KheDrsDomTestDominatesSeparateFloat(dt, v1.f, v2.f) )
      *avail_cost -= KheCost(1, 0);
    break;

  case KHE_DRS_DOM_TEST_TRADEOFF:

    /* tradeoff dominance */
    if( !KheDrsDomTestDominatesTradeoff(dt, v1.i, v2.i, avail_cost) )
      *avail_cost -= KheCost(1, 0);
    break;

  case KHE_DRS_DOM_TEST_TABULATED:

    /* tabulated dominance */
    KheDrsDomTestDominatesTabulated(dt, v1.i, v2.i, avail_cost);
    break;

  ... cases for correlated expressions omitted ...
}
}
The dominance test determines whether the values are
interpreted as integers or floats, among other things.
@End @SubSubAppendix

@SubSubAppendix
  @Title { Signer sets }
  @Tag { dynamic_impl.sig.signer_sets }
@Begin
@LP
Just as a signature set is a set of signatures, so a signer set
is a set of signers:
@ID @C {
typedef struct khe_drs_signer_set_rec *KHE_DRS_SIGNER_SET;
typedef HA_ARRAY(KHE_DRS_SIGNER_SET) ARRAY_KHE_DRS_SIGNER_SET;

struct khe_drs_signer_set_rec {
  ARRAY_KHE_DRS_SIGNER		signers;
};
}
There are operations for creating and freeing a signer set, adding
a signer to a signer set, and clearing a signer set back to empty.
There is also this operation, called when a day is opened:
@ID @C {
void KheDrsSignerSetDayOpen(KHE_DRS_SIGNER_SET signer_set,
  KHE_DRS_DAY day, KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  KHE_DRS_RESOURCE dr;  int i;  KHE_DRS_RESOURCE_ON_DAY drd;
  KHE_DRS_SIGNER dsg;

  /* one signer for each open resource */
  HnAssert(HaArrayCount(signer_set->signers) == 0,
    "KheDrsSignerSetDayOpen internal error");
  KheDrsResourceSetForEach(drs->open_resources, dr, i)
  {
    drd = KheDrsResourceOnDay(dr, day);
    KheDrsSignerSetAddSigner(signer_set, drd->signer);
  }

  /* one additional signer for event resource expressions */
  dsg = KheDrsSignerMake(day, NULL, NULL, NULL, drs);
  KheDrsSignerSetAddSigner(signer_set, dsg);
}
}
This fills a signer set with one pre-existing signer for each open
resource, and one newly created signer to hold the event resource
constraints affected by the tasks of this day.  When the day is
closed, this function is called:
@ID @C {
void KheDrsSignerSetDayClose(KHE_DRS_SIGNER_SET signer_set,
  KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  KHE_DRS_SIGNER dsg;

  /* delete and free the last signer (the others were not made here) */
  HnAssert(HaArrayCount(signer_set->signers) > 0,
    "KheDrsSignerSetDayClose internal error");
  dsg = HaArrayLast(signer_set->signers);
  KheDrsSignerFree(dsg, drs);

  /* clear the signers */
  HaArrayClear(signer_set->signers);
}
}
This frees the last signer and clears the signer set.
@PP
After that come operations for hashing signatures and comparing
them for equality, which we won't show because they are used
only by superseded dominance tests.  Finally we get to
@C { KheDrsSignerSetDominates }, which performs dominance
testing between two signature sets.  It's a long one so we'll
start with the header:
@ID @C {
bool KheDrsSignerSetDominates(KHE_DRS_SIGNER_SET signer_set,
  KHE_DRS_SIGNATURE_SET sig_set1, KHE_DRS_SIGNATURE_SET sig_set2,
  KHE_COST trie_extra_cost, int trie_start_depth, bool use_caching,
  KHE_DYNAMIC_RESOURCE_SOLVER drs)
}
This returns @C { true } when @C { sig_set1 } dominates @C { sig_set2 },
using @C { signer_set } to determine what to do at each position.
Parameters @C { trie_extra_cost } and @C { trie_start_depth }
have non-zero values only when @C { KheDrsSignerSetDominates }
is called from within the trie data structure, where some dominance
testing (the first @C { trie_start_depth } positions) will have
already been done.  Parameter @C { use_caching } is @C { true }
when we are using caching of the results of this function to speed
up later calls to it.  We'll see how that works shortly.
Here is the function, in outline:
@ID @C {
bool KheDrsSignerSetDominates(KHE_DRS_SIGNER_SET signer_set,
  KHE_DRS_SIGNATURE_SET sig_set1, KHE_DRS_SIGNATURE_SET sig_set2,
  KHE_COST trie_extra_cost, int trie_start_depth, bool use_caching,
  KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  int i, count, last_cache;  KHE_DRS_SIGNER dsg;  KHE_COST avail_cost;
  KHE_DRS_SIGNATURE sig1, sig2;  KHE_DRS_RESOURCE dr;
  KHE_DRS_COST_TUPLE ct;
  count = HaArrayCount(signer_set->signers);
  HnAssert(HaArrayCount(sig_set1->signatures) == count,
    "KheDrsSignerSetDominates internal error 1");
  HnAssert(HaArrayCount(sig_set2->signatures) == count,
    "KheDrsSignerSetDominates internal error 2");
  avail_cost = sig_set2->cost - sig_set1->cost - trie_extra_cost;
  if( avail_cost < 0 )
    return false;
  if( drs->solve_dom_approx > 0 )
    avail_cost += (avail_cost * drs->solve_dom_approx) / 10;
  if( trie_start_depth > 0 )
  {
    /* won't happen anyway but do it the easy way if it does */
    ... see first code excerpt below ...
  }
  else if( USE_DOM_CACHING && use_caching )
  {
    /* use a cached value for all positions except the last */
    ... see second code excerpt below ...
  }
  else
  {
    /* visit hard constraints first; they often end the test quickly */
    ... see third code excerpt below ...
  }
  return true;
}
}
After checking that the number of signers in the signer set
equals the number of signatures in each signature set, the
function initializes @C { avail_cost } and returns @C { false }
immediately if it is negative.  This code:
@ID @C {
if( drs->solve_dom_approx > 0 )
  avail_cost += (avail_cost * drs->solve_dom_approx) / 10;
}
implements the @C { dom_approx } feature, which arbitrarily
enlarges @C { avail_cost }, increasing the chance of a
successful test but giving up provable optimality.
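@PP
The enlargement works in tenths.  Here is a small self-contained sketch of the arithmetic (the helper name @C { enlarge_avail_cost } is hypothetical, not part of KHE): a setting of 3 enlarges the available cost by 30%.

```c
#include <assert.h>

/* Hypothetical sketch (not the KHE API): a dom_approx setting of d
   enlarges the available cost by d tenths, increasing the chance of
   a successful dominance test but giving up provable optimality. */
typedef long long cost_t;

cost_t enlarge_avail_cost(cost_t avail_cost, int dom_approx)
{
  if( dom_approx > 0 )
    avail_cost += (avail_cost * dom_approx) / 10;
  return avail_cost;
}
```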
@PP
If @C { trie_start_depth > 0 }, we have to start
@C { trie_start_depth } places along the signature,
which is done like this:
@ID @C {
/* won't happen anyway but do it the easy way if it does */
HaArrayForEach(signer_set->signers, dsg, i)
{
  if( trie_start_depth < HaArrayCount(dsg->dom_tests) )
  {
    sig1 = HaArray(sig_set1->signatures, i);
    sig2 = HaArray(sig_set2->signatures, i);
    if( !KheDrsSignerDoDominates(dsg, sig1, sig2, KHE_DRS_SIGNER_ALL,
	  trie_start_depth, true, &avail_cost) )
      return false;
    trie_start_depth = 0;
  }
  else
    trie_start_depth -= HaArrayCount(dsg->dom_tests);
}
}
We saw @C { KheDrsSignerDoDominates } earlier.  If we are
using cached dominance test results, we execute this code:
@ID @C {
/* use a cached value for all positions except the last */
HnAbort("KheDrsSignerSetDominates - dom caching unavailable");
last_cache = HaArrayCount(signer_set->signers) - 2;
for( i = 0;  i <= last_cache;  i++ )
{
  dr = KheDrsResourceSetResource(drs->open_resources, i);
  dsg = HaArray(signer_set->signers, i);
  sig1 = HaArray(sig_set1->signatures, i);
  HnAssert(sig1->asst_to_shift_index >= 0,
    "KheDrsSignerSetDominates internal error 3");
  sig2 = HaArray(sig_set2->signatures, i);
  HnAssert(sig2->asst_to_shift_index >= 0,
    "KheDrsSignerSetDominates internal error 4");
  ct = KheDrsDim2TableGet2(dr->expand_dom_test_cache,
    sig1->asst_to_shift_index, sig2->asst_to_shift_index);
  avail_cost += ct.unweighted_psi;
  if( avail_cost < 0 )
    return false;
}

/* regular test for the last position */
dsg = HaArrayLast(signer_set->signers);
sig1 = HaArrayLast(sig_set1->signatures);
sig2 = HaArrayLast(sig_set2->signatures);
if( !KheDrsSignerDoDominates(dsg, sig1, sig2, KHE_DRS_SIGNER_ALL,
      0, true, &avail_cost) )
  return false;
}
Caching is available only for resource signatures, and is only used
within day solutions.  When it is in use, the @C { asst_to_shift_index }
fields of the two signatures must be set and are used to index a
cache of results of calls to this function, stored in
@C { dr->expand_dom_test_cache }.  Looking up a table should be much
faster than redoing the dominance test over and over, although
the author's tests do not show any benefit.
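@PP
The caching idea can be sketched in isolation.  In this hypothetical example (the names @C { cache_entry }, @C { cached_cost_delta }, and @C { slow_cost_delta } are illustrative, not KHE's), a two-dimensional table memoizes the cost delta of a test between two assignments, keyed by their indexes, so the expensive position-by-position test runs at most once per pair:

```c
#include <assert.h>

/* Hypothetical sketch (not the KHE code): caching the unweighted cost
   delta of a dominance test between two assignments, keyed by their
   asst_to_shift_index-style values.  A real cache would be sized from
   the number of possible assignments. */
#define MAX_ASSTS 4

typedef struct { int unweighted_psi; int valid; } cache_entry;

static cache_entry cache[MAX_ASSTS][MAX_ASSTS];

/* stand-in for the expensive position-by-position dominance test */
static int slow_cost_delta(int i1, int i2) { return (i2 - i1) * 5; }

int cached_cost_delta(int i1, int i2)
{
  cache_entry *e = &cache[i1][i2];
  if( !e->valid )
  {
    e->unweighted_psi = slow_cost_delta(i1, i2);
    e->valid = 1;
  }
  return e->unweighted_psi;
}
```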
@PP
Finally we come to the usual case, where we just run along
each signature's state array in the usual way.  But even here
there is a wrinkle:
@ID @C {
/* visit hard constraints first; they often end the test quickly */
HaArrayForEach(signer_set->signers, dsg, i)
{
  sig1 = HaArray(sig_set1->signatures, i);
  sig2 = HaArray(sig_set2->signatures, i);
  if( !KheDrsSignerDoDominates(dsg, sig1, sig2, KHE_DRS_SIGNER_HARD,
	0, true, &avail_cost) )
    return false;
}

/* visit soft constraints */
HaArrayForEach(signer_set->signers, dsg, i)
{
  sig1 = HaArray(sig_set1->signatures, i);
  sig2 = HaArray(sig_set2->signatures, i);
  if( !KheDrsSignerDoDominates(dsg, sig1, sig2, KHE_DRS_SIGNER_SOFT,
      0, true, &avail_cost) )
    return false;
}
}
The hard constraint states are visited first, because if they fail
they end dominance testing immediately.  This saves a lot of time,
as the author's tests confirm.

@End @SubSubAppendix

@SubSubAppendix
  @Title { Correlators }
  @Tag { dynamic_impl.sig.correlators }
@Begin
@LP
@I { still to do }
@End @SubSubAppendix

@SubSubAppendix
  @Title { Types of signers and their signatures }
  @Tag { dynamic_impl.sig.types }
@Begin
@LP
At the risk of getting ahead of ourselves, we now give a detailed
description of the four types of signers and their signatures.
For each type of signer we will give:  the type of solution it
handles; which signers of this type there are and where they are
kept; which internal expressions are added to each signer; which
dominance tests are added to each signer; and which signatures
are created by each signer of this type, how their costs and
states are determined, and where they are kept.
@PP
But first, a few general points.  When an expression @M { e } adds
itself to a signer, it is requesting that the signer call it back
as part of creating the signatures controlled by that signer.  When
it adds a dominance test to a signer, it is saying that there will
be a position in the state array of the signatures controlled by
that signer which holds the state of @M { e }, and that the supplied
dominance test is to be used at that position during dominance
testing.  In that case, @M { e } is obliged to ensure that a state
value is added to the signature during evaluation.
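@PP
The enrolment pattern can be sketched abstractly.  In this hypothetical example (the names @C { signer_t } and @C { signer_enrol } are illustrative, not KHE types), enrolling always registers the expression for callback, and adding a dominance test additionally reserves one position in the state array, whose index the expression must later fill:

```c
#include <assert.h>

/* Hypothetical sketch (not the KHE types): an expression enrols with
   a signer for evaluation, and optionally reserves one position in
   the signature state array by also adding a dominance test. */
typedef struct {
  int expr_count;       /* expressions to call back during evaluation */
  int dom_test_count;   /* reserved positions in the state array */
} signer_t;

/* returns the reserved state position, or -1 if none was reserved */
int signer_enrol(signer_t *sg, int add_dom_test)
{
  sg->expr_count++;
  if( add_dom_test )
    return sg->dom_test_count++;  /* expression must write a state here */
  return -1;                      /* final evaluation; no state stored */
}
```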
@PP
An expression which adds a dominance test to a signer must also
add itself to the signer.  However, it can add itself to the
signer without adding a dominance test.  This occurs when the
expression needs to be evaluated but that evaluation will be the
last one, the one that finalizes the expression's value, so that
the value is stored somewhere other than in a signature's state array.
# , either directly by
# adding it itself, or indirectly by adding it to some other signature
# whose value is merged into this signature.
@PP
The code that adds internal expressions and dominance tests
to signers is part of function @C { KheDrsExprOpen }
(Appendix {@NumberOf dynamic_impl.expr.opening}).  The code
that adds costs and states to signatures is part of function
@C { KheDrsExprEvalSignature }
(Appendix {@NumberOf dynamic_impl.expr.search}).  Hopefully
this unified and detailed presentation will make those functions
easier to follow.
@PP
As we'll see, resource constraints are handled differently from
event resource constraints.  This is an optimization which
exploits the fact that each resource constraint is affected by
just one of the assignments on any one day (the one containing the
resource that the constraint monitors).  Event resource constraints
may be affected by several of the assignments on any one day.
@PP
@BI { Resource on day signers. }
When @C { type } is @C { KHE_DRS_SIGNER_RESOURCE_ON_DAY }, the signer
is a @I { resource on day } signer.  It handles signatures for task
solutions.  These consist of one @M { d sub k }-complete solution plus
one assignment of one resource @M { r } on day @M { d sub {k+1} } to
some task (or free day).  Field @C { u.resource_on_day } holds the
resource on day object representing @M { r } on day @M { d sub {k+1} }.
# The @M { d sub k }-complete solution and the
# task (or free day) to which @M { r } is assigned on day @M { d sub {k+1} }
# determine the values of the signatures derived from the signer.
@PP
There is one of these signers for each open resource on day, held
in the resource on day object @C { drd } and created when @C { drd }
is opened (actually, it is created when @C { drd } is created and
cleared out when @C { drd } is closed, which comes to the same thing).
This same signer is used for all @M { d sub k }-complete solutions
and all tasks, which is fine:  changing these things will lead to
different signatures, but it does not change the signer (it has
no effect on the signature format).
@PP
The expressions that enrol themselves with a resource on day
signer are those derived from resource constraints for
@M { r } that are affected by what @M { r } is doing on day
@M { d sub {k+1} }.  When this is not the last day they are
affected by, they also contribute dominance tests, reserving
for themselves a place in the states array.  Event resource
constraints take no part in resource on day signatures.
@PP
For these signers, signatures exist only while expanding a
given @M { d sub k }-complete solution.  Each signature is held in
a @C { KHE_DRS_MTASK_SOLN } object representing the assignment
of resource @M { r } to an arbitrary task of an mtask @M { c }
whose first day is @M { d sub {k+1} }.  There is one of these
mtask solution objects for each @M { (r, c) } pair such that @M { r }
can be assigned to @M { c }.  However, the implementation knows
that all tasks from the same shift produce the same signature,
and so two mtask solutions share a signature when they are from
the same shift.  All these mtask solution objects are held in the
@C { KHE_DRS_RESOURCE } object representing @M { r }.
# If @M { r } is assigned to a multi-day task in the
# @M { d sub k }-complete solution and that task is still running on
# day @M { d sub {k+1} }, there will be just one mtask solution
# object, for the mtask @M { c } containing that multi-day task.
@PP
The cost of a resource on day signature is the sum, over all
monitors enrolled in the signer, of their extra cost on day
@M { d sub {k+1} } over their cost on day @M { d sub k }.
Extra costs are more convenient than full costs when these
signatures go into signature sets, because they can be
added to the cost of the @M { d sub k }-solution with no
risk of double counting.
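@PP
A small arithmetic sketch makes the double-counting point concrete (the name @C { meld_cost } is hypothetical, not KHE's): summing extra costs onto the base cost counts the base exactly once, whereas summing full costs would count it once per resource.

```c
#include <assert.h>

/* Hypothetical sketch: summing extra costs (deltas) onto a base
   solution cost counts the base exactly once; summing full costs
   would count it once per resource. */
typedef long long cost_t;

cost_t meld_cost(cost_t base_cost, const cost_t *extra, int n)
{
  cost_t total = base_cost;
  for( int i = 0;  i < n;  i++ )
    total += extra[i];   /* each delta is independent of base_cost */
  return total;
}
```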
@PP
Resource on day signers and signatures are essentially an
optimization.  Many @M { d sub {k+1} }-complete solutions
derived from a given @M { d sub k }-complete solution
contain an assignment of @M { r } to a task of shift @M { s }.
The effect of this assignment on @M { r }'s resource
constraints is calculated only once, held in a resource
on day signature, and added to the signature sets of
@M { d sub {k+1} }-solutions as required.
@PP
@BI { Day signers. }
When @C { type } is @C { KHE_DRS_SIGNER_DAY }, the signer is a
@I { day signer }.  It handles signatures for the event resource
constraints of @M { d sub k }-solutions.  Field @C { u.day } holds the
day @M { d sub k } up to which these solutions are complete.  There
is one day signer for each open day, held in the @C { KHE_DRS_DAY }
object representing that day and created when the day is opened.
@PP
A day signer's internal expressions are the internal expressions
derived from event resource constraints which are affected by what
happens on its day.  As usual, there is one dominance test for each
of these internal expressions for which this is not the last day.
@PP
This signer must not be confused with the signer set for a given
day.  The signer set contains one resource on day signer for each
open resource @M { r }, each recording the cost and state of the
resource constraints of @M { r } up to day @M { d sub k } inclusive,
plus the day signer for the event resource constraints as just
explained.  So the signer set covers the entire solution and
defines signature sets whose cost is the full solution cost of one
@M { d sub k }-solution.
@PP
There is one solution object that none of the above applies to:
the root solution.  It is not on any day, so its signature can't
be related to any day signer.  But its signature contains just the
initial solution cost (after the selected tasks are unassigned),
has no states (initial states are stored in expressions, not in
signatures), and never participates in dominance testing.  So
the absence of a signer does not matter in this case.
@PP
@BI { Shift signers. }
When @C { type } is @C { KHE_DRS_SIGNER_SHIFT }, the signer is
a @I { shift signer }.  It handles shift solutions, made of
one @M { d sub k }-solution plus one assignment of each element
of a set of resources @M { R } to an unspecified task of a shift
@M { s } beginning on day @M { d sub {k+1} }.  These are assumed
to be the only assignments to the tasks of @M { s }.  Field
@C { u.shift } holds @M { s }.
@PP
There is one shift signer for each open shift @M { s }, held in
@M { s }, and one signature for each @C { KHE_DRS_SHIFT_SOLN }
object associated with @M { s }.  The internal expressions are
those derived from event resource constraints which are affected
by assignments to any of the tasks of @M { s }.  As usual, there
is one dominance test for each of these internal expressions
@M { e } for which the assignments to the @M { d sub k }-solution,
plus the assignments to @M { s } (crucially, combined with the
knowledge that there are no other assignments to @M { s }) leave
some children of @M { e } with undetermined values.
@PP
The same signer can be and is used for all sets @M { R }, although
dominance tests are made only between @C { KHE_DRS_SHIFT_SOLN }
objects with the same @M { R }.  For objects with the same
@M { R }, the effect on resource constraints is the same,
because each resource of @M { R } is assigned to some task of
@M { s }, and those tasks have the same effect on resource
constraints (by how shifts are defined).  This is why resource
constraints are omitted completely from these signatures:  for
dominance testing, which is all we are interested in here, they
make no difference.
The cost of each signature is the extra cost of its expressions
on day @M { d sub {k+1} }, beyond their cost on day @M { d sub k }.
# The initial cost of a shift assignment signature, before its
# expressions are evaluated, is 0.  Actually any initial value
# would do, provided it was the same in each signature, since
# these signatures are used only for dominance testing between
# shift assignment objects.
@PP
@BI { Shift pair signers. }
When @C { type } is @C { KHE_DRS_SIGNER_SHIFT_PAIR }, the signer is
a @I { shift pair signer }.  It handles shift pair solutions, made
of one @M { d sub k }-solution, plus one assignment of a set of
resources @M { R sub 1 } to tasks of a shift @M { s sub 1 } beginning
on day @M { d sub {k+1} }, plus one assignment of a set of resources
@M { R sub 2 } to tasks of a shift @M { s sub 2 } beginning on day
@M { d sub {k+1} }.  We require @M { s sub 1 != s sub 2 } and
@M { R sub 1 cap R sub 2 = emptyset }.
@PP
There is one shift pair signer for each pair of shifts that
begin on the same day, held in the shift pair object for those
two shifts.  The expressions are all those derived from event
resource monitors whose values are affected by assignments
to the tasks of @M { s sub 1 } or @M { s sub 2 }.  As usual,
if those assignments do not finish off the expression value,
a dominance test is added as well.
@PP
A very similar result could be obtained by a signer set
consisting of the two shift signers for @M { s sub 1 } and
@M { s sub 2 }.  But this fails to work in the unlikely case
where there is a monitor whose value is affected by assignments
in both shifts.  Such a monitor would have its cost counted twice.
@End @SubSubAppendix

@SubSubAppendix
    @Title { Dominance kinds and dominance test types }
    @Tag { dynamic_impl.sig.kind }
@Begin
@LP
Type @C { KHE_DRS_DOM_KIND } is an enumerated type, defined publicly
in @C { khe_solvers.h }, and used to specify the kind of dominance
testing to employ:
@ID @C {
typedef enum {
  KHE_DRS_DOM_LIST_NONE,
  KHE_DRS_DOM_LIST_SEPARATE,
  KHE_DRS_DOM_LIST_TRADEOFF,
  KHE_DRS_DOM_LIST_TABULATED,
  KHE_DRS_DOM_HASH_EQUALITY,
  KHE_DRS_DOM_HASH_MEDIUM,
  /* KHE_DRS_DOM_TRIE_SEPARATE, */
  /* KHE_DRS_DOM_TRIE_TRADEOFF, */
  KHE_DRS_DOM_INDEXED_TRADEOFF,
  KHE_DRS_DOM_INDEXED_TABULATED
} KHE_DRS_DOM_KIND;
}
Two ideas are mixed here:  how to organize a set of solutions (simple
list, hash table, the recently withdrawn trie, or indexed array), and
how to test a pair of solutions for dominance (none, separate, tradeoff,
tabulated, equality, or medium).  They are mixed because different
organizations support different tests.
@PP
These choices record the author's attempts to speed up dominance testing.
Eventually it will become clear that one is better than the others, and
the others can be forgotten, even deleted.  At the time of writing the
author inclines towards indexed tabulated testing; time will tell.
@PP
For the dominance test type aspect of @C { KHE_DRS_DOM_KIND },
the solver defines type
@ID @C {
typedef enum {
  KHE_DRS_DOM_TEST_UNUSED,
  KHE_DRS_DOM_TEST_SEPARATE_GENERIC,
  KHE_DRS_DOM_TEST_SEPARATE_INT,
  KHE_DRS_DOM_TEST_SEPARATE_FLOAT,
  KHE_DRS_DOM_TEST_TRADEOFF,
  KHE_DRS_DOM_TEST_TABULATED,
  KHE_DRS_DOM_TEST_CORR1_PARENT,
  KHE_DRS_DOM_TEST_CORR1_CHILD,
  KHE_DRS_DOM_TEST_CORR2_CHILD,
  KHE_DRS_DOM_TEST_CORR3_FIRST,
  KHE_DRS_DOM_TEST_CORR3_MID,
  KHE_DRS_DOM_TEST_CORR3_LAST,
  KHE_DRS_DOM_TEST_CORR4_FIRST,
  KHE_DRS_DOM_TEST_CORR4_MID,
  KHE_DRS_DOM_TEST_CORR4_LAST
} KHE_DRS_DOM_TEST_TYPE;
}
The @C { CORR1 } through @C { CORR4 } values are for dominance testing
of correlated expressions, and are not of concern to us here.  The
others request separate dominance, tradeoff dominance, or tabulated
dominance.  These values are extracted from a @C { KHE_DRS_DOM_KIND }
value by function
@ID {0.90 1.0} @Scale @C {
KHE_DRS_DOM_TEST_TYPE KheDomKindToDomTestType(KHE_DRS_DOM_KIND dom_kind)
{
  switch( dom_kind )
  {
    case KHE_DRS_DOM_LIST_NONE:		return KHE_DRS_DOM_TEST_UNUSED;
    case KHE_DRS_DOM_LIST_SEPARATE: return KHE_DRS_DOM_TEST_SEPARATE_GENERIC;
    case KHE_DRS_DOM_LIST_TRADEOFF:	return KHE_DRS_DOM_TEST_TRADEOFF;
    case KHE_DRS_DOM_LIST_TABULATED:	return KHE_DRS_DOM_TEST_TABULATED;
    case KHE_DRS_DOM_HASH_EQUALITY:	return KHE_DRS_DOM_TEST_UNUSED;
    case KHE_DRS_DOM_HASH_MEDIUM:	return KHE_DRS_DOM_TEST_SEPARATE_INT;
    case KHE_DRS_DOM_INDEXED_TRADEOFF:	return KHE_DRS_DOM_TEST_TRADEOFF;
    case KHE_DRS_DOM_INDEXED_TABULATED:	return KHE_DRS_DOM_TEST_TABULATED;

    default:

      HnAbort("KheDomKindToDomTestType: unknown dom_kind (%d)", dom_kind);
      return 0;  /* keep compiler happy */
  }
}
}
which carries out the obvious mapping.  At this point, it is not
clear whether any particular separate dominance test will be
@C { int }-valued or @C { float }-valued, so this function
reports that separate testing is wanted without specifying which;
that will be filled in later.
@PP
As the description of @C { KheDynamicResourceSolverSolve } explains,
when solutions are held in a cache as well as in a main table, a
consistency issue arises:  the main table's dominance kind and the
cache's dominance kind do not have to agree, but their dominance test
types do.  This is implemented by the following little function:
@ID @C {
KHE_DRS_DOM_TEST_TYPE KheDomKindCheckConsistency(
  KHE_DRS_DOM_KIND main_dom_kind, bool cache,
  KHE_DRS_DOM_KIND cache_dom_kind)
{
  KHE_DRS_DOM_TEST_TYPE main_dom_test_type, cache_dom_test_type;
  main_dom_test_type = KheDomKindToDomTestType(main_dom_kind);
  if( cache )
  {
    cache_dom_test_type = KheDomKindToDomTestType(cache_dom_kind);
    HnAssert(main_dom_test_type == KHE_DRS_DOM_TEST_UNUSED ||
      cache_dom_test_type == KHE_DRS_DOM_TEST_UNUSED ||
      main_dom_test_type == cache_dom_test_type,
      "KheDomKindCheckConsistency: inconsistent main_dom_kind "
      "and cache_dom_kind arguments");
  }
  return main_dom_test_type;
}
}
The only other function on type @C { KHE_DRS_DOM_KIND } is
@C { KheDomKindShow }, used to display a value of type
@C { KHE_DRS_DOM_KIND } during debugging.  Values of this
type are mainly used to control switch statements which
implement the different kinds of dominance testing.
@End @SubSubAppendix

@SubSubAppendix
  @Title { Dominance tables }
  @Tag { dynamic_impl.sig.tables }
@Begin
@LP
A @I { dominance table } is a table used to cache available costs
during uniform dominance testing, as explained in
Appendix {@NumberOf dynamic_theory}.  The type of one entry in
such a table is a tuple of three costs:
@ID @C {
typedef struct khe_drs_cost_tuple_rec {
  short				unweighted_psi;
  short				unweighted_psi0;
  short				unweighted_psi_plus;
} KHE_DRS_COST_TUPLE;

typedef HA_ARRAY(KHE_DRS_COST_TUPLE) ARRAY_KHE_DRS_COST_TUPLE;
}
To save memory it stores 16-bit @I { unweighted costs } rather
than 64-bit costs.  These unweighted costs need to be multiplied
by a constraint weight, stored elsewhere, to produce costs.
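@PP
The widening is straightforward; here is a minimal sketch (the helper name @C { weighted_cost } is hypothetical): the 16-bit unweighted count is widened before multiplication so that the product cannot overflow at 16 bits.

```c
#include <assert.h>

/* Hypothetical sketch: a 16-bit unweighted deviation count is widened
   and multiplied by a 64-bit constraint weight stored elsewhere to
   produce a full cost. */
typedef long long khe_cost_t;

khe_cost_t weighted_cost(short unweighted, khe_cost_t combined_weight)
{
  return (khe_cost_t) unweighted * combined_weight;
}
```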
@PP
Next comes type
@ID @C {
typedef struct khe_drs_dim1_table_rec {
  ARRAY_KHE_DRS_COST_TUPLE	children;
  int				offset;
} *KHE_DRS_DIM1_TABLE;

typedef HA_ARRAY(KHE_DRS_DIM1_TABLE) ARRAY_KHE_DRS_DIM1_TABLE;
}
representing a one-dimensional table of arbitrary length whose
elements have type @C { KHE_DRS_COST_TUPLE }.  As far as the
caller is concerned, the first element has index @C { offset }.
This @C { offset } field is maintained automatically as elements
are added, as we'll see.  Next comes
@ID @C {
struct khe_drs_dim2_table_rec {
  ARRAY_KHE_DRS_DIM1_TABLE	children;
  int				offset;
};

typedef HA_ARRAY(KHE_DRS_DIM2_TABLE) ARRAY_KHE_DRS_DIM2_TABLE;
}
which is a one-dimensional table of arbitrary length whose
children are one-dimensional arrays of cost tuples.  This
same pattern is continued up to type
@ID @C {
typedef struct khe_drs_dim5_table_rec {
  ARRAY_KHE_DRS_DIM4_TABLE	children;
  int				offset;
} *KHE_DRS_DIM5_TABLE;

typedef HA_ARRAY(KHE_DRS_DIM5_TABLE) ARRAY_KHE_DRS_DIM5_TABLE;
}
which offers five-dimensional tables whose individual elements
are cost tuples, and which are extensible with an adjustable
starting index (stored in @C { offset }) in each sub-array.
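@PP
The offset idea is independent of the number of dimensions, so a one-dimensional sketch shows it in full (the names @C { off_array }, @C { off_array_put }, and @C { off_array_get } are hypothetical): callers index from an arbitrary starting value fixed by the first insertion, while internally the array starts at 0.

```c
#include <assert.h>

/* Hypothetical sketch of the offset idea: callers index from an
   arbitrary starting value; internally the array starts at 0.  The
   offset is fixed by the first insertion, which is assumed to carry
   the smallest index. */
#define CAP 16

typedef struct {
  int items[CAP];
  int count;
  int offset;
} off_array;

void off_array_put(off_array *t, int index, int value)
{
  if( t->count == 0 )
    t->offset = index;                  /* first insertion sets the offset */
  int pos = index - t->offset;
  assert(0 <= pos && pos <= t->count);  /* within, or just beyond, range */
  if( pos == t->count )
    t->count++;
  t->items[pos] = value;
}

int off_array_get(const off_array *t, int index)
{
  int pos = index - t->offset;
  assert(0 <= pos && pos < t->count);
  return t->items[pos];
}
```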
@PP
There is a lot of repetitive code in these types, but the author
wanted the strong typing.  Here is a typical function, which adds
a new cost tuple @C { ct } to five-dimensional array @C{ d5 }:
@ID @C {
void KheDrsDim5TablePut(KHE_DRS_DIM5_TABLE d5, int index5, int index4,
  int index3, int index2, int index1, KHE_DRS_COST_TUPLE ct,
  KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  KHE_DRS_DIM4_TABLE d4;  int pos;  HA_ARENA a;

  /* set offset if this is the first insertion */
  if( HaArrayCount(d5->children) == 0 )
    d5->offset = index5;

  /* make sure index5 is within, or just beyond, the current range */
  pos = index5 - d5->offset;
  HnAssert(0 <= pos && pos <= HaArrayCount(d5->children),
    "KheDrsDim5TablePut: index5 %d is out of range %d .. %d", index5,
    d5->offset, d5->offset + HaArrayCount(d5->children));

  /* find or make-and-add d4, the sub-array to insert into */
  if( pos < HaArrayCount(d5->children) )
    d4 = HaArray(d5->children, pos);
  else
  {
    a = HaArrayArena(d5->children);
    d4 = KheDrsDim4TableMake(a);
    HaArrayAddLast(d5->children, d4);
  }
  
  /* do the insertion */
  KheDrsDim4TablePut(d4, index4, index3, index2, index1, ct, drs);
}
}
If this is the first insertion into this array, its @C { offset }
field is set to @C { index5 }, the new entry's position in this
array, as far as the caller is concerned.  This assumes that the
smallest index is passed first.  That assumption could be avoided
but there is no need for that in this application.
@PP
The next step is to find @C { pos }, the internal index corresponding to
the external index @C { index5 }; this is just @C { index5 - d5->offset }.
There must already be a four-dimensional array at position @C { pos },
in which case @C { d4 } is set to that array, or else @C { pos } must
be just off the end, in which case @C { d4 } is set to a new
four-dimensional array and added to the end.  Again, this `just off
the end' assumption could be avoided, but there is no need here.
@PP
Finally, @C { ct } is inserted into @C { d4 } by a call to
@C { KheDrsDim4TablePut }, the four-dimensional version of
@C { KheDrsDim5TablePut }.
@PP
There is a @C { KheDrsDim5TableGet } operation which retrieves
one four-dimensional table from a five-dimensional table:
@ID @C {
KHE_DRS_DIM4_TABLE KheDrsDim5TableGet(KHE_DRS_DIM5_TABLE d5, int index5)
{
  int pos;
  pos = index5 - d5->offset;
  HnAssert(0 <= pos && pos < HaArrayCount(d5->children),
    "KheDrsDim5TableGet: index %d out of range %d .. %d", index5,
    d5->offset, HaArrayCount(d5->children) + d5->offset - 1);
  return HaArray(d5->children, pos);
}
}
Applying this idea three times produces this function, which
uses three indexes to retrieve a two-dimensional table from
a five-dimensional table:
@ID @C {
KHE_DRS_DIM2_TABLE KheDrsDim5TableGet3(KHE_DRS_DIM5_TABLE d5,
  int index5, int index4, int index3)
{
  return KheDrsDim3TableGet(KheDrsDim4TableGet(
    KheDrsDim5TableGet(d5, index5), index4), index3);
}
}
This will be used when constructing tabulated dominance tests:  we
extract the appropriate two-dimensional table for a given test from
a five-dimensional table and store it in the test.
@End @SubSubAppendix

@SubSubAppendix
    @Title { Dominance tests }
    @Tag { dynamic_impl.sig.tests }
@Begin
@LP
A value of type @C { KHE_DRS_DOM_TEST } represents the dominance
test at one position along some signature.  It knows which
constraint owns that position, and when dominance testing reaches
that position it supplies the information needed to carry out the
correct test for that constraint.
@PP
This type should really be a union, storing just the values needed
by each type of dominance test.  However, at present all fields
needed by all types are lumped in together:
@ID @C {
typedef struct khe_drs_dom_test_rec *KHE_DRS_DOM_TEST;
typedef HA_ARRAY(KHE_DRS_DOM_TEST) ARRAY_KHE_DRS_DOM_TEST;

struct khe_drs_dom_test_rec {
  KHE_DRS_DOM_TEST_TYPE	type;
  int			correlated_delta;
  KHE_DRS_EXPR		expr;
  bool			allow_zero;
  int			min_limit;
  int			max_limit;
  int			a;
  int			b;
  KHE_COST		combined_weight;
  KHE_DRS_DIM2_TABLE	main_dom_table2;
  KHE_DRS_DIM4_TABLE	corr_dom_table4;
  KHE_MONITOR		monitor;
};
}
Most of these fields have self-explanatory names.  When @C { type }
is @C { KHE_DRS_DOM_TEST_TABULATED }, @C { main_dom_table2 }
contains the two-dimensional table consulted by tabulated
dominance.  When @C { type } denotes a correlated expression,
@C { correlated_delta } may hold a value to add to the index of
this dominance test to obtain the index of a correlated one, and
@C { corr_dom_table4 } may hold a four-dimensional table which
is consulted to obtain a correlated available cost.
@PP
The operations on dominance tests include @C { KheDrsDomTestMake }
for creating one, as well as @C { KheDrsDomTestDominatesSeparateInt },
@C { KheDrsDomTestDominatesSeparateFloat },
@C { KheDrsDomTestDominatesTradeoff }, and
@C { KheDrsDomTestDominatesTabulated } for carrying out
one test.  Here is the last of these:
@ID @C {
void KheDrsDomTestDominatesTabulated(KHE_DRS_DOM_TEST dt,
  int val1, int val2, KHE_COST *avail_cost)
{
  KHE_DRS_COST_TUPLE ct;
  ct = KheDrsDim2TableGet2(dt->main_dom_table2, val1, val2);
  *avail_cost += ct.unweighted_psi * dt->combined_weight;
}
}
It uses the two values (taken from two signatures) to index
into @C { dt->main_dom_table2 }, and uses the @C { unweighted_psi }
value from the resulting cost tuple, multiplied by a weight, as
the change in available cost.
@End @SubSubAppendix

@SubSubAppendix
  @Title { Dominance test caching }
  @Tag { dynamic_impl.sig.dom_cache }
@Begin
@LP
Function @C { KheDrsSignerDoDominates } is a good candidate for
caching.  Assuming that tries are not in use and that we want
to do a dominance test of the whole signature (not the hard parts
and soft parts separately), there are essentially just two
parameters---the two signatures to be compared---and the result
is just the cost, usually zero or negative, to add to the available
cost.  So the cache can be very simple:  a two-dimensional table
whose elements are costs.  Retrieving from it should be a lot
faster than slogging through the two signatures.
@PP
All this applies equally well to signature sets.  However, caching
them would not be useful, because a given ordered pair of signature
sets is almost never tested for dominance twice.  In the same way,
we should not cache all pairs of signatures that ever get tested
for dominance (although logically we could), because many of those
would never recur.  We need to choose pairs of signatures that are
easy to cache and likely to be tested for dominance repeatedly.
@PP
At the start of expanding each @M { d sub k }-complete solution
@M { S }, for each open resource @M { r }, and for each shift
@M { s } beginning on day @M { d sub {k+1} } that @M { r } is
qualified for (including the special shift denoting a free day),
the solver builds one signature holding the state of @M { r }'s
resource monitors when @M { r } has its assignments from
@M { S }, and is also assigned to @M { s }.
These signatures are stored in an array within the
@C { KHE_DRS_RESOURCE } object representing @M { r }.
# Each of these
# signatures is the main field of one @C { KHE_DRS_ASST_TO_SHIFT }
# object.  These objects
@PP
Dominance testing caches are built immediately after these
signatures are created.  Each cache is a two-dimensional array
containing one entry for each ordered pair of signatures for the
same @M { S } and @M { r }, whose value is the cost to add to the
available cost when these two signatures are compared for dominance.
The cache is stored in @M { r } alongside @M { r }'s signatures.
@PP
At position @M { (i, j) } of the cache is stored the cost associated
with comparing @M { r }'s @M { i }th signature with its @M { j }th
signature.  The cache is constructed
by calling @C { KheDrsSignerDoDominates } once for each of these
pairs of signatures and storing the result.  Each resource
signature contains its own index, so given two signatures it
is easy to use these indexes to access the cache and avoid
calling @C { KheDrsSignerDoDominates }.
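The shape of the cache can be sketched independently of the solver.
In this sketch, @C { DomCost } is a stand-in for
@C { KheDrsSignerDoDominates }, and the flat @C { n * n } layout is an
assumption made for illustration only:

```c
#include <assert.h>
#include <stdlib.h>

typedef long long COST;

/* stand-in for the real dominance test between signatures i and j;
   any deterministic function will do for this sketch */
static COST DomCost(int i, int j)
{
  return i <= j ? 0 : -(COST) (i - j);
}

/* build one n x n table by calling the test once per ordered pair */
static COST *CacheBuild(int n)
{
  COST *cache = malloc(n * n * sizeof(COST));
  for( int i = 0; i < n; i++ )
    for( int j = 0; j < n; j++ )
      cache[i * n + j] = DomCost(i, j);
  return cache;
}

/* later lookups use the signatures' stored indexes, not the test */
static COST CacheGet(COST *cache, int n, int i, int j)
{
  return cache[i * n + j];
}
```

Each lookup is one array access, replacing a full traversal of the
two signatures.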
@PP
The cache must only be consulted when a value for the two signatures
being compared is in it.  To ensure this, whenever two solutions are
tested for dominance, we first check whether both have the same
previous solution @M { S }.  If they do, then the test must be part
of the expansion of @M { S } (because all dominance tests between
pairs of solutions are part of some expansion, and at least one of
the solutions in each test must have the solution being expanded as
its previous solution), and at each position along the two solutions'
signature sets except the last, the signatures at that position form
a pair that must be in the cache (because the cache is present for
the entire expansion of @M { S }).  So the whole signature set test
for dominance can and does use cached values at each position except
the last, where @C { KheDrsSignerDoDominates } is called.  The last
position usually has empty signatures anyway.
@PP
If two solutions have different previous solutions, the test is part
of the expansion of one of those but not the other.  Caching such
cases would be much more expensive in time and memory than what we
are doing.  Our method costs very little and may do a lot of good.
@End @SubSubAppendix

@EndSubSubAppendices
@End @SubAppendix

@SubAppendix
  @Title { Constraints }
  @Tag { dynamic_impl.constraints }
@Begin
@LP
This section documents the solver's types that parallel KHE's
constraint and monitor types.  These are only a minor part
of the system, needed for technical reasons that will be explained.
@BeginSubSubAppendices

@SubSubAppendix
  @Title { Constraints }
  @Tag { dynamic_impl.constraints.constraints }
@Begin
@LP
The solver has a constraint type defined by
@ID @C {
typedef struct khe_drs_constraint_rec {
  KHE_CONSTRAINT		constraint;
  int				min_history;
  int				max_history;
  int				max_child_count;
  bool				needs_corr_table;
  KHE_DRS_EXPR			sample_expr;
  KHE_DRS_DIM3_TABLE		counter_main_dom_table3;
  KHE_DRS_DIM5_TABLE		counter_corr_dom_table5;
  KHE_DRS_DIM5_TABLE		sequence_main_dom_table5;
} *KHE_DRS_CONSTRAINT;

typedef HA_ARRAY(KHE_DRS_CONSTRAINT) ARRAY_KHE_DRS_CONSTRAINT;
}
There is one of these objects for each KHE constraint whose
resource type agrees with the solver's resource type.  There
is no absolute requirement to have such a type; it has been
included because the tables needed for uniform dominance testing
are the same for all monitors derived from the same constraint.
Having this type allows them to be built only once per constraint
rather than once per monitor.  This is a worthwhile saving because
these tables can be very large.
@PP
There is the usual @C { KheDrsConstraintMake } operation, which
however just leaves @C { NULL } values in the tables.  Then there is
# @ID @C {
# void KheDrsConstraintUpdateHistory(KHE_DRS_CONSTRAINT dc, int history)
# {
#   if( history < dc->min_history )
#     dc->min_history = history;
#   if( history > dc->max_history )
#     dc->max_history = history;
# }
# }
# which ensures that @C { dc->min_history } is the minimum of the
# history values of the constraint, and that @C { dc->max_history }
# is the maximum.  The point of this is that the tables need indexes
# for all history values that actually occur.
# @PP
# It might help to look ahead to how these functions are used.
# One part of defining an expression derived from a constraint is
# to call this function:
@ID @C {
void KheDrsExprCostSetConstraint(KHE_DRS_EXPR_COST ec,
  int history, KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  KHE_CONSTRAINT c;  int index;  KHE_DRS_CONSTRAINT dc;
  c = KheMonitorConstraint(ec->monitor->monitor);
  index = KheConstraintIndex(c);
  HaArrayFill(drs->all_constraints, index + 1, NULL);
  dc = HaArray(drs->all_constraints, index);
  if( dc == NULL )
  {
    /* new, so build and add a new object */
    dc = KheDrsConstraintMake(c, history, (KHE_DRS_EXPR) ec, drs);
    HaArrayPut(drs->all_constraints, index, dc);
  }
  else
  {
    /* already built, so just update the fields */
    if( history < dc->min_history )
      dc->min_history = history;
    if( history > dc->max_history )
      dc->max_history = history;
    if( HaArrayCount(ec->children) > dc->max_child_count )
      dc->max_child_count = HaArrayCount(ec->children);
    dc->needs_corr_table = dc->needs_corr_table ||
      KheDrsExprNeedsCorrTable((KHE_DRS_EXPR) ec, drs->days_frame);
  }
}
}
It uses the expression's constraint's index to look up the
@C { all_constraints } array in the solver.  If there is no
constraint there, it calls @C { KheDrsConstraintMake } and adds
one.  If there is already one there, it updates its history bounds,
its maximum child count, and its @C { needs_corr_table } flag.  Once
all expressions are created,
all constraints will be too.
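The fill-then-find-or-make pattern used here is worth isolating.  The
following sketch reproduces it with illustrative types (@C { CON } and
@C { CON_ARRAY } stand in for @C { KHE_DRS_CONSTRAINT } and the
@C { HaArray } machinery; they are not the KHE API):

```c
#include <assert.h>
#include <stdlib.h>

/* illustrative stand-ins, not the KHE types */
typedef struct { int index; int min_history; int max_history; } CON;
typedef struct { CON **items; int count; } CON_ARRAY;

/* extend the array with NULLs so that index becomes a legal position,
   like HaArrayFill */
static void ConArrayFill(CON_ARRAY *a, int len)
{
  if( len > a->count )
  {
    a->items = realloc(a->items, len * sizeof(CON *));
    for( int i = a->count; i < len; i++ )
      a->items[i] = NULL;
    a->count = len;
  }
}

/* find the object at index, creating it on first use and
   updating its history bounds on later visits */
static CON *ConArrayFindOrMake(CON_ARRAY *a, int index, int history)
{
  CON *c;
  ConArrayFill(a, index + 1);
  c = a->items[index];
  if( c == NULL )
  {
    c = malloc(sizeof(CON));
    c->index = index;
    c->min_history = c->max_history = history;
    a->items[index] = c;
  }
  else
  {
    if( history < c->min_history ) c->min_history = history;
    if( history > c->max_history ) c->max_history = history;
  }
  return c;
}
```

Repeated visits to the same index return the same object, so each
constraint's tables are built once however many expressions refer to it.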
@PP
The other main function builds the three tables stored in the
constraint object:
@ID @C {
void KheDrsConstraintSetTables(KHE_DRS_CONSTRAINT dc,
  KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  switch( dc->sample_expr->tag )
  {
    case KHE_DRS_EXPR_COUNTER_TAG:

      /* set counter dom tables */
      KheDrsConstraintSetCounterDomTables(dc,
	(KHE_DRS_EXPR_COUNTER) dc->sample_expr, drs);
      break;

    case KHE_DRS_EXPR_SEQUENCE_TAG:

      /* set sequence dom tables */
      KheDrsConstraintSetSequenceDomTables(dc,
	(KHE_DRS_EXPR_SEQUENCE) dc->sample_expr, drs);
      break;

    default:

      HnAbort("KheDrsConstraintSetTables internal error (tag %d)",
	dc->sample_expr->tag);
      break;
  }
}
}
When the constraint supports @C { KHE_DRS_EXPR_COUNTER } expressions,
the @C { counter_main_dom_table3 } and @C { counter_corr_dom_table5 }
tables are set.  When the constraint supports @C { KHE_DRS_EXPR_SEQUENCE }
expressions, the @C { sequence_main_dom_table5 } table is set.  The code
for this, beginning with @C { KheDrsConstraintSetCounterDomTables } and
@C { KheDrsConstraintSetSequenceDomTables }, may be found in this
submodule, but we won't show it here.  It is a direct transcription of
the tabulated dominance formulas of Appendix {@NumberOf dynamic_theory}.
@End @SubSubAppendix

@SubSubAppendix
  @Title { Monitors }
  @Tag { dynamic_impl.constraints.monitors }
@Begin
@LP
The solver has a @C { KHE_DRS_MONITOR } type:
@ID @C {
typedef struct khe_drs_monitor_rec {
  KHE_MONITOR		monitor;
  KHE_COST		rerun_open_and_search_cost;
  KHE_COST		rerun_open_and_close_cost;
  KHE_DRS_EXPR		sample_expr;
} *KHE_DRS_MONITOR;

typedef HA_ARRAY(KHE_DRS_MONITOR) ARRAY_KHE_DRS_MONITOR;
}
There is one of these monitor objects for each KHE monitor
whose resource type agrees with the resource type of the
solver.
@PP
As for constraints, this type is not absolutely needed.  Here,
the motive is not to save space, but rather to test the
solver.  It is not practicable to debug a full run, because
there is too much data.  But one can debug a single path through
the search tree, which the solver calls a rerun, as explained
in detail in Appendix {@NumberOf dynamic_impl.solving.testing}.
While doing this, the @C { rerun_open_and_search_cost }
and @C { rerun_open_and_close_cost } fields are kept up
to date, and at the end one can check for disagreements
with the authoritative costs produced by the KHE platform.
@PP
This work needs to be done per monitor, not per expression,
because the costs obtainable from KHE are per monitor.  So
all expressions derived from a given KHE monitor contain a
pointer to a shared DRS monitor object.  (The solver often
converts one monitor into several expressions.  This often
allows these expressions to avoid having to contribute state
information to solution signatures, a significant saving.)
@PP
The operations on DRS monitors include @C { KheDrsMonitorMakeAndAdd },
which makes a monitor and adds it to the solver's @C { all_monitors }
array; @C { KheDrsMonitorUpdateRerunCost }, which is called
during reruns to update the two cost fields, and optionally
to produce debug output saying what was done; and
@C { KheDrsMonitorCheckRerunCost }, which checks at the
end of the run that @C { rerun_open_and_search_cost } and
@C { rerun_open_and_close_cost } both agree with KHE's
value for the monitor's cost.
@End @SubSubAppendix

@EndSubSubAppendices
@End @SubAppendix

@SubAppendix
  @Title { Expressions }
  @Tag { dynamic_impl.expr }
@Begin
@BeginSubSubAppendices

@SubSubAppendix
  @Title { Introduction }
  @Tag { dynamic_impl.expr.intro }
@Begin
@LP
The reader is assumed to be familiar with @I { expression trees }, which
are tree structures representing algebraic expressions.  For example,
@M { sqrt { b sup 2 - 4ac } } may be represented by the expression tree
@CD @Diag treevsep { 0.5f } treehsep { 0.6f } alabelprox { NW } {
@Tree
{
@Box @C { sqrt }
@FirstSub {
@Box @C { - }
@FirstSub {
  @Box @C { * }
  @FirstSub @Box @C { b }
  @NextSub @Box @C { b }
}
@NextSub {
  @Box @C { * }
  @FirstSub @Box @C { 4 }
  @NextSub {
    @Box @C { * }
    @FirstSub @Box @C { a }
    @NextSub @Box @C { c }
  }
}
}
}
}
If variables have values, each node has a value, dependent on
its type and its children's values.
@PP
Each node is similar to the other nodes in some ways (they are all
expression tree nodes), but different in others (for example, in
the operations they perform).  This situation calls for inheritance,
with an abstract base class representing expression tree nodes in
general, inherited by several concrete child classes representing
particular kinds of expressions.
@PP
In our application, each constraint (strictly speaking, each point
of application of each constraint, represented in KHE by a monitor)
is represented by an expression tree which, given a particular solution,
can be evaluated to yield the cost of the monitor.  The abstract base
class is @C { KHE_DRS_EXPR }.  There are 15 concrete subclasses
representing particular types of expressions.  Here we are concerned
with introducing expressions generally, so although we will use some
concrete subtypes in examples, we leave the full list for later
(Appendix {@NumberOf dynamic_impl.expr.types}).
@PP
Although the term `expression' most naturally means `expression tree',
we usually use it to mean `expression tree node'.  As explained
earlier, this is done to avoid the term `node', which is ambiguous
here because it could also mean `search tree node'.
@PP
Here is an expression tree for constraining the number of busy weekends
for resource @M { r }:
@CD @Diag
treehsep { 1.0c }
{
||0.8c
@HTree {
@Node { @M { INT_SUM_COST } }
@FirstSub {
  @Node { @M { OR } }
  @FirstSub to { W } { @Node @M { BUSY_TIME(r, 1Sat1) } }
  @NextSub  to { W } { @Node @M { BUSY_TIME(r, 1Sat2) } }
  @NextSub  to { W } { @Node @M { BUSY_TIME(r, 1Sun1) } }
  @NextSub  to { W } { @Node @M { BUSY_TIME(r, 1Sun2) } }
}
@NextSub pathstyle { noline } {
  @Node outlinestyle { noline } { ... }
}
@NextSub {
  @Node { @M { OR } }
  @FirstSub to { W } { @Node @M { BUSY_TIME(r, 4Sat1) } }
  @NextSub  to { W } { @Node @M { BUSY_TIME(r, 4Sat2) } }
  @NextSub  to { W } { @Node @M { BUSY_TIME(r, 4Sun1) } }
  @NextSub  to { W } { @Node @M { BUSY_TIME(r, 4Sun2) } }
}
}
}
To fit it onto the page, it is drawn sideways with the subtrees for
two weekends omitted.  We assume that the instance has 28 days,
starting on a Monday, with two shifts per day.
@PP
A @M { BUSY_TIME(r, t) } expression has value 1 when @M { r } is busy
at time @M { t }.  An @M { OR } expression has value 1 when at least
one of its children has value 1.  An @M { INT_SUM_COST } expression
sums the values of its children, compares the result with the limits
(stored in the expression, but not shown here), and calculates a cost,
using a cost function and weight stored in the expression.
# This cost is its value.
# @PP
# The @M { INT_SUM_COST } expression is a child of an expression of type
# @M { COST_SUM }, omitted here, whose value is the sum of its
# children's values.  This is the solution cost, since
# every monitor is represented by an expression tree
# whose root is a child of the @M { COST_SUM } expression.
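The semantics just described can be sketched directly.  This sketch
assumes four weekends of four weekend shifts each and a linear cost
function; all names and constants here are illustrative, not the
solver's code:

```c
#include <assert.h>

/* illustrative constants: 4 weekends, 4 weekend shifts each */
#define WEEKENDS 4
#define SHIFTS_PER_WEEKEND 4

/* OR: 1 when at least one child (a BUSY_TIME value) is 1 */
static int EvalOr(const int busy[SHIFTS_PER_WEEKEND])
{
  for( int i = 0; i < SHIFTS_PER_WEEKEND; i++ )
    if( busy[i] )
      return 1;
  return 0;
}

/* INT_SUM_COST: sum the children's values, compare with the limits,
   and weight the excess (linear cost function assumed) */
static int EvalIntSumCost(const int busy[WEEKENDS][SHIFTS_PER_WEEKEND],
  int min_limit, int max_limit, int weight)
{
  int sum = 0;
  for( int w = 0; w < WEEKENDS; w++ )
    sum += EvalOr(busy[w]);
  if( sum > max_limit )
    return (sum - max_limit) * weight;
  if( sum < min_limit )
    return (min_limit - sum) * weight;
  return 0;
}
```

Three busy weekends against a maximum limit of 2 and weight 10 gives
a cost of 10; against a limit of 3 the cost is 0.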
@PP
Although every expression has a value, different types of expressions
have different types of values.  Most have values of type @C { int };
@M { INT_SUM_COST } expressions have values of type @C { KHE_COST };
and there are also expressions whose values have type @C { float }.
@PP
An @I { external expression } is an expression with no children.
Its value depends on the state of the solution.  For example, the
@M { BUSY_TIME } expressions above are external expressions.  An
@I { internal expression } is an expression with one or more children.
Its value depends on its children's values.  The @M { INT_SUM_COST }
and @M { OR } expressions above are internal expressions.
@PP
External and internal expressions are sometimes handled differently.
For example, although the implementation allows arbitrary common
sub-expressions (that is, it allows any tree to be a subtree of any
number of larger trees), only external expressions utilize this option.
@PP
The KHE platform does not use expression trees; it implements each kind
of constraint with its own data structure.  Expression trees allow more
code sharing than special data structures do:  @M { INT_SUM_COST }, for
example, is used by several constraints.  Another reason for using
expression trees will be given when we come to consider signatures
in detail (Appendix {@NumberOf dynamic_impl.expr.signatures}).
@PP
Here is the base class, @C { KHE_DRS_EXPR }:
@ID @C {
typedef struct khe_drs_expr_rec *KHE_DRS_EXPR;
typedef HA_ARRAY(KHE_DRS_EXPR) ARRAY_KHE_DRS_EXPR;

#define INHERIT_KHE_DRS_EXPR					\
  KHE_DRS_EXPR_TAG			tag;			\
  bool					gathered;		\
  bool					open;			\
  int					postorder_index;	\
  KHE_DRS_RESOURCE			resource;		\
  KHE_DRS_VALUE				value;			\
  KHE_DRS_VALUE				value_ub;		\
  ARRAY_KHE_DRS_PARENT			parents;		\
  ARRAY_KHE_DRS_EXPR			children;		\
  struct khe_drs_open_children_rec	open_children_by_day;	\
  HA_ARRAY_INT				sig_indexes;

struct khe_drs_expr_rec {
  INHERIT_KHE_DRS_EXPR
};
}
The fields lie in a macro to facilitate inheritance, as we'll see.
The @C { tag } field has enumerated type and says which concrete
type of expression this is.  The @C { gathered } field is @C { true }
when the expression has been gathered for opening (explained later)
but not actually opened yet.  The @C { open } field is @C { true }
when the expression is open.
# @PP
# The @C { dom_test } field holds the dominance test
# (Appendix {@NumberOf dynamic.dom}) of this
# expression.  It is assigned when the expression is created, and
# remains fixed.  For example, if this expression has type
# @M { INT_SUM_COST } and represents a constraint with a
# maximum limit, @C { dom_test } will hold a test whose
# type is @C { KHE_DRS_DOM_LE }.  If this expression needs no
# dominance test because it never contributes a value to a signature,
# @C { dom_test } holds a test whose type is @C { KHE_DRS_DOM_UNUSED }.
# The diagrams of Appendix {@NumberOf dynamic.expr.monitors} show
# which dominance test is used in each case.
@PP
Each expression has a unique value of the @C { postorder_index }
field.  Children have smaller values than their parents, so that
if the expressions are sorted by increasing @C { postorder_index },
they appear in postorder.  These fields are set as expressions are
created, and remain fixed.
@PP
The @C { resource } field is set in expressions that represent
resource constraints, to the resource that the constraint applies
to.  It is @C { NULL } in expressions that represent event resource
constraints.
@PP
The @C { value } field contains the value of the expression when a value
is defined.  It was stated earlier that an expression's value could
have type @C { int }, @C { float }, or @C { KHE_COST }.  However,
values of type @C { KHE_COST } are not stored in expressions (instead,
as we will see later, costs are reported immediately to solutions), so
(as we saw earlier) type @C { KHE_DRS_VALUE } is
@ID @C {
typedef union {
  int			i;
  float			f;
} KHE_DRS_VALUE;
}
In an expression @C { e } of type @M { OR }, say, which has an
integer value, the value is @C { e->value.i }.
@PP
The @C { value } field has a defined value in two contexts.  First, when
@C { e } is closed, @C { e }'s value is fixed and its @C { value } field
holds that value.  Indeed, the @C { value } field is assumed to continue
to hold the closed value even after @C { e } opens, but only until
its parents open.  Second, when @C { e } is open, and a solution is
being evaluated which happens to be for @C { e }'s last open day,
@C { value } holds @C { e }'s value temporarily, from when the value
is calculated until its parents have retrieved it.  Otherwise the
@C { value } field is undefined.
@PP
The @C { value_ub } field is a constant upper bound for @C { value },
set immediately after the expression is created, and never changed.
@PP
The @C { parents } field contains pointers to the expression's parents.
Most expressions have one parent, but external expressions may have
several.  @C { KHE_DRS_PARENT } is
@ID @C {
typedef struct khe_drs_parent_rec {
  KHE_DRS_EXPR		expr;
  int			index;
} KHE_DRS_PARENT;

typedef HA_ARRAY(KHE_DRS_PARENT) ARRAY_KHE_DRS_PARENT;
}
and holds the parent plus the child expression's index in
the list of children of the parent.
@PP
Field @C { children } holds the children of this expression.  Fields
@C { open_children_by_day } and @C { sig_indexes } are used only when
the expression is open.  We'll discuss them later.
@PP
As an example of inheritance, here is type @C { KHE_DRS_EXPR_OR },
the type of @M { OR } expressions:
@ID @C {
typedef struct khe_drs_expr_or_rec {
  INHERIT_KHE_DRS_EXPR
  int				closed_state;
} *KHE_DRS_EXPR_OR;
}
It inherits all the fields of @C { KHE_DRS_EXPR }, making a C
typecast from @C { KHE_DRS_EXPR_OR } to @C { KHE_DRS_EXPR } safe.
Its tag field has the enumerated value @C { KHE_DRS_EXPR_OR_TAG }.
@PP
When present in an expression @M { x }, the @C { closed_state } field
holds a summary of the values of @M { x }'s closed children.  It is
always defined, even when @M { x } is open.  (If an expression is
open, its parents must also be open, but not all of its children
need be open.)  In @M { OR } expressions, the closed state is the
number of closed children with value 1.  Consulting this rather than
the closed children themselves avoids visiting closed expressions during
the solve, which is needed to fulfil the promise of running in time
proportional to the number of open objects, not the total number of
objects.
# @PP
# The full list of expression types is given in
# Appendix {@NumberOf dynamic.etypes}, and the particular expression trees
# that need to be built for the various monitor types are presented in
# Appendix {@NumberOf dynamic.expr.monitors}.
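The saving can be sketched as follows, with an illustrative
@C { OR_EXPR } type holding only the closed state and the open
children's values (not the solver's actual layout):

```c
#include <assert.h>

/* illustrative OR expression: closed children are summarized by
   closed_state, so only open children need to be visited */
typedef struct {
  int closed_state;        /* number of closed children with value 1 */
  const int *open_values;  /* values of the open children only */
  int open_count;
} OR_EXPR;

static int OrValue(const OR_EXPR *e)
{
  if( e->closed_state > 0 )   /* some closed child is already 1 */
    return 1;
  for( int i = 0; i < e->open_count; i++ )  /* open children only */
    if( e->open_values[i] == 1 )
      return 1;
  return 0;
}
```

Evaluation time depends on the number of open children, however many
closed children the expression has.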
@End @SubSubAppendix

@SubSubAppendix
  @Title { Construction }
  @Tag { dynamic_impl.expr.construction }
@Begin
@LP
Constructing expression trees is basically a simple matter of 
creating the right objects and linking them together correctly.
There are however a couple of things that deserve some attention.
@PP
The solver uses three private functions, @C { KheDrsExprInitBegin },
@C { KheDrsExprInitEnd }, and @C { KheDrsExprAddChild },
for constructing expression trees.  For example,
suppose we want to construct an @M { OR } expression with
some children.  We do this as follows:
@ID @C {
KHE_DRS_EXPR_OR res;
HaMake(res, drs->arena);
KheDrsExprInitBegin((KHE_DRS_EXPR) res, KHE_DRS_EXPR_OR_TAG, dr, drs);
... initialize fields specific to OR expressions ...
... make child expressions and call KheDrsExprAddChild on each ...
KheDrsExprInitEnd((KHE_DRS_EXPR) res, drs);
res->value_ub.i = 1;
}
@C { HaMake } obtains memory for the new object, @C { res }, as usual.
@C { KheDrsExprInitBegin} initializes its fields that
are common to all expressions, including @C { tag } and
@C { resource }, which vary from one expression to another.  Next,
fields specific to the type of expression being constructed must be
initialized.  For @M { OR } expressions this is just the
@C { closed_state } field.  Then the children of the new expression
must be created, which involves, for each child, carrying out this same
sequence, from @C { KheDrsExprInitBegin } to @C { KheDrsExprInitEnd },
followed by a call to @C { KheDrsExprAddChild } to link parent and child.
@PP
Correct construction requires that @C { KheDrsExprInitEnd } be
called immediately after the children have been constructed and
linked in, but not before.  We can see why by studying the functions.
@C { KheDrsExprInitBegin } is quite trivial, although we will have
to look carefully at @C { KheDrsOpenChildrenInit } later:
@ID @C {
void KheDrsExprInitBegin(KHE_DRS_EXPR e, KHE_DRS_EXPR_TAG tag,
  KHE_DRS_RESOURCE dr, KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  KHE_DRS_OPEN_CHILDREN_INDEX_TYPE index_type;  float float_ub;
  e->tag = tag;
  e->gathered = false;
  e->open = false;
  e->postorder_index = -1;
  e->resource = dr;
  /* e->value and e->value_ub are not initialized here */
  HaArrayInit(e->parents, drs->arena);
  HaArrayInit(e->children, drs->arena);
  index_type = (tag == KHE_DRS_EXPR_SEQUENCE_TAG ?
    KHE_DRS_OPEN_CHILDREN_INDEX_DAY_ADJUSTED :
    KHE_DRS_OPEN_CHILDREN_INDEX_DAY);
  float_ub = (tag == KHE_DRS_EXPR_SUM_FLOAT_TAG);
  KheDrsOpenChildrenInit(&e->open_children_by_day, index_type,
    float_ub, drs);
  HaArrayInit(e->sig_indexes, drs->arena);
}
}
@C { KheDrsExprInitEnd } is more interesting:
@ID @C {
void KheDrsExprInitEnd(KHE_DRS_EXPR e, KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  /* postorder index */
  e->postorder_index = drs->postorder_count++;

  /* closed value; e->value is initialized here */
  KheDrsExprSetClosedValue(e, drs);
}
}
There are two points here.  First, @C { KheDrsExprInitEnd } assigns
@C { e->postorder_index } using a value from the solver.  Clearly,
this will only work as intended when @C { KheDrsExprInitEnd } is
called on @C { e } after it has been called on each of @C { e }'s children.
@PP
Second, @C { KheDrsExprInitEnd } calls @C { KheDrsExprSetClosedValue }
(Appendix {@NumberOf dynamic_impl.expr.opening}) to initialize
@C { e->value }.  Although we mainly use @C { KheDrsExprSetClosedValue }
to find the closed value of @C { e } at the end of a solve, what it does
is just right here:  it sets the closed value of @C { e }, assuming
that the children of @C { e } have their correct closed values, and
that any closed state in @C { e } is correct.  The closed value is
wanted here, because initially all expressions are closed.
@PP
Now here is @C { KheDrsExprAddChild }:
@ID @C {
void KheDrsExprAddChild(KHE_DRS_EXPR parent, KHE_DRS_EXPR child,
  KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  KHE_DRS_PARENT prnt;

  /* link parent and child */
  HnAssert(HaArrayCount(parent->parents) == 0,
    "KheDrsExprAddChild internal error:  too late to add child (1)");
  HnAssert(parent->postorder_index == -1,
    "KheDrsExprAddChild internal error:  too late to add child (2)");
  prnt.expr = parent;
  prnt.index = HaArrayCount(parent->children);
  HaArrayAddLast(child->parents, prnt);
  HaArrayAddLast(parent->children, child);

  /* update state in each parent */
  switch( parent->tag )
  {
    case KHE_DRS_EXPR_OR_TAG:

      KheDrsExprOrAddChild((KHE_DRS_EXPR_OR) parent, child, drs);
      break;

    ...
  }
}
}
The first part is common to all expressions:  it adds @C { child }
to @C { parent }'s list of children, and it adds @C { parent }
to @C { child }'s list of parents.  The second part updates the
state of the parent to include the child, and is specific to each
type of expression, hence the large switch.  Here is one branch:
@ID @C {
void KheDrsExprOrAddChild(KHE_DRS_EXPR_OR eo, KHE_DRS_EXPR child_e,
  KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  if( child_e->value.i == 1 )
    eo->closed_state += 1;
}
}
The closed state of @C { eo } is the number of closed children with
value 1.  All children are closed initially, so this adds 1 to
@C { eo->closed_state } if @C { child_e }'s value is 1.  The value
is well-defined, because @C { KheDrsExprInitEnd } is called on
@C { child_e } before this call is made.
@PP
@C { KheDrsExprAddChild } is declared in the expression construction
submodule, but not defined until after @C { KHE_DRS_EXPR }'s
subtypes, to avoid having to give forward declarations of its
subtype versions.  This is done for each function which switches
on the expression's tag field:  @C { KheDrsExprAddChild },
@C { KheDrsExprChildHasOpened }, @C { KheDrsExprChildDomTest },
@C { KheDrsExprDayDomTest }, @C { KheDrsExprChildHasClosed },
@C { KheDrsExprSetClosedValue }, @C { KheDrsExprLeafSet },
@C { KheDrsExprLeafClear }, and @C { KheDrsExprEvalSignature }.
@End @SubSubAppendix

@SubSubAppendix
@Title { Open day ranges and signatures }
@Tag { dynamic_impl.expr.signatures }
@Begin
@LP
This section explains in detail the values that expressions add to
signatures.  The implementation is part of expression opening and
will be given later, in Appendix {@NumberOf dynamic_impl.expr.opening}.
@PP
We may have used the term @I { open day range } previously, more or
less synonymously with @I { selected day range }.  But now we define
the open day range of an expression precisely.  As an example, we'll
use the constraint that limits the number of busy weekends for resource
@M { r } in a four-week timetable beginning on a Monday.  The weekend
days are 5, 6, 12, 13, 19, 20, 26, and 27.  Here is the expression tree,
showing open day ranges:
@CD @Diag
treehsep { 1.0c }
treevsep { 1.2f }
blabelprox { SW }
{
//0.5f
||0.8c
@HTree {
@Node blabel { 6-27 } @M { COUNTER }
@FirstSub {
  @Node blabel { 5-6 } @M { OR }
  @FirstSub to { W } { @Node blabel { 5-5 } @M { BUSY_TIME(r, 1Sat1) } }
  @NextSub  to { W } { @Node blabel { 5-5 } @M { BUSY_TIME(r, 1Sat2) } }
  @NextSub  to { W } { @Node blabel { 6-6 } @M { BUSY_TIME(r, 1Sun1) } }
  @NextSub  to { W } { @Node blabel { 6-6 } @M { BUSY_TIME(r, 1Sun2) } }
}
@NextSub pathstyle { noline } {
  @Node outlinestyle { noline } { ... }
}
@NextSub {
  @Node blabel { 26-27} @M { OR }
  @FirstSub to { W } { @Node blabel { 26-26 } @M { BUSY_TIME(r, 4Sat1) } }
  @NextSub  to { W } { @Node blabel { 26-26 } @M { BUSY_TIME(r, 4Sat2) } }
  @NextSub  to { W } { @Node blabel { 27-27 } @M { BUSY_TIME(r, 4Sun1) } }
  @NextSub  to { W } { @Node blabel { 27-27 } @M { BUSY_TIME(r, 4Sun2) } }
}
}
}
A review of the external expression types (like @M { BUSY_TIME })
given in Appendix {@NumberOf dynamic_impl.expr.types} will show that
each is affected by what happens on exactly one day.  The open day
range of an external expression contains exactly this one day.
The open day range of an internal expression (like @M { COUNTER }
and @M { OR }) is the smallest range of days which includes the
last day of each of its children's open day ranges.  For example,
the last days of the open day ranges of the children of the
@M { COUNTER } expression above are 6, 13, 20, and 27, so its
open day range is 6-27.  (The reader who expected it to be 5-27
should note that only the last days of the children's ranges matter,
not their first days.)
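@PP
The rule just stated can be sketched in C.  The names and the
simplified day range type here are hypothetical, invented for
illustration rather than taken from the solver:
```c
#include <assert.h>

/* hypothetical simplified day range type, not from the solver */
typedef struct { int first, last; } DAY_RANGE;

/* the open day range of an internal expression: the smallest range
   that includes the last day of each child's open day range */
DAY_RANGE OpenDayRangeOfParent(const DAY_RANGE *children, int n)
{
  DAY_RANGE res = { 0, 0 };
  for( int i = 0;  i < n;  i++ )
  {
    int last = children[i].last;
    if( i == 0 || last < res.first )  res.first = last;
    if( i == 0 || last > res.last )   res.last = last;
  }
  return res;
}
```
Applied to the four @M { OR } children above, whose last days are 6,
13, 20, and 27, this returns 6-27; applied to the four
@M { BUSY_TIME } children of the first @M { OR }, it returns 5-6.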
@PP
The numbers used in open day ranges are open day indexes, not frame
indexes.  The example assumes that all days are open.  If some are
closed, the open day indexes will be different.
# , and if some weekends
# are closed, the closed state of the @M { COUNTER } expression
# will hold the number of closed busy weekends, stored in the
# expression because it is the same for all solutions.
@PP
When we speak of an expression's open days, we mean the days of its
open day range.  For example, we can say that every open expression
has at least one open day, meaning that every open expression has a
non-empty open day range.
@PP
A solve takes a whole set of solutions for some open day @M { d sub i },
and from each of them it makes solutions for day @M { d sub {i + 1} }.
It needs to be able to pick up a solution for day @M { d sub i }, build
a new solution for day @M { d sub {i + 1} } consisting of the solution
for @M { d sub i } plus one day's worth of new assignments, and then set
the new solution aside.  But it can't ignore the constraints and their
costs, because it needs to prune solutions whose cost so far is not less
than the cost of the initial solution, and it needs to implement dominance
testing, as described in Appendix {@NumberOf dynamic_theory.overview}.
Information about a solution's constraints and costs is stored in its
@C { cost } and @C { signature } fields.
@PP
So then, what does a solution for up to day @M { d sub i } need to
store about our example constraint?  Clearly, the number of open busy
weekends up to @M { d sub i }.  This way, as we proceed along any
path in the search tree from the root solution of the tree to a final
complete solution, each solution will record the number of open busy
weekends so far.  It will be easy, at each solution, to combine the
number of open busy weekends up to the previous day with the task
assignment for @M { r } on the new day to find the number of busy
weekends up to the new day.  Then, on the constraint's last open day,
the number of open busy weekends can be added to the number of closed
busy weekends, and the sum compared with the limits to find a cost.
(Actually, we calculate a cost on every day, since it may be useful
in pruning solutions.  See Appendix {@NumberOf dynamic_impl.expr.sum}.)
@PP
But suppose @M { d sub i } is a Saturday.  Then the solution
must also remember whether that Saturday is busy.  It is not enough to
store just a number of busy weekends, because then it is not possible to
say whether a busy next day (Sunday) makes one more busy weekend or not.
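@PP
The running count and the busy-Saturday flag together form a small
state machine.  Here is a hedged sketch of it; the type and function
names are invented for illustration, not taken from the solver:
```c
#include <assert.h>
#include <stdbool.h>

/* hypothetical per-solution state for the busy-weekends example */
typedef struct {
  int  open_weekends;   /* open busy weekends counted so far */
  bool sat_busy;        /* was the most recent Saturday busy? */
} WEEKEND_STATE;

/* advance the state by one weekend day */
WEEKEND_STATE WeekendStep(WEEKEND_STATE s, bool is_saturday, bool busy)
{
  if( is_saturday )
  {
    s.sat_busy = busy;
    if( busy )
      s.open_weekends += 1;   /* a busy Saturday makes a busy weekend */
  }
  else
  {
    /* a busy Sunday adds a weekend only if its Saturday was free */
    if( busy && !s.sat_busy )
      s.open_weekends += 1;
    s.sat_busy = false;
  }
  return s;
}
```
Without @C { sat_busy } the Sunday case could not tell whether a busy
Sunday starts a new busy weekend or continues one already counted.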
# This is why
# the solver represents constraints using expression trees:  a
# solution stores one item of information per solution, not per constraint.
@PP
The reader who ponders this will find that an expression contributes
a value to store on each day of its open day range except its last.
Before its first open day, there is nothing to remember (not counting
closed state, which is the same for each solution so is stored in the
expression).  On days during the open day range other than the last,
there is information from that day and previous days to store.  For
example, a day 7 solution needs to store, for the @M { COUNTER }
expression, the number of open busy weekends so far.  This is true
even though none of that expression's children are open on day 7,
which explains why we use open day ranges rather than open day sets.
On the last open day, the expression's value is found and reported
to its parents.  It becomes the parents' responsibility, so there
is nothing to store on that day, or afterwards.
@PP
Applying this rule to the tree above, information needs to be stored
for the @M { COUNTER } expression on days 6-26,
information for the first @M { OR } expression needs to be stored on day 5,
and so on.  Nothing ever needs to be stored for the @M { BUSY_TIME }
expressions, because their open day ranges contain no days that are not
last.  Nothing is stored for the @M { COUNTER } expression on day
5, because none of its children will have reported anything then.
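@PP
This rule reduces to a one-line predicate.  The following sketch,
using a hypothetical simplified day range type, checks it against
the example tree:
```c
#include <assert.h>
#include <stdbool.h>

/* hypothetical simplified open day range type */
typedef struct { int first, last; } DAY_RANGE;

/* true if an expression with open day range r contributes a value to
   the signature of a solution for day d: every open day except the
   last */
bool ExprStoresOnDay(DAY_RANGE r, int d)
{
  return r.first <= d && d < r.last;
}
```
For the @M { COUNTER } expression (range 6-27) this is true on days
6-26 and false on days 5 and 27; for any @M { BUSY_TIME } expression
(a one-day range) it is always false.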
@PP
It is clear now why we use expression trees to represent constraints.  A
solution stores one item of information (or nothing) per expression, not
per constraint.  The item stored need not be the expression's value.
For example, the @M { COUNTER } expression's value is a cost, but
what is stored for it is an integer number of open busy weekends.
@PP
The signature of a solution for a given day @M { d sub i } consists of
one item of information for each open expression for which @M { d sub i }
is one of its open days other than the last.  The items' types are not
clear at this point, but, looking ahead, it will turn out that the root
of each expression tree will have type @C { KHE_COST }, and the sum of
these costs will be stored in the solution's @C { cost } field; while
the non-root expressions will contribute one @C { int } or @C { float }
each, held in the @C { signature } field.
@PP
When two signatures are compared during dominance testing, each
position along the signature has its own test.  While this keeps
things simple, it does cause some cases of dominance to be missed.
For example, suppose that the current day is @C { 2Sat }, and some
resource is busy on that day in solution @M { S sub 1 } but not in
solution @M { S sub 2 }.  If there is a maximum limit on the number
of busy weekends, @M { S sub 1 } cannot dominate @M { S sub 2 },
because the `@M { non <= }' test at the @M { OR } expression
affected by @C { 2Sat } fails.  But suppose the `@M { non <= }'
test at the enclosing @M { COUNTER } expression succeeds with one
weekend to spare.  Then @M { S sub 1 } does in fact dominate
@M { S sub 2 }.  This problem has since been fixed, by
@I { correlated expressions }, but we won't delve into them now.
# Some day the author might
# tighten up the implementation to include such cases.  The loss
# is minor, but every bit helps.
@PP
We'll see in Appendix {@NumberOf dynamic_impl.solns} that the parts of
each signature made by expressions representing resource constraints
are calculated separately, before @C { KheDrsSolnExpand } generates
any new solutions, while the parts representing event resource
constraints are calculated as new solutions are generated.  This
distinction is irrelevant to the actual process of calculating the
signature, and expression objects are unaware of it.
@PP
Here are two points that the author is inclined to view as rather
profound.  The reader can make up his own mind.  First, each
expression only ever needs to store a small constant amount of
information in a signature.  It never stores anything complicated, such
as a set of values.  However, if we were supporting the avoid split
assignments constraint we would need to store a set:  the set of
distinct resources assigned so far.  So this first point may be just luck.
@PP
Second, although signatures were created to ensure that costs can
be calculated efficiently as solving proceeds, it turns out that
they are just what is needed for dominance testing too.  This has
something to do with the fact that a signature contains complete
information about the state of the constraints in its solution,
but still it seems somewhat miraculous that the form in which
this information is held for cost calculating should also suit
dominance testing.
@End @SubSubAppendix

@SubSubAppendix
  @Title { Open children }
  @Tag { dynamic_impl.expr.open_children }
@Begin
@LP
Our next job is to explain how expressions are opened, but before
that we need to consider type @C { KHE_DRS_OPEN_CHILDREN }, whose
functions do most of the actual work of opening expressions.  Each
expression has a field of this type, holding its open children:
@ID @C {
typedef struct khe_drs_open_children_rec {
  ARRAY_KHE_DRS_OPEN_CHILD		open_children;
  KHE_INTERVAL				index_range;
  KHE_DRS_OPEN_CHILDREN_INDEX_TYPE	index_type;
  bool					float_ub;
  HA_ARRAY_INT				child_indexes;
} *KHE_DRS_OPEN_CHILDREN;
}
It is a pointer type as usual, but (as we saw above) the field
within @C { KHE_DRS_EXPR } is expanded, that is, it is a struct
rather than a pointer to a struct:
@ID @C {
#define INHERIT_KHE_DRS_EXPR					\
  ...								\
  struct khe_drs_open_children_rec	open_children_by_day;	\
  ...
}
It is done this way purely to save memory.
@PP
The @C { open_children } field holds its expression's open children.
@C { KHE_DRS_OPEN_CHILD } is
@ID @C {
typedef struct khe_drs_open_child_rec {
  KHE_DRS_EXPR		child_e;
  int			open_index;
  KHE_DRS_VALUE		rev_cum_value_ub;
} KHE_DRS_OPEN_CHILD;
}
This is a non-pointer type, again to save space.  The @C { child_e }
field is the child itself; @C { open_index } is the child's
@I { open index }, of which more in a moment; and
@C { rev_cum_value_ub } is the sum, over all open children
@C { x } from here to the end of the @C { open_children } array,
of @C { x->value_ub }.
@PP
Returning to type @C { KHE_DRS_OPEN_CHILDREN }, the @C { index_range }
field contains the minimum and maximum, over the open children,
of their open indexes.  If the @C { open_children } array is empty, then
@C { index_range.first > index_range.last }, except that in a leaf
expression, @C { index_range.first } and @C { index_range.last } are
both set to some value whose provenance does not concern us here.
@PP
The open index of a child node is usually its last open day, the
day on which it is expected to report its final value to its parent.
However, other kinds of open index are occasionally used, specified
by the @C { index_type } field:
@ID @C {
typedef enum {
  KHE_DRS_OPEN_CHILDREN_INDEX_DAY,
  KHE_DRS_OPEN_CHILDREN_INDEX_DAY_ADJUSTED,
  KHE_DRS_OPEN_CHILDREN_INDEX_SHIFT
} KHE_DRS_OPEN_CHILDREN_INDEX_TYPE;
}
Value @C { KHE_DRS_OPEN_CHILDREN_INDEX_DAY } indicates that
each child's open index is its last open day, the usual value.
@C { KHE_DRS_OPEN_CHILDREN_INDEX_DAY_ADJUSTED } is the same
except that some of the values are adjusted, as we'll see later.
@C { KHE_DRS_OPEN_CHILDREN_INDEX_SHIFT } is quite different;
it says that the open indexes are shift indexes.
@PP
The @C { float_ub } field is @C { true } when the children's
@C { value_ub } fields (and indeed their @C { value } fields)
all contain values of type @C { float }.
@PP
The @C { child_indexes } array is used to speed up access to the
elements of @C { open_children } with a given open index, assuming that
open indexes are monotone non-decreasing as one proceeds along the
sequence of open children (which is always the case).  For each
@C { i } from @C { index_range.first } to @C { index_range.last + 1 }
inclusive,
@ID @C {
HaArray(oc->child_indexes, i - oc->index_range.first)
}
is the number of elements of @C { oc->open_children } whose open index is
less than @C { i }.  For example, if @C { i == oc->index_range.first }
the result is 0, and if @C { i == oc->index_range.last + 1 } the result
is @C { HaArrayCount(oc->open_children) }.  Later we'll see an iterator,
defined by a macro, that uses @C { child_indexes } to efficiently
visit all children with a given open index.  
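@PP
The layout just described can be checked with a standalone sketch of
the computation, using plain arrays in place of @C { HA_ARRAY_INT }
and hypothetical names:
```c
#include <assert.h>

/* hypothetical standalone version of the child_indexes computation:
   open_index[] holds the non-decreasing open indexes of n children,
   first is index_range.first, and ci[] receives one entry for each
   i from first to last + 1, namely the number of children whose open
   index is less than i.  Returns the number of entries written */
int BuildChildIndexes(const int *open_index, int n, int first, int *ci)
{
  int index = first - 1, k = 0;
  for( int i = 0;  i < n;  i++ )
    while( open_index[i] > index )
    {
      index += 1;
      ci[k++] = i;
    }
  ci[k++] = n;   /* sentinel entry for index last + 1 */
  return k;
}
```
For example, four children with open indexes 5, 5, 6, and 6 give the
array 0, 2, 4: no open indexes are less than 5, two are less than 6,
and all four are less than 7.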
@PP
The functions begin with @C { KheDrsOpenChildrenInit }, which
initializes an open children object (giving it no children and an empty
range), @C { KheDrsOpenChildrenClear }, which clears an open children
object back to that state, and @C { KheDrsOpenChildrenCount },
which returns the length of the @C { open_children } array.  After that
come several functions used to keep a sequence of open children
up to date as children are added.  The first of these is
@ID @C {
void KheDrsOpenChildrenUpdateIndexRange(KHE_DRS_OPEN_CHILDREN oc)
{
  int first_index, last_index;
  if( HaArrayCount(oc->open_children) == 0 )
    oc->index_range = KheIntervalMake(1, 0);
  else
  {
    first_index = HaArrayFirst(oc->open_children).open_index;
    last_index = HaArrayLast(oc->open_children).open_index;
    oc->index_range = KheIntervalMake(first_index, last_index);
  }
}
}
This brings @C { oc->index_range } up to date with any change
in the open children.  It assumes that the open children's open
indexes are set and are monotone non-decreasing as we proceed
along @C { oc->open_children }.  Next is a function which brings
the @C { child_indexes } array up to date, assuming that
@C { oc->index_range } is up to date:
@ID @C {
void KheDrsOpenChildrenUpdateChildIndexes(KHE_DRS_OPEN_CHILDREN oc)
{
  int index, i;  KHE_DRS_OPEN_CHILD open_child;
  HaArrayClear(oc->child_indexes);
  index = oc->index_range.first - 1;
  HaArrayForEach(oc->open_children, open_child, i)
  {
    while( open_child.open_index > index )
    {
      index++;
      HaArrayAddLast(oc->child_indexes, i);
    }
  }
  HaArrayAddLast(oc->child_indexes, i);
}
}
We leave the reader to verify that this sets @C { oc->child_indexes }
to the value described above.  Next comes a function to bring the
@C { rev_cum_value_ub } field in each open child up to date:
@ID @C {
void KheDrsOpenChildrenUpdateUpperBounds(KHE_DRS_OPEN_CHILDREN oc)
{
  int i, sum_i;  KHE_DRS_OPEN_CHILD open_child;  float sum_f;
  if( oc->float_ub )
  {
    /* float version */
    sum_f = 0.0;
    HaArrayForEachReverse(oc->open_children, open_child, i)
    {
      sum_f += open_child.child_e->value_ub.f;
      HaArray(oc->open_children, i).rev_cum_value_ub.f = sum_f;
    }
  }
  else
  {
    /* int version */
    sum_i = 0;
    HaArrayForEachReverse(oc->open_children, open_child, i)
    {
      sum_i += open_child.child_e->value_ub.i;
      HaArray(oc->open_children, i).rev_cum_value_ub.i = sum_i;
    }
  }
}
}
It uses @C { oc->float_ub } to decide whether @C { int } or
@C { float } values are in use.  Next comes a small function
for finding the open shift index of the shift that a given
expression monitors:
@ID {0.95 1.0} @Scale @C {
int KheDrsExprOpenShiftIndex(KHE_DRS_EXPR e)
{
  KHE_DRS_EXPR_ASSIGNED_TASK eat;  int res;
  eat = (KHE_DRS_EXPR_ASSIGNED_TASK) e;
  res = eat->task_on_day->encl_dt->encl_dmt->encl_shift->open_shift_index;
  HnAssert(res >= 0, "KheDrsExprOpenShiftIndex internal error 2");
  return res;
}
}
This function only applies to assigned task expressions.
@PP
After all these preparations we are ready for the function that
adds a child to the sequence of open children:
@ID {0.92 1.0} @Scale @C {
void KheDrsOpenChildrenAddChild(KHE_DRS_OPEN_CHILDREN oc,
  KHE_DRS_EXPR child_e)
{
  int i, open_index, prev_open_index;  KHE_DRS_OPEN_CHILD tmp, open_child;

  /* make the child's open index */
  switch( oc->index_type )
  {
    case KHE_DRS_OPEN_CHILDREN_INDEX_DAY:

      /* open index is child_e's last day */
      open_index = child_e->open_children_by_day.index_range.last;
      break;

    case KHE_DRS_OPEN_CHILDREN_INDEX_DAY_ADJUSTED:

      /* open index is child_e's last day after adjustment */
      if( HaArrayCount(oc->open_children) > 0 )
      {
	prev_open_index = HaArrayLast(oc->open_children).open_index;
	if( child_e->open_children_by_day.index_range.last < prev_open_index )
	  child_e->open_children_by_day.index_range.last = prev_open_index;
      }
      open_index = child_e->open_children_by_day.index_range.last;
      break;

    case KHE_DRS_OPEN_CHILDREN_INDEX_SHIFT:

      /* open index is child_e's shift index */
      open_index = KheDrsExprOpenShiftIndex(child_e);
      break;

    default:
      HnAbort("KheDrsOpenChildrenAddChild internal error");
      open_index = 0;  /* keep compiler happy */
  }

  /* add child_e to oc->children in sorted position */
  ... see below ...

  /* update oc->index_range, oc->child_indexes, and rev_cum_value_ub fields */
  ... see below ...
}
}
The first paragraph sets @C { open_index } to the open index of
@C { child_e }.  If the index type is @C { KHE_DRS_OPEN_CHILDREN_INDEX_DAY },
this is the value @C { index_range.last } from @C { child_e }'s open children.
If the index type is @C { KHE_DRS_OPEN_CHILDREN_INDEX_DAY_ADJUSTED }
it is this same value, except that if its value is out of order
it is increased until it isn't.  Finally, if the index type is
@C { KHE_DRS_OPEN_CHILDREN_INDEX_SHIFT }, the open index is the
open shift index.
@PP
The second paragraph inserts @C { child_e }, or rather a new
open child object containing @C { child_e }, into the sequence
of open children, ensuring that the children's open indexes
are sorted into non-decreasing order as required:
@ID {0.92 1.0} @Scale @C {
/* add child_e to oc->children in sorted position */
tmp = KheDrsOpenChildMake(child_e, open_index);
HaArrayAddLast(oc->open_children, tmp);  /* not really, just to make space */
for( i = HaArrayCount(oc->open_children) - 2;  i >= 0;  i-- )
{
  open_child = HaArray(oc->open_children, i);
  if( open_child.open_index <= open_index )
    break;
  HaArrayPut(oc->open_children, i + 1, open_child);
}
HaArrayPut(oc->open_children, i + 1, tmp);  /* for real this time */
}
The last paragraph updates the index range, child indexes, and upper
bounds:
@ID {0.92 1.0} @Scale @C {
/* update oc->index_range, oc->child_indexes, and rev_cum_value_ub fields */
KheDrsOpenChildrenUpdateIndexRange(oc);
KheDrsOpenChildrenUpdateChildIndexes(oc);
KheDrsOpenChildrenUpdateUpperBounds(oc);
}
These three functions are given above.  All of this is arguably slower
than it needs to be, but since it is only done during opening that
hardly matters.
@PP
A similar but much simpler function deletes a child from the
sequence of open children, keeping everything up to date:
@ID {0.92 1.0} @Scale @C {
void KheDrsOpenChildrenDeleteChild(KHE_DRS_OPEN_CHILDREN oc,
  KHE_DRS_EXPR child_e)
{
  KHE_DRS_OPEN_CHILD open_child;  int i;

  HaArrayForEach(oc->open_children, open_child, i)
    if( open_child.child_e == child_e )
    {
      /* delete and shift here */
      HaArrayDeleteAndShift(oc->open_children, i);

      /* update oc->index_range, oc->child_indexes, and rev_cum_value_ub */
      KheDrsOpenChildrenUpdateIndexRange(oc);
      KheDrsOpenChildrenUpdateChildIndexes(oc);
      KheDrsOpenChildrenUpdateUpperBounds(oc);

      /* all done */
      return;
    }

  /* should never get here */
  HnAbort("KheDrsOpenChildrenDeleteChild internal error");
}
}
Again, this is slower than it needs to be, but it is done
only during closing.
@PP
With the open children in good order, several operations
are available.  First we have
@ID @C {
int KheDrsOpenChildrenBefore(KHE_DRS_OPEN_CHILDREN oc, int index)
{
  if( index < oc->index_range.first )
    return 0;
  else if( index > oc->index_range.last )
    return HaArrayCount(oc->open_children);
  else
    return HaArray(oc->child_indexes, index - oc->index_range.first);
}
}
This returns the number of open children whose open index is
less than @C { index }.  At the time of writing, the author
is unsure whether the first two cases, which return correct
values for out-of-range indexes, are needed.  And
@ID {0.95 1.0} @Scale @C {
int KheDrsOpenChildrenAtOrAfter(KHE_DRS_OPEN_CHILDREN oc, int index)
{
  return HaArrayCount(oc->open_children) - KheDrsOpenChildrenBefore(oc, index);
}
}
returns the number of open children whose open index is greater
than or equal to @C { index }.
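@PP
Given the @C { child_indexes } layout, both lookups reduce to simple
array accesses.  Here is a standalone sketch with hypothetical names
and plain arrays:
```c
#include <assert.h>

/* hypothetical standalone sketch of the lookups; ci[] is the
   child_indexes array for n children whose open indexes all lie
   in [first, last] */
int OpenChildrenBefore(const int *ci, int first, int last, int n,
  int index)
{
  if( index < first )
    return 0;                    /* every open index is >= first */
  else if( index > last )
    return n;                    /* every open index is <= last */
  else
    return ci[index - first];
}

int OpenChildrenAtOrAfter(const int *ci, int first, int last, int n,
  int index)
{
  return n - OpenChildrenBefore(ci, first, last, n, index);
}
```
With open indexes 5, 5, 6, and 6 (so @C { ci } is 0, 2, 4), there are
no children before index 5, two before index 6, and two at or after
index 6.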
@PP
There is a function for finding a reverse cumulative upper bound
from a certain point on:
@ID @C {
int KheDrsOpenChildrenUpperBoundInt(KHE_DRS_OPEN_CHILDREN oc,
  int open_index)
{
  int i;
  HnAssert(!oc->float_ub,
    "KheDrsOpenChildrenUpperBoundInt internal error 1");
  if( open_index < oc->index_range.first )
    HnAbort("KheDrsOpenChildrenUpperBoundInt internal error 2");
  if( open_index > oc->index_range.last )
    return 0;
  else
  {
    i = HaArray(oc->child_indexes, open_index - oc->index_range.first);
    return HaArray(oc->open_children, i).rev_cum_value_ub.i;
  }
}
}
This one finds the total reverse cumulative upper bound:
@ID @C {
int KheDrsOpenChildrenUpperBoundIntAll(KHE_DRS_OPEN_CHILDREN oc)
{
  HnAssert(!oc->float_ub,
    "KheDrsOpenChildrenUpperBoundIntAll internal error 1");
  if( HaArrayCount(oc->child_indexes) == 0 )
    return 0;
  else
    return HaArrayFirst(oc->open_children).rev_cum_value_ub.i;
}
}
@C { KheDrsOpenChildrenUpperBoundFloat } and
@C { KheDrsOpenChildrenUpperBoundFloatAll } are the same,
except that they assume that values are floating-point.
@PP
Next we have two iterators, implemented by macros that expand
to C @C { for } statements.  The first iterates over each open
index, as stored in the index range:
@ID {0.95 1.0} @Scale @C {
#define KheDrsOpenChildrenForEachIndex(oc, i)			\
  for( i = (oc)->index_range.first;  i <= (oc)->index_range.last;  i++ )
}
The second iterates over all open children @C { x } with a given open
index @C { index }:
@ID {0.95 1.0} @Scale @C {
#define KheDrsOpenChildrenForEach(oc, index, x, i)		        \
  i1 = KheDrsOpenChildrenBefore((oc), (index));			        \
  i2 = KheDrsOpenChildrenBefore((oc), (index) + 1);		        \
  for( (i) = i1;						        \
   (i) < i2 ? ((x) = HaArray((oc)->open_children, (i)).child_e, true) : \
   false; (i)++ )
}
This relies on the sentinel value at the end of @C { oc->child_indexes }.
@PP
Next come four simple functions (we won't show them) for comparing an
index with the index range:  @C { KheDrsOpenChildrenIndexInRange },
@C { KheDrsOpenChildrenIndexIsFirst },
@C { KheDrsOpenChildrenIndexIsFirstOrLess },
and @C { KheDrsOpenChildrenIndexIsLast }.  Also,
@ID @C {
int KheDrsOpenChildrenWithIndex(KHE_DRS_OPEN_CHILDREN oc, int index)
{
  return KheDrsOpenChildrenBefore(oc, index + 1) - 
    KheDrsOpenChildrenBefore(oc, index);
}
}
which returns the number of open children with a given open index.
@End @SubSubAppendix

@SubSubAppendix
    @Title { Opening }
    @Tag { dynamic_impl.expr.opening }
@Begin
@LP
This section explains how expressions are opened.  An expression needs
to be opened when its value may be affected by an assignment to some
open task.
# Expression opening includes setting up for signatures,
# so this section also implements the ideas from the previous section.
@PP
The first step in opening expressions is to build a complete
list of all expressions that need to be opened, in field
@C { open_exprs } of the solver.  This is done by calls to
this function:
@ID @C {
void KheDrsExprGatherForOpening(KHE_DRS_EXPR e,
  KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  KHE_DRS_PARENT prnt;  int i;
  if( !e->gathered )
  {
    e->gathered = true;
    HaArrayAddLast(drs->open_exprs, e);
    HaArrayForEach(e->parents, prnt, i)
      KheDrsExprGatherForOpening(prnt.expr, drs);
  }
}
}
Whenever @C { e } should open, @C { KheDrsExprGatherForOpening } is
called.  If @C { e->gathered } is @C { true }, meaning that @C { e }
has already been gathered, this does nothing.  Otherwise it sets
@C { e->gathered } to @C { true } to ensure that @C { e } will not be
gathered again on this solve, adds @C { e } to @C { drs->open_exprs },
and gathers its parents (an open expression's parents must also be open).
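@PP
The gathering pass is a straightforward marked traversal up the
expression DAG.  Here is a miniature standalone version, with
hypothetical types invented for illustration:
```c
#include <assert.h>
#include <stdbool.h>

#define MAX_PARENTS 4

/* hypothetical miniature expression node: a gathered flag and a
   list of parents */
typedef struct expr_rec {
  bool gathered;
  int parent_count;
  struct expr_rec *parents[MAX_PARENTS];
} EXPR;

static EXPR *open_exprs[16];
static int open_count = 0;

/* mark e and everything above it in the DAG, each node once */
void Gather(EXPR *e)
{
  if( !e->gathered )
  {
    e->gathered = true;
    open_exprs[open_count++] = e;
    for( int i = 0;  i < e->parent_count;  i++ )
      Gather(e->parents[i]);
  }
}
```
The @C { gathered } flag is what makes this safe on a DAG: two leaves
sharing a parent cause that parent to be gathered only once.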
@PP
We have already seen the calls to @C { KheDrsExprGatherForOpening }
which start the gathering process, in @C { KheDrsResourceOnDayOpen }:
@ID @C {
open_day_range = KheDrsDayRangeMake(open_day_index, open_day_index);
HaArrayForEach(drd->external_today, e, i)
{
  e->open_children_by_day.index_range = open_day_range;
  KheDrsExprGatherForOpening(e, drs);
}
}
and in @C { KheDrsTaskOpen }, which we won't show again.
# @ID @C {
# open_day_range = KheDrsDayRangeMake(di, di);
# HaArrayForEach(dtd->external_today, e, j)
# {
#   e->open_children_by_day.range = open_day_range;
#   KheDrsExprGatherForOpening(e, drs);
# }
# }
These gather all external expressions that need to be opened,
because they depend on what an open resource on day or task on
day is doing.  They also set the open day range in each external
expression to the single day that the expression is affected by.
Then @C { KheDrsExprGatherForOpening } gathers their ancestors,
which accounts for all expressions that need to be opened.
@PP
After all expressions have been gathered, they are sorted by
increasing postorder index and opened.  The code for this is
far ahead of where we are now, in @C { KheDrsSolveOpen }:
@ID @C {
HaArraySort(drs->open_exprs, &KheDrsExprPostorderCmp);
HaArrayForEach(drs->open_exprs, e, i)
  KheDrsExprOpen(e, drs);
HaArrayForEach(drs->open_exprs, e, i)
  KheDrsExprNotifySigners(e, drs);
}
Sorting ensures that parents are opened after their children.
@C { KheDrsExprNotifySigners } informs various signers
that @C { e } has opened, as required.  We need to do this in a
separate pass over the expressions, but at the time of writing the
author has forgotten why.
@PP
At the moment each expression opens, it calls @C { KheDrsExprChildHasOpened }
once for each parent to inform it that one of its children has opened:
@ID {0.90 1.0} @Scale @C {
void KheDrsExprChildHasOpened(KHE_DRS_EXPR e, KHE_DRS_EXPR child_e,
  int child_index, KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  switch( e->tag )
  {
    case KHE_DRS_EXPR_OR_TAG:

      KheDrsExprOrChildHasOpened((KHE_DRS_EXPR_OR) e,
	child_e, child_index, drs);
      break;

    ...
  }
}
}
Here @C { child_e } is the child that has just opened, and @C { e }
is the parent, not yet opened.  Clearly @C { e } must be an internal
expression, since it has a child.  This function is just a type
switch; each case updates @C { e } to take account of the fact
that @C { child_e } has opened.  Here is one branch of the switch:
# @PP
# The first step is to add the child to the parent's list of open
# children, in increasing last open day index order.  As shown,
# this is done differently when the parent is an @M { INT_SEQ_COST }
# object (Appendix {@NumberOf dynamic_impl.expr.open_children}),
# although the result is substantially the same in both cases.
# @PP
# The second part of @C { KheDrsExprChildHasOpened } updates the
# state of the parent to take account of the opening of the child.
# This is done differently depending on the type of the parent, so
# this part is a large switch on the parent's type tag.  Here is one
# branch of the switch:
@ID {0.98 1.0} @Scale @C {
void KheDrsExprOrChildHasOpened(KHE_DRS_EXPR_OR eo,
  KHE_DRS_EXPR child_e, int child_index, KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  KheDrsOpenChildrenAddChild(&eo->open_children_by_day, child_e);
  if( child_e->value.i == 1 )
    eo->closed_state -= 1;
}
}
The first step is to add @C { child_e } to the list of open
children of @C { eo }.  Then, since within @M { OR } expressions
the @C { closed_state } field holds the number of closed children
whose value is 1, it has to be reduced by 1 if @C { child_e }'s
value is 1, since @C { child_e } is no longer closed.
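@PP
The @C { closed_state } bookkeeping can be sketched in isolation.
The evaluation rule below is an assumption inferred from the
description of @C { closed_state }, not the solver's actual code:
```c
#include <assert.h>

/* hypothetical sketch of the OR bookkeeping; closed_state holds the
   number of closed children whose value is 1 */
typedef struct { int closed_state; } OR_NODE;

/* a closed child with the given value opens: its value no longer
   counts towards closed_state */
void OrChildHasOpened(OR_NODE *eo, int child_value)
{
  if( child_value == 1 )
    eo->closed_state -= 1;
}

/* assumed evaluation rule: the OR is 1 if any closed child, or any
   open child value, is 1 */
int OrEval(const OR_NODE *eo, const int *open_values, int n)
{
  if( eo->closed_state > 0 )
    return 1;
  for( int i = 0;  i < n;  i++ )
    if( open_values[i] == 1 )
      return 1;
  return 0;
}
```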
@PP
There are two important points here.  First, while an expression
is closed, its value is up to date, and does not change during the
current solve.  When a closed expression is opened, as @C { child_e }
is opened here, it retains its closed value for some time, at least
until its parents are opened.  So it is safe here to access
@C { child_e->value.i }.  Second, this code only touches open
expressions.  It avoids closed children, as it must if
we are to meet our efficiency goals.
@PP
Here now is @C { KheDrsExprOpen }:
@ID @C {
void KheDrsExprOpen(KHE_DRS_EXPR e, KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  KHE_DRS_PARENT prnt;  int i;

  /* inform e's parents that e is now open */
  e->gathered = false;
  e->open = true;
  HaArrayForEach(e->parents, prnt, i)
    KheDrsExprChildHasOpened(prnt.expr, e, prnt.index, drs);

  /* if e is external, clear its value */
  if( e->tag <= KHE_DRS_EXPR_WORK_DAY_TAG )
    KheDrsExprLeafClear(e, drs);
}
}
When @C { KheDrsExprOpen(e, drs) } begins, @C { e } is considered
to be open.  So the first step is to set @C { e->gathered } to
@C { false } (`gathered' means `gathered but not opened'), set
@C { e->open } to @C { true }, and inform @C { e }'s parents that
@C { e } has opened, by calling @C { KheDrsExprChildHasOpened } on
each of them.  After that, if @C { e } is external, searching assumes
that @C { e }'s initial value is correct for when there are no
assignments of open tasks to open resources.  So @C { KheDrsExprOpen }
calls @C { KheDrsExprLeafClear } to give this value to @C { e }.
# @PP
# If @C { e } is internal,
# by the time @C { KheDrsExprOpen(e, drs) } is called, all
# of @C { e }'s open children have made their calls to
# @C { KheDrsExprChildHasOpened }.  So @C { e->open_children_by_day }
# is finalized (for this solve), except for its @C { child_indexes }
# field, which is finalized here, by the
# call to @C { KheDrsOpenChildrenBuildDayChildIndexes }.
@PP
After all the expressions are opened, as shown above we re-traverse
them and call @C { KheDrsExprNotifySigners }:
@ID @C {
void KheDrsExprNotifySigners(KHE_DRS_EXPR e,
  KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  if( e->tag <= KHE_DRS_EXPR_WORK_DAY_TAG )
  {
    /* external expression; nothing to do */
  }
  else if( e->resource != NULL )
  {
    /* add e to its resource on day signers */
    KheDrsExprNotifyResourceSigners(e, drs);
  }
  else
  {
    /* add e to its cover signers */
    HnAssert(e->tag == KHE_DRS_EXPR_COUNTER_TAG,
      "KheDrsExprNotifySigners: internal error");
    KheDrsExprCounterNotifyCoverSigners((KHE_DRS_EXPR_COUNTER) e, drs);
  }
}
}
We previously gave a detailed description of each kind of signer,
including the expressions and dominance tests each kind requires
(Appendix {@NumberOf dynamic_impl.sig.signers}).  We're now about
to make that happen, but organized for each expression rather than
for each signer.
@PP
If @C { e } is derived from a resource constraint, the code for
enrolling it into its signers is
@ID @C {
void KheDrsExprNotifyResourceSigners(KHE_DRS_EXPR e,
  KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  KHE_DRS_DAY day;  KHE_DRS_RESOURCE_ON_DAY drd;  int di, sig_index;
  KHE_DRS_SIGNER dsg;  KHE_DRS_EXPR_COUNTER ec;

  HaArrayClear(e->sig_indexes);
  KheDrsOpenChildrenForEachIndex(&e->open_children_by_day, di)
  {
    day = HaArray(drs->open_days, di);
    drd = KheDrsResourceOnDay(e->resource, day);
    dsg = KheDrsResourceOnDaySigner(drd);
    if( KheDrsSignerAddExpr(dsg, e, drs, &sig_index) )
      HaArrayAddLast(e->sig_indexes, sig_index);
  }
}
}
First, we clear the @C { sig_indexes } array.  Then we call
@C { KheDrsSignerAddExpr } for each open day, to add
@C { e } to the signer for resource on day @C { drd }.  This ensures
that @C { e } is called back for evaluation when we are constructing
the signature for that resource on day.  If it returns @C { true },
that means that this is not @C { e }'s last day, so @C { e } needs
to reserve a position in signatures controlled by the signer to
store its state.  This position is returned in value @C { sig_index }
and appended to @C { e->sig_indexes }.
@PP
If @C { e } is an internal expression derived from an event resource
constraint, then as we stated above it must be a @M { COUNTER } expression.
The code for notifying its signers is
@ID @C {
void KheDrsExprCounterNotifyCoverSigners(KHE_DRS_EXPR_COUNTER ec,
  KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  int di, si, i, j, sig_index;  KHE_DRS_SHIFT ds;
  KHE_DRS_DAY day;  KHE_DRS_SHIFT_PAIR dsp;  KHE_DRS_SIGNER dsg;

  /* notify day signers */
  HaArrayClear(ec->sig_indexes);
  KheDrsOpenChildrenForEachIndex(&ec->open_children_by_day, di)
  {
    day = HaArray(drs->open_days, di);
    dsg = KheDrsDaySigner(day);
    if( KheDrsSignerAddExpr(dsg, (KHE_DRS_EXPR) ec, drs, &sig_index) )
      HaArrayAddLast(ec->sig_indexes, sig_index);
  }

  /* notify shift signers */
  KheDrsOpenChildrenForEachIndex(&ec->open_children_by_shift, si)
  {
    ds = HaArray(drs->open_shifts, si);
    dsg = KheDrsShiftSigner(ds);
    KheDrsSignerAddExpr(dsg, (KHE_DRS_EXPR) ec, drs, &sig_index);
  }

  /* notify shift pair signers */
  KheDrsOpenChildrenForEachIndex(&ec->open_children_by_day, di)
  {
    day = HaArray(drs->open_days, di);
    HaArrayForEach(day->shifts, ds, i)
      HaArrayForEach(ds->shift_pairs, dsp, j)
      {
	dsg = KheDrsShiftPairSigner(dsp);
	KheDrsSignerAddExpr(dsg, (KHE_DRS_EXPR) ec, drs, &sig_index);
      }
  }
}
}
Each paragraph follows the pattern set by
@C { KheDrsExprNotifyResourceSigners }:  it retrieves some relevant
signers and enrols @C { ec } into each of them.  In this case they
are not resource on day signers; rather, they are day signers,
shift signers, and shift pair signers.  But @C { sig_index } values
are only needed, and only stored, for day signers.  The iterator
@ID @C {
KheDrsOpenChildrenForEachIndex(&ec->open_children_by_shift, si)
}
visits all shifts that @C { ec } is connected with.
# visits all these shifts, but it may also visit some that lie
# between the first and last shifts but are not themselves relevant,
# hence the test
# @ID @C {
# KheDrsIntSumCostNeedsShiftEval(eisc, ds, drs, &dom_test)
# }
# which succeeds only on those shifts that actually affect @C { e }.
# This is not done when iterating over days, because a state has to
# be calculated and stored even on days that do not directly affect
# @C { e }.  After this test, the code adds @C { e } to @C { ds }
# (that is, to its signer) and optionally adds a dominance test as
# well.  There is no need to remember the position of the dominance
# test, because evaluation during the construction of shift assignment
# solutions does not retrieve anything from that position; the stored
# state is used only for dominance testing.
@PP
We saw @C { KheDrsSignerAddExpr } previously
(Appendix {@NumberOf dynamic_impl.sig.signers}).  A key part of
it was a function called @C { KheDrsExprEvalType } that decided
whether expression @C { e } needs to be added to signer @C { dsg },
and if so whether a dominance test is needed, because @C { dsg }'s
day is not @C { e }'s last day.  Here is @C { KheDrsExprEvalType }:
@ID {0.90 1.0} @Scale @C {
KHE_DRS_EXPR_EVAL_TYPE KheDrsExprEvalType(KHE_DRS_EXPR e, KHE_DRS_SIGNER dsg,
  KHE_DYNAMIC_RESOURCE_SOLVER drs, KHE_DRS_DOM_TEST *dom_test)
{
  int di, si, si1, si2, scount;  KHE_DRS_EXPR_COUNTER ec;
  di = dsg->encl_day->open_day_index;
  if( dom_test != NULL )  *dom_test = NULL;
  switch( dsg->type )
  {
    case KHE_DRS_SIGNER_DAY:
    case KHE_DRS_SIGNER_RESOURCE_ON_DAY:

      if( !KheDrsOpenChildrenIndexInRange(&e->open_children_by_day, di) )
	return KHE_DRS_EXPR_EVAL_NO;
      else if( KheDrsOpenChildrenIndexIsLast(&e->open_children_by_day, di) )
	return KHE_DRS_EXPR_EVAL_LAST;
      else
      {
	if( dom_test != NULL )  *dom_test = KheDrsExprDomTest(e, di, drs);
	return KHE_DRS_EXPR_EVAL_NOT_LAST;
      }

    case KHE_DRS_SIGNER_SHIFT:

      ec = (KHE_DRS_EXPR_COUNTER) e;
      si = dsg->u.shift->open_shift_index;
      scount = KheDrsOpenChildrenWithIndex(&ec->open_children_by_shift, si);
      return KheDrsExprEvalTypeShift(ec, di, scount, drs, dom_test);

    case KHE_DRS_SIGNER_SHIFT_PAIR:

      ec = (KHE_DRS_EXPR_COUNTER) e;
      si1 = dsg->u.shift_pair->shift[0]->open_shift_index;
      si2 = dsg->u.shift_pair->shift[1]->open_shift_index;
      scount = KheDrsOpenChildrenWithIndex(&ec->open_children_by_shift, si1)
        + KheDrsOpenChildrenWithIndex(&ec->open_children_by_shift, si2);
      return KheDrsExprEvalTypeShift(ec, di, scount, drs, dom_test);

    default:

      HnAbort("KheDrsExprEvalType internal error (%d)\n", dsg->type);
      return KHE_DRS_EXPR_EVAL_NO;    /* keep compiler happy */
  }
}
}
For example, for a resource on day signer the expression needs
to be added only if it is active on the signer's day.  If that day
is the expression's last open day, it is added without a dominance
test; otherwise it is added with one.
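@PP
This three-way decision is easy to check in isolation.  The following
standalone sketch (toy code of ours, not KHE's) classifies a day-based
signer against an expression's first and last open day indexes:

```c
/* Toy classification (not KHE code) of an expression against a
   day-based signer: outside the expression's open range means no
   evaluation; the last open day means evaluation without a dominance
   test; any earlier open day means evaluation with a dominance test. */
typedef enum { EVAL_NO, EVAL_LAST, EVAL_NOT_LAST } toy_eval_type;

toy_eval_type toy_eval_day(int first_day, int last_day, int signer_day)
{
  if( signer_day < first_day || signer_day > last_day )
    return EVAL_NO;
  else if( signer_day == last_day )
    return EVAL_LAST;
  else
    return EVAL_NOT_LAST;
}
```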
@PP
When an expression @C { e } is added to a resource on day or day
signer with a dominance test, the position reserved for @C { e }'s
state in that signer's signatures is stored in @C { e->sig_indexes },
as we saw above.  So when @C { e } wants to retrieve its state from a
signature, it can consult its own @C { sig_indexes } array to work
out where to look.  (These retrievals are only needed from signatures
controlled by resource on day and day signers, not signatures controlled
by the other two types of signers.)  This function performs that retrieval:
@ID @C {
KHE_DRS_VALUE KheDrsExprDaySigVal(KHE_DRS_EXPR e, int open_day_index,
  KHE_DRS_SIGNATURE sig)
{
  int pos;
  pos = HaArray(e->sig_indexes, open_day_index -
    e->open_children_by_day.range.first);
  return HaArray(sig->states, pos);
}
}
It returns the state of @C { e } stored in signature @C { sig },
using @C { e->sig_indexes } to find its position.
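@PP
The lookup can be modelled in a few lines.  This standalone sketch
(toy names, not KHE code) records one reserved position per open day
and retrieves a state by subtracting the expression's first open day
index:

```c
/* Toy model (not KHE code) of the sig_indexes lookup: an expression
   open on days first_day, first_day+1, ... records, for each of those
   days, the position reserved for its state in that day's signature. */
typedef struct {
  int first_day;       /* the expression's first open day index */
  int sig_indexes[8];  /* reserved positions, one per open day */
} toy_expr;

/* retrieve the expression's state from a day's signature states */
int toy_day_sig_val(const toy_expr *e, int open_day_index,
  const int *sig_states)
{
  int pos = e->sig_indexes[open_day_index - e->first_day];
  return sig_states[pos];
}
```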
@End @SubSubAppendix

@SubSubAppendix
    @Title { Closing }
    @Tag { dynamic_impl.expr.closing }
@Begin
@LP
After solving, the open expressions need to be closed.  The
@C { open_exprs } array is used to visit each open expression
and close it:
@ID @C {
HaArrayForEach(drs->open_exprs, e, i)
  KheDrsExprClose(e, drs);
}
Again, this closes children before parents.  To close one expression,
the code is
@ID @C {
void KheDrsExprClose(KHE_DRS_EXPR e, KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  KHE_DRS_PARENT prnt;  int i;  KHE_DRS_EXPR_COUNTER ec;

  /* set e's closed value */
  KheDrsExprSetClosedValue(e, drs);

  /* clear fields that are used only when e is open */
  KheDrsOpenChildrenClear(&e->open_children_by_day);
  HaArrayClear(e->sig_indexes);
  if( e->tag == KHE_DRS_EXPR_COUNTER_TAG )
  {
    ec = (KHE_DRS_EXPR_COUNTER) e;
    KheDrsOpenChildrenClear(&ec->open_children_by_shift);
  }

  /* close e and inform e's parents that e has closed */
  e->open = false;
  HaArrayForEach(e->parents, prnt, i)
    KheDrsExprChildHasClosed(prnt.expr, e, prnt.index, drs);
}
}
The first step is to set @C { e }'s value to whatever it is to be
in the closed state, assuming for external expressions that all
assignments are expressed in the @C { closed_asst } fields of tasks
and resources (as we can do because expressions are closed after
all assignments are made), and for internal expressions that
@C { e }'s children are now all closed (as we can do because of
the expression sorting).  @C { KheDrsExprSetClosedValue } is the
usual large switch:
@ID @C {
void KheDrsExprSetClosedValue(KHE_DRS_EXPR e,
  KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  switch( e->tag )
  {
    case KHE_DRS_EXPR_OR_TAG:

      KheDrsExprOrSetClosedValue((KHE_DRS_EXPR_OR) e, drs);
      break;

    ...
  }
}
}
This is different for each concrete expression type; here is
one example:
@ID @C {
void KheDrsExprOrSetClosedValue(KHE_DRS_EXPR_OR eo,
  KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  eo->value.i = (eo->closed_state > 0 ? 1 : 0);
}
}
In @M { OR } expressions, the value is 1 if there is at least one
child with value 1, and, since all the children are now closed,
the @C { closed_state } field can tell us how many such children
there are.
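@PP
In isolation (toy code, not KHE's), closing an @M { OR } expression
amounts to counting the closed children with value 1 and testing
whether the count is positive:

```c
/* Toy closing of an OR expression: closed_state counts closed
   children with value 1, and the closed value is 1 iff the count is
   positive, as in KheDrsExprOrSetClosedValue. */
int toy_or_closed_value(const int *child_vals, int n)
{
  int closed_state = 0;
  for( int i = 0; i < n; i++ )
    if( child_vals[i] == 1 )
      closed_state += 1;
  return closed_state > 0 ? 1 : 0;
}
```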
@PP
After @C { KheDrsExprClose } calls @C { KheDrsExprSetClosedValue },
it clears @C { e }'s fields and then ends with this code that
we saw above:
@ID @C {
/* inform e's parents that e has closed */
HaArrayForEach(e->parents, prnt, i)
  KheDrsExprChildHasClosed(prnt.expr, e, prnt.index, drs);
}
This informs @C { e }'s parents that @C { e } has closed, by
calling this function on each parent:
@ID {0.98 1.0} @Scale @C {
void KheDrsExprChildHasClosed(KHE_DRS_EXPR e,
  KHE_DRS_EXPR child_e, int child_index, KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  switch( e->tag )
  {
    case KHE_DRS_EXPR_OR_TAG:

      KheDrsExprOrChildHasClosed((KHE_DRS_EXPR_OR) e,
	child_e, child_index, drs);
      break;

    ...
  }
}
}
Even though @C { KheDrsExprChildHasOpened } adds @C { child_e } to
@C { e }'s list of open children, @C { KheDrsExprChildHasClosed }
does not remove @C { child_e } from @C { e }'s list of open children.
Instead, when @C { e } is closed later its open children are cleared
out, as we have seen in function @C { KheDrsExprClose}.  Once again
the details of @C { KheDrsExprChildHasClosed } depend on the expression
type.  Here they are for @M { OR } expressions:
@ID @C {
void KheDrsExprOrChildHasClosed(KHE_DRS_EXPR_OR eo,
  KHE_DRS_EXPR child_e, int child_index, KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  if( child_e->value.i == 1 )
    eo->closed_state += 1;
}
}
If the child's value is 1, that makes one more closed child
with value 1.
@End @SubSubAppendix

@SubSubAppendix
    @Title { Searching }
    @Tag { dynamic_impl.expr.search }
@Begin
@LP
For expressions, searching is basically about evaluating an
expression in the context of some solution.  External expressions
are evaluated by these functions:
@IndentedList

@LI @C {
void KheDrsExprLeafSet(KHE_DRS_EXPR e, KHE_DRS_TASK_ON_DAY dtd,
  KHE_DRS_RESOURCE dr);
}

@LI @C {
void KheDrsExprLeafClear(KHE_DRS_EXPR e);
}

@EndList
@C { KheDrsExprLeafSet } is called when @C { dr } is assigned to
@C { dtd }, and @C { KheDrsExprLeafClear } is called when that
assignment is removed.  Both functions contain a switch with
one branch for each external expression type.  Here is an
example of one of the branches:
@ID @C {
void KheDrsExprBusyTimeLeafSet(KHE_DRS_EXPR_BUSY_TIME ebt,
  KHE_DRS_TASK_ON_DAY dtd, KHE_DRS_RESOURCE dr)
{
  ebt->value.i = (dtd->time == ebt->time ? 1 : 0);
}
}
If @C { dr } is assigned to @C { dtd }, then @C { ebt } has value 1 if
@C { dtd }'s time is @C { ebt }'s time, and 0 otherwise (no resource is
busy twice on one day).  @C { KheDrsExprBusyTimeLeafClear } sets
the value to 0.
@PP
For internal nodes evaluation is more complicated.  It is
done by calls on this function:
@ID @C {
void KheDrsExprEvalSignature(KHE_DRS_EXPR e, KHE_DRS_SIGNER dsg,
  KHE_DRS_SIGNATURE prev_sig, KHE_DRS_SIGNATURE next_sig,
  KHE_DYNAMIC_RESOURCE_SOLVER drs);
}
This evaluates @C { e } on the day covered by @C { next_sig }
(the open day after @C { prev_sig }'s day), and updates @C { next_sig },
which is controlled by @C { dsg }, by appending a value to it, or
adding to its cost, or both.  Its body is the usual
large switch, this time with one branch for each internal expression
type.  Here is an example of one of the branches:
@ID {0.90 1.0} @Scale @C {
void KheDrsExprOrEvalSignature(KHE_DRS_EXPR_OR eo, KHE_DRS_SIGNER dsg,
  KHE_DRS_SIGNATURE prev_sig, KHE_DRS_SIGNATURE next_sig,
  KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  int i, next_di;  KHE_DRS_EXPR child_e;  KHE_DRS_VALUE val;

  next_di = KheDrsSignerOpenDayIndex(dsg);
  if( KheDrsOpenChildrenIndexIsFirst(&eo->open_children_by_day, next_di) )
  {
    /*  no previous day, so we have a 0 (false) value here */
    val.i = 0;
  }
  else
  {
    /* not first day, so retrieve a previous value */
    val = KheDrsExprDaySigVal((KHE_DRS_EXPR) eo, next_di - 1, prev_sig);
  }

  /* accumulate the values of the children of eo that finalized today */
  KheDrsOpenChildrenForEach(&eo->open_children_by_day, next_di, child_e, i)
    if( child_e->value.i == 1 )
      val.i = 1;

  if( KheDrsOpenChildrenIndexIsLast(&eo->open_children_by_day, next_di) )
  {
    /* last day; incorporate closed state and set value */
    if( eo->closed_state > 0 )
      val.i = 1;
    eo->value = val;
  }
  else
  {
    /* not last day; store val in next_sig */
    KheDrsSignatureAddState(next_sig, val, dsg, (KHE_DRS_EXPR) eo);
  }
}
}
The details depend on the particular expression type, but the
structure is common to all types.
@PP
First, find the expression's value before this day.  This will be
an initial value (here 0) if this is the expression's first open
day, and will come from the signature @C { prev_sig } otherwise.
@PP
Second, use iterator macro @C { KheDrsOpenChildrenForEach }
(Appendix {@NumberOf dynamic_impl.expr.open_children}) to visit the
children for which @C { next_di } is the last open day, retrieve
their values, and incorporate those values into the value of this
expression.  Here, to implement the @M { OR } function, any child
whose value is 1 causes @C { val.i } to be set to 1.  The children
have their final values, because the postorder sorting ensures that
@C { KheDrsExprEvalSignature } is called on the children before
the parent.
@PP
Third, save the value.  If this is the expression's last open day,
the value simply remains in the expression (here, in @C { eo->value })
where it will be picked up by the expression's parents during their
@C { KheDrsExprEvalSignature } calls.  If this is not the expression's
last open day, the value (or whatever state needs to be stored) is
added to @C { next_sig }.
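@PP
The three steps can be sketched in isolation.  In this toy model (not
KHE code), the value carried from the previous day plays the role of
the signature entry, and the caller decides whether the result is
stored or final:

```c
#include <stdbool.h>

/* Toy model of the three-step evaluation of an OR expression on one
   open day: start from 0 on the first day or from the carried value
   otherwise, fold in the 0/1 values of the children finalized today,
   and return the result (to be stored in the next signature, or kept
   as the final value on the last open day). */
int toy_or_eval(bool first_day, int carried, const int *child_vals, int n)
{
  int val = first_day ? 0 : carried;   /* step 1: initial or carried value */
  for( int i = 0; i < n; i++ )         /* step 2: fold in today's children */
    if( child_vals[i] == 1 )
      val = 1;
  return val;                          /* step 3: store or finalize */
}
```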
@PP
When the value is a cost, things are slightly different.  No value
is kept in the expression; instead, an extra cost
(Appendix {@NumberOf dynamic_theory.monitors})
is added to @C { next_sig } each day.
@PP
This function does not really need to use parameter @C { dsg }.
However, in some cases (@I { COUNTER } expressions derived from
event resource constraints) evaluation needs to know the type of
the signer, hence the presence of @C { dsg }.
@End @SubSubAppendix

@SubSubAppendix
    @Title { Types of expressions }
    @Tag { dynamic_impl.expr.types }
@Begin
@LP
In this section we present the types of expressions needed for the
XESTT constraints.
@PP
First we have the types of external expressions in expression trees
for event resource constraints.  There is just one of these:
@TaggedList

@DTI { @M { ASSIGNED_TASK(t, g) } }
@OneRow {
An expression whose value is 1 when @M { t }, a task on day object,
is assigned a resource from resource group @M { g }, and 0 otherwise.
}

@EndList
Next we have the types of external expressions in expression trees
for resource constraints:
@TaggedList

@DTI { @M { BUSY_TIME(r, t) } }
@OneRow {
An expression whose value is 1 when resource @M { r }
is busy at time @M { t }, otherwise 0.
}

@DTI { @M { FREE_TIME(r, t) } }
@OneRow {
An expression whose value is 1 when resource @M { r }
is free at time @M { t }, otherwise 0.
}

@DTI { @M { WORK_TIME(r, t) } }
@OneRow {
An expression whose value is the workload of resource @M { r }
at time @M { t } (a @C { float } value).  This will be 0.0 when
@M { r } is free at time @M { t }.
}

@DTI { @M { BUSY_DAY(r, d) } }
@OneRow {
An expression whose value is 1 when resource @M { r }
is busy on day @M { d }, otherwise 0.
}

@DTI { @M { FREE_DAY(r, d) } }
@OneRow {
An expression whose value is 1 when resource @M { r }
is free on day @M { d }, otherwise 0.
}

@DTI { @M { WORK_DAY(r, d) } }
@OneRow {
An expression whose value is the workload of resource @M { r }
on day @M { d } (a @C { float } value).  This will be 0.0 when
@M { r } is free on day @M { d }.
}

@EndList
The last three could be omitted, but they speed up some common cases,
and they produce better value upper bounds.  And here are the types
of internal expressions:
@TaggedList

@DTI { @M { OR } }
@OneRow {
An expression whose value is 1 when at least one of its children has
value 1, else 0.  All its children must have value 0 or 1.
}

@DTI { @M { AND } }
@OneRow {
An expression whose value is 1 when all of its children have
value 1, else 0.  All its children must have value 0 or 1.
}

# @DTI { @M { INT_SUM } }
# @OneRow {
# An expression with an @C { int } value which is the sum of its
# children's @C { int } values.
# }

# @DTI { @M { FLOAT_SUM } }
# @OneRow {
# An expression with a @C { float } value which is the sum of its
# children's @C { float } values.
# }

# @DTI { @M { INT_DEV(a, b, z) } }
# @OneRow {
# Here @M { a } and @M { b } are integers, and @M { z } is a Boolean.
# This expression has a single child whose value is an integer.  Its value
# is the amount by which its child's value falls short of @M { a } or
# exceeds @M { b }.  If @M { z } is true, then as a special case its
# result is 0 if the child's value is 0.
# }

# @DTI { @M { FLOAT_DEV(a, b, z) } }
# @OneRow {
# Here @M { a } and @M { b } are integers, and @M { z } is a Boolean.
# This expression has a single child whose value is a @C { float }.  Its
# value is the amount by which its child's value falls short of @M { a }
# or exceeds @M { b }, rounded up to the nearest integer.  If @M { z }
# is true, then its result is 0 if the child's value is 0.0.
# }

# @DTI { @M { COST(f, w) } }
# @OneRow {
# Here @M { f } is a cost function and @M { w } is a weight.
# This expression has a single integer valued child.  Its value is
# the result of applying cost function @M { f } with weight @M { w }
# to the child's value.
# # The result is added to the new solution's
# # cost, not retrieved by any parent.
# @LP
# @M { COST } expressions appear frequently in the expression trees of
# Appendix {@NumberOf dynamic_impl.solving.monitors}, but in fact this
# type is not implemented.  All @M { COST } expressions are replaced by
# @M { INT_SUM_COST } expressions with no history, and usually with
# maximum limit zero.  Any @M { INT_SUM } or @M { INT_DEV } children
# are also replaced.  This is equivalent, and significantly reduces
# the implementation burden while only slightly increasing running
# time and memory usage.
# }

@DTI { @M { COUNTER } }
@OneRow {
An expression whose value is the deviation from given limits of
the number of its children whose value is 1.  All its children must
have value 0 or 1.  This is the cluster busy times monitor, essentially.
# This expression also produces and reports a cost based on the deviation.
}

@DTI { @M { SUM_INT } }
@OneRow {
An expression whose value is the deviation from given limits of
the sum of the values of its children.  All values are non-negative
integers.  This ought to subsume @M { COUNTER }, but the two types
handle dominance testing differently, so they remain separate.
}

@DTI { @M { SUM_FLOAT } }
@OneRow {
Identical to @M { SUM_INT } except that the children have
non-negative @C { float } values, and this expression produces a
@C { float } sum before converting it into an integer deviation.
}

@DTI { @M { SEQUENCE } }
@OneRow {
Like @M { COUNTER }, except that its value is the set of
deviations of sequences of children with value 1.  This is the
limit active intervals monitor, essentially.  It is easily the
most complex expression type to implement.  A full description
is given in Appendix {@NumberOf dynamic_impl.expr.seq}.
# There is no @M { z } (@C { AllowZero }) parameter.
}

@EndList
The last four types may also report a cost based on their deviation
(or a set of costs in the case of @M { SEQUENCE }).  They also
handle history; @M { SUM_FLOAT } is ahead of XESTT in that respect.
# The author has considered adding a @M { COST_SUM } expression
# type.  However, this has not been done, mainly because cost
# expressions report their cost every day, not just on their
# last open day, making a @M { COST_SUM } expression
# too different from other expressions to be worth having.
@PP
These expression types are implemented as subtypes of
@C { KHE_DRS_EXPR }.  All subtypes have the same operations.
For example, the operations on @C { KHE_DRS_EXPR_OR } are
@C { KheDrsExprOrMake }, @C { KheDrsExprOrAddChild },
@C { KheDrsExprOrChildHasOpened }, @C { KheDrsExprOrChildHasClosed },
@C { KheDrsExprOrSetClosedValue }, @C { KheDrsExprOrEvalSignature },
and @C { KheDrsExprOrDoDebug }.  We have seen most of these functions
already, in the preceding sections.
@PP
With two exceptions, @M { COUNTER } and @M { SEQUENCE }, we won't
present the implementations of these subtypes, because they are quite
straightforward.  The two exceptions are much more complicated,
and our next task is to study them in detail.
@End @SubSubAppendix

@SubSubAppendix
    @Title { The @II { COST } expression type }
    @Tag { dynamic_impl.expr.cost }
@Begin
@LP
Type @C { KHE_DRS_EXPR_COST } is the abstract supertype of the
solver's four expression types, @C { KHE_DRS_EXPR_COUNTER },
@C { KHE_DRS_EXPR_SUM_INT }, @C { KHE_DRS_EXPR_SUM_FLOAT },
and @C { KHE_DRS_EXPR_SEQUENCE }, which could contribute a cost:
@IndentedList

@LI @C {
#define INHERIT_KHE_DRS_EXPR_COST			\
  INHERIT_KHE_DRS_EXPR					\
  KHE_COST_FUNCTION	cost_fn;			\
  KHE_COST		combined_weight;		\
  KHE_DRS_MONITOR	monitor;
}

@LI @C {
typedef struct khe_drs_expr_cost_rec {
  INHERIT_KHE_DRS_EXPR_COST
} *KHE_DRS_EXPR_COST;
}

@EndList
# All of the fields shown here (i.e. not the inherited ones) come
# from the expression's monitor or from that monitor's constraint,
# and mean what they seem to mean.
The type has only a few
operations.  They include @C { KheDrsExprCostUnweightedCost },
which returns the cost of the expression, for a given deviation,
before multiplication by the weight:
@ID @C {
int KheDrsExprCostUnweightedCost(KHE_DRS_EXPR_COST ec, int dev)
{
  switch( ec->cost_fn )
  {
    case KHE_STEP_COST_FUNCTION:

      return dev > 0 ? 1 : 0;

    case KHE_LINEAR_COST_FUNCTION:

      return dev;

    case KHE_QUADRATIC_COST_FUNCTION:

      return dev * dev;

    default:

      HnAbort("KheDrsExprCostUnweightedCost internal error");
      return 0;  /* keep compiler happy */
  }
}
}
For the actual cost there is
@ID @C {
KHE_COST KheDrsExprCostCost(KHE_DRS_EXPR_COST ec, int dev)
{
  return ec->combined_weight * KheDrsExprCostUnweightedCost(ec, dev);
}
}
These two macros allow the child types to access these two functions
without a visible upcast, and with a much briefer function name:
@ID {0.92 1.0} @Scale @C {
#define uf(e, d) KheDrsExprCostUnweightedCost((KHE_DRS_EXPR_COST) (e), (d))
#define f(e, d) KheDrsExprCostCost((KHE_DRS_EXPR_COST) (e), (d))
}
There are also two operations which associate a cost expression with
a DRS constraint object:  @C { KheDrsExprCostSetConstraint }
(Appendix {@NumberOf dynamic_impl.constraints.constraints})
and @C { KheDrsExprCostConstraint }.
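@PP
The three cost functions are easy to check with concrete deviations.
This standalone copy of their logic (our names, not KHE's) mirrors
@C { KheDrsExprCostUnweightedCost } and @C { KheDrsExprCostCost }:

```c
typedef enum { TOY_STEP, TOY_LINEAR, TOY_QUADRATIC } toy_cost_fn;

/* unweighted cost of a deviation, one case per cost function */
int toy_unweighted_cost(toy_cost_fn fn, int dev)
{
  switch( fn )
  {
    case TOY_STEP:      return dev > 0 ? 1 : 0;
    case TOY_LINEAR:    return dev;
    case TOY_QUADRATIC: return dev * dev;
    default:            return 0;
  }
}

/* actual cost: the unweighted cost times a combined weight */
long toy_cost(toy_cost_fn fn, long combined_weight, int dev)
{
  return combined_weight * toy_unweighted_cost(fn, dev);
}
```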
@End @SubSubAppendix

@SubSubAppendix
    @Title { The @II { COUNTER } expression type }
    @Tag { dynamic_impl.expr.sum }
@Begin
@LP
This section presents the implementation of the @M { COUNTER }
expression type.  It is all based on the formulas from
Appendix {@NumberOf dynamic_theory.counter }, where these
expressions were called @I { counter monitors }, and we
refer freely to that Appendix and its terminology.
@PP
The children of a counter expression have value 0 or 1.  The
counter expression counts the number of children with value 1,
compares this with given limits, and finds a cost.  Its type is
@ID @C {
typedef struct khe_drs_expr_counter_rec {
  INHERIT_KHE_DRS_EXPR_COST
  int				min_limit;
  int				max_limit;
  bool				allow_zero;
  int				history_before;
  int				history_after;
  int				history;
  KHE_DRS_ADJUST_TYPE		adjust_type;
  int				closed_state;
  struct khe_drs_open_children_rec  open_children_by_shift;
} *KHE_DRS_EXPR_COUNTER;
}
A counter expression always represents a monitor @M { m }, and
the first six uninherited fields are directly defined by @M { m }
or its constraint.  For the rest, @C { adjust_type } is an
enumerated value saying what type of signature value adjustment to
use; @C { closed_state } holds the number of closed children whose
value is 1; and @C { open_children_by_shift }, which is used
only when @M { m } is an event resource monitor (when the inherited
@C { resource } field is @C { NULL }),
holds the same open children as @C { open_children_by_day },
only sorted by open shift index rather than open day index.
@PP
Before the main @I COUNTER submodule there is a submodule
which is concerned with notifying signers of the existence
of this expression.  We have presented most of that already
(function @C { KheDrsExprCounterNotifyCoverSigners } in
Appendix {@NumberOf dynamic_impl.expr.opening}).
@PP
The main @I COUNTER submodule begins with two
functions for handling deviations:
@ID @C {
int KheDrsExprCounterDelta(KHE_DRS_EXPR_COUNTER ec,
  int lower_det, int upper_det)
{
  if( ec->allow_zero && lower_det == 0 )
    return 0;
  else if( lower_det > ec->max_limit )
    return lower_det - ec->max_limit;
  else if( upper_det < ec->min_limit )
    return ec->min_limit - upper_det;
  else
    return 0;
}
}
This is @M { delta(l, u) } from Appendix {@NumberOf dynamic_theory}.
Next comes
@ID @C {
int KheDrsExprCounterDev(KHE_DRS_EXPR_COUNTER ec,
  int lower_det, int upper_det_minus_lower_det)
{
  return KheDrsExprCounterDelta(ec, lower_det,
    lower_det + upper_det_minus_lower_det);
}
}
which is sometimes a more convenient way to call @M { delta }.
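@PP
A few concrete values make the behaviour of @M { delta } clear.  This
standalone copy of its logic (not KHE code) takes the limits as
explicit parameters:

```c
#include <stdbool.h>

/* Standalone copy of the logic of KheDrsExprCounterDelta: the
   deviation is how far the lower determinant exceeds the maximum
   limit, or how far the upper determinant falls short of the minimum
   limit, with an allow-zero escape when the lower determinant is 0. */
int toy_counter_delta(bool allow_zero, int min_limit, int max_limit,
  int lower_det, int upper_det)
{
  if( allow_zero && lower_det == 0 )
    return 0;
  else if( lower_det > max_limit )
    return lower_det - max_limit;
  else if( upper_det < min_limit )
    return min_limit - upper_det;
  else
    return 0;
}
```
For example, with limits 2 and 4, a lower determinant of 6 gives
deviation 2, and an upper determinant of 1 gives deviation 1.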
@PP
Next we have a function for working out the kind of signature
value adjustment that is appropriate for @C { ec }.  Its result
is a value of type
@ID @C {
typedef enum {
  KHE_DRS_ADJUST_ORDINARY,
  KHE_DRS_ADJUST_NO_MAX,
  KHE_DRS_ADJUST_LINEAR,
  KHE_DRS_ADJUST_STEP
} KHE_DRS_ADJUST_TYPE;
}
and the function itself is
@ID @C {
KHE_DRS_ADJUST_TYPE KheDrsAdjustType(KHE_COST_FUNCTION cost_fn,
  int max_limit)
{
  if( max_limit == INT_MAX )
    return KHE_DRS_ADJUST_NO_MAX;
  else if( cost_fn == KHE_LINEAR_COST_FUNCTION )
    return KHE_DRS_ADJUST_LINEAR;
  else if( cost_fn == KHE_STEP_COST_FUNCTION )
    return KHE_DRS_ADJUST_STEP;
  else
    return KHE_DRS_ADJUST_ORDINARY;
}
}
# Actually signature value adjustment is being withdrawn.
# It follows the analysis of
# Appendix {@NumberOf dynamic_theory.counter.sig_val},
# which we won't repeat here.
# @PP
After that come two functions for beginning the creation of a new
@M { COUNTER } object:  a general one, and one for the common special
case of maximum limit 0.
When a child is added during the initial construction of the
expression, we do this:
@ID @C {
void KheDrsExprCounterAddChild(KHE_DRS_EXPR_COUNTER ec,
  KHE_DRS_EXPR child_e, KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  ec->closed_state += child_e->value.i;
}
}
to make @C { closed_state } hold the number of children with value 1.
After all children are added,
@ID {0.90 1.0} @Scale @C {
void KheDrsExprCounterMakeEnd(KHE_DRS_EXPR_COUNTER ec,
  KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  int ub;

  KheDrsExprInitEnd((KHE_DRS_EXPR) ec, drs);

  /* get the total value upper bound */
  ub = HaArrayCount(ec->children) + ec->history + ec->history_after;

  /* set the value upper bound (not actually used, but anyway) */
  ec->value_ub.i = KheDrsExprCounterValueUpperBound(ec, ub);

  /* set constraint */
  KheDrsExprCostSetConstraint((KHE_DRS_EXPR_COST) ec, ec->history, drs);
}
}
is called to end the initialization of @C { ec }, sort out its upper
bound (which is not used, so this is only for completeness), and set
its constraint field.  When a child is opened we do this:
@ID {0.92 1.0} @Scale @C {
void KheDrsExprCounterChildHasOpened(KHE_DRS_EXPR_COUNTER ec,
  KHE_DRS_EXPR child_e, int child_index, KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  int old_dev, new_dev, ub;  KHE_COST old_cost, new_cost;

  /* find the deviation before the change */
  ub = KheDrsOpenChildrenCount(&ec->open_children_by_day);
  old_dev = KheDrsExprCounterDev(ec,
    ec->history + ec->closed_state, ub + ec->history_after);

  /* update ec to reflect the new open child */
  KheDrsOpenChildrenAddChild(&ec->open_children_by_day, child_e);
  ec->closed_state -= child_e->value.i;

  /* find the deviation after the change */
  ub = KheDrsOpenChildrenCount(&ec->open_children_by_day);  /* one more */
  new_dev = KheDrsExprCounterDev(ec,
    ec->history + ec->closed_state, ub + ec->history_after);

  /* report the change in cost, if any, to drs->solve_start_cost */
  if( old_dev != new_dev )
  {
    old_cost = f(ec, old_dev);
    new_cost = f(ec, new_dev);
    drs->solve_start_cost += (new_cost - old_cost);
    KheDrsMonitorUpdateRerunCost(ec->monitor, (KHE_DRS_EXPR) ec, drs,
      NULL, KHE_DRS_OPEN, "open", child_index, "+-", new_cost, old_cost);
  }

  /* update open_children_by_shift, if required */
  if( ec->resource == NULL )
    KheDrsOpenChildrenAddChild(&ec->open_children_by_shift, child_e);
}
}
The old deviation is based on @C { ec->history + ec->closed_state }
active children and
@C { KheDrsOpenChildrenCount(&ec->open_children_by_day) + ec->history_after }
unassigned children.  The new deviation follows the same formula,
after updating to reflect the new open child.  Any change in cost
is added to @C { drs->solve_start_cost }.  And if required, we add
@C { child_e } to @C { ec->open_children_by_shift }.
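@PP
The bookkeeping can be sketched numerically.  In this toy version
(not KHE code; a linear cost function is assumed, and the monitor
reporting is omitted), opening a child moves its known value out of
the closed state and charges the change in deviation to a running
start cost:

```c
/* toy deviation against limits, ignoring allow_zero and history */
static int toy_dev(int min_limit, int max_limit, int lower, int upper)
{
  if( lower > max_limit ) return lower - max_limit;
  if( upper < min_limit ) return min_limit - upper;
  return 0;
}

/* Open one child of a toy counter: the child's value leaves the
   closed state, the open child count grows by one, and the change in
   (linear, weight w) cost is added to start_cost. */
long toy_open_child(int min_limit, int max_limit, long w,
  int *closed_state, int *open_count, int child_value, long start_cost)
{
  int old_dev = toy_dev(min_limit, max_limit, *closed_state,
    *closed_state + *open_count);
  *open_count += 1;
  *closed_state -= child_value;
  int new_dev = toy_dev(min_limit, max_limit, *closed_state,
    *closed_state + *open_count);
  return start_cost + w * (new_dev - old_dev);
}
```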
@PP
When a child is closed we do the reverse:
@ID {0.92 1.0} @Scale @C {
void KheDrsExprCounterChildHasClosed(KHE_DRS_EXPR_COUNTER ec,
  KHE_DRS_EXPR child_e, int child_index, KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  int old_dev, new_dev, ub;  KHE_COST old_cost, new_cost;

  /* find the deviation before the change */
  ub = KheDrsOpenChildrenCount(&ec->open_children_by_day);
  old_dev = KheDrsExprCounterDev(ec,
    ec->history + ec->closed_state, ub + ec->history_after);

  /* update ec to reflect one less open child */
  KheDrsOpenChildrenDeleteChild(&ec->open_children_by_day, child_e);
  ec->closed_state += child_e->value.i;

  /* find the deviation after the change */
  ub = KheDrsOpenChildrenCount(&ec->open_children_by_day);  /* one less */
  new_dev = KheDrsExprCounterDev(ec,
    ec->history + ec->closed_state, ub + ec->history_after);

  /* report the change in cost, if any, to drs->solve_start_cost */
  if( old_dev != new_dev )
  {
    new_cost = f(ec, new_dev);
    old_cost = f(ec, old_dev);
    drs->solve_start_cost += (new_cost - old_cost);
    KheDrsMonitorUpdateRerunCost(ec->monitor, (KHE_DRS_EXPR) ec, drs,
      NULL, KHE_DRS_CLOSE, "close", child_index, "+-", new_cost, old_cost);
  }

  /* update open_children_by_shift, if required */
  if( ec->resource == NULL )
    KheDrsOpenChildrenDeleteChild(&ec->open_children_by_shift, child_e);
}
}
There is a @C { KheDrsExprCounterSetClosedValue } function, but
it has nothing to do here, because counter expressions do not
store a closed value.
# @PP
# Next come several functions for defining dominance tests, which
# we skip for the time being.  Perhaps we'll document them later,
# when they are more settled.
@PP
Finally comes the function for adding this expression's extra cost
and signature value to a new signature:
@ID {0.90 1.0} @Scale -1px @Break @C {
void KheDrsExprCounterEvalSignature(KHE_DRS_EXPR_COUNTER ec,
  KHE_DRS_SIGNER dsg, KHE_DRS_SIGNATURE prev_sig,
  KHE_DRS_SIGNATURE next_sig, KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  int ld1, ld2, ud1, ud2, i, i1, i2, dev1, dev2, si, count, next_di;
  KHE_DRS_EXPR child_e;  KHE_DRS_VALUE val;  KHE_DRS_EXPR_EVAL_TYPE eval_type;

  /* get ld1, ud1, and dev1 (for all days before next_di) */
  next_di = KheDrsSignerOpenDayIndex(dsg);
  if( KheDrsOpenChildrenIndexIsFirstOrLess(&ec->open_children_by_day,next_di))
    ld1 = KheDrsExprCounterInitialValue(ec);
  else
    ld1 = KheDrsExprDaySigVal((KHE_DRS_EXPR) ec, next_di - 1, prev_sig).i;
  ud1 = ld1 + ec->history_after +
    KheDrsOpenChildrenAtOrAfter(&ec->open_children_by_day, next_di);
  dev1 = KheDrsExprCounterDelta(ec, ld1, ud1);

  /* get ld2, ud2, and dev2 for one more day, shift, or shift pair */
  ld2 = ld1, ud2 = ud1;
  switch( dsg->type )
  {
    case KHE_DRS_SIGNER_DAY:
    case KHE_DRS_SIGNER_RESOURCE_ON_DAY:

      /* signer is for day or mtask solutions */
      KheDrsOpenChildrenForEach(&ec->open_children_by_day, next_di,child_e,i)
        KheDrsExprCounterUpdateLU(child_e, &ld2, &ud2, i);
      break;

    ... see below for other cases ...
  }
  dev2 = KheDrsExprCounterDelta(ec, ld2, ud2);

  /* if not the last evaluation, store ld2 (adjusted) in next_sig */
  eval_type = KheDrsExprEvalType((KHE_DRS_EXPR) ec, dsg, drs, NULL);
  if( eval_type == KHE_DRS_EXPR_EVAL_NOT_LAST )
  {
    val.i = KheDrsAdjustedSigVal(ld2, ec->adjust_type,
      ec->min_limit, ec->max_limit, ec->history_after);
    KheDrsSignatureAddState(next_sig, val, dsg, (KHE_DRS_EXPR) ec);
  }

  /* report the extra cost, if any */
  if( dev2 != dev1 )
  {
    KheDrsSignatureAddCost(next_sig, f(ec, dev2) - f(ec, dev1));
    KheDrsMonitorUpdateRerunCost(ec->monitor, (KHE_DRS_EXPR) ec, drs,
      dsg, KHE_DRS_SEARCH, "search", -1, "+-", f(ec, dev2), f(ec, dev1));
  }
}
}
The structure here is the same as that of @C { KheDrsExprOrEvalSignature }
that we saw earlier, except that here the value (an extra cost) is
added to the signature rather than stored in @C { ec }.  We'll go
through it step by step now.
@PP
Here @C { ld1 } and @C { ud1 } are lower and upper determinants, the
@M { l } and @M { u } of Appendix {@NumberOf dynamic_theory}, for
the solution up to and including the day before the open day with
open day index @C { next_di }.  Now @M { l } is the
number of active children (children whose value is known to be 1),
and @M { u } is the number of active or unassigned children (children
whose value is not known).  We find @C { ld1 } by calling
@ID @C {
int KheDrsExprCounterInitialValue(KHE_DRS_EXPR_COUNTER ec)
{
  return ec->history + ec->closed_state;
}
}
if there is no previous day, and by retrieving it from
@C { prev_sig } if there is.  Then @C { ud1 } is @C { ld1 }
plus the children lying on or after the day with open day
index @C { next_di }, or in the history after range.
@PP
When the signer is a day or resource on day signer, we can derive
@C { ld2 } and @C { ud2 }, the lower and upper determinants for
the solution including the open day with open day index
@C { next_di }, by starting from @C { ld1 } and @C { ud1 }
and updating them to take account of the children whose values
become finalized on the additional day:
@ID {0.90 1.0} @Scale @C {
KheDrsOpenChildrenForEach(&ec->open_children_by_day, next_di, child_e, i)
  KheDrsExprCounterUpdateLU(child_e, &ld2, &ud2, i);
}
where the updating for one child @C { child_e } is done by a call to
@ID @C {
void KheDrsExprCounterUpdateLU(KHE_DRS_EXPR child_e,
  int *ld2, int *ud2, int i)
{
  *ld2 += child_e->value.i;
  *ud2 += (child_e->value.i - child_e->value_ub.i);
}
}
The child has changed from unassigned to either active or inactive.
This function either adds 1 to @C { *ld2 } and leaves @C { *ud2 }
unchanged, or it subtracts 1 from @C { *ud2 } and leaves @C { *ld2 }
unchanged.  It has been written this way to show that the
effect of giving a value to @C { child_e } is to add its
value to @C { *ld2 } and replace its upper bound in @C { *ud2 }
(always 1 here) by its value.
@PP
With the old and new values of @M { l }, @M { u }, and @M { delta }
in hand, we are ready to report the results.  If there are more
unassigned children, we add @C { ld2 }, possibly adjusted, to
@C { next_sig }.  Separately, if the cost has changed, we add
the extra cost @C { f(ec, dev2) - f(ec, dev1) } to @C { next_sig }.
@PP
It remains to present the two switch cases omitted above.  Of all
the expression types, @C { KHE_DRS_EXPR_COUNTER } is the only one
whose evaluation depends on the type of the signer, and even then,
only when the expression is derived from an event resource
constraint.  The idea is the same but there is a different set
of newly evaluated children to traverse:
@ID {0.95 1.0} @Scale @C {
case KHE_DRS_SIGNER_SHIFT:

  /* signer is for shift solutions */
  si = dsg->u.shift->open_shift_index;
  KheDrsOpenChildrenForEach(&ec->open_children_by_shift, si, child_e, i)
    KheDrsExprCounterUpdateLU(child_e, &ld2, &ud2, i);
  break;

case KHE_DRS_SIGNER_SHIFT_PAIR:

  /* signer is for shift pair solutions */
  for( count = 0;  count < 2;  count++ )
  {
    si = dsg->u.shift_pair->shift[count]->open_shift_index;
    KheDrsOpenChildrenForEach(&ec->open_children_by_shift, si, child_e, i)
      KheDrsExprCounterUpdateLU(child_e, &ld2, &ud2, i);
  }
  break;
}
Instead of traversing all the children on the next day, this
traverses all the children affected by a particular shift, or pair
of shifts, on that day.  No child is affected by two or more
shifts, because the children are always @I { ASSIGNED_TASK }
expressions, each representing one task at one time.
@End @SubSubAppendix

@SubSubAppendix
    @Title { The @II { SEQUENCE } expression type }
    @Tag { dynamic_impl.expr.seq }
@Begin
@LP
This section presents the implementation of the @M { SEQUENCE }
expression type, based on the formulas from
Appendix {@NumberOf dynamic_theory.sequence}, where these
expressions were called @I { sequence monitors }.  We
refer freely to that Appendix and its terminology.
@PP
@BI { Child order. }
In a @M { SEQUENCE } expression, the order of the children matters.
Although it never happens in practice, the last open days of the open
children (taken in order) could be out of chronological order.  For
other kinds of expressions, where the children's order does not matter,
the open children are sorted by function @C { KheDrsExprChildHasOpened }
(Appendix {@NumberOf dynamic_impl.expr.opening}) so that their last open
days are in chronological order.  But doing that to a @M { SEQUENCE }
expression would change its meaning.
@PP
Instead of sorting the children, we change their open day ranges.
For each open child @M { y sub i } after the first, if the last open day
of @M { y sub i } precedes the last open day of @M { y sub {i-1} },
then the last open day of @M { y sub i } is increased to the last open
day of @M { y sub {i-1} }.  This does not break anything; it merely
causes @M { y sub i } to contribute a value to the signature on more
days than it otherwise would have done.  It is done as each child
is opened, so the last open day of @M { y sub {i+1} } is affected
by any previous adjustment to the last open day of @M { y sub i },
and so on.  Thankfully, after doing this we can forget about it.
@PP
We do not allow a child of a @M { SEQUENCE } expression to
be a leaf (external expression), and this need to change open day
ranges is one of the two reasons why.  A leaf may be shared with
other expressions, and increasing its open day range might well
disrupt them.  But non-leaf expressions are not shared, so their
open day ranges can be increased safely.
@PP
The other reason is that the @M { SEQUENCE } expression type is
much easier to implement if it can be assumed that the children of
a @M { SEQUENCE } expression are opened in increasing child
index order.  This will happen if the postorder indexes of the
children are increasing, which is easily ensured if the children
are all newly created, simply by visiting the time groups of the
monitor in the natural order during construction.  But it cannot
be guaranteed for shared expressions, since they may be created
at arbitrary points during the initialization.
@PP
These duplicated expressions, created because leaves cannot be
children, slow down the evaluation of some
constraints slightly, for example constraints on consecutive night
shifts.  But, importantly, they do not make signatures any longer
(except when open day ranges are extended), as a moment's thought
will show.
@PP
In practice, the number of children with a given last open day is
always at most 1.  However, to cover all cases we allow any number
of children to have the same last open day.
@PP
@BI { Sequences. }
In this section, a @I { sequence } means a sequence of adjacent children of
a @M { SEQUENCE } expression.  The implementation makes use of three
kinds of sequences:  closed sequences (defined below), a-intervals, and
au-intervals.  We represent a sequence by a pair of indexes @M { [a, b] }
such that @M { a <= b }.  Consider this sequence of four children:
@CD @Diag {
@Box 0 &
@Box 1 &
@Box 2 &
@Box 3 &
}
We've shown their indexes inside, starting from 0 as is usual in
C.  But actually, the indexes that define a sequence are indexes
into the sequence of gaps that precede and follow the children:
@CD @Diag clabelprox { N } dlabelprox { N } {
@Box clabel { 0 } 0 &
@Box clabel { 1 } 1 &
@Box clabel { 2 } 2 &
@Box clabel { 3 } dlabel { 4 } 3 &
//0.5f
}
So the pair of indexes @M { [1, 3] } for example specifies the
children with indexes 1 and 2, because gap 1 precedes child 1
and gap 3 follows child 2.  We call the first index the
@I { start index }; as well as being the index of a gap, it
also happens to be the index of the first specified child.
We call the second index the @I { stop index }; it is one
greater than the index of the last specified child.
@PP
These details are important because they make an empty sequence be
more than an empty sequence of children; it has a definite location
in the enclosing sequence.  For example, @M { [1, 1] } is the empty
sequence starting at index 1.  It is different from, say,
@M { [4, 4] }.  We do it this way with good reason.  For example,
there is an operation which extends a sequence @M { k } places to
the right.  Applied to @M { [a, b] }, the result is @M { [a, b+k] }.
This makes sense even when @M { [a, b] } is empty.
@PP
@BI { Closed sequences. }
A @I { closed sequence }, denoted @M { Z sub i }, is the sequence of
closed children lying between two open children, or between an open child
and one end of the sequence of children.  Each @M { SEQUENCE } object
contains a sequence of closed sequences.  They summarise the closed
children, allowing them to be skipped over quickly while solving.
@PP
Consider the sequence @M { y sub 0 ,..., y sub {k-1} } of all open
children of @M { C }.  We index them starting from 0 to agree with
the C implementation.  They appear in @M { C }'s list of open
children in the same order that they appear in @M { C }'s list of
all children, thanks to the work done above on the order that the
children are opened.  This order is the one used when naming them
@M { y sub 0 ,..., y sub {k-1} }.  Now consider the list of all children.
Within this list, assuming @M { k > 0 }, let @M { Z sub 0 } be the closed
sequence of zero or more closed children preceding @M { y sub 0 }; for
@M { i } in the range @M { 0 < i < k } let @M { Z sub i } be the closed
sequence of zero or more closed children following @M { y sub {i-1} }
and preceding @M { y sub i }; and let @M { Z sub k } be the closed
sequence of zero or more closed children following @M { y sub {k-1} }.
The full sequence of all children thus looks like this:
@CD @Diag vstrut { yes } {
@Box @M { &1.2f Z sub 0 &1.2f } &
@Box @M { y sub 0 } &
@Box @M { &1.2f Z sub 1 &1.2f } &
@Box @M { y sub 1 } &
@Box @M { &1.2f Z sub 2 &1.2f } &
@Box @M { ... } &
@Box @M { &1.2f Z sub {k-1} &1.2f } &
@Box @M { y sub {k-1} } &
@Box @M { &1.2f Z sub k &1.2f }
}
If @M { k = 0 } the whole sequence is a closed sequence; let
@M { Z sub 0 } be that sequence.  Although we prefer to think of
history values as sequences of children, they are not included here,
because there is no efficient way to represent @M { c sub i },
which could be very large.
@PP
This way of defining the @M { Z sub i } can be confusing, because
it has little connection with open days.  The open day ranges of
the open children may be adjusted, as explained above, and the
closed children have no open day ranges at all.  Instead, the
definition relies on the order of the children, which is after all
what matters, and on the fact that the open children are not reordered.
@PP
Each @M { Z sub i } is represented in the implementation by an
object of type @C { KHE_DRS_CLOSED_SEQ }:
@ID @C {
typedef struct {
  int		start_index;
  int		stop_index;
  int		active_at_left;
  int		active_at_right;
} *KHE_DRS_CLOSED_SEQ;
}
Fields @C { start_index } and @C { stop_index } are the start index and
stop index of the closed sequence.  Field @C { active_at_left } is
the number of active children within @M { Z sub i } adjacent to its left
end, and @C { active_at_right } is the number of active children within
@M { Z sub i } adjacent to its right end.  If every child in @M { Z sub i }
is active, @C { active_at_left } and @C { active_at_right } are
equal to each other and to the length of @M { Z sub i }.  This will be
the case, for example, when @M { Z sub i } is empty.
@PP
Before a solve, the @M { y sub i } are opened in increasing order, as
we know.  Initially only @M { Z sub 0 } is present, representing all the
children.  As each @M { y sub i } is opened, it is appended to the list
of open children, and the last closed sequence, @M { Z sub i }, is split
into two, a shortened @M { Z sub i } and a new @M { Z sub {i+1} }.  The
reverse procedure is followed as open children are closed at the end of
the solve.  Splitting a closed sequence into two and merging two closed
sequences into one are the only non-trivial operations on this type.
@PP
@BI { A-intervals. }
Here is the type representing an a-interval:
@ID @C {
typedef struct {
  int		start_index;
  int		stop_index;
  bool		unassigned_precedes;
} KHE_DRS_A_INTERVAL;
}
It is a non-pointer type, to avoid memory allocation.  In addition to
the start index and stop index, it contains @C { unassigned_precedes },
which is @C { true } when an unassigned child immediately precedes this
interval.  This is needed when calculating deviations:
@ID {0.98 1.0} @Scale @C {
int KheDrsAIntervalDev(KHE_DRS_A_INTERVAL ai,
  KHE_DRS_EXPR_INT_SEQ_COST eisc)
{
  int len;
  if( ai.unassigned_precedes && eisc->cost_fn == KHE_STEP_COST_FUNCTION )
    return 0;
  len = ai.stop_index - ai.start_index;
  return len > eisc->max_limit ? len - eisc->max_limit : 0;
}
}
If an unassigned child immediately precedes this interval and the cost
function is @C { Step }, the deviation is 0.  Otherwise the deviation
is the amount by which the interval's length exceeds @M { U }.  All this
follows Appendix {@NumberOf dynamic_theory.sequence} exactly.
@PP
There are also straightforward functions for creating a-intervals,
finding the a-interval adjacent to a given point, merging two
a-intervals, and so on.  An example appears below.  They optimize
by not searching the children directly; instead they assume that
the closed sequences are up to date and search those, where most
of the work has already been done.
@PP
Unlike a closed sequence, an a-interval at the extreme left includes
the @C { eisc->history } active children from history.  It does this by
setting its start index to @C { -eisc->history }.  Because of this,
@C { KheDrsAIntervalDev } does not need to pay any special attention to history.
@PP
An example of this treatment of history occurs in the following
function, which finds the (possibly empty) a-interval just to the
left of the open child with a given @C { open_index }:
@ID {0.95 1.0} @Scale @C {
KHE_DRS_A_INTERVAL KheDrsAIntervalFindLeft(
  KHE_DRS_EXPR_INT_SEQ_COST eisc, int open_index)
{
  KHE_DRS_CLOSED_SEQ dcs;
  dcs = HaArray(eisc->closed_seqs, open_index);
  if( !KheDrsClosedSeqAllActive(dcs) )
  {
    /* an inactive child precedes the active_at_right active children */
    return KheDrsAIntervalMake(dcs->stop_index - dcs->active_at_right,
      dcs->stop_index, false);
  }
  else if( open_index > 0 )
  {
    /* an unassigned child precedes the active_at_right active children */
    return KheDrsAIntervalMake(dcs->stop_index - dcs->active_at_right,
      dcs->stop_index, true);
  }
  else
  {
    /* nothing but history precedes the active_at_right active children */
    return KheDrsAIntervalMake(dcs->stop_index - dcs->active_at_right
      - eisc->history, dcs->stop_index, false);
  }
}
}
The stop index of this a-interval is the stop index of the closed
sequence just to the left.  Its start index is
@C { dcs->active_at_right } places left of there, plus @C { eisc->history }
more places to the left if we are at the start.  The function also
finds a suitable value for @C { unassigned_precedes }, the third
parameter of @C { KheDrsAIntervalMake }.
@PP
In Appendix {@NumberOf dynamic_theory.sequence}, a-intervals were
said to be maximal and non-empty.  There is nothing about the
@C { KHE_DRS_A_INTERVAL } type which guarantees these conditions.
The functions that use a-intervals never create non-maximal ones,
but they may create empty ones.  This is done to reduce the number
of cases.  For example, if one of the children of an a-interval
becomes unassigned or inactive, the a-interval splits into two
pieces, one on each side of the changed child.  Either or both
could be empty, but by allowing a-intervals to be empty the
implementation has just one case to handle.  Empty a-intervals
have deviation 0, so they cause no problems.
@PP
@BI { AU-intervals. }
Here is the type representing an au-interval:
@ID @C {
typedef struct {
  int		start_index;
  int		stop_index;
  bool		has_active_child;
} KHE_DRS_AU_INTERVAL;
}
Once again it is a non-pointer type.  In addition to the start index
and stop index, it contains @C { has_active_child }, which is @C { true }
when the interval contains at least one active child.  This is needed
when calculating deviations:
@ID @C {
int KheDrsAUIntervalDev(KHE_DRS_AU_INTERVAL aui,
  KHE_DRS_EXPR_INT_SEQ_COST eisc)
{
  int len;
  if( !aui.has_active_child )
    return 0;
  len = aui.stop_index - aui.start_index;
  return len < eisc->min_limit ? eisc->min_limit - len : 0;
}
}
If the interval contains no active children, the deviation is 0.
Otherwise the deviation is the amount by which the interval's length
falls short of @M { L }.  As for a-intervals, there are functions for
creating, finding, merging, and splitting au-intervals,
which assume that closed sequences are up to date and search them
rather than the children.  Examples of these functions appear below.
Once again, the code that creates au-intervals never creates
non-maximal ones, and although it does create empty ones, those have
deviation 0, because @C { has_active_child } is necessarily @C { false }.
@PP
An au-interval at the extreme left includes the active children from
history, by setting its start index to @C { -eisc->history }.  An
au-interval at the extreme right includes the unassigned children
from the history-after range, by increasing its stop index by
@C { eisc->history_after }.
When there are no inactive children (unlikely, but possible), both
of these adjustments apply to the same au-interval.  Because of this,
@C { KheDrsAUIntervalDev } does not need to pay any special attention
to history.
@PP
Here is an example of an au-interval function.  It finds the
(possibly empty) au-interval just to the left of the open
child with the given @C { open_index }:
@ID @C {
KHE_DRS_AU_INTERVAL KheDrsAUIntervalFindLeft(
  KHE_DRS_EXPR_INT_SEQ_COST eisc, int open_index)
{
  KHE_DRS_CLOSED_SEQ dcs;  KHE_DRS_AU_INTERVAL res;  int i;

  /* initialize res to the active children at the right of dcs */
  dcs = HaArray(eisc->closed_seqs, open_index);
  res = KheDrsAUIntervalMake(dcs->stop_index - dcs->active_at_right,
    dcs->stop_index, true);
  if( !KheDrsClosedSeqAllActive(dcs) )
    return res;

  /* now keep looking to the left of there */
  for( i = open_index - 1;  i >= 0;  i-- )
  {
    /* return early if eisc->min_limit reached */
    if( KheDrsAUIntervalLength(res) >= eisc->min_limit )
      return res;

    /* res includes the open unassigned child before the previous dcs */
    KheDrsAUIntervalExtendToLeft(&res, 1, false);

    /* res includes the active children at the right of the next dcs */
    dcs = HaArray(eisc->closed_seqs, i);
    KheDrsAUIntervalExtendToLeft(&res, dcs->active_at_right, true);
    if( !KheDrsClosedSeqAllActive(dcs) )
      return res;
  }

  /* at the start, so res includes history */
  KheDrsAUIntervalExtendToLeft(&res, eisc->history, true);
  return res;
}
}
It starts with the closed sequence object @C { dcs } immediately to the
left of the open child.  The @C { active_at_right } active children
at the right of @C { dcs } are part of the au-interval, but if they
are preceded by an inactive child (if @C { dcs } is not entirely
active) it's time to stop.  Otherwise the open child preceding
@C { dcs } is included, as are the @C { active_at_right } active
children of the preceding closed sequence, and so on.
@PP
The loop in this function could run for more than a constant amount
of time.  However, it returns early once the interval length reaches
@C { eisc->min_limit }.  This is safe because the cost at that point
is 0, so there is no need to make the interval any longer; and it
keeps the running time constant, assuming (as is true in practice)
that the minimum limit is a small constant.
@PP
@C { KheDrsAUIntervalExtendToLeft } extends an au-interval to the left:
@ID @C {
void KheDrsAUIntervalExtendToLeft(KHE_DRS_AU_INTERVAL *aui,
  int extra_len, bool has_active_child)
{
  if( extra_len > 0 )
  {
    aui->start_index -= extra_len;
    if( has_active_child )
      aui->has_active_child = true;
  }
}
}
This is done by reducing its start index by @C { extra_len }, and
updating its @C { has_active_child } if new children are actually added.
@PP
@BI { Opening and closing. }
Each of the four changes to the state of a child (inactive or active
to unassigned when opening, and unassigned to inactive or active when
closing) takes away old intervals (both a-intervals and au-intervals)
and adds in new ones.  We treat any change to any interval as taking
away one interval and adding another.  We need to find the old
intervals and subtract their costs, and find the new intervals and
add their costs.
@PP
This is straightforward in principle, although to explain all the code
in detail would be tedious.  As an example, here is what happens when the
child whose index in the sequence of open children is @C { open_index }
is opened and changes its state from inactive to unassigned.  First,
it is added to the list of open children and its @M { Z sub i } is
split into @M { Z sub i } and @M { Z sub {i+1} }.  Then comes this:
@ID {0.95 1.0} @Scale @C {
/* the au-intervals on each side merge */
aui_left = KheDrsAUIntervalFindLeft(eisc, open_index);
aui_right = KheDrsAUIntervalFindRight(eisc, open_index);
aui_merged = KheDrsAUIntervalMerge(aui_left, aui_right, false);
drs->solve_start_cost += KheDrsAUIntervalCost(aui_merged, eisc)
  - KheDrsAUIntervalCost(aui_left, eisc)
  - KheDrsAUIntervalCost(aui_right, eisc);

/* the a-interval to the right changes its unassigned_precedes */
ai_before = KheDrsAIntervalFindRight(eisc, open_index, false);
ai_after  = KheDrsAIntervalFindRight(eisc, open_index, true);
drs->solve_start_cost += KheDrsAIntervalCost(ai_after, eisc)
  - KheDrsAIntervalCost(ai_before, eisc);
}
The au-intervals on each side of the changed child become merged, so we
add in the cost of the new merged interval and subtract away the costs
of the two old unmerged intervals (possibly empty).  And the a-interval
to the right changes its @C { unassigned_precedes } from @C { false } to
@C { true }, which could change its cost, so again we add the new and
subtract the old.
@PP
No au-intervals or a-intervals are preserved in any data structure.  As
in the example above, they are all calculated on the fly as required.
@PP
@BI { Searching. }  Searching is basically function
@C { KheDrsExprSequenceEvalSignature }:
@ID {0.90 1.0} @Scale -1px @Break @C {
void KheDrsExprSequenceEvalSignature(KHE_DRS_EXPR_SEQUENCE es,
  KHE_DRS_SIGNER dsg, KHE_DRS_SIGNATURE prev_sig,
  KHE_DRS_SIGNATURE next_sig, KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  int index, i1, i2, active_len, next_di;  KHE_DRS_EXPR child_e;
  KHE_DRS_AU_INTERVAL aui_left, aui_right, aui_merged;
  KHE_DRS_AU_INTERVAL aui_before, aui_after;  KHE_DRS_CLOSED_SEQ dcs;
  KHE_DRS_A_INTERVAL ai_right_before, ai_right_after;
  KHE_DRS_A_INTERVAL ai_left, ai_right, ai_merged;  KHE_DRS_VALUE val;

  /* initialize active_len, depending on first day or not */
  next_di = KheDrsSignerOpenDayIndex(dsg);
  if( KheDrsOpenChildrenIndexIsFirst(&es->open_children_by_day, next_di) )
  {
    dcs = HaArrayFirst(es->closed_seqs);
    active_len = dcs->active_at_right;
    if( KheDrsClosedSeqAllActive(dcs) )  active_len += es->history;
  }
  else
    active_len = KheDrsExprDaySigVal((KHE_DRS_EXPR) es, next_di-1,prev_sig).i;

  /* handle each child_e whose last open day is next_di */
  KheDrsOpenChildrenForEach(&es->open_children_by_day, next_di, child_e,index)
  {
    if( child_e->value.i == 0 )
    {
      /* child_e moves from unassigned to inactive: update cost */
      ... see below ...

      /* set active_len for next iteration (child_e is now inactive) */
      dcs = HaArray(es->closed_seqs, index + 1);
      active_len = dcs->active_at_right;
    }
    else
    {
      /* child_e moves from unassigned to active: update cost */
      ... see below ...

      /* set active_len for next iteration (child_e is now active) */
      dcs = HaArray(es->closed_seqs, index + 1);
      if( KheDrsClosedSeqAllActive(dcs) )
	active_len += 1 + dcs->active_at_right;
      else
        active_len = dcs->active_at_right;
    }
  }

  /* if not last day, store active_len (adjusted) in sig */
  ... see below ...
}
}
It iterates over the open children whose value is being finalized on
some day, and over the adjacent closed sequences, and makes the same
cost changes as closing a child makes, only adding the changes to
@C { next_sig->cost }, rather than to @C { drs->solve_start_cost }.
We've omitted for the moment the parts that update @C { next_sig->cost }.
@PP
The signature value is the number of active children immediately to
the left of the start point of the iteration, called @C { active_len }
in the code.  Any unassigned children there were given values earlier
in the search path leading to the current solution, so this is the
length of both the a-interval and the au-interval immediately to the
left.  There is no need to search for these intervals.
@PP
The main focus of what we've shown here is to initialize
@C { active_len } and keep it up to date as the children
are processed.  If this is the first day, there is no
signature to retrieve @C { active_len } from.  Instead,
it is equal to the @C { active_at_right } field of the
(only) closed sequence just to the left of the current day,
increased by @C { es->history } if all the children to
the left are active.  On other days, @C { active_len } is
stored in the signature and retrieved from there.
@PP
The code then visits each open child @C { child_e } whose last open
day is the current day, and examines its value.  If it has changed
from unassigned to inactive, the cost is updated as explained below,
then @C { active_len } is updated to the correct value for the
following child.  Because @C { child_e } is now inactive, that value
is the @C { active_at_right } field of the next closed sequence.
@PP
If @C { child_e } has changed from unassigned to active, the new
@C { active_len } will still be the @C { active_at_right } value
if there is an inactive child within the next closed sequence.
But if the next closed sequence consists entirely of active
children, @C { active_len } will have its previous value plus
1 for @C { child_e } plus the @C { active_at_right } value.
@PP
After the last child has been handled, the remaining
@C { active_len } value has to be stored in @C { next_sig }
for retrieval on the next day.  Here is the
code omitted above:
@ID {0.90 1.0} @Scale @C {
/* if not last day, store adjusted active_len in next_sig */
if( !KheDrsOpenChildrenIndexIsLast(&es->open_children_by_day, next_di) )
{
  val.i = KheDrsAdjustedSigVal(active_len,
    es->adjust_type, es->min_limit, es->max_limit, 0);
  KheDrsSignatureAddState(next_sig, val, dsg, (KHE_DRS_EXPR) es);
}
}
As usual an adjusted value is stored.
@PP
We turn now to the two other parts of the function that were omitted,
that update solution cost.  When @C { child_e } changes from
unassigned to inactive, the enclosing au-interval splits, and the
a-interval to the right changes its @C { unassigned_precedes } flag
from @C { true } to @C { false }:
@ID {0.90 1.0} @Scale @C {
/* child_e moves from unassigned to inactive: update cost */
/* the enclosing au-interval splits */
aui_left = KheDrsAUIntervalMakeLeft(es, index, active_len);
aui_right = KheDrsAUIntervalFindRight(es, index, drs);
aui_merged = KheDrsAUIntervalMerge(aui_left, aui_right, false);
KheDrsSignatureAddCost(next_sig, KheDrsAUIntervalCost(aui_left, es)
  + KheDrsAUIntervalCost(aui_right, es)
  - KheDrsAUIntervalCost(aui_merged, es));

/* the a-interval to the right changes its unassigned_precedes */
ai_right_before = KheDrsAIntervalFindRight(es, index, true);
ai_right_after  = KheDrsAIntervalFindRight(es, index, false);
KheDrsSignatureAddCost(next_sig, KheDrsAIntervalCost(ai_right_after, es)
  - KheDrsAIntervalCost(ai_right_before, es));
}
Function @C { KheDrsAUIntervalMakeLeft } makes an au-interval
ending just before @C { index } with length @C { active_len };
no searching is required for this.
@PP
When @C { child_e } changes from unassigned to active, the
enclosing au-interval is unchanged, but it may gain an active
child for the first time, which could change its cost; and
the a-intervals on each side merge:
@ID {0.90 1.0} @Scale @C {
/* child_e moves from unassigned to active: update cost */
/* the enclosing au-interval is unchanged, but its cost may change */
aui_left = KheDrsAUIntervalMakeLeft(es, index, active_len);
aui_right = KheDrsAUIntervalFindRight(es, index, drs);
aui_before = KheDrsAUIntervalMerge(aui_left, aui_right, false);
aui_after = KheDrsAUIntervalMerge(aui_left, aui_right, true);
KheDrsSignatureAddCost(next_sig, KheDrsAUIntervalCost(aui_after, es)
  - KheDrsAUIntervalCost(aui_before, es));

/* the a-intervals on each side merge */
ai_left = KheDrsAIntervalMakeLeft(es, index, active_len);
ai_right = KheDrsAIntervalFindRight(es, index, true);
ai_merged = KheDrsAIntervalMerge(ai_left, ai_right);
KheDrsSignatureAddCost(next_sig, KheDrsAIntervalCost(ai_merged, es)
  - KheDrsAIntervalCost(ai_left, es)
  - KheDrsAIntervalCost(ai_right, es));
}
Function @C { KheDrsAIntervalMakeLeft } makes an a-interval
ending just before @C { index } with length @C { active_len };
no searching is required for this.
@PP
This ends our presentation of the @C { KHE_DRS_EXPR_INT_SEQ_COST }
type.  Including code for the various kinds of sequences, this
type occupies about 1900 lines of the source file.
@End @SubSubAppendix

@EndSubSubAppendices
@End @SubAppendix

@SubAppendix
    @Title { Solutions }
    @Tag { dynamic_impl.solns }
@Begin
@LP
A solution is a set of assignments of resources to tasks.  Curiously
enough, when we come to implement solutions and assignments we find
that the two ideas seem to merge:  an assignment could represent
just itself, but it could also represent the solution created by
adding that assignment to some other solution.  Experience has
shown that it is best to have no assignment objects, strictly
speaking, in the implementation, only solution objects.
@PP
There are several solution types, representing variants of the idea.
It would be wonderful if they formed a neat inheritance hierarchy,
with their shared fields in an abstract parent type.  Sadly,
efficiency demands prevent that.  One of the types is not even a
pointer type, as we'll see.  But we can say that a solution object
@C { S } of any type contains two main kinds of fields.
@PP
First, there are fields which define the assignments.  Some
of them may be pointers to other solution objects.  This almost always
means that the assignments of those other solutions are included in the
assignments of @C { S }.  Some of them may be (resource, task) pairs,
meaning that those basic assignments are included in @C { S }.
@PP
Many solutions are created in the context of expanding a particular
day solution.  They logically include that solution, but the pointer
to it is often omitted, since it is known from the context.
@PP
Second, there is one field, of type @C { KHE_DRS_SIGNATURE } or
@C { KHE_DRS_SIGNATURE_SET }, which holds the signature of @C { S },
including its cost.  It is often convenient to include in the signature
only things that differ from the signature of some other solution
that @C { S } is based on.  We take care below to define precisely
what goes into each signature.
@PP
When another solution's assignments are included in @C { S }, that
other solution's signature will be relevant to @C { S }.  But
different kinds of solutions have different ways of incorporating
the signatures of other solutions into their own signatures.  This
could be as simple as adding a signature to a signature set, or as
complicated as creating a new signature by evaluating expressions.
# @PP
# @I { The following subappendices have the correct titles but most
# of their content is out of date, in that it uses `assignment'
# where we now use `solution.' }
@BeginSubSubAppendices

@SubSubAppendix
    @Title { Day solutions }
    @Tag { dynamic_impl.solns.solns }
@Begin
@LP
# It is time to move on to solving:  everything from after opening
# ends to before closing begins.
# We'll also explain
# here the traversal of the final solution which initiates closing.
# @PP
The @I { day solution } is the most important type of solution.  It
implicitly contains all the time assignments from the initial solution,
and all the assignments of closed tasks from the initial solution.
It explicitly contains all the assignments of open tasks that will
ever be made on days up to and including one particular known open
day, called the solution's day, and no assignments for days after
that.  We often write `@M { d sub k }-solution' for a day solution
whose day is @M { d sub k }.
@PP
Day solution objects should probably have type @C { KHE_DRS_DAY_SOLN },
but at present their type is @C { KHE_DRS_SOLN }.  The nodes of the
dynamic programming search tree are day solutions:
@CD @Diag arrow { yes } treehsep { 2f } treevsep { 1.5f } blabelprox { SW }
{
//0.5f
@HTree {
    @Box blabel { @I { no day } } @C { KHE_DRS_SOLN }
    @FirstSub {
      @Box blabel { @I { day 0 } } @C { KHE_DRS_SOLN }
	@FirstSub @Box blabel { @I { day 1 } } @C { KHE_DRS_SOLN }
	@NextSub  @Box blabel { @I { day 1 } } @C { KHE_DRS_SOLN }
    }
    @NextSub {
      @Box blabel { @I { day 0 } } @C { KHE_DRS_SOLN }
	@FirstSub @Box blabel { @I { day 1 } } @C { KHE_DRS_SOLN }
	@NextSub  @Box blabel { @I { day 1 } } @C { KHE_DRS_SOLN }
    }
}
}
The day indexes in this diagram are open day indexes, not frame
indexes.  The search tree has one level of solutions for each open day,
plus the extra level holding the root solution.  The root solution is
special in that, despite being a day solution, it has no day.  There may
be closed days, obviously, but they are not visible in the search tree.
@PP
Type @C { KHE_DRS_SOLN } is defined by
@ID @C {
typedef struct khe_drs_soln_rec *KHE_DRS_SOLN;
typedef HA_ARRAY(KHE_DRS_SOLN) ARRAY_KHE_DRS_SOLN;
typedef HP_TABLE(KHE_DRS_SOLN) TABLE_KHE_DRS_SOLN;

struct khe_drs_soln_rec {
  struct khe_drs_signature_set_rec sig_set;
  KHE_DRS_SOLN			prev_soln;
  ARRAY_KHE_DRS_TASK_ON_DAY	prev_tasks;
  int				priqueue_index;
#if TESTING
  int				sorted_rank;
#endif
};
}
The @C { sig_set } field is the solution's signature.  There may be
many thousands of day solution objects, so to save one pointer, the
signature has type @C { struct khe_drs_signature_set_rec } rather
than the pointer type @C { KHE_DRS_SIGNATURE_SET }.
@PP
The @C { prev_soln } field points to this solution's predecessor
(its parent in the search tree).  Only the root solution has value
@C { NULL } for this field; in every other solution it is
non-@C { NULL }.
# The @C { day } field is the day of
# this solution (the day up to and including which all assignments are
# complete, and beyond which none have been made).
@PP
The @C { prev_tasks } field really belongs to the incoming edge,
but we are saving memory by not having edge objects.  In the
root solution it is empty, since there is no incoming edge.  In
other solutions, its length equals the number of open resources,
and the @C { i }th value is the task on day object assigned the
@C { i }th open resource on this solution's day, or @C { NULL }
if that resource is free.
@PP
It would arguably be more consistent for these task fields to have type
@C { KHE_DRS_TASK_SOLN } (Appendix {@NumberOf dynamic_impl.solns.task}),
the type of a solution containing one assignment of a resource to a
task.  But objects of that type have short lifetimes, whereas objects
of type @C { KHE_DRS_TASK_ON_DAY } have lifetime equal to the lifetime
of the solver.  So the use of @C { KHE_DRS_TASK_ON_DAY } objects can be
understood as another memory optimization.
@PP
The @C { priqueue_index } field holds the index of the solution
in the priority queue, if there is one.  This `back index' allows
the solution to be deleted efficiently from the priority queue
when it is found to be dominated by some other solution, and so
needs to be deleted and freed.  If the solution is not in the
priority queue, either because it has been deleted from it or
because the priority queue is not in use, @C { priqueue_index }
holds @C { -1 }.
@PP
The @C { sorted_rank } field holds the rank of this solution in
the sequence of all undominated solutions for its day, when those
solutions are sorted into non-decreasing cost order.  It is used
only for gathering statistics, which is why it is optional.
@PP
The operations on solutions begin with @C { KheDrsSolnMake }, which
makes a new solution object, and @C { KheDrsSolnFree }, which frees
a solution.  Then come @C { KheDrsSolnMarkExpanded } and
@C { KheDrsSolnNotExpanded }, which set and test the special
@C { -1 } value of the @C { priqueue_index } field.  Then comes this
rather ugly operation to work out which day the solution is for:
@ID @C {
KHE_DRS_DAY KheDrsSolnDay(KHE_DRS_SOLN soln,
  KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  KHE_DRS_TASK_ON_DAY dtd;  int i;  KHE_DRS_DAY prev_day;

  /* return NULL if this is the root solution, not on any day */
  if( soln->prev_soln == NULL )
    return NULL;

  /* if prev_tasks has a non-NULL task on day, return its day */
  HaArrayForEach(soln->prev_tasks, dtd, i)
    if( dtd != NULL )
      return dtd->day;

  /* else have to recurse back */
  prev_day = KheDrsSolnDay(soln->prev_soln, drs);
  if( prev_day == NULL )
    return HaArray(drs->open_days, 0);
  else
    return HaArray(drs->open_days, prev_day->open_day_index + 1);
}
}
The day comes from any non-@C { NULL } task, or else from the
parent solution.  Previously, solutions stored their day as an
attribute, but the author removed it to save memory.
# As it turns
# out, the only call on @C { KheDrsSolnDay } that needs to be efficient
# occurs when the priority queue is in use and we need to find out which
# day the minimum-cost solution deleted from the priority queue is for.
@PP
After @C { KheDrsSolnDay } there are functions used when hashing
a solution's signature, which just delegate their work to the
@C { sig_set } attribute:  @C { KheDrsSolnSignatureSetFullHash }
and so on.  For dominance testing there are two functions:
@IndentedList

@LI @C {
bool KheDrsSolnDoDominates(KHE_DRS_SOLN soln1, KHE_DRS_SOLN soln2,
  KHE_DRS_SIGNER_SET signer_set, KHE_COST trie_extra_cost,
  int trie_start_depth, int *dom_test_count,
  KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  *dom_test_count += 1;
  return KheDrsSignerSetDominates(signer_set, &soln1->sig_set,
    &soln2->sig_set, trie_extra_cost, trie_start_depth,
    soln1->prev_soln == soln2->prev_soln, drs);
}
}

@LI @C {
bool KheDrsSolnDominates(KHE_DRS_SOLN soln1, KHE_DRS_SOLN soln2,
  KHE_DRS_SIGNER_SET signer_set, int *dom_test_count,
  KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  return KheDrsSolnDoDominates(soln1, soln2, signer_set, 0, 0,
    dom_test_count, drs);
}
}

@EndList
We've omitted some debugging and testing code.
@C { KheDrsSolnDoDominates } is called directly only when
solutions are stored in a trie data structure; it avoids
visiting parts of the signatures that the trie has already
handled.  For the most part, @C { KheDrsSolnDominates } is
called.
# It performs the usual dominance test on the
# signatures of @C { soln1 } and @C { soln2 }, controlled
# by @C { signer_set }.
@PP
Next comes a function for overwriting one solution by another,
used when solutions are held in hash tables, and then this
little helper function:
@ID @C {
bool KheDrsSolnResourceIsAssigned(KHE_DRS_SOLN soln,
  KHE_DRS_RESOURCE dr, KHE_DRS_TASK_ON_DAY *dtd)
{
  if( soln->prev_soln == NULL )
  {
    /* this is the root solution, so there can be no assignment */
    return *dtd = NULL, false;
  }
  else
  {
    /* non-root solution, get assignment from soln->prev_tasks */
    *dtd = HaArray(soln->prev_tasks, dr->open_resource_index);
    return *dtd != NULL;
  }
}
}
If @C { dr } is assigned a task in @C { soln }, this sets @C { *dtd }
to the appropriate task on day object and returns @C { true }.
Otherwise (if @C { soln } is the root solution or @C { dr } is not
assigned in @C { soln }), it returns @C { false }.  This function
is called by @C { KheDrsResourceOnDayIsFixed }
(Appendix {@NumberOf dynamic_impl.expansion.resource_setup}).
@PP
Following this come some debug functions, including one that prints
a neat table showing the timetable of a given resource in a given
solution.
# There are several more complex functions on solutions,
# notably for implementing expansion, but the source code places
# them in different submodules, and we present them here in different
# sub-appendices.
@End @SubSubAppendix

@SubSubAppendix
    @Title { Mtask solutions }
    @Tag { dynamic_impl.solns.mtask }
@Begin
@LP
An @I { mtask solution } is an object representing a day solution
@M { S } plus the assignment on the following day of a resource
@M { r } to an unspecified task from a given mtask @M { c }.  The
mtask may be @C { NULL }, meaning that the resource has a free day.
We often write  `@M { c sub i }-solution' for an mtask solution whose
mtask is @M { c sub i }.  Here is the type:
@IndentedList

@LI @C {
typedef struct khe_drs_mtask_soln_rec *KHE_DRS_MTASK_SOLN;
typedef HA_ARRAY(KHE_DRS_MTASK_SOLN) ARRAY_KHE_DRS_MTASK_SOLN;
}

@LI @C {
struct khe_drs_mtask_soln_rec {
  KHE_DRS_SIGNATURE			sig;
  KHE_DRS_RESOURCE_ON_DAY		resource_on_day;
  KHE_DRS_MTASK				mtask;
  KHE_DRS_TASK_ON_DAY			fixed_task_on_day;
  ARRAY_KHE_DRS_MTASK_SOLN		skip_assts;
  int					skip_count;
};
}

@EndList
The day solution @M { S } is known from the context and is not
stored explicitly.
@PP
The @C { sig } field contains a signature holding the states of
the resource monitors of @M { r } after the assignment.  Its
@C { cost } field holds the extra cost of those monitors, beyond
their cost in @M { S }.  It is this field that makes this object
best interpreted as a solution, rather than as an assignment.
@PP
Even though the exact task to which the resource is assigned is
not specified, the signature is fully specified.  This is because
the tasks of one mtask have the same busy times and the same
workloads, and so they have the same effect on resource monitors.
@PP
The @C { resource_on_day } field, which is always non-@C { NULL },
holds the resource on day object representing @M { r } on the day
of the assignment---the day following @M { S }'s day.  The
@C { mtask } and @C { fixed_task_on_day } fields determine
what the resource on day is assigned to, as follows.
@PP
If @C { mtask != NULL }, the assignment may be to any task of that
mtask.  Someone will have to decide which of @C { mtask }'s tasks
to use before the assignment can actually be made.  In this case
@C { fixed_task_on_day } is not used.  Its value will be @C { NULL }.
@PP
Otherwise, @C { mtask == NULL }.  The decision about which task
to use has already been made, and @C { fixed_task_on_day } holds that
decision.  It could be @C { NULL }, in which case the decision is to
assign a free day.  Otherwise its day is the day of @C { resource_on_day }.
# @PP
# There is a similarity in meaning between an assignment to mtask
# object whose mtask is @C { NULL }, and an assignment to task
# object:  both specify the assignment of a particular task.  They
# differ, however, in how they are used by expansion, as we will see.
@PP
The last two fields support the implementation of mtask pair dominance.
As explained in Appendix {@NumberOf dynamic_theory.solutions.two_extra}, 
this involves storing a list of mtask solutions in each mtask
solution object, and incrementing a counter in those mtask solutions
when this one is used.  The @C { skip_assts } field holds the mtask
solutions, and the @C { skip_count } field holds the counter.
@PP
The @C { KHE_DRS_MTASK_SOLN } submodule holds a few simple operations
on mtask solution objects, including @C { KheDrsMTaskSolnMake } for
creating them, and @C { KheDrsMTaskSolnFree } for freeing them.  After
that there is another submodule holding the code for dominance testing
between mtask solutions, which implements the method given in the theory
appendix:
@ID @C {
bool KheDrsMTaskSolnDominates(KHE_DRS_MTASK_SOLN dms_r_c1,
  KHE_DRS_MTASK_SOLN dms_r_c2, KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  KHE_COST avail_cost;  int m;  bool res;
  KHE_DRS_RESOURCE dr;  KHE_DRS_MTASK dmt_r_c1, dmt_r_c2;

  dr = dms_r_c1->resource_on_day->encl_dr;
  m = KheDrsResourceSetCount(drs->open_resources);
  avail_cost = 0;
  dmt_r_c1 = dms_r_c1->mtask;
  dmt_r_c2 = dms_r_c2->mtask;
  res = KheDrsMTaskOneExtraAvailable(dmt_r_c1, m) &&
    KheDrsMTaskMinCost(dmt_r_c1, KHE_DRS_ASST_OP_UNASSIGN, dr,
      NULL, m, &avail_cost) &&
    KheDrsMTaskMinCost(dmt_r_c2, KHE_DRS_ASST_OP_ASSIGN, dr,
      NULL, m, &avail_cost) &&
    KheDrsSignerDominates(dms_r_c1->resource_on_day->signer,
      KheDrsMTaskSolnSignature(dms_r_c1),
      KheDrsMTaskSolnSignature(dms_r_c2), &avail_cost);
  return res;
}
}
We've omitted some debugging code here.  There is also a function
(arguably out of place) for finding all pairs of mtask solutions
that could be tested for dominance, making the tests, and removing
any dominated mtask solutions:
@ID @C {
void KheDrsMTaskSolnDominanceInit(KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  int ri, i, j;  KHE_DRS_RESOURCE dr;
  KHE_DRS_MTASK_SOLN dms_r_c1, dms_r_c2;
  KheDrsResourceSetForEach(drs->open_resources, dr, ri)
    HaArrayForEach(dr->expand_mtask_solns, dms_r_c1, i)
      for( j = i + 1;  j < HaArrayCount(dr->expand_mtask_solns);  j++ )
      {
	/* for each distinct pair of assignments (dms_r_c1, dms_r_c2) */
	dms_r_c2 = HaArray(dr->expand_mtask_solns, j);
	if( KheDrsMTaskSolnDominates(dms_r_c1, dms_r_c2, drs) )
	{
	  /* dms_r_c1 dominates dms_r_c2, so delete dms_r_c2 */
	  HaArrayDeleteAndShift(dr->expand_mtask_solns, j);
	  if( dr->expand_free_mtask_soln == dms_r_c2 )
            dr->expand_free_mtask_soln = NULL;
	  KheDrsMTaskSolnFree(dms_r_c2, drs);
	  j--;  /* and try the next dms_r_c2 */
	}
	else if( KheDrsMTaskSolnDominates(dms_r_c2, dms_r_c1, drs) )
	{
	  /* dms_r_c2 dominates dms_r_c1, so delete dms_r_c1 */
	  HaArrayDeleteAndShift(dr->expand_mtask_solns, i);
	  if( dr->expand_free_mtask_soln == dms_r_c1 )
            dr->expand_free_mtask_soln = NULL;
	  KheDrsMTaskSolnFree(dms_r_c1, drs);
	  i--;
	  break;  /* and try the next dms_r_c1 */
	}
      }
}
}
Again we've omitted some debugging code.
By the time this function is called, all mtask solution objects
for a given resource @C { dr } on the current day are stored in
array @C { dr->expand_mtask_solns }.  This function finds all
unordered pairs of those, tests each pair both ways for dominance,
and deletes any dominated ones.  Care is needed to continue
iterating correctly when an mtask solution is deleted.
@PP
Finally comes another submodule, holding the code for the part of
the expansion operation which is concerned with mtask solutions.
This code is presented in Appendix {@NumberOf dynamic_impl.expansion}.  
@End @SubSubAppendix

@SubSubAppendix
    @Title { Mtask pair solutions }
    @Tag { dynamic_impl.solns.mtask_pair }
@Begin
@LP
An @I { mtask pair solution } is like an mtask solution except
that it adds two assignments of resources to mtasks on
the day after @M { S }, rather than one.  We might use the
notation `@M { c sub i c sub j }-solution' for an mtask pair
solution involving mtasks @M { c sub i } and @M { c sub j }.
@PP
There is no @C { KHE_DRS_MTASK_PAIR_SOLN } object type.  Instead,
two mtask solutions that together make up one mtask pair solution
are passed around side by side.
@PP
Here is the code for deciding whether mtask pair solution
@C { {dms_r1_c1, dms_r2_c2} } dominates mtask pair solution
@C { {dms_r1_c2, dms_r2_c1} }.  The variable names indicate
which resource is involved (@C { r1 } or @C { r2 }) and which
mtask (@C { c1 } or @C { c2 }):
@ID @C {
bool KheDrsMTaskPairSolnDominates(KHE_DRS_MTASK_SOLN dms_r1_c1,
  KHE_DRS_MTASK_SOLN dms_r2_c2, KHE_DRS_MTASK_SOLN dms_r1_c2,
  KHE_DRS_MTASK_SOLN dms_r2_c1, KHE_DRS_RESOURCE dr1,
  KHE_DRS_RESOURCE dr2, KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  KHE_COST avail_cost;  int m;  bool res;
  KHE_DRS_MTASK dmt_r1_c1, dmt_r2_c2;

  m = KheDrsResourceSetCount(drs->open_resources);
  avail_cost = 0;
  dmt_r1_c1 = dms_r1_c1->mtask;
  dmt_r2_c2 = dms_r2_c2->mtask;
  res =
    KheDrsMTaskMinCost(dmt_r1_c1, KHE_DRS_ASST_OP_REPLACE,
      dr1, dr2, m, &avail_cost) &&
    KheDrsMTaskMinCost(dmt_r2_c2, KHE_DRS_ASST_OP_REPLACE,
      dr2, dr1, m, &avail_cost) &&
    KheDrsSignerDominates(dms_r1_c1->resource_on_day->signer,
      KheDrsMTaskSolnSignature(dms_r1_c1),
      KheDrsMTaskSolnSignature(dms_r1_c2), &avail_cost) &&
    KheDrsSignerDominates(dms_r2_c1->resource_on_day->signer,
      KheDrsMTaskSolnSignature(dms_r2_c2),
      KheDrsMTaskSolnSignature(dms_r2_c1), &avail_cost);
  return res;
}
}
Some debug code has been omitted.  The algorithm is the one
presented in the theory appendix.
@PP
To help with testing all pairs of mtask pair solutions for dominance,
we need this function, which determines whether resource @C { dr }
contains an mtask solution object corresponding to @C { dms }:
@ID @C {
bool KheDrsResourceHasMTaskSoln(KHE_DRS_RESOURCE dr,
  KHE_DRS_MTASK_SOLN dms, KHE_DRS_MTASK_SOLN *res)
{
  KHE_DRS_MTASK_SOLN dms2;  int i;

  if( dms->mtask != NULL )
  {
    /* Case 1: mtask != NULL */
    HaArrayForEach(dr->expand_mtask_solns, dms2, i)
      if( dms2->mtask == dms->mtask )
	return *res = dms2, true;
    return *res = NULL, false;
  }
  else if( dms->fixed_task_on_day != NULL )
  {
    /* Case 2: mtask == NULL && fixed_task_on_day != NULL */
    return *res = NULL, false;
  }
  else
  {
    /* Case 3: mtask == NULL && fixed_task_on_day == NULL */
    HaArrayForEach(dr->expand_mtask_solns, dms2, i)
      if( dms2->mtask == NULL && dms2->fixed_task_on_day == NULL )
	return *res = dms2, true;
    return *res = NULL, false;
  }
}
}
If @C { dms } has a non-@C { NULL } @C { mtask } field, we
search @C { dr }'s mtask solutions for one for the same mtask.
Otherwise, we do a similar search if the assignment is for a free
day; else we give up.
@PP
Now we are ready to find all pairs of mtask pair solutions
and record cases of dominance in the @C { skip_assts } arrays
of the dominated mtask pair solutions:
@ID {0.90 1.0} @Scale @C {
void KheDrsMTaskPairSolnDominanceInit(KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  int i1, i2, i, j;  KHE_DRS_RESOURCE dr1, dr2;
  KHE_DRS_MTASK_SOLN dms_r1_c1, dms_r1_c2, dms_r2_c1, dms_r2_c2;
  KheDrsResourceSetForEach(drs->open_resources, dr1, i1)
    if( HaArrayCount(dr1->expand_mtask_solns) >= 2 )
      for( i2 = i1 + 1; i2 < KheDrsResourceSetCount(drs->open_resources); i2++ )
      {
        dr2 = KheDrsResourceSetResource(drs->open_resources, i2);
        if( HaArrayCount(dr2->expand_mtask_solns) >= 2 )
          HaArrayForEach(dr1->expand_mtask_solns, dms_r1_c1, i)
          {
            if( KheDrsResourceHasMTaskSoln(dr2, dms_r1_c1, &dms_r2_c1) )
            {
              for( j = i + 1;  j < HaArrayCount(dr1->expand_mtask_solns);  j++ )
              {
                dms_r1_c2 = HaArray(dr1->expand_mtask_solns, j);
                if( KheDrsResourceHasMTaskSoln(dr2, dms_r1_c2, &dms_r2_c2) )
                {
                  if( KheDrsMTaskPairSolnDominates(dms_r1_c1, dms_r2_c2,
                      dms_r1_c2, dms_r2_c1, dr1, dr2, drs) )
                  {
                    /* S + dms_r1_c2 + dms_r2_c1 is dominated */
                    HaArrayAddLast(dms_r1_c2->skip_assts, dms_r2_c1);
                    HaArrayAddLast(dms_r2_c1->skip_assts, dms_r1_c2);
                  }
                  else if( KheDrsMTaskPairSolnDominates(dms_r1_c2, dms_r2_c1,
                      dms_r1_c1, dms_r2_c2, dr1, dr2, drs) )
                  {
                    /* S + dms_r1_c1 + dms_r2_c2 is dominated */
                    HaArrayAddLast(dms_r1_c1->skip_assts, dms_r2_c2);
                    HaArrayAddLast(dms_r2_c2->skip_assts, dms_r1_c1);
                  }
                }
              }
            }
          }
      }
}
}
The two outer loops iterate over all unordered pairs of distinct
open resources @C { {dr1, dr2} } such that both resources have two
or more mtask solutions.  The two inner loops iterate over all
unordered pairs of distinct mtask solutions for @C { dr1 }, called
@C { dms_r1_c1 } and @C { dms_r1_c2 }, for which there are
corresponding mtask solutions for @C { dr2 }, called @C { dms_r2_c1 }
and @C { dms_r2_c2 }.  These four mtask solutions make two mtask
pair solutions, which are then tested for dominance both ways.
Dominated solutions are marked by adding entries to their
@C { skip_assts } arrays.
@PP
This algorithm could mark a given mtask pair solution as dominated
more than once.  This does not affect its correctness, so it has
seemed simplest not to prevent it.
@End @SubSubAppendix

@SubSubAppendix
    @Title { Task solutions }
    @Tag { dynamic_impl.solns.task }
@Begin
@LP
A @I { task solution } represents one day solution @M { S } plus one
assignment of a resource @M { r } to a specific task @M { t } on the
day after @M { S }'s day.  We might write `@M { t }-solution' for a
task solution whose task is @M { t }.
@PP
Type @C { KHE_DRS_TASK_SOLN } is defined by
@ID @C {
typedef struct khe_drs_task_soln_rec KHE_DRS_TASK_SOLN;
typedef HA_ARRAY(KHE_DRS_TASK_SOLN) ARRAY_KHE_DRS_TASK_SOLN;

struct khe_drs_task_soln_rec {
  KHE_DRS_MTASK_SOLN		mtask_soln;
  KHE_DRS_TASK_ON_DAY		fixed_dtd;
};
}
As shown, this is implemented by taking an mtask solution
@C { mtask_soln }, which assigns @M { r } to an unspecified
task of some mtask, and adding @C { fixed_dtd } to it,
which specifies the task within the mtask.  Here
@C { fixed_dtd } may be @C { NULL }, meaning that the assignment
is to a free day, but @C { mtask_soln } is never @C { NULL }.
@PP
Task solution objects come and go quickly during expansion, so it
has seemed best to implement them as record types, to avoid
allocating and deallocating objects:
@ID @C {
KHE_DRS_TASK_SOLN KheDrsTaskSolnMake(KHE_DRS_MTASK_SOLN dms,
  KHE_DRS_TASK_ON_DAY fixed_dtd)
{
  KHE_DRS_TASK_SOLN res;
  res.mtask_soln = dms;
  res.fixed_dtd = fixed_dtd;
  return res;
}
}
After this come a few simple functions for accessing the
attributes of a task solution, such as
@ID @C {
KHE_DRS_RESOURCE KheDrsTaskSolnResource(KHE_DRS_TASK_SOLN dts)
{
  return dts.mtask_soln->resource_on_day->encl_dr;
}
}
The last two functions update the external expressions affected by a
given task solution's task on day, so that they reflect the
assignment that the task solution expresses:
@ID @C {
void KheDrsTaskSolnLeafSet(KHE_DRS_TASK_SOLN dts, bool whole_task)
{
  KHE_DRS_RESOURCE dr;  KHE_DRS_TASK dt;  int i;
  KHE_DRS_TASK_ON_DAY dtd;
  if( dts.fixed_dtd != NULL )
  {
    dr = KheDrsTaskSolnResource(dts);
    if( whole_task )
    {
      dt = dts.fixed_dtd->encl_dt;
      HaArrayForEach(dt->days, dtd, i)
	KheDrsTaskOnDayLeafSet(dtd, dr);
    }
    else
      KheDrsTaskOnDayLeafSet(dts.fixed_dtd, dr);
  }
}
}
If the assignment is for a free day, there is nothing to do.
Otherwise this code offers the choice of using @C { dts } as
a template for assigning the task on all of its days (this will
be significant if it is a multi-day task), or just assigning
it on @C { dts }'s day.  Either way, @C { KheDrsTaskOnDayLeafSet }
is called to inform the expressions affected by @C { dtd } that
@C { dr } is being assigned to it.  Then
@ID @C {
void KheDrsTaskSolnLeafClear(KHE_DRS_TASK_SOLN dts, bool whole_task)
{
  KHE_DRS_TASK dt;  int i;  KHE_DRS_TASK_ON_DAY dtd;
  if( dts.fixed_dtd != NULL )
  {
    if( whole_task )
    {
      dt = dts.fixed_dtd->encl_dt;
      HaArrayForEach(dt->days, dtd, i)
	KheDrsTaskOnDayLeafClear(dtd);
    }
    else
      KheDrsTaskOnDayLeafClear(dts.fixed_dtd);
  }
}
}
may be called to undo the effect of @C { KheDrsTaskSolnLeafSet }.
@PP
After these functions there is a submodule holding the task solution
functions related to expansion, which will be documented later.
@End @SubSubAppendix

@SubSubAppendix
    @Title { Task solution sets }
    @Tag { dynamic_impl.solns.task_soln_sets }
@Begin
@LP
A @I { task solution set } is a set of task solutions.  It has
type @C { KHE_DRS_TASK_SOLN_SET }:
@ID @C {
typedef struct khe_drs_task_soln_set *KHE_DRS_TASK_SOLN_SET;
typedef HA_ARRAY(KHE_DRS_TASK_SOLN_SET) ARRAY_KHE_DRS_TASK_SOLN_SET;

struct khe_drs_task_soln_set {
  ARRAY_KHE_DRS_TASK_SOLN	task_solns;
};
}
Its operations include the straightforward
@C { KheDrsTaskSolnSetMake }, @C { KheDrsTaskSolnSetFree },
@C { KheDrsTaskSolnSetClear }, @C { KheDrsTaskSolnSetCount },
@C { KheDrsTaskSolnSetAddLast }, and
@ID @C {
void KheDrsTaskSolnSetLeafSet(KHE_DRS_TASK_SOLN_SET dtss,
  bool whole_task)
{
  KHE_DRS_TASK_SOLN dts;  int i;
  KheDrsTaskSolnSetForEach(dtss, dts, i)
    KheDrsTaskSolnLeafSet(dts, whole_task);
}
}
with its corresponding
@ID @C {
void KheDrsTaskSolnSetLeafClear(KHE_DRS_TASK_SOLN_SET dtss,
  bool whole_task)
{
  KHE_DRS_TASK_SOLN dts;  int i;
  KheDrsTaskSolnSetForEach(dtss, dts, i)
    KheDrsTaskSolnLeafClear(dts, whole_task);
}
}
These last two functions assign and unassign a whole set of task
solutions at once.
@End @SubSubAppendix

#@SubSubAppendix
#    @Title { Shift assignments (obsolete) }
#    @Tag { dynamic_impl.solns.shift_obsolete }
#@Begin
#@LP
#An @I { assignment to shift } object represents the assignment of
#a resource on day to an unspecified task from an unspecified mtask
#from a given shift.  The shift may be @C { NULL }, meaning
#that the resource has a free day.  Here is the type:
#@IndentedList
#
#@LI @C {
#typedef struct khe_drs_asst_to_shift_rec *KHE_DRS_ASST_TO_SHIFT;
#typedef HA_ARRAY(KHE_DRS_ASST_TO_SHIFT) ARRAY_KHE_DRS_ASST_TO_SHIFT;
#}
#
#@LI @C {
#struct khe_drs_asst_to_shift_rec {
#  KHE_DRS_RESOURCE_ON_DAY		resource_on_day;
#  struct khe_drs_signature_rec		sig;
#  bool					used;
#};
#}
#
#@EndList
#The shift itself is not stored in the object (although it could be),
#because it is always known from the context, as it happens.  The
#resource on day is stored, along with a signature representing
#the effect on the resource constraints of that resource of
#assigning a task from this shift.  The effect is the same for
#any task, because all tasks from a given shift have the same
#busy times and the same workloads, and these alone are what
#matter to resource constraints.  Finally, there is a @C { used }
#flag which records whether any other object actually references
#this object; if not, this object can be and is freed early.
#@PP
#Once again there are very few operations on assignment to shift
#objects, in fact just @C { KheDrsAsstToShiftMake } and
#@C { KheDrsAsstToShiftFree }.  Assignment to shift objects
#are not used seriously by the implementation.  They
#are only included to avoid storing the same signature
#in multiple assignment to mtask objects.  Anyway here is
#@C { KheDrsAsstToShiftMake }:
#@ID @C {
#KHE_DRS_ASST_TO_SHIFT KheDrsAsstToShiftMake(KHE_DRS_RESOURCE_ON_DAY drd,
#  KHE_DRS_TASK_ON_DAY dtd, KHE_DRS_SOLN prev_soln,
#  KHE_DYNAMIC_RESOURCE_SOLVER drs)
#{
#  KHE_DRS_ASST_TO_SHIFT res;
#  if( HaArrayCount(drs->free_assts_to_shifts) > 0 )
#  {
#    res = HaArrayLastAndDelete(drs->free_assts_to_shifts);
#    KheDrsSignatureClear(&res->sig, 0);
#  }
#  else
#  {
#    HaMake(res, drs->arena);
#    KheDrsSignatureInit(&res->sig, 0, drs->arena);
#  }
#  res->resource_on_day = drd;
#  res->used = false;
#  KheDrsResourceSignatureSet(&res->sig, drd, dtd, prev_soln, drs);
#  return res;
#}
#}
#It is very standard apart from the call to @C { KheDrsResourceSignatureSet }:
#@ID @C {
#void KheDrsResourceSignatureSet(KHE_DRS_SIGNATURE sig,
#  KHE_DRS_RESOURCE_ON_DAY drd, KHE_DRS_TASK_ON_DAY dtd,
#  KHE_DRS_SOLN prev_soln, KHE_DYNAMIC_RESOURCE_SOLVER drs)
#{
#  /* set leaf expressions for an assignment of drd to dtd */
#  KheDrsResourceOnDayLeafSet(drd, dtd, drs);
#
#  /* evaluate the resource constraints */
#  KheDrsDominatorEvalSignature(drd->dominator, true, prev_soln,
#    drd->day->open_day_index, sig, drs);
#
#  /* clear the leaf expressions */
#  KheDrsResourceOnDayLeafClear(drd, dtd, drs);
#}
#}
#This informs @C { drd }'s leaf expressions that @C { drd } is
#assigned to @C { dtd } (possibly @C { NULL }, denoting a free
#day), evaluates the signature, then clears the leaf expressions
#back to their default state.
## @PP
## When we assign a resource on day object for some resource @M { r }
## to a task on day object (possibly @C { NULL }), @M { r }'s resource
## constraints are affected, and they need to supply updated costs and
## signature values.  Starting from a given @M { d sub k }-complete
## solution @M { S }, there are many @M { d sub {k+1} }-complete
## extensions of @M { S } which contain the same assignment for
## @M { r }.  Rather than evaluating this same change to the resource
## constraints of @M { r } many times over, we evaluate it just once
## and store it in the inherited @C { cost } and @C { sig } fields of
## the assignment.  Then when building any @M { d sub {k+1} }-complete
## extension of @M { S } that includes this assignment, we just need to
## add its @C { cost } field to the new cost and append its @C { sig }
## field  to the new signature.  It is faster this way, and it makes
## these costs and signatures available for other purposes, notably
## one-extra and two-extra selection.
## @PP
## We stress that the @C { cost } and @C { sig } fields of an
## assignment object relate only to the resource constraint
## costs incurred by that assignment.  Carrying it out will
## also affect event resource constraint costs, but those are
## not included here.  Event resource constraint costs could
## depend on other assignments as well as this one, whereas resource
## constraint costs do not.
#@End @SubSubAppendix

@SubSubAppendix
    @Title { Shift solutions }
    @Tag { dynamic_impl.solns.shift }
@Begin
@LP
A @I { shift solution } is a solution whose assignments consist of
the assignments of a day solution @M { S }, plus one assignment
for each resource in a subset @M { R } of the open resources to an
open task of a shift @M { s }, whose tasks begin on the day following
@M { S }'s day.  These assignments are the only ones that will be made
to the open tasks of @M { s }; all others will remain unassigned.
We may write `@M { s sub i }-solution' for a shift solution whose
shift is @M { s sub i }.
@PP
A shift solution is represented by type @C { KHE_DRS_SHIFT_SOLN }:
@ID @C {
typedef struct khe_drs_shift_soln_rec *KHE_DRS_SHIFT_SOLN;
typedef HA_ARRAY(KHE_DRS_SHIFT_SOLN) ARRAY_KHE_DRS_SHIFT_SOLN;

struct khe_drs_shift_soln_rec {
  KHE_DRS_SIGNATURE			sig;
  KHE_DRS_TASK_SOLN_SET			task_solns;
  ARRAY_KHE_DRS_SHIFT_SOLN		skip_assts;
  int					skip_count;
};
}
@M { S } and @M { s } are known from the context, so, apart from the
signature and skip fields explained below, a shift solution just
stores one @C { KHE_DRS_TASK_SOLN_SET } object, representing the
assignment of one resource to one task for each member of @M { R }.
@PP
The @C { sig } field holds a signature, used for dominance testing
between shift solution objects with the same @M { S }, @M { s }, and
@M { R }.  It is created by evaluating each expression derived from
an event resource monitor which is affected by at least one task from
@M { s }.  These are the only expressions relevant to dominance testing
between solutions for the same @M { S }, @M { s }, and @M { R }:
event resource monitors not affected by the tasks of @M { s } are
clearly irrelevant, and resource monitors for each resource @M { r }
in @M { R } have the same values whichever task @M { r } is assigned
to, because the tasks of @M { s } have the same busy times and
workloads, by definition.
@PP
The @C { skip_assts } and @C { skip_count } fields hold the results
of shift pair dominance testing 
(Appendix {@NumberOf dynamic_impl.solns.shift_pair}).  They work just
like the corresponding fields in mtask solution objects, to ensure that
pairs of shift solutions known to be uncompetitive are never used together.
@PP
The operations on shift solutions include @C { KheDrsShiftSolnMake },
for making a new shift solution object, and @C { KheDrsShiftSolnFree },
for freeing it.  There is also a function for dominance testing
between shift solution objects, assuming the signatures are set:
@ID @C {
bool KheDrsShiftSolnDominates(KHE_DRS_SHIFT_SOLN dss1,
  KHE_DRS_SHIFT_SOLN dss2, KHE_DRS_SIGNER dsg,
  KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  KHE_COST avail_cost;
  avail_cost = 0;
  return KheDrsSignerDominates(dsg, dss1->sig, dss2->sig, &avail_cost);
}
}
After this comes a submodule which implements the part of expansion
that is concerned with shift solutions.  For that, see
Appendix {@NumberOf dynamic_impl.expansion}.
@End @SubSubAppendix

@SubSubAppendix
    @Title { Shift solution tries }
    @Tag { dynamic_impl.solns.shift_soln_tries }
@Begin
@LP
For many objects the solver offers a type representing a set of
those objects.  For example, @C { KHE_DRS_TASK_SOLN_SET } represents
a set of @C { KHE_DRS_TASK_SOLN } objects.
@PP
Type @C { KHE_DRS_SHIFT_SOLN_TRIE } represents a set of
@C { KHE_DRS_SHIFT_SOLN } objects.  For efficient retrieval,
a trie data structure is used rather than the usual array.  As
we saw above, the main attributes that define a shift solution
are a day solution @M { S }, a shift @M { s } for the day
following @M { S }'s day, and a set of open resources @M { R }.
There is one trie for each combination of @M { S } and @M { s },
organized so that @M { R } can be used as an index to efficiently
retrieve the shift solutions for @M { S }, @M { s }, and any
given @M { R }.  This trie is stored in @M { s }'s
@C { soln_trie } field during the expansion of @M { S }.
@PP
The indexing is straightforward.  Given a set of open
resources @M { R }, sorted by increasing open resource index,
the open resource index of the first resource is used to index
into the root of the trie, producing a child trie which is
indexed using the open resource index of the second resource
of @M { R }, and so on.  When all resources have been used
up, the node we are at contains a simple array of shift
solutions (in fact, all undominated ones, as we will see)
for the given @M { S }, @M { s }, and @M { R }.  Note that
@M { R } may be empty, in which case retrieval ends at the root
of the trie.
@PP
Here is the type declaration for @C { KHE_DRS_SHIFT_SOLN_TRIE }:
@ID @C {
typedef struct khe_drs_shift_soln_trie_rec *KHE_DRS_SHIFT_SOLN_TRIE;
typedef HA_ARRAY(KHE_DRS_SHIFT_SOLN_TRIE) ARRAY_KHE_DRS_SHIFT_SOLN_TRIE;

struct khe_drs_shift_soln_trie_rec {
  ARRAY_KHE_DRS_SHIFT_SOLN		shift_solns;
  ARRAY_KHE_DRS_SHIFT_SOLN_TRIE		children;
};
}
At any level, we may come to the end of @M { R }; then the
@C { shift_solns } field holds the undominated shift solutions
for @M { S }, @M { s }, and @M { R }.  Or if we have not exhausted
@M { R }, the open resource index of the next resource is used to
index into the @C { children } field to take the search to the
next level down.  @C { NULL } is a legal shift solution trie and
represents a trie containing no shift solution objects.
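@PP
The indexing scheme can be illustrated in isolation.  The following
sketch (hypothetical names, not taken from KHE) stores one payload
value per subset and retrieves it by descending through one child per
element of the sorted index set:

```c
#include <assert.h>
#include <stdlib.h>

#define MAX_INDEX 4

/* one trie node: a payload for the subset consumed so far (here -1
   means "no solutions"), plus one child slot per possible index */
typedef struct trie_node {
  int payload;
  struct trie_node *children[MAX_INDEX];
} TRIE_NODE;

static TRIE_NODE *TrieNodeMake(int payload)
{
  TRIE_NODE *n = calloc(1, sizeof(TRIE_NODE));
  n->payload = payload;
  return n;
}

/* descend using sorted index set r[0..len-1]; the node reached when
   the set is exhausted holds the payload for exactly that set; a
   NULL node represents a trie containing no solutions at all */
static int TrieLookup(TRIE_NODE *root, const int *r, int len)
{
  TRIE_NODE *n = root;
  for( int i = 0;  n != NULL && i < len;  i++ )
    n = n->children[r[i]];
  return n == NULL ? -1 : n->payload;
}
```
As in the real trie, lookup time is proportional to the cardinality
of the set being looked up, not to the total number of possible
indexes.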
@PP
Some open resources are forced, for various reasons, to be assigned to
particular known tasks.  We call them @I { fixed resources }, and we
call the remaining open resources @I { free resources }.  If a resource
is fixed to a task not in @M { s }, then it may not appear in an
@M { R } associated with @M { s }.  If it is fixed to a task in
@M { s }, then it must appear in every @M { R } associated with @M { s }.
We ensure this by storing fixed resources separately; that is,
we store @M { R } in the form @M { R = R sub fixed cup R sub free }.
@M { R sub fixed } is represented by a set of assignments (task
solutions).  The indexing uses @M { R sub free } only.
@PP
An arguably more natural data structure would be a binary tree
in which the left subtree of the root handles all subsets @M { R }
that do not contain the first open resource, while the right
subtree of the root handles all subsets @M { R } that do contain
the first open resource, and so on recursively.  We prefer the
trie because searching the binary tree takes time proportional
to the number of open resources, whereas searching the trie
takes time proportional to the cardinality of @M { R }.
@PP
Operations on shift solution tries include @C { KheDrsShiftSolnTrieMake }
for making a new trie node, and @C { KheDrsShiftSolnTrieFree } for
freeing a trie node along with its shift solutions and proper descendants:
@ID @C {
void KheDrsShiftSolnTrieFree(KHE_DRS_SHIFT_SOLN_TRIE dsst,
  KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  KHE_DRS_SHIFT_SOLN dss;  int i;  KHE_DRS_SHIFT_SOLN_TRIE child_dsst;

  if( dsst != NULL )
  {
    /* free the shift solution objects */
    HaArrayForEach(dsst->shift_solns, dss, i)
      KheDrsShiftSolnFree(dss, drs);

    /* free the proper descendant trie objects */
    HaArrayForEach(dsst->children, child_dsst, i)
      KheDrsShiftSolnTrieFree(child_dsst, drs);

    /* free dsst itself */
    HaArrayAddLast(drs->shift_soln_trie_free_list, dsst);
  }
}
}
For dominance testing there is
@ID {0.95 1.0} @Scale @C {
bool KheDrsShiftSolnTrieDominates(KHE_DRS_SHIFT_SOLN_TRIE dsst,
  KHE_DRS_SHIFT_SOLN dss, KHE_DRS_SIGNER dsg,
  KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  KHE_DRS_SHIFT_SOLN other_dss;  int i;
  HaArrayForEach(dsst->shift_solns, other_dss, i)
    if( KheDrsShiftSolnDominates(other_dss, dss, dsg, drs) )
      return true;
  return false;
}
}
which returns @C { true } if any of the shift solutions within node
@C { dsst } dominates new shift solution @C { dss }.  For
@C { KheDrsShiftSolnDominates }, see
Appendix {@NumberOf dynamic_impl.solns.shift}.
The dominance test does not recurse, because dominance testing is
only between shift solutions for the same @M { S }, @M { s },
and @M { R }, and these are all held in one node of the trie.
There are similar functions for removing dominated shift solutions
from @C { dsst } and adding a new shift solution to it,
and these combine to make
@ID {0.95 1.0} @Scale @C {
void KheDrsShiftSolnTrieMeldShiftSoln(KHE_DRS_SHIFT_SOLN_TRIE dsst,
  KHE_DRS_SHIFT_SOLN dss, KHE_DRS_SIGNER dsg,
  KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  if( KheDrsShiftSolnTrieDominates(dsst, dss, dsg, drs) )
  {
    /* dss is dominated, so free dss */
    KheDrsShiftSolnFree(dss, drs);
  }
  else
  {
    /* remove other solns that dss dominates, then add dss to dsst */
    KheDrsShiftSolnTrieRemoveDominated(dsst, dss, dsg, drs);
    KheDrsShiftSolnTrieAddShiftSoln(dsst, dss);
  }
}
}
which follows the usual algorithm:  if any of the existing shift
solutions dominates the new one, free the new one, otherwise
remove and free dominated ones and add the new one.
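@PP
This meld step is the same one used throughout the solver.  Here is a
standalone sketch (hypothetical code, not from KHE) in which dominance
degenerates to plain cost comparison, as it does in practice for
shift solutions:

```c
#include <assert.h>

#define MAX_SOLNS 16

/* a set of solutions, each reduced to a bare cost */
typedef struct {
  int costs[MAX_SOLNS];
  int count;
} SOLN_SET;

/* meld new_cost into ss: return 0 (dominated) if some existing
   solution is at least as good, else remove the existing solutions
   that the new one dominates, add the new one, and return 1 */
static int SolnSetMeld(SOLN_SET *ss, int new_cost)
{
  int i, j;

  /* if an existing solution dominates the new one, drop the new one */
  for( i = 0;  i < ss->count;  i++ )
    if( ss->costs[i] <= new_cost )
      return 0;

  /* remove existing solutions dominated by the new one */
  for( i = j = 0;  i < ss->count;  i++ )
    if( ss->costs[i] < new_cost )
      ss->costs[j++] = ss->costs[i];
  ss->count = j;

  /* add the new one */
  ss->costs[ss->count++] = new_cost;
  return 1;
}
```
Because cost comparison is a total order, every dominance test has a
winner and the set never holds more than one solution.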
@PP
In practice, each trie node will usually contain at most one solution.
This is because the signatures of shift solutions are concerned only
with the event resource constraints of the tasks of the shift.  In
principle these constraints can be affected by tasks outside the
shift as well, in which case they need to add a value to the
signature's states array; but in practice they are not, so all they
contribute to the signature is a cost.  Dominance testing is then
just cost comparison, every dominance test has a winner, and at most
one solution survives.  However, our code supports arbitrary XESTT
constraints, so it allows for any number of shift solutions in each
trie node.  Nevertheless, a major part of the motivation for
expansion by shifts is this expectation of at most one shift solution
per trie node.
@PP
Given a shift solution trie it is straightforward to index into
it.  The main challenge is to build it in the first place, and
the remainder of this section is devoted to that challenge.  Here
is the function that does it:
@ID {0.95 1.0} @Scale @C {
void KheDrsShiftBuildShiftSolnTrie(KHE_DRS_SHIFT ds,
  KHE_DRS_SOLN prev_soln, KHE_DRS_DAY prev_day, KHE_DRS_DAY next_day,
  KHE_DRS_RESOURCE_SET all_free_resources,
  KHE_DRS_TASK_SOLN_SET all_fixed_assts,
  KHE_DRS_EXPANDER de, KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  KHE_DRS_RESOURCE_SET included_free_resources;
  HnAssert(ds->soln_trie == NULL,
    "KheDrsShiftBuildShiftSolnTrie internal error");
  included_free_resources = KheDrsResourceSetMake(drs);
  ds->soln_trie = KheDrsShiftSolnTrieBuild(ds, prev_soln, prev_day,
    next_day, all_free_resources, 0, included_free_resources,
    all_fixed_assts, de);
  KheDrsResourceSetFree(included_free_resources, drs);
}
}
Here @C { ds } is @M { s }, @C { prev_soln } is @M { S },
@C { prev_day } is @M { S }'s day, @C { next_day } is @M { s }'s
day (the day after @C { prev_day }), @C { all_free_resources }
contains all free open resources, and @C { all_fixed_assts }
contains all fixed open resources, in the form of assignments to the tasks
they are fixed to (task solutions).  @C { KheDrsShiftBuildShiftSolnTrie }
creates and frees @C { included_free_resources }, which will hold
the resource sets @M { R sub free } as the operation proceeds, and calls
@ID {0.95 0.98} @Scale -1px @Break @C {
KHE_DRS_SHIFT_SOLN_TRIE KheDrsShiftSolnTrieBuild(KHE_DRS_SHIFT ds,
  KHE_DRS_SOLN prev_soln, KHE_DRS_DAY prev_day, KHE_DRS_DAY next_day,
  KHE_DRS_RESOURCE_SET all_free_resources, int all_free_resources_index,
  KHE_DRS_RESOURCE_SET included_free_resources,
  KHE_DRS_TASK_SOLN_SET all_fixed_assts, KHE_DRS_EXPANDER de)
{
  KHE_DRS_SHIFT_SOLN_TRIE res, child_dsst;  bool no_non_null_children;
  int i, count, included_free_resource_count;  KHE_DRS_RESOURCE dr;

  /* return NULL immediately if too many included free resources */
  included_free_resource_count =
    KheDrsResourceSetCount(included_free_resources);
  if( included_free_resource_count >
      ds->expand_max_included_free_resource_count)
    return NULL;

  /* make res and add shift solutions for included_free_resources to res */
  res = KheDrsShiftSolnTrieMake(de->solver);
  ... code omitted here, see below ...

  /* add a NULL child for every open resource */
  count = KheDrsResourceSetCount(de->solver->open_resources);
  HaArrayFill(res->children, count, NULL);

  /* add a potentially non-NULL child for each possible next resource */
  no_non_null_children = true;
  count = KheDrsResourceSetCount(all_free_resources);
  for( i = all_free_resources_index;  i < count;  i++ )
  {
    dr = KheDrsResourceSetResource(all_free_resources, i);
    KheDrsResourceSetAddLast(included_free_resources, dr);
    child_dsst = KheDrsShiftSolnTrieBuild(ds, prev_soln, prev_day,
      next_day, all_free_resources, i + 1, included_free_resources,
      all_fixed_assts, de);
    if( child_dsst != NULL )
    {
      HaArrayPut(res->children, dr->open_resource_index, child_dsst);
      no_non_null_children = false;
    }
    KheDrsResourceSetDeleteLast(included_free_resources);
  }

  /* replace by NULL if the tree contains no solutions */
  if( no_non_null_children && HaArrayCount(res->shift_solns) == 0 )
  {
    KheDrsShiftSolnTrieFree(res, de->solver);
    return NULL;
  }
  else
    return res;
}
}
Most parameters are as for @C { KheDrsShiftBuildShiftSolnTrie };
@C { all_free_resources_index } is an index into
@C { all_free_resources } used to generate all subsets of the
free resources, and @C { included_free_resources } is the
current value of @M { R sub free }.
@PP
Now @C { ds->expand_max_included_free_resource_count } is the
maximum number of free resources that can be assigned to @C { ds }
without leaving too few other free resources for the other
shifts.  So the first step is to return immediately if the
number of included free resources exceeds this number.  The
@C { NULL } result represents a shift solution trie containing
no shift solutions.
@PP
The next step is to create a shift solution trie node,
@C { res }, and fill its @C { shift_solns } array with the
undominated shift solution objects that assign the
resources of @C { included_free_resources }, plus any
fixed resources which are assigned to tasks from this shift.
Most of this code is omitted above; we return to it below.
@PP
The next step is to build the children of the new node @C { res }.
We want one child for each open resource, because we intend to
index these children using an open resource index.  So we start
by filling @C { res->children } with one @C { NULL } value for
each open resource.  Then for each free resource @C { dr } whose
open resource index we have not used in higher levels of the trie
(for each free resource whose index in @C { all_free_resources }
is @C { all_free_resources_index } or greater), we add @C { dr }
to @C { included_free_resources }, call this same function
recursively to build the child node, add that child to the
@C { children } array of @C { dsst }, and delete @C { dr }
from @C { included_free_resources }.
@PP
Finally, we check whether @C { res } contains no shift solutions at
all:  no non-@C { NULL } children, and an empty @C { res->shift_solns }.
In that case, we free @C { res } and return @C { NULL } instead.
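@PP
The overall shape of this build, that is, enumerate subsets
recursively, cut off oversized subsets, and replace empty subtrees
by @C { NULL }, can be seen in the following standalone sketch
(hypothetical code, not from KHE).  Here a subset `has solutions'
exactly when its size reaches a minimum, standing in for the
must-assign test:

```c
#include <assert.h>
#include <stdlib.h>

#define N 3   /* number of "free resources" */

typedef struct bnode {
  int has_solns;
  struct bnode *children[N];
} BNODE;

static int node_count;   /* nodes surviving the build */

/* build the subtree for subsets extending the current one with
   elements from {start..N-1}; size is the current subset's size */
static BNODE *Build(int start, int size, int min_size, int max_size)
{
  BNODE *res;  int i, any_child;

  /* return NULL immediately if the subset is too large */
  if( size > max_size )
    return NULL;

  /* a subset has solutions only if it can cover the must-assigns */
  res = calloc(1, sizeof(BNODE));
  res->has_solns = (size >= min_size);

  /* build one child per element not yet used at higher levels */
  any_child = 0;
  for( i = start;  i < N;  i++ )
  {
    res->children[i] = Build(i + 1, size + 1, min_size, max_size);
    if( res->children[i] != NULL )
      any_child = 1;
  }

  /* replace by NULL if the subtree contains no solutions */
  if( !res->has_solns && !any_child )
  {
    free(res);
    return NULL;
  }
  node_count++;
  return res;
}
```
With @C { min_size } and @C { max_size } both 2 and three elements,
the surviving nodes are the root, the three singletons that lead to a
valid pair, and the three pairs themselves.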
@PP
Here is the code (omitted above) to build the shift solutions
for a given @M { S }, @M { s }, and @M { R }:
@ID @C {
if( included_free_resource_count >= ds->expand_must_assign_count )
{
  KheDrsExpanderReset(de, true, KheDrsSolnCost(prev_soln),
    de->solver->solve_init_cost, included_free_resource_count,
    ds->expand_must_assign_count);
  KheDrsExpanderMarkBegin(de);
  KheDrsExpanderAddTaskSolnSet(de, all_fixed_assts, ds);
  if( KheDrsExpanderIsOpen(de) )
    KheDrsShiftSolnTrieBuildShiftSolns(res, ds, included_free_resources,
      0, prev_soln, prev_day, next_day, de);
  KheDrsExpanderMarkEnd(de);
}
}
Here @C { ds->expand_must_assign_count } is the number of must-assign
tasks within the mtasks of @C { ds }.  If the number of included
free resources is less than this, then there is no point in building
any shift assignments at this node, because there are too few free
resources to cover the must-assign tasks.  Otherwise, we create (in
fact, reset) an expander, add the relevant fixed assignments to it
(those fixed assignments from @C { all_fixed_assts } whose tasks
lie in @C { ds }), and call @C { KheDrsShiftSolnTrieBuildShiftSolns },
which builds this node's shift assignments:
@ID {0.95 1.0} @Scale @C {
void KheDrsShiftSolnTrieBuildShiftSolns(KHE_DRS_SHIFT_SOLN_TRIE dsst,
  KHE_DRS_SHIFT ds, KHE_DRS_RESOURCE_SET included_free_resources,
  int included_free_resources_index, KHE_DRS_SOLN prev_soln,
  KHE_DRS_DAY prev_day, KHE_DRS_DAY next_day, KHE_DRS_EXPANDER de)
{
  KHE_DRS_RESOURCE dr;  int i;  KHE_DRS_MTASK_SOLN dms;

  if( included_free_resources_index >=
      KheDrsResourceSetCount(included_free_resources) )
  {
    /* all included resources assigned, so build soln and add it now */
    KheDrsExpanderMakeAndMeldShiftSoln(de, dsst, ds, prev_soln, prev_day,
      next_day);
  }
  else
  {
    /* try all assignments of the next included resource */
    dr = KheDrsResourceSetResource(included_free_resources,
      included_free_resources_index);
    HaArrayForEach(dr->expand_mtask_solns, dms, i)
      if( KheDrsMTaskSolnShift(dms) == ds )
	KheDrsMTaskSolnShiftSolnTrieBuildShiftSolns(dms, dsst, ds,
	  included_free_resources, included_free_resources_index + 1,
	  prev_soln, prev_day, next_day, de);
  }
}
}
This is just expansion by resources, only for @M { R } instead of all
the open resources, for the mtasks of @M { s } instead of for
all mtasks, and with the solutions kept in @C { dsst->shift_solns }
rather than in @C { next_day }'s solution set.  If choices have been
made for all the resources, it is time to call
@C { KheDrsExpanderMakeAndMeldShiftSoln }
(Appendix {@NumberOf dynamic_impl.expansion.expanders}).
Otherwise, for each assignment of the next resource within
@C { ds } we call @C { KheDrsMTaskSolnShiftSolnTrieBuildShiftSolns }:
@ID {0.93 1.0} @Scale @C {
void KheDrsMTaskSolnShiftSolnTrieBuildShiftSolns(KHE_DRS_MTASK_SOLN dms,
  KHE_DRS_SHIFT_SOLN_TRIE dsst, KHE_DRS_SHIFT ds,
  KHE_DRS_RESOURCE_SET included_free_resources,
  int included_free_resources_index, KHE_DRS_SOLN prev_soln,
  KHE_DRS_DAY prev_day, KHE_DRS_DAY next_day, KHE_DRS_EXPANDER de)
{
  KHE_DRS_TASK_ON_DAY dtd;  KHE_DRS_MTASK dmt;
  KHE_DRS_TASK_SOLN dts;
  dmt = dms->mtask;
  if( dmt != NULL )
  {
    /* select a task from dmt and assign it */
    if( KheDrsMTaskAcceptResourceBegin(dmt, dms->resource_on_day, &dtd) )
    {
      dts = KheDrsTaskSolnMake(dms, dtd);
      KheDrsTaskSolnShiftSolnTrieBuildShiftSolns(dts, dsst, ds,
	included_free_resources, included_free_resources_index,
	prev_soln, prev_day, next_day, de);
      KheDrsMTaskAcceptResourceEnd(dmt, dtd);
    }
  }
  else
  {
    /* use dms->fixed_task_on_day, possibly NULL meaning a free day */
    dts = KheDrsTaskSolnMake(dms, dms->fixed_task_on_day);
    KheDrsTaskSolnShiftSolnTrieBuildShiftSolns(dts, dsst, ds,
      included_free_resources, included_free_resources_index,
      prev_soln, prev_day, next_day, de);
  }
}
}
This finds a specific task to assign, by calling
@C { KheDrsMTaskAcceptResourceBegin } and
@C { KheDrsMTaskAcceptResourceEnd } in the usual way if
@C { dms } has an mtask, or directly if the resource is
already fixed to a specific task.  (In fact this second case
cannot occur here, because the resources handled here are free,
not fixed, and a free day is not an option.)  The resulting task
solution object @C { dts }
is then passed to @C { KheDrsTaskSolnShiftSolnTrieBuildShiftSolns }:
@ID {0.95 1.0} @Scale @C {
void KheDrsTaskSolnShiftSolnTrieBuildShiftSolns(KHE_DRS_TASK_SOLN dts,
  KHE_DRS_SHIFT_SOLN_TRIE dsst, KHE_DRS_SHIFT ds,
  KHE_DRS_RESOURCE_SET included_free_resources,
  int included_free_resources_index, KHE_DRS_SOLN prev_soln,
  KHE_DRS_DAY prev_day, KHE_DRS_DAY next_day, KHE_DRS_EXPANDER de)
{
  /* save the expander so it can be restored later */
  KheDrsExpanderMarkBegin(de);

  /* add dts to the expander */
  KheDrsExpanderAddTaskSoln(de, dts);

  /* if the expander is still open, recurse */
  if( KheDrsExpanderIsOpen(de) )
    KheDrsShiftSolnTrieBuildShiftSolns(dsst, ds, included_free_resources,
      included_free_resources_index, prev_soln, prev_day, next_day, de);

  /* restore the expander */
  KheDrsExpanderMarkEnd(de);
}
}
This assigns @C { dts } and recurses on the next resource if the
expander is still open.
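@PP
The mark-begin / add / recurse / mark-end pattern that drives this
recursion is easy to isolate.  In the following sketch (hypothetical
code, not from KHE) the `expander' is just an accumulated cost, and
it is `open' while that cost stays below a bound:

```c
#include <assert.h>

#define SLOTS 2    /* "resources" to assign */
#define VALUES 3   /* candidate "tasks", with cost equal to the value */

static int complete_count;   /* complete assignments built */

/* try every value in every slot; each trial is added before the
   recursive call and implicitly undone after it, and the recursion
   is cut off as soon as the expander is no longer open */
static void Expand(int slot, int cost, int bound)
{
  int v;
  if( cost >= bound )
    return;                        /* expander closed: prune */
  if( slot >= SLOTS )
  {
    complete_count++;              /* a complete assignment */
    return;
  }
  for( v = 0;  v < VALUES;  v++ )
    Expand(slot + 1, cost + v, bound);
}
```
In the real code the undo step is explicit, via
@C { KheDrsExpanderMarkBegin } and @C { KheDrsExpanderMarkEnd },
because adding an assignment updates signatures and costs that must
be restored afterwards.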
@End @SubSubAppendix

@SubSubAppendix
    @Title { Shift pair solutions }
    @Tag { dynamic_impl.solns.shift_pair }
@Begin
@LP
A shift pair solution is a pair of shift solutions.  We may write
`@M { s sub i s sub j }-solution' for a shift pair solution whose
shifts are @M { s sub i } and @M { s sub j }.
@PP
The shift pair solution resembles the mtask pair solution
(Appendix {@NumberOf dynamic_impl.solns.mtask_pair}),
which represents a pair of mtask solutions.  However this time there
is a declared type:
@ID @C {
typedef struct khe_drs_shift_pair_soln_rec *KHE_DRS_SHIFT_PAIR_SOLN;
typedef HA_ARRAY(KHE_DRS_SHIFT_PAIR_SOLN) ARRAY_KHE_DRS_SHIFT_PAIR_SOLN;

struct khe_drs_shift_pair_soln_rec {
  struct khe_drs_signature_set_rec	sig_set;
  KHE_DRS_SHIFT_SOLN			dss1;
  KHE_DRS_SHIFT_SOLN			dss2;
};
}
Also, the dominance testing is more conventional here, because the
assignments are to specific tasks rather than to mtasks.
@PP
After the usual @C { KheDrsShiftPairSolnMake } and
@C { KheDrsShiftPairSolnFree } functions for creating
and freeing a shift pair solution, we have two functions
that take us to the heart of things.  First is
@C { KheDrsShiftPairSolnSignerSetBuild }, which builds a signer
set suited to comparing for dominance two shift pair solutions made
from two given shift solutions:
@ID @C {
KHE_DRS_SIGNER_SET KheDrsShiftPairSolnSignerSetBuild(
  KHE_DRS_SHIFT_SOLN dss1, KHE_DRS_SHIFT_SOLN dss2,
  KHE_DRS_SHIFT_PAIR dsp, KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  KHE_DRS_SIGNER_SET res;  KHE_DRS_TASK_SOLN dts;  int i;
  KHE_DRS_RESOURCE_ON_DAY drd;

  /* make a signer set object */
  res = KheDrsSignerSetMake(drs);

  /* add resource signers of the resources of dss1 */
  KheDrsTaskSolnSetForEach(dss1->task_solns, dts, i)
  {
    drd = dts.mtask_soln->resource_on_day;
    KheDrsSignerSetAddSigner(res, drd->signer);
  }

  /* add resource signers of the resources of dss2 */
  KheDrsTaskSolnSetForEach(dss2->task_solns, dts, i)
  {
    drd = dts.mtask_soln->resource_on_day;
    KheDrsSignerSetAddSigner(res, drd->signer);
  }

  /* add dsp's shift pair signer */
  KheDrsSignerSetAddSigner(res, dsp->signer);
  return res;
}
}
The signer set contains one signer for each resource assigned by
@C { dss1 }, one signer for each resource assigned by @C { dss2 }, 
and one signer, taken from shift pair object @C { dsp }, for the
event resource monitors that monitor the tasks of the two shifts.
These are all pre-existing signers; only their packaging into a
single signer set is new.
@PP
Next we have @C { KheDrsShiftPairSolnBuild }, which creates a
shift pair solution object and builds its signature set:
@ID @C {
KHE_DRS_SHIFT_PAIR_SOLN KheDrsShiftPairSolnBuild(
  KHE_DRS_SHIFT_SOLN dss1, KHE_DRS_SHIFT_SOLN dss2,
  KHE_DRS_SHIFT_PAIR dsp, KHE_DRS_SOLN prev_soln,
  KHE_DRS_DAY next_day, KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  KHE_DRS_SHIFT_PAIR_SOLN res;  int i;  KHE_DRS_TASK_SOLN dts;
  KHE_DRS_SIGNATURE sig;

  /* make a shift pair soln object */
  res = KheDrsShiftPairSolnMake(dss1, dss2, drs);

  /* add resource signatures of the resources of dss1 */
  KheDrsTaskSolnSetForEach(dss1->task_solns, dts, i)
    KheDrsSignatureSetAddSignature(&res->sig_set,
      KheDrsMTaskSolnSignature(dts.mtask_soln), true);

  /* add resource signatures of the resources of dss2 */
  KheDrsTaskSolnSetForEach(dss2->task_solns, dts, i)
    KheDrsSignatureSetAddSignature(&res->sig_set,
      KheDrsMTaskSolnSignature(dts.mtask_soln), true);

  /* evaluate the shift pair solution signature and add it */
  KheDrsTaskSolnSetLeafSet(dss1->task_solns, true);
  KheDrsTaskSolnSetLeafSet(dss2->task_solns, true);
  sig = KheDrsSignerEvalSignature(dsp->signer, false,
    KheDrsSolnEventResourceSignature(prev_soln),
    next_day->open_day_index, drs, false);
  KheDrsSignatureSetAddSignature(&res->sig_set, sig, true);
  KheDrsTaskSolnSetLeafClear(dss1->task_solns, false);
  KheDrsTaskSolnSetLeafClear(dss2->task_solns, false);
  return res;
}
}
The resource signatures already exist, as usual, but the signature
of the event resource monitors that monitor the two shifts has to
be built by evaluating expressions.
@PP
Now for the function we really want, for testing dominance
between shift pair solutions:
@ID @C {
bool KheDrsShiftPairSolnDominates(KHE_DRS_SHIFT_PAIR_SOLN dsps1,
  KHE_DRS_SHIFT_PAIR_SOLN dsps2, KHE_DRS_SIGNER_SET signer_set,
  KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  return KheDrsSignerSetDominates(signer_set, &dsps1->sig_set,
    &dsps2->sig_set, 0, 0, false, drs);
}
}
It's trivial given the work already done to build the signer
set and signature sets.
@PP
We now switch to the submodule of type @C { KHE_DRS_SHIFT_SOLN_TRIE } which
constructs shift pair solutions, calls @C { KheDrsShiftPairSolnDominates }
to test them for dominance, and marks any dominated shift pairs.  We'll
work backwards through the submodule, starting with
@C { KheDrsShiftSolnTrieFindDominatedShiftPairs }, which is called
directly from the function for expanding solution @C { prev_soln }
and does the job for all pairs of shift pair solutions:
@ID @C {
void KheDrsShiftSolnTrieFindDominatedShiftPairs(KHE_DRS_SOLN prev_soln,
  KHE_DRS_DAY next_day, KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  KHE_DRS_RESOURCE_SET rs1, rs2;  int i, j;
  KHE_DRS_SHIFT ds1;  KHE_DRS_SHIFT_PAIR dsp;
  rs1 = KheDrsResourceSetMake(drs);
  rs2 = KheDrsResourceSetMake(drs);
  HaArrayForEach(next_day->shifts, ds1, i)
    if( ds1->soln_trie != NULL )
      HaArrayForEach(ds1->shift_pairs, dsp, j)
	if( dsp->shift[1]->soln_trie != NULL )
	  KheDrsShiftSolnTrieFindDominatedShiftPairs1(ds1->soln_trie,
	    dsp, rs1, rs2, prev_soln, next_day, drs);
  KheDrsResourceSetFree(rs1, drs);
  KheDrsResourceSetFree(rs2, drs);
}
}
By iterating over the shifts of @C { next_day } and then over the
shift pairs for each shift, this code visits all shift pairs @C { dsp }.
It skips shift pairs where either shift has no shift solutions.  Then
@ID {0.95 1.0} @Scale @C {
void KheDrsShiftSolnTrieFindDominatedShiftPairs1(
  KHE_DRS_SHIFT_SOLN_TRIE dsst_rs1_ds1, KHE_DRS_SHIFT_PAIR dsp,
  KHE_DRS_RESOURCE_SET rs1, KHE_DRS_RESOURCE_SET rs2,
  KHE_DRS_SOLN prev_soln, KHE_DRS_DAY next_day,
  KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  int i;  KHE_DRS_RESOURCE dr;
  KHE_DRS_SHIFT_SOLN_TRIE child_dsst, dsst_rs1_ds2;

  /* pairs for each of dsst_rs1_ds1's assignments */
  if( HaArrayCount(dsst_rs1_ds1->shift_solns) > 0 && 
      KheDrsShiftSolnTrieContains(dsp->shift[1]->soln_trie, rs1, 0,
	&dsst_rs1_ds2)
        && HaArrayCount(dsst_rs1_ds2->shift_solns) > 0 )
  {
    HnAssert(KheDrsResourceSetCount(rs2) == 0,
      "KheDrsShiftSolnTrieFindDominatedShiftPairs1 internal error");
    KheDrsShiftSolnTrieFindDominatedShiftPairs2(dsp->shift[0]->soln_trie,
      dsst_rs1_ds1, dsst_rs1_ds2,
      dsp, rs1, rs2, prev_soln, next_day, drs);
  }

  /* pairs for children */
  HaArrayForEach(dsst_rs1_ds1->children, child_dsst, i)
    if( child_dsst != NULL )
    {
      dr = KheDrsResourceSetResource(drs->open_resources, i);
      KheDrsResourceSetAddLast(rs1, dr);
      KheDrsShiftSolnTrieFindDominatedShiftPairs1(child_dsst, dsp,
	rs1, rs2, prev_soln, next_day, drs);
      KheDrsResourceSetDeleteLast(rs1);
    }
}
}
traverses @C { ds1 }'s shift solution trie.  The variable name
`@C { dsst_rs1_ds1 }' means `a shift solution trie node for free
resources @C { rs1 } and shift @C { ds1 }'.
@PP
The second paragraph calls
@C { KheDrsShiftSolnTrieFindDominatedShiftPairs1 }
recursively for each non-@C { NULL } child, updating @C { rs1 }
before each recursive call and restoring it afterwards.  This
ensures that the function visits every node of @C { ds1 }'s shift
solution trie, with @C { rs1 } set correctly at each node.
@PP
The first paragraph calls
@ID {0.95 1.0} @Scale @C {
bool KheDrsShiftSolnTrieContains(KHE_DRS_SHIFT_SOLN_TRIE dsst,
  KHE_DRS_RESOURCE_SET rs, int rs_index, KHE_DRS_SHIFT_SOLN_TRIE *res)
{
  KHE_DRS_RESOURCE dr;  KHE_DRS_SHIFT_SOLN_TRIE child_dsst;
  if( dsst == NULL )
    return *res = NULL, false;
  else if( rs_index >= KheDrsResourceSetCount(rs) )
    return *res = dsst, true;
  else
  {
    dr = KheDrsResourceSetResource(rs, rs_index);
    child_dsst = HaArray(dsst->children, dr->open_resource_index);
    return KheDrsShiftSolnTrieContains(child_dsst, rs, rs_index + 1, res);
  }
}
}
to work out whether @C { dsp->shift[1]->soln_trie }, the shift
solution trie of the second shift, has a node for solutions for
resource set @C { rs1 }, setting @C { dsst_rs1_ds2 } to that node if
so.  Then if both nodes, @C { dsst_rs1_ds1 } and @C { dsst_rs1_ds2 },
have at least one shift solution, we proceed to the next step by
calling @C { KheDrsShiftSolnTrieFindDominatedShiftPairs2 }:
@ID {0.92 1.0} @Scale @C {
void KheDrsShiftSolnTrieFindDominatedShiftPairs2(
  KHE_DRS_SHIFT_SOLN_TRIE dsst_rs2_ds1,
  KHE_DRS_SHIFT_SOLN_TRIE dsst_rs1_ds1,
  KHE_DRS_SHIFT_SOLN_TRIE dsst_rs1_ds2, KHE_DRS_SHIFT_PAIR dsp,
  KHE_DRS_RESOURCE_SET rs1, KHE_DRS_RESOURCE_SET rs2,
  KHE_DRS_SOLN prev_soln, KHE_DRS_DAY next_day,
  KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  int i;  KHE_DRS_RESOURCE dr;
  KHE_DRS_SHIFT_SOLN_TRIE dsst_rs2_ds2, child_dsst;

  /* pairs for each of dsst_rs2_ds1's assignments */
  if( KheDrsResourceSetCount(rs1) + KheDrsResourceSetCount(rs2) > 0 &&
      HaArrayCount(dsst_rs2_ds1->shift_solns) > 0 &&
      KheDrsShiftSolnTrieContains(dsp->shift[1]->soln_trie, rs2, 0,
	&dsst_rs2_ds2)
        && HaArrayCount(dsst_rs2_ds2->shift_solns) > 0 )
    KheDrsShiftSolnTrieTestShiftPairs(dsst_rs1_ds1, dsst_rs2_ds2,
      dsst_rs1_ds2, dsst_rs2_ds1, dsp, rs1, rs2, prev_soln, next_day, drs);

  /* pairs for children */
  HaArrayForEach(dsst_rs2_ds1->children, child_dsst, i)
    if( child_dsst != NULL )
    {
      dr = KheDrsResourceSetResource(drs->open_resources, i);
      if( !KheDrsResourceSetContains(rs1, dr) )
      {
	KheDrsResourceSetAddLast(rs2, dr);
	KheDrsShiftSolnTrieFindDominatedShiftPairs2(child_dsst, dsst_rs1_ds1,
	  dsst_rs1_ds2, dsp, rs1, rs2, prev_soln, next_day, drs);
	KheDrsResourceSetDeleteLast(rs2);
      }
    }
}
}
This is like @C { KheDrsShiftSolnTrieFindDominatedShiftPairs1 },
except that it traverses all nodes @C { dsst_rs2_ds1 }, building
@C { rs2 } as it goes, and finding the corresponding
@C { dsst_rs2_ds2 }.  However, it only accepts what it finds when
at least one of @C { rs1 } and @C { rs2 } is non-empty, and by the
test @C { !KheDrsResourceSetContains(rs1, dr) } it ensures that
@C { rs1 } and @C { rs2 } are disjoint.
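@PP
The pairs visited by these two nested traversals are exactly the
ordered pairs of disjoint free resource subsets @C { rs1 } and
@C { rs2 }, with at least one of the pair non-empty.  A standalone
sketch (bitmask subsets, hypothetical code) counts them; each element
can go into the first set, into the second, or into neither, so the
answer for @M { n } elements is @M { 3 sup n - 1 }:

```c
#include <assert.h>

/* count ordered pairs (r1, r2) of disjoint subsets of {0..n-1}
   with at least one of the pair non-empty */
static int CountDisjointPairs(int n)
{
  unsigned r1, r2;  int count = 0;
  for( r1 = 0;  r1 < (1u << n);  r1++ )
    for( r2 = 0;  r2 < (1u << n);  r2++ )
      if( (r1 & r2) == 0 && (r1 | r2) != 0 )
        count++;    /* disjoint and not both empty */
  return count;
}
```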
@PP
The ultimate result here is a call to @C { KheDrsShiftSolnTrieTestShiftPairs }, 
passing it four nodes, @C { dsst_rs1_ds1 }, @C { dsst_rs2_ds2 },
@C { dsst_rs1_ds2 }, and @C { dsst_rs2_ds1 }, whose shift solutions
are suited to constructing two shift pair solutions:
@ID {0.90 1.0} @Scale @C {
void KheDrsShiftSolnTrieTestShiftPairs(
  KHE_DRS_SHIFT_SOLN_TRIE dsst_rs1_ds1,
  KHE_DRS_SHIFT_SOLN_TRIE dsst_rs2_ds2,
  KHE_DRS_SHIFT_SOLN_TRIE dsst_rs1_ds2,
  KHE_DRS_SHIFT_SOLN_TRIE dsst_rs2_ds1,
  KHE_DRS_SHIFT_PAIR dsp, KHE_DRS_RESOURCE_SET rs1,
  KHE_DRS_RESOURCE_SET rs2, KHE_DRS_SOLN prev_soln,
  KHE_DRS_DAY next_day, KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  KHE_DRS_SHIFT_SOLN dss_rs1_ds1, dss_rs2_ds2, dss_rs1_ds2, dss_rs2_ds1;
  int i1, i2, i3, i4;  KHE_DRS_SHIFT_PAIR_SOLN dsps1, dsps2;
  KHE_DRS_SIGNER_SET signer_set;
  signer_set = NULL;
  HaArrayForEach(dsst_rs1_ds1->shift_solns, dss_rs1_ds1, i1)
    HaArrayForEach(dsst_rs2_ds2->shift_solns, dss_rs2_ds2, i2)
    {
      dsps1 = KheDrsShiftPairSolnBuild(dss_rs1_ds1, dss_rs2_ds2,
        dsp, prev_soln, next_day, drs);
      HaArrayForEach(dsst_rs1_ds2->shift_solns, dss_rs1_ds2, i3)
	HaArrayForEach(dsst_rs2_ds1->shift_solns, dss_rs2_ds1, i4)
	{
	  dsps2 = KheDrsShiftPairSolnBuild(dss_rs1_ds2, dss_rs2_ds1,
	    dsp, prev_soln, next_day, drs);
	  if( signer_set == NULL )
	    signer_set = KheDrsShiftPairSolnSignerSetBuild(dss_rs1_ds1,
	      dss_rs2_ds2, dsp, drs);
	  if( KheDrsShiftPairSolnDominates(dsps1, dsps2, signer_set, drs) )
	  {
	    HaArrayAddLast(dss_rs1_ds2->skip_assts, dss_rs2_ds1);
	    HaArrayAddLast(dss_rs2_ds1->skip_assts, dss_rs1_ds2);
	  }
	  else if( KheDrsShiftPairSolnDominates(dsps2, dsps1, signer_set, drs) )
	  {
	    HaArrayAddLast(dss_rs1_ds1->skip_assts, dss_rs2_ds2);
	    HaArrayAddLast(dss_rs2_ds2->skip_assts, dss_rs1_ds1);
	  }
	  KheDrsShiftPairSolnFree(dsps2, drs);
	}
      KheDrsShiftPairSolnFree(dsps1, drs);
    }
  if( signer_set != NULL )
    KheDrsSignerSetFree(signer_set, drs);
}
}
Most shift solution trie nodes contain just one shift solution (or
none, but nodes with none do not make it this far).  So the four
loops, although formally correct, serve in practice to retrieve the
one shift solution of the node.  Then @C { KheDrsShiftPairSolnBuild }
is called twice to make two shift pair solution objects out of the
four shift solutions, @C { KheDrsShiftPairSolnDominates } is called
both ways to test for dominance, and if dominance is found, the
@C { skip_assts } fields of the dominated shift solutions are updated
appropriately, so that those shift pair solutions will not be generated
during expansion by shifts.
@PP
This code is capable of discovering that a particular shift pair
solution is dominated, and recording that fact, more than once.
This does not matter:  it does not make the algorithm incorrect,
and dominance is uncommon, so there is little wasted time.
@End @SubSubAppendix

@SubSubAppendix
    @Title { Packed solutions }
    @Tag { dynamic_impl.packed }
@Begin
@LP
The solver has a solution type, quite separate from the types we have
just seen, called the @I { packed solution }.  Despite its name,
its purpose is not to save space.  Rather it is designed to provide
easy access to the solution's assignment of open resource @M { i } on
open day @M { j }.  Packed solutions only represent complete solutions,
and they are never tested for dominance or inserted into tables.
@PP
Packed solutions are used in two ways.  First, the initial
solution, the one that we want to improve, is stored in a
packed solution, so that if we fail to improve on it we
can return to it.  This is like using a mark, except that restoring
it returns the whole solver data structure to its initial state,
not just the KHE solution.  Function @C { KheDrsResourceOpen }
(Appendix {@NumberOf dynamic_impl.resources}) builds this solution.
@PP
Second, the solver offers the option of rerunning a new best solution as
an aid to debugging (Appendix {@NumberOf dynamic_impl.solving.testing}).
A packed solution holds the new best solution while the rerun is going on.
@PP
Type @C { KHE_DRS_PACKED_SOLN_DAY } represents one day of a
packed solution:
@ID @C {
typedef struct khe_drs_packed_soln_day_rec {
  KHE_DRS_DAY			day;
  ARRAY_KHE_DRS_TASK_ON_DAY	prev_tasks;
} *KHE_DRS_PACKED_SOLN_DAY;

typedef HA_ARRAY(KHE_DRS_PACKED_SOLN_DAY) ARRAY_KHE_DRS_PACKED_SOLN_DAY;
}
The @C { prev_tasks } field is exactly as in the corresponding
@C { KHE_DRS_SOLN } object for this @C { day }.  Type
@C { KHE_DRS_PACKED_SOLN } represents a complete packed solution:
@ID @C {
typedef struct khe_drs_packed_soln_rec {
  KHE_COST			cost;
  ARRAY_KHE_DRS_PACKED_SOLN_DAY	days;
} *KHE_DRS_PACKED_SOLN;

typedef HA_ARRAY(KHE_DRS_PACKED_SOLN) ARRAY_KHE_DRS_PACKED_SOLN;
}
It holds the cost of the solution, and an array with one element
for each open day.
@PP
Packed solution operations include @C { KheDrsPackedSolnBuildFromSoln },
which converts a complete @C { KHE_DRS_SOLN } solution into a packed
solution; @C { KheDrsPackedSolnDelete }, which deletes a packed solution
using free lists in the usual way; and @C { KheDrsPackedSolnTaskOnDay }
and @C { KheDrsPackedSolnSetTaskOnDay }, which get and set the assignment
of open resource @M { i } on open day @M { j }.  Their implementations
are all straightforward, so are not given here.
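@PP
To illustrate, here is a minimal self-contained sketch, in plain C
with hypothetical simplified stand-in types (not the actual KHE
types), showing how the get and set operations amount to a double
index into the @C { days } array and each day's @C { prev_tasks }
array:

```c
#include <assert.h>
#include <stdlib.h>

/* hypothetical stand-ins for the KHE types */
typedef struct task_rec { int id; } *TASK;

typedef struct packed_soln_day_rec {
  TASK *prev_tasks;              /* one entry per open resource */
} *PACKED_SOLN_DAY;

typedef struct packed_soln_rec {
  int cost;
  PACKED_SOLN_DAY *days;         /* one entry per open day */
} *PACKED_SOLN;

/* build an empty packed solution with the given dimensions */
PACKED_SOLN PackedSolnMake(int day_count, int resource_count)
{
  PACKED_SOLN ps = malloc(sizeof *ps);
  ps->cost = 0;
  ps->days = malloc(day_count * sizeof *ps->days);
  for( int j = 0;  j < day_count;  j++ )
  {
    ps->days[j] = malloc(sizeof *ps->days[j]);
    ps->days[j]->prev_tasks = calloc(resource_count, sizeof(TASK));
  }
  return ps;
}

/* the assignment of open resource i on open day j: a double index */
TASK PackedSolnTaskOnDay(PACKED_SOLN ps, int i, int j)
{
  return ps->days[j]->prev_tasks[i];
}

void PackedSolnSetTaskOnDay(PACKED_SOLN ps, int i, int j, TASK t)
{
  ps->days[j]->prev_tasks[i] = t;
}
```

The constant-time lookup is the whole point of the type:  during a
rerun the solver repeatedly asks `what was resource @M { i } doing on
day @M { j }?', and this layout answers that directly.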
@End @SubSubAppendix

@EndSubSubAppendices
@End @SubAppendix

@SubAppendix
    @Title { Expansion }
    @Tag { dynamic_impl.expansion }
@Begin
@LP
This section presents the implementation of the key operation
on solutions:  @I { expansion }.  Starting with a given
@M { d sub k }-solution @M { S }, expansion creates
all the @M { d sub {k+1} }-solution extensions of @M { S } and
adds them to @M { P sub {k+1} }, the set of all undominated
@M { d sub {k+1} }-solutions.  The actual addition to
@M { P sub {k+1} }, including dominance testing between the
existing solutions and each new solution, is a separate
subject; here we concentrate on creating the new solutions.
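@PP
Before diving into the implementation, here is a toy self-contained
sketch of the layered expansion idea, in which a `solution' is just a
cost, each day offers some choice costs, and dominance degenerates to
keeping the cheapest solution.  All names here are hypothetical, not
KHE code:

```c
#include <assert.h>

/* toy layer-by-layer expansion: a "solution" is just a cost, each  */
/* day offers some choices, and "dominance" keeps only the cheapest */
#define MAX_SOLNS 64

typedef struct soln_set_rec {
  int costs[MAX_SOLNS];
  int count;
} SOLN_SET;

/* meld cost into set: since costs are totally ordered here, the new */
/* soln is either dominated by some existing soln, or dominates all  */
static void SolnSetMeld(SOLN_SET *set, int cost)
{
  for( int i = 0;  i < set->count;  i++ )
    if( set->costs[i] <= cost )
      return;                      /* dominated; drop the new soln */
  set->count = 0;                  /* new soln dominates the rest  */
  set->costs[set->count++] = cost;
}

/* expand every solution of prev by every choice of the next day */
static void Expand(const SOLN_SET *prev, const int *choices,
  int choice_count, SOLN_SET *next)
{
  next->count = 0;
  for( int i = 0;  i < prev->count;  i++ )
    for( int j = 0;  j < choice_count;  j++ )
      SolnSetMeld(next, prev->costs[i] + choices[j]);
}
```

In the real solver a solution also carries a signature, so dominance
is only a partial order and each day's set usually holds many
undominated solutions.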
@PP
Expansion is implemented by function
@ID @C {
void KheDrsSolnExpand(KHE_DRS_SOLN prev_soln, KHE_DRS_DAY prev_day,
  KHE_DRS_DAY next_day, KHE_DYNAMIC_RESOURCE_SOLVER drs);
}
Given @M { d sub k }-solution @C { prev_soln } whose
day @M { d sub k } is @C { prev_day }, and day @M { d sub {k+1} }
@C { next_day }, this finds all @M { d sub {k+1} }-solution
extensions of @C { prev_soln }, and adds them, with dominance
testing, to @C { next_day }'s solution set.  Here @C { prev_soln }
could be the root solution, which is not on any day, and in that
case @C { prev_day } is @C { NULL }.  However, @C { prev_soln }
will never be a solution for the last open day, so @C { next_day }
is always a well-defined open day.
# @PP
# The user can choose from several variants of expansion.
# These are stored in fields of @C { drs }.
@PP
The implementation is spread over several types.  Several of these
types @C { X } have a submodule called `@C { X - expansion }' following
their main submodule, containing @C { X }'s part of the expansion
code.  However, our presentation here often works top-down, ranging
across these submodules as required.  This works better than
presenting each type's expansion code separately.
#@PP
#The solver object, @C { drs }, contains this field used by expansion:
#@ID @C {
#bool				solve_expand_by_shifts;
#}
#Field @C { solve_expand_by_shifts } is fixed throughout any one
#solve, and says whether the user has requested expansion by shifts,
#as opposed to expansion by resources.  A few other solver fields
#are also used by expansion, for implementing less important options.
#Field @C { expand_drds }
#is fixed throughout any one expansion, and holds one resource on
#day object for each open resource on @C { next_day }.  It is set
#by this little function:
#@ID @C {
#void KheDrsDaySetExpandResourceOnDaySet(KHE_DRS_DAY next_day,
#  KHE_DYNAMIC_RESOURCE_SOLVER drs)
#{
#  KHE_DRS_RESOURCE dr;  int i;  KHE_DRS_RESOURCE_ON_DAY drd;
#  KheDrsResourceOnDaySetClear(drs->expand_drds);
#  HaArrayForEach(drs->open_resources, dr, i)
#  {
#    drd = KheDrsResourceOnDay(dr, next_day);
#    KheDrsResourceOnDaySetAddLast(drs->expand_drds, drd);
#  }
#}
#}
#When the priority queue is in use, each expansion can have
#a different @C { next_day } from the previous one, and so
#@C { KheDrsDaySetExpandResourceOnDaySet } is called before
#each call to @C { KheDrsSolnExpand }.  When the priority
#queue is not in use, the solve carries out all expansions
#on one day before moving to the next day, and so
#@C { KheDrsDaySetExpandResourceOnDaySet } is only
#called as the solve moves from one day to the next.
@BeginSubSubAppendices

@SubSubAppendix
    @Title { Expanders }
    @Tag { dynamic_impl.expansion.expanders }
@Begin
@LP
Expansion is always about trying certain assignments, then
undoing those and trying others.  These steps are supported
by an @I { expander } object, of type @C { KHE_DRS_EXPANDER }:
@ID @C {
typedef struct khe_drs_expander_rec *KHE_DRS_EXPANDER;
typedef HA_ARRAY(KHE_DRS_EXPANDER) ARRAY_KHE_DRS_EXPANDER;

struct khe_drs_expander_rec {
  KHE_DYNAMIC_RESOURCE_SOLVER	solver;
  ARRAY_KHE_DRS_TASK_SOLN	task_solns;
  ARRAY_KHE_DRS_TASK_SOLN	tmp_task_solns;
  bool				whole_tasks;
  bool				open;
  KHE_COST			cost;
  KHE_COST			cost_limit;
  int				free_resource_count;
  int				must_assign_count;
  HA_ARRAY_INT			marks;
};
}
Each expansion begins by creating an expander, and ends by freeing it.
@PP
Field @C { solver } is the enclosing solver.  It is not often used,
and when it is used it is usually for something fairly trivial,
like access to a free list.
@PP
Field @C { task_solns } holds the @I { current assignments }:  task
solution objects that the expansion wants to include in the next
solution it creates.  Field @C { tmp_task_solns } is a scratch
variable used by function @C { KheDrsExpanderMakeAndMeldSoln } below.
@PP
Field @C { whole_tasks } changes the meaning given to one task
solution when there are multi-day tasks.  When it is @C { false },
the task solution represents the assignment of one resource on day
to one task on day.  When it is @C { true }, although the task
solution object itself is the same, it is interpreted to mean
that the task on day's task is assigned the resource on day's
resource on every day that the task is running.
@PP
Field @C { open } is @C { true } when the expander can see nothing
wrong with the current assignments:  it is `open' to adding zero
or more additional assignments to them until a complete solution's
worth of assignments has been made.  Otherwise, there is some problem
and expansion should back out of the point it has reached and try
something else:  the expander is @I { closed } (not open).
@PP
Field @C { cost } is a lower bound on the cost of any solution
containing the current assignments.  We'll see later how the
expander keeps this up to date.  Field @C { cost_limit } is an
upper limit on how much a solution is allowed to cost.  If the
assignments chosen by the expansion cause @C { cost } to equal
or exceed @C { cost_limit }, the expander will close.
@PP
As defined in Appendix {@NumberOf dynamic_impl.expansion.resource_setup},
a @I { free resource } is an open resource which is not part of a fixed
assignment.  Field @C { free_resource_count } holds the number of free
resources available to this expansion and not assigned (not even to a
free day) by the current assignments.  The expander handles fixed
resources as well; they just don't affect @C { free_resource_count }.
@PP
As defined in Appendix {@NumberOf dynamic_impl.expansion.shift_setup},
a @I { must-assign task } is an open task, not part of a fixed
assignment, that the current expansion must assign a resource to,
otherwise the cost will be too great.  Field @C { must_assign_count }
holds the number of must-assign tasks that are part of the current
expansion but are not assigned by the current assignments.  The
expander handles all kinds of open tasks, but only must-assign
tasks affect @C { must_assign_count }.
@PP
If @C { free_resource_count < must_assign_count }, then there are
too few unused free resources to cover the must-assign tasks that
are not currently assigned.  The expander will close.  This assumes
that all the tasks that are part of the expansion are running on the
same day, so that no resource can be assigned to two of them.
@PP
Finally, @C { marks } is an array of indexes into the
@C { task_solns } array.  It allows the expander to mark
the point that it has reached and to return to that point,
as implemented by functions @C { KheDrsExpanderMarkBegin } and
@C { KheDrsExpanderMarkEnd } below.
@PP
At the start of an expansion, @C { KheDrsExpanderMake } is called to
make a new expander.  But we'll start with @C { KheDrsExpanderReset },
which resets an expander using fresh attributes:
@ID @C {
void KheDrsExpanderReset(KHE_DRS_EXPANDER de, bool whole_tasks,
  KHE_COST cost, KHE_COST cost_limit, int free_resource_count,
  int must_assign_count)
{
  de->whole_tasks = whole_tasks;
  de->cost = cost;
  de->cost_limit = cost_limit;
  de->free_resource_count = free_resource_count;
  de->must_assign_count = must_assign_count;
  KheDrsExpanderSetOpen(de);
}
}
@C { KheDrsExpanderSetOpen } sets the @C { open } field:
@ID @C {
void KheDrsExpanderSetOpen(KHE_DRS_EXPANDER de)
{
  de->open = de->cost < de->cost_limit &&
    de->free_resource_count >= de->must_assign_count;
}
}
The expander is open if its cost has not reached the limit,
and there are enough as-yet-unassigned free resources to cover
the as-yet-unassigned must-assign tasks.
@PP
Here now is @C { KheDrsExpanderMake }:
@ID @C {
KHE_DRS_EXPANDER KheDrsExpanderMake(bool whole_tasks, KHE_COST cost,
  KHE_COST cost_limit, int free_resource_count, int must_assign_count,
  KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  KHE_DRS_EXPANDER res;

  /* get an expander from scratch or from the free list */
  if( HaArrayCount(drs->expander_free_list) > 0 )
  {
    res = HaArrayLastAndDelete(drs->expander_free_list);
    HaArrayClear(res->task_solns);
    HaArrayClear(res->tmp_task_solns);
    HaArrayClear(res->marks);
  }
  else
  {
    HaMake(res, drs->arena);
    HaArrayInit(res->task_solns, drs->arena);
    HaArrayInit(res->tmp_task_solns, drs->arena);
    HaArrayInit(res->marks, drs->arena);
  }

  /* initialize its fields and return it */
  res->solver = drs;
  KheDrsExpanderReset(res, whole_tasks, cost, cost_limit,
    free_resource_count, must_assign_count);
  return res;
}
}
It takes the object from a free list, or makes it from scratch,
as usual.  @C { KheDrsExpanderReset } is also called directly,
to make a fresh start with an existing expander.
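@PP
The free-list pattern used by @C { KheDrsExpanderMake } and
@C { KheDrsExpanderFree } recurs throughout the solver.  Here is a
minimal generic sketch of it, with hypothetical simplified types
(not KHE code), using a linked free list in place of KHE's array
type:

```c
#include <assert.h>
#include <stdlib.h>

/* hypothetical simplified version of the make-from-free-list pattern */
typedef struct expander_rec {
  int cost;                        /* field reset on each reuse */
  struct expander_rec *next_free;
} *EXPANDER;

typedef struct solver_rec {
  EXPANDER expander_free_list;     /* singly linked free list */
  int made_count;                  /* fresh allocations, for checking */
} *SOLVER;

EXPANDER ExpanderMake(int cost, SOLVER s)
{
  EXPANDER res;
  if( s->expander_free_list != NULL )
  {
    /* reuse a previously freed expander */
    res = s->expander_free_list;
    s->expander_free_list = res->next_free;
  }
  else
  {
    /* nothing on the free list; allocate from scratch */
    res = malloc(sizeof *res);
    s->made_count++;
  }
  res->cost = cost;
  res->next_free = NULL;
  return res;
}

void ExpanderFree(EXPANDER e, SOLVER s)
{
  /* no deallocation; just push onto the solver's free list */
  e->next_free = s->expander_free_list;
  s->expander_free_list = e;
}
```

Because expansion runs many times per solve, recycling objects this
way avoids churning the allocator on the solver's hot path.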
@PP
At the end of expansion, the expander is freed by a call to
@C { KheDrsExpanderFree }:
@ID @C {
void KheDrsExpanderFree(KHE_DRS_EXPANDER de)
{
  HnAssert(HaArrayCount(de->task_solns) == 0,
    "KheDrsExpanderFree internal error 1");
  HnAssert(HaArrayCount(de->marks) == 0,
    "KheDrsExpanderFree internal error 2");
  HaArrayAddLast(de->solver->expander_free_list, de);
}
}
It checks that the expansion ended cleanly, then adds the expander to
a free list in the solver.
@PP
It is not safe to access the fields of expander objects directly
from outside the expander submodule, other than @C { solver }.
Instead there are these small and self-explanatory functions:
@IndentedList

@LI @C {
void KheDrsExpanderAddCost(KHE_DRS_EXPANDER de, KHE_COST cost)
{
  de->cost += cost;
  KheDrsExpanderSetOpen(de);
}
}

@LI @C {
void KheDrsExpanderReduceCostLimit(KHE_DRS_EXPANDER de,
  KHE_COST cost_limit)
{
  if( cost_limit < de->cost_limit )
  {
    de->cost_limit = cost_limit;
    KheDrsExpanderSetOpen(de);
  }
}
}

@LI @C {
bool KheDrsExpanderOpenToExtraCost(KHE_DRS_EXPANDER de,
  KHE_COST extra_cost)
{
  return de->cost + extra_cost < de->cost_limit;
}
}

@LI @C {
void KheDrsExpanderAddMustAssign(KHE_DRS_EXPANDER de)
{
  de->must_assign_count++;
  KheDrsExpanderSetOpen(de);
}
}

@LI @C {
void KheDrsExpanderDeleteFreeResource(KHE_DRS_EXPANDER de)
{
  de->free_resource_count--;
  KheDrsExpanderSetOpen(de);
}
}

@LI @C {
int KheDrsExpanderExcessResourceCount(KHE_DRS_EXPANDER de)
{
  return de->free_resource_count - de->must_assign_count;
}
}

@EndList
To add a task solution object to an expander, the call is
@ID {0.95 1.0} @Scale @C {
void KheDrsExpanderAddTaskSoln(KHE_DRS_EXPANDER de,
  KHE_DRS_TASK_SOLN dts)
{
  KHE_DRS_SIGNATURE sig;  KHE_DRS_TASK dt;  KHE_DRS_MTASK_SOLN asst;
  int i;  KHE_COST cost;  int must_assign_count, free_resource_count;
  KHE_DRS_RESOURCE dr;

  /* if not open, do nothing */
  if( !de->open )
    return;

  /* if there is a skip count problem, close and do nothing */
  if( dts.mtask_soln->skip_count > 0 )
  {
    de->open = false;
    return;
  }

  /* find new must_assign_count, free_resource_count, and cost values */
  must_assign_count = de->must_assign_count;
  free_resource_count = de->free_resource_count;
  cost = de->cost;
  if( dts.fixed_dtd != NULL )
  {
    dt = dts.fixed_dtd->encl_dt;
    if( dts.fixed_dtd == HaArrayFirst(dt->days) )
      cost += dt->asst_cost;
    if( dt->expand_role == KHE_DRS_TASK_EXPAND_MUST )
      must_assign_count--;
  }
  sig = KheDrsMTaskSolnSignature(dts.mtask_soln);
  cost += sig->cost;
  dr = KheDrsTaskSolnResource(dts);
  if( dr->expand_role == KHE_DRS_RESOURCE_EXPAND_FREE )
    free_resource_count--;

  /* if there is a problem with the values just found, close and return */
  if( cost >= de->cost_limit || free_resource_count < must_assign_count )
  {
    de->open = false;
    return;
  }

  ... code omitted here, see below ...
}
}
First, if the expander is already closed, it returns immediately.  If
the new assignment should not be used, because it has a non-zero
@C { skip_count } field, the expander closes and returns.  Otherwise,
it finds the effect of the new assignment on @C { must_assign_count },
@C { free_resource_count }, and @C { cost }.  If the assignment
is to a task (i.e. not to a free day), the cost increases by the
cost of that assignment, provided this is the task's first day,
and if the task is a must-assign task, @C { must_assign_count }
decreases by one.  Whatever the resource is assigned to, cost
increases by the cost of the assignment's resource monitors
signature, and if the assignment involves a free resource,
then the free resource count decreases by one.  If these new
values lead to problems, the expander closes and returns.
@PP
If we get past all that, the expander can accept the new assignment
and remain open:
@ID @C {
/* no problems with the addition; change the state of de */
de->must_assign_count = must_assign_count;
de->free_resource_count = free_resource_count;
de->cost = cost;
HaArrayAddLast(de->task_solns, dts);

/* increment the skip counts of the skip_assts */
HaArrayForEach(dts.mtask_soln->skip_assts, asst, i)
  asst->skip_count++;

/* update dtd's leaf expressions */
KheDrsTaskSolnLeafSet(dts, de->whole_tasks, de->solver);
}
It assigns the new values to the @C { must_assign_count },
@C { free_resource_count }, and @C { cost } fields, and adds
@C { dts } to @C { de->task_solns }.  It then increments the skip
count fields of @C { dts }'s skip list, as required, and ends by
informing the task on day objects affected by @C { dts } that
@C { dts } is now in force, by a call to @C { KheDrsTaskSolnLeafSet }
(Appendix {@NumberOf dynamic_impl.solns.task}).
# If @C { de->whole_tasks }
# is @C { true }, this requires one call to @C { KheDrsTaskOnDayLeafSet }
# for each task on day of the enclosing task.  Otherwise it requires
# just a single call to @C { KheDrsTaskOnDayLeafSet }.
@PP
There is also function @C { KheDrsExpanderAddTaskSolnSet }, which
adds a whole set of task solutions to the expander by calling
@C { KheDrsExpanderAddTaskSoln } on each:
@ID @C {
void KheDrsExpanderAddTaskSolnSet(KHE_DRS_EXPANDER de,
  KHE_DRS_TASK_SOLN_SET dtss, KHE_DRS_SHIFT ds)
{
  KHE_DRS_TASK_SOLN dts;  int i;
  if( ds != NULL )
  {
    KheDrsTaskSolnSetForEach(dtss, dts, i)
      if( KheDrsTaskSolnShift(dts) == ds )
	KheDrsExpanderAddTaskSoln(de, dts);
  }
  else
    KheDrsTaskSolnSetForEach(dtss, dts, i)
      KheDrsExpanderAddTaskSoln(de, dts);
}
}
If @C { ds } is non-@C { NULL }, only those elements of @C { dtss }
which assign tasks from shift @C { ds } are added.
@PP
When a task solution is no longer required, the next function removes it,
assuming that it has already been removed from @C { de->task_solns }:
@ID @C {
void KheDrsExpanderDoDeleteTaskSoln(KHE_DRS_EXPANDER de,
  KHE_DRS_TASK_SOLN dts)
{
  KHE_DRS_SIGNATURE sig;  KHE_DRS_TASK dt;  KHE_DRS_RESOURCE dr;
  int i;  KHE_DRS_MTASK_SOLN asst;

  /* update dtd's leaf expressions */
  KheDrsTaskSolnLeafClear(dts, de->whole_tasks, de->solver);

  /* decrement the skip counts of the skip_assts */
  HaArrayForEach(dts.mtask_soln->skip_assts, asst, i)
    asst->skip_count--;

  /* update cost, must_assign_count, and free_resource_count fields */
  if( dts.fixed_dtd != NULL )
  {
    dt = dts.fixed_dtd->encl_dt;
    if( dts.fixed_dtd == HaArrayFirst(dt->days) )
      de->cost -= dt->asst_cost;
    if( dt->expand_role == KHE_DRS_TASK_EXPAND_MUST )
      de->must_assign_count++;
  }
  sig = KheDrsMTaskSolnSignature(dts.mtask_soln);
  de->cost -= sig->cost;
  dr = KheDrsTaskSolnResource(dts);
  if( dr->expand_role == KHE_DRS_RESOURCE_EXPAND_FREE )
    de->free_resource_count++;
}
}
This reverses the state changes made by @C { KheDrsExpanderAddTaskSoln }.
It is called only by @C { KheDrsExpanderMarkEnd } (see below), never
directly by any expansion.
@PP
Expansion algorithms need to say `remember the assignments we have
now; we'll return to them later'.  For this we have
@C { KheDrsExpanderMarkBegin } and @C { KheDrsExpanderMarkEnd }:
@IndentedList

@LI @C {
void KheDrsExpanderMarkBegin(KHE_DRS_EXPANDER de)
{
  HaArrayAddLast(de->marks, HaArrayCount(de->task_solns));
}
}
#This just remembers the number of current assignments.  It may be
#called only when the solver is open.  A matching call to
@LI @C {
void KheDrsExpanderMarkEnd(KHE_DRS_EXPANDER de)
{
  int prev_count;  KHE_DRS_TASK_SOLN dts;
  HnAssert(HaArrayCount(de->marks) > 0,
    "KheDrsExpanderMarkEnd internal error");
  prev_count = HaArrayLastAndDelete(de->marks);
  while( HaArrayCount(de->task_solns) > prev_count )
  {
    dts = HaArrayLastAndDelete(de->task_solns);
    KheDrsExpanderDoDeleteTaskSoln(de, dts);
  }
  KheDrsExpanderSetOpen(de);
}
}

@EndList
@C { KheDrsExpanderMarkBegin } remembers the number of assignments;
@C { KheDrsExpanderMarkEnd } returns the expander to them by popping
assignments off the @C { task_solns } array and undoing them.
The expander may have closed since the matching call to
@C { KheDrsExpanderMarkBegin }, so @C { KheDrsExpanderMarkEnd } ends
with a call to @C { KheDrsExpanderSetOpen }, which recomputes the
correct value of @C { open } for the restored state.
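@PP
Stripped of cost accounting and leaf expression updates, the mark
mechanism is just a stack of saved lengths.  Here is a self-contained
sketch, with hypothetical simplified types (not KHE code):

```c
#include <assert.h>

#define MAX 100

/* hypothetical simplified mark/rewind over an array of assignments */
typedef struct expander_rec {
  int task_solns[MAX];  int task_count;   /* current assignments */
  int marks[MAX];       int mark_count;   /* stack of saved lengths */
} EXPANDER;

void ExpanderAdd(EXPANDER *de, int asst)
{
  de->task_solns[de->task_count++] = asst;
}

void ExpanderMarkBegin(EXPANDER *de)
{
  /* remember how many assignments there are right now */
  de->marks[de->mark_count++] = de->task_count;
}

void ExpanderMarkEnd(EXPANDER *de)
{
  /* pop assignments until we are back at the remembered length;  */
  /* the real solver also undoes each popped assignment's effects */
  int prev_count = de->marks[--de->mark_count];
  while( de->task_count > prev_count )
    de->task_count--;
}
```

Because the marks themselves form a stack, marked regions may nest,
which is exactly what a recursive expansion needs.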
@PP
Expansions can test whether the expander is open:
@ID @C {
bool KheDrsExpanderIsOpen(KHE_DRS_EXPANDER de)
{
  return de->open;
}
}
to find out whether they should continue down the current path.
@PP
The expander also offers a function which makes a new day solution
object and melds it into a solution set:
@ID {0.90 0.98} @Scale -1px @Break @C {
void KheDrsExpanderMakeAndMeldSoln(KHE_DRS_EXPANDER de,
  KHE_DRS_SOLN prev_soln, KHE_DRS_DAY next_day)
{
  KHE_DRS_SOLN next_soln;  KHE_DRS_TASK_SOLN dts, junk;  int i, ri;
  KHE_DRS_SIGNATURE sig, prev_sig;  KHE_DRS_SIGNER dsg;

  /* make a soln object */
  next_day->soln_made_count++;
  next_soln = KheDrsSolnMake(prev_soln, de->cost, de->solver);

  /* make sure de->tmp_task_solns has the right length */
  if( HaArrayCount(de->tmp_task_solns) != HaArrayCount(de->task_solns) )
  {
    HaArrayClear(de->tmp_task_solns);
    junk = KheDrsTaskSolnMake(NULL, NULL);
    HaArrayFill(de->tmp_task_solns, HaArrayCount(de->task_solns), junk);
  }

  /* reorder de->task_solns into de->tmp_task_solns */
  HaArrayForEach(de->task_solns, dts, i)
  {
    ri = KheDrsResourceOnDayIndex(dts.mtask_soln->resource_on_day);
    HaArrayPut(de->tmp_task_solns, ri, dts);
  }

  /* add each dts's task and signature (but not its cost) to soln */
  HaArrayForEach(de->tmp_task_solns, dts, i)
  {
    HaArrayAddLast(next_soln->prev_tasks, dts.fixed_dtd);
    KheDrsSignatureSetAddSignature(&next_soln->sig_set, dts.mtask_soln->sig,
      false);
  }

  /* set the event resource monitor part of next_soln's signature set */
  /* this last signature will be freed when next_soln is freed */
  dsg = HaArrayLast(next_day->signer_set->signers);
  if( HaArrayCount(prev_soln->sig_set.signatures) > 0 )
    prev_sig = HaArrayLast(prev_soln->sig_set.signatures);
  else
    prev_sig = NULL;  /* prev_soln is root, so prev_sig won't be accessed */
  sig = KheDrsSignerEvalSignature(dsg, prev_sig, de->solver, false);
  KheDrsSignatureSetAddSignature(&next_soln->sig_set, sig, true);

  /* depending on cost, either add next_soln to next_day or free it */
  if( KheDrsSolnCost(next_soln) < de->cost_limit )
    KheDrsSolnSetMeldSoln(next_day->soln_set, next_soln, next_day, de,
      de->solver);
  else
    KheDrsSolnFree(next_soln, de->solver);
}
}
The first step here is to make a new @C { KHE_DRS_SOLN } object,
@C { next_soln }.  Then, after ensuring that @C { de->tmp_task_solns }
has the same length as @C { de->task_solns }, the task solution
objects are copied into @C { de->tmp_task_solns }, reordered
into open resource index order.  It is then easy
to copy them into @C { next_soln->prev_tasks }, and also to add
in their signatures' states.  After that, the signatures of the
solution's event resource monitors are added in, and finally the
new solution is either melded into @C { next_day }'s solution set
(if its cost is competitive) or freed.
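@PP
The reordering step is a simple scatter:  each task solution knows
the index of its open resource, and is written to that position of
the destination array.  A minimal sketch, with hypothetical
simplified types (not KHE code):

```c
#include <assert.h>

/* hypothetical simplified scatter into open resource index order */
#define RESOURCES 4

typedef struct task_soln_rec {
  int resource_index;   /* index of the assigned open resource */
  int task_id;
} TASK_SOLN;

/* copy src (arbitrary order, one entry per assigned resource) into */
/* dest so that dest[i] is the assignment of open resource i        */
void ReorderByResourceIndex(const TASK_SOLN *src, int count,
  TASK_SOLN *dest)
{
  for( int i = 0;  i < count;  i++ )
    dest[src[i].resource_index] = src[i];
}
```

The scatter is linear time, and no sorting is needed because the
resource indexes already say where each entry belongs.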
@PP
A logically similar but much shorter function makes a new
@C { KHE_DRS_SHIFT_SOLN } object and melds it into the
set of shift assignment objects held in shift solution
trie node @C { dsst }:
@ID {0.95 1.0} @Scale @C {
void KheDrsExpanderMakeAndMeldShiftSoln(KHE_DRS_EXPANDER de,
  KHE_DRS_SHIFT_SOLN_TRIE dsst, KHE_DRS_SHIFT ds,
  KHE_DRS_SOLN prev_soln, KHE_DRS_DAY prev_day, KHE_DRS_DAY next_day)
{
  KHE_DRS_SHIFT_SOLN dss;  int i;  KHE_DRS_TASK_SOLN dts;
  KHE_DRS_SIGNATURE prev_sig;

  /* make dss and add de's non-fixed assignments to tasks to it */
  dss = KheDrsShiftSolnMake(de->solver);
  HaArrayForEach(de->task_solns, dts, i)
    if( !KheDrsTaskSolnIsFixed(dts) )
      KheDrsTaskSolnSetAddLast(dss->task_solns, dts);

  /* set dss's signature and cost */
  if( prev_soln != NULL )
    prev_sig = HaArrayLast(prev_soln->sig_set.signatures);
  else
    prev_sig = NULL;  /* prev_soln is root, so prev_sig won't be accessed */
  dss->sig = KheDrsSignerEvalSignature(ds->signer, prev_sig, de->solver, false);
  KheDrsSignatureRefer(dss->sig);

  /* depending on cost, either add dss to dsst or free it */
  if( KheDrsSolnCost(prev_soln) + dss->sig->cost < de->cost_limit )
    KheDrsShiftSolnTrieMeldShiftSoln(dsst, dss, ds->signer, de->solver);
  else
    KheDrsShiftSolnFree(dss, de->solver);
}
}
The function begins by making a new shift solution object.  It just
copies the task solutions into the new object; their order there
does not matter.  Only non-fixed tasks are stored in shift solution
objects; we'll defer the reason for this until we come to study
expansion by shifts.  Then a signature is calculated for the event
resource monitors only, as required in shift solution objects, and
finally the new object is either melded into @C { dsst }'s list of
shift solution objects (if its cost is competitive) by
@C { KheDrsShiftSolnTrieMeldShiftSoln }
(Appendix {@NumberOf dynamic_impl.solns.shift_soln_tries}),
or else it is freed.
@End @SubSubAppendix

@SubSubAppendix
    @Title { The main solution expansion function }
    @Tag { dynamic_impl.expansion.main }
@Begin
@LP
We begin our top-down presentation at the top, with function
@C { KheDrsSolnExpand }.  We'll be examining the functions
it calls later; for now the idea is to get an overview.
@PP
@C { KheDrsSolnExpand } is too big for one page, so we've
broken it into chunks:
@ID {0.95 1.0} @Scale @C {
void KheDrsSolnExpand(KHE_DRS_SOLN prev_soln, KHE_DRS_DAY prev_day,
  KHE_DRS_DAY next_day, KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  int i, j, k;  KHE_DRS_RESOURCE dr;  KHE_DRS_EXPANDER de, shift_de;
  KHE_DRS_SHIFT ds, ds2;  KHE_DRS_RESOURCE_SET free_resources;
  KHE_DRS_TASK_SOLN_SET fixed_assts;  KHE_DRS_TASK_SOLN asst;

  ... see the five chunks of code below ...
}
}
As described previously, this finds all @M { d sub {k+1} }-solution
extensions of @M { d sub k }-solution @C { prev_soln }, and adds them,
with dominance testing, to @C { next_day }'s solution set.  As before,
@C { prev_soln } may be the root solution, in which case @C { prev_day }
is @C { NULL }, but @C { next_day } is always a well-defined open day.
@PP
Here is the first chunk of code:
@ID @C {
/* check on and update the number of expansions from prev_day */
if( prev_day != NULL )
{
  if( drs->solve_daily_expand_limit > 0 &&
      prev_day->solve_expand_count >= drs->solve_daily_expand_limit )
    return;
  prev_day->solve_expand_count += 1;
}

/* check the time limit and return early if it has been reached */
if( KheOptionsTimeLimitReached(drs->options) )
{
  if( DEBUG22 )
    fprintf(stderr, "  KheDrsSolnExpand returning early (time limit)\n");
  return;
}

/* mark prev_soln as expanded */
KheDrsSolnMarkExpanded(prev_soln);
}
The solver offers an option to limit the number of expansions
carried out on each day.  If this option is in effect (if
@C { drs->solve_daily_expand_limit > 0 }) then @C{ KheDrsSolnExpand }
returns immediately if the limit has been reached.  It then
checks the time limit and returns immediately if it has been
reached.  Then @C { KheDrsSolnMarkExpanded } is called to set the
priority queue back index of @C { prev_soln } to @C { -1 }, to
indicate that @C { prev_soln } has been expanded.
@PP
The second chunk of code does some setting up for expansion:
@ID @C {
/* make the main expander */
de = KheDrsExpanderMake(false, KheDrsSolnCost(prev_soln),
  drs->solve_init_cost, KheDrsResourceSetCount(drs->open_resources), 0, drs);

/* begin expansion in each open resource */
free_resources = KheDrsResourceSetMake(drs);
fixed_assts = KheDrsTaskSolnSetMake(drs);
KheDrsResourceSetForEach(drs->open_resources, dr, i)
  KheDrsResourceExpandBegin(dr, prev_soln, next_day, free_resources,
    fixed_assts, de);
KheDrsResourceSetForEach(free_resources, dr, i)
  KheDrsResourceExpandBeginFree(dr, prev_soln, next_day, de);

/* begin expansion in next_day and its shifts */
KheDrsDayExpandBegin(next_day, prev_soln, prev_day, de);

/* set up for mtask soln and mtask pair soln dominance, if requested */
if( drs->solve_extra_selection )
{
  KheDrsMTaskSolnDominanceInit(drs);
  KheDrsMTaskPairSolnDominanceInit(drs);
}
}
The first step here is to create an expander.  Its initial @C { cost }
is @C { KheDrsSolnCost(prev_soln) }.  Its initial @C { cost_limit }
is @C { drs->solve_init_cost }, the cost of the solution that we are
trying to improve on.  Its initial @C { free_resource_count } is
the number of open resources, but the following calls to
@C { KheDrsResourceExpandBegin } will reduce that to the number of
free resources (open resources not subject to fixed assignments).
And its initial @C { must_assign_count } is 0, but
@C { KheDrsDayExpandBegin } will increase that as it
discovers must-assign tasks.
@PP
The next step is to create two sets:  @C { free_resources },
which will grow from its initial empty value to hold the
set of all open resources not subject to a fixed assignment
on @C { next_day }, and @C { fixed_assts }, which will grow
from its initial empty value to hold the set of all fixed
assignments of open resources on @C { next_day }.
@PP
After that, for each open resource @C { dr } we call
@C { KheDrsResourceExpandBegin } to inform @C { dr }
that an expansion is beginning.  We'll see this function
later.  Among other jobs it either adds @C { dr } to
@C { free_resources } or else it adds @C { dr }'s fixed
assignment to @C { fixed_assts }.
@PP
Next we call @C { KheDrsResourceExpandBeginFree } for each free
resource.  We'll see why we visit the free resources a second
time like this, rather than doing all the work just once in
@C { KheDrsResourceExpandBegin }, when we study these two
functions in detail.
@PP
Next we call @C { KheDrsDayExpandBegin } to inform @C { next_day }
that an expansion is beginning.  Then mtask solution dominance and
mtask pair solution dominance are initialized if requested.
@PP
The third chunk of code is concerned with setting up for expansion
by shifts:
@ID {0.95 1.0} @Scale @C {
if( drs->solve_expand_by_shifts )
{
  /* initialize shift solution tries */
  shift_de = KheDrsExpanderMake(true, 0, 0, 0, 0, drs);
  HaArrayForEach(next_day->shifts, ds, i)
    KheDrsShiftBuildShiftSolnTrie(ds, prev_soln, prev_day, next_day,
      free_resources, fixed_assts, shift_de, drs);
  KheDrsExpanderFree(shift_de);

  /* find forced assignments, prune shift solutions invalidated by them */
  HaArrayForEach(next_day->shifts, ds, i)
    KheDrsResourceSetForEach(drs->open_resources, dr, j)
      if( KheDrsShiftSolnTrieResourceIsForced(ds->soln_trie, dr) )
      {
	/* dr is forced in ds, so prune it from the others */
	HaArrayForEach(next_day->shifts, ds2, k)
	  if( ds2 != ds )
	    KheDrsShiftSolnTriePruneForced(ds2->soln_trie, dr, drs);
      }

  /* find pairs of dominated shifts on next_day */
  if( drs->solve_shift_pairs )
    KheDrsShiftSolnTrieFindDominatedShiftPairs(prev_soln, next_day, drs);
}
}
This begins by calling @C { KheDrsShiftBuildShiftSolnTrie } for
each shift on @C { next_day }, to build the shift solution trie
for that shift.  Here @C { shift_de } is a scratch expander which
is reset each time an expander is needed.
@PP
If all assignments to a shift @M { s } demand some resource @M { r },
then @M { r } is not available for assignment to any other shift.
@C { KheDrsShiftSolnTrieResourceIsForced } checks this condition
for one shift and one resource, and if it is true,
@C { KheDrsShiftSolnTriePruneForced } removes all assignments
containing that resource from other shift solution tries.
@PP
Then @C { KheDrsShiftSolnTrieFindDominatedShiftPairs } is called
to initialize for shift pair dominance.  This will prevent pairs
of shift solutions being chosen that have previously been shown
to be uncompetitive.
@PP
The fourth chunk of code carries out the expansion proper:
@ID {0.95 1.0} @Scale @C {
/* carry out the main part of the expansion */
KheDrsExpanderMarkBegin(de);
KheDrsExpanderAddTaskSolnSet(de, fixed_assts, NULL);
if( KheDrsExpanderIsOpen(de) )
{
  if( drs->solve_expand_by_shifts )
    KheDrsSolnExpandByShifts(prev_soln, next_day, de, free_resources, 0);
  else
    KheDrsSolnExpandByResources(prev_soln, prev_day, next_day, de,
      free_resources, 0);
}
KheDrsExpanderMarkEnd(de);
}
The fixed assignments are added to the expander; then, provided the
expander is still open, the free resources are assigned by calling
@C { KheDrsSolnExpandByShifts } or @C { KheDrsSolnExpandByResources }.
@PP
Finally, the fifth and last chunk of code finishes the expansion by
ending expansion in @C { next_day } and in the open resources, and
freeing the two sets and the expander:
@ID @C {
/* end expansion in next_day and in the open resources */
KheDrsDayExpandEnd(next_day, de);
KheDrsResourceSetForEach(drs->open_resources, dr, i)
  KheDrsResourceExpandEnd(dr, de);
KheDrsResourceSetFree(free_resources, drs);

/* free fixed_assts and the expander and return */
KheDrsTaskSolnSetForEach(fixed_assts, asst, i)
  if( asst.fixed_dtd != NULL )
    asst.fixed_dtd->encl_dt->expand_role = KHE_DRS_TASK_EXPAND_NO_VALUE;
KheDrsTaskSolnSetFree(fixed_assts, drs);
KheDrsExpanderFree(de);
}
Ideally, the assignments to @C { expand_role } here would be
encapsulated in a suitably named function.
@End @SubSubAppendix

@SubSubAppendix
    @Title { Initializing resources for expansion }
    @Tag { dynamic_impl.expansion.resource_setup }
@Begin
@LP
In this section we present the code that sets up each open resource
for expansion and clears it back again at the end.  We start with
the fields of @C { KHE_DRS_RESOURCE } concerned with expansion:
@ID @C {
struct khe_drs_resource_rec {
  ...
  KHE_DRS_RESOURCE_EXPAND_ROLE	expand_role;
  ARRAY_KHE_DRS_SIGNATURE	expand_signatures;
  ARRAY_KHE_DRS_MTASK_SOLN	expand_mtask_solns;
  KHE_DRS_MTASK_SOLN		expand_free_mtask_soln;
  KHE_DRS_DIM2_TABLE		expand_dom_test_cache;
};
}
These fields appear in type @C { KHE_DRS_RESOURCE } rather than,
say, @C { KHE_DRS_RESOURCE_ON_DAY } because the algorithm
is single-threaded and only one expansion is in progress at
any given moment, so each resource needs only one value of
these fields at a time.
@PP
Field @C { expand_role } has type
@ID @C {
typedef enum {
  KHE_DRS_RESOURCE_EXPAND_NO,
  KHE_DRS_RESOURCE_EXPAND_FIXED,
  KHE_DRS_RESOURCE_EXPAND_FREE
} KHE_DRS_RESOURCE_EXPAND_ROLE;
}
and records the role that this resource (call it @M { r }) takes in
the current expansion, as follows.
@PP
@C { KHE_DRS_RESOURCE_EXPAND_NO }:  there is no expansion
in progress, or there is one but @M { r } is not an open resource,
so it does not participate in it.
@PP
@C { KHE_DRS_RESOURCE_EXPAND_FIXED }:  there is an expansion
in progress, @M { r } is open so it participates in it, and @M { r }
must be assigned by it to a specific task.  Usually this will be because
that task is a multi-day task and @M { r } was assigned to it on
its first day.  @C { KheDrsResourceOnDayIsFixed } (see below)
has the full story.  We say that the resource is @I { fixed }.
@PP
@C { KHE_DRS_RESOURCE_EXPAND_FREE }:  neither of the
other cases applies:  an expansion is in progress, @M { r } is
open so it participates in it, but @M { r } is not fixed to any
specific task, and indeed need not be assigned at all.  We say
that the resource is @I { free }.
@PP
By definition, the tasks of one shift have the same busy times and
workload.  Assigning @M { r } to any one of them has the same
effect on @M { r }'s resource monitors.  For each open resource
and each shift on the @C { next_day } of the expansion, there is
a single @C { KHE_DRS_SIGNATURE } object holding this common
resource signature.  There is also one signature for a free
day.  All these signatures are kept in the @C { expand_signatures }
field of the resource, in arbitrary order.
@PP
Field @C { expand_mtask_solns } contains the mtask solution
objects open to @C { dr } on @C { next_day }.  Typically this
will be one per mtask plus one denoting a free day.  If
@C { dr } is fixed there will be just the one mtask solution.
Again, these mtask solutions appear in arbitrary order.  As
we'll see, they get sorted into non-decreasing cost order.
This simple heuristic helps expansion by resources find better
solutions earlier, which somewhat reduces running time.
@PP
Field @C { expand_free_mtask_soln } contains the free day element
of @C { expand_mtask_solns }.  If a free day is not possible for
any reason, @C { expand_free_mtask_soln } is @C { NULL }.
@PP
Finally, @C { expand_dom_test_cache } is an optional cache
containing the results of dominance tests between pairs of
elements of @C { expand_mtask_solns }.  The intention is
to speed up dominance testing in these cases, but the
author has not observed any significant speedup.
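@PP
The caching idea can be sketched independently of KHE.  The following
minimal illustration (invented names, not KHE code) memoizes the results
of an expensive pairwise dominance test in a two-dimensional table whose
entries are tri-state:  unknown, true, or false:
```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/* tri-state cache entry: result not yet computed, or cached true/false */
typedef enum { DOM_UNKNOWN, DOM_TRUE, DOM_FALSE } DOM_ENTRY;

typedef struct {
  int n;            /* number of candidate objects */
  DOM_ENTRY *cells; /* n * n entries, row-major */
} DOM_CACHE;

static DOM_CACHE *DomCacheMake(int n)
{
  DOM_CACHE *res = malloc(sizeof(DOM_CACHE));
  res->n = n;
  /* calloc zeroes the entries, so they all start as DOM_UNKNOWN */
  res->cells = calloc((size_t) (n * n), sizeof(DOM_ENTRY));
  return res;
}

/* return whether i dominates j, computing it with test() on a cache miss */
static bool DomCacheTest(DOM_CACHE *dc, int i, int j,
  bool (*test)(int, int), int *misses)
{
  DOM_ENTRY *cell = &dc->cells[i * dc->n + j];
  if( *cell == DOM_UNKNOWN )
  {
    (*misses)++;
    *cell = test(i, j) ? DOM_TRUE : DOM_FALSE;
  }
  return *cell == DOM_TRUE;
}

/* hypothetical underlying test: i dominates j when i <= j */
static bool LeqTest(int i, int j) { return i <= j; }
```
Whether such a cache pays depends on how often the same pair is tested
again, which may explain why no significant speedup was observed.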
@PP
Our first function is @C { KheDrsResourceOnDayIsFixed }, a helper
function, called only by @C { KheDrsResourceExpandBegin }, which
determines whether a resource is fixed, and if so to what:
@ID {0.95 1.0} @Scale @C {
bool KheDrsResourceOnDayIsFixed(KHE_DRS_RESOURCE_ON_DAY drd,
  KHE_DRS_SOLN soln, KHE_DYNAMIC_RESOURCE_SOLVER drs,
  KHE_DRS_TASK_ON_DAY *dtd)
{
  KHE_DRS_TASK_ON_DAY dtd1, dtd2;

  /* (1) if this is a rerun, that fixes the assignment */
  if( drs->rerun_soln != NULL )
    return *dtd = KheDrsPackedSolnTaskOnDay(drs->rerun_soln,
      drd->day, drd->encl_dr), true;

  /* (2) if drd is preassigned to a task on this day, it's fixed to that */
  if( drd->preasst_dtd != NULL )
    return *dtd = drd->preasst_dtd, true;

  /* (3) if drd has a closed assignment, it's fixed to that */
  if( drd->closed_dtd != NULL )
    return *dtd = drd->closed_dtd, true;

  /* (4) if drd's resource is assigned to a task in soln which is still */
  /* running, then drd is fixed to that */
  if( KheDrsSolnResourceIsAssigned(soln, drd->encl_dr, &dtd1) &&
	KheDrsTaskRunningOnDay(dtd1->encl_dt, drd->day, &dtd2) )
    return *dtd = dtd2, true;

  /* otherwise drd has no fixed assignment */
  return *dtd = NULL, false;
}
}
First, some runs are @I reruns, and for them the resource on day is
fixed to a task on day that may be retrieved from the @C { drs->rerun_soln }
packed solution object (Appendix {@NumberOf dynamic_impl.solving.testing}).
Second, there could be a preassignment of this resource to a task
running on this day, fixing the resource to that task.  Third,
even if the resource and day are open, there could still be a
closed assignment, usually arising from a multi-day task, which is
only opened if all the days it is running are open.
Fourth, the resource could also have been assigned to a multi-day task
yesterday (in @C { soln }); if so it must continue with that task
today.  @C { KheDrsSolnResourceIsAssigned } returns @C { true } if
the resource is busy in @C { soln }, and @C { KheDrsTaskRunningOnDay }
returns @C { true } if the task it is busy with is still running today.
This is where the solver fails to handle tasks which run on multiple
days but with gaps in the days:  it looks for assignments to multi-day
tasks only on the previous day.
@PP
@C { KheDrsResourceExpandBegin } begins the job of initializing the fields
we saw earlier:
@ID {0.90 0.98} @Scale -1px @Break @C {
void KheDrsResourceExpandBegin(KHE_DRS_RESOURCE dr,
  KHE_DRS_SOLN prev_soln, KHE_DRS_DAY next_day,
  KHE_DRS_RESOURCE_SET free_resources,
  KHE_DRS_TASK_SOLN_SET fixed_assts, KHE_DRS_EXPANDER de)
{
  KHE_DRS_TASK_ON_DAY fixed_dtd;  KHE_DRS_RESOURCE_ON_DAY drd;
  KHE_DRS_TASK_SOLN dts;  KHE_DRS_MTASK_SOLN dms;
  KHE_DRS_TASK dt;  KHE_DRS_SIGNATURE prev_sig, sig;

  /* add signatures, and assignments to mtasks */
  drd = KheDrsResourceOnDay(dr, next_day);
  if( KheDrsResourceOnDayIsFixed(drd, prev_soln, de->solver, &fixed_dtd) )
  {
    /* mark the resource as fixed, and inform the expander */
    dr->expand_role = KHE_DRS_RESOURCE_EXPAND_FIXED;
    KheDrsExpanderDeleteFreeResource(de);

    /* fixed assignment to fixed_dtd; make and add sasst and dms */
    /* NB the optional addition is forced here, so sig is always referred to */
    prev_sig = HaArray(prev_soln->sig_set.signatures, dr->open_resource_index);
    sig = KheDrsResourceSignatureMake(drd, fixed_dtd, prev_sig, de->solver);
    dms = KheDrsResourceOptionallyAddMTaskSoln(sig, drd, NULL, fixed_dtd,
      true, de);
    KheDrsResourceAddExpandSignature(dr, sig);

    /* mark the task as fixed */
    if( fixed_dtd != NULL )
    {
      dt = fixed_dtd->encl_dt;
      HnAssert(dt->expand_role == KHE_DRS_TASK_EXPAND_NO_VALUE,
	"KheDrsResourceExpandBegin internal error 4 (role %s)",
	KheDrsTaskExpandRoleShow(dt->expand_role));
      dt->expand_role = KHE_DRS_TASK_EXPAND_FIXED;
    }

    /* make dts and add to fixed_assts */
    dts = KheDrsTaskSolnMake(dms, fixed_dtd);
    KheDrsTaskSolnSetAddLast(fixed_assts, dts);

    /* and build dominance cache */
    KheDrsResourceBuildDominanceTestCache(dr, drd, de->solver);
  }
  else
  {
    /* mark the resource as free */
    dr->expand_role = KHE_DRS_RESOURCE_EXPAND_FREE;

    /* add dr to free_resources */
    KheDrsResourceSetAddLast(free_resources, dr);
  }
}
}
@C { KheDrsResourceOnDayIsFixed } says whether @C { dr } is subject to
a fixed assignment or not.  If it is, then the full initialization of
@C { dr } is done here; we'll explain the details in a moment.  If it
isn't, then just the minimum is done here, marking @C { dr } as free
and adding it to @C { free_resources }, the set of all free resources,
leaving the rest to a later call to @C { KheDrsResourceExpandBeginFree }.
@PP
When @C { dr } has a fixed assignment, @C { dr }'s role is set
to fixed, and the expander is told that there is one fewer free
resource than previously thought.  A signature and mtask
solution object are made for the fixed assignment and added to
@C { dr }; we'll see the functions that do this in a moment.
Then the enclosing task is marked as fixed, a task solution
object is made and added to @C { fixed_assts }, and the dominance
test cache (field @C { expand_dom_test_cache }) is initialized.
@PP
The code goes to some trouble to store only signatures and mtask
solution objects that are actually useful.  For a fixed assignment
this is not needed, but it is done anyway, as follows.  The call to
@C { KheDrsResourceSignatureMake } makes a new signature object.
Then @C { KheDrsResourceOptionallyAddMTaskSoln } makes and adds
an mtask solution to @C { dr->expand_mtask_solns }, and returns it:
@ID @C {
KHE_DRS_MTASK_SOLN KheDrsResourceOptionallyAddMTaskSoln(
  KHE_DRS_SIGNATURE sig, KHE_DRS_RESOURCE_ON_DAY drd, KHE_DRS_MTASK dmt,
  KHE_DRS_TASK_ON_DAY fixed_dtd, bool force, KHE_DRS_EXPANDER de)
{
  KHE_DRS_MTASK_SOLN res;
  if( force || KheDrsExpanderOpenToExtraCost(de, sig->cost) )
  {
    res = KheDrsMTaskSolnMake(sig, drd, dmt, fixed_dtd, de->solver);
    HaArrayAddLast(drd->encl_dr->expand_mtask_solns, res);
    return res;
  }
  else
    return NULL;
}
}
If @C { KheDrsExpanderOpenToExtraCost(de, sig->cost) } is false,
any solution that uses this assignment has a larger cost than the
cost we are trying to improve on (for example, because the assignment
violates a hard resource constraint), so no assignment object is
created.  Otherwise, the assignment is created and added to
@C { dr->expand_mtask_solns }.
@PP
As an exception, the assignment object is created anyway when
@C { force } is @C { true }.  This is used for fixed assignments,
for which our algorithm needs an assignment object even when it
is not competitive.
@PP
For adding the new signature to the resource, the call is
@ID @C {
void KheDrsResourceAddExpandSignature(KHE_DRS_RESOURCE dr,
  KHE_DRS_SIGNATURE sig)
{
  HaArrayAddLast(dr->expand_signatures, sig);
  KheDrsSignatureRefer(sig);
}
}
This keeps @C { sig }'s reference count up to date, as required.
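@PP
The sharing of one signature among several owners follows the usual
reference counting pattern.  The following minimal sketch (invented
names, not KHE code) shows the essential discipline:  a refer call for
each new owner, and a release call that frees the object only when the
last owner lets go:
```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/* a minimal reference-counted object, standing in for a shared signature */
typedef struct {
  int ref_count;
  bool *freed_flag;  /* set when the object is freed, for demonstration */
} SHARED;

static SHARED *SharedMake(bool *freed_flag)
{
  SHARED *res = malloc(sizeof(SHARED));
  res->ref_count = 1;           /* the maker holds the first reference */
  res->freed_flag = freed_flag;
  return res;
}

/* a new owner takes a reference */
static void SharedRefer(SHARED *s) { s->ref_count++; }

/* an owner lets go; the last release frees the object */
static void SharedRelease(SHARED *s)
{
  if( --s->ref_count == 0 )
  {
    *s->freed_flag = true;
    free(s);
  }
}
```
On this model, a function like @C { KheDrsResourceAddExpandSignature }
pairs each stored pointer with one refer call, and the matching clear
function pairs each removal with one release call.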
@PP
After @C { KheDrsResourceExpandBegin } has been called for each
of the open resources, @C { KheDrsSolnExpand } calls this function
on each free resource:
@ID {0.90 1.0} @Scale @C {
void KheDrsResourceExpandBeginFree(KHE_DRS_RESOURCE dr,
  KHE_DRS_SOLN prev_soln, KHE_DRS_DAY next_day, KHE_DRS_EXPANDER de)
{
  int i, j;  KHE_DRS_RESOURCE_ON_DAY drd;  KHE_DRS_TASK_ON_DAY dtd;
  KHE_DRS_MTASK dmt;  KHE_DRS_SHIFT ds;  KHE_DRS_SIGNATURE prev_sig, sig;

  /* unfixed assignment; make signatures and mtask solns as required */
  HnAssert(dr->expand_role == KHE_DRS_RESOURCE_EXPAND_FREE,
    "KheDrsResourceExpandBeginFree internal error");
  drd = KheDrsResourceOnDay(dr, next_day);
  prev_sig = HaArray(prev_soln->sig_set.signatures, dr->open_resource_index);
  HaArrayForEach(next_day->shifts, ds, i)
  {
    sig = NULL;
    HaArrayForEach(ds->open_mtasks, dmt, j)
      if( KheDrsMTaskAcceptResourceBegin(dmt, drd, &dtd) )
      {
	if( sig == NULL )
	  sig = KheDrsResourceSignatureMake(drd, dtd, prev_sig, de->solver);
	KheDrsResourceOptionallyAddMTaskSoln(sig, drd, dmt, NULL, false, de);
	KheDrsMTaskAcceptResourceEnd(dmt, dtd);
      }
    if( sig != NULL && !KheDrsSignatureOptionallyFree(sig, de->solver) )
      KheDrsResourceAddExpandSignature(dr, sig);
  }

  /* make one signature and mtask soln for a free day */
  sig = KheDrsResourceSignatureMake(drd, NULL, prev_sig, de->solver);
  dr->expand_free_mtask_soln =
    KheDrsResourceOptionallyAddMTaskSoln(sig, drd, NULL, NULL, false, de);
  if( !KheDrsSignatureOptionallyFree(sig, de->solver) )  /* yes, we need this */
    KheDrsResourceAddExpandSignature(dr, sig);

  /* sort expand_mtask_solns by increasing cost and keep only the best */
  KheDrsSortAndReduceMTaskSolns(dr, de->solver);

  /* move any unavoidable cost into the expander */
  KheDrsResourceAdjustSignatureCosts(dr, de);

  /* add a dominance test cache */
  KheDrsResourceBuildDominanceTestCache(dr, drd, de->solver);
}
}
Since @C { dr } is free, any mtask from any shift is a possible
assignment, as is a free day.  So this code iterates over all shifts
and their mtasks, adding mtask solution objects where they are
feasible, including for a free day, and signature objects where they
are used.
@PP
At the end come two adjustments to the mtask solution objects.  First is
@ID {0.90 1.0} @Scale @C {
void KheDrsSortAndReduceMTaskSolns(KHE_DRS_RESOURCE dr,
  KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  KHE_DRS_MTASK_SOLN dms, dms2;  KHE_DRS_SIGNATURE sig, sig2;

  /* sort the mtask solutions by increasing cost */
  HaArraySort(dr->expand_mtask_solns, &KheDrsMTaskSolnCmp);

  /* if there is a resource expand limit and it will make a difference here */
  if( drs->solve_resource_expand_limit > 0 &&
      drs->solve_resource_expand_limit < HaArrayCount(dr->expand_mtask_solns) )
  {
    /* find dms, the mtask soln with the largest cost that we want to keep */
    dms = HaArray(dr->expand_mtask_solns, drs->solve_resource_expand_limit-1);
    sig = KheDrsMTaskSolnSignature(dms);

    /* delete and free all mtask solns whose cost exceeds dms's */
    dms2 = HaArrayLast(dr->expand_mtask_solns);
    sig2 = KheDrsMTaskSolnSignature(dms2);
    while( sig2->cost > sig->cost )
    {
      HaArrayDeleteLast(dr->expand_mtask_solns);
      if( dms2 == dr->expand_free_mtask_soln )
        dr->expand_free_mtask_soln = NULL;
      KheDrsMTaskSolnFree(dms2, drs);
      dms2 = HaArrayLast(dr->expand_mtask_solns);
      sig2 = KheDrsMTaskSolnSignature(dms2);
    }
  }
}
}
This sorts the newly created mtask solution objects into increasing
cost order (correctness does not depend on their order, but trying
lower cost solutions first is useful).  After that, if the
@C { solve_resource_expand_limit } user option is in use (that is,
if its value is positive), the most costly assignments are removed
until the number remaining is approximately equal to
@C { solve_resource_expand_limit }.  Of course, this destroys the
optimality guarantee.
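@PP
The reason the number kept is only approximately equal to the limit is
that the loop stops at the first entry whose cost does not exceed the
cutoff cost, so ties at the cutoff survive.  This sketch (invented
names, not KHE code) shows the same behaviour on a plain array of costs:
```c
#include <assert.h>
#include <stdlib.h>

/* ascending comparison for qsort over integer costs */
static int CostCmp(const void *a, const void *b)
{
  return *(const int *) a - *(const int *) b;
}

/* Sort costs into ascending order, then drop entries whose cost strictly
   exceeds the cost at index limit - 1.  Ties at the cutoff survive, so
   the number kept may exceed limit.  Returns the new count. */
static int SortAndReduce(int *costs, int count, int limit)
{
  int cutoff;
  qsort(costs, (size_t) count, sizeof(int), CostCmp);
  if( limit <= 0 || limit >= count )
    return count;
  cutoff = costs[limit - 1];
  while( count > limit && costs[count - 1] > cutoff )
    count--;
  return count;
}
```
For example, with costs {5, 1, 3, 3, 9} and limit 2, the sorted array is
{1, 3, 3, 5, 9}, the cutoff cost is 3, and three entries are kept.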
@PP
The other function called at the end of @C { KheDrsResourceExpandBeginFree }
is
@ID @C {
void KheDrsResourceAdjustSignatureCosts(KHE_DRS_RESOURCE dr,
  KHE_DRS_EXPANDER de)
{
  KHE_COST movable_cost;  int i;
  KHE_DRS_MTASK_SOLN dms;  KHE_DRS_SIGNATURE sig;
  if( HaArrayCount(dr->expand_mtask_solns) > 0 )
  {
    dms = HaArrayFirst(dr->expand_mtask_solns);
    movable_cost = dms->sig->cost;
    KheDrsExpanderAddCost(de, movable_cost);
    HaArrayForEach(dr->expand_signatures, sig, i)
    {
      sig->cost -= movable_cost;
      HnAssert(sig->cost >= 0,
	"KheDrsResourceAdjustSignatureCosts internal error");
    }
  }
}
}
At least one of @C { dr }'s assignments must be used, even when
@C { dr } is assigned a free day, and so, since the mtask solution
objects are now sorted by non-decreasing cost, a cost at least
equal to the cost of the first of them must be incurred.
The code finds this cost and moves it out of the signatures and
into the expander.  This has zero net effect on cost, but a higher
cost in the expander may lead to more pruning.
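@PP
A small numeric sketch (invented names, not KHE code) makes the zero
net effect concrete:  the minimum option cost is subtracted from every
option and added to the shared accumulator, so every total is unchanged
while the shared lower bound rises:
```c
#include <assert.h>

/* Subtract the minimum option cost (the array is assumed sorted into
   non-decreasing order, so it is costs[0]) from every option, and add
   it to the shared expander cost.  Every option's total (its own cost
   plus the expander cost) is unchanged, but the expander's cost rises,
   which can trigger pruning earlier.  Returns the new expander cost. */
static long MoveUnavoidableCost(long *costs, int count, long expander_cost)
{
  int i;  long movable;
  if( count > 0 )
  {
    movable = costs[0];
    for( i = 0;  i < count;  i++ )
      costs[i] -= movable;
    expander_cost += movable;
  }
  return expander_cost;
}
```
With option costs {3, 5, 7} and expander cost 10, the result is option
costs {0, 2, 4} and expander cost 13; every sum is 13, 15, or 17 as before.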
@PP
So much for starting off an expansion.  Here is the function,
called at the end of expansion, for clearing away the fields
used by the expansion:
@ID @C {
void KheDrsResourceExpandEnd(KHE_DRS_RESOURCE dr, KHE_DRS_EXPANDER de)
{
  KHE_DRS_MTASK_SOLN dms;

  /* mark the resource as not involved in any expansion */
  dr->expand_role = KHE_DRS_RESOURCE_EXPAND_NO;

  /* clear out the dominance test cache */
  if( USE_DOM_CACHING )
    KheDrsDim2TableClear(dr->expand_dom_test_cache, de->solver);

  /* clear signatures */
  KheDrsResourceClearExpandSignatures(dr, de->solver);

  /* clear assignments to mtasks */
  while( HaArrayCount(dr->expand_mtask_solns) > 0 )
  {
    dms = HaArrayLastAndDelete(dr->expand_mtask_solns);
    KheDrsMTaskSolnFree(dms, de->solver);
  }
  dr->expand_free_mtask_soln = NULL;
}
}
This puts these fields into the state assumed at the start of
the next expansion.
@End @SubSubAppendix

@SubSubAppendix
    @Title { Initializing days, shifts, mtasks, and tasks for expansion }
    @Tag { dynamic_impl.expansion.shift_setup }
@Begin
@LP
This section presents the code for setting up days, shifts, mtasks,
and tasks for expansion.
@PP
Type @C { KHE_DRS_TASK } has one field relevant to expansion:
@ID @C {
struct khe_drs_task_rec {
  ...
  KHE_DRS_TASK_EXPAND_ROLE		expand_role;
};
}  
Its type is
@ID @C {
typedef enum {
  KHE_DRS_TASK_EXPAND_NO_VALUE,
  KHE_DRS_TASK_EXPAND_FIXED,
  KHE_DRS_TASK_EXPAND_MUST,
  KHE_DRS_TASK_EXPAND_FREE
} KHE_DRS_TASK_EXPAND_ROLE;
}
This defines the role that this task (call it @M { t }) has in
the current expansion, as follows.
@PP
@C { KHE_DRS_TASK_EXPAND_NO_VALUE }:  there is no expansion in progress,
or there is one but @M { t } does not participate in it.
@PP
@C { KHE_DRS_TASK_EXPAND_FIXED }:  there is an expansion in progress,
@M { t } participates in it, and @M { t } is a @I { fixed task }:  it
must be assigned a specific resource.  Usually this will be because
@M { t } is a multi-day task and it was assigned that resource on its
first day.  For the full story, consult @C { KheDrsResourceOnDayIsFixed }
(Appendix {@NumberOf dynamic_impl.expansion.resource_setup}).
@PP
@C { KHE_DRS_TASK_EXPAND_MUST }:  there is an expansion in progress,
@M { t } participates in it, and @M { t } is a @I { must-assign task }:
although it is not fixed to any particular resource, it must be
assigned some resource, since the cost of leaving it unassigned is
too great.  For example, it is common for some tasks to be subject to
hard constraints that require them to be assigned, and those would
always become must-assign tasks in practice, except when they are fixed.
@PP
@C { KHE_DRS_TASK_EXPAND_FREE }:  none of the other cases applies; an
expansion is in progress, @M { t } participates in it, but @M { t }
need not be assigned a resource, specific or otherwise.
@PP
Type @C { KHE_DRS_MTASK } has two relevant fields:
@ID @C {
struct khe_drs_mtask_rec {
  ...
  int				expand_must_assign_count;
  int				expand_prev_unfixed;
};
}
Here @C { expand_must_assign_count } is the mtask's number of
must-assign tasks:  the number of tasks whose @C { expand_role }
is @C { KHE_DRS_TASK_EXPAND_MUST }.  The other field works as follows.
@PP
The mtask can be requested to give away one of its unfixed
tasks (one whose @C { expand_role } is @C { KHE_DRS_TASK_EXPAND_MUST }
or @C { KHE_DRS_TASK_EXPAND_FREE }) for assignment.  This is what a
call to @C { KheDrsMTaskAcceptResourceBegin } does:
@ID {0.95 1.0} @Scale @C {
bool KheDrsMTaskAcceptResourceBegin(KHE_DRS_MTASK dmt,
  KHE_DRS_RESOURCE_ON_DAY drd, KHE_DRS_TASK_ON_DAY *dtd)
{
  KHE_DRS_TASK dt;  int i, count;

  count = HaArrayCount(dmt->unassigned_tasks);
  for( i = dmt->expand_prev_unfixed + 1;  i < count;  i++ )
  {
    dt = HaArray(dmt->unassigned_tasks, i);
    if( dt->expand_role != KHE_DRS_TASK_EXPAND_FIXED )
    {
      if( !KheDrsTaskRunningOnDay(dt, drd->day, dtd) )
	HnAbort("KheDrsMTaskAcceptResourceBegin internal error");
      dmt->expand_prev_unfixed = i;
      return true;
    }
  }

  /* if we get here we've failed to identify a suitable task */
  return *dtd = NULL, false;
}
}
It searches forwards along its @C { unassigned_tasks } array for
the next unfixed task.  If it finds such a task, it increases
@C { dmt->expand_prev_unfixed } to its index, sets @C { *dtd }
to the relevant task on day, and returns @C { true }.  Otherwise
it sets @C { *dtd } to @C { NULL } and returns @C { false }.
@PP
When expansion no longer needs a task that it previously successfully
requested using @C { KheDrsMTaskAcceptResourceBegin }, it
calls @C { KheDrsMTaskAcceptResourceEnd } to return it:
@ID {0.92 1.0} @Scale @C {
void KheDrsMTaskAcceptResourceEnd(KHE_DRS_MTASK dmt,
  KHE_DRS_TASK_ON_DAY dtd)
{
  KHE_DRS_TASK dt;  int i;
  HnAssert(dmt->expand_prev_unfixed < HaArrayCount(dmt->unassigned_tasks) &&
    dtd->encl_dt == HaArray(dmt->unassigned_tasks, dmt->expand_prev_unfixed),
    "KheDrsMTaskAcceptResourceEnd internal error");

  /* search backwards for next unfixed task, or start of array */
  for( i = dmt->expand_prev_unfixed - 1;  i >= 0;  i-- )
  {
    dt = HaArray(dmt->unassigned_tasks, i);
    if( dt->expand_role != KHE_DRS_TASK_EXPAND_FIXED )
      break;
  }
  dmt->expand_prev_unfixed = i;
}
}
This reduces @C { dmt->expand_prev_unfixed } to the index of the
previous unfixed task, or to @C { -1 } (also the initial value of
@C { dmt->expand_prev_unfixed }) when there is no previous unfixed
task.  The mtask does not need to record which tasks it has
given away, because it gives them away in the order they appear in
@C { unassigned_tasks }, and receives them back in reverse order.
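@PP
This single-index discipline can be illustrated in isolation.  The
following sketch (invented names, not KHE code; a plain @C { fixed }
flag array stands in for the fixed role) gives tasks away in increasing
index order and takes them back in reverse order, tracking the state
with one index:
```c
#include <assert.h>
#include <stdbool.h>

/* A minimal model of the single-index handout discipline.  Tasks are
   given away in increasing index order and returned in the reverse
   order, so one index (prev_unfixed, initially -1) suffices. */
typedef struct {
  const bool *fixed;  /* fixed[i] is true when task i cannot be given away */
  int count;
  int prev_unfixed;   /* index of the most recently given task, or -1 */
} HANDOUT;

/* give away the next unfixed task; return its index, or -1 when none */
static int HandoutBegin(HANDOUT *h)
{
  int i;
  for( i = h->prev_unfixed + 1;  i < h->count;  i++ )
    if( !h->fixed[i] )
      return h->prev_unfixed = i;
  return -1;
}

/* take back the most recently given task */
static void HandoutEnd(HANDOUT *h)
{
  int i;
  for( i = h->prev_unfixed - 1;  i >= 0;  i-- )
    if( !h->fixed[i] )
      break;
  h->prev_unfixed = i;
}
```
The stack-like order is what makes this safe:  a return always undoes
the most recent handout, so the index moves back to the previous unfixed
task, or to -1 when there is none.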
@PP
In @C { KHE_DRS_SHIFT } there are two relevant fields:
@ID @C {
struct khe_drs_shift_rec {
  ...
  int			expand_must_assign_count;
  int			expand_max_included_free_resource_count;
};
}
Field @C { expand_must_assign_count } holds the total number of
must-assign tasks lying within the shift's mtasks.  Field
@C { expand_max_included_free_resource_count } holds the maximum
number of free resources (open resources not fixed to a
particular task during this expansion) that can be assigned to
tasks of this shift without leaving too few free resources
available to cover the must-assign tasks of other shifts.
Importantly, @C { expand_must_assign_count } excludes
fixed tasks, and @C { expand_max_included_free_resource_count }
excludes fixed resources.
@PP
All these fields are initialized at the start of expansion
by the call to
@ID {0.95 1.0} @Scale -1px @Break @C {
void KheDrsDayExpandBegin(KHE_DRS_DAY next_day, KHE_DRS_SOLN prev_soln,
  KHE_DRS_DAY prev_day, KHE_DRS_EXPANDER de)
{
  int i, excess;  KHE_DRS_SHIFT ds;

  /* begin caching expansions */
  KheDrsSolnSetBeginCacheSegment(next_day->soln_set, de->solver);

  /* begin expansion in each shift */
  HaArrayForEach(next_day->shifts, ds, i)
    KheDrsShiftExpandBegin(ds, prev_soln, prev_day, de);

  /* set expand_max_included_free_resource_count in each shift */
  excess = KheDrsExpanderExcessFreeResourceCount(de);
  HaArrayForEach(next_day->shifts, ds, i)
    ds->expand_max_included_free_resource_count =
      ds->expand_must_assign_count + excess;
}
}
This tells @C { next_day->soln_set } that an expansion is
beginning, so that it can initialize its cache if desired.  Then it
calls @C { KheDrsShiftExpandBegin } (see below) for each shift of
@C { next_day }.  The last part sets
@C { expand_max_included_free_resource_count } in each shift
@C { ds } to hold the maximum number of free resources that can be
assigned to tasks of @C { ds } without leaving too few free resources
available for the must-assign tasks of other shifts:  the excess
returned by @C { KheDrsExpanderExcessFreeResourceCount } is the
number of free resources beyond the total number of must-assign
tasks, and a shift may absorb its own must-assign tasks plus
that excess without starving the other shifts.
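@PP
The arithmetic can be checked with a small sketch (invented names, not
KHE code).  If more than @M { cap(s) } free resources went to shift
@M { s }, then fewer free resources than the other shifts' must-assign
tasks would remain:
```c
#include <assert.h>

/* Compute the maximum number of free resources that may be assigned to
   shift s without starving the must-assign tasks of the other shifts:
     cap(s) = must(s) + (free_count - total_must)
   since assigning more than cap(s) to s would leave fewer than
   total_must - must(s) free resources for the other shifts. */
static int MaxIncludedFreeResources(const int *must, int shift_count,
  int s, int free_count)
{
  int i, total_must = 0;
  for( i = 0;  i < shift_count;  i++ )
    total_must += must[i];
  return must[s] + (free_count - total_must);
}
```
For example, with must-assign counts {2, 1, 3} and 8 free resources,
the excess is 2, so the caps are 4, 3, and 5 respectively.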
@PP
At the end of the expansion the opposite function is called:
@ID @C {
void KheDrsDayExpandEnd(KHE_DRS_DAY next_day, KHE_DRS_EXPANDER de)
{
  int i;  KHE_DRS_SHIFT ds;

  /* end expansion in each shift */
  HaArrayForEach(next_day->shifts, ds, i)
    KheDrsShiftExpandEnd(ds, de);

  /* end caching expansions */
  KheDrsSolnSetEndCacheSegment(next_day->soln_set, next_day, de, de->solver);
}
}
This tells @C { next_day }'s shifts and solution set that the
current expansion is ending.
@PP
Here is how one shift is initialized for expansion:
@ID @C {
void KheDrsShiftExpandBegin(KHE_DRS_SHIFT ds, KHE_DRS_SOLN prev_soln,
  KHE_DRS_DAY prev_day, KHE_DRS_EXPANDER de)
{
  KHE_DRS_MTASK dmt;  int i;
  ds->expand_must_assign_count = 0;
  ds->expand_max_included_free_resource_count = 0;
  HaArrayForEach(ds->open_mtasks, dmt, i)
    KheDrsMTaskExpandBegin(dmt, prev_soln, prev_day, de);
}
}
This begins by initializing the two expansion fields to placeholder
values.  We'll see shortly how @C { ds->expand_must_assign_count }
receives its true value, and we have already seen, just above, how
@C { ds->expand_max_included_free_resource_count } receives
its true value.  The function then tells each mtask that an
expansion is beginning.  The function for ending expansion is
@ID @C {
void KheDrsShiftExpandEnd(KHE_DRS_SHIFT ds, KHE_DRS_EXPANDER de)
{
  KHE_DRS_MTASK dmt;  int i;
  HaArrayForEach(ds->open_mtasks, dmt, i)
    KheDrsMTaskExpandEnd(dmt, de);
  KheDrsShiftSolnTrieFree(ds->soln_trie, de->solver);
  ds->soln_trie = NULL;
}
}
This tells each mtask that the expansion is ending.  Elsewhere
we present a second function for initializing shifts for expansion, one
which initializes the shift solution tries used by expand by shifts.
@C { KheDrsShiftSolnTrieFree }
(Appendix {@NumberOf dynamic_impl.solns.shift_soln_tries})
removes these tries.
@PP
For informing an mtask that expansion is beginning, the code is
@ID @C {
void KheDrsMTaskExpandBegin(KHE_DRS_MTASK dmt,
  KHE_DRS_SOLN prev_soln, KHE_DRS_DAY prev_day, KHE_DRS_EXPANDER de)
{
  KHE_DRS_TASK dt;  int i;
  dmt->expand_must_assign_count = 0;
  HaArrayForEach(dmt->unassigned_tasks, dt, i)
    if( dt->expand_role != KHE_DRS_TASK_EXPAND_FIXED )
    {
      if( KheDrsExpanderOpenToExtraCost(de, dt->non_asst_cost) )
	dt->expand_role = KHE_DRS_TASK_EXPAND_FREE;
      else
      {
	/* dt must be assigned, otherwise cost will be too high */
	dt->expand_role = KHE_DRS_TASK_EXPAND_MUST;
	dmt->expand_must_assign_count++;
	dmt->encl_shift->expand_must_assign_count++;
	KheDrsExpanderAddMustAssign(de);
      }
    }
}
}
This visits each unassigned unfixed task of @C { dmt }, setting its
@C { expand_role } field, and ensuring that the
@C { expand_must_assign_count } fields of the enclosing mtask
and shift hold the correct total number of must-assign tasks, and
that @C { de } holds the number of must-assign tasks in all shifts.
@PP
This code assumes that @C { KheDrsResourceExpandBegin } has already
been called for each open resource, so that fixed tasks have already
been discovered and had their @C { expand_role } fields set.  It is
done this way because the solver's data structures are much better
at deciding whether a given open resource is fixed, and if so which
task it is fixed to, than they are at deciding whether a given task
is fixed, and if so which resource it is fixed to.
@PP
After expansion is complete, the opposite function is called:
@ID @C {
void KheDrsMTaskExpandEnd(KHE_DRS_MTASK dmt, KHE_DRS_EXPANDER de)
{
  KHE_DRS_TASK dt;  int i;
  HaArrayForEach(dmt->unassigned_tasks, dt, i)
    dt->expand_role = KHE_DRS_TASK_EXPAND_NO_VALUE;
}
}
There is no expansion now so all roles are @C { KHE_DRS_TASK_EXPAND_NO_VALUE }.
@End @SubSubAppendix

@SubSubAppendix
    @Title { Expansion by resources }
    @Tag { dynamic_impl.expansion.by_resources }
@Begin
@LP
Expansion by resources is carried out by function
@C { KheDrsSolnExpandByResources }:
@ID {0.95 1.0} @Scale @C {
void KheDrsSolnExpandByResources(KHE_DRS_SOLN prev_soln,
  KHE_DRS_DAY prev_day, KHE_DRS_DAY next_day, KHE_DRS_EXPANDER de,
  KHE_DRS_RESOURCE_SET free_resources, int free_resources_index)
{
  KHE_DRS_RESOURCE dr;  int i;  KHE_DRS_MTASK_SOLN dms;

  if( free_resources_index >= KheDrsResourceSetCount(free_resources) )
  {
    /* de has enough mtask solns to make into a soln and evaluate */
    KheDrsExpanderMakeAndMeldSoln(de, prev_soln, next_day);
  }
  else
  {
    dr = KheDrsResourceSetResource(free_resources, free_resources_index);
    HaArrayForEach(dr->expand_mtask_solns, dms, i)
      KheDrsMTaskSolnExpandByResources(dms, prev_soln, prev_day,
	next_day, de, free_resources, free_resources_index);
  }
}
}
This expands @C { prev_soln } into @C { next_day } in all possible
ways, assuming that all possibilities have been explored for the free
resources of @C { free_resources } whose indexes in @C { free_resources }
are less than @C { free_resources_index }.
@PP
If @C { free_resources_index >= KheDrsResourceSetCount(free_resources) },
the expander has a current assignment for each open resource, so
@C { KheDrsExpanderMakeAndMeldSoln }
(Appendix {@NumberOf dynamic_impl.expansion.expanders}) is called
to make a day solution from those assignments and possibly add it to
@C { next_day }'s solution set.  Otherwise, the code finds the
next unassigned free resource @C { dr }, and for each of @C { dr }'s
mtask solution objects that @C { KheDrsResourceExpandBegin } created
previously, it continues the recursion using that assignment:
@ID {0.94 1.0} @Scale @C {
void KheDrsMTaskSolnExpandByResources(KHE_DRS_MTASK_SOLN dms,
  KHE_DRS_SOLN prev_soln, KHE_DRS_DAY prev_day, KHE_DRS_DAY next_day,
  KHE_DRS_EXPANDER de, KHE_DRS_RESOURCE_SET free_resources,
  int free_resources_index)
{
  KHE_DRS_TASK_ON_DAY dtd;  KHE_DRS_MTASK dmt;  KHE_DRS_TASK_SOLN dts;
  dmt = dms->mtask;
  if( dmt != NULL )
  {
    /* select a task from dms->mtask and assign it */
    if( KheDrsMTaskAcceptResourceBegin(dmt, dms->resource_on_day, &dtd) )
    {
      dts = KheDrsTaskSolnMake(dms, dtd);
      KheDrsTaskSolnExpandByResources(dts, prev_soln, prev_day, next_day,
	de, free_resources, free_resources_index);
      KheDrsMTaskAcceptResourceEnd(dmt, dtd);
    }
  }
  else
  {
    /* use dms->fixed_task_on_day, possibly NULL meaning a free day */
    dts = KheDrsTaskSolnMake(dms, dms->fixed_task_on_day);
    KheDrsTaskSolnExpandByResources(dts, prev_soln, prev_day, next_day,
      de, free_resources, free_resources_index);
  }
}
}
If @C { dmt != NULL }, the assignment is to an unspecified task of
mtask @C { dmt }.  It is now time to choose a specific task, which
is done by the calls to @C { KheDrsMTaskAcceptResourceBegin }
and @C { KheDrsMTaskAcceptResourceEnd }.
The call to @C { KheDrsTaskSolnExpandByResources } carries
on the recursion using the task solution built
from @C { dms } and the task on day object returned by a
successful call to @C { KheDrsMTaskAcceptResourceBegin }.
@PP
If @C { dmt == NULL }, the assignment is to
@C { dms->fixed_task_on_day }, a specific task on day.
@C { KheDrsTaskSolnExpandByResources } is called with
a task solution for this task on day.
@PP
Either way, @C { KheDrsTaskSolnExpandByResources } has a specific
task solution to add to the expander:
@ID {0.95 1.0} @Scale @C {
void KheDrsTaskSolnExpandByResources(KHE_DRS_TASK_SOLN dts,
  KHE_DRS_SOLN prev_soln, KHE_DRS_DAY prev_day, KHE_DRS_DAY next_day,
  KHE_DRS_EXPANDER de, KHE_DRS_RESOURCE_SET free_resources,
  int free_resources_index)
{
  /* save the expander so it can be restored later */
  KheDrsExpanderMarkBegin(de);

  /* add dts to the expander */
  KheDrsExpanderAddTaskSoln(de, dts);

  /* if the expander is still open, recurse */
  if( KheDrsExpanderIsOpen(de) )
    KheDrsSolnExpandByResources(prev_soln, prev_day, next_day, de,
      free_resources, free_resources_index + 1);

  /* restore the expander */
  KheDrsExpanderMarkEnd(de);
}
}
It adds @C { dts } to the expander, then, if the expander is still
open, it makes a recursive call to @C { KheDrsSolnExpandByResources },
moving on to the next free resource.  After that the change
is undone by the call to @C { KheDrsExpanderMarkEnd }.  All very
simple, thanks to the expander.
@End @SubSubAppendix

@SubSubAppendix
    @Title { Expansion by shifts }
    @Tag { dynamic_impl.expansion.by_shifts }
@Begin
@LP
@C { KheDrsSolnExpand } carries out expansion by shifts by
calling @C { KheDrsSolnExpandByShifts }:
@ID {0.95 1.0} @Scale @C {
void KheDrsSolnExpandByShifts(KHE_DRS_SOLN prev_soln,
  KHE_DRS_DAY next_day, KHE_DRS_EXPANDER de,
  KHE_DRS_RESOURCE_SET free_resources, int shift_index)
{
  KHE_DRS_SHIFT ds;  int i;  KHE_DRS_RESOURCE dr;  KHE_DRS_TASK_SOLN dts;
  if( shift_index >= HaArrayCount(next_day->shifts) )
  {
    /* assign a free day to each remaining free resource; first, */
    /* abandon this path if any free resource has no free day asst */
    KheDrsResourceSetForEach(free_resources, dr, i)
      if( dr->expand_free_mtask_soln == NULL )
	return;

    /* save the expander */
    KheDrsExpanderMarkBegin(de);

    /* add free day assignments to the expander */
    KheDrsResourceSetForEach(free_resources, dr, i)
    {
      dts = KheDrsTaskSolnMake(dr->expand_free_mtask_soln, NULL);
      KheDrsExpanderAddTaskSoln(de, dts);
    }

    /* if the expander is still open then make the solution */
    if( KheDrsExpanderIsOpen(de) )
      KheDrsExpanderMakeAndMeldSoln(de, prev_soln, next_day);

    /* restore the expander */
    KheDrsExpanderMarkEnd(de);
  }
  else
  {
    ds = HaArray(next_day->shifts, shift_index);
    KheDrsShiftExpandByShifts(ds, shift_index, prev_soln, next_day,
      de, free_resources);
  }
}
}
Only the free (non-fixed) resources, held in @C { free_resources },
need to be assigned; assignments to all fixed resources have already
been added to @C { de } by @C { KheDrsSolnExpand }.
@PP
When @C { shift_index >= HaArrayCount(next_day->shifts) }, every
shift has already been assigned a set of resources, and the remaining
free resources have to be assigned a free day.
@PP
The first step of this case is to check that these resources have
free day assignments, held in their @C { expand_free_mtask_soln }
fields.  If not, one or more of them has to be assigned some
task and this has not happened, so we've reached a dead end and
the code returns early.  If this were to occur often then a lot
of running time would be wasted, but in fact it happens only rarely.
@PP
The next step is to mark the expander, then update it with the
free day assignments.  Then, if the expander is still open,
@C { KheDrsExpanderMakeAndMeldSoln }
(Appendix {@NumberOf dynamic_impl.expansion.expanders}) is called
to make a solution from the current set of assignments and add it,
with dominance testing, to @C { next_day }'s solution set.  Then
the assignments are removed and the expander state is restored.
@PP
When @C { shift_index < HaArrayCount(next_day->shifts) }, we still
have shifts to assign sets of free resources to, beginning with the
shift whose index in @C { next_day } is @C { shift_index }.  We set
@C { ds } to this shift and call @C { KheDrsShiftExpandByShifts }:
@ID @C {
void KheDrsShiftExpandByShifts(KHE_DRS_SHIFT ds, int shift_index,
  KHE_DRS_SOLN prev_soln, KHE_DRS_DAY next_day, KHE_DRS_EXPANDER de,
  KHE_DRS_RESOURCE_SET free_resources)
{
  KHE_DRS_RESOURCE_SET omitted_resources;
  if( ds->soln_trie != NULL )
  {
    omitted_resources = KheDrsResourceSetMake(de->solver);
    KheDrsShiftSolnTrieExpandByShifts(ds->soln_trie, ds, shift_index,
      prev_soln, next_day, de, free_resources, 0, 0, omitted_resources);
    KheDrsResourceSetFree(omitted_resources, de->solver);
  }
}
}
This is a wrapper for @C { KheDrsShiftSolnTrieExpandByShifts }.
It creates a set of resource objects, @C { omitted_resources },
for passing to @C { KheDrsShiftSolnTrieExpandByShifts }.
@PP
Before we present @C { KheDrsShiftSolnTrieExpandByShifts } we'll
examine its header:
@ID {0.95 1.0} @Scale @C {
void KheDrsShiftSolnTrieExpandByShifts(KHE_DRS_SHIFT_SOLN_TRIE dsst,
  KHE_DRS_SHIFT ds, int shift_index, KHE_DRS_SOLN prev_soln,
  KHE_DRS_DAY next_day, KHE_DRS_EXPANDER de,
  KHE_DRS_RESOURCE_SET free_resources, int free_index,
  int selected_resource_count, KHE_DRS_RESOURCE_SET omitted_resources);
}
We'll refer to the @I { available resources }, meaning the elements
of @C { free_resources }, but only those whose index in
@C { free_resources } is @C { free_index } or larger.
@PP
Three conditions restrict the values of the parameters.  First,
@C { ds } is the shift beginning on @C { next_day } whose index
in @C { next_day } is @C { shift_index }.  Second, @C { dsst }
is the subtrie that is reached using the indexes of the
@I { selected resources }, that is, the resources selected
so far for this shift.  We don't keep track of those resources
explicitly (although we could), because @C { dsst } itself does
that sufficiently for our purposes.  Third, the available
resources as just defined, the selected resources as just
defined, and the omitted resources are pairwise disjoint
and their union contains every free resource.  The reader can
verify that these conditions hold for the initial call to
@C { KheDrsShiftSolnTrieExpandByShifts }, the one from within
@C { KheDrsShiftExpandByShifts }.
@PP
@C { KheDrsShiftSolnTrieExpandByShifts } must assign the
selected resources to @C { ds }, but it is also free to
assign any subset of the available resources to @C { ds }.
It tries each subset @M { R } of the free resources that
satisfies these conditions, and for each undominated shift
solution held in the trie node for @M { R }, it recurses to
assign the remaining shifts:
@ID {0.95 1.0} @Scale @C {
void KheDrsShiftSolnTrieExpandByShifts(KHE_DRS_SHIFT_SOLN_TRIE dsst,
  KHE_DRS_SHIFT ds, int shift_index, KHE_DRS_SOLN prev_soln,
  KHE_DRS_DAY next_day, KHE_DRS_EXPANDER de,
  KHE_DRS_RESOURCE_SET free_resources, int free_index,
  int selected_resource_count, KHE_DRS_RESOURCE_SET omitted_resources)
{
  int i, avail_resource_count;  KHE_DRS_SHIFT_SOLN dss;
  KHE_DRS_SHIFT_SOLN_TRIE child_dsst;  KHE_DRS_RESOURCE dr;
  avail_resource_count =
    KheDrsResourceSetCount(free_resources) - free_index;
  if( avail_resource_count <= 0 )
  {
    /* resources all done, so try each solution in dsst */
    HaArrayForEach(dsst->shift_solns, dss, i)
      KheDrsShiftSolnExpandByShifts(dss, ds, shift_index, prev_soln,
	next_day, de, omitted_resources);
  }
  else
  {
    /* try solutions that select dr, the next available resource */
    dr = KheDrsResourceSetResource(free_resources, free_index);
    child_dsst = HaArray(dsst->children, dr->open_resource_index);
    if( child_dsst != NULL )
      KheDrsShiftSolnTrieExpandByShifts(child_dsst, ds, shift_index,
        prev_soln, next_day, de, free_resources, free_index + 1,
	selected_resource_count + 1, omitted_resources);

    /* try solutions that do not select dr, staying in dsst */
    if( selected_resource_count + avail_resource_count >
	  ds->expand_must_assign_count )
    {
      KheDrsResourceSetAddLast(omitted_resources, dr);
      KheDrsShiftSolnTrieExpandByShifts(dsst, ds, shift_index,
        prev_soln, next_day, de, free_resources, free_index + 1,
	selected_resource_count, omitted_resources);
      KheDrsResourceSetDeleteLast(omitted_resources);
    }
  }
}
}
If @C { avail_resource_count <= 0 }, then all available resources
are selected or omitted, and it is time to try the shift assignments
of @C { dsst->shift_solns }.  So for each of them we call
@C { KheDrsShiftSolnExpandByShifts }.  We'll return to that shortly.
@PP
Otherwise, we need to try subsets @M { R } that include the next
available resource @C { dr }, and also subsets @M { R } that omit it.
For the first we have
@ID {0.95 1.0} @Scale @C {
/* try solutions that select dr, the next available resource */
dr = KheDrsResourceSetResource(free_resources, free_index);
child_dsst = HaArray(dsst->children, dr->open_resource_index);
if( child_dsst != NULL )
  KheDrsShiftSolnTrieExpandByShifts(child_dsst, ds, shift_index,
    prev_soln, next_day, de, free_resources, free_index + 1,
    selected_resource_count + 1, omitted_resources);
}
This accesses the child @C { child_dsst } whose index is the next
available resource's open resource index.  If @C { child_dsst != NULL },
we recurse, adding @C { dr } implicitly to the set of selected
resources by utilizing the child node representing it, and removing
@C { dr } explicitly from the available resources by increasing
@C { free_index }.  For omitting @C { dr } the code is
@ID {0.95 1.0} @Scale @C {
/* try solutions that do not select dr, staying in dsst */
if( selected_resource_count + avail_resource_count >
      ds->expand_must_assign_count )
{
  KheDrsResourceSetAddLast(omitted_resources, dr);
  KheDrsShiftSolnTrieExpandByShifts(dsst, ds, shift_index,
    prev_soln, next_day, de, free_resources, free_index + 1,
    selected_resource_count, omitted_resources);
  KheDrsResourceSetDeleteLast(omitted_resources);
}
}
This adds @C { dr } to the set of omitted resources, and again
recurses with @C { free_index } incremented to remove @C { dr }
from the set of available resources.  It recurses on the same
trie node, @C { dsst }.
@PP
Next comes @C { KheDrsShiftSolnExpandByShifts }, called when all
available resources have either been selected or omitted, and
we have reached a specific shift solution @C { dss }:
@ID {0.95 1.0} @Scale -0.5px @Break @C {
void KheDrsShiftSolnExpandByShifts(KHE_DRS_SHIFT_SOLN dss,
  KHE_DRS_SHIFT ds, int shift_index, KHE_DRS_SOLN prev_soln,
  KHE_DRS_DAY next_day, KHE_DRS_EXPANDER de,
  KHE_DRS_RESOURCE_SET omitted_resources)
{
  KHE_DRS_SHIFT_SOLN dss2;  int i;
  if( dss->skip_count == 0 )
  {
    /* save the expander */
    KheDrsExpanderMarkBegin(de);

    /* add the assignments stored in dss to the expander */
    KheDrsExpanderAddTaskSolnSet(de, dss->task_solns, NULL);

    /* if the expander is still open, recurse */
    if( KheDrsExpanderIsOpen(de) )
    {
      HaArrayForEach(dss->skip_assts, dss2, i)
	dss2->skip_count++;
      KheDrsSolnExpandByShifts(prev_soln, next_day, de, omitted_resources,
	shift_index + 1);
      HaArrayForEach(dss->skip_assts, dss2, i)
	dss2->skip_count--;
    }

    /* restore the expander */
    KheDrsExpanderMarkEnd(de);
  }
}
}
We mark the expander, add @C { dss }'s task solutions, recurse on the
next shift (with index @C { shift_index + 1 }) if the expander is
still open, then restore the expander.  The free resources for the
next shift are the omitted resources for this shift.  We made sure
previously that @C { dss } contains no fixed assignments, which is
just as well, because all fixed assignments were added to the
expander before expansion by shifts began.
@PP
This code also implements shift pair dominance by not expanding
using shift solutions for which @C { skip_count > 0 }, and by
incrementing the appropriate @C { skip_count } fields while
@C { dss } is in use.  I should look into whether the expander
could take over this work, by offering an operation to add a
shift solution, not just a shift solution's task solution set.
@End @SubSubAppendix

@EndSubSubAppendices
@End @SubAppendix

@SubAppendix
    @Title { Sets of solutions }
    @Tag { dynamic_impl.sets }
@Begin
@LP
# Dominance testing is time-consuming, and optimizing it is
# likely to pay off handsomely.  But we know of no optimizations
# of function @C { KheDrsSolnDominates } itself.  So instead, we
# try to reduce the number of times we call it.
# @PP
A @I { solution set } is a set of undominated solutions @M { P sub k }
for some day @M { d sub k }.  Type @C { KHE_DRS_SOLN_SET }
represents a solution set in the implementation.
@PP
As mere collections of solutions, solution sets should be very
simple.  However, there are three complications.  First, the
operation for adding a new solution @M { x } to a solution set
has to check for dominance relationships between @M { x } and
the other solutions.  This involves three steps:
@NumberedList

@LI {
Check whether @M { P sub k } contains a solution @M { y } that
dominates @M { x }.  If so, delete @M { x } and stop;
}

@LI {
Remove from @M { P sub k } and delete all solutions @M { y } such that
@M { x } dominates @M { y };
}

@LI {
Add @M { x } to @M { P sub k }.
}

@EndList
Since this is not a normal add-to-collection operation, we call it
a @I { meld }.  Second, because melding is potentially slow, the
solver offers alternative kinds of dominance testing, including
alternative collection data structures.  And third, there is the
option of @I { caching }, which involves having two collections,
the @I { main solution set } holding most of the solutions, and
a @I { cache solution set } holding a smaller number of recently
inserted solutions.
@PP
In this section we work bottom-up through a variety of collection
data structures that combine to implement all these variations of
the basic idea within type @C { KHE_DRS_SOLN_SET }.
@BeginSubSubAppendices

@SubSubAppendix
    @Title { Solution lists }
    @Tag { dynamic_impl.sets.lists }
@Begin
@LP
Type @C { KHE_DRS_SOLN_LIST } defines a simple list of
solutions, stored in an array:
@ID @C {
typedef struct khe_drs_soln_list_rec *KHE_DRS_SOLN_LIST;
typedef HA_ARRAY(KHE_DRS_SOLN_LIST) ARRAY_KHE_DRS_SOLN_LIST;
typedef HP_TABLE(KHE_DRS_SOLN_LIST) TABLE_KHE_DRS_SOLN_LIST;

struct khe_drs_soln_list_rec {
  KHE_DYNAMIC_RESOURCE_SOLVER	solver;
  ARRAY_KHE_DRS_SOLN		solns;
};
}
This is essentially a simple array of day solution objects.
@PP
One type of dominance testing, called medium dominance,
requires a hash table whose elements are solution lists,
and that is what @C { TABLE_KHE_DRS_SOLN_LIST } provides.
It uses the @C { HP_TABLE } type definition macro from
Appendix {@NumberOf modules}.  This hash table also needs
access to a solver object, or something similar, since
its hash function uses a signer to determine which elements
of the signature to hash.  If these hash tables were removed
from the implementation (as could easily be done, since they
are obsolete now) the @C { solver } field could be deleted.
@PP
Function @C { KheDrsSolnListMake } makes a new, empty solution
list;  @C { KheDrsSolnListFree } frees a solution list without
freeing its solutions; and @C { KheDrsSolnListFreeSolns } frees
the solutions of a given solution list, without freeing the
solution list object.
@PP
A more interesting operation is
@ID @C {
void KheDrsSolnListGather(KHE_DRS_SOLN_LIST soln_list,
  KHE_DRS_SOLN_LIST res)
{
  int i;
  HaArrayAppend(res->solns, soln_list->solns, i);
}
}
Every data structure holding a collection of solutions
has such a `gather' operation.  It adds the collection's
solutions (here @C { soln_list }) to a
given solution list (here @C { res }).  This is how we
extract the solutions from complex data structures
(tries, etc.):  we gather them into a solution list.
@PP
Here is another operation found in all collection types.  It decides
whether solution list @C { soln_list } dominates @C { soln },
by which we mean contains a solution which dominates @C { soln }:
@ID @C {
bool KheDrsSolnListDominates(KHE_DRS_SOLN_LIST soln_list,
  KHE_DRS_SOLN soln, KHE_DRS_SIGNER_SET signer_set,
  int *dom_test_count, KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  KHE_DRS_SOLN other_soln;  int i;
  HaArrayForEach(soln_list->solns, other_soln, i)
    if( KheDrsSolnDominates(other_soln, soln, signer_set,
        dom_test_count, drs, 0, 0, NULL) )
      return true;
  return false;
}
}
This implements Step 1 of the meld operation when the collection
is a solution list, except that deleting and freeing @C { soln }
when the result is @C { true } is left to the caller.
@PP
For Step 2 of the meld operation we have
@ID @C {
void KheDrsSolnListRemoveDominated(KHE_DRS_SOLN_LIST soln_list,
  KHE_DRS_SOLN soln, KHE_DRS_SIGNER_SET signer_set,
  KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  KHE_DRS_SOLN other_soln;  int i, dom_test_count;
  dom_test_count = 0;
  HaArrayForEach(soln_list->solns, other_soln, i)
    if( KheDrsSolnNotExpanded(other_soln) && KheDrsSolnDominates(soln,
	  other_soln, signer_set, &dom_test_count, drs, 0, 0, NULL) )
    {
      KheDrsPriQueueDeleteSoln(drs, other_soln);
      KheDrsSolnFree(other_soln, drs);
      HaArrayDeleteAndPlug(soln_list->solns, i);
      i--;
    }
}
}
This deletes and frees all elements of @C { soln_list } that are
dominated by @C { soln }.  The calls to @C { KheDrsSolnNotExpanded }
and @C { KheDrsPriQueueDeleteSoln } will be explained later; they
are needed when the priority queue is in use, and do nothing
when it isn't.
@PP
For Step 3 of the meld operation, simply adding a solution to a
solution list, we have
@ID @C {
void KheDrsSolnListAddSoln(KHE_DRS_SOLN_LIST soln_list,
  KHE_DRS_SOLN soln, KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  HaArrayAddLast(soln_list->solns, soln);
  KheDrsPriQueueAddSoln(drs, soln);
}
}
Again, @C { KheDrsPriQueueAddSoln } does nothing if there is
no priority queue.
@PP
The next operation sorts a solution list by increasing solution
cost, and optionally deletes and frees the most expensive solutions,
depending on an option:
@IndentedList

@LI {0.95 1.0} @Scale @C {
int KheDrsSolnCmp(const void *t1, const void *t2)
{
  KHE_DRS_SOLN soln1 = * (KHE_DRS_SOLN *) t1;
  KHE_DRS_SOLN soln2 = * (KHE_DRS_SOLN *) t2;
  return KheCostCmp(KheDrsSolnCost(soln1), KheDrsSolnCost(soln2));
}
}

@LI {0.95 1.0} @Scale @C {
void KheDrsSolnListSortAndReduce(KHE_DRS_SOLN_LIST soln_list,
  KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  KHE_DRS_SOLN soln;  int i;
  HaArraySort(soln_list->solns, &KheDrsSolnCmp);
  if( drs->solve_daily_expand_limit > 0 )
    while( HaArrayCount(soln_list->solns) > drs->solve_daily_expand_limit )
    {
      soln = HaArrayLastAndDelete(soln_list->solns);
      KheDrsPriQueueDeleteSoln(drs, soln);
      KheDrsSolnFree(soln, drs);
    }
}
}

@EndList
As usual, dropping solutions is a heuristic that gives up
any optimality guarantee.
@PP
The remaining solution list operations are mostly hash code
calculations for the hash table, and debug functions.  There
is one exception, however:
@ID @C {
void KheDrsSolnListExpand(KHE_DRS_SOLN_LIST soln_list,
  KHE_DRS_DAY prev_day, KHE_DRS_DAY next_day,
  KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  KHE_DRS_SOLN prev_soln;  int i;
  HaArrayForEach(soln_list->solns, prev_soln, i)
    KheDrsSolnExpand(prev_soln, prev_day, next_day, drs);
}
}
To build @M { P sub {k+1} } from @M { P sub k }, we
traverse @M { P sub k } and expand each of its solutions.
@C { KheDrsSolnListExpand } does this when the collection
is held in a solution list.
@End @SubSubAppendix

@SubSubAppendix
    @Title { Other collection data structures }
    @Tag { dynamic_impl.sets.other }
@Begin
@LP
In attempting to speed up dominance testing, the author has tried
two data structures other than a simple list for holding solutions.
This section is a brief tour through these two data structures.
The interest-to-code ratio is rather low, so we show only one
code sample.
@PP
The first data structure is a @I { trie }.  A traditional
trie is a kind of tree data structure for holding objects
retrieved by a key which is a string of characters.  In
the root of the tree is an array of subtrees indexed by
the first character of the key.  For example, all objects
whose key begins with @C { 'a' } might be in the first
subtree, all whose key begins with @C { 'b' } might be
in the second subtree, and so on.  The root of each
subtree also has an array of subtrees, this time indexed
by the second character of the key, and so on.
To retrieve an object by key, use the first character
of the key to find a subtree, then use the second character
to find a sub-subtree, and so on.
@PP
A solution's signature is an array of (usually) small integers,
ideal for tries.  If we store the solutions in a trie we can
easily retrieve by signature.  We don't need to do that, but
when testing for dominance using strong dominance, we may need
all the solutions whose first element is no larger than a given
element, or no smaller.  Tries are excellent for this.
@PP
The implementation has the usual collection operations, for
making and freeing tries, gathering a trie's solutions into a
solution list, and so on.  It all works, but tends not to be
used, because tries do not combine well with tabulated dominance.
@PP
The other non-trivial collection type is
the @I { indexed solution set }, or just @I { indexed set }.
It is an array of solution lists indexed by solution cost.  All
solutions with a given cost appear in the one list, and that
list is accessed by using their common cost as an index in
the array.
@PP
It would not be efficient to index using the cost as is.
Instead, there is a formula for converting a cost to an index:
@ID @C {
index = (cost - base) / increment;
}
Here @C { base } is the smallest cost that occurs in the
set, and @C { increment } is the greatest common divisor of all
the constraint weights.  For example, if all constraint weights
are multiples of 5, then @C { increment } is 5.  The value of
@C { base } is updated whenever a solution whose cost is a new
minimum is added to the set, while @C { increment } is calculated
once and for all when the solver is created.
@PP
Again there are the usual operations for creating, freeing, and
gathering indexed sets, and so on.  Indexed sets work well with
tradeoff dominance and tabulated dominance, because those tests
need access to all solutions whose cost is at most a given cost,
or all solutions whose cost is at least a given cost.  These
can be found by traversing the array positions equal to and
to the left of the index of the given cost, or equal to and
to the right.  As an example, here is the operation for
deciding whether an indexed set @C { iss } contains a
solution which dominates @C { soln }:
@ID @C {
bool KheDrsSoftIndexedSolnSetDominates(
  KHE_DRS_SOFT_INDEXED_SOLN_SET siss, KHE_DRS_SOLN soln,
  KHE_DRS_DAY soln_day, int *dom_test_count,
  KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  int i, pos;  KHE_DRS_SOLN_LIST soln_list;  int soft_cost;
  KheDrsSoftIndexedSolnSetCheck(siss);
  *dom_test_count = 0;
  if( HaArrayCount(siss->soln_lists) == 0 )
    return false;
  else
  {
    soft_cost = KheSoftCost(KheDrsSolnCost(soln));
    pos = (soft_cost - siss->base) / siss->increment;
    if( pos >= HaArrayCount(siss->soln_lists) )
      pos = HaArrayCount(siss->soln_lists) - 1;
    for( i = 0;  i <= pos;  i++ )
    {
      soln_list = HaArray(siss->soln_lists, i);
      if( soln_list != NULL && KheDrsSolnListDominates(soln_list, soln,
	  soln_day->signer_set, dom_test_count, drs) )
	return true;
    }
  }
  return false;
}
}
It only traverses the array positions up to the index of
@C { soln }'s cost in the array, because a solution which
dominates @C { soln } must have a cost which is no larger
than @C { soln->cost }.
@PP
@C { KheDrsSoftIndexedSolnSetDominates } assumes that all costs are
soft costs.  The indexed set data structure is actually a two-level
structure.  At the higher level is an indexed set which deals only
with hard costs (type @C { KHE_DRS_HARD_INDEXED_SOLN_SET }); its
elements are indexed sets which deal only with soft costs
(type @C { KHE_DRS_SOFT_INDEXED_SOLN_SET }).
@End @SubSubAppendix

@SubSubAppendix
    @Title { Solution set parts }
    @Tag { dynamic_impl.sets.parts }
@Begin
@LP
A @I { solution set part } is yet another collection of solutions,
represented by type @C { KHE_DRS_SOLN_SET_PART }.  Why this
is not the same as @C { KHE_DRS_SOLN_SET } is a fair question
that we will have to answer later, given that we are working
bottom-up.
@PP
Type @C { KHE_DRS_SOLN_SET_PART } is an abstract supertype with
a rather large number of concrete subtypes.  The type tag that
distinguishes these subtypes is none other than the public type
@C { KHE_DRS_DOM_KIND }, which we have seen before:
@ID @C {
typedef enum {
  KHE_DRS_DOM_LIST_NONE,
  KHE_DRS_DOM_LIST_SEPARATE,
  KHE_DRS_DOM_LIST_TRADEOFF,
  KHE_DRS_DOM_LIST_TABULATED,
  KHE_DRS_DOM_HASH_EQUALITY,
  KHE_DRS_DOM_HASH_MEDIUM,
  /* KHE_DRS_DOM_TRIE_SEPARATE, */
  /* KHE_DRS_DOM_TRIE_TRADEOFF, */
  KHE_DRS_DOM_INDEXED_TRADEOFF,
  KHE_DRS_DOM_INDEXED_TABULATED
} KHE_DRS_DOM_KIND;
}
Each subtype knows which kind of dominance testing to use, if
any (none, strong, tradeoff, or uniform), and which kind of
data structure to use (a simple list, a hash table, a trie,
or an indexed set).  The two are mixed together in type
@C { KHE_DRS_DOM_KIND } because some data structures are
not compatible with some kinds of dominance testing.
@PP
Type @C { KHE_DRS_SOLN_SET_PART } contains just the type tag:
@ID @C {
#define INHERIT_KHE_DRS_SOLN_SET_PART				\
  KHE_DRS_DOM_KIND		dom_kind;

typedef struct khe_drs_soln_set_part_rec {
  INHERIT_KHE_DRS_SOLN_SET_PART
} *KHE_DRS_SOLN_SET_PART;
}
Here are its concrete subtypes, corresponding to the
values of type @C { KHE_DRS_DOM_KIND }:
@IndentedList

@LI @C {
typedef struct khe_drs_soln_set_part_dom_none_rec {
  INHERIT_KHE_DRS_SOLN_SET_PART
  KHE_DRS_SOLN_LIST		soln_list;
} *KHE_DRS_SOLN_SET_PART_DOM_NONE;
}

@LI @C {
typedef struct khe_drs_soln_set_part_dom_weak_rec {
  INHERIT_KHE_DRS_SOLN_SET_PART
  TABLE_KHE_DRS_SOLN		soln_table;
} *KHE_DRS_SOLN_SET_PART_DOM_WEAK;
}

@LI @C {
typedef struct khe_drs_soln_set_part_dom_medium_rec {
  INHERIT_KHE_DRS_SOLN_SET_PART
  TABLE_KHE_DRS_SOLN_LIST	soln_list_table;
} *KHE_DRS_SOLN_SET_PART_DOM_MEDIUM;
}

@LI @C {
typedef struct khe_drs_soln_set_part_dom_separate_rec {
  INHERIT_KHE_DRS_SOLN_SET_PART
  KHE_DRS_SOLN_LIST		soln_list;
} *KHE_DRS_SOLN_SET_PART_DOM_SEPARATE;
}

@LI @C {
/* ***
typedef struct khe_drs_soln_set_part_dom_trie_rec {
  INHERIT_KHE_DRS_SOLN_SET_PART
  KHE_DRS_SOLN_TRIE		soln_trie;
} *KHE_DRS_SOLN_SET_PART_DOM_TRIE;
*** */
}

@LI @C {
typedef struct khe_drs_soln_set_part_dom_indexed_rec {
  INHERIT_KHE_DRS_SOLN_SET_PART
  KHE_DRS_INDEXED_SOLN_SET	indexed_solns;
} *KHE_DRS_SOLN_SET_PART_DOM_INDEXED;
}

@LI @C {
typedef struct khe_drs_soln_set_part_dom_tabulated_rec {
  INHERIT_KHE_DRS_SOLN_SET_PART
  KHE_DRS_SOLN_LIST		soln_list;
} *KHE_DRS_SOLN_SET_PART_DOM_TABULATED;
}

@LI @C {
typedef struct khe_drs_soln_set_part_dom_indexed_tabulated_rec {
  INHERIT_KHE_DRS_SOLN_SET_PART
  KHE_DRS_HARD_INDEXED_SOLN_SET	indexed_solns;
} *KHE_DRS_SOLN_SET_PART_DOM_INDEXED_TABULATED;
}

@EndList
Each contains one field holding the data structure appropriate
to its type:  a solution list, a hash table of solutions, a hash
table of solution lists, or an indexed array of solution lists.
Type @C { KHE_DRS_SOLN_SET_PART } offers the usual operations on
collections, implemented by large switches on the @C { dom_kind } field.
@End @SubSubAppendix

@SubSubAppendix
    @Title { Solution sets }
    @Tag { dynamic_impl.sets.sets }
@Begin
@LP
Finally, we reach type @C { KHE_DRS_SOLN_SET }, used
to hold each set of undominated solutions @M { P sub k }:
@ID @C {
typedef struct khe_drs_soln_set_rec *KHE_DRS_SOLN_SET;

struct khe_drs_soln_set_rec {
  KHE_DRS_SOLN_SET_PART		cache;
  KHE_DRS_SOLN_SET_PART		main;
};
}
This holds an optional @C { cache } part, which when non-@C { NULL }
holds a collection of recently inserted solutions; and a @C { main }
part, a non-optional collection holding most of the solutions.  The
idea of the cache is that solutions derived from the same
predecessor solution are likely to exhibit dominance relations,
so keeping them together might save time.
@PP
When caching is used, insertions go into the cache rather than
into the main part:
@ID @C {
void KheDrsSolnSetMeldSoln(KHE_DRS_SOLN_SET soln_set, KHE_DRS_SOLN soln,
  KHE_DRS_DAY soln_day, KHE_DRS_EXPANDER de,
  KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  if( soln_set->cache != NULL )
    KheDrsSolnSetPartMeldSoln(soln_set->cache, soln, soln_day, de, drs);
  else
    KheDrsSolnSetPartMeldSoln(soln_set->main, soln, soln_day, de, drs);
}
}
@C { KheDrsSolnSetMeldSoln } is called by
@C { KheDrsMakeEvaluateAndMeldSoln }
(Appendix {@NumberOf dynamic_impl.solns.solns}) to add
a new solution to @C { soln_set }.
Functions @C { KheDrsSolnSetBeginCacheSegment } and
@C { KheDrsSolnSetEndCacheSegment } instruct the solution set to
begin and end caching:
@IndentedList

@LI @C {
void KheDrsSolnSetBeginCacheSegment(KHE_DRS_SOLN_SET soln_set,
  KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  /* actually there is nothing to do here */
}
}

@LI @C {
void KheDrsSolnSetEndCacheSegment(KHE_DRS_SOLN_SET soln_set,
  KHE_DYNAMIC_RESOURCE_SOLVER drs)
{
  KHE_DRS_SOLN_LIST soln_list;  KHE_DRS_SOLN soln;  int i;
  KHE_DRS_DOM_KIND cache_dom_kind;
  if( soln_set->cache != NULL )
  {
    cache_dom_kind = soln_set->cache->dom_kind;
    soln_list = KheDrsSolnListMake(drs);
    KheDrsSolnSetPartGather(soln_set->cache, soln_list);
    KheDrsSolnSetPartFree(soln_set->cache, drs);
    soln_set->cache = NULL;
    HaArrayForEach(soln_list->solns, soln, i)
      KheDrsSolnSetMeldSoln(soln_set, soln, drs);
    soln_set->cache = KheDrsSolnSetPartMake(cache_dom_kind, drs);
    KheDrsSolnListFree(soln_list, drs);
  }
}
}

@EndList
There is nothing to do to begin caching, but to end it we have to
move every element from the cache (if there is one) to the main
table.  This is rather messy.  We make a simple list of solutions,
@C { soln_list }, and call @C { KheDrsSolnSetPartGather } to gather
all the solutions from the cache into this list, then
@C { KheDrsSolnSetPartFree } to free the cache.  Then without a cache
we call @C { KheDrsSolnSetMeldSoln } on each element of @C { soln_list }
to meld every solution from the cache into the main part.  Finally,
we create a new, empty cache and free the solution list.
@End @SubSubAppendix

@EndSubSubAppendices
@End @SubAppendix

@SubAppendix
    @Title { Solving }
    @Tag { dynamic_impl.solving }
@Begin
@LP
A solver is represented by an object of public type
@C { KHE_DYNAMIC_RESOURCE_SOLVER }.  It would be too tedious
to show all the fields, but here is a selection:
@ID @C {
struct khe_dynamic_resource_solver_rec {

  /* fields constant throughout the lifetime of the solver */
  HA_ARENA			arena;
  KHE_SOLN			soln;
  KHE_RESOURCE_TYPE		resource_type;
  KHE_OPTIONS			options;
  KHE_FRAME			days_frame;
  KHE_MTASK_FINDER		mtask_finder;
  ARRAY_KHE_DRS_RESOURCE	all_resources;
  ARRAY_KHE_DRS_DAY		all_days;
  ARRAY_KHE_DRS_TASK		all_root_tasks;
  ...

  /* free list fields */
  ...

  /* priority queue (always initialized, but use is optional) */
  KHE_PRIQUEUE			priqueue;

  /* fields that vary with the solve */
  ARRAY_KHE_DRS_DAY_RANGE	selected_day_ranges;
  KHE_RESOURCE_SET		selected_resource_set;
  ...
  ARRAY_KHE_DRS_DAY		open_days;
  ARRAY_KHE_DRS_SHIFT		open_shifts;
  ARRAY_KHE_DRS_RESOURCE	open_resources;
  ARRAY_KHE_DRS_EXPR		open_exprs;
  ...
};
}
The first group of fields holds values that remain constant after
the solver object is constructed.  The most interesting ones are
the last three, holding one DRS resource for each resource of
the solver's resource type, one DRS day for each time group of the
common frame, and one DRS task for each proper root task which
accepts a resource of that type.
@PP
After that come fields holding free lists for recycling objects
between solves, and a field holding a priority queue of solutions for
when that form of solving is requested.  Finally, there are fields
that vary with the solve.  We've shown @C { selected_day_ranges },
which holds the day ranges selected by calls to public function
@C { KheDynamicResourceSolverAddDayRange }, and 
@C { selected_resource_set }, which holds the resources selected by
calls to public function @C { KheDynamicResourceSolverAddResource }.
Then come arrays holding the days, shifts, resources, and expressions
that have been opened for a particular solve.
@BeginSubSubAppendices

@SubSubAppendix
    @Title { Construction }
    @Tag { dynamic_impl.solving.construction }
@Begin
@LP
Here is the public function for creating a new solver, drastically
abbreviated:
@ID @C {
KHE_DYNAMIC_RESOURCE_SOLVER KheDynamicResourceSolverMake(KHE_SOLN soln,
  KHE_RESOURCE_TYPE rt, KHE_OPTIONS options)
{
  KHE_DYNAMIC_RESOURCE_SOLVER res;  HA_ARENA a;

  /* make the basic object */
  a = KheSolnArenaBegin(soln, false);
  HaMake(res, a);

  /* straightforward initializations (omitted) */
  /* construct the days (see below) */
  /* construct the resources (see below) */
  /* construct the tasks, mtasks, and shifts (see below) */
  /* construct the expressions (see below) */
  
  return res;
}
}
It obtains an arena from @C { soln } and creates a solver object
@C { res }.  Then it initializes every field of @C { res }, a
tedious process that we won't show.  After that it constructs
the days, using this code:
@ID {0.95 1.0} @Scale @C {
/* drs day objects for the days of the frame */
for( i = 0;  i < KheFrameTimeGroupCount(days_frame);  i++ )
{
  day = KheDrsDayMake(i, res);
  HaArrayAddLast(res->all_days, day);
}
}
Next come the resources:
@ID {0.95 1.0} @Scale @C {
/* resources of rt but not their monitors (assumes days done) */
HaArrayInit(res->all_resources, a);
for( i = 0;  i < KheResourceTypeResourceCount(rt);  i++ )
{
  r = KheResourceTypeResource(rt, i);
  dr = KheDrsResourceMake(r, res);
  HaArrayAddLast(res->all_resources, dr);
}
}
Next come the tasks, mtasks, and shifts:
@ID {0.95 1.0} @Scale @C {
/* tasks, mtasks, and shifts (assumes days and resources) */
res->mtask_finder = KheMTaskFinderMake(soln, rt, days_frame, etm,
  true, a);
incomplete_times = false;  /* still to do */
if( incomplete_times )
{
  KheDynamicResourceSolverDelete(res);
  return NULL;  /* second dot point in doc */
}
for( i = 0;  i < KheMTaskFinderMTaskCount(res->mtask_finder);  i++ )
{
  mt = KheMTaskFinderMTask(res->mtask_finder, i);
  if( !KheMTaskNoGaps(mt) ||	      /* third dot point in doc */
      !KheMTaskNoOverlap(mt) ||       /* fourth dot point in doc */
      !KheDrsMTaskMake(mt, res) )     /* fourth dot point in doc */
  {
    KheDynamicResourceSolverDelete(res);
    return NULL;
  }
}
}
Next comes code for working out the maximum possible workload
that a resource could incur at each time:
@ID @C {
/* initialize drs->max_workload_per_time */
KheDrsMaxWorkloadPerTimeInit(res);
}
After that comes code for initializing shift pair objects:
@ID @C {
/* shift pairs */
HaArrayForEach(res->all_days, day, i)
  HaArrayForEach(day->shifts, ds1, j)
    for( k = j + 1;  k < HaArrayCount(day->shifts);  k++ )
    {
      ds2 = HaArray(day->shifts, k);
      dsp = KheDrsShiftPairMake(ds1, ds2, res);
      HaArrayAddLast(ds1->shift_pairs, dsp);
    }
}
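The nested loop visits each unordered pair of shifts on a day exactly
once, storing the pair on its first shift.  As a sanity check, the
number of pairs made for a day with @M { n } shifts is
@M { n(n-1)/2 }.  The sketch below mirrors the loop structure but,
with invented names, merely counts instead of making
@C { KHE_DRS_SHIFT_PAIR } objects.

```c
/* Count the pairs that the shift-pair loop makes for one day with
   n shifts: one pair (j, k) for each j < k. */
int ShiftPairCount(int n)
{
  int j, k, count;
  count = 0;
  for( j = 0;  j < n;  j++ )
    for( k = j + 1;  k < n;  k++ )
      count++;      /* here the real code makes and stores a pair */
  return count;     /* always equals n * (n - 1) / 2 */
}
```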
After that it constructs the expressions corresponding to
@C { soln }'s monitors:
@ID {0.95 1.0} @Scale @C {
/* resource monitors */
HaArrayForEach(res->all_resources, dr, i)
  if( !KheDrsResourceAddMonitors(dr, res) )
  {
    /* can't run this instance */
    KheDynamicResourceSolverDelete(res);
    return NULL;
  }

/* event resource monitors of type rt (assumes tasks done) */
HaArrayInit(er_monitors, a);
for( i = 0;  i < KheInstanceEventResourceCount(ins);  i++ )
{
  er = KheInstanceEventResource(ins, i);
  if( KheEventResourceResourceType(er) == rt )
    for( j = 0;  j < KheSolnEventResourceMonitorCount(soln, er);  j++ )
    {
      m = KheSolnEventResourceMonitor(soln, er, j);
      HaArrayAddLast(er_monitors, m);
    }
}
HaArraySortUnique(er_monitors, &KheMonitorCmp);
HaArrayForEach(er_monitors, m, i)
  if( !KheDrsAddEventResourceMonitor(res, m) )
  {
    /* can't run this instance */
    KheDynamicResourceSolverDelete(res);
    return NULL;
  }
HaArrayFree(er_monitors);
}
The KHE platform offers no simple way to visit each event resource
monitor once, so this code puts them all into temporary array
@C { er_monitors } and uniqueifies that array before making the
corresponding expressions.  Expression construction is carried out
by @C { KheDrsResourceAddMonitors }, which adds expressions for
all the monitors of DRS resource @C { dr }, and
@C { KheDrsAddEventResourceMonitor }, which adds an expression for
event resource monitor @C { m }.  The following section explains
which expressions are constructed for each kind of monitor.  Limit idle
times monitors and avoid split assignments monitors are not supported;
if any of those are present, @C { KheDynamicResourceSolverMake }
deletes the solver object and returns @C { NULL }.
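The sort-and-uniqueify step is the classic idiom of sorting and then
squeezing out adjacent duplicates in place.  Here is a hedged sketch
of the idiom that @C { HaArraySortUnique } presumably implements, with
@C { int } values standing in for monitor pointers; the names here
are invented.

```c
#include <stdlib.h>

/* Three-way comparison for qsort, standing in for KheMonitorCmp. */
static int IntCmp(const void *a, const void *b)
{
  int x = *(const int *) a, y = *(const int *) b;
  return (x > y) - (x < y);
}

/* Sort array a of length n and remove adjacent duplicates in place,
   returning the new length. */
int SortUnique(int *a, int n)
{
  int i, m;
  if( n == 0 )
    return 0;
  qsort(a, n, sizeof(int), IntCmp);
  for( i = 1, m = 1;  i < n;  i++ )
    if( a[i] != a[m - 1] )
      a[m++] = a[i];    /* keep the first copy of each run */
  return m;
}
```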
@PP
Next comes a quick check that at least one monitor has cost
greater than its lower bound:
@ID @C {
if( !KheDrsSolnCanBeImproved(res) )
{
  /* no point running this instance; soln can't be improved on */
  KheDynamicResourceSolverDelete(res);
  return NULL;
}
}
Next comes some code to ensure that the increments used by
hard and soft indexed solution sets have non-zero values:
@ID @C {
/* make sure that the increments are non-zero */
if( res->hard_increment <= 0 )
  res->hard_increment = 1;
if( res->soft_increment <= 0 )
  res->soft_increment = 1;
}
Finally we construct the uniform dominance tables stored within
constraint objects:
@ID @C {
/* table objects within constraints */
HaArrayForEach(res->all_constraints, dc, i)
  if( dc != NULL )
    KheDrsConstraintSetTables(dc, res);
}
The constraints themselves were constructed earlier, while
traversing the monitors.
@End @SubSubAppendix

@SubSubAppendix
    @Title { Construction of expression trees }
    @Tag { dynamic_impl.solving.monitors }
@Begin
@LP
In this section we present expression trees for the XESTT event
resource and resource monitors.  These trees are built when the solver
is created, by the calls to @C { KheDrsResourceAddMonitors } and
@C { KheDrsAddEventResourceMonitor } shown above.  Only parts of
these expressions are open on any particular solve.  We omit
the actual construction code, since it can be derived from the
diagrams presented here, using calls to the various functions
for creating expression objects.
@PP
All non-root expressions have a value (either an @C { int } or a
@C { float }) which is used by their parent.  All root expressions
report a cost which is added to the solution cost.  However, we
do not consider this cost to be the value of the root expression,
because a value becomes available only on the expression's last
active day, whereas a cost (strictly speaking, an extra cost) is
added to the solution cost on each active day of the expression.
@PP
We start with event resource monitors.
@PP
@B { Assign resource monitors }.
Let the atomic tasks monitored be @M { t sub 1 ,..., t sub k }
after breaking them into single-day pieces of duration 1;
their total duration is @M { k }.  The expression tree is
@CD @Diag
  treevsep { 1.5f }
  treehsep { 1.0c }
  blabelprox { SW }
{
@HTree {
  @Box # blabel { @M { non >= } }
    @M { COUNTER }
    @FirstSub @Box @M { ASSIGNED_TASK( t sub 1 , R ) }
    @NextSub pathstyle { noline } @Circle outlinestyle { noline } ...
    @NextSub @Box @M { ASSIGNED_TASK( t sub k , R ) }
}
}
where the @M { COUNTER } expression has minimum limit @M { k }
and @M { R } is the resource type.  Each leaf contributes 1 when
its task is assigned, and the deviation is the amount by which the
sum of these values falls short of the total duration, @M { k }.
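The deviation calculation just described can be written down directly.
The sketch below is illustrative only (the real @M { COUNTER }
expression code is more elaborate, and the function name is invented):
given the sum of the children's values and the expression's limits,
the deviation is the shortfall below the minimum limit or the excess
above the maximum limit.  For an assign resource monitor the minimum
limit is @M { k } and there is no effective maximum.

```c
/* Deviation of a COUNTER-style expression: how far the sum of its
   children's values falls outside [min_limit, max_limit]. */
int CounterDeviation(int count, int min_limit, int max_limit)
{
  if( count < min_limit )
    return min_limit - count;   /* shortfall below the minimum */
  else if( count > max_limit )
    return count - max_limit;   /* excess above the maximum */
  else
    return 0;                   /* within limits: no deviation */
}
```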
@PP
When the cost function is linear, this tree may be divided into
one tree per task on day:
@CD @Diag
  treevsep { 1.5f }
  treehsep { 1.0c }
  blabelprox { SW }
{
@HTree {
  @Box @M { COUNTER }
  @FirstSub @Box @M { ASSIGNED_TASK( t sub 1 , R ) }
}
}
and so on, where the @M { COUNTER } expression has minimum limit
1.  An @M { ASSIGNED_TASK } expression has only a single open day, the
day on which its task on day runs, so these smaller trees never
contribute to signature state arrays, which is why we prefer them.
@PP
Assign resource monitors contribute to the @C { non_asst_cost }
attributes of mtasks.  So there is some danger of double counting their
costs.  However, a review of the solver's code shows that although
@C { non_asst_cost } attributes are used to rule out some
non-assignments, @C { non_asst_cost } itself is never added to any
sum of costs.  So there is in fact no double counting.
@PP
@B { Prefer resources monitors }.
We use the same terminology as for assign resource monitors, plus we
let @M { g } be the set of preferred resources.
@PP
If @M { g } includes every resource of the resource type concerned,
the cost must always be zero and the monitor is ignored.
@PP
If @M { g } is empty, meaning that it is preferable to not assign
the monitored tasks, the monitor may contribute to the @C { asst_cost }
attributes of those tasks' mtasks.  These attributes are added to
the current solution cost when expanding a solution, so again the
monitor should be ignored, but this time to avoid double counting.
For the @C { asst_cost } attribute to be affected, the monitor has
to either monitor a single task or else have a linear cost function,
as an examination of file @C { khe_sr_mtask_finder.c } will show.  So
we ignore the monitor when @M { g } is empty and it either monitors
a single task or has a linear cost function.
@PP
If we do not ignore the monitor, in general its expression tree is
@CD @Diag
  treevsep { 1.5f }
  treehsep { 1.0c }
  blabelprox { SW }
{
@HTree {
  @Box @M { COUNTER }
    @FirstSub @Box @M { ASSIGNED_TASK( t sub 1 , R - g ) }
    @NextSub pathstyle { noline } @Circle outlinestyle { noline } ...
    @NextSub @Box @M { ASSIGNED_TASK( t sub k , R - g ) }
}
}
where the @M { COUNTER } expression has maximum limit 0.  Each
@M { t sub i } assigned a resource not in @M { g } contributes
1 to the deviation.  (An unassigned @M { t sub i } contributes
0, as required.)  When the cost function is linear this can be
divided into one tree per task on day, again with maximum limit 0:
@CD @Diag
  treevsep { 1.5f }
  treehsep { 1.0c }
  blabelprox { SW }
{
@HTree {
  @Box @M { COUNTER }
  @FirstSub @Box @M { ASSIGNED_TASK( t sub 1 , R - g ) }
}
}
and so on.  Again, these smaller trees never contribute to
signature states, so are preferred.
@PP
@B { Avoid split assignments monitors }.
These do not occur in nurse rostering, and a solver is not made
when they are present.  What needs to be remembered on any day
is the set of distinct resources assigned to the monitored tasks.
Without this, one cannot tell whether a later assignment increases
the number of distinct resources or not.  There are various ways
to encode this into the signature, although none seem to be ideal.
A bit set packed into integers leads to very large arrays in a trie
structure.  An unpacked bit set leads to a large number of signature
entries.  A set of resource indexes varies in length, although there
is an upper limit:  the number of tasks monitored that are running
at or before the current day.  Perhaps a sequence of resource indexes,
sorted and uniqueified, and padded out to the upper limit on length,
would be best.  One signature dominates another when its resources are
a subset of the other's.
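Under that encoding, the dominance test is a subset test on two sorted
sequences.  The sketch below is hypothetical (as just noted, this
monitor is not supported, so KHE contains no such code), but it shows
how cheap the test would be.

```c
/* Return 1 if sorted sequence a (length an) is a subset of sorted
   sequence b (length bn); both must be sorted ascending with no
   duplicates.  Hypothetical code sketching the dominance test
   suggested in the text; the name is invented. */
int DrsResourceSetSubset(const int *a, int an, const int *b, int bn)
{
  int i, j;
  for( i = 0, j = 0;  i < an && j < bn; )
  {
    if( a[i] == b[j] )
      i++, j++;         /* a[i] found in b */
    else if( a[i] > b[j] )
      j++;              /* skip smaller b element */
    else
      return 0;         /* a[i] cannot occur later in b */
  }
  return i == an;       /* all of a was found */
}
```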
@PP
@B { Limit resources monitors }.
Let @M { g } be the set of resources of interest.  The tree is
@CD @Diag
  treevsep { 1.5f }
  treehsep { 1.0c }
  blabelprox { SW }
{
@HTree {
  @Box # blabel { @M { alpha } }
    @M { COUNTER }
      @FirstSub @Box @M { ASSIGNED_TASK( t sub 1 , g ) }
      @NextSub pathstyle { noline } @Circle outlinestyle { noline } ...
      @NextSub @Box @M { ASSIGNED_TASK( t sub k , g ) }
}
}
where the @M { COUNTER } expression's limits are taken from the
monitor.  This only divides into separate trees when the cost
function is linear and both limits are 0.
@PP
A limit resources monitor can mimic a prefer resources monitor.  In
that case, the mtask solver treats the limit resources monitor like
a prefer resources monitor, including adding a cost to the
@C { asst_cost } attribute when appropriate.  An examination of file
@C { khe_sr_mtask_finder.c } shows that @C { asst_cost } is affected
when the limit resources monitor's maximum limit is 0, the set @M { g }
(whose complement becomes the set of preferred resources of the
corresponding prefer resources monitor) contains every resource of
the given resource type, and the number of tasks is 1 or the cost
function is linear.  In this case we ignore the limit resources monitor.
@PP
We turn now to the resource monitors for resource @M { r }.
@PP
@B { Avoid clashes monitors }.
No clashes can occur, because tasks with clashes are excluded, and
each resource is assigned to at most one task on each day.  So these
monitors are ignored.
@PP
@B { Avoid unavailable times monitors }.
If the unavailable times are @M { t sub 1 , t sub 2 ,..., t sub k },
the tree is
@CD @Diag
  treevsep { 1.5f }
  treehsep { 1.0c }
  blabelprox { SW }
{
@HTree {
  @Box @M { COUNTER }
    @FirstSub @Box @M { BUSY_TIME(r, t sub 1 ) }
    @NextSub  @Box @M { BUSY_TIME(r, t sub 2 ) }
    @NextSub  pathstyle { noline } @Circle outlinestyle { noline } ...
    @NextSub  @Box @M { BUSY_TIME(r, t sub k ) }
}
}
The @M { COUNTER } expression has maximum limit 0.  If the cost function
is linear, each time contributes an independent value to the total cost,
and we use multiple trees instead:
@CD @Diag
  treevsep { 1.5f }
  treehsep { 1.0c }
{
@HTree {
  @Box @M { COUNTER }
  @FirstSub @Box @M { BUSY_TIME(r, t sub 1 ) }
}
}
and so on.  We prefer this because these expressions do not store
a value in the signature.
@PP
@B { Limit idle times monitors }.
These do not occur in nurse rostering, and a solver is not made
when they are present.  Handling them is future work (feasible,
but low priority).
@PP
@B { Cluster busy times monitors }.
We have already seen a cluster busy times tree, for limiting busy
weekends, assuming a four-week instance with two shifts per day.
Here it is again:
@CD @Diag
  treevsep { 1.5f }
  treehsep { 1.0c }
  blabelprox { SW }
{
  @HTree {
    @Node # Cblabel { @M { alpha } }
    @M { COUNTER }
    @FirstSub {
      @Node # blabel { @M { beta } }
      @M { OR }
      @FirstSub to { W } { @Node @M { BUSY_TIME(r, 1Sat1) } }
      @NextSub  to { W } { @Node @M { BUSY_TIME(r, 1Sat2) } }
      @NextSub  to { W } { @Node @M { BUSY_TIME(r, 1Sun1) } }
      @NextSub  to { W } { @Node @M { BUSY_TIME(r, 1Sun2) } }
    }
    @NextSub pathstyle { noline } {
      @Node outlinestyle { noline } { ... }
    }
    @NextSub {
      @Node # blabel { @M { beta } }
      @M { OR }
      @FirstSub to { W } { @Node @M { BUSY_TIME(r, 4Sat1) } }
      @NextSub  to { W } { @Node @M { BUSY_TIME(r, 4Sat2) } }
      @NextSub  to { W } { @Node @M { BUSY_TIME(r, 4Sun1) } }
      @NextSub  to { W } { @Node @M { BUSY_TIME(r, 4Sun2) } }
    }
  }
}
Within each @M { OR }, if a day's times are all present, their
@M { BUSY_TIME } expressions are replaced by a @M { BUSY_DAY }
expression, saving time.  Negative time groups become
@CD @Diag
  treevsep { 1.5f }
  treehsep { 1.0c }
  blabelprox { SW }
{
  @HTree {
      @Node # blabel { @M { beta } }
      @M { AND }
      @FirstSub to { W } { @Node @M { FREE_TIME(r, 1Sat1) } }
      @NextSub  to { W } { @Node @M { FREE_TIME(r, 1Sat2) } }
      @NextSub  to { W } { @Node @M { FREE_TIME(r, 1Sun1) } }
      @NextSub  to { W } { @Node @M { FREE_TIME(r, 1Sun2) } }
    }
}
Again, @M { FREE_DAY } expressions may replace @M { FREE_TIME }
expressions.  And when an @M { OR } or @M { AND } expression has
exactly one child, the @M { OR } or @M { AND } expression is omitted.
@PP
@B { Limit busy times monitors }.
A limit busy times monitor may monitor several time groups, like
a cluster busy times monitor, but a deviation is calculated for
each time group separately:
@CD @Diag
  treevsep { 1.5f }
  treehsep { 0.5c }
  blabelprox { SW }
{
@HTree {
  @Box # blabel { @M { non <= } }
  @M { SUM_INT }
  @FirstSub {
    @Box # blabel { @M { alpha } }
    @M { SUM_INT }
    @FirstSub @Box @M { BUSY_TIME(r, 1Mon1) }
    @NextSub pathstyle { noline } @Circle outlinestyle { noline } ...
    @NextSub @Box @M { BUSY_TIME(r, 1Mon3) }
  }
  @NextSub pathstyle { noline } @Circle outlinestyle { noline } ...
  @NextSub {
    @Box # blabel { @M { alpha } }
    @M { SUM_INT }
    @FirstSub @Box @M { BUSY_TIME(r, 4Mon1) }
    @NextSub pathstyle { noline } @Circle outlinestyle { noline } ...
    @NextSub @Box @M { BUSY_TIME(r, 4Mon3) }
  }
}
}
The lower @M { SUM_INT } expressions, described earlier as case (4)
sum expressions, have deviation calculations but not cost calculations.
The higher @M { SUM_INT }, called case (1), includes cost but only a
trivial deviation (maximum limit 0).  If any of the time groups
contains a full day's worth of times, their @M { BUSY_TIME }
expressions are replaced by one @M { BUSY_DAY } expression.
@PP
If there is only one time group or the cost function is linear,
the tree is broken up into one tree for each time group.  Each
of these trees has the form
@CD @Diag
  treevsep { 1.5f }
  treehsep { 0.5c }
  blabelprox { SW }
{
@HTree {
  @Box # blabel { @M { non <= } }
  @M { COUNTER }
    @FirstSub @Box @M { BUSY_TIME(r, 1Mon1) }
    @NextSub pathstyle { noline } @Circle outlinestyle { noline } ...
    @NextSub @Box @M { BUSY_TIME(r, 1Mon3) }
}
}
We can replace @M { SUM_INT } by @M { COUNTER } here because the
children all have value 0 or 1, and we do it because it allows
tabulated dominance to be used.  This is case (2).  As before, a
day's worth of @M { BUSY_TIME } expressions are replaced by a
@M { BUSY_DAY } expression.  If the tree as a whole requires a maximum
of one busy time on one day, it is omitted, since all solutions produced
by this solver have that property.
@PP
@B { Limit workload monitors }.
These are the same as limit busy times monitors, except that they keep
track of a @C { float } workload rather than an @C { int } number
of busy times:
@CD @Diag
  treevsep { 1.5f }
  treehsep { 0.2c }
  blabelprox { SW }
{
@HTree {
  @Box # blabel { @M { non <= } }
  @M { SUM_INT }
  @FirstSub {
    @Box # blabel { @M { alpha } }
    @M { SUM_FLOAT }
    @FirstSub @Box @M { WORK_TIME(r, 1Mon1) }
    @NextSub pathstyle { noline } @Circle outlinestyle { noline } ...
    @NextSub @Box @M { WORK_TIME(r, 1Mon3) }
  }
  @NextSub pathstyle { noline } @Circle outlinestyle { noline } ...
  @NextSub {
    @Box # blabel { @M { alpha } }
    @M { SUM_FLOAT }
    @FirstSub @Box @M { WORK_TIME(r, 4Mon1) }
    @NextSub pathstyle { noline } @Circle outlinestyle { noline } ...
    @NextSub @Box @M { WORK_TIME(r, 4Mon3) }
  }
}
}
The @M { SUM_FLOAT } expressions, which are case (4), include deviation
calculations but not cost calculations.  The @M { SUM_INT } expression,
which is case (1), includes the cost calculation, but only a trivial
deviation calculation (maximum limit 0).  If any of the time groups
contains a full day's worth of times, their @M { WORK_TIME } expressions
are replaced by one @M { WORK_DAY } expression.
@PP
As for limit busy times monitors, if there is only one time group or
the cost function is linear, the tree is broken up into one tree for
each time group.  Each of these trees has the form
@CD @Diag
  treevsep { 1.5f }
  treehsep { 0.2c }
  blabelprox { SW }
{
@HTree {
    @Box # blabel { @M { alpha } }
    @M { SUM_FLOAT }
    @FirstSub @Box @M { WORK_TIME(r, 1Mon1) }
    @NextSub pathstyle { noline } @Circle outlinestyle { noline } ...
    @NextSub @Box @M { WORK_TIME(r, 1Mon3) }
}
}
The remaining @M { SUM_FLOAT } expression, which is case (3), takes
over the cost calculation.  As before, a day's worth of @M { WORK_TIME }
expressions are replaced by a @M { WORK_DAY } expression.
@PP
When a time group spans more than one day, its @M { SUM_FLOAT }
expression needs to store floating point numbers in signatures.
This is done by giving each position in each signature the type
@C { KHE_DRS_VALUE }, an untagged union of @C { int } and @C { float }.
The context is used to select the appropriate value from the union.
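An untagged union of this kind might look as follows.  This is a
sketch with invented names, not the real @C { KHE_DRS_VALUE }
declaration; the essential point is that no tag is stored, so the
expression reading a signature position must know from context whether
the slot holds an @C { int } or a @C { float }.

```c
/* Untagged union for one signature position: the owning expression
   knows from context which member is in use. */
typedef union {
  int   i;    /* used by int-valued expressions, e.g. SUM_INT */
  float f;    /* used by float-valued expressions, e.g. SUM_FLOAT */
} DRS_VALUE;

/* A SUM_FLOAT-style expression reads and writes the float member. */
float DrsValueAddWorkload(DRS_VALUE v, float workload)
{
  v.f += workload;
  return v.f;
}
```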
@PP
@B { Limit active intervals monitors }.
These have the same data as cluster busy times monitors, without allow
zero.  Only the root expression is different:
@CD @Diag
  treevsep { 1.5f }
  treehsep { 1.0c }
  blabelprox { SW }
{
  @HTree {
    @Node # blabel { @M { alpha } }
    @M { SEQUENCE }
    @FirstSub {
      @Node # blabel { @M { beta } }
      @M { OR }
      @FirstSub to { W } { @Node @M { BUSY_TIME(r, 1Sat1) } }
      @NextSub  to { W } { @Node @M { BUSY_TIME(r, 1Sat2) } }
      @NextSub  to { W } { @Node @M { BUSY_TIME(r, 1Sun1) } }
      @NextSub  to { W } { @Node @M { BUSY_TIME(r, 1Sun2) } }
    }
    @NextSub pathstyle { noline } {
      @Node outlinestyle { noline } { ... }
    }
    @NextSub {
      @Node # blabel { @M { beta } }
      @M { OR }
      @FirstSub to { W } { @Node @M { BUSY_TIME(r, 4Sat1) } }
      @NextSub  to { W } { @Node @M { BUSY_TIME(r, 4Sat2) } }
      @NextSub  to { W } { @Node @M { BUSY_TIME(r, 4Sun1) } }
      @NextSub  to { W } { @Node @M { BUSY_TIME(r, 4Sun2) } }
    }
  }
}
As for cluster busy times monitors, negative time groups produce
@M { AND } and @M { FREE_TIME } expressions, and times making up
complete days become
@M { BUSY_DAY } and @M { FREE_DAY } expressions.  However, when an
@M { OR } or @M { AND } expression has exactly one child, we do not
omit it as we do for cluster busy times monitors.  The two reasons
for this are explained in Appendix {@NumberOf dynamic_impl.expr.seq}.
@End @SubSubAppendix

@SubSubAppendix
    @Title { Solving }
    @Tag { dynamic_impl.solving.solving }
@Begin
@LP
At last, we are ready for the main solving functions.  As explained
earlier, a solve has three steps:  opening, searching, and closing.
Here is the function which opens a solve:
@ID {0.95 1.0} @Scale @C {
void KheDrsSolveOpen(KHE_DYNAMIC_RESOURCE_SOLVER drs, bool testing,
  bool priqueue, bool extra_selection, bool expand_by_shifts,
  bool correlated_exprs, int daily_expand_limit, int daily_prune_trigger,
  int resource_expand_limit, int dom_approx,
  KHE_DRS_DOM_KIND main_dom_kind, bool cache,
  KHE_DRS_DOM_KIND cache_dom_kind, KHE_DRS_PACKED_SOLN *init_soln)
{
  KHE_DRS_DAY_RANGE ddr;  KHE_DRS_DAY day;  int i, j, rcount;
  KHE_DRS_EXPR e;  KHE_DRS_RESOURCE dr;  KHE_RESOURCE r;

  /* initialize fields that vary with the solve */
  drs->solve_priqueue = priqueue;
  drs->solve_extra_selection = extra_selection;
  drs->solve_expand_by_shifts = expand_by_shifts;
  drs->solve_correlated_exprs = correlated_exprs;
  drs->solve_daily_expand_limit = daily_expand_limit;
  drs->solve_daily_prune_trigger = daily_prune_trigger;
  drs->solve_resource_expand_limit = resource_expand_limit;
  drs->solve_dom_approx = dom_approx;
  drs->solve_dom_test_type =
    KheDomKindCheckConsistency(main_dom_kind, cache, cache_dom_kind);
  drs->solve_init_cost = drs->solve_start_cost = KheSolnCost(drs->soln);
  KheDrsResourceSetClear(drs->open_resources);
  HaArrayClear(drs->open_days);
  HaArrayClear(drs->open_shifts);
  HaArrayClear(drs->open_exprs);

  /* open selected resources */
  *init_soln = KheDrsPackedSolnBuildEmpty(drs);
  rcount = KheResourceSetResourceCount(drs->selected_resource_set);
  for( i = 0;  i < rcount;  i++ )
  {
    r = KheResourceSetResource(drs->selected_resource_set, i);
    dr = HaArray(drs->all_resources, KheResourceResourceTypeIndex(r));
    KheDrsResourceOpen(dr, KheDrsResourceSetCount(drs->open_resources),
      *init_soln, drs);
    KheDrsResourceSetAddLast(drs->open_resources, dr);
  }

  ... code omitted here (see below) ...
}
}
First, the solve options are copied into the solver, and the sets
of open resources, days, shifts, and expressions are cleared.
Then the selected resources are opened by calls to
@C { KheDrsResourceOpen }, and added to @C { drs->open_resources }.
Then the code that was omitted above is run; here it is:
@ID @C {
/* open selected days */
HaArrayForEach(drs->selected_day_ranges, ddr, i)
  for( j = ddr.first;  j <= ddr.last;  j++ )
  {
    day = HaArray(drs->all_days, j);
    KheDrsDayOpen(day, ddr, HaArrayCount(drs->open_days), main_dom_kind,
      cache, cache_dom_kind, drs);
    HaArrayAddLast(drs->open_days, day);
  }

/* open the shifts and mtasks on selected days */
HaArrayForEach(drs->selected_day_ranges, ddr, i)
  for( j = ddr.first;  j <= ddr.last;  j++ )
  {
    day = HaArray(drs->all_days, j);
    KheDrsDayOpenShifts(day, ddr, drs);
  }

/* sort drs->open_exprs by postorder index, then open them */
HaArraySort(drs->open_exprs, &KheDrsExprPostorderCmp);
HaArrayForEach(drs->open_exprs, e, i)
  KheDrsExprOpen(e, drs);
HaArrayForEach(drs->open_exprs, e, i)
  KheDrsExprNotifySigners(e, drs);
}
It opens the selected days, shifts, mtasks, and expressions.
All days must be open before any mtasks are opened, because
opening a task includes assigning open day indexes to its task on
day objects, and only open days have those.  The two-stage process
for opening expressions is discussed elsewhere.
@PP
After opening comes searching, but we'll look at closing first:
@ID @C {
void KheDrsSolveClose(KHE_DYNAMIC_RESOURCE_SOLVER drs,
  KHE_DRS_PACKED_SOLN soln, bool check_rerun_costs)
{
  KHE_DRS_DAY day;  int i, j;  KHE_DRS_EXPR e;  KHE_MONITOR m;
  KHE_DRS_TASK_ON_DAY dtd;  KHE_DRS_PACKED_SOLN_DAY rd;
  KHE_DRS_RESOURCE dr;  KHE_DRS_MONITOR dm;

  /* traverse soln, closing assigned tasks */
  HaArrayForEachReverse(soln->days, rd, i)
    HaArrayForEach(rd->prev_tasks, dtd, j)
      if( dtd != NULL )
      {
	dr = KheDrsResourceSetResource(drs->open_resources, j);
	KheDrsTaskClose(dtd->encl_dt, dr);
      }

  /* close the open days, including closing unassigned tasks */
  HaArrayForEach(drs->open_days, day, i)
    KheDrsDayClose(day, drs);

  /* close the open expressions */
  HaArrayForEach(drs->open_exprs, e, i)
    KheDrsExprClose(e, drs);

  /* close the open resources */
  KheDrsResourceSetForEach(drs->open_resources, dr, i)
    KheDrsResourceClose(dr, drs);

  /* close drs */
  KheDrsResourceSetClear(drs->open_resources);
  HaArrayClear(drs->open_days);
  HaArrayClear(drs->open_shifts);
  HaArrayClear(drs->open_exprs);

  /* optionally check rerun costs */
  if( check_rerun_costs )
    HaArrayForEach(drs->all_monitors, dm, i)
      KheDrsMonitorCheckRerunCost(dm);

  /* check that DRS soln cost equals KHE soln cost */
  HnAssert(KheSolnCost(drs->soln) == soln->cost,
    "KheDrsSolveClose internal error: soln %.5f != packed %.5f",
    KheCostShow(KheSolnCost(drs->soln)), KheCostShow(soln->cost));
}
}
This closes everything that was previously opened.  Parameter
@C { soln } says which solution to install into the KHE platform:
a new best solution, or the original.  Tasks assigned
by @C { soln } are closed first, once for each day they are assigned,
so multi-day tasks are closed several times over.  Open but unassigned
tasks are closed later, when their days are closed.  Either way, a task
may be closed more than once; but as we saw
in Appendix {@NumberOf dynamic_impl.tasks}, @C { KheDrsTaskClose } can
safely be called repeatedly:  only the first call does anything.
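@PP
The `only the first call does anything' behaviour of @C { KheDrsTaskClose }
amounts to a simple guard flag.  Here is a minimal sketch of the pattern,
using hypothetical names rather than the real KHE types:

```c
#include <stdbool.h>

/* hypothetical stand-in for KHE_DRS_TASK; "open" is the guard */
typedef struct {
  bool open;          /* true while the task is open */
  int  close_count;   /* how many times real close work has run */
} drs_task;

static void drs_task_open(drs_task *dt)
{
  dt->open = true;
  dt->close_count = 0;
}

static void drs_task_close(drs_task *dt)
{
  if( !dt->open )
    return;           /* already closed: repeat calls are harmless */
  dt->open = false;
  dt->close_count++;  /* the real close work would happen once, here */
}
```

With this guard in place, closing a task once per assigned day is safe
even for multi-day tasks.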
@PP
Here now is the function for carrying out the search:
@ID {0.95 1.0} @Scale @C {
bool KheDrsSolveSearch(KHE_DYNAMIC_RESOURCE_SOLVER drs,
  bool testing, KHE_DRS_PACKED_SOLN *final_soln)
{
  KHE_DRS_DAY prev_day, next_day;  KHE_DRS_SOLN root_soln, soln;
  KHE_DRS_SOLN_LIST soln_list, root_soln_list;
  int i, made_count, undominated_count, kept_count;

  /* priority search is different */
  if( drs->solve_priqueue )
    return KheDrsSolvePrioritySearch(drs, testing, final_soln);

  /* do the search */
  root_soln_list = soln_list = KheDrsSolnListMake(drs);
  root_soln = KheDrsSolnMake(NULL, drs->solve_start_cost, drs);
  KheDrsSolnListAddSoln(soln_list, root_soln, drs);
  KheDrsAddStats(drs, testing, soln_list, NULL, 0, 0, 0);
  prev_day = NULL;
  HaArrayForEach(drs->open_days, next_day, i)
  {
    KheDrsSolnListExpand(soln_list, prev_day, next_day, drs);
    made_count = next_day->soln_made_count;
    soln_list = KheDrsDayGatherSolns(next_day, drs);
    undominated_count = KheDrsSolnListCount(soln_list);
    KheDrsSolnListSortAndReduce(soln_list, drs);
    kept_count = KheDrsSolnListCount(soln_list);
    KheDrsAddStats(drs, testing, soln_list, next_day, made_count,
      undominated_count, kept_count);
    prev_day = next_day;
  }

  /* set *final_soln, to NULL if there isn't one */
  if( KheDrsSolnListCount(soln_list) == 1 )
  {
    soln = KheDrsSolnListFirstSoln(soln_list);
    KheDrsPrintSearchStatistics(soln, drs);
    *final_soln = KheDrsPackedSolnBuildFromSoln(soln, drs);
  }
  else
    *final_soln = NULL;

  /* free root_soln (others later) and return true if have final soln */
  KheDrsSolnFree(root_soln, drs);
  KheDrsSolnListFree(root_soln_list, drs);
  return *final_soln != NULL;
}
}
First, if the priority queue is in use, the function passes its job
on to a completely different function, @C { KheDrsSolvePrioritySearch }.
Otherwise it makes @C { root_soln_list },
containing just the root solution, the one not lying in any day.  Then,
for each open day @C { next_day }, it calls @C { KheDrsSolnListExpand }
(Appendix {@NumberOf dynamic_impl.sets}) to build a new solution set, by
trying all ways to expand the solutions of @C { soln_list } by
one day.  Then @C { KheDrsDayGatherSolns } traverses
this solution set and adds each solution to a new @C { soln_list },
which is then sorted and optionally reduced in size by
@C { KheDrsSolnListSortAndReduce }, ready for the next iteration.
At the end, if @C { soln_list } contains exactly one solution, we have a
new best solution, so the function calls @C { KheDrsPackedSolnBuildFromSoln }
to convert it into a packed solution, and returns @C { true }.
Otherwise it returns @C { false }.
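@PP
Stripped of KHE's types, the loop above is the classic expand,
gather, and sort-and-reduce cycle of dynamic programming over days.
The following self-contained sketch (hypothetical names, with each
solution reduced to a bare cost and @C { KEEP } standing in for the
daily expansion limit) has the same control flow:

```c
#include <stdlib.h>

#define KEEP 4              /* solutions kept per day (cf. daily limits) */
#define MAX_CHOICES 8       /* sketch-only bound on choices per day */

static int cmp_int(const void *a, const void *b)
{
  return *(const int *) a - *(const int *) b;
}

/* expand each kept cost by each choice, sort, keep the best KEEP,
   and return the best cost after ndays layers */
int layered_search(const int *choices, int nchoices, int ndays)
{
  int kept[KEEP], made[KEEP * MAX_CHOICES];
  int nkept, nmade, d, s, c;
  kept[0] = 0;              /* the root solution, lying in no day */
  nkept = 1;
  for( d = 0;  d < ndays;  d++ )
  {
    nmade = 0;
    for( s = 0;  s < nkept;  s++ )             /* expand by one day */
      for( c = 0;  c < nchoices;  c++ )
        made[nmade++] = kept[s] + choices[c];
    qsort(made, nmade, sizeof(int), cmp_int);  /* sort ... */
    nkept = nmade < KEEP ? nmade : KEEP;       /* ... and reduce */
    for( s = 0;  s < nkept;  s++ )
      kept[s] = made[s];
  }
  return kept[0];           /* cost of the best final solution */
}
```

The real search also applies dominance testing before reducing, which
this sketch omits.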
@PP
To complete our presentation of solving, we'll skip forward in the
source file to the function called by the user to carry out a solve:
@ID {0.95 1.0} @Scale @C {
bool KheDynamicResourceSolverSolve(KHE_DYNAMIC_RESOURCE_SOLVER drs,
  bool priqueue, bool extra_selection, bool expand_by_shifts,
  bool shift_pairs, bool correlated_exprs, int daily_expand_limit,
  int daily_prune_trigger, int resource_expand_limit, int dom_approx,
  KHE_DRS_DOM_KIND main_dom_kind, bool cache,
  KHE_DRS_DOM_KIND cache_dom_kind)
{
  KHE_COST cost;  jmp_buf env;  bool res;

  if( setjmp(env) == 0 )
  {
    KheSolnJmpEnvBegin(drs->soln, &env);
    res = KheDynamicResourceSolverDoSolve(drs, priqueue, extra_selection,
      expand_by_shifts, shift_pairs, correlated_exprs, daily_expand_limit,
      daily_prune_trigger, resource_expand_limit, dom_approx,
      main_dom_kind, cache, cache_dom_kind, false, &cost);
    KheSolnJmpEnvEnd(drs->soln);
  }
  else
  {
    KheSolnJmpEnvEnd(drs->soln);
    res = false;
  }
  return res;
}
}
As shown, this code ensures that if memory runs out during the
solve, the resulting long jump will return here.  It then calls
@C { KheDynamicResourceSolverDoSolve }, with the extra @C { false }
argument indicating that this is a real solve and not a test:
@ID {0.95 1.0} @Scale -0.1px @Break @C {
bool KheDynamicResourceSolverDoSolve(KHE_DYNAMIC_RESOURCE_SOLVER drs,
  bool priqueue, bool extra_selection, bool expand_by_shifts,
  bool shift_pairs, bool correlated_exprs, int daily_expand_limit,
  int daily_prune_trigger, int resource_expand_limit, int dom_approx,
  KHE_DRS_DOM_KIND main_dom_kind, bool cache,
  KHE_DRS_DOM_KIND cache_dom_kind, bool test_only, KHE_COST *cost)
{
  int rcount, i;  KHE_RESOURCE r;  KHE_INTERVAL ddr;  KHE_TIMER timer;
  KHE_DRS_PACKED_SOLN init_soln, new_best_soln, junk;  KHE_COST init_cost;
  char buff[20];  KHE_TIME_GROUP tg1, tg2;

  /* open, search, close, and possibly rerun */
  init_cost = KheSolnCost(drs->soln);
  KheDrsSolveOpen(drs, true, priqueue, extra_selection, expand_by_shifts,
    shift_pairs, correlated_exprs, daily_expand_limit, daily_prune_trigger,
    resource_expand_limit, dom_approx, main_dom_kind, cache,
    cache_dom_kind, &init_soln);
  if( !KheDrsSolveSearch(drs, true, &new_best_soln) )
  {
    /* no new best; close using init_soln */
    KheDrsSolveClose(drs, init_soln, false);
    *cost = KheSolnCost(drs->soln);
  }
  else if( RERUN )
    ... omitted ...
  else
  {
    /* have new best solution, close using that */
    KheDrsSolveClose(drs, new_best_soln, false);
    *cost = KheSolnCost(drs->soln);
  }

  /* delete init_soln and (if present) new_best_soln */
  KheDrsPackedSolnDelete(init_soln, drs);
  if( new_best_soln != NULL )
    KheDrsPackedSolnDelete(new_best_soln, drs);

  /* clear out selections ready for a fresh set of resources and days */
  KheResourceSetClear(drs->selected_resource_set);
  HaArrayClear(drs->selected_day_ranges);

  /* return true if the solution has been improved */
  HnAssert(KheSolnCost(drs->soln) <= init_cost,
    "KheDynamicResourceSolverDoSolve internal error: new %.5f > old %.5f",
    KheCostShow(KheSolnCost(drs->soln)), KheCostShow(init_cost));
  return KheSolnCost(drs->soln) < init_cost;
}
}
It calls on @C { KheDrsSolveOpen }, @C { KheDrsSolveSearch }, and
@C { KheDrsSolveClose }, and manages two packed solutions, @C { init_soln }
holding the initial solution, and @C { new_best_soln } holding the
new best solution if @C { KheDrsSolveSearch } finds one.  For the
@C { RERUN } code see Appendix {@NumberOf dynamic_impl.solving.testing}.
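@PP
The memory-exhaustion protection in @C { KheDynamicResourceSolverSolve }
is an instance of a standard @C { setjmp }/@C { longjmp } pattern: an
allocation failure deep inside the solve jumps straight back to the
wrapper.  Here is a minimal self-contained sketch, with hypothetical
names; the real @C { KheSolnJmpEnvBegin } and @C { KheSolnJmpEnvEnd }
do more than this:

```c
#include <setjmp.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

static jmp_buf *curr_env = NULL;   /* where to escape to, if anywhere */

static void env_begin(jmp_buf *env) { curr_env = env; }
static void env_end(void)           { curr_env = NULL; }

/* allocator that escapes via longjmp instead of returning NULL */
static void *alloc_or_escape(size_t n)
{
  void *p = malloc(n);
  if( p == NULL && curr_env != NULL )
    longjmp(*curr_env, 1);
  return p;
}

/* a solve that may run out of memory; fail simulates exhaustion */
static bool do_solve(bool fail)
{
  void *mem = alloc_or_escape(fail ? (size_t) -1 : 64);
  free(mem);
  return true;
}

/* wrapper in the style of KheDynamicResourceSolverSolve */
bool solve(bool fail)
{
  jmp_buf env;  bool res;
  if( setjmp(env) == 0 )
  {
    env_begin(&env);
    res = do_solve(fail);
    env_end();
  }
  else
  {
    env_end();               /* reached via longjmp: report failure */
    res = false;
  }
  return res;
}
```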
@End @SubSubAppendix

@SubSubAppendix
    @Tag { dynamic_impl.solving.testing }
    @Title { Testing }
@Begin
@LP
In general it is not possible to compare the cost of a solver
solution with a KHE cost, because incomplete solutions have no
KHE cost.  But when a new best solution is found, it is complete.
If it is installed into the KHE platform its cost can be compared
with the KHE cost and should be equal to it.  This important
correctness check is made at the end of every successful solve.
@PP
If the check fails, the solver has calculated the cost of one or
more constraints incorrectly.  But working out which constraints
are wrong is not easy.  To help with this, the solver offers a
@C { RERUN } compiler flag and a @C { rerun_soln } field in the solver
holding a packed solution.  If the @C { RERUN } flag is 1 and the
@C { rerun_soln } field of the solver is non-@C { NULL }, the solve is
a @I { rerun }.  This means that instead of trying many different
assignments, only the assignments from @C { drs->rerun_soln } are tried.
This reduces the amount of debug output, while still executing the
code that led to the incorrect cost.
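@PP
The effect of a rerun on the search can be sketched as follows.  The
names and types are hypothetical simplifications of
@C { KheDrsResourceOnDayIsFixed } and its callers: when a rerun
solution is installed, every resource on day reports a fixed
assignment drawn from it, so the search follows a single path:

```c
#include <stdbool.h>
#include <stddef.h>

#define DAYS 2
#define RESOURCES 2

/* hypothetical packed solution: one assignment per resource per day */
typedef struct {
  int assign[DAYS][RESOURCES];
} packed_soln;

typedef struct {
  packed_soln *rerun_soln;   /* non-NULL exactly when rerunning */
} solver;

/* on a rerun, report a fixed assignment taken from the rerun solution;
   on an ordinary run, report nothing fixed so all choices are tried */
static bool resource_on_day_is_fixed(solver *drs, int day, int r,
  int *fixed_assignment)
{
  if( drs->rerun_soln == NULL )
    return false;
  *fixed_assignment = drs->rerun_soln->assign[day][r];
  return true;
}
```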
@PP
We have already seen the key piece of code here, within
@C { KheDrsResourceOnDayIsFixed }
(Appendix {@NumberOf dynamic_impl.expansion.resource_setup}).
There, if the current run is a rerun, every resource on day is
reported to have a fixed assignment, which @C { KheDrsResourceOnDayIsFixed }
retrieves from @C { drs->rerun_soln }.  Previously omitted code from
@C { KheDynamicResourceSolverDoSolve } does the rest:
@ID {0.94 1.0} @Scale @C {
if( !KheDrsSolveSearch(drs, true, &new_best_soln) )
{
  /* no new best; close using init_soln */
  KheDrsSolveClose(drs, init_soln, false);
  *cost = KheSolnCost(drs->soln);
}
else if( RERUN )
{
  /* close using init_soln */
  KheDrsSolveClose(drs, init_soln, false);

  /* rerun new_best_soln (drs is closed on new_best_soln after this) */
  KheDrsRerun(drs, priqueue, extra_selection, expand_by_shifts, shift_pairs,
    correlated_exprs, daily_expand_limit, daily_prune_trigger,
    resource_expand_limit, dom_approx, main_dom_kind, cache,
    cache_dom_kind, new_best_soln);
  *cost = KheSolnCost(drs->soln);

  /* if test only, return to init_soln */
  if( test_only )
  {
    KheDrsSolveOpen(drs, false, priqueue, extra_selection, expand_by_shifts,
      shift_pairs, correlated_exprs, daily_expand_limit, daily_prune_trigger,
      resource_expand_limit, dom_approx, main_dom_kind, cache,
      cache_dom_kind, &junk);
    KheDrsPackedSolnDelete(junk, drs);
    KheDrsSolveClose(drs, init_soln, false);
  }
}
}
If a new best solution is found and a rerun is wanted, the solve
is closed using @C { init_soln }, returning the solver to the
initial state.  Then @C { KheDrsRerun } is called to carry out the
rerun.  We'll see this function in a moment.  It leaves the solver
with the new best solution reinstalled.  Finally, if we are only
testing, we open for solving and immediately close again with the
initial solution.  This returns the solver once again to the
initial state, which is what is wanted when testing.
@PP
Here is @C { KheDrsRerun }:
@ID {0.95 1.0} @Scale @C {
void KheDrsRerun(KHE_DYNAMIC_RESOURCE_SOLVER drs, bool priqueue,
  bool extra_selection, bool expand_by_shifts, bool shift_pairs,
  bool correlated_exprs, int daily_expand_limit, int daily_prune_trigger,
  int resource_expand_limit, int dom_approx,
  KHE_DRS_DOM_KIND main_dom_kind, bool cache,
  KHE_DRS_DOM_KIND cache_dom_kind, KHE_DRS_PACKED_SOLN soln)
{
  KHE_DRS_PACKED_SOLN init_soln2, new_best_soln2;
  int i;  KHE_DRS_MONITOR dm;

  /* carry out the open, search, and close of the rerun */
  drs->rerun_soln = soln;
  HaArrayForEach(drs->all_monitors, dm, i)
    KheDrsMonitorInitRerunCost(dm, drs);
  KheDrsSolveOpen(drs, false, priqueue, extra_selection, expand_by_shifts,
    shift_pairs, correlated_exprs, daily_expand_limit, daily_prune_trigger,
    resource_expand_limit, dom_approx, main_dom_kind, cache,
    cache_dom_kind, &init_soln2);
  if( !KheDrsSolveSearch(drs, false, &new_best_soln2) )
    HnAbort("KheDrsRerun internal error (rerun failed to find new best)");
  HnAssert(soln->cost == new_best_soln2->cost,
    "KheDrsRerun internal error (rerun new best has different cost)");
  KheDrsSolveClose(drs, new_best_soln2, true);
  drs->rerun_soln = NULL;

  /* delete the packed solutions made by this function */
  KheDrsPackedSolnDelete(init_soln2, drs);
  KheDrsPackedSolnDelete(new_best_soln2, drs);
}
}
This begins by setting @C { drs->rerun_soln } to its @C { soln } parameter,
signalling to the rest of the code that this is a rerun using @C { soln }.
Then it initializes the rerun cost fields in the monitors; we'll
come to those shortly.  After that we have the usual open-search-close
sequence, followed by @C { drs->rerun_soln = NULL } to indicate that the
rerun is over.  Some tidying up ends the function.
@PP
We mentioned earlier the problem of finding out which code has
calculated an incorrect cost.  This is simplified by the
@I { rerun cost expressions }.  Each descendant of type
@C { KHE_DRS_EXPR_COST } contains a @C { monitor } field of
type @C { KHE_DRS_MONITOR }:
@ID @C {
typedef struct khe_drs_monitor_rec *KHE_DRS_MONITOR;

struct khe_drs_monitor_rec {
  KHE_MONITOR		monitor;
  KHE_COST		rerun_open_and_search_cost;
  KHE_COST		rerun_open_and_close_cost;
  KHE_DRS_EXPR		sample_expr;
};
}
as we saw in Appendix {@NumberOf dynamic_impl.constraints.monitors}.
It contains the monitor plus two costs used only during
reruns.  Whenever an expression containing monitor @C { dm } reports
a cost when opening or searching, it also adds the cost it reports
to @C { dm->rerun_open_and_search_cost }; and whenever it reports
a cost when opening or closing, it also adds the cost it reports
to @C { dm->rerun_open_and_close_cost }.  These reports are made
by calls, which we have omitted in our presentations so far,
to @C { KheDrsMonitorUpdateRerunCost }.  This happens only
during reruns.
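@PP
The bookkeeping can be sketched as follows, with hypothetical names
standing in for the real monitor code.  A cost reported while opening
contributes to both accumulators, since opening begins both the
open-and-search and the open-and-close sequences:

```c
typedef long khe_cost;   /* stand-in for KHE_COST */

/* which phase of the rerun a cost report comes from */
typedef enum { PHASE_OPEN, PHASE_SEARCH, PHASE_CLOSE } phase;

/* the two rerun accumulators from the monitor record */
typedef struct {
  khe_cost rerun_open_and_search_cost;
  khe_cost rerun_open_and_close_cost;
} monitor_rec;

static void monitor_init_rerun(monitor_rec *m)
{
  m->rerun_open_and_search_cost = 0;
  m->rerun_open_and_close_cost = 0;
}

/* mirror a reported cost into the appropriate accumulator(s) */
static void monitor_update_rerun(monitor_rec *m, khe_cost cost, phase p)
{
  if( p == PHASE_OPEN || p == PHASE_SEARCH )
    m->rerun_open_and_search_cost += cost;
  if( p == PHASE_OPEN || p == PHASE_CLOSE )
    m->rerun_open_and_close_cost += cost;
}
```

At the end of a correct rerun, both accumulators equal the monitor's
true cost.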
@PP
At the end of a rerun, when a new best solution is installed,
we should have
@ID @C {
dm->rerun_open_and_search_cost == KheMonitorCost(dm->monitor)
}
and
@ID @C {
dm->rerun_open_and_close_cost == KheMonitorCost(dm->monitor)
}
@C { KheDrsSolveClose } checks these conditions
when its @C { check_rerun_costs } parameter is @C { true },
by calling @C { KheDrsMonitorCheckRerunCost }, as we saw in its
listing earlier.  This prints debug output pointing
to the failures, if there are any.  In this way we can track
down the misbehaving expressions.
@PP
Some monitors give rise to multiple expression trees, when they
are broken into independent parts.  That's fine; the different
expression trees share the same monitor object.
@PP
The solver also offers debug code to help with working out what is
going wrong.  We won't detail it here, but one can name a particular
cost expression (one previously found to be going wrong) by setting
the @C { RERUN_MONITOR_ID } compiler flag.  This produces
debug output during the rerun showing how the cost of that
expression is calculated during opening, closing, and evaluation
on each day.  On a regular run this output would be incomprehensible,
but on a rerun there is just the one search path to follow.
@PP
When testing, the solver will also collect statistics about
the current solve.  These are stored in fields of
@C { KHE_DYNAMIC_RESOURCE_SOLVER } that we omitted before:
@ID @C {
#if TESTING
  KHE_TIMER			timer;
  HA_ARRAY_INT			solns_made_per_day;
  HA_ARRAY_INT			table_size_per_day;
  HA_ARRAY_FLOAT		running_time_per_day;
  HA_ARRAY_INT			ancestor_freq;
  HA_ARRAY_INT			dominates_freq;
  int				max_open_day_index;
#endif
}
Skipping details, there is a @C { KheDrsAddStats } function,
called from @C { KheDrsSolveSearch } as we saw earlier,
that adds values to these statistics;
and then the public @C { KheDynamicResourceSolverSolveStatsCount }
and @C { KheDynamicResourceSolverSolveStats } functions return
these values to the user.
@End @SubSubAppendix

@EndSubSubAppendices
@End @SubAppendix

@EndSubAppendices
@End @Appendix
