KHE diary for 2019
==================

31 December 2018.  Designed, implemented, documented, and tested
  resource swapping.  Without swapping:

    [ "INRC2-4-035-2-8875", 1 solution, in 26.3 secs: cost 0.01585 ]

    [ "INRC2-4-035-2-8875", 4 threads, 8 solves, 8 distinct costs, 47.7 secs:
      0.01585 0.01635 0.01690 0.01700 0.01760 0.01800 0.01820 0.01835
    ]

  With swapping:

    [ "INRC2-4-035-2-8875", 1 solution, in 32.9 secs: cost 0.01550 ]

    [ "INRC2-4-035-2-8875", 4 threads, 8 solves, 8 distinct costs, 51.2 secs:
      0.01550 0.01655 0.01690 0.01700 0.01760 0.01800 0.01820 0.01835
    ]

  So it does help, but it's a bit slow.  I'll keep it on for now.
  After all it is quite different from all the other repairs.

  Added light grey boxes to timetables, denoting soft unavailable days.

  Implemented edge_adjust4 in resource matching, which favours
  assigning the same shift type on successive days.  First results:

    [ "INRC2-4-035-2-8875", 1 solution, in 36.4 secs: cost 0.01630 ]

    [ "INRC2-4-035-2-8875", 4 threads, 8 solves, 8 distinct costs, 69.0 secs:
      0.01565 0.01630 0.01645 0.01655 0.01670 0.01760 0.01800 0.01895
    ]

  It's slower and the results are worse.  But must look at them in
  detail first, to see why.  Turning off adjust3 and adjust4 when
  the time set is not immediately after the previous one gives

    [ "INRC2-4-035-2-8875", 1 solution, in 34.0 secs: cost 0.01700 ]

    [ "INRC2-4-035-2-8875", 4 threads, 8 solves, 8 distinct costs, 65.1 secs:
      0.01635 0.01670 0.01700 0.01715 0.01740 0.01750 0.01765 0.01775
    ]

  Turning off edge adjust 3 (which tries for short sequences):

    [ "INRC2-4-035-2-8875", 1 solution, in 27.4 secs: cost 0.01680 ]

    [ "INRC2-4-035-2-8875", 4 threads, 8 solves, 8 distinct costs, 64.8 secs:
      0.01635 0.01670 0.01700 0.01715 0.01740 0.01750 0.01765 0.01775
    ]

  This is all bumping along the bottom.  I need some better ideas.
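
  The edge_adjust4 idea above (favouring the same shift type on
  successive days) could be sketched as an edge-cost tweak in the
  matching.  This is only an illustration with invented names, not
  the real resource matching code:

```c
/* Illustrative sketch of edge_adjust4 (invented names, not KHE's
   code): when costing the matching edge that would give a resource
   some shift on day d, subtract a small bonus if the resource worked
   the same shift type on day d-1, so the matching favours repeating
   a shift type on successive days. */

#define NO_SHIFT (-1)

/* return the adjusted cost of assigning this_shift_type, given the
   shift type worked on the previous day (or NO_SHIFT if free) */
int AdjustedEdgeCost(int base_cost, int prev_shift_type,
  int this_shift_type, int bonus)
{
  if (prev_shift_type != NO_SHIFT && prev_shift_type == this_shift_type)
    return base_cost - bonus;   /* favour same shift type as yesterday */
  return base_cost;
}
```

  Whether the bonus should apply only when the time set immediately
  follows the previous one is exactly the question the runs above
  are probing.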

1 January 2019.  Here's a comparison between what I'm currently getting
  and the LOR solution.  First, what I'm getting:

    [ "INRC2-4-035-2-8875", 1 solution, in 36.0 secs: cost 0.01700 ]

    Summary						Inf. 	Obj.
    ---------------------------------------------------------------
    Assign Resource Constraint (6 points) 	   		180
    Avoid Unavailable Times Constraint (9 points) 	   	120
    Cluster Busy Times Constraint (25 points) 	   		920
    Limit Active Intervals Constraint (23 points) 	   	480
    ---------------------------------------------------------------
      Grand total (63 points)		            	       1700 

  The total resource overload is 33, plus there are 6 unassigned
  shifts.  The available workload is 24, so it can't cover it all,
  but it would make a big difference if it could all be used.

  The LOR solution has cost 1155:

    Summary						Inf. 	Obj.
    ---------------------------------------------------------------
    Assign Resource Constraint (11 points) 	   		330
    Avoid Unavailable Times Constraint (9 points) 	   	150
    Cluster Busy Times Constraint (15 points) 	   		450
    Limit Active Intervals Constraint (10 points) 	   	225
    ---------------------------------------------------------------
      Grand total (45 points) 	   			       1155
  
  Reducing this gap is the aim.

  The LOR solution has 11 unassigned tasks, 5 more than the KHE18
  solution.  This seems to have freed it up to do much better in
  other respects.  The total resource overload is only 18.  It's
  about equal on unavailable times.  The big differences are in
  cluster (both working weekends and total workload) and limit
  active (all kinds: same shift, total, and free days).

  Starting work on the new grouping idea, which I'm calling
  plateau grouping.  There is an array called suitable_monitors
  which contains exactly the limit active intervals monitors we need.

2 January 2019.  Working on plateau grouping.  It's all
  good so far.  So far I've partitioned the tasks by domain,
  and I'm finding and removing plateaus in each partition.
  So all I have to do, really, is group each plateau as I
  remove it.  This will depend on the length of the plateau
  vs the minimum and maximum limits.  First results:

    [ "INRC2-4-035-2-8875", 1 solution, in 23.1 secs: cost 0.01580 ]

    [ "INRC2-4-035-2-8875", 4 threads, 8 solves, 8 distinct costs, 41.0 secs:
      0.01510 0.01580 0.01605 0.01655 0.01680 0.01700 0.01710 0.01770
    ]

  My best results so far.  Ungrouping directly after time sweep (that
  is, before the first repair):

    [ "INRC2-4-035-2-8875", 1 solution, in 19.3 secs: cost 0.01590 ]

    [ "INRC2-4-035-2-8875", 4 threads, 8 solves, 8 distinct costs, 55.5 secs:
      0.01520 0.01590 0.01600 0.01605 0.01630 0.01650 0.01675 0.01700
    ]

  Nothing in it wrt cost, but a fair bit slower.  So scrap that idea.
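
  The plateau idea from 2 January could be sketched like this
  (invented names and data layout, not KHE's implementation): given
  a per-day count of the tasks in one domain partition, a plateau is
  a maximal run of days with the same count, and grouping it then
  depends on its length versus the constraint's min and max limits.

```c
/* Illustrative sketch only: a "plateau" here is a maximal run of
   consecutive days on which the same number of tasks from one
   domain partition are running; empty stretches are skipped. */

typedef struct {
  int start, stop;   /* inclusive day range of the plateau */
  int height;        /* number of tasks running on those days */
} Plateau;

/* scan counts[0..days-1], writing maximal equal-count runs to out[];
   returns the number of plateaus found */
int FindPlateaus(const int *counts, int days, Plateau *out)
{
  int n = 0, i = 0;
  while (i < days) {
    int j = i;
    while (j + 1 < days && counts[j + 1] == counts[i])
      j++;
    if (counts[i] > 0) {
      out[n].start = i;
      out[n].stop = j;
      out[n].height = counts[i];
      n++;
    }
    i = j + 1;
  }
  return n;
}
```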

  I've established that my solution uses 13 tasks that don't actually
  need to be assigned, whereas the Omer solution uses only 9.  So
  there is (13 - 9) * 20 = 80 in extra cost there.

3 January 2019.  Did some general thinking, and also did a better
  job of referencing the good ideas in legrain.

  Ran the new version on the four-week instances.  Instance 8875 is
  better, sure enough, but the average cost is slightly worse.  The
  run time is about 10% faster.  Altogether not a significant
  improvement.

  Moved clash checking functions to the platform.

  Found bug in recent code for resource matching, but only minor.
  Still it suggests that new tests might be needed.

6 January 2019.  Still working on, and documenting, a more
  principled approach to task grouping.

7 January 2019.  Still working on, and documenting, a more
  principled approach to task grouping.  Today I sorted out
  non-equivalent tasks running at the same times, and rewrote
  the documentation up to the end of time-based grouping.

8 January 2019.  Still working on, and documenting, a more
  principled approach to task grouping.

9 January 2019.  Still working on, and documenting, a more
  principled approach to task grouping.  Realized that
  plateaus aren't quite what I thought they were.  Have
  to sort out the consequences.

11 January 2019.  Still working on, and documenting, a more
  principled approach to task grouping.  The documentation
  seems to be more or less complete now; good enough to
  support a start to implementing.

12 January 2019.  Refereed a paper today.

13 January 2019.  Sorted out the nuances of connecting time-based
  grouping with the higher-level groupers.  All documented and
  ready to implement.  A whole fresh .c file is looking best.
  Started on the implementation, KheGroupByResourceConstraints
  in file khe_group_by_rc.c.  Also re-organized khe_solvers.h to
  take account of the new chapter, "Resource-Structural Solvers".

  Wrote KheGroupByResourceConstraintsSolverMake, it seems pretty
  good, and now I have debug code to prove that it's working.

14 January 2019.  Re-implementing grouping by frame, now called
  grouping by resource constraints.  I've installed the code
  for Phase 0, now I need to audit it.

18 January 2019.  Wasted several days fiddling round with stupid
  distractions.  Back at work today.  KheEliminateCombinations
  now audited and in good shape.  KheDoCombinatorialGrouping
  also in good shape except that it calls KheGroupTasks. 

19 January 2019.  KheTaskingGroupByResourceConstraints can only
  realistically deal with *all* the tasks of a given resource
  type, not with an arbitrary subset of them.  I've changed its
  interface (removing the tasking parameter) to make this clear.

  Finished KheGroupTasksCombinatorial, but combinatorial grouping
  in general still needs work, because it seems to assume that
  all tasks initially have duration 1.

21 January 2019.  Lost yesterday to gardening.  Today I
  rewrote the documentation of combinatorial grouping.
  It looks pretty good but the test will be implementing it.

22 January 2019.  Reviewing yesterday's stuff.  Rewrote time-based
  grouping, it now seems to be the real deal.
  
23 January 2019.  Revised everything and rewrote profile grouping.
  All the documentation is done, and I'm ready to implement.
  
24 January 2019.  Lost most of the day to gardening.  Working on
  KheTimeBasedGrouping.
  
25 January 2019.  Have a clean compile of KheTimeBasedGrouping.
  
26 January 2019.  Have a clean compile of combinatorial grouping.
  Combination elimination is also done.  So it's just profile
  grouping now.
  
27 January 2019.  Decided to skip auditing what I've done, and
  go on to implementing profile grouping.

  Changed the implementation of KheSetIntersectCount so that
  it runs faster when intersecting a small set with a large
  one.  Could do more of this kind of thing.
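
  The speed-up might look something like this sketch (sorted integer
  sets, invented names; not KHE's actual KheSetIntersectCount):
  iterate over the smaller set and binary-search the larger, rather
  than merging both.

```c
#include <stdbool.h>

/* binary search: is x in sorted array a[0..n-1]? */
static bool SetContains(const int *a, int n, int x)
{
  int lo = 0, hi = n - 1;
  while (lo <= hi) {
    int mid = lo + (hi - lo) / 2;
    if (a[mid] == x)
      return true;
    else if (a[mid] < x)
      lo = mid + 1;
    else
      hi = mid - 1;
  }
  return false;
}

/* count elements common to sorted sets a and b; probing the large
   set from the small one costs O(small * log(large)) rather than
   the O(small + large) of a merge, a win when sizes differ a lot */
int SetIntersectCount(const int *a, int an, const int *b, int bn)
{
  if (an > bn)
    return SetIntersectCount(b, bn, a, an);  /* make a the smaller */
  int count = 0;
  for (int i = 0; i < an; i++)
    if (SetContains(b, bn, a[i]))
      count++;
  return count;
}
```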

  All of the old code is commented out now.

  Done a fairly major reorganization of how things are named,
  and added upward links as well.  So I need to sort out the
  code corresponding to those changes.
  
28 January 2019.  Continuing with auditing and polishing the
  reorganized grouping code.  All done and audited except that
  the central part of profile grouping is still to do.
  
29 January 2019.  Did the central part of profile grouping today.
  All written, ready for a careful audit tomorrow.  I started on
  this some time between 3 and 6 January, it has been a big job,
  four weeks' worth.
  
30 January 2019.  Struggling with the specification of time-based
  grouping.  I need to review the new documentation I've written
  and carry on from there.
  
1 February 2019.  Still struggling with the specification of
  time-based grouping.  Actually it seems to be pretty good
  now, except that it does not have a plan for when a root
  task is already assigned to a resource.
  
2 February 2019.  Sorting out what to do with tasks that are
  already assigned.  All documented and I have started to
  implement.

3 February 2019.  Had another look at the documentation and
  started to work over the implementation from the top.  Begun
  work on KheTimeBasedGrouping and its helper functions.

4 February 2019.  At last, KheTimeBasedGrouping documentation
  is finished and the implementation now accords with it.

5 February 2019.  Auditing the KheTimeBasedGrouping documentation
  and implementation; it's all good and now ready for testing.
  Also audited combination elimination.

6 February 2019.  Moved InAndOut out of time-based grouping,
  because it may prevent that function from finding one group
  when asked to do so during combinatorial grouping.  Audited
  combinatorial grouping.  Audited the documentation and
  implementation of profile grouping.  So grouping by resource
  constraints is at last ready to test, about one month after
  I started work on it (on or before 6 January 2019).

11 February 2019.  After finding lots of reasons for putting it
  off, I am finally getting around to testing grouping by
  resource constraints today.  I've fixed several problems but
  it's not working yet.

13 February 2019.  The cause of

    KheDoCombinatorialGroupingForInterval found {1Fri1, 1Sat1, 1Sun1}

  is that {1Fri1} returns no group, because that would be a group that
  only has a leader task, nothing more.  Fixed things to allow this
  when testing.  Combinatorial grouping seems to be working now.
  Found and fixed a small bug and that seemed to make profile
  grouping work as well.  Best of 8 is very good again:

    [ "COI-GPost", 4 threads, 8 solves, 7 distinct costs, 0.8 secs:
      0.00009 0.00010 0.00011 0.00012 0.00013 0.00013 0.00014 0.00016
    ]

  Working on instance INRC2-8-040-0-06892664, which crashed before.
  I seem to be past the crash but there is a lot of grouping, I
  need to check that it is really justified.  Actually it is
  probably not justified, judging by the infeasibility of 39 I've got:

    [ "INRC2-8-040-0-06892664", 1 solution, in 180.6 secs: cost 39.08535 ]

16 February 2019.  Debugging INRC2-8-040-0-06892664.  There is something
  wrong with time-based grouping:

    [ KheTimeBasedGrouping(gs, in {6Fri4, 6Sat4, 6Sun4, 7Mon4},
        out {6Thu4, 7Wed4}, max 2)
      KheTimeBasedGrouping made 6Fri:Night.0{6Sat:Night.4, 7Mon:Night.2}
      KheTimeBasedGrouping made 6Sat:Night.2{6Sun:Night.3, 6Fri:Night.3,
        7Mon:Night.1}
    ] KheTimeBasedGrouping returning true

  The first of these groups lacks a Sunday night shift - why?  Because
  the debug print did not print all the details.  Things seem to be
  working.

17 February 2019.  Decided to only include tasks in grouping for which
  non-assignment has a cost.  The killer argument is that we don't want
  to group a task for which non-assignment has a cost with a task for
  which non-assignment has no cost.
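
  As a sketch of this decision (with an invented Task record standing
  in for KHE's), the filter simply keeps the tasks whose
  non-assignment attracts a cost:

```c
/* Illustrative only: an invented Task record carrying the cost that
   its non-assignment would attract. */
typedef struct {
  const char *id;
  int non_assign_cost;   /* 0 means non-assignment is free */
} Task;

/* keep only tasks whose non-assignment has a cost, compacting them
   to the front of tasks[]; returns the number kept */
int FilterGroupableTasks(Task *tasks, int n)
{
  int kept = 0;
  for (int i = 0; i < n; i++)
    if (tasks[i].non_assign_cost > 0)
      tasks[kept++] = tasks[i];
  return kept;
}
```

  Only the survivors of this filter become grouping candidates, so a
  costly task can never be tied to one whose non-assignment is free.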

18 February 2019.  Results on INRC2-8-040-0-06892664, after changing
  things so that only tasks for which non-assignment attracts a cost
  get grouped:

    [ "INRC2-8-040-0-06892664", 1 solution, in 180.7 secs: cost 0.05135 ]

    [ "INRC2-8-040-0-06892664", 4 threads, 8 solves, 8 distinct costs, 6.1 mins:
      0.04805 0.05020 0.05085 0.05135 0.05190 0.05200 0.05295 0.05390
    ]

  This is an improvement on the results in my paper, which are 5840
  for both best of 1 and best of 8.  The LOR17 result is 2635, so
  this is about one-third of the way to being competitive.

  Added an optional number-of-best-solutions row to the tables
  produced by HSEval.

  Did a full INRC2-8 run, and the results were generally quite a lot
  better.  But there is one instance, INRC2-8-035-0-62987798, where
  the result had a hard cost of 1:
  
    [ "INRC2-8-035-0-62987798", 4 threads, 8 solves, 8 distinct costs, 6.0 mins:
      1.05820 1.05855 1.05970 1.06050 2.05640 2.05750 2.05785 2.05895
    ]

  So looking into that is next.  I've done the run.

19 February 2019.  Yesterday's problem was that groups were not being
  added to the task set that is used to ungroup, so there was no
  ungrouping.  Now ungrouping is occurring for the second repair,
  and that is fixing the infeasibility.

  Designed, documented, and implemented an es_nocost_off option for the
  ejection chain module, and I'm passing it down to where it needs to be
  used, but I'm not using it yet.

22 February 2019.  Working on KheTaskSetRepairStatus.  I've changed
  things so that tasks for which non-assignment has no cost are
  considered to be free time.  Then I use ejecting moves to make
  sure that the unassignments get done.

  Tested the new code on COI-GPost.xml, all good there.  Tested
  INRC2-8-035-0-62987798, old value was 5645, new value is 5470,
  so the new code which treats tasks for which unassignment has
  no cost as free time seems to be working.  Best of 8:

    [ "INRC2-8-035-0-62987798", 4 threads, 8 solves, 8 distinct costs, 6.0 mins:
      0.05110 0.05320 0.05470 0.05515 0.05635 0.05650 0.05705 0.05770
    ]

  Doing a full INRC2 run now, both 4 and 8 week.  The code may be
  working, but it's slower and this seems to have produced a small
  negative effect overall.  Results were far worse, so I am turning
  it off for now and trying a full run without it.

  Looking into the grouping to see if that is the problem.  I've
  shown only the tasks for which non-assignment produces a cost:

    <Event Id="1Wed:Early">
      <Name>1Wed:Early</Name>
      <Duration>1</Duration>
      <Time Reference="1Wed1"/>
      <Resources>
	0: <R>A=h1:P-HeadNurse=h1:1</R> (+4)
	5: <R>A=h1:P-Nurse=h1:1</R> (+4)
	10: <R>A=h1:P-Caretaker=h1:1</R>
	11: <R>A=h1:P-Caretaker=h1:2</R> (+5)
	17: <R>A=h1:P-Trainee=h1:1</R> (+4)
      </Resources>
      <EventGroups>
	...
      </EventGroups>
    </Event>

    <Event Id="1Thu:Early">
      <Name>1Thu:Early</Name>
      <Duration>1</Duration>
      <Time Reference="1Thu1"/>
      <Resources>
	0: <R>A=h1:P-HeadNurse=h1:1</R> (+4)
	5: <R>A=h1:P-Nurse=h1:1</R> (+4)
	10: <R>A=h1:P-Caretaker=h1:1</R>
	11: <R>A=h1:P-Caretaker=h1:2</R>
	12: <R>A=s30:P-Caretaker=h1:1</R>
	13: <R>A=s30:P-Caretaker=h1:2</R> (+7)
	21: <R>A=h1:P-Trainee=h1:1</R> (+4)
      </Resources>
      <EventGroups>
	...
      </EventGroups>
    </Event>

    <Event Id="1Fri:Early">
      <Name>1Fri:Early</Name>
      <Duration>1</Duration>
      <Time Reference="1Fri1"/>
      <Resources>
	3: <R>A=h1:P-Nurse=h1:1</R>
	8: <R>A=h1:P-Caretaker=h1:1</R>
	9: <R>A=s30:P-Caretaker=h1:1</R>
	10: <R>A=s30:P-Caretaker=h1:2</R>
	17: <R>A=h1:P-Trainee=h1:1</R>
	18: <R>A=s30:P-Trainee=h1:1</R>
      </Resources>
      <EventGroups>
	...
      </EventGroups>
    </Event>

  So it's correct to start two groups at 1Thu, but these should
  group caretakers, whereas they actually group other things:

    grouping from 1Thu1 (supply 0, demand 7 - 5):
    KheTimeBasedGrouping made grouped task 1Thu:Early.0{1Fri:Early.3}
    KheTimeBasedGrouping made grouped task 1Thu:Early.21{1Fri:Early.17}

  This doesn't explain everything, but it is a real problem.  I can
  use the total profile to find how many to do, and then use the
  domain-specific profiles to find which domains to do.
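
  That plan could be sketched roughly like this (invented data
  layout, not KHE's code): take the number of groups to start from
  the total profile, then draw them greedily from the domains whose
  own profiles show the most surplus demand.

```c
/* Illustrative sketch: given how many groups the total profile says
   to start, take them from the domains with the largest remaining
   surplus demand in their own profiles.  groups_per_domain[d]
   receives the number of groups to start in domain d. */
void PickDomains(const int *domain_surplus, int domains,
  int total_to_start, int *groups_per_domain)
{
  for (int d = 0; d < domains; d++)
    groups_per_domain[d] = 0;
  while (total_to_start > 0) {
    /* find the domain with the most surplus still unconsumed */
    int best = -1;
    for (int d = 0; d < domains; d++)
      if (domain_surplus[d] - groups_per_domain[d] > 0 &&
          (best < 0 || domain_surplus[d] - groups_per_domain[d] >
                       domain_surplus[best] - groups_per_domain[best]))
        best = d;
    if (best < 0)
      break;   /* no domain has surplus left */
    groups_per_domain[best]++;
    total_to_start--;
  }
}
```

  With something like this, the two caretaker groups at 1Thu would
  come out of the caretaker domain instead of whichever tasks happen
  to match first.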

23 February 2019.  Working on grouping by resource constraints,
  implementing the revised specification of profile grouping,
  which takes more account of task domains.  All documented,
  implemented, and audited, ready for testing.  First results:

    [ "INRC2-8-035-0-62987798", 1 solution, in 180.4 secs: cost 0.05070 ]
    [ "INRC2-8-030-1-27093606", 1 solution, in 180.3 secs: cost 0.03385 ]

  Seems OK, need longer tests now.  All done.  The 4-week ones are
  worse, the 8-week ones are better.

25 February 2019.  Looking into what has slowed down the solve.
  Working on INRC2-4-030-1-6753.  With grouping:

    [ "INRC2-4-030-1-6753", 1 solution, in 91.4 secs: cost 0.02300 ]

  Without grouping:

    [ "INRC2-4-030-1-6753", 1 solution, in 120.6 secs: cost 0.02280 ]

  We did get a better cost in the end, but only marginally; and
  the run time is very much worse without grouping.  So grouping
  is actually helping; the problem must be elsewhere.  Including
  all tasks in grouping, not just those that cost, produces

    [ "INRC2-4-030-1-6753", 1 solution, in 78.2 secs: cost 0.03070 ]

  So that's not a good idea.  Here is the evaluation of the LOR
  solution to 6753:

    Summary						Inf. 	Obj.
    ---------------------------------------------------------------
    Assign Resource Constraint (14 points) 	   		420
    Avoid Unavailable Times Constraint (8 points) 	   	100
    Cluster Busy Times Constraint (24 points) 	   	       1280
    Limit Active Intervals Constraint (5 points) 	   	 90
    ---------------------------------------------------------------
      Grand total (51 points) 	   			       1890

  Here's the current KHE18 evaluation:

    Summary 						Inf. 	Obj.
    ---------------------------------------------------------------
    Assign Resource Constraint (33 points) 	   		990
    Avoid Unavailable Times Constraint (11 points) 	   	160
    Cluster Busy Times Constraint (29 points) 	               1380
    Limit Active Intervals Constraint (26 points) 	   	540
    ---------------------------------------------------------------
      Grand total (99 points) 	   			       3070

  It's the assign resource and limit active intervals results that
  really stand out as inferior.  Something seems to have gone wrong,
  considering I was getting 1760 on 29 Dec 2018, better than LOR.

  The KHE18 solution has a large number of assignments to tasks
  for which non-assignment incurs no cost.  They may be the root
  of the problem.  But meanwhile, magically the cost dropped to

    [ "INRC2-4-030-1-6753", 1 solution, in 86.5 secs: cost 0.02300 ]

    Summary 						Inf. 	Obj.
    ---------------------------------------------------------------
    Assign Resource Constraint (17 points) 	   		510
    Avoid Unavailable Times Constraint (11 points) 	   	140
    Cluster Busy Times Constraint (29 points) 	   	       1320
    Limit Active Intervals Constraint (14 points) 	   	330
    ---------------------------------------------------------------
      Grand total (71 points) 	   			       2300

  It's probably about unassigned tasks mainly, but why?

  I've added code to use tasks for which non-assignment has a cost
  in preference to others when gathering unassigned tasks, but:

    [ "INRC2-4-030-1-6753", 1 solution, in 85.8 secs: cost 0.02300 ]

27 February 2019.  Found that I had not turned off nocost in all
  cases.  After turning it off in all cases I get this:

    [ "INRC2-4-030-1-6753", 1 solution, in 105.9 secs: cost 0.02235 ]

  An ambiguous result.  What about turning it on again in all cases?

  Made a full implementation of the es_nocost_off flag, and now
  I'm ready to try it.  With es_nocost_off set to false:

    [ "INRC2-4-030-1-6753", 1 solution, in 109.4 secs: cost 0.02260 ]

  With es_nocost_off set to true:

    [ "INRC2-4-030-1-6753", 1 solution, in 102.6 secs: cost 0.02235 ]

  It's slightly better with es_nocost_off set to true.  But not
  too bad, so I will keep working with es_nocost_off set to false
  and see where I can get to.

  Including all tasks in grouping, not just those that have a
  cost, gives

    [ "INRC2-4-030-1-6753", 1 solution, in 120.1 secs: cost 0.02420 ]

  which is significantly worse.  Without profile grouping:

    [ "INRC2-4-030-1-6753", 1 solution, in 112.3 secs: cost 0.02210 ]

  It is better but only marginally so.  And given the importance of
  profile grouping to (say) GPost, we can't just abandon it.

1 March 2019.  Written code to avoid cases like this when we
  widen unassigned tasks: {4Mon:Day.13, 3Sun:Early.0}.  The
  code starts the search for unassigned tasks on each day
  at the same offset in the day time group as the given task.
  The result was

    [ "INRC2-4-030-1-6753", 1 solution, in 123.3 secs: cost 0.02225 ]

  which is nothing remarkable, but we might as well stick with it.

  {2Sat, 2Sun} NU_7 -> NU_16 should give an improvement in NU_7's max
  weekends score (Constraint:10/NU_7) without doing any damage; but

    [ +TaskSetMove({2Sat:Day.3, 2Sun:Day.3}, NU_16, no_cost on) visited
      new defect 0.00000 -> 0.00030: [ A1 04810 Constraint:9/NU_16
        max working weekends - sadly NU_16's limit is 1
      new defect 0.00080 -> 0.00120: [ A1 04840 Constraint:11/NU_16
        max assignments (limit is 11)
      new defect 0.00000 -> 0.00030: [ A1 06635 Constraint:18/NU_7
        max 4 consecutive days off
      new defect 0.00000 -> 0.00030: [ A1 06665 Constraint:21/NU_7
        min 3 consecutive working days
      [ KheEjectorAugment(ej, 0.02255, ...
      ] KheEjectorAugment returning false
    failure: on augment of sub-defect Constraint:11/NU_16
    ]

  So nowhere near it, in fact.

  Changed the code for assembling sets of unassigned tasks so that
  it requires some common ground in their domains (the intersect
  count has to be at least half the domain).  Result is

    [ "INRC2-4-030-1-6753", 1 solution, in 125.0 secs: cost 0.02260 ]

  which is again marginally worse.

2 March 2019.  Tried fresh visits within KheTaskSetMoveMultiRepair,
  for each resource at depth 1.  Got this:

    [ "INRC2-4-030-1-6753", 1 solution, in 143.8 secs: cost 0.02220 ]

  which is pretty marginal but may be worth keeping.  It did
  explain an apparent anomaly in the debug output, which turned
  out to be caused by tasks having been already visited.

  Added rs_time_sweep_nocost_off and rs_rematch_nocost_off options.
  Without repair, when they are false we get this:

    [ "INRC2-4-030-1-6753", 1 solution, in 0.4 secs: cost 0.02825 ]

  When they are true, we get this:

    [ "INRC2-4-030-1-6753", 1 solution, in 0.9 secs: cost 0.02785 ]

  Previously I have been somewhat ambiguous about including these
  tasks or not, so it seems best to have an option for it.

  With repair and these options false we get

    [ "INRC2-4-030-1-6753", 1 solution, in 141.3 secs: cost 0.02220 ]

  With repair and these options true we get

    [ "INRC2-4-030-1-6753", 1 solution, in 157.5 secs: cost 0.02325 ]

  which is clearly worse.

  Allowing to_r to be busy at some but not all of the times when
  from_r is busy, treating it like when it is busy at all of the
  times, gives this:

    [ "INRC2-4-030-1-6753", 1 solution, in 144.2 secs: cost 0.02220 ]

  Marginally slower and no better.

  Changed the definition of a visited task set from "at least one
  task visited" to "all tasks visited", which should open up more
  repairs.  Got this:

    [ "INRC2-4-030-1-6753", 1 solution, in 180.4 secs: cost 0.02205 ]

  It's quite a lot slower.  But more logical this way I think.

    [ "INRC2-4-030-1-6753", 4 threads, 8 solves, 6 distinct costs, 6.0 mins:
      0.02145 0.02195 0.02220 0.02220 0.02230 0.02230 0.02245 0.02305
    ]

  This is not too bad considering the LOR17 solution has cost 1890.
  But I was getting it before from KHE18 in 33 seconds!
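
  The revised definition can be sketched like this (per-task visited
  flags stand in for KHE's visit marks; all names invented):

```c
#include <stdbool.h>

/* Illustrative only: a task set reduced to its per-task visited
   flags.  Under the old rule the set counted as visited if ANY task
   was; under the new rule only if ALL tasks are. */
typedef struct {
  const bool *visited;
  int count;
} TaskSetFlags;

/* old rule: "at least one task visited" */
bool TaskSetVisitedOld(const TaskSetFlags *ts)
{
  for (int i = 0; i < ts->count; i++)
    if (ts->visited[i])
      return true;
  return false;
}

/* new rule: "all tasks visited" */
bool TaskSetVisitedNew(const TaskSetFlags *ts)
{
  for (int i = 0; i < ts->count; i++)
    if (!ts->visited[i])
      return false;  /* one unvisited task keeps the set repairable */
  return true;
}
```

  Under the new rule a partially visited set is still open to repair,
  which is why more repairs open up, and presumably also why the
  solve got slower.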

3 March 2019.  Could run time be the problem?  Here is a run
  with a 5 second limit per day and 5 minutes repair:

    [ "INRC2-4-030-1-6753", 1 solution, in 274.5 secs: cost 0.02160 ]

  Not really a big improvement.  Best of 8:

    [ "INRC2-4-030-1-6753", 4 threads, 8 solves, 7 distinct costs, 8.6 mins:
      0.02145 0.02150 0.02170 0.02170 0.02195 0.02200 0.02245 0.02290
    ]

  No improvement at all.  So the current time limit is enough to
  get what we can get.

  After patching the bug in KheTaskSetDoubleMoveRepair, got this:

    [ "INRC2-4-030-1-6753", 1 solution, in 141.8 secs: cost 0.02130 ]

  and best of 8 is

    [ "INRC2-4-030-1-6753", 4 threads, 8 solves, 7 distinct costs, 296.5 secs:
      0.02130 0.02190 0.02200 0.02245 0.02280 0.02280 0.02305 0.02355
    ]

  Single solve without the no clashes check produced

    [ "INRC2-4-030-1-6753", 1 solution, in 135.1 secs: cost 0.02130 ]

  It is a bit faster.  Now back to the no clashes check, but with
  a fresh visit before each task set swap:

    [ "INRC2-4-030-1-6753", 1 solution, in 128.9 secs: cost 0.02185 ]

  It did get rid of the non-assignment of {4Mon:Night, 4Tue:Night}, so
  that was good.  But other things must have got worse.  Best of 8:

    [ "INRC2-4-030-1-6753", 4 threads, 8 solves, 8 distinct costs, 6.0 mins:
      0.02170 0.02185 0.02210 0.02215 0.02240 0.02245 0.02280 0.02310
    ]

  With rs_time_sweep_nocost_off=true (single and best of 8):

    [ "INRC2-4-030-1-6753", 1 solution, in 180.9 secs: cost 0.02255 ]

    [ "INRC2-4-030-1-6753", 4 threads, 8 solves, 8 distinct costs, 5.8 mins:
      0.02175 0.02230 0.02235 0.02265 0.02275 0.02280 0.02305 0.02405
    ]

  But although these results are worse, logically we should include
  these tasks because they allow time sweep to do a better job.

  This is with adjust 2 off:

    [ "INRC2-4-030-1-6753", 1 solution, in 189.9 secs: cost 0.02335 ]

  Not a great result, despite getting a better result from time sweep.

3 March 2019.  Today's starting point is

    [ "INRC2-4-030-1-6753", 1 solution, in 117.5 secs: cost 0.02175 ]

  Why not simply unassign TR_25 on 2Sat?  Because assigning it is
  a hard constraint.

  Why not assign 3Wed Late to CT_18?  It's even stevens but do we
  actually pursue it?  Yes, actually it was being unassigned at
  the end, and the unassignment was accepted because the code
  was accepting any unassignment that did not increase cost.

  Discovered that there is no ejection chain repair during time
  sweep, because there are no limit resources constraints.  So
  I decided to try ejection chain repair with soln as the initial
  defect list.  This produced this:

    [ "INRC2-4-030-1-6753", 1 solution, in 264.1 secs: cost 0.02360 ]

  which is very unimpressive.

  Documented the new option for lookahead during time sweep, but
  basically nothing implemented yet.

9 March 2019.  Where have the days gone?  On gardening mostly.
  Added lookahead to KheTimeSweepAssignResources.

10 March 2019.  Carrying on with the new time sweep.

13 March 2019.  Another gap in the working days, this time
  partly filled by refereeing.  Today I did some work on
  unifying constraints, basically I'm shirking.

25 March 2019.  Not much work done in the last two weeks.  But
  what I have done is work hard on unifying hierarchical student
  sectioning with conventional timetabling, to the point where
  it is pretty much under control.  The other major new thing
  in university course timetabling is the time model, and there
  I have to do quite a lot of work, especially on travel time.

27 March 2019.  I've started work on a paper about XUTT, a
  unified timetabling format.  I'm not sure yet how it relates
  to the other paper I started before, about time models.  I
  may merge the two.

29 March 2019.  Gerhard's email address is g.f.post@utwente.nl
  *only* from now on.

31 March 2019.  Still working on XUTT.  I've looked for the
  formal definition of the UniTime format online; the best
  seems to be
  
     https://www.unitime.org/uct_dataformat_v24.php

  although there are versions and stuff.  What I have found:

     Rooms
     -----

     Room availability depends on departments ("sharing").  This
     means assigning a room to a certain event has a cost depending
     on the event and (I guess) the time of the event, so

        <TaskSet tg="sometimes" r="room" eg="dept_events" label="room"/>

     The nrRooms attribute of classes says how many rooms a class
     requires.  Can be handled by a room multi-slot, presumably.

     Room location coordinates in meters, they use Euclidean
     distances.  Surely a bridge too far for XUTT.

     Teachers
     --------

     Called instructors in UniTime.  They are not fully-fledged
     resources; they are preassigned, and classes that share an
     instructor may not overlap.

     Events
     ------

     Called classes in UniTime.  Know where they lie in what
     I would call a task set tree.  Some classes are "committed"
     i.e. they have come from a different problem whose solution
     has already been published.  This amounts to preassignment.
     Each class is to be assigned a single time, but that can
     include many time intervals.  There is an integer preference
     for each possible room.

     "Department balancing constraint was introduced only for Large
     Lecture Room problem. It tries to distribute the times during
     the day fairly between the departments (preventing, e.g., one
     department to have all its classes during unpopular times like
     before 8:30 am or after 4:30 pm)."  One event set for each
     department at the unpopular times, minimize the square.

     "discouraged to have an empty half-hour (6 time slots) window 
     in a room (each meating is at least an hour long)"  Some kind
     of idle times constraint for rooms.

     Distribution constraints constrain the times of sets of classes.


     Students
     --------

     There is a very specific rule for travel time, the one in the
     competition is more general, and better.

4 April 2019.  Still working on the XUTT paper.  I basically have
  to work on meet constraints now.

5 April 2019.  Still working on the XUTT paper.  Starting work on
  meet constraints today.  Group constraints can be prohibited
  and discouraged, which is referred to below as `-P'

  R SAME_ROOM - classes meet in same room.  Resource stability,
      effectively, although perhaps for a different reason
    SAME_ROOM-P - classes meet in different rooms.  Max limit
      on task set sizes, probably.

  G SAME_TIME - classes must occur at the same time of day.
      Like link events except one time group for each time
      of day.
    SAME_TIME-P - classes must occur at different times of day.
      Same structure but with min limit equal to the number of
      classes.

  G SAME_START - classes must start at the same time of day.
      Slight variation on SAME_TIME which differs when durations
      differ.  I need to look into this distinction.
    SAME_START-P - classes must start at different times of day.

  G SAME_DAYS - classes must occur on the same days.  Like link
      events but one time group for each day.
    SAME_DAYS-P - classes must occur on different days.  Same
      structure but with min limit equal to the number of classes.

  O BTB_TIME - adjacent time segments, rooms may differ.  BTB stands
      for back-to-back
    BTB_TIME-P - not adjacent time segments (at least 30 mins
      separation), but must be taught on the same days

  O BTB - like BTB_TIME but rooms must be the same as well
    BTB-P - like BTB_TIME-P but rooms must be *the same*

  O NHB_GTE(1) - classes must be one hour or more apart
    NHB_GTE-P(1) - classes must be less than one hour apart

  O NHB_LT(6) - less than 6 hours from the end of the first
      class to the start of the next.  Must be taught on the
      same days.
    NHB_LT-P(6) - more than 6 hours between, or may be on
      different days.

  O NHB(x) - exactly x hours between classes (end of one to
      beginning of another)
    NHB-P(x) - not exactly x hours between classes

  G DIFF_TIME - all pairs of classes cannot overlap in time
    DIFF_TIME-P - all pairs of classes must overlap in time

  G SPREAD - "overlapping of the classes in time needs to be
      minimized".  There is no SPREAD-P.

  O BTB-DAY - adjacent days
    BTB-DAY-P - not on adjacent days and not on the same day

  R CAN_SHARE_ROOM - classes can occur within the same room
      at the same time if the room capacity is sufficient.
      There is no CAN_SHARE_ROOM-P.

  G SAME_INSTR - classes have the same instructor, so cannot
      overlap in time or be back-to-back if rooms far apart.
      There is no SAME_INSTR-P.

  G SAME_STUDENTS - classes have the same students; essentially
      the same as SAME_INSTR.  There is no SAME_STUDENTS-P.

  G MIN_GRUSE(10x1h) "Minimize number of groups of time that
      are used by the given classes. The time is spread into
      the following 10 groups of one hour: 7:30a-8:30a, 8:30a-9:30a,
      9:30a-10:30a, ... 4:30p-5:30p."

  G MIN_GRUSE(5x2h) "Minimize number of groups of time that
      are used by the given classes. The time is spread into
      the following 5 groups of two hours: 7:30a-9:30a,
      9:30a-11:30a, 11:30a-1:30p, 1:30p-3:30p, 3:30p-5:30p."

  G Also MIN_GRUSE(3x3h), the same stuff.

  G Also MIN_GRUSE(2x5h), the same stuff.

  G MEET_WITH - same as CAN_SHARE_ROOM and SAME_ROOM and
      SAME_TIME and SAME_DAYS.  There is no MEET_WITH-P.

  O PRECEDENCE - given order in time.
    PRECEDENCE-P - reverse order in time.

  R MIN_ROOM_USE - "Minimize number of rooms used by the given
      set of classes."  This is just room stability again.

  O NDB_GT_1 - "Given classes must have two or more days in between."
    NDB_GT_1-P - "given classes must be offered on adjacent days or
      with at most one day in between."

  G CH_NOTOVERLAP - "If parent classes do not overlap in time,
      children classes can not overlap in time as well."

  O FOLLOWING_DAY - "The second class has to be placed on the
      following day of the first class (if the first class is
      on Friday, second class have to be on Monday)."
    FOLLOWING_DAY-P - "The second class has to be placed on the
      previous day of the first class (if the first class is on
      Monday, second class have to be on Friday)."

  O EVERY_OTHER_DAY - "The second class has to be placed two
      days after the first class"
    EVERY_OTHER_DAY-P - Before rather than after.

  Here are the significant elements of the list marked `O' above, with
  notes.

  O BTB_TIME - adjacent time segments, rooms may differ.  BTB stands
      for back-to-back
    BTB_TIME-P - not adjacent time segments (at least 30 mins
      separation), but must be taught on the same days
  O NHB_GTE(1) - classes must be one hour or more apart
    NHB_GTE-P(1) - classes must be less than one hour apart
  O NHB_LT(6) - less than 6 hours from the end of the first
      class to the start of the next.  Must be taught on the
      same days.
    NHB_LT-P(6) - more than 6 hours between, or may be on
      different days.
  O NHB(x) - exactly x hours between classes (end of one to
      beginning of another)
    NHB-P(x) - not exactly x hours between classes

    When do two times qualify as being back to back?  Not clear,
    but it seems to mean no overlap and no more than 30 mins apart.

    These constraints are about the number of minutes from the end of
    the first class to the beginning of the second, although it seems
    that order does not matter.  The determinant can be this number
    of minutes, and then the usual eval="0-6*60|s20" does it.
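    That determinant could be sketched as follows (hypothetical code,
    not part of XUTT or KHE; the function name and the minutes-from-
    midnight representation are my own assumptions):

```c
#include <assert.h>

/* A sketch only: the proposed determinant for the NHB family, namely
   the number of minutes from the end of one class to the start of the
   other, with order ignored.  Times are minutes from midnight; a
   negative result means the two classes overlap in time. */
int ClassSeparation(int start1, int end1, int start2, int end2)
{
  int gap1 = start2 - end1;            /* class 1 before class 2 */
  int gap2 = start1 - end2;            /* class 2 before class 1 */
  return gap1 > gap2 ? gap1 : gap2;    /* the larger is the true gap */
}
```

    Under this reading, NHB_LT(6) wants this value to be non-negative
    and below 6*60 = 360 minutes, matching the range in the eval
    attribute above.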

  O BTB-DAY - adjacent days
    BTB-DAY-P - not on adjacent days and not on the same day

    Actually this could be a G, using type=consec.

  O PRECEDENCE - given order in time.
    PRECEDENCE-P - reverse order in time.

    This seems to be the only constraint that does actually insist
    on a particular time order, which is interesting.

  O NDB_GT_1 - "Given classes must have two or more days in between."
    NDB_GT_1-P - "given classes must be offered on adjacent days or
      with at most one day in between."

    Actually this could be a G, using type=idle.  We are trying here
    to encourage, or avoid, idle days.

  O FOLLOWING_DAY - "The second class has to be placed on the
      following day of the first class (if the first class is
      on Friday, second class have to be on Monday)."
    FOLLOWING_DAY-P - "The second class has to be placed on the
      previous day of the first class (if the first class is on
      Monday, second class have to be on Friday)."

    Here is another constraint that specifies an ordering, although
    that could be hived off, leaving another idle days constraint.

  O EVERY_OTHER_DAY - "The second class has to be placed two
      days after the first class"
    EVERY_OTHER_DAY-P - Before rather than after.

    Here we go again.  We want the busy sequences to have length 1,
    and the idle sequences to have length 2, as well as being not
    on the same day.

6 April 2019.  My results of snooping around the RobinX web site:

  BA1 assign times
  BA2 avoid clashes

  CA1 limit busy times but there is a room constraint as well;
    this could be implemented by a task constraint, given that
    "each team in team group T" could be replaced by "the set
    of room tasks from events preassigned a team t from T".
  CA2 like CA1 except that the base set of events is different
  CA3 similar, has a few obscurities
  CA4 similar
  CA5 problem

  GA1 meet constraint should handle it, XHSTT can't
  GA2 messy but may be doable with a meet set tree with 2 children

  BR1 A break is basically a pattern, and we are counting the
    number of occurrences of a pattern in some time group,
    similar to what the Curtois model does in nurse rostering.
  BR3 difference in breaks not larger than k.  This looks like
    a fairness thing, square of number of breaks might do it.
  BR4 number of sequences of patterns, apparently.

  SE1
  SE2

9 April 2019.  Still designing XUTT.  I've looked closely at the
  last few constraints from ITC2019, and they blow several things
  out of the water.  I'm beginning to think now that the core
  consists of events (or tasks) satisfying the core conditions,
  and what happens after those events (or tasks) get into their
  task sets and meet sets is pretty darn arbitrary.  In fact
  it is just a mathematical expression whose base terms are
  f(S) for the set and various f (count, duration, first time,
  last time, blockiness, you name it).

15 April 2019.  Had a good idea for how to take advantage of
  time symmetry to reduce the memory and time when solving
  university course timetabling.  But when I wrote it up it
  did not take much space, so now I have decided to merge
  everything I have written into a single paper which
  introduces XUTT and presents the time model (in XML,
  not the fake syntax I was using before) as well.  So
  I'm currently working on that merged paper.

16 April 2019.  I've got a rough version of the unified paper,
  defining XUTT, the hierarchical time model (with a sketch of
  how to take advantage of time symmetries), and task and meet
  constraints.  Meet constraints are largely still to do, as is
  a conclusion, and there are UniTime (and RobinX) constraints
  that I have not attempted to incorporate yet.  The paper is
  about 15 pages long at the moment.  So I don't need more
  material, in fact I should try to trim it at some stage.

    Structure                                  Pages
    ------------------------------------------------
    Intro                                        1.0
    Issues                                       1.5
    Other models                                 2.0
    The XUTT model
      Times (the hierarchical model)             3.0
      Resources                                  0.2
      Events                                     0.8
      Constraints                                3.0
    Reducing verbosity                           1.0
    Exploiting symmetry in time                  1.3
    Conclusion                                   0.0
    References                                   1.0
    ------------------------------------------------
                                                14.8

17 April 2019.  Reviewed the version of the paper from yesterday
  and made some minor adjustments.  It's pretty good as far
  as it goes.

  Did some work on the Emir bug fix.  The current version of KHE
  does not crash, but the solution is pretty bad.  I should at
  least do something about the 8 minute run time.

  ts_layer_time_limit=0.5 ts_node_repair_time_limit=10:
    [ "RandomInstance2", 1 solution, in 69.6 secs: cost 123.00254 ]

  ts_layer_time_limit=1.0 ts_node_repair_time_limit=10:
    [ "RandomInstance2", 1 solution, in 100.4 secs: cost 111.00274 ]

  ts_layer_time_limit=0.5 ts_node_repair_time_limit=20:
    [ "RandomInstance2", 1 solution, in 89.5 secs: cost 116.00303 ]

  ts_layer_time_limit=0.5 ts_node_repair_time_limit=40:
    [ "RandomInstance2", 1 solution, in 130.5 secs: cost 111.00250 ]

  ts_layer_time_limit=0.2 ts_node_repair_time_limit=10:
    [ "RandomInstance2", 1 solution, in 46.1 secs: cost 131.00288 ]

20 April 2019.  Published Version 2.3 today and sent email to
  Emir letting him know that it's available.

  Revised the XUTT paper yet again, it now looks like this:

    Structure                                  Pages
    ------------------------------------------------
    Title, abstract, introduction                1.0
    Issues                                       1.5
    Other models                                 2.0
    The XUTT model
      Times (the hierarchical model)             2.5
      Resources                                  0.2
      Events                                     0.6
      Constraints                                2.5
    Reducing verbosity                           1.0
    Exploiting time symmetries                   1.3
    Conclusion                                   0.0
    References                                   1.0
    ------------------------------------------------
                                                13.6

24 April 2019.  Light at the end of the tunnel on the XUTT
  time model.  Still a lot of tidying up to do.

5 May 2019.  For some days now I have been revising my
  PATAT papers to respond to reviewers' reports for their
  journal publication, especially the main paper.  I've
  more or less finished that today.

7 May 2019.  I've finished revising my PATAT papers and
  resubmitted the history one.  The main one I will leave
  for a few weeks, then reread it and submit.

  I've also decided to let XUTT lie fallow for a while.
  I've made good progress with the time model, although
  I am still undecided about whether assigning Mon1 and
  then Mon2 is the same as assigning Mon12.  I'm a bit
  disappointed though that I have not been able to do
  better with the marginal features.  So I'm letting
  it rest for a while.  It may also be impolitic to
  publish it at PATAT 2020, only 2 years after XESTT.
  Although I would if I was *really* happy with it.

  What that leaves to do now is the revised time sweep.

9 May 2019.  Written combinatorial time sweep, including
  debug stuff on the edges.  Done some basic testing, the
  code seems to be doing something and does not crash.
  Need to do some debug output to see what's what.

10 May 2019.  No longer attaching prefer resources monitors.
  Checked over code in khe_sr_resource_matching.c, found a
  few small things.  Starting testing, going steadily.

11 May 2019.  Still testing time sweep with lookahead.  Got
  clashes with COI-Millar-2.1.1 but fixed those by changing
  the no_cost flag in a call in the requested code.

12 May 2019.  Results for lookahead=2 are in khe19-05-12.pdf.  They
  are mostly comparable with previous results, and they are quite a
  lot faster (average for KHE18x8 now 69.9 seconds, was 92.2 secs).
  But there is one blow-out: COI-ERRVH was 3269, now 13784.  If I
  could knock 10000 off that, it would change the average cost to
  1096 - 10000/27 = 726, which would be better than the previous 753.

  Tried the same run only with rematching after each day.  The results
  are in khe19-05-12r.pdf.  We are back to the slower run times and
  although we have fixed the COI-ERRVH problem, there is not much
  else to show for the extra time.

13 May 2019.  The current job is to modify task tree construction in
  khe_sr_task_tree.c to take account of limit busy times constraints
  with maximum limit 0 and time groups for which there are
  corresponding preassigned events.

14 May 2019.  Written code for making khe_sr_task_tree.c take account
  of limit busy times constraints with maximum limit 0 and time groups
  for which there are corresponding preassigned events.  Ready to test.

15 May 2019.  First results of new code:

    [ "COI-ERRVH", 1 solution, in 223.7 secs: cost 0.03906 ]

  This is within striking distance of what I was getting before:
  3613 for KHE18, 3269 for KHE18x8.  And the run time is quite a
  lot better than the full 5 minutes.  Best of 8:

    [ "COI-ERRVH", 4 threads, 8 solves, 8 distinct costs, 7.5 mins:
      0.02410 0.02823 0.02848 0.02900 0.03338 0.03443 0.03507 0.03598
    ]

  This is a terrific result, best I could hope for really, and
  compares well with the Misc result, which is 2001.  Now for a
  fresh run of COI.xml.

  I'm using rs_time_sweep_lookahead=2 and rs_time_sweep_rematch_off=false.
  I've also tried 3 and true in all combinations, and it does
  run faster with true, but the results are worse.

  Sadly, things are not currently any better.  The trouble was
  that COI-BCV-4.13.1 blew out, and fixing it seems to have
  removed a lot of the benefit.  So I think I am going to have
  to remove the fix and work on COI-BCV-4.13.1 separately.

17 May 2019.  Things are a bit of a mess, so I have decided
  to do some tests of all relevant combinations on COI-ERRVH:
  I've set up shell script doitc to do all this at once.  Here
  are the results as copied over from resc.xml:

      lookahead  rematch_off  lbtc_off   cost      time
      --------------------------------------------------
      0          false        false      0.02240   609.5
      0          false        true       0.03223   613.2
      0          true         false      0.02438   605.7
      0          true         true       0.02878   613.1
      2          false        false      0.02650   610.0
      2          false        true       0.03330   613.2
    **2          true         false      0.02786   448.3
      2          true         true       0.16754   451.5
      --------------------------------------------------

  The best cost is obtained without any lookahead.  Run time is
  improved when both lookahead and rematch_off are present.  So in
  fact the line marked ** is what I was thinking would be best, and
  it is best in time but only fourth best in cost.

  One clear fact is that holding other options constant,
  lbtc_off=false is always better than lbtc_off=true, in
  both cost and running time.  So we can reduce the table to

      lookahead  rematch_off  lbtc_off   cost      time
      --------------------------------------------------
      0          false        false      0.02240   609.5
      0          true         false      0.02438   605.7
      2          false        false      0.02650   610.0
    **2          true         false      0.02786   448.3
      --------------------------------------------------

  which is easier to digest.  What about running doitc over
  all instances?  It will be a long grind but very interesting.
  Done it - stored safely in res19_05_17.xml.  The two best are

  011 (no lookahead, rematch off, lbtc_off)
  101 (lookahead, rematch, lbtc_off)

  There is no clear pattern, so I am going to do some more
  testing.

19 May 2019.  Looking over res19_05_18.xml.  The results on
  cost are very close, the averages range from 180 (000 and 001)
  to 186 (100 and 101).  Run times much the same, the best
  is 115 (000), the worst is 124 (010).  Not much guidance
  to be had from these tests; they don't really support 101
  (cost 186, time 117).

  Setting up a run for the eight-week INRC2 instances.

  Given that XUTT is on the back-burner, I think the best
  thing is to quietly continue working on the weak points
  of KHE18, which basically means the slow instances in
  COI and everything in INRC2.  A few months' work should
  get me enough improvement to be worth publishing.

20 May 2019.  Looking over res19_05_20.xml (INRC2-8).  It
  clearly supports only 000 and 001, that is, no lookahead,
  rematching on, and lbtc either on or off.  Anything else
  is definitely inferior.  The best INRC2 results I'm getting
  now are about 200 better than the ones in the paper, but
  still a long way off the money.

  Trying COI with lookahead=3.  Done, in res19_05_20.xml.
  There is a good result on MER (0.09651), but otherwise
  the costs are generally inferior, and there is the
  expected increase in running time.  In short, no dice.

  Go back to working on COI-ERRVH, see what I can get.

    [ "COI-ERRVH", 1 solution, in 5.1 mins: cost 0.02629 ]

  Actually this is not too bad, but I will have a look
  around and see what I can scrape.

22 May 2019.  Working on calculating available times in a
  more comprehensive but faster way.  The new code is in
  src_hseval/avail.c.  So far I have got the nodes I need,
  the next step is to find large independent sets in those
  nodes.  I'm using GPost as a test, but eventually I need
  to be able to do COI-ERRVH as well, and its availability
  is determined by limit workload constraints.

  A simple heuristic will probably do all I need.  Sort the
  nodes by increasing set size, or something.  From the
  largest to the smallest, find an independent set by
  testing each sequentially and adding if independent.
  Add the limits and quit early if inferior to previous.
  Possibly ignore sets of size 2 in this final stage.
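  The heuristic above could be sketched like this (hypothetical code,
  not the real avail.c; the bitmask representation of time sets and
  all names are my own assumptions):

```c
#include <stdint.h>

/* A sketch under assumed types: each node covers a set of times (a
   bitmask here) and carries a limit on busy times within that set.
   Taking the nodes in the given order (e.g. already sorted by
   decreasing set size), add each node whose times are disjoint from
   those chosen so far, giving an independent set.  The value returned
   is the sum of the chosen limits plus the number of times covered by
   no chosen node, the quantity to be minimized. */
typedef struct avail_node { uint64_t times; int limit; } AVAIL_NODE;

int AvailGreedyValue(AVAIL_NODE *nodes, int count, int total_times)
{
  uint64_t covered = 0;
  int i, limit_sum = 0, covered_count = 0;
  for( i = 0;  i < count;  i++ )
    if( (nodes[i].times & covered) == 0 )   /* independent of chosen */
    {
      covered |= nodes[i].times;
      limit_sum += nodes[i].limit;
    }
  for( ;  covered != 0;  covered &= covered - 1 )
    covered_count++;                        /* count covered times */
  return limit_sum + (total_times - covered_count);
}
```

  Uncovered times count 1 each, which is what makes it safe to quit
  early once the running sum exceeds the best value seen so far.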

  For a given set I, the value to minimize is the
  sum of the limits plus the sum of the times not
  included.  If we say that there is an independent
  set for each time, or even add one, then we are
  trying to minimize the sum of the limits over all
  complete cases (but we'll optimize that).

  For a start we'll just do it as we go.

23 May 2019.  Avail coding going well.  Just starting on
  assembling the final value for avail_times, then I have
  to work on avail_workload.

24 May 2019.  Working on avail.c, done with a careful audit
  and clean compile.  Ready to test.

25 May 2019.  Added HTML print and used it directly underneath
  each resource timetable print.  Finished avail stuff now.

  Back to COI-ERRVH, there is essentially no available workload.
  I've compared my cost 2629 solution with Curtois' cost 2001
  solution, and found two basic problems:

  * I have 5 cases of two busy weekends in a row, costing 500;

  * I assign more of the workload-12 tasks, leaving more tasks
    uncovered, costing an extra 2629 - 500 - 2001 = 128.

  I'm getting HSEval to report what is going on with the
  task workloads.  For the Curtois 2001 solution it gives this:

    Assigned tasks: 1074 with workload 8.0, 102 with workload 12.0.

  For my 2629 solution it gives this:

    Assigned tasks: 1029 with workload 8.0, 132 with workload 12.0. 

  If I could reduce the 132 to 102, that would free up workload
  30 * 12 = 360, which, reallocated to tasks with workload 8, would
  cover 360 / 8 = 45 of those, that is, 15 extra tasks.
  But how could this save 128?  Some tasks must go to satisfy more
  than one constraint.
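  The reallocation arithmetic above can be checked mechanically
  (a hypothetical helper, not KHE code; names are my own):

```c
/* Pure arithmetic from the entry above: my12 and curtois12 are the
   counts of workload-12 tasks in the two solutions, small_workload is
   the smaller workload (8 here).  Returns the net number of extra
   tasks assigned after reallocating the freed workload. */
int ExtraTasksAfterReallocation(int my12, int curtois12, int small_workload)
{
  int removed = my12 - curtois12;      /* workload-12 tasks given up */
  int freed = removed * 12;            /* workload freed: 30 * 12 = 360 */
  return freed / small_workload - removed;  /* 45 gained - 30 lost */
}
```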

  Found a bug in KheLimitActiveIntervalsAugment - when trying to
  reduce the length, it was not trying to remove the first and
  last, it was trying the first twice.  Wow.  But after fixing
  it, KheLimitActiveIntervalsMonitorDefectiveInterval was revealed
  to be returning -1 for *last_index at times.

26 May 2019.  Sorted out what is going on with
  KheLimitActiveIntervalsMonitorDefectiveInterval setting
  *last_index to -1.  It indicates that the defective interval
  lies entirely within the history range.  This is a feature of
  COI-Ikegami-3.1, justified in the header comment of the function
  for max 6 day shifts in the specials section of src_nrconv/coi.c.

  I've changed the KheLimitActiveIntervalsMonitorDefectiveInterval
  to say that this can happen and what it means, and I've changed
  khe_se_solvers.c to skip intervals with *last_index == -1, which
  is the right thing to do.

  I ran COI and the fix actually made things worse!  Run times are
  slightly better, notably child reduced from 424 to 329 seconds,
  but average KHE18x8 cost increased from 751 to 867.  One obvious
  problem is COI-BCV-4.13.1, which is often 10 but can revert to
  2340 very easily, which adds (2340 - 10)/27 = 86 to the average,
  accounting for most of the increase.  There were some improvements
  (6) to set against the losses (11).

  Got COI-BCV-4.13.1 working properly by not allowing the task
  tree to reduce domains to below two elements.  A rerun of COI
  worked well (khe19-05-26.pdf).

  Added cluster busy times constraints with maximum limit 0
  to the jobs handled by task tree construction; their positive
  time groups are essentially the same as the time groups of
  limit busy times constraints.  It improved COI-QMC-1 but
  only marginally.  Have to keep trying.

  Got COI-QMC-1 down to 19 by adjusting KheTaskSetDoubleMoveRepair,
  which I had over-hastily changed earlier on.  I've now changed
  it back to what it was before, but ensured that the debug output
  indicates clearly when the second move fails.  Best of 8 is 17,
  which is probably as good as I am going to get, since Curtois'
  best is 13.

  Excellent results on COI.xml now saved in khe19-05-26.pdf.
  KHE18x8 average cost 724, average time 90.7 seconds, and it
  found 11 optimal solutions.  This compares with my PATAT 2018
  KHE18x8 results: average cost 4637, average running time 60.2
  seconds, and just 3 optimal solutions.

  Now doing a run with all time limits halved.  Got average
  cost 797, average running time 52.3 (because only the slow
  ones sped up).  There was a significant hit to the slower
  instances, enough to prove that these limits are not good.

  Also did a run with all time limits doubled.  Got a solution
  to Child with cost 153, which is very close indeed to Curtois's
  value of 149, and a solution to MER with cost 8901.

  Starting a CQ14 run.  It got killed but printed these first:

    [ "CQ14-18", 4 threads, 8 solves, 8 distinct costs, 6.0 mins:
      0.06119 1.05718 1.06005 2.06634 3.05722 3.06539 3.06617 5.06196
    ]
    [ "CQ14-19", 4 threads, 8 solves, 8 distinct costs, 6.0 mins:
      0.04130 0.05435 1.04116 1.05382 1.05462 1.05537 1.05857 2.04629
    ]

  and other results several of which are new bests for KHE18x8,
  but none of which are better than Curtois' best results.

27 May 2019.  Working on COI-HED01 today.

  Changed the event names to use shift type labels rather than
  IDs.  I was already doing this for COI-BCDT-Sep so it was easy.

  Comparing my COI-HED01 solution with Curtois', the standout
  problem is Constraint:1, where I have 8 violations costing
  10 each.  This extra 80 explains more than half of why my
  cost is 263 and his is 136.

  Because of unwanted patterns [D][Any] and [D][Free][not N]
  we should get a grouping from [D][Free][N], but this is
  being prohibited by this:

    due to linkage, omitting non-assignment of 1Wed

  where 1Wed is the supposedly free day.

  The problem is that Phase 0 of grouping is overestimating the
  number of tasks for which non-assignment would create a cost.

29 May 2019.  Found that frame operations for resource solving
  were no longer used, so I've deleted them from the source code
  and from the documentation.

30 May 2019.  Working on replacing KheTaskNonAssignmentHasCost
  with KheTaskNeedsAssignment today.  I've written and documented
  KheTaskNeedsAssignment, now it needs an audit, and then I need
  to use it instead of KheTaskNonAssignmentHasCost where possible,
  and do something else again where not.

31 May 2019.  Have clean compile after KheTaskNonAssignmentHasCost
  calls all removed.  For grouping by resource constraints I used
  er != NULL && KheEventResourceNeedsAssignment(er) == KHE_YES,
  that is, there is no grouping for tasks constrained only by
  limit resources constraints.  This will work for GPost but
  may need further work for HED01.

  Tested GPost and got a solution of cost 8 for best of 8.  The
  grouping is all there, as attested by debug output.  But HED01
  is running very slowly:

    [ "COI-HED01", 1 solution, in 85.0 secs: cost 0.00312 ]

  as compared to previously, when KHE18 produced cost 267 in
  17.4 seconds.  KheSolnAssignRequestedResources seems to be
  making a hash of things:

    KheSolnAssignRequestedResources returning true (1.0 -> 11.0)

  MakeBestAsst in khe_sr_requested.c is terrible.  It finds
  the best assignment at a single time, when the call context
  allows potentially many times to be tried.  Needs a rewrite.

  Rewritten khe_sr_requested.c.  Still getting 8 for best of
  8 in GPost, now for COI-HED01.  Whoops, did not do the right
  fix, needs more finesse on the cluster constraints.  Fixed
  that, now getting

    KheSolnAssignRequestedResources returning true (0.99999 -> 5.99999)

  which is not much better, and

    [ "COI-HED01", 1 solution, in 88.7 secs: cost 0.00306 ]

  which is also not much better.

  5Thu:M.5 := OP15 causes the first hard cost of 1, and in fact
  assignments to OP15 cause all the hard costs.  This is because
  OP15 is unavailable at all times, and these preferences are
  unsatisfiable.  No doubt they will get unassigned later
  without much drama.  So this is not our problem with HED01.

  Starting work on understanding why COI-HED01 is running
  so slowly.  Currently its performance is

    [ "COI-HED01", 1 solution, in 92.1 secs: cost 0.00314 ]

  Here's another three-fold slowdown:

    [ "COI-Musa", 1 solution, in 21.0 secs: cost 0.00175 ]

  Let's work on this one first.

2 June 2019.  Spent yesterday refereeing a paper.  Back to work
  on task profiles today.  Rewrote KheTaskProfileIncompatibility
  and adjusted other code appropriately.  Tested and seems to
  be working, although it did not speed up Musa.  Currently

    [ KheArchiveParallelSolve(COI-Musa, threads 1, make 1, limit -1.0)
      parallel solve of COI-Musa: starting solve 1 (last)
      COI-Musa#0: 0.01579, 0.0 secs KheTaskingGroupByResourceConstraints
      COI-Musa#0: 0.01579, 0.0 secs time_sweep
      COI-Musa#0: 0.01579, 0.0 secs KheSolnAssignRequestedResources
      COI-Musa#0: 0.01579, 0.0 secs KheTimeSweepAssignResources
      COI-Musa#0: 0.00175, 9.1 secs KheResourceRematch
      COI-Musa#0: 0.00175, 13.5 secs KheEjectionChainRepairResources
      COI-Musa#0: 0.00175, 14.0 secs KheResourcePairSimpleRepair
      COI-Musa#0: 0.00175, 14.0 secs KheResourcePairSimpleBusyRepair
      COI-Musa#0: 0.00175, 14.0 secs KheResourceRematch
      COI-Musa#0: 0.00175, 18.3 secs KheEjectionChainRepairResources
      COI-Musa#0: 0.00175, 18.8 secs KheResourcePairSimpleRepair
      COI-Musa#0: 0.00175, 18.8 secs KheResourcePairSimpleBusyRepair
      [ "COI-Musa", 1 solution, in 18.8 secs: cost 0.00175 ]
    ] KheArchiveParallelSolve returning (18.8 secs elapsed)

  With rs_time_sweep_ejection_off=true:

    [ KheArchiveParallelSolve(COI-Musa, threads 1, ...)
      parallel solve of COI-Musa: starting solve 1 (last)
      COI-Musa#0: 0.01579, 0.0 secs KheTaskingGroupByResourceConstraints
      COI-Musa#0: 0.01579, 0.0 secs time_sweep
      COI-Musa#0: 0.01579, 0.0 secs KheSolnAssignRequestedResources
      COI-Musa#0: 0.01579, 0.0 secs KheTimeSweepAssignResources
      COI-Musa#0: 0.00196, 0.0 secs KheResourceRematch
      COI-Musa#0: 0.00175, 5.4 secs KheEjectionChainRepairResources
      COI-Musa#0: 0.00175, 6.0 secs KheResourcePairSimpleRepair
      COI-Musa#0: 0.00175, 6.0 secs KheResourcePairSimpleBusyRepair
      COI-Musa#0: 0.00175, 6.0 secs KheResourceRematch
      COI-Musa#0: 0.00175, 11.0 secs KheEjectionChainRepairResources
      COI-Musa#0: 0.00175, 11.6 secs KheResourcePairSimpleRepair
      COI-Musa#0: 0.00175, 11.6 secs KheResourcePairSimpleBusyRepair
      [ "COI-Musa", 1 solution, in 11.6 secs: cost 0.00175 ]
    ] KheArchiveParallelSolve returning (11.6 secs elapsed)

  With rs_time_sweep_ejection_off=true and rs_rematch_ejection_off=true:

    [ KheArchiveParallelSolve(COI-Musa, threads 1, ...)
      parallel solve of COI-Musa: starting solve 1 (last)
      COI-Musa#0: 0.01579, 0.0 secs  starting KheTaskingGroupByResourceConstraints
      COI-Musa#0: 0.01579, 0.0 secs  starting time_sweep
      COI-Musa#0: 0.01579, 0.0 secs  starting KheSolnAssignRequestedResources
      COI-Musa#0: 0.01579, 0.0 secs  starting KheTimeSweepAssignResources
      COI-Musa#0: 0.00196, 0.0 secs  starting KheResourceRematch
      COI-Musa#0: 0.00186, 0.0 secs  starting KheEjectionChainRepairResources
      COI-Musa#0: 0.00175, 0.7 secs  starting KheResourcePairSimpleRepair
      COI-Musa#0: 0.00175, 0.7 secs  starting KheResourcePairSimpleBusyRepair
      COI-Musa#0: 0.00175, 0.7 secs  starting KheResourceRematch
      COI-Musa#0: 0.00175, 0.7 secs  starting KheEjectionChainRepairResources
      COI-Musa#0: 0.00175, 1.3 secs  starting KheResourcePairSimpleRepair
      COI-Musa#0: 0.00175, 1.3 secs  starting KheResourcePairSimpleBusyRepair
      [ "COI-Musa", 1 solution, in 1.3 secs: cost 0.00175 ]
    ] KheArchiveParallelSolve returning (1.3 secs elapsed)

  So it seems pretty clear who the culprits are in consuming the
  extra time.  But why have they suddenly started consuming so
  much time?

3 June 2019.  Doing a full COI run to see what is going on.  But there
  is no doubt running times are worse across the board.  Why?  I've
  checked KheEventResourceNeedsAssignment and KheTaskNeedsAssignment
  and both seem to be in good shape.

  These are the instances that contain limit resources constraints:

    COI-Azaiez.xml             Running 6 times slower!
    COI-BCDT-Sep.xml           5 times slower
    COI-CHILD.xml              2 times slower
    COI-ERMGH.xml              Much worse cost
    COI-ERRVH.xml              Much worse cost
    COI-HED01.xml              Cost and time much worse
    COI-Ikegami-2.1.xml
    COI-Ikegami-3.1.1.xml
    COI-Ikegami-3.1.2.xml
    COI-Ikegami-3.1.xml
    COI-MER.xml
    COI-Musa.xml
    COI-Ozkarahan.xml         No
    COI-QMC-2.xml

  I've checked several of the others and they have not degraded,
  e.g. GPost is still as it was.

  I've found this in the makefile:  rs_time_sweep_nocost_off=true.
  But this is what is happening anyway now.

4 June 2019.  Today's plan is to use debug_id to work out what
  repairs are actually being tried on COI-Musa's defects (which
  can't be repaired, as it happens).  Maybe that will shed light.

5 June 2019.  Tidied up the event resource constraint repairs,
  including the limit resources constraint repairs, and added some
  optimization (avoiding repairing equivalent tasks).  Worth
  doing, but it did not in fact speed anything up, presumably
  because most tasks are assigned and hence not equivalent,
  and because when a task set is visited, all equivalent
  tasks are visited too.

  Ejecting task moves are calling KheTaskFirstUnFixed, when
  what we want, probably, is KheTaskProperRoot.  Fixed it.

  KheTaskSetReplaceRepair and KheTaskSetDoubleReplaceRepair
  written but not being used yet.

7 June 2019.  Did a full COI run yesterday, things are a
  bit better (heaven knows why), but still not anywhere
  near back to where they were, in cost or running time.

  Added a write_only option to the parallel solver, all
  documented and implemented but not tested yet.  Also
  documented the ps_first_soln_group option but it is
  not implemented yet.

  Finished khe_sm_parallel_solve.c, it needs an audit
  and test.

8 June 2019.  Audited khe_sm_parallel_solve.c, it saves the first
  solution separately when requested now, and also generates
  write-only solutions when requested.  Done a small test and it
  seems to be working well.  Did a full COI test and all went
  well, although the running times are worse than ever.  Also
  tidied up khe_soln_write_only.c a bit.  Time to return to
  the ejection chain repair rewrite.

  Have clean compile of khe_se_solvers.c.  However there are
  still several things to do.

9 June 2019.  Auditing khe_se_solvers.c today.  I've done
  up to and including KheTaskSetRepairStatus and everything
  that it calls so far.  I did find one small bug; could it
  have been the source of all my woes?

  Removed clash operations from resource timetabling monitor,
  they slow things down and were only used for debugging
  anyway, and they don't do anything useful even then as
  far as nurse rostering is concerned.

10 June 2019.  Audited KheDoTaskSetReplaceMultiRepair and its
  helper functions today, and improved its expression quite a
  lot, although I did not change what it does.  I've decided
  that unassign_r1_ts has to be empty here, as before, because
  it is too tedious to keep track of it across partial runs.

11 June 2019.  Finished the KHE_REPAIR_AVAIL_BUSY case, which
  finishes off the whole rewrite, although I need to do a
  careful audit before I test.

  Made max_extra and max_attempts into options that can be
  adjusted at will.  This involved grouping all the expansion
  options into a little record, trivial but handy.

12 June 2019.  Audited the KHE_REPAIR_AVAIL_BUSY case, now
  testing.  Found one bug, trying to move a preassigned task,
  which was easily fixed.
  
  Did a COI run.  The results are back where they should be in
  run time and pretty good but just a bit off the money in cost,
  suggesting that a few useful repairs are being omitted.

  khe19-05-26.pdf seems to be the one to beat:  KHE18x8
  average cost 724 and average run time 90.7 seconds.

  Today's run (with lookahead) is in khe19-05-26l.pdf, it has
  average cost 797 and average run time 94.5 seconds.  The
  same only without lookahead is in khe19-05-26n.pdf, it
  has average cost 723 and average running time 92.6 seconds,
  which is marginally better than khe19-05-26.pdf on average.
  However there are pluses and minuses, suggesting that I
  could do better still by grinding away the minuses.

  COI-QMC-1 looks like a good place to start grinding.

  Trying a run during which more stuff gets checked
  for being visited (in InitialTasks and FinalTasks).
  Look what it produced for CHILD:

    [ "COI-CHILD", 4 threads, 8 solves, first 0.00162, 294.0 secs:
      0.00150 0.00152 0.00156 0.00157 0.00157 0.00157 0.00162 0.00258
    ]

  This is a phenomenal result, my best ever.  But overall things
  were a bit worse so I've removed the extra visit checking.

  Trying a CQ14 run for a change.  We're up against it for the
  later ones:

    [ "CQ14-20", 4 threads, 8 solves, first 208.27151, 6.2 mins:
      151.25722 155.30915 178.28087 179.28495
      184.27186 194.25498 208.27151 230.28092
    ]

    [ "CQ14-21", 4 threads, 8 solves, first 478.55511, 6.7 mins:
      442.49660 463.52711 470.53728 471.53380
      478.55511 479.52439 492.53830 495.52923
    ]

    [ "CQ14-22", 4 threads, 8 solves, first 2155.93780, 6.3 mins:
      1562.86840 1698.88341 1756.88324 1760.87627
      2034.91155 2064.96610 2084.91765 2155.93780
    ]

    [ "CQ14-23", 4 threads, 8 solves, first 6492.99999, 6.9 mins:
      5746.99999 6052.99999 6099.99999 6112.99999
      6182.99999 6488.99999 6492.99999 6512.99999
    ]

    [ "CQ14-24", 4 threads, 8 solves, first 12493.99999, 13.0 mins:
      12493.99999 12666.99999 12679.99999 12781.99999
      12867.99999 12909.99999 13025.99999 13121.99999
    ]

  I've decided to look into CQ14-20.  It has a rather nasty
  lot of hard constraints, which is why we are doing so badly.
  Result of running one solve with the usual time limits is
  184.27571.  Trying a longer time limit, 10 seconds per day
  and 10 minutes for repair:

    [ "CQ14-20", 1 solution, in 15.1 mins: cost 54.39054 ]

  So things are improving.  Try an even longer run:

    [ "CQ14-20", 1 solution, in 30.1 mins: cost 4.11066 ]

  Almost feasible.  Can we get there in 60 minutes?

    [ "CQ14-20", 1 solution, in 60.1 mins: cost 2.05852 ]

  Almost.  At present there are no feasible solutions of
  CQ14-20 in the KHE18 paper, so if I can get rid of the
  last 2 infeasibilities I will have a new best, unless
  someone else has done it in the meantime.  They have,
  with cost 4943 and lower bound 4743.  See Curtois'
  web site, http://www.schedulingbenchmarks.org/ and
  http://www.schedulingbenchmarks.org/changes.html.

14 June 2019.  Working on COI-HED01, starting point is

    [ "COI-HED01", 1 solution, in 20.2 secs: cost 0.00308 ]

  This is a fair bit worse than 136, the optimal value.

  4Fri:D.0 seems to have domain OP1 .. OP16 even at the end
  when all domains are supposed to be enlarged.  This is
  preventing it from being assigned to a Temp nurse, which
  is probably better and anyway should at least be tried.

  In khe_sr_combined.c, KheTaskingEnlargeDomains(tasking, true)
  changed to KheTaskingEnlargeDomains(tasking, false).  This enlarges
  the domains of all tasks, not just unassigned ones.  Result:

    [ "COI-HED01", 1 solution, in 19.7 secs: cost 0.00183 ]

  Look at that:  a bit faster, and a lot better.  Best of 8:

    [ "COI-HED01", 4 threads, 8 solves, 8 distinct costs, 52.5 secs:
      0.00148 0.00154 0.00156 0.00167 0.00168 0.00173 0.00183 0.00191
    ]

  This is close enough to best that I don't need to do any more.

  Did a full COI run; the results are in khe19_06_14.pdf.  It
  shows the significantly better COI-HED01 result, plus some
  other small pluses and minuses.

  Changed the terminology in khe18.tex to use "task".

15 June 2019.  Working on COI-BCDT-Sep, seeing whether we can
  replace the limit resources constraints by assign resource
  constraints.  Here is debug output from the nrconv code which,
  by returning false, prevents this conversion:

    [ NrcDemandConstraintTryDemands(2:all)
    NrcShiftTryDemands(4Sun:2, [max 4 (soft:100)]) 5 demands:
      X(nap soft:0, ap soft:0, ws, npp soft:0)
      X(nap soft:0, ap soft:0, ws, npp soft:0)
      X(nap soft:0, ap soft:0, ws, npp soft:0)
      X(nap soft:0, ap soft:0, ws, npp soft:0)
      X(nap soft:0, ap soft:0, ws, npp soft:0)
      [Demand [min 3 (hard:1)], [1 shifts], RG:All, 2:all]
      [Demand [max 5 (hard:1)], [1 shifts], RG:All, 2:all]
      [Demand [min 4 (soft:100)], [1 shifts], RG:All, 2:all]
      [Demand [max 4 (soft:100)], [1 shifts], RG:All, 2:all]
    NrcShiftTryDemands returning false (count 4)
    ] NrcDemandConstraintTryDemands returning false

  The shift has gathered all the relevant demands and demand
  constraints, but it thinks they are not convertible because
  there are 4 demand constraints rather than one.

  At present, however, we can't merge two min bounds or two
  pref bounds.  This has to be a special case when the smaller
  min bound is hard and the higher one is soft.  Similarly
  for two max bounds.  Convert the two equal soft ones to
  a pref, then add the two hard ones, and we are OK.
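The merge just described can be sketched in C.  The type and function
  below are illustrative stand-ins, not KHE's (the real work belongs
  to NrcBoundBCDTMerge in nrc_bound.c); they show the special case
  where a hard min below a soft min, and a hard max above an equal
  soft max, merge into one bound with a single preferred value:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative sketch only; the real merge is NrcBoundBCDTMerge in
   nrc_bound.c, and this type is an assumption, not KHE's.  Two hard
   bounds plus two equal soft bounds merge into one bound whose soft
   part becomes a single preferred value. */
typedef struct merged_bound {
  int  hard_min, hard_max;   /* e.g. [min 3 (hard)] and [max 5 (hard)] */
  int  preferred;            /* e.g. [min 4 (soft)] = [max 4 (soft)] */
  bool ok;                   /* false if this special case does not apply */
} MERGED_BOUND;

static MERGED_BOUND BoundMerge(int hard_min, int hard_max,
  int soft_min, int soft_max)
{
  MERGED_BOUND res = { hard_min, hard_max, soft_min, false };
  /* the two soft bounds must be equal and lie within the hard ones */
  if (soft_min == soft_max && hard_min <= soft_min && soft_max <= hard_max)
    res.ok = true;
  return res;
}
```

On the BCDT-Sep demands above, this turns [min 3 (hard)], [max 5
  (hard)], [min 4 (soft)], and [max 4 (soft)] into one bound with
  hard range 3..5 and preferred value 4.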

  All done except that NrcBoundBCDTMerge in nrc_bound.c
  is still to do.

16 June 2019.  Working on COI-BCDT-Sep.  The new NRConv code
  is all written and tested.  It seems to be working, judging by
  a comparison of solution costs produced by HSEval between
  the two versions.

  Working on NRConv, getting it to produce assign resource and
  prefer resources constraints in COI-BCDT-Sep.xml instead of
  limit resources constraints.  The new conversion is all done
  and tested and seems to be working.  So back to solving
  COI-BCDT-Sep.xml.  This is what I was getting before the
  NRConv rewrite:

    [ "COI-BCDT-Sep", 1 solution, in 44.5 secs: cost 0.00280 ]

  This is what I am getting now:

    [ "COI-BCDT-Sep", 1 solution, in 2.7 secs: cost 0.00370 ]

  Much faster, but inferior, strangely enough.  Best of 8:

    [ "COI-BCDT-Sep", 4 threads, 8 solves, 7 distinct costs, 6.1 secs:
      0.00320 0.00330 0.00340 0.00370 0.00380 0.00380 0.00410 0.00420
    ]

  So we've made a killing on run time but we have a problem with
  cost.  Curtois' best solution has cost 100.

  Done some debugging and the double moves seem to be finding their
  reverse moves from quite crazy places.  Actually it does make
  sense, we want places where r1 is free and r2 is busy.  There
  aren't many of those places in COI-BCDT-Sep, because everyone
  is very heavily loaded.

  Working to get an Avail column in HSEval's tables for COI-BCDT-Sep.
  The problem is that some of the times have no assignable tasks
  (the V times) and so they shouldn't count, except they have to
  be subtracted from the busy times of the resources that are
  assigned to them.  So it's a bit of a tangle.

  When building the sets of k times for which at most one task
  can be assigned, I've added all subsets of size k - 1.  This
  means that cluster constraints that use these k - 1 sets will
  realize that they have max one time sets.
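The subset-adding step above is cheap, because every subset of size
  k - 1 of a k-element set omits exactly one element, so there are
  only k such subsets to add.  A minimal sketch (the function name
  and fixed-size array are my own, not KHE's):

```c
#include <assert.h>

#define MAX_TIMES 16  /* arbitrary bound for this sketch */

/* Write into out the size k-1 subset of times[0..k-1] that omits
   position skip; return its length, always k - 1.  Calling this
   once for each skip in 0..k-1 enumerates all subsets of size
   k - 1 of the original set. */
static int SubsetOmitting(const int *times, int k, int skip, int *out)
{
  int i, len = 0;
  for (i = 0; i < k; i++)
    if (i != skip)
      out[len++] = times[i];
  return len;
}
```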

  Next, I need to find the times at which all the tasks are
  preassigned; this arises only in all-preassigned instances.

  Documented the availability spec in availability.c in src_hseval.

18 June 2019.  Working on resource availability.  Have added six
  new functions to khe_platform.h, documented them, implemented
  the boilerplate, and updated HSEval, but not yet implemented
  the actual algorithms, or fixed the current equivalents
  scattered around the solvers.

19 June 2019.  Working towards implementing resource availability
  in the platform.  I've audited the documentation.  I've created
  platform file khe_avail.c and done its boilerplate, and indeed
  everything outside khe_avail.c is all done now.

  Keeping track of the tasks assigned resource r in the
  resource-in-solution object looked to be very slow on large
  instances.  I'm now storing an index in the task so as to
  speed this up.  It could make a real difference on large
  instances.  All implemented and carefully audited.
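The speedup can be sketched like this; TASK and RESOURCE_IN_SOLN
  below are illustrative stand-ins for KHE's real types.  Each task
  remembers its position in its resource's array, so deleting it
  becomes an O(1) swap with the last element instead of a linear scan:

```c
#include <assert.h>

#define MAX_TASKS 64  /* arbitrary bound for this sketch */

/* Illustrative types only, not KHE's real ones. */
typedef struct task {
  int asst_index;  /* this task's index in its resource's tasks array */
} TASK;

typedef struct resource_in_soln {
  TASK *tasks[MAX_TASKS];
  int   count;
} RESOURCE_IN_SOLN;

static void RsAddTask(RESOURCE_IN_SOLN *rs, TASK *t)
{
  t->asst_index = rs->count;
  rs->tasks[rs->count++] = t;
}

/* O(1) deletion: move the last task into the vacated slot and
   update its stored index, instead of scanning the array for t */
static void RsDeleteTask(RESOURCE_IN_SOLN *rs, TASK *t)
{
  TASK *last = rs->tasks[--rs->count];
  rs->tasks[t->asst_index] = last;
  last->asst_index = t->asst_index;
}
```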

  Proved the "cluster busy times constraint polarity theorem",
  and added the proof to the cluster busy times constraint
  section of chapter Instances.

20 June 2019.  Implementing src_platform/khe_avail.c today.

21 June 2019.  Still implementing src_platform/khe_avail.c.
  Spent the day defining and implementing module Hp (pointer
  tables), which does the job.  I've just defined the symbol
  table I need within the avail solver.

22 June 2019.  Still implementing src_platform/khe_avail.c.
  One thing not done yet:  AvailSolverFindBestBusyTimes.

23 June 2019.  Still implementing src_platform/khe_avail.c.
  I've ensured that to be a candidate, an avail set has to
  cover the cycle.  Progressing steadily, I've done all the
  easy cases now, just AvailSolverAddClusterBusyNodes to go.

  I'm not sorting the cluster constraints, because it's too hard
  to see how to do it right.  Instead, I'm running through them twice.

24 June 2019.  Finished implementing src_platform/khe_avail.c.
  Documented and implemented some adjustments to the spec which
  will hopefully make things a bit more efficient:  eliminating
  zero times from other time sets, and not making nodes for
  single times.  Ready to audit and test.

  Got rid of all the old code: KheResourceTimetableMonitorBusyTimes,
  KheFrameResourceMaxBusyTimes, and KheFrameWorkloadMake.

25 June 2019.  Added history to the analysis and implementation.
  Did some tests, fixed a few little bugs, then it seemed to work.
  Implemented the new HSEval page, both busy times and workload,
  but workload is not tested yet.  What I have done seems to be
  working, and I've fiddled with it until it looks pretty good.

26 June 2019.  Need to think about incorporating workload limits
  in busy times limits now.

27 June 2019.  Added an eighth point which allows limit workload
  constraints to affect max busy times.  Need to implement it now.

28 June 2019.  Revised the eighth point and implemented a lot of
  stuff, culminating in a function that returns w(r, t).

29 June 2019.  Have clean compile of a complete implementation
  of the new stuff.  I've done some testing and it seems to be
  working, and even giving good results on the solving:

    [ "COI-QMC-1", 1 solution, in 1.3 secs: cost 0.00020 ]

    [ "COI-QMC-1", 4 threads, 8 solves, 4 distinct costs, 3.4 secs:
      0.00019 0.00020 0.00020 0.00020 0.00021 0.00021 0.00021 0.00022
    ]

  I think these are my best ever for COI-QMC-1.  No, actually
  the best is 18 in 8.6 seconds.

  Took an off-site backup at this point.

  So now we are back where we were almost two weeks ago:  trying
  to grind down COI-BCDT-Sep.  I've got an Avail column for it now.

    [ "COI-BCDT-Sep", 4 threads, 8 solves, 8 distinct costs, 6.0 secs:
      0.00270 0.00340 0.00350 0.00360 0.00370 0.00380 0.00410 0.00420
    ]

  The optimum cost is 100, and we're aiming for about 200, which I got
  in khe19-05-15.pdf.  However, the run time here is much faster than
  in khe19-05-15.pdf.

30 June 2019.  A problem with COI-BCDT-Sep is that there are a lot
  of requested times, and these are just landing at random locations
  and causing mayhem later.  I used the diversifier to spread them
  around a bit and got this:

    [ "COI-BCDT-Sep", 1 solution, in 5.0 secs: cost 0.00300 ]

    [ "COI-BCDT-Sep", 4 threads, 8 solves, 5 distinct costs, 44.3 secs:
      0.00300 0.00340 0.00340 0.00350 0.00350 0.00400 0.00400 0.00420
    ]

  which is no help at all.  What to do?  Requested off:

    [ "COI-BCDT-Sep", 1 solution, in 5.7 secs: cost 0.00370 ]

    [ "COI-BCDT-Sep", 4 threads, 8 solves, 5 distinct costs, 6.4 secs:
      0.00300 0.00340 0.00340 0.00350 0.00350 0.00400 0.00400 0.00420
    ]

  So that's no help either.  These requested things are nasty.

  Tried with lookahead, but that did not help.  Nasty.

1 July 2019.  Looking into how the Curtois cost 100 solution handles
  the problem of 3 consecutive day shifts.  Weekdays need 9, weekend
  days need 8:

  9| 10 | 10 | 10 |XXXX|XXXX|    |    | 06 | 06 | 06 |XXXX|XXXX| 14 | 14
  8| 17 | 19 | 19 | 19 | 18 | 18 | 18 |    | 14 | 14 | 14 | 13 | 13 | 13
    ---------------------------------------------------------------------
  4| 04 | 16 | 16 | 16 | 14 | 14 | 14 | 04 | 04 | 04 | 05 | 05 | 05 | 10
  1| 01 | 09 | 09 | 09 | 08 | 08 | 08 | 03 | 03 | 03 | 11 | 11 | 11 | 02
    ---------------------------------------------------------------------
  6| 06 | 06 | 17 | 17 | 17 | 19 | 19 | 19 | 12 | 12 | 12 | 07 | 07 | 07
  5| 05 | 05 | 15 | 15 | 15 | 16 | 16 | 16 | 18 | 18 | 18 | 17 | 17 | 17
  2| 02 | 02 | 01 | 01 | 01 | 09 | 09 | 09 | 08 | 08 | 08 | 04 | 04 | 04
    ---------------------------------------------------------------------
  7| 20 | 20 | 20 | 06 | 06 | 06 | 13 | 13 | 13 | 16 | 16 | 16 | 18 | 18
  3| 12 | 12 | 12 | 02 | 02 | 02 | 07 | 07 | 07 | 10 | 10 | 10 | 12 | 12
    ---------------------------------------------------------------------
    2Wed 2Thu 2Fri 2Sat 2Sun 2Mon 2Tue 3Wed 3Thu 3Fri 3Sat 3Sun 3Mon 3Tue
                              20   20   20

  So this is how it can be made to work.  To fill the ninth slot on
  weekdays, place a triple starting on Monday and a triple ending on
  Friday.  This gives one extra on Wednesdays.  Handle that by shifting
  the pattern along one space:  3 - 3 - (gap of 1) - 3 - 3 and so on.
  Profile grouping should be able to do this, because it should be
  able to find the Mon-Wed and Wed-Fri triples, and then the overlap
  will allow it to find triples ending on Tue and starting on Thu,
  and so on ad infinitum.  Fascinating, if I can make it happen.

  The obstacle is that on each day we do not have a particular time,
  we have a choice of two times.  So we cannot choose specific
  shifts to group together.

  Choosing between M and A is not likely to be hard.  There are
  more M's wanted than A's, so you basically choose M when you
  can and A when you can't.

  My current solution has cost 330; it would reduce to 180 if
  I could get perfect blocks of 3 day shifts.  How did I ever
  get it down to 200?  Perhaps I should be looking at the other
  defects.

3 July 2019.  Looking at my solution with cost

    [ "COI-BCDT-Sep", 1 solution, in 2.3 secs: cost 0.00300 ]

  The aim now is to reduce the cost of the defects other than
  the three days in a row ones.

  Let's try double moves where the second move is an unassignment
  rather than a reverse move.  It should still not overload r2.
  I did try it and it seems to be quite poor.  I've commented out
  the code, but perhaps I should not rush to dismiss it.

  Started thinking about "extended profile grouping".  Wrote
  some rather tentative documentation for it.

5 July 2019.  Started implementing extended profile grouping.
  All I've actually done so far is to add a boolean "extended"
  parameter to the main function and a few subsidiary ones.
  The serious work starts now, with implementing the extended
  definition of n_i.

  Started work on khe_sr_group_solver.c.

6 July 2019.  Working on khe_sr_group_solver.c today.  Wrote
  a section called "The implementation:  taskers" which
  describes a KHE_TASKER type that implements only the
  basic data structure.  Review that and implement it.

7 July 2019.  Have a clean compile of khe_sr_tasker.c.  Now
  need to use it to define other grouping functions.  What
  about time-based grouping?

9 July 2019.  Working on khe_sr_comb_solver.c, have clean
  compile of a fair whack of it.  Also added a feature to
  the tasker to prevent groups from clashing in frame time
  groups.

10 July 2019.  Working on khe_sr_comb_solver.c.

12 July 2019.  Still working on grouping.  Today's job was
  to leave it to the tasker to choose a leader class.  Done.

13 July 2019.  Still wading through the grouping mess.  I've
  done a fair bit of work on khe_sr_comb_solver.c, and it's
  in better shape.  I still have to do a single experiment,
  and I still have to find the best result of all experiments.

15 July 2019.  Finally getting somewhere with the grouping mess.
  Today I audited the comb solver documentation changes that I
  made yesterday, and implemented them.  I also implemented the
  rest of khe_sr_comb_solver.c.  It needs an audit, which I will
  do another day, but it's all written at last.

  I've documented the functions that I need to add to the tasker
  to support profile grouping, and done some of the implementing,
  the easy part.

16 July 2019.  Finishing off profile grouping support today.
  I've audited it but I need to do it again with a fresh brain.
  Two weeks expended on this so far.

18 July 2019.  Audited khe_sr_tasker.c and khe_sr_comb_solver.c.
  Removed allow_single from tasker (domain had already gone), it
  can be enforced by callers, i.e. by combinatorial grouping.

19 July 2019.  Working on the profile grouping documentation,
  converting it to arbitrary time groups, passed to
  combinatorial grouping.  All written but needs pondering,
  especially the c_i bit, and then implementing somewhere.

  Also worked on a new "applying combinatorial grouping"
  section which applies combinatorial grouping to frames
  and explains combination elimination.

20 July 2019.  "Altogether this avoids the main danger, which is
  the creation of two overlapping groups with the same initial
  assignment."  From the old documentation, now omitted.  Looked
  into it and decided that choosing only groups of zero cost
  would handle it.  I've documented this scenario.

  Added avoid clashes constraints to the list of constraints
  whose costs count when evaluating combinatorial grouping.

21 July 2019.  Now stopping at the first zero-cost grouping,
  if that is sufficient for the cost type.

  "May need to revert to iterating over each day, rather
  than iterating over all subsets.  There may be not many
  choices for each day, but zillions of subsets."  I've
  looked into this and found that if there are K shifts
  on one day, then K(K-1)/2 unnecessary combinations are
  tried, on the way to finding the K+1 combinations that
  actually work.  For small values of K it's not too bad,
  e.g. for K=4 about half the combinations are rejected.
  But anyway I set to work and redid the search; it now
  iterates over elements, trying each class whose first
  time covers that element.  This is more efficient in
  several ways, including testing cover as we go, since
  if the element is not covered by the time we leave it,
  there are no other classes that can cover it.
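The counts quoted above are easy to check.  With K shifts on one
  day, subset iteration tries K(K-1)/2 combinations that cannot
  work before finding the K+1 that can, so for K = 4 it rejects
  6 of 11 attempts, roughly half.  As a trivial sanity check:

```c
#include <assert.h>

/* The counts from the text: with K shifts on one day, subset
   iteration makes K(K-1)/2 unnecessary attempts on the way to
   the K+1 combinations that actually work. */
static int Unnecessary(int k) { return k * (k - 1) / 2; }
static int Working(int k)     { return k + 1; }
```

For larger K the waste grows quadratically while the useful count
  grows linearly, which is why iterating over elements wins.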

  Also I've reorganized the code into more and better
  submodules.

22 July 2019.  Auditing khe_sr_comb_solver.c.  Up to
  KheMonitorSuits.

23 July 2019.  Finished auditing khe_sr_comb_solver.c.
  Done the basics of KheCombGrouping in khe_sr_group_by_rc.c.
  Now I have to add combination elimination to that, and
  then remove the old combinatorial grouping stuff, and
  I will be ready for applying profile grouping.

  Started work on eliminating combinations.  But there are
  some arrays that need initializing and I don't have a
  solver object.  Still, press on regardless.

24 July 2019.  Working on khe_sr_group_by_rc.c.  Finished
  combinatorial grouping, including eliminating combinations.
  Now I have to do profile grouping, then clear away all
  unused stuff (including types) from khe_sr_group_by_rc.c.
  Three weeks expended on this so far.

25 July 2019.  Audited yesterday's work on making
  KheTaskerProfileTimeGroupCover accept a domain parameter.

  Updated tasker support for profile grouping.  All implemented,
  compiled, and documented (in its own section).

26 July 2019.  Working on khe_sr_group_by_rc.c today.  Added
  KheCombSolverSingles to comb solver, and made a few other
  adjustments to its interface.  Also deleted all the old
  stuff (preserved now only in save_khe_sr_group_by_rc.c).
  Have a clean compile of what purports to be the whole thing.

27 July 2019.  Auditing khe_sr_group_by_rc.c today.  Line counts:

     1795  khe_sr_tasker.c
     1193  khe_sr_comb_solver.c
     1096  khe_sr_group_by_rc.c 
     ---------------------------
     4084  Total

  This is about 54% longer than save_khe_sr_group_by_rc.c (2658 lines).

  Audited khe_sr_group_by_rc.c, made a few small changes.
  Enhanced profile grouping so that it sweeps to and fro as
  long as there is something to do.  Audited khe_sr_tasker.c.

28 July 2019.  Sorted out the problem with calling
  KheCombSolverFindClasses twice.  There is now a flag
  to say whether it needs to be done or not.  Basically
  ready to test, but I'm auditing the documentation first.
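The flag can be sketched as a standard compute-at-most-once guard.
  The struct and function names below are made up for illustration;
  only the pattern reflects what the real KheCombSolverFindClasses
  fix does:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative sketch of the "find classes at most once" flag;
   the type and function names are assumptions, not KHE's. */
typedef struct comb_solver {
  bool classes_found;  /* true once the classes have been computed */
  int  class_count;    /* stands in for the real class data */
  int  find_calls;     /* how many times the expensive search ran */
} COMB_SOLVER;

static void SolverFindClasses(COMB_SOLVER *cs)
{
  cs->find_calls++;
  cs->class_count = 3;  /* placeholder for the real class search */
}

/* Callers use this instead of calling SolverFindClasses directly,
   so the search runs at most once no matter how often it is asked for */
static void SolverEnsureClasses(COMB_SOLVER *cs)
{
  if (!cs->classes_found) {
    SolverFindClasses(cs);
    cs->classes_found = true;
  }
}
```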

29 July 2019.  Auditing the documentation.  Up to the start
  of "Profile grouping".

31 July 2019.  Worked on the XESTT paper today, incorporating
  Gerhard's suggestions.  Sorted out the Nurse vs Nurses issue.
  Now I have to send to Gerhard and Greet, then wait two weeks
  and resubmit.

1 August 2019.  Sent the revised XESTT paper to Gerhard and
  Greet today.  I will wait two weeks and then submit it.

2 August 2019.  Refereed a paper today.

3 August 2019.  "Ordinary profile grouping can ask for a unique
  zero-cost grouping, extended can only ask for zero-cost.  But
  maybe zero-cost is sufficient in either case?"  Sorted out
  this issue in the documentation.  Checked the implementation
  and found that it already did it the new way.

  "Documentation speaks of handling history when Cmin = Cmax, this
  needs to be implemented or else marked as not implemented."  I've
  now marked this as not yet implemented in the documentation; I
  feel little interest given that no instances currently need it.

  One month today since I started revising grouping by resource
  constraints.

5 August 2019.  Documented KheCombSolverAddProfileRequirement and
  implemented it, and used it to get exactly what I want during
  profile grouping.  So grouping is all implemented and ready to
  test.  I started on grouping by resource constraints on 3 July,
  so it's a full month so far.

  Did a quick scan through the documentation and updated the
  general description of domain handling in profile grouping.

6 August 2019.  Testing grouping by resource today.  So far I
  have fixed a few silly bugs, and now it seems to be working:
  it has found the night shift groups in COI-GPost.

  Looked into why combinatorial grouping was not finding
  weekend day shift groups in GPost.  Found that we need
  allow_singles after all.  Added it back in and substantially
  revised the documentation to explain the whole issue.  But
  after more testing we are still not finding weekend day
  shifts in GPost.

7 August 2019.  Still looking into why we did not find any
  weekend day shifts.  The finger is currently pointing at
  KheTaskerGroupingClear.  I sorted it out, the covers in
  the tasker were not being cleared correctly.

  Looks like we're working correctly on COI-GPost now:

    parallel solve of COI-GPost: starting solve 1 (last)
    combinatorial grouping made grouped task: 1Fri:N.0{1Sat:N.0, 1Sun:N.0}
    combinatorial grouping made grouped task: 2Fri:N.0{2Sat:N.0, 2Sun:N.0}
    combinatorial grouping made grouped task: 3Fri:N.0{3Sat:N.0, 3Sun:N.0}
    combinatorial grouping made grouped task: 4Fri:N.0{4Sat:N.0, 4Sun:N.0}
    combinatorial grouping made grouped task: 1Sat:D.2{1Sun:D.2}
    combinatorial grouping made grouped task: 1Sat:D.1{1Sun:D.1}
    combinatorial grouping made grouped task: 1Sat:D.0{1Sun:D.0}
    combinatorial grouping made grouped task: 2Sat:D.2{2Sun:D.2}
    combinatorial grouping made grouped task: 2Sat:D.1{2Sun:D.1}
    combinatorial grouping made grouped task: 2Sat:D.0{2Sun:D.0}
    combinatorial grouping made grouped task: 3Sat:D.2{3Sun:D.2}
    combinatorial grouping made grouped task: 3Sat:D.1{3Sun:D.1}
    combinatorial grouping made grouped task: 3Sat:D.0{3Sun:D.0}
    combinatorial grouping made grouped task: 4Sat:D.2{4Sun:D.2}
    combinatorial grouping made grouped task: 4Sat:D.1{4Sun:D.1}
    combinatorial grouping made grouped task: 4Sat:D.0{4Sun:D.0}
    profile grouping made grouped task: 1Mon:N.0{1Tue:N.0}
    profile grouping made grouped task: 2Mon:N.0{2Tue:N.0}
    profile grouping made grouped task: 3Mon:N.0{3Tue:N.0}
    profile grouping made grouped task: 4Mon:N.0{4Tue:N.0}
    profile grouping made grouped task: 4Wed:N.0{4Thu:N.0}
    profile grouping made grouped task: 3Wed:N.0{3Thu:N.0}
    profile grouping made grouped task: 2Wed:N.0{2Thu:N.0}
    profile grouping made grouped task: 1Wed:N.0{1Thu:N.0}
    [ "COI-GPost", 1 solution, in 0.3 secs: cost 0.00016 ]

  And best of 8:

    [ "COI-GPost", 4 threads, 8 solves, 5 distinct costs, 1.2 secs:
      0.00011 0.00012 0.00012 0.00014 0.00014 0.00014 0.00015 0.00016
    ]

  This all seems fine.  The costs are what I was getting before,
  although the run times seem somewhat longer.

7 August 2019.  Starting work on COI-BCDT-Sep today.  Curtois' best
  result is 100, KHE18 has been getting around 300 without the new
  grouping code.  First results:

    [ KheArchiveParallelSolve(COI-BCDT-Sep, threads 1, make 1, keep 1, time omit, limit -1.0)
      parallel solve of COI-BCDT-Sep: starting solve 1 (last)
      profile grouping made grouped task: 1Mon:M.3{1Tue:M.3, 2Wed:M.3}
      profile grouping made grouped task: 2Thu:M.3{2Fri:M.3, 2Sat:M.2}
      profile grouping made grouped task: 3Mon:M.3{3Tue:M.3, 4Wed:M.3}
      profile grouping made grouped task: 4Thu:M.3{4Fri:M.3, 4Sat:M.2}
      profile grouping made grouped task: 4Sun:M.2{4Mon:M.3, 4Tue:M.3}
      profile grouping made grouped task: 4Mon:M.2{4Tue:M.2, 5Wed:M.3}
      profile grouping made grouped task: 5Wed:M.2{5Thu:M.3}
      profile grouping made grouped task: 4Wed:M.2{4Thu:M.2, 4Fri:M.2}
      profile grouping made grouped task: 2Wed:M.2{2Thu:M.2, 2Fri:M.2}
      profile grouping made grouped task: 1Sun:M.2{1Mon:M.2, 1Tue:M.2}
      profile grouping made grouped task: 1Thu:M.3{1Fri:M.3, 1Sat:M.2}
      profile grouping made grouped task: 1Wed:M.3{1Thu:M.2, 1Fri:M.2}
      [ "COI-BCDT-Sep", 1 solution, in 3.5 secs: cost 0.00420 ]
    ]

  Not great, there is work to do here.

9 August 2019.  Now selecting a resource not assigned on any
  relevant day.  Profile grouping is now reducing the profile to:

    inf[1:0:0:0:0:0:0:0:0:0:0:0:1:1:1:2:2:2:3:3:3:4:4:4:4:4:4:1:2:3]inf

  This is good but still not all the way.  Have to keep at it.

10 August 2019.  Looking into why not everything is getting grouped.
  Found that singles was counting grouped tasks beyond the profile
  max length.  We need to ignore these when finding singles.

11 August 2019.  Implemented the revised comb solver interface today.
  It's ready to test.  I've tested GPost, had to fix one or two
  things, but now it's working again.  Testing COI-BCDT-Sep, got
  it down to

    0[1:0:0:0:0:0:0:0:0:3:3:3:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:1:2]0

  which has one glitch but is otherwise pretty good.  But it has
  not improved the cost:

    [ "COI-BCDT-Sep", 1 solution, in 3.9 secs: cost 0.00410 ]

  The actual groups are

    profile grouping made grouped task: 1Mon:M.3{1Tue:M.3{}, 2Wed:M.3{}}
    profile grouping made grouped task: 2Thu:M.3{2Fri:M.3{}, 2Sat:M.2{}}
    profile grouping made grouped task: 2Sun:M.2{2Mon:M.3{}, 2Tue:M.3{}}
    profile grouping made grouped task: 2Mon:M.2{2Tue:M.2{}, 3Wed:M.3{}}
    profile grouping made grouped task: 3Wed:M.2{3Thu:M.3{}, 3Fri:M.3{}}
    profile grouping made grouped task: 3Thu:M.2{3Fri:M.2{}, 3Sat:M.2{}}
    profile grouping made grouped task: 3Sun:M.2{3Mon:M.3{}, 3Tue:M.3{}}
    profile grouping made grouped task: 3Mon:M.2{3Tue:M.2{}, 4Wed:M.3{}}
    profile grouping made grouped task: 4Wed:M.2{4Thu:M.3{}, 4Fri:M.3{}}
    profile grouping made grouped task: 4Thu:M.2{4Fri:M.2{}, 4Sat:M.2{}}
    profile grouping made grouped task: 4Sun:M.2{4Mon:M.3{}, 4Tue:M.3{}}
    profile grouping made grouped task: 4Mon:M.2{4Tue:M.2{}, 5Wed:M.3{}}
    profile grouping made grouped task: 2Wed:M.2{2Thu:M.2{}, 2Fri:M.2{}}
    profile grouping made grouped task: 1Sun:M.2{1Mon:M.2{}, 1Tue:M.2{}}
    profile grouping made grouped task: 1Thu:M.3{1Fri:M.3{}, 1Sat:M.2{}}
    profile grouping made grouped task: 1Wed:M.3{1Thu:M.2{}, 1Fri:M.2{}}
    profile grouping made grouped task: 1Wed:M.2{1Thu:M.1{}, 1Fri:M.1{}}
    profile grouping made grouped task: 1Wed:M.1{1Thu:M.0{}, 1Fri:M.0{}}
    profile grouping made grouped task: 1Wed:M.0{1Thu:M.4{}, 1Fri:M.4{}}
    profile grouping made grouped task: 1Wed:M.4{1Thu:A.2{}, 1Fri:A.2{}}
    profile grouping made grouped task: 1Wed:A.2{1Thu:A.1{}, 1Fri:A.1{}}
    profile grouping made grouped task: 1Wed:A.1{1Thu:A.0{}, 1Fri:A.0{}}
    profile grouping made grouped task: 1Wed:A.0{1Thu:A.3{}, 1Fri:A.3{}}
    profile grouping made grouped task: 1Sat:M.1{1Sun:M.1{}, 1Mon:M.1{}}
    profile grouping made grouped task: 1Sat:M.0{1Sun:M.0{}, 1Mon:M.0{}}
    profile grouping made grouped task: 1Sat:M.3{1Sun:M.3{}, 1Mon:M.4{}}
    profile grouping made grouped task: 1Sat:A.2{1Sun:A.2{}, 1Mon:A.2{}}
    profile grouping made grouped task: 1Sat:A.1{1Sun:A.1{}, 1Mon:A.1{}}
    profile grouping made grouped task: 1Sat:A.0{1Sun:A.0{}, 1Mon:A.0{}}
    profile grouping made grouped task: 1Sat:A.3{1Sun:A.3{}, 1Mon:A.3{}}
    profile grouping made grouped task: 1Tue:M.1{2Wed:M.1{}, 2Thu:M.1{}}
    profile grouping made grouped task: 1Tue:M.0{2Wed:M.0{}, 2Thu:M.0{}}
    profile grouping made grouped task: 1Tue:M.4{2Wed:M.4{}, 2Thu:M.4{}}
    profile grouping made grouped task: 1Tue:A.2{2Wed:A.2{}, 2Thu:A.2{}}
    profile grouping made grouped task: 1Tue:A.1{2Wed:A.1{}, 2Thu:A.1{}}
    profile grouping made grouped task: 1Tue:A.0{2Wed:A.0{}, 2Thu:A.0{}}
    profile grouping made grouped task: 1Tue:A.3{2Wed:A.3{}, 2Thu:A.3{}}
    profile grouping made grouped task: 2Fri:M.1{2Sat:M.1{}, 2Sun:A.2{}}
    profile grouping made grouped task: 2Fri:M.0{2Sat:M.0{}, 2Sun:A.1{}}
    profile grouping made grouped task: 2Fri:M.4{2Sat:M.3{}, 2Sun:A.0{}}
    profile grouping made grouped task: 2Fri:A.2{2Sat:A.2{}, 2Sun:A.3{}}
    profile grouping made grouped task: 2Mon:M.1{2Tue:M.1{}, 3Wed:M.1{}}
    profile grouping made grouped task: 2Mon:M.0{2Tue:M.0{}, 3Wed:M.0{}}
    profile grouping made grouped task: 2Mon:M.4{2Tue:M.4{}, 3Wed:M.4{}}
    profile grouping made grouped task: 2Mon:A.2{2Tue:A.2{}, 3Wed:A.2{}}
    profile grouping made grouped task: 3Thu:M.1{3Fri:M.1{}, 3Sat:M.1{}}
    profile grouping made grouped task: 3Thu:M.0{3Fri:M.0{}, 3Sat:M.0{}}
    profile grouping made grouped task: 3Thu:M.4{3Fri:M.4{}, 3Sat:M.3{}}
    profile grouping made grouped task: 3Thu:A.2{3Fri:A.2{}, 3Sat:A.2{}}
    profile grouping made grouped task: 3Sun:M.1{3Mon:M.1{}, 3Tue:M.1{}}
    profile grouping made grouped task: 3Sun:M.0{3Mon:M.0{}, 3Tue:M.0{}}
    profile grouping made grouped task: 3Sun:M.3{3Mon:M.4{}, 3Tue:M.4{}}
    profile grouping made grouped task: 3Sun:A.2{3Mon:A.2{}, 3Tue:A.2{}}
    profile grouping made grouped task: 4Wed:M.1{4Thu:M.1{}, 4Fri:M.1{}}
    profile grouping made grouped task: 4Wed:M.0{4Thu:M.0{}, 4Fri:M.0{}}
    profile grouping made grouped task: 4Wed:M.4{4Thu:M.4{}, 4Fri:M.4{}}
    profile grouping made grouped task: 4Wed:A.2{4Thu:A.2{}, 4Fri:A.2{}}
    profile grouping made grouped task: 4Sat:M.1{4Sun:M.1{}, 4Mon:M.1{}}
    profile grouping made grouped task: 4Sat:M.0{4Sun:M.0{}, 4Mon:M.0{}}
    profile grouping made grouped task: 4Sat:M.3{4Sun:M.3{}, 4Mon:M.4{}}
    profile grouping made grouped task: 4Sat:A.2{4Sun:A.2{}, 4Mon:A.2{}}
    profile grouping made grouped task: 4Tue:M.1{5Wed:M.2{}, 5Thu:M.3{}}
    profile grouping made grouped task: 4Tue:M.0{5Wed:M.1{}, 5Thu:M.2{}}
    profile grouping made grouped task: 4Tue:M.4{5Wed:M.0{}, 5Thu:M.1{}}
    profile grouping made grouped task: 4Tue:A.2{5Wed:A.2{}, 5Thu:A.2{}}
    profile grouping made grouped task: 4Tue:A.1{5Wed:A.1{}, 5Thu:A.1{}}
    profile grouping made grouped task: 4Tue:A.0{5Wed:A.0{}, 5Thu:A.0{}}
    profile grouping made grouped task: 4Tue:A.3{5Wed:A.3{}, 5Thu:A.3{}}
    profile grouping made grouped task: 4Sat:A.1{4Sun:A.1{}, 4Mon:A.1{}}
    profile grouping made grouped task: 4Sat:A.0{4Sun:A.0{}, 4Mon:A.0{}}
    profile grouping made grouped task: 4Sat:A.3{4Sun:A.3{}, 4Mon:A.3{}}
    profile grouping made grouped task: 4Wed:A.1{4Thu:A.1{}, 4Fri:A.1{}}
    profile grouping made grouped task: 4Wed:A.0{4Thu:A.0{}, 4Fri:A.0{}}
    profile grouping made grouped task: 4Wed:A.3{4Thu:A.3{}, 4Fri:A.3{}}
    profile grouping made grouped task: 3Sun:A.1{3Mon:A.1{}, 3Tue:A.1{}}
    profile grouping made grouped task: 3Sun:A.0{3Mon:A.0{}, 3Tue:A.0{}}
    profile grouping made grouped task: 3Sun:A.3{3Mon:A.3{}, 3Tue:A.3{}}
    profile grouping made grouped task: 3Thu:A.1{3Fri:A.1{}, 3Sat:A.1{}}
    profile grouping made grouped task: 3Thu:A.0{3Fri:A.0{}, 3Sat:A.0{}}
    profile grouping made grouped task: 3Thu:A.3{3Fri:A.3{}, 3Sat:A.3{}}
    profile grouping made grouped task: 2Mon:A.1{2Tue:A.1{}, 3Wed:A.1{}}
    profile grouping made grouped task: 2Mon:A.0{2Tue:A.0{}, 3Wed:A.0{}}
    profile grouping made grouped task: 2Mon:A.3{2Tue:A.3{}, 3Wed:A.3{}}

  So the next step is to look over this solution and see where the
  cost is coming from.

12 August 2019.  The problem is that when I arbitrarily set history
  to 0 I start off a whole lot of triples at the same time, that is,
  at the first time.  This causes problems later when someone wants
  to take (say) one day off and then start work again.  So I have
  commented out the history cancel, but now I need to make, say, a
  grouping which is strictly speaking unjustified, but do it at a
  random point so that groups are staggered along the cycle.  Or
  better, pick a point where a minimal number of runs has started.
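  The "minimal runs" choice above can be sketched very simply.  Here
  runs_started is a hypothetical per-day count of how many runs begin
  on each day, not an actual KHE data structure:

```c
#include <assert.h>

/* Pick a staggered start day for an otherwise unjustified group:
   the day of the cycle on which the fewest runs begin. */
int min_runs_start_day(const int *runs_started, int num_days)
{
  assert(num_days > 0);
  int best = 0;
  for (int d = 1; d < num_days; d++)
    if (runs_started[d] < runs_started[best])
      best = d;
  return best;
}
```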

15 August 2019.  Working on 12 August ideas.  Now have a count of
  where a minimal number of runs has started, but if I just ask
  for one there, using KheProfile, it will be cancelled out by
  singles.  So I need to think again about singles and how they
  interact with this one group, before doing anything.

16 August 2019.  Working on 12 August ideas.  All documented,
  implemented, and ready to test.

17 August 2019.  Audited profile grouping, made a few small
  adjustments.  First test seemed to do all the grouping we want:

    inf[6:3:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:3:7]inf
    inf<3:3:3:2:3:4:2:3:4:2:2:4:3:2:4:3:2:3:3:3:3:3:3:3:2:3:4:2:0:0>inf

  The result was

    [ "COI-BCDT-Sep", 1 solution, in 2.3 secs: cost 0.00380 ]

  Not great, why not?

18 August 2019.  Worked out that the problem is with requested,
  which has bugs that make it assign far too many tasks.  Working
  on it now.  I've redone the whole thing now, including
  handling cases where too many actives can be fixed by assigning
  to negative time groups.  Needs a careful audit, then test.

19 August 2019.  Audited khe_sr_requested.c, but I ended up
  making a lot of changes, so now the audit needs an audit.

20 August 2019.  Submitted revised nurse rostering modelling paper
  today, informed Gerhard and Greet, and put a copy on my web site.

  Audited requested and ran it.  It seems to be working now but
  the non-forced requested assignments may be too much.  With them:

    [ "COI-BCDT-Sep", 1 solution, in 2.6 secs: cost 0.00400 ]

    [ "COI-BCDT-Sep", 4 threads, 8 solves, 5 distinct costs, 7.3 secs:
      0.00320 0.00330 0.00330 0.00350 0.00390 0.00400 0.00400 0.00400
    ]

  Without them:

    [ "COI-BCDT-Sep", 1 solution, in 2.0 secs: cost 0.00390 ]

    [ "COI-BCDT-Sep", 4 threads, 8 solves, 7 distinct costs, 5.2 secs:
      0.00340 0.00350 0.00380 0.00390 0.00410 0.00440 0.00440 0.00460
    ]

  Finished the revision of khe_sr_requested.c, and it's documented now.

22 August 2019.  Using revised version of khe_sr_requested.c, the one
  which uses a group monitor to notice when several nearby requests
  are covered by one task.  With non-forced assignments:

    [ "COI-BCDT-Sep", 4 threads, 8 solves, 5 distinct costs, 9.6 secs:
      0.00370 0.00370 0.00380 0.00380 0.00390 0.00390 0.00400 0.00410
    ]

  Without them:

    [ "COI-BCDT-Sep", 4 threads, 8 solves, 6 distinct costs, 5.6 secs:
      0.00380 0.00390 0.00400 0.00400 0.00400 0.00410 0.00480 0.00490
    ]

  These may be only random differences, although non-forced does look
  slightly better.  But either way they are not very good!

  After adding two limit active intervals constraints of weight 1
  that together require exactly three night shifts, I get this:

    [ "COI-BCDT-Sep", 4 threads, 8 solves, 8 distinct costs, 8.3 secs:
      0.00325 0.00336 0.00342 0.00349 0.00366 0.00384 0.00386 0.00433
    ]

  Best is actually 320 when you remove the two constraints again.
  So this is what happens when you group nights into threes, except
  it isn't actually doing this grouping because of a mix-up over
  the precise distinction between strict and non-strict grouping.

23 August 2019.  Sorted out the precise relationship between strict
  and non-strict profile grouping, and documented and implemented it
  all.  First results (including the two fake constraints):

    [ "COI-BCDT-Sep-A", 1 solution, in 3.3 secs: cost 0.00346 ]

  and

    [ "COI-BCDT-Sep-A", 4 threads, 8 solves, 8 distinct costs, 6.1 secs:
      0.00236 0.00257 0.00275 0.00283 0.00294 0.00307 0.00336 0.00343
    ]

  So it has worked, but not brilliantly.

  Ensured that preassigned tasks are included when grouping.  This
  may backfire if it makes too many groups, but we shall see.

24 August 2019.  Changed the definition of cost in combinatorial
  grouping, all implemented and documented.  Modified the definition
  of what it means for two tasks to be equivalent, for taskers.
  Implemented and documented.

25 August 2019.  Moved the test assignment code from the tasker
  to the comb solver and removed it from khe_solvers.h and from
  the documentation.  Changed the comb solver documentation to
  the new form, implemented it, and started testing it.  It seems
  to be working, at least the combinatorial grouping part is.

  Here is the current final cost:

    [ "COI-BCDT-Sep-A", 1 solution, in 2.5 secs: cost 0.00267 ]

  and best of 8:

    [ "COI-BCDT-Sep-A", 4 threads, 8 solves, 8 distinct costs, 6.7 secs:
      0.00228 0.00234 0.00245 0.00246 0.00267 0.00286 0.00293 0.00297
    ]

  So we are beginning to get somewhere.  Without the -A:

    [ "COI-BCDT-Sep", 1 solution, in 2.2 secs: cost 0.00350 ]

  and

    [ "COI-BCDT-Sep", 4 threads, 8 solves, 6 distinct costs, 4.7 secs:
      0.00330 0.00330 0.00350 0.00370 0.00370 0.00380 0.00390 0.00400
    ]

26 August 2019.  Looking over yesterday's results today.  Tried a
  fun test where we group {Wed, Thu, Fri, Sat} and {Sun, Mon, Tue}
  night shifts and see what happens.  Not so great:

    [ "COI-BCDT-Sep-A", 4 threads, 8 solves, 6 distinct costs, 7.3 secs:
      0.00320 0.00360 0.00370 0.00380 0.00390 0.00390 0.00390 0.00410
    ]

  I've removed the two extra constraints from COI-BCDT-Sep-A, since
  the special grouping does what's wanted anyway.  If you break
  {Wed, Thu, Fri, Sat} into {Wed, Thu} and {Fri, Sat} you get this:

    [ "COI-BCDT-Sep-A", 4 threads, 8 solves, 4 distinct costs, 7.3 secs:
      0.00270 0.00290 0.00340 0.00340 0.00340 0.00340 0.00350 0.00350
    ]

  which is quite a lot better but still not as good as having
  no special case at all, just the artificial constraints:

    [ "COI-BCDT-Sep-A2", 4 threads, 8 solves, 8 distinct costs, 7.3 secs:
      0.00228 0.00234 0.00245 0.00246 0.00267 0.00286 0.00293 0.00297
    ]

  The conclusion seems to be that no simple grouping of the night
  shifts is going to fix these problems.

  Doing more rematching (of smaller intervals), got this:

    [ "COI-BCDT-Sep-A2", 4 threads, 8 solves, 7 distinct costs, 6.0 secs:
      0.00205 0.00245 0.00265 0.00284 0.00284 0.00293 0.00296 0.00307
    ]

  It's something of a breakthrough.  For the original instance:

    [ "COI-BCDT-Sep", 4 threads, 8 solves, 6 distinct costs, 6.4 secs:
      0.00300 0.00310 0.00330 0.00340 0.00370 0.00370 0.00410 0.00410
    ]

28 August 2019.  Looking into TimeOn-Nurse3-s1000:3Fri3/Nurse3.  At
  present it seems to be satisfied by an undesirable shift.  It
  was originally handled well by requested.  Why did that go wrong?

29 August 2019.  Added KheMonitorRequestsSpecificBusyTimes and used it
  in time sweep to not cut off monitors that request specific busy times.
  I'm hoping that this will fix the TimeOn-Nurse3-s1000:3Fri3/Nurse3
  problem, which I'm guessing arose because the monitor that led to
  the original requested assignment was cut off during time sweep.
  Tests show that it seems to be working.  Here are the results:

    [ "COI-BCDT-Sep-A2", 1 solution, in 1.9 secs: cost 0.00256 ]

    [ "COI-BCDT-Sep-A2", 4 threads, 8 solves, 7 distinct costs, 6.4 secs:
      0.00243 0.00256 0.00269 0.00275 0.00276 0.00276 0.00292 0.00326
    ]

  This is worse than before, but I should keep it because it's better
  in principle.  It does slightly better on the original instance:

    [ "COI-BCDT-Sep", 4 threads, 8 solves, 7 distinct costs, 5.0 secs:
      0.00280 0.00300 0.00310 0.00320 0.00330 0.00330 0.00350 0.00360
    ]

  This is better and faster than the 26 August solve.

  Nurse3 has AAV.  Would reverse combinatorial grouping change
  this to MMV?  Worth a try.  Yes, it works:

    [ "COI-BCDT-Sep", 4 threads, 8 solves, 8 distinct costs, 5.2 secs:
      0.00270 0.00280 0.00300 0.00310 0.00320 0.00340 0.00350 0.00390
    ]

  Tried a full run.  Results seem slightly worse, and then there
  was a crash (KheTaskGrouperUnGroup: cannot delete task bound)
  on CQ14-23.  Also, I inadvertently had nonforced requests on,
  which was not the intention.  Fixed that now.

30 August 2019.  I'm keeping the tasks in task sets sorted by decreasing
  meet index now.  Hopefully this will match up similar tasks when
  grouping.  It does seem to have helped:

    [ "COI-BCDT-Sep-A", 4 threads, 8 solves, 7 distinct costs, 5.3 secs:
      0.00204 0.00226 0.00246 0.00252 0.00252 0.00262 0.00276 0.00313
    ]

  This is my best result so far; I had 205 before but the second best
  was 245.  Without the artificial constraint:

    [ "COI-BCDT-Sep", 4 threads, 8 solves, 6 distinct costs, 6.3 secs:
      0.00290 0.00300 0.00300 0.00320 0.00320 0.00330 0.00350 0.00360
    ]

  Quite a big difference.
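  The sort order introduced above (tasks in task sets by decreasing
  meet index) amounts to a comparator like this sketch; the task type
  here is an illustrative stand-in, not KHE's real one:

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative task type; the real KHE task is far richer. */
typedef struct { int meet_index; } task;

/* qsort comparator: larger meet indexes first, so similar tasks
   line up when grouping. */
int cmp_task_by_decreasing_meet(const void *a, const void *b)
{
  const task *ta = a, *tb = b;
  return tb->meet_index - ta->meet_index;
}
```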

31 August 2019.  Wrote KheTaskSetSwapToEndRepair and called it from
  KheTaskSetMoveAugment and KheTaskMoveAugment.  Tested it, it is
  producing clashes, I need to think a bit harder about how to
  avoid them.

1 September 2019.  Revised KheTaskSetSwapToEndRepair, needs an audit
  and test.  First results:

    [ "COI-BCDT-Sep-A", 1 solution, in 3.9 secs: cost 0.00252 ]

    [ "COI-BCDT-Sep-A", 4 threads, 8 solves, 7 distinct costs, 8.9 secs:
      0.00184 0.00184 0.00214 0.00224 0.00225 0.00247 0.00252 0.00263
    ]

  So it has helped a bit.  Without the artificial constraint:

    [ "COI-BCDT-Sep", 4 threads, 8 solves, 4 distinct costs, 11.7 secs:
      0.00240 0.00240 0.00250 0.00290 0.00300 0.00300 0.00300 0.00300
    ]

  This is my best result so far for this case.  The question is, is
  it worth it considering the extra running time?  I tried limiting
  the depth to 1 but that got me back to 220.

  Night grouping pattern (my soln):

     1Wed 1Thu 1Fri 1Sat 1Sun 1Mon 1Tue 2Wed 2Thu 2Fri 2Sat 2Sun 2Mon 2Tue 
    -----------------------------------------------------------------------
    | 2       | 16           | 8            | X  | 9            | 2       
    -----------------------------------------------------------------------
    | 5       | 19           | X  | 3            | 10           | 11       
    -----------------------------------------------------------------------
    | 18 | 15           | 14           | 13           | 4            | 7   
    -----------------------------------------------------------------------
    | 20      | X  | 6                 | 18                | 19           |
    -----------------------------------------------------------------------

  Night grouping pattern (best soln):

     1Wed 1Thu 1Fri 1Sat 1Sun 1Mon 1Tue 2Wed 2Thu 2Fri 2Sat 2Sun 2Mon 2Tue 
    -----------------------------------------------------------------------
    | 6                 | 14           | 3                 | 5            |
    -----------------------------------------------------------------------
    | 13      | 12           | X  | 18           | 4            | 11V      
    -----------------------------------------------------------------------
    | 16      | 19           | 8            | 13           | 10           |
    -----------------------------------------------------------------------
    | 20 | 2            | 15           | 7                 | 12      | 17  
    -----------------------------------------------------------------------

2 September 2019.  Fixed KheTaskGrouperUnGroup.  Not tested yet but
  sure to work.

  Decided against this:
  For each monitor, store the number of attempts to augment
  since the last successful augment, and sort by increasing
  value of this number (with decreasing cost as secondary).
  Implement by storing the sequence number of the last
  successful augment and sorting by decreasing value of that.
  Actually I think I've already tried this, see the functions
  with "Failures" in the title in khe_se_ejector.c.

3 September 2019.  Had a core dump at the end of yesterday.  I'm
  working on that today.  Fixed it (I introduced a stupid bug
  yesterday), but now I'm getting clashes again from swap to end.
  Did a long run, stored in *-EL.pdf.  But I'm not happy with
  the slow run through the last few CQ14 instances.  So I'm
  trying another run there with some memory debug output.

4 September 2019.  Checked through all the places where malloc and
  friends are called.  There are basically none left, so there should
  be no problems with memory allocation.  I ran CQ14 again with
  a memory check on to see what is happening.  And there does
  seem to be a significant amount of memory being allocated
  again and again, even on instance 11.  Why?  Must review.

  I've worked out that arenas were accumulating in the leader
  thread and not being redistributed to the other threads.  So
  I've written code to do this redistributing, and now each
  thread settles down to about 5 to 7 arenas.  This is about
  as good as it will get, given that various solvers need an
  arena.  However it hasn't fixed the memory problem; that
  seems to be a problem even for CQ14-24 alone, given the
  enormous memory demands being made by it, although I do
  need to check.

5 September 2019.  Shaved 3 integers, effectively two 64-bit
  words, off the size of cluster busy times monitors.
  Their total size is 9 (from inherit-monitor) plus 11 plus
  the time group pointers (2 words each) so it's doubtful
  whether it was worth doing, even though cluster busy times
  monitors are the most intensively used in nurse rostering
  (we convert limit busy times constraints to them).
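  Using the word counts just given, the size arithmetic is as below.
  The counts are the diary's; the actual layout differs in detail:

```c
#include <assert.h>

/* Size of a cluster busy times monitor in 64-bit words: 9 words
   inherited from the generic monitor, 11 of its own, plus 2 words
   per time group pointer.  Illustrative arithmetic only. */
int cluster_monitor_words(int num_time_groups)
{
  return 9 + 11 + 2 * num_time_groups;
}
```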

  Tried running CQ14-24 single-threaded.  It consumes a lot
  of memory but not too much for the system to cope with.
  Started looking into the slow running time.  Found that
  AvailSolverFindBestBusyTimes is running very slowly.  So
  making it run faster is my first job.  I did this by
  making it exit early if 20 sets have been tried since
  the last new best.  The overall result was

    [ "CQ14-24", 1 solution, in 282.7 secs: cost 25086.99999 ]

  which is a hopeless solution but at least the running
  time is back to something reasonable.

    [ "CQ14-24", 4 threads, 8 solves, 8 distinct costs, 12.1 mins:
      25323.99999 25451.99999 25467.99999 25479.99999 25501.99999
      25544.99999 25901.99999 25949.99999
    ]

  which is an OK running time, but the system responded slowly
  afterwards, suggesting that there is still a memory problem.
  From what I saw before, it is the tasks more than the monitors
  that are causing this problem.
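  The early-exit rule added to AvailSolverFindBestBusyTimes can be
  sketched as follows; set_costs is a hypothetical flat array of
  candidate costs standing in for the real enumeration of sets:

```c
#include <assert.h>

/* Give up once 20 consecutive candidate sets have failed to improve
   on the best cost seen so far.  Assumes num_sets >= 1. */
#define MAX_SETS_SINCE_BEST 20

double best_cost_with_early_exit(const double *set_costs, int num_sets)
{
  assert(num_sets > 0);
  double best = set_costs[0];
  int since_best = 0;
  for (int i = 1; i < num_sets && since_best < MAX_SETS_SINCE_BEST; i++) {
    if (set_costs[i] < best) {
      best = set_costs[i];
      since_best = 0;
    } else
      since_best++;
  }
  return best;
}
```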

6 September 2019.  Working on CQ14-05.  My current solutions are:

    [ "CQ14-05", 4 threads, 8 solves, 8 distinct costs, 13.2 secs:
      0.01345 0.01440 0.01447 0.01542 0.01543 0.01640 0.01641 0.01651
    ]

  Curtois' best is 1143.  This represents 2 fewer unassigned shifts
  (costing 100 each) and virtually the same other stuff.

  Had a look at KHE18's grouping decisions on CQ14-05.  They were
  mad.  Discovered I'd forgotten to exclude monitors which did not
  come from constraints that apply to all resources.  After fixing
  that problem there was no grouping at all, which was probably
  right for CQ14-05.  Results now are

    [ "CQ14-05", 1 solution, in 3.1 secs: cost 0.01555 ]

  and

    [ "CQ14-05", 4 threads, 8 solves, 7 distinct costs, 11.9 secs:
      0.01346 0.01534 0.01548 0.01555 0.01555 0.01639 0.01651 0.01741
    ]

  so not much has changed, except the run is faster now.

  What happens if we group Saturday and Sunday even though
  there is no justification for it?  The supply of weekends is

    Max 2 busy weekends for 10 resources           20
    Max 3 busy weekends for  6 resources           18
                                                   --
                                                   38

  compared with the demand: 41.  So actually we could group,
  and there are going to be 3 weekends not covered anyway.
  Let's look into how we can convince the solver to group
  these weekends.
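  As a check on the arithmetic above, with the numbers quoted for
  CQ14-05:

```c
#include <assert.h>

/* Weekend supply from the two busy-weekend limits. */
int weekend_supply(void)
{
  return 10 * 2 + 6 * 3;  /* 20 + 18 = 38 busy weekends available */
}

/* Weekends that cannot be covered, given the demand. */
int uncovered_weekends(int demand)
{
  int shortfall = demand - weekend_supply();
  return shortfall > 0 ? shortfall : 0;
}
```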

7 September 2019.  Today I girded my loins, copied khe_sr_group_by_rc.c
  to save_khe_sr_group_by_rc.c, and set to work on generalizing the
  combination elimination code to handle a group of constraints with
  the same time groups.  Up to KheElimCombSolveForConstraintGroup,
  all done except for handling lim, which can now vary from one
  constraint of the group to another.  I've documented what to do,
  and implemented it.  Ready for an audit and test.

  Made sure the constraints we group together have disjoint resources.
  Not that it would invalidate the idea if they intersected, but the
  formula depends on the number of resources in each constraint, and
  if they are not disjoint this number becomes unclear.

8 September 2019.  Audited generalized combination elimination, and
  started testing.  It is indeed linking the weekend days now, and
  this is causing the desired grouping:

    combinatorial grouping made grouped task: 1Sat:L.0{1Sun:L.0{}} (Preferred-L)
    combinatorial grouping made grouped task: 1Sat:L.1{1Sun:L.1{}} (Preferred-L)
    combinatorial grouping made grouped task: 1Sat:L.2{1Sun:L.2{}} (Preferred-L)
    combinatorial grouping made grouped task: 1Sun:E.0{1Sat:E.0{}} ({A..P})
    combinatorial grouping made grouped task: 1Sun:E.1{1Sat:E.1{}} ({A..P})
    combinatorial grouping made grouped task: 1Sun:E.2{1Sat:E.2{}} ({A..P})
    combinatorial grouping made grouped task: 1Sun:E.3{1Sat:E.3{}} ({A..P})
    combinatorial grouping made grouped task: 1Sun:E.4{1Sat:E.4{}} ({A..P})
    combinatorial grouping made grouped task: 1Sun:E.5{1Sat:E.5{}} ({A..P})

  But there is also

    comb. grouping made grouped task: 2Mon:E.0{1Sun:E.0{1Sat:E.0{}}} ({A..P})

  which needs looking into.  It's a bug, the elements passed in are

     class 2Mon:E etc - yes
     time group 1Sun  - prev
     time group 1Mon  - free

  The first time group passed in after the class should always be free.
  After fixing that I get the correct grouping (weekends only) and:

    [ "CQ14-05", 4 threads, 8 solves, 8 distinct costs, 8.5 secs:
      0.01348 0.01444 0.01446 0.01451 0.01544 0.01550 0.01736 0.01745
    ]

  as compared with what I was getting before:

    [ "CQ14-05", 4 threads, 8 solves, 8 distinct costs, 13.2 secs:
      0.01345 0.01440 0.01447 0.01542 0.01543 0.01640 0.01641 0.01651
    ]

  The new results are marginally worse but quite a lot faster.  And it
  makes sense to do this grouping, so I'll stick with it.  Best of 16:

    [ "CQ14-05", 4 threads, 16 solves, 16 distinct costs, 16.7 secs:
      0.01348 0.01351 0.01444 0.01446 0.01449 0.01451 0.01544 0.01550
      0.01635 0.01637 0.01639 0.01640 0.01647 0.01736 0.01742 0.01745
    ]

  I'm struggling to leave fewer than 13 shifts unassigned, making
  cost 1300, when Curtois' solutions leave 11 shifts unassigned,
  giving cost 1143.  Although getting below 13 is not impossible:

    [ "CQ14-05", 4 threads, 64 solves, 49 distinct costs, 74.5 secs:
      0.01254 0.01341 0.01342 0.01346 0.01348 0.01351 0.01351 0.01353
      0.01354 0.01440 0.01440 0.01441 0.01442 0.01443 0.01443 0.01443
      0.01444 0.01444 0.01444 0.01445 0.01446 0.01447 0.01448 0.01448
      0.01449 0.01451 0.01456 0.01457 0.01458 0.01535 0.01540 0.01543
      0.01544 0.01546 0.01548 0.01548 0.01550 0.01553 0.01635 0.01637
      0.01639 0.01640 0.01642 0.01642 0.01643 0.01644 0.01644 0.01645
      0.01646 0.01647 0.01647 0.01647 0.01648 0.01649 0.01649 0.01652
      0.01736 0.01741 0.01742 0.01742 0.01742 0.01745 0.01848 0.01949
    ]

9 September 2019.  Working on this problem today:  CQ14-05 (cost 0.01348,
  diversifier 3):  Resource E is assigned a task on 1Wed that would
  prefer not to be assigned at all.  This could be corrected by

      E -> @ {1Wed}, N -> E {1Wed}

  but this simple repair is not being tried.  I've added some new
  repairs to KheEventResourceMoveAugment; I need to audit what I've
  done, and then test it on CQ14-05.

  Wrote KheResourceGainTaskAugment, which moves a task into a
  resource's timetable within a given time group, and used it
  in KheResourceUnderloadAugment (where equivalent code was
  always present) and in KheEventResourceMoveAugment (a new
  application, to address the CQ14-05 problem given above).

  KheFindUnassignedTasksBefore, KheFindUnassignedTasksAfter, and
  KheResourceGainTaskAugment are the only places where etm is used.

  Testing the new code produced 

    [ "CQ14-05", 1 solution, in 2.6 secs: cost 0.01346 ]

  which presumably means that it worked.  Best of 8:

    [ "CQ14-05", 4 threads, 8 solves, 8 distinct costs, 7.7 secs:
      0.01246 0.01346 0.01352 0.01447 0.01534 0.01544 0.01644 0.01649
    ]

  which is more than I had hoped for:  we have one less unassigned
  shift now, and a slightly faster run.  This is probably enough to
  be going on with from CQ14-05.

  Sorted out EL; it now has a documented option, es_full_widening_on,
  whose default value is false, which turns it off.  Turning it off
  produced these results:

    [ "CQ14-05", 4 threads, 8 solves, 8 distinct costs, 4.6 secs:
      0.01258 0.01339 0.01355 0.01435 0.01447 0.01544 0.01635 0.01636
    ]

  And yes, they are slightly worse (although second best is better).

  Fixed a bug in requested: it was randomly skipping over some
  of the forced requests.  Sadly, the results are worse:

    [ "CQ14-05", 1 solution, in 1.3 secs: cost 0.01556 ]

  and

    [ "CQ14-05", 4 threads, 8 solves, 7 distinct costs, 4.6 secs:
      0.01363 0.01451 0.01539 0.01544 0.01556 0.01648 0.01648 0.01853
    ]

  But we have to stick with it.

11 September 2019.  Working on non-assignment of first day problem.
  Seems like we are assigning far too many tasks to begin with (16
  in the first time group), because they are included in the demand
  set and the resources have minimum workload limits which they help
  to satisfy; and then when we come to redo it makes sense to unassign
  a lot of stuff to get workload overloads down.

  We need to be a lot more cluey about which tasks to include in
  the time sweep.  We should include the minimum number needed
  to satisfy assign resource and limit resources constraints.
  This requires a non-trivial analysis of the tasks and limit
  resources monitors.  Or just something quick and dirty.

12 September 2019.  Wrote KheTaskAsstNeededToSatisfyMonitors and
  KheTaskAddToLimitMonitors to fix up yesterday's problem, and
  I'm calling them, but before testing them I need to sort out some
  details, such as what to do about tasks that are already assigned,
  and what to do about the old array of monitors.

13 September 2019.  Read through the resource matching implementation
  documentation carefully and discovered a simple way to do what
  is needed:  just delete every node whose preferences all include
  r0.  These nodes contain tasks for which non-assignment attracts
  no cost.  All implemented and documented and ready to test:

    [ "CQ14-05", 1 solution, in 2.2 secs: cost 0.01441 ]

  This doesn't really show any improvement, but my debug output
  does show that the construction is much improved, although it
  struggles to fill the slots on the last weekend.  Best of 8:

    [ "CQ14-05", 4 threads, 8 solves, 8 distinct costs, 4.2 secs:
      0.01350 0.01355 0.01440 0.01441 0.01447 0.01558 0.01559 0.01741
    ]

  It's running a bit faster.  So let's say the construction problem
  is fixed and go back to grinding down the solutions.  Best of 32:

    [ "CQ14-05", 4 threads, 32 solves, 28 distinct costs, 16.7 secs:
      0.01239 0.01342 0.01348 0.01349 0.01350 0.01352 0.01352 0.01355
      0.01440 0.01440 0.01441 0.01442 0.01445 0.01446 0.01447 0.01447
      0.01540 0.01542 0.01543 0.01544 0.01551 0.01558 0.01559 0.01641
      0.01643 0.01644 0.01647 0.01647 0.01653 0.01741 0.01763 0.01934
    ]

  So it's not impossible to get below 1300, although here we have
  just one solution out of 32 that has done it.
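  The deletion rule above (drop every node whose preferences all
  include r0) can be sketched with preference sets modelled as
  bitmasks, bit 0 standing for r0; none of this is KHE's actual
  matching representation:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define R0_BIT 0x1u  /* bit representing the non-assignment resource r0 */

/* A node is deletable when every one of its preference sets includes
   r0, so leaving its tasks unassigned attracts no cost. */
bool node_deletable(const uint32_t *pref_sets, int num_sets)
{
  for (int i = 0; i < num_sets; i++)
    if ((pref_sets[i] & R0_BIT) == 0)
      return false;  /* this preference excludes r0 */
  return true;
}
```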

14 September 2019.  Back to grinding down CQ14-05.  With full
  widening on we get this:

    [ "CQ14-05", 4 threads, 8 solves, 6 distinct costs, 10.3 secs:
      0.01250 0.01343 0.01445 0.01540 0.01540 0.01551 0.01551 0.01559
    ]

  Without it (which is what we prefer) we get this:

    [ "CQ14-05", 4 threads, 8 solves, 8 distinct costs, 4.2 secs:
      0.01350 0.01355 0.01440 0.01441 0.01447 0.01558 0.01559 0.01741
    ]

  Note the big difference in running time.  NB Curtois' best soln
  has cost 1144, which is much the same as ours except for the two
  extra assigned shifts.

  Looked into E's request to be busy on {2Sat, 2Sun}.  The request
  module is assigning this but it does not last, because E starts
  off in a way that is incompatible with it, and repairing it is
  just too hard.  In fact I have more or less decided to leave
  CQ14-05 now and work on one of the later CQ14 instances.

  Found a problem with CQ14-10, it was running very slowly.  This
  proved to be a bug in combination elimination, now fixed.  I
  introduced it just recently when I extended combination elimination
  to handle a group of constraints.

  I'll leave CQ14-05 for a while now.  It's hard to see how to
  make further progress with it.

  Did a long run of the COI and CQ14 instances.  The results are
  pretty good on the whole (see khe19-09-14.pdf).  Also did a
  double time limit run of COI-MER, which produced this:

    [ "COI-MER", 4 threads, 8 solves, 8 distinct costs, 19.1 mins:
      0.08853 0.08864 0.09027 0.09224 0.09255 0.09761 0.09783 0.09954
    ]

  I've recorded this 8853 in the paper.

  Started work on COI-HED01.  It has deteriorated and it is fast enough
  to be a good test.  Curtois' best is 136 and KHE18x8 is currently at
  183.  A quick look suggests that the main problems are the rotations
  from week to week.

  Here's the first day's cover:

    <DayOfWeekCover>
      <Day>Monday</Day>
      <Cover><Skill>0</Skill><Shift>1</Shift><Min>4</Min><Max>4</Max></Cover>
      <Cover><Skill>0</Skill><Shift>2</Shift><Min>4</Min><Max>4</Max></Cover>
      <Cover><Skill>0</Skill><Shift>3</Shift><Min>2</Min><Max>2</Max></Cover>
      <Cover><Shift>4</Shift><Min>1</Min><Max>1</Max></Cover>
      <Cover><Shift>5</Shift><Min>0</Min><Max>0</Max></Cover>
      <Cover><Skill>0</Skill><Shift>5</Shift></Cover>
      <Cover><Shift>1</Shift></Cover>
      <Cover><Shift>2</Shift></Cover>
      <Cover><Shift>3</Shift></Cover>
    </DayOfWeekCover>

  Here's another one:

    <DayOfWeekCover>
      <Day>Sunday</Day>
      <Cover><Skill>0</Skill><Shift>1</Shift></Cover>
      <Cover><Skill>0</Skill><Shift>2</Shift></Cover>
      <Cover><Skill>0</Skill><Shift>3</Shift></Cover>
      <Cover><Shift>4</Shift><Min>0</Min><Max>0</Max></Cover>
      <Cover><Shift>5</Shift><Min>4</Min><Max>4</Max></Cover>
      <Cover><Skill>0</Skill><Shift>5</Shift><Min>2</Min></Cover>
      <Cover><Shift>1</Shift><Max>0</Max></Cover>
      <Cover><Shift>2</Shift><Max>0</Max></Cover>
      <Cover><Shift>3</Shift><Max>0</Max></Cover>
    </DayOfWeekCover>

  Looking into why limit resources constraints are used in
  COI-HED01 rather than assign resource and prefer resources
  constraints.  NrcDemandConstraintCanUseDemands seems to be
  the culprit: it is insisting that the worker set be all
  resources.  Why?

16 September 2019.  Added AddHED01Specials to coi.c, to implement
  the limit active intervals constraints that HED01 struggles to
  express.

  Currently generating a version of COI-HED01 which includes
  two extra limit active intervals constraints.  First results:

    [ "COI-HED01", 1 solution, in 7.7 secs: cost 0.00187 ]

    [ "COI-HED01", 4 threads, 8 solves, 8 distinct costs, 19.2 secs:
      0.00186 0.00187 0.00193 0.00195 0.00196 0.00208 0.00215 0.00219
    ]

  There is no violation of either of the added constraints in the
  cost 186 solution, so it applies to the original instance too.
  However KHE18x8 is currently getting a solution of cost 183 for
  this instance.

  Actually profile grouping did nothing, probably because it was
  not able to establish that any tasks were definitely needed,
  because of the limit resources monitor.

17 September 2019.  Working on the "Optimizing demand constraints"
  section in file impl of the nrconv doc.  Going well.  I have
  been more or less forced to treat it in full generality.  If
  you don't do that, then where do you stop?

18 September 2019.  Still working on the "Optimizing demand constraints"
  section in file impl of the nrconv doc.

  Worked out that when traversing the tree I am going to be
  generating "complex demands".  A complex demand is a demand
  for one nurse, which assigns (potentially) a different penalty
  for each nurse, and for non-assignment.

  I need functions that build complex demands.  There is already
  similar code in nrc_demand.c, concerned with the INRC1 option
  of a different penalty for each nurse.  I can build on that.

20 September 2019.  Added code to convert one demand into assign
  resource and prefer resources constraints.  All done but it
  does not handle any special cases; in prefer resources
  constraints it generates the individual resources that
  the penalty does not apply to.  Needs some work there,
  perhaps to uniqueify worker sets.  There won't be many.

  Re-implemented NrcDemandSetMakeFromBound.  It looks good.

  I now have a clean compile of NRConv, except for the one
  call to NrcDemandMake in inrc1.c that is still to do.
  But generally speaking it's going great.

22 September 2019.  NrcDemandBuildPenaltyGroups calls
  NrcWorkerSkillPenalty.  It also calls another function,
  NrcInstanceUniqueSkillPenalty, which calls it.

  File inrc1.c is the only one that calls NrcWorkerAddSkillPenalty,
  so it would make sense to handle it within that file.

  The rule is actually pretty simple:

     if w is a preferred worker then
       penalty = 0
     else if alternative_skill_penalty(w) is present then
       penalty = alternative_skill_penalty(w)
     else
       penalty = non_preferred_penalty(d)

  and in fact alternative_skill_penalty(w) is always present
  in inrc1.c demands and never present in the others.  The
  penalty for non-assignment is given in d (hard 1).
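
  The rule above can be sketched as a small C function.  This is only a
  sketch with hypothetical types and names (WORKER, SkillPenalty); the
  real code lives in nrc_demand.c and inrc1.c.

```c
#include <stdbool.h>

/* hypothetical stand-in for the worker attributes the rule consults */
typedef struct {
  bool is_preferred;           /* w is a preferred worker               */
  bool has_alt_skill_penalty;  /* alternative_skill_penalty(w) present  */
  int  alt_skill_penalty;      /* alternative_skill_penalty(w)          */
} WORKER;

/* penalty for assigning worker w to a demand whose non-preferred
   penalty (taken from the demand d) is non_preferred_penalty */
static int SkillPenalty(const WORKER *w, int non_preferred_penalty)
{
  if( w->is_preferred )
    return 0;
  else if( w->has_alt_skill_penalty )
    return w->alt_skill_penalty;
  else
    return non_preferred_penalty;
}
```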

  There are two problems:

  (1) Enhancing the interface of nrc_demand.c so that it will
      accept these calls.  NB alternative skill penalties are
      actually defined by contract, so ideally we would pass
      in all the contract worker sets and their penalties,
      and then end by overriding them with preferred_ws.

  (2) Naming.  Actually we should continue to use the function
      calls to define the name, that will work well.

24 September 2019.  Finished revising the design of NRC_DEMAND,
  all documented and ready to implement.  I've implemented
  nrc_demand.c, now I need to use the new interface.

25 September 2019.  Looking into remaining semantic issues.
  
  The old NrcDemandMake accepted a penalty for non-assignment,
  a penalty for assignment, and a penalty for unpreferred, and
  implemented them all independently, so in the cases where
  there is both a penalty for assignment and a penalty for
  unpreferred, those have to be summed.  I'm doing that now
  in NrcDemandSetMakeFromBound.

  I also checked through all four converters to see if they
  seem to be doing the right thing.  They do.  So that seems
  to be nrc_demand.c all done.

26 September 2019.  I've basically finished the documentation
  of how demand constraints are converted into demand objects,
  although it all needs a careful audit, especially the last
  part.

27 September 2019.  Audited demand constraint conversion.

28 September 2019.  Finished auditing demand constraint
  conversion; it's time to implement.

  I've made a start on a demand constraint conversion
  solver, in nrc_dc_converter.c.  All the boilerplate
  is done, to the point where all the code I have to
  write now is confined to NrcDCConverterSolve.

30 September 2019.  Continuing with the implementation
  of NrcDCConverterSolve.

1 October 2019.  Implemented NrcWorkerSetComplement.

2 October 2019.  Still working on nrc_dc_converter.c.  I've done
  it all now except for adding penalizers to the generated demands.

4 October 2019.  Still working on nrc_dc_converter.c.  I've
  written the whole thing now, but it is going to need a
  very careful audit.

5 October 2019.  Auditing nrc_dc_converter.c today.

  I've had an idea for simplifying the analysis of the penalties to
  apply to the nodes of demand trees, by using addition rather than
  replacement.  It's all documented and ready to implement.  It
  involves being able to have multiple demand objects open to
  penalizer functions, so I've had to change the way demand objects
  are set up rather drastically.  That is all documented and
  implemented and I have a clean compile.  I've also tidied up
  nrc_demand.c generally.

6 October 2019.  Audited nrc_demand.c and nrc_dc_converter.c and
  their documentation today, tightened things up quite a bit.
  Ready for testing.  Will need some debug functions.  I started
  this stuff on 17 September (three weeks ago).

7 October 2019.  Wrote some debug code and I'm now testing the
  new stuff.  Fixed a couple of small bugs, and started testing
  HED01.  I'm looking at this demand:

    NW0=s1000+NA=s1000+NWRG:All=s1000+NA=s1000:1

  The first two demands in 1Sat:H (running at 1Sat3) are for this.
  The shift ID is 5.  It seems odd.  From 

    <DayOfWeekCover>
      <Day>Saturday</Day>
      <Cover><Skill>0</Skill><Shift>1</Shift></Cover>
      <Cover><Skill>0</Skill><Shift>2</Shift></Cover>
      <Cover><Skill>0</Skill><Shift>3</Shift></Cover>
      <Cover><Shift>4</Shift><Min>0</Min><Max>0</Max></Cover>
      <Cover><Shift>5</Shift><Min>4</Min><Max>4</Max></Cover>
      <Cover><Skill>0</Skill><Shift>5</Shift><Min>2</Min></Cover>
      <Cover><Shift>1</Shift><Max>0</Max></Cover>
      <Cover><Shift>2</Shift><Max>0</Max></Cover>
      <Cover><Shift>3</Shift><Max>0</Max></Cover>
    </DayOfWeekCover>

    <CoverWeights>
      <MinUnderStaffing>1000</MinUnderStaffing>
      <MaxOverStaffing>1000</MaxOverStaffing>
      <PrefOverStaffing>1</PrefOverStaffing>
      <PrefUnderStaffing>1</PrefUnderStaffing>
    </CoverWeights>

  Reducing this to elements relevant to shift 5 gives

    <DayOfWeekCover>
      <Day>Saturday</Day>
      <Cover><Shift>5</Shift><Min>4</Min><Max>4</Max></Cover>
      <Cover><Skill>0</Skill><Shift>5</Shift><Min>2</Min></Cover>
    </DayOfWeekCover>

  So we want exactly 4 workers (with cost of 1000 for each worker
  over or under), at least 2 of which have Skill 0 (with cost 1000
  for each worker over or under).
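
  As a sanity check, the cost this cover implies can be sketched as a
  small C function (CoverCost is a hypothetical helper, not part of
  NRConv; it just restates the min/max semantics above):

```c
/* cost of a cover line: assigned workers overall, of which `skilled`
   have the required skill; min/max bound the total, skill_min bounds
   the skilled workers, and p is the weight per worker over or under */
static int CoverCost(int assigned, int skilled, int min, int max,
  int skill_min, int p)
{
  int cost = 0;
  if( assigned < min )
    cost += (min - assigned) * p;   /* under-staffed overall   */
  if( assigned > max )
    cost += (assigned - max) * p;   /* over-staffed overall    */
  if( skilled < skill_min )
    cost += (skill_min - skilled) * p;  /* too few skilled     */
  return cost;
}
```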

  Altogether we're generating max + 1 demands (5 here).  They are:

    <R>NW0=s1000+NA=s1000+NWRG:All=s1000+NA=s1000:1</R>
    <R>NW0=s1000+NA=s1000+NWRG:All=s1000+NA=s1000:2</R>

      Each of these asks for a Skill 0 worker.  If unassigned, the
      penalty is 1000 for unassigned plus 1000 for not Skill 0,
      which is correct.  If assigned but not Skill 0, the penalty
      is s1000 which is correct again.  If assigned Skill 0, the
      penalty is 0 which is correct again.

    <R>NWRG:All=s1000+NA=s1000:1</R>
    <R>NWRG:All=s1000+NA=s1000:2</R>

      Each of these asks for a worker, skill not specified.  If
      unassigned, the penalty is s1000, which is correct.  If
      assigned, the penalty is 0 which is correct.

    <R>A=s1000:1</R>

      This is asking for a worker but the penalty is s1000 if
      a worker is assigned.

  We're generating max + 1 demands so this is the right number.

  NWRG:All=s1000 can never fail.  It says, if we are assigned
  a worker not from RG:All, pay s1000.  Impossible.  How did
  it sneak through?

  After fixing various things the demands I'm getting now are:

    <R>NA=s1000+NW0=s1000:1</R>
    <R>NA=s1000+NW0=s1000:2</R>
    <R>NA=s1000:1</R>
    <R>NA=s1000:2</R>
    <R>A=s1000:1</R>

  which is perfect, really.  For 1Tue we have this:

    <DayOfWeekCover>
      <Day>Tuesday</Day>
      <Cover><Skill>0</Skill><Shift>1</Shift><Min>4</Min><Max>4</Max></Cover>
      <Cover><Skill>0</Skill><Shift>2</Shift><Min>4</Min><Max>4</Max></Cover>
      <Cover><Skill>0</Skill><Shift>3</Shift><Min>2</Min><Max>2</Max></Cover>
      <Cover><Shift>4</Shift><Min>1</Min><Max>1</Max></Cover>
      <Cover><Shift>5</Shift><Min>0</Min><Max>0</Max></Cover>
      <Cover><Skill>0</Skill><Shift>5</Shift></Cover>
      <Cover><Shift>1</Shift></Cover>
      <Cover><Shift>2</Shift></Cover>
      <Cover><Shift>3</Shift></Cover>
    </DayOfWeekCover>

  For event 1Tue:M (M is shift 1) this reduces to:

    <DayOfWeekCover>
      <Day>Tuesday</Day>
      <Cover><Skill>0</Skill><Shift>1</Shift><Min>4</Min><Max>4</Max></Cover>
      <Cover><Shift>1</Shift></Cover>
    </DayOfWeekCover>

  This requests exactly 4 Skill 0 nurses but allows any number of
  other nurses.  And indeed we generate

    <R>NA=s1000+NW0=s1000:1</R>
    <R>NA=s1000+NW0=s1000:2</R>
    <R>NA=s1000+NW0=s1000:3</R>
    <R>NA=s1000+NW0=s1000:4</R>
    <R>W0=s1000:1</R>
    <R>W0=s1000:2</R>
    <R>W0=s1000:3</R>
    <R>W0=s1000:4</R>
    <R>W0=s1000:5</R>

  which offers space for all the non Skill 0 nurses but requests
  exactly 4 Skill 0 nurses.
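
  The exact-staffing case (the Saturday shift 5 cover above, which
  produced the five NA/A demands listed earlier) can be sketched as
  follows.  MakeExactCoverDemands is a hypothetical name, and this is
  only a sketch of the demand strings; the real generator is in
  nrc_dc_converter.c.

```c
#include <stdio.h>
#include <string.h>

#define MAX_DEMANDS 32
#define DEMAND_LEN  64

/* generate demands for "exactly n workers, at least k with Skill 0",
   all with weight p; returns the number of demands (always n + 1) */
static int MakeExactCoverDemands(int n, int k, int p,
  char demands[MAX_DEMANDS][DEMAND_LEN])
{
  int i, count = 0;

  /* k slots that want a Skill 0 worker: penalty p if unassigned,
     plus p more if the assigned worker lacks Skill 0 */
  for( i = 1;  i <= k;  i++ )
    snprintf(demands[count++], DEMAND_LEN, "NA=s%d+NW0=s%d:%d", p, p, i);

  /* n - k slots for workers of any skill: penalty p if unassigned */
  for( i = 1;  i <= n - k;  i++ )
    snprintf(demands[count++], DEMAND_LEN, "NA=s%d:%d", p, i);

  /* one extra slot penalizing over-staffing: penalty p if assigned */
  snprintf(demands[count++], DEMAND_LEN, "A=s%d:1", p);

  return count;
}
```

  With n = 4, k = 2, p = 1000 this reproduces the five demands shown
  above for 1Sat:H, including the final A=s1000:1 over-staffing slot.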

  Tried evaluating HED01, the old and new versions both gave 136.
  So that's hopeful.

9 October 2019.  Trying to convert all of COI.xml today.  Musa is wrong:

  COI-Musa (cost 0.00400, Converted from file Musa.Solution.175.roster.)

  Looking at 1Mon:D.  Both have 10 event resources, 3 disjoint skills.

  Old has limit resources constraints:  (min 2, max 2) Skill-NA,
  (min 2, max 2) Skill-LPN, (min 3, max 3) Skill-RN.  New has

      Role                              Assigned by soln below
      ---------------------------------------------------------------
      <R>NA=s5+NWNA=s5:1</R>            Nurse1      RN
      <R>NA=s5+NWNA=s5:2</R>            Nurse11     NA

      <R>NA=s5+NWRN=s5:1</R>            Nurse2      RN
      <R>NA=s5+NWRN=s5:2</R>            Nurse4      LPN
      <R>NA=s5+NWRN=s5:3</R>            Nurse8      NA

      <R>NA=s5+NWLPN=s5:1</R>           -
      <R>NA=s5+NWLPN=s5:2</R>           -

      <R>X:1</R>                        -
      <R>X:2</R>                        -
      <R>X:3</R>                        -
      ---------------------------------------------------------------

  with soln

    <Event Reference="1Mon:D">
      <Resources>
	<Resource Reference="Nurse1"><Role>NA=s5+NWNA=s5:1</Role></Resource>
	<Resource Reference="Nurse11"><Role>NA=s5+NWNA=s5:2</Role></Resource>
	<Resource Reference="Nurse2"><Role>NA=s5+NWRN=s5:1</Role></Resource>
	<Resource Reference="Nurse4"><Role>NA=s5+NWRN=s5:2</Role></Resource>
	<Resource Reference="Nurse8"><Role>NA=s5+NWRN=s5:3</Role></Resource>
      </Resources>
    </Event>

  The roles are correct (there are max limits but they have weight 0),
  and yes we are two resources short in old and new, but in the new
  soln the resources have been mis-assigned, leading to spurious
  violations of prefer resources constraints.

  I've revised the code for adding workers to solutions to take cost
  into account.  It's not min-cost matching but it should work better
  than the rubbish that was there before.  But look, I'm getting the
  same stuff as before:

    <Event Reference="1Mon:D">
      <Resources>
	<Resource Reference="Nurse1"><Role>NA=s5+NWNA=s5:1</Role></Resource>
	<Resource Reference="Nurse11"><Role>NA=s5+NWNA=s5:2</Role></Resource>
	<Resource Reference="Nurse2"><Role>NA=s5+NWRN=s5:1</Role></Resource>
	<Resource Reference="Nurse4"><Role>NA=s5+NWRN=s5:2</Role></Resource>
	<Resource Reference="Nurse8"><Role>NA=s5+NWRN=s5:3</Role></Resource>
      </Resources>
    </Event>

    <Event Reference="1Mon:D">
      <Resources>
	<Resource Reference="Nurse1"><Role>NA=s5+NWNA=s5:1</Role></Resource>
	<Resource Reference="Nurse11"><Role>NA=s5+NWNA=s5:2</Role></Resource>
	<Resource Reference="Nurse2"><Role>NA=s5+NWRN=s5:1</Role></Resource>
	<Resource Reference="Nurse4"><Role>NA=s5+NWRN=s5:2</Role></Resource>
	<Resource Reference="Nurse8"><Role>NA=s5+NWRN=s5:3</Role></Resource>
      </Resources>
    </Event>

  Added NrcInstanceMakeBegin and NrcInstanceMakeEnd.  Now doing the
  conversion of demand constraints to demands within NrcInstanceMakeEnd.
  And hey presto, Musa is correct!

10 October 2019.  Looking into COI-Ikegami-3.1 today.  Note that if
  a wrong assignment was the problem, the cost would be higher, but
  in fact it is lower.  The violated constraints in the old formulation
  are DemandConstraint:22A/2Wed:N (which requested exactly 3 nurses
  on each night shift) and DemandConstraint:3A/3Mon:N, which requests
  exactly one nurse from group SkillGroup-B_SS_s on each night shift.

  The problem is that COI-Ikegami-3.1's constraints seem not to
  have been converted (the roles are X:1, X:2, etc.), yet there
  are no limit resources constraints at all.  Fixed this now, and
  it has fixed COI-Ikegami-3.1.  So time to test the rest again.
  All tested and all correct now.  At last.  These files still
  contain limit resources constraints:

    COI-CHILD.xml
    COI-ERMGH.xml
    COI-ERRVH.xml
    COI-Ikegami-2.1.xml
    COI-Ikegami-3.1.1.xml
    COI-Ikegami-3.1.2.xml
    COI-Ikegami-3.1.xml
    COI-MER.xml
    COI-Ozkarahan.xml
    COI-QMC-2.xml

  Ten instances, not including BCDT or HED01.  The old list had 13:

    COI-Azaiez.xml
    COI-CHILD.xml
    COI-ERMGH.xml
    COI-ERRVH.xml
    COI-HED01.xml
    COI-Ikegami-2.1.xml
    COI-Ikegami-3.1.1.xml
    COI-Ikegami-3.1.2.xml
    COI-Ikegami-3.1.xml
    COI-MER.xml
    COI-Musa.xml
    COI-Ozkarahan.xml
    COI-QMC-2.xml

  So limit resources constraints have been removed from COI-Azaiez.xml,
  COI-HED01.xml, and COI-Musa.xml.  I've also been able to get rid of
  the special case code for COI-BCDT-Sep.xml, which was horrible stuff.

  I started this revised conversion job on 17 September, which is just
  over three weeks ago.  Time to get back to grinding down COI-HED01.

  The Curtois solution to COI-HED01 has cost 136.  KHE18x8 was
  previously producing a solution of cost around 183, in 29.6
  seconds.
  
  In COI-HED01, the weird A and M stuff can be boiled down
  to at least 4 A shifts in a row and at least 4 M shifts
  in a row, if we omit 3Tue from the list of time groups.
  Let's try that and see how it goes:

    [ "COI-HED01", 1 solution, in 0.8 secs: cost 0.00173 ]

    [ "COI-HED01", 4 threads, 8 solves, 7 distinct costs, 1.9 secs:
      0.00168 0.00172 0.00173 0.00173 0.00176 0.00184 0.00192 0.00194
    ]

  So there has been a good improvement in cost and a huge improvement
  in running time.  Here we are without the extra constraints:

    [ "COI-HED01", 1 solution, in 0.6 secs: cost 0.00182 ]

    [ "COI-HED01", 4 threads, 8 solves, 8 distinct costs, 2.1 secs:
      0.00179 0.00180 0.00182 0.00185 0.00189 0.00195 0.00196 0.00215
    ]

  which is about the same cost as before but much, much faster.

11 October 2019.  The difference between my best solution and
  the Curtois solution is

     Curtois               KHE18x8
     --------------------------------------------
     7 bad rotations       12 bad rotations (+20)
     4 at least one...     6 at least 1     (+12)
     --------------------------------------------

  Totalling 32, the difference between 136 and 168.

  Did a full run of COI, found that COI-Musa blew out from the
  optimal 175 to 178.  Needs looking into.  The others are OK,
  and ERMGH seems to have improved quite a bit, from 847 in 10
  minutes before to 809 in 9.9 minutes now, not sure why.  The
  optimal is 779 so we are as close to that as we need to be.

  COI-Musa has blown out from the optimal 175 to 178.  The
  difference is that Nurse 8 is now busy at an unavailable
  time.  There doesn't seem to be any reason why Nurse 8 could
  not give up that time and pick up another time where there
  is an unassigned shift.  The problem, I have discovered, is
  that although we have repairs of the form

    Nurse8 -> @ {S1}

  we don't have any repairs of the form

    Nurse8 -> @ {S1}, @ -> Nurse8 {S2}

  We really should have some repairs like this.

  Started work on adding double moves where r2 is NULL to
  KheDoTaskSetMoveMultiRepair.

  Started work on khe_sr_task_finder.c, and its types
  KHE_TASK_FINDER and KHE_INTERVAL.  Replaced all occurrences
  of first_index, last_index in khe_se_solvers.c by intervals,
  have clean compile of that.

12 October 2019.  Continuing to work on moving code from
  khe_se_solvers.c to khe_sr_task_finder.c.  I've deleted
  nearly all the commented-out code in khe_se_solvers.c,
  and it is now down to 7315 lines.

  Added KheTaskFinderFindTasksBefore and KheTaskFinderFindTasksAfter
  to khe_sr_task_finder.c.  These will find tasks before or after a
  given task set and compatible with it, taking its current assignment,
  domain etc. into account.  They make a good start.
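
  The interval representation used here can be sketched as a few C
  helpers.  The real type is KHE_INTERVAL in khe_sr_task_finder.c;
  the helper names below are hypothetical, and only restate the
  first_index/last_index idea replaced above.

```c
#include <stdbool.h>

/* an inclusive range of day indices, like KHE_INTERVAL */
typedef struct { int first; int last; } INTERVAL;

/* number of days covered by the interval */
static int IntervalLength(INTERVAL in)
{ return in.last - in.first + 1; }

/* true when the two intervals share at least one day */
static bool IntervalsOverlap(INTERVAL a, INTERVAL b)
{ return a.first <= b.last && b.first <= a.last; }

/* true when b starts on the day immediately after a ends, as needed
   when finding tasks after a task set that are compatible with it */
static bool IntervalImmediatelyBefore(INTERVAL a, INTERVAL b)
{ return b.first == a.last + 1; }
```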

13 October 2019.  Revised the documentation of KheTaskFinderFindTasks
  and started the implementation.

15 October 2019.  Implementing KheTaskFinderFindTasks.

16 October 2019.  Still implementing KheTaskFinderFindTasks.  Have
  clean compile of what I've done.

17 October 2019.  Still implementing KheTaskFinderFindTasks.
  Revised the documentation to say that the days that a task
  or task set is running is represented in task finding by its
  bounding interval, and that the duration is taken to be the
  length of this interval, even if that is imperfect.  And
  revised the implementation to reflect this.

18 October 2019.  Revised the documentation.  There are now 3 task
  finding functions:  KheFindMovableTasks, KheFindCompatibleTasks,
  and KheFindFreeTime.  I'm using them, and I have beautiful
  implementations of KheFindMovableTasks and KheFindCompatibleTasks.
  KheFindFreeTime is next.

18 October 2019.  Removed KheResourceTimetableMonitorTaskSetBusyType,
  it has been replaced by task finding now.  Added KheTaskWouldNotClash
  which seems to be handling the to_ts thing.

19 October 2019.  Still working on KheTaskSetMoveMultiRepair.

21 October 2019.  Still working on KheTaskSetMoveMultiRepair.
  I've started work on "widened task sets", have clean compile
  of the initial stuff which makes the core and two wings.

22 October 2019.  Working on the widened task sets documentation.

23 October 2019.  Working on the widened task sets documentation.
  It's in pretty good shape now, including a "compact swap" that
  requires to_r's tasks to be compact.

  KheWidenedTaskSetMake, KheWidenedTaskSetDelete, KhePrepareResource
  and KheWidenedTaskSetMoveCheck implemented against the new doc.

24 October 2019.  Audited yesterday's code, cleaned it up a bit
  but it was pretty good.  Implemented KheWidenedTaskSetMove,
  and audited it, again all good.  Added to_r_task_durn to
  wings and to_r_ts_total_durn to core.

25 October 2019.  Working on swapping today.  Got it all done, and
  also worked carefully over the documentation.  All good.  Started
  work on utilizing widened task sets in the ejection chain solver.
  It is going well, in fact I have done everything, down to a clean
  compile, except KheWidenedTaskSetMoveAndDoubleMoves, which
  concerns double moves, and which I have not thought about yet.

26 October 2019.  Need to get going on PrevRun and NextRun.
  Rather remarkably I've just done a major refactor of the
  task finder and widened task set code.  I have a clean
  compile of the new version, and I've audited it.  Designed
  KheFindMaximalTaskRunRight and KheFindMaximalTaskRunLeft.

27 October 2019.  Finished KheFindMaximalTaskRunRight and
  KheFindMaximalTaskRunLeft, including auditing.  Started
  work on KheWidenedTaskSetMoveAndDoubleMoves.

28 October 2019.  Implemented  KheFindTaskRunInitial and
  KheFindTaskRunFinal, and updated khe_se_solvers.c to
  use them, which basically means that the task finder
  is all written and in use in khe_se_solvers.c.  Audited
  everything, and also verified that there are no task
  finding functions remaining in khe_se_solvers.c.
  Hid type KHE_INTERVAL from the user.

29 October 2019.  I'm supposed to be testing, but instead I
  went over the task finder documentation again, and also
  did a quick reorganize of the task finder code.  Also did
  a careful audit of khe_se_solvers.c.  Started work on task
  finding on 11 October, that is 18 days ago.  But I should
  be finished in a day or two; and things are a lot better.

30 October 2019.  All audited and ready to test.  First runs
  worked without crashing and gave this:

    [ "COI-GPost", 4 threads, 8 solves, 7 distinct costs, 0.7 secs:
      0.00009 0.00010 0.00010 0.00011 0.00013 0.00014 0.00015 0.00016
    ]

  Too good to be true?  I'll keep testing.  I got a debug print of
  a failed augment and it all looks great.  I've added some debug
  code to khe_sr_task_finder.c which should help.

    [ "COI-Musa", 4 threads, 8 solves, 3 distinct costs, 0.3 secs:
      0.00178 0.00178 0.00178 0.00178 0.00178 0.00180 0.00180 0.00183
    ]

  Here we are back where we started 3 weeks ago: Nurse 8, 2Sat.
  Added forced moves and got this:

    [ "COI-Musa", 4 threads, 8 solves, 1 distinct cost, 1.1 secs:
      0.00175 0.00175 0.00175 0.00175 0.00175 0.00175 0.00175 0.00175
    ]

  It's worryingly slower but it works.  But GPost got worse:

    [ "COI-GPost", 4 threads, 8 solves, 6 distinct costs, 0.9 secs:
      0.00011 0.00013 0.00014 0.00015 0.00015 0.00015 0.00016 0.00017
    ]

  Just bad luck I suppose.  But need to keep an eye on the effect
  of forced moves.  Perhaps don't widen them?  It works for Musa
  but does this for GPost:

    [ "COI-GPost", 4 threads, 8 solves, 6 distinct costs, 1.0 secs:
      0.00013 0.00014 0.00015 0.00015 0.00015 0.00016 0.00017 0.00018
    ]

  It might be worth looking into why GPost has deteriorated here.
  The best result of 9 which I got earlier is more typical; what
  has gone wrong?

  Doing a full run, going well, somewhat slower but the results
  are slightly better.  Altogether an improvement, I think, but
  CHILD is disastrous!  Look at this:

    [ "COI-CHILD", 4 threads, 8 solves, 8 distinct, first 0.01564, 7.9 mins:
      0.00862 0.00867 0.01059 0.01063 0.01363 0.01456 0.01462 0.01564
    ]

  Previously I was getting near-optimal results, around 152.

    [ "COI-ERRVH", 4 threads, 8 solves, 8 distinct, first 0.07394, 10.2 mins:
      0.05590 0.05775 0.05892 0.06286 0.06309 0.06598 0.07394 0.07539
    ]

    [ "COI-MER", 4 threads, 8 solves, 8 distinct costs, first 0.16226, 9.3 mins:
      0.14008 0.14071 0.14242 0.14504 0.14899 0.15366 0.15784 0.16226
    ]

  Two more shockers.  Needs looking into.

31 October 2019.  Did another COI run without the new forced moves.
  It does seem to be better on the whole, but CHILD is still bad:

    [ "COI-CHILD", 4 threads, 8 solves, 8 distinct, first 0.01770, 8.5 mins:
      0.00657 0.00864 0.01261 0.01262 0.01370 0.01460 0.01767 0.01770
    ]

  Still running badly; too much exploring, apparently.  Now trying
  without double moves in the to_r == NULL case.

    [ "COI-CHILD", 1 solution, in 253.0 secs: cost 0.00977 ]

  The problem with CHILD is that some (all?) nurses require at
  most one consecutive free weekend and at most one consecutive
  busy weekend.  So this propagates along the timetable in the
  same way that profile grouping does, although it's different.

1 November 2019.  Going back to the PosSuits method, it seems to
  be the most likely reason why things have deteriorated.
  KheResourceEffectivelyFree and GatherTasks are very similar,
  I'm currently working on merging them.

2 November 2019.  Rewrote widened task sets, replacing the core
  and wing types with a single part type.  Also made a type out
  of the from and to parts of each part.

3 November 2019.  Have clean compile of the new design, and I've
  audited it carefully.  Ready to use.

4 November 2019.  Documented and implemented the revised widened
  task set functions, including withdrawing the old functions.
  Everything in khe_sr_task_finder.c is in good shape.  I am
  also using the new functions in khe_se_solvers.c, specifically
  in KheWidenedTaskSetMoveAndDoubleMoves, but there, although I
  have a clean compile, it all needs a careful audit.

5 November 2019.  Audited khe_se_solvers.c and tidied up a few
  small things, including making KhePartMovePart not report a
  duration change, since it gets it wrong anyway.  It's looking
  good and ready to test.

6 November 2019.  Testing today.

    [ "COI-GPost", 4 threads, 8 solves, 6 distinct costs, 0.8 secs:
      0.00010 0.00012 0.00012 0.00013 0.00013 0.00015 0.00023 0.00027
    ]

    [ "COI-Musa", 4 threads, 8 solves, 3 distinct costs, 0.3 secs:
      0.00178 0.00178 0.00178 0.00178 0.00178 0.00180 0.00180 0.00183
    ]

    [ "COI-CHILD", 4 threads, 8 solves, 8 distinct costs, 8.7 mins:
      0.01360 0.01363 0.01372 0.01761 0.01765 0.01966 0.02061 0.02168
    ]

  Shockers are still there.  Why?  Brought forced moves back and
  got this:

    [ "COI-Musa", 4 threads, 8 solves, 2 distinct costs, 0.4 secs:
      0.00175 0.00175 0.00175 0.00175 0.00175 0.00175 0.00175 0.00180
    ]

  The problem is that to repair that last cost 3 defect we need a
  forced move of the 2Sat task from Nurse8 to Nurse10.  Forced moves
  have helped GPost too: not the best result, but the worst here:

    [ "COI-GPost", 4 threads, 8 solves, 5 distinct costs, 0.8 secs:
      0.00010 0.00010 0.00011 0.00012 0.00012 0.00013 0.00013 0.00014
    ]

  is 14, whereas above it was 27.  That's a big difference.

    [ "COI-CHILD", 4 threads, 8 solves, 8 distinct costs, 8.2 mins:
      0.00968 0.01158 0.01365 0.01371 0.01467 0.01660 0.01768 0.02171
    ]

  CHILD has improved too, but it's still a shocker.  There are limit
  busy times and limit active intervals constraints in the KHE solution,
  with total cost 800.  There are none in the best soln.  So if I could
  get rid of all of them, that would drop the cost to 168 which is in
  the right ballpark.

  Working on a solution with this cost:

    [ "COI-CHILD", 1 solution, in 243.2 secs: cost 0.01662 ]

  Trying not grouping limit resources monitors:

    [ "COI-CHILD", 1 solution, in 228.5 secs: cost 0.02170 ]

  So that doesn't seem to be helping.  Back to grouping them.
  Fixed it!  The problem was not finding optional tasks:

    [ "COI-CHILD", 1 solution, in 236.3 secs: cost 0.00156 ]

  So now it's time for a COI run.  Going well, and look at this:

    [ "COI-ERMGH", 4 threads, 8 solves, 8 distinct, first 0.00820, 10.2 mins:
      0.00788 0.00820 0.00877 0.00890 0.00931 0.00949 0.00981 0.01039
    ]

  This is best ever, surely (optimum is only 779).  But later ones
  are not so good, including the Ikegami instances and MER.  So I'm
  investigating COI-Ikegami-3.1.xml now.

7 November 2019.  Worked out that pairs of night shifts are not
  being grouped, probably owing to them being optional.  Rather
  than ignoring optional it would be better to group non-optional
  with non-optional, and optional with optional (or something).

8 November 2019.  Designed, documented, and implemented a plan for
  handling tasks for which non-assignment does not necessarily have
  a cost when grouping.  This is to require KHE_YES when there are
  assign resources constraints, and KHE_MAYBE when there aren't.
  All implemented and ready to test.

  All the night shifts are being reported here as not requiring
  assignment:

    KheTaskSuits(2Fri:N.0{}, KHE_MAYBE) = false
    KheTaskSuits(2Fri:N.1{}, KHE_MAYBE) = false
    KheTaskSuits(2Fri:N.2{}, KHE_MAYBE) = false
    KheTaskSuits(2Fri:N.3{}, KHE_MAYBE) = false

  Why not?  Because KheLimitResourcesConstraintAddEventResource is
  never called!  How can that be?  Because it's complicated, and
  I need to quietly work through it all.  It was a bug; I've fixed
  it now, and I am getting a lot more profile grouping.  First results:

    [ "COI-Ikegami-3.1", 1 solution, in 11.2 secs: cost 0.00008 ]

    [ "COI-Ikegami-3.1", 4 threads, 8 solves, 6 distinct costs, 26.6 secs:
      0.00008 0.00008 0.00009 0.00010 0.00011 0.00012 0.00015 0.00015
    ]

  Optimum is 2, so this is a good result.  It may be my best so far on
  this instance, I looked at a few random earlier results and could
  not find anything this good.  Did a full COI run, and incredibly
  I got these results:

    [ "COI-ERMGH", 4 threads, 8 solves, 7 distinct, first 0.00886, 9.9 mins:
      0.00779 0.00781 0.00781 0.00834 0.00874 0.00886 0.00887 0.00919
    ]

    [ "COI-CHILD", 4 threads, 8 solves, 8 distinct, first 0.00255, 5.5 mins:
      0.00149 0.00151 0.00152 0.00153 0.00154 0.00250 0.00255 0.00256
    ]

  These are OPTIMAL - for the first time.  See khe19-11-08.pdf.

  I've moved on to the INRC1 instances.  I'm currently getting
  slower run times and somewhat worse results than in the past.
  I've started looking at ML02.  KHE is producing many more
  avoid unavailable times and limit active intervals defects.

  With forced moves:

    [ "INRC1-ML02", 1 solution, in 5.7 secs: cost 0.00035 ]

  Without forced moves:

    [ "INRC1-ML02", 1 solution, in 4.8 secs: cost 0.00030 ]

  It's better.  I'd better redo COI.

  Compared khe19-11-08-withforced.pdf with khe19-11-08.pdf
  (no forced moves).  Cost is virtually identical but the
  average is a bit better, and running time is consistently
  better.  So on the whole, no forced moves.

  I've put the non-forced INRC1 run in khe19-11-08.pdf.
  I need to do some analysis starting from there.

9 November 2019.  Did a full run of COI with various values
  for es_balancing_max.  Unfortunately I got the ps_soln_group
  command line argument wrong and nothing was saved, so I
  will have to redo it some time.  It takes about 3 hours.

  I've changed the debug output so that it prints NOT SAVING
  when it starts solves that won't be saved.

  Thinking about adding an "expect_inactive" flag to the time
  groups of a limit active intervals monitor:  a flagged time
  group that lies beyond the cutoff is taken to be inactive,
  and hence minimum limits may be violated.  I've documented
  some functions to add to the limit active intervals monitor,
  but not tried to implement them yet.

10 November 2019.  KheLimitActiveIntervalsMonitorSetNotBusyState
  and KheLimitActiveIntervalsMonitorClearNotBusyState are now
  implemented.  Needs an audit and test.  Have also updated the
  documentation including the section on implementing the limit
  active intervals constraint.

  Added KheClusterBusyTimesMonitorSetNotBusyState and
  KheClusterBusyTimesMonitorClearNotBusyState corresponding
  to KheLimitActiveIntervalsMonitorSetNotBusyState and
  KheLimitActiveIntervalsMonitorClearNotBusyState.  All
  documented and implemented.  Needs an audit and test.

  Found that khe_limit_active_intervals_monitor_rec was using
  an HA_ARRAY of time info objects, whereas in the cluster monitor
  they are in a C array and hence cause no memory allocations.
  Converted khe_limit_active_intervals_monitor_rec to a C array.

  Did a quick test using GPost which seems to prove that I have
  not broken anything.

  Started work on KhePropagateUnavailableTimes.  It's designed
  and documented, and a stub implementation has been compiled
  successfully.  It's in khe_sm_monitor_adjustments.c.

11 November 2019.  Working on KhePropagateUnavailableTimes.
  All written and audited and ready to test.  Done some testing,
  it seems that the right calls to SetNotBusy are being made.
  But the result is a bit worse.  I need to compare the result
  of time sweep with and without this feature.

  Without KheDoPropagateUnavailableTimes:

    [ "INRC1-ML02", 1 solution, in 3.5 secs: cost 0.00030 ]

  With KheDoPropagateUnavailableTimes:

    [ "INRC1-ML02", 1 solution, in 4.8 secs: cost 0.00037 ]

  Best of 8, without it:

    [ "INRC1-ML02", 4 threads, 8 solves, 5 distinct costs, 8.0 secs:
      0.00027 0.00027 0.00029 0.00030 0.00030 0.00031 0.00031 0.00032
    ]

  and with it:

    [ "INRC1-ML02", 4 threads, 8 solves, 4 distinct costs, 9.5 secs:
      0.00025 0.00025 0.00030 0.00032 0.00032 0.00032 0.00037 0.00037
    ]

  So actually we have improved things a bit.  Need to look into
  what we have at the end of time sweep.  So turning repair off
  we have this without KheDoPropagateUnavailableTimes:

    [ "INRC1-ML02", 1 solution, in 0.3 secs: cost 0.00042 ]

  and this with KheDoPropagateUnavailableTimes:

    [ "INRC1-ML02", 1 solution, in 0.3 secs: cost 0.00045 ]

  It's worse, but we need to see whether it did any good
  with unavailable times.  Here's the result without it:

    ------------------------------------------------------
    Avoid Unavailable Times Constraint (9 points)    	13
    Cluster Busy Times Constraint (2 points) 	   	 2
    Limit Busy Times Constraint (2 points) 	   	 6
    Limit Active Intervals Constraint (20 points) 	21
    ------------------------------------------------------
      Grand total (33 points)		 	   	42

  and here it is with it:

    ------------------------------------------------------
    Avoid Unavailable Times Constraint (6 points)    	 8
    Cluster Busy Times Constraint (2 points) 	   	 2
    Limit Busy Times Constraint (3 points) 	   	10
    Limit Active Intervals Constraint (22 points)    	25
    ------------------------------------------------------
      Grand total (33 points) 	   			45

  And we see that it has made a big difference, reducing the
  cost of avoid unavailable times constraints from 13 to 8.
  I'm taking this as conclusive evidence that the code is
  working.

  I did two full runs of INRC1, with and without monitor
  adjustment (khe19-11-11-monadj.pdf and khe19-11-11-nomonadj.pdf).
  The results are virtually indistinguishable.  I'll do a
  COI run now.

  File khe19-11-11.pdf contains a COI run with monitor
  adjustment.  The average cost is marginally worse, as is
  the run time, but there are more optimal solutions, so it
  comes out pretty even.

  On balance I think I will keep monitor adjustment, on
  the grounds that it is a sensible feature.

  Back to grinding down ML02:

    [ "INRC1-ML02", 4 threads, 8 solves, 4 distinct costs, 8.9 secs:
      0.00025 0.00025 0.00030 0.00032 0.00032 0.00032 0.00037 0.00037
    ]

  where the optimal is 18.  Not a huge difference.

12 November 2019.  Alternating between starting with the largest and
  starting with the smallest moves.  Best of 8 actually got worse:

    [ "INRC1-ML02", 4 threads, 8 solves, 4 distinct costs, 7.9 secs:
      0.00029 0.00030 0.00030 0.00030 0.00032 0.00032 0.00034 0.00034
    ] 

    [ "INRC1-ML02", 4 threads, 8 solves, 4 distinct costs, 7.9 secs:
      0.00030 0.00030 0.00031 0.00032 0.00032 0.00033 0.00033 0.00033
    ]

  compared with shortest intervals first:

    [ "INRC1-ML02", 4 threads, 8 solves, 4 distinct costs, 9.0 secs:
      0.00025 0.00025 0.00030 0.00032 0.00032 0.00032 0.00037 0.00037
    ]

  I'm struggling to improve on my best solution here; I've looked
  at several defects and they all seem to require major surgery
  to remove.  I think I will have to give up on INRC1-ML02.  So
  then, moving on to INRC1-ML01, we currently get

    [ "INRC1-ML01", 4 threads, 8 solves, 6 distinct costs, 19.0 secs:
      0.00175 0.00175 0.00180 0.00187 0.00189 0.00193 0.00203 0.00203
    ]

  whereas the best solution has cost 157.  But once again, it's hard
  to see any way to remove the current defects.

  Did a COI run to make sure everything is still OK, and I will
  follow that with a test of different values of balancing_max.
  The COI results are turning out very well.

  KHE18-COI-aspects6.xml completed, containing different
  balancing_maxes.

13 November 2019.  Analysing KHE18-COI-aspects6.xml.  BM2 has
  the smallest average, but BM4 is better than BM2 in most cases;
  the average is swayed by an unlucky (?) result for COI-MER.
  Beyond BM4 there is no clear pattern, each setting has its
  lucky and unlucky results.  Average running time increases
  as balancing_max increases, but only by a few seconds so it's
  not critical.

  In short, one could justify reducing to BM4 but there is
  very little in it.

  Started work on INRC2, using instance INRC2-4-030-1-6291 to
  begin with.  Initial results:

    [ "INRC2-4-030-1-6291", 1 solution, in 16.0 secs: cost 0.02105 ]

    [ "INRC2-4-030-1-6291", 4 threads, 8 solves, 5 distinct costs, 44.4 secs:
      0.02020 0.02050 0.02105 0.02110 0.02110 0.02115 0.02115 0.02115
    ]

  Previous runs of mine were getting 2130, so there has been some
  improvement.  But the LOR17 result is 1695, and even Schaerf got
  1700, so there is a fair way to go.  This is how LOR17 does it:

    --------------------------------------------------------
    Assign Resource Constraint (15 points) 	   	 450
    Avoid Unavailable Times Constraint (3 points)    	  30
    Cluster Busy Times Constraint (19 points) 	   	 960
    Limit Active Intervals Constraint (11 points)    	 255
    --------------------------------------------------------
      Grand total (48 points) 	   			1695

  And this is how KHE18x8 does it:

    --------------------------------------------------------
    Assign Resource Constraint (13 points) 	   	 390
    Avoid Unavailable Times Constraint (5 points)    	  60
    Cluster Busy Times Constraint (25 points) 	   	1120
    Limit Active Intervals Constraint (20 points)    	 450
    --------------------------------------------------------
      Grand total (63 points) 	   			2020

  So there is no clear inferiority to work on.
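
  An aside on the cost format: figures like 0.02020 and 2020 are
  the same cost, because the printed value combines hard and soft
  cost.  Sketched with an illustrative helper (not the real KheCost
  type), the printed value is hard + soft / 100000, so 0.02020
  means no hard violations and soft cost 2020:

```c
#include <assert.h>

/* Illustrative helper (not the real KheCost type): combined cost is
   hard + soft / 100000, so a printed cost of 0.02020 means no hard
   violations and soft cost 2020. */
typedef struct { long hard; long soft; } cost_t;

double combined_cost(cost_t c)
{
  return c.hard + c.soft / 100000.0;
}
```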

  I've looked into a couple of defects, it's a tangle.  Schaerf
  got his 1700 result by swapping timetables over (up to) 20
  consecutive days.  Food for thought.

  Here's a run with balancing_max = 24.  It does no good,
  because there just aren't that many runs.

    [ "INRC2-4-030-1-6291", 4 threads, 8 solves, 5 distinct costs, 45.4 secs:
      0.02020 0.02050 0.02105 0.02110 0.02110 0.02115 0.02115 0.02115
    ]

15 November 2019.  Started work on resource run repair, in file
  khe_sr_resource_pair_run.c.  Just boilerplate done so far.

16 November 2019.  Working on resource run repair.  Going well,
  I have done and audited everything except the actual solve
  for two resources.

18 November 2019.  Working on resource run repair.  Decided that
  I need component objects, so I'm starting in on those today.
  Just written KheRunResourceFindComponents, have clean compile
  but needs an audit.

19 November 2019.  Working on resource run repair.  Have a
  complete implementation now, with debug code.  Done one
  audit, needs another, then it will be ready for testing.

20 November 2019.  First results from resource run repair.  It
  found one improvement, which turned out to be a complete swap
  of the timetables of HN_0 and HN_1, with a cost saving of 10:

    [ "INRC2-4-030-1-6291", 1 solution, in 15.0 secs: cost 0.02105 ]

    [ "INRC2-4-030-1-6291", 4 threads, 8 solves, 6 distinct costs, 43.8 secs:
      0.02010 0.02050 0.02095 0.02095 0.02105 0.02110 0.02110 0.02115
    ]

  So there has in fact been an improvement over my previous best,
  but only by 10.  The run time seems slightly better, not sure why.

  Did another run and this time CT_19 and CT_21 were completely
  swapped.  But I have debug output that seems to show clearly
  enough that all combinations of assignments to components are
  being tried.  Tried splitting runs, it made no difference:

    [ "INRC2-4-030-1-6291", 1 solution, in 15.1 secs: cost 0.02105 ]

    [ "INRC2-4-030-1-6291", 4 threads, 8 solves, 6 distinct costs, 46.2 secs:
      0.02010 0.02040 0.02095 0.02095 0.02105 0.02110 0.02110 0.02115
    ]

  But I really should split components, not runs.
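
  The exhaustive search over component assignments can be sketched
  like this (a toy stand-in for the real solve: bit i of a mask
  says whether component i is swapped between the two resources,
  and the cost table stands in for re-evaluating the solution):

```c
#include <assert.h>

/* Try all 2^n assignments of n components between a resource pair.
   cost[mask] is the (stand-in) solution cost when exactly the
   components in mask are swapped; the cheapest mask wins. */
int best_component_mask(int n, const int *cost, unsigned *best_mask)
{
  int best = cost[0];
  *best_mask = 0;
  for (unsigned mask = 1; mask < (1u << n); mask++)
    if (cost[mask] < best) {
      best = cost[mask];
      *best_mask = mask;
    }
  return best;
}
```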

22 November 2019.  Thinking about better lookahead in time sweep.
  I already have the rs_time_sweep_lookahead option; setting that
  to 4 gave this:

    [ "INRC2-4-030-1-6291", 1 solution, in 36.7 secs: cost 0.02130 ]

  which is actually inferior to the 2105 I got above.  Other values:

     5: [ "INRC2-4-030-1-6291", 1 solution, in 26.1 secs: cost 0.02110 ]
     6: [ "INRC2-4-030-1-6291", 1 solution, in 42.7 secs: cost 0.02065 ]
     7: [ "INRC2-4-030-1-6291", 1 solution, in 41.2 secs: cost 0.02095 ]
     8: [ "INRC2-4-030-1-6291", 1 solution, in 53.2 secs: cost 0.02065 ]
     9: [ "INRC2-4-030-1-6291", 1 solution, in 52.1 secs: cost 0.02065 ]
    12: [ "INRC2-4-030-1-6291", 1 solution, in 53.5 secs: cost 0.02065 ]

  I get the same result here even without time limits.  Best of 8 with
  rs_time_sweep_lookahead=8:

    [ "INRC2-4-030-1-6291", 4 threads, 8 solves, 8 distinct costs, 121.0 secs:
      0.02020 0.02050 0.02065 0.02080 0.02090 0.02120 0.02145 0.02200
    ]

  Amazingly, it's worse than best of 8 with no lookahead at all:

    [ "INRC2-4-030-1-6291", 4 threads, 8 solves, 6 distinct costs, 86.0 secs:
      0.02010 0.02040 0.02095 0.02095 0.02105 0.02110 0.02110 0.02115
    ]

  Surely then this debunks the whole idea that lookahead helps?  Best of 32:

    [ "INRC2-4-030-1-6291", 4 threads, 32 solves, 19 distinct costs, 240.8 secs:
      0.01950 0.01990 0.02000 0.02005 0.02005 0.02010 0.02015 0.02025
      0.02025 0.02040 0.02050 0.02055 0.02080 0.02090 0.02090 0.02095
      0.02095 0.02095 0.02095 0.02095 0.02100 0.02105 0.02105 0.02105
      0.02105 0.02110 0.02110 0.02110 0.02115 0.02115 0.02120 0.02150
    ]

  Best of 64:

    [ "INRC2-4-030-1-6291", 4 threads, 64 solves, 25 distinct costs, 8.3 mins:
      0.01950 0.01965 0.01990 0.01990 0.02000 0.02005 0.02005 0.02005
      0.02005 0.02010 0.02015 0.02015 0.02015 0.02015 0.02015 0.02025
      0.02025 0.02025 0.02030 0.02040 0.02045 0.02050 0.02050 0.02055
      0.02055 0.02065 0.02075 0.02080 0.02080 0.02080 0.02085 0.02090
      0.02090 0.02090 0.02090 0.02095 0.02095 0.02095 0.02095 0.02095
      0.02095 0.02095 0.02095 0.02095 0.02095 0.02095 0.02100 0.02100
      0.02105 0.02105 0.02105 0.02105 0.02105 0.02105 0.02105 0.02110
      0.02110 0.02110 0.02110 0.02110 0.02115 0.02115 0.02120 0.02150
    ]

  Still a long way above 1695.  Best of 256 is also 1950:

    [ "INRC2-4-030-1-6291", 4 threads, 256 solves, 38 distinct costs, 34.4 mins:
      0.01950 0.01960 0.01965 0.01980 0.01980 0.01980 0.01980 0.01990
      0.01990 0.01990 0.01990 0.01990 0.01995 0.01995 0.01995 0.01995
      0.01995 0.01995 0.02000 0.02000 0.02000 0.02000 0.02005 0.02005
      ...
      0.02135 0.02150 0.02150 0.02150 0.02165 0.02180 0.02215 0.02225
    ]

23 November 2019.  Things are getting desperate.  I'm trying an
  ejection chain repair after every day:

    [ "INRC2-4-030-1-6291", 1 solution, in 149.3 secs: cost 0.02065 ]

    [ "INRC2-4-030-1-6291", 4 threads, 8 solves, 8 distinct costs, 5.7 mins:
      0.01945 0.01960 0.01970 0.02005 0.02030 0.02035 0.02045 0.02070
    ]

  This is a new best but the cost in run time is not reasonable:
  we can get down to 1950 in 4 minutes using best of 32 (see above).
  But we are still a long way above 1695.

28 November 2019.  Spent a few days on refereeing a paper and other
  jobs.  Today I plan to knuckle down and start implementing resource
  type partitioning.

  I've converted khe_resource_set.c and khe_resource_group.c from
  resource type indexes to instance indexes.  I've also done all the
  boilerplate of adding a resource_type_partitions parameter alongside
  infer_resource_partitions in KheInstanceMakeEnd and in the archive
  read functions, and calling KheResourceTypeDoResourceTypePartitioning
  from KheInstanceMakeEnd.

30 November 2019.  Working on KheResourceTypeDoResourceTypePartitioning
  today.  I've written the code that merges resources, and got this:
  
    MakeResourceType(HN_0) from {HN_0, HN_1, HN_2, HN_3, NU_4, NU_5,
      NU_6, NU_7, NU_8, NU_9, NU_10, NU_11, NU_12, NU_13, NU_14, NU_15,
      NU_16, CT_17, CT_18, CT_19, CT_20, CT_21, CT_22, CT_23, CT_24}
    MakeResourceType(TR_25) from {TR_25, TR_26, TR_27, TR_28, TR_29}

  which is just what I wanted.  Indeed I'm now generating new
  resource types (just Nurse:TR_25 here), but not yet moving
  resources to them.

1 December 2019.  Implemented KheResourceMoveToResourceType, so
  I've now moved all the resources to their new types.

2 December 2019.  Audited KheResourceMoveToResourceType.  Also
  written code to move resource groups to their new resource
  types, and to reset the resource_of_type fields in the four
  constraints that have them.

3 December 2019.  Implemented moving of the resource types of
  event resources.  Also keeping resource types' full resource
  groups up to date now as resources are added and deleted.

4 December 2019.  Testing resource type partitioning today.
  First results:

    [ "INRC2-4-030-1-6291", 1 solution, in 17.0 secs: cost 0.02575 ]

    [ "INRC2-4-030-1-6291", 4 threads, 8 solves, 8 distinct costs, 46.0 secs:
      0.02240 0.02280 0.02300 0.02305 0.02335 0.02340 0.02385 0.02400
    ]

    [ "INRC2-4-030-1-6291", 4 threads, 32 solves, 24 distinct costs, 150.7 secs:
      0.02210 0.02220 0.02230 0.02240 0.02240 0.02245 0.02270 0.02275
      0.02280 0.02290 0.02300 0.02300 0.02305 0.02305 0.02310 0.02335
      0.02335 0.02340 0.02340 0.02345 0.02345 0.02365 0.02365 0.02370
      0.02380 0.02385 0.02390 0.02400 0.02405 0.02405 0.02410 0.02480
    ]

  This is inferior to what I've had before:

    [ "INRC2-4-030-1-6291", 1 solution, in 15.0 secs: cost 0.02105 ]

    [ "INRC2-4-030-1-6291", 4 threads, 8 solves, 6 distinct costs, 43.8 secs:
      0.02010 0.02050 0.02095 0.02095 0.02105 0.02110 0.02110 0.02115
    ]

  So it's back to grinding down INRC2-4-030-1-6291.  Best of 8
  after turning off grouping by resource constraints:

    [ "INRC2-4-030-1-6291", 4 threads, 8 solves, 7 distinct costs, 86.3 secs:
      0.02310 0.02310 0.02320 0.02350 0.02380 0.02405 0.02415 0.02475
    ]

  So turning off grouping is not the answer.

7 December 2019.  Various distractions have kept me away for a few
  days.  Hopefully they're past now and I can get some work done.

  KheProfile seems to think that the time groups immediately
  following the ones we want are ruled out.  That's odd, we
  only rule them out beyond Cmax.  Fixed now.

  Found that not all nurses are caretakers; two of the head nurses
  are head nurses and nurses but not caretakers (HN_2 and HN_3).  So
  profile grouping cannot group Nurse tasks with Caretaker tasks.
  Perhaps it should be able to, given that there are only two
  exceptions.  But then, what domain would those grouped tasks have?

  Now that N cannot be grouped with C, we have 3 1Fri:Day
  tasks (HeadNurse, Nurse, Nurse) which can only be grouped
  with two tasks on adjacent days (Nurse, Nurse).  But 

     forward profile grouping:  1Thu:Day.8{1Fri:Day.12{}} (Caretaker)
     forward profile grouping:  1Fri:Day.0{1Sat:Day.3{}} (HeadNurse)/Nurse
     backward profile grouping: 1Thu:Day.3{1Fri:Day.5{}} (Nurse)
     backward profile grouping: 1Thu:Day.9{1Fri:Day.13{}} (Caretaker)

  There should be another 1Fri:Day + 1Sat:Day of Caretaker.  The
  problem is that we have 3 Caretaker on Thu and 3 Caretaker on Fri,
  so there is no incentive to start a Caretaker sequence on Fri.

  Investigating 1Thu:Day (4), 1Fri:Day (6), 1Sat:Day (3)

    <Event Id="1Thu:Day">
      <R>A=h1:P-Nurse=h1:1</R>            4
      <R>A=h1:P-Caretaker=h1:1</R>        1
      <R>A=h1:P-Caretaker=h1:2</R>        5
      <R>A=s30:P-Caretaker=h1:1</R>
    </Event>

    <Event Id="1Fri:Day">
	<R>A=h1:P-HeadNurse=h1:1</R>      2
	<R>A=h1:P-Nurse=h1:1</R>          4
	<R>A=s30:P-Nurse=h1:1</R>
	<R>A=h1:P-Caretaker=h1:1</R>      1
	<R>A=h1:P-Caretaker=h1:2</R>      5
	<R>A=h1:P-Caretaker=h1:3</R>      3
    </Event>

    <Event Id="1Sat:Day">
	<R>A=h1:P-Nurse=h1:1</R>          2
	<R>A=h1:P-Caretaker=h1:1</R>      3
	<R>A=h1:P-Caretaker=h1:2</R>
    </Event>

  I've fiddled things to make one extra group when the total wants
  it but the per-skill total is 0.  The result is

    [ "INRC2-4-030-1-6291", 1 solution, in 13.1 secs: cost 0.02260 ]

  The groups I'm getting around 1Fri:Day now are

    1 forward profile grouping: 1Thu:Day.8{1Fri:Day.12{}} (Caretaker)
    2 forward profile grouping: 1Fri:Day.0{1Sat:Day.3{}} (HeadNurse)
    3 forward profile grouping: 1Fri:Day.13{1Sat:Day.8{}} (Caretaker)
    4 backward profile grouping: 1Thu:Day.3{1Fri:Day.5{}} (Nurse)
    5 backward profile grouping: 1Thu:Day.9{1Fri:Day.14{}} (Caretaker)

  which is about right.

9 December 2019.  Best of 8 with the new grouping:

  [ "INRC2-4-030-1-6291", 4 threads, 8 solves, 8 distinct costs, 36.0 secs:
      0.02230 0.02245 0.02260 0.02270 0.02300 0.02315 0.02350 0.02375
  ]

  Best of 8 without the new grouping:

  [ "INRC2-4-030-1-6291", 4 threads, 8 solves, 7 distinct costs, 42.4 secs:
    0.02225 0.02255 0.02255 0.02265 0.02270 0.02290 0.02300 0.02335
  ]

  So the new grouping is producing slightly worse results quite a
  lot faster.  Stick with it I think.  Best was diversifier 3:

    Summary 						Inf. 	 Obj
    ----------------------------------------------------------------
    Assign Resource Constraint (20 points) 	   	         600
    Avoid Unavailable Times Constraint (5 points)    	          60
    Cluster Busy Times Constraint (25 points) 	   	        1120
    Limit Active Intervals Constraint (20 points)    	         450
    ----------------------------------------------------------------
      Grand total (70 points) 	   			        2230

  Every category here is significantly worse than in the 1695 solution.
  I need to grind it down starting from here.

10 December 2019.  Grinding down INRC2-4-030-1-6291.  I've been
  looking at what construction alone is doing.  It's assigning
  plenty of tasks.  The big problem is that many of the limit
  active intervals constraints are violated during construction.

  Trying ungrouped with lookahead=4:

    [ "INRC2-4-030-1-6291", 1 solution, in 36.6 secs: cost 0.02535 ]

  Slow and poor.  Actually lookahead does not work that well because
  it assumes that everyone can get anything they need.

10 December 2019.  Grinding down INRC2-4-030-1-6291.  It would be
  good to get a better initial solution from time sweep than I am
  getting now.  Here are all the Trainee groupings:

    1Tue:Early.19{1Wed:Early.21{}} (Trainee)
    1Sat:Early.11{1Sun:Early.13{}} (Trainee)
    3Thu:Early.15{3Fri:Early.17{}} (Trainee)
    4Mon:Early.19{4Tue:Early.19{}} (Trainee)
    4Thu:Early.17{4Fri:Early.19{}} (Trainee)
    3Thu:Early.15{3Fri:Early.17{}, 3Sat:Early.13{}} (Trainee)
    2Mon:Early.17{2Tue:Early.15{}} (Trainee)
    1Sat:Day.15{1Sun:Day.13{}} (Trainee)
    2Wed:Day.17{2Thu:Day.21{}} (Trainee)
    3Sun:Day.15{4Mon:Day.17{}} (Trainee)
    4Wed:Day.19{4Thu:Day.15{}} (Trainee)
    4Fri:Day.15{4Sat:Day.13{}} (Trainee)
    3Mon:Day.21{3Tue:Day.21{}} (Trainee)
    1Sat:Day.15{1Sun:Day.13{}, 2Mon:Day.17{}} (Trainee)
    1Tue:Late.17{1Wed:Late.19{}} (Trainee)
    2Thu:Late.17{2Fri:Late.17{}} (Trainee)
    3Sun:Late.13{4Mon:Late.19{}} (Trainee)
    4Fri:Late.17{4Sat:Late.13{}} (Trainee)
    1Fri:Late.17{1Sat:Late.15{}} (Trainee)
    1Tue:Night.21{1Wed:Night.15{}, 1Thu:Night.15{}} (Trainee)
    3Mon:Night.17{3Tue:Night.17{}, 3Wed:Night.19{}} (Trainee)
    3Mon:Night.17{3Tue:Night.17{}, 3Wed:Night.19{}, 3Thu:Night.13{}, 3Fri:Night.19{}} (Trainee)

14 December 2019.  Taking a look at why HN_0 ends up with a singleton
  run on 3Wed:

           E  D  L  N  Tot
    ----------------------
    3Tue   4  7  5  4   20
    3Wed   5  6  5  5   22
    3Thu   3  4  4  2   13
    ----------------------

  There is no absolute need for this, the singleton could be moved
  to NU_14 or HN_2; it's just that these already have long sequences
  of consecutive days.

15 December 2019.  Still puzzling over INRC2-4-030-1-6291.  There is
  a lot of room for improvement in the initial solution.  It would
  be good to get a clearer idea about why it is so bad.

  Got sidetracked into cleaning up the background colour and font
  selection code in src_hseval/timetable.c.  It's better now although
  there may be more to do if I do an audit of the whole file.

16 December 2019.  Done a careful audit of src_hseval/timetable.c.

17 December 2019.  Did a bit more fiddling with src_hseval/timetable.c,
  it is a bit cleaner and is now consistent across individual and
  planning timetables.  All implemented and tested.
  
  Back to grinding down INRC2-4-030-1-6291.  This is what I have right now:

    [ "INRC2-4-030-1-6291", 4 threads, 8 solves, 7 distinct costs, 40.4 secs:
      0.02225 0.02255 0.02255 0.02265 0.02270 0.02290 0.02300 0.02335
    ]

  I was often getting 2010 before, so something has gone wrong.  But
  it seems to have gone wrong from the moment I started resource type
  partitioning, which I can't really give up.  Presumably I am getting
  more grouping by resource constraints, and the groups are not helping.

  Initial solution from time sweep, with grouping by resource constraints:

    [ "INRC2-4-030-1-6291", 1 solution, in 0.2 secs: cost 0.03515 ]

  And without grouping by resource constraints:

    [ "INRC2-4-030-1-6291", 1 solution, in 0.2 secs: cost 0.03735 ]

  And here is 8 complete solves without grouping by resource constraints:

    [ "INRC2-4-030-1-6291", 4 threads, 8 solves, 7 distinct costs, 84.7 secs:
      0.02310 0.02310 0.02320 0.02350 0.02380 0.02405 0.02415 0.02475
    ]

  Very slow, and the costs are higher than with grouping.  So it looks
  as though I just have to struggle on.

  Off-site backup done today.

    1Mon       Early   Day   Late   Night   Total
    ---------------------------------------------
    HeadNurse      1     0      0       1       2
    Nurse          1     1      1       1       4
    Caretaker      3     2      4       3      12
    ---------------------------------------------
    Total          5     3      5       5      18

    1Tue       Early   Day   Late   Night   Total
    ---------------------------------------------
    HeadNurse      1     0      0       0       1
    Nurse          1     1      1       2       5
    Caretaker      3     4      3       4      14
    ---------------------------------------------
    Total          5     5      4       6      20

  Why not

     {1Mon:Night}     - --> NU_15
     {1Mon:Day}   NU_15 --> NU_5 (or NU_16)

  and indeed what has happened to rematching at this one time?

    [ +WidenedTaskSetMove(@ {1Mon:Night.11{}} ---> NU_15 {})
      new defect 0.00000 -> 0.00030: [ A1 06614 Constraint:17/NU_15 
      new defect 0.00000 -> 1.00000: [ A1 06089 Constraint:4/NU_15 
      failure: sub-defect Constraint:4/NU_15 1.00000 cannot fix 
    ]

  Constraint:4 is single assignment per day, Constraint:17 is
  MinConsecutiveSameShiftDays for Night shifts.  The min value
  is 3.  The min value for Day is 2.  So replacing a Day by a
  Night has a cost of 15.

  But that is not really the point.  Why was this move tried at
  all, when NU_15 is busy then?  There should have been an unassignment;
  the debug should be something like

    +WidenedTaskSetMove(@ {1Mon:Night.11{}} ---> NU_15 {1Mon:Day.3{}})

  as indeed the 1.00000 defect in Constraint:4/NU_15 proves.

22 December 2019.  Various odd things have kept me away from work for
  several days.  Back at work today.  I've discovered that the cause
  of the 17 December 2019 problem is that the common frame contains
  one time group for each time, not one for each day.  Fixed that now;
  it was caused by resource type partitioning confusing the frame code
  into thinking that there was no non-trivial frame.  First results:

    [ "INRC2-4-030-1-6291", 1 solution, in 34.6 secs: cost 0.02015 ]

    [ "INRC2-4-030-1-6291", 4 threads, 8 solves, 7 distinct costs, 60.0 secs:
      0.01920 0.01980 0.01980 0.02015 0.02025 0.02080 0.02150 0.02160
    ]

  This is a new best, in both cost and run time.  Problem solved.

  Did an INRC2-4 run.  Previously, the average cost was 2813 and
  the average run time was 216.2 seconds; now the average cost is
  2610 and the average run time is 216.3 seconds.  The average LOR17
  cost is 2090, so I am still a fair way away from that.  Now I
  should go back to grinding down INRC2-4-030-1-6291.

  Here we go with SPLIT_RUNS changed from 1 to 0 in pair run asst:

    [ "INRC2-4-030-1-6291", 4 threads, 8 solves, 8 distinct costs, 33.3 secs:
      0.01880 0.01965 0.01980 0.02045 0.02065 0.02090 0.02135 0.02150
    ]

  It's better, isn't that amazing?  Something to do with no longer
  exceeding the maximum number of nodes?  And look at the run time!
  Time for another INRC2-4 run.  Actually the results are coming
  out somewhat variable, except that run time is always better:
  average cost 2614, average run time 167.4 seconds.  So the
  average cost is higher (but almost undetectably so) without
  SPLIT_RUNS, and the run time is 23% lower.

24 December 2019.  Implemented TRY_UNASSIGNMENTS, ready to test.
  Without TRY_UNASSIGNMENTS:

    [ "INRC2-4-030-1-6291", 1 solution, in 15.9 secs: cost 0.01965 ]

    [ "INRC2-4-030-1-6291", 4 threads, 8 solves, 8 distinct costs, 35.6 secs:
      0.01880 0.01965 0.01980 0.02045 0.02065 0.02090 0.02135 0.02150
    ]

  and with TRY_UNASSIGNMENTS:

    [ "INRC2-4-030-1-6291", 1 solution, in 27.3 secs: cost 0.01965 ]

    [ "INRC2-4-030-1-6291", 4 threads, 8 solves, 8 distinct costs, 61.1 secs:
      0.01880 0.01965 0.01980 0.02045 0.02065 0.02085 0.02090 0.02150
    ]

  Much slower and no better.  I guess this means that it tries a lot of
  things that actually do no good.  So scrap TRY_UNASSIGNMENTS; back to

    [ "INRC2-4-030-1-6291", 4 threads, 8 solves, 8 distinct costs, 35.3 secs:
      0.01880 0.01965 0.01980 0.02045 0.02065 0.02090 0.02135 0.02150
    ]

  Best of 32 is

    [ "INRC2-4-030-1-6291", 4 threads, 32 solves, 26 distinct costs, 154.1 secs:
      0.01880 0.01920 0.01945 0.01950 0.01965 0.01965 0.01980 0.01980
      0.02000 0.02000 0.02010 0.02015 0.02020 0.02030 0.02035 0.02035
      0.02045 0.02050 0.02060 0.02065 0.02075 0.02075 0.02075 0.02080
      0.02085 0.02090 0.02095 0.02115 0.02135 0.02140 0.02150 0.02205
    ]

  Decided to check back with KHE18x8 on COI.  Its average cost was 713
  and its average running time was 82.5 seconds.  Now its average cost
  is 695 and its average running time is 81.5 seconds.  So there has
  been a small improvement, thank heavens.

  Now I'm trying KHE18x8 on INRC1.  Previously the results were cost
  180 and time 113.3 seconds for large and medium, and cost 82 and
  time 3.2 seconds for Sprint.  The results now are cost 179 and
  time 114.7 seconds for large and medium, and cost 82 and time
  3.3 seconds for Sprint.  So there is no discernible change.

  Now I'm trying KHE18x8 on CQ14.  There are no average results for
  CQ14s, so I'm just eyeballing the old and new results to get a
  feeling for how things went.  Program crashed on CQ14-06.

25 December 2019.  Starting work on yesterday's bug.  The instance
  is CQ14-06, the error is in the exploded file too:

    CQ14-06.xml:6462:15: <Resource> of type "Nurse:B" where type \
      "Nurse" expected

  Fixed the bug, although the fix is a bit of a patch.  Also made
  some changes to when a new resource type is made, to ensure that
  there are no empty resource types left behind.

  Doing a full CQ14 run.  Costs are mostly improved, with a few worse
  ones.  Running times are up and down.  Core dump on instance 24.

26 December 2019.  The core dump occurred when handling zones in meets.
  It has occurred to me that the zones arrays of cycle meets could get
  very large indeed in instance 24.  So I am changing the zones array
  so that NULL suffixes are omitted, which will mean that when zones
  are not used all zones arrays will be empty.  All done, doit8 gives

    [ "CQ14-24", 4 threads, 8 solves, 8 distinct costs, 9.5 mins:
      444.99999 470.99999 477.99999 488.99999
      611.99999 642.99999 644.99999 695.99999
    ]

  Of course it was never going to find a feasible solution, but at
  least it is not chewing up a ridiculous amount of memory now.  32
  times per day, 7 days per week, 52 weeks comes to 32 * 7 * 52 =
  11648 times, and half its square is 67,837,952, which is a huge
  amount of memory to waste on something that is not being used.
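
  The arithmetic is easy to check mechanically (hypothetical helper
  name; the half-square figure is the approximate total of zones
  array entries if every cycle meet kept an array reaching to the
  end of the cycle):

```c
#include <assert.h>

/* With t times per day, d days per week and w weeks there are
   n = t * d * w cycle-meet times; if each cycle meet kept a zones
   array reaching to the end of the cycle, the entry total would be
   roughly half of n squared. */
long wasted_zone_entries(long times_per_day, long days, long weeks)
{
  long n = times_per_day * days * weeks;
  return n * n / 2;
}
```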

  Did a full CQ14 run, it all worked.  Costs were up and down,
  about the same on the whole, run times marginally slower.  So
  no remarkable improvements but should be reliable now that we
  are using so much less memory.

  CQ14-05 and CQ14-06 would make good tests, but I should do more
  work on INRC2-4-030-1-6291 before moving on.  It's currently at

    [ "INRC2-4-030-1-6291", 4 threads, 8 solves, 8 distinct costs, 35.2 secs:
      0.01880 0.01965 0.01980 0.02045 0.02065 0.02090 0.02135 0.02150
    ]

  where the LOR17 result has cost 1695.

27 December 2019.  Working on khe_trace.c.  Updated it to use a single
  array of monitor info records, rather than two arrays, one of monitors
  and a parallel array of init costs, and added KheTraceMonitorCostIncrease
  and KheTraceReduceByCostIncrease.  Also now using them in ejector
  code.  First results:

    [ "COI-GPost", 1 solution, in 0.1 secs: cost 0.00009 ]

    [ "COI-GPost", 4 threads, 8 solves, 4 distinct costs, 0.8 secs:
      0.00009 0.00012 0.00013 0.00013 0.00013 0.00015 0.00015 0.00015
    ]

  These are marginally better than we were getting before.

    [ "INRC2-4-030-1-6291", 1 solution, in 3.9 secs: cost 0.01900 ]

    [ "INRC2-4-030-1-6291", 4 threads, 8 solves, 8 distinct costs, 15.5 secs:
      0.01900 0.01950 0.01995 0.02050 0.02065 0.02090 0.02105 0.02140
    ]

  Not quite as good as the 1880 from before, but that was an outlier
  anyway - and look at the running time, it's less than half.

  Time for some longer runs:

    Archive    Av cost before  Av cost after  Av time before  Av time after
    -----------------------------------------------------------------------
    COI             695             689            81.1            78.2
    INRC1-LM        179             176           114.7            92.6
    INRC1-S          82              81             3.3             2.4
    INRC2-4        2614            2585           167.4           103.3
    -----------------------------------------------------------------------

  So although some individual results are worse, every average has
  improved:  all four archives, both cost and running time.  Wow.
  You don't often see that.

28 December 2019.  Documented the revised ejection chains code in
  doc_khe, and confirmed that it agrees with the implementation.
  Also revised khe18.tex slightly.

  Optional tasks don't seem to be useful when reassigning over
  multiple days.  Each consumes an entire resource.  But then
  they probably won't get assigned anyway.

  Would it be enough to mark monitors visited, and not allow
  revisiting of any monitor?  That could be done entirely behind
  the scenes and would greatly simplify augment functions.  It's
  a fascinating idea, but would it be too lax?  We might find
  ourselves moving the same task in different directions on
  the one path.  Have to try it, otherwise we'll never know.
  But I've decided against trying it, because when there are
  equivalent tasks, visiting monitors would allow all of them
  to be tried.  Perhaps later when equivalent tasks turn into
  multi event resources.  Actually we already have code for
  avoiding equivalent tasks.  Let's do it and see how it goes.
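  The behind-the-scenes marking could use the usual visit-number
  idiom, so that starting a new chain costs one increment and nothing
  ever needs to be unmarked.  A minimal sketch, with hypothetical
  names rather than KHE's actual interface:

```c
#include <assert.h>

/* Sketch of the idea (hypothetical names, not KHE's real interface):
   each monitor carries a visit number, and an augment function skips
   any monitor already visited on the current chain.  Starting a new
   chain just increments the global number, so nothing is unmarked. */
typedef struct monitor_rec {
  int visit_num;   /* last chain on which this monitor was visited */
  /* ... real monitor state would follow ... */
} MONITOR;

static int current_chain = 0;

static void StartNewChain(void)
{
  current_chain++;
}

static int MonitorVisited(MONITOR *m)
{
  return m->visit_num == current_chain;
}

static void MonitorSetVisited(MONITOR *m)
{
  m->visit_num = current_chain;
}
```

  Marking a monitor on one chain leaves it unmarked for the next chain
  automatically, which is what makes this essentially free.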

29 December 2019.  First results with visiting monitors but not tasks:

    [ "COI-GPost", 4 threads, 8 solves, 5 distinct costs, 0.8 secs:
      0.00012 0.00013 0.00013 0.00013 0.00015 0.00015 0.00016 0.00020
    ]

  A bit worse than before (cost 9 in 0.8 seconds).  INRC2-4-030-1-6291:

    [ "INRC2-4-030-1-6291", 1 solution, in 12.6 secs: cost 0.01880 ]

    [ "INRC2-4-030-1-6291", 4 threads, 8 solves, 7 distinct costs, 25.2 secs:
      0.01835 0.01880 0.01940 0.01950 0.01980 0.01980 0.01995 0.02050
    ]

  Previous best was 1900 in 16.6 seconds, so we have a slower run time
  but it seems to be well worth it.  So time for some longer runs:

    Archive    Av cost before  Av cost after  Av time before  Av time after
    -----------------------------------------------------------------------
    COI             689             662            78.2            89.8
    INRC1-LM        176             175            92.6            85.0
    INRC1-S          81              81             2.4             1.9
    INRC2-4        2585            2468           103.3           150.5
    INRC2-8        5713            4580           341.9           371.7
    -----------------------------------------------------------------------

  Running times tend to be longer, as expected.  Many of the COI costs
  are new bests, or anyway better than I have seen for a long time.
  Compared to yesterday, 13 are better and 3 are (slightly) worse; and
  the average costs above show a substantial difference (662 vs 689).
  The INRC2 instances are also showing a good cost improvement, which
  is handy because they are somewhat intractable.  My test instance,
  INRC2-4-030-1-6291, has improved from 1880 (at best) to 1835.  So
  I think I will stick with this, despite the slower running time.

  In the CQ14 tables, I'm now showing averages for the first 19
  instances only, given how incomplete the later entries are.
  Also narrowed the column gaps a bit, to fit the width better.

  I could stop now; but I won't.

30 December 2019.  I've changed the name of everything from KHE18 to
  KHE20, and I've begun rewriting the PATAT paper.  To begin with I
  have added a lot of footnotes to referees explaining how this
  version of the paper relates to the previous version.

  Changed the best average running time from bold to roman.

  Worked on the PATAT paper.  It's basically ready to submit
  now, apart from doing the final runs.  But I'll hold off for
  a while and see what else I can come up with.

31 December 2019.  Wrote code for printing archive read and
  write times, seems to be working.  Have to get the CQ14
  read and write times at some stage.

  Adjusted function names and documentation to replace KHE18
  by KHE20.

  Working on INRC1-ML02.  Actually I worked on it before (from
  8 November 2019).  It inspired KhePropagateUnavailableTimes,
  which is still in use.  But can we do even better?

    INRC1-ML02 costs                 GOAL     KHE20x8 
    -------------------------------------------------
    Unavailable times                   3        9
    Friday night before free weekend    0        1
    Workload overloads                  2        2  (same two)
    Consecutive busy weekends           3        4
    Consecutive busy days               7        8
    Consecutive free days               3        5
    -------------------------------------------------
      Total                            18       29


To Do
=====

  INRC1-ML02 would be a good test.  It runs fast and the gap is
  pretty wide at the moment.

  Fun facts about INRC1-ML02
  --------------------------

    * 4 weeks 1Fri to 4Thu

    * 4 shifts per day: E (1), L (2), D (3), and N (4).  But there are
      only two D shifts each day, so this is basically a three-shift
      system of Early, Late, and Night shifts.

    * 30 Nurses:
  
        Contract-0  Nurse0  - Nurse7
        Contract-1  Nurse8  - Nurse26
        Contract-2  Nurse27 - Nurse29

    * Many day and shift off requests, all soft 1 but challenging.
      I bet this is where the cost is incurred.

    * Complete weekends (soft 2), no night shift before free
      weekend (soft 1), identical shift types during weekend (soft 1),
      unwanted patterns [L][E], [L][D], [D][N], [N][E], [N][D],
      [D][E][D], all soft 1

    * Contract constraints         Contract-0    Contract-1   Contract-2
      ----------------------------------------------------------------
      Assignments                    10-18        6-14          4-8
      Consecutive busy weekends       2-3     unconstrained     2-3
      Consecutive free days           2-4         3-5           4-6
      Consecutive busy days           3-5         2-4           3-4
      ----------------------------------------------------------------

      Workloads are tight, there are only 6 shifts to spare, or 8 if
      you ignore the overloads in Nurse28 and Nurse29, which both
      GOAL and KHE18x8 have, so presumably they are inevitable.


  Do something about constraints with step cost functions, if only
  so that I can say in the paper that it's done.

  Continue rewriting the PATAT paper.

  In INRC2-4-030-1-6291, the difference between my 1880 result and
  the LOR17 1695 result is about 200.  About 100 of that is in
  minimum consecutive same shift days defects.  Max working weekends
  defects are another problem, my solution has 3 more of those
  than the LOR17 solution has; at 30 points each that's 90 points.
  If we can improve our results on these defects we will go a long
  way towards closing the gap.

  Grinding down INRC2-4-030-1-6291 from where it is now.  It would
  be good to get a better initial solution from time sweep than I am
  getting now.  Also, there are no same shift days defects in the
  LOR17 solution, whereas there are some in mine.

  Perhaps profile grouping could do something unconventional if it
  finds a narrow peak in the profile that really needs to be grouped.

  What about an ejection chain repair, taking the current runs
  as indivisible?

  My chances of being able to do better on INRC2-4-030-1-6291
  seem to be pretty slim.  But I really should pause and make
  a serious attack on it.  After that there is only CQ to go,
  and I have until 30 January.  There's time now and if I don't
  do it now I never will.

  Fun facts about INRC2-4-030-1-6291
  ----------------------------------

  * 4 weeks

  * 4 shifts per day:  Early (1), Day (2), Late (3), and Night (4).
    The number of required ones varies more or less randomly; not
    assigning one has soft cost 30.

  * 30 Nurses:

       4 HeadNurse:  HN_0,  ... , HN_3
      13 Nurse:      NU_4,  ... , NU_16
       8 Caretaker:  CT_17, ... , CT_24
       5 Trainee:    TR_25, ... , TR_29

    A HeadNurse can also work as a Nurse, and a Nurse can also work
    as a Caretaker; but a Caretaker can only work as a Caretaker, and
    a Trainee can only work as a Trainee.  Given that there are no
    limit resources constraints and every task has a hard constraint
    preferring either a HeadNurse, a Nurse, a Caretaker, or a Trainee,
    this makes Trainee assignment an independent problem.

  * 3 contracts: Contract-FullTime, Contract-HalfTime, Contract-PartTime.
    These determine workload limits of various kinds (see below).  There
    seems to be no relationship between them and nurse type.

  * There are unavailable times (soft 10) but they are not onerous

  * Unwanted patterns: [L][ED], [N][EDL], [D][E] (hard), so these
    prohibit all backward rotations.

  * Complete weekends (soft 30)

  * Contract constraints:                   Half   Part   Full    Wt
    ----------------------------------------------------------------
    Number of assignments                   5-11   7-15  15-20*   20
    Max busy weekends                          1      2      2    30
    Consecutive same shift days (Early)      2-5    2-5    2-5    15
    Consecutive same shift days (Day)       2-28   2-28   2-28    15
    Consecutive same shift days (Late)       2-5    2-5    2-5    15
    Consecutive same shift days (Night)      3-5    3-5    3-5    15
    Consecutive free days                    2-5    2-4    2-3    30
    Consecutive busy days                    2-4    3-5    3-5    30
    ----------------------------------------------------------------
    *15-20 is notated 15-22 but more than 20 is impossible.

  Better to not generate contract (and skill?) resource groups if
  not used.

  KheTaskSetDifferenceMove commented out now, but why was it
  necessary?  Was it a patch or do I still need something like it?
  Let's just run and do something if we come upon something.

  Change KHE's general policy so that operations that change
  nothing succeed.  Having them fail composes badly.  The user
  will need to avoid cases that change nothing.

  Are there other modules that could use the task finder?
  Combinatorial grouping for example?  There are no functions
  in khe_task.c that look like task finding, but there are some
  in khe_resource_timetable_monitor.c:

    KheResourceTimetableMonitorTimeAvailable
    KheResourceTimetableMonitorTimeGroupAvailable
    KheResourceTimetableMonitorTaskAvailableInFrame
    KheResourceTimetableMonitorAddProperRootTasks

  KheTaskSetMoveMultiRepair phase variable may be slow, try
  removing it and just doing everything all together.

  Started work on adding double moves where r2 is NULL to
  KheDoTaskSetMoveMultiRepair.  The plan is to integrate
  the r2 == NULL case with the r2 != NULL case.

  Do a major test trying different numbers of balancing
  repairs, not just 12.  e.g. try 0, 1, 2, 4, 8, 16.  There
  is already an option (balancing_max) for it.

  COI-Musa has blown out from the optimal 175 to 178.  The
  difference is that Nurse 8 is now busy at an unavailable
  time.  There doesn't seem to be any reason why Nurse 8 could
  not give up that time and pick up another time where there
  is an unassigned shift.  The problem is that although we
  have repairs of the form

    Nurse8 -> @ {S1}

  we don't have any repairs of the form

    Nurse8 -> @ {S1}, @ -> Nurse8 {S2}

  We really should have some repairs like this.


  Fun facts about COI-Musa
  ------------------------

  * 2 weeks, one shift per day, 11 nurses (skills RN, LPN, NA)

  * RN nurses:  Nurse1, Nurse2, Nurse3,
    LPN nurses: Nurse4, Nurse5, 
    NA nurses:  Nurse6, Nurse7, Nurse8, Nurse9, Nurse10, Nurse11

  Grinding down COI-HED01.  See above, 10 October, for what I've
  done so far.

  It should actually be possible to group four M's together in
  Week 1, and so on, although combinatorial grouping only tries
  up to 3 days so it probably does not realize this.

  Fun facts about COI-HED01
  -------------------------

    * 31 days, 5 shifts per day: 1=M, 2=D, 3=H, 4=A, 5=N

    * Weekend days are different, they use the H shift.  There
      is also something peculiar about 3Tue: it also uses the
      H shift, and seems to be treated like a weekend day.
      This point is reflected in other constraints, which treat
      Week 3 as though it had only four days.

    * All demand expressed by limit resources constraints,
      except for the D shift, which has two tasks subject
      to assign resource and prefer resources constraints.
      The other shifts vary between about 7 and 9 tasks.  But
      my new converter avoids all limit resources constraints.

    * There are 16 "OP" nurses and 4 "Temp" nurses.
      Three nurses have extensive sequences of days off.
      There is one skill, "Skill-0", but it contains the
      same nurses as the OP nurses.

    * The constraints are somewhat peculiar, and need attention
      (e.g. how do they affect combinatorial grouping?)
    
        [D][0][not N]  (Constraint:1)
          After a D, we want a day off and then a night shift (OP only).
          Only one nurse has a D at any one time, so satisfying this
          should not be very troublesome.

	[not M][D]  (Constraint:2)
	  Prefer M before D (OP only), always seems to get ignored,
	  even in the best solutions.  This is because during the
	  week that D occurs, we can't have a week full of M's.
	  So really this constraint contradicts the others.

	[DHN][MDHAN]  (Constraint:3)
	  Prefer day off after D, H, or N.  Always seems to be
	  satisfied.  Since H occurs only on weekends, plus 3Tue,
	  each resource can work at most one day of the weekend,
	  and if that day is Sunday, the resource cannot work
	  M or A shifts the following week (since that would
	  require working every day).  Sure enough, in the
	  best solution, when an OP nurse works an H shift on
	  a Sunday, the following week contains N shifts and
	  usually a D shift.  And all of the H shifts assigned
	  to Temp nurses are Sunday or 3Tue ones.

	Constraint:4 says that Temp nurses should take H and
	D shifts only.  It would be better expressed by a
	prefer resources constraint but KHE seems happy
	enough with it.

	Constraint:5 says that assigning any shift at all to
	a Temp nurse is to be penalized.  Again, a prefer
	resources constraint would have been better, but at
	present both KHE and the best solution assign 15 shifts
	to Temp nurses, so that's fine.

	The wanted pattern is {M}{A}{ND}{M}{A}{ND}..., where
	{X} means that X only should occur during a week.
	This is for OP nurses only.  It is expressed rather
	crudely:  if 1 M in Week X, then 4 M in Week X.
	This part of it does not apply to N, however; it says
	"if any A in Week X, then at least one N in Week X+1".
	So during N weeks the resource usually has fewer than
	4 N shifts, and this is its big chance to take a D.

	OP nurses should take at least one M, exactly one D,
	at least one H, at most 2 H, at least one A, at least
	one N.  These constraints are not onerous.

    * Assign resource and prefer resources constraints specify:

        - There is one D shift per day

    * Limit resources constraints specify 

        Weekdays excluding 3Tue

        - Each N shift must have exactly 2 Skill-0 nurses.

	- Each M shift and each A shift must have exactly 4
	  Skill-0 nurses

	- There are no H shifts

	Weekend days, including 3Tue

	- Each H shift must have at least 2 Skill-0 nurses

	- Each H shift must have exactly 4 nurses altogether

	- There are no M, A, or N shifts on 3Tue

	- There are no M, A, or N shifts on weekend days

    * The new converter is expressing all demands with assign
      resource and prefer resources constraints, as follows:

      D shifts:

        <R>NA=s1000:1</R>
	<R>A=s1000:1</R>

	So one resource, any skill.

      H shifts (weekends and 3Tue):

        <R>NA=s1000+NW0=s1000:1</R>
	<R>NA=s1000+NW0=s1000:2</R>
	<R>NA=s1000:1</R>
	<R>NA=s1000:2</R>
	<R>A=s1000:1</R>

	So 2 Skill-0 and 2 arbitrary, as above

      M and A shifts (weekdays not 3Tue):

        <R>NA=s1000+NW0=s1000:1</R>
	<R>NA=s1000+NW0=s1000:2</R>
	<R>NA=s1000+NW0=s1000:3</R>
	<R>NA=s1000+NW0=s1000:4</R>
	<R>W0=s1000:1</R>
	<R>W0=s1000:2</R>
	<R>W0=s1000:3</R>
	<R>W0=s1000:4</R>
	<R>W0=s1000:5</R>

	So exactly 4 Skill-0, no limits on Temp nurses

      N shifts (weekday example)

        <R>NA=s1000+NW0=s1000:1</R>
	<R>NA=s1000+NW0=s1000:2</R>
	<R>W0=s1000:1</R>
	<R>W0=s1000:2</R>
	<R>W0=s1000:3</R>
	<R>W0=s1000:4</R>
	<R>W0=s1000:5</R>

      Exactly 2 Skill-0, no limits on Temp nurses.

  It would be good to have a look at COI-HED01.  It has
  deteriorated and it is fast enough to be a good test.
  Curtois' best is 136 and KHE18x8 is currently at 183.
  A quick look suggests that the main problems are the
  rotations from week to week.

  Back to grinding down CQ14-05.  I've fixed the construction
  problem but with no noticeable effect on solution cost.

  KheClusterBusyTimesConstraintResourceOfTypeCount returns the
  number of resources, not the number of distinct resources.
  This may be a problem in some applications of this function.

  Fun facts about CQ14-05
  -----------------------

    * 28 days, 2 shifts per day (E and L), whose demand is:

           1Mon 1Tue 1Wed 1Thu 1Fri 1Sat 1Sun 2Mon 2Tue 2Wed 2Thu
        ---------------------------------------------------------
        E   5    7    5    6    7    6    6    6    6    6    5
        L   4    4    5    4    3    3    4    4    4    6    4
        ---------------------------------------------------------
        Tot 9   11   10   10   10    9   10   10   10   12    9

      Uncovered demands (assign resources defects) make up the
      bulk of the cost (1500 out of 1543).  Most of this (14 out
      of 15) occurs on the weekends.

    * 16 resources named A, B, ... P.  There is a Preferred-L
      resource group containing {C, D, F, G, H, I, J, M, O, P}.
      The resources in its complement, {A, B, E, K, L, N}, are
      not allowed to take late shifts.

    * Max 2 busy weekends (max 3 for resources K to P)

    * Unwanted pattern [L][E]

    * Max 14 same-shift days (not consecutive).  Not hard to
      ensure given that resource workload limits are 16 - 18.

    * Many day or shift on requests.  These basically don't
      matter because they have low weight and my current best
      solution has about the same number of them as Curtois'

    * Workload limits (all resources) min 7560, max 8640
      All events (both E and L) have workload 480;
      7560 / 480 = 15.75, 8640 / 480 = 18.0, so every resource
      needs between 16 and 18 shifts.  The Avail column agrees.

    * Min 2 consecutive free days (min 3 for resources K to P)

    * Max 5 consecutive busy days (max 6 for resources K to P)

    * Curtois' best is 1143.  This represents 2 fewer unassigned
      shifts (costing 100 each) and virtually the same other stuff.

  Try to get CQ14-24 to use less memory and produce better results.
  But start with a smaller, faster CQ14 instance:  CQ14-05, say.

  In Ozk*, there are two skill types (RN and Aid), and each
  nurse has exactly one of those skills.  Can this be used to
  convert the limit resources constraints into assign resource
  and prefer resources constraints?

  Grinding down COI-BCDT-Sep in general.  I more or less lost
  interest when I got cost 184 on the artificial instance, but
  this does include half-cycle repairs.  So more thought needed.
  Could we add half-cycle repairs to the second repair phase
  if the first ended quickly?

  KheCombSolverAddProfileGroupRequirement could be merged with
  KheCombSolverAddTimeGroupRequirement if we add an optional
  domain parameter to KheCombSolverAddTimeGroupRequirement.

  Fun facts about COI-BCDT-Sep
  ----------------------------

    * 4 weeks and 2 days, starting on a Wednesday

    * Shifts: 1 V (vacation), 2 M (morning), 3 A (afternoon), 4 N (night).

    * All cover constraints are limit resources constraints.  But they
      are quite strict and hard.  Could they be replaced by assign
      resource constraints?  (Yes, they have been.)

	  Constraint            Shifts               Limit    Cost
	  --------------------------------------------------------
          DemandConstraint:1A   N                    max 4      10
	  DemandConstraint:2A   all A; weekend M     max 4     100
	  DemandConstraint:3A   weekdays M           max 5     100
	  DemandConstraint:4A   all A, N; weekend M  max 5    hard
	  DemandConstraint:5A   weekdays M           max 6    hard
	  DemandConstraint:6A   all A, N; weekend M  min 3    hard
	  DemandConstraint:7A   all N                min 4      10
	  DemandConstraint:8A   all A; weekend M     min 4     100
	  DemandConstraint:9A   weekday M            min 4    hard
	  DemandConstraint:10A  weekday M            min 5     100
	  --------------------------------------------------------

      Weekday M:   min 4 (hard), min 5 (100), max 5 (100), max 6 (hard),
      Weekend M:   min 3 (hard), min 4 (100), max 4 (100), max 5 (hard) 
      All A:       min 3 (hard), min 4 (100), max 4 (100), max 5 (hard)
      All N:       min 3 (hard), min 4 (10),  max 4 (10),  max 5 (hard)

    * There are day and shift off constraints, not onerous

    * Avoid A followed by M

    * Night shifts are to be assigned in blocks of 3, although a
      block of four is allowed, to avoid a Friday N followed by a
      free Saturday.  There are hard constraints requiring at least
      2 and at most 4 night shifts in a row.

    * At least six days between sequences of N shifts; the
      implementation here could be better, possibly.

    * At least two days off after five consecutive shifts

    * At least two days off after night shift

    * Prefer at least two morning shifts before a vacation period and
      at least one night shift afterwards

    * Between 4 and 8 weekend days

    * At least 10 days off

    * 5-7 A (afternoon) shifts, 5-7 N (night) shifts

    * Day shifts (M and A, taken together) in blocks of exactly 3

    * At most 5 working days in a row.

  Work on COI-BCDT-Sep, try to reduce the running time.  There are
  a lot of constraints, which probably explains the poor result.

  Should we limit domain reduction at the start to hard constraints?
  A long test would be good.

  In khe_se_solvers.c, KheAddInitialTasks and KheAddFinalTasks could
  be extended to return an unassign_r1_ts task set which could then be
  passed on to the double repair.  No great urgency, but it does make
  sense to do this.  But first, let's see whether any instances need it.

  Also thought of a possibility of avoiding repairs during time sweep,
  when the cost blows out too much.  Have to think about it and see if
  it is feasible.

  Take a close look at resource matching.  How good are the
  assignments it is currently producing?  Could it do better?

  Now it is basically the big instances, ERRVH, ERMGH, and MER
  that need attention.  Previously I was working on ERRVH, I
  should go back to that.

  Is lookahead actually working in the way I expect it to?
  Or is there something unexpected going on that is preventing
  it from doing what it has the potential to do?

  UniTime requirements not covered yet:

    Need an efficient way to list available rooms and their
    penalties.  Nominally this is done by task constraints but
    something more concise, which indicates that the domain
    is partitioned, would be better.

    Ditto for the time domain of a meet.

    SameStart distribution constraint.  Place all times
    with the same start time in one time group, have one
    time group for each distinct starting time, and use
    a meet constraint with type count and eval="0-1|...".

    SameTime is a problem because there is not a simple
    partition into disjoint sets of times.  Need some
    kind of builtin function between pairs of times, but
    then it's not clear how this fits in a meet set tree.

    DifferentTime is basically no overlap, again we seem
    to need a binary attribute.

    SameDays and SameWeeks are cluster constraints, the limit
    would have to be extracted from the event with the largest
    number of meets, which is a bit dodgy.

    DifferentDays and DifferentWeeks just a max 1 on each day
    or week.

    Overlap and NotOverlap: need a binary for the amount of
    overlap between two times, and then we can constrain it
    to be at least 1 or at most 0.  NB the distributive law

       overlap(a+b, c+d) = overlap(a, c) + overlap(a, d)
         + overlap(b, c) + overlap(b, d)

    but this nice property is not going to hold for all
    binary attributes.

    Precedence: this is the order events constraint, with
    "For classes that have multiple meetings in a week or
    that are on different weeks, the constraint only cares
    about the first meeting of the class."  No design for
    this yet.

    WorkDay(S): "There should not be more than S time slots
    between the start of the first class and the end of the
    last class on any given day."  This is a kind of avoid
    idle times constraint, applied to events rather than to
    resources (which for us is a detail).
      One task or meet set per day, and then a special function
    (span or something) to give the appropriate measure.  But
    how do you define one day?  By a time group.

    MinGap(G): Any two classes that are taught on the same day
    (they are placed on overlapping days and weeks) must be at
    least G slots apart.  Not sure what to make of this.
    I guess it's overlap(a, b, extension) where extension
    applies to both a and b.

    MaxDays(D): "Given classes cannot spread over more than D days
    of the week".  Just a straight cluster constraint.

    MaxDayLoad(S): "Given classes must be spread over the days
      of the week (and weeks) in a way that there is no more
      than a given number of S time slots on every day."  Just
      a straight limit busy times constraint, measuring durations.
      But not the full duration, rather the duration on one day.

      This is one of several indications that we cannot treat
      a non-atomic time as a unit in all cases.

    MaxBreaks(R,S): "This constraint limits the
      number of breaks during a day between a given set of classes
      (not more than R breaks during a day). For each day of week
      and week, there is a break between classes if there is more
      than S empty time slots in between."  A very interesting
      definition of what it means for two times to be consecutive.

    MaxBlock(M,S): "This constraint limits the length of a block
      of consecutive classes during a day (not more than M slots
      in a block). For each day of week and week, two consecutive
      classes are considered to be in the same block if the gap
      between them is not more than S time slots."  Limit active
      intervals, interpreted using durations rather than times.

  A resource r is busy at some time t if that time overlaps with
  any interval in any meet that r is attending.

  Need a way to define time *groups* to take advantage of symmetries.
  e.g. 1-15{MWF}3 = {1-15M3, 1-15W3, 1-15F3}.  All doubles:
  [Mon-Fr][12 & 23 & 45 & 67 & 78] or something.
  {MWF:<time>} or something.  But what is the whole day anyway?
  All intervals, presumably.  {1-15:{MTWRF:1-8}}

  See 16 April 2019 for things to do with the XUTT paper.

  It's not clear at the moment how time sweep should handle
  rematching.  If left as is, without lookahead, it might
  well undo all the good work done by lookahead.  But to
  add lookahead might be slow.  Start by turning it off:
  rs_time_sweep_rematch_off=true.  The same problem afflicts
  ejection chain repair during time sweep.  Needs thought.
  Can the lookahead stuff be made part of the solution cost?
  "If r is assigned t, add C to solution cost".  Not easily.
  It is like a temporary prefer resources monitor.

  Here's an idea for a repair:  if a sequence is too short, try
  moving it all to another resource where there is room to make
  it longer.  KheResourceUnderloadAugment will in fact do nothing
  at all in these cases, so we really do need to do something,
  even an ejecting move on that day.

  Working over INRC2-4-030-1-6753 generally, trying to improve
  the ejection chain repairs.  No luck so far.

  Resource swapping is really just resource rematching, only not
  as good.  That is, unless there are limit resources constraints.

  The last few ideas have been small beer.  Must do better!
  Currently trying to improve KHE18's solutions to INRC2-4-035-2-8875.xml:

    1 = Early, 2 = Day, 3 = Late, 4 = Night
    FullTime: max 2 weekends, 15-22 shifts, consec 2-3 free 3-5 busy
    PartTime: max 2 weekends,  7-15 shifts, consec 2-5 free 3-5 busy
    HalfTime: max 1 weekends,  5-11 shifts, consec 3-5 free 3-5 busy
    All: unwanted [4][123], [3][12], complete weekends, single asst per day
    All: consec same shift days: Early 2-5, Day 2-28, Late 2-5, Night 4-5

    FullTime resources and the number of weekends they work in LOR are:
    
      NU_8 2, NU_9 1, CT_17 1, CT_18 0, CT_20 1, CT_25 1, TR_30 2, TR_32 3

    NB full-time can only work 20 shifts because of max 5 busy then
    min 2 free, e.g. 5-2-5-2-5-2-5-2 with 4*5 = 20 busy shifts.  But
    this as it stands is not viable because you work no weekends.  The
    opposite, 2-5-2-5-2-5-2-5 works 4 weekends which is no good either.
    Ideally you would want 5-2-5-4-5-2-5, which works 2 weekends, but
    the 4 free days are a defect.  More breaks is the only way to
    work 2 weekends, but that means a lower workload again.  This is
    why several of LOR's full-timers are working only 18 shifts.  The
    conclusion is that trying to redistribute workload overloads is
    not going to help much.
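    The 20-shift ceiling can be checked mechanically.  Here is a
    throwaway dynamic program (not KHE code; it assumes every
    completed free run, including a leading one, must reach the
    minimum, while the trailing run may be cut short by the end of
    the cycle):

```c
#include <assert.h>
#include <string.h>

/* Throwaway checker, not KHE code:  the largest number of busy days
   achievable in `days` days when every busy run is at most max_busy
   days long and every completed free run is at least min_free days
   long (the trailing run may be cut short by the end of the cycle). */
static int max_busy_days(int days, int max_busy, int min_free)
{
  /* best[b][len]: max busy days so far, given that the current run is
     busy (b == 1) or free (b == 0) and has length len; -1 means
     unreachable */
  enum { MAX_LEN = 32 };
  int best[2][MAX_LEN], next[2][MAX_LEN], d, len, b, res;
  memset(best, -1, sizeof(best));
  best[1][1] = 1;  /* day 1 busy */
  best[0][1] = 0;  /* day 1 free */
  for( d = 2;  d <= days;  d++ )
  {
    memset(next, -1, sizeof(next));
    for( len = 1;  len < MAX_LEN;  len++ )
    {
      if( best[1][len] >= 0 )
      {
        if( len + 1 <= max_busy && best[1][len] + 1 > next[1][len + 1] )
          next[1][len + 1] = best[1][len] + 1;     /* extend busy run */
        if( best[1][len] > next[0][1] )
          next[0][1] = best[1][len];               /* start a free run */
      }
      if( best[0][len] >= 0 )
      {
        if( len + 1 < MAX_LEN && best[0][len] > next[0][len + 1] )
          next[0][len + 1] = best[0][len];         /* extend free run */
        if( len >= min_free && best[0][len] + 1 > next[1][1] )
          next[1][1] = best[0][len] + 1;           /* start a busy run */
      }
    }
    memcpy(best, next, sizeof(best));
  }
  res = 0;
  for( b = 0;  b < 2;  b++ )
    for( len = 1;  len < MAX_LEN;  len++ )
      if( best[b][len] > res )
        res = best[b][len];
  return res;
}
```

    With max 5 consecutive busy and min 2 consecutive free, the
    28-day cycle tops out at 20 busy days, confirming the hand
    calculation above.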

    Resource types

    HeadNurse (HN_*) can also work as Nurse or Caretaker
    Nurse     (NU_*) can also work as Caretaker
    Caretaker (CT_*) works only as Caretaker
    Trainee   (TR_*) works only as Trainee

  "At least two days off after night shift" - if we recode this,
  we might do better on COI-BCDT-Sep.  But it's surprisingly hard.

  Option es_fresh_visits seems to be inconsistent, it causes
  things to become unvisited when there is an assumption that
  they are visited.  Needs looking into.  Currently commented
  out in khe_sr_combined.c.

  YIKES - I submitted the wrong version of the modelling
  paper; conf1 rather than journal1

  Install new version of HSEval to fix character set problem.

  For the future:  time limit storing.  khe_sm_timer.c already
  has code for writing time limits, but not yet for reading.

  Work on time modelling paper for PATAT 2020.  The time model
  is an enabler for any projects I might do around ITC 2019,
  for example modelling student sectioning and implementing
  single student timetabling, so it is important for the future
  and needs to be got right.

  Time sets, time groups, resource sets, and resource groups
  ----------------------------------------------------------

    Thinking about whether I can remove construction of time
    neighbourhoods, by instead offering offset parameters on
    the time set operations (subset, etc.) which do the same.

    Need to use resource sets and time sets a lot more in the
    instance, for the constructed resource and time sets which
    in general have no name.  Maybe replace solution time groups
    and solution resource groups altogether.  But it's not
    trivial, because solution time groups are used by meets,
    and solution resource groups are used by tasks, both for
    handling domains (meet and task bounds).  What about

      typedef struct khe_time_set_rec {
          SSET elems;
      } KHE_TIME_SET;

    with SSET optimized by setting length to -1 to finalize.
    Most of the operations would have to be macros which
    add address-of operators in the style of SSET itself.

       KHE_TIME_SET KheTimeSetNeighbour(KHE_TIME_SET ts, int offset);

    would be doable with no memory allocation and one binary
    search (which could be optional for an internal version).
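    Here is a minimal sketch of that operation, with a sorted plain
    array of time indexes standing in for the proposed SSET-based
    type (names and signature are illustrative only):

```c
#include <assert.h>

/* Sketch only (hypothetical signature, plain sorted array in place of
   the proposed SSET type):  write into result every elems[i] + offset
   lying in [0, max_times), and return how many there were.  Because
   elems is sorted, the surviving elements form one contiguous run, so
   a single binary search finds its start and a copy does the rest;
   no memory is allocated. */
static int KheTimeSetNeighbour(const int *elems, int count, int offset,
  int max_times, int *result)
{
  int lo, hi, mid, i, n;

  /* binary search for the first i with elems[i] + offset >= 0 */
  lo = 0;  hi = count;
  while( lo < hi )
  {
    mid = (lo + hi) / 2;
    if( elems[mid] + offset < 0 )
      lo = mid + 1;
    else
      hi = mid;
  }

  /* copy shifted elements until they run off the top of the cycle */
  n = 0;
  for( i = lo;  i < count && elems[i] + offset < max_times;  i++ )
    result[n++] = elems[i] + offset;
  return n;
}
```

    An internal version could skip the copy and return just the
    subrange, making the offset implicit, which is where the "macros
    in the style of SSET itself" would come in.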

    I'm letting this lie for now; something has to be done
    here but I'm not sure what, and there is no great hurry.

  There is a problem with preparing once and solving many times:
  adjustments for limit resources monitors depend on assignments
  in the vicinity, which may vary from one call to another.  The
  solution may well be simply to document the issue.

  At present resource matching is grouping then ungrouping during
  preparation, then grouping again when we start solving.  Can this
  be simplified?  There is a mark in the way.

  Document sset (which should really be khe_sset) and khe_set.

  What to do about tasks without assign resource or limit
  resources monitors?  What about including them in the
  matching but with an edge adjustment that encourages
  non-assignment?  This needs looking into in general.
  At present I am omitting these tasks, because I get
  better results on GPost when I do; but I need to think
  about this and do some serious experiments.

  I'm slightly worried that the comparison function for NRC
  worker constraints might have lost its transitivity now that
  history_after is being compared in some cases but not others.

  Look at the remaining special cases in all.map and see if some
  form of condensing can be applied to them.

  Develop my ideas for a generalized version of XHSTT/XESTT,
  but low priority.  Maybe something for PATAT 2020, when
  I've looked at the university course timetabling instances
  from the forthcoming competition.

  Might be a good idea to review the preserve_existing option in
  resource matching.  I don't exactly understand it at the moment.

  Option es_active_augment is currently undocumented.  Eventually
  I need to either remove it or document it.  At present, best of
  8 on COI-Post gives cost 10 with es_active_augment=swap, and
  cost 35 with es_active_augment=move, so I have made swap be
  its default value.

  There seem to be several silly things in the current code that are
  about statistics.  I should think about collecting statistics in
  general, and implement something.  But not this time around.

  KheTaskFirstUnFixed is quite widely used, but I am beginning to
  suspect that KheTaskProperRoot is what is really wanted.  I need
  to analyse this and perhaps make some conceptual changes.

  Read the full GOAL paper.  Are there other papers whose aims
  are the same as mine (which GOAL's are not)?  If so I need
  to compare my results with theirs.  The paper is in the 2012
  PATAT proceedings, page 254.  Also it gives this site:

    https://www.kuleuven-kulak.be/nrpcompetition/competitor-ranking

  Can I find the results from the competition winner?  According to
  Santos et al. this was Valouxis et al., but their paper is in EJOR.

  Add code for limit resources monitors to khe_se_secondary.c.

  In KheClusterBusyTimesAugment, no use is being made of the
  allow_zero option at the moment.  Need to do this some time.

  Generalize the handling of the require_zero parameter of
  KheOverloadAugment, by allowing an ejection tree repair
  when the ejector depth is 1.  There is something like
  this already in KheClusterOverloadAugment, so look at
  that before doing anything else.

  There is an "Augment functions" section of the ejection chains
  chapter of the KHE guide that will need an update - do it last.

  (KHE) What about a general audit of how monitors report what
  is defective, with a view to finding a general rule for how
  to do this, and unifying all the monitors under that rule?
  The rule could be to store reported_deviation, renaming it
  to deviation, and to calculate a delta on that and have a
  function which applies the delta.  Have to look through all
  the monitors to see how that is likely to pan out.  But the
  general idea of a delta on the deviation does seem to be
  right, given that we want evaluation to be incremental.

  (KHE) For all monitors, should I include attached and unattached
  in the deviation function, so that attachment and unattachment
  are just like any other update functions?

  Ejection chains idea:  include main loop defect ejection trees
  in the major schedule, so that, at the end when main loop defects
  have resisted all previous attempts to repair them, we can try
  ejection trees on each in turn.  Make one change, produce several
  defects, and try to repair them all.  A good last resort?

  Ejection chains idea:  instead of requiring an ejection chain
  to improve the solution by at least (0, 1), require it to
  improve it by a larger amount, at first.  This will run much
  faster and will avoid trying to fix tiny problems until there
  is nothing better to do.  But have I already tried it?  It
  sounds a lot like es_limit_defects.

  Some ideas about unifying things more
  =====================================

  Overall goals of XUTT
  ---------------------

  Completeness:  coverage of high school timetabling, nurse rostering,
  university course timetabling, and university examination timetabling
  as defined by the leading formats for those sub-disciplines, such that
  exact conversions from those other formats to XUTT are practicable.

  Naturalness:  instances to be expressed naturally, that is, without
  unexpected, artificial usages.

  Conciseness:  avoidance of unnecessary repetition in instances;
  often correlated with naturalness, since it is not natural to take
  a great deal of space to say something that can be said concisely.
  It is also wanted in order to reduce the excessive verbosity
  of many XML files.


  Evaluation of constraints
  -------------------------

  At least one format allows a cost function to change in a
  piecewise fashion, for example to be linear at first and
  quadratic thereafter.  XUTT does not support such functions
  directly (they are considered to offer more detail than is
  ever needed by real-world instances).  If they are needed,
  for example when converting existing instances to XUTT,
  they can be expressed via multiple constraints with the
  same targets but different limits and weights.
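  A small numeric sketch of this superposition (the limits 3 and 6
  and the weights 1 and 2 are invented for illustration):

```c
/* Cost of one constraint with a linear cost function: weight per
   unit of determinant in excess of max. */
static int LinearCost(int determinant, int max, int weight)
{
    int excess = determinant - max;
    return excess > 0 ? excess * weight : 0;
}

/* Two constraints with the same targets but different limits and
   weights superpose into a convex piecewise-linear cost function:
   slope 0 up to 3, slope 1 from 3 to 6, slope 3 beyond 6. */
static int CombinedCost(int determinant)
{
    return LinearCost(determinant, 3, 1) + LinearCost(determinant, 6, 2);
}
```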

  Tasks and measures
  ------------------

  A *task* is some work needing to be assigned to one resource.
  It can be an *atomic task*, meaning an indivisibly minimal
  amount of work, or a *composite task*, meaning a set of tasks
  (themselves atomic or composite) which are structurally
  constrained to be assigned the same resource.  Although one can
  always determine whether a given task is atomic or composite,
  and if composite what its elements are, it is a principle of
  KHE that tasks can and should always be handled without knowing
  or caring whether they are atomic or composite.

  At any one moment, a task can be an element of at most one
  composite task.  But this composite task can be deleted or
  changed by a solver.

  A *measure* is a function m(s) which associates a non-negative
  integer with each atomic task s.  The measure of a composite task
  is the sum of the measures of the atomic tasks that compose it.
  
  An event resource may define any number of measures, which apply
  to all atomic tasks derived from it:

     <EventResource>
       <Measure ref="duration" val="5"/>
       <Measure ref="workload" val="20"/>
     </EventResource>

  Always present, but never given explicitly, is

     <Measure ref="timecount" val="1"/>

  When a constraint requires the value of a measure for some task,
  and that measure is not defined for that task, it is an error.

  Task sets
  ---------

  A *task set* is a set of tasks.  Given a task set P, the *measure*
  of P, written m(P), is the sum over all s in P of m(s).  Constraints
  that depend on m(P) specify which measure to use, defaulting to timecount.

  XUTT offers a convenient way to define the task sets needed when
  evaluating constraints.  Define P(S, T, R) to be the set of tasks
  s satisfying these three conditions:

    * s or one of its elements is derived from an event resource lying
      in S, a set of event resources.  (For each assignment to an event
      resource there is one atomic task for each time that the event
      resource is running.  These are its derived atomic tasks.)  S may
      be absent, in which case this condition does not apply.

      A set of event resources may be defined by giving a set of
      event IDs and a set of event resource labels; the set consists
      of all event resources lying in events with the given IDs
      which have any of the given labels.  Any number of sets of
      event resources may be given in this way; their union is S.

    * s or one of its elements is running at one of the times of T,
      a set of times.  T may be absent, in which case this condition
      does not apply.  T may be defined by giving any number of time
      groups and times; their union is T.

    * s is assigned a resource from R, a set of resources.  R may
      be absent, in which case this condition does not apply.  R
      may be defined by giving any number of resource groups and
      resources; their union is R.

  S, T, and R are fixed sets, but P(S, T, R) depends on the current
  state of the solution, and the implementation must keep it and
  m(P(S, T, R)) up to date as the solution changes.
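  The three membership conditions can be sketched as a predicate
  (names and representations are illustrative, not KHE's; a NULL
  set stands for an absent S, T or R, and only atomic tasks are
  handled, whereas for composite tasks the first two tests would
  also range over elements):

```c
#include <stdbool.h>
#include <stddef.h>

typedef struct {
    int event_resource;   /* the event resource the task derives from */
    int time;             /* the time at which the task is running */
    int resource;         /* assigned resource, or -1 if unassigned */
} TASK;

typedef struct {
    const int *elems;     /* members, or NULL if the set is absent */
    int count;
} SET;

/* True if x is in s, or if s is absent (its condition does not apply). */
static bool SetAdmits(SET s, int x)
{
    if (s.elems == NULL)
        return true;
    for (int i = 0; i < s.count; i++)
        if (s.elems[i] == x)
            return true;
    return false;
}

/* Membership of task t in P(S, T, R): the three conditions above. */
static bool InP(TASK t, SET S, SET T, SET R)
{
    return SetAdmits(S, t.event_resource)
        && SetAdmits(T, t.time)
        && SetAdmits(R, t.resource);
}
```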

  The syntax for a task set is

    TaskSet +eg +e +label +tg +t +rg +r
      *EventResource
        *EventGroup ref
        *Event ref
        *Label ref
      *TimeGroup ref
      *Time ref
      *ResourceGroup ref
      *Resource ref

  In simple cases one can use the +eg +e +label +tg +t +rg +r
  options instead of the longer forms.  Other options may be
  added e.g. when the task point set comes with a cost.

  No constraints in common use require both S and T together.
  However, it costs practically nothing to allow them both
  to be present, as follows.
  
  When S is present, the equivalent of P(S, -, R) in KHE is
  reached by enrolling the monitor with all tasks s derived from
  S, irrespective of whether they are assigned a resource of R
  or not.  P(S, T, R) can be reached in the same way:  enrol the
  monitor with all tasks s derived from S, irrespective of whether
  they are running in T or assigned in R or not.

  When S is absent, the equivalent of P(-, T, r) in KHE is reached
  by enrolling T as a time group monitor within the timetable monitor
  of r.  P(-, T, R) can be done in much the same way, enrolling the
  time group monitor within the timetable monitors of each r in R.

  There may be special support for sets of task sets which are
  defined identically except that their resource sets R differ,
  and are in fact pairwise disjoint.

  I need to think about whether to allow "-" (unassigned) as a
  member of T or R.  It may not be necessary and it may have
  running time problems, given that, at times, the number of
  unassigned meets and tasks can be very large, much larger
  than the number of meets assigned time t, or the number of
  tasks assigned resource r.  Could it be implemented via
  null time and null resource objects?  Then every meet and
  every task would be assigned at all times, and every change
  of assignment would be a move.  But assignment to the null
  time or resource means unassigned.

  A task set P(S, T, R) may contain an <Eval> category.  In
  that case it becomes a constraint, whose determinant is
  m(P(S, T, R)).

  When repairing a constraint whose determinant is m(P(S, T, R)),
  if it is too large we need to try moving each task in P(S, T, R)
  to some time t not in T or to some resource r not in R.  If it
  is too small we need to find tasks which are in S (if present)
  but not in P(S, T, R), and try moving them into P(S, T, R) by
  assigning to them some t in T and some r in R.

  Task set trees
  --------------

  A task set tree is a tree whose leaf nodes are task sets.  Each node
  (leaf and internal) has a Boolean activity value; leaf nodes also
  have the measure(s) of their task sets as values.  Each node has
  one of these four types:

    <TaskSet active="..." +repeat>
      A leaf node representing a task set P(S, T, R).  It may have
      Eval children; if so, their determinant is m(P(S, T, R)).  It
      is active if active is "busy" (the default) and m(P(S, T, R))
      is non-zero, or if active is "free" and m(P(S, T, R)) is zero.
    
    <TaskTree type="counter">
      An internal node containing a set of task set trees.  It may
      also have Eval children; if so, their determinant is the
      number of active children.  It is active if it has any
      active children.
    
    <TaskTree type="consec">
      An internal node like <TaskTree type="counter"> except that,
      if it has Eval children, then it has one determinant for each
      sequence of active children, each producing a cost.
    
    <TaskTree type="intern">
      An internal node like <TaskTree type="consec"> except that active
      sequences at either end do not produce a cost.
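  The determinants of the three internal node types can be sketched
  over an array of child activity values (an illustrative sketch,
  not KHE code):

```c
#include <stdbool.h>

/* type="counter": one determinant, the number of active children. */
static int CounterDeterminant(const bool *active, int n)
{
    int count = 0;
    for (int i = 0; i < n; i++)
        if (active[i])
            count++;
    return count;
}

/* type="consec": one determinant per maximal run of active children,
   its value being the run's length.  type="intern" is the same except
   that runs touching either end are dropped.  Run lengths are stored
   in out[]; the return value says how many there are. */
static int RunDeterminants(const bool *active, int n, bool intern, int *out)
{
    int count = 0, i = 0;
    while (i < n) {
        if (!active[i]) { i++; continue; }
        int start = i;
        while (i < n && active[i])
            i++;
        if (!intern || (start > 0 && i < n))
            out[count++] = i - start;
    }
    return count;
}
```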
  
  We need to be able to work top-down to satisfy the Eval categories
  throughout the tree, that is, choose a suitable set of sub-selections
  and recursively make them active.  We also need to be able to work
  bottom-up, that is, react when arbitrary assignments come from nowhere.

  Defining event resource and resource constraints with task set trees
  --------------------------------------------------------------------

  <ForEachResource rg="...">
  </ForEachResource>

  Equivalent to one copy of its body for each resource x of rg,
  with each occurrence of r="*" in that body not lying within an
  inner <ForEachResource> replaced by r="x".  It is an error if
  no replacements occur.

  <ForEachTime tg="...">
  </ForEachTime>

  Equivalent to one copy of its body for each time x of tg,
  with each occurrence of t="*" in that body not lying within
  an inner <ForEachTime> replaced by t="x".  It is an error
  if no replacements occur.

  <ForEachTimeGroup tg="..." repeat="...">
  </ForEachTimeGroup>

  Equivalent to one copy of its body for each time ti of the time
  group referenced by the repeat option, with each occurrence of
  tg="*" in that body not lying within an inner <ForEachTimeGroup>
  replaced by the value of tg shifted by index(ti) - index(t1).  It
  is an error if no replacements occur.
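  The shifting rule can be sketched as follows (times as global
  indices into the cycle; the names are invented):

```c
/* Sketch of the tg="*" substitution in <ForEachTimeGroup>: the
   template time group is shifted by index(ti) - index(t1), where
   t1 is the first time of the group named by the repeat option. */
static void ShiftTimeGroup(const int *template_tg, int len,
                           int ti_index, int t1_index, int *out)
{
    int offset = ti_index - t1_index;
    for (int i = 0; i < len; i++)
        out[i] = template_tg[i] + offset;
}
```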

  Each resource constraint uses one task set tree whose root's
  children are leaf nodes containing P(-, Ti, {r}), where Ti is
  the ith time group, and r is the target resource.  The various
  other attributes define the details:

    Avoid clashes constraints

      <ForEachResource rg="Nurses">
	<ForEachTime tg="TimesOfCycle">
	  <TaskSet r="*" t="*">
	    <Eval max="1" fn="h1">
	  </TaskSet>
	</ForEachTime>
      </ForEachResource>

      There is one target for each (resource, time) pair, consisting
      of the task set of all task points running at that time and
      assigned that resource.  So the determinant is the number of
      task points that make the resource busy during that one time;
      any value greater than 1 indicates clashes.

    Avoid unavailable times constraints

        <TaskSet r="Nurse1">
	  <Eval max="0" fn="h1">
	  <Time ref="Wed1">
	  <Time ref="Wed6">
        </TaskSet>

      There is one target, consisting of a task set holding all the
      atomic tasks assigned Nurse1 at times Wed1 and Wed6.  The
      determinant is the number of these times during which the
      resource is busy.

    Limit idle times constraints

      <ForEachResource rg="Nurses">
	<ForEachTimeGroup tg="Mon" repeat="DaysOfCycle">
	  <TaskTree type="intern" active="free">
	    <Eval max="0" fn="s10">
	    <ForEachTime tg="*">
	      <TaskSet t="*" r="*">
	    </ForEachTime>
	  </TaskTree>
	</ForEachTimeGroup>
      </ForEachResource>

      There is one target for each (resource, day), consisting
      of a sequence of task sets, one for each time on that
      day.  For each sequence of free times on that day except
      sequences at either end there is one determinant, whose
      value is the length of the sequence.

    Cluster busy times constraints

      <ForEachResource rg="Teachers">
	<ForEachTimeGroup tg="1Mon" repeat="WeeksOfCycle">
	  <TaskTree>
	    <Eval max="3" fn="sq10">
	    <ForEachTimeGroup tg="*" repeat="DaysOfWeek">
	      <TaskSet tg="*" r="*">
	    </ForEachTimeGroup>
	  </TaskTree>
	</ForEachTimeGroup>
      </ForEachResource>

      This example shows a constraint which limits each teacher to
      teaching on at most three days of each week.  The first
      tg="*" is successively replaced by time groups 1Mon, 2Mon,
      etc. (the first days of each week), the second is replaced
      by 1Mon, 1Tue, ..., 1Fri when the first is 1Mon, by 2Mon,
      2Tue, ..., 2Fri when the first is 2Mon, and so on.

    Limit busy times constraints

      <ForEachResource rg="Teachers">
	<ForEachTimeGroup tg="1Mon" repeat="DaysOfCycle">
	  <TaskSet tg="*" r="*">
	    <Eval max="7" fn="s1">
	  </TaskSet>
	</ForEachTimeGroup>
      </ForEachResource>

      This example shows a constraint which limits each teacher to
      teaching for at most 7 times on each day of the cycle.

    Limit workload constraints

      <ForEachResource rg="Teachers">
	<ForEachTimeGroup tg="1Mon" repeat="DaysOfCycle">
	  <TaskSet tg="*" r="*" measure="workload">
	    <Eval max="400" fn="s1">
	  </TaskSet>
	</ForEachTimeGroup>
      </ForEachResource>

      This example shows a constraint which limits a resource
      to a workload of 400 minutes on each day of the cycle.

    Limit active intervals constraint

      <ForEachResource rg="Teachers">
	<TaskTree type="consec">
	  <Eval max="3" fn="s10">
	  <ForEachTimeGroup tg="1Mon" repeat="DaysOfCycle">
	    <TaskSet tg="*" r="*">
	  </ForEachTimeGroup>
	</TaskTree>
      </ForEachResource>

      This example shows a constraint which limits a resource
      to at most three consecutive busy days.

    Assign resource constraint

      <ForEachEvent eg="Shifts">
        <TaskSet e="*" r="-" label="nurse">
	  <Eval max="0" fn="h1">
        </TaskSet>
      </ForEachEvent>

      This constraint applies to each task with label "nurse" in
      event group "Shifts".

    Prefer resources constraint

      <ForEachEvent eg="Shifts">
        <TaskSet e="*" label="nurse">
	  <ResourceGroup rg="SeniorNurse">
	    <Eval max="0" fn="h0">
	  </ResourceGroup>
	  <Eval max="0" fn="s1">
        </TaskSet>
      </ForEachEvent>

      This constraint applies to each task with label "nurse" in
      event group "Shifts".  Tasks assigned senior nurses attract
      no cost; the rest attract cost s1.  But what about unassigned
      tasks?  To begin with, this is probably better:

      <ForEachEvent eg="Shifts">
        <TaskSet e="*" label="nurse" not_rg="SeniorNurse" eval="0|s1"/>
      </ForEachEvent>

      We can use this same arrangement to give different costs
      to different resources:

      <ForEachEvent eg="Shifts">
        <TaskTree e="*">
	  <TaskSet r="Nurse1"  eval="0|s4"/>
	  <TaskSet r="Nurse2"  eval="0|s6"/>
	  ...
	  <TaskSet r="Nurse50" eval="0|s2"/>
	  <TaskSet r="-"       eval="0|h1"/>
        </TaskTree>
      </ForEachEvent>

      But here we need implementation support; we don't want
      to actually keep a task set for each resource, although
      we could keep an array of pointers to cost functions.

    Limit resources constraint

      <TaskSet eg="SomeTaskSet" rg="SeniorNurse" label="nurse" eval="1-inf|s1"/>

      The task set consists of all tasks with label "nurse" lying in
      event group "SomeTaskSet" that are assigned a senior nurse.
      The measure of this task set must be at least 1, otherwise
      there is a penalty.  In other words, there must be at least
      one senior nurse assigned to the nominated tasks.

      If a fully-fledged time model is included, it might be possible
      to say "all events running at 3pm" directly, that is, without
      having to build the set manually.

    Avoid split assignments constraint

      <ForEachEvent eg="TeachingEvents">
	<TaskTree type="counter">
	  <Eval max="1" fn="s20">
	  <ForEachResource rg="Teachers">
	    <TaskSet e="*" label="teacher" r="*">
	  </ForEachResource>
	</TaskTree>
      </ForEachEvent>

    For each event resource requiring a teacher, for each teacher
    find the set of tasks of that event resource assigned that
    teacher.  The number of these that are non-empty is the
    determinant.

    This case will probably require implementation support.  It
    does not make sense to actually define and maintain a full
    P(S, T, R) for each task set in this constraint.  Rather, it
    is better to follow the existing implementation of the avoid
    split assignments constraint and keep an array holding only
    the non-empty task sets, or even just their measure.

    Interpret all the update operations as potentially adding
    or deleting a task from a task set.
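  The determinant of this constraint, the number of non-empty
  per-teacher task sets, can be sketched directly from the tasks'
  assignments (an illustrative stand-in for the array-based
  implementation just described):

```c
/* Number of distinct resources assigned among the event resource's
   tasks; unassigned tasks, marked -1, are ignored.  Quadratic, which
   is fine for the handful of tasks one event resource has. */
static int DistinctAssignedResources(const int *assigned, int n)
{
    int count = 0;
    for (int i = 0; i < n; i++) {
        if (assigned[i] < 0)
            continue;
        int seen = 0;
        for (int j = 0; j < i; j++)
            if (assigned[j] == assigned[i]) {
                seen = 1;
                break;
            }
        if (!seen)
            count++;
    }
    return count;
}
```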

  Event constraints
  -----------------

  These are done analogously to task constraints, as far as possible.

    Assign time constraints

      <ForEachEvent eg="Classes">
        <MeetSet e="*" t="-" eval="0|h1"/>
      </ForEachEvent>

      This constraint applies to each event in Classes.

  Split events constraints
  Distribute split events constraints

    still to do

  Prefer times constraints

    <MeetTree e="7Maths2">
      <MeetSet t="Mon1" eval="0|s4"/>
      <MeetSet t="Mon2" eval="0|s6"/>
      ...
      <MeetSet t="Fri8" eval="0|s2"/>
      <MeetSet t="-"    eval="0|h1"/>
    </MeetTree>

  Spread events constraints

    <MeetTree e="7Maths2" type="counter" eval="4-inf|s5">
      <ForEachTimeGroup tg="Mon" repeat="DaysOfWeek">
	<MeetSet tg="*">
      </ForEachTimeGroup>
    </MeetTree>

    This counts the number of days of the week that 7Maths2 is
    running, and penalizes anything under 4.

  Link events constraints

    <MeetTree type="counter" eval="6|s20">
      <ForEachTime tg="Times">
	<MeetSet e="7Maths" t="*">
      </ForEachTime>
    </MeetTree>

  Order events constraints

    This may require something new, since it is not about
    a set of two meets, it is about a sequence of two meets.
    Plus there is a lot of this stuff in the UniTime spec.


  Things to think about from UniTime
  ----------------------------------

  Travel times between rooms.  Remember we are down to one
  minute granularity.  We can use eval="0|s10" to say that
  there is a penalty of 10 for each minute over; but how
  to calculate the gap?  We have to get an attribute from
  the room, namely its travel time to other rooms.

  Need to be able to define arbitrary functions of two
  resources, as in travel(r1, r2), but even then we are
  not all the way there.

  Distribution constraints - same days, etc.  How do we
  say that two events must meet at the same time of day?

  <MeetTree eval="1|s20">
    <ForEachTimeGroup tg="Period1" repeat="TimesOfDay">
      <MeetSet eg="EventsOfInterest" tg="*">
    </ForEachTimeGroup>
  </MeetTree>

  And for different time of day we just need eval="2|s20".
  The same plan will work for "same room":

  <TaskTree eval="1|s20">
    <ForEachRoom rg="Rooms">
      <TaskSet eg="EventsOfInterest" label="room" r="*">
    </ForEachRoom>
  </TaskTree>

  This is room stability, and both of these examples can work
  by keeping, within the MeetTree or TaskTree, one object for
  each MeetSet or TaskSet with a non-empty counter.

  There is a problem though with measuring the number of
  minutes of overlap.  We can do it for each pair of times
  as required, but then how do we get it into a constraint?
  Perhaps it is a measure:  the overlap in minutes of the
  events in the event set, not their total duration:
  measure="duration", measure="overlap" and so on.  But
  then we have to add the SameAttendees room travel time
  thing.  Really want to evaluate it wrt an individual
  student's timetable.  Hmm.

  Precedence means order events, but how to do that we
  don't know yet.

  "WorkDay(S) There should not be more than S time slots
  between the start of the first class and the end of the
  last class on any given day."  For us this will be for
  a particular resource r.  Can we do this?  We may need
  a third alternative to type="consec" and type="intern",
  namely type="span".

  MaxDayLoad(S) is just a limit busy times constraint
  on each day of the cycle.  But may be measured as a
  duration rather than as a number of times.  There is
  a division by nrWeeks but we don't need to model that.

  "MaxBreaks(R,S) This constraint limits the number of
  breaks during a day between a given set of classes
  (not more than R breaks during a day)."  Yet another
  measure, this time the number of sequences, but the
  definition of a break is somewhat awkward.  Might be
  best to just use idle time measured in minutes.

  "MaxBlock(M,S) This constraint limits the length of a
  block of consecutive classes during a day (not more than
  M slots in a block)."  Again, length of busy sequences,
  although once again we have this awkward question of
  whether two adjacent classes are consecutive.  They
  are saying that if two classes are no more than S
  times apart, they are in the same block.  Awkward.

  Could we say that a student is busy attending some class
  for the duration of the walking time between two rooms?

  Transformations of real-life data.  Rooms is basically
  saying that there are multiple constraints on room
  preferences and these get added together.  But there
  is a penalty for a mismatch between a room's capacity
  and a class's *actual* size (or maximum?).  So we
  need to be able to attach an eval to a room:

     <Resource type="room" eval="15-20|s1-s10">

  Later we will use this eval in a constraint whose
  determinant is the number of students, so this says
  that we pay a penalty of 1 for each student under
  15 and a penalty of 10 for each student over 20.
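  A sketch of evaluating such a two-sided eval (parameter names
  invented):

```c
/* Two-sided eval such as eval="15-20|s1-s10": zero cost while the
   determinant lies in [lo, hi], under_weight per unit below lo,
   over_weight per unit above hi. */
static int RangeCost(int determinant, int lo, int hi,
                     int under_weight, int over_weight)
{
    if (determinant < lo)
        return (lo - determinant) * under_weight;
    if (determinant > hi)
        return (determinant - hi) * over_weight;
    return 0;
}
```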

    <Event ref="Maths101">
      <Task ref="students">
      <Task ref="room">
    </Event>

  and the constraint is

    <TaskSet ref="Maths101" label="students" measure="asstcount"
      eval="!room">

  The last bit says to take the eval from the resource assigned
  to the task with label room in the current event.  Not great
  but it will work.  Need to change this constraint when the
  value assigned to room changes.


  Miscellaneous
  -------------

  Just four constraints:

    Constraint             One point of application
    ------------------------------------------------------------
    <MeetConstraint>       A set of one or more events
    <SplitConstraint>      A set of one or more events
    <TaskConstraint>       A set of one or more tasks
    <ResourceConstraint>   One resource
    ------------------------------------------------------------

  Convert event resource labels into unique integers on reading,
  so that they can be retrieved quickly from the sets of labels
  which appear in constraints.

  There must be a way to declare that an event will be assigned K
  times, without specifying which times.  This is to allow each of
  its event resources to produce K task points for each assignment,
  even though the times of those task points are not known.

  Attributes
  ----------

  <ResourceType id="Rooms">
     <Flag id="accessible">
     <Attr id="capacity">
     <Binary id="travel">
