KHE diary for 2023
==================

At the end of 2022 I was still working on dynamic resource assignment.
I had more or less finished the coding of expansion by shifts, and I
was just finishing off an audit of the documentation of expansion.
I was also coming to the end of a very moderate case of COVID-19.

1 January 2023.  Did some more documentation auditing.  In fact I
  ended up auditing the whole expansion section again.  Also did
  an off-site backup.

2 January 2023.  Fiddled with a few minor things, pretty tedious.
  Had a quick look over one-extra and two-extra selection, to see
  if they have decayed.  I don't think they have.

3 January 2023.  Started testing.  Several nasty little bugs so far.
  I'm now running to completion, but for every expand_by_shifts run
  I am getting "made 0", like this for example:

    [ KheDynamicResourceSolverDoSolve(drs, false, false, expand_by_shifts,
        0, 0, 0, IndexedUniform, false, -) cost 52.06240
      resources:  HN_1, NU_6, NU_9
      day ranges: 0-13
    [ KheDrsSolveSearch(3 resources, 14 days)
      KheDrsSolveSearch expanded 1Mon (made 0, undominated 0)
      KheDrsSolveSearch expanded 1Tue (made 0, undominated 0)
      KheDrsSolveSearch expanded 1Wed (made 0, undominated 0)
      KheDrsSolveSearch expanded 1Thu (made 0, undominated 0)
      KheDrsSolveSearch expanded 1Fri (made 0, undominated 0)
      KheDrsSolveSearch expanded 1Sat (made 0, undominated 0)
      KheDrsSolveSearch expanded 1Sun (made 0, undominated 0)
      KheDrsSolveSearch expanded 2Mon (made 0, undominated 0)
      KheDrsSolveSearch expanded 2Tue (made 0, undominated 0)
      KheDrsSolveSearch expanded 2Wed (made 0, undominated 0)
      KheDrsSolveSearch expanded 2Thu (made 0, undominated 0)
      KheDrsSolveSearch expanded 2Fri (made 0, undominated 0)
      KheDrsSolveSearch expanded 2Sat (made 0, undominated 0)
      KheDrsSolveSearch expanded 2Sun (made 0, undominated 0)
    ] KheDrsSolveSearch returning false
    ] KheDynamicResourceSolverDoSolve returning false

  This doesn't seem to have anything to do with fixed tasks,
  because it happens right at the start.  The problem is that
  only one shift seems to have a shift asst trie.  What happened
  to all the others?

4 January 2023.  Still testing.  Now allowing KheDrsExpanderMarkBegin
  to work even when the expander is closed.  This is easy, just have
  to recalculate de->open.

5 January 2023.  Still testing.  Something is wrong with expression
  evaluation:

    [ KheDrsExpanderMakeAndMeldShiftAsst(de, dsat, ...)
      eval ARC:NA=h1|NWNurse=h1:1/1Mon:Early/NA=h1|NWNurse=h1:1 ld1 0 + 0 ud1 1 dev1 0 si 0 ++ ld2 2 ud2 1 dev2 1 C1.00000
      eval PRC:A=s0+NWHeadNurse=h1:1:NotNU_4Etc/1Mon:Early/A=s0+NWHeadNurse=h1:1 ld1 0 + 0 ud1 1 dev1 0 si 0 -- ld2 0 ud2 -1 dev2 1 C1.00000
      eval PRC:A=s0+NWHeadNurse=h1:2:NotNU_4Etc/1Mon:Early/A=s0+NWHeadNurse=h1:2 ld1 0 + 0 ud1 1 dev1 0 si 0 -- ld2 0 ud2 -1 dev2 1 C1.00000
      eval PRC:A=s0+NWHeadNurse=h1:3:NotNU_4Etc/1Mon:Early/A=s0+NWHeadNurse=h1:3 ld1 0 + 0 ud1 1 dev1 0 si 0 -- ld2 0 ud2 -1 dev2 1 C1.00000
      eval PRC:A=s0+NWHeadNurse=h1:4:NotNU_4Etc/1Mon:Early/A=s0+NWHeadNurse=h1:4 ld1 0 + 0 ud1 1 dev1 0 si 0 -- ld2 0 ud2 -1 dev2 1 C1.00000
      eval PRC:NA=h1|NWNurse=h1:1:Nurse/1Mon:Early/NA=h1|NWNurse=h1:1 ld1 0 + 0 ud1 1 dev1 0 si 0 -- ld2 0 ud2 -1 dev2 1 C1.00000
      eval PRC:A=s0+NWNurse=h1:1:NotCT_17Etc/1Mon:Early/A=s0+NWNurse=h1:1 ld1 0 + 0 ud1 1 dev1 0 si 0 -- ld2 0 ud2 -1 dev2 1 C1.00000
      eval PRC:A=s0+NWNurse=h1:2:NotCT_17Etc/1Mon:Early/A=s0+NWNurse=h1:2 ld1 0 + 0 ud1 1 dev1 0 si 0 -- ld2 0 ud2 -1 dev2 1 C1.00000
      eval PRC:A=s0+NWNurse=h1:3:NotCT_17Etc/1Mon:Early/A=s0+NWNurse=h1:3 ld1 0 + 0 ud1 1 dev1 0 si 0 -- ld2 0 ud2 -1 dev2 1 C1.00000
      eval PRC:A=s0+NWNurse=h1:4:NotCT_17Etc/1Mon:Early/A=s0+NWNurse=h1:4 ld1 0 + 0 ud1 1 dev1 0 si 0 -- ld2 0 ud2 -1 dev2 1 C1.00000
      eval PRC:A=s0+NWCaretaker=h1:1:NotHN_2Etc/1Mon:Early/A=s0+NWCaretaker=h1:1 ld1 0 + 0 ud1 1 dev1 0 si 0 -- ld2 0 ud2 -1 dev2 1 C1.00000
      eval PRC:A=s0+NWCaretaker=h1:2:NotHN_2Etc/1Mon:Early/A=s0+NWCaretaker=h1:2 ld1 0 + 0 ud1 1 dev1 0 si 0 -- ld2 0 ud2 -1 dev2 1 C1.00000
      eval PRC:A=s0+NWCaretaker=h1:3:NotHN_2Etc/1Mon:Early/A=s0+NWCaretaker=h1:3 ld1 0 + 0 ud1 1 dev1 0 si 0 -- ld2 0 ud2 -1 dev2 1 C1.00000
      eval PRC:A=s0+NWCaretaker=h1:4:NotHN_2Etc/1Mon:Early/A=s0+NWCaretaker=h1:4 ld1 0 + 0 ud1 1 dev1 0 si 0 -- ld2 0 ud2 -1 dev2 1 C1.00000
      eval PRC:A=s0+NWCaretaker=h1:5:NotHN_2Etc/1Mon:Early/A=s0+NWCaretaker=h1:5 ld1 0 + 0 ud1 1 dev1 0 si 0 -- ld2 0 ud2 -1 dev2 1 C1.00000
      eval PRC:A=s0+NWCaretaker=h1:6:NotHN_2Etc/1Mon:Early/A=s0+NWCaretaker=h1:6 ld1 0 + 0 ud1 1 dev1 0 si 0 -- ld2 0 ud2 -1 dev2 1 C1.00000
      prev 52.05985 + dsa 16.00000 :: lim 52.06240
    ] KheDrsExpanderMakeAndMeldShiftAsst returning (no)

  I'm guessing that most of these should have one child but they seem to
  be visiting two children, which probably explains the negative values
  we're getting for ud2.

  OK, we seem to be adding the same child twice:

    [ [2:0-0: 0,2]
      [ ASSIGNED_TASK(1Mon:Early.5, -, 1) pi 12305 oc 0 ]
      [ ASSIGNED_TASK(1Mon:Early.5, -, 1) pi 12305 oc 0 ]
    ]

  Yep, the problem was that closing was not removing the old children.
  Fixed now.

  Now look at these two runs; their only difference is that the second
  uses expand by shifts.  Something goes wrong with expand by shifts on
  1Thu, but, wonderfully, on the first three days the two solves produce
  exactly the same number of undominated solutions, as they should,
  while expand by shifts makes many fewer solutions to get them.  We're
  close now, we're very close.

    [ KheDynamicResourceSolverDoSolve(drs, false, false, false,
      0, 0, 0, IndexedUniform, false, -) cost 52.06240
      resources:  HN_1, NU_6, NU_9
      day ranges: 0-13
    [ KheDrsSolveSearch(3 resources, 14 days)
      KheDrsSolveSearch expanded 1Mon (made 285, undominated 61)
      KheDrsSolveSearch expanded 1Tue (made 5811, undominated 366)
      KheDrsSolveSearch expanded 1Wed (made 165534, undominated 856)
      KheDrsSolveSearch expanded 1Thu (made 120081, undominated 932)
      KheDrsSolveSearch expanded 1Fri (made 2767, undominated 122)
      KheDrsSolveSearch expanded 1Sat (made 345, undominated 20)
      KheDrsSolveSearch expanded 1Sun (made 100, undominated 30)
      KheDrsSolveSearch expanded 2Mon (made 10495, undominated 195)
      KheDrsSolveSearch expanded 2Tue (made 18289, undominated 756)
      KheDrsSolveSearch expanded 2Wed (made 1043, undominated 67)
      KheDrsSolveSearch expanded 2Thu (made 1040, undominated 18)
      KheDrsSolveSearch expanded 2Fri (made 16, undominated 6)
      KheDrsSolveSearch expanded 2Sat (made 24, undominated 10)
      KheDrsSolveSearch expanded 2Sun (made 0, undominated 0)
    ] KheDrsSolveSearch returning false
    ] KheDynamicResourceSolverDoSolve returning false

    [ KheDynamicResourceSolverDoSolve(drs, false, false, expand_by_shifts,
      0, 0, 0, IndexedUniform, false, -) cost 52.06240
      resources:  HN_1, NU_6, NU_9
      day ranges: 0-13
    [ KheDrsSolveSearch(3 resources, 14 days)
      KheDrsSolveSearch expanded 1Mon (made 61, undominated 61)
      KheDrsSolveSearch expanded 1Tue (made 1399, undominated 366)
      KheDrsSolveSearch expanded 1Wed (made 21280, undominated 856)
      KheDrsSolveSearch expanded 1Thu (made 20381, undominated 819)
      KheDrsSolveSearch expanded 1Fri (made 0, undominated 0)
      KheDrsSolveSearch expanded 1Sat (made 0, undominated 0)
      KheDrsSolveSearch expanded 1Sun (made 0, undominated 0)
      KheDrsSolveSearch expanded 2Mon (made 0, undominated 0)
      KheDrsSolveSearch expanded 2Tue (made 0, undominated 0)
      KheDrsSolveSearch expanded 2Wed (made 0, undominated 0)
      KheDrsSolveSearch expanded 2Thu (made 0, undominated 0)
      KheDrsSolveSearch expanded 2Fri (made 0, undominated 0)
      KheDrsSolveSearch expanded 2Sat (made 0, undominated 0)
      KheDrsSolveSearch expanded 2Sun (made 0, undominated 0)
    ] KheDrsSolveSearch returning false
    ]

6 January 2023.  Still testing.

  I've got debug output which shows that 1Thu is the first day
  that any of the three open resources (HN_1, NU_6, NU_9) is
  initially assigned to a multi-day task:

    [ Task class 1Thu-1Fri
      1Thu:Late.5 asst_cost 0.00000, non_asst_cost 2.00000 (0, closed, NU_9; 1Thu:Late.5(1Thu3:1Thu:NU_9)1Fri:Late.3(1Fri3:1Fri:NU_9)) assigned expand_no
    ]

  It seems pretty clear that this multi-day task is causing expand by
  shifts to go wrong on 1Thu.  But why, exactly?  There is also some
  possibility that 1Thu is correct (dominance testing wrt full width
  of task, not just the first day), in which case the first real bug
  comes out on 1Fri.  In fact, it may be easier to work out why there
  are no solutions at all on 1Fri.

  Look at this shift on Friday.  One task is a must, yet the tree is empty:

    fixed: {<expand_fixed NU_9, 1Fri:Late.3(1Fri3:1Fri:-)>}
    free:  {HN_1, NU_6}

    [ Shift 27 (expand_min 1, expand_max 2)
      [ Task class 1Fri-1Sun
	1Fri:Day.0 asst_cost 0.00000, non_asst_cost 3.00000 (0, open, -; 1Fri:Day.0(1Fri2:1Fri:-)1Sat:Day.3(1Sat2:1Sat:-)1Sun:Day.3(1Sun2:1Sun:-)) expand_must
      ]
    ]

  This is evidently why no solutions are being made for 1Fri.  The
  task requires a head nurse, but surely HN_1 is available to take it.

  The problem is that the shift assignment object is incurring a
  hard cost of 2.00000:

    prev 52.06015 + dsa 2.00000 :: lim 52.06240

  We're getting complete rubbish in the evaluation:

    [ KheDrsExpanderMakeAndMeldShiftAsst(de, dsat, ...)
      eval ARC:NA=h1|NWHeadNurse=h1:1/1Fri:Day/NA=h1|NWHeadNurse=h1:1 ld1 0 + 0 ud1 1 dev1 0
      [ [1:27-27: 0,1]
	[ ASSIGNED_TASK(1Fri:Day.0, -, 1) pi 12267 oc 0 ]
      ]
      si 27 0+ ld2 1 ud2 1 dev2 0
      eval ARC:NA=h1|NWNurse=h1:1/1Sat:Day/NA=h1|NWNurse=h1:1 ld1 -30 ud1 -29 dev1 30
      [ [1:27-27: 0,1]
	[ ASSIGNED_TASK(1Sat:Day.3, -, 1) pi 12347 oc 0 ]
      ]
      si 27 0+ ld2 -29 ud2 -29 dev2 30
      eval ARC:NA=h1|NWNurse=h1:1/1Sun:Day/NA=h1|NWNurse=h1:1 ld1 10 ud1 -19107869 dev1 9
      [ [1:27-27: 0,1]
	[ ASSIGNED_TASK(1Sun:Day.3, -, 1) pi 12355 oc 0 ]
      ]
      si 27 0+ ld2 11 ud2 -19107869 dev2 10 C1.00000
      eval PRC:NA=h1|NWHeadNurse=h1:1:HeadNurse/1Fri:Day/NA=h1|NWHeadNurse=h1:1 ld1 0 + 0 ud1 1 dev1 0
      [ [1:27-27: 0,1]
	[ ASSIGNED_TASK(1Fri:Day.0, (null), 0) pi 13141 oc 0 ]
      ]
      si 27 0- ld2 0 ud2 0 dev2 0
      eval PRC:NA=h1|NWNurse=h1:1:Nurse/1Sat:Day/NA=h1|NWNurse=h1:1 ld1 -30 ud1 -29 dev1 29
      [ [1:27-27: 0,1]
	[ ASSIGNED_TASK(1Sat:Day.3, (null), 0) pi 13941 oc 0 ]
      ]
      si 27 0- ld2 -30 ud2 -30 dev2 30 C1.00000
      eval PRC:NA=h1|NWNurse=h1:1:Nurse/1Sun:Day/NA=h1|NWNurse=h1:1 ld1 10 ud1 -19107869 dev1 10
      [ [1:27-27: 0,1]
	[ ASSIGNED_TASK(1Sun:Day.3, (null), 0) pi 13949 oc 0 ]
      ]
      si 27 0- ld2 10 ud2 -19107870 dev2 10
      prev 52.06015 + dsa 2.00000 :: lim 52.06240
    ] KheDrsExpanderMakeAndMeldShiftAsst returning (no)

  So the evaluation needs a careful overhaul.

  Consider a task running on 1Fri, 1Sat, 1Sun, and a constraint
  which applies only on 1Sat.  On 1Sat, is it the first day?
  In the sense of the first day after the previous solution
  ends, no it isn't, but it is the first day of the constraint.
  So it is not clear what is right to do, and this is the problem.

  Perhaps we need to run through each day of the shift, building
  a solution for each day.  But we aren't set up to retrieve from
  shift assignment solutions.

  Perhaps we need a different definition of first day:  a day is a
  first day when it does not lie in the previous solution.

7 January 2023.  Still testing.  I seem to have fixed all the bugs; at
  any rate, expand by shifts is now producing the same number of
  undominated solutions as expand by resources in every case.  When
  solving HN_1, NU_6, NU_9, the number of solutions made by expand by
  shifts is significantly smaller than the number made by expand by
  resources, as shown by

    [ KheDynamicResourceSolverDoSolve(drs, false, false, expand_by_resources,
      0, 0, 0, IndexedUniform, false, -) cost 52.06240
      resources:  HN_1, NU_6, NU_9
      day ranges: 0-13
      [ KheDrsSolveSearch(3 resources, 14 days)
	KheDrsSolveSearch expanded 1Mon (made 285, undominated 61)
	KheDrsSolveSearch expanded 1Tue (made 5811, undominated 366)
	KheDrsSolveSearch expanded 1Wed (made 165534, undominated 856)
	KheDrsSolveSearch expanded 1Thu (made 120081, undominated 932)
	KheDrsSolveSearch expanded 1Fri (made 2767, undominated 122)
	KheDrsSolveSearch expanded 1Sat (made 345, undominated 20)
	KheDrsSolveSearch expanded 1Sun (made 100, undominated 30)
	KheDrsSolveSearch expanded 2Mon (made 10495, undominated 195)
	KheDrsSolveSearch expanded 2Tue (made 18289, undominated 756)
	KheDrsSolveSearch expanded 2Wed (made 1043, undominated 67)
	KheDrsSolveSearch expanded 2Thu (made 1040, undominated 18)
	KheDrsSolveSearch expanded 2Fri (made 16, undominated 6)
	KheDrsSolveSearch expanded 2Sat (made 24, undominated 10)
	KheDrsSolveSearch expanded 2Sun (made 0, undominated 0)
      ] KheDrsSolveSearch returning false
    ] KheDynamicResourceSolverDoSolve returning false

  and

    [ KheDynamicResourceSolverDoSolve(drs, false, false, expand_by_shifts,
      0, 0, 0, IndexedUniform, false, -) cost 52.06240
      resources:  HN_1, NU_6, NU_9
      day ranges: 0-13
      [ KheDrsSolveSearch(3 resources, 14 days)
	KheDrsSolveSearch expanded 1Mon (made 61, undominated 61)
	KheDrsSolveSearch expanded 1Tue (made 1399, undominated 366)
	KheDrsSolveSearch expanded 1Wed (made 21280, undominated 856)
	KheDrsSolveSearch expanded 1Thu (made 30212, undominated 932)
	KheDrsSolveSearch expanded 1Fri (made 1582, undominated 122)
	KheDrsSolveSearch expanded 1Sat (made 207, undominated 20)
	KheDrsSolveSearch expanded 1Sun (made 60, undominated 30)
	KheDrsSolveSearch expanded 2Mon (made 1855, undominated 195)
	KheDrsSolveSearch expanded 2Tue (made 3208, undominated 756)
	KheDrsSolveSearch expanded 2Wed (made 560, undominated 67)
	KheDrsSolveSearch expanded 2Thu (made 297, undominated 18)
	KheDrsSolveSearch expanded 2Fri (made 10, undominated 6)
	KheDrsSolveSearch expanded 2Sat (made 13, undominated 10)
	KheDrsSolveSearch expanded 2Sun (made 0, undominated 0)
      ] KheDrsSolveSearch returning false
    ] KheDynamicResourceSolverDoSolve returning false

  But when solving TR_25, TR_26, TR_28, the numbers of solutions made
  are identical:

    [ KheDynamicResourceSolverDoSolve(drs, false, false, expand_by_resources,
      0, 0, 0, IndexedUniform, false, -) cost 0.02505
      resources:  TR_25, TR_26, TR_28
      day ranges: 0-13
      [ KheDrsSolveSearch(3 resources, 14 days)
	KheDrsSolveSearch expanded 1Mon (made 125, undominated 125)
	KheDrsSolveSearch expanded 1Tue (made 240, undominated 64)
	KheDrsSolveSearch expanded 1Wed (made 64, undominated 56)
	KheDrsSolveSearch expanded 1Thu (made 56, undominated 24)
	KheDrsSolveSearch expanded 1Fri (made 336, undominated 116)
	KheDrsSolveSearch expanded 1Sat (made 1214, undominated 345)
	KheDrsSolveSearch expanded 1Sun (made 12209, undominated 1435)
	KheDrsSolveSearch expanded 2Mon (made 45493, undominated 1846)
	KheDrsSolveSearch expanded 2Tue (made 26674, undominated 1175)
	KheDrsSolveSearch expanded 2Wed (made 25531, undominated 2792)
	KheDrsSolveSearch expanded 2Thu (made 15928, undominated 1335)
	KheDrsSolveSearch expanded 2Fri (made 7173, undominated 904)
	KheDrsSolveSearch expanded 2Sat (made 2579, undominated 175)
	KheDrsSolveSearch expanded 2Sun (made 48, undominated 1)
      ] KheDrsSolveSearch returning true
    ] KheDynamicResourceSolverDoSolve returning false

  and

    [ KheDynamicResourceSolverDoSolve(drs, false, false, expand_by_shifts,
      0, 0, 0, IndexedUniform, false, -) cost 0.02505
      resources:  TR_25, TR_26, TR_28
      day ranges: 0-13
      [ KheDrsSolveSearch(3 resources, 14 days)
	KheDrsSolveSearch expanded 1Mon (made 125, undominated 125)
	KheDrsSolveSearch expanded 1Tue (made 240, undominated 64)
	KheDrsSolveSearch expanded 1Wed (made 64, undominated 56)
	KheDrsSolveSearch expanded 1Thu (made 56, undominated 24)
	KheDrsSolveSearch expanded 1Fri (made 336, undominated 116)
	KheDrsSolveSearch expanded 1Sat (made 1214, undominated 345)
	KheDrsSolveSearch expanded 1Sun (made 12209, undominated 1435)
	KheDrsSolveSearch expanded 2Mon (made 45493, undominated 1846)
	KheDrsSolveSearch expanded 2Tue (made 26674, undominated 1175)
	KheDrsSolveSearch expanded 2Wed (made 25531, undominated 2792)
	KheDrsSolveSearch expanded 2Thu (made 15928, undominated 1335)
	KheDrsSolveSearch expanded 2Fri (made 7173, undominated 904)
	KheDrsSolveSearch expanded 2Sat (made 2579, undominated 175)
	KheDrsSolveSearch expanded 2Sun (made 48, undominated 1)
      ] KheDrsSolveSearch returning true
    ] KheDynamicResourceSolverDoSolve returning false

  Presumably this means that the shift trie building found no cases
  of dominance at all, or none beyond what one-extra and two-extra
  dominance found.  This seems odd but not impossible.

  Did a four-resource test; it went well too.  Tried five resources;
  that ran very slowly, even with expand by shifts.  This test took
  61 minutes:

    [ KheDynamicResourceSolverDoSolve(drs, false, false, expand_by_shifts,
      0, 0, 0, IndexedUniform, false, -) cost 52.06240
      resources:  HN_1, HN_3, NU_6, NU_9, CT_24
      day ranges: 0-13
      [ KheDrsSolveSearch(5 resources, 14 days)
	KheDrsSolveSearch expanded 1Mon (made 225, undominated 225)
	KheDrsSolveSearch expanded 1Tue (made 10739, undominated 3026)
	KheDrsSolveSearch expanded 1Wed (made 428795, undominated 12671)
	KheDrsSolveSearch expanded 1Thu (made 2341317, undominated 60659)
	KheDrsSolveSearch expanded 1Fri (made 828867, undominated 45891)
	KheDrsSolveSearch expanded 1Sat (made 728766, undominated 16968)
	KheDrsSolveSearch expanded 1Sun (made 72789, undominated 8617)
	KheDrsSolveSearch expanded 2Mon (made 2201371, undominated 20636)
	KheDrsSolveSearch expanded 2Tue (made 563694, undominated 29935)
	KheDrsSolveSearch expanded 2Wed (made 18950, undominated 912)
	KheDrsSolveSearch expanded 2Thu (made 2935, undominated 178)
	KheDrsSolveSearch expanded 2Fri (made 119, undominated 56)
	KheDrsSolveSearch expanded 2Sat (made 460, undominated 102)
	KheDrsSolveSearch expanded 2Sun (made 2, undominated 0)
      ] KheDrsSolveSearch returning false
    ] KheDynamicResourceSolverDoSolve returning false

  Here is another run, using ungrouped tasks this time.  This test took
  157 minutes:

    [ KheDynamicResourceSolverDoSolve(drs, false, false, expand_by_shifts,
      0, 0, 0, IndexedUniform, false, -) cost 0.02045
      resources:  HN_1, HN_3, NU_6, NU_9, CT_24
      day ranges: 0-13
      [ KheDrsSolveSearch(5 resources, 14 days)
	KheDrsSolveSearch expanded 1Mon (made 613, undominated 328)
	KheDrsSolveSearch expanded 1Tue (made 16539, undominated 5017)
	KheDrsSolveSearch expanded 1Wed (made 47388, undominated 5463)
	KheDrsSolveSearch expanded 1Thu (made 734311, undominated 92525)
	KheDrsSolveSearch expanded 1Fri (made 534775, undominated 24138)
	KheDrsSolveSearch expanded 1Sat (made 3647674, undominated 95975)
	KheDrsSolveSearch expanded 1Sun (made 1690044, undominated 91932)
	KheDrsSolveSearch expanded 2Mon (made 3100104, undominated 19961)
	KheDrsSolveSearch expanded 2Tue (made 1843115, undominated 69697)
	KheDrsSolveSearch expanded 2Wed (made 3383, undominated 144)
	KheDrsSolveSearch expanded 2Thu (made 1403, undominated 99)
	KheDrsSolveSearch expanded 2Fri (made 498, undominated 264)
	KheDrsSolveSearch expanded 2Sat (made 1889, undominated 156)
	KheDrsSolveSearch expanded 2Sun (made 0, undominated 0)
      ] KheDrsSolveSearch returning false
    ] KheDynamicResourceSolverDoSolve returning false

  It started off better, but then got worse.

8 January 2023.  Tried caching, both with indexed_uniform and with
  list_uniform.  I did not complete the indexed_uniform run, but there
  was no discernible improvement.  The list_uniform run gave the same
  result as the one above (naturally); its running time was 160 minutes.

  Trying 5 resources, expand by shifts, with one-extra and two-extra
  selection turned on.

    [ KheDynamicResourceSolverDoSolve(drs, false, use_extra_selection,
      expand_by_shifts, 0, 0, 0, IndexedUniform, use_cache, ListUniform)
      cost 0.02045
      resources:  HN_1, HN_3, NU_6, NU_9, CT_24
      day ranges: 0-13
      [ KheDrsSolveSearch(5 resources, 14 days)
	KheDrsSolveSearch expanded 1Mon (made 613, undominated 328)
	KheDrsSolveSearch expanded 1Tue (made 16539, undominated 5017)
	KheDrsSolveSearch expanded 1Wed (made 45034, undominated 5463)
	KheDrsSolveSearch expanded 1Thu (made 734311, undominated 92525)
	KheDrsSolveSearch expanded 1Fri (made 525890, undominated 24138)
	KheDrsSolveSearch expanded 1Sat (made 3647674, undominated 95975)
	KheDrsSolveSearch expanded 1Sun (made 1622557, undominated 91932)
	KheDrsSolveSearch expanded 2Mon (made 3097465, undominated 19961)
	KheDrsSolveSearch expanded 2Tue (made 1843115, undominated 69697)
	KheDrsSolveSearch expanded 2Wed (made 3383, undominated 144)
	KheDrsSolveSearch expanded 2Thu (made 1403, undominated 99)
	KheDrsSolveSearch expanded 2Fri (made 464, undominated 264)
	KheDrsSolveSearch expanded 2Sat (made 1853, undominated 156)
	KheDrsSolveSearch expanded 2Sun (made 0, undominated 0)
      ] KheDrsSolveSearch returning false
    ] KheDynamicResourceSolverDoSolve returning false

  This is making fewer solutions, although not massively fewer.  Run
  time is 153 minutes, only a slight improvement.  And here we are
  removing assignments that contradict forced assignments:

    [ KheDynamicResourceSolverDoSolve(drs, false, use_extra_selection,
      expand_by_shifts, 0, 0, 0, IndexedUniform, false, -) cost 0.02045
      resources:  HN_1, HN_3, NU_6, NU_9, CT_24
      day ranges: 0-13
      [ KheDrsSolveSearch(5 resources, 14 days)
	KheDrsSolveSearch expanded 1Mon (made 613, undominated 328)
	KheDrsSolveSearch expanded 1Tue (made 16539, undominated 5017)
	KheDrsSolveSearch expanded 1Wed (made 45034, undominated 5463)
	KheDrsSolveSearch expanded 1Thu (made 734311, undominated 92525)
	KheDrsSolveSearch expanded 1Fri (made 525890, undominated 24138)
	KheDrsSolveSearch expanded 1Sat (made 3647674, undominated 95975)
	KheDrsSolveSearch expanded 1Sun (made 1622557, undominated 91932)
	KheDrsSolveSearch expanded 2Mon (made 3097465, undominated 19961)
	KheDrsSolveSearch expanded 2Tue (made 1843115, undominated 69697)
	KheDrsSolveSearch expanded 2Wed (made 3383, undominated 144)
	KheDrsSolveSearch expanded 2Thu (made 1403, undominated 99)
	KheDrsSolveSearch expanded 2Fri (made 464, undominated 264)
	KheDrsSolveSearch expanded 2Sat (made 1853, undominated 156)
	KheDrsSolveSearch expanded 2Sun (made 0, undominated 0)
      ] KheDrsSolveSearch returning false
    ]

  Running time is 152 minutes, a negligible improvement.

9 January 2023.  Running with rs_drs_daily_expand_limit=500:

    [ KheDynamicResourceSolverDoSolve(drs, false, use_extra_selection,
      expand_by_shifts, 500, 0, 0, IndexedUniform, false, -) cost 0.02045
      resources:  HN_1, HN_3, NU_6, NU_9, CT_24
      day ranges: 0-13
    [ KheDrsSolveSearch(5 resources, 14 days)
      KheDrsSolveSearch expanded 1Mon (made 613, undominated 328)
      KheDrsSolveSearch expanded 1Tue (made 16539, undominated 5017, kept 500)
      KheDrsSolveSearch expanded 1Wed (made 2415, undominated 953, kept 500)
      KheDrsSolveSearch expanded 1Thu (made 64561, undominated 16539, kept 500)
      KheDrsSolveSearch expanded 1Fri (made 5107, undominated 1182, kept 500)
      KheDrsSolveSearch expanded 1Sat (made 82533, undominated 13065, kept 500)
      KheDrsSolveSearch expanded 1Sun (made 71318, undominated 16673, kept 500)
      KheDrsSolveSearch expanded 2Mon (made 93443, undominated 5331, kept 500)
      KheDrsSolveSearch expanded 2Tue (made 65596, undominated 13145, kept 500)
      KheDrsSolveSearch expanded 2Wed (made 87, undominated 25)
      KheDrsSolveSearch expanded 2Thu (made 505, undominated 52)
      KheDrsSolveSearch expanded 2Fri (made 283, undominated 156)
      KheDrsSolveSearch expanded 2Sat (made 1853, undominated 156)
      KheDrsSolveSearch expanded 2Sun (made 0, undominated 0)
    ] KheDrsSolveSearch returning false
    ] KheDynamicResourceSolverDoSolve returning false

  It seems to have done just as well in the end (156 undominated on 2Sat).
  It took 127 seconds.  Running with rs_drs_daily_expand_limit=200:

    [ KheDynamicResourceSolverDoSolve(drs, false, use_extra_selection,
      expand_by_shifts, 200, 0, 0, IndexedUniform, false, -) cost 0.02045
      resources:  HN_1, HN_3, NU_6, NU_9, CT_24
      day ranges: 0-13
    [ KheDrsSolveSearch(5 resources, 14 days)
      KheDrsSolveSearch expanded 1Mon (made 613, undominated 328, kept 200)
      KheDrsSolveSearch expanded 1Tue (made 7536, undominated 2503, kept 200)
      KheDrsSolveSearch expanded 1Wed (made 746, undominated 421, kept 200)
      KheDrsSolveSearch expanded 1Thu (made 26651, undominated 9538, kept 200)
      KheDrsSolveSearch expanded 1Fri (made 2378, undominated 762, kept 200)
      KheDrsSolveSearch expanded 1Sat (made 34188, undominated 7505, kept 200)
      KheDrsSolveSearch expanded 1Sun (made 33485, undominated 9413, kept 200)
      KheDrsSolveSearch expanded 2Mon (made 33932, undominated 3858, kept 200)
      KheDrsSolveSearch expanded 2Tue (made 31927, undominated 7780, kept 200)
      KheDrsSolveSearch expanded 2Wed (made 51, undominated 19)
      KheDrsSolveSearch expanded 2Thu (made 407, undominated 52)
      KheDrsSolveSearch expanded 2Fri (made 259, undominated 144)
      KheDrsSolveSearch expanded 2Sat (made 1853, undominated 156)
      KheDrsSolveSearch expanded 2Sun (made 0, undominated 0)
    ] KheDrsSolveSearch returning false
    ]

  Again, just as good, running time 45 seconds.  And down to 100:

    [ KheDynamicResourceSolverDoSolve(drs, false, use_extra_selection,
      expand_by_shifts, 100, 0, 0, IndexedUniform, false, -) cost 0.02045
      resources:  HN_1, HN_3, NU_6, NU_9, CT_24
      day ranges: 0-13
    [ KheDrsSolveSearch(5 resources, 14 days)
      KheDrsSolveSearch expanded 1Mon (made 613, undominated 328, kept 100)
      KheDrsSolveSearch expanded 1Tue (made 3412, undominated 1437, kept 100)
      KheDrsSolveSearch expanded 1Wed (made 390, undominated 242, kept 100)
      KheDrsSolveSearch expanded 1Thu (made 10598, undominated 3805, kept 100)
      KheDrsSolveSearch expanded 1Fri (made 1185, undominated 469, kept 100)
      KheDrsSolveSearch expanded 1Sat (made 17952, undominated 4111, kept 100)
      KheDrsSolveSearch expanded 1Sun (made 16618, undominated 6352, kept 100)
      KheDrsSolveSearch expanded 2Mon (made 18859, undominated 2950, kept 100)
      KheDrsSolveSearch expanded 2Tue (made 16170, undominated 5406, kept 100)
      KheDrsSolveSearch expanded 2Wed (made 21, undominated 8)
      KheDrsSolveSearch expanded 2Thu (made 236, undominated 44)
      KheDrsSolveSearch expanded 2Fri (made 148, undominated 100)
      KheDrsSolveSearch expanded 2Sat (made 480, undominated 24)
      KheDrsSolveSearch expanded 2Sun (made 0, undominated 0)
    ] KheDrsSolveSearch returning false
    ] KheDynamicResourceSolverDoSolve returning false

  This, plus another run (days 14-27), took 33 seconds.
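  The daily expand limit amounts to keeping only the cheapest solutions
  each day, as in the "kept 500" and "kept 200" lines above.  A minimal
  sketch, with a hypothetical type standing in for a real KHE solution:

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-in for a KHE solution; only its cost matters here. */
typedef struct { double cost; } DrsSoln;

static int DrsSolnCmpCost(const void *a, const void *b)
{
  double ca = ((const DrsSoln *) a)->cost;
  double cb = ((const DrsSoln *) b)->cost;
  return (ca > cb) - (ca < cb);
}

/* Sort solns by increasing cost and return how many the caller should
   keep:  all of them if count <= limit, else just the cheapest limit. */
static int DrsSolnListTruncate(DrsSoln *solns, int count, int limit)
{
  qsort(solns, (size_t) count, sizeof(DrsSoln), DrsSolnCmpCost);
  return count <= limit ? count : limit;
}
```

  The runs above suggest that quite drastic truncation (down to 100
  per day) loses little or nothing on these instances.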

10 January 2023.  Added dominates_freq, which builds a frequency
  table for each day, showing how many dominance tests are done 
  before a newly created solution is declared to be dominated
  (if it is dominated).  Here's a typical result, for 1Tue:

    KheDrsSolveSearch expanded 1Tue (made 2936, undominated 1085, kept 100)
    ancestor frequencies: 0 0 3 147
    dominates frequencies:
      0 1 1 1 1 1 0 2 1 1 0 0 0 0 1 0 0 0 0 0
      0 0 0 1 0 0 0 0 0 6 0 2 4 0 0 0 0 1 0 0
      0 1 0 0 1 1 0 0 2 0 0 2 0 0 0 0 0 0 0 1
      1 0 0 0 1 0 0 0 1 0 0 0 0 0 0 1 1 0 0 0
      0 0 0 2 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0
      0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 1 0
      0 0 1 1 2 0 0 0 1 0 0 0 0 0 1 0 0 2 1 0
      0 1 2 0 0 1 1 0 0 0 0 0 1 0 0 0 0 1 0 0
      2 1 0 1 0 2 1 0 0 0 1 0 0 1 0 0 0 1 0 0
      0 0 1 0 0 1 0 0 0 1 0 1

  This shows that it takes almost 200 dom tests to prove that some
  new solutions are dominated, although most go faster.  Relatively
  few solutions, about 70, are in fact found to be already dominated,
  which seems odd.
  This was for expand by shifts; here it is for expand by resources:

    KheDrsSolveSearch expanded 1Tue (made 4486, undominated 1103, kept 100)
    ancestor frequencies: 0 14 15 178
    dominates frequencies (total 99):
      0 1 1 1 1 1 1 2 0 1 0 0 1 0 0 0 0 0 0 0
      0 0 0 0 0 0 0 0 0 3 0 6 2 0 0 0 0 0 0 0
      0 1 0 1 2 1 0 1 3 0 0 0 0 0 1 0 2 1 0 2
      2 2 2 0 0 0 0 0 0 0 0 0 1 1 1 1 1 0 0 0
      0 0 0 1 2 1 0 0 0 0 0 0 1 0 0 1 2 0 0 1
      2 1 1 1 0 0 1 1 0 1 0 0 0 0 0 0 2 0 0 0
      1 0 0 0 0 0 1 0 0 1 0 1 1 0 1 0 0 1 0 0
      1 0 1 1 0 2 0 0 1 0 0 1 0 0 1 0 1 1 0 3
      2 0 0 1 0 0 2 0 1 1 1 0 2 0 0 1 0 0 0 0
      0 0 0 0 0 0 1 0 1

  Only 99 solutions were found to be dominated already.  Actually
  this may just be on the last expansion of the day, but the
  point is clear:  many solutions take many tests to eliminate.
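  The recording behind dominates_freq can be sketched as follows, with
  hypothetical stand-in types rather than KHE's real ones:  a solution
  is reduced to a small signature, one solution dominates another when
  it is no worse in every entry, and the histogram counts how many
  failed tests precede the successful one.

```c
#include <assert.h>

#define SIG_LEN 3

/* Hypothetical stand-in:  a solution reduced to its dominance signature. */
typedef struct { int sig[SIG_LEN]; } DrsSoln;

/* Return 1 when s is no worse than t in every signature entry. */
static int DrsSolnDominates(const DrsSoln *s, const DrsSoln *t)
{
  int i;
  for (i = 0; i < SIG_LEN; i++)
    if (s->sig[i] > t->sig[i])
      return 0;
  return 1;
}

/* Test new_s against table[0 .. count-1].  If table[k] dominates it,
   increment freq[k] (k is the number of failed tests beforehand, so
   freq is the dominates_freq histogram) and return 1. */
static int DrsSolnIsDominated(const DrsSoln *table, int count,
  const DrsSoln *new_s, int *freq)
{
  int k;
  for (k = 0; k < count; k++)
    if (DrsSolnDominates(&table[k], new_s))
    {
      freq[k] += 1;
      return 1;
    }
  return 0;
}
```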

  Also, it is not true that a few solutions do most of the work.
  So a move-to-front heuristic (if a solution S is found to
  dominate a solution T, move S to the front of the list) would
  not improve things very much.  Given the array data structure,
  it would have to be a swap-to-front, which is not so good.
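  For what it's worth, the swap-to-front variant over the array would
  look something like the following sketch (hypothetical stand-in
  types again).  The swap disturbs the order of the rest of the list,
  which is why it is less attractive than a true move-to-front on a
  linked list.

```c
#include <assert.h>

#define SIG_LEN 3

typedef struct { int sig[SIG_LEN]; } DrsSoln;  /* hypothetical stand-in */

/* Return 1 when s is no worse than t in every signature entry. */
static int DrsSolnDominates(const DrsSoln *s, const DrsSoln *t)
{
  int i;
  for (i = 0; i < SIG_LEN; i++)
    if (s->sig[i] > t->sig[i])
      return 0;
  return 1;
}

/* As the plain scan, but when table[k] dominates new_s, swap it to
   the front so a frequently dominating solution is tested first. */
static int DrsSolnIsDominatedSwap(DrsSoln *table, int count,
  const DrsSoln *new_s)
{
  int k;
  for (k = 0; k < count; k++)
    if (DrsSolnDominates(&table[k], new_s))
    {
      DrsSoln tmp = table[0];
      table[0] = table[k];
      table[k] = tmp;
      return 1;
    }
  return 0;
}
```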

  Unwanted pattern constraints can be omitted from dominance
  testing, because they either succeed or fail, and if they
  fail, the solution gets eliminated anyway.  (In general, this
  applies whenever the weight exceeds the target cost.)  However,
  it is hard to see how this fact can be used to save time:
  these constraints cannot be omitted from signatures, since
  their expressions still need to be evaluated.
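  The parenthetical observation can be stated as a predicate:  a dom
  test whose weight exceeds the target cost never needs to take part
  in dominance comparisons, since any non-zero deviation already
  eliminates the solution.  A sketch, with a hypothetical type:

```c
#include <assert.h>

/* Hypothetical stand-in for one dominance test entry. */
typedef struct { double weight; } DrsDomTest;

/* A dom test is worth comparing during dominance testing only when
   its weight does not exceed the target cost; a heavier test either
   contributes nothing or eliminates the solution outright.  The
   entry must still appear in signatures, because its expression
   still has to be evaluated. */
static int DrsDomTestComparable(const DrsDomTest *dt, double target_cost)
{
  return dt->weight <= target_cost;
}
```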

12 January 2023.  Added a "Correlated expressions" section to
  the theory chapter which expresses some of the ideas of
  holistic dominance.  The problem with full holistic
  dominance is that it is looking expensive to evaluate,
  whereas correlated expressions are nearly as good, and
  they are much cheaper to evaluate.

13 January 2023.  I've documented a much more general plan for
  testing the dynamic resource solver.  The next step is to
  implement it.  It includes a use_correlated_exprs option.

14 January 2023.  Implementing the more general plan for
  testing the dynamic resource solver.  Right in the middle
  of it at the moment.  KheDynamicResourceVLSNSolve is done,
  although it will need an audit.

15 January 2023.  Worked over the documentation for
  KheDynamicResourceVLSNTest.  It's in pretty good shape
  now, ready to implement.

16 January 2023.  Audited the new documentation again.  All
  good, ready to implement.  Going well on the revision.
  KheSolveArgumentsUpdateWithDimPosElt is next.

17 January 2023.  Revising khe_sr_dynamic_vlsn.c.  Got through
  the whole file, but there is still work to do in KheTestDoExecute.

18 January 2023.  Still revising khe_sr_dynamic_vlsn.c.  I've
  written something that seems to be more or less complete,
  except that some of the captions print "still to do", and
  it only produces one graph, not three (that is easy to fix).

19 January 2023.  Still revising khe_sr_dynamic_vlsn.c.  All
  done and ready to test.

  Sorted out the documentation for the running time of priqueue.
  The code was fine, only badly documented.

  Made sure that runs that should produce the same optimal
  solution do so.  We're now comparing the final costs of runs
  which test the same resources, the same days, and for which
  all of the options that compromise optimality are off.

  Carried out a careful audit which involved some nice tidying
  up, generalizing KHE_TEST to KHE_RUNNER.

20 January 2023.  Still revising khe_sr_dynamic_vlsn.c.  I've
  finally documented and implemented a reasonable approach to
  choose(x) in rs_drs_test.  Also added the other two graphs.
  
  Finished a careful audit of the whole thing and done some
  testing.  It seems to be working well.  The whole thing has
  taken one week, starting 13 January.

21 January 2023.  Started work on correlated expressions.  I've
  made sure now that every dom test contains its expression.
  And I've written code that identifies pairs of adjacent
  expressions that are correlated (just the busy weekends
  case at the moment).

22 January 2023.  Working on correlated expressions.  Ended up in
  the middle of a long list of tedious modifications.

  Thinking about dom tests.  They decide which kind of test to
  make on the fly because the cache could be different from the
  main table.  But that is something we don't really care about,
  so it seems best to withdraw that option and require the cache
  to do the same kind of dominance testing as the main table
  (though not necessarily with the same data structure).  But
  now we have to decide what the interface should look like in
  that case.  Suggestion:  same interface, but with a
  non-trivial compatibility requirement between main_dom_kind
  and cache_dom_kind.  Documented and implemented now.

23 January 2023.  Working on correlated expressions.  Parent/child
  done and tested.  The tests seem to be working, for example

     true  0.00045  -[20]  0  1  -
     true  0.00045  C[21]  2  1  cbtc 0-2|s30      Constraint:10/TR_26

  I've looked at the graphs, they don't show any difference with
  or without correlated expressions, except that correlated
  expressions run marginally slower.

24 January 2023.  Working on correlated INT_SEQ_COST expressions.
  So far I have written and tested code to identify which expressions
  could be involved, to pair them up by fiddling with their
  postorder indexes (KheDrsHandleIntSeqCost), and to identify
  them in KheDrsSignerAddDomTest.

25 January 2023.  I've realized that many parent/child correspondences
  are being missed because the postorder indexes are not adjacent,
  because the child is shared.  At present we are only getting
  correlated (1) when the weekend is 2Sat and 2Sun.  Today I've been
  fixing this problem.  It's all written and audited, ready to test.

26 January 2023.  Testing correlated expressions.  I seem to be
  finding all the right correlated pairs now.  Actually 2Sat was
  the only active one because 1Sat is not part of a correlated
  pair and 3Sat is out of the open range.  But in principle some
  pairs were indeed being missed.

  Started work on CORR_2 by breaking up the documentation so that
  the algebra is moved into the counter and sequence monitor
  sections.  I've revised all except the sequence monitor section.

27 January 2023.  I've revised the section on correlated expressions
  for sequence monitors.  It needs an audit but the result is pretty
  darn good for implementing.  I need to store one extra table which
  is similar to the existing table so should be easy to calculate,
  and then the formula is just min(a + b, c + d) where a, b, c, and
  d are table lookups.
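
  Sketching that in C (illustrative names only, not the actual
  KHE identifiers):

```c
/* Illustrative sketch of the correlated cost formula above: the
   correlated available cost is min(a + b, c + d), where a, b, c and
   d are lookups in two similarly indexed tables.  Not KHE code. */
typedef long long cost_sketch_t;

static cost_sketch_t KheCorrAvailCostSketch(cost_sketch_t a,
  cost_sketch_t b, cost_sketch_t c, cost_sketch_t d)
{
  cost_sketch_t ab = a + b, cd = c + d;
  return ab < cd ? ab : cd;
}
```

  So given the four table lookups, the whole correlated cost is
  two additions and one comparison.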

28 January 2023.  Revised and audited the section on correlated
  expressions for sequence monitors.  All implemented now, though
  the implementation needs an audit.  It works by replacing the
  cost at the bottom of the table by a pair of costs.  I'm now
  loading the correct values for psi and psi0 into these pairs.
  All that remains is retrieving and using them.

29 January 2023.  Pondering yesterday's brilliant idea about how
  multiple limit active intervals monitors can be correlated, when
  one non-zero value means that all the others must be zero.

30 January 2023.  Working on generalizing the correlated sequence
  monitors documentation so that it can handle any number of monitors.

1 February 2023.  Working on generalizing the correlated sequence
  monitors documentation so that it can handle any number of monitors.
  Added a KHE_DRS_CORRELATOR type which will handle correlation.  I've
  made sure that only resource on day signers have correlators,
  because only they handle correlation.  Also I'm now setting
  fields of expressions to say whether each is a positive, a
  negative, or a single.  If single, there is an index to say
  which one.

3 February 2023.  Working on generalizing correlated sequence
  monitors so that they can handle any number of monitors.  All
  done, needs a final audit.

4 February 2023.  Audited the generalized correlated sequence
  monitors code and correlators generally, and started to test.

5 February 2023.  Testing the generalized correlated sequence
  monitors code and correlators generally.  Found and fixed a
  very obvious bug, and now things seem to be working.  Did a
  run on four resources:  correlated expressions are faster,
  but not dramatically faster.

  Got some results for 5 resources.  Correlation is running
  about 30% faster than uncorrelated, but it is still too slow:
  about 7,000 seconds, that is, almost two hours.

  Got some debug output, showing some cases where correlated differs
  from uncorrelated.  It all looks good.

6 February 2023.  Implemented Corr3, started testing.  It seems to
  be working, I'm adding debug code to find out whether Corr3 has
  made any improvement on Corr2.

7 February 2023.  I've seen no evidence that corr3 improves on
  corr2.  However, keep moving.  I've started thinking about
  the first Saturday.

  I've imported the Psi notation into counter monitors.  I need
  to use it now to get a correct expression for the Saturday case.

8 February 2023.  Working on correcting corr1.  I've completed the
  new documentation; it is much better, and seems to be ready for
  implementing and testing.  But it's new and still feels unfamiliar.

9 February 2023.  Working on correcting corr1.  I've audited and
  revised the documentation, including checking carefully whether
  it is correct for 'weekends' of more than two days (it is).
  Implemented 5-, 4-, 3-, 2-, and 1-dimensional tables of
  KHE_DRS_COST_TUPLE.  

10 February 2023.  Working on correcting corr1.  I've finished
  the conversion to the new, more generic multi-dimensional
  table types, and done a quick test which seems to have worked.
  I've also added corr_dom_table to INT_SUM_COST, and ensured
  that it is initialized correctly, at the same time that the
  previous dom_table is initialized.

  I've done some boilerplate for the new Corr1 and Corr2 tests,
  including renaming Corr3 -> Corr4 and Corr2 -> Corr3 and adding
  a new KHE_DRS_DOM_TEST_CORR2_CHILD dom test tag.

11 February 2023.  I've completed the implementation of the
  revised corr1 and corr2.  I've also audited everything.
  Setting corr_dom_table4 is a bit string-and-chicken-wire,
  but it seems to be right.

12 February 2023.  Added debug output for Corr1 and Corr2.

  Here is a case where Psi turns out to be positive:

    [ KheDrsExprIntSumCostFindCorrAvailCost(Constraint:13/HN_0,
            e 1, a1 0, a2 1, l1 0, l2 22)
	Gamma(e 1, l 22, y 1) = f(1) 0.00020 - f(0) 0.00000 = 0.00020
	Gamma(e 1, l 0, y 0) = f(0) 0.00000 - f(0) 0.00000 = 0.00000
      y1 =  0, y2 =  1:  0.00020 -  0.00000 =  0.00020
	Gamma(e 1, l 22, y 1) = f(1) 0.00020 - f(0) 0.00000 = 0.00020
	Gamma(e 1, l 0, y 1) = f(0) 0.00000 - f(0) 0.00000 = 0.00000
      y1 =  1, y2 =  1:  0.00020 -  0.00000 =  0.00020
    KheDrsExprIntSumCostFindCorrAvailCost: res > 0

  Constraint:13 places a min limit of 15 and a max limit of 22 on the
  number of shifts worked.  So why is it working out Corr2?  Because
  everyone is.  I've now added code so that only int sum cost nodes
  with at least one child that spans more than one day build a corr
  table.  But still we get one example of the real thing:

    [ KheDrsExprIntSumCostFindCorrAvailCost(Constraint:10/HN_0,
          e 1, a1 0, a2 1, l1 0, l2 2)
	Gamma(e 1, l 2, y 1) = f(1) 0.00030 - f(0) 0.00000 = 0.00030
	Gamma(e 1, l 0, y 0) = f(0) 0.00000 - f(0) 0.00000 = 0.00000
      y1 =  0, y2 =  1:  0.00030 -  0.00000 =  0.00030
	Gamma(e 1, l 2, y 1) = f(1) 0.00030 - f(0) 0.00000 = 0.00030
	Gamma(e 1, l 0, y 1) = f(0) 0.00000 - f(0) 0.00000 = 0.00000
      y1 =  1, y2 =  1:  0.00030 -  0.00000 =  0.00030
      KheDrsExprIntSumCostFindCorrAvailCost: res > 0
    ]

  Constraint:10 is a limit of 2 on the number of busy weekends, just
  what CORR1 is supposed to be handling.  And in this case S2 can't
  help going over the limit:  l2 says it already has 2 busy weekends,
  and a2 says that there is definitely going to be another.

  Next problem:

    calling KheDrsDim5TableGet(0x7f2f371cd960, -1245208591)

  Actually I'm getting random integers for the second argument.
  Look at this debug output:

    1 open children, open range 6-6

  Opening the first 14 days should give 2 open children and
  open range 6-13 or some such.

13 February 2023.  Added debug output for Corr1 and Corr2.  Here
  is our problem:

    [ INT_SUM_COST(Constraint:9/HN_1, LINEAR, 1) pi 947 oc 1
      [ OR(cs 0, iv 1) pi 943 oc 2
	[ BUSY_DAY(HN_1, 1Sat, iv 0) pi 493 oc 0 ]
	[ BUSY_DAY(HN_1, 1Sun, iv 0) pi 494 oc 0 ]
      ]
      [ OR(cs 0, iv 0) pi 944 oc 2
	[ BUSY_DAY(HN_1, 2Sat, iv 0) pi 500 oc 0 ]
	[ BUSY_DAY(HN_1, 2Sun, iv 0) pi 501 oc 0 ]
      ]
      [ OR(cs 0, iv 0) pi 945 oc 0
	[ BUSY_DAY(HN_1, 3Sat, iv 0) pi 507 oc 0 ]
	[ BUSY_DAY(HN_1, 3Sun, iv 0) pi 508 oc 0 ]
      ]
      [ OR(cs 2, iv 1) pi 946 oc 0
	[ BUSY_DAY(HN_1, 4Sat, iv 1) pi 514 oc 0 ]
	[ BUSY_DAY(HN_1, 4Sun, iv 1) pi 515 oc 0 ]
      ]
    ]

  Look at the pi fields:  the busy day ones are later than the
  OR ones.  How does that happen?  It wrecks everything, obviously.

  OK, this is it.  The OR node is opened and among other things
  it tries to set up its dom test.  But this involves querying
  the parent about how many unassigned open children to the right
  there are, and at this point those children are not complete.
  So we need two iterations:  one to open the expressions, and
  then a second to set up their dom tests.  I've done this now,
  the second call is KheDrsExprBuildDomTests.

  Did a 4-resource run.  Results look OK, although they are
  not striking.

  What's this:

      false  0.00165  UN[10]  1  0  lbtc 0-1|h1  Constraint:1/HN_1/24

  Why is the cost not negative?  The decision is right, because
  there are cases in the second solution that are not dominated
  by the first, but the cost looks quite strange.  I've looked
  into KheCostShow but I can't see what's wrong.  The trouble is,
  a negative hard cost is awkward to represent and print.  Its
  hard and soft parts each have their own sign, and yet when printed
  there is only one sign.  So things are going to get awkward.
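
  To see why, here is a sketch that assumes the usual hard * 2^32 +
  soft packing of costs into one 64-bit value (the encoding is my
  assumption here, not lifted from the KHE source).  A difference of
  two costs can decompose into a negative hard part and a positive
  soft part, which no single printed sign can convey:

```c
#include <stdint.h>

typedef int64_t cost_sk_t;

/* Pack hard and soft parts into one 64-bit value; assumed encoding,
   not taken from the KHE source. */
static cost_sk_t CostMake(int32_t hard, int32_t soft)
{
  return (cost_sk_t)hard * ((cost_sk_t)1 << 32) + (cost_sk_t)soft;
}

/* Floor-divide (rather than C's truncating division) so that a value
   like hard -1, soft +165 decomposes back into those two parts. */
static int32_t CostHard(cost_sk_t c)
{
  cost_sk_t d = (cost_sk_t)1 << 32;
  cost_sk_t q = c / d;
  if (c % d != 0 && (c < 0) != (d < 0))
    q--;
  return (int32_t)q;
}

static int32_t CostSoft(cost_sk_t c)
{
  return (int32_t)(c - (cost_sk_t)CostHard(c) * ((cost_sk_t)1 << 32));
}
```

  Under this assumed encoding, a cost difference with hard part -1
  and soft part +165 is one number, but printing it as "hard.soft"
  with a single leading sign misstates one of the two parts.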

  Here's something from my to-do list that I have abandoned.
  Look for even more kinds of correlated expressions?  For the
  moment I'm out of ideas, and indeed it is hard to see where
  more correlation could be found.  What about complete weekends?
  Do they correlate in some way with busy weekends?  If a weekend
  is not busy, it is a complete (that is, completely free) weekend,
  so there is some correlation there:

      Saturday    Sunday    Busy weekend     Complete weekend
      -------------------------------------------------------
      Free        Free      No               Yes
      Free        Busy      Yes              No 
      Busy        Free      Yes              No 
      Busy        Busy      Yes              Yes
      -------------------------------------------------------

  The last two values are never No and No.  But so what?
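
  As a quick check of the table, a tiny C enumeration (illustrative
  definitions, not KHE code) confirms that the last two columns are
  never both No:

```c
#include <stdbool.h>

/* Busy weekend: at least one of Saturday and Sunday is busy.
   Complete weekend, in the sense of the table above: the two days
   agree, both busy or both free. */
static bool KheBusyWeekendSketch(bool sat, bool sun)
{
  return sat || sun;
}

static bool KheCompleteWeekendSketch(bool sat, bool sun)
{
  return sat == sun;
}

/* Return true if no Saturday/Sunday pattern makes both predicates
   false, i.e. the table's last two columns are never No and No. */
static bool KheNeverBothNo(void)
{
  for (int sat = 0; sat <= 1; sat++)
    for (int sun = 0; sun <= 1; sun++)
      if (!KheBusyWeekendSketch(sat, sun) &&
          !KheCompleteWeekendSketch(sat, sun))
        return false;
  return true;
}
```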

  Something else from my to-do list that I have abandoned.
  "Holistic dominance".  Find the maximum, over all assignments
  x to one resource r, of the total reduction in available cost
  of all constraints applicable to r.  This will be less than
  the total of the maximums, which is what we are using now.
  This is not clearly defined, but there is something there.
  Consider this for example:

    [ KheDrsSolnDominatesDebug(soln1 0.01765, soln2 0.01820)
	Res    Avail  Details
      ----------------------------------------------------------------
       true  0.00055  sig cost
       true  0.00055  sig[ 0] 11 11  cbtc 5-11|s20     Constraint:11/HN_1
       true  0.00055  sig[ 1]  0  0  laic 2-5|s15      Constraint:14/HN_1
       true  0.00055  sig[ 2]  0  0  laic 2-5|s15      Constraint:16/HN_1
       true  0.00055  sig[ 3]  0  0  laic 3-5|s15      Constraint:17/HN_1
       true  0.00025  sig[ 4]  0  1  laic 2-5|s30      Constraint:19/HN_1
      false -0.00035  sig[ 5]  2  0  laic 2-4|s30      Constraint:22/HN_1
      ----------------------------------------------------------------
    ]

  Here Constraint:19 is 2-5 consecutive free days, Constraint:22
  is 2-4 consecutive busy days.  To incur the cost difference of
  30 that is being allowed for at Constraint:19, the next day
  would have to be free (and the one after that would have to be
  busy).  To incur the cost difference of 60 that is being allowed
  for at Constraint:22, the next 4 days would have to be busy.  You
  can't have both.  Holistic dominance would get this.

  It doesn't seem to be convenient to change the order of the
  monitors on each solve, because they are assigned their
  postorder indexes when the solver is created.  So I have
  saved the old Secs file that sorted by monitor limits, and
  now I have sorted by monitor weights and done the same run.

     Run time sorting by decreasing max limits:  about 11 secs
     Run time sorting by decreasing weights:     about 9 secs

  Well worth doing.  So we'll stick with sorting by weights.
  We might do even better if we test all hard weights first.
  It's interesting that we got such a big improvement.  It
  suggests that anything we can do to speed up one dom test
  will be worth doing.

14 February 2023.  I've been thinking about ways to test all
  the hard constraints first in dominance testing.  Current
  running time is about 9 secs (see above).  We'll see whether
  this change improves on that.

  Off-site backup done.

  Started implementing signature sets and signer sets.
  Up to line 25084 (KheDrsSignerEvalSignature).  Going steadily.

15 February 2023.  Working on signer sets and signature sets.
  Going well.

16 February 2023.  Working on signer sets and signature sets.
  I seem to be at the end of the things to do, but now I have
  to audit what I've done and see if it all makes sense.
  I've audited KHE_DRS_SIGNATURE, KHE_DRS_SIGNATURE_SET,
  KHE_DRS_SIGNER, and KHE_DRS_SIGNER_SET; they all seem good.

17 February 2023.  Still auditing the new code.  Actually I'm
  running out of stuff to audit.  I think it's time to test.
  Struggling with a nasty bug - storing a state of 4 when
  there hasn't been time to accumulate that much.  Help!

18 February 2023.  I may have worked out the bug.  Signatures go
  into solutions where they have to remain long-term.  But when an
  assignment to a shift is finished, it goes on the free list and
  its signature gets cleared and re-used when it comes back again.
  I've fixed this by adding reference counting to signatures.
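
  A minimal sketch of the fix, with illustrative names rather than
  the actual KHE ones:

```c
/* A signature shared between long-term solutions and the free list.
   Whoever drops the last reference is the one allowed to recycle it. */
typedef struct khe_drs_sig_sketch {
  int ref_count;
  /* ... signature state omitted ... */
} KHE_DRS_SIG_SKETCH;

static void KheDrsSigSketchRef(KHE_DRS_SIG_SKETCH *sig)
{
  sig->ref_count++;
}

/* Return 1 if the signature may now be cleared and re-used. */
static int KheDrsSigSketchUnRef(KHE_DRS_SIG_SKETCH *sig)
{
  return --sig->ref_count == 0;
}
```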

  All working now.  I did a run with four resources; it came out
  slightly faster than previously.  Presumably this is due to
  using somewhat less memory.  It's a good start.

  Updated the dominance code to visit hard constraints before
  soft ones.  Amazingly, it's reduced the time to 6.4 seconds:

     Run time sorting by decreasing max limits:  about 11 secs
     Run time sorting by decreasing weights:     about 9 secs
     Run time visiting hard entries first:       about 6.4 secs

  Anyway it's working and working well.

19 February 2023.  Trying to do a run that reassigns five
  trainee nurses over two weeks.  But it was taking forever
  so I've canned it.  My previous run that completed 5 nurses
  was for ordinary nurses, not trainees, so let's try that.

  [ KheDynamicResourceVLSNSolve(INRC2-4-030-1-6291, Nurse, options)
    [ KheDynamicResourceSolverDoSolve(drs, false, extra_selection, expand_by_shifts, correlated_exprs, 0, 0, 0, IndexedUniform, false, -) cost 52.05845
      resources:  HN_1, HN_3, NU_6, NU_9, CT_24
      day ranges: 0-13
    [ KheDrsSolveSearch(5 resources, 14 days)
      KheDrsSolveSearch ending day 1Mon (made 613, undominated 328)
      KheDrsSolveSearch ending day 1Tue (made 14139, undominated 3767)
      KheDrsSolveSearch ending day 1Wed (made 34377, undominated 4870)
      KheDrsSolveSearch ending day 1Thu (made 590539, undominated 64707)
      KheDrsSolveSearch ending day 1Fri (made 425047, undominated 22449)
      KheDrsSolveSearch ending day 1Sat (made 3380837, undominated 71408)
      KheDrsSolveSearch ending day 1Sun (made 1193241, undominated 18382)
      KheDrsSolveSearch ending day 2Mon (made 1084641, undominated 19961)
      KheDrsSolveSearch ending day 2Tue (made 1766849, undominated 62512)
      KheDrsSolveSearch ending day 2Wed (made 3072, undominated 144)
      KheDrsSolveSearch ending day 2Thu (made 1403, undominated 95)
      KheDrsSolveSearch ending day 2Fri (made 457, undominated 264)
      KheDrsSolveSearch ending day 2Sat (made 1853, undominated 156)
      KheDrsSolveSearch ending day 2Sun (made 0, undominated 0)
    ] KheDrsSolveSearch returning false
    ] KheDynamicResourceSolverDoSolve returning false
    ] KheDynamicResourceVLSNSolve returning false
      ] rdv end, 72.8 mins used

  So this took 73 minutes.  Last time (7 January 2023) it took 157
  minutes for the same resources and the same days.  Both runs are
  for ungrouped tasks.  Here is that old test copied from 7 Jan:

    [ KheDynamicResourceSolverDoSolve(drs, false, false, expand_by_shifts,
      0, 0, 0, IndexedUniform, false, -) cost 0.02045
      resources:  HN_1, HN_3, NU_6, NU_9, CT_24
      day ranges: 0-13
      [ KheDrsSolveSearch(5 resources, 14 days)
	KheDrsSolveSearch expanded 1Mon (made 613, undominated 328)
	KheDrsSolveSearch expanded 1Tue (made 16539, undominated 5017)
	KheDrsSolveSearch expanded 1Wed (made 47388, undominated 5463)
	KheDrsSolveSearch expanded 1Thu (made 734311, undominated 92525)
	KheDrsSolveSearch expanded 1Fri (made 534775, undominated 24138)
	KheDrsSolveSearch expanded 1Sat (made 3647674, undominated 95975)
	KheDrsSolveSearch expanded 1Sun (made 1690044, undominated 91932)
	KheDrsSolveSearch expanded 2Mon (made 3100104, undominated 19961)
	KheDrsSolveSearch expanded 2Tue (made 1843115, undominated 69697)
	KheDrsSolveSearch expanded 2Wed (made 3383, undominated 144)
	KheDrsSolveSearch expanded 2Thu (made 1403, undominated 99)
	KheDrsSolveSearch expanded 2Fri (made 498, undominated 264)
	KheDrsSolveSearch expanded 2Sat (made 1889, undominated 156)
	KheDrsSolveSearch expanded 2Sun (made 0, undominated 0)
      ] KheDrsSolveSearch returning false
    ] KheDynamicResourceSolverDoSolve returning false

  Although the initial costs are different, that is due to the state
  of the solution in the other resource type.  The results are much
  the same, but the running time is better now.  The two tests are
  for ungrouped tasks.

  As before, I had to abandon the 5-trainee run:

    [ KheDrsSolveSearch(5 resources, 14 days)
      KheDrsSolveSearch ending day 1Mon (made 3041, undominated 3041)
      KheDrsSolveSearch ending day 1Tue (made 433770, undominated 34418)
      ... (killed before getting this far)
    ]

  The number 3041 is reasonable, as the following argument shows.
  Each trainee has a choice of 4 shifts plus a free day, making 5
  choices altogether.  (Because there are excess slots we can say
  that in practice all 4 shifts are available to all trainees.)
  So there are about 5 * 5 * 5 * 5 * 5 = 3125 choices.  And on
  subsequent days, for each undominated solution on the previous
  day there are about 3125 choices, although some of them will be
  killed off very early by hard constraints, which explains why
  we do not generate anything near 3041 * 3125 day 2 solutions.
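
  The back-of-envelope count, as a C one-liner (illustrative,
  obviously):

```c
/* 5 choices per trainee (4 shifts or a free day), 5 trainees:
   5^5 = 3125 candidate first-day solutions, close to the 3041
   actually made. */
static long KheChoiceCount(int choices_per_resource, int resource_count)
{
  long n = 1;
  while (resource_count-- > 0)
    n *= choices_per_resource;
  return n;
}
```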

  Tried the same test with the priority queue - 76.3 mins, a
  few minutes slower, the usual result.

20 February 2023.  Did some more testing.  Found that reassigning
  four trainees was still quite slow, minutes rather than seconds.
  One run took about 50 minutes.  It did find a small improvement.
  Tried with caching, 48.3 mins, so there was a slight speedup.

  I've added a sig1 == sig2 test, but it made no difference to
  the running time.  But proper caching very likely would.

21 February 2023.  Finished documenting dominance test caching,
  and started to implement.  Still some way to go, but I have
  implemented KheDrsResourceBuildDominanceTestCache.

22 February 2023.  Implementing dominance test caching.  All
  done, needs an audit, then I'm ready to test.

  KheDrsSignerDominatesStatesOnly stops early, when avail cost
  drops below zero.  This is not suitable for cache building.
  So I've added a stop_on_neg parameter.

  Using the new code on four non-trainee resources, the time was
  23.7 secs.  Without the new code the time was 24.3 secs.  So
  there has been an improvement, but not spectacular.  But before
  I seemed to be getting 9 seconds.  Perhaps it's setup time?
  Yes, the actual solve time is 6.3 seconds without the new code.
  This is for nurses, the solve time for trainees is much longer:

     Four resources                      Nurses         Trainees
     ----------------------------------------------------------------
     Without dominance test caching         6.3         abandoned
     With dominance test caching            6.4         abandoned
     ----------------------------------------------------------------

  This is not counting setup time.  Not much to show for it.

  New code:

    [ KheDrsSolveSearch(4 resources, 14 days)
      KheDrsSolveSearch ending day 1Mon (made 369, undominated 219)
      KheDrsSolveSearch ending day 1Tue (made 8682, undominated 2029)
      KheDrsSolveSearch ending day 1Wed (made 16054, undominated 2159)
      KheDrsSolveSearch ending day 1Thu (made 24479, undominated 3467)
      KheDrsSolveSearch ending day 1Fri (made 1193, undominated 149)
      KheDrsSolveSearch ending day 1Sat (made 4917, undominated 435)
      KheDrsSolveSearch ending day 1Sun (made 5067, undominated 493)
      KheDrsSolveSearch ending day 2Mon (made 10759, undominated 894)
      KheDrsSolveSearch ending day 2Tue (made 45268, undominated 4979)
      KheDrsSolveSearch ending day 2Wed (made 195, undominated 23)
      KheDrsSolveSearch ending day 2Thu (made 280, undominated 26)
      KheDrsSolveSearch ending day 2Fri (made 263, undominated 154)
      KheDrsSolveSearch ending day 2Sat (made 1555, undominated 103)
      KheDrsSolveSearch ending day 2Sun (made 0, undominated 0)
    ] KheDrsSolveSearch returning false
    ] KheDynamicResourceSolverDoSolve returning false, 6.4 secs

  Old code:

    [ KheDrsSolveSearch(4 resources, 14 days)
      KheDrsSolveSearch ending day 1Mon (made 369, undominated 219)
      KheDrsSolveSearch ending day 1Tue (made 8682, undominated 2029)
      KheDrsSolveSearch ending day 1Wed (made 16054, undominated 2159)
      KheDrsSolveSearch ending day 1Thu (made 24479, undominated 3467)
      KheDrsSolveSearch ending day 1Fri (made 1193, undominated 149)
      KheDrsSolveSearch ending day 1Sat (made 4917, undominated 435)
      KheDrsSolveSearch ending day 1Sun (made 5067, undominated 493)
      KheDrsSolveSearch ending day 2Mon (made 10759, undominated 894)
      KheDrsSolveSearch ending day 2Tue (made 45268, undominated 4979)
      KheDrsSolveSearch ending day 2Wed (made 195, undominated 23)
      KheDrsSolveSearch ending day 2Thu (made 280, undominated 26)
      KheDrsSolveSearch ending day 2Fri (made 263, undominated 154)
      KheDrsSolveSearch ending day 2Sat (made 1555, undominated 103)
      KheDrsSolveSearch ending day 2Sun (made 0, undominated 0)
    ] KheDrsSolveSearch returning false
    ] KheDynamicResourceSolverDoSolve returning false, 6.4 secs

  Same solutions but no discernible speedup.

23 February 2023.  For want of anything better, I've implemented
  approximate dominance.  All done, documented, and tested.  The
  results are pretty much as expected.  The sweet spot seems to be
  rs_drs_dom_approx=3, which expands the initial avail cost by 30%.
  It reduces run time from 6.3 secs to 4.0 secs but finds about the
  same number of undominated solutions on the second last day.
  Whether it is better than a simple expand limit on each day
  is hard to say.

24 February 2023.  Another cycle of chemo, not expecting to get
  much work done.  But I did change the label on the X axis of
  the graphs in non-Z cases from the generic "Options" to an
  echo of the rs_drs_test dimension.

1 March 2023.  Thinking about the first two days of solving for
  five trainees.  I really need to do better at that.

  Managed to add another test to expand by shifts that will
  cut off some subtrees.  But nothing remarkable, sadly.  I've
  tested it and it is still finding the same undominated
  solutions as expand by resources.

  Here is a slow but interesting run:

    [ KheDynamicResourceSolverDoSolve(drs, false, extra_selection,
        expand_by_resources, correlated_exprs, 0, 0, 0, 0,
	IndexedUniform, cache, IndexedUniform) cost 0.02000
	resources:  TR_25, TR_26, TR_28, TR_29
	day ranges: 0-13
    [ KheDrsSolveSearch(4 resources, 14 days)
      KheDrsSolveSearch ending day 1Mon (made 621, undominated 621)
      KheDrsSolveSearch ending day 1Tue (made 12876, undominated 2200)
      KheDrsSolveSearch ending day 1Wed (made 22838, undominated 3126)
      KheDrsSolveSearch ending day 1Thu (made 77319, undominated 6989)
      KheDrsSolveSearch ending day 1Fri (made 723949, undominated 39241)
      KheDrsSolveSearch ending day 1Sat (made 1201278, undominated 38506)
      KheDrsSolveSearch ending day 1Sun (made 784579, undominated 7339)
      KheDrsSolveSearch ending day 2Mon (made 509345, undominated 5388)
      KheDrsSolveSearch ending day 2Tue (made 607987, undominated 17457)
      KheDrsSolveSearch ending day 2Wed (made 1330886, undominated 52787)
      KheDrsSolveSearch ending day 2Thu (made 3163494, undominated 51587)
      KheDrsSolveSearch ending day 2Fri (made 770753, undominated 18878)
      KheDrsSolveSearch ending day 2Sat (made 189662, undominated 4284)
      KheDrsSolveSearch ending day 2Sun (made 12, undominated 1)
    ] KheDrsSolveSearch returning true (new best 0.01970)
    ] (52.8 mins)

  Expansion by shifts also got there, in 50.2 minutes.

2 March 2023.  Is this right?

    [ Corr4  v1 v2       Psi     Psi0     diff min_diff sum_Psi0
      4F[ 5]  0  3  -0.00030  0.00000 -0.00030 -0.00030  0.00000 laic 2-4|s30      Constraint:22/HN_1
      4L[ 4]  1  0   0.00000 -0.00030  0.00030 -0.00030 -0.00030 laic 2-5|s30      Constraint:19/HN_1
    ] corr4 -0.00060, corr3 -0.00060, uncorrelated -0.00060

  Yes.  If this is followed by one busy day and then one free day, S2 will
  incur no cost but S1 will incur cost 30 for each of the two constraints.

3 March 2023.  KheDrsSolnDominates and KheDrsSolnDominatesDebug now
  merged.  KheDrsOneExtraSelectionDominates and
  KheDrsOneExtraSelectionDominatesDebug now merged.
  KheDrsTaskClassMinCost and KheDrsTaskClassMinCostDebug now merged.
  Definitely the right thing to do.  But some of the debug output
  has degraded.  I'll fix that when I have to.

4 March 2023.  I'm out of ideas, so I've deleted all old code and
  generally tidied up.  After that the file was 24629 lines long.

  Tried some 4-week runs with 4 resources and a daily limit of 2000.
  Also tried some 4-week runs with 5 resources and a daily limit of
  200.  Very interesting runs but no new bests.  The final cost was
  2000, which is a long way above my best result (1835) and a very
  long way above what Legrain got (1695 or 1685).

  I found a problem with different results from tests that should
  have come out the same.

5 March 2023.  I've established that the old version does not have this
  problem.  So something went wrong in the recent tidy up.  I think the
  easiest way forward is to redo the tidy up, checking along the way
  that nothing has gone wrong.

6 March 2023.  Redoing the recent tidy up, now testing along the way.
  Up to Submodule "KHE_DRS_ASST_TO_TASK".  This time I got 24636 lines,
  which is 7 more than last time.  And it's running correctly this time.

  I found the problem; it was a change I made to prune_trigger which
  had the unfortunate effect of causing it to be applied even when
  its value was 0, that is, when it was supposed to be off.  No
  big deal.

  VLSN search was not doing the right thing when choosing day
  sets at random.  I've fixed that.  The random choice of days
  was not very random.  I've fixed that too.

  I've just finished a 64.2-second test that found two new bests:

    KheDynamicResourceSolverDoSolve returning true (new 52.05810 < old 52.05845)
    KheDynamicResourceSolverDoSolve returning true (new 0.01950 < old 0.01965)

  So that's encouraging.

7 March 2023.  Fixed over-large constants in khe_sm_random.c.

  Working on a problem with one of the tests, where no extensions
  of the initial solution are found.  This is a problem for both
  KheDrsSolnExpandByResources and KheDrsSolnExpandByShifts, so
  I am working on it via KheDrsSolnExpandByResources.  So far I
  have established that KheDrsSolnExpandByResources is doing the
  usual recursive search, but the recursive calls generate at most
  3 assignments; they never generate 4.  There
  are assignments available to try:

      trying 4 assignments for NU_6:
	trying 3 assignments for NU_8:
	  trying 4 assignments for NU_16:
	    trying 2 assignments for CT_19:

  but something else is cutting them off, cost problems perhaps?

  Look at this line from debugging the expander:

    {tasks 3, open false, cost 52.05670, lim 52.05845, free 1, must 1}

  The cost is fine, free resources are fine, why is it closed?
  Could it be a skip count problem?  Should we move skip count
  testing out of the expander?  Or enhance it somehow within
  the expander?  Next step:  get skip count debug information.

8 March 2023.  The finger is pointing at must_assign.  Need to
  get debug information about it within the task classes.

    KheDrsTaskClassExpandBegin: must_assign task 1Thu:Early.3
      asst_cost 0.00000, non_asst_cost 1.00000 (0, open, -;
      1Thu:Early.3 expand_must(time 1Thu1, day 1Thu, closed_asst -))
    KheDrsTaskClassExpandBegin: must_assign task 1Thu:Late.0
      asst_cost 0.00000, non_asst_cost 1.00000 (0, closed, HN_2;
      1Thu:Late.0 expand_must(time 1Thu3, day 1Thu, closed_asst HN_2))
    KheDrsTaskClassExpandBegin: must_assign task 1Thu:Night.9
      asst_cost 0.00000, non_asst_cost 1.00000 (1, open, -;
      1Thu:Night.9 expand_must(time 1Thu4, day 1Thu, closed_asst -))

  The second of these looks remarkably like a must assign task that
  is actually closed, because assigned to HN_2 which is not an open
  resource.  So I need to look into how this task is being included
  in the expand here.

  KheDrsTaskClassOrganizeUnassignedTasks is called when opening the
  task class.  It should ensure that only unassigned tasks are in
  the unassigned_tasks array.  But I've managed to get an assert
  error which shows that this is not happening - why not?

  Problem may be that the task class might fail to open because
  it is not in range or (in this case) the domain is wrong; but
  the signal about that might not be strong enough to prevent
  subsequent code from still using it.  Yes, I think this is it.
  I need to mark the task class more clearly as open and I need
  to add it to a separate list of open task classes.

  Yes, that seems to have fixed the problem.  One improvement found:

    KheDynamicResourceSolverDoSolve returning true
      (new 0.01990 < old 0.02000), 8.7 secs

  Started off a 20 minute run, just to see what we get.  On every
  day we reduce the solutions to the best 200, so there should be
  no excessive running times.
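  A minimal sketch of that daily reduction, with hypothetical names
  and a single integer cost standing in for KHE_COST: sort the day's
  surviving solutions by cost and keep only the cheapest ones.

```c
#include <stdlib.h>

typedef struct { long cost; } DRS_SOLN;

/* compare two solutions by increasing cost, for qsort */
static int DrsSolnCmp(const void *a, const void *b)
{
  long ca = (*(const DRS_SOLN *const *) a)->cost;
  long cb = (*(const DRS_SOLN *const *) b)->cost;
  return ca < cb ? -1 : ca > cb ? 1 : 0;
}

/* reduce solns[0..count-1] to at most limit cheapest solutions;
   return the new count */
int DrsDayPrune(DRS_SOLN **solns, int count, int limit)
{
  qsort(solns, count, sizeof *solns, DrsSolnCmp);
  return count < limit ? count : limit;
}
```

  With limit = 200 this bounds the work done per day regardless of
  how many undominated solutions the expansion produced.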

  I've had quite a good result from reassigning 3 resources over
  28 days, rs_time_limit=20:0.  One result for nurses and one for
  trainees:

    rdv end, 16.6 mins used (new best, 52.05735 < 52.05845)
    rdv end, 199.9 secs used (new best, 0.01840 < 0.01890)

  It says above that my own previous best result was 1835.  So I'm
  in my own ballpark here.  These had rs_drs_daily_expand_limit=200,
  now let's try with rs_drs_daily_expand_limit=500:

    rdv end, 16.7 mins used (new best, 52.05735 < 52.05845)
    rdv end, 196.4 secs used (new best, 0.01840 < 0.01890)

  Basically the same result.  I wonder whether ejection chains
  could improve on this further.  Yes!  Look at this:

    [ "INRC2-4-030-1-6291", 1 solution, in 14.2 mins: cost 0.01825 ]

  Now do it eight times and see what we get:

    [ "INRC2-4-030-1-6291", 1 thread, 8 solves, 6 distinct costs, 114.2 mins:
      0.01825 0.01860 0.01875 0.01875 0.01880 0.01880 0.01895 0.01900
    ]

  So we were lucky.  And two threads ran out of memory.

9 March 2023.  In the middle of moving tables to constraint objects.
  Have clean compile of what seems to be a complete implementation.

10 March 2023.  Audited and tested moving tables to constraint objects.
  All good, now for running in parallel.  Still not able to make 8
  solutions across 4 threads, but the first four finished all right.
  So is there a problem in passing arenas across?  Yes, we were
  asking for small arenas when we should have been asking for
  large ones.  Fixed now, but should code up some other improvements
  before testing again.

  In the tables, now storing unweighted costs in short fields rather
  than full costs in KHE_COST fields.  This should reduce the
  tables' memory usage by close to a factor of four.  Implemented
  with a clean compile, but needs a careful audit and test.
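  A minimal sketch of the packing, with hypothetical names; a single
  weight per entry is assumed, and the hard/soft structure of KHE's
  real KHE_COST is ignored here:

```c
#include <stdint.h>

typedef int64_t COST;   /* stands in for a full KHE_COST value */

/* store only the unweighted cost, in 16 bits; the caller
   guarantees that the value fits */
uint16_t DrsPackUnweighted(COST unweighted)
{
  return (uint16_t) unweighted;
}

/* recover the full weighted cost when the table entry is read back,
   using the weight of the constraint the entry belongs to */
COST DrsUnpackWeighted(uint16_t unweighted, COST weight)
{
  return (COST) unweighted * weight;
}
```

  Four 16-bit fields fit in the space of one 64-bit cost, which is
  where the close-to-a-factor-of-four saving comes from; the catch,
  as the next entry notes, is being careful about where the weight
  comes from when converting back.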

11 March 2023.  Auditing replacing costs by unweighted costs in
  the uniform dominance tables.  I've realized that I need to be
  careful about where I get the weights from when converting from
  unweighted to weighted.  Variables for correlated dominance
  testing, in their best order:

    Non-cumulative   Cumulative          Cumulative formula
    --------------------------------------------------------------
    psi              sum_psi             sum(psi)
    psi0             sum_psi0            sum(psi0)
    psi_plus         min_psi_diff        min(psi - psi0)
                     min_psi_plus_diff   min(psi_plus - psi0)
    --------------------------------------------------------------

  The various functions follow this order now.
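  These cumulative variables can be maintained incrementally, one
  update per day.  A sketch, assuming integer psi values and
  hypothetical names (the field names mirror the table above):

```c
#include <limits.h>

typedef struct {
  int sum_psi;             /* sum(psi)              */
  int sum_psi0;            /* sum(psi0)             */
  int min_psi_diff;        /* min(psi - psi0)       */
  int min_psi_plus_diff;   /* min(psi_plus - psi0)  */
} DRS_CORRELATE;

void DrsCorrelateInit(DRS_CORRELATE *c)
{
  c->sum_psi = 0;
  c->sum_psi0 = 0;
  c->min_psi_diff = INT_MAX;
  c->min_psi_plus_diff = INT_MAX;
}

/* fold one day's psi, psi0, and psi_plus into the cumulative variables */
void DrsCorrelateUpdate(DRS_CORRELATE *c, int psi, int psi0, int psi_plus)
{
  c->sum_psi += psi;
  c->sum_psi0 += psi0;
  if( psi - psi0 < c->min_psi_diff )
    c->min_psi_diff = psi - psi0;
  if( psi_plus - psi0 < c->min_psi_plus_diff )
    c->min_psi_plus_diff = psi_plus - psi0;
}
```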

  I've done some parallel testing and I'm not getting the slowdown
  that I did previously, so it looks like the memory optimization has
  paid off.  Here is a run with 5 minutes per solve:

    [ "INRC2-4-030-1-6291", 4 threads, 8 solves, 7 distinct costs, 10.1 mins:
      0.01855 0.01885 0.01890 0.01890 0.01905 0.01910 0.01935 0.01990
    ]

  Now I'm trying a run for 20 minutes per solve, which will get us
  to about the 40 minutes that the other authors have been using:

    [ "INRC2-4-030-1-6291", 4 threads, 8 solves, 7 distinct costs, 40.1 mins:
      0.01830 0.01840 0.01850 0.01860 0.01875 0.01885 0.01885 0.01970
    ]

  I should have run ejection chains again at the end.  I'll do that now:

    [ "INRC2-4-030-1-6291", 4 threads, 8 solves, 7 distinct costs, 28.5 mins:
      0.01825 0.01845 0.01870 0.01875 0.01875 0.01880 0.01895 0.01900
    ]

  As I said previously, 1825 is a new best for me, but it is still
  a long way from what Legrain got (1695 or 1685), and only slightly
  better than what I got previously with ejection chains (1835 according
  to my KHE20 paper).  Anyway it seems that our memory troubles are
  over, at least for the present.

  Here's an interesting line from solving 4 trainees over 2 weeks:

    KheDrsSolveSearch ending day 2Mon (made 2199004, undominated 20843)

  The fraction kept is tiny, just 0.0095 (under one percent).
  Here's the whole run:

    [ KheDynamicResourceSolverDoSolve(drs, priqueue false, extra_selection true,
      expand_by_shifts true, correlated_exprs true, daily_expand_limit 0,
      daily_prune_trigger 0, resource_expand_limit 0, dom_approx 0,
      main_dom_kind IndexedUniform, cache false, cache_dom_kind -) cost 0.02000
      resources:  TR_26, TR_27, TR_28, TR_29
      day ranges: 0-13
      [ KheDrsSolveSearch(4 resources, 14 days)
	KheDrsSolveSearch ending day 1Mon (made 605, undominated 605)
	KheDrsSolveSearch ending day 1Tue (made 48546, undominated 4802)
	KheDrsSolveSearch ending day 1Wed (made 191565, undominated 7851)
	KheDrsSolveSearch ending day 1Thu (made 63970, undominated 3037)
	KheDrsSolveSearch ending day 1Fri (made 269272, undominated 18028)
	KheDrsSolveSearch ending day 1Sat (made 235077, undominated 18910)
	KheDrsSolveSearch ending day 1Sun (made 414074, undominated 20581)
	KheDrsSolveSearch ending day 2Mon (made 2199004, undominated 20843)
	KheDrsSolveSearch ending day 2Tue (made 1730284, undominated 33973)
	KheDrsSolveSearch ending day 2Wed (made 2310339, undominated 75829)
	KheDrsSolveSearch ending day 2Fri (made 370919, undominated 16644)
	KheDrsSolveSearch ending day 2Sat (made 511810, undominated 7471)
	KheDrsSolveSearch ending day 2Sun (made 22, undominated 1)
      ] KheDrsSolveSearch returning true (new best 0.01980)
    ] KheDynamicResourceSolverDoSolve returning false, 119.7 mins

  Here's the same test only with a daily expand limit of 200:

    [ KheDynamicResourceSolverDoSolve(drs, priqueue false, extra_selection true,
      expand_by_shifts true, correlated_exprs true, daily_expand_limit 200,
      daily_prune_trigger 0, resource_expand_limit 0, dom_approx 0,
      main_dom_kind IndexedUniform, cache false, cache_dom_kind -) cost 0.02000
      resources:  TR_26, TR_27, TR_28, TR_29
      day ranges: 0-13
    [ KheDrsSolveSearch(4 resources, 14 days)
      KheDrsSolveSearch ending day 1Mon (made 605, undominated 605, kept 200)
      KheDrsSolveSearch ending day 1Tue (made 13597, undominated 2333, kept 200)
      KheDrsSolveSearch ending day 1Wed (made 9101, undominated 1246, kept 200)
      KheDrsSolveSearch ending day 1Thu (made 2013, undominated 330, kept 200)
      KheDrsSolveSearch ending day 1Fri (made 17008, undominated 4045, kept 200)
      KheDrsSolveSearch ending day 1Sat (made 4892, undominated 1944, kept 200)
      KheDrsSolveSearch ending day 1Sun (made 6409, undominated 805, kept 200)
      KheDrsSolveSearch ending day 2Mon (made 29712, undominated 2230, kept 200)
      KheDrsSolveSearch ending day 2Tue (made 14364, undominated 2035, kept 200)
      KheDrsSolveSearch ending day 2Wed (made 14172, undominated 3150, kept 200)
      KheDrsSolveSearch ending day 2Thu (made 13994, undominated 1943, kept 200)
      KheDrsSolveSearch ending day 2Fri (made 2492, undominated 771, kept 200)
      KheDrsSolveSearch ending day 2Sat (made 9540, undominated 1114, kept 200)
      KheDrsSolveSearch ending day 2Sun (made 0, undominated 0)
    ] KheDrsSolveSearch returning false
    ] KheDynamicResourceSolverDoSolve returning false, 8.3 secs

  It has missed the new best.  Expand limit = 500 also missed it,
  and 1000 missed the 1980 but found a 1995.

12 March 2023.  Tried yesterday's run with dom_approx=2:

    [ KheDynamicResourceSolverDoSolve(drs, priqueue false, extra_selection true,
      expand_by_shifts true, correlated_exprs true, daily_expand_limit 0,
      daily_prune_trigger 0, resource_expand_limit 0, dom_approx 2,
      main_dom_kind IndexedUniform, cache false, cache_dom_kind -) cost 0.02000
      resources:  TR_26, TR_27, TR_28, TR_29
      day ranges: 0-13
      [ KheDrsSolveSearch(4 resources, 14 days)
	KheDrsSolveSearch ending day 1Mon (made 605, undominated 605)
	KheDrsSolveSearch ending day 1Tue (made 48546, undominated 4802)
	KheDrsSolveSearch ending day 1Wed (made 191565, undominated 6712)
	KheDrsSolveSearch ending day 1Thu (made 56007, undominated 2507)
	KheDrsSolveSearch ending day 1Fri (made 223641, undominated 14314)
	KheDrsSolveSearch ending day 1Sat (made 195536, undominated 15093)
	KheDrsSolveSearch ending day 1Sun (made 352174, undominated 15651)
	KheDrsSolveSearch ending day 2Mon (made 1786601, undominated 15321)
	KheDrsSolveSearch ending day 2Tue (made 1259210, undominated 26905)
	KheDrsSolveSearch ending day 2Wed (made 1839774, undominated 55834)
	KheDrsSolveSearch ending day 2Thu (made 3125523, undominated 53888)
	KheDrsSolveSearch ending day 2Fri (made 290949, undominated 12699)
	KheDrsSolveSearch ending day 2Sat (made 434115, undominated 6403)
	KheDrsSolveSearch ending day 2Sun (made 22, undominated 1)
      ] KheDrsSolveSearch returning true (new best 0.01980)
    ] KheDynamicResourceSolverDoSolve returning false, 50.7 mins

  It hasn't made a substantial difference.  I had hoped that
  dom_approx would prove that many solutions were almost dominated
  by other solutions, but that hope has not eventuated.  However,
  it did find the improvement.  What about dom_approx=5:

    [ KheDynamicResourceSolverDoSolve(drs, priqueue false, extra_selection true,
      expand_by_shifts true, correlated_exprs true, daily_expand_limit 0,
      daily_prune_trigger 0, resource_expand_limit 0, dom_approx 5,
      main_dom_kind IndexedUniform, cache false, cache_dom_kind -) cost 0.02000
      resources:  TR_26, TR_27, TR_28, TR_29
      day ranges: 0-13
      [ KheDrsSolveSearch(4 resources, 14 days)
	KheDrsSolveSearch ending day 1Mon (made 605, undominated 247)
	KheDrsSolveSearch ending day 1Tue (made 17764, undominated 2109)
	KheDrsSolveSearch ending day 1Wed (made 94148, undominated 4158)
	KheDrsSolveSearch ending day 1Thu (made 35691, undominated 1549)
	KheDrsSolveSearch ending day 1Fri (made 137562, undominated 7231)
	KheDrsSolveSearch ending day 1Sat (made 104799, undominated 7974)
	KheDrsSolveSearch ending day 1Sun (made 199376, undominated 2737)
	KheDrsSolveSearch ending day 2Mon (made 397621, undominated 4423)
	KheDrsSolveSearch ending day 2Tue (made 344585, undominated 10461)
	KheDrsSolveSearch ending day 2Wed (made 763796, undominated 18743)
	KheDrsSolveSearch ending day 2Thu (made 1159118, undominated 19020)
	KheDrsSolveSearch ending day 2Fri (made 112682, undominated 5911)
	KheDrsSolveSearch ending day 2Sat (made 217717, undominated 3683)
	KheDrsSolveSearch ending day 2Sun (made 16, undominated 1)
      ] KheDrsSolveSearch returning true (new best 0.01980)
    ] KheDynamicResourceSolverDoSolve returning false, 5.1 mins

  This is a lot faster, and it is still finding the improvement.  So
  we might be on to something here.  Even with dom_approx=8 we still
  find the improvement:

    [ KheDynamicResourceSolverDoSolve(drs, priqueue false, extra_selection true,
      expand_by_shifts true, correlated_exprs true, daily_expand_limit 0,
      daily_prune_trigger 0, resource_expand_limit 0, dom_approx 8,
      main_dom_kind IndexedUniform, cache false, cache_dom_kind -) cost 0.02000
      resources:  TR_26, TR_27, TR_28, TR_29
      day ranges: 0-13
      [ KheDrsSolveSearch(4 resources, 14 days)
	KheDrsSolveSearch ending day 1Mon (made 605, undominated 247)
	KheDrsSolveSearch ending day 1Tue (made 17764, undominated 1960)
	KheDrsSolveSearch ending day 1Wed (made 86213, undominated 2967)
	KheDrsSolveSearch ending day 1Thu (made 27462, undominated 1207)
	KheDrsSolveSearch ending day 1Fri (made 108635, undominated 5861)
	KheDrsSolveSearch ending day 1Sat (made 88700, undominated 6671)
	KheDrsSolveSearch ending day 1Sun (made 168449, undominated 1222)
	KheDrsSolveSearch ending day 2Mon (made 211110, undominated 2668)
	KheDrsSolveSearch ending day 2Tue (made 201392, undominated 7227)
	KheDrsSolveSearch ending day 2Wed (made 518904, undominated 11729)
	KheDrsSolveSearch ending day 2Thu (made 741336, undominated 12124)
	KheDrsSolveSearch ending day 2Fri (made 73024, undominated 3682)
	KheDrsSolveSearch ending day 2Sat (made 144728, undominated 2677)
	KheDrsSolveSearch ending day 2Sun (made 8, undominated 1)
      ] KheDrsSolveSearch returning true (new best 0.01980)
    ] KheDynamicResourceSolverDoSolve returning false, 114.3 secs

  And we are under two minutes here.  Pretty amazing actually.

    [ KheDynamicResourceSolverDoSolve(drs, priqueue false, extra_selection true,
      expand_by_shifts true, correlated_exprs true, daily_expand_limit 0,
      daily_prune_trigger 0, resource_expand_limit 0, dom_approx 10,
      main_dom_kind IndexedUniform, cache false, cache_dom_kind -) cost 0.02000
      resources:  TR_26, TR_27, TR_28, TR_29
      day ranges: 0-13
      [ KheDrsSolveSearch(4 resources, 14 days)
	KheDrsSolveSearch ending day 1Mon (made 605, undominated 247)
	KheDrsSolveSearch ending day 1Tue (made 17764, undominated 1073)
	KheDrsSolveSearch ending day 1Wed (made 49665, undominated 1842)
	KheDrsSolveSearch ending day 1Thu (made 16550, undominated 846)
	KheDrsSolveSearch ending day 1Fri (made 75439, undominated 4511)
	KheDrsSolveSearch ending day 1Sat (made 70755, undominated 5309)
	KheDrsSolveSearch ending day 1Sun (made 142136, undominated 600)
	KheDrsSolveSearch ending day 2Mon (made 111283, undominated 1969)
	KheDrsSolveSearch ending day 2Tue (made 146226, undominated 5499)
	KheDrsSolveSearch ending day 2Wed (made 401712, undominated 6909)
	KheDrsSolveSearch ending day 2Thu (made 459132, undominated 8218)
	KheDrsSolveSearch ending day 2Fri (made 48084, undominated 2354)
	KheDrsSolveSearch ending day 2Sat (made 96093, undominated 1934)
	KheDrsSolveSearch ending day 2Sun (made 8, undominated 1)
      ] KheDrsSolveSearch returning true (new best 0.01980)
    ] KheDynamicResourceSolverDoSolve returning false, 50.5 secs

  This is incredible:  under one minute and still finding a new best.

    [ KheDynamicResourceSolverDoSolve(drs, priqueue false, extra_selection true,
      expand_by_shifts true, correlated_exprs true, daily_expand_limit 0,
      daily_prune_trigger 0, resource_expand_limit 0, dom_approx 15,
      main_dom_kind IndexedUniform, cache false, cache_dom_kind -) cost 0.02000
      resources:  TR_26, TR_27, TR_28, TR_29
      day ranges: 0-13
      [ KheDrsSolveSearch(4 resources, 14 days)
	KheDrsSolveSearch ending day 1Mon (made 605, undominated 247)
	KheDrsSolveSearch ending day 1Tue (made 17764, undominated 1070)
	KheDrsSolveSearch ending day 1Wed (made 49477, undominated 1213)
	KheDrsSolveSearch ending day 1Thu (made 10718, undominated 597)
	KheDrsSolveSearch ending day 1Fri (made 53283, undominated 3414)
	KheDrsSolveSearch ending day 1Sat (made 55753, undominated 2504)
	KheDrsSolveSearch ending day 1Sun (made 70949, undominated 344)
	KheDrsSolveSearch ending day 2Mon (made 64496, undominated 1805)
	KheDrsSolveSearch ending day 2Tue (made 133317, undominated 3031)
	KheDrsSolveSearch ending day 2Wed (made 261016, undominated 3696)
	KheDrsSolveSearch ending day 2Thu (made 250170, undominated 2450)
	KheDrsSolveSearch ending day 2Fri (made 18971, undominated 648)
	KheDrsSolveSearch ending day 2Sat (made 29082, undominated 789)
	KheDrsSolveSearch ending day 2Sun (made 4, undominated 1)
      ] KheDrsSolveSearch returning true (new best 0.01995)
    ] KheDynamicResourceSolverDoSolve returning false, 18.9 secs

  With dom_approx=15 we find the 1995, not the 1980.  Here is a run
  using 4 resources:

    [ "INRC2-4-030-1-6291", 4 threads, 8 solves, 7 distinct costs, 33.2 mins:
      0.01860 0.01875 0.01895 0.01940 0.01950 0.01960 0.01960 0.02030
    ]

  And here is the same run, only using 3 resources:

    [ "INRC2-4-030-1-6291", 4 threads, 8 solves, 7 distinct costs, 28.8 mins:
      0.01785 0.01840 0.01845 0.01865 0.01875 0.01875 0.01890 0.01925
    ]

  I said previously that 1825 was a new best for me, but now I have a
  new new best of 1785, which is 100 above Legrain's 1695 or 1685.
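  This diary doesn't spell out exactly what dom_approx does, but the
  pattern above (larger values prune more, run faster, and eventually
  lose the true best) is consistent with a tolerance-based dominance
  test along the following lines; everything in this sketch is an
  illustrative assumption, not KHE's actual rule:

```c
#include <stdbool.h>

/* b is taken to be dominated by a when every entry of a's signature
   is at most the corresponding entry of b plus the tolerance; with
   approx 0 this is exact (pointwise) dominance, and larger approx
   values prune more solutions at the risk of pruning the optimum */
bool DrsApproxDominates(const int *a, const int *b, int len, int approx)
{
  for( int i = 0;  i < len;  i++ )
    if( a[i] > b[i] + approx )
      return false;
  return true;
}
```

  A test like this cheaply explains the observed trade-off: each unit
  of tolerance admits more prunings, so the undominated counts shrink,
  but nothing guarantees that a pruned solution could not have led to
  the new best.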

14 March 2023.  Yesterday I started documenting dominance relations
  between shift assignment sets.  Today I've reorganized that and
  I'm now working on a section of the theory appendix called
  "Solution types".

15 March 2023.  Suggested terminology:

    d_k-solution             (replaces d_k-complete solution)
    c_i-solution             (replaces one-extra selection)
    {c_i}{c_j}-solution      (replaces two-extra selection)
    s_i-solution             (replaces one-extra-shift selection)
    {s_i}{s_j}-solution      (replaces two-extra-shift selection)

16 March 2023.  I've been working on the "Solution types" section.
  It's done up to the start of the {s_i}{s_j}-solutions part,
  which I can now go ahead with properly, given that I have a
  suitable context for it.

17 March 2023.  Finished explaining {s_i}{s_j}-solutions, but I'm
  not sure where to take it from here.  Should I rename some
  of the types?  Should I implement a separate signer for
  {s_i}{s_j}-solutions?  Have to think about what to do.

21 March 2023.  Where have the last few days gone?  Partly to
  refereeing a paper.  Still pondering solution types and the
  {s_i}{s_j}-solution.

22 March 2023.  Made a start on shift pair dominance.  It is going
  into submodule "shift assignment tries - shift pair dominance".

23 March 2023.  Rewrote yesterday's shift pair code.  It uses
  better variable names now.  I've audited it and added some debug
  output, and I've tested it and it seems to be working.  So I
  now have the pairs of pairs of shifts that need to be tested
  for dominance.  The actual dominance testing is the next step:
  function KheDrsShiftPairDominates.

26 March 2023.  Lost the last three days to chemo, and also some
  time has gone into setting up my new web site and researching
  the requirements for my new computer.

29 March 2023.  See previous entry.  Added skip_counts and
  skip_assts fields to shift assignment objects, and updated 
  KheDrsShiftAsstExpandByShifts to use them.

30 March 2023.  Added a KHE_DRS_SHIFT_PAIR type and am now
  creating one shift pair object for each unordered pair of
  distinct shifts starting on the same day.  Each shift pair
  object contains a signer for that shift pair.

31 March 2023.  Made a good start on shift pair signers.  I've
  started moving all the code to the INT_SUM_COST submodule,
  and eventually I will make separate "INT_SUM_COST - opening"
  and "INT_SUM_COST - evaluation" submodules.

1 April 2023.  Reviewed yesterday's code and tidied it up a bit.
  All good.  KheDrsShiftPairDominates is still to do but there is
  a fair bit of prep to do first.  Also it holds a signer set
  rather than a signer.

2 April 2023.  Made a start on KheDrsShiftPairDominates but came
  a cropper.  I need to think more about what happened.

3 April 2023.  Wrote some stuff, at the beginning of the Solutions
  section of the implementation appendix, discussing the relationship
  between solutions and assignments and stating (ahead of the fact)
  that there are no assignment types in the implementation, just
  solution types.
  
  Removed type KHE_DRS_ASST_TO_SHIFT.  I need to audit what's left,
  especially reference counting of signatures.

  How do assignment-to-task objects get refreshed on every
  expand?  If the old ones are deleted, does that mean that
  pointers to them in KHE_DRS_SOLN objects go out of date?
  No: solutions contain an ARRAY_KHE_DRS_TASK_ON_DAY.

4 April 2023.  Audited signature reference counting.  It needed a
  lot of work but it's in good shape now.  The secret is to make
  objects that refer to signatures responsible for updating those
  signatures' reference counts.  The other secret is to count
  references from heap objects.  This means that initially the
  reference count of a new signature object is 0.  If it never
  comes to be referred to by a heap object, it gets deleted.
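  A sketch of this scheme, with hypothetical names: the count starts
  at 0, only heap objects call Ref and Unref, and a signature that
  was never claimed by any heap object is reclaimed explicitly at the
  end of the expand.

```c
#include <stdlib.h>

typedef struct { int ref_count; } DRS_SIG;

DRS_SIG *DrsSigMake(void)
{
  DRS_SIG *sig = malloc(sizeof *sig);
  sig->ref_count = 0;       /* no heap object refers to it yet */
  return sig;
}

/* a heap object stores a pointer to sig */
void DrsSigRef(DRS_SIG *sig) { sig->ref_count++; }

/* a heap object drops its pointer to sig; free on the last drop */
void DrsSigUnref(DRS_SIG *sig)
{
  if( --sig->ref_count == 0 )
    free(sig);
}

/* at the end of the expand: free sig if it never came to be
   referred to by a heap object */
void DrsSigUnrefIfUnused(DRS_SIG *sig)
{
  if( sig->ref_count == 0 )
    free(sig);
}
```

  Keeping the responsibility with the referring objects, rather than
  with the code that happens to create a signature, is what makes the
  counts auditable.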

  KHE_DRS_ASST_TO_TASK_CLASS is needed but can we express it
  better, somehow?  Is it just a variant of KHE_DRS_ASST_TO_TASK?
  I've given a lot of thought to these two types today but I
  haven't come up with any changes.  Although that in itself
  says something about their current fitness for purpose.

5 April 2023.  I've renamed the assignment types so that they
  are now solution types:

    Old Type Name               Solution         New Type Name
    ----------------------------------------------------------------
    KHE_DRS_SOLN                d_k-solution     KHE_DRS_SOLN
    KHE_DRS_ASST_TO_TASK_CLASS  c_i-solution     KHE_DRS_CLASS_SOLN
    KHE_DRS_ASST_TO_TASK        t_i-solution     KHE_DRS_TASK_SOLN
    KHE_DRS_SHIFT_ASST          s_i-solution     KHE_DRS_SHIFT_SOLN
    ----------------------------------------------------------------
  
  I've changed the type names, field names, and function headers
  appropriately, and moved submodules around to agree with the
  grouping and order used in this table.  I still have to change
  most of the variable names, although I have done some.  In the
  implementation appendix, I've moved around some sections and
  changed their titles to agree with this plan, but I haven't
  rewritten the text of any of the sections yet.  I'll put that
  off until I've actually implemented shift pair solutions and
  everything has settled down.  Currently have 26508 lines.

6 April 2023.  Making some serious progress on shift pair solutions
  at last.  I've defined type KHE_DRS_SHIFT_PAIR_SOLN and written
  functions for creating and freeing them, building their signature
  sets, and building signer sets for those.  And now I have actually
  finished shift pair soln dominance testing.  It all needs a careful
  audit, plus it might be good to reorganize things a bit.

7 April 2023.  Audited yesterday's stuff today.  I fiddled with it
  a bit but it was basically all good.  I also changed a lot of
  variable names to follow the new terminology, including replacing
  "one-extra selection" with "class solution dominance" and
  "two-extra selection" with "class pair solution dominance".
  
  If signatures contained pointers to signers, then we could
  do dominance testing without having to build signer sets
  or pass signers as parameters.  The cost would be one pointer
  per solution, in the event resource signature, which is too high.
  Also one pointer in each signature that gets kept.  The cost of
  saving this memory is KheDrsShiftPairSolnSignerSetBuild; I've
  written it now and it is not so bad.

  In principle we could create all signers at expr open time, but
  there would need to be one signer for each combination of s_a,
  R_a, s_b, and R_b.  That is a lot of signers.  I seem to have
  gravitated towards building the s_i s_j signer at expr open
  time, and then building the full signers as required, by
  concatenating the s_i s_j signer with c_i signers.  I could
  avoid building the signer set by simply running through each
  signer required, similar to what I do for c_i c_j signing.
  But the total cost aspect makes the signer set (or rather the
  signature set) attractive.

8 April 2023.  I'm documenting before I do any testing.  It's a way
  to spot problems.  I'm focusing on the Solutions section of the
  Implementation chapter.  I've just finished the task solutions section.

9 April 2023.  Kept working on the documentation, it is all up to
  date now in the Solutions section, except that the shift pair
  solutions section is not yet written.

10 April 2023.  Documented KheDrsShiftSolnTrieTestShiftPairs and
  its related functions.  I could revise the entire implementation
  appendix but I think I've done enough for now.

11 April 2023.  KheDrsExprEvalSignature is called from exactly one
  place:  KheDrsSignerEvalSignature.  KheDrsSignerEvalSignature is
  called from:

   * KheDrsResourceSignatureMake, to make a resource signature;

   * KheDrsShiftPairSolnBuild, to make the cover signature for
     a shift pair soln object (this is the call whose tasks are
     not being set at the moment);

   * KheDrsExpanderMakeAndMeldSoln, to make the cover signature
     for a day solution object;

   * KheDrsExpanderMakeAndMeldShiftSoln, to make the cover
     signature for a shift solution.

  They could all incorporate a call to KheDrsSignatureMake, so
  I've started by doing that.

12 April 2023.  Setting leaf nodes in KheDrsShiftPairSolnBuild
  today.  All done and ready to test.  But I went back to a
  thorough revision of the documentation after that, and
  got to the end of signer sets.

13 April 2023.  Carrying on with revising the documentation.
  Leaving correlators in the too hard basket for now.  Just
  copied in KheDrsDim5TablePut, not explained yet.

14 April 2023.  Carrying on with the documentation.  I'm up
  to the start of Expressions/Searching.

15 April 2023.  Carrying on with the documentation.  Just
  finished INT_SEQ_COST.

16 April 2023.  Carrying on with the documentation.  Now up
  to the start of Expansion.

17 April 2023.  Carrying on with the documentation.  Up to the
  start of "KheDrsSolveOpen".

19 April 2023.  Carrying on with the documentation.  I've reached
  the end of the implementation chapter.

21 April 2023.  Not much done in the last couple of days.

24 April 2023.  Coming out of a rather nasty chemo side effects
  episode, the main symptom being zero energy.  Doing some
  testing.  I actually fixed a little bug, which is more than
  I have been up for recently.

  Getting quite a lot of debug output like this:

    KheDrsShiftSolnTrieTestShiftPairs:
    {(2Mon:28, {}, 1 soln), (2Mon:30, {TR_26, TR_29}, 1 soln)}
    {(2Mon:30, {}, 1 soln), (2Mon:28, {TR_26, TR_29}, 1 soln)}
    second pair (0,0,0,0) dominates first

  which suggests that the code is finding dominated shift pairs.
  It needs a careful audit now, including redoing the dominance
  test with debug output to see just what is being tested.  Did some
  sorting out of the debug code to get nicer prints.  All good.

25 April 2023.  Still zero energy.  I made shift_pairs into an
  option alongside expand_by_shifts, and tested with and without.
  It made a tiny difference, on one day there were marginally
  fewer solutions made, but the running time was slightly higher.
  Not an encouraging result.

  Did a few miscellaneous tests.  It's still taking 1000 seconds
  to reassign 4 trainees.  There seems to be little difference
  between assignment by resources and assignment by shifts;
  assignment by shifts is about 10-15% faster for nurses, but
  marginally slower for trainees.

12 June 2023.  It turned out that I had pneumonia.  I started to
  lose my energy about 20 April 2023, was admitted to hospital on
  28 April 2023, and was discharged on 2 May 2023.  After that I
  improved every day, making it hard to say when I actually
  stopped having it.  I've been back to normal for a couple of
  weeks now, but I had other jobs to catch up on, including
  setting up a replacement for my home computer.  This entry is
  the first to be typed on that new computer, which is working,
  although there are a few things still to install.

  Just finished getting KHE to compile cleanly with the new
  version of the gcc compiler that I installed just now.  The
  main issue is that it is very fussy about writing into string
  arrays.  The solution, basically, is to use HnStringMake.

13 June 2023.  I've received an email from Samir Ribic reporting
  that KHE is crashing on a test of his.  He reported both that
  KheSolnCopy was crashing and that the simple

    khe -c SpainSchool.xml

  was crashing.  I've verified the second of these two statements.

15 June 2023.  Continued working on Samir Ribic's bug reports,
  I've fixed and tested the khe -c problem and the other problem.

16 June 2023.  Checked that my own solver is working - it is.
  Put Version 2.8 on my web site today and emailed Samir Ribic.

21 June 2023.  I've spent the last few days sorting out various
  problems with my new computer.  That's all done now (I hope)
  so I should be able to get back to some real work starting
  today.  Between the pneumonia and the new computer I seem to
  have lost two months (20 April to 20 June).

  I tested 4 trainees, it took 8.1 minutes:

  [ KheDynamicResourceSolverDoSolve(drs, priqueue false, extra_selection true,
    expand_by_shifts true, shift_pairs false, correlated_exprs true,
    daily_expand_limit 0, daily_prune_trigger 0,
    resource_expand_limit 0, dom_approx 0,
    main_dom_kind IndexedUniform, cache false, cache_dom_kind -) cost 0.02000
    resources:  TR_26, TR_27, TR_28, TR_29
    day ranges: 0-13
    [ KheDrsSolveSearch(4 resources, 14 days)
      KheDrsSolveSearch ending day 1Mon (made 605, undominated 605)
      KheDrsSolveSearch ending day 1Tue (made 48546, undominated 3789)
      KheDrsSolveSearch ending day 1Wed (made 149415, undominated 6428)
      KheDrsSolveSearch ending day 1Thu (made 50821, undominated 2763)
      KheDrsSolveSearch ending day 1Fri (made 238632, undominated 17229)
      KheDrsSolveSearch ending day 1Sat (made 229390, undominated 18683)
      KheDrsSolveSearch ending day 1Sun (made 190328, undominated 6582)
      KheDrsSolveSearch ending day 2Mon (made 559186, undominated 11326)
      KheDrsSolveSearch ending day 2Tue (made 597830, undominated 23984)
      KheDrsSolveSearch ending day 2Wed (made 894268, undominated 44733)
      KheDrsSolveSearch ending day 2Thu (made 1069242, undominated 34863)
      KheDrsSolveSearch ending day 2Fri (made 77862, undominated 7061)
      KheDrsSolveSearch ending day 2Sat (made 17576, undominated 2320)
      KheDrsSolveSearch ending day 2Sun (made 0, undominated 0)
    ] KheDrsSolveSearch returning false
  ] KheDynamicResourceSolverDoSolve returning false, 8.1 mins

  This is the basic problem now.  Here is a one-week run:

  [ KheDynamicResourceSolverDoSolve(drs, priqueue false, extra_selection true,
    expand_by_shifts true, shift_pairs false, correlated_exprs true,
    daily_expand_limit 0, daily_prune_trigger 0,
    resource_expand_limit 0, dom_approx 0,
    main_dom_kind IndexedUniform, cache false, cache_dom_kind -) cost 0.02000
    resources:  TR_26, TR_27, TR_28, TR_29
    day ranges: 0-6
    [ KheDrsSolveSearch(4 resources, 7 days)
      KheDrsSolveSearch ending day 1Mon (made 605, undominated 605)
      KheDrsSolveSearch ending day 1Tue (made 47573, undominated 3720)
      KheDrsSolveSearch ending day 1Wed (made 97362, undominated 4788)
      KheDrsSolveSearch ending day 1Thu (made 30941, undominated 1234)
      KheDrsSolveSearch ending day 1Fri (made 27031, undominated 2676)
      KheDrsSolveSearch ending day 1Sat (made 1602, undominated 431)
      KheDrsSolveSearch ending day 1Sun (made 0, undominated 0)
    ] KheDrsSolveSearch returning false
  ] KheDynamicResourceSolverDoSolve returning false, 5.5 secs

  Much faster, but is it any use?  I also tried 5 trainees for one
  week; it is much slower.  In fact, even 1Tue takes forever.  I
  had to kill it in the end.

  Here is a run of 12 processes in parallel with a time limit of
  5 minutes, reassigning 3 resources over 14 days using

    rs="rt(rg(rrq, rts, rrm, rec)), rt(rec, rdv), rt(rec, rdv)"

  The result is

    [ "INRC2-4-030-1-6291", 12 threads, 12 solves, 11 distinct costs, 5.5 mins:
      0.01885 0.01890 0.01940 0.01960 0.01970 0.01985 0.01995 0.02000
      0.02000 0.02010 0.02025 0.02040
    ]

  This is to be compared with my remarks of 12 March:  "I said
  previously that 1825 was a new best for me, but now I have a new
  new best of 1785, which is 100 above Legrain's 1695 or 1685."
  And here now is a 30-minute run:

    [ "INRC2-4-030-1-6291", 12 threads, 12 solves, 11 distinct costs, 30.2 mins:
      0.01865 0.01890 0.01940 0.01960 0.01970 0.01985 0.01995 0.02000
      0.02000 0.02010 0.02025 0.02040
    ]

  Only slightly better, sadly.  Here is the same 30-minute run but
  with dynamic resource solving removed so that all the available
  time goes on ejection chains:

    [ "INRC2-4-030-1-6291", 12 threads, 12 solves, 11 distinct costs, 13.3 secs:
      0.01885 0.01890 0.01940 0.01960 0.01970 0.01985 0.01995 0.02000
      0.02010 0.02025 0.02025 0.02040
    ]

  In 13.3 seconds it gets close to where the other run got to in
  30.2 minutes.  I tried running ejection chains three times rather
  than twice, but it made only a very slight difference and none
  at all to the best cost.

  Now here is a run that seems to be similar to what did so well
  previously, namely 4 resources with dom_approx=10:

    [ "INRC2-4-030-1-6291", 12 threads, 12 solves, 12 distinct costs, 26.1 mins:
      0.01885 0.01890 0.01940 0.01960 0.01970 0.01985 0.01995 0.02000
      0.02010 0.02025 0.02030 0.02040
    ]

  Exactly the same, basically because the dynamic solver found
  nothing useful (in fact, over the 12 processes there was exactly
  one successful call to KheDynamicResourceSolverDoSolve).

  And now my current run is returning an error:

    KheDrsRerun internal error (rerun new best cost 0.02115 differs from
       original new best cost 0.02130), diversifier 7

  I've got this down to a single thread (diversifier 7) and now I
  need to debug it.  To begin with, which cost is correct?

22 June 2023.  Working on yesterday's bug, which now reads

    KheDrsRerun internal error (rerun best cost 0.02115 differs from
    original new best cost 0.02130), KHE cost 0.02115, diversifier 7

  where the KHE cost refers to the cost of the rerun solution,
  not to the cost of the original solution.  When I close on
  the original solution, I get

    KheDrsSolveClose internal error: KHE soln cost 0.02115 != packed soln
      cost 0.02130

  indicating that it is the cost of the original packed solution,
  the one found by the full search, that is wrong.  Because it is
  too high, we may be missing many new best solutions.

  I've also verified that the two packed solutions contain the
  same assignments on the same days.  So the assignments are not
  wrong, but the cost calculations are wrong.

  And now I have some debug output showing that the start costs
  are the same in the two packed solutions.  So it seems that
  they started off together, but that then things went slightly
  wrong.  This is shaping up to be a nightmare.

23 June 2023.  Yesterday's debug output shows that everything was
  in synch on 3Mon, but that on 3Tue the total cost was wrong but
  all the individual signatures were correct.  Now we need to work
  out which constraint went wrong.  The difference is 15 which
  should give an initial clue.

  Here are the weight 15 constraints:

    Constraint:14 - 2-5 consecutive early shifts
    Constraint:15 - 2-28 consecutive day shifts
    Constraint:16 - 2-5 consecutive late shifts
    Constraint:17 - 3-5 consecutive night shifts

  There are other constraints whose weight divides 15, but those all
  have weight 1, so they are not going to be the problem.
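  The reasoning above can be sketched as follows (a hypothetical helper,
  not KHE code; only the constraint names and weights come from the table
  above, everything else is invented for illustration):

```c
#include <assert.h>

/* When a rerun's cost differs from the original by some delta, the
   suspect constraints are those whose weight divides the delta, since
   each violation contributes a multiple of the weight. */
typedef struct { const char *name; int weight; } constraint_info;

int constraint_is_suspect(constraint_info c, int cost_delta)
{
  return c.weight > 0 && cost_delta % c.weight == 0;
}
```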

  And now I have established that the problem is with the initial
  value passed to the signature set, i.e. the value passed to
  KheDrsSolnMake().

  And now I have established that the problem is present whether
  we expand by shifts or expand by resources.  So I'm now testing
  using expand by resources, as being the simpler option.

  Commenting out KheDrsResourceAdjustSignatureCosts fixes the
  problem.  So there is something about how that is working
  that causes the bug; but I'm not sure what, yet.  It will
  increase de->cost but so what if the others are reduced?
  Needs careful thought.

  The problem may be not calling KheDrsResourceAddExpandSignature
  for free resources.  So the adding to de is done but the
  subtracting from the signatures is not done.  This looks like
  the problem.

24 June 2023.  Fixed the 21 June 2023 bug as outlined in the last
  paragraph of 23 June 2023.  Now trying a 12-core run with 3
  resources and dom_approx = 0:

    [ "INRC2-4-030-1-6291", 12 threads, 12 solves, 10 distinct costs, 24.8 mins:
      0.01855 0.01880 0.01890 0.01910 0.01930 0.01930 0.01945 0.01955
      0.01955 0.01970 0.01980 0.01985
    ]

  Now trying a 12-core run with 4 resources and dom_approx = 15:

    [ "INRC2-4-030-1-6291", 12 threads, 12 solves, 10 distinct costs, 26.2 mins:
      0.01830 0.01855 0.01870 0.01910 0.01915 0.01925 0.01930 0.01930
      0.01940 0.01940 0.01990 0.02035
    ]

  This best result of 1830 is actually pretty good.  Now trying the
  same thing again but with 21-day time intervals:

    [ "INRC2-4-030-1-6291", 12 threads, 12 solves, 11 distinct costs, 58.4 mins:
      0.01885 0.01890 0.01890 0.01945 0.01980 0.01995 0.02000 0.02005
      0.02020 0.02070 0.02135 0.02170
    ]

  The individual runs here are slow for trainees.  Trying again, this
  time with a daily limit of 5000 and no dom_approx:

    [ "INRC2-4-030-1-6291", 12 threads, 12 solves, 10 distinct costs, 59.2 mins:
      0.01915 0.01995 0.02025 0.02050 0.02110 0.02200 0.02200 0.02205
      0.02235 0.02255 0.02255 0.02290
    ]

  The problem here seems to be that each run takes too long; expanding
  5000 undominated solutions is a slow process.

  Now trying a 12-core run with 3 resources and 28 days.  I've kept
  the 5000 limit, but just as a sanity check.  In fact the run does
  generate more than 5000 undominated solutions from time to time -
  I saw one entry of about 16,000 - but on the whole it keeps under
  this limit.  This time around we seem to be finding a healthy
  number of new bests.  The end result is:

    [ "INRC2-4-030-1-6291", 12 threads, 12 solves, 10 distinct costs, 27.2 mins:
      0.01895 0.01905 0.01905 0.01910 0.01920 0.01940 0.01940 0.01955
      0.01970 0.01980 0.02010 0.02070
    ]

  A bit disappointing on the whole.  Now here is a run which uses
  ejection chains only, called three times:

    [ "INRC2-4-030-1-6291", 12 threads, 12 solves, 11 distinct costs, 13.8 secs:
      0.01885 0.01890 0.01940 0.01960 0.01970 0.01985 0.01995 0.02000
      0.02010 0.02025 0.02025 0.02030
    ]

  Notice the run time, just a few seconds, but it does better than
  most of the runs above.  Here is the same thing again, only with
  five calls on ejection chains:

    [ "INRC2-4-030-1-6291", 12 threads, 12 solves, 11 distinct costs, 18.2 secs:
      0.01885 0.01890 0.01940 0.01960 0.01970 0.01985 0.01995 0.02000
      0.02010 0.02025 0.02025 0.02030
    ]

  More calls on ejection chains give no improvement.
  
  Now trying a 12-core run with 4 resources, dom_approx = 10, and a
  daily limit of 500:

    [ "INRC2-4-030-1-6291", 12 threads, 12 solves, 9 distinct costs, 24.6 mins:
      0.01885 0.01895 0.01895 0.01925 0.01935 0.01950 0.01955 0.01955
      0.01955 0.01965 0.01970 0.02020
    ]

  No better than ejection chains.  Actually daily limits don't seem
  to help at all.  Here we are with 14 days, 4 resources, dom_approx = 15,
  and no daily limit:

    [ "INRC2-4-030-1-6291", 12 threads, 12 solves, 10 distinct costs, 26.5 mins:
      0.01870 0.01890 0.01905 0.01915 0.01935 0.01935 0.01940 0.01940
      0.01960 0.01970 0.01975 0.01985
    ]

  Above, this produced a best cost of 1830 and second best 1855, but
  neither of these is in evidence here.  Why not?  The only difference
  is that I started out running expand by resources, so I'm returning to
  that now.  Here we are with expand by resources, 14 days, 4 resources,
  dom_approx = 15, and no daily limit:

    [ "INRC2-4-030-1-6291", 12 threads, 12 solves, 11 distinct costs, 26.8 mins:
      0.01830 0.01885 0.01890 0.01910 0.01915 0.01930 0.01930 0.01940
      0.01950 0.01955 0.01990 0.02050
    ]

  Same best, but second best here is 1885 whereas above it's 1855.  Let's
  put it down to time limit effects and keep moving.  Either way, the best
  looks like luck, given that the second best is quite a lot larger.  Now
  the same settings but with a 60 minute time limit:

    [ "INRC2-4-030-1-6291", 12 threads, 12 solves, 11 distinct costs, 50.4 mins:
      0.01795 0.01870 0.01875 0.01880 0.01905 0.01905 0.01915 0.01930
      0.01935 0.01975 0.01985 0.02030
    ]

  Best is 1795, close to 1785 (the best I've ever had on this instance).
  This excerpt from the debug output file gives food for thought:

    rdv end, 15.5 mins used (new best, 0.01810 < 0.01860)

  That's a massive drop considering it's near the end of the run.

  Here's a massively parallel attempt using ejection chains only:

    [ "INRC2-4-030-1-6291", 12 threads, 48 solves, 28 distinct costs, 61.9 secs:
      0.01880 0.01885 0.01890 0.01890 0.01895 0.01910 0.01920 0.01925
      0.01935 0.01940 0.01940 0.01945 0.01945 0.01955 0.01960 0.01960
      0.01960 0.01965 0.01970 0.01970 0.01970 0.01975 0.01985 0.01985
      0.01990 0.01995 0.01995 0.01995 0.01995 0.02000 0.02010 0.02010
      0.02010 0.02020 0.02025 0.02025 0.02030 0.02030 0.02040 0.02040
      0.02050 0.02060 0.02060 0.02060 0.02075 0.02075 0.02080 0.02080
    ]

  It seems pretty clear from this that ejection chains are hitting a
  barrier that the dynamic resource solver is able to get past.  But
  it's not impossible that better move operations in the ejection chain
  code could change that situation.  Here's another massively parallel
  attempt, including drs this time but with a shorter time limit:

    [ "INRC2-4-030-1-6291", 12 threads, 48 solves, 27 distinct costs, 34.9 mins:
      0.01845 0.01870 0.01870 0.01880 0.01880 0.01890 0.01895 0.01895
      0.01895 0.01895 0.01900 0.01905 0.01910 0.01910 0.01915 0.01915
      0.01930 0.01930 0.01940 0.01940 0.01945 0.01945 0.01950 0.01955
      0.01955 0.01955 0.01965 0.01965 0.01970 0.01975 0.01975 0.01985
      0.01985 0.01985 0.01990 0.01990 0.01995 0.02000 0.02000 0.02005
      0.02005 0.02020 0.02025 0.02025 0.02035 0.02035 0.02040 0.02050
    ]

  It's better than ejection chains only, but not as good as the long
  run above.

  This is to be compared with my remarks of 12 March:  "I said previously
  that 1825 was a new best for me, but now I have a new new best of
  1785, which is 100 above Legrain's 1695 or 1685."  Today's best is
  1795, see above.  It looks pretty fluky, second best is 1870.

25 June 2023.  Decided to give dynamic resource solving a rest and do
  a careful review of the ejection chain code.  Here is what gets called
  by event resource monitor augment functions:

      KheAssignResourceAugment
        KheEventResourceMoveAugment(+domain, -0, -r0)
      KhePreferResourcesAugment
        KheEventResourceMoveAugment(+domain, -0, +r0)
      KheLimitResourcesAugment
        KheEventResourceMoveAugment(+domain, -0, -r0) (underload)
        KheEventResourceMoveAugment(+domain, -constraint_domain, +r0) (over)

      KheEventResourceMoveAugment
        KheTaskMoveAugment(t, rg, not_rg, allow_unassign)
        KheResourceGainTaskAugment(r, t, true) (if domain is empty) ?why?
          KheTaskMoveAugment

      KheTaskMoveAugment and KheTaskSetMoveAugment
        KheTaskSetMoveMultiRepair
        KheTaskSetSwapToEndRepair

      KheTaskSetMoveMultiRepair
        KheWidenedTaskSetMoveAndDoubleMoves
        KheWidenedTaskSetSwapRepair

      KheWidenedTaskSetMoveAndDoubleMoves
        KheWidenedTaskSetMoveRepair
        KheTryRuns

      KheWidenedTaskSetMoveRepair
        KheWidenedTaskSetMove

      KheResourceGainTaskAugment(r, tg, force)

  It's fairly complicated.  Perhaps we just need a non-empty day range
  and two different resources (one possibly NULL), and an operation
  that swaps the timetables of the two resources throughout the day range.
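  That operation might be sketched like this (purely illustrative, not
  the KHE interface; the array-of-ints timetable model and the function
  name are invented):

```c
#include <assert.h>

/* Swap the assignments of two resources throughout a day range.
   Timetables are modelled as arrays of task indexes, one per day,
   with -1 meaning free; a NULL resource would correspond to swapping
   against an all-free timetable. */
void swap_timetables(int tt1[], int tt2[], int first_day, int last_day)
{
  for (int day = first_day; day <= last_day; day++) {
    int tmp = tt1[day];
    tt1[day] = tt2[day];
    tt2[day] = tmp;
  }
}
```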
  Now for resource monitor augment functions:

      KheAvoidUnavailableTimesAugment
        KheResourceOverloadAugment(r, unavail_times, false)

      KheClusterBusyTimesAugment
        KheClusterUnderloadAugment
          KheOverUnderAugment(r, tg, over, require_zero, allow_zero)
        KheClusterOverloadAugment
          KheOverUnderAugment

      KheLimitBusyTimesAugment
        KheOverUnderAugment

      KheLimitActiveIntervalsAugment
        KheOverUnderAugment

      KheOverUnderAugment
        KheResourceOverloadAugment(r, tg, require_zero)
        KheResourceUnderloadAugment(r, tg, allow_zero)

      KheResourceOverloadAugment
        KheTaskSetMoveAugment(ts, domain, r_rg, true) (if require_zero)
        KheTaskMoveAugment (if !require_zero)

      KheResourceUnderloadAugment
        KheResourceGainTaskAugment(r, tg, false)

25 June 2023.  Started refamiliarising myself with the ejection chains
  code.  Here is a basic run, ejection chains only:

  [ "INRC2-4-030-1-6291", 12 threads, 12 solves, 11 distinct costs, 13.5 secs:
    0.01885 0.01890 0.01940 0.01960 0.01970 0.01985 0.01995 0.02000
    0.02010 0.02025 0.02025 0.02040
  ]

  And here I have increased es_widening_max to 8:

  [ "INRC2-4-030-1-6291", 12 threads, 12 solves, 10 distinct costs, 13.2 secs:
    0.01920 0.01925 0.01930 0.01945 0.01945 0.01975 0.01985 0.01990
    0.01995 0.02020 0.02060 0.02060
  ]

  It's worse, which is curious.  Now setting es_widening_max to 0:

  [ "INRC2-4-030-1-6291", 12 threads, 12 solves, 9 distinct costs, 3.6 secs:
    0.02090 0.02100 0.02140 0.02150 0.02185 0.02185 0.02200 0.02200
    0.02215 0.02215 0.02225 0.02260
  ]

  No good.  What about es_widening_max=12?

  [ "INRC2-4-030-1-6291", 12 threads, 12 solves, 10 distinct costs, 13.4 secs:
    0.01920 0.01925 0.01930 0.01945 0.01945 0.01975 0.01985 0.01990
    0.01995 0.02020 0.02060 0.02060
  ]

  Still no good.  We need a way to run for longer and do something useful.
  Here we are with es_full_widening_on=true and the default value of
  es_widening_max:

  [ "INRC2-4-030-1-6291", 12 threads, 12 solves, 10 distinct costs, 13.5 secs:
    0.01850 0.01920 0.01930 0.01940 0.01940 0.01945 0.01965 0.01965
    0.01975 0.01990 0.02010 0.02025
  ]

  That's a better result, approaching the "best so far" of 1825.  It
  suggests that further work on ejection chains might pay off.  Now
  if we turn on drs as well, we get this (10 minute run):

  [ "INRC2-4-030-1-6291", 12 threads, 12 solves, 12 distinct costs, 8.6 mins:
    0.01850 0.01860 0.01875 0.01905 0.01910 0.01920 0.01930 0.01940
    0.01955 0.01970 0.01980 0.02010
  ]

  The best is no better, but the average is much better.  Same again,
  but with a 30 minute run this time:

  [ "INRC2-4-030-1-6291", 12 threads, 12 solves, 8 distinct costs, 25.0 mins:
    0.01870 0.01875 0.01890 0.01890 0.01900 0.01900 0.01905 0.01905
    0.01920 0.01920 0.01940 0.01945
  ]

  Curious, it came out worse.  Here we are with es_max_beam=2 (10 minutes):

  [ "INRC2-4-030-1-6291", 12 threads, 12 solves, 10 distinct costs, 8.5 mins:
    0.01830 0.01860 0.01890 0.01900 0.01900 0.01915 0.01930 0.01935
    0.01935 0.01950 0.01975 0.01990
  ]

  Considering the time limit, this may be the best result so far.  Do
  it without drs:

  [ "INRC2-4-030-1-6291", 12 threads, 12 solves, 10 distinct costs, 38.6 secs:
    0.01865 0.01885 0.01885 0.01910 0.01920 0.01925 0.01950 0.01970
    0.01970 0.01990 0.02015 0.02030
  ]

  So drs is adding something.  Let's try max_beam=3, no drs:

  [ "INRC2-4-030-1-6291", 12 threads, 12 solves, 10 distinct costs, 37.7 secs:
    0.01905 0.01955 0.01980 0.01980 0.01985 0.01995 0.01995 0.02005
    0.02045 0.02055 0.02105 0.02150
  ]

  Not so good.  So let's try max_beam=2 with drs for 30 minutes:

  [ "INRC2-4-030-1-6291", 12 threads, 12 solves, 11 distinct costs, 25.0 mins:
    0.01850 0.01875 0.01880 0.01895 0.01900 0.01905 0.01930 0.01935
    0.01960 0.01970 0.01970 0.01985
  ]

  Not wonderful.  Let's go back to es_max_beam=2 (10 minutes), but
  with more solves:

  [ "INRC2-4-030-1-6291", 12 threads, 24 solves, 18 distinct costs, 16.8 mins:
    0.01845 0.01850 0.01865 0.01880 0.01890 0.01900 0.01900 0.01905
    0.01910 0.01910 0.01915 0.01915 0.01920 0.01930 0.01930 0.01930
    0.01935 0.01935 0.01950 0.01970 0.01975 0.01990 0.02000 0.02030
  ]

  This was with full_widening on.  Now turning that off, but with max_beam=2
  and drs on, for 10 mins, we get

  [ "INRC2-4-030-1-6291", 12 threads, 24 solves, 20 distinct costs, 16.8 mins:
    0.01845 0.01850 0.01885 0.01885 0.01885 0.01890 0.01910 0.01920
    0.01925 0.01930 0.01935 0.01945 0.01955 0.01960 0.01965 0.01965
    0.01975 0.01980 0.01985 0.01990 0.01990 0.02010 0.02015 0.02075
  ]

  Looks like full widening is marginally better.  Now twice as many
  threads, with double the time limit:

  [ "INRC2-4-030-1-6291", 24 threads, 24 solves, 22 distinct costs, 17.1 mins:
    0.01845 0.01855 0.01865 0.01885 0.01890 0.01895 0.01905 0.01910
    0.01920 0.01925 0.01925 0.01935 0.01945 0.01960 0.01965 0.01970
    0.01975 0.01980 0.01980 0.01990 0.02000 0.02010 0.02035 0.02075
  ]

  Same results, predictably, but a slightly longer run time.

27 June 2023.  I need to modify KheTaskSetMoveAugment and the other
  fundamental repair operations.  But at present I don't seem to
  have a list of what these are.  There is a section called "Repair
  operations for nurse rostering" which contains the only mention
  of KheTaskMoveAugment but it is marked obsolete and anyway it is
  very brief.  I need to redo this.  And it is not only about nurse
  rostering (although parts of it may be); it is about repairing
  event resource and resource defects without changing time assignments.

28 June 2023.  Started work on revising "Repair operations for nurse
  rostering", going well.

29 June 2023.  Still revising "Repair operations for nurse rostering".
  It's going well and suggesting minor adjustments to the implementation
  which it would be good to try in controlled tests.

2 July 2023.  I've more or less finished revising "Repair operations
  for nurse rostering".  The plan now is to unify the various task
  equivalencing modules into one, and then to use that one within
  the ejection chains module.

  Daily schedules - probably easily implementable, but not used
  except by khe_sr_single_resource.c, which is superseded now
  by khe_sr_dynamic_resource.c.

  I think we need an MTASK type which represents a set of proper
  root tasks and is essentially KHE_TASK_CLASS, but with extra
  functionality, including supporting tasks whose meets do not
  have assigned times, and operations that allow an mtask to be
  thought of as a single task (assignment etc.).  Then we can
  define MTASK_SET and MTASK_SOLVER; the latter can give access
  to all MTASKS running at a given time.  Like tasks except:

    * multiplicity

    * proper root tasks only

    * assigned to resources, not other tasks; so we can move
      from one resource to another, but not from task to task

    * shared workloads, durations, assigned times (possibly none)
      and domains but differing assignments.

    * can add task bounds, but only to whole class
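  The list above might be condensed into a struct along these lines
  (purely speculative; every name here is invented and is not the
  eventual KHE interface):

```c
#include <assert.h>

/* An mtask stands for a class of interchangeable proper root tasks:
   shared workload, duration, domain, and possibly-absent assigned
   times, but a separate resource assignment per member task. */
#define MTASK_MAX 32

typedef struct {
  int    multiplicity;        /* number of member tasks               */
  double workload;            /* shared by all member tasks           */
  int    duration;            /* shared by all member tasks           */
  int    has_assigned_times;  /* false when meets lack assigned times */
  int    asst[MTASK_MAX];     /* resource index per task, -1 if none  */
} mtask;

/* treat the mtask as a single task: assign r to the first free member */
int mtask_assign(mtask *mt, int r)
{
  for (int i = 0; i < mt->multiplicity; i++)
    if (mt->asst[i] == -1) { mt->asst[i] = r; return 1; }
  return 0;  /* every member task is already assigned */
}
```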

3 July 2023.  Started documenting a new "Multi-tasks and multi-task
  sets" section.  So far so good.

4 July 2023.  Done a lot of good documentation.  Just one section to
  go now, called "Behind the scenes", which I should be able to base
  on the "Task similarity" section from task classes.

5 July 2023.  Getting towards the end of the documentation, going well.

6 July 2023.  Finished the documentation.  It needs an audit.

7 July 2023.  Finished auditing the documentation, ready to implement.
  Brought src_solvers.h and the makefile up to date, and made a start
  on khe_sr_mtask.c, in which all the boilerplate conversion from
  "task classes" to "mtasks" has been done.  I now need to go through
  it carefully and make the smaller adjustments.

  Thought about the best position for mtasks in the resource structural
  chapter.  They logically follow task grouping, because they can be
  built (and routinely would be built) after tasks are grouped, but
  you can't group after building mtasks without destroying the mtasks.
  Mtasks will probably end up before task finding, if that turns into
  mtask finding.  They belong before "other resource-structural
  solvers".  But I'll wait a while before moving them around.

8 July 2023.  Working on khe_sr_mtask.c.  All good.  The boilerplate
  is all done, including mtask sets, and I'm starting to grapple with
  the substantive changes.  I've added the type changes for with and
  without assigned times, but I'm probably only creating mtasks with
  assigned times at the moment.

9 July 2023.  Working on khe_sr_mtask.c.  I've finished off tasks that
  do not have assigned times (needs an audit).  Also I have checked
  that every publicly announced function has an implementation (in some
  cases a "still to do" stub).

  Kidney operation tomorrow.

22 July 2023.  Back from kidney operation.  I got pneumonia again
  so I was in hospital for about a week and I've had no energy since
  I came home.

  Re-familiarizing myself with the doc and code for khe_sr_mtask.c.
  Sorted out root_asst_is_fixed in doc and code.  Not a bad few
  hours' work for someone with no energy.

23 July 2023.  Sorted out the overlap between KheMTaskSolverFindMTask
  and KheMTaskSolverAddMTask.  Deleted KheMTaskSolverAddMTask.

  Filling gaps is done, although I ended up with some inlining.  It
  needs an audit.

24 July 2023.  Audited gap filling, all good.  Also audited and
  tidied up the last part of KheMTaskMake, where the time-related
  fields are set.  Replaced SORT_TYPE with fixed_times.  Filled in
  the two remaining "still to do" function bodies.

  Written doc giving concrete conditions under which mtask moves
  succeed.  Need to check these against the implementation.

25 July 2023.  Powering along.

  Previously, mtasks would not have worked with paths and tasks because
  of asst_count.  So I've written KheMTaskBringAsstCountUpToDate to
  update asst_count just before it is used.  All written, audited,
  and used where required.

  I'd previously written doc giving concrete conditions under which mtask
  moves succeed.  I've now checked these against the implementation.

  KheMonitorKind has a couple of dodgy cases.  The doc reads as though
  this was all cleared up.  It needs looking into.  Even near the
  top, assign resource monitors may be wrong.  Started thinking about
  it; it's not simple.

26 July 2023.  Working on KheMonitorKind today.  I've documented a
  revised version, implemented it, and audited it.  Pretty good.

27 July 2023.  Audited khe_sr_mtask.c today.  All good, ready to use.

28 July 2023.  Starting to replace older code by mtasks today.  I've
  removed task classes, the immediate precursor to mtasks, from the
  source code and from the doc.  There may be some cross references
  to it still in the doc which will need fixing up later.

  I've also removed task groups, which broke khe_sr_first_resource.c
  and khe_sr_pack_resource.c.  I've since got them working with
  clean compiles using mtasks instead of task groups, but both
  files need a careful audit.

29 July 2023.  Audited khe_sr_first_resource.c and khe_sr_pack_resource.c.

  Added KheMTaskAssignResourceSuggestion to khe_sr_mtask.c to support
  khe_sr_first_resource.c.  I'm now using it in khe_sr_first_resource.c.
  All done and documented.  This is to do with encouraging resource
  constancy, even when it is not absolutely required (see the now
  unused KheTaskWantsResource in khe_sr_first_resource.c).

  Here is my previous essay on grouping equivalent tasks, which I
  have finished with now because it's all done except replacing
  KheTaskEquivalent.

  "I seem to have made several attempts to group equivalent tasks:
  
    * KheTaskEquivalent, which does a reasonable job, but it
      assumes that times may not be assigned (so it requires
      the tasks to come from the same events), and it does not
      understand the nuances of the effects of event resource
      constraints.  Arguably it does not belong in the platform,
      but I suppose I'm stuck with it now.  It could be deprecated.
      It is called from khe_se_solvers.c and khe_sr_task_finder.c.
      The call in khe_sr_task_finder.c is about whether a swap
      is allowed; it's not allowed if the tasks are equivalent,
      because it does nothing effective in that case.  So we
      have clear evidence now that all uses of KheTaskEquivalent
      go back to ejection chains and could be replaced by mtasks.

    * (DON'T UNIFY THIS ONE) Taskers (section 5.1 of the resource
      structural chapter), which say up front that two proper root
      tasks are equivalent when they have equal domains and assigned
      resources (possibly NULL) and cover the same set of times.
      As the doc says, equivalent tasks by this definition are
      interchangeable as far as resource constraints are concerned.
      Is this form actually used?  Yes, for combinatorial and profile
      grouping.  Hypothesis:  KHE_TASKER_CLASS can be replaced by
      KHE_MTASK.  But there is a lot of complexity here, included
      to support combinatorial and profile grouping, and it might
      be a mistake to unify.  KHE_TASKER_CLASS has two fields:

	  KHE_RESOURCE		resource;	/* their common resource */
	  ARRAY_KHE_TASKER_TIME	profile_times;	/* times in profile tg's */

      that don't relate at all to what mtasks contain.

    * (DONE) Task groups (section 10 of the resource structural
      chapter) seems to set out to do what task classes do, but
      it does it less well.  So it should really be withdrawn.

    * (DONE) Task classes (section 11 of the resource structural
      chapter) which seems to be the current gold standard.

    * (TINY AND WHO CARES) KheTaskAssignmentCostReduction is used in
      section 4 of the resource structural chapter to group tasks with
      equal cost reductions."
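  The Taskers definition quoted above (equal domains, equal assigned
  resources possibly NULL, same set of covered times) might look like
  this in miniature (representation invented for illustration; not
  KHE code):

```c
#include <assert.h>

/* Two proper root tasks are equivalent when they have equal domains
   and assigned resources and cover the same set of times.  Domains
   and time sets are modelled here as bitsets; -1 means NULL resource. */
typedef struct {
  unsigned long domain;   /* bitset of resources the task may take */
  unsigned long times;    /* bitset of times the task covers       */
  int asst;               /* assigned resource index, -1 for NULL  */
} root_task;

int root_tasks_equivalent(root_task a, root_task b)
{
  return a.domain == b.domain && a.times == b.times && a.asst == b.asst;
}
```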

30 July 2023.  Started work on khe_se_solvers.c and khe_sr_task_finder.c
  today.  Did a major rewrite of how augment options are obtained, adding
  a new type KHE_AUGMENT_OPTIONS and passing it to all augment functions.
  Got a clean compile of this new stuff; KheTaskEquivalent is still there
  of course.  Not documented yet, should do that.

  KheWidenedTaskSetMoveAndDoubleMovesOptimized had a task finder
  parameter which is now accessible from parameter ao.  Ditto
  KheWidenedTaskSetMoveAndDoubleMoves.  I've removed these parameters.

31 July 2023.  Worked on the ejection chains documentation today.
  I've divided the previous single chapter into two chapters, and
  I've revised the first of these so that now it is in good shape.
  The second chapter is next.

1 August 2023.  Working on the second ejection chains chapter.

2 August 2023.  Still working on the second ejection chains chapter.

3 August 2023.  Still working on the second ejection chains chapter.
  I've managed to move correlation grouping right out of the
  ejection chains chapter and into the general solvers chapter,
  and I've changed the names of the Prepare and UnPrepare functions
  to KheGroupCorrelatedMonitors and KheUnGroupCorrelatedMonitors.

4 August 2023.  Still working on the reorganized documentation.

5 August 2023.  Pretty well finished the reorganized documentation,
  except I say nothing at present about how enhanced moves work.

  KheMeetBoundMultiRepair seems to be OK, but the accompanying
  KheMeetBoundOnSuccess function does not seem to be called as
  often as it should be.  Actually I think it's OK after all.

  Audited khe_se_solvers.c, trying to tidy it up, including using
  the revised definitions of "repair function", "multi-repair
  function", and "augment function".

  Starting to think about converting KHE_TASK_FINDER into
  KHE_MTASK_FINDER.  Type KHE_DAILY_SCHEDULE is only used
  by khe_sr_single.c, which could be withdrawn if it's what
  I think it is (dynamic programming for a single resource).

  We might be able to integrate mtasks with mtask finding,
  which could help with efficiency.

  KheTaskFinderTaskInterval and KheTaskFinderTimeGroupInterval are
  called only from khe_sr_reassign.c, which is a matching algorithm.
  KheTaskFinderTaskSetInterval is called only from khe_se_solvers.c.

  KheFindTasksInInterval is called from khe_sr_combined.c (function
  KheSolnTryTaskUnAssignments) and khe_sr_reassign.c.
  KheFindFirstRunInInterval is called from khe_sr_reassign.c.
  KheFindLastRunInInterval is not called.

  Widened task sets are created (that is, KheWidenedTaskSetMake is
  called) only from khe_se_solvers.

  To summarize:

    * khe_sr_single.c - withdraw whole solver

    * khe_sr_reassign.c - a problem but it might benefit from
      converting to mtasks.

    * khe_sr_combined.c - a rewrite should eliminate this call.

    * khe_se_solvers.c - what we're aiming to replace anyway.

  But for now, let's make an mtask finder without withdrawing the
  task finder.  Piggyback onto KHE_MTASK_SOLVER?

22 August 2023.  Lost two weeks to pneumonia and covid.  Back
  at work today.

  Looked into vizier nodes and augment options generally.  I'm
  now initializing all options in KheAugmentOptionsMake, in the
  order they appear in KHE_AUGMENT_OPTIONS.

  I've changed the doc to take account of mtasks whose tasks
  have duration 0, including the detailed description in
  Section 3.10.4.  I need to update the code now.

23 August 2023.  Implemented mtasks whose tasks have duration 0,
  now called degenerate mtasks.

  Auditing khe_sm_correlation.c, just finished KheGroupCorrelatedEventMonitors.

24 August 2023.  Finished auditing khe_sm_correlation.c.  Changed a
  few things, including in the documentation, but not very much.
  Audited khe_se_focus.c, but changed nothing.

25 August 2023.  Documented khe_se_focus.c.  Merged the two ejection
  chains chapters by making the second chapter into the last section
  of the first chapter.  Brought all cross references up to date
  throughout the entire Guide, so that there are zero error messages
  when compiling it - which is not to say that the text is up to date.
  The full Guide has about 705 pages now.

  Made khe_solvers.h consistent with the Guide, as it now is.  I
  found a few small issues that I've added to the to-do list.

  I removed KheConvertTimeLimit from khe_solvers.h.  Actually it
  no longer exists, it was replaced by KheTimeFromString.

26 August 2023.  Changed dynamic_impl to refer to mtasks rather than
  to task classes.  The strings "CLASS", "Class", and "class" no
  longer appear in the implementation or in dynamic_impl, except
  that "class" is used in the object oriented sense when describing
  expression objects.

27 August 2023.  It seems to be time to start revising khe_se_solvers.c
  to use mtasks.  Changed the mtask solver to accept a NULL value for
  rt, meaning "all resource types".  Ensured that ao only contains a
  task finder and an mtask solver when repair_resources is true.

  Defined KheMTaskMoveTaskCheck and KheMTaskMoveTask in khe_solvers.h
  and documented them and implemented them.

  Working on KheResourceGainTaskMultiRepair.  Actually it's all
  done, but it calls nonexistent function KheMTaskMoveAugment,
  which is the next thing I have to write.

28 August 2023.  Working through khe_se_solvers.c, changing tasks
  and task sets to mtasks and mtask sets.  I have a clean compile
  except for one call to KheTaskEquivalent in khe_sr_task_finder.c;
  this includes replacing KheTaskSetMoveAugment and KheTaskMoveAugment
  by KheMTaskSetMoveAugment and KheMTaskMoveAugment.

29 August 2023.  Implemented all the "still to do" stuff in
  khe_se_solvers.c except for three cases where there is a
  problem that I have to think about.

  I should implement all the "still to do" stuff in khe_sr_task_finder.c
  now, mainly to make sure there are no gotchas.  There could be some.
  I've made a start on it, there is a lot to do but I'm getting there.

30 August 2023.  Working on task finding.  I've removed all the
  still to do's except for eight places where I need to actually
  think about what is going on.

31 August 2023.  Today's job is to replace KHE_MTASK_SOLVER by
  KHE_MTASK_FINDER and include the widened mtask set stuff in it.
  Finished my initial run through khe_sr_mtask_finder.c.  I've also
  removed the mtask code from khe_sr_task_finder.c.  Now I need to
  update khe_solvers.h to show the new state.  Then when I get a
  clean compile it will be time to update the documentation.

1 September 2023.  Now have a clean compile, after tidying up
  khe_sr_mtask_finder.c, khe_sr_task_finder.c, and khe_solvers.h
  according to the new plan.  Started updating the documentation.

2 September 2023.  Working through the mtasks documentation.

  Decided to make KHE_INTERVAL public.  Its functions are defined
  in file khe_sm_interval.c, and documented at the end of the
  general solve chapter.  All done, and all good.

3 September 2023.  Audited the mtasks code and documentation.
  Defined, documented, and implemented KheMTaskNeedsAssignment.
  Replaced first_index and last_index by a single interval parameter,
  in, in the interfaces of task finding functions.  All documented
  and implemented, and all calls on those functions now use the
  new interface.
  I have a clean compile.

4 September 2023.  Retrofitted KHE_INTERVAL to mtask operations
  today.  All done including documentation.

  Along the way I discovered that KHE_DRS_DAY_RANGE in the dynamic
  solver is just KHE_INTERVAL.  So I replaced it with KHE_INTERVAL.
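As a rough illustration of the kind of closed day interval that KHE_INTERVAL represents, here is a minimal sketch (the type and function names below are hypothetical, not KHE's actual declarations):

```c
#include <stdbool.h>

/* A minimal closed integer interval, in the spirit of KHE_INTERVAL.
   Names and layout here are illustrative, not KHE's actual API. */
typedef struct { int first; int last; } Interval;

/* An interval is empty when first > last. */
static bool IntervalEmpty(Interval in) { return in.first > in.last; }

/* Number of days covered, 0 if empty. */
static int IntervalLength(Interval in) {
  return IntervalEmpty(in) ? 0 : in.last - in.first + 1;
}

/* Smallest interval containing both arguments (union hull). */
static Interval IntervalUnion(Interval a, Interval b) {
  if (IntervalEmpty(a)) return b;
  if (IntervalEmpty(b)) return a;
  Interval res;
  res.first = a.first < b.first ? a.first : b.first;
  res.last = a.last > b.last ? a.last : b.last;
  return res;
}

/* True when a contains all of b (trivially true when b is empty). */
static bool IntervalContains(Interval a, Interval b) {
  return IntervalEmpty(b) || (a.first <= b.first && b.last <= a.last);
}
```

Once a day range is a value type like this, replacing KHE_DRS_DAY_RANGE with it is just a matter of renaming.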

  Removed NEEDS_ASST_TYPE.  Looking through all calls, the only
  use was NEEDS_ASST_DONT_CARE.  It's just commented out for now,
  not deleted.

  Replaced KheFindMTask by KheGetMTask, which involved some
  rearranging, including replacing ignore_preassigned and
  ignore_partial by allow_preassigned and allow_partial, which
  is all implemented (including clients) and documented now.

5 September 2023.  I've revised the spec and implementation of
  KheGetMTask; it is now absolutely spotless.  The next step is
  to examine all calls to it to make sure they are doing (or
  can be made to do) what is wanted.

6 September 2023.  Sorted out KheGetMTask and the functions
  that call it.  It will need an audit later but it's pretty
  good.  This has caused the spec of KheFindMTasksInInterval
  to change; it's all implemented and documented.

7 September 2023.  Still auditing mtasks.  Currently trying to
  make sense of KheMTaskFindTo.

8 September 2023.  Finished auditing mtasks.  Towards the end I
  did not do a very good job, because I began to suspect that
  the whole approach is too convoluted.

  Designed, implemented, and documented a new public function,
  KheMTaskContainsNeedlessAssignment, which is needed by some
  of the widened task set code.

  Designed, implemented, and documented a new public function,
  KheMTaskHasSoleMeet, which is needed by combined task and
  meet repair functions in the ejection chains solvers.

  Fixed all the "still to do" stuff in khe_se_solvers.c.  Well,
  there was one that seemed too hard so I commented it out.

9 September 2023.  Thinking about things from the top down now:
  what repairs do I really need?  I've made a new version of
  the "repairs for resource assignment" section, and I've
  written a new beginning to that section which identifies
  a specific set of repair operations that I am proposing
  to implement from scratch, bypassing widened task sets,
  which seem to me now to be too complex for the running
  time savings they bring.

10 September 2023.  I've done some very useful work on the resource
  repairs:  I've got them down to just two, KheResourceMoveRepair
  and KheResourceSwapRepair, and shown that the third one that I
  had included before, KheMTaskSetMoveRepair, is a special case of
  KheResourceSwapRepair, where one of the resources is NULL.  This was
  not clear to me before now.  All documented, not implemented, but
  the implementation is trivial, just calls on KheMTaskSetMoveResource.

11 September 2023.  Wrote the "enumeration of simple pairs" section.

12 September 2023.  Working on resource repair documentation, which
  has led on to thinking about how to organize the functions that
  build the required mtask sets, and the multi-repair operations.

13 September 2023.  Working on organizing resource repair.  I've
  renamed the mtask and mtask set resource assignment functions,
  and indeed reduced them to just KheMTaskResourceReassignCheck,
  KheMTaskResourceReassign, KheMTaskSetResourceReassignCheck, and
  KheMTaskSetResourceReassign.  The idea is to leave it to
  khe_se_solvers.c to introduce assign, unassign, and move
  versions if it so desires.  All implemented and documented.

  Added an interval field to mtask sets, so that we can retrieve
  the interval in a small constant amount of time.  But functions
  that reduce the number of mtasks in an mtask set can't update
  the interval easily, they are still to do.

  Did some planning around a private KHE_RESOURCE_REPAIR type, not
  sure if it will work, but something like it would be good.

14 September 2023.  Working on organizing resource repair.  I've
  implemented KHE_RESOURCE_ITERATOR and am using it everywhere
  it makes sense.  I've also implemented KHE_INTERVAL_ITERATOR,
  but I'm not using it yet.

  Finished off adding an interval to mtask sets, I just wrote
  a function to recalculate the interval, and called it each
  time an mtask is deleted.  It's slow, but these deletion
  functions are not really used in practice.
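The caching scheme here can be sketched in a few lines: adding an mtask widens the cached interval in constant time, while deletion recomputes it from scratch, which is slow but acceptable when deletions are rare. This is a generic illustration with hypothetical names, not KHE's actual mtask set code:

```c
/* Illustrative sketch: a set of items, each covering a day range,
   with a cached interval equal to the hull of all items. */
#define MAX_ITEMS 64

typedef struct { int first, last; } Range;

typedef struct {
  Range items[MAX_ITEMS];
  int count;
  Range interval;           /* cached hull of all items */
} RangeSet;

static void RangeSetInit(RangeSet *s) {
  s->count = 0;
  s->interval.first = 1;  s->interval.last = 0;  /* empty: first > last */
}

/* Adding widens the cached interval in O(1). */
static void RangeSetAdd(RangeSet *s, Range r) {
  s->items[s->count++] = r;
  if (s->interval.first > s->interval.last)
    s->interval = r;  /* cache was empty */
  else {
    if (r.first < s->interval.first) s->interval.first = r.first;
    if (r.last > s->interval.last) s->interval.last = r.last;
  }
}

/* Recalculate the cached interval from scratch, for use after
   a deletion; slow, but deletions are rare. */
static void RangeSetRecalcInterval(RangeSet *s) {
  s->interval.first = 1;  s->interval.last = 0;
  for (int i = 0; i < s->count; i++) {
    Range r = s->items[i];
    if (s->interval.first > s->interval.last) s->interval = r;
    else {
      if (r.first < s->interval.first) s->interval.first = r.first;
      if (r.last > s->interval.last) s->interval.last = r.last;
    }
  }
}

/* Delete item i (order not preserved) and repair the cache. */
static void RangeSetDelete(RangeSet *s, int i) {
  s->items[i] = s->items[--s->count];
  RangeSetRecalcInterval(s);
}
```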

  The test

    if( ao->repair_resources &&
      !KheResourceTypeDemandIsAllPreassigned(KheResourceResourceType(r)) )

  seems rather clumsy.  Can we prove that if start_gm is limited
  to constraints of a given type rt, then all defects explored
  by the ejection chain solver will have this type?  Not if we
  move times as well.

  In KheEjectionChainRepairResources, KheAugmentOptionsMake has
  a NULL value for rt.  It could have the resource type of the
  tasking; should it?  No.

15 September 2023.  Working on organizing resource repair.

16 September 2023.  Finished KHE_MTASK_ITERATOR and found two
  places to use it.  The uses will need a careful audit later.
  Did some interesting rewriting of KheResourceOverloadMultiRepair
  using mtask iterators.  It turned out better on the whole,
  although there is some randomization lost - reinstating that
  is still to do.

  I set out to bring clarity to KheResourceOverloadMultiRepair and
  KheResourceUnderloadMultiRepair by using iterators, and both of
  them are now in good shape.  KheMTaskSetMoveMultiRepair is next.

  KheForEachInterval did not detect when it was going off either
  end.  I've fixed it now: it reduces the maxes to avoid this.
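The essence of that fix is clamping the widened interval at both ends of the cycle rather than letting it run off. A minimal sketch, with hypothetical names (not KHE's actual KheForEachInterval code):

```c
/* Illustrative sketch of widening an interval without running off
   either end of the cycle: extensions are reduced so the result
   stays within [0, days-1]. */
typedef struct { int first, last; } Ival;

/* Return the kernel interval widened by up to extra_left days on
   the left and extra_right days on the right, clamped to the cycle. */
static Ival WidenClamped(Ival kernel, int extra_left, int extra_right,
                         int days) {
  Ival res;
  res.first = kernel.first - extra_left;
  res.last = kernel.last + extra_right;
  if (res.first < 0) res.first = 0;            /* reduce, don't wrap */
  if (res.last > days - 1) res.last = days - 1;
  return res;
}
```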

17 September 2023.  Carrying on with KheMTaskSetMoveMultiRepair.
  Commented out the old code; the new code is basically in stub
  form at the moment.  Now have separate widening_max options for
  swaps and for moves.

  Done some fairly amazing simplifying: instead of passing mts
  or mt, we're now passing an interval iterator.
  Functions KheMTaskSetMoveAugment and KheMTaskMoveAugment
  are no more, they have been bypassed; we call the repair
  function directly, now renamed KheMTaskSwapAndMoveMultiRepair.

  I added an optional extra interval to the interval iterator,
  which covers from the start of the kernel interval to the
  end of the frame.  In this way I was able to completely
  bypass KheTaskSetSwapToEndRepair, saving a lot of code.
  This is a good example of the flexibility I am gaining
  by this code revision.  Also the new code is much less
  likely to have bugs than the old code.

  Removed widened task sets, from the mtask finder and from the
  documentation.  But there is some stuff in there that we'll
  need to reinstate somehow, including finding tasks that have
  the same offset in their day, KheMTaskSetBlockedByMonitor,
  KheMTaskSetBlockedByTimeGroup, and possibly other things.

  I've been able to remove all scratch_mts variables and parameters,
  because we're iterating now, not building a set and traversing it.
  This has removed quite a lot of code for creating and deleting
  these mtask sets.  An unexpected bonus.

  I've flown in KheGetMTask from khe_sr_mtask_finder.c.  I'm not
  using it yet, but it will be an important part of finding the
  mtask sets we need, at least, I believe it will.

18 September 2023.  Carrying on with KheMTaskSwapAndMoveMultiRepair.
  Done some fiddling around with KheSwap and KheMove, and verified
  that KheGetMTask could use the mtask iterator.

  Written KheFindMovableMTaskSet, and it's looking pretty darn
  good.  It needs an audit.

19 September 2023.  Audited KheFindReassignableMTaskSet.  It
  seems to be in pretty good shape now; but it needs a second audit.
  Wrote KheMoveRepair, which now calls KheMoveReassignableMTasks to
  do the moves it needs to do directly, rather than building an
  mtask set first.  It calls KheMoveReassignableMTasks twice: once to
  eject any existing assignments from to_r, and once to move
  the assignments of from_r to to_r.  Pretty good stuff, I think.

20 September 2023.  Audited KheMoveReassignableMTasks and
  KheReassignRepair.  Done some documenting.

21 September 2023.  Working on the specification and documentation
  of moves and swaps.

22 September 2023.  Working on the specification and documentation
  of moves and swaps, specifically KheReassignMTasksInInterval.

23 September 2023.  Working on the specification and documentation
  of moves and swaps.  Finally, KheReassignMTasksInInterval is
  getting somewhere.

  KheMTaskResourceReassign within KheForEachMTask can make
  KheForEachMTask go wrong.  If there is a clash, we might skip the
  second one because our iterator index is already past it.  I've
  fixed this now by adding KheMTaskIteratorDropCurrentMTask.
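The bug and its fix generalize to any array-backed iterator: when the loop body removes the current element, everything after it slides down one slot, so the iterator must step its index back or the next element is skipped. A self-contained sketch (the names here are hypothetical, not KHE's KheMTaskIterator API):

```c
#include <stdbool.h>

/* Illustrative array-backed iterator with a drop-current operation. */
#define CAP 32

typedef struct {
  int items[CAP];
  int count;
  int pos;          /* index of the current element, -1 before start */
} Iter;

/* Advance to the next element; return false at the end. */
static bool IterNext(Iter *it, int *out) {
  if (it->pos + 1 >= it->count) return false;
  *out = it->items[++it->pos];
  return true;
}

/* Remove the current element and step back one slot, so the element
   that slides into the vacated position is visited on the next
   IterNext call instead of being skipped. */
static void IterDropCurrent(Iter *it) {
  for (int i = it->pos; i + 1 < it->count; i++)
    it->items[i] = it->items[i + 1];
  it->count--;
  it->pos--;
}
```

Without the `it->pos--` the element after each dropped one would be silently missed, which is exactly the clash-skipping problem described above.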

  KheReassignMTasksInInterval and KheEjectingReassignRepair seem to
  be in good shape now.  On to swapping.

24 September 2023.  I've finished KheMTaskFinderMTasksInTimeGroup and
  KheMTaskFinderMTasksInInterval, including documenting them and
  giving them a careful audit.  So KheAllMTaskForEach is ready to
  use.  It even handles tg_offset, but outside the mtask finder,
  not inside it.  I've also finished KheResourceMTaskForEach, I
  just cannibalised the old KheMTaskForEach.

25 September 2023.  Restored random_offset to KheAllMTaskForEach
  and KheResourceMTaskForEach.  Sorted out what to do about
  KheAllMTaskForEach not returning a task: iterate over the
  tasks of the mtask it does return.

  The resource mtask iterator does not return the same mt twice
  in a row; this may be as good as we can get for uniqueness.
  The all mtask iterator is definitely unique, because it sorts.
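The reason sorting gives true uniqueness is that it makes duplicates adjacent, so a single pass removes them all, whereas comparing only with the previous element merely prevents immediate repeats. A generic sketch of the sort-and-dedup step (illustrative only, not the mtask finder's actual code):

```c
#include <stdlib.h>

/* qsort comparison for ints (values small enough not to overflow). */
static int CmpInt(const void *a, const void *b) {
  return *(const int *)a - *(const int *)b;
}

/* Sort items and remove duplicates in place; return the new count.
   After sorting, duplicates are adjacent, so one pass suffices. */
static int SortUnique(int *items, int count) {
  if (count <= 1) return count;
  qsort(items, (size_t)count, sizeof(int), CmpInt);
  int out = 1;
  for (int i = 1; i < count; i++)
    if (items[i] != items[out - 1])
      items[out++] = items[i];
  return out;
}
```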

  KheResourceMTaskForEach iterates over one time group, but there
  is one place, in KheReassignMTasksInInterval, where we need to
  iterate over an interval.  I've done it inline; that might do.

  I've reviewed where the parameters ought to go, and discovered
  that there is one parameter, tg in KheResourceMTaskForEach, that
  cannot be moved to Init because that would not work inside a
  traversal of an interval.  So leaving parameters as is.

  KheReassignMTasksInInterval is now KheFindReassignableMTasksInInterval
  because I have decided that swapping will not work unless I put
  the mtasks into mtask sets.

  I've implemented swaps now as well as ejecting reassignments.
  This involved removing parameters require_non_empty and
  require_superset from KheFindReassignableMTasksInInterval,
  because they were not adequate when two mtask sets are being
  moved.  All good now.

  I've started an appendix to the mtask "Behind the scenes" section
  which investigates what happens when a defect identifies a task for
  reassignment, but all we have available are mtask reassignments.

26 September 2023.  Working on the mtask "Behind the scenes" section.
  Actually I've finished the section.  It proposes grouping event
  resource monitors from the same mtask.  I've read my existing
  stuff for grouping event resource monitors, and it could be
  improved on by doing this, I believe.  It all needs some
  thinking about.

27 September 2023.  Still thinking about grouping the monitors
  of each mtask.  Some more doc written.

28 September 2023.  Still thinking about grouping the monitors
  of each mtask.  I've more or less finished the documentation,
  except that I have to design and document the implementation
  of my ideas about it.

29 September 2023.  Started work on group monitors for mtasks.
  But then I decided, based on a careful examination of what
  the ejector would do if the repair of one defect actually
  repaired another, that they were not needed.

  I've tightened up the definition of when a limit resources
  monitor is separable.  All documented and implemented.

  Now passing from_mt to KheFindReassignableMTasksInInterval as
  a seed.  It's known to be assigned from_r.  NB it is always
  non-NULL when from_r == NULL.  Now implemented and in use.

  Got blocking_tg and blocking_m working.

30 September 2023.  Serious audit of code starting today.
  I'm doing the mtask finder module first.  I've checked
  the documentation against khe_solvers.h, and deleted a
  few functions from both (and from khe_sr_mtask_finder.c)
  which are no longer being used.

  Started auditing asst_cost and non_asst_cost.  So far I
  have just done two small things.  First, where both are present
  (mainly in parameter and argument lists) I have swapped
  their order, so that non_asst_cost comes first.  The
  reasoning here is that it is the usual thing, being
  derived from assign resource monitors, whereas asst_cost
  is less usual, being derived from prefer resources monitors
  with empty resource sets.  Second, I have written some
  better documentation for KheMTaskTask, which hopefully
  will help to keep me on the right track.

1 October 2023.  Audited khe_sr_mtask_finder.c.  I've audited
  non_asst_cost and asst_cost within khe_sr_mtask_finder.c and
  sorted out the mess.  I've also gone carefully through all
  calls to KheMTaskTask and made sure they do the right thing.

  Ensured that limit resources monitors which do the work of
  assign resource and prefer resources monitors are classified
  in the same way those monitors would be.  All done and
  documented.

2 October 2023.  Audited khe_se_solvers.c, everything except
  meet repairs.

  I gave some thought to having KHE_RESOURCE_MTASK_ITERATOR
  iterate over an interval, like KHE_ALL_MTASK_ITERATOR does.
  But there is only one use for it, at line 3967, so there is
  no pressing need for it.

  Reorganized KHE_REPAIR_TYPE.  It's tidy but I'm not sure that
  there is a lot of point to it.
  
  Revised the section where I enumerate operations.  I need to
  audit "Repairs and augment functions for resource assignment"
  next.

3 October 2023.  Audited "Repairs and augment functions for resource
  assignment", in the course of which I think I've finally sorted
  out the basic move and swap repairs.  It needs an audit and perhaps
  some comparison with the implementation.

4 October 2023.  I changed "ejecting reassignment" to "move" in
  khe_se_solvers.c, but I decided against making corresponding changes
  elsewhere.  Audited the documentation again, and audited the
  implementation to make sure it does what the documentation says.

5 October 2023.  Audited KheResourceGainTaskMultiRepair and the event
  resource augment functions.  It all seems fine for going on with.
  There are a few doubtful points concerning not retrying mtasks;
  perhaps it's OK to retry an mtask when a different task from it
  is selected by from_r.

  Wrote some very good documentation for choosing widened intervals
  when repairing defects.

6 October 2023.  Revised yesterday's documentation.  It is in very
  good shape now, down to the end of resource monitors.

7 October 2023.  Audited the documentation again and expanded the
  discussion of event resource defects.  It is in good shape now.
  Did a complete run of the Guide and reconciled cross references.
  There are no errors, and the Guide is 728 pages long.

  In KheEventResourceMoveMultiRepair, I've changed a test from
  "from_mt != *prev_mt" to "from_mt != *prev_mt || from_r != *prev_r"
  on the grounds that visiting the same mtask twice should be OK
  when the resource being moved within that mtask changes.  I've
  also reviewed how I avoid repeated visits to mtasks in other cases:

    KheFindReassignableMTasksInInterval, line 3974:
    KheResourceMTaskForEach visits each mtask assigned a
    given non-NULL resource, so that should be a different mtask
    on each iteration.  And anyway it does the prev_mt thing.

    KheFindReassignableMTasksInInterval, line 4012:
    KheAllMTaskForEach visits all mtasks in a given interval,
    but it was previously uniqueified by the mtask finder.

    KheResourceOverloadMultiRepair, lines 5485 and 5532:
    KheResourceMTaskForEach visits each mtask assigned a
    given non-NULL resource, so visiting the same mtask twice
    is very unlikely.  And anyway it does the prev_mt thing.

    KheResourceGainTaskMultiRepair, line 5730:
    KheAllMTaskForEach visits all mtasks in a given time group,
    but it was previously uniqueified by the mtask finder.

  Decided that KheResourceGainTaskMultiRepair does not need a
  force parameter because it makes ejecting moves anyway.

  Just for fun, I decided to look into where a call to
  KheMTaskSwapAndMoveMultiRepair can be given a NULL
  from_r or a NULL to_r:

    KheResourceOverloadMultiRepair, line 5513 and 5543:
    from_r is never NULL, to_r may be NULL; we get rid of from_r's
    overload by unassigning its unwanted tasks.

    KheResourceGainTaskMultiRepair, lines 5739 and 5744:
    from_r may be NULL (line 5744), to_r is never NULL; we get rid
    of to_r's underload by assigning unassigned tasks to to_r.

    KheEventResourceMoveMultiRepair, line 6623:
    from_r will be NULL when d monitors an unassigned task, and
    to_r will be NULL when to_ri includes NULL, which it does
    when KheEventResourceMoveMultiRepair is called from line 6735
    (prefer resources) or line 7252 (limit resources overload).

  Here's a wild idea:  try moves only when one parameter is NULL,
  try swaps when both parameters are non-NULL.  We can't unify
  moves into swaps, if only because we try long swaps but only
  short moves.  I've documented this point.
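The dispatch implied by that idea can be stated in a few lines: the NULL-ness of the two resources determines whether the repair is an assignment, an unassignment, or a swap. A purely illustrative sketch (these names are hypothetical, not KHE's):

```c
#include <stddef.h>

/* The three repair shapes, chosen by which resources are NULL. */
typedef enum { REPAIR_ASSIGN, REPAIR_UNASSIGN, REPAIR_SWAP } RepairKind;

/* Moves when exactly one resource is NULL, swaps when both are
   non-NULL; the both-NULL case does not arise and is not handled. */
static RepairKind ChooseRepair(const void *from_r, const void *to_r) {
  if (from_r == NULL) return REPAIR_ASSIGN;    /* NULL -> to_r */
  if (to_r == NULL) return REPAIR_UNASSIGN;    /* from_r -> NULL */
  return REPAIR_SWAP;
}
```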

8 October 2023.  I've decided to remove blocking_d.  It seems
  that it would only be useful when d is a limit resources
  monitor, and it's hard to get it to do the right thing
  wrt mtasks, because an event resource monitor can monitor
  some tasks of an mtask but not others (when it is separable
  and resource-independent).  So it's basically too hard and
  not very useful anyway.

  Here is some old stuff I wrote about this issue:
  "There may be a problem with blocking_d blocking every task in
  the mtask being tried.  In fact, can blocking_d be compared
  just once with mtask, and handled that way?  Then it would
  not need to be passed down to the swap function.  This para
  is probably rubbish, but the whole issue of the interaction
  of blocking_d with mtasks needs careful examination.  For
  example, blocking_d can apply to some tasks of an mtask but
  not to others, when it is separable and resource-independent.
  It is passed in just one place, KheEventResourceMoveMultiRepair
  line 6624.  Wild idea:  is it something to do with not trying
  resources that are already assigned to the same mtask, so that
  a swap will do nothing?"

  And:
  "KheEventResourceMoveMultiRepair line 6327
  blocking_m is the defect d we are trying to repair.  Not quite sure
  why we can't move a task monitored by d, after all we are trying to
  repair d.  But it's moving backwards that we don't want to do, and
  indeed we are trying to increase the number of tasks assigned r.
  Needs a careful audit but it will probably be right in the end,
  except the "too many resources" case from KheLimitResourcesAugment
  may be wrong at the moment."

  Added an exclude_first_days_in field to each interval iterator,
  which excludes the days in this interval from being iterated over.
  Have clean compile of version that adds an KHE_EXCLUDE_DAYS
  parameter to lots of things and passes bits of it to interval
  iterators for use as their exclude_first_days_in fields.  But
  does it work?  It needs a careful going over.
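The intended behaviour of exclude_first_days_in can be pinned down with a small sketch: when iterating over the days of an interval, any day lying in the exclusion interval is skipped, so repairs already tried on those days are not repeated. Hypothetical names, not KHE's actual iterator code:

```c
#include <stdbool.h>

/* Illustrative closed interval of day indexes. */
typedef struct { int first, last; } DayIval;

static bool DayIvalContains(DayIval in, int day) {
  return in.first <= day && day <= in.last;
}

/* Collect the days of in that are not in exclude; return how many.
   out must have room for all days of in. */
static int DaysExcluding(DayIval in, DayIval exclude, int *out) {
  int n = 0;
  for (int day = in.first; day <= in.last; day++)
    if (!DayIvalContains(exclude, day))
      out[n++] = day;
  return n;
}
```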

9 October 2023.  Removed ed from KheEventResourceMoveMultiRepair;
  on reflection there is nothing useful it could do there.  But
  the other uses for ed seem to be holding up.  I've simplified
  the interface a bit by resetting the exclude days interval at
  the same time as I initialize the iterator.

  Writing a new "Avoiding duplicate repairs" subsection.  It is
  answering some questions but raising others.

10 October 2023.  Working on the new "Avoiding duplicate repairs"
  subsection.  I reviewed what I wrote yesterday, and wrote some
  more covering event resource defects.  It's all good, but the
  hard part is still ahead.

11 October 2023.  Working on yesterday's plan for cluster defects.
  With today's improvements added it is

    DecreaseLoadMultiRepair(tg, r, true, *unassign_exclude, *swap_exclude)
    {
      /* get the domain and interval of the tasks we need to move */
      kernel_in = KheKernelInterval(tg, r);  /* must be non-empty */

      /* try widened swaps */
      for each swap-widening in of kernel_in % swap_exclude
	foreach alternative resource r2
	  Swap(in, r, r2);

      /* try widened unassignments */
      for each move-widening in of kernel_in % unassign_exclude
	Move(in, r, NULL, NULL);
    }

    IncreaseLoadMultiRepair(tg, r, *swap_exclude)
    {
      /* try widened swaps */
      for each day touched by tg day_in
	for each swap-widening in of day_in % swap_exclude
	  foreach alternative resource r2
	    Swap(in, r, r2);

      /* try widened assignments */
      foreach unassigned mtask mt in tg
	for each move-widening in of KheMTaskInterval(mt)
	  Move(in, NULL, r, mt);
    }

    KheIncreaseOrDecreaseLoadMultiRepair(r, tg, which)
    {
      if which
	DecreaseLoadMultiRepair(tg, r, to_zero = true,
	&unassign_exclude, &swap_exclude)
      else
	IncreaseLoadMultiRepair(tg, r, &unassign_exclude, &swap_exclude)
    }

    ClusterAugment(r)
    {
      unassign_exclude = swap_exclude = (1, 0);
      if defect is an overload 
      {
	foreach active time group tg
	  KheIncreaseOrDecreaseLoadMultiRepair(r, tg, tg is positive)
      }
      else /* defect is an underload */
      {
	foreach time group tg
	  if tg is inactive 
	    KheIncreaseOrDecreaseLoadMultiRepair(r, tg, tg is negative)
	  else if allow_zero && active_count == 1
	    KheIncreaseOrDecreaseLoadMultiRepair(r, tg, tg is positive)
      }
    }

  I've implemented this, including trying to preserve some old
  repair_times code,  and got it to compile cleanly, but it needs
  a thorough audit.

  I've made the iterators invent their own random offsets.  It saves
  a lot of code in the main functions.

12 October 2023.  What happened to tg_offset?  Is it there in
  KheMoveRepair?  Yes.

  I've implemented KheClusterBusyTimesAugment according to the plan of
  11 October 2023, and audited it.  It all seems good.  I need to go on
  now to the other augment functions, to get a feel for the whole thing.
  I've done all of them except the event resource monitors.

13 October 2023.  Redoing the multi-tasks documentation, including
  quotes from the code.  All done for KheIncreaseLoadMultiRepair
  and KheDecreaseLoadMultiRepair, and it's looking very good.

14 October 2023.  Audited the doc again.  Implemented the !require_zero
  case of KheDecreaseLoadMultiRepair.

  I wrote this previously, but after looking at it for two seconds I
  decided not to change anything, because the intervals are different
  and ed->swap_in is on the job to prevent repeated intervals:

    "allow_zero && KheDecreaseLoadMultiRepair(ej, ao, r, tg, true, ed)"
    is not great because it repeats all the swaps.  It would be better
    to call it, but with a request to omit swaps; or else to put the
    moves code into a function and call that function; or even just
    to duplicate the code (including a call to KheGetIntervalAndDomain),
    since it is quite short and neat.  But are the intervals the same?
    Needs a closer look before I do anything.  It may be best as is.

  Implemented a plan I made yesterday for KheEventResourceMoveMultiRepair:

     for each task of er
       from_r = KheTaskAsstResource(task);
       from_mt = KheMTaskFinderTaskToMTask(task);
       if( !KheResourceIteratorContainsResource(to_ri, from_r) )
       {
	 if( from_r != NULL )
	 {
	   /* swaps from from_r to any resource of to_ri except NULL */
	   for each swap-interval in around KheMTaskInterval(from_mt)
	     for each resource other_r in to_ri except NULL
	       KheSwapRepair(in, r, other_r, NULL);

	   /* unassignment */
	   if( NULL is in iterator )
	     for each move-interval in around KheMTaskInterval(from_mt)
	       KheMoveRepair(in, from_r, NULL, NULL);
	 }
	 else
	 {
	   /* assignment */
	   for each move-interval in around KheMTaskInterval(from_mt)
	     for each resource other_r in to_ri except NULL
	       KheMoveRepair(in, NULL, other_r, from_mt);
	 }
       }

  Then removed KheMTaskSwapAndMoveMultiRepair, a big step forward.

  I gave some serious thought to removing the option of having NULL
  in resource iterators.  In fact, every iteration does not have
  NULL.  But there is one call to KheResourceIteratorContainsResource,
  in KheEventResourceMoveMultiRepair on line 6552, where it is used
  in a convenient and very natural manner.  So I'm keeping it.

  Values of to_ri passed to KheEventResourceMoveMultiRepair:

    KheAssignResourceAugment line 7299:
    domain = KheEventResourceHardDomain(er);
    KheResourceIteratorInit(&to_ri_rec, ao, domain, NULL, NULL, false);
    from_r == NULL: yes, want to assign
    from_r != NULL: yes, but only when not in hard domain

    KhePreferResourcesAugment line 7364:
    domain = KhePreferResourcesConstraintDomain(prc);
    KheResourceIteratorInit(&to_ri_rec, ao, domain, NULL, NULL, true);
    from_r == NULL: no, will fail !KheResourceIteratorContainsResource
    from_r != NULL: yes, when unpreferred, want to swap or unassign

    KheLimitResourcesAugment line 7865 (underload):
    domain = KheLimitResourcesConstraintDomain(c);
    KheResourceIteratorInit(&to_ri_rec, ao, domain, NULL, NULL, false);
    from_r == NULL: yes, want to assign
    from_r != NULL: yes, want to swap but not unassign

    KheLimitResourcesAugment line 7883 (overload):
    rg = KheEventResourceHardDomain(er);
    domain = KheLimitResourcesConstraintDomain(c);
    KheResourceIteratorInit(&to_ri_rec, ao, rg, domain, NULL, true);
    from_r == NULL: no, will fail !KheResourceIteratorContainsResource
    from_r != NULL: yes, want to swap or unassign

  Renamed KheEventResourceMoveMultiRepair to KheEventResourceMultiRepair.
  It's in good shape, implemented, audited, and documented.

  The entire "Repairing resource assignments" section of the Guide
  is in good shape now.  It's pleasing that it and the code agree
  so well.  It all needs another audit, but it is very good.

15 October 2023.  Could the two steps of KheMoveRepair(in, NULL, to_r)
  cancel each other out?  The first step unassigns the tasks assigned
  to_r, the second assigns them again - that is, if there were any.
  But in this case there is also from_mt to guide the assignment.  I've
  added a paragraph documenting this issue.  Moves are complicated.

  I read through the entire "Repairing resource assignments" section.
  I made a few changes but it's really very good.  It's pleasing that
  it follows the code so closely.

16 October 2023.  I've looked again at merging the Init and Start
  functions of the iterators.  Some merges are already done; the
  rest are best as is, except that KheAllMTaskIteratorTimeGroupInit
  and KheAllMTaskIteratorIntervalInit could be folded in.  But even
  there I see no great point in doing it.

  Off-site backup today.

23 October 2023.  Had a break while Steph had COVID but I did do
  some refereeing.  Back at work today.  I reviewed the documentation
  I finished on 16 October 2023.  I changed a few things but it is
  pretty darn good.

25 October 2023.  Removed the "Multi-tasks and ejection chains"
  section, distributing its two parts to appropriate points in
  the "Repairing resource assignments" section of the ejection
  chains chapter.

26 October 2023.  Started testing the new ejection chains code today.
  I had to add some debug code, and then I had to fix a bug in the
  resource iterator (of all places), but now it seems to be working:

      rec end (new best, 0.02225 < 0.02275)

  The problem is that it is terminating very quickly (0.2 secs) and
  not finding many improvements.  So now I need to look into why it
  is not doing more work and getting better.

27 October 2023.  Carrying on with testing the new ejection chains
  code today.  Tested skipping commented out in khe_se_ejector.c:

    [ "INRC2-4-030-1-6291", 1 solution, in 52.9 secs: cost 0.02165 ]

  As compared with not commented out:

    [ "INRC2-4-030-1-6291", 1 solution, in 26.7 secs: cost 0.02175 ]

  So commenting out decreases cost slightly and increases running
  time significantly.  Skipping reinstalled, best of 24:

    [ "INRC2-4-030-1-6291", 24 threads, 24 solves, 20 distinct costs, 17.7 secs:
      0.02150 0.02170 0.02175 0.02180 0.02185 0.02190 0.02200 0.02200
      0.02205 0.02225 0.02240 0.02250 0.02250 0.02250 0.02255 0.02260
      0.02260 0.02265 0.02280 0.02285 0.02290 0.02300 0.02340 0.02355
    ]

  At least the running time is pretty good.  With es_max_beam=2:

    [ "INRC2-4-030-1-6291", 1 solution, in 15.8 secs: cost 0.02180 ]

  Curious, result is slightly worse.  Best of 24:

    [ "INRC2-4-030-1-6291", 24 threads, 24 solves, 18 distinct costs, 36.3 secs:
      0.02115 0.02140 0.02150 0.02180 0.02185 0.02185 0.02200 0.02210
      0.02220 0.02230 0.02230 0.02235 0.02235 0.02235 0.02255 0.02270
      0.02275 0.02280 0.02285 0.02285 0.02295 0.02295 0.02300 0.02305
    ]

  Quite a lot better but also slower.  Returning to es_max_beam=1,
  let's now try es_swap_widening_max=24:

    [ "INRC2-4-030-1-6291", 24 threads, 24 solves, 16 distinct costs, 86.4 secs:
      0.02130 0.02155 0.02155 0.02160 0.02175 0.02180 0.02185 0.02190
      0.02190 0.02195 0.02200 0.02220 0.02220 0.02225 0.02230 0.02230
      0.02230 0.02235 0.02235 0.02250 0.02250 0.02255 0.02255 0.02275
    ]

  Somewhat better than the 0.02150 we got above, but slow.  Returning
  to the default value for es_swap_widening_max, which is 8, and
  changing es_move_widening_max from its default value (2) to 4 gives:

    [ "INRC2-4-030-1-6291", 24 threads, 24 solves, 19 distinct costs, 26.2 secs:
      0.02110 0.02130 0.02165 0.02165 0.02175 0.02185 0.02185 0.02190
      0.02215 0.02220 0.02220 0.02225 0.02230 0.02235 0.02250 0.02250
      0.02255 0.02260 0.02260 0.02265 0.02270 0.02285 0.02295 0.02310
    ]

  Best so far, I wonder why.  But we are still a long way off the pace.
  Here's a test of es_full_widening_on:

    [ "INRC2-4-030-1-6291", 24 threads, 24 solves, 17 distinct costs, 19.3 secs:
      0.02165 0.02170 0.02185 0.02185 0.02190 0.02195 0.02195 0.02200
      0.02200 0.02210 0.02210 0.02215 0.02220 0.02220 0.02240 0.02255
      0.02255 0.02255 0.02260 0.02265 0.02270 0.02275 0.02285 0.02335
    ]

  Inferior, I wonder why?  Now es_move_widening_max=8:

    [ "INRC2-4-030-1-6291", 24 threads, 24 solves, 19 distinct costs, 26.0 secs:
      0.02165 0.02170 0.02180 0.02195 0.02195 0.02200 0.02210 0.02220
      0.02225 0.02225 0.02225 0.02230 0.02235 0.02240 0.02245 0.02255
      0.02255 0.02260 0.02275 0.02290 0.02290 0.02295 0.02320 0.02355
    ]

  Also inferior.  Now for es_move_widening_max=4, es_full_widening_on=true:

    [ "INRC2-4-030-1-6291", 24 threads, 24 solves, 16 distinct costs, 19.4 secs:
      0.02165 0.02170 0.02185 0.02185 0.02195 0.02195 0.02200 0.02200
      0.02210 0.02210 0.02215 0.02220 0.02220 0.02240 0.02255 0.02255
      0.02255 0.02255 0.02260 0.02265 0.02270 0.02275 0.02285 0.02335
    ]

  Gone back to a single solution, all default values:

    [ "INRC2-4-030-1-6291", 1 solution, in 6.9 secs: cost 0.02230 ]

  I need to work on this now, try to improve it.

28 October 2023.  I was getting a crash when trying to print timetables.
  The test to run is (in src_hseval):

    ulimit -c unlimited
    ./hseval.cgi -c op:timetables_html \
      constraints: /home/jeff/tt/nurse/solve/res.xml

  All fixed now.  I was sorting an array of WEEK objects using the
  comparison function for time groups.  I don't know how it sneaked in.
  I've done some random tests of the timetable printing code, and
  they all worked.

  Implemented "es_whynot_monitor_id", similar to "gs_debug_monitor_id"
  except that it runs and debugs one separate augment at the end of
  the ejection solve.  It seems to be working well and to be useful.

29 October 2023.  I've reinstated two calls to rec in rs, giving:

    [ "INRC2-4-030-1-6291", 1 solution, in 16.6 secs: cost 0.02275 ]

  which is worse than with one call, but should be better on the whole:

    [ "INRC2-4-030-1-6291", 24 threads, 24 solves, 14 distinct costs, 18.2 secs:
      0.02145 0.02150 0.02150 0.02155 0.02165 0.02185 0.02205 0.02210
      0.02210 0.02210 0.02210 0.02210 0.02235 0.02235 0.02235 0.02245
      0.02245 0.02245 0.02250 0.02250 0.02255 0.02265 0.02275 0.02285
    ]

  And the same only with es_move_widening_max=4:

    [ "INRC2-4-030-1-6291", 24 threads, 24 solves, 16 distinct costs, 26.6 secs:
      0.02105 0.02130 0.02135 0.02160 0.02165 0.02170 0.02170 0.02170
      0.02170 0.02175 0.02185 0.02190 0.02190 0.02190 0.02195 0.02195
      0.02205 0.02205 0.02230 0.02235 0.02270 0.02280 0.02295 0.02295
    ]

  This is the best so far; I need to carry on with it.  After commenting
  out the calls to KheMTaskNeedsAssignment when moving from NULL:

    [ "INRC2-4-030-1-6291", 24 threads, 24 solves, 18 distinct costs, 24.9 secs:
      0.01895 0.01900 0.01905 0.01915 0.01925 0.01930 0.01935 0.01935
      0.01935 0.01950 0.01960 0.01965 0.01965 0.01965 0.01970 0.01970
      0.01980 0.01980 0.01985 0.01990 0.02025 0.02035 0.02040 0.02045
    ]

  What an enormous difference.  We're only 200 off the LOR17 solution,
  whose cost is 1695.  Previously my best was 1825 (11 March 2023).
  We're already in the ballpark - great.  I've made this change
  permanent, including documenting it.  Here's another run; it's
  supposed to be the same but it has come out different:

    [ "INRC2-4-030-1-6291", 24 threads, 24 solves, 18 distinct costs, 34.0 secs:
      0.01845 0.01855 0.01860 0.01870 0.01880 0.01885 0.01895 0.01895
      0.01905 0.01905 0.01910 0.01915 0.01915 0.01915 0.01930 0.01930
      0.01970 0.01975 0.01980 0.01985 0.01985 0.02005 0.02020 0.02025
    ] best soln has cost 0.01845 and diversifier 11

  And again, we're getting different results each time, but that is
  probably down to time limits.  Here are two more goes with the
  same parameters:

    [ "INRC2-4-030-1-6291", 24 threads, 24 solves, 19 distinct costs, 29.4 secs:
      0.01845 0.01890 0.01905 0.01905 0.01925 0.01930 0.01930 0.01935
      0.01955 0.01955 0.01985 0.01990 0.01995 0.02000 0.02015 0.02020
      0.02020 0.02025 0.02025 0.02030 0.02035 0.02050 0.02080 0.02085
    ] best soln has cost 0.01845 and diversifier 11

    [ "INRC2-4-030-1-6291", 24 threads, 24 solves, 18 distinct costs, 23.7 secs:
      0.01895 0.01900 0.01905 0.01915 0.01925 0.01930 0.01935 0.01935
      0.01935 0.01950 0.01960 0.01965 0.01965 0.01965 0.01970 0.01970
      0.01980 0.01980 0.01985 0.01990 0.02030 0.02035 0.02040 0.02045
    ] best soln (cost 0.01895) has diversifier 17

  The variation is more extreme than I'm used to, though; I wonder
  why that would be.  I've increased the time limit to 5 minutes,
  to see whether more time would allow everyone to settle:

    [ "INRC2-4-030-1-6291", 24 threads, 24 solves, 16 distinct costs, 24.5 secs:
      0.01890 0.01895 0.01895 0.01900 0.01905 0.01925 0.01925 0.01935
      0.01935 0.01945 0.01945 0.01965 0.01965 0.01965 0.01970 0.01980
      0.01980 0.01985 0.01990 0.02020 0.02025 0.02025 0.02040 0.02045
    ] best soln (cost 0.01890) has diversifier 0

    [ "INRC2-4-030-1-6291", 24 threads, 24 solves, 18 distinct costs, 24.6 secs:
      0.01895 0.01900 0.01905 0.01915 0.01925 0.01930 0.01935 0.01935
      0.01935 0.01950 0.01960 0.01965 0.01965 0.01965 0.01970 0.01970
      0.01980 0.01980 0.01985 0.02025 0.02035 0.02040 0.02045 0.02125
    ] best soln (cost 0.01895) has diversifier 17

  Apparently not.  Anyway we are in a pretty good place.  Then I added
  swapping the whole timetable when trying to repair avoid unavailable
  times defects (depth 1 only), and got this:

    [ "INRC2-4-030-1-6291", 24 threads, 24 solves, 19 distinct costs, 24.3 secs:
      0.01880 0.01895 0.01905 0.01910 0.01915 0.01935 0.01935 0.01945
      0.01945 0.01950 0.01955 0.01955 0.01960 0.01965 0.01970 0.01980
      0.01985 0.01985 0.02015 0.02025 0.02025 0.02030 0.02045 0.02055
    ] best soln (cost 0.01880) has diversifier 22

    [ "INRC2-4-030-1-6291", 24 threads, 24 solves, 19 distinct costs, 24.4 secs:
      0.01880 0.01895 0.01905 0.01915 0.01915 0.01935 0.01945 0.01945
      0.01950 0.01955 0.01955 0.01960 0.01965 0.01970 0.01980 0.01985
      0.01985 0.02015 0.02020 0.02025 0.02025 0.02030 0.02045 0.02055
    ] best soln (cost 0.01880) has diversifier 22

    [ "INRC2-4-030-1-6291", 24 threads, 24 solves, 18 distinct costs, 25.4 secs:
      0.01890 0.01895 0.01895 0.01905 0.01915 0.01915 0.01925 0.01935
      0.01945 0.01950 0.01955 0.01955 0.01965 0.01965 0.01970 0.01980
      0.02005 0.02015 0.02015 0.02020 0.02025 0.02025 0.02045 0.02060
    ] best soln (cost 0.01890) has diversifier 0

  It is slightly better, and slightly faster.  So worth doing.  There
  are no fewer avoid unavailable times defects in the final solution,
  but still the end result is better.

30 October 2023.  Added light pink boxes to HSEval timetables to
  signify places where a resource is unavailable for some times
  but not all.  All good.

  Did some careful whynot checking to verify that adding swapping
  whole timetables to repair avoid unavailable times defects (depth
  1 only) was working.  Here is the current state:

    [ "INRC2-4-030-1-6291", 24 threads, 24 solves, 18 distinct costs, 24.7 secs:
      0.01885 0.01890 0.01905 0.01915 0.01940 0.01940 0.01940 0.01950
      0.01965 0.01965 0.01970 0.01980 0.01985 0.01985 0.01990 0.02005
      0.02010 0.02010 0.02020 0.02020 0.02040 0.02065 0.02150 0.02180
    ]

  I wish it were more repeatable.  Why isn't it?  I've tried killing
  off the most fine-grained time limit test, and got these results:

    [ "INRC2-4-030-1-6291", 24 threads, 24 solves, 19 distinct costs, 29.5 secs:
      0.01860 0.01880 0.01885 0.01885 0.01895 0.01925 0.01930 0.01940
      0.01945 0.01950 0.01950 0.01960 0.01960 0.01960 0.01965 0.01975
      0.01995 0.02005 0.02020 0.02050 0.02055 0.02060 0.02080 0.02080
    ]

    [ "INRC2-4-030-1-6291", 24 threads, 24 solves, 18 distinct costs, 34.4 secs:
      0.01840 0.01885 0.01885 0.01895 0.01910 0.01910 0.01910 0.01920
      0.01920 0.01935 0.01940 0.01945 0.01945 0.01950 0.01955 0.01965
      0.01965 0.01970 0.01985 0.01990 0.02000 0.02010 0.02025 0.02035
    ] best soln (cost 0.01840) has diversifier 22

    [ "INRC2-4-030-1-6291", 24 threads, 24 solves, 19 distinct costs, 31.6 secs:
      0.01860 0.01885 0.01890 0.01905 0.01915 0.01935 0.01940 0.01940
      0.01940 0.01950 0.01955 0.01965 0.01965 0.01970 0.01980 0.01985
      0.01985 0.01990 0.02005 0.02020 0.02020 0.02040 0.02065 0.02180
    ] best soln (cost 0.01860) has diversifier 15

  In fact it has done nothing at all for repeatability, so I've
  brought back the fine-grained time limit test.  Here we go with a
  10 minute time limit:

    [ "INRC2-4-030-1-6291", 24 threads, 24 solves, 16 distinct costs, 40.6 secs:
      0.01825 0.01830 0.01875 0.01880 0.01885 0.01885 0.01885 0.01890
      0.01895 0.01895 0.01895 0.01905 0.01910 0.01910 0.01910 0.01915
      0.01915 0.01920 0.01935 0.01955 0.01960 0.01990 0.01990 0.02025
    ] best soln (cost 0.01825) has diversifier 4

  This is my equal best result, achieved previously on 11 March 2023.
  So why not try a 20 minute time limit?  The result is

    [ "INRC2-4-030-1-6291", 24 threads, 24 solves, 18 distinct costs, 29.5 secs:
      0.01860 0.01860 0.01885 0.01885 0.01890 0.01895 0.01925 0.01940
      0.01945 0.01950 0.01950 0.01960 0.01960 0.01960 0.01965 0.01975
      0.01995 0.02005 0.02020 0.02050 0.02055 0.02060 0.02080 0.02080
    ] best soln (cost 0.01860) has diversifier 8

    [ "INRC2-4-030-1-6291", 24 threads, 24 solves, 20 distinct costs, 35.4 secs:
      0.01850 0.01860 0.01885 0.01890 0.01905 0.01915 0.01920 0.01920
      0.01935 0.01935 0.01940 0.01940 0.01950 0.01955 0.01965 0.01965
      0.01970 0.01980 0.01985 0.01990 0.01995 0.02015 0.02025 0.02040
    ] best soln (cost 0.01850) has diversifier 22

    [ "INRC2-4-030-1-6291", 24 threads, 24 solves, 20 distinct costs, 24.7 secs:
      0.01840 0.01865 0.01885 0.01895 0.01915 0.01915 0.01935 0.01940
      0.01940 0.01950 0.01955 0.01965 0.01970 0.01970 0.01980 0.01985
      0.01985 0.01990 0.02000 0.02005 0.02010 0.02015 0.02025 0.02040
    ]

  There is something weird about this unrepeatability.  It really needs
  looking into.  Is there an uninitialized variable somewhere?  Here
  are some augment counts from one run:

    ] KheEjectorSolveEnd(final cost 0.01935, diversifier 20, augment_count 193)
    ] KheEjectorSolveEnd(final cost 0.02045, diversifier 4, augment_count 1454)
    ] KheEjectorSolveEnd(final cost 0.01950, diversifier 0, augment_count 522)
    ] KheEjectorSolveEnd(final cost 0.02140, diversifier 9, augment_count 1812)
    ] KheEjectorSolveEnd(final cost 0.02020, diversifier 4, augment_count 161)
    ] KheEjectorSolveEnd(final cost 0.02060, diversifier 8, augment_count 1406)
    ] KheEjectorSolveEnd(final cost 0.02010, diversifier 14, augment_count 1463)
    ] KheEjectorSolveEnd(final cost 0.02145, diversifier 1, augment_count 1780)
    ] KheEjectorSolveEnd(final cost 0.01985, diversifier 9, augment_count 424)
    ] KheEjectorSolveEnd(final cost 0.02020, diversifier 8, augment_count 260)
    ] KheEjectorSolveEnd(final cost 0.01965, diversifier 14, augment_count 107)
    ] KheEjectorSolveEnd(final cost 0.02075, diversifier 1, augment_count 124)
    ] KheEjectorSolveEnd(final cost 0.02065, diversifier 23, augment_count 1518)
    ] KheEjectorSolveEnd(final cost 0.02115, diversifier 7, augment_count 1968)
    ] KheEjectorSolveEnd(final cost 0.02080, diversifier 17, augment_count 2294)
    ] KheEjectorSolveEnd(final cost 0.02010, diversifier 23, augment_count 129)
    ] KheEjectorSolveEnd(final cost 0.02020, diversifier 17, augment_count 140)
    ] KheEjectorSolveEnd(final cost 0.01975, diversifier 7, augment_count 363)
    ] KheEjectorSolveEnd(final cost 0.02095, diversifier 19, augment_count 786)
    ] KheEjectorSolveEnd(final cost 0.02065, diversifier 19, augment_count 97)
    ] KheEjectorSolveEnd(final cost 0.02050, diversifier 5, augment_count 1866)
    ] KheEjectorSolveEnd(final cost 0.02020, diversifier 5, augment_count 126)
    ] KheEjectorSolveEnd(final cost 0.01865, diversifier 22, augment_count 2512)
    ] KheEjectorSolveEnd(final cost 0.01850, diversifier 22, augment_count 93)

  and here are some more from a supposedly identical run:

    ] KheEjectorSolveEnd(final cost 0.02010, diversifier 19, augment_count 218)
    ] KheEjectorSolveEnd(final cost 0.01990, diversifier 5, augment_count 1947)
    ] KheEjectorSolveEnd(final cost 0.01985, diversifier 11, augment_count 423)
    ] KheEjectorSolveEnd(final cost 0.02035, diversifier 21, augment_count 973)
    ] KheEjectorSolveEnd(final cost 0.01915, diversifier 5, augment_count 307)
    ] KheEjectorSolveEnd(final cost 0.02000, diversifier 21, augment_count 161)
    ] KheEjectorSolveEnd(final cost 0.01950, diversifier 15, augment_count 2795)
    ] KheEjectorSolveEnd(final cost 0.01865, diversifier 15, augment_count 183)
    ] KheEjectorSolveEnd(final cost 0.01975, diversifier 18, augment_count 1391)
    ] KheEjectorSolveEnd(final cost 0.01955, diversifier 18, augment_count 168)
    ] KheEjectorSolveEnd(final cost 0.01950, diversifier 17, augment_count 1542)
    ] KheEjectorSolveEnd(final cost 0.01935, diversifier 17, augment_count 219)
    ] KheEjectorSolveEnd(final cost 0.01920, diversifier 12, augment_count 2958)
    ] KheEjectorSolveEnd(final cost 0.01895, diversifier 12, augment_count 126)
    ] KheEjectorSolveEnd(final cost 0.01970, diversifier 3, augment_count 2088)
    ] KheEjectorSolveEnd(final cost 0.01940, diversifier 3, augment_count 191)
    ] KheEjectorSolveEnd(final cost 0.01855, diversifier 22, augment_count 2764)
    ] KheEjectorSolveEnd(final cost 0.02075, diversifier 8, augment_count 3496)
    ] KheEjectorSolveEnd(final cost 0.01840, diversifier 22, augment_count 97)
    ] KheEjectorSolveEnd(final cost 0.01990, diversifier 8, augment_count 427)
    ] KheEjectorSolveEnd(final cost 0.02085, diversifier 23, augment_count 3306)
    ] KheEjectorSolveEnd(final cost 0.02005, diversifier 1, augment_count 3582)
    ] KheEjectorSolveEnd(final cost 0.02015, diversifier 23, augment_count 207)
    ] KheEjectorSolveEnd(final cost 0.01985, diversifier 1, augment_count 140)

  They are totally different, suggesting that repeatability is hopeless.

  I've found out that my machine has 12 cores, not 24, according to
  ~/misc/new_computer/to_gregr04, so I seem to have decided to run
  24 solves, two per thread, although I don't remember making that
  decision now.  Curiously, according to /proc/cpuinfo my computer
  has 20 processors.  Anyway I'm trying again but running with 12
  threads this time:

    [ "INRC2-4-030-1-6291", 12 threads, 24 solves, 20 distinct costs, 30.0 secs:
      0.01850 0.01885 0.01890 0.01905 0.01910 0.01920 0.01925 0.01935
      0.01940 0.01940 0.01945 0.01945 0.01945 0.01950 0.01955 0.01955
      0.01965 0.01975 0.01985 0.01990 0.01995 0.02000 0.02010 0.02020
    ] best soln (cost 0.01850) has diversifier 22

    [ "INRC2-4-030-1-6291", 12 threads, 24 solves, 21 distinct costs, 36.4 secs:
      0.01850 0.01865 0.01885 0.01890 0.01905 0.01915 0.01920 0.01935
      0.01940 0.01940 0.01940 0.01945 0.01950 0.01955 0.01965 0.01965
      0.01970 0.01985 0.01990 0.02010 0.02015 0.02020 0.02025 0.02040
    ] best soln (cost 0.01850) has diversifier 22

    [ "INRC2-4-030-1-6291", 12 threads, 24 solves, 17 distinct costs, 39.5 secs:
      0.01850 0.01850 0.01880 0.01890 0.01905 0.01910 0.01925 0.01935
      0.01935 0.01940 0.01940 0.01945 0.01950 0.01950 0.01955 0.01955
      0.01965 0.01990 0.01990 0.01990 0.02000 0.02005 0.02010 0.02035
    ] best soln (cost 0.01850) has diversifier 4

  Not quite as wacky but still not what you would call repeatable.
  How many cores does it really have?  Here is a run with 10 threads:

    [ "INRC2-4-030-1-6291", 10 threads, 10 solves, 9 distinct costs, 33.9 secs:
      0.01850 0.01860 0.01880 0.01885 0.01900 0.01945 0.01950 0.01990
      0.02005 0.02005
    ] best soln (cost 0.01850) has diversifier 4

  NB the cost of 1810 in the next run is a new best for me.  Here is
  a run with 20 threads:

    [ "INRC2-4-030-1-6291", 20 threads, 20 solves, 16 distinct costs, 39.7 secs:
      0.01810 0.01850 0.01870 0.01875 0.01880 0.01885 0.01885 0.01890
      0.01910 0.01910 0.01910 0.01915 0.01920 0.01935 0.01945 0.01950
      0.01960 0.01990 0.01990 0.02005
    ] best soln (cost 0.01810) has diversifier 15

  Here is a run with 30 threads:

    [ "INRC2-4-030-1-6291", 30 threads, 30 solves, 23 distinct costs, 33.1 secs:
      0.01840 0.01850 0.01850 0.01865 0.01870 0.01885 0.01905 0.01905
      0.01910 0.01920 0.01940 0.01940 0.01940 0.01945 0.01950 0.01955
      0.01970 0.01985 0.01990 0.01995 0.02005 0.02010 0.02020 0.02020
      0.02025 0.02025 0.02035 0.02040 0.02040 0.02135
    ] best soln (cost 0.01840) has diversifier 17

  Here is a run with 40 threads:

    [ "INRC2-4-030-1-6291", 40 threads, 40 solves, 27 distinct costs, 42.8 secs:
      0.01840 0.01840 0.01865 0.01885 0.01895 0.01900 0.01910 0.01915
      0.01915 0.01920 0.01930 0.01940 0.01940 0.01950 0.01955 0.01955
      0.01965 0.01965 0.01970 0.01970 0.01970 0.01985 0.01985 0.01985
      0.01990 0.02000 0.02005 0.02010 0.02010 0.02010 0.02015 0.02025
      0.02025 0.02030 0.02040 0.02040 0.02045 0.02080 0.02100 0.02135
    ] best soln (cost 0.01840) has diversifier 22

  Here is a run with 50 threads:

    [ "INRC2-4-030-1-6291", 50 threads, 50 solves, 30 distinct costs, 78.5 secs:
      0.01830 0.01845 0.01850 0.01870 0.01870 0.01875 0.01880 0.01885
      0.01885 0.01890 0.01895 0.01895 0.01895 0.01895 0.01900 0.01900
      0.01905 0.01910 0.01915 0.01915 0.01915 0.01915 0.01915 0.01920
      0.01930 0.01930 0.01935 0.01935 0.01940 0.01940 0.01945 0.01945
      0.01945 0.01945 0.01945 0.01955 0.01960 0.01960 0.01965 0.01970
      0.01970 0.01980 0.01980 0.01985 0.01990 0.01995 0.02005 0.02020
      0.02025 0.02105
    ] best soln (cost 0.01830) has diversifier 15

  As a start on looking into the repeatability issue, I've decided to
  try a few single runs with no time limit.  Here they are:

    [ "INRC2-4-030-1-6291", 1 solution, in 12.4 secs: cost 0.01890 ]
    [ "INRC2-4-030-1-6291", 1 solution, in 7.7 secs: cost 0.01895 ]
    [ "INRC2-4-030-1-6291", 1 solution, in 5.5 secs: cost 0.01980 ]
    [ "INRC2-4-030-1-6291", 1 solution, in 8.5 secs: cost 0.01915 ]

  Something weird is definitely going on.  Without ejection chains:

    [ "INRC2-4-030-1-6291", 1 solution, in 0.3 secs: cost 0.02585 ]
    [ "INRC2-4-030-1-6291", 1 solution, in 0.3 secs: cost 0.02585 ]
    [ "INRC2-4-030-1-6291", 1 solution, in 0.3 secs: cost 0.02585 ]
    [ "INRC2-4-030-1-6291", 1 solution, in 0.3 secs: cost 0.02585 ]

  So something non-repeatable is going on in the ejection chains code.
  Here we go again with a single call to the ejection chains code:

    [ "INRC2-4-030-1-6291", 1 solution, in 3.1 secs: cost 0.02070 ]
    [ "INRC2-4-030-1-6291", 1 solution, in 2.0 secs: cost 0.02205 ]
    [ "INRC2-4-030-1-6291", 1 solution, in 3.1 secs: cost 0.02070 ]
    [ "INRC2-4-030-1-6291", 1 solution, in 2.9 secs: cost 0.02155 ]

  Yep, it's unrepeatable again.  After fixing an uninitialized
  variable that I found in the iterator, I got this:

    [ "INRC2-4-030-1-6291", 1 solution, in 4.3 secs: cost 0.02100 ]
    [ "INRC2-4-030-1-6291", 1 solution, in 4.3 secs: cost 0.02100 ]
    [ "INRC2-4-030-1-6291", 1 solution, in 4.2 secs: cost 0.02100 ]
    [ "INRC2-4-030-1-6291", 1 solution, in 4.2 secs: cost 0.02100 ]

  So that looks like the problem solved.  Now let's go back to a
  proper run:

    [ "INRC2-4-030-1-6291", 12 threads, 24 solves, 18 distinct costs, 45.3 secs:
      0.01810 0.01850 0.01855 0.01860 0.01865 0.01870 0.01870 0.01875
      0.01875 0.01880 0.01885 0.01890 0.01905 0.01905 0.01910 0.01910
      0.01915 0.01930 0.01940 0.01945 0.01945 0.01945 0.01955 0.01970
    ] best soln (cost 0.01810) has diversifier 20

  This is something like it!  Is it repeatable?

    [ "INRC2-4-030-1-6291", 12 threads, 24 solves, 18 distinct costs, 47.8 secs:
      0.01810 0.01850 0.01855 0.01860 0.01865 0.01870 0.01870 0.01875
      0.01875 0.01880 0.01885 0.01890 0.01905 0.01905 0.01910 0.01910
      0.01915 0.01930 0.01940 0.01945 0.01945 0.01945 0.01955 0.01970
    ] best soln (cost 0.01810) has diversifier 20

  We have repeatability, and we have a smashing new best cost.  This was
  running with no time limit.  If we add a 5 minute time limit we get this:

    [ "INRC2-4-030-1-6291", 12 threads, 24 solves, 18 distinct costs, 45.3 secs:
      0.01810 0.01850 0.01855 0.01860 0.01865 0.01870 0.01870 0.01875
      0.01875 0.01880 0.01885 0.01890 0.01905 0.01905 0.01910 0.01910
      0.01915 0.01930 0.01940 0.01945 0.01945 0.01945 0.01955 0.01970
    ] best soln (cost 0.01810) has diversifier 20

  It's the same; evidently the time limit did not bite.  Here is a
  much tighter time limit, 20 seconds:

    [ "INRC2-4-030-1-6291", 12 threads, 24 solves, 18 distinct costs, 36.6 secs:
      0.01820 0.01855 0.01860 0.01865 0.01870 0.01875 0.01875 0.01885
      0.01885 0.01890 0.01905 0.01905 0.01910 0.01910 0.01915 0.01920
      0.01930 0.01940 0.01945 0.01945 0.01945 0.01970 0.01985 0.01995
    ] best soln (cost 0.01820) has diversifier 20

  There is some loss of quality, unsurprisingly.  Here is a single run
  with diversifier 20:

    [ "INRC2-4-030-1-6291", 1 solution, in 17.0 secs: cost 0.01810 ]

  Here are the summaries for the LOR and 1810 solutions:

    Summary (LOR)                                      Inf.    Obj.
    ----------------------------------------------------------------
    Assign Resource Constraint (15 points)                      450
    Avoid Unavailable Times Constraint (3 points)                30
    Cluster Busy Times Constraint (19 points)                   960
    Limit Active Intervals Constraint (11 points)               255
    ----------------------------------------------------------------
    Grand total (48 points)                                    1695

    Summary (1810)                                     Inf.    Obj.
    ----------------------------------------------------------------
    Assign Resource Constraint (13 points)                      390
    Avoid Unavailable Times Constraint (5 points)                50
    Cluster Busy Times Constraint (21 points)                  1040
    Limit Active Intervals Constraint (13 points)               330
    ----------------------------------------------------------------
    Grand total (52 points)                                    1810

  The 1810 solution is not all that far behind; the extra cost is
  concentrated in the cluster and limit active intervals defects.

  When repairing ARC:NA=s30|NWNurse=h1:1/2Thu:Day/NA=s30|NWNurse=h1:1
  we try this move:

    +Move(2Thu, -, NU_5, 2Thu:Day.5): move {2Thu:Day.5 [2Thu]} and eject {2Thu:Day.12 [2Thu]}

  This moves NU_5 from one day shift (<R>NA=h1|NWCaretaker=h1:1</R>) to another
  (<R>NA=h1|NWNurse=h1:1</R>).  It's actually a good thing to do because
  nurses are harder to come by than caretakers.

  Here's a run with gs_diversifier=20:

    [ "INRC2-4-030-1-6291", 12 threads, 24 solves, 1 distinct cost, 90.4 secs:
      0.01945 0.01945 0.01945 0.01945 0.01945 0.01945 0.01945 0.01945
      0.01945 0.01945 0.01945 0.01945 0.01945 0.01945 0.01945 0.01945
      0.01945 0.01945 0.01945 0.01945 0.01945 0.01945 0.01945 0.01945
    ] best soln (cost 0.01945) has diversifier 20

  I did it by accident but it is a good repeatability test.  Here's a run with
  max_beam=2:

    [ "INRC2-4-030-1-6291", 12 threads, 24 solves, 17 distinct costs, 111.6 secs:
      0.01825 0.01825 0.01830 0.01845 0.01855 0.01855 0.01855 0.01860
      0.01880 0.01885 0.01885 0.01885 0.01900 0.01900 0.01900 0.01905
      0.01915 0.01920 0.01930 0.01940 0.01945 0.01960 0.01970 0.01975
    ] best soln (cost 0.01825) has diversifier 2

  It's pretty good but slow.  Here's a retest, back to max_beam=1:

    [ "INRC2-4-030-1-6291", 12 threads, 24 solves, 18 distinct costs, 45.8 secs:
      0.01810 0.01850 0.01855 0.01860 0.01865 0.01870 0.01870 0.01875
      0.01875 0.01880 0.01885 0.01890 0.01905 0.01905 0.01910 0.01910
      0.01915 0.01930 0.01940 0.01945 0.01945 0.01945 0.01955 0.01970
    ] best soln (cost 0.01810) has diversifier 20

  Yep, still good.

  I try this move fairly deep down:

    +Move(2Thu, -, CT_17, 2Thu:Late.8): move {2Thu:Late.8 [2Thu]} and eject {2Thu:Late.8 [2Thu]}

  but it would be more promising to eject 2Thu and 2Fri:

    +Move(2Thu-2Fri, -, CT_17, 2Thu:Late.8): move {2Thu:Late.8, 2Fri:Late.8 [2Thu-2Fri]} and
      eject {2Thu:Late.8, 2Fri:Late.8 [2Thu-2Fri]}

  But this does not lead anywhere, according to the printout.  To
  investigate this point, here is a run with es_fresh_visits=true:

    [ "INRC2-4-030-1-6291", 12 threads, 24 solves, 12 distinct costs, 94.7 secs:
      0.01835 0.01840 0.01840 0.01865 0.01865 0.01875 0.01880 0.01880
      0.01885 0.01885 0.01885 0.01885 0.01890 0.01895 0.01895 0.01895
      0.01895 0.01915 0.01915 0.01920 0.01920 0.01930 0.01930 0.01945
    ] best soln (cost 0.01835) has diversifier 16

  Slower, not a wonderful result.  Now es_swap_widening_max=4:

    [ "INRC2-4-030-1-6291", 12 threads, 24 solves, 16 distinct costs, 23.8 secs:
      0.01875 0.01895 0.01905 0.01905 0.01930 0.01930 0.01935 0.01940
      0.01955 0.01960 0.01960 0.01965 0.01965 0.01965 0.01965 0.01980
      0.01980 0.01990 0.01990 0.01995 0.02000 0.02005 0.02020 0.02090
    ] best soln (cost 0.01875) has diversifier 15

  And now es_swap_widening_max=8:

    [ "INRC2-4-030-1-6291", 12 threads, 24 solves, 18 distinct costs, 45.1 secs:
      0.01810 0.01850 0.01855 0.01860 0.01865 0.01870 0.01870 0.01875
      0.01875 0.01880 0.01885 0.01890 0.01905 0.01905 0.01910 0.01910
      0.01915 0.01930 0.01940 0.01945 0.01945 0.01945 0.01955 0.01970
    ] best soln (cost 0.01810) has diversifier 20

  And now es_swap_widening_max=12:

    [ "INRC2-4-030-1-6291", 12 threads, 24 solves, 15 distinct costs, 69.9 secs:
      0.01835 0.01835 0.01850 0.01850 0.01855 0.01855 0.01860 0.01880
      0.01885 0.01895 0.01895 0.01895 0.01900 0.01910 0.01910 0.01910
      0.01910 0.01930 0.01935 0.01935 0.01940 0.01945 0.01955 0.02005
    ] best soln (cost 0.01835) has diversifier 12

  And now es_swap_widening_max=16:

    [ "INRC2-4-030-1-6291", 12 threads, 24 solves, 18 distinct costs, 111 secs:
      0.01805 0.01850 0.01850 0.01865 0.01870 0.01880 0.01880 0.01880
      0.01885 0.01890 0.01895 0.01900 0.01905 0.01910 0.01920 0.01920
      0.01920 0.01930 0.01930 0.01955 0.01960 0.01965 0.01975 0.01985
    ] best soln (cost 0.01805) has diversifier 2

  Another new best, 1805.  And now es_swap_widening_max=24:

    [ "INRC2-4-030-1-6291", 12 threads, 24 solves, 18 distinct costs, 186 secs:
      0.01805 0.01835 0.01840 0.01855 0.01865 0.01870 0.01870 0.01890
      0.01890 0.01895 0.01910 0.01915 0.01920 0.01930 0.01940 0.01940
      0.01940 0.01945 0.01945 0.01955 0.01955 0.01965 0.01980 0.01985
    ] best soln (cost 0.01805) has diversifier 8

  Back to the default es_swap_widening_max, best of 64 solns:

    [ "INRC2-4-030-1-6291", 12 threads, 64 solves, 29 distinct costs, 105 secs:
      0.01810 0.01825 0.01850 0.01855 0.01855 0.01860 0.01860 0.01865
      0.01870 0.01870 0.01870 0.01875 0.01875 0.01875 0.01880 0.01880
      0.01880 0.01885 0.01885 0.01890 0.01890 0.01895 0.01895 0.01895
      0.01895 0.01900 0.01905 0.01905 0.01905 0.01905 0.01910 0.01910
      0.01910 0.01910 0.01915 0.01915 0.01915 0.01920 0.01930 0.01930
      0.01940 0.01940 0.01940 0.01945 0.01945 0.01945 0.01945 0.01945
      0.01945 0.01950 0.01955 0.01955 0.01960 0.01960 0.01960 0.01970
      0.01970 0.01975 0.01980 0.01980 0.01985 0.01985 0.01990 0.02035
    ] best soln (cost 0.01810) has diversifier 20

31 October 2023.  MaxWorkingWeekends seems to be a problem: the LOR
  solution has 5 violations, costing 5 * 30 = 150, whereas my 1805
  solution has 8 violations, costing 8 * 30 = 240.  That is 90 more,
  which gets you most of the way from LOR's 1695 to my 1805.  I need
  a really good repair for MaxWorkingWeekends.

  Documented a new rs_drs_resources option value which will implement
  my new idea for repairing weekends: identify three resources such
  that one is overloaded with busy weekends while at least one is
  underloaded, then do a full cycle optimal reassignment of those
  three resources.

1 November 2023.  Nothing done today except a review of yesterday's doc.

2 November 2023.  Started on restructuring KheResourceSelectMakeCluster.

4 November 2023.  Other jobs yesterday.  Got KheResourceSelectMakeCluster
  working today, and on its first test it found one improvement.  I need
  to limit the running time in some way and then try again on a larger
  test.  Without cluster we got this:
  
    [ "INRC2-4-030-1-6291", 1 solution, in 29.3 secs: cost 0.01805 ]

  which equals our best solution so far.  The parameters were

    -s ps_soln_group=KHE24x1 ps_threads=1 ps_make=1 ps_keep=1	\
    rs="rt(rg(rrq, rts, rrm, rec)), rt(rec)"			\
    rs_time_limit=20:0						\
    gs_diversifier=2						\
    gs_matching_off=true					\
    rs_time_sweep_daily_time_limit=2				\
    es_move_widening_max=4					\
    es_swap_widening_max=16					\
    rs_drs_resources="cluster(MaxWorkingWeekends, 3)"		\
    rs_drs_daily_expand_limit=20000				\

  With cluster we got this:

    [ "INRC2-4-030-1-6291", 1 solution, in 12.2 mins: cost 0.01795 ]

  which is not massively better but it is close to my best solution
  (0.01785 from 12 March 2023).  Here is a larger run with parameters

    -s ps_soln_group=KHE24x24 ps_threads=12 ps_make=24 ps_keep=1 \
    rs="rt(rg(rrq, rts, rrm, rec)), rt(rec)"			\
    rs_time_limit=20:0						\
    gs_matching_off=true					\
    rs_time_sweep_daily_time_limit=2				\
    es_move_widening_max=4					\
    es_swap_widening_max=16					\

    [ "INRC2-4-030-1-6291", 12 threads, 24 solves, 18 distinct costs, 105 secs:
      0.01805 0.01850 0.01850 0.01865 0.01870 0.01880 0.01880 0.01880
      0.01885 0.01890 0.01895 0.01900 0.01905 0.01910 0.01920 0.01920
      0.01920 0.01930 0.01930 0.01955 0.01960 0.01965 0.01975 0.01985
    ] best soln (cost 0.01805) has diversifier 2

  And here is a larger run adding dynamic vlsn:

    -s ps_soln_group=KHE24x24 ps_threads=12 ps_make=24 ps_keep=1 \
    rs="rt(rg(rrq, rts, rrm, rec)), rt(rec, rdv)"		\
    rs_time_limit=20:0						\
    gs_matching_off=true					\
    rs_time_sweep_daily_time_limit=2				\
    es_move_widening_max=4					\
    es_swap_widening_max=16					\
    rs_drs_resources="cluster(MaxWorkingWeekends, 3)"		\
    rs_drs_daily_expand_limit=20000				\

    [ "INRC2-4-030-1-6291", 12 threads, 24 solves, 17 distinct costs, 51.6 mins:
      0.01795 0.01835 0.01840 0.01845 0.01850 0.01865 0.01865 0.01870
      0.01880 0.01880 0.01885 0.01885 0.01895 0.01895 0.01900 0.01910
      0.01920 0.01920 0.01920 0.01945 0.01950 0.01950 0.01955 0.01975
    ] best soln (cost 0.01795) has diversifier 2

  This includes the new best that we got earlier today.  Dynamic vlsn
  found lots of improvements, but it runs slowly, even with
  rs_drs_daily_expand_limit=20000 (which seems like a good value:  it kicks
  in just occasionally, about 27 times on a run with 24 parallel solves).
  For comparison, here are the same settings but just running rec again:

    [ "INRC2-4-030-1-6291", 12 threads, 24 solves, 14 distinct costs, 140 secs:
      0.01805 0.01850 0.01850 0.01865 0.01865 0.01870 0.01870 0.01880
      0.01880 0.01880 0.01885 0.01885 0.01895 0.01895 0.01900 0.01900
      0.01905 0.01915 0.01920 0.01920 0.01925 0.01955 0.01975 0.01975
    ] best soln (cost 0.01805) has diversifier 2

  There are differences but only slight ones.

  Audited everything and added restart on success for cluster.  Also
  changed the spec to omit num from cluster, since it is not used.
  Then ran with these options set:

    -s ps_soln_group=KHE24x24 ps_threads=12 ps_make=24 ps_keep=1	\
    rs="rt(rg(rrq, rts, rrm, rec)), rt(rec), rt(rdv), rt(rec)"		\
    rs_time_limit=10:0					\
    gs_matching_off=true				\
    rs_time_sweep_daily_time_limit=2			\
    es_move_widening_max=4				\
    es_swap_widening_max=16				\
    rs_drs_resources="cluster(MaxWorkingWeekends)"	\
    rs_drs_daily_expand_limit=10000			\

    [ "INRC2-4-030-1-6291", 12 threads, 24 solves, 20 distinct costs, 18.9 mins:
      0.01785 0.01795 0.01835 0.01840 0.01850 0.01855 0.01865 0.01865
      0.01870 0.01870 0.01875 0.01880 0.01880 0.01885 0.01895 0.01900
      0.01905 0.01910 0.01915 0.01920 0.01920 0.01925 0.01950 0.01970
    ] best soln (cost 0.01785) has diversifier 10

  This 1785 result is an equal new best, only 90 above LOR's 1695.  And
  (1785 - 1695)/1695 = 5.3%, which is close to the 5% we are aiming
  for.  But it would be good to do better, and the running time is
  too slow.

  Here is the same run only changing to

    rs="rt(rg(rrq, rts, rrm, rec)), rt(rec), rt(rec)"

  to show what happens if we take rdv away:

    [ "INRC2-4-030-1-6291", 12 threads, 24 solves, 14 distinct costs, 138 secs:
      0.01805 0.01850 0.01850 0.01865 0.01865 0.01870 0.01870 0.01880
      0.01880 0.01880 0.01885 0.01885 0.01895 0.01895 0.01900 0.01900
      0.01905 0.01915 0.01920 0.01920 0.01925 0.01955 0.01975 0.01975
    ] best soln (cost 0.01805) has diversifier 2

  So including cluster improves the best result by 20.

5 November 2023.  For MaxWorkingWeekends, the LOR17 solution has total
  deviation 5 across 3 resources.  My 1805 solution has total deviation
  8 across 5 resources, making a total extra cost of (8 - 5) * 30 = 90.
  I retrieved my 1785 solution by running with gs_diversifier=10:

    [ "INRC2-4-030-1-6291", 1 solution, in 5.1 mins: cost 0.01785 ]

  It has total deviation 9 across 5 resources, so it has not improved
  things by improving MaxWorkingWeekends at all; on the contrary it
  has taken them the other way.  What about MaxAssignments?

  Added code to preferably find (over, under, under) rather than
  (over, max, under).  Let's see how it goes (gs_diversifier=10):

    [ "INRC2-4-030-1-6291", 1 solution, in 53.5 secs: cost 0.01905 ]

  It's faster but the result is worse, so I've taken it away for now.
  But first I tried it with rs_drs_resources="cluster(MaxAssignments)":

    [ "INRC2-4-030-1-6291", 1 solution, in 6.5 mins: cost 0.01895 ]

  Another dud.

23 November 2023.  Had some time off, went bushwalking.  Here is my
  definition of success in practice:

      "A solver is successful in practice if, on every instance that
      is likely to be encountered in practice, it finds a solution whose
      cost is within 10% of the best known when run for 5 minutes, and
      within 5% of the best known when run for 60 minutes."

   On 4 November I found a solution of cost 1785 in 18.9 minutes.
   Since (1785 - 1695)/1695 = 5.3%, this almost satisfies the 5%
   rule in 60 minutes that we are aiming for.

   On 4 November there is also a solution of cost 1805 found in
   138 seconds.  Since (1805 - 1695)/1695 = 6.5%, this is well
   within the 10% in 5 minutes rule.

   Conclusion:  it is almost time to stop improving the solver on a
   single instance, and move to investigating its robustness.

   I started work in /home/jeff/tt/nurse/pap_solve24/conf1/khe24.tex
   today.  I have a skeleton with no data in it yet.

24 November 2023.  Worked over the new nurse rostering paper again.
  It's basically all done; it's time to start generating some data.

  Implemented rs_drs_fail_limit as documented.  This will abandon
  VLSN after 20 consecutive unsuccessful attempts.
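
  The rule just described can be sketched as follows, with the VLSN
  attempts replaced by a precomputed array of outcomes (the function
  name and framing are hypothetical, not KHE's actual API):  stop once
  fail_limit consecutive attempts fail; any success resets the count.

```c
#include <assert.h>
#include <stdbool.h>

/* Run through simulated attempt outcomes, abandoning the loop after
   fail_limit consecutive failures; return the number of successes. */
static int RunWithFailLimit(const bool *outcomes, int n, int fail_limit)
{
  int i, fails = 0, successes = 0;
  for( i = 0;  i < n && fails < fail_limit;  i++ )
  {
    if( outcomes[i] )
    {
      successes++;
      fails = 0;   /* any success resets the consecutive-failure count */
    }
    else
      fails++;
  }
  return successes;
}
```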

25 November 2023.  Worked over the new nurse rostering paper again.
  Also compared it with the KHE18 paper to check for overlaps.  I
  think it's all good in that department.

  Marked the "Putting it all together" section of the resource solvers
  chapter obsolete, and enhanced the "Putting it all together yourself"
  section to the point where it can do everything requisite.  Now I
  need to implement what I've added to the "Putting it all together
  yourself" section.

  Done the boilerplate in khe_sr_combined.c, and most of the rest
  too.  

26 November 2023.  The old default solver is basically this:

    rt(stage1, stage2), rt(stage3)

  where

    stage1 = rin(rsm(rcm, rgc(rcx, repair(1.0, 0))))
    stage2 = rin(dfs, repair(0.5, 0))
    stage3 = repair(0.5, 1)

  where

    repair(x, v) = rdo(rrm(v), rec)

  Here v is "no big deal", and x influences the time limits, so its
  effect can be obtained explicitly in the rs version of this, which is:

    rin(rt(rsm(rcm, rgc(rcx, rrm, rec)))), rt(rrm, rec)

  with timing stuff added:

    1:rt(rin(rrq, rgc(rcx))),
    2:rt(rin(rgc(rrm, rec, rdv))),
    1:rt(red, rrm, rec, rdv)

  The construction phase runs quickly anyway and each day of it
  is subject to its own separate time limit (3 seconds), so the
  significant timing information here is that the first repair
  phase is given twice the time of the second repair phase, and
  that each repair algorithm gets equal time.

  Added -n<int> alongside -i and -x when reading archives.  This
  will be handy for testing, I can pretend that each archive
  contains (say) only 2 instances by using -n2.
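
  The parsing could look something like this (a sketch only; the real
  archive-reading code and its option handling are not shown here):
  -n<int> limits how many instances of an archive are read, so -n2
  pretends each archive contains only its first 2 instances.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Return the instance limit encoded in arg, or -1 (no limit) if arg
   is not a -n<int> flag.  Helper name is hypothetical. */
static int ParseInstanceLimit(const char *arg)
{
  if( strncmp(arg, "-n", 2) == 0 && arg[2] != '\0' )
    return atoi(arg + 2);
  return -1;
}
```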

27 November 2023.  Sorting out default values for various options
  that are the ones that I really want to use, and turning off
  debug prints ready for real test running.

  Configured VLSN so that by default it generates calls at random,
  for 3 resources and 28 days.

  Now testing the time limit at the start of KheDrsSolnExpand,
  which should be fine-grained enough.

  Running 12 solutions each with a 5 minute time limit:

    [ "INRC2-4-030-1-6291", 12 threads, 12 solves, 10 distinct costs, 5.0 mins:
      0.01865 0.01890 0.01905 0.01910 0.01920 0.01920 0.01935 0.01940
      0.01945 0.01955 0.01955 0.01970
    ] best soln (cost 0.01865) has diversifier 7

  Running 24 solutions each with a 2.5 minute time limit:

    [ "INRC2-4-030-1-6291", 12 threads, 24 solves, 16 distinct costs, 5.0 mins:
      0.01840 0.01865 0.01895 0.01900 0.01905 0.01910 0.01910 0.01915
      0.01920 0.01920 0.01925 0.01935 0.01945 0.01950 0.01950 0.01955
      0.01955 0.01975 0.01975 0.01975 0.01975 0.01975 0.01985 0.01990
    ] best soln (cost 0.01840) has diversifier 23

  So it's slightly better this way.

  Done several runs now and the makefile for the new paper seems to
  be generating good stuff.

28 November 2023.  Working on yesterday's problems.

    [ "INRC2-8-030-1-67535629", 12 threads, 24 solves, 24 distinct, 6.1 mins:
      0.02315 0.02445 0.02465 0.02470 0.02475 0.02500 0.02515 0.02540
      0.02575 0.02915 0.03140 1.02180 1.02195 1.02250 1.02275 1.02325
      1.02340 1.02345 1.02380 1.02405 1.02420 1.02430 1.02470 1.02495
    ] best soln (cost 0.02315) has diversifier 18

    [ "INRC2-8-030-1-67535629", 12 threads, 24 solves, 21 distinct, 5.7 mins:
      0.02350 0.02465 0.02465 0.02495 0.02495 0.02510 0.02510 0.02530
      0.02560 0.02580 0.02645 1.02230 1.02245 1.02275 1.02280 1.02320
      1.02325 1.02365 1.02380 1.02390 1.02425 1.02430 1.02440 1.02525
    ] best soln (cost 0.02350) has diversifier 18

  This is not as good as KHE20x8, which produced 2225, or LOR17, which
  produced 1735.  And 2315 / 1735 = 1.334, which is not good either.
  NB 2225 / 1735 = 1.282, which is not great either.

29 November 2023.  Decided to explain arena sets and arenas in a
  new section of the Introduction chapter (intro.arena).  So far
  I've just flown in the text of an old appendix.

30 November 2023.  Worked on the new section of the intro.  It
  seems fine but it is not the place to go into details.  So I
  need to develop a new implementation with a new interface and
  new documentation (somewhere).

1 December 2023.  Working on the revised arenas and arena sets.
  Finished the documentation, ready to implement.  I've already
  created ha_all.c by concatenating the old ha*.c files, and
  got a clean compile using it.

2 December 2023.  Working on the revised arenas and arena sets.
  Defined and documented the ps_avail_mem option.  Getting my
  head around the current implementation and what needs to change.

3 December 2023.  Working on the revised arenas and arena sets.
  I'm well into the implementation now.

4 December 2023.  Working on the revised arenas and arena sets.
  I'm well into the implementation now.  Indeed I have a clean
  compile of the whole system using the new interface.

  Judging from a test I did, including overhead, the minimum 
  size of a malloc chunk is 32 bytes (4 8-byte words), and the
  overhead is 16 bytes (2 8-byte words) per chunk.  So I need
  to ask for chunks no smaller than 32 bytes, and expect to
  lose 16 bytes in overhead.
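
  The figures above came from a test along these lines (a glibc-specific
  probe; malloc_usable_size is a glibc extension, not portable C, and
  the exact numbers depend on the allocator):

```c
#include <assert.h>
#include <stdlib.h>
#include <malloc.h>   /* malloc_usable_size: glibc extension */

/* Return how much usable space the allocator actually grants for a
   request of the given size; tiny requests are rounded up to the
   allocator's minimum chunk size. */
static size_t ProbeUsableSize(size_t request)
{
  void *p = malloc(request);
  size_t res = (p != NULL ? malloc_usable_size(p) : 0);
  free(p);
  return res;
}
```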

5 December 2023.  Working on the revised arenas and arena sets.
  Just finished auditing HaResizableFree, so nearly done.

6 December 2023.  Working on the revised arenas and arena sets.
  I now have something that looks like the final version, with
  a clean compile.  But it needs a very careful audit.

7 December 2023.  Working on the revised arenas and arena sets.

8 December 2023.  Working on the revised arenas and arena sets.
  Audited HaArenaSetMake, should work now.  Deleted KheSolnSetArenaSet
  and updated khe_sm_parallel_solve.c now.  Added the ps_avail_mem
  option.

9 December 2023.  Working on the revised arenas and arena sets.
  Documented the fix for arenas being lost when the long jump
  is taken, implemented it, and audited the implementation.
  I've finished ha_all.c and khe_sm_parallel_solve.c, so I am
  finished, strictly speaking, and I'm ready to test.  I've
  also added memory protection to the dynamic programming solver.

10 December 2023.  Working on the revised arenas and arena sets.
  Revised the documentation, except not "Howard's memory allocator"
  in file ha; it needs a substantial rewrite.  Started testing.

11 December 2023.  Testing the revised arenas and arena sets.
  I've fixed the pthread_create bug, which was a bug in
  khe_sm_parallel_solve.c (something I forgot to finish off),
  but now I have a weird bug in KheSolnMake.

12 December 2023.  Testing the revised arenas and arena sets.
  Fixed a few bugs in khe_soln.c; it all seems to be working now.
  Running the paper, there was an assert error, but meanwhile I
  got this:

    [ "INRC2-8-030-1-27093606", 12 threads,24 solves,24 distinct costs,6.4 mins:
      0.02680 0.02725 0.02755 0.02785 0.02810 0.02815 0.02830 0.02855
      0.02880 0.02920 0.02970 0.03370 0.03390 1.02640 1.02660 1.02670
      1.02680 1.02725 1.02750 1.02760 1.02770 1.02805 1.02870 1.02875
    ] best soln (cost 0.02680) has diversifier 16

  The assert error was

    KheDrsIndexedSolnSetAdd internal error (base 1.02045, soln 0.02045,
      increment 0.00001, shift 0)

  The formula for shift gives something too large to store in an int
  variable, which explains why shift has value 0.  But what can we do
  about cases like this?  Perhaps we should ban solutions with non-zero
  hard cost?  I would rather not do this.
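
  The overflow itself can be sketched generically (the real shift
  formula is not reproduced here; ShiftFromRange and its arguments are
  hypothetical):  the quotient of a cost range by a tiny increment can
  exceed INT_MAX, and should be range-checked before being stored in
  an int.

```c
#include <assert.h>
#include <limits.h>
#include <stdbool.h>

/* Compute range / increment as a shift value, failing cleanly when
   the result would not fit in an int instead of silently wrapping. */
static bool ShiftFromRange(long long range, long long increment, int *shift)
{
  long long val;
  if( increment <= 0 )
    return false;
  val = range / increment;
  if( val > INT_MAX || val < INT_MIN )
    return false;   /* too large to store in an int variable */
  *shift = (int) val;
  return true;
}
```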

  In response to an email from Ortiz, I've started work on solving
  high schools again, specifically GreeceThirdHighSchoolPatras2010.xml.
  Had a segmentation fault, inserted an assert, and it blew:

    KheEjectorAugment: pred_v->beam is empty

13 December 2023.  I seem to have fixed the empty beam problem, by
  failing when the beam is empty.  All done and documented.  Now
  I'm finding and fixing other problems.  No mtf available when
  the eject solver calls KheMTaskFinderTaskToMTask, also fixed now.

14 December 2023.  Working on khe_se_solvers.c.  I've just implemented
  KheResourceTaskForEach and used it in KheDecreaseLoadMultiRepair.
  It seems to be working.

15 December 2023.  Working on getting XHSTT-2014 working.  The
  current bug is pretty nasty but I am tracking it backwards and
  so far I have reached this:

    ...
    KheEventTimetableMonitorAssignTime(x12_7_ComputingStudies_3U_1, 37) \
      at tc 39 (0 meets before)
    KheEventTimetableMonitorAssignTime(ȭ?Y?U, 4) at tc 4 (0 meets before)
    KheEventTimetableMonitorAssignTime(ȭ?Y?U, 4) at tc 5 (0 meets before)

  which looks like garbage to me.  Lots of debug prints later I've
  tracked this down:

    KheNodeMeetsTriviallyAssign calling KheMeetAssign((?A:V, ??A:V, 0)

  which seems to suggest that the nonsense has its genesis in solver
  khe_sl_split_forest.c, part of the layer tree code.  Although why that
  code should start misbehaving just now is a mystery.  Where does its
  arena come from?

  KheSplitForestMakeTrivialAssignmentsOrLayers has a very obvious error,
  where it deletes child_node and then calls a function that gets passed
  child_node.  Sadly, fixing the error did not fix the problem.

16 December 2023.  Still working on getting high school solving working;
  I've graduated to XHSTT-2014.  I've got over yesterday's bug.  It was
  caused by uninitialized id fields in types KHE_MEET and KHE_TASK.

  Added a disallow_preassigned parameter to KheMTaskResourceReassignCheck
  and KheMTaskResourceReassign.  All done and documented.  It prevents the
  pesky unassignments of preassigned tasks that I've been getting.

  After all this debugging I have got this so far:

    [ KheArchiveParallelSolve(XHSTT-2014) soln_group NOT SAVING, threads 1,
      make 1, keep 1, time omit, limit -1.0)
      parallel solve of AU-BG-98: starting NOT SAVING solve 1 (last)
      [ "AU-BG-98", 1 solution, in 2.1 secs: cost 115.00028 ]
      parallel solve of AU-SA-96: starting NOT SAVING solve 1 (last)
      [ "AU-SA-96", 1 solution, in 11.8 secs: cost 4.00027 ]
      parallel solve of AU-TE-99: starting NOT SAVING solve 1 (last)
      [ "AU-TE-99", 1 solution, in 0.6 secs: cost 37.00030 ]
      parallel solve of BR-SA-00: starting NOT SAVING solve 1 (last)
      [ "BR-SA-00", 1 solution, in 0.1 secs: cost 0.00077 ]
      parallel solve of BR-SM-00: starting NOT SAVING solve 1 (last)
      [ "BR-SM-00", 1 solution, in 0.4 secs: cost 15.00154 ]
      parallel solve of BR-SN-00: starting NOT SAVING solve 1 (last)
      [ "BR-SN-00", 1 solution, in 0.3 secs: cost 0.00226 ]
      parallel solve of DK-FG-12: starting NOT SAVING solve 1 (last)
      [ "DK-FG-12", 1 solution, in 52.3 secs: cost 0.01868 ]
      parallel solve of DK-HG-12: starting NOT SAVING solve 1 (last)
      [ "DK-HG-12", 1 solution, in 6.3 mins: cost 12.02927 ]
      parallel solve of DK-VG-09: starting NOT SAVING solve 1 (last)
      [ "DK-VG-09", 1 solution, in 132.2 secs: cost 2.02768 ]
      parallel solve of UK-SP-06: starting NOT SAVING solve 1 (last)
      [ "UK-SP-06", 1 solution, in 40.5 secs: cost 44.01154 ]
      parallel solve of FI-PB-98: starting NOT SAVING solve 1 (last)
      [ "FI-PB-98", 1 solution, in 1.9 secs: cost 1.00007 ]
      parallel solve of FI-WP-06: starting NOT SAVING solve 1 (last)
      [ "FI-WP-06", 1 solution, in 0.8 secs: cost 0.00029 ]
      parallel solve of FI-MP-06: starting NOT SAVING solve 1 (last)
      Segmentation fault (core dumped)
    ]

  Still dumping core, but it's progress.  This one seems to be
  another attempt by ejection chains to use mtasks within time
  reassignment.  So I've killed that off but the cost is that
  KheSwapRepair and KheMoveRepair only do something when
  ao->repair_resources is set, not when ao->repair_times is set.

17 December 2023.  Working on failure of DRS to assign preassignments.
  DRS does have a concept of an assignment being fixed, but its three
  reasons for fixing do not include preassignment.  So I need to add
  that as a fourth reason.  I've done this and tested it, and it seems
  to be working.

18 December 2023.  Working on the "MakeClean internal error" bug.
  I've found what looks like the problem:  the domain field in
  demand chunks, and also the domain field in demand nodes, are
  being assigned during copy, not properly copied.  I've fixed
  that now.

20 December 2023.  I've tracked through all the copy functions:

    KheSolnCopyDoPhase1 (done, now copying packing time groups)
      KheMonitorLinkCopyPhase1 (done)
	KheGroupMonitorCopyPhase1 (done)
	  KheSolnCopyPhase1 (done above)
	  KheMonitorLinkCopyPhase1 (done above)
	KheMonitorCopyPhase1 (done)
	  KheAssignResourceMonitorCopyPhase1 (done)
	    KheEventResourceInSolnCopyPhase1 (done)
	      KheEventInSolnCopyPhase1 (done)
		KheMeetCopyPhase1 (done below)
		KheEventResourceInSolnCopyPhase1 (done above)
		KheEventTimetableMonitorCopyPhase1 (done below)
		KheMonitorCopyPhase1 (done above)
	      KheTaskCopyPhase1 (done below)
	      KheMonitorCopyPhase1 (done above)
	  KheAssignTimeMonitorCopyPhase1 (done)
	    KheEventInSolnCopyPhase1 (done)
	  KheSplitEventsMonitorCopyPhase1 (done)
	    KheEventInSolnCopyPhase1 (done above)
	  KheDistributeSplitEventsMonitorCopyPhase1 (done)
	    KheEventInSolnCopyPhase1 (done above)
	  KhePreferResourcesMonitorCopyPhase1 (done)
	    KheEventResourceInSolnCopyPhase1 (done above)
	  KhePreferTimesMonitorCopyPhase1 (done)
	    KheEventInSolnCopyPhase1 (done above)
	  KheAvoidSplitAssignmentsMonitorCopyPhase1 (done)
	  KheSpreadEventsMonitorCopyPhase1 (done)
	    KheSpreadTimeCopyPhase1 (done)
	      KheSpreadTimeGroupCopyPhase1 (done, now copying a time group)
	  KheLinkEventsMonitorCopyPhase1 (done)
	  KheOrderEventsMonitorCopyPhase1 (done)
	  KheAvoidClashesMonitorCopyPhase1 (done)
	    KheResourceInSolnCopyPhase1 (done)
	      KheSolnCopyPhase1 (done above)
	      KheTaskCopyPhase1 (done below)
	      KheResourceTimetableMonitorCopyPhase1 (done below)
	      KheMonitorCopyPhase1 (done above)
	      KheWorkloadRequirementCopy (done, now copying a time group)
		KheMonitorCopyPhase1 (done above)
	      KheMatchingDemandChunkCopyPhase1 (done below)
	  KheAvoidUnavailableTimesMonitorCopyPhase1 (done)
	    KheResourceInSolnCopyPhase1 (done above)
	    KheMonitoredTimeGroupCopyPhase1 (done, now copying a time group)
	      KheResourceTimetableMonitorCopyPhase1 (done below)
	      KheMonitorCopyPhase1 (done above)
	  KheLimitIdleTimesMonitorCopyPhase1 (done)
	    KheResourceInSolnCopyPhase1 (done above)
	    KheMonitoredTimeGroupCopyPhase1 (done above)
	  KheClusterBusyTimesMonitorCopyPhase1 (done)
	    KheResourceInSolnCopyPhase1 (done above)
	    KheMonitoredTimeGroupCopyPhase1 (done above)
	  KheLimitBusyTimesMonitorCopyPhase1 (done)
	    KheResourceInSolnCopyPhase1 (done above)
	    KheMonitoredTimeGroupCopyPhase1 (done above)
	  KheLimitWorkloadMonitorCopyPhase1 (done)
	    KheResourceInSolnCopyPhase1 (done above)
	    KheMonitoredTimeGroupCopyPhase1 (done above)
	  KheLimitActiveIntervalsMonitorCopyPhase1 (done)
	    KheResourceInSolnCopyPhase1 (done above)
	    KheMonitoredTimeGroupCopyPhase1 (done above)
	    KheIntervalCopyPhase1 (done)
	  KheLimitResourcesMonitorCopyPhase1 (done)
	  KheEventTimetableMonitorCopyPhase1 (done)
	    KheTimeCellCopyPhase1 (done)
	      KheMeetCopyPhase1 (done below)
	    KheEventInSolnCopyPhase1 (done above)
	    KheLinkEventsMonitorCopyPhase1 (done above)
	  KheResourceTimetableMonitorCopyPhase1 (done)
	    KheResourceInSolnCopyPhase1 (done above)
	    KheTimeCellCopyPhase1 (done)
	      KheTaskCopyPhase1 (done below)
	      KheMonitoredTimeGroupCopyPhase1 (done above)
	    KheAvoidClashesMonitorCopyPhase1 (done above)
	    KheMonitoredTimeGroupTableCopyPhase1 (done)
	      KheMonitoredTimeGroupCopyPhase1 (done above)
	  KheOrdinaryDemandMonitorCopyPhase1 (done)
	    KheMatchingDemandChunkCopyPhase1 (done)
	      KheMatchingCopyPhase1 (done below)
	      KheMatchingDemandNodeCopyPhase1 (done)
		KheOrdinaryDemandMonitorCopyPhase1 (done above)
		KheWorkloadDemandMonitorCopyPhase1 (done below)
	    KheMatchingSupplyNodeCopyPhase1 (done)
	      KheMatchingSupplyChunkCopyPhase1 (done)
		KheMatchingCopyPhase1 (done below)
		KheMatchingSupplyNodeCopyPhase1 (done above)
	      KheMatchingHallSetCopyPhase1 (done)
		KheMatchingHallSetCopyPhase1 (done above)
		KheMatchingSupplyNodeCopyPhase1 (done above)
		KheMatchingDemandNodeCopyPhase1 (done above)
	    KheMatchingDemandNodeCopyPhase1 (done above)
	    KheMatchingHallSetCopyPhase1 (done above)
	    KheTaskCopyPhase1 (done below)
	  KheWorkloadDemandMonitorCopyPhase1 (done, now copying a time group)
	    KheMatchingDemandChunkCopyPhase1 (done above)
	    KheMatchingSupplyNodeCopyPhase1 (done above)
	    KheMatchingDemandNodeCopyPhase1 (done above)
	    KheMatchingHallSetCopyPhase1 (done above)
	    KheResourceInSolnCopyPhase1 (done above)
	    KheMonitorCopyPhase1 (done above)
	  KheEvennessMonitorCopyPhase1 (done)
	  KheGroupMonitorCopyPhase1 (done above)
      KheSolnWriteOnlyCopyPhase1 (done)
      KmlErrorCopy (done)
      KheEventInSolnCopyPhase1 (done above)
      KheResourceInSolnCopyPhase1 (done above)
      KheMeetCopyPhase1 (done, now copying a time group)
	KheSolnCopyPhase1 (done above)
	KheMeetBoundCopyPhase1 (done, now copying time groups)
	  KheSolnCopyPhase1 (done above)
	  KheMeetCopyPhase1 (done above)
	KheTaskCopyPhase1 (done below)
	KheNodeCopyPhase1 (done below)
	KheZoneCopyPhase1 (done below)
	KheMatchingSupplyChunkCopyPhase1 (done below)
	KheMatchingDemandChunkCopyPhase1 (done below)
	KheEventInSolnCopyPhase1 (done somewhere)
      KheTaskCopyPhase1 (done, now copying its resource group)
	KheSolnCopyPhase1 (done above)
	KheMeetCopyPhase1 (done above)
	KheTaskBoundCopyPhase1 (done, now copying its resource group)
	  KheSolnCopyPhase1 (done above)
	  KheTaskCopyPhase1 (done above)
	KheTaskingCopyPhase1 (done below)
	KheTaskCopyPhase1 (done above)
	KheOrdinaryDemandMonitorCopyPhase1 (done above)
	KheEventResourceInSolnCopyPhase1 (done above)
      KheNodeCopyPhase1 (done)
	KheSolnCopyPhase1 (done above)
	KheNodeCopyPhase1 (done above)
	KheLayerCopyPhase1 (done)
	  KheNodeCopyPhase1 (done above)
	KheMeetCopyPhase1 (done above)
	KheZoneCopyPhase1 (done)
	  KheNodeCopyPhase1 (done above)
	  KheMeetCopyPhase1 (done above)
      KheTaskingCopyPhase1 (done)
	KheSolnCopyPhase1 (done above)
	KheTaskCopyPhase1 (done above)
      KheMonitorCopyPhase1 (done above)
      KheMatchingCopyPhase1 (done)
	KheMatchingSupplyChunkCopyPhase1 (done above)
	KheMatchingDemandChunkCopyPhase1 (done above)
	KheMatchingSupplyNodeCopyPhase1 (done above)
	KheMatchingDemandNodeCopyPhase1 (done above)
	KheMatchingHallSetCopyPhase1 (done above)
      KheEvennessHandlerCopyPhase1 (done)
	KheSolnCopyPhase1 (done above)
	KhePartitionHandlerCopy (done)
	  KheEvennessMonitorCopyPhase1 (done above)

  I've worked out that resource groups and time groups are the only
  dubious aspects.  I've added copying to both.  Have a clean compile,
  ready to test.  

  When making resource groups within solutions, we use fields

    resource_set_rt
    resource_set
    resource_set_table

  But resource_set must be empty when copying, and resource_set_table
  can safely be reset to empty, since it's only a write-through cache.
  The same goes for

    time_sset_building
    time_sset
    time_sset_table

  which work in exactly the same way.
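
  A minimal sketch of this copying rule, with an assumed struct shape
  (the real fields hold sets and tables, not the scalars used here):
  the resource type field is copied normally, no set may be under
  construction during a copy, and the write-through cache is dropped
  since it is rebuilt on demand.

```c
#include <assert.h>
#include <stddef.h>

typedef struct {
  void *resource_set_rt;      /* copied normally */
  int   resource_set_count;   /* stands in for the resource_set array */
  void *resource_set_table;   /* write-through cache */
} SOLN_SET_FIELDS;

static void CopySetFields(SOLN_SET_FIELDS *copy,
  const SOLN_SET_FIELDS *orig)
{
  assert(orig->resource_set_count == 0);  /* no set mid-construction */
  copy->resource_set_rt = orig->resource_set_rt;
  copy->resource_set_count = 0;
  copy->resource_set_table = NULL;        /* cache; rebuilt lazily */
}
```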

21 December 2023.  Still working on copying woes.  The most recent
  bug was caused by not setting soln->matching to NULL when the
  arena holding the matching was freed.  It may be the last bug.
  At any rate there are no crashes on timetabling instances at
  the moment, so I've decided to get Version 2.9 out today.

22 December 2023.  Now that the pressure is off, with Version 2.9
  out yesterday, I'll go on and try to improve the performance of
  KHE on timetabling instances.  First, some work on time limits.

  Decided to add a ts option similar to rs.  Here's what it needs:

    gdl(<solver>)         KheDetachLowCostMonitors, reattach after
    gtm(<solver>)         Run <solver> with the global tixel matching
    gem(<solver>)         Run <solver> with evenness monitoring
    gpu                   KhePropagateUnavailableTimes
    tcl			  KheCoordinateLayers (?)
    tbr                   KheBuildRunarounds (?)
    trt			  KheNodeRecursiveAssignTimes (below cycle node)
    ts 			  KheCycleNodeAssignTimes (at cycle node)

    gta(<solver>)         Run <solver> with global tixel matching installed
                          and contributing to total cost.

    gtb(<solver>)         Run <solver> with global tixel matching installed
                          but not contributing to total cost; instead, solvers
			  must explicitly enquire about the number of
			  unassigned demand tixels.

  This is what KheCycleNodeAssignTimes does:

    tpa			  KheNodePreassignedAssignTimes
    tnp(<solver>)	  Run solver if not all times preassigned
    ttp(<solver>)	  KheTaskingTightenToPartition
    tmd(<solver>)	  KheSolnClusterAndLimitMeetDomains
    tnl			  KheNodeLayeredAssignTimes
    tec			  KheEjectionChainNodeRepairTimes
    tnf			  KheNodeFlatten
    tdz			  KheNodeDeleteZones (who added them?)

  Compulsory things at the start of general solving:

    KheSolnSplitCycleMeet
    KheSolnMakeCompleteRepresentation
    cycle_node = KheLayerTreeMake
    KheInstanceContainsSoftSplitConstraint
    KheTaskTreeMake - will have to change its pos, hope that's OK

  Compulsory things at the end of general solving:

    KheSolnEnsureOfficialCost
    KheMergeMeets
    KheSolnTryTaskUnAssignments
    KheSolnTryMeetUnAssignments

  General solver default value:

    gdl(gta(gem(gpu, [ttmake], tcl, tbr, trt, ts, gtb(retm(rs)))))

  where ts is KheCycleNodeAssignTimes, i.e.

    tpa, tnp(ttp(tmd(tnl, tec, tnf, tdz), tec))

  but we could make ts be

    tcl, tbr, trt, tpa, tnp(ttp(tmd(tnl, tec, tnf, tdz), tec))

  and reduce gs to

    gdl(gta(gem(gpu, [ttmake], ts, gtb(retm(rs)))))

  and in fact rs could include gtb and retm as well.  What about

    time <secs> ( <solver> )

  and what about

     option <option_name>

  so that instead of ts we write option ts, etc?  And what about

      hs ( <solver> )
      nr ( <solver> )

  to select high school only or nurse rostering only?  I'm already
  doing something like this in resource assignment, where I choose
  an assignment algorithm based on whether it is nurse rostering
  or not.  So better not pretend otherwise.

  Global tixel matching can be used in two modes, one that adds
  its cost to the solution cost, the other that keeps the matching
  up to date but leaves it to the user to check up on it.  I need
  to support both these options.

  Also we want two values of each of these options, one for high
  schools and one for nurse rostering - or do we?

23 December 2023.  As far as I can see, there is no interaction
  between KheTaskTreeMake and the other functions in its vicinity.
  So it should be movable.

24 December 2023.  Working on do-it-yourself solving for the
  general solver and for the time solver.  Have clean compile
  of a more or less complete system; just a few kinks to iron out.

25 December 2023.  Moved the matching down into time assignment
  and resource assignment.  Found that KhePropagateUnavailableTimes
  is unaffected by it.

  I've found that khe_sr_task_tree.c calls KheAtomicOperationEnd, that
  is, it is influenced by the resource assignment invariant.  Basically
  it only accepts jobs that do not violate the invariant.  So I've
  preceded the "unavoidable" call to it by setting rs_invariant to
  true, and I've followed it by setting rs_invariant to false.

  This seems to be the end of do-it-yourself solving, at least,
  it's good enough for testing.  Got this for a start:

    [ "DK-HG-12", 1 solution, in 47.2 secs: cost 14.05921 ]

  I thought it was springing the 60:0 time limit but apparently not.

  KheMTaskIndexInFrameTimeGroup is crashing now.  Curiously this
  was on DK-HG-12 which was working before.  I've added a patch
  but really understanding it is still to do.  Apparently there
  are some unassigned meets when the ejection chain solver goes
  to repair resources.  Strange.  Anyway it's not crashing now.

  The time limit seems to be working.  I may have misunderstood my
  own documentation; setting ps_time_limit is not really the way.

  File khe_sm_solver.c renamed to khe_sm_yourself.c, and its one
  public function has been renamed too.

  KheSolnTryTaskUnAssignments crashed so I put in a quick fix.  But
  I really need to understand what's going on in that function.

  I've revised the section of documentation file ha describing how
  the memory allocator is implemented.  It's reasonably good now.

26 December 2023.  It's time to start work on improving the results
  I've been getting for high school timetabling instances.  I've
  reorganized directory tt/school_modelling; it is now tt/school
  with subdirectory solve that I'll be doing the solving in.

  I've decided to handle the zero memory issue by declaring that
  the arena memory returned by Ha is not zeroed.  All done and
  documented.
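
  The consequence for callers can be shown with a minimal bump-arena
  sketch (this is not KHE's actual Ha implementation):  the allocator
  hands back the next slice of its buffer as-is, so a caller that
  relies on zeroed fields must clear them explicitly, e.g. with memset.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>   /* for callers' explicit memset */

typedef struct {
  char  *mem;
  size_t used, cap;
} ARENA;

/* Return the next size bytes of the arena's buffer, or NULL if full.
   The returned memory is not zeroed and may hold stale data. */
static void *ArenaAlloc(ARENA *a, size_t size)
{
  void *res;
  if( a->used + size > a->cap )
    return NULL;
  res = a->mem + a->used;
  a->used += size;
  return res;
}
```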

  I've started testing AU-BG-98.  Actually it doesn't look too
  bad.  I need to take a closer look at everything now.  I've
  already fixed two bugs in HSEval, which was printing a lot of
  supposedly unassigned tasks that were not unassigned at all,
  and mistakenly printing preassigned tasks in italic font.

  Added finding split assignments to the default value of rs,
  and the cost dropped to 15.00662.  And here is the best of 12:

    [ "AU-BG-98", 12 threads, 12 solves, 12 distinct costs, 13.7 secs:
      4.00866 7.01068 7.01072 7.01104 9.00861 11.01153 13.00787 13.00962
      15.00662 18.00940 4006.00992 4008.00832
    ] best soln (cost 4.00866) has diversifier 11

  Not bad for 13 seconds.  Best of 24:

    [ "AU-BG-98", 12 threads, 24 solves, 24 distinct costs, 17.4 secs:
      3.00950 4.00791 4.00866 5.00800 5.01048 6.00918 7.00794 7.00927
      7.01068 7.01104 7.01106 7.01118 7.01241 9.00648 9.00861 10.00867
      11.01153 13.00787 13.00962 15.00662 18.00940 4006.00992 4008.00832
      4008.01077
    ] best soln (cost 3.00950) has diversifier 13

  Best of 96:

    [ "AU-BG-98", 12 threads, 96 solves, 95 distinct costs, 77.9 secs:
      2.00881 3.00803 3.00836 3.00908 3.00950 3.01059 3.01073 4.00791
      4.00860 4.00866 4.00871 4.00878 5.00798 5.00800 5.00964 5.01048
      5.01144 5.01147 6.00866 6.00918 6.00950 6.00967 6.01100 6.01103
      6.01208 7.00794 7.00822 7.00835 7.00927 7.01039 7.01040 7.01066
      7.01068 7.01070 7.01104 7.01116 7.01118 7.01151 7.01241 7.01293
      8.00682 8.00921 8.00994 8.01016 8.01263 8.01334 8.01506 9.00648
      9.00853 9.00860 9.00861 9.00869 9.00883 9.00965 9.01156 9.01329
      10.00867 10.00963 10.01120 10.01177 10.01300 11.00756 11.00930 11.01153
      11.01238 11.01250 12.00673 12.00804 12.00915 12.00993 12.01013 12.01085
      12.01102 12.01111 12.01123 12.01147 12.01199 12.01228 12.01338 13.00787
      13.00787 13.00962 14.01060 15.00662 15.00884 15.01201 17.01225 18.00940
      2009.01200 3008.01168 4006.01054 4008.00832 4008.01022 4008.01077
      4009.01134 4009.01202
    ] best soln (cost 2.00881) has diversifier 95

  This is not far off the best I have ever got, which was one point
  something according to Gerhard's web site.  But still I need to
  look in detail at what is happening.

27 December 2023.  Given the good results from yesterday I have
  decided to run XHSTT-2014.xml and see how the results compare
  with my 2014 paper.  I ran KHE24 24 times in parallel with 12
  threads and a time limit of 10 minutes per soln (i.e. 20 minutes
  per instance given I'm running 12 cores in parallel) and got this:

  Instance     KHE14x8     KHE24x24    Best (from Gerhard's web site)
  -------------------------------------------------------------------
  AU-BG-98     4.00524      3.00950    0.00128 (GOAL)
  AU-SA-96     6.00006      1.00021    0.00000 (GOAL)
  AU-TE-99     2.00140      1.00186    0.00020 (GOAL - optimal)
  BR-SA-00     1.00051      0.00048    0.00005 (Sorensen et al - optimal)
  BR-SM-00    22.00129      7.00123    0.00051 (Sorensen - optimal)
  BR-SN-00     4.00243      0.00156    0.00035 (Sorensen - optimal)
  DK-FG-12     0.02046      0.01798    0.01263 (GOAL)
  DK-HG-12           -     12.02853   12.02330 (GOAL)
  DK-VG-09    12.03257      2.02624    2.02323 (GOAL)
  UK-SP-06     changed     29.01050    2.01410 (Dudek)
  FI-PB-98     1.00024      0.00012    0.00000 (Kyngas and Nurmi)
  FI-WP-06     0.00041      0.00016    0.00000 (GOAL)
  FI-MP-06     0.00125      0.00105    0.00077 (GOAL)
  GR-H1-97     0.00000      0.00000    0.00000 (Pimmer)
  GR-P3-10     0.00006      0.00002    0.00000 (Gogos and Valouxis)
  GR-PA-08     0.00021      0.00013    0.00003 (GOAL - optimal)
  IT-I4-96     0.00197      0.00063    0.00027 (GOAL - optimal)
  KS-PR-11     0.00116      0.00147    0.00000 (Demirovic and Musliu)
  NL-KP-03     0.03919      0.02413    0.00199 (GOAL)
  NL-KP-05           -      2.01154    0.00425 (GOAL)
  NL-KP-09           -      9.08750    0.01620 (GOAL)
  ZA-LW-09    16.00000     16.00010    0.00000 (Gogos et al.)
  ZA-WD-09     6.00000     18.00000    0.00000 (Sorensen et al.)
  ES-SS-08     0.01287      0.00787    0.00335 (Sorensen - v. optimal)
  US-WS-09    untested      0.00523    0.00101 (Klemsa - optimal)
  --------------------------------------------------------------------

  which shows that KHE24x24 is better on the whole than KHE14x8,
  although the time limits are different.  The whole run (all 25
  instances) took 44.5 minutes.

  This is good enough to be going on with, really.  I should get
  back to nurse rostering now, unless someone wants something.

  Started work on the assert error from 12 December 2023.  But I've
  found another bug before that one:

    KheResourceMatchingSolve internal error 4 (init 0.00000, final 50.00000)

28 December 2023.  I now have debug output proving that the matching
  demand monitors are not linked in to the solution when
  KheGroupCorrelatedMonitors is called.  It naturally links them
  into the solution.  So we have two problems:

     * Why is the matching cost so high?  Probably because tasks that
       actually don't need to be assigned at all have demand tixels.

     * Why are the ordinary demand monitors attached but not linked in
       to the solution when KheGroupCorrelatedMonitors is called?
       Because of KheDisconnectAllDemandMonitors in khe_sm_yourself.c.
       In other words, this is actually being asked for.

  So it seems that the second issue is fine as is, but

     * KheGroupCorrelatedMonitors should probably not be grouping
       monitors that are not linked to the solution.  Anyway I need
       to think about this question.  I have a plan, documented now
       in the "Correlation grouping" section, ready to implement.

     * khe_task.c should not create ordinary demand monitors unless
       the task has a non-assignment cost, of at least KheCost(1, 0).
       Or something like that.  More thought about this is needed.
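  A quick sketch of the second idea, using made-up stand-ins
  (CostMake, TaskNeedsDemandMonitor) rather than the real KheCost
  and khe_task.c types:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* hypothetical stand-in for KHE's combined cost type: hard cost in
   the high bits, soft cost in the low bits, so that numerical
   comparison orders costs correctly */
typedef int64_t COST;
#define COST_SHIFT 32

static COST CostMake(int64_t hard, int64_t soft)
{
  return (hard << COST_SHIFT) + soft;
}

/* the proposed guard for khe_task.c: create an ordinary demand
   monitor only when not assigning the task has hard cost, i.e. the
   non-assignment cost is at least CostMake(1, 0) */
static bool TaskNeedsDemandMonitor(COST non_asst_cost)
{
  return non_asst_cost >= CostMake(1, 0);
}
```

  The point is just the guard: any task whose non-assignment cost is
  purely soft (below CostMake(1, 0)) would get no demand monitor.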

29 December 2023.  Implementing yesterday's ideas today.  Actually I
  want to do a careful audit and revision of KheGroupCorrelatedMonitors,
  to make sure that I understand it and that it agrees with the Guide.
  I'm currently up to the start of KheGroupSpreadEventsMonitors, which
  is about 40% of the way through it.

30 December 2023.  Working on reconstructing khe_sm_correlation.c.
  Finished KheGroupCorrelatedEventResourceMonitors.  Its handling
  of assign resource monitors was quite strange:  it grouped together
  attached monitors and made unattached ones into children of the
  solution.  Not really comprehensible.  I've removed all grouping
  of limit resources monitors because I could not justify doing it.

31 December 2023.  Working on reconstructing khe_sm_correlation.c.
  Audited the documentation I wrote yesterday; it's great now.  I've
  also implemented it and reorganized khe_sm_correlation.c, which is
  in great shape.  It just needs a final audit and test now.  The
  final audit is up to the start of submodule "grouping correlated
  resource monitors".

SEE FILE 2024 FOR CONTINUATION

To Do
=====

  Working on a thorough reconstruction of khe_sm_correlation.c and
  its documentation.  All done except for a final audit and test.
  The final audit is up to the start of submodule "grouping
  correlated resource monitors".

  Started work on the assert error from 12 December 2023.  But I've
  found another bug before that one:

    KheResourceMatchingSolve internal error 4 (init 0.00000, final 50.00000)

  This seems to have been sprung by the fact that the global tixel
  matching is now in use at this point, and changes to how monitors are
  linked and attached are not undone by KheMarkEnd.  I have some debug
  output proving that no demand monitors have a cost immediately after
  KheMarkBegin and that 50 have a cost immediately after KheMarkEnd.
  But before I do anything I need to work out who is attaching and
  detaching these monitors.  I also have to think about the fact that
  there are extra tasks that are not really expected to be assigned at
  all.  Are they causing these demand tixels to be generated unnecessarily?
  KheEjectionChainRepairInitialResourceAssignment seems to be the
  culprit.  The debug output says that before it is called, soln cost is 
  0.01200, whereas afterwards it is 50.00000.  I've looked at the
  body of KheEjectionChainRepairInitialResourceAssignment but I am
  going to need debug output of its phases and the cost after each
  phase.  Yep:

      cost after KheGroupCorrelatedMonitors: 50.01200

  So KheGroupCorrelatedMonitors is the culprit.  See above for what
  to do about it.

  Marks and paths do not record monitor operations at all.  Surely
  this is wrong?  The atomic operations are

      KheMonitorSetBack
      KheMonitorDetachFromSoln
      KheMonitorAttachToSoln

      KheClusterBusyTimesMonitorSetCutoffIndex
      KheClusterBusyTimesMonitorSetCutoffTime (non-atomic)
      KheClusterBusyTimesMonitorSetNotBusyState
      KheClusterBusyTimesMonitorClearNotBusyState
      KheClusterBusyTimesMonitorSetMultiplier
      KheClusterBusyTimesMonitorSetMinimum
      KheClusterBusyTimesMonitorResetMinimum

      KheLimitBusyTimesMonitorSetCeiling

      KheLimitWorkloadMonitorSetCeiling

      KheLimitActiveIntervalsMonitorSetCutoffIndex
      KheLimitActiveIntervalsMonitorSetCutoffTime
      KheLimitActiveIntervalsMonitorSetNotBusyState
      KheLimitActiveIntervalsMonitorClearNotBusyState

      KheEventTimetableMonitorMake (omit? does not affect cost)

      KheGroupMonitorMake
      KheGroupMonitorDelete
      KheGroupMonitorAddChildMonitor
      KheGroupMonitorDeleteChildMonitor

  I need to work out how I implement paths and do it for these operations.
  Or do I?

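  If I did record them, the natural shape is one undo record per
  atomic operation, pushed onto the path and replayed in reverse by
  KheMarkEnd.  A toy sketch with made-up names (MONITOR, PATH, OP),
  covering just one of the operations listed above:

```c
#include <assert.h>

/* a monitor with one mutable attribute, standing in for e.g. the
   target of KheLimitBusyTimesMonitorSetCeiling */
typedef struct monitor_rec { int ceiling; } MONITOR;

/* one path entry recording how to undo an operation; the real thing
   would be a tagged union with one case per atomic operation */
typedef enum { OP_SET_CEILING } OP_TAG;
typedef struct op_rec { OP_TAG tag; MONITOR *m; int old_val; } OP;

/* a fixed-size path; no bounds checking, toy only */
typedef struct path_rec { OP ops[64]; int count; } PATH;

/* perform the operation and record its inverse on the path */
static void MonitorSetCeiling(PATH *p, MONITOR *m, int ceiling)
{
  OP *op = &p->ops[p->count++];
  op->tag = OP_SET_CEILING;  op->m = m;  op->old_val = m->ceiling;
  m->ceiling = ceiling;
}

/* undo everything recorded on the path, in reverse order */
static void PathUndo(PATH *p)
{
  while( p->count > 0 )
  {
    OP *op = &p->ops[--p->count];
    switch( op->tag )
    {
      case OP_SET_CEILING:  op->m->ceiling = op->old_val;  break;
    }
  }
}
```

  The non-atomic operations (like SetCutoffTime) would record the
  atomic operations they are built from, not themselves.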

  (High school timetabling)  Is there a repair that swaps the times
  of two meets that share a preassigned resource?  There ought to be,
  and it ought to work pretty darn well.

  (High school timetabling)  There are places when KheSwapRepair and
  KheMoveRepair can't be called because there is no mtask finder
  during time repair.  I need to be able to execute these kinds of
  repairs not just on mtasks but also on tasks - or something.

  Work on the assert error reported under 12 December 2023.  What I
  need is another level of the indexed lists data structure, one
  that works on hard cost rather than soft cost.

  Given the bugs I've been finding, I need a careful review
  of the order in which KheSolnMake initializes solutions.

  Summary of accumulated problems:

    (1) DRS does not handle limit workload monitors, it just
        returns having done nothing when they turn up.

    (2) In INRC2-8-030-1-27093606 there are mtasks that do not
        satisfy the usual conditions (no gaps etc.).  I need to
        look into these mtasks.  At present DRS just returns,
        after doing nothing, in these cases.

    (3) There is a segmentation fault somewhere in the KHE24x24
        run on INRC2-8-030-1-27093606.  It seems to be near the
        end; could it be out of memory?  According to gdb, it
        crashed at "HaArrayAddLast(soln_list->solns, soln)",
        with soln_list=0x16fffffffff, which looks very strange.
        The new arena code should handle out of memory gracefully,
        and then we'll see.  (Update: I don't seem to be getting
        this error any more.  I have another one instead.)

  I've started doing some runs including other instances.  Several
  of the COI results are very pleasing.  I'm getting a slow and
  very bad result for COI-Azaiez which needs looking into before
  I try to solve all of COI.

  Some of the instances (e.g. COI-QMC-1) have limit workload monitors,
  which are not currently implemented in optimal reassignment using
  dynamic programming.  So for now I have made that solver do nothing
  when there is at least one limit workload monitor.  Later on I
  need to get on and finish off that implementation.

  Work on /home/jeff/tt/nurse/pap_solve24/conf1/khe24.tex for a
  while, including running the tests I need to evaluate the
  robustness of KHE24.  This paper is based on my KHE20 paper,
  which was based on my KHE18 paper.  However my KHE20 paper
  was never published, owing to PATAT2020 being cancelled due
  to COVID, so this paper always refers back to KHE18.  But
  there are no results for INRC2 in the KHE18 paper.

  MaxWorkingWeekends seems to be a problem, the LOR solution has 5
  violations, costing 5 * 30 = 150, whereas my 1805 solution has 8
  violations costing 8 * 30 = 240, which is 90 more, which gets you
  most of the way from LOR's 1695 to my 1805.  I need a really good
  repair for MaxWorkingWeekends.  How about this.  Identify three
  resources such that one is overloaded with busy weekends while at
  least one is underloaded.  Then do a full cycle optimal reassignment
  of those three resources.
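  Checking the arithmetic above numerically:

```c
#include <assert.h>

/* the extra MaxWorkingWeekends cost in my solution compared with
   LOR's: 8 violations versus 5, at weight 30 each */
static int ExtraWeekendCost(void)
{
  return 8 * 30 - 5 * 30;
}
```

  So 1695 + 90 = 1785, which is indeed most of the way to 1805.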

  Would it make sense to use the diversifier to set es_swap_widening_max?
  I've rarely done that kind of thing before.

  Probably the best thing to do now is some detailed investigation
  into how repairs of cluster busy times and limit active intervals
  defects are going.

  Getting some pretty good results, but it would be good to do even better.
  A couple of ideas:

     * When moving from NULL to to_r, try cases of to_r that eject nothing
       before (or instead of) cases of to_r that eject something

     * Generally speaking, we do need to try the most promising repairs
       first, so that they get a wider field to explore.

  Testing the new ejection chains code.  It seems to be working.
  I need to get in and do the hard slog to work out what could be
  done better.  I have a solution of cost 1810, which I can use
  as the starting point.  If I can improve on that, what larks.

  I snooped through the old widened task set code to see if there is
  any functionality that hasn't made it into the new code.  The only
  things were optimal moves and runs.  I'm not really interested in
  optimal moves any more, but runs could be interesting.  The mtask
  finder does not return any mtask sets except mtasks in time group
  and mtasks in interval.  So it is probably not the place to find runs.

  "if the domain allows unassignment only, try a double move"
  I understand that the domain could be empty, which means that
  unassignment is the only possibility, but what use is a double
  move in that case?  I guess we unassign the task and then
  assign the resource being unassigned to some other task at
  that time.  Yes, it does make sense - can we implement it?

  I've been looking into where KheTaskFinderMake is called from.
  There is a call from KheSolnTryTaskUnAssignments, but there
  does not seem to be any pressing need to use a task finder
  there.  It has been done so that sequences of adjacent tasks
  can be unassigned together.  But the key operation,
  KheFindTasksInInterval, could be implemented in the resource
  timetable module.  This leaves

    khe_se_solvers.c (now deleted)
    khe_sr_reassign.c
    khe_sr_single_resource.c (I may delete this module)

  where there are calls to KheTaskFinderMake.

  Appendix dynamic_impl.sig.correlators still to write.

  The existing task finder needs gs_event_timetable_monitor, which we
  can only reasonably create when time assignment is all finished.  So
  we seem to have already ruled out time adjustments during resource
  adjustments.  MTasks are not necessarily as bad as that, but we
  will need to see if we can avoid gs_event_timetable_monitor when
  fixed_times is false.  This would mean that we could only move
  tasks that are currently assigned resources and thus can be
  accessed from those resources' timetables.

  --------- An interesting variant of ejection chains -------
  Here's an interesting proposal for a variant of ejection chains.
  Have just one repair operation, which is a minimum-cost bipartite
  matching of all resources to their tasks, over an arbitrary
  sequence of adjacent days.  Each resource can match with the
  sequence of tasks initially assigned to itself or to some
  other resource, or alternatively to a free day.

  When building the bipartite graph, we may choose to leave
  out certain edges.  For example, if we are trying to unassign
  a resource at a certain time, we leave out all the edges that
  assign it to some task.  Or if we are trying to assign it
  then, we leave out its current edge, the one that causes
  it to be unassigned.  Actually we could just nominate the
  resource or task and declare that its current edge should
  be omitted.

  Then we link together these rematchings using ejection
  chains.  We decide on a repair of a single task or set of
  tasks, which is to either force their assignment or force
  their unassignment, and we make this happen by building
  the bipartite graph with the appropriate omissions.

  How to prevent cycling within one chain is a question.
  We could use the usual "don't visit the same monitor
  twice" or rather "don't alter the same monitor twice".
  Alternatively, we could insist that no chain visit any
  given day twice.

  The existing rematch module allows you to build the
  demand nodes for a given set of supply nodes and re-use
  those demand nodes.  I could use that; I could cache
  sets of demand nodes as they are built for the first time,
  and re-use the cached values.  The actual solve call
  would need to be passed a fixed non-edge separately.

  (1) Decide to move some task, or sequence of adjacent tasks.  We
      need good analysis to produce a smallest set of adjacent
      tasks that is likely to work well for the current defect.

  (2) Weighted bipartite match over that sequence of adjacent days.

  Before building the whole matching, see the effect of the
  proposed change on the resource or task affected, and only
  proceed if it improves the initial defect and does not
  introduce any more.  Then build the full graph and proceed.
  Could bury this whole aspect into the repairer.

  One repair equals one time interval plus a (supply, demand)
  edge such that that edge must be omitted from the graph.
  This will drive the solution away from the current solution.
  --------- An interesting variant of ejection chains -------
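  A toy version of the core repair, with a brute-force matcher
  standing in for a real min-cost bipartite matching algorithm, and
  OMIT marking the one edge that must be left out of the graph to
  drive the solution away from the current one:

```c
#include <assert.h>
#include <limits.h>

#define N 3
#define OMIT INT_MAX  /* this (resource, task sequence) edge is forbidden */

/* minimum-cost perfect matching of N resources to N task sequences,
   by enumerating all permutations; fine for tiny N only, a real
   solver would use an augmenting-path algorithm */
static int BestMatching(int cost[N][N], int perm_out[N])
{
  static const int perms[6][N] =
    { {0,1,2}, {0,2,1}, {1,0,2}, {1,2,0}, {2,0,1}, {2,1,0} };
  int best = INT_MAX;
  for( int i = 0;  i < 6;  i++ )
  {
    int total = 0, ok = 1;
    for( int r = 0;  r < N && ok;  r++ )
    {
      int c = cost[r][perms[i][r]];
      if( c == OMIT )
        ok = 0;               /* omitted edge: skip this matching */
      else
        total += c;
    }
    if( ok && total < best )
    {
      best = total;
      for( int r = 0;  r < N;  r++ )
        perm_out[r] = perms[i][r];
    }
  }
  return best;
}
```

  For example, if the diagonal is the current assignment and we omit
  edge (0, 0), the matcher is forced to move resource 0 elsewhere,
  which is exactly the "declare that its current edge should be
  omitted" idea above.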

  (The stuff below here refers to the dynamic programming algorithm.)
  What about an A* search, not for pruning but for choosing
  the next solution to expand?  The trouble is it would not
  prove anything, unless the A* estimate was known to be a
  lower bound on the actual cost.  To get that we would
  basically have to do an optimal assignment of one resource
  from the current point to the end.  Too slow, surely.

  Can we run the algorithm on multiple cores?  If we do this
  successfully we could reduce the running time by a factor
  of 12, or say 10.  But how?  Perhaps we need to lock each
  index within the indexed list data structure.  Nasty.

  At "Here is the code (omitted above) to build", the shift
  solution trie section moves from trie construction to a form
  of expansion by resources.  This latter part probably
  belongs elsewhere.

  If solutions did have a common parent type including a
  signature, we could unify code that adds a solution to
  a solution set, doing dominance testing along the way;
  although the code to free a solution object would need
  a type switch.
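  Sketching what that common parent type might look like, with
  made-up names rather than the real KHE_DRS types:

```c
#include <assert.h>
#include <stdlib.h>

/* hypothetical common parent: every solution type begins with a tag
   and a signature, so code that adds a solution to a solution set
   with dominance testing can be shared */
typedef enum { SOLN_DAY, SOLN_SHIFT } SOLN_TAG;

typedef struct soln_rec {
  SOLN_TAG tag;
  int sig[4];   /* the common signature */
  int cost;
} SOLN;

/* shared dominance test: a dominates b when a's signature is
   componentwise no larger and a's cost is no larger */
static int SolnDominates(SOLN *a, SOLN *b)
{
  for( int i = 0;  i < 4;  i++ )
    if( a->sig[i] > b->sig[i] )
      return 0;
  return a->cost <= b->cost;
}

/* freeing needs the type switch mentioned above */
static void SolnFree(SOLN *s)
{
  switch( s->tag )
  {
    case SOLN_DAY:    /* free day-specific parts here */    break;
    case SOLN_SHIFT:  /* free shift-specific parts here */  break;
  }
  free(s);
}
```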

  Do a review of signature caching.  It never seemed to speed
  anything up, although it should have done, and the code for
  it may have decayed.

  A merge of KHE_DRS_SIGNATURE and KHE_DRS_SIGNATURE_SET might be
  good.  It would be a tagged union of the two types, basically.
  The point is that then users don't have to worry about how
  signatures are put together.  A signer could have the same
  merged structure, and it might be able to build structured
  signatures just as it builds unstructured ones now.  Perhaps
  not even tagged, perhaps one part holds states and another
  part holds sub-signatures (but not sub-sub-signatures?).
  Another way to merge them would be to make KHE_DRS_SIGNATURE
  private to KHE_DRS_SIGNATURE_SET, so that everything is a
  signature set, and users of KHE_DRS_SIGNATURE now would
  have to use a signature set containing one signature.

  What about hashing the key first?  If it strikes an exact match
  we get a definite answer immediately:  the one with smaller cost
  gets deleted.  But what are the chances?  Not good, I think, but
  I am not sure.  Actually a trie would be faster and more definite.
  For assign by resources we could drop down the tree as we build.
  But if the new solution replaces the old, we still have to do
  the full thing.
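  The hash-first check would look something like this (FNV-1a over
  the signature; unequal hashes settle it immediately, equal hashes
  still have to be verified against a possible collision):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define SIG_LEN 4

/* FNV-1a hash of a signature */
static uint64_t SigHash(const int sig[SIG_LEN])
{
  uint64_t h = 1469598103934665603ULL;
  for( int i = 0;  i < SIG_LEN;  i++ )
  {
    h ^= (uint64_t)(uint32_t)sig[i];
    h *= 1099511628211ULL;
  }
  return h;
}

/* hash-first exact-match test: cheap pre-check, then full compare */
static bool SigEqualHashFirst(const int a[SIG_LEN], uint64_t a_hash,
  const int b[SIG_LEN], uint64_t b_hash)
{
  if( a_hash != b_hash )
    return false;   /* definitely different, no full compare needed */
  for( int i = 0;  i < SIG_LEN;  i++ )
    if( a[i] != b[i] )
      return false; /* hash collision */
  return true;      /* exact match: delete the one with larger cost */
}
```

  Each signature's hash would be computed once, when the signature is
  built, so the pre-check costs one integer comparison per pair.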

  A must-assign task must get assigned to someone.  Can we use
  that to predict poor performance on the next day?  Or is there
  any other way to uncover correlation between, as opposed to
  within, resources?

  I need to focus on the first two days of solving for five trainees
  (and indeed four, although for four the problem does not really hit
  until making 1Fri).  Here's what I wrote on 19 February 2023:

    "Like before, I had to abandon 5 trainees:

      [ KheDrsSolveSearch(5 resources, 14 days)
	KheDrsSolveSearch ending day 1Mon (made 3041, undominated 3041)
	KheDrsSolveSearch ending day 1Tue (made 433770, undominated 34418)
	... (killed before getting this far)
      ]

    The number 3041 is reasonable, as the following argument shows.
    Each trainee has a choice of 4 shifts plus a free day, making 5
    choices altogether.  (Because there are excess slots we can say
    that in practice all 4 shifts are available to all trainees.)
    So there are about 5 * 5 * 5 * 5 * 5 = 3125 choices.  And on
    subsequent days, for each undominated solution on the previous
    day there are about 3125 choices, although some of them will be
    killed off very early by hard constraints, which explains why
    we do not generate anything near 3041 * 3125 day 2 solutions."

  This is the basic remaining problem.  Compared with the number
  of solutions that could be made on Day 2, the number actually
  made is small:  433770 / (3041 * 3125) = 0.05.  And compared
  with the number made, the number of undominated solutions kept is
  also small:  34418 / 433770 = 0.08.  But despite these positives
  the algorithm is being overwhelmed by large numbers of solutions.
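  The two fractions above, checked numerically (433770 made out of
  3041 * 3125 possible, and 34418 kept out of 433770 made):

```c
#include <assert.h>

/* fraction of possible day-2 solutions actually made */
static double MadeFraction(void)
{
  return 433770.0 / (3041.0 * 3125.0);
}

/* fraction of made day-2 solutions kept as undominated */
static double KeptFraction(void)
{
  return 34418.0 / 433770.0;
}
```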

  Sorting by weight shaved about 20% off the run time, and
  visiting the hard constraint entries before the soft ones
  when dominance testing shaved off another 30% (amazing),
  down to 6.4 seconds.  So anything we can do to speed up
  one dom test will be well worth doing.  Any other ideas?

  Solver seems to be working now, but still it is not fast enough
  to reassign five resources, or indeed four trainees.  I need
  another good idea.

  I recently moved "included_free_resources_index + 1" to what I
  thought was a better location, but now that I see it documented
  I am much less sure.  Look at it again.

  Did breaking up resource expand begin into two stages actually
  achieve anything?  KheDrsExpanderOpenToExtraCost is called but
  does not seem to be affected by the breakup.

  EvalSignature:  where is it presented, do the calls to it
  make sense to the reader, e.g. in KheDrsAsstToShiftMake?

  Make the solver return early (with failure) if time runs out.

  Signature value adjustment may need a rethink for sequence
  monitors.

  May need to revisit the current plan of always returning when
  the available cost goes negative.  This is because sequence
  monitors can contribute a positive amount to available cost.
  At any rate we should do some testing to see which is faster.

  Good idea:  compare old dom test with new dom test, and
  if there are cases where the old test succeeds and the
  new one does not, look into it.  Also vice versa.

  ==== dynamic programming ideas above this point, general ideas below ====

  If we want to combine ejection chains with dynamic programming,
  it might actually be easier to add ejection chain code to the
  dynamic programming module.  Limit task sets to the ones that
  the resources were freed from.  Could do that now, actually.

  What about a solver that swaps around the assigned shifts,
  without assigning or unassigning any resource, with the
  aim of getting the number of consecutive same shifts right.
  Is that a tractable problem?  Surely ejection chains do that?

  Look into how the resource assignment invariant interacts
  with the new "rs" option.

  It seems to be time to do some serious testing of the VLSN solver
  and compare what we get with what Legrain got.  My paper says he
  got 1695, his paper (tt/patat21/dynamic_papers/legrain.pdf) says
  1685.  My own best result from my own paper was 1835.

  Legrain's running time is m x 6n + 60 seconds, where m is the
  number of weeks and n is the number of nurses.  For the test
  instance, m = 4 and n = 30, so this is 4 x 6 x 30 + 60 = 780
  seconds, or 13 minutes.
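  For the 13-minute figure to come out, the units of the formula must
  be seconds:

```c
#include <assert.h>

/* Legrain's running time limit, assuming the formula is in seconds:
   m weeks, n nurses */
static int LegrainSeconds(int m, int n)
{
  return m * 6 * n + 60;
}
```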

  Testing ./doit in tt/nurse/solve.

  Reference in paper I refereed to dynamic programming solver?

  Speaking generally, we now have two new solvers to play with:
  the single resource solver, and the cluster minimum solver.
  Our mission is to make the best use we can of both.  We can
  run the cluster minimum solver once at the start and have its
  results permanently used throughout the rest of the solve.  And
  we can use KheSingleResourceSolverBest in conjunction with a
  balance solver to select a best solution from the single
  resource solver, and adopt that solution.  But when should we
  run single resource solving, and which resource(s) should we
  select for single resource solving?  For example:

  * Run single-resource solving on a fixed percentage of the
    resources (with highest workload limits) before time sweep,
    and then omit those resources from the time sweep.

  * Find optimal timetables for several resources over a subset
    of the interval, and use that as the basis for a VLSN search.

  Explore possible uses for the now-working cluster minimum
  solver.  Could it be run just before time sweep?  Could
  the changed minimum limits remain in place for the entire
  solve?  Also look at the solutions we are getting now from
  single resource assignment.  If one resource is already
  assigned, does that change the solve for the others?

  Make the cluster minimum solver take account of history.

  OK, what about this?  Use "extended profile grouping" to group all
  tasks into runs of tasks of the same shift type and domain.  Then
  use resource packing (largest workload resources first) to pack
  the runs into the resources.  Finish off with ejection chains.
  This to replace the current first stage.  Precede profile grouping
  by combinatorial grouping, to get weekend tasks grouped together.  
  Keep a matching at each time, so that unavailable times of other
  resources are taken into account, we want the unassigned tasks at
  every time to be assignable to the unpacked resources at that time.
  At least it's different!

  After INRC2-4-030-1-6291 is done, INRC2-4-035-0-1718 would be good to
  work on.  The current results are 21% worse, giving plenty to get into.

  Event timetables still to do.  Just another kind of dimension?
  But it shows meets, not tasks.

  Ideas:

  * Some kind of lookahead during time sweep that ensures resources
    get the weekends they need?  Perhaps deduce that the max limit
    implies a min limit, and go from there?

  * Swapping runs between three or more resources.  I tried this
    but it seems to take more time than it is worth; it's better
    to give the extra time to ejection chains

  * Ejection beams - K ejection chains being lengthened in
    parallel, if the number of unrepaired defects exceeds K
    we abandon the repair, but while it is less we keep going
    Tried this, it has some interest but does not improve things.

  * Hybridization with simulated annealing:  accept some chains
    that produce worse solutions; gradually reduce the temperature.

  Decided to just pick up where I left off, more or less, and go to
  work on INRC2-4-030-1-6291.  I'm currently solving in just 5.6
  seconds, so it makes a good test.

  Fun facts about INRC2-4-030-1-6291
  ----------------------------------

  * 4 weeks

  * 4 shifts per day:  Early (1), Day (2), Late (3), and Night (4) 
    The number of required ones varies more or less randomly; not
    assigning one has soft cost 30.

  * 30 Nurses:

       4 HeadNurse:  HN_0,  ... , HN_3
      13 Nurse:      NU_4,  ... , NU_16
       8 Caretaker:  CT_17, ... , CT_24
       5 Trainee:    TR_25, ... , TR_29

    A HeadNurse can also work as a Nurse, and a Nurse can also work
    as a Caretaker; but a Caretaker can only work as a Caretaker, and
    a Trainee can only work as a Trainee.  Given that there are no
    limit resources constraints and every task has a hard constraint
    preferring either a HeadNurse, a Nurse, a Caretaker, or a Trainee,
    this makes Trainee assignment an independent problem.

  * 3 contracts: Contract-FullTime (12 nurses), Contract-HalfTime
    (10 nurses), Contract-PartTime (8 nurses).  These determine
    workload limits of various kinds (see below).  There seems
    to be no relationship between them and nurse type.

  * There are unavailable times (soft 10) but they are not onerous

  * Unwanted patterns: [L][ED], [N][EDL], [D][E] (hard), so these
    prohibit all backward rotations.

  * Complete weekends (soft 30)

  * Contract constraints:                   Half   Part   Full    Wt
    ----------------------------------------------------------------
    Number of assignments                   5-11   7-15  15-20*   20
    Max busy weekends                          1      2      2    30
    Consecutive same shift days (Early)      2-5    2-5    2-5    15
    Consecutive same shift days (Day)       2-28   2-28   2-28    15
    Consecutive same shift days (Late)       2-5    2-5    2-5    15
    Consecutive same shift days (Night)      3-5    3-5    3-5    15
    Consecutive free days                    2-5    2-4    2-3    30
    Consecutive busy days                    2-4    3-5    3-5    30
    ----------------------------------------------------------------
    *15-20 is notated 15-22 but more than 20 is impossible.

  Currently giving XUTT a rest for a while.  Here is its to do
  list, prefixed by + characters:

  +Can distinct() be used for distinct times?  Why not?  And also
  +using it for "same location" might work.

  +I've finished university course timetabling, except for MaxBreaks
  +and MaxBlock, which I intend to leave for a while and ponder over
  +(see below).  I've also finished sports scheduling except for SE1
  +"games", which I am waiting on Bulck for but which will not be a
  +problem.

  +MaxBreaks and MaxBlock
  +----------------------

    +These are challenging because they do the sorts of things that
    +pattern matching does (e.g. idle times), but the criterion
    +which determines whether two things are adjacent is different:

      +High school timetabling - adjacent time periods
      +Nurse rostering - adjacent days
      +MaxBreaks and MaxBlock - intervals have gap of at most S.

    +It would be good to have a sequence of blocks to iterate over,
    +just like we have some subsequences to iterate over in high
    +school timetabling and nurse rostering.  Then MaxBreaks would
    +utilize the number of elements in the sequence, and MaxBlock
    +would utilize the duration of each block.

    +We also need to allow for future incorporation of travel time 
    +into MaxBreaks and MaxBlock.  Two events would be adjacent if
    +the amount of time left over after travelling from the first
    +to the second was at most S.

    +Assuming a 15-week semester and penalty 2:

    +MaxBreaks(R, S):

	+<Tree val="sum|15d">
	    +<ForEach v="$day" from="Days">
		+<Tree val="sum:0-(R+1)|2">
		    +<ForEachBlock v="$ms" gap="S" travel="travel()">
			+<AtomicMeetSet e="E" t="$day">
			+<Tree val="1">
		    +</ForEachBlock>
		+</Tree>
	    +</ForEach>
	+</Tree>

    +MaxBlock(M, S):

	+<Tree val="sum|15d">
	    +<ForEach v="$day" from="Days">
		+<Tree val="sum:0-M|2">
		    +<ForEachBlock v="$ms" gap="S" singles="no" travel="travel">
			+<AtomicMeetSet e="E" t="$day">
			+<Tree val="$ms.span:0-M|1s">
		    +</ForEachBlock>
		+</Tree>
	    +</ForEach>
        +</Tree>

    +Actually it might be better if each iteration produced a meet set.
    +We could then ask for span and so forth as usual.  There is also
    +a connection with spacing(a, b).  In fact it would be good to
    +give a general expression which determines whether two
    +chronologically adjacent meets are in the same block.
    +Then we could use "false" to get every meet into a separate
    +block, and then spacing(a, b) would apply to each pair of
    +adjacent blocks in the ordering.  If "block" has the same
    +type as "meet set", we're laughing.

    +I'll let this lie fallow for a while and come back to it.

  +Rather than sorting meets and defining cost functions which
  +are sums, can we iterate over the sorted meets?

  +The ref and expr attributes of time sequences and event sequences
  +do the same thing.

  +There is an example of times with attributes in the section on
  +weighted domain constraints.  Do we want them?  How do they fit
  +with time pattern trees?  Are there weights for compound times?

  +Moved history from Tree to ForEachTimeGroup.  This will be
  +consistent with pattern matching, and more principled, since
  +history in effect extends the range of the iterator.  But
  +what to do about general patterns?  We need to know how each
  +element of the pattern matches through history.

  +Could use tags to identify specific task sets within patterns.

  Install the new version of HSEval on web site, but not until after
  the final PATAT 2020 deadline.

  In the CQ14-13 table, I need to see available workload in minutes.

  Fun facts about instance CQ14-13
  --------------------------------

  * A four-week instance (1Mon to 4Sun) with 18 times per day:

      a1 (1),  a2 (2),  a3 (3),  a4 (4),  a5 (5),
      d1 (6),  d2 (7),  d3 (8),  d4 (9),  d5 (10),
      p1 (11), p2 (12), p3 (13), p4 (14), p5 (15),
      n1 (16), n2 (17), n3 (18)

    There are workloads, presumably in minutes, that vary quite a bit:

      a1 (480),  a2 (480),  a3 (480),  a4 (600),  a5 (720),
      d1 (480),  d2 (480),  d3 (480),  d4 (600),  d5 (720),
      p1 (480),  p2 (480),  p3 (480),  p4 (600),  p5 (720),
      n1 (480),                        n2 (600),  n3 (720)

    480 minutes is an 8-hour shift, 720 minutes is 12 hours.

  * 120 resources, with many hard preferences for certain shifts:

      Preferred-a1 Preferred-a2 Preferred-a3 Preferred-a4 Preferred-a5
      Preferred-d1 Preferred-d2 Preferred-d3 Preferred-d4 Preferred-d5
      Preferred-p1 Preferred-p2 Preferred-p3 Preferred-p4 Preferred-p5
      Preferred-n1 Preferred-n2 Preferred-n3

    although most resources have plenty of choices from this list.
    Anyway this leads to a huge number of prefer resources constraints.

  * There are also many avoid unavailable times constraints, some for
    whole days, many others for individual times; hard and soft.

  * Unwanted patterns (hard).  In these patterns, a stands for
    [a1a2a3a4a5] and so on.

      [d4][a]
      [p5][adp4-5]
      [n1][adp]
      [n2-3][adpn3]
      [d1-3][a1-4]
      [a5d5p1-4][ad]

    This is basically "day off after a sequence of night shifts",
    with some other stuff that probably matters less; a lot of it
    is about the 480 and 720 minute shifts.

  * MaxWeekends (hard) for most resources is 2, for some it is 1 or 3.

  * MaxSameShiftDays (hard) varies a lot, with fewer of the long
    workload shifts allowed.  NB this is not consecutive, this is
    total.  About at most 10 of the shorter, 3 of the longer.
    Doesn't seem very constraining, given that typical workloads
    are 15 or 16 shifts.

  * Many day or shift on requests, soft with varying weights (1-3).

  * Minimum and maximum workload limits in minutes (hard), e.g.

      Minutes           480-minute shifts
      -----------------------------------------------------------
      3120 - 3840
      4440 - 5160
      7440 - 8160        15.5 - 17.0
      7920 - 8640        16.5 - 18.0

    The last two ranges cover the great majority of resources.
    These ranges are quite tight, especially for hard constraints.

  * MinConsecutiveFreeDays 2 (hard) for most resources, 3 (hard)
    for a few.

  * MaxConsecutiveBusyDays 5 (hard) for most resources, 6 (hard)
    for a few.

  * MinConsecutiveBusyDays 2 (hard), for all or most resources.

  Decided to work on CQ14-13 for a while, then tidy up, rerun,
  and submit.

  What does profile grouping do when the minimum limits are
  somewhat different for different resources, and thus spread
  over several constraints?

  INRC1-ML02 would be a good test.  It runs fast and the gap is
  pretty wide at the moment.  Actually I worked on it before (from
  8 November 2019).  It inspired KhePropagateUnavailableTimes.

  Fun facts about INRC1-ML02
  --------------------------

    * 4 weeks 1Fri to 4Thu

    * 4 shifts per day: E (1), L (2), D (3), and N (4).  But there are
      only two D shifts each day, so this is basically a three-shift
      system of Early, Late, and Night shifts.

    * 30 Nurses:
  
        Contract-0  Nurse0  - Nurse7
        Contract-1  Nurse8  - Nurse26
        Contract-2  Nurse27 - Nurse29

    * Many day and shift off requests, all soft 1 but challenging.
      I bet this is where the cost is incurred.

    * Complete weekends (soft 2), no night shift before free
      weekend (soft 1), identical shift types during weekend (soft 1),
      unwanted patterns [L][E], [L][D], [D][N], [N][E], [N][D],
      [D][E][D], all soft 1

    * Contract constraints         Contract-0    Contract-1   Contract-2
      ----------------------------------------------------------------
      Assignments                    10-18        6-14          4-8
      Consecutive busy weekends       2-3     unconstrained     2-3
      Consecutive free days           2-4         3-5           4-6
      Consecutive busy days           3-5         2-4           3-4
      ----------------------------------------------------------------

      Workloads are tight: there are only 6 shifts to spare, or 8 if
      you ignore the overloads on Nurse28 and Nurse29.  Both GOAL
      and KHE18x8 have those overloads, so presumably they are
      inevitable.


  Do something about constraints with step cost functions, if only
  so that I can say in the paper that it's done.

  In INRC2-4-030-1-6291, the difference between my 1880 result and
  the LOR17 1695 result is about 200.  About 100 of that is in
  minimum consecutive same shift days defects.  Max working weekends
  defects are another problem, my solution has 3 more of those
  than the LOR17 solution has; at 30 points each that's 90 points.
  If we can improve our results on these defects we will go a long
  way towards closing the gap.

  Grinding down INRC2-4-030-1-6291 from where it is now.  It would
  be good to get a better initial solution from time sweep than I am
  getting now.  Also, there are no same shift days defects in the
  LOR17 solution, whereas there are some in mine.

  Perhaps profile grouping could do something unconventional if it
  finds a narrow peak in the profile that really needs to be grouped.

  What about an ejection chain repair, taking the current runs
  as indivisible?

  My chances of being able to do better on INRC2-4-030-1-6291
  seem to be pretty slim.  But I really should pause and make
  a serious attack on it.  After that there is only CQ to go,
  and I have until 30 January.  There's time now and if I don't
  do it now I never will.

  Better not to generate contract (and skill?) resource groups if
  they are not used.

  Change KHE's general policy so that operations that change
  nothing succeed.  Having them fail composes badly, because then
  the user needs to avoid cases that change nothing.

  Are there other modules that could use the task finder?
  Combinatorial grouping for example?  There are no functions
  in khe_task.c that look like task finding, but there are some
  in khe_resource_timetable_monitor.c:

    KheResourceTimetableMonitorTimeAvailable
    KheResourceTimetableMonitorTimeGroupAvailable
    KheResourceTimetableMonitorTaskAvailableInFrame
    KheResourceTimetableMonitorAddProperRootTasks

  KheTaskSetMoveMultiRepair phase variable may be slow, try
  removing it and just doing everything all together.

  Fun facts about COI-Musa
  ------------------------

  * 2 weeks, one shift per day, 11 nurses (skills RN, LPN, NA)

  * RN nurses:  Nurse1, Nurse2, Nurse3,
    LPN nurses: Nurse4, Nurse5, 
    NA nurses:  Nurse6, Nurse7, Nurse8, Nurse9, Nurse10, Nurse11

  Grinding down COI-HED01.  See above, 10 October, for what I've
  done so far.

  It should actually be possible to group four M's together in
  Week 1, and so on, although combinatorial grouping only tries
  up to 3 days so it probably does not realize this.

  Fun facts about COI-HED01
  -------------------------

    * 31 days, 5 shifts per day: 1=M, 2=D, 3=H, 4=A, 5=N

    * Weekend days are different, they use the H shift.  There
      is also something peculiar about 3Tue, it also uses the
      H shift.  It seems to be being treated like a weekend day.
      This point is reflected in other constraints, which treat
      Week 3 as though it had only four weekdays.

    * All demand expressed by limit resources constraints,
      except for the D shift, which has two tasks subject
      to assign resource and prefer resources constraints.
      The other shifts vary between about 7 and 9 tasks.  But
      my new converter avoids all limit resources constraints.

    * There are 16 "OP" nurses and 4 "Temp" nurses.
      Three nurses have extensive sequences of days off.
      There is one skill, "Skill-0", but it contains the
      same nurses as the OP nurses.

    * The constraints are somewhat peculiar, and need attention
      (e.g. how do they affect combinatorial grouping?)
    
        [D][0][not N]  (Constraint:1)
          After a D, we want a day off and then a night shift (OP only).
          Only one nurse has a D at any one time, so satisfying this
          should not be very troublesome.

	[not M][D]  (Constraint:2)
	  Prefer M before D (OP only), always seems to get ignored,
	  even in the best solutions.  This is because during the
	  week that D occurs, we can't have a week full of M's.
	  So really this constraint contradicts the others.

	[DHN][MDHAN]  (Constraint:3)
	  Prefer day off after D, H, or N.  Always seems to be
	  satisfied.  Since H occurs only on weekends, plus 3Tue,
	  each resource can work at most one day of the weekend,
	  and if that day is Sunday, the resource cannot work
	  M or A shifts the following week (since that would
	  require working every day).  Sure enough, in the
	  best solution, when an OP nurse works an H shift on
	  a Sunday, the following week contains N shifts and
	  usually a D shift.  And all of the H shifts assigned
	  to Temp nurses are Sunday or 3Tue ones.

	Constraint:4 says that Temp nurses should take H and
	D shifts only.  It would be better expressed by a
	prefer resources constraint but KHE seems happy
	enough with it.

	Constraint:5 says that assigning any shift at all to
	a Temp nurse is to be penalized.  Again, a prefer
	resources constraint would have been better, but at
	present both KHE and the best solution assign 15 shifts
	to Temp nurses, so that's fine.

	The wanted pattern is {M}{A}{ND}{M}{A}{ND}..., where
	{X} means that X only should occur during a week.
	This is for OP nurses only.  It is expressed rather
	crudely:  if 1 M in Week X, then 4 M in Week X.
	This part of it does not apply to N, however; it says
	"if any A in Week X, then at least one N in Week X+1".
	So during N weeks the resource usually has less than
	4 N shifts, and this is its big chance to take a D.

	OP nurses should take at least one M, exactly one D,
	at least one H, at most 2 H, at least one A, at least
	one N.  These constraints are not onerous.

    * Assign resource and prefer resources constraints specify:

        - There is one D shift per day

    * Limit resources constraints specify 

        Weekdays excluding 3Tue

        - Each N shift must have exactly 2 Skill-0 nurses.

	- Each M shift and each A shift must have exactly 4
	  Skill-0 nurses

	- There are no H shifts

	Weekend days, including 3Tue

	- Each H shift must have at least 2 Skill-0 nurses

	- Each H shift must have exactly 4 nurses altogether

	- There are no M, A, or N shifts on 3Tue

	- There are no M, A, or N shifts on weekend days

    * The new converter is expressing all demands with assign
      resource and prefer resources constraints, as follows:

      D shifts:

        <R>NA=s1000:1</R>
	<R>A=s1000:1</R>

	So one resource, any skill.

      H shifts (weekends and 3Tue):

        <R>NA=s1000+NW0=s1000:1</R>
	<R>NA=s1000+NW0=s1000:2</R>
	<R>NA=s1000:1</R>
	<R>NA=s1000:2</R>
	<R>A=s1000:1</R>

	So 2 Skill-0 and 2 arbitrary, as above

      M and A shifts (weekdays not 3Tue):

        <R>NA=s1000+NW0=s1000:1</R>
	<R>NA=s1000+NW0=s1000:2</R>
	<R>NA=s1000+NW0=s1000:3</R>
	<R>NA=s1000+NW0=s1000:4</R>
	<R>W0=s1000:1</R>
	<R>W0=s1000:2</R>
	<R>W0=s1000:3</R>
	<R>W0=s1000:4</R>
	<R>W0=s1000:5</R>

	So exactly 4 Skill-0, no limits on Temp nurses

      N shifts (weekday example)

        <R>NA=s1000+NW0=s1000:1</R>
	<R>NA=s1000+NW0=s1000:2</R>
	<R>W0=s1000:1</R>
	<R>W0=s1000:2</R>
	<R>W0=s1000:3</R>
	<R>W0=s1000:4</R>
	<R>W0=s1000:5</R>

      Exactly 2 Skill-0, no limits on Temp nurses.

  It would be good to have a look at COI-HED01.  It has
  deteriorated and it is fast enough to be a good test.
  Curtois' best is 136 and KHE18x8 is currently at 183.
  A quick look suggests that the main problems are the
  rotations from week to week.

  Back to grinding down CQ14-05.  I've fixed the construction
  problem but with no noticeable effect on solution cost.

  KheClusterBusyTimesConstraintResourceOfTypeCount returns the
  number of resources, not the number of distinct resources.
  This may be a problem in some applications of this function.

  Fun facts about CQ14-05
  -----------------------

    * 28 days, 2 shifts per day (E and L), whose demand is:

           1Mon 1Tue 1Wed 1Thu 1Fri 1Sat 1Sun 2Mon 2Tue 2Wed 2Thu
        ---------------------------------------------------------
        E   5    7    5    6    7    6    6    6    6    6    5
        L   4    4    5    4    3    3    4    4    4    6    4
        ---------------------------------------------------------
        Tot 9   11   10   10   10    9   10   10   10   12    9

      Uncovered demands (assign resources defects) make up the
      bulk of the cost (1500 out of 1543).  Most of this (14 out
      of 15) occurs on the weekends.

    * 16 resources named A, B, ... P.  There is a Preferred-L
      resource group containing {C, D, F, G, H, I, J, M, O, P}.
      The resources in its complement, {A, B, E, K, L, N}, are
      not allowed to take late shifts.

    * Max 2 busy weekends (max 3 for resources K to P)

    * Unwanted pattern [L][E]

    * Max 14 same-shift days (not consecutive).  Not hard to
      ensure given that resource workload limits are 16 - 18.

    * Many day or shift on requests.  These basically don't
      matter because they have low weight and my current best
      solution has about the same number of them as Curtois'.

    * Workload limits (all resources) min 7560, max 8640
      All events (both E and L) have workload 480;
      7560 / 480 = 15.75, 8640 / 480 = 18.0, so every resource
      needs between 16 and 18 shifts.  The Avail column agrees.
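
  The minute-to-shift arithmetic here generalizes to the other CQ14
  instances; a quick sanity check in C (function names hypothetical):

```c
#include <assert.h>

/* least number of whole shifts reaching a minimum minute load */
static int KheSketchMinShifts(int min_minutes, int shift_len)
{
  return (min_minutes + shift_len - 1) / shift_len;  /* ceiling division */
}

/* most whole shifts fitting under a maximum minute load */
static int KheSketchMaxShifts(int max_minutes, int shift_len)
{
  return max_minutes / shift_len;  /* floor division */
}
```

  For CQ14-05 this gives KheSketchMinShifts(7560, 480) = 16 and
  KheSketchMaxShifts(8640, 480) = 18, the 16 to 18 shift range above.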

    * Min 2 consecutive free days (min 3 for resources K to P)

    * Max 5 consecutive busy days (max 6 for resources K to P)

    * Curtois' best is 1143.  This represents 2 fewer unassigned
      shifts (costing 100 each) and virtually the same other stuff.

  Try to get CQ14-24 to use less memory and produce better results.
  But start with a smaller, faster CQ14 instance:  CQ14-05, say.

  In Ozk*, there are two skill types (RN and Aid), and each
  nurse has exactly one of those skills.  Can this be used to
  convert the limit resources constraints into assign resource
  and prefer resources constraints?

  Grinding down COI-BCDT-Sep in general.  I more or less lost
  interest when I got cost 184 on the artificial instance, but
  this does include half-cycle repairs.  So more thought needed.
  Could we add half-cycle repairs to the second repair phase
  if the first ended quickly?

  KheCombSolverAddProfileGroupRequirement could be merged with
  KheCombSolverAddTimeGroupRequirement if we add an optional
  domain parameter to KheCombSolverAddTimeGroupRequirement.

  Fun facts about COI-BCDT-Sep
  ----------------------------

    * 4 weeks and 2 days, starting on a Wednesday

    * Shifts: 1 V (vacation), 2 M (morning), 3 A (afternoon), 4 N (night).

    * All cover constraints are limit resources constraints.  But they
      are quite strict and hard.  Could they be replaced by assign
      resource constraints?  (Yes, they have been.)

	  Constraint            Shifts               Limit    Cost
	  --------------------------------------------------------
          DemandConstraint:1A   N                    max 4      10
	  DemandConstraint:2A   all A; weekend M     max 4     100
	  DemandConstraint:3A   weekdays M           max 5     100
	  DemandConstraint:4A   all A, N; weekend M  max 5    hard
	  DemandConstraint:5A   weekdays M           max 6    hard
	  DemandConstraint:6A   all A, N; weekend M  min 3    hard
	  DemandConstraint:7A   all N                min 4      10
	  DemandConstraint:8A   all A; weekend M     min 4     100
	  DemandConstraint:9A   weekday M            min 4    hard
	  DemandConstraint:10A  weekday M            min 5     100
	  --------------------------------------------------------

      Weekday M:   min 4 (hard), min 5 (100), max 5 (100), max 6 (hard),
      Weekend M:   min 3 (hard), min 4 (100), max 4 (100), max 5 (hard) 
      All A:       min 3 (hard), min 4 (100), max 4 (100), max 5 (hard)
      All N:       min 3 (hard), min 4 (10),  max 4 (10),  max 5 (hard)

    * There are day and shift off constraints, not onerous

    * Avoid A followed by M

    * Night shifts are to be assigned in blocks of 3, although a
      block of four is allowed, to avoid a Fri N followed by a free
      Sat.  There are hard constraints requiring at least 2 and at
      most 4 night shifts in a row.

    * At least six days between sequences of N shifts; the
      implementation here could be better, possibly.

    * At least two days off after five consecutive shifts

    * At least two days off after night shift

    * Prefer at least two morning shifts before a vacation period and
      at least one night shift afterwards

    * Between 4 and 8 weekend days

    * At least 10 days off

    * 5-7 A (afternoon) shifts, 5-7 N (night) shifts

    * Day shifts (M and A, taken together) in blocks of exactly 3

    * At most 5 working days in a row.

  Work on COI-BCDT-Sep, try to reduce the running time.  There are
  a lot of constraints, which probably explains the poor result.

  Should we limit domain reduction at the start to hard constraints?
  A long test would be good.

  In khe_se_solvers.c, KheAddInitialTasks and KheAddFinalTasks could
  be extended to return an unassign_r1_ts task set which could then be
  passed on to the double repair.  No great urgency, but it does make
  sense to do this.  But first, let's see whether any instances need it.

  Also thought of a possibility of avoiding repairs during time sweep,
  when the cost blows out too much.  Have to think about it and see if
  it is feasible.

  Take a close look at resource matching.  How good are the
  assignments it is currently producing?  Could it do better?

  Now it is basically the big instances, ERRVH, ERMGH, and MER
  that need attention.  Previously I was working on ERRVH, I
  should go back to that.

  Is lookahead actually working in the way I expect it to?
  Or is there something unexpected going on that is preventing
  it from doing what it has the potential to do?

  UniTime requirements not covered yet:

    Need an efficient way to list available rooms and their
    penalties.  Nominally this is done by task constraints but
    something more concise, which indicates that the domain
    is partitioned, would be better.

    Ditto for the time domain of a meet.

    SameStart distribution constraint.  Place all times
    with the same start time in one time group, have one
    time group for each distinct starting time, and use
    a meet constraint with type count and eval="0-1|...".

    SameTime is a problem because there is not a simple
    partition into disjoint sets of times.  Need some
    kind of builtin function between pairs of times, but
    then it's not clear how this fits in a meet set tree.

    DifferentTime is basically no overlap, again we seem
    to need a binary attribute.

    SameDays and SameWeeks are cluster constraints, the limit
    would have to be extracted from the event with the largest
    number of meets, which is a bit dodgy.

    DifferentDays and DifferentWeeks just a max 1 on each day
    or week.

    Overlap and NotOverlap: need a binary for the amount of
    overlap between two times, and then we can constrain it
    to be at least 1 or at most 0.  NB the distributive law

       overlap(a+b, c+d) = overlap(a, c) + overlap(a, d)
         + overlap(b, c) + overlap(b, d)

    but this nice property is not going to hold for all
    binary attributes.
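
  The distributive law can be checked numerically.  A throwaway sketch
  (all names hypothetical), treating times as half-open slot intervals:

```c
#include <assert.h>

/* overlap in slots between half-open intervals [s1,e1) and [s2,e2) */
static int KheSketchOverlap(int s1, int e1, int s2, int e2)
{
  int lo = s1 > s2 ? s1 : s2;
  int hi = e1 < e2 ? e1 : e2;
  return hi > lo ? hi - lo : 0;
}

/* right-hand side of the law for a = [0,4), b = [6,10), c = [2,7),
   d = [8,9); a and b are disjoint, as are c and d */
static int KheSketchLawSum(void)
{
  return KheSketchOverlap(0, 4, 2, 7) + KheSketchOverlap(0, 4, 8, 9)
    + KheSketchOverlap(6, 10, 2, 7) + KheSketchOverlap(6, 10, 8, 9);
}

/* left-hand side, by brute force over the individual slots */
static int KheSketchLawBrute(void)
{
  int t, count = 0;
  for( t = 0;  t < 12;  t++ )
  {
    int in_ab = (t < 4) || (t >= 6 && t < 10);
    int in_cd = (t >= 2 && t < 7) || (t == 8);
    if( in_ab && in_cd )
      count++;
  }
  return count;
}
```

  Both sides come to 4 on this example, as the law says they must when
  the operands within each union are disjoint.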

    Precedence: this is the order events constraint, with
    "For classes that have multiple meetings in a week or
    that are on different weeks, the constraint only cares
    about the first meeting of the class."  No design for
    this yet.

    WorkDay(S): "There should not be more than S time slots
    between the start of the first class and the end of the
    last class on any given day."  This is a kind of avoid
    idle times constraint, applied to events rather than to
    resources (which for us is a detail).
      One task or meet set per day, and then a special function
    (span or something) to give the appropriate measure.  But
    how do you define one day?  By a time group.
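
  The span measure might be sketched like this (hypothetical names,
  one-slot granularity, one day's classes at a time):

```c
#include <assert.h>

/* WorkDay(S) sketch: the span of one day's classes is the distance
   from the start of the first class to the end of the last, where
   class i occupies slots [start[i], start[i] + len[i]) and starts
   are sorted */
static int KheSketchDaySpan(const int *start, const int *len, int n)
{
  return n == 0 ? 0 : start[n - 1] + len[n - 1] - start[0];
}

/* demo: classes at slot 2 (length 2) and slot 9 (length 1) */
static int KheSketchDaySpanDemo(void)
{
  static const int start[] = { 2, 9 }, len[] = { 2, 1 };
  return KheSketchDaySpan(start, len, 2);
}
```

  The demo day spans 8 slots; WorkDay(S) would penalize it when S < 8.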

    MinGap(G): Any two classes that are taught on the same day
    (they are placed on overlapping days and weeks) must be at
    least G slots apart.  Not sure what to make of this.
    I guess it's overlap(a, b, extension) where extension
    applies to both a and b.

    MaxDays(D): "Given classes cannot spread over more than D days
    of the week".  Just a straight cluster constraint.

    MaxDayLoad(S): "Given classes must be spread over the days
      of the week (and weeks) in a way that there is no more
      than a given number of S time slots on every day."  Just
      a straight limit busy times constraint, measuring durations.
      But not the full duration, rather the duration on one day.

      This is one of several indications that we cannot treat
      a non-atomic time as a unit in all cases.

    MaxBreaks(R,S): "This constraint limits the
      number of breaks during a day between a given set of classes
      (not more than R breaks during a day). For each day of week
      and week, there is a break between classes if there is more
      than S empty time slots in between."  A very interesting
      definition of what it means for two times to be consecutive.
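
  That gap rule is easy to state in code.  A hypothetical sketch, with
  each class one slot long and one day's slots given in increasing
  order:

```c
#include <assert.h>

/* MaxBreaks(R,S) gap rule: consecutive classes are separated by a
   break when more than s empty slots lie between them */
static int KheSketchCountBreaks(const int *slots, int n, int s)
{
  int i, breaks = 0;
  for( i = 1;  i < n;  i++ )
    if( slots[i] - slots[i - 1] - 1 > s )
      breaks++;
  return breaks;
}

/* demo day with classes at slots 1, 2, 5, and 9 */
static int KheSketchBreaksDemo(int s)
{
  static const int slots[] = { 1, 2, 5, 9 };
  return KheSketchCountBreaks(slots, 4, s);
}
```

  On the demo day the gaps are 0, 2, and 3 empty slots, so there are
  2 breaks when S = 0, 1 when S = 2, and none when S = 3.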

    MaxBlock(M,S): "This constraint limits the length of a block
      of consecutive classes during a day (not more than M slots
      in a block). For each day of week and week, two consecutive
      classes are considered to be in the same block if the gap
      between them is not more than S time slots."  Limit active
      intervals, interpreted using durations rather than times.

  A resource r is busy at some time t if that time overlaps with
  any interval in any meet that r is attending.

  Need a way to define time *groups* to take advantage of symmetries.
  e.g. 1-15{MWF}3 = {1-15M3, 1-15W3, 1-15F3}.  All doubles:
  [Mon-Fr][12 & 23 & 45 & 67 & 78] or something.
  {MWF:<time>} or something.  But what is the whole day anyway?
  All intervals, presumably.  {1-15:{MTWRF:1-8}}

  See 16 April 2019 for things to do with the XUTT paper.

  It's not clear at the moment how time sweep should handle
  rematching.  If left as is, without lookahead, it might
  well undo all the good work done by lookahead.  But to
  add lookahead might be slow.  Start by turning it off:
  rs_time_sweep_rematch_off=true.  The same problem afflicts
  ejection chain repair during time sweep.  Needs thought.
  Can the lookahead stuff be made part of the solution cost?
  "If r is assigned t, add C to solution cost".  Not easily.
  It is like a temporary prefer resources monitor.

  Here's an idea for a repair:  if a sequence is too short, try
  moving it all to another resource where there is room to make
  it longer.  KheResourceUnderloadAugment will in fact do nothing
  at all in these cases, so we really do need to do something,
  even an ejecting move on that day.

  Working over INRC2-4-030-1-6753 generally, trying to improve
  the ejection chain repairs.  No luck so far.

  Resource swapping is really just resource rematching, only not
  as good.  That is, unless there are limit resources constraints.

  The last few ideas have been small beer.  Must do better!
  Currently trying to improve KHE18's solutions to INRC2-4-035-2-8875.xml:

    1 = Early, 2 = Day, 3 = Late, 4 = Night
    FullTime: max 2 weekends, 15-22 shifts, consec 2-3 free 3-5 busy
    PartTime: max 2 weekends,  7-15 shifts, consec 2-5 free 3-5 busy
    HalfTime: max 1 weekend,   5-11 shifts, consec 3-5 free 3-5 busy
    All: unwanted [4][123], [3][12], complete weekends, single asst per day
    All: consec same shift days: Early 2-5, Day 2-28, Late 2-5, Night 4-5

    FullTime resources and the number of weekends they work in LOR are:
    
      NU_8 2, NU_9 1, CT_17 1, CT_18 0, CT_20 1, CT_25 1, TR_30 2, TR_32 3

    NB full-time can only work 20 shifts because of max 5 busy then
    min 2 free, e.g. 5-2-5-2-5-2-5-2 with 4*5 = 20 busy shifts.  But
    this as it stands is not viable because you work no weekends.  The
    opposite, 2-5-2-5-2-5-2-5 works 4 weekends which is no good either.
    Ideally you would want 5-2-5-4-5-2-5, which works 2 weekends, but
    the 4 free days are a defect.  More breaks is the only way to
    work 2 weekends, but that means a lower workload again.  This is
    why several of LOR's full-timers are working only 18 shifts.  The
    conclusion is that trying to redistribute workload overloads is
    not going to help much.
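
    The 5-busy/2-free arithmetic above can be checked mechanically.
    A throwaway sketch (hypothetical names), assuming the 28-day
    cycle starts on a Monday and the pattern is periodic, busy iff
    (day + offset) % 7 < 5, so offset 0 is 5-2-5-2-... and offset 5
    is 2-5-2-5-...:

```c
#include <assert.h>

/* busy iff (day + offset) % 7 < 5: blocks of 5 busy then 2 free */
static int KheSketchBusy(int day, int offset)
{
  return (day + offset) % 7 < 5;
}

/* number of busy days over the 28-day cycle */
static int KheSketchBusyDays(int offset)
{
  int day, count = 0;
  for( day = 0;  day < 28;  day++ )
    count += KheSketchBusy(day, offset);
  return count;
}

/* weekends (Sat = day 7w+5, Sun = day 7w+6) with a busy day */
static int KheSketchWeekendsWorked(int offset)
{
  int w, count = 0;
  for( w = 0;  w < 4;  w++ )
    if( KheSketchBusy(7*w + 5, offset) || KheSketchBusy(7*w + 6, offset) )
      count++;
  return count;
}
```

    Offset 0 gives 20 busy days but 0 worked weekends, and offset 5
    gives 20 busy days but 4 worked weekends, matching the two
    extremes described above.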

    Resource types

    HeadNurse (HN_*) can also work as Nurse or Caretaker
    Nurse     (NU_*) can also work as Caretaker
    Caretaker (CT_*) works only as Caretaker
    Trainee   (TR_*) works only as Trainee

  "At least two days off after night shift" - if we recode this,
  we might do better on COI-BCDT-Sep.  But it's surprisingly hard.

  Option es_fresh_visits seems to be inconsistent, it causes
  things to become unvisited when there is an assumption that
  they are visited.  Needs looking into.  Currently commented
  out in khe_sr_combined.c.

  For the future:  time limit storing.  khe_sm_timer.c already
  has code for writing time limits, but not yet for reading.

  Work on time modelling paper for PATAT 2020.  The time model
  is an enabler for any projects I might do around ITC 2019,
  for example modelling student sectioning and implementing
  single student timetabling, so it is important for the future
  and needs to be got right.

  Time sets, time groups, resource sets, and resource groups
  ----------------------------------------------------------

    Thinking about whether I can remove construction of time
    neighbourhoods, by instead offering offset parameters on
    the time set operations (subset, etc.) which do the same.

    Need to use resource sets and time sets a lot more in the
    instance, for the constructed resource and time sets which
    in general have no name.  Maybe replace solution time groups
    and solution resource groups altogether.  But it's not
    trivial, because solution time groups are used by meets,
    and solution resource groups are used by tasks, both for
    handling domains (meet and task bounds).  What about

      typedef struct khe_time_set_rec {
          SSET elems;
      } KHE_TIME_SET;

    with SSET optimized by setting length to -1 to finalize.
    Most of the operations would have to be macros which
    add address-of operators in the style of SSET itself.

       KHE_TIME_SET KheTimeSetNeighbour(KHE_TIME_SET ts, int offset);

    would be doable with no memory allocation and one binary
    search (which could be optional for an internal version).
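
    A rough sketch of how the neighbour operation might avoid memory
    allocation (everything here is hypothetical, not the real SSET):

```c
#include <assert.h>

/* Hypothetical time set: a sorted array of distinct time indices,
   viewed through an offset and a clipped index range, so that a
   neighbour set can share the original array */
typedef struct khe_sketch_time_set_rec {
  const int *times;   /* sorted, distinct time indices */
  int first, last;    /* inclusive range of valid entries */
  int offset;         /* added to each stored index on access */
} KHE_SKETCH_TIME_SET;

/* index of the first entry with value >= target (binary search) */
static int KheSketchLowerBound(const int *times, int count, int target)
{
  int lo = 0, hi = count;
  while( lo < hi )
  {
    int mid = (lo + hi) / 2;
    if( times[mid] < target )
      lo = mid + 1;
    else
      hi = mid;
  }
  return lo;
}

/* neighbour: shift every time by offset, clip to [0, time_count) */
static KHE_SKETCH_TIME_SET KheSketchNeighbour(const int *times, int count,
  int offset, int time_count)
{
  KHE_SKETCH_TIME_SET res;
  res.times = times;
  res.offset = offset;
  if( offset >= 0 )
  {
    res.first = 0;
    res.last = KheSketchLowerBound(times, count, time_count - offset) - 1;
  }
  else
  {
    res.first = KheSketchLowerBound(times, count, -offset);
    res.last = count - 1;
  }
  return res;
}

static int KheSketchTimeSetCount(KHE_SKETCH_TIME_SET ts)
{
  return ts.last - ts.first + 1;
}

/* demo: times {2, 5, 9} in an instance with 10 times */
static int KheSketchDemo(int offset)
{
  static const int t[] = { 2, 5, 9 };
  return KheSketchTimeSetCount(KheSketchNeighbour(t, 3, offset, 10));
}
```

    One binary search and no allocation, as hoped; a real version
    would also need an iteration interface that applies the offset.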

    I'm letting this lie for now, something has to be done
    here but I'm not sure what, and there is no great hurry.

  There is a problem with preparing once and solving many times:
  adjustments for limit resources monitors depend on assignments
  in the vicinity, which may vary from one call to another.  The
  solution may well be simply to document the issue.

  At present resource matching is grouping then ungrouping during
  preparation, then grouping again when we start solving.  Can this
  be simplified?  There is a mark in the way.

  Document sset (which should really be khe_sset) and khe_set.

  I'm slightly worried that the comparison function for NRC
  worker constraints might have lost its transitivity now that
  history_after is being compared in some cases but not others.

  Look at the remaining special cases in all.map and see if some
  form of condensing can be applied to them.

  Might be a good idea to review the preserve_existing option in
  resource matching.  I don't exactly understand it at the moment.

  There seem to be several silly things in the current code that are
  about statistics.  I should think about collecting statistics in
  general, and implement something.  But not this time around.

  KheTaskFirstUnFixed is quite widely used, but I am beginning to

  are the same as mine (which GOAL's are not)?  If so I need
  to compare my results with theirs.  The paper is in the 2012
  PATAT proceedings, page 254.  Also it gives this site:

    https://www.kuleuven-kulak.be/nrpcompetition/competitor-ranking

  Can I find the results from the competition winner?  According to
  Santos et al. this was Valouxis et al, but their paper is in EJOR.

  Add code for limit resources monitors to khe_se_secondary.c.

  In KheClusterBusyTimesAugment, no use is being made of the
  allow_zero option at the moment.  Need to do this some time.

  Generalize the handling of the require_zero parameter of
  KheOverloadAugment, by allowing an ejection tree repair
  when the ejector depth is 1.  There is something like
  this already in KheClusterOverloadAugment, so look at
  that before doing anything else.

  There is an "Augment functions" section of the ejection chains
  chapter of the KHE guide that will need an update - do it last.

  (KHE) What about a general audit of how monitors report what
  is defective, with a view to finding a general rule for how
  to do this, and unifying all the monitors under that rule?
  The rule could be to store reported_deviation, renaming it
  to deviation, and to calculate a delta on that and have a
  function which applies the delta.  Have to look through all
  the monitors to see how that is likely to pan out.  But the
  general idea of a delta on the deviation does seem to be
  right, given that we want evaluation to be incremental.

  (KHE) For all monitors, should I include attached and unattached
  in the deviation function, so that attachment and unattachment
  are just like any other update functions?

  Ejection chains idea:  include main loop defect ejection trees
  in the major schedule, so that, at the end when main loop defects
  have resisted all previous attempts to repair them, we can try
  ejection trees on each in turn.  Make one change, produce several
  defects, and try to repair them all.  A good last resort?

  Ejection chains idea:  instead of requiring an ejection chain
  to improve the solution by at least (0, 1), require it to
  improve it by a larger amount, at first.  This will run much
  faster and will avoid trying to fix tiny problems until there
  is nothing better to do.  But have I already tried it?  It
  sounds a lot like es_limit_defects.
