# A 10-minute scheduling classic with glpk-hs

This page demonstrates the use of Integer Linear Programming (ILP) to solve a simple sequential job scheduling problem. We begin with generalities on ILP, followed by a description of the problem, its formulation using the Haskell GLPK bindings, and finish with an example. Some of the code used in this page is available in this repository. This page does not cover any of the complexity theory or algorithmic aspects. For those, I would suggest reading [1].

# (Integer) Linear Programming

Linear Programming (LP) and its integer and mixed-integer variants are classic mathematical optimization techniques. Together, they constitute an important part of the model-based family of approaches to optimization (model-based approaches can be opposed to method-based approaches, where custom algorithms are written to solve specific problem classes). The LP setting is the following:

$$
\begin{aligned}
\min_{x} \quad & c^{T} x \\
\text{s.t.} \quad & A x \leq b
\end{aligned}
$$

Where $A$, $b$ and $c$ are known parameters, and $x$ is the design variable. There are a few classical cases for $x$:

When $x$ is real-valued, the problem is called Linear Programming (LP). LP is a tractable problem for which polynomial (in both the dimension of $x$ and the number of constraints) time algorithms are known. This is true for both the optimization and constraint satisfaction versions of the problem.

When $x$ is an integer vector, the problem is called Integer Linear Programming (ILP) and no polynomial (in both the dimension of $x$ and the number of constraints) time algorithm is known. This is true for both the minimization and constraint satisfaction versions of the problem.

When $x$ is part integer- and part real-valued, the problem is called Mixed-Integer Programming (MIP), and the complexity situation is similar to that of ILP.

While we don’t have a polynomial time algorithm for the integer variants of this problem (and never will, unless P=NP), a number of things can be done in practice:

Exact algorithms for ILP/MIP can be used with computational budgets. Such algorithms may terminate quickly despite their exponential worst case complexity. Should such an algorithm indeed terminate, the solution it returns is guaranteed to be optimal for the problem.

Heuristics can also be used. They outperform exact algorithms for some problem instances. Their worst case run-time is still exponential though, and they don’t guarantee optimality if and when a result is returned.

The LP relaxation of the problem can be used to obtain various useful quantities. The LP relaxation of the integer-valued program is the corresponding real-valued (in $x$) version of the formulation. When the relaxed problem is satisfiable, its optimal value is guaranteed to be a lower bound on the optimal value of the original problem. Such a lower bound can therefore be obtained easily in practice, since linear programming is tractable. When the relaxed problem formulation is unsatisfiable, we have a proof that the integer-constrained version of the problem is also unsatisfiable.

`glpk-hs` provides Haskell support for linear programming in the form of a shallowly embedded DSL for the CPLEX LP format and bindings to the GLPK solver. The code snippets in this page will use this package. Let’s now describe the scheduling problem we’re looking to formulate as an ILP.
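As a quick warm-up before the scheduling problem, here is a minimal sketch of what a complete `glpk-hs` program can look like, on a toy two-variable ILP. The variable names and the toy instance are made up for illustration:

```
import Data.LinearProgram
import Data.LinearProgram.GLPK

-- maximize 4x + 3y  s.t.  2x + y <= 10, x + 3y <= 15, x, y >= 0 integer
tiny :: LP String Int
tiny = execLPM $ do
  setDirection Max
  setObjective (linCombination [(4, "x"), (3, "y")])
  linCombination [(2, "x"), (1, "y")] `leqTo` 10
  linCombination [(1, "x"), (3, "y")] `leqTo` 15
  varGeq "x" 0
  varGeq "y" 0
  setVarKind "x" IntVar
  setVarKind "y" IntVar

main :: IO ()
main = glpSolveVars mipDefaults tiny >>= print
```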

# A classic scheduling problem

Consider the following problem: we’d like to schedule some sequential jobs on some heterogeneous resources. Jobs have precedence constraints, meaning that some jobs must finish before others can start. Each resource is only able to run a subset of the jobs, and a job’s run-time is a function of the resource it is assigned to. This is a classic problem, for which some good method-based heuristics are known. (The HEFT [3] heuristic is designed specifically for this problem. Note that this heuristic considers a problem with communication times, which we are not considering here. Those can also be encoded in an ILP formulation, as is done in [2].)

Suppose that we have full a-priori information about our problem. Its parameters are the following.

- Statically known sets of **jobs** $J$, **resources** $R$ and **precedences** $P \subseteq J \times J$. Job $j$ must precede job $k$ if there exists $(j, k) \in P$ (suppose this constitutes a DAG: there are no cycles).
- A resource-job compatibility parameter $v_{jr} \in \{0, 1\}$ of job $j$ and resource $r$.
- A processing time $p_{jr}$ for job $j$ on resource $r$.

Suppose we want our solution to schedule jobs as fast as possible, for some definition of fast. In particular, we consider the makespan $C_{\max}$ of the schedule as our objective, defined as the latest completion time of all jobs in the schedule. Now that we’ve defined our problem and assigned some notation for its parameters, let’s move on to the ILP formulation.

# ILP Formulation

We want to write a function that takes the problem parameters as input and returns a value of type `LP`, which represents an instance of a problem. The type of the function we’ll be writing in the rest of this page is the following:

```
formulation ::
  forall job resource.
  (Eq job, ToILPVar job, ToILPVar resource) =>
  Set job ->                    -- ^ set of jobs to schedule
  Set resource ->               -- ^ set of available resources
  Map job [job] ->              -- ^ directed adjacency list of job precedences
  ((job, resource) -> Bool) ->  -- ^ job validity
  ((job, resource) -> Time) ->  -- ^ run-times
  Time ->                       -- ^ upper bound on makespan
  LP String Int                 -- ^ resulting formulation to optimize
```

Where the following has been defined.

```
newtype Time = Time {unTime :: Int} -- newtype representing time

class ToILPVar a where
  lpShow :: a -> String -- constructs a variable name for our ILP
```

Note that the type signature for `formulation` corresponds exactly to the problem parametrization that was just presented, with the addition of an upper bound on the makespan, the importance of which will be outlined later in this page. Since we’ll be going through the implementation of this function, we need to know its variable bindings (this snippet uses the ViewPatterns GHC extension; read through that link if you’re unfamiliar, it’s an easy one). Those are taken to have notation similar to the problem parametrization presented above.

```
formulation
  (toList -> _J :: [job])                      -- list is convenient for comprehensions
  (toList -> _R :: [resource])                 -- list is convenient for comprehensions
  (expandMap -> _P :: [(job, job)])            -- list of tuples for convenience
  v_                                           -- job validity
  ((unTime .) -> p_ :: (job, resource) -> Int) -- run-time, integer
  (unTime -> u :: Int)                         -- makespan upper bound, integer
  = execLPM . sequenceA_ $
      ... -- a value of type [LPM]
```

Where `expandMap :: Map a [a] -> [(a, a)]` builds a set of precedence edges from an adjacency list map. Unless specified otherwise, all subsequent code snippets in this page are lists of type `[LPM]`. Once concatenated and sequenced together, they form the full implementation we need to write. `LPM` is the monadic eDSL provided by `glpk-hs` to help write such formulations. Let’s start with the design variables of our problem.

## Design variables

We’ll use two sets of design variables. One for allocation, and one for start times.

**allocation**: $x_{jr} \in \{0, 1\}$ for each job $j$ and resource $r$, equal to $1$ iff $j$ is assigned to $r$. This is expressed using the `setVarKind` primitive, which sets variable kinds:
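A plausible version of this declaration (a sketch, assuming the `x_` and `cartesian` helpers described below):

```
[ setVarKind (x_ jr) BinVar
| jr <- cartesian _J _R
, v_ jr ]
```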

where `x_ :: (ToILPVar job, ToILPVar resource) => (job, resource) -> String` is our variable naming convention for $x_{jr}$, and `cartesian :: [a] -> [b] -> [(a,b)]` is the cartesian product. Note that we don’t declare this binary variable for cases where $v_{jr} = 0$. We’ll make those variables redundant by construction.

**start times**: $s_j \ge 0$, an integer start time for each job $j$:
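A plausible sketch of this declaration, assuming glpk-hs’s `varGeq` and `setVarKind` primitives and the `s_` naming helper described below:

```
[ varGeq (s_ j) 0 >> setVarKind (s_ j) IntVar
| j <- _J ]
```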

where `s_ :: ToILPVar job => job -> String` is our variable naming convention for $s_j$, and other primitives are self-explanatory.

## Objective

The makespan of the schedule is the maximum completion time of all jobs. We’ll define this using a constraint; for now, let us just declare a corresponding variable $C_{\max}$ and minimize it:
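A plausible sketch of this declaration, assuming the `toFun` helper and the `cMax` variable name described below:

```
[ setDirection Min
, setObjective (toFun cMax)
]
```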

Where `toFun` is a helper that lifts a single design variable to a linear function, `setDirection` and `setObjective` are primitives used to set up the problem, and `cMax :: String` is our variable name for the objective.

## Constraints

**makespan definition**: The makespan is the maximum job completion time.

For any job $j$ and resource $r$ such that $v_{jr} = 1$, one has $C_{\max} \ge s_j + p_{jr} x_{jr}$.

```
[ linCombination [(-1, cMax), (1, s_ j), (p_ jr, x_ jr)] `leqTo` 0
| jr@(j, _) <- cartesian _J _R
, v_ jr ]
```

**unique assignment**: Each job is assigned to exactly one resource.

For any job $j$, one has $\sum_{r : v_{jr} = 1} x_{jr} = 1$.
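A plausible sketch of this constraint, following the same pattern as the other snippets:

```
[ linCombination [(1, x_ (j, r)) | r <- _R, v_ (j, r)] `equalTo` 1
| j <- _J ]
```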

**resource/job compatibility**: A job can only run on a compatible resource.

For any job $j$ and resource $r$ such that $v_{jr} = 0$, one has $x_{jr} = 0$.
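Because the invalid $x_{jr}$ variables are never declared or referenced by any other constraint, this already holds by construction; an explicit (and then redundant) encoding would be a sketch like:

```
[ linCombination [(1, x_ jr)] `equalTo` 0
| jr <- cartesian _J _R
, not (v_ jr) ]
```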

**precedences**: A job can not start before its predecessors have terminated.

For any $j$, $k$ and $r$ such that $v_{jr} = 1$ and there exists a precedence relation $(j, k) \in P$, one has $s_k \ge s_j + p_{jr} x_{jr}$.

```
[ linCombination [(1, s_ k), (-1, s_ j), (- (p_ jr), x_ jr)] `geqTo` 0
| (j, k) <- _P,
  r <- _R,
  v_ (j, r),
  let jr = (j, r)
]
```

**over-commitment**: Resources should not be over-committed.

This one is more tricky to express. For any two jobs $j \ne k$, define the overlap of job $j$ on job $k$ to be $O_{jk} = s_k - s_j - \sum_{r : v_{jr} = 1} p_{jr} x_{jr}$, and consider the following graph for two jobs allocated to the same resource, where the jobs are overlapping.

```
j |------------------------------------|
k . |--------------------------------------------|
. . . .
. . O_jk . .
. <--(negative direction)------ .
. .
<-----(negative direction)-----------------------------
O_kj
```

For any $j \ne k$, $j$ and $k$ overlap iff $O_{jk} < 0$ and $O_{kj} < 0$. Here is how our strategy for enforcing the lack of overlap works:

- First, we’ll introduce binary variables $\tau_{jk}$ for all pairs $j \ne k$.
- Second, we will force $\tau_{jk}$ to be an indicator variable for $O_{jk} < 0$.
- Third, we’ll express the “not and” constraints on $\tau_{jk}$ and $\tau_{kj}$ for jobs that are allocated to the same resource.

The second step can be done in the following way. For any $j \ne k$, specify that $O_{jk} + u \tau_{jk} \ge 0$ and $O_{jk} + u \tau_{jk} \le u - 1$, where $u$ is any upper bound on the left-side quantities. Re-formulated, this corresponds to the following set of constraints and variable definitions. For any $j, k$ such that $j \ne k$, one has:

$$
0 \le s_k - s_j - \sum_{r : v_{jr} = 1} p_{jr} x_{jr} + u \tau_{jk} \le u - 1, \qquad \tau_{jk} \in \{0, 1\}
$$

where $u$ is an upper bound on the overlap, which is at most equal to the makespan of the solution. We’ll use the upper bound on the makespan obtained from the signature of `formulation`.

```
[ let eq =
        toFun (s_ k) - toFun (s_ j)
          + linCombination [(u, tau_ (j, k))]
          - linCombination [(p_ (j, r), x_ (j, r)) | r <- _R, v_ (j, r)]
  in do
       eq `geqTo` 0
       eq `leqTo` (u - 1)
       setVarKind (tau_ (j, k)) BinVar
| (j, k) <- cartesian _J _J,
  j /= k,
  (j, k) `notElem` fullPrecs
]
```

Note that some of those are redundant with job precedence constraints. An easy way to reduce the number of constraints is to omit any such constraint between jobs $j$ and $k$ that have an indirect precedence relation in the DAG. Here, we have computed such a set of indirect precedence relations called `fullPrecs`, which we used to guard our list comprehension. It remains to express the “not and” constraint on $\tau_{jk}$ and $\tau_{kj}$:

For all $j \ne k$ and $r$ s.t. $v_{jr} = 1$ and $v_{kr} = 1$, one has $x_{jr} + x_{kr} + \tau_{jk} + \tau_{kj} \le 3$.

```
[ let elms = [x_ (j, r), x_ (k, r), tau_ (j, k), tau_ (k, j)]
  in linCombination ((1,) <$> elms) `leqTo` 3
| (j, k) <- cartesian _J _J,
  j /= k,
  (j, k) `notElem` fullPrecs,
  r <- _R,
  v_ (j, r),
  v_ (k, r)
]
```

Again, we guard using `fullPrecs`, the full set of indirect precedence constraints. And with that, we’re done describing and implementing the formulation of this problem. Let us now consider an example instance.

# Example problem instance

Consider a problem instance with two types of tasks and two types of resources.
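The data declarations for tasks and resources are not shown on this page; plausible (assumed) definitions, consistent with the pattern matches in the snippets below, are:

```
newtype TaskID = TaskID Int deriving (Eq, Ord, Show)
newtype ResourceID = ResourceID Int deriving (Eq, Ord, Show)

data TaskType = A | B Int -- B carries its own task length
data Task = Task TaskID TaskType

data ResourceType = RA | RB
data Resource = Resource ResourceID ResourceType
```

The numeric literals used as map keys in the example snippets would additionally need `Num` instances for the ID newtypes (e.g. via GeneralizedNewtypeDeriving).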

Jobs are only valid on a resource of the same type:

```
validity :: (Task, Resource) -> Bool
validity (Task _ A, Resource _ RA) = True
validity (Task _ (B _), Resource _ RB) = True
validity _ = False
```

In this instance, run-times are only task dependent. The task constructor `B` carries its own task length.

```
runtime :: (Task, Resource) -> Time
runtime (Task _ A, Resource _ RA) = Time 1
runtime (Task _ (B x), Resource _ RB) = Time x
runtime _ = Time (panic "runtime query on invalid placement")
```

We consider a setting with three resources: two of type `RA`, one of type `RB`:

```
exampleResources :: Map ResourceID Resource
exampleResources = M.fromList [(0, RA), (1, RA), (2, RB)]
```

Consider the following set of tasks and precedence graph:

```
exampleTasks :: Map TaskID Task
exampleTasks =
  fromList $
    [(0, B 3), (1, B 3), (2, B 2)]
      <> ([3 .. 10] <&> (,A) . TaskID)

examplePrecedences :: Map TaskID [TaskID]
examplePrecedences =
  fromList $
    bimap TaskID (fmap TaskID)
      <$> [ (2, [4, 3, 8, 5]),
            (1, [4, 5, 2, 3]),
            (3, [7, 6, 8, 10]) ]
```

We first run GLPK on the LP relaxation. This will return either a lower bound on the objective or a proof of unsatisfiability.
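A plausible driver for this step (the `lp` binding is assumed to hold the formulation built above): passing `simplexDefaults` to `glpSolveVars` makes GLPK ignore the integrality constraints, i.e. solve the LP relaxation.

```
relax :: LP String Int -> IO ()
relax lp = do
  -- solve the LP relaxation of an already-built formulation
  (_, result) <- glpSolveVars simplexDefaults lp
  case result of
    Just (bound, _) -> putStrLn ("LP lower bound on the makespan: " <> show bound)
    Nothing -> putStrLn "LP relaxation unsatisfiable"
```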

We now know the LP version of our problem is satisfiable (although that was easy to deduce by hand from the problem instance), and we know a lower bound on $C_{\max}$. Let’s now use GLPK’s exact algorithm directly on the ILP formulation for our problem instance.

```
Constructing initial basis...
Size of triangular part is 351
0: obj = 6.000000000e+00 inf = 5.838e+01 (90)
105: obj = 1.000000000e+01 inf = 0.000e+00 (0)
* 113: obj = 7.000000000e+00 inf = 0.000e+00 (0)
Long-step dual simplex will be used
+ 113: mip = not found yet >= -inf (1; 0)
+ 900: >>>>> 1.800000000e+01 >= 8.500000000e+00 52.8% (121; 11)
+ 1748: >>>>> 1.600000000e+01 >= 9.000000000e+00 43.8% (198; 45)
+ 5283: >>>>> 1.000000000e+01 >= 9.000000000e+00 10.0% (360; 927)
+ 36529: mip = 1.000000000e+01 >= 9.000000000e+00 10.0% (42; 11207)
+ 37861: mip = 1.000000000e+01 >= tree is empty 0.0% (0; 11967)
Success
```

We’re lucky, as GLPK terminates on this instance in less than six seconds. Because the algorithm is an exact one, we also know this is an optimal solution. The following graph shows the resulting schedule, with the LP lower bound outlined in red.

Note that there was no a-priori guarantee that we could obtain a quick optimal solution for our problem instance. The lower bound, however, is always “easy” to obtain, and on this problem it certifies that the makespan can’t get any better than 6. That can be useful for method-based heuristics and algorithms: if a given heuristic is often close enough to the optimal for relevant problem instances, we might decide to stop improving on it. The lower bound value may also be used at run-time, during an algorithm’s execution. Finally, in case the relaxed problem is infeasible, we have an easy proof that the original problem is infeasible too.

# References

[1] Korte, B. et al. 2012. *Combinatorial optimization*. Springer.

[2] Tompkins, M.F. 2003. Optimization techniques for task allocation and scheduling in distributed multi-agent operations.

[3] Topcuoglu, H. et al. 2002. Performance-effective and low-complexity task scheduling for heterogeneous computing. *IEEE TPDS*. 13, 3 (2002), 260–274.