Improvement techniques

In this chapter, you will get to know two improvement techniques often used when modeling problems with constraint programming:

  • symmetry removal;

  • redundant constraints.

Symmetry removal

For many optimization problems, some variables are equivalent and interchangeable. This may occur when you want to sell products that have equivalent characteristics with respect to your constraints and produce the same revenue. Another example arises when you want to assign a job to a group of people who have the same skills. With the standard model, Artelys Kalis can produce many equivalent solutions simply by swapping the roles of two such variables. This can be avoided by introducing an ordering relation.

A simple example can be made using the knapsack problem defined in Chapter 8. Two projects are strictly equivalent: the 4th and the 6th. Both have a revenue of 400 and a cost of 80. If only one of them remains in the optimal solution, we can enforce that it is the 4th one. This means that the binary variable projectChosen[3] must be greater than or equal to projectChosen[5].

A more general version of this code would be:

for (int i = 0; i < nbProj; i++) {
    for (int j = i + 1; j < nbProj; j++) {
        // Two projects are interchangeable when they have the same revenue
        // and the same cost: break the symmetry with an ordering constraint.
        if ((projValues[i] == projValues[j]) && (constrCoeffs[0][i] == constrCoeffs[0][j])) {
            knapsack.post(projectChosen[i] >= projectChosen[j]);
        }
    }
}

With this example and the different search strategies used previously, the number of visited nodes was reduced by 3 to 25 %.

When more than two variables are equivalent, adding a global order removes many more symmetric cases.

For example, we can consider three projects having the same characteristics. Their associated binary variables X1, X2 and X3 initially admit 2^3 = 8 possible assignments; by adding the constraint X3 \geq X2 \geq X1, only the ordered assignments remain:

\begin{matrix}
(X1, X2, X3) \in \{(0,0,0),\ (0,0,1),\ (0,1,1),\ (1,1,1)\}
\end{matrix}

This helps the search drastically reduce the number of explored nodes without changing the optimum.
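
A minimal sketch of this generalization, in the same style as the knapsack snippet above: it assumes that the indices of one group of equivalent projects have been collected beforehand (the array equivGroup below is hypothetical), and posts a chain of ordering constraints between consecutive variables of the group.

// Hypothetical array holding the indices of one group of equivalent projects;
// with the data of Chapter 8 it would contain at least projects 3 and 5.
int equivGroup[] = {3, 5};
int groupSize = 2;

// A chain projectChosen[g0] >= projectChosen[g1] >= ... keeps exactly one
// representative assignment per set of symmetric ones.
for (int k = 0; k + 1 < groupSize; k++) {
    knapsack.post(projectChosen[equivGroup[k]] >= projectChosen[equivGroup[k + 1]]);
}

Compared with the pairwise loop above, this posts only one constraint per consecutive pair of the group while breaking the same symmetry, since the remaining orderings follow by transitivity.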

Redundant constraints

Adding redundant constraints to a model is the main improvement method used in constraint programming. A redundant constraint is a constraint that is implied by the other constraints of the problem: it merely gives a different representation of part of them. The benefit of redundant constraints is better propagation: since propagation is incomplete, they allow infeasible assignments to be detected earlier in the tree search.

Here is a simple example showing this improvement.

Consider the following problem:

\begin{matrix}
\text{Variables:} & X, Y \in [0,10] \\
 & X, Y \text{ are integers} \\
\text{Constraints:} & C1: 2X + 3Y \geq 11 \\
 & C2: X - 3Y \geq 4
\end{matrix}

Usual propagation on linear constraints is performed on the lower and upper bounds of each variable.

In this case we can propagate C2: since Y \geq 0, we get X \geq 4 + 3Y \geq 4, and since X \leq 10, we get 3Y \leq X - 4 \leq 6. We obtain X \in [4,10] and Y \in [0,2]. With these bounds, C1 only yields X \geq 3, which is weaker, so no more propagation can be done.

Now we add a redundant constraint C3 which is just a linear combination of the others: C3 = C1 + C2, that is, 3X \geq 15.
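
In a model, such a redundant constraint is simply posted alongside the original ones. Here is a minimal sketch written in the same style as the knapsack snippet above; the model object problem, the variables X and Y, and the exact expression syntax are assumptions made for illustration, not part of the original model:

// Hypothetical model and variables, posted in the style of the earlier snippet.
problem.post( 2*X + 3*Y >= 11 );   // C1
problem.post( X - 3*Y >= 4 );      // C2
// C3 is implied by C1 and C2, but posting it strengthens bound propagation.
problem.post( 3*X >= 15 );         // C3 = C1 + C2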

With this redundant constraint, we can complete the initial propagation: C3 gives 3X \geq 15, hence X \geq 5, a tighter lower bound for X.

The domains become X \in [5,10] and Y \in [0,2].
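
The reasoning above can be reproduced with a small standalone bound-propagation routine. The sketch below is an illustration only and does not use the Artelys Kalis engine: it repeatedly tightens the bounds of X and Y against each linear constraint until a fixed point, and shows that the lower bound of X reaches 5 only once C3 is added.

#include <algorithm>
#include <cstdio>
#include <vector>

// A constraint of the form a[0]*X + a[1]*Y >= b.
struct Lin { int a[2]; int b; };

// Tighten the bounds lo/hi of the two variables against every constraint
// until a fixed point is reached (toy bound propagation, illustration only).
void propagate(const std::vector<Lin>& cons, int lo[2], int hi[2]) {
    bool changed = true;
    while (changed) {
        changed = false;
        for (const Lin& c : cons) {
            for (int i = 0; i < 2; i++) {
                if (c.a[i] == 0) continue;
                int j = 1 - i;
                // Largest possible contribution of the other variable.
                int rest = std::max(c.a[j] * lo[j], c.a[j] * hi[j]);
                if (c.a[i] > 0) {            // x_i >= ceil((b - rest) / a_i)
                    int num = c.b - rest, den = c.a[i];
                    int q = num / den; if (num % den > 0) q++;
                    if (q > lo[i]) { lo[i] = q; changed = true; }
                } else {                     // x_i <= floor((rest - b) / (-a_i))
                    int num = rest - c.b, den = -c.a[i];
                    int q = num / den; if (num % den < 0) q--;
                    if (q < hi[i]) { hi[i] = q; changed = true; }
                }
            }
        }
    }
}

int main() {
    std::vector<Lin> cons = { {{2, 3}, 11},   // C1: 2X + 3Y >= 11
                              {{1, -3}, 4} }; // C2:  X - 3Y >= 4
    int lo[2] = {0, 0}, hi[2] = {10, 10};     // X, Y in [0,10]
    propagate(cons, lo, hi);
    std::printf("without C3: X in [%d,%d], Y in [%d,%d]\n", lo[0], hi[0], lo[1], hi[1]);

    cons.push_back({{3, 0}, 15});             // C3 = C1 + C2: 3X >= 15
    int lo2[2] = {0, 0}, hi2[2] = {10, 10};
    propagate(cons, lo2, hi2);
    std::printf("with C3:    X in [%d,%d], Y in [%d,%d]\n", lo2[0], hi2[0], lo2[1], hi2[1]);
    return 0;
}

The expected output is "without C3: X in [4,10], Y in [0,2]" and "with C3: X in [5,10], Y in [0,2]", matching the domains derived above.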

This small example shows the benefit of redundant constraints. In real problems, they are most often applied through global constraints, which provide powerful but incomplete propagation.

Do not overuse them: they consume memory and may increase computation time in some cases.