Finite Constraint Satisfaction Problems

The Problem

A constraint satisfaction problem (CSP) is defined as

  1. a set of variables $(X_1,\dots,X_n)$, where each variable $X_i$ has a domain $D_i$ of possible values
  2. a set of constraints $(C_1,\dots,C_m)$ that restrict which combinations of variable-value assignments are allowed

A solution to a CSP is an assignment of a value to each variable that satisfies all of the constraints. In this section, we will concern ourselves only with variables with finite domains.

A typical example of this is solving a Sudoku puzzle. In this case, the variables are the cells on the board, the domain of each variable is the digits 1-9, and the constraints are

  1. No two cells in a row can have the same value.
  2. No two cells in a column can have the same value.
  3. No two cells in a 3x3 subtable can have the same value.
  4. The cells whose values are provided must have those values.

We can classify constraints as 1-constraints, 2-constraints, etc. based on how many variables interact in the restriction. In our example, (1)-(3) are 9-constraints, while (4) is a 1-constraint.

Simple Algorithms

The "trivial" algorithm is simply to check every combination of assignments and see whether any of them satisfies the constraints. To do this, we can implement depth-first search over a tree in which variables are nodes and values are edges. Sadly, if we have $n$ variables with $d$ possible values and $c$ constraints, this runs in $O(c \cdot d^n)$ time.

We can often improve this algorithm in practice by checking our constraints at each level of the tree rather than only at the leaf nodes. While this isn't guaranteed to improve the worst-case running time, it usually helps: many partial assignments already violate a constraint at shallow depths, and abandoning them early prunes an exponentially large subtree.
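
To make this concrete, here is a small runnable Python sketch of both approaches on a toy map-coloring CSP (the instance and helper names are my own, purely for illustration): the brute-force version checks constraints only on complete assignments, while the depth-first version checks them at every level.

```python
from itertools import product

# Toy CSP: 3-color the triangle A-B-C so that adjacent variables differ.
variables = ["A", "B", "C"]
domains = {v: ["red", "green", "blue"] for v in variables}
neq_edges = [("A", "B"), ("B", "C"), ("A", "C")]  # binary != constraints

def violates(assignment):
    """True if some constraint among the assigned variables is broken."""
    return any(x in assignment and y in assignment and
               assignment[x] == assignment[y] for x, y in neq_edges)

def brute_force():
    """Enumerate all d^n complete assignments; check only at the leaves."""
    for values in product(*(domains[v] for v in variables)):
        assignment = dict(zip(variables, values))
        if not violates(assignment):
            return assignment
    return None

def dfs(assignment=None):
    """Same search tree, but prune as soon as a partial assignment fails."""
    assignment = assignment or {}
    if len(assignment) == len(variables):
        return dict(assignment)
    var = variables[len(assignment)]  # fixed variable ordering
    for value in domains[var]:
        assignment[var] = value
        if not violates(assignment):  # check at every level, not just leaves
            result = dfs(assignment)
            if result is not None:
                return result
        del assignment[var]
    return None
```

Both return a valid coloring here; the difference only shows up in how much of the tree each one visits.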

Sadly, CSPs include 3SAT as a special case, so it is unlikely we'll ever find a polynomial-time algorithm. However, that doesn't mean that we can't achieve significant improvements in practice. Alas, these simple algorithms are rather bad even in practice, so we'll have to be cleverer.

Backtracking Search

It turns out that we can use decomposition methods to reduce any set of constraints into an equivalent set consisting only of 1- and 2-constraints. Hence, we only need an algorithm that can solve CSPs with 1- and 2-constraints. The former are quite easy to enforce from the get-go: just remove from each variable's domain any value that violates a 1-constraint. Hence, we just need to worry about 2-constraints.
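
Enforcing the 1-constraints up front might look like the following sketch (the function and parameter names here are illustrative, not from the text):

```python
def enforce_unary(domains, unary_constraints):
    """Prune each domain to the values its unary constraints allow.

    domains: dict mapping variable -> set of candidate values
    unary_constraints: dict mapping variable -> predicate on a value
    """
    return {var: {v for v in dom
                  if unary_constraints.get(var, lambda _: True)(v)}
            for var, dom in domains.items()}
```

After this preprocessing step, the 1-constraints can be forgotten entirely and the search only has to respect the 2-constraints.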

Enter the idea of a constraint graph. A constraint graph is a graph whose vertices correspond to variables and whose edges correspond to constraints. In particular

  class Node {
    set<VALUE> domain;
    set<BinaryConstraint*> constraints;
  };

  class BinaryConstraint {
    Node* x;
    Node* y;
    // Given the current domain of x, return the values of y
    // that are consistent with this constraint.
    set<VALUE> get_y_domain(set<VALUE> x_domain);
  };
  
To be a little more precise, each constraint corresponds to two directed edges: one from vertex/variable $A$ to $B$, and the other from $B$ to $A$.

So, now we've represented the CSP as a graph - so what? Well, now we can use a smarter set of algorithms collectively called backtracking search. Here's the general procedure:

  backtrack(Assignment ass, CSP csp) {
    if (ass complete) return ass;
    var = select_unassigned_variable(csp, ass);
    for each value in domain_of(var, ass) {
      if (value doesn't violate csp.constraints) {
        ass[var] = value;
        result = backtrack(ass, csp);
        if (result != null) return result;
        delete ass[var];
      }
    }
    return null;
  }

You'll note, however, that the function select_unassigned_variable is left deliberately vague. If this just randomly chooses a variable, then we haven't really improved over our simple depth-first search.

However, if we use the minimum remaining values (MRV) heuristic, then we can often (but not always) do much better. This heuristic says that we should next assign the variable with the fewest remaining legal values. The idea is that we want to fail as fast as possible, so as to reduce the number of nodes we need to search.

Moreover, we can also change the order in which we search the domain of the selected variable. In particular, we can try first the values that cause the fewest conflicts in the constraint graph, i.e., the values that rule out the fewest options for neighboring variables (the least constraining value heuristic). Unlike MRV, this heuristic will only make the algorithm faster if a solution actually exists - and even then it's not guaranteed.
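
Here is a runnable Python sketch of backtracking with both heuristics on a toy graph-coloring instance (the instance and helper names are illustrative, not from the text):

```python
# Toy CSP: color a 4-vertex graph so adjacent variables differ.
domains = {"A": {1, 2}, "B": {1, 2, 3}, "C": {1, 2, 3}, "D": {1, 2, 3}}
neighbors = {"A": {"B", "C"}, "B": {"A", "C", "D"},
             "C": {"A", "B", "D"}, "D": {"B", "C"}}  # != constraints

def legal_values(var, assignment):
    """Values of var not already used by an assigned neighbor."""
    return {v for v in domains[var]
            if all(assignment.get(n) != v for n in neighbors[var])}

def select_unassigned_variable(assignment):
    """MRV: pick the unassigned variable with the fewest legal values."""
    unassigned = [v for v in domains if v not in assignment]
    return min(unassigned, key=lambda v: len(legal_values(v, assignment)))

def order_values(var, assignment):
    """LCV: try first the values that rule out the fewest neighbor options."""
    def conflicts(value):
        return sum(value in legal_values(n, assignment)
                   for n in neighbors[var] if n not in assignment)
    return sorted(legal_values(var, assignment), key=conflicts)

def backtrack(assignment=None):
    assignment = assignment or {}
    if len(assignment) == len(domains):
        return assignment
    var = select_unassigned_variable(assignment)
    for value in order_values(var, assignment):
        assignment[var] = value
        result = backtrack(assignment)
        if result is not None:
            return result
        del assignment[var]
    return None
```

Note how the two heuristics pull in opposite directions on purpose: MRV picks the *most* constrained variable (fail fast), while LCV picks the *least* constraining value (succeed if possible).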

In practice, this is often enough to vastly speed up CSP solving. However, if we're even cleverer, we can do even better...

Forward Checking

Instead of starting from scratch, let's build on our backtracking search. One relatively simple way to do this is by using forward checking. The idea here is simple: sometimes by assigning one variable a specific value, we've reduced the options for another variable.

For instance, in Sudoku, if I assign one cell the value "3", then none of the other cells in the row can be assigned "3". When I make this assignment, forward checking simply says that I should look at all of the vertex's (variable's) edges (constraints) and update its neighbors' domains. Of course, when I'm going back up the call stack, I'll need to undo these changes.

This simple addition to the algorithm can dramatically prune the tree, but we can do even better...
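
A minimal runnable sketch of forward checking in Python, again on a toy coloring instance (the names and the copy-the-domains strategy are my own; a real solver would record and undo individual prunings instead of copying):

```python
# Toy CSP: 3-color the triangle A-B-C.
variables = ["A", "B", "C"]
neighbors = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B"}}

def forward_check(var, value, domains):
    """Prune `value` from each neighbor's domain (on a copy).

    Returns the pruned copy of `domains`, or None if some neighbor's
    domain is wiped out, in which case we can backtrack immediately."""
    domains = {v: set(d) for v, d in domains.items()}
    domains[var] = {value}
    for n in neighbors[var]:
        domains[n].discard(value)
        if not domains[n]:
            return None
    return domains

def backtrack(assignment, domains):
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in sorted(domains[var]):
        pruned = forward_check(var, value, domains)
        if pruned is not None:
            assignment[var] = value
            result = backtrack(assignment, pruned)
            if result is not None:
                return result
            del assignment[var]  # "undo" = simply drop the pruned copy
    return None
```

Because each recursive call works on its own pruned copy of the domains, returning up the call stack automatically restores the previous state.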

Arc Consistency

When forward checking, we restrict the immediate neighbors of the vertex we assign. But this restriction can cascade, as any Sudoku player can attest: the fact that my neighbor can't take a certain value can further restrict its neighbors, and so on and so forth. This is similar in Sudoku to how, upon deducing one cell's value, you can immediately (1) deduce other cells' values and (2) restrict other cells' domains. If we keep propagating these restrictions until no more domain-updating can occur, we maintain a property called arc consistency.

As you can imagine, all this propagating of domain restrictions can be costly. Indeed, the worst-case time for achieving arc consistency is $O(n^2 d^3)$. This means that, in practice, we are sometimes better off enforcing arc consistency only once at the beginning and then simply running forward checking.

An arc is simply a directed edge. An arc from $X$ to $Y$ with constraint $C$ is called arc consistent if and only if for every value in the domain of $X$, there exists a value in the domain of $Y$ such that $C$ is satisfied.
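
The standard way to enforce this property is the AC-3 algorithm. Here is a hedged Python sketch (the representation of arcs as a dict of predicates is my own choice, not from the text):

```python
from collections import deque

def revise(domains, x, y, constraint):
    """Remove values of x that have no supporting value in y's domain."""
    removed = False
    for vx in set(domains[x]):
        if not any(constraint(vx, vy) for vy in domains[y]):
            domains[x].discard(vx)
            removed = True
    return removed

def ac3(domains, arcs):
    """Enforce arc consistency in place.

    arcs: dict mapping (x, y) -> predicate over (value_of_x, value_of_y).
    Returns False if some domain is wiped out, True otherwise."""
    queue = deque(arcs)
    while queue:
        x, y = queue.popleft()
        if revise(domains, x, y, arcs[(x, y)]):
            if not domains[x]:
                return False
            # x's domain shrank: recheck every other arc pointing at x.
            queue.extend((z, x) for (z, w) in arcs if w == x and z != y)
    return True
```

Each revision of an arc can shrink a domain, which is exactly the cascading behavior described above: shrinking $X$'s domain forces every arc into $X$ to be rechecked.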

$k$-Consistency

1-consistency simply means that each variable is consistent by itself, i.e., its domain satisfies all of its 1-constraints. This is, you may recall, simply guaranteed at the beginning of the algorithm. 2-consistency is identical to arc consistency.

We can even generalize this by saying that a graph is $k$-consistent if and only if, for any set of $k-1$ variables and for any consistent assignment to those variables, a consistent value can always be assigned to any $k$th variable. Similarly, we call a graph strongly $k$-consistent if it is 1-consistent, 2-consistent, ..., and $k$-consistent.

More Information

Additional concepts to look into include

  • Efficiency gains when one of our constraints is that no two variables can hold the same value
  • Chronological and conflict-directed backtracking
  • Local search algorithms