TY - CHAP
T1 - Search space extraction
AU - Mehta, Deepak
AU - O'Sullivan, Barry
AU - Quesada, Luis
AU - Wilson, Nic
PY - 2009
Y1 - 2009
N2 - Systematic tree search is often used in conjunction with inference and restarts when solving challenging Constraint Satisfaction Problems (CSPs). To improve the efficiency of constraint solving, techniques that learn during search, such as constraint weighting and nogood learning, have been proposed. Constraint weights can be used to guide heuristic choices, and nogood assignments can be avoided by adding additional constraints. Both techniques can be used either in one-shot systematic search or in a setting in which the search procedure is frequently restarted. In this paper we propose a third way of learning during search, generalising previous work by Freuder and Hubbe. Specifically, we show how, in a restart context, we can guarantee that a previously visited region of the search space is never revisited, by extracting that region from the problem. Likewise, we can avoid revisiting inconsistent regions of the search space by extracting inconsistent subproblems, based on a significant improvement upon Freuder and Hubbe's approach. A major empirical result of this paper is that our approach significantly outperforms systematic search combined with weighted degree heuristics and restarts on challenging constraint problems. Our approach can be regarded as an efficient form of learning that does not rely on constraint propagation. Instead, we rely on a reformulation of a CSP into an equivalent set of CSPs, none of which contains any of the search space we wish to avoid.
UR - https://www.scopus.com/pages/publications/70350431924
U2 - 10.1007/978-3-642-04244-7_48
DO - 10.1007/978-3-642-04244-7_48
M3 - Chapter
AN - SCOPUS:70350431924
SN - 3642042430
SN - 9783642042430
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 608
EP - 622
BT - Principles and Practice of Constraint Programming - CP 2009 - 15th International Conference, CP 2009, Proceedings
T2 - 15th International Conference on Principles and Practice of Constraint Programming, CP 2009
Y2 - 20 September 2009 through 24 September 2009
ER -