TY - JOUR
T1 - SAT feature analysis for machine learning classification tasks
AU - Dalla, Marco
AU - Provan-Bessell, Benjamin
AU - Visentin, Andrea
AU - O’Sullivan, Barry
N1 - Publisher Copyright:
© 2023, Association for the Advancement of Artificial Intelligence (www.aaai.org).
PY - 2023
Y1 - 2023
AB - The extraction of meaningful features from CNF instances is crucial to applying machine learning to SAT solving, enabling algorithm selection and configuration for solver portfolios and satisfiability classification. While many approaches to feature extraction have been proposed, their relevance to these tasks is unclear. Comparing their applicability, the information they extract, and the computational effort they require is complicated by the lack of working or up-to-date implementations, which harms reproducibility. In this paper, we analyse the performance of five feature sets from the literature on SAT/UNSAT and problem-category classification over a dataset of 3000 instances across ten problem classes, distributed equally between SAT and UNSAT. To improve reproducibility and encourage research in this area, we release a Python library containing an updated, clear implementation of the structural, graph-based, statistical and probing features proposed in the literature for SAT CNF instances, and we define a clear pipeline for robustly comparing feature sets on a given learning task. We analyse which of the computed features are relevant to each task and the tradeoff they offer between accuracy and computational effort. The results provide insights into which features most affect an instance’s satisfiability and which can be used to identify the problem’s type. These insights can be used to develop more effective solver portfolios and satisfiability classification algorithms.
UR - https://www.scopus.com/pages/publications/85180552522
U2 - 10.1609/socs.v16i1.27292
DO - 10.1609/socs.v16i1.27292
M3 - Article
AN - SCOPUS:85180552522
SN - 2832-9171
VL - 16
SP - 138
EP - 142
JO - The International Symposium on Combinatorial Search
JF - The International Symposium on Combinatorial Search
IS - 1
T2 - 16th International Symposium on Combinatorial Search, SoCS 2023
Y2 - 14 July 2023 through 16 July 2023
ER -