TY - JOUR
T1 - Computational argumentation and automatic rule-generation for explainable data-driven modeling
AU - Longo, Luca
AU - Berretta, Serena
AU - Verda, Damiano
AU - Rizzo, Lucas
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - The creation of data-driven models for classification problems requires increasing transparency and inferential explainability, especially in high-stakes domains such as healthcare, finance, and policy making. Rule-based systems are widely regarded as strong candidates for the development of models that are also comprehensible to humans. However, the generated rules are often considered individually, with minimal or no consideration of their interactions. This research focuses on the adoption of computational argumentation techniques, which allow for rule interaction and thereby enhanced explainability. In other words, rules can be revoked when new information is introduced, essentially achieving the notion of non-monotonicity. In detail, an empirical study was designed to automatically extract inference rules from datasets of various multi-class classification tasks by using the Logic Learning Machine (LLM) approach. In turn, these rules were integrated within a structured argumentation framework, able to employ abstract argumentation semantics for conflict resolution among contradicting inferences. Findings demonstrated that the LLM technique can indeed extract compact rules with varying degrees of interpretability and predictive power. Furthermore, the argument-based models built on these rules demonstrated improved inferential and explanatory performance on certain datasets. Examples show how Cohen's kappa coefficient improved from 0.85 to 0.99 when applying the argumentation-based conflict resolution strategy to the same set of rules generated by the LLM. The contribution to the body of knowledge offered to the community is both a customisable approach for rule extraction from datasets for multi-class problems, via hyperparameter tuning, and a transparent integration strategy with computational argumentation, which is able to enhance human understanding and support justifiability.
KW - Argumentation semantics
KW - Computational argumentation
KW - Defeasible reasoning
KW - Explainability
KW - Explainable Artificial Intelligence
KW - Logic Learning Machine
KW - Non-monotonic reasoning
KW - Rule-based systems
UR - https://www.scopus.com/pages/publications/105018222779
U2 - 10.1109/ACCESS.2025.3618992
DO - 10.1109/ACCESS.2025.3618992
M3 - Article
AN - SCOPUS:105018222779
SN - 2169-3536
JO - IEEE Access
JF - IEEE Access
ER -