Computational argumentation and automatic rule-generation for explainable data-driven modeling

  • Luca Longo
  • Serena Berretta
  • Damiano Verda
  • Lucas Rizzo

Research output: Contribution to journal › Article › peer-review

Abstract

The creation of data-driven models for classification problems increasingly requires transparency and inferential explainability, especially in high-stakes domains such as health care, finance, and policy making. Rule-based systems are widely regarded as strong candidates for developing models that are also comprehensible to humans. However, the generated rules are often considered individually, with minimal or no consideration of their interactions. This research focuses on the adoption of computational argumentation techniques, which allow for rule interaction and thereby enhanced explainability. In other words, rules can be revoked when new information is introduced, achieving the notion of non-monotonicity. In detail, an empirical study was designed to automatically extract inference rules from datasets of various multi-class classification tasks using the Logic Learning Machine (LLM) approach. In turn, these rules were integrated within a structured argumentation framework, able to employ abstract argumentation semantics for conflict resolution among contradicting inferences. Findings demonstrated that the LLM technique can indeed extract compact rules with varying degrees of interpretability and predictive power. Furthermore, the argument-based models built on these rules demonstrated improved inferential and explanatory performance on certain datasets. For example, Cohen's kappa coefficient improved from 0.85 to 0.99 when the argumentation-based conflict-resolution strategy was applied to the same set of rules generated by the LLM. The contribution to the body of knowledge is twofold: a customisable approach for rule extraction from datasets of multi-class problems, via hyperparameter tuning, and a transparent integration strategy with computational argumentation, able to enhance human understanding and support justifiability.
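The conflict-resolution step described in the abstract can be illustrated with a minimal sketch of an abstract argumentation framework evaluated under grounded semantics. This is not the authors' implementation: the rule names (`r1`, `r2`) and the attack relation below are hypothetical, and the grounded extension is computed as the least fixed point of Dung's characteristic function.

```python
def grounded_extension(arguments, attacks):
    """Compute the grounded extension of an abstract argumentation
    framework (arguments, attacks) as the least fixed point of the
    characteristic function F(S) = {a | every attacker of a is attacked by S}.
    `attacks` is a set of (attacker, target) pairs."""
    attackers = {a: {b for (b, t) in attacks if t == a} for a in arguments}
    S = set()
    while True:
        # An argument is acceptable w.r.t. S if S counter-attacks
        # each of its attackers (unattacked arguments are always in).
        F = {a for a in arguments
             if all(any((s, b) in attacks for s in S) for b in attackers[a])}
        if F == S:
            return S
        S = F

# Hypothetical example: two contradicting inference rules, where the
# more specific rule r2 attacks (and thus revokes) r1.
args = {"r1", "r2"}
atts = {("r2", "r1")}
print(grounded_extension(args, atts))  # {'r2'}: r1 is revoked
```

Reinstatement also falls out of the same fixed-point computation: if a third rule `r3` attacks `r2`, then `r1` is defended and re-enters the extension, which is the non-monotonic behaviour the abstract refers to.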

Original language: English
Journal: IEEE Access
DOIs
Publication status: Accepted/In press - 2025
Externally published: Yes

Keywords

  • Argumentation semantics
  • Computational argumentation
  • Defeasible reasoning
  • Explainability
  • Explainable Artificial Intelligence
  • Logic Learning Machine
  • Non-monotonic reasoning
  • Rule-based systems

