A Tool for Fairness Assessment and Red-Lining Detection in AI Systems

Research output: Chapter in Book/Report/Conference proceedings › Conference proceeding › peer-review

Abstract

Fairness in AI is essential to ethical, data-driven decision-making, and different fairness metrics apply depending on the use case. In this work, we propose a fairness assessment tool that provides a comprehensive analysis, combining state-of-the-art fairness metrics with association analysis to detect the red-lining effect. Features that exhibit strong associations with sensitive attributes can contribute to indirect discrimination. We therefore identify hidden biases through association analysis, detecting proxy variables that may perpetuate discrimination and lead to more discriminatory machine learning models. We introduce a generalized fairness analysis framework capable of addressing complex scenarios, supported by a query mechanism designed to capture extensive contextual information for a more representative evaluation. Notably, fairness concerns in the business sector often involve complex, context-dependent scenarios; our tool enables users to formulate such problems effectively and retrieve important insights. We present the flow of the tool along with indicative results on complex fairness problems.
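The two ideas the abstract combines — a group-fairness metric and an association score for flagging proxy features — can be sketched roughly as below. This is an illustrative approximation, not the paper's actual tool: the function names, the choice of demographic parity difference as the metric, Cramér's V as the association measure, and the toy data are all assumptions for exposition.

```python
# Hypothetical sketch: (1) a group-fairness metric (demographic parity
# difference) and (2) an association score (Cramer's V) that flags features
# strongly tied to a sensitive attribute, i.e. potential red-lining proxies.
from math import sqrt
from collections import Counter

def demographic_parity_diff(y_pred, sensitive):
    """P(y_hat=1 | A=0) - P(y_hat=1 | A=1); 0 means parity between groups."""
    rates = {}
    for a in set(sensitive):
        group = [y for y, s in zip(y_pred, sensitive) if s == a]
        rates[a] = sum(group) / len(group)
    return rates[0] - rates[1]

def cramers_v(x, y):
    """Chi-square-based association between two categorical variables, in [0, 1]."""
    n = len(x)
    xs, ys = sorted(set(x)), sorted(set(y))
    obs = Counter(zip(x, y))
    chi2 = 0.0
    for xv in xs:
        for yv in ys:
            expected = sum(1 for v in x if v == xv) * sum(1 for v in y if v == yv) / n
            chi2 += (obs[(xv, yv)] - expected) ** 2 / expected
    return sqrt(chi2 / (n * (min(len(xs), len(ys)) - 1)))

# Toy example: zip_code perfectly tracks the sensitive attribute,
# so it acts as a red-lining proxy even if the attribute itself is dropped.
sensitive = [0, 0, 0, 0, 1, 1, 1, 1]
zip_code  = [0, 0, 0, 0, 1, 1, 1, 1]
y_pred    = [1, 1, 1, 0, 0, 0, 1, 0]

print(demographic_parity_diff(y_pred, sensitive))  # 0.5 -> strong disparity
print(cramers_v(zip_code, sensitive))              # 1.0 -> strong proxy candidate
```

A feature with a high association score would be reported alongside the fairness metrics, since using it can reproduce the discrimination that removing the sensitive attribute was meant to prevent.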

Original language: English
Title of host publication: Proceedings - 2025 IEEE International Conference on Smart Computing, SMARTCOMP 2025
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 522-527
Number of pages: 6
ISBN (Electronic): 9798331586461
DOIs
Publication status: Published - 2025
Event: 11th IEEE International Conference on Smart Computing, SMARTCOMP 2025 - Cork, Ireland
Duration: 16 Jun 2025 - 19 Jun 2025

Publication series

Name: Proceedings - 2025 IEEE International Conference on Smart Computing, SMARTCOMP 2025

Conference

Conference: 11th IEEE International Conference on Smart Computing, SMARTCOMP 2025
Country/Territory: Ireland
City: Cork
Period: 16/06/25 - 19/06/25

Keywords

  • bias detection
  • fairness
  • machine learning
  • tool
