TY - GEN
T1 - A Tool for Fairness Assessment and Red-Lining Detection in AI Systems
AU - Bikoulis, Dimitrios
AU - Kyziropoulos, Panagiotis E.
AU - Vyhmeister, Eduardo
AU - Castañé, Gabriel G.
AU - Visentin, Andrea
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - Fairness in AI is essential to data analysis, ensuring ethical decision-making. Various fairness metrics are used depending on the specific use case. In this work, we propose a fairness assessment tool that provides a comprehensive analysis, incorporating state-of-the-art fairness metrics and association analysis to detect and account for the red-lining effect. Features that exhibit strong associations with sensitive attributes can contribute to indirect discrimination; we therefore identify hidden biases through association analysis, detecting proxy variables that may perpetuate discrimination and yield a more discriminatory machine learning model. We introduce a generalized fairness analysis framework capable of addressing complex scenarios, supported by a query mechanism designed to capture extensive contextual information for a more representative evaluation. Notably, fairness concerns in the business sector often involve complex, context-dependent scenarios; our tool enables users to formulate such problems effectively and retrieve important insights. Finally, we present the workflow of the tool along with indicative results on complex fairness problems.
KW - bias detection
KW - fairness
KW - machine learning
KW - tool
UR - https://www.scopus.com/pages/publications/105010821358
DO - 10.1109/SMARTCOMP65954.2025.00099
M3 - Conference proceeding
AN - SCOPUS:105010821358
T3 - Proceedings - 2025 IEEE International Conference on Smart Computing, SMARTCOMP 2025
SP - 522
EP - 527
BT - Proceedings - 2025 IEEE International Conference on Smart Computing, SMARTCOMP 2025
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 11th IEEE International Conference on Smart Computing, SMARTCOMP 2025
Y2 - 16 June 2025 through 19 June 2025
ER -