Abstract

Explainability in decision support systems is essential for fostering trust and transparency in
algorithmic/AI-driven decision processes. By making complex models understandable, users can
see the rationale behind decisions, ensuring accountability and facilitating better-informed
choices. This is particularly crucial in high-stakes areas like healthcare, finance, and criminal
justice, where the consequences of decisions can significantly impact individuals and society.
Explainability also aids in identifying and mitigating biases within models, promoting fairness
and ethical AI use. Overall, enhancing explainability helps bridge the gap between advanced
technology and human users, ensuring that decision support systems are both effective and
reliable.

Complementing explainability, fairness in decision support systems ensures that algorithmic/AI-driven
decisions do not perpetuate or amplify biases, promoting equality and justice. It involves
developing and implementing algorithms that treat all individuals and groups equitably,
especially those from historically marginalized communities. Ensuring fairness is crucial in
sectors like hiring, lending, and law enforcement, where biased decisions can have severe social
and economic consequences. Addressing fairness involves continuous monitoring, bias
detection, and corrective measures, creating systems that not only perform well but also uphold
ethical standards and societal values, fostering trust and legitimacy in automated decision-
making processes.

Topics

The aim of this session is to promote exchange between researchers working on these
topics from the perspective of the intelligent data engineering framework. Core themes
and topics include, but are not limited to:

  • Innovative algorithms and techniques for boosting transparency and explainability in decision support systems.
  • New approaches for characterizing and improving fairness in decision support.
  • Human-in-the-loop approaches for explainability and fairness.
  • Explainability and fairness in recommender systems.
  • Real-world applications.

Research works focused on explainability and fairness under the umbrella of computational
intelligence and intelligent data analysis practices are also welcome.
The session is supported by the Spanish Thematic Network of Recommender Systems Research
(ELIGE-IA) (https://www.esi.uclm.es/elige-ia/).

Organizers

Luis Martínez. University of Jaén, Spain.
Bapi Dutta. University of Jaén, Spain.
Raciel Yera. University of Jaén, Spain.

Submission

See the submission instructions for the conference at https://ideal2024.webs.upv.es/submission/

Special Session Papers Submission Deadline: July 26, 2024