Abstract
In the rapidly evolving field of artificial intelligence, transparency and interpretability are crucial. As AI systems become integral to critical decision-making across many domains, ensuring that these systems are not only accurate but also understandable is paramount.
Unlike traditional approaches that rely solely on abstract metrics or feature importance scores, example-based methods provide concrete, relatable instances that illustrate how AI models make decisions. By anchoring explanations in real-world examples, this approach enhances the interpretability of AI systems, making them more accessible to a broader audience, including non-experts.
In this session, we will focus on research and practical applications of example-based XAI. The talks will cover a range of topics, including novel algorithms, case studies, and the integration of example-based explanations into various AI systems. We aim to foster a deeper understanding of how these methods can be effectively implemented to improve transparency, trust, and user engagement in AI technologies.
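To make the idea concrete, here is a minimal, hypothetical sketch (not part of the call itself) of one of the simplest example-based explanations: justifying a model's prediction by retrieving the most similar training instance, so the user sees a relatable precedent rather than an abstract score. All names and the toy loan data are illustrative assumptions.

```python
# Hypothetical illustration: an example-based explanation that justifies a
# prediction by retrieving the nearest training instance (a concrete precedent).
import math

def nearest_example(query, training_set):
    """Return the (features, label) pair in training_set closest to `query`
    by Euclidean distance; the retrieved case serves as the explanation."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(training_set, key=lambda ex: dist(ex[0], query))

# Toy loan-approval data: (features = [income, debt_ratio], decision)
training = [
    ([30.0, 0.6], "denied"),
    ([80.0, 0.2], "approved"),
    ([55.0, 0.4], "approved"),
]

features, label = nearest_example([78.0, 0.25], training)
print(features, label)  # the precedent shown to the user
```

The explanation here is the retrieved case itself: "your application resembles this previously approved one", which is typically easier for non-experts to interpret than a feature-importance vector.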
Topics
- Machine Teaching
- Learning from Few Examples
- Prototype Selection Techniques
- Counterfactual Example Generation
- Interactive Learning
- Frameworks and Tools
- Scalability and Performance Considerations
- User Studies and Feedback
- Comparative Analysis with Other Methods
- Handling High-Dimensional Data
- Ensuring Explanation Diversity
- Ethical and Privacy Concerns
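As an illustration of one topic above, counterfactual example generation asks: what is the smallest change to an input that flips the model's decision? A minimal sketch, assuming a toy linear classifier and a single mutable feature (all names and thresholds here are hypothetical, not drawn from the call):

```python
# Hypothetical sketch of counterfactual example generation: find the smallest
# single-feature change that flips a toy model's decision.
def model(x):
    # Toy classifier: approve when income - 50 * debt_ratio exceeds 40.
    income, debt_ratio = x
    return "approved" if income - 50 * debt_ratio > 40 else "denied"

def counterfactual(x, step=1.0, max_steps=100):
    """Increase income in small increments until the decision flips;
    return the first (closest) flipping candidate, or None."""
    original = model(x)
    income, debt_ratio = x
    for i in range(1, max_steps + 1):
        candidate = [income + i * step, debt_ratio]
        if model(candidate) != original:
            return candidate
    return None

print(counterfactual([55.0, 0.4]))  # -> [61.0, 0.4]
```

Real counterfactual methods search over many features under proximity, sparsity, and plausibility constraints; this greedy one-feature scan only conveys the core idea of a minimal, actionable change.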
Organizers
Cesar Ferri, VRAIN and UPV
Jan Arne Telle, University of Bergen, Norway
Submission
See submission instructions for the conference at https://ideal2024.webs.upv.es/submission/
Special Session Papers Submission Deadline: July 26, 2024