
KAIST XAI Tutorial Series (KAIST XAI 튜토리얼 시리즈)

2024.11.05 - 2024.11.22


Introduction

The KAIST XAI Research Center will host the XAI Tutorial Series 2024 from Nov. 5th to Nov. 22nd. The series aims to bring together AI researchers and industrial practitioners to learn explainable AI (XAI) from the fundamentals to current research topics. On DAY 7, we have organized three invited lectures on AI regulatory landscapes and the latest research in concept-based explanations.

Target audience:

1) Researchers, industrial practitioners, and anyone interested in XAI

2) Members of collaborating institutions who are interested

When: Nov. 5 (Tue) ~ Nov. 22 (Fri), every Tuesday and Thursday, plus Friday Nov. 22, 15:00-18:00

Where: Hybrid (Zoom & on-site at KAIST AI, Kins Tower 18F, 8 Seongnam-daero 331beon-gil, Seongnam-si)

Each Session: 50 min (including Q&A)

Language: Korean (Slides in English)


Program

DAY 1. Nov 05 (Tue)
1. Recent Trends in Explainable Artificial Intelligence

Jaesik Choi (Professor, KAIST AI)


As complex artificial intelligence (AI) systems such as deep neural networks are used for many mission-critical tasks in areas such as the military, finance, human resources, and autonomous driving, it is important to ensure the safe use of such complex AI systems. In this talk, we will give an overview of the principles and recent advances in explainable artificial intelligence.


Slides Recording

2. XAI Methods I: Local Explanation Methods

Myeongjin Lee (SAILab, KAIST AI)


TBD


Slides Recording

DAY 2. Nov 07 (Thu)
3. XAI Methods II: Global Explanation Methods

Youngju Joung (SAILab, KAIST AI)


TBD


Slides Recording

4. XAI Evaluation

Sol A Kim (SAILab, KAIST AI)


TBD


Slides Recording

DAY 3. Nov 12 (Tue)
5. Explaining Discriminative LLMs: Key Methods and Evaluations

Nari Kim (SAILab, KAIST AI)


TBD


Slides Recording

6. Explaining Generative LLMs: Prompting-based Explanations

Cheongwoong Kang (SAILab, KAIST AI)


This presentation explores approaches to explaining the outputs of generative large language models (LLMs) using prompting-based techniques. By analyzing how the model interacts with inputs, we aim to uncover the most relevant parts of the input that contribute to its decisions, provide readable explanations for its reasoning, and present structured interpretations to clarify how various components of the input lead to specific outputs.
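To make the idea concrete, the following is a minimal sketch of a prompting-based explanation: the model is first prompted for an answer and then prompted again to cite the parts of the input that support it. The query_llm helper is a hypothetical placeholder for whatever LLM API is available; it is not code from the talk.

    # Minimal sketch of a prompting-based explanation (Python).
    # query_llm is a hypothetical wrapper around an LLM API of your choice.
    def query_llm(prompt: str) -> str:
        raise NotImplementedError("wrap your LLM API here")

    def explain_by_prompting(document: str, question: str) -> dict:
        # First prompt: get the model's answer.
        answer = query_llm(
            f"Context:\n{document}\n\nQuestion: {question}\nAnswer briefly."
        )
        # Second prompt: ask the model to point at the supporting input spans
        # and to verbalize its reasoning in natural language.
        rationale = query_llm(
            f"Context:\n{document}\n\nQuestion: {question}\nAnswer: {answer}\n"
            "Quote the sentences from the context that support this answer, "
            "then explain the reasoning in one short paragraph."
        )
        return {"answer": answer, "rationale": rationale}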


Slides Recording

DAY 4. Nov 14 (Thu)
7. Generative Models and XAI

Junho Choi (SAILab, KAIST AI)


Generative models mimic a target distribution and are used to create new instances from that distribution, such as images, sentences, or sounds. In XAI, they serve either as the explainer or the explainee, acting as part of the process of explaining another model or as the target model to be explained. This presentation introduces examples of both cases in the image domain and gives a general overview of the underlying algorithms.
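As one hedged illustration of the "generative model as explainer" case, the sketch below searches a generator's latent space for a counterfactual that flips a classifier's prediction while staying close to the original latent code; generator and classifier are assumed to be pretrained placeholder modules, not models from the talk.

    import torch

    # Latent-space counterfactual search: a generative model acts as the explainer
    # for a separate classifier. generator and classifier are hypothetical
    # pretrained modules.
    def latent_counterfactual(generator, classifier, z0, target_class,
                              steps=200, lam=0.1):
        z = z0.clone().requires_grad_(True)
        opt = torch.optim.Adam([z], lr=1e-2)
        for _ in range(steps):
            img = generator(z)                      # candidate counterfactual image
            logits = classifier(img)
            loss = torch.nn.functional.cross_entropy(
                logits, torch.tensor([target_class])
            ) + lam * (z - z0).pow(2).sum()         # stay close to the original code
            opt.zero_grad()
            loss.backward()
            opt.step()
        return generator(z).detach()                # image showing what must change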


Slides Recording

8. Explaining Diffusion-based Generative Models

Dahee Kwon (SAILab, KAIST AI)


This talk introduces text-to-image generation models that have recently gained attention, focusing particularly on diffusion models. We will explore the various features learned by diffusion models and analyze the roles of their internal modules, aiming to understand how modifying these components can lead to better image generation. Additionally, we will examine how traditional explainable AI (XAI) techniques can be applied to diffusion models, offering insights into how these complex models can be better understood.


Slides Recording

DAY 5. Nov 19 (Tue)
9. Domain-Specific XAI Techniques for Time Series

Sehyun Lee (SAILab, KAIST AI)


This talk focuses on Explainable AI (XAI) techniques specifically designed for time series data. We’ll explore how attribution methods can highlight key features in neural networks and discuss how to interpret these insights to better understand the decision-making process of time series models. We’ll also introduce a technique for identifying prototypes of temporal patterns learned by these models and explain how to analyze the patterns that the model pays attention to at different points in time.
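As a small, hedged example of what an attribution method looks like in practice for time series, the sketch below runs Captum's Integrated Gradients on a toy, untrained 1-D convolutional classifier with random data; the specific methods and models covered in the talk may differ.

    import torch
    import torch.nn as nn
    from captum.attr import IntegratedGradients

    # Toy time-series classifier; untrained placeholder for illustration only.
    model = nn.Sequential(
        nn.Conv1d(1, 8, kernel_size=5, padding=2),
        nn.ReLU(),
        nn.AdaptiveAvgPool1d(1),
        nn.Flatten(),
        nn.Linear(8, 2),
    )
    model.eval()

    series = torch.randn(1, 1, 128)              # (batch, channels, time steps)
    baseline = torch.zeros_like(series)          # "no signal" reference input
    pred_class = model(series).argmax(dim=1).item()

    ig = IntegratedGradients(model)
    attributions = ig.attribute(series, baselines=baseline, target=pred_class)
    print(attributions.shape)                    # one relevance score per time step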


Slides Recording

10. Basics of Causality for XAI

Won Jo (SAILab, KAIST AI)


Causality goes beyond correlations between variables, enabling us to clearly identify the direction of cause and effect. This allows us to describe the reasons behind model decisions more precisely and to explain in more detail how specific inputs and features affect those decisions. This talk introduces the basic concepts of causality needed to improve explainability, and then summarizes recent works that use causality to explain model decisions, illustrating how causality is applied in XAI.


Slides Recording

DAY 6. Nov 21 (Thu)
11. XAI for Clinical Decision Support

Jihyeon Seong (SAILab, KAIST AI)


TBD


Slides Captum code Recording

12. Tutorials on XAI Frameworks

Chanwoo Lee (SAILab, KAIST AI)


TBD


Slides Recording

DAY 7. Nov 22 (Fri)
13. Overview and Latest Trends in AI Regulation

Kanghye Lee (Bae, Kim & Lee LLC, 법무법인(유) 태평양)


In 2023, generative AI technologies spread rapidly, having a significant impact on our society. With continued AI innovation anticipated in 2024, legal and policy discussions on the safe and trustworthy use of AI technologies are growing in importance. In this seminar, we will explore recent legislative trends related to AI and discuss strategies to enhance trust and transparency.

Reading List:


Recording

14. Understanding and Monitoring Model Behavior With Concept-based Explanations (English)

Maximilian Dreyer (TBD, Fraunhofer HHI)


Concept-based explanations offer deep insights into neural networks, but analyzing individual explanations across large datasets can be inefficient. In this talk, we address this by summarizing similar explanations with prototypes, providing a quick yet detailed overview of model behavior. This approach is promising for monitoring model strategies during learning and allows us to quickly spot model weaknesses. Prototypes also help to validate newly seen predictions: comparing them to the prototypes makes it easier to identify outliers or to assign predictions to known model strategies.
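A minimal sketch of the prototype idea, with randomly generated concept-relevance vectors standing in for real explanations (the talk's actual method may differ): similar explanations are grouped with k-means, the cluster centers act as prototypes, and the distance of a new explanation to its nearest prototype can flag outliers.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    concept_relevances = rng.random((500, 32))   # 500 samples x 32 concept scores (toy data)

    # Group similar explanations; cluster centers serve as prototypes.
    kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(concept_relevances)
    prototypes = kmeans.cluster_centers_

    # Validate a new prediction by comparing its explanation to the prototypes.
    new_explanation = rng.random((1, 32))
    dists = np.linalg.norm(prototypes - new_explanation, axis=1)
    nearest, distance = dists.argmin(), dists.min()

    # Flag as an outlier if it is farther from its prototype than 95% of training samples.
    train_dists = np.linalg.norm(concept_relevances - prototypes[kmeans.labels_], axis=1)
    is_outlier = distance > np.percentile(train_dists, 95)
    print(nearest, distance, is_outlier)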


Slides Recording

15. Concept-based Explanations for Large Language Models (English)

Reduan Achtibat (TBD, Fraunhofer HHI)


Large Language Models (LLMs) present a significant challenge for Explainable AI (XAI) due to their immense size and complexity. Their sheer scale not only makes them expensive to run and explain but also complicates our ability to fully understand how their components interact. In this talk, we introduce a highly efficient attribution method based on Layer-wise Relevance Propagation that allows us to trace the most important components in these models. Additionally, we can identify which concepts dominate in the residual stream and use this knowledge to influence the generation process. While this is a promising first step, there is still much work ahead to make LLMs more transparent and controllable.
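For readers unfamiliar with Layer-wise Relevance Propagation, the sketch below shows the basic epsilon rule for a single linear layer in NumPy; the efficient LLM-scale attribution method presented in the talk builds on rules of this kind but is considerably more involved, and the values here are toy placeholders.

    import numpy as np

    def lrp_epsilon_linear(a, W, b, relevance_out, eps=1e-6):
        # Redistribute relevance from the outputs of y = W @ a + b to its inputs.
        z = W @ a + b                               # pre-activations
        s = relevance_out / (z + eps * np.sign(z))  # stabilized per-output relevance
        return a * (W.T @ s)                        # per-input relevance

    rng = np.random.default_rng(0)
    a = rng.random(16)                              # layer input (e.g., token features)
    W, b = rng.standard_normal((4, 16)), np.zeros(4)
    relevance_out = np.zeros(4)
    relevance_out[2] = (W @ a + b)[2]               # explain output neuron 2
    relevance_in = lrp_epsilon_linear(a, W, b, relevance_out)
    print(relevance_in.sum(), relevance_out.sum())  # sums are approximately equal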


Slides Recording