KAIST XAI Tutorial Series (KAIST XAI 튜토리얼 시리즈)


2023.01.26 - 2023.02.16

XAI Tutorial Playlist


Introduction

The XAI Center at KAIST is hosting an Explainable Artificial Intelligence (XAI) tutorial series this winter. The series will cover fundamental topics in XAI, such as XAI methods, evaluation of XAI, and tools. We have also invited two distinguished speakers who will review recent topics at the intersection of XAI with NLP and human factors engineering.
  1. Target audience:
    • Newcomers to the SAIL lab and interested PhD and MSc students
    • Interested members from collaborating institutions
  3. When: Jan. 26 (Thu) ~ Feb. 16 (Thu), every Tuesday & Thursday, 16:00-18:00
  3. Where: Hybrid, KAIST XAI Research Center and Zoom
  4. Each Session: 60min (including 5~10min for Q&A)
  5. Language: Korean/English (Slides in English)

Program

DAY 1. Jan 26 (Thu)

1. Recent Trends in Explainable Artificial Intelligence

Jaesik Choi (Professor, KAIST AI)

As complex artificial intelligence (AI) systems such as deep neural networks are used for many mission-critical tasks in areas such as the military, finance, human resources, and autonomous driving, it is important to secure the safe use of such complex AI systems. In this talk, we will overview the principles of and recent advances in explainable artificial intelligence.

2. Symbolic AI methods

Seongwoo Lim (SAILab, KAIST AI)

Symbolic representations are interpretable to both humans and computers. In this session, we will introduce symbolic AI models such as decision trees and (first-order) logic-based models. We will also introduce neuro-symbolic models, which combine neural networks with symbolic models.
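As a quick illustration of the idea (not an official tutorial notebook), here is a minimal sketch of a symbolic model whose decisions can be read directly as rules; the scikit-learn dataset and tree depth are illustrative choices.

```python
# Minimal sketch: a decision tree as an interpretable, rule-based (symbolic) model.
# The iris dataset is used purely as an illustrative example.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(iris.data, iris.target)

# export_text prints the learned splits as nested if/else rules,
# which both humans and programs can read directly.
print(export_text(tree, feature_names=list(iris.feature_names)))
```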
DAY 2. Jan 31 (Tue)

3. Global model-agnostic methods

Wonjoon Chang (SAILab, KAIST AI)

Global interpretation methods describe the average behavior of a model over the data. They are particularly useful when a user wants to understand the general mechanisms learned from the data or to debug the model.
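For illustration only, the sketch below computes a partial dependence curve, one example of a global model-agnostic method, by hand; the random forest, synthetic data, and feature index are placeholder assumptions.

```python
# Minimal sketch of a global model-agnostic method: a partial dependence curve,
# computed by hand for one feature of a fitted model.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=500, n_features=5, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

feature = 0
grid = np.linspace(X[:, feature].min(), X[:, feature].max(), 20)

pdp = []
for value in grid:
    X_mod = X.copy()
    X_mod[:, feature] = value                # fix the feature of interest
    pdp.append(model.predict(X_mod).mean())  # average prediction over the data

# pdp describes the model's average behaviour as this feature varies.
print(list(zip(grid.round(2), np.round(pdp, 2))))
```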

4. Local model-agnostic methods

Jiyeon Han (SAILab, KAIST AI)

In this session, we discuss model-agnostic methods for explaining individual predictions. Specifically, we focus on two approaches: LIME and Shapley values.
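As a rough illustration of the Shapley value idea (not the exact algorithm covered in the session), the sketch below estimates the contribution of one feature by Monte Carlo sampling of feature orderings; the model, data, and feature index are placeholders.

```python
# Minimal sketch: Monte Carlo estimate of the Shapley value of one feature for
# one prediction. "Absent" features are replaced with values from a random
# background instance, a common simplification.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

x = X[0]            # instance to explain
j = 2               # feature of interest (illustrative choice)
rng = np.random.default_rng(0)

contributions = []
for _ in range(200):
    z = X[rng.integers(len(X))]          # random background instance
    order = rng.permutation(X.shape[1])  # random feature ordering
    pos = np.where(order == j)[0][0]

    # Features up to and including j in the ordering come from x, the rest from z.
    with_j = np.where(np.isin(np.arange(X.shape[1]), order[:pos + 1]), x, z)
    without_j = np.where(np.isin(np.arange(X.shape[1]), order[:pos]), x, z)

    contributions.append(
        model.predict_proba(with_j.reshape(1, -1))[0, 1]
        - model.predict_proba(without_j.reshape(1, -1))[0, 1]
    )

print(f"Estimated Shapley value of feature {j}: {np.mean(contributions):.3f}")
```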
DAY 3. Feb 02 (Thu)

5. CAM-based methods

Jungmin Kim (SAILab, KAIST AI)

Class Activation Mapping (CAM) is motivated by a CNN's ability to localize the discriminative regions used for classification. Activation-based XAI methods (Grad-CAM, Grad-CAM++, etc.) have been developed to overcome the limitations of earlier activation-based methods. In this session, we explore the motivation behind CAM-based methods and how the later methods were developed from it.
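For illustration, a minimal Grad-CAM sketch is shown below; it assumes a torchvision ResNet-18 and uses a random tensor in place of a real image, so it only demonstrates the mechanics.

```python
# Minimal Grad-CAM sketch for a torchvision ResNet-18 (a random tensor stands
# in for an image; in practice you would load pretrained weights and a picture).
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
x = torch.randn(1, 3, 224, 224)

activations = {}
def save_activation(module, inp, out):
    activations["value"] = out

# Hook the last convolutional block to capture its feature maps.
model.layer4.register_forward_hook(save_activation)

scores = model(x)
target = scores.argmax(dim=1).item()
score = scores[0, target]

# Gradient of the class score w.r.t. the captured feature maps.
grads = torch.autograd.grad(score, activations["value"])[0]

weights = grads.mean(dim=(2, 3), keepdim=True)   # global-average-pooled gradients
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))  # weighted sum + ReLU
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear")
print(cam.shape)  # (1, 1, 224, 224) saliency map over the input
```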

6. Input attribution methods

Seongun Kim (SAILab, KAIST AI)

Input attribution methods explain a model by quantifying importance scores of input features for the model's prediction. In this session, we mainly deal with three input attribution methods: LRP, RAP, and IG.
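As a small illustration of one of these methods, the sketch below computes Integrated Gradients (IG) by hand for a tiny placeholder model with an all-zero baseline; everything in it is an assumption for demonstration purposes.

```python
# Minimal Integrated Gradients sketch: average the input gradients along a
# straight path from a baseline to the input, then scale by (input - baseline).
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(10, 32),
    torch.nn.ReLU(),
    torch.nn.Linear(32, 2),
)
x = torch.randn(1, 10)
baseline = torch.zeros_like(x)   # all-zero baseline (a common, simple choice)
target, steps = 1, 50

grads = []
for alpha in torch.linspace(0.0, 1.0, steps):
    # Point on the straight path from baseline to input.
    point = (baseline + alpha * (x - baseline)).detach().requires_grad_(True)
    score = model(point)[0, target]
    grads.append(torch.autograd.grad(score, point)[0])

avg_grad = torch.stack(grads).mean(dim=0)
ig_attribution = (x - baseline) * avg_grad   # one importance score per input feature
print(ig_attribution)
```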
DAY 4. Feb 07 (Tue)

7. Perturbation-based explanations

Sehyun Lee (SAILab, KAIST AI)

The talk will focus on two important perturbation-based methods in Explainable AI (XAI): RISE and SISE. RISE (Randomized Input Sampling for Explanation) generates saliency maps that highlight the areas of an input with the most impact on the model's prediction. While similar to RISE, SISE (Semantic Input Sampling for Explanation) uses a guided sampling approach to generate explanations. Both RISE and SISE are discussed in detail, with examples illustrating how they can be used to interpret the predictions made by AI models.
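To make the RISE idea concrete, here is a minimal sketch of randomized input sampling; the untrained model, random input, mask count, and grid size are all placeholder assumptions.

```python
# Minimal RISE-style sketch: mask the input with random coarse binary masks,
# weight each mask by the model's score on the masked input, and average.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
x = torch.randn(1, 3, 224, 224)
target = 0
n_masks, grid = 100, 7

saliency = torch.zeros(1, 1, 224, 224)
with torch.no_grad():
    for _ in range(n_masks):
        # Coarse binary mask, upsampled to input resolution (as in RISE).
        mask = (torch.rand(1, 1, grid, grid) < 0.5).float()
        mask = F.interpolate(mask, size=(224, 224), mode="bilinear")
        score = torch.softmax(model(x * mask), dim=1)[0, target]
        saliency += score * mask

saliency /= n_masks
print(saliency.shape)  # per-pixel importance for the target class
```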

8. Concept-based explanations

Dahee Kwon (SAILab, KAIST AI)

In this session, we will discuss concept-based explanations, which provide human-interpretable explanations. We can measure the importance of each concept (e.g., how much influence the stripe pattern has on a zebra prediction), and we can also visualize the important concepts the model has learned.
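As a simplified illustration in the spirit of concept activation vectors (TCAV), the sketch below uses synthetic activations and gradients in place of a real network; it only shows the shape of the computation.

```python
# Simplified concept-activation-vector sketch: train a linear classifier to
# separate activations of "concept" examples (e.g. stripes) from random
# examples; its normal vector is the concept direction. Synthetic activations
# stand in for real hidden-layer activations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
concept_acts = rng.normal(loc=1.0, size=(100, 64))   # activations of concept images
random_acts = rng.normal(loc=0.0, size=(100, 64))    # activations of random images

clf = LogisticRegression(max_iter=1000).fit(
    np.vstack([concept_acts, random_acts]),
    np.array([1] * 100 + [0] * 100),
)
cav = clf.coef_[0] / np.linalg.norm(clf.coef_[0])    # concept activation vector

# Concept importance: fraction of inputs whose class score increases when the
# activation moves along the CAV (made-up gradients here, for brevity).
grads = rng.normal(size=(50, 64))                    # d(class score)/d(activation)
tcav_score = np.mean(grads @ cav > 0)
print(f"TCAV-style score: {tcav_score:.2f}")
```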
DAY 5. Feb 09 (Thu)

9. Attention as Explanation

Cheongwoong Kang (SAILab, KAIST AI)

This session includes two sub-topics: (1) how attention is used as local/global explanations and (2) quantifying attention flow in transformers, where information from different tokens gets increasingly mixed.
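For sub-topic (2), a minimal attention rollout sketch is given below; random matrices stand in for real head-averaged attention weights.

```python
# Minimal attention rollout sketch: to account for residual connections, mix
# each attention matrix with the identity, then multiply the layers together to
# estimate how much information flows from each input token to each position.
import numpy as np

rng = np.random.default_rng(0)
n_layers, n_tokens = 6, 8

rollout = np.eye(n_tokens)
for _ in range(n_layers):
    attn = rng.random((n_tokens, n_tokens))
    attn /= attn.sum(axis=-1, keepdims=True)        # rows sum to 1
    attn = 0.5 * attn + 0.5 * np.eye(n_tokens)      # add the residual connection
    attn /= attn.sum(axis=-1, keepdims=True)
    rollout = attn @ rollout                        # accumulate across layers

# rollout[i, j]: estimated attention flow from input token j to position i.
print(rollout.round(2))
```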

10. Evaluation of XAI

Junho Choi (SAILab, KAIST AI)

The definition of a "good explanation" can vary by audience and problem: it is subjective and diverse. Nevertheless, there are some qualities one can expect from a "good explanation". This presentation will go over Quantus, an open-source package that catalogs these qualities and the methods for their quantitative evaluation.
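Rather than previewing the Quantus API itself, here is a minimal sketch of one quality such packages measure, faithfulness via a deletion test; the model, data, and attribution scores are placeholders.

```python
# Minimal sketch of one "good explanation" quality: faithfulness, measured by a
# deletion test. Removing the features an explanation ranks as most important
# should reduce the model's confidence quickly.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

x = X[0].copy()
attribution = np.abs(x)                 # placeholder attribution scores
order = np.argsort(-attribution)        # most important features first

scores = [model.predict_proba(x.reshape(1, -1))[0, y[0]]]
for j in order:
    x[j] = X[:, j].mean()               # "delete" the feature (mean imputation)
    scores.append(model.predict_proba(x.reshape(1, -1))[0, y[0]])

# A faithful explanation yields a steeply decreasing curve.
print(np.round(scores, 3))
```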
DAY 6. Feb 14 (Tue)

11. Application of XAI Toolkit for PyTorch

Soyeon Kim (SAILab, KAIST AI)

In this tutorial, we use the Captum library to explain pre-trained models with various input attribution methods studied in the previous sessions: Saliency, Grad-CAM, LRP, IG, perturbation-based methods, and Shapley values.
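As a small preview (not the tutorial notebook itself), the sketch below applies two Captum attribution methods; the untrained ResNet and random tensor stand in for the pre-trained model and real images used in the session.

```python
# Small Captum sketch: apply two attribution methods from earlier sessions to a
# torchvision model. The untrained ResNet and random tensor are stand-ins.
import torch
from torchvision.models import resnet18
from captum.attr import IntegratedGradients, Saliency

model = resnet18(weights=None).eval()
x = torch.randn(1, 3, 224, 224, requires_grad=True)
target = 0

saliency_map = Saliency(model).attribute(x, target=target)
ig_map = IntegratedGradients(model).attribute(x, target=target, n_steps=32)

print(saliency_map.shape, ig_map.shape)  # both (1, 3, 224, 224) attribution maps
```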

12. Explanation tools of Google What-If & LIT

Seongyeop Jeong (SAILab, KAIST AI)

We will talk about Google's explanation tools, which provide automatic explanation algorithms such as IG and Shapley values, visualizations of features, and a helpful interface. With their intuitive interfaces showing which information is important and whether the model's results are trustworthy, complex models can be analyzed easily.
DAY 7. Feb 16 (Thu)

13. Tutorial on Chain of Thoughts (English)

Minjoon Seo (Professor, KAIST AI)


Reading List:

14. Human factors engineering for Explainable AI (English)

Woojin Park (Professor, Seoul National University)
