Introduction
The XAI Center at KAIST is hosting an Explainable Artificial Intelligence (XAI) tutorial series this winter. The series will cover fundamental XAI topics, including explanation methods, evaluation of XAI, and tools. We have also invited two distinguished speakers who will review recent topics at the intersection of XAI with NLP and with human factors engineering.
Target audience:
1) Newcomers to the SAIL lab and interested PhD and MSc students
2) Interested members from collaborating institutions
When: Jan. 26 (Thu) ~ Feb. 16 (Thu), every Tuesday & Thursday, 16:00-18:00
Where: Hybrid, KAIST XAI Research Center and Zoom
Each Session: 60 min (including 5~10 min for Q&A)
Language: Korean/English (Slides in English)
Program
1. Recent Trends in Explainable Artificial Intelligence
Jaesik Choi (Professor, KAIST AI)
As complex artificial intelligence (AI) systems such as deep neural networks are used for many mission-critical tasks in areas such as the military, finance, human resources, and autonomous driving, it is important to ensure the safe use of such complex AI systems. In this talk, we will overview the principles and recent advances in explainable artificial intelligence.
2. Symbolic AI methods
Seongwoo Lim (SAILab, KAIST AI)
Symbolic representations are interpretable to both humans and computers. In this session, we will introduce symbolic AI models such as decision trees and (first-order) logic-based models. We will also introduce neuro-symbolic models, which combine neural networks with symbolic models.
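As a concrete illustration of an interpretable symbolic model, below is a minimal sketch that fits a shallow decision tree and prints it as human-readable rules; the Iris dataset and depth of 3 are illustrative assumptions, not material from the session.

```python
# Minimal sketch: a shallow decision tree rendered as human-readable rules.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

# export_text turns the fitted tree into an if/then rule list a human can read.
print(export_text(tree, feature_names=list(data.feature_names)))
```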
3. Global model-agnostic methods
Wonjoon Chang (SAILab, KAIST AI)
Global interpretation methods describe the average behavior of a model over the data. They are particularly useful when a user wants to understand the general mechanisms the model has learned or to debug the model.
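As one hedged example of a global model-agnostic method, the sketch below computes permutation feature importance with scikit-learn: each feature is shuffled in turn and the average drop in the model's score is taken as its global importance. The dataset and model are placeholder choices.

```python
# Sketch of permutation feature importance, a global model-agnostic method.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Average accuracy drop over 10 shuffles of each feature = its global importance.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"feature {i}: {result.importances_mean[i]:.3f}")
```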
5. CAM-based methods
Jungmin Kim (SAILab, KAIST AI)
Class Activation Mapping (CAM) is motivated by a CNN's ability to localize the discriminative regions used for classification. Activation-based XAI methods (Grad-CAM, Grad-CAM++, etc.) have been developed to address the limitations of earlier activation-based methods. In this session, we explore the motivation behind CAM-based methods and how the later methods evolved from it.
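To make the idea concrete, here is a minimal Grad-CAM sketch in PyTorch; it is an illustrative reimplementation, not the session's code, and the pretrained ResNet-18 and random input tensor are placeholders. Pooled gradients weight the last convolutional block's activation maps, which are summed into a class-specific heatmap.

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1").eval()
acts, grads = {}, {}

def fwd_hook(module, inputs, output): acts["v"] = output.detach()
def bwd_hook(module, grad_in, grad_out): grads["v"] = grad_out[0].detach()

layer = model.layer4[-1]                      # last convolutional block
layer.register_forward_hook(fwd_hook)
layer.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)               # stand-in for a preprocessed image
score = model(x)[0].max()                     # score of the predicted class
score.backward()

weights = grads["v"].mean(dim=(2, 3), keepdim=True)    # global-average-pooled gradients
cam = F.relu((weights * acts["v"]).sum(dim=1))          # weighted sum of activation maps
cam = F.interpolate(cam[None], size=x.shape[-2:], mode="bilinear")[0, 0]
print(cam.shape)                              # 224 x 224 heatmap over the input
```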
7. Perturbation-based explanations
Sehyun Lee (SAILab, KAIST AI)
The talk will focus on two important perturbation-based methods in Explainable AI (XAI): RISE and SISE. RISE (Randomized Input Sampling for Explanation) generates saliency maps that highlight the areas of an input with the most impact on the model's prediction. While similar to RISE, SISE (Semantic Input Sampling for Explanation) uses a guided sampling approach to generate explanations. Both RISE and SISE are discussed in detail, with examples illustrating how they can be used to interpret the predictions made by AI models.
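Below is a rough sketch of RISE's core sampling idea (an illustrative simplification, not the authors' code): sample coarse random binary masks, upsample them, score each masked input with the model, and average the masks weighted by the target-class score. The toy model and normalization are assumptions.

```python
import torch
import torch.nn.functional as F

def rise_saliency(model, x, target, n_masks=500, grid=7, p=0.5):
    _, _, H, W = x.shape
    # Coarse random binary masks, upsampled to the input resolution.
    masks = (torch.rand(n_masks, 1, grid, grid) < p).float()
    masks = F.interpolate(masks, size=(H, W), mode="bilinear")
    with torch.no_grad():
        scores = torch.stack([
            torch.softmax(model(x * m), dim=1)[0, target] for m in masks
        ])
    # Saliency = score-weighted average of the masks (the paper normalizes by the
    # expected mask coverage; dividing by the score sum is a simplification here).
    sal = (scores.view(-1, 1, 1, 1) * masks).sum(0) / (scores.sum() + 1e-8)
    return sal[0]                              # H x W saliency map

# Toy usage with a tiny random CNN so the sketch runs end to end.
toy_model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3, padding=1),
                                torch.nn.AdaptiveAvgPool2d(1),
                                torch.nn.Flatten(), torch.nn.Linear(8, 10))
print(rise_saliency(toy_model, torch.randn(1, 3, 64, 64), target=3, n_masks=100).shape)
```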
8. Concept-based explanations
Dahee Kwon (SAILab, KAIST AI)
In this session, we will discuss concept-based explanations, which provide human-interpretable explanations. We can measure the importance of each concept (e.g., how much influence the stripe pattern has on a zebra prediction), and we can also visualize the important concepts learned by the model.
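As a hedged sketch of the mechanics behind concept importance (in the spirit of TCAV; the helper function, data shapes, and toy activations are assumptions), a concept activation vector can be obtained by fitting a linear classifier that separates hidden-layer activations of concept examples from those of random examples.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def concept_activation_vector(concept_acts, random_acts):
    """concept_acts, random_acts: (n, d) arrays of flattened hidden-layer activations."""
    X = np.concatenate([concept_acts, random_acts])
    y = np.concatenate([np.ones(len(concept_acts)), np.zeros(len(random_acts))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    cav = clf.coef_[0]                         # normal of the separating hyperplane
    return cav / np.linalg.norm(cav)

# Toy stand-in activations; in practice these come from a chosen layer of the model.
rng = np.random.default_rng(0)
cav = concept_activation_vector(rng.normal(size=(50, 128)), rng.normal(size=(50, 128)))
# A concept's influence on one prediction is then the dot product between the
# gradient of the class logit w.r.t. that layer's activation and this vector.
print(cav.shape)
```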
9. Attention as Explanation
Cheongwoong Kang (SAILab, KAIST AI)
This session includes two sub-topics: (1) how attention is used as local/global explanations and (2) quantifying attention flow in transformers, where information from different tokens gets increasingly mixed.
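For the second sub-topic, one common way to quantify that mixing is attention rollout (Abnar & Zuidema, 2020). The sketch below is an illustrative implementation applied to toy attention maps, not code from the session.

```python
import torch

def attention_rollout(attentions):
    """attentions: list of (num_heads, seq_len, seq_len) attention maps, one per layer."""
    rollout = None
    for attn in attentions:
        a = attn.mean(dim=0)                   # average over heads
        a = a + torch.eye(a.size(-1))          # account for the residual connection
        a = a / a.sum(dim=-1, keepdim=True)    # renormalize rows
        rollout = a if rollout is None else a @ rollout
    return rollout                             # (seq_len, seq_len) token-to-token flow

# Toy usage: random attention maps for a 2-layer, 4-head, 6-token model.
toy = [torch.softmax(torch.randn(4, 6, 6), dim=-1) for _ in range(2)]
print(attention_rollout(toy).shape)
```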
10. Evaluation of XAI
Junho Choi (SAILab, KAIST AI)
The definition of a "good explanation" can vary by audience and problem: it is subjective and diverse. Nevertheless, there are some qualities one could expect from a "good explanation". This presentation will go over Quantus, an open-source package that lists these qualities and the methods for their quantitative evaluation.
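A hedged sketch of how a Quantus metric is typically invoked is shown below, following the usage pattern in the package's documentation; the toy model, random data, and exact keyword arguments are assumptions and may differ across Quantus versions.

```python
import numpy as np
import torch
import quantus

# Tiny stand-in model and data so the snippet runs end to end; in practice the
# model and attributions come from the methods covered in earlier sessions.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
x_batch = np.random.rand(8, 1, 28, 28).astype(np.float32)
y_batch = np.random.randint(0, 10, size=8)

# Robustness metric: how much do attributions change under small input perturbations?
metric = quantus.MaxSensitivity(nr_samples=5)
scores = metric(
    model=model,
    x_batch=x_batch,
    y_batch=y_batch,
    a_batch=None,                              # let Quantus compute attributions itself
    explain_func=quantus.explain,              # re-explains the perturbed inputs
    explain_func_kwargs={"method": "Saliency"},
    device="cpu",
)
print(scores)
```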
11. Application of XAI Toolkit for PyTorch
Soyeon Kim (SAILab, KAIST AI)
In this tutorial, we use the Captum library to explain pre-trained models with various input attribution methods studied in the previous sessions: Saliency, Grad-CAM, LRP, Integrated Gradients (IG), perturbation-based methods, and Shapley values.
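As a minimal illustration of the Captum pattern used in the tutorial (the pretrained ResNet-18 and random input are placeholders, not the tutorial's actual models and data), an attribution method wraps the model and `.attribute` is called on an input batch.

```python
import torch
from torchvision import models
from captum.attr import IntegratedGradients

model = models.resnet18(weights="IMAGENET1K_V1").eval()
x = torch.randn(1, 3, 224, 224)               # stand-in for a preprocessed image batch
target = model(x).argmax(dim=1)               # explain the predicted class

ig = IntegratedGradients(model)
attributions = ig.attribute(x, target=target, n_steps=50)
print(attributions.shape)                     # same shape as the input
```

The same pattern applies to Captum's other attribution classes (Saliency, LRP, ShapleyValueSampling, etc.), which is why the tutorial can swap methods while keeping the surrounding code unchanged.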
12. Explanation tools: Google What-If Tool & LIT
Seongyeop Jeong (SAILab, KAIST AI)
We will talk about Google's explanation tools, which provide automatic explanation algorithms such as IG and Shapley values, feature visualizations, and a helpful interface. Complex models can be analyzed easily with these tools, whose intuitive interface shows which information is important and whether the model's results are trustworthy.
13. Tutorial on Chain of Thoughts (English)
Minjoon Seo (Professor, KAIST AI)
14. Human factors engineering for Explainable AI (English)
Woojin Park (Professor, Seoul National University)