2019 ICCV Workshop on Interpreting and Explaining Visual Artificial Intelligence Models
Saturday, November 2nd, 2019
@ COEX 308BC, Seoul, Korea
Invitation
Explainable and interpretable machine learning models and algorithms are important topics that have received growing attention from researchers, practitioners, and policymakers. Many advanced visual artificial intelligence systems are still perceived as black boxes. Researchers would like to interpret what an AI model has learned in order to identify biases and failure modes and to improve the model. Many government agencies are paying special attention to the topic. The EU's General Data Protection Regulation (GDPR), which took effect in May 2018, mandates a right to explanation for decisions made by machine learning models. In the USA, DARPA launched its Explainable Artificial Intelligence (XAI) program in 2017. The Ministry of Science and ICT (MSIT) of South Korea has established an Explainable Artificial Intelligence Center.
Recently, several models and algorithms have been introduced to explain or interpret the decisions of complex artificial intelligence systems. Explainable Artificial Intelligence systems can now explain the decisions of autonomous systems such as self-driving cars and game agents trained by deep reinforcement learning. Saliency maps built by attribution methods such as network gradients, DeConvNet, Layer-wise Relevance Propagation, PatternAttribution, and RISE can identify the inputs most relevant to the decisions of classification or regression models. Bayesian model composition methods can learn to automatically decompose input data into compositions of explainable base models, for example in human pose estimation. Model-agnostic methods such as LIME (Local Interpretable Model-agnostic Explanations) and SHAP make complex deep learning models more transparent by quantifying the importance of input features. Network Dissection and GAN Dissection provide human-friendly interpretations of the internal units of deep neural networks and deep generative models.
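To make the attribution idea concrete, below is a minimal sketch of a plain gradient ("vanilla" saliency) map in PyTorch, in the spirit of the gradient-based methods listed above. The pretrained torchvision resnet18, the ImageNet preprocessing, and the input file name are illustrative assumptions, not a reference implementation of any particular method presented at the workshop.

```python
# Minimal vanilla-gradient saliency sketch (PyTorch).
# Model choice, preprocessing, and "example.jpg" are illustrative assumptions.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet18(weights="IMAGENET1K_V1").eval()  # torchvision >= 0.13

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = Image.open("example.jpg").convert("RGB")        # hypothetical input image
x = preprocess(img).unsqueeze(0).requires_grad_(True)

scores = model(x)                                     # (1, 1000) class logits
top_class = scores.argmax(dim=1).item()
scores[0, top_class].backward()                       # d(top logit) / d(input)

# Saliency: maximum absolute gradient over the color channels, per pixel.
saliency = x.grad.detach().abs().max(dim=1)[0].squeeze(0)  # (224, 224)
print(saliency.shape, saliency.max())
```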
This workshop aims to survey recent advances in explainable/interpretable artificial intelligence and to establish new theoretical foundations for interpreting and understanding visual artificial intelligence models, including deep neural networks. We will also discuss future research directions and applications of explainable visual artificial intelligence.
Topics of interest include, but are not limited to:
- Explaining the decision of visual deep learning models
- Interpretable deep learning models
- Machine learning/deep learning models that generate human-friendly explanations
- Bayesian model composition/decomposition methods
- Model-agnostic explanation methods for machine learning models
- Evaluation of explainable AI models
- Causal analysis of complex AI/ML systems
- Practical applications of explainable AI
Invited Speakers
- Trevor Darrell (UC Berkeley)
- Wojciech Samek (Fraunhofer Heinrich Hertz Institute)
- David Bau (MIT)
- Ludwig Schubert (OpenAI Clarity Team)
Program
Time | Title | Room |
---|---|---|
9:00 - 9:10 | Opening Remarks | 308BC |
9:10 - 10:00 | Invited Talk 1. "Recent progress towards XAI at UC Berkeley", Trevor Darrell (UC Berkeley) | 308BC |
10:00 - 10:20 | Oral Presentations 1 (10 min × 2 papers) | 308BC |
10:20 - 10:40 | Coffee Break | |
10:40 - 11:30 | Invited Talk 2. "Meta-Explanations, Interpretable Clustering & Other Recent Developments", Wojciech Samek (Fraunhofer Heinrich Hertz Institute) | 308BC |
11:30 - 11:55 | Poster Spotlights (1 min × 25 papers) | 308BC |
12:00 - 12:50 | Lunch | |
12:50 - 13:20 | Poster Session 1 (16 papers) | E25~40 |
13:30 - 14:20 | Invited Talk 3. "The Role of Individual Units in Deep Networks in Vision", David Bau (MIT) | 308BC |
14:20 - 15:00 | Oral Presentations 2 (10 min × 4 papers) | 308BC |
15:00 - 15:30 | Poster Session 2 (15 papers) | E25~40 |
15:30 - 15:50 | Coffee Break | |
15:50 - 16:40 | Invited Talk 4. "Zooming in: From Activation Atlases down to Features & Circuits", Ludwig Schubert (OpenAI Clarity Team) | 308BC |
16:40 - 16:50 | Tutorial 1. "Tutorial on TorchRay: a PyTorch interpretability library for reproducible research", Ruth Fong (University of Oxford) (see the usage sketch after the program) | 308BC |
16:50 - 16:55 | Tutorial 2. "An Open Source Repository of Explainable Artificial Intelligence Projects", Sohee Cho (Explainable Artificial Intelligence Center) | 308BC |
16:55 - 17:00 | Closing Remarks | 308BC |
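For attendees who miss Tutorial 1, the snippet below is a sketch of typical TorchRay usage for a Grad-CAM attribution, following the library's published example; the helper names (`get_example_data`, `plot_example`, the `saliency_layer` argument) should be checked against the TorchRay README for the installed release.

```python
# Sketch of TorchRay usage (Grad-CAM attribution); helper names follow the
# library's example code and may differ between releases -- treat as a guide.
from torchray.attribution.grad_cam import grad_cam
from torchray.benchmark import get_example_data, plot_example

# Load the bundled example model and image.
model, x, category_id, _ = get_example_data()

# Grad-CAM attribution for the target class at a chosen convolutional layer.
saliency = grad_cam(model, x, category_id, saliency_layer='features.29')

# Visualize the input alongside its saliency map.
plot_example(x, saliency, 'grad-cam backprop', category_id)
```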
Download Presentation Slides Here!
1. Trevor Darrell (Recent progress towards XAI at UC Berkeley)
2. Wojciech Samek (Meta-Explanations, Interpretable Clustering & Other Recent Developments)
3. David Bau (The Role of Individual Units in Deep Networks in Vision)
4. Sohee Cho (An Open Source Repository of Explainable Artificial Intelligence Projects)
Presentation List
Oral Presentations
Paper ID: 18 - Characterizing Sources of Uncertainty to Proxy Calibration and Disambiguate Annotator and Data Bias Asma Ghandeharioun (MIT)*, Brian Eoff (Google Research), Brendan Jou (Google Research), Rosalind Picard (MIT Media Lab)
Paper ID: 34 - Decision explanation and feature importance for invertible networks Juntang Zhuang (Yale University)*, Nicha Dvornek (Yale University), Xiaoxiao Li (Yale University), Junlin Yang (Yale University), James S Duncan (Yale University)
Paper ID: 19 - Adaptive Activation Thresholding: Dynamic Routing Type Behavior for Interpretability in Convolutional Neural Networks Yiyou Sun (University of Wisconsin Madison)*, Sathya Ravi (University of Wisconsin-Madison), Vikas Singh (University of Wisconsin-Madison USA)
Paper ID: 35 - Free-Lunch Saliency via Attention in Atari Agents Dmitry Nikulin (Samsung AI Center Moscow)*, Anastasia Ianina (Samsung AI Center Moscow), Vladimir A Aliev (Samsung AI Center, Moscow), Sergey I Nikolenko (PDMI RAS)
Paper ID: 25 - Interpretable BoW Networks for Adversarial Example Detection Krishna Kanth Nakka (EPFL)*, Mathieu Salzmann (EPFL)
Paper ID: 17 - Leveraging Model Interpretability and Stability to increase Model Robustness Fei Wu (CentraleSupelec), Thomas Michel (Valeo)*, Alexandre Briot (Valeo)
Poster Session 1 (16 papers)
Paper ID: 1 - Visualization of Time Series Deep Neural Network Sohee Cho (UNIST)*, Jaesik Choi (KAIST)
Paper ID: 2 - Why are Saliency Maps Noisy? Cause of and Solution to Noisy Saliency Maps Beomsu Kim (Korea Advanced Institute of Science and Technology)*, Junghoon Seo (Satrec Initiative), SeungHyun Jeon (Satrec Initiative), Jamyoung Koo (SI Analytics), Jeongyeol Choe (SI Analytics), Taegyun Jeon (SI Analytics)
Paper ID: 6 - Interpretable Disentanglement of Neural Networks by Extracting Class-Specific Subnetwork Yulong Wang (Tsinghua University), Xiaolin Hu (Tsinghua University), Hang Su (Tsinghua University)*
Paper ID: 7 - Occlusions for Effective Data Augmentation in Image Classification Ruth C Fong (University of Oxford)*, Andrea Vedaldi (Oxford University)
Paper ID: 8 - Grid Saliency for Context Explanations of Semantic Segmentation Lukas Hoyer (Bosch Center for Artificial Intelligence)*, Mauricio Munoz (Bosch Center for Artificial Intelligence), Prateek Katiyar (Bosch Center for Artificial Intelligence), Anna Khoreva (Bosch Center for Artificial Intelligence), Volker Fischer (Bosch Center for Artificial Intelligence)
Paper ID: 9 - Explaining Visual Models by Causal Attribution Álvaro Parafita Martínez (Universitat de Barcelona)*, Jordi Vitria (Universitat de Barcelona)
Paper ID: 10 - Explaining Convolutional Neural Networks using Softmax Gradient Layer-wise Relevance Propagation Brian K Iwana (Kyushu University)*, Ryohei Kuroki (Kyushu University), Seiichi Uchida (Kyushu University)
Paper ID: 11 - Understanding Convolutional Networks Using Linear Interpreters (Extended Abstract) Pablo Navarrete Michelini (BOE Technology Group Co., Ltd.)*, Hanwen Liu (BOE Technology Group Co., Ltd.), Yunhua Lu (BOE Technology Group Co., Ltd.), Xingqun Jiang (BOE Technology Group Co., Ltd.)
Paper ID: 12 - Bin-wise Temperature Scaling (BTS): Improvement in Confidence Calibration Performance through Simple Scaling Techniques Younghak Shin (LG CNS)*, Byeongmoon Ji (LG CNS), Hyemin Jung (LG CNS), Jihyeun Yoon (LG CNS), Kyungyul Kim (LG CNS)
Paper ID: 13 - Towards Analyzing Semantic Robustness of Deep Neural Networks Abdullah J Hamdi (KAUST)*, Bernard Ghanem (KAUST)
Paper ID: 14 - Towards A Rigorous Evaluation Of XAI Methods On Time Series Udo M Schlegel (University of Konstanz)*, Hiba Arnout (Siemens CT & TU Munich), Mennatallah El-Assady (University of Konstanz), Daniela Oelke (Siemens CT), Daniel Keim (University of Konstanz)
Paper ID: 15 - Class Feature Pyramids for Video Explanation Alexandros Stergiou (Utrecht University)*, George Kapidis (Utrecht University), Grigorios Kalliatakis (University of Essex, UK), Christos Chrysoulas (London South Bank University), Ronald Poppe (Utrecht University), Remco C. Veltkamp (Utrecht University)
Paper ID: 16 - Localizing Occluders with Compositional Convolutional Networks Adam Kortylewski (Johns Hopkins University)*, Qing Liu (Johns Hopkins University), Huiyu Wang (Johns Hopkins University), Zhishuai Zhang (Johns Hopkins University), Alan Yuille (Johns Hopkins University)
Paper ID: 18 - Characterizing Sources of Uncertainty to Proxy Calibration and Disambiguate Annotator and Data Bias Asma Ghandeharioun (MIT)*, Brian Eoff (Google Research), Brendan Jou (Google Research), Rosalind Picard (MIT Media Lab)
Paper ID: 20 - Efficient Exploration-based Sampling in the Generative Boundary of Deep Generative Neural Networks Giyoung Jeon (Ulsan National Institute of Science and Technology), Haedong Jeong (Ulsan National Institute of Science and Technology), Jaesik Choi (KAIST)*
Paper ID: 34 - Decision explanation and feature importance for invertible networks Juntang Zhuang (Yale University)*, Nicha Dvornek (Yale University), Xiaoxiao Li (Yale University), Junlin Yang (Yale University), James S Duncan (Yale University)
Poster Session 2 (15 papers)
Paper ID: 17 - Leveraging Model Interpretability and Stability to increase Model Robustness Fei Wu (CentraleSupelec), Thomas Michel (Valeo)*, Alexandre Briot (Valeo)
Paper ID: 19 - Adaptive Activation Thresholding: Dynamic Routing Type Behavior for Interpretability in Convolutional Neural Networks Yiyou Sun (University of Wisconsin Madison)*, Sathya Ravi (University of Wisconsin-Madison), Vikas Singh (University of Wisconsin-Madison USA)
Paper ID: 21 - Cost-Effective Interactive Attention Learning for Action Recognition Jay Heo (KAIST)*, Junhyeon Park (KAIST), Hyewon Jeong (KAIST), Wuhyun Shin (KAIST), Kwang Joon Kim (Yonsei University College of Medicine), Sung Ju Hwang (KAIST)
Paper ID: 22 - A Plug-in Factorizer for Disentangling a Latent Representation Jee Seok Yoon (Korea University), Wonjun Ko (Korea University), Heung-Il Suk (Korea University)*
Paper ID: 24 - Visual Understanding of Multiple Attributes Learning Model of X-Ray Scattering Images Xinyi Huang (Kent State University), Suphanut Jamonnak (Kent State University), Ye Zhao (Kent State University)*, Boyu Wang (Stony Brook University), Minh Hoai Nguyen (Stony Brook University), Kevin Yager (Brookhaven National Laboratory), Wei Xu (Brookhaven National Laboratory)
Paper ID: 25 - Interpretable BoW Networks for Adversarial Example Detection Krishna Kanth Nakka (EPFL)*, Mathieu Salzmann (EPFL)
Paper ID: 28 - Semantically Interpretable Activation Maps: what-where-how explanations within CNNs Diego Marcos (Wageningen University)*, Sylvain Lobry (Wageningen University and Research), Devis Tuia (Wageningen University and Research)
Paper ID: 30 - To Trust, or Not to Trust? A Case Study of Human Bias in Automated Video Interview Assessments Chee Wee Leong (Educational Testing Service (ETS))*, Katrina Roohr (Educational Testing Service), Vikram Ramanarayanan (University of California, San Francisco), Michelle Martin-Raugh (Educational Testing Service (ETS)), Harrison Kell (Educational Testing Service), Rutuja Ubale (Educational Testing Service Research), Yao Qian (Educational Testing Service), Zydrune Mladineo (Educational Testing Service), Laura McCulla (Educational Testing Service)
Paper ID: 31 - Assisting human experts in the interpretation of their visual process: A case study on assessing copper surface adhesive potency Tristan Hascoet*, Xuejiao Deng, Kiyoto Tai, Mari Sugiyama, Yuji Adachi, Sachiko Nakamura, Yasuo Ariki, Tomoko Hayashi, Tetsuya Takiguchi (Kobe University)
Paper ID: 32 - Propagated Perturbation of Adversarial Attack for well-known CNNs: Empirical Study and its Explanation Jihyeun Yoon (LG CNS), Kyungyul Kim (LG CNS), Jongseong Jang (LG CNS)*
Paper ID: 33 - Attention Guided Metal Artifact Correction in MRI using Deep Neural Networks Jee Won Kim (KAIST), Kinam Kwon (KAIST), Byungjai Kim (KAIST), HyunWook Park (KAIST)*
Paper ID: 35 - Free-Lunch Saliency via Attention in Atari Agents Dmitry Nikulin (Samsung AI Center Moscow)*, Anastasia Ianina (Samsung AI Center Moscow), Vladimir A Aliev (Samsung AI Center, Moscow), Sergey I Nikolenko (PDMI RAS)
Paper ID: 37 - Fooling Neural Network Interpretations via Adversarial Model Manipulation Juyeon Heo (Sungkyunkwan University), Sunghwan Joo (Sungkyunkwan University), Taesup Moon (Sungkyunkwan University)*
Paper ID: 39 - Second-order feature representation for visualizing pyramidal multiscale superpixel pooling network Ali Tousi (Ulsan National Institute of Science and Technology)*, Jaesik Choi (KAIST)
Paper ID: 42 - Interpreting Undesirable Pixels for Image Classification on Black-Box Models Sin-Han Kang (Korea University)*, Hong-Gyu Jung (Korea University), Seong-Whan Lee (Korea University)
Contact
UNIST Explainable Artificial Intelligence Center
Jaesik Choi / jaesik.choi@kaist.ac.kr
Sohee Cho / sohee.cho@kaist.ac.kr
GyeongEun Lee / socool@unist.ac.kr / 052-217-2196