Human-Centric AI:

From Explainability and Trustworthiness to Actionable Ethics

CIKM 2025 Workshop

November 14, 2025 | COEX, Seoul, Korea

Keynote Talk I

Irwin King

The Chinese University of Hong Kong

Hong Kong, China

Trustworthy Artificial Intelligence: Leveraging Federated Learning and Beyond

Abstract:

Artificial intelligence (AI) has swiftly become an indispensable component of our daily lives. However, the extensive adoption of AI necessitates the establishment of Trustworthy AI (TAI) systems that are secure, resilient, explainable, and impartial. One promising approach to attaining these attributes is through Federated Learning (FL), a distributed learning methodology that prioritizes privacy by training AI models on decentralized data sources, thereby eliminating the necessity for centralized data collection. While FL enhances user privacy and data security, it is not immune to attacks, such as model poisoning and inference attacks, which can compromise its integrity. This presentation delves into the significance of FL in constructing TAI systems, elucidating various attack techniques and presenting defense mechanisms to mitigate these risks. Furthermore, we address elements such as explainability and watermarking, which further enhance transparency and safeguard intellectual property. Through real-world applications across sectors like healthcare and finance, we demonstrate how integrating these components can lead to responsible and trustworthy AI technologies, advocating for a comprehensive approach to AI development.
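To make the federated setting in the abstract concrete, the sketch below shows a minimal federated averaging (FedAvg-style) round in Python with NumPy. It is an illustrative simplification under assumed placeholders (toy linear models, synthetic client data, size-weighted aggregation), not the speaker's system and without the defense mechanisms the talk discusses.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient steps on its private data
    (plain linear regression stands in for a real model)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    """The server sends the global model to each client, collects locally
    trained weights, and aggregates them weighted by local dataset size.
    Raw data never leaves the clients; only model updates are shared."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    sizes = np.array(sizes, dtype=float)
    return np.average(updates, axis=0, weights=sizes / sizes.sum())

# Toy example: three clients whose private data follow the same linear model.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (30, 50, 20):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, clients)
print(w)  # approaches [2.0, -1.0] without any client sharing its raw data
```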

Biography:

Professor Irwin King, Pro-Vice-Chancellor (Education) and Distinguished Professor at the Department of Computer Science & Engineering, The Chinese University of Hong Kong, has a diverse research portfolio in machine learning, social computing, artificial intelligence, and data mining. His scholarly contributions include publications in prestigious journals and editorial board memberships with international publishers. He has received numerous accolades, including Test of Time Awards at ACM CIKM, ACM SIGIR, and ACM WSDM, and the Dennis Gabor Award from INNS for his pioneering work in machine learning within social computing. As a Fellow of ACM, IEEE, INNS, AAIA, and HKIE, he has demonstrated exceptional leadership in the field. He has held significant positions, including President of the International Neural Network Society (INNS) and General Co-chair for premier international conferences like WebConf, ACML, and RecSys. He is also the Director of the eLearning Innovation and Technology (ELITE) Centre, the Trustworthy Machine Intelligence Joint Lab, and the Machine Intelligence and Social Computing (MISC) Lab. His academic journey began with a Bachelor of Science degree from Caltech, followed by a Master of Science and Doctor of Philosophy in Computer Science from the University of Southern California (USC).

Keynote Talk II

Chang D. Yoo

KAIST

Korea

Fair Alignment in Large Vision-Language Models

Abstract:

Large Vision-Language Models (LVLMs) integrate computer vision and natural language processing, enabling machines to understand and reason about the world through both visual and textual inputs. This fusion allows AI systems to describe images, answer visual questions, and interpret complex multimodal contexts, bridging perception and reasoning. However, as LVLMs become more capable and pervasive, ensuring fair alignment (alignment with human values that is inclusive, unbiased, and socially equitable) becomes critical. This talk explores what fairness means in multimodal alignment, why biases emerge in vision-language systems, and how principled data design, training, and evaluation can help build LVLMs that not only perform well but also act responsibly across diverse users and contexts.
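As a small illustration of the evaluation side, the sketch below computes a per-group accuracy gap for a hypothetical visual question answering model. The group labels, records, and the idea of thresholding the gap are assumptions for exposition only; they are not the speaker's benchmark or method.

```python
from collections import defaultdict

def group_accuracy_gap(records):
    """Each record is (group, predicted_answer, gold_answer).
    Returns per-group accuracy and the gap between the best- and
    worst-served groups, a crude proxy for (un)fair alignment."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, pred, gold in records:
        total[group] += 1
        correct[group] += int(pred == gold)
    acc = {g: correct[g] / total[g] for g in total}
    gap = max(acc.values()) - min(acc.values())
    return acc, gap

# Hypothetical VQA outputs tagged with a demographic attribute of the image subject.
records = [
    ("group_a", "yes", "yes"), ("group_a", "no", "no"), ("group_a", "yes", "yes"),
    ("group_b", "no", "yes"), ("group_b", "no", "no"), ("group_b", "yes", "no"),
]
acc, gap = group_accuracy_gap(records)
print(acc, gap)  # flag the model if the gap exceeds an agreed tolerance
```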

Biography:

Chang D. Yoo is a professor at the Korea Advanced Institute of Science and Technology (KAIST) and Director of the AI Fairness Research Center, supported by the Ministry of Science and ICT, Korea. He advises the AI Federation of Korean Industries and serves on the IEEE MLSP Technical Committee. He has served as Area Chair for ICCV, ECCV, and ICASSP, and as Associate Editor for IEEE TASLP, Information Fusion, and SPL.

Keynote Talk III

Maximilian Dreyer

Fraunhofer Heinrich Hertz Institute

Germany

Inspecting AI Like Engineers: From Explanation to Validation with SemanticLens

Abstract:

Traditional engineered systems are validated against clear standards, yet most AI models remain black boxes with limited insight into their components and few ways to validate behavior. In this talk, we present SemanticLens, a scalable framework that converts models and their components into a semantic vector database that is searchable, comparable, and summarizable. This semantic embedding enables automated auditing of component functions, providing a new approach to systematic validation of AI models. To build reliable models, we combine these mechanistic insights with targeted correction tools, establishing a practical workflow for understanding model components, validating their behavior, and refining their function.
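The abstract's core idea, turning model components into a searchable semantic database, can be illustrated with a small sketch. The component names, textual descriptions, and TF-IDF embedding below are illustrative assumptions, not the SemanticLens implementation, which relies on learned multimodal embeddings and operates at far larger scale.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical component descriptions, e.g. obtained by captioning the inputs
# that most strongly activate each unit of a vision model.
components = {
    "layer3.unit_12": "striped fur texture of cats and tigers",
    "layer3.unit_47": "round traffic sign with red border",
    "layer4.unit_03": "human face, eyes and mouth region",
    "layer4.unit_88": "watermark-like text overlay in the image corner",
}

names = list(components)
vectorizer = TfidfVectorizer()
index = vectorizer.fit_transform(components.values())  # the "semantic database"

def search(query, top_k=2):
    """Return the model components whose descriptions best match the query."""
    q = vectorizer.transform([query])
    scores = cosine_similarity(q, index).ravel()
    ranked = np.argsort(scores)[::-1][:top_k]
    return [(names[i], float(scores[i])) for i in ranked]

# Audit-style query: does any component latch onto spurious text overlays?
print(search("text watermark artifact"))
```

A query like the one above hints at how a searchable component index supports validation: a reviewer can probe for units tied to spurious or undesired concepts and then hand them to targeted correction tools.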

Biography:

Maximilian Dreyer is a PhD student in the Explainable AI group at the Fraunhofer Heinrich Hertz Institute in Berlin, under the guidance of Sebastian Lapuschkin and Wojciech Samek. His research centers on creating concept-based XAI methods that are both insightful and easy to use, and on developing frameworks that enhance the safety and robustness of AI models using XAI insights.