Human-Centric AI: From Explainability and Trustworthiness to Actionable Ethics
November 10-14, 2025 (Mon-Fri)
COEX, Seoul, Korea
Workshop Abstract
To address the potential risks of AI while supporting innovation and ensuring responsible adoption, there is an urgent need for clear governance frameworks grounded in human-centric values. It is imperative that AI systems operate in ways that are transparent, trustworthy, and ethically sound. Our vision of Human-Centric AI is built on four foundational pillars: explainability, trustworthiness, fairness, and privacy. Developing truly human-centric AI requires more than technical innovation—it demands interdisciplinary collaboration. Diverse perspectives are essential for tackling the complex and nuanced challenges in the field. To this end, fostering dialogue among researchers and practitioners across disciplines is vital for advancing shared values at the core of human-centric AI.
This workshop will explore key challenges and emerging solutions in the development of human-centric AI, with a focus on explainability, trustworthiness, fairness, and privacy. In addition to theoretical advances, we actively welcome applied research and real-world case studies that demonstrate how human-centered principles are implemented in practical AI systems.
Workshop Theme and Topics
Topics of interest include, but are not limited to:
- Algorithms and evaluation methods for Explainable AI
- Model interpretability (large language models and large multimodal models)
- Trustworthy AI, including robustness and safety
- Fairness and AI ethics
- Privacy-preserving machine learning
- Applications and use cases of human-centric AI
- Human–AI interaction
Workshop Objectives, Goals, and Expected Outcome
Objectives: This workshop aims to bring together researchers and practitioners from foundational areas of Human-Centric AI (Explainability, Trustworthiness, Fairness, and Privacy) to build a shared understanding of its core principles and challenges. We envision that this event will help make the concept of Human-Centric AI more tangible and actionable.
Goals:
- Shape the research agenda: Develop shared roadmaps, benchmarks, and conceptual frameworks to guide the evolution of human-centric AI as both a scientific discipline and a societal endeavor.
- Bridge research and practice: Examine how the core technologies of human-centric AI are applied in real-world contexts, and identify gaps between academic research and deployed systems.
- Facilitate global and interdisciplinary dialogue: Bring together participants from the core areas of Human-Centric AI (Explainability, Trustworthiness, Fairness, and Privacy) for in-depth exchange on the societal and human dimensions of AI.
Expected Outcomes:
- Identification of key challenges and future directions: Identify open problems in Human-Centric AI and outline priorities for future research.
- Establishment of Human-Centric AI guidelines: Develop practical guidelines and surface key challenges for implementing Human-Centric AI across industries.
Workshop Length: Full Day
Target Audience
Our audience includes scholars, students, and industry professionals worldwide interested in Human-Centric AI, particularly in areas such as Explainable AI, Trustworthiness, Fairness, and Privacy. We expect around 70 attendees.
Workshop Relevance
CIKM serves as a premier venue for research with direct industrial impact, covering critical areas such as knowledge management and information retrieval—technologies in high demand across enterprise sectors. As a result, the conference has attracted not only academics but also industry practitioners and government officials for idea exchange. With the rapid implementation of AI regulations worldwide, there is growing interest across both the research community and industry in core technologies essential for the responsible adoption of AI—namely, Explainable AI, Trustworthiness, Fairness, Privacy, and Human-Centric AI at large. In this context, our workshop directly engages with some of the most pressing and strategically significant topics for the CIKM research community, industry stakeholders, and government officials.
The organizers bring extensive experience from prior workshops:
- ICCV Workshop on Interpreting and Explaining Visual Artificial Intelligence Models, co-located with ICCV 2019 (co-organizer: Jaesik Choi); 200 attendees, 72 accepted papers.
- Explainable AI Workshop 2023, co-located with Korea Computer Congress 2023 (chair: Jaesik Choi); 174 attendees, 38 submissions, all accepted and assigned to oral (69%) or poster sessions based on review scores.
- Explainable AI Workshop 2024, co-located with Korea Computer Congress 2024 (chair: Jaesik Choi); 178 attendees, 37 submissions, all accepted and assigned to oral (49%) or poster sessions based on review scores.
- The 1st International Workshop on Anomaly and Novelty Detection in Satellite and Drone Systems (ANSD), co-located with CIKM 2023 (co-organizer: Simon S. Woo); 30 attendees.
- The 1st Workshop on the Security Implications of Deepfakes and Cheapfakes (WDC '22), co-located with ACM AsiaCCS 2022 (co-organizer: Simon S. Woo); 30 attendees, 20 submissions, ~40% acceptance rate.
- The 2nd Workshop on the Security Implications of Deepfakes and Cheapfakes (WDC '23), co-located with ACM AsiaCCS 2023 (co-organizer: Simon S. Woo); 30 attendees, 30 submissions, ~40% acceptance rate.
- The 3rd Workshop on the Security Implications of Deepfakes and Cheapfakes (WDC '24), co-located with ACM AsiaCCS 2024 (co-organizer: Simon S. Woo); 30 attendees, 20 submissions, ~40% acceptance rate.
- The 4th Workshop on the Security Implications of Deepfakes and Cheapfakes (WDC '25), co-located with ACM AsiaCCS 2025 (co-organizer: Simon S. Woo); 30 attendees, 25 submissions, ~40% acceptance rate.
- The 1st Deepfake, Deception, and Disinformation Security Workshop (3D-Sec), co-located with ACM CCS 2025 (co-organizer: Simon S. Woo); accepted, to be held in October 2025.
Related workshops at recent major venues include:
- Trustworthy and Responsible AI for Information and Knowledge Management Systems (CIKM 2024). Keywords: fairness & robustness, responsible AI, explainability, risk mitigation, real-world deployments.
- Large Language Models’ Interpretation and Trustworthiness (LLMIT) (CIKM 2023). Keywords: LLM interpretability, hallucination & misinformation detection, bias mitigation, adversarial prompts, responsible generative IR.
- Privacy Algorithms in Systems (PAS) (CIKM 2022). Keywords: differential privacy, privacy attacks & defenses, privacy-fairness trade-offs, policy implementation, cross-domain PPML.
- Workshop on Explainable Artificial Intelligence (XAI) (IJCAI 2023). Keywords: human-centered evaluation, counterfactual & concept-based XAI, autonomy-oriented explanations, decision-maker trust.
While most previous workshops have concentrated on specific technical domains, the recent rapid progress of generative AI highlights the need for cross-disciplinary conversations to effectively shape the future of Human-Centric AI. What distinguishes our workshop is its integrative focus on four foundational pillars—explainability, trustworthiness, fairness, and privacy—framed within the context of Human-Centric AI. This holistic approach enables a balanced and comprehensive understanding of the key challenges the field faces today.
Workshop Program Format
- Three keynote talks by invited speakers on recent advances and future directions in Human-Centric AI:
- Explainability: “Inspecting AI Like Engineers: From Explanation to Validation with SemanticLens,” Maximilian Dreyer, Fraunhofer Heinrich Hertz Institute, Germany
- Fairness: “Fair Alignment in Large Vision Language Models,” Chang D. Yoo, KAIST, Korea
- Trustworthiness: We plan to invite leading scholars in trustworthy AI who have recently presented at CIKM, such as Yulan He (King’s College London, UK), Tulika Saha (IIIT Bangalore, India), and Sriparna Saha (IIT Patna, India), as potential speakers for the third keynote.
- Two oral presentation sessions featuring peer-reviewed papers and engaging discussions
- One poster session designed to foster in-depth discussions and facilitate networking
- Coffee breaks scheduled between sessions, serving as opportunities for informal interaction
Publicity and Outreach Plan:
To maximize participation, we will implement the following outreach strategies:
- Circulation of the CFP and workshop announcements through major academic mailing lists.
- Coordination with the main conference organizers to include workshop highlights in their newsletters and official announcements.
- Direct invitations to relevant research groups and industry labs.
- Promotion via social media platforms using both personal and institutional channels.
Special Requirements
We would like to request a room setup in classroom style that can accommodate up to 70 participants.
Workshop Schedule/Important Dates:
Milestone | Date |
---|---|
Workshop Website Launch | July 25, 2025 |
Call for Papers Release | July 25, 2025 |
Paper Submission Deadline | Aug. 31, 2025 |
Notification of Acceptance | Sep. 12, 2025 |
Camera-Ready Paper Due | Sep. 26, 2025 |
Workshop Date | Nov. 14, 2025 |
Program Committee
- Jaesik Choi, KAIST, Korea
- Bohyung Han, Seoul National University, Korea
- Myoung-Wan Koo, Sogang University, Korea
- Kyoungman Bae, ETRI, Korea
- Chang D. Yoo, KAIST, Korea
- Simon S. Woo, Sungkyunkwan University, Korea
- Nari Kim, KAIST, Korea
- Maximilian Dreyer, Fraunhofer HHI, Germany
- Hwaran Lee, Sogang University, Korea
- Sangwoo Heo, NAVER Future AI Center, Korea
Attendee Participation
We invite researchers, practitioners, and students from all relevant disciplines, and especially encourage participation from diverse and underrepresented groups. All presenters are expected to attend in person.
Selection/Review Criteria and Process
Submissions and reviews will be managed via EasyChair. Papers will be evaluated on originality, impact, relevance, and clarity through single-blind review by two to three program committee members. Based on review scores and topical balance, accepted papers will be assigned to oral or poster sessions; the workshop is non-archival.
Workshop Organizers
Workshop Contact Person
Nari Kim, Research Professor
Korea Advanced Institute of Science and Technology (KAIST)
(13558) Seongnam-daero 331 beon-gil 8 Kins Tower 18F
Seongnam-si, Gyeonggi-do, Korea
phone: +82-010-0000-0000 | email: nari.kim@kaist.ac.kr | home: xai.kaist.ac.kr