2025 CIKM Workshop

Human-Centric AI: From Explainability and Trustworthiness to Actionable Ethics

November 10-14, 2025 (Mon-Fri)

COEX, Seoul, Korea


Workshop Abstract

To address the potential risks of AI while supporting innovation and ensuring responsible adoption, there is an urgent need for clear governance frameworks grounded in human-centric values. It is imperative that AI systems operate in ways that are transparent, trustworthy, and ethically sound. Our vision of Human-Centric AI is built on four foundational pillars: explainability, trustworthiness, fairness, and privacy. Developing truly human-centric AI requires more than technical innovation—it demands interdisciplinary collaboration. Diverse perspectives are essential for tackling the complex and nuanced challenges in the field. To this end, fostering dialogue among researchers and practitioners across disciplines is vital for advancing shared values at the core of human-centric AI.

This workshop will explore key challenges and emerging solutions in the development of human-centric AI, with a focus on explainability, trustworthiness, fairness, and privacy. In addition to theoretical advances, we actively welcome applied research and real-world case studies that demonstrate how human-centered principles are implemented in practical AI systems.

Workshop Theme and Topics

Topics of interest include, but are not limited to:

  • Algorithms and evaluation methods for Explainable AI
  • Model interpretability (Large Language Models and Large Multi-modal Models)
  • Trustworthy AI, including Robustness and Safety
  • Fairness and AI Ethics
  • Privacy-preserving machine learning
  • Applications and Use cases of human-centric AI
  • Human–AI Interaction

Workshop Objectives, Goals, and Expected Outcome

Objectives: This workshop aims to bring together researchers and practitioners from foundational areas of Human-Centric AI—such as Explainability, Trustworthiness, Fairness, and Privacy—to build a shared understanding of its core principles and challenges. We envision this event helping to make the concept of Human-Centric AI more tangible and actionable.

Goals:

  1. Shape the Research Agenda: Develop shared roadmaps, benchmarks, and conceptual frameworks to guide the evolution of human-centric AI as both a scientific discipline and a societal endeavor.
  2. Bridge Research and Practice: Examine how the core technologies of human-centric AI are applied in real-world contexts, and identify gaps between academic research and deployed systems.
  3. Facilitate Global and Interdisciplinary Dialogue: Bring together participants from core areas of Human-Centric AI—Explainability, Trustworthiness, Fairness, and Privacy—for in-depth exchange on the societal and human dimensions of AI.

Expected Outcomes:

  1. Identification of key challenges and future directions: Identify open problems in Human-Centric AI and outline priorities for future research.
  2. Establishment of Human-Centric AI guidelines: Develop practical guidelines and surface key challenges for implementing Human-Centric AI across industries.

Workshop Length: Full Day

Target Audience

Our audience includes scholars, students, and industry professionals worldwide interested in Human-Centric AI, particularly in areas such as Explainable AI, Trustworthiness, Fairness, and Privacy. We expect around 70 attendees.

Workshop Relevance

CIKM serves as a premier venue for research with direct industrial impact, covering critical areas such as knowledge management and information retrieval—technologies in high demand across enterprise sectors. As a result, the conference has attracted not only academics but also industry practitioners and government officials for idea exchange. With the rapid implementation of AI regulations worldwide, there is growing interest across both the research community and industry in core technologies essential for the responsible adoption of AI—namely, Explainable AI, Trustworthiness, Fairness, Privacy, and Human-Centric AI at large. In this context, our workshop directly engages with some of the most pressing and strategically significant topics for the CIKM research community, industry stakeholders, and government officials.

Past Workshops
Related Workshops

While most previous workshops have concentrated on specific technical domains, the recent rapid progress of generative AI highlights the need for cross-disciplinary conversations to effectively shape the future of Human-Centric AI. What distinguishes our workshop is its integrative focus on four foundational pillars—explainability, trustworthiness, fairness, and privacy—framed within the context of Human-Centric AI. This holistic approach enables a balanced and comprehensive understanding of the key challenges the field faces today.


Workshop Program Format

The full-day workshop will offer a dynamic program to foster knowledge exchange and community building. It will include:
  • Three keynote talks by invited speakers on recent advances and future directions in Human-Centric AI
    • Explainability: “Inspecting AI Like Engineers: From Explanation to Validation with SemanticLens,” Maximilian Dreyer, Fraunhofer Heinrich Hertz Institute, Germany
    • Fairness: “Fair Alignment in Large Vision Language Models,” Chang D. Yoo, KAIST, Korea
    • Trustworthiness: We plan to contact leading scholars in trustworthy AI who have recently presented at CIKM – such as Yulan He (King’s College London, UK), Tulika Saha (IIIT Bangalore, India), and Sriparna Saha (IIT Patna, India) – as potential speakers for the third keynote
  • Two oral presentation sessions featuring peer-reviewed papers and engaging discussions
  • One poster session designed to foster in-depth discussions and facilitate networking
  • Coffee breaks scheduled between sessions, serving as opportunities for informal interaction

Publicity and Outreach Plan:

To maximize participation, we will implement the following outreach strategies:
  • Circulation of the CFP and workshop announcements through major academic mailing lists.
  • Coordination with the main conference organizers to include workshop highlights in their newsletters and official announcements.
  • Direct invitations to relevant research groups and industry labs.
  • Promotion via social media platforms using both personal and institutional channels.

Special Requirements

We would like to request a room setup in classroom style that can accommodate up to 70 participants.

Workshop Schedule/Important Dates:

  • Workshop Website Launch: July 25, 2025
  • Call for Papers Release: July 25, 2025
  • Paper Submission Deadline: Aug. 31, 2025
  • Notification of Acceptance: Sep. 12, 2025
  • Camera-Ready Paper Due: Sep. 26, 2025
  • Workshop Date: Nov. 14, 2025

Program Committee

  • Jaesik Choi, KAIST, Korea
  • Bohyung Han, Seoul National University, Korea
  • Myoung-Wan Koo, Sogang University, Korea
  • Kyoungman Bae, ETRI, Korea
  • Chang D. Yoo, KAIST, Korea
  • Simon S. Woo, Sungkyunkwan University, Korea
  • Nari Kim, KAIST, Korea
  • Maximilian Dreyer, Fraunhofer HHI, Germany
  • Hwaran Lee, Sogang University, Korea
  • Sangwoo Heo, NAVER Future AI Center, Korea

Attendee Participation

We invite researchers, practitioners, and students from all relevant disciplines, and especially encourage participation from diverse and underrepresented groups. All presenters are expected to attend in person.

Selection/Review Criteria and Process

Submissions and reviews will be managed via EasyChair. Each paper will receive a single-blind review from two to three program committee members and will be evaluated on originality, impact, relevance, and clarity. Accepted papers will be assigned to oral or poster sessions based on review scores and topical balance; the workshop proceedings are non-archival.


Workshop Organizers

  • Jaesik Choi, KAIST, Korea (jaesik.choi@kaist.ac.kr)
  • Bohyung Han, Seoul National University, Korea (bhhan@snu.ac.kr)
  • Myoung-Wan Koo, Sogang University, Korea (mwkoo@sogang.ac.kr)
  • Kyoungman Bae, ETRI, Korea (kyoungman.bae@etri.re.kr)
  • Chang D. Yoo, KAIST, Korea (cd_yoo@kaist.ac.kr)
  • Simon S. Woo, Sungkyunkwan University, Korea (swoo@skku.edu)
  • Wojciech Samek, TU Berlin and HHI, Germany (wojciech.samek@hhi.fraunhofer.de)

Workshop Contact Person

Nari Kim, Research Professor

Korea Advanced Institute of Science and Technology (KAIST)

(13558) Seongnam-daero 331 beon-gil 8 Kins Tower 18F

Seongnam-si, Gyeonggi-do, Korea

phone: +82-010-0000-0000 | email: nari.kim@kaist.ac.kr | home: xai.kaist.ac.kr