Human-Centric AI:

From Explainability and Trustworthiness to Actionable Ethics

CIKM 2025 Workshop | Coex, Seoul, Korea

November 14, 2025

Invitation

This workshop aims to bring together researchers and practitioners from foundational areas of Human-Centric AI, such as Explainability, Trustworthiness, Fairness, and Privacy, to build a shared understanding of its core principles and challenges.

It is imperative that AI systems operate in ways that are transparent, trustworthy, and ethically sound. Our vision of Human-Centric AI is built on four foundational pillars: explainability, trustworthiness, fairness, and privacy. Developing truly human-centric AI requires more than technical innovation; it demands interdisciplinary collaboration. Diverse perspectives are essential for tackling the complex and nuanced challenges in the field. To this end, fostering dialogue among researchers and practitioners across disciplines is vital for advancing the shared values at the core of human-centric AI.

This workshop will explore key challenges and emerging solutions in the development of human-centric AI, with a focus on explainability, trustworthiness, fairness, and privacy. We welcome not only theoretical advances but also applied research and case studies that show how human-centered principles are realized in real-world AI applications.

Target Audience

Our intended audience includes researchers, graduate students, practitioners, and policy makers from around the world who are interested in Human-Centric AI, particularly in areas such as Explainable AI, Trustworthiness, Fairness, and Privacy.

Call for Papers

Topics of interest include, but are not limited to:

  • Explainable AI (XAI) methods and evaluations
  • Trustworthy AI, including Robustness and Safety
  • Fairness-aware machine learning
  • Privacy-preserving machine learning
  • Model interpretability (for Large Language Models and Large Multi-modal Models)
  • AI Ethics, Frameworks, and Governance
  • Global AI regulations (e.g., EU AI Act, Korea's AI Basic Act) and their practical implications
  • Case studies of human-centric AI in real-world applications
  • Human-AI Interaction

Important Dates

All deadlines are 11:59 PM, UTC-12 ("Anywhere on Earth", AoE).

1. Submission Deadline (EasyChair): Aug. 29, 2025
2. Notification of Acceptance: Sep. 14, 2025
3. Camera-Ready Paper Deadline: Sep. 26, 2025
4. Workshop Date: Nov. 14, 2025

* Note: An EasyChair account is required to submit.

Paper Submission

Manuscripts should be submitted in PDF format via the CIKM 2025 EasyChair site (https://easychair.org/my/conference?conf=cikm25), using the two-column ACM sigconf template. Please refer to the official ACM proceedings template available at https://www.acm.org/publications/proceedings-template.

We accept both short and long paper submissions. Short papers are limited to 4 pages and long papers to 9 pages, including any appendix; no additional space beyond the page limit is allowed for appendices. References do not count toward the page limit and may extend to any number of pages.

Papers will be evaluated on originality, impact, relevance, and clarity through a single-blind review process conducted by two to three program committee members. Based on review scores and topical balance, accepted papers will be assigned to either oral or poster presentations.

Submissions are non-archival. At least one author of each accepted paper must register for CIKM 2025 and present the paper on-site in Seoul, Korea, as scheduled in the program.

Invited Talks

Keynote I

TBA

Keynote II

Chang D. Yoo

KAIST, Korea

Fair Alignment in Large Vision Language Models

Keynote III

Maximilian Dreyer

Fraunhofer Heinrich Hertz Institute, Germany

Inspecting AI Like Engineers: From Explanation to Validation with SemanticLens

Program

Time | Duration | Session | Speaker
13:00 - 13:05 | 5 min | Opening Remarks | Jaesik Choi, Organizing Chair (KAIST, Korea)
13:05 - 13:10 | 5 min | Congratulatory Address | TBA
13:10 - 13:40 | 30 min | Keynote Speech I: TBA | TBA
13:40 - 14:10 | 30 min | Keynote Speech II (Fairness): "Fair Alignment in Large Vision Language Models" | Chang D. Yoo (KAIST, Korea)
14:10 - 14:50 | 40 min | Spotlights I | Session Chair
14:50 - 15:30 | 40 min | Poster Session with Coffee | Session Chair
15:30 - 16:10 | 40 min | Panel Discussion: "Human-Centric AI: From Technology to Practice" | TBA
16:10 - 16:40 | 30 min | Oral Presentations and Best Paper Awards | Session Chair
16:40 - 17:10 | 30 min | Keynote Speech III: "Inspecting AI Like Engineers: From Explanation to Validation with SemanticLens" | Maximilian Dreyer (Fraunhofer HHI, Germany)
17:10 - 17:50 | 40 min | Spotlights II | Session Chair
17:50 - 18:00 | 10 min | Closing Remarks |

Organizing Committee

Program Committee

  • Jaesik Choi, KAIST, Korea
  • Bohyung Han, Seoul National University, Korea
  • Myoung-Wan Koo, Sogang University, Korea
  • Kyoungman Bae, ETRI, Korea
  • Chang D. Yoo, KAIST, Korea
  • Simon S. Woo, Sungkyunkwan University, Korea
  • Nari Kim, KAIST, Korea
  • Maximilian Dreyer, Fraunhofer HHI, Germany
  • Hwaran Lee, Sogang University, Korea
  • Sangwoo Heo, NAVER AI Risk Management Center, Korea

Contacts

Questions about paper submission

Dr. Nari Kim

Program Committee, KAIST, Korea

Email: nari.kim@kaist.ac.kr

Questions about administration

Ms. Yunjung Choi

Workshop Admin, KAIST, Korea

Email: Choi3721@kaist.ac.kr