Human-Centric AI:

From Explainability and Trustworthiness to Actionable Ethics

CIKM 2025 Workshop

November 14, 2025 | COEX, Seoul, Korea

Invitation

We invite researchers and practitioners in human-centric AI, with a focus on explainability, trustworthiness, fairness, privacy, and related values, to come together in building a shared understanding of its key principles and challenges.

Developing truly human-centric AI goes beyond technical innovation: it requires interdisciplinary collaboration and diverse perspectives. We welcome both theoretical contributions and practical case studies that demonstrate how human-centric principles are realized in real-world AI systems.

This workshop will be held in conjunction with CIKM 2025 at COEX in Seoul, Korea. Located in the heart of Gangnam, COEX is a landmark venue for global conferences. We warmly invite you to join us in exploring the future of human-centric AI with leading scholars and industry experts from around the world.

Call for Papers

Topics of interest include, but are not limited to:

  • Explainable AI (XAI) methods and evaluations
  • Trustworthy AI, including robustness and safety
  • Fairness-aware machine learning
  • Privacy-preserving machine learning
  • Model interpretability, including large language models and large multimodal models
  • AI ethics, frameworks, and governance
  • Global AI regulations (e.g., the EU AI Act, Korea's AI Basic Act) and their practical implications
  • Case studies of human-centric AI in real-world applications
  • Human-AI interaction

Important Dates

1. Paper Submission Deadline: September 14, 2025
2. Notification of Acceptance: September 30, 2025
3. Camera-Ready Paper Deadline: October 14, 2025
4. Workshop Date: November 14, 2025
All deadlines are 11:59 PM UTC-12:00 (Anywhere on Earth).

Submission Guidelines

We welcome both short and long paper submissions:

  • Long papers: up to 9 pages, including any appendix
  • Short papers: up to 4 pages, including any appendix

Appendices, if included, must fit within these page limits; no additional pages are permitted for them. References do not count toward the page limit and may be of any length. Submissions need not be anonymized.

All manuscripts must be submitted in PDF format via the Microsoft CMT submission site. Please use the ACM two-column sigconf template. The official ACM proceedings template is available at: https://www.acm.org/publications/proceedings-template.
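For reference, a minimal sigconf skeleton might look like the sketch below; the title, author details, and bibliography file name are placeholders, and the official template linked above remains authoritative.

    \documentclass[sigconf]{acmart}

    % Placeholder metadata; replace with your paper's details.
    \title{Your Paper Title}
    \author{First Author}
    \affiliation{%
      \institution{Institution}
      \city{City}
      \country{Country}}
    \email{author@example.org}

    \begin{document}

    % In acmart, the abstract must be defined before \maketitle.
    \begin{abstract}
    A one-paragraph abstract.
    \end{abstract}

    \maketitle

    Body text goes here.

    % "references" points to a placeholder references.bib file.
    \bibliographystyle{ACM-Reference-Format}
    \bibliography{references}

    \end{document}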


Review and Publication Policy

Papers will be evaluated on originality, impact, relevance, and clarity through a single-blind review process conducted by two to three program committee members. Accepted papers will be assigned to oral or poster sessions according to review outcomes and thematic balance.

This workshop is non-archival: accepted papers will not be indexed in the official ACM proceedings. Authors are therefore encouraged to submit early-stage or in-progress work to receive feedback before submitting to archival venues.

Attendance Policy

At least one author of each accepted paper must register for CIKM 2025 and present the paper on-site in Seoul, Korea, as scheduled in the program. In exceptional cases (e.g., visa denial or travel disruptions), the workshop follows the CIKM 2025 main conference attendance policy regarding alternatives such as remote presentation. More details can be found here: https://cikm2025.org/attend/attendance-policy

Invited Talks

Keynote I

TBA

Keynote II

Chang D. Yoo

KAIST, Korea

Fair Alignment in Large Vision Language Models

Keynote III

Maximilian Dreyer

Fraunhofer Heinrich Hertz Institute, Germany

Inspecting AI Like Engineers: From Explanation to Validation with SemanticLens

Program

  • 14:00 - 14:10   Opening Remarks: Jaesik Choi, General Chair (KAIST, Korea)
  • 14:10 - 14:40   Keynote I: To be announced (speaker TBA)
  • 14:40 - 15:10   Keynote II: "Fair Alignment in Large Vision Language Models", Chang D. Yoo (KAIST, Korea)
  • 15:10 - 15:50   Spotlights I (Session Chair)
  • 15:50 - 16:10   Coffee Break
  • 16:10 - 16:40   Poster Session (Session Chair)
  • 16:40 - 17:10   Keynote III: "Inspecting AI Like Engineers: From Explanation to Validation with SemanticLens", Maximilian Dreyer (Fraunhofer HHI, Germany)
  • 17:10 - 17:20   Best Paper Awards (Session Chair)
  • 17:20 - 17:50   Spotlights II (Session Chair)
  • 17:50 - 18:00   Closing Remarks

Organizing Committee

Program Committee

  • Jaesik Choi, KAIST, Korea
  • Bohyung Han, Seoul National University, Korea
  • Myoung-Wan Koo, Sogang University, Korea
  • Kyoungman Bae, ETRI, Korea
  • Chang D. Yoo, KAIST, Korea
  • Simon S. Woo, Sungkyunkwan University, Korea
  • Nari Kim, KAIST, Korea
  • Maximilian Dreyer, Fraunhofer HHI, Germany
  • Hwaran Lee, Sogang University, Korea
  • Sangwoo Heo, NAVER AI Risk Management Center, Korea
  • Seung-Hyun Lee, NAVER Cloud AI Lab, Korea
  • Hwarim Hyun, NAVER AI Risk Management Center, Korea
  • Seongwoo Kang, NAVER AI Risk Management Center, Korea
  • Joon Ho Kwak, TTA Center for Trustworthy AI and The AI Safety Institute, Korea
  • Sangkyun Lee, Korea University, Korea

Contact

For administrative inquiries

Jisun Kim (jisunkim@kaist.ac.kr)

Workshop Admin, KAIST

For program and submission inquiries

Nari Kim (nari.kim@kaist.ac.kr)

Program Committee, KAIST

The Microsoft CMT service was used for managing the peer-reviewing process for this conference. This service was provided for free by Microsoft and they bore all expenses, including costs for Azure cloud services as well as for software development and support.