Welcome to the

Workshop on Theory-of-Mind

at IJCAI 2025



About

ToM @ IJCAI 2025

Motivation

Theory of Mind (ToM) is the ability to reason about the minds of other agents. The main theme of our workshop is the computational modeling of ToM, with a special focus on the role of natural language in such modeling. Specifically, ToM 2025 focuses on the cognitive foundations and theories of ToM, the acquisition of ToM and its relationship with language, leveraging ToM to improve and explain NLP and ML models, and using ToM for positive social impact. This workshop aims to foster a community of researchers interested in improving the ability of intelligent agents to reason about others' mental states. Our program provides a space to discuss pathways for understanding and applying ToM in psycholinguistics, pragmatics, human value alignment, social good, model explainability, and many other areas of NLP. ToM 2025 will be a full-day hybrid in-person/virtual workshop with several keynote speeches and oral/poster/spotlight presentations, followed by a breakout discussion, a panel discussion, and the best paper award announcement. We also intend to host a mentoring program to broaden participation from a diverse set of researchers.

The ToM Workshop will be co-located with IJCAI 2025!

If you have any questions, feel free to contact us.

Calls

Call for papers

We welcome submissions of full papers as well as work in progress, including work that has been recently published or is currently under review.

In general, we encourage three types of papers:

  • Empirical papers: Submissions should present original research, case studies, or novel implementations in machine learning, artificial intelligence, natural language processing, and related areas.
  • Position papers: Authors are encouraged to submit papers that discuss critical and thought-provoking topics within the scientific community.
  • Thought pieces: Contributions in this category should provide insights, thought experiments, or discussions pertaining to theoretical or philosophical questions in machine learning and related disciplines. We welcome papers discussing the relationships between theory of mind and language acquisition, large language models, agency, subjectivity, embodiment, AI ethics and safety, social intelligence, and artificial intelligence, as well as contributions that address related questions or offer new perspectives on existing conversations.
Potential topics include:
  • Leveraging ToM for Machine Learning Applications (e.g., NLP, Robotics, CV)
  • Cognitive Science Perspectives of ToM
  • ToM for HCI / Human-AI collaboration
  • Surveys or replication of existing work
  • Social Impacts of ToM
Important Dates
TBD
Submission Guidelines

The ToM 2025 workshop will use CMT as the review platform.

Accepted papers will be presented as posters, and a subset of them will be selected for oral presentation. The ToM workshop at IJCAI 2025 will be held in a hybrid format. For virtual workshop attendees, we plan to use Zoom for the talks/panel and Gather for posters/socializing. To support the hybrid format, we will hold parallel meet-and-greet sessions online and in-person.

The paper template and style files can be found in the IJCAI Author Kit (please use LaTeX). Papers may be as short as 2 pages or as long as 8 pages (excluding references and appendices). Submissions must follow the template and style files and should be properly anonymized.

Dual Submission Policy

We welcome papers that have never been submitted, that are currently under review, or that have been recently published. Accepted papers will be published on the workshop homepage but will not be part of the official proceedings; the workshop is non-archival.

Program

Workshop schedule

🚪Room TBD

Time           Event
 8:55 -  9:00  Opening remarks
 9:00 -  9:45  Invited Talk (TBD)
 9:45 - 10:15  Oral Presentation #1
10:15 - 10:45  Break / Meet-and-greet
10:45 - 11:30  Invited Talk (TBD)
11:30 - 12:15  Invited Talk (TBD)
12:15 - 13:30  Lunch / Poster Session / Spotlight
13:30 - 14:15  Invited Talk (TBD)
14:15 - 14:45  Oral Presentation #2
14:45 - 15:30  Invited Talk (TBD)
15:30 - 16:00  Break / Meet-and-greet
16:00 - 17:00  Panel discussion [sli.do link]
17:00 - 17:10  Closing remarks / Best paper award

Talks

Invited Speakers and Panelists (Tentatively Confirmed)

Joyce Y. Chai

Joyce Y. Chai's research interests are in the area of natural language processing, situated dialogue agents, human-robot communication, and artificial intelligence. She's particularly interested in language processing that is sensorimotor-grounded, pragmatically-rich, and cognitively-motivated. Her recent work has focused on grounded language processing to facilitate situated communication with robots and other artificial agents.

Yejin Choi

Yejin Choi is a Professor at the Paul G. Allen School of Computer Science & Engineering at the University of Washington and also a senior research director at AI2, overseeing the Mosaic project. Her research investigates a wide variety of problems across NLP and AI, including commonsense knowledge and reasoning, neural language (de-)generation, language grounding with vision and experience, and AI for social good.

Diyi Yang

Diyi Yang is an assistant professor in the Computer Science Department at Stanford, affiliated with the Stanford NLP Group, Stanford HCI Group, Stanford AI Lab (SAIL), and Stanford Human-Centered Artificial Intelligence (HAI). She is interested in Computational Social Science, and Natural Language Processing. Her research goal is to better understand human communication in social context and build socially aware language technologies to support human-human and human-computer interaction.

Vered Shwartz

Vered Shwartz is an assistant professor of Computer Science at the University of British Columbia, and a CIFAR AI Chair at the Vector Institute. Her research interests focus on natural language processing, with the fundamental goal of building models capable of human-level understanding of natural language. She is interested in computational semantics and pragmatics, and commonsense reasoning. She is currently working on learning to uncover implicit meaning, which is abundant in human speech, developing machines with advanced reasoning skills, multimodal models, and culturally-aware NLP models.

Mina Lee

Mina Lee is an assistant professor at the University of Chicago. Previously, she was a postdoctoral researcher in the Computational Social Science group at Microsoft Research. She received her Ph.D. in Computer Science from Stanford University, where she was advised by Percy Liang. Her research is at the intersection of natural language processing (NLP) and human-computer interaction (HCI).

Organization

Workshop Organizers

  • Hao Zhu, Postdoc at Stanford University
  • Effat Farhana, Assistant Professor at Auburn University
  • Melanie Sclar, Ph.D. student at the University of Washington
  • Chenghao Yang, Ph.D. student at the University of Chicago
  • Jennifer Hu, Research Fellow at Harvard University
  • Bodhisattwa Prasad Majumder, Research Scientist at the Allen Institute for AI
  • Saujas Vaduguru, Ph.D. student at Carnegie Mellon University
  • Xuhui Zhou, Ph.D. student at Carnegie Mellon University
  • Hyunwoo Kim, Postdoctoral researcher at NVIDIA

Program Committee

  • Ute Schmid (University of Bamberg)
  • Yuwei "Emily" Bao (University of Michigan)
  • Tianmin Shu (MIT)
  • Melanie Sclar (University of Washington)
  • Luyao Yuan (Meta)
  • Yewen Pu (Autodesk Research)
  • Robert Hawkins (Princeton University, now University of Wisconsin-Madison)
  • Natalie Shapira (Bar-Ilan University)
  • Cathy Jiao (Carnegie Mellon University)
  • Minglu Zhao (University of California, Los Angeles)
  • Theodore Sumers (Princeton University)
  • Yuwei Sun (University of Tokyo)
  • Kartik Chandra (Massachusetts Institute of Technology)
  • Luca Bischetti (Istituto Universitario di Studi Superiori)
  • Lion Schulz (Max Planck Institute for Biological Cybernetics)
  • Siddhant Bhambri (Arizona State University)
  • Suhong Moon (University of California Berkeley)
  • Ziluo Ding (Peking University)
  • Renze Lou (Pennsylvania State University)
  • Ini Oguntola (Carnegie Mellon University)
  • Herbie Bradley (University of Cambridge)
  • Kai Zhang (Ohio State University, Columbus)
  • Shuwen Qiu (University of California, Los Angeles)
  • Guillaume Dumas (Université de Montréal)
  • Anfisa Chuganskaya (Lomonosov Moscow State University)
  • Mudit Verma (Arizona State University)
  • Alfredo Garcia (Texas A&M University - College Station)
  • Erin Grant (University College London)
  • Mine Caliskan (Eberhard-Karls-Universität Tübingen)
  • Laura Ruis (University College London, University of London)
  • Ece Takmaz (University of Amsterdam)
  • Cameron Jones (University of California, San Diego)
  • Minhae Kwon (Soongsil University)
  • Tan Zhi-Xuan (Massachusetts Institute of Technology)
  • Chaoqi Wang (University of Chicago)
  • Alexey Kovalev (AIRI)
  • Peter Dayan (Max-Planck Institute)
  • Dongsu Lee (Soongsil University)
  • Shane Steinert-Threlkeld (University of Washington, Seattle)
  • Eliza Kosoy (University of California Berkeley)
  • Soham Dinesh Tiwari (Carnegie Mellon University)