Call for Papers

To train machines to sensibly detect and recognize human emotions, we need valid emotion ground truths. A fundamental challenge here is the momentary elicitation and capture of emotion (MEEC) from individuals, continuously and in real time, without adversely affecting the user experience. In this half-day virtual CHI 2021 workshop, we will (1) hear participant talks and a keynote presentation, (2) ideate elicitation, sensing, and annotation techniques, and (3) create mappings of when to apply each elicitation method.

We seek contributions across disciplines that explore how emotions can be naturally elicited and captured in the moment. Topics include:

Emotion elicitation:

  • multi-modal (e.g., film, music) and multi-sensory (e.g., smell, taste, thermal) elicitation
  • emotion elicitation across domains (e.g., automotive, healthcare)
  • elicitation and immersiveness (e.g., AR/VR)
  • elicitation over time (e.g., mood)
  • ethical considerations

Emotion capture and annotation:

  • emotion models (dimensional, discrete)
  • annotation modalities (e.g., speech, gestures) and tools (e.g., questionnaires, ESMs)
  • devices (e.g., mobile, wearable) and sensors (e.g., RGB / thermal cameras, EEG, eye tracking)
  • attention considerations (e.g., interruptions)
  • ethical issues in tracking and detection

How to Participate

We invite position papers, posters, and demos (2-9 pages, including references) that describe or showcase emotion elicitation and/or capture methods. Submissions should be single-blind (i.e., not anonymized). Each submission will be peer-reviewed by two reviewers and selected based on its potential to spark discussion. Submissions should be prepared according to the ACM Master Article template (single column; see the CHI Publication Formats page) and submitted as a PDF through EasyChair. Accepted submissions will be made available on the workshop website. At least one author must register for the workshop ($30) and for one day of the conference ($100 early registration for ACM/SIGCHI members). See the CHI 2021 blog post on registration rates for details.

Submit [EasyChair]


For any additional questions, please contact us at: