Abstract
Most datasets for multimodal emotion recognition only have one emotion annotation for all the modalities combined, which serves as a gold standard for single modalities. This procedure ignores, however, the fact that each modality constitutes a unique perspective that contains its own clues. Moreover, as in unimodal emotion analysis, the perspectives of annotators can also diverge in a multimodal setup. In this paper, we therefore propose to annotate each modality independently and to more closely investigate how perspectives between modalities and annotators diverge. Moreover, we also explore the role of annotator training on perspectivism. We find that for the different unimodal levels, the annotations made on text resemble most closely those of the multimodal setup. Furthermore, we see that annotator training has a positive influence on the annotator agreement in modalities with lower agreement scores, but it also reduces the variety of perspectives. We therefore suggest that a moderate training which still values the individual perspectives of annotators might be beneficial before starting annotations. Finally, we observe that negative sentiment and emotions tend to be annotated more inconsistently across the different modality setups.
Keywords
Multimodal versus unimodal emotion annotation, Annotator agreement, Emotion analysis, Perspectivism in NLP

Downloads

  • Unimodalities Count as Perspectives in Multimodal Emotion Annotation.pdf: full text (Published version) | open access | PDF | 4.32 MB

Citation

Please use this URL to cite or link to this publication:

MLA
Du, Quanqi, et al. “Unimodalities Count as Perspectives in Multimodal Emotion Annotation.” Proceedings of the 2nd Workshop on Perspectivist Approaches to NLP (NLPerspectives 2023), Co-Located with the 26th European Conference on Artificial Intelligence (ECAI 2023), edited by Gavin Abercrombie et al., vol. 3494, CEUR-WS.org, 2023.
APA
Du, Q., Labat, S., Demeester, T., & Hoste, V. (2023). Unimodalities count as perspectives in multimodal emotion annotation. In G. Abercrombie, V. Basile, D. Bernardi, S. Dudy, S. Frenda, L. Havens, … S. Tonelli (Eds.), Proceedings of the 2nd Workshop on Perspectivist Approaches to NLP (NLPerspectives 2023), co-located with the 26th European Conference on Artificial Intelligence (ECAI 2023) (Vol. 3494). CEUR-WS.org.
Chicago author-date
Du, Quanqi, Sofie Labat, Thomas Demeester, and Veronique Hoste. 2023. “Unimodalities Count as Perspectives in Multimodal Emotion Annotation.” In Proceedings of the 2nd Workshop on Perspectivist Approaches to NLP (NLPerspectives 2023), Co-Located with the 26th European Conference on Artificial Intelligence (ECAI 2023), edited by Gavin Abercrombie, Valerio Basile, Davide Bernardi, Shiran Dudy, Simona Frenda, Lucy Havens, Elisa Leonardelli, and Sara Tonelli. Vol. 3494. CEUR-WS.org.
Chicago author-date (all authors)
Du, Quanqi, Sofie Labat, Thomas Demeester, and Veronique Hoste. 2023. “Unimodalities Count as Perspectives in Multimodal Emotion Annotation.” In Proceedings of the 2nd Workshop on Perspectivist Approaches to NLP (NLPerspectives 2023), Co-Located with the 26th European Conference on Artificial Intelligence (ECAI 2023), edited by Gavin Abercrombie, Valerio Basile, Davide Bernardi, Shiran Dudy, Simona Frenda, Lucy Havens, Elisa Leonardelli, and Sara Tonelli. Vol. 3494. CEUR-WS.org.
Vancouver
1. Du Q, Labat S, Demeester T, Hoste V. Unimodalities count as perspectives in multimodal emotion annotation. In: Abercrombie G, Basile V, Bernardi D, Dudy S, Frenda S, Havens L, et al., editors. Proceedings of the 2nd Workshop on Perspectivist Approaches to NLP (NLPerspectives 2023), co-located with the 26th European Conference on Artificial Intelligence (ECAI 2023). CEUR-WS.org; 2023.
IEEE
[1] Q. Du, S. Labat, T. Demeester, and V. Hoste, “Unimodalities count as perspectives in multimodal emotion annotation,” in Proceedings of the 2nd Workshop on Perspectivist Approaches to NLP (NLPerspectives 2023), co-located with the 26th European Conference on Artificial Intelligence (ECAI 2023), Kraków, Poland, 2023, vol. 3494.
@inproceedings{01HCCF3S0GRTT98SR5XBXDQPKV,
  abstract     = {{Most datasets for multimodal emotion recognition only have one emotion annotation for all the modalities combined, which serves as a gold standard for single modalities. This procedure ignores, however, the fact that each modality constitutes a unique perspective that contains its own clues. Moreover, as in unimodal emotion analysis, the perspectives of annotators can also diverge in a multimodal setup. In this paper, we therefore propose to annotate each modality independently and to more closely investigate how perspectives between modalities and annotators diverge. Moreover, we also explore the role of annotator training on perspectivism. We find that for the different unimodal levels, the annotations made on text resemble most closely those of the multimodal setup. Furthermore, we see that annotator training has a positive influence on the annotator agreement in modalities with lower agreement scores, but it also reduces the variety of perspectives. We therefore suggest that a moderate training which still values the individual perspectives of annotators might be beneficial before starting annotations. Finally, we observe that negative sentiment and emotions tend to be annotated more inconsistently across the different modality setups.}},
  articleno    = {{14}},
  author       = {{Du, Quanqi and Labat, Sofie and Demeester, Thomas and Hoste, Veronique}},
  booktitle    = {{Proceedings of the 2nd Workshop on Perspectivist Approaches to NLP (NLPerspectives 2023), co-located with the 26th European Conference on Artificial Intelligence (ECAI 2023)}},
  editor       = {{Abercrombie, Gavin and Basile, Valerio and Bernardi, Davide and Dudy, Shiran and Frenda, Simona and Havens, Lucy and Leonardelli, Elisa and Tonelli, Sara}},
  issn         = {{1613-0073}},
  keywords     = {{Multimodal versus unimodal emotion annotation, Annotator agreement, Emotion analysis, Perspectivism in NLP}},
  language     = {{eng}},
  location     = {{Kraków, Poland}},
  pages        = {{12}},
  publisher    = {{CEUR-WS.org}},
  title        = {{Unimodalities count as perspectives in multimodal emotion annotation}},
  url          = {{https://ceur-ws.org/Vol-3494/paper14.pdf}},
  volume       = {{3494}},
  year         = {{2023}},
}