
Aspect-based emotion analysis and multimodal coreference: a case study of customer comments on Adidas Instagram posts

Abstract
While aspect-based sentiment analysis of user-generated content has received considerable attention in recent years, emotion detection at the aspect level has been relatively unexplored. Moreover, given the rise of visual content on social media platforms, we aim to address the ever-growing share of multimodal content. In this paper, we present a multimodal dataset for Aspect-Based Emotion Analysis (ABEA). Additionally, we take the first steps in investigating the utility of multimodal coreference resolution in an ABEA framework. The presented dataset consists of 4,900 comments on 175 images and is annotated with aspect and emotion categories as well as the emotional dimensions of valence and arousal. Our preliminary experiments suggest that ABEA does not benefit from multimodal coreference resolution, and that aspect and emotion classification requires only textual information. However, when more specific information about the aspects is desired, image recognition could be essential.
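For illustration only, the sketch below shows one plausible way to represent a single record of the dataset described above (a customer comment linked to an Instagram image, annotated with an aspect category, an emotion category, and valence/arousal scores). The field names, label values, and score ranges are assumptions made for this sketch, not the authors' released annotation schema.

# Minimal sketch (assumed schema, not the authors' released format): one annotated
# record of a multimodal ABEA dataset -- a customer comment tied to an Instagram
# image, labelled with aspect/emotion categories and valence/arousal dimensions.
from dataclasses import dataclass

@dataclass
class ABEARecord:
    comment_id: str        # identifier of the customer comment (hypothetical field)
    image_id: str          # identifier of the Instagram post image it refers to
    text: str              # raw comment text
    aspect_category: str   # illustrative aspect label, e.g. "shoes"
    emotion_category: str  # illustrative emotion label, e.g. "joy"
    valence: float         # emotional dimension: negative (low) to positive (high)
    arousal: float         # emotional dimension: calm (low) to excited (high)

# Invented example values, purely for illustration
example = ABEARecord(
    comment_id="c0001",
    image_id="img042",
    text="These sneakers look amazing!",
    aspect_category="shoes",
    emotion_category="joy",
    valence=0.9,
    arousal=0.7,
)
print(example)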
Keywords
lt3, ABSA, ABEA, Sentiment Analysis, Emotion Detection, Multimodal Coreference

Downloads

  • 2022.lrec-1.61.pdf: full text (Published version) | open access | PDF | 407.46 KB

Citation


MLA
De Bruyne, Luna, et al. “Aspect-Based Emotion Analysis and Multimodal Coreference: A Case Study of Customer Comments on Adidas Instagram Posts.” Proceedings of the 13th Conference on Language Resources and Evaluation (LREC 2022), edited by Nicoletta Calzolari et al., ELRA, 2022, pp. 574–80.
APA
De Bruyne, L., Karimi, A., De Clercq, O., Prati, A., & Hoste, V. (2022). Aspect-based emotion analysis and multimodal coreference: A case study of customer comments on Adidas Instagram posts. In N. Calzolari, F. Béchet, P. Blache, K. Choukri, C. Cieri, T. Declerck, … S. Piperidis (Eds.), Proceedings of the 13th Conference on Language Resources and Evaluation (LREC 2022) (pp. 574–580). ELRA.
Chicago author-date
De Bruyne, Luna, Akbar Karimi, Orphée De Clercq, Andrea Prati, and Veronique Hoste. 2022. “Aspect-Based Emotion Analysis and Multimodal Coreference: A Case Study of Customer Comments on Adidas Instagram Posts.” In Proceedings of the 13th Conference on Language Resources and Evaluation (LREC 2022), edited by Nicoletta Calzolari, Frédéric Béchet, Philippe Blache, Khalid Choukri, Christopher Cieri, Thierry Declerck, Sara Goggi, et al., 574–80. ELRA.
Chicago author-date (all authors)
De Bruyne, Luna, Akbar Karimi, Orphée De Clercq, Andrea Prati, and Veronique Hoste. 2022. “Aspect-Based Emotion Analysis and Multimodal Coreference: A Case Study of Customer Comments on Adidas Instagram Posts.” In Proceedings of the 13th Conference on Language Resources and Evaluation (LREC 2022), edited by Nicoletta Calzolari, Frédéric Béchet, Philippe Blache, Khalid Choukri, Christopher Cieri, Thierry Declerck, Sara Goggi, Hitoshi Isahara, Bente Maegaard, Joseph Mariani, Hélène Mazo, Jan Odijk, and Stelios Piperidis, 574–580. ELRA.
Vancouver
1. De Bruyne L, Karimi A, De Clercq O, Prati A, Hoste V. Aspect-based emotion analysis and multimodal coreference: a case study of customer comments on Adidas Instagram posts. In: Calzolari N, Béchet F, Blache P, Choukri K, Cieri C, Declerck T, et al., editors. Proceedings of the 13th Conference on Language Resources and Evaluation (LREC 2022). ELRA; 2022. p. 574–80.
IEEE
[1] L. De Bruyne, A. Karimi, O. De Clercq, A. Prati, and V. Hoste, “Aspect-based emotion analysis and multimodal coreference: a case study of customer comments on Adidas Instagram posts,” in Proceedings of the 13th Conference on Language Resources and Evaluation (LREC 2022), Marseille, 2022, pp. 574–580.
BibTeX
@inproceedings{8764279,
  abstract     = {{While aspect-based sentiment analysis of user-generated content has received considerable attention in recent years, emotion detection at the aspect level has been relatively unexplored. Moreover, given the rise of visual content on social media platforms, we aim to address the ever-growing share of multimodal content. In this paper, we present a multimodal dataset for Aspect-Based Emotion Analysis (ABEA). Additionally, we take the first steps in investigating the utility of multimodal coreference resolution in an ABEA framework. The presented dataset consists of 4,900 comments on 175 images and is annotated with aspect and emotion categories as well as the emotional dimensions of valence and arousal. Our preliminary experiments suggest that ABEA does not benefit from multimodal coreference resolution, and that aspect and emotion classification requires only textual information. However, when more specific information about the aspects is desired, image recognition could be essential.}},
  author       = {{De Bruyne, Luna and Karimi, Akbar and De Clercq, Orphée and Prati, Andrea and Hoste, Veronique}},
  booktitle    = {{Proceedings of the 13th Conference on Language Resources and Evaluation (LREC 2022)}},
  editor       = {{Calzolari, Nicoletta and Béchet, Frédéric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, Hélène and Odijk, Jan and Piperidis, Stelios}},
  isbn         = {{9791095546726}},
  keywords     = {{lt3,ABSA,ABEA,Sentiment Analysis,Emotion Detection,Multimodal Coreference}},
  language     = {{eng}},
  location     = {{Marseille}},
  pages        = {{574--580}},
  publisher    = {{ELRA}},
  title        = {{Aspect-based emotion analysis and multimodal coreference: a case study of customer comments on Adidas Instagram posts}},
  year         = {{2022}},
}
