
It's absolutely divine! Can fine-grained sentiment analysis benefit from coreference resolution?

Orphée De Clercq (UGent) and Veronique Hoste (UGent)
Abstract
While it has been claimed that anaphora or coreference resolution plays an important role in opinion mining, it is not clear to what extent coreference resolution actually boosts performance, if at all. In this paper, we investigate the potential added value of coreference resolution for the aspect-based sentiment analysis of restaurant reviews in two languages, English and Dutch. We focus on the task of aspect category classification and investigate whether including coreference information prior to classification to resolve implicit aspect mentions is beneficial. Because coreference resolution is not a solved task in NLP, we rely on both automatically derived and gold-standard coreference relations, allowing us to investigate the true upper bound. By training a classifier on a combination of lexical and semantic features, we show that resolving the coreferential relations prior to classification is beneficial in a joint optimization setup. However, this is only the case when relying on gold-standard relations, and the effect is more pronounced for English than for Dutch. When validating the optimal models, however, we found that only the Dutch pipeline achieves satisfactory performance on a held-out test set, and it does so regardless of whether coreference information was included.
Keywords
LT3

Downloads

  • 2020.crac-1.2.pdf: full text (Published version) | open access | PDF | 286.02 KB

Citation

Please use this URL to cite or link to this publication:

MLA
De Clercq, Orphée, and Veronique Hoste. “It’s Absolutely Divine! Can Fine-Grained Sentiment Analysis Benefit from Coreference Resolution?” Proceedings of the Third Workshop on Computational Models of Reference, Anaphora and Coreference (CRAC 2020), Association for Computational Linguistics (ACL), 2020, pp. 11–21.
APA
De Clercq, O., & Hoste, V. (2020). It’s absolutely divine! Can fine-grained sentiment analysis benefit from coreference resolution? Proceedings of the Third Workshop on Computational Models of Reference, Anaphora and Coreference (CRAC 2020), 11–21. Association for Computational Linguistics (ACL).
Chicago author-date
De Clercq, Orphée, and Veronique Hoste. 2020. “It’s Absolutely Divine! Can Fine-Grained Sentiment Analysis Benefit from Coreference Resolution?” In Proceedings of the Third Workshop on Computational Models of Reference, Anaphora and Coreference (CRAC 2020), 11–21. Association for Computational Linguistics (ACL).
Vancouver
1. De Clercq O, Hoste V. It’s absolutely divine! Can fine-grained sentiment analysis benefit from coreference resolution? In: Proceedings of the Third Workshop on Computational Models of Reference, Anaphora and Coreference (CRAC 2020). Association for Computational Linguistics (ACL); 2020. p. 11–21.
IEEE
[1] O. De Clercq and V. Hoste, “It’s absolutely divine! Can fine-grained sentiment analysis benefit from coreference resolution?,” in Proceedings of the Third Workshop on Computational Models of Reference, Anaphora and Coreference (CRAC 2020), Barcelona, Spain (online), 2020, pp. 11–21.
BibTeX
@inproceedings{8684343,
  abstract     = {{While it has been claimed that anaphora or coreference resolution plays an important role in opinion mining, it is not clear to what extent coreference resolution actually boosts performance, if at all. In this paper, we investigate the potential added value of coreference resolution for the aspect-based sentiment analysis of restaurant reviews in two languages, English and Dutch. We focus on the task of aspect category classification and investigate whether including coreference information prior to classification to resolve implicit aspect mentions is beneficial. Because coreference resolution is not a solved task in NLP, we rely on both automatically-derived and gold-standard coreference relations, allowing us to investigate the true upper bound. By training a classifier on a combination of lexical and semantic features, we show that resolving the coreferential relations prior to classification is beneficial in a joint optimization setup. However, this is only the case when relying on gold-standard relations and the result is more outspoken for English than for Dutch. When validating the optimal models, however, we found that only the Dutch pipeline is able to achieve a satisfying performance on a held-out test set and does so regardless of whether coreference information was included.}},
  author       = {{De Clercq, Orphée and Hoste, Veronique}},
  booktitle    = {{Proceedings of the Third Workshop on Computational Models of Reference, Anaphora and Coreference (CRAC 2020)}},
  isbn         = {{9781952148354}},
  keywords     = {{LT3}},
  language     = {{eng}},
  location     = {{Barcelona, Spain (online)}},
  pages        = {{11--21}},
  publisher    = {{Association for Computational Linguistics (ACL)}},
  title        = {{It's absolutely divine! Can fine-grained sentiment analysis benefit from coreference resolution?}},
  url          = {{https://www.aclweb.org/anthology/2020.crac-1.2.pdf}},
  year         = {{2020}},
}