
What Does BERT actually learn about event coreference? Probing structural information in a fine-tuned Dutch language model

Loic De Langhe (UGent), Orphée De Clercq (UGent), and Veronique Hoste (UGent)
Abstract
We probe structural and discourse aspects of coreferential relationships in a fine-tuned Dutch BERT event coreference model. Previous research has suggested that no such knowledge is encoded in BERT-based models and the classification of coreferential relationships ultimately rests on outward lexical similarity. While we show that BERT can encode a (very) limited number of these discourse aspects (thus disproving assumptions in earlier research), we also note that knowledge of many structural features of coreferential relationships is absent from the encodings generated by the fine-tuned BERT model.
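
For readers unfamiliar with probing classifiers, the sketch below illustrates the general recipe the abstract alludes to: freeze a BERT encoder, extract vector representations for mention pairs, and train a simple linear classifier to predict a structural property of the pair. If even a linear probe recovers the property from the frozen encodings, that information is (linearly) present in the model; chance-level probe accuracy suggests it is absent, which is the pattern the paper reports for many structural features. Everything concrete below is an illustrative assumption rather than the paper's setup: the checkpoint (GroNLP/bert-base-dutch-cased, a generic Dutch BERT, not the authors' fine-tuned coreference model), the probed feature, and the toy sentence pairs are all stand-ins.

# Minimal probing sketch (illustrative only; not the paper's actual setup).
# Assumed stand-ins: the GroNLP/bert-base-dutch-cased checkpoint instead of
# the authors' fine-tuned coreference model, and a hypothetical binary
# structural feature with two toy Dutch mention pairs.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

tokenizer = AutoTokenizer.from_pretrained("GroNLP/bert-base-dutch-cased")
model = AutoModel.from_pretrained("GroNLP/bert-base-dutch-cased")
model.eval()

def pair_embedding(text_a: str, text_b: str) -> torch.Tensor:
    """Encode two mention contexts as one sequence; return the [CLS] vector."""
    inputs = tokenizer(text_a, text_b, return_tensors="pt",
                       truncation=True, max_length=128)
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs.last_hidden_state[0, 0]  # frozen [CLS] encoding

# Toy labelled pairs: 1 = the pair exhibits the (hypothetical) structural
# property being probed, 0 = it does not.
pairs = [
    ("De brand brak uit in de haven.", "Het vuur verwoestte drie loodsen.", 1),
    ("De brand brak uit in de haven.", "De verkiezingen vinden zondag plaats.", 0),
]
X = torch.stack([pair_embedding(a, b) for a, b, _ in pairs]).numpy()
y = [label for _, _, label in pairs]

# Linear probe: fit a logistic regression on the frozen encodings.
probe = LogisticRegression(max_iter=1000).fit(X, y)
print("probe accuracy:", accuracy_score(y, probe.predict(X)))

In practice a probe is evaluated on held-out pairs against a random or majority-class baseline, with selectivity controls so that the probe's own capacity does not inflate the result.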

Downloads

  • 2023.insights-1.13.pdf: full text (published version) | open access | PDF | 133.66 KB

Citation

Please use this URL to cite or link to this publication:

MLA
De Langhe, Loic, et al. “What Does BERT Actually Learn about Event Coreference? Probing Structural Information in a Fine-Tuned Dutch Language Model.” Proceedings of the Fourth Workshop on Insights from Negative Results in NLP, edited by Shabnam Tafreshi et al., Association for Computational Linguistics, 2023, pp. 103–08.
APA
De Langhe, L., De Clercq, O., & Hoste, V. (2023). What Does BERT actually learn about event coreference? Probing structural information in a fine-tuned Dutch language model. In S. Tafreshi, A. Akula, J. Sedoc, A. Drozd, A. Rogers, & A. Rumshisky (Eds.), Proceedings of the Fourth Workshop on Insights from Negative Results in NLP (pp. 103–108). Dubrovnik: Association for Computational Linguistics.
Chicago author-date
De Langhe, Loic, Orphée De Clercq, and Veronique Hoste. 2023. “What Does BERT Actually Learn about Event Coreference? Probing Structural Information in a Fine-Tuned Dutch Language Model.” In Proceedings of the Fourth Workshop on Insights from Negative Results in NLP, edited by Shabnam Tafreshi, Arjun Akula, João Sedoc, Aleksandr Drozd, Anna Rogers, and Anna Rumshisky, 103–8. Dubrovnik: Association for Computational Linguistics.
Chicago author-date (all authors)
De Langhe, Loic, Orphée De Clercq, and Veronique Hoste. 2023. “What Does BERT Actually Learn about Event Coreference? Probing Structural Information in a Fine-Tuned Dutch Language Model.” In Proceedings of the Fourth Workshop on Insights from Negative Results in NLP, edited by Shabnam Tafreshi, Arjun Akula, João Sedoc, Aleksandr Drozd, Anna Rogers, and Anna Rumshisky, 103–108. Dubrovnik: Association for Computational Linguistics.
Vancouver
1. De Langhe L, De Clercq O, Hoste V. What Does BERT actually learn about event coreference? Probing structural information in a fine-tuned Dutch language model. In: Tafreshi S, Akula A, Sedoc J, Drozd A, Rogers A, Rumshisky A, editors. Proceedings of the Fourth Workshop on Insights from Negative Results in NLP. Dubrovnik: Association for Computational Linguistics; 2023. p. 103–8.
IEEE
[1] L. De Langhe, O. De Clercq, and V. Hoste, “What Does BERT actually learn about event coreference? Probing structural information in a fine-tuned Dutch language model,” in Proceedings of the Fourth Workshop on Insights from Negative Results in NLP, Dubrovnik, Croatia, 2023, pp. 103–108.
@inproceedings{01HXC16RPW27PKYSG4NG093FZG,
  abstract     = {{We probe structural and discourse aspects of coreferential relationships in a fine-tuned Dutch BERT event coreference model. Previous research has suggested that no such knowledge is encoded in BERT-based models and the classification of coreferential relationships ultimately rests on outward lexical similarity. While we show that BERT can encode a (very) limited number of these discourse aspects (thus disproving assumptions in earlier research), we also note that knowledge of many structural features of coreferential relationships is absent from the encodings generated by the fine-tuned BERT model.}},
  author       = {{De Langhe, Loic and De Clercq, Orphée and Hoste, Veronique}},
  booktitle    = {{Proceedings of the Fourth Workshop on Insights from Negative Results in NLP}},
  editor       = {{Tafreshi, Shabnam and Akula, Arjun and Sedoc, João and Drozd, Aleksandr and Rogers, Anna and Rumshisky, Anna}},
  isbn         = {{9781959429494}},
  language     = {{eng}},
  location     = {{Dubrovnik, Croatia}},
  pages        = {{103--108}},
  publisher    = {{Association for Computational Linguistics}},
  title        = {{What Does BERT actually learn about event coreference? Probing structural information in a fine-tuned Dutch language model}},
  year         = {{2023}},
}