Comparing MT approaches for text normalization
- Author
- Claudia Matos Veliz, Orphée De Clercq (UGent) and Veronique Hoste (UGent)
- Abstract
- One of the main characteristics of social media data is the use of non-standard language. Since NLP tools have been trained on traditional text material, their performance drops when applied to social media data. One way to overcome this is to first perform text normalization. In this work, we apply text normalization to noisy English and Dutch text coming from different genres: text messages, message board posts and tweets. We consider the normalization task as a Machine Translation problem and test the two leading paradigms: statistical and neural machine translation. For SMT we explore the added value of varying background corpora for training the language model. For NMT we have a look at data augmentation since the parallel datasets we are working with are limited in size. Our results reveal that when relying on SMT to perform the normalization, it is beneficial to use a background corpus that is close to the genre to be normalized. Regarding NMT, we find that the translations - or normalizations - coming out of this model are far from perfect and that for a low-resource language like Dutch adding additional training data works better than artificially augmenting the data.
- Keywords
- LT3
Downloads
- (...).pdf: full text (Published version), UGent only, 728.85 KB
Citation
Please use this url to cite or link to this publication: http://hdl.handle.net/1854/LU-8629116
- MLA
- Matos Veliz, Claudia, et al. “Comparing MT Approaches for Text Normalization.” Proceedings of Recent Advances in Natural Language Processing (RANLP 2019) : Natural Language Processing in a Deep Learning World, 2019, pp. 740–49.
- APA
- Matos Veliz, C., De Clercq, O., & Hoste, V. (2019). Comparing MT approaches for text normalization. Proceedings of Recent Advances in Natural Language Processing (RANLP 2019) : Natural Language Processing in a Deep Learning World, 740–749.
- Chicago author-date
- Matos Veliz, Claudia, Orphée De Clercq, and Veronique Hoste. 2019. “Comparing MT Approaches for Text Normalization.” In Proceedings of Recent Advances in Natural Language Processing (RANLP 2019) : Natural Language Processing in a Deep Learning World, 740–49.
- Chicago author-date (all authors)
- Matos Veliz, Claudia, Orphée De Clercq, and Veronique Hoste. 2019. “Comparing MT Approaches for Text Normalization.” In Proceedings of Recent Advances in Natural Language Processing (RANLP 2019) : Natural Language Processing in a Deep Learning World, 740–749.
- Vancouver
- 1. Matos Veliz C, De Clercq O, Hoste V. Comparing MT approaches for text normalization. In: Proceedings of Recent Advances in Natural Language Processing (RANLP 2019) : natural language processing in a deep learning world. 2019. p. 740–9.
- IEEE
- [1] C. Matos Veliz, O. De Clercq, and V. Hoste, “Comparing MT approaches for text normalization,” in Proceedings of Recent Advances in Natural Language Processing (RANLP 2019) : natural language processing in a deep learning world, Varna, Bulgaria, 2019, pp. 740–749.
@inproceedings{8629116,
  abstract  = {{One of the main characteristics of social media data is the use of non-standard language. Since NLP tools have been trained on traditional text material, their performance drops when applied to social media data. One way to overcome this is to first perform text normalization. In this work, we apply text normalization to noisy English and Dutch text coming from different genres: text messages, message board posts and tweets. We consider the normalization task as a Machine Translation problem and test the two leading paradigms: statistical and neural machine translation. For SMT we explore the added value of varying background corpora for training the language model. For NMT we have a look at data augmentation since the parallel datasets we are working with are limited in size. Our results reveal that when relying on SMT to perform the normalization, it is beneficial to use a background corpus that is close to the genre to be normalized. Regarding NMT, we find that the translations - or normalizations - coming out of this model are far from perfect and that for a low-resource language like Dutch adding additional training data works better than artificially augmenting the data.}},
  author    = {{Matos Veliz, Claudia and De Clercq, Orphée and Hoste, Veronique}},
  booktitle = {{Proceedings of Recent Advances in Natural Language Processing (RANLP 2019) : natural language processing in a deep learning world}},
  isbn      = {{9789544520557}},
  issn      = {{1313-8502}},
  keywords  = {{LT3}},
  language  = {{eng}},
  location  = {{Varna, Bulgaria}},
  pages     = {{740--749}},
  title     = {{Comparing MT approaches for text normalization}},
  url       = {{http://lml.bas.bg/ranlp2019/proceedings-ranlp-2019.pdf}},
  year      = {{2019}},
}