
Using the crowd for readability prediction

Orphée De Clercq (UGent), Veronique Hoste (UGent), Bart Desmet (UGent), Philip van Oosten (UGent), Martine De Cock (UGent) and Lieve Macken (UGent)
(2014) NATURAL LANGUAGE ENGINEERING. 20(3). p.293-325
Abstract
While human annotation is crucial for many natural language processing tasks, it is often very expensive and time-consuming. Inspired by previous work on crowdsourcing, we investigate the viability of using non-expert labels instead of gold standard annotations from experts for a machine learning approach to automatic readability prediction. In order to do so, we evaluate two different methodologies to assess the readability of a wide variety of text material: a more traditional setup in which expert readers make readability judgments, and a crowdsourcing setup for users who are not necessarily experts. For this purpose two assessment tools were implemented: a tool with which expert readers can rank a batch of texts based on readability, and a lightweight crowdsourcing tool that invites users to provide pairwise comparisons. To validate this approach, readability assessments for a corpus of written Dutch generic texts were gathered. By collecting multiple assessments per text, we explicitly wanted to level out readers' background knowledge and attitude. Our findings show that the assessments collected through both methodologies are highly consistent and that crowdsourcing is a viable alternative to expert labelling. This is good news, as crowdsourcing is more lightweight to use and can reach a much wider audience of potential annotators. By performing a set of basic machine learning experiments using a feature set that mainly encodes basic lexical and morpho-syntactic information, we further illustrate how the collected data can be used to perform text comparisons or to assign an absolute readability score to an individual text. We do not focus on optimising the algorithms to achieve the best possible results for the learning tasks, but carry them out to illustrate the various possibilities of our data sets. The results on different data sets, however, show that our system outperforms the readability formulas and a baseline language modelling approach. We conclude that readability assessment by comparing texts is a versatile methodology, which can be adapted to specific domains and target audiences if required.
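
The crowdsourcing tool described above collects only relative (pairwise) judgments, so per-text readability scores have to be derived from them. The sketch below is not the authors' code: it is a minimal Python illustration, on invented data, of one common way to do this, fitting a Bradley-Terry model to (more readable, less readable) pairs. All identifiers and the sample judgments are hypothetical.

# Illustrative only: aggregate crowdsourced pairwise readability judgments
# into per-text scores with a Bradley-Terry model. This is NOT the procedure
# published in the paper; data and identifiers are invented.
from collections import defaultdict

def bradley_terry(judgments, iterations=100):
    """Estimate a readability strength per text from pairwise judgments.

    judgments : iterable of (winner_id, loser_id) tuples, where the winner
                is the text judged more readable in that comparison
    returns   : dict mapping text id -> normalised strength (higher = easier)
    """
    wins = defaultdict(int)          # number of comparisons each text "won"
    pair_counts = defaultdict(int)   # number of comparisons per unordered pair
    texts = set()
    for winner, loser in judgments:
        wins[winner] += 1
        pair_counts[frozenset((winner, loser))] += 1
        texts.update((winner, loser))

    # Start from uniform strengths and refine with the standard MM updates;
    # a small pseudo-count keeps texts that never won above zero.
    strength = {t: 1.0 for t in texts}
    for _ in range(iterations):
        new_strength = {}
        for t in texts:
            denom = sum(
                pair_counts[frozenset((t, other))] / (strength[t] + strength[other])
                for other in texts
                if other != t and frozenset((t, other)) in pair_counts
            )
            new_strength[t] = (wins[t] + 0.01) / denom
        total = sum(new_strength.values())
        strength = {t: s / total for t, s in new_strength.items()}
    return strength

if __name__ == "__main__":
    # Hypothetical judgments: ("A", "B") means text A was judged easier than B.
    votes = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "B"), ("C", "B")]
    for text, score in sorted(bradley_terry(votes).items(), key=lambda kv: -kv[1]):
        print(f"{text}: {score:.3f}")

From scores obtained in some such way, texts can either be compared with one another or assigned an absolute readability value, which corresponds to the two learning tasks mentioned in the abstract.
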
Keywords
crowdsourcing, difficulty, generic text, language, machine learning, information, readability prediction, texts, formulas

Downloads

  • (...).pdf (full text, UGent only, PDF, 1.14 MB)

Citation

Please use this url to cite or link to this publication:

MLA
De Clercq, Orphée, et al. “Using the Crowd for Readability Prediction.” NATURAL LANGUAGE ENGINEERING, vol. 20, no. 3, 2014, pp. 293–325, doi:10.1017/S1351324912000344.
APA
De Clercq, O., Hoste, V., Desmet, B., van Oosten, P., De Cock, M., & Macken, L. (2014). Using the crowd for readability prediction. NATURAL LANGUAGE ENGINEERING, 20(3), 293–325. https://doi.org/10.1017/S1351324912000344
Chicago author-date
De Clercq, Orphée, Veronique Hoste, Bart Desmet, Philip van Oosten, Martine De Cock, and Lieve Macken. 2014. “Using the Crowd for Readability Prediction.” NATURAL LANGUAGE ENGINEERING 20 (3): 293–325. https://doi.org/10.1017/S1351324912000344.
Chicago author-date (all authors)
De Clercq, Orphée, Veronique Hoste, Bart Desmet, Philip van Oosten, Martine De Cock, and Lieve Macken. 2014. “Using the Crowd for Readability Prediction.” NATURAL LANGUAGE ENGINEERING 20 (3): 293–325. doi:10.1017/S1351324912000344.
Vancouver
1. De Clercq O, Hoste V, Desmet B, van Oosten P, De Cock M, Macken L. Using the crowd for readability prediction. NATURAL LANGUAGE ENGINEERING. 2014;20(3):293–325.
IEEE
[1] O. De Clercq, V. Hoste, B. Desmet, P. van Oosten, M. De Cock, and L. Macken, “Using the crowd for readability prediction,” NATURAL LANGUAGE ENGINEERING, vol. 20, no. 3, pp. 293–325, 2014.
BibTeX
@article{3072683,
  author       = {{De Clercq, Orphée and Hoste, Veronique and Desmet, Bart and van Oosten, Philip and De Cock, Martine and Macken, Lieve}},
  issn         = {{1469-8110}},
  journal      = {{NATURAL LANGUAGE ENGINEERING}},
  keywords     = {{crowdsourcing,difficulty,generic text,language,machine learning,information,readability prediction,texts,formulas}},
  language     = {{eng}},
  number       = {{3}},
  pages        = {{293--325}},
  title        = {{Using the crowd for readability prediction}},
  url          = {{https://doi.org/10.1017/S1351324912000344}},
  volume       = {{20}},
  year         = {{2014}},
}
