
Learning to reuse distractors to support multiple choice question generation in education

Abstract
Multiple-choice questions (MCQs) are widely used in digital learning systems, as they allow for automating the assessment process. However, owing to the increased digital literacy of students and the advent of social media platforms, MCQ tests are widely shared online, and teachers are continuously challenged to create new questions, which is an expensive and time-consuming task. A particularly sensitive aspect of MCQ creation is to devise relevant distractors, i.e., wrong answers that are not easily identifiable as being wrong. This article studies how a large existing set of manually created answers and distractors for questions over a variety of domains, subjects, and languages can be leveraged to help teachers in creating new MCQs, by the smart reuse of existing distractors. We built several data-driven models based on context-aware question and distractor representations and compared them with static feature-based models. The proposed models are evaluated with automated metrics and in a realistic user test with teachers. Both automatic and human evaluations indicate that context-aware models consistently outperform a static feature-based approach. For our best-performing context-aware model, on average, three distractors out of the ten shown to teachers were rated as high-quality distractors. We create a performance benchmark, and make it public, to enable comparison between different approaches and to introduce a more standardized evaluation of the task. The benchmark contains a test set of 298 educational questions covering multiple subjects and languages and a 77k multilingual pool of distractor vocabulary for future research.
Keywords
Computer Science Applications, General Engineering, Education, transformers, online learning, natural language processing (NLP), multiple-choice question (MCQ), Distractor generation, Benchmark testing, Guidelines, Vocabulary, Agricultural machinery, Semantics, Task analysis, Context modeling
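As a purely illustrative sketch (not code from the paper): the core idea in the abstract, ranking an existing pool of distractors against a new question's context, can be mimicked with a toy bag-of-words similarity in place of the transformer-based context-aware representations the authors actually use. The example question, answer, and candidate pool below are all hypothetical.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy bag-of-words context vector; a crude stand-in for the paper's
    # context-aware (transformer) question and distractor representations.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_distractors(question, answer, pool, k=3):
    # Score every candidate distractor against the question-plus-answer
    # context and return the k most similar ones for the teacher to review.
    context = embed(question + " " + answer)
    return sorted(pool, key=lambda d: cosine(context, embed(d)), reverse=True)[:k]

pool = ["the cell wall", "a steam engine", "the nucleus", "chlorophyll"]
print(rank_distractors("Which organelle is known as the powerhouse of the cell?",
                       "the mitochondrion", pool, k=2))
# → ['the cell wall', 'the nucleus']
```

In the paper's actual setting, the ranking models are trained on a large multilingual set of manually created MCQs, so relevance goes well beyond this kind of lexical overlap.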

Downloads

  • AAM.pdf: full text (Accepted manuscript) | open access | PDF | 14.33 MB
  • (...).pdf: full text (Published version) | UGent only | PDF | 1.46 MB

Citation

Please use this url to cite or link to this publication:

MLA
Bitew, Semere Kiros, et al. “Learning to Reuse Distractors to Support Multiple Choice Question Generation in Education.” IEEE TRANSACTIONS ON LEARNING TECHNOLOGIES, vol. 17, 2024, pp. 375–90, doi:10.1109/tlt.2022.3226523.
APA
Bitew, S. K., Hadifar, A., Sterckx, L., Deleu, J., Develder, C., & Demeester, T. (2024). Learning to reuse distractors to support multiple choice question generation in education. IEEE TRANSACTIONS ON LEARNING TECHNOLOGIES, 17, 375–390. https://doi.org/10.1109/tlt.2022.3226523
Chicago author-date
Bitew, Semere Kiros, Amir Hadifar, Lucas Sterckx, Johannes Deleu, Chris Develder, and Thomas Demeester. 2024. “Learning to Reuse Distractors to Support Multiple Choice Question Generation in Education.” IEEE TRANSACTIONS ON LEARNING TECHNOLOGIES 17: 375–90. https://doi.org/10.1109/tlt.2022.3226523.
Chicago author-date (all authors)
Bitew, Semere Kiros, Amir Hadifar, Lucas Sterckx, Johannes Deleu, Chris Develder, and Thomas Demeester. 2024. “Learning to Reuse Distractors to Support Multiple Choice Question Generation in Education.” IEEE TRANSACTIONS ON LEARNING TECHNOLOGIES 17: 375–390. doi:10.1109/tlt.2022.3226523.
Vancouver
1. Bitew SK, Hadifar A, Sterckx L, Deleu J, Develder C, Demeester T. Learning to reuse distractors to support multiple choice question generation in education. IEEE TRANSACTIONS ON LEARNING TECHNOLOGIES. 2024;17:375–90.
IEEE
[1] S. K. Bitew, A. Hadifar, L. Sterckx, J. Deleu, C. Develder, and T. Demeester, “Learning to reuse distractors to support multiple choice question generation in education,” IEEE TRANSACTIONS ON LEARNING TECHNOLOGIES, vol. 17, pp. 375–390, 2024.
@article{01GQ8AVSQN2JJ7WNRS5NBXZY74,
  abstract     = {{Multiple-choice questions (MCQs) are widely used in digital learning systems, as they allow for automating the assessment process. However, owing to the increased digital literacy of students and the advent of social media platforms, MCQ tests are widely shared online, and teachers are continuously challenged to create new questions, which is an expensive and time-consuming task. A particularly sensitive aspect of MCQ creation is to devise relevant distractors, i.e., wrong answers that are not easily identifiable as being wrong. This article studies how a large existing set of manually created answers and distractors for questions over a variety of domains, subjects, and languages can be leveraged to help teachers in creating new MCQs, by the smart reuse of existing distractors. We built several data-driven models based on context-aware question and distractor representations and compared them with static feature-based models. The proposed models are evaluated with automated metrics and in a realistic user test with teachers. Both automatic and human evaluations indicate that context-aware models consistently outperform a static feature-based approach. For our best-performing context-aware model, on average, three distractors out of the ten shown to teachers were rated as high-quality distractors. We create a performance benchmark, and make it public, to enable comparison between different approaches and to introduce a more standardized evaluation of the task. The benchmark contains a test set of 298 educational questions covering multiple subjects and languages and a 77k multilingual pool of distractor vocabulary for future research.}},
  author       = {{Bitew, Semere Kiros and Hadifar, Amir and Sterckx, Lucas and Deleu, Johannes and Develder, Chris and Demeester, Thomas}},
  issn         = {{1939-1382}},
  journal      = {{IEEE TRANSACTIONS ON LEARNING TECHNOLOGIES}},
  keywords     = {{Computer Science Applications,General Engineering,Education,transformers,online learning,natural language processing (NLP),multiple-choice question (MCQ),Distractor generation,Benchmark testing,Guidelines,Vocabulary,Agricultural machinery,Semantics,Task analysis,Context modeling}},
  language     = {{eng}},
  pages        = {{375--390}},
  title        = {{Learning to reuse distractors to support multiple choice question generation in education}},
  url          = {{https://doi.org/10.1109/tlt.2022.3226523}},
  volume       = {{17}},
  year         = {{2024}},
}
