
Comparative judgment within online assessment: exploring students’ feedback reactions

Abstract
Theoretical framework

Feedback on students’ competences is a valuable resource since it aims to facilitate learning and enhance performance. In practice, students mostly receive marks on their tasks based on several predefined criteria. This implies that they are evaluated against absolute standards, which might restrict feedback. Additionally, personal influences of the assessor can affect these assessments. Given this, some authors argue in favor of an alternative method, such as comparative judgment (CJ) (e.g. Pollitt, 2012). In this method, several assessors independently compare representations produced by different students and decide each time which of them demonstrates the best performance of the given competence. One of the strengths of this holistic approach is that it rules out personal standards, leading to higher consistency in judgments across assessors (Bramley, 2007; Pollitt, 2012). An online tool (e.g. D-PAC) facilitates the whole assessment procedure. When all judgments are completed, all representations are ranked on an interval scale ranging from the poorest to the best performance. Based on this scale, students’ feedback can be provided.

Up until now, no research has been conducted on CJ-based feedback. Additionally, no research has investigated this type of feedback provided by an online tool. Since honest, relevant, and trustworthy feedback is vital for learning, the question arises how students will perceive CJ-based feedback.

Central research goal(s), problem(s) and/or question(s)

In light of the research gap described above, we pose the following pioneering research questions:
• Is feedback provided by an online tool using CJ perceived as honest?
• Is feedback provided by an online tool using CJ perceived as relevant?
• Is feedback provided by an online tool using CJ perceived as trustworthy?
Research methods

Personal feedback reports on the academic writing competence were constructed from the online generated CJ output of 40 secondary school students. Since the developed digital tool does not yet provide an electronic feedback report, the feedback report was presented on paper. Reports were handed to each individual student during school hours, and students went through the report independently. Next, a semi-structured interview was conducted to investigate the research questions above. Additionally, the time students needed to consult their report was recorded to assess feedback acceptance.

Results and main conclusions

Data collection and analysis are currently ongoing. Final results will be presented at the conference.

Implications for research and practice

We expect our results to give insight into how students perceive feedback that is provided by an online tool and based upon CJ. Additionally, the feedback will be optimized in order to increase learning.

References

Bramley, T. (2007). Paired comparisons methods. In P. Newton, J.-A. Baird, H. Goldstein, H. Patrick, & P. Tymms (Eds.), Techniques for monitoring the comparability of examination standards (pp. 246–294). London: Qualifications and Curriculum Authority.

Pollitt, A. (2012). The method of Adaptive Comparative Judgment. Assessment in Education: Principles, Policy & Practice, 19(3), 1–20. doi:10.1080/0969594X.2012.665354
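The interval scale mentioned in the abstract is typically obtained by fitting a Bradley-Terry-style model to the pairwise judgments, so that each representation receives a score and the probability of winning a comparison depends on the score difference. The sketch below is illustrative only, not the D-PAC implementation; the judgment data and all names are hypothetical.

```python
import math

# Hypothetical pairwise judgments: (winner, loser) pairs of student representations.
judgments = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "C"), ("B", "A"), ("A", "B")]

def bradley_terry(pairs, iters=500, lr=0.05):
    """Estimate scores so that P(i beats j) = sigmoid(score_i - score_j),
    via gradient ascent on the Bradley-Terry log-likelihood."""
    items = sorted({x for pair in pairs for x in pair})
    score = {i: 0.0 for i in items}
    for _ in range(iters):
        grad = {i: 0.0 for i in items}
        for winner, loser in pairs:
            # Model probability that the observed winner beats the observed loser.
            p = 1.0 / (1.0 + math.exp(score[loser] - score[winner]))
            grad[winner] += 1.0 - p
            grad[loser] -= 1.0 - p
        for i in items:
            score[i] += lr * grad[i]
    # Center the scale: only score differences are identified by the model.
    mean = sum(score.values()) / len(score)
    return {i: s - mean for i, s in score.items()}

scores = bradley_terry(judgments)
ranking = sorted(scores, key=scores.get, reverse=True)  # best to poorest
```

With the toy data above, representation A wins most of its comparisons and C loses all of them, so the fitted scale orders them A, B, C; feedback reports can then locate each student on that scale.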
Keywords
pairwise comparison, feedback

Citation

Please use this URL to cite or link to this publication:

MLA
Mortier, Anneleen, et al. “Comparative Judgment within Online Assessment: Exploring Students’ Feedback Reactions.” International Computer Assisted Assessment Conference, Abstracts, 2015.
APA
Mortier, A., Lesterhuis, M., Vlerick, P., & De Maeyer, S. (2015). Comparative judgment within online assessment: exploring students’ feedback reactions. International Computer Assisted Assessment Conference, Abstracts. Presented at the International Computer Assisted Assessment Conference, Zeist, The Netherlands.
Chicago author-date
Mortier, Anneleen, Marije Lesterhuis, Peter Vlerick, and Sven De Maeyer. 2015. “Comparative Judgment within Online Assessment: Exploring Students’ Feedback Reactions.” In International Computer Assisted Assessment Conference, Abstracts.
Vancouver
1.
Mortier A, Lesterhuis M, Vlerick P, De Maeyer S. Comparative judgment within online assessment: exploring students’ feedback reactions. In: International Computer Assisted Assessment Conference, Abstracts. 2015.
IEEE
[1]
A. Mortier, M. Lesterhuis, P. Vlerick, and S. De Maeyer, “Comparative judgment within online assessment: exploring students’ feedback reactions,” in International Computer Assisted Assessment Conference, Abstracts, Zeist, The Netherlands, 2015.
@inproceedings{6966598,
  abstract     = {{Theoretical framework

Feedback on students’ competences is a valuable resource since it aims to facilitate learning and enhance performance. In practice, students mostly receive marks on their tasks based on several predefined criteria. This implies that they are evaluated against absolute standards, which might restrict feedback. Additionally, personal influences of the assessor can affect these assessments. Given this, some authors argue in favor of an alternative method, such as comparative judgment (CJ) (e.g. Pollitt, 2012). In this method, several assessors independently compare representations produced by different students and decide each time which of them demonstrates the best performance of the given competence. One of the strengths of this holistic approach is that it rules out personal standards, leading to higher consistency in judgments across assessors (Bramley, 2007; Pollitt, 2012). An online tool (e.g. D-PAC) facilitates the whole assessment procedure. When all judgments are completed, all representations are ranked on an interval scale ranging from the poorest to the best performance. Based on this scale, students’ feedback can be provided.
Up until now, no research has been conducted on CJ-based feedback. Additionally, no research has investigated this type of feedback provided by an online tool. Since honest, relevant, and trustworthy feedback is vital for learning, the question arises how students will perceive CJ-based feedback.

Central research goal(s), problem(s) and/or question(s)

In light of the research gap described above, we pose the following pioneering research questions:
•	Is feedback provided by an online tool using CJ perceived as honest? 
•	Is feedback provided by an online tool using CJ perceived as relevant?
•	Is feedback provided by an online tool using CJ perceived as trustworthy?

Research methods

Personal feedback reports on the academic writing competence were constructed from the online generated CJ output of 40 secondary school students. Since the developed digital tool does not yet provide an electronic feedback report, the feedback report was presented on paper. Reports were handed to each individual student during school hours, and students went through the report independently. Next, a semi-structured interview was conducted to investigate the research questions above. Additionally, the time students needed to consult their report was recorded to assess feedback acceptance.

Results and main conclusions

Data collection and analysis are currently ongoing. Final results will be presented at the conference.

Implications for research and practice

We expect our results to give insight into how students perceive feedback that is provided by an online tool and based upon CJ. Additionally, the feedback will be optimized in order to increase learning.


References
Bramley, T. (2007). Paired comparisons methods. In P. Newton, J.-A. Baird, H. Goldstein, H. Patrick, & P. Tymms (Eds.), Techniques for monitoring the comparability of examination standards (pp. 246–294). London: Qualifications and Curriculum Authority.
Pollitt, A. (2012). The method of Adaptive Comparative Judgment. Assessment in Education: Principles, Policy & Practice, 19(3), 1–20. doi:10.1080/0969594X.2012.665354}},
  author       = {{Mortier, Anneleen and Lesterhuis, Marije and Vlerick, Peter and De Maeyer, Sven}},
  booktitle    = {{International Computer Assisted Assessment Conference, Abstracts}},
  keywords     = {{pairwise comparison,Feedback}},
  language     = {{eng}},
  location     = {{Zeist, The Netherlands}},
  title        = {{Comparative judgment within online assessment: exploring students’ feedback reactions}},
  year         = {{2015}},
}