Virtually all human activity involves collaboration, yet collaboration during an examination is typically considered cheating. Collaborative assessments have not been widely adopted because of the perceived lack of individual accountability and the notion that collaboration during assessments simply propagates correct answers; on this view, collaboration would help weaker students without providing much benefit to stronger ones. In this paper, we examine student performance in open-ended, two-stage collaborative assessments comprising an individually accountable round followed by an automatically scored, collaborative round. We show that collaboration entails more than the mere propagation of correct answers. We find greater rates of correct answers after collaboration for all students, including the strongest members of a team, and half of the teams that begin without a correct answer to propagate still arrive at the correct answer in the collaborative round. These findings, combined with the convenience of automatic feedback and grading of open-ended questions, provide a strong argument for adopting collaborative assessments as an integral part of education.
PHYSICS EDUCATION RESEARCH | March 01 2017
Collaborative exams: Cheating? Or learning?
Hyewon Jang, Nathaniel Lasry, Kelly Miller, Eric Mazur; Collaborative exams: Cheating? Or learning? Am. J. Phys. 1 March 2017; 85 (3): 223–227. https://doi.org/10.1119/1.4974744