This article describes how open-ended verbal creativity assessments are commonly administered to elementary-aged children in psychological research and educational practice. The authors modeled the predictors of inter-rater disagreement in a large dataset (387 elementary school students and 10,449 individual item responses) of children's creativity assessment responses.
The UNT College of Education prepares professionals and scholars who contribute to the advancement of education, health, and human development. Programs in the college prepare teachers, leaders, physical activity and health specialists, educational researchers, recreational leaders, child development and family studies specialists, doctoral faculty, counselors, and special and gifted education teachers and leaders.
Physical Description
20 p.
Notes
Abstract: Open-ended verbal creativity assessments are commonly administered in psychological research and in educational practice to elementary-aged children. Children's responses are then typically rated by teams of judges who are trained to identify original ideas, hopefully with a degree of inter-rater agreement. Even in cases where the judges are reliable, some residual disagreement on the originality of the responses is inevitable. Here, we modeled the predictors of inter-rater disagreement in a large (i.e., 387 elementary school students and 10,449 individual item responses) dataset of children's creativity assessment responses. Our five trained judges rated the responses with a high degree of consistency reliability (α = 0.844), but we undertook this study to predict the residual disagreement. We used an adaptive LASSO model to predict 72% of the variance in our judges' residual disagreement and found that there were certain types of responses on which our judges tended to disagree more. The main effects in our model showed that responses that were less original, more elaborate, prompted by a Uses task, from younger children, or from male students, were all more difficult for the judges to rate reliably. Among the interaction effects, we found that our judges were also more likely to disagree on highly original responses from Gifted/Talented students, responses from Latinx students who were identified as English Language Learners, or responses from Asian students who took a lot of time on the task. Given that human judgments such as these are currently being used to train artificial intelligence systems to rate responses to creativity assessments, we believe understanding their nuances is important.
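The modeling step described in the abstract (an adaptive LASSO predicting residual disagreement) can be illustrated with a minimal sketch. The code below is not the authors' analysis; it assumes a hypothetical tabular predictor matrix and placeholder tuning values (gamma, alpha), and implements the adaptive LASSO in the common two-stage way: initial coefficients are estimated first and then used as penalty weights by rescaling the standardized predictors before an ordinary LASSO fit.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso
from sklearn.preprocessing import StandardScaler

# Hypothetical data: one row per item response; columns stand in for predictors
# such as originality, elaboration, task type, age, and gender. y stands in for
# a residual inter-rater disagreement score on that response.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))                                   # placeholder predictors
y = X @ rng.normal(size=8) + rng.normal(scale=0.5, size=1000)    # placeholder outcome

# Stage 1: initial coefficient estimates (here via OLS on standardized predictors)
# used to build the adaptive penalty weights.
X_std = StandardScaler().fit_transform(X)
beta_init = LinearRegression().fit(X_std, y).coef_

# Stage 2: adaptive LASSO = ordinary LASSO on predictors rescaled by |beta_init|**gamma,
# so predictors with weak initial effects receive a heavier penalty.
gamma = 1.0                                  # assumed tuning value
weights = np.abs(beta_init) ** gamma
X_weighted = X_std * weights                 # broadcasting scales each column

lasso = Lasso(alpha=0.05).fit(X_weighted, y)  # assumed penalty strength

# Map the selected coefficients back to the standardized predictor scale.
beta_adaptive = lasso.coef_ * weights
print(np.round(beta_adaptive, 3))
```

In practice the penalty strength would be chosen by cross-validation and the predictors would include the response-level and student-level variables named in the abstract; the sketch only shows the weighting mechanics.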
This article is part of the following collection of related materials.
UNT Scholarly Works
Materials from the UNT community's research, creative, and scholarly activities and UNT's Open Access Repository. Access to some items in this collection may be restricted.
Dumas, Denis; Acar, Selcuk; Berthiaume, Kelly; Organisciak, Peter; Eby, David; Grajzel, Katalin et al. "What Makes Children's Responses to Creativity Assessments Difficult to Judge Reliably?," article, May 26, 2023; (https://digital.library.unt.edu/ark:/67531/metadc2201621/: accessed May 30, 2024), University of North Texas Libraries, UNT Digital Library, https://digital.library.unt.edu; crediting UNT College of Education.