What Makes Children's Responses to Creativity Assessments Difficult to Judge Reliably?

PDF Version Also Available for Download.

Description

Open-ended verbal creativity assessments are commonly administered to elementary-aged children in psychological research and educational practice. The authors modeled the predictors of inter-rater disagreement in a large dataset (387 elementary school students; 10,449 individual item responses) of children's creativity assessment responses.

Physical Description

20 p.

Creation Information

Dumas, Denis; Acar, Selcuk; Berthiaume, Kelly; Organisciak, Peter; Eby, David; Grajzel, Katalin et al. May 26, 2023.

Context

This article is part of the collection entitled: UNT Scholarly Works and was provided by the UNT College of Education to the UNT Digital Library, a digital repository hosted by the UNT Libraries. More information about this article can be viewed below.

Who

People and organizations associated with either the creation of this article or its content.

Provided By

UNT College of Education

The UNT College of Education prepares professionals and scholars who contribute to the advancement of education, health, and human development. Programs in the college prepare teachers, leaders, physical activity and health specialists, educational researchers, recreational leaders, child development and family studies specialists, doctoral faculty, counselors, and special and gifted education teachers and leaders.

What

Descriptive information to help identify this article.

Notes

Abstract: Open-ended verbal creativity assessments are commonly administered in psychological research and in educational practice to elementary-aged children. Children's responses are then typically rated by teams of judges who are trained to identify original ideas, hopefully with a degree of inter-rater agreement. Even in cases where the judges are reliable, some residual disagreement on the originality of the responses is inevitable. Here, we modeled the predictors of inter-rater disagreement in a large (i.e., 387 elementary school students and 10,449 individual item responses) dataset of children's creativity assessment responses. Our five trained judges rated the responses with a high degree of consistency reliability (α = 0.844), but we undertook this study to predict the residual disagreement. We used an adaptive LASSO model to predict 72% of the variance in our judges' residual disagreement and found that there were certain types of responses on which our judges tended to disagree more. The main effects in our model showed that responses that were less original, more elaborate, prompted by a Uses task, from younger children, or from male students, were all more difficult for the judges to rate reliably. Among the interaction effects, we found that our judges were also more likely to disagree on highly original responses from Gifted/Talented students, responses from Latinx students who were identified as English Language Learners, or responses from Asian students who took a lot of time on the task. Given that human judgments such as these are currently being used to train artificial intelligence systems to rate responses to creativity assessments, we believe understanding their nuances is important.

Source

  • Journal of Creative Behavior, 57(3), John Wiley & Sons, May 26, 2023, pp. 419-438

Publication Information

  • Publication Title: Journal of Creative Behavior
  • Volume: 57
  • Issue: 3
  • Page Start: 419
  • Page End: 438
  • Peer Reviewed: Yes

Collections

This article is part of the following collection of related materials.

UNT Scholarly Works

Materials from the UNT community's research, creative, and scholarly activities and UNT's Open Access Repository. Access to some items in this collection may be restricted.

When

Dates and time periods associated with this article.

Creation Date

  • May 26, 2023

Added to The UNT Digital Library

  • Dec. 14, 2023, 5:14 a.m.

Description Last Updated

  • Jan. 8, 2024, 11:01 a.m.

Dumas, Denis; Acar, Selcuk; Berthiaume, Kelly; Organisciak, Peter; Eby, David; Grajzel, Katalin et al. What Makes Children's Responses to Creativity Assessments Difficult to Judge Reliably?, article, May 26, 2023; (https://digital.library.unt.edu/ark:/67531/metadc2201621/: accessed May 30, 2024), University of North Texas Libraries, UNT Digital Library, https://digital.library.unt.edu; crediting UNT College of Education.
