Computational modeling of human reasoning processes for interpretable visual knowledge: a case study with radiographers

PDF Version Also Available for Download.

Description

Article proposing a computational method to quantify and dissect visual reasoning. The method characterizes spatial and temporal features and identifies common and contrast visual reasoning patterns to extract significant gaze activities. The visual reasoning patterns are explainable and can be compared among different groups to discover strategy differences. Empirical observations show that the method can capture the temporal and spatial features of human visual attention and distinguish expertise level. By revealing task-related reasoning processes, this method demonstrates potential for explaining human visual understanding.

Physical Description

11 p.

Creation Information

Li, Yuan; Cao, Hongfei; Allen, Carla M.; Wang, Xin; Erdelez, Sanda & Shyu, Chi-Ren. December 10, 2020.

Context

This article is part of the collection entitled: UNT Scholarly Works and was provided by the UNT College of Information to the UNT Digital Library, a digital repository hosted by the UNT Libraries. More information about this article can be viewed below.

Who

People and organizations associated with either the creation of this article or its content.

Provided By

UNT College of Information

Situated at the intersection of people, technology, and information, the College of Information's faculty, staff and students invest in innovative research, collaborative partnerships, and student-centered education to serve a global information society. The college offers programs of study in information science, learning technologies, and linguistics.

What

Descriptive information to help identify this article. Follow the links below to find similar items on the Digital Library.


Notes

Abstract: Visual reasoning is critical in many complex visual tasks in medicine such as radiology or pathology. It is challenging to explicitly explain reasoning processes due to the dynamic nature of real-time human cognition. A deeper understanding of such reasoning processes is necessary for improving diagnostic accuracy and computational tools. Most computational analysis methods for visual attention utilize black-box algorithms which lack explainability and are therefore limited in understanding the visual reasoning processes. In this paper, we propose a computational method to quantify and dissect visual reasoning. The method characterizes spatial and temporal features and identifies common and contrast visual reasoning patterns to extract significant gaze activities. The visual reasoning patterns are explainable and can be compared among different groups to discover strategy differences. Experiments with radiographers of varied levels of expertise on 10 levels of visual tasks were conducted. Our empirical observations show that the method can capture the temporal and spatial features of human visual attention and distinguish expertise level. The extracted patterns are further examined and interpreted to showcase key differences between expertise levels in the visual reasoning processes. By revealing task-related reasoning processes, this method demonstrates potential for explaining human visual understanding.
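The abstract describes characterizing spatial and temporal features of gaze activity and comparing patterns across expertise groups. The authors' actual algorithm is not reproduced here, but the general flavor of such feature extraction can be illustrated with a minimal sketch; the `Fixation` record, the two statistics, and the sample scan paths below are all illustrative assumptions, not the paper's method:

```python
from dataclasses import dataclass
from statistics import mean
import math

@dataclass
class Fixation:
    x: float         # horizontal gaze position (pixels)
    y: float         # vertical gaze position (pixels)
    duration: float  # dwell time at this fixation (ms)

def spatial_dispersion(fixations):
    """Root-mean-square distance of fixations from their centroid.

    A small value suggests tightly focused attention; a large value
    suggests a more exploratory scan pattern.
    """
    cx = mean(f.x for f in fixations)
    cy = mean(f.y for f in fixations)
    return math.sqrt(mean((f.x - cx) ** 2 + (f.y - cy) ** 2 for f in fixations))

def temporal_profile(fixations):
    """Return (mean dwell time, total scan duration) in ms."""
    durations = [f.duration for f in fixations]
    return mean(durations), sum(durations)

# Hypothetical scan paths for two readers of the same radiograph
expert = [Fixation(100, 120, 300), Fixation(110, 125, 280), Fixation(105, 118, 320)]
novice = [Fixation(50, 60, 150), Fixation(300, 400, 140), Fixation(180, 90, 160)]

for label, scan in (("expert", expert), ("novice", novice)):
    disp = spatial_dispersion(scan)
    mean_dwell, total = temporal_profile(scan)
    print(f"{label}: dispersion={disp:.1f}px  mean_dwell={mean_dwell:.0f}ms  total={total:.0f}ms")
```

Comparing such per-group summaries (e.g. experts showing lower dispersion and longer dwell times) is one simple way that spatial and temporal gaze features can distinguish expertise levels, in the spirit of the comparisons the abstract describes.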

Source

  • Scientific Reports, 10, Springer Nature, December 10, 2020, p. 1-11


Publication Information

  • Publication Title: Scientific Reports
  • Volume: 10
  • Article Identifier: 21620 (2020)
  • Pages: 11
  • Peer Reviewed: Yes

Collections

This article is part of the following collection of related materials.

UNT Scholarly Works

Materials from the UNT community's research, creative, and scholarly activities and UNT's Open Access Repository. Access to some items in this collection may be restricted.

When

Dates and time periods associated with this article.

Creation Date

  • December 10, 2020

Added to The UNT Digital Library

  • May 27, 2022, 6:01 a.m.

Description Last Updated

  • May 31, 2022, 12:39 p.m.




Li, Yuan; Cao, Hongfei; Allen, Carla M.; Wang, Xin; Erdelez, Sanda & Shyu, Chi-Ren. Computational modeling of human reasoning processes for interpretable visual knowledge: a case study with radiographers, article, December 10, 2020; (https://digital.library.unt.edu/ark:/67531/metadc1934204/: accessed June 10, 2024), University of North Texas Libraries, UNT Digital Library, https://digital.library.unt.edu; crediting UNT College of Information.
