Document Type

Journal Article

Publication Date

1-1-2023

Journal

OTO Open

Volume

7

Issue

4

DOI

10.1002/oto2.94

Abstract

OBJECTIVE: To quantify ChatGPT's concordance with expert Otolaryngologists when posed with high-level questions that require blending rote memorization and critical thinking.

STUDY DESIGN: Cross-sectional survey.

SETTING: OpenAI's ChatGPT-3.5 Platform.

METHODS: Two board-certified otolaryngologists (HZ, RS) input 2 sets of 30 text-based questions (open-ended and single-answer multiple-choice) into the ChatGPT-3.5 model. Responses were rated on a scale (correct, partially correct, incorrect) by each otolaryngologist working simultaneously with the AI model. Interrater agreement percentages were analyzed using the binomial distribution to calculate 95% confidence intervals and perform significance tests. Statistical significance was defined as P < .05.
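For illustration only, the following is a minimal sketch of how a binomial 95% confidence interval for a response-accuracy proportion might be computed; the specific counts (17 of 30, approximating the 56.7% figure reported below) and the use of SciPy's exact (Clopper-Pearson) interval are assumptions, not details taken from the article.

```python
from scipy.stats import binomtest

# Hypothetical example: 17 of 30 open-ended questions answered with
# complete accuracy (~56.7%). Not the authors' code or data pipeline.
result = binomtest(k=17, n=30)
ci = result.proportion_ci(confidence_level=0.95, method="exact")

print(f"Observed proportion: {result.statistic:.1%}")   # k / n
print(f"95% CI: [{ci.low:.1%}, {ci.high:.1%}]")         # Clopper-Pearson bounds
```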

RESULTS: In testing open-ended questions, the ChatGPT model had a 56.7% chance of initially answering questions with complete accuracy and an 86.7% chance of answering with some accuracy (corrected agreement = 80.1%).

CONCLUSION: ChatGPT currently does not provide reliably accurate responses to sophisticated questions in Otolaryngology. Professional societies must be aware of the potential of this tool, prevent unscrupulous use during test-taking situations, and consider guidelines for clinical scenarios. Expert clinical oversight is still necessary for myriad use cases (eg, hallucination).

Comments

This is an open access article under the terms of the Creative Commons Attribution‐NonCommercial‐NoDerivs License, which permits use and distribution in any medium, provided the original work is properly cited, the use is non‐commercial and no modifications or adaptations are made.

Creative Commons License

This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License.

Peer Reviewed

1

Open Access

1

Department

Pediatrics
