ChatGPT’s Performance on the Brief Electricity and Magnetism Assessment

In this study, we tested the performance of ChatGPT-4 on the concept inventory Brief Electricity and Magnetism Assessment (BEMA) to understand its potential as an educational tool in physics, especially for tasks requiring visual interpretation. Our results indicate that ChatGPT-4 achieves an overall score comparable to that of undergraduate students in introductory electromagnetism courses. However, its performance differed significantly from the students’ on tasks involving complex visual elements, such as electrical circuits and magnetic field diagrams. While ChatGPT-4 was proficient at proposing correct physical reasoning, it struggled to interpret visual information accurately. These findings suggest that although ChatGPT-4 can be a useful supplementary tool for students, it should not be relied upon as a primary tutor for subjects that depend heavily on visual interpretation. Instead, it may be more effective in a peer role, where students critically evaluate its outputs. Further research should focus on improving ChatGPT’s visual processing capabilities and on exploring its role in diverse educational contexts.

Identifier: oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:uu-530114
Date: January 2024
Creators: Melin, Jakob; Önerud, Elias
Publisher: Uppsala universitet, Fysikundervisningens didaktik
Source Sets: DiVA Archive at Upsalla University
Language: English
Detected Language: English
Type: Student thesis, info:eu-repo/semantics/bachelorThesis, text
Format: application/pdf
Rights: info:eu-repo/semantics/openAccess
Relation: FYSAST ; FYSPROJ1343