Dear Editor,

Chat Generative Pre-trained Transformer (ChatGPT) is a language model that uses training data to learn the statistical structure of language and predict its output (Ramponi 2022). We compared the performance of ChatGPT on a final-year applied knowledge test against a cohort of final-year medical students assessed in 2020 at Imperial College School of Medicine. The test was administered remotely in an open-book format and consisted of 300 single-best-answer questions from the Medical Schools Council question bank (Sam et al. 2020). Each question was submitted as a new conversation through the New Chat interface, and the first answer generated by ChatGPT was evaluated against the correct answer for that question.

ChatGPT passed the knowledge test with a score of 63.33%, exceeding the pass mark of 57.05%, even though it has not yet been optimised for this purpose. By comparison, the 2019/2020 final-year student cohort achieved a mean score of 77.08%. ChatGPT also correctly answered 40% (n = 6) of the questions involving image recognition from the text prompt alone.

These results have important implications for medical education. Artificial intelligence (AI) systems such as ChatGPT can provide near-instantaneous access to comprehensive information and individualised feedback to students in a formative setting, which has the potential to improve student engagement and learning outcomes.