This version is not peer-reviewed.
Submitted: 30 January 2025
Posted: 31 January 2025
In this paper we explore ChatGPT's ability to produce a summary, a précis, and/or an essay on the basis of excerpts from a novel, The Solid Mandala, by the Nobel Prize-winning Australian writer Patrick White. We use a series of prompts to test functions related to narrative analysis from the point of view of the "sujet", the "fable", and the style. We extensively illustrate a number of recurrent hallucinations that can severely harm the understanding of the novel's contents. We compile a list of 12 different types of mistakes or hallucinations made by ChatGPT, then test Gemini on the same 12 mistakes and find a marked improvement on all critical issues. The conclusion for ChatGPT is mostly negative. As an underlying hypothesis for its worse performance, we point to the influence of vocabulary size, which in Gemma 2 is seven times larger than in GPT.
Supplementary material: supplementary.docx (80.62 KB)