Preprint Article

How ChatGPT’s Hallucinations (Compared to Gemini’s) Impact Text Summarization with Literary Text

This version is not peer-reviewed.

Submitted: 30 January 2025

Posted: 31 January 2025

Abstract

In this paper we explore ChatGPT's ability to produce a summary, a précis, and/or an essay on the basis of excerpts from a novel, The Solid Mandala, by the Nobel Prize-winning Australian writer Patrick White. We use a series of prompts to test functions related to narrative analysis from the point of view of the “sujet”, the “fable”, and the style. We illustrate extensively a number of recurrent hallucinations that can seriously harm the understanding of the novel's contents, and we compile a list of 12 different types of mistakes or hallucinations that ChatGPT made. We then test Gemini on the same 12 types of mistakes and find a marked improvement on all critical issues. The conclusion for ChatGPT is mostly negative. As an underlying hypothesis for its poorer performance, we point to the influence of vocabulary size, which in Gemma 2 is seven times larger than in GPT.

Keywords: 
ChatGPT Prompts; Narrative Theory; Semantic Theory; Modality and Factuality; Temporal Reordering
Subject: 
Arts and Humanities  -   Literature and Literary Theory
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.
