The Ability of ChatGPT to Aid in the Rapid Development of Inoculation Message Treatments: A Case Study and Recommendations
It is tempting to think of AI technology as monolithic, but as Getchell et al. (2022) and others have noted, AI technologies come in many forms and functions. One of the most widely known types of AI technology in recent years is ChatGPT. ChatGPT is a state-of-the-art language model that has revolutionized natural language processing by generating human-like text with context and coherence, enabling new possibilities for human-AI interaction (Brown et al., 2020). Its impressive performance in various language tasks and benchmarks has established it as one of the leading language models in the world (Chen et al., 2021). ChatGPT’s advanced language modeling capabilities have the potential to transform the way we interact with computers and machines by enabling more natural and intuitive communication (Wahde & Virgolin, 2022), and it has many potential applications in the workplace, including in marketing and sales, customer service, operations, information technology, engineering, legal analysis, finance, and research and development (Chui et al., 2022; Davenport & Mittal, 2022). Pre-training on massive amounts of text data has equipped ChatGPT with the ability to understand the nuances of language and generate highly accurate responses, even in complex and ambiguous contexts (Radford et al., 2021). Additionally, ChatGPT’s ability to learn from both structured and unstructured data makes it a highly flexible and versatile conversational AI tool (Wei et al., 2022). To widen the scope of broad-based inoculation campaigns in real-world contexts, this study conceives of ChatGPT as an AI-assisted writing platform that may aid inoculation message design and development.
van Dis et al. (2023) explain that “ChatGPT has caused excitement and controversy because it is one of the first models that can convincingly converse with its users in English and other languages on a wide range of topics” (p. 224). This manuscript documents a case study of a 10-week generative AI training initiative to determine whether ChatGPT could (a) identify the structural elements and features of inoculation messages (i.e., threats and refutations); (b) augment inoculation message design features (i.e., linguistic signatures, language, and length); (c) adapt messages for specific audiences (i.e., younger populations, experts); and finally, (d) independently replicate the inoculation message development process on a topic of its choosing.
Generative AI and Inoculation Treatment Message Development
Some experts forecast that communication will be dramatically influenced by AI-assisted writing and other forms of generative AI. Davenport and Mittal (2022) predict the following:
We are now only scratching the surface of what generative AI can do for organizations and the people within them. It may soon be standard practice, for example, for such systems to craft most or all of our written or image-based content—to provide first drafts of emails, letters, articles, computer programs, reports, blog posts, presentations, videos, and so forth. (para. 27)
One type of message we can imagine generative AI facilitating is inoculation messaging, a strategic and rhetorical approach built on inoculation theory that employs a messaging strategy to build resistance to future attempts at persuasion. That is, inoculation messages are part of a special type of persuasive campaign whose goal is building resistance to other, future persuasive messages. This analysis seeks to determine the ability of generative AI programs to aid in the rapid development of inoculation treatments that may be needed to protect organizational reputations in crisis scenarios, to promote and market specific products in competitive consumer environments, and to assert organizational positions on controversial issues.
Inoculation theory offers an explanation and messaging strategy for how a position (like an attitude, or a belief) can be made more resistant to influence (Compton, 2013; McGuire, 1964). The theory’s name explains it: Just as a body can be made more resistant to viruses through exposure to weakened forms of the virus (e.g., a conventional flu shot), a mind can be made more resistant to persuasion and other influences through exposure to weakened forms of the persuasion (e.g., pairing counterarguments with refutations; Compton, 2013; McGuire, 1964).
Threat plays a key role in both medical and communication inoculation (Compton, 2021). In the context of inoculation, threat refers to the recognition that a held position is vulnerable to change. In terms of medical inoculation, threat is what motivates the production of antibodies or other immune system defenses. In terms of communication inoculation, threat is what motivates the production of refutations and/or other cognitive and affective defenses.
There are two main ways threat is generated in a communication inoculation treatment: explicit forewarning and the presence of counterarguments (Compton, 2013; McGuire, 1964). When an explicit forewarning is used, it is usually positioned at the start of an inoculation treatment message. It raises the idea that the position message recipients currently hold will likely be challenged, and that some of these challenges will be so strong that recipients will likely waver on their position. The threat generated from such a forewarning is referred to as explicit threat. Another way threat is generated, even when an explicit forewarning is not used, is from the mere presence of counterarguments, as explained next.
In a medical inoculation, a weakened version of a virus is injected into the body to motivate the production of antibodies and/or other immune responses. In a communication inoculation, a weakened version of a counterargument is presented to motivate the production of refutations and/or other immune responses. To weaken a counterargument, inoculation message designers often pair a few counterarguments (2-3) with refutations of these counterarguments, creating weakened counterarguments (counterarguments + refutations). In addition to modeling counterarguing for message recipients (i.e., they learn how to refute counterarguments against their position), the presence of counterarguments also elicits threat, or the recognition that their held position might be vulnerable to change (Compton, 2013; McGuire, 1964). This type of threat is referred to as implicit threat.
In conventional inoculation treatments, both explicit and implicit threat are generated by an explicit forewarning (“You have the right position on this issue, but there are people who will try to change your mind, and some of their arguments can seem so strong that you will waver”) and the presence of counterarguments (“Some people will try to tell you . . . However, they are wrong because . . .”), respectively.
It is important to note that the protection conferred by inoculation messaging is not limited to the specific counterarguments raised and refuted in the inoculation treatment message (Banas & Rains, 2010; McGuire, 1964). Raising and refuting two or three counterarguments confers protection against theoretically any counterargument, and in some instances, this umbrella of protection has extended to similar issues (e.g., inoculating against pressure to engage in unprotected sex also confers resistance to pressure to engage in binge drinking; Parker et al., 2012).
An inoculation theory-based messaging approach has been successfully applied to a number of campaigns and issues since the early 1960s, extending well beyond the limits of the noncontroversial issues explored in the earliest inoculation theory work (see McGuire, 1964). Communication campaigns have conferred resistance to influence in the contexts of health communication (e.g., Richards & Banas, 2015; see Compton et al., 2016, for a review), science communication (e.g., Cook et al., 2017; see Compton et al., 2021, for a review), public relations (e.g., Mikolon et al., 2015; see Compton et al., 2021, for a review), and politics (e.g., Pfau & Burgoon, 1988; see Compton & Ivanov, 2012, for a review), among others.
More recently, scholars have proposed that greater attention be given to therapeutic inoculation effects. That is, instead of limiting inoculation messages to a preemptive or prophylactic role, inoculation messages might also be able to serve a retroactive, therapeutic role. Using the traditional biomedical analogy of inoculation, Compton (2020) noted that “nearly all persuasion inoculation research models a prophylactic approach” (p. 335), meaning that inoculation strategies are used to shore up or bolster current attitudes, rendering them less vulnerable to counter attitudinal attacks. However, Compton (2020) further argued that there may be a therapeutic role of inoculation message treatments to confer resistance, albeit through alternative mechanisms. Specifically, Compton (2020) noted that “it is likely that therapeutic inoculations take a different route(s) than threat (recognition of an existing position’s vulnerability) working with refutational preemption to generate resistance against future attacks” (p. 339). Therapeutic inoculations, then, would “treat” an existing position (e.g., an attitude different from what is being advocated in the inoculation treatment message) and make the “healed” position more resistant to future attacks. Compton (2020) emphasizes that an extension of inoculation theory to therapeutic applications is consistent with the medical analogy, too: the medical community continues to make advancements in biological therapeutic inoculations, or inoculations that treat an existing condition and confer protection to future infection.
Inoculation message strategies and techniques are of interest to a variety of applied communication practitioners in business and politics. Inoculation message campaigns can aid in protecting and bolstering corporate (e.g., Burgoon et al., 1995) and candidate reputations (e.g., Pfau & Burgoon, 1988) and can aid in the relationship-maintenance aspects of consumer relationships (e.g., Mikolon et al., 2015), making inoculation a powerful messaging strategy in political campaigns (see Compton & Ivanov, 2013, for a review of inoculation theory and political campaigns) and public relations efforts (see Compton et al., 2021, for a review of inoculation theory and public relations). Still, the time required to generate inoculation messages for public campaigns can be lengthy, depending on the complexity of a campaign’s particular goals and aims. It remains unknown how AI can aid in the rapid development of inoculation message treatments on known and novel issues; therefore, the following research question is raised:
RQ1: Will the generative AI ChatGPT be able to accurately aid in the rapid development of inoculation message treatments on (a) known issues and (b) novel issues?
Method
The co-author specializing in generative AI engaged in a variety of prompt engineering techniques. Prompt engineering techniques involve crafting ChatGPT-4 prompts to obtain the desired output, in this case a complete inoculation message inclusive of an introductory statement, a threat component, the raising and refuting of two counter-attitudinal arguments, and a concluding paragraph reinforcing the advocated position. We present these findings as a case study. Yin (2009) defines a case study as an empirical inquiry that investigates a contemporary phenomenon (i.e., the case) within its real-world context. A single-case study may focus on a single instance, specific phenomenon, defined unit, or other bounded system (Lindlof & Taylor, 2018). In communication studies, the case method helps to provide in-depth insights into specific phenomena and contexts.
Matrix Flows (2023), a knowledge management organization specializing in conversational AI, summarizes twenty-one different basic and advanced prompting techniques. We use these conceptual definitions to categorize the prompting strategies used in this case study. Over the 10-week generative AI training we relied on a variety of prompting techniques. Sequential prompting techniques were used to teach ChatGPT the step-by-step process of creating, identifying, and labeling the components of an inoculation message. Iterative prompting techniques were used to guide the AI through the augmentation of messages for specific audiences and linguistic signatures. Active prompting techniques were used to adapt message iterations based on user feedback (e.g., increasing message length, following an 80/20 rule for concrete language use). Chain-of-thought prompting techniques were used when the researchers sought to understand the AI’s decision-making and reasoning. A detailed list of the 29 prompts and the sequencing used in this case study is outlined in Table 1.
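In computational terms, sequential and iterative prompting amount to carrying the full conversation history forward so that each new prompt builds on prior turns. The sketch below illustrates this structure; the prompt wording, the `add_turn` helper, and the commented-out API call are illustrative assumptions, not the study’s actual prompts.

```python
# Minimal sketch of sequential prompting: each step extends the prior
# conversation so the model's next reply builds on earlier turns.
# The prompt texts and the add_turn helper are hypothetical.

def add_turn(history, role, content):
    """Append one conversational turn, returning a new history list."""
    return history + [{"role": role, "content": content}]

history = add_turn([], "system", "You are a public relations consultant.")
history = add_turn(history, "user",
                   "Identify the explicit forewarning in this inoculation message: ...")
history = add_turn(history, "assistant",
                   "The forewarning appears in the opening paragraph ...")
history = add_turn(history, "user",
                   "Now label the two refutational preemptions.")

# With the OpenAI Python client, each step would be sent roughly as:
#   client.chat.completions.create(model="gpt-4", messages=history)
print([turn["role"] for turn in history])  # ['system', 'user', 'assistant', 'user']
```

Because the entire history is resubmitted at every step, later prompts (e.g., “Now label…”) can refer back to earlier instructions, which is what makes step-by-step training of this kind possible.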
Process Description
Collaborative meetings were held between the authors to review the scope and aims of the study and discuss the technical capacity for testing ChatGPT-4’s ability to generate inoculation messages. It was decided that a multi-phase, multi-step approach was the best fit. This provided time for discussion and reflection and informed the selection of AI prompting techniques throughout the process.
The first stage of the process was conducted in September 2023. Its first step involved training ChatGPT on explicit forewarnings, a threat component of inoculation messages, and the second step involved training ChatGPT on refutational preemptions. The focus of this stage was determining whether ChatGPT could generate two-sided messages on known issues (e.g., credit card marketing practices, political campaign messages, and health-nutrition persuasive appeals). After the initial review, a collaborative research meeting was held to review the outputs and process documents generated to date. Following this review of the threat component (explicit forewarnings), the second stage began in October 2023 with a focus on two-sided message development on directed and AI-selected topics of interest. The final stage was to determine the capacity of ChatGPT to generate two inoculation messages, one for and one against a novel topic/issue.
Twenty-one different inoculation messages previously used in published inoculation studies were collated to serve as the instructional foundation for teaching ChatGPT. These inoculation messages varied in their design, with some arguing for and against specific issues such as credit card marketing, political campaign messages, and health and nutrition. A description of the inoculation messages available for use in this training initiative and their message attributes (i.e., message length, readability levels, and language complexity) is documented in Table 2.
Once the 10-week training initiative was completed, a transcript of the AI process report was downloaded from ChatGPT-4. Although active prompting techniques were used to request manipulations and corrections to the AI-generated inoculation messages (i.e., target audiences, linguistic signatures), ChatGPT was not asked to comparatively analyze the messages resulting from these prompts. Instead, the data provided by ChatGPT were imported into DiffChecker© for a secondary, computer-automated analysis of differences within the AI-generated text copy. DiffChecker compares texts, images, and files for differences and can identify and determine the originality of, and similarity between, bodies of text copy.
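DiffChecker is a proprietary tool, but the kind of similarity screening described here can be approximated with Python’s standard difflib module; the two snippets below are hypothetical stand-ins for the AI-generated messages, not text from the study.

```python
# Approximate a DiffChecker-style similarity score with difflib.
# The two snippets are hypothetical stand-ins for AI-generated messages.
import difflib

msg_for = "Stem cell therapies offer real hope for treating degenerative disease."
msg_against = "Critics warn that stem cell marketing outpaces the clinical evidence."

ratio = difflib.SequenceMatcher(None, msg_for, msg_against).ratio()
print(f"similarity: {ratio:.2f}")  # 1.0 = identical; lower values = more distinct texts
```

A ratio near zero would correspond to the “nearly entirely distinct” finding reported below, though difflib’s character-level matching is only a rough proxy for a dedicated diffing tool.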
Interpretation and Analysis
The initial prompts were meant to establish roles, introduce the intent and purpose of the request (i.e., to create an inoculation message), and invite questions from the AI for clarification of the goals. The AI was told that a Communication Agency was approaching it, seeking its support as a Public Relations expert to consult on the development of inoculation messages. The AI replied:
Hello! I’m familiar with the Inoculation Theory and its principles in communication. It aims to “inoculate” individuals against the influence of conflicting messages by presenting them with weakened versions of an argument, alongside counterarguments. This strengthens an individual’s existing beliefs and attitudes, making them more resistant to persuasion by opposing views.
After demonstrating this understanding of inoculation theory, the AI requested three pieces of information: (1) the specific topics or issues; (2) examples of previously used messages, to explore their structure and format for consistency; and (3) additional context or specifications that might influence the message’s content or tone. ChatGPT stated: “Once I have this information, I’ll be able to craft messages tailored to your needs” (ChatGPT, 2023).
The AI was informed that the focus of the application would be on the subject of banking and finance. Multiple inoculation messages used in prior inoculation experiments, focusing on various issues including the regulation of handguns and marijuana legalization, were imported into ChatGPT for reference. In its response, ChatGPT stated that it had observed a consistent format across the messages: a headline (an assertive and straightforward claim); an introduction that acknowledges support for the claim and anticipates opposition (i.e., an explicit forewarning, which is one way to generate threat); an acknowledgement of opposition (outlining opposing arguments); a counterargument section (a detailed refutation of the opposition, supported by statistics, studies, and logical reasoning); and a conclusion (wrapping up claims, usually with a call to action). Using this information, the AI generated an initial 221-word inoculation message titled “Online Banking: The Future of Financial Management” with an explicit forewarning as a threat component:
Increasingly, Americans are transitioning to online banking, embracing the convenience and efficiency it offers. However, some detractors have raised concerns about the safety and reliability of digital financial systems. While these reservations might seem persuasive, they can be challenged by the actual facts and benefits of online banking. (ChatGPT, 2023).
The message went on to further refute concerns related to cyber banking, financial and consumer vulnerability, while touting the convenience of online banking.
After the AI generated the initial message, it was prompted to identify and label specific message components (e.g., forewarnings and refutations), and then asked to integrate credible sources and citations on the subject (i.e., Wall Street Journal, FBI). Using active prompting techniques, the researcher requested that ChatGPT adapt its messages for younger audiences and create two inoculation messages in support of and against younger people entering the stock market. The data show that ChatGPT began to have difficulty creating the forewarning threat component, although it had previously identified and labeled one form of threat (an explicit forewarning) within an inoculation message. The researchers observed that the generative AI also began using open-ended questions within the forewarning threat component, although it was not prompted to do so, and none of the examples provided used this format to elicit “threat” in the inoculation messages. These results speak to the fact that inoculation strategies involve more than mere information transmission; rather, they are carefully crafted appeals that are psychologically arousing in order to generate threat and threat awareness in message recipients. In this case study, the AI was unable to craft threat components with consistency.
A similar series of prompts was used to request a two-sided inoculation message on a novel health issue: the ethical use of stem cell therapies. Once the messages were drafted, it became apparent that ChatGPT was again struggling to generate the threat component in a manner consistent with the training and examples. It was observed that the specific aspects of a threat, such as a forewarning, were lacking. The AI was directed that “reading the threat the reader should [have] a feeling of uncertainty regarding their safety from the issue. Please add a bit more depth to the threat” (PROMPT 11). The revision based on this prompt is presented in Table 3. As will be discussed later in the paper, ChatGPT had difficulty delineating between a forewarning (a message feature that is intended to generate threat [Compton, 2013] and, when used, generates more threat than other message features, like preemptive refutation [Compton & Ivanov, 2012]) and threat itself, which is a message response (Compton, 2013).
Prompt 13 stated that “We are again going to look at the structure you are using for your generations,” followed by an explicit review of inoculation message elements such as the introduction, threat, refutations, and conclusion. This prompt was meant to remind ChatGPT about the inoculation message design, structure, and format, and in doing so it re-stated and described inoculation message design features.
Once a new iteration of messages was generated, additional training occurred on the concepts of regulatory focus and regulatory fit using summaries from Cesario et al. (2008) and Mason and Miller (2013). During this portion of the AI session, descriptive and definitional text explaining the concepts of positive and negative outcome focus and of concrete and abstract language was uploaded into ChatGPT. Using this information as a reference point, the AI generated two additional inoculation messages on a novel topic: “For Stem Cell Therapies” and “Against Stem Cell Therapies.” The AI-generated messages again relied on open-ended questions within the threat component and further incorporated this language style into the refutational preemptions. After reviewing these messages, ChatGPT was prompted to re-review the Cesario et al. (2008) information and recreate the messages using an 80/20 ratio of concrete versus abstract language. The DiffChecker comparative results showed that the messages were nearly entirely distinct from one another, indicating a low level of similarity in message content. Next, active prompts were used to ensure that the total message length was between 300 and 500 words, but neither of the revised messages met this guideline. The generative AI was again prompted to expand the length of each of the message components it had previously labeled and identified (i.e., introduction, threat, refutations 1 and 2, conclusion), and it was unable to do so.
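An 80/20 concrete-to-abstract ratio can in principle be checked automatically, though only against some concreteness lexicon. The sketch below uses tiny hand-built word sets as hypothetical placeholders for validated concreteness norms; a real check would rate every word, not just a handful.

```python
# Toy check of a concrete-vs-abstract wording ratio. The two word sets
# are hypothetical placeholders for validated concreteness norms.
CONCRETE = {"card", "bank", "cash", "fee", "statement", "dollar"}
ABSTRACT = {"freedom", "trust", "responsibility", "opportunity"}

def concrete_ratio(text):
    """Share of rated words that are concrete; None if no rated words."""
    words = [w.strip(".,").lower() for w in text.split()]
    concrete = sum(w in CONCRETE for w in words)
    abstract = sum(w in ABSTRACT for w in words)
    rated = concrete + abstract
    return concrete / rated if rated else None

sample = "Your card and bank statement reflect trust, not just a fee."
print(concrete_ratio(sample))  # 0.8 -> meets an 80/20 target
```

Automating this check would let a prompt loop verify the requested ratio rather than trusting the model’s own report, a theme that recurs with the word-count discrepancies described later.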
Returning to known issues associated with banking and finance, prompts 18-19 instructed ChatGPT to “Apply all of your knowledge both already possessed and learned from my coaching and write an inoculation message to convince me of the benefits of credit.” The AI generated an inoculation message titled “Navigating the Financial Landscape: The Compass of Credit.” The researchers took note of the degree of metaphorical and figurative language use within the text copy, such as navigation, journey, ventures, guides, paths, and routes, among others. This observation motivated the researcher to use a chain-of-thought prompting technique by asking, “You have been using a common theme in all of your recent responses that revolves around ‘navigating,’ why is this?” (PROMPT 22). ChatGPT explained:
I used the “navigation” metaphor to create a consistent theme and framework that could help to guide the reader through the complexity of the topics being discussed. Metaphors can serve as powerful tools in communication, providing a familiar structure where new or complex ideas can be placed, making the information more accessible and understandable to the reader. (ChatGPT, 2023).
While true, metaphorical and figurative language can be difficult for persons with disabilities (PWDs) and those with low English proficiency (LEP) (Kalandadze, 2018; Chu & Flores, 2011), both potential targets for a broad-based, public inoculation campaign. Still, the reply was insightful, as the use of metaphor and storytelling in inoculation message design is not a prominent line of inquiry. Extant message-based inoculation research has manipulated specific message features such as outcome focus (e.g., Mason & Miller, 2013), implicit and explicit threat (e.g., Compton & Ivanov, 2012), and reactance (e.g., Miller et al., 2013); however, the use of narrative and storytelling styles for presenting inoculation messages has not been deeply explored (see Compton & Mason, 2020).
After being directed to avoid navigation-related figurative language, the AI continued to use terminology such as navigation, paving roads, bridges, and walking paths, and began mixing in additional environmentally oriented references such as thick fog, air, relentless storms, whirlwinds, gardening, cultivating, and bearing fruit. Following this, ChatGPT was also reminded to include the word counts and titles of the messages it was generating (PROMPT 24). The result was a 524-word inoculation message titled “Navigating the Financial Landscape: The Compass of Credit.” ChatGPT was then directed to “generate messages that could be used for experts not just the younger generation” (PROMPT 23). When completing the task, the AI confirmed, “In this version, I aimed for a mix of metaphors and a tone that could resonate with a more expert audience, while maintaining clarity and concreteness in the presentation of information” (ChatGPT, 2023). The authors believed that if ChatGPT could effectively adapt messages from younger to expert audiences, then there would be objective and measurable changes to attributes of the messages, demonstrated by increased reading difficulty, longer message length, and an increase in the quantity of complex words within the message. To test this, a computer-automated analysis of the AI-generated messages was conducted using WebFX technology.
Table 4 shows there was little variation between the messages, outside of a reduction in length for the expert audiences. The Flesch-Kincaid Reading Ease scores indicated that Message 1 was suitable for 22- to 23-year-olds while Message 2 was suited for 21- to 22-year-olds, both requiring an educational level far exceeding the lower secondary education reading level recommended for messaging to general populations (W3.org, 2024). In both cases the AI-generated messages were more difficult to read and required an advanced education level to comprehend. Furthermore, the proportion of complex words within the AI inoculation messages exceeded the averages of the training materials used to teach inoculation message design.
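The readability comparison here relied on WebFX’s automated scoring; a rough equivalent can be computed directly from the standard Flesch-Kincaid grade-level formula, with the caveat that the simple vowel-group syllable counter below only approximates a proper syllable count.

```python
# Rough Flesch-Kincaid grade-level estimate. The vowel-group syllable
# counter is a heuristic, so scores only approximate tools like WebFX.
import re

def count_syllables(word):
    """Approximate syllables as runs of vowels (silent-e not handled)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    """Flesch-Kincaid grade: 0.39*(words/sent) + 11.8*(syll/word) - 15.59."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / sentences)
            + 11.8 * (syllables / len(words)) - 15.59)

simple = "The cat sat. The dog ran."
complex_text = "Sophisticated institutional regulations necessitate comprehensive deliberation."
print(fk_grade(simple) < fk_grade(complex_text))  # True
```

Longer sentences and more syllables per word both raise the grade estimate, which is why the metaphor-heavy, complex-word AI messages scored well above the lower secondary level recommended for general audiences.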
From this point onward, ChatGPT-4 struggled to adapt messages to specific design requirements, even when given latitude to design an inoculation message ranging from 300 to 500 words. Instead, the AI began producing messages and labeling them with false word counts (i.e., a 114-word refutation documented as a 305-word unit). The researcher responded that “This message is much better …. but you are not actually writing the amount of words you are stating on your word count in each section. Please type the correct amount of words” (PROMPT 25). ChatGPT apologized and replied, “If you feel that 300 words for each section would make the message more compelling, I’ll ensure that each part meets this word count for a thorough discussion. Let’s proceed with a 300-word count for each section”; however, the AI continued to report false word counts according to the DiffChecker data. The introduction section of “The Empowering Spectrum of Credit” message was reported as 301 words but was actually 141 words. The threat component of the same message (the explicit forewarning) was reported as 303 words but was actually 92 words. These distortions were repeated in the AI-generated inoculation message “The Sustaining Pillars of Credit.”
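Model-reported word counts are trivial to verify independently, which is how discrepancies of this kind can be confirmed; the section text and claimed figure below are illustrative placeholders, not the study’s actual message.

```python
# Check model-claimed word counts against actual counts. The section
# text and claimed figure are illustrative, not the study's message.
def verify_word_counts(sections, claimed):
    """Return {section: (claimed, actual)} for every mismatched section."""
    return {name: (claimed[name], len(text.split()))
            for name, text in sections.items()
            if len(text.split()) != claimed[name]}

sections = {"threat": "Some experts warn that your confidence may soon be tested."}
claimed = {"threat": 303}  # figure reported by the model
print(verify_word_counts(sections, claimed))  # {'threat': (303, 10)}
```

Because LLMs do not actually count tokens or words when generating, this kind of external check is a safer validation step than corrective prompting alone.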
The last stages of the prompts were designed to determine whether an AI-generated inoculation message could be created independently on a novel issue from any ‘industry, field or topic’ of ChatGPT’s choosing. Interestingly, the AI produced two inoculation messages about solar energy: one titled “The Solar Symphony: Harnessing the Sun’s Symphony for a Sustainable Tomorrow” and the other titled “Dissonance in the Solar Symphony: A Critical Review of Solar Energy’s Grand Performance.” Unable to break free from the metaphorical writing style, the AI made notable use of figurative and abstract language throughout. The AI-generated messages cast solar energy into a grand rhetorical context of rhythms, musical notes, melody, concerts, chords, symphonies, compositions, harmony, maestros, performance, and concert halls, consistent across both messages. Additional thoughts on these observations are offered in the discussion section.
At the close of the 10-week training session, ChatGPT was asked “Do you have any questions, concerns, or areas in which you believe you could improve on? I am here as your resource to improve so please feel free to ask” (PROMPT 29). ChatGPT responded with gratitude, stating “Thank you for your guidance and patience throughout this learning process. Your detailed instructions and feedback have been immensely beneficial in honing my skills in crafting effective inoculation messages” (ChatGPT, 2023). Then, ChatGPT requested specific assistance in three key areas, listed below.
Precision in messaging: Ensuring the message is precise while being engaging. Balancing between being informative and captivating is challenging. Any tips on maintaining this balance effectively?

Adaptability: Are there areas where you think the messages could be more adaptable to various audiences or situations without losing their efficacy?

Real-time relevance: How can the messages be kept updated and relevant to real-time events and developments, ensuring they resonate with immediacy and urgency with the audiences?
The AI closed the session with the note “Your expertise and guidance have been instrumental, and any further thoughts you could share would be invaluable. Thank you again for your constructive and supportive coaching” (ChatGPT, 2023).
Discussion
Training AI to aid in the rapid development of inoculation messages may be useful during times of financial insecurity (i.e., avoiding reactionary stock selloffs in market fluctuations; see Dillingham & Ivanov, 2017), in facilitating resistance against predatory marketing schemes (i.e., credit card marketing, Compton & Pfau, 2004; stem cell marketing, Mason, 2023), or in defending against reputational threats to organizations (Mason et al., 2023). The capacity to generate and deploy inoculation message strategies quickly and effectively adds value to a variety of organizational practices including sales and advertising campaigns, crisis communication planning and preparations, and employee communication toolboxes.
To answer the research question posed, we executed an atheoretical, exploratory, multi-phase case study of the efficacy of generative AI in creating two-sided inoculation messages on known and novel issues. A training initiative was developed that first trained ChatGPT using inoculation messages previously employed in experimental laboratory settings; through this training process we sought to determine whether generative AI, specifically ChatGPT, could (a) identify the message structure and message features of inoculation messages; (b) augment inoculation message features (i.e., linguistic signatures, language, and length); (c) adapt messages for specific audiences (i.e., younger populations, experts); and finally, (d) independently replicate the inoculation message development process on a topic of its choosing.
As demonstrated by the DiffChecker comparative results, we found a high degree of originality within the AI-generated inoculation messages. However, structural weaknesses were prevalent regardless of originality. ChatGPT followed a different design and style even though examples to use as models were provided in the training process. ChatGPT seemed to struggle with the operationalization of inoculation’s threat component, which in this case was an explicit forewarning. Rather than developing threat as a declarative statement or explicit forewarning, ChatGPT opted to use open-ended questions to generate thoughts on the issue, and in doing so failed to achieve the intended aims of inoculation’s threat mechanism. The reliance on open-ended questions later appeared in the refutational preemptions of subsequent iterations. No prompting techniques were used to correct for this.
We also observed that ChatGPT tended to default to two or fewer refutations, even on broad and highly complex issues such as cybersecurity, banking, finance, and the ethical use of stem cell therapies. Although ChatGPT was prompted with specific directives to “raise and refute at minimum two counter-attitudinal refutations” (PROMPT 13), it did not demonstrate the ability or initiative to develop more than two refutational preemptions in the design and creation process, no matter the complexity, intricacy, or history of the issue. We also found that generative AI demonstrated an inability, in some cases, to add depth to specific message elements and to overall message length. Finally, the word counts reported within the AI-generated messages were erroneous in several cases, an error that persisted after corrective prompting. Thus, ChatGPT did not appear aware of the inaccuracy of the data it was providing, even after being prompted.
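Because ChatGPT's self-reported word counts proved unreliable, an external audit is straightforward to automate. The sketch below is our illustration, not part of the training protocol: it recounts words and tallies labeled refutations, assuming (hypothetically) that a generated message labels its sections with headers such as "Refutation 1:".

```python
import re

def audit_message(message: str, reported_word_count: int,
                  min_refutations: int = 2) -> dict:
    """Independently verify an AI-reported word count and the number of
    labeled refutational preemptions in a generated inoculation message."""
    actual_words = len(message.split())
    # Count lines that begin with "Refutation" (case-insensitive).
    refutations = len(re.findall(r"(?im)^refutation\b", message))
    return {
        "actual_words": actual_words,
        "count_matches_report": actual_words == reported_word_count,
        "enough_refutations": refutations >= min_refutations,
    }

# Hypothetical message with assumed section labels.
msg = ("Threat: Critics will soon target you.\n"
       "Refutation 1: Their first claim fails because...\n"
       "Refutation 2: Their second claim fails because...")
report = audit_message(msg, reported_word_count=400)
```

Here `report["count_matches_report"]` would be False, flagging the kind of misreported length we encountered during training.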
Limitations
This case study has limitations related to both the generative AI technology selected, ChatGPT, and the methodological approach. While ChatGPT relies on an enormous and growing data set of human language, it does not always have access to the most up-to-date information, may generate inaccurate information, and may be biased (Deng & Lin, 2023). For example, Hartmann et al. (2023) showed that ChatGPT is biased toward pro-environmental, left-libertarian political orientations. Concerns related to ChatGPT and other conversational AI include plagiarism, security, and fake news (Guo et al., 2023). This case study is limited to ChatGPT; it is therefore unknown whether our observations and experiences would be replicated using alternative AI large language model (LLM) technologies (e.g., Bard, Bing, Poe). We also do not know whether additional AI training materials introduced more frequently or repetitively in the training process (e.g., boosters) would have resulted in improved design quality or message consistency.
There was no independent expert review of the inoculation output generated by the AI during training. However, ChatGPT's reliance on figurative, metaphorical, and abstract language was pronounced in the inoculation treatments, even after training directives advised avoiding the practice and recommended an 80/20 ratio favoring concrete language. ChatGPT's generated inoculation messages were encapsulated by metaphorical themes centered on movement (navigational, journey, ventures, guides, paths, routes) and performance (concerts, symphonies, harmony, maestros, music, concert halls). Historically, the use of narrative and storytelling frameworks for presenting inoculation message content has not been prominent, which led Compton and Mason (2020) to note that “One of the biggest lingering questions is whether narratives hurt or help resistance to influence when stories are used as inoculation messages” (p. 24). Given that this is the presentational style of conversational AI, it may be ideal for developing narrative, story-based inoculation messages.
Threat has remained an elusive concept in inoculation theory research, a challenge that extends from the earliest inoculation theory research of the 1960s (see McGuire, 1964) to the most recent studies (see Compton, 2021). ChatGPT, too, seemed to struggle with the threat component. One reason might be that our prompts did not distinguish between threat, a message effect, and forewarning, a message feature (Compton, 2013). That is, although a forewarning is a conventional feature of an inoculation message, it is not the only way to elicit threat (e.g., implicit threat can be generated by the mere presence of counterargument content; Compton, 2021). By conflating threat and forewarnings, our prompts might have contributed to the struggles ChatGPT had with this component of inoculation messaging.
Finally, it is unknown whether alternative prompts would have elicited different results; we imagine this would indeed be the case. Still, it is unknown whether alternative prompt engineering techniques would have revealed additional metaphorical themes or reduced the figurative, storytelling style in the AI-generated inoculation messages. It may be that an elusive sequence of AI prompts holds the key to generating high-quality inoculation messages in real time. This training initiative used a variety of prompting strategies to achieve specific goals. Active prompting techniques were used to remind the AI of inoculation message elements and to request disclosures of word counts and message titles; these techniques were used multiple times throughout the training program. Iterative prompting techniques were used to prompt message augmentations related to audience adaptation and changes to the linguistic signatures of the messages. Sequential prompting techniques were used to teach the AI in a step-by-step manner throughout the training initiative, yet there were challenges related to the AI's ability to develop the threat component or to expand and extend counterarguments. Chain-of-thought prompting techniques were used only once; however, there are additional opportunities to use these techniques to seek an explanation of the AI's reasoning in order to better understand its logic (Matrix Flows, 2024).
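The sequenced strategy described above can be represented as a structured chat history. The sketch below is a simplified illustration only (the system role wording and message format are our assumptions, and no API call is made): it shows how sequential training examples, an active reminder of message components, and the final task prompt might be assembled into the message list a chat-completion API expects.

```python
from typing import Dict, List

def build_training_dialogue(example_messages: List[str],
                            target_topic: str) -> List[Dict[str, str]]:
    """Assemble a sequential prompting dialogue: system role, training
    examples, an active reminder of message elements, then the task."""
    dialogue = [{"role": "system",
                 "content": "You are an inoculation message designer."}]
    # Sequential step: supply laboratory-tested examples one at a time.
    for example in example_messages:
        dialogue.append({"role": "user",
                         "content": f"Study this example:\n{example}"})
    # Active prompt: restate the required structural elements.
    dialogue.append({"role": "user",
                     "content": ("Remember: an inoculation message contains an "
                                 "explicit forewarning (threat) and at least "
                                 "two refutational preemptions.")})
    # Task prompt for the target issue.
    dialogue.append({"role": "user",
                     "content": (f"Create a two-sided inoculation message about "
                                 f"{target_topic}. Report the exact word count.")})
    return dialogue

history = build_training_dialogue(["<lab message on marijuana legalization>"],
                                  "the ethical use of stem cell therapy")
```

Encoding the sequence as data makes it easy to vary one prompting element at a time in future replications.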
Future Directions
Future efforts can aid in these endeavors by continuing to refine and discover the prompt engineering techniques best suited to generating the desired messages through conversational AI interactions. Conceivably, there is an undiscovered constellation of prompting techniques most useful to the development of high-quality AI-generated inoculation messages. Perhaps ChatGPT's own requests for greater precision and adaptability, and its suggestions for developing real-time referencing techniques, are a good place to start. The pursuit seems worthwhile, as it may allow applied practitioners to begin from a zero-shot prompting position that requires no additional examples to guide the model and relies instead on the model's pre-trained knowledge to generate a response to a query.
Although this study used a variety of basic and advanced prompting techniques, future AI training programs may rely on generated knowledge prompting, which encourages the model to make logical inferences and derive new conclusions from referenced data and context, or may assess the utility of tree-of-thought prompting, which first directs the AI to explore a decision tree of a complex issue by recursively prompting the model to think through various scenarios and possibilities (Matrix Flows, 2024). Implementing other advanced prompting techniques before issuing message generation queries may be a constructive approach to AI-generated inoculation message treatments. Continued focus in this area may reveal novel insights into the prompt engineering techniques most effective at harnessing the power of conversational AI moving forward.
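As a rough illustration of the tree-of-thought idea (our sketch, not a technique used in this study), the exploration prompts for a complex issue can be generated recursively, expanding each scenario into sub-scenarios before any message is requested. The decision tree below is hypothetical.

```python
def tree_of_thought_prompts(issue: str, scenarios: dict, depth: int = 0) -> list:
    """Recursively turn a nested scenario dict into ordered exploration
    prompts, so the model reasons through branches before composing."""
    prompts = []
    for scenario, children in scenarios.items():
        prompts.append(f"{'  ' * depth}Consider, for '{issue}': {scenario}")
        if isinstance(children, dict):
            prompts.extend(tree_of_thought_prompts(issue, children, depth + 1))
    return prompts

# Hypothetical decision tree for a stem cell therapy message.
tree = {"ethical objections": {"consent concerns": {}, "religious concerns": {}},
        "efficacy doubts": {"limited trials": {}}}
steps = tree_of_thought_prompts("stem cell therapy", tree)
```

Each generated line would be sent as its own prompt, with the message-generation query issued only after the branches have been explored.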
Conclusion
A variety of public communicators have begun investigating how best to fuse AI into professional practice for specific areas of message development such as promotional and political campaigns, customer service, employee relations, and research and development. This case study revealed that AI is capable of understanding inoculation theory and creating inoculation messages, with varying degrees of effectiveness. ChatGPT was able to identify and label components of an inoculation message; however, higher-order adaptations related to audiences, message length, and language (i.e., use of concrete language) were difficult for the AI to accomplish on novel issues. There was a lack of accuracy in the reporting of message features (i.e., word counts) and, in some cases, an inability to conform to specific prompt requests (i.e., reduction of figurative language use). Findings from this study may be useful to scholars who focus on training AI in message development. AI's capacity to retain information in long-term memory, to learn from prior experiences, and to refine decision making offers promise to those seeking to teach AI specialization in inoculation message development.
References
- Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al. (2020). Language models are few-shot learners. arXiv preprint arXiv:2005.14165.
- Burgoon, M., Pfau, M., & Birk, T. S. (1995). An inoculation theory explanation for the effects of corporate issue/advocacy advertising campaigns. Communication Research, 22(4), 485-505. [CrossRef]
- Cardon, P., Fleischmann, C., Aritz, J., Logemann, M., & Heidewald, J. (2023). The challenges and opportunities of AI-assisted writing: Developing AI literacy for the AI age. Business and Professional Communication Quarterly, 86(3), 257-295. [CrossRef]
- Cesario, J., Higgins, E. T., & Scholer, A. (2008). Regulatory fit and persuasion: Basic principles and remaining questions. Social and Personality Psychology Compass, 2, 444-463.
- OpenAI (2023). ChatGPT (September 13 version) [Large multimodal model]. https://chat.openai.com/chat.
- Chen, M., Tworek, J., Jun, H., Yuan, Q., de Oliveira Pinto, H. P., Kaplan, J., Edwards, H., Burda, Y., Joseph, N., Brockman, G., et al. (2021). Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374.
- Chu, S. Y., & Flores, S. (2011). Preventing disproportionate representation of students from culturally and linguistically diverse backgrounds as having a language impairment. The International Journal- Language, Society, & Culture, 32, 17-27.
- Chui, M., Roberts, R., & Yee, L. (2022). Generative AI is here: How tools like ChatGPT could change your business. Quantum Black AI by McKinsey. https://www.mckinsey.com/~/media/mckinsey/business%20functions/quantumblack/our%20insights/generative%20ai%20is%20here%20how%20tools%20like%20chatgpt%20could%20change%20your%20business/generative-ai-is-here-how-tools-like-chatgpt-could-change-your-business.pdf.
- Compton, J. (2004). Late night political comedy, candidate image, and inoculation: A unique test of inoculation theory (Publication No. 3138724) [Doctoral dissertation, University of Oklahoma]. ProQuest Dissertations Publishing.
- Compton, J. (2021). Threat and/in inoculation theory. International Journal of Communication, 15, 1-13. 1932-8036/2021FEA0002.
- Compton, J., & Ivanov, B. (2012). Untangling threat during inoculation-conferred resistance to influence. Communication Reports, 25(1), 1-13. [CrossRef]
- Compton, J., & Ivanov, B. (2013). Vaccinating voters: Surveying political campaign inoculation scholarship. Annals of the International Communication Association, 37(1), 251-283. [CrossRef]
- Compton, J., Jackson, B., & Dimmock, J. A. (2016). Persuading others to avoid persuasion: Inoculation theory and resistant health attitudes. Frontiers in Psychology, 7(122). [CrossRef]
- Compton, J., & Mason, A. M. (2020). Narrative and the inoculation theory of resistance to influence. In Innovations and Implications of Persuasive Narrative. Peter Lang. https://www.peterlang.com/view/title/72633.
- Compton, J. & Pfau, M. (2004). Use of inoculation to foster resistance to credit card marketing targeting college students. Journal of Applied Communication Research, 32(4), 343-364. [CrossRef]
- Compton, J., van der Linden, S., Cook, J., & Basol, M. (2021). Inoculation theory in the post-truth era: Extant findings and new frontiers for contested science, misinformation, and conspiracy theories. Social and Personality Psychology Compass, 15(6), 1-16. [CrossRef]
- Compton, J., Wigley, S., & Samoilenko, S. (2021). Inoculation theory and public relations. Public Relations Review, 47(5). [CrossRef]
- Cook, J., Lewandowsky, S., & Ecker, U. K. H. (2017). Neutralizing misinformation through inoculation: Exposing misleading argumentation techniques reduces their influence. PLOS ONE 12(5). [CrossRef]
- Davenport, T. H. & Mittal, N. (2022). How generative AI is changing creative work. Harvard Business Review. https://hbr.org/2022/11/how-generative-ai-is-changing-creative-work.
- DiffChecker (2024). DiffChecker Comparative Text Tool. https://diffchecker.com/text-compare.
- Dillingham, L. L., & Ivanov, B. (2017). Inoculation messages as a preemptive financial crisis communication strategy with inexperienced investors. Journal of Applied Communication Research, 45(3), 274-293. [CrossRef]
- Geegan, S. A., Ivanov, B., & Parker, K. A. (2023). Inoculating within character limits: Terse messages to promote Gen Z mental health. Journal of Communication & Media Studies, 8(2). [CrossRef]
- Kalandadze, T., Norbury, C., Nærland, T., Næss, K.B. (2018). Figurative language comprehension in individuals with autism spectrum disorder: A meta-analytic review. Autism, 22(2), pp. 99-117. [CrossRef]
- Lindlof, T. R., & Taylor, B. C. (2018). Qualitative communication research methods (4th ed.). SAGE Publications.
- Mason, A. (2023, November 22). Medical tourism and communication. Oxford Research Encyclopedia of Communication. Retrieved January 2, 2024, from https://oxfordre.com/communication/view/10.1093/acrefore/9780190228613.001.0001/acrefore-9780190228613-e-1461.
- Mason, A. M., Compton, J., Tice, E., Peterson, B., Lewis, I., Glenn, T., & Combs, T. (2023). Analyzing the Prophylactic and Therapeutic Role of Inoculation to Facilitate Resistance to Conspiracy Theory Beliefs. Communication Reports, 37(1), 13–27. [CrossRef]
- Mason, A. M., & Miller, C. H. (2013). Inoculation message treatments offer insights for curbing non-communicable disease development. Pan American Journal of Public Health, 34(1), 29-35.
- Mason, A. M. & Miller, C. H. (2015). The ability of inoculation to confer resistance to deceptive health-nutrition related advertising claims. Health Education Journal, 75(2), 144-157. [CrossRef]
- Matrix Flows (2023). A Comprehensive Guide to Prompting Techniques for Conversational AI. https://www.matrixflows.com/blog/a-comprehensive-guide-to-prompting-techniques-for-conversational-ai.
- Mikolon, S., Quaiser, B., & Wieseke, J. (2015). Don’t try harder: Using customer inoculation to build resistance against service failures. Journal of the Academy of Marketing Science, 43(4), 512-527. [CrossRef]
- Miller, C. H., Ivanov, B., Sims, J. D., Compton, J., Harrison, K. J., Parker, K. A., Parker, J. L., & Averbeck, J. M. (2013). Boosting the potency of resistance: Combining the motivational forces of inoculation and psychological reactance. Human Communication Research, 39(1), 127-155. [CrossRef]
- Parker, K. A., Ivanov, B., & Compton, J. (2012). Inoculation’s efficacy with young adults’ risky behaviors: Can inoculation confer cross-protection over related but untreated issues? Health Communication, 27(3), 223-233. [CrossRef]
- Pfau, M., & Burgoon, M. (1988). Inoculation in political campaign communication. Human Communication Research, 15(1), 91–111. [CrossRef]
- Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language models are unsupervised multitask learners. OpenAI Blog. https://life-extension.github.io/2020/05/27/GPT%E6%8A%80%E6%9C%AF%E5%88%9D%E6%8E%A2/language-models.pdf.
- Richards, A. S., & Banas, J. A. (2015). Inoculating against reactance to persuasive health messages. Health Communication, 30(5), 451-460. [CrossRef]
- Roumeliotis, K. I. & Tselikas, N. D. (2023). ChatGPT and Open-AI models: A preliminary review. Future Internet, 15(192). [CrossRef]
- van Dis, E. A., Bollen, J., Zuidema, W., van Rooij, R., & Bockting, C. L. (2023). ChatGPT: Five priorities for research. Nature, 614(7947), 224-226. https://www.nature.com/articles/d41586-023-00288-7.
- W3C (2024). Understanding success criterion 3.1.5: Reading level. Retrieved February 26, 2025, from https://www.w3.org/WAI/WCAG22/Understanding/reading-level.html.
- Wahde, M., & Virgolin, M. (2022). Conversational agents: Theory and applications. arXiv preprint arXiv:2202.03164.
- WebFX (2020). Readability testing tools. Retrieved from https://www.webfx.com/tools/read-able/.
- Wei, J., Bosma, M., Zhao, V. Y., Guu, K., Yu, A. W., Lester, B., Du, N., Dai, A. M., & Le, Q. V. (2022). Finetuned language models are zero-shot learners. arXiv preprint arXiv:2109.01652.
- Yin, R. K. (2009). Case study research: Design and methods (Vol. 5). Sage.
Table 1. Sequenced Prompting Techniques.

| Iteration | Description of Activity |
|---|---|
| 1 | Greetings and introductions; established roles and invited questions about intent and purpose |
| 2 | Directive issued with focus on banking and financial sector |
| 3-4 | AI training intervention: multiple inoculation message examples used to train the AI on multiple subjects (e.g., regulation of handguns, marijuana legalization) |
| 5 | Prompt to label and identify inoculation message components (i.e., threat) |
| 6 | Prompt to support claims with evidence and citations |
| 7 | Prompt to use historical information (i.e., severity of 2008 financial crisis) |
| 8 | Prompt to target younger audiences “getting involved with stocks” and create a two-sided inoculation message |
| 9 | Prompt to exclude credible sources and citations to support claims |
| 10 | Prompt to repeat the process on a novel issue (i.e., ethical use of stem cell therapy) |
| 11 | Prompt to revise threat component of inoculation message |
| 12 | Prompt to adapt message length (400 words) |
| 13 | Prompt to remember inoculation message formatting and restatement of inoculation message design features |
| 14 | Prompt to adapt linguistic signatures (positive and negative outcome focus) |
| 15 | Prompt to increase use of concrete language using 80/20 guideline |
| 16 | Prompt for further message length adaptation (300-500 words) |
| 17 | Prompt for further message length adaptation |
| 18-19 | Prompt to re-create inoculation message applying all concepts reviewed |
| 20 | Prompt to avoid storytelling/narrative tone |
| 21 | Prompt to create a two-sided inoculation message on the benefits of financial credit using all skills acquired through the training process |
| 22 | Prompt to explain use of “navigation” metaphor; prompt to change message language for an expert audience |
| 23 | Prompt for audience tailoring for expert audiences and alternative metaphors |
| 24 | Prompt to remember to include word counts, identify message components, and include message titles |
| 25-26 | Prompt seeking explanation for false reporting of message length data |
| 27-28 | Prompt to create two inoculation messages on a topic of choice |
| 29 | Closing Q&A |
Table 2. Inoculation Message AI Training Materials.

| Topic/Issue | FK-RE | FK-GL | Gunning Fog | Coleman Liau | Word Length | Complex Words | Sentences |
|---|---|---|---|---|---|---|---|
| The Right Candidate JK | 50.2 | 9.9 | 12.5 | 13.3 | 280 | 18.93 | 19 |
| The Right Candidate JE | 53.4 | 9.9 | 12.4 | 13.4 | 281 | 17.08 | 17 |
| The Right Candidate HD | 54.3 | 9 | 11.5 | 12.8 | 284 | 17.96 | 21 |
| Credit Card Marketing | 60.4 | 8.8 | 10.7 | 12.5 | 333 | 11.71 | 21 |
| Credit Card Marketing | 45.9 | 11.2 | 13.3 | 15.6 | 332 | 15.6 | 19 |
| Credit Card Marketing | 50.8 | 10.7 | 13.2 | 15.6 | 330 | 14.5 | 18 |
| Credit Card Marketing | 56.3 | 9.3 | 11.4 | 13.3 | 333 | 12.91 | 21 |
| Legalization Guns | 58.6 | 9 | 11.9 | 11.5 | 255 | 14.90 | 16 |
| Legalization Gambling | 42 | 10.4 | 15.9 | 14.9 | 254 | 28.35 | 21 |
| Legalization Marijuana | 37.1 | 12.9 | 16.2 | 14 | 252 | 21.83 | 13 |
| Pro-TV Violence | 50.1 | 9.9 | 13.2 | 14.1 | 318 | 21.07 | 22 |
| Against Gun Ban | 60.3 | 8.5 | 13.11 | 12.6 | 253 | 18.97 | 17 |
| Against Legalization Gambling | 54.4 | 8.5 | 12.7 | 13.2 | 258 | 20.93 | 23 |
| Against Legalization of Marijuana | 48.8 | 9.3 | 12.3 | 12.9 | 255 | 20.39 | 22 |
| Against TV Ban | 36.4 | 12 | 16.2 | 16.8 | 304 | 26.32 | 20 |
| Academic Plagiarism | 57.4 | 9 | 11.6 | 13 | 393 | 14.25 | 26 |
| Academic Plagiarism | 56 | 9.2 | 12.1 | 14.3 | 392 | 15.82 | 26 |
| Academic Plagiarism | 60 | 8.2 | 10.5 | 12.8 | 251 | 13.21 | 19 |
| Conspiracy Theory Abs | 41.1 | 12.1 | 14.1 | 15.5 | 349 | 19.48 | 19 |
| Conspiracy Theory Con | 44.9 | 11.4 | 13.9 | 11.9 | 231 | 17.75 | 13 |
| Food Advertising Abstract-Prevention | 51 | 10.5 | 14.1 | 14.2 | 389 | 17.48 | 21 |
| Food Advertising Abstract-Promotion | 53.4 | 9.9 | 13.2 | 13.9 | 383 | 16.45 | 23 |
| Food Advertising Concrete-Prevention | 43.1 | 12.3 | 15.1 | 14.6 | 388 | 16.29 | 19 |
| Food Advertising Concrete-Promotion | 48.8 | 11.1 | 13.8 | 13.8 | 358 | 16.29 | 19 |
| M = | 50.61 | 10.13 | 13.12 | 13.77 | 310.67 | 17.85 | 19.79 |

Note: FK-RE = Flesch-Kincaid Reading Ease; FK-GL = Flesch-Kincaid Grade Level; Complex Words = percentage of complex words.
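The FK-RE and FK-GL values in Table 2 come from the standard Flesch-Kincaid formulas. The sketch below implements them with a rough vowel-group syllable heuristic (our approximation; the study itself relied on the WebFX readability tool), so its outputs may differ slightly from the table's values.

```python
import re

def count_syllables(word: str) -> int:
    """Approximate syllables as runs of vowels, with a floor of one per word."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_scores(text: str) -> tuple:
    """Return (reading_ease, grade_level) per the Flesch-Kincaid formulas."""
    words = re.findall(r"[A-Za-z']+", text)
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / sentences   # words per sentence
    spw = syllables / len(words)   # syllables per word
    reading_ease = 206.835 - 1.015 * wps - 84.6 * spw
    grade_level = 0.39 * wps + 11.8 * spw - 15.59
    return round(reading_ease, 1), round(grade_level, 1)

ease, grade = flesch_scores("The cat sat on the mat. It was warm.")
```

Longer sentences and more syllables per word lower the reading-ease score and raise the grade level, which is why the abstract, figurative AI-generated messages tended to score as harder to read.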
Table 3. Revised AI-Generated Inoculation Threat Component.

| Time 1 | Time 2 |
|---|---|
| Threat: In your journey of understanding modern medicine, you’ve inevitably stumbled upon vocal critics of stem cell research and therapies. They paint a picture filled with ethical concerns, potential malpractices, and overblown promises. | Threat: In a world that’s rapidly advancing, your health and the health of your loved ones might feel like it’s hanging in the balance. With so many diseases remaining untreatable and the clock ticking, the criticism and doubts surrounding stem cell research might make you feel even more vulnerable, fearing we might be missing out on our only hope. |
Table 4. Differences in AI-Generated Message Tailoring to Target Audiences.

| | FK-Reading Ease | FK-Grade Level | Total Words | % of Complex Words |
|---|---|---|---|---|
| Message 1 – Younger | 25 | 15.3 | 523 | 25.85% |
| Message 2 – Expert | 30.5 | 14.8 | 347 | 24.21% |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).