From Uncertainty to Tenacity: Investigating User Strategies and Continuance Intentions in AI-Powered ChatGPT with Uncertainty Reduction Theory


Submitted: 27 January 2024 · Posted: 30 January 2024


Abstract
The introduction of ChatGPT in academia has raised significant concerns, particularly regarding plagiarism and the ethical use of generative text for academic purposes. Existing literature has predominantly focused on ChatGPT adoption, leaving a notable gap in addressing the strategies users employ to mitigate these emerging challenges. To bridge this gap, this research draws on Uncertainty Reduction Theory (URT) to formulate user strategies for reducing uncertainty, encompassing both interactive and passive approaches. Concurrently, the study identifies key sources of uncertainty: concerns related to transparency, information accuracy, and privacy. It also introduces seeking clarification and consulting peer feedback as mediators that facilitate reduced uncertainty. We tested these hypotheses with a sample of Indonesian users (N = 566) using structural equation modeling in Smart-PLS 4.0. The results confirm that interactive Uncertainty Reduction Strategies (URS) are more effective than passive URS in reducing uncertainty when using ChatGPT. Transparency concerns, information accuracy, and privacy concerns are identified as factors that increase the level of uncertainty. Consulting peer feedback proves more effective at reducing uncertainty than seeking clarification at the individual-system level. Insights from the mediating effects confirm that consulting peer feedback significantly mediates the relationship between the sources of uncertainty and reduced uncertainty. The study discusses various strategies for the ethical use of ChatGPT in the educational context and contributes significantly to theoretical development.

1. Introduction

The emergence of ChatGPT has significantly disrupted the landscape of higher education. Developed by OpenAI, ChatGPT harnesses the power of the Large Language Model (LLM) framework (https://chat.openai.com/). Thormundsson (2023) [1] reports that, as of January 2023, ChatGPT boasted a global user base exceeding 100 million, a fact documented by Statista’s records. This statistic highlights the substantial interest among users in adopting ChatGPT for a wide array of activities. Notably, Dwivedi et al. (2023) [2] assert that ChatGPT holds the potential to enhance user productivity in specific tasks. Furthermore, a growing body of research supports the integration of ChatGPT in higher education, demonstrating its capacity to bolster task efficiency and accelerate academic performance [3,4]. Of particular interest is ChatGPT’s role in academic authorship, where it collaborates as a co-author in published academic journals (e.g., [5,6,7]). This phenomenon highlights ChatGPT’s widespread presence in today’s academic sector.
In scholarly discourse, both pros and cons have emerged regarding ChatGPT. On one hand, ChatGPT is acknowledged as a technology that significantly boosts user productivity and efficiency in academic tasks, as found by several authors [8,9,10,11]. These discussions emphasize ChatGPT’s potential to provide instrumental assistance in academic work. For instance, Foroughi et al. (2023) [12] introduce the concept of ‘learning value’ within the UTAUT2 framework to encourage the adoption of ChatGPT for academic purposes. Additionally, Ma and Huo (2023) [9] introduce ‘novelty value’ and ‘humanness’ within their ChatGPT acceptance framework, highlighting the assessment of individuals employing ChatGPT. However, scholarly discourse also scrutinizes the use of ChatGPT from a skeptical viewpoint, raising concerns about its potentially adverse impact on the field of education, leading to a paradoxical scenario. For example, Baek and Kim (2023) [3] emphasize ChatGPT’s enhancement of task efficiency while simultaneously highlighting the notion of ‘creepiness,’ embodied by the tagline ‘ChatGPT is scary good.’ Further, Cotton et al. (2023) [13] coin the tagline ‘chatting and cheating,’ stressing the potential degradation of academic integrity in the era of ChatGPT’s integration into the academic landscape.
Numerous previous studies have highlighted a ubiquitous skepticism surrounding the utilization of ChatGPT. This skepticism encompasses various dimensions, each posing distinct challenges. For instance, concerns about academic integrity [13] emphasize the need to maintain the educational process’s integrity in the face of AI assistance. Issues regarding transparency [14] stress the importance of understanding how ChatGPT generates responses. Challenges related to information accuracy [12] raise questions about the reliability of information provided by ChatGPT. In addition, substantial privacy apprehensions [15,16] have emerged concerning the data handling and storage practices of ChatGPT. Concerns about potential plagiarism [17,18] and academic misconduct [13] cast doubt on academic honesty in the era of AI assistance. Ethical dilemmas [19,20] surrounding the use of ChatGPT further complicate its integration into education. Inaccuracies in information generated by ChatGPT [21] challenge its suitability for academic reliance, and concerns regarding the reliability of sourced information [22] pose questions about its effectiveness.
These compound concerns have created an intensified state of uncertainty among users and stakeholders across various sectors, particularly in education. Users often exhibit reluctance toward embracing ChatGPT as an educational tool. While ChatGPT is designed as an AI-based product to enhance performance and productivity, effectively mitigating these associated risks remains a challenge, as previous studies and the current educational landscape have not provided comprehensive solutions [12,20,22]. Therefore, this research aims to offer practical solutions for risk mitigation and provide guidance for ethical usage practices with ChatGPT. By addressing these concerns, it seeks to reduce uncertainty and support adaptation to the evolving landscape of modern education.
The existing literature has theoretically failed to offer comprehensive solutions for mitigating the uncertainties arising from the utilization of ChatGPT in higher education. Remarkably, to our knowledge, there has been a notable absence of research that specifically investigates the reduction of uncertainty associated with ChatGPT’s application in higher education. While studies have continuously identified factors contributing to uncertainty, such as ethical challenges [19], privacy concerns [16], issues of transparency [14], and the accuracy of information [12], they have primarily stopped at identification rather than focusing on methods to alleviate such uncertainty. Previous research has been more centered on reducing uncertainty in the usage of various technologies and information sources, including areas like information dissemination on social media [23], the integration of AI-driven robot co-advisors for investment [24], and minimizing uncertainty through reciprocal communication on interactive movie platforms [25]. Significantly, the exploration of how users of ChatGPT employ uncertainty reduction strategies (URS) for ongoing, sustained usage remains unexplored territory. This constitutes a critical theoretical gap that merits immediate attention. The urgency of this gap is amplified by the expanding adoption of ChatGPT in various domains, particularly in the realm of higher education. However, the challenge of effectively mitigating associated risks remains unresolved both in theoretical discourse and practical policymaking.
To initiate this academic discourse, this research presents the following research question (RQ): RQ 1 – “How do users employ strategies aimed at reducing uncertainty to facilitate the continuous utilization of ChatGPT in the context of higher education?” This study embarks on an investigation with the primary objective of addressing the mitigation of risks associated with the integration of ChatGPT in higher education. The theoretical framework underpinning this inquiry is the Uncertainty Reduction Theory (URT). While the origins of URT can be traced to the domain of communication studies [26], its adaptation to the practical domain of diminishing uncertainty in the adoption of technology, including the sphere of artificial intelligence [24,27,28], has been rigorously validated. This investigation centers its focus on three fundamental factors contributing to uncertainty, namely concerns pertaining to transparency, privacy, and the accuracy of information. The impact of these three constructs on achieving a reduced level of uncertainty will be subjected to thorough examination.
Effectively mitigating the risks associated with the utilization of text generated by ChatGPT emerges as a matter of utmost significance. In this regard, the study posits that seeking clarification and obtaining peer consultation feedback represent mechanisms for reducing uncertainty. Seeking clarification regarding the generated text is proposed as a means to provide deeper insights into the acquired information [29]. Furthermore, peer consultation and feedback, particularly within the domain of higher education, present an opportunity to leverage the expertise of peers who are proficient in their respective fields to validate the generated text [30]. In this light, the research introduces RQ 2: How can the practices of seeking clarification and soliciting peer consultation feedback be optimally employed to mitigate risks and achieve a lower level of uncertainty? The prominent contribution of this research lies in introducing the mediating role of seeking clarification and peer consultation feedback as a means to attain a reduced level of uncertainty. Correspondingly, the unrestricted access to information afforded by ChatGPT positions academic discourse to flourish, thereby accelerating the processes of knowledge clarification and consultation. To the best of current knowledge, this study stands as a pioneering effort, introducing a mediating framework that incorporates the practices of seeking clarification and peer consultation feedback, ultimately resulting in a reduced level of uncertainty. In addition to extending the view of existing studies [23,31], this research represents a novel contribution by offering strategies for the mitigation of risks, thereby promoting the sustained use of ChatGPT in the realm of higher education.
This research consistently incorporates two aspects of the URT, which have been commonly employed in prior studies [23,31]: interactive URS and passive URS. To the best of our knowledge, this study represents the first attempt to adapt URS to the context of ChatGPT, aiming to achieve a reduced level of uncertainty. Consequently, this constitutes a theoretical contribution embedded within the framework of our research model. The ultimate objective of this study is to assess users’ continuance intention. Similar to previous research, the primary focus centers on the behavioral intentions that ensue subsequent to the application of URS in the context of specific technologies. This study also holds implications for the practical utilization of ChatGPT, providing insights to users, policymakers, and other stakeholders in higher education to mitigate the risks arising from the disruptive technology of ChatGPT. Ultimately, this research posits that the presence of technology should ideally ease human tasks, which necessitates risk mitigation. This highlights the significance of this study.

2. Literature Review

2.1. Previous Studies and Gaps Identification

A comprehensive literature review was conducted to explore strategies for mitigating uncertainty in the utilization of AI-powered ChatGPT. A summary of previous studies relevant to the applied theory and context can be found in Table 1. Unfortunately, only one study, conducted by Pan et al. (2023) [32], systematically addressed users’ concerns and uncertainties related to interactions with AI-driven Chatbots. Their research had a specific focus on a Chinese online community and primarily employed sentiment analysis. They identified four key areas of uncertainty in Chatbot interactions: technical, relational, ontological, and sexual uncertainties. Notably, this research did not provide explicit solutions for mitigating the risks associated with the usage of AI-powered Chatbots, particularly in the higher education context. Additionally, prior studies applying the URT have concentrated on reducing uncertainty in different contexts, such as social media usage [23], financial robot advisors [24], and interactive movie recommendation systems [25]. This observation highlights a significant gap in exploring URS, especially concerning the application of AI-powered Chatbots like ChatGPT in higher education. As a result, there exists a substantial, largely unexplored research space for further investigation into these uncertainties and the development of effective solutions for mitigating the associated risks posed by this technology. Building upon these identified gaps, this research contributes significantly to various fields:
(1) It extends the application of the URT to the domain of AI-powered ChatGPT, offering practical insights for effectively applying URS in higher education to facilitate the use of artificial intelligence. (2) The study identifies three primary sources of uncertainty associated with ChatGPT utilization: concerns about transparency, privacy, and information accuracy. This comprehensive overview illuminates the factors contributing to uncertainty when adopting ChatGPT in higher education. (3) The research investigates the reduction of uncertainty through the mediating effects of seeking clarification and soliciting peer feedback. These strategies guide users and stakeholders in higher education, emphasizing ethical usage. (4) The study also examines how the applied URS influence both the reduction of uncertainty and continuance intention regarding ChatGPT in higher education.
In summary, this research makes a significant theoretical contribution by exploring the application of URT while integrating the mediating effects of seeking clarification and peer feedback to reduce uncertainty. It assesses interactive and passive URS and establishes their connection to continuance intention. In practical terms, within the higher education context, the study provides insights into the ethical use of ChatGPT, promoting ethical practices in alignment with academic integrity and misconduct standards.

2.2. The Study’s Theoretical Framework

The URT, initially conceptualized by Berger and Calabrese (1974) [26], serves as the foundational theoretical groundwork for this study. While traditionally rooted in interpersonal communication [26], the URT’s applicability extends beyond these origins, finding relevance in diverse research domains, including technology [24] and social media research [23]. In this research, the URT takes on a central role, offering a robust framework to explore the sophisticated factors that contribute to ChatGPT continuance intention within the unique context of higher education. Continuance intention, as examined within this study, pertains to users’ intentions to persist in their usage of ChatGPT as an essential tool for various academic activities [3]. The URT framework provides a structured approach to understanding the sources of uncertainty and the strategies employed to mitigate this uncertainty [33], particularly within the context of ChatGPT. This, in turn, directly influences the continuance intention of ChatGPT users.
This research explores two types of URS, interactive and passive, and integrates them into the URT model to predict the continuance intention of ChatGPT users. Interactive URS encompass direct communication behaviors, including self-disclosure and interrogation [34]. Through self-disclosure, users share information and usage goals with peers, enhancing the quality of interactions with ChatGPT. In contrast, passive URS, guided by principles similar to those proposed by Emmers and Canary (1996) [35], involve unobtrusively observing how others, especially peers, interact with ChatGPT, providing insights into sources of uncertainty such as transparency, information accuracy, and privacy concerns. This approach supports a holistic model that examines the interplay of transparency concerns, information accuracy, and privacy concerns in influencing the level of uncertainty.
By employing URS as a theoretical foundation, this research broadens the application of the URT to the evolving landscape of artificial intelligence in education, offering insights into how users strategically employ both interactive and passive strategies to reduce uncertainty. Furthermore, this study introduces mediating effects, examining how seeking clarification and peer feedback contribute to transforming and confirming transparency, privacy, and information accuracy in the research model. The theoretical framework is illustrated in Figure 1.

3. Hypothesis Development

3.1. URS to Reduce Uncertainty

This research explores two strategies for reducing uncertainty in interactions with ChatGPT: interactive and passive URS. Interactive URS are rooted in the URT, a well-established communication theory positing that individuals who encounter uncertainty in interpersonal communication situations employ strategies to diminish it [26]. One pivotal strategy within URT is active information-seeking and interaction [36]. In the context of technology adoption, interactive URS suggest that users may engage in self-disclosure and interrogation to minimize uncertainty [37]. Self-disclosure, as defined by URT, involves revealing personal information about oneself to another person [26]. In the context of technology adoption and communication theory, self-disclosure can be interpreted as actively sharing one’s concerns, doubts, and questions about a technology or system [38]. In the case of ChatGPT, self-disclosure might involve a user openly discussing their uncertainties, seeking clarification, or expressing reservations about using the system for academic tasks. By sharing these concerns, users actively engage in a dialogue with the technology to reduce uncertainty. Interrogation, within URT, refers to the process of asking questions and seeking information from others [26,39]. In the context of technology adoption, interrogation can involve actively seeking answers and clarifications [40]. Users may ask ChatGPT about its capabilities, limitations, and potential risks. By engaging in interrogation, individuals aim to gain a better understanding of the technology, thus reducing uncertainty through a proactive information-seeking approach. Previous studies have investigated ChatGPT adoption in higher education [12,20,22] but have not proposed specific strategies for reducing user uncertainty. Building on Antheunis et al. (2010) [31], this research defines the primary hypothesis regarding ChatGPT's uncertainty reduction strategies as follows:
H1a. 
Interactive URS implemented by users positively influence the attainment of a low level of uncertainty in ChatGPT utilization.
In accordance with the URT concept, passive URS are rooted in the notion of observational learning [26]. In this context, passive observation conducted by users serves as a fundamental mechanism through which individuals can quietly reduce uncertainty. Observational learning, a psychological concept, highlights how individuals acquire knowledge, skills, and behaviors by observing and imitating others [41,42]. When applied to ChatGPT adoption in higher education, particularly within the framework of this study based on the URT, observational learning is relevant as it pertains to individuals passively observing how their peers and others interact with ChatGPT. This involves silently studying the actions, successes, and experiences of others to gain insights and reduce uncertainty. Therefore, this study assumes that when individuals passively observe the use of ChatGPT in the higher education landscape, such observation can reduce uncertainty. It thereby extends the findings of previous studies [24,32], advancing technology adoption through observing and learning to diminish uncertainty. Consequently, the following hypothesis is proposed:
H1b. 
Passive URS implemented by users positively influence the attainment of a low level of uncertainty in ChatGPT utilization.

3.2. Source of Uncertainty and Low-Level Uncertainty

The integration of AI-powered ChatGPT has the potential to bring about a transformative shift in educational practices [43]. However, it is imperative to acknowledge that this profound transformation is accompanied by a spectrum of challenges, with one of the most prominent being the exacerbation of uncertainty arising from various sources. This study, firmly rooted in the framework of the URT [26], embarks on a comprehensive exploration of the origins of uncertainty that manifest within the context of ChatGPT adoption in higher education. This research identifies three primary sources of uncertainty: transparency concerns, privacy issues, and information accuracy. Transparency concerns relate to the opacity of ChatGPT’s decision-making processes, privacy issues involve the protection of sensitive information, and information accuracy pertains to concerns about the correctness and reliability of information generated by ChatGPT. These sources collectively contribute to an intensified sense of uncertainty within the context of ChatGPT utilization.
According to previous studies, among the various sources of uncertainty entailed by AI-powered ChatGPT is transparency concern [44]. The conspicuous lack of transparency within the ChatGPT paradigm has been noted in prior studies [2,44,45]. Notably, a significant illustration of this impenetrability lies in the puzzling nature of ChatGPT’s decision-making processes [46], often likened to a ‘black box’ [44]. For instance, users are confronted with a scarcity of insights into the criteria, data sources, and algorithms underpinning ChatGPT’s response generation. This operational obscurity elicits tangible uncertainty, eroding the trust placed in AI systems. Furthermore, the absence of transparency can confound effective interaction with the technology, hindering its seamless integration into educational practices. Therefore, the following hypothesis is proposed:
H2a. 
Transparency concern will negatively influence the attainment of a low level of uncertainty.
In the realm of higher education, a significant source of uncertainty arising from the utilization of ChatGPT is information accuracy. This issue assumes paramount importance, as the accuracy of data (text) generated by ChatGPT has the potential to give rise to problems or provide inadequately substantiated information [47], consequently leading to substantial uncertainty. For instance, when ChatGPT is employed for research development and serves as the primary source of interaction to obtain information on research methodologies and contextual study phenomena, users may contend with distressing doubts regarding the accuracy of the research process, validation, theoretical foundations, and current phenomena under investigation. Similarly, researchers who utilize ChatGPT as a tool for creating research content encounter uncertainty concerning the accuracy, reliability, and coherence of the information it imparts [47]. Thus, the study proposes the following hypothesis:
H2b. 
Information accuracy will negatively influence the attainment of a low level of uncertainty.
In the current digital technology era, data privacy has acquired increasing prominence. Within the context of ChatGPT, concerns related to safeguarding personal, sensitive, and confidential information assume a central role in the discourse surrounding ChatGPT utilization. Prior research highlights that the convergence of academic activities with ChatGPT gives rise to a fusion of data and textual inputs, including potentially sensitive information [48]. This information fusion includes deeply personal considerations, confidential academic records [49], and sensitive data. These circumstances highlight concerns related to data privacy, secure data management, and protection against unwarranted intrusion or breaches. These privacy concerns extend beyond the realm of mere uncertainty, reaching into the ethical and legal domains and necessitating comprehensive attention and resolution within educational settings. Further, this study posits the following hypothesis:
H2c. 
Privacy concern will negatively influence the attainment of a low level of uncertainty.

3.3. Sources of Uncertainty, Seeking Clarification, and Consultation Peer Feedback

This study investigates the context of ChatGPT utilization in higher education and posits transparency concern, information accuracy, and privacy concern as crucial factors influencing the extent to which users actively engage in seeking clarification. Transparency concern, in the context of ChatGPT’s decision-making processes, manifests as users’ apprehension about the system’s opacity [2,45]. Users confronted with transparency concerns are more inclined to seek clarification. This relationship stems from the recognition that when users encounter uncertainty due to the opaqueness of ChatGPT’s response generation processes [44], they are motivated to proactively seek clarification to gain insights into the underlying mechanisms. Seeking clarification serves as a mechanism for users to address their concerns regarding ChatGPT’s ‘black box’ nature and to attain a deeper understanding of the criteria and algorithms governing its responses [2]. Information accuracy concerns emerge when users question the reliability and precision of information furnished by ChatGPT [2]. In this scenario, the relationship between information accuracy and seeking clarification takes center stage. Users exhibiting concerns about the accuracy of the information they receive are more inclined to engage in seeking clarification to validate, refine, or enhance the information provided by ChatGPT. This proactive pursuit of clarification thus becomes a strategic approach to enhance the accuracy of information, rendering it more reliable and pertinent for academic activities [2].
Privacy concern, a prevailing consideration in the contemporary age of data privacy [16], relates to users’ anxieties about the safeguarding of their personal data and sensitive information within the ChatGPT system. In the specific context of this study, privacy concern significantly influences the propensity to seek clarification. Users harboring privacy concerns are motivated to actively seek clarification to gain a deeper understanding of how ChatGPT handles their personal information [16]. Through the act of seeking clarification, they aspire to acquire insights into data security measures and data handling procedures. This proactive process instills a sense of control and assurance, consequently mitigating uncertainty linked to privacy [16]. These interconnected relationships underscore the dynamic nature of user interactions with ChatGPT. Ultimately, transparency concern, information accuracy, and privacy concern serve as key elements driving users to proactively seek clarification. This, in turn, contributes to a more effective and informed utilization of ChatGPT within the realm of higher education. As a result, this study posits the following hypothesis:
H3a, b, c.
Transparency concern, information accuracy, and privacy concern will significantly influence seeking clarification.
Within the context of ChatGPT usage in higher education, this study attempts to elucidate how various sources of uncertainty, such as transparency concern, information accuracy, and privacy concern, are connected to users’ engagement in consultation peer feedback. Firstly, transparency concern pertains to users’ unease about ChatGPT’s decision-making processes, which are often seen as having a ‘black box’ nature, providing little insight into the underlying algorithms [44]. Consequently, when users experience transparency concerns regarding ChatGPT’s decision-making mechanisms, consulting with peers is presented as a solution in this study. Engaging in consultations regarding ChatGPT’s generative text allows users to gain insights into the algorithms, making each response clearer, thus boosting users’ confidence in these responses. Similarly, generative text from systems with a ‘black box’ nature raises concerns about information accuracy [2,44]. Therefore, when users actively question the precision and reliability of the information provided by ChatGPT, consulting with peers is deemed an effective strategy. This leads users to seek validation, deeper insights, and clarification on ChatGPT’s generative text. This drive highlights the role of consultation peer feedback as a means to enhance information accuracy and ensure its reliability for academic purposes.
Furthermore, concerns about privacy in using ChatGPT can also be addressed through consultation with peers. This should be done through mechanisms that involve peers with expertise in data privacy to exchange ideas and gain deeper insights into the proper use of ChatGPT. Users with privacy concerns are motivated to engage in consultations with peers to better understand how ChatGPT manages their personal data and sensitive information [16]. This collaborative approach allows them to gather insights into data security measures and data handling procedures, ultimately fostering a sense of control and assurance while mitigating concerns related to privacy. In summary, the relationships proposed in this study aim to elucidate the dynamic nature of user interactions with ChatGPT. Consequently, this study posits the following hypothesis:
H4a, b, c.
Transparency concern, information accuracy, and privacy concern will significantly influence consultation peer feedback.

3.4. Seeking Clarification to Reduce Uncertainty

In this study, “seeking clarification” refers to the active efforts made by individuals in higher education to verify and improve their understanding of the responses generated by AI-powered ChatGPT (emphasizing efforts at the individual level). This conceptualization aligns with a communication concept proposed by Stephens (2012) [50], suggesting that individuals actively seek clarification to enhance their comprehension of a particular issue. The absence of prior research on the role of seeking clarification in reducing uncertainty motivates its inclusion in this research model (see [12,32]). Seeking clarification in this study is distinct from the concept of interactive URS introduced earlier. While interactive URS relates to the extent to which individuals actively expose and interrogate the use of ChatGPT for deeper insights (clarification through interrogation and self-exposure), seeking clarification in this context pertains to actively seeking information and clarification directly from ChatGPT (individual–system level of clarification). For example, an individual who engages interactively with ChatGPT on a specific topic may experience confusion about the response provided. In this situation, the individual actively seeks continuous clarification based on their existing knowledge. Thus, the concept of seeking clarification in this study focuses on the system-individual stage of clarification. As a result, the learning process occurs at the individual level through the cognitive stage, where individuals actively interact with ChatGPT and comprehend the responses on an individual basis [2].
The relationship between “Seeking Clarification” and “Low-level of Uncertainty” is rooted in the premise that active information-seeking and clarification-seeking in ChatGPT interactions empower users to reduce uncertainty. When users engage with ChatGPT with a specific aim to clarify responses and understand better [2], they naturally enhance their comprehension. This iterative process leads to a reduction in uncertainty, as users progressively gain confidence in ChatGPT’s capabilities and responses. Consequently, “Seeking Clarification” is a proactive mechanism that fosters a low level of uncertainty, enabling more effective and confident ChatGPT integration into educational practices. Therefore, this study posits the following hypothesis:
H5. 
When individuals actively seek clarification by utilizing their cognitive abilities and interacting with ChatGPT, it results in a reduced level of uncertainty.

3.5. Consultation Peer Feedback to Reduce Uncertainty

In the study by Wilkins & Shin (2010) [51], peer feedback was defined as reciprocal teaching, where paired educators provided mutual assistance to observe and improve their teaching techniques within the classroom. This definition was situated within the educational context, primarily focusing on the development of innovative learning processes through feedback mechanisms. However, the current technological landscape has brought about a profound transformation in the field of education [2]. Notably, the advent of disruptive technologies such as AI-powered ChatGPT has revolutionized the learning process, as it can generate personalized responses through user interactions. Nevertheless, questions persist regarding the accountability and accuracy of the generative responses obtained from ChatGPT. This issue has been a subject of discussion in previous studies, which have highlighted that text generated by ChatGPT may lead to issues like plagiarism and misinformation. Consequently, the redefinition of peer feedback, particularly in the context of ChatGPT, appears to be of paramount importance. This research proposes a redefinition of consultation peer feedback within the framework of disruptive technologies in learning (e.g., ChatGPT). It is characterized as a dynamic and co-creative process that transcends the traditional concept of users merely providing insights to one another. In the context of this study, involving ChatGPT and other advanced technologies in education, consultation peer feedback signifies a complex collaboration between users (e.g., students, educators) and AI-powered models (e.g., ChatGPT). It embraces active engagement with ChatGPT to enhance users’ understanding, knowledge acquisition, and creative problem-solving abilities, with consultations involving educators and peers. This redefined concept represents a transformative and generative partnership in which human peers and AI systems collaborate to shape innovative and multifaceted learning experiences.
Paradoxically, existing literature often overlooks the pivotal role of human consultation, particularly peer interactions, in attaining generative outcomes from ChatGPT. Despite numerous studies investigating peer feedback and the integration of technology in education (see [3,10,12]), few delve deeply into the unique challenges posed by AI-driven educational tools. Within this study, it is presumed that uncertainty intensifies as the opacity of obtained results, concerns regarding privacy and information accuracy, the reliability of AI-generated responses and the ethical dilemmas they raise, and the need for users to navigate these complexities compound one another. This area remains relatively unexplored in scholarly discourse. The present research aspires to bridge this substantial gap by conducting a thorough examination of the interplay between peer consultation feedback and the alleviation of uncertainties during the integration of ChatGPT into learning environments. By elucidating the dynamic relationship of consultation peer feedback and its effect on the mitigation of uncertainties, this study aims to offer pioneering insights into the transformative potential of human consultation and AI collaboration in the digital age, thereby reshaping the educational landscape.
Within the context of reducing uncertainty, scholars have initiated an inquiry that addresses the reduction of uncertainty and caution regarding ChatGPT, while also providing recommendations for the utilization of multiple learning methods and traditional approaches. One traditional method for fostering innovation in the learning process is consultation peer feedback, as established by the study conducted by Wilkins & Shin (2010) [51]. In the context of this research, it is proposed that consultation peer feedback be advanced as a vital mechanism for diminishing uncertainty, as it actively stimulates discussions, encourages feedback, and culminates in comprehensive deliberations. For instance, when users without a computer science background engage in discussions regarding generative text (e.g., Python code) generated by AI-powered ChatGPT with experts (e.g., peers who are experts in computer science), the resulting discourse is more likely to be comprehensive and accurate. Furthermore, such interactions can serve as a gateway to acquiring new knowledge, rendering the learning process more interactive, transformative, collective, confirmative, and accelerative. Consequently, this study posits the following hypothesis.
H6. 
When individuals engage in consultation peer feedback about the generated text from ChatGPT, it leads to a low level of uncertainty.

3.6. Mediating Effect of Seeking Clarification

Within the context of this research, seeking clarification is proposed as an individual’s ability to employ cognitive skills to obtain confirmation on sought information with the objective of reducing uncertainty through the use of generative text from ChatGPT. Three sources of uncertainty have been identified, including concerns related to transparency, information accuracy, and privacy. Transparency concern arises from a lack of insights into ChatGPT’s decision-making processes and its perceived ‘black box’ nature. Accommodating this concern involves users proactively engaging in clarification-seeking behaviors as a strategic approach [2]. Through active clarification-seeking at the individual stage, users gain insights into ChatGPT’s learning processes and its responses to personalized questions. By iteratively seeking clarification and improving their comprehension, users gradually develop confidence in ChatGPT’s capabilities and the reliability of its responses, thereby enhancing transparency and reducing uncertainty.
Regarding information accuracy in generative text from ChatGPT, users who prioritize the quality of information they receive from ChatGPT are expected to proactively seek additional information or clarification to validate and enhance the accuracy of the responses, as emphasized by Ashford et al. (2003) [52]. This continuous process of seeking clarification operates as an intermediary step in mitigating uncertainty. As users actively engage in acquiring supplementary information and validation, the level of uncertainty associated with concerns about information accuracy is likely to diminish [53]. Therefore, seeking clarification assumes a key role as an intermediary mechanism in significantly reducing user uncertainty. In response to privacy concerns, users employ clarification-seeking behaviors as a proactive strategy to address uncertainties linked to data security and privacy. The active process of seeking clarification, serving as a mediating mechanism, bolsters users’ confidence in the security of their personal information and sensitive data when interacting with ChatGPT. As users progressively gain a better understanding of the protective measures in place, their level of uncertainty concerning privacy concerns is anticipated to decline. Therefore, the study posits the following hypothesis:
H7a – c.
Seeking clarification mediates the relationships between (a) transparency concern, (b) information accuracy, and (c) privacy concern and the attainment of a low level of uncertainty.

3.7. Mediating Effect of Consultation Peer Feedback

As outlined in this study, the concept of consultation peer feedback underpins a dynamic collaborative process involving users, including students and educators, and AI models such as ChatGPT. Its primary purpose is to enhance comprehension and problem-solving, making it imperative for reducing uncertainty from various sources, including transparency concern, information accuracy, and privacy concern. To address transparency concerns effectively, engaging with peers becomes a crucial step [51]. Discussing ChatGPT’s ‘black box’ decision-making processes with peers serves to provide users with a clearer understanding of its operational procedures and decision-making rationale. This emphasis on transparency, fostered through peer feedback, is expected to lead to a reduction in uncertainty linked to transparency concerns. Furthermore, as consultation peer feedback contributes to the enhancement of information accuracy [54], users’ uncertainty regarding the reliability of ChatGPT’s generated information is anticipated to decrease. Simultaneously, in response to privacy concerns, actively consulting with peers enables users to seek clarification and thereby boosts their confidence in the security of their personal data during interactions with ChatGPT. Thus, this study posits the following hypothesis:
H8a – c.
Consultation peer feedback mediates the relationships between (a) transparency concern, (b) information accuracy, and (c) privacy concern and the attainment of a low level of uncertainty.

3.8. Reduced Uncertainty to Continuance Intention

Continuance intention in the context of technology acceptance essentially refers to users' willingness to continue using a particular technology, taking various considerations into account. For instance, drawing from Joo et al. (2018) [55] in the context of learners' acceptance of technology, continuance intention is defined as learners' willingness to persist in using technology, regardless of whether they complete an entire course or not, based on their initial experiences. In a similar manner, this research recontextualizes the previous definition for ChatGPT continuance intention. In the context of low uncertainty and ChatGPT acceptance, it signifies the user’s determination to engage persistently and willingly with ChatGPT for educational purposes over an extended period. This intention reflects the user’s commitment to continuously utilize ChatGPT, rooted in their reduced uncertainty and a heightened sense of trust and confidence in the system’s capacity to offer accurate and valuable educational support.
This study suggests that a low level of uncertainty indicates a strong sense of user trust in the AI’s competence, leading to a more positive and confident user experience across diverse applications of ChatGPT. Reduced uncertainty plays a crucial role in establishing user trust and confidence [56], which are dynamic factors influencing user intention to continue using ChatGPT as an educational tool. Low-level uncertainty is anticipated to foster a perception of predictability and reliability in ChatGPT’s responses. When users feel more confident in the accuracy and trustworthiness of the information provided, they are more inclined to maintain their engagement with the tool for their educational needs [57]. Conversely, high levels of uncertainty may lead to user hesitancy and a reluctance to continue using ChatGPT, as it might be seen as less dependable for educational purposes [2]. This study suggests that when users experience reduced uncertainty and an enhanced level of trust in ChatGPT, they are more likely to persist in using it as a reliable educational tool. This emphasizes the importance of minimizing uncertainty in fostering user trust and sustaining engagement with AI-powered educational systems. Therefore, the following hypothesis is posited.
H9. 
When users perceive ChatGPT with a low level of uncertainty, they are encouraged to continue using the system.

4. Methods

4.1. Operationalization and Measures

This research forms its foundation upon the Uncertainty Reduction Theory (URT). Within this theoretical framework, we introduce two key mediating constructs: seeking clarification and consultation peer feedback. Concurrently, we identify several sources of uncertainty, including information accuracy, transparency, and privacy concern. Our primary aim is to investigate strategies for mitigating uncertainty and assess the continuance intention regarding ChatGPT within the higher education landscape. Throughout this study, we offer specific definitions developed through an extensive review of existing literature. This choice arises from the observation that these constructs have not received extensive prior investigation in the realm of previous studies. However, it is important to note that our selection of measures for each construct is carefully grounded in established research. Finally, every proposed measure undergoes thorough testing and evaluation, in accordance with widely recognized statistical criteria. For a detailed overview of the operational definitions and measures used in this research, please consult Table 2 below.

4.2. Sampling Technique and Data Collection

This study employs a survey-based approach to examine ChatGPT users in the higher educational landscape of Indonesia, including students (e.g., undergraduate and graduate students) and lecturers. The primary objective of this survey is threefold: (1) to investigate the uncertainty reduction strategies employed by users in response to text generated by ChatGPT, (2) to understand how students and lecturers interact in seeking clarification and consultation peer feedback regarding ChatGPT-generated text, and (3) to predict user behavior concerning the continued use of ChatGPT for academic purposes. To achieve these goals, a purposive sampling method is employed to select participants meeting specific criteria: (1) individuals who use ChatGPT for academic purposes, such as research and coursework, and (2) individuals who have discussed text generated by ChatGPT with colleagues and experts within the academic environment. These criteria are imperative for potential respondents to qualify for participation in the study.
The survey was conducted using an online questionnaire in the form of a Google Form, distributed randomly to users based on predefined criteria. The questionnaire comprises three sections. Firstly, respondents are asked about their experiences using ChatGPT and their interactions with peers regarding the results generated by ChatGPT. Subsequently, respondents are requested to provide demographic information, including gender, age, educational background, the type of ChatGPT used (e.g., GPT 3.5, GPT 4.0), ChatGPT usage frequency, occupation (e.g., students, lecturers, researchers), and the type of university (e.g., public, private, foreign institution). Following the completion of demographic information, respondents proceed to the core questionnaire items developed, as detailed in Table 2. In a bid to enhance response efficacy, the questionnaire concludes with information regarding a financial reward to be randomly distributed to respondents. Once the questionnaire is prepared in Google Forms, a link is generated and randomly shared with higher education community members through various social media platforms (e.g., WhatsApp, LINE, Facebook Messenger, Instagram, etc.). Data collection was carried out from July to October 2023, resulting in 566 participants whose responses were considered valid out of an initial 578 respondents; some were excluded due to ineligibility. The demographic information of the respondents is presented in Table 3.

4.3. Analysis Technique

This study employs the Structural Equation Modeling (SEM) technique, using Smart-PLS 4.0 software to validate and assess the research hypotheses. The SEM methodology comprises a sequence of rigorous evaluations, starting with an evaluation of validity and reliability [63]. This includes an assessment of convergent validity [63], the computation of internal consistency [63], and an investigation of discriminant validity within the model under investigation [64,65]. Furthermore, the Smart-PLS software is utilized to evaluate the model’s explanatory power, as indicated by the R-square criterion [66]. In addition to SEM analysis, SPSS version 26 software is employed to scrutinize Common Method Variance (CMV). This analysis aims to ensure the consistency of participants’ responses to each questionnaire item presented [67].

5. Results

5.1. Sample Demographics

A total of 566 responses were gathered during the data collection period. The sample profile, as presented in Table 3, reveals that, with regard to gender, males (45.4%) demonstrated lower utilization of ChatGPT compared to females (54.6%). In terms of age categories, the most substantial user group was individuals between 21 and 40 years old (64.3%). Concerning educational levels, users with master's degrees (46.3%) and undergraduate degrees (34.8%) represented the majority, with fewer users holding doctorates (11.1%) and vocational qualifications (7.8%). Private universities were the dominant institutions among the users (70.6%), and ChatGPT 3.5 (54.2%) had higher usage compared to GPT 4.0 (45.8%). Regarding frequency of usage, users tend to engage with ChatGPT several times a day (25.9%), indicating fairly intensive utilization. Lastly, the largest group of users (48.1%) reported having less than six months of experience with ChatGPT, in contrast to those with more than a year of experience and less than a month of experience.

5.2. Common Method Variance

This research utilized a self-reported survey as the primary data collection method, prompting the need to assess common method variance (CMV). Harman’s Single Factor technique, executed using SPSS version 26 software, was employed to evaluate CMV. In this procedure, all items were loaded onto a single factor, allowing for the examination of response consistency. CMV is considered minor when the obtained value falls below 50% [67]. In this study, the CMV value is 27.1%, well below the 50% threshold, signifying an absence of significant CMV concerns [67]. Furthermore, CMV robustness was assessed using the variance inflation factor (VIF) approach. Recommended guidelines for VIF evaluation indicate that values below 3 signal the absence of multicollinearity among the items employed. The VIF values obtained in this study range from 1.215 (minimum) to 1.756 (maximum), confirming the absence of any significant concerns. In summary, the assessment of CMV in this study reveals robust results, signifying the validity and reliability of the collected data.
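
For illustration, both CMV checks described above can be reproduced outside SPSS in a few lines of Python. The sketch below is a minimal, hypothetical implementation, assuming the item responses are available as a respondents-by-items table named `items`; it operationalizes Harman's test via the first unrotated component and is not the study's actual script.

import numpy as np
import pandas as pd

def harman_single_factor(items: pd.DataFrame) -> float:
    """Share of total variance captured by the first unrotated factor."""
    corr = np.corrcoef(items.to_numpy(), rowvar=False)
    eigvals = np.linalg.eigvalsh(corr)      # eigenvalues, ascending
    return eigvals[-1] / eigvals.sum()      # largest / total (trace = p)

def vif(items: pd.DataFrame) -> pd.Series:
    """Variance inflation factor for each item, regressed on all others."""
    X = items.to_numpy(dtype=float)
    out = {}
    for j, name in enumerate(items.columns):
        y, others = X[:, j], np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(len(y)), others])   # add intercept
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        r2 = 1 - (y - A @ beta).var() / y.var()
        out[name] = 1.0 / (1.0 - r2)
    return pd.Series(out)

# Flag CMV if the single factor exceeds 0.50 or any VIF exceeds 3:
# harman_single_factor(items), vif(items).max()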

5.3. Assessment of Validity and Reliability

Table 2 provides an overview of the evaluation of convergent validity and internal consistency. In order to fulfill these criteria, specific items were eliminated from the analysis due to their failure to meet the predefined threshold values for outer loadings (OL); these items are denoted by marks (*) in Table 2. Following the removal of these items, a comprehensive reevaluation was conducted. The subsequent analysis demonstrated that all outer loading (OL) values exceeded the recommended threshold of 0.70, as established by Hair et al. (2017) [63]. Furthermore, the values for Cronbach's alpha, Composite Reliability (CR), and Average Variance Extracted (AVE) surpassed their respective thresholds of 0.70 (for alpha and CR) and 0.50 (for AVE), also as stipulated by Hair et al. (2017) [63]. This analysis ensures the absence of concerns regarding convergent validity and internal consistency in this study.
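
For readers less familiar with these criteria, the following minimal Python sketch shows how CR and AVE follow from the outer loadings using the standard PLS-SEM formulas. The loading values and construct labels below are illustrative assumptions, not the study's data.

import numpy as np

def ave(lams):                      # Average Variance Extracted
    lams = np.asarray(lams)
    return np.mean(lams ** 2)

def composite_reliability(lams):    # CR, a.k.a. rho_c
    lams = np.asarray(lams)
    num = lams.sum() ** 2
    return num / (num + np.sum(1 - lams ** 2))

loadings = {"TC": [0.84, 0.88, 0.87], "SC": [0.72, 0.78, 0.81]}  # hypothetical
for construct, lams in loadings.items():
    keep = [l for l in lams if l >= 0.70]     # drop items below the 0.70 cut
    print(construct, round(ave(keep), 3), round(composite_reliability(keep), 3))

Both statistics exceed their thresholds (AVE > 0.50, CR > 0.70) only when the retained loadings are uniformly high, which is why low-loading items are removed first.
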
Discriminant validity was evaluated using three approaches: the Fornell-Larcker Criterion and HTMT, both reported in Table 4, and the cross-loadings matrix. Firstly, the Fornell-Larcker Criterion indicates that all the square root values of AVE (diagonal values) are greater than the correlations between the respective variables, indicating that discriminant validity is not a concern [65]. Secondly, the evaluation of HTMT values, overall, shows values below 0.85, signifying strong discriminant validity [64]. Furthermore, the assessment of the cross-loadings matrix reveals that all outer loading values are greater than the loadings obtained outside their respective constructs [63]. Hence, discriminant validity is not an issue.
Table 4. Discriminant Validity.

        CPF            CI             IA             INT            LLU            PSS            PC             SC             TC
CPF     0.883
CI      0.200 (0.268)  0.847
IA      0.319 (0.499)  0.220 (0.351)  0.838
INT     0.302 (0.454)  0.309 (0.465)  0.315 (0.455)  0.884
LLU     0.414 (0.629)  0.306 (0.453)  0.254 (0.374)  0.457 (0.644)  0.867
PSS     0.311 (0.557)  0.290 (0.534)  0.256 (0.463)  0.345 (0.421)  0.207 (0.257)  0.857
PC      0.389 (0.603)  0.313 (0.466)  0.434 (0.654)  0.221 (0.315)  0.303 (0.442)  0.346 (0.624)  0.871
SC      0.161 (0.239)  0.456 (0.748)  0.358 (0.558)  0.418 (0.595)  0.176 (0.240)  0.379 (0.661)  0.354 (0.521)  0.774
TC      0.358 (0.555)  0.391 (0.616)  0.419 (0.613)  0.504 (0.729)  0.429 (0.629)  0.334 (0.560)  0.401 (0.558)  0.454 (0.678)  0.866
Notes:
  • The values within parentheses represent HTMT, with a threshold of < 0.90 indicating weak validity and < 0.85 indicating strong validity.
  • The values along the diagonal are the square roots of AVE.
  • The other values represent intercorrelations between variables for measuring the Fornell-Larcker criterion.
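
To make the two criteria in Table 4 concrete, the sketch below shows how the Fornell-Larcker check and the HTMT ratio can be computed from raw item data. It is a hedged illustration: `items` is assumed to be a respondents-by-items DataFrame, the construct blocks and column names are hypothetical, and Smart-PLS performs these computations internally.

import numpy as np
import pandas as pd

def htmt(items: pd.DataFrame, block_i, block_j) -> float:
    """Heterotrait-monotrait ratio for two constructs' item blocks."""
    corr = items.corr().abs()
    hetero = corr.loc[block_i, block_j].to_numpy().mean()
    def mono(block):  # mean off-diagonal correlation within one block
        c = corr.loc[block, block].to_numpy()
        return c[~np.eye(len(block), dtype=bool)].mean()
    return hetero / np.sqrt(mono(block_i) * mono(block_j))

def fornell_larcker_ok(ave_i: float, corrs_with_others) -> bool:
    """sqrt(AVE) must exceed every correlation with other constructs."""
    return np.sqrt(ave_i) > np.max(np.abs(corrs_with_others))

# blocks = {"TC": ["tc1", "tc2", "tc3"], "PC": ["pc1", "pc2", "pc3"]}  # hypothetical
# htmt(items, blocks["TC"], blocks["PC"])   # flag if > 0.85 (0.90 lenient)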

5.4. Model Robustness Test

A robustness test was performed to evaluate the model's effectiveness and suitability for hypothesis testing. This assessment includes two key methods: R-square analysis and model fit evaluation, both of which assess the predictive capacity of independent variables within distinct models. In the initial model, continuance intention to use ChatGPT was contingent on a low level of uncertainty, with additional factors from uncertainty reduction strategies (e.g., interactive and passive URS) and sources of uncertainty (e.g., transparency and privacy concern, information accuracy), as per the Uncertainty Reduction Theory (URT). The R-square value for continuance intention amounted to 0.435, signifying that a low level of uncertainty can explain 43.5% of the variance in intention to use ChatGPT. In the second model, low-level uncertainty was assessed in the context of interactive and passive URS, seeking clarification, and consultation peer feedback. Here, the R-square value reached 0.393, indicating that these four factors can explain 39.3% of the variance in low-level uncertainty. Subsequently, the third model examined seeking clarification within the framework of the sources of uncertainty, resulting in an R-square value of 0.135; the sources of uncertainty thus explain 13.5% of the variance in seeking clarification. In the fourth model, consultation peer feedback was assessed in relation to the sources of uncertainty, yielding an R-square value of 0.351, meaning the sources of uncertainty account for approximately 35.1% of the variance in consultation peer feedback. All of these values exceed the recommended threshold of 10% proposed by Falk and Miller (1992) [66]. Consequently, based on the R-square criteria, the model qualifies as robust. Moreover, the model fit test produced the following values: SRMR 0.059, Chi-square 2347.6, NFI 0.830, d_ULS 2.715, and d_G 1.513. These results meet all the prescribed criteria [63], further confirming the model’s robustness.
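
As an illustration of the R-square figures above, the following sketch computes the coefficient of determination for an endogenous construct from latent construct scores. The `scores` table and its column names are assumptions for illustration only, since Smart-PLS reports these values directly.

import numpy as np
import pandas as pd

def r_square(scores: pd.DataFrame, dv: str, ivs: list) -> float:
    """R-square of an OLS regression of one construct score on others."""
    y = scores[dv].to_numpy(dtype=float)
    X = np.column_stack([np.ones(len(y))] +
                        [scores[c].to_numpy(dtype=float) for c in ivs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

# e.g. r_square(scores, "CI", ["LLU"])                        # model 1: ~0.435
# e.g. r_square(scores, "LLU", ["INT", "PSS", "SC", "CPF"])   # model 2: ~0.393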

5.5. Hypothesis Testing

Hypothesis testing is categorized into two phases. Firstly, the direct hypotheses proposed in the study are examined. Secondly, the study examines the mediating hypotheses regarding the roles of seeking clarification and consultation peer feedback in the research model. The following sections detail the hypothesis testing procedures.

5.5.1. Direct Hypothesis

A concise description of the direct effect hypotheses is delineated in Table 6. Firstly, within the realm of URS, interactive URS (β = 0.321; t = 6.550), as initiated by users, exert a more pronounced impact on attaining a low level of uncertainty than passive URS (β = -0.039; t = 0.852). This lends support to H1a and necessitates the dismissal of H1b. Turning to the sources of uncertainty, transparency concern (β = -0.024; t = 0.852) and information accuracy (β = -0.008; t = 0.164) have a negative but statistically insignificant effect on the attainment of a low level of uncertainty. Consequently, hypotheses H2a and H2b are rejected. Privacy concern (β = 0.018; t = 1.301) appears to have a positive but likewise insignificant influence, resulting in the rejection of H2c.
Furthermore, we examined the impact of the sources of uncertainty on seeking clarification and consultation peer feedback. The sources of uncertainty significantly influence seeking clarification, with transparency concern (β = 0.327; t = 7.056), information accuracy (β = 0.152; t = 3.175), and privacy concern (β = 0.157; t = 3.169) all contributing to this effect; these results support hypotheses H3a to H3c. Similarly, for consultation peer feedback, transparency concern (β = 0.205; t = 4.560), information accuracy (β = 0.123; t = 2.444), and privacy concern (β = 0.253; t = 4.973) exert significant influences, corroborating hypotheses H4a to H4c.
Subsequently, we examined the impact of seeking clarification and consultation peer feedback on low level of uncertainty. The results show that seeking clarification (β = -0.112; t = 1.848) does not contribute meaningfully to attaining a low level of uncertainty, so H5 is rejected. Conversely, consulting peer feedback (β = 0.231; t = 4.854) is a significant means of reducing uncertainty, supporting H6. Finally, we assessed the influence of low level of uncertainty on continuance intention: once a low level of uncertainty is attained, it is significantly associated with the intention to continue using ChatGPT, reinforcing H9 (β = 0.306; t = 6.603).
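The t-values and confidence bounds reported in Table 6 come from bootstrapping, the resampling procedure SmartPLS uses for significance testing. The Python sketch below illustrates the idea on simulated data (not the study's): resample cases with replacement, re-estimate the path each time, take the t-value as the estimate divided by the bootstrap standard error, and read the confidence interval from the percentiles of the resampled coefficients.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated standardized scores (illustrative, not the study's data):
# interactive URS predicting low level of uncertainty, true path ~ 0.32.
n = 566
x = rng.standard_normal(n)
y = 0.32 * x + rng.standard_normal(n)

def path_coef(x, y):
    # OLS slope for a single predictor (the path coefficient here).
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

# Bootstrap: resample cases with replacement and re-estimate the path.
B = 5000
boot = np.empty(B)
for b in range(B):
    s = rng.integers(0, n, size=n)
    boot[b] = path_coef(x[s], y[s])

est = path_coef(x, y)
t_value = est / boot.std(ddof=1)              # estimate / bootstrap SE
lo, hi = np.percentile(boot, [2.5, 97.5])     # percentile bounds (2.5%, 97.5%)
print(f"beta = {est:.3f}, t = {t_value:.2f}, CI = [{lo:.3f}, {hi:.3f}]")
# |t| > 1.96 and a CI excluding zero indicate significance at the 5% level.
```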

5.5.2. Mediating Hypotheses

This study introduces two mediating variables, seeking clarification and consultation peer feedback, as key components in attaining a low level of uncertainty. The results of the mediation hypotheses are presented in Table 7. Seeking clarification fails to mediate the relationships between transparency concern (β = -0.036; t = 1.735), information accuracy (β = -0.017; t = 1.482), privacy concern (β = -0.017; t = 1.513), and low level of uncertainty, leading to the rejection of hypotheses H7a to H7c. Conversely, consultation peer feedback emerges as the primary mediating factor, acting as a full mediator in the relationships between transparency concern (β = 0.047; t = 3.295), information accuracy (β = 0.028; t = 2.172), privacy concern (β = 0.058; t = 3.552), and low level of uncertainty. This substantiates hypotheses H8a to H8c.
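For intuition, an indirect effect is the product of its two constituent paths: a (source of uncertainty to the mediator) times b (mediator to low level of uncertainty, controlling for the source); mediation is inferred when the bootstrap confidence interval of a*b excludes zero. The sketch below illustrates this on simulated data, with path sizes only loosely echoing the reported H4a and H6 coefficients; none of it is the study's data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated standardized scores (illustrative only): transparency concern (tc)
# -> consultation peer feedback (cpf) -> low level of uncertainty (llu).
n = 566
tc = rng.standard_normal(n)
cpf = 0.21 * tc + rng.standard_normal(n)
llu = 0.23 * cpf + rng.standard_normal(n)     # no direct tc -> llu effect

def indirect(tc, cpf, llu):
    # a-path: tc -> cpf (simple OLS slope).
    a = np.cov(tc, cpf, bias=True)[0, 1] / np.var(tc)
    # b-path: cpf -> llu, controlling for tc.
    X = np.column_stack([np.ones(len(tc)), cpf, tc])
    b = np.linalg.lstsq(X, llu, rcond=None)[0][1]
    return a * b

# Percentile bootstrap of the indirect effect (rows resampled jointly).
B = 5000
boot = np.empty(B)
for i in range(B):
    s = rng.integers(0, n, size=n)
    boot[i] = indirect(tc[s], cpf[s], llu[s])

est = indirect(tc, cpf, llu)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {est:.3f}, CI = [{lo:.3f}, {hi:.3f}]")
# A CI excluding zero, alongside a non-significant direct path, is the
# pattern the paper reads as full (indirect-only) mediation.
```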

6. Discussion

This research makes a significant contribution to the effort to comprehensively elucidate strategies for reducing uncertainty in the academic utilization of ChatGPT. To the best of our knowledge, this study represents the first effort in the literature to develop a model of low levels of uncertainty and continuance intention by employing the Uncertainty Reduction Theory (URT) in the context of ChatGPT in higher education. Specifically, this explanatory project was designed to address the achievement of low levels of uncertainty and continuance intention by integrating interactive and passive URS, seeking clarification, consultation peer feedback, and the sources of uncertainty within the URT framework, with the ultimate goal of fostering sustained continuance intention. Additionally, the study underscores the mediating effects of seeking clarification and consultation peer feedback on the relationships between the sources of uncertainty and low levels of uncertainty. This research therefore has the potential to offer valuable practical insights to a diverse spectrum of ChatGPT users, including individuals and academic stakeholders, guiding them toward the ethical and appropriate utilization of this technology for academic purposes. It is anticipated that the findings will contribute to the development of best practices for leveraging ChatGPT effectively. To facilitate a structured discourse, the discussion addresses the research questions (RQs) posed earlier.
This study provides a detailed exploration of the strategies employed to achieve low levels of uncertainty with ChatGPT and to sustain its utilization in higher education, as delineated in Research Question 1. First and foremost, the Uncertainty Reduction Theory permits the examination of two Uncertainty Reduction Strategies (URS), namely interactive and passive URS, which were directly assessed for their influence on low levels of uncertainty. Remarkably, the passive URS strategy was found to be inconsequential in achieving low levels of uncertainty. This underscores that individuals who adopt the passive approach, characterized by observing other ChatGPT users, reviewing their opinions, and scrutinizing comments related to ChatGPT, do not significantly reduce their uncertainty. This outcome resonates with the conclusions of Antheunis et al. (2010) [31], who similarly established that passive URS do not diminish user uncertainty; individuals relying on such methods maintain high levels of uncertainty. A contrasting result emerges when users employ the interactive URS strategy, which significantly reduces uncertainty, as affirmed by the analysis. Users who actively engage with ChatGPT, providing feedback on generative text, seeking information and clarification regarding the generated content, and sharing thoughts and information with ChatGPT and others, ultimately attain low levels of uncertainty. This also aligns with Antheunis et al. (2010) [31], who established that interactive URS represent the most potent strategy for reducing user uncertainty.
Continuing with Research Question 1, this study identified and examined how the sources of uncertainty, namely transparency concern, information accuracy, and privacy concern, affect low levels of uncertainty. The empirical data show that none of the sources of uncertainty significantly influences low levels of uncertainty. A closer examination reveals that transparency concern and information accuracy both exert negative but statistically non-significant effects, while privacy concern has a positive yet non-significant influence. These insights suggest that transparency, particularly the 'black box' nature of ChatGPT's decision-making mechanism, may make users hesitant about the generated text, although the effect is not significant. Information accuracy likewise serves as an antecedent that negatively affects user perceptions, implying that the generative text obtained from ChatGPT fosters doubts about the reliability, validity, and accuracy of the information. In sum, transparency concern and information accuracy may actually give rise to high or moderate levels of uncertainty among users. In contrast, privacy concern exhibits a positive influence on low levels of uncertainty, indicating that users possess a sound understanding of data privacy, security, and safety when interacting with ChatGPT. To the best of our knowledge, this study marks the first effort in the literature to develop a model that identifies sources of uncertainty in using ChatGPT. It thereby extends the existing body of knowledge, building upon previous studies (e.g., [12]).
Furthermore, this study identifies seeking clarification (an individual stage) and consultation peer feedback as strategies for addressing the sources of uncertainty in the quest to achieve low levels of uncertainty, discussed in further detail under Research Question 2. At this juncture, the research substantiates the direct effects of both constructs on low levels of uncertainty. The analysis shows that consultation peer feedback contributes significantly to attaining low levels of uncertainty, whereas seeking clarification (relying on individual cognitive evaluation with ChatGPT) has a negative, albeit statistically non-significant, effect. Hence, this research establishes that discussing the generative text obtained from ChatGPT with peers, professors (experts), and colleagues, and seeking their advice, is a significant determinant of achieving low levels of uncertainty. It becomes evident that consulting with peers not only reduces uncertainty but also bolsters confidence in the text generated by ChatGPT. This observation aligns with the counsel of Wilkins and Shin (2010) [51], who emphasize the importance of consultation and peer feedback in the learning process. In contrast, seeking clarification, when carried out individually and relying heavily on one's own cognitive abilities in interactions with ChatGPT, tends to leave uncertainty unresolved.
Furthermore, this study examines how achieving low levels of uncertainty influences continuance intention. The analysis substantiates that once users attain low levels of uncertainty, they are more inclined to exhibit continuance intention. These findings underscore the crucial importance of reaching low levels of uncertainty with AI-powered technologies and other large language model (LLM) based systems, as it significantly shapes user continuance intention. As a result, this study extends the existing literature (e.g., [3,10,12]) by identifying continuance intention as an outcome of the reduced uncertainty experienced by users in the ChatGPT context.
Turning to Research Question 2, which centers on the mediating effects of seeking clarification and consultation peer feedback in the relationships between the sources of uncertainty (e.g., transparency concern, information accuracy, and privacy concern) and low levels of uncertainty, the analysis demonstrates that seeking clarification, enacted at the individual level within the system, fails to act as an intermediary between the sources of uncertainty and low levels of uncertainty. In contrast, consultation peer feedback effectively mediates the influence of the sources of uncertainty in achieving low levels of uncertainty. Consequently, referring back to Research Question 1, when mitigating the risks associated with the negative effects of generative text in ChatGPT, consulting with experts, peers, and colleagues can significantly diminish uncertainty. For instance, when a user initially perceives information accuracy negatively, owing to concerns about its reliability, validity, and credibility, engaging in consultations with professors, peers, and colleagues regarding that information can notably reduce the level of uncertainty.
This research enriches the theoretical comprehension of ChatGPT continuance intention in higher education within the framework of Uncertainty Reduction Theory (URT). To our knowledge, this study pioneers the investigation of attaining low levels of uncertainty using the URT framework by (1) integrating interactive and passive URS to reduce uncertainty, (2) identifying sources of uncertainty in the ChatGPT context, such as transparency concern, information accuracy, and privacy concern, (3) testing the mediating effects of seeking clarification and consultation peer feedback in attaining low levels of uncertainty, and (4) testing the effect of low levels of uncertainty on the intention to continue using ChatGPT for academic purposes. This study extends the existing literature and provides a novel lens through which to examine continuance intention toward AI-powered technologies such as ChatGPT in higher education [3,10,12]. Hence, it offers a broader perspective on applying the URT to reduce uncertainty and achieve continuance intention in the ChatGPT context.

7. Implications

7.1. Theoretical Implications

In its original formulation within the Uncertainty Reduction Theory (URT) framework, there exist only three types of Uncertainty Reduction Strategy (URS): interactive, active, and passive URS [26]. This study provides theoretical validation that URS can also be executed at the individual level, relying on cognitive evaluation within the interactive ChatGPT system, and through consultation peer feedback on the generative text obtained from ChatGPT. With the advent of ChatGPT and other AI-powered technologies based on large language models (LLMs), this research lays a foundation for the proper and ethically sound utilization of generative text for academic purposes, ensuring adherence to ethical standards and averting issues related to plagiarism. The findings indicate that when consultation peer feedback is applied intensively to ChatGPT's generative text responses, knowledge development is accelerated. Therefore, the crucial role of continuance intention in this study corroborates that reducing uncertainty through consultation peer feedback extends to the long-term use of the ChatGPT system.
Ultimately, this research extends the original URT [26] by incorporating three additional sources of uncertainty: transparency concern, information accuracy, and privacy concern. These three sources hold paramount relevance in the current and future context of ChatGPT implementation. It becomes imperative to evaluate critically how the ChatGPT system generates text transparently, provides reliable and clearly sourced information, and safeguards user privacy in every interaction, protecting online data, security, and information safety. This underscores the urgency of extending the original URT [26], given the importance of investigating and validating ChatGPT's transparency, accuracy, and privacy, as discussed in the existing literature [3,10,12].

7.2. Practical Implications

Drawing on the theoretical implications, this study also offers practical solutions aimed at enhancing the effective, efficient, and ethical utilization of ChatGPT in the higher education context. Firstly, higher education institutions should consider implementing enhanced user training programs [68]. These programs are proposed as practical solutions for educating students and lecturers about the significance of seeking clarification from peers, professors, and colleagues to reduce uncertainty when utilizing AI-generated text, such as that produced by ChatGPT. By cultivating this practice, higher education institutions can strengthen users' confidence in the accuracy and reliability of ChatGPT-generated content, ultimately improving their academic experience.
Secondly, the present study recommends promoting collaborative learning environments within the higher education sector [69]. Such environments stimulate students to discuss and share insights concerning the text generated by ChatGPT. These interactive dialogues cultivate transparency, accuracy, and privacy in the deployment of AI technologies, helping to ensure that generated content aligns with established academic standards and guidelines. Collaborative learning also empowers students to exercise critical scrutiny and validation of the information produced. In addition, it is imperative for higher education institutions to establish unambiguous ethical guidelines and institute comprehensive training programs that underscore plagiarism awareness [2]. These guidelines should specifically target the ethical utilization of AI-generated content, equipping students with an understanding of the potential perils of plagiarism and the paramount importance of appropriate citation practices. This educational endeavor empowers students to navigate the ethical terrain of AI-generated text with acumen and integrity.
To ensure the sustainable use of ChatGPT in higher education, institutions should develop long-term integration plans. These plans should include mechanisms for addressing issues of transparency, accuracy, and privacy by continuously monitoring and refining the AI system’s performance. Regular feedback mechanisms and user support should be integral components of these plans. Moreover, users should be encouraged to actively seek consultation with peers, professors, and colleagues to validate and refine information obtained from AI-generated text. Incentives or recognition for active participation in consultation can further motivate students. This practice not only reduces uncertainty but also facilitates critical thinking and knowledge exchange among users.
In the context of privacy and security, higher education institutions must ensure that robust protocols are in place for interactions with AI systems like ChatGPT. These protocols should guarantee the security of user data and interactions [70], as well as the protection of user privacy. Clear communication about these precautions can alleviate concerns and foster trust in the technology. Furthermore, institutions and educators can conduct ongoing research to assess the user experience and effectiveness of integrating AI-powered technologies; such research can inform continuous improvements, ensuring that users have a positive experience while utilizing these tools. By implementing these practical solutions, higher education institutions can harness the benefits of AI-powered technologies like ChatGPT while addressing the challenges related to transparency, accuracy, and privacy. This approach not only enhances the academic experience but also prepares students for the responsible and ethical use of AI in their careers.

8. Limitation and Avenues for Future Research

Despite the valuable theoretical and practical insights gained through this research, certain limitations merit acknowledgment and point toward promising directions for future research on ChatGPT's utilization within higher education. Firstly, this study did not establish a significant effect of the sources of uncertainty (transparency concern, information accuracy, and privacy concern) on low levels of uncertainty. Future research may therefore reexamine this relationship with different sample demographics, and the exploration of further potential sources of uncertainty could be fruitful; examining how demographic factors condition the impact of the sources of uncertainty on users' uncertainty levels is a promising avenue for enhancing our understanding of this relationship. Secondly, this study did not find significant support for the effectiveness of passive URS in reducing uncertainty. The role of passive URS in different contexts and conditions warrants more detailed investigation in future research; by probing the circumstances in which passive URS might be more efficacious, researchers can offer a more nuanced understanding of its utility and potential contribution to uncertainty reduction strategies. Lastly, this study excluded certain measurement items for the proposed constructs. Future studies should consider revisiting or refining these items to gain more comprehensive insights and robust results; thoroughly exploring the omitted items holds the potential to enrich our understanding of the constructs under investigation.

References

  1. Thormundsson, B. (2023). Growth forecast of monthly active users of ChatGPT in December 2022 and January 2023. Statista. https://www.statista.com/statistics/1368657/chatgpt-mau-growth/#:~:text=In%202022%2C%20around%2057%20million,100%20million%20by%20January%202023. Accessed 2 October 2023.
  2. Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K., ... & Wright, R. (2023). “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 71, 102642. [CrossRef]
  3. Baek, T. H., & Kim, M. (2023). Is ChatGPT scary good? How user motivations affect creepiness and trust in generative artificial intelligence. Telematics and Informatics, 83, 102030. [CrossRef]
  4. Bin-Nashwan, S. A., Sadallah, M., & Bouteraa, M. (2023). Use of ChatGPT in academia: Academic integrity hangs in the balance. Technology in Society, 102370. [CrossRef]
  5. Mijwil, M., & Aljanabi, M. (2023). Towards artificial intelligence-based cybersecurity: the practices and ChatGPT generated ways to combat cybercrime. Iraqi Journal For Computer Science and Mathematics, 4(1), 65-70. [CrossRef]
  6. Aljanabi, M., Ghazi, M., Ali, A. H., & Abed, S. A. (2023). ChatGPT: open possibilities. Iraqi Journal For Computer Science and Mathematics, 4(1), 62-64. [CrossRef]
  7. O’Connor, S. (2022). Open artificial intelligence platforms in nursing education: Tools for academic progress or abuse?. Nurse Education in Practice, 66, 103537-103537. [CrossRef]
  8. Tiwari, C. K., Bhat, M. A., Khan, S. T., Subramaniam, R., & Khan, M. A. I. (2023). What drives students toward ChatGPT? An investigation of the factors influencing adoption and usage of ChatGPT. Interactive Technology and Smart Education. [CrossRef]
  9. Ma, X., & Huo, Y. (2023). Are users willing to embrace ChatGPT? Exploring the factors on the acceptance of chatbots from the perspective of AIDUA framework. Technology in Society, 102362. [CrossRef]
  10. Strzelecki, A. (2023). To use or not to use ChatGPT in higher education? A study of students’ acceptance and use of technology. Interactive Learning Environments, 1-14. [CrossRef]
  11. Pradana, M., Elisa, H. P., & Syarifuddin, S. (2023). Discussing ChatGPT in education: A literature review and bibliometric analysis. Cogent Education, 10(2), 2243134. [CrossRef]
  12. Foroughi, B., Senali, M. G., Iranmanesh, M., Khanfar, A., Ghobakhloo, M., Annamalai, N., & Naghmeh-Abbaspour, B. (2023). Determinants of Intention to Use ChatGPT for Educational Purposes: Findings from PLS-SEM and fsQCA. International Journal of Human–Computer Interaction, 1-20. [CrossRef]
  13. Cotton, D. R., Cotton, P. A., & Shipway, J. R. (2023). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education and Teaching International, 1-12. [CrossRef]
  14. Sallam, M. (2023, March). ChatGPT utility in healthcare education, research, and practice: systematic review on the promising perspectives and valid concerns. In Healthcare (Vol. 11, No. 6, p. 887). MDPI. [CrossRef]
  15. Aljanabi, M. (2023). ChatGPT: Future directions and open possibilities. Mesopotamian journal of Cybersecurity, 2023, 16-17. [CrossRef]
  16. Paul, J., Ueno, A., & Dennis, C. (2023). ChatGPT and consumers: Benefits, pitfalls and future research agenda. International Journal of Consumer Studies, 47(4), 1213-1225. [CrossRef]
  17. Teel, Z. A., Wang, T., & Lund, B. (2023). ChatGPT conundrums: Probing plagiarism and parroting problems in higher education practices. College & Research Libraries News, 84(6), 205. [CrossRef]
  18. Gill, S. S., Xu, M., Patros, P., Wu, H., Kaur, R., Kaur, K., ... & Buyya, R. (2024). Transformative effects of ChatGPT on modern education: Emerging Era of AI Chatbots. Internet of Things and Cyber-Physical Systems, 4, 19-23. [CrossRef]
  19. Farrokhnia, M., Banihashem, S. K., Noroozi, O., & Wals, A. (2023). A SWOT analysis of ChatGPT: Implications for educational practice and research. Innovations in Education and Teaching International, 1-15. [CrossRef]
  20. Liu, G., & Ma, C. (2023). Measuring EFL learners’ use of ChatGPT in informal digital learning of English based on the technology acceptance model. Innovation in Language Learning and Teaching, 1-14. [CrossRef]
  21. Harrison, L. M., Hurd, E., & Brinegar, K. M. (2023). Critical race theory, books, and ChatGPT: Moving from a ban culture in education to a culture of restoration. Middle School Journal, 54(3), 2-4. [CrossRef]
  22. Nune, A., Iyengar, K., Manzo, C., Barman, B., & Botchu, R. (2023). Chat generative pre-trained transformer (ChatGPT): potential implications for rheumatology practice. Rheumatology International, 43(7), 1379-1380. [CrossRef]
  23. Shin, S. I., Lee, K. Y., & Yang, S. B. (2017). How do uncertainty reduction strategies influence social networking site fan page visiting? Examining the role of uncertainty reduction strategies, loyalty and satisfaction in continuous visiting behavior. Telematics and Informatics, 34(5), 449-462. [CrossRef]
  24. Hong, X., Pan, L., Gong, Y., & Chen, Q. (2023). Robo-advisors and investment intention: A perspective of value-based adoption. Information & Management, 60(6), 103832. [CrossRef]
  25. Lee, S., & Choi, J. (2017). Enhancing user experience with conversational agent for movie recommendation: Effects of self-disclosure and reciprocity. International Journal of Human-Computer Studies, 103, 95-105. [CrossRef]
  26. Berger, C. R., & Calabrese, R. J. (1974). Some explorations in initial interaction and beyond: Toward a developmental theory of interpersonal communication. Human communication research, 1(2), 99-112. [CrossRef]
  27. Sohail, M., Mohsin, Z., & Khaliq, S. (2021, July). User Satisfaction with an AI-Enabled Customer Relationship Management Chatbot. In International Conference on Human-Computer Interaction (pp. 279-287). Cham: Springer International Publishing. [CrossRef]
  28. Delgosha, M. S., & Hajiheydari, N. (2021). How human users engage with consumer robots? A dual model of psychological ownership and trust to explain post-adoption behaviours. Computers in Human Behavior, 117, 106660. [CrossRef]
  29. Sturman, N., Tan, Z., & Turner, J. (2017). “A steep learning curve”: junior doctor perspectives on the transition from medical student to the health-care workplace. BMC medical education, 17(1), 1-7. [CrossRef]
  30. Spooren, P., Brockx, B., & Mortelmans, D. (2013). On the validity of student evaluation of teaching: The state of the art. Review of Educational Research, 83(4), 598-642. [CrossRef]
  31. Antheunis, M. L., Valkenburg, P. M., & Peter, J. (2010). Getting acquainted through social network sites: Testing a model of online uncertainty reduction and social attraction. Computers in Human Behavior, 26(1), 100-109. [CrossRef]
  32. Pan, S., Cui, J., & Mou, Y. (2023). Desirable or Distasteful? Exploring Uncertainty in Human-Chatbot Relationships. International Journal of Human–Computer Interaction, 1-11. [CrossRef]
  33. Whalen, E. A., & Belarmino, A. (2023). Risk mitigation through source credibility in online travel communities. Anatolia, 34(3), 414-425. [CrossRef]
  34. Gudykunst, W. B., Chua, E., & Gray, A. J. (1987). Cultural dissimilarities and uncertainty reduction processes. Annals of the International Communication Association, 10(1), 456-469. [CrossRef]
  35. Emmers, T. M., & Canary, D. J. (1996). The effect of uncertainty reducing strategies on young couples' relational repair and intimacy. Communication Quarterly, 44(2), 166-182. [CrossRef]
  36. Brashers, D. E. (2001). Communication and uncertainty management. Journal of communication, 51(3), 477-497. [CrossRef]
  37. Venkatesh, V., Thong, J. Y., Chan, F. K., & Hu, P. J. (2016). Managing citizens’ uncertainty in e-government services: The mediating and moderating roles of transparency and trust. Information systems research, 27(1), 87-111. [CrossRef]
  38. Kokolakis, S. (2017). Privacy attitudes and privacy behaviour: A review of current research on the privacy paradox phenomenon. Computers & security, 64, 122-134. [CrossRef]
  39. Tidwell, L. C., & Walther, J. B. (2002). Computer-mediated communication effects on disclosure, impressions, and interpersonal evaluations: Getting to know one another a bit at a time. Human communication research, 28(3), 317-348. [CrossRef]
  40. Nelson, A. J., & Irwin, J. (2014). “Defining what we do—all over again”: Occupational identity, technological change, and the librarian/Internet-search relationship. Academy of Management Journal, 57(3), 892-928. [CrossRef]
  41. Ferrari, M. (1996). Observing the observer: Self-regulation in the observational learning of motor skills. Developmental review, 16(2), 203-240. [CrossRef]
  42. Bandura, A., & Walters, R. H. (1977). Social learning theory (Vol. 1). Prentice Hall: Englewood cliffs.
  43. Firat, M. (2023). What ChatGPT means for universities: Perceptions of scholars and students. Journal of Applied Learning and Teaching, 6(1). [CrossRef]
  44. Menichetti, J., Hillen, M. A., Papageorgiou, A., & Pieterse, A. H. (2023). How can ChatGPT be used to support healthcare communication research?. Patient Education and Counseling, 115, 107947. [CrossRef]
  45. Aghemo, A., Forner, A., & Valenti, L. (2023). Should Artificial Intelligence-based language models be allowed in developing scientific manuscripts? A debate between ChatGPT and the editors of Liver International. Liver International, 43(5), 956-957. [CrossRef]
  46. Ayinde, L., Wibowo, M. P., Ravuri, B., & Emdad, F. B. (2023). ChatGPT as an important tool in organizational management: A review of the literature. Business Information Review, 02663821231187991. [CrossRef]
  47. Morocco-Clarke, A., Sodangi, F. A., & Momodu, F. (2023). The implications and effects of ChatGPT on academic scholarship and authorship: a death knell for original academic publications?. Information & Communications Technology Law, 1-21. [CrossRef]
  48. Alawida, M., Mejri, S., Mehmood, A., Chikhaoui, B., & Isaac Abiodun, O. (2023). A Comprehensive Study of ChatGPT: Advancements, Limitations, and Ethical Considerations in Natural Language Processing and Cybersecurity. Information, 14(8), 462. [CrossRef]
  49. Jo, H. (2023). Decoding the ChatGPT mystery: A comprehensive exploration of factors driving AI language model adoption. Information Development, 02666669231202764. [CrossRef]
  50. Stephens, K. K. (2012). Multiple conversations during organizational meetings: Development of the multicommunicating scale. Management Communication Quarterly, 26(2), 195-223. [CrossRef]
  51. Wilkins, E. A., & Shin, E. K. (2010). Peer feedback: Who, what, when, why, & how. Kappa Delta Pi Record, 46(3), 112-117.
  52. Ashford, S. J., Blatt, R., & VandeWalle, D. (2003). Reflections on the looking glass: A review of research on feedback-seeking behavior in organizations. Journal of management, 29(6), 773-799. [CrossRef]
  53. Pavlou, P. A., Liang, H., & Xue, Y. (2007). Understanding and mitigating uncertainty in online exchange relationships: A principal-agent perspective. MIS quarterly, 105-136. [CrossRef]
  54. Filip, G., Meng, X., Burnett, G., & Harvey, C. (2017, May). Human factors considerations for cooperative positioning using positioning, navigational and sensor feedback to calibrate trust in CAVs. In 2017 Forum on Cooperative Positioning and Service (CPGPS) (pp. 134-139). IEEE. [CrossRef]
  55. Joo, Y. J., So, H. J., & Kim, N. H. (2018). Examination of relationships among students' self-determination, technology acceptance, satisfaction, and continuance intention to use K-MOOCs. Computers & Education, 122, 260-272. [CrossRef]
  56. Ryu, H. S., & Ko, K. S. (2020). Sustainable development of Fintech: Focused on uncertainty and perceived quality issues. Sustainability, 12(18), 7669. [CrossRef]
  57. Adarkwah, M. A., Ying, C., Mustafa, M. Y., & Huang, R. (2023, August). Prediction of Learner Information-Seeking Behavior and Classroom Engagement in the Advent of ChatGPT. In International Conference on Smart Learning Environments (pp. 117-126). Singapore: Springer Nature Singapore. [CrossRef]
  58. Bhattacherjee, A. (2001). Understanding information systems continuance: An expectation-confirmation model. MIS quarterly, 351-370. [CrossRef]
  59. Aysolmaz, B., Müller, R., & Meacham, D. (2023). The public perceptions of algorithmic decision-making systems: Results from a large-scale survey. Telematics and Informatics, 79, 101954. [CrossRef]
  60. Li, E. Y. (1997). Perceived importance of information system success factors: A meta analysis of group differences. Information & management, 32(1), 15-28. [CrossRef]
  61. Xu, F., Michael, K., & Chen, X. (2013). Factors affecting privacy disclosure on social network sites: an integrated model. Electronic Commerce Research, 13, 151-168. [CrossRef]
  62. Pitardi, V., & Marriott, H. R. (2021). Alexa, she's not human but… Unveiling the drivers of consumers' trust in voice-based artificial intelligence. Psychology & Marketing, 38(4), 626-642. [CrossRef]
  63. Hair, J., Hollingsworth, C. L., Randolph, A. B., & Chong, A. Y. L. (2017). An updated and expanded assessment of PLS-SEM in information systems research. Industrial management & data systems, 117(3), 442-458. [CrossRef]
  64. Henseler, J., Ringle, C. M., & Sarstedt, M. (2015). A new criterion for assessing discriminant validity in variance-based structural equation modeling. Journal of the academy of marketing science, 43, 115-135. [CrossRef]
  65. Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of marketing research, 18(1), 39-50. [CrossRef]
  66. Falk, R. F., & Miller, N. B. (1992). A primer for soft modeling. University of Akron Press.
  67. Baumgartner, H., Weijters, B., & Pieters, R. (2021). The biasing effect of common method variance: Some clarifications. Journal of the Academy of Marketing Science, 49, 221-235. [CrossRef]
  68. Mahmood, M. A., Burn, J. M., Gemoets, L. A., & Jacquez, C. (2000). Variables affecting information technology end-user satisfaction: a meta-analysis of the empirical literature. International Journal of Human-Computer Studies, 52(4), 751-771. [CrossRef]
  69. Herrera-Pavo, M. Á. (2021). Collaborative learning for virtual higher education. Learning, culture and social interaction, 28, 100437. [CrossRef]
  70. Gupta, M., Akiri, C., Aryal, K., Parker, E., & Praharaj, L. (2023). From ChatGPT to ThreatGPT: Impact of generative AI in cybersecurity and privacy. IEEE Access. [CrossRef]
Figure 1. Research’s Conceptual Framework.
Table 1. Previous Studies and Gaps Identification.
Table columns: Author(s); Artificial Intelligence Context?; Focus on Reducing Uncertainty?; Mediating Effect of Seeking Clarification and Consultation Peer Feedback?; Objectives; Theory; Main Findings.

Pan et al. (2023) [32]. AI context: Yes. Focus on reducing uncertainty: No. Mediating effect: No. Objectives: This study systematically examines user concerns and uncertainties in their interactions with AI-driven social chatbots, focusing on a Chinese online community and providing a cross-cultural perspective. Theory: Non-URT (sentiment analysis). Main findings: Users experience four key uncertainties: technical, relational, ontological, and sexual. These encompass concerns about chatbot functionality, the nature of the relationship, chatbot identity, and boundaries in intimate interactions. Visibility and sentiment analysis reveal the dynamic and context-dependent nature of user responses to these uncertainties, contributing to a broader understanding of human-AI interactions.

Shin et al. (2017) [23]. AI context: No (SNS focus, reducing uncertainty from various perceptions). Focus on reducing uncertainty: No (only investigates low levels of uncertainty from interactive and passive URS). Mediating effect: No. Objectives: This study investigates Facebook fan page dynamics and followers' recurring visits, examining how URS, perceived content usefulness, SNS satisfaction, and SNS loyalty influence this behavior. Theory: URT. Main findings: The study conclusively establishes that URS decrease uncertainty about fan page information, enhancing perceived posting usefulness and promoting continuous visits. Moreover, SNS satisfaction and loyalty effectively moderate these relationships.

Hong et al. (2023) [24]. AI context: No (focus on a financial robo-advisor). Focus on reducing uncertainty: No (reducing risk using URS of a financial robo-advisor). Mediating effect: No. Objectives: The study fills a gap in existing empirical research by examining how URS influence users' investment intentions when utilizing financial robo-advisors. Theory: URT & VBAM. Main findings: This study links algorithm transparency, assurance, and interactivity strategies to higher user investment intentions in financial robo-advisors, offering guidance for service providers.

Lee & Choi (2017) [25]. AI context: No. Focus on reducing uncertainty: No. Mediating effect: No. Objectives: This study examines how communication variables and the conversational agent-user relationship affect user satisfaction and intention to use an interactive movie recommendation system, analyzing the influence of self-disclosure and reciprocity on user satisfaction. Theory: URT & CASA. Main findings: Trust and interactional enjoyment mediate the impact of communication variables on user satisfaction. Reciprocity outweighs self-disclosure in agent-user relationship building. Notably, user satisfaction significantly influences usage intention.

This study. AI context: Yes (investigation of AI-powered ChatGPT). Focus on reducing uncertainty: Yes (offering the mediating effect and different URS to achieve low-level uncertainty). Mediating effect: Yes (integrating seeking clarification and consultation peer feedback into URT). Objectives: (1) identify causes of uncertainty, such as transparency concern, privacy concern, and information accuracy, when using ChatGPT in the higher education landscape; (2) test the mediating effect of seeking clarification and consultation peer feedback in reducing uncertainty; (3) test the effect of low-level uncertainty, after URS are applied, on continuance intention. Theory: URT. Main findings: Interactive URS represent the most significant strategy for attaining low levels of uncertainty. The sources of uncertainty are notably mediated, primarily through consultation peer feedback, which proves more effective than seeking clarification (an individual stage). Ultimately, when users achieve low levels of uncertainty in their interactions with ChatGPT, this translates significantly into continued behavioral intention. Hence, this research integrates the sources of uncertainty (e.g., transparency concern, information accuracy, and privacy concern) into the Uncertainty Reduction Theory (URT) model, and the integration of consultation peer feedback as a mediating factor appears to be a favorable approach for engaging users in the ethical utilization of ChatGPT for academic purposes.
Notes: URT, Uncertainty Reduction Theory; URS, Uncertainty Reduction Strategies; VBAM, Value-Based Adoption Mechanism; ECT, Expectation Confirmation Theory; CASA, Computers-Are-Social-Actors.
Table 2. Operationalization.
Table columns: Constructs; Definition (with source); Measurement Items (with source and outer loading, OL); CA; CR; AVE.

Interactive URS (definition from Antheunis et al. (2010) [31]; items modified from Antheunis et al. (2010) [31]). CA = 0.719; CR = 0.877; AVE = 0.781.
Definition: Interactive strategies involve direct engagement between the user and the AI model. One such interactive strategy is the use of direct questions, while another involves sharing self-disclosure information.
Items (OL): Commented on or gave feedback on ChatGPT’s responses (*) (0.670); Asked for more information or clarification from ChatGPT (0.889); Shared your thoughts on comments made by others regarding ChatGPT’s responses (0.878).

Passive URS (definition from Antheunis et al. (2010) [31]; items modified from Antheunis et al. (2010) [31]). CA = 0.769; CR = 0.795; AVE = 0.736.
Definition: Passive strategies are those in which an informant unobtrusively observes the target person, for example in situations in which the target person reacts to or interacts with others.
Items (OL): Observed ChatGPT’s responses without actively participating in the conversation (0.973); Reviewed ChatGPT’s responses and observed its interactions without active involvement (0.726); Read comments or feedback from other users on ChatGPT’s responses (*) (0.678).

Low Level of Uncertainty (definition from this study; items modified from Shin et al. (2017) [23]). CA = 0.897; CR = 0.869; AVE = 0.751.
Definition: Low-level uncertainty suggests a high degree of user trust and belief in the AI's proficiency, resulting in a more positive and confident user experience when using ChatGPT for various purposes.
Items (OL): I have confidence that ChatGPT's responses reduce uncertainty (*) (0.619); I feel uncertainty about ChatGPT's responses is low (0.866); There is low uncertainty when I rely on ChatGPT’s responses for information or decision-making (0.867).

Seeking Clarification (definition from this study; items modified from Stephens (2012) [50]). CA = 0.762; CR = 0.817; AVE = 0.599.
Definition: Seeking clarification pertains to actively seeking information and clarification directly from ChatGPT (individual-system level of clarification).
Items (OL): Actively sought clarification from ChatGPT to ensure you understood its responses (0.725); Requested clarification from ChatGPT to make its responses clearer and more understandable (0.843); Asked questions to ChatGPT to reduce uncertainty and enhance your understanding of the conversation (0.747).

Consultation Peer Feedback (definition from this study; items modified from Stephens (2012) [50]). CA = 0.811; CR = 0.836; AVE = 0.718.
Definition: A dynamic and co-creative process that transcends the traditional concept of users merely providing insights to one another; it signifies a complex collaboration between users (e.g., students, educators) and AI-powered models (e.g., ChatGPT).
Items (OL): Shared ChatGPT’s responses with friends or colleagues and asked for their opinions (*) (0.676); Sought advice or feedback from peers about the information provided by ChatGPT (0.818); Compared ChatGPT’s responses with information or opinions from friends or colleagues (0.875).

Continuance Intention (definition from Bhattacherjee (2001) [58]; items modified from Baek & Kim (2023) [3]). CA = 0.708; CR = 0.831; AVE = 0.711.
Definition: Continuance intention is the user's intent to persist in utilizing the technology.
Items (OL): I plan to keep using ChatGPT (0.903); I want to continue using ChatGPT (0.780).

Transparency Concern (definition and items modified from Aysolmaz et al. (2023) [59]). CA = 0.766; CR = 0.857; AVE = 0.749.
Definition: Transparency concerns for a system, when perceived by users, result in the system being viewed as having lower levels of fairness, privacy, and accountability, which in turn leads to lower trust and perceived usefulness of the system.
Items (OL): I’m concerned when it’s unclear how ChatGPT produces information (0.875); It bothers me if ChatGPT doesn’t explain how it gets information or offers suggestions (0.856).

Information Accuracy (definition from Li (1997) [60]; items adapted from Foroughi et al. (2023) [12]). CA = 0.793; CR = 0.824; AVE = 0.702.
Definition: Information accuracy pertains to the degree to which the provided information is accurate enough to fulfill its intended purpose.
Items (OL): Information from ChatGPT is correct (*) (0.432); Information from ChatGPT is reliable (0.758); Information from ChatGPT is accurate (0.911).

Privacy Concern (definition from Xu et al. (2013) [61]; items modified from Pitardi and Marriott (2021) [62]). CA = 0.781; CR = 0.862; AVE = 0.757.
Definition: Privacy concern relates to users' apprehensions regarding potential threats to their online privacy.
Items (OL): I doubt the privacy of my interactions with ChatGPT (*) (0.692); I worry that my personal data on ChatGPT could be stolen (0.869); I'm concerned that ChatGPT collects too much information about me (0.872).

Notes:
  • “(*)” denotes dropped items.
  • Thresholds: OL, Outer Loading ≥ 0.70; CA, Cronbach’s Alpha ≥ 0.70; CR, Composite Reliability ≥ 0.70; AVE, Average Variance Extracted ≥ 0.50.
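The CA, CR, and AVE columns follow the standard composite formulas for standardized indicators. As a sanity check, the sketch below computes composite reliability and AVE from the two retained interactive URS loadings in Table 2 and recovers the reported CR = 0.877 and AVE = 0.781; only the formulas are assumed, the loadings come from the table.

```python
import numpy as np

def composite_reliability(loadings):
    # rho_c = (sum l)^2 / ((sum l)^2 + sum(1 - l^2)); with standardized
    # indicators, each item's error variance is 1 - l^2.
    l = np.asarray(loadings)
    num = l.sum() ** 2
    return num / (num + (1.0 - l ** 2).sum())

def average_variance_extracted(loadings):
    # AVE = mean of the squared standardized loadings.
    l = np.asarray(loadings)
    return (l ** 2).mean()

# Retained interactive URS outer loadings from Table 2 (dropped item excluded).
l_int = [0.889, 0.878]
print(f"CR  = {composite_reliability(l_int):.3f}")        # 0.877, as reported
print(f"AVE = {average_variance_extracted(l_int):.3f}")   # 0.781, as reported
```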
Table 3. Sample Demographics.
Measures | Category | Frequency | %
Gender | Male | 257 | 45.4
Gender | Female | 309 | 54.6
Age (years old) | < 20 | 47 | 8.3
Age (years old) | 21–30 | 183 | 32.3
Age (years old) | 31–40 | 181 | 32.0
Age (years old) | 41–50 | 104 | 18.4
Age (years old) | > 50 | 51 | 9.0
Educational Level | Vocational Studies | 44 | 7.8
Educational Level | Undergraduate | 197 | 34.8
Educational Level | Master | 262 | 46.3
Educational Level | Doctorate | 63 | 11.1
Status | Students | 66 | 11.7
Status | Lecturers | 376 | 66.4
Status | Researchers | 124 | 21.9
Type of University | Public University | 160 | 28.4
Type of University | Private University | 400 | 70.6
Type of University | Foreign University | 6 | 1.0
Type of ChatGPT | GPT-3.5 | 307 | 54.2
Type of ChatGPT | GPT-4.0 | 259 | 45.8
Usage Frequency | Never | 0 | 0.0
Usage Frequency | Once a month | 32 | 5.6
Usage Frequency | Several times a month | 65 | 11.5
Usage Frequency | Once a week | 17 | 3.0
Usage Frequency | Several times a week | 147 | 25.9
Usage Frequency | Once a day | 121 | 21.4
Usage Frequency | Several times a day | 184 | 32.6
How long have you used ChatGPT? | Less than one month | 78 | 13.8
How long have you used ChatGPT? | One month | 191 | 33.7
How long have you used ChatGPT? | Less than six months | 272 | 48.1
How long have you used ChatGPT? | Less than one year | 25 | 4.4
Table 5. Cross-Loading Matrix.
Constructs | Items | CPF | CI | IA | INT | LLU | PSS | PC | SC | TC
Continuance Intention | CI.1 | 0.276 | 0.903 | 0.204 | 0.276 | 0.299 | 0.283 | 0.324 | 0.367 | 0.338
Continuance Intention | CI.2 | 0.017 | 0.780 | 0.164 | 0.246 | 0.205 | 0.195 | 0.184 | 0.442 | 0.326
Consultation Peer Feedback | CPF.2 | 0.818 | 0.021 | 0.227 | 0.232 | 0.319 | 0.263 | 0.322 | 0.099 | 0.260
Consultation Peer Feedback | CPF.3 | 0.875 | 0.295 | 0.308 | 0.278 | 0.379 | 0.264 | 0.337 | 0.169 | 0.342
Information Accuracy | IA.2 | 0.182 | 0.135 | 0.758 | 0.170 | 0.143 | 0.168 | 0.262 | 0.253 | 0.203
Information Accuracy | IA.3 | 0.329 | 0.221 | 0.911 | 0.331 | 0.262 | 0.250 | 0.438 | 0.337 | 0.455
Interactive URS | INT.2 | 0.258 | 0.286 | 0.287 | 0.889 | 0.413 | 0.177 | 0.203 | 0.398 | 0.484
Interactive URS | INT.3 | 0.277 | 0.260 | 0.270 | 0.878 | 0.394 | 0.439 | 0.187 | 0.339 | 0.404
Low Level of Uncertainty | LOW.2 | 0.329 | 0.204 | 0.224 | 0.389 | 0.866 | 0.111 | 0.291 | 0.095 | 0.395
Low Level of Uncertainty | LOW.3 | 0.394 | 0.328 | 0.221 | 0.411 | 0.867 | 0.248 | 0.243 | 0.210 | 0.358
Privacy Concern | PC.2 | 0.371 | 0.225 | 0.395 | 0.182 | 0.266 | 0.328 | 0.869 | 0.266 | 0.268
Privacy Concern | PC.3 | 0.306 | 0.320 | 0.362 | 0.202 | 0.262 | 0.275 | 0.872 | 0.351 | 0.429
Passive URS | PSS.1 | 0.271 | 0.244 | 0.218 | 0.372 | 0.221 | 0.973 | 0.286 | 0.326 | 0.298
Passive URS | PSS.2 | 0.300 | 0.312 | 0.267 | 0.093 | 0.065 | 0.726 | 0.388 | 0.383 | 0.300
Seeking Clarification | SCL.1 | 0.248 | 0.354 | 0.268 | 0.372 | 0.264 | 0.323 | 0.283 | 0.725 | 0.324
Seeking Clarification | SCL.2 | 0.078 | 0.366 | 0.304 | 0.276 | 0.019 | 0.299 | 0.247 | 0.843 | 0.324
Seeking Clarification | SCL.3 | 0.033 | 0.354 | 0.256 | 0.305 | 0.101 | 0.251 | 0.282 | 0.747 | 0.395
Transparency Concern | TC.1 | 0.342 | 0.344 | 0.514 | 0.390 | 0.382 | 0.264 | 0.470 | 0.388 | 0.875
Transparency Concern | TC.2 | 0.276 | 0.332 | 0.203 | 0.485 | 0.360 | 0.315 | 0.216 | 0.398 | 0.856
Notes: Each item's loading on its own construct (e.g., CI.1 under the CI column) is the outer loading; all other values are cross-loadings.
Table 6. Summary of Direct Hypotheses.
Hypothesis, Path | Coefficient | T-Value | Bootstrapping 97.5% Lower | Bootstrapping 97.5% Upper | Conclusion
H1a, INT URS ➔ LLU | 0.321*** | 6.550 | 0.222 | 0.415 | Supported
H1b, PSS URS ➔ LLU | -0.039 | 0.852 | -0.125 | 0.059 | Unsupported
H2a, TC ➔ LLU | -0.024 | 0.856 | -0.101 | 0.096 | Unsupported
H2b, IA ➔ LLU | -0.008 | 0.164 | -0.100 | 0.085 | Unsupported
H2c, PC ➔ LLU | 0.018 | 1.301 | 0.015 | 0.217 | Unsupported
H3a, TC ➔ SC | 0.327*** | 7.056 | 0.231 | 0.413 | Supported
H3b, IA ➔ SC | 0.152** | 3.175 | 0.057 | 0.246 | Supported
H3c, PC ➔ SC | 0.157** | 3.169 | 0.059 | 0.250 | Supported
H4a, TC ➔ CPF | 0.205*** | 4.560 | 0.119 | 0.293 | Supported
H4b, IA ➔ CPF | 0.123** | 2.444 | 0.024 | 0.222 | Supported
H4c, PC ➔ CPF | 0.253*** | 4.973 | 0.154 | 0.352 | Supported
H5, SC ➔ LLU | -0.112 | 1.848 | -0.229 | 0.007 | Unsupported
H6, CPF ➔ LLU | 0.231*** | 4.854 | 0.134 | 0.320 | Supported
H9, LLU ➔ CI | 0.306*** | 6.603 | 0.219 | 0.398 | Supported
Notes:
  • INT URS, Interactive URS; PSS URS, Passive URS; TC, Transparency Concern; IA, Information Accuracy; PC, Privacy Concern; SC, Seeking Clarification; CPF, Consultation Peer Feedback; LLU, Low Level of Uncertainty; CI, Continuance Intention.
  • Significance level of ***P < 0.001; **P < 0.010; *P < 0.050
Table 7. Summary of Mediating Hypotheses.
Hypothesis, Path | Coefficient | T-Value | Bootstrapping 97.5% Lower | Bootstrapping 97.5% Upper | Conclusion
H7a, TC ➔ SC ➔ LLU | -0.036 | 1.735 | -0.080 | 0.002 | Non-mediation
H7b, IA ➔ SC ➔ LLU | -0.017 | 1.482 | -0.044 | 0.001 | Non-mediation
H7c, PC ➔ SC ➔ LLU | -0.017 | 1.513 | -0.043 | 0.001 | Non-mediation
H8a, TC ➔ CPF ➔ LLU | 0.047*** | 3.295 | 0.022 | 0.078 | Full mediation
H8b, IA ➔ CPF ➔ LLU | 0.028** | 2.172 | 0.005 | 0.056 | Full mediation
H8c, PC ➔ CPF ➔ LLU | 0.058*** | 3.552 | 0.029 | 0.093 | Full mediation
Notes:
  • INT URS, Interactive URS; PSS URS, Passive URS; TC, Transparency Concern; IA, Information Accuracy; PC, Privacy Concern; SC, Seeking Clarification; CPF, Consultation Peer Feedback; LLU, Low Level of Uncertainty; CI, Continuance Intention.
  • Significance level of ***P < 0.001; **P < 0.010; *P < 0.050
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.