Preprint Article (this version is not peer-reviewed; a peer-reviewed article of this preprint also exists)

Will the Age of Generative Artificial Intelligence Become an Age of Public Ignorance?

Submitted: 21 September 2023
Posted: 22 September 2023


Abstract
Generative artificial intelligence (AI), in particular large language models such as ChatGPT, has reached public consciousness, with a wide-ranging discussion of the capabilities of these models and their suitability for various professions. Following the printing press and the internet, generative AI language models are the third transformative technological invention with a truly cross-sectoral impact on knowledge transmission. While the printing press allowed knowledge to be transmitted independently of the physical presence of the knowledge holder, with publishers acting as gatekeepers, the internet added levels of democratization, allowing anyone to publish, along with global immediacy. The development of social media resulted in an increased fragmentation and tribalization of online communities in their ways of knowing, resulting in alternative truths propagated in echo chambers. It is against this background that generative AI language models have entered public consciousness. Using strategic foresight methodology, this paper examines the polemic proposition that the age of generative AI will emerge as an age of public ignorance.

1. Introduction

Even though generative artificial intelligence (AI) language models have only been in the public consciousness since the public release of ChatGPT 3.5 in November 2022, much has been written on the potentially transformative nature of generative AI in various professions. As this is not the venue to review these, a few examples may suffice: agriculture [1], chemistry [2], computer programming [3], cultural heritage management [4], diabetes education [5], medicine [6,7], museum exhibitions [8], nursing education [9], radiography [10] and remote sensing in archaeology [11].
There is considerable public fascination with generative AI language models and the popularization of their capabilities and suitability for various professions, as well as technophobic scenarios mentioned in the public press [12,13]. Despite this, little or no thought appears to have been given to the implications that generative AI language models may have on the formation of public knowledge in the medium- and long-term future. Generative AI is the latest in a series of technological inventions that are truly transformative in their cross-sectoral impact on knowledge transmission and public education. Unlike the earlier seismic shifts caused by the inventions of the printing press and the internet, both of which expanded public access to knowledge, this latest shift may not be as beneficial as currently touted. Using strategic foresight methodology [14,15] and drawing on Jim Dator’s dictum that “any useful statement about the future appears [at first] ridiculous” [16,17], this paper examines the polemic proposition that the age of generative AI will emerge as an age of public ignorance. Given that this paper is a deliberation, it does not follow the standard IMRAD (Introduction, Methods, Results and Discussion) format.

2. Trajectories of the Creation of Public Knowledge

Before we consider the possible implications of generative AI on the creation of public knowledge, we need to consider the long- and short-term trajectories that define the present as we know it.

2.1. The pre-digital creation of Public Knowledge

Before the Age of Enlightenment and the subsequent Scientific Revolution, knowledge was concentrated in a few hands, essentially the clergy and later also the various guilds of professionals and artisans. As a manifestation of power and social control, both literacy and professional knowledge were carefully curated. People were generally excluded from access to the knowledge and technology held by a guild, as well as the economic opportunities this represented, unless they had been formally admitted and sworn to secrecy [18]. Johannes Gutenberg’s invention of the printing press (1452) allowed for the mass production of texts. While knowledge largely continued to be curated, once produced in printed form it could be rapidly disseminated to all those who could read. Moreover, while printed knowledge could now be passed on without the physical presence of the knowledge holders, publishers emerged as the new gatekeepers, with commercial or political interests influencing what was deemed publishable [19]. In addition to standard works such as Bibles and Psalters, the press soon allowed for the broadcasting of political news in the form of pamphlets. Early examples are the pamphlet publication campaigns during the Bauernkrieg (Great Peasants' Revolt) of 1524-1525 [20] or the British Civil Wars (1641–1651) [21]. Formal publication, and thus public dissemination, of parts of academic knowledge commenced during the mid-seventeenth century, such as Jan Jonston’s Historiae naturalis de quadrupetibus (natural history of quadrupeds), published by Matthäus Merian in 1652 [22].
During the Age of Enlightenment, formal and later compulsory public education not only raised the literacy levels of the general public but also opened the doors for a broad range of knowledge to be systematically disseminated in printed form, such as Diderot’s Encyclopédie [23]. The societal change that this entailed led to well-educated generations of educators, civil servants and professionals, aspiring to improve their own and their children’s social position through education and knowledge. In addition to the ability to enter most professions on academic merit, a proliferation of multi-volume encyclopaedias meant that everybody who had the means to acquire a set, or to access one in a public library, had access to a broad range of carefully curated information [24]. Well-known examples are the Encyclopaedia Britannica (Edinburgh, from 1768 onwards), Diderot’s Encyclopédie, ou dictionnaire raisonné des sciences, des arts et des métiers (Paris, 1751 to 1772) and the Brockhaus Conversations-Lexikon (Leipzig, 1808 onwards). The nineteenth century saw the development of Mechanics’ Institutes and similar venues of adult education, as well as the rise of university- and technical-college-trained professionals who engaged in outreach, extension and public education and thereby transformed many professions that had maintained traditional practices, such as agriculture [25,26]. During the second half of the twentieth century, initiatives like the GI Bill in the USA [27] or the Dawkins reforms of the 1980s in Australia [28] saw an expansion of the tertiary education sector, with a concomitant dramatic increase in college- and university-educated professionals and civil servants [29,30]. In the closing years of the twentieth century, formal outreach and public education processes began to wither, giving way to formats such as TED Talks.

2.2. The creation of Public Knowledge in an online world

Even though multivolume encyclopaedias existed and were often the hallmark of educated families, their prohibitive cost meant that they only graced the shelves of upper-class and aspiring upper-middle-class families [31]. The public release of the World Wide Web (WWW) in 1993 [32,33] spawned a transformative technology on a global scale, putting information at the fingertips of anyone who could afford a computer. The ubiquity of smartphones by the end of the first decade of the twenty-first century put to rest any fears of a digital divide in knowledge access [34,35]. While websites and the knowledge contained therein were initially managed via Special Interest Networks curated by academics and IT specialists [36], search engines based on web-crawler algorithms soon democratized the process, not only by automatically indexing the content on the web but also by allocating page ranks based on the connectivity of individual pages and the number of links that pointed back to them [37], a principle sketched below.
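The connectivity-based ranking that early search engines relied upon can be made concrete with a minimal sketch of PageRank-style power iteration [37]. This is an illustrative toy, not any engine's production code; the four-page link graph and the damping factor are assumptions chosen for the example.

```python
# Toy PageRank power iteration; the link graph is invented for illustration.
links = {
    "A": ["B", "C"],   # page A links out to B and C
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],        # D points to C, but no page points back to D
}

damping = 0.85                       # conventional damping factor
pages = list(links)
rank = {p: 1 / len(pages) for p in pages}

for _ in range(50):                  # iterate until the ranks stabilise
    new_rank = {}
    for p in pages:
        # A page's rank is fed by the pages linking to it, with each
        # linker's rank shared out over its outgoing links.
        inbound = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
        new_rank[p] = (1 - damping) / len(pages) + damping * inbound
    rank = new_rank

print(sorted(rank.items(), key=lambda kv: -kv[1]))
# "C" ranks highest (most pages point back to it); "D" lowest (none do).
```

The point of the sketch is that the ordering emerges purely from the link structure: no human editor curates the ranking, which is precisely the democratization, and later the gatekeeping, discussed below.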
This development revolutionized the public dissemination of knowledge: not only did it make content readily available on a global scale, but online discussion groups also allowed for the development of highly specialized online communities sharing and pooling their knowledge. In addition, user-generated knowledge aggregators, built on a distributed model, soon emerged. Examples of this are Wikipedia (since 2001) [38], Quora (since 2006) and various ‘wikis.’
Concurrent with the ever-increasing body of information, usage patterns on the WWW changed. A web initially populated and used by early adopters and ‘techno-geeks’ soon saw widespread cross-sectoral and cross-generational adoption. The emergent future-native generation (sensu Inayatullah [39]) began to rely on the WWW as a primary source of information, so much so that ‘to google’ has become an accepted verb [40]. Concomitantly, users came to expect that answers to almost any question could be obtained with a high degree of immediacy, from cooking recipes to medical advice (‘ask Dr. Google’) [41,42]. Given that much of this information is provided in a largely decontextualized form, users have few avenues to assess its veracity. Where information is contextualized, a casual user is likely to lack the skills and background knowledge to fully understand the implications.
The commercialization of the WWW soon saw page ranks no longer purely defined by connectivity but influenced by commercial interests, ranging from promotional revenue to behind-the-scenes business interests of the search-engine providers [37,43,44]. Today, even though others exist, Google and Bing dominate the market, often integrated with customized browsers offered by the same companies. While the internet allows for an anarchic ‘free-for-all’ in publishing content, in practice access to that content occurs via search engines that, with their page-ranking algorithms, effectively function as gatekeepers. While it is possible to find almost any content with persistence, aided by a complex set of keyword combinations and nested search logic, the majority of web searches do not progress beyond the first page of links offered up by a search engine [45,46]. In consequence, many users seem satisfied with the fragmented, snippet-like information they are presented with.
In a parallel development, segmented digital communities with special interests emerged: LinkedIn (2002), Flickr (2004), Reddit (2005), Twitter (2006), ResearchGate (2008), Instagram (2010), as well as Facebook (2004, now Meta), which was to become a social media behemoth. While some are highly specific to segments of society, such as Flickr (photographers) or ResearchGate (academia), others are cross-sectional. Within these online communities, increasingly specialized sub-communities emerged, catering for highly segmented needs. These online sub-communities facilitated three parallel developments: the generation of genuinely new knowledge, for example driven by the study and technical observation of collectible items (such as Camera-Wiki); the rise of social media ‘influencers’ [47,48]; and the emergence of ‘alternative truths’ with the concomitant devaluation of experts holding academic credentials [49,50]. The social media ecosystems that developed from this became sources of knowledge and ‘truth’, with narrowcast ideological viewpoints bouncing around inside echo chambers devoid of divergent views [51,52]. The conspiracy theories of the ‘anti-vaxxer’ movements during the COVID-19 pandemic [53-55] and the alternative narratives created around the January 6 insurrection in Washington, DC (USA) [56,57] are both cases in point.

3. The transformative power of generative AI language models

Generative AI language models, such as ChatGPT or Google Bard, are deep-learning models that use a transformer architecture to detect statistical connections and patterns in textual data in order to generate coherent, contextually relevant, human-like responses based on the input they receive [58,59]. Generative AI language models are pre-trained on a large and diverse body of textual materials, such as books (both fiction and non-fiction), articles and webpages. Pre-training, subsequently refined through human feedback, teaches such models to anticipate the next word in a text string by combining statistical patterns with linguistic regularities and semantic fields. The depth and complexity of the responses are correlated with the size of the training dataset and the nature of the textual resources incorporated into it.
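The statistical next-word prediction described above can be illustrated, in deliberately simplified form, with a bigram model: a toy stand-in for the transformer architecture that captures the same principle of predicting the next word from patterns in the training text. The miniature corpus is invented for the example.

```python
import random
from collections import Counter, defaultdict

# A miniature training corpus (invented); real models train on billions of words.
corpus = ("the press spread knowledge . the press spread news . "
          "the web spread news . the web spread opinion .").split()

# Count how often each word follows each preceding word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Sample the next word in proportion to how often it followed `word` in training."""
    followers = bigrams[word]
    words, counts = zip(*followers.items())
    return random.choices(words, weights=counts, k=1)[0]

# Generate a short continuation from a seed word.
text = ["the"]
for _ in range(6):
    text.append(predict_next(text[-1]))
print(" ".join(text))   # e.g. "the press spread news . the web"
```

Everything such a model ‘knows’ is a frequency pattern in its training text; a transformer does the same in principle, but over far longer contexts and with learned, rather than counted, statistics.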
Taking ChatGPT as an example, the underlying language model has undergone several iterations and improvements since the release of the first GPT in 2018. GPT-2, released in 2019, comprised 1.5 billion parameters and was able to produce longer segments of coherent text. The next release, GPT-3 (June 2020), scaled up to 175 billion parameters, allowing it to execute diverse natural-language tasks, such as text classification and sentiment analysis, thereby facilitating the contextual answering of questions [60]. In addition to functioning as a chatbot, pre-training at this scale allowed the model to draft basic contextual texts such as e-mails and programming code. ChatGPT, running on the refined GPT-3.5 and incorporating human preferences and feedback into its training, was released to the general public in November 2022 as part of a free research preview to encourage experimentation [61]. The current version, GPT-4 (March 2023), exhibits greater responsiveness to user intentions as expressed in questions and query tasks, a reduced probability of generating offensive or dangerous output, and greater factual accuracy [60]. The temporal cut-off of the training data for both GPT-3.5 and GPT-4 was September 2021, which implies that ChatGPT cannot integrate or comment on events, discoveries and viewpoints that post-date that cut-off.
A generative AI language model is not a monolith, however. Apart from competing public-use products, such as OpenAI’s ChatGPT and Google’s Bard, the underlying technology can be customised. While the open-access models that captured the public imagination draw on a large dataset of public knowledge, industry-specific applications can rely on a customised and well-defined training dataset (see the sketch below). Consider a museum setting, for example, where generative AI language models can be used to conceptualise and plan exhibitions based on museum holdings and to extract and summarise pertinent data from longer documents [62]; to create texts for exhibition panels, object labels, catalogue information and museum guides [63-65]; and to respond to user queries, track reactions to specific exhibitions or the museum overall, and track visitor satisfaction [66,67]. Consider a business setting such as a housing developer, where a generative AI language model could be coupled with generative visual AI design: a prospective homeowner could use their own language to express their desires and concepts while interactively designing a home, with the generative AI prompting where needed and offering aspects of home design that had not been considered. Once fully customised with choices such as bathroom fittings, the total design can not only be costed automatically, but a broad delivery time frame can also be calculated. Consider also a governmental portal, where a generative AI language model can guide a user through the labyrinth of regulations, funding opportunities and general service delivery.
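The paper does not specify how such customisation would be implemented; one widely used pattern is to retrieve passages from the institution’s own curated corpus and supply them to the model as context (retrieval-augmented generation). The sketch below uses a crude bag-of-words similarity in place of a real embedding model, and the catalogue records are invented; it is meant only to show how answers can be confined to a finite, authoritative knowledge base.

```python
import math
from collections import Counter

# A finite, curated knowledge base, e.g. a museum's catalogue (invented records).
documents = [
    "Object 17: bronze astrolabe, Augsburg, c. 1590, acquired 1921.",
    "Object 42: woodblock print of a crocodile after Merian, printed 1652.",
    "Object 88: pamphlet from the 1525 Peasants' War, printed in Nuremberg.",
]

def bag_of_words(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list:
    """Return the k catalogue records most similar to the query."""
    q = bag_of_words(query)
    return sorted(documents, key=lambda d: cosine(q, bag_of_words(d)), reverse=True)[:k]

# The retrieved records are prepended to the question, instructing the model
# to answer from the institution's holdings only.
context = "\n".join(retrieve("crocodile print after Merian"))
prompt = (f"Answer using only the catalogue records below.\n{context}\n"
          f"Question: when was the crocodile print produced?")
print(prompt)   # this prompt would then be passed to the generative model
```

Because the model only ever sees the curated records, its answers remain within the deploying institution’s well-circumscribed knowledge base, which is precisely the property discussed below.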
While such approaches allow for a maximum of highly personalised user input, and allow users to interact in their own way of expressing themselves, major shortcomings exist. Such approaches lack the capacity for empathy. Moreover, any human creativity is confined to the user interacting with the generative AI model, rather than arising from the combination of the questioner and the person answering, as would be the case in inter-human communication.
Given that the output of generative AI language models is merely a complex text prediction based on statistical connections and patterns in the textual data included in their training dataset, such language models, at least at this point in time, can suffer from inverted-logic phenomena [68] and are incapable of independent creative thought. Any apparent creativity displayed by generative AI language models, such as when providing a requested poem, rests solely in the perception of the person interacting with the model. A reader, interpreting the output within their own experiences and expectations, will either judge a generative AI-written poem as creative and ‘fit for purpose’ or dismiss it as bad poetry.
Common to the examples presented above is that the knowledge applied by the model is owned by the entity that deploys the generative AI model, and that the knowledge base contained in its training dataset is finite, well circumscribed and authoritative. All answers provided will adhere to one truth only and, given the design of the model and its training, that truth will be absolute. In industry-specific applications that may be applicable and apposite, but what about general, public settings, where truth is based on a presentation of evidence and its critical examination?

4. The creation of Public Knowledge by generative AI language models

It can be assumed that there will always be individuals who engage in critical enquiry and thus desire to triangulate the validity and veracity of answers from multiple sources. Yet, based on the trajectory of current WWW usage, the majority of users will be looking for a quick answer without having to engage in in-depth research. The allure of generative AI language models is that queries can be posed in the user’s natural way of expressing themselves, rather than by entering a series of arcane keyword combinations that best summarize what the user is seeking to know. Depending on how the question is asked, the user is presented with a concise or a contextual answer. Further elaboration, if required, occurs in the form of a dialogue which effectively mimics the user’s interpersonal communication patterns. A significant advantage of generative AI language models over standard web pages is that the response is tailored specifically to the question in the way it was asked, thereby obviating the need to screen a body of text, such as a web page or a Wikipedia entry, for the specific information sought.
Even though the majority of web searches do not progress beyond the first page of links offered by a search engine, they still offer the user a choice of which information source(s) to access. Questions posed to generative AI language models will yield one answer, the validity of which has to be taken at face value. While the response can be regenerated, the result will be one answer that is broadly the same as the answer received before. The question is whether that single answer satisfies a user’s needs and the user’s expectations of veracity.
The author posits that, over time, the critical thinking of the majority of users will decline even further and that such single-answer solutions, in particular when offered in an interactive, natural-language mode of delivery, will suffice. This proposition is based on five trajectories:
  • Generative AI language models are suited to semi-automating repetitive and routine tasks (drafting e-mails, summarising and extracting information from larger textual datasets, providing item selections based on semi-vague user input) that are customised to a user’s needs [69,70]. The increasing familiarity with such systems in daily work life will ‘bleed’ into daily practice in non-work settings, leading to widespread uptake.
  • In an age of both instant gratification and an attitude that ‘near enough is good enough’, the bulk of the general public will avail themselves of solutions that provide the most immediate and convenient answers with the least amount of effort.
  • Transformative technologies that satisfy this demand are poised to gain traction and dominance over alternate ‘traditional’ approaches.
  • There is a worrying trend that sees critical thinking skills and information literacy in near-terminal decline among large swathes of the populace. Evidence for this can be found in the increasingly uncritical consumption of news and information, the growing reliance on and trust placed in the opinions of social media influencers, and the continued devaluation of academic experts. At present, many researchers, relying on years of experience and rigorous, peer-reviewed research, find themselves in the position that they may well generate findings and insights into social or environmental phenomena, but that their findings are dismissed out of hand, without any evidence to the contrary, by ideologically or politically motivated commentators and social media influencers who have assumed a position of authority in online communities. The past decade has shown an increased level of tribalism in the general public, where the selective use of news sources, online communities that act as echo chambers, and the spruiking of alternative ‘truths’ that defy unequivocal evidence to the contrary have increasingly become normalized. In many western democracies there is no indication that this trend will abate anytime soon. Rather, it is bound to continue, intensify and accelerate.
  • Finally, there are multiple examples where, over time, information sources that were once derided as untrustworthy or shallow have become accepted by the general public not only as the norm but also as a primary source of information. A good example is Wikipedia, which has become one of the main ‘go-to’ sites on the internet.
Even though it is possibly of little concern to the average user, any reliance on generative AI language models has fundamental problems, as any such model can only be as good as its design. ChatGPT, for example, often purports merely to strive to provide factual and neutral information and not to hold political opinions [71]. Because model specifications, algorithmic constraints and policy decisions shape the final product [72], however, ChatGPT and any other generative AI language model cannot be without bias. This relates to the quality of the source material that comprises the dataset, such as whether primary, secondary or even tertiary sources, such as Wikipedia, have been used to train the model [73,74]. Additional biases derive from the selection of the source material, which will have been subconsciously, if not consciously, influenced and shaped by the ideologies of the people programming, ‘feeding’ and training the system. Consequently, while some studies suggested right-leaning moral foundations in the generated answers [75], political orientation tests, for example, showed that ChatGPT exhibits a preference for libertarian, progressive and left-leaning viewpoints [71,76-79], with a North American slant [80].
While it is posited that the biases observed at present are unintentional and subconsciously reflective of the interest spheres and ideological outlook of the creators and trainers, this raises the spectre of a malevolent actor intentionally influencing the dataset to pursue an ideological, political or commercial agenda. While such control is more likely to occur in authoritarian regimes, in particular those that already exercise restrictive control and censorship over the internet and social media content accessible to their citizens (e.g., the PR China), there is no guarantee that other countries or the commercial IT behemoths themselves (e.g., Google, Microsoft, Baidu) may not engage in a similar fashion.
Critical here is also the fact that such a dataset is unlikely to remain static. While at present this seems to be the case as the technologies are being refined, it is unlikely to continue in the future. It can be anticipated that subsequent iterations of generative AI language models will possess the capability to dynamically acquire new sources and add them to the dataset. Which sources are added and which are ‘overlooked’ will depend entirely on the algorithm deployed. Thus, it is readily conceivable that access to news sources could be confined to selected news channels, with the concomitant editorial and political reporting bias.
In an age where disinformation campaigns via online troll farms are commonplace, a scenario has to be contemplated where politically motivated state actors may inject disinformation content into the dataset of a generative AI language model, thereby adjusting its responses. Further manipulation of these responses appears possible by targeted external training of the language model. At present, users have the opportunity to regenerate a response if the initial response does not match their expectations. They are then asked to evaluate whether the regenerated version was better or worse than the initial answer. As this feedback mechanism adds a ‘learning’ element to the model, it is readily conceivable that a malevolent actor may engage an ‘army’ of users to flood a generative AI language model with selected queries asked in different phrasing but with the same content and then systematically nudge the responses, through feedback, into a desired direction.
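The nudging mechanism contemplated here can be illustrated with a deliberately crude simulation. Real systems aggregate feedback in far more complex ways, and the numbers below are invented; the sketch only shows how a modest bloc of coordinated raters can swamp neutral organic feedback.

```python
import random

# Toy model: the provider keeps a preference weight per candidate answer and
# serves whichever is higher; each 'better'/'worse' rating nudges the weight.
# This is an illustrative assumption, not any vendor's actual mechanism.
weights = {"answer_A": 0.0, "answer_B": 0.0}

def feedback(answer: str, better: bool) -> None:
    weights[answer] += 1 if better else -1

random.seed(1)

# 1,000 organic users with no systematic preference between the answers...
for _ in range(1000):
    feedback(random.choice(["answer_A", "answer_B"]), random.random() < 0.5)

# ...plus 200 coordinated accounts that always reward answer_B.
for _ in range(200):
    feedback("answer_B", True)

print(weights)   # answer_B now dominates despite the neutral organic signal
```

In this toy setting, a coordinated minority amounting to a sixth of the traffic is enough to determine which answer the system prefers; scaled up, the ‘army of users’ scenario becomes a quantifiable risk.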
Finally, in moves reminiscent of George Orwell’s 1984, it is of course also possible to alter the responses of generative AI language models by removing material that had been included in the dataset, but that for whatever reason has become undesirable. In consequence of the material no longer being accessible to the model, responses will exhibit stronger biases in the opposite direction. There is a real risk of a future with a single truth presented to a progressively uncritical public.

5. Is there an off-ramp or are we doomed to be on the road to public ignorance?

Before we consider whether we are doomed to be on the road to public ignorance, it is apposite to briefly consider alternate futures, as these may indicate off-ramps that allow us to avoid the spectre painted above. Futures studies and strategic foresight methodology, of course, stipulate that not only one future can be conceptualised, but that trajectories point to multiple futures that diverge the further we move forward from the present [14,15].
One of these scenarios entails the continuation and expansion of the tribalisation of the public sphere, as exemplified by the increasing and deepening political polarisation currently on display in United States politics, a phenomenon that appears to be gaining traction in other western democracies. This is hardly a new development, however. During the nineteenth century, newspaper proprietors blatantly advanced the political and economic interests of their constituency [81,82], a modus operandi that at the present time plays out in TV news channels and internet media. Where a given standpoint is not catered for, either in general or in the desired intensity, alternative news outlets and media ecosystems are established (e.g., Breitbart News and ‘Truth Social’ [83,84]). What is different compared to the past, and what is of both particular interest and concern, is the increasing unwillingness of segments of the public to engage in critical examination of their own standpoint and to tolerate the standpoints of others. While this is at present largely confined to diverging opinions and interpretations of political, social and environmental/natural events, examples such as the ‘anti-vaxxer’ movements during the COVID-19 pandemic [53-55] show that this can extend to other aspects of public life where ideological standpoints, rather than evidence, dominate discussion.
A future can thus be conceptualised where competing and tribalized generative AI language models will provide users with access to knowledge that conforms with their own ideological persuasion. By controlling the training datasets, as well as any future additions, the generative AI language models will become the ultimate echo chambers, perpetually reinforcing opinion and ‘knowledge.’
As both scenarios have a distinctly dystopian feel to them, one has to ask whether there is an off-ramp or whether we are doomed to be on the road to public ignorance. Two underlying trends are propelling society along the trajectory to these dystopian futures: an increasingly uncritical population, and the devaluation of evidence-based research carried out by researchers and specialists. The public education system plays a pivotal role in slowing down and reversing these trends. Educators play a critical role in instilling an understanding of the nature and value of evidence-based research among their students, by showing that divergent interpretations of a finding may be possible, but that such divergent interpretations need to be based on informed critique and be evidence-based themselves. Information literacy, including AI literacy, is a cornerstone. Fundamental, however, will be that educators actively instil a desire for critical thinking and foster this at every step of the way, from entry into school through to university. Unless they do so, an information-illiterate society will be the inevitable outcome. To avoid this, present and future educators will need to be equipped with appropriate intellectual and curriculum tools. This requires political will: a will to make this a priority and a will to provide the required teaching resources and teacher training. Education is always political, but several recent examples in the USA have seen an increasing politicisation of the education system along hard-line ideological lines. It has been posited that political ideologues are not interested in, and indeed are afraid of, a population capable of critical thinking.
It would appear that emergent generative AI has forced our hand and that, as a society, we have arrived at the Rubicon.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Biswas, S. Importance of chat GPT in Agriculture: According to chat GPT. Available at SSRN 4405391 2023.
  2. Castro Nascimento, C.M.; Pimentel, A.S. Do Large Language Models Understand Chemistry? A Conversation with ChatGPT. Journal of Chemical Information and Modeling 2023, 63, 1649-1655.
  3. Surameery, N.M.S.; Shakor, M.Y. Use chat gpt to solve programming bugs. International Journal of Information Technology & Computer Engineering (IJITC) ISSN: 2455-5290 2023, 3, 17-22.
  4. Spennemann, D.H.R. ChatGPT and the generation of digitally born “knowledge”: how does a generative AI language model interpret cultural heritage values? Preprints.org 2023, 1-40. [CrossRef]
  5. Sng, G.G.R.; Tung, J.Y.M.; Lim, D.Y.Z.; Bee, Y.M. Potential and pitfalls of ChatGPT and natural-language artificial intelligence models for diabetes education. Diabetes Care 2023, 46, e103-e105. [CrossRef]
  6. Bays, H.E.; Fitch, A.; Cuda, S.; Gonsahn-Bollie, S.; Rickey, E.; Hablutzel, J.; Coy, R.; Censani, M. Artificial intelligence and obesity management: An Obesity Medicine Association (OMA) Clinical Practice Statement (CPS) 2023. Obesity Pillars 2023, 6, 100065. [CrossRef]
  7. Grünebaum, A.; Chervenak, J.; Pollet, S.L.; Katz, A.; Chervenak, F.A. The exciting potential for ChatGPT in obstetrics and gynecology. Am. J. Obstet. Gynecol. 2023, 228, 696-705. [CrossRef]
  8. Spennemann, D.H.R. Exhibiting the Heritage of Covid-19—a Conversation with ChatGPT. Heritage 2023, 6, 5732-5749. [CrossRef]
  9. Qi, X.; Zhu, Z.; Wu, B. The promise and peril of ChatGPT in geriatric nursing education: What We know and do not know. Aging and Health Research 2023, 3, 100136.
  10. Currie, G.; Singh, C.; Nelson, T.; Nabasenja, C.; Al-Hayek, Y.; Spuur, K. ChatGPT in medical imaging higher education. Radiography 2023, 29, 792-799. [CrossRef]
  11. Agapiou, A.; Lysandrou, V. Interacting with the Artificial Intelligence (AI) Language Model ChatGPT: A Synopsis of Earth Observation and Remote Sensing in Archaeology. Heritage 2023, 6, 4072-4085. [CrossRef]
  12. Bryant, A. AI Chatbots: Threat or Opportunity? Informatics 2023, 10. [CrossRef]
  13. De Angelis, L.; Baglivo, F.; Arzilli, G.; Privitera, G.P.; Ferragina, P.; Tozzi, A.E.; Rizzo, C. ChatGPT and the rise of large language models: the new AI-driven infodemic threat in public health. Frontiers in Public Health 2023, 11, 1166120. [CrossRef]
  14. Hines, A.; Bishop, P.J.; Slaughter, R.A. Thinking about the future: Guidelines for strategic foresight; Social Technologies Washington, DC: 2006.
  15. van Duijne, F.; Bishop, P. Introduction to strategic foresight; Future Motions, Dutch Futures Society: Den Hag, 2018; Volume 1, p. 67.
  16. Dunagan, J.F. Jim Dator: The Living Embodiment of Futures Studies. J. Future Stud. 2013, 18, 131-138.
  17. Inayatullah, S. Learnings from futures studies: Learnings from dator. J. Future Stud. 2013, 18, 1-10.
  18. Kieser, A. Organizational, institutional, and societal evolution: Medieval craft guilds and the genesis of formal organizations. Administrative science quarterly 1989, 540-564. [CrossRef]
  19. Landau, D.; Parshall, P.W. The Renaissance Print, 1470-1550; Yale University Press: 1994.
  20. Frey, W.; Raitz, W.; Seitz, D. Flugschriften aus der Zeit der Reformation und des Bauernkriegs. In Einführung in die deutsche Literatur des 12. bis 16. Jahrhunderts: Bürgertum und Fürstenstaat—15./16. Jahrhundert; 1981; pp. 38-68.
  21. Peacey, J. Politicians and pamphleteers: propaganda during the English Civil Wars and Interregnum; Routledge: 2017.
  22. Spennemann, D.H.R. Matthäus Merian’s crocodile in Japan. A biblio-forensic examination of the origins and longevity of an illustration of a Crocodylus niloticus in Jan Jonston’s Historiae naturalis de quadrupetibus. Script & Print 2019, 43, 201–239.
  23. Boto, C. The Age of Enlightenment and Education. In Oxford Research Encyclopedia of Education; Noblit, G.W., Ed.; Oxford University Press: Oxford, 2021.
  24. Sullivan, L.E. Circumscribing knowledge: Encyclopedias in historical perspective. The Journal of Religion 1990, 70, 315-339. [CrossRef]
  25. Spennemann, D.H.R. Combining science with education: the beginnings of agricultural extension in 1890s New South Wales (Australia). Rural Society 2000, 10, 175–194.
  26. True, A.C. A history of agricultural extension work in the United States, 1785-1923; US Government Printing Office: Washington, 1928.
  27. Mettler, S. Soldiers to citizens: The GI Bill and the making of the greatest generation; Oxford University Press: 2005.
  28. Croucher, G.; Woelert, P. Institutional isomorphism and the creation of the unified national system of higher education in Australia: An empirical analysis. Higher Education 2016, 71, 439-453. [CrossRef]
  29. McClelland, C.E. The German experience of professionalization: Modern learned professions and their organizations from the early nineteenth century to the Hitler era; Cambridge University Press: 2002.
  30. Brezis, E.S.; Crouzet, F. The role of higher education institutions: recruitment of elites and economic growth. Institutions, development, and economic growth 2006, 13, 191.
  31. Haider, J.; Sundin, O. The materiality of encyclopedic information: Remediating a loved one–Mourning Britannica. Proceedings of the American Society for Information Science and Technology 2014, 51, 1-10.
  32. Berners-Lee, T.J. Information management: A proposal No. CERN-DD-89-001-OC. Available online: https://web.archive.org/web/20100401051011/https://www.w3.org/History/1989/proposal.html (accessed on Sep 1, 2023).
  33. Berners-Lee, T. Weaving the Web: The original design and ultimate destiny of the World Wide Web by its inventor; Harper San Francisco: 1999.
  34. Van Dijk, J.; Hacker, K. The digital divide as a complex and dynamic phenomenon. The information society 2003, 19, 315-326. [CrossRef]
  35. Spennemann, D.H.R. Digital Divides in the Pacific Islands. IT & Society 2004, 1, 46-65.
  36. Spennemann, D.H.R.; Green, D.G. A special interest network for natural hazard mitigation for cultural heritage sites. In Disaster Management Programs for Historic Sites; Spennemann, D.H.R., Look, D.W., Eds.; Association for Preservation Technology, Western Chapter and Johnstone Centre, Charles Sturt University: San Francisco and Albury, NSW, 1998; pp. 165-172.
  37. Langville, A.N.; Meyer, C.D. Google's PageRank and beyond: The science of search engine rankings; Princeton University Press: 2006.
  38. Wikipedia. History of Wikipedia. Available online: https://en.wikipedia.org/wiki/History_of_Wikipedia (accessed on Sep 1, 2023).
  39. Inayatullah, S. Future Avoiders, Migrants and Natives. J. Future Stud. 2004, 9, 83–86.
  40. Merriam-Webster. google [verb]. Available online: https://www.merriam-webster.com/dictionary/google (accessed on Sep 1, 2023).
  41. Lee, P.M.; Foster, R.; McNulty, A.; McIver, R.; Patel, P. Ask Dr Google: what STI do I have? Sex. Transm. Infect. 2021, 97, 420-422.
  42. Burzyńska, J.; Bartosiewicz, A.; Januszewicz, P. Dr. Google: Physicians—The Web—Patients Triangle: Digital Skills and Attitudes towards e-Health Solutions among Physicians in South Eastern Poland—A Cross-Sectional Study in a Pre-COVID-19 Era. Int. J. Env. Res. Publ. Health 2023, 20, 978. [CrossRef]
  43. Subba Rao, S. Commercialization of the Internet. New Library World 1997, 98, 228-232. [CrossRef]
  44. Fabos, B. Wrong turn on the information superhighway: Education and the commercialization of the Internet; Teachers College Press: 2004.
  45. Silverstein, C.; Marais, H.; Henzinger, M.; Moricz, M. Analysis of a very large web search engine query log. In ACM SIGIR Forum, 1999; pp. 6-12. [CrossRef]
  46. McTavish, J.; Harris, R.; Wathen, N. Searching for health: the topography of the first page. Ethics and information technology 2011, 13, 227-240.
  47. Khamis, S.; Ang, L.; Welling, R. Self-branding, ‘micro-celebrity’ and the rise of social media influencers. Celebrity Studies 2017, 8, 191-208.
  48. Smith, B.G.; Kendall, M.C.; Knighton, D.; Wright, T. Rise of the brand ambassador: Social stake, corporate social responsibility and influence among the social media influencers. Communication Management Review 2018, 3, 6-29. [CrossRef]
  49. Patil, S.V. Penalized for expertise: Psychological proximity and the devaluation of polymathic experts. In Proceedings of the Academy of Management Proceedings, 2012; p. 14694.
  50. Lavazza, A.; Farina, M. The role of experts in the Covid-19 pandemic and the limits of their epistemic authority in democracy. Frontiers in public health 2020, 8, 356. [CrossRef]
  51. Garrett, R.K. Echo chambers online?: Politically motivated selective exposure among Internet news users. Journal of computer-mediated communication 2009, 14, 265-285. [CrossRef]
  52. Kitchens, B.; Johnson, S.L.; Gray, P. Understanding Echo Chambers and Filter Bubbles: The Impact of Social Media on Diversification and Partisan Shifts in News Consumption. MIS quarterly 2020, 44. [CrossRef]
  53. Zwanka, R.J.; Buff, C. COVID-19 generation: A conceptual framework of the consumer behavioral shifts to be caused by the COVID-19 pandemic. Journal of International Consumer Marketing 2021, 33, 58-67. [CrossRef]
  54. Carrion-Alvarez, D.; Tijerina-Salina, P.X. Fake news in COVID-19: A perspective. Health promotion perspectives 2020, 10, 290. [CrossRef]
  55. Bojic, L.; Nikolic, N.; Tucakovic, L. State vs. anti-vaxxers: Analysis of Covid-19 echo chambers in Serbia. Communications 2023, 48, 273-291. [CrossRef]
  56. Lee, C.S.; Merizalde, J.; Colautti, J.D.; An, J.; Kwak, H. Storm the capitol: linking offline political speech and online Twitter extra-representational participation on QAnon and the January 6 insurrection. Frontiers in Sociology 2022, 7, 876070. [CrossRef]
  57. Anderson, J.; Coduto, K.D. Attitudinal and Emotional Reactions to the Insurrection at the US Capitol on January 6, 2021. American Behavioral Scientist 2022, 00027642221132796. [CrossRef]
  58. Markov, T.; Zhang, C.; Agarwal, S.; Eloundou, T.; Lee, T.; Adler, S.; Jiang, A.; Weng, L. New and Improved Content Moderation Tooling. Available online: https://web.archive.org/web/20230130233845mp_/https://openai.com/blog/new-and-improved-content-moderation-tooling/ (accessed on June 28, 2023).
  59. Collins, E.; Ghahramani, Z. LaMDA: our breakthrough conversation technology. Available online: https://blog.google/technology/ai/lamda/ (accessed on Sep 1, 2023).
  60. Ray, P.P. ChatGPT: A comprehensive review on background, applications, key challenges, bias, ethics, limitations and future scope. Internet of Things and Cyber-Physical Systems 2023, 3, 121-154. [CrossRef]
  61. Ray, P.P. ChatGPT: A comprehensive review on background, applications, key challenges, bias, ethics, limitations and future scope. Internet of Things and Cyber-Physical Systems 2023.
  62. Lehmann, J. On the Use of ChatGPT in Cultural Heritage Institutions. Available online: https://mmk.sbb.berlin/2023/03/03/on-the-use-of-chatgpt-in-cultural-heritage-institutions/?lang=en (accessed on Jun 29, 2023).
  63. Trichopoulos, G.; Konstantakis, M.; Caridakis, G.; Katifori, A.; Koukouli, M. Crafting a Museum Guide Using GPT4. Preprints.org 2023, 2023061618.
  64. Maas, C. Was kann ChatGPT für Kultureinrichtungen tun? Available online: https://aureka.ai/2023/05/13/was-kann-chatgpt-fuer-kultureinrichtungen-tun/ (accessed on Jun 29, 2023).
  65. Merritt, E. Chatting About Museums with ChatGPT. Available online: https://www.aam-us.org/2023/01/25/chatting-about-museums-with-chatgpt (accessed on Jun 29, 2023).
  66. Ciecko, B. 9 ways ChatGPT can empower museums & cultural organizations in the digital age. Available online: https://cuseum.com/blog/2023/4/13/9-ways-chatgpt-can-empower-museums-cultural-organizations-in-the-digital-age (accessed on Jun 29, 2023).
  67. Frąckiewicz, M. ChatGPT in the World of Museum Technology: Enhancing Visitor Experiences and Digital Engagement. Available online: https://ts2.space/en/chatgpt-in-the-world-of-museum-technology-enhancing-visitor-experiences-and-digital-engagement/ (accessed on Jun 29, 2023).
  68. Spennemann, D.H.R. ChatGPT and the generation of digitally born “knowledge”: how does a generative AI language model interpret cultural heritage values? Knowledge in press, 3, [accepted].
  69. Ritala, P.; Ruokonen, M.; Ramaul, L. Transforming boundaries: how does ChatGPT change knowledge work? Journal of Business Strategy 2023, ahead-of-print. [CrossRef]
  70. Trichopoulos, G.; Konstantakis, M.; Alexandridis, G.; Caridakis, G. Large Language Models as Recommendation Systems in Museums. Electronics 2023, 12, 3829. [CrossRef]
  71. Rozado, D. The political biases of chatgpt. Social Sciences 2023, 12, 148. [CrossRef]
  72. Ferrara, E. Should chatgpt be biased? challenges and risks of bias in large language models. arXiv preprint arXiv:2304.03738 2023.
  73. Spennemann, D.H.R. What has ChatGPT read? References and referencing of archaeological literature by a generative artificial intelligence application. arXiv preprint arXiv:2308.03301, 2023.
  74. Chang, K.K.; Cramer, M.; Soni, S.; Bamman, D. Speak, memory: An archaeology of books known to ChatGPT/GPT-4. arXiv preprint arXiv:2305.00118, 2023.
  75. Park, P.; Schoenegger, P.; Zhu, C. “Correct answers” from the psychology of artificial intelligence. Preprint at https://doi.org/10.48550/arXiv.2302, 2023.
  76. Rutinowski, J.; Franke, S.; Endendyk, J.; Dormuth, I.; Pauly, M. The Self-Perception and Political Biases of ChatGPT. arXiv preprint arXiv:2304.07333 2023.
  77. Motoki, F.; Pinho Neto, V.; Rodrigues, V. More human than human: Measuring chatgpt political bias. Available at SSRN 4372349 2023.
  78. McGee, R.W. Is Chat GPT biased against conservatives? An empirical study (February 15, 2023). SSRN 2023, https://doi.org/10.2139/ssrn.4359405.
  79. Hartmann, J.; Schwenzow, J.; Witte, M. The political ideology of conversational AI: Converging evidence on ChatGPT's pro-environmental, left-libertarian orientation. arXiv preprint arXiv:2301.01768 2023.
  80. Cao, Y.; Zhou, L.; Lee, S.; Cabello, L.; Chen, M.; Hershcovich, D. Assessing cross-cultural alignment between chatgpt and human societies: An empirical study. arXiv preprint arXiv:2303.17466 2023.
  81. Hughes, S.; Spennemann, D.H.R.; Harvey, R. Printing heritage of colonial newspapers in Victoria: the Ararat Advertiser and the Avoca Mail. Bulletin of the Bibliographic Society of Australia and New Zealand 2004, 28, 41–61.
  82. Spennemann, D.H.R. Albury Banner. In A Companion to the Australian Media, Griffen-Foley, B., Ed.; Australian Scholarly Publishing: Melbourne, 2014; pp. 17-18.
  83. Gerard, P.; Botzer, N.; Weninger, T. Truth Social Dataset. In Proceedings of the International AAAI Conference on Web and Social Media, 2023; pp. 1034-1040.
  84. Roberts, J.; Wahl-Jorgensen, K. Strategies of alternative right-wing media: The case of Breitbart News. In The Routledge Companion to Political Journalism; Routledge: 2021; pp. 164-173.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.