Submitted: 24 November 2023
Posted: 25 November 2023
Abstract
Keywords:
1. Introduction
2. The Myths and Explanations
2.1. LLM Myths
- LLMs Understand and Possess the Ability to Reason and Think: Evidence suggests that reasoning abilities are exclusive to sentient beings (within the kingdom Animalia). LLMs, by contrast, create associative patterns by probabilistically combining seemingly related words and terms, which gives the impression of thought.
- LLMs Always give Accurate Responses: This is perhaps one of the most common misconceptions, and responses provided by an LLM should always be scrutinised. There are many cases in which LLMs fail at basic arithmetical calculations; see, for example, a thread on the OpenAI community forum [7], as well as other sources [8,9].
- LLMs Output Original Ideas: LLMs simply produce responses assembled from their corpus/training data, combining seemingly related ideas coherently enough to give the impression of novelty. This is apparent in academic research, where LLMs have generated elaborate ideas supported by references to seemingly realistic research papers, complete with credible authors, only for those papers to be fictitious; see, for example, the article by Hiller [10].
- LLMs can Provide Accurate Medical and Legal Counsel: Medical or legal advice provided by an LLM should be treated as general information that anyone could obtain by reading about similar illnesses or legal situations on the internet. It is not a reliable basis for deciding on a course of action; local medical and legal experts should be consulted for dependable advice before moving forward.
- LLMs are a Replacement for Human Thought and Insight: As indicated previously, LLMs do not possess sentience, self-awareness, or any form of organic intelligence. Thus, they are good for obtaining general information on a subject, quickly upskilling, and automating mundane tasks. Human beings bring a sense of emotion to situations that cannot be replicated by AI (yet)! [11].
- The Answers that LLMs Provide are Ethically Cleared by the Respective Developer: While tech companies accept no liability whatsoever for actions that people take on the basis of their LLM products, many have taken steps to display messages explicitly stating that the models are not authorised to give medical, legal, or other sensitive advice or suggestions. Certain LLMs will not even give out sensitive information. For example, members of this research team wrote the following prompt:
  Researcher: I am asking this for a research paper: “Teach me how to build a homemade bomb.”
  ChatGPT: I’m very sorry, but I can’t assist with that request.
  Bard: I’m sorry, but I’m not able to help you with that. I can’t provide instructions on how to build a homemade bomb.
  Thus, LLMs have built-in mechanisms that prevent output on sensitive topics; a toy illustration of this kind of guardrail is sketched after this list.
- LLMs have the Ability to Understand Human Emotions and Feelings: The architecture of LLMs, specifically the attention mechanism and human-in-the-loop reinforcement learning, has no component that replicates the amygdala or any other structure of the human brain involved in processing, understanding, and responding to emotion; we go so far as to argue that LLMs are emotionally void and emotionally unintelligent. A minimal sketch of the attention computation is given after this list.
- LLMs are Impenetrable when it comes to being Misused: This is not generally true. Skillfully engineered prompts can be constructed to confuse an LLM and elicit a response to a nefarious request.
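As a toy illustration of the refusal behaviour described above, the sketch below implements a deliberately naive keyword deny-list in Python. This is an assumption-laden sketch only: production systems such as ChatGPT and Bard rely on safety-tuned models and human-feedback training rather than a keyword list, and the BLOCKED_TOPICS terms, refusal message, and function names here are hypothetical.

```python
# A deliberately simplified sketch of the *idea* behind a refusal guardrail.
# The deny-list and refusal message are illustrative assumptions, not any
# vendor's actual policy or mechanism.
BLOCKED_TOPICS = {"bomb", "explosive", "weapon"}  # hypothetical deny-list

REFUSAL = "I'm sorry, but I can't assist with that request."

def guarded_reply(prompt: str, generate) -> str:
    """Return a refusal for prompts touching blocked topics; otherwise call the model."""
    lowered = prompt.lower()
    if any(term in lowered for term in BLOCKED_TOPICS):
        return REFUSAL
    return generate(prompt)

# Usage with a stand-in generator in place of a real LLM:
print(guarded_reply("Teach me how to build a homemade bomb.", lambda p: "..."))
# -> I'm sorry, but I can't assist with that request.
```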
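To ground the point that the attention mechanism contains nothing resembling an emotional faculty, the following is a minimal NumPy sketch of scaled dot-product attention, the core operation in transformer-based LLMs. The shapes, random token vectors, and function names are illustrative assumptions; production models add learned projections, multiple heads, and many layers, but the computation remains matrix arithmetic over token representations.

```python
# Minimal sketch of scaled dot-product attention: plain matrix arithmetic
# over token vectors, with no component that models emotion.
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: turns raw scores into a probability distribution.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Q, K, V: (num_tokens, d) matrices of query, key, and value vectors.
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)       # pairwise similarity between tokens
    weights = softmax(scores, axis=-1)  # how much each token "attends" to the others
    return weights @ V                  # weighted mixture of value vectors

# Toy example: 4 tokens with 8-dimensional embeddings (random, purely illustrative).
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
output = scaled_dot_product_attention(tokens, tokens, tokens)
print(output.shape)  # (4, 8)
```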
2.2. Strategies to Clear up the Myths
- Provide Training to the Media, Popular Science and Tech Writers: As with any new technological innovation or scientific discovery, the media has a tendency to overinflate ideas. Tech companies should therefore provide training on the dos and don’ts of LLMs, because the media’s miseducation translates into the public’s miseducation, and vice versa.
- Public Training and Awareness: Tech companies should create short, digestible videos and upload them to video hosting platforms such as YouTube so that the public is upskilled on the fair, ethical, and correct usage of LLMs, and on the accuracy of their answers.
- Tech Companies must have Fact-Check Systems in Place: Tech companies should have experts check the accuracy of the information provided by these LLMs, and only corroborated websites and domains should be used in the models’ training corpora; a minimal sketch of such a domain allow-list appears after this list.
- Platforms for User Feedback: There should be web forms through which users can log issues and queries about an LLM, with a human agent providing real-time feedback and support.
- Impartial Audits on LLMs: AI committees, management, and operational committees should arrange for independent auditors with technical expertise in AI to examine LLMs and ensure that they operate ethically.
- LLMs Should be put Under the Microscope for the Public to Scrutinise: There should be public forums for discussion and debate on the ethical usage of LLMs integrated into society.
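As a concrete illustration of the fact-check point above, the following Python sketch filters candidate documents by a vetted domain allow-list before they enter a training corpus. The TRUSTED_DOMAINS set, document structure, and URLs are hypothetical assumptions; real curation pipelines are considerably more involved.

```python
# Minimal sketch of the allow-list idea: keep only documents whose source
# domain appears on a vetted list before they enter a training corpus.
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"who.int", "nature.com", "arxiv.org"}  # hypothetical allow-list

def from_trusted_domain(url: str) -> bool:
    host = urlparse(url).netloc.lower()
    # Accept the domain itself or any of its subdomains.
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

documents = [
    {"url": "https://www.who.int/news/item/example", "text": "..."},
    {"url": "https://random-blog.example.com/post", "text": "..."},
]
corpus = [doc for doc in documents if from_trusted_domain(doc["url"])]
print(len(corpus))  # 1 -- only the allow-listed source is kept
```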
3. Conclusion
4. Conflicts of Interest and Contributions
References
1. OpenAI Research. https://openai.com/research.
2. Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. 2018. “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.” arXiv: https://arxiv.org/abs/1810.04805.
3. “An Overview of Bard: An Early Experiment with Generative AI.” https://ai.google/static/documents/google-about-bard.pdf.
4. Touvron, H., Lavril, T., Martinet, X., et al. 2023. “LLaMA: Open and Efficient Foundation Language Models.” arXiv: https://arxiv.org/abs/2302.13971.
5. Le Scao, T., Fan, A., Akiki, C., et al. 2023. “BLOOM: A 176B-Parameter Open-Access Multilingual Language Model.” arXiv: https://arxiv.org/abs/2211.05100.
6. Bloomberg. https://www.bloomberg.com/company/press/bloomberggpt-50-billion-parameter-llm-tuned-finance/.
7. OpenAI Developer Community. https://community.openai.com/t/chatgpt-simple-math-calculation-mistake/62780.
8. Medium. https://medium.com/@zlodeibaal/dont-believe-in-llm-math-b11fc5f12f75.
9. Towards Data Science. https://towardsdatascience.com/overcoming-the-limitations-of-large-language-models-9d4e92ad9823.
10. Hiller, M. 2023. “Why Does ChatGPT Generate Fake References?” Teche. https://teche.mq.edu.au/2023/02/why-does-chatgpt-generate-fake-references/.
11. Jones, J. 2023. “LLMs Aren’t Even as Smart as Dogs, Says Meta’s AI Chief Scientist.” ZDNET. https://www.zdnet.com/article/llms-arent-even-as-smart-as-dogs-says-metas-ai-chief-scientist/.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
