Preprint
Essay

This version is not peer-reviewed.

Artificial Intelligence and the Lab Diamond Analogy: Rethinking Authorship, Effort, and Authenticity in the Age of AI-Assisted Writing

Submitted: 11 March 2026
Posted: 12 March 2026


Abstract
The rapid emergence of artificial intelligence (AI) language models has generated intense debate regarding their appropriate role in scholarly communication. Critics frequently argue that AI-assisted writing undermines intellectual authenticity by bypassing the traditional labor associated with authorship. This commentary proposes an analogy between AI-assisted writing and laboratory-grown diamonds. Both produce artifacts that are materially indistinguishable from their traditional counterparts—classically written prose and mined diamonds—yet provoke cultural discomfort because their provenance differs. By examining this analogy through the lenses of technological history, epistemic responsibility, and evolving definitions of craftsmanship, this paper argues that resistance to AI-assisted writing largely reflects cultural attachment to narratives of effort rather than objective differences in intellectual value. Historical parallels—including the adoption of statistical software, word processors, and digital literature databases—demonstrate that scholarly practices often undergo initial moral panic followed by normalization. AI does not eliminate authorship but relocates the locus of scholarly mastery from mechanical production toward conceptual clarity, judgment, and interpretive accountability. The critical ethical question is therefore not whether AI tools participate in writing, but whether authors retain responsibility for accuracy, reasoning, and intellectual integrity. Understanding this shift may help academic institutions develop policies that promote transparency without conflating technological assistance with intellectual fraud.

Introduction

Artificial intelligence systems capable of generating fluent natural language have rapidly entered academic life. Large language models (LLMs) can assist authors in drafting manuscripts, editing prose, organizing arguments, summarizing literature, and refining grammar. Their increasing presence in scholarly workflows has prompted vigorous debate regarding their implications for academic integrity and authorship.
Some commentators view AI-assisted writing as a profound threat to scholarly authenticity. If a machine participates in drafting text, critics argue, the resulting work may no longer reflect genuine intellectual effort. Universities and journals have responded with policies requiring disclosure of AI assistance or restricting its use in certain contexts [1,2,3].
Yet the cultural discomfort surrounding AI-assisted writing may stem less from objective differences in intellectual quality than from deeper assumptions about the relationship between effort and value. Many academic traditions implicitly equate intellectual legitimacy with the visible labor of writing itself—the slow drafting of sentences, the iterative revision of paragraphs, and the solitary effort traditionally associated with scholarship.
To clarify this tension, a useful analogy can be drawn from a seemingly unrelated domain: diamonds. The emergence of laboratory-grown diamonds initially provoked widespread skepticism among consumers and jewelers. These diamonds are chemically and structurally identical to mined diamonds, yet their artificial origin led many observers to question their authenticity.
This essay argues that AI-assisted writing and laboratory-grown diamonds occupy a similar conceptual space. Both technologies produce outputs indistinguishable from traditional forms while challenging deeply embedded cultural narratives about effort, scarcity, and authenticity. Understanding this analogy may help illuminate the current debate surrounding AI in academic writing.

The Lab Diamond Analogy

Diamonds consist of carbon atoms arranged in a tetrahedral crystal lattice. Laboratory-grown diamonds replicate this structure using high-pressure high-temperature (HPHT) or chemical vapor deposition (CVD) processes. The resulting gemstones possess the same hardness, optical properties, and chemical composition as naturally occurring diamonds.
Despite this equivalence, the introduction of synthetic diamonds generated widespread controversy. Consumers often perceived laboratory diamonds as somehow less “real,” even though gemological analysis revealed no intrinsic difference.
The discomfort arose from provenance rather than substance. A mined diamond carries a narrative of geological formation over millions of years. The gemstone embodies a story of depth, pressure, rarity, and extraction. Laboratory diamonds bypass this narrative entirely.
AI-assisted writing provokes an analogous reaction. A carefully edited AI-assisted manuscript may be linguistically indistinguishable from a traditionally written text. Yet readers sometimes view such work as less authentic because it did not emerge from the familiar narrative of solitary intellectual struggle.
In both cases, cultural expectations about effort and origin shape perceptions of value.

The Cultural Romance of Intellectual Labor

Human societies frequently attach symbolic importance to effort. Objects produced through visible labor often acquire moral significance beyond their functional properties. This phenomenon appears in domains ranging from handcrafted goods to athletic achievement.
Academic culture has long embraced a similar ethos. The archetypal scholar is imagined as a solitary figure working late into the night, gradually transforming ideas into polished prose. Writing becomes not merely a means of communication but a ritual of intellectual struggle.
Technological tools that reduce friction in this process can therefore provoke resistance. Word processors were once criticized for making writing “too easy.” Calculators were accused of eroding mathematical understanding. Statistical software raised concerns that researchers might conduct complex analyses without truly understanding the mathematics involved [4,5].
In retrospect, these anxieties appear overstated. The technologies did not eliminate intellectual rigor; they shifted its locus. Scholars no longer needed to perform arithmetic calculations by hand, but they remained responsible for study design, model selection, and interpretation.
AI-assisted writing may represent a comparable shift. Rather than eliminating intellectual work, it redistributes effort toward higher-level tasks such as conceptual reasoning and critical evaluation.

Democratization of Intellectual Expression

One of the most significant consequences of technological innovation is the democratization of capabilities that were once scarce.
Laboratory-grown diamonds dramatically reduced the cost of high-quality gemstones, making them accessible to a broader population. Similarly, digital technologies have expanded access to knowledge production and communication.
AI-assisted writing tools may extend this trend by helping individuals articulate complex ideas more effectively. Many researchers possess valuable insights but struggle with the mechanics of academic writing, particularly when publishing in a second language. AI systems can assist with grammar, organization, and clarity, allowing authors to focus on conceptual contributions.
Recent studies suggest that AI writing tools may improve productivity and accessibility in scientific communication [6,7]. For early-career researchers and non-native English speakers, such tools may reduce barriers to participation in global scholarship.
From this perspective, AI-assisted writing may function less as a threat to intellectual integrity than as an instrument of epistemic inclusion.

Authenticity as a Social Construct

The debate surrounding AI-assisted writing frequently invokes the concept of authenticity. Critics argue that scholarly texts should reflect the unmediated effort of their authors.
However, authenticity is not an intrinsic property of objects or texts; it is a social judgment shaped by cultural expectations. Diamonds are not inherently valuable because they originate underground. Their value arises from collective agreement about their desirability and symbolic significance.
Similarly, the authenticity of a scholarly work depends not on the mechanics of its production but on the integrity of its ideas. A manuscript is intellectually authentic if its arguments are honestly presented, its sources accurately cited, and its conclusions responsibly interpreted.
AI tools do not eliminate these responsibilities. Authors remain accountable for verifying facts, evaluating evidence, and ensuring the accuracy of references. In this sense, AI functions as an instrument rather than an autonomous author.
Major academic organizations have emphasized this principle. The International Committee of Medical Journal Editors (ICMJE) states that AI tools cannot meet authorship criteria because they cannot assume responsibility for published work [8].
The ethical burden therefore remains firmly with human authors.

The Evolution of Craft

Technological innovation rarely abolishes craftsmanship; it transforms it.
The invention of photography did not eliminate painting but shifted artistic emphasis toward abstraction and interpretation. Digital imaging did not destroy photography but expanded creative possibilities.
Similarly, the adoption of statistical software did not diminish the importance of quantitative reasoning. Instead, it allowed researchers to focus on experimental design and interpretation rather than manual calculation.
AI-assisted writing may produce a comparable transformation. Traditional writing emphasized endurance—the ability to draft and revise large volumes of text. AI-supported workflows shift emphasis toward conceptual clarity, editorial judgment, and synthesis of evidence.
In this emerging model of authorship, the writer’s role becomes more analogous to that of an editor or curator. The intellectual task lies in selecting ideas, evaluating arguments, and shaping narrative coherence rather than generating every sentence manually.
This shift does not eliminate creativity.
Instead, it relocates it.

Risks and Ethical Considerations

Despite its potential benefits, AI-assisted writing introduces several important risks that must be addressed responsibly.

Hallucination and Accuracy

Large language models may generate plausible but incorrect information, a phenomenon often described as “hallucination” [9]. Authors must therefore verify all factual claims and references generated by AI systems.
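The verification step described above can be made concrete. The following Python sketch is a hypothetical helper, not taken from any cited work: it extracts DOI-like strings from an AI-generated reference list so that each identifier can be resolved and checked by hand before submission.

```python
import re

# Hypothetical helper: pull DOI-like strings out of an AI-generated
# reference list so each one can be resolved and verified manually.
# The pattern follows the common "10.prefix/suffix" DOI shape.
DOI_PATTERN = re.compile(r'\b10\.\d{4,9}/[-._;()/:A-Za-z0-9]+')

def extract_dois(references: str) -> list[str]:
    """Return every DOI-like string found in a block of reference text."""
    # Strip a trailing period that usually belongs to the sentence, not the DOI.
    return [d.rstrip('.') for d in DOI_PATTERN.findall(references)]

refs = """
1. Example A, et al. A plausible-looking citation. J Example. 2023. doi:10.1000/demo.2023.001.
2. Fabricated entry with no DOI at all.
"""
print(extract_dois(refs))  # → ['10.1000/demo.2023.001']
```

A reference that yields no DOI, or whose DOI does not resolve to the claimed article, is a candidate hallucination and should be traced to a primary source or removed.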

Bias and Representation

AI models are trained on large corpora of existing text and may reproduce biases present in those datasets [10]. Scholars must remain attentive to potential distortions in language and representation.

Transparency

Many journals now require authors to disclose AI assistance in manuscript preparation [11]. Such transparency allows readers and editors to evaluate the role of AI in the research process.

Intellectual Responsibility

Ultimately, the ethical responsibility for scholarly work rests with human authors. AI systems cannot assume accountability for errors or misinterpretations.
These concerns highlight the importance of responsible integration rather than wholesale rejection of AI technologies.

Lessons from Previous Technological Transitions

History suggests that new intellectual technologies often provoke predictable cycles of anxiety and adaptation.
The printing press, introduced in the fifteenth century, initially raised concerns about the uncontrolled spread of information. Photography was criticized as a mechanical imitation lacking artistic merit. Word processors were accused of eroding the discipline of writing.
In each case, the technology ultimately became normalized within scholarly practice.
The same pattern appears to be unfolding with AI-assisted writing. Early debates focus on questions of authenticity and legitimacy. Over time, institutions develop norms governing responsible use, and the technology becomes integrated into everyday workflows.
The key challenge lies not in resisting technological change but in guiding its ethical implementation.

Conclusion

The analogy between AI-assisted writing and laboratory-grown diamonds offers a useful framework for understanding contemporary debates about authorship and authenticity.
Both technologies produce artifacts indistinguishable from traditional forms while disrupting cultural narratives about effort and origin. The discomfort they provoke reflects a deeper tension between historical ideals of intellectual labor and the realities of technological progress.
Ultimately, the value of scholarly writing does not reside in the difficulty of its production but in the clarity of its ideas and the integrity of its reasoning. AI tools may alter the mechanics of writing, but they do not eliminate the need for judgment, expertise, and ethical responsibility.
Rather than viewing AI as a threat to authorship, the academic community may benefit from recognizing it as part of a long continuum of intellectual tools—from the printing press to statistical software—that reshape how knowledge is produced and communicated.
Like laboratory-grown diamonds, AI-assisted writing challenges us to reconsider what truly constitutes authenticity.
The brilliance of the diamond remains unchanged.
What changes is how we understand its origin.

Glossary

Artificial Intelligence (AI)

Computer systems capable of performing tasks that normally require human intelligence, including language processing and pattern recognition.

Large Language Model (LLM)

A machine learning model trained on large text datasets to generate or analyze natural language.

AI-assisted writing

The use of AI systems to help draft, edit, or refine written text.

Hallucination (AI)

The generation of incorrect or fabricated information by AI language models.

Scholarly authorship

The intellectual responsibility for the content, interpretation, and accuracy of a published work.

Technological democratization

The expansion of access to capabilities previously limited to experts or specialized institutions.

AI Disclosure

Portions of this manuscript were developed with the assistance of a large language model (ChatGPT, OpenAI) to support drafting and editing. The author reviewed, revised, and assumes full responsibility for the content, interpretation, and references.

References

  1. Flanagin, A; Bibbins-Domingo, K; Berkwits, M; Christiansen, SL. Nonhuman “authors” and implications for the integrity of scientific publication and medical knowledge. JAMA. 2023, 329, 637–639.
  2. Salvagno, M; Taccone, FS; Gerli, AG. Can artificial intelligence help for scientific writing? Crit Care. 2023, 27, 75.
  3. Biswas, S. ChatGPT and the future of medical writing. Radiology. 2023, 307, e223312.
  4. Altman, DG. Practical statistics for medical research. BMJ. 1991, 302, 1489–1490.
  5. Ioannidis, JPA. Why most published research findings are false. PLoS Med. 2005, 2, e124.
  6. van Dis, EAM; Bollen, J; Zuidema, W; et al. ChatGPT: five priorities for research. Nature. 2023, 614, 224–226.
  7. Gilson, A; Safranek, CW; Huang, T; et al. How well does ChatGPT perform on medical questions? JMIR Med Educ. 2023, 9, e45312.
  8. International Committee of Medical Journal Editors. Defining the role of authors and contributors. Ann Intern Med. 2010, 153, 261–267.
  9. Ji, Z; Lee, N; Frieske, R; et al. Survey of hallucination in natural language generation. ACM Comput Surv. 2023.
  10. Bender, EM; Gebru, T; McMillan-Major, A; Shmitchell, S. On the dangers of stochastic parrots. FAccT Proceedings. 2021.
  11. Harrer, S. Attention is not all you need: ethical use of large language models in healthcare. EBioMedicine. 2023, 90, 104512.
  12. Kung, TH; Cheatham, M; Medenilla, A; et al. Performance of ChatGPT on USMLE. PLoS Digit Health. 2023, 2, e0000198.
  13. Davenport, T; Kalakota, R. The potential for artificial intelligence in healthcare. Future Healthc J. 2019, 6, 94–98.
  14. Topol, EJ. High-performance medicine: convergence of human and artificial intelligence. Nat Med. 2019, 25, 44–56.
  15. Beam, AL; Kohane, IS. Big data and machine learning in health care. JAMA. 2018, 319, 1317–1318.
  16. Obermeyer, Z; Emanuel, EJ. Predicting the future—big data and clinical medicine. N Engl J Med. 2016, 375, 1216–1219.
  17. Rajkomar, A; Dean, J; Kohane, I. Machine learning in medicine. N Engl J Med. 2019, 380, 1347–1358.
  18. Yu, KH; Beam, AL; Kohane, IS. Artificial intelligence in healthcare. Nat Biomed Eng. 2018, 2, 719–731.
  19. Hosny, A; Parmar, C; Quackenbush, J; Schwartz, LH; Aerts, HJWL. Artificial intelligence in radiology. Nat Rev Cancer. 2018, 18, 500–510.
  20. Kelly, CJ; Karthikesalingam, A; Suleyman, M; Corrado, G; King, D. Key challenges for delivering clinical impact with artificial intelligence. BMC Med. 2019, 17, 195.
  21. London, AJ. Artificial intelligence and black-box medical decisions. Hastings Cent Rep. 2019, 49, 15–21.
  22. Price, WN; Gerke, S; Cohen, IG. Potential liability for physicians using artificial intelligence. JAMA. 2019, 322, 1765–1766.
  23. Kaul, V; Enslin, S; Gross, SA. History of artificial intelligence in medicine. Gastrointest Endosc. 2020, 92, 807–812.
  24. Rajpurkar, P; Chen, E; Banerjee, O; Topol, EJ. AI in health and medicine. Nat Med. 2022, 28, 31–38.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.
Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.
