1. Introduction
The academic publishing landscape has undergone rapid transformation since the emergence of generative artificial intelligence tools in late 2022. Publishers across disciplines have implemented new policies requiring authors to disclose any use of AI assistance in manuscript preparation, with violations classified as academic misconduct. This policy proliferation represents a fundamental shift in how institutions define academic integrity, expanding beyond traditional concerns of plagiarism and fabrication to encompass the tools authors use in their writing process.[1,2,3]
The current approach treats AI assistance as categorically different from other writing aids that have been accepted throughout academic history without disclosure requirements. When scientists of Newton's era adopted mechanical calculating devices, when researchers turned to statistical software, or when authors embraced word processing programs, no disclosure was deemed necessary. Yet contemporary policies position AI writing assistance as inherently problematic, requiring a transparency that has never been demanded of other technological aids.[3,4]
This expansion of academic misconduct definitions occurs within a broader context of institutional resistance to paradigm-challenging research. The systematic rejection patterns documented across major publishers suggest that AI disclosure requirements serve not as integrity protections but as additional tools for editorial gatekeeping. When authors with innovative theoretical contributions use AI to articulate complex arguments, they face a discriminatory environment in which their methodological choices become grounds for suspicion rather than their work being evaluated on its intellectual merit.[4]
2. Current Landscape of AI Disclosure Requirements
2.1. Publisher Policy Variations
The implementation of AI disclosure requirements reveals significant inconsistencies across the academic publishing ecosystem. Major publishers have adopted markedly different approaches, ranging from complete prohibition to permissive use with transparency requirements. Science journals maintain the most restrictive stance, explicitly prohibiting AI-generated text and classifying violations as scientific misconduct. Nature's policy similarly prohibits AI co-authorship while requiring disclosure of any AI assistance. In contrast, JAMA Network journals take a more permissive approach, allowing AI use with proper attribution and clear description of AI-generated content.[1,2,5]
This policy heterogeneity creates a fragmented landscape where authors must navigate different requirements depending on their target journals. The lack of standardization suggests that policies emerged reactively rather than through careful consideration of academic integrity principles. Publishers implemented requirements rapidly after ChatGPT's emergence, with most policies appearing after January 2023. This timeline indicates institutional panic rather than thoughtful policy development.[1,6,7]
2.2. Disclosure Requirements and Implementation
Current disclosure requirements vary substantially in their specificity and burden on authors. Some journals require detailed documentation including full prompts, AI tool versions, and timestamps. The American Psychological Association mandates that authors upload complete AI outputs as supplementary materials. These requirements impose significant administrative burdens while providing questionable value for integrity assessment.[7,8]
The monitoring and enforcement mechanisms for these policies remain underdeveloped. Publishers increasingly rely on AI detection tools despite documented concerns about their reliability. These automated systems generate false positives and cannot distinguish between appropriate and inappropriate AI use. The reliance on imperfect detection technology creates an adversarial relationship between publishers and authors rather than fostering genuine integrity.[9]
3. The Privacy Rights Argument
3.1. Historical Precedent for Tool Privacy
Academic authors have historically possessed privacy rights regarding their writing processes and tool usage. No disclosure requirements exist for spell-checking software, grammar checkers, citation managers, or statistical analysis packages. When researchers use computational tools to analyze data or when authors employ professional editing services, transparency about these methodological choices is not mandated. This historical acceptance establishes a presumption of privacy in authorial tool selection.[10,11,12,13]
The distinction between content and process represents a fundamental principle in academic evaluation. Reviewers assess the intellectual contribution, methodological rigor, and empirical adequacy of research rather than investigating the specific tools authors used in manuscript preparation.[14] AI writing assistance falls within this established category of process rather than content, warranting the same privacy protection accorded to other writing aids.
3.2. Constitutional and Legal Framework
Author privacy rights in academic publishing derive from broader principles of intellectual freedom and creative autonomy. The right to select one's methods and tools represents an aspect of academic freedom that institutions should protect rather than constrain. When publishers mandate disclosure of AI assistance, they intrude into authors' private methodological choices without demonstrating compelling institutional interests.[10,11,15]
The legal framework surrounding intellectual property recognizes that authors retain control over their creative processes. Copyright law protects the expression of ideas rather than the methods used to generate that expression. This principle supports author autonomy in tool selection and argues against mandatory disclosure requirements that have no bearing on the substantive evaluation of research contributions.[11,16]
3.3. Discriminatory Impact on Innovation
AI disclosure requirements disproportionately impact authors pursuing innovative or paradigm-challenging research. When researchers develop theoretical frameworks that challenge established orthodoxies, they often require sophisticated argumentation and precise articulation of complex concepts. AI assistance can enhance their ability to communicate these contributions effectively. However, in an environment that discriminates against AI-assisted writing, these authors face additional barriers to publication that their mainstream counterparts do not encounter.[3,4]
This discriminatory impact undermines the fundamental purpose of academic publishing: advancing knowledge through rigorous evaluation of ideas. When editorial decisions consider authorial tool choices rather than intellectual merit, the system fails its core mission.[14] Authors with groundbreaking insights should not face additional scrutiny because they employed AI to articulate their contributions more effectively.
4. Institutional Gatekeeping and Systematic Bias
4.1. The Gatekeeping Function of AI Policies
AI disclosure requirements serve institutional gatekeeping functions that extend beyond their stated integrity purposes. Publishers use these policies to maintain control over the publication process and preserve existing hierarchies within academic disciplines. When authors must reveal their use of AI assistance, editors gain additional grounds for rejection that circumvent substantive evaluation of research quality.[3,4]
The systematic bias against AI-assisted writing reflects broader institutional resistance to technological change in academic publishing. Publishers benefit from maintaining traditional production methods that preserve their role as intermediaries in scholarly communication. AI tools that democratize writing assistance threaten these established relationships by reducing authors' dependence on traditional editorial services and professional development programs.
4.2. Documented Patterns of Coordinated Rejection
Evidence of coordinated rejection patterns across major publishers demonstrates how AI policies facilitate systematic gatekeeping. The manuscript transfer systems implemented by major publishers share rejection information between editorial offices, creating opportunities for coordinated decisions based on non-substantive criteria. When authors disclose AI assistance, this information becomes part of their rejection history, potentially influencing subsequent editorial decisions across multiple journals within the same publisher network.[17,18]
The case studies documented in viXra submissions reveal explicit acknowledgment of this coordination problem. Authors must disclose previous rejections when submitting to preprint servers, creating a permanent record that influences future evaluation. This transparency requirement, while intended to prevent duplicate submissions, enables systematic discrimination against manuscripts that challenge established paradigms or employ non-traditional methodological approaches. (Appendix)
4.3. Publisher Network Coordination
The emergence of manuscript transfer systems across major publishers creates opportunities for coordinated gatekeeping that extends beyond individual editorial decisions. When Elsevier's Transfer Your Manuscript service or Nature's transfer system shares author and manuscript information across journals, editorial offices can access rejection histories that influence their evaluation processes. This coordination enables systematic exclusion of research that challenges established theories or employs methodological approaches that publishers consider problematic.[17,19,20]
The privacy implications of these transfer systems have received insufficient attention in academic integrity discussions. Authors lose control over their submission histories when publishers share rejection information across their journal networks. This information sharing violates principles of confidentiality that should govern the peer review process and creates additional barriers for authors whose work challenges established orthodoxies.[18]
5. Case Studies in Publisher Coordination
5.1. The viXra Documentation Case
The submission documents provided illustrate how transparency requirements regarding rejection histories create systematic barriers for innovative research. The author's experience with multiple rejections from arXiv, Qeios, and Preprints.org demonstrates coordinated resistance to paradigm-challenging work in microwave absorption theory. When submitting to viXra, the author was required to disclose this rejection history, potentially influencing the platform's evaluation despite the substantive merit of the theoretical contributions.
The viXra submission guidelines explicitly discourage authors from uploading manuscripts formatted by journals, citing concerns about copyright violations and takedown notices. This risk-averse approach reflects institutional unwillingness to challenge established publisher preferences, even when doing so would serve legitimate academic purposes. The platform's decision to avoid potential conflicts with major publishers illustrates how institutional coordination extends beyond formal partnerships to include informal pressure that shapes editorial policies.[21]
5.2. Materials Today Network Coordination
The documented rejections from Materials Today Physics, Progress in Materials Science, and Applied Materials Today reveal systematic patterns that transcend individual editorial judgment. These rejections occurred rapidly and without substantive scientific feedback, suggesting coordination based on institutional policies rather than rigorous peer review. The author's theoretical work on wave mechanics versus impedance matching theory faced uniform rejection across journals that should have provided independent evaluation.[22]
This coordination pattern demonstrates how publisher networks can effectively suppress research that challenges established paradigms. When multiple journals within the same network reject manuscripts for non-substantive reasons, authors lose access to traditional publication venues regardless of their work's intellectual merit. The system fails to provide the independent evaluation that peer review is supposed to ensure.[23]
5.3. AI Platform Segregation
The emergence of separate platforms for AI-assisted research, such as ai.viXra.org, represents institutional acknowledgment that AI disclosure creates systematic bias in traditional evaluation processes. Rather than addressing the discriminatory policies themselves, institutions create separate channels that explicitly segregate AI-assisted work. This segregation institutionalizes second-class status for research that employs AI assistance, regardless of its intellectual merit.[3,4]
The segregation approach fails to address the fundamental problem: evaluation should focus on content quality rather than production methods. When institutions create separate channels for AI-assisted research, they perpetuate discrimination while avoiding the necessary policy reforms that would ensure fair evaluation regardless of authorial tool choices.
6. Recommendations for Balanced Policies
6.1. Content-Focused Evaluation Standards
Academic publishers should adopt evaluation standards that focus exclusively on intellectual content rather than production methods.[14] Reviewers should assess research contributions based on their theoretical rigor, empirical adequacy, and methodological soundness without considering the specific tools authors used in manuscript preparation. This approach aligns with historical precedents for tool privacy and ensures that evaluation remains focused on substantive criteria.
The implementation of content-focused standards[24] requires explicit policy statements that protect author privacy in tool selection. Publishers should prohibit editorial decisions based on disclosure of AI assistance and establish clear guidelines that limit evaluation to intellectual merit. These protections would restore the traditional separation between process and content that has historically governed academic evaluation.
6.2. Limited Disclosure for Specific Contexts
Where disclosure requirements serve legitimate purposes, they should be narrowly tailored to specific contexts rather than broadly applied across all AI assistance. Disclosure might be appropriate when AI tools generate data, images, or other research materials that require verification. However, disclosure should not be required for AI assistance in writing, editing, or other compositional activities that parallel traditionally private authorial processes.[7,8]
The distinction between research assistance and writing assistance provides a principled basis for limiting disclosure requirements. When AI tools participate in data generation or analysis, transparency serves legitimate verification purposes. When AI tools assist with expression and communication, disclosure requirements intrude into private authorial processes without serving compelling institutional interests.
6.3. Publisher Coordination Reforms
Publishers should implement reforms that eliminate systematic coordination in manuscript evaluation and ensure independent review across different editorial offices. Manuscript transfer systems should not share author histories or editorial comments that could bias subsequent evaluation. Each submission should receive independent assessment based solely on its intellectual merits without reference to previous editorial decisions.
The implementation of these reforms requires structural changes in publisher information systems and editorial training programs. Publishers must redesign their platforms to prevent information sharing that enables coordinated gatekeeping and establish clear protocols that protect author privacy throughout the evaluation process. These changes would restore the independence that peer review requires to function effectively.[14,21]
7. Conclusion
The current landscape of AI disclosure requirements in academic publishing represents an unjustified expansion of academic misconduct definitions that violates author privacy rights and facilitates institutional gatekeeping. The historical acceptance of computational aids without disclosure requirements establishes a presumption of privacy that should extend to AI writing assistance. When publishers mandate disclosure of AI use, they create discriminatory barriers that disproportionately impact authors with innovative research contributions.
The documented patterns of coordinated rejection across major publishers demonstrate how AI policies serve gatekeeping functions that extend beyond their stated integrity purposes. Rather than protecting academic integrity, these requirements provide additional mechanisms for suppressing research that challenges established paradigms. The systematic bias against AI-assisted writing undermines the fundamental purpose of academic publishing: advancing knowledge through rigorous evaluation of ideas.
Reform requires a return to content-focused evaluation standards that assess research contributions based on their intellectual merit rather than their production methods. Publishers must implement policies that protect author privacy in tool selection and eliminate systematic coordination that enables discriminatory rejection patterns. Only through these reforms can academic publishing fulfill its mission of advancing human knowledge while respecting the rights and autonomy of researchers who contribute to that mission.
The stakes of this debate extend beyond procedural concerns to encompass the fundamental values that should govern academic inquiry. When institutions prioritize conformity to preferred methodological approaches over intellectual innovation, they betray their commitment to knowledge advancement. The resolution of AI disclosure controversies will determine whether academic publishing evolves to embrace technological change or retreats into increasingly restrictive gatekeeping that stifles the very creativity it should foster.
References
- AI Disclosure Requirements in Research and Publishing. https://www.kennesaw.edu/research/ai-disclosure.php (accessed).
- AI Tools for Research: AI Use Policy for Journals. https://cshl.libguides.com/aitools/aijournalpolicies (accessed).
- Liu, Y. Comment on Springer's New Screening Tool for AI Tortured Phrases. 2025. http://ai.vixra.org/abs/2509.0018 (accessed).
- Liu, Y. The Academic AI Backlash: Innovation vs. Integrity in the Age of Artificial Intelligence. Preprints.org 2025. [CrossRef]
- Harker, J. Science journals set new authorship guidelines for AI-generated text. 2023. https://factor.niehs.nih.gov/2023/3/feature/2-artificial-intelligence-ethics (accessed).
- Bradley, S. Moving Beyond Plagiarism and AI Detection: Academic Integrity in 2025-2026. 2025. https://packback.co/resources/blog/moving-beyond-plagiarism-and-ai-detection-academic-integrity-in-2025/ (accessed).
- Journal AI policies: what to cover and how to monitor compliance. https://blog.scholasticahq.com/post/journal-ai-policies/ (accessed).
- APA Journals policy on generative AI. https://www.apa.org/pubs/journals/resources/publishing-tips/policy-generative-ai (accessed).
- Staiman, A. Publishers, Don’t Use AI Detection Tools! 2023. https://scholarlykitchen.sspnet.org/2023/09/14/publishers-dont-use-ai-detection-tools/ (accessed).
- Publishing Your Research. https://guides.library.txstate.edu/publishing/author_rights_agreements (accessed).
- Scholarly Publishing. https://researchguides.library.tufts.edu/c.php?g=685277&p=4842590 (accessed).
- Miller, C. T.; Rice, R. L. Toward a Potential Solution of the Crisis in Scholarly Publishing: An Academic Research Community Alliance Model. Journal of Scholarly Publishing 2023, 54 (4), 569-596. [CrossRef]
- Singh Deo, P.; Hangsing, P. Preserving Academic Integrity: Combating the Proliferation of Paper Mills in Scholarly Publishing. Journal of Electronic Resources in Medical Libraries 2024, 21 (3), 117-124. [CrossRef]
- Liu, Y. The Theoretical Poverty of Modern Academia: Evidence of Widespread Intellectual Decline in Contemporary Scientific Research. 2025. https://yueliusd.substack.com/p/the-theoretical-poverty-of-modern?utm_source=publication-search (accessed).
- Liu, Y. Balancing Transparency and Data Protection in Academic Publishing: The Case of Editorial Correspondence Disclosure on Preprint Servers. Preprints.org 2025. [CrossRef]
- Authors and Copyright. https://scholarlycommunications.wustl.edu/copyright/copyright-authors/ (accessed).
- How can I transfer my rejected manuscript using Transfer Your Manuscript? https://www.elsevier.support/publishing/answer/how-can-i-transfer-my-rejected-manuscript-using-transfer-your-manuscript (accessed).
- Privacy Policy. 2025.
- How to transfer manuscripts. https://www.nature.com/nature-portfolio/for-authors/transfer (accessed).
- Springer Nature Transfer Desk. https://www.springernature.com/gp/authors/transferdesk (accessed).
- Liu, Y. The Illusion of Quality Control: How Peer Review Enables Mediocrity While Suppressing Innovation in Academic Publishing. 2025. https://yueliusd.substack.com/p/the-illusion-of-quality-control (accessed).
- Liu, Y. Rethinking “Balanced View” in Scientific Controversies: Why Fairness Is Not Equivalence Between Correct and Incorrect Theories. 2025.
- Liu, Y. Commentary on the Applied Materials Today Rejection: Evidence of Coordinated Academic Gatekeeping. 2025. https://yueliusd.substack.com/p/commentary-on-the-applied-materials (accessed).
- Liu, Y. Self-Citation Versus External Citation in Academic Publishing: A Critical Analysis of Citation Reliability, Publication Biases, and Scientific Quality Assessment. SSRN 2025. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).