Preprint Article
This version is not peer-reviewed.

Rethinking Academic Publishing: A Call for Inclusive, Transparent, and Sustainable Reforms

Submitted: 22 May 2025
Posted: 26 May 2025


Abstract
The current academic publishing system is increasingly defined by exclusionary gatekeeping, prestige-driven metrics, and restricted access, all of which undermine equity, innovation, and scholarly integrity. This position paper critiques these systemic issues and proposes a reform framework grounded in transparency, inclusivity, and sustainability. Key recommendations include mentorship-centered editorial models, post-publication peer review, diversified evaluation criteria, and the development of community-led open access platforms. The paper also anticipates common concerns, such as fears of declining quality and resistance from entrenched stakeholders, and offers counterarguments grounded in evidence and practice. The paper concludes with a call to action for researchers, editors, academic institutions, funding agencies, and policymakers to collaboratively transform the publishing landscape, ensuring that the production and dissemination of knowledge remain a public good.

1. Position Overview: Toward a More Inclusive and Authentic Publishing Model

For decades, the academic publishing system has served as a cornerstone for the validation and dissemination of research. Despite its essential role, this system has become increasingly characterized by inefficiencies and inequities rooted in subjective peer-review processes, citation-based journal rankings, and paywalls that render knowledge inaccessible. One attestation to this problem is a recent antitrust lawsuit filed against prominent publishers, including Elsevier, Springer Nature, Taylor & Francis, Sage, Wiley, and Wolters Kluwer, alleging that they diverted billions of dollars from scientific research into their own profits through unpaid academic labor, restrictions that limit manuscript submission to one journal at a time, and the appropriation of researchers' intellectual property upon manuscript acceptance without compensation [1]. Researchers encounter significant pressure to conform to the rigid demands of high-prestige outlets, often at the expense of pursuing intellectually bold and unconventional inquiries [2].
This pressure manifests in various ways, from the prioritization of "safe" research topics to the undervaluation of niche but meaningful work [3]. Scholars often face the dilemma of choosing between producing research that aligns with the priorities of elite journals and pursuing less mainstream but potentially transformative ideas at the risk of rejection. This culture of conformity not only undermines the diversity of academic thought but also perpetuates systemic biases that privilege certain disciplines, regions, and methodologies [2,4].
Furthermore, the overreliance on metrics such as the journal impact factor, a quantitative measure of the average number of citations received in a given year by the articles a journal published over the preceding two years, exacerbates these issues [5]. An example of such practices is the tendency for researchers to prioritize publication in journals with high impact factors, often at the expense of the quality and originality of their work [5]. This overemphasis on impact factors can lead to a narrow focus on topics that are likely to generate high citation counts, thereby stifling innovation and discouraging risk-taking in research [6]. The metric-driven culture contributes to a homogenization of academic output, where only certain types of research are deemed valuable, further entrenching existing hierarchies within academia.
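For reference, the conventional two-year impact factor of a journal for year Y can be written as

    \mathrm{JIF}_Y = \frac{C_{Y-1} + C_{Y-2}}{N_{Y-1} + N_{Y-2}}

where C_{Y-k} is the number of citations received in year Y by items the journal published in year Y-k, and N_{Y-k} is the number of citable items it published in year Y-k. A journal whose 2023 and 2024 articles collected 600 citations in 2025 across 300 citable items would thus report a 2025 impact factor of 2.0.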
Additionally, academic paywalls, whether in traditional closed-access models that burden the reader or open-access models that place the financial burden on the author, worsen the issue by restricting access to knowledge. Traditional subscription-based journals create barriers for readers, especially those from institutions with limited funding, who cannot afford the high costs of accessing articles [7]. On the other hand, open-access models often require authors to pay substantial publication fees, which can be prohibitive for researchers without adequate funding [8,9]. Both models thus perpetuate inequities in knowledge dissemination, limiting the reach and impact of research, and creating a divide between those who can afford to participate fully in the academic community and those who cannot.
This position paper, a document that argues a position on a subject supported by researched evidence, interrogates the structural shortcomings of the current academic publishing model and outlines a comprehensive vision for reform. By addressing entrenched gatekeeping practices, inequities in access, and the misplaced emphasis on prestige, this essay initiates a conversation about a research culture that values intellectual diversity, equity, and meaningful collaboration. The ultimate goal is to empower researchers to prioritize authentic inquiry over performative pursuits of narrowly defined academic success. Furthermore, this paper adds to the ongoing conversation advanced by Batterbury et al. [10] and Pia et al. [11], which urges academics to critically examine the inequalities embedded in their scholarly practices and publishing labor. The dialogue prompts vital questions such as: Who truly benefits from these systems? And who is systematically excluded or silenced by them?

2. Critiques of the Current System

2.1. Gatekeeping and Editorial Subjectivity

Academic gatekeeping, referred to in this paper as the role and process of journal editors, perpetuates systemic issues within academic publishing. Journal editors are responsible for overseeing the editorial process of scholarly journals, with primary roles including peer review coordination and research selection [12]. Given these responsibilities, editors wield considerable authority over the journals they manage. One manifestation of this authority is the ability to desk reject, which involves rejecting manuscripts without peer review [13,14]. Proponents of desk rejection argue that it efficiently conserves reviewer time and goodwill, ensuring that only papers likely to meet the journal’s standards undergo peer review [13]. As a gatekeeping mechanism, it helps maintain high standards, which are essential for the credibility and prestige of scientific outlets in a competitive academic environment [15]. Desk rejection also improves operational efficiency by reducing review backlogs through early elimination of unsuitable manuscripts [13]. Moreover, editors, drawing on their broader perspective of the journal’s aims and standards, are often better positioned than individual reviewers to assess whether a submission is a good fit.
However, the decision to desk-reject a manuscript can be influenced by editor bias or subjective criteria, rather than a fair assessment of the manuscript's quality [14]. Editors may unconsciously favor certain topics, methodologies (e.g., quantitative vs. qualitative), paper types (e.g., position paper vs. empirical paper), or authors based on personal preferences or familiarity, leading to the exclusion of innovative or unconventional work. This phenomenon may be a case of confirmation bias, where editors select works that align with their worldview of "scientific merit" or "novelty" instead of considering the actual contribution of the work [16,17]. Furthermore, the argument that submissions are "not up to journal standards" is flawed, as rigid standards risk excluding the valuable contributions a paper may offer after refinement. For example, replication studies, which may not be perceived as novel, strengthen the existing foundation of certain topics and are valuable as building blocks for meta-analyses [18,19]. Another example is studies with negative results, which are underrepresented due to positive publication bias, whereby papers with positive or successful results are deemed more interesting and generate more citations [20]. Yet negative-results papers also offer valuable lessons [21]. To foster a more inclusive, innovative, and fair publishing process, journals should prioritize collaboration and openness to manuscript submissions over narrow editorial gatekeeping.
Arguments supporting desk rejection for efficiency overlook the deeper purpose of academia: fostering a culture of continual learning and inclusive knowledge dissemination [22]. In fact, the entrenched notion of "prestigious" journals could exacerbate these issues by acting as bottlenecks in the dissemination of knowledge [23]. While not all research is initially of equal clarity or impact, scholarly work can often reach comparable standards through rigorous peer review and revision. The hierarchical structure of academic publishing implies that some findings or implications are inherently more valuable than others, which undermines the diverse and cumulative nature of scientific inquiry [24,25]. Every study, regardless of its perceived novelty, outcome, or scope, has the potential to contribute meaningfully to the academic discourse [23,25]. The competitive and selective practices upheld by prestigious journals, therefore, may inadvertently slow scientific progress by privileging exclusivity over inclusivity and iterative development [23].
Academic journals and their editors should not merely be arbiters of what is publishable at a given moment; they should also serve as mentors and enablers for scholars at all stages of their careers. By embracing this role, journals can truly embody the spirit of academia and ensure that no voice is excluded from the conversation. Cross et al. [22] highlight the benefits of mentorship in academia, emphasizing how it supports scholars in overcoming challenges and advancing their careers. Similarly, Bajracharya et al. [26] posit that the value of mentorship lies in building expertise and sustaining capacity in academic fields. These examples underscore the importance of academic journals fostering a culture of collaboration and support, ultimately enriching the academic community and advancing knowledge.

2.2. Distorted Academic Priorities from Prestige-Driven Metrics

Although the impact factor has its merits, such as serving as an indicator of research exposure, its overemphasis distorts academic priorities. For instance, McKiernan et al. [27] found that 40% of research-focused universities and 18% of master's-level institutions explicitly mention the journal impact factor in their guidelines for review, promotion, and tenure. Furthermore, 87% of these institutions endorse its application in at least one component of the evaluation process. In certain countries and fields, journals with an impact factor below a specified cut-off point, such as < 5.0, are considered of no value, and publications in them may be read as poor attempts on the CVs of early-career researchers [28]. This criterion encourages early-career researchers to "play the game" by aiming for high-impact-factor journals, even if it means repeatedly submitting and facing rejection, which can shift researchers' focus away from the meaningful dissemination of intended knowledge toward the pursuit of metrics [28,29]. Ironically, this is not how the metric was intended to be used when it was originally created. In fact, the index was initially developed by Garfield [30] to aid researchers in locating relevant literature and to facilitate retrieval and relationship mapping for meta-level research such as systematic reviews.
Over time, impact factor, journal quartiles, and citation counts, each of which feeds into the calculation of the others, have come to dominate evaluations of academic success, thereby shaping researchers' behaviors in problematic ways [5,30,31]. As Brembs et al. [32] found, publications in high-impact journals are actually associated with higher rates of retraction, suggesting that the intense pressure to publish in these venues may undermine research integrity and quality. This reliance on aggregated indicators represents an ecological fallacy: it assumes that the merit of individual articles can be accurately inferred from journal-level metrics (first order) or quartile rankings (second order), a flawed logic that distorts evaluation and decision-making processes at multiple levels [31].
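The fallacy is easy to demonstrate. The following minimal simulation (a stylized sketch, not data from any cited study) draws per-article citation counts from a right-skewed distribution, as empirical citation distributions tend to be; the journal-level mean that an impact-factor-style metric reports then sits well above what a typical article in that journal actually receives:

    import random
    import statistics

    random.seed(42)

    # Stylized model: within-journal citation counts are heavily right-skewed,
    # so we draw them from a log-normal distribution (parameters are arbitrary).
    citations = [round(random.lognormvariate(1.0, 1.2)) for _ in range(500)]

    journal_mean = statistics.mean(citations)  # what an impact-factor-style average reports
    typical = statistics.median(citations)     # what a typical article actually receives
    below = sum(c < journal_mean for c in citations) / len(citations)

    print(f"journal-level mean: {journal_mean:.1f}")
    print(f"median article:     {typical}")
    print(f"articles below the journal mean: {below:.0%}")

In runs of this sketch, roughly seven in ten simulated articles fall below the journal's own mean, illustrating why inferring the merit of an individual article from a journal-level average is unreliable.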
From another angle, this practice can also be viewed as a reductionist fallacy, a flawed logic that oversimplifies complex phenomena by reducing them to a single or overly simplistic metric [33,34]. In the context of academic research, it distorts research evaluation and decision-making by assuming that a multifaceted body of scholarly work can be fully represented by metrics such as impact factor, citation count, or journal quartiles [31,35]. For example, research focused on improving local educational outcomes for underprivileged students may have significant societal impact but is often undervalued if published in lower-impact journals. In contrast, studies on advanced educational theories published in top journals are typically viewed more favorably, despite being less directly relevant to immediate community needs. While both studies contribute valuable insights, the current academic system tends to prioritize the visibility and prestige conferred by high-impact journals. Consequently, researchers may prioritize work that is more likely to be cited or published in prestigious outlets by focusing on trending topics over research that addresses more substantive, but less "publishable" scientific or societal needs.
This phenomenon is compounded by the valorization of high-impact journals, which frequently demand concise, polished narratives that risk oversimplifying complex findings. Consequently, these metrics could foster a culture where prestige and visibility take precedence over substantive contributions, discouraging in-depth investigations or replication studies that are foundational to scientific progress [36]. This misalignment between evaluation criteria and genuine contribution fosters a hypercompetitive environment, where style often eclipses substance. The reliance on metrics has created what Smaldino and McElreath [37] term a "natural selection of bad science," where methodological quality is often sacrificed for publishability. This dynamic not only discourages innovation but also marginalizes critical research areas that deviate from mainstream trends [16]. For example, groundbreaking studies in climate science, social equity, or indigenous epistemologies may struggle to find a platform in high-impact journals due to their perceived lack of generalizability or marketability. Instead, journals often prioritize research that aligns with popular theoretical frameworks or produces results with a high likelihood of generating citations, regardless of its societal relevance or methodological rigor.
The consequences of these trends extend beyond individual researchers. Institutions and funding bodies frequently rely on journal rankings and citation metrics to assess the impact and quality of scholars' work, further entrenching these problematic norms. For example, the Royal Golden Jubilee (RGJ) Ph.D. Programme [38] from Thailand requires scholars to publish in international journals indexed in the Institute for Scientific Information Web of Science database, which typically includes quartile 1 and 2 journals. This pressure can lead to academic malpractice such as p-hacking [39] and selective reporting [40]. While intended as a proxy for quality in modern academic evaluation, these metrics can also incentivize practices that prioritize quantity over substance, including salami-slicing (fragmenting research into smaller publishable units) and the pursuit of sensational findings at the expense of replicable and rigorous science [37]. As a result, the current system privileges the visibility of research over its actual contribution to knowledge.

2.3. Barriers and Equity Issues in Research Accessibility

The academic publishing industry has long been dominated by a handful of powerful entities, leading to systemic barriers that disproportionately affect marginalized scholars and institutions [41]. Commercial publishers, such as Elsevier and Springer, have historically profited from subscription-based models that restrict access to publicly funded research [42]. Specifically, these models have created a significant financial burden on academic libraries and researchers, thereby limiting the dissemination of knowledge [41].
The commercialization of academic publishing has led to what Larivière et al. [43] describe as an "oligopoly" of commercial publishers, where they control over 50% of published research output. This concentration of power has resulted in an increase in subscription costs, restricted access to knowledge, and the establishment of problematic evaluation metrics associated with the journals themselves [44]. For example, the high costs associated with accessing research articles have forced many institutions to cut back on journal subscriptions, thereby limiting the resources available to their researchers [45].
The emergence of open-access publishing was intended to disrupt these barriers, yet many open-access models rely on article processing charges (APCs), which can be prohibitively expensive for researchers from underfunded institutions or low-income countries. This high-cost issue is currently allowed to persist in a largely unregulated scholarly publishing market [42]. With prices estimated to have outpaced inflation by 250% over the past thirty years, the scope of access to the scholarly literature will slowly but surely diminish as fewer organizations are able to pay such high costs [46].
Typical charges are approximately $1,350 at MDPI [47] and $1,595 at PLOS [48]. Paywalls and high publication fees exacerbate global disparities in access to knowledge. Researchers and institutions from underfunded regions face significant barriers to participation in academic discourse, hindering equitable contributions to global scholarship [49]. This problem is evident throughout the modern publishing ecosystem.
The profit-driven "pay-to-publish/pay-to-read" model prioritizes financial gain over public good, undermining the societal impact of research by restricting accessibility to those who need it most. Sci-Hub, a shadow library, epitomizes how people circumvent this problem and restore near-universal accessibility through legally gray routes [50]. On one hand, the use of shadow libraries may be seen as damaging to citation indexing because it rests on piracy, but it also speaks to the flaws of the subscription-based publishing system [51].
Unsurprisingly, Sci-Hub was found to provide a wider range of article coverage than some universities in the United States [52]. In fact, articles that are downloadable from Sci-Hub are more likely to be cited than articles that are not available from the site [53]. While these alternatives provide temporary relief, they do not address the root cause of the issue: the restrictive practices of commercial publishers. By relying on such workarounds, the academic community inadvertently perpetuates the existing problems rather than challenging the systemic barriers by prioritizing public good over profit and prestige.

3. Strategies for Reformation

3.1. Mentorship-Oriented Editorial Practice and Post-Publication Peer Review Model

One promising strategy to reform gatekeeping practices in academic publishing is the implementation of robust mentorship programs. Journals can establish formal initiatives in which editorial teams actively guide early-career researchers through the publication process. An example of such an approach is the Canadian Journal for New Scholars in Education [54], where authors are required to work with a review mentor to bring the manuscript to publishable quality, fostering the learning process through mentorship. By providing comprehensive feedback and support, editors can help authors refine their work prior to publication, thereby ensuring that quality is achieved through iterative improvement rather than being used as a strict pre-publication barrier.
This approach reinforces the view that academic publishing should be a dynamic learning process, where every submission, regardless of its initial state, is treated as an opportunity for growth [55]. Furthermore, mentorship programs not only demystify the publication process but also help early-career researchers develop their scholarly voice, contributing to a more inclusive and supportive research environment [56]. In fact, this practice helps advance epistemic justice, whereby different ways of knowing are acknowledged and respected, by rethinking who gets to decide what counts as valid knowledge, who controls the publishing process, and what values shape how research is shared [10]. By shifting the focus from rigid gatekeeping to nurturing scholarly potential, journals can become true enablers of academic progress.
Complementing mentorship, the post-publication peer review model marks a significant shift from traditional pre-publication evaluation. In this approach, exemplified by F1000Research, submissions are immediately viewable and citable while undergoing peer review [57]. Unlike preprints, which do not guarantee an ongoing review process, this model ensures active assessment by experts, with authors playing a proactive role in identifying and inviting qualified reviewers from relevant fields [57].
The editorial team, rather than serving as the ultimate arbiter of quality through pre-publication rejection, assumes a supportive role that includes verifying reviewer credentials, facilitating communication between authors and reviewers, and ensuring that every critical aspect of the article undergoes thorough evaluation. Manuscripts that do not receive approval in the peer review process can be revised and resubmitted as a new version. Figure 1 delineates the complete post-publication peer review process. This approach is especially recommended in specialized research fields where the notion of a truly blind peer review is increasingly unrealistic.
In small, interconnected academic communities, anonymity is often more theoretical than practical. The open-review process may encourage greater accountability in the reviewing process [58,59]. In this system, all versions of an article, as well as the complete history of peer review reports, are permanently published alongside the final version. This transparency allows the scholarly community to witness the evolution of research, acknowledging that the notion of truth or knowledge is inherently dynamic and subject to refinement.
The post-publication model offers several benefits. First, it enables research to evolve continuously, as subsequent reviews and revisions can address limitations that were not apparent at the time of initial publication. Second, by making all review reports publicly available, the process becomes more accountable and less prone to the subjective biases that can pervade traditional pre-publication review systems.
Third, by shifting responsibility for identifying expert reviewers to the authors, the model encourages a more engaged and collaborative review process while mitigating the workload that journal editors have in identifying peer reviewers. This not only democratizes the evaluation of scholarly work but also alleviates a major operational bottleneck in traditional publishing, which is editors’ recurring difficulty in securing reviewers, thereby improving the efficiency of journal workflows. Finally, researchers can publish their work as long as they are proactive in engaging with feedback and committed to improving the quality of their scholarship. This model affirms that academic research is an evolving conversation rather than a static product.
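The workflow described above can be summarized in a minimal sketch (a deliberate simplification: the verdict labels loosely follow F1000Research's approved / approved-with-reservations / not-approved outcomes, and every other name is hypothetical):

    from dataclasses import dataclass, field
    from enum import Enum

    class Verdict(Enum):
        APPROVED = "approved"
        RESERVATIONS = "approved with reservations"
        NOT_APPROVED = "not approved"

    @dataclass
    class Review:
        reviewer: str          # reviewer identity is public, not anonymous
        verdict: Verdict
        report: str            # report is published alongside the article

    @dataclass
    class Version:
        number: int
        text: str
        reviews: list[Review] = field(default_factory=list)

    @dataclass
    class Article:
        title: str
        versions: list[Version] = field(default_factory=list)

        def submit(self, text: str) -> Version:
            # Each version is immediately visible and citable;
            # review happens after, not before, publication.
            version = Version(number=len(self.versions) + 1, text=text)
            self.versions.append(version)
            return version

        def history(self) -> list[tuple[int, list[str]]]:
            # All versions and all review reports remain permanently public.
            return [(v.number, [r.verdict.value for r in v.reviews])
                    for v in self.versions]

    article = Article("A hypothetical study")
    v1 = article.submit("initial manuscript")
    v1.reviews.append(Review("Reviewer A", Verdict.RESERVATIONS, "clarify methods"))
    article.submit("revised manuscript addressing Reviewer A")
    print(article.history())  # [(1, ['approved with reservations']), (2, [])]

The design choice to append versions rather than overwrite them mirrors the transparency argument above: nothing is retracted from view, and the article's evolution is itself part of the scholarly record.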
Together, these strategies, mentorship programs and a post-publication peer review model, offer a transformative vision for addressing gatekeeping and subjectivity in academic publishing. By promoting an inclusive environment that values iterative learning and transparency, journals can fulfill their deeper role as mentors and facilitators of scholarly discourse. This model supports the broader mission of advancing knowledge in a manner that is both rigorous and open to continuous improvement.

3.2. Toward Meaningful Research Evaluation

The current reliance on prestige-driven metrics, such as journal impact factors, journal quartiles, and citation counts, has significantly distorted academic priorities by incentivizing research that maximizes short-term visibility rather than advancing rigorous, innovative scholarship. Such prestige does not directly translate into the quality of the published studies [24]. Reforming this metric-centric paradigm requires strategies that shift the focus from superficial indicators of prestige to a more comprehensive, meaningful evaluation of scholarly contributions. Rather than relying on a handful of metrics to determine the worth of research, evaluators could diversify the metrics and indicators used to examine studies on several aspects where appropriate.
Academic institutions, funding agencies, and journals could broaden the criteria by adopting evaluation frameworks that incorporate a diverse array of quantitative and qualitative measures. In addition to traditional citation counts, alternative metrics (altmetrics) such as online engagement, downloads, and social media impact should be considered [60]. Qualitative indicators like methodological rigor, reproducibility, transparency, and societal relevance must also be integrated. This diversification mitigates the disproportionate influence of any single metric, allowing for a more balanced and equitable assessment of research contributions.
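As a purely illustrative sketch of how such diversification might be operationalized (every indicator name and weight below is hypothetical, invented for this example rather than taken from any cited framework):

    # Hypothetical multi-indicator assessment: several normalized signals are
    # combined so that no single metric dominates the evaluation.
    WEIGHTS = {
        "citations": 0.20,             # traditional signal, deliberately down-weighted
        "altmetrics": 0.15,            # downloads, online engagement [60]
        "reproducibility": 0.25,       # qualitative panel rating, scaled to 0-1
        "methodological_rigor": 0.25,  # qualitative panel rating, scaled to 0-1
        "societal_relevance": 0.15,    # qualitative panel rating, scaled to 0-1
    }

    def composite_score(indicators: dict[str, float]) -> float:
        """Weighted sum of indicator values already normalized to [0, 1]."""
        assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
        return sum(WEIGHTS[name] * indicators.get(name, 0.0) for name in WEIGHTS)

    example = {"citations": 0.3, "altmetrics": 0.6, "reproducibility": 0.9,
               "methodological_rigor": 0.8, "societal_relevance": 0.7}
    print(f"composite score: {composite_score(example):.2f}")  # 0.68

The particular weights would of course be contested and field-specific; the point of the sketch is only that a portfolio of indicators resists the single-number reductionism criticized in Section 2.2.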
Furthermore, academic evaluation systems should explicitly value research contributions that do not conform to the traditional mold of “high-impact” work. Replication studies, negative results, interdisciplinary research, and data sharing initiatives often fall outside the purview of conventional metrics, yet these products are vital for scientific progress [20,21]. Institutions should create venues such as conferences, awards, and recognition programs that reward high-quality research regardless of the publication venue. This strategy encourages researchers to pursue innovative and meaningful inquiries without being constrained by the pressure to publish only in high-impact journals.
Similarly to the proposed revision of the academic evaluation system, a broader cultural shift is required to move away from an overemphasis on prestige-driven metrics. Guidelines such as the Leiden Manifesto [35] and the San Francisco Declaration on Research Assessment (DORA) [61] have underscored the pitfalls of overreliance on simplistic metrics and recommended moving away from them. Institutions must reconfigure reward and promotion criteria to value long-term impact and scholarly quality over short-term citation metrics. This may include restructuring tenure policies to foster an environment that prioritizes deep, meaningful inquiry. By aligning institutional incentives with the true goals of academia, the scholarly community can cultivate a culture that prizes innovation and rigorous scholarship.
For example, instead of focusing solely on the impact factors of the journals where faculty published, the new system could require researchers to submit a "long-term impact statement" alongside their applications. This statement would detail how their work has influenced subsequent research, informed policy decisions, or led to tangible changes in practice within their field, even if immediate citation counts are modest. One example in practice is the opportunity for faculty members to explain the quality of their knowledge dissemination through means such as personal reflection on research audiences, engagements, and impacts, as well as the capacity of the research to develop communities [62]. This practice allows academic institutions to perform responsible research assessment, in which faculty and administrators learn to evaluate the broader societal and academic contributions of research. Doing so reduces the pressure on researchers to aim exclusively for high-impact journals and fosters an environment that values innovative, risk-taking research in niche contexts.

3.3. Democratizing Knowledge Accessibility

In order to dismantle the entrenched barriers that restrict access to and equitable participation in academic publishing, it is essential to implement a range of strategies that address both the financial and structural aspects of the current system. The following strategies could be considered to transform the publication landscape into one that is truly inclusive and oriented toward the public good.
First, published work should be required to carry an open license, such as the Creative Commons Attribution (CC BY) license, ensuring that research outputs are free to access, reuse, and distribute [63]. This policy not only facilitates the broader dissemination of knowledge but also encourages collaborative innovation across disciplines and borders, as it removes legal and financial obstacles that often stifle interdisciplinary engagement.
Second, the widespread adoption of preprint repositories such as arXiv [64], ResearchGate [65], or the Open Science Framework [66] should be actively encouraged. By enabling researchers to share their work prior to formal publication, preprint platforms significantly increase accessibility and provide a venue for early feedback from the global research community. This exchange of ideas can improve the quality of research before it enters the formal peer-review process, while also fostering a culture of openness and rapid dissemination of new findings without financial barriers.
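To illustrate this openness in practice, the sketch below retrieves a few preprints through arXiv's public Atom API (the endpoint and query parameters follow arXiv's documented export interface; the search terms are arbitrary examples):

    import urllib.parse
    import urllib.request
    import xml.etree.ElementTree as ET

    ATOM = "{http://www.w3.org/2005/Atom}"  # Atom XML namespace used by the feed

    # arXiv's public export API: no account, key, or fee is required.
    query = urllib.parse.urlencode({
        "search_query": "all:open access publishing",
        "start": 0,
        "max_results": 3,
    })
    url = f"http://export.arxiv.org/api/query?{query}"

    with urllib.request.urlopen(url) as response:
        feed = ET.fromstring(response.read())

    for entry in feed.findall(f"{ATOM}entry"):
        title = " ".join(entry.findtext(f"{ATOM}title").split())
        print(f"- {title}\n  {entry.findtext(f'{ATOM}id')}")

That a dozen lines of standard-library code can retrieve the full metadata of current research is precisely the kind of frictionless access this section advocates.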
In addition to open licensing and preprint repositories, the development of community-driven publishing platforms or library-led publishing such as the Education and Research Archive [67] offers a promising alternative to profit-driven commercial publishers. Non-profit, community-led publishing initiatives prioritize academic merit and public accessibility over commercial gain. Empowering academic communities to take control of their own publishing infrastructure not only curbs the influence of profit-driven models but also ensures that scholarly work is disseminated without imposing prohibitive financial barriers on either authors or readers. Such grassroots initiatives can help balance the system by providing equitable access and fostering an environment where quality and innovation are the primary metrics of success.
Universities and research institutions play a pivotal role in this reform by investing in robust, interoperable digital repositories. These repositories should host not only published articles but also datasets, supplementary materials, and earlier versions of research outputs. By enhancing the discoverability of the full spectrum of academic work, these repositories ensure that knowledge is accessible to the global community. Given that many universities are subsidized by government funding, which is ultimately sourced from taxpayers, it is both logical and justifiable that the general public should not be required to pay again for access to research that was already funded by their tax contributions.
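On the technical side, repository interoperability is commonly achieved through OAI-PMH (the Open Archives Initiative Protocol for Metadata Harvesting), which lets any client harvest a repository's metadata over plain HTTP. The sketch below lists Dublin Core titles from an OAI-PMH endpoint; the base URL is a placeholder to be replaced with a real repository's endpoint (arXiv's, for instance, is http://export.arxiv.org/oai2):

    import urllib.parse
    import urllib.request
    import xml.etree.ElementTree as ET

    # Placeholder endpoint; substitute a real repository's OAI-PMH base URL.
    OAI_BASE = "https://repository.example.edu/oai"
    OAI = "{http://www.openarchives.org/OAI/2.0/}"  # protocol namespace
    DC = "{http://purl.org/dc/elements/1.1/}"       # Dublin Core namespace

    # ListRecords with the oai_dc prefix is part of the OAI-PMH specification
    # and is supported by every conforming repository.
    params = urllib.parse.urlencode({"verb": "ListRecords",
                                     "metadataPrefix": "oai_dc"})

    with urllib.request.urlopen(f"{OAI_BASE}?{params}") as response:
        root = ET.fromstring(response.read())

    for record in root.iter(f"{OAI}record"):
        for title in record.iter(f"{DC}title"):
            print(title.text)

Collectively, these strategies could dismantle existing barriers that treat knowledge as a commodity and build an ecosystem that values and ensures equitable access to knowledge for all.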

4. Anticipated Challenges and Counterarguments

4.1. Fear of Low-Quality Work

One of the primary concerns raised by critics of reforming academic publishing, particularly with respect to open repositories, mentorship programs, and post-publication peer review, is the fear that these approaches will lead to the proliferation of low-quality work [15]. Critics may argue that traditional pre-publication peer review and stringent gatekeeping have long served as essential filters to ensure that only rigorously vetted research is disseminated [13]. They may contend that relaxing these standards might inadvertently allow flawed, poorly executed, or even pseudo-scientific studies to enter the public domain, thereby diluting the overall quality of scholarly literature [15].
In response to these concerns, several counterarguments can be presented. First, it is important to recognize that the traditional system is not infallible; even with rigorous pre-publication review, instances of substandard or even fraudulent work have historically made their way into respected journals [24,68]. The proposed models of open repositories and post-publication review do not eliminate quality control; rather, they transform and distribute it across a broader, more transparent process. Under a post-publication review model, research is subjected to continuous scrutiny by scholars, creating an ongoing feedback loop. This iterative process allows errors and deficiencies to be identified and corrected over time, reinforcing the self-correcting nature of scientific inquiry.
Additionally, the transparency of the proposed strategies (i.e., post-publication peer review, using qualitative metrics) may mitigate the threat of predatory publication, a term that refers to exploitative academic publishing practices in which journals charge authors publication fees without providing legitimate editorial and peer review services [69]. When peer review reports, editorial decisions, and revision histories are made publicly accessible, it becomes far easier for the academic community to distinguish between legitimate journals that adhere to scholarly standards and those that do not. Transparency creates a verifiable record of scholarly discourse, which not only deters unethical editorial practices but also empowers researchers to make informed decisions about where to publish.
There may also be concerns that open peer review could create opportunities for nepotism that allow poor-quality work to be published, but the very transparency of the process serves as a powerful safeguard against such risks [58]. When reviewer identities and their comments are made publicly available, any bias or favoritism becomes immediately visible to the scholarly community, which can then hold individuals accountable for their evaluations [70]. This openness discourages preferential treatment, as reviewers are aware that their feedback is subject to collective scrutiny [71]. Empirical studies have indicated that open peer review not only maintains but can enhance the quality of research by promoting constructive criticism and iterative improvement [72]. In this way, while the concern about nepotism is understandable, the accountability and inclusivity inherent in open peer review effectively counterbalance these risks, ensuring that quality control is maintained through a transparent and self-correcting process.
Moreover, the transparent documentation of review histories and revisions serves as a powerful accountability mechanism. When all versions of an article, along with the corresponding review reports, are publicly available, any concerns regarding quality are immediately visible to the scholarly community. This level of openness encourages reviewers to provide thorough, constructive feedback and motivates authors to address criticisms promptly. As a result, the collaborative environment fosters improvements in the quality of published work rather than compromising it.
Additionally, the integration of mentorship programs plays a crucial role in mitigating the risk of low-quality publications. By engaging experienced editors and senior researchers in guiding early career scholars, these programs offer hands-on support and training that help to elevate the standard of submitted work from its inception. Instead of serving as a rigid barrier, traditional gatekeeping is reimagined as an opportunity for developmental feedback and academic growth, ensuring that manuscripts are refined and enhanced before they reach a broader audience. This approach is consistent with the fundamental objective of academia: to facilitate learning, teaching, and the elevation of emerging researchers, rather than obstructing the efforts of those keen to learn and contribute.
Empirical evidence also suggests that open dissemination platforms, such as preprint repositories, have not led to a significant increase in low-quality work. For example, Abdill and Blekhman [73] conducted a comprehensive analysis of bioRxiv preprints and found that a preprint's download count has a significant positive correlation with the impact factor of the journal in which it is subsequently published. Similarly, Fraser et al. [74] examined the explosion of preprints during the COVID-19 pandemic and observed that the rapid dissemination of research via preprint servers did not compromise quality; instead, it facilitated swift scholarly communication and post-publication scrutiny, which in turn helped to maintain research integrity. These studies suggest that when open dissemination is coupled with transparent review processes and the opportunity for post-publication feedback, the overall quality of research is preserved, mitigating concerns that open platforms inherently lead to the propagation of substandard work.
In summary, while the fear of low-quality work is a legitimate concern, it is counterbalanced by the inherent advantages of a transparent, iterative, and collaborative review process. By shifting the emphasis from pre-publication perfection to ongoing quality improvement, the scholarly community can maintain high standards without sacrificing accessibility or inclusivity.

4.2. Resistance from Established Stakeholders

One of the most formidable challenges to reforming the academic publishing landscape is the anticipated resistance from established stakeholders: commercial publishers, traditional editorial boards, funding agencies, and even segments of the academic community that have long benefited from the current system. These stakeholders often have entrenched interests and financial incentives tied to subscription-based models, high APCs, and prestige-driven metrics, making them naturally inclined to resist reforms that could disrupt their revenue streams and influence.
Commercial publishers, in particular, wield significant power and profit from maintaining restricted access to academic work. Their business models are built on the status quo, where paywalls and high APCs serve as primary revenue generators. Critics may argue that any shift toward open or community-driven publishing platforms could undermine the financial stability of these organizations, potentially leading to a perceived decline in the overall quality of scholarly work due to reduced resources for editorial services and peer review management [75].
However, counterarguments emphasize that the landscape of academic publishing is already undergoing substantial transformation. Many publishers are beginning to adopt hybrid models, as evidenced by the steady increase in open-access publication [76]. The same findings indicate that global mandates and funding agency policies are driving the adoption of open-access practices, further supporting the idea of an inevitable shift in the publishing landscape. This shift is further propelled by mandates from funding agencies and governments, such as those outlined in cOAlition S [77], signaling an inevitable move toward more open and accessible scholarly communication. These market pressures can foster innovation, encouraging publishers to explore sustainable business models that balance profit with public good, rather than simply clinging to outdated practices.
Furthermore, traditional academic gatekeepers, including established editorial boards and senior researchers, may argue that their rigorous pre-publication peer review processes are essential for maintaining research quality [15]. There may be concerns that alternative models, such as post-publication review and community-driven platforms, could lead to the dissemination of low-quality or unvetted research. Yet this resistance overlooks the inherent limitations of traditional peer review, which is not immune to bias, errors, or the inadvertent suppression of innovative ideas [68]. As discussed in the section on the fear of low-quality work, empirical evidence indicates that open dissemination platforms, complemented by transparent, community-driven review and continuous mentorship, do not necessarily compromise research quality.
Similarly, funding agencies and institutional stakeholders often reinforce the status quo by relying heavily on conventional evaluation metrics, such as journal impact factors and citation counts, which favor established publication venues. Initiatives like the Leiden Manifesto [35] and the San Francisco DORA [61], as mentioned previously, challenge these entrenched metrics by advocating for more holistic measures of research impact. By drawing on these alternative models and evolving evaluation practices, the academic community can both address the concerns regarding quality and gradually mitigate the resistance from established stakeholders, ultimately fostering an environment that prizes innovation and meaningful research over superficial prestige.
In summary, while resistance from established stakeholders is a significant challenge to academic publishing reforms, it is countered by both market forces and a growing body of evidence supporting more open, transparent, and inclusive models. The evolving landscape of research dissemination, coupled with ethical imperatives for broader access and the democratization of knowledge, suggests that these reforms are not only desirable but ultimately inevitable. As the academic community shifts its focus from profit and prestige towards quality, innovation, and public good, the momentum for reform will likely continue to grow, gradually overcoming the entrenched resistance of established stakeholders.

4.3. Sustainability Aspect of the Community Repository

Community-driven repositories offer a promising alternative to traditional, commercially controlled publishing models by promoting open access, transparency, and equity. However, ensuring the long-term sustainability of these repositories is a complex challenge that requires careful consideration of financial, technical, and organizational factors.
One of the primary concerns is securing stable and ongoing funding to support the development, maintenance, and expansion of community-driven repositories. Unlike commercial publishers that operate on established revenue streams such as subscription fees or APCs, community-driven platforms often rely on a patchwork of funding sources, including grants, donations, institutional support, and government funding. There may be concerns that such models can be volatile and may not guarantee consistent financial backing, which could jeopardize repository operations over time.
Diversified funding strategies can mitigate this risk. For example, institutions can pool resources to create consortiums that share the costs of maintaining these platforms, while government agencies and philanthropic organizations can provide targeted grants for digital infrastructure projects. Furthermore, successful examples like arXiv [64] and Zenodo [78] have demonstrated that community-driven repositories can achieve financial sustainability through a combination of institutional support, modest membership fees from contributing organizations, and external grants. By developing robust business models that emphasize transparency and accountability in fund usage, community-driven repositories can secure the long-term investments necessary for their continued operation.
From a technical standpoint, maintaining a community-driven repository involves ensuring data integrity, system security, and user-friendly interfaces while continuously updating software and hardware to keep pace with evolving technologies. The risk of technological obsolescence and cybersecurity threats poses ongoing challenges that can affect the reliability and performance of the repository.
Open-source platforms and community collaboration offer viable solutions to these challenges. Leveraging the open-source model, repositories can benefit from collective innovation, where developers and researchers contribute to the improvement and modernization of the platform. Regularly scheduled software updates, strong cybersecurity protocols, and partnerships with established technical institutions can further enhance the resilience of these systems [78]. Additionally, adopting interoperable standards and modular architectures can make it easier to integrate new technologies and adapt to future requirements, thereby extending the platform's lifespan and functionality.
Finally, the sustainability of community-driven repositories also depends on active engagement from the research community. A lack of participation or insufficient contribution of content and technical improvements can undermine the platform's relevance and operational efficiency. Building a vibrant community around the repository is therefore essential to its success. Features such as academic recognition and integration with institutional repositories can encourage researchers to contribute and maintain high-quality content.
Additionally, organizing workshops, webinars, and community forums can foster a sense of ownership and collective responsibility among users. For example, researchers can create virtual research communities, where projects, domains, and conferences can be managed and discussed [79]. By continuously demonstrating the tangible benefits of open access and community collaboration, such as increased visibility and broader societal impact, repositories can maintain and even enhance community participation over time.
By adopting diversified funding models, leveraging open-source technology, establishing strong governance frameworks, and fostering active community participation, these platforms can secure their long-term viability. The success stories of existing repositories provide concrete examples of how these countermeasures can be effectively implemented, underscoring that a sustainable, community-driven approach to academic publishing is not only possible but also essential for promoting equitable access to knowledge.

5. Conclusion and Call to Action

This paper calls on researchers, institutions, and policymakers to actively engage in reimagining the academic publishing system. In summary, the current academic publishing landscape is burdened by systemic inefficiencies and inequities that hinder the advancement of genuine, innovative research. The discussion of issues related to gatekeeping and subjectivity, prestige-driven metrics, and access and equity reveals a pressing need for transformative reforms. Traditional models, which are characterized by opaque pre-publication gatekeeping, an overreliance on superficial impact metrics, and restricted access through costly paywalls, are increasingly out of step with the evolving demands of modern scholarship.
The proposed strategies advocate for a reimagined publishing ecosystem built on transparency, inclusivity, and ongoing quality improvement. By integrating robust mentorship programs and considering post-publication peer review, we can dismantle the elitist barriers that have historically stifled innovation. In reconfiguring evaluation metrics to recognize the long-term, multifaceted impact of research, we shift the focus from journal prestige to substantive scholarly contributions. Moreover, by mandating open licensing, promoting preprint repositories, and fostering community-driven platforms, we ensure that knowledge generated through public funding is freely accessible and can be continuously refined through open collaboration.
However, the journey toward these reforms is not without challenges. Concerns about a potential decline in research quality, resistance from established stakeholders, and the sustainability of community-driven repositories have been raised. Yet, as discussed, these concerns can be addressed through a reformed notion of research quality, diversified funding strategies, open-source technological solutions, robust governance structures, and proactive community engagement. Empirical evidence supports the assertion that open dissemination platforms can maintain, and even enhance, the quality and reliability of research when coupled with transparent review processes and continuous mentorship [56,73,74].
As we stand, it is imperative that all stakeholders (researchers, editors, academic institutions, funding agencies, publishers, and policymakers) join forces to foster an environment where knowledge is not confined by outdated models of exclusivity and profit. This is not merely a technical or logistical challenge but a moral imperative to democratize access to the cumulative knowledge of humanity.

5.1. Call to Action

  • For Researchers: Resist the pressure to chase prestige and metrics as proxies for worth. Publish what matters to you and your communities, not what aligns with arbitrary rankings or external validation. Respect your own contributions and those of others based on quality, relevance, and impact, not citation counts or journal rankings. By embracing open dissemination, thoughtful mentorship, and transparent review, researchers can foster a more inclusive, intellectually honest publishing culture.
  • For Editors and Publishers: For editors, reimagine your roles as mentors rather than gatekeepers. Reassess traditional quality control methods and consider integrating mentorship programs and the post-publication peer review model into your editorial practices. For publishers, invest in sustainable, community-driven models that prioritize access and equity over profit.
  • For Academic Institutions and Funding Agencies: Revise evaluation criteria to reward holistic, long-term contributions to scholarship and society. Move beyond journal-based metrics and incentivize open dissemination, methodological rigor, and interdisciplinary research through tenure, promotion, and funding decisions. Support initiatives that enhance community-driven repositories and open licensing policies.
  • For Policymakers: Mandate public access to publicly funded research and invest in infrastructure that supports open science. Enact regulations that curb exploitative publishing practices and empower community-owned platforms that democratize knowledge for societal benefit.
The future of academic publishing depends on our collective willingness to reimagine and reconstruct the frameworks that govern knowledge dissemination. By uniting around shared values of transparency, quality, and inclusivity, we can create a resilient and dynamic system that not only meets the needs of today's scholars but also lays a solid foundation for discovery and dissemination by future researchers.
Ultimately, the intention of this paper is not to claim definitive answers or to enact change overnight. Rather, it seeks to prompt critical reflection on the structural challenges within academic publishing and to invite meaningful engagement with the ideas presented. The critiques raised and the strategies proposed are not positioned as absolute truths, but as earnest contributions to an ongoing dialogue. If this paper succeeds in encouraging you to reflect on your own academic practices and to consider actionable steps, then its purpose has been fulfilled. In that reflection lies the first step toward a more equitable and forward-looking scholarly ecosystem.

References

  1. Harwell, E.N. Academic journal publishers antitrust litigation, 2024.
  2. Van Dalen, H.P. How the publish-or-perish principle divides a science: The case of economists. Scientometrics 2021, 126, 1675–1694.
  3. Siler, K.; Lee, K.; Bero, L. Measuring the effectiveness of scientific gatekeeping. Proceedings of the National Academy of Sciences 2015, 112, 360–365.
  4. Primack, R.B.; Regan, T.J.; Devictor, V.; Zipf, L.; Godet, L.; Loyola, R.; Maas, B.; Pakeman, R.J.; Cumming, G.S.; Bates, A.E.; et al. Are scientific editors reliable gatekeepers of the publication process? Biological Conservation 2019, 238, 108232.
  5. Neuberger, J.; Counsell, C. Impact factors: Uses and abuses. European Journal of Gastroenterology & Hepatology 2002, 14, 209–211.
  6. Nisonger, T.E. The benefits and drawbacks of impact factor for journal collection management in libraries. The Serials Librarian 2004, 47, 57–75.
  7. Ke, S. Liberate academic research from paywalls, 2022.
  8. Burchardt, J. Researchers outside APC-financed open access: Implications for scholars without a paying institution. Sage Open 2014, 4.
  9. Segado-Boj, F.; Martín-Quevedo, J.; Prieto-Gutiérrez, J.J. Jumping over the paywall: Strategies and motivations for scholarly piracy and other alternatives. Information Development 2024, 40, 442–460.
  10. Batterbury, S.; Wielander, G.; Pia, A.E. After the Labour of Love: The incomplete revolution of open access and open science in the humanities and creative social sciences. Commonplace 2022. https://commonplace.knowledgefutures.org/pub/lsiyku2n.
  11. Pia, A.E.; Batterbury, S.; Joniak-Lüthi, A.; LaFlamme, M.; Wielander, G.; Zerilli, F.M.; Nolas, M.; Schubert, J.; Loubere, N.; Franceschini, I.; et al. Labour of Love: An Open Access Manifesto for Freedom, Integrity, and Creativity in the Humanities and Interpretive Social Sciences. Commonplace 2020. https://commonplace.knowledgefutures.org/pub/y0xy565k.
  12. Ray, J. Judging the judges: The role of journal editors. QJM 2002, 95, 769–774.
  13. Ansell, B.W.; Samuels, D.J. Desk rejecting: A better use of your time. PS: Political Science & Politics 2021, 54, 686–689.
  14. Teixeira Da Silva, J.A.; Al-Khatib, A.; Katavić, V.; Bornemann-Cimenti, H. Establishing sensible and practical guidelines for desk rejections. Science and Engineering Ethics 2018, 24, 1347–1365.
  15. Fradkov, A.L. How to publish a good article and to reject a bad one. Notes of a reviewer. Automation & Remote Control 2003, 64.
  16. Ioannidis, J.P.A. Why most published research findings are false. PLoS Medicine 2005, 2, e124.
  17. Jussim, L.; Honeycutt, N. Bias in psychology: A critical, historical and empirical review. Swiss Psychology Open 2024, 4, 5.
  18. Nosek, B.A.; Errington, T.M. What is replication? PLOS Biology 2020, 18, e3000691.
  19. Borrego, M.; Foster, M.J.; Froyd, J.E. Systematic literature reviews in engineering education and other developing interdisciplinary fields. Journal of Engineering Education 2014, 103, 45–76.
  20. Mlinarić, A.; Horvat, M.; Šupak Smolčić, V. Dealing with the positive publication bias: Why you should really publish your negative results. Biochemia Medica 2017, 27, 030201.
  21. Fanelli, D. Negative results are disappearing from most disciplines and countries. Scientometrics 2012, 90, 891–904.
  22. Cross, M.; Lee, S.; Bridgman, H.; Thapa, D.K.; Cleary, M.; Kornhaber, R. Benefits, barriers and enablers of mentoring female health academics: An integrative review. PLOS ONE 2019, 14, e0215319.
  23. Starbuck, W.H. How much better are the most-prestigious journals? The statistics of academic publication. Organization Science 2005, 16, 180–200.
  24. Brembs, B. Prestigious science journals struggle to reach even average reliability. Frontiers in Human Neuroscience 2018, 12.
  25. Oswald, A.J. An examination of the reliability of prestigious scholarly journals: Evidence and implications for decision-makers. Economica 2007, 74, 21–31.
  26. Bajracharya, K.S.; Luu, S.; Cheah, R.; Kc, S.; Mushtaq, A.; Elijah, M.; Poudel, B.K.; Cham, C.F.X.; Mandal, S.; Muhi, S.; et al. Mentorship advances antimicrobial use surveillance systems in low- and middle-income countries. JAC-Antimicrobial Resistance 2024, 7, dlae212.
  27. McKiernan, E.C.; Schimanski, L.A.; Muñoz Nieves, C.; Matthias, L.; Niles, M.T.; Alperin, J.P. Use of the Journal Impact Factor in academic review, promotion, and tenure evaluations. eLife 2019, 8, e47338.
  28. Chapman, C.A.; Bicca-Marques, J.C.; Calvignac-Spencer, S.; Fan, P.; Fashing, P.J.; Gogarten, J.; Guo, S.; Hemingway, C.A.; Leendertz, F.; Li, B.; et al. Games academics play and their consequences: How authorship, h-index and journal impact factors are shaping the future of academia. Proceedings of the Royal Society B: Biological Sciences 2019, 286, 20192047.
  29. Mount Royal University. Criteria, evidence and standards for tenure and promotion.
  30. Garfield, E. Citation indexes in sociological and historical research. American Documentation 1963, 14, 289–291.
  31. Viiu, G.A.; Paunescu, M. The lack of meaningful boundary differences between journal impact factor quartiles undermines their independent use in research evaluation. Scientometrics 2021, 126, 1495–1525.
  32. Brembs, B.; Button, K.; Munafò, M. Deep impact: Unintended consequences of journal rank. Frontiers in Human Neuroscience 2013, 7.
  33. Manolopoulos, Y.; Katsaros, D. Metrics and rankings: Myths and fallacies. In Data Analytics and Management in Data Intensive Domains; Kalinichenko, L.; Kuznetsov, S.O.; Manolopoulos, Y., Eds.; Communications in Computer and Information Science, Vol. 706; Springer International Publishing: Cham, 2017; pp. 265–280.
  34. Sloane, E.H. Reductionism. Psychological Review 1945, 52, 214–223.
  35. Hicks, D.; Wouters, P.; Waltman, L.; De Rijcke, S.; Rafols, I. Bibliometrics: The Leiden Manifesto for research metrics. Nature 2015, 520, 429–431.
  36. Brembs, B. Reliable novelty: New should not trump true. PLOS Biology 2019, 17, e3000117.
  37. Smaldino, P.E.; McElreath, R. The natural selection of bad science. Royal Society Open Science 2016, 3, 160384.
  38. The Royal Golden Jubilee (RGJ) Ph.D. Programme. Publication criteria.
  39. Head, M.L.; Holman, L.; Lanfear, R.; Kahn, A.T.; Jennions, M.D. The extent and consequences of p-hacking in science. PLOS Biology 2015, 13, e1002106.
  40. Rodgers, M.A.; Pustejovsky, J.E. Evaluating meta-analytic methods to detect selective reporting in the presence of dependent effect sizes. Psychological Methods 2021, 26, 141.
  41. Dewidar, O.; Elmestekawy, N.; Welch, V. Improving equity, diversity, and inclusion in academia. Research Integrity and Peer Review 2022, 7, 4.
  42. Tennant, J.P.; Waldner, F.; Jacques, D.C.; Masuzzo, P.; Collister, L.B.; Hartgerink, C.H.J. The academic, economic and societal impacts of open access: An evidence-based review. F1000Research 2016, 5, 632.
  43. Larivière, V.; Haustein, S.; Mongeon, P. The oligopoly of academic publishers in the digital era. PLOS ONE 2015, 10, e0127502.
  44. Fyfe, A.; Coate, K.; Curry, S.; Lawson, S.; Moxham, N.; Røstvik, C.M. Untangling academic publishing: A history of the relationship between commercial interests, academic prestige and the circulation of research, 2017.
45. Shearer, K. Responding to unsustainable journal costs: A CARL brief, 2018.
46. Van Noorden, R. Open access: The true cost of science publishing. Nature 2013, 495, 426–429.
47. Multidisciplinary Digital Publishing Institute [MDPI]. Open Access and Article Processing Charge (APC).
48. Public Library of Science [PLOS]. PLOS Fee.
49. HighWire Press. Rethinking Open Access: Moving Beyond Article Processing Charges for True Equity, 2024.
50. Greshake, B. Looking into Pandora’s box: The content of Sci-Hub and its usage. F1000Research 2017, 6, 541.
51. Faust, J.S. Sci-Hub: A solution to the problem of paywalls, or merely a diagnosis of a broken system? Annals of Emergency Medicine 2016, 68, 15A–17A.
52. Himmelstein, D.S.; Romero, A.R.; Levernier, J.G.; Munro, T.A.; McLaughlin, S.R.; Greshake Tzovaras, B.; Greene, C.S. Sci-Hub provides access to nearly all scholarly literature. eLife 2018, 7, e32822.
53. Correa, J.C.; Laverde-Rojas, H.; Tejada, J.; Marmolejo-Ramos, F. The Sci-Hub effect on papers’ citations. Scientometrics 2022, 127, 99–126.
54. Startup, A.; Dupuis Brouillette, M. Canadian Journal for New Scholars in Education.
55. Lorenzetti, D.L.; Shipton, L.; Nowell, L.; Jacobsen, M.; Lorenzetti, L.; Clancy, T.; Paolucci, E.O. A systematic review of graduate student peer mentorship in academia. Mentoring & Tutoring: Partnership in Learning 2019, 27, 549–576.
56. Marino, F.E. Mentoring gone wrong: What is happening to mentorship in academia? Policy Futures in Education 2021, 19, 747–751.
57. F1000Research. The Peer Review Process.
58. Pros and cons of open peer review. Nature Neuroscience 1999, 2, 197–198.
59. Wolfram, D.; Wang, P.; Hembree, A.; Park, H. Open peer review: Promoting transparency in open science. Scientometrics 2020, 125, 1033–1051.
60. Trueger, N.S.; Thoma, B.; Hsu, C.H.; Sullivan, D.; Peters, L.; Lin, M. The altmetric score: A new measure for article-level dissemination and impact. Annals of Emergency Medicine 2015, 66, 549–553.
61. Cagan, R. San Francisco Declaration on Research Assessment. Disease Models & Mechanisms 2013, dmm.012955.
62. University of Alberta. Faculty of Education faculty evaluation committee procedures and criteria for the evaluation of academic faculty, 2022.
63. Creative Commons. About CC licenses, 2019.
64. Ginsparg, P. ArXiv at 20. Nature 2011, 476, 145–147.
65. O’Brien, K. ResearchGate. Journal of the Medical Library Association 2019, 107.
66. Foster, E.D.; Deardorff, A. Open Science Framework (OSF). Journal of the Medical Library Association 2017, 105, 203–206.
67. University of Alberta Library. ERA (Education and Research Archive).
68. Arms, W.Y. What are the alternatives to peer review? Quality control in scholarly publishing on the web. The Journal of Electronic Publishing 2002, 8.
69. Beall, J. Predatory publishers are corrupting open access. Nature 2012, 489, 179.
70. Godlee, F. Making reviewers visible: Openness, accountability, and credit. Journal of the American Medical Association 2002, 287, 2762–2765.
71. Ross-Hellauer, T. What is open peer review? A systematic review. F1000Research 2017, 6, 588.
72. Bravo, G.; Grimaldo, F.; López-Iñesta, E.; Mehmani, B.; Squazzoni, F. The effect of publishing peer review reports on referee behavior in five scholarly journals. Nature Communications 2019, 10, 322.
73. Abdill, R.J.; Blekhman, R. Tracking the popularity and outcomes of all bioRxiv preprints. eLife 2019, 8, e45133.
74. Fraser, N.; Brierley, L.; Dey, G.; Polka, J.K.; Pálfy, M.; Nanni, F.; Coates, J.A. The evolving role of preprints in the dissemination of COVID-19 research and their impact on the science communication landscape. PLOS Biology 2021, 19, e3000959.
75. Bianchi, F.; Squazzoni, F. Can transparency undermine peer review? A simulation model of scientist behavior under open peer review. Science and Public Policy 2022, 49, 791–800.
76. Piwowar, H.; Priem, J.; Larivière, V.; Alperin, J.P.; Matthias, L.; Norlander, B.; Farley, A.; West, J.; Haustein, S. The state of OA: A large-scale analysis of the prevalence and impact of open access articles. PeerJ 2018, 6, e4375.
77. Schiltz, M. Science without publication paywalls: cOAlition S for the realisation of full and immediate open access. Frontiers in Neuroscience 2018, 12, 656.
78. Peters, I.; Kraker, P.; Lex, E.; Gumpenberger, C.; Gorraiz, J.I. Zenodo in the spotlight of traditional and new metrics. Frontiers in Research Metrics and Analytics 2017, 2, 13.
79. Sicilia, M.A.; García-Barriocanal, E.; Sánchez-Alonso, S. Community curation in open dataset repositories: Insights from Zenodo. Procedia Computer Science 2017, 106, 54–60.
Figure 1. The Post-Publication Peer Review Process. Note. Diagram from “Everything you need to know about the F1000 post-publication open peer review process” (https://www.f1000.com/resources-for-researchers/how-to-publish-your-research/peer-review/). Copyright 2025 by F1000.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.