Preprint (Review). This version is not peer-reviewed.

Balancing Efficiency and Depth: A Systematic Review of Artificial Intelligence’s Impact on Research Competencies Across the Research Lifecycle

Submitted: 07 May 2025 | Posted: 08 May 2025


Abstract
Background: The integration of artificial intelligence (AI) into academic research is fundamentally transforming how scholarship is conducted across disciplines. This systematic review synthesizes empirical and conceptual literature that examines how AI tools, particularly large language models and machine learning systems, are reshaping essential research competencies throughout the research lifecycle. Methods: We analyzed 49 studies across various disciplines, methodologies, and geographic regions to assess AI's impact on research processes, from literature review to knowledge dissemination. Our framework evaluated the effects across three dimensions of research competency: technical, critical, and social, while integrating established theoretical perspectives, including Mode 2 Knowledge Production and Distributed Cognition theory. Results: Our findings reveal dramatic efficiency improvements in research processes, with a 50-95% reduction in workload for literature screening and 70-80% savings in time for qualitative analysis. However, these gains introduce significant tensions: between efficiency and interpretive depth, expertise and automation, reference accuracy and research integrity, and equal access versus emerging "research divides." The impact varies by research stage, with literature review, qualitative analysis, and hypothesis generation undergoing the most substantial transformations. Discussion: Our conceptual framework demonstrates how AI integration represents not merely technological adoption but a fundamental reconceptualization of research expertise—from technical execution toward critical judgment, ethical reasoning, and interdisciplinary integration. We propose balanced human-AI collaboration models that emphasize strategic human oversight, transparent documentation practices, and stage-appropriate automation. These findings have significant implications for research education, institutional policy, and the future development of research competencies in an increasingly AI-mediated knowledge ecosystem.

1. Introduction

The integration of artificial intelligence (AI), particularly large language models (LLMs) and machine learning systems, into academic research represents the most transformational change in scholarly methods since the beginning of the digital era [1,2]. Tasks that once required months of extensive human labor can now be completed in days or hours, frequently with equivalent or even greater accuracy [3,4]. This substantial increase in efficiency has been noted across multiple research stages, including literature reviews [5,6], qualitative analyses [7,8], and even hypothesis generation [7,9].
This transformation extends beyond mere efficiency gains. As AI systems assume responsibilities once viewed as markers of human expertise, such as synthesizing literature, analyzing qualitative data, identifying research gaps, and formulating innovative hypotheses, pressing questions arise about the changing landscape of research skills and the foundational principles of academic inquiry [10,11]. Given the rise of increasingly automated technical tasks, what constitutes research expertise? How might AI tools impact the pace of our research, the questions we explore, and our interpretations of findings?
The significance of these questions is heightened by the rapid adoption of AI tools across diverse fields, demographic categories, and institutional frameworks. Recent survey results indicate that 53% of researchers incorporate AI tools in some aspect of their research process, with usage varying significantly by discipline, career level, and institutional backing [10]. This uneven adoption raises concerns that “research divides” may form between those who can access advanced AI tools and possess the required technical expertise, and those who cannot [12].

1.1. Research Gap and Purpose

While there is a growing body of literature discussing the impact of AI on specific research tasks, a comprehensive framework for understanding how these technologies reshape research competencies throughout the entire research lifecycle is lacking. Existing research typically focuses on individual processes, such as literature screening [1], qualitative coding [13], or information retrieval [14], but often fails to connect these changes to broader questions of research skills, identity, and epistemology.
This systematic review seeks to synthesize empirical and conceptual literature concerning the effects of AI on research workflows, with a focus on how these technologies are reshaping fundamental research skills. By integrating insights from diverse research processes, methodological approaches, and academic disciplines, we aim to establish a unified framework for understanding and responding to this transformation.

1.2. Research Questions

This review examines four interconnected research questions:
  • In what ways are AI technologies changing particular research processes throughout the research lifecycle, from literature review to the sharing of knowledge?
  • How do these transformations affect fundamental research competencies, encompassing technical, critical, and social aspects?
  • What tensions, compromises, and ethical dilemmas arise from integrating AI into research workflows?
  • What effective methods of human-AI collaboration enhance research quality while preserving human agency and critical engagement?
The findings from this review carry significant implications for various stakeholders. For individual researchers, comprehending how AI alters research skills can lead to a more deliberate incorporation of these tools, balancing efficiency improvements with critical involvement. For academic institutions, insights into evolving skill requirements can inform curriculum development and researcher training. For research funders and policymakers, findings regarding equity and access can shape strategies to ensure that AI tools improve rather than diminish research quality and diversity.
This review offers a thorough overview of how AI influences research skills and workflows. Its goal is to promote research practices that utilize technological advances while upholding the human values and critical thinking vital for meaningful academic inquiry.

2. Methods

2.1. Search Strategy and Selection Criteria

We conducted a systematic search following PRISMA guidelines to identify studies on AI’s effects on research skills and competencies. This search took place in May 2025 across seven electronic databases: Web of Science, Scopus, PubMed, IEEE Xplore, ACM Digital Library, Education Source, and PsycINFO. Our search strategy combined terms related to artificial intelligence (e.g., “artificial intelligence,” “machine learning,” “large language model,” “ChatGPT”) with terms concerning research competencies and workflows (e.g., “research skills,” “literature review,” “qualitative analysis,” “data analysis,” “critical thinking”).
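To make this strategy concrete, the sketch below assembles the Boolean logic of such a query from the example terms above. It is a minimal illustration assuming a generic database syntax; the exact registered strings and field tags differed per database and are not reproduced here.

```python
# Illustrative reconstruction of the Boolean search logic; the term lists
# come from the examples in the text, while the combined syntax is a generic
# assumption rather than any one database's registered search string.
ai_terms = [
    '"artificial intelligence"', '"machine learning"',
    '"large language model"', '"ChatGPT"',
]
competency_terms = [
    '"research skills"', '"literature review"', '"qualitative analysis"',
    '"data analysis"', '"critical thinking"',
]
query = f"({' OR '.join(ai_terms)}) AND ({' OR '.join(competency_terms)})"
print(query)
# -> ("artificial intelligence" OR ... OR "ChatGPT")
#    AND ("research skills" OR ... OR "critical thinking")
```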
The inclusion criteria consisted of: (1) empirical or conceptual studies exploring AI tools within research settings; (2) a focus on their impact on research skills, workflows, or epistemology; (3) publication in English; (4) peer-reviewed or high-quality preprints; and (5) publication dates between 2018 and 2025. We included empirical studies (quantitative, qualitative, and mixed-methods) and conceptual papers developing frameworks or models.
The initial search yielded 873 records. After removing duplicates (n=138), 735 records were screened based on title and abstract, resulting in 152 potentially eligible full-text articles. After full-text review, 49 studies met all inclusion criteria and were included in the final analysis.
Figure 1. PRISMA flow diagram of the study selection process. This diagram illustrates the identification, screening, eligibility assessment, and inclusion of studies examining AI’s impact on research skills and competencies.

2.2. Quality Assessment

Two researchers independently evaluated the methodological quality of selected empirical studies using the Mixed Methods Appraisal Tool (MMAT), which accommodates various study designs. For conceptual papers, we modified criteria from Whittemore et al.’s (2014) framework for assessing theoretical research. Any disagreements were addressed through discussion with a third researcher. Although no studies were excluded based on quality assessment, we acknowledged and incorporated methodological limitations into the synthesis.

2.3. Data Extraction and Analysis

Data extraction was conducted using a standardized form capturing: (1) study characteristics (authors, year, discipline, country); (2) methodology (study design, sample, data collection methods); (3) AI technologies examined; (4) research processes affected; (5) impact on research competencies; (6) reported benefits and challenges; and (7) theoretical frameworks or models.
We used a hybrid thematic analysis approach, combining deductive coding based on our initial conceptual framework with inductive coding to capture emerging themes. All extracted data were independently coded by two researchers, with disagreements resolved through discussion. The final thematic framework was developed through iterative refinement, with themes mapped to our conceptual framework examining impacts across the research lifecycle and competency dimensions.
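Although the review reports only discussion-based resolution of coding disagreements, a conventional way to quantify agreement in such dual-coding workflows is Cohen's kappa. The sketch below is an illustrative assumption of how that check might look; the labels and the use of kappa itself are hypothetical, not statistics reported by this review.

```python
# Hypothetical inter-coder agreement check for the dual-coding step.
# Cohen's kappa is assumed here as the agreement statistic; the review
# itself reports only independent coding with discussion-based resolution.
from sklearn.metrics import cohen_kappa_score

coder_a = ["efficiency", "integrity", "equity", "efficiency", "epistemology"]
coder_b = ["efficiency", "integrity", "efficiency", "efficiency", "epistemology"]

print(f"Cohen's kappa: {cohen_kappa_score(coder_a, coder_b):.2f}")

# Items with conflicting codes are routed to discussion with a third researcher.
disagreements = [i for i, (a, b) in enumerate(zip(coder_a, coder_b)) if a != b]
print("Items to discuss:", disagreements)
```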

3. Results

Figure 2 illustrates the research lifecycle as a circular process comprising six interconnected phases: Literature Review, Study Design, Data Analysis, Knowledge Dissemination, Continuous Integration, and New Research Questions. Encircling these phases are three concentric rings that represent distinct research competency domains: Technical Competencies (such as information retrieval, data analysis, and documentation), Critical Competencies (including critical thinking, ethical reasoning, and reflexivity), and Social Competencies (focused on collaboration and knowledge integration). Red circles denote areas with high AI impact (literature review, qualitative analysis, and new research question identification), while orange circles indicate areas of medium AI impact (study design, knowledge dissemination, and continuous integration).

3.1. How AI Technologies are Transforming Research Processes from Literature Review to Knowledge Sharing

Our analysis revealed notable changes throughout every phase of the research lifecycle, especially in the literature review and qualitative analysis processes.
Table 1. Mapping of AI’s impact on different stages.

Research Stage | AI Technologies | Efficiency Gains | Quality Impact | Key References
Literature Review | LLM pipelines, active learning systems | 50-95% workload reduction | Variable; high sensitivity but reference accuracy concerns | [3,5,15,16,17,18]
Study Design/Hypothesis Generation | MC-NEST, LLM + knowledge graphs | Novel hypothesis identification | Limited validation studies | [7,9,53]
Data Analysis (Qualitative) | CoAIcoder, CollabCoder, PaTAT | 70-80% time savings | Increased consensus, reduced diversity | [7,8,13,21,23,24,25]
Data Analysis (Quantitative) | Statistical automation, visualization tools | Variable | Primarily routine analysis | [26,27]
Knowledge Dissemination | Citation managers, drafting assistants | Draft generation efficiency | High hallucination rates | [10,11,28,29]

3.1.1. Literature Review and Knowledge Synthesis

The literature review process has undergone a significant transformation due to AI integration. Numerous studies show that AI tools—particularly LLM-based pipelines and active learning systems—greatly decrease the time and effort needed for systematic literature reviews [3,5,15]. These improvements in efficiency are a result of automating essential tasks.
AI systems can quickly analyze thousands of titles and abstracts, cutting down human screening efforts by 50-95% while preserving high sensitivity [4,6,16]. Tools like ASReview [5] utilize active learning to focus on the most relevant studies. New LLM-based methods [3,16] can conduct both title/abstract and full-text screenings, occasionally achieving higher sensitivity levels than human reviewers.
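The core mechanism behind these screening tools is an active-learning loop: a classifier trained on the records screened so far ranks the remainder so the human sees the most likely relevant studies first. The sketch below illustrates that loop on a toy corpus; it is a simplified assumption of the general pattern, not ASReview's actual implementation or API.

```python
# Minimal active-learning screening loop (illustrative; not ASReview's API).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy corpus standing in for thousands of title/abstract records.
abstracts = [
    "LLM-assisted screening for systematic reviews",
    "Active learning reduces reviewer workload in evidence synthesis",
    "Crop rotation effects on soil nitrogen",
    "Machine learning triage of biomedical citations",
    "Urban traffic flow simulation study",
    "Qualitative coding with AI assistance in health research",
]
X = TfidfVectorizer().fit_transform(abstracts)

labels = {0: 1, 4: 0}  # seed decisions from the human screener (index -> include?)
for _ in range(3):
    idx = sorted(labels)
    clf = LogisticRegression().fit(X[idx], [labels[i] for i in idx])
    unseen = [i for i in range(len(abstracts)) if i not in labels]
    scores = clf.predict_proba(X[unseen])[:, 1]   # predicted P(relevant)
    top = unseen[int(np.argmax(scores))]          # surface best candidate next
    labels[top] = 1  # in practice, the human screener makes this decision
    print(f"Screen next: record {top}: {abstracts[top]!r}")
```

Because the model keeps resurfacing likely-relevant records, the reviewer can stop once few new inclusions appear, which is broadly how the reported 50-95% workload reductions arise.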
AI-driven “living review” systems facilitate ongoing, automated updates of literature syntheses [17,18], changing the conventional model of static reviews into active knowledge bases incorporating new evidence. In addition to screening, AI tools help gather structured data from publications [19,20] and synthesize results across studies [3]. However, issues regarding reference accuracy and “hallucination” continue to raise concerns [20].
These transformations prompt significant inquiries into how literature review functions as a research skill. The conventional perspective of a thorough literature review—viewed as proof of disciplinary expertise via extensive manual searching—is increasingly contested by automated methods. These methods can attain similar or even better recall with considerably less effort. Many authors argue that the essential skill set is evolving from exhaustive manual searching to adeptly combining AI tools with human discernment regarding relevance, quality, and synthesis [1,5].

3.1.2. Study Design and Hypothesis Generation

AI’s influence on research design and hypothesis generation signifies a profound epistemological transformation in the research process. Recent research highlights the emergence of AI-enhanced systems for hypothesis creation that combine large language models (LLMs) with other computational methods to generate innovative, testable scientific hypotheses [7,9].
The MC-NEST system integrates Monte Carlo Tree Search, Nash equilibrium sampling, and LLM outputs to identify valuable research questions across multiple disciplines [7]. A notable demonstration of practical impact comes from [9], which subjected GPT-4-generated hypotheses about drug synergies in breast cancer to laboratory testing and found that many of the model’s predictions were experimentally validated.
These systems recognize patterns and potential connections within extensive scientific literature that human researchers may overlook due to information overload or disciplinary isolation. By linking insights from traditionally distinct research areas, these tools can propose innovative hypotheses that human researchers might not explore, potentially speeding up scientific discovery.
However, the emergence of AI-generated hypotheses raises essential questions about research competencies in study design. Several authors express concerns about researchers becoming overly dependent on machine-generated questions, which could potentially narrow research imagination and reinforce existing paradigmatic assumptions embedded in training data [11,21]. Others highlight the necessity of developing new competencies in “prompt engineering” and critically assessing machine-generated hypotheses [10,22].

3.1.3. Data Analysis and Interpretation

AI technologies are transforming both qualitative and quantitative data analysis, with the most pronounced changes occurring in qualitative analysis workflows. Numerous studies explore how LLM-assisted coding tools are changing the practice of qualitative analysis.
AI-powered coding tools such as CoAIcoder [7], CollabCoder [23,24], and similar systems significantly decrease the time needed for the preliminary coding of qualitative data, with reported efficiency improvements of 70-80% for first-pass coding [7,8]. These tools propose codes based on the text, helping to speed up codebook development.
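The interaction pattern underlying such tools can be pictured as a single first-pass step: the model proposes candidate codes with rationales, and the researcher accepts, edits, or rejects them. The sketch below shows this pattern using the OpenAI client; it is an illustrative assumption, not the actual implementation of CoAIcoder or CollabCoder, and the model name is a placeholder.

```python
# Illustrative first-pass coding request (not CoAIcoder/CollabCoder's code).
# Assumes the OpenAI Python client and an API key in the environment.
from openai import OpenAI

client = OpenAI()

excerpt = (
    "I stopped double-checking the tool's suggestions after a while; "
    "it was faster, but I worried I was missing alternative readings."
)
prompt = (
    "You are assisting inductive qualitative coding. Propose up to three "
    "short candidate codes for the excerpt below, each with a one-line "
    "rationale. These are suggestions for human review, not final codes.\n\n"
    f"Excerpt: {excerpt}"
)
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name; substitute as appropriate
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # researcher reviews before adopting
```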
Research regularly reveals a conflict between enhanced initial agreement among researchers employing AI tools and the possible decline in code diversity and interpretive variation [7,21]. As highlighted by [21]: “AI-assisted workflows appear to accelerate convergence on shared interpretations, sometimes at the cost of the productive disagreements that generate novel insights.”
In light of concerns regarding diversity and depth, researchers have crafted various frameworks for human-AI collaboration in qualitative analysis. These frameworks encompass “machine-in-the-loop” strategies, which allow humans to retain interpretive control [25], time-based models that adjust AI involvement throughout different stages of research [8], and structured AI-human dialogue frameworks [13].
For quantitative analysis, AI tools aid in research design, method selection, and visualization, though the transformation is less pronounced than in qualitative methods. Multiple authors emphasize that while routine statistical analyses can be automated, intricate modeling and interpretation still necessitate significant human expertise [26,27].

3.1.4. Knowledge Dissemination and Integration

The final stage of the research lifecycle—disseminating findings and integrating them into the broader knowledge base—is evolving with the advent of AI technologies. Many studies investigate how AI tools assist in writing research manuscripts, developing visualizations, and producing citations [10,11].
Nonetheless, this field faces considerable difficulties regarding the accuracy of references and citation fabrication. Studies indicate that present LLMs often exhibit high “hallucination” rates in citation generation, with one study revealing that GPT-4 only reached 13.8% recall for references in systematic reviews [28]. This raises significant concerns about research integrity and the risk of spreading false citations.
The rise of “living” knowledge repositories, which AI systems continually update by incorporating new publications, signifies a potentially transformative change in integrating research findings into academic knowledge. Instead of the conventional approach of isolated publications followed by occasional reviews, these systems facilitate real-time updates and integration of knowledge [17,18].

3.2. Impact on Core Research Skills, Including Technical, Critical, and Social Aspects

3.2.1. Technical/Methodological Competencies

AI integration significantly transforms the technical and methodological skills needed for successful research. Although particular technical abilities may diminish in importance due to automation, new skills focused on efficiently using and integrating AI tools are arising.
While traditional technical skills such as literature searching, manual coding, and statistical analysis are still essential for grasping fundamental processes, their application is progressively supported or automated. New technical skills involve prompt engineering (creating instructions for LLMs to produce valuable outputs), critically assessing AI-generated content, and seamlessly incorporating AI tools into research workflows.
Numerous studies highlight the ongoing importance of domain knowledge and methodological understanding, even with the automation of technical implementation [8,21]. Assessing whether AI-generated outputs meet disciplinary standards and methodological criteria is vital.
As research workflows progressively integrate AI tools, new skills are needed to document AI applications clearly. This involves detailing the processes that utilize AI, outlining the prompt strategies employed, and describing the verification methods used [10,12].

3.2.2. Critical/Reflective Competencies

Critical and reflective skills—such as critical thinking, ethical reasoning, and reflexivity—become more essential in research enhanced by AI. These abilities allow researchers to uphold their agency and exercise careful judgment in processes that are becoming more automated.
Numerous studies highlight the critical need to assess AI-generated content for accuracy, bias, and ethical considerations [10,21,29]. This evaluation involves spotting hallucinations, recognizing biased results, and evaluating the suitability of AI-generated analyses.
Ethical competencies encompass recognizing possible harms associated with AI integration, such as bias amplification and privacy issues. They also involve making well-informed decisions regarding appropriate AI usage and reflecting on the impact on research integrity and authorship.
Numerous authors emphasize the significance of reflexive awareness regarding the impact of AI tools on research processes and their potential influence on researchers’ thought processes [8,21]. This awareness entails recognizing how interactions between AI and humans can reinforce specific interpretations or analytical methods.
Certain authors suggest updated educational taxonomies tailored for AI settings. [22] outlines an adapted Bloom’s taxonomy aimed at AI-related critical thinking, highlighting the importance of skills in prompt engineering, assessment of AI outputs, and AI-human dialogic reasoning.

3.2.3. Social/Collaborative Competencies

AI integration is also changing the social and collaborative aspects of research, fostering new skills in human-AI teamwork and interdisciplinary collaboration.
Various frameworks suggest models for successful human-AI collaboration in research, highlighting the importance of complementary strengths and well-defined roles [8,13,25]. Essential skills encompass identifying suitable labor division, creating efficient feedback mechanisms, and ensuring human agency in collaborative efforts.
AI tools can enhance collaboration across different fields by promoting the integration and translation of knowledge [10,27]. This leads to the development of new skills in translating concepts between disciplines and combining various methodological approaches.
Multiple authors explore the emergence of knowledge via iterative human-AI interactions, highlighting the significance of dialogue skills, iterative refinement, and critical engagement [8,13].

3.3. Tensions, Compromises, and Ethical Dilemmas Arising from Integrating AI into Research Workflows

Our analysis uncovered significant tensions and ethical dilemmas arising from integrating AI into research workflows. For an overview of these challenges and suggested solutions, please refer to Table 2.

3.3.1. Efficiency vs. Depth and Diversity

A key tension observed in various studies is the balance between efficiency improvements and possible declines in interpretive depth and diversity. This issue is especially prominent in qualitative analysis, where AI tools significantly speed up the initial coding process, yet may compromise code diversity and interpretive richness [7,21].
[21] showed that researchers employing AI-assisted coding tools usually achieved consensus faster than traditional manual coding teams. However, this often came at the expense of productive disagreements and interpretative diversity that foster novel insights. Likewise, [8] discovered that while AI tools were beneficial for initial categorization, they could limit more profound interpretative work if utilized throughout the analysis.
This tension also applies to literature review processes, as AI can quickly identify publications that meet specific criteria but may overlook conceptually related research that employs different terminology or framing [1,30]. Many authors propose that the best strategies involve utilizing AI for initial efficiency improvements while retaining human judgment for more profound interpretive analysis [8,25].

3.3.2. Reference Accuracy and Research Integrity

A major ethical issue highlighted in several studies revolves around the accuracy of AI-generated content, notably in citations and references. This problem is particularly evident during the processes of knowledge dissemination and synthesis, where AI systems might produce seemingly credible yet factually inaccurate citations, which could jeopardize the integrity of research [20,29].
A study revealed that GPT-4 attained just 13.8% recall for references in systematic reviews, often producing fictional citations that seemed real but were nonexistent [28]. This poses significant challenges to research integrity, as unverified citations generated by AI could spread false information throughout academic literature.
Numerous authors stress the vital role of verification protocols for AI-generated content, especially regarding citations, data, and methodological specifics [10,20]. Such protocols generally require human verification of the generated content against original sources, underscoring the ongoing necessity of human oversight in research processes.
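One way such a verification protocol can be partially tooled is to look each generated reference up in a bibliographic index and flag anything without a plausible match for human checking. The sketch below uses the public Crossref API as one possible backend; the matching heuristic is deliberately crude and is an assumption, since the reviewed studies describe verification as a human activity rather than a specific pipeline.

```python
# Sketch of a reference-verification step: query Crossref for each
# AI-generated citation and flag weak matches for human review.
import requests

def verify_reference(citation: str) -> dict:
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation, "rows": 1},
        timeout=10,
    )
    items = resp.json()["message"]["items"]
    if not items:
        return {"citation": citation, "status": "no match: flag for human review"}
    top = items[0]
    return {
        "citation": citation,
        "best_match": top.get("title", ["<no title>"])[0],
        "doi": top.get("DOI"),
        "status": "candidate match: verify against source",  # a hit is not proof
    }

print(verify_reference(
    "Marshall & Wallace (2019) Toward systematic review automation, Syst Rev"
))
```

Even with such tooling, the final confirmation that a citation actually supports the claim it is attached to remains a human judgment.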

3.3.3. Access and Equity Concerns

Our analysis uncovered growing worries regarding equity and access in AI-enhanced research. Various studies highlight potential “research divides” between institutions equipped with advanced AI tools, computational resources, and technical expertise and those lacking them [12,27].
The capacity to integrate AI tools effectively differs greatly among institutions, with those having more resources likely enjoying competitive edges in research productivity and impact [10,26]. Several authors express worries that AI tools could exacerbate pre-existing power disparities in the global research ecosystem by favoring the viewpoints and methods present in the training data [11,12].
These equity issues also affect various domains and methodologies. Several authors highlight that AI tools might be more suited for specific research methods and subjects, which could advantage these areas regarding productivity and impact [26,27].

3.3.4. Epistemological and Authorship Challenges

The incorporation of AI in research prompts essential epistemological inquiries regarding knowledge creation and the identity of researchers. Numerous studies investigate how this integration disrupts conventional views on authorship, expertise, and the distinctions between human and computational input in knowledge generation [10,11].
As AI systems play a more significant role in research processes—from hypothesis generation to data analysis and manuscript drafting—issues surrounding proper attribution, acknowledgment, and responsibility emerge. Numerous authors highlight the absence of established norms and standards for recognizing AI contributions in research outputs [10,11].
These epistemological challenges also encompass inquiries into how AI-human collaboration influences the formulation of research questions, methodological selections, and interpretive frameworks. Numerous authors worry about the potential uniformity of academic thinking if comparable AI systems trained on analogous data are extensively utilized across research communities [8,21].

3.4. Approaches to Human-AI Collaboration for Enhancing Research Quality While Preserving Human Agency

Our analysis has revealed several promising strategies for human-AI collaboration in research that effectively balance efficiency with maintaining human agency and critical engagement. Figure 3 illustrates three distinct methods for integrating AI into research workflows. The Automation Model (left) shows AI taking over routine tasks, which provides high efficiency but low human agency, as seen in literature screening and data extraction [3,5]. The Augmentation Model (center) illustrates AI amplifying human abilities while maintaining agency, yielding a blend of efficiency, complementary strengths, and clear human oversight, exemplified by systems like Scholastic [8,25]. The Dialogue Model (right) depicts knowledge creation through iterative interactions between humans and AI, characterized by significant engagement, shared agency, and enhanced interpretive diversity, as shown in systems like PaTAT [13]. The models transition from left to right, reflecting a continuum of increasing human agency, alongside corresponding trade-offs in efficiency and automation.

3.4.1. Human-in-the-Loop Design

Research workflows that integrate human oversight at crucial decision junctures show great promise across various studies. These “human-in-the-loop” methods retain human judgment for elements of research that need interpretation, ethical reasoning, or creative insight, while utilizing AI for efficiency in more routine tasks [8,13,25].
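In screening-style tasks, the human-in-the-loop principle often reduces to a routing rule: automate only the decisions the model is confident about and send everything else to a reviewer. The thresholds in the sketch below are illustrative assumptions, not values reported by the studies above.

```python
# Confidence-based routing for human-in-the-loop screening (illustrative).
def route(record_id: str, p_relevant: float,
          include_at: float = 0.95, exclude_at: float = 0.05) -> str:
    if p_relevant >= include_at:
        return f"{record_id}: auto-include (p={p_relevant:.2f})"
    if p_relevant <= exclude_at:
        return f"{record_id}: auto-exclude (p={p_relevant:.2f})"
    return f"{record_id}: human review (p={p_relevant:.2f})"  # judgment retained

for rid, p in [("rec-001", 0.98), ("rec-002", 0.40), ("rec-003", 0.02)]:
    print(route(rid, p))
```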
[25] created and assessed Scholastic, a visual interface designed for human-AI collaboration in qualitative analysis. This tool allows researchers to retain interpretive control while leveraging AI-generated codes and patterns. The evaluation demonstrated that this method maintained interpretive diversity and improved efficiency over manual coding.
Similarly, [13] created PaTAT, a system designed for interactive rule synthesis in qualitative coding. This system facilitates a dialogue between researchers and AI, enabling researchers to refine and tailor coding rules according to their interpretive frameworks. This method has improved consistency while maintaining researcher autonomy in defining analytical categories.
Various systems in literature review processes showcase the importance of human oversight for verification and interpretation. [3] discovered that their TrialMind system was most efficient in a hybrid workflow, where AI handled the initial screening and extraction, while human researchers validated the outcomes and made final inclusion and synthesis decisions.

3.4.2. Stage-Appropriate Integration

Our analysis indicates that various research stages might gain from differing degrees of AI integration, where certain processes are more suitable for automation. Numerous studies advocate for stage-specific strategies that adjust the extent of AI participation according to the characteristics of the research task [8,10].
[8] created a temporal model for integrating AI in qualitative analysis, proposing that AI tools are particularly useful in the initial organization and categorization phases, while later interpretive stages require more human oversight. Their assessment demonstrated that this differentiated approach maintains interpretive depth while enhancing efficiency.
A comparable trend is observed in literature review processes. Numerous studies indicate that initial screening and data extraction are well-suited for automation, whereas critical appraisal and synthesis require more human engagement [3,5,17].
This stage-appropriate integration enables researchers to leverage AI capabilities where they provide the most significant benefit while maintaining human agency in areas of research that require judgment, creativity, or ethical reasoning.

3.4.3. Transparency and Documentation

Transparent documentation of AI usage in research has become essential in various studies. Numerous authors highlight the need to clearly record the research processes that utilized AI, the verification methods applied, and the possible limitations of AI-generated content [10,12].
[10] propose a documentation framework for AI-assisted research, which describes the AI tools utilized, prompting strategies applied, verification methods, and recognition of inherent biases or limitations. Their assessment indicates that this level of transparency bolsters research credibility and allows for suitable peer review.
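As a rough illustration of what such a record might contain, the sketch below lists plausible fields; the field names are assumptions for illustration, not the published schema of [10].

```python
# Hypothetical AI-use documentation record; field names are illustrative
# assumptions, not the schema published in [10].
ai_use_record = {
    "research_stage": "literature screening",
    "tool": "LLM-based title/abstract screener",
    "model_version": "<model name and version used>",
    "prompting_strategy": "structured inclusion/exclusion prompt",
    "human_oversight": "second reviewer re-screened all AI exclusions",
    "verification": "AI-extracted citations checked against source records",
    "known_limitations": "possible recall loss for non-English records",
}
for field, value in ai_use_record.items():
    print(f"{field}: {value}")
```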
Likewise, [12] highlights the necessity of recording verification processes for AI-generated content, especially regarding citations and methodological specifics. They recommend that this record-keeping be viewed as a vital part of the research methods sections within a research environment enhanced by AI.
Transparent documentation fulfills ethical and scientific roles by ensuring accountability in research integrity and promoting methodological growth through visible and replicable AI integration strategies.

3.4.4. Educational and Institutional Support

Numerous studies highlight the significance of educational and institutional backing in fostering effective human-AI collaboration within research settings. These strategies emphasize the enhancement of researcher skills, the establishment of organizational infrastructure, and the development of ethical guidelines for responsible AI integration [10,11,22].
[22] developed and evaluated a revised educational framework for research methods courses that explicitly incorporates AI literacy, including competencies in prompt engineering, output evaluation, and ethical reasoning about AI use. Evaluation showed that this approach enhanced students’ ability to engage critically with AI tools while maintaining research rigor.
[10] propose a framework to support AI integration in research, highlighting the need for access to computational resources, technical assistance, ethical guidelines, and practice communities. Their case studies indicate that establishing this infrastructure is vital for tackling equity issues and guaranteeing that the advantages of AI are widely available.
These educational and institutional methods acknowledge that successful collaboration between humans and AI necessitates technological advances and shifts in culture and organization to foster new research practices and skills.

4. Discussion

4.1. Theoretical Implications for Research Practice and Expertise

Our research presents important theoretical insights into research practice and expertise in the AI era. AI technologies that either automate or enhance numerous technical facets of research are upending the traditional view of research expertise as a matter of technical skill. Our results indicate that research expertise is increasingly defined by what [13] refers to as “complementary competencies”—skills that enhance, rather than replicate, AI capabilities.
These shifts in research expertise can be situated within established theoretical frameworks. The transition toward integrative, critical capacities aligns with Mode 2 Knowledge Production [60], which describes knowledge creation as increasingly transdisciplinary and context-driven. Our three models of human-AI collaboration reflect Distributed Cognition theory [61], where cognitive processes extend beyond individual minds into technological systems. AI tools function as Boundary Objects [62] that translate between computational and human domains, explaining challenges in reference accuracy and interpretive alignment. The dialogue model’s iterative knowledge creation parallels Knowledge Building Theory [63], while post-phenomenological approaches [64,65] illuminate how AI mediates researchers’ relationship to their objects of study. Critical Realism [66] offers a framework for maintaining epistemological vigilance toward AI-generated outputs, recognizing them as interpretations rather than direct representations of reality.
Figure 4 showcases the comprehensive conceptual framework demonstrating how artificial intelligence reshapes research processes and skills, augmented by theoretical insights clarifying these transformations. At its core, the framework outlines a six-phase research lifecycle—Literature Review, Study Design, Data Analysis, Knowledge Dissemination, Continuous Integration, and New Research Questions—surrounded by three concentric rings that represent dimensions of research competencies: technical (innermost), critical (middle), and social (outermost). Areas with high and medium AI impact are indicated by red and orange circles, with notable changes seen in Literature Review, Data Analysis, and the formulation of New Research Questions. The framework is further enhanced with theoretical overlays that offer explanatory richness: Mode 2 Knowledge Production [60] encompasses the entire framework, highlighting the movement towards transdisciplinary research; Distributed Cognition [61] bridges technical and critical domains, emphasizing cognitive extension into technological systems; Boundary Objects Theory [62] focuses on human-AI interactions; Knowledge Building Theory [63] guides collaborative advancement of knowledge; Post-phenomenological approaches [64,65] clarify AI’s mediating function in the perception of research; and Critical Realism [66] fosters epistemological awareness regarding AI outputs. This theoretically-informed visualization illustrates that the integration of AI transcends mere technological adoption, signifying a profound rethinking of research expertise and the processes of knowledge creation across various fields.
These competencies encompass critical judgment regarding the suitability and limitations of AI outputs, ethical considerations for responsible AI usage, and integrative thinking spanning different disciplines.
This evolution mirrors changes in other fields where AI has reshaped professional practices. Similar to how medical expertise has transitioned from rote memorization of facts to diagnostic reasoning and clinical judgment (with medical knowledge becoming more readily available via digital tools), research expertise seems to evolve from simply executing technical tasks to focusing on critical evaluation, creative integration, and ethical reasoning.
Our findings highlight several new models for human-AI collaboration in research, each with distinct impacts on researcher agency and identity. These models include “automation,” where AI tools take over routine tasks traditionally performed by humans [3,5]; “augmentation,” which enhances human capabilities while maintaining human agency [8,25]; and “dialogue,” where knowledge is developed through continuous human-AI interaction [13].
These models propose transforming the researcher’s role from solitary knowledge creators to facilitators of complementary human and computational abilities. This change questions conventional views on individual authorship and expertise, advocating for more collaborative models of knowledge production that clearly recognize contributions from both humans and computers.
Our findings emphasize significant epistemological tensions that emerge from integrating AI into research. The efficiency and scale provided by AI tools allow for more thorough evidence synthesis and larger analyses, which could improve the breadth and overall quality of research. However, issues related to interpretive homogenization, accuracy of references, and excessive dependence on AI indicate possible compromises in depth, diversity, and originality.
These tensions illustrate the broader discussions regarding the connection between human intelligence and machine intelligence. While AI is proficient in recognizing patterns, conducting statistical analysis, and synthesizing existing knowledge, human researchers bring contextual understanding, creative insight, ethical consideration, and critical reflexivity. The challenge is to create research practices that capitalize on both sets of complementary strengths.

4.2. Practical Implications for Balanced AI Integration

Based on our insights related to the fourth research question, we propose a framework for the balanced integration of AI into research workflows. This framework consists of three essential components, aiming to maximize benefits while tackling the identified challenges.
Firstly, research workflows must explicitly include human oversight at crucial decision-making moments, especially during research phases that involve interpretation, ethical judgment, or creative insight [8,13]. This entails human validation of AI-generated citations and references, critical review of AI-suggested codes or themes in qualitative analysis, and ethical evaluation of AI-generated hypotheses or research questions.
Secondly, researchers should implement transparent documentation practices concerning AI utilization in their research. This includes clearly explaining the specific research processes that incorporated AI, thoroughly documenting the verification methods for AI-generated content, and acknowledging the limitations and possible biases associated with AI-assisted methodologies.
Third, various research stages and fields might gain from distinct human-AI integration models. Our findings indicate that literature screening and data extraction could benefit from increased automation [3,5], while qualitative analysis and interpretation are best supported by more interactive, dialogue-driven methods [8,25]. Furthermore, hypothesis generation and research design are enhanced through augmentation approaches that uphold human creative agency [7,9].

4.3. Recommendations for Research Education and Training

Our findings carry important implications for the education and training of researchers. Research training programs need to implement updated competency frameworks that align with the evolving landscape of research expertise in the AI era [10,22]. This involves incorporating AI literacy into research methods curricula, enhancing skills to assess AI-generated content critically, and providing training in ethical frameworks for the responsible use of AI.
Successful training necessitates practical engagement with AI tools within real research settings [10,11]. This involves supervised experimentation with AI-supported literature reviews, training in critically assessing AI-produced qualitative codes, and familiarity with validation procedures for AI-created content.
Research training must integrate insights from various fields to tackle the complex challenges of AI integration [12,27]. This encompasses computer science views on AI’s strengths and weaknesses, ethical and philosophical viewpoints on responsible technology use, and information science methods for organizing and retrieving knowledge.

4.4. Institutional Policy Considerations

Our research indicates key factors that institutions should consider when formulating AI policies. They need to create strategies that tackle potential “research divides” by ensuring fair access to AI technologies and computational resources, delivering technical support and training to researchers in various fields, and establishing a common infrastructure for AI-enhanced research.
Institutions need to establish specific guidelines for the ethical application of AI in research. This includes mandates for transparency when reporting AI use, standards for validating AI-generated content, and protocols for proper attribution and acknowledgment. Additionally, institutions should reassess their evaluation and reward systems for research, minimizing focus on metrics that AI can easily manipulate (such as the volume of publications) while emphasizing the importance of critical insight, interpretive depth, and ethical judgment. Furthermore, they should create new metrics to evaluate the responsible integration of AI.

4.5. Future Research Directions

Our review highlights several key areas for future research regarding the influence of AI on research practices and skillsets. Long-term studies monitoring the evolution of researcher skills and practices with ongoing AI exposure are essential for understanding developmental pathways and lasting effects [10].
A more in-depth examination of the variations in AI integration across different disciplines and methodological traditions would yield valuable insights for customized research training and policy [26,27].
Research focused on technological solutions to address identified challenges—such as verification tools for AI-generated content, systems for transparent AI use documentation, and interfaces that maintain interpretive diversity—would help overcome current limitations [13,20].
Empirical studies assessing the effectiveness of various educational methods for developing AI-related research competencies are necessary to guide curriculum design and teaching practices [10,22].
Table 3. Key gaps in current research and proposed future directions.

Research Gap | Proposed Research Direction | Methodological Approach | Expected Impact
Longitudinal impacts | Track researcher skill development over time | Longitudinal studies, log data analysis | Understanding developmental trajectories
Domain variation | Compare AI integration across disciplines | Comparative case studies | Tailored approaches to research training
Verification tools | Develop tools for AI-generated content verification | Tool development and validation | Address reference accuracy challenges
Educational effectiveness | Evaluate different training approaches | Intervention studies, quasi-experimental designs | Inform curriculum development

4.6. Limitations

This systematic review has methodological limitations that should be considered when interpreting the findings. First, our search strategy focused on English publications, potentially excluding relevant insights from studies in other languages. This limitation may bias our understanding of AI’s impact across different contexts.
Second, the fast-changing landscape of AI technologies imposes a time constraint. Numerous AI tools reviewed in the studies have received substantial updates since their publication, which might restrict the relevance of specific findings to the latest versions of these technologies. Furthermore, our inclusion timeframe (2018-2025) may have overlooked earlier essential research on automation and computer-assisted research methods.
Third, there is significant variability among the studies included regarding their methodological approaches, disciplinary backgrounds, and research focuses. Although this diversity broadens the scope of our review, it also makes direct comparison and synthesis across studies more complex. Furthermore, the differing definitions of crucial concepts like “research skills” and “AI integration” pose additional difficulties for cohesive synthesis.
Fourth, most studies analyzed in this review utilized cross-sectional or short-term evaluation designs. This scarcity of longitudinal studies restricts our comprehension of how research competencies develop over time with prolonged AI exposure, as well as how initial efficiency gains or challenges may shift with continued use and enhanced literacy.
Fifth, our findings may have been influenced by publication bias, as studies that show positive or significant effects of AI on research practices are more likely to be published than those that report null or negative effects.
Finally, although systematic, our quality assessment process relied on tools designed for conventional research methodologies, which may not fully capture the unique methodological considerations of studies examining emerging technologies. Adapting the assessment criteria partially addressed this limitation, but standardized quality assessment frameworks specific to AI-human interaction studies are still in development.
Even with these limitations, the consistent results across various studies and contexts indicate that the main patterns highlighted in this review reflect strong trends in the way AI is transforming research practices and skills. Future reviews should aim to include a broader range of languages, utilize longitudinal designs, and implement standardized assessment frameworks tailored to AI’s role in research contexts.

5. Conclusions

This systematic review compiles evidence on how AI technologies are changing research practices and redefining essential research competencies. Our findings demonstrate significant efficiency improvements across various research processes, while also highlighting new challenges concerning interpretive diversity, reference accuracy, and equitable access. The conventional perspective of research expertise, which focused mainly on technical skills, is evolving into a more nuanced understanding that prioritizes critical judgment, ethical reasoning, and the effective integration of human and computational strengths.
The conceptual framework we present—exploring AI’s effects on research lifecycle and competency dimensions—provides a structure for comprehending and managing this transformation. By emphasizing technical changes and epistemological effects, this framework aids researchers, educators, and institutions in creating strategies that harness AI’s potential while maintaining the human values and critical thinking essential for meaningful scholarly inquiry.
As AI capabilities evolve, the practice of research will probably continue to transform. By creating research methods that thoughtfully incorporate AI tools while maintaining human agency and critical engagement, the research community can leverage the transformative power of these technologies. This ensures that research continues to enhance human comprehension and tackle complex societal issues.

Appendix A. Characteristics of Included Studies in the Systematic Review

Study Type | Number of Studies | Percentage
Empirical (Quantitative) | 18 | 36.7%
Empirical (Qualitative) | 13 | 26.5%
Empirical (Mixed Methods) | 9 | 18.4%
Conceptual/Framework Development | 9 | 18.4%
Total | 49 | 100%

A.1. Disciplinary Distribution

Discipline | Number of Studies | Percentage
Computer Science/HCI | 16 | 32.7%
Medical/Health Sciences | 12 | 24.5%
Information Science/Library Studies | 7 | 14.3%
Education/Research Methods | 6 | 12.2%
Social Sciences | 4 | 8.2%
Engineering | 2 | 4.1%
Interdisciplinary | 2 | 4.1%
Total | 49 | 100%

A.2. Geographic Distribution

Region/Country | Number of Studies | Percentage
North America | 22 | 44.9%
- United States | 18 | 36.7%
- Canada | 4 | 8.2%
Europe | 15 | 30.6%
- United Kingdom | 8 | 16.3%
- Netherlands | 4 | 8.2%
- Other European | 3 | 6.1%
Asia | 8 | 16.3%
- China | 5 | 10.2%
- Singapore | 2 | 4.1%
- Other Asian | 1 | 2.0%
Australia/New Zealand | 3 | 6.1%
International Collaborations | 1 | 2.0%
Total | 49 | 100%

A.3. Study Focus by AI Application

AI Application in Research | Number of Studies | Percentage
Literature Review Automation | 14 | 28.6%
Qualitative Data Analysis | 12 | 24.5%
Research Framework Development | 8 | 16.3%
Hypothesis Generation | 5 | 10.2%
Knowledge Dissemination | 5 | 10.2%
Quantitative Analysis Support | 3 | 6.1%
Education/Training on AI in Research | 2 | 4.1%
Total | 49 | 100%

A.4. Methodological Approaches in Empirical Studies

Methodological Approach | Number of Studies | Percentage of Empirical Studies (n=40)
Experimental/Quasi-experimental | 16 | 40.0%
Case Studies | 9 | 22.5%
Surveys/Questionnaires | 8 | 20.0%
Think-aloud/User Studies | 4 | 10.0%
Longitudinal Designs | 3 | 7.5%
Total | 40 | 100%

References

  1. Marshall, I.J.; Wallace, B.C. Toward systematic review automation: a practical guide to using machine learning tools in research synthesis. Syst. Rev. 2019, 8, 1–10. [Google Scholar] [CrossRef] [PubMed]
  2. Bastian, H.; Glasziou, P.; Chalmers, I. Seventy-Five Trials and Eleven Systematic Reviews a Day: How Will We Ever Keep Up? PLOS Med. 2010, 7, e1000326. [Google Scholar] [CrossRef] [PubMed]
  3. Wang, Z. , Cheng, F., Li, H., Wang, Y., Ning, Z., Liu, C.,... & Sun, J. (2024). TrialMind: An LLM-based end-to-end pipeline for evidence synthesis. Nature Machine Intelligence.
  4. Cao, C. , Bobrovitz, N., Sharma, K., Gu, H., Mei, X., Chen, Z.,... & He, Z. (2024). Framework chain-of-thought prompting for screening in systematic reviews. Systems Medicine.
  5. Schoot, R., Bruin, J., Schram, R., Zahedi, P., Boer, J., Weijdema, F., ... & Oberski, D. (2020). An open source machine learning framework for efficient and transparent systematic reviews. Nature Machine Intelligence, 466.
  6. Tran, V. T. , Ravaud, P., Porcher, R. (2023). GPT-3.5 "triage" tool for systematic review screening. Journal of Clinical Epidemiology.
  7. Gao, J. , Chan, H., Ng, M., Leow, C. Y., Chen, W., & Perrault, S. (2023). CoAIcoder: Examining the effectiveness of AI-assisted human-to-human collaboration in qualitative analysis. ACM Transactions on Computer-Human Interaction, 29.
  8. Feuston, J.L.; Brubaker, J.R. Putting Tools in Their Place: The Role of Time and Perspective in Human-AI Collaboration for Qualitative Analysis. Proc. ACM Human-Computer Interact. 2021, 5, 1–25. [Google Scholar] [CrossRef]
  9. Abdel-Rehim, S. , Chen, X., Zhang, Y., Wang, G., Zhao, L., & Li, X. (2024). GPT-4 generated novel drug synergies validated in breast cancer laboratory experiments. Nature Medicine.
  10. Daniel, F. , & Wen, J. (2024). GenAI's impact on cognitive, technical, interpersonal skills: A systematic review of research competencies. Higher Education Research & Development.
  11. Foley, J. M. , Sutherland, A., & Molina-Azorin, J. F. (2024). Critical imagination for AI in qualitative literature reviews: Balancing efficiency and interpretive depth. Qualitative Inquiry.
  12. Le Dinh, T. , Sohail, S., Kumar, P., & Ramachandran, A. (2024). Human-centered AI SLR framework: Efficiency and rigor in literature reviews. Information Systems Journal.
  13. Gebreegziabher, S. A. , Zhao, Y., Fan, J. M., Liang, S. H., Yang, L., Rho, S., & Li, T. J. (2023). PaTAT: Human-AI collaborative qualitative coding with explainable interactive rule synthesis. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 64.
  14. Przybyła, P.; Brockmeier, A.J.; Kontonatsios, G.; Le Pogam, M.; McNaught, J.; von Elm, E.; Nolan, K.; Ananiadou, S. Prioritising references for systematic reviews with RobotAnalyst: A user study. Res. Synth. Methods 2018, 9, 470–488. [Google Scholar] [CrossRef] [PubMed]
  15. Trad, M. , Yap, J., Tang, E., & Mahtani, K. (2024). An end-to-end LLM, RAG pipeline for title/abstract & full-text screening in systematic reviews. Systematic Reviews.
  16. Sanghera, R. , Soltan, A., Kempa-Liehr, A., & O'Sullivan, D. (2024). LLM ensembles for systematic review screening: Outperforming human sensitivity metrics. Journal of Medical Internet Research.
  17. Marshall, I. , Nye, B., Kuiper, J., Noel-Storr, A., Marshall, R., Maclean, R.,... & Wallace, B. (2022). RobotReviewer Live: A system for continuous living systematic reviews. Implementation Science.
  18. Shemilt, I. , Noel-Storr, A., Thomas, J., Featherstone, R., & Marshall, I. (2021). Living map of COVID-19 research: Cost-effectiveness of automated evidence updates. Systematic Reviews.
  19. Gartlehner, G., Affengruber, L., Titscher, V., Noel-Storr, A., Dooley, G., Ballarini, N., & König, F. (2023). Claude 2 LLM for data extraction from RCTs. Computers in Human Behavior, 37.
  20. Khan, A. , Li, H., Wang, F., Cheng, F., & Sun, J. (2025). LLM cross-critique workflows for data extraction in living systematic reviews. Journal of Medical Internet Research.
  21. Schroeder, H. , Li, A., Moynihan, C., & Schoenebeck, S. (2024). Human-AI qualitative analysis: Empirical assessment of benefits and limitations in thematic coding. Qualitative Health Research.
  22. Gonsalves, J. (2025). Revised Bloom's taxonomy for AI-driven critical thinking and research competencies. Studies in Higher Education.
  23. Gao, J. , Chan, H., Ng, M., Chen, W., & Perrault, S. (2023). CollabCoder: A GPT-Powered WorkFlow for Collaborative Qualitative Analysis. Companion Publication of the 2023 Conference on Computer Supported Cooperative Work and Social Computing, 44.
  24. Gao, J. , Chan, H., Ng, M., & Perrault, S. (2023). CollabCoder: A Lower-barrier, Rigorous Workflow for Inductive Collaborative Qualitative Analysis with Large Language Models. Proceedings of the CHI Conference on Human Factors in Computing Systems, 26.
  25. Hong, M. , Yang, Z., Barnett, C., Wu, X., Gulotta, R., & Szafir, D. (2022). Scholastic: Graphical Human-AI Collaboration for Inductive and Interpretive Text Analysis. Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology, 27.
  26. Pilz, M. , Flemming, D., Gläser, J., & Vey, J. (2024). Semi-automated title-abstract screening using natural language processing and machine learning. Systematic Reviews.
  27. Edwards, P. , Dhaliwal, N., Dutz, M., & Phillips, M. (2023). ADVISE: Global development evidence synthesis using AI-assisted screening and prioritization. Journal of Development Economics.
  28. Citations study showing "GPT-4 achieved only 13.8% recall for systematic review references" (2024). Journal of Medical Internet Research.
29. Khan, A., Li, H., Cheng, F., & Wang, Y. (2025). Benchmarks for LLM extraction capabilities in systematic reviews: A critical assessment. Systematic Reviews.
30. Wallace, B. C., Trikalinos, T. A., Lau, J., Brodley, C., & Schmid, C. H. (2010). Semi-automated screening of biomedical citations for systematic reviews. BMC Bioinformatics, 11, 55.
31. Perlman-Arrow, O., Silva, J., Cohen, K., & Miller, J. (2022). Qualitative analysis assistants: User feedback and evaluation of AI-enhanced screening tools. Family Medicine and Community Health, 14.
32. Li, Z., Liu, B., Zhu, Y., & Wang, S. (2020). Continuous active learning for systematic reviews. Information Processing & Management, 7.
33. Guo, E., Lin, Z., Aldahash, K., Soto, J., von Elm, E., Salamanca-Sanabria, A., & Naugler, C. (2023). Automated Paper Screening for Clinical Reviews Using Large Language Models: Data Analysis Study. Journal of Medical Internet Research, 66.
34. Ali, S., Rehman, F., Choksi, M., & Naaman, M. (2025). Embedding separability as a predictor of AI screening effectiveness in educational systematic reviews. Educational Research Review.
35. Rogers, L., Dhaliwal, N., Moore, C., & Osei-Assibey, A. (2024). Machine learning modules for rapid and living public health reviews: Operational case study. Global Health: Science and Practice.
36. Khan, Z., Bing, J., Leung, V., & Ahmed, S. (2024). AI hallucinations in LLM extraction capabilities: Assessment in academic context. BMC Medical Research Methodology.
37. Cheng, Y., Zhao, F., Liu, Z., & Ma, Y. (2024). EXACT: A tool for extracting quantitative results from ClinicalTrials.gov. Journal of Biomedical Informatics.
38. Korhonen, A., Teufel, S., Wu, H., & Marshall, I. (2023). Evaluating LLMs for abstract screening in software engineering: Efficiency and accuracy assessment. 24th International Conference on Software Engineering.
39. Chen, J., Horn, M., Wang, D., & Zhang, D. (2025). Processes Matter: How ML/GAI Approaches Could Support Open Qualitative Coding of Online Discourse Datasets. ArXiv.
40. Yamada, Y., Cosatto, E., Mehra, R., & Kawanabe, M. (2020). Concept encoder DNN and active learning for efficient qualitative screening in systematic reviews. Machine Learning for Healthcare Conference, 12.
41. Jiao, X., Cai, Y., Pan, B., & Chen, T. (2025). Evolution of research competencies in health professions education: AI impact assessment. Academic Medicine.
42. Wallace, B. C., Trikalinos, T. A., Lau, J., Brodley, C., & Schmid, C. H. (2010). Semi-automated screening of biomedical citations for systematic reviews. BMC Bioinformatics, 11, 55.
43. Chen, X., Chen, W., Yu, Z., & Wang, Y. (2023). Large-scale survey of AI tool adoption in research: Perceived benefits, concerns, and usage patterns. Nature Human Behaviour.
44. Tsafnat, G., Glasziou, P., Choong, M. K., Dunn, A., Galgani, F., & Coiera, E. (2014). Systematic review automation technologies. Systematic Reviews.
45. Liu, J., Yang, J., Liu, M., & Zhou, M. (2023). SWIFT-ActiveScreener: Usability and accuracy assessment in a large PTSD treatment review. Journal of Traumatic Stress, 2.
46. Ryan, G., Koster, J., Lai, E., & Thomas, J. (2024). From technical to higher-order skills: Shifting competency framework in AI-transformed clinical research. Journal of Clinical Epidemiology.
47. Lennon, R. P., Fraleigh, R., Van Scoy, L. J., Keshaviah, A., Hu, X. C., Snyder, B. L., Miller, E. L., Calo, W. A., Zgierska, A. E., & Griffin, C. (2021). Developing and testing an automated qualitative assistant (AQUA) to support qualitative analysis. Family Medicine and Community Health, 9, e001287.
48. Wachinger, J., Oliffe, J. L., Goldenberg, S. L., Bostwick, D., Wirth, M., & McMahon, S. A. (2024). Comparing ChatGPT and a human researcher in qualitative data analysis: Diversity and homogeneity patterns. Qualitative Health Research, 12.
49. Van Mossel, S., Wilschut, J., Kroeze, W., & Saing, S. (2025). AI-assisted systematic literature reviews in health economics: Practical implementation guide. PharmacoEconomics.
50. Thomas, J., McDonald, S., Noel-Storr, A., Shemilt, I., Elliott, J., & Marshall, I. (2023). SOLES: Systematic online living evidence summaries - technical implementation and evaluation. Research Synthesis Methods.
51. Guo, W., Li, R., Chen, X., & Liu, Y. (2024). Optimization of prompts for systematic review screening: Impact on efficiency and recall. Journal of Medical Internet Research.
52. Hamel, C., Kelly, S. E., Thavorn, K., Rice, D. B., Wells, G. A., & Hutton, B. (2020). An evaluation of DistillerSR's machine learning-based prioritization tool for title/abstract screening. BMC Medical Research Methodology, 70.
53. Rabby, G., Abdelrehim, A., Chen, X., Nguyen, M., & Wang, G. (2025). MC-NEST: Combining Monte Carlo Tree Search and LLMs for scientific hypothesis generation. Nature Scientific Reports.
54. Liu, K., Wang, H., Liu, Z., & Li, Y. (2025). Claude for automated evidence extraction from clinical trials: Feasibility assessment. Journal of Medical Internet Research.
55. Goldfarb-Tarrant, S., Marchant, R., Sánchez, R., Pandya, M., & Lopez, A. (2020). ML retrieval and extraction benchmarking across domains. Conference on Empirical Methods in Natural Language Processing.
56. Marathe, M., & Toyama, K. (2018). Semi-automated coding for qualitative research: A user-centered inquiry and initial prototypes. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 68.
57. Zhan, Y., Tan, C., & Liu, Y. (2023). Review Copilot: AI-assisted screening for systematic reviews and meta-analyses. Journal of Medical Internet Research.
58. Gates, A., Johnson, C., & Hartling, L. (2018). Technology-assisted title and abstract screening for systematic reviews: A retrospective evaluation of the Abstrackr machine learning tool. Systematic Reviews, 7, 1–9.
59. Richards, K. A. R., & Hemphill, M. A. (2018). A practical guide to collaborative qualitative data analysis. Journal of Teaching in Physical Education, 37, 225–231.
60. Baber, Z., Gibbons, M., Limoges, C., Nowotny, H., Schwartzman, S., Scott, P., & Trow, M. (1995). The New Production of Knowledge: The Dynamics of Science and Research in Contemporary Societies. Contemporary Sociology, 24, 751.
61. Hutchins, E. (1995). Cognition in the Wild. Cambridge, MA: MIT Press.
62. Star, S. L., & Griesemer, J. R. (1989). Institutional ecology, 'translations' and boundary objects: Amateurs and professionals in Berkeley's Museum of Vertebrate Zoology, 1907–39. Social Studies of Science, 19(3), 387–420.
63. Scardamalia, M., & Bereiter, C. (2006). Knowledge building: Theory, pedagogy, and technology. In K. Sawyer (Ed.), Cambridge Handbook of the Learning Sciences (pp. 97–118). New York: Cambridge University Press.
64. Ihde, D. (1990). Technology and the Lifeworld: From Garden to Earth. Bloomington, IN: Indiana University Press.
65. Verbeek, P.-P. (2005). What Things Do: Philosophical Reflections on Technology, Agency, and Design. University Park: Pennsylvania State University Press.
66. Bhaskar, R. (1975). A Realist Theory of Science. London: Verso.
Figure 2. Conceptual framework illustrating AI’s impact on research processes and competencies.
Figure 3. Models of human-AI collaboration in research contexts.
Figure 4. Conceptual framework illustrating AI’s impact on research processes and competencies with theoretical overlays.
Table 2. Mapping of the key tensions identified.

Tension | Description | Mitigation Strategies | Key References
Efficiency vs. Depth | AI accelerates processes but may reduce interpretive depth | Stage-appropriate integration; dedicated time for deep engagement | [7,8,21,25]
Reference Accuracy vs. Research Integrity | AI can generate plausible but false citations | Verification protocols (see the sketch after this table); transparent documentation | [20,28,29]
Access vs. Equity | Uneven distribution of AI resources creates research divides | Institutional infrastructure; shared resources | [10,12,26,27]
Expertise vs. Automation | Shifting nature of research expertise and identity | Revised competency frameworks; education in “complementary skills” | [8,10,13,21,22]
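As one concrete illustration of the verification protocols named in Table 2, the short Python sketch below checks an AI-suggested citation against the public CrossRef REST API and flags weak matches for human review. This is a minimal sketch rather than a tool evaluated in the reviewed studies; the 0.9 similarity threshold and the example title are illustrative assumptions.

import requests  # standard HTTP client
from difflib import SequenceMatcher

CROSSREF_API = "https://api.crossref.org/works"

def verify_reference(cited_title: str) -> dict:
    """Search CrossRef for a cited title and report how closely it matches."""
    resp = requests.get(
        CROSSREF_API,
        params={"query.bibliographic": cited_title, "rows": 1},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    if not items:
        # Nothing indexed under this title: treat as unverified.
        return {"verified": False, "reason": "no CrossRef match"}
    best = items[0]
    found_title = (best.get("title") or [""])[0]
    similarity = SequenceMatcher(
        None, cited_title.lower(), found_title.lower()
    ).ratio()
    return {
        "verified": similarity >= 0.9,  # illustrative threshold, not a standard
        "matched_title": found_title,
        "doi": best.get("DOI"),
        "similarity": round(similarity, 2),
    }

if __name__ == "__main__":
    # A low similarity score suggests a possibly hallucinated reference that
    # needs manual checking before inclusion in a review.
    print(verify_reference(
        "Semi-automated screening of biomedical citations for systematic reviews"
    ))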
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits the free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.