Preprint
Article

This version is not peer-reviewed.

A Qualitative Approach to EFL Postgraduates’ GenAI-Assisted Research Writing Within Social Sciences

A peer-reviewed article of this preprint also exists.

Submitted: 16 September 2025
Posted: 18 September 2025


Abstract
In academic L2 English / EFL (English as a Foreign Language) writing, GenAI (Generative Artificial Intelligence) and other digital tools are being extensively explored. However, this exploration of AI for academic and research writing has received less attention at postgraduate levels, and even less across different scientific fields. This study examines the topic within the Social Sciences at the University of Extremadura, Spain. Seven participants with a B2 English level or higher enrolled in a 10-hour hybrid course on GenAI for academic English writing in October and November 2024, focusing on AI tools and Broad Data-Driven Learning (BDDL) resources (e.g., simple online corpus tools) to assist their writing. Participants’ feedback was collected through qualitative means (in-class discussions, annotated writing tasks, and a final survey). Overall findings indicate notably positive responses to, and uses of, these tools for improving their texts (e.g., linguistic analysis, lexical-grammatical refinement, and stylistic improvement). Participants also revealed varied approaches and strategies in their management of GenAI. Despite the study’s small sample, these preliminary findings suggest that postgraduate researchers in the Social Sciences combine expert and linguistic knowledge effectively, demonstrating linguistic awareness and attention to digital literacy.

1. Introduction

The rise of Generative Artificial Intelligence (GenAI) is rapidly transforming academic practices. In higher education, researchers have begun to explore how different educational communities respond to GenAI, focusing on teaching methods, ethical concerns, and policy development. Studies examine GenAI’s impact across various levels—from broad institutional practices to individual learning strategies—highlighting its influence on academic skills and pedagogical tools (Chanpradit, 2025; Feng et al., 2025; Godwin-Jones, 2024; Pigg, 2024).
A growing body of work investigates how GenAI affects academic writing in second language (L2) and English as a Foreign Language (EFL) settings at the university level (Azennoud, 2024; Cheung & Crosthwaite, 2025; Huang & Deng, 2025; Jiang & Su, 2025). Most studies report benefits such as improved text quality and writing efficiency. However, they also note critical limitations, including weak authorial voice, lack of personalization, and excessive use of technical jargon.
Research has mainly focused on undergraduates, emphasizing GenAI’s potential to reduce workload and offer tailored support. At the same time, concerns persist over users’ overdependence on GenAI outputs, breaches of academic integrity, and insufficient development of critical thinking skills (Farrokhnia et al., 2024; Rowland, 2023). Fewer studies have examined how more advanced academic writers, such as doctoral and postdoctoral researchers, use GenAI (Xia et al., 2025). These users tend to show stronger epistemological awareness, better judgment when evaluating GenAI content, and a more nuanced understanding of how Large Language Models (LLMs) function (Ruff et al., 2024; Williams, 2024; Pérez Paredes et al., 2025). In the current context of rapidly evolving GenAI, how these users refine and adapt feedback and language choices deserves close attention.
Additionally, Broad Data-Driven Learning (BDDL) offers a valuable approach in this context. BDDL draws on widely available web-based tools—such as online concordancers, simple corpus interfaces, digital dictionaries, translation services, and collocation finders—to support language learning and writing (Curado Fuentes, 2025a; Pérez-Paredes, 2024; Ordoñana-Guillamón et al., 2024). Their integration with GenAI to target critical thinking skills and digital literacy thus also deserves further exploration.
This article presents a case study of seven EFL postgraduates within Social Sciences disciplines at the University of ----, Spain. These participants took part in a 10-hour course on GenAI for academic writing during October and November 2024. The study explores their strategies and attitudes toward using GenAI and BDDL tools to support their research writing process.
The research is qualitative and focuses on participants’ written tasks, survey responses, and reflections during and after lectures and discussions—both online and offline. Findings show that participants viewed GenAI and BDDL tools positively for pre-writing and rewriting, while also emphasizing the importance of human judgment in the research writing process. Small variations were identified according to individual use, disciplinary approaches, and even academic status. For example, the value of these tools for academic writing activities was appreciated more by the Tourism participants and by tenured faculty members.
This article begins with a review of relevant literature on GenAI in academic writing, followed by the study’s research questions and methodology. It then presents the results, discusses key insights, and concludes with implications, limitations, and directions for future research.

2. Literature Review

2.1. GenAI and Academic L2 English Writing

The rise of GenAI tools—such as ChatGPT, Gemini, and Co-Pilot—has influenced every stage of the academic writing process, including pre-writing, drafting, and revising (Pigg, 2024). Their usefulness depends largely on the quality of user input, which must include clear goals, context, and guidance. These tools have shown promise in higher education, especially for writers working in L2 English contexts (Cordero et al., 2024; Ingley & Pack, 2023; Godwin-Jones, 2024).
Research shows that academic writers often turn to GenAI for idea generation, outlining, and drafting (Nordling, 2023; Pigg, 2024). At postgraduate levels, doctoral and post-doctoral researchers use GenAI to summarize texts, synthesize literature, locate research gaps, and simplify complex information. These tools are also generally seen as helpful for improving vocabulary, grammar, tone, and overall textual cohesion (Barrot, 2023; Ji et al., 2023; Nordling, 2023).
Still, human revision remains essential. Writers must refine GenAI-generated output to correct errors, tailor language, and insert their own authorial voice (Berber-Sardinha, 2024; Markey et al., 2024). One common shortfall of GenAI textual output is the lack of personal engagement with readers. These texts often miss key meta-discoursal elements, such as evaluative language or self-referencing strategies (Jiang & Hyland, 2024; Mo & Crosthwaite, 2025; Zhang & Crosthwaite, 2025).
As the literature suggests, a major concern among EFL / L2 writers is how to maintain authenticity and depth in their writing. GenAI often falls short in generating appropriate register, nuanced phrasing, and stylistic variation—elements crucial for expressing insights. As Pigg (2024) notes, successful academic writers must effectively focus on evaluating and revising AI output by drawing on their own disciplinary and linguistic expertise.

2.2. EFL / L2 Postgraduate Writers’ Use of GenAI Across Scientific Disciplines

Recent studies have explored how GenAI supports academic EFL / L2 writing across disciplines, though most focus on undergraduate use (Xia et al., 2025). Few have examined how doctoral and post-doctoral researchers apply these tools in their scientific fields. A search of mainstream databases (Scopus, DOAJ, and Google Scholar) on this topic, conducted in August 2025, yielded few results. In these few studies, ChatGPT emerges as the most popular platform, likely due to its earlier adoption compared to newer tools like Gemini or Co-Pilot.
In Experimental Sciences and Technology, postgraduate researchers seem to mostly appreciate the use of GenAI for providing correct grammar and vocabulary, and less for content alignment with their thematic development (Khuder, 2025; Kramar et al., 2024; Liu & Wang, 2024; Ruff et al., 2024; Smit et al., 2025). Participants in these studies also use translation and paraphrasing tools, but they tend to rely more heavily on ChatGPT as their use progresses. At the same time, concerns about ethics and academic integrity are frequently raised in these contexts (Kramar et al., 2024; Smit et al., 2025).
In Health Sciences, many postgraduate participants find ChatGPT useful for planning and revising texts, especially to improve accuracy and meet disciplinary conventions (Williams, 2024). Most errors identified in the tool’s output involve the misapplication of technical terms and a lack of contextual relevance. Compared to tools like Bing or Bard, ChatGPT proves more effective in these discipline-specific writing tasks. Milton et al. (2024), upon surveying over 300 postgraduate students in Health Sciences at an Indian university, find that most students respond very positively to using ChatGPT. However, these authors also observe concerns among participants about becoming too dependent on AI tools, which could undermine more independent ways of working.
In Social Sciences, Jacob et al. (2023), focusing on one research writer in Education, analyze how she critically evaluates ChatGPT’s limitations—such as factual inaccuracies and bias—while adopting its linguistic suggestions to revise her work. The student also focuses on correcting repetitive wording and seeking peer input throughout the process of improving her text. The authors conclude that this type of frequent GenAI user is fully aware of the demands of consistent oversight and proofreading. With a similar qualitative method, Curado Fuentes (2025b) examines four postgraduate writers’ developments in Tourism, observing that this type of user, already experienced with obtaining support from digital tools for writing, adapts AI technology effectively for revising research texts.
Overall, albeit scarce, these findings suggest that postgraduate L2 writers recognize both the strengths and risks of GenAI. They tend to approach these tools with greater critical awareness and domain knowledge. Ethical concerns remain central at postgraduate levels, especially regarding plagiarism and loss of voice (Costa et al., 2024). As a result, academic writers’ successful approaches at this level involve shifting from a product-centered view of writing to a more process-focused approach—one that emphasizes contextualization and the development of an individual voice. Moreover, doctoral and post-doctoral writers are often proficient in specialized terminology and conceptual frameworks. This expertise helps them to evaluate and refine GenAI-generated content, whereas their main linguistic challenges often involve mastering phraseology and discipline-specific lexico-grammatical nuances (Laso Martín & Comelles Pujadas, 2025).

2.3. Broad Data-Driven Learning (BDDL) for Academic Writing

Alongside GenAI, a Broad Data-Driven Learning (BDDL) approach offers linguistic support for academic writers. BDDL incorporates user-friendly online tools—such as collocation finders, easy-to-use concordancers, and digital dictionaries—that help learners resolve language-related questions (Curado Fuentes, 2025a; Pérez-Paredes, 2024; Ordoñana-Guillamón et al., 2024). These resources include large collections of human-written texts, offering authentic linguistic input and helping writers to expand their repertoire of academic phraseology, thus suitably complementing and contrasting GenAI-generated output (Crosthwaite & Baisa, 2023), and fostering linguistic awareness and autonomy across disciplines (Ordoñana-Guillamón et al., 2024, p. 87).
Another advantage of BDDL is its shifting focus from complex data analysis to practical language learning (Pérez-Paredes, 2024, p. 218). It enhances specific linguistic skills while supporting critical digital literacy. These resources also promote independent learning and reflective thinking—key competencies for postgraduate researchers (Criollo et al., 2024). Ultimately, BDDL fosters flexible, self-directed learning environments that align well with the needs of advanced L2 academic writers. When used in combination with GenAI, these tools can support more informed, precise, and effective academic writing.

3. Research Questions

This case study addresses a gap in the literature by attempting to answer the following two questions:
How do doctoral and post-doctoral L2 / EFL writers within Social Sciences tend to approach their academic writing using GenAI and BDDL tools?
What are these writers’ perceptions and ideas regarding the integration of these tools throughout the process?

4. Methodology

This study employed a local case study design focused on a short course on GenAI for academic writing. The course lasted 10 hours (eight hours of instruction and two hours for task completion) and was offered to doctoral and post-doctoral researchers at the University of -----, Spain, in October and November 2024. The course combined in-person seminars, online discussions, and hands-on tasks. Core activities included a lecture on prompt engineering, linguistic analysis, and critical reflections about GenAI use for academic writing. Another important component was the integration of GenAI with BDDL tools, which were explained and showcased as simple, easy-to-use concordancers and collocation finders to support linguistic decisions.
The methodology followed standard practices for qualitative case studies (Mabry, 2008; Sena, 2024), drawing on three primary data sources: 1) classroom observation, 2) analysis of participants’ written tasks, and 3) a post-course survey. Triangulating these data allowed for a comprehensive view of participants’ learning progress and attitudes.

4.1. Participants

The course attendants were researchers and lecturers from Social Sciences disciplines: Business Management (one participant), Education (three), and Tourism (three). All attended both face-to-face and online sessions. Five participants were tenured faculty members: Paloma and Valeria (Education), Aurora and Juan (Tourism), and Pablo (Business Management), and two were doctoral researchers (María in Education and Begoña in Tourism). All had at least a B2 level of English, a requirement for course participation. Each gave informed consent for their coursework to be analyzed; all names used here are pseudonyms to protect individual privacy.

4.2. Data Collection Instruments

4.2.1. In-Class Observation

Observation took place during a four-hour in-person seminar and a four-hour Zoom session. Notes were taken on participants’ contributions during the seminar, while the online session was recorded for analysis. The in-person seminar included lectures (delivered by the author and a colleague from the University of Murcia) using slide presentations. While the instructors spoke for about 80% of the time, students were encouraged to participate actively through questions and discussion. In the online session, the first half focused on guided instruction. In the second half, participants worked in pairs in breakout rooms, with the author of this paper rotating among rooms to provide support and gather observational data on engagement and tool use.

4.2.2. Final Writing Task

Participants completed a written assignment using GenAI and BDDL tools to develop an academic text relevant to their own studies. The task prompt instructed:
“Use GenAI for any stage(s) of your academic writing process—pre-writing, drafting, writing, and / or rewriting—and with any type of text (e.g., research paper, teaching material, technical document). In addition to GenAI, use concordance tools to support your writing. Clearly describe each step you followed and explain how the tools were used.”
Analysis of these tasks focused on both micro- and macro-level writing issues. These included lexical choices, grammar, text organization, authorial stance, thematic coherence, and the participants’ ability to manage discourse-level features.

4.2.3. Final Survey

An online survey (via Google Forms) was administered after course completion to assess participants' perceptions. The survey was anonymous, but it required participants to state their academic / professional position and university degree. The survey included 33 Likert-scale items (1–5), with 17 questions on GenAI and 16 on corpus tools. Items measured general attitudes, perceived usefulness for language improvement, ease of use, and intent to continue using the tools, based on the model provided by Hua et al. (2024). The instrument showed strong internal consistency (Cronbach’s α > 0.96). The survey also included three open-ended questions asking participants to: 1) Identify the most helpful tool; 2) Describe which types of texts they used the tools for; 3) Reflect on both advantages and limitations they encountered.
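For readers unfamiliar with the internal-consistency statistic reported above, Cronbach's α is computed from the variances of the individual Likert items and of respondents' total scores. The following is a minimal illustrative sketch in Python; the item scores shown are entirely hypothetical and are not the study's actual survey data.

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha for a list of k item-score columns.

    items: list of k lists, each holding one item's scores
           across the same n respondents.
    """
    k = len(items)
    n = len(items[0])
    # Sum of the sample variances of each individual item
    item_vars = sum(statistics.variance(col) for col in items)
    # Variance of each respondent's total score across all items
    totals = [sum(col[i] for col in items) for i in range(n)]
    total_var = statistics.variance(totals)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical 5-point Likert responses: 3 items, 5 respondents
scores = [
    [5, 4, 5, 3, 4],
    [5, 4, 4, 3, 4],
    [4, 5, 5, 3, 4],
]
print(round(cronbach_alpha(scores), 2))  # → 0.84
```

Values above roughly 0.9, as reported for this survey, indicate that the items measure a highly consistent underlying construct.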

5. Results

5.1. In-Class Observation

This part of the study focused on observing participants' initial ideas, habits, and interactions with GenAI. These observations provide a baseline for understanding how they approach individual tasks and respond to survey questions.
The most experienced participants, the tenured faculty members, led most of the discussions. The two doctoral researchers, in contrast, listened, agreed, and took notes more frequently. A key point that arose was the discussion of academic integrity and ethical issues related to GenAI. For example, Juan was very enthusiastic about GenAI's potential for summarizing research, but he also pointed out that the tools often provided incorrect references. Aurora agreed, stating that while these AI tools are a good starting point, they cannot replace real research. She mentioned she uses them mostly to brainstorm ideas and preliminary conceptual frameworks for her topics.
During a break, Juan expressed a concern about one of the course requirements to use GenAI in a way that does not sound "robotic." He noted that for non-native English speakers, who already face challenges with language revision and acceptance in academic journals, these tools could be very helpful, and he felt that having to make the language sound more "human" was unfair if the original message was already clear. This concern was brought up for further discussion and activities later on.
The second part of the in-class session focused on specific strategies for writing with GenAI, such as prompt engineering and developing a unique authorial voice. All participants found the explanations and examples highly useful for improving their interactions with GenAI tools. A major challenge discussed was how to find one's own voice while using the tool for re-writing. Juan brought up his frustration with having to constantly rewrite to make texts sound less generic. This led to a discussion about linguistic accuracy, academic integrity, and plagiarism. The group discussed that using GenAI-generated text is risky, and research articles should be original. Aurora suggested that if a text was “co-authored with GenAI”, the author should explicitly state this in a footnote to ensure an ethical approach. Additionally, the use of other tools, such as online thesauri, collocation finders, dictionaries, and concordancers, was found interesting and helpful for writing.
The online session focused on using GenAI and other tools for rewriting text at both the micro (sentence level) and macro (structural) levels. Most participants were already familiar with online BDDL tools, such as simple concordancers and academic corpora, for confirming and comparing language choices during the writing process. Some participants, for example, had taken a course on the use of the COCA corpus. We explored research proposal texts, which they all considered important, to practice refining their writing by examining genre conventions, academic tone, and linguistic nuances.
Working in pairs, students explored these textual and linguistic features. For example, one activity asked them to state the key objectives for a research proposal using clear, simple sentences. All participants used sophisticated prompts in ChatGPT and Co-Pilot, asking the tools to account for grammatical simplicity, clarity, vocabulary, content, context, and audience (e.g., potential funders of the research).
They integrated online concordancers and corpus utilities (Corpus Mate, COCA, Linggle, and NetSpeak) to compare academic phraseology and vocabulary in social science texts. One example was the difference between "the project aims to" and "this work is aimed at," as both are frequently found in academic writing. However, these distinctions were not always clear. María referred to her constant struggle to find adequate lexical-grammatical choices based on more appropriate English usage. Generally, the group favored an active, more direct voice for this type of writing. Students also noted the importance of semi-academic tones, which combine familiar language with specialized terms for clarity.

5.2. Writing Task Developments

Most participants combined GenAI with corpus tools for their final written assignment. AI mainly helped them with content, structure and clarity, while most corpus checks were made to ensure the use of appropriate academic English phraseology and collocations. Each participant applied different tactics for research writing, described below according to annotated writing developments.

5.2.1. Applying GenAI to Pre-Write Research

The following participants deployed GenAI tools to start their research texts:
Juan used pre-writing strategies by relying on Elicit as a research assistant “to better explain his work on climate change impact on urban planning and design.” The tool summarized top papers. However, he found that only two of the eight papers were academically sound. He then asked Microsoft's Co-Pilot to create an outline for his article. The tool provided a detailed outline with headings and subheadings, which Juan saved for later.
Juan then used Co-Pilot to develop the article's introduction, asking for an outline for the text by providing the tool with three PDF articles. Co-Pilot summarized the papers by section. Juan then asked the tool to write the full introduction using the outline and ideas from the papers. Juan was not completely satisfied, so he made manual changes, like correcting an inaccurate description of a paper's methodology.
Paloma used Co-Pilot to write a 500-word academic text on AI in Education. She first asked Co-Pilot to write about the paradigm shift that this type of technology entailed for teaching. She then specified that she wanted clear simple sentences and specific examples of new types of teaching approaches.
Valeria did the same with Co-Pilot for writing a blog entry on novel digital teaching methods, which she requested to be concise, clear, and informative for a lay audience. She explicitly stated her satisfaction with the text in terms of clarity and conciseness.
Paloma and Valeria also asked Co-Pilot to integrate updated bibliographic references within their texts, which they checked for validity. They discarded some of them due to their poor and/or vague quality.

5.2.2. Applying GenAI to Assist with Research

In this case, these participants had already written their own texts and used the tools to improve research concepts and references.
Aurora's task involved the introduction section of a research article she was writing for a high-impact journal. She told ChatGPT Scholar about this goal in her initial prompt and asked it to act as an expert on social factor analysis. She wanted the final text to be about the same length as her original one (around 830–840 words) and asked the tool to add scholarly references as needed. Aurora was happy with the content, noting that the final version “faithfully represented” what she wanted to say and it included valid sources. The tool also highlighted research gaps in a bulleted list, a feature Aurora found especially helpful and appealing for promoting her research.
Pablo’s abstract was rewritten by Co-Pilot according to his research needs, but the tool reduced the text considerably. Therefore, Pablo rewrote some parts on his own, and in this process, he asked the tool to insert some updated references within the text, which he checked for academic integrity. He removed two and kept one of the sources (in addition to his initial ones).

5.2.3. Aiming at Text Simplicity and Clarity

In this case, these participants focused on re-writing procedures for linguistic clarity and conciseness.
María asked ChatGPT to rewrite a conference abstract she had written herself so that the language sounded “simple and clear, with few adverbs, connecting words, or dependent clauses, and with words a general audience would understand.” ChatGPT provided a good result: it split the abstract into three short paragraphs, used shorter sentences, and changed all passive voice to active. The language also became more familiar. While María liked the result, she preferred a single paragraph. She asked the tool to combine the text into one paragraph and simplify it further.
Begoña used Gemini to make a conference abstract (already submitted and accepted) clearer, more cohesive, and more polished in style. However, she found Gemini's rewritten text to be “too bombastic and robotic.” She therefore asked Gemini why the text did not sound human, and the tool responded with the following points: 1) The language was too formal (e.g., using “delving into” and “renowned authors”), 2) It lacked a personal touch, and 3) It was overly objective. To fix this, Begoña manually added some specific context about her research topic and its application in her local tourism sector. The new version was a single paragraph that was more cohesive and personalized, and, in Begoña's words, “more directly represented my authorial voice.”
Paloma and Valeria ran prompts in Co-Pilot to convert their texts into slides for classroom presentations. They revised all the bulleted slides for intended content, clear language, and format. In their conclusions about the task, they expressed their satisfaction not only with the final product but also with the process of interacting with the tools, alluding to dynamic ways of working that saved time and made academic work enjoyable.
Pablo, in his rewriting step, asked Co-Pilot to revise his research abstract for clear language, strong cohesion, and style, and to maintain the same number of words as his initial text. The final product convinced him, and he decided to use this version for his submission to a journal.

5.2.4. Using GenAI for Revising Specific Linguistic Aspects

Juan, working on his research article's introduction, asked Co-Pilot to proofread the text by specifically focusing on “appropriate phrases and vocabulary usage.” He also asked Co-Pilot to explain the changes it made. The tool made 15 changes to vocabulary, grammar, and phrasing, explaining each one. For example, it changed “contempt” to “mitigate” for clarity, and it fixed subject-verb agreement errors. It also made sentences shorter to improve conciseness.

5.2.5. Using BDDL to Focus on Linguistic Nuances

Juan also used COCA’s academic texts to double-check Co-Pilot's changes. He found that various changes were valid, but he noticed two exceptions: “dramatically” was used less frequently than “significantly,” and “request for government action” was less frequent than “demand government action.” Juan thus made linguistic changes based on these corpus-based frequencies.
María used Corpus Mate to make phraseological changes in her text when she felt that some expressions sounded awkward or when she was uncertain about their proper academic use. For example, she looked up “help” and “personalize” and found that “facilitate personalization” was more common in academic writing. So, she changed her original phrase, “technology that helps personalize teaching”, to “technology that facilitates the personalization of teaching.” She also used Corpus Mate and another tool, Linggle, to check and change collocations (e.g., she saw that “broaden knowledge” was used more often than “expand knowledge” in social sciences). Additionally, based on her own linguistic introspection, María replaced some linguistic items, changing, for instance, “a mix of research methods” to “mixed research methods.” She also favored the passive voice in two sentences “to make the text sound more academic.”
Aurora used two different corpora, Corpus Mate and COCA, to check her language. By comparing the results from both, she was able to make a number of stylistic corrections. For example, she improved academic phrases (e.g., changing “must increase the understanding of” to “need for a greater understanding of”), checked subject-specific word combinations (e.g., “strategic role” instead of “strategic position”), refined cohesive devices (e.g., using “additionally” instead of “further”), and adjusted the voice of her text.
Paloma and Valeria also worked with COCA, but in their cases, they focused on lexical collocations. Paloma, for example, replaced “deeply impacted” with the more widely used “greatly impacted”, and Valeria proofread collocations such as “fully fabricated” and “minimal importance”, replacing them with more frequent collocations.
Finally, Begoña used Corpus Mate's frequency charts to replace three word combinations in the final abstract; for example, “push factor” was changed to “driving factor.”

5.3. Survey Findings

The analysis of the Likert scale responses indicated predominantly positive evaluations, with all items—except for the “Difficulty with tool” section and one respondent’s score of 2 for BDDL use in academic work—receiving scores of at least 3 on a 5-point scale. To illustrate these findings in detail, Table 1 reports the individual items and corresponding mean (M) scores for the evaluation of GenAI exclusively, as the assessment of the BDDL tools produced comparable results. This comparability is further reflected in Table 2, which presents the mean satisfaction scores across the four sections of the survey.
The scores for BDDL were slightly lower than those for GenAI, primarily due to the items “I find linguistic patterns useful for academic writing” (M = 3.85) and “I would recommend BDDL to my colleagues” (M = 3.71). BDDL was also rated as more technically difficult to manage than GenAI (M = 4.14), while being considered easier to understand from a linguistic perspective (M = 3.28).
The overall survey results also varied according to participants’ academic positions and disciplinary backgrounds. With respect to academic status, the mean score derived from tenured faculty responses was higher than that of doctoral researchers (M = 4.41 vs. M = 3.74). With respect to disciplinary background, participants from Tourism assigned the highest mean scores (M = 4.54), followed by the Business Management professor (M = 4.40) and respondents from Education (M = 3.84). The lower overall mean in the latter case was largely attributable to the doctoral researcher in Education, who assigned lower scores to most items. All participants except two in Tourism (the doctoral researcher and one of the tenured lecturers) assigned higher scores to GenAI than to BDDL.
Following the Likert-scale items, participants responded to three open-ended questions (see Methodology). For the first question, all participants mentioned ChatGPT and Co-Pilot, while Gemini was mentioned by one respondent. Regarding corpus tools, Corpus Mate and Linggle were preferred by most participants, with COCA cited by two. For the second question, all participants referred to research texts such as abstracts and articles, with two also including academic texts and course materials. In addition, all participants reported using GenAI for research syntheses, outlines, and brainstorming, while four noted its key role in linguistic revision.
Finally, their comments about advantages and disadvantages (question 3) included:
Doctoral researcher (Education): “The best thing is that writing can be significantly sped up with GenAI, and this is good for research writing. However, corpus information is more reliable than GenAI for real language use because it was directly written by humans, and the quality of GenAI can be bad and repetitive sometimes”.
Doctoral researcher (Tourism): “I think GenAI is the future”.
Tenured faculty (Education): “I think these tools entail a great qualitative step in the academic world. Like any other tools, their use determines their usefulness. It is something that is going to stay with us, and so, the sooner we integrate and control them, the better advantage we can take of them”.
Tenured faculty (Education): “I think GenAI is more dynamic and easier-to-use than corpora tools, which need technical training (…) GenAI, however, must be consistently supervised and corrected by human intervention because it can make many mistakes and compromise academic integrity. It could also hinder or diminish the writer’s linguistic competence in the long run if we don’t practice writing”.
Tenured faculty (Tourism): “The advantage is that they improve the process of revising and proofreading the English text before the article is submitted to an academic journal. The disadvantage is that you can end up overusing GenAI, which would significantly reduce the researchers’ capacity to improve their grammatical and lexical competences on their own because of their overdependence on a digital entity that does it better and faster than the L2 English writer”.
Tenured faculty (Tourism): “Good impression. I will use these tools more for sure”.
Tenured faculty (Business Management): “GenAI is very useful for research and academic activities. I use it every day now. This course has provided interesting ideas for improving their use and application. I think corpora are less interesting or necessary”.

6. Discussion

Based on these local findings, several key facets merit discussion.

6.1. Adoption of GenAI for Text Enhancement

GenAI was adopted positively by these postgraduate researchers in Social Sciences, primarily as a means of revising and improving the clarity, accuracy, and organization of their academic texts. Participants reported using the tools not only to polish lexico-grammar and phrasing but also to address macro-level concerns such as cohesion, text structure, tone, disciplinary conventions, and intended audience. Importantly, they engaged in iterative rewriting, experimenting with prompts until outputs aligned with their intended meaning and authorial voice. In addition, most participants extended GenAI use beyond revision into other writing tasks such as outlining, documenting, and drafting, which indicates resourceful engagement with different steps of the writing process. This extended use was particularly noteworthy among tenured faculty members, perhaps owing to their greater experience with research writing. This observation aligns with these veteran participants’ higher survey scores, pointing to a more positive evaluation likely grounded in their recognition of various enriching practices.

6.2. Developing Prompt Literacy

The course facilitated the development of a more refined prompt literacy, as participants moved from basic tool use to more deliberate and targeted prompting strategies. Although all seven participants had prior experience with ChatGPT and Co-Pilot—the latter used under a full license at University of ( )—they reported learning to phrase requests more specifically according to their writing goals. While the more advanced researchers were more proactive in testing different affordances, novices also demonstrated acute awareness of issues such as accountability, bias, and “machine-sounding” language, concerns widely acknowledged in other studies (e.g., Smit et al., 2025). Active participation in online tasks and final writing assignments underscored their engagement with the process of learning to guide AI outputs more effectively. This type of partnership and co-agency with AI aligns with current analyses of successful academic writing using critical AI literacies (Godwin-Jones, 2024; Liu et al., 2025).

6.3. Preserving Authorial Voice and Human-like Style

A central developmental insight was participants’ recognition that GenAI’s value extends beyond grammatical correctness to the preservation of authorial voice and stylistic authenticity. Initially, as Juan noted, some questioned whether AI could contribute anything beyond producing correct English. Through practice, however, they began to see the importance of shaping texts so that they sounded human-like and reflective of their own academic identities. Various concerns were also raised about the long-term usefulness of GenAI for linguistic enhancement if users over-relied on these tools. As some commented, such overdependence may be detrimental to their own linguistic progress, which calls for a central focus on critical digital literacies involving meta-cognitive and meta-linguistic thinking (Mizumoto, 2024; Pérez-Paredes et al., 2025).
Alongside prompting strategies, corpus-driven resources were also valued for their role in refining lexical and phraseological nuance, even if these BDDL affordances were generally regarded as somewhat less supportive than GenAI. A main reason for this less positive appraisal may be the more technical management that these additional tools involve, as the survey revealed, even though they were easy-to-use concordancers and collocation / pattern-finding utilities. Nonetheless, these resources were found helpful for linguistic enhancement, and two participants in Tourism rated them higher than GenAI. This observation coincides with how postgraduate L2 writers often grapple more with lexico-grammatical issues such as academic phraseology, collocations, and grammar, finding corpus tools useful for this type of linguistic inquiry (Curado Fuentes, 2025b; Laso Martín & Comelles Pujadas, 2025; Yoon, 2016).

6.4. Discipline-Specific Focus

Participants demonstrated a capacity to direct GenAI and BDDL to serve their stylistic demands within their own disciplines and fields of work, employing distinctive strategies in their writing processes for finding disciplinary voices (as observed in Khuder, 2025). They requested clear and concise language for describing their research and academic material, and then critically evaluated AI outputs to identify weaknesses in content or phrasing according to model texts within their scientific area. For example, Begoña concentrated on linguistic accuracy, Juan on integrating disciplinary background, and María and Aurora on corpus-informed analysis of lexical and register-appropriate patterns. These variations illustrate how postgraduate writers strategically combine GenAI outputs with other resources to address their unique challenges within academic writing (Liu et al., 2025; Smit et al., 2025).
The Tourism and Business Management researchers showed more positive attitudes (in agreement with Curado Fuentes, 2025b). However, the Education participants also deployed these tools effectively, and, although their survey impressions were less buoyant (especially in the case of the doctoral researcher), they reflected more critically on the artificial and limited aspects of GenAI-generated texts, coinciding with Jacob et al.’s (2023) observations in this discipline.

6.5. Balancing GenAI and BDDL

Participants highlighted a key tension between the accessibility of GenAI and the linguistic reliability of corpus tools. While GenAI was seen as easy to use but linguistically ambiguous, corpus tools were acknowledged as technically demanding but trusted for validating collocations and phraseological choices. This balance reflects their dual concern: maximizing efficiency without compromising linguistic precision. The finding echoes existing scholarship demonstrating that corpus-based training fosters more accurate and discipline-sensitive language use among postgraduate students (Hua et al., 2024).

6.6. Divergent Attitudes Toward Academic Validity

Clear differences emerged in participants’ perceptions of the academic validity of AI-assisted writing, as deduced from in-class discussions, survey comments, and task performance. Three participants (María and two professors in Education and Tourism) commented in the survey on the need to adopt a cautious position, warning against overreliance on GenAI due to potential risks for integrity, oversight, and linguistic competence. In contrast, other respondents expressed more direct trust in and reliance on these tools. Juan’s case was particularly noteworthy: he shifted from uncritical reliance on GenAI before the course to a more balanced and reflective use of both prompt engineering and corpus consultation. In turn, Pablo expressed little concern about the potential negative aspects of GenAI and produced highly positive survey scores, even though he discarded BDDL as an effective complement to writing. Additionally, the two faculty members from Education valued the combination of AI and BDDL affordances to target both research and teaching texts, as they worked on initial research that they transformed into class material. These divergent perspectives illustrate how individual trajectories influence attitudes toward AI integration, as seen in Khuder (2025).

6.7. Sensitivity to Linguistic Nuances

Most participants displayed considerable sensitivity to linguistic nuance. This resourceful linguistic exploration is likely due to their advanced English proficiency (B2 or above), which contributes to their profiles as linguistic analysts. However, advanced proficiency does not always correlate with such intensive linguistic exploration: many experimental sciences and engineering postgraduates tend to prioritize efficiency over nuance and rely more directly on AI for translation or text automation (Liu & Wang, 2024; Ruff et al., 2024). By contrast, the researchers in our study engaged more critically with AI for linguistic precision and academic writing conventions.

6.8. Integrating Creativity with the Tools

Ultimately, participants’ practices demonstrate that GenAI is most effective when integrated with other tools and approaches, such as online corpus consultation, supporting rather than replacing human creativity. This aspect is critical for research writing in order to maintain heterogeneous discourses, multilingual perspectives, multi-disciplinary scopes, and cultural diversity (Kuteeva & Andersson, 2025). One example in this study is the critical stance that matured among fledgling researchers through task development and metacognitive reflection. Rather than accepting AI outputs at face value, these postgraduate writers actively negotiated the tensions between efficiency and authenticity, correctness and authorial voice, and technological reliance and academic integrity. Their engagement illustrates the potential for postgraduate L2 researchers to harness GenAI critically and productively as part of complex academic writing processes.

7. Conclusions

This case study provides clear evidence of how postgraduate researchers and faculty can integrate GenAI and BDDL tools into academic writing in meaningful and discipline-specific ways within Social Sciences.
Regarding the first research question, on participants’ strategies and developmental facets, the results trace their progression from using these technologies for routine tasks to employing advanced prompt engineering, authorial alignment, and discipline-specific / stylistic adaptation. They demonstrate that, with targeted training, existing linguistic competences can be transformed into deliberate, critical, and effective writing practices assisted by GenAI and BDDL.
Answering the second research question, on participants’ attitudes toward and evaluation of these tools, their perceptions of value and future use point to a growing uptake of these technologies for academic writing at postgraduate levels in Social Sciences, aligning with current debates on human–AI collaboration and co-agency in higher education (e.g., Liu et al., 2025). Beyond supporting research dissemination, these academics recognize the tools’ potential for a variety of academic activities while also acknowledging persistent challenges and risks, especially for L2 academic English competences, in agreement with other studies (e.g., Khuder, 2025). They also emphasize the use of appropriate academic phraseology and lexical collocations, which they can explore and learn through these digital tools, as found in other studies (e.g., Yoon, 2016). Their overall emphasis on combining linguistic precision with disciplinary expertise thus corroborates the crucial position of human agency in AI-supported writing.
A central limitation of this case study is its small sample size and the exclusive focus on participants who voluntarily enrolled in an ad-hoc course. This selective approach constrains the generalizability of the findings regarding writing practices in the overall Social Sciences spectrum, where many faculty members and researchers may resist adopting these tools or may fail to recognize their potential benefits. Consequently, future research must address these gaps. Specifically, the observations made in this study should be tested against larger and more heterogeneous samples to validate and refine these preliminary insights.
Moreover, longitudinal studies are essential to capture the evolving role of GenAI in academic writing, examining not only which tools are employed but also the underlying motivations, strategies, and variations in their use across distinct academic communities. A systematic and rigorous approach is necessary to fully understand GenAI’s impact on L2 academic writing in higher education contexts. As Raitskaya and Tikhonova (2025) note, GenAI presents substantial potential to enhance cognitive processes that underpin research methodology and scholarly inquiry. Yet, the practical realization of these benefits remains inconsistent, highlighting a critical need for research that investigates long-term effects, discipline-specific practices, and theoretical frameworks addressing cognitive processing and meta-linguistic reasoning in relation to these tools.

Funding

This research received no external funding.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
BDDL Broad Data-Driven Learning
COCA Corpus of Contemporary American English (Academic Section)
EFL English as a Foreign Language
GenAI Generative Artificial Intelligence
L2 Second Language

References

  1. Azennoud, A. (2024). Enhancing writing accuracy and complexity through AI-assisted tools among Moroccan EFL university learners. International Journal of Linguistics and Translation Studies, 5(4), 211-226. [CrossRef]
  2. Barrot, J. S. (2023). Using ChatGPT for second language writing: Pitfalls and potentials. Assessing Writing, 57, 100745. [CrossRef]
  3. Berber Sardinha, T. (2024). AI-generated vs human-authored texts: A multidimensional comparison. Applied Corpus Linguistics, 4, 100083. [CrossRef]
  4. Chanpradit, T. (2025). Generative artificial intelligence in academic writing in higher education: A systematic review. Edelweiss Applied Science and Technology, 9(4), 889-906. [CrossRef]
  5. Cheung, L., & Crosthwaite, P. (2025). CorpusChat: Integrating corpus linguistics and generative AI for academic writing development. Computer Assisted Language Learning. [CrossRef]
  6. Cordero, J., Torres-Zambrano, J., & Cordero-Castillo, A. (2025). Integration of generative artificial intelligence in higher education: Best practices. Education Sciences, 15(1), 32. [CrossRef]
  7. Costa, R., Costa, A.L., & Carvalho, A.A. (2024). Use of ChatGPT in higher education: A study with graduate students. In A. de Bem Machado, M.J. Sousa, F. Dal Mas, S. Secinaro & D. Calandra (Eds.), Digital transformation in higher education institutions (pp. 121-137). Springer. [CrossRef]
  8. Criollo, S., González-Rodríguez, M., Guerrero-Arias, A., Urquiza-Aguiar, L. F., & Luján-Mora, S. (2024). A review of emerging technologies and their acceptance in higher education. Education Sciences, 14(1), 10. [CrossRef]
  9. Crosthwaite, P., & Baisa, V. (2023). Generative AI and the end of corpus-assisted data-driven learning? Not so fast! Applied Corpus Linguistics, 3(3). [CrossRef]
  10. Curado Fuentes, A. (2025a). Digital tools for a broad data-driven learning approach in mixed linguistic-proficiency ESP courses. Language Value, 18(1), 18-48. https://doi.org/10.6035/languagev.8793.
  11. Curado Fuentes, A. (2025b). GenAI and BDDL tools for academic L2 English postgraduate writing in Tourism: A local case study. US-China Education Review, 15(8), 553-567. [CrossRef]
  12. Farrokhnia, M., Banihashem, S. K., Noroozi, O., & Wals, A. (2024). A SWOT analysis of ChatGPT: Implications for educational practice and research. Innovations in Education & Teaching International, 61(3), 460–474. [CrossRef]
  13. Feng, H., Li, K., & Zhang, L. J. (2025). What does AI bring to second language writing? A systematic review (2014-2024). Language Learning & Technology, 29(1), 1–27. https://hdl.handle.net/10125/73629.
  14. Godwin-Jones, R. (2024). Distributed agency in language learning and teaching through generative AI. Language Learning & Technology, 28(2), 4–31. https://hdl.handle.net/10125/73570.
  15. Hua, Y.F., Lu, X., & Guo, Q. (2024). Independent corpus consultation for collocation use in academic writing by L2 graduate students. System, 127, 103515. [CrossRef]
  16. Huang, L., & Deng, J. (2025). “This dissertation intricately explores…”: ChatGPT’s shell noun use in rephrasing dissertation abstracts. System, 129, 103578. [CrossRef]
  17. Ingley, S. J., & Pack, A. (2023). Leveraging AI tools to develop the writer rather than the writing. Trends in Ecology & Evolution, 38(9), 785–787. [CrossRef]
  18. Jacob, T., Tate, P., & Warschauer, M. (2023). Emergent AI-assisted discourse: Case study of a second language writer authoring with ChatGPT. arXiv.org.
  19. Ji, H., Han, I., & Ko, Y. (2023). A systematic review of conversational AI in language education: Focusing on the collaboration with human teachers. Journal of Research on Technology in Education, 55(1), 48–63. [CrossRef]
  20. Jiang, F., & Hyland, K. (2024). Does ChatGPT write like a student? Engagement markers in argumentative essays. Written Communication, 42(3). [CrossRef]
  21. Jiang, F., & Hyland, K. (2025). Metadiscursive nouns in academic argument: ChatGPT vs student practices. Journal of English for Academic Purposes, 75, 101514. [CrossRef]
  22. Jiang, F., & Su, H. (2025). Exemplification and its local grammar patterns in English as an academic lingua franca in research writing. Journal of English for Academic Purposes, 75, 101504. [CrossRef]
  23. Khuder, B. (2025). Enhancing disciplinary voice through feedback-seeking in AI-assisted doctoral writing for publication. Applied Linguistics. [CrossRef]
  24. Kramar, N., Bedrych, Y., & Shelkovnikova, Z. (2024). Ukrainian PhD students’ attitudes toward AI language processing tools in the context of English for academic purposes. Advanced Education, 24, 24-47. [CrossRef]
  25. Kuteeva, M., & Andersson, M. (2024). Diversity and standards in writing for publication in the age of AI—between a rock and a hard place. Applied Linguistics, 45, 561–567. [CrossRef]
  26. Laso Martín, N.J., & Comelles Pujadas, E. (2025). Exploring ERPP writing challenges: An investigation into the perceived and analysed difficulties of Spanish EFL university researchers. Ibérica, 49, 181-212. [CrossRef]
  27. Liu, L., & Wang, C. (2024). An action research on improving STEM postgraduate students’ English academic paper reading and writing abilities with AI-powered tools. Creative Education Studies, 11(12), 3757-3766. https://doi.org/10.12677/CES.2023.1112548.
  28. Liu, G.L., Lee, J.S., & Zhao, X. (2025). Critical digital literacies, agentic practices, and AI-mediated informal digital learning of English. System. [CrossRef]
  29. Mabry, L. (2008). Case study in social research. In P. Alasuutari, L. Bickman, & J. Brannen (Eds.), The Sage handbook of social research methods (pp. 214-227). Sage.
  30. Markey, B., Brown, D. W., Laudenbach, M., & Kohler, A. (2024). Dense and disconnected: Analyzing the sedimented style of ChatGPT-generated text at scale. Written Communication, 1–30. [CrossRef]
  31. Milton, C., Lokesh, V., & Thiruvengadam, G. (2024). Examining the impact of AI-powered writing tools on independent writing skills of Health science graduates. Advanced Education, 25, 143-161. [CrossRef]
  32. Mizumoto, A. (2024). Data-driven learning meets generative AI: Introducing the framework of metacognitive resource use. Applied Corpus Linguistics, 3. [CrossRef]
  33. Mo, Z., & Crosthwaite, P. (2025). Exploring the affordances of generative AI large language models for stance and engagement in academic writing. Journal of English for Academic Purposes, 75, 101499. [CrossRef]
  34. Nordling, L. (2023). How ChatGPT is transforming the postdoc experience. Nature, 622(7983), 655–657. [CrossRef]
  35. Ordoñana Guillamón C., Pérez-Paredes, P., & Aguado-Jiménez, P. (2024). Pedagogic natural language processing resources for L2 education: Teachers’ perceptions and beliefs. Language Value, 17(2), 60-99. [CrossRef]
  36. Pérez-Paredes, P. (2024). Data-driven learning in informal contexts? Embracing Broad Data-driven learning (BDDL) research. In P. Crosthwaite (Ed.), Corpora for language learning: Bridging the research-practice divide (pp. 211-221). Routledge.
  37. Pérez-Paredes, P., Curry, N., & Ordañana-Guillamón, C. (2025). Critical AI literacy for applied linguistics and language education students. Journal of China Computer-Assisted Language Learning. [CrossRef]
  38. Pigg, S. (2024). Research writing with ChatGPT: A descriptive embodied practice framework. Computers and Composition, 71, 102830. [CrossRef]
  39. Raitskaya L., & Tikhonova, E. (2025). Enhancing critical thinking skills in ChatGPT-human interaction: A scoping review. Journal of Language and Education, 11(2), 5-19. [CrossRef]
  40. Rowland, D. R. (2023). Two frameworks to guide discussions around levels of acceptable use of generative AI in student academic research and writing. Journal of Academic Language and Learning, 17(1), T31–T69. https://journal.aall.org.au/index.php/jall/article/view/915.
  41. Ruff, E.F., Engen, M.A., Franz, J.L., Mauser, J.F., West, J.K., & Zemke, J.M.O. (2024). ChatGPT writing assistance and evaluation assignments across the Chemistry curriculum. Journal of Chemical Education, 101, 2483-2492. [CrossRef]
  42. Sena, B. (2024). The case study in social research. History, methods and applications. Routledge.
  43. Smit, M., Bond-Barnard, T., & Wagner, R.F. (2025). Artificial intelligence in South African higher education: Survey data of master’s level students. Data in Brief, 61, 111813. [CrossRef]
  44. Williams, A. (2024). Comparison of generative AI performance on undergraduate and postgraduate written assessments in the biomedical sciences. International Journal of Educational Technology in Higher Education, 21(5), 1-22. [CrossRef]
  45. Xia, Q., Zhang, P., Huang, W., & Chiu, T. K. F. (2025). The impact of generative AI on university students’ learning outcomes via Bloom’s taxonomy: a meta-analysis and pattern mining approach. Asia Pacific Journal of Education, 1–31. [CrossRef]
  46. Yoon, C. (2016). Individual differences in online reference resource consultation: Case studies of Korean ESL graduate writers. Journal of Second Language Writing, 32, 67-80. [CrossRef]
  47. Zhang, M., & Crosthwaite, P. (2025). More human than human? Differences in lexis and collocation within academic essays produced by ChatGPT-3.5 and human L2 writers. International Review of Applied Linguistics in Language Teaching. [CrossRef]
Table 1. Survey items and their mean (M) scores for the GenAI tools.
Section Survey item M score
Academic use:
GenAI helps me with my academic writing 4.71
GenAI is useful for pre-writing work 4.57
GenAI is useful for drafting my texts 4.57
GenAI helps me with re-writing / revising 4.71
GenAI helps me with paraphrasing 4.00
GenAI is helpful for other academic tasks 4.57
Linguistic profitability:
GenAI helps to enhance my vocabulary 4.71
GenAI helps to enhance my grammar 4.28
GenAI helps to organize my texts better 4.42
GenAI helps to correct my mistakes 4.57
GenAI helps to improve my linguistic competence 4.42
GenAI helps to improve my linguistic confidence 4.00
Difficulty with tool:
My difficulties were technical / navigational 3.85
My difficulties were linguistic / discoursal 3.85
Usability:
I would recommend GenAI to my colleagues 4.14
GenAI is more valuable than other tools 4.14
I will use GenAI for writing in the future 4.42
Table 2. M scores for the two types of tools by survey sections.
Section GenAI tools BDDL tools
Academic use 4.50 4.30
Linguistic profitability 4.40 4.14
Difficulty with tool 3.85 3.71
Usability 4.23 4.04
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.