Preprint
Essay

This version is not peer-reviewed.

Inclusion or Illusion: A Philosophical Reflection on the Use of Gender-Inclusive Language in ChatGPT

Submitted: 19 August 2025
Posted: 19 August 2025


Abstract
This paper explores how ChatGPT-4.0 addresses gender inclusivity through a comparative analysis of English and Filipino text generation. Three types of prompts (neutral, biased, and explicitly gender-inclusive) were used to test whether AI-generated educational content reinforces or challenges gender stereotypes in depictions of teaching professionals. Findings show that while ChatGPT-4.0 reliably uses gender-neutral pronouns (e.g., “they/them” in English, “sila” and “siya” in Filipino), subtle stereotypes remain in culturally shaped descriptors, particularly those portraying female teachers as nurturing and caregiving. This reveals a gap between surface-level neutrality and deeper inclusivity, suggesting that true gender inclusiveness requires more than grammatical precision; it demands critical attention to cultural assumptions embedded in language. The paper highlights practical implications for educators, noting that meaningful inclusivity in AI-generated content can be achieved only through culturally sensitive prompting and ongoing evaluation to ensure fair representation of all genders in educational contexts.

I. Introduction

A profound shift in the concept of gender, and in the perceived importance of using "inclusive language," can be observed in recent years, with many people and organizations, including academic institutions, embracing language that defies traditional norms. This change reflects a growing awareness of diverse identities and the desire to create spaces where individuals can feel respected and valued (Nanda, 2014; Kuhn, 2020; Allen, 2023). The acronym LGBTQIA+ may now be used to refer to a more inclusive gender spectrum (Lesbian, Gay, Bisexual, Transgender, Queer, Intersex, Asexual, and other identities such as non-binary and pansexual), representing the wide variety of ways people experience and perceive gender beyond traditional heterosexual and cisgender norms (Mardell, 2016).
Alongside this change is the rise of technological innovations, led by developments in artificial intelligence, that are revolutionizing our approach to communication and even our perception of one another (Makridakis, 2017). Through these technologies, there is growing recognition of the diverse identities and needs of the individuals who act as the main consumers of these innovations across numerous fields, especially in education. In the pursuit of teaching students from various cultural upbringings, it is important to attend to "inclusivity" so that no learners are left behind (Opertti, Brady, & Duncombe, 2009). One of the most influential AI innovations, ChatGPT, a chatbot from the company OpenAI used to understand and generate human-like conversations, now assists both teachers and students in performing different tasks tailored to their individual needs, especially in writing. ChatGPT, in its latest iteration (version 4.0), relies on a single language model architecture known as the Generative Pre-trained Transformer (GPT), which has evolved over time. Through this model, language outputs are generated from three key sources: information publicly available on the internet, information licensed from private institutions and authors, and information that users and human trainers provide (OpenAI, 2023). While these sources allow ChatGPT to generate diverse responses, they also mean that human biases may be present in its texts, as the data it draws on reflect inequalities and stereotypes evident in society. This leaves some individuals wondering: if people can freely use this AI tool, how susceptible are academic institutions to unfiltered information that may evoke unintended responses and cause sensitive reactions?
To address this dilemma in AI algorithms and promote inclusive gender practices in technology, there are ongoing discussions and policy developments in different parts of the world, including Southeast Asia, to foster gender fairness and inclusivity when utilizing AI technologies such as ChatGPT (Fournier-Tombs et al., 2023).
With the goal of emphasizing sensitivity and respect for all individuals, the American Psychological Association (APA) created guidelines for expressing inclusivity through language, first released in 2019. These guidelines have been incorporated into the curricula of different academic institutions, as well as into the development of different AI technologies, including ChatGPT, to counter prejudiced attitudes or assumptions about individuals or groups based on individual and cultural characteristics (i.e., race, ethnicity, disability, age, and other characteristics such as gender). Gender-inclusive language, a specific type of inclusive language highlighted in the APA guidelines, aims to avoid gender bias and ensure that all genders are represented fairly in communication. It involves using terms that do not imply a specific gender (e.g., "they/them" instead of "he/she/him/her") and promotes non-gendered job titles (such as "entrepreneur" instead of "businessman"). These guidelines promote equity and respect in both written and spoken communication to foster an environment where diverse identities are valued.
With the growing awareness of the role that language and communication play as carriers of gender biases and stereotypes (Council of Europe, 2024), it is important to determine how inclusivity can be practiced, especially with the latest technologies in the educational landscape. However, since the language used in one cultural group differs from that of others, gender inclusivity may also be expressed in varying ways. For instance, while English has gender-specific pronouns (and has recently added "they" as a singular gender-neutral option), the Filipino language inherently uses the neutral pronoun "siya" (De la Cruz, 2023). One commonality, though, is the persistence of gender bias in both languages through gendered job titles and societal norms. A teacher's role, specifically, is often shaped by gendered expectations and norms that influence people's perceptions in both English and Filipino contexts. For instance, both languages reflect gendered assumptions that attribute authoritarian and strength-related qualities to men, and soft, nurturing characteristics (e.g., compassion) to women (De la Cruz, 2023; Flores & Aranduque, 2022). These embedded norms underscore the importance of scrutinizing how languages perpetuate or challenge gender biases in professional roles, an essential aspect analyzed in this paper. How does text generation through ChatGPT reflect the role of language as a carrier of gender biases and stereotypes? Likewise, how can inclusivity be effectively practiced in generated text outputs? Aside from uncovering the gender biases present in generated texts, understanding these AI dynamics, e.g., through prompts, may raise essential considerations about the active role of inclusive language in promoting gender equity and representation in the 21st century.

II. Discussion

To assess ChatGPT 4.0's ability to generate inclusive responses, two sets of prompts were used: the first in English and the second in Filipino. This method, inspired by Urchs et al.'s (2023) exploration of gender biases in German and English ChatGPT responses, was undertaken to understand how the AI tool reflects inclusiveness in two different linguistic frameworks, scrutinizing the pronouns, adjectives, and other elements (agent roles and functions) that recognize or defy gender inclusivity. While English prompts provide a standard baseline for analyzing inclusivity, given English's status as a global language, Filipino prompts help reveal gender-inclusive practices in a regional, culturally specific context. For each language, three types of prompts were given: neutral (a statement that does not specify inclusive language), biased (a prompt that deliberately introduces exclusive language), and gender-inclusive (an explicit request for gender-bias-free text). While neutrality is sometimes treated as complementary to inclusivity rather than distinct from it (Saguy & Williams, 2019), the neutral prompt (following Urchs et al., 2023) is essential for evaluating ChatGPT's baseline behavior without specific input on gender, offering insights into any inherent gender biases in the AI's content generation.
Neutral Prompt: Describe a good teacher.
(Ilarawan ang isang mabuting guro.)
Biased Prompt: Describe a good female teacher.
(Ilarawan ang isang mabuting babae na guro.)
Gender-Inclusive Prompt: Describe a good teacher (of any gender within the spectrum)
(Ilarawan ang isang mabuting guro, sa lahat ng saklaw ng kasarian)
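The prompt design above can be expressed as a small script. The following is an illustrative sketch, not the authors' actual procedure: the `build_prompt_set` and `query_model` helper names, and the model identifier `"gpt-4o"`, are assumptions for demonstration only.

```python
# Hypothetical sketch of the study's prompt design: three prompt types
# (neutral, biased, gender-inclusive) in two languages. The helper and model
# names below are illustrative assumptions, not from the original study.
PROMPTS = {
    "en": {
        "neutral": "Describe a good teacher.",
        "biased": "Describe a good female teacher.",
        "inclusive": "Describe a good teacher (of any gender within the spectrum).",
    },
    "fil": {
        "neutral": "Ilarawan ang isang mabuting guro.",
        "biased": "Ilarawan ang isang mabuting babae na guro.",
        "inclusive": "Ilarawan ang isang mabuting guro, sa lahat ng saklaw ng kasarian.",
    },
}

def build_prompt_set(language: str) -> list[tuple[str, str]]:
    """Return (prompt_type, prompt_text) pairs for one language."""
    return list(PROMPTS[language].items())

def query_model(client, prompt: str, model: str = "gpt-4o") -> str:
    """Send one prompt via the OpenAI Chat Completions API.

    `client` is an openai.OpenAI instance; calling this requires an API key
    and network access, so it is shown here only as a sketch.
    """
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

Keeping the prompts in one table makes the comparison systematic: each language yields exactly three responses, one per prompt type, which can then be analyzed side by side as in the tables below.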
Table 1. Chat-GPT responses based on English prompts.
No. | Prompt | Prompt type | Generated response
1 | Describe a good teacher. | Neutral | "A good teacher is someone who cares deeply about their students' learning and growth, presenting lessons in a clear, engaging, and supportive way. They are patient, fair, and approachable, always willing to guide and encourage each student to reach their potential."
2 | Describe a good female teacher. | Biased | "A good female teacher is compassionate, nurturing, and dedicated, teaching with clarity and empathy to support her students' growth. She is patient, attentive, motherly, and committed to creating a positive learning environment where every student feels valued and encouraged."
3 | Describe a good teacher (of any gender within the spectrum). | Gender-inclusive | "A good teacher, regardless of gender, fosters an inclusive and supportive classroom where each student feels respected and motivated to learn. They are patient, fair, and passionate about teaching, adapting their approach to meet diverse learning needs and encouraging every student to reach their full potential."
The first English prompt generated a text that employs the singular, gender-neutral pronouns "they" and "their" to describe a teacher. The neutral and universal tone aligns with contemporary efforts in English to avoid gendered language and allows the text to apply broadly to any teacher, minimizing the risk of implicit bias that could arise from gender-specific pronouns. Traditional gender stereotypes were also avoided by detaching specific traits from a binary classification (e.g., using "patient," "fair," and "approachable"), except for the phrase "cares deeply," which is traditionally attached to feminine qualities. The text also focused more on the roles teachers play in the classroom (e.g., "presenting lessons in a clear, engaging, and supportive way"). The absence of culturally specific nuances, though, may limit the deeper inclusivity needed to address societal expectations tied to gender roles in teaching.
Meanwhile, the deliberate gendering in the second English prompt brings attention to the teacher's gender, in contrast with the neutrality of the first prompt. The explicit mention of the word "female" framed the subject "teacher" through a gendered lens that steered the description toward characteristics traditionally associated with women. Descriptors like "compassionate," "nurturing," "motherly," and "dedicated" align with traditional views of women as caregivers, reinforcing a feminine archetype that prioritizes relational support and emotional labor. Even though such qualities are positive, their inclusion reflects an implicit bias toward gender norms rather than universal teaching attributes.
In contrast, the third English prompt generated text that highlights universally valued qualities like inclusivity, patience, fairness, passion, and adaptability. These traits avoid gender-specific descriptors (such as those generated for the second English prompt), emphasizing a more general and equitable perspective on the characteristics of a good teacher. In addition, the words "respect," "motivation," and "support for diverse learning needs" concentrate on the functional and interpersonal dimensions of teaching instead of linking these attributes to gendered assumptions. The generated text also used the pronoun "they," aligning with modern inclusive grammar; this choice, once again, avoids biases tied to gender (female, in this context).
Just like the English prompts, the texts generated from the Filipino prompts showed identifiable gender-related strengths and limitations.
Table 2. Chat-GPT responses based on Filipino prompts.
No. | Prompt | Prompt type | Generated response
1 | Ilarawan ang isang mabuting guro. | Neutral | "Ang isang mabuting guro ay may malasakit sa kanyang mga estudyante at nagsisikap na turuan sila ng may malinaw at madaling maintindihang paraan. Siya ay mapagpasensya, makatarungan, at laging handang tumulong."
2 | Ilarawan ang isang mabuting babae na guro. | Biased | "Ang isang mabuting babae na guro ay may malasakit sa kanyang mga estudyante at nagtuturo nang may malinaw na pag-unawa at pagkalinga. Siya ay matiyaga, mapagmalasakit, at laging handang gabayan ang bawat estudyante."
3 | Ilarawan ang isang mabuting guro (sa lahat ng saklaw ng kasarian) | Gender-inclusive | "Ang isang mabuting guro, anuman ang kasarian, ay nagtuturo nang may malasakit, pag-unawa, at tiyaga sa bawat estudyante. Siya ay makatarungan at laging handang tumulong upang matuto lahat."
For the first Filipino prompt, the generated text used "siya" (he/she), which, according to De la Cruz (2023), is a gender-neutral singular pronoun in the Filipino language. This pronoun, inherent in Filipino grammar, provides neutrality and avoids specifying the gender of a "mabuting guro" (good teacher). Its functional and adjectival descriptors, such as "mapagpasensya" (patient), "makatarungan" (fair), and "handang tumulong" (ready to help), highlight the ideal capabilities and qualities of a teacher without associating them with gender-specific roles. However, the descriptor "malasakit" (compassionate concern) may carry culturally ingrained connections with caregiving, which is often linked to women in Filipino culture.
The second Filipino prompt, however, generated a text anchored on a gender-specific description of a "babae na guro" (female teacher), despite including the gender-neutral pronoun "siya." As in the text produced by the biased English prompt, words and phrases like "pagkalinga" (care), "malasakit" (compassion), and "laging handang gabayan" (always ready to guide) resonate with stereotypical expectations of women in caregiving or nurturing roles. The text also integrates emotional qualities into teaching, which can blur the line between professional skills and personal traits. Here, a combination of neutrality and specificity shows how language can simultaneously include and differentiate based on gender.
The third and last Filipino prompt generated a text that still used the gender-neutral pronoun "siya" and descriptors such as "makatarungan" (fair) and "tiyaga" (perseverance), presenting them as professional qualities rather than traits linked to a specific gender. However, the words "pag-unawa" (understanding) and "malasakit" (compassion) may carry cultural connotations often aligned with the nurturing traits attributed to women, despite the gender-inclusive framing. This highlights how ChatGPT's linguistic choices subtly reflect gendered expectations even with an explicit prompt designed to transcend gender specificities and biases.

III. Philosophical Reflection

It is crucial to examine how AI technologies navigate and reflect societal norms through the outputs they generate. Reflecting on the analyses of the generated texts, I find a deconstructive reading, in the context of determining the presence or defiance of gender inclusivity in ChatGPT-generated texts, particularly significant: it challenges not only surface-level assumptions (such as visible patterns in pronouns and descriptors) but also exposes the deeper ideologies embedded in these outputs.
Analyzing the generated texts on a surface level (from neutral, biased, and gender-inclusive prompts in both English and Filipino) reveals apparent patterns in linguistic features such as pronouns, adjectives, and other descriptors. For example, neutral prompts in English generated gender-neutral pronouns like "they" and "their," similar to the Filipino texts that used the neutral pronoun "siya," to ensure inclusivity. Biased prompts, however, explicitly incorporated gendered terms (e.g., "female teacher" in English and "babae na guro" in Filipino) alongside adjectives (e.g., "mapagmalasakit" or "compassionate") that align with stereotypical assumptions about femininity. The gender-inclusive prompts, in contrast, attempted to bridge neutrality and inclusivity by directly stating the irrelevance of gender in defining a good teacher. Despite this attempt, some linguistic elements still subtly reflected traditional notions. The Filipino prompts, for instance, retained "siya" but supplemented this pronoun with culturally nuanced phrases, such as "handang tumulong" (ready to help), which might still be seen as aligned with traditional gendered expectations.
On a deeper level, I reflected on the underlying ideologies that shape the choices and implications of both the prompts used and the texts the AI tool generated. The neutral prompts may have removed explicit gender identifiers to promote fairness, yet this may also have overlooked deeper structural problems related to representation. Neutrality should not be interpreted as inherently inclusive: inclusivity does not merely mean avoiding gender markers but actively ensuring equitable representation of diverse gender experiences. The neutral prompt is an interesting and valuable inclusion in this investigation, for avoiding gender altogether may obscure the unique challenges faced by women or other groups in teaching roles, which is evident in the texts I was able to generate. Meanwhile, the gender-inclusive prompts attempted to create balance, but the execution in Filipino underscored how cultural linguistic frameworks, such as the use of "siya," may not fully veer away from traditional associations like those present in the texts generated from the biased prompts. Thus, inclusivity, in relation to gender, requires more active engagement with how language can validate and uplift various identities. Beyond designing inclusive prompts and proactively using gender-inclusive language, this engagement might involve evaluating biases in generated outputs, requesting specific identities with caution and respect, and refining ChatGPT responses with inclusivity in mind.
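The step of evaluating biases in generated outputs can itself be partly systematized. Below is a minimal sketch of such a check, assuming a small hand-picked descriptor lexicon drawn from the essay's own examples; it is an illustration, not a validated instrument, and the function and variable names are hypothetical.

```python
# Minimal sketch of a descriptor-frequency bias check on generated responses.
# The lexicon is illustrative only, drawn from descriptors discussed in this
# essay; a real evaluation would need a validated, culturally informed list.
STEREOTYPED_DESCRIPTORS = {
    "en": ["nurturing", "motherly", "compassionate", "caring"],
    "fil": ["mapagkalinga", "pagkalinga", "mapagmalasakit", "malasakit"],
}

def flag_descriptors(text: str, language: str) -> list[str]:
    """Return the stereotyped descriptors found in a generated response."""
    lowered = text.lower()
    return [word for word in STEREOTYPED_DESCRIPTORS[language] if word in lowered]

# Example: the biased-prompt response trips several flags,
# while the neutral-prompt response trips none.
biased = "A good female teacher is compassionate, nurturing, and motherly."
neutral = "A good teacher is patient, fair, and approachable."
```

A check like this only surfaces explicit markers; culturally loaded but grammatically neutral phrasing, of the kind analyzed above, would still require human judgment.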
With ChatGPT, information may be generated promptly. For some people, especially teachers, this may solve problems in preparing varied resources for learners, alongside fulfilling other academic responsibilities. But there are challenges to be addressed, in this case practicing inclusive language in outputs that may be sourced from possibly biased materials. Careful evaluation suggests that any textual material generated from ChatGPT must involve detailed prompts if the goal is to produce highly inclusive texts. As is commonly expressed, "you get what you ask for," a perspective that reinforces the importance of precision in advancing gender representation. It is important to note, though, that achieving fully inclusive language from ChatGPT may be a challenging endeavor, given the wide range of human experiences and identities that serve as the bases for every text it generates.
There are still lingering thoughts and queries on the implications of utilizing ChatGPT and similar AI tools in education for promoting gender inclusivity, concerning priority and implementation. First: will teachers prioritize gender-inclusive refinement over content, given the challenges of limited preparation time, resource constraints, and curriculum demands? Academic institutions advocate a balance between these two elements of teaching, but once faced with such constraints, one may expect either an overemphasis on gender-inclusive language or highly content-focused lessons. Another concern: should gender-inclusive language be mandated for educators who prefer established linguistic and societal norms? Some educators, particularly those in STEM fields or with traditional views, may resist such changes for several reasons. Gendered language in STEM might not be seen as a priority relative to scientific accuracy and efficiency. Individuals with traditional linguistic and cultural views, on the other hand, may be more inclined to use language aligned with long-standing norms than to conform to changes they perceive as unnecessary or overly progressive. Should we push this kind of gender recognition when wider organizations (e.g., the Council of Europe) point out that establishing guidelines for gender-inclusive language does not require automatic obedience to a set of rules, nor the abolishment or alteration of historically established words and texts? The solution to adopting gender-inclusive language in education may not be universal conformity but nuanced adjustment to individual preferences and varying beliefs. Those who deviate from the idea of fully gender-inclusive language should also be respected; each educator and institution is influenced by cultural, disciplinary, and personal factors.
A more flexible approach that promotes gender inclusivity while respecting professional autonomy and individual teaching styles is essential.

IV. Conclusion

Gender inclusivity is cultural. There is no single, globally inclusive language for gender that works for everyone, because the language used to promote such inclusiveness is culturally specific, shaped by historical contexts, local traditions, and societal norms. Every culture has its unique ways of expressing identity, respect, and inclusion. It is now up to the educators who use ChatGPT-generated texts as pedagogical resources to ensure that their lessons are inclusive of all their learners. The language of gender inclusivity in ChatGPT is not an illusion; it exists, and the model is trained to process and use gender-inclusive language. In the end, however AI is "fused" with the knowledge horizons of different individuals and communities, we are the consumers of the data and information this innovation generates. The use of gender-inclusive language is not the sole responsibility of such AI tools, nor a duty that developers bear alone. Indeed, the very biases that humans fear being generated (by AI technologies such as ChatGPT 4.0) are sourced from humans too, as they are accumulations of biases from diverse groups of people across many cultures. Will the ethical usage of this AI technology be ignored despite efforts from developers and educational stakeholders? This is a choice that lies in the hands of the users of this innovation. The gender inclusivity of AI tools like ChatGPT, which many in academic communities strive for, should be matched by our own purposes and actions to eradicate unfair societal exclusions through language.

References

  1. Allen, B. J. (2023). Difference matters: Communicating social identity. Waveland Press.
  2. Council of Europe. (2024). Council of Europe Gender Equality Strategy 2024–2029. Council of Europe Publishing.
  3. De la Cruz, J. M. (2023). Siya nga ba'y gender neutral?: A preliminary linguistic analysis of the genderedness of Filipino. UP Working Papers in Linguistics, 2, 1–25.
  4. Flores, E., & Aranduque, C. (2022). Gender bias in Philippine English textbooks. Intersections: Gender and Sexuality in Asia and the Pacific, 48, 1–18.
  5. Fournier-Tombs, E., Lee, J., Suriyawongkul, A., Raghunath, P., Dailey, M., Chatterjee, J. S., Doneys, P., Suthorn, W., Thongprasert, S., Villanueva, K., & Belda, F. (2023). Gender-sensitive AI policy in Southeast Asia. UN Women.
  6. Kuhn, H. C. (2020). An ethical perspective on increasing LGBTQIA+ inclusivity in education. eJEP: eJournal of Education Policy, 21, 1–15.
  7. Makridakis, S. (2017). The forthcoming artificial intelligence (AI) revolution: Its impact on society and firms. Futures, 90, 46–60.
  8. Mardell, A. (2016). The ABC's of LGBT+. Mango Media Inc.
  9. Nanda, S. (2014). Gender diversity: Crosscultural variations. Waveland Press.
  10. OpenAI. (2024, October 20). How ChatGPT and our foundation models are developed. https://help.openai.com/en/articles/7842364-how-chatgpt-and-our-foundation-models-are-developed
  11. Opertti, R., Brady, J., & Duncombe, L. (2009). Moving forward: Inclusive education as the core of Education for All. Prospects, 39, 205–214.
  12. Saguy, A. C., & Williams, J. A. (2019). Reimagining gender: Gender neutrality in the news. Signs: Journal of Women in Culture and Society, 44, 465–489.
  13. Urchs, S., Thurner, V., Aßenmacher, M., Heumann, C., & Thiemichen, S. (2023). How prevalent is gender bias in ChatGPT? Exploring German and English ChatGPT responses. arXiv:2310.03031.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.