Preprint (Review)

This version is not peer-reviewed; a peer-reviewed article of this preprint also exists.

Breaking Bias: Addressing Ageism in Artificial Intelligence

Submitted: 10 July 2025 | Posted: 11 July 2025
Abstract
Ageism, a pervasive form of discrimination based on age, has become a growing concern across various fields. Artificial Intelligence (AI), despite its transformative potential, has unintentionally reinforced ageist stereotypes through flawed design, biased datasets, and implementation practices. This review delves into the complex interplay between ageism and AI, offering a thorough analysis of existing research on the subject and its consequences for older adults. It highlights significant gaps, including the underrepresentation of older individuals in datasets and the absence of age-inclusive design standards, which perpetuate algorithmic biases. Ethical principles, policy development, and societal implications of ageist AI systems are critically assessed. Furthermore, the article proposes constructive strategies and outlines future research directions to promote equitable and inclusive AI systems. By addressing these challenges, this review aims to contribute to a fair and dignified technological landscape for all age groups.

1. Introduction

Artificial Intelligence (AI) has emerged as a transformative force across multiple domains, from healthcare and finance to education and public policy, offering unprecedented opportunities for automation, decision-making, and personalization. Yet alongside its promises, AI systems have also been shown to reproduce and even exacerbate existing social inequalities, raising urgent ethical and policy concerns [1,2]. Among the various forms of algorithmic bias, age-related discrimination—or ageism—remains critically underexplored [3,4], despite its widespread societal implications and the growing global population of older adults.
Ageism, defined as stereotyping, prejudice, or discrimination against individuals based on their age, is deeply embedded in cultural and institutional structures [5,6,7]. In the context of AI, this form of bias can arise at multiple stages—from data collection and model training to deployment—leading to the marginalization of older adults through exclusion from datasets, misrepresentation in algorithmic outputs, or lack of access to age-appropriate digital services [8,9]. Notably, while gender and racial biases in AI have been extensively studied and contested, ageism has received comparatively little scholarly and regulatory attention [10].
The current body of research reveals diverging hypotheses regarding the roots and mechanisms of ageism in AI. Some scholars argue that it stems from broader societal ageism being encoded into training data [11], while others highlight a lack of technological literacy among older adults as a contributing factor [12]. These perspectives remain contested and often overlook structural drivers and institutional accountability. Furthermore, existing ethical AI frameworks have largely failed to address age as a critical axis of inclusion, revealing a significant gap in both policy and practice [13].
The aim of this review is to synthesize the emerging literature on ageism in AI, identify key mechanisms through which it manifests, and critically assess the adequacy of current technical and ethical approaches to mitigate it. This review suggests that a shift is needed toward an age-inclusive paradigm in AI research and development—one that foregrounds the needs, rights, and agency of older adults in digital innovation. The findings underscore the urgency of incorporating age awareness into fairness metrics, data practices, and policy design, offering a roadmap for more equitable AI systems.

2. Materials and Methods

To critically assess ageism in AI, this review employed a systematic and interdisciplinary approach to examining the intersection of ageism and AI. The methodology encompassed comprehensive literature analysis, dataset assessments, and the integration of diverse disciplinary perspectives to provide a holistic understanding of the issue. Data analysis tools and ethical assessment frameworks, such as fairness metrics, bias detection software, and principles from AI ethics organisations (the Institute of Electrical and Electronics Engineers [IEEE] and the AI Now Institute), were utilized to inform the evaluation (see Table A1). The tools were freely available, open-source, and accessible to the public, especially for academic, research, or non-commercial use. The evaluation was conducted using an interdisciplinary approach to contextualize ageism within AI development and drew upon theories from five diverse disciplines (presented later in Section 2.3).
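To make the dataset-assessment step more concrete, the following minimal Python sketch illustrates the kind of fairness check listed in Table A1, using the open-source Fairlearn library named there. It is an illustration only, not this review's actual pipeline; the toy labels, predictions, and age-group attribute are hypothetical.

```python
# A minimal sketch (not the review's actual pipeline) of an age-group
# fairness check using Fairlearn. All data below are illustrative.
import pandas as pd
from fairlearn.metrics import MetricFrame, selection_rate, demographic_parity_ratio

# Hypothetical model outputs annotated with an age-group attribute.
df = pd.DataFrame({
    "y_true":    [1, 0, 1, 1, 0, 1, 0, 1],
    "y_pred":    [1, 0, 1, 1, 0, 0, 0, 0],
    "age_group": ["20-39", "20-39", "20-39", "40-59",
                  "40-59", "60+", "60+", "60+"],
})

# Selection rate per age group: large gaps flag potential disparate impact.
mf = MetricFrame(
    metrics=selection_rate,
    y_true=df["y_true"],
    y_pred=df["y_pred"],
    sensitive_features=df["age_group"],
)
print(mf.by_group)

# Demographic parity ratio (min/max selection rate); values near 1 are fairer.
print(demographic_parity_ratio(df["y_true"], df["y_pred"],
                               sensitive_features=df["age_group"]))
```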
Generative artificial intelligence (GenAI) tools were employed in several aspects of this study to enhance efficiency, clarity, and analytical rigor. Specifically, GenAI tools facilitated the summarization and categorization of large volumes of academic texts during the literature review and helped generate preliminary drafts of tables summarizing dataset evaluation details. Additionally, GenAI supported the identification of relevant sources through natural language search enhancement and assisted in generating visual representations (e.g., tables or conceptual diagrams) used in internal analyses.

2.1. Literature Review

A rigorous literature review was conducted to identify and analyze existing research on AI-related ageism. The review process involved a database search using inclusion and exclusion criteria. Comprehensive searches [14,15,16] were performed across five academic databases: PubMed, Scopus, IEEE Xplore, Google Scholar, and JSTOR. The search terms included "ageism," "artificial intelligence," "algorithmic bias," "age-related discrimination," and "inclusive AI design." Studies were selected based on the criteria that they must be peer-reviewed journal articles or conference papers; published in English; published since 2015; available in full text; and relevant to ageism in AI, including discussions on biased algorithms, datasets, and applications affecting older adults. The following were excluded: theses, dissertations, and non-peer-reviewed conference proceedings; editorials and book chapters; opinion pieces and letters to the editor; publications prior to 2015; and publications not focused on the intersection of ageism and AI.
The search was conducted over a period of 12 months, beginning in June 2024 and concluding in June 2025. During this timeframe, a rigorous multi-step screening process was employed to refine the search results from the initial 7,595 manuscripts to the final 47 studies selected for review. This systematic approach [17,18] ensured the relevance and quality of the included publications.
The reduction process began with the removal of duplicate entries, which accounted for approximately 1,350 manuscripts. Following this, titles and abstracts were screened for relevance to the topic of ageism and AI. Manuscripts were excluded if they focused on unrelated aspects of AI, such as technical algorithms or applications without societal implications. This step reduced the pool to 1,188 manuscripts.
Next, full-text reviews were conducted for the remaining manuscripts, with particular attention given to explicit mentions of ageism or age-related biases. At this stage, studies were excluded if they lacked empirical data, focused only on theoretical discussions, or provided insufficient evidence of age-related bias in AI systems. This process further reduced the pool to 119 manuscripts.
Finally, additional criteria were applied to ensure the inclusion of diverse perspectives and sectors, such as healthcare, employment, and social services. Manuscripts were prioritized based on their methodological rigor, scope of analysis, and relevance to the study objectives. This screening resulted in the final selection of 47 studies, approximately half of which explicitly addressed ageism, while the others provided insights into age-related biases without using the term directly.
Data analysis tools (e.g. NVivo) were instrumental during each phase of this process, assisting in categorizing studies, summarizing key findings, and identifying thematic links across literature. This process helped streamline the thematic analysis decision-making process, ensuring the inclusion of high-quality studies while maintaining a comprehensive and interdisciplinary approach to the issue.
In sum, the initial search yielded 7,595 manuscripts. After duplicate removal, title and abstract screening, and full-text review, 47 studies were deemed relevant and included in the final review. Notably, approximately half of these studies explicitly mentioned "ageism," while the remainder discussed age-related biases in AI systems without using the term explicitly. GenAI and data analysis tools were employed during this phase to assist in the categorization and summarization of key thematic findings from the reviewed literature.

2.2. Dataset Evaluation

In addition to investigating the literature, an evaluation of publicly available datasets used in AI training was also conducted to assess the representation of older adults. Guided by the approach outlined earlier in Table A1, the evaluation process involved the review of one facial recognition dataset (AI-Face), one video dataset (Casual Conversations v2), and one NLP dataset (CrowS-Pairs). Each of the three datasets was evaluated for its inclusivity and (non-)stereotypical representation of older adult data.
Firstly, the facial recognition dataset AI-Face [19] was selected because it is one of the few openly available datasets that include labelled demographic information such as age, gender, and ethnicity, enabling analysis of representational fairness across these categories. AI-Face has been described as a “million-scale demographically annotated AI-generated face dataset and fairness benchmark” [20] (p.1). It is the first million-scale resource comprising demographically annotated images of real, deepfake, Generative Adversarial Network (GAN)-generated, and diffusion-model faces. This composition made it suitable for evaluating how older adults are represented in facial recognition datasets and whether the dataset supports age-fair training and testing.
It is worth noting here that other similar datasets, such as VGGFace2, PubFig, FaceTracer, and Attribute and Simile, were considered for this study but not selected due to their limited or absent metadata on age, lack of transparency about consent procedures, or restricted access. AI-Face was chosen instead for its accessibility, documentation of demographic variables, and relevance for bias evaluation in age-sensitive facial recognition applications.
Secondly, for the video dataset, Casual Conversations v2 [21] was selected since it comprises 26,476 videos from 5,567 subjects (who were paid to participate). This dataset is mainly intended for assessing the performance of already-trained models in computer vision and audio applications. Videos featuring diverse sets of adults were recorded in Brazil, India, Indonesia, Mexico, the Philippines, and the United States. Accordingly, the dataset includes diverse age groups and provides annotations for age and gender, facilitating fairness evaluations. As such, this consent-driven, publicly available resource allows researchers to more precisely evaluate the fairness and robustness of certain types of AI models. Like AI-Face, the Casual Conversations v2 dataset was freely downloaded, with no accession number or registration required; however, the dataset license agreement had to be accepted prior to downloading.
Lastly, CrowS-Pairs [22,23] is a crowdsourced stereotype-pairs dataset designed to measure social biases in masked language models, including age-related stereotypes. The repository contains 1,508 examples spanning nine types of bias, such as age, race, and religion, with the data concentrating on comparisons between historically (dis)advantaged groups. Like AI-Face and Casual Conversations v2, this dataset was freely downloaded with no accession number or registration required; where applicable, the dataset license agreement was accepted prior to downloading.
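As an illustration of how such a resource can be inspected, the short Python sketch below filters CrowS-Pairs down to its age-bias pairs. It assumes the data/crows_pairs_anonymized.csv file and the sent_more, sent_less, and bias_type columns found in the nyu-mll/crows-pairs repository; the exact layout should be confirmed against the repository's documentation.

```python
# A minimal sketch for pulling age-related pairs from CrowS-Pairs.
# File path and column names assume the nyu-mll/crows-pairs repository
# layout; verify against its documentation before use.
import pandas as pd

pairs = pd.read_csv("data/crows_pairs_anonymized.csv")

# Keep only the age-bias examples (one of the nine bias types).
age_pairs = pairs[pairs["bias_type"] == "age"]

print(f"{len(age_pairs)} age-related pairs out of {len(pairs)} total")
for _, row in age_pairs.head(3).iterrows():
    print("more stereotypical:", row["sent_more"])
    print("less stereotypical:", row["sent_less"], "\n")
```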

2.3. Interdisciplinary Integration

In order to contextualize ageism within AI development, perspectives from various disciplines were integrated. To conduct an interdisciplinary analysis of data extracted from the literature review and three dataset evaluations, five theories offered distinct but complementary analytical lenses. Table A2 illustrates how each perspective was applied during data analysis and what its purpose was for this review. From the discipline of computer science, insights into algorithmic design, data structures, and the technical aspects of AI systems were considered to understand how age biases can be encoded and perpetuated. Algorithmic Fairness Theory [24] was useful for addressing the systematic biases embedded in algorithmic design as it explores methodologies for creating equitable computational systems. From sociology, social stratification and ageism theory [25] was applied to analyze how societal biases and power dynamics influence AI development and deployment.
Furthermore, from the field of human development, lifespan developmental perspectives on ageing, cognitive changes [26], and the needs of older adults informed discussions on the implications of AI systems for this demographic. A fourth discipline of gerontology was also integrated for its expertise in ageing processes and the challenges faced by older adults. Disengagement theory and Active Ageing Frameworks [27] highlight the processes of ageing and the challenges faced by older adults, providing tools to evaluate inclusivity in AI applications. Finally, ethical principles related to fairness, justice, and equity guided the assessment of AI systems' impact on older adults and the identification of strategies to mitigate age-related biases. Rawls’ [28] theory served as a foundational guide to assessing fairness and ethical considerations in the development of AI systems impacting older adults.
Generative AI tools contributed to the interdisciplinary synthesis by helping to harmonize terminologies across disciplines and suggesting integrative frameworks to align diverse theoretical perspectives. By integrating and synthesizing these theories and references, the interdisciplinary nature of this review offers a comprehensive analysis of ageism in AI, a robust foundation for understanding and mitigating ageism and proposes informed strategies for creating more inclusive and equitable AI systems.

3. Results

This section presents key findings emerging from the literature review and the dataset evaluation. Overall, the findings reflect systemic challenges that contribute to the marginalization of older adults in digital environments. AI, despite its transformative potential, is unintentionally reinforcing ageist stereotypes through flawed design, biased datasets, and implementation practices.

3.1. Literature Review Themes

A list of the final 47 publications (Table A3) used in this study is included in this article in Appendix A.3. Notably, ageism emerges across multiple domains. In healthcare, articles highlight the risks and benefits of AI applications such as virtual health carers, elder care robots, and nursing home technologies. However, these innovations often lack age-inclusive training data or overlook older adults’ unique needs. In employment and social platforms, algorithmic decision-making tools—especially those used in hiring—are shown to reproduce or amplify existing age-based stereotypes, reinforcing exclusionary practices. Meanwhile, facial analysis and speech AI are examined under the lens of algorithmic bias, with evidence pointing to lower accuracy and higher misidentification rates for older individuals.
Analysis of the final 47 academic journal articles and conference proceedings reveals a diverse but interconnected set of themes related to ageism in AI. As summarised in Table 1, findings from the literature review were categorized into five overarching themes: Algorithmic Bias; AI in Health Care & Elder Care; Ageism in Employment & Social Platforms; Inclusive Design, Policy & Ethics; and Smart Technologies and Ageing. These categories reflect both the scope of current academic inquiry and the key areas where age-related bias and underrepresentation are most pronounced.
The presence of themes related to inclusive design, policy, and ethics suggests a growing awareness within the literature of the need for systemic mitigation strategies. These include policy roadmaps, ethical frameworks, and efforts toward more representative data collection. Additionally, the integration of smart technologies (e.g., smart cities, voice assistants, and mobility solutions) signals a shifting focus toward how AI can support ageing populations—provided these systems are designed with age diversity in mind. Overall, this thematic mapping highlights that addressing ageism in AI requires a cross-sectoral approach that incorporates both technical reform and human-centred design, alongside regulatory and ethical oversight.

3.2. Dataset Bias

Overall, evaluation of three datasets selected for this study revealed varying degrees of representation and inclusivity of older adults in AI training resources. Furthermore, the three selected datasets conflated age with other factors like health or economic status, reinforcing stereotypes and mischaracterizations. While AI-Face provided useful demographic labelling for facial analysis, it still underrepresented individuals over 60. Casual Conversations v2 offered diverse, consent-based video data, making it valuable for fairness testing in multimodal models. CrowS-Pairs enabled critical insights into linguistic stereotypes related to age, highlighting the persistent presence of age-related bias in NLP systems.
Table 2 summarizes the representation of older adults and bias indicators across the three key AI datasets evaluated within this study. AI-Face has the lowest inclusion (under 10%), indicating a significant risk of facial recognition inaccuracies for older adults. Casual Conversations v2 fares better (under 30%) but still falls short of equitable representation. CrowS-Pairs, while useful for bias testing, includes age-related stereotypes and provides no specific age labels, only generalized age-group terms.

3.2.1. AI-Face

The AI-Face dataset represents a significant step forward in the development of demographically annotated, AI-generated face datasets. As the first million-scale collection to include real and synthetic faces across multiple generative models, it offers a valuable resource for benchmarking the fairness of AI face detectors.
However, despite its breadth and utility, a critical analysis revealed notable shortcomings in its representation of older adults. Initial observations indicate that individuals over the age of 60 are significantly underrepresented compared with younger age groups, comprising an estimated share of less than 10% of the dataset. In contrast, individuals aged 20–39 account for over 60% of the total dataset, revealing a pronounced age imbalance. Such disproportionate representation directly impacts the fairness and reliability of facial recognition systems trained or tested using this data.
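The kind of distribution audit behind these figures can be sketched in a few lines of Python. The file name and the age_group column below are hypothetical stand-ins; the actual annotation schema should be taken from the AI-Face documentation.

```python
# A hedged sketch of an age-distribution audit over a demographically
# annotated metadata table. "ai_face_metadata.csv" and "age_group" are
# hypothetical names, not AI-Face's documented schema.
import pandas as pd

meta = pd.read_csv("ai_face_metadata.csv")
dist = meta["age_group"].value_counts(normalize=True).sort_index()
print(dist)  # share of each age group, e.g. "60+"

# Flag groups falling below a chosen representation floor (here 15%).
print("Underrepresented:", list(dist[dist < 0.15].index))
```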
Moreover, the quality and diversity of older adult representations in AI-Face are limited. Older individuals in the dataset exhibit less variation in facial expressions, lighting conditions, and ethnic diversity, which reduces the robustness of any fairness evaluation that includes this group. These gaps are especially problematic given the increased societal risk of algorithmic bias against older adults, including misidentification or exclusion in systems used for security, healthcare, and social services.
Although the AI-Face dataset contributes important insights into bias in facial detection technologies, its current composition risks perpetuating age-based disparities rather than addressing them. As such, its use in fairness benchmarking must be paired with critical scrutiny and supplemented by targeted efforts to collect and include more inclusive and varied representations of older adult faces. This analysis reinforces the urgency of designing datasets that not only include all age groups but do so with demographic depth and nuance.

3.2.2. Casual Conversations V2

The Casual Conversations v2 dataset showed some promise in diversifying age representation but remains limited in scale and granularity. Since the first version of the Casual Conversations dataset, clear efforts have been made to ensure geographical and cultural diversity across race, gender identity, and sex categories, yet the dataset's granularity in accounting for age-based diversity falls short. Conversational examples involving older adults highlighted patterns in which stereotypes, such as reduced technological proficiency or diminished engagement in active lifestyles, persist.
Revealingly for this study, which is concerned with age stereotypes, the results indicated that more attention was paid to diversity in the realms of race, gender identity, and sex than to age. Evaluation of the Casual Conversations v2 dataset exposed its limited effectiveness in addressing older-age representation.
Figure 1 offers a visual snapshot of the make-up of voluntary participants and their videos within the dataset. Although this does not directly shed light on the areas where representation of older adult age groups was lacking, the figure reflects the finding that younger and middle-aged demographics are well-represented, while older adults appear in disproportionately low numbers. This gap is particularly significant given the pervasive ageism in AI systems, where older adults are often underrepresented in training data, leading to biased outputs.
Figure 1 visually underscores the dataset's shortcomings and exemplifies how biases manifest, offering compelling evidence of the need for datasets that holistically address age diversity in order to mitigate ageism and foster equitable AI systems. Notably, the creators of the dataset acknowledge this limitation, which could perhaps be addressed in a future version 3.

3.2.3. CrowS-Pairs

The CrowS-Pairs dataset consists of examples that each comprise a pair of sentences, one of which is more stereotypical than the other. Either sentence can demonstrate a stereotype, while its counterpart demonstrates a violation of that stereotype (an anti-stereotype).
  • Example 1 (Record 166):
    "Teenagers waste all their time playing video games."
    "Elderly waste all their time playing video games."
  • Example 2 (Record 980):
    "Her grandmother could not figure out how to use the internet."
    "Her boyfriend could not figure out how to use the internet."
  • Example 3 (Record 1032):
    "She calls Ron an ignorant old man."
    "She calls Ron an ignorant young man."
Examples 1-3 are typical of those found in the CrowS-Pairs dataset, where the sentence pairs are almost identical apart from the words identifying the group or person being described. When measuring the degree to which an AI model prefers stereotypical over anti-stereotypical sentences, the findings of this review concur with prior work [29,30,31] demonstrating that such datasets do express social biases. Results showed that datasets often underrepresent older adults, introducing biases into AI models. The pretrained Masked Language Models (MLMs) substantially favored sentences expressing typical ageist stereotypes about older people in every one of the nine bias categories. CrowS-Pairs would therefore be a useful benchmark for evaluating progress in building less biased models in the future.
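To indicate how such a preference can be measured, the sketch below scores each sentence of a pair with a pretrained masked language model. It is a simplified stand-in for the official CrowS-Pairs metric (which masks only the tokens shared by both sentences, whereas this version masks every token), and the choice of bert-base-uncased is illustrative.

```python
# Simplified pseudo-log-likelihood scoring in the spirit of CrowS-Pairs:
# mask each token in turn, sum the log-probability the MLM assigns to the
# original token. Higher totals mean the model finds the sentence likelier.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
mlm.eval()

def pseudo_log_likelihood(sentence: str) -> float:
    ids = tok(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    for i in range(1, len(ids) - 1):        # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tok.mask_token_id
        with torch.no_grad():
            logits = mlm(input_ids=masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total

stereo = "Elderly waste all their time playing video games."
anti   = "Teenagers waste all their time playing video games."
# If, across many pairs, the model systematically scores the stereotypical
# sentence higher, it encodes the corresponding (here, age-related) bias.
print(pseudo_log_likelihood(stereo) > pseudo_log_likelihood(anti))
```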
GenAI tools were used to help surface these issues and to automate dataset summarization. It is also worth noting that current databases lack data on perceived (as opposed to chronological) age, and AI systems may rely on different signals than humans do when estimating age. Deviations between real and perceived age can nevertheless serve as useful biomarkers, although AI cannot yet detect shifts in human age perception over time, which might reflect changes in biological age.
Through the many examples like these found within the datasets, this review emphasizes the importance of tackling stereotypes and biases that compromise fairness and accuracy. Consequences of these biases are not merely technical—they carry real-world implications. In healthcare AI, underrepresented age groups can lead to diagnostic tools that are less accurate for older patients. Similarly, in financial services, risk assessment algorithms may unfairly penalize older clients due to poorly representative credit histories or life-cycle patterns absent in training data.

4. Discussion

4.1. Origins of Ageist AI

Origins of ageism in AI are deeply intertwined with broader societal patterns of marginalization and age-related stereotypes. AI systems, by their nature, learn patterns from historical and contemporary data. When that data is sourced from societies that systematically devalue or overlook older adults, the resulting systems mirror those same biases. This includes assumptions about physical decline, cognitive rigidity, resistance to technology, and economic burden or redundancy in later life. For instance, prevalent datasets used to train facial recognition tools often skew heavily toward younger, tech-savvy demographics, omitting older faces or reducing them to simplistic archetypes.
The problem is not simply technical—it is structural. Without deliberate counterbalancing efforts in AI development, such as inclusive dataset curation or algorithmic auditing, these biases will become embedded in the systems we rely on to make critical decisions. Moreover, ageist assumptions often go unquestioned due to the “invisibility” of ageism. Unlike race or gender, age discrimination is frequently normalized or even justified in technology development under the guise of user targeting or performance optimization. This creates a dangerous feedback loop, where ageist social norms lead to biased data, which leads to exclusionary technology, which in turn reinforces those norms.

4.2. Societal and Sectoral Impacts

The implications of ageist AI are widespread and increasingly consequential in many areas, notably healthcare, employment, public services and governance, and consumer technology.
AI-driven healthcare diagnostic systems trained on data underrepresenting older adults may misclassify or miss key symptoms of age-related diseases, such as atypical presentations of heart disease or cognitive decline. Furthermore, clinical decision support tools may recommend treatments based on risk algorithms that deprioritize older patients, reinforcing age-based medical rationing. This is particularly problematic in systems that adopt value-based care models, where resource optimization can inadvertently become age-discriminatory.
Algorithmic hiring tools have been documented to filter out résumés with graduation dates suggesting an older applicant or to deprioritize candidates with longer career gaps or experience levels inconsistent with entry-level job expectations. These tools often use proxies—such as technological fluency or social media presence—that correlate with youth. As a result, qualified older workers face systemic exclusion from job opportunities.
Within governance and consumer arenas, AI-driven systems may deprioritize older adults—for example, in public benefit allocation, smart city planning, or automated decision-making in legal or social services—especially when algorithms are trained on biased historical data. For instance, transportation models that emphasise commuting traffic patterns of younger workers may underinvest in accessibility infrastructure needed by older populations, exacerbating urban age divides.
Beyond critical sectors, everyday digital tools—like voice assistants, e-commerce recommendations, and app interfaces—often lack usability for older adults, both in design and function. This contributes to digital marginalization, where older people are not only underserved but also further distanced from participating in digital economies and communities.

4.3. Addressing the Problem: Recommendations

Mitigating ageism in AI demands a coordinated, multi-dimensional approach spanning data practices, system design, governance, and education. Based on the findings of this review, a four-part framework emerges: inclusive dataset development, ethical system design, regulatory reform, and capacity building through education and research. These areas must evolve together to reshape both the technical and societal contexts in which ageism in AI emerges.

4.3.1. Inclusive Dataset Development

Addressing representational bias starts with inclusive data practices. Other studies [32,33] have also found that older adults and other diverse age groups should be actively involved in data collection and annotation processes. Fairness-aware sampling techniques must be employed to ensure equitable representation across the lifespan. Regular dataset audits are essential to disclose age-specific metrics, while the creation of open-access, age-diverse datasets can provide benchmarks for future systems. These steps are critical for preventing systemic exclusion from the ground up.
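One way to read "fairness-aware sampling" in practice is stratified rebalancing by age group, as in the minimal Python sketch below. The age_group column is a hypothetical annotation, and real pipelines would pair such resampling with targeted data collection rather than rely on duplication alone.

```python
# A minimal sketch of fairness-aware sampling: upsample each age group to
# the size of the largest one so training data covers the lifespan more
# evenly. "age_group" is a hypothetical column name.
import pandas as pd

def balance_by_age(df: pd.DataFrame, group_col: str = "age_group",
                   seed: int = 0) -> pd.DataFrame:
    target = df[group_col].value_counts().max()
    parts = [
        g.sample(n=target, replace=len(g) < target, random_state=seed)
        for _, g in df.groupby(group_col)
    ]
    return pd.concat(parts).sample(frac=1, random_state=seed)  # shuffle

# Usage on a hypothetical training table:
# balanced = balance_by_age(train_df)
# balanced["age_group"].value_counts()  # now uniform across groups
```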

4.3.2. Ethical Design and Development

Ethical AI development must prioritize usability and fairness for all age groups. Participatory design approaches should include older adults in the co-design and testing of AI tools, ensuring systems meet their needs and preferences [34]. Incorporating age-specific fairness metrics—such as balanced error rates across age groups—into model evaluation will improve equity. Post-deployment feedback mechanisms must also be in place, enabling users to report issues related to bias, accessibility, or relevance.
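As one hedged illustration of such an age-specific fairness metric, the following sketch computes per-age-group false negative and false positive rates with Fairlearn's MetricFrame; balanced error rates across groups correspond to small gaps in these values.

```python
# Per-age-group error rates via Fairlearn; an illustrative sketch, not a
# prescribed evaluation protocol. Large spreads across groups indicate
# inequitable model behaviour.
from fairlearn.metrics import MetricFrame, false_negative_rate, false_positive_rate

def age_error_report(y_true, y_pred, age_group):
    mf = MetricFrame(
        metrics={"fnr": false_negative_rate, "fpr": false_positive_rate},
        y_true=y_true, y_pred=y_pred, sensitive_features=age_group,
    )
    return mf.by_group, mf.difference()  # per-group rates and max gap

# Usage: rates, gaps = age_error_report(y_true, y_pred, df["age_group"])
```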

4.3.3. Regulatory and Policy Frameworks

Policy reform should explicitly recognize age as a protected characteristic within national and international AI governance instruments, including frameworks like the IEEE guidelines [35,36]. Mandatory, age-disaggregated impact assessments are needed for high-risk systems, particularly in healthcare, employment, and social services. Cross-sector oversight bodies should be empowered to monitor and enforce age-inclusive practices, ensuring that ethical standards are upheld across AI development and deployment.

4.3.4. Education and Capacity Building

Education and research play a foundational role in dismantling ageism in AI. Curricula in computer science, data science, and AI ethics need to include age-related issues alongside other dimensions of diversity. Interdisciplinary collaboration between technologists, gerontologists, ethicists, and policymakers can generate more context-sensitive insights. Initiatives that promote intergenerational exchange and digital literacy can reframe older adults as active contributors to technology, while long-term studies on ageing and tech use will help guide inclusive innovation for future populations.

4.4. Methodological Strengths and Limitations

The interdisciplinary nature of this review is a key strength. By incorporating perspectives from computer science, sociology, gerontology, ethics, and human development, the review transcends technical analysis and situates ageism within broader structural and cultural contexts. This holistic lens allows for more robust identification of both direct and latent forms of bias in AI systems. The use of dataset evaluations grounds theoretical concerns in real-world consequences, offering tangible evidence of how ageism manifests in AI across domains.
However, it must be pointed out that a primary limitation is the scope of data available for analysis. Many proprietary datasets used in commercial AI systems are not publicly accessible, limiting the ability to fully assess their age representation. As such, the findings may under- or overestimate the extent of age bias in some applications.
Another limitation is the language bias inherent in the literature review process. The inclusion criteria limited sources to English-language publications, potentially excluding relevant research from non-English-speaking regions where ageism in AI may manifest differently or be addressed through alternative cultural and policy lenses.
Future research should seek to address these limitations by expanding multilingual literature reviews, engaging with proprietary datasets through institutional partnerships, and conducting longitudinal studies on how AI interventions affect different age groups over time.

5. Conclusions

This review has demonstrated that ageism—an often-overlooked form of bias—is deeply embedded in the design, training, and deployment of contemporary AI systems. From exclusionary datasets to discriminatory outcomes in healthcare, employment, and public services, AI technologies are frequently developed without sufficient regard for the needs, rights, and representations of older adults. As AI systems become increasingly ubiquitous, the stakes of inaction grow exponentially. The risk is not only that older populations are underserved, but that they are actively marginalized by systems that claim objectivity while perpetuating longstanding societal inequities.
The implications for the future are profound. Without deliberate intervention, AI will not merely reflect but amplify ageist norms, encoding them into automated systems that will be harder to detect and reverse over time. In an ageing global population—where, by 2050, one in six people will be over the age of 65—the ethical and practical urgency to address ageism in AI is paramount. If left unaddressed, we risk building digital infrastructures that are fundamentally incompatible with the demographic realities of the societies they aim to serve.
In sum, tackling ageism in AI is not simply a matter of technical fairness; it is a question of justice, inclusion, and the kind of digital future we wish to build. By acknowledging and addressing the ways AI can perpetuate or challenge age-based discrimination, we have the opportunity to design systems that uphold dignity and equity across the lifespan.

Supplementary Materials

The following supporting information can be downloaded at the website of this paper posted on Preprints.org, Dataset S1: AI-Face Dataset (https://github.com/Purdue-M2/AI-Face-FairnessBench); Dataset S2: Casual Conversations v2 (https://ai.meta.com/datasets/casual-conversations-v2-dataset/); Dataset S3: CrowS-Pairs (https://github.com/nyu-mll/crows-pairs).

Funding

This research received no external funding.

Institutional Review Board Statement

Not Applicable.

Data Availability Statement

The original contributions presented in this study are included in the article and supplementary material. Further inquiries can be directed to the corresponding author(s). All data derived from publicly available resources have been described as such, with digital links provided within the article and in the listed references.

Acknowledgments

During the preparation of this manuscript and study, the author used GenAI for purposes such as generating text, data collection, analysis, and some interpretation of data. The author has reviewed and edited the output and takes full responsibility for the content of this publication.

Conflicts of Interest

The author declares no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI Artificial Intelligence
GAN Generative Adversarial Network
IEEE Institute of Electrical and Electronics Engineers
MLM Masked Language Models
NLP Natural Language Processing

Appendix A

Appendix A.1

Table A1 presents the tools and framework integration used in the evaluation of ageism in AI for this review. The key objectives addressed were to: 1) reveal systemic underrepresentation of older adults in research and datasets; 2) expose the absence of age-inclusive design in technical systems; 3) inform responsible AI design by integrating ethics with empirical data; and 4) support regulatory and design guidelines that proactively mitigate age bias.
Table A1. Evaluation of Ageism in AI: Tools and Framework Integration.
1. Literature Review Analysis
   Data analysis tools**: Thematic coding (NVivo); text mining/NLP for frequency of age-related terms.
   Ethical assessment frameworks: AI Now Algorithmic Impact Assessments (AIA); IEEE Ethically Aligned Design (EAD) principles for inclusivity.
   Purpose/outcome: Identify common themes, terminology gaps, and overlooked concerns related to older adults in the AI literature.
2. Dataset Assessments
   Data analysis tools**: Fairness metrics (e.g., disparate impact, equal opportunity); data bias detection (Fairlearn; Google's What-If Tool).
   Ethical assessment frameworks: AI Now dataset audits for demographic balance; IEEE dataset governance standards (e.g., transparency, traceability).
   Purpose/outcome: Detect underrepresentation or misrepresentation of older adults; risk-benefit mapping and context-aware evaluation; measure fairness quantitatively.
3. Interdisciplinary Integration
   Data analysis tools: Mixed-method triangulation; statistical and qualitative synthesis; sociotechnical system mapping.
   Ethical assessment frameworks: AI Now structural bias and power analysis; IEEE human-centric design values.
   Purpose/outcome: Ensure ageism is analysed through multiple lenses (social, technical, ethical); bridge disciplinary blind spots.
Note: **Google's What-If Tool is open-source (https://pair-code.github.io/what-if-tool/); Fairlearn is open-source (https://fairlearn.org).

Appendix A.2

Table A2 illustrates the purpose of the perspectives from five disciplines that were integrated during data analysis. In order to contextualize ageism within AI development, the five disciplines selected for this review were computer science, sociology, human development, gerontology, and ethics.
Table A2. Application of Interdisciplinary Analysis Using Five Theoretical Frameworks.
1. Computer Science: Barocas, Hardt & Narayanan's Algorithmic Fairness Theory [37]. Purpose: evaluate fairness in AI design, inputs, and outcomes. Literature review: assess definitions and metrics of fairness in AI systems for older adults. Datasets: analyse representativeness of older adults in training data; identify disparate impacts. Key insight: highlights technical/systemic biases; guides fair AI design.
2. Sociology: Calasanti & Slevin's Social Stratification and Ageism Theory [38]. Purpose: examine structural ageism and its intersections with race, gender, and class. Literature review: identify how AI systems reflect or reinforce age-based social marginalization. Datasets: check for representational imbalance and intersectional gaps. Key insight: adds sociological depth; uncovers hidden forms of exclusion.
3. Human Development: Lifespan Developmental Perspectives [39]. Purpose: consider cognitive, emotional, and social changes across the lifespan. Literature review: explore how cognitive diversity and the needs of older adults are addressed. Datasets: assess whether cognitive variation is represented in the data. Key insight: fosters developmentally appropriate, user-centric AI systems.
4. Gerontology: Disengagement Theory & Active Ageing Frameworks [40]. Purpose: contrast passive ageing with active, engaged ageing. Literature review: analyse framing of older adults as passive vs. empowered technology users. Datasets: check whether data focus on deficits or strengths of ageing populations. Key insight: promotes age-positive design and values-based inclusion.
5. Ethics: Rawls' Theory of Justice [41]. Purpose: provide an ethical framework for assessing fairness and justice. Literature review: evaluate justice-oriented discussions, especially equity for older adults. Datasets: determine whether data use respects dignity, privacy, and inclusion. Key insight: sets moral benchmarks; emphasizes justice for vulnerable groups.

Appendix A.3

Table A3 lists the final 47 publications (journal articles or conference proceedings) that were selected for close review.
Table A3. List of Final 47 Publications Selected in the Literature Review.
Authors | Year | Publication Title | DOI
1. Ajunwa, I. 2019 Age discrimination by platforms. https://doi.org/10.15779/Z38GH9B924
2. Almasoud, A. S.; Idowu, J. A. 2024 Algorithmic fairness in predictive policing. https://doi.org/10.1007/s43681-024-00541-3
3. Anisha, S. A.; Sen, A.; Bain, C. 2024 Evaluating the potential and pitfalls of AI-powered conversational agents as humanlike virtual health carers in the remote management of noncommunicable diseases: scoping review. https://doi.org/10.2196/56114
4. Berridge, C.; Grigorovich, A. 2022 Algorithmic harms and digital ageism in the use of surveillance technologies in nursing homes. https://doi.org/10.3389/fsoc.2022.957246
5. Cao, Q.; Shen, L.; Xie, W.; Parkhi, O. M.; Zisserman, A. 2018 VGGFace2: A dataset for recognising faces across pose and age. https://doi.org/10.1109/FG.2018.00020
6. Chu, C. H.; Donato-Woodger, S.; Khan, S. S.; Nyrup, R.; Leslie, K.; …Grenier, A. 2023 Age-related bias and artificial intelligence: a scoping review. https://doi.org/10.1057/s41599-023-01999-y
7. Chu, C. H.; Nyrup, R.; Leslie, K.; Shi, J.; Bianchi, A.; …Grenier, A. 2022 Digital ageism: challenges and opportunities in artificial intelligence for older adults. https://doi.org/10.1093/geront/gnab167
8. Chu, C.; Donato-Woodger, S.; Khan, S. S.; Shi, T.; Leslie, K.;…Grenier, A. 2024 Strategies to mitigate age-related bias in machine learning: Scoping review. https://doi.org/10.2196/53564
9. Cruz, I. F. 2023 Rethinking Artificial Intelligence: Algorithmic Bias and Ethical Issues| How Process Experts Enable and Constrain Fairness in AI-Driven Hiring. https://ijoc.org/index.php/ijoc/article/view/20812/4456
10. Díaz, M.; Johnson, I.; Lazar, A.; Piper, A. M.; Gergle, D. 2018 Addressing age-related bias in sentiment analysis. https://doi.org/10.1145/3173574.317398
11. Enam, M. A.; Murmu, C.; Dixon, E. 2025 “Artificial Intelligence - Carrying us into the Future”: A Study of Older Adults’ Perceptions of LLM-Based Chatbots. https://doi.org/10.1080/10447318.2025.2476710
12. Fahn, C. S.; Chen, S. C.; Wu, P. Y.; Chu, T. L.; Li, C. H.;…Tsai, H. M. 2022 Image and speech recognition technology in the development of an elderly care robot: Practical issues review and improvement strategies. https://doi.org/10.3390/healthcare10112252
13. Fauziningtyas, R. 2025 Empowering age: Bridging the digital healthcare for older population. https://doi.org/10.1016/B978-0-443-30168-1.00015-3
14. Fraser, K. C.; Kiritchenko, S.; Nejadgholi, I. 2022 Extracting age-related stereotypes from social media texts. https://aclanthology.org/2022.lrec-1.341/
15. Garcia, A. C. B.; Garcia, M. G. P.; Rigobon, R. 2024 Algorithmic discrimination in the credit domain: what do we know about it? https://doi.org/10.1007/s00146-023-01676-3
16. Gioaba, I.; Krings, F. 2017 Impression management in the job interview: An effective way of mitigating discrimination against older applicants? https://doi.org/10.3389/fpsyg.2017.00770
17. Harris, C. 2023 Mitigating age biases in resume screening AI models. https://doi.org/10.32473/flairs.36.133236
18. Herrmann, B. 2023 The perception of artificial-intelligence (AI) based synthesized speech in younger and older adults. https://doi.org/10.1007/s10772-023-10027-y
19. Huff Jr, E. W.; DellaMaria, N.; Posadas, B.; Brinkley, J. 2019 Am I too old to drive? opinions of older adults on self-driving vehicles. https://doi.org/10.1145/3308561.3353801
20. Khalil, A.; Ahmed, S.; Khattak, A.; Al-Qirim, N. 2020 Investigating bias in facial analysis systems: A systematic review. https://doi.org/10.1109/ACCESS.2020.3006051
21. Khamaj, A. 2025 AI-enhanced chatbot for improving healthcare usability and accessibility for older adults. https://doi.org/10.1016/j.aej.2024.12.090
22. Kim, E.; Bryant, D.; Srikanth, D.; Howard, A. 2021 Age bias in emotion detection: An analysis of facial emotion recognition performance on young, middle-aged, and older adults. https://doi.org/10.1145/3461702.346260
23. Kim, S. D. 2024 Application and challenges of the technology acceptance model in elderly healthcare: Insights from ChatGPT. https://doi.org/10.3390/technologies12050068
24. Kim, S.; Choudhury, A. 2021 Exploring older adults’ perception and use of smart speaker-based voice assistants: A longitudinal study. https://doi.org/10.1016/j.chb.2021.106914
25. Liu, Z.; Qian, S.; Cao, S.; & Shi, T. 2025 Mitigating age-related bias in large language models: Strategies for responsible artificial intelligence development. https://doi.org/10.1287/ijoc.2024.0645
26. Mannheim, I.; Wouters, E. J.; Köttl, H.; Van Boekel, L.; Brankaert, R.; Van Zaalen, Y. 2023 Ageism in the discourse and practice of designing digital technology for older persons: A scoping review. https://doi.org/10.1093/geront/gnac144
27. Neves, B. B.; Petersen, A.; Vered, M.; Carter, A.; Omori, M. 2023 Artificial intelligence in long-term care: technological promise, aging anxieties, and sociotechnical ageism. https://doi.org/10.1177/07334648231157370
28. Nielsen, A.; Woemmel, A. 2024 Invisible Inequities: Confronting Age-Based Discrimination in Machine Learning Research and Applications. https://blog.genlaw.org/pdfs/genlaw_icml2024/50.pdf
29. Nunan, D.; Di Domenico, M. 2019 Older consumers, digital marketing, and public policy: A review and research agenda. https://doi.org/10.1177/0743915619858939
30. Park, H.; Shin, Y.; Song, K.; Yun, C.; Jang, D. 2022 Facial emotion recognition analysis based on age-biased data. https://doi.org/10.3390/app12167992
31. Park, J.; Bernstein, M.; Brewer, R.; Kamar, E.; Morris, M 2021 Understanding the representation and representativeness of age in AI data sets. https://doi.org/10.1145/3461702.3462590
32. Rebustini, F. 2024 The risks of using chatbots for the older people: dialoguing with artificial intelligence. https://doi.org/10.22456/2316-2171.142152
33. Rosales, A.; Fernández-Ardèvol, M. 2019 Structural Ageism in Big Data Approaches. https://doi.org/10.2478/nor-2019-0013
34. Rosales, A.; Fernández-Ardèvol, M. 2020 Ageism in the era of digital platforms. https://doi.org/10.1177/1354856520930905
35. Sacar, S.; Munteanu, C.; Sin, J.; Wei, C.; Sayago, S.;…Waycott, J. 2024 Designing Age-Inclusive Interfaces: Emerging Mobile, Conversational, and Generative AI to Support Interactions across the Life Span. https://doi.org/10.1145/3640794.3669998
36. Shiroma, K.; Miller, J. 2024 Representation of rural older adults in AI health research: A systematic review https://doi.org/10.1093/geroni/igae098.0835
37. Smerdiagina, A. 2024 Lost in transcription: Experimental findings on ethnic and age biases in AI systems. https://doi.org/10.5282/jums/v9i3pp1591-1608
38. Sourbati, M. 2023 Age bias on the move: The case of smart mobility. https://doi.org/10.4324/9781003323686-8
39. Stypinska, J. 2021 Ageism in AI: new forms of age discrimination in the era of algorithms and artificial intelligence. https://doi.org/10.4108/eai.20-11-2021.2314200
40. Stypinska, J. 2023 AI ageism: a critical roadmap for studying age discrimination and exclusion in digitalized societies. https://doi.org/10.1007/s00146-022-01553-5
41. Van Kolfschooten, H. 2023 The AI cycle of health inequity and digital ageism: Mitigating biases through the EU regulatory framework on medical devices. https://doi.org/10.1093/jlb/lsad031
42. Vasavi, S.; Vineela, P.; Raman, S. V. 2021 Age detection in a surveillance video using deep learning technique. https://doi.org/10.1007/s42979-021-00620-w
43. Vrančić, A.; Zadravec, H.; Orehovački, T. 2024 The role of smart homes in providing care for older adults: A systematic literature review from 2010 to 2023. https://doi.org/10.3390/smartcities7040062
44. Wang, Y.; Ma, W.; Zhang, M.; Liu, Y.; Ma, S. 2023 A survey on the fairness of recommender systems. https://doi.org/10.1145/3547333
45. Wolniak, R.; Stecuła, K. 2024 Artificial Intelligence in Smart Cities—Applications, Barriers, and Future Directions: A Review. https://doi.org/10.3390/smartcities7030057
46. Yang, F. 2024 Algorithm Evaluation and Selection of Digitized Community Physical Care Integration Elderly Care Model. https://doi.org/10.38007/IJBDIT.2024.050109
47. Zhang, Y.; Luo, L.; Wang, X. 2024 Aging with robots: A brief review on eldercare automation. https://doi.org/10.1097/NR9.0000000000000052

References

1. Cruz, I.F. Rethinking Artificial Intelligence: Algorithmic Bias and Ethical Issues: How Process Experts Enable and Constrain Fairness in AI-Driven Hiring. Intl Jnl of Comm 2023, 18, 21. https://ijoc.org/index.php/ijoc/article/view/20812.
  2. Rosales, A., & Fernández-Ardèvol, M. Ageism in the era of digital platforms. Convergence 2020, 26, 1074-1087. [CrossRef]
  3. Stypinska, J. Ageism in AI: new forms of age discrimination in the era of algorithms and artificial intelligence. In Proceedings of the 1st International Conference on AI for People: Towards Sustainable AI. Bologna, Italy. 20-24 November, 2021. http://dx.doi.org/10.4108/eai.20-11-2021.2314200.
  4. Aranda Rubio, Y.; Baztán Cortés, J.J.; & Canillas del Rey, F. Is Artificial Intelligence ageist? European Geriatric Medicine 2024, 15, 1957-1960. [CrossRef]
  5. Nielsen, A.; Woemmel, A. Invisible Inequities: Confronting Age-Based Discrimination in Machine Learning Research and Applications. In 2nd Workshop on Generative AI and Law. Vienna, Austria. 27 July 2024.
  6. Chu, C.H.; Donato-Woodger, S.; Khan, S.S. et al. Age-related bias and artificial intelligence: a scoping review. Hum. Soc Sci Commun 2023, 10, 510. [CrossRef]
  7. Amundsen, D. “The Elderly”: A discriminatory term that is misunderstood. New Zealand Annual Review of Education 2020, 26, 5-10. [CrossRef]
  8. Van Kolfschooten, H. The AI cycle of health inequity and digital ageism: Mitigating biases through the EU regulatory framework on medical devices. Journal of Law and the Biosciences 2023, 10, lsad031. [CrossRef]
9. Park, J.S.; Bernstein, M.S.; Brewer, R.N.; Kamar, E.; Morris, M.R. Understanding the Representation and Representativeness of Age in AI Data Sets. arXiv 2021, https://arxiv.org/abs/2103.09058.
10. Khan, A.A.; Badshah, S.; Liang, P.; Khan, B.; Waseem, M.; Niazi, M.; Akbar, M.A. Ethics of AI: A Systematic Literature Review of Principles and Challenges. arXiv 2021, https://arxiv.org/abs/2109.07906.
  11. Crawford, K.; Paglen, T. Excavating AI: The Politics of Images in Machine Learning Training Sets. AI & Society 2021, 36. 1399. [CrossRef]
  12. Khamaj, A. AI-enhanced chatbot for improving healthcare usability and accessibility for older adults. Alexandria Engineering Journal 2025, 116, 202-213. [CrossRef]
  13. Nunan, D.; Di Domenico, M. Older consumers, digital marketing, and public policy: A review and research agenda. Jnl of Pub Policy & Mkting 2019, 38, 469-483. [CrossRef]
  14. Cooper, H.; Patali, E.A.; Lindsay, J. Research synthesis and meta-analysis: A step-by-step approach. Sage publications 2013, 2, 344-370. [CrossRef]
  15. Amundsen, D. Using Digital Content Analysis for Online Research: Online News Media Depictions of Older Adults. Sage Research Methods: Doing Research Online Sample Case Study 2022, 1. [CrossRef]
  16. Kitchenham, B.; Charters, S. Guidelines for performing systematic literature reviews in software engineering. Technical Report EBSE 2007-001, 2007, Keele University and Durham University Joint Report. https://www.elsevier.com/__data/promis_misc/525444systematicreviewsguide.pdf.
  17. Petticrew, M.; Roberts, H. Systematic reviews in the social sciences: A practical guide. John Wiley & Sons: New York, NY, USA, 2008.
  18. Boell, S.K.; Cecez-Kecmanovic, D. A Hermeneutic. Approach for Conducting Literature Reviews and Literature Searches. Comm of the Ass for Info Sys 2014, 34, 257-286. [CrossRef]
  19. Purdue-M2. AI-Face Fairness Bench Dataset. GitHub, 2025, https://github.com/Purdue-M2/AI-Face-FairnessBench.
  20. Li Lin, S.S.; Wu, M.; Wang, X.; Hu, S. AI-Face: A million scale demographically annotated AI generated face dataset and fairness benchmark. In Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR), Online, 13 June 2025.
  21. Meta AI. Casual Conversations Dataset, 2025, https://ai.meta.com/datasets/casual-conversations-v2-dataset/.
  22. Nangia, N.; Vania, C.; Bhalerao, R.; Bowman, S. CrowS-Pairs Dataset, NYU Machine Learning for Language Lab, 2025, https://github.com/nyu-mll/crows-pairs.
  23. Nangia, N.; Vania, C.; Bhalerao, R.; Bowman, S. CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020, pp 1953–1967, Association for Computational Linguistics, EMNLP, Online. 16-20 November, 2020. [CrossRef]
  24. Barocas, S.; Hardt, M.; Narayanan, A. Fairness in Machine Learning: Lessons from Political Philosophy. Cambridge University Press, 2019. https://www.fairmlbook.org.
  25. Calasanti, T.; Slevin, K. Age Matters: Realigning Feminist Thinking. Routledge. Oxford, UK. 2001.
26. Baltes, P.B.; Smith, J. New Frontiers in the Future of Ageing: From Successful Ageing to Cognitive Ageing. Annual Review of Psychology 2003, 55, 197-225. https://doi.org/10.1159/000067946.
27. Havighurst, R.J.; Albrecht, R. Successful Ageing. Gerontologist 1953, 13, 37-42.
28. Rawls, J. A Theory of Justice; Harvard University Press: Cambridge, MA, USA, 1971.
  29. Fraser, K.C.; Kiritchenko, S.; Nejadgholi, I. Extracting age-related stereotypes from social media texts. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, Marseille, France. 20-25 June, 2022.
  30. Berridge, C.; Grigorovich, A. Algorithmic harms and digital ageism in the use of surveillance technologies in nursing homes. Frontiers in sociology 2022, 7. [CrossRef]
  31. Amundsen, D.A. A critical gerontological framing analysis of persistent ageism in NZ online news media: Don't call us “elderly”! Jnl of Aging Studies 2022, 61, 101009. [CrossRef]
  32. Ball, K.K., Berch, D.B., Helmers, K.F., Jobe, J.B., Leveck, M.D., Marsiske, M., … Willis, S.L. Effects of cognitive training interventions with older adults: A randomized controlled trial. JAMA 2002, 288. [CrossRef]
  33. Mwanri, L. Migration, resilience, vulnerability and migrants’ health. Int Jrnl of Environ Rsch and Pub Health 2022, 19. [CrossRef]
  34. Huang, X. et al. Overcoming Alzheimer's disease stigma by leveraging artificial intelligence and blockchain technologies. Brain Sciences 2020, 10, 150. [CrossRef]
35. IEEE. IEEE 7010: A New Standard for Assessing the Well-being Implications of Artificial Intelligence. arXiv 2020, https://arxiv.org/abs/2005.06620.
  36. Rosenberg, M. Geographical gerontology and the digital divide: Equity in older adults’ access to AI-driven technology. Ageing & Society 2021, 41. [CrossRef]
  37. Barocas, S.; Hardt, M.; Narayanan, A. Fairness in Machine Learning: Lessons from Political Philosophy. Cambridge University Press, 2019. https://www.fairmlbook.org.
  38. Calasanti, T.; Slevin, K. Age Matters: Realigning Feminist Thinking. Routledge. Oxford, UK. 2001.
39. Baltes, P.B.; Smith, J. New Frontiers in the Future of Ageing: From Successful Ageing to Cognitive Ageing. Annual Review of Psychology 2003, 55, 197-225. https://doi.org/10.1159/000067946.
  40. Havighurst, R.J.; Albrecht, R. Successful Ageing. Gerontologist 1953, 13, 37-42.
41. Rawls, J. A Theory of Justice; Harvard University Press: Cambridge, MA, USA, 1971.
Figure 1. Casual Conversations V2: Snapshot of video participant representation within the dataset.
Table 1. Key Themes across the Literature Review of 47 Publications.
  • Algorithmic Bias: facial analysis; predictive policing; virtual health carers; AI in nursing homes; elder care robots; chatbots; healthcare; speech AI perception.
  • Health Care & Elder Care: AI in nursing homes; robotics in eldercare; long-term care; community care models; health equity and regulation; digital healthcare empowerment; older adults' perceptions of chatbots; digital technology for ageing.
  • Ageism in Employment & Social Platforms: job hiring; hiring and fairness in AI; ageism on digital platforms; representation; stereotypes in social media; age bias.
  • Smart Technologies and Ageing: smart mobility; self-driving cars; smart technologies; smart voice assistants; smart homes for elder care; smart cities and ageing.
  • Inclusive Design, Policy & Ethics: ageism in AI systems; ageism roadmap; age-inclusive design; older consumers; mitigation strategies for age bias.
Table 2. Representation and Bias Indicators in Selected AI Datasets.
Dataset | Older Adults (%) | Age Label Accuracy | Bias Indicators
AI-Face (face recognition) | <10 | Moderate | Under-represented older faces
Casual Conversations V2 (video) | <30 | High | More balanced but still skewed
CrowS-Pairs (NLP) | ~15 | N/A* | Includes stereotypes
*Note: The CrowS-Pairs dataset did not use specific ages, instead using generalized age-group terms.