1. Introduction
The use of artificial intelligence (AI) in hiring and promotion systems has rapidly transformed organizational practices, offering the promise of improved efficiency, reduced human error, and enhanced decision-making (Binns, 2020). However, this technological advancement has been accompanied by significant concerns regarding bias and fairness. AI systems, often perceived as objective and neutral, have been shown to inherit and sometimes magnify biases present in historical data, human decisions, and societal structures (O'Neil, 2016). This can lead to unfair outcomes, particularly for historically marginalized groups, and can undermine the credibility of AI in sensitive areas such as recruitment and career advancement (Angwin et al., 2016).
Bias in AI-enabled systems can manifest in various forms, including gender, racial, and age bias, and is often a result of biased training data or flawed algorithmic design (Sweeney, 2013). For instance, if an AI model is trained on data that reflects past discriminatory practices, it may perpetuate these biases when evaluating candidates for hiring or promotion. This raises ethical and legal issues, especially in jurisdictions where laws mandate equal treatment and prohibit discrimination in employment practices (Binns, 2020).
The increasing reliance on AI for talent management necessitates a deeper understanding of the factors that contribute to bias in these systems, as well as the methods to mitigate its harmful effects. Approaches such as ensuring diverse and representative training datasets, enhancing algorithmic transparency, and implementing fairness-aware algorithms have been proposed as solutions to address these challenges (Dastin, 2018; Mehrabi et al., 2019). As AI continues to play a pivotal role in workforce decisions, it is crucial to develop frameworks that ensure fairness, accountability, and inclusivity in these technologies to prevent the reinforcement of inequality and discrimination.
By critically examining the issue of bias in AI-enabled hiring and promotion systems, this paper aims to contribute to the ongoing discourse on the ethical implications of AI and propose strategies for creating more equitable systems in organizational contexts.
2. Literature Review
Analysis of Existing Research on Bias and Fairness
A growing body of research highlights the challenges of bias and fairness in AI systems, particularly in hiring and promotion contexts. Algorithmic bias arises when AI systems make decisions based on data that reflects historical inequalities or biased human judgments (O'Neil, 2016). These biases can be subtle but impactful, influencing how candidates are evaluated and ranked. For example, a study by Angwin et al. (2016) revealed that algorithms used in criminal justice systems were more likely to misclassify Black defendants as high risk compared to their white counterparts, demonstrating the potential for systemic racial bias in AI decision-making.
Data bias is another key factor contributing to unfair outcomes. Training data used to build AI systems often reflect societal biases, such as underrepresentation of certain demographic groups or overrepresentation of others (Mehrabi et al., 2019). When these biased datasets are used to train AI models, they can lead to discriminatory outcomes, such as favoring one gender or ethnic group over others in hiring or promotion decisions (Sweeney, 2013). For instance, if an AI system is trained on historical hiring data from an industry with a history of gender imbalance, the system may favor male candidates over female candidates, perpetuating existing inequalities.
Examination of Relevant Laws and Regulations
As AI technologies have gained prominence in hiring and promotion processes, governments and regulatory bodies have begun to implement laws and guidelines to ensure fairness and prevent discrimination. In the United States, for instance, the Equal Employment Opportunity Commission (EEOC) enforces federal laws prohibiting workplace discrimination based on race, color, religion, sex, national origin, disability, and age (U.S. Equal Employment Opportunity Commission, 2020). These laws, such as the Civil Rights Act of 1964 and the Age Discrimination in Employment Act of 1967, apply to AI-enabled hiring and promotion systems, requiring that AI tools do not result in discriminatory practices.
In addition to existing legislation, there is increasing scrutiny of AI systems from regulatory bodies. The European Union’s General Data Protection Regulation (GDPR) restricts solely automated decision-making that produces legal or similarly significant effects, a category that covers many hiring and promotion decisions, and emphasizes transparency and accountability (Voigt & Von dem Bussche, 2017). In particular, the GDPR’s widely discussed "right to explanation" allows individuals to contest automated decisions, a key consideration when AI is used in employment contexts. This evolving regulatory landscape reflects the growing recognition of the need to balance innovation in AI with the protection of individuals' rights and equal treatment under the law.
Together, these legal frameworks aim to ensure that AI systems are used in a way that promotes fairness and equality in employment practices, reducing the risk of bias and discrimination in the workplace. However, the application of these laws to AI systems remains a complex and ongoing challenge, particularly as AI technologies continue to evolve.
3. Sources of Bias in AI-Enabled Systems
Data Quality and Representation
One of the primary sources of bias in AI-enabled hiring and promotion systems is the quality and representativeness of the data used to train these algorithms. If the training data is not representative of diverse candidates or reflects historical biases, the AI system may perpetuate or amplify these biases in its decision-making. For example, if an AI system is trained on data from an industry that has historically favored a particular demographic, such as white male applicants, the system may learn to prioritize candidates from that group, leading to biased hiring outcomes (Sweeney, 2013). Data imbalances, such as underrepresentation of minority groups, can result in AI models that fail to recognize qualified candidates from those groups, contributing to discriminatory practices (Mehrabi et al., 2019).
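To make this concrete, a basic representation audit compares each group's share of the training data with its historical outcome rate. The following is a minimal sketch in Python, assuming a pandas DataFrame of past applicant records with hypothetical `gender` and `hired` columns:

```python
import pandas as pd

# Hypothetical historical applicant records; column names are illustrative.
applicants = pd.DataFrame({
    "gender": ["M", "M", "M", "M", "M", "M", "F", "F"],
    "hired":  [1, 0, 1, 1, 0, 1, 0, 0],
})

# Each group's share of the data versus its historical hiring rate.
representation = applicants["gender"].value_counts(normalize=True)
hire_rate = applicants.groupby("gender")["hired"].mean()

print(representation)  # M: 0.75, F: 0.25 -> underrepresentation of women
print(hire_rate)       # M: 0.67, F: 0.00 -> skewed historical outcomes
```

A gap of this kind in the raw data is an early warning that a model trained on it is likely to reproduce the imbalance.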
Algorithmic Design and Decision-Making
The design of the algorithm itself can also introduce bias. AI models are typically built using statistical techniques that aim to optimize performance based on specific criteria. However, these criteria may inadvertently favor certain groups or exclude others if they are not carefully selected (Binns, 2020). For instance, an algorithm that prioritizes certain attributes like education or previous work experience may overlook candidates with non-traditional backgrounds or those who have had career interruptions, thus disadvantaging women or people with disabilities (O'Neil, 2016). Additionally, the lack of transparency in how algorithms make decisions—often referred to as the "black box" problem—can make it difficult to identify and correct bias (Dastin, 2018).
Human Bias and Oversight
Despite the automation of decision-making, human bias can still play a significant role in AI-enabled hiring and promotion systems. Humans are involved in several stages of the AI process, including data collection, model development, and interpretation of results. If the individuals designing or overseeing the AI systems harbor unconscious biases, these biases can be unintentionally incorporated into the system (Angwin et al., 2016). Moreover, the absence of diversity in teams responsible for developing AI tools may result in algorithms that do not account for the perspectives and needs of marginalized groups (Binns, 2020).
4. Mitigation Strategies
Data Auditing and Preprocessing
One key strategy for mitigating bias is thorough data auditing and preprocessing. By carefully examining the training data for imbalances, inaccuracies, or representations of historical bias, organizations can identify potential sources of discrimination before training the AI models (Mehrabi et al., 2019). Techniques such as rebalancing datasets, oversampling underrepresented groups, or using synthetic data can help ensure that the AI system is exposed to diverse and representative examples, reducing the risk of biased decision-making (O'Neil, 2016). Data preprocessing also includes removing or anonymizing sensitive attributes such as race, gender, or age, which may contribute to bias in hiring and promotion decisions.
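As an illustration of these preprocessing steps, the sketch below oversamples underrepresented groups to parity and then removes the sensitive attribute before training. The data and column names are hypothetical, and in practice proxy variables (such as zip code or school name) would also need review.

```python
import pandas as pd
from sklearn.utils import resample

# Illustrative applicant records; names and values are hypothetical.
df = pd.DataFrame({
    "gender":     ["M", "M", "M", "M", "M", "M", "F", "F"],
    "experience": [5, 3, 8, 2, 6, 4, 7, 3],
    "hired":      [1, 0, 1, 0, 1, 0, 1, 0],
})

def rebalance_by_group(data: pd.DataFrame, group_col: str, seed: int = 0) -> pd.DataFrame:
    """Oversample smaller groups until each matches the largest group's size."""
    target = data[group_col].value_counts().max()
    parts = [
        resample(part, replace=True, n_samples=target, random_state=seed)
        for _, part in data.groupby(group_col)
    ]
    return pd.concat(parts, ignore_index=True)

balanced = rebalance_by_group(df, "gender")

# Drop the sensitive attribute before training; note that removal alone
# does not guarantee fairness if other features correlate with it.
X = balanced.drop(columns=["gender", "hired"])
y = balanced["hired"]
```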
Algorithmic Auditing and Testing
Algorithmic auditing involves assessing AI models for fairness and accuracy throughout their lifecycle. Regular testing and validation of AI models are essential to detect any potential biases that may arise during operation (Dastin, 2018). Audits can be conducted by internal teams or third-party experts who evaluate the algorithm’s performance across different demographic groups to ensure that it does not disproportionately disadvantage any one group. If biases are detected, adjustments can be made to the model’s design or decision-making criteria to improve fairness and accuracy.
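A simple audit of this kind compares selection rates across demographic groups and computes a disparate impact ratio; under the EEOC's informal four-fifths rule, a ratio below 0.8 is commonly treated as a red flag. The sketch below is illustrative, with hypothetical column names and toy data.

```python
import pandas as pd

def selection_rates(decisions: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Positive-outcome rate for each demographic group."""
    return decisions.groupby(group_col)[outcome_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Ratio of the lowest to the highest group selection rate."""
    return rates.min() / rates.max()

# Hypothetical model decisions to audit.
audit = pd.DataFrame({
    "gender":   ["M", "F", "M", "F", "M", "F", "M", "F"],
    "selected": [1, 0, 1, 1, 1, 0, 1, 0],
})

rates = selection_rates(audit, "gender", "selected")
print(rates)
print(f"disparate impact ratio: {disparate_impact_ratio(rates):.2f}")  # 0.25
```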
Human Oversight and Review
While AI systems can automate many aspects of decision-making, human oversight remains crucial to ensure fairness. Implementing human-in-the-loop (HITL) systems, where human evaluators review AI recommendations or decisions, can help prevent biased outcomes and correct errors (Binns, 2020). Human oversight also allows for nuanced judgments that may not be fully captured by the algorithm, such as evaluating a candidate’s potential for growth or cultural fit within the organization. In cases where AI systems make high-stakes decisions, such as promotions or terminations, human review is essential for accountability.
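One way to operationalize a human-in-the-loop design is to route borderline scores and negative recommendations to a human reviewer rather than acting on them automatically. The following sketch is purely illustrative; the thresholds and routing rules are assumptions, not recommendations.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    candidate_id: str
    score: float        # model's predicted suitability, 0..1
    advance: bool       # model's recommendation to advance the candidate

def route(rec: Recommendation, low: float = 0.4, high: float = 0.6) -> str:
    """Decide whether a model recommendation needs human review."""
    if low <= rec.score <= high:
        return "human_review"   # ambiguous score: human judgment required
    if not rec.advance:
        return "human_review"   # rejections always reviewed for accountability
    return "auto_proceed"       # clear positive: proceed, but keep an audit log

print(route(Recommendation("c-101", 0.55, True)))   # human_review
print(route(Recommendation("c-102", 0.91, True)))   # auto_proceed
print(route(Recommendation("c-103", 0.20, False)))  # human_review
```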
Regular Monitoring and Evaluation
Regular monitoring and evaluation of AI systems are necessary to ensure that they continue to operate in a fair and unbiased manner over time. As external factors—such as shifts in societal norms or changes in the workforce demographic—evolve, AI systems must be continuously updated to reflect these changes (Angwin et al., 2016). Ongoing monitoring can detect any emergent biases that may develop as the model interacts with new data or as organizational needs shift. Moreover, organizations should establish clear protocols for responding to complaints or concerns regarding AI-driven decisions, ensuring transparency and accountability in the process (Mehrabi et al., 2019).
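In practice, such monitoring often amounts to tracking per-group outcome rates over time from the system's decision log and flagging widening gaps. A minimal sketch, assuming a hypothetical log with timestamps, group labels, and outcomes:

```python
import pandas as pd

# Hypothetical decision log; in production this would come from the
# system's audit trail rather than being constructed by hand.
log = pd.DataFrame({
    "when":     pd.to_datetime(["2024-01-15", "2024-02-10", "2024-04-05",
                                "2024-05-20", "2024-07-02", "2024-08-30"]),
    "gender":   ["M", "F", "M", "F", "M", "F"],
    "selected": [1, 1, 1, 0, 1, 0],
})

# Per-group selection rate by quarter; a persistent or widening gap should
# trigger the organization's response protocol for suspected bias.
trend = (
    log.assign(quarter=log["when"].dt.to_period("Q"))
       .groupby(["quarter", "gender"])["selected"]
       .mean()
       .unstack("gender")
)
print(trend)
```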
Examination of Best Practices for Ensuring Fairness and Transparency
Best practices for ensuring fairness and transparency in AI-enabled hiring and promotion systems include fostering diversity in the development teams, ensuring transparency in algorithmic decision-making, and implementing clear accountability mechanisms. Transparent algorithms that allow for explainability can help users understand how decisions are made, which is essential for building trust in AI systems (Dastin, 2018). Furthermore, developing fairness frameworks and metrics to assess the impact of AI systems on various demographic groups can guide organizations in creating more equitable systems (O'Neil, 2016). By adhering to these best practices, organizations can mitigate bias and create AI systems that support fair and inclusive hiring and promotion processes.
5. Best Practices for Fair and Transparent AI-Enabled Systems
Proposed Best Practices
Develop Diverse and Representative Training Data
A fundamental best practice for ensuring fairness in AI-enabled systems is the development of diverse and representative training data. Ensuring that training datasets include a wide range of demographic groups, such as different races, genders, and ages, is crucial for preventing biased outcomes (Mehrabi et al., 2019). Organizations should proactively seek out underrepresented groups in their data and work to correct any imbalances that may exist. This process also involves identifying potential sources of bias in historical data and working to remove or adjust these biases to create a more inclusive dataset (O'Neil, 2016).
Implement Regular Auditing and Testing
Another best practice is to implement continuous auditing and testing of AI algorithms. Regular audits can help identify any discrepancies or biases that may have emerged during the system's operation. These audits should include fairness assessments to measure the impact of AI decisions across different demographic groups (Dastin, 2018). Testing AI systems on various scenarios, particularly on new or unseen data, is vital to ensure that the model is not inadvertently favoring certain groups over others (Binns, 2020). Third-party audits can also provide an unbiased evaluation of the system's performance.
Ensure Transparency and Explainability
AI systems must be transparent and explainable to build trust and ensure that users understand how decisions are made. Implementing explainable AI (XAI) techniques allows organizations to provide clear justifications for automated decisions, particularly in sensitive areas like hiring and promotion (Angwin et al., 2016). Transparency in AI decision-making also helps mitigate the "black box" problem, where the inner workings of algorithms are obscure, making it difficult to pinpoint where biases might arise (Dastin, 2018). Clear documentation of the AI system's design, purpose, and decision-making process is critical for fostering accountability.
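Many explanation techniques exist; one model-agnostic option is permutation importance, which measures how much a model's performance drops when a feature's values are shuffled, revealing how heavily the model relies on that feature. The sketch below uses synthetic data and hypothetical feature names purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["experience", "test_score", "tenure_gap"]  # hypothetical
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic hiring labels

model = LogisticRegression().fit(X, y)

# Accuracy drop when each feature is shuffled = the model's reliance on it.
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for name, score in zip(feature_names, result.importances_mean):
    print(f"{name}: {score:.3f}")
```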
Provide Human Oversight and Review
Human oversight remains an essential component of AI-enabled hiring and promotion systems. While AI can automate many aspects of decision-making, human evaluators should be involved to ensure that the decisions align with ethical standards and fairness principles. Human oversight allows for nuanced judgments that might not be fully captured by algorithms, such as evaluating the cultural fit of candidates or considering extenuating circumstances (Binns, 2020). Furthermore, human reviewers can step in to challenge or correct decisions when AI systems demonstrate bias or fail to operate as intended.
6. Case Studies/Industry Examples
Analysis of Successes, Challenges, and Lessons Learned
Both Unilever and IBM have achieved notable successes in reducing bias in their AI systems. For example, Unilever's AI-driven platform has streamlined the hiring process, making it faster and more objective while ensuring that diverse candidates are considered fairly. The system's use of diverse data has helped Unilever increase diversity within its workforce, demonstrating the power of inclusive data practices (Dastin, 2018). However, these organizations also face challenges. One of the biggest hurdles is ensuring that AI systems remain free from bias over time, particularly as societal norms and workforce demographics evolve. Despite regular audits and monitoring, it remains difficult to guarantee that AI systems will always operate in a fully unbiased manner, especially as they are exposed to new data (Mehrabi et al., 2019).
Moreover, both companies have encountered challenges related to explainability and transparency. While IBM’s AI Fairness 360 toolkit has been a step forward, providing clarity on AI decisions remains a complex issue. In practice, the "black box" nature of AI systems still presents a significant barrier to transparency, particularly in complex decision-making scenarios (Binns, 2020). This highlights the ongoing need for research into more transparent and explainable AI techniques.
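To make the toolkit's role concrete, the following is a minimal sketch of the kind of dataset-level fairness check AI Fairness 360 supports, assuming the open-source `aif360` package; the toy data and encodings are hypothetical, and the exact API may vary across versions.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy numeric data: 'sex' encoded 1 = privileged group, 0 = unprivileged.
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "score": [0.9, 0.8, 0.7, 0.6, 0.9, 0.8, 0.4, 0.3],
    "hired": [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df, label_names=["hired"], protected_attribute_names=["sex"]
)
metric = BinaryLabelDatasetMetric(
    dataset,
    unprivileged_groups=[{"sex": 0}],
    privileged_groups=[{"sex": 1}],
)

# Disparate impact well below 0.8, or a statistical parity difference far
# from zero, indicates the unprivileged group is selected less often.
print("disparate impact:", metric.disparate_impact())                            # 0.33
print("statistical parity difference:", metric.statistical_parity_difference())  # -0.50
```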
From these case studies, several lessons can be learned. First, it is essential for organizations to prioritize diversity not only in the final decisions but throughout the entire AI development process, including the training of algorithms and the selection of features used in decision-making. Second, regular auditing and testing are crucial for ensuring that AI systems evolve in ways that promote fairness, as biases can emerge over time. Finally, while AI systems can be powerful tools for reducing bias, human oversight and intervention remain essential for maintaining accountability and fairness in hiring and promotion decisions.
These examples underscore the importance of a comprehensive, multi-faceted approach to addressing bias in AI systems, involving diverse data, transparency, human involvement, and ongoing evaluation to foster fairness in AI-enabled hiring and promotion processes.
7. Methodology
Research Design: Mixed-Methods Approach
This study adopts a mixed-methods approach, combining both quantitative and qualitative research methods to provide a comprehensive understanding of the challenges and strategies associated with addressing bias and fairness in AI-enabled hiring and promotion systems. The quantitative component of the research will involve surveys, while the qualitative component will include interviews and case studies. This mixed-methods approach allows for a more robust analysis of the research problem by integrating numerical data with rich, detailed insights from industry professionals and case studies (Creswell, 2014). The combination of these methods ensures that the study captures both the statistical patterns in AI implementation and the nuanced experiences of organizations that are actively working to mitigate bias.
Data Collection: Primary and Secondary Data
Primary Data
The primary data will be collected through surveys and interviews. Surveys will be administered to human resource professionals, AI developers, and organizational leaders involved in the implementation of AI systems in hiring and promotion processes. The surveys will collect quantitative data on the extent of AI usage, perceived challenges, and measures taken to address bias and ensure fairness. The survey questions will be designed to explore various aspects of AI system deployment, including data diversity, algorithmic design, auditing practices, and transparency.
In-depth interviews will also be conducted with key stakeholders, including HR managers, AI experts, and individuals responsible for policy-making within organizations. These interviews will provide qualitative insights into the decision-making processes, the challenges faced in addressing bias, and the strategies employed to ensure fairness in AI-enabled hiring and promotion systems. The interviews will be semi-structured to allow for flexibility in exploring specific issues that may arise during the conversation (DiCicco-Bloom & Crabtree, 2006).
Secondary Data
In addition to primary data, secondary data will be gathered from existing literature, industry reports, and case studies. The literature review will draw from scholarly articles, books, and reports on AI ethics, algorithmic bias, and fairness in AI systems. Industry reports and case studies will provide real-world examples of how organizations are implementing and addressing bias in AI technologies. These secondary sources will offer context and background information, helping to frame the research findings and provide additional validation for the primary data (Hart, 1998).
Data Analysis: Quantitative and Qualitative Approaches
Quantitative Analysis
The quantitative data collected from surveys will be analyzed using statistical modeling techniques. Descriptive statistics, such as frequencies and percentages, will be used to summarize the responses and identify common patterns in how organizations use AI in their hiring and promotion systems. Additionally, inferential statistics, such as correlation analysis or regression models, may be applied to examine relationships between the use of specific bias mitigation strategies (e.g., data auditing, algorithmic testing) and outcomes related to fairness and diversity. This quantitative approach will provide empirical evidence of the effectiveness of different strategies in reducing bias in AI systems.
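For instance, the association between reported mitigation practices and a perceived-fairness score could be estimated with an ordinary least squares regression. The sketch below uses synthetic survey responses solely to show the shape of such an analysis; the variables are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 120

# Hypothetical survey variables: whether the organization audits its data
# and tests its algorithms (0/1), and a perceived-fairness score (1-7 scale).
audits = rng.integers(0, 2, n)
tests = rng.integers(0, 2, n)
fairness = 3 + 1.2 * audits + 0.8 * tests + rng.normal(0, 1, n)

X = sm.add_constant(np.column_stack([audits, tests]))
model = sm.OLS(fairness, X).fit()
print(model.summary())  # coefficients estimate each strategy's association
```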
Qualitative Analysis
The qualitative data from interviews and case studies will be analyzed using thematic analysis, a widely used method for identifying and interpreting patterns or themes within qualitative data (Braun & Clarke, 2006). Thematic analysis will help uncover the underlying factors that contribute to bias in AI systems, as well as the perceptions and experiences of those involved in implementing fairness measures. The analysis will involve coding the interview transcripts and case study reports to identify recurring themes related to AI design, data quality, human oversight, and organizational practices. These themes will then be grouped into broader categories, allowing for a deeper understanding of the challenges and strategies related to fairness in AI-enabled hiring and promotion.
Together, the quantitative and qualitative analyses will offer a comprehensive view of the issue of bias and fairness in AI systems. By integrating both types of data, the study will provide a nuanced understanding of how AI can be leveraged to create more equitable hiring and promotion practices while addressing the complex challenges associated with bias in these systems.
8. Results/Findings
Identification of Key Sources of Bias
The study identifies several key sources of bias in AI-enabled hiring and promotion systems, which are critical to understanding the challenges in achieving fairness. One major source of bias is data quality and representation. Many AI models rely on historical data that reflects pre-existing societal inequalities, which can result in algorithms favoring certain demographic groups over others. For instance, if past hiring practices favored male candidates or excluded individuals from certain ethnic backgrounds, the AI system trained on such data will likely reproduce these biases in future hiring decisions (O'Neil, 2016). This type of data bias is one of the most common challenges organizations face when implementing AI in hiring and promotion processes.
Another significant source of bias stems from the algorithmic design and decision-making process. AI models often prioritize specific features such as education, experience, or skills, but these features may inadvertently disadvantage certain groups. For example, AI systems that prioritize candidates with traditional career paths may overlook individuals with non-linear careers, such as women who took career breaks for caregiving responsibilities (Binns, 2020). Additionally, the "black box" nature of many AI systems—where the decision-making process is not fully transparent—can exacerbate the difficulty of identifying and correcting bias (Dastin, 2018).
Effective Mitigation Strategies
To address these sources of bias, several mitigation strategies have proven effective in promoting fairness in AI systems. One of the most important strategies is data auditing and preprocessing. Regular audits of the training data are essential to identify imbalances or biases that could affect the fairness of AI systems (Mehrabi et al., 2019). By ensuring that the data used is representative of diverse demographic groups, organizations can reduce the risk of biased decision-making. Techniques such as oversampling underrepresented groups or removing biased features from the dataset can further mitigate data-related biases (O'Neil, 2016).
Another crucial mitigation strategy is algorithmic auditing and testing. Organizations that implement regular testing of their AI systems to assess fairness are more likely to detect and address potential biases early on. These audits can include evaluating AI decisions across different demographic groups and using fairness metrics to measure the system's impact (Binns, 2020). When biases are identified, adjusting the algorithm to ensure more equitable decision-making is a key step in promoting fairness.
Best Practices for Ensuring Fairness and Transparency
The study also highlights several best practices for ensuring fairness and transparency in AI-enabled hiring and promotion systems. One important practice is ensuring transparency and explainability in AI decision-making. AI systems should be designed in a way that allows their decisions to be understood and explained to users, particularly in high-stakes areas such as hiring and promotion (Dastin, 2018). Transparency is critical not only for fostering trust in the system but also for ensuring that any bias in the decision-making process can be identified and corrected.
Furthermore, human oversight and review are essential to ensuring fairness. While AI can automate many aspects of the hiring and promotion process, human involvement is necessary to ensure that the system operates ethically and equitably (Binns, 2020). Human reviewers can intervene when AI systems make decisions that appear biased or unfair, providing an additional layer of accountability. Human oversight can also ensure that decisions are made with context in mind, taking into account factors that AI models might not fully capture.
Finally, regular monitoring and evaluation of AI systems are necessary to maintain fairness over time. AI systems are dynamic and may evolve as they are exposed to new data or as societal norms shift. Regular evaluations allow organizations to track the performance of their AI models and make adjustments as necessary to ensure that they continue to operate in a fair and unbiased manner (Mehrabi et al., 2019). Organizations should also establish clear accountability mechanisms to ensure that any identified biases are addressed promptly and transparently.
9. Conclusion
Summary of Key Findings
This study has identified several critical findings regarding bias and fairness in AI-enabled hiring and promotion systems. The key sources of bias stem from data quality and representation, algorithmic design, and human oversight. Biased historical data, when used to train AI models, can perpetuate inequalities and lead to discriminatory outcomes in hiring and promotion decisions. Similarly, the design of AI algorithms and the lack of transparency in their decision-making processes contribute to the risk of bias, often making it difficult to pinpoint and correct unfair practices. Furthermore, human bias and inadequate oversight can exacerbate these issues, making it essential for organizations to actively engage in continuous evaluation and review of their AI systems.
Effective mitigation strategies have been identified, including data auditing, algorithmic testing, and the implementation of human oversight mechanisms. These strategies help address bias by ensuring that AI systems are trained on representative datasets and regularly audited to ensure fairness. Additionally, adopting best practices such as ensuring transparency, explainability, and regular monitoring can foster greater trust in AI systems and minimize the risk of bias in decision-making.
Recommendations for Organizations
To address the issues of bias and fairness in AI-enabled hiring and promotion systems, organizations should consider the following recommendations:
Prioritize Fairness and Transparency: Organizations must make fairness and transparency central to their AI initiatives. AI systems should be designed with mechanisms that ensure decisions are explainable and that any potential biases can be identified and rectified (Dastin, 2018). Transparency in algorithmic decision-making builds trust and ensures that employees and candidates feel their applications are being assessed fairly.
Develop and Implement Effective Mitigation Strategies: Organizations should adopt data auditing and algorithmic testing as core practices in their AI development processes. Regular audits of training data and the performance of AI systems are essential to identify and mitigate biases. By implementing proactive strategies like data rebalancing, fairness checks, and diversity inclusion efforts, organizations can significantly reduce the risk of biased decisions (Mehrabi et al., 2019).
Ensure Regular Monitoring and Evaluation: Continuous monitoring is necessary to track the effectiveness of AI systems over time. AI models must be periodically evaluated against fairness metrics to ensure they are not perpetuating biases as they encounter new data or as societal norms evolve. Establishing clear accountability frameworks for addressing bias is also crucial (Binns, 2020).
Provide Training and Education on Bias and Fairness: It is essential for organizations to invest in training programs for AI developers, HR professionals, and decision-makers on the risks of bias in AI and best practices for ensuring fairness. Educating stakeholders on the ethical implications of AI systems and the importance of unbiased decision-making will help foster a culture of fairness within organizations (O'Neil, 2016).
Future Research Directions
While this study offers valuable insights, there are several potential avenues for future research. One area for further investigation is the development of advanced techniques for explainability and transparency in AI systems, particularly in the context of complex, high-stakes decision-making. As AI becomes more integral to decision-making processes, enhancing the interpretability of algorithms will be crucial to building trust and ensuring accountability.
Another important area for future research is the exploration of intersectional bias in AI systems. Most existing research has focused on individual factors like gender or race, but there is a growing need to investigate how multiple dimensions of identity (e.g., race, gender, disability, socioeconomic status) interact to produce biased outcomes in AI-driven hiring and promotion decisions. Understanding the intersectionality of bias can help develop more nuanced and effective mitigation strategies.
Lastly, future studies could examine the long-term impact of AI on workplace diversity and inclusion. While AI systems may be designed to reduce bias, there is limited empirical evidence on how their use in hiring and promotion affects diversity in the workplace over time. Longitudinal studies could provide deeper insights into the sustainability of diversity initiatives and the role AI plays in achieving long-term organizational equity.
References
- Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
- Sourav, M. S. A., Merrett, L., Reza, J., Akash, T. R., & Ali, K. Y. (2024). Fortifying financial integrity: Advanced fraud detection techniques for business security.
- Md Shakil, I., Md, R., Md Sultanul Arefin, S., & Md Ashraful, A. (2022). Impact of Digital Transformation on Financial Reporting and Audit Processes. American Journal of Economics and Business Management, 5(12), 213-227.
- Md, S., Md Saiful, I., & Jannatul, F. (2025). Harnessing AI Adoption in the Workforce A Pathway to Sustainable Competitive Advantage through Intelligent Decision-Making and Skill Transformation. American Journal of Economics and Business Management, 8(3), 954-976.
- Binns, R. (2020). On the apparent conflict between individual and group fairness. In ACM Conference on Fairness, Accountability, and Transparency (pp. 514–524).
- Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101.
- Creswell, J. W. (2014). Research design: Qualitative, quantitative, and mixed methods approaches (4th ed.). SAGE Publications.
- Dastin, J. (2018). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G.
- DiCicco-Bloom, B., & Crabtree, B. F. (2006). The qualitative research interview. Medical Education, 40(4), 314–321.
- Hart, C. (1998). Doing a literature review: Releasing the social science research imagination. SAGE Publications.
- Jannach, D., Lerche, L., & Jugovac, M. (2020). Analyzing the performance of recommender systems for hiring decisions. Journal of Intelligent Information Systems, 54(1), 147–171.
- Kaufman, B. E. (2019). Automation, artificial intelligence, and human resources management: Challenges and opportunities. Human Resource Management Review, 29(1), 1–10.
- Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2019). A survey on bias and fairness in machine learning. arXiv preprint arXiv:1908.09635. https://arxiv.org/abs/1908.09635.
- O'Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown Publishing Group.
- Sweeney, L. (2013). Discrimination in online ad delivery. ACM Queue, 11(3), 10–29.
- U.S. Equal Employment Opportunity Commission. (2020). Laws enforced by EEOC. https://www.eeoc.gov/statutes/laws-enforced-eeoc.
- Voigt, P., & Von dem Bussche, A. (2017). The EU General Data Protection Regulation (GDPR): A practical guide. Springer.
- IBM Research. (n.d.). AI Fairness 360 open source toolkit. https://aif360.mybluemix.net.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).