Preprint
Article

This version is not peer-reviewed.

Designing AI-Enabled Career Coaching Tools for Women: An Expert Interview Study

Submitted: 25 April 2026

Posted: 28 April 2026


Abstract
AI-enabled coaching is increasingly used in career development, yet limited attention has been paid to how such tools should be designed for women, particularly in male-dominated contexts. This study examines which AI-enabled career coaching use cases experts consider appropriate and what design and governance conditions should guide their responsible development. Sixteen multidisciplinary experts across career development, coaching, organisational development, women’s health, AI coaching, AI ethics, and DEI participated in semi-structured interviews. Data were analysed using reflexive thematic analysis. Five themes were identified: integrated expert lenses; the biopsychosocial context of women’s careers; AI intervention mechanics; the human–AI coaching relationship; and governance and agency risks. Experts supported AI use conditionally, primarily for bounded functions such as structured reflection, scoped psychoeducation, and behavioural support. They cautioned against reproducing androcentric assumptions, individualising structural barriers, and substituting for human empathy, contextual judgement, or professional accountability. The study proposes an agency-centred framework specifying requirements for conceptual alignment, ecosystem attunement, intervention design, and governance. These findings suggest that AI-enabled career coaching is most credible as a constrained, developmental support tool. Future research should examine these use cases with women end-users across career stages and contexts.
Subject: Social Sciences – Psychology

1. Introduction

Women’s careers in male-dominated fields continue to be shaped by structural inequities, gendered organisational norms, and unequal access to opportunity (Makarem & Wang, 2020; Stamarski & Son Hing, 2015). These constraints extend beyond representation or pay disparities (European Commission, 2024). Women may encounter biased evaluation, limited sponsorship, and workplace cultures in which the same behaviours are interpreted differently depending on gender, with cumulative consequences for progression, retention, and career decision-making over time (Ibarra et al., 2010; Trauth, 2006). Such pressures may be particularly acute during career transitions, including return to work after maternity, caregiving periods, and midlife health transitions, where organisational expectations often still assume linear and uninterrupted careers (Jack et al., 2016; O’Neill et al., 2023; Torres et al., 2024).
These conditions create a challenge for career support. Although contemporary career scholarship increasingly recognises non-linearity, context, and wellbeing, many applied models and organisational support practices continue to privilege individual adaptation, continuous availability, and male-centric markers of success (Charness et al., 2024; Greenhaus et al., 2018; Ndabeni et al., 2020; Lent & Brown, 2008; Sullivan & Baruch, 2009). As a result, support may insufficiently account for how women’s career development is shaped by the interaction of personal agency, organisational context, and life-course demands.
At the same time, artificial intelligence (AI), particularly conversational systems, is increasingly being introduced into coaching and career development as a scalable form of support (Passmore et al., 2025a; Terblanche, 2024). These tools may widen access to reflection, planning, and guidance where cost, time, or service availability limit human support (Terblanche et al., 2022). However, their use in women’s career development raises important concerns. AI systems may reproduce gendered assumptions embedded in training data and organisational norms, individualise structural problems, or handle sensitive personal and workplace information in ways that raise privacy, safety and accountability issues (Guo et al., 2024).
Although scholarship on AI in coaching and career development is growing, limited attention has been paid to how such tools should be designed for women’s career development, particularly in male-dominated contexts. There is therefore a need to identify which career coaching functions are appropriate for AI, what needs such tools should address, and what safeguards should guide their responsible development and use.
This study addresses that gap through semi-structured interviews with multidisciplinary experts. Its primary contribution is to identify and refine AI-enabled career coaching use cases for women’s career development, and to specify the contextual, design, and governance requirements that should shape their responsible development. Rather than positioning AI as a substitute for human support, the study examines how AI-enabled career coaching might provide bounded developmental support within clear ethical, relational, and professional limits.
The study builds on a prior scoping review that mapped emerging applications of AI in women’s career development and generated an initial set of use cases (Portell-Fonolla et al., in press). It addresses the following research questions:
  • Which AI-enabled career coaching use cases do experts identify as most relevant and feasible for women’s career development, particularly in male-dominated fields?
  • What career challenges, transition points, and support needs should these use cases be designed to address?
  • In what ways do existing career support models and practices fall short in meeting these needs?
  • What design, risk, and governance requirements do experts identify for the responsible development and use of such tools?
In this paper, “AI-enabled career coaching tools” refers to AI-mediated, typically conversational, systems designed to support career reflection, decision-making, skills planning, and preparation for career-relevant situations. These tools are treated as developmental supports, not as substitutes for therapy, clinical care, or employer-led performance evaluation. The study shows that expert support for such tools was conditional rather than general and develops an expert-derived, agency-centred framework to guide their design and evaluation.

2. Materials and Methods

This study used a qualitative design based on semi-structured expert interviews to identify and refine AI-enabled career coaching use cases for women’s career development, particularly in male-dominated fields, and to examine the contextual, design, and governance requirements for their responsible development. Expert interviews were appropriate because the research questions required integrative judgement across domains relevant to the design and deployment of AI-enabled career coaching, including career and coaching psychology, organisational development, women’s health, AI coaching product development, and AI ethics (Creswell & Poth, 2018).
The interview study built on a prior scoping review (Portell-Fonolla et al., in press), which generated an initial set of use cases (Appendix A). The interviews were used to assess the adequacy of existing career support approaches for women’s career realities across the life course, and to evaluate, refine, and prioritise candidate AI-enabled use cases while identifying conditions for responsible implementation.
Participants were recruited through purposive sampling to capture disciplinary and applied perspectives central to the study (Stratton, 2024). Inclusion criteria were: (a) at least five years of relevant professional experience; (b) active involvement in career development or coaching practice, career research, workforce or executive development, women’s health practice relevant to work participation, and/or AI coaching and ethics; and (c) the ability to comment on the applications, limitations, and safeguards of AI-enabled career coaching.
The final sample comprised 16 experts based in Canada, Germany, Ireland, Portugal, Spain, Switzerland, the United Kingdom, and the United States. Thirteen were women and three were men. Participants occupied predominantly senior and specialist roles, including AI coaching founders, senior executive coaches, career development academics and practitioners, HR and organisational development specialists, AI ethics advisors, a DEI consultant, and medical professionals in psychiatry, general practice, and women’s health. The sample was also highly experienced, with 13 of the 16 participants reporting at least 10 years of professional experience, including several with 15 years or more. Their backgrounds spanned AI coaching, AI ethics, executive coaching, career development, counselling, HR, organisational development, DEI, mental health, and women’s health. An anonymised overview of participant characteristics is provided in Appendix B.
Sampling adequacy was guided by the principle of information power (Malterud et al., 2016). This was considered sufficient given the specificity of the study aim, the specialised expertise of participants, the inclusion of multiple relevant domains, and the depth of the material. Recruitment continued until the dataset provided sufficient breadth and depth to assess candidate use cases, examine convergence and divergence across expert perspectives, and develop a coherent interpretive account.
Data were collected through one-to-one semi-structured interviews. Most were conducted via video-conferencing platforms, with one conducted in person. Interviews lasted approximately 45-60 minutes. A flexible interview guide (Appendix C) elicited: (a) expert perspectives on women’s career challenges and support needs in male-dominated contexts, including life-course transitions; (b) limitations of existing career support models and organisational practices; (c) evaluation of candidate AI-enabled career coaching use cases derived from the prior scoping review; and (d) views on risks, safeguards, and governance requirements. Participants were encouraged to draw on professional experience and provide concrete examples of feasible tool functions and potential harms. One participant provided written responses rather than taking part in a live interview. Although this limited opportunities for real-time probing, the responses addressed the same core topics and were included in the analysis.
Data were analysed using reflexive thematic analysis (RTA). RTA was selected because the study aimed to generate an interpretive, conceptually informed account of patterns of meaning across expert perspectives, rather than to quantify views or produce a single definitive coding structure. Analysis proceeded iteratively through familiarisation with the dataset, generation of initial codes, development and refinement of candidate themes, definition and naming of themes, and production of the final analytic narrative (Braun & Clarke, 2019; 2022). Coding attended to both semantic and latent meaning, including assumptions about gender, careers, technology, and governance.
The first author led the analysis through repeated reading, coding, memo writing, and thematic development. Two additional researchers contributed through structured analytic dialogue. Consistent with RTA, these discussions were used not to establish inter-coder reliability, but to deepen interpretation, surface assumptions, challenge premature closure, and refine theme boundaries.
Because the study had a design-oriented aim, a cross-participant coding matrix was developed after the primary thematic analysis to indicate whether particular subthemes had been raised across the expert panel. This served only as a supplementary descriptive device to show how widely issues surfaced, not to determine theme development or establish analytic validity. These counts are therefore not interpreted as evidence of importance, magnitude, or statistical prevalence.
Several strategies were used to strengthen the credibility and transparency of the analysis. An audit trail was maintained, including interview materials, analytic memos, coding notes, and evolving thematic maps. The research team engaged in critical-reflexive dialogue throughout the analytic process to interrogate interpretations, clarify theme boundaries, and consider divergent or tension-filled cases. Quotations were selected to illustrate both convergent and contrasting perspectives while avoiding over-reliance on isolated vivid examples.
Given the interdisciplinary nature of the topic, reflexive attention was paid to how different professional lenses might shape both participant accounts and the researchers’ interpretations. The analysis therefore sought to preserve cross-domain complexity while producing themes that were coherent and analytically useful for the study’s design-oriented aim.
Ethical approval was obtained from the Ethics Committee of the Catholic University of Portugal (ref number CETCH2025-239). All participants received an information sheet detailing study aims, procedures, risks, and data handling; provided informed consent prior to participation; and were informed of their right to withdraw without penalty. Participant identities were anonymised during transcription and reporting. Data were stored securely on password-protected university servers with access limited to the research team, in alignment with GDPR principles and institutional data management requirements (European Parliament & Council, 2016). Given the sensitivity of discussions involving workplace experiences, health-related issues, and AI governance concerns, identifying details were removed and quotations were edited only where necessary to preserve meaning while minimising re-identification risk.

3. Results

Reflexive thematic analysis generated five overarching themes: (1) integrated expert lenses and theoretical foundations; (2) biopsychosocial ecosystem of women’s careers; (3) AI architecture: mechanics of intervention; (4) the human-AI coaching dialectic; and (5) ethical governance and the agency risk. Experts evaluated AI-enabled career coaching not in isolation, but in relation to women’s career realities, the limits of existing support models, and the safeguards needed for responsible use.
To indicate how widely particular issues surfaced across the expert panel, Table 1 reports the number of participants referring to each subtheme. Consistent with the analytic approach, these counts are presented only as indicators of breadth across the sample; they were not used to generate themes and are not interpreted as measures of prevalence or importance. The full codebook is provided in Supplementary Materials (Table S1).

3.1. Theme 1. Integrated Expert Lenses & Theoretical Foundations

Experts approached AI-enabled career coaching through multiple professional lenses, drawing on career development theory, coaching practice, organisational experience, and wellbeing or health expertise. Two subthemes captured this foundation: multidisciplinary career frameworks and critique of the androcentric model.

3.1.1. Multidisciplinary Career Frameworks (n=14)

Most experts described career development as non-linear, contextual, and shaped by identity, relationships, and changing priorities over time. Career support was therefore framed less as matching individuals to fixed options and more as helping them interpret circumstances, clarify choices, and navigate uncertainty.
“We'll draw the squiggle and we get everybody to sort of self-select like which feels more like your career and pretty much everybody goes with the squiggle [...] I hear that like all the time” (E9; Career development practitioner).
Several participants also emphasised that career decisions are shaped not only by the individual, but by organisational conditions and available support.
"It's not only you who is making the decision, but it's also your whole community... the effort that the organization also makes that empowerment in terms of providing time, providing resources, providing guidance" (E16; Organisational transformation advisor).
Within this framing, experts were more supportive of AI-enabled coaching when it focused on reflection, clarification, and preparation rather than direct recommendation.
"Human counselling aims to promote agency and autonomy, leading the person to find their own answers. If AI is limited to giving 'free advice' or direct solutions without considering the person's specificity, it may compromise this process" (E13; Career development academic & researcher).

3.1.2. Critique of the Androcentric Model (n=11)

The majority of experts argued that dominant career and workplace models still reflect assumptions that do not fit women’s realities in male-dominated contexts, including norms of uninterrupted availability, narrow definitions of leadership, and expectations of adaptation to unequal environments.
"These systems were not built with women in mind... if you think about something as basic as not considering women's biology in seatbelts, you can understand why not considering women in the workplace is also very common" (E14; AI strategy and ethics advisor).
This concern extended to AI-enabled coaching: participants cautioned that tools built on historically biased assumptions about success, leadership, or communication may reproduce those norms under the appearance of neutral guidance.
"For AI to support women and minorities in general, there needs to be purpose behind it... it takes months and months of work and it's a work that will never stop" (E15; DEI consultant).

3.2. Theme 2. Biopsychosocial Ecosystem of Women’s Careers

3.2.1. Gender-Specific Life Transitions (n=10)

Experts identified maternity, caregiving, perimenopause, and menopause as particularly significant transition points for women’s career development, especially in male-dominated environments. These periods were described as times when women may need to renegotiate identity, capacity, and career direction within workplaces that still assume stable performance and uninterrupted availability.
"In perimenopause ... that's when you're maybe at the peak of your career... and suddenly because of brain fog and sleep issues and hot flashes...you're suddenly gonna decide maybe it's better for me to retire early" (E05; Psychiatrist and menopause specialist).
These accounts suggested that useful AI-enabled support should respond to changing circumstances rather than assume a generic or stable career pathway.

3.2.2. Systemic-Psychological Interaction (n=13)

Many experts linked structural inequities to their psychological and behavioural consequences. Bias in evaluation, exclusion from networks, and double binds around leadership behaviour were described as shaping women’s willingness to self-advocate, negotiate, or pursue advancement.
“If a woman is kind of direct then is perceived as too bossy and if it's too kind is perceived that doesn't have enough personality or energy... no matter what is the role that women are playing there is a much more negative perception around that behaviour” (E16; Organisational transformation advisor).
These were not framed as individual deficits, but as responses to unequal systems. In this context, experts were more positive about AI uses that support interpretation, preparation, and rehearsal than those that reduce structural problems to women’s confidence or communication alone.
“The most promising use cases are those that explore AI’s capacity to offer simulation, behavioural training, and objective feedback, acting as a technical assistant” (E13; Career development academic & researcher).

3.3. Theme 3. AI Architecture: Mechanics of Intervention

Experts distinguished between general-purpose conversational AI and a more credible AI-enabled career coaching environment. They focused not only on what the system might help users do, but also on how support should be structured. Three subthemes captured this theme: adaptive intervention modalities, system intelligence and coordination, and affective gating and readiness.

3.3.1. Adaptive Intervention Modalities (n=11)

Experts most often endorsed AI-enabled coaching when it was framed as providing bounded and structured forms of support. Three types of use case were especially prominent: structured reflection, psychoeducational support, and behavioural scaffolding. Structured reflection involved helping users clarify goals, identify assumptions, and consider trade-offs. Psychoeducational support referred to the provision of scoped, evidence-aligned information to support sense-making. Behavioural scaffolding included more action-oriented functions such as rehearsal, check-ins, action planning, and support for follow-through.
"I would build it almost on the journaling side and very much not on the advice side... maybe ask you some reflective questions, but I would be very careful even in such an area with setting very clear goals" (E01; AI coaching executive).
Several experts emphasised that the value of AI lay in helping users unpack situations, consider alternatives, and prepare action, rather than supplying answers.
"If it could help you parse apart the problem, see it from a bunch of different angles and help you think through how to approach it, it would be so amazing" (E04; senior executive coach).

3.3.2. System Intelligence and Coordination (n=5)

A smaller group of experts, particularly those with AI coaching or AI ethics backgrounds, emphasised the importance of system intelligence and coordination. They argued that an effective AI-enabled coaching tool would need an internal logic to structure the interaction, sequence activities, and maintain role clarity. Examples included moving from onboarding to exploration, then synthesis and action; distinguishing between reflective, informational, and practical modes of support; and tailoring prompts to the user’s stated goals and context.
These participants treated system coordination as relevant to both quality and safety. In their view, a system that lacked role clarity or sequencing was more likely to drift into generic reassurance, confused role performance, or inappropriate advice.
"We started with a very linear process... then we had a non-linear process...then you have the issue that it just switches too often" (E01; AI coaching executive).

3.3.3. Affective Gating and Readiness (n=5)

A small subset of experts highlighted the importance of readiness-sensitive support. They noted that deeper reflection may be unhelpful when users are distressed, cognitively overloaded, or highly constrained by time and circumstance. In such cases, AI-enabled coaching may need to shift towards lighter-touch planning, grounding, or signposting.
"If it asks you a very difficult question, people tend to go out... they put their phone back into their pocket. And they're like, well, maybe I'll think about it" (E01; AI coaching executive).
“We need to start with recommendations that are actually feasible and align with the characteristics of each person... it has to be very tailored” (E12; Career development academic & researcher).
This theme shows that experts did not simply ask whether AI should be used in career coaching, but what forms of coaching support could be responsibly structured through AI.

3.4. Theme 4. The Human-AI Coaching Dialectic

Experts consistently described AI-enabled coaching as potentially useful, but only within clear limits. Two subthemes captured this tension: technological affordances and strengths, and technological limitations.

3.4.1. Technological Affordances/Strengths (n=13)

Experts identified availability, immediacy, and lower-threshold access as the main advantages of AI-enabled coaching. These were seen as particularly relevant for women whose schedules, responsibilities, or financial constraints make conventional support harder to access. Participants also noted possible benefits linked to anonymity and reduced fear of judgement.
"AI comes in as being able to meet the user 24/7 on demand in a personalized manner... if I need career coaching, that might be because I've just been let off and I need an intense 2 to 4 weeks" (E02; AI coaching executive).
In practice, these affordances were usually linked to bounded use cases such as structured reflection, rehearsal, and action follow-through.

3.4.2. Technological Limitations (n=15)

At the same time, almost all experts emphasised the limits of AI-enabled coaching. Participants highlighted the absence of genuine empathy, difficulty interpreting non-verbal and relational cues, and limited ability to understand subtle organisational dynamics. High-complexity or high-risk situations were therefore consistently seen as requiring human judgement.
"Communication is much more than writing or talking. It has a non-verbal aspect that is very important" (E12; Career development academic).
Overall, experts did not regard AI and human coaching as interchangeable, but as having different roles and limits.

3.5. Theme 5. Ethical Governance & the Agency Risk

Experts treated governance as integral to whether AI-enabled career coaching could be considered acceptable. Concerns focused on how AI might reproduce bias, mishandle sensitive information, or undermine users’ autonomous judgement. Three subthemes captured these issues: reductionism bias; safety, privacy, and clinical safeguards; and dependency and agency erosion.

3.5.1. Reductionism Bias (n=7)

Some experts, particularly those with AI ethics or DEI expertise, raised concerns that AI systems may reproduce gendered assumptions embedded in training data and organisational norms. Participants warned that such systems could privilege male-coded benchmarks of leadership or success and frame structural problems as matters of personal confidence or resilience.
"The AI coach would keep working with her, all the things she can do ...it won't give up. That is kind of an issue" (E01; AI coaching executive).
"Since women have entered the workforce, we've all been socially engineered to the male model... ask for what you want, be bold, be clear about your answers, do your power pose, be confident...this is what society has been telling women to do" (E06; Senior executive coach).

3.5.2. Safety, Privacy & Clinical Safeguards (n=11)

A larger group focused on privacy, psychological safety, and boundary management. Because career conversations may involve health, workplace conflict, identity, and personal circumstances, participants emphasised the need for clear scope boundaries, explicit consent, strong data governance, and escalation pathways.
"It is crucial that the system is designed to acknowledge its limitation in demonstrating true affective empathy, referring the user to a human professional when problems escalate" (E13; Career development academic).

3.5.3. Dependency & Agency Erosion (n=8)

A further concern involved the possibility that users might become overly reliant on AI for reflection or decision-making. Experts described this as an agency risk: if AI becomes too directive, authoritative, or affirming, users may begin to defer judgement rather than strengthen it. This can also be particularly problematic where users might act on AI guidance in situations involving distress, conflict, or major career decisions.
"The AI coach gives them some advice and then they go do it because the AI coach said so and that could create real damage...there's really no accountability in that sense" (E01; AI coaching executive).
Accordingly, experts argued that AI-enabled coaching should be designed to reinforce agency rather than substitute for it.

3.6. Cross-Theme Synthesis: Agency-Centred Framework

Although experts broadly agreed that AI may have a bounded role in career coaching, their emphases differed by professional lens. Career development, coaching, and organisational experts more often foregrounded women’s career context, relational complexity, and the limits of generic developmental advice. By contrast, participants with AI coaching and AI ethics backgrounds more often emphasised system coordination, role clarity, readiness sensitivity, bias, and structural misdiagnosis.
These differences did not amount to opposing positions. Rather, they reflected different emphases in judging the conditions under which AI-enabled coaching might be considered appropriate.
The five themes show that expert support for AI-enabled career coaching was conditional rather than general. Experts were most positive about bounded use cases that support reflection, preparation, and follow-through while remaining attentive to women’s career context and operating within clear ethical, relational, and governance limits. These findings informed the development of the agency-centred design and governance framework presented in Figure 1.

4. Discussion

This study suggests that the value of AI-enabled career coaching for women lies not in broad substitution for human support, but in the careful design of bounded, context-sensitive, and agency-preserving interventions (Passmore & Tee, 2023). AI was seen as potentially useful where it can scaffold reflection, preparation, and follow-through, but far less appropriate where support depends on nuanced contextual judgement, relational depth, or accountability for high-stakes decisions. The key issue, therefore, is not whether AI coaching “works” in the abstract, but how scope, context, and governance shape what constitutes acceptable and useful support in this domain (Passmore et al., 2025a, 2025b; Terblanche et al., 2022, 2024).
A central contribution of the study is to show that AI-enabled career coaching for women cannot be designed from generic or decontextualised models of career development (Bimrose et al., 2013; Pryor & Bright, 2003; Tupper & Ellis, 2020). Career support technologies are not neutral delivery mechanisms: they embed assumptions about what counts as progress, which goals are worth pursuing, and whether difficulties are interpreted as individual, relational, or structural. In the context of women’s careers, particularly in male-dominated settings, this is consequential because systems that ignore organisational power, life-course transitions, and gendered norms risk reinforcing the very conditions they purport to address (O’Neil et al., 2008, 2015; Patton & McMahon, 2006; Savickas, 2005).
This point is especially important in relation to women’s careers in male-dominated environments. Experts repeatedly described sponsorship gaps, gendered evaluation, caregiving pressures, and physiological transitions as career-relevant influences, not peripheral issues. The analytic implication is that AI-enabled coaching may misframe structurally produced difficulties as matters of confidence, resilience, or strategy if it is designed around overly individualised assumptions (Rahim et al., 2018). In other words, a system may appear supportive while still steering women towards adaptation to inequitable environments. This extends existing critiques of androcentric and linear career models into the AI domain: unless these assumptions are explicitly addressed, they risk being reproduced through prompts, feedback logics, success rubrics, and behavioural recommendations (Ndabeni et al., 2020; O’Neil & Hopkins, 2015).
The findings align with longstanding work showing that women’s careers are shaped by cumulative and systemic constraints, including gendered organisational norms, unequal advancement opportunities, and life-course demands that conventional career models often treat as secondary (O’Neil et al., 2008; Ryan & Morgenroth, 2024). They also reinforce emerging work on AI coaching by suggesting that its value lies less in broad substitution for human coaching than in the careful design of limited modalities under explicit governance conditions (Passmore et al., 2025a). More broadly, the study contributes a design-oriented argument: in the context of women’s career development, structural inequity should be treated as a design constraint, not as an external issue beyond the remit of the intervention.
These wider implications are synthesised in Figure 1, which presents an agency-centred design-and-governance framework derived from the five themes. Rather than treating the findings as a set of separate considerations, the framework shows how responsible AI-enabled career coaching depends on alignment across four interdependent layers: conceptual alignment, ecosystem attunement, intervention mechanics, and agency-centred governance, with continuous evaluation and human accountability operating across the system. The figure therefore represents a design synthesis of the study’s findings, translating thematic insights into a structured model for development, implementation, and oversight.
At the base of the framework, the first layer, ‘Conceptual alignment’, captures the argument that AI-enabled coaching should be grounded in contextual, non-linear, and multidisciplinary understandings of careers rather than narrow optimisation models (Law, 2009; Lent et al., 1994; McDonald et al., 2005; Sullivan & Al Ariss, 2021; Super et al., 1996). This is the foundation on which later design decisions depend. If the conceptual model is reductive or male-normed, subsequent layers are likely to reproduce those assumptions, regardless of how sophisticated the interface appears (Worth, 2016). The second layer, ‘Ecosystem attunement’, extends this by showing that women’s careers must be understood as part of a biopsychosocial career ecosystem shaped by life-course transitions, internalised constraints, and organisational conditions (Blake, 2018; Gibson, 1995; McIntosh et al., 2012; Mosconi, 2024). These two layers make clear that effective support requires not only personalisation, but correct interpretation of context (Brown et al., 2012; De Looff et al., 2018; Holland, 1997).
The third layer of the framework, ‘Intervention mechanics’, corresponds most directly to the study’s practical contribution. Experts were most supportive of bounded modalities such as structured reflection, scoped psychoeducation, and behavioural micro-support (Del Corso & Rehfuss, 2011; Hodges & Clifton, 2004; Iqbal & Maldonado Garcia, 2020; Ryan et al., 2025). These uses were perceived as more credible because they can be deliberately constrained and are compatible with developmental coaching processes without requiring the system to simulate human empathy or assume professional responsibility (Whitmore, 2009). This suggests that the most credible role for AI-enabled coaching is not to replicate the full relational depth of human coaching, but to support well-defined developmental tasks (Graßmann & Schermuly, 2021; Passmore et al., 2025c; Terblanche et al., 2024).
The fourth layer, ‘Agency-centred governance’, is what gives the framework its normative core. Experts’ concerns were not limited to bias, privacy, or technical failure, although these were important (Steenkamp & Terblanche, 2026; Terblanche et al., 2023). More fundamentally, they questioned whether AI-enabled coaching might erode agency by becoming directive, encouraging over-reliance, or subtly repositioning structural problems as matters of individual adjustment (Montag et al., 2024). As shown in Figure 1, this makes governance integral rather than supplementary to design. Scope boundaries, privacy-by-design, human escalation pathways, bias and reductionism controls, and explicit agency protections are not optional safeguards added after deployment; they are core conditions of legitimacy (Dattathrani & De’, 2023; Haase, 2025; Zallio et al., 2025).
Seen in this way, Figure 1 also helps clarify the paper’s broader theoretical contribution. The findings suggest that one of the central ethical risks in this domain is not simply algorithmic bias, but structural misdiagnosis: the risk that discriminatory or exclusionary environments are misread as deficits in the individual user’s behaviour, mindset, or preparedness (Worth, 2016). This sharpens the ethical question from whether the system is fair in general to whether it is locating the source of the problem appropriately. That distinction is especially important in career contexts, where support can easily slide into conformity if structural conditions are left uninterrogated (Brey & Dainow, 2024; Guo et al., 2024; Laine et al., 2025).
Three tensions should shape the design and governance of AI-enabled career coaching. First, accessibility versus dependency: reducing barriers of cost and availability risks encouraging habitual reliance if left unbounded (Kim et al., 2026). Second, context sensitivity versus gendered steering: personalisation becomes problematic when it rests on stereotyped assumptions or reproduces male-coded benchmarks of success (O’Neil & Bilimoria, 2005). Third, structured support versus human accountability: AI may assist with bounded developmental tasks, but interpretation, escalation, and ethical judgement remain human responsibilities (Beg, 2025). As Figure 1 illustrates, none of these tensions can be resolved at the interface level alone; they require attention across conceptual, contextual, technical, and governance layers simultaneously.

4.1. Practical Implications

For practitioners, the findings suggest that AI-enabled career coaching is best positioned as a bounded adjunct to human support rather than a substitute for it. Its strongest fit appears to be in structured and repeatable activities such as reflection, rehearsal, preparation for difficult conversations, and action planning. However, outputs should be used to support reflection and preparedness, not as authoritative guidance in complex, emotionally charged, or high-stakes situations.
For organisations and policy makers, Figure 1 highlights that responsible deployment requires more than accessibility or efficiency gains; it requires governance arrangements that preserve agency, protect privacy, separate coaching from evaluative HR functions, and provide clear escalation pathways to human coaches and therapists where needed. It also requires recognition that AI-enabled coaching cannot substitute for organisational responsibility to address structural barriers affecting women’s careers.
For developers, the framework indicates that AI-enabled career coaching should be designed as a constrained intervention system rather than as a general-purpose conversational agent. This includes grounding the system in non-linear, context-sensitive understandings of women’s careers; recognising life-course transitions, organisational constraints, and the risk of structural misdiagnosis; and prioritising bounded modalities such as structured reflection, scoped psychoeducation, and behavioural micro-support. The findings also suggest that more mature systems may benefit from orchestration and readiness-sensitive depth control so that support is sequenced appropriately, remains within scope, and shifts towards lighter-touch guidance or signposting where deeper intervention is unsuitable. Across these layers, core design requirements include explicit limits, privacy protections, bias and reductionism checks, escalation pathways, and interaction patterns that reinforce user judgement rather than displacing it.

4.2. Limitations

This study was designed to generate interpretive, design-oriented insight rather than establish prevalence or effectiveness. The findings identify plausible use cases, risks, and governance requirements, but do not show how AI-enabled career coaching tools perform in practice or how women themselves would experience them.
The purposive sample was relatively small and predominantly female, with 13 of 16 participants being women, which likely shaped the emphasis on gendered career barriers and support needs. Although the sample was multidisciplinary and cross-national, it may not capture the views of frontline practitioners, professionals in lower-resource settings, or workers in unrepresented sectors and labour markets.
The study also relied on expert judgement rather than end-user experience, so the proposed use cases and design conditions remain provisional. In addition, intersectional differences were not examined in depth, despite their likely importance for the design and appropriateness of AI-enabled support.

4.3. Future Research

Future research should build on these findings in three directions. First, participatory and user-centred research is needed to examine how women themselves evaluate proposed AI-enabled coaching use cases across different life stages and work contexts (Nakao et al., 2022; Steen, 2013). Second, mixed-method and longitudinal studies should test the specific bounded modalities identified here, particularly structured reflection, rehearsal, and action support, using outcomes that include agency, self-efficacy, psychological wellbeing, and career sustainability. Third, implementation research should examine governance in practice, especially in organisational settings, including how consent is understood over time, how data access and privacy are managed, and whether AI-enabled coaching complements rather than displaces human support and organisational responsibility.

5. Conclusions

The findings of the current study suggest that expert support for AI-enabled career coaching is conditional rather than general. AI is perceived as most appropriate when used for bounded developmental functions, especially structured reflection, scoped psychoeducation, and behavioural micro-support, rather than as a substitute for human coaching, counselling, or organisational accountability.
The study also shows that these use cases cannot be designed in isolation from women’s career realities. Rather than helping women adapt more efficiently to biased environments, AI-enabled career coaching tools should support context-sensitive reflection and action while avoiding the individualisation of structural problems.
A further contribution of the study is to clarify the boundaries and governance requirements that should shape the development of such tools. Experts were nearly unanimous that AI cannot replace human empathy, contextual judgement, or professional accountability in complex, high-stakes, or emotionally charged situations. Concerns about privacy, safety, clinical overreach, bias, structural misdiagnosis, and dependency were therefore central to judgements about what constitutes acceptable use. In this respect, responsible AI-enabled career coaching depends not only on useful functions, but also on governance arrangements that protect agency and maintain clear limits.
Study findings indicate that the most credible role for AI-enabled career coaching is as a tightly scoped, theory-informed, and agency-preserving form of developmental support. Future research should test the bounded use cases identified here with women end-users across different career stages and contexts, examining outcomes such as confidence, career decision self-efficacy, emotional wellbeing, safety, and perceived agency and usefulness.

Supplementary Materials

The following supporting information can be downloaded at the website of this paper posted on Preprints.org.

Author Contributions

Conceptualization, S.P.F.; methodology, S.P.F.; validation, S.P.F., A.D.G. and J.C.P.; formal analysis, S.P.F., A.D.G. and J.C.P.; investigation, S.P.F., A.D.G. and J.C.P.; data curation, S.P.F., A.D.G. and J.C.P.; writing—original draft preparation, S.P.F.; writing—review and editing, A.D.G., L.C. and J.C.P.; visualization, S.P.F.; supervision, A.D.G., L.C. and J.C.P.; project administration, S.P.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of the Catholic University of Portugal (ref. CETCH2025-239) on 15 December 2025.

Data Availability Statement

The data that support the findings of this study are not publicly available due to confidentiality and participant re-identification risks. Anonymised excerpts and supporting materials are available from the corresponding author upon request.

Acknowledgments

During the preparation of this manuscript, the authors used ChatGPT (OpenAI, version GPT-5) for limited language editing and clarity. The authors reviewed and edited all outputs and take full responsibility for the content of this publication.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A. Use cases explored during the expert interviews.
Theme: Potential use cases

Life–work balance
- Setting boundaries
- Reducing mental load
- Practicing conversations with managers to uphold balance without career penalty

Remote influence
- Building visibility in virtual environments
- Navigating organisational politics remotely
- Growing careers without co-location

Promotion & self-advocacy
- Negotiation skills for promotions
- Effective self-promotion
- Building a network of allies

Leadership & transitions
- Coaching for leadership roles
- Supporting career returners (e.g., post-maternity leave)
- Navigating career pivots

Bias response
- Assertiveness training
- Responding to microaggressions
- Navigating salary negotiations through a bias-aware lens

Inclusive well-being
- Preventing burnout
- Building resilience
- Managing emotional labour

Mentor stories
- AI-curated inspirational journeys from diverse women leaders

AI literacy
- Building confidence in using AI tools
- Developing skills for AI-integrated workplaces
- Understanding AI ethics

Women's physiological changes
- Navigating career and leadership through physiological changes (menstrual cycle, maternity, perimenopause, and menopause)
- Strategies for managing emotional states, energy, and workload

Appendix B

Table B. Participant profiles.
# | Domain expertise | Title | Gender | Years of experience | Region
1 | AI coaching | Founder/executive | M | 10–14 | EU
2 | AI coaching | Founder/executive | M | 10–14 | EU
3 | Executive and leadership coaching | Senior executive coach | F | 20+ | UK/I
4 | Career development and HR | Senior executive coach | F | 20+ | NA
5 | Mental health and women’s health | Psychiatrist and menopause specialist | F | 5–9 | NA
6 | Executive and leadership coaching | Senior executive coach | F | 20+ | NA
7 | AI ethics and coaching | Senior executive coach, AI ethics advisor | F | 10–14 | UK/I
8 | Executive and leadership coaching | Senior executive coach | F | 20+ | UK/I
9 | Career development practitioner | Senior executive coach | F | 15–19 | UK/I
10 | Executive and leadership coaching and HR | Senior executive coach | F | 20+ | NA
11 | General practice and women’s health | General practitioner and menopause specialist | F | 15–19 | UK/I
12 | Career development and counselling | Career development academic | F | 10–14 | EU
13 | Career development and counselling | Career development academic | F | 15–19 | EU
14 | AI ethics and DEI (diversity, equity, and inclusion) | AI strategy and ethics advisor | F | 15–19 | NA
15 | DEI (diversity, equity, and inclusion) | DEI consultant | F | 5–9 | EU
16 | Organisational development and HR | Organisational transformation advisor | F | 20+ | EU

Appendix C

Appendix C.1. Interview Guide

Introduction (5 minutes)
  • Brief introduction and explanation of the purpose of the study.
  • Remind the participant of confidentiality and the right to withdraw at any time.
  • Confirm consent to audio and video recording.
  • Outline the structure of the interview.
[Ask for consent to record. Press recording]
Background and Context (5–10 minutes)
  • Could you briefly describe your background and experience in career development, coaching, AI technologies, and/or supporting women?
  • In your professional experience, what are the most common career challenges faced by women in their career journeys?
Current apps (5–10 minutes)
  • Where do you see gaps or limitations in existing career development models or applications for women?
AI in Career Coaching (20 minutes)
[This section is intended only for career and coaching experts]
  • Which career development or coaching frameworks or models do you typically draw on in your work (e.g., SCCT, Career Construction)?
  • How do these frameworks inform the way you structure your coaching or assess the needs of women, particularly those navigating male-dominated fields like tech?
  • Which elements of these models do you think could be meaningfully translated into an AI-based coaching tool to support women’s career development?
  • How might AI systems support, if at all, coaching processes such as identity exploration, values clarification, goal attainment, or life-role integration for women in tech?
  • Do you think AI can replicate a human coach – that is, do as well, underperform, or outperform a human coach?
    o If underperform: where do you see the limitations of AI in this area?
    o If as well or outperform: in your view, what enables AI to match or exceed the capabilities of human coaches? Do you see any limitations despite this?
  • In your view, what role, if any, might AI play in addressing gender-specific barriers or advancing equity in career coaching / development?
  • What limitations or risks do you foresee, if any, in using AI for career development, particularly in ways that could impact women’s wellbeing, autonomy, safety, or inclusion?
Physiological Changes and Professional Impact (20 minutes)
[This section is intended only for participants with expertise in women's health]
  • Based on your clinical experience, what are some of the most significant physiological changes women typically experience throughout their working lives (e.g., menstrual cycle, pregnancy, post-partum, perimenopause, menopause)?
  • How might these changes affect women’s energy levels, emotional states, cognitive focus, or general capacity to manage professional responsibilities?
  • In your view, how well are these physiological experiences currently understood and supported in workplace environments?
  • Are there any specific support strategies (clinical or behavioural) that could be adapted or translated into career coaching or workplace interventions?
    o Strategies to help manage biological changes, emotional states, energy/workload, and cognitive shifts
  • What role, if any, could technology – particularly AI – play in supporting women to navigate their careers in light of these physiological changes?
  • Are there any ethical considerations or risks that should be kept in mind when designing AI tools addressing women’s health in the context of professional life?
Use Cases Exploration (20–30 minutes)
  • Based on preliminary findings, here are some AI career coaching use cases identified in the literature [present short list].
    o Which of these use cases do you find most promising or relevant? Why?
    o Are there any that you would modify, remove, or further develop?
  • Are there any additional use cases or innovative applications of AI for career coaching that come to mind based on your experience?
Prioritisation and Feasibility (10–15 minutes)
  • Of the use cases discussed, which do you believe would have the greatest impact on supporting women in tech? Why?
  • How feasible do you believe it is to implement these use cases in real-world coaching environments?
  • What ethical considerations or risks should be taken into account when designing AI career coaching solutions for women in tech?
Closing (5 minutes)
  • Is there anything else you would like to add that you feel is important for this research?
  • Would you be open to being contacted for follow-up clarification if necessary?

References

  1. Beg, M. J. (2025). Responsible AI integration in mental health research: Issues, guidelines, and best practices. Indian Journal of Psychological Medicine, 47(1), 5–8. [CrossRef]
  2. Bimrose, J., McMahon, M., & Watson, M. (2013). Career trajectories of older women: implications for career guidance. British Journal of Guidance & Counselling, 41(5), 587–601. [CrossRef]
  3. Blake, A. (2018). Your body is your brain: Leverage your somatic intelligence to find purpose, build resilience, deepen relationships and lead more powerfully. Trokay Press.
  4. Braun, V., & Clarke, V. (2019). Reflecting on reflexive thematic analysis. Qualitative Research in Sport, Exercise and Health, 11(4), 589–597. [CrossRef]
  5. Braun, V., & Clarke, V. (2022). Thematic analysis: A practical guide. London: SAGE.
  6. Brey, P. & Dainow, B. (2024). Ethics by design for artificial intelligence. AI Ethics 4, 1265–1277. [CrossRef]
  7. Brown, A., Bimrose, J., Barnes, S. A., & Hughes, D. (2012). The role of career adaptabilities for mid-career changers. Journal of Vocational Behavior, 80(3), 754-761. [CrossRef]
  8. Charness, G., Le Bihan, Y. and Villeval, M. C. (2024). Mindfulness training, cognitive performance and stress reduction. Journal of Economic Behavior & Organization, 217, 207-226. [CrossRef]
  9. Creswell, J. W., & Poth, C. N. (2018). Qualitative inquiry and research design: Choosing among five approaches (4th ed.). Thousand Oaks, CA: Sage.
  10. Dattathrani, S. & De’, R. (2023). The Concept of Agency in the Era of Artificial Intelligence: Dimensions and Degrees. Information Systems Frontier v.25, p.29–54. [CrossRef]
  11. De Looff, P. C., Cornet, L. J. M., Embregts, P. J. C. M., Nijman, H. L. I., & Didden, H. C. M. (2018). Associations of sympathetic and parasympathetic activity in job stress and burnout: A systematic review. PLOS ONE, 13(10), e0205741. [CrossRef]
  12. Del Corso, J. & Rehfuss, M. C. (2011). The role of narrative in career construction theory. Journal of Vocational Behavior. Volume 79, Issue 2, Pages 334-339. [CrossRef]
  13. European Commission. (2024). The gender pay gap situation in the EU: Facts and figures. https://commission.europa.eu/strategy-and-policy/policies/justice-and-fundamental-rights/gender-equality/equal-pay/gender-pay-gap-situation-eu_en.
  14. European Parliament and Council of the European Union. (2016). Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). Official Journal of the European Union, L 119, 1–88. http://data.europa.eu/eli/reg/2016/679/oj.
  15. Gibson, C. (1995). An Investigation of Gender Differences in Leadership Across Four Countries. Journal of International Business Studies 26, 255–279. [CrossRef]
  16. Graßmann, C., & Schermuly, C. C. (2021). Coaching With Artificial Intelligence: Concepts and Capabilities. Human Resource Development Review, 20(1), 106-126. [CrossRef]
  17. Greenhaus, J.H., Callanan, G.A., & Godshalk, V.M. (2018). Career Management for Life (5th ed.). Routledge. [CrossRef]
  18. Guo, Y., Guo, M., Su, J., Yang, Z., Zhu, M., Li, H., Qiu, M., & Liu, S.S. (2024). Bias in Large Language Models: Origin, Evaluation, and Mitigation. Computation and Language. arXiv. [CrossRef]
  19. Jack, G., Riach, K., Bariola, E., Pitts, M., Schapper, J., & Sarrel, P. (2016). Menopause in the workplace: What employers should be doing. Maturitas, Volume 85, 2016, Pages 88-95. [CrossRef]
  20. Haase, J. (2025). Augmenting coaching with GenAI: Insights into use, effectiveness, and future potential. arXiv. [CrossRef]
  21. Hodges, T. D., & Clifton, D. O. (2004). Strengths-Based Development in Practice. In P. A. Linley & S. Joseph (Eds.), Positive psychology in practice (pp. 256–268). John Wiley & Sons, Inc. [CrossRef]
  22. Holland, J. L. (1997). Making vocational choices: A theory of vocational personalities and work environments (3rd ed.). Psychological Assessment Resources.
  23. Ibarra H., Carter N.M. & Silva C. (2010). Why men still get more promotions than women. Harvard business review, 88(9), 80–126.
  24. Iqbal, A. & Maldonado Garcia, M. (2020). Gendered Narratives and Linguistic Force-Dynamics: Influencing Career Paths of Women. South Asian Journal of Social Studies and Economics. [CrossRef]
  25. Kim, T. W., Usman, U., Garvey, A., & Duhachek, A. (2026). From algorithm aversion to AI dependence: Deskilling, upskilling, and emerging addictions in the GenAI age. Consumer Psychology Review, 9(1), 142–164. [CrossRef]
  26. Laine, J., Minkkinen, M., & Mäntymäki, M. (2025). Understanding the Ethics of Generative AI: Established and New Ethical Principles. Communications of the Association for Information Systems, 56, 1-25. [CrossRef]
  27. Law, B. (2009). Building on what we know: Community-interaction and its importance for contemporary careers-work. The Career-leaning NETWORK. http://www.hihohiho.com/memory/cafcit.pdf.
  28. Lent, R.W., Brown, S. D., & Hackett, G. (1994). Toward a Unifying Social Cognitive Theory of Career and Academic Interest, Choice, and Performance. Journal of Vocational Behavior, 45(1), 79-122. [CrossRef]
  29. Lent, R. W., & Brown, S. D. (2008). Social Cognitive Career Theory and Subjective Well-Being in the Context of Work. Journal of Career Assessment, 16(1), 6-21. [CrossRef]
  30. Malterud, K., Siersma, V. D., & Guassora, A. D. (2016). Sample Size in Qualitative Interview Studies: Guided by Information Power. Qualitative health research, 26(13), 1753–1760. [CrossRef]
  31. Makarem, Y. & Wang, J. (2020). Career experiences of women in science, technology, engineering, and mathematics fields: A systematic literature review. Human Resource Development Quarterly. 2020;31:91–111. [CrossRef]
  32. McDonald, P., Brown, K. & Bradley, L. (2005). Have traditional career paths given way to protean ones?: Evidence from senior managers in the Australian public sector. Career Development International, Vol. 10 No. 2, pp. 109–129. [CrossRef]
  33. McIntosh, B., McQuaid, R., Munro, A., & Dabir-Alai, P. (2012), Motherhood and its impact on career progression. Gender in Management: An International Journal, Vol. 27 No. 5 pp. 346–364. [CrossRef]
  34. Montag, C., Nakov, P., & Ali, R. (2024). Considering the IMPACT framework to understand the AI-well-being-complex from an interdisciplinary perspective. Telematics and Informatics Reports, 12, 100112. [CrossRef]
  35. Mosconi, L. (2024). The menopause brain: New science empowers women to navigate the pivotal transition with knowledge and confidence. Penguin Press.
  36. Nakao, Y., Stumpf, S., Ahmed, S., Naseer, A. & Strappelli, L. (2022). Toward Involving End-users in Interactive Human-in-the-loop AI Fairness. ACM Trans. Interact. Intell. Syst. 12, 3, Article 18, 30 pages. [CrossRef]
  37. Ndabeni, L.L., Mashigo, P.M., Mbongi Ndabeni, M., & Rugimbana, R. (2020). Mainstreaming Gender in the Analyses of Science Technology and Innovation Policies. International Journal of Science and Research, Volume 9 Issue 6, June. https://dx.doi.org/10.21275/SR20427234931.
  38. O’Neil, D. A. & Bilimoria, D. (2005). Women’s career development phases: Idealism, endurance, and reinvention. Career Development International, 10(3), 168–189. [CrossRef]
  39. O’Neil, D. A., Hopkins, M. M., & Bilimoria, D. (2008). Women’s careers at the start of the 21st century: Patterns and paradoxes. Journal of Business Ethics, 80, 727–743. [CrossRef]
  40. O’Neil, D. A. & Hopkins M. M. (2015). The impact of gendered organizational systems on women’s career advancement. Organizational Psychology. Frontiers in Psychology, Volume 6 - 2015. [CrossRef]
  41. O'Neill, M. T., Jones, V., & Reid, A. (2023). Impact of menopausal symptoms on work and careers: a cross-sectional study. Occupational medicine (Oxford, England), 73(6), 332–338. [CrossRef]
  42. Passmore, J. & Tee, D. (2023). Can Chatbots like GPT-4 replace human coaches: Issues and dilemmas for the coaching profession, coaching clients and for Organisations. The Coaching Psychologist. 19(1), 47-54. [CrossRef]
  43. Passmore, J., Olafsson, B. & Tee, D. (2025a). A systematic literature review of artificial intelligence (AI) in coaching: insights for future research and product development. Journal of Work-Applied Management, Vol. ahead-of-print. [CrossRef]
  44. Passmore, J., Tee, D. R., & Rutschmann, R. (2025b). Getting better all the time: using professional human coach competencies to evaluate the quality of AI coaching agent performance. Coaching: An International Journal of Theory, Research and Practice, 1–17. [CrossRef]
  45. Passmore, J., Tee, D., Palermo, G. & Rutschmann, R. (2025c). Human coaches & AI coaching agents: An exploratory quasi-experimental design study of workplace client attitudes and performance. Journal of Work-Applied Management. http://doi.org/10.1108/JWAM-02-2025-0032.
  46. Patton, W. & McMahon, M. (2006). The Systems Theory Framework of Career Development and Counseling: Connecting Theory and Practice. Int J Adv Counselling 28, 153–166. [CrossRef]
  47. Portell-Fonolla, S., El Fassi, Y., Gaspar, A.G., Correia, L., & Pinto, J. C. (in press). AI-Driven Career Development: A Scoping Review of Applications and Interventions for Women’s Career Development. International Journal of Educational and Vocational Guidance.
  48. Pryor, R. G. L., & Bright, J. (2003). The Chaos Theory of Careers. Australian Journal of Career Development, 12(3), 12-20. [CrossRef]
  49. Rahim A.G., Akintunde, O., Afolabi, A.A., & Okikiola I.O. (2018). The glass ceiling conundrum: Illusory belief or barriers that impede Women’s career advancement in the workplace. Journal of Evolutionary Studies in Business, 3(1), 137-166. [CrossRef]
  50. Ryan, S., Charter, R., Ussher, J., Perich, T., Power, R., & Sperring, S. (2025). Navigating menopause at work: A rapid review and narrative synthesis of psycho-educational and behavioral interventions to support menopausal women in the workplace. Women’s Reproductive Health. Advance online publication. [CrossRef]
  51. Ryan, M.K. & Morgenroth, T. (2024). Why We Should Stop Trying to Fix Women: How Context Shapes and Constrains Women's Career Trajectories. Annual Review Psychology. 75:555-572. [CrossRef]
  52. Savickas, M. L. (2005). The Theory and Practice of Career Construction. In S. D. Brown & R. W. Lent (Eds.), Career development and counseling: Putting theory and research to work (pp. 42–70). John Wiley & Sons, Inc.
  53. Stamarski, C.S. & Son Hin, L.S. (2015). Gender inequalities in the workplace: the effects of organizational structures, processes, practices, and decision makers’ sexism. Front. Psychol. 6:1400. [CrossRef]
  54. Steen, M. (2013). Co-Design as a Process of Joint Inquiry and Imagination. Design Issues 2013; 29 (2): 16–28. [CrossRef]
  55. Steenkamp, B., & Terblanche, N. (2026). Exploring artificial intelligence coaching’s role in translating business training into real-world applications. SA Journal of Human Resource Management, 24, 12 pages. [CrossRef]
  56. Stratton S. J. (2024). Purposeful Sampling: Advantages and Pitfalls. Prehospital and disaster medicine, 39(2), 121–122. [CrossRef]
  57. Sullivan, S., & Baruch, Y. (2009). Advances in career theory and research: A critical review and agenda for future exploration. Journal of Management, 35(6), 1542–1571. [CrossRef]
  58. Sullivan, S. E. & Al Ariss, A. (2021). Making sense of different perspectives on career transitions: A review and agenda for future research. Human Resource Management Review, Volume 31, Issue 1, 100727. [CrossRef]
  59. Super, D. E., Savickas, M. L., & Super, C. M. (1996). The life-span, life-space approach to careers. In D. Brown & L. Brooks (Eds.), Career choice and development (3rd ed., pp. 121-178). San Francisco: Jossey-Bass.
  60. Terblanche, N., Molyn, J., de Haan E., & Nilsson, V.O. (2022). Comparing artificial intelligence and human coaching goal attainment efficacy. PLoS ONE 17(6): e0270255. [CrossRef]
  61. Terblanche, N., Molyn, J., Williams, K., & Maritz, J. (2023). Performance matters: Students’ perceptions of Artificial Intelligence Coach adoption factors. Coaching: An International Journal of Theory, Research and Practice, 16(1), 100–114. [CrossRef]
  62. Terblanche, N. H. D. (2024). Artificial Intelligence (AI) coaching: Redefining people development and organizational performance. The Journal of Applied Behavioral Science, 60(4), 631–638. [CrossRef]
  63. Terblanche, N. H. D., Van Heerden, M., & Hunt, R. (2024). The influence of an artificial intelligence chatbot coach assistant on the human coach–client working alliance. Coaching: An International Journal of Theory, Research and Practice, 17(2), 1–18. [CrossRef]
  64. Torres, A. J. C., Barbosa-Silva, L., Oliveira-Silva, L. C., Miziara, O. P. P., Guahy, U. C. R., Fisher, A. N., & Ryan, M. K. (2024). The Impact of Motherhood on Women’s Career Progression: A Scoping Review of Evidence-Based Interventions. Behavioral Sciences, 14(4), 275. [CrossRef]
  65. Trauth, E. M. (2006). Theorizing gender and information technology research. In E. M. Trauth (Ed.), Encyclopedia of gender and information technology (pp. 1154–1159). IGI Global. [CrossRef]
  66. Tupper, H., & Ellis, S. (2020). The squiggly career: Ditch the ladder, discover opportunity, design your career. Penguin.
  67. Whitmore, J. (2009). Coaching for performance: Growing human potential and purpose. The principles and practice of coaching and leadership (4th ed.). Nicholas Brealey Publishing.
  68. Worth, N. (2016). Who we are at work: Millennial women, everyday inequalities and insecure work. Gender, Place & Culture, 23(9), 1302–1314. [CrossRef]
  69. Zallio, M., Ike, C. B., & Chivăran, C. (2025). Designing Artificial Intelligence: Exploring Inclusion, Diversity, Equity, Accessibility, and Safety in Human-Centric Emerging Technologies. AI, 6(7), 143. [CrossRef]
Figure 1. Agency-centred design-and-governance framework for AI-enabled career counselling tools for women (derived from Themes 1-5).
Table 1. Breadth of theme and subtheme occurrence across the expert panel.
Theme | Sub-theme | n/16 | %
Integrated expert lenses & theoretical foundations | Multidisciplinary career frameworks | 14 | 88%
Integrated expert lenses & theoretical foundations | Critique of androcentric model | 11 | 69%
Biopsychosocial ecosystem of women’s careers | Gender-specific life transitions | 10 | 63%
Biopsychosocial ecosystem of women’s careers | Systemic-psychological interaction | 13 | 81%
Biopsychosocial ecosystem of women’s careers | Adaptive intervention modalities | 11 | 69%
AI architecture: mechanics of intervention | System intelligence & orchestration | 5 | 31%
AI architecture: mechanics of intervention | Affective gating & readiness | 5 | 31%
The human–AI coaching dialectic | Technological affordances/strengths | 13 | 81%
The human–AI coaching dialectic | Limitations | 15 | 94%
Ethical governance & the agency risk | Algorithmic bias & individualistic reductionism | 7 | 44%
Ethical governance & the agency risk | Safety, privacy & clinical safeguards | 11 | 69%
Ethical governance & the agency risk | Dependency & agency erosion | 8 | 50%
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.