Preprint
Article

This version is not peer-reviewed.

Recruiters’ Attitudes Toward the Use of Artificial Intelligence in the Recruitment Process

Submitted: 26 February 2026
Posted: 27 February 2026


Abstract
Recruitment is among the HRM processes in which information and communication technologies and artificial intelligence can be applied extensively and fruitfully. Its acceptance has been studied mostly from the candidates’ perspective, however, and some studies have shown that acceptance of artificial intelligence applications is not universal among recruiters. The aim of this study was to examine whether recruiters’ attitudes toward the use of AI in the recruitment process vary depending on the stage of the process and the form of AI intervention. Based on data collected through an online questionnaire from 120 Polish recruiters, the findings indicate that acceptance of AI is higher in the earlier stages of the recruitment process than in the later stages. The results further show, in line with self-determination theory, that AI is more readily accepted when it serves an advisory rather than a decision-making function, regardless of the phase of the selection process. It was also found that willingness to cede decision-making autonomy increases acceptance of AI applications, while employment in a large company (above 500 employees) significantly affects this acceptance only in the earlier stages. Practical recommendations are formulated and directions for further research proposed.

1. Introduction

Employee hiring processes are one area of HR operations where the use of information and communication technologies (ICT) has brought numerous benefits to organizations. In addition to automating a number of operating activities, there are new ways of meeting objectives, both in terms of identifying potential candidates and encouraging them to apply (recruitment) and in terms of assessing which potential candidate is most likely to succeed in the vacant position (selection). Currently, AI tools are increasingly being adopted in recruitment and selection processes. However, according to Eurostat, in 2024 only 13.5% of companies in the European Union and 5.9% in Poland used AI solutions in HR processes, indicating that this adoption remains at an early stage. There is therefore a need to identify the obstacles behind such slow adoption of AI in recruitment and selection processes, and in particular to determine whether some of the resistance stems from the attitudes and beliefs of recruiters.
The importance of AI for achieving the objectives of Sustainable HRM implies that, without a clear understanding of recruiters’ attitudes toward its use, it will be difficult to realize the full benefits of these technologies. It is widely acknowledged that the application of algorithms in employee selection can contribute to sustainability outcomes, such as reducing process duration, costs, and energy consumption associated with travel [1,2]. Moreover, some authors strongly emphasize that AI-driven recruitment has the potential to mitigate bias in hiring decisions [3,4]. At the same time, as noted by Chang and Ke [5], there are well-documented cases of biased algorithms, for example in recruitment systems developed by Amazon, as well as in job advertisement targeting systems used by Google and Facebook. As we will show below, individual studies also indicate that recruiters are often skeptical about the use of AI in recruitment and selection. Therefore, understanding the contexts and specific modes of use in which recruiters are willing to adopt AI is essential for organizations seeking to expand its use – both for ethical reasons and to improve organizational effectiveness. It is therefore essential for scientific research to provide evidence that supports the development of AI-driven HR tools aligned with the principles of sustainable HRM, as well as – equally importantly – the design of effective methods for their implementation in recruitment and selection practice.
In this article we attempt to answer the following questions: (a) what forms of incorporating artificial intelligence (AI) into hiring activities are better accepted by Polish recruiters; (b) do recruiters more favorably accept AI participation in the early or late stages of the employee selection process; (c) do they prefer AI solutions in an advisory or decision-making form. Based on data collected via an electronic questionnaire from 120 Polish recruiters (employees of HR departments or recruitment agencies), we found that recruiters are more positive about the involvement of AI in the early stages of the selection process. It was also confirmed that – in line with self-determination theory (SDT) – AI participation in an advisory form is better received than in a decision-making form, regardless of the phase of the recruitment and selection process. It was also found that willingness to cede decision-making autonomy increases acceptance of AI applications, while employment in a large company (above 500 employees) significantly affects this acceptance only in the earlier stages.
The article is structured as follows. The first part discusses the use of AI in employee selection processes and presents the state of scientific knowledge about recruiters' attitudes toward these applications. The second part presents an empirical study conducted to examine how selected factors influence recruiters' acceptance of AI-based hiring tools. The third and fourth parts present, respectively, the results of the study and their practical implications.

2. Theoretical Background and Hypotheses Development

Adopting ICT in employee recruitment and selection processes is undoubtedly one of the successes of applying ICT solutions in HR. Subsequent waves of development of tools based on these technologies, from tools for managing databases of documents collected from candidates (which were the first versions of Applicant Tracking Systems), to tools for automated testing or remote communication enabling remote selection interviews, have facilitated the work of recruiters and enabled new forms of assessment with selection tools adapted to new communication channels. The ability to use social media data to identify potential candidates and assess their qualities has been a significant next step in radically changing recruitment and selection processes. However, a radical change in selection techniques involves the use of AI and big data to create selection solutions that go beyond creating direct equivalents of traditional ICT-based selection tools. AI tools not only enable cheaper collection of data on candidates' competencies, but also – thanks to their properties (learning, reasoning, natural language processing, facial recognition) – can be used for resume screening, interviewing and decision making [6,7,8,9].
The widespread use of ICT solutions in selection processes has generated a wealth of academic research (see: [10,11,12]). However, these studies are dominated primarily by two perspectives – the perspective of the organization, understood as determining the utility of a particular tool in predicting which candidate is likely to succeed on the job [13,14,15], and the perspective of the candidate understood as their attitude towards being recruited with these tools [11,13,16]. The third perspective – namely, recruiters' attitudes toward the use of these tools – is much less prevalent in the scientific literature [17,18,19], and the need to take it into account has been emphasized for years [14,20,21,22,23].
A similar phenomenon can be observed in research on the use of AI itself. There are studies on the benefits that AI can bring to organizations and on the determinants of its application [21,24,25], or the reception of its use by candidates [26,27,28,29,30], but research on recruiters' readiness to use it is lacking [9,19,29,31].
Empirical studies nevertheless reveal substantial resistance among recruiters, as well as an ambivalent attitude toward the use of AI in employee selection processes [8,9,19]. These concerns stem, on the one hand, from fears related to algorithmic bias [19], and, on the other, from the limited ability to oversee and control decisions made by AI systems [32]. Moreover, in line with broader theoretical perspectives on attitudes toward new technologies, a lack of trust in AI-based tools is consistently identified as a key barrier to their adoption in recruitment [9,31,33]. There are also few Polish studies on recruiters' attitudes toward AI-driven recruitment [16,34,35] while industry reports show that Polish recruiters utilize some ICT-based tools that employ AI in the execution of employee selection processes.
It should be emphasized that research on the first two perspectives shows that any use of ICT in hiring processes faces difficulties. Studies of the predictive validity of tools corresponding to traditional selection tools have pointed out the need to adapt them appropriately to new communication channels in order to achieve a similar level of value in the data collected. A genuine shift – predictive validity surpassing that of traditional selection tools – can only be brought about by the use of AI and big data to create company-specific tools [11,36]. Based on large databases of company employees, it is possible to create computer games to assess candidates' performance and to determine whether they resemble the top or bottom performers. This means using the ideas behind biographical inventories (so-called empirical inventories, i.e., those that collect data from candidates' lives to predict their similarity to good employees) [10,37] and treating computer games as simulated work samples [38,39] to assess, with the help of AI, the similarity of candidates' actions to patterns built from performance analysis of top and bottom employees [11,36].
This kind of application of AI as part of selection processes opens a new phase in the use of ICT in those processes, which includes not only simulated work samples in the form of games, but also the analysis of content and forms of social media activity [11]. Therefore, there is room for the application of AI that goes beyond the analysis of statements made during a selection interview (analysis of a recording) [21,26,27] or the matching of competencies revealed in a resume or written text with the profile of a desired candidate [24]. It also leads to a broader intrusion of AI into the decision-making process [40] regarding the classification of candidates – it does not simply create new data for the recruiter to base decisions on. The accuracy of the prediction, but also the avoidance of discrimination against certain types of candidates (e.g., women), is a subject of scientific discussion and an actual practical problem [24,32,36]. It is worth pointing out that the difficulty in resolving these issues is related, in particular, to the lack of transparency of the criteria of the algorithmized decisions. The technical solutions used in these selection methods are based on neural networks and machine learning by classifying examples, and it has been found that these algorithms obtain better results when they do not specify the criteria they use to make their classifications [6]. Currently, the so-called accuracy–interpretability trade-off is increasingly being questioned, including in the context of unstructured data. Explainable Artificial Intelligence (XAI) tools, through the application of post-hoc methods such as SHAP (SHapley Additive exPlanations), enable the enrichment of predictive models with an additional analytical layer that clarifies and justifies machine-generated decisions [41]. However, the effectiveness of XAI implementation in recruitment also depends on the users of these XAI-driven tools.
Research by Kalff and Simbeck [42] demonstrates that HR managers’ AI literacy plays a crucial role in determining the effectiveness of XAI in recruitment contexts. Although XAI functionalities increase perceived transparency and trust among more competent users, they do not universally ensure improved objective understanding and may, in some cases, even diminish it. This suggests that XAI is not a universal remedy for the black-box effect and that its implementation must be accompanied by appropriate training and competence development among HR professionals.
In addition to the problem of “algorithmic discrimination” – as some [43] call the algorithms’ discriminatory decision-making – researchers point out how machine learning processes (and, even more, their use in AI-based recruitment/selection tools) can jeopardize diversity in organizations if we thereby condemn ourselves to cloning our employees by selecting among candidates only those similar to those already employed and performing well [12,36,44]. It has rightly been argued that a change in research approaches is needed on both of these issues: studies comparing the validity of job-performance predictions made with traditional selection tools and their ICT-based counterparts are not sufficient to address these emerging challenges, because the use of AI and big data is significantly changing the selection process even though traditional tools are not being outright replaced [12].
These new types of programs represent the achievements of AI research, a field with a tradition of more than 70 years of searching for computer programs that perform tasks (cognitive and verbal, though not yet manual) in a way "analogous" to how humans perform them. There is no universally accepted single definition of AI [40,45,46,47], but for the purposes of this article it is sufficient to note that the tools discussed here rely on sophisticated image, motion, and natural language recognition systems that classify similarities between objects in order to conduct dialogue, generate suggestions for action in specific situations, and present arguments for selected positions with a quality comparable to a rational human analyzing content on the Internet (ChatGPT). Such tools are probably not yet widely used in Polish recruitment practice, although it is clear – given the media buzz and the actual usefulness of these solutions – that consulting firms are attempting to introduce them in Poland as well. The potential for AI applications in management has long been recognized, and such forms of automation have been referred to by some as algorithmic management. They lead to the replacement of humans (or their collaboration with an algorithm [48]) not only in data collection and analysis, but also in decision making [7]. In the HR field this is referred to as digital HRM, defined as using computer systems, telecommunication networks, and interactive electronic media to perform HRM functions [49]. To date, there have been no analyses of how attitudes toward cooperation with algorithms similar to the GPT chatbot have changed in light of current applications, which indicates the need for further research in this area; however, existing results on algorithm aversion [8] do not lead to positive expectations.
Research on attitudes toward the use of AI-based solutions in recruitment processes is often conducted within theoretical frameworks derived from the Technology Acceptance Model (e.g., the Unified Theory of Acceptance and Use of Technology). These studies demonstrate that perceived performance increase [9,31], trust and propensity to trust [33], as well as perceived ease of use and perceived usefulness [9,31], play a crucial role in shaping recruiters’ attitudes toward such tools.
Previous global research on recruiters' attitudes toward the use of AI in HR shows that recruiters recognize a number of benefits and risks associated with using AI in hiring processes. Based on interviews with 10 recruiters from a large Fortune 500 multinational corporation, Ore and Sposato [50] found that the respondents recognized the potential benefits of analyzing employee data through programs based on neural networks. These benefits primarily included the ability to make better decisions about hiring and the desired characteristics of a candidate, thanks to the creation of better predictive analytics based on big data analytics. At the same time, the recruiters interviewed were concerned about the negative impact that the use of these programs could have on the company’s image, due to the negative perception of these technologies by candidates and the risks associated with privacy violations. A different kind of concern is the indication of the threat of "loss of the human touch" – which can be seen as a generalized concern about the non-obvious consequences of replacing humans with machines in recruitment processes.
These concerns about reputational implications are not unfounded, as research on potential employees' perceptions of the use of ICT in selection processes, both globally [29,51,52], and in Poland [16,28,53], reveals concerning results. Typically, potential employees are negative about these applications [54], and fear of privacy intrusion is the factor that most strongly determines this attitude [29,51,53]. However, this is not always the case, although relatively few studies show that under certain circumstances there is no decline in the perceived fairness of the selection process. These studies mentioned above primarily address the use of AI as part of the content analysis of selection interviews [26,27,55], and are only sometimes concerned with the issue of prompting the recruiter with specific actions or decisions to take [52,56].
Similar concerns – suggesting that recruiters are worried about impairing the candidate experience – emerged in interviews with 33 French professionals, 10 of whom conducted recruitment processes as part of their professional responsibilities [19]. They generally favored human involvement in selection processes (especially interviews), citing the potential damage to the company's image if AI replaced humans in hiring, and the concerns of candidates that the company would need to address in the hiring process to help them maintain a sense of fairness. In order to ensure the sense of fairness, potential candidates would need to believe not only that the AI does not discriminate against anyone and that its decisions are sound, but also that they have information about the criteria used to make those decisions or how the AI reaches its conclusions. The use of AI-based selection tools was, for the respondents, a manifestation of the fact that the company does not value human interaction (which may be the equivalent of the expectation of the human touch), and they pointed out the threat to the sense of justice and the indignation it caused.
Decisions made during the recruitment process – particularly at its final stages – are not based solely on recruiters’ domain knowledge and experience, but also on their intuition, empathy, and indirect perception [9,32]. Being aware of this, recruiters may exhibit lower trust in AI-generated outcomes at the selection stage, especially when they recognize that the absence of a “human touch” may negatively affect the candidate experience. In the study by Almeida et al. [9], this lack of “human touch” was identified by recruiters as the most significant drawback of AI-driven recruitment. As we know from traditional recruitment research, the human touch is particularly important to candidates in the stages of the recruitment process that follow the pre-selection stage and before the decision stage of accepting a job offer [57]. At the same time, it is reasonable to assume that while candidates expect fair selection to provide them with an opportunity to showcase their strengths in face-to-face contact with the recruiter [58], they know from experience that the early stages of selection serve to confirm their formal qualifications and weed out candidates who clearly do not meet expectations. Therefore, it is expected that the use of AI in these early stages of the hiring process will be better received by candidates than in later stages, and consequently, a similar opinion will be expressed by recruiters.
It is also worth noting that previous research on the application of AI in selection processes has focused on programs used to analyze selection interviews, which were already reported to be widely used in large multinational companies several years ago [59]. These programs allow asynchronous analysis of a recorded interview with a candidate – most often a statement guided by questions from a predefined script – and the task of an algorithm capable of analyzing spoken text and images is to reject candidates who do not meet predetermined requirements (or to highlight parts of statements suspected of being false) [36]. This auxiliary role of algorithms in interviews points to a different use of AI compared to older programs, such as scripted chatbots with a fixed conversation flow, which have been used on the Polish market since the late 2010s. In other words, recruiters are more knowledgeable about the use of algorithmic solutions in the preselection phase and have a better understanding of the benefits their use can bring there. This is another argument in favor of the assumption that acceptance of the use of AI is likely to be higher in the early stages of the selection process than in later stages, as it is also likely to be facilitated by better knowledge of successful applications of the technology in this area. Therefore, H1 is formulated as follows.
Hypothesis 1 (H1). Recruiters will be more receptive to using AI tools in the early stages of the recruitment and selection process (analyzing resumes and test data, etc.) than in the later stages of selection (preparation, conducting, and analysis of interviews, and making final decisions).
The types of AI applications in selection processes can be classified in several independent ways [52,60]. AI can:
  • provide data on the basis of which a human recruiter makes further decisions or actions, but can also make independent decisions;
  • provide assistance when it recognizes that a person needs it, or regardless of the need;
  • provide assistance only at the user's request, or without being asked;
  • provide assistance with either specific problems or a complete solution;
  • provide information, arguments, or criteria needed to make a decision, or indicate what decision should be made.
It is to be expected that in each of these applications, the opinion on the benefits of working with an AI tool will be different [40] and, in particular, the willingness of recruiters to let AI assume the above function will vary. Research on human-computer interaction shows fairly consistently that the use of AI as a decision-maker in decision-making processes is perceived less favorably by humans than the use of only AI-prepared data combined with human decision-making in decision-making processes [52,60]. However, there are also results showing that under certain conditions this phenomenon does not occur [61], especially when the positive effect of the decision on the person subject to it is involved [56]. As a result, it is not clear how recruiters will rate the contribution of AI to selection decisions at different levels of input.
Adopting the assumptions of Self-Determination Theory (SDT) [62,63], it can be expected that when the use of AI constrains autonomy, competence (i.e., a sense of self-efficacy or effectiveness), or relatedness, it will be evaluated less favorably than forms of AI application that support the satisfaction of these needs.
SDT constitutes a broad theoretical framework of human motivation, development, and well-being, grounded in three innate psychological needs: autonomy, competence, and relatedness. The theory has evolved considerably and now encompasses six mini-theories [64]; accordingly, some of its core constructs have been further refined. Within this framework, autonomy refers to the experience of volition and psychological freedom – the sense that one’s behavior is self-endorsed rather than externally controlled. The need for competence, initially conceptualized as the development of new skills enabling effective action, is currently understood more broadly as an inherent tendency to explore and interact with the environment in ways that foster a sense of agency. It thus encompasses both the capacity to act effectively and the belief that one’s actions are optimally challenging – neither too difficult nor too easy – and capable of producing intended outcomes [65] (pp. 1198–1199). Relatedness denotes the need to feel meaningfully connected to others. It is satisfied when individuals perceive themselves as members of a group, experience a sense of belonging, and develop close interpersonal relationships. In practice, it involves mutual care, concern, and respect among interaction partners [65] (p. 1199).
Although the suggestion that, for example, a chatbot could be perceived as a relational partner may appear far-fetched, it is consistent with the assumptions underlying the development of care robots designed to support older adults in maintaining independent functioning. In such contexts, relationships are built not only through dialogue but also through tangible assistance. Analogously, recruiters may experience AI systems as supportive agents in the execution of professional tasks, even if the interaction is limited to task-related functions. It can therefore be assumed that situations involving extended dialogue with an AI tool may be more conducive to a sense of relational engagement than scenarios in which the system autonomously makes, announces, or merely communicates a decision without interactive exchange.
On this basis, Hypothesis 2 (H2) is proposed. It assumes that recruiters’ evaluations of AI tools used in the selection process are shaped not only by professional considerations – such as the validity of selection methods or the quality of candidate experience – but also by the extent to which these tools support their fundamental psychological needs as individuals.
Hypothesis 2 (H2). Recruiters evaluate more positively those AI applications that:
(a) preserve their autonomy (i.e., leave decision-making to the recruiter) compared to applications that constrain it;
(b) support their performance-related competence (e.g., by providing advice on demand) rather than limit it; and
(c) create space for communication and interaction, as opposed to applications that do not facilitate dialogue and thus restrict the sense of relatedness.
The third hypothesis addresses the determinants of acceptance of AI systems supporting recruitment and selection processes. It can be assumed that individuals who are convinced of the high predictive validity of an AI tool – and who are therefore more willing to accept a partial limitation of their decision-making autonomy – will demonstrate greater acceptance of AI use at both the initial and subsequent stages of the recruitment and selection process. This thesis is consistent with the results of previous studies showing that for recruiters, increased productivity through collaboration with AI is the primary criterion for their decision to use it [9].
Employment in a large organization (i.e., one employing more than 500 individuals) may also constitute a facilitating factor. In such contexts, recruiters are more likely to recognize the benefits of ICT-based solutions, including AI-driven tools, particularly with respect to streamlining procedures, shortening time-to-hire, enhancing candidate experience, and increasing operational efficiency. Performance expectancy – especially the anticipated reduction in time devoted to specific recruitment activities – has been identified as one of the principal predictors of recruiters’ intention to use AI tools [9,31].
Accordingly, Hypothesis 3 (H3) is formulated as follows:
Hypothesis 3 (H3). Acceptance of AI applications at both early and later stages of the recruitment and selection process will be higher among recruiters who are willing to endorse AI solutions that constrain recruiter autonomy, and among those employed in large organizations (i.e., with more than 500 employees).

3. Materials and Methods

Study participants were recruited through snowball sampling among HR professionals, facilitated by the assistance of Ms. M. Kisiel, an HR practitioner. A summary of respondents’ demographic characteristics is presented in Table 1.
A structured questionnaire was developed for the purposes of this study. In addition to demographic items, it comprised three substantive sections, each preceded by a brief explanation stating that the questions referred to recruitment software incorporating artificial intelligence to varying degrees. The three sections were organized as follows:
1. The first section consisted of 18 statements measuring respondents’ willingness to use various forms of AI support across different recruitment scenarios. In constructing this block, particular attention was paid to identifying situations that could be interpreted as affecting autonomy, competence, and relatedness.
Situations limiting autonomy were relatively straightforward to operationalize – for example, when the AI tool makes a decision rather than merely provides information or advice. Conversely, advisory functions (e.g., suggesting arguments or generating diagnostic questions) may enhance perceived competence by supporting more effective performance. Operationalizing relatedness was more challenging. However, it was assumed that scenarios involving dialogue – particularly those in which the AI explains criteria, discusses dilemmas, or adopts a non-directive instructional approach – may signal concern for the recruiter’s professional judgment and subjectivity. Such interactions could, under certain assumptions, foster a minimal sense of relational engagement.
Accordingly, the 18 statements were initially classified into three groups:
  • Some items described AI assistance (e.g., “I wish the AI would help me create questions to ask candidates in order to diagnose their competencies”; “I wish the AI would – when I have doubts – suggest arguments supporting my decision regarding which candidates performed well at a given stage”). Other items explicitly referred to AI making decisions (e.g., “I wish the AI would make the decision regarding the evaluation of specific competencies of the candidate”; “I wish the AI would make the decision regarding which candidates should be invited to the next stage”). Items of both types were mixed.
  • Some items reflected the assumption that AI-generated decisions are substantively accurate and may enhance recruiter effectiveness (e.g., “I wish the AI would suggest a decision regarding which candidates should be invited to the final stage”; “I wish the AI would make the decision – when I have doubts – regarding which candidates performed well at a given stage”). By contrast, items in which AI merely provided neutral assistance or information without suggesting or making a decision were not assumed to directly increase effectiveness (e.g., generating interview questions or discussing relevant evaluation criteria without formulating a conclusion). For example: “I wish the AI would help me create questions to ask candidates in order to diagnose their competencies” or “I wish the AI would talk to me – when I have doubts and difficulties in making up my mind – about things that are important in deciding which candidates performed well at a given stage of the selection procedure.”
  • Two items were designed to capture relational engagement through dialogue not strictly limited to decision output: “I wish the AI would – when I have doubts – talk to me about what is important in deciding which candidates performed well at a given stage;” “I wish the AI would – when I have doubts – teach me what factors I should consider when deciding which candidates performed well at a given stage.”
The preliminary categorization of items was based on linguistic interpretation by the research team and by respondents participating in the pilot phase of questionnaire development. A principal component analysis with Oblimin rotation yielded two factors that together accounted for 77.17% of the variance. The Kaiser-Meyer-Olkin measure of sampling adequacy was 0.813, and Bartlett's test of sphericity (χ² = 3149.89, df = 153, p < 0.001) confirmed the adequacy of the correlation matrix. The first factor consists of 13 items related to assisting the recruiter in the recruitment/selection process (loadings ranged from 0.66 to 0.94; Cronbach's alpha = 0.97) and was labeled assistance; the second consists of five items related to replacing the recruiter in decision-making (loadings ranged from 0.86 to 0.97; Cronbach's alpha = 0.96) and was labeled replacement.
2. The second section comprised five statements assessing willingness to use AI-based tools in the early stages of recruitment. Example items included: “I would like AI to be used to collect social media data on potential candidates;” “I would like AI to be used in the initial screening of candidate applications.” A factor analysis extracted a single component, with no items discarded (KMO = 0.76; Bartlett's test of sphericity: χ² = 686.16, df = 10, p < 0.001; 82.31% of variance explained). Loadings ranged from 0.85 to 0.93; Cronbach's alpha = 0.94. The resulting index was labelled earlier.
3. The third section included four statements concerning the use of AI in the later stages of the recruitment and selection process. Example items included: “I would like AI to be used to verify candidate truthfulness during the selection interview;” “I would like AI to suggest which selection tools the recruiter should use.” A factor analysis extracted a single component, with no items discarded (KMO = 0.75; Bartlett's test of sphericity: χ² = 258.64, df = 6, p < 0.001; 71.29% of variance explained). Loadings ranged from 0.74 to 0.89; Cronbach's alpha = 0.85. The resulting index was labelled later.
All items were rated on a five-point Likert scale ranging from 1 (“strongly disagree”) to 5 (“strongly agree”).
All statistical analyses were performed using IBM SPSS Statistics software (version 29). Statistical significance was set at the 0.05 level.
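As a reproducibility aid, the scale-reliability step (Cronbach's alpha, reported above for each index) can be sketched in a few lines of Python. The data below are synthetic five-point Likert responses driven by one latent attitude; the sample and scale sizes are illustrative only, not the study's data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the scale total
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Synthetic illustration: 120 respondents answering 5 correlated
# five-point Likert items that share a single latent factor.
rng = np.random.default_rng(0)
latent = rng.normal(size=(120, 1))
scores = np.clip(np.round(3 + latent + 0.4 * rng.normal(size=(120, 5))), 1, 5)
alpha = cronbach_alpha(scores)
print(f"Cronbach's alpha = {alpha:.2f}")  # high for strongly correlated items
```

The same respondents-by-items matrix is the input for the factor analysis itself; dedicated packages (e.g., the Python `factor_analyzer` library) additionally provide the KMO measure and Bartlett's test used above.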

4. Results

To compare respondents' acceptance of AI applications in the earlier and later stages of the recruitment and selection process, and their acceptance of AI applications that assist in decision-making (assistance) with their acceptance of applications that replace the recruiter in decision-making (replacement), two paired-sample t-tests were conducted (Table 2). The analyses revealed that acceptance of AI applications in the earlier stages is significantly higher than in the later ones, and that acceptance of AI applications supporting decision-making (assistance) significantly exceeds acceptance of applications replacing the recruiter in decision-making (replacement); Hypotheses 1 and 2a are thus confirmed.
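This comparison can be illustrated with a minimal scipy sketch of a paired-sample t-test. The arrays below are synthetic stand-ins for the per-respondent earlier/later acceptance indices (means and spreads are assumptions, not the study's values):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 120  # same sample size as the study

# Hypothetical per-respondent acceptance indices on a 1-5 scale;
# 'earlier' is simulated to exceed 'later' on average.
earlier = np.clip(rng.normal(3.4, 0.8, n), 1, 5)
later = np.clip(earlier - rng.normal(0.5, 0.6, n), 1, 5)

# Paired test: each respondent contributes one earlier/later pair
t_stat, p_value = stats.ttest_rel(earlier, later)
print(f"t({n - 1}) = {t_stat:.2f}, p = {p_value:.4g}")
```

Because each respondent rates both conditions, the paired test removes between-respondent variance, which is why it is the appropriate design here rather than an independent-samples test.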
To verify Hypothesis 3, that is, to investigate differences in acceptance of AI applications in recruitment and selection, two factorial ANOVAs were performed, each with two factors: replacement (2 levels) and company_size (2 levels). The replacement variable was dichotomized at a cut-point of 3.0. The group frequencies for both factors are given in Table 3.
The analysis of acceptance of AI applications in the earlier stages of the recruitment/selection process indicated that the replacement × company_size interaction was not significant: F(1, 116) = 1.967, p = 0.163. The main effect of replacement was significant and small (F(1, 116) = 5.619, p = 0.019, partial η² = 0.046), as was the main effect of company_size (F(1, 116) = 6.388, p = 0.013, partial η² = 0.052). Respondents who are willing to sacrifice autonomy show significantly higher acceptance of AI applications in the earlier stages of the recruitment/selection process than others, and employment in a large (above 500 employees) company also significantly increases this acceptance.
The analysis of acceptance of AI applications in the later stages (selection) likewise showed a non-significant interaction: F(1, 116) = 2.588, p = 0.110. The main effect of replacement was significant and of medium size (F(1, 116) = 12.230, p < 0.001, partial η² = 0.095), while the main effect of company_size was not significant: F(1, 116) = 0.868, p = 0.353.
This means that the third hypothesis is partially confirmed: willingness to sacrifice decision-making autonomy (replacement) increases acceptance of AI applications in both the earlier and the later stages of the recruitment/selection process, but working in a large (above 500 employees) company significantly increases this acceptance only in the earlier stages. No interaction between these factors was detected.
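The factorial analyses were run in SPSS; the underlying logic of a 2 × 2 ANOVA can be sketched with numpy/scipy on synthetic, balanced data. Note that the study's actual cells were unbalanced, and the cell means below are illustrative assumptions, not reported values:

```python
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(2)

# Balanced 2x2 illustration: factor A = replacement (low/high willingness),
# factor B = company_size (small/large); 30 respondents per cell.
n_cell, a_lv, b_lv = 30, 2, 2
cell_means = np.array([[2.8, 3.0],   # replacement low:  small, large company
                       [3.3, 3.7]])  # replacement high: small, large company
data = cell_means[:, :, None] + rng.normal(0.0, 0.6, (a_lv, b_lv, n_cell))

grand = data.mean()
cellm = data.mean(axis=2)                      # observed cell means
# Sums of squares for the two main effects, the interaction, and error
ss_a = n_cell * b_lv * ((data.mean(axis=(1, 2)) - grand) ** 2).sum()
ss_b = n_cell * a_lv * ((data.mean(axis=(0, 2)) - grand) ** 2).sum()
ss_ab = n_cell * ((cellm - cellm.mean(axis=1, keepdims=True)
                   - cellm.mean(axis=0, keepdims=True) + grand) ** 2).sum()
ss_err = ((data - cellm[:, :, None]) ** 2).sum()
df_err = data.size - a_lv * b_lv               # 120 - 4 = 116

def f_test(ss, df):
    """F statistic and p-value against the error mean square."""
    F = (ss / df) / (ss_err / df_err)
    return F, f_dist.sf(F, df, df_err)

for name, ss in [("replacement", ss_a), ("company_size", ss_b),
                 ("interaction", ss_ab)]:
    F, p = f_test(ss, 1)                       # every effect has df = 1 in a 2x2
    print(f"{name}: F(1, {df_err}) = {F:.2f}, p = {p:.4f}")
```

For unbalanced cells like those in Table 3, SPSS defaults to Type III sums of squares, which this balanced-design sketch does not need to distinguish.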

5. Discussion

Hypothesis 2 was confirmed only in part (a), a result consistent with previous findings from human-computer interaction research [48,56,60]. In those studies, the basis for hypothesizing lower acceptance of AI decisions (relative to decisions involving humans as decision-makers) was a lower assessment of the (aggregate) fairness of AI decisions (due to a lower value of interactional fairness), but also a sense of dehumanization, “because the lack of human interaction in AI decision making, can lead workers to think that they are ‘being reduced to a percentage’ [60,66,67]; which constitutes a set of feelings often captured under the umbrella term of dehumanization” [56] (p. 859).
Those studies also treated decisions made by algorithms as failing to account for “human abilities, emotion, and motivation” [68] (p. 5), understood as the requisite capabilities to make HRM-based decisions [60]. This included the belief that an essential element of the decision maker's role, consistent with the expectations that people in an organization create about the roles that individuals play in an organizational context, is the display of emotion. The research that most clearly demonstrates the mismatch between AI's characteristics and those expected of a decision maker is the work of Lee and colleagues [60,67]. They show that employees find it difficult to characterize AI as an appropriate decision maker in an HRM context, given the limitations they identify in its decision-making processes. One attempt to overcome this barrier would be to create a “women-voiced” chatbot based on Siri-like verbal language understanding, as women are stereotypically perceived to have higher levels of emotional intelligence [69] (p. 310).
However, it is clear that the role of the decision-maker requires not only showing emotion, but also meeting the requirements of interactional justice. This type of reasoning, which is also present in human-computer interaction, opens the field for research on the consequences of perceiving other components of justice, in particular distributive justice (decisions that are positive for the individual are accepted even if the AI makes them [56]). There is also a clear expectation that AI will be able to avoid at least some of the discriminatory bias in decisions, which may offset the lower rating for interactional justice.
Another line of argumentation relates to the issue of trust. Some argue (e.g., [70,71]) that workers place less trust in decisions if an AI (as opposed to a human) is the decision-maker. None of the above-mentioned studies directly addresses the employee selection process; they examine the attitudes of employees as subjects of AI-driven decisions rather than as co-decision-makers, nor do they explain what underlies this lack of trust. Lacroux and Martin-Lacroux [33,72] analyzed recruitment scenarios from the perspective of trust in algorithms; however, their studies focused primarily on situations in which algorithmic recommendations were inconsistent with recruiters’ assessments. They also demonstrated that even recruiters who declare greater trust in human recommendations than in algorithmic ones tend to follow algorithmic advice more frequently. This finding suggests the need for alternative explanations of such behavior. Hence, our result complements those studies by showing that not only the people who are subject to the decision [48], but also the decision-makers themselves [8,33], are skeptical about leaving room for AI to replace them in decision-making processes. Our argumentation is based on SDT and thus refers to meeting the individual needs of the recruiter as a human being; it therefore differs from the explanations already present in the human-computer interaction literature. It can be transferred to other situations, either directly – when the use of AI threatens the autonomy of the decision-maker – or by analogy – when the person subject to decisions made by AI believes that the person replaced by AI loses the ability and autonomy to decide freely.
The failure to operationalize the threat that AI-mediated selection processes pose to the other two needs inherent in SDT requires some consideration. The most straightforward explanation could be found in the design of the questionnaire, in which most of the initial questions dealt with situations of either assistance or decision-making, without explicitly eliciting other situational factors. As a result, respondents may have come to believe that when answering subsequent questions, they should classify situations only along this axis.
However, another explanation is also possible, one consistent with the dominance of research on such differentiation in the area of human-computer interaction: namely, that “AI taking over” is a concern common enough to color attitudes toward any, even narrow, application of AI. This line of argument is supported by the dominance of this theme in today's media, as well as in literature and other products of popular science fiction.
Past research has convincingly shown that candidates expect to interact with a recruiter during the selection process, and more recent research - with explicit reference to AI in recruiting - has demonstrated that candidates expect a “human touch,” which is consistent with the expectation of ensuring interactional justice [19]. There is a lack of research on the consequences of the absence of such interactional justice on the part of the recruiter. We wanted to present this aspect of the recruiter's relationship with AI-based tools as an analysis of attitudes toward situations in which the need for relationality, understood as the possibility of two-way communication with the co-worker during the recruitment process, is compromised.
The problem of accepting AI in an advisory role has been analyzed in the human-computer interaction literature, with results consistent with those expected in our formulation of Hypothesis 2. However, studies such as [73] demonstrated that people sometimes do not want to take advice from algorithms (displaying so-called “algorithm aversion”). More specifically, when an algorithm fails to provide 100% accurate advice, confidence in the value of that advice declines more than when the same mistakes are made by a human. A similar result has been obtained for recruiters [8]. Interpreted through SDT, this would imply a reluctance to increase one's self-efficacy or competence simply because of the form in which the advice is delivered (i.e., its source), indicating the need for further research in this area.
As our argument based on SDT suggests, one might expect recruiters to strongly believe not only in the need for their autonomy in the selection process (limited, of course, by the role of the line-manager in the process and by the policies, such as diversity, that prevail in the organization), but also to treat selection processes as a space for satisfying their other needs. Thus, it is worth continuing the line of research indicated in this paper to analyze the selection process from the perspective of the recruiter's fulfillment of their needs as postulated by SDT (i.e., including the needs for competency and relationality). Specifically, it is worth investigating whether the fact that AI mimics a communication partner – as part of the pair that decides the steps in the selection process – is relevant to the recruiter's perception of such forms of assistance. Resolving questions on this issue could promote better design of these algorithms to facilitate their acceptance.
Similarly, a more favorable attitude toward AI involvement in decision-making assistance would be expected if AI provided support on request or in situations of clear need, which could have similar implications for the design of these algorithms.
The hypothesis that recruiters view the use of AI more favorably in the recruitment and pre-selection phase than in the actual selection phase was based in part on inference by analogy: since candidates expect human contact, so do recruiters. Because recruiters implement selection activities in line with candidates' expectations, aiming to provide them with an appropriate candidate experience, they view AI more favorably in this early phase of the selection process. The findings can be taken as an argument in favor of protecting the autonomy of recruiters to decide on the issues they consider most important from the perspective of their role, that is, the final decisions in selection processes.
In searching for recruiter traits that would encourage adoption of AI applications in both the earlier and later stages of the recruitment/selection process, a willingness to accept an AI tool in a decision-making role was hypothesized to be one such factor. In addition, it was assumed that greater familiarity with the procedures associated with working in a large company (over 500 employees) might have a similar effect, namely, that it increases the willingness to accept AI participation in both phases of the selection process. Hypothesis 3 was only partially confirmed; willingness to sacrifice decision-making autonomy (replacement) increases acceptance for AI applications both in the earlier and the later stages of recruitment/selection process. There was also no reciprocal interaction between the influence of these two factors, suggesting their separate effects on the willingness to use AI in subsequent phases of the selection process.
However, what proved to modify this relationship was whether the respondent was employed in a large or a small organization: working in a large (above 500 employees) company significantly increases acceptance only in the earlier stages. This suggests that despite greater familiarity with procedures (including impersonal company procedures that limit full autonomy) and the greater potential benefits a large company could derive from using an algorithm, individuals from large companies – who in this context are more favorable toward algorithmic decisions – are willing to employ them only in the early stages of the hiring process, which are less critical to their job. This finding is consistent with the results of a study of Chinese firms, which showed that large firms do not use AI more often in the employee selection process [54]. Those studies used both company size and technology-sector affiliation as indicators of the technological competency of HR professionals and, as in our study, found that HR competency measured in this way did not correlate significantly with the use of AI in recruitment and selection [54], nor, in our case, with acceptance of this use. Previous studies using both types of indicators have yielded divergent results. Our findings can be seen as consistent with previous studies showing that digital competence does not necessarily promote acceptance of online or AI-based recruiting [28,74].

6. Conclusions

The purpose of this paper was to answer the question of which forms of incorporating AI into recruitment and selection processes would be better received by Polish recruiters. Based on data collected via an e-questionnaire from 120 Polish recruiters (employees of either HR departments or recruitment consultancies), it was found that recruiters are more receptive to the use of AI in the early than in the late stages of the employee selection process. It was also confirmed that – in line with the conclusions of SDT – AI participation in an advisory form is better received than in a decision-making form, regardless of the phase of the selection process.
The results of the current study confirm previous findings from human-computer interaction research [48] that people are less supportive of AI in the form of decision-making algorithms than in the form of algorithms that work with humans but leave the decision-making to them. The results also extend previous data, not only by adding the opinions of Polish HR professionals, but also by examining a specific form of algorithmic management: an AI tool advising the recruiter on specific selection activities during the selection process.
The way of justifying the hypothesis presented here is also novel in the context of current analyses from the field of human-computer interaction, as it refers not so much to the theory of interactional justice [19,56], trust [33,70,71], or expressing emotions [67,68], or TAM-based approaches [9,31], but to the pursuit of the recruiter's own needs, stemming from the need for autonomy described in the SDT theory. This line of reasoning is important because previous explanations have blocked the possibility of cooperation between a human and an algorithm, since the latter is unlikely to be a good partner for interaction, one that shows emotion and inspires trust. Our line of argumentation, however, permits a GPT-type chatbot to be designed in such a way as to reduce the sense of diminished autonomy in task collaboration with the algorithm, which, if further research confirms this result, makes it possible to overcome the barriers this collaboration faces, in line with the results of previous research mentioned in the discussion.
The results obtained here can serve as a starting point for further research that deepens the understanding of the determinants of recruiters' acceptance of various forms of AI application in the recruitment and selection process, thus fostering not only the digital transformation of the organization but also supporting the development of sustainable management and responsible business practices. They also support the importance of XAI algorithms in digital transformation, as recommendations from XAI-based tools are closer to advisory than to decision-making practice. Finally, they demonstrate that an analytical perspective grounded in the needs and interests of the recruiter as a specific individual can provide explanations that facilitate the implementation of solutions supporting digital transformation while maintaining ethical principles and striving for sustainability.
Nevertheless, it is important to keep in mind that the current study has the character of a pilot study, and not only because the research sample is not representative. An important limitation, already noted when discussing the results, is the design of the questionnaire, which may have influenced respondents' attitudes toward subsequent questions because large blocks of questions dealt with a similar type of problem. As mentioned, without further research it is impossible to determine whether, in the relationship with an algorithm that helps in performing professional tasks, the main dimension of evaluation is the gain or loss of autonomy, or whether issues related to the other needs characterized by SDT remain valid. Another important limitation is that the study diagnoses recruiters' attitudes toward as-yet abstract and unrealistic situations described in the questionnaire, using the opinions expressed there as an indicator of their future attitude toward the use of such a tool. One might expect that in a hypothetical situation – like using GPT chatbots in recruitment – the opinions formed reflect generalized beliefs shaped by media messages rather than a grounded reaction to the real-world difficulties of using the actual tool. Given the early stage of AI adoption in recruitment, this limitation, which is characteristic of many studies that ask respondents to predict their own attitudes toward hypothetical situations, calls for a different study design based on actual experience with this type of tool – not just in an experimental situation, but under the weight of real-world consequences.
This suggests that AI-based solutions should be introduced into organizations incrementally and preceded by practical trials, rather than extrapolating recommendations for the introduction of radically different technological solutions based on hypothetical opinions.
Despite these limitations, several questions and suggestions for further research can be derived from the results of the current study.
The current study showed that in response to the use of AI tools as a support for the recruitment and selection process, the factor related to the recruiter's decision-making autonomy overshadowed the respondents' other individual determinants of their decision (namely – increasing their competency or increasing their social interaction). As indicated in the Discussion section, this could be either an artifact of the questionnaire design or an indication of the main dimension by which recruiters evaluate the participation of AI in selection processes. The results obtained here do not provide data to resolve this question, so they are an invitation to further research into the criteria that determine recruiters' attitudes toward the various forms of assistance provided to them by chatbots in recruitment.
The second result of the study concerned the contextual conditions under which recruiters' approval of AI tools is greater or lesser. As expected, AI usage in the recruitment and pre-selection phase is viewed more favorably than in the actual selection phase (recruitment interview and final selection of the candidate), which is congruent with the results of [33]. It was also confirmed that the willingness to allow the chatbot to make decisions favors its acceptance in this role in both phases of the selection process, whereas the respondent's experience of working in a large, mostly process-oriented organization favors acceptance only in the recruitment phase.
Interpreting this result in light of the overarching importance of decision-making autonomy as a determinant of attitudes toward the use of AI in the selection process, one can conclude that the recruiters we surveyed consider the preliminary stages of the selection process to be less critical to the hiring decision, thus allowing AI to play a supporting role even when it makes some decisions in this preliminary stage. This reasoning may indicate a significant reluctance on the part of recruiters to relinquish control over important aspects of their work, and thus may be interpreted as a fear of losing relevance in the organization or of significant simplification of their work (job deskilling). However, it can be interpreted quite differently: the use of ICT in the early stages of the selection process is a fact to which recruiters are already accustomed. The willingness to limit new technological solutions to this stage may stem from the positive experience to date with ICT in the early stages (rooted in the long-standing use of Applicant Tracking Systems) and the clearly inferior experience in the actual selection phase (long-promised computer games in the role of simulated job samples, or selection interviews analyzed by algorithms). It may therefore be a well-founded judgment based on past practice [16,34] rather than a concern about the erosion of their professional role. Studies showing that recruiters' reluctance to use AI in the recruitment process stems from their inability to use these tools [9,32] can also be read as supporting this line of argument. These two radically different interpretations provide a good starting point for further research into whether Polish recruiters are actually reluctant to use AI in their work, or whether they are merely disillusioned by the promises that consulting firms have been making for years about the usefulness of new selection tools.
The results suggest several practical recommendations for preparing AI tools (such as chatbots) to work with recruiters in the selection process. First, the forms of communication used by the bot should be carefully crafted to avoid giving the recruiter the impression that the bot is issuing an order rather than merely offering advice or suggesting an additional course of action. Second, it is worthwhile not only to conduct extensive piloting of the solutions to be implemented, but also to analyze recruiters' previous experiences with analogous tools. The relatively reserved reception of AI tools revealed by our survey, despite the clear benefits their use can bring to a recruiter, shows the resistance of the surveyed community to trends and media buzz around the emergence of GPT chatbots. This can be interpreted as skepticism based on past experience with the unfulfilled promises of technological “magic wands,” or as a symptom of personal maturity and aversion to media hype. When implementing new technological solutions in this environment, different tactics should be used depending on which of these interpretations fits an organization's HR staff.
In summary, given the potential benefits associated with the use of AI in recruitment and selection processes, understanding the determinants of recruiters’ acceptance of these tools is of critical importance for both organizational sustainability and effectiveness. Recruiters’ resistance may stem not only from ethical concerns and doubts regarding the validity of AI-generated recommendations and decisions, but also from more or less conscious concerns about the extent to which such tools satisfy their needs for autonomy, competence, and relatedness. The present study seeks to extend the discussion on AI implementation in HR processes by incorporating a frequently overlooked perspective – namely, the role of the needs and concerns of the key actors in this process, that is, recruiters.

Author Contributions

Conceptualization, A.B. and J.W.; methodology, A.B. and J.W.; formal analysis, A.B. and J.W.; investigation, A.B. and J.W.; resources, A.B. and J.W.; data curation, A.B.; writing—original draft preparation, A.B. and J.W.; writing—review and editing, A.B. and J.W.; supervision, J.W. Both authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board of VIZJA University.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
(X)AI (Explainable) Artificial Intelligence
SDT Self-determination theory
SHAP SHapley Additive exPlanations
TAM Technology Acceptance Model

References

  1. Ogbeibu, S.; Emelifeonwu, J.; Pereira, V.; Oseghale, R.; Gaskin, J.; Sivarajah, U.; Gunasekaran, A. Demystifying the roles of organisational smart technology, artificial intelligence, robotics and algorithms capability: A strategy for green human resource management and environmental sustainability. Bus. Strategy Environ. 2024, 33, 369–388. [CrossRef]
  2. Koteczki, R.; Csikor, D.; Balassa, B.E. The role of generative AI in improving the sustainability and efficiency of HR recruitment process. Discover Sustain. 2025, 6, 601. [CrossRef]
  3. Khan, A.J.; Chaudhry, I.S.; Iqbal, J.; El Refae, G.A. Automation to sustainability: A systematic review of artificial intelligence applications in human resource management. Hum. Behav. Emerg. Technol. 2025, 7021656. [CrossRef]
  4. Abdelhay, S. The role of generative AI (ChatGPT) in optimizing the recruitment process in organizations: The mediating role of level of position and organization size. Int. J. Bus. Manag. Invent. 2024, 13, 89–100. [CrossRef]
  5. Chang, Y.L.; Ke, J. Socially responsible artificial intelligence empowered people analytics: A novel framework towards sustainability. Hum. Resour. Dev. Rev. 2024, 23, 88–120. [CrossRef]
  6. Oswald, F.L.; Behrend, T.S.; Putka, D.J.; Sinar, E. Big data in I-O psychology. Annu. Rev. Organ. Psychol. Organ. Behav. 2020, 7, 505–533. [CrossRef]
  7. Zhang, J.; Chen, Z. HRM digital transformation. J. Knowl. Econ. 2023. [CrossRef]
  8. Madanchian, M. AI tools for Human Resources decision-making. Appl. Sci. 2024, 14, 11750. [CrossRef]
  9. Almeida, F.; Junca Silva, A.; Lopes, S.L.; Braz, I. Understanding Recruiters’ Acceptance of Artificial Intelligence: Insights from the Technology Acceptance Model. Appl. Sci. 2025, 15, 746. [CrossRef]
  10. Breaugh, J.A. Employee recruitment. Annu. Rev. Psychol. 2013, 64, 389–416. [CrossRef]
  11. McCarthy, J.M.; Bauer, T.N.; Truxillo, D.M.; Anderson, N.R.; Costa, A.C.; Ahmed, S.M. Applicant perspectives during selection. J. Manag. 2017, 43, 1693–1725. [CrossRef]
  12. Woods, S.A.; Ahmed, S.; Nikolaou, I.; Costa, A.C.; Anderson, N.R. Personnel selection in the digital age. Eur. J. Work Organ. Psychol. 2020, 29, 64–77. [CrossRef]
  13. Ryan, A.M.; Ployhart, R.E. Applicants’ perceptions of selection procedures. J. Manag. 2000, 26, 565–606.
  14. Van Iddekinge, C.H.; Lanivich, S.E.; Roth, P.L.; Junco, E. Facebook-based assessment. J. Manag. 2016, 42, 1811–1835. [CrossRef]
  15. Nikolaou, I.; Georgiou, K.; Bauer, T.N.; Truxillo, D.M. Applicant reactions in recruitment and selection. In The Cambridge Handbook of Technology and Employee Behavior; Landers, R.N., Ed.; Cambridge University Press: Cambridge, 2019; 100–130. [CrossRef]
  16. Balcerak, A.; Woźniak, J. Process favorability for different types of selection methods. In Education Excellence and Innovation Management: A 2025 Vision to Sustain Economic Development during Global Challenges; Soliman, K.S., Ed.; 2020; 14832–14842.
  17. Albert, L.; Aggarwal, N.; Silva, N. Demographic differences and HR professionals’ concerns over the use of social media in hiring. e-J. Soc. Behav. Res. Bus. 2019, 10, 1–9.
  18. Koivunen, S.; Ala-Luopa, S.; Olsson, T.; Haapakorpi, A. The march of chatbots into recruitment. Comput. Support. Coop. Work 2022, 31, 487–516. [CrossRef]
  19. Mirowska, A.; Mesnet, L. Preferring the devil you know. Hum. Resour. Manag. J. 2022, 32, 364–383. [CrossRef]
  20. Roth, P.L.; Bobko, P.; Van Iddekinge, C.H.; Thatcher, J.B. Social media in selection decisions. J. Manag. 2016, 42, 269–298. [CrossRef]
  21. Black, J.S.; van Esch, P. AI-enabled recruiting: What is it and how should a manager use it? Bus. Horiz. 2020, 63, 215–226. [CrossRef]
  22. Wheeler, E.; Dillahunt, T.R. Navigating the job search as a low-resourced job seeker. In Proc. CHI Conf. Hum. Factors Comput. Syst.; 2018; 1–10. [CrossRef]
  23. Lu, A.J.; Dillahunt, T.R. Uncovering the promises and challenges of social media use in the low-wage labor market: insights from employers. In Proc. CHI Conf. Hum. Factors Comput. Syst.; 2021; 1–13. [CrossRef]
  24. Campion, M.C.; Campion, M.A.; Campion, E.D.; Reider, M.H. Computer scoring of candidate essays for personnel selection. J. Appl. Psychol. 2016, 101, 958–975. [CrossRef]
  25. Pan, Y.; Froese, F.; Liu, N.; Hu, Y.; Ye, M. Adoption of AI in recruitment. Int. J. Hum. Resour. Manag. 2022, 33, 1125–1147. [CrossRef]
  26. Suen, H.; Chen, M.Y.; Lu, S. AI in video interviews. Comput. Hum. Behav. 2019, 98, 93–101. [CrossRef]
  27. Van Esch, P.; Black, J.S.; Ferolie, J. Marketing AI recruitment. Comput. Hum. Behav. 2019, 90, 215–222. [CrossRef]
  28. Zacny, B.; Kania, K.; Sołtysik, A. Stosunek kandydatów do AI w rekrutacji [Candidates’ attitudes toward AI in recruitment]. Zarz. Zasob. Ludz. 2019, 5, 39–56.
  29. Mirowska, A. AI evaluation in selection. J. Pers. Psychol. 2020, 19, 142–149. [CrossRef]
  30. Schick, J.; Fischer, S. Candidates’ perception of AI-based assessment. Front. Psychol. 2021, 12, 739711. [CrossRef]
  31. Horodyski, P. Recruiter's perception of artificial intelligence (AI)-based tools in recruitment. Comput. Hum. Behav. Rep. 2023, 10, 100298. [CrossRef]
  32. Soleimani, M.; Intezari, A.; Arrowsmith, J.; Pauleen, D.J.; Taskin, N. Reducing AI bias in recruitment and selection: An integrative grounded approach. Int. J. Hum. Resour. Manag. 2025, 36, 2480–2515. [CrossRef]
  33. Lacroux, A.; Martin-Lacroux, C. Should I trust the artificial intelligence to recruit? Recruiters’ perceptions and behavior when faced with algorithm-based recommendation systems during resume screening. Front. Psychol. 2022, 13, 895997. [CrossRef]
  34. Woźniak, J. Akceptacja różnych form narzędzi selekcyjnych – przegląd literatury i wstępne wyniki badania [Acceptance of different forms of selection tools – a literature review and preliminary research results]. Zarz. Zasob. Ludz. 2019, 5, 11–39.
  35. Stańczyk, I.; Stuss, M. AI tools applied in HR 4.0. Zesz. Nauk. Politech. Śl. Organ. Zarz. 2022, 159, 425–436.
  36. Woźniak, J. Zarządzanie pracownikami w dobie Internetu [Managing Employees in the Internet Era]; Wolters Kluwer: Warsaw, Poland, 2020.
  37. Speer, A.B.; Tenbrink, A.P.; Wegmeyer, L.J.; Sendra, C.C.; Shihadeh, M.; Kaur, S. Meta-analysis of biodata in employment settings: Providing clarity to criterion and construct-related validity estimates. J. Appl. Psychol. 2022, 107, 1678–1705. [CrossRef]
  38. Landers, R.N.; Auer, E.M.; Collmus, A.B.; Armstrong, M.B. Gamification science, its history and future: Definitions and a research agenda. Simul. Gaming 2018, 49, 315–337. [CrossRef]
  39. Woźniak, J. The use of gamification at different levels of e-recruitment. Manag. Dyn. Knowl. Econ. 2015, 3, 257–278.
  40. Gladden, M.; Fortuna, P.; Modliński, A. The empowerment of artificial intelligence in post-digital organizations: Exploring human interactions with supervisory AI. Hum. Technol. 2022, 18, 98–121. [CrossRef]
  41. Nowak, M.; Pawłowska-Nowak, M. Integrating explainable AI (XAI) and NCA-validated clustering for an interpretable multi-layered recruitment model. AI 2026, 7, 53. [CrossRef]
  42. Kalff, Y.; Simbeck, K. Explained, yet misunderstood: How AI literacy shapes HR managers’ interpretation of user interfaces in recruiting recommender systems. arXiv 2025, arXiv:2509.06475.
  43. Köchling, A.; Riazy, S.; Wehner, M.C. Highly accurate, but still discriminatory. Bus. Inf. Syst. Eng. 2021, 63, 39–54. [CrossRef]
  44. Hunkenschroer, A.L.; Luetge, C. Ethics of AI-enabled recruiting and selection. J. Bus. Ethics 2022, 178, 977–1007. [CrossRef]
  45. McCarthy, J. What is AI? 2007. Available online: http://jmc.stanford.edu/artificial-intelligence/index.html.
  46. Haenlein, M.; Kaplan, A. A brief history of artificial intelligence. Calif. Manag. Rev. 2019, 61, 5–14. [CrossRef]
  47. Woźniak, J. Workplace Monitoring and Technology; Routledge: New York, NY, USA; London, UK, 2023.
  48. De Cremer, D.; McGuire, J. Human–algorithm collaboration works best if humans lead. Soc. Justice Res. 2022, 35, 33–55. [CrossRef]
  49. Vardarlier, P. Digital transformation of human resource management: Digital applications and strategic tools in HRM. In Digital Business Strategies in Blockchain Ecosystems; Springer: Cham, Switzerland, 2020; pp. 239–264.
  50. Ore, O.; Sposato, M. Opportunities and risks of AI in recruitment. Int. J. Organ. Anal. 2022, 30, 1771–1782. [CrossRef]
  51. Acikgoz, Y.; Davison, K.H.; Compagnone, M.; Laske, M. Justice Perceptions of Artificial Intelligence in Selection. Int. J. Sel. Assess. 2020, 28, 399–416. [CrossRef]
  52. Newman, D.T.; Fast, N.J.; Harmon, D.J. Algorithmic reductionism and procedural justice. Organ. Behav. Hum. Decis. Process. 2020, 160, 149–167. [CrossRef]
  53. Balcerak, A.; Woźniak, J.; Zbuchea, A. Predictors of fairness assessment for social media screening in employee selection. J. Entrep. Manag. Innov. 2023, 19, 97–123. [CrossRef]
  54. Pan, Y.; Froese, F.J. AI and HRM. Hum. Resour. Manag. Rev. 2023, 33, 100924. [CrossRef]
  55. Langer, M.; König, C.J.; Sanchez, D.R.-P.; Samadi, S. Highly automated interviews. J. Manag. Psychol. 2019, 35, 301–314. [CrossRef]
  56. Bankins, S.; Formosa, P.; Griep, Y.; Richards, D. AI decision making with dignity? Inf. Syst. Front. 2022, 24, 857–875. [CrossRef]
  57. Uggerslev, K.L.; Fassina, N.E.; Kraichy, D. Applicant attraction across recruitment stages. Pers. Psychol. 2012, 65, 597–660. [CrossRef]
  58. Anderson, N.; Witvliet, C. Fairness reactions to personnel selection methods: An international comparison between the Netherlands, the United States, France, Spain, Portugal, and Singapore. Int. J. Sel. Assess. 2008, 16, 1–13. [CrossRef]
  59. Daugherty, P.R.; Wilson, H.J. Human + Machine: Reimagining Work in the Age of AI; Harvard Business Press: Boston, 2018.
  60. Lee, M.K. Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management. Big Data Soc. 2018, 5(1). [CrossRef]
  61. Kern, C.; Gerdon, F.; Bach, R.L.; Keusch, F.; Kreuter, F. Humans versus machines. Patterns 2022, 3, 100591. [CrossRef]
  62. Ryan, R.M.; Deci, E.L. Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. Am. Psychol. 2000, 55, 68–78. [CrossRef]
  63. Ryan, R.M.; Deci, E.L. Self-Determination Theory: Basic Psychological Needs in Motivation, Development, and Wellness; Guilford Press: New York, NY, USA, 2017. [CrossRef]
  64. Ryan, R.M.; Duineveld, J.J.; Di Domenico, S.I.; Ryan, W.S.; Steward, B.A.; Bradshaw, E.L. Meta-review of self-determination theory. Psychol. Bull. 2022, 148, 813–842. [CrossRef]
  65. Van den Broeck, A.; Ferris, D.L.; Chang, C.-H.; Rosen, C.C. Self-determination theory at work. J. Manag. 2016, 42, 1195–1229. [CrossRef]
  66. Binns, R.; Van Kleek, M.; Veale, M.; Lyngs, U.; Zhao, J.; Shadbolt, N. Perceptions of justice in algorithmic decisions. In Proc. CHI Conf. Hum. Factors Comput. Syst.; 2018; 1–14. [CrossRef]
  67. Lee, M.K.; Jain, A.; Cha, H.J.; Ojha, S.; Kusbit, D. Procedural justice in algorithmic fairness. Proc. ACM Hum.-Comput. Interact. 2019, 3, 182. [CrossRef]
  68. Lee, M.K.; Kusbit, D.; Metsky, E.; Dabbish, L. Working with machines: The impact of algorithmic and data-driven management on human workers. In Proc. 33rd Annu. ACM Conf. Hum. Factors Comput. Syst. (CHI 2015); ACM: Seoul, Republic of Korea, 2015; 1603–1612. [CrossRef]
  69. Craiut, M.-V.; Iancu, I.R. Is technology gender neutral? Hum. Technol. 2022, 18, 297–315. [CrossRef]
  70. Karunakaran, A. In cloud we trust? Normalization of uncertainties in online platform services. Acad. Manag. Proc. 2018, 13700. [CrossRef]
  71. Ticona, J.; Mateescu, A. Trusted strangers: Carework platforms’ cultural entrepreneurship in the on-demand economy. New Media Soc. 2018, 20, 4384–4404. [CrossRef]
  72. Lacroux, A.; Martin-Lacroux, C. Recruiters’ behaviors faced with dual (AI and human) recommendations in personnel selection. Acad. Manag. Proc. 2023, 14704. [CrossRef]
  73. Dietvorst, B.J.; Simmons, J.P.; Massey, C. Algorithm aversion. J. Exp. Psychol. Gen. 2015, 144, 114–126.
  74. Langer, M.; Landers, R.N. The future of artificial intelligence at work. Comput. Hum. Behav. 2021, 123, 106878. [CrossRef]
Table 1. Demographics and characteristics of the sample.

Characteristics                                   N = 120   Percentage
Gender
  Female                                          90        75.0
  Male                                            30        25.0
Age
  Under 26                                        15        12.5
  26–35                                           34        28.3
  36–45                                           53        44.2
  46 or older                                     18        15.0
Level of education
  High school degree                              6         5.0
  Bachelor's degree                               32        26.7
  Master's degree                                 82        68.3
Length of professional experience in the HR area
  Less than 1 year                                14        11.7
  1–5 years                                       34        28.3
  6–10 years                                      25        20.8
  11–20 years                                     32        26.7
  More than 20 years                              15        12.5
Company size (number of employees)
  Fewer than 50                                   16        13.3
  50–100                                          12        10.0
  101–250                                         20        16.7
  251–500                                         10        8.3
  More than 500                                   62        51.7
Table 2. Mean scores, standard deviations, and t-test results for acceptance of AI applications.

Hypothesis   Variables     M      SD     t(119)   Sig.     Cohen's d
H1           Earlier       4.01   0.93   7.391    <0.001   0.994 (large)
             Later         3.62   0.87
H2           Assistance    3.89   0.84   11.316   <0.001   0.477 (medium)
             Replacement   2.81   1.17
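The statistics in Table 2 come from paired-samples t-tests (df = N − 1 = 119) with a paired-samples Cohen's d (mean of the differences divided by the SD of the differences). The survey data are not public, so the sketch below generates hypothetical paired ratings on a 1–5 scale purely to illustrate how such values are computed; the printed numbers are not the paper's results.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 120  # sample size reported in Table 2

# Hypothetical paired acceptance ratings (1-5 scale), standing in for the survey data
earlier = rng.normal(4.0, 0.9, n).clip(1, 5)
later = (earlier - rng.normal(0.4, 0.5, n)).clip(1, 5)

# Paired-samples t-test; df = n - 1 = 119
t_stat, p_value = stats.ttest_rel(earlier, later)

# Paired-samples Cohen's d: mean difference / SD of differences
diff = earlier - later
cohens_d = diff.mean() / diff.std(ddof=1)

print(f"t({n - 1}) = {t_stat:.3f}, p = {p_value:.4f}, d = {cohens_d:.3f}")
```

For paired data, the test statistic and effect size are linked by t = d·√n, which is a quick sanity check on any reported (t, d) pair.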
Table 3. Descriptive statistics.

Stages    Replacement   Company size           Mean   SD     N
Earlier   No            Up to 500 employees    3.53   1.10   35
                        Above 500 employees    4.17   0.91   38
                        Total                  3.86   1.05   73
          Yes           Up to 500 employees    4.15   0.60   23
                        Above 500 employees    4.33   0.67   24
                        Total                  4.24   0.64   47
          Total         Up to 500 employees    3.77   0.98   58
                        Above 500 employees    4.24   0.82   62
                        Total                  4.01   0.93   120
Later     No            Up to 500 employees    3.21   0.96   35
                        Above 500 employees    3.60   0.90   38
                        Total                  3.42   0.95   73
          Yes           Up to 500 employees    4.00   0.65   23
                        Above 500 employees    3.90   0.57   24
                        Total                  3.95   0.61   47
          Total         Up to 500 employees    3.53   0.93   58
                        Above 500 employees    3.72   0.80   62
                        Total                  3.62   0.87   120
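Table 3 is a cell-by-cell breakdown of acceptance by stage, AI role (replacement yes/no), and company size. Since the respondent-level data are not public, the sketch below uses synthetic data only to show how such a three-factor summary (mean, SD, N per cell) can be assembled; the group sizes and values are illustrative, not the study's.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 120  # respondents, as in Table 3

# Hypothetical respondent-level data mirroring Table 3's grouping factors
df = pd.DataFrame({
    "replacement": rng.choice(["No", "Yes"], n, p=[73 / 120, 47 / 120]),
    "company_size": rng.choice(["Up to 500", "Above 500"], n),
    "earlier": rng.normal(4.0, 0.9, n).clip(1, 5),
    "later": rng.normal(3.6, 0.9, n).clip(1, 5),
})

# Long format: one row per respondent per stage, then aggregate per cell
summary = (
    df.melt(id_vars=["replacement", "company_size"],
            value_vars=["earlier", "later"],
            var_name="stage", value_name="acceptance")
      .groupby(["stage", "replacement", "company_size"])["acceptance"]
      .agg(Mean="mean", SD="std", N="count")
      .round(2)
)
print(summary)
```

Each respondent contributes one rating per stage, so the N column sums to 2 × 120 across all cells, matching the repeated-measures design behind Table 3.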
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.