Preprint
Article

This version is not peer-reviewed.

The Role of Algorithmic Anthropomorphism, Transparency, and Fairness in Shaping Consumer Purchase Intentions in E-Commerce

Submitted: 14 April 2026
Posted: 20 April 2026


Abstract
Artificial intelligence (AI) is widely employed across e-commerce. Consequently, it becomes necessary to identify how various characteristics of algorithms affect buyer behavior. This study investigates the impact of algorithmic anthropomorphism, algorithmic transparency, and perceived algorithmic fairness on buyers' purchase intentions. In addition, it examines the role of the Technology Acceptance Model (TAM) as a moderating variable. A structured questionnaire was distributed to 384 online buyers via Qualtrics. The proposed model was tested using the PROCESS macro (Hayes, 2022) for mediation and moderation analyses. The results reveal that: (1) algorithmic anthropomorphism positively affects both algorithmic transparency and perceived algorithmic fairness; (2) algorithmic transparency has a significant positive effect on both perceived fairness and purchase intention; (3) perceived algorithmic fairness mediates the relationships between algorithmic anthropomorphism and purchase intention, as well as between algorithmic transparency and purchase intention; and (4) TAM positively influences purchase intention, though its moderating effect on the anthropomorphism–purchase intention link is only marginally significant. These findings offer theoretical contributions to AI-driven consumer behavior research and practical implications for the design of algorithmic e-commerce systems.

1. Introduction

E-commerce has grown rapidly over the last decade and has profoundly affected the retail industry across the globe. Online shopping has evolved from a static list of products into a fully AI-powered interface in which recommendation engines, chatbots, and other artificial intelligence technologies are used to support customers' buying decisions and personalize their shopping experience. Hence, there is a need to understand the criteria that consumers in the current digital age use to assess online products, and to establish what is key to building online trust and thereby facilitating purchasing decisions (Soni et al., 2019; He and Wang, 2024).
The rapid rise of artificial intelligence is another driver of this new retailing era. Machine learning techniques are becoming increasingly widespread in recommendation engines, dynamic pricing, and customer support (Braga, 2020; Nelson et al., 2023). While these systems promise to improve operational efficiency and enable more effective and relevant customer service, they pose several challenges for consumer behavior (Burrell, 2016; Pasquale, 2015). Consumers are rapidly becoming accustomed to interacting with 'black box' AI systems whose inner workings remain invisible to them and whose decision-making processes are difficult to comprehend. This raises major ethical issues concerning the transparency, fairness, and reliability of AI-generated outputs, which are set to become an integral part of consumers' online purchasing experiences. It is therefore vital to gain a better understanding of consumers' attitudes and responses to these technologies.
In this study, we identify the three constructs most relevant to consumer responses to AI in e-commerce. The first is algorithmic transparency (Goodman and Flaxman, 2017), which refers to consumers' understanding of the reasoning an algorithm uses to draw its conclusions. While increasing transparency appears to raise trust and confidence in an algorithm's output (Kizilcec, 2016; Wang, 2022), over-explaining the technology and its underlying mechanisms can have negative effects on consumer acceptance (Kizilcec, 2016). Nevertheless, in the e-commerce context, a moderate level of transparency is generally associated with positive consumer outcomes, and the potential negative effects of excessive transparency are context-dependent rather than universal. The second is algorithmic fairness, which refers to consumers' perceptions of whether the outcomes generated by an algorithm are fair (Starke et al., 2022; Mehrabi et al., 2021). If consumers believe that an algorithm's outcomes are unfair, they are less likely to purchase the products recommended to them, which in turn negatively affects conversion rates and customer loyalty (Draws et al., 2021). The third is anthropomorphism, the tendency for people to attribute human mental states such as thought, motivation, or emotion to non-human objects such as technology and artificial intelligence systems. Online retailers are increasingly deploying chatbots, voice assistants, and other conversational user interfaces (CUIs) designed to use human speech and interaction styles (Fink, 2012; Epley et al., 2007). While a more human-oriented interface can generate greater interest and emotional connection with consumers (Lester, 2006), it may also have negative effects, such as consumers perceiving the technology as manipulative or deceptive (Mirghaderi et al., 2023).
While there is a stream of literature on each of these variables, an integrative analysis of their combined effects has yet to be conducted. Furthermore, little research has examined how consumer technology acceptance conditions the impact of algorithmic design features on purchase decisions. The Technology Acceptance Model (TAM) is an empirically well-supported theory of how consumers come to adopt information systems (Davis, 1989; Venkatesh and Davis, 2000). On this basis, consumers holding different opinions about the functionality and usability of e-commerce technology are likely to respond differently to the anthropomorphic elements built into recommendation algorithms.
Against this background, the present study aims to address two research questions. First, how do algorithmic anthropomorphism, algorithmic transparency, and perceived algorithmic fairness affect consumer purchase intentions in e-commerce, and through which mediating mechanisms? Second, does consumers’ level of technology acceptance moderate the relationship between algorithmic anthropomorphism and purchase intention? To answer these questions, we develop and test a sequential mediation model in which algorithmic anthropomorphism influences purchase intention both directly and indirectly through algorithmic transparency and perceived algorithmic fairness, with TAM serving as a moderator of the direct anthropomorphism–purchase intention path.
The main contributions of this paper are as follows. First, we offer an integrative analytical framework that unites three algorithmic perception constructs—anthropomorphism, transparency, and fairness—within a single empirical model, thereby extending research that has examined these variables in isolation. Second, we empirically demonstrate the sequential mediation pathway through which anthropomorphic AI features translate into purchase intention, providing a more granular understanding of the underlying mechanism. Third, by incorporating TAM as a moderating variable, we identify a boundary condition for the effectiveness of anthropomorphic design, contributing to both the TAM and the algorithmic perception literatures. The findings provide actionable guidance for e-commerce platform designers and managers seeking to optimize AI-driven consumer experiences.

2. Literature Review and Hypothesis Development

2.1. Algorithmic Anthropomorphism and Perceived Fairness

Anthropomorphism is the attribution of human characteristics to non-human entities, a phenomenon studied in detail in psychology and human–computer interaction (Epley et al., 2007). In the context of AI in e-commerce, algorithmic anthropomorphism describes the extent to which customers perceive an AI application as human-like. An anthropomorphic AI (such as a chatbot, recommendation algorithm, or voice assistant) is seen as a human-acting entity with attributes such as naturalness, emotional understanding, and sociability. Anthropomorphic design is intended to foster a human bond between consumer and technology, which can lead to relationship formation and thereby encourage consumer interaction (Nass and Moon, 2000; Adam et al., 2021). Fink (2012) and Munnukka et al. (2022) have both studied the design of anthropomorphic interfaces and their effect on interaction.
Because users' engagement with and evaluation of AI systems is shaped by the human-versus-machine dichotomy, this anthropomorphic effect should also influence how users engage with and evaluate an AI system. Cheng et al. (2022) demonstrate that when consumers interact with an anthropomorphic AI, they draw on social norms from their offline face-to-face experiences, which then shape their initial evaluations of the AI-based service. Ochmann et al. (2024) likewise show that when users rate an anthropomorphic AI, they judge its output through a social rather than a computational fairness lens, because they are more aware of the system's intentionality and moral agency. Moreover, Roesler (2023) demonstrated that framing the workings of an AI system as more anthropomorphic (i.e., as being carried out by a human) is an effective way to reduce users' concern that the system is not acting fairly. Therefore, we hypothesize that
Hypothesis 1. 
Algorithmic anthropomorphism positively affects perceived algorithmic fairness.

2.2. Algorithmic Transparency and Perceived Fairness

Algorithmic transparency is the extent to which users can investigate why an algorithm selects certain options over others (Goodman and Flaxman, 2017). In the e-commerce context, this covers the transparency of product recommendations, search results, and the underlying data. The explainable AI (XAI) community widely agrees that some level of transparency in an AI system leads to higher user understanding and trust (Gunning and Aha, 2019; Rader et al., 2018).
Fairness also correlates positively with transparency. Procedural justice theory, as articulated by Rawls (1971), holds that if people understand the decision-making process, they will view its outcome as fair. In the context of algorithms, this idea is supported by Chai (2024): raising user awareness of how an AI makes its decisions can greatly enhance the perceived fairness of the way the algorithm applies its rules. Grimmelikhuijsen (2022) shows that increasing algorithmic transparency can increase public trust in agencies and strengthen the public's perception of being treated in a procedurally fair way. We therefore propose:
Hypothesis 2. 
Algorithmic transparency positively affects perceived algorithmic fairness.

2.3. Perceived Fairness and Purchase Intention

As with trust, fairness is a well-established concept in consumer behavior research (Conlon et al., 2004; Yong et al., 2021). Consumers react differently to prices, services, or decisions depending on whether these are considered fair. In our research, we examine perceived fairness in terms of consumers judging whether an AI-based product recommendation or output is fair, in the sense that it is seen as free of bias, produced in an appropriate way, and justifiable (Starke et al., 2022).
A consumer is more likely to purchase a product recommended by an algorithm when the recommendation matches their personal preferences, as opposed to being driven by commercial purposes (Toussaint et al., 2022). Conversely, a consumer is less likely to purchase the same product when it becomes apparent that the algorithm has been biased, whether in their favor or against them (Draws et al., 2021; Newman et al., 2020). We expect that consumers judge an algorithm to be more impartial when its recommendations are not overly tailored to their own buying patterns and when it does not appear intentionally designed to steer their purchasing decisions.
Hypothesis 3. 
Perceived algorithmic fairness positively affects purchase intention.

2.4. Algorithmic Transparency and Purchase Intention

Felzmann et al. (2020) examined the effect of algorithmic transparency on purchase intention, explaining that this effect can occur either indirectly, through perceived fairness, or directly. A transparent algorithm lowers consumers' decision risk and decision uncertainty, and their research showed that when consumers feel they understand how the recommendation algorithm works, purchase intention increases. Transparency also provides a sense of control: consumers who comprehend the logic behind algorithmic suggestions can make more informed decisions, which increases their willingness to transact (Lee, 2018; Fu et al., 2022).
Research by Sun and Li (2024) demonstrates that algorithmic transparency positively affects proactive consumer behavior, including purchase decisions. Similarly, Hwang (2024) finds that operational transparency in service platforms enhances consumer engagement and transaction likelihood. We therefore propose:
Hypothesis 4. 
Algorithmic transparency positively affects purchase intention.

2.5. Mediating Role of Perceived Fairness and Transparency

We posit that perceived algorithmic fairness and algorithmic transparency act as mediators of the relationship between algorithmic anthropomorphism and purchase intention. Anthropomorphic AI design does not affect purchase intentions only directly; rather, it triggers a series of consumer evaluations. Specifically, we propose that anthropomorphic AI design increases the perceived algorithmic transparency of that AI. Roesler (2023) showed that adding a human face to an online review search engine makes the underlying algorithm appear more transparent, i.e., more legible and visible to consumers. Higher perceived algorithmic transparency in turn raises perceived algorithmic fairness (Ochmann et al., 2024), which then leads to higher purchase intentions.
This sequential mediation logic follows Zhao et al. (2020) and Parboteeah et al. (2009), who argue, based on consumer information processing theory, that consumers form their purchase intentions through a series of assessments. The indirect pathways, through the sequential transparency–fairness route and through fairness alone, are therefore expected to transmit a substantial portion of the total anthropomorphism effect. Accordingly, we propose:
Hypothesis 5. 
Algorithmic anthropomorphism affects purchase intention through a sequential mediation pathway via algorithmic transparency and perceived algorithmic fairness (Anthropomorphism → Transparency → Fairness → Purchase Intention).
Hypothesis 6. 
Perceived algorithmic fairness mediates the relationship between algorithmic transparency and purchase intention.

2.6. Algorithmic Anthropomorphism and Transparency

The relationship between anthropomorphism and transparency is less theoretically explored. Humanizing the interaction with an AI leads users to assume that the system is more communicative, more human, and more transparent (Inie et al., 2024). Anthropomorphic cues such as natural phrasing, empathetic speech, and contextually appropriate responses are taken as indicators of the system's willingness and readiness to provide detailed explanations, and thereby increase perceived system transparency, whether or not the system actually provides detailed information (Cheng et al., 2022; Roesler, 2023). It is important to note that we refer here to perceived transparency (the subjective impression of openness), not objective transparency (the actual amount of information disclosed by the system). This distinction is critical, as anthropomorphic cues may enhance the feeling of transparency without increasing the quantity or quality of disclosed information. We hypothesize that
Hypothesis 8. 
Algorithmic anthropomorphism positively affects algorithmic transparency.

2.7. Algorithmic Anthropomorphism and Purchase Intention

Beyond the mediated pathways, algorithmic anthropomorphism is expected to exert a direct positive effect on purchase intention. Prior research grounded in theories of humanizing technology shows that human-like AI features can create social presence and user engagement that significantly affect customer purchasing behavior (Munnukka et al., 2022; Sun et al., 2024). Anthropomorphic design has also been found to reduce technology aversion and make the online shopping experience more enjoyable for customers (Kim and Jang, 2022; Lu et al., 2021). Thus:
Hypothesis 9. 
Algorithmic anthropomorphism positively affects purchase intention.

2.8. The Moderating Role of the Technology Acceptance Model

The Technology Acceptance Model (TAM), developed by Davis (1989), posits that perceptions of a technology's usefulness and ease of use are key variables influencing users' attitudes and behavior toward adopting that technology. TAM has been employed in various e-commerce studies, which confirm that it is appropriate and strongly applicable to online shopping adoption and consumer behavior (Fedorko et al., 2018; Hossain et al., 2023; Oktaria et al., 2024). In this study, TAM is used both as a direct predictor and as a moderator.
As a direct predictor, TAM suggests that consumers who perceive e-commerce platforms as easy to use and useful are more likely to engage in purchasing behavior. This well-established relationship has been confirmed across diverse cultural and product contexts (Liu and Lin, 2022; Suryawirawan, 2021). We hypothesize:
Hypothesis 10. 
TAM positively affects purchase intention.
This study also explores the moderating role of TAM in the effect of anthropomorphic design attributes on consumer responses. Consumers with high technology acceptance (HTA) can judge products quickly owing to their prior technology experience; human-like features and their associated anthropomorphic characteristics are therefore expected to have minimal effect on their purchasing decisions. In contrast, consumers with low technology acceptance (LTA) tend to experience ambiguity and uncertainty toward technological products and are therefore more dependent on anthropomorphic features to establish the required trust and assurance (Song, 2019; Goundar et al., 2021). In line with prior literature showing that system-level design attributes interact with consumer-level psychological variables (Zhao et al., 2020; Min and Kim, 2013), we hypothesize that
Hypothesis 11. 
TAM moderates the relationship between algorithmic anthropomorphism and purchase intention, such that the relationship is stronger when TAM is low.
The proposed research model, integrating the above hypotheses, is presented in Figure 1. Algorithmic anthropomorphism is the primary exogenous variable, influencing purchase intention both directly and through the sequential mediators of algorithmic transparency and perceived algorithmic fairness. TAM serves as both a direct predictor and a moderator of the anthropomorphism–purchase intention path.

3. Research Method

3.1. Questionnaire Design

The variables in this study are measured using multi-item scales adapted from previously validated instruments. The study investigates the effect of AI-based marketing platforms on consumer behavior through five constructs comprising 22 items: (1) algorithmic anthropomorphism (5 items), (2) algorithmic transparency (3 items), (3) perceived algorithmic fairness (4 items), (4) purchase intention (4 items), and (5) technology acceptance (10 items), the last covering perceived ease of use and perceived usefulness. The algorithmic anthropomorphism scale was adapted from Eyssel et al. (2011) and Munnukka et al. (2022) and evaluates the human-likeness of an AI system, covering naturalness, consciousness, and lifelike interaction. The algorithmic transparency items were adapted from Höddinghaus et al. (2021) and Sun and Li (2024) and assess consumers' understanding of the platform's algorithms and the transparency of its decision making. The perceived algorithmic fairness scale was adapted from Conlon et al. (2004) and Newman et al. (2020) and evaluates the perceived fairness of product recommendations and outcomes generated by an AI system. Purchase intention items were adapted from van der Heijden et al. (2003), measuring consumers' likelihood of returning to and purchasing from the e-commerce platform. Finally, the TAM scale was adapted from Davis (1989) and van der Heijden et al. (2003) to capture both perceived ease of use and perceived usefulness of e-commerce platforms.
Algorithmic anthropomorphism, transparency, fairness, and purchase intention items employ a 7-point Likert scale (1 = Strongly disagree, 7 = Strongly agree), while TAM items use a 5-point Likert scale, consistent with the original TAM instrument developed by Davis (1989), in order to maintain fidelity to the validated measurement properties of the original scale. In addition, 8 demographic questions were included. The complete measurement items and their sources are presented in Table 1.

3.2. Respondents and Data Collection

This study targets online shoppers who have recently used an online store equipped with artificial intelligence (AI) elements such as chatbots, recommendation algorithms, and virtual assistants. A quantitative approach was employed, and data were collected online via a survey created and distributed using the Qualtrics survey tool. A pilot study was first carried out with a small sample to test the reliability of the measured constructs and to ensure that the survey statements were clear and free of ambiguity; a few statements were rephrased based on pilot respondents' feedback (Van Teijlingen and Hundley, 2001). In the main study, respondents were reached via social media channels using a convenience sampling approach. A total of 384 valid responses were obtained after screening for incomplete answers and careless responses (e.g., identical answers for more than 80% of the items).
Table 2 presents the demographic profile of the sample. In terms of online shopping frequency, 40.63% of the respondents shop several times a month and 29.17% shop once a month, indicating that most respondents are active online shoppers. The sample is relatively balanced in terms of gender (41.15% male, 53.91% female, 4.95% prefer not to say). Regarding education, 50.26% hold a bachelor’s degree, 10.68% a master’s degree, and 11.20% a PhD. Nearly half the respondents are students (49.74%), followed by employees (29.95%). The sample is predominantly Turkish (93.49%). In terms of daily internet usage, 35.68% spend 4–6 hours and 27.86% spend more than 6 hours per day. AI familiarity is moderate: 43.23% rated themselves at 3 out of 5, while 25.78% and 14.58% rated 4 and 5, respectively. The average age of participants is 27.37 years (SD = 9.23), reflecting a predominantly young-adult sample.
Given the online distribution of the questionnaire, common method bias was a potential concern. To assess this, Harman’s single-factor test was applied. The unrotated factor solution revealed multiple factors, with the first factor accounting for less than 40% of the total variance, suggesting that common method bias does not pose a significant threat to the validity of the results.
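The paper does not specify the software used for this diagnostic, but the logic of Harman's single-factor test can be sketched briefly. The sketch below is a common eigenvalue-based approximation run on synthetic survey data; the factor structure and loadings are hypothetical illustrations, not the study's data.

```python
import numpy as np

def harman_single_factor_share(X):
    """Approximate Harman's single-factor test: the share of total variance
    captured by the first unrotated factor, estimated via the leading
    eigenvalue of the item correlation matrix."""
    R = np.corrcoef(X, rowvar=False)              # item-by-item correlations
    eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]
    return eigvals[0] / eigvals.sum()

# Synthetic data: 384 respondents x 22 items loading on five distinct
# (partially overlapping) constructs -- all values are hypothetical.
rng = np.random.default_rng(0)
n_resp, n_items, n_factors = 384, 22, 5
scores = rng.normal(size=(n_resp, n_factors))
loadings = np.zeros((n_factors, n_items))
for f in range(n_factors):
    loadings[f, f * 4:(f + 1) * 4 + 2] = 0.7      # overlapping item blocks
X = scores @ loadings + rng.normal(scale=0.8, size=(n_resp, n_items))

share = harman_single_factor_share(X)
print(f"First factor explains {share:.1%} of total variance")
```

With genuinely multi-factor data, as in the study, this share stays below the conventional concern thresholds; a share near or above 50% would instead flag common method bias.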

4. Data Analysis and Research Results

4.1. Reliability Analysis

Cronbach’s alpha coefficients were calculated to evaluate the internal consistency of each measurement scale. As shown in Table 3, all scales demonstrate high reliability. The perceived algorithmic fairness scale (α = 0.944), purchase intention scale (α = 0.928), algorithmic anthropomorphism scale (α = 0.927), and TAM scale (α = 0.936) all exceed the recommended threshold of 0.70 (Nunnally and Bernstein, 1994). The algorithmic transparency scale yields α = 0.861, which is also well above the threshold. The overall Cronbach’s alpha across all items is 0.961, confirming high overall reliability.
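For readers reproducing the reliability analysis, Cronbach's alpha reduces to a short formula over the item variances and the total-score variance. A minimal sketch, run on synthetic 7-point responses rather than the study's data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the scale total
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Sanity check: four highly correlated synthetic 7-point items.
rng = np.random.default_rng(1)
trait = rng.normal(4, 1.2, size=300)
items = np.clip(trait[:, None] + rng.normal(0, 0.6, size=(300, 4)), 1, 7)
print(f"alpha = {cronbach_alpha(items):.3f}")
```

Items that share a strong common trait, as above, yield alpha well above the 0.70 threshold; perfectly parallel items yield exactly 1.0.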

4.2. Descriptive Statistics and Normality Tests

Table 4 presents selected descriptive statistics. TAM items generally yielded means between 4.51 and 5.29, indicating that respondents find e-commerce platforms relatively easy to use and useful. Perceived algorithmic fairness averaged approximately 4.26, reflecting a cautious-to-moderate agreement with the fairness of algorithmic outputs. Purchase intention means ranged from 4.45 to 4.79, suggesting a generally positive but not overwhelmingly strong purchase tendency. Algorithmic transparency averages (4.24–4.56) indicate that participants perceive partial but not full clarity in how platform algorithms operate. Algorithmic anthropomorphism means ranged from 4.05 to 4.44, indicating moderate recognition of human-like qualities in AI systems.
The Kolmogorov–Smirnov normality test was applied to all variables. Results indicated significant departures from normality for all constructs (p < 0.001). Given the large sample size (N = 384) and the robustness of the PROCESS macro to non-normality through bootstrapping, this does not invalidate the subsequent analyses (Hayes, 2022).

4.3. Hypothesis Testing

4.3.1. Mediation Analysis

To test the proposed mediation hypotheses, PROCESS Model 6 (Hayes, 2022) was employed with 5,000 bootstrap resamples to estimate indirect effects and 95% bias-corrected confidence intervals. In this model, algorithmic anthropomorphism serves as the independent variable (X), algorithmic transparency (M1) and perceived algorithmic fairness (M2) as sequential mediators, and purchase intention as the dependent variable (Y). The results are summarized in Table 5.
The path analysis reveals several significant relationships. First, algorithmic anthropomorphism has a strong positive effect on algorithmic transparency (β = 0.665, p < 0.001), confirming H8. Second, algorithmic anthropomorphism positively affects perceived algorithmic fairness (β = 0.375, p < 0.001), supporting H1, and algorithmic transparency also significantly enhances perceived fairness (β = 0.367, p < 0.001), supporting H2. Third, perceived algorithmic fairness has a significant positive effect on purchase intention (β = 0.263, p < 0.001), supporting H3, and algorithmic transparency directly affects purchase intention (β = 0.336, p < 0.001), supporting H4. Even after controlling for the mediators, the direct effect of algorithmic anthropomorphism on purchase intention remains significant (β = 0.168, p = 0.002), confirming H9, though the magnitude is substantially reduced relative to the total effect (c = 0.555), indicating partial mediation.
Analysis of indirect effects further supports the mediation hypotheses. The indirect pathway from algorithmic anthropomorphism to purchase intention through transparency alone (a1·b1) is estimated at 0.224 with a confidence interval excluding zero, confirming that transparency carries a significant share of the anthropomorphism effect. The pathway through fairness alone (a2·b2 = 0.099) is also significant, supporting H6. The sequential indirect pathway through transparency and then fairness (a1·d21·b2 = 0.064) is likewise significant, supporting H5. The sum of all indirect effects is 0.386, which together with the direct effect (0.168) comprises the total effect of 0.555 (Table 6).
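The PROCESS macro runs in SPSS, SAS, or R; the Model 6 estimates above can nonetheless be sketched as three OLS regressions plus path products and a percentile bootstrap. The sketch below uses synthetic data whose generating coefficients merely echo the reported path sizes; it illustrates the method and is not a re-analysis of the study's data.

```python
import numpy as np

def slopes(y, *xs):
    """OLS slope coefficients of y on the given predictors (intercept included)."""
    X = np.column_stack([np.ones_like(y)] + list(xs))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]  # drop the intercept

def model6_paths(x, m1, m2, y):
    """PROCESS Model 6 path estimates for X -> M1 -> M2 -> Y."""
    (a1,) = slopes(m1, x)
    a2, d21 = slopes(m2, x, m1)
    cp, b1, b2 = slopes(y, x, m1, m2)
    return dict(a1=a1, a2=a2, d21=d21, c_prime=cp, b1=b1, b2=b2,
                ind_m1=a1 * b1,          # X -> transparency -> PI
                ind_m2=a2 * b2,          # X -> fairness -> PI
                ind_seq=a1 * d21 * b2)   # X -> transparency -> fairness -> PI

# Synthetic data with the hypothesized structure (coefficients illustrative).
rng = np.random.default_rng(42)
n = 384
x = rng.normal(size=n)                                      # anthropomorphism
m1 = 0.65 * x + rng.normal(size=n)                          # transparency
m2 = 0.38 * x + 0.37 * m1 + rng.normal(size=n)              # fairness
y = 0.17 * x + 0.34 * m1 + 0.26 * m2 + rng.normal(size=n)   # purchase intention

p = model6_paths(x, m1, m2, y)
(total,) = slopes(y, x)  # total effect c from the simple regression

# Percentile bootstrap CI for the sequential indirect effect a1*d21*b2.
boot = np.empty(2000)
for i in range(2000):
    idx = rng.integers(0, n, n)
    boot[i] = model6_paths(x[idx], m1[idx], m2[idx], y[idx])["ind_seq"]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"sequential indirect = {p['ind_seq']:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

A useful check on any such implementation is the exact OLS identity used in the text: the total effect equals the direct effect plus the sum of all three indirect effects.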

4.3.2. Moderation Analysis

To test the moderating role of TAM on the relationship between algorithmic anthropomorphism and purchase intention, PROCESS Model 1 was employed. The overall model was significant (R² = 0.556; F(3, 380) = 158.69, p < 0.001), indicating that the predictors explain approximately 55.6% of the variance in purchase intention. Algorithmic anthropomorphism exerts a direct positive effect on purchase intention (b = 0.449, p < 0.001), and TAM also demonstrates a significant positive main effect (b = 0.718, p < 0.001), strongly supporting H10. However, the interaction term (Anthropomorphism × TAM) approaches but does not reach conventional significance (b = –0.035, p = 0.075), providing only tentative support for H11; this finding should be interpreted with caution and requires further validation in future research. These results are presented in Table 7.
Examination of conditional effects at different levels of TAM revealed a declining trend in the effect of algorithmic anthropomorphism on purchase intention as TAM increases. At low TAM levels (16th percentile), the effect is strongest; at medium TAM levels (50th percentile), the effect remains significant but reduced; and at high TAM levels (84th percentile), the effect is further attenuated. This pattern suggests that as consumers become more accepting and familiar with technology, the persuasive pull of anthropomorphic algorithm features diminishes—potentially because technologically savvy consumers rely more on functional assessments than on human-like cues.
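The percentile-based probing described above follows PROCESS Model 1's pick-a-point approach, which can be sketched as a single OLS fit with an interaction term followed by simple slopes at the 16th, 50th, and 84th percentiles of the moderator. All data and coefficients below are synthetic and illustrative, chosen only to reproduce the qualitative declining-effect pattern, not the study's estimates.

```python
import numpy as np

def fit_moderation(x, w, y):
    """OLS for y = b0 + b1*x + b2*w + b3*(x*w); returns (b0, b1, b2, b3)."""
    X = np.column_stack([np.ones_like(y), x, w, x * w])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Synthetic data: the anthropomorphism effect weakens as TAM rises,
# but remains positive across the observed TAM range (illustrative values).
rng = np.random.default_rng(3)
n = 384
anthro = rng.normal(4.2, 1.0, size=n)                 # anthropomorphism scores
tam = rng.normal(4.9, 0.8, size=n)                    # TAM scores
pi = (1.2 * anthro + 0.7 * tam - 0.15 * anthro * tam
      + rng.normal(scale=0.5, size=n))                # purchase intention

b0, b1, b2, b3 = fit_moderation(anthro, tam, pi)

# Conditional (simple) effect of anthropomorphism at low/median/high TAM.
for pct in (16, 50, 84):
    w = np.percentile(tam, pct)
    print(f"TAM at {pct}th pct ({w:.2f}): anthro effect = {b1 + b3 * w:.3f}")
```

Because the conditional effect is linear in the moderator (b1 + b3·w), a negative interaction coefficient b3 mechanically produces the declining pattern across percentiles reported above.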

4.3.3. Summary of Hypothesis Testing

Table 8. Summary of hypothesis testing results. 
Hypothesis Path Result
H1 Anthropomorphism → Fairness (+) Supported
H2 Transparency → Fairness (+) Supported
H3 Fairness → Purchase Intention (+) Supported
H4 Transparency → Purchase Intention (+) Supported
H5 Anthrop. → Transp. → Fairness → PI (sequential mediation) Supported
H6 Fairness mediates Transparency → PI Supported
H8 Anthropomorphism → Transparency (+) Supported
H9 Anthropomorphism → Purchase Intention (+) Supported
H10 TAM → Purchase Intention (+) Supported
H11 TAM moderates Anthrop. → PI Marginally Supported

5. Discussion

This study investigates how algorithmic anthropomorphism, transparency, and perceived fairness influence consumer purchase intentions in AI-driven e-commerce environments, with the Technology Acceptance Model (TAM) serving as a moderating framework. The empirical findings contribute to the growing body of literature on AI-consumer interaction and yield several noteworthy insights.

5.1. Key Findings

The results confirm that algorithmic anthropomorphism has a significant total effect on purchase intention (c = 0.555), operating through both direct and indirect pathways. This finding is consistent with prior research demonstrating that consumers develop emotional connections with AI systems that display human-like characteristics (Fink, 2012; Munnukka et al., 2022; Gomes et al., 2025). The anthropomorphism effect is partially mediated by two sequential mechanisms: algorithmic transparency and perceived algorithmic fairness. Specifically, when AI systems exhibit human-like interaction styles, consumers are more inclined to perceive those systems as transparent and fair in their operations, which in turn strengthens purchase intention.
Algorithmic transparency emerges as a critical variable in the model, exerting both direct effects on purchase intention (β = 0.336) and indirect effects through perceived fairness. This underscores the importance of making algorithmic decision-making processes comprehensible to end users, consistent with the explainable AI (XAI) literature (Goodman and Flaxman, 2017; Grimmelikhuijsen, 2022). Consumers who believe they understand how an algorithm selects and recommends products are more likely to perceive the process as fair and, consequently, to make purchasing decisions on the platform.
Perceived algorithmic fairness plays a significant mediating role, channeling the effects of both anthropomorphism and transparency toward purchase intention. This finding aligns with procedural and distributive justice frameworks (Rawls, 1971; Conlon et al., 2004), suggesting that when consumers judge AI-driven recommendations as impartial and equitable, their willingness to transact increases. The mediation ratio of indirect-to-total effect is substantial (0.386/0.555 ≈ 0.70), indicating that most of the anthropomorphism effect on purchase intention is transmitted through the transparency–fairness pathway rather than operating directly.
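The product-of-coefficients decomposition behind this ratio can be verified arithmetically. The sketch below recomputes the three indirect effects, their sum, and the indirect-to-total ratio from the path coefficients reported in Table 5; small discrepancies from the tabled values (e.g., 0.223 vs. 0.224 for the transparency path) reflect rounding of the reported coefficients.

```python
# Recompute the indirect effects of the sequential mediation model
# (PROCESS Model 6) from the rounded path coefficients in Table 5.
a1, a2 = 0.665, 0.375   # Anthrop. -> Transparency, Anthrop. -> Fairness
d21 = 0.367             # Transparency -> Fairness
b1, b2 = 0.336, 0.263   # Transparency -> PI, Fairness -> PI
c_prime = 0.168         # direct effect of Anthrop. on PI

ind1 = a1 * b1            # via transparency (Table 6: ~0.224)
ind2 = a2 * b2            # via fairness (Table 6: ~0.099)
ind3 = a1 * d21 * b2      # via transparency -> fairness (Table 6: ~0.064)
total_indirect = ind1 + ind2 + ind3   # ~0.386
total = total_indirect + c_prime      # ~0.555 (reported total effect c)
ratio = total_indirect / total        # ~0.70 mediation ratio

print(f"indirect = {total_indirect:.3f}, total = {total:.3f}, "
      f"ratio = {ratio:.2f}")
```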
The TAM results confirm its direct positive effect on purchase intention (b = 0.718, p < 0.001), reinforcing the well-established finding that perceived usefulness and perceived ease of use are strong predictors of technology-mediated consumer behavior (Davis, 1989; Fedorko et al., 2018). However, the hypothesized moderating effect of TAM on the anthropomorphism–purchase intention link is only marginally significant (p = 0.075). This tentative finding should be interpreted with caution as it does not meet conventional significance thresholds. The declining conditional effects at higher TAM levels suggest that technologically sophisticated consumers may be less susceptible to anthropomorphic cues, possibly because they evaluate AI systems based on functional merits rather than social or emotional signals. This nuanced finding parallels the observation by Zhao et al. (2020) in the eWOM context that sense of power did not significantly moderate information quality’s effect on trust, suggesting that consumer-level psychological variables sometimes interact with system-level features in unexpected ways.

5.2. Theoretical Contributions

This study makes several theoretical contributions. First, it integrates three algorithmic constructs—anthropomorphism, transparency, and fairness—into a unified sequential mediation framework, which extends prior studies that have typically examined these variables in isolation. Second, the incorporation of TAM as a moderator enriches the model by demonstrating that the effects of anthropomorphic design features are not uniform across consumers but depend on their technology acceptance levels. This extends the classical TAM framework beyond its traditional role as a direct predictor of adoption behavior (Davis, 1989; Venkatesh and Davis, 2000) into a boundary condition for the effectiveness of algorithmic design strategies. Third, the sequential mediation pathway (anthropomorphism → transparency → fairness → purchase intention) provides a more nuanced understanding of the mechanism by which human-like AI features ultimately translate into consumer purchasing behavior. This contributes to both the algorithmic perception literature (Ochmann et al., 2024) and the consumer behavior literature on AI-driven e-commerce (Cheng et al., 2022).

5.3. Managerial Implications

The findings offer actionable recommendations for e-commerce platform designers and managers. First, platforms should invest in developing AI systems that incorporate moderate levels of anthropomorphic features—such as natural language styles, empathetic tone, and contextual responsiveness—without crossing into excessive anthropomorphism that may trigger uncanny valley effects (Mori et al., 2012; Eyssel et al., 2011). Second, transparency strategies should be prioritized. Platform designers should implement clear, user-friendly explanations of how recommendation algorithms work, which products are promoted and why, and how user data informs the process. This is not merely an ethical imperative but a commercial one, as transparency drives fairness perceptions and, through them, purchase intention. Third, given the marginal moderating effect of TAM, platform interfaces should be adaptive. For consumers with lower technology acceptance, simpler and more guided AI interactions may be more effective, whereas advanced users may benefit from richer functionality and less emphasis on anthropomorphic features. Developing segmented user experiences based on TAM profiles represents a promising design strategy.

5.4. Limitations and Future Research

Several limitations should be acknowledged. First, the sample is predominantly young, educated, and Turkish, which substantially limits the external validity and generalizability of the findings to other demographic and cultural contexts. Additionally, the predominance of student respondents (49.74%) and the reliance on convenience sampling mean that the results should be interpreted with considerable caution and may not reflect the broader population of online consumers. Future research should employ cross-cultural designs to test whether the relationships observed here hold across different markets. Second, the study relies on self-reported purchase intention rather than actual purchasing behavior. While intention is a widely accepted predictor, future studies could incorporate behavioral data to enhance ecological validity. Third, TAM was operationalized as a composite score combining perceived usefulness and perceived ease of use. Future research may benefit from examining these dimensions separately to identify more precise moderating patterns. Fourth, the cross-sectional design precludes causal inference. Experimental designs that manipulate levels of algorithmic anthropomorphism and transparency would provide stronger causal evidence. Finally, the current study does not differentiate between product categories or platform types; future research should explore whether the effects of anthropomorphism and transparency vary across sectors such as fashion, electronics, and groceries.

Author Contributions

Conceptualization, G.Y., Y.M., and S.G.; methodology, G.Y. and S.G.; theoretical framework, G.Y., Y.M.; data curation, S.G.; analysis and interpretation of data, G.Y. and S.G.; writing—original draft preparation, G.Y.; writing—review and editing, G.Y., Y.M., and S.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data used in this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Adam, M.; Wessel, M.; Benlian, A. AI-based chatbots in customer service and their effects on user compliance. Electronic Markets 2021, 31(2), 427–445.
  2. Braga, F. M. I. The influence and impact of artificial intelligence in the consumer decision-making process; Dissertation, ISCTE Business School, 2020.
  3. Burrell, J. How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society 2016, 3(1), 2053951715622512.
  4. Chai, F. Grading by AI makes me feel fairer? How different evaluators affect college students’ perception of fairness. Frontiers in Psychology 2024, 15.
  5. Cheng, X.; Zhang, X.; Cohen, J.; Mou, J. Human vs. AI: Understanding the impact of anthropomorphism on consumer response to chatbots. Journal of Retailing and Consumer Services 2022, 68, 103013.
  6. Conlon, D. E.; Porter, C. O.; Parks, J. M. The fairness of decision rules. Journal of Management 2004, 30(3), 329–349.
  7. Davis, F. D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly 1989, 13(3), 319–340.
  8. Draws, T.; Szlávik, Z.; Timmermans, B.; Tintarev, N.; Varshney, K.; Hind, M. Disparate impact diminishes consumer trust even for advantaged users. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, 2021.
  9. Epley, N.; Waytz, A.; Cacioppo, J. T. On seeing humans: A three-factor theory of anthropomorphism. Psychological Review 2007, 114(4), 864–886.
  10. Eyssel, F.; Kuchenbrandt, D.; Bobinger, S. Effects of anticipated human-robot interaction and predictability of robot behavior on perceptions of anthropomorphism. In Proceedings of the 6th ACM/IEEE International Conference on HRI, 2011; pp. 373–380.
  11. Fedorko, I.; Bačík, R.; Gavurová, B. Technology acceptance model in e-commerce segment. Management & Marketing 2018, 13(4), 1242–1256.
  12. Felzmann, H.; Fosch-Villaronga, E.; Lutz, C.; Tamò-Larrieux, A. Towards transparency by design for artificial intelligence. Science and Engineering Ethics 2020, 26, 3333–3361.
  13. Fink, J. Anthropomorphism and human likeness in the design of robots and human-robot interaction. In Social Robotics: 4th International Conference, ICSR 2012, 2012; pp. 199–208.
  14. Fu, S.; Liu, X.; Lamrabet, A.; Liu, H.; Huang, Y. Green production information transparency and online purchase behavior. Electronic Commerce Research and Applications 2022, 56, 101210.
  15. Gomes, S.; Lopes, J. M.; Nogueira, E. Anthropomorphism in artificial intelligence: A gamechanger for brand marketing. Future Business Journal 2025, 11(1).
  16. Goodman, B.; Flaxman, S. European Union regulations on algorithmic decision making and a “right to explanation”. AI Magazine 2017, 38(3), 50–57.
  17. Goundar, S.; Lal, K.; Chand, A.; Vyas, P. Consumer perception of electronic commerce: Incorporating trust and risk with the TAM. International Journal of Electronic Business 2021, 16(3), 214–235.
  18. Grimmelikhuijsen, S. Explaining why the computer says no: Algorithmic transparency affects the perceived trustworthiness of automated decision-making. Public Administration Review 2022, 83(2), 241–262.
  19. Gunning, D.; Aha, D. DARPA’s explainable artificial intelligence (XAI) program. AI Magazine 2019, 40(2), 44–58.
  20. Hayes, A. F. Introduction to Mediation, Moderation, and Conditional Process Analysis, 3rd ed.; The Guilford Press, 2022.
  21. He, S.; Wang, R. The development potential and challenges of one-click generative AI in cross-border e-commerce. International Journal of Retail & Distribution Management 2024.
  22. Heijden, H.; Verhagen, T.; Creemers, M. Understanding online purchase intentions: Contributions from technology and trust perspectives. European Journal of Information Systems 2003, 12(1), 41–48.
  23. Hossain, M.; Hussain, M.; Akther, A. E-commerce platforms in developing economies: Unveiling behavioral intentions through TAM. Journal of Electronic Commerce Research 2023, 24(2).
  24. Höddinghaus, M.; Sondern, D.; Hertel, G. The automation of leadership functions: Would people trust decision algorithms? Computers in Human Behavior 2021, 116, 106635.
  25. Hwang, E. Operational transparency as a service design: An investigation on labor/effort observation effect. Journal of Hospitality & Tourism Research 2024, 48(1), 59–77.
  26. Inie, N.; Druga, S.; Zukerman, P.; Bender, E. M. From “AI” to probabilistic automation: How does anthropomorphization of technical systems affect trust? In Proceedings of CHI 2024, 2024.
  27. Kim, H.; Jang, S. Restaurant-visit intention: Do anthropomorphic cues, brand awareness and subjective social class interact? International Journal of Hospitality Management 2022, 104, 103243.
  28. Kizilcec, R. F. How much information? Effects of transparency on trust in an algorithmic interface. In Proceedings of CHI 2016, 2016; pp. 2390–2395.
  29. Lee, M. Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management. Big Data & Society 2018, 5(1).
  30. Leinsle, P.; Totzek, D.; Schumann, J. How price fairness and fit affect customer tariff evaluations. Journal of Service Management 2018, 29(4), 735–764.
  31. Liu, A.; Lin, S. Service for specialized and digitalized: Consumer switching perceptions of e-commerce in specialty retail. Managerial and Decision Economics 2022, 43(7), 2701–2714.
  32. Lu, Y.; Liu, Y.; Le, T.; Ye, S. Cuteness or coolness—How does different anthropomorphic brand image accelerate consumers’ willingness to buy. Frontiers in Psychology 2021, 12.
  33. Mehrabi, N.; Morstatter, F.; Saxena, N.; Lerman, K.; Galstyan, A. A survey on bias and fairness in machine learning. ACM Computing Surveys 2021, 54(6), 1–35.
  34. Mirghaderi, L.; Sziron, M.; Hildt, E. Ethics and transparency issues in digital platforms: An overview. AI 2023, 4(4), 831–843.
  35. Mori, M.; MacDorman, K. F.; Kageki, N. The uncanny valley. IEEE Robotics & Automation Magazine 2012, 19(2), 98–100.
  36. Munnukka, J.; Talvitie-Lamberg, K.; Maity, D. Anthropomorphism and social presence in human–virtual service assistant interactions. Computers in Human Behavior 2022, 132, 107234.
  37. Nass, C.; Moon, Y. Machines and mindlessness: Social responses to computers. Journal of Social Issues 2000, 56(1), 81–103.
  38. Nelson, J. P.; Biddle, J. B.; Shapira, P. Applications and societal implications of artificial intelligence in manufacturing: A systematic review. Technology in Society 2023, 72, 102194.
  39. Newman, D. T.; Fast, N. J.; Harmon, D. J. When eliminating bias isn’t fair: Algorithmic reductionism and procedural justice in human resource decisions. Organizational Behavior and Human Decision Processes 2020, 160, 149–167.
  40. Nunnally, J. C.; Bernstein, I. H. Psychometric Theory, 3rd ed.; McGraw-Hill, 1994.
  41. Ochmann, J.; Michels, L.; Tiefenbeck, V.; Maier, C.; Laumer, S. Perceived algorithmic fairness: An empirical study of transparency and anthropomorphism. Electronic Markets 2024, 34(1).
  42. Oktaria, R.; Naibaho, S.; Muda, I.; Kesuma, S. Factors of acceptance of e-commerce technology among society: Integration of TAM. International Journal of Science, Technology & Management 2024, 5(1).
  43. Parboteeah, D. V.; Valacich, J. S.; Wells, J. D. The influence of website characteristics on a consumer’s urge to buy impulsively. Information Systems Research 2009, 20(1), 60–78.
  44. Pasquale, F. The Black Box Society: The Secret Algorithms That Control Money and Information; Harvard University Press, 2015.
  45. Rader, E.; Cotter, K.; Cho, J. Explanations as mechanisms for supporting algorithmic transparency. In Proceedings of CHI 2018, 2018; pp. 1–13.
  46. Rawls, J. A Theory of Justice; Harvard University Press, 1971.
  47. Roesler, E. Anthropomorphic framing and failure comprehensibility influence different facets of trust towards industrial robots. Frontiers in Robotics and AI 2023.
  48. Soni, N.; Sharma, E. K.; Singh, N.; Kapoor, A. Impact of artificial intelligence on businesses: From research, innovation, market deployment to future shifts in business models. International Journal of Engineering and Advanced Technology 2019, 8(6S), 1–11.
  49. Song, Y. W. User acceptance of an artificial intelligence (AI) virtual assistant: An extension of the technology acceptance model. Doctoral Dissertation, 2019.
  50. Starke, C.; Baleis, J.; Keller, B.; Marcinkowski, F. Fairness perceptions of algorithmic decision-making: A systematic review of the empirical literature. Big Data & Society 2022, 9(2).
  51. Sun, C.; Ding, Y.; Wang, X.; Meng, X. Anthropomorphic design in mortality salience situations. International Journal of Human-Computer Interaction 2024, 40(4).
  52. Sun, T.; Li, Y. Effect of algorithmic transparency on gig workers’ proactive service performance. American Behavioral Scientist 2024, 68(1).
  53. Suryawirawan, O. The effect of college students’ technology acceptance on e-commerce adoption. Bisma 2021, 14(1), 46–62.
  54. Toussaint, P.; Thiebes, S.; Schmidt-Kraepelin, M.; Sunyaev, A. Perceived fairness of direct-to-consumer genetic testing business models. Electronic Markets 2022, 32, 1129–1157.
  55. Van Teijlingen, E.; Hundley, V. The importance of pilot studies. Social Research Update 2001, 35, 1–4.
  56. Venkatesh, V.; Davis, F. D. A theoretical extension of the technology acceptance model: Four longitudinal field studies. Management Science 2000, 46(2), 186–204.
  57. Wang, H. Transparency as manipulation? Uncovering the disciplinary power of algorithmic transparency. Philosophy & Technology 2022, 35(3).
  58. Yong, W.; TianZe, T.; Zhang, W.; Sun, Z.; Xiong, Q. The Achilles tendon of dynamic pricing: The effect of consumers’ fairness preferences. Frontiers in Psychology 2021, 12.
  59. Zhao, Y.; Wang, L.; Tang, H.; Zhang, Y. Electronic word-of-mouth and consumer purchase intentions in social e-commerce. Electronic Commerce Research and Applications 2020, 41, 100980.
Figure 1. Conceptual framework of the study.
Table 1. Variable measurement scales and sources.
Variable Measuring Items Source
Algorithmic Anthropomorphism (5 items) The algorithm’s behavior seemed natural rather than artificial; The algorithm felt humanlike rather than mechanical; The algorithm gave the impression of being conscious and aware; The responses felt lifelike and realistic; The algorithm interacted in a smooth and humanlike way. Eyssel et al. (2011); Munnukka et al. (2022)
Algorithmic Transparency (3 items) I think I could understand the decision-making process of platform algorithms very well; I think I can see through platform algorithms’ decision-making process; I think the decision-making process of platform algorithms is clear and transparent. Höddinghaus et al. (2021); Sun & Li (2024)
Perceived Algorithmic Fairness (4 items) The way this algorithm determined which products were displayed seems fair; The algorithm’s process for deciding which products were displayed was fair; The decision made by the algorithm was fair; The outcome of the algorithm’s decision was fair. Conlon et al. (2004); Newman et al. (2020)
Purchase Intention (4 items) I am likely to return to this store’s website in the future; I am likely to consider purchasing from this website in the short term; I am likely to consider purchasing in the longer term; I am likely to buy from this store for this purchase. Heijden et al. (2003)
TAM (10 items) Perceived Ease of Use (5 items): Easy to learn, understandable, requires little effort, minimal mental effort, easy to navigate. Perceived Usefulness (5 items): Increases productivity, helps make better decisions, effective in improving sales, frequent purchases, beneficial for sales/marketing. Davis (1989); Heijden et al. (2003)
Table 2. Analysis of demographic variables.
Variable Category Freq. Percent
Online shopping frequency Every day 18 4.69%
Several times a week 39 10.16%
Once a week 51 13.28%
Several times a month 156 40.63%
Once a month 112 29.17%
Never 8 2.08%
Gender Male 158 41.15%
Female 207 53.91%
Prefer not to say 19 4.95%
Education High school or below 80 20.83%
2-year university 27 7.03%
Bachelor’s degree 193 50.26%
Master’s degree 41 10.68%
PhD 43 11.20%
Profession Student 191 49.74%
Employee 115 29.95%
Self-employment 17 4.43%
Other 61 15.89%
Nationality Turkey 359 93.49%
France 3 0.78%
Netherlands 6 1.56%
Other 16 4.17%
Daily internet usage 0–2 hours 34 8.85%
2–4 hours 106 27.60%
4–6 hours 137 35.68%
More than 6 hours 107 27.86%
AI familiarity (1–5) 1 (Not familiar) 15 3.90%
2 48 12.50%
3 166 43.23%
4 99 25.78%
5 (Very familiar) 56 14.58%
Impulse buying Yes 143 37.24%
No 241 62.76%
Table 3. Cronbach’s alpha reliability coefficients.
Variable No. of Items α (Scale) α (All Items)
Perceived Algorithmic Fairness 4 0.944 0.961
Purchase Intention 4 0.928 0.961
Algorithmic Transparency 3 0.861 0.961
Algorithmic Anthropomorphism 5 0.927 0.961
TAM 10 0.936 0.961
Table 4. Descriptive statistics (construct-level summary).
Variable N Mean Std. Deviation
TAM 384 5.00 1.40
Perceived Algorithmic Fairness 384 4.26 1.51
Purchase Intention 384 4.58 1.53
Algorithmic Transparency 384 4.41 1.53
Algorithmic Anthropomorphism 384 4.24 1.53
Table 5. Path analysis of mediation model (PROCESS Model 6).
Path β p Result
Anthropomorphism → Transparency (a1) 0.665 < 0.001 H8: Supported
Anthropomorphism → Fairness (a2) 0.375 < 0.001 H1: Supported
Transparency → Fairness (d21) 0.367 < 0.001 H2: Supported
Fairness → Purchase Intention (b2) 0.263 < 0.001 H3: Supported
Transparency → Purchase Intention (b1) 0.336 < 0.001 H4: Supported
Anthropomorphism → Purchase Intention (c’) 0.168 0.002 H9: Supported
Total effect (c) 0.555 < 0.001
Notes: N = 384; Bootstrap = 5,000 resamples; 95% bias-corrected CI.
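The bootstrap procedure noted above can be sketched on simulated data. The sketch below resamples cases with replacement and recomputes the product-of-coefficients indirect effect a·b each time; it uses a plain percentile interval, whereas PROCESS's bias-corrected interval additionally adjusts the percentile cut-points. The data-generating values (a = 0.5, b = 0.4) are arbitrary illustrations, not the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 384  # matching the study's sample size

# Simulated single-mediator data with an arbitrary true indirect
# effect of 0.5 * 0.4 = 0.2; illustrative values only.
x = rng.normal(size=n)                       # e.g., anthropomorphism
m = 0.5 * x + rng.normal(size=n)             # mediator (e.g., transparency)
y = 0.4 * m + 0.2 * x + rng.normal(size=n)   # purchase intention

def indirect(x, m, y):
    """Product-of-coefficients indirect effect a * b."""
    a = np.polyfit(x, m, 1)[0]                      # slope of m on x
    b = np.linalg.lstsq(np.column_stack([m, x, np.ones_like(x)]),
                        y, rcond=None)[0][0]        # slope of y on m, given x
    return a * b

boot = []
for _ in range(5000):                # 5,000 resamples, as in the study
    idx = rng.integers(0, n, n)      # resample cases with replacement
    boot.append(indirect(x[idx], m[idx], y[idx]))

lo, hi = np.percentile(boot, [2.5, 97.5])  # percentile 95% CI
print(f"indirect = {indirect(x, m, y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

An indirect effect is deemed significant when this interval excludes zero, which is the decision rule reflected in the "CI excl. zero" entries of Table 6.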
Table 6. Indirect effects decomposition.
Indirect Path Effect Boot 95% CI
Anthrop. → Transparency → PI (a1·b1) 0.224 CI excl. zero
Anthrop. → Fairness → PI (a2·b2) 0.099 CI excl. zero
Anthrop. → Transp. → Fairness → PI (a1·d21·b2) 0.064 CI excl. zero
Total indirect effect 0.386 CI excl. zero
Direct effect (c’) 0.168 p = 0.002
Total effect (c) 0.555 p < 0.001
Notes: PI = Purchase Intention. Bootstrap = 5,000 resamples.
Table 7. Moderation analysis results (PROCESS Model 1).
Predictor b p
Algorithmic Anthropomorphism 0.449 < 0.001
TAM 0.718 < 0.001
Anthropomorphism × TAM –0.035 0.075
Notes: R2 = 0.556; F(3, 380) = 158.69, p < 0.001. N = 384.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.