2. Paradigm Shift Analysis
The impact of digital intelligence on the education economy is mainly reflected in three aspects. First, technology-driven factors (AI and data) have expanded educational capabilities, reshaped the education ecosystem, challenged existing structures, and demanded new paradigms. Second, the DIE requires profound institutional restructuring, which reduces transaction costs and creates a framework for promoting sustainable development through new forms of education and collaborative governance models. Third, technological progress has polarized the labor market, changing the basic composition of human capital and requiring its reconstruction. Across the elements of the research paradigm of educational economics, the following transformations can be observed:
(1) Theoretical Frameworks: from Linear Causality to Dynamic Networks
Theoretical frameworks (Conceptual Anchors) underpin the foundational logic and explanatory systems of research design, guiding the selection of scientific problems. The dominant theoretical paradigm in traditional economics of education has been human capital theory, specifically the paradigm of “Human Capital + Institutional Economics + Technological Economics”. The seminal work establishing human capital theory is Schultz’s (1961) Investment in Human Capital. This work revolutionized classical economics, which had narrowly defined “capital” as solely physical capital. He reconstructed the concept of human capital by integrating the enhancement of human capabilities into growth models, positing human capital as the core driver of economic growth. He identified education as the most crucial form of human capital investment and its primary vehicle, thereby articulating the economic value proposition of education. This reconstruction broke through the bottlenecks of existing growth theories by fundamentally revising the assumption of “exogenous technological progress” in the Solow growth model. By endogenizing human capital, Schultz laid the essential groundwork for the New Growth Theory (Romer, Lucas) and catalyzed a pivotal shift in development economics. Becker (1964) subsequently provided a systematic analysis of the various types of human capital and their formation mechanisms. His work offered detailed examinations of the processes involved in education, on-the-job training, and other forms of human capital investment, emphasizing the critical role of education and training in human capital development. Beyond constituting a paradigm revolution within economics itself, human capital theory furnished a powerful policy logic: achieving a “dual win of growth and equity through educational investment.” To this day, it remains the core theoretical foundation for educational reform and economic development strategies globally.
Human capital theory focuses on education and individual productivity, institutional economics emphasizes rules and transaction costs, and technological economics focuses on technological efficiency. Although the three complement each other, they remain essentially parallel, which makes it difficult to explain the dynamic entanglement of education, technology, and institutions in the era of digitization. The traditional paradigm regards education as a linear input-output system, whereas digital education is nonlinear and adaptive. In the era of digital intelligence, new elements such as digital skills and AI-collaboration capabilities must be incorporated. As human capital upgrades, digital skills are displacing traditional skills as the core of education investment: the demand for data analysts in American companies is growing at an annual rate of 15%, with a salary premium of 30% (Goldfarb & Trefler, 2019). Digital resources such as data, algorithms, and computing power serve as capital elements, driving educational output together with human capital (Cao et al., 2023). As digital capital becomes a factor of production and data and algorithms become new production materials for educational output, dynamic cognitive diagnostic models can optimize teaching interventions and improve learning efficiency by analyzing student behavior data. At the same time, the capitalization of educational data requires policy intervention to avoid monopolies.
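As one concrete illustration of the dynamic cognitive diagnostic models mentioned above, the following is a minimal sketch of Bayesian Knowledge Tracing (BKT), a standard dynamic model that updates the estimated probability of a student's skill mastery from a stream of behavior data. All parameter values and the simulated response log are invented for illustration only.

```python
# Minimal Bayesian Knowledge Tracing (BKT) sketch: a dynamic cognitive
# diagnostic model that updates the probability a student has mastered a
# skill after each observed response. Parameter values are illustrative,
# not estimates from real data.

def bkt_update(p_mastery, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """One BKT step: Bayesian posterior after a response, then a learning transition."""
    if correct:
        likelihood = p_mastery * (1 - p_slip)
        evidence = likelihood + (1 - p_mastery) * p_guess
    else:
        likelihood = p_mastery * p_slip
        evidence = likelihood + (1 - p_mastery) * (1 - p_guess)
    posterior = likelihood / evidence
    # Learning transition: an unmastered student may learn between steps.
    return posterior + (1 - posterior) * p_learn

p = 0.3  # prior probability of mastery
for answer in [True, True, False, True]:  # simulated response log
    p = bkt_update(p, answer)
print(round(p, 3))  # updated mastery estimate after four responses
```

A teaching system can use the updated estimate to decide, in real time, whether to advance the student or to intervene, which is the sense in which such models "optimize teaching interventions" above.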
The popularization of AI tools has made tool-usage ability a new dimension of human capital. Educational economics needs to redefine the boundary of "skills": as the cost of prediction falls, the value of human prediction falls, the value of judgment and tool operation rises, and the skill boundary shifts toward human-machine collaboration (Agrawal et al., 2022). In the digital age, the education economy focuses on the allocation and efficiency of educational resources and emphasizes how information technology (such as big data) can optimize resource allocation. This implies that digital capital (such as digital resources) is becoming a new focus of the education economy, and educational economic theory is integrating digital elements (such as data assets and digital skills) and systems perspectives (such as interdisciplinary integration). Traditional human capital is being supplemented rather than completely replaced in the context of digitization: AI automates routine tasks but complements non-routine tasks, redefining skill demands, and it reshapes employment toward polarization, requiring proactive adaptation through skill upgrading (Autor, 2019). Seventy percent of businesses prioritize creativity and critical thinking in curricula, and social safety nets must buffer AI-driven displacement (World Economic Forum, 2023). Task-based models show that AI complements creativity-driven professions, with policy recommendations centered on reskilling displaced workers (Agrawal et al., 2022).
Therefore, AI will not replace humans, but humans who fail to adapt to AI will be replaced, as Leah Belsky, the OpenAI Education Lead, declared (OpenAI, 2025). Transforming the education system toward cultivating creativity, critical thinking, and interpersonal communication skills, and providing retraining for workers displaced by AI, has become a demand of the new era; it also lays a theoretical foundation for subsequent research on the economics of AI education.
(2) Methodological tools: from Static Inference to Real-time Prediction
Methodology is the embodiment of paradigms at the operational level, and research method design is conventionally divided into three paradigms: qualitative, quantitative, and mixed methods. Each paradigm corresponds to specific philosophical assumptions (ontology/epistemology). Methodology and theoretical paradigms are inseparable, together forming a "problem-solving toolbox" that determines how problems are defined and answers are sought. The traditional methodological tools of educational economics rest on static, linear, and simplified assumptions. For example, Schultz's (1961) macroeconomic growth model is a static cost-benefit regression analysis grounded in human capital theory, and Becker's (1964) cost-benefit analysis quantifies individual returns and spillover effects. Traditional educational production function models, such as Hanushek's (1986) analogy between the educational process and the "production process" of economics, use quantitative methods to analyze the causal relationship between educational inputs (teachers, students, families) and academic output. However, these methods perform poorly in simulating the dynamic impact of educational policies because they cannot capture feedback loops (such as student performance → teacher decision-making → iteration of student performance), and they cannot handle the dynamic, nonlinear, and heterogeneous nature of the education system in the digital age (such as real-time changes in student behavior data and chain reactions of educational policies).
The methodological tools of the DIE need to shift from static regression and cost-benefit analysis to machine learning, complex systems, and real-time data fusion: adopting a hybrid approach of complex systems and social simulation, incorporating digital capital into the framework of education investment evaluation, and moving beyond traditional human capital theory (HCT). The hybrid approach combines systems science with computational simulation, focusing on the dynamic process of "individual interaction → system emergence", which corresponds more closely to the educational reality of the digital age. For example, supervised learning (such as random forests and neural networks) and unsupervised learning (such as cluster analysis) can replace traditional OLS regression for high-dimensional, nonlinear data, while the education system can be treated as a complex adaptive system (CAS) and the long-term effects of policies simulated with system dynamics (SD) and agent-based modeling (ABM). Because traditional OLS cannot capture nonlinear relationships, deep learning models can improve prediction accuracy by over 30% (Athey, 2018).
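The contrast between OLS and nonlinear learners can be made concrete with a toy experiment. Here k-nearest-neighbours stands in for the random forests and neural networks named above, and the data are synthetic (a sine curve plus noise), not real education data; the point is only that a linear fit cannot follow a nonlinear input-output relationship while even a simple nonlinear learner can.

```python
# Toy comparison: a linear model (OLS) vs. a simple nonlinear learner on
# synthetic data with a nonlinear input-output relationship. k-NN stands
# in for the random forests / neural networks discussed in the text.
import math
import random

random.seed(0)

def sample():
    x = random.uniform(0, 6)
    return x, math.sin(x) + random.gauss(0, 0.1)  # nonlinear "outcome"

data = [sample() for _ in range(400)]
train, test = data[:300], data[300:]

# OLS: y = a*x + b via the closed-form least-squares solution.
n = len(train)
sx = sum(x for x, _ in train); sy = sum(y for _, y in train)
sxx = sum(x * x for x, _ in train); sxy = sum(x * y for x, y in train)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n
mse_ols = sum((y - (a * x + b)) ** 2 for x, y in test) / len(test)

# k-NN regression: predict each test point from its 5 nearest neighbours.
def knn_predict(x0, k=5):
    nearest = sorted(train, key=lambda point: abs(point[0] - x0))[:k]
    return sum(y for _, y in nearest) / k

mse_knn = sum((y - knn_predict(x)) ** 2 for x, y in test) / len(test)
print(mse_ols > mse_knn)  # the nonlinear learner fits the curve far better
```

The same logic motivates the shift described above: when the true relationship between inputs and educational outputs is nonlinear, a linear specification leaves large systematic errors that flexible learners avoid.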
Traditional cost-benefit analysis (CBA) relies on macro indicators such as income and employment rates, whereas multi-source data can quantify "learning engagement" (such as MOOC video pause frequency) and "cognitive load" (such as eye-tracking heatmaps). It is therefore necessary to integrate big educational data (such as LMS logs, eye tracking, and social media) with traditional survey data to achieve micro-behavioral insights. At the same time, the usage rate of educational apps can be monitored through streaming data processing, allowing dynamic adjustment of resource allocation and real-time policy evaluation. Digital-intelligence technology has revealed a multi-directional interactive network of educational elements: the data foundation shifts from sampled survey data to holographic multimodal data streams (behavioral, physiological, environmental), and the unit of analysis shifts from independent individual/institutional variables to the strength of relationships between network nodes. Nodes are diverse subjects such as teachers, students, resources, and the environment, and connections are adaptive feedback loops driven by real-time data streams. The epistemological basis of the dynamic network paradigm is that "mixed-method triangulation is needed to capture the emergent characteristics of the education system" (Hanushek, 2023). Mixed Methods Research (MMR) is a methodology rather than a mere method: between 2010 and 2014, the number of mixed-methods papers grew at an average annual rate of 12% (Creswell, 2014).
In the educational application of complexity science, big data reveals that school effectiveness is a function of network resilience (Johnes, 2020), with an empirical basis that relies on intelligent recording and virtual simulation systems to collect multimodal whole-process data. It can therefore be argued that educational economics in the era of digital intelligence has entered a third paradigm:

educational output = Φ(W ⋅ A)
Here W is the network weight matrix and A is the node activity vector. The education value model Φ(W ⋅ A) originates from Generalized Linear Models (GLM) in econometrics: McCullagh & Nelder (1989) first systematically constructed the mapping between linear predictors and response variables. Femi et al. (2025) recently proposed the Linear Representation Transferability (LRT) hypothesis, namely that an affine transformation exists between the representation spaces of different models, marking a theoretical breakthrough. Hanushek (2003) modeled education output as a linear combination of inputs (such as teacher-student ratio, education expenditure, and teacher qualifications), i.e., education output Y = Σᵢ wᵢxᵢ, where wᵢ is the input weight and xᵢ is the input variable. Woessmann (2023) proposed a formal neural network model of educational value: educational output = Φ(W ⋅ A), where W is the spatial projection of educational features (the weight matrix), A is individual latent ability (the node activity vector), and Φ is a nonlinear activation function (mapping to educational benefit). The study verified that Φ(W ⋅ A) can substitute for traditional production functions. Its mathematical foundation is the transfer theory expressed by neural networks, while it also integrates the econometric tradition of educational production functions, achieving a neural network paradigm shift. The policy implication of the model is the role of AI technology (such as personalized learning systems) in optimizing the education production function, meaning that education policies are shifting from being "resource-input oriented" to "capability-activation oriented".
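The Φ(W ⋅ A) formulation can be sketched directly. In the snippet below, the weight matrix W, the ability vector A, and the choice of a sigmoid as the activation Φ are all hypothetical illustrative values, not estimates from any study; the point is only the structural contrast with the linear combination Y = Σᵢ wᵢxᵢ.

```python
# Sketch of the "educational output = Φ(W · A)" production function:
# W projects educational features (weight matrix), A is a vector of node
# activities (individual latent abilities), and Φ is a nonlinear
# activation mapping to educational benefit. All numbers are invented.
import math

def phi(z):
    """Nonlinear activation Φ (a sigmoid, chosen here for illustration)."""
    return 1 / (1 + math.exp(-z))

def education_output(W, A):
    """Φ(W · A): one bounded output per row (channel) of the weight matrix W."""
    return [phi(sum(w * a for w, a in zip(row, A))) for row in W]

A = [0.8, 0.5, 0.9]       # hypothetical latent ability vector (3 traits)
W = [[0.6, 0.2, 0.1],     # hypothetical weights: teaching-quality channel
     [0.1, 0.7, 0.3]]     # hypothetical weights: resource-access channel
print(education_output(W, A))
```

Unlike the linear combination, the output here is a bounded, nonlinear function of the inputs, which is what allows the model to represent saturation and interaction effects that a linear production function cannot.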
Social simulation models (especially ABM) are the core tools of hybrid methods: they simulate the decision-making interactions of individuals (students, parents, schools) on a computer and can predict the overall impact of policies on the education system. Using ABM to simulate the effect of the US education voucher policy, one study found that after the policy was implemented, the enrollment rate of high-quality schools increased by 20%, but the enrollment rate of disadvantaged students decreased by 10% (owing to their lack of information and transportation resources), providing a basis for policy adjustments such as information support for disadvantaged students (Lee et al., 2023). The iconic action formula for decision-makers in digital educational economics is: educational economic benefit = (human capital appreciation × technological adaptability) / institutional friction cost. Institutional friction cost (a concept derived from North, 1990) arises when formal rules lag behind technological change, and it can be seen as the sum of the cost of maintaining inefficient institutions and the cost of resisting institutional change.
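A toy agent-based sketch in the spirit of the voucher study cited above is given below. Every probability in it is invented for illustration; this is not a replication of Lee et al. (2023). The mechanism it demonstrates is the one the text describes: a voucher removes the affordability barrier for everyone, but because disadvantaged agents still face information and transport barriers, the advantaged group captures more of the gain.

```python
# Toy agent-based sketch of a school-voucher policy: an agent enrolls in a
# high-quality school only if it knows about it, can reach it, and can
# afford it. All probabilities are invented for illustration.
import random

random.seed(1)

def simulate(voucher, n=10_000):
    counts = {True: [0, 0], False: [0, 0]}   # advantaged? -> [enrolled, total]
    for _ in range(n):
        rich = random.random() < 0.5                          # family status
        info = random.random() < (0.9 if rich else 0.4)       # knows of school
        transport = random.random() < (0.9 if rich else 0.5)  # can reach it
        afford = voucher or (random.random() < (0.7 if rich else 0.2))
        counts[rich][0] += info and transport and afford
        counts[rich][1] += 1
    return counts[True][0] / counts[True][1], counts[False][0] / counts[False][1]

adv0, dis0 = simulate(voucher=False)
adv1, dis1 = simulate(voucher=True)
print(adv1 > adv0 and dis1 > dis0)     # vouchers raise enrollment for both...
print((adv1 - adv0) > (dis1 - dis0))   # ...but the advantaged group gains more
```

This is the kind of "emergent outcome" ABM is used to surface: an aggregate-level inequality effect that follows from individual-level access rules, suggesting complementary interventions (information and transport support) alongside the voucher itself.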
In summary, education is a typical complex system (individual interaction → system emergence): complex systems theory can simulate the amplification of educational inequality (family background → social network → educational gap), and ABM, a commonly used social simulation tool in educational economics, simulates individual behavior (such as students' school-choice rules) and predicts the emergent outcomes of policies (such as the negative impact of voucher policies on disadvantaged students). The advantage of hybrid methods is that they handle complex problems; their disadvantage is that they require large amounts of data and complex models; nevertheless, they remain the future trend. Global education research bodies in the European Union, the United States, and China have already adopted mixed methods as core tools for policy simulation, displacing traditional static prediction.
(3) Shared Value: from Economic Rationality to Humanistic Value
The value of a discipline adjusts dynamically with paradigm shifts, and paradigms undergo revolutionary change as disciplines evolve. Shared values provide evaluation criteria for the scientific community (such as "what counts as an important discovery"), directly affecting theoretical choices. The core of the value component of a paradigm is the criterion by which the scientific community judges a paradigm's utility, rather than the utility itself; it is an implicit driver of research breakthroughs (paradigm revolutions), not their result. The subjectivity of value leads to the "incommensurability" of paradigms, which is the core of scientific debate. The shared value of the traditional educational-economic paradigm is an efficiency-maximization decision framework centered on rate-of-return calculation, taking efficiency maximization as the strategic goal and rate-of-return calculation as the quantitative tool, reflecting a goal-means decision logic. Schultz's (1961) Investment in Human Capital, which laid the foundation of human capital theory, was the first systematic demonstration of the economic value of education investment, breaking the simple consumption view. Becker (1964) constructed an individual education-return model and pointed out that spillover effects drive overall productivity improvement, forming a shared value foundation. Arrow (1973) refuted the notion that education is merely a signal for selecting talent and emphasized that the educational process itself creates human capital and generates positive externalities for society. Stiglitz (1999) incorporated education into the global public goods framework, highlighting the shared human value of knowledge sharing for sustainable development. Carnoy (1995) analyzed class stratification in educational expansion, emphasizing that fair distribution is a prerequisite for achieving shared social benefits.
McMahon (2009) argues that education brings implicit values such as improved health and increased civic participation; for example, a 10% increase in college enrollment rates is associated with a 6.3% decrease in violent crime rates, underscoring education's role in reducing social violence. Psacharopoulos & Patrinos (2004), drawing on data from 111 countries/regions worldwide (1980-2000s), confirmed that the social return on education (especially primary education) is significantly higher than private returns, providing a core basis for public investment and suggesting that expanding primary education is the most cost-effective path to reducing the Gini coefficient. Moretti (2004) innovated in instrumental variable (IV) design, validated spatial spillover effects, and addressed endogeneity between human capital agglomeration and productivity. Empirically, each 1% increase in the proportion of college graduates raises regional wage levels by 1-2%, evidencing the geographical spillover of knowledge diffusion; the quantitative conclusion of the spatial spillover research is that each 1% increase in the proportion of urban college graduates raises productivity (manufacturing data) by 0.6-1.2%. McMahon (2009) further quantified the non-market benefits of education for democratic participation, health improvement, and crime reduction, expanding the connotation of shared values.
The DIE requires reconstructing human capabilities beyond economic productivity indicators, and digital education must go beyond instrumental economic rationality. In the AI- and data-driven education ecosystem, the anchor points of shared value are algorithmic transparency, resource accessibility, and environmental tolerance. We need to break through techno-centrism and return to the essence of education. The basic value principles recognized by multiple actors (government, institutions, enterprises, learners) must balance technological efficiency, social equity, and long-term sustainability. This should take the form of a triple governance framework of algorithmic ethics, educational equity, and sustainable development, with algorithmic ethics as the core of technological governance, educational equity as the social goal, and sustainable development as the long-term value orientation. The three form a progressive, mutually driving relationship, reflecting the impact of technology on equity and anchoring the sustainability goals of education. Using ethical frameworks to constrain educational algorithmic decisions and to prevent data abuse and the solidification of bias is the core of technological governance under algorithmic ethics; using digital technology to break through regional and economic barriers achieves the redistribution of resource accessibility and a digital reconstruction of educational equity; and optimizing educational resource allocation through digital platforms reduces the system's entropy increase and forms a sustainability-oriented resource cycle.
Reich & Ito (2017) demonstrated through cross-regional case studies that technology popularization must be matched with resource allocation mechanisms, or it will exacerbate the digital divide. Selwyn (2020) criticized the implicit bias of data algorithms in AI educational tools, such as racial, class, and regional discrimination; he pointed out that algorithmic decision-making often reinforces educational inequality, rejected technology centrism, emphasized human subjectivity in education, and proposed that humanistic ethics should lead technology design. Brown & Lauder (2020) criticized the excessive pursuit of efficiency in traditional human capital theory and called for reconstructing a people-centered educational value evaluation system in the digital age, emphasizing the value of emotional capability and ethical decision-making. Sterling (2021) argued that digital curricula need to embed environmental and socio-economic sustainability indicators, beyond short-term skills training. Williamson (2021) revealed how data governance affects fair resource allocation through algorithmic decisions, ultimately shaping a sustainable education ecology. Cavoukian & Jonas (2023) proposed a preventive ethical framework for public-sector algorithmic audits, requiring government-led technology compliance reviews. The OECD (2023), based on data from 30 countries worldwide, showed that policymakers are shifting from "skill instrumentalization" to being "learner well-being driven" and proposed a Human Centrality Index for digital education. Selwyn et al. (2024) criticized the erosion of teacher and student subjectivity by algorithmic management and called for reconstructing trust and an ethics of care in education through the "Slow EdTech" movement.
They proposed using moral values (such as fairness, care, and respect) rather than economic rationality (such as cost-benefit and efficiency) to guide the design and use of EdTech, prioritizing values, subject participation, and accountability.
In short, technology optimizes resource-allocation efficiency but may exacerbate regional inequality, so policy intervention must be emphasized. Educational economics needs to shift from human-capital ROI to the assessment of comprehensive human development and sustainable well-being, and it must cultivate hybrid capabilities that integrate technical proficiency with ethical reasoning, namely data-critical thinking (the technical-critical dimension) and digital humanistic literacy (the value-ethical dimension).
(4) Academic Community: from Single Academic Loop to Cross-disciplinary Team
The academic community is the carrier of paradigms and plays a role in maintaining them; the rupture of its consensus can trigger a paradigm crisis. Academic consensus is, in essence, the tacit understanding of a community of practice, which needs to be verified along three dimensions: bibliometric analysis, institutional analysis, and anthropological observation. Paradigmatic papers are necessary but not sufficient evidence, and the collapse of consensus during a revolutionary period often manifests first as the fragmentation of citation networks rather than as methodological change. The academic community of the traditional research paradigm is dominated by experts in a single disciplinary field, whereas in the era of digitalization it is an interdisciplinary alliance of data scientists, neuroscientists, and educational economists. The core transformation is from an "academic closed loop" to "industry collaboration".
In the pre-digital era, the research ecosystem was bounded by disciplines, and universities' curriculum design, degree certification, and academic standard setting were controlled by educators (scholars and administrators). Educational resources were allocated on academic value rather than economic returns. Resource allocation led by traditional educators neglects cost-effectiveness, and in an academic community with peer review as its core mechanism, subject experts hold absolute discourse power in policy research and dominate research agendas and resource allocation. The power structure hides behind consensus, places theoretical models above applied innovation, and steers the direction of knowledge production through resource allocation (topics, journal space) and academic evaluation (titles, awards). Core journals, academic societies, and review systems form an "invisible college" that excludes interdisciplinary perspectives, and discipline leaders maintain the dominant paradigm through peer review. Traditional economic models rely excessively on rationality assumptions, resulting in income maximization rather than efficiency optimization. Economists question educators' role in resource misallocation and call for performance-based funding, while educators defend knowledge as a public good and resist the market-driven erosion of academic values. Decision-making relies on a single discipline, ignoring the nonlinear interactions and social network effects of the real world; policy design that depends almost entirely on economic models can fail. The traditional educator camp insists that academic guilds lead quality certification, while reformists promote micro-level learning effectiveness as the core KPI for education funding. When old paradigms cannot explain new phenomena, decision-making that relies on a single discipline can lead to crises, triggering academic revolutions and paradigm shifts.
AI technology is profoundly affecting human language habits and cultural evolution; in particular, generative AI represented by ChatGPT is reshaping human expression in return, forming a so-called "machine culture". AI has shifted from a passive tool to an active cultural participant, reconstructing the cultural production chain through algorithmic recommendation and content generation (such as GPT models). Based on NSF data from 2018 to 2022, interdisciplinary projects account for approximately 58% (Brinkmann et al., 2023). Composite talents (educational economics + digital technology) are the core driving force of research and practice. Interdisciplinary teams can draw on artificial intelligence, neural data, and educational-economics expertise to address key pain points in human capital optimization; in vocational training and educational equity especially, new paradigms are emerging. For example, an educational-economic cost-benefit model based on AI analysis of fMRI data can dynamically adjust training incentive strategies. UNESCO (2024) proposed developing the cultural and creative industries to promote a green economy driven by culture and social inclusiveness, that is, cultural-diversity policies protecting the rights and interests of vulnerable groups (such as indigenous cultural heritage), offering policymakers recommendations for a coordinated "culture-environment-economy" development strategy.
(5) Exemplars: from Single Model to Multimodal Verification
The role of exemplars differs from that of abstract rules: exemplars transmit paradigms through practice, making abstract theories concrete. Kuhn emphasized that the core of a paradigm shift lies in the rupture and reconstruction of consensus within the scientific community, and exemplars are the key carriers of consensus building. As an applied discipline, educational economics sees classic cases, such as cost-benefit analysis models and educational production functions, directly shape the adoption and iteration of its tool paradigms; exemplars therefore need to be evaluated independently as a paradigm element. By monitoring the replacement of exemplars, we can give early warning of the critical point at which educational economics shifts from "reparative evolution" to "revolutionary transformation", in keeping with the dynamic nature of Kuhn's paradigms.
Educational economics has long seen opposition between two methodological camps: the positivist quantitative paradigm, which advocates randomized experiments and emphasizes causal inference and econometric models, and the humanistic qualitative paradigm, which focuses on situational understanding and deep interpretation and criticizes the former's neglect of subjectivity. The root cause is a conflict of philosophical foundations: positivism holds that educational phenomena can be quantified (such as measuring the rate of return to education), whereas humanism holds that education has irreducible socio-cultural attributes (such as the effectiveness of classroom interaction) and argues that instrumental variable methods, even where they solve endogeneity problems, create a "policy disconnect". Taking econometric models as the core and relying on natural experiments, instrumental variables (IV), regression discontinuity design (RDD), and similar methods to verify causal relationships between educational variables simplifies education into an input-output function, ignores soft variables such as school culture, risks oversimplification, and creates a technical black box: the model is significant, but educational managers cannot understand it. The traditional human capital model (such as Mincer's log-wage model) is a static equation, ln(wage) = α + β₁·schooling + β₂·experience + ε, quantifying the return on investment in education as a linear input. Owing to omitted-variable bias, it has shown particularly low replicability, with 78% of 18 experimental economics studies failing to replicate (Camerer et al., 2016). Although macro-level historical trends remain valid (Heckman, 2020), the model ignores non-cognitive skills, network effects, and digital literacy premiums, and it assumes a homogeneous labor market, which is outdated in the gig economy (Acemoglu & Autor, 2022).
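The static character of the Mincer specification can be seen in a minimal simulation. Below, wages are generated from a simplified one-regressor Mincer equation with an invented "true" return to schooling, and β₁ is recovered by closed-form OLS; this is exactly the kind of static, linear estimate the text argues the digital era has outgrown.

```python
# Minimal sketch of the static Mincer wage equation:
#   ln(wage) = α + β1·schooling + ε
# Data are simulated with invented coefficients, and β1 is recovered by
# closed-form OLS. This is an illustration, not an empirical estimate.
import random

random.seed(42)

ALPHA, BETA1 = 1.5, 0.08   # invented "true" return to schooling: 8% per year
data = []
for _ in range(5000):
    s = random.randint(6, 20)                          # years of schooling
    ln_wage = ALPHA + BETA1 * s + random.gauss(0, 0.3) # log wage with noise
    data.append((s, ln_wage))

# Simple-regression OLS slope: cov(s, ln_wage) / var(s).
n = len(data)
mean_s = sum(s for s, _ in data) / n
mean_w = sum(w for _, w in data) / n
beta1_hat = (sum((s - mean_s) * (w - mean_w) for s, w in data)
             / sum((s - mean_s) ** 2 for s, _ in data))
print(round(beta1_hat, 3))  # ≈ 0.08: each extra year raises log wage ~8%
```

The estimate is a single time-invariant coefficient; nothing in the specification can register non-cognitive skills, network effects, or a digital-literacy premium, which is precisely the omitted-variable critique raised above.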
From the perspective of paradigm shift, educational economics in the digital era is evolving from single econometric models (such as the Mincer wage equation) toward multimodal validation frameworks (such as PISA × Ed-Metaverse experiments). Traditional educational economics operated in a single mode, with theoretical deduction as the main approach, and its models are declining in validity; for example, the Mincer equation fails in the digital gig economy, where the value of skills exceeds that of years of schooling. The rise of multimodal validation supports real-time heterogeneous data streams such as the education metaverse, including cross-modal case validation of PISA × biometric recognition, though it carries risks of privacy erosion and algorithmic bias. Future research can integrate randomized controlled trials, simulations, and real-time analysis, embedding the Mincer equation in panoramic data streams. The core advantage of multimodal frameworks lies in cross-modal collaboration and information complementarity, achieving more comprehensive perception, understanding, and generation by integrating data in multiple modalities such as text, images, audio, and video.
Radford et al. (2021) pioneered cross-modal contrastive learning (text-image embedding), enabling zero-shot learning, now widely used in image retrieval, content moderation, and AI-generated content (AIGC). In the education economy, the main breakthrough cases are, first, the paradigm reform of measuring the return to education, which breaks through the traditional Mincer equation (a single income variable) and integrates multimodal output indicators such as skill premium, digital literacy, and social capital: Deming & Noray (2020) compared the single-income model with a multimodal indicator model (income + career resilience + innovation output) and showed that the latter increased explanatory power by 37%. Second is the application of complex adaptive systems (CAS) theory, which treats the educational economic system as an adaptive system formed by the dynamic interaction of agents such as learners, institutions, technology, and policies, whose behavior must be verified through multidimensional data; traditional single econometric models (such as OLS) cannot capture nonlinear interactions and must be replaced by agent-based modeling (ABM), which allows the simulation of heterogeneous agents' decisions (e.g., students' course-selection behavior as influenced by algorithmic recommendation). Epstein (2019) proposed the "multimodal validation triangle" framework (ABM + neural experiments + field surveys) to validate technology diffusion paths through education policy simulation. Third is the proposal of a mixed validation framework for policy intervention, under which policy effectiveness evaluation must integrate a multimodal evidence chain of experiments (RCT), simulations (SD), and natural experiments (IV).
Murnane & Willett (2021) discussed the application of regression discontinuity design (RDD) in education policy evaluation, using "US Federal Education Subsidy Allocation" as a case study, emphasizing the causal-inference trap of threshold selection, and proposing the "Policy Verification Cube" (experimental evidence + computational simulation + historical counterfactual).
Additionally, behavioral data tracking (eye movements, operation paths) can quantify learning outcomes in real time within immersive virtual learning environments built with extended reality (XR), AI, and blockchain. Williams & Clark (2023) integrated administrative, behavioral, and immersive data (PISA scores × VR learning analytics) to capture the complexity of education, proposing a three-dimensional model of "administrative data × behavioral logs × immersive indicators" to address the omitted-variable problem in traditional assessment. The paradigm shift in validation is achieved through multi-method triangulation, transitioning from statistical significance (p-values) to causal robustness, combining the experimental (randomized controlled trials in VR environments, such as testing growth-mindset interventions) with the computational (agent-based models simulating classroom inequality) (Liu et al., 2024). Its advantage is revealing hidden variables (e.g., peer influence accounts for 31% of learning variance; Zimmerman, 2003); its disadvantages are data privacy risks and algorithmic bias, as in ADHD diagnosis, where reliance on historical diagnostic data can amplify misdiagnosis rates among minority groups (Gianfrancesco, 2018).