1. Introduction
The importance of metaverses in educational contexts
Metaverses integrate technologies such as virtual reality (VR), augmented reality (AR), and mixed reality (MR), creating persistent, interactive, multisensory virtual environments that expand the limits of human experience. Their potential to transform areas such as education, mental health, and social-emotional support has sparked growing academic and commercial interest [
1]. The metaverse, as well as its associated technologies, is still under development. Even so, its application in the field of education has been the subject of multiple studies [
1,
2]. A systematic review and bibliometric analysis documents the recent explosion of research on the use of virtual reality in education [
3], highlighting the leading role of China and the United States, the diversity of topics under study, and the capacity of VR to improve or reinvent teaching processes. This trend is corroborated by more recent systematic reviews [
4], which specifically address the growing body of evidence related to the integration of immersive technologies to improve learning outcomes in primary education.
Owing to its characteristics, the metaverse contributes to education differently from traditional instruction, proving effective at placing students in authentic contexts. Its learning objective goes beyond a conventional course or learning activity: it seeks to give students a different kind of experience, an environment in which they can consolidate the development of skills through an authentic training process [
5].
The metaverse has positioned itself as an immersive digital space that combines augmented reality (AR), virtual reality (VR), artificial intelligence, and collaborative 3D environments. Its application in education is transforming the ways of teaching, learning, and connecting with knowledge. Its main benefits and importance in education include the following: (a) providing immersive and experiential learning, allowing students to have more meaningful learning experiences by interacting with realistic virtual environments, facilitating the understanding of abstract or complex content by experiencing it firsthand, and promoting discovery learning and the simulation of real scenarios without the risks or costs involved in the physical world [
6]; (b) providing a personalized and adaptable learning path, i.e., the metaverse offers the possibility of adjusting the difficulty, pace, and resources according to the needs of each student, promoting inclusive education, as experiences can be designed that are accessible to people with different abilities and learning styles [
7,
8]; (c) promoting collaboration and networking, since virtual spaces allow synchronous and asynchronous interactions between students and teachers from different parts of the world and enhance real-time collaborative work, with the possibility of building, solving problems, or designing projects together in the same digital environment, helping to develop networking, digital communication, and global citizenship [
9]; (d) facilitating didactic-pedagogical innovation, given that teachers can explore new active methodologies (gamification, project-based learning, role-playing, situated learning), opening up the possibility of creating virtual laboratories, digital campuses, immersive museums, and interactive classrooms [
10]; (e) fostering the development of digital and 21st-century skills, since, by interacting in metaverse environments, students develop key skills such as critical thinking, creativity, problem solving, digital collaboration, and technological literacy, thus preparing future professionals to perform in contexts of digital transformation, typical of the knowledge society and the 4.0 economy [
11]; (f) promoting the democratization of access to knowledge and educational equity, as this technology makes it easier for people in remote geographical locations or with infrastructure limitations to access high-level educational experiences, reducing gaps in access to physical laboratories, cultural experiences, or professional practices by replicating them in a virtual environment [
12].
In summary, the metaverse not only expands the boundaries of the traditional classroom but also redefines the way people learn and teach [
13,
14]. Its importance lies in the possibility of offering more immersive, inclusive, collaborative, and innovative experiences that develop key skills for the challenges of today's and tomorrow's society, transforming education into a more motivating experience that increases student engagement and reduces dropout rates [
15,
16,
17].
Virtual Reality, Virtual Worlds, and the Metaverse: Conceptual Distinctions and Their Integration
Towards the middle of the 20th century, the so-called Third Industrial Revolution or Digital Revolution led to the emergence of spaces where individuals can receive multimedia signals that enable new forms of symbolic interaction. These forms expand the boundaries of physical experience and, in their most sophisticated manifestations, allow experiences previously restricted to the extrasensory realm. This transformation made it possible, in its initial stage, to create digitally generated environments, known as virtual worlds, thus laying the foundations for the development of virtual reality (VR) [
18,
19]. Subsequently, with augmented reality, digital objects are superimposed on physical world scenarios in real time [
20], and when the characteristics of interaction with physical and digital objects are indistinguishable, it is considered mixed reality. The space, still theoretical, where virtual worlds, augmented reality interaction spaces, and mixed reality spaces—extended reality—converge is known as the metaverse [
21]. It is characterized by being immersive, with few limitations or blurred boundaries, persistent, decentralized, and oriented toward enriching social experiences [
10,
22]. Within the metaverse, there may be virtual worlds entirely dedicated to education, known as Virtual World Learning Environments (VWLE).
To realize the metaverse, a range of technologies has been developed: devices that reproduce fully digital scenes on a screen very close to the eyes; translucent devices that keep the physical environment visible while introducing digital elements; suits with sensors and actuators; three-dimensional immersion scenarios; and software systems that enable the reification of virtual worlds, augmented reality, and mixed reality in collaborative environments, connecting users, most of whom are represented by digital entities called avatars, through networks, mainly the Internet [
23,
24]. VR was conceived as a technology capable of transforming computers from simple processing tools into authentic generators of unrestricted realities that promote the creation of arbitrary environments for individualized and affective instruction [
25]. Dede [
61] proposed that virtual worlds represent a new vision in which students no longer merely interact with a phenomenon but instead shape the very nature of how they experience their physical and social context.
The global rise and expansion of the metaverse after the pandemic
In 2021, leveraging the effects of the pandemic, Meta (then Facebook) presented the community with a utopian vision of a unified, immersive metaverse deployed in the cloud, where millions of users would connect to have social experiences. Owing to Facebook's high market penetration, the idea drew the spotlight of the mass media, and over the past three years, major conglomerates such as Meta, Alphabet, Microsoft, NVIDIA, Epic Games, and Roblox have invested significantly to create or improve the necessary hardware and software infrastructure. During those years, a boom began in which digital objects were traded at high prices in the form of Non-Fungible Tokens (NFTs) and, perhaps because of the novelty of the experiment, digital land began to be purchased by thousands of people who, without yet understanding what it meant, wanted to secure a place in this new Noah's Ark [
26,
27].
Then came the explosion of generative artificial intelligence. The spotlight and investments shifted to another field, and the metaverse, lacking real traction and facing social, economic, and technological complications, slowed down, without losing sight of its future vision [
28]. Like certain cities in decline when the industry that supported them closes, the platforms that emerged during the boom years now look abandoned, failing to retain the millions of promised users. However, there was transformation. Online game creators strengthened their platforms through the constant support of the gaming community, hundreds of applications continue to be developed and are available in software stores, VR devices have improved, and many researchers, seeing the potential impact of the metaverse, are focusing their efforts on capitalizing on it in different areas such as education [
29]. It is clear that technology, the economy, and society are not yet ready to support the metaverse. Currently, the industry faces significant challenges, including severe fragmentation and a lack of interoperability, which is compounded by a proliferation of non-standardized devices, high technological volatility, software products that often fail to progress beyond the prototype stage, unfulfilled commercial promises, and suggestive but not definitive impacts on education [
30,
31].
Purpose and objective of this study
Despite its expansion, the metaverse remains an evolving concept. Research shows that the metaverse technology platform, far from being a unified environment, is a collection of hardware and software elements that, despite the efforts of organizations such as the Metaverse Standards Forum, lacks standards that define the characteristics of its components, facilitate interoperability, and provide a fully immersive experience.
These limitations not only hinder adoption but also impede implementation in educational contexts. One of the main challenges today lies in the proliferation of metaverse platforms with diverse characteristics, architectures, and purposes, which complicates the task of making informed technology choices. Given this diversity, a systematic approach is required to compare platforms based on functional and non-functional attributes relevant to specific application contexts. Furthermore, although proposals such as OpenUSD (an open-source standard whose code is publicly available, allowing for collaborative modification and distribution) exist, the ecosystem is still closed, characterized by proprietary protocols that lead to a lack of traction in achieving a metaverse with an acceptable degree of functionality and reliability.
This study proposes and validates a maturity level assessment model (NM1–NM5) for 35 attributes of virtual world platforms. The objective is to estimate and compare the maturity status of 23 platforms in seven categories (technical, identity, content/economy, interoperability, governance, literacy/support, and accessibility/inclusion), providing: (i) operational rules for rating by attribute; (ii) formal inter-rater reliability (weighted κ, ICC); (iii) a reproducible composite metric with sensitivity analysis; and (iv) a traceable statistical pipeline (PCA/k-means with explicit selection criteria). This study had two objectives:
1. To propose a quality assessment model based on (a) the ISO/IEC 25000 family of standards, (b) a maturity model according to Weinberger and Gross [
32], and (c) the Metagon typology of metaverses by Schöbel et al. [
33].
2. To analyze a set of 23 metaverse platforms from Schultz's [
34] list, using the proposed quality assessment model.
2. Materials and Methods
This study was developed using a mixed research design, organized into two sequential phases that respond to the two main objectives. The first phase focused on building a quality assessment model for metaverses, using the Design Science Research (DSR) approach. The second phase consisted of the empirical application of this model to a set of 23 metaverse platforms to validate its usefulness and generate a comparative analysis.
Phase 1. Design and Development of the Hybrid Quality Assessment Model
To develop the central artifact of this research—the evaluation model—we adopted an approach based on Design Science Research (DSR), as its objective is the creation and evaluation of innovative artifacts that solve practical problems while maintaining a high level of scientific rigor [
35]. The process was divided into the five stages described below.
Problem Identification and Motivation
The starting point was the identification of a significant gap in the literature: the absence of a standardized, multidimensional framework for evaluating the quality of metaverse platforms. While several models exist for evaluating the quality of traditional software and conceptual frameworks for describing metaverses, no integrated model has been proposed that combines the technical perspective of product quality with the dimensions of maturity and typological characterization specific to these immersive environments. This gap makes it difficult for developers, investors, and end users to make informed decisions.
Review and Selection of Foundational Artifacts
A systematic review of the literature was conducted to identify theoretical frameworks that could serve as pillars for the new model. Based on this, three frameworks were selected for their relevance and complementarity:
ISO/IEC 25000 family of standards (SQuARE): This was adopted as the base framework because it is the international standard for software product quality assessment. SQuARE provides a hierarchical, robust, and validated quality model with characteristics (e.g., functional adequacy, usability, reliability, security) and sub-characteristics that offer a solid and generic basis for evaluation (ISO/IEC, 2011).
Metaverse Maturity Model [
32]: To incorporate the evolutionary dimension, this model was integrated, which allows platforms to be classified according to their degree of development towards a fully realized metaverse. This approach considers that quality attributes vary depending on the maturity of the platform.
Typology for Characterizing Metaverses [
33]: In order to contextualize the assessment, this typology was used, which classifies metaverses according to their purpose (e.g., social, gaming, business) and their underlying technological characteristics. This allows the assessment to be context-sensitive, recognizing that not all platforms pursue the same goals nor should they be judged by the same priority criteria.
Hybrid Artifact Design
At this stage, the three selected frameworks were synthesized. The Hybrid Quality Assessment Model for Metaverses was designed using a conceptual relationship analysis, aligning the characteristics of SQuARE with the maturity levels and typologies of the metaverse. The result was a multidimensional artifact that defines:
Evaluation Processes: Structured steps to guide the evaluator from data collection to the issuance of results.
Comparison Matrices: Structures that cross-reference quality characteristics with metaverse typologies to weigh the relevance of each attribute according to the context.
Quality Criteria Matrices: A unified set of characteristics, sub-characteristics, and quality indicators specific to metaverses at each maturity level.
Measurement and Analysis Mechanisms: Rating scales and methods for calculating aggregate and partial quality scores.
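To make the last mechanism concrete, here is an illustrative sketch of how partial (per-category) and aggregate quality scores can be derived from attribute ratings. The attribute subset, ratings, and category weights are hypothetical; the study's actual aggregation uses the radar-area method described in Section 3.

```python
import pandas as pd

# Hypothetical attribute ratings (NM1-NM5) for one platform.
ratings = pd.DataFrame({
    "attribute": ["AT1", "AT2", "AT10", "AT11", "AT24"],
    "category": ["Technical", "Technical", "Identity", "Identity",
                 "Interoperability"],
    "maturity": [4, 3, 2, 5, 3],
})
# Hypothetical weights a comparison matrix could assign for a given typology.
weights = {"Technical": 0.5, "Identity": 0.3, "Interoperability": 0.2}

partial = ratings.groupby("category")["maturity"].mean()  # partial scores
aggregate = sum(partial[c] * w for c, w in weights.items())
print(partial.to_string())
print(f"aggregate quality score: {aggregate:.2f} / 5")
```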
Expert Implementation and Validation of the Model
Once designed, the model was implemented in a set of templates and instrumental guides. To ensure its content validity and applicability, it underwent a validation process by expert judgment. A panel of three experts was formed (two software engineers with more than 10 years of experience in immersive software development and an expert in educational technology with research and publications in the field). Using a structured questionnaire (see
Appendix A), the experts evaluated the relevance, consistency, clarity, and applicability of each component of the model. The results were analyzed using Aiken's V coefficient, supplemented by a qualitative assessment to refine and consolidate the final version of the artifact.
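For reference, a minimal sketch of the Aiken's V computation follows. The judge ratings are hypothetical; the 1-4 scale matches the relevance scale of the questionnaire in Appendix A.

```python
import numpy as np

def aikens_v(ratings, lo=1, hi=4):
    """Aiken's V for one item: V = sum(r_i - lo) / (n * (hi - lo)).

    ratings are the scores of the n judges on a lo..hi scale; V ranges
    from 0 (all judges give the lowest score) to 1 (all give the highest).
    """
    r = np.asarray(ratings, dtype=float)
    return float((r - lo).sum() / (r.size * (hi - lo)))

# Hypothetical example: three judges rate an item's relevance as 4, 3, 4.
print(f"V = {aikens_v([4, 3, 4]):.2f}")  # V = 0.89
```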
Evaluation and Refinement of the Method
Finally, a pilot test of the model was carried out by applying the evaluation process to a set of two metaverses. This evaluation cycle made it possible to verify the internal consistency of the model, the feasibility of the data collection process, and the usefulness of the results generated, making minor adjustments to the wording of some indicators and the evaluation protocol.
Phase 2. Empirical Application and Comparative Analysis
To address the second objective and demonstrate the practical usefulness of the model, it was systematically applied to a broad sample of platforms.
Case Selection
A set of 23 metaverse platforms was evaluated based on Schultz's sectoral analysis [
34]. This sample was selected for its relevance and representativeness, covering platforms across distinct levels of maturity (from emerging to established), types (gaming, social, business, content creation), and technological architectures (e.g., systems integrating Virtual Reality (VR), Augmented Reality (AR), Extended Reality (XR), and Mixed Reality (MR)).
Data Collection
A multi-source secondary data collection protocol was established to obtain the information necessary for the evaluation. Sources included:
● Official documentation and white papers from the platforms.
● Technical data sheets and code repositories (when available).
● Academic publications and market reports analyzing these platforms.
● Technical reviews from specialized media and user community analyses.
The evaluation of each metaverse was conducted independently by three researchers using the model templates. To ensure inter-rater reliability, discrepancies were resolved through discussion and consensus.
3. Results
3.1. Results of Objective 1
An evaluation model was defined with the components shown in
Figure 1.
Figure 1.
Components of the evaluation model. Source: Own elaboration.
The Assessment Process component defines a general process for assessment in accordance with ISO/IEC 25040 (see Figure 2).
Figure 2.
Metaverse Platform Evaluation Process. Source: Own elaboration, based on the ISO/IEC 25040 standard.
The Metaverse Evaluation Purpose Statement reads as follows: The metaverse can be useful for deploying socio-emotional support experiences for teachers. However, given the incipient standardization, the proliferation of platforms, and the loss of momentum that came with the advent of artificial intelligence, it is necessary to assess the functional and non-functional characteristics of the various platforms in order to answer the following question: What is the quality level of metaverse platforms considering the characteristics that are relevant for their use in socio-emotional support processes for teachers?
The Stakeholder and Needs matrices identify researchers in ICT integration in specific domains as stakeholders. Their need for information is related to the comprehensive characterization of metaverse platforms.
On the other hand, to define the Category Matrix component, a hybridization was used between the Metagon typology [
33] for the characterization of metaverses, the metaverse maturity model proposed by Weinberger and Gross [
32], the taxonomy of metaverse characteristics by Sadeghi-Niaraki et al. [
36], and the Systems and software Quality Requirements and Evaluation (SQuaRE) family of standards.
Table 1.
Quality Categories Matrix.
| Category | Attribute | ID | ISO/IEC 25010 |
| --- | --- | --- | --- |
| Technical Aspects | Immersive Realism | AT1 | Usability (User Interface Aesthetics, Accessibility, Operability), Functional Suitability, Performance Efficiency |
| | Spatiotemporal Management | AT2 | Functional Suitability, Usability, Performance Efficiency |
| | Dynamic Interactivity | AT3 | Usability (Operability, Error Protection, Learnability), Functional Suitability, Performance Efficiency |
| | Environment Persistence | AT4 | Reliability (Availability, Recoverability), Functional Suitability |
| | Real-Time Experience | AT5 | Performance Efficiency (Time-behaviour), Usability, Reliability (Availability) |
| | System Scalability | AT6 | Performance Efficiency (Capacity, Resource utilization), Reliability |
| | Technological Convergence | AT7 | Compatibility (Co-existence, Interoperability), Functional Suitability |
| | Infrastructural Evolution | AT8 | Maintainability (Modifiability, Analysability), Portability (Adaptability), Performance Efficiency (Capacity, Resource utilization) |
| | License | AT9 | Maintainability (Modifiability, Analysability), Portability (Installability, Replaceability, Scalability) |
| Identity and Representation | Persona Representation | AT10 | Usability (User Interface Aesthetics, Appropriateness Recognizability), Functional Suitability, Security (Authenticity) |
| | User Authentication and Anonymity Options | AT11 | Security (Confidentiality, Authenticity, Accountability), Usability (Operability, Accessibility) |
| | Avatar Embodiment and Interaction | AT12 | Usability (Operability, User Interface Aesthetics, Accessibility), Functional Suitability, Performance Efficiency |
| | Digital Rights Management | AT13 | Security (Confidentiality, Integrity, Non-repudiation), Functional Suitability |
| Content and Economy | User-Generated Content (UGC) | AT14 | Functional Suitability (Completeness, Correctness), Usability (Operability, Learnability) |
| | Economic Development | AT15 | Functional Suitability, Security (Integrity, Confidentiality), Reliability (Availability) |
| | Imaginative Creation | AT16 | Functional Suitability (Completeness, Relevance), Usability (Operability, User Interface Aesthetics) |
| Governance and Accountability | Policy Development | AT17 | Functional Suitability (Completeness, Relevance), Maintainability (Modifiability) |
| | Regulatory Compliance | AT18 | Security (Accountability), Functional Suitability (Correctness, Relevance) |
| | Stakeholder Participation | AT19 | Usability (Operability, Learnability), Functional Suitability |
| | Operational Transparency | AT20 | Security (Accountability), Usability (Appropriateness Recognizability) |
| | Environment Security | AT21 | Security (Confidentiality, Integrity, Non-repudiation, Accountability, Authenticity), Reliability (Fault tolerance) |
| | Uncertainty Management | AT22 | Reliability (Maturity), Maintainability (Modifiability, Analysability), Functional Suitability |
| | Credibility Generation | AT23 | Security (Authenticity, Accountability, Non-repudiation), Reliability (Maturity) |
| Interoperability | Standardization | AT24 | Compatibility (Interoperability, Co-existence), Portability (Adaptability) |
| | Platform Compatibility | AT25 | Compatibility (Interoperability, Co-existence), Portability (Adaptability) |
| | Data and Identity Portability | AT26 | Portability (Installability, Replaceability), Security (Confidentiality, Integrity), Compatibility (Interoperability) |
| | API Integration | AT27 | Compatibility (Interoperability), Maintainability (Modularity, Reusability) |
| Literacy and Support | Educational Resources | AT28 | Usability (Learnability, Appropriateness Recognizability), Functional Suitability (Completeness, Correctness) |
| | Change Management and Cultural Adoption | AT29 | Usability (Learnability), Maintainability (Modifiability), Portability (Adaptability) |
| | Community Support | AT30 | Usability (Operability, User Error Protection), Functional Suitability (Completeness) |
| | User Competency Evaluation | AT31 | Usability (Learnability), Functional Suitability (Correctness) |
| Accessibility and Inclusion | Software Access | AT32 | Usability (Accessibility, Operability), Portability (Installability, Adaptability) |
| | Network and Community | AT33 | Usability (User Interface Aesthetics, Accessibility), Functional Suitability (Completeness) |
| | Awareness and Education | AT34 | Usability (Appropriateness Recognizability), Functional Suitability (Correctness) |
| | Inclusive and Adaptive User Experience | AT35 | Usability (Accessibility, User Interface Aesthetics, Operability), Portability (Adaptability) |
The Rigor Requirements Matrix specifies the minimum rigor requirements that must be taken into consideration when conducting evaluations. These aspects must be understood and adopted by the entire group of researchers through training workshops and continuous monitoring.
Table 2.
Rigor Requirements Matrix.
| Category | Rigor Requirement | Description |
| --- | --- | --- |
| Evaluation Design | Clarity of Objectives and Questions | The evaluation objectives must be specific, measurable, achievable, relevant, and time-bound (SMART). The research questions must be explicit and directly addressable by the design [37]. |
| | Robust Theoretical Framework | The evaluation must be based on recognized theoretical and technological frameworks. This ensures that the criteria are well-founded [37]. |
| | Defined and Operationalized Evaluation Criteria | Each criterion (e.g., usability, immersion, security) must be clearly defined and translated into measurable indicators, including detailed rubrics or rating scales [38]. |
| | Participant Selection | If users are involved, the sample must be representative of the target population. The selection process must be transparent and unbiased, with well-defined inclusion and exclusion criteria [39]. |
| Data Collection | Multiple Collection Methods (Triangulation) | Use a combination of qualitative and quantitative methods (e.g., surveys, interviews, observation, interaction logs, biometric data, validated scales) for a well-grounded understanding [40]. |
| | Validated Measurement Instruments | Employ questionnaires, scales, and tools that have demonstrated their validity. |
| | Consistency in Collection | Establish standardized protocols for data collection, ensuring that all evaluators follow the same procedures to minimize bias and variability [41]. |
| | Controlled Evaluation Conditions | Conduct the evaluation in environments that minimize external variables that could influence the results, ensuring that participants experience the platform under similar conditions [42]. |
| Data Analysis | Appropriate Analysis Methods | Select appropriate statistical analysis techniques (quantitative) and thematic or content analysis methods (qualitative) for the research questions and data type [43]. |
| | Transparency in Analysis | Document the analysis steps in detail, including the justification for analytical decisions, to allow for reproducibility and peer review [44]. |
| | Analysis of Biases and Limitations | Explicitly identify and discuss any potential biases in the design, collection, or analysis of the data. Acknowledge the inherent limitations of the study [42]. |
| | Rigorous Interpretation | Conclusions must be derived directly from the data and analysis, avoiding excessive generalizations or inferences not supported by evidence [37]. |
| Ethics and Replicability | Exhaustive Ethical Considerations | Ensure that all aspects comply with the highest ethical standards, including informed consent, data privacy, confidentiality, and protection of vulnerable participants. Approval from an ethics committee is mandatory [45]. |
| | Detailed and Transparent Documentation | Maintain an exhaustive record of the entire process, from design to final results (plan, instruments, anonymized raw data, analysis scripts, reports) [44]. |
| | Replicability | The design and methodology must be described in sufficient detail for an independent third party to replicate the evaluation and potentially obtain similar results [46]. |
The Target Entities component states that the functional components and official/community documentation of metaverse platforms are the entities to be evaluated. In particular, quality indicators for five maturity levels (NM1–NM5) are evaluated for 35 attributes, grouped into the categories of Technical Aspects, Identity and Representation, Content and Economy, Interoperability, Governance and Accountability, Literacy and Support, and Accessibility and Inclusion.
The functional components are those available in a production environment, accessed from user interfaces or application programming interfaces (APIs). The documentation artifacts are those defined in the matrix of primary and secondary sources. These matrices were constructed by experts considering official metaverse portals, technical documentation, available functionalities, artifacts published by developers, specialized articles, user reviews, technical forums, and third-party reports (see file Appendix D of the dataset, [
47]).
The Quality Assessment Module was defined using a multi-criteria approach with three assessment modules. The first module, with a descriptive scope, was based on the matrix of attributes derived from the work of Schultz [
34] (see file
Appendix A of the dataset, [
47]). The second module, which employed a predominantly quantitative method, utilized the criteria-based rating module adapted from the metaverse maturity model proposed by Weinberger and Gross [
32]. This module defines five levels, each with membership criteria (see file Appendix B of the dataset, [
47]). The third module, quantitative in approach, is based on the quality indicators for each of the maturity levels (see Appendix C and Appendix E of the dataset, [
47]).
For the Quality Analysis Methods component, it was proposed that the evaluation be carried out using a staged inferential approach. In the first stage, three researchers take the rating modules and source matrices to define, by weighted scoring and expert consensus, the maturity levels using the indicators. Inter-rater reliability is measured with Cohen's quadratically weighted kappa coefficient and, where discrepancies of two or more levels occur, with Spearman's coefficient. Where agreement is not acceptable, meetings are held to reconcile the assigned maturity levels.
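As an illustration of this reliability check, here is a minimal sketch using scikit-learn and SciPy. The three rating vectors are hypothetical stand-ins for the per-attribute maturity levels, and the 0.6 kappa threshold mirrors the cut-off that later triggered consensus workshops (Section 3.2).

```python
from itertools import combinations

from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score

# Hypothetical maturity levels (NM1-NM5) assigned by three raters to ten
# attributes of one platform; the real study rated 35 attributes.
ratings = {
    "rater_1": [3, 2, 4, 1, 5, 3, 2, 4, 2, 1],
    "rater_2": [3, 3, 4, 1, 4, 3, 2, 5, 2, 1],
    "rater_3": [2, 2, 4, 2, 5, 3, 1, 4, 3, 1],
}

for a, b in combinations(ratings, 2):
    # Quadratic weighting penalizes large disagreements more than small ones.
    kappa = cohen_kappa_score(ratings[a], ratings[b], weights="quadratic")
    rho, _ = spearmanr(ratings[a], ratings[b])
    verdict = "consensus meeting needed" if kappa < 0.6 else "acceptable"
    print(f"{a} vs {b}: weighted kappa = {kappa:.2f}, "
          f"Spearman rho = {rho:.2f} -> {verdict}")
```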
The second stage defines the quality of each platform using the geometric performance method, also known as the “Area-Based Aggregation Method” or “Radar Chart Area Method,” used in the Metagon typology [
33]. This consists of representing the quality level of the metaverse platform through a radar chart in which each axis corresponds to an evaluated attribute. The score obtained (between 0 and 5) is translated into a vertex of a closed polygon, and the surface area of the polygon is taken to represent the multidimensional quality of the evaluated platform. The larger the area, the closer the platform is to the ideal, defined as maturity level 5 (NM5) on every dimension.
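For reference, a minimal sketch of the area computation follows. The closed-polygon formula is standard trigonometry; normalizing by the area of the ideal NM5 polygon to obtain a compliance percentage is our assumption about how the percentages in Table 6 were derived.

```python
import numpy as np

def radar_area_pct(scores, max_level=5):
    """Area of the closed radar polygon as a % of the ideal polygon.

    For n equally spaced axes with radii r_i, the polygon area is
    0.5 * sin(2*pi/n) * sum_i(r_i * r_{i+1}) with cyclic indices, so the
    ratio to the ideal (all axes at max_level) reduces to
    sum_i(r_i * r_{i+1}) / (n * max_level**2).
    """
    r = np.asarray(scores, dtype=float)
    cyclic_products = r * np.roll(r, -1)  # r_i * r_{i+1}, wrapping around
    return 100.0 * cyclic_products.sum() / (r.size * max_level**2)

# Hypothetical maturity-level vector for the 35 attributes of one platform.
rng = np.random.default_rng(7)
levels = rng.integers(1, 6, size=35)
print(f"{radar_area_pct(levels):.2f}% of the ideal NM5 polygon")
```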
The third stage compares the platforms using k-means cluster analysis and Principal Component Analysis (PCA), based on the maturity levels and the radar area.
The Evaluation Expected Products Matrix proposes a set of artifacts that allow the results to be validated, the maturity level of each platform to be determined, and the quality of the different platforms to be compared (see
Table 3).
3.2. Results of Objective 2
For the evaluation of platforms, activities consistent with the defined model components were carried out (see
Table 4).
Twenty-three metaverse platforms were selected for the study, including solutions provided by manufacturers of VR, AR, or XR devices (see
Table 5).
Each platform was evaluated using triangulation of sources and researchers, yielding an evaluation matrix with comments per platform. In 21% of the evaluations, inter-rater reliability yielded a kappa index < 0.6. Consequently, consensus workshops were conducted to review these cases and generate an aggregate matrix for subsequent quantitative analysis (see Appendix F of the dataset [
47]).
Based on these matrices, radar diagrams were generated for each platform and the areas were calculated.
Figure 3.
Radar Charts of Maturity Levels by Attribute.
Algorithms written in Python were used to analyze the classification results, using libraries such as numpy, pandas, and scikit-learn. The first step was to obtain the sorted table of radar chart areas. A simple segmentation of the platforms was then defined based on quintiles of the compliance scores. Thus, the relative quality of the platforms is classified as Very High (compliance above 53.31%), High (compliance between 43.29% and 53.31%), Medium (compliance between 25.16% and 43.29%), Low (compliance between 22.47% and 25.16%), and Very Low (compliance below 22.47%).
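The reported band boundaries can be reproduced from the compliance column of Table 6 with pandas' quantile-based discretization; the sketch below uses the values from Table 6 and prints the quintile edges (approximately 22.47, 25.16, 43.29, and 53.31).

```python
import pandas as pd

# Compliance percentages from Table 6 (23 platforms).
compliance = pd.Series({
    "Roblox": 67.54, "Decentraland": 65.26, "Overte": 60.57,
    "Webaverse": 58.97, "OpenSimulator": 53.49, "Second Life": 53.03,
    "Engage VR": 50.17, "Resonite": 47.43, "Sinespace/Breakroom": 45.49,
    "Frame VR": 42.74, "Fortnite": 42.29, "VRChat": 33.83,
    "Rec Room": 32.46, "Bigscreen": 25.37, "Spatial": 24.34,
    "Horizon Worlds": 24.11, "vTime XR": 23.43, "Vircadia": 23.09,
    "Renyland": 22.06, "Hubs": 21.83, "WorkAdventure": 20.57,
    "Sansar": 16.46, "JanusXR": 11.66,
})

labels = ["Very Low", "Low", "Medium", "High", "Very High"]
bands, edges = pd.qcut(compliance, q=5, labels=labels, retbins=True)
print("quintile edges:", edges.round(2))
print(bands.sort_values(ascending=False))
```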
Table 6.
Analysis by Area of the Radar Chart.
| Platform | Radar Area | Compliance (%) | Relative Quality | Score |
| --- | --- | --- | --- | --- |
| Roblox | 52.76 | 67.54 | Very High | 5 |
| Decentraland | 50.98 | 65.26 | Very High | 5 |
| Overte | 47.32 | 60.57 | Very High | 5 |
| Webaverse | 46.07 | 58.97 | Very High | 5 |
| OpenSimulator | 41.78 | 53.49 | Very High | 5 |
| Second Life | 41.43 | 53.03 | High | 4 |
| Engage VR | 39.19 | 50.17 | High | 4 |
| Resonite | 37.05 | 47.43 | High | 4 |
| Sinespace / Breakroom | 35.53 | 45.49 | High | 4 |
| Frame VR | 33.39 | 42.74 | Medium | 3 |
| Fortnite | 33.03 | 42.29 | Medium | 3 |
| VRChat | 26.43 | 33.83 | Medium | 3 |
| Rec Room | 25.36 | 32.46 | Medium | 3 |
| Bigscreen | 19.82 | 25.37 | Low | 2 |
| Spatial | 19.02 | 24.34 | Low | 2 |
| Horizon Worlds | 18.84 | 24.11 | Low | 2 |
| vTime XR | 18.30 | 23.43 | Low | 2 |
| Vircadia | 18.03 | 23.09 | Low | 2 |
| Renyland | 17.23 | 22.06 | Very Low | 1 |
| Hubs | 17.05 | 21.83 | Very Low | 1 |
| WorkAdventure | 16.07 | 20.57 | Very Low | 1 |
| Sansar | 12.86 | 16.46 | Very Low | 1 |
| JanusXR | 9.11 | 11.66 | Very Low | 1 |
This initial classification was complemented by a subsequent cluster analysis, which began with dimensionality reduction using PCA and unsupervised clustering with the k-means algorithm. Following analysis of the scree plot, the first five principal components were retained, explaining 85% of the total variance, and the attribute weight (loading) matrices for each principal component were generated. The loading matrices were given to three experts, who reached consensus on the naming of each cluster. Because the ratings represent maturity levels of quality attributes, the relative magnitude of the loadings was used as the discriminating factor of relevance.
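The following sketch reproduces this pipeline. The input matrix is a random stand-in for the 23 × 35 consensus maturity matrix (Appendix F of the dataset), and standardizing the ratings before PCA is our assumption, since the paper does not state how ratings were scaled.

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Random stand-in for the aggregate matrix: 23 platforms x 35 attribute
# maturity levels (NM1-NM5); the study used its consensus matrix instead.
rng = np.random.default_rng(42)
X = StandardScaler().fit_transform(rng.integers(1, 6, size=(23, 35)))

# Scree criterion: keep the leading components explaining >= 85% of variance.
pca = PCA().fit(X)
cum_var = np.cumsum(pca.explained_variance_ratio_)
n_keep = int(np.searchsorted(cum_var, 0.85) + 1)
scores = pca.transform(X)[:, :n_keep]

# k-means on the retained component scores (six clusters, as in Table 7).
km = KMeans(n_clusters=6, n_init=10, random_state=0).fit(scores)
print(f"components kept: {n_keep}; cluster sizes: {np.bincount(km.labels_)}")

# Attribute loadings per retained component support the naming of clusters.
loadings = pd.DataFrame(
    pca.components_[:n_keep].T,
    index=[f"AT{i + 1}" for i in range(35)],
    columns=[f"PC{j + 1}" for j in range(n_keep)],
)
print(loadings["PC2"].abs().nlargest(3))  # heaviest attributes on PC2
```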
Table 7.
Identified Clusters.
| Cluster | Designation | Estimated Relevance | Value |
| --- | --- | --- | --- |
| 6 | Mature and robust platforms | Very relevant (benchmark) | 5 |
| 3 | Advanced with technological integration | Very relevant | 4 |
| 2 | Consolidated in immersive experience | Relevant | 3 |
| 4 | Emerging with innovative potential | Interesting but weak | 2 |
| 1 | Rigid or closed architecture | Limited | 1 |
| 5 | Lagging or experimental | Possible discard or niche | 0 |
Figure 4.
PCA Scree Plot. Source: The authors.
The Decentraland (P2), Overte (P10), Engage VR (P3), and Frame VR (P5) platforms lie in the first octant of the PC1, PC2, and PC3 relationship graph. Cluster 5, or the reference cluster, includes Decentraland (P2), OpenSimulator (P9), Overte (P10), Roblox (P14), Second Life (P16), and Webaverse (P22).
It should be noted that open source platforms (Overte, OpenSimulator, Webaverse) are consistently ranked at high maturity levels, suggesting that open code favors scalability and infrastructural evolution. In contrast, closed commercial platforms such as Horizon Worlds or vTime XR have limitations in interoperability and accessibility, which restricts their educational applicability.
Figure 5.
Cluster graph on principal components PC1 and PC2. Source: The authors.
Figure 6.
Conglomerates by K-Means and PCA with PC1, PC2, and PC3. Source: The authors.
Finally, a quality classification of the platforms was defined, resulting in a general assessment matrix.
Table 8.
Metaverse Platforms Quality Assessment.
| id | Platform | Radar Area | Cluster | PCA |
| --- | --- | --- | --- | --- |
| 1 | Decentraland | 50.98 | 5 | 1 |
| 2 | Overte | 47.32 | 5 | 1 |
| 3 | Resonite | 37.05 | 5 | 1 |
| 4 | Roblox | 52.76 | 5 | 0.75 |
| 5 | OpenSimulator | 41.78 | 5 | 0.81 |
| 6 | Engage VR | 46.07 | 5 | 0.55 |
| 7 | Second Life | 41.43 | 5 | 0.53 |
| 8 | Frame VR | 33.39 | 3 | 0.35 |
| 9 | Fortnite | 33.03 | 4 | 0.05 |
| 10 | Sinespace | 35.53 | 3 | 0.16 |
| 11 | Webaverse | 39.19 | 3 | -0.02 |
| 12 | Rec Room | 25.36 | 4 | -0.37 |
| 13 | WorkAdventure | 16.07 | 1 | 0.3 |
| 14 | Vircadia | 18.03 | 2 | 0.05 |
| 15 | VRChat | 26.43 | 4 | -0.59 |
| 16 | Hubs | 17.05 | 1 | 0.09 |
| 17 | Bigscreen | 19.82 | 2 | -0.35 |
| 18 | Renyland | 17.23 | 2 | -0.38 |
| 19 | vTime XR | 18.3 | 2 | -0.6 |
| 20 | JanusXR | 9.11 | 1 | -0.69 |
| 21 | Spatial | 19.02 | 0 | -0.74 |
| 22 | Sansar | 12.86 | 2 | -1.14 |
| 23 | Horizon Worlds | 18.84 | 0 | -1.19 |
4. Discussion
Discussion of the results of objective 1
The first objective of this study was to propose a model for evaluating the quality of metaverses. As a result, unlike previous studies that focused on lists of functionalities, this study validated a model for evaluating metaverse platforms by integrating three approaches: the ISO/IEC 25010 software quality standard, the Weinberger and Gross [
32] maturity model, and the Metagon typology by Schöbel et al. [
33]. This hybridization provides a multidimensional assessment that ranges from technical aspects (e.g., system scalability) to governance attributes (policy development) and user experience (accessibility and inclusion). Thus, the proposal moves beyond approaches focused solely on lists of functionalities by comprehensively considering technical, governance, and user experience aspects.
Therefore, it contributes to multidimensional integration and methodological robustness. On the one hand, the combination of the ISO/IEC 25010 standard with a maturity model provides the proposed model with a formal and gradual basis for technical evaluation, while, on the other hand, the Metagon typology adds specificity by characterizing types of metaverse from perspectives such as scalability, accessibility, inclusion, and development policies. This hybrid strategy is aligned with emerging approaches that promote more holistic and adaptive assessments in immersive technological environments [
48].
Some recent studies highlight the importance of considering quality of experience, user perception, and economic context in the evaluation of metaverse services. For example, Du et al. [
49] propose an economic-consumer framework that integrates quality of experience and perceived value into the service offering, while Lin et al. [
50] developed the Metaverse Announcer User Experience (MAUE) model to identify the factors that affect user experience quality, configuring settings to match the preferences of the majority of users across different parameters. This research reinforces the relevance of incorporating subjective and experiential dimensions into the proposed model, consistent with the inclusion of attributes such as accessibility and inclusion.
Specifically in education, the metaverse has been highlighted as a resource capable of transforming learning. In the contexts of teaching and learning processes, immersive platforms allow for the co-creation of scenarios and real-time feedback, although they also pose challenges such as privacy and ethics [
51]. In addition, a metaverse-based framework has been proposed to improve the reliability, accuracy, and legitimacy of educational assessment processes through technologies such as VR, AR, and blockchain [
52]. This confirms the relevance of considering the pedagogical dimension, now linked to governance and technical quality within the model.
Some studies warn that the metaverse, in its current state, is not yet fully consolidated and carries technological, ethical, social, and environmental risks [
1]. The evolving regulatory landscape requires the inclusion of governance attributes, such as policy development and sustainability, in assessment models. This, in turn, reinforces the incorporation of this dimension into the proposal.
Discussion of the results of objective 2
Applying the evaluation model to 23 platforms revealed that the virtual worlds ecosystem is highly fragmented and heterogeneous, with very disparate levels of maturity and quality. This situation confirms the findings of Allam et al. [
30] and Jagatheesaperumal et al. [
31], who warn that the development of the metaverse does not follow a linear pattern of evolution, but rather responds to particular interests, divergent business models, and differentiated technological capabilities. The results obtained through the radar chart area analysis and clustering provide, for the first time, an empirical and systematized overview of this ecosystem, moving beyond the descriptive approaches currently found in the literature.
In the case of cluster 6, called “Mature and robust platforms,” virtual worlds such as Decentraland, Roblox, Second Life, and Overte were identified. What is interesting about this finding is that the group includes platforms with very different business models and architectures: centralized and commercial (Roblox), decentralized blockchain-based (Decentraland), and open source (Overte). This result suggests that maturity is not determined by a single type of architecture or economic model, but rather by the ability to sustain continuous development in multiple dimensions of quality (functionality, interoperability, security, user experience). This corroborates the idea that the evolution of the metaverse relies more on the comprehensive consolidation of the ecosystem than on specific design decisions. This finding further expands upon the work of Dwivedi et al. [
1], who established that adoption depends on social and cultural factors rather than exclusively on technical ones.
In contrast, the existence of groups such as cluster 5 (“Lagging or experimental platforms”) and cluster 1 (“Platforms with rigid or closed architecture”) reflects that a significant portion of metaverses do not progress beyond the prototype stage or are limited to very specific application niches [
53]. These platforms present clear barriers in terms of scalability, accessibility, and interoperability, which limits their potential for mass adoption [
54]. This finding is consistent with the work of Lee et al. [
55], who describe how most metaverses remain at the technological validation stage rather than achieving commercial or educational consolidation. In this sense, the results point to a risk of permanent fragmentation, in which only a small subset of platforms will achieve sustainable development over time [
56].
Cluster 3 (“Advanced platforms with technological integration”) is particularly relevant to the discussion, given that it is characterized by its high score in Principal Component 2, mainly associated with the attributes of API Integration (0.3457) and Data and Identity Portability (0.3027). This indicates that interoperability is the main differentiating factor for this group, reinforcing recent studies that highlight the importance of data integration and mobility as a critical condition for the development of sustainable virtual ecosystems [
57,
58]. The evidence from this study confirms that the platforms demonstrating the greatest potential are those that not only offer attractive immersive experiences but also facilitate seamless connection with other digital environments, respecting the continuity of digital identity and the flow of information.
Finally, although the metaverse has been recognized as one of the technologies with the greatest disruptive potential today, the results of this study reflect that its use for educational purposes remains marginal. Most of the platforms analyzed prioritize entertainment, commerce, or social interaction, with no advanced features explicitly geared toward learning. This finding coincides with the conclusion of Hwang and Chien [
59], who argue that most educators remain unaware of the characteristics and full potential of the metaverse in educational settings. Consequently, a gap exists between technological advancement and its appropriation in education. This reality necessitates the development of models for curricular integration and teacher training that allow the metaverse to be leveraged as a setting for educational innovation and immersive learning [
60].
In summary, the results of the comparative analysis not only confirm the existence of a fragmented ecosystem but also show that the determining factors of maturity do not depend exclusively on technological architecture or economic models, but rather on the ability to guarantee interoperability, scalability, and continuity of the user experience. This opens up fertile ground for future research aimed at understanding how these attributes can be transferred and adapted to educational and professional environments, overcoming the current predominance of recreational and commercial uses.
Limitations of the present study
The validity of this study should be considered in light of certain limitations. The first is the inherent role of expert judgment in the evaluation process. Although a consensus process among researchers was rigorously employed, the assignment of maturity levels across 35 attributes necessarily incorporated a component of expert judgment. Future implementations could benefit from more detailed rubrics and from evaluators with more diverse professional profiles (e.g., teachers without technical training). The second is the simplification of the aggregation model: the equal weighting of metrics (area, cluster, and PCA) is a methodological choice that could be adjusted in future work to reflect different priorities. The third is the inherent volatility of the metaverse ecosystem. The assessment is a snapshot taken at a specific point in time (2024-2025); platforms classified as “emerging” could mature rapidly, while others might be discontinued. However, the model is designed for reusability, allowing future longitudinal analyses to address this limitation. Finally, the scope of the analysis was limited to accessible platforms, excluding those with high cost barriers or those requiring exclusively business access, which potentially biases the representativeness of the sample.
Future lines of research
This work opens up several avenues for future research. First, the possibility of conducting context-weighted evaluations, i.e., applying the model by adjusting the weights of the attribute categories according to specific contexts (e.g., K-12 education, psychological therapy, corporate collaboration) to generate specific rankings. Second, developing qualitative and quantitative studies with end-users to correlate the evaluation results with their perceptions of usability, effectiveness, and satisfaction, thereby contributing to the model’s further validation. Third, it would be beneficial to replicate the evaluation annually to track the evolution of platforms and the ecosystem in general, thereby identifying trends and maturity trajectories. Fourth, it is valuable to expand the model by incorporating new attributes as technology evolves, such as native integration with generative AI or sovereign digital identity frameworks.
5. Conclusions
As the metaverse transitions from speculation to practical application, the need for rigorous evaluation frameworks is critical, and this study proposed to address the need for a systematic method to evaluate the quality of metaverse platforms in a fragmented and rapidly evolving market. In this regard, a robust and multidimensional quality assessment model has been developed and validated. This model, which hybridizes the ISO/IEC 25010 standard, a maturity model, and the Metagon typology, constitutes the main contribution of the study. It offers a valuable tool for academics, developers, and educators to analyze and compare platforms in an informed manner, thereby moving beyond purely descriptive reviews.
The quality and maturity of existing metaverse platforms exhibit significant variation. An evaluation of 23 platforms reveals that, while a select group, such as Decentraland, Overte, Resonite, and Roblox, demonstrates high maturity across multiple dimensions, a significant portion of the market comprises niche, experimental, or limited-functionality solutions that are not yet sufficiently developed or suitable for widespread adoption.
There is no single “best” metaverse platform; rather, the optimal selection is context-dependent. Our analysis demonstrates that a platform with a high overall rating is not necessarily the superior option for all purposes. The proposed model breaks down quality into specific categories whose attributes can be reconfigured, thereby facilitating technology selection that is aligned with specific needs.
Supplementary Materials
The following supporting information can be downloaded at the website of this paper posted on Preprints.org, see Appendix A to F of the dataset.
Author Contributions
Conceptualization, F.S-D, J.M-N, and P.C.; methodology, F.S-D, J.M-N, P.C., and Y.L-A.; software, J.M-N, P.C., and G.R.; validation, F.S-D, Y.L-A, G.R, and A.CH.; formal analysis, F.S-D, J.M-N, and P.C.; investigation, F.S-D, P.C., M.B-Q, and A.CH.; resources, F.S-D and M.B-Q.; data curation, J.M-N, P.C., and G.R.; writing—original draft preparation, F.S-D, P.C., Y.L-A, and A.CH.; writing—review and editing, Y.L-A, G.R, and M.B-Q.; visualization, P.C., Y.L-A, and A.CH.; supervision, F.S-D, Y.L-A, and M.B-Q.; project administration, F.S-D and M.G-B.; funding acquisition, F.S-D and M.B-Q. All authors have read and agreed to the published version of the manuscript.
Funding
This study was funded by the National Fund for Scientific and Technological Development (Fondecyt Regular), number 1231136.
Institutional Review Board Statement
Comité Ético Científico, Universidad Católica de la Santísima Concepción.
Informed Consent Statement
Informed consent was obtained from all participants involved in the study.
Data Availability Statement
The data supporting the findings of this study are openly available in the Figshare dataset [47].
Acknowledgments
To the Consolidated Research Group “Research and Innovation Group in Socioemotional Learning, Well-Being and Mental Health to Foster Thriving” (THRIVE4ALL) UCSC.
Conflicts of Interest
The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
Abbreviations
The following abbreviations are used in this manuscript:
| VW | Virtual World |
| VWLE | Virtual World Learning Environments |
| AI | Artificial Intelligence |
| APIs | Application Programming Interfaces |
| AR | Augmented Reality |
| MR | Mixed Reality |
| VR | Virtual Reality |
| XR | Extended Reality |
| DSR | Design Science Research |
| ICT | Information and Communication Technologies |
| ISO | International Organization for Standardization |
| IEC | International Electrotechnical Commission |
| NFTs | Non-Fungible Tokens |
| UGC | User-Generated Content |
| SQuaRE | Systems and software Quality Requirements and Evaluation |
Appendix A. Validation Questionnaire by Expert Judges
Title of the Instrument: Checklist for Determining the Maturity Level of Virtual Worlds.
Objective of the Questionnaire: to evaluate the content validity of the items (attributes) that make up the checklist. Your expert opinion is essential to ensure that each attribute is relevant, clear, and pertinent to the maturity level it is intended to measure.
Expert Judge Information
| Name | |
| Area of Expertise | |
| Years of Experience | |
Instructions
Below are a series of key assessment attributes, organized by category and maturity level (NM1 to NM5). For each attribute, please assess the following three criteria using the scales provided:
Relevance (R): How essential is this attribute in defining the specified maturity level?
Clarity (C): Is the attribute description clear, concise, and easy to understand?
Pertinence (P): Is the attribute correctly located at this maturity level (e.g., NM1), or does it belong to another (lower or higher) level?
Evaluation Scales
| Relevance | Clarity | Pertinence of the Level |
| --- | --- | --- |
| 1 = Not relevant | 1 = Unclear | 1 = Incorrect (belongs to another level) |
| 2 = Not very relevant | 2 = Clear | 2 = Adequate |
| 3 = Relevant | 3 = Very clear | |
| 4 = Very relevant | | |
Quality Attributes Assessment Questionnaire
| ID | Main Attribute | Level | Key Attribute Description | R | C | P | Observations |
| --- | --- | --- | --- | --- | --- | --- | --- |
| AT1.1 | System Scalability | NM1 | Supports ≤ 10 concurrent users without failure | | | | |
| AT1.2 | System Scalability | NM1 | There is no load balancing or fault tolerance | | | | |
| AT1.3 | System Scalability | NM1 | Maximum CPU usage > 95% in basic tests | | | | |
| AT2.1 | System Scalability | NM2 | Supports up to 50 users with limited stability | | | | |
| AT2.2 | System Scalability | NM2 | Basic balancing without dynamic monitoring | | | | |
| IR1.1 | Representation of the Person | NM1 | Default generic rendering only | | | | |
| IR1.2 | Representation of the Person | NM1 | No avatar customization option | | | | |
| CE1.1 | Imaginative Creation | NM1 | There are no creation or design tools | | | | |
| GO1.1 | Policy Development | NM1 | There are no documented policies | | | | |
| AS1.1 | Educational Resources | NM1 | Total absence of educational resources | | | | |
References
- Dwivedi, Y.K.; Hughes, L.; Baabdullah, A.M.; Ribeiro-Navarrete, S.; Giannakis, M.; Al-Debei, M.M.; …; Wamba, S.F. Metaverse beyond the hype. Int. J. Inf. Manag. 2023, 71, 102642. [CrossRef]
- López-Belmonte, J.; Pozo-Sánchez, S.; Moreno-Guerrero, A.J.; Lampropoulos, G. Metaverse in education: A systematic review. Rev. Educ. Dist. 2023, 23. [Google Scholar] [CrossRef]
- Rojas-Sánchez, M.A.; Palos-Sánchez, P.R.; Folgado-Fernández, J.A. Systematic literature review on VR and education. Educ. Inf. Technol. 2023, 28, 155–192. [Google Scholar] [CrossRef]
- Sandoval-Henríquez, F.; Sáez, F.; Badilla-Quintana, M.G. Immersive technologies in primary education. J. Comput. Educ. 2025, 12, 477–502. [Google Scholar] [CrossRef]
- Hwang, G.-J.; Chien, S.-Y. Metaverse in education: AI perspective. Comput. Educ. Artif. Intell. 2022, 3, 100082. [Google Scholar] [CrossRef]
- Onu, P.; Pradhan, A.; Mbohwa, C. Metaverse in future teaching. Educ. Inf. Technol. 2024, 29, 8893–8924. [Google Scholar] [CrossRef]
- Shu, X.; Gu, X. Edu-metaverse smart education model. Systems 2023, 11, 75. [Google Scholar] [CrossRef]
- Yeganeh, L.; Fenty, N.; Chen, Y.; Simpson, A.; Hatami, M. Metaverse classroom model. Future Internet 2025, 17, 63. [Google Scholar] [CrossRef]
- Jovanović, A.; Milosavljević, A. VoRtex Metaverse platform. Electronics 2022, 11, 317. [Google Scholar] [CrossRef]
- Zhang, X.; Chen, Y.; Hu, L.; Wang, Y. Metaverse in education. Front. Psychol. 2022, 13, 1016300. [Google Scholar] [CrossRef]
- Ermağan, E. Language education and the metaverse. Int. J. Mod. Educ. Stud. 2025, 9. [Google Scholar] [CrossRef]
- Vaz, E. The Role of the Metaverse in Knowledge Management. In Regional Knowledge Economies; Springer Nature: Cham, Switzerland, 2024; pp. 23–40. [Google Scholar] [CrossRef]
- Göçen, A. Metaverse in education. Int. J. West. Black Sea Soc. Humanit. Sci. 2022, 6, 98–122. [Google Scholar] [CrossRef]
- Kye, B.; Han, N.; Kim, E.; Park, Y.; Jo, S. Educational applications of metaverse. J. Educ. Eval. Health Prof. 2021, 18, 32. [Google Scholar] [CrossRef]
- Al Yakin, A.; Seraj, P. Metaverse effects on motivation and performance. Int. J. Comput. Inf. Manuf. 2023, 3, 10–18. [Google Scholar] [CrossRef]
- Çelik, F.; Baturay, M. Metaverse in L2 learning. BMC Psychol. 2024, 12, 58. [Google Scholar] [CrossRef] [PubMed]
- De Felice, F.; Petrillo, A.; Iovine, G.; Salzano, C.; Baffo, I. Metaverse and education. Appl. Sci. 2023, 13, 5682. [Google Scholar] [CrossRef]
- Bell, M. Definition of virtual worlds. J. Virtual Worlds Res. 2008, 1. [Google Scholar] [CrossRef]
- Girvan, C. What is a virtual world? Educ. Technol. Res. Dev. 2018, 66, 1087–1100. [Google Scholar] [CrossRef]
- INTEL. Demystifying the Virtual Reality Landscape. Available online: https://www.intel.com/... (accessed on 30 November 2025).
- Ng, D. What is the metaverse? Australas. J. Educ. Technol. 2022, 38, 190–205. [Google Scholar] [CrossRef]
- Cheng, S. Metaverse. In Metaverse: Concept, Content and Context; Springer Nature: Cham, Switzerland, 2023; pp. 1–23. [Google Scholar] [CrossRef]
- Dionisio, J.D.N.; Burns, W.G., III; Gilbert, R. 3D virtual worlds. ACM Comput. Surv. 2013, 45, 1–38. [Google Scholar] [CrossRef]
- Xu, M.; Ng, W.; Lim, W.; Kang, J.; Xiong, Z.; Niyato, D.; Miao, C. Edge-enabled metaverse. IEEE Commun. Surv. Tutor. 2022, 25, 656–700. [Google Scholar] [CrossRef]
- Bricken, W. Learning in Virtual Reality. In Proceedings of the 1990 ACM SIGGRAPH Symposium on Interactive 3D Graphics, San Diego, CA, USA, 1990; pp. 177–184. [Google Scholar]
- Kaddoura, S.; Al Husseiny, F. Metaverse in education. PeerJ Comput. Sci. 2023, 9, e1252. [Google Scholar] [CrossRef]
- Wang, H.; Ning, H.; Lin, Y.; Wang, W.; Dhelim, S.; Farha, F.; Daneshmand, M. Survey on the metaverse. IEEE Internet Things J. 2023, 10, 14671–14688. [Google Scholar] [CrossRef]
- Yıldız, T. AI, metaverse, and digital education. İstanbul Univ. Sosyoloji Derg. 2024, 44, 969–988. [Google Scholar] [CrossRef]
- McClarty, K.; Orr, A.; Frey, P.; Dolan, R.; Vassileva, V.; McVay, A. Gaming in education. Gaming Educ. 2012, 1, 1–35. [Google Scholar]
- Allam, Z.; Sharifi, A.; Bibri, S.; Jones, D.; Krogstie, J. Metaverse & smart cities. Smart Cities 2022, 5, 771–801. [Google Scholar] [CrossRef]
- Jagatheesaperumal, S.; Ahmad, K.; Al-Fuqaha, A.; Qadir, J. Extended reality in education. IEEE Trans. Learn. Technol. 2024, 17, 1120–1139. [Google Scholar] [CrossRef]
- Weinberger, M.; Gross, D. Metaverse maturity model. Glob. J. Comput. Sci. Technol. 2023, 22, 39–45. [Google Scholar] [CrossRef]
- Schöbel, S.; Karatas, J.; Tingelhoff, F.; Leimeister, J.M. Not everything is a metaverse?! Available online: https://www.researchgate.net/publication/364344441 (accessed on 30 November 2025).
- Schultz, R. Comparison Chart of 15 Social VR Platforms. Available online: https://ryanschultz.com (accessed on 30 November 2025).
- Hevner, A.R.; March, S.T.; Park, J.; Ram, S. Design science in IS research. MIS Q. 2004, 28, 75–105. [Google Scholar] [CrossRef]
- Sadeghi-Niaraki, A.; Rahimi, F.; Binti Azlan, N.; Song, H.; Ali, F.; Choi, S.-M. Metaverse taxonomy. Artif. Intell. Rev. 2025, 58, 244. [Google Scholar] [CrossRef]
- Creswell, J.W.; Creswell, J.D. Research Design, 5th ed.; SAGE Publications: Thousand Oaks, CA, USA, 2018. [Google Scholar]
- Hair, J.F.; Black, W.C.; Babin, B.J.; Anderson, R.E. Multivariate Data Analysis, 8th ed.; Pearson: Harlow, UK, 2019. [Google Scholar]
- Gravetter, F.J.; Forzano, L.-A. Research Methods for the Behavioral Sciences, 6th ed.; Cengage: Boston, MA, USA, 2018. [Google Scholar]
- Patton, M.Q. Qualitative Research & Evaluation Methods, 4th ed.; SAGE Publications: Thousand Oaks, CA, USA, 2015. [Google Scholar]
- Flick, U. An Introduction to Qualitative Research, 6th ed.; SAGE Publications: Thousand Oaks, CA, USA, 2018. [Google Scholar]
- Shadish, W.R.; Cook, T.D.; Campbell, D.T. Experimental and Quasi-Experimental Designs for Generalized Causal Inference; Houghton Mifflin: Boston, MA, USA, 2002; Volume 42. [Google Scholar]
- Miles, M.B.; Huberman, A.M.; Saldaña, J. Qualitative Data Analysis, 4th ed.; SAGE Publications: Thousand Oaks, CA, USA, 2019. [Google Scholar]
- American Psychological Association. Publication Manual of the APA, 7th ed.; APA: Washington, DC, USA, 2020. [Google Scholar]
- World Medical Association. Declaration of Helsinki. JAMA 2013, 310, 2191–2194. [Google Scholar] [CrossRef]
- Open Science Collaboration. Reproducibility of psychological science. Science 2015, 349, aac4716. [Google Scholar] [CrossRef]
- Sáez-Delgado, F.; Coronado Sánchez, P.C. Metaverse Platforms – SQuaRE Based Assessment. Figshare Dataset, 2025. [Google Scholar] [CrossRef]
- Adhini, N.; Prasad, C. Metaverse adoption drivers. Int. J. Consum. Stud. 2024, 48, e13069. [Google Scholar] [CrossRef]
- Du, H.; Ma, B.; Niyato, D.; Kang, J.; Xiong, Z.; Yang, Z. Quality of experience for metaverse services. IEEE Netw. 2023, 37, 255–263. [Google Scholar] [CrossRef]
- Lin, Z.; Duan, H.; Li, J.; Sun, X.; Cai, W. MetaCast architecture. In Proceedings of the 31st ACM International Conference on Multimedia, Ottawa, ON, Canada, October 2023; pp. 6756–6764. [Google Scholar]
- Sinha, E. Experiential learning in the metaverse. Int. J. Manag. Educ. 2023, 21, 100875. [Google Scholar] [CrossRef]
- Zi, L.; Cong, X. Metaverse for educational evaluation. Electronics 2024, 13, 1017. [Google Scholar] [CrossRef]
- Zhou, Q.; Wang, B.; Mayer, I. Social construction of metaverse. Technol. Forecast. Soc. Change 2024, 208, 123716. [Google Scholar] [CrossRef]
- Hamdan, I.K.A.; Aziguli, W.; Zhang, D.; Alhakeem, B. Barriers to metaverse adoption. J. Manuf. Eng. 2024, 19, 146–162. [Google Scholar] [CrossRef]
- Lee, J.; Kim, Y. Deep learning–based metaverse. Sustainability 2023, 15, 12663. [Google Scholar] [CrossRef]
- De Giovanni, P. Sustainability of the metaverse. Sustainability 2023, 15, 6079. [Google Scholar] [CrossRef]
- Al-Kfairy, M.; Alomari, A.; Al-Bashayreh, M.; Alfandi, O.; Tubishat, M. User perceptions of metaverse. Heliyon 2024, 10, e31413. [Google Scholar] [CrossRef] [PubMed]
- Ruijue, Z. Data interconnectivity in metaverse. J. Jishou Univ. (Soc. Sci. Ed.) 2025, 46, 37. [Google Scholar] [CrossRef]
- Hwang, G.-J.; Chien, S.-Y. Metaverse research roles. Comput. Educ. Artif. Intell. 2022, 3, 100082. [Google Scholar] [CrossRef]
- Mitra, S. Virtual-physical ecosystem for blended education. J. Metaverse 2023, 3, 66–72. [Google Scholar] [CrossRef]
- Dede, C. Constructivist virtual worlds. Educ. Technol. 1995, 35, 46–52. [Google Scholar]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).