
Evaluating design guidelines for intuitive virtual reality authoring tools: an NVIDIA Omniverse experiment


Submitted: 30 September 2023
Posted: 04 October 2023

Abstract
Virtual reality software can be challenging for beginners and unskilled professionals who lack a programming or 3D modeling background. At the same time, there is a knowledge gap in software project design for intuitive virtual reality authoring tools, which are supposed to be easier to use. These tools are frequently insufficient due to a lack of support and standard operating procedures. Adopting the Design Science Research paradigm, this study aims to evaluate the validity of fourteen design guidelines for the development of intuitive virtual reality authoring tools as an artifact. While a previous study completed the first steps of the Design Science Research, identifying problems, defining solution objectives, and developing and demonstrating the design guidelines, this work seeks to qualitatively evaluate their application in a practical experiment. A group of engineering students with no prior experience in creating virtual worlds was tasked with examining the design guidelines while using NVIDIA Omniverse Enterprise as an exemplary use case and with responding to a questionnaire and a focus group interview about how they perceived these guidelines. A correlation analysis confirmed that most guideline scores behaved as expected and that their ranking was consistent with the use case’s functionality. The participants understood the guidelines’ definitions and could decide whether they agreed or disagreed with their presence during the experiment. We conclude that, in accordance with the Design Science Research, the proposed artifact is useful, i.e., the design guidelines for virtual reality authoring tools perform what they are designed to do and are operationally reliable in accomplishing their goals.
Keywords: 
Subject: Computer Science and Mathematics - Computer Vision and Graphics

1. Introduction

The creation of virtual reality (VR) content and experiences is not widespread and still demands costly and time-consuming development processes employing game engines such as Unreal and Unity, which require the services of skilled professionals [1,2,3]. This is due to the unusual input and output devices used in virtual reality, such as head-mounted displays (HMDs), tracking systems, and 3D mice, as well as the complexity of the hardware and software architecture of VR systems [4,5]. Because immersive technology is complex, professionals need a wide range of skills, including substantial technical knowledge of programming languages and/or 3D modeling [3,6,7,8]. As a result, creating interactive virtual reality scenes is hard and uncomfortable for people who have never done it before [7,8].
Authoring tools are an alternative to this lengthy learning curve, as they aim to facilitate the creation of content with minimal iterations. The term authoring tool refers to software structures that include only the most relevant tools and features for content creation while enhancing and speeding up product maintenance [9,10]. In contrast to high-fidelity prototypes, which necessitate sophisticated programming skills, these technologies are used for low-fidelity authoring, which requires fewer programming skills [7]. Virtual reality experiences can presently be created using a variety of authoring tools, many of which are free, open-source programs [11]. However, these tools usually lack functionality as well as documentation and tutorials, which makes them unsuitable for supporting the complete development cycle [7,12].
Professionals of all skill levels would benefit from mature and mainstream authoring tools that are intuitive, helping them reach their virtual reality goals more quickly. Furthermore, the accelerated growth of immersive technology can benefit concepts such as the metaverse, in which users can seamlessly experience a digital life and make digital creations supported by the metaverse engine, especially with the support of extended reality (XR) and human-computer interaction (HCI) [13]. Similar to authoring tools, integrated virtual world platforms (IVWPs), such as Roblox, Minecraft, and Fortnite Creative, are used to create games through graphical symbols and objectives instead of code and have a simpler interface, enabling users to create virtual worlds for the metaverse with less support, money, expertise, and skills [14].
On the other hand, software or platforms in the form of authoring tools are very hard to develop because they aim to give creators creative freedom while standardizing underlying technologies, making everything as interconnected as possible, and minimizing the need for creators to be trained or know how to program [14]. In the end, every feature becomes a priority.
This issue has previously been addressed, and design guidelines have been compiled to assist software developers in defining authoring tool projects [15]. The guidelines were meant to help these developers choose and create the requirements and features that authoring tools must fulfill in order to be considered intuitive [16], as well as to provide a way for virtual reality authors to evaluate the intuitiveness of previously developed tools. Figure 1 illustrates the information flow when using the design guidelines in these two scenarios.
Chamusca et al. [15] developed and demonstrated the design guidelines as an artifact, but they have not yet been assessed. According to the Design Science Research (DSR) paradigm, it is important to collect evidence that a proposed artifact is useful. This means showing that the proposed artifact works and does what it is supposed to do, i.e., that it is operationally reliable in achieving its goals [17].
Many open questions remain about the easiest and best way to build the metaverse and to facilitate exchanges of information, virtual goods, and currencies between virtual worlds. However, such design guidelines contribute to the growth of the metaverse through their impact on the development of easier-to-use virtual reality authoring tools and, consequently, on the increase in the volume of virtual world creation. Virtual world engines will become a standard feature of the metaverse as the global economy continues to shift to virtual worlds [14].
This study aims to evaluate the validity of the design guidelines for intuitive virtual reality authoring tools [15] by putting them to the test on an example tool: NVIDIA Omniverse Enterprise. In doing so, we qualitatively verify the use of this artifact in the stage depicted in green in Figure 1. Developed by NVIDIA, Omniverse intends to impact the open metaverse and the 3D internet by becoming a foundation for the creation of industrial metaverse applications in architecture, engineering, manufacturing, scientific computing, robotics, and industrial digital twins [18].
This document is organized as follows: Section 2 describes the materials and methods utilized, Section 3 presents and analyzes the results, and Section 4 provides our conclusions and suggestions for further research.

2. Materials and Methods

This study adopted the Design Science Research paradigm. In addition to a knowledge contribution, effective DSR should make clear contributions to the real-world application environment from which the research problem or opportunity is drawn [17], i.e., the DSR artifact should make an important practical contribution.
Similar to the method used in prior DSR investigations [17,19], we followed six steps: (1) identify the problem; (2) define the solution objectives; (3) design and development; (4) demonstration; (5) evaluation; and (6) communication.
The first four steps were completed by Chamusca et al. [15]. In steps 1 and 2, the problem was identified and the solution objectives were set, which were to propose design guidelines to support the project process of intuitive virtual reality authoring tools. Step 3 was a literature review that followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) principles [20] and was done using a method that includes planning, scoping, searching, assessing, and synthesizing [21]. The outcomes of the literature review were synthesized, and the authors developed an artifact: the fourteen design guidelines described in Table 1.
In step 4, the authors demonstrated a proof-of-concept of the applicability of the proposed design guidelines, testing and revising them through expert reviews, with preliminary versions exposed to researchers in seminars and workshops, such as the Metaverse and Applications Workshop, held at the IEEE International Symposium on Mixed and Augmented Reality (ISMAR) [22]. The methods employed to carry out the remaining steps 5 and 6 in this study are described below.
In step 5, we evaluated the validity of using the fourteen developed design guidelines to verify the intuitiveness of existing VR authoring tools by putting them to the test on an example tool. We started this evaluation by applying the Pearson Correlation Coefficient (PCC) to the Chamusca et al. [15] results to find out how often two guidelines were found together in the studies examined during the systematic literature review (SLR) (Section 3.1). In a later step of this study, this analysis was used along with the Likert-scale questionnaire as another indicator to evaluate the validity of the design guidelines (Section 3.2).
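To make this procedure concrete, the following minimal sketch (not the authors’ original analysis; the binary presence matrix and all names are hypothetical placeholders) shows how such a guideline co-occurrence correlation can be computed:

```python
import numpy as np
import pandas as pd

# Hypothetical encoding of the SLR data: one row per reviewed study,
# one column per design guideline, 1 if the study exhibits the
# guideline and 0 otherwise. Random values stand in for real data.
rng = np.random.default_rng(seed=0)
presence = pd.DataFrame(
    rng.integers(0, 2, size=(30, 14)),
    columns=[f"DG{i}" for i in range(1, 15)],
)

# Pearson Correlation Coefficient between every pair of guidelines:
# values near +1 mean two guidelines tend to appear in the same
# studies, values near -1 mean they rarely co-occur.
pcc = presence.corr(method="pearson")

# Rank the off-diagonal pairs; each pair appears twice because the
# correlation matrix is symmetric.
pairs = pcc.where(~np.eye(14, dtype=bool)).stack().sort_values()
print(pairs.head(6))  # strongest negative correlations
print(pairs.tail(6))  # strongest positive correlations
```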
Then, we conducted an experiment with six engineering students from our Virtual and Augmented Reality for Industrial Innovation Lab (referred to as participants P1–P6). They had no background in programming, no prior experience using the exemplary authoring tool or creating virtual worlds in any form of game engine, and no prior awareness of the fourteen design guidelines. However, some of them mentioned a basic understanding of 3D modeling and/or navigation.
The experiment’s participants were tasked with qualitatively examining the design guidelines while using the NVIDIA Omniverse Enterprise package as an exemplary use case of an authoring tool. Although NVIDIA has not specifically indicated so, for the purposes of this study the Omniverse components are regarded as an authoring tool, since just a subset of its available tools was utilized in our experiment, including only the most relevant features for content creation. The evaluation of a tool is part of its life cycle and, consequently, enters the process of product design and may generate improvements to be implemented. Therefore, the design guidelines must work as a reference for the whole software product design process, including the evaluation stage (Figure 1).
We chose NVIDIA Omniverse as a use case because it helps create virtual worlds and the metaverse through virtual collaboration, 3D simulation, modeling, and architectural design [13,23]. Omniverse’s main features include virtual reality, artificial intelligence that analyzes audio samples and matches them with meta-humans’ facial animation, 3D marketplaces and digital asset libraries, connectors to outside applications like Autodesk Maya and Unreal Engine, and integration of 3D workflows like digital twins [24]. The platform was used, for example, to build a digital twin for BMW that improved the precision of its industrial work by combining real-world auto factories with VR, AI, and robotics experiences [25].
Industry is gaining a lot from the engineering simulation available in this tool, even though it was the creative sector that gave virtual worlds their initial impetus through game development and entertainment studios [26]. For professional teams, NVIDIA Omniverse Enterprise can provide comprehensive and photo-realistic design platforms that enable better designs with fewer expensive mistakes in less time. Teams of designers, engineers, marketers, and manufacturers can work together through the Omniverse Nucleus Cloud, which lets creators in different places share and collaborate in real time on designing 3D scenes for industrial applications, like car design [13,24,26]. However, like other VR authoring tools in the professional context, Omniverse can be seen as complex: it has many sub-products, takes some time to learn the user interface, presents a challenge in finding the most efficient way to use it, and requires research in its documentation [27].
The six participants worked through the hands-on lab Build a 3D Scene and Collaborate in Full Fidelity (Figure 2a), taking turns with three NVIDIA LaunchPad free tryout accounts for Omniverse Enterprise (Figure 2b). LaunchPad gave users access to NVIDIA virtual machines with graphics capabilities that they could use to run Omniverse apps like Create and View.
Figure 3a illustrates the experiment scope, limited to the activities described in the topics Overview, Step #1: Setting Up Your Environment, Step #2: Start Creating, Designer #1, Designer #2, and Designer #3. The goal was to install the application needed to access the virtual machine, learn how to install and run the Create application, and work together to build a 3D scene of a park, as shown in Figure 3b. The same scene could be seen by three people at the same time, each using one of the three accounts that had been requested beforehand. Each participant executed the activity described for one of the Designers. The activities included adding an environment, adjusting lighting, adding 3D assets from a library, adding or changing textures from a library, and organizing the work layers to keep the space organized while avoiding conflicts from more than one person editing the same object at once.
Before using LaunchPad to access Omniverse, the participants read a document that explained each design guideline in detail (Supplementary Materials) [15]. We then answered questions to further clarify the design guidelines’ definitions. After that, the participants took turns using the accounts. All Omniverse LaunchPad sessions were conducted through online remote meetings.
Then, we captured the participants’ insights about the design guidelines using two methods. The first method was a Likert-scale questionnaire comprising fifteen questions (Supplementary Materials). The questionnaire used a numeric scale ranging from totally disagree (1 point) to totally agree (5 points), which participants marked according to their agreement about the existence of a design guideline in Omniverse. Additionally, the participants were questioned about significant observations made throughout the execution of the tutorials, which could include system errors, challenges, and interesting functionalities. The equations shown in Figure 4 are a first proposal for how to estimate a score for an authoring tool’s intuitiveness using the guidelines, where the guideline score corresponds to the average of participants’ answers on the Likert-scale questionnaire (1-5) and the final score of the evaluated tool is the sum of all guideline scores. These equations were applied to the experiment conducted in this study so the answers obtained with the second method could be compared to another indicator.
The final score was compared to the maximum score value in the questionnaire, which equals 70, the product of fourteen guidelines and five points for totally agree. It was assumed that a percentage lower than 50% of this total value would characterize authoring tools that are not very intuitive, while a higher percentage would indicate greater intuitiveness. The questionnaire results (Section 3.2) were also matched to the correlation analysis results (Section 3.1) to confirm the similarities, which were determined by examining the scores of the guidelines with the strongest positive and negative correlations obtained on the questionnaire. These values were obtained to serve as a demonstration of how the guidelines could be used to evaluate a VR authoring tool and to be compared with the results obtained with the second method.
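Since Figure 4 is not reproduced here, the scoring scheme described above can be written out, in our own notation, as:

$$ \mathrm{QS}(\mathrm{DG}_i) = \frac{1}{P}\sum_{p=1}^{P} a_{p,i}, \qquad a_{p,i} \in \{1,\dots,5\}, $$

$$ \mathrm{FinalScore} = \sum_{i=1}^{14} \mathrm{QS}(\mathrm{DG}_i), \qquad \mathrm{IntuitivenessLevel} = \frac{\mathrm{FinalScore}}{14 \times 5} \times 100\%, $$

where $P$ is the number of participants ($P = 6$ in this study) and $a_{p,i}$ is participant $p$’s Likert answer for guideline $\mathrm{DG}_i$.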
The second method was a focus group interview (Section 3.3), in which participants answered eighteen questions on their understanding of the design guidelines and their experience using them to evaluate the exemplary use case (Supplementary Materials). The answers were recorded as audio and transcribed using an online tool, and the transcripts were analyzed in the results section. Finally, we provide a pipeline compiling all the steps carried out in this study, as a guide for anybody wishing to replicate the experiment using different VR authoring tools.
Step 6 entails communicating our findings from this work, in which we demonstrate how an evaluation experiment using a VR authoring tool may be undertaken from the perspective of the design guidelines, therefore assessing the validity of the guidelines as an artifact.

3. Results

In the following sections, we describe our findings.

3.1. Reviewing the design guidelines

Most of the authoring tools found in the systematic review are just proofs of concept, but the design guidelines can encourage the development of mainstream platforms with fewer limitations, democratizing the technology and increasing its maturity. Moreover, the findings in Chamusca et al. [15] contribute to initiating or advancing the creation of ontologies for the development of virtual reality authoring tools in relation to the gap previously identified [9]. The lack of ontologies related to the concepts of virtual reality authoring tools has been discussed, indicating that there are few connected standards for the development of these platforms [9].
Furthermore, the guidelines can positively contribute to the creation of the metaverse through their influence in facilitating the use of the components that make it up. The wide scope of this concept causes a lack of understanding about how it works, leading to the need for a taxonomy proposal for the metaverse [28]. Among the proposed taxonomies, the components thought to be necessary for the realization of the metaverse were hardware, software, and contents. Many similarities were found between the design guidelines developed by Chamusca et al. [15] and the technologies that have recently become issues and interests in the metaverse, mapped as hardware, software, and content [28].
The works reviewed by Chamusca et al. [15] define intuitiveness as related to completing tasks quickly, requiring minimal learning, lowering the entry barrier, reducing information, time, and steps, being appropriate for both expert and non-expert users, being aware of and feeling present in virtual reality, feeling comfortable with the tool, making few mistakes, and using natural movements in virtual reality. Although there is no standard method to evaluate or measure intuitiveness, aspects such as usability, effectiveness, efficiency, and satisfaction may be quantified using well-established questionnaires and methods like the System Usability Scale (SUS) and the After-Scenario Questionnaire (ASQ) [9,29].
The utilization of questionnaires as a well-established method to evaluate software tools was the source of the idea of using the guidelines artifact in association with a questionnaire to help the process of evaluating virtual reality authoring tools. This is also supported by the contribution of the guidelines to the creation of ontologies and taxonomies in the field. There is a lack of standard concepts, methods, and nomenclature not only during the development of VR authoring tools with vastly different formats but also in the application of diverse evaluation techniques to determine their usability [9].
The developed guidelines complement one another and were not conceived separately [15]. Figure 5 shows the correlation analysis performed with the fourteen design guidelines, indicating which pairs of guidelines show up together more or less often in the reviewed works. The three strongest negative and positive correlation values (CV) in Figure 5 were captured, and the pairs of design guidelines that presented these values were highlighted in Table 2 and Table 3. The columns related to questionnaire scores (QS) link the scores for each guideline presented in Figure 6, which will be explained in more detail in Section 3.2, to the correlation analysis.
Examining the cases of Democratization (DG4) and Adaptation and commonality (DG1), the strong positive correlation can be associated with the fact that multiple elements related to DG1 can, consequently, lead to DG4. For example, using the same authoring tool on different devices and accepting different file extensions for the same type of data can help simplify a tool and make it accessible to more users. Movement freedom (DG6) and Immersive authoring (DG9) are codependent, since DG6 cannot exist without DG9, while the inverse can happen. In fact, DG6 complements DG9, literally highlighting the importance of having movement freedom during an immersive authoring experience. Movement freedom (DG6) can be composed of Metaphors (DG5), for example, by moving and positioning objects as if they were in the real world and connecting distant objects by making the physical movement of drawing visible lines between them.
Documentation and tutorials (DG8) are often created using Automation (DG2), for example, through AI assistants that detect when the user is having difficulties moving on with a task and provide smart suggestions to solve that. The use of Metaphors (DG5) can help make Immersive authoring (DG9) easier by turning abstract concepts into tangible tools, such as using buttons on the controllers to reproduce actions similar to what we would do in real life, like pulling the trigger button to grab an item and releasing it to drop it. Finally, Immersive authoring (DG9) and Metaphors (DG5) must have Real-time feedback (DG11) to work properly, enabling content creators to have a what you see is what you get experience, meaning the user has a real view of the virtual environment while composing the scene [15].
Regarding the design guidelines with strong negative correlation, it is remarkable that Immersive feedback (DG10) appears in three of the four correlations. This makes sense, because the definitions brought by DG10 are unique to the immersive context, making the use of some kind of virtual reality device mandatory. Reutilization (DG12), Democratization (DG4), and Adaptation and commonality (DG1) are not guidelines limited by the use of devices, being more general to virtual world creation. Adaptation and commonality (DG1) could indirectly contribute to Immersive feedback (DG10), considering that allowing communication with different types of VR hardware is one of its definitions. Real-time feedback (DG11) and Automation (DG2) are two guidelines connected to a good system infrastructure, and automated functions should have real-time feedback, but the relationship goes no further than that.
These results illustrate that it is possible to assess the existence of guidelines in a tool by understanding how they relate to one another, resulting in an indicator to evaluate the design guidelines artifact, which is done in Section 3.2.

3.2. Likert-scale questionnaire

After executing the tutorial described in the NVIDIA LaunchPad, the participants answered the Likert-scale questionnaire, supported by the detailed document about the design guidelines. Figure 6 presents these answers, with the design guidelines ranked by the average value of their scores, as determined by the equation provided in Figure 4a.
The five guidelines with the highest scores are shown in the following topics, with examples of where the participants observed them, according to their comments:
  • Sharing and collaboration (DG13): the participants could see in real time the updates made by the others, and they finished the activities quicker by splitting the job between more people;
  • Customization (DG3): the participants could easily change the color and texture of the assets imported from the libraries;
  • Documentation and tutorials (DG8): the LaunchPad itself provides a good step-by-step guide for a first try of the tool, offering enough activities for a person to get to know the tool without getting lost in numerous tutorials;
  • Reutilization (DG12): Omniverse Create has libraries of assets with many 3D models and textures available, so the participants did not need to look for them outside the software;
  • Adaptation and commonality (DG1): the participants could see the same file being updated in real time on the Omniverse View, while the scene was being created on Omniverse Create; also, the asset libraries were integrated with the software interface, so they did not need to worry about file extension compatibility or do an extra process to import them.
The five guidelines with the lowest scores were: Immersive authoring (DG9), Democratization (DG4), Immersive feedback (DG10), Movement freedom (DG6), and Visual programming (DG14). We could not run a test using virtual reality during the experiment with the exemplary use case because the NVIDIA LaunchPad did not provide the Omniverse XR tool, which certainly caused the decrease in the score given to the guidelines related to immersiveness: Immersive authoring, Immersive feedback, Movement freedom, and Visual programming. This demonstrates that the participants understood the design guidelines’ definitions since, even though they are not specialists, they were able to recognize that the experience did not fit these descriptions and disagreed with the presence of these guidelines.
We also observed that, similar to the complex game engines frequently used for VR development today, such as Unreal and Unity, in the version of Omniverse Enterprise tried as the exemplary use case, virtual worlds for VR experiences are still developed primarily using 2D screens, not HMDs and other wearables. This differs from what Chamusca et al. [15] saw during the development of the guidelines, since the reviewed works showed that adding virtual reality equipment to the process of creating a VR experience can make it easier to understand and carry out correctly. This indicates that the guidelines were comprehensible and that the participants did not perceive intuitiveness in creating an immersive experience without being allowed to test it along the way.
Democratization (DG4) was at the bottom of the list, probably because Omniverse Enterprise is not free and can only be used with paid NVIDIA accounts or limited free tryout accounts, which was the case in this study. Also, technical problems related to the high latency of the virtual machines faced by some participants probably affected the results, as discussed in Section 3.3. On the other hand, all the participants could complete the activities proposed in the exemplary use case, even though they had never used similar software before.
Using the equation shown in Figure 4b to calculate the sum of all guideline scores and comparing it to the maximum score value in the questionnaire, we obtained a total score of 45 out of a maximum of 70, or 64%. This percentage represents the global level of intuitiveness of a VR authoring tool from the guidelines’ perspective, as experienced by the participants while executing the experiment. This average score is aligned with the statement that the Omniverse tool can be seen as complex, requiring time to understand the user interface, presenting a challenge in finding the most efficient way to use it, and requiring research in its documentation [27]. This contributes to the validity of the design guidelines, since the middling score of 64% obtained from their perspective matches past feedback about the software.
Regarding the correlation between the guidelines, most of them were in line with the results shown in Section 3.1 when the difference between their scores was checked. It was assumed that guidelines with strong positive correlation values would have lower difference values, while those with strong negative correlation values should have high difference values. Table 2 and Table 3 show that the design guideline pairs with strong positive correlation values had score differences of around 0.5 to 1.5, while the pairs with strong negative correlation values had score differences of around 2.5 to 3, which matched the expectation. However, an unexpected score difference of 2.5 in Table 2 and 0 in Table 3 draws attention, having the guideline Democratization (DG4) as a common factor.
This indicates that some unexpected occurrence connected to the Omniverse experience produced a mismatch between this guideline and the others, most likely the same incident that led to this guideline’s low score on the Likert-scale questionnaire. During the focus group interview, discussed in Section 3.3, participants talked about problems like the program taking too long to respond to commands and difficulty installing the virtual machine. Such problems are not directly related to the usability of the tool, but rather to the specific circumstances of each participant, such as an incompatible internet connection. This may have lowered the Democratization (DG4) score to 1.5, instead of the expected higher score closer to the 4 points of Adaptation and commonality (DG1), with which it has a strong positive correlation of 0.75, leading to the high score difference of 2.5. Technical issues, in conjunction with the absence of Omniverse XR, brought the Democratization (DG4) score close to the low results of the immersiveness-related guidelines, Immersive feedback (DG10) being one of them with 1.5 points; DG4 has a strong negative correlation of -0.63 with DG10, yet a score difference of 0 in this experiment.
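A small sketch of this cross-check, using only the scores and correlation values quoted above (the difference thresholds of 1.5 and 2.5 are our own reading of Tables 2 and 3, not values fixed by the authors), reproduces the anomaly around DG4:

```python
# Expectation: strongly positively correlated pairs should have
# similar questionnaire scores; strongly negatively correlated
# pairs should have distant scores. Both DG4 pairs violate it,
# matching the mismatch discussed in the text.
scores = {"DG1": 4.0, "DG4": 1.5, "DG10": 1.5}  # questionnaire averages
strong_pairs = [("DG1", "DG4", +0.75), ("DG4", "DG10", -0.63)]

for a, b, cv in strong_pairs:
    diff = abs(scores[a] - scores[b])
    as_expected = (diff <= 1.5) if cv > 0 else (diff >= 2.5)
    print(f"{a}-{b}: CV={cv:+.2f}, score difference={diff:.1f},"
          f" {'as expected' if as_expected else 'mismatch'}")
```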

3.3. Focus group interview

The participants’ responses obtained in the focus group interview are examined in the following sections.

3.3.1. The exemplary use case Omniverse tool

During the execution of the experiment, the participants encountered both obstacles and opportunities associated with the activities proposed in Omniverse LaunchPad. Four of the participants said that applying textures to small areas was the hardest part. This includes actions like applying grass to a small piece of the 3D ground mesh. Three participants said that the software took too long to respond to commands, which could be caused by technical problems like an incompatible internet connection.
Only one participant mentioned difficulty starting the program and following the LaunchPad step-by-step instructions for installing the virtual machine. Two participants had difficulties understanding how to navigate inside the 3D environment, which includes rotating the camera and zooming in and out on objects, while two other participants considered this an easy and intuitive task.
Omniverse LaunchPad provided links to external videos along the tutorial with more details on some features, such as applying textures to meshes. Possibly, participants who had difficulty with this function did not notice these links in the explanation or limited themselves to following only the instructions on the main page with the activities. Four participants said that importing 3D assets from the Sketchfab library and placing them in the scene was one of the easiest things to do. Another participant highlighted how easy it was to set the environment’s illumination for the skybox using a slide button that changed the position of the sun in real time.
When asked if they had already used a tool similar to the exemplary use case Omniverse, three participants mentioned they had already had contact with parametric 3D modeling software (SolidWorks), three cited games like The Sims and Minecraft as facilitators, and only one had already had brief contact with a game engine (Unity), but with the intention of creating a 2D mobile application. We found that participants who had previous experience with software or games that required interaction and movement in a 3D environment found Omniverse easier to use because the controls are usually very similar.

3.3.2. Guidelines identification

Along with the activity to be carried out for the exemplary use case Omniverse, participants were provided with a detailed document describing the fourteen design guidelines’ definitions and a Likert-scale questionnaire that asked whether they agreed or disagreed with the presence of the guidelines in association with the software functions used in the activity. To answer the questionnaire efficiently, most of the participants (four) chose to take notes as they followed the LaunchPad tutorials, using the guidelines’ document as support during this process. Only two participants did not take notes, although they did consult the guidelines’ definitions in order to answer the questionnaire coherently.
Despite being instructed to identify the presence or absence of the design guidelines in the tool under test, the participants were not told how to do it. When asked about their method for associating the guidelines with Omniverse, the participants answered that they focused on identifying the steps they found complex or easy to accomplish and connecting them with the definitions of the guidelines. Most did it in a segmented way, i.e., after completing each step instructed by LaunchPad, so that all the details were clear in their memories. Another way of highlighting the presence or absence of a guideline in the experimental tool was the association with the examples given in the definitions of the guidelines; if an example was directly found, positive points were given to the guideline.
Participants also mentioned that some guidelines were obvious while others required more reflection, particularly on whether their presence or absence would be limited to a specific stage of the activity or was truly part of Omniverse’s characterization as a tool. Among the guidelines that were easier to identify were: Automation, Customization (cited three times), Democratization, Movement freedom, Documentation and tutorials (cited twice), Real-time feedback (cited twice), Reutilization, Sharing and collaboration, and Visual programming (cited twice). Listed below are participants’ statements that demonstrate their reasons for pointing out these guidelines as easy to identify:
  • “In group dynamics and collaboration, I could see the almost instantaneous change of material, color, or movement made by other people” - (P1, about Real-time feedback);
  • "This guideline did not exist, and because of that, I had a lot of difficulty with the slowness to perform some actions" - (P2, about Real-time feedback);
  • “I pointed out this guideline because I could not find it during the experiment, so it was very easy to identify” - (P3 and P6, about Visual programming);
  • “I was impressed with what a person is able to do using Omniverse through a virtual machine accessed by a mere notebook, since even using a computer with a good GPU, the graphics processing of programs like this takes a long time” - (P6, about Democratization);
  • “The tool has a library with assets you can place and reuse in the environment” - (P3, about Reutilization).
Among the guidelines considered more difficult to identify, the following were mentioned: Metaphors, Movement freedom (cited twice), Optimization and diversity balance, Immersive authoring (cited twice), Immersive feedback, Sharing and collaboration, and Visual programming. Below are some of the participants’ statements that show their motivations for pointing out these guidelines as difficult to identify:
  • “I had a lot of difficulty answering the question about this guideline. I had to read its description several times to find out if the LaunchPad would apply with the definition” - (P1, about Immersive authoring);
  • “Even interacting with an open environment, I felt a little limited, so I kept questioning whether I really had this movement freedom or if it was a freedom within the limitation of using the software through a 2D screen” - (P1, about Movement freedom);
  • "I found it a little subjective; I could not say to what extent we can consider that the process was optimized or not, and whether it was complex or not" - (P2, about Optimization and diversity balance);
  • "The most difficult for me were the two that involved immersion, because I believe it is subjective to identify if I am immersed in that environment; what may be immersive for me may not be immersive for someone else, and vice versa" - (P3, about Immersive authoring and Immersive feedback);
  • "I had to read the guideline a few times to have a better understanding when answering, due to my lack of knowledge in the area" - (P4, about Visual programming);
  • “I could not say if that was easy or not, because I did not have much experience with collaboration in other similar applications and software, so Omniverse collaboration might not be efficient in front of the guideline” - (P5, about Sharing and collaboration).
Guidelines classified as features or requirements were equally mentioned as easy or difficult to identify, so no conclusion can be drawn on that. However, the Movement freedom, Sharing and collaboration, and Visual programming guidelines were mentioned both as easy and as difficult to identify by different participants, which may indicate ambiguity in their definitions and, consequently, a lack of standards to determine the situations in which these guidelines apply. This was clear from what the participants said, since they were not sure about the meaning of some of the terms used in the guidelines’ definitions. Immersiveness, for example, was not directly linked to virtual reality experiences by the participants, even though all of the examples in the definition of the guidelines are linked to this aspect. This can also be attributed to the participants’ lack of experience with the area and its technical terms.
The lack of experience may also be the reason why the guidelines with the highest scores on the Likert-scale questionnaire (Sharing and collaboration, Customization, Documentation and tutorials, and Reutilization) were described as easy to identify, while four of the guidelines with the lowest scores (Immersive authoring, Immersive feedback, Movement freedom, and Visual programming) were described as difficult to identify. This suggests that even though the participants were able to discern that the low-score guidelines were not featured in the tool, they still had doubts when responding to the questionnaire, indicating that these guidelines were challenging to recognize. The inverse is true of the guidelines with the highest scores, which were easily observable throughout the execution of the experiment and could, thus, be better evaluated. This raises the question of whether the difficult-to-identify guidelines had subjective descriptions, as many of the participants claimed, or whether the fact that the tool did not present the examples stated in their definitions led to a lack of clarity in the participants’ interpretation, since they were unable to see the concepts illustrated in the examples.
From this perspective, the Democratization guideline stands out because, unlike the others with low scores on the questionnaire, it was described as easy to identify, continuing the pattern of inconsistencies revealed throughout the experiment. Comparing Democratization’s score to the correlation analysis revealed unexpected findings, which could be attributed to the fact that the tool is not free and that technical issues occurred throughout the test. Given that not all participants experienced technical difficulties during the experiment, P6’s generally positive comment in this section may add to the prior claims. In addition, the fact that a guideline is considered easy to identify should not be correlated with its presence, as participants P3 and P6 made evident in their comments regarding the absence of Visual programming.

3.3.3. Guidelines strengths and weaknesses

The participants were then asked about the strengths and weaknesses of using the guidelines to evaluate the intuitiveness of existing authoring tools for virtual reality experiences (Figure 1). Three participants said that the inclusion of practical examples in the description of the guidelines was the greatest strength. The examples made it feasible to compare the assessed tool’s functionalities to those of other software or apps throughout the experiment, even when part of the general description was not very clear. Moreover, the titles were cited as strengths, since they allowed for rapid reference to what each guideline defines.
Participants pointed out that one of the weaknesses was the use of unusual words, such as haptic, derived from the field’s technical terminology. Other examples were the subjectivity of some of the definitions, the lack of visual references, such as pictures, to complement the definitions of the guidelines, and the lack of delimitation to clarify the difference between guidelines with similar names.
Concerning the presented set of guidelines, all participants agreed that it was appropriate and complete. They did not suggest any additional guidelines to be added to the list, although some believe that as technology evolves, new guidelines may be necessary.
According to all participants, the guidelines have different weights in terms of intuitiveness. This indicates that the presence of guidelines with a higher weight makes a tool more intuitive, whereas those with a lower weight have less of an effect. However, there was no consensus among the participants about which guidelines would have higher or lower weights, so this topic should be treated as future research. Three of the participants stated that the relevance of the guidelines varies based on the context in which a tool is being assessed. For instance, if the experience is collaborative or individual, or if the technology includes head-mounted displays and other VR peripherals, the relevance of certain guidelines changes.
All participants believed that most of the guidelines were self-explanatory. However, some of them are subjective, making it difficult to use them to evaluate VR authoring tools, as their existence or absence can be understood differently by each individual. Nevertheless, all participants indicated they would utilize the guidelines to evaluate the intuitiveness of other VR authoring tools, because the guidelines helped them comprehend the potential of using Omniverse and how it could be applied. One of the participants believes that using the guidelines to evaluate other authoring tools will also contribute to the improvement of their definitions. Two others said that the guidelines can assist them in finding a tool that satisfies the requirements for the development of a particular project.

3.3.4. Suggested changes for the guidelines’ future

In an effort to improve the concept of the guidelines, participants were asked to suggest changes and future applications. The majority of proposed modifications involved rearranging and categorizing the guidelines, including, for instance, reducing their number and merging those with comparable concepts. To guide evaluators through a certain sequence of the guidelines list when assessing an authoring tool, it was suggested that the guidelines be reorganized into those to be judged before testing a tool and those to be judged during the experiment. Moreover, the guidelines might be categorized as applicable to the evaluation of 2D experiences, virtual reality immersion, or both. Finally, one participant disagreed with the suggestions to make modifications, because he believed it was essential to analyze each guideline as it is currently written.
For future applications of the guidelines, the participants proposed replicating this experience, primarily by altering the composition of the evaluation group and the software tools evaluated. For instance, the experiment might be conducted with a group of industry specialists, such as programmers and VR experience designers, in order to obtain more technical input, since they are also the target audience for the guidelines’ application as a development guide for new VR authoring tools. The present investigation selected a group of participants with different degrees of experience, which may have led to variations in scores and interpretations of the guidelines’ principles. The same test could be administered to individuals of different generations, such as children, teenagers, and the elderly, in order to compare their findings based on their technological experiences.
Participants also suggested conducting more extensive testing with each of the guidelines individually, examining specific experiences to identify them in tools, and then returning to the collective test. Regarding altering the software tools evaluated, identifying those that are recognized as intuitive on the market can help to confirm whether or not the guidelines are effective, since high scores would be expected. Reproducing the experiment using a tool that serves a different purpose, or in a situation that enables the experience not only on 2D screens but also on head-mounted displays, may illustrate that the guidelines are applicable to a wide range of authoring tools.
This leads to a discussion of the consequences of not being able to utilize head-mounted displays during the current experiment. Even though they knew what the immersiveness guidelines meant, all of the participants reported that it was difficult to evaluate the tool based on these guidelines. If everyone had tested the tool in virtual reality, their responses about Immersive authoring, Immersive feedback, Movement freedom, and Metaphors would have been different. Nonetheless, the majority of them took this into account when answering the questions. Figure 6 demonstrates that these guidelines earned low scores.
All participants were aware that, in the context of the experiment, the exemplary use case Omniverse lacked immersive elements, which resulted in a lower score. This demonstrates the effectiveness of the guidelines for the evaluation of existing VR authoring tools. In addition, even though the intuitive creation of virtual reality experiences is the final objective of the design guidelines, a significant portion of this creative process consists of developing the virtual world on 2D screens. Yet, the literature review indicates that the incorporation of virtual reality devices throughout the creation of the experience makes the process more intuitive and straightforward to implement, since the author will have the same experience as their audience along the way.

3.3.5. Further considerations

Throughout the experiment, the Internet connection, the configuration of the virtual machine, and the execution of some software operations presented technical issues or took too long for certain participants. The participants were asked if these concerns affected their overall impressions of the experiment. Three participants claimed that they did not encounter any technical issues or that the issues were minor and had no effect on their performance during the experiment. Two more participants reported relevant issues during the experiment, but they did not believe these were related to the program’s adherence to the guidelines. Instead, they believed the difficulties were due to their own circumstances. For instance, P4’s poor internet connection made access to the virtual machine unstable and impacted the video call communication with the interviewers.
On the other hand, P6 mentioned a delay in the software’s response to his actions, such as zooming in and out and updating reflections and shadows when adding objects to the scene, which we believe may have affected his perceptions of the Real-time feedback guideline, although he did not specifically mention this connection.
Only one participant made a connection between the technical issues and his perception of the guidelines. P3 had problems installing the virtual machine to access Omniverse, which impacted his analysis of the Democratization guideline. For him, this meant that LaunchPad might not function properly on all computers and that the instructions lacked sufficient information to help him fix the issue. Even P1, who indicated minor difficulty with this step, stated that he was "self-taught" in how to accomplish it.
At the end, the participants offered additional observations about the entire experience, from utilizing Omniverse and reading the list of guidelines to responding to the Likert-scale questionnaire and taking part in the focus group interview. Throughout the experiment in collaborative mode, one participant missed seeing who was working with him, since the tool did not display the person’s name, the number of coworkers in the same environment, their position in the scene, or the object they were modifying at the moment. We speculate that this indirectly affected his opinion of the Sharing and collaboration guideline.
The participants also stated that there was little information about errors in LaunchPad and that it was difficult to determine their causes. Some of them were unable to perform simple operations such as undo (ctrl+z) but could not explain why. The training also neglected to offer users fundamental information about how to use the program before beginning the activity, such as where to alter the camera speed and screen size for navigating the scene. Such information would have increased user comfort.

3.4. The pipeline for using the design guidelines to evaluate existing VR authoring tools

Figure 7 illustrates a pipeline containing a compilation of all the steps taken in this study to evaluate the intuitiveness of an existing VR authoring tool in accordance with the Design Science Research paradigm [17], whereas Figure 8 illustrates how these steps are applied as a guide for anyone who wishes to replicate the experiment using different VR authoring tools.
Figure 8 illustrates the step-by-step process for evaluating the intuitiveness of an existing VR authoring tool using the design guidelines artifact. Different evaluators may use different-sized groups to test the tool under evaluation; in the present study, six participants were utilized (1). The list of the fourteen design guidelines’ definitions should be distributed to the participants, as done here and described in Section 2, so that they become familiar with them (2). Participants must have access to the authoring tool that will be tested and evaluated in order to complete an activity or series of tasks that demonstrate the tool’s functionality (3). Then, the Likert-scale questionnaire can be filled out independently by each participant based on their opinions of the tool’s features (4). Participants should consult the design guidelines whenever they are uncertain about how to complete the questionnaire (5).
The questionnaire responses must then be analyzed so that a ranking of the design guidelines’ scores and a global level of intuitiveness can be determined. To obtain these products, the answers from the Google Forms must be exported to an Excel spreadsheet and run through the equations in Figure 4 (6). The scores of the guidelines that form pairs with strong positive or negative correlations should be highlighted, as shown in Tables 2 and 3, and compared to see if the tool exhibits the expected behavior (7). The final findings of the evaluation should include the ranking, the comparison with the correlation values, and the global intuitiveness level, which, combined, should reflect the intuitiveness of the evaluated VR authoring tool (8).
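As an illustration of step (6), a minimal sketch, assuming the questionnaire export has been saved as a spreadsheet with one column per guideline (the file name and column layout are hypothetical):

```python
import pandas as pd

# Hypothetical export: one row per participant, columns DG1..DG14,
# each cell a Likert answer from 1 (totally disagree) to 5 (totally agree).
responses = pd.read_excel("questionnaire_responses.xlsx")

guideline_scores = responses.mean()            # per-guideline average (Figure 4a)
ranking = guideline_scores.sort_values(ascending=False)

final_score = guideline_scores.sum()           # sum of all guideline scores (Figure 4b)
max_score = 5 * len(guideline_scores)          # 14 guidelines x 5 points = 70
intuitiveness = 100 * final_score / max_score  # global intuitiveness level, in percent

print(ranking)
print(f"Final score: {final_score:.1f}/{max_score} ({intuitiveness:.0f}%)")
```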
As the primary objective of this study was to assess the validity of the design guidelines, we utilized the focus group interview to obtain more in-depth qualitative data on them. Future experiments utilizing different VR authoring tools do not need to include focus group interviews in their process flow.

4. Conclusions

We demonstrated how to conduct an evaluation experiment from the perspective of the design guidelines using an existing VR authoring tool, thereby analyzing the guidelines’ validity as an artifact. The proposed artifact is valuable, according to Design Science Research, because the design guidelines for virtual reality authoring tools created by Chamusca et al. [15] perform what they are supposed to do and are operationally reliable in completing their goals. As a significant contribution to the field, we produced a pipeline encapsulating all of the steps taken in this study, which may be used as a guide for anyone desiring to recreate the experiment using the artifact in a different VR authoring tool.
The study concentrated on illustrating how to use the design guidelines rather than offering a wide range of quantitative data analysis. Despite the fact that the primary goal of the experiment was to qualitatively assess the validity of the design guidelines in evaluating existing VR authoring tools, the quantitative results showed that the exemplary use case does not have a high level of intuitiveness, receiving a score of 64%, which was supported by previous feedback from users who tested the NVIDIA Omniverse Enterprise tool [27].
The correlation analysis between the guidelines sought to determine their level of interdependence, as the guidelines did not exist in isolation in any of the VR authoring tools that might be evaluated. As a result, the correlations were employed as a cross-check indicator when analyzing the findings of the Likert-scale questionnaire and focus group interviews. The cross-check confirmed that the majority of the guideline scores behaved as predicted and that the ranking obtained using the Likert-scale questionnaire was consistent with the Omniverse functionalities.
The participants understood the definition of the guidelines and could correctly identify their existence during the experiment. The Likert-scale questionnaire provided a simple method of gathering participants’ perspectives on which guidelines they agreed or disagreed about having found in the tool. Later in the focus group session, they were asked to reaffirm their viewpoint on which guidelines were easier or more difficult to identify. Comparing the responses, the easy-to-identify guidelines were connected with those that obtained the highest scores, and the difficult-to-identify guidelines with those that received low scores. This outcome was consistent with the profile of the group used in this experiment, which lacked technical capabilities and indicated that the participants’ evaluation was carried out mostly using the practical examples supplied by the guidelines’ definitions as direct references.
As a result, everything that the participants observed in the tool that was also presented in the definitions as a practical example acted as a motivator for a higher score, while the opposite also occurred. Therefore, when an example was not displayed in Omniverse, the definition of the guideline became more subjective in the participants’ eyes, because it could not be viewed in an illustrated and practical manner. This is supported by the participants’ statements highlighting, as a weakness, that the guidelines do not offer examples illustrated with figures.
The choice of a use case that is not explicitly regarded as a VR authoring tool by its developers is a limitation of this experiment, although it is important to account for the lack of ontologies and taxonomies in this domain. While many programs have all of the qualities of an authoring tool, such as the IVWPs, they are not frequently declared as such. The participants’ inability to experiment with creating virtual worlds using VR devices also influenced their perceptions and was a limitation of this study. The profile of the participant group used to judge the guidelines can also be viewed as a limitation: while the participants’ lack of knowledge allows for testing how well defined the guidelines are, to the point of being clear to professionals outside the VR area, it can also lead to feedback on subjectivity in the definitions of guidelines that contain more technical terms.
The experiment’s goal was to establish a pipeline through a qualitative review of the steps performed, rather than to provide robust quantitative data. Given the small sample of participants (six) and the fact that we assessed only one authoring tool, the numerical data offered in this study can be viewed as a limitation. It should not be interpreted as invalidating the experiment, however, but as an opportunity for further research.
In terms of future research, we propose replacing the group of evaluators with VR industry practitioners, such as expert programmers and designers of virtual reality experiences, to gather additional technical input. Furthermore, we recommend experimenting with other VR authoring tools, or in a context that enables the experience to be enjoyed not only on 2D screens but also on head-mounted displays. Comparing the findings of the guideline-based evaluation with established usability instruments, such as the System Usability Scale (SUS) and the After-Scenario Questionnaire (ASQ), could demonstrate its efficacy as a method. The Omniverse tool can also be assessed again to test whether the score given for the design guidelines is restricted to the activity outlined in the LaunchPad, as well as to examine its potential for metaverse creation and industrial applications.
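As a concrete reference for that comparison, the SUS score follows Brooke’s standard formula: odd items contribute the response minus 1, even items contribute 5 minus the response, and the total is scaled by 2.5. A minimal sketch is shown below; the ten responses belong to a hypothetical participant, not to this study.

```python
# Standard SUS scoring (Brooke): 0-100 scale from ten Likert items (1-5).
def sus_score(responses):
    """responses: ten answers (1-5) in SUS item order."""
    assert len(responses) == 10
    odd = sum(r - 1 for r in responses[0::2])   # items 1, 3, 5, 7, 9
    even = sum(5 - r for r in responses[1::2])  # items 2, 4, 6, 8, 10
    return (odd + even) * 2.5

# hypothetical participant, not study data
print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 3]))  # 77.5
```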
In the future, the guidelines’ definitions could be improved by reorganizing the list format, using figures to illustrate the textual definitions, and further explaining the technical terms. Additional tests with the design guidelines are recommended in order to assign them different weights, reflecting differing relevance to intuitiveness among the fourteen listed today. In addition, reinforcing the suggestion of Chamusca et al. [15] for future research, the guidelines should be adopted to steer the development of new VR authoring tools from the start of a software project, which can in turn inform their use in the evaluation of existing ones. Furthermore, as immersive technologies improve in terms of hardware and software, as well as products and services, the guidelines’ definitions and practical examples must evolve over time.
The design guidelines worked successfully even for guiding professionals from outside the field in their first contact with tools like Omniverse. This study revealed that the design guidelines can be effective in assisting not only the development of new intuitive VR authoring tools but also the evaluation of the intuitiveness of existing ones. As a result, the design guidelines contribute to democratizing tools for authoring virtual worlds to be experienced in virtual reality, with a direct impact on the creation of ontologies and the faster dissemination of technology trends such as the metaverse, as more people from various professional backgrounds become capable of creating it.

Supplementary Materials

The following supporting information can be downloaded at https://bityli.com/I45G2: (1) Design Guidelines for Intuitive Virtual Reality Authoring Tools - definitions list; (2) Likert-scale questionnaire; (3) Design guidelines ranking and intuitiveness global level; (4) Focus group interview questions.

Author Contributions

Conceptualization, I.L.C. and I.W.; methodology, I.L.C. and I.W.; validation, I.W., A.L.A.J., T.B.M. and C.V.F.; formal analysis, I.W. and A.L.A.J.; investigation, I.L.C.; resources, I.W., A.L.A.J. and T.B.M.; data curation, I.L.C.; writing—original draft preparation, I.L.C.; writing—review and editing, I.W., A.L.A.J., T.B.M. and C.V.F.; visualization, I.L.C.; supervision, I.W., A.L.A.J. and T.B.M.; project administration, I.L.C. and I.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are fully available in the Supplementary Materials.

Acknowledgments

The authors are thankful to the participants of the Virtual and Augmented Reality for Industrial Innovation Lab for taking part in the experiment, to NVIDIA for making the Omniverse tool accessible to our study, to Fabrício Vinicius de Jesus Santos for insightful comments, and to Lucas Arnaldo Alves for creating the Correlation Graphic presented as Figure 5. We also acknowledge the financial support of the National Council for Scientific and Technological Development (CNPq); Ingrid Winkler is a CNPq technological development fellow (Proc. 308783/2020-4).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cassola, F.; Pinto, M.; Mendes, D.; Morgado, L.; Coelho, A.; Paredes, H. A novel tool for immersive authoring of experiential learning in virtual reality. 2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW). IEEE, 2021, pp. 44–49.
  2. Nebeling, M.; Speicher, M. The trouble with augmented reality/virtual reality authoring tools. 2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct). IEEE, 2018, pp. 333–337.
  3. Zhang, L.; Oney, S. Flowmatic: An immersive authoring tool for creating interactive scenes in virtual reality. Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology, 2020, pp. 342–353.
  4. Sherman, W.R.; Craig, A.B. Understanding Virtual Reality: Interface, Application, and Design; Morgan Kaufmann, 2018.
  5. Cai, Y.; See, S. GPU Computing and Applications; Springer, 2015.
  6. Ipsita, A.; Li, H.; Duan, R.; Cao, Y.; Chidambaram, S.; Liu, M.; Ramani, K. VRFromX: From scanned reality to interactive virtual experience with human-in-the-loop. Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, 2021, pp. 1–7.
  7. Krauß, V.; Boden, A.; Oppermann, L.; Reiners, R. Current practices, challenges, and design implications for collaborative AR/VR application development. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 2021, pp. 1–15.
  8. Yigitbas, E.; Klauke, J.; Gottschalk, S.; Engels, G. VREUD - an end-user development tool to simplify the creation of interactive VR scenes. 2021 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC). IEEE, 2021, pp. 1–10.
  9. Coelho, H.; Monteiro, P.; Gonçalves, G.; Melo, M.; Bessa, M. Authoring tools for virtual reality experiences: A systematic review. Multimedia Tools and Applications 2022, 1–24.
  10. Zikas, P.; Papagiannakis, G.; Lydatakis, N.; Kateros, S.; Ntoa, S.; Adami, I.; Stephanidis, C. Immersive visual scripting based on VR software design patterns for experiential training. The Visual Computer 2020, 36, 1965–1977.
  11. Lynch, T.; Ghergulescu, I. Review of virtual labs as the emerging technologies for teaching STEM subjects. INTED2017 Proceedings, 11th International Technology, Education and Development Conference, 2017, Vol. 6, pp. 6082–6091.
  12. O’Connor, E.A.; Domingo, J. A practical guide, with theoretical underpinnings, for creating effective virtual reality learning environments. Journal of Educational Technology Systems 2017, 45, 343–364.
  13. Wang, Y.; Su, Z.; Zhang, N.; Xing, R.; Liu, D.; Luan, T.H.; Shen, X. A survey on metaverse: Fundamentals, security, and privacy. IEEE Communications Surveys & Tutorials 2022.
  14. Ball, M. The Metaverse: And How It Will Revolutionize Everything; Liveright Publishing, 2022.
  15. Chamusca, I.L.; Ferreira, C.V.; Murari, T.B.; Apolinario, A.L.; Winkler, I. Towards sustainable virtual reality: Gathering design guidelines for intuitive authoring tools. Sustainability 2023, 15, 2924.
  16. Pressman, R.S. Software Engineering: A Practitioner’s Approach; Palgrave Macmillan, 2021.
  17. Gregor, S.; Hevner, A.R. Positioning and presenting design science research for maximum impact. MIS Quarterly 2013, 337–355.
  18. NVIDIA and Partners Build Out Universal Scene Description to Accelerate Industrial Metaverse and Next Wave of AI. Available online: https://nvidianews.nvidia.com/news/nvidia-and-partners-build-out-universal-scene-description-to-accelerate-industrial-metaverse-and-next-wave-of-ai (accessed on 22 February 2023).
  19. Peffers, K.; Tuunanen, T.; Rothenberger, M.A.; Chatterjee, S. A design science research methodology for information systems research. Journal of Management Information Systems 2007, 24, 45–77.
  20. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. Systematic Reviews 2021, 10, 1–11.
  21. Booth, A.; Sutton, A.; Clowes, M.; Martyn-St James, M. Systematic Approaches to a Successful Literature Review; Sage, 2021.
  22. Chamusca, I.L.; Santos, F.V.D.J.; Ferreira, C.V.; Murari, T.B.; Junior, A.L.A.; Winkler, I. Evaluation of design guidelines for the development of intuitive virtual reality authoring tools: A case study with NVIDIA Omniverse. 2022 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct). IEEE, 2022, pp. 357–362.
  23. Ning, H.; Wang, H.; Lin, Y.; Wang, W.; Dhelim, S.; Farha, F.; Ding, J.; Daneshmand, M. A survey on metaverse: The state-of-the-art, technologies, applications, and challenges. arXiv preprint arXiv:2111.09673, 2021.
  24. CES 2022: Nvidia unleashes Omniverse as metaverse design push gains steam. Available online: https://www.fierceelectronics.com/embedded/ces-2022-nvidia-unleashes-omniverse-metaverse-design-push-gains-steam (accessed on 17 January 2023).
  25. Xu, M.; Ng, W.C.; Lim, W.Y.B.; Kang, J.; Xiong, Z.; Niyato, D.; Yang, Q.; Shen, X.S.; Miao, C. A full dive into realizing the edge-enabled metaverse: Visions, enabling technologies, and challenges. IEEE Communications Surveys & Tutorials 2022.
  26. NVIDIA Omniverse: The Useful Metaverse. Available online: https://www.forbes.com/sites/karlfreund/2022/10/07/nvidia-omniverse-the-useful-metaverse/?sh=1e270893359a (accessed on 17 January 2023).
  27. Building the Metaverse using NVIDIA Omniverse. Available online: https://www.deltatre.com/insights/building-metaverse-using-nvidia-omniverse (accessed on 17 January 2023).
  28. Park, S.M.; Kim, Y.G. A metaverse: Taxonomy, components, applications, and open challenges. IEEE Access 2022, 10, 4209–4251.
  29. Brooke, J. SUS: A retrospective. Journal of Usability Studies 2013, 8, 29–40.
Figure 1. The design guidelines’ artifact may be used at two stages of the life cycle of a VR authoring tool (adapted from Chamusca et al. [15]).
Figure 2. (a) Hands-on lab; (b) NVIDIA LaunchPad interface for Omniverse Enterprise.
Figure 3. (a) Tutorial steps for the Create platform; (b) Screenshot of the Create interface.
Figure 4. (a) Average of participants’ answers on the Likert-scale questionnaire (1–5); (b) Sum of all guidelines scores.
Figure 5. Applying the Pearson Correlation Coefficient (PCC) to the fourteen design guidelines.
Figure 6. Average value of each guideline’s determined score for the exemplary use case.
Figure 7. The pipeline and the elements that compose it (Supplementary Materials).
Figure 8. Process flow of the pipeline application.
Table 1. Design guidelines (DG) list, Abbreviation code (AC), and Frequent terms [15].
DG (AC) | Frequent terms
Adaptation and commonality (DG1) | interoperability, exchange, data type, patterns, multiple, modular, export/import process, hardware compatibility
Automation (DG2) | inputs, artificial intelligence, algorithms, translation, reconstruction, active learning, human-in-the-loop, neural systems
Customization (DG3) | control, flexibility, interactions, manipulate, change, transformation, adapt, modify, programming, editing, modification
Democratization (DG4) | web-based, popularization, open-source, free assets, A-FRAME, WebGL, deployment
Metaphors (DG5) | natural, organic, real life, real-world, physicality, abstraction, embodied cognition
Movement freedom (DG6) | manipulation, gestures, position, unrestricted, selection, interaction, flexible, free-form
Optimization and diversity balance (DG7) | trade-off, less steps, fast, complete, limitation, effective, efficient, simplify, focus, priorities
Documentation and tutorials (DG8) | help, support, fix, step-by-step, learning, practice, knowledge, instructions
Immersive authoring (DG9) | what-you-see-is-what-you-get (WYSIWYG), engagement, 3D modeling, programming, 3D interaction, paradigm, creation, HMD
Immersive feedback (DG10) | visual, haptic, hardware, multi-sensory, physical stimuli, senses
Real-time feedback (DG11) | simultaneous, latency, WYSIWYG, synchronization, preview, immediate, run-mode, liveness, compilation, direct
Reutilization (DG12) | retrieve, assets, objects, behaviors, reusable, patterns, store, library, collection, search
Sharing and collaboration (DG13) | multi-user, multi-player, remote interaction, community, simultaneous, communication, network, workspace
Visual programming (DG14) | primitives, logic, data-flow, nodes, blocks, modular, prototype, graphic
Table 2. Design guideline pairs with the strongest positive correlation.
CV (Pearson correlation coefficient) | Design guideline pairs | QS (questionnaire scores) | QS diff.
0.75 | Democratization (DG4) and Adaptation and commonality (DG1) | 1.5 (DG4) and 4 (DG1) | 2.5
0.60 | Movement freedom (DG6) and Immersive authoring (DG9) | 1.5 (DG6) and 2.5 (DG9) | 1
0.60 | Movement freedom (DG6) and Metaphors (DG5) | 1.5 (DG6) and 3.5 (DG5) | 2
0.58 | Documentation and tutorials (DG8) and Automation (DG2) | 4.5 (DG8) and 3 (DG2) | 1.5
0.58 | Metaphors (DG5) and Immersive authoring (DG9) | 3.5 (DG5) and 2.5 (DG9) | 1
0.58 | Real-time feedback (DG11) and Immersive authoring (DG9) | 4 (DG11) and 2.5 (DG9) | 1.5
0.58 | Real-time feedback (DG11) and Metaphors (DG5) | 4 (DG11) and 3.5 (DG5) | 0.5
Table 3. Design guideline pairs with the strongest negative correlation.
CV (Pearson correlation coefficient) | Design guideline pairs | QS (questionnaire scores) | QS diff.
-0.65 | Immersive feedback (DG10) and Reutilization (DG12) | 1.5 (DG10) and 4.5 (DG12) | 3
-0.63 | Immersive feedback (DG10) and Democratization (DG4) | 1.5 (DG10) and 1.5 (DG4) | 0
-0.52 | Immersive feedback (DG10) and Adaptation and commonality (DG1) | 1.5 (DG10) and 4 (DG1) | 2.5
-0.52 | Real-time feedback (DG11) and Automation (DG2) | 4 (DG11) and 3 (DG2) | 1
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.