Preprint
Review

This version is not peer-reviewed.

A Heuristic Evaluation of the Next Generation of AR Glasses Across Four Use Cases

Submitted: 22 December 2025

Posted: 25 December 2025


Abstract
The latest generation of AR glasses is now entering the market, with models such as the Meta Display, XREAL Air, Viture, Rokid, RayNeo, and Google's Android Display glasses. Past versions of AR glasses were not as widely adopted as predicted, for several reasons [1,2]. How will the latest AR glasses be received in the marketplace, and what design elements are important in determining adoption in specific domains? This review presents use cases for the next generation of AR glasses and applies a new heuristic evaluation system [3] to analyze usability across the likely use domains. Results support the use of AR glasses across the training, sport, accessibility, and consumer domains, identifying specific usability features that are highly important or critical for future adoption within each domain.
Subject: Social Sciences - Other

1. Introduction

Compared to other technologies, AR/VR/MR glasses are a relatively recent development. The first glasses, Google Glass, were introduced in 2012 and marketed to the public in 2014 [1]. The potential appeal was obvious. Tourists would be able to access a visual display providing interesting information in a new country. Surgeons could consult the display during complex procedures and ask for additional medical information to be presented to them instantaneously. Students could use Google Glass to deepen their understanding of a domain while also increasing engagement. Those in technical training fields could be presented with visual displays as they engaged in applied learning scenarios, facilitating and accelerating skill development.
Although these and other use cases were readily apparent, adoption of Google Glass and other commercial AR/VR/MR technologies has been underwhelming among consumers. Weidner [1] cited price, confusion over use cases, and concerns over information privacy as key factors in the overall failure of Google Glass, which was pulled from the market in 2023. Garrison [2] discussed five primary barriers that prevented widespread adoption. The first was cost: the glasses were expensive, and adoption across a large organization would require significant investment. The second was comfort: the glasses were heavy, and users did not find them comfortable to wear for extended periods. They also had to be tethered to a device, which made the setup bulkier and limited some movement. Third, applications developed for AR glasses were limited; investment in applications for new technologies and new domains was slow to develop. Related to this is the fourth barrier: consumer demand for AR glasses was not as great as anticipated. Consumers were undecided; they did not know whether they wanted AR glasses or a full VR headset, or in which domains AR or VR was preferred. If consumer demand is low, investment in software applications for such devices may also be low. The fifth barrier was field of view (FOV). The displays tended to correspond with the normal visual FOV, thereby not extending the abilities or perspective of users and not providing extra value for the investment. As the dust has settled, Garrison pointed out that AR has been more readily adopted in educational arenas, while VR has been used more in entertainment [2].
Rauschnabel and colleagues' xReality (XR) framework defines XR as a general term encompassing a range of digital reality mediums, where "X" serves as a placeholder [4]. Augmented reality (AR) and virtual reality (VR) are two major categories of XR, distinguished from one another by whether the physical environment is incorporated into the experience. AR combines virtual elements with the real world, while VR fully replaces it with a virtual environment. AR exists on a continuum of local presence, or how physically present virtual elements feel, from assisted reality (lower presence) to mixed reality (higher presence) [4]. The primary purpose of assisted reality is informational support, where information is simply overlaid on the real-world view to help the user. In contrast, mixed reality systems merge virtual elements with the real world, including features such as spatial anchoring or interaction between digital and physical content.
O’Luanaigh tried to anticipate what the market would want and need in future AR/VR glasses or headsets [5]. He wrote: “The ultimate dream is a glasses-sized headset, which uses 5G to do rendering and processing in the cloud… and delivers ultra-low-latency visuals to the glasses. The ideal version of this headset would provide full hand and eye tracking… and switch instantly between AR and VR applications, in high-resolution and with a full field of view. [The] headset can be incredibly light but deliver better-than-console visuals on the move. And given the lack of expensive CPUs and GPUs, it should cost less than $200 and require no additional hardware” (paragraph 2).
With the introduction of the newest AR glasses, such as the Ray-Ban Meta Display, XReal Air 2 Ultra, Viture Luma Pro, Rokid, RayNeo and Google’s Android Display glasses, is the technology dream of O’Luanaigh now a reality [6,8,9]? Will these new devices and others to follow lead to widespread adoption of this technology and acceleration across use cases, domains, and different populations?
These devices certainly are a step forward with a lightweight design, integrated audio, a visual display, as well as access to artificial intelligence (AI) [7]. Compared to traditional AR devices, such as the Microsoft HoloLens 2, Apple Vision Pro, or Meta Quest 3, which focus on immersive mixed-reality experiences blending virtual content with the real world, these lightweight glasses prioritize everyday functionality as an assisted reality technology. Traditional AR devices often include binocular displays with a wide field of view and spatial integration, whereas the lighter AR glasses include a relatively unobtrusive monocular display with a smaller field of view. Additionally, traditional AR devices can be quite large and heavy, contributing to physical discomfort when using these devices [10,11]. In contrast, the AR glasses are designed with a lightweight form factor similar to everyday eyeglasses. Table 1 shows a summary comparison of the newer lightweight AR Glasses to the AR headsets.

2. Materials and Methods

2.1. Methods

In this evaluation, review articles about the latest AR display glasses were collected. These articles are from the public domain and were drawn from technology reviews and related sources. All are included in the references. Using what is currently known about the latest AR glasses technologies, we applied a heuristic evaluation process to potential consumer needs and use cases in four domains: technical training and education, sport and exercise, accessibility, and consumer services. For each domain, we discuss how AR can be used along with a realistic use case for AR glasses, addressing what features may be most important to make widespread adoption optimal in each domain.

2.2. Materials

In order to assess the functionality and potential of the new AR glasses across different domains, we used a heuristic evaluation tool, the Derby Dozen [3], as a framework to assess the factors that contribute to perceived usability (Table 2). This heuristic checklist offers structured best practices to assess these devices and their applications across key areas like comfort, user feedback, interaction, and device maintainability.
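The evaluation workflow can be pictured as a simple tabulation: each domain receives an importance rating per heuristic, and the highest-rated heuristics surface as that domain's critical needs. The sketch below is purely illustrative and is not the authors' instrument: the 0-2 rating scale, the `top_heuristics` helper, and the example scores are hypothetical, and the heuristic names are drawn only from those discussed in this review (the full Derby Dozen checklist appears in [3] and Table 2).

```python
# Hypothetical tabulation of a heuristic evaluation.
# Scale (assumed): 0 = minor, 1 = important, 2 = critical.
# Heuristic names follow those discussed in the text; scores are placeholders.

HEURISTICS = [
    "Unboxing & Setup",
    "Instructions",
    "Organization & Simplification",
    "Consistency & Flexibility",
    "Integration of Physical & Virtual Worlds",
    "User Interaction",
    "Comfort",
    "Feedback to the User",
    "Collaboration",
    "Intuitiveness of Virtual Objects",
    "Privacy",
    "Device Maintainability",
]

def top_heuristics(ratings, threshold=2):
    """Return the heuristics rated at or above `threshold` for a domain."""
    return [h for h, score in ratings.items() if score >= threshold]

# Illustrative ratings for one domain (technical training and education).
training = {h: 0 for h in HEURISTICS}
training["Integration of Physical & Virtual Worlds"] = 2
training["Collaboration"] = 2
training["Instructions"] = 1

print(top_heuristics(training))
```

In practice, such a table would be filled in per domain and compared across domains, which is what the Results section below does in prose.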

3. Results

The Derby Dozen heuristics were applied to assess the factors contributing to perceived usability of AR glasses across four domains: technical training and education, sport and exercise, accessibility and consumer services. For each domain, we first describe an optimal use case for AR glasses and then discuss the use and potential barriers to adoption, leading to a suggestion of the most important Derby Dozen heuristics to consider.

3.1. AR Use in Technical Training and Education

Evan is in a training program to become an automotive repair technician. While some of his education is classroom-based, much of it involves hands-on learning under the hood of a car or under a car positioned on a lift. Evan wears AR glasses that provide audio instructions to guide him through a repair. He can use voice commands to pause or replay the instructions as needed. If he doesn't recognize a part or procedure, he can ask for a video display or schematics of the parts to accompany the audio. Throughout the procedure, video of his work is streamed live to his instructor, who can provide additional guidance on Evan's performance. Evan knows his work is being recorded, so he and the instructor can play it back later to reinforce his learning. Evan doesn't mind wearing the glasses. They are lightweight and comfortable, and they protect his eyes, replacing the safety glasses he would have to wear anyway.
When the first AR glasses were created by Google in 2012 and introduced to the public in 2014, they were, almost immediately, discussed as potential game changers for technical training in domains such as education and machine maintenance and repair [13,14]. Lee [13] articulated several potential strengths associated with AR-based training and education, including enhancements in motivation of users, greater simplicity of training, enhanced safety in a hands-free environment and improved training efficiency. Iatsyshyn et al. [15] in a more recent review of AR-based training, added that AR glasses have the potential to enhance learning in users due to the interactivity associated with the visual display of information.
The latest AR glasses hold several advantages in technical training and education over older AR glasses. First, they include a display interface that can be accessed via voice, touch, or gesture, allowing interactivity across different training spaces. For instance, in mechanical training environments, the glasses can provide instructional displays or schematics that can be advanced via voice while a technician is actively engaged in a hands-on activity [16,17,18]. AR glasses provide a look-through display in a fixed position; for the Meta Display glasses, this appears on the right side of the visual field only. The display can be combined with audio input and video capture in real time, limited only by operating memory capacity. Multi-modal information presentation and real-time media capture enhance both current and future learning. Technical training environments can involve unusual spaces (e.g., under a car lift, inside a plane's fuselage, driving a forklift in a crowded warehouse) where safety is a primary concern and maintaining situation awareness is important. Look-through displays allow for continued awareness of the external environment. The newest glasses are also lighter than other AR or VR headset solutions, so users can wear them for longer periods in training environments without discomfort, and the glasses provide a measure of eye protection often required in technical fields. Battery life for the next generation of AR glasses has also improved; for instance, the Meta Display glasses offer up to 6 hours of mixed use on Wi-Fi, allowing more extensive training to be conducted in a single session [14]. Also, for the Meta Display, the user interface is designed to look and feel like a smartphone display, which most users will be familiar with [14,17]. In training environments, this shortens the learning curve and allows users to go directly into hands-on learning while wearing the technology. 
The cost of the glasses is reasonable at between $200 and $800 a pair with no additional software costs. Educational or training organizations may find AR glasses to be budget friendly, even though costs at the higher end of the range may preclude some users from purchasing the glasses directly. If widely adopted, educational institutions could work the cost of the glasses into tuition and then allow customization of the glasses for those who need prescription strength lenses.
Using the lens of the Derby Dozen [3], in the technical training environment, the following heuristics are of particular importance:
  • Instructions: As mentioned above, if interfaces are used in a manner similar to smartphone mechanics, then the learning curve to proficiency of use should be shortened.
  • Organization and Simplification: Interfaces that resemble familiar visual displays, such as those on current smartphones or tablets, will facilitate user learning speed. Speech-based controls that mimic smartphone inputs will also enhance the user experience.
  • Integration of Physical and Virtual Worlds: This is critical for technical training and education. In the use case presented above, seamless presentation of informational videos, schematics, diagrams or related materials using a look-through display facilitates learning, without compromising safety or situational awareness.
  • Collaboration: In any educational environment, collaboration is important for learning. Teacher-experts must be able to interact with and direct learners in real time, and interaction between learners is also important. Glasses that can live stream video to instructors or classmates, with the additional capability of recording training sessions, are a valuable learning tool. Allowing instructors to interact via audio with students during the same training sessions further enhances learning and the student experience.
  • Maintenance: Only daily use will confirm that the new AR glasses are built well enough to withstand the rigors of environments that can include temperature variation, high or low light, potentially damaging substances like grease or lubricants, and hard surfaces that could damage the glasses if dropped. Institutions that implement new technologies often have capital funding to purchase equipment but lack sufficient operational funding for repair or replacement. If students are responsible for repairs or replacement on their own, this may place strain on limited student finances.
Adoption of the next generation of AR glasses is possible and more probable in technical training environments than ever before, due to enhanced capabilities that result in greater usability. Technical training environments often require students to identify a problem in a technological system, understand the problem and then implement the correct set of steps to solve the problem. AR glasses have the potential to facilitate learning with advanced displays, memory and storage enhancements, lighter form factors, multi-modal input options, and real-time connectivity with others at a lower cost of entry.

3.2. AR Glasses Use in Sports and Exercise

Alex is an experienced recreational cyclist training in a rural environment. During high-speed segments, his hands are fully engaged on the handlebars, and his visual attention is focused on the road and surrounding traffic. While riding, Alex wears AR glasses that provide real-time performance feedback without requiring him to look down at a bike computer or interact with a smartphone. The glasses display key metrics such as speed, cadence, and route guidance within his peripheral vision. When the terrain or effort changes, the system automatically adapts the feedback, using motion data and contextual inference to avoid unnecessary distractions. Voice commands and subtle tap gestures allow Alex to control functions hands-free, while auditory cues supplement visual feedback during moments of high visual demand.
AR glasses in sports have evolved from early heads-up display (HUD) devices to more sophisticated, AI-enhanced systems that guide performance in real-time. The first sport-oriented models, such as Google Glass, Recon Jet, and Vuzix M100, were designed to provide minimal yet essential real-time information (e.g., speed, altitude, and pace), projected into the peripheral vision of the athletes, to maintain situational awareness during high-intensity performance [18]. These first-generation devices were realized by utilizing lightweight optical systems with basic Inertial Measurement Unit (IMU) components and smartphone connectivity. Applications were found in cycling, running, winter sports, and sailing. For example, early snow-sport goggles, such as the Recon Snow 2, extended these ideas into robust frames for dynamic outdoor conditions, incorporating trajectory, navigation, and barometric feedback [18]. This formative phase of development reflected a technical focus on feasibility and usability, in line with foundational work on the design of optical see-through displays, which demonstrated the benefits of glanceable feedback for tasks involving motion [19,20]. Since the late 2010s, smart glasses have evolved beyond simple displays and begun to incorporate multimodal sensors and adaptive algorithms. Designs have increasingly combined accelerometers, gyroscopes, GPS, and machine-learning models to classify movement and dynamically adjust feedback, integrating multisensory data streams for real-time activity recognition [21]. Systems such as the Mobile Pervasive Augmented Reality System (MPARS), which utilize accelerometer-GPS fusion, demonstrate how understanding the context can enhance pacing, decision-making, and technical monitoring during outdoor activities [22]. Evidence from IMU research supports this progression: using multiple sensors together is more effective than using just one for tracking complex movement patterns [21]. 
In the early 2020s, smart glasses such as Ray-Ban Stories and the Meta glasses gained popularity. These devices featured lightweight frames and integrated cameras and offered hands-free interaction [23]. Moreover, sport-specific glasses such as FORM Swim and Holoswim 2 introduced underwater HUDs for real-time pacing and stroke monitoring while swimming. Meanwhile, in other sports, the ENGO 2 ActiveLook, Everysight Raptor, and Julbo Evad-2 continued to refine brightness, responsiveness, and sensor compatibility for cyclists and runners. Similarly, AR-enabled goggles for winter sports, such as the Ostloong Sirius, align with trends in IMU-based analytics that emphasize miniaturization, enhanced wireless communication, and on-device computation as prerequisites for effective real-world kinematic monitoring [24]. The latest transition, emerging around 2024–2025, marks the arrival of AI-driven performance eyewear. New devices, such as the Oakley Meta series, integrate IMUs, centrally aligned cameras, and on-device inference engines to interpret motion, understand environmental context, and provide real-time adaptive coaching. This aligns with the need for new sports technologies to move toward multisensory fusion, embedded computation, and contextual analysis, providing comprehensive performance profiles [21].
Based on our review, as sensing accuracy, processing power, and ergonomic refinement converge, AR glasses are becoming more than data visualization tools. Because athletes often operate in a "hands-busy, eyes-busy" context, they can now rely on voice commands and tap gestures; the glasses also include hand tracking, head aiming, and AI agent interaction, which can evolve into "motion-driven, eye-forward" interaction. In terms of perception and feedback, current glasses include minimal-occlusion microLED displays designed so that stable information can be perceived peripherally. For these features to be viable in sports contexts, high-brightness and low-latency rendering are critical for safety during outdoor activities. Inertial Measurement Units (IMUs) enable motion understanding, while scene recognition is driven by the AI application. In other words, AR glasses are moving toward context-aware sensing, generating feedback that is appropriate to the situation and minimizes distraction. By combining IMU data with context (e.g., speed, effort, and environmental complexity), the glasses summarize and cue only the essential information to display, avoiding cognitive overload. Moreover, to reduce visual strain, feedback is provided through multimodal cues, either visual or auditory. From an ergonomic perspective, the glasses present a robust yet lightweight frame that provides a durable and stable experience; however, they are not designed for high-impact sports.
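The context-aware gating described above, in which sensed context determines which cues reach the display, can be sketched in a few lines. This is a minimal illustrative sketch under stated assumptions: the function name, the priority scheme, and the speed thresholds are hypothetical and not drawn from any specific device; real systems would derive context from fused IMU and scene-recognition data rather than a single speed value.

```python
# Hypothetical context-aware feedback gating for a cycling scenario.
# Cues are (priority, message) pairs: 1 = safety-critical, 2 = performance,
# 3 = ambient. Higher task demand (speed) suppresses lower-priority cues.

def gate_feedback(speed_kmh, cues):
    """Return only the cues appropriate to the current riding context."""
    if speed_kmh >= 40:      # high-demand phase: safety cues only
        max_priority = 1
    elif speed_kmh >= 20:    # moderate effort: safety + performance cues
        max_priority = 2
    else:                    # low demand: show everything
        max_priority = 3
    return [msg for prio, msg in cues if prio <= max_priority]

cues = [
    (1, "Sharp turn ahead"),
    (2, "Cadence: 92 rpm"),
    (3, "New message"),
]

print(gate_feedback(45, cues))  # high-speed segment: safety-critical only
print(gate_feedback(10, cues))  # easy spin: all cues shown
```

The same gating idea generalizes to the multimodal case discussed above, where suppressed visual cues could instead be deferred or delivered as audio.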
Using the lens of the Derby Dozen, in the sports and exercise domain, the following heuristics are of particular importance:
  • Integration of Physical & Virtual Worlds: Performance metrics, navigation cues, and contextual prompts must remain spatially stable, accurately registered, and responsive to rapid body and environmental motion. Even minor latency or misalignment can compromise trust in the system, increase distraction, or introduce safety risks, particularly in high-speed or outdoor activities.
  • Comfort: Lightweight construction, secure fit, and effective thermal management are essential to prevent distraction and fatigue during movement. Even minor discomfort can negatively impact performance and adherence. While the latest AR glasses provide promising ergonomics for low- to moderate-intensity activities, they remain less suitable for high-impact or collision-based sports.
  • Feedback to the User: Feedback must be selectively gated, context-aware, and dynamically adapted to task demands to avoid cognitive overload. The system should prioritize essential information based on activity phase, speed, and environmental complexity, suppressing noncritical cues during high-demand moments. Multimodal feedback strategies, which balance visual and auditory cues, are particularly important for reducing visual strain and attentional disruption while preserving situational awareness.
  • Device Maintainability: Heat, sweat, rain, dust, and rapid temperature changes are potential sport-specific environmental stressors. Optical clarity, sensor accuracy, and interaction reliability should be maintained during prolonged exertion. Additionally, high device maintainability, including durability, ease of cleaning, and resistance to wear, is crucial for sustained real-world use across repeated training sessions.

3.3. AR Glasses Use for Accessibility

Sarah has low vision and uses her AR glasses throughout her day. The glasses can magnify text that would otherwise be too small for her to read, and they can read other text aloud when she is unable to do so herself. They also augment the physical world, highlighting important elements such as walls, signs, and other obstacles, allowing her to navigate more easily without other accessibility aids. Sarah's friend John, who is deaf, also uses AR glasses throughout the day. The glasses transcribe conversations he would otherwise be unable to understand, presenting him with a real-time transcript of the conversations around him. This allows him to communicate easily with people who do not know sign language. He also uses the glasses to be notified of important sounds around him: the glasses interpret audio cues to alert him to events such as a fire alarm going off or a car honking, giving him information he would otherwise miss.
AR technologies have long been considered at the cutting edge of accessibility enhancement. The first AR system designed for accessibility purposes was the Low Vision Enhancement System (LVES), developed through a collaboration of Johns Hopkins University, NASA, and the Veterans Administration [25]. This system magnified and enhanced what the user saw using a multi-camera system that displayed the final image to the user. While it was constrained by the technology of the time, its ideas have continued to inform modern AR research. Htike et al. [26] explored the use of the Microsoft HoloLens v1 and custom programs to augment users' visual experience, overlaying different visual elements to assist users with low vision in accurately navigating their surroundings. This concept was also applied by Lee et al. [27], whose custom AR system allowed low-vision users to participate in sports by highlighting important objects, such as balls in play. This research has demonstrated the potential of AR systems as assistive technology for the visually impaired. Beyond users with visual impairments, AR technologies have shown exceptional promise for deaf and hard-of-hearing (DHH) users. Olwal et al. [28] developed lightweight AR glasses that could transcribe speech in real time and display it to the user. AR glasses have shown strong potential as assistive technology for people with disabilities, but they have yet to achieve widespread adoption. Google Glass, first released in 2012, was somewhat promising as the first mainstream AR glasses. Some accessibility research was done using Google Glass [29], but because adoption of Google Glass was very low overall, research and use were limited.
The latest AR glasses have certainly innovated and improved upon Google Glass, including built-in accessibility features. Out of the box, the glasses can transcribe and translate spoken text for users who are deaf or hard of hearing [6]. They can also view and read text aloud for users with low vision or blindness [30]. These features, along with others, are primarily implemented through AI, which, while functional, has not been proven in the long term for full accessibility use. In the future, the best solution for these use cases would be dedicated, purpose-built programs tailor-made for transcription, text identification, and similar tasks. Potentially the greatest innovation of AR glasses, and of the Meta Display glasses in particular, is the included neural band. This small wristband senses subtle hand gestures and uses them to navigate and control the glasses, which is potentially game-changing for users with motor or vocal impairments. Previous input methods consisted primarily of voice commands or physical gestures on the glasses. The Meta Display supports both of these input methods, but the neural band also opens the technology to users who would not otherwise be able to engage with it.
Using the lens of the Derby Dozen, in the accessibility domain, the following heuristics are of particular importance:
  • Unboxing & Setup: The process must be both intuitive and easy to use for all users. Instruction manuals and setup guides must offer multiple modes of delivery for users with different disabilities.
  • Consistency & Flexibility: Content presented through the glasses must adhere to current UI and UX design principles. Text, contrast, and font size must all be easily adjustable. Overly complicated menus will render the glasses ineffective as accessibility aids.
  • Integration of Physical and Virtual Worlds: The glasses must seamlessly integrate digital content and overlays with the physical world; otherwise, they will be less effective than currently existing accessibility aids.
  • User Interaction: Users must be able to interact seamlessly with their glasses. Any gestures or controls must be accessible to users with motor disabilities. If gestures are impossible to perform, secondary methods of interaction must be available, such as voice commands and eye tracking.
  • Comfort: Users must be able to use the device all day without discomfort to be a viable replacement for other accessibility aids. The device must also have the necessary battery life to accommodate this.
  • Feedback to the User: Device feedback must be multimodal, and the glasses must be able to present information in a format each user can access given their disability.

3.4. AR Glasses Use for General Consumption

Bonnie is from the United States and is traveling in a foreign country for leisure. She wants to tour the rural areas and talk to the locals but does not speak their language, and they do not speak English. Wearing her AR glasses, Bonnie can instantly follow along with conversations through smart translation without knowing the local language. The glasses also translate street signs and restaurant menus and suggest appropriate cultural norms (e.g., the traditional greeting when she meets someone, or how to say "thank you"). They also help Bonnie understand local products (i.e., name, description, price) at the produce market using image recognition. She feels immersed in the culture, which contributes to the richness of her visit.
AR glasses offer an array of potential benefits to the general consumer, many stemming from the convenience of being completely hands-free and having immediate access to information that is currently available only by searching on a phone or computer. The glasses support everyday multitasking by allowing users to answer messages, make phone calls, receive navigation instructions, or set reminders without needing to reach for their phone. This capability is especially beneficial while walking, traveling, commuting, cooking, or performing other tasks where one's hands may already be occupied [15,16]. The integrated camera also enables hands-free photo and video capture, making it easier to document memorable moments during hikes, travel, or social events, with reviewers noting that on-display framing and previews further streamline the experience [17,31]. The option to immediately livestream or share captured content adds another layer of convenience by reducing the effort typically required when using a smartphone. The embedded AI assistant provides additional ease of use through live information retrieval, such as identifying objects or landmarks, providing translation, or offering step-by-step navigation, which reviewers describe as enhancing situational awareness and supporting real-world tasks in more natural, seamless ways [16,17]. Another noteworthy advantage mentioned across reviews is the discreet and socially acceptable design of the glasses. Because the AR glasses resemble familiar prescription frames, reviewers emphasize that the device looks far more natural in everyday contexts compared to bulkier AR or VR headsets. This conventional aesthetic allows users to access digital information without drawing unnecessary attention, making the glasses easier to wear in public settings such as cafés, airports, or city streets [15,31,32]. Reviewers also highlight benefits related to display privacy and audio discretion. 
The micro-LED display produces minimal light leakage, meaning bystanders generally cannot see what the wearer is viewing, and the open-ear audio remains relatively quiet to others unless used at higher volumes [15,31]. Together, these features help maintain a sense of personal privacy while interacting with the device in social environments. Reviewers also suggest that the neural band’s gesture-based controls may offer a more intuitive interaction method, as simple wrist and finger gestures allow users to navigate menus or control media without relying solely on taps or voice commands [16,31,32]. Finally, the user interface presented in the in-lens display closely resembles familiar smartphone layouts. Reviewers note that standard applications, such as camera, music, maps, and calls, appear in recognizable formats, which may shorten the learning curve and allow most consumers to navigate the system with ease [16,17]. Collectively, these observations suggest that the AR glasses provide a blend of convenience, discretion, and intuitive interaction that may support broader consumer adoption.
Using the lens of the Derby Dozen, in the consumer domain, the following heuristics are of particular importance:
  • Unboxing and Setup: Out-of-the-box intuitiveness and seamless application start-up are critical for this audience. Significant delays or confusion about how to get started will be catastrophic to adoption.
  • Integration of the physical and virtual worlds: Overlays of information (e.g., map directions or the spatial location of virtual retail objects) must be seamless and accurate. Latency issues or low-fidelity imagery will detract from usage.
  • Comfort: The AR glasses must be lightweight and comfortable. Consumers will not tolerate discomfort, even if the glasses provide convenient access to information.
  • Intuitiveness of virtual objects: The casual consumer will not read instructions or view a tutorial to use the glasses for everyday tasks. Virtual elements need to be immediately intuitive and draw on familiar metaphors.
  • Privacy: Privacy concerns can be experienced both by users (e.g., how is the information I am seeing being used if it is stored in the cloud?) and by those in their environment (e.g., is that person recording me without my consent?). Full transparency will be required for everyone to feel comfortable with the technology’s use in public.
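The per-domain weighting described above can be thought of as a simple lookup structure. The sketch below is illustrative only and is not from the paper: the heuristic names follow Table 2, but the "critical" versus "standard" ratings are assumptions loosely derived from the consumer-domain discussion, and the `shortlist` helper is a hypothetical convenience function.

```python
# Illustrative sketch (not from the paper): encoding per-domain importance
# ratings for the Derby Dozen heuristics so a shortlist of critical
# heuristics can be pulled for a given use domain. Ratings below are
# assumptions for the consumer domain, not values reported by the authors.

DERBY_DOZEN_CONSUMER = {
    "Unboxing & Set-Up": "critical",
    "Instructions": "standard",
    "Organization & Simplification": "standard",
    "Consistency & Flexibility": "standard",
    "Integration of Physical & Virtual Worlds": "critical",
    "User Interaction": "standard",
    "Comfort": "critical",
    "Feedback to the User": "standard",
    "Intuitiveness of Virtual Elements": "critical",
    "Collaboration": "standard",
    "Privacy": "critical",
    "Device Maintainability": "standard",
}


def shortlist(ratings, level="critical"):
    """Return the heuristics rated at the given importance level."""
    return [name for name, rating in ratings.items() if rating == level]


print(shortlist(DERBY_DOZEN_CONSUMER))
```

A buyer or procurement team could maintain one such mapping per domain (training, sport, accessibility, consumer) and compare candidate devices against each domain's critical shortlist.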

4. Discussion

The present review applied the Derby Dozen heuristics [3] to the latest generation of AR glasses currently entering, or soon to enter, the marketplace. The purpose of the review was to explore potential use cases for the glasses across four domains (technical training and education, sport and exercise, accessibility, and consumer services) and to discuss potential barriers to adoption.
Based on the latest reviews of the glasses, we found that the importance of certain heuristics varied across domains. For instance, in technical training, displays that seamlessly integrate the physical and virtual worlds and support real-time collaboration were critical needs, whereas in consumer services, intuitiveness of virtual elements was the most critical need. Based on these results, we predict that within each domain of use, consumers will base their purchase decisions on which AR glasses meet their critical and most important needs. Thus, this review provides a guide for consumers, outlining which qualities matter most in four typical use domains.
By focusing on use domains and evaluating needs within each domain using a validated set of heuristics, considerations can be customized to consumer needs. Organizations considering large-scale capital investment in these technologies may find the heuristics particularly valuable as they plan their expenditures; no organization wishes to spend money on solutions that fail to meet the needs for which they were acquired. The Derby Dozen can help those evaluating technological alternatives invest capital funds with a greater likelihood of success.
Of course, not all aspects of evaluation can be fully known until the actual glasses are released to a wider consumer population and used more extensively within each domain. Limitations will surely arise that cannot be currently assessed or predicted. For instance, in training environments, will the glasses stand up to environmental conditions, such as temperature changes, hard surfaces if dropped, or substances, such as lubricants or harsh fluids, that could damage lenses? In sports, will glasses survive constant motion, and in those environments, will users be able to use a visual display without motion sickness or distraction that could impact safety or limit optimal performance? Considering the vast population of amateurs and elite athletes, can the glasses be adopted for each physical activity and sport played at each performance level? Will they be prohibited or allowed during official competition? In accessibility domains, will multi-modal operational input (voice, touch and gesture) be flexible and sensitive enough to accommodate a wide range of physical limitations of potential users? For whom will accessibility be enhanced enough to warrant greater adoption? In consumer domains, will tourists find value added in having a display and additional information presented to them visually, or will this information detract from the experience of exploration and immersion in a new location?
Once the latest versions of AR glasses are released, future user experience and in-person usability studies need to be conducted to fully assess the value of the technology and what additional modifications might be necessary to accommodate users within specific domains. This paper introduces one method of usability evaluation that can be applied to AR, as well as related technologies. The Derby Dozen should continue to be a valuable evaluation tool in these future studies.

Author Contributions

C.F., B.Ch., B.Ca., and G.F. contributed to the conceptualization, organization, writing, and original draft preparation. B.Ch. contributed to visualization. G.F. contributed to writing, review, and editing. M.A. and H.A. contributed to additional research and literature review. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Weidner, J.B. How & Why Google Glass Failed. Available online: https://www.investopedia.com/articles/investing/052115/how-why-google-glass-failed.asp (accessed on 18 December 2025).
  2. Garrison, N. Why Haven’t AR and VR Changed Our Lives Yet? 5 Barriers to Adoption. Available online: https://arpost.co/2019/11/27/ar-and-vr-changed-our-lives-5-barriers-adoption/ (accessed on 18 December 2025).
  3. Derby, J.L.; Aros, M.; Chaparro, B.S. The Validation of a Heuristic Toolkit for Augmented Reality: The Derby Dozen. Frontiers in Virtual Reality 2025, 6. [CrossRef]
  4. Rauschnabel, P.A.; Felix, R.; Hinsch, C.; Shahab, H.; Alt, F. What Is XR? Towards a Framework for Augmented and Virtual Reality. Computers in Human Behavior 2022, 133, 107289. [CrossRef]
  5. O’Luanaigh, P. VR: The Path to Mass Adoption. AIXR. Available online: https://aixr.org/insights/vr-the-path-to-mass-adoption/ (accessed on 18 December 2025).
  6. Meta. Ray-Ban Meta Advanced AI Glasses. Available online: https://www.meta.com/ai-glasses/ (accessed on 18 December 2025).
  7. Valecillos, D. XREAL Beam Pro + XREAL Air 2 Pro AR Glasses Review. Available online: https://blog.learnxr.io/extended-reality/xreal-beam-pro-and-air-2-pro-ar-glasses-review (accessed on 18 December 2025).
  8. Stein, S. Google’s Putting It All on Glasses Next Year: My Demos with Project Aura and More. Available online: https://www.cnet.com/tech/computing/googles-putting-it-all-on-glasses-next-year-my-demos-with-project-aura-and-more/ (accessed on 16 December 2025).
  9. Aros, M.; Arnold, H.; Chaparro, B. Exploring Device Usability: A Heuristic Evaluation of the Apple Vision Pro and Meta Quest 3. Communications in Computer and Information Science 2025, 3–12. [CrossRef]
  10. Bondarenko, Y.; Kuts, V.; Pizzagalli, S.; Nutonen, K.; Murray, N.; O’Connell, E. Universal XR Framework Architecture Based on Open-Source XR Tools. Springer Proceedings in Business and Economics 2024, 87–98. [CrossRef]
  11. Derby, J.L. Designing Tomorrow’s Reality: The Development and Validation of an Augmented and Mixed Reality Heuristic Checklist. Dissertation, Department of Human Factors and Behavioral Neurobiology, Embry-Riddle Aeronautical University, Daytona Beach, FL, 2023.
  12. Lee, K. Augmented reality in education and training. TechTrends 2012, 56, 13–21.
  13. Lee, K. The future of learning and training in augmented reality. InSight J. Sch. Teach. 2012, 7, 31–42.
  14. Iatsyshyn, A.V.; Kovach, V.O.; Romanenko, Y.O.; Deinega, I.I.; Iatsyshyn, A.V.; Popov, O.O.; Lytvynova, S.H. Application of augmented reality technologies for preparation of specialists of new technological era. In Augmented Reality in Education: Proceedings of the 2nd International Workshop (AREdu 2019), Kryvyi Rih, Ukraine, 22 March 2019; CEUR Workshop Proceedings: 2020; No. 2547, pp. 181–200.
  15. Song, V. I Regret to Inform You Meta’s New Smart Glasses Are the Best I’ve Ever Tried. Available online: https://www.theverge.com/tech/779566/meta-ray-ban-display-hands-on-smart-glasses-price-battery-specs (accessed on 18 December 2025).
  16. Song, V. The Future I Saw through the Meta Ray-Ban Display Amazes and Terrifies Me. Available online: https://www.theverge.com/tech/801684/meta-ray-ban-display-review-smart-glasses-ai-wearables (accessed on 18 December 2025).
  17. Bloomberg. Meta’s $799 Display Glasses Give a Glimpse of the Future. 2025. Available online: https://www.bloomberg.com/news/articles/2025-10-09/meta-ray-ban-display-review-should-i-buy-meta-s-new-799-glasses-with-a-screen (accessed on 9 October 2025).
  18. Elder, S.; Vakaloudis, A. A technical evaluation of devices for smart glasses applications. In Proceedings of the 2015 Internet Technologies and Applications (ITA), UK, 2015; pp. 98–103.
  19. Dünser, A.; Grasset, R.; Billinghurst, M. A survey of evaluation techniques used in augmented reality studies. Human Interface Technology Laboratory New Zealand: Christchurch, New Zealand, 2008; pp. 5–1.
  20. Azuma, R.T. A survey of augmented reality. Presence Teleoperators Virtual Environ. 1997, 6, 355–385.
  21. Rana, M.; Mittal, V. Wearable sensors for real-time kinematics analysis in sports: A review. IEEE Sens. J. 2020, 21, 1187–1207.
  22. Pascoal, R.M.; de Almeida, A.; Sofia, R.C. Activity recognition in outdoor sports environments: Smart data for end-users involving mobile pervasive augmented reality systems. In Adjunct Proceedings of the 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2019 ACM International Symposium on Wearable Computers, London, UK, 2019; pp. 446–453.
  23. Iqbal, M.Z.; Campbell, A.G. Adopting smart glasses responsibly: Potential benefits, ethical, and privacy concerns with Ray-Ban Stories. AI Ethics 2023, 3, 325–327.
  24. Zhang, M.; Li, H.; Ge, T.; Meng, Z.; Gao, N.; Zhang, Z. Integrated sensing and computing for wearable human activity recognition with MEMS IMU and BLE network. Meas. Sci. Rev. 2022, 22, 193–201.
  25. Massof, R.W.; Rickman, D.L.; Lalle, P.A. Low vision enhancement system. Johns Hopkins APL Tech. Dig. 1994, 15, 120–125. Available online: https://secwww.jhuapl.edu/techdigest/Content/techdigest/pdf/V15-N02/15-02-Massof.pdf.
  26. Htike, H.M.; Margrain, T.M.; Lai, Y.-K.; Eslambolchilar, P. Augmented reality glasses as an orientation and mobility aid for people with low vision: A feasibility study of experiences and requirements. In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI), 2021; pp. 1–15. [CrossRef]
  27. Lee, J.; Li, Y.; Bunarto, D.; Lee, E.; Wang, O.H.; Rodriguez, A.; Zhao, Y.; Tian, Y.; Froehlich, J.E. Towards AI-powered AR for enhancing sports playability for people with low vision: An exploration of ARSports. In Proceedings of the IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), 2024; pp. 228–233. [CrossRef]
  28. Olwal, A.; Balke, K.; Votintcev, D.; Starner, T.; Conn, P.; Chinh, B.; Corda, B. Wearable subtitles. In Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology (UIST), 2020. [CrossRef]
  29. Berger, A.; Vokalova, A.; Maly, F.; Poulova, P. Google Glass used as assistive technology: Its utilization for blind and visually impaired people. In Mobile Web and Intelligent Information Systems; Springer: Cham, Switzerland, 2017; pp. 70–82. [CrossRef]
  30. Goldfield, D. List of Accessibility Features for the Meta Ray-Ban Smart Glasses. Available online: https://groups.io/g/tech-vi/message/9207 (accessed on 4 December 2025).
  31. Heaney, D. Meta Ray-Ban Display Review: First Generation Heads-up Mobile Computing. Available online: https://www.uploadvr.com/meta-ray-ban-display-review/ (accessed on 18 December 2025).
  32. Prospero, M. Meta Ray-Ban Display. Available online: https://www.tomsguide.com/computing/smart-glasses/meta-ray-ban-display-hands-on-this-is-the-future (accessed on 18 December 2025).
Table 1. Comparison of AR Glasses to AR/XR Headsets.
Feature | AR Glasses (e.g., Meta, XREAL Air, Viture, Rokid, RayNeo, Google Android) | AR Headsets (e.g., Quest 3, Vision Pro)
Immersion Method | Optical see-through (direct view) | Video passthrough (camera view)
How the User Sees | Users look directly through transparent lenses onto which a virtual image is projected; the real-world view is natural. | Users look at screens inside the headset, which display a live video feed from external cameras with added virtual elements.
Form Factor | Glasses (look like sunglasses or prescription frames); lightweight | Headset/goggles; bulky and covers the eyes and upper face
Digital Objects | Appear as a translucent overlay | Appear as fully opaque, solid objects that are anchored and realistic
Screen | Primarily 2D/3DOF (screen floats in front of the user); newer models offer 6DOF (objects anchored in 3D space) | Full 3D/6DOF (objects stay anchored to real-world surfaces); uses advanced hand, eye, and controller tracking
User Interaction | Finger gestures on the glasses stem and/or an accessory (e.g., Meta neural band) | Hand gestures in mid-air (custom to the headset system)
Connection | Wi-Fi or Bluetooth connection to a phone | Processor and battery built into the headset or tethered to a separate battery/processor “puck”
Price | $300–$800 USD | $500–$3,500+ USD
Table 2. The Derby Dozen Heuristics.
Heuristic | Definition
Unboxing & Set-Up | Getting started with the AR/MR device/application should be easy to identify, easy to complete, and a positive experience.
Instructions | Help and documentation for the app and device should be easily accessible and easy to understand (including tutorials). Instructions and error messages should give users clear feedback.
Organization & Simplification | The AR application should minimize cognitive overload by easing the user into the environment and avoiding unnecessary clutter.
Consistency & Flexibility | The AR application should be consistent and follow design standards for text, audio, navigation, and other elements.
Integration of Physical & Virtual Worlds | It should be easy to identify virtual elements and which virtual elements are interactive. Virtual elements should not obstruct physical objects in the users’ environment that are crucial for the completion of their goals.
User Interaction | All interactions that the user has with the AR device/application should be simple, easy to understand, and easy to complete.
Comfort | The AR application and device should be designed to minimize user discomfort.
Feedback to the User | The AR application and device should provide adequate feedback to the user to explain what is currently going on.
Intuitiveness of Virtual Elements | The AR application should be designed in a way that promotes the use of recognition rather than recall to minimize the user’s memory load.
Collaboration | When sharing an AR space with others, it should be easy to understand what actions are available and what is private vs. public, and communication between users should be seamless.
Privacy | It should be clear what AR content can be viewed by the public and what content is private (viewable only to the specific user or device).
Device Maintainability | The AR device should be designed in a way that makes it easy to maintain. This includes reusability, storage, cleaning, and the ability to fix/replace parts.
1 Heuristics gathered from [12].
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits the free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.

Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.
