ARTICLE | doi:10.20944/preprints202209.0306.v1
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: AIoT; Artificial Intelligence; Assistive Technology; Deep Learning; Machine Learning
Online: 20 September 2022 (10:45:15 CEST)
According to the World Health Organization, about 15% of the world’s population has some form of disability. Assistive Technology, in this context, directly helps people with disabilities overcome the difficulties they encounter in daily life, allowing them to receive an education and participate in the labor market and society in a worthy manner. Assistive Technology has made great advances in its integration with Artificial Intelligence of Things (AIoT) devices. AIoT processes and analyzes the large amount of data generated by IoT devices and applies Artificial Intelligence models, specifically Machine Learning, to discover patterns, generate insights, and assist decision making. Based on a systematic literature review, this article aims to identify the Machine Learning models used in research on Artificial Intelligence of Things applied to Assistive Technology. The survey also highlights the context of such research, its applications, the IoT devices used, and gaps and opportunities for further development. Survey results show that 50% of the analyzed studies address visual impairment, and for this reason most of the topics cover issues related to computer vision. Portable devices, wearables, and smartphones constituted the majority of IoT devices. Deep Neural Networks represent 81% of the Machine Learning models applied in the reviewed research.
ARTICLE | doi:10.20944/preprints201612.0071.v1
Subject: Physical Sciences, Optics Keywords: tactile sensors; assistive technologies; power wheelchair; medical systems; robotic; joystick; optical sensor
Online: 14 December 2016 (05:12:29 CET)
This paper presents a new optical, multi-functional, high-resolution 3-axis sensor which serves to navigate and can, for example, replace standard joysticks in medical devices such as electric wheelchairs, surgical robots, or medical diagnosis devices. A light source, e.g. a laser diode, is affixed to a movable axis and projects a random geometric shape onto an image sensor (CMOS or CCD). The software in the downstream microcontroller identifies the geometric shape’s center, distortion, and size, then calculates X, Y, and Z coordinates. These coordinates can then be processed in attached devices. The 3-axis sensor is characterized by its very high resolution, precise reproducibility, and the plausibility of the coordinates produced. In addition, optical processing of the signal provides a high level of immunity to electromagnetic and radio frequency interference. The sensor presented here is adaptive and can be adjusted to fit a user’s range of motion (stroke and force). This approach aims to optimize sensor systems such as joysticks in medical devices in terms of safety, ease of use, and adaptability.
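As an illustration of the decoding step this abstract describes, the following minimal sketch estimates X, Y, and Z from the projected light spot: the centroid of the bright pixels gives the X/Y deflection, and the change in spot size gives Z. The 50%-of-peak threshold and the rest-state calibration values are assumptions for illustration, not the authors' firmware.

```python
def decode_spot(frame, rest_center, rest_area):
    """Estimate joystick X, Y, Z from the light spot projected on the imager.

    frame       : 2D list of pixel intensities from the CMOS/CCD sensor
    rest_center : (row, col) of the spot with the axis at rest
    rest_area   : spot area (pixel count) with the axis at rest

    X/Y follow the displacement of the spot's centroid; Z follows the
    relative change in spot size (the spot grows or shrinks as the axis
    moves toward or away from the imager). Threshold and scaling are
    illustrative.
    """
    peak = max(max(row) for row in frame)
    # Isolate the bright spot: keep pixels above half the peak intensity
    pixels = [(r, c) for r, row in enumerate(frame)
              for c, v in enumerate(row) if v > 0.5 * peak]
    area = len(pixels)
    cy = sum(r for r, _ in pixels) / area      # centroid row
    cx = sum(c for _, c in pixels) / area      # centroid column
    x = cx - rest_center[1]                    # lateral deflection
    y = cy - rest_center[0]
    z = (area - rest_area) / rest_area         # relative size change
    return x, y, z

# Synthetic 9x9 frame: a bright 3x3 spot shifted two columns right of center
frame = [[0.0] * 9 for _ in range(9)]
for r in range(3, 6):
    for c in range(5, 8):
        frame[r][c] = 1.0
```

A real implementation would also use the spot's distortion (as the abstract notes) to reject implausible readings; this sketch covers only center and size.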
ARTICLE | doi:10.20944/preprints202208.0286.v1
Subject: Social Sciences, Other Keywords: Assistive Technology; Accessible Technology; Consumer Technologies; Provision; Policy; Funding
Online: 16 August 2022 (09:57:38 CEST)
Estimates by the World Health Organization suggest that 1 billion people do not have access to the assistive technologies they require. Over the past decade, the design of products that empower people with a disability has shifted from specialised, dedicated products designed only for those with a disability to features and functions integrated into cost-effective consumer technologies for the benefit of all. The opportunity to expand the availability of such technologies risks being missed because of delivery models that are rooted in medical devices and have failed to reflect trends in our understanding of technology and the choices and preferences expressed by persons with a disability. The research undertaken here suggests that such expansion offers significant benefits to people with a disability and a better economic and social return on investment for authorities.
ARTICLE | doi:10.20944/preprints202012.0634.v1
Subject: Arts & Humanities, Anthropology & Ethnography Keywords: Assistive Technology; Assistive devices; Students with disabilities; Decolonial Approach; South African Higher Education; Disability Staff members; learning; Enable and Constrain
Online: 24 December 2020 (14:46:01 CET)
This paper used decolonial theory to analyse the provision of Assistive Technology and assistive devices at an institution of higher education in South Africa. It was an empirical study in which data were collected through interviews with students with disabilities and Disability Rights Centre staff members. The paper sought to understand the hidden implications of the provision of Assistive Technology and assistive devices. The finding was that only students with disabilities were provided with Assistive Technology and assistive devices at the institution; the institution provided them through the Centre to support their learning. However, this mode of provision was found to be stigmatising and segregative. Furthermore, while the provision enabled students with disabilities’ learning on the one hand, on the other it constrained it. The argument of the paper is that when the provision of Assistive Technology and assistive devices is reserved for a particular group of students, it defeats the whole purpose for which it is intended and could hinder rather than promote learning. It is hoped that the paper will contribute to the contemporary debate on the provision of Assistive Technology and support services for people with disabilities in low-resource settings, from a South African context specifically and in higher education broadly.
ARTICLE | doi:10.20944/preprints201702.0049.v1
Subject: Engineering, Control & Systems Engineering Keywords: tactile sensors; assistive technologies; power wheelchair; medical systems; robotic; joystick; strain-gauge; spasticity
Online: 14 February 2017 (09:02:48 CET)
This article presents a new input device for spastic patients and others with similar symptoms. The sensor consists of a disc used to determine positions and can replace standard joysticks in medical devices such as electric-powered wheelchairs. Using a standard joystick while operating a powered wheelchair can result in dangerous situations when spastic movements occur and extremities cramp, making them uncontrollable. To avoid this, a disc was developed that can be controlled with any body part. By shifting weight (x- and y-axis), the disc can tilt in any direction, creating a proportional output signal; it can also be pushed down in the center (z-axis), for example to open an on-screen menu. When spasms occur, users cannot get stuck on the input device because the disc is flat and can be mounted within a control panel; body parts coming into contact with the disc would merely slide across it without triggering unintentional actions. The sensor presented here is also adaptive and can be adjusted to fit a user’s strength and range of motion. This proposal aims to develop an input device that enables spastic patients to operate sensitive systems safely.
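The proportional, user-adjustable behaviour described above can be sketched as a mapping from raw tilt readings to a drive command, with a deadzone to suppress tremor around neutral and a per-user gain for strength and range of motion. The deadzone, gain, and limit values are illustrative assumptions, not the authors' calibration.

```python
def disc_to_command(raw_x, raw_y, deadzone=0.1, gain=1.0, limit=1.0):
    """Map raw disc tilt readings (each in -1..1) to a proportional drive
    command. Inputs inside the deadzone produce no motion; outside it, the
    command grows proportionally with tilt, scaled by a per-user gain and
    clamped to a safe limit. Values are illustrative."""
    mag = (raw_x ** 2 + raw_y ** 2) ** 0.5
    if mag < deadzone:
        return 0.0, 0.0                        # neutral zone: no motion
    # Rescale so output rises from 0 at the deadzone edge, clamp at limit
    scale = min(gain * (mag - deadzone) / (1 - deadzone), limit) / mag
    return raw_x * scale, raw_y * scale
```

Lowering `gain` (or widening `deadzone`) would adapt the disc to a user with stronger involuntary movements, matching the adjustability the abstract emphasises.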
ARTICLE | doi:10.20944/preprints201809.0483.v1
Subject: Earth Sciences, Space Science Keywords: visually impaired people; mobile devices; assistive cartography.
Online: 25 September 2018 (10:32:06 CEST)
Vision can be used to recognize images and to build mental pictures of environments. Visually impaired people lack aids for independent or facilitated urban mobility, which can be provided by mobile devices and cartographic tools with audiovisual outputs. This work raises issues about urban mobility and accessibility for visually impaired people in areas still unexplored by them, using cartographic technologies on electronic devices. For a preview of the test area, located in Monte Carmelo (MG), a tactile model was used to form a first mental image for the eight volunteers. Results were obtained through research with blind and non-blind individuals, validating the use of the mobile area-registration prototype, positioned in the field or not, when the coordinates of the objects to be registered are known. The results indicate that both the tactile model and the audiovisual prototype can be used by blind and non-blind people. Above all, the prototype proved to be a viable and adequate option for decision making in urban environments. New ways of presenting data to blind and otherwise visually impaired people can still be studied.
ARTICLE | doi:10.20944/preprints201806.0449.v1
Subject: Engineering, Control & Systems Engineering Keywords: surface electromyography; computer vision; grasping; assistive robotics
Online: 27 June 2018 (15:01:06 CEST)
This paper presents a system that merges computer vision and surface electromyography techniques to carry out grasping tasks. To perform this, the vision-driven system computes pre-grasping poses of the robotic system based on the analysis of tridimensional object features. Then, the human operator can correct the pre-grasping pose of the robot using surface electromyographic signals from the forearm during wrist flexion and extension. Weak wrist flexions and extensions allow fine adjustment of the robotic system to grasp the object, and finally, when the operator considers the grasping position optimal, a strong flexion is performed to initiate the grasping of the object. The system has been tested with several subjects to check its performance, showing a grasping accuracy of around 95% of attempted grasps, which is around 9% higher than in previous experiments in which electromyographic control was not implemented.
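The weak-versus-strong flexion distinction described above can be sketched as a classification of sEMG window amplitude: below a lower threshold the arm is at rest, between the thresholds a weak contraction nudges the pre-grasp pose, and above the upper threshold a strong flexion triggers the grasp. The RMS feature and both threshold values are illustrative assumptions; real systems calibrate per user, e.g. against maximum voluntary contraction.

```python
import math

def classify_emg(window, weak_thr=0.15, strong_thr=0.6):
    """Classify a window of rectified sEMG samples by RMS amplitude.

    Returns 'rest', 'fine_adjust' (weak flexion/extension adjusts the
    pre-grasp pose), or 'grasp' (strong flexion initiates the grasp).
    Thresholds are illustrative, not the paper's calibration.
    """
    rms = math.sqrt(sum(s * s for s in window) / len(window))
    if rms >= strong_thr:
        return "grasp"
    if rms >= weak_thr:
        return "fine_adjust"
    return "rest"
```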
ARTICLE | doi:10.20944/preprints201608.0199.v1
Subject: Medicine & Pharmacology, Other Keywords: active ageing; social participation; mobility; assistive technologies; service delivery
Online: 23 August 2016 (14:53:52 CEST)
Active ageing is defined as the process of optimizing opportunities for physical, social, and mental health to enable older people to take an active part in society without discrimination and to enjoy an independent and good quality of life. The World Health Organization adopted this as a process for increasing and maintaining an individual’s participation in activities to enhance his/her quality of life. In this survey, the authors addressed the following question: “Is assistive technology (AT) for mobility contributing to the enhancement of lifelong capacity and performance?”. From June 2015 until February 2016, 96 community-dwelling adults, AT users for mobility (powered wheelchairs, manual wheelchairs, lower limb prostheses, walkers, crutches, and canes), aged 45-97 (mean 67.02 +/- 14.24 years old), 56.3% female, were interviewed using the Psychosocial Impact of Assistive Devices Scale (P-PIADS), the Activities and Participation Profile related to Mobility (APPM), and demographic and clinical questions, as well as questions about AT use and training. The participants’ profiles revealed moderate limitation and restrictions in participation, as measured by the APPM (2.03). Most participants showed a positive impact of AT; the average scores on the P-PIADS subscales were: Self-esteem 0.62, Competency 1.11, and Adaptability 1.10. The P-PIADS total was 0.96, with powered wheelchair users scoring the highest (1.53) and walker users the lowest (0.73). All subscales and the P-PIADS total were positively correlated with the activities and participation profile. There was no relation between age and either the psychosocial impact of AT or the activities and participation profile. These results encourage the authors to follow these participants up for a lifelong intervention.
To accomplish that aim, currently, the protocol is implemented at the AT prescribing centers in Coimbra, Portugal in order to assess the impact of AT on participation in society, one of the domains of the Active Ageing Index, a new analytical tool to help policy makers in developing policies for active and healthy ageing.
ARTICLE | doi:10.20944/preprints202209.0490.v1
Subject: Medicine & Pharmacology, Sport Sciences & Therapy Keywords: rehabilitation; shoulder; electromyography feedback; visual biofeedback; assistive robot; musculoskeletal disorder
Online: 30 September 2022 (15:04:05 CEST)
While shoulder injuries represent the musculoskeletal disorders (MSDs) most encountered in physical therapy, there is no consensus on their management. As an attempt to provide standardized and personalized treatment, a robotic-assisted device combined with EMG biofeedback specifically dedicated to shoulder MSDs has been developed. The aim of this study was to determine the efficacy of an 8-week rehabilitation program (≈3 sessions a week) using a robotic-assisted device combined with EMG biofeedback (RA-EMG group) in comparison with a conventional program (CONV group) in patients presenting with shoulder MSDs. This is a retrospective cohort study including data from 2010 to 2013 on patients initially involved in a physical rehabilitation program for shoulder MSDs in a private clinic of Chicoutimi (Canada). Shoulder flexion strength and range of motion were collected before and after the rehabilitation program. Forty-four patients participated in a conventional program using dumbbells (CONV group), while 72 completed a program on the robot-assisted device with EMG and visual biofeedback (RA-EMG group); both programs consisted of 2 sets of 20 repetitions at 60% of maximal capacity. Results showed that the RA-EMG group had significantly greater benefits than the CONV group for shoulder flexion strength (+103.1% vs 67%, p = 0.016) and range of motion (+14.4% vs 6.1%, p = 0.046). The current retrospective cohort study showed that a specific and tailored rehabilitation program maintaining constant effort through automatic adjustment of the level of resistance was able to improve shoulder flexion strength and range of motion after an 8-week rehabilitation period, in comparison with a conventional approach, in patients with shoulder MSDs. This study provides new insight into shoulder MSD rehabilitation, and future research should be pursued to determine the added potential of this approach for abduction and external rotation with a randomized controlled design.
ARTICLE | doi:10.20944/preprints201903.0285.v1
Subject: Social Sciences, Education Studies Keywords: children with autism; education; learning tools; design; intervention; assistive technology
Online: 30 March 2019 (06:27:17 CET)
The prevalence of autism in children worldwide is estimated at one in 62 children, with higher levels reported in some countries. These children experience significant problems with the development of social, behavioural, and verbal and non-verbal communication skills. The level of skill impairment varies from one individual to another, which makes teaching autistic children a challenge for caregivers such as teachers and relatives. Hence, there are quite a number of frameworks for software learning systems which focus on gaining the children’s attention using representational visual illustration as a learning method instead of textual forms. However, the majority of these tools lack the personalisation needed to suit everyone on the spectrum. Assistive technology offers an alternative way to engage children with autism. Therefore, this research proposes an Adaptive Content Management Learning System (ACMLS) model to assist caregivers in producing, designing, and fine-tuning or customising learning materials so that the system interface and the materials suit every individual on the spectrum according to each child’s personal profile, aiming to make learning attractive and to contribute to improving their social, communication, and behavioural skills, as well as their attention to the delivered educational topics. The ACMLS model design adopts four main components: (1) Design, covering the visual design, design principles, and the mental model of children with autism; (2) Technology, covering the assistive technology tools and the architecture of the ACMLS system; (3) Education, covering the learning objectives, styles, strategies, methods, and the cognitive model; and (4) Participants, covering the main actors in the ACMLS model, such as caregivers and children with autism.
REVIEW | doi:10.20944/preprints201905.0251.v1
Subject: Engineering, Biomedical & Chemical Engineering Keywords: social robots; behavioural models; assistive robotics; cognitive architectures; empathy; human-robot interaction
Online: 20 May 2019 (12:31:23 CEST)
The cooperation between humans and robots is becoming increasingly important in our society. Consequently, there is growing interest in the development of models that can enhance the interaction between humans and robots. A key challenge in the Human-Robot Interaction (HRI) field is to provide robots with cognitive and affective capabilities, developing architectures that let them establish empathetic relationships with users. Several models have been proposed in recent years to address this open challenge. This work provides a survey of the most relevant of them. In detail, it offers an overview of the architectures present in the literature, focusing on three specific aspects of HRI: the development of adaptive behavioural models, the design of cognitive architectures, and the ability to establish empathy with the user. The research was conducted within two databases: Scopus and Web of Science. Precise exclusion criteria were applied to screen the 1007 articles found (in the end, 30 articles were selected). For each work, an evaluation of the model is made; the pros and cons of each are detailed by analysing the aspects that can be improved so that an enjoyable interaction between robots and users can be established.
ARTICLE | doi:10.20944/preprints202207.0068.v1
Subject: Mathematics & Computer Science, Analysis Keywords: blind; visually impaired; assistive devices; object recognition; navigation; virtual assistants; Smart Cities; Saudi Arabia
Online: 5 July 2022 (08:24:38 CEST)
Visually impaired people encounter many impediments and challenges in their lives, such as those related to mobility, education, communication, and the use of technology. This paper reports the results of an online survey conducted to understand the requirements and challenges blind and visually impaired people face in their daily lives regarding the availability and use of digital devices. The survey was conducted among the blind and visually impaired in Saudi Arabia using digital forms. A total of 164 people responded to the survey, most of them using the VoiceOver function. Respondents were asked about their use of smart devices, special devices, operating systems, object recognition apps, indoor and outdoor navigation apps, and virtual digital assistant apps; the purpose (navigation, education, etc.) of and difficulty in using these apps; the type of assistance needed; their reliance on others in using the assistive technologies; and their level of satisfaction with existing assistive technologies. The majority of the participants were 18–65 years old, with 13% under 18 and 3% above 65. Sixty-five percent of the participants were graduates or postgraduates, and the rest had only secondary education. The white cane, mobile phones, Apple iOS, Envision, Seeing AI, VoiceOver, and Google Maps were the devices, technologies, and apps most used by the participants. Navigation, at 39.6%, was the most reported purpose of the special devices, followed by education (34.1%) and office jobs (12.8%). The information from this survey, along with a detailed literature review of academic and commercial technologies for the visually impaired, was used to establish the research gap, the design requirements, and a comprehensive understanding of the relevant landscape, which in turn was used to design smart glasses called LidSonic for the visually impaired.
ARTICLE | doi:10.20944/preprints202110.0316.v1
Subject: Engineering, Biomedical & Chemical Engineering Keywords: Bone fracture; Median sternotomy; Ossification; Rehabilitation; Functional mobility; Assistive device; Feedback training; Sternal precautions; Instrumented walker
Online: 21 October 2021 (22:50:25 CEST)
Patients often need to use their arms to assist with functional activities, but after bone disruption, pushing is frequently limited to <10 lb (4.5 kg). No method exists to measure arm weight bearing objectively in clinical settings. This project aimed to design, construct, and test a walker for patients who need to limit arm force to prevent excessive bone stress during post-fracture (iatrogenic or traumatic) ossification. First, a qualitative study was conducted to obtain critiques of a Clinical Force Measuring (CFM) walker prototype from rehabilitation professionals. Key statements and phrases were coded, allowing “themes” to emerge from transcribed interviews, which guided device revisions. Next, a second CFM Walker prototype was designed based on the qualitative data and device criteria/constraints, and finally tested. The result was the fabrication of a new lightweight, streamlined, and cost-effective prototype walker with a simple visual display and an auditory cue with upper-limit alarms. Key features included attachments for medical equipment and thin-film force-sensing resistors integrated into the walker handles that progressively activated 3 LEDs and a buzzer when force exceeded programmed thresholds. The innovative CFM Walker will help patients with restricted arm force, especially older adults, recover more safely and quickly in the future.
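The progressive feedback logic described above (force-sensing resistors in the handles driving 3 LEDs and a buzzer) can be sketched as a simple threshold ladder. The fractions of the prescribed limit at which each LED lights are assumptions for illustration, not the published calibration.

```python
def walker_feedback(force_lb, limit_lb=10.0):
    """Map handle force (from the thin-film force-sensing resistors) to the
    walker's cues: LEDs light progressively at assumed fractions (50%, 75%,
    100%) of the prescribed limit, and the buzzer sounds once the limit is
    exceeded. Returns (number of lit LEDs, buzzer on/off)."""
    leds = sum(force_lb >= limit_lb * frac for frac in (0.5, 0.75, 1.0))
    buzzer = force_lb > limit_lb          # upper-limit alarm
    return leds, buzzer
```

The graded visual cue lets a patient back off before the alarm threshold is crossed, which is the training rationale the abstract describes.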
ARTICLE | doi:10.20944/preprints202110.0238.v1
Subject: Engineering, Biomedical & Chemical Engineering Keywords: Median sternotomy; Ossification; Cardiac surgery; Rehabilitation; Functional mobility; Bone fracture; Assistive device; Feedback training; Sternal precautions; Instrumented walker
Online: 18 October 2021 (10:43:01 CEST)
Patients recovering from bone disruption due to trauma or surgery need to limit movement to minimize shear force, thereby protecting callus formation and osteogenesis. Patients often use their arms to assist with functional activities, but pushing is frequently limited to <10 lb (4.5 kg). With only verbal instructions, patients’ ability to accurately limit weight-bearing (WB) force is poor. A therapeutic intervention using an instrumented walker to improve patient adherence to upper extremity (UE) WB guidelines during functional mobility could be beneficial. Therefore, the purpose of this article is to describe a feedback training protocol to improve the ability to modulate weight-bearing force in older adults, and then to provide an overview of the efficacy of this protocol and the subsequent development of a Clinical Force Measuring Walker. An instrumented walker was used to measure UE WB during functional mobility in healthy older subjects (n = 30) before, during, and after (immediately and 2 hours) a visual and auditory concurrent feedback training session. During feedback training, force was significantly reduced in all 3 sessions compared to baseline. When using the front-wheeled walker, UE WB force during the second and third feedback training trials decreased compared to the first trial. During the third feedback training trial, force was greater than in the two previous trials while transferring sit-to-stand and stand-to-sit. After completion of practice with feedback, UE WB force was significantly reduced and remained so 2 hours later. These findings suggest that feedback training is effective for helping patients modulate UE WB. Use of an instrumented walker and feedback training would be beneficial in clinical practice, especially with older patients. More intensive feedback training, with additional trials and/or simultaneous visual and auditory cues during whole-task practice, may be needed to get UE WB below the 10 lb threshold.
ARTICLE | doi:10.20944/preprints201907.0138.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: 3D printing; additive manufacturing; assistive devices; blind; obstacle avoidance; sensors; sensory substitution; ultrasonic sensing; ultrasound sensing; visually impaired
Online: 10 July 2019 (06:24:05 CEST)
Nineteen million Americans have significant vision loss. Over 70% of these are not employed full-time, and more than a quarter live below the poverty line. Globally, there are 36 million blind people, but less than half use white canes or more costly commercial sensory substitutes. The quality of life of visually impaired people is hampered by the resulting lack of independence. To help alleviate these challenges, this study reports on the development of a low-cost (<$24), open-source navigational support system that allows people with lost vision to navigate, orient themselves in their surroundings, and avoid obstacles when moving. The system can be largely made with distributed digital manufacturing using low-cost 3-D printing/milling. It conveys point-distance information by utilizing a natural active sensing approach and modulates measurements into haptic feedback with various vibration patterns within a distance range of 3 m. The developed system allows people with lost vision to solve the primary tasks of navigation, orientation, and obstacle detection (>20 cm stationary, moving up to 0.5 m/s) to ensure their safety and mobility. Sighted blindfolded participants successfully demonstrated the device on eight primary everyday navigation and guidance tasks, including indoor and outdoor navigation and avoiding collisions with other pedestrians.
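The distance-to-vibration modulation described above can be sketched as a mapping from a point-distance reading to a pulse pattern: within the 3 m range, closer obstacles produce faster, stronger pulses. The linear mapping and the interval/intensity limits are assumptions for illustration, not the device's published encoding.

```python
def distance_to_vibration(distance_m, max_range_m=3.0):
    """Convert a point-distance reading into a haptic pulse pattern.

    Returns None (motor off) beyond the sensing range; otherwise a
    (pulse interval in seconds, intensity duty cycle 0..1) pair, with
    closer obstacles giving shorter intervals and higher intensity.
    """
    if distance_m >= max_range_m:
        return None                               # out of range: motor off
    closeness = 1.0 - distance_m / max_range_m    # 0 (far) .. 1 (touching)
    interval_s = 0.1 + 0.9 * (1.0 - closeness)    # ~1.0 s far, 0.1 s near
    duty = 0.3 + 0.7 * closeness                  # stronger when near
    return round(interval_s, 3), round(duty, 3)
```

The graded pattern supports the active-sensing use the abstract mentions: sweeping the device across a scene lets the user read relative distances from the changing pulse rate.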
REVIEW | doi:10.20944/preprints201903.0033.v1
Subject: Engineering, Biomedical & Chemical Engineering Keywords: augmentative and alternative communication; assistive technologies; sensing modalities; signal processing; voice communication; machine learning; mobile health; speech disability
Online: 4 March 2019 (10:14:44 CET)
High-tech augmentative and alternative communication (AAC) methods are on a constant rise; however, the interaction between the user and the assistive technology still falls short of an optimal user experience centered around the desired activity. This review presents a range of signal sensing and acquisition methods utilized in conjunction with existing high-tech AAC platforms for speech-disabled individuals, including imaging methods, touch-enabled systems, mechanical and electro-mechanical access, breath-activated methods, and brain-computer interfaces (BCI). The listed AAC sensing modalities are compared in terms of ease of access, affordability, complexity, portability, and typical conversational speeds. A review of the associated AAC signal processing, encoding, and retrieval highlights the roles of machine learning (ML) and deep learning (DL) in the development of intelligent AAC solutions. The demands and costs of most systems were found to hinder the scale of usage of high-tech AAC. Further research is needed to develop intelligent AAC applications that reduce the associated costs and enhance the portability of the solutions for real user environments. The consolidation of natural language processing with current solutions also needs to be further explored to improve conversational speeds. Recommendations for prospective advances in upcoming high-tech AAC are addressed in terms of developments to support mobile health communicative applications.
ARTICLE | doi:10.20944/preprints202208.0215.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: visually impaired; smart mobility; sensors; LiDAR; ultrasonic; deep learning; obstacle detection; obstacle recognition; assistive tools; edge computing; green computing; sustainability; Arduino Uno; Smart App
Online: 11 August 2022 (11:12:58 CEST)
Over a billion people around the world are disabled; among them, 253 million are visually impaired or blind, and this number is greatly increasing due to ageing, chronic diseases, and poor environmental and health conditions. Despite many proposals, current devices and systems lack maturity and do not completely fulfill user requirements and satisfaction. Increased research activity in this field is required to encourage the development, commercialization, and widespread acceptance of low-cost and affordable assistive technologies for visual impairment and other disabilities. This paper proposes a novel approach using a LiDAR with a servo motor and an ultrasonic sensor to collect data and predict objects using deep learning for environment perception and navigation. We adopted this approach in a pair of smart glasses, called LidSonic V2.0, to enable the identification of obstacles for the visually impaired. The LidSonic system consists of an Arduino Uno edge computing device integrated into the smart glasses and a smartphone app that transmits data via Bluetooth. The Arduino gathers data, operates the sensors on the smart glasses, detects obstacles using simple data processing, and provides buzzer feedback to visually impaired users. The smartphone application collects data from the Arduino, detects and classifies items in the spatial environment, and gives spoken feedback to the user on the detected objects. In comparison to image-processing-based glasses, LidSonic uses far less processing time and energy to classify obstacles using simple LiDAR data, according to several measurements. We comprehensively describe the proposed system's hardware and software design, construct prototype implementations, and test them in real-world environments. Using the open platforms WEKA and TensorFlow, the entire LidSonic system is built with affordable off-the-shelf sensors and a microcontroller board costing less than $80.
Essentially, we provide designs of an inexpensive, miniature, green device that can be built into, or mounted on, any pair of glasses or even a wheelchair to help the visually impaired. Our approach affords faster inference and decision-making using relatively low energy with smaller data sizes as well as faster communications for the edge, fog, and cloud computing.
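The edge/phone split described above can be sketched in two parts: the glasses-side microcontroller reduces each servo sweep of LiDAR ranges to a small summary and applies an immediate buzzer rule, while the phone runs the learned classifier on the richer data. The specific features and the 100 cm alert threshold are assumptions for illustration, not the published pipeline.

```python
def sweep_features(ranges_cm):
    """Reduce one servo sweep of LiDAR ranges to a small feature vector
    (min, mean, span) - the kind of cheap summary the edge device can
    compute before handing data to the smartphone app's classifier.
    Feature choice is illustrative."""
    lo, hi = min(ranges_cm), max(ranges_cm)
    return lo, sum(ranges_cm) / len(ranges_cm), hi - lo

def buzzer_alert(ranges_cm, threshold_cm=100):
    """Glasses-side rule: buzz when anything in the sweep is closer than
    the threshold. This simple edge check guarantees immediate feedback
    even before the phone-side model classifies the obstacle."""
    return min(ranges_cm) < threshold_cm

# One left-to-right sweep (cm): an obstacle appears mid-sweep
sweep = [220, 210, 95, 90, 230]
features = sweep_features(sweep)
```

Keeping only a low-dimensional summary on the edge is consistent with the low processing time, energy, and data sizes the abstract reports for LiDAR-based classification compared with image processing.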