ARTICLE | doi:10.20944/preprints201809.0483.v1
Subject: Earth Sciences, Space Science Keywords: visually impaired people; mobile devices; assistive cartography.
Online: 25 September 2018 (10:32:06 CEST)
Vision is used to recognize images and to build mental pictures of environments. Visually impaired people lack aids for independent or facilitated urban mobility, a gap that can be addressed with mobile devices and cartographic tools that produce audiovisual output. This work addresses urban mobility and accessibility for visually impaired people in areas still unexplored by them, using cartographic technologies on electronic devices. For a preview of the test area, located in Monte Carmelo (MG), a tactile model gave the eight volunteers a first image of the site. Results obtained with blind and sighted individuals validated the use of the prototype for mobile registration of areas, whether positioned in the field or not, when the coordinates of the objects to be registered are known. The results indicate that both the tactile model and the audiovisual prototype can be used by blind and sighted people. Above all, the prototype proved to be a viable and adequate option for decision making in urban environments. New ways of presenting data to blind and visually impaired people remain to be studied.
ARTICLE | doi:10.20944/preprints202207.0068.v1
Subject: Mathematics & Computer Science, Analysis Keywords: blind; visually impaired; assistive devices; object recognition; navigation; virtual assistants; Smart Cities; Saudi Arabia
Online: 5 July 2022 (08:24:38 CEST)
Visually impaired people encounter many impediments and challenges in their lives, such as those related to mobility, education, communication, and the use of technology. This paper reports the results of an online survey conducted to understand the requirements and challenges blind and visually impaired people face in their daily lives regarding the availability and use of digital devices. The survey was conducted among the blind and visually impaired in Saudi Arabia using digital forms. A total of 164 people responded to the survey, most of them using the VoiceOver function. Respondents were asked about their use of smart devices, special devices, operating systems, object-recognition apps, indoor and outdoor navigation apps, and virtual digital assistive apps; the purpose (navigation, education, etc.) of and difficulty in using these apps; the type of assistance needed; their reliance on others when using assistive technologies; and their level of satisfaction with existing assistive technologies. The majority of the participants were 18-65 years old, with 13% under 18 and 3% above 65. Sixty-five percent of the participants were graduates or postgraduates, and the rest had only secondary education. The white cane, mobile phones, Apple iOS, Envision, Seeing AI, VoiceOver, and Google Maps were the devices, technologies, and apps most used by the participants. Navigation (39.6%) was the most reported purpose of the special devices, followed by education (34.1%) and office jobs (12.8%). The information from this survey, along with a detailed literature review of academic and commercial technologies for the visually impaired, was used to establish the research gap, the design requirements, and a comprehensive understanding of the relevant landscape, which in turn was used to design smart glasses, called LidSonic, for the visually impaired.
ARTICLE | doi:10.20944/preprints201907.0138.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: 3D printing; additive manufacturing; assistive devices; blind; obstacle avoidance; sensors; sensory substitution; ultrasonic sensing; ultrasound sensing; visually impaired
Online: 10 July 2019 (06:24:05 CEST)
Nineteen million Americans have significant vision loss. Over 70% of them are not employed full-time, and more than a quarter live below the poverty line. Globally, there are 36 million blind people, but fewer than half use white canes or costlier commercial sensory-substitution devices. The quality of life of visually impaired people is hampered by the resultant lack of independence. To help alleviate these challenges, this study reports on the development of a low-cost (<$24), open-source navigational support system that allows people with vision loss to navigate, orient themselves in their surroundings, and avoid obstacles while moving. The system can be largely made with distributed digital manufacturing using low-cost 3-D printing/milling. It conveys point-distance information using a natural active-sensing approach, modulating measurements into haptic feedback with various vibration patterns within a distance range of 3 m. The developed system allows people with vision loss to solve the primary tasks of navigation, orientation, and obstacle detection (>20 cm stationary, moving up to 0.5 m/s) to ensure their safety and mobility. Sighted blindfolded participants successfully demonstrated the device on eight primary everyday navigation and guidance tasks, including indoor and outdoor navigation and avoiding collisions with other pedestrians.
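The distance-to-haptics modulation this abstract describes can be sketched as a simple mapping from a range reading to a vibration pulse interval; the specific thresholds and intervals below are illustrative assumptions, not the device's actual firmware values.

```python
def vibration_pattern(distance_m, max_range_m=3.0):
    """Map a point-distance reading (meters) to a haptic pulse
    interval (seconds). Closer obstacles pulse faster; beyond the
    3 m sensing range the motor stays off. Thresholds are
    illustrative, not taken from the published design."""
    if distance_m is None or distance_m > max_range_m:
        return None  # out of range: no vibration
    if distance_m < 0.2:
        return 0.05  # near-collision: almost continuous buzzing
    # Stretch the pulse interval linearly from 0.05 s to 0.5 s
    return 0.05 + 0.45 * (distance_m / max_range_m)
```

A firmware loop would read the ultrasonic sensor, call this mapping, and drive the vibration motor at the returned interval.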
ARTICLE | doi:10.20944/preprints202107.0200.v1
Subject: Engineering, Electrical & Electronic Engineering Keywords: image quality assessment; image quality metrics; NR-IQAs; D-IQA; OCR accuracy; OCR prediction; OCR improvements; visual aids; visually impaired; reading aids; document images; text-based images
Online: 8 July 2021 (13:21:49 CEST)
For visually impaired people (VIPs), the ability to convert text to sound can mean a new level of independence or the simple joy of a good book. With significant advances in Optical Character Recognition (OCR) in recent years, a number of reading aids are appearing on the market. These reading aids convert images captured by a camera into text, which can then be read aloud. However, all of these reading aids suffer from a key issue: the user must be able to visually target the text and capture an image of sufficient quality for the OCR algorithm to function, which is no small task for VIPs. In this work, a Sound-Emitting Document Image Quality Assessment metric (SEDIQA) is proposed, which allows the user to hear the quality of the text image and automatically captures the best image for OCR accuracy. This work also includes testing of OCR performance against image degradations to identify the most significant contributors to accuracy reduction. The proposed No-Reference Image Quality Assessor (NR-IQA) is validated alongside established NR-IQAs, and this work includes insights into the performance of these NR-IQAs on document images.
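SEDIQA's actual metric is not reproduced here; as a minimal stand-in, a classic no-reference sharpness proxy of the kind NR-IQAs build on, the variance of a 4-neighbour Laplacian, can be sketched in plain Python (blurred or defocused document images score low, crisp text scores high):

```python
def laplacian_variance(img):
    """Variance of a 4-neighbour Laplacian over a grayscale image
    given as a list of rows of intensities. Higher variance means
    stronger edges, a common focus/blur proxy; this is an
    illustrative baseline, not the SEDIQA metric itself."""
    h, w = len(img), len(img[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x] + img[y][x - 1]
                   + img[y][x + 1] - 4 * img[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)
```

A sound-emitting assessor could map this score to audio pitch or volume so the user hears when the camera view is sharp enough to capture for OCR.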
ARTICLE | doi:10.20944/preprints201805.0135.v1
Subject: Social Sciences, Other Keywords: visually handicapped; visually handicapped sportsman; Rosenberg self-esteem
Online: 9 May 2018 (04:50:46 CEST)
The purpose of this study is to examine the self-esteem levels of visually handicapped individuals who do and do not do sports. The participants were 106 sportsmen and 94 non-sporting visually handicapped persons (200 in total) from clubs in the province of Izmir. As sub-problems, relationships involving the gender of the participants were investigated. The study consists of two parts: in the first part, the demographic characteristics of the participants were determined; in the second part, the Rosenberg Self-Esteem Scale, consisting of 10 questions, was used. Data were analyzed with the SPSS 18.0 package. T-tests, correlation analysis, and descriptive statistics were applied to test the hypotheses of the study. The research found a significant difference between the self-esteem levels of visually handicapped individuals who do and do not do sports (p < 0.05). No statistically significant difference was found between the self-esteem levels of the sportsmen and the sportswomen (p > 0.05), nor between visually handicapped individuals who played individual sports and those who played team sports (p > 0.05). As a result, sports were seen to have a positive effect on the self-esteem of visually handicapped individuals, helping them engage with and hold on to life more meaningfully.
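The group comparisons above rest on the independent-samples t-test; its core statistic can be sketched as follows. The sample scores in the test are invented for illustration and are not the study's data.

```python
def t_statistic(a, b):
    """Pooled-variance independent-samples t statistic, the kind of
    comparison used for the self-esteem scores of two groups. The
    p-value lookup (done in SPSS in the study) is omitted here."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (ma - mb) / (sp2 * (1 / na + 1 / nb)) ** 0.5
```

A large absolute t relative to the t distribution with na + nb - 2 degrees of freedom yields p < 0.05, i.e. a significant group difference.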
ARTICLE | doi:10.20944/preprints202208.0215.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: visually impaired; smart mobility; sensors; LiDAR; ultrasonic; deep learning; obstacle detection; obstacle recognition; assistive tools; edge computing; green computing; sustainability; Arduino Uno; Smart App
Online: 11 August 2022 (11:12:58 CEST)
Over a billion people around the world are disabled; among them, 253 million are visually impaired or blind, and this number is increasing greatly due to ageing, chronic diseases, and poor environmental and health conditions. Despite many proposals, current devices and systems lack maturity and do not completely fulfill user requirements or provide satisfaction. Increased research activity in this field is required to encourage the development, commercialization, and widespread acceptance of low-cost, affordable assistive technologies for visual impairment and other disabilities. This paper proposes a novel approach that uses a LiDAR with a servo motor and an ultrasonic sensor to collect data and predict objects using deep learning for environment perception and navigation. We adopted this approach in a pair of smart glasses, called LidSonic V2.0, to enable the identification of obstacles for the visually impaired. The LidSonic system consists of an Arduino Uno edge computing device integrated into the smart glasses and a smartphone app that communicates with it via Bluetooth. The Arduino gathers data, operates the sensors on the smart glasses, detects obstacles using simple data processing, and provides buzzer feedback to visually impaired users. The smartphone application collects data from the Arduino, detects and classifies objects in the spatial environment, and gives spoken feedback to the user on the detected objects. Compared with image-processing-based glasses, LidSonic uses far less processing time and energy to classify obstacles using simple LiDAR data, according to several measurements. We comprehensively describe the proposed system's hardware and software design, construct prototype implementations, and test them in real-world environments. Using the open platforms WEKA and TensorFlow, the entire LidSonic system is built with affordable off-the-shelf sensors and a microcontroller board costing less than $80.
Essentially, we provide designs for an inexpensive, miniature, green device that can be built into, or mounted on, any pair of glasses or even a wheelchair to help the visually impaired. Our approach affords faster inference and decision-making at relatively low energy, with smaller data sizes and faster communications for edge, fog, and cloud computing.
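The pipeline the abstract describes, a servo-swept LiDAR producing a small distance profile that a learned model classifies, can be sketched in miniature. The feature set and the nearest-centroid classifier below are illustrative placeholders for the WEKA/TensorFlow models, and the centroid values are invented.

```python
def sweep_features(readings):
    """Condense one servo sweep of LiDAR distances (cm) into a tiny
    feature vector: nearest distance, mean distance, and the angle
    index of the nearest point. A stand-in for the real features."""
    nearest = min(readings)
    return [nearest, sum(readings) / len(readings), readings.index(nearest)]

def classify(features, centroids):
    """Nearest-centroid classifier standing in for the trained model
    on the smartphone. `centroids` maps label -> feature vector and
    is assumed to come from prior training."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: sq_dist(features, centroids[lbl]))
```

On the real system, the Arduino streams the sweep over Bluetooth, the phone classifies it, and the label is spoken to the user.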
ARTICLE | doi:10.20944/preprints201803.0058.v2
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: indoor navigation; wayfinding; visually impaired navigation; sensor fusion
Online: 27 December 2018 (11:37:53 CET)
Indoor navigation systems must deal with the absence of GPS signals, which are reliably available only outdoors. Indoor systems therefore have to rely on other techniques for positioning users. Various indoor navigation systems have recently been designed and developed to help visually impaired people. In this paper, an overview of some existing indoor navigation systems for visually impaired people is presented, and the systems are compared from different perspectives. The evaluated techniques are ultrasonic systems, RFID-based solutions, computer-vision-aided navigation systems, and smartphone-based applications.
ARTICLE | doi:10.20944/preprints202111.0045.v1
Subject: Mathematics & Computer Science, General & Theoretical Computer Science Keywords: Pattern based access; Graphical password; safe password; non-intuitive password; non-static password; visually encrypted password
Online: 2 November 2021 (11:06:37 CET)
With increasing vulnerabilities and a vast technology landscape, it is extremely critical to build systems that are highly resistant to cyber-attacks. It is almost impossible to build a 100% secure authentication and authorization mechanism merely through a standard password or PIN (with any combination of special characters, numbers, and upper/lower-case letters, or with any of the graphical password mechanisms). The immense computing capacity available to attackers and the variety of hacking methods in use make almost every authentication method susceptible to cyber-attacks in one way or another. The only proven system that remains invulnerable despite highly sophisticated computing power is the human brain. In this paper, we present a new method of authentication that combines the computer's computing ability with human intelligence; this human intelligence is personalized, making the overall security method more secure. Text-based passwords are easy to crack, so there is an increasing need for alternative and more complex authentication and authorization methods. Some methods in the category of graphical passwords remain susceptible when shoulder surfing, cameras, or spy devices are used.
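The general idea of a non-static, pattern-based scheme of the kind this abstract motivates can be sketched as a challenge-response protocol: the server shows a fresh random grid, and the user's secret is a mental rule (here, which cells to read) applied to it, so the transmitted response changes on every login and resists shoulder surfing. This sketch is a generic illustration, not the scheme proposed in the paper.

```python
import random

def make_challenge(size=9, seed=None):
    """Server side: generate a fresh random digit grid for each
    login attempt, so the observable response is never static."""
    rng = random.Random(seed)
    return [rng.randrange(10) for _ in range(size)]

def expected_response(challenge, secret_positions):
    """The user's secret is WHICH cells to read; applying that
    mental rule to the challenge yields a one-time response."""
    return "".join(str(challenge[i]) for i in secret_positions)

def verify(challenge, secret_positions, response):
    """Server side: recompute the expected response and compare."""
    return response == expected_response(challenge, secret_positions)
```

An observer who records one grid and one response learns little about the rule itself, since the next challenge demands a different response.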