Preprint
Article

This version is not peer-reviewed.

AI- and IoT-Integrated Framework for Intelligent Sensing and Accessibility in Smart Transportation Systems

Submitted: 24 October 2025

Posted: 27 October 2025


Abstract
This paper presents an intelligent, sensor-driven framework that integrates emerging technologies to deliver smart, reliable, and accessible transit assistance for visually impaired people (VIP). The proposed system leverages Internet of Things (IoT), Internet of Devices (IoD), GPS, and crowdsensing to collect multimodal data, comprising audio, video, and environmental signals, used to characterize and respond to users’ real-time mobility needs. A cloud-based architecture performs centralized data fusion and decision-making using artificial intelligence (AI) and machine learning (ML) algorithms, enabling rapid interpretation of sensor inputs and generation of personalized navigation guidance. The framework is implemented through a mobile application that coordinates data exchange between edge devices and cloud services, providing context-aware navigation, obstacle alerts, and two-way communication with transit operators. Unlike existing assistive mobility solutions, which rely primarily on static location services and lack cross-sensor integration, the proposed system introduces a unified AI-enabled sensing layer that supports dynamic adaptation to complex urban environments. The results demonstrate the framework’s potential to enhance autonomy, safety, and situational awareness for VIPs, offering a scalable foundation for inclusive smart transportation systems.

1. Introduction

Public transportation plays a pivotal role in urban living, facilitating access to workplaces, educational institutions, medical appointments, social gatherings, and more. However, navigating public transit can be daunting and restrictive for individuals with visual impairments: the absence of accessibility features and information within these systems severely limits their mobility and independence. A transportation system for visually impaired persons (VIPs) should therefore be accessible and safe. In general, such a system should provide the following features: (1) accessibility, with audible start and stop announcements on public buses, subways, ridesharing vehicles, and trains; (2) designated personnel or support staff for VIPs; (3) audible pedestrian signals; (4) Braille signage at transit stations giving directions, service details, and other relevant information; (5) audio announcements and information booths; (6) smart tickets, smart cards, or mobile tickets instead of physical tickets; (7) special transport services for VIPs who cannot use fixed-route public transit; (8) accessible cabs and ridesharing services equipped with ramps and audio systems; and (9) mobile applications.
To deliver all these features in a mobile application for VIP transportation, emerging technologies such as artificial intelligence (AI) and machine learning (ML) can play a vital role in enhancing VIPs' quality of life. Data can be collected through the Internet of Things (IoT), Internet of Devices (IoD), crowdsensing, audio and video capture, and GPS. The information is then processed, either locally or on a cloud server, by AI and ML algorithms that perform data classification, analysis, and prediction and generate output for the VIP: transportation information such as schedules, route changes, arrival times, and alerts about obstacles. Most existing approaches and mobile applications [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31] do not employ AI and ML algorithms to achieve better predictions and outcomes.
Hence, this paper introduces an AI- and ML-based framework for developing a mobile application for VIPs. The application based on the proposed framework, called Accessible Transit Made Easy (ATME), aims to improve the public transportation experience of people with visual impairments in response to these difficulties. By leveraging contemporary technologies such as IoT, AI, and ML, ATME addresses the particular demands and challenges experienced by people with visual impairments. It gives users step-by-step instructions based on current bus and train timetables and detailed station layouts, assuring them that they can reach their destination. The application caters to users with differing degrees of visual impairment by emphasizing accessibility as a key principle, providing features such as varied text-to-speech options, high-contrast color schemes, and large font sizes. A standout feature of ATME is its ability to issue timely alerts and notifications as users approach their stops, mitigating the risk of missed connections or disembarking at the wrong location. Bolstering safety and peace of mind during travel, the application also facilitates two-way communication with transit operators, enabling users to request assistance whenever needed. We aim to achieve the following by building ML-based mobile applications for VIPs:
  • Ensure the empowerment and rights of individuals with disabilities, enabling them to actively engage in contemporary society. We are committed to enhancing user independence and facilitating their ability to independently manage everyday transportation tasks.
  • Enhance accessibility to a range of public transportation options, with a particular focus on improving access for individuals with visual impairments.
  • Enable our users to fully leverage the affordability of public transportation, thereby reducing the additional expenses associated with seeking assistance or specialized services for their mobility within the city.
  • Reduce greenhouse gas emissions, based on the principle that fewer people driving leads to a more favorable environmental impact. If individuals with visual impairments begin utilizing public transportation systems, emissions caused by their private chauffeurs would be reduced.
  • Provide a wide range of training materials and user assistance to empower individuals with visual impairments to fully utilize the app's functionalities.
The rest of the paper is organized as follows. Section 2 presents existing mobile applications and transportation frameworks for VIPs. Section 3 presents the proposed ATME framework for VIPs including different components of this framework. Section 4 presents different AI and ML algorithms, the challenges of using these algorithms, and our proposed framework. Section 5 concludes the paper with future work.

2. Literature Review

We present existing work [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31] on the transportation systems for visually impaired people (VIPs) in this section.
Existing research highlights the importance of mobile applications in enhancing the daily lives of individuals with visual impairments. Applications such as "Be My Eyes" [1] have demonstrated the potential for smartphones to provide real-time assistance and information, making various tasks more accessible. This app promotes independence and accessibility for people with visual impairments by leveraging the power of human connections and technology, and it suggests a global voice-control method for non-visual access to the Android operating system. In [2], the authors examine the spatiotemporal imbalance of accessibility to demand-responsive transit (DRT) service for people with disabilities in Seoul, South Korea. The study analyzes historical data on call-taxi services for disabled people and identifies the reasons for imbalanced accessibility. The results show spatial and temporal imbalances in DRT service, with insufficient supply during the night and the concentration of drivers' breaks affecting accessibility. The study suggests increasing supply, distributing drivers' breaks evenly, reallocating garages, and improving traffic environments to improve accessibility.
A systematic literature review that examines the application of information communication technology (ICT) for visually impaired people is given in [3]. It discusses the use of assistive technology, e-accessibility, and virtual interfaces to improve the lives of visually impaired individuals. The article emphasizes the need for collaboration among healthcare professionals, caregivers, programmers, engineers, and policymakers, as well as the adoption of policies in future ICT projects. It also highlights the challenges and limitations in developing and implementing these technologies and calls for further research and governmental support. The authors in [4] introduce the Intelligent Eye mobile application, which is designed to assist visually impaired individuals by providing features such as light detection, color detection, object recognition, and banknote recognition. The practical and cost-effective application combines multiple features into a single device. User acceptability tests indicate that the application is well-received by visually impaired individuals. Future enhancements include adding features like barcode scanning, GPS reader, pedestrian guide, and traffic light detector.
In [5], the authors discuss the development of a navigation system called SMART_EYE, which assists visually impaired individuals in navigating unfamiliar environments and detecting obstacles. The system employs an intelligent application incorporating AI and sensor technologies for capturing and categorizing images, with obstacle detection accomplished through ultrasonic sensors. Real-time information on obstacles is conveyed through voice commands. Additionally, the paper encompasses a review of existing research and methodologies within the field of assistive technology for individuals with visual impairments.
In [6], the authors survey VIPs and find that most use Google Maps, while a small percentage use Apple Maps, electronic timetables, or Soundscape. The survey also identifies that VIPs look for a mobile application with features such as touch and sound for gathering information, training features or support staff, AODA compliance (large fonts, clear images/text, proper color/contrast), and the use of Braille to gather information from the context.
Mobile applications have been developed for VIPs in different parts of the world. TransmiGuia [7] is an Android-based mobile app for VIPs in Colombia and Greece that works with geolocation and voice recognition, and TEUBICAIS [7] helps VIPs move around in Panama. These applications work by activating the satellite geolocation of the smartphone: if VIPs lose their path, they can be tracked using the GPS of their mobile phone. The applications operate in a tutor mode, used for learning the transport activities of the VIPs, and a visually impaired mode, used to send location, route information, and alert messages to the central server if any VIP loses their path. Other prominent mobile applications for VIPs are TrAVEI in Singapore, the Busalert system in Brazil, Transportation for Edinburgh, OneBusAway with StopInfo in the USA, Moovit, EyeSight, AccessNote, iMove [7], the Tyflo Technologies Set (TYFLOSET) in the Czech Republic [8], the Navigation and Control System for the Blind (NOPPA) in Finland, and the Personal Assistant for the Visually Impaired (PAVIP) in Switzerland [8].
In [10], Lima et al. introduce an artificial intelligence (AI)-based application that helps VIPs locate themselves, using visual recognition to capture images of landmarks important to partially sighted persons; other users then use these images to identify the same landmarks. The work in [9] introduces a Web-based application for vocational training of VIPs that includes voice recognition and gesture control for navigating the website, along with Braille reading and music systems that can be learned using mobile devices. In [11], the authors introduce PerSEEption, a mobile and Web application for VIPs that covers basic Braille, such as numbers, the alphabet, and punctuation, working together with audio and vibration.
NavTU is an Android-based mobile application introduced in Thailand for VIPs by Somyat et al. [12]. NavTU uses GPS and camera-based technology to help VIPs navigate outdoors and identify obstacles, avoiding possible collisions with concrete or electric poles. A VIP can record walks along preferred paths that might not be available in Google Maps and reuse them later if required. The work in [13] introduces a voice-over-based mobile application that assists VIPs in performing daily tasks without help, such as reading out an SMS to send over the phone, sending emails with a voice assistant, scanning text with the device's camera to read aloud, and writing a diary by voice.
In [14], Lin et al. introduce a smart-glasses application system based on deep learning for VIPs. The proposed system uses EPSON BT-300 smart glasses and an Android system to perform image recognition: the VIP takes pictures with a built-in camera and uploads them to a back-end deep learning object detection system. Among other works, the authors of [15] introduce IoT-based smart shoes to help VIPs with issues they face in everyday life. Another sensor-based solution, a wireless bus identification system [16], uses two detection subsystems: (1) a VIP's personal assistance subsystem and (2) a bus driver's subsystem. When a VIP reaches a bus stop, the system senses the presence of a bus in the area and transmits this to the Arduino sensors. The VIP switches to user mode, which signals the bus driver, through a receiver installed on the driver's module, about the VIP's presence; the driver can then communicate with the VIP over a wireless Bluetooth connection. This system provides a safe, robust solution for VIPs; however, it does not include navigation features such as how to reach the destination, schedule changes, route changes, and other required services.

3. Proposed Framework

The conceptual architecture of the proposed transportation framework for visually impaired persons (VIPs) is illustrated in Figure 1. The architecture comprises the following components: (1) visually impaired people (VIPs); (2) a cloud server; (3) IoT, IoD, crowdsensing, and other techniques for collecting data to be used by the cloud server for further processing; (4) mobile applications; and (5) a data processing component that uses AI and ML algorithms. Several technologies are integrated into this framework and connected to the cloud server: the Internet of Things (IoT), Internet of Devices (IoD), and crowdsensing, all sensor-based and used for data collection. Moreover, the mobile application uses ML and AI at the backend (cloud or Web server) for data processing, classification, and analysis, providing predictions and output to VIPs. The following subsections present each of the components.

3.1. Cloud Server

The system leverages cloud services for hosting and managing its infrastructure to ensure robust infrastructure and scalability. The cloud environment supports real-time data integration with transit systems, enabling the mobile application to access up-to-date information about bus and train schedules, as well as other relevant transit data. The mobile application requires an Internet connection to communicate with the cloud services and retrieve real-time transit information.
Developing a cloud system aims to provide autonomous planning, control, monitoring, and data analysis. The data collected during transit can be easily transferred, managed, and analyzed, and the information exchange between the cloud services and the mobile application takes place over the Internet. One characteristic of this interaction is that it is asynchronous: for instance, the mobile application notifies the cloud of its status by sending its current location and battery level. VIPs can plan their trips based on the information provided by the cloud services through the mobile application. Figure 2 illustrates how a smart bus and a smart bus stop interact with VIPs using the mobile application, which is connected to a cloud server for back-end data processing and decision-making using AI and ML algorithms.
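As a sketch of this asynchronous interaction, the periodic status message the application sends to the cloud could be serialized as below; the field names and schema are illustrative assumptions, not the final ATME API.

```python
import json
import time

def build_status_update(user_id: str, lat: float, lon: float, battery_pct: int) -> str:
    """Serialize the app's periodic status message for the cloud server.

    The field names are hypothetical; the actual schema would be
    defined by the ATME cloud API.
    """
    payload = {
        "user_id": user_id,
        "location": {"lat": lat, "lon": lon},
        "battery_pct": battery_pct,
        "timestamp": int(time.time()),  # epoch seconds; the cloud stores UTC
    }
    return json.dumps(payload)
```

The message is fire-and-forget: the app posts it on a timer and does not block on a reply, which is what makes the app-to-cloud exchange asynchronous.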

3.2. IoT, IoD, and Other Technologies

The Internet of Things (IoT), Internet of Devices (IoD), and crowdsensing emerged with the advent of smartphones, sensor technologies, and smart devices. These technologies collect data from visually impaired people (VIPs) through their smartphones, which connect to other devices and transfer the data to the cloud server; the data can also be processed locally by sensors integrated with smart devices. Many IoT-based solutions already exist [17,18,19,20,21,22,23,24] for providing transportation services to VIPs, such as Wayfindr [17], Sunu Band [18], WEWALK Smart Cane [19], Talkingsign [20], Aira [21], and IoT-enabled public transportation systems [22].

3.3. Mobile Application

The most important component of the proposed framework is a mobile application connected to a cloud or central server to provide input and receive output for the VIPs. Since the proposed framework targets individuals with disabilities, it is essential that the application comply with the Web Content Accessibility Guidelines (WCAG). This involves making it compatible with screen readers, voice assistants, and other assistive technologies commonly used by individuals with visual impairments. This user-centered approach increases the chances of creating a successful and impactful solution. The proposed framework involves three stages.
Bus Stop Information – We will develop a comprehensive SQL database of bus stops and transit stations. From there, we will collaborate with local transit agencies to ensure the app incorporates real-time information about bus schedules and transit station layouts, providing accurate and up-to-date navigation assistance. Reasoning: establishing data-sharing agreements maintains the reliability of the service, and integrating real-time data about transit schedules and station layouts enhances the accuracy and reliability of the navigation assistance, ensuring users have access to the most up-to-date information and a better experience.
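As an illustration of this stage, the bus-stop database could be prototyped with SQLite as below; the table layout, column names, and accessibility flag are hypothetical placeholders, not the final ATME schema.

```python
import sqlite3

# In-memory SQLite stands in for the production SQL database of stops.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE bus_stops (
        stop_id TEXT PRIMARY KEY,
        name    TEXT NOT NULL,
        lat     REAL NOT NULL,
        lon     REAL NOT NULL,
        has_audio_announcements INTEGER DEFAULT 0  -- accessibility flag
    )
""")
conn.executemany(
    "INSERT INTO bus_stops VALUES (?, ?, ?, ?, ?)",
    [
        ("S100", "Main St & 1st Ave", 45.4215, -75.6972, 1),
        ("S101", "Main St & 2nd Ave", 45.4230, -75.6950, 0),
    ],
)

def accessible_stops(conn):
    """Return stops that provide audio announcements."""
    rows = conn.execute(
        "SELECT stop_id, name FROM bus_stops WHERE has_audio_announcements = 1"
    )
    return [dict(stop_id=r[0], name=r[1]) for r in rows]
```

Real-time schedule data from transit agencies would be layered on top of this static stop table rather than stored in it.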
Navigation and Direction - to provide step-by-step directions to users, we will integrate the Google Maps API into our system to access its routing capabilities and transmit data. The algorithm will begin by retrieving the user’s current location using geolocation services. From there, the API’s directions service will calculate the optimal route for the user.
The algorithm will provide step-by-step directions that can be presented to the user following WCAG, guiding them through their journey using clear instructions and visual representations. Reasoning: The Google Maps API offers numerous useful features. Most notably, it offers powerful routing algorithms that can calculate the most efficient routes while considering real-time traffic conditions. Also, using the API ensures our navigation is accurate and up to date.
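A minimal sketch of this integration is shown below, following the Directions API's standard request format and `routes`/`legs`/`steps` response structure; the API key and the canned response used for testing are placeholders, and a real deployment would send the request over HTTPS and handle errors.

```python
from urllib.parse import urlencode

DIRECTIONS_ENDPOINT = "https://maps.googleapis.com/maps/api/directions/json"

def build_directions_url(origin: str, destination: str, api_key: str) -> str:
    """Compose a Directions API request for a transit route.

    `api_key` is a placeholder; a valid key is required to actually send it.
    """
    params = {"origin": origin, "destination": destination,
              "mode": "transit", "key": api_key}
    return f"{DIRECTIONS_ENDPOINT}?{urlencode(params)}"

def extract_steps(directions_json: dict) -> list:
    """Flatten the first route of a Directions response into spoken-style steps."""
    steps = []
    for leg in directions_json["routes"][0]["legs"]:
        for step in leg["steps"]:
            steps.append(step["html_instructions"])
    return steps
```

The extracted step strings would then be fed to the text-to-speech layer, with HTML tags stripped, to produce WCAG-conformant audible directions.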
Mobile Development with Accessibility Features – the mobile application will be developed using Flutter with accessibility features such as high-contrast colors, large fonts, and text-to-speech functionality to make the application usable for individuals with visual impairments. This application is expected to read from the Bus Stop Information database and provide directions using the Google Maps API. Reasoning: Flutter enables cross-platform development, allowing developers to build apps for both iOS and Android using a single codebase. This significantly reduces development time and effort since there's no need to create separate apps for each platform. Additionally, Flutter provides excellent native integration, granting access to platform-specific features and APIs. This means developers can leverage device functionalities such as the camera, geolocation, and sensors, enhancing the app's capabilities and user experience.
Based on the features presented above we have designed a mobile application prototype. Figure 3 illustrates the application prototype and how mobile applications work with the other components of the framework.
The mobile application has four subsystems for interacting with the VIPs: login/authentication, voice analysis, transit information integration, and navigation.
Login/Authentication Subsystem – Figure 4 illustrates the login/sign-in component of the proposed mobile application, which works as follows:
User Registration: Develop a user registration system with email validation and password setup.
Authentication: Implement secure authentication protocols (such as OAuth or JWT) to ensure user data privacy and app security.
User Profiles: Allow users to create profiles, storing essential information and preferences securely.
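The token-based authentication step could be sketched as below: a minimal HS256 JWT-style signer using only the standard library. The secret and user IDs are placeholders, and a production system would use a vetted library such as PyJWT with expiry claims rather than this hand-rolled sketch.

```python
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def issue_token(user_id: str, secret: bytes) -> str:
    """Create a minimal HS256 JWT-style token: header.payload.signature."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps({"sub": user_id}).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_token(token: str, secret: bytes) -> bool:
    """Recompute and compare the signature in constant time."""
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)
```

Because the signature covers the header and payload, any tampering with the user identity invalidates the token, which is the property the app relies on for secure sessions.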
Voice Analysis Subsystem – Figure 5 demonstrates the flowchart of the voice analysis subsystem of the proposed mobile application. This subsystem includes the following features.
Speech Recognition: Integrate a speech recognition system to convert user voice commands into text.
Natural Language Processing (NLP): Implement NLP algorithms to understand user intents and extract relevant information from voice inputs.
Voice Feedback: Utilize text-to-speech technology to provide vocal responses and directions to users.
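A minimal rule-based stand-in for the NLP intent-extraction step is sketched below; the intent names and trigger phrases are illustrative assumptions, and a deployed system would use a trained NLP model rather than regular expressions.

```python
import re

# Hypothetical intents for the ATME voice interface, matched by keyword rules.
INTENT_PATTERNS = {
    "next_bus":     re.compile(r"\b(when|next)\b.*\bbus\b", re.IGNORECASE),
    "plan_route":   re.compile(r"\b(take me|route|directions?)\b", re.IGNORECASE),
    "request_help": re.compile(r"\b(help|assist|operator)\b", re.IGNORECASE),
}

def classify_intent(utterance: str) -> str:
    """Map transcribed speech to one of the app's intents."""
    for intent, pattern in INTENT_PATTERNS.items():
        if pattern.search(utterance):
            return intent
    return "unknown"
```

The classified intent then selects which subsystem handles the request, and the text-to-speech layer voices the response back to the user.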
Transit Information Integration Subsystem – includes the following components.
Real-time Data APIs: Connect with public transportation APIs to access live bus and train schedules, station layouts, and accessibility information.
Data Processing: Process and normalize transit data to ensure consistency and accuracy in the information provided to users.
Data Storage: Store transit data securely, enabling efficient retrieval and updates as needed.
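The data-processing step above could look like the following sketch, which maps a hypothetical raw feed record onto the app's internal schema; the raw field names are assumptions, since real feeds (e.g. GTFS-Realtime) define their own schemas that would be mapped the same way.

```python
from datetime import datetime, timezone

def normalize_arrival(record: dict) -> dict:
    """Normalize one raw arrival record into the app's internal schema.

    Trims identifiers, upper-cases route names, and converts epoch
    timestamps to ISO-8601 UTC so all feeds look identical downstream.
    """
    return {
        "stop_id": str(record["stopCode"]).strip(),
        "route": record["routeShortName"].upper(),
        "arrival_utc": datetime.fromtimestamp(
            record["arrivalEpoch"], tz=timezone.utc
        ).strftime("%Y-%m-%dT%H:%M:%SZ"),
        "realtime": bool(record.get("isLive", False)),
    }
```

Normalizing every feed to one schema is what keeps the information presented to users consistent regardless of which agency supplied it.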
Navigation Subsystem – Figure 6 illustrates the sequence diagram of the navigation subsystem.
GPS and Location Services: Utilize GPS and location services on mobile devices to track users' positions in real time.
Routing Algorithm: Develop a robust routing algorithm considering factors like transit schedules, station layouts, and user preferences.
Interactive Maps: Implement interactive maps displaying transit routes, stations, and user location, providing a visual navigation aid.
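As a sketch of the routing core, a Dijkstra search over a transit graph weighted by travel minutes might look like this; the toy graph and weights are invented for illustration, and the real routing algorithm would also fold in live schedules, station layouts, and user preferences as the text describes.

```python
import heapq

def shortest_route(graph: dict, start: str, goal: str):
    """Dijkstra over a transit graph whose edge weights are travel minutes.

    `graph` maps a stop to {neighbor: minutes}.
    Returns (total_minutes, [stops along the best path]).
    """
    queue = [(0, start, [start])]  # (accumulated cost, stop, path so far)
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, minutes in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + minutes, nxt, path + [nxt]))
    return float("inf"), []
```

The returned path feeds both the interactive map overlay and the step-by-step audio directions.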

4. AI and ML Algorithms

We studied several AI and ML algorithms [23,24,25,26,27,28,29,30,31] in the literature and identified their usage in transportation, their strengths, and their weaknesses. We compare them in terms of different features in Table 1.

4.1. AI and ML Component

Though a few solutions based on AI and ML [23,24,25,26,27,28,29,30,31] exist in the literature that address real-world transport systems for the VIP, they also face several challenges:
  1. Training machine learning models on biased or insufficient data can result in biased outcomes. Ensuring that training datasets are diverse, representative, and inclusive of various visual impairments and demographics is essential.
  2. Many machine learning models, especially deep neural networks, are often considered "black boxes," making it challenging to understand how they arrive at specific decisions. Ensuring interpretability is crucial for building trust and understanding among users and caregivers.
  3. VIPs have diverse needs, and a one-size-fits-all approach may not be effective. Developing AI solutions that can adapt to individual preferences and requirements is essential for user satisfaction.
  4. AI solutions often involve processing and analyzing sensitive information. Ensuring robust privacy measures and protecting user data from unauthorized access is crucial to building and maintaining trust.
  5. Integrating AI solutions with existing assistive technologies or devices can be challenging. Ensuring seamless compatibility and interoperability is essential for a holistic and effective user experience.
  6. Making AI solutions financially accessible to a wide range of users is important. High costs can create barriers to entry for individuals with limited financial resources, limiting the impact of these technologies.
  7. Continuous learning and adaptation: the dynamic nature of visual environments and user needs requires AI models to continuously learn and adapt, and developing systems that can evolve with changing conditions and user preferences is a significant challenge.
  8. Ethical considerations and legal and regulatory compliance pose great challenges for the development and deployment of AI solutions.
  9. Providing adequate training and support for visually impaired individuals and their caregivers is crucial for successful adoption. Ensuring that users understand how to interact with and benefit from AI solutions is a challenge.
In our proposed framework, we plan to use natural language processing (NLP) algorithms to understand user intents and extract relevant information from voice inputs. Each of these aspects plays a crucial role in the development of the mobile application, ensuring seamless user experience, accurate navigation, and secure interactions with the system.
To identify whether a VIP is in a dangerous situation on the road, such as an accident, road closure, or transportation service interruption, we use different machine learning algorithms. In future experimental studies of the proposed framework we plan to include various ML algorithms, such as Logistic Regression (LR), Linear Discriminant Analysis (LDA), Classification and Regression Trees (CART), and Naive Bayes (NB). Logistic regression analyzes and classifies data and can predict an outcome based on earlier observations of a dataset [23]; for example, it can predict the possibility of an accidental situation for a VIP based on the data collected in the cloud server, and more data in the dataset yields better predictions with the LR algorithm. We plan to use binary logistic regression (BLR) in our experiments to measure the performance of the proposed framework. We would also implement Linear Discriminant Analysis (LDA) [24], a supervised classification method for creating ML models in which dimensionality-reduction techniques reduce the number of dimensions in a dataset; it is used in image recognition and predictive analysis. Decision trees are an important type of predictive-modeling algorithm in machine learning, and Classification and Regression Trees (CART) is a modern decision tree technique; the classical decision tree algorithm is one of the most popular algorithms for creating a model that predicts a value based on a target. Naive Bayes is a classification technique for predictive modeling based on Bayes' Theorem [25], which assumes the independence of every input variable. The model is simple, easy to build, and very useful when dealing with large datasets, and even compared with highly sophisticated classification methods, the performance of NB is outstanding.
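To make the binary logistic regression case concrete, the following pure-Python sketch fits BLR by gradient descent on a toy hazard dataset; the two features (obstacle proximity, traffic density) and their labels are invented for illustration, and the planned experiments would use a library implementation such as scikit-learn on real cloud-collected data.

```python
import math

def train_blr(X, y, lr=0.5, epochs=500):
    """Fit binary logistic regression with stochastic gradient descent.

    X: list of feature tuples, y: list of 0/1 labels (1 = hazardous).
    Returns learned weights and bias.
    """
    n = len(X[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))          # sigmoid probability
            err = p - yi                             # gradient of the log-loss
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Classify a feature vector: 1 if P(hazard) >= 0.5, else 0."""
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0

# Toy training data: (obstacle proximity, traffic density) -> hazard label.
X = [(0.9, 0.8), (0.8, 0.9), (0.1, 0.2), (0.2, 0.1)]
y = [1, 1, 0, 0]
w, b = train_blr(X, y)
```

As the text notes, prediction quality improves as the cloud server accumulates more labeled observations to train on.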
Figure 7 illustrates the flowchart of the ML-based prediction model that will be used in our proposed framework in the future. The flowchart shows how the transportation system works, from data collection using emerging technologies (IoT, IoD, crowdsensing, audio, and video data) to data analysis and decision-making using AI and ML algorithms to provide responses to VIPs.

5. Conclusions

This paper introduces a mobile application framework using artificial intelligence (AI) and machine learning (ML) approaches to provide transportation services for visually impaired people (VIPs). We presented a conceptual framework comprising different components: (1) IoT and IoD for data collection, (2) a cloud server for data processing, and (3) a mobile application connecting VIPs with the other components. We presented different AI and ML algorithms for data processing, analysis, and prediction, and we plan to implement these approaches in our framework and evaluate the performance of the proposed framework experimentally.
Moreover, security attacks on solutions based on IoT, AI, and machine learning are a major concern. Attackers can take control of devices or systems, provide false output, and create critical situations for VIPs; thus, these approaches should include secure solutions to protect against such vulnerabilities and attacks. Addressing all these challenges requires a multidisciplinary approach, involving collaboration among AI researchers, developers, accessibility experts, ethicists, policymakers, and the visually impaired community. Continuous dialogue and feedback are essential to refine AI solutions and ensure they meet the evolving needs of users.

Author Contributions

Conceptualization, Nidal Nasser and Asmaa Ali; methodology, Nidal Nasser and Asmaa Ali; software, Asmaa Ali and AbdulAziz Al-Helali; validation, Nidal Nasser, Asmaa Ali, and Lutful Karim; formal analysis, Asmaa Ali; investigation, Nidal Nasser and Asmaa Ali; resources, Nidal Nasser; data curation, Asmaa Ali and Taghreed Altamimi; writing—original draft preparation, Asmaa Ali and Nidal Nasser; writing—review and editing, Nidal Nasser and Lutful Karim; visualization, Asmaa Ali and AbdulAziz Al-Helali; supervision, Nidal Nasser; project administration, Nidal Nasser; funding acquisition, Nidal Nasser. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Alfaisal University, and the APC was funded by Alfaisal University.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank Alfaisal University for funding and supporting this work.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. P. Kaur, M. Ganore, R. Doiphode, and T. Ghuge, “Be My Eyes: Android App for visually impaired people,” ResearchGate, Apr. 09, 2017. [Online]. [CrossRef]
  2. J. H. Son, D. G. Kim, E. Lee, and H. Choi, "Investigating the Spatiotemporal Imbalance of Accessibility to Demand Responsive Transit (DRT) Service for People with Disabilities: Explanatory Case Study in South Korea," Journal of Advanced Transportation, vol. 2022, pp. 1–9, Jan. 2022.
  3. M. M. Ashraf, N. Hasan, L. Lewis, M. R. Hasan, and P. Ray, “A Systematic Literature Review of the Application of Information Communication Technology for Visually Impaired People,” International Journal of Disability Management, vol. 11, 2016. [CrossRef]
  4. M. Awad, J. E. Haddad, E. Khneisser, T. M. Mahmoud, E. Yaacoub, and M. Malli, “Intelligent eye: A mobile application for assisting blind people,” Apr. 01, 2018. [Online]. [CrossRef]
  5. B. Pydala, T. P. Kumar, and K. K. Baseer, “Smart_Eye: A Navigation and Obstacle Detection for Visually Impaired People through Smart App,” Journal of Applied Engineering and Technological Science (JAETS), vol. 4, no. 2, pp. 992–1011, Jun. 2023. [CrossRef]
  6. F. E. -Z. El-Taher, L. Miralles-Pechuán, J. Courtney, K. Millar, C. Smith and S. Mckeever, "A Survey on Outdoor Navigation Applications for People with Visual Impairments," in IEEE Access, vol. 11, pp. 14647-14666, 2023. [CrossRef]
  7. N. P. Landazabal, O. Andrés Mendoza Rivera, M. H. Martínez, C. Ramírez Nates and B. T. Uchida, "Design and implementation of a mobile app for public transportation services of persons with visual impairment (TransmiGuia)," 2019 XXII Symposium on Image, Signal Processing and Artificial Vision (STSIVA), Bucaramanga, Colombia, 2019, pp. 1-5. [CrossRef]
  8. V. I. Prusova, M. A. Zhidkova, A. V. Goryunova and V. S. Tsaryova, "Intelligent Transport Systems as an Inclusive Mobility Solution for People with Disabilities," 2022 Intelligent Technologies and Electronic Devices in Vehicle and Road Transport Complex (TIRVED), Moscow, Russian Federation, 2022, pp. 1-5. [CrossRef]
  9. S. K, S. KMR, S. K. D, V. M, F. L. J and S. P, "I-PWA: IoT based Progressive Web Application for Visually Impaired People," 2023 3rd International Conference on Innovative Practices in Technology and Management (ICIPTM), Uttar Pradesh, India, 2023, pp. 1-6. [CrossRef]
  10. R. Lima, L. Barreto, A. Amaral and S. Paiva, "Visually Impaired People Positioning Assistance System Using Artificial Intelligence," in IEEE Sensors Journal, vol. 23, no. 7, pp. 7758-7765, Apr. 2023. [CrossRef]
  11. J. A. P. De Jesus, K. A. V. Gatpolintan, C. L. Q. Manga, M. R. L. Trono and E. R. Yabut, "PerSEEption: Mobile and Web Application Framework for Visually Impaired Individuals," 2021 1st International Conference in Information and Computing Research (iCORE), Manila, Philippines, 2021, pp. 205-210. [CrossRef]
  12. N. Somyat, T. Wongsansukjaroen, W. Longjaroen and S. Nakariyakul, "NavTU: Android Navigation App for Thai People with Visual Impairments," 2018 10th International Conference on Knowledge and Smart Technology (KST), Chiang Mai, Thailand, 2018, pp. 134-138. [CrossRef]
  13. S. P, S. N, P. D and U. M. R. N, "BLIND ASSIST: A One Stop Mobile Application for the Visually Impaired," 2021 IEEE Pune Section International Conference (PuneCon), Pune, India, 2021, pp. 1-4. [CrossRef]
  14. J. -Y. Lin, C. -L. Chiang, M. -J. Wu, C. -C. Yao and M. -C. Chen, "Smart Glasses Application System for Visually Impaired People Based on Deep Learning," 2020 Indo – Taiwan 2nd International Conference on Computing, Analytics and Networks (Indo-Taiwan ICAN), Rajpura, India, 2020, pp. 202-206. [CrossRef]
  15. S. Durgadevi, C. Komathi, K. ThirupuraSundari, S. S. Haresh and A. K. R. Harishanker, "IOT Based Assistive System for Visually Impaired and Aged People," 2022 2nd International Conference on Power Electronics & IoT Applications in Renewable Energy and its Control (PARC), Mathura, India, 2022, pp. 1-4. [CrossRef]
  16. A. Agarwal, K. Agarwal, R. Agrawal, A. k. Patra, A. K. Mishra and N. Nahak, "Wireless Bus Identification System for Visually Impaired Person," 2021 1st Odisha International Conference on Electrical Power Engineering, Communication and Computing Technology (ODICON), Bhubaneswar, India, 2021, pp. 1-6. [CrossRef]
  17. Wayfindr. https://www.wayfindr.net/how-audio-navigation-works/wayfindr-consultancy-support, accessed on December 24, 2023.
  18. Sunu Band. https://sunu.io/pages/sunu-band, accessed on December 22, 2023.
  19. WEWALK Smartcane. https://wewalk.io/en/about/, accessed on December 21, 2023.
  20. Next Generation Talking Sign. https://www.clickandgomaps.com/clickandgo-nextgen-talking-signs.
  21. Aira Inc. https://aira.io/aira-app/, accessed on December 23, 2023.
  22. IoT and 5G: Transforming Public Transportation System. https://www.iotforall.com/iot-and-5g-transforming-public-transportation-system, accessed on December 24, 2023.
  23. A. Ng and M. Jordan, “On discriminative vs. generative classifiers: A comparison of logistic regression and naive bayes,” Advances in Neural Information Processing Systems, vol. 14, 2001.
  24. J. Friedman, T. Hastie and R. Tibshirani, The Elements of Statistical Learning, vol. 1, 2001.
  25. R. M. Cruz, G. D. Cavalcanti, R. Tsang and R. Sabourin, “Feature representation selection based on classifier projection space and oracle analysis,” Expert Systems with Applications, vol. 40, no. 9, pp. 3813-3827, 2013.
  26. Orcam Inc. https://www.orcam.com/en-ca/home?utm_source=landing-page&utm_medium=redirected-from-404, accessed on December 24, 2023.
  27. Bespecular. https://www.bespecular.com/en/, accessed on December 12, 2023.
  28. Envision. https://www.letsenvision.com/app, accessed on December 24, 2023.
  29. Blindsquare. https://www.blindsquare.com/contact/, accessed on December 18, 2023.
  30. Horizon for Blinds. https://www.horizons-blind.org/, accessed on December 21, 2023.
  31. V. U and G. V, "Opportunities and Challenges in Development of Support System for Visually Impaired: A Survey," 2023 13th International Conference on Cloud Computing, Data Science & Engineering (Confluence), Noida, India, 2023, pp. 684-690. [CrossRef]
Figure 1. Transportation system for visually impaired people.
Figure 2. AI, ML-based solutions in IoT for Transportation of VIPs.
Figure 3. System Design of the Mobile Application.
Figure 4. Log-in/Sign-in Activity Diagram.
Figure 5. Voice Analysis Activity Diagram.
Figure 6. Navigation Systems Sequence Diagram.
Figure 7. Flowchart of ML-based Prediction Model.
Table 1. Comparison of existing AI and ML-based solutions for visually impaired people.
| Algorithm Type | Applications in Transportation | Strengths | Weaknesses |
|---|---|---|---|
| Supervised Learning | Traffic flow prediction; demand forecasting; anomaly detection | Effective for prediction tasks; generalizes to unseen data | Relies on labeled training data; may struggle with complex patterns |
| Unsupervised Learning | Clustering of traffic patterns; anomaly detection | Discovers hidden patterns; no need for labeled data | Interpretation challenges |
| Reinforcement Learning | Traffic signal control; autonomous vehicle navigation | Learns from interactions; decision-making in dynamic environments | High computational requirements; sensitive to hyperparameters |
| Deep Learning | Image recognition for traffic sign detection; natural language processing for route optimization | Hierarchical feature learning; state-of-the-art performance in some tasks | Requires large, labeled datasets; computationally intensive |
| Decision Trees | Route optimization | Intuitive and easy to interpret | Prone to overfitting; sensitive to noise |
| Ensemble Methods | Predictive maintenance for vehicles; traffic prediction | Improved predictive performance | Complexity and increased computation |
| Nearest Neighbors | Traffic flow prediction; anomaly detection | Simple and intuitive; no training phase | Sensitive to irrelevant features; computationally expensive at query time |
| Natural Language Processing | Intelligent transportation systems; voice-activated control systems | Semantic understanding | Challenges in understanding context |
| Optimization Algorithms | Route optimization | Efficient solution search | May not scale well for large datasets |
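To make the "Nearest Neighbors" row of Table 1 concrete, the sketch below predicts a bus arrival delay as the mean of the k closest historical observations. It is an illustrative example only, not part of the proposed framework; the feature encoding (hour of day, stops remaining) and all data values are hypothetical.

```python
import math

def knn_predict(train, query, k=3):
    """Predict a target as the mean target of the k nearest training points.

    train: list of (feature_vector, target) pairs
    query: feature vector to predict for
    """
    # No training phase: prediction searches the stored history directly,
    # which is exactly the strength and weakness noted in Table 1.
    nearest = sorted(train, key=lambda ft: math.dist(ft[0], query))[:k]
    return sum(target for _, target in nearest) / k

# Hypothetical history: (hour of day, stops remaining) -> delay in minutes
history = [
    ((8, 5), 4.0), ((8, 2), 1.5), ((13, 5), 2.0),
    ((17, 6), 6.0), ((17, 2), 2.5), ((22, 4), 0.5),
]

# Query a rush-hour trip with 5 stops remaining
print(knn_predict(history, (17, 5)))  # -> 3.5 (mean of the 3 closest samples)
```

The absence of a training step keeps the example simple, but each query scans the full history, which reflects the "computationally expensive at query time" weakness listed for this algorithm family.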
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.