Preprint Article | Version 1 | Preserved in Portico | This version is not peer-reviewed

Mobile Phone Indoor Scene Recognition Location Method Based on Semantic Constraint of Building Map

Version 1: Received: 26 January 2022 / Approved: 28 January 2022 / Online: 28 January 2022 (08:55:08 CET)

How to cite: Jianhua, L.; Guoqiang, F.; Jingyan, L.; Danqi, W.; Zheng, C.; Nan, W.; Baoshan, Z.; Xiaoyi, W.; Xinyue, L.; Botong, G. Mobile Phone Indoor Scene Recognition Location Method Based on Semantic Constraint of Building Map. Preprints 2022, 2022010431. https://doi.org/10.20944/preprints202201.0431.v1

Abstract

At present, indoor localization is one of the core technologies of location-based services (LBS), and numerous scenario-oriented application solutions exist. Visual features are the main semantic information that helps people understand their environment, so indoor scene recognition techniques are widely adopted. However, the engineering problem of cell phone indoor scene recognition and localization has not been well solved, because building maps provide insufficient semantic constraint information and the technology for matching building map location anchors (MLA) for positioning is still immature. To address these problems, this paper proposes a cell phone indoor scene recognition and localization method constrained by building map semantics. Firstly, we build a library of geocoded entities for building map location anchors (MLA), which on the one hand provides users with "immersive" real-world building maps and on the other supplies semantic anchor-point constraints for cell phone positioning. Secondly, an improved YOLOv5s deep learning model deployed on the mobile terminal recognizes the universal MLA elements in building scenes from the cell phone camera video in real time. Lastly, the spatial locations of the scene elements recognized from the video are matched against the building MLA library to achieve real-time positioning and navigation. The experimental results show that the recognition accuracy of the model is above 97.2%, and the maximum localization error is within 0.775 m, reduced to 0.5 m after applying the BIMPN road-network walking-node constraint; the method therefore achieves high positioning accuracy in building scenes with rich MLA element information. In addition, building map location anchors (MLA) are universal, and the positioning algorithm based on scene element recognition is compatible with extensions of indoor map data types, so the method has good prospects for engineering applications.
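To make the pipeline described above concrete, the sketch below pairs an off-the-shelf YOLOv5s detector with a toy geocoded anchor lookup. It is only an illustrative assumption of how the steps fit together, not the authors' implementation: the anchor labels, the `mla_library` mapping, and the centroid-based position estimate are hypothetical, and the paper uses an improved YOLOv5s model and a full building MLA geocoding match rather than a stock model.

```python
# Illustrative sketch (assumed, not the authors' code): detect map location anchor
# (MLA) elements in a camera frame with YOLOv5s, then look up their coordinates in
# a hypothetical geocoded anchor library to estimate the phone's position.
import torch

# Stock pretrained YOLOv5s; the paper instead uses an improved YOLOv5s fine-tuned
# on building-scene MLA classes (doors, signs, fire extinguishers, ...).
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

# Hypothetical geocoded MLA library: class label -> indoor coordinates (x, y) in metres.
mla_library = {
    "door_301": (12.4, 5.8),
    "fire_extinguisher_3f_02": (14.1, 6.0),
}

def estimate_position(frame):
    """Return a rough position estimate as the centroid of matched MLA anchors."""
    results = model(frame)                        # run detection on one video frame
    labels = results.pandas().xyxy[0]["name"]     # detected class names
    matched = [mla_library[name] for name in labels if name in mla_library]
    if not matched:
        return None                               # no known anchors visible in this frame
    xs, ys = zip(*matched)
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```

In the paper, the matching step additionally uses the spatial layout of the recognized elements and the BIMPN road-network walking-node constraint to refine the estimate; the centroid above stands in for that matching step only for illustration.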

Keywords

cell phone indoor positioning; scene recognition; building map; map location anchor; YOLOv5; geocoding matching

Subject

Environmental and Earth Sciences, Remote Sensing

