Received: 16 September 2021 / Approved: 17 September 2021 / Online: 17 September 2021 (09:43:06 CEST)
Received: 13 October 2021 / Approved: 13 October 2021 / Online: 13 October 2021 (12:14:39 CEST)
Wei, X.; Wei, Z.; Radu, V. Sensor-Fusion for Smartphone Location Tracking Using Hybrid Multimodal Deep Neural Networks. Sensors 2021, 21, 7488.
Many engineered approaches have been proposed over the years to solve the hard problem of indoor localization. However, specialising these solutions for edge cases remains challenging. Here we propose to build the solution with zero hand-engineered features, learning everything directly from data. We use a modality-specific neural architecture to extract preliminary features, which are then integrated through cross-modality neural network structures. We show that each modality-specific branch can estimate the location with good accuracy on its own, but that a cross-modality network fusing the features of these early modality-specific representations achieves better accuracy. Our multimodal neural network, MM-Loc, is effective because it allows a uniform flow of gradients across modalities during training. Because it is a data-driven approach, complex feature representations are learned rather than relying heavily on hand-engineered features.
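The fusion scheme described above — modality-specific branches producing early representations that a cross-modality network combines into a location estimate — can be sketched in a few lines. This is a minimal illustrative forward pass only: the modality choices (WiFi RSSI and IMU features), layer sizes, and random weights are all assumptions for illustration, not the actual MM-Loc architecture or trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def branch(x, w, b):
    # Modality-specific feature extractor: one dense layer with ReLU.
    # In the full model each branch would be a deeper, modality-tailored network.
    return relu(x @ w + b)

# Hypothetical inputs: a 20-dim WiFi RSSI vector and a 6-dim IMU summary.
wifi = rng.normal(size=(1, 20))
imu = rng.normal(size=(1, 6))

# Randomly initialised weights stand in for trained parameters.
w_wifi, b_wifi = rng.normal(size=(20, 16)), np.zeros(16)
w_imu, b_imu = rng.normal(size=(6, 16)), np.zeros(16)
w_fuse, b_fuse = rng.normal(size=(32, 2)), np.zeros(2)

# Early modality-specific representations.
f_wifi = branch(wifi, w_wifi, b_wifi)
f_imu = branch(imu, w_imu, b_imu)

# Cross-modality fusion: concatenate the branch features and regress
# an (x, y) location. Training end-to-end lets gradients flow uniformly
# through both branches, as the abstract describes.
fused = np.concatenate([f_wifi, f_imu], axis=1)
location = fused @ w_fuse + b_fuse
print(location.shape)  # one 2-D location estimate per sample
```

Because the whole pipeline is differentiable, a loss on `location` would back-propagate into both branch weights at once, which is what distinguishes this joint fusion from training each modality's estimator separately.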
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.