Preprint Article Version 1 This version is not peer-reviewed

Deep Sensing: Inertia and Ambient Sensing for Activity Context Recognition Using Deep Convolutional Neural Networks

Version 1 : Received: 25 May 2020 / Approved: 26 May 2020 / Online: 26 May 2020 (11:33:55 CEST)

How to cite: Otebolaku, A.; Enamamu, T.; Alfouldi, A.; Ikpehai, A.; Marchang, J. Deep Sensing: Inertia and Ambient Sensing for Activity Context Recognition Using Deep Convolutional Neural Networks. Preprints 2020, 2020050430 (doi: 10.20944/preprints202005.0430.v1).

Abstract

With the widespread embedding of sensing capabilities in mobile devices, context-aware solutions have developed at an unprecedented pace, enabling a proliferation of intelligent applications such as remote health and lifestyle monitoring and personalized services. However, activity context recognition based on multivariate time series signals obtained from mobile devices in unconstrained conditions is naturally prone to the class imbalance problem: recognition models tend to predict the classes with the most samples while ignoring the classes with the fewest, resulting in poor generalization. To address this problem, we propose to augment the time series signals from inertial sensors with signals from ambient sensors to train deep convolutional neural network (DCNN) models. DCNNs capture the local dependency and scale invariance of these combined sensor signals. Accordingly, we developed one DCNN model using only inertial sensor signals and a second model combining signals from both inertial and ambient sensors, in order to investigate whether the added signals mitigate the class imbalance problem and improve the performance of the recognition model. Evaluation and analysis of the proposed system on data with imbalanced classes show that recognition accuracy improves when data from inertial sensors are combined with ambient signals such as environmental noise level and illumination, with an overall accuracy improvement of 5.3%.
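To make the fusion step concrete, the following is a minimal sketch (not the authors' implementation; all data values, channel counts, and filter sizes are hypothetical) of the core idea the abstract describes: concatenating inertial and ambient channels into one multivariate window and applying a 1-D convolution across it, the operation through which a DCNN captures local dependencies in the combined signals.

```python
def conv1d(window, kernels, bias):
    """Valid 1-D convolution over a multichannel time window.

    window:  list of T time steps, each a list of C channel values
    kernels: list of F filters, each a K x C list of weights
    bias:    list of F biases
    returns: (T - K + 1) x F feature map
    """
    T, C = len(window), len(window[0])
    K = len(kernels[0])
    out = []
    for t in range(T - K + 1):
        row = []
        for f, kern in enumerate(kernels):
            acc = bias[f]
            for k in range(K):
                for c in range(C):
                    acc += kern[k][c] * window[t + k][c]
            row.append(acc)
        out.append(row)
    return out

# Hypothetical 5-step window: 3 inertial channels (accelerometer x, y, z)
# augmented with 2 ambient channels (noise level, illumination).
inertial = [[0.1, 0.0, 9.8], [0.2, 0.1, 9.7], [0.1, 0.2, 9.8],
            [0.0, 0.1, 9.9], [0.1, 0.0, 9.8]]
ambient  = [[0.3, 0.6]] * 5                            # noise, lux (normalised)
window   = [i + a for i, a in zip(inertial, ambient)]  # 5 x 5 fused input

# One filter of width 3 spanning all 5 fused channels.
kernels = [[[0.1] * 5, [0.1] * 5, [0.1] * 5]]
feature_map = conv1d(window, kernels, bias=[0.0])
print(len(feature_map), len(feature_map[0]))  # 3 time steps x 1 filter
```

Because each filter spans every channel, the learned weights can combine motion and ambient cues within the same local time window, which is the mechanism by which the extra ambient channels can help separate otherwise confusable minority classes.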

Subject Areas

Activity Context Sensing; Smartphones; Deep Convolutional Neural Networks; Smart devices


