Preprint Article, Version 1. Preserved in Portico. This version is not peer-reviewed.

IMU-Based Fitness Activity Recognition Using CNNs for Time Series Classification

Version 1 : Received: 21 November 2023 / Approved: 22 November 2023 / Online: 22 November 2023 (14:51:12 CET)

A peer-reviewed version of this preprint has also been published:

Müller, P.N.; Müller, A.J.; Achenbach, P.; Göbel, S. IMU-Based Fitness Activity Recognition Using CNNs for Time Series Classification. Sensors 2024, 24, 742.

Abstract

Augmented reality (AR) provides an opportunity for mobile fitness applications to show users real-time feedback on their current fitness activity. For such applications, it is essential to accurately track the user’s current fitness activity using available mobile sensors, such as inertial measurement units (IMUs). Convolutional neural networks (CNNs) have been shown to produce strong results in different time series classification tasks, including the recognition of activities of daily living. However, fitness activities can present unique challenges to the human activity recognition task (HAR), including greater similarity between individual activities and fewer available data for model training. In this paper, we evaluate the applicability of CNNs to the fitness activity recognition task (FAR) using IMU data and determine the impact of input data size and sensor count on performance. For this purpose, we adapted three existing CNN architectures to the FAR task and designed a fourth CNN variant, which we call the scaling fully convolutional network (Scaling-FCN). We designed a preprocessing pipeline and recorded a running exercise data set with 20 participants, in which we evaluated the respective recognition performances of the four networks, comparing them with three traditional machine learning methods commonly used in HAR. On our data set, all CNN architectures significantly outperformed traditional machine learning methods, reaching up to 97.14±1.36% classification accuracy on the test set for ResNet. Whereas a reduced input data size invariably led to a performance loss, removing specific sensors improved the performance of all CNN architectures, with our Scaling-FCN reaching the highest accuracy of 99.86±0.11%. Our results suggest that CNNs are generally well suited for fitness activity recognition, and noticeable performance improvements can be achieved if sensors are dropped selectively.
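The fully convolutional approach described in the abstract (convolutional feature extraction over a fixed-length IMU window, followed by global average pooling and a classifier) can be illustrated with a minimal NumPy sketch. All layer sizes, channel counts, and the single conv block below are hypothetical placeholders; this is not the authors' Scaling-FCN or their preprocessing pipeline, only the general FCN-style forward pass:

```python
import numpy as np

def conv1d(x, w, b):
    """Valid 1D cross-correlation over a multichannel signal.
    x: (channels_in, time), w: (channels_out, channels_in, kernel),
    b: (channels_out,). Returns (channels_out, time - kernel + 1)."""
    c_out, c_in, k = w.shape
    t_out = x.shape[1] - k + 1
    out = np.zeros((c_out, t_out))
    for o in range(c_out):
        for t in range(t_out):
            out[o, t] = np.sum(w[o] * x[:, t:t + k]) + b[o]
    return out

def relu(x):
    return np.maximum(x, 0.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy forward pass: 6 IMU channels (e.g. 3-axis accelerometer + 3-axis
# gyroscope), a 128-sample window, one conv block, global average pooling,
# and a linear classifier over 10 hypothetical exercise classes.
rng = np.random.default_rng(0)
x = rng.standard_normal((6, 128))            # one preprocessed IMU window
w1 = rng.standard_normal((16, 6, 7)) * 0.1   # 16 filters, kernel size 7
b1 = np.zeros(16)
h = relu(conv1d(x, w1, b1))                  # feature maps: (16, 122)
features = h.mean(axis=1)                    # global average pooling -> (16,)
W = rng.standard_normal((10, 16)) * 0.1
probs = softmax(W @ features)                # per-class probabilities
print(probs.shape)
```

Because global average pooling collapses the time axis, the same classifier head works for any window length, which is one reason FCN-style networks are convenient when comparing different input data sizes, as the paper does.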

Keywords

activity recognition; inertial measurement unit; deep learning; convolutional neural network; residual neural network; traditional machine learning; study

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning
