ARTICLE | doi:10.20944/preprints201908.0022.v1
Subject: Mathematics & Computer Science, Other Keywords: Down syndrome; Kinect sensor; reading skills
Online: 2 August 2019 (09:14:16 CEST)
People with Down syndrome present cognitive difficulties that affect their reading skills. In this study we present results on the use of gestural interaction with the Kinect sensor to improve the reading skills of students with Down syndrome. Following a case-study method for small samples with disabilities, we measured different variables related to reading skills in an experimental group and in a control group. We found improvements in visual association, visual comprehension, sequential memory, and visual integration after this stimulation in the experimental group compared to the control group. We also found that the number of errors and the interaction delay time decreased between sessions in the experimental group.
ARTICLE | doi:10.20944/preprints202009.0752.v1
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: education; human machine interaction; azure kinect; mathematics
Online: 30 September 2020 (14:36:37 CEST)
The way in which human beings learn certain complex content has always been a focus of interest and a challenge for researchers. Given that children's cognitive abilities do not fully develop until a certain age, this topic is particularly important in young children's learning, as they do not easily and correctly learn content of an abstract nature, such as the content of math class. This work presents the results of using an application called "Mathematics Learning System with Augmented Reality based on Kinect" (SAM-RAK by its acronym in Spanish), which was designed to cover basic topics of mathematics at the Basic General Education level (EGB by its acronym in Spanish) in Ecuador. The research was carried out under an experimental quantitative approach with 30 children (18 girls and 12 boys) studying in the third grade of the EGB level at 2 different educational institutions in Riobamba city. To evaluate the developed application, a pre-test and a post-test were applied and contrasted with Student's t-test for paired samples. The statistical evidence suggests that the proposed computer system had a positive effect on children's performance when used as a support tool in the classroom. The system was more effective for low-performing children than for high-performing ones. Children were also motivated and showed positive attitudes when using the proposed system.
ARTICLE | doi:10.20944/preprints201805.0435.v1
Subject: Engineering, Biomedical & Chemical Engineering Keywords: Kinect; validation; assessment; functional evaluation; shoulder; markerless system
Online: 30 May 2018 (05:59:51 CEST)
Optoelectronic devices are the gold standard for 3D evaluation in clinics, but given the complexity of such hardware and its limited accessibility for patients, affordable, transportable, and easy-to-use systems must be developed for widespread use in daily clinical practice. The Kinect™ sensor presents several advantages over optoelectronic devices (price, transportability) but also some limitations: the (in)accuracy of skeleton detection and tracking, as well as the limited number of available points, which makes full 3D evaluation impossible. To overcome these limitations, a novel method has been developed to perform 3D evaluation of the upper limbs. This system is coupled with rehabilitation exercises, allowing functional evaluation while performing physical rehabilitation. To validate this new approach, a two-step method was used. The first step was a laboratory validation in which the results obtained with the Kinect™ were compared with those obtained with an optoelectronic device; 40 healthy young adults participated in this first part. The second step was to determine the clinical relevance of such measurements. Results of the healthy subjects were compared with a group of 22 elderly adults and a group of 10 chronic stroke patients to determine whether different patterns can be observed. The new methodology and the different steps of the validation are presented in this paper.
ARTICLE | doi:10.20944/preprints201808.0407.v1
Subject: Engineering, Other Keywords: angle estimation; microsoft kinect; single camera; markerless mocap system
Online: 23 August 2018 (05:55:56 CEST)
The use of motion capture has increased over the last decade across a varied spectrum of applications, such as film special effects, game and robot control, rehabilitation systems, and animation. Current human motion capture techniques use markers, a structured environment, and high-resolution cameras in a dedicated setting. Because of rapid movement, elbow angle estimation is regarded as one of the most difficult problems in human motion capture. In this paper, we take elbow angle estimation as our research subject and propose a novel, markerless, and cost-effective solution that uses an RGB camera to estimate the elbow angle in real time using part affinity fields. We recruited five participants (height 168 ± 8 cm; mass 61 ± 17 kg) to perform a cup-to-mouth movement while the angle was measured by both the RGB camera and the Microsoft Kinect. The experimental results show that the markerless, cost-effective RGB camera has median RMS errors of 3.06° and 0.95° in the sagittal and coronal planes, respectively, compared to the Microsoft Kinect.
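Once the shoulder, elbow, and wrist keypoints are available (from a part-affinity-field detector or a Kinect skeleton), the elbow angle reduces to the angle between the upper-arm and forearm vectors. The paper does not publish its formula; the sketch below shows the standard vector-geometry computation, with illustrative 2D coordinates:

```python
import numpy as np

def elbow_angle(shoulder, elbow, wrist):
    """Angle (degrees) at the elbow between the upper arm and the forearm."""
    upper = np.asarray(shoulder, float) - np.asarray(elbow, float)
    fore = np.asarray(wrist, float) - np.asarray(elbow, float)
    cosang = np.dot(upper, fore) / (np.linalg.norm(upper) * np.linalg.norm(fore))
    # clip guards against tiny floating-point excursions outside [-1, 1]
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

# Example: upper arm straight up, forearm horizontal -> right angle
print(elbow_angle((0, 1), (0, 0), (1, 0)))  # 90.0
```

The same function works unchanged on 3D joint positions, which is how a Kinect skeleton stream would feed it.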
ARTICLE | doi:10.20944/preprints201703.0159.v1
Subject: Mathematics & Computer Science, General & Theoretical Computer Science Keywords: object detection; background subtraction; video surveillance; Kinect sensor fusion
Online: 20 March 2017 (10:21:40 CET)
Depth-sensing technology has led to broad applications of inexpensive depth cameras that can capture human motion and scenes in 3D space. Background subtraction algorithms can be improved by fusing color and depth cues, thereby allowing many issues encountered in classical color segmentation to be solved. In this paper, we propose a new fusion method that combines depth and color information for foreground segmentation based on an advanced color-based algorithm. First, a background model and a depth model are developed. Then, based on these models, we propose a new updating strategy that can eliminate ghosting and black shadows almost completely. Extensive experiments have been performed to compare the proposed algorithm with other, conventional RGB-D algorithms. The experimental results suggest that our method extracts foregrounds with higher effectiveness and efficiency.
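The core idea of color-depth fusion can be sketched in a few lines. The rule below is a minimal illustration (not the paper's actual updating strategy): where the depth reading is reliable it overrides the color decision, which is exactly how shadows are suppressed, since a shadow changes a pixel's color but not its depth:

```python
import numpy as np

def fuse_foreground(color_mask, depth_fg, depth_valid):
    """Combine a color-based foreground mask with a depth cue.

    color_mask:  boolean mask from a color background subtractor
    depth_fg:    boolean mask where depth differs from the depth background model
    depth_valid: boolean mask where the depth reading is reliable
    Where depth is valid it overrides color (removing shadows and ghosts);
    elsewhere we fall back to the color decision alone.
    """
    return np.where(depth_valid, depth_fg, color_mask)

# Toy 4-pixel example: pixel 1 is a shadow (color says FG, depth says BG)
color = np.array([True, True, False, True])
dfg = np.array([True, False, False, False])
valid = np.array([True, True, False, False])
print(fuse_foreground(color, dfg, valid))  # [ True False False  True]
```

In a real pipeline the two input masks would come from per-pixel background models maintained over time, as the paper describes.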
ARTICLE | doi:10.20944/preprints201806.0367.v1
Subject: Mathematics & Computer Science, Other Keywords: kinect; depth calibration; RGB-D; media art; skeletal joint data
Online: 24 June 2018 (11:19:41 CEST)
Kinect is a device that has been widely used in many areas since it was released in 2010. The Kinect SDK was announced in 2011 and has been used in many areas beyond its original purpose as a game controller. In particular, it has been used by a number of artists in digital media art because it is inexpensive and has a fast recognition rate. However, there is a problem: Kinect creates 3D coordinates from a single 2D RGB image for the x and y values and a single depth image for the z value. This places a significant limitation on installations for interactive media art. Because a Cartesian XY coordinate system and a spherical Z coordinate system are used in combination, a distance-dependent depth error is generated, which makes real-time rotation recognition and coordinate correction difficult in this coordinate system. This paper proposes a real-time calibration method that expands the Kinect recognition range for practical application in digital media art. The proposed method can recognize the viewer accurately by calibrating coordinates in any direction in front of the viewer. 3,400 datasets acquired from the experiment were measured in five stances, recorded every 0.5 s: a 1 m attention stance, 1 m hands-up stance, 2 m attention stance, 2 m hands-up stance, and 2 m hands-half-up stance. The experimental results showed that the accuracy rate improved by about 11.5% compared with front measurement data obtained with the reference Kinect installation method.
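The mismatch between image coordinates and per-pixel depth is usually resolved by back-projecting every pixel through a pinhole camera model into one consistent Cartesian frame, in which rotations and offsets of the sensor can then be corrected with ordinary rigid transforms. The sketch below illustrates that back-projection; the intrinsic values are placeholders for illustration, not the Kinect's factory calibration:

```python
import numpy as np

# Assumed pinhole intrinsics for illustration only (not Kinect's real values)
FX, FY = 525.0, 525.0   # focal lengths in pixels
CX, CY = 319.5, 239.5   # principal point

def depth_to_cartesian(u, v, z_mm):
    """Back-project a depth pixel (u, v) with depth z into camera space.

    The depth map stores only z per pixel while (u, v) are image
    coordinates; running every point through the same pinhole model
    yields a single consistent XYZ frame.
    """
    x = (u - CX) * z_mm / FX
    y = (v - CY) * z_mm / FY
    return np.array([x, y, z_mm])

p = depth_to_cartesian(319.5, 239.5, 2000.0)
print(p)  # a point on the optical axis: [0., 0., 2000.]
```

With all points in this frame, the paper's rotation-aware calibration amounts to applying a rotation matrix per installation pose rather than correcting each depth sample ad hoc.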
ARTICLE | doi:10.20944/preprints201708.0022.v1
Subject: Mathematics & Computer Science, Other Keywords: real‐time reconstruction; SLAM; kinect sensors; depth cameras; open source
Online: 7 August 2017 (11:03:23 CEST)
Given a stream of depth images with a known cuboid reference object present in the scene, we propose a novel approach for accurate camera tracking and volumetric surface reconstruction in real-time. Our contribution in this paper is threefold: (a) utilizing a priori knowledge of the cuboid reference object, we maintain drift-free camera tracking without explicit global optimization; (b) we improve the fineness of the volumetric surface representation by proposing a prediction-corrected data fusion strategy rather than a simple moving average, which enables accurate reconstruction of high-frequency details such as sharp edges of objects and geometries of high curvature; (c) we introduce a benchmark dataset CU3D containing both synthetic and real-world scanning sequences with ground-truth camera trajectories and surface models for quantitative evaluation of 3D reconstruction algorithms. We test our algorithm on our dataset and demonstrate its accuracy compared with other state-of-the-art algorithms. We release both our dataset and code as open source for other researchers to reproduce and verify our results.
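For context, the "simple moving average" baseline that the paper's prediction-corrected fusion improves upon is the standard running weighted-average update applied to each voxel of a truncated signed distance field. A one-voxel sketch of that baseline (the function name and weight cap are ours, for illustration):

```python
def tsdf_update(tsdf, weight, new_d, max_weight=64):
    """Running weighted-average TSDF voxel update (the moving-average
    baseline): fold one new truncated distance observation into the voxel."""
    fused = (tsdf * weight + new_d) / (weight + 1)
    return fused, min(weight + 1, max_weight)

# Fuse three observations into one voxel
v, w = 0.0, 0
for obs in (1.0, 1.0, 0.4):
    v, w = tsdf_update(v, w, obs)
print(round(v, 3), w)  # 0.8 3
```

Because this update averages all observations equally, sharp edges get smeared by slightly inconsistent depth samples, which motivates the prediction-corrected strategy of contribution (b).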
ARTICLE | doi:10.20944/preprints202209.0422.v1
Subject: Engineering, Other Keywords: Parkinson’s Disease; Neurorehabilitation; exergames; Azure Kinect; UPDRS; Movement Analysis; body tracking; telemedicine
Online: 27 September 2022 (10:27:37 CEST)
Motor impairments are among the most relevant, evident, and disabling symptoms of Parkinson’s disease that adversely affect quality of life, resulting in limited autonomy, independence, and safety. Recent studies have demonstrated the benefits of physiotherapy and rehabilitation programs specifically targeted to the needs of Parkinsonian patients in supporting drug treatments and improving motor control and coordination. However, due to the expected increase of patients in the coming years, traditional rehabilitation pathways in healthcare facilities could become unsustainable. Consequently, new strategies are needed, in which technologies play a key role in enabling more frequent, comprehensive, and out-of-hospital follow-up. The paper proposes a vision-based solution using the new Azure Kinect DK sensor to implement an integrated approach for remote assessment, monitoring, and rehabilitation of Parkinsonian patients, exploiting non-invasive 3D tracking of body movements to objectively and automatically characterize both standard evaluative motor tasks and virtual exergames. Preliminary results show the system’s ability to quantify specific features of motor performance, easily monitor changes and disease progression over time, and the possibility of using exergames to support motor condition assessment and training. The main innovation relies precisely on the integration of evaluative and rehabilitative aspects, which could be used as a closed loop to design new protocols for remote management of patients tailored to their actual conditions.
ARTICLE | doi:10.20944/preprints202208.0184.v1
Subject: Medicine & Pharmacology, Sport Sciences & Therapy Keywords: Exergames; Kinect; neuromuscular disease; physical disability; rehabilitation; serious games; Virtual reality rehabilitation
Online: 10 August 2022 (03:24:24 CEST)
This paper presents a modular approach to generic exergame design that combines custom physical exercises in a meaningful and motivating story. This aims to provide a tool that can be individually tailored and adapted to people with different needs, making it applicable to different diseases and states of disease. The game is based on motion capturing and integrates four example exercises that can be configured via our therapeutic web platform "Blexer-med". To prove the feasibility for a wide range of different users, evaluation tests were performed on 14 patients with various types and degrees of neuromuscular disorders, classified into three groups based on strength and autonomy. Users were free to choose their schedule and frequency. Game scores and three surveys (before, during, and after the intervention) showed similar experiences for all groups, with the most vulnerable having the most fun and satisfaction. The players were motivated by the story and by achieving high scores. The average usage time was 2.5 times per week, 20 minutes per session. Pure exercise time was about half the game time. The concept has proven feasible and forms a reasonable basis for further developments. The full 3D exercise needs further fine-tuning to enhance fun and motivation.
ARTICLE | doi:10.20944/preprints202010.0455.v1
Subject: Engineering, Automotive Engineering Keywords: KINECT; industrial robot; vision system; RobotStudio; Visual Studio; gesture control; voice control
Online: 22 October 2020 (09:57:07 CEST)
The paper presents the possibility of using the KINECT v2 module to control an industrial robot by means of gestures and voice commands. It describes the elements of creating software for off-line and on-line robot control. The application for the KINECT module was developed in C# in the Visual Studio environment, while the industrial robot control program was developed in the RAPID language in the RobotStudio environment. The development of a two-threaded application in RAPID made it possible to separate two independent tasks for the IRB120 robot. The main task of the robot is performed in thread no. 1 (responsible for movement). Simultaneously, thread no. 2 ensures continuous communication with the KINECT system and provides information about gesture and voice commands in real time without any interference in thread no. 1. The applied solution allows the robot to work in industrial conditions without the communication task negatively affecting the robot's work-cycle times. Thanks to the development of a digital twin of the real robot station, tests of proper application functioning were conducted in off-line mode (without using a real robot). The obtained results were then verified online (on the real test station). Tests of the correctness of gesture recognition were carried out, and the robot recognized all programmed gestures. Another test was the recognition and execution of voice commands. A difference in task completion time between the real and virtual stations was noticed; the average difference was 0.67 s. The last test examined the impact of interference on the recognition of voice commands. With a 10 dB difference between the command and the noise, voice command recognition reached 91.43%. The developed computer programs have a modular structure, which enables easy adaptation to process requirements.
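The two-thread split described above is a producer-consumer pattern: a communication thread feeds commands to a motion thread without ever blocking it. A minimal Python sketch of that pattern (the command names and queue are ours for illustration; the paper's actual implementation is in RAPID):

```python
import queue
import threading
import time

commands = queue.Queue()  # decouples the two threads, as thread 2 does for thread 1

def communication_thread():
    """Stand-in for thread no. 2: pushes recognized gesture/voice commands."""
    for cmd in ("home", "pick", "place"):  # stand-in for Kinect input
        commands.put(cmd)
        time.sleep(0.01)
    commands.put(None)  # sentinel: no more commands

def motion_thread(log):
    """Stand-in for thread no. 1: consumes commands and 'moves' the robot."""
    while True:
        cmd = commands.get()
        if cmd is None:
            break
        log.append(cmd)  # stand-in for executing a robot movement

log = []
t1 = threading.Thread(target=motion_thread, args=(log,))
t2 = threading.Thread(target=communication_thread)
t1.start(); t2.start()
t2.join(); t1.join()
print(log)  # ['home', 'pick', 'place']
```

The queue is the key design choice: sensor latency or dropped recognitions delay only the producer side, so the motion loop's cycle times stay unaffected, matching the behavior the paper reports.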
Subject: Keywords: 3D object reconstruction; depth cameras; Kinect sensors; open source; signal denoising; SLAM
Online: 9 April 2019 (12:24:34 CEST)
3D object reconstruction from depth image streams using Kinect-style depth cameras has been extensively studied. In this paper, we propose an approach for accurate camera tracking and volumetric dense surface reconstruction, assuming a known cuboid reference object is present in the scene. Our contribution is three-fold. (a) We maintain drift-free camera pose tracking by incorporating the 3D geometric constraints of the cuboid reference object into the image registration process. (b) We reformulate the problem of depth stream fusion as a binary classification problem, enabling high-fidelity surface reconstruction, especially in the concave zones of objects. (c) We further present a surface denoising strategy to mitigate topological inconsistencies (e.g., holes and dangling triangles), which facilitates the generation of a noise-free triangle mesh. We extend our public dataset CU3D with several new image sequences, test our algorithm on these sequences, and quantitatively compare it with other state-of-the-art algorithms. Both our dataset and our algorithm are available as open source at https://github.com/zhangxaochen/CuFusion for other researchers to reproduce and verify our results.
ARTICLE | doi:10.20944/preprints202105.0468.v1
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: 3D reconstruction; ICP; Azure Kinect; RGB-D image processing; point cloud filtering; rapeseed
Online: 20 May 2021 (09:52:14 CEST)
3D reconstruction using an RGB-D camera achieves a good balance of hardware cost, point cloud quality, and automation. However, due to limitations of the sensor's structure and imaging principle, the acquired point cloud suffers from heavy noise and difficult registration. This paper proposes a 3D reconstruction method using the Azure Kinect to address these inherent problems. The color map, depth map, and near-infrared image of the target are captured from six perspectives by the Azure Kinect sensor. The binarized 8-bit infrared image is multiplied with the general RGB-D image alignment result provided by Microsoft to remove ghost images and most of the background noise. To filter the floating-point and outlier noise of the point cloud, a neighborhood maximum filtering method is proposed to filter out abrupt points in the depth map. Floating points are removed before the point cloud is generated, and a pass-through filter then removes outlier noise. To address the shortcomings of the classic ICP algorithm, an improved method is proposed: by continuously reducing the size of the down-sampling grid and the distance threshold between corresponding points, the point clouds of each view are registered three times in succession until the complete color point cloud is obtained. Extensive experiments on rape plants show that the point cloud accuracy obtained by this method is 0.739 mm, a complete scan takes 338.4 seconds, and color reproduction is high. Compared with a laser scanner, the proposed method has comparable reconstruction accuracy and significantly faster reconstruction speed, while the hardware cost is much lower and the scanning system is easy to automate. This research presents a low-cost, high-precision 3D reconstruction technique with the potential to be widely used for non-destructive measurement of crop phenotypes.
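One plausible reading of the neighborhood maximum filtering step can be sketched as follows: a depth pixel that sits far in front of the maximum depth in its local neighborhood is treated as a floating point and invalidated. This is our interpretation for illustration; the window size and threshold below are arbitrary, not the paper's parameters:

```python
import numpy as np

def remove_floating_points(depth, win=1, thresh=30.0):
    """Invalidate (zero) depth pixels lying far in front of the local
    neighborhood maximum. win is the half-width of the window; thresh
    is the allowed deviation in depth units (e.g., mm)."""
    h, w = depth.shape
    out = depth.copy()
    for i in range(h):
        for j in range(w):
            patch = depth[max(0, i - win):i + win + 1,
                          max(0, j - win):j + win + 1]
            if patch.max() - depth[i, j] > thresh:
                out[i, j] = 0.0  # flagged as a floating/outlier point
    return out

d = np.full((3, 3), 1000.0)
d[1, 1] = 900.0                   # isolated point 100 mm in front of its neighbors
print(remove_floating_points(d))  # center pixel zeroed, rest untouched
```

Dropping such pixels before back-projection means the floating points never enter the point cloud, which matches the order of operations the abstract describes (filter the depth map first, then generate the cloud, then pass-through filter the remainder).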
ARTICLE | doi:10.20944/preprints201810.0664.v1
Subject: Mathematics & Computer Science, Other Keywords: RGB-D sensors; empirical analysis; sensors in agriculture; phenotyping; microsoft kinect; Intel D-435; Intel SR300; ORBBEC ASTRA S
Online: 29 October 2018 (09:15:45 CET)
Phenotyping is the task of measuring plant attributes for analyzing the current state of the plant. In agriculture, phenotyping can be used to make decisions concerning the management of crops, such as the watering policy, or whether to spray for a certain pest. Currently, large-scale phenotyping in fields is typically done using manual labor, which is a costly, low-throughput process. Researchers often advocate the use of automated systems for phenotyping, relying on sensors for making measurements. The recent rise of low-cost, yet reasonably accurate, RGB-D sensors has opened the way for using these sensors in field phenotyping applications. In this paper, we investigate the applicability of 4 different RGB-D sensors for this task. We conduct an outdoor experiment, measuring plant attributes at various distances and light conditions. Our results show that modern RGB-D sensors, in particular the Intel D435 sensor, provide a viable tool for close-range phenotyping tasks in fields.
ARTICLE | doi:10.20944/preprints202007.0625.v1
Subject: Engineering, General Engineering Keywords: elderly care; hand gesture; computer vision system; Microsoft Kinect depth sensor; Arduino Nano Microcontroller; global system for mobile communication (GSM)
Online: 26 July 2020 (02:07:09 CEST)
Hand gestures may play an important role in medical applications for the health care of elderly people, where a natural interaction for different requests can be provided by making specific gestures. In this study we explored three different scenarios using a Microsoft Kinect V2 depth sensor and evaluated the effectiveness of the outcomes. The first scenario utilized the default system embedded in the Kinect V2 sensor, whose depth metadata provide 11 parameters related to the tracked body, with five gestures for each hand. The second scenario used joint tracking provided by the Kinect depth metadata together with a depth threshold to enhance hand segmentation and efficiently recognize the number of extended fingers. The third scenario used a simple convolutional neural network with joint tracking by depth metadata to recognize five categories of gestures. In this study, deaf-mute elderly people executed five different hand gestures to indicate a specific request, such as needing water, a meal, the toilet, help, or medicine. The requests were then sent to the care provider's smartphone, since the elderly people could not carry out these activities independently. The system transferred these requests as messages through the global system for mobile communication (GSM) using a microcontroller.
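The depth-threshold segmentation of the second scenario can be sketched simply: keep only pixels in a window around the tracked hand joint whose depth lies within a small band of the joint's depth. The band and window sizes below are illustrative placeholders, not the study's values:

```python
import numpy as np

def segment_hand(depth, hand_uv, hand_z, band=80.0, win=60):
    """Boolean hand mask: pixels inside a (2*win)^2 window around the
    tracked hand joint (u, v) whose depth is within +/- band of the
    joint depth hand_z (depth units, e.g. mm)."""
    u, v = hand_uv
    h, w = depth.shape
    mask = np.zeros((h, w), dtype=bool)
    r0, r1 = max(0, v - win), min(h, v + win)
    c0, c1 = max(0, u - win), min(w, u + win)
    mask[r0:r1, c0:c1] = np.abs(depth[r0:r1, c0:c1] - hand_z) < band
    return mask
```

Counting extended fingers would then proceed on this mask, e.g., via contour and convexity-defect analysis, which is the usual follow-up step for this kind of segmentation.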