Subject: Social Sciences, Psychology Keywords: multimodal experiment; multisensory experiment; automatic device integration; open-source; PsychoPy; Unity; Virtual Reality (VR); Lab Streaming Layer; LabRecorder; LabRecorderCLI; Windows command line (cmd.exe)
Online: 12 October 2020 (07:06:28 CEST)
The human mind is multimodal. Yet most behavioral studies rely on century-old measures of behavior: task accuracy and latency (response time). Multimodal and multisensory analysis of human behavior creates a better understanding of how the mind works. The problem is that designing and implementing such experiments is technically complex and costly. This paper introduces versatile and economical means of developing multimodal-multisensory human experiments. We provide an experimental design framework that automatically integrates and synchronizes measures including electroencephalogram (EEG), galvanic skin response (GSR), eye tracking, virtual reality (VR), body movement, mouse/cursor motion, and response time. Unlike proprietary systems (e.g., iMotions), our system is free and open source; it integrates PsychoPy, Unity, and Lab Streaming Layer (LSL). The system embeds LSL inside PsychoPy/Unity to synchronize multiple sensory signals (gaze motion, EEG, GSR, mouse/cursor movement, and body motion) from low-cost consumer-grade devices, both in a simple behavioral task designed in PsychoPy and in a virtual reality environment designed in Unity. This tutorial shows a step-by-step process by which a complex multimodal-multisensory experiment can be designed and implemented in a few hours. When the experiment is run, all data synchronization and recording to disk are handled automatically.
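As a rough illustration of the kind of integration the abstract describes, the lines below sketch how a PsychoPy script could publish event markers to Lab Streaming Layer through pylsl so that they are recorded together with the EEG, GSR, eye-tracking, and motion streams. This is a minimal sketch under stated assumptions, not the authors' code; the stream name, source ID, and marker labels are illustrative.

# Minimal sketch (assumption, not the paper's exact code): push task event
# markers from a PsychoPy script into Lab Streaming Layer via pylsl.
from pylsl import StreamInfo, StreamOutlet

# One-channel, irregular-rate string stream; the names here are illustrative only.
info = StreamInfo(name='PsychoPyMarkers', type='Markers', channel_count=1,
                  nominal_srate=0, channel_format='string',
                  source_id='behavioral_task_01')
outlet = StreamOutlet(info)

# Inside the PsychoPy trial loop, time-stamp the events of interest:
outlet.push_sample(['stimulus_onset'])
# ... present the stimulus and collect the response ...
outlet.push_sample(['response'])

# LabRecorder (or LabRecorderCLI launched from cmd.exe) is started separately to
# write every resolved stream into one time-synchronized .xdf file; consult the
# LabRecorder documentation for its exact command-line arguments.

In the same spirit, the Unity scene would publish its own streams (for example, head or controller motion) through an LSL integration for Unity such as LSL4Unity, and the recorder resolves all streams on the local network.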
ARTICLE | doi:10.20944/preprints202203.0403.v1
Subject: Computer Science And Mathematics, Artificial Intelligence And Machine Learning Keywords: behavioral change prediction; learned features; deep feature learning; handcrafted features; bidirectional long-short term memory; autoencoders; temporal convolutional neural network; clinical decision support system; multisensory stimulation therapy; physiological signals.
Online: 31 March 2022 (08:38:58 CEST)
Predicting change from multivariate time series has relevant applications ranging from medicine to engineering. Multisensory stimulation therapy in patients with dementia aims to change the patient's behavioral state; for example, patients who exhibit a baseline of agitation may be paced toward a relaxed behavioral state. This study aims to predict changes in behavioral state from physiological and neurovegetative parameters in order to support the therapist during the stimulation session. To extract valuable indicators for predicting such changes, both handcrafted and learned features were evaluated and compared. The handcrafted features were defined starting from the CATCH22 feature collection, while the learned features were extracted using a Temporal Convolutional Network, and the behavioral state was predicted through a Bidirectional Long Short-Term Memory Auto-Encoder, with the two networks operating jointly. Compared with the state of the art, the learned-feature approach exhibits superior performance, with accuracy of up to 99.42% for a 70-second time window and up to 98.44% for a 10-second time window.
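As a hedged illustration of the two architectural components named in the abstract, the sketch below outlines a small Temporal Convolutional Network feature extractor feeding a bidirectional LSTM auto-encoder whose encoding is used to classify the behavioral state. Layer sizes, window length, the 32 Hz sampling rate, the four input channels, and the two-class output (e.g., agitated vs. relaxed) are illustrative assumptions, not values reported in the study.

# Sketch (assumption, not the authors' implementation) of a TCN feature
# extractor combined with a bidirectional LSTM auto-encoder and classifier.
import torch
import torch.nn as nn

class TCNBlock(nn.Module):
    """One dilated causal convolution block with a residual connection."""
    def __init__(self, in_ch, out_ch, kernel_size=3, dilation=1):
        super().__init__()
        pad = (kernel_size - 1) * dilation            # padding that preserves causality
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, padding=pad, dilation=dilation)
        self.relu = nn.ReLU()
        self.down = nn.Conv1d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()

    def forward(self, x):                             # x: (batch, channels, time)
        out = self.conv(x)[..., :x.size(-1)]          # trim the padded right side
        return self.relu(out + self.down(x))

class BiLSTMAutoEncoder(nn.Module):
    """TCN feature extractor + bidirectional LSTM encoder/decoder + state classifier."""
    def __init__(self, n_signals=4, tcn_ch=32, hidden=64, n_states=2):
        super().__init__()
        self.tcn = nn.Sequential(TCNBlock(n_signals, tcn_ch, dilation=1),
                                 TCNBlock(tcn_ch, tcn_ch, dilation=2))
        self.encoder = nn.LSTM(tcn_ch, hidden, batch_first=True, bidirectional=True)
        self.decoder = nn.LSTM(2 * hidden, tcn_ch, batch_first=True)
        self.classifier = nn.Linear(2 * hidden, n_states)

    def forward(self, x):                              # x: (batch, time, n_signals)
        feats = self.tcn(x.transpose(1, 2)).transpose(1, 2)   # learned features
        enc, _ = self.encoder(feats)                   # (batch, time, 2*hidden)
        recon, _ = self.decoder(enc)                   # reconstruct the TCN features
        logits = self.classifier(enc[:, -1, :])        # predict the behavioral state
        return logits, recon, feats

# Example: a batch of 10-second windows of 4 physiological channels at 32 Hz.
model = BiLSTMAutoEncoder()
window = torch.randn(8, 10 * 32, 4)                   # (batch, time steps, signals)
logits, recon, feats = model(window)
loss = (nn.functional.cross_entropy(logits, torch.randint(0, 2, (8,)))
        + nn.functional.mse_loss(recon, feats.detach()))

Training such a model on labeled stimulation-session windows would combine the classification loss with the reconstruction loss, which is one plausible reading of the two networks "operating jointly"; the authors' actual training scheme may differ.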