Preprint Article · Version 1 · Preserved in Portico · This version is not peer-reviewed

Multimodal-Multisensory Experiments

Version 1 : Received: 25 August 2020 / Approved: 27 August 2020 / Online: 27 August 2020 (12:06:13 CEST)
Version 2 : Received: 10 October 2020 / Approved: 12 October 2020 / Online: 12 October 2020 (07:06:28 CEST)

How to cite: Razavi, M.; Yamauchi, T.; Janfaza, V.; Leontyev, A.; Longmire-Monford, S.; Orr, J. Multimodal-Multisensory Experiments. Preprints 2020, 2020080614. https://doi.org/10.20944/preprints202008.0614.v1

Abstract

The human mind is multimodal, yet most behavioral studies rely on century-old measures of behavior: task accuracy and latency (response time). Multimodal and multisensory analysis of human behavior creates a better understanding of how the mind works, but designing and implementing such experiments is technically complex and costly. This paper introduces a versatile and economical means of developing multimodal-multisensory human experiments. We provide an experimental design framework that automatically integrates and synchronizes measures including electroencephalogram (EEG), galvanic skin response (GSR), eye tracking, virtual reality (VR), body movement, mouse/cursor motion, and response time. Unlike proprietary systems (e.g., iMotions), our system is free and open-source; it integrates PsychoPy, Unity, and Lab Streaming Layer (LSL). The system embeds LSL inside PsychoPy/Unity to synchronize multiple sensory signals (gaze motion, EEG, GSR, mouse/cursor movement, and body motion) captured with low-cost consumer-grade devices, both in a simple behavioral task designed in PsychoPy and in a virtual reality environment designed in Unity. This tutorial shows a step-by-step process by which a complex multimodal-multisensory experiment can be designed and implemented in a few hours. When the experiment is run, all data synchronization and recording of the data to disk is done automatically.
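The embedding of LSL inside PsychoPy that the abstract describes rests on LSL's outlet model: the experiment script publishes an event-marker stream that any LSL consumer on the network can discover and timestamp against the device streams. Below is a minimal sketch of that idea, assuming pylsl and PsychoPy are installed; the stream name 'PsychoPyMarkers', the source_id, and the three-trial loop are illustrative choices, not prescriptions from the paper.

```python
# Minimal sketch, assuming pylsl and PsychoPy are installed.
# Stream name, source_id, and trial structure are illustrative only.
from psychopy import visual, core, event
from pylsl import StreamInfo, StreamOutlet

# Declare an irregular-rate marker stream; nominal_srate=0 tells LSL the
# samples arrive at event times rather than at a fixed sampling rate.
info = StreamInfo(name='PsychoPyMarkers', type='Markers', channel_count=1,
                  nominal_srate=0, channel_format='int32',
                  source_id='psychopy_markers_example')
outlet = StreamOutlet(info)

win = visual.Window(size=(800, 600), color='black')
stim = visual.TextStim(win, text='Press any key')

for trial in range(3):
    stim.draw()
    win.flip()                       # stimulus onset
    outlet.push_sample([trial + 1])  # LSL timestamps this marker on push
    event.waitKeys()                 # PsychoPy logs the response

win.close()
core.quit()
```

For the automatic recording to disk, LabRecorder's command-line variant (LabRecorderCLI, named in the keywords) can be launched before the task so that every matching LSL stream is written to a single time-aligned XDF file. The executable path, output path, and stream queries below are hypothetical examples:

```python
import subprocess

# Hypothetical paths and stream queries; LabRecorderCLI ships with the
# LSL LabRecorder app and writes all matching streams to one XDF file.
subprocess.Popen([
    r'C:\LSL\LabRecorderCLI.exe',    # assumed install location
    r'C:\Recordings\session01.xdf',  # output file
    "type='EEG'",                    # record any EEG stream...
    "type='Markers'",                # ...and the PsychoPy marker stream
])
```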

Keywords

multimodal experiment; multisensory experiment; automatic device integration; open-source; PsychoPy; Unity; Virtual Reality (VR); Lab Streaming Layer; LabRecorder; LabRecorderCLI; Windows command line (cmd.exe)

Subject

Social Sciences, Cognitive Science
