This technology, named PhysioLabXR, is an open-source, Python-based application for real-time processing and visualization of multimodal physiological data that can be customized for human-computer interaction and extended reality (XR) experiments.
In neuroscience and human-computer interaction, interest has grown in experiments that combine multiple types of physiological measurements, whether to enhance their predictive power or to study the interaction between different physiological systems. Few methods currently allow integration of real-time multimodal physiological data; those that do are either written in statically compiled languages, making them difficult to customize, or lack a high-performance backend capable of handling high-throughput data. More flexible, scalable software is needed to design and deploy complex extended reality (XR) experiments and neurofeedback loops.
This technology is a Python-based open-source software platform that provides real-time, multimodal data processing via an interactive interface, enabling enhanced neuroscience and human-computer interaction experimentation. It synchronizes, visualizes, records, and processes multiple streams of high-throughput physiological data using a modular software architecture optimized for efficiency via concurrency and parallelism. The software offers a user-friendly graphical user interface and easily extensible Python APIs. It can handle data from electrophysiological sensors (EEG, EMG, and EOG), functional near-infrared spectroscopy (fNIRS), eye trackers, cameras, microphones, and screen capture. It also implements the commonly used data transfer protocols Lab Streaming Layer (LSL) and ZeroMQ, while offering an SDK for users to create plugins for their own devices. In addition, the scripting feature lets users take advantage of Python's extensive ecosystem of libraries to create and deploy complex custom data-processing pipelines and to build cross-platform solutions.
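To illustrate the kind of data handling involved, the sketch below packs a timestamped multichannel sample into a byte payload of the sort one might send over a ZeroMQ socket to a real-time receiver. This is a minimal standard-library illustration, not PhysioLabXR's actual wire format: the function names are hypothetical, and any real sender and receiver must agree on the format on both ends.

```python
import struct
import time

def pack_sample(channels, timestamp=None):
    """Pack one multichannel sample (e.g. EEG voltages) as little-endian
    float64 bytes: 8-byte timestamp followed by the channel values.

    Hypothetical helper for illustration only; the wire format a given
    receiver expects (e.g. a ZeroMQ subscriber) is an assumption here.
    """
    if timestamp is None:
        timestamp = time.time()
    header = struct.pack("<d", timestamp)
    body = struct.pack(f"<{len(channels)}d", *channels)
    return header + body

def unpack_sample(payload):
    """Inverse of pack_sample: recover (timestamp, list of channel values)."""
    timestamp = struct.unpack_from("<d", payload, 0)[0]
    n = (len(payload) - 8) // 8
    channels = struct.unpack_from(f"<{n}d", payload, 8)
    return timestamp, list(channels)

# Round-trip a synthetic 4-channel sample
ts, ch = unpack_sample(pack_sample([1.0, 2.0, 3.0, 4.0], timestamp=123.5))
```

In practice, payloads like this would be one frame of a ZeroMQ multipart message (stream name, then data), or the same sample could be pushed through an LSL outlet instead; the choice of protocol depends on the experiment's latency and throughput needs.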
This technology has been applied in augmented/virtual reality and screen-based experiments.
IR CU24335
Licensing Contact: Dovina Qu