Abstract:

As virtual and augmented reality (VR/AR) and assistive robotics become more prevalent in our day-to-day lives, the need to understand human behavior in mixed reality increases. VR is also advantageous for studying cognitive systems because the environment is entirely modular yet analogous to the physical world. Two modalities that VR headsets can record easily are hand pose and gaze; together, they may reveal some of the neural underpinnings of how we interact with objects. In this study, we used Unity3D, SteamVR, and a VIVE Pro Eye headset to create a virtual environment that allows us to measure eye-hand latency, defined as the time difference between a user's gaze sample and the hand sample whose position is its nearest spatial neighbor. Using Unity3D, we collected manual, ocular, and head positions over several trials for multiple participants. We varied factors such as the color, shape, and speed of a target object to investigate how participants performed a target-pursuit task with their eyes and dominant hand. Through these manipulations, we hope to determine which factors can be used to optimize human perception, and visuomotor coordination in particular.
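To make the latency definition concrete, the sketch below shows one way to compute it offline from recorded trajectories. This is not the authors' published code: it assumes time-stamped 3D gaze and hand positions exported from Unity3D as NumPy arrays, and the function name and array layout are illustrative.

```python
import numpy as np

def eye_hand_latency(gaze_t, gaze_pos, hand_t, hand_pos):
    """Eye-hand latency per gaze sample.

    For each gaze sample, find the hand sample whose 3D position is the
    nearest spatial neighbor of the gaze position; the latency is the time
    offset between that hand sample and the gaze sample (positive when the
    hand arrives at the gazed location after the eyes).

    gaze_t, hand_t: (N,) and (M,) timestamps in seconds.
    gaze_pos, hand_pos: (N, 3) and (M, 3) positions in world coordinates.
    """
    # Pairwise Euclidean distances between every gaze and hand sample: (N, M).
    dists = np.linalg.norm(gaze_pos[:, None, :] - hand_pos[None, :, :], axis=-1)
    nearest = dists.argmin(axis=1)          # index of nearest hand sample per gaze sample
    return hand_t[nearest] - gaze_t         # (N,) latencies in seconds


# Usage example: synthetic 90 Hz trajectories where the hand trails gaze by 150 ms.
t = np.arange(0.0, 5.0, 1 / 90)
gaze = np.stack([np.sin(t), np.cos(t), np.zeros_like(t)], axis=1)
hand = np.stack([np.sin(t - 0.15), np.cos(t - 0.15), np.zeros_like(t)], axis=1)
lat = eye_hand_latency(t, gaze, t, hand)
print(f"median latency: {np.median(lat):.3f} s")  # ~0.150
```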
