Xinlei Yu (Leo)

Xinlei is a research assistant at the HaVRI Lab, advised by Dr. Heather Culbertson at the University of Southern California. He recently graduated with an MS in Computer Engineering from the same institution. His research interests include human-computer interaction, haptics, and drones.

Before this, he worked in the Brain-Body Dynamics Lab, advised by Dr. Francisco Valero-Cuevas, where he developed a computer vision-based Infant Motor Learning Assistant Toy to study whether babies can learn to control their bodies while on their tummies. Previously, Xinlei graduated from Iowa State University with a B.S. in Computer Engineering in December 2021.

Currently, he is starting a contract position as a software engineer on Google’s Augmented Reality team.



Ongoing Project

Working on end-to-end wearable navigation devices for people with blindness or low vision (BLV) that guide users through indoor environments via haptic feedback:
SightCap is a head-mounted navigation system designed to help people with BLV navigate their environment. It integrates an RGB-D camera, audio input/output, and edge computing into a wearable device. The system processes environmental data locally to provide real-time mapping, obstacle detection, and voice interaction capabilities while maintaining user privacy.
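
The obstacle-detection step can be illustrated with a short sketch. This is not the SightCap implementation; it is a minimal example assuming the RGB-D camera's depth map arrives as a NumPy array in meters, and the function name, region split, and thresholds are all hypothetical.

```python
import numpy as np

def detect_obstacles(depth_m: np.ndarray, max_range_m: float = 1.5,
                     min_pixels: int = 500) -> list[str]:
    """Flag near obstacles in a depth frame (meters), split into left/center/right thirds.

    Hypothetical sketch -- not the actual SightCap pipeline.
    """
    h, w = depth_m.shape
    # Ignore invalid (zero) depth readings returned by many RGB-D cameras.
    valid = depth_m > 0
    near = valid & (depth_m < max_range_m)

    regions = {"left": near[:, : w // 3],
               "center": near[:, w // 3 : 2 * w // 3],
               "right": near[:, 2 * w // 3 :]}
    # Report a region only if enough pixels are closer than the threshold.
    return [name for name, mask in regions.items() if mask.sum() >= min_pixels]

if __name__ == "__main__":
    # Fake 480x640 frame, 3 m everywhere, with a close object on the right.
    frame = np.full((480, 640), 3.0)
    frame[200:300, 500:600] = 0.8
    print(detect_obstacles(frame))  # ['right']
```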



Under Review

CrazyJoystick: A Handheld Flyable Joystick for Providing On-Demand Haptic Feedback in Virtual Reality
Submitted to ACM SIGCHI 2025

Yang Chen*, Xinlei Yu*, and Heather Culbertson



Research Projects

Exploring Electrotactile Stimulation as a Modality for Sensation Illusion on the Arm: Presented at SCR'23

Xinlei Yu, Xin Zhu, Xiaopan Zhang, and Heather Culbertson

[abstract]
Tummy Time Toy: An Infant Learning Toy


Xinlei Yu, Arya Salgaonkar, Stacey Dusing, and Francisco Valero-Cuevas

[Demo Video] [Pilot Study Video]


Fun Projects

Gesture-Controlled Crazyflie Hand-Tracking System
[Demo Video] [System Diagram]
VR Dressing Room
[Demo Video]


Past Research Projects (selected)

TactileNet: Multi-Armed Bandit-Based Calibration for Electrotactile Stimulation: Developed an electrotactile display driven by a Sensory PCI card with a set of power supplies and amplifiers, and designed a multi-armed bandit-based calibration method to find optimal signal parameters for pleasant stimulation (a minimal sketch of the calibration loop appears below).

[GitHub (partially available)]
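
For illustration only, below is a minimal sketch of a multi-armed bandit (UCB1) calibration loop over a discrete set of stimulation parameters. It is not the TactileNet implementation: the candidate amplitudes, the comfort-rating scale, and the simulated participant response are assumptions.

```python
import math
import random

class UCB1Calibrator:
    """UCB1 bandit over a discrete set of stimulation parameters.

    Illustrative sketch only: arms, rewards, and the rating scale are
    assumptions, not the parameters used in TactileNet.
    """

    def __init__(self, params):
        self.params = params              # candidate signal parameters (e.g., amplitudes in mA)
        self.counts = [0] * len(params)   # times each arm was tried
        self.values = [0.0] * len(params) # running mean comfort rating per arm

    def select(self):
        # Try every arm once before applying the UCB rule.
        for i, c in enumerate(self.counts):
            if c == 0:
                return i
        total = sum(self.counts)
        ucb = [v + math.sqrt(2 * math.log(total) / c)
               for v, c in zip(self.values, self.counts)]
        return max(range(len(ucb)), key=ucb.__getitem__)

    def update(self, arm, reward):
        # Incremental mean update of the observed comfort rating.
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

if __name__ == "__main__":
    amplitudes = [0.5, 1.0, 1.5, 2.0]  # hypothetical amplitudes (mA)
    calib = UCB1Calibrator(amplitudes)
    for _ in range(50):
        arm = calib.select()
        # Stand-in for a participant's comfort rating in [0, 1].
        rating = random.random() * (1.0 - abs(amplitudes[arm] - 1.0))
        calib.update(arm, rating)
    best = max(range(len(amplitudes)), key=calib.counts.__getitem__)
    print("most-selected amplitude:", amplitudes[best])
```
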
American Sign Language: An alphabet translator for American Sign Language (ASL) that uses Convolutional Neural Networks (CNNs) and Residual Neural Networks (ResNets) to classify RGB images of ASL alphabet hand gestures, with hyperparameters tuned for high training accuracy and strong test performance (a minimal model sketch appears below).

[paper] [GitHub]
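
As a rough illustration, the sketch below wires a ResNet-18 backbone to a 26-way classification head with PyTorch/torchvision. The class count, input size, and backbone choice are assumptions for the example rather than the configuration from the project repo.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 26  # assumption: one class per ASL alphabet letter

def build_model(pretrained: bool = True) -> nn.Module:
    """ResNet-18 backbone with the final layer swapped for 26 ASL letters.

    Sketch only -- not the exact configuration from the project repo.
    """
    weights = models.ResNet18_Weights.DEFAULT if pretrained else None
    model = models.resnet18(weights=weights)
    model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
    return model

if __name__ == "__main__":
    model = build_model(pretrained=False)
    # One fake RGB image batch: (batch, channels, height, width).
    x = torch.randn(1, 3, 224, 224)
    logits = model(x)
    print(logits.shape)  # torch.Size([1, 26])
```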