Recognizing Multitouch Gestures in 3D Environments

Exploring natural user interaction and gesture recognition for immersive environments and virtual tasks

Role: Undergraduate Researcher, Developer, and Prototyper

Technologies: Python, C# in Unity3D, TensorFlow plugin, Leap Motion

Research Questions:

  • How can we utilize contextual information to recognize fine-grained and unique gestures in 3D environments?

  • How can we leverage data and artificial intelligence for virtual environments and humanitarian purposes?

  • Can we create a more natural interactive environment for immersive computing users, especially non-gaming and non-traditional users?

Throughout my junior year, I worked in Dr. Francisco Ortega's Natural User Interaction Lab on new ways of recognizing detailed and unique human gestures for more effective interactivity in three-dimensional computing.

This work initially stemmed from my virtual reality sign language learning project, as one of our main challenges was consistently recognizing unique three-dimensional signs in the virtual environment. Thus, I researched how to use feed-forward and other deep neural networks to recognize natural human gestures in three-dimensional environments, as sketched below.
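To make the approach concrete, here is a minimal sketch of what such a feed-forward gesture classifier might look like in TensorFlow's Keras API. The feature layout (palm plus five fingertip positions flattened into one vector) and the number of gesture classes are illustrative assumptions rather than the lab's actual model, and the random arrays merely stand in for captured Leap Motion frames.

```python
import numpy as np
import tensorflow as tf

# Assumed feature layout: palm + 5 fingertips, each as (x, y, z),
# flattened into an 18-dimensional vector per frame.
NUM_FEATURES = 6 * 3
NUM_GESTURES = 8  # assumed number of gesture/sign classes

def build_classifier() -> tf.keras.Model:
    """A small feed-forward network mapping one hand pose to a gesture label."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(NUM_FEATURES,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(NUM_GESTURES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Placeholder random data standing in for recorded gesture frames.
    X = np.random.rand(1000, NUM_FEATURES).astype("float32")
    y = np.random.randint(0, NUM_GESTURES, size=1000)
    model = build_classifier()
    model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2)
```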

For this project, I utilized Python, C# in Unity3D, and a TensorFlow plugin to recognize example gestures captured by a Leap Motion controller. The main use cases include construction, design, and therapeutic activities in virtual and augmented reality.
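On the capture side, each Leap Motion frame must be flattened into the same feature vector before it reaches the classifier. The sketch below uses the legacy Leap Motion v2 Python bindings; the exact API surface varies by SDK version, so treat these calls as assumptions to check against the installed SDK.

```python
import Leap  # legacy Leap Motion v2 Python bindings

def frame_to_features(frame):
    """Flatten one tracked hand into the 18-dim vector the classifier expects."""
    if frame.hands.is_empty:
        return None
    hand = frame.hands[0]
    points = [hand.palm_position]
    points += [finger.tip_position for finger in hand.fingers]
    # Each Leap.Vector exposes x, y, z coordinates in millimeters.
    return [coord for p in points for coord in (p.x, p.y, p.z)]

controller = Leap.Controller()
features = frame_to_features(controller.frame())
```

In practice, a classifier like the one above would run on a stream of these vectors, with the Unity3D side consuming the predicted labels through the TensorFlow plugin.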
