Virtual Reality Sign Language Learning Tool

A hackathon prototype that uses immersive, vision-based interaction to make sign language education faster and more accessible

Project Role: Hackathon Developer

Technologies: Unity3D, C#, Leap Motion

Articles: Sign Language Project Wins Second Annual Hackathon, Hackathon Teams Create Mixed Realities in One Weekend

After working in Craig Hospital's Assistive Technology Lab, I became interested in accessibility and brought that focus to my university's virtual reality hackathon, RamHack, in October 2017. There, I worked with a team of four to create an immersive American Sign Language (ASL) tutor for both deaf and hearing people. Children learn language best through immersion, yet many parents of deaf children have few options for learning sign language themselves or helping their children learn it. We therefore built an immersive, vision-based ASL application that recognizes ASL gestures, teaches the user to spell different words, and spawns a matching object into the virtual space upon success. For example, the application might teach a user to spell "bed" and then spawn a three-dimensional bed into the scene.

Our project won first prize after just 48 hours of work, as voted on by guest judges including co-founders and directors of VR at Hewlett Packard, NVIDIA, and Motion Reality Inc. The initial prototype code can be found at https://github.com/kellyndassler/CSU-VR-2017-Hackathon-Localhost. The vector-based challenges we faced in hand gesture recognition inspired me to continue research in multitouch gesture recognition over the following year in the NUI Lab; a summary can be found in the "Research" section of my portfolio. The experience also pushed me to attend other hackathons, such as Hack Georgia Tech and JPMorgan Chase Code for Good, and to help other women pursue hackathon participation.
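To give a flavor of the vector-based recognition problem mentioned above, here is a minimal C# sketch of one common approach: comparing the user's observed fingertip direction vectors against a stored per-letter template using cosine similarity. None of this code is from the actual hackathon repository; the type names, the five-finger template shape, and the similarity threshold are all illustrative assumptions.

```csharp
using System;

// Illustrative sketch only -- not the hackathon's actual implementation.
// A gesture template stores one canonical direction vector per finger;
// a live pose matches when every observed finger direction is close
// enough (by cosine similarity) to the template's.
struct Vec3
{
    public float X, Y, Z;
    public Vec3(float x, float y, float z) { X = x; Y = y; Z = z; }

    public float Dot(Vec3 o) => X * o.X + Y * o.Y + Z * o.Z;
    public float Length() => (float)Math.Sqrt(Dot(this));

    // Cosine similarity: 1 = same direction, 0 = perpendicular, -1 = opposite.
    public float CosineSimilarity(Vec3 o) => Dot(o) / (Length() * o.Length());
}

class GestureTemplate
{
    public string Letter;      // e.g. "B"
    public Vec3[] FingerDirs;  // one canonical direction per finger

    // 0.9 (roughly a 25-degree cone) is an arbitrary example threshold.
    public bool Matches(Vec3[] observedDirs, float threshold = 0.9f)
    {
        if (observedDirs.Length != FingerDirs.Length) return false;
        for (int i = 0; i < FingerDirs.Length; i++)
            if (FingerDirs[i].CosineSimilarity(observedDirs[i]) < threshold)
                return false;
        return true;
    }
}
```

In a Unity and Leap Motion setup like ours, the observed directions would come from the tracker's per-frame hand data, and a successful match against each letter of the target word would advance the lesson and trigger spawning the corresponding 3D object.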
