GAIN-AI NASA SUITS (Ongoing)
This project investigates how spatial tasks in extreme environments—such as spacesuit use—can be better supported through context-aware guidance rather than static instructions. Focusing on the embodied demands of mobility, donning, and task execution, the work explores how visual overlays and haptic cues can translate complex spatial constraints into intuitive, real-time feedback. By aligning guidance with the wearer’s physical state and environment, the project proposes a more adaptive training and assistance model that reduces error, cognitive load, and reliance on memorization in high-risk conditions.
Date: 2025

Project Type: Student Project Proposal

Keywords: AR, UI Design, Extravehicular Activity, NASA SUITS

Team: Rodrigo Gallardo, Qilmeg Doudatcz, Ganit Goldstein, Ilkyaz Sarimehmetoglu, Sergio Mutis, Alex Htet Kyaw, Anita Lin, Clara Emmerling, Berfin Ataman

Advisor: Skylar Tibbits
System Overview


Glove Navigation

Current spacesuit gloves restrict dexterity, limiting how astronauts can manipulate tools and interact with interfaces.

Many AR interfaces assume full-arm gestures, requiring large, imprecise movements that are impractical inside a pressurized suit.



The UI is designed around a five-icon central layout, organizing the main menu into five categories that can all be accessed with one hand from any angle. This eliminates the need to lift the hand or reach toward spatially distributed icons. The interface is head-locked to the user for consistent and reliable access.
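As a rough illustration of how such a layout could work, the sketch below maps a small single-hand input (a normalized 2D nudge away from the menu center) to the nearest of five icons arranged around that center. The category names, angular spacing, dead zone, and the use of a 2D offset are illustrative assumptions, not the project's actual implementation.

```python
import math

# Hypothetical category names; the real menu contents are not specified here.
CATEGORIES = ["Navigation", "Tasks", "Suit Status", "Comms", "Samples"]

# Five icons spaced evenly around the head-locked menu center (72 degrees apart).
ICON_ANGLES = [i * (2 * math.pi / len(CATEGORIES)) for i in range(len(CATEGORIES))]

def select_category(dx: float, dy: float, dead_zone: float = 0.15) -> str | None:
    """Map a small 2D offset (e.g., a thumb or wrist nudge, normalized to [-1, 1])
    to the nearest of the five icons. Returns None inside the dead zone so tiny,
    unintentional movements do not trigger a selection."""
    magnitude = math.hypot(dx, dy)
    if magnitude < dead_zone:
        return None
    angle = math.atan2(dy, dx) % (2 * math.pi)
    # Pick the icon whose angle is circularly closest to the input direction.
    best = min(
        range(len(CATEGORIES)),
        key=lambda i: min(abs(angle - ICON_ANGLES[i]),
                          2 * math.pi - abs(angle - ICON_ANGLES[i])),
    )
    return CATEGORIES[best]

# Example: a small nudge up and to the right selects the icon in that direction.
print(select_category(0.4, 0.3))
```

Because the layout is head-locked and the input is a relative offset rather than an absolute reach, the same selection logic works from any hand position, which is the property the paragraph above relies on.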
Hardware




Two technical approaches were considered. The first uses conductive, flexible fingertip pads to detect touch and contact for interaction. The second, and ultimately more feasible, approach embeds IMUs into the glove's stitched threads, enabling motion-based detection directly within the glove.
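To make the IMU-based approach concrete, here is a minimal sketch of how a single fingertip IMU's acceleration stream might be thresholded to detect a deliberate tap. The threshold, refractory period, and assumed sampling rate are placeholder values for illustration, not measurements from the actual glove hardware.

```python
from dataclasses import dataclass

@dataclass
class TapDetector:
    """Toy detector for a deliberate fingertip tap from one IMU's
    acceleration magnitude (in g). Thresholds are illustrative placeholders."""
    threshold_g: float = 2.5      # spike above this counts as a candidate tap
    refractory_samples: int = 20  # ignore samples right after a tap (~0.2 s at 100 Hz)
    _cooldown: int = 0

    def update(self, accel_magnitude_g: float) -> bool:
        """Feed one sample; returns True on the sample where a tap is detected."""
        if self._cooldown > 0:
            self._cooldown -= 1
            return False
        if accel_magnitude_g > self.threshold_g:
            self._cooldown = self.refractory_samples
            return True
        return False

# Example with a synthetic stream: quiet motion, one sharp spike, then quiet again.
detector = TapDetector()
stream = [1.0, 1.1, 0.9, 3.4, 3.1, 1.0, 1.0]
taps = [i for i, a in enumerate(stream) if detector.update(a)]
print(taps)  # -> [3]: only the first sample of the spike registers as a tap
```

A simple spike-plus-refractory rule like this keeps detection inside the glove itself and avoids the large arm gestures that a pressurized suit makes impractical; a real system would likely fuse several IMUs and filter suit-induced motion.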




