The Touch-Code Glove: a multimodal mapping interface with triboelectric-digital encoding for intuitive robot training
Abstract
Current human-robot interaction (HRI) systems for training embodied intelligent robots often suffer from limited motion dimensionality and unintuitive control. This work presents the Touch-Code Glove, a multimodal HRI interface that integrates functional materials, structural intelligence, and deep-learning decoding. A triboelectric digital interface is embedded in the Wrist-pad via a mosaic-patterned array of polyamide/polytetrafluoroethylene (PA/PTFE)-doped silicone rubber films, which generates polarity-dependent digital signal pairs upon contact. A co-electrode layout supports 16 touch points with minimal wiring, allowing multiplexed, programmable tactile input through sliding or multi-point gestures. The coupled triboelectric signals are accurately decoded by a combined convolutional neural network and long short-term memory (CNN-LSTM) model, achieving over 98% recognition accuracy. In parallel, a double-network conductive hydrogel composed of sodium alginate, polyacrylamide, and sodium chloride (SA/PAM/NaCl) is integrated into the Finger-fibers and the Wrist-pad to provide strain sensing with high stretchability, high linearity, low hysteresis, and long-term stability. The system runs three concurrent sub-mapping strategies: gesture-driven control, wrist-posture-based movement, and touch-path-guided input, which together enable real-time control of robotic hands and arms without professional training. This triboelectric-hydrogel hybrid interface offers a materials-centric solution for intelligent, wearable, and accessible HRI, paving the way for next-generation multimodal robotic control systems in assistive and industrial applications.
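The abstract names a CNN-LSTM decoder for the coupled triboelectric signals but gives no architectural details. The following NumPy sketch illustrates the generic forward pass such a decoder implies: a 1-D convolution extracts local features from the multichannel touch signals, an LSTM summarizes them over time, and a linear layer scores gesture classes. All dimensions (16 touch channels, 64 time samples, 8 filters, hidden size 32, 10 classes) and the random weights are illustrative assumptions, not the paper's trained model.

```python
import numpy as np

# Minimal CNN-LSTM forward pass (illustrative only; weights are random,
# all dimensions are assumptions, not values from the paper).
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conv1d_relu(x, w, b):
    """Valid 1-D convolution over time with ReLU; x: (in_ch, T), w: (out_ch, in_ch, k)."""
    out_ch, _, k = w.shape
    T = x.shape[1] - k + 1
    y = np.empty((out_ch, T))
    for o in range(out_ch):
        for t in range(T):
            y[o, t] = np.sum(w[o] * x[:, t:t + k]) + b[o]
    return np.maximum(y, 0.0)

def lstm_last_hidden(xs, Wx, Wh, b, hidden):
    """Run an LSTM over xs (T, D); return the final hidden state (gate order i, f, g, o)."""
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    for x in xs:
        z = Wx @ x + Wh @ h + b
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)
        h = o * np.tanh(c)
    return h

# Random toy weights standing in for trained parameters.
Wc, bc = 0.1 * rng.standard_normal((8, 16, 5)), np.zeros(8)
Wx, Wh = 0.1 * rng.standard_normal((128, 8)), 0.1 * rng.standard_normal((128, 32))
bl = np.zeros(128)
Wo, bo = 0.1 * rng.standard_normal((10, 32)), np.zeros(10)

signal = rng.standard_normal((16, 64))          # one windowed 16-channel touch recording
feat = conv1d_relu(signal, Wc, bc)              # (8, 60): local spatiotemporal features
h = lstm_last_hidden(feat.T, Wx, Wh, bl, 32)    # (32,): temporal summary of the window
logits = Wo @ h + bo                            # (10,): gesture-class scores
pred = int(np.argmax(logits))                   # index of the decoded gesture class
```

In a trained system, the weights would be fitted on labeled touch recordings and the argmax would select among the glove's programmed gesture or touch-path commands.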
Keywords
Human-robot interface, multimodal sensing, triboelectric encoding, motion mapping, flexible electronics, embodied intelligent robots