Role Shifting and Spatial Organization for Robotic Signing of American Sign Language
Authors
Hatami, Behnoushsadat
Issue Date
2025
Type
Thesis
Language
en_US
Keywords
American Sign Language (ASL), Human-Robot Interaction, Motion Planning, Robotic Sign Language, Role Shifting
Abstract
Fluent sign languages rely on spatial structure, perspective, and the coordinated use of the upper body. Yet most existing robotic signing systems reproduce isolated words or fingerspelled letters, offering little support for discourse-level communication in which signers shift perspectives or describe interactions between people. A major limitation is the lack of role shifting, the linguistic device through which signers adopt different viewpoints by repositioning the signing space via body orientation and arm placement. This work introduces a motion-generation framework that enables a humanoid robot to perform American Sign Language (ASL) sentences with linguistically grounded role shifting. The system organizes the signing space into referent-specific regions (“signer’s squares”), coordinates torso rotation with arm motion to shift narrative perspective, and preserves the linguistic constraints of different sign types. The framework was implemented and evaluated in simulation on the Unitree G1 humanoid robot, demonstrating that the system can reliably reposition signs and execute role-shifting behaviors across different signer’s squares. A complementary case-study evaluation with an ASL-fluent collaborator revealed issues not visible in the technical tests, leading to refinements in spatial calibration and timing. Together, these results show that modeling perspective and spatial consistency produces robotic signing that is clearer and more closely aligned with how ASL encodes meaning.
