This technology is a soft-skinned anthropomorphic facial robot, Emo, designed to display a wide range of nuanced facial expressions, enhancing the quality of human-robot interactions.
Humanoid robots often struggle to engage in natural and authentic social interactions due to limitations in their ability to predict and synchronize facial expressions with human companions. Current solutions, which rely primarily on preprogrammed facial animations, lack flexibility and responsiveness, resulting in delayed or artificial responses. Addressing this gap is essential for improving human-robot interactions in applications requiring seamless nonverbal communication, such as healthcare, education, and customer service.
This technology is an anthropomorphic facial robot that addresses two major challenges in robotic communication: versatility in facial expressions and natural, timely responses. Emo is designed to predict and synchronize its expressions with those of its human companions. Using high-resolution cameras embedded within its eyes, a self-learned kinematic model, and a large dataset of human expressions, it can anticipate an expression, such as a smile, up to 839 milliseconds before it occurs, enabling real-time interaction. Further, equipped with 26 actuators, soft lips, passive joints, and linkages, Emo more accurately replicates the expressions of the human face and mouth. Magnets attached directly to the replaceable face skin deform it with precise control over facial expressions. As a result of these design choices, Emo creates interactions that feel authentic and engaging. This breakthrough has exciting applications in consumer robotics, medical training, experimental psychology, and entertainment.
Patent Pending
IR CU24302
Licensing Contact: Dovina Qu