Columbia Technology Ventures

Humanoid robot capable of facial co-expression

This technology is a soft-skinned anthropomorphic facial robot, Emo, designed to display a wide range of nuanced facial expressions, enhancing the quality of human-robot interactions.

Unmet Need: Advancing naturalistic human-robot nonverbal communication

Humanoid robots often struggle to engage in natural and authentic social interactions due to limitations in their ability to predict and synchronize facial expressions with human companions. Current solutions, which rely primarily on preprogrammed facial animations, lack flexibility and responsiveness, resulting in delayed or artificial responses. Addressing this gap is essential for improving human-robot interactions in applications requiring seamless nonverbal communication, such as healthcare, education, and customer service.

The Technology: Predictive co-expression for improving human-robot interactions

This technology is an anthropomorphic facial robot that solves two major challenges in robotic communication: versatility in facial expressions and natural, timely responses. Emo is designed to predict and synchronize its expressions with humans. Using high-resolution cameras embedded within its eyes, a self-learned kinematic model, and a large dataset of human expressions, it can anticipate an expression, such as a smile, up to 839 milliseconds before it occurs, allowing for real-time interaction. Further, equipped with 26 actuators, soft lips, passive joints, and linkages, Emo is designed to more accurately replicate the expressions of the human face and mouth. The use of direct-attached magnets to deform the replaceable face skin provides more precise control over facial expressions. As a result of these design choices, Emo creates interactions that feel authentic and engaging. This breakthrough has exciting applications in consumer robotics, medical training, experimental psychology, and entertainment.
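The perceive-predict-actuate loop described above can be illustrated with a minimal sketch. This is a toy illustration only: the linear extrapolation, the 30 fps frame rate, and all function and variable names are assumptions for exposition, not the robot's actual self-learned model; only the ~839 ms prediction horizon comes from the description above.

```python
# Hypothetical sketch of a predict-then-actuate facial co-expression loop.
# The linear predictor, frame rate, and all names are illustrative
# assumptions; a real system would use a learned model.
from collections import deque

LEAD_TIME_S = 0.839      # prediction horizon noted above (~839 ms)
FRAME_PERIOD_S = 1 / 30  # assumed camera frame rate (30 fps)

def predict_landmarks(history, lead_time_s):
    """Linearly extrapolate each tracked facial landmark forward in time.

    `history` holds (timestamp, landmark_values) samples, oldest first.
    """
    (t0, prev), (t1, curr) = history[-2], history[-1]
    dt = t1 - t0
    return [c + (c - p) * lead_time_s / dt for p, c in zip(prev, curr)]

def landmarks_to_actuators(landmarks, gain=1.0):
    """Map predicted landmark values to motor commands,
    clamped to a safe [-1, 1] range per actuator."""
    return [max(-1.0, min(1.0, gain * x)) for x in landmarks]

# Two toy landmarks (e.g., mouth-corner heights) rising between frames,
# as at the onset of a smile.
history = deque(maxlen=8)
history.append((0.0, [0.00, 0.10]))
history.append((FRAME_PERIOD_S, [0.02, 0.12]))

predicted = predict_landmarks(history, LEAD_TIME_S)
commands = landmarks_to_actuators(predicted)
```

The key design point this sketch captures is that the robot acts on where the human's face is *going*, not where it is, so the motor commands can be issued early enough to mask actuation latency and produce a synchronized, rather than delayed, response.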

Applications:

  • Consumer electronics and robotics
  • Wearable technology, including smart glasses or augmented reality experiences
  • Telepresence and robotics
  • Gaming and entertainment
  • Industrial inspection and diagnostics of hard-to-access areas
  • Research platform for neuroscience and psychology experiments
  • Lifelike animatronic characters for theme parks and performances
  • Tool to study nuances in human communication

Advantages:

  • Predictive co-expression platform for natural human-robot interaction
  • Anticipates and synchronizes facial expressions in real time
  • High degree of expressiveness with 26 degrees of freedom
  • Real-time adaptability for dynamic human expressions
  • Precise control over facial expressions using magnet-attached skin

Lead Inventor:

Hod Lipson, Ph.D.

Patent Information:

Patent Pending

Related Publications:

Tech Ventures Reference: