My research bridges autonomous robotics and cognitive science, developing mathematical algorithms for robot behavior that are grounded in the cognitive science of human-robot interaction. Drawing on my background in computer science and cognitive psychology, I design, implement, and evaluate robotic systems that assist people with complex tasks in human environments. My work draws on principles from robotics, artificial intelligence, machine learning, computer vision, and cognitive science.

In pursuing this vision, I investigate fundamental questions of human-robot interaction.

My research has found that robots can direct human attention using eye gaze [HRI 2013, CogSci 2011], and that a robot's nonverbal behavior can improve human-robot collaboration [HRI 2014, CogSci 2014]. I've built models for understanding nonverbal behaviors expressed by people [ICMI 2014]; for producing assistive robot actions based on human nonverbal behaviors [ShARP 2016, HUMORARR 2016]; and for generating robot nonverbal behaviors that improve collaborations [HRI 2016, ICRA 2016, IROS 2016].

The Cognitive Science of Robot Behavior

The ability to attract and direct human attention is a key skill for collaborative robots, and using a robot's eye gaze to direct a human's attention has relatively high benefit and low cost in human-robot collaborations. Robots can evoke the feeling that they are paying attention to someone by establishing eye contact through short, frequent glances [HRI 2013]. Robots can also direct a partner's attention toward objects in the environment by looking at those objects. Such gaze-based deictic references improve human-robot collaborations. Even when there is an error in gaze behaviors (such as when the robot looks at a different object than it names), people can quickly recover from this mismatch [CogSci 2014].

Though robot eye gaze seems to have effects similar to human eye gaze, my PhD research suggests that existing cognitive science models of gaze processing may not apply directly to human-robot interactions. Using a classic psychophysical test of attention called counterpredictive cueing, I found that robot gaze---unlike human gaze---fails to cause a reflexive shift in human attention, an effect measured in milliseconds [CogSci 2011].

My research has also challenged assumptions about how robots should be designed to work with people. For example, when a robot produces an unexpected nonverbal behavior (such as failing to release an object during a handover), people redirect their attention to the robot's eye gaze for an explanation. This leads to significantly more compliance with the robot's gaze-based communication [HRI 2014]. This finding underscores the notion that algorithms for human-robot interaction must be informed by actual (rather than idealized) human behavior.

My studies are the first applications of certain psychophysics techniques---the counterpredictive cueing test [CogSci 2011] and the target-amidst-distractors paradigm [HRI 2013]---in human-robot interaction. To implement the target-amidst-distractors task, which required eight identical robots, I built a low-cost programmable robot out of modified MyKeepon children's toys and Arduino boards. I open-sourced the MyKeepon platform, which has since been used by other researchers investigating social effects of robots.

Photo of four Keepon robots

Programmable MyKeepon robots were developed to test the effects of robot gaze behaviors on people's perception of attention.

Diagram of stimulus from cueing experiment

Robots like Keepon fail to elicit the reflexive shifts of visual attention that human faces do.

The robot HERB engaged in a handover task.

By breaking the seamlessness of handovers, robots can direct attention to nonverbal modalities like eye gaze.

Modeling Human and Robot Behavior

Collaborative robots must (1) recognize the meaning of human nonverbal behaviors, and (2) produce their own nonverbal behaviors that have meaning to people. To recognize the meaning of human nonverbal behaviors, I trained a model on examples of human eye gaze and gestures obtained in the lab from naturalistic human-human tutoring interactions [ICMI 2014]. This model could predict the communicative intent of a nonverbal action (e.g., asking a question or performing a demonstration), and could suggest a nonverbal behavior to match a desired communicative intent.
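The intent-recognition idea above can be illustrated with a toy nearest-centroid classifier. This is a minimal sketch under assumed features and labels: the feature vectors (fraction of time gazing at the partner, gesture rate) and the intent labels are hypothetical, not the trained model from [ICMI 2014].

```python
from collections import defaultdict
from math import dist

# Hypothetical training examples: (gaze-at-partner fraction, gestures/min)
# paired with an illustrative communicative-intent label.
TRAINING = [
    ((0.80, 1.0), "asking_question"),   # sustained partner gaze, few gestures
    ((0.75, 0.5), "asking_question"),
    ((0.20, 6.0), "demonstrating"),     # gaze on the task, many gestures
    ((0.15, 5.0), "demonstrating"),
]

def train_centroids(examples):
    """Average the feature vectors for each intent label."""
    sums = defaultdict(lambda: [0.0, 0.0, 0])
    for (gaze, gesture), label in examples:
        s = sums[label]
        s[0] += gaze; s[1] += gesture; s[2] += 1
    return {label: (s[0] / s[2], s[1] / s[2]) for label, s in sums.items()}

def predict_intent(centroids, features):
    """Classify a nonverbal behavior by its nearest intent centroid."""
    return min(centroids, key=lambda label: dist(centroids[label], features))

centroids = train_centroids(TRAINING)
print(predict_intent(centroids, (0.70, 0.8)))   # gaze-heavy input
```

The same table can be read in both directions: matching a desired intent to its centroid suggests a nonverbal behavior to produce, which mirrors how the model can both recognize and recommend behaviors.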

However, data-driven models are often dependent on manual annotation of human-human interaction data, which is time-consuming and domain-dependent. To produce robot nonverbal behaviors, I avoided the manual annotation problem by developing a scenario-independent, robot-agnostic generative model of robot nonverbal behavior for human-robot collaborations. The model computes the location of human visual attention to select the best deictic gaze and gestures [ICRA 2016]. Nonverbal referential behaviors generated by my model led to statistically significant improvements in people's performance in a human-robot collaboration; on difficult tasks, people were 23% faster and 12% more accurate when the robot used nonverbal behaviors than when it did not [HRI 2016].
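The attention-driven selection of deictic behaviors can be sketched as a simple cost-based rule. This is an assumption-laden illustration, not the model from [ICRA 2016]: the 2D workspace, the point estimate of user attention, and the near/far thresholds are all made up for the example.

```python
from math import dist

# Hypothetical 2D workspace positions (meters). We assume the user attends
# to whatever lies near their estimated attention point.
def select_deictic_behavior(attention_point, target, distractors,
                            near=0.15, far=0.50):
    """Pick the least costly nonverbal behavior that should still
    disambiguate the target, given estimated user attention."""
    target_near = dist(attention_point, target) <= near
    distractor_near = any(dist(attention_point, d) <= near
                          for d in distractors)
    if target_near and not distractor_near:
        return "speech_only"        # attention is already on the target
    if dist(attention_point, target) <= far:
        return "gaze"               # a glance should redirect attention
    return "gaze_and_point"         # distant/ambiguous targets need pointing

print(select_deictic_behavior((0.10, 0.10), target=(0.12, 0.12),
                              distractors=[(0.60, 0.60)]))
```

The design choice is that costlier behaviors (pointing) are reserved for cases where cheaper ones (speech or gaze alone) are predicted to fail, which is when a model of human attention pays off most.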

Photo of a Nao robot pointing to a tool and saying "Red pliers".

Models of human attention can predict when deictic gestures will be most necessary.

Photo of a participant and Nao looking at a set of blocks on a table between them.

People follow robot instructions more accurately when the robot uses attention-relevant deictic eye gaze and gestures.

Shared Autonomy

Assistive technology provides people with motor disabilities a means of living independent lives by allowing them to control manipulator robots from simple input devices like joysticks and head button arrays. But as these robots become more capable, they also become harder to control, increasing the time and energy required to operate them. Shared autonomy augments these teleoperated robots with intelligence and autonomy to predict user intentions and help complete actions while maintaining user control. I am developing a shared autonomy system that predicts people's goals from the eye gaze they express during manipulation [ShARP 2016]. Using this subconscious, nonverbal signal, robots can actively assist toward goals and avoid blocking the user's view while the user controls the robot [HUMORARR 2016]. I've also shown how the same shared autonomy formulation can be inverted to provide responsive assistance in collaborations where the robot is fully autonomous [IROS 2016].
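Gaze-based goal prediction of this kind is often framed as recursive Bayesian inference: each fixation near a candidate goal raises that goal's posterior probability. The sketch below illustrates the idea under stated assumptions; the goal names, positions, and Gaussian-shaped likelihood are invented for the example and are not the system from [ShARP 2016].

```python
from math import dist, exp

def update_goal_beliefs(beliefs, goals, fixation, sigma=0.1):
    """One Bayes update: P(goal | fixation) is proportional to
    P(fixation | goal) * P(goal), assuming users tend to fixate
    near their intended goal (Gaussian-shaped likelihood)."""
    posterior = {}
    for name, pos in goals.items():
        likelihood = exp(-(dist(pos, fixation) ** 2) / (2 * sigma ** 2))
        posterior[name] = likelihood * beliefs[name]
    total = sum(posterior.values()) or 1.0   # guard against underflow
    return {name: p / total for name, p in posterior.items()}

# Hypothetical 2D goal locations (meters) and a uniform prior.
goals = {"cup": (0.2, 0.3), "plate": (0.6, 0.1)}
beliefs = {"cup": 0.5, "plate": 0.5}
for fix in [(0.22, 0.28), (0.19, 0.31), (0.25, 0.30)]:  # fixations near cup
    beliefs = update_goal_beliefs(beliefs, goals, fix)
print(max(beliefs, key=beliefs.get))   # -> "cup"
```

Once the posterior concentrates on one goal, the robot can blend its own goal-directed motion with the user's input; inverting the roles, the same belief update lets a fully autonomous robot respond to a human partner's intentions.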

Photo of user wearing an eyetracker and using a Kinova MICO robot arm

Eyetracking improves shared autonomy predictions.

Application Domains

My ongoing research goal is to build assistive robots that people can interact with as naturally as they would with another person. In the future, physically assistive robots could enable the elderly or people with severe motor impairments to independently perform daily tasks like eating, allowing them to live at home with less reliance on caregivers; socially assistive robots could tutor children and adults or act as skill practice partners for people with social or cognitive disabilities. These robots will act as intelligent partners, interpreting natural human verbal and nonverbal communication to understand what tasks people are trying to accomplish and where they need help, then providing that assistance without explicit direction from the user.

MyKeepon Project

I've also been involved in the MyKeepon Project, which seeks to adapt a commercially available robot toy (MyKeepon) into a programmable robot for research and outreach. More information is on the MyKeepon Project page.