
Despite the many success stories in robotics, robots are still not capable of taking over many daily-life tasks, such as doing the dishes or cleaning the kitchen. These tasks are challenging due to their unstructured settings and the large variety in, for example, object geometries, environmental conditions, and contexts. Since most of these tasks pose no challenge to humans, we should leverage human knowledge to let a robot interactively learn to execute useful tasks. This interaction is most natural through natural language and gestures. Therefore, I am studying how to optimally fuse multimodal information for interactive robot learning.
Supervisors
- Laura Ferranti
- Jens Kober