KCIST Colloquium - Future action models for AI-powered and cognitively-enabled robots

  • Venue:

    Adenauerring 2, Geb. 50.20, Raum 148, 76131 Karlsruhe

  • Date:

    May 23, 2023, 4:00 p.m.

  • Speaker:

    Prof. Michael Beetz, University of Bremen

    Michael Beetz is a professor of Computer Science at the University of Bremen and heads the Institute for Artificial Intelligence. Until 2011 he was vice-coordinator of the German Cluster of Excellence CoTeSys (Cognition for Technical Systems), where he also coordinated the research area “Knowledge and Learning”. He coordinated the European FP7 integrating project RoboHow and is now coordinator of the collaborative research center “Everyday Activity Science and Engineering (EASE)”, funded by the German Science Foundation (DFG). He is also co-coordinator of the University of Bremen research focus area “Minds, Media, Machines” and co-coordinator of the Bremen AI strategy.

    His research focuses on plan-based control of robotic agents, knowledge processing and representation for robots, integrated robot learning, and cognition-enabled perception.

  • Abstract:

    The realization of computational models for accomplishing everyday manipulation tasks for any object and any purpose would be a disruptive breakthrough in the creation of versatile, general-purpose robot agents; it is a grand challenge for AI and robotics. Humans can accomplish tasks such as “cut up the fruit” for many types of fruit by generating a large variety of context-specific manipulation behaviours. They can typically accomplish such tasks on the first attempt, despite uncertain physical conditions and novel objects. Acting so effectively requires comprehensive reasoning about the possible consequences of intended behaviour before physically interacting with the real world.

    In the talk, he will sketch ideas for a knowledge representation and reasoning (KR&R) framework based on explicitly represented, machine-interpretable inner-world models that enable robots to contextualize underdetermined manipulation task requests on the first attempt. The hybrid symbolic/sub-symbolic KR&R framework is designed to contextualize actions by reasoning symbolically in an abstract and generalized manner, but also by reasoning with “one's eyes and hands” through mental simulation and imagistic reasoning.