GRK 2340

Research Training Group (Graduiertenkolleg) "Computational Cognition"


Michael Marino


Tel. +49 (0)541 969-3372
Room 50/106

Institute of Cognitive Science,
Wachsbleiche 27,
49090 Osnabrück, Germany

Supervisors

Gunther Heidemann
Joachim Hertzberg

From Point Clouds to Symbols in Mobile Robotics

When you look around, what do you see? Most responses will vary to some extent based on the type of environment you currently find yourself in, but what will likely be consistent across responses (from humans in particular) is that the answer will take the form of a subset of object classes, or semantic classes of objects, as we would call them. If someone responded to this question by saying "I see a 2D array of colors representing light reflected at various wavelengths spanning the continuum that is my particular instance of human vision", that would seem strange. And yet such an answer would, in some sense, be a more objective representation of "reality": perhaps less useful in some ways, but more objective nonetheless.

This notion of perceiving sense data in its raw form, without the addition of a "semantic" interpretation to aid in inference and communication, is what may be called a "bottom-up" view of the world, and it is more closely aligned with what machine learning models do. Such models may encode a particular (semantic) output class as a particular entry in an output vector, which is semantic information on some level. But there is a significant gap between simply mapping vector indices onto semantic classes and the sort of complex, hierarchical model of semantic relationships at the heart of every human being's understanding of the world, whether conscious or subconscious. If I asked you what the relationship between a chair and a sofa is, you would likely say that they are both pieces of furniture, or that they are both things we sit on. Asking an analogous question of a machine learning model is slightly more challenging: one must first find a way to connect particular instances of raw sensor data to something resembling an object class, and then come up with a meaningful way of reasoning about the connection between the two classes.

This project seeks to develop methods for relating raw sensor data to semantic-level relationships, both through hierarchical analysis techniques and through ways of enforcing arbitrary hierarchies on bottom-up machine learning models; a toy sketch of the underlying idea follows below. The ultimate goal is to use the resulting analysis techniques and training paradigms, along with the insights gained in developing them, to build robotic learning algorithms that exploit feedback from a human user, enabling robots to generalize more effectively than they could without this extra level of semantic feedback. In other words, the resulting approach to learning in robotics should shift the robot's "view" of the world in a direction more readily understandable to a human user.
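The gap between vector indices and a semantic hierarchy can be made concrete with a small sketch. The code below is only an illustration of the general idea, not the method from the publication listed further down: it uses synthetic stand-ins for per-class feature vectors (in practice these might be per-class means of activations extracted from a trained bottom-up model) and applies standard agglomerative clustering from SciPy to recover a class tree in which, ideally, "chair" and "sofa" end up under a common furniture-like node. The class names, feature dimensions, and cluster separations are all invented for the example.

```python
# Toy sketch: recovering a hierarchy over semantic classes by agglomerative
# clustering of per-class feature vectors. The vectors below are synthetic
# stand-ins for representations a trained bottom-up model might produce
# (e.g. per-class means of penultimate-layer activations).
import numpy as np
from scipy.cluster.hierarchy import dendrogram, linkage

rng = np.random.default_rng(0)

# Hypothetical semantic classes; related classes get nearby means so the
# recovered tree has an interpretable "furniture" vs. "vehicle" split.
class_names = ["chair", "sofa", "table", "car", "bicycle", "truck"]
group_mean = {"chair": 0.0, "sofa": 0.0, "table": 0.0,
              "car": 5.0, "bicycle": 5.0, "truck": 5.0}
features = np.stack(
    [rng.normal(loc=group_mean[c], scale=1.0, size=16) for c in class_names]
)

# Bottom-up (agglomerative) clustering: repeatedly merge the two closest
# clusters, producing a binary tree over the semantic classes.
Z = linkage(features, method="average", metric="euclidean")

# Inspect the merge order; in a notebook, dendrogram(Z, labels=class_names)
# draws the resulting hierarchy.
for i, (a, b, dist, size) in enumerate(Z):
    print(f"merge {i}: clusters {int(a)} + {int(b)} at distance {dist:.2f} "
          f"(cluster size {int(size)})")
```

With real class representations in place of the synthetic vectors, the same kind of recovered tree can be compared against a human-specified hierarchy, or used as an analysis tool to see which semantic groupings a model has picked up implicitly.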


Publication

Michael Marino, Georg Schröter, Gunther Heidemann, Joachim Hertzberg (2020)
Hierarchical Modeling with Neurodynamical Agglomerative Analysis
LNCS, volume 12396
