We organized the Research Training Group in two related clusters:
1. From Signals to Symbols and Back: Neural Representations and Learning
2. Symbolic Thought: Processes on Uncertain and Incomplete Representations
The first cluster addresses the fundamental problem of bridging low- and high-level aspects of cognition. In particular, we ask how symbolic representations can be implemented and learned in neurally plausible architectures. This is, of course, an old question, but recent theoretical and practical advances in deep learning and recurrent neural networks warrant a fresh look at the problem. In addition, growing experimental work on oscillations and predictive coding in the brain casts new light on the principles of neural computation.
The second cluster is complementary to the first and focuses on high-level cognitive processes, such as analogical reasoning and language understanding. These processes are best conceptualized as symbolic thought, and hence most naturally modeled with symbol systems. However, it is crucial that these systems be able to deal with uncertainty and incomplete knowledge. We seek to identify some of the missing ingredients for human-level intelligence by studying how symbolic thought in humans deals with the problems of uncertainty and knowledge induction in some well-understood domains of cognition.
A common theme of several of the envisioned dissertation projects in both clusters is bridging the gap between low- and high-level cognition, approached from different directions and with an emphasis on either the signal side or the symbol side. In addition, cutting across the two clusters, there are many methodological synergies between possible projects. We expect several PhD students to work with neural networks and several with logics. Machine learning tools will be used routinely by many of the students. On the experimental side, psychophysics, eye-tracking and EEG will be common methods in many projects. Hence, the Research Training Group will provide an inspiring environment with respect to both topics and methodologies.
- Deep recurrent neural networks for action models
- Self-organized grammar learning with a plastic recurrent network
- Hierarchically structured object representations
- Human categorization strategies for computer vision
- From point clouds to symbols in mobile robotics
- Semi-supervised conceptors and conceptor logic
- Constrained semi-supervised learning
- Relationship extraction using NLP and image content
- Co-development of analogical reasoning in language and cognition
- Robots focusing on relevant knowledge
- Attention from abduction
- Action-oriented scene understanding
- Probabilistic models of conditionals
- The semantics, pragmatics and acquisition of polarity items