GRK 2340

Graduiertenkolleg "Computational Cognition"



Daniel Anthes

Room 50/111

Institute of Cognitive Science,
Wachsbleiche 27,
49090 Osnabrück, Germany

Supervisors

Prof. Dr. Tim Kietzmann
Prof. Dr. Peter König

The neural mechanics of lifelong learners

My research interests lie primarily at the intersection of machine learning and neuroscience. I am passionate about exploring the representations and functional organisation of both biological brains and artificial neural network models.

Project abstract:
Key abilities of humans and other animals are adapting to changing environments and acquiring new skills and memories over their lifetimes. Humans easily remember facts they learned long ago and can perform skills they have not practised recently; even when they do forget, the process is usually gradual. Biological brains retain what they have learned while remaining plastic enough to adapt to new situations throughout life.

Artificial neural networks, although used successfully as state-of-the-art models of neural processes, do not share these capabilities: they perform best when trained on a static distribution of input data, in which all tasks are interleaved and optimised for at the same time. After initial training, changing the input data distribution or optimising the network for additional tasks leads to a rapid decline in performance on tasks the network previously performed well on. This phenomenon is termed ‘catastrophic forgetting’ and is the subject of ongoing debate and research.
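To make the phenomenon concrete, the toy sketch below trains a simple classifier on two synthetic tasks one after the other and then re-evaluates it on the first task. Everything here is illustrative and hypothetical: the Gaussian data, the NumPy logistic-regression “network”, and the plain gradient descent are stand-ins chosen for brevity, not the models or data used in this project. The two tasks could in principle be solved jointly (they rely on different input features), yet sequential training typically drives accuracy on the first task back towards chance.

import numpy as np

rng = np.random.default_rng(0)
DIM = 20

def make_task(feature):
    # Synthetic binary task: the label depends on a single input feature.
    X = rng.normal(size=(1000, DIM))
    y = (X[:, feature] > 0).astype(float)
    return X, y

def train(w, b, X, y, epochs=300, lr=0.5):
    # Plain full-batch gradient descent on the logistic loss.
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return float(np.mean(((X @ w + b) > 0) == y))

Xa, ya = make_task(feature=0)    # "task A"
Xb, yb = make_task(feature=1)    # "task B"

w, b = np.zeros(DIM), 0.0
w, b = train(w, b, Xa, ya)
print("task A accuracy after training on A:", accuracy(w, b, Xa, ya))

w, b = train(w, b, Xb, yb)       # continue training on task B only
print("task B accuracy after training on B:", accuracy(w, b, Xb, yb))
print("task A accuracy after training on B:", accuracy(w, b, Xa, ya))

In an interleaved regime, where batches from both tasks are mixed during training, the same classifier can solve both tasks at once; it is the strictly sequential presentation that produces the forgetting.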

In this project, we seek to better understand the learning process of artificial neural networks in continual learning scenarios, in which data and tasks arrive sequentially. We investigate why learning algorithms and network architectures that perform impressively well on static tasks fail in continual learning scenarios. We take inspiration from properties of biological brains to identify mechanisms and inductive biases that may enable artificial neural networks to become successful continual learners.
