The collaborative research center “Robust Vision – Inference Principles and Neural Mechanisms” (CRC 1233) investigates basic principles of biological and machine vision and is a close collaboration between scientists from the University and the Max Planck Institute for Intelligent Systems. Human visual perception is remarkably robust: even in highly variable environments, we are able to make reliable inferences about the spatial arrangement of the world from limited visual information. To achieve this, our brain must perform complex computations. Artificial vision systems, as used, for example, in self-driving cars, are in turn making rapid progress in reproducing the visual skills of humans. The goal of this center is to better understand the principles and algorithms that enable robust visual inference in both humans and machines.
- Task-dependent top-down modulation of visual processing (von Luxburg, Franz)
- Top-down control of visual inference in sensory representations in early visual cortex (Nienborg, Macke, Wichmann)
- Large-scale neuronal interactions during natural vision (Siegel)
- Integration of bottom-up and top-down processing in perceptual learning during sleep (Rauss, Nienborg)
- Natural dynamic scene processing in the human brain (Bartels, Black)
- Natural stimuli for mice: environment statistics and neural representations in the early visual system (Busse, Schaeffel, Euler)
- Stable vision in the presence of fixational eye movements: where and how is the retinal image perceptually stabilized? (Schaeffel, Hafed)