Early vision, natural images.

To understand visual perceptual deficits in schizophrenia, or to write better algorithms for computer-assisted vision, we need to understand how visual neurons respond to the cluttered natural environment. The neurons I am most interested in are in primary visual cortex (V1): how are these earliest cortical representations of visual information modulated by local scene structure, or even by the broader behavioral context?

(Figure: simulation of early visual responses to a picture.)

Our problem is that we know quite well how early visual neurons respond to isolated image features, but all sorts of new behaviors crop up when these features are overlapped to build a natural image. To make matters worse, early visual responses are modulated by perception and experience in ways that we are only beginning to understand. We therefore need to understand not only how juxtaposed features interact (and how these interactions result in different modulations of the blood flow and oxygenation measured with fMRI), but also how early visual responses are shaped by higher-level aspects of our perceptual experience, such as scene segmentation and object detection and recognition.

My approach is to combine high-resolution fMRI data with computational models of V1 neural networks (using behavioral data to constrain the models) to study spatial interactions in the V1 population code. Some of the most exciting projects we are currently working on study how the V1 response to the same stimulus changes when it is presented with different timing, or when it is perceived in a different way.
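As an illustration of the kind of spatial interaction such models capture, the sketch below implements a generic divisive-normalization model of contextual modulation, in which each unit's response is suppressed by a pool that includes its neighbors' drives. This is a textbook toy model, not the lab's actual model; the function name, parameter values, and the three-unit stimulus are all hypothetical choices made for the example.

```python
import numpy as np

def v1_response(drive, sigma=0.5, n=2.0, w_surround=0.3):
    """Toy divisive-normalization model of V1 contextual modulation.

    Each unit's linear drive is raised to a power n and divided by a
    normalization pool: a semi-saturation constant (sigma), the unit's
    own drive, and a weighted sum of the other units' drives. All
    parameter values here are illustrative, not fitted to data.
    """
    d = np.asarray(drive, dtype=float) ** n
    pool = sigma ** n + d + w_surround * (d.sum() - d)
    return d / pool

# A center unit driven alone responds more strongly than the same unit
# driven identically but flanked by surround stimulation: the surround
# enlarges the normalization pool and suppresses the center response.
center_alone = v1_response([1.0, 0.0, 0.0])[0]          # ~0.80
center_with_surround = v1_response([1.0, 1.0, 1.0])[0]  # ~0.54
print(center_alone, center_with_surround)
```

The suppressed response to the identical center stimulus when surround features are added is one simple account of why responses to overlapping features in natural images cannot be predicted from responses to isolated features.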

For details on current projects, see the Olman lab research page.

© 2008, C. A. Olman
date of last update: July 29, 2008