Publication Title

Proceedings of the National Academy of Sciences of the United States of America


Department of Psychological and Brain Sciences


In making sense of the visual world, the brain's processing is driven by two factors: the physical information provided by the eyes (“bottom-up” data) and the expectancies shaped by past experience (“top-down” influences). We use degraded stimuli to tease apart bottom-up and top-down effects because such stimuli are far easier to recognize when the observer has prior knowledge of the undegraded images. Using machine learning algorithms, we quantify how much information brain regions contain about the stimuli as the subject learns the coherent images. Our results show that several distinct regions, including high-level visual areas and the retinotopic cortex, contain more information about degraded stimuli when prior knowledge is available. Critically, these regions are separate from those that exhibit classical priming, indicating that top-down influences involve more than feature-based attention. Together, our results show how the neural processing of complex imagery is rapidly influenced by fleeting experiences.