Binding crossmodal object features in perirhinal cortex.
Knowledge of objects in the world is stored in our brains as rich, multimodal representations. Because the neural pathways that process this diverse sensory information are largely anatomically distinct, a fundamental challenge for cognitive neuroscience is to explain how the brain binds the different sensory features that comprise an object into meaningful, multimodal object representations. Studies with nonhuman primates suggest that a structure at the culmination of the object recognition system (the perirhinal cortex) performs this critical function. In contrast, human neuroimaging studies implicate the posterior superior temporal sulcus (pSTS). The results of the functional MRI study reported here resolve this apparent discrepancy by demonstrating that both the pSTS and the perirhinal cortex contribute to crossmodal binding in humans, but in different ways. Significantly, only perirhinal cortex activity is modulated by meaning-based variables (e.g., semantic congruency and semantic category), suggesting that these two regions play complementary functional roles, with the pSTS acting as a presemantic, heteromodal region for crossmodal perceptual features and the perirhinal cortex integrating these features into higher-level conceptual representations. This interpretation is supported by the results of our behavioral study: patients with lesions that included the perirhinal cortex, but not patients with damage restricted to frontal cortex, were impaired on the same crossmodal integration task, and their performance was significantly influenced by the same semantic factors, mirroring the functional MRI findings. These results integrate the nonhuman and human primate literatures by providing converging evidence that the human perirhinal cortex is also critically involved in processing meaningful aspects of multimodal object representations.