Neural dynamics of visual and semantic object processing


Recognizing objects involves the processing of visual properties and the activation of semantic information. This process is known to rely on the ventral visual pathway extending into the anterior and medial temporal lobes. Building on this established neural architecture, I argue that we need dynamic accounts that can explain the speed of recognition and incorporate both feedforward and recurrent processing. To explain recognition, we need explicit models of visual and semantic processing, situated at the level of individual objects, and methods to apply such models to time-resolved neuroimaging data. Here, I outline a computational and cognitive approach to modeling the incremental visual and semantic properties of objects with a neural network, before providing an account of how we access meaning from visual inputs over time. I argue that an early phase of processing extracts coarse meaning from visual properties, before long-range recurrent processing dynamics enable the formation of more specific conceptual representations beyond 150 ms. Multiple sources of evidence underscore the importance of feedback for detailed conceptual representations, with connectivity between anterior and posterior temporal regions playing a major role. Finally, I discuss how the nature of the task impacts these processing dynamics, and consider the role environmental context could play.