12/22/2012

Computational-model-derived visual and auditory saliency helps decode functional neuroimaging data obtained under naturalistic stimulation conditions


The use of naturalistic, ecologically valid stimuli, such as feature films, in neuroimaging studies is becoming one of the most exciting areas of cognitive neuroscience. Indeed, a vast body of knowledge has been acquired about the neural basis of perceptual and cognitive functions in experiments where highly controlled, artificial stimuli have been repetitively presented to experimental subjects and the relevant cerebral responses have been isolated by means of trial averaging. This knowledge, together with recent advances in neuroimaging and data analysis methods, has laid a strong foundation for efforts toward the use of complex, real-life-like stimuli such as movies.

To date, model-free analysis methods such as inter-subject correlation and independent component analysis have been successfully used to reveal brain activity related to specific events in movies. Model-based approaches, such as the general linear model and multi-voxel pattern analysis, have also been applied, with stimulus features contained in the movie clips and subjective experiences of the experimental subjects serving as predictors and training data, respectively. However, biologically motivated computational models, built on quantitative knowledge obtained in studies using simple/artificial stimuli, have not so far been used to generate predictions about how the brain responds to naturalistic stimulation.
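
As an aside for readers less familiar with the model-free approach, below is a minimal Python sketch of voxel-wise inter-subject correlation. The array shapes and variable names are illustrative assumptions for the example, not the conventions of any particular analysis toolbox.

    # Minimal sketch of leave-one-out inter-subject correlation (ISC).
    import numpy as np

    def inter_subject_correlation(data):
        """data: (n_subjects, n_voxels, n_timepoints) fMRI time courses,
        assumed temporally aligned (all subjects watched the same movie).
        Returns (n_subjects, n_voxels) leave-one-out ISC values."""
        n_subjects, n_voxels, _ = data.shape
        isc = np.zeros((n_subjects, n_voxels))
        for s in range(n_subjects):
            # Average time courses of all other subjects (leave-one-out)
            others = data[np.arange(n_subjects) != s].mean(axis=0)
            for v in range(n_voxels):
                isc[s, v] = np.corrcoef(data[s, v], others[v])[0, 1]
        return isc

    # Synthetic check: 5 subjects, 100 voxels, 200 time points
    rng = np.random.default_rng(0)
    shared = rng.standard_normal((100, 200))             # stimulus-driven signal
    data = shared + rng.standard_normal((5, 100, 200))   # plus subject-specific noise
    print(inter_subject_correlation(data).mean())        # high where signal is shared

The leave-one-out average is a common design choice here because it asks how well one subject's response is predicted by the rest of the group, without requiring any stimulus model at all.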

In their recent study, Cecile Bordier et al. (2012) derived visual saliency maps based on a combination of local discontinuities in intensity, color, orientation, motion, and flicker, and auditory saliency maps based on discontinuities in intensity, frequency contrast, (spectrotemporal) orientation, and temporal contrast. The saliency information was then used, together with the stimulus-feature time courses, as predictors in the analysis of functional magnetic resonance imaging data obtained from healthy volunteers while they watched manipulated (with color, motion, and sound switched on/off) and unmanipulated versions of an episode of the TV series “24”.
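
To make the visual side of this concrete, here is a simplified Python sketch of the center-surround scheme behind Itti & Koch-style saliency models, restricted to the intensity channel. The scale parameters are illustrative assumptions; the full model described above additionally combines color, orientation, motion, and flicker channels before producing a single map.

    # Simplified single-channel (intensity) center-surround saliency sketch.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def intensity_saliency(frame, center_sigmas=(1, 2), surround_ratio=4):
        """frame: 2D grayscale image (H, W). Returns a same-shape saliency map
        built from center-surround differences at a few spatial scales."""
        frame = frame.astype(float)
        saliency = np.zeros_like(frame)
        for sigma in center_sigmas:
            center = gaussian_filter(frame, sigma)                     # fine scale
            surround = gaussian_filter(frame, sigma * surround_ratio)  # coarse surround
            saliency += np.abs(center - surround)                      # local discontinuity
        # Normalize to [0, 1] so maps from different channels could be combined
        saliency -= saliency.min()
        if saliency.max() > 0:
            saliency /= saliency.max()
        return saliency

    frame = np.random.rand(120, 160)          # stand-in for a movie frame
    print(intensity_saliency(frame).shape)    # (120, 160)

Per-frame maps computed this way can then be summarized (e.g., averaged) within each fMRI volume acquisition to yield a saliency time course usable as a regressor.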

It was observed that, while the visual and auditory stimulus features per se predicted activity in visual and auditory cortical areas, visual saliency predicted hemodynamic responses in extrastriate visual areas and the posterior parietal cortex, and auditory saliency predicted activity in the superior temporal cortex. Notably, data-driven independent component analyses, while revealing sensory network components, would not have provided similar knowledge about the relative contributions of sensory features vs. saliency to brain activity patterns.
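
For illustration, below is a minimal Python sketch of how such a saliency time course could enter a general linear model as a hemodynamic predictor. The double-gamma HRF parameters and repetition time are common defaults, assumed here rather than taken from the study.

    # Convolve a saliency time course with a canonical HRF and fit a GLM.
    import numpy as np
    from scipy.stats import gamma

    def double_gamma_hrf(tr=2.0, duration=30.0):
        t = np.arange(0, duration, tr)
        peak = gamma.pdf(t, 6)           # positive response peaking ~5-6 s
        undershoot = gamma.pdf(t, 16)    # later negative undershoot
        hrf = peak - 0.35 * undershoot
        return hrf / hrf.sum()

    def fit_glm(saliency, bold, tr=2.0):
        """saliency: (n_timepoints,) per-volume saliency values.
        bold: (n_timepoints, n_voxels) fMRI data. Returns per-voxel betas."""
        regressor = np.convolve(saliency, double_gamma_hrf(tr))[: len(saliency)]
        X = np.column_stack([regressor, np.ones_like(regressor)])  # predictor + intercept
        betas, *_ = np.linalg.lstsq(X, bold, rcond=None)
        return betas[0]  # weight on the saliency regressor, one per voxel

    # Synthetic check: voxels driven by the convolved saliency get large betas
    rng = np.random.default_rng(1)
    sal = rng.random(200)
    signal = np.convolve(sal, double_gamma_hrf())[:200]
    bold = np.outer(signal, [2.0, 0.0]) + 0.1 * rng.standard_normal((200, 2))
    print(fit_glm(sal, bold))  # roughly [2, 0]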

These results are highly encouraging and pave the way for the use of biologically motivated computational models in forming predictors for the analysis of data obtained under naturalistic stimulation and task conditions. This approach significantly complements the previously used predictors (time courses of sensory stimulus features and subjective experiences), and adds to the rapidly growing toolkit that one can use in neuroimaging studies where real-life-like naturalistic stimuli/tasks are used to probe the neural basis of human perceptual and cognitive functions.

Reference: Bordier C, Puja F, Macaluso E. Sensory processing during viewing of cinematographic material: Computational modeling and functional neuroimaging. NeuroImage (2012), e-publication ahead of print. http://dx.doi.org/10.1016/j.neuroimage.2012.11.031
