With recent advances in neuroimaging and data-analysis methods, it has become possible to map even the neural basis of semantic concepts. Studies have shown that distinct object categories, such as faces and outdoor scenes, are differentially represented in the human brain. Moreover, one of the most profound observations from early reaction-time studies on the processing of semantic concepts (and categories) is that semantically similar words produce the largest priming effects, as if semantically similar concepts were represented close to one another in a “semantic space”, so that spreading activation facilitates processing of concepts related to preceding ones. However, it has not been empirically shown whether such a semantic space, in which semantically similar concepts are represented close to one another along a gradient or continuum, can be found on the human cortical surface.
In their recent study, Huth et al. (2012) presented healthy volunteers with feature films during 3-Tesla functional magnetic resonance imaging. They then derived a large set of object, action, and higher-category names (1,705 altogether) based on the WordNet lexicon and labeled the movies so that a time course was obtained for the presence of each concept in the movies. These concept time courses were then regressed against the brain hemodynamic responses recorded during movie watching, and the results suggested that there indeed is a continuous semantic space on the human cortex. These results provide highly exciting novel information on how concepts are mapped in the human brain, and overall the study presents a new methodological approach that offers exciting possibilities for further studies on the neural basis of language, one of the most fundamental human cognitive functions.
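The core logic of this kind of encoding-model analysis can be illustrated with a toy sketch. All sizes, the gamma-shaped hemodynamic response function, and the ridge penalty below are illustrative assumptions, not the study's actual implementation: concept indicator time courses are convolved with a hemodynamic response function and regressed against each voxel's BOLD signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical toy dimensions (the actual study used 1,705 WordNet
# concepts and whole-brain fMRI data) ---
n_timepoints, n_concepts, n_voxels = 200, 12, 50

# Binary concept time courses: 1 when a concept is present in the movie.
concepts = (rng.random((n_timepoints, n_concepts)) < 0.1).astype(float)

# A simple gamma-shaped hemodynamic response function (HRF); the real
# analysis used more elaborate temporal modeling.
t = np.arange(0, 20)
hrf = (t ** 5) * np.exp(-t)
hrf /= hrf.sum()

# Convolve each concept time course with the HRF to predict BOLD responses.
X = np.column_stack(
    [np.convolve(concepts[:, j], hrf)[:n_timepoints] for j in range(n_concepts)]
)

# Simulated voxel responses: each voxel is tuned to a random mix of concepts.
true_weights = rng.normal(size=(n_concepts, n_voxels))
Y = X @ true_weights + 0.1 * rng.normal(size=(n_timepoints, n_voxels))

# Ridge regression for all voxels at once: W = (X'X + aI)^-1 X'Y.
alpha = 1.0
W = np.linalg.solve(X.T @ X + alpha * np.eye(n_concepts), X.T @ Y)

# Each voxel's fitted weight vector describes its "semantic tuning";
# the study then examined how these tuning profiles vary across cortex.
print(W.shape)  # (12, 50): one weight per concept per voxel
```

In the real analysis the fitted weights, not the toy recovery here, are the object of interest: their smooth variation across neighboring voxels is what supports the claim of a continuous semantic map.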
Reference: Huth AG, Nishimoto S, Vu AT, Gallant JL. A continuous semantic space describes the representation of thousands of object and action categories across the human brain. Neuron (2012) 76, 1210–1224. http://dx.doi.org/10.1016/j.neuron.2012.10.014
Computational-model-derived visual and auditory saliency helps decode functional neuroimaging data obtained under naturalistic stimulation conditions
The use of naturalistic, ecologically valid stimuli, such as feature films, in neuroimaging studies is becoming one of the most exciting areas of cognitive neuroscience. Indeed, a vast body of knowledge about the neural basis of perceptual and cognitive functions has been acquired in experiments where highly controlled, artificial stimuli were repetitively presented to experimental subjects and the relevant cerebral responses were isolated by means of trial averaging. This knowledge, together with recent advances in neuroimaging and data-analysis methods, has laid a strong foundation for efforts towards the use of complex, real-life-like stimuli such as movies.
To date, model-free analysis methods such as inter-subject correlation and independent component analysis have been successfully used to disclose brain activity related to specific events in movies. Model-based approaches, such as the general linear model and multi-voxel pattern analyses, have also been used, with stimulus features contained in the movie clips and the subjective experiences of the experimental subjects serving as predictors and training data, respectively. However, biologically motivated computational models, built on quantitative knowledge obtained from studies using simple, artificial stimuli, have not to date been used to generate predictions about how the brain responds to naturalistic stimulation.
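As an illustration of the model-free end of this spectrum, inter-subject correlation simply asks, voxel by voxel, how similarly different subjects' brains respond over time to the same movie. A toy numpy sketch, in which subject counts, voxel counts, and signal strengths are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Toy data: 4 "subjects", 150 timepoints, 30 voxels (hypothetical sizes) ---
n_subjects, n_timepoints, n_voxels = 4, 150, 30

# A shared stimulus-driven signal drives the first 10 voxels of every subject;
# the remaining voxels contain only subject-specific noise.
shared = rng.normal(size=(n_timepoints, 10))
data = []
for _ in range(n_subjects):
    d = rng.normal(size=(n_timepoints, n_voxels))
    d[:, :10] += 2.0 * shared
    data.append(d)
data = np.stack(data)  # (subjects, time, voxels)

def intersubject_correlation(data):
    """Mean pairwise Pearson correlation of each voxel's time course
    across subjects -- the basic model-free ISC statistic."""
    n_sub, _, n_vox = data.shape
    # z-score over time so dot products become correlations
    z = (data - data.mean(axis=1, keepdims=True)) / data.std(axis=1, keepdims=True)
    isc = np.zeros(n_vox)
    n_pairs = 0
    for i in range(n_sub):
        for j in range(i + 1, n_sub):
            isc += (z[i] * z[j]).mean(axis=0)
            n_pairs += 1
    return isc / n_pairs

isc = intersubject_correlation(data)
# Stimulus-driven voxels show high ISC; noise-only voxels hover near zero.
print(isc[:10].mean(), isc[10:].mean())
```

Note what the method does and does not deliver: it flags where responses are reliably stimulus-locked across people, but by itself says nothing about which stimulus feature drives them, which is exactly the gap the model-based approaches discussed above address.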
In their recent study, Cecile Bordier et al. (2012) derived visual saliency maps based on a combination of local discontinuities in intensity, color, orientation, motion, and flicker, and auditory saliency maps based on discontinuities in intensity, frequency contrast, (spectrotemporal) orientation, and temporal contrast. The saliency information was then used, together with the stimulus-feature time courses, as predictors in the analysis of functional magnetic resonance imaging data obtained from healthy volunteers as they watched manipulated (with color, motion, and sound switched on/off) and unmanipulated versions of an episode of the TV series “24”.
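The general logic of such saliency models, which compute local feature discontinuities as center-surround contrast, can be sketched in a few lines. The frame, filter sizes, and the crude box-filter stand-in below are toy assumptions rather than the authors' actual model, which combined many feature channels and scales:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy 64x64 grayscale frame with one high-contrast "object" on a dim background
frame = 0.1 * rng.random((64, 64))
frame[28:36, 28:36] = 1.0

def box_blur(img, k):
    """Crude separable box filter, standing in for the Gaussian
    pyramids used in full center-surround saliency models."""
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, out)

# Center-surround difference: fine-scale response minus coarse-scale response.
center = box_blur(frame, 3)
surround = box_blur(frame, 15)
saliency = np.abs(center - surround)

# Locally discontinuous regions (the object) pop out of the flat background.
print(saliency[30:34, 30:34].mean(), saliency[:10, :10].mean())
```

A time course of such frame-wise saliency maps, pooled over regions of interest, is the kind of predictor that can then enter a general linear model alongside the raw stimulus features.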
It was observed that while visual and auditory stimulus features per se predicted activity in visual and auditory cortical areas, visual saliency predicted hemodynamic responses in extrastriate visual areas and the posterior parietal cortex, and auditory saliency predicted activity in the superior temporal cortex. Notably, data-driven independent component analyses, while revealing sensory network components, would not have provided similar knowledge about the contributions of sensory features versus saliency to brain activity patterns.
These results are highly encouraging and pave the way for the use of biologically motivated computational models in forming predictors for the analysis of data obtained under naturalistic stimulation and task conditions. This approach significantly complements the previously used predictors (time courses of sensory stimulus features and subjective experiences) and adds to the rapidly growing toolkit available for neuroimaging studies in which real-life-like naturalistic stimuli and tasks are used to probe the neural basis of human perceptual and cognitive functions.
Reference: Bordier C, Puja F, Macaluso E. Sensory processing during viewing of cinematographic material: Computational modeling and functional neuroimaging. Neuroimage (2012) e-publication ahead of print. http://dx.doi.org/10.1016/j.neuroimage.2012.11.031
Electroencephalography can be recorded during ultra-high-field magnetic resonance imaging
Modern neuroimaging methods that enable measurement of brain function without opening the skull of the experimental subject are truly amazing. Multiple non-invasive neuroimaging methods are currently available; however, each of them is limited in terms of either spatial or temporal resolution. For instance, functional magnetic resonance imaging, while spatially accurate down to the millimeter scale, suffers from compromised temporal resolution. Conversely, electroencephalography is temporally highly accurate (~milliseconds), but due to the ill-posed electromagnetic inverse problem its spatial accuracy is rather limited. Computational methods exist that make it possible to combine the complementary information provided by the different methods, but ideally the measurements should be conducted simultaneously. However, recording electroencephalography during functional magnetic resonance imaging has been highly challenging because the strong magnetic fields introduce artifacts into the recorded signal.
In their recent study, Neuner et al. (2012) extended their previous work to test whether electroencephalography can be reliably recorded at an ultra-high 9.4-Tesla field strength. Their results indicate that artifacts due to cardiac activity (which induces slight movement of the subject and thus currents in the electrodes) increased in amplitude at 9.4 Tesla, but that it was still possible to measure meaningful and replicable electroencephalographic signals at this ultra-high field strength. The authors further demonstrate that independent component analysis is a useful method for separating artifacts from relevant electroencephalographic signals under these extremely challenging recording conditions. While these measurements were obtained under a static magnetic field, and the gradient switching that takes place during functional imaging does introduce additional artifacts, this demonstration is nonetheless promising, and there are ways to circumvent the disturbances caused by gradient switching, such as interleaved acquisition (see, for example, Bonmassar et al. 2002).
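A classic, complementary remedy for the cardiac (ballistocardiogram) artifact is averaged-artifact template subtraction: epoch the EEG around each detected heartbeat, average the epochs to estimate the stereotyped artifact waveform, and subtract that template at every beat. A toy single-channel numpy sketch, with entirely simulated signal, artifact shape, and heartbeat timing:

```python
import numpy as np

rng = np.random.default_rng(2)

# --- Simulated single-channel EEG (250 Hz sampling; all parameters toy) ---
fs = 250
intervals = rng.uniform(1.0, 1.4, size=40)            # jittered R-R intervals (s)
beat_samples = np.cumsum(intervals * fs).astype(int)  # heartbeat onset samples
n_samples = beat_samples[-1] + fs
t = np.arange(n_samples) / fs
eeg = 0.5 * np.sin(2 * np.pi * 10 * t)                # 10 Hz alpha-like rhythm
eeg += 0.1 * rng.normal(size=n_samples)               # measurement noise

# Stereotyped ballistocardiogram-like waveform added after every heartbeat
template_true = 3.0 * np.exp(-((np.arange(fs) - 50) ** 2) / 200.0)
contaminated = eeg.copy()
for b in beat_samples:
    contaminated[b:b + fs] += template_true

# Averaged-artifact subtraction: the EEG, being unrelated to heartbeat
# timing, averages out across epochs, leaving an estimate of the artifact.
epochs = np.stack([contaminated[b:b + fs] for b in beat_samples])
template_est = epochs.mean(axis=0)
cleaned = contaminated.copy()
for b in beat_samples:
    cleaned[b:b + fs] -= template_est

# Residual error is far smaller than the original contamination
print(np.abs(contaminated - eeg).max(), np.abs(cleaned - eeg).max())
```

The method's assumption, a stereotyped artifact waveform time-locked to detectable heartbeats, is exactly what becomes strained at ultra-high field, which is why data-driven separation methods such as independent component analysis are valuable there.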
References: Neuner I, Warbrick T, Arrubla J, Felder J, Celik A, Reske M, Boers F, Shah NJ. EEG acquisition in ultra-high static magnetic fields up to 9.4 T. Neuroimage (2012), online publication ahead of print. http://dx.doi.org/10.1016/j.neuroimage.2012.11.064
Bonmassar G, Purdon PP, Jaaskelainen IP, Solo V, Brown EN, Belliveau JW. Motion and ballistocardiogram artifact removal for interleaved recording of EEG and ERP during MRI. Neuroimage 16:1127-1141, 2002. http://dx.doi.org/10.1006/nimg.2002.1125
Enhanced synaptic plasticity four weeks after the birth of new neurons in the adult hippocampus supports memory
Even though the very first observations suggesting that the birth of new neurons, neurogenesis, takes place in the adult brain were documented as early as the 1960s, it was not until the early 1990s that converging lines of evidence confirmed this to be true. The functional role of neurogenesis has been less clear, and while results have been reported suggesting that hippocampal neurogenesis supports memory and learning, the underlying synaptic mechanisms have remained largely an open question.
In their recent study, Gu et al. (2012), by combining retroviral and optogenetic methods in adult mice, determined the time of birth of hippocampal dentate granule cells, tracked the timescale over which these cells formed functional synaptic connections, and tested whether reversible inactivation of these cells adversely impacted memory performance. They observed that adult-born neurons formed functional synapses onto area CA3 neurons two weeks after birth and that these projections became stable at about four weeks after birth. Reversibly silencing these neurons at four weeks (but not two or eight weeks) after birth disrupted recall of a task that had been learned while the neurons were intact and part of the hippocampal circuitry, suggesting that there is a specific time window within which adult-born hippocampal neurons support memory.
These highly exciting results shed light on the synaptic mechanisms at play during neurogenesis and suggest that there is a specific time window during which newborn neurons become integrated into hippocampal neuronal circuits and support learning and subsequent recall of a task. The methods used by the authors constitute an astounding example of the possibilities that have become available for elucidating the functionality of neuronal networks at the level of individual neurons and synaptic connections.
Reference: Gu Y, Arruda-Carvalho M, Wang J, Janoschka SR, Josselyn SA, Frankland PW, Ge S. Optical controlling reveals time-dependent roles for adult-born dentate granule cells. Nature Neuroscience (2012) advance online publication. http://dx.doi.org/10.1038/nn.3260