Use of three-dimensional movies with surround sound as stimuli during functional magnetic resonance imaging
Naturalistic stimuli such as movies are increasingly used in cognitive neuroimaging studies. One of the advantages offered by movies is that they make it possible to test the ecological validity of predictions based on research with more artificial stimuli. Challenges in data analysis due to the inherent complexity of movie stimuli have been elegantly handled by the development of novel data analysis methods, including decomposition of the movie stimulus into a set of relevant stimulus time courses that are then used as predictors in data analysis. One aspect that has not been tested in neuroimaging settings, however, is the use of three-dimensional movies with surround sound.
In their recent study, Ogawa et al. (2013) presented healthy volunteers with alternating 2D and 3D movie clips with vs. without surround sound during functional magnetic resonance imaging. The surround sound was generated with a custom-built MR-compatible piezoelectric speaker array. Data analysis was carried out both by contrasting the blocked conditions (3D with surround sound, 3D without surround sound, 2D with surround sound, and 2D without surround sound) and by using time courses of the degree of binocular disparity and the number of sound sources as predictors of brain hemodynamic activity.
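The logic of such a computation-based analysis can be illustrated with a toy general linear model: stimulus feature time courses are convolved with a hemodynamic response function and fitted to each voxel's signal by least squares. The sketch below uses invented disparity and sound-source time courses and a simple double-gamma HRF; it is a minimal illustration of the approach, not the authors' actual pipeline.

```python
import numpy as np
from math import gamma as gamma_fn

def gamma_pdf(t, shape, scale):
    """Gamma probability density, zero for t <= 0."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    pos = t > 0
    out[pos] = (t[pos] ** (shape - 1) * np.exp(-t[pos] / scale)
                / (gamma_fn(shape) * scale ** shape))
    return out

def double_gamma_hrf(t):
    # Simple canonical-style HRF: positive peak near 5 s, undershoot near 15 s
    return gamma_pdf(t, 6, 1.0) - gamma_pdf(t, 16, 1.0) / 6.0

# Hypothetical stimulus time courses sampled once per volume (assumed TR = 2 s)
tr, n_vols = 2.0, 150
rng = np.random.default_rng(0)
disparity = np.abs(rng.standard_normal(n_vols))       # e.g. mean |binocular disparity|
n_sources = rng.integers(0, 5, n_vols).astype(float)  # e.g. number of concurrent sound sources

# Convolve each stimulus feature with the HRF to obtain BOLD predictors
h = double_gamma_hrf(np.arange(0, 32, tr))
X = np.column_stack([
    np.convolve(disparity, h)[:n_vols],
    np.convolve(n_sources, h)[:n_vols],
    np.ones(n_vols),                      # intercept
])

# Simulate one voxel's signal and recover the feature weights (the GLM fit)
true_beta = np.array([1.5, 0.8, 100.0])
y = X @ true_beta + 0.1 * rng.standard_normal(n_vols)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
```

In a real analysis, `beta` (and its statistics) would be computed for every voxel, yielding maps of where disparity or auditory-scene complexity predicts the hemodynamic signal.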
The authors observed that brain hemodynamic activity was predicted by absolute visual disparity in dorsal occipital and posterior parietal areas, and by visual disparity gradients in posterior aspects of the middle temporal gyrus as well as the inferior frontal gyrus. The complexity of the auditory space was associated with hemodynamic activity in specific areas of the superior temporal gyrus and middle temporal gyrus. These results are exciting in their own right; furthermore, given that 3D and surround-sound effects are known to increase the immersiveness of movies, this study represents an important step forward by demonstrating the feasibility of using 3D movies with surround sound during functional magnetic resonance imaging.
Reference: Ogawa A, Bordier C, Macaluso E. Audio-visual perception of 3D cinematography: an fMRI study using condition-based and computation-based analyses. PLoS ONE (2013) 8: e76003. http://dx.doi.org/10.1371/journal.pone.0076003
Providing a sense of touch via intracortical microstimulation of somatosensory cortex from a prosthetic limb
Research on brain–computer interfaces has shown amazing progress over the past decade, with non-human primate studies showing that monkeys can even learn to guide an artificial arm based on neural signals recorded from motor cortical areas. As such, this line of research holds great promise for patients who have lost a limb or are suffering from paralysis due to spinal cord injury. One critical aspect that has been lacking in this exciting area of research is the question of how somatosensory feedback could be provided from the prosthetic arm to the brain. This is important given that somatosensory feedback is a prerequisite for dexterous manipulation of objects, and given that the sense of touch is important for embodied sensation (i.e., the feeling that the limb is part of oneself) as well as for emotional-social communication.
In their recent study, Tabot et al. (2013) compared the ability of monkeys to carry out somatosensory discrimination tasks based on endogenous vs. artificial somatosensory feedback provided through a native vs. prosthetic finger. Somatosensory stimulation was experimentally varied to find a set of parameters that could be used to guide manipulation of objects by the monkeys. The results suggest that 1) intracortical microstimulation of somatosensory cortex elicits spatially localized percepts consistent with the somatotopic organization of somatosensory cortex, 2) the magnitude of the percept seems to depend on the magnitude of the microstimulation, and 3) phasic stimulation can be utilized to convey information about the initial contact with an object. Based on these findings, the authors envision how microstimulation of the somatosensory cortex from a prosthetic limb could be used to provide a sense of touch to human patients with an artificial limb.
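As a loose illustration of principles 2) and 3), the sketch below maps a prosthetic fingertip's force signal to a microstimulation amplitude: a tonic component scales with force magnitude, and a brief phasic burst marks contact onset. The function, its name, and all parameter values are invented for illustration and do not reproduce the authors' stimulation protocol.

```python
import numpy as np

def encode_icms(force, fs=1000.0, base_amp=20.0, gain=60.0, max_amp=100.0,
                onset_amp=100.0, onset_ms=50.0, contact_thresh=0.05):
    """Hypothetical force-to-stimulation encoder (amplitudes in microamps,
    force normalized to [0, 1]; all parameters are illustrative)."""
    force = np.clip(np.asarray(force, dtype=float), 0.0, 1.0)
    contact = force > contact_thresh
    # Tonic component: stimulation amplitude tracks contact force magnitude
    amp = np.where(contact, base_amp + gain * force, 0.0)
    # Phasic component: boost amplitude briefly at each contact onset
    onsets = np.flatnonzero(contact & ~np.roll(contact, 1))
    onsets = onsets[onsets > 0]  # discard spurious wrap-around from roll
    win = int(onset_ms * fs / 1000.0)
    for i in onsets:
        amp[i:i + win] = np.maximum(amp[i:i + win], onset_amp)
    return np.clip(amp, 0.0, max_amp)

# Example: the finger presses an object at t = 0.2 s and releases at t = 0.6 s
fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
force = np.where((t >= 0.2) & (t < 0.6), 0.5, 0.0)
amp = encode_icms(force, fs=fs)
```

With these made-up parameters, contact onset produces a 50 ms burst at the ceiling amplitude, after which the amplitude settles to a tonic level proportional to the grip force.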
Reference: Tabot GA, Dammann JF, Berg JA, Tenore FV, Boback JL, Vogelstein RJ, Bensmaia SJ. Restoring the sense of touch with a prosthetic hand through a brain interface. Proc Natl Acad Sci USA (2013) e-publication ahead of print. http://dx.doi.org/10.1073/pnas.1221113110
Direct causal evidence for auditory cortical "what" and "where" processing streams provided by transcranial magnetic stimulation
Since the initial observations in animal models, there has been accumulating evidence suggesting that sound identity and location information are processed in parallel anterior and posterior auditory-cortex streams in humans. Human neuroimaging evidence has, however, not been indisputable, since posterior auditory cortical areas have been observed to be sensitive also to features other than auditory space. Furthermore, while neuroimaging findings are beyond any doubt highly informative, they cannot per se provide causal evidence for the involvement of anterior and posterior auditory cortical areas in the processing of “what” and “where” auditory information. Transcranial magnetic stimulation guided by magnetic resonance imaging is a method that, by making it possible to transiently deactivate specific cortical areas, allows causal testing of the involvement of cortical regions in task performance.
In their recent study, Ahveninen et al. (2013) transiently inhibited bilateral anterior and posterior auditory cortical areas in healthy volunteers while they were performing sound localization and sound identity discrimination tasks. The transient inhibition was accomplished with paired-pulse transcranial magnetic stimulation guided by magnetic resonance imaging, with the pulses delivered 55-145 ms after the to-be-discriminated auditory stimuli. The anatomical areas targeted by the transcranial magnetic stimulation were further confirmed with individual-level cortical electric field estimates. Transient inhibition of posterior auditory cortical regions delayed reaction times significantly more during sound location than sound identity discrimination. In contrast, transient inhibition of anterior auditory cortical regions delayed reaction times significantly more during sound identity than sound location discrimination.
These highly exciting findings provide direct causal evidence in support of parallel auditory-cortex “what” vs. “where” processing pathways in humans. These results not only help clarify the still-debated issue of whether the posterior human auditory cortex participates in auditory space processing, but also demonstrate the feasibility of using paired-pulse transcranial magnetic stimulation to target cortical areas that are located very close to one another. The introduction of methods that allow precise estimation of the cortical targets of transcranial magnetic stimulation also provides an important methodological advance.
Reference: Ahveninen J, Huang S, Nummenmaa A, Belliveau JW, Hung A-Y, Jaaskelainen IP, Rauschecker JP, Rossi S, Tiitinen H, Raij T. Evidence for distinct auditory cortex regions for sound location versus identity processing. Nature Communications (2013) 4: 2585. http://dx.doi.org/10.1038/ncomms3585
Brain regions involved in processing gestural, facial, and actor-orientation cues in short video clips revealed by functional MRI
How is the human brain able to process social gestures so quickly and with (seemingly) so little effort? This is one of the most pivotal questions when attempting to understand the neural basis of social cognition. It is a very important area of research given that social skills are what make humans an inherently social species, and further because deficits in social cognition in certain clinical conditions are highly handicapping to afflicted individuals. Neuroimaging studies of the neural basis of social cognition have been rapidly increasing in number, but there have been relatively few studies in which several social cues (e.g., gestures, facial expressions, orientation of social gestures towards vs. away from the subject) have been included in the same study design.
In their recent study, Saggar et al. (2013) presented short 2-s video clips depicting social vs. non-social gestures, oriented away from vs. towards the subjects, and with the face occluded (blurred) vs. clearly visible, during functional magnetic resonance imaging. The authors observed enhanced hemodynamic activity in the amygdala and in brain areas relevant for theory of mind when contrasting social vs. non-social gestures. Activity in the lateral occipital cortex and precentral gyrus was further observed when comparing responses elicited by gestures oriented towards vs. away from the subjects. Visibility of facial gestures in turn modulated activity in the posterior superior temporal sulcus and fusiform gyrus. Taken together, these highly interesting findings shed light on how multiple social cues that signal information about the intentions of other persons are processed in the human brain, and pave the way for clinical research in patient groups with social cognition deficits.
Reference: Saggar M, Shelly EW, Lepage J-F, Hoeft F, Reiss AL. Revealing the neural networks associated with processing of natural social interaction and the related effects of actor-orientation and face-visibility. Neuroimage (2013) e-publication ahead of print. http://dx.doi.org/10.1016/j.neuroimage.2013.09.046