Human amygdala is activated by social and emotional information

It has been a widely held view in cognitive neuroscience that the human amygdala is primarily involved in fast processing of fear-eliciting stimuli. Recent studies have, however, suggested a broader role for the amygdala, including sensitivity to stimulus relevance, that is, to emotionally and socially significant information. A careful experimental test of how these factors modulate amygdala responsiveness had not been carried out previously.

In their recent study, Vrticka et al. (2013) used a two-by-two factorial design to probe the sensitivity of the amygdala to emotional valence (positive vs. negative) and social content (social vs. non-social) in pictorial stimuli. Nineteen healthy female volunteers were presented with 240 pictures from the International Affective Picture System during functional magnetic resonance imaging. In the first condition, emotionally neutral pictures with social vs. non-social content were shown with a task of rating the photographic quality of the pictures, serving as a non-emotional baseline. In three further viewing conditions, emotional images were presented under three types of tasks, across which the analysis was finally collapsed.

The results indicate that the human amygdala responds more robustly to social than to non-social information, even for neutral stimuli. Further, there was an overall effect of higher sensitivity to negative as compared with positive images, and this valence effect was largely driven by increased responsiveness to negative information in non-social scenes. This suggests that social stimuli need not be negative in order to be processed by the amygdala as behaviorally relevant.

This interaction between valence and social content was more pronounced in the right amygdala and was modulated by the subjects' trait anxiety scores. Similar interactions were noted in the right fusiform gyrus, right anterior superior temporal sulcus, and medial orbitofrontal cortex, suggesting that these areas form a network that detects stimulus relevance in humans. Overall, these results significantly advance knowledge of the role of the amygdala in processing social and emotional aspects of pictorial stimuli, and of how amygdala activation is modulated by the personality traits of the experimental subjects.
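The valence-by-social-content interaction reported here amounts to a standard two-by-two factorial contrast. As a minimal illustration (the response values below are hypothetical, not the authors' data):

```python
import numpy as np

# Hypothetical mean amygdala responses (arbitrary units) for one subject;
# rows = valence (positive, negative), columns = content (social, non-social).
responses = np.array([[1.2, 0.8],
                      [1.4, 1.5]])

# Main effect of valence: negative minus positive, averaged over content.
valence_effect = responses[1].mean() - responses[0].mean()

# Interaction contrast: the valence effect in non-social scenes
# minus the valence effect in social scenes.
interaction = (responses[1, 1] - responses[0, 1]) - (responses[1, 0] - responses[0, 0])

print(valence_effect, interaction)  # here: 0.45 and 0.5
```

A positive interaction term of this form corresponds to the reported pattern, where the negative-vs-positive difference is larger for non-social than for social scenes.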

Reference: Vrticka P, Sander D, Vuilleumier P. Lateralized interactive social content and valence processing within the human amygdala. Frontiers in Human Neuroscience (2013) 6:358. http://dx.doi.org/10.3389/fnhum.2012.00358


Cortical areas tracking the speech amplitude envelope revealed by electrocorticographic recordings

Research on the auditory features that contribute to speech comprehension has highlighted the importance of temporal cues, especially the slow amplitude envelope. Degrading the speech amplitude envelope impairs recognition of speech sounds and comprehensibility of sentences; conversely, speech comprehension remains relatively preserved even when the frequency content is disrupted, as long as the amplitude envelope is intact. While previous studies have shown that electrical potentials generated by neural activity during listening to speech correlate with the speech envelope, and that the degree of this correlation predicts speech comprehensibility, it has not been determined precisely which cortical regions track the speech envelope.

In their recent study, Kubanek et al. (2013) recorded electrical activity intracranially with electrode grids placed on the left-hemisphere cortical surface in five epileptic patients undergoing pre-surgical mapping of seizure foci. During the recordings, the subjects listened to four short stories narrated by a male voice. The stimulus amplitude envelope was computed as the sound power in consecutive 50-ms time windows, as was the power of high-frequency gamma activity (75-115 Hz) for each recording electrode. The results show that the speech amplitude envelope is most faithfully tracked by the non-primary auditory cortex surrounding the primary auditory cortex within the confines of Heschl's gyrus, and that the gamma-band signal in these recordings correlates best with the speech envelope. Using non-speech control stimuli, the authors further demonstrated that although the non-primary auditory cortical areas also track the amplitude envelope of melody, they track the speech amplitude envelope more specifically. Higher-order structures (superior temporal gyrus and inferior frontal cortex), in contrast, tracked the speech amplitude envelope more weakly but, at the same time, even more specifically than the non-primary auditory cortical areas.
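The analysis pipeline described above (sound power and gamma-band power in consecutive 50-ms windows, then correlation between the two) can be sketched as follows. The synthetic signals and all parameter choices besides the 50-ms window and the 75-115 Hz band are illustrative assumptions, not the authors' recordings:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 1000   # sampling rate (Hz), assumed for this sketch
win_ms = 50 # window length in ms, as in the study

rng = np.random.default_rng(0)

# Synthetic stand-ins: a slow "speech envelope", a noise carrier modulated
# by it, and a neural signal whose 95-Hz gamma amplitude follows the envelope.
t = np.arange(0, 10, 1 / fs)
envelope = 1.5 + np.sin(2 * np.pi * 0.5 * t)
speech = envelope * rng.standard_normal(t.size)
neural = envelope * np.sin(2 * np.pi * 95 * t) + 0.1 * rng.standard_normal(t.size)

def windowed_power(x, fs, win_ms):
    """Mean power in consecutive non-overlapping windows."""
    n = int(fs * win_ms / 1000)
    usable = (x.size // n) * n
    return (x[:usable] ** 2).reshape(-1, n).mean(axis=1)

# Speech amplitude envelope: sound power in consecutive 50-ms windows.
speech_env = windowed_power(speech, fs, win_ms)

# Gamma-band (75-115 Hz) power of the neural signal in the same windows.
sos = butter(4, [75, 115], btype="bandpass", fs=fs, output="sos")
gamma_power = windowed_power(sosfiltfilt(sos, neural), fs, win_ms)

# Envelope tracking: correlation between the two power time courses.
r = np.corrcoef(speech_env, gamma_power)[0, 1]
print(f"envelope-gamma correlation r = {r:.2f}")
```

In the actual study this correlation was computed per electrode, and the electrodes with the strongest correlations clustered over non-primary auditory cortex.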

These findings provide important new knowledge of the cortical mechanisms underlying processing of the speech amplitude envelope. The results reveal the cortical areas that track the envelope, and further suggest a processing hierarchy in which the non-primary auditory cortical areas track the envelope of speech more robustly but less speech-specifically than the superior temporal gyrus and posterior inferior frontal gyrus (i.e., Broca's area), which are considered higher-order language areas.

Reference: Kubanek J, Brunner P, Gunduz A, Poeppel D, Schalk G. The tracking of speech envelope in the human cortex. PLoS ONE (2013) 8: e53398. http://dx.doi.org/10.1371/journal.pone.0053398  


Three cerebral networks integrate linguistic information with global and local contextual cues

Outside of laboratory settings, one rarely encounters situations where one has to understand discourse without contextual information. When such instances do occur, comprehension is often difficult until one gets clues about the relevant context; on the other hand, with the appropriate context provided, it is often very easy to predict what another person is about to say, even if the information provided is per se rather limited. Contextual cues can further be broken down into local and global: when listening to narrated stories or discourse, the one or two preceding sentences are thought to provide the local context, whereas the preceding paragraph is typically thought to provide the global context that guides comprehension. The underlying neural mechanisms have, however, remained largely unexplored, with some cognitive theories speculating that the availability of local and global contextual information in working memory is the determining factor.

In their recent study, Egidi and Caramazza (2013) combined behavioral measures and functional magnetic resonance imaging in healthy volunteers to probe the neural structures supporting integration of narrative information with local and global context. They used short stories whose endings were consistent vs. inconsistent with the global vs. local context, set up by distally vs. proximally preceding sentences. Thirty subjects first participated in a self-paced reading task in which they were instructed to move on to the next sentence only once they had comprehended the one at hand. Reading times showed a robust interaction: subjects read story endings faster when they were consistent with the local context, but slower when they were inconsistent with the global context, and vice versa. Fourteen subjects took part in the functional magnetic resonance imaging study, which disclosed the involvement of three different networks of brain areas: one comprising superior parietal areas and the intraparietal sulcus, associated with integration of story endings with both local and global contextual information; a second consisting of the supramarginal gyrus, superior parietal lobule, and anterior intraparietal sulcus, sensitive to the availability of global context; and a third comprising multiple areas, associated with the fluency of processing given the local context.

These results are exciting in that they illuminate how global and local contextual information is integrated by the brain to facilitate comprehension. The authors' setup and findings open interesting possibilities for further neuroimaging research on this topic, which addresses one of the most fundamental questions concerning human language comprehension. After all, language comprehension is a process that to a large degree relies on (and, in the case of misinterpretations, is biased by) preceding contextual information. Understanding the underlying neural mechanisms provides important insights into how contextually driven language comprehension is possible.

Reference: Egidi G, Caramazza A. Cortical systems for local and global integration in discourse comprehension. NeuroImage (2013), advance online publication. http://dx.doi.org/10.1016/j.neuroimage.2013.01.003


Correlation of children's right intraparietal sulcus hemodynamic activity during free viewing of Sesame Street with that of adults predicts their cognitive maturation

The intricate relationship between brain maturation and cognitive development in children has previously been investigated by measuring brain hemodynamic activity with functional magnetic resonance imaging during the performance of a variety of mathematical and conceptual tasks. While these studies have produced a vast amount of knowledge on the maturation of cognitive abilities, it has remained largely an open question how the development of cognitive abilities shapes the way the developing brain responds to naturalistic stimuli such as movies; for example, whether school-based knowledge of mathematics shapes the way children process educational videos involving math problems.

To answer this question, Cantlon and Li (2013) presented children and adults with an episode of the educational TV program Sesame Street during functional magnetic resonance imaging. They then correlated the hemodynamic data of individual children with those of the adult participants to derive an index of brain maturation. Their results show, in a very convincing manner, that increasing similarity between the hemodynamic responses of children and adults under natural viewing conditions predicts the degree to which mathematical ability has developed in the children. The right intraparietal sulcus was especially implicated by their analyses as a brain structure wherein the maturation (i.e., adult-likeness) of functional brain activity predicts school-based mathematical ability. They further verified the involvement of this region in numerical processing using a more conventional functional magnetic resonance imaging experiment; interestingly, however, only the measures from the natural viewing paradigm significantly predicted the children's school-based mathematical performance.

These exciting findings further demonstrate the complementary nature of natural viewing paradigms, which are becoming increasingly frequent in cognitive neuroscience. The approach, in which an index of functional brain maturity is derived in a straightforward and model-free manner by correlating regional hemodynamic response time courses between child and adult participants during free viewing of movie clips, can easily be foreseen to pave the way for further studies of neurocognitive development.
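A minimal sketch of such a model-free maturity index, assuming one regional time course per subject and synthetic data in place of real recordings:

```python
import numpy as np

rng = np.random.default_rng(1)
n_timepoints = 200  # hypothetical number of fMRI volumes
n_adults = 10       # hypothetical adult group size

# Hypothetical regional time courses during movie viewing: each adult shows
# a shared stimulus-driven response plus individual noise.
adult_signal = rng.standard_normal(n_timepoints)
adults = adult_signal + 0.5 * rng.standard_normal((n_adults, n_timepoints))

def neural_maturity(child_ts, adult_group):
    """Mean correlation of one child's regional time course with each adult's."""
    return np.mean([np.corrcoef(child_ts, a)[0, 1] for a in adult_group])

# A "mature" child tracks the shared adult response; an "immature" one does not.
mature_child = adult_signal + 0.7 * rng.standard_normal(n_timepoints)
immature_child = rng.standard_normal(n_timepoints)

print(neural_maturity(mature_child, adults), neural_maturity(immature_child, adults))
```

The index is model-free in the sense that no task model or hemodynamic regressor is specified; adult-likeness of the response time course itself is the measure, which is what makes the approach applicable to free viewing.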

Reference: Cantlon JF, Li R. Neural activity during natural viewing of Sesame Street statistically predicts test scores in early childhood. PLoS Biology (2013) 11: e1001462. http://dx.doi.org/10.1371/journal.pbio.1001462