12/30/2013

Stress-response mediated by repeated media exposure to collective traumatic events

In the modern world, information about traumatic events, such as earthquakes, major accidents, and terrorist strikes claiming innocent lives, spreads quickly. Further, the media provide repeated exposure to major catastrophic events in the form of newsfeeds that add new details about the events as they are uncovered. An inherently important question is whether media exposure can induce stress responses of a similar nature and magnitude as being at the site of the catastrophe. This question matters from at least three perspectives: 1) answering it provides important information about human cognition and emotional responses in the modern global information flow environment, 2) mental health professionals can better appreciate problems related to media exposure, and 3) the population at large is often the intended psychological target of terrorists who carry out acts of violence.

In their recent study, Holman et al. (2013) compared media vs. direct exposure to a collective trauma by carrying out a survey over the two weeks following the Boston Marathon bombings in representative samples of persons living in Boston, New York, and the rest of the United States. When the authors adjusted acute stress symptom scores for demographics, preceding mental health, and prior collective stress exposure, they observed that >6 hours of media exposure to the marathon bombing events during the week following the bombings was associated with higher acute stress symptoms than direct exposure to the bombings.
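
For readers curious about what such a covariate-adjusted comparison looks like in practice, below is a minimal Python sketch assuming a hypothetical survey table with made-up column names and synthetic values; it is not the authors' actual analysis.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 300
    df = pd.DataFrame({                                   # synthetic stand-in for the survey data
        "acute_stress": rng.normal(10, 4, n),
        "exposure": rng.choice(["direct", "media_6plus", "media_low"], n),
        "age": rng.integers(18, 80, n),
        "sex": rng.choice(["f", "m"], n),
        "prior_mental_health": rng.normal(0, 1, n),
        "prior_collective_stress": rng.normal(0, 1, n),
    })

    model = smf.ols(
        "acute_stress ~ C(exposure, Treatment(reference='direct'))"
        " + age + C(sex) + prior_mental_health + prior_collective_stress",
        data=df,
    ).fit()
    # The coefficient for the media_6plus level estimates the covariate-adjusted
    # difference in acute stress relative to direct exposure.
    print(model.summary())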

These very interesting findings suggest that indirect exposure to traumatic events via repeated media coverage may produce even stronger stress responses than direct exposure to the event, a clear indication of how robustly prolonged and repeated media exposure can trigger stress-related mental health problems. As the authors point out, it should be kept in mind that emergency actions taken by the local authorities likely lessened distress among those directly exposed to the bombings. Mass media may thus inadvertently serve as a channel that spreads the psychological trauma far beyond the directly affected population. Beyond the effects of mass-media coverage, these results further suggest that repeatedly being told about a catastrophic event can trigger a stress response and produce symptoms of post-traumatic stress disorder without direct exposure, which may also be an important form of societal-cultural learning.


Reference: Holman EA, Garfin DR, Silver RC. Media’s role in broadcasting acute stress following the Boston Marathon bombings. Proc Natl Acad Sci USA (2013) e-publication ahead of print. http://dx.doi.org/10.1073/pnas.1316265110

12/22/2013

Increased functional brain network modularity predicts working memory deficits in early-stage multiple sclerosis

Multiple sclerosis is a neurological disorder in which, due to inflammatory processes, focal demyelination and axonal damage progressively sever the anatomical connections of the brain. Recent neuroimaging studies and theoretical work both point to the importance of inter-area connectivity and interactions in giving rise to perceptual and cognitive functions. Therefore, one of the crucial questions regarding multiple sclerosis is how the breakdown of anatomical connectivity reorganizes the brain's functional networks and how this impacts cognition.

In their recent study, Gamboa et al. (2013) recorded resting-state functional magnetic resonance imaging data from early-stage multiple sclerosis patients and healthy controls. As a measure of cognition, subjects in both groups separately performed the Paced Auditory Serial Addition Task in a dual-task manner to assess working memory, attention, and speed of information processing. Using graph theoretical analysis of brain functional connectivity, the authors observed increased modularity in the early-stage multiple sclerosis patients as compared with the healthy controls. Furthermore, the increased modularity of brain functional connectivity correlated negatively with performance in the neuropsychological test of working memory, attention, and speed of information processing.
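
As an illustration of the kind of graph theoretical analysis described above, here is a minimal Python sketch that computes network modularity from region-of-interest time series; the synthetic data, correlation threshold, and community-detection algorithm are illustrative assumptions, not the authors' pipeline.

    import numpy as np
    import networkx as nx
    from networkx.algorithms import community

    rng = np.random.default_rng(0)
    shared = rng.standard_normal((200, 2))                      # two shared "network" signals
    roi_ts = np.repeat(shared, 45, axis=1) + rng.standard_normal((200, 90))  # 90 ROI time series

    corr = np.corrcoef(roi_ts.T)                                # ROI-by-ROI correlation matrix
    np.fill_diagonal(corr, 0.0)
    G = nx.from_numpy_array((corr > 0.3).astype(int))           # binarize at an example threshold

    communities = community.greedy_modularity_communities(G)
    Q = community.modularity(G, communities)                    # higher Q = more modular network
    print(f"{len(communities)} modules, modularity Q = {Q:.3f}")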

These highly interesting findings demonstrate how the subtle changes in connectivity due to focal axonal damage in the earliest stages of multiple sclerosis alter the functional network properties of the brain, and how such changes in brain network activity adversely reflect upon cognitive ability. It is easy to see how these findings pave the way for further studies examining how accumulating focal damage to the links of the functional networks affects perceptual and cognitive functions in multiple sclerosis patients. Given that relatively robust effects were seen in these early-stage patients, these findings could also be interesting from the point of view of clinical research aiming at the development of measures that enable follow-up of disease progression.


Reference: Gamboa OL, Tagliazucchi E, von Wegner F, Jurcoane A, Wahl M, Laufs H, Ziemann U. Working memory performance of early MS patients correlates inversely with modularity increases in resting state functional connectivity networks. NeuroImage (2013) e-publication ahead of print. http://dx.doi.org/10.1016/j.neuroimage.2013.12.008

11/24/2013

Speech motor system may mediate visual information to auditory cortex during silent speech reading

While the sense of hearing is clearly the dominant channel for speech perception, humans are surprisingly good at reading the lips of their conversation partners, a phenomenon referred to as speech reading. Early psychophysics studies already demonstrated that this ability significantly enhances speech perception, especially when speech has to be perceived under noisy conditions. There is a fair amount of neuroimaging literature on the underlying neural mechanisms. In these studies, visual speech stimuli (i.e., articulatory gestures) have been reported to modulate auditory cortical processing, with some evidence pointing to the speech motor system being activated first by visual speech and then influencing auditory-cortical processing via an efference copy.

In their recent study, Chu et al. (2013) studied the neural basis of speech reading by presenting 19 healthy volunteers with silent video clips of a person articulating vowels during event-related functional magnetic resonance imaging (fMRI). Speech reading activated a wide range of occipital, temporal, and prefrontal cortical areas. The authors used structural equation modeling to estimate information flow between the activated areas during speech reading. The results suggested parallel information flow from extrastriate areas to anterior prefrontal areas and, further, feedback information flow from the anterior prefrontal areas to posterior superior temporal lobe auditory areas. These effective connectivity estimates thus support the model wherein speech reading influences auditory-cortical areas via prefrontal speech motor areas, possibly in the form of an efference copy that might facilitate speech perception.
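
To make the logic of such an effective connectivity analysis concrete, the toy sketch below estimates simple directed path coefficients between synthetic regional time series with ordinary least squares; the actual study used structural equation modeling, and the region names and data here are placeholders.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    extrastriate = rng.standard_normal(300)                          # synthetic regional time series
    prefrontal = 0.6 * extrastriate + rng.standard_normal(300)
    pstg = 0.5 * prefrontal + rng.standard_normal(300)

    # Path 1: extrastriate -> prefrontal (speech motor) areas
    b_vis_to_pfc = sm.OLS(prefrontal, sm.add_constant(extrastriate)).fit().params[1]
    # Path 2: prefrontal areas -> posterior superior temporal (auditory) cortex
    b_pfc_to_pstg = sm.OLS(pstg, sm.add_constant(prefrontal)).fit().params[1]

    print(f"extrastriate -> prefrontal path coefficient: {b_vis_to_pfc:.2f}")
    print(f"prefrontal -> pSTG path coefficient:         {b_pfc_to_pstg:.2f}")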


Reference: Chu Y-H, Lin F-H, Chou Y-J, Tsai K W-K, Kuo W-J, Jaaskelainen IP. Effective cerebral connectivity during silent speech reading revealed by functional magnetic resonance imaging. PLoS ONE (2013) 8: e80265. http://dx.doi.org/10.1371/journal.pone.0080265

11/17/2013

Chronic two-photon imaging of entire cortical columns in awake mice using microprisms

In the scientific quest to unravel the neural basis of perceptual and cognitive functions, animal models are very important in complementing the findings obtained in non-invasive human neuroimaging studies. Even though there are many species-specific aspects to cognition (e.g., human language), for those perceptual-cognitive functions that do generalize across species, animal models often offer the only possibility to test decisively between alternative hypotheses. Further, the development of animal research methods is advancing at astounding speed. Two-photon calcium imaging is a relatively new method that allows simultaneous recording from large populations (hundreds) of neurons. The method has, however, been limited to recording from a limited number of cortical layers at a time, and it has not been possible to record the same neural populations over extended periods of time, which would be very useful in studies of, for example, the neural basis of various types of learning.

With the method recently published by Andermann et al. (2013), it is now possible to record from extensive populations of neurons in all six cortical layers simultaneously over extended periods of time, even months. The authors surgically implanted glass microprisms in somatosensory and visual cortical areas of mice, which then allowed chronic two-photon imaging of hundreds of neurons across all layers simultaneously in awake animals. The authors point out that their novel methodology, when combined with advances in genetic, pharmacological, and optogenetic methods (with which individual neurons in a population can be selectively suppressed or excited), can considerably expand the highly exciting capabilities offered by two-photon imaging in animal-model studies of the neural basis of perceptual and cognitive functions.


Reference: Andermann ML, Gilfoy NB, Goldey GJ, Sachdev RNS, Wölfel M, McCormick DA, Reid RC, Levene MJ. Chronic cellular imaging of entire cortical columns in awake mice using microprisms. Neuron (2013) e-publication ahead of print. http://dx.doi.org/10.1016/j.neuron.2013.07.052

11/03/2013

Visual-cortex GABA concentrations predict incidence of cognitive failures in daily life in healthy volunteers

Since the amount of information one receives in daily life by far exceeds the limited capacity of one's processing resources, selecting relevant information and suppressing irrelevant information is a vital ability. The link between this cognitive ability, termed selective attention, and cognitive failures in daily life (e.g., failing to notice things, getting distracted) is well established. On the other hand, gamma-aminobutyric acid (GABA), the most common inhibitory neurotransmitter in the human brain, has been observed to contribute to the stimulus selectivity of visual cortex, a function that is an integral part of selective attention. What has not been investigated before, however, is whether inter-individual variability in the amount of visual-cortical GABA is linked with the frequency of cognitive failures in daily life.

In their recent study, Sandberg et al. (2013) had 36 healthy participants fill out a cognitive failures questionnaire, in which participants self-rated the frequency with which they experience common cognitive failures in perception, memory, and motor function. The participants then underwent 3T whole-head structural magnetic resonance imaging, and focal magnetic resonance spectroscopy measurements of GABA concentration were obtained from two voxels placed in 1) the calcarine sulcus in occipital cortex and 2) the anterior part of the superior parietal lobule. It was observed that GABA concentrations in the visual cortex correlated with the incidence of self-reported cognitive failures. In contrast, the authors observed no correlation between GABA concentrations in the parietal voxel and cognitive failures. The authors did, however, observe that gray matter volume in the left superior parietal lobule and occipital GABA concentration independently predicted cognitive failures.
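
The two analysis steps described above, a simple correlation and a two-predictor regression, could look roughly like the following Python sketch; the variable names and data are hypothetical, and this is not the authors' code.

    import numpy as np
    from scipy.stats import pearsonr
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    occipital_gaba = rng.normal(1.0, 0.1, 36)                     # GABA estimates (placeholder)
    parietal_gm = rng.normal(0.5, 0.05, 36)                       # gray matter volume (placeholder)
    cfq_score = 40 - 20 * occipital_gaba + rng.normal(0, 3, 36)   # cognitive failures score (placeholder)

    # Step 1: simple correlation between occipital GABA and cognitive failures
    r, p = pearsonr(occipital_gaba, cfq_score)
    print(f"occipital GABA vs. CFQ: r = {r:.2f}, p = {p:.3f}")

    # Step 2: do GABA and parietal gray matter volume predict failures independently?
    X = sm.add_constant(np.column_stack([occipital_gaba, parietal_gm]))
    fit = sm.OLS(cfq_score, X).fit()
    print(fit.params)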

These exciting results first of all demonstrate nicely that inter-individual variability in cognitive failures in daily life can be predicted from inter-individual differences in local brain neurochemical properties. The results further add an important piece of evidence for the role of GABA in cognitive processing by suggesting that visual-cortical GABA concentrations impact selective attention under ecologically valid conditions, as estimated by the questionnaire items. Third, these results suggest that the role of GABA in modulating selective attention is specific to the sensory cortical areas, whereas gray matter volume in parietal cortex additionally contributes to the frequency of cognitive failures in daily life.


Reference: Sandberg K, Blicher JU, Dong MY, Rees G, Near J, Kanai R. Occipital GABA correlates with cognitive failures in daily life. Neuroimage (2013) e-publication ahead of print. http://dx.doi.org/10.1016/j.neuroimage.2013.10.059

10/29/2013

Use of three-dimensional movies with surround sound as stimuli during functional magnetic resonance imaging

Naturalistic stimuli such as movies are being increasingly used in cognitive neuroimaging studies. One of the advantages offered by movies is that they make it possible to test the ecological validity of predictions based on research with more artificial stimulus features. Challenges in data analysis due to the inherent complexity of movie stimuli have been elegantly handled by the development of novel data analysis methods, including decomposition of the movie stimulus into a set of relevant stimulus time courses that are used as predictors in data analysis. One aspect that has not been tested in a neuroimaging setting, however, is the use of three-dimensional movies with surround sound.

In their recent study, Ogawa et al. (2013) presented healthy volunteers with alternating 2D and 3D movie clips with vs. without surround sound during functional magnetic resonance imaging. The surround sound was generated with a custom-built MR-compatible piezoelectric speaker array. Data analysis was carried out both by contrasting the blocked conditions (3D with surround sound, 3D without surround sound, 2D with surround sound, and 2D without surround sound) and by using time courses of the degree of binocular disparity and the number of sound sources as predictors of brain hemodynamic activity.
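
A minimal sketch of such a computation-based analysis is given below: the stimulus-feature time courses are convolved with a canonical hemodynamic response function and entered into a per-voxel general linear model. The HRF parameters and all data are illustrative assumptions rather than the authors' pipeline.

    import numpy as np
    from scipy.stats import gamma

    TR, n_vols = 2.0, 300
    t = np.arange(0, 30, TR)
    hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)        # simple double-gamma HRF (illustrative)
    hrf /= hrf.sum()

    rng = np.random.default_rng(3)
    disparity = rng.random(n_vols)                         # binocular disparity per volume (placeholder)
    n_sources = rng.integers(0, 5, n_vols).astype(float)   # number of sound sources (placeholder)

    X = np.column_stack([
        np.convolve(disparity, hrf)[:n_vols],              # convolved feature regressors
        np.convolve(n_sources, hrf)[:n_vols],
        np.ones(n_vols),                                   # intercept
    ])
    voxels = rng.standard_normal((n_vols, 1000))           # placeholder voxel time series
    betas, *_ = np.linalg.lstsq(X, voxels, rcond=None)     # one beta per regressor and voxel
    print(betas.shape)                                     # (3 regressors, 1000 voxels)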

The authors observed that brain hemodynamic activity was predicted by absolute visual disparity in dorsal occipital and posterior parietal areas and by visual disparity gradients in posterior aspects of the middle temporal gyrus as well as inferior frontal gyrus. The complexity of the auditory space was associated with hemodynamic activity in specific areas of the superior temporal gyrus and middle temporal gyrus. These results are highly exciting per se and, further, given that 3D and surround sound effects are known to increase the immersive effect of movies, this study represents an important step forward by demonstrating the feasibility of using 3D movies with surround sound during functional magnetic resonance imaging.


Reference:  Ogawa A, Bordier C, Macaluso E. Audio-visual perception of 3D cinematography: an fMRI study using condition-based and computation-based analyses (2013) PLoS ONE 8: e76003. http://dx.doi.org/10.1371/journal.pone.0076003

10/21/2013

Providing sense of touch via intracortical microstimulation of somatosensory cortex from a prosthetic limb

Research on brain-computer interfaces has shown amazing progress over the past decade, with non-human primate studies showing that monkeys can even learn to guide an artificial arm based on neural signals recorded from motor cortical areas. As such, this line of research holds great promise for patients who have lost a limb or are suffering from paralysis due to spinal cord injury. One critical aspect that has been lacking in this exciting area of research is the question of how somatosensory feedback could be provided from the prosthetic arm to the brain. This is important given that somatosensory feedback is a prerequisite for dexterous manipulation of objects, and given that the sense of touch is important for embodied sensation (i.e., the feeling that the limb is part of oneself) as well as for emotional-social communication.

In their recent study, Tabot et al. (2013) compared the ability of monkeys to carry out somatosensory discrimination tasks based on endogenous vs. artificial somatosensory feedback provided through a native vs. a prosthetic finger. Somatosensory stimulation was experimentally varied to find a set of parameters that could be used to guide manipulation of objects by the monkeys. The results suggest that 1) intracortical microstimulation of somatosensory cortex elicits spatially localized percepts consistent with the somatotopic organization of somatosensory cortex, 2) the magnitude of the percept seems to depend on the magnitude of the microstimulation, and 3) phasic stimulation can be utilized to convey information about making initial contact with an object. Based on these findings, the authors envision how microstimulation of the somatosensory cortex from a prosthetic limb could be used to provide a sense of touch to human patients with an artificial limb.


Reference: Tabot GA, Dammann JF, Berg JA, Tenore FV, Boback JL, Vogelstein J, Bensmaia SJ. Restoring the sense of touch with a prosthetic hand through a brain interface. Proc Natl Acad Sci USA (2013) e-publication ahead of print. http://dx.doi.org/10.1073/pnas.1221113110

10/14/2013

Direct causal evidence for auditory cortical "what" and "where" processing streams provided by transcranial magnetic stimulation

Since the initial observations in animal models, there has been accumulating evidence suggesting that sound identity and location information are processed in parallel anterior and posterior auditory-cortex streams in humans. Human neuroimaging evidence has, however, not been indisputable, since posterior auditory cortical areas have been observed to be sensitive to features other than auditory spatial ones as well. Furthermore, while neuroimaging findings are beyond doubt highly informative, they cannot per se provide causal evidence for the involvement of anterior and posterior auditory cortical areas in processing of “what” and “where” auditory information. Transcranial magnetic stimulation guided by magnetic resonance imaging is a method that, by making it possible to transiently deactivate specific cortical areas, allows causal testing of the involvement of cortical regions in task performance.

In their recent study, Dr. Jyrki Ahveninen et al. (2013) transiently inhibited bilateral anterior and posterior auditory cortical areas in healthy volunteers while they were performing sound localization and sound identity discrimination tasks. The transient inhibition was accomplished with paired-pulse transcranial magnetic stimulation guided by magnetic resonance imaging, with the pulses delivered 55-145 ms after the to-be-discriminated auditory stimuli. The anatomical areas targeted by the transcranial magnetic stimulation were further confirmed with individual-level cortical electric field estimates. It was observed that transient inhibition of posterior auditory cortical regions delayed reaction times significantly more during sound location than sound identity discrimination. In contrast, transient inhibition of anterior auditory cortical regions delayed reaction times significantly more during sound identity than sound location discrimination.
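
One simple way to test such a double dissociation statistically is a stimulation-site by task interaction on reaction times; the Python sketch below illustrates this with a paired t-test on difference scores and synthetic per-subject data, and is not the authors' statistical analysis.

    import numpy as np
    from scipy.stats import ttest_rel

    rng = np.random.default_rng(4)
    n = 16                                         # hypothetical number of subjects
    rt_post_where = 620 + rng.normal(0, 25, n)     # posterior AC TMS, location task (ms, synthetic)
    rt_post_what  = 560 + rng.normal(0, 25, n)     # posterior AC TMS, identity task
    rt_ant_where  = 570 + rng.normal(0, 25, n)     # anterior AC TMS, location task
    rt_ant_what   = 615 + rng.normal(0, 25, n)     # anterior AC TMS, identity task

    # Interaction: does the posterior-minus-anterior TMS effect differ between tasks?
    effect_where = rt_post_where - rt_ant_where
    effect_what = rt_post_what - rt_ant_what
    t, p = ttest_rel(effect_where, effect_what)
    print(f"site x task interaction: t({n - 1}) = {t:.2f}, p = {p:.4f}")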

These highly exciting findings provide direct causal evidence in support of parallel auditory cortex “what” vs. “where” processing pathways in humans. These results not only help clarify the still-debated issue of whether the posterior human auditory cortex participates in auditory space processing, but the findings also demonstrate the feasibility of using paired-pulse transcranial magnetic stimulation to target cortical areas that are located very close to one another. The introduction of methods that allow precise estimation of the cortical targets of transcranial magnetic stimulation also provides an important methodological advance.


Reference: Ahveninen J, Huang S, Nummenmaa A, Belliveau JW, Hung A-Y, Jaaskelainen IP, Rauschecker JP, Rossi S, Tiitinen H, Raij T. Evidence for distinct auditory cortex regions for sound location versus identity processing. Nature Communications (2013) 4: 2585. http://dx.doi.org/10.1038/ncomms3585

10/07/2013

Brain regions involved in processing gestural, facial, and actor-orientation cues in short video clips revealed by functional MRI

How is the human brain able to process social gestures so quickly and with (seemingly) so little effort? This is one of the most pivotal questions in attempts to understand the neural basis of social cognition. It is a very important area of research given that social skills are what make humans an inherently social species, and further because deficits in social cognition in certain clinical conditions are highly handicapping to afflicted individuals. Neuroimaging studies on the neural basis of social cognition have been rapidly increasing in number, but there have been relatively few studies in which processing of several social cues (e.g., gestures, facial expressions, orientation of social gestures towards vs. away from the subjects) has been included in the same study design.

In their recent study, Saggar et al. (2013) showed subjects short 2-s video clips depicting social vs. non-social gestures, oriented away from vs. towards the subjects, and with the actor's face occluded (blurred) vs. clearly visible, during functional magnetic resonance imaging. The authors observed enhanced hemodynamic activity in the amygdala and in brain areas relevant for theory of mind when contrasting social vs. non-social gestures. Activity in lateral occipital cortex and precentral gyrus was further observed when comparing responses elicited by gestures oriented towards vs. away from the subjects. Visibility of facial gestures in turn modulated activity in posterior superior temporal sulcus and fusiform gyrus. Taken together, these highly interesting findings shed light on how multiple social cues that signal information about the intentions of other persons are processed in the human brain, and significantly pave the way for clinical research in patient groups with social cognition deficits.


Reference: Saggar M, Shelly EW, Lepage J-F, Hoeft F, Reiss AL. Revealing the neural networks associated with processing of natural social interaction and the related effects of actor-orientation and face-visibility. Neuroimage (2013) e-publication ahead of print. http://dx.doi.org/10.1016/j.neuroimage.2013.09.046

9/27/2013

Dietary polyamine supplement prevents aging-related memory impairments in Drosophila

It is well known that aging results in cognitive decline, including memory impairments, even in the absence of dementing neurodegenerative disorders such as Alzheimer's disease. Given the rapidly aging populations in many countries, the causes of aging-related memory impairments have been a focus of intensive research. One central challenge for this research has been posed by the relatively long lifespan of most animal models. Conditioning paradigms that can be used in Drosophila (aka "fruit flies") provide a model in which aging-related memory impairments are seen over the course of days and weeks instead of years, and which can therefore be used effectively to study the underlying molecular mechanisms.

In their recent study, Dr. Varun K. Gupta et al. (2013) conducted a series of experiments in which they first observed that levels of the polyamines spermidine and putrescine decreased in the heads of aging Drosophila. In the following experiment they observed that a dietary spermidine supplement reduced aging-related memory impairment in Drosophila, as assessed with a maze-learning task involving olfactory cues and electric shocks. Investigating the possible underlying molecular mechanisms, the authors observed that dietary spermidine, in addition to reducing aging-related memory impairment, prevented the aging-related decrease of autophagy. Furthermore, when the autophagic mechanisms were genetically impaired, the spermidine-induced reduction of aging-related memory impairment was blocked.

This impressive set of findings demonstrates how the Drosophila model can be used highly effectively to study the molecular mechanisms that underlie aging-related memory impairments. The authors point out that prior to their observations, few substances (all of them exogenous) had been observed to protect against aging-related memory impairments. Spermidine, being an endogenous substance, thus holds a lot of potential for further studies and might ultimately provide a candidate substance for prevention of aging-related memory deficits in humans.


Reference: Gupta VK, Scheunemann L, Eisenberg T, Mertel S, Bhukel A, Koemans TS, Kramer JM, Liu KSY, Schroeder S, Stunnenberg HG, Sinner F, Magnes C, Pieber TR, Dipt S, Fiala A, Schenck A, Schwaerzel M, Madeo F, Sigrist SJ. Restoring polyamines protects from age-induced memory impairment in an autophagy-dependent manner. Nature Neuroscience (2013) e-publication ahead of print. http://dx.doi.org/10.1038/nn.3512

9/23/2013

Economic conditions at the time of birth predict cognitive ability late in life

Severe economic recessions are known to adversely affect population health, and it is also well known that exposure to adverse stimuli during early stages of life can hinder development. Whether economic-societal conditions at the time of birth, such as poverty brought about by economic downturns or wellbeing brought about by economic booms, can impact cognitive functions later in life is a highly interesting question that has been addressed relatively little. Thanks to large in-depth surveys carried out in the 2000s across multiple European countries among the aging population, it has become possible to address this important question.

In their recent study, Doblhammer et al. (2013) examined whether the economic cycle at the time of birth could predict poor cognitive functioning at older age. They specifically inspected whether cycles in economic indicators during the first half of the 20th century (excluding war years) predict cognitive ability as assessed with five interview measures: orientation to time, recall, delayed recall, verbal fluency, and numerical ability. Multiple potentially intervening factors were carefully controlled for in the analyses. The results showed that economic downturns at the time of birth significantly predicted poorer cognitive function in the elderly. While it is naturally difficult to pinpoint causal factors in this type of study design (the authors mention malnutrition and psychological stress/insecurity within families as possible explanations), these results nonetheless bear high societal significance by demonstrating that economic factors can have lasting consequences for cognitive function.


Reference: Doblhammer G, van den Berg GJ, Fritze T. Economic conditions at the time of birth and cognitive abilities late in life: evidence from ten European countries. PLoS ONE (2013) 8: e74915. http://dx.doi.org/10.1371/journal.pone.0074915

9/15/2013

Watching short emotional movie clips robustly activates the human dorsal visual stream areas

Rapid advances in neuroimaging methods are currently making it possible to answer one of the most intriguing questions in cognitive neuroscience: how does seeing emotion-arousing events in one's environment modulate the various systems (e.g., emotional, attentional, somatomotor) of the brain? Until relatively recently, neuroimaging studies on the neural basis of emotions utilized stimuli such as emotional pictures and sounds to delineate brain structures responding to emotional events. More naturalistic stimuli such as movies, which elicit more robust and genuine emotions, have been used recently, and in these early studies a more extensive set of brain areas has been shown to be modulated by emotional valence and arousal than in studies using more artificial stimuli, thus warranting further research into the neural basis of emotions with naturalistic stimuli.

In their recent study, Goldberg et al. (2014) presented healthy volunteers with short, ~14-s clips taken from commercial movies during functional magnetic resonance imaging of brain hemodynamic activity. Prior to the scanning session, the subjects had watched longer clips of a few minutes that contained the short experimental clips, in order to familiarize them with the emotional events and content of the clips. The clips ranged from neutral to strongly emotional, and this variation was used in modeling hemodynamic activity. In separate control experiments the clips were played upside down, and the soundtrack and video inputs were mixed, to control for possible low-level sensory differences between emotional and neutral clips. The authors observed responses to emotional clips in a number of brain areas; however, the most robust responses to emotionally arousing clips were noted in the dorsal visual stream.

These highly interesting findings of dorsal stream activity enhancement by emotionally arousing movie clips are interpreted by the authors as reflecting an initial step in the chain of events ultimately leading to action towards emotionally meaningful objects. Methodologically, the study also presents an interesting and potentially very useful advance: by familiarizing the subjects with the movie material in advance, the authors could effectively utilize very short movie clips to trigger the recollection of the previously seen emotional events during scanning. Given that there are limits to how long a given subject can be scanned, and that the signal-to-noise ratio of even modern neuroimaging methods requires multiple repetitions of similar events (e.g., emotional responses) over the duration of the experiment, this setup offers an attractive alternative paradigm for further neuroimaging studies of emotions.


Reference: Goldberg H, Preminger S, Malach R. The emotion–action link? Naturalistic emotional stimuli preferentially activate the human dorsal visual stream. Neuroimage (2014) 84: 254–264. http://dx.doi.org/10.1016/j.neuroimage.2013.08.032

9/09/2013

Human electrocorticography shows phase resetting of visual cortical oscillatory activity by auditory stimuli

Perception is inherently multisensory, as evidenced by multisensory integration effects such as the increased comprehensibility of speech when the lip movements of a speaker are clearly visible in a noisy environment, as well as by audio-visual illusions such as the ventriloquism and McGurk effects. At the neural level, it has been increasingly recognized that there are cross-modal inputs even to primary sensory cortical areas, and that these occur already at very short latencies from stimulus onset. Understanding of the neural mechanisms by which auditory stimuli influence visual processing has, however, remained relatively limited. Given recent findings in other sense modalities, phase resetting of oscillatory visual cortex activity by auditory stimuli has emerged as a potential mechanism by which auditory stimuli might facilitate processing of visual stimuli in visual cortical areas.

In their recent study, Mercier et al. (2013) recorded brain electrical activity intracranially in patients undergoing presurgical mapping procedures. Oscillatory and evoked activity was recorded with electrodes placed in a number of occipital-visual cortical areas during presentation of auditory-only, visual-only, and audiovisual stimuli. While the authors also observed some visual-cortical responses evoked by auditory stimuli, the most robust effect that auditory stimuli caused in visual cortical areas was modulation/phase resetting of the ongoing oscillatory activity. Such phase resetting might be the neurophysiological-level mechanism that supports behaviorally measurable multisensory interaction effects.
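
Phase resetting of this kind is commonly quantified with inter-trial phase coherence (ITC); the sketch below shows one standard way to compute it from band-pass-filtered single-trial signals using the Hilbert transform. The sampling rate, frequency band, and data array are illustrative stand-ins, not the authors' recordings or pipeline.

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    fs = 500.0                                   # sampling rate in Hz (assumed)
    rng = np.random.default_rng(5)
    trials = rng.standard_normal((100, 500))     # placeholder: trials x samples around sound onset

    b, a = butter(4, [8, 12], btype="bandpass", fs=fs)   # alpha band as an example choice
    filtered = filtfilt(b, a, trials, axis=1)
    phase = np.angle(hilbert(filtered, axis=1))          # instantaneous phase per trial and sample

    # ITC: length of the mean phase vector across trials (0 = random phases, 1 = identical phases)
    itc = np.abs(np.mean(np.exp(1j * phase), axis=0))
    print("peak ITC across the epoch:", itc.max())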


Reference: Mercier MR, Foxe JJ, Fiebelkorn IC, Butler JS, Schwartz TH, Molholm S. Auditory-driven phase reset in visual cortex: human electrocorticography reveals mechanisms of early multisensory integration. Neuroimage (2013) 79:19-29. http://dx.doi.org/10.1016/j.neuroimage.2013.04.060

9/01/2013

Self-initiated task selection is predicted by multi-voxel pattern activity in medial prefrontal and parietal cortices

Based on clinical observations in brain-damaged individuals, it has been known for a long time that the brain mechanisms underlying externally triggered and internally initiated goal-directed behavior must be different. There are patients who do very well when the clinician asks them to carry out various cognitive and behavioral tasks, including ones that measure so-called executive functions, yet the very same patients may be unable to take initiative and make their own internally driven choices to pursue meaningful goals. In neuroimaging studies, the neural basis of switching from one task to another has mostly been studied using cue-triggered paradigms. What has remained less well known, due in large part to the obvious methodological challenges in measuring the timing of “internal triggers”, is which brain mechanisms underlie genuinely internally driven task selection.

In their recent study, Soon et al. (2013) used functional magnetic resonance imaging in a group of 34 healthy volunteers to study the brain mechanisms that precede internally initiated task selection. The authors set up a paradigm in which subjects voluntarily engaged in the mental tasks of adding or subtracting. Brain hemodynamic activity patterns preceding task initiation/selection were then examined using multivoxel pattern analysis algorithms. It was observed that the intention to switch tasks could be decoded from patterns of hemodynamic activity within medial prefrontal cortex and parietal cortex, areas partly overlapping with the so-called default-mode network, as early as four seconds before the subjects estimated having become consciously aware of their choice.
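
To illustrate what multivoxel pattern decoding of such intentions might look like in code, here is a minimal scikit-learn sketch with a linear classifier and cross-validation; the region of interest, trial labels, and data are hypothetical, and this is not the authors' decoding pipeline.

    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(6)
    patterns = rng.standard_normal((120, 400))         # trials x voxels from a hypothetical ROI,
                                                       # sampled some seconds before the reported choice
    labels = rng.integers(0, 2, 120)                   # 0 = addition, 1 = subtraction (placeholder)

    clf = LinearSVC(max_iter=10000)                    # linear classifier on multi-voxel patterns
    scores = cross_val_score(clf, patterns, labels, cv=5)
    print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f} (chance = 0.50)")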

These highly important findings not only disclose brain areas that underlie internally driven task selection, but also provide important methodological advances that can be utilized in further studies of this important research question. Notably, it was specifically the distributed pattern of brain activity that held the information predicting the initiation of a task switch; the overall amplitudes of hemodynamic activity within the medial prefrontal and parietal cortices failed to do so. The task paradigm devised by the authors is also one that can be readily modified for further studies of the neural basis of voluntary task selection.


Reference: Soon CS, He AH, Bode S, Haynes JD. Predicting free choices for abstract intentions. Proc Natl Acad Sci USA (2013) 110: 6217-6222. http://dx.doi.org/10.1073/pnas.1212218110

8/25/2013

The head and tail of caudate nucleus code flexible and stable reward value of visual objects

Being able to estimate the reward value of visual objects is a crucial factor in guiding one's behavioral choices. What makes this task even more challenging is that reward value is often not fixed but can vary quickly, making it necessary to flexibly take into account the immediate reward history in addition to the stable reward value learned over the longer term. The underlying neural mechanisms have remained a topic of speculation. The existence of two parallel reward-value processing mechanisms, one flexibly processing short-term reward value and another holding the longer-term stable reward value of objects, has been hypothesized; however, empirical support for this hypothesis has been lacking.

In their recent study, Drs. Kim and Hikosaka (2013) used combined single-neuron recordings and temporary inactivation in non-human primates to investigate the roles of distinct caudate nucleus areas in the determination of flexible and stable reward values of visual stimuli. They observed that, during behavioral tasks in which monkeys looked more at objects with high than low value, neurons in the head of the caudate nucleus coded reward value flexibly, whereas neurons in the tail of the caudate nucleus coded the longer-term stable reward value. Temporary inactivation of these two caudate nucleus subregions corroborated the findings obtained with the single-neuron recordings.

These very important and exciting results suggest that there are indeed two parallel neural systems coding the reward value of objects: one enables flexible coding of reward value when there is short-term volatility in value, and the other enables holding and appreciating the stable reward values of objects. In addition to shedding light on the neural basis of reward-value processing under different types of task conditions, these findings offer hypotheses and insights into the neural basis of specific deficits that have been documented in various basal ganglia disorders, as also briefly discussed by the authors in their paper.


Reference: Kim HF, Hikosaka O. Distinct basal ganglia circuits controlling behaviors guided by flexible and stable values.  Neuron (2013) e-publication ahead of print. http://dx.doi.org/10.1016/j.neuron.2013.06.044

8/18/2013

Unexpectedness of both observed errors and successes activates the dorsomedial prefrontal and rostral cingulate cortex in humans

It has sometimes been said that the ability to predict what is going to happen next is the primary task the human brain needs to accomplish (e.g., perhaps the ability to form memories of past events developed solely because of the need to predict the future). Indeed, when observing others, there typically are few surprises, and unexpected acts robustly catch one's attention so that one can figure out what is taking place. Activation of dorsomedial prefrontal cortical areas together with rostral cingulate cortex has been associated in previous studies with both detection of errors (e.g., when observing someone fail at a task) and observation of surprising events. Whether the responses seen in these areas are due more to the unexpectedness or to the erroneousness of observed actions has, however, remained an unresolved issue.

In their recent study, Dr. Anne-Marike Schiffer et al. (2013) studied whether responses in the aforementioned brain areas are due more to the unexpectedness or to the erroneousness of observed actions. They presented video clips, shot from a first-person perspective, of an actress tying sailing, fishing, and climbing knots. The videos were edited so that both unexpected failures and unexpected successes were observable, as validated in a separate behavioral experiment carried out in the volunteers, who were all sufficiently skilled in tying the knots themselves. The movie clips were then shown to the volunteers during functional magnetic resonance imaging. The results revealed an area encompassing medial prefrontal cortex and rostral cingulate cortex that responded to both correct and erroneous knot-tying actions that were unexpected.

These very important and interesting results suggest that, at least to some extent, previously observed error-related responses in dorsomedial prefrontal cortex and rostral cingulate gyrus could have been due to the unexpectedness of the errors. Based on their findings, the authors further bring up the interesting possibility that an unexpectedness signal in the dorsal rostral cingulate gyrus could serve the purpose of adjusting internal models that help predict the flow of actions. Overall, this study is a very nice demonstration of how behavioral and neuroimaging experiments can be combined to advance our understanding of the neural basis of cognitive functions.


Reference: Schiffer A-M, Krause KH, Schubotz RI. Surprisingly correct: unexpectedness of observed actions activates the medial prefrontal cortex. Human Brain Mapping (2013) online e-publication ahead of print. http://dx.doi.org/10.1002/hbm.22277

8/09/2013

Brain regions processing complex acoustic features across different musical genres

Music is a fundamental and highly interesting aspect of humanity. The neural basis of music perception has for the most part been studied with relatively simplified stimuli isolating a given element of music, such as short sound sequences that form tonality or rhythm, and by observing which brain areas respond to such stimulation. Over the last few years, enabled by developments in non-invasive neuroimaging technology and data analysis methods, there has been an emerging trend towards the use of naturalistic stimuli during neuroimaging, including free listening to music. What has been lacking, however, are studies examining which brain areas are consistently activated by musical features across different musical pieces and genres under free listening conditions.

In their recent study, Alluri et al. (2013) presented healthy volunteers with musical pieces of various genres, including both instrumental music and music with lyrics, during functional magnetic resonance imaging. Musical features were then extracted with automated algorithms included in the so-called MIR toolbox that the authors have developed previously. These complex acoustic feature time series were then used as regressors to predict voxel-wise brain hemodynamic activity recorded during music listening. Cross-validation across musical genres and two different subject populations was used to map areas that respond consistently to the complex acoustic features of music.
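
The encoding logic, feature time series predicting a voxel's time course with cross-validation across stimuli, can be sketched as follows. Note that the study extracted features with the MATLAB-based MIR toolbox, whereas this Python sketch simply uses synthetic feature matrices to illustrate the regression and cross-validation steps.

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(7)
    # Two "stimuli" (musical pieces): feature matrices of shape (timepoints, acoustic features)
    feat_train = rng.standard_normal((400, 25))
    feat_test = rng.standard_normal((300, 25))
    w_true = rng.standard_normal(25)
    y_train = feat_train @ w_true + rng.standard_normal(400)   # voxel time course, piece 1 (synthetic)
    y_test = feat_test @ w_true + rng.standard_normal(300)     # voxel time course, piece 2 (synthetic)

    model = Ridge(alpha=10.0).fit(feat_train, y_train)          # fit the encoding model on one piece...
    print("cross-stimulus R^2:", r2_score(y_test, model.predict(feat_test)))  # ...evaluate on the other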

It was shown that brain activity during free music listening can be predicted from the complex acoustic features of music in the auditory, limbic, and motor regions of the brain, as well as in orbitofrontal regions that have previously been associated with evaluative appraisal but not with free music listening per se. Cross-validation identified a region in the right superior temporal gyrus, including planum polare and Heschl's gyrus, as the core structure processing complex acoustic features across musical genres. These highly exciting findings will help pave the way for further neuroimaging studies into the neural basis of music processing under naturalistic free-listening conditions.


Reference: Alluri V, Toiviainen P, Lund TE, Wallentin M, Vuust P, Nandi AK, Ristaniemi T, Brattico E. From Vivaldi to Beatles and back: predicting lateralized brain responses to music. Neuroimage 83 (2013) 627-636. http://dx.doi.org/10.1016/j.neuroimage.2013.06.064

8/02/2013

Psychophysics of spatial hearing and the underlying neural mechanisms in humans nicely reviewed

Localization of sound sources is a complicated challenge for the human brain since the auditory system, unlike the visual one, lacks direct correspondence between sound source locations and sensory receptive fields. In their recent review article, Dr. Jyrki Ahveninen et al. (2013) provide a comprehensive review of what is known about the psychophysics of sound localization and the current understanding of the underlying cortical mechanisms as elucidated by neuroimaging studies.

Both animal models and, more recently, non-invasive neuroimaging studies in humans have suggested a special role in auditory spatial processing for cortical areas posterior to the primary auditory cortex, including planum temporale and posterior superior temporal gyrus; however, the precise underlying neural mechanisms have remained in many ways an unresolved puzzle in cognitive neuroscience. The most significant outstanding questions are laid out in the paper, which is a good read for anyone interested in the cognitive neuroscience of spatial hearing.


Reference: Ahveninen J, Kopco N, Jaaskelainen IP. Psychophysics and neuronal bases of sound localization in humans (2013) Hearing Research, e-publication ahead of print. http://dx.doi.org/10.1016/j.heares.2013.07.008

7/08/2013

Multisensory integration effects caused by cross-modal mental imagery

The existence of perceptually robust multisensory interactions, such as the ventriloquism and McGurk effects, has been well established in behavioral studies, and neuroimaging studies have further shown that multisensory processing of stimuli takes place even in primary sensory cortical areas. There is also evidence suggesting that mental imagery, such as the imagined sound of a hammer seen to hit an anvil in a silent movie, modulates processing in sensory cortical areas. What has remained less explored is the extent to which imagined visual stimuli influence processing of real auditory stimuli and vice versa.

In their recently published study, Berger and Ehrsson (2013) conducted a series of behavioral experiments in which they tested whether imagined stimuli cause well-known multisensory illusory percepts in the same way as real stimuli. They first tested the effects of an imagined collision sound on the cross-bounce illusion, then the effects of an imagined visual stimulus on the ventriloquism effect, and third, the effects of imagined seen articulation on the so-called McGurk effect. In all three experiments, the authors were able to demonstrate that imagined stimuli cause multisensory illusions similar to those caused by real cross-modal stimuli: imagining the sound of a collision gave rise to the cross-bounce illusion, imagining a visual stimulus shifted the perceived location of an auditory stimulus, and auditory imagery of speech promoted an illusory speech percept in a modified McGurk illusion.

These highly exciting results nicely expand previous findings on multisensory interactions and provide further evidence for the view that sensory cortices play a pivotal role in the generation of mental imagery, even to the extent that visual imagery modulates processing of auditory stimuli and vice versa. It is easy to see that these behavioral results also provide an excellent starting point for further neuroimaging studies investigating the multisensory effects of mental imagery in sensory cortical areas of the brain.

Reference: Berger CC, Ehrsson HH. Mental imagery changes multisensory perception. Current Biology (2013), e-publication ahead of print.

7/01/2013

Task-specific networks of the brain revealed by a meta-analysis of more than 1600 neuroimaging studies

Neuroinformatics refers to the free sharing of analysis tools and experimental data. When neuroinformatics was taking its very first steps, there were, alongside extensive support, also voices of criticism, mostly doubting the usefulness of making published neuroimaging datasets available and questioning whether anyone could in practice utilize data that are most often collected to answer a highly specific research question. With huge advances in computational power, it has become possible to put together thousands of such datasets and to look for consistent patterns in the resulting big data with sophisticated analysis algorithms adapted from, e.g., statistical physics. Recently, studies combining data over a large number of resting-state imaging studies have inspected the brain as a complex network. Consistent patterns of functional connectivity (or rather “co-activation”, where a number of brain areas tend to change their level of activity hand-in-hand) have indeed been observed, but it has been less well known how active engagement in various types of tasks changes the “resting state” networks of co-activity.

In their recent study, Crossley et al. (2013) included in a meta-analysis data from more than 1600 functional magnetic resonance imaging and positron emission tomography studies published between 1985 and 2010 to inspect the network activity patterns of the human brain when experimental subjects engage in different types of tasks, including perception, action, executive tasks, and emotion. Based on this meta-analysis, the authors observed large similarities in the functional networks of the brain between the resting state (where the task of the subjects typically has been to lie in the scanner and either do nothing or focus on a fixation cross) and active tasks; however, differences also emerged. The so-called occipital module was mostly activated during perception, the central module during action, the default-mode module by emotions, and the fronto-parietal module by executive tasks. Further, the authors observed that there were important nodes in parietal and prefrontal cortex that often connected over long distances and were involved in a diverse range of tasks. Deactivation of nodes was also noted to play an important role in flexible network reconfiguration with changing cognitive demands. Overall, this study is a prime example of the usefulness of big data in cognitive neuroscience, allowing sophisticated analysis of the brain's central processing principles that will likely pave the way for further research efforts in a highly significant manner.
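
As a rough illustration of the kind of network summary reported, the sketch below detects modules in a co-activation matrix and flags candidate hub nodes that link them; the matrix is synthetic and the threshold arbitrary, so this only illustrates the analysis logic, not the meta-analytic pipeline.

    import numpy as np
    import networkx as nx
    from networkx.algorithms import community

    rng = np.random.default_rng(8)
    n = 60
    coact = rng.random((n, n)) * 0.5                       # weak background co-activation (synthetic)
    for block in (slice(0, 20), slice(20, 40), slice(40, 60)):
        coact[block, block] += 0.4                         # stronger within-module co-activation
    coact = (coact + coact.T) / 2
    np.fill_diagonal(coact, 0.0)

    G = nx.from_numpy_array((coact > 0.45).astype(int))    # binarize at an example threshold
    modules = community.greedy_modularity_communities(G)
    betweenness = nx.betweenness_centrality(G)             # nodes bridging modules score highest
    hubs = sorted(betweenness, key=betweenness.get, reverse=True)[:5]
    print(f"{len(modules)} modules; candidate hub nodes (highest betweenness): {hubs}")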

Reference: Crossley NA, Mechelli A, Vértes PE, Winton-Brown TT, Patel AX, Ginestet CE, McGuire P, Bullmore ET. Cognitive relevance of the community structure of the human brain functional coactivation network. Proc. Natl. Acad. Sci. USA (2013) e-publication ahead of print. http://dx.doi.org/10.1073/pnas.1220826110