Green tea consumption is associated with lower incidence of mild cognitive impairment and dementia in elderly people
The mitigating effects of dietary habits, including tea and coffee drinking, on cognitive decline (and even dementia) in aging is a research topic with potentially very high societal impact. While coffee and tea contain large amounts of polyphenols and caffeine that have potential neuroprotective effects, previous studies on the relationship between coffee and tea consumption and dementia have produced mixed results. In their recent population-based longitudinal study, Noguchi-Shinohara et al. (2014) examined the relationships between coffee, black tea, and green tea consumption and the incidence of dementia and mild cognitive impairment.
Out of a total of 2845 residents aged >60 years in 2007 in Nakajima, Japan, 723 individuals meeting the inclusion criteria voluntarily participated in the study. Cognitive level was tested using the Mini-Mental State Examination and the Clinical Dementia Rating scale. Health surveys and blood tests were also carried out to control for potentially intervening variables such as ApoE phenotype and diabetes. Consumption of coffee, black tea, and green tea was recorded and divided into three classes for data analysis: no consumption, 1-6 days/week, and every day. At follow-up testing conducted on average 4.9 years later, frequent consumption of green tea (but neither black tea nor coffee) was associated with a significantly lower incidence of dementia and mild cognitive impairment.
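The three-class coding of consumption frequency lends itself to a simple illustration. The sketch below shows one way the binning could be implemented; the function name and input format are assumptions for illustration, not taken from the study:

```python
def consumption_class(days_per_week: int) -> str:
    """Map weekly drinking frequency onto the study's three analysis classes."""
    if not 0 <= days_per_week <= 7:
        raise ValueError("days_per_week must be between 0 and 7")
    if days_per_week == 0:
        return "none"
    if days_per_week <= 6:
        return "1-6 days/week"
    return "every day"

# Example: a participant reporting green tea on 3 days per week.
print(consumption_class(3))  # 1-6 days/week
```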
The authors propose that these interesting findings could be due to a number of factors. One possible mechanism that they bring up is that, unlike black tea, green tea contains catechins, especially epigallocatechin-3-gallate, as well as myricetin, both of which have been reported to have neuroprotective effects. The authors further note that higher physical activity and number of hobbies also correlated with green tea consumption, although the beneficial effects of green tea prevailed even when these factors were taken into account in the analysis. Taken together, these findings add to the pool of evidence suggesting that green tea might have some neuroprotective effects that help guard against aging-related cognitive decline.
Reference: Noguchi-Shinohara M, Yuki S, Dohmoto C, Ikeda Y, Samuraki M, Iwasa K, Yokogawa M, Asai K, Komai K, Nakamura H, Yamada M. Consumption of green tea, but not black tea or coffee, is associated with reduced risk of cognitive decline. PLoS ONE (2014) 9: e96013. http://dx.doi.org/10.1371/journal.pone.0096013
The question of which neural events predict risky vs. safe behaviors, such as overtaking a slower vehicle despite oncoming traffic vs. driving behind it and arriving a few minutes later to work, is a highly interesting and important one. The vast majority of neuroimaging studies investigating the neural basis of risk taking have utilized models adapted from economics, in which risk is defined as the degree of variance in outcomes. It has been argued, however, that for lay persons risk equals exposure to a potential loss.
In their recent study, Helfinstein et al. (2014) had 108 healthy volunteers engage in the Balloon Analog Risk Task during functional magnetic resonance imaging. In this task, the subjects earn points by pumping up balloons, but lose the points if a balloon explodes before they “cash out” by stopping pumping. This task was selected because performance in it correlates highly with risk-taking behaviors relevant for public health, including unsafe driving, sexual risk taking, and drug use. The authors observed that multi-voxel pattern analysis of brain activity before the point of decision making predicted the subjects' subsequent risky vs. safe choices, specifically involving brain regions found in previous studies to participate in control functions. Interestingly, in a separate univariate analysis these areas were found to be more active before safe than risky choices.
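Conceptually, this kind of multi-voxel pattern analysis trains a classifier on pre-decision activity patterns and evaluates it on held-out trials. The following sketch illustrates the general pipeline on synthetic data (trial counts, voxel counts, and effect sizes are invented; this is not the authors' actual analysis):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 50

# Synthetic stand-in for pre-decision voxel patterns (all values invented).
labels = rng.integers(0, 2, n_trials)            # 0 = safe choice, 1 = risky choice
patterns = rng.normal(size=(n_trials, n_voxels))
# Inject a weak signal into a subset of "control-region" voxels before safe choices.
patterns[labels == 0, :10] += 0.5

# Cross-validated decoding of the upcoming choice from the pre-decision pattern.
clf = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(clf, patterns, labels, cv=5).mean()
print(f"decoding accuracy: {accuracy:.2f}")
```

Above-chance cross-validated accuracy is what licenses the claim that pre-decision activity "predicts" the choice; univariate contrasts, as in the separate analysis the authors report, instead compare mean activity levels between conditions.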
These highly interesting findings show that it is possible to predict risky vs. safe choices from preceding patterns of brain activity in a set of regions previously shown to be activated during tasks requiring cognitive control. The fact that these areas were more strongly activated preceding safe than risky decisions suggests that increased risk taking might be due to failures in engaging appropriate cognitive control processes. The relevance of these findings is further augmented by the fact that the Balloon Analog Risk Task has been found in previous studies to correlate highly with real-life risk-taking behaviors relevant for public health, such as unsafe driving and drug use.
Reference: Helfinstein SM, Schonberg T, Congdon E, Karlsgodt KH, Mumford JA, Sabb FW, Cannon TD, London ED, Bilder RM, Poldrack RA. Predicting risky choices from brain activity patterns. Proc Natl Acad Sci USA (2014) e-publication ahead of print. http://dx.doi.org/10.1073/pnas.1321728111
The vast majority of cognitive neuroscience research on the perception of facial expressions has utilized static stimuli with emotional expressions varying in category (e.g., fear, anger, happiness) and magnitude. These studies have provided a wealth of important information that has facilitated understanding of how facial stimuli are processed by the brain; at the same time, however, they have tended to assume that the processing of faces, including facial expressions, is highly automatic and hard-wired in the brain. Recently, it has been increasingly recognized that contextual information affects the perception and interpretation of facial expressions, though it has remained relatively poorly known at which latencies, and how robustly, contextual information shapes the processing of neutral facial stimuli in the human brain.
In their recent study, Wieser et al. (2014) used electroencephalography to record event-related brain responses to neutral facial stimuli preceded by contextual valence information that was either self-relevant or other-relevant (i.e., brief verbal descriptions of neutral, positive, or negative valence, phrased in either the first or the third person, as in “my pain” vs. “his pain”, respectively). The authors observed that event-related responses associated with emotional processing were modulated by both types of contextual information from 220 ms post-stimulus onwards. The subjective perception of the affective state of the neutral faces was also shaped by the brief affective descriptions that preceded them.
Taken together, these findings very nicely demonstrate how contextual second-hand information (i.e., what people are told about others) enhances cortical processing of facial stimuli, starting as early as ~220 ms from the onset of the facial stimuli, even though the facial stimuli themselves were completely neutral, without any emotional or self-referential information. The authors conclude that the very perception of facial features is modulated by prior second-hand information that one has about another person, a finding which might in part help explain how initial impressions of others are formed.
Reference: Wieser MJ, Gerdes ABM, Büngel I, Schwarz KA, Mühlberger A, Pauli P. Not so harmless anymore: how context impacts the perception and electro-cortical processing of neutral faces. Neuroimage (2014) e-publication ahead of print. http://dx.doi.org/10.1016/j.neuroimage.2014.01.022
The question of how the human auditory cortex represents complex natural sounds is one of the most fundamental in cognitive neuroscience. While previous studies have documented a number of tonotopically organized areas occupying the primary and non-primary auditory cortices, other studies have shown selectivity for further sound features, such as sound location and speech-sound category, in specific auditory cortical areas. Furthermore, findings in animal models suggest that auditory cortical neurons are selective to various types of spectrotemporal sound features. It has not been known, however, whether there are topographic representations of spectrotemporal features, a model that could potentially explain how complex natural sounds are represented in the human auditory cortex.
In their recent study, Santoro et al. (2014) analyzed data from two previous functional magnetic resonance imaging experiments in which a rich array of natural sounds had been presented to healthy volunteers. They then tested between three computational models: the first assumed that the auditory cortex represents sounds as spectral/frequency information, the second that it represents sounds as temporal information, and the third that sounds are represented as sets of spectrotemporal modulations. The results indicate that natural sounds are represented with frequency-specific analysis of spectrotemporal modulations. Furthermore, in anterior auditory cortex regions the analysis of spectral information was found to be more fine-grained than in posterior auditory cortical areas, where temporal information was, in turn, represented more accurately alongside a rather coarse representation of spectral information.
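Testing between encoding models of this kind typically means fitting each model's features to the measured responses and comparing cross-validated prediction accuracy on held-out sounds. The sketch below illustrates that logic on synthetic data, where a simulated response is driven by joint spectrotemporal features; the feature dimensions and model details are invented for illustration and do not reproduce the authors' analysis:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_sounds = 120

# Synthetic feature sets standing in for the candidate models (dimensions invented).
spectral = rng.normal(size=(n_sounds, 20))
temporal = rng.normal(size=(n_sounds, 20))
interaction = spectral[:, :10] * temporal[:, :10]            # joint features
spectrotemporal = np.hstack([spectral, temporal, interaction])

# Simulated voxel response driven by the joint spectrotemporal features only.
response = interaction @ rng.normal(size=10) + rng.normal(scale=0.5, size=n_sounds)

def cv_score(features):
    """Cross-validated R^2 of a ridge encoding model for one feature set."""
    return cross_val_score(Ridge(alpha=1.0), features, response, cv=5).mean()

scores = {name: cv_score(f) for name, f in
          [("spectral", spectral), ("temporal", temporal),
           ("spectrotemporal", spectrotemporal)]}
print(scores)
```

In this toy setup only the joint model predicts the held-out responses well, mirroring the form of the model comparison: the winning model is the one whose features generalize best to unseen stimuli.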
In sum, the authors provide a very exciting approach for testing how well alternative computational models, inspired by neurophysiological findings from animal research, can predict hemodynamic data collected during the presentation of various natural sounds. Taken together, the results offer a very interesting vantage point on how natural sounds could be represented in the human auditory cortex. It is easy to predict that the approach and findings will generate wide interest and help further research efforts take a significant step forward, especially given the increasing popularity of naturalistic stimuli in neuroimaging research.
Reference: Santoro R, Moerel M, De Martino F, Goebel R, Ugurbil K, Yacoub E, Formisano E. Encoding of natural sounds at multiple spectral and temporal resolutions in the human auditory cortex. PLoS Computational Biology (2014) 10: e1003412. http://dx.doi.org/10.1371/journal.pcbi.1003412
In linguistic expressions, emotional experiences are often described as bodily sensations, such as someone “having cold feet” or “heartache”, and such descriptions can be surprisingly similar across different cultures and languages. Furthermore, in cognitive neuroscience theories of emotion, somatosensory feedback has been proposed to support conscious emotional experiences. On the other hand, there are classical findings indicating that it is difficult to classify emotional states (other than changes in the level of arousal) based on measures of autonomic nervous system activity. Somewhat surprisingly, the question of whether different emotional states (e.g., anger, sadness, happiness) are associated with distinct patterns of bodily sensations has not been addressed empirically.
In their recent study, Nummenmaa et al. (2013) conducted a series of five closely related experiments in which a total of 701 participants were presented with outlines of bodies along with emotional stimuli of different types, and were asked to color the bodily regions in which they felt increasing or decreasing activity while experiencing different kinds of emotions. The authors observed that different emotions were associated with distinct patterns of bodily sensations, as indicated by the coloring patterns, and that these patterns replicated across stimulus types. The patterns further replicated across Finnish- and Swedish-speaking subjects, as well as Taiwanese subjects tested in a separate control study. Based on these findings, the authors propose that emotions are represented in the somatosensory system as culturally universal categorical somatotopic maps that contribute to conscious emotional experiences.
Reference: Nummenmaa L, Glerean E, Hari R, Hietanen JK. Bodily maps of emotions. Proc Natl Acad Sci USA (2013) e-publication ahead of print. http://dx.doi.org/10.1073/pnas.1321664111
In the modern world, information about traumatic events, such as earthquakes, major accidents, and terrorist strikes claiming innocent lives, spreads quickly. Further, the media provide repeated exposure to major catastrophic events in the form of newsfeeds that add new details about the events as they are uncovered. Whether media exposure can induce stress responses of a similar nature and magnitude as being at the site of the catastrophe is an important question from at least three perspectives: 1) answering it provides important information about human cognition and emotional responses in the modern global information-flow environment, 2) mental health professionals can better appreciate problems related to media exposure, and 3) the population at large is often the intended psychological target of terrorists who carry out acts of violence.
In their recent study, Holman et al. (2013) compared media vs. direct exposure to a collective trauma by carrying out a survey over the two weeks following the Boston Marathon bombings in representative samples of persons living in Boston, New York, and the rest of the United States. When the authors adjusted acute stress symptom scores for demographics, prior mental health, and prior collective stress exposure, they observed that >6 hours of media exposure to the marathon bombing events during the week following the bombings was associated with higher acute stress symptoms than direct exposure to the bombings.
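Statistically, adjusting acute stress scores for covariates amounts to entering the exposure variables into a regression alongside the covariates, so that each exposure effect is estimated holding the others constant. A minimal sketch on simulated data (variable names, sample size, and effect sizes are illustrative, not the study's):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500  # simulated respondents (the actual survey samples were larger)

# Simulated covariate and exposure indicators (all values invented).
prior_mental_health = rng.normal(size=n)
high_media_exposure = rng.integers(0, 2, n)   # 1 = high bombing-related media exposure
direct_exposure = rng.integers(0, 2, n)       # 1 = direct exposure to the bombings

# Simulate acute stress with a larger media effect, mirroring the reported pattern.
stress = (0.8 * high_media_exposure + 0.4 * direct_exposure
          + 0.5 * prior_mental_health + rng.normal(size=n))

# Ordinary least squares with an intercept: stress ~ media + direct + covariate.
X = np.column_stack([np.ones(n), high_media_exposure, direct_exposure,
                     prior_mental_health])
coefs, *_ = np.linalg.lstsq(X, stress, rcond=None)
print(f"adjusted media effect: {coefs[1]:.2f}, direct-exposure effect: {coefs[2]:.2f}")
```

The "adjusted" effect reported in such analyses is the exposure coefficient after the covariates have absorbed their share of the variance; the toy data are constructed so that the media coefficient comes out larger, as in the study's finding.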
These very interesting findings suggest that indirect exposure to traumatic events via repeated media coverage may produce even stronger stress responses than direct exposure to the event, a clear indication of the potency of prolonged and repeated media exposure in triggering stress-related mental health problems. As the authors point out, it has to be kept in mind that emergency actions taken by the local authorities likely lessened distress among those directly exposed to the bombings. Mass media may thus inadvertently serve as a channel that spreads the psychological trauma far beyond the directly affected population. Beyond the effects of mass-media coverage, these results further suggest that repeatedly relayed information about a catastrophic event can trigger stress responses and produce symptoms of post-traumatic stress disorder without direct exposure, which may also be an important form of societal-cultural learning.
Reference: Holman EA, Garfin DR, Silver RC. Media’s role in broadcasting acute stress following the Boston Marathon bombings. Proc Natl Acad Sci USA (2013) e-publication ahead of print. http://dx.doi.org/10.1073/pnas.1316265110
Increased functional brain network modularity predicts working memory deficits in early-stage multiple sclerosis
Multiple sclerosis is a neurological disorder in which, due to inflammatory processes, focal demyelination and axonal damage step by step sever the anatomical connections of the brain. Recent neuroimaging studies and theoretical work both point to the importance of inter-area connectivity and interactions in giving rise to perceptual and cognitive functions. Therefore, one of the crucial questions regarding multiple sclerosis is how the breakdown of anatomical connectivity reorganizes the brain's functional networks and how this impacts cognition.
In their recent study, Gamboa et al. (2013) recorded resting-state functional magnetic resonance imaging in early-stage multiple sclerosis patients and healthy controls. As a measure of cognition, subjects in both groups separately performed the Paced Auditory Serial Addition Task in a dual-task manner to assess working memory, attention, and speed of information processing. Using graph-theoretical analysis of brain functional connectivity, the authors observed increased modularity in the early-stage multiple sclerosis patients as compared with the healthy controls. Furthermore, the increased modularity of brain functional connectivity correlated negatively with performance in the neuropsychological test of working memory, attention, and speed of information processing.
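Modularity in graph-theoretical analyses quantifies how cleanly a network decomposes into modules: the fraction of within-module edges minus the fraction expected by chance given the node degrees (Newman's Q). A minimal sketch on a toy graph (real analyses start from thresholded resting-state correlation matrices, not a hand-built adjacency matrix):

```python
import numpy as np

# Toy "functional connectivity" graph: two 4-node modules joined by one bridge edge.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3),   # module A
         (4, 5), (4, 6), (4, 7), (5, 6), (5, 7), (6, 7),   # module B
         (3, 4)]                                            # bridge
n_nodes = 8
A = np.zeros((n_nodes, n_nodes))
for i, j in edges:
    A[i, j] = A[j, i] = 1

def modularity(A, labels):
    """Newman's modularity Q for a given node-to-module assignment."""
    m = A.sum() / 2                      # number of edges
    k = A.sum(axis=1)                    # node degrees
    same = labels[:, None] == labels[None, :]
    return ((A - np.outer(k, k) / (2 * m)) * same).sum() / (2 * m)

labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # the two intended modules
Q = modularity(A, labels)
print(f"modularity Q = {Q:.3f}")
```

A higher Q means the network is more segregated into modules; the study's finding was that this segregation was increased in patients and that higher Q went with worse task performance.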
These highly interesting findings demonstrate how subtle changes in connectivity, due to focal damage to axonal fibers in the earliest stages of multiple sclerosis, alter the functional network properties of the brain, and how such changes in brain network activity adversely reflect upon cognitive ability. It is easy to see how these findings pave the way for further studies examining how accumulating focal damage to the links of the functional networks affects perceptual and cognitive functions in multiple sclerosis patients. Given that relatively robust effects were seen in these early-stage patients, these findings could also be interesting from the point of view of clinical research aiming at the development of measures for following up disease progression.
Reference: Gamboa OL, Tagliazucchi E, von Wegner F, Jurcoane A, Wahl M, Laufs H, Ziemann U. Working memory performance of early MS patients correlates inversely with modularity increases in resting state functional connectivity networks. NeuroImage (2013) e-publication ahead of print. http://dx.doi.org/10.1016/j.neuroimage.2013.12.008
While the sense of hearing is clearly the dominant channel for speech perception, humans are surprisingly good at reading the lips of their conversation partners, a phenomenon referred to as speech reading. Early psychophysics studies already demonstrated that this ability significantly enhances speech perception, especially when speech has to be perceived under noisy conditions. There is a fair amount of neuroimaging literature on the underlying neural mechanisms. In these studies, visual speech stimuli (i.e., articulatory gestures) have been reported to modulate auditory cortical processing, with some evidence pointing to the speech motor system first being activated by visual speech and then influencing auditory-cortical processing via an efference copy.
In their recent study, Chu et al. (2013) studied the neural basis of speech reading by presenting 19 healthy volunteers with silent video clips of a person articulating vowels during event-related functional magnetic resonance imaging (fMRI). Speech reading activated a wide range of occipital, temporal, and prefrontal cortical areas. The authors used structural equation modeling to estimate information flow during speech reading between the activated areas. The results suggested that there is parallel information flow from extrastriate areas to anterior prefrontal areas and, further, feedback information flow from the anterior prefrontal areas to posterior-superior temporal lobe auditory areas. These effective connectivity estimates thus support the model wherein speech reading influences auditory-cortical areas via prefrontal speech motor areas, possibly in the form of an efference copy that might facilitate speech perception.
Reference: Chu Y-H, Lin F-H, Chou Y-J, Tsai KW-K, Kuo W-J, Jaaskelainen IP. Effective cerebral connectivity during silent speech reading revealed by functional magnetic resonance imaging. PLoS ONE (2013) 8: e80265. http://dx.doi.org/10.1371/journal.pone.0080265
In the scientific quest to unravel the neural basis of perceptual and cognitive functions, animal models are very important in complementing the findings obtained in non-invasive human neuroimaging studies. Even though there are many species-specific aspects to cognition (e.g., human language), for those perceptual-cognitive functions that do generalize across species, animal models often offer the only possibility to test decisively between alternative hypotheses. Further, the development of animal research methods is advancing at astounding speed. Two-photon calcium imaging is a relatively new method that allows simultaneous recording from large populations (hundreds) of neurons. The method has, however, been limited to recording from a limited number of cortical layers at a time, and it has not been possible to record the same neural populations over extended periods of time, which would be very useful in studies of, for example, the neural basis of various types of learning.
With the method recently published by Andermann et al. (2013), it is now possible to record extensive populations of neurons simultaneously from all six cortical layers over extended periods of time, even months. The authors surgically implanted glass microprisms in somatosensory and visual cortical areas of mice, which allowed chronic two-photon imaging of hundreds of neurons from all layers simultaneously in awake animals. The authors point out that their novel methodology, when combined with advances in genetic, pharmacological, and optogenetic methods (with which individual neurons in a population can be selectively suppressed or excited), can considerably expand the highly exciting capabilities offered by two-photon imaging in animal-model studies of the neural basis of perceptual and cognitive functions.
Reference: Andermann ML, Gilfoy NB, Goldey GJ, Sachdev RNS, Wölfel M, McCormick DA, Reid RC, Levene MJ. Chronic cellular imaging of entire cortical columns in awake mice using microprisms. Neuron (2013) e-publication ahead of print. http://dx.doi.org/10.1016/j.neuron.2013.07.052
Visual-cortex GABA concentrations predict incidence of cognitive failures in daily life in healthy volunteers
Since the amount of information one receives in daily life by far exceeds the limited capacity of one's processing resources, selecting relevant information and suppressing irrelevant information is a vital ability. The link between this cognitive ability, termed selective attention, and cognitive failures in daily life (e.g., failing to notice things, getting distracted) is well established. On the other hand, gamma-aminobutyric acid (GABA), the most common inhibitory neurotransmitter in the human brain, has been observed to contribute to stimulus selectivity in the visual cortex, a function that is an integral part of selective attention. What has not been investigated before, however, is whether inter-individual variability in visual-cortical GABA concentration is linked with the frequency of cognitive failures in daily life.
In their recent study, Sandberg et al. (2013) had 36 healthy participants fill out a cognitive failures questionnaire, in which participants self-rated the frequency with which they experience common cognitive failures in perception, memory, and motor function. The participants then underwent 3T whole-head structural magnetic resonance imaging, and focal magnetic resonance spectroscopy measurements of GABA concentration were obtained from two voxels, placed 1) in the calcarine sulcus in the occipital cortex and 2) in the anterior part of the superior parietal lobule. It was observed that GABA concentrations in the visual cortex correlated with the incidence of self-reported cognitive failures. In contrast, the authors failed to see any correlation between GABA concentrations in the parietal voxel and cognitive failures. The authors did, however, observe that gray matter volume in the left superior parietal lobule and occipital GABA concentration independently predicted cognitive failures.
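Analytically, the core of such a study is a correlation between the spectroscopy measure and the questionnaire score, plus a joint regression to test whether two predictors contribute independently. A sketch on simulated data (units, effect sizes, and the signal structure are invented, not the study's measurements):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 36  # matching the study's sample size; the data themselves are simulated

# Simulated per-participant measures (arbitrary units, invented effect sizes).
occipital_gaba = rng.normal(size=n)
parietal_gray_matter = rng.normal(size=n)
cognitive_failures = (0.6 * occipital_gaba + 0.5 * parietal_gray_matter
                      + rng.normal(scale=0.6, size=n))

# Simple correlation between occipital GABA and self-reported failures.
r = np.corrcoef(occipital_gaba, cognitive_failures)[0, 1]

# Both predictors entered jointly, testing for independent contributions.
X = np.column_stack([np.ones(n), occipital_gaba, parietal_gray_matter])
betas, *_ = np.linalg.lstsq(X, cognitive_failures, rcond=None)
print(f"r = {r:.2f}; joint betas: GABA {betas[1]:.2f}, gray matter {betas[2]:.2f}")
```

"Independent prediction", as reported in the study, corresponds to both coefficients remaining reliably non-zero when entered into the model together.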
These exciting results demonstrate, first of all, that it is possible to predict inter-individual variability in everyday cognitive failures from inter-individual differences in local brain neurochemistry. The results further add an important piece of evidence on the role of GABA in cognitive processing by suggesting that visual-cortical GABA concentrations impact selective attention under ecologically valid conditions, as estimated by the questionnaire items. Third, these results suggest that the role of GABA in modulating selective attention is specific to the sensory cortical areas, whereas gray matter volume in the parietal cortex additionally contributes to the frequency of cognitive failures in daily life.
Reference: Sandberg K, Blicher JU, Dong MY, Rees G, Near J, Kanai R. Occipital GABA correlates with cognitive failures in daily life. Neuroimage (2013) e-publication ahead of print. http://dx.doi.org/10.1016/j.neuroimage.2013.10.059
Use of three-dimensional movies with surround sound as stimuli during functional magnetic resonance imaging
Naturalistic stimuli such as movies are increasingly being used in cognitive neuroimaging studies. One of the advantages offered by movies is that they make it possible to test the ecological validity of predictions based on research with more artificial stimuli. Challenges in data analysis due to the inherent complexity of movie stimuli have been elegantly handled by the development of novel analysis methods, including decomposition of the movie stimulus into a set of relevant stimulus time courses that are used as predictors in data analysis. One aspect that has not been tested in neuroimaging settings, however, is the use of three-dimensional movies with surround sound.
In their recent study, Ogawa et al. (2013) presented healthy volunteers with alternating 2D and 3D movie clips with vs. without surround sound during functional magnetic resonance imaging. The surround sound was generated with a custom-built MR-compatible piezoelectric speaker array. Data analysis was carried out both by contrasting the blocked conditions (3D with surround sound, 3D without surround sound, 2D with surround sound, and 2D without surround sound) and by using time courses of the degree of binocular disparity and the number of sound sources as predictors of brain hemodynamic activity.
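The computation-based analysis amounts to a general linear model in which stimulus-derived time courses serve as regressors for each voxel's time series. A schematic sketch on simulated data (the predictor values and scan length are invented, and hemodynamic convolution is omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(3)
n_volumes = 240  # invented scan length in fMRI volumes

# Stimulus-derived predictor time courses (simulated here; the study extracted
# binocular disparity and sound-source counts from the actual movie clips).
disparity = rng.uniform(0, 1, n_volumes)
n_sound_sources = rng.integers(1, 5, n_volumes).astype(float)

# A simulated voxel time series driven mainly by disparity, plus noise.
voxel_ts = (2.0 * disparity + 0.3 * n_sound_sources
            + rng.normal(size=n_volumes))

# Standard GLM: fit both predictors (plus an intercept) to the voxel time series.
X = np.column_stack([np.ones(n_volumes), disparity, n_sound_sources])
betas, *_ = np.linalg.lstsq(X, voxel_ts, rcond=None)
print(f"disparity beta = {betas[1]:.2f}, sound-source beta = {betas[2]:.2f}")
```

Repeating this fit for every voxel and mapping where each regressor's beta is reliably non-zero is what yields statements like "activity in dorsal occipital areas was predicted by absolute disparity".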
The authors observed that brain hemodynamic activity was predicted by absolute visual disparity in dorsal occipital and posterior parietal areas, and by visual disparity gradients in posterior aspects of the middle temporal gyrus as well as the inferior frontal gyrus. The complexity of the auditory space was associated with hemodynamic activity in specific areas of the superior and middle temporal gyri. These results are highly exciting per se; furthermore, given that 3D and surround-sound effects are known to increase the immersiveness of movies, this study represents an important step forward by demonstrating the feasibility of using 3D movies with surround sound during functional magnetic resonance imaging.
Reference: Ogawa A, Bordier C, Macaluso E. Audio-visual perception of 3D cinematography: an fMRI study using condition-based and computation-based analyses. PLoS ONE (2013) 8: e76003. http://dx.doi.org/10.1371/journal.pone.0076003
Providing sense of touch via intracortical microstimulation of somatosensory cortex from a prosthetic limb
Research on brain-computer interfaces has shown amazing progress over the past decade, with non-human primate studies showing that monkeys can even learn to guide an artificial arm based on neural signals recorded from motor cortical areas. As such, this line of research holds great promise for patients who have lost a limb or suffer from paralysis due to spinal cord injury. One critical aspect that has been lacking in this exciting area of research is the question of how somatosensory feedback could be provided from the prosthetic arm to the brain. This is important given that somatosensory feedback is a prerequisite for dexterous manipulation of objects, and that the sense of touch is important both for the embodied sensation (i.e., that the limb feels like part of oneself) and for emotional-social communication.
In their recent study, Tabot et al. (2013) compared the ability of monkeys to carry out somatosensory discrimination tasks based on endogenous vs. artificial somatosensory feedback provided through a native vs. a prosthetic finger. Somatosensory stimulation was experimentally varied to find a set of parameters that could be used to guide manipulation of objects by the monkeys. The results suggest that 1) intracortical microstimulation of the somatosensory cortex elicits spatially localized percepts consistent with the somatotopic organization of the somatosensory cortex, 2) the magnitude of the percept depends on the magnitude of the microstimulation, and 3) phasic stimulation can be utilized to convey information about initial contact with an object. Based on these findings, the authors envision how microstimulation of the somatosensory cortex from a prosthetic limb could be used to provide a sense of touch to human patients with an artificial limb.
Reference: Tabot GA, Dammann JF, Berg JA, Tenore FV, Boback JL, Vogelstein J, Bensmaia SJ. Restoring the sense of touch with a prosthetic hand through a brain interface. Proc Natl Acad Sci USA (2013) e-publication ahead of print. http://dx.doi.org/10.1073/pnas.1221113110
Direct causal evidence for auditory cortical "what" and "where" processing streams provided by transcranial magnetic stimulation
Since initial observations in animal models, there has been accumulating evidence suggesting that sound identity and location information are processed in parallel anterior and posterior auditory-cortex streams in humans. Human neuroimaging evidence has, however, not been indisputable, since posterior auditory cortical areas have been observed to be sensitive also to features other than auditory spatial ones. Furthermore, while neuroimaging findings are beyond any doubt highly informative, they cannot per se provide causal evidence for the involvement of anterior and posterior auditory cortical areas in the processing of “what” and “where” auditory information. Transcranial magnetic stimulation guided by magnetic resonance imaging is a method that, by making it possible to transiently deactivate specific cortical areas, allows causal testing of the involvement of cortical regions in task performance.
In their recent study, Ahveninen et al. (2013) transiently inhibited bilateral anterior and posterior auditory cortical areas in healthy volunteers while they were performing sound localization and sound identity discrimination tasks. The transient inhibition was accomplished with paired-pulse transcranial magnetic stimulation guided by magnetic resonance imaging, with the pulses delivered 55-145 ms after the to-be-discriminated auditory stimuli. The anatomical areas targeted by the transcranial magnetic stimulation were further confirmed with individual-level cortical electric field estimates. It was observed that transient inhibition of posterior auditory cortical regions delayed reaction times significantly more during sound location than sound identity discrimination. In contrast, transient inhibition of anterior auditory cortical regions delayed reaction times significantly more during sound identity than sound location discrimination.
These highly exciting findings provide direct causal evidence in support of the parallel auditory cortex “what” vs. “where” processing pathways in humans. These results not only nicely help clarify the still-debated issue of whether the posterior human auditory cortex participates in auditory space processing, but methodologically the findings further demonstrate the feasibility of using paired-pulse transcranial magnetic stimulation in targeting cortical areas that are located very close to one another. The introduction of methods that allow precise estimation of the cortical targets of transcranial magnetic stimulation also provides an important methodological advance.
Reference: Ahveninen J, Huang S, Nummenmaa A, Belliveau JW, Hung A-Y, Jaaskelainen IP, Rauschecker JP, Rossi S, Tiitinen H, Raij T. Evidence for distinct auditory cortex regions for sound location versus identity processing. Nature Communications (2013) 4: 2585. http://dx.doi.org/10.1038/ncomms3585
Brain regions involved in processing gestural, facial, and actor-orientation cues in short video clips revealed by functional MRI
How is the human brain able to process social gestures so quickly and with (seemingly) so little effort? Answering this question is one of the most pivotal steps in attempting to understand the neural basis of social cognition. This is a very important area of research, given that social skills are what make humans an inherently social species, and further since deficits in social cognition in certain clinical conditions are highly handicapping to afflicted individuals. Neuroimaging studies on the neural basis of social cognition have been rapidly increasing in number, but there have been relatively few studies where processing of several social cues (e.g., gestures, facial expressions, orientation of social gestures towards vs. away from the subjects) has been examined within the same study design.
In their recent study, Saggar et al. (2013) showed subjects short 2-sec video clips depicting social vs. non-social gestures, oriented away vs. towards the subjects, and with the face occluded (blurred) vs. clearly visible, during functional magnetic resonance imaging. The authors observed enhanced hemodynamic activity in amygdala and brain areas relevant for theory of mind when contrasting social vs. non-social gestures. Activity in lateral occipital cortex and precentral gyrus was further observed when comparing responses elicited by gestures towards vs. away from the subjects. Visibility of facial gestures in turn modulated activity in posterior superior temporal sulcus and fusiform gyrus. Taken together, these highly interesting findings shed light on how multiple social cues that signal information about the intentions of other persons are processed in the human brain, and significantly pave the way for clinical research in patient groups with social cognition deficits.
Reference: Saggar M, Shelly EW, Lepage J-F, Hoeft F, Reiss AL. Revealing the neural networks associated with processing of natural social interaction and the related effects of actor-orientation and face-visibility. Neuroimage (2013) e-publication ahead of print. http://dx.doi.org/10.1016/j.neuroimage.2013.09.046
It is well known that aging results in cognitive decline, including memory impairments, even in the absence of dementing neurodegenerative disorders per se, such as Alzheimer’s disease. Given the rapidly aging populations in many countries, the causes of aging-related memory impairments have been a focus of intensive research. One central challenge for research on aging-related memory impairments has been posed by the relatively long lifespan of most animal models. Conditioning paradigms that can be used in Drosophila (i.e., fruit flies) provide a model where aging-related memory impairments are seen over the course of days and weeks instead of years, thus offering a system that can be effectively used to study the underlying molecular mechanisms.
In their recent study, Dr. Varun K Gupta et al. (2013) conducted a series of experiments where they first observed that levels of the polyamines spermidine and putrescine decreased in the heads of aging Drosophila. In the following experiment they observed that dietary spermidine supplementation reduced aging-related memory impairment in Drosophila, as assessed with a maze-learning task involving olfactory cues and electric shocks. Investigating the possible underlying molecular mechanisms, the authors observed that dietary spermidine, in addition to reducing aging-related memory impairment, prevented aging-related decrease of autophagy. Furthermore, when the autophagic mechanisms were genetically impaired, the spermidine-induced reduction of aging-related memory impairment was blocked.
This impressive set of findings demonstrates how the Drosophila model can be used highly effectively to study molecular mechanisms that underlie aging-related memory impairments. The authors point out that prior to their observations, few substances (and all of them exogenous) had been observed to protect against aging-related memory impairments. Spermidine, being an endogenous substance, thus holds a lot of potential for further studies and might ultimately provide a candidate substance for prevention of aging-related memory deficits in humans.
Reference: Gupta VK, Scheunemann L, Eisenberg T, Mertel S, Bhukel A, Koemans TS, Kramer JM, Liu KSY, Schroeder S, Stunnenberg HG, Sinner F, Magnes C, Pieber TR, Dipt S, Fiala A, Schenck A, Schwaerzel M, Madeo F, Sigrist SJ. Restoring polyamines protects from age-induced memory impairment in an autophagy-dependent manner. Nature Neuroscience (2013) e-publication ahead of print. http://dx.doi.org/10.1038/nn.3512
Severe economic recessions are known to adversely affect population health, and it is also well known that exposure to adverse stimuli during early stages of life can hinder development. The question of whether economic-societal conditions at the time of birth, such as poverty brought about by economic downturns or relative wellbeing brought about by economic booms, can impact cognitive functions later in life is a highly interesting one that has been addressed relatively little. Thanks to large in-depth surveys carried out in the 2000s across multiple European countries among the aging population, it has become possible to address this important question.
In their recent study, Doblhammer et al. (2013) examined whether the economic cycle at the time of birth could predict poor cognitive functioning at older age. They specifically inspected whether cycles in economic indicators during the first half of the 20th century (excluding war years) would predict cognitive ability as assessed with five interview measures: orientation to time, recall, delayed recall, verbal fluency, and numerical ability. Multiple potentially intervening factors were carefully controlled for in the analyses. The results showed that economic downturns at the time of birth significantly predicted poorer cognitive function in the elderly. While it is naturally difficult to pinpoint causal factors in this type of study design (the authors mention malnutrition and psychological stress/insecurity within families as possible explanations), these results nonetheless bear high societal significance by demonstrating that economic factors can have lasting consequences on cognitive function.
Reference: Doblhammer G, van den Berg GJ, Fritze T. Economic conditions at the time of birth and cognitive abilities late in life: evidence from ten European countries. PLoS ONE (2013) 8: e74915. http://dx.doi.org/10.1371/journal.pone.0074915
Rapid advances in neuroimaging method development are currently making it possible to answer one of the most intriguing questions in cognitive neuroscience, namely, how seeing emotion-arousing events in one’s environment modulates the various systems (e.g., emotional, attentional, somatomotor) of the brain. Until relatively recently, neuroimaging studies on the neural basis of emotions utilized stimuli such as emotional pictures and sounds to delineate brain structures responding to emotional events. More naturalistic stimuli, such as movies, that elicit more robust and genuine emotions have been used recently, and in such early studies a more extensive set of brain areas has been shown to be modulated by emotional valence and arousal than in studies using more artificial stimuli, thus warranting further research into the neural basis of emotions with naturalistic stimuli.
In their recent study, Goldberg et al. (2014) presented healthy volunteers with short (~14 s) clips taken from commercial movies during functional magnetic resonance imaging of brain hemodynamic activity. Prior to the scanning session, the subjects had watched longer clips of a few minutes each, containing the short experimental clips, to familiarize them with the emotional events and content of the clips. The clips ranged from neutral to strongly emotional, and this variation was used in modeling hemodynamic activity. In separate control experiments the clips were played upside down, and the soundtracks and video inputs were mixed, to control for the possibility of low-level sensory differences between emotional and neutral clips. The authors observed responses to emotional clips in a number of brain areas; however, the most robust responses to emotionally arousing clips were noted in the dorsal visual stream.
These highly interesting findings of dorsal stream activity enhancement by emotionally arousing movie clips are interpreted by the authors to reflect an initial step in the chain of events ultimately leading to action towards emotionally meaningful objects. Methodologically, the study also presents an interesting and potentially very useful advance: by familiarizing the subjects with the movie material in advance, the authors could effectively utilize very short movie clips to trigger the recollection of the previously seen emotional events during the scanning. Given that there are limitations to how long a given subject can be scanned, and that the signal-to-noise ratio limitations of even modern neuroimaging methods require one to obtain multiple repetitions of similar events (e.g., emotional responses) over the duration of the experiment, this setup of the authors offers an attractive alternative paradigm for further neuroimaging studies of emotions.
Reference: Goldberg H, Preminger S, Malach R. The emotion–action link? Naturalistic emotional stimuli preferentially activate the human dorsal visual stream. Neuroimage (2014) 84: 254–264. http://dx.doi.org/10.1016/j.neuroimage.2013.08.032
Human electrocorticography shows phase resetting of visual cortical oscillatory activity by auditory stimuli
Perception is inherently multisensory, as evidenced by multisensory integration effects such as increased comprehensibility of speech when lip movements of a speaker are clearly visible in a noisy environment, as well as by audio-visual illusions such as the ventriloquism and McGurk effects. At the neural level, it has been increasingly recognized that there are cross-modal inputs even to primary sensory cortical areas, occurring already at very short latencies from stimulus onset; however, understanding of the neural mechanisms by which auditory stimuli influence visual processing has been relatively limited. Given recent findings in other sense modalities, phase resetting of oscillatory visual cortex activity by auditory stimuli has emerged as a potential mechanism by which auditory stimuli might facilitate processing of visual stimuli in visual cortical areas.
In their recent study, Mercier et al. (2013) recorded brain electrical activity intracranially in patients undergoing presurgical mapping procedures. Oscillatory and evoked activity was recorded with electrodes placed in a number of occipital-visual cortical areas during presentation of auditory-only, visual-only, and audiovisual stimuli. While the authors also observed some responses in visual cortical areas evoked by auditory stimuli, the most robust effect that auditory stimuli caused in visual cortical areas was modulation/phase resetting of the ongoing oscillatory activity. Such phase resetting might be the neurophysiological-level mechanism that supports behaviorally measurable multisensory interaction effects.
Reference: Mercier MR, Foxe JJ, Fiebelkorn IC, Butler JS, Schwartz TH, Molholm S. Auditory-driven phase reset in visual cortex: human electrocorticography reveals mechanisms of early multisensory integration. Neuroimage (2013) 79:19-29. http://dx.doi.org/10.1016/j.neuroimage.2013.04.060
Self-initiated task selection is predicted by multi-voxel pattern activity in medial prefrontal and parietal cortices
Based on clinical observations in brain-damaged individuals, it has long been known that the brain mechanisms underlying externally triggered and internally initiated goal-directed behavior must be different. There are patients who do very well when the clinician asks them to carry out various cognitive and behavioral tasks, including ones that measure so-called executive functions; yet the very same patients might be unable to take initiative and make their own internally driven choices to pursue meaningful goals. In neuroimaging studies, the neural basis of switching from one task to another has mostly been studied using cue-stimulus triggered paradigms. What has remained less well known, due in large part to obvious methodological challenges in measuring the timing of “internal triggers”, is which brain mechanisms underlie genuinely internally driven task selection.
In their recent study, Soon et al. (2013) used functional magnetic resonance imaging in a group of 34 healthy volunteers to study brain mechanisms that precede internally initiated task selection. The authors set up a paradigm where subjects voluntarily engaged in mental tasks of adding or subtracting. Brain hemodynamic activity patterns preceding task initiation/selection were then examined using multivoxel pattern analysis algorithms. It was observed that the intention to switch task could be decoded from patterns of hemodynamic activity within medial prefrontal cortex and parietal cortex, areas partly overlapping with the so-called default-mode network, as early as four seconds before the subjects self-estimated having become consciously aware of their choices.
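The decoding logic of multivoxel pattern analysis can be illustrated with a minimal, hypothetical sketch on simulated data (this is not the authors' analysis pipeline, and all names and parameters here are illustrative assumptions): a linear classifier is trained to predict a binary choice, labeled here as add vs. subtract, from voxel-activity patterns whose mean amplitude is identical across conditions, so that only the distributed pattern carries choice information.

```python
# Minimal, hypothetical MVPA sketch with simulated data -- not the
# authors' pipeline. A linear classifier decodes a binary task choice
# (add vs. subtract) from simulated voxel-activity patterns.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 50
labels = rng.integers(0, 2, n_trials)  # 0 = "add", 1 = "subtract"

# One class carries a weak distributed spatial pattern on top of noise;
# subtracting the pattern's mean equalizes overall amplitude across
# classes, so only the *pattern* (not the mean signal) is informative.
pattern = rng.standard_normal(n_voxels) * 0.5
X = rng.standard_normal((n_trials, n_voxels))
X[labels == 1] += pattern
X[labels == 1] -= pattern.mean()

# Cross-validated decoding accuracy; chance level is 0.5.
acc = cross_val_score(LinearSVC(), X, labels, cv=5).mean()
print(f"decoding accuracy: {acc:.2f}")
```

Classifying distributed patterns in this way is what allows information to be read out even when the region's overall activation level does not differ between conditions, mirroring the amplitude-vs-pattern distinction the authors report.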
These highly important findings not only disclose brain areas that underlie internally driven task selection, but also provide important methodological advances that can be utilized in further studies on this important research question. Notably, it was shown that specifically the distributed pattern of brain activity held the information that predicted the initiation of a task switch, whereas the overall amplitudes of hemodynamic activity within the medial prefrontal and parietal cortices failed to do so. The task paradigm devised by the authors is also one that can be readily modified for further studies of the neural basis of voluntary task selection.
Reference: Soon CS, He AH, Bode S, Haynes JD. Predicting free choices for abstract intentions. Proc Natl Acad Sci USA (2013) 110: 6217-6222. http://dx.doi.org/10.1073/pnas.1212218110
Being able to estimate the reward value of visual objects is a crucial factor in guiding one’s behavioral choices. What makes this task even more challenging is that the reward value is often not fixed but can vary quickly, making it necessary to flexibly take into account the immediate reward history in addition to the stable reward value that has been learned over the longer term. The underlying neural mechanisms have remained a topic of speculation. The existence of two parallel reward-value processing mechanisms, one flexibly processing short-term reward value and another holding the longer-term stable reward value of objects, has been hypothesized; however, empirical support for this hypothesis has been lacking.
In their recent study, Drs. Kim and Hikosaka (2013) used combined single-neuron recording and temporary inactivation methods in non-human primates to investigate the roles of distinct caudate nucleus areas in the determination of flexible and stable reward values of visual stimuli. They observed that, during behavioral tasks wherein monkeys looked more at objects with high than low value, neurons in the head of the caudate nucleus coded reward value flexibly, whereas neurons in the tail of the caudate nucleus coded for the longer-term stable reward value. Temporary inactivation of these two caudate nucleus subregions corroborated the findings obtained in the single-neuron recordings.
These very important and exciting results suggest that there indeed are two parallel neural systems coding for the reward value of objects: one enables flexible coding of reward value when there is short-term volatility in value, while the other holds the stable, long-term reward values of objects. In addition to shedding light on the neural basis of reward-value processing under different types of task conditions, these findings offer hypotheses and insights into the neural basis of specific deficits that have been documented in various basal ganglia disorders, as also briefly discussed by the authors in their paper.
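The hypothesized division of labor between flexible and stable value coding can be illustrated with a simple conceptual sketch (this is not the authors' model; learning rates and reward probabilities below are illustrative assumptions): two delta-rule value estimators with different learning rates track the same reward stream, and when the object is transiently devalued, the fast estimator follows the change while the slow estimator retains the long-term value.

```python
# Conceptual sketch (not the authors' model): two parallel delta-rule
# value estimators with different learning rates track an object's
# reward. The fast estimator follows short-term volatility; the slow
# one retains the long-term stable value.
import numpy as np

rng = np.random.default_rng(1)
alpha_fast, alpha_slow = 0.5, 0.02  # illustrative learning rates
v_fast = v_slow = 0.0

# Object is usually rewarding (p=0.9) but briefly devalued mid-session
# (trials 100-119, p=0.1), then restored.
rewards = np.concatenate([rng.random(100) < 0.9,
                          rng.random(20) < 0.1,
                          rng.random(100) < 0.9]).astype(float)

fast_trace, slow_trace = [], []
for r in rewards:
    v_fast += alpha_fast * (r - v_fast)  # fast delta-rule update
    v_slow += alpha_slow * (r - v_slow)  # slow delta-rule update
    fast_trace.append(v_fast)
    slow_trace.append(v_slow)

# During the devaluation block, the fast estimate collapses while the
# slow estimate stays near the long-term value.
print(f"fast value late in devaluation: {np.mean(fast_trace[110:120]):.2f}")
print(f"slow value late in devaluation: {np.mean(slow_trace[110:120]):.2f}")
```

In this toy setting the fast estimator corresponds conceptually to the flexible value coding reported for the head of the caudate nucleus, and the slow estimator to the stable value coding reported for its tail.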
Reference: Kim HF, Hikosaka O. Distinct basal ganglia circuits controlling behaviors guided by flexible and stable values. Neuron (2013) e-publication ahead of print. http://dx.doi.org/10.1016/j.neuron.2013.06.044