12/30/2012

Continuous semantic space mapped on human cerebral cortex


With recent advances in neuroimaging and data analysis methods, it has become possible to map even the neural basis of semantic concepts. Studies have shown that distinct object categories such as faces and outdoor scenes are differentially represented in the human brain. Moreover, one of the most profound observations in early reaction-time studies on the processing of semantic concepts (and categories) is that semantically similar words produce the largest priming effects, as if semantically similar concepts were represented close to one another in a “semantic space”, so that spreading activation would facilitate processing of concepts related to preceding ones. It has not, however, been empirically shown whether such a semantic space, in which semantically similar concepts are represented close to one another along a gradient or continuum, can be found on the human cortical surface.

In their recent study, Huth et al. (2012) presented healthy volunteers with feature films during 3-Tesla functional magnetic resonance imaging. They then derived a large number (altogether 1705) of object, action, and higher-category names based on the WordNet lexicon and labeled the movies so that a time course was obtained for the presence of each concept in the movies. These concept time courses were then regressed against the brain hemodynamic responses recorded during movie watching, and the results suggest that there indeed is a continuous semantic space on the human cortex. These results provide highly exciting novel information on how concepts are mapped in the human brain, and overall the study presents a new type of methodological approach that offers exciting possibilities for further studies on the neural basis of language, one of the most fundamental human cognitive functions.
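The core regression logic can be illustrated with a toy sketch in plain Python on simulated data. Note this is a deliberately minimal caricature, not the study’s actual pipeline: Huth et al. fit all 1705 WordNet-derived regressors jointly, with regularization, rather than one regressor at a time. Here a single hypothetical concept’s labeled time course (1 when the concept is on screen, 0 otherwise) serves as the regressor, and the fitted weight indicates how strongly a simulated voxel tracks that concept.

```python
import random

def ols_beta(x, y):
    """Least-squares slope of y on x (single regressor plus intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

random.seed(0)
# Hypothetical label time course: 1 whenever, say, a "dog" is on screen.
concept = [random.choice([0, 1]) for _ in range(200)]
# Simulated voxel response: tracks the concept with weight 2.0, plus noise.
voxel = [2.0 * c + random.gauss(0, 0.5) for c in concept]

beta = ols_beta(concept, voxel)  # recovered model weight, close to the true 2.0
```

In the study itself, the vector of fitted weights across all concepts characterizes a voxel’s semantic tuning, and comparing these vectors across cortex is what reveals the continuous semantic map.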

Reference: Huth AG, Nishimoto S, Vu AT, Gallant JL. A continuous semantic space describes the representation of thousands of object and action categories across the human brain. Neuron (2012) 76, 1210–1224. http://dx.doi.org/10.1016/j.neuron.2012.10.014

12/22/2012

Computational-model-derived visual and auditory saliency helps decode functional neuroimaging data obtained under naturalistic stimulation conditions


The use of naturalistic, ecologically valid stimuli, such as feature films, in neuroimaging studies is becoming one of the most exciting areas of cognitive neuroscience. Indeed, a vast body of knowledge about the neural basis of perceptual and cognitive functions has been acquired in experiments where highly controlled, artificial stimuli were repetitively presented to experimental subjects and the relevant cerebral responses were isolated by means of trial averaging. This knowledge, together with recent advances in neuroimaging and data analysis methods, has laid a strong foundation for efforts toward the use of complex, real-life-like stimuli such as movies.

To date, model-free analysis methods such as inter-subject correlation and independent component analysis have been successfully utilized in disclosing brain activity related to specific events in the movies. Model-based approaches such as the general linear model and multiple voxel pattern analyses have also been used, with stimulus features contained in the movie clips and subjective experiences of the experimental subjects serving as predictors and training data, respectively. However, biologically motivated computational models, built on quantitative knowledge obtained from studies using simple or artificial stimuli, have not to date been used to generate predictions about how the brain responds to naturalistic stimulation.

In their recent study, Bordier et al. (2012) derived visual saliency maps based on a combination of local discontinuities in intensity, color, orientation, motion, and flicker, and auditory saliency maps based on discontinuities in intensity, frequency contrast, (spectrotemporal) orientation, and temporal contrast. The saliency information was then used, together with the stimulus-feature time courses, as predictors in the analysis of functional magnetic resonance imaging data obtained from healthy volunteers as they watched manipulated (with color, motion, and sound switched on/off) and unmanipulated versions of an episode of the TV series “24”.

It was observed that while visual and auditory stimulus features per se predicted activity in visual and auditory cortical areas, visual saliency predicted hemodynamic responses in extrastriate visual areas and posterior parietal cortex, and auditory saliency predicted activity in the superior temporal cortex. Notably, data-driven independent component analyses, while revealing sensory network components, would not have provided similar knowledge about the contributions of sensory features vs. saliency to brain activity patterns.

These results are highly encouraging and pave the way for the use of biologically motivated computational models in forming predictors for the analysis of data obtained under naturalistic stimulation and task conditions. This approach significantly complements the previously used predictors (time courses of sensory stimulus features and subjective experiences), and adds to the rapidly growing kit of tools that one can use in neuroimaging studies where real-life-like naturalistic stimuli and tasks are used to probe the neural basis of human perceptual and cognitive functions.

Reference: Bordier C, Puja F, Macaluso E. Sensory processing during viewing of cinematographic material: Computational modeling and functional neuroimaging. Neuroimage (2012) e-publication ahead of print. http://dx.doi.org/10.1016/j.neuroimage.2012.11.031

12/15/2012

Simultaneous EEG and fMRI is possible even at ultra-high 9.4 Tesla field strength


Modern neuroimaging methods that enable measurement of brain function without opening the skull of the experimental subject are truly amazing. Multiple non-invasive neuroimaging methods are currently available, but each is limited in either spatial or temporal resolution. For instance, functional magnetic resonance imaging, while spatially accurate down to the millimeter scale, suffers from compromised temporal resolution. Conversely, electroencephalography is temporally highly accurate (~milliseconds), but due to the ill-posed electromagnetic inverse problem its spatial accuracy is rather limited. Computational methods exist for combining the complementary information provided by the different methods, but ideally the measurements should be conducted simultaneously. However, recording electroencephalography during functional magnetic resonance imaging has been highly challenging because the strong magnetic fields introduce artifacts into the recorded signal.

In their recent study, Neuner et al. (2012) extended their previous work to test whether electroencephalography can be reliably recorded at an ultra-high 9.4 Tesla field strength. Their results indicate that although artifacts due to cardiac activity (which induces slight movement of the subject and thus currents in the electrodes) increased in amplitude at 9.4 Tesla, it was still possible to measure meaningful and replicable electroencephalographic signals at this ultra-high field strength. The authors further demonstrate that independent component analysis is a useful method for separating artifacts from relevant electroencephalographic signals under these extremely challenging recording conditions. While these measurements were obtained under a static magnetic field, and the gradient switching that takes place during functional imaging does introduce additional artifacts, this demonstration is nonetheless promising, and there are ways to circumvent the disturbances caused by gradient switching, such as interleaved acquisition (see, for example, Bonmassar et al. 2002).
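As a toy illustration of one classic ballistocardiogram-removal strategy, the sketch below simulates average-template subtraction: epochs locked to (assumed-known) heartbeat times are averaged into an artifact template, which is then subtracted from every epoch. This is an illustrative simulation in plain Python with made-up signals, not the authors’ pipeline (they used independent component analysis), and real recordings have variable heartbeat intervals that complicate the bookkeeping.

```python
import math
import random

random.seed(1)
n, period = 1000, 100                        # samples; one heartbeat per 100 samples
beats = range(0, n, period)                  # assumed-known heartbeat (R-peak) times

neural = [math.sin(2 * math.pi * 0.013 * t) for t in range(n)]         # "brain" signal
bcg = [3.0 * math.exp(-((k - 20) / 8.0) ** 2) for k in range(period)]  # cardiac artifact shape
eeg = [neural[t] + bcg[t % period] + random.gauss(0, 0.1) for t in range(n)]

# Average the heartbeat-locked epochs into an artifact template, then subtract it.
template = [sum(eeg[b + k] for b in beats) / (n // period) for k in range(period)]
cleaned = [eeg[t] - template[t % period] for t in range(n)]

def rms_error(x):
    """Root-mean-square deviation from the true neural signal."""
    return math.sqrt(sum((xi - ni) ** 2 for xi, ni in zip(x, neural)) / n)
```

Because the neural signal is not phase-locked to the heartbeats, it averages out of the template, so subtraction removes the artifact while largely sparing the signal of interest.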

References

Neuner I, Warbrick T, Arrubla J, Felder J, Celik A, Reske M, Boers F, Shah NJ. EEG Acquisition in Ultra-High Static Magnetic Fields up to 9.4T. Neuroimage (2012), online publication ahead of print. http://dx.doi.org/10.1016/j.neuroimage.2012.11.064

Bonmassar G, Purdon PP, Jaaskelainen IP, Solo V, Brown EN, Belliveau JW. Motion and ballistocardiogram artifact removal for interleaved recording of EEG and ERP during MRI. Neuroimage 16:1127-1141, 2002. http://dx.doi.org/10.1006/nimg.2002.1125

12/07/2012

Enhanced synaptic plasticity four weeks after the birth of new neurons in the adult hippocampus supports memory


Even though the very first observations suggesting that birth of new neurons, neurogenesis, does take place in the adult brain were documented as early as the 1960s, it was not until the early 1990s that converging lines of evidence confirmed this to be true. The functional role of neurogenesis has been less clear, and while results have been reported suggesting that hippocampal neurogenesis supports memory and learning, the underlying synaptic mechanisms have remained to a large extent an open question.

In their recent study, Gu et al. (2012), by combining retroviral and optogenetic methods in adult mice, determined the time of birth of hippocampal dentate granule cells, tracked the timescale over which these cells formed functional synaptic connections, and tested whether reversible inactivation of these cells adversely impacted memory performance. It was observed that adult-born neurons formed functional synapses onto area CA3 neurons two weeks after birth and that these projections became stable at about four weeks after birth. Reversibly silencing these neurons at four weeks (but not two or eight weeks) after birth disrupted recall of a task that had been learned while the neurons were intact and part of the hippocampal circuitry, suggesting that there is a specific time window within which adult-born hippocampal neurons support memory.

These highly exciting results shed light on the synaptic mechanisms that take place during neurogenesis, and suggest that there is a specific time window during which newborn neurons become integrated into hippocampal neuronal circuits and support learning and subsequent recall of a task. The methods used by the authors constitute an astounding example of the possibilities that have become available for elucidating the functionality of neuronal networks at the level of individual neurons and synaptic connections.

Reference: Gu Y, Arruda-Carvalho M, Wang J, Janoschka SR, Josselyn SA, Frankland PW, Ge S. Optical controlling reveals time-dependent roles for adult-born dentate granule cells. Nature Neuroscience (2012) advance online publication. http://dx.doi.org/10.1038/nn.3260

11/30/2012

Stimulus-specific adaptation detects sound novelty in macaque A1


The question of how the auditory system automatically tracks unattended background sounds and detects novelty therein is one of the most intriguing in cognitive neuroscience. Electroencephalography and magnetoencephalography studies in humans have extensively documented associations between differential auditory cortex responses to sounds deviating from background stimulation (a.k.a. mismatch negativity responses) and the extent to which these deviations penetrate into the subjects’ awareness, as reflected by disruptions of ongoing task performance. The underlying neural mechanisms have, however, been an issue of controversy, with a number of studies suggesting that stimulus-specific adaptation could serve as a relatively simple, automatic neurophysiological mechanism for screening sound deviance, and other studies arguing that there must be other, more complex mechanisms at play.

In their recent study, Fishman and Steinschneider (2012) carefully examined the underlying neural mechanisms by recording macaque primary auditory cortex responses to frequently repeating “standard” sounds and infrequently occurring “deviant” sounds in a conventional oddball paradigm and various control conditions. Taken together, their results suggest that stimulus-specific adaptation underlies detection of deviations in ongoing auditory stimulation in the macaque primary auditory cortex. The authors caution that their recordings were confined to macaque A1, and thus other mechanisms could be at play in secondary auditory cortical areas, which should be investigated in future studies. These findings are highly significant in that they bridge the gap between human mismatch negativity research and animal models. Indeed, it is fascinating to think that a relatively simple adaptation mechanism could underlie the relatively complex perceptual-cognitive phenomena with which the mismatch negativity response has been associated, including auditory sensory memory and involuntary attention.
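The adaptation account can be caricatured in a few lines of code. In this hypothetical toy model (not the authors’ analysis, and with arbitrary parameter values), each frequency channel loses responsiveness when it is stimulated and recovers slowly, so rare deviants, arriving in a relatively fresh channel, evoke larger responses than frequent standards, with no explicit deviance detector anywhere in the model.

```python
def run_oddball(tones, depletion=0.5, recovery=0.2):
    """Response = remaining responsiveness of the tone's own frequency channel."""
    adapt = {"standard": 1.0, "deviant": 1.0}
    responses = []
    for tone in tones:
        responses.append(adapt[tone])
        adapt[tone] *= depletion                       # stimulated channel adapts
        for ch in adapt:
            adapt[ch] += recovery * (1.0 - adapt[ch])  # every channel recovers slowly
    return responses

tones = (["standard"] * 9 + ["deviant"]) * 10          # classic 90/10 oddball sequence
resp = run_oddball(tones)
mean_std = sum(r for t, r in zip(tones, resp) if t == "standard") / tones.count("standard")
mean_dev = sum(r for t, r in zip(tones, resp) if t == "deviant") / tones.count("deviant")
```

The enlarged deviant response falls out of channel-specific depletion alone, which is the essence of the stimulus-specific adaptation explanation; the control conditions in the actual study were designed to distinguish exactly this from "true" deviance detection.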

Reference: Fishman YI, Steinschneider M. Searching for the mismatch negativity in primary auditory cortex of the awake monkey: deviance detection or stimulus specific adaptation? Journal of Neuroscience (2012) 32: 15747–15758. http://dx.doi.org/10.1523/JNEUROSCI.2835-12.2012


11/24/2012

Dissociation between learning from one’s own mistakes and from errors made by others in Parkinson’s patients


In everyday life, one constantly faces situations requiring choices. These decisions are guided both by the consequences, rewards and punishments, of one’s own choices in similar situations in the past, and by having observed others’ choices being rewarded or punished. It has, however, been poorly understood whether learning by observing others and learning from one’s own mistakes rely on the same or different underlying neural mechanisms.

In Parkinson’s disease there is a bias to learn more from negative than positive feedback, which has been presumed to be due to the loss of striatal dopaminergic function in the disease. It has not, however, been investigated whether there is a similar bias in learning by observing the successful and erroneous choices of others. A dissociation between these two types of learning in Parkinson’s disease would suggest that the dopaminergic system plays a differential role in trial-and-error vs. observational learning.

In their recent study, Kobza et al. (2012) investigated this highly interesting question in a total of 19 Parkinson’s patients and 40 healthy controls, divided into separate groups that were exposed to highly similar trial-and-error and observational learning tasks, respectively. The results showed that while Parkinson’s patients (who were off medication) exhibited the typical bias to learn better from negative than positive feedback when actively performing the task themselves, those patients who learned by observation showed a similar pattern of results as the healthy control subjects, who learned better from positive than negative feedback.

These findings indicate a dissociation in the involvement of phasic dopamine activity in trial-and-error vs. observational learning. Dopaminergic activity, while clearly implicated in learning from positive and negative feedback on one’s own active behavioral choices, would seem to be little involved in observational learning. These findings clearly justify further studies into the neural mechanisms underlying the ability to learn by observation.

Reference: Kobza S, Ferrea S, Schnitzler A, Pollok B, Südmeyer M, Bellebaum C. Dissociation between active and observational learning from positive and negative feedback in Parkinsonism. PLoS ONE (2012) 7: e50250. http://dx.doi.org/10.1371/journal.pone.0050250

11/16/2012

Recent studies shed light on how the brain processes temporally distributed information


The vast majority of cognitive neuroscience research has focused on mapping brain responses to static stimuli during highly simplified experimental paradigms, and indeed such studies have provided valuable information about the hierarchical processing steps that take place in the human brain, with sensory cortical areas processing relatively simple stimulus features and higher-order association cortical areas processing more complex aspects of perceptual objects. This is, however, only half of the story, as stimuli and events in the real world almost always take place in a temporal context. For example, the meaning of a single word is greatly shaped by the preceding words, gestures, and social interactions. Thus, the brain needs to have mechanisms that accumulate information over longer timescales in order to make sense of things that are unfolding across time.

Two distinct studies in the most recent issue of the prestigious journal Neuron have addressed the question of where in the brain temporally distributed information is processed, using very interesting experimental setups. In the first study, Yaron et al. (2012) presented anesthetized rats with auditory stimulation that either contained periodicity or was completely random. The authors hypothesized that if auditory cortical neurons code periodicity information, responses to sounds presented in the periodicity-containing sequences should be smaller than responses to the same sounds presented randomly. Indeed, their results showed this to be the case, and the authors conclude that neurons in the auditory cortex are sensitive to the detailed structure of sound sequences over timescales as long as minutes. In the second study, Honey et al. (2012) measured electrocorticography in human subjects while they watched intact and temporally scrambled movies. By inspecting the degree of synchrony of neuronal activity across cortical locations in the intact vs. scrambled movie conditions, the authors noted that while sensory cortical areas synchronized over very short timescales, within higher-order regions slow power fluctuations were more reliable for the intact than the scrambled movie, suggesting that these regions accumulate information over longer time periods.
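The logic of the scrambling comparison can be sketched with simulated data. In this toy model (not Honey et al.’s analysis, and with all parameters made up), a "region" that integrates the stimulus over a long window is reliable across simulated subjects only when the slow structure of the stimulus is intact, whereas a "sensory" region tracking the momentary stimulus is about equally reliable in both conditions.

```python
import math
import random

random.seed(2)
T, SEG, WIN = 600, 20, 100   # timepoints, scramble-segment length, integration window

def corr(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den

def moving_avg(x, w):
    out = []
    for t in range(len(x)):
        window = x[max(0, t - w + 1):t + 1]
        out.append(sum(window) / len(window))
    return out

def scramble(x, seg):
    """Shuffle fixed-length segments: local content kept, slow structure destroyed."""
    chunks = [x[i:i + seg] for i in range(0, len(x), seg)]
    random.shuffle(chunks)
    return [v for c in chunks for v in c]

def subject(stim, w):
    """A 'region' driven by the w-sample running history of the stimulus, plus noise."""
    drive = moving_avg(stim, w) if w > 1 else stim
    return [d + random.gauss(0, 0.3) for d in drive]

slow = [math.sin(2 * math.pi * t / T) for t in range(T)]   # slow 'narrative' structure

def reliability(make_stim, w, trials=20):
    """Average inter-subject correlation of two simulated subjects' responses."""
    total = 0.0
    for _ in range(trials):
        s = make_stim()
        total += corr(subject(s, w), subject(s, w))
    return total / trials

intact_ho = reliability(lambda: slow, WIN)                  # long-window region, intact
scram_ho = reliability(lambda: scramble(slow, SEG), WIN)    # long-window region, scrambled
intact_fast = reliability(lambda: slow, 1)                  # 'sensory' region, intact
scram_fast = reliability(lambda: scramble(slow, SEG), 1)    # 'sensory' region, scrambled
```

Segment scrambling preserves the momentary content but destroys the long-timescale structure, which is exactly what the long-window "region" depends on; hence its inter-subject reliability drops while the fast region is barely affected.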

These studies provide recent examples of a highly exciting and relatively new area of research focusing on how the brain accumulates information over longer timescales to make sense of words, sentences, melodies, and patterns of social interaction. The finding that the auditory cortex of rats can track periodicity over timescales of minutes is truly significant and is bound to inspire further research; on the other hand, the experimental setup of using scrambled vs. intact movies to investigate temporal receptive windows in humans with recordings of brain electrical activity provides a significant methodological step forward for further research in humans.

References:

Honey CJ, Thesen T, Donner TH, Silbert LJ, Carlson CE, Devinsky O, Doyle WK, Rubin N, Heeger DJ, Hasson U. Slow cortical dynamics and the accumulation of information over long timescales. Neuron (2012) 76: 423–434. http://dx.doi.org/10.1016/j.neuron.2012.08.011

Yaron A, Hershenhoren I, Nelken I. Sensitivity to complex statistical regularities in rat auditory cortex. Neuron (2012) 76: 603-615. http://dx.doi.org/10.1016/j.neuron.2012.08.025

11/10/2012

Inferior frontal gyrus hemodynamic activity synchronizes between subjects engaging in face-to-face communication


Even though humans have evolved to communicate face-to-face, life in modern societies involves communication via email and phone to an increasingly large extent, and face-to-face communication is becoming less frequent. In contrast to these other forms of communication, face-to-face communication is characterized by rich audiovisual stimulation and non-verbal signals such as facial expressions and gestures that also provide cues for turn-taking during conversations. It has been an open question, however, whether there are neurocognitive mechanisms that are specifically activated during face-to-face (and not other forms of) interpersonal communication.

In their recent study, Jiang et al. (2012) recorded hemodynamic brain activity using near-infrared spectroscopy simultaneously from 10 pairs of interacting subjects to study whether there are brain responses that are elicited in synchrony in the interacting subjects' brains only during face-to-face communication. More specifically, their subjects engaged in face-to-face dialogue, face-to-face monologue, back-to-back dialogue, and back-to-back monologue while their brain hemodynamic activity was recorded. The authors observed synchronization of hemodynamic activity in the inferior frontal gyrus among the conversing subject pairs that was specific to face-to-face conversation. A further analysis of the dynamics of inferior frontal gyrus synchronization suggested that the activity was due to face-to-face interactions such as turn-taking behavior rather than mere verbal signal transmission.

These findings suggest that face-to-face communication involves interpersonal brain activity patterns that other types of communication lack. These novel findings are also highly interesting because the simultaneous recording of brain activity from two interacting subjects has become a very exciting area of research (often referred to as two-person neuroscience or hyperscanning), and Jiang et al. (2012) demonstrate that this approach can indeed be used to capture neuroscientifically interesting phenomena that take place specifically during two-person interactions.

Reference: Jiang J, Dai B, Peng D, Zhu C, Liu L, Lu C. Neural synchronization during face-to-face communication. Journal of Neuroscience (2012) 32: 16064–16069. http://dx.doi.org/10.1523/JNEUROSCI.2926-12.2012

11/04/2012

Flow proneness correlates with dorsal striatum dopamine D2 receptor binding potential


Being in a “flow state” refers to a state of enthusiasm with high but subjectively effortless attention, reduced self-awareness, and a sense of control during challenging tasks that match the competence level of the person. In many activities, such as competitive sports and work, a flow state is often sought to improve performance. The neural basis of flow has been a topic of speculation; however, several lines of research suggest links between the dopamine system and the flow state. For example, higher availability of striatal dopamine D2 receptors has been linked with decreased impulsivity, and poor impulse control has, in turn, been suggested to make it difficult to enter and maintain a flow state.

In their recent study, de Manzano et al. (2012) measured how prone a group of 25 healthy volunteers was to flow experiences at work, in household maintenance, and during leisure time using the so-called Swedish Flow Proneness Questionnaire. This questionnaire contains several questions (for example, “When you do something at work, how often does it happen that you feel completely concentrated?”) that subjects score on a five-point Likert scale from “never” to “every day or almost every day”. One year prior to the administration of this questionnaire, the same subjects had undergone positron emission tomography measurement of striatal dopamine D2 receptor binding potential with the radioligand [11C]raclopride.

The authors observed a positive correlation between striatal D2 receptor binding potential and Flow Proneness Questionnaire scores. Further analyses focusing on subregions of the striatum showed that this correlation specifically involved the dorsal striatum (i.e., the caudate nucleus and putamen). These findings are highly interesting and provide the first demonstration that the degree to which a person is prone to experience flow states correlates with inter-individual differences in brain biochemistry. Based on these findings, the authors suggest that flow proneness might be related to higher impulse control, due to higher dopamine D2 receptor binding potential, making it easier to enter and maintain flow states. Overall, this study provides a highly interesting and important pioneering finding on the neural basis of flow states that clearly warrants further research.

Reference: de Manzano Ö, Cervenka S, Jucaite A, Hellenäs O, Farde L, Ullén F. Individual differences in the proneness to have flow experiences are linked to dopamine D2-receptor availability in the dorsal striatum. Neuroimage (2012) e-publication ahead of print. http://dx.doi.org/10.1016/j.neuroimage.2012.10.072

10/28/2012

How do I get my statistics right? Solid advice to younger cognitive neuroscientists

Beginning (and also more advanced) cognitive neuroscientists often face the problem that neuroimaging is highly costly (up to some thousands of dollars/euros per subject), and thus the number of subjects that one can measure ends up being rather modest, typically from a few to a few tens. Furthermore, modern neuroimaging methods tend to produce a wealth of data per subject, and the poor neuroscientist quickly runs into the problem of having to decide whether and how to correct for multiple statistical comparisons. Adding to the confusion, a cognitive neuroscientist can easily find functional magnetic resonance imaging studies published in notable journals such as Science with as few as eight or even five subjects, and yet run into arguments that his/her “too few” 15 or 20 subjects constitute a severe problem resulting in rejection of the study from a lesser journal.

In his paper, written in an unusual but highly entertaining ironic tone, Karl Friston (2012) presents the most common lines of criticism of statistical analyses of neuroimaging data, provides advice and insights on how to design one’s experiment so that it is statistically sound, and explains how to counter the most typical erroneous criticisms from reviewers. I see this as highly important given the abundance of misunderstandings about statistics that one often runs into when trying to publish findings in neuroimaging (and other fields of science).

It is important to note that none of the hard work and exciting findings that one obtains really contributes to scientific knowledge until the results have been published in a scientific journal. The peer-review process is inarguably a cornerstone of science, and good reviews often help improve the scientific quality of one’s publications. On the other hand, delays in publication, or rejection of a manuscript (which results in an even larger delay in getting published), based on misunderstandings of statistics, especially on the part of an expert reviewer, are highly unfortunate outcomes that slow down the progress of science.

Reference: Friston K. Ten ironic rules for non-statistical reviewers. NeuroImage (2012) 61: 1300–1310. http://dx.doi.org/10.1016/j.neuroimage.2012.04.018

10/22/2012

Midbrain dopamine system triggers shifting of context representations in dorsolateral prefrontal cortex


Shifting quickly and flexibly between the different goals that one is pursuing is one of the most amazing human cognitive skills. For example, upon running into a friend in the midst of shopping for groceries for one’s family at a local store, one is able to recall what one had intended to tell him/her. After several minutes of lively discussion, getting back to the goal of shopping for groceries is flexible and effortless. At the same time as there is this amazing flexibility, one is perseverant and not distracted from pursuing one’s goals by stimuli and events that are irrelevant to the behavioral goals at hand. It has been proposed that the midbrain dopaminergic system and prefrontal cortical areas underlie this capability to shift between goal-directed behaviors, but empirical demonstrations have not been unequivocal.

In their recent study, D’Ardenne et al. (2012) combined transcranial magnetic stimulation and functional magnetic resonance imaging to study the interplay between phasic signals produced by the brainstem dopaminergic system and context representations (a.k.a. “cognitive set”) maintained by the prefrontal cortex. The authors observed that transcranial magnetic stimulation, especially of the right dorsolateral prefrontal cortex, timed around the presentation of the task context, impaired context-dependent responses more than context-independent responses. Functional magnetic resonance imaging of the ventral tegmental area and substantia nigra further disclosed phasic signals that co-occurred with context-shifting events and correlated with phasic signals observed in the dorsolateral prefrontal cortex.

Together, these experiments provide robust support for a model in which phasic (presumably dopaminergic) signals produced by the ventral tegmental area and substantia nigra trigger shifting of context representations in the (especially right) dorsolateral prefrontal cortex. This study also provides a nice example of how transcranial magnetic stimulation can be combined with functional magnetic resonance imaging and behavioral task designs to gain insight into the neural basis of human goal-directed behavior.

Reference: D’Ardenne K, Eshel N, Luka J, Lenartowicz A, Nystrom LE, Cohen JD. Role of prefrontal cortex and the midbrain dopamine system in working memory updating. Proc Natl Acad Sci USA (2012) e-publication ahead of print. http://dx.doi.org/10.1073/pnas.1116727109

10/14/2012

Enhanced stress reactivity in women after exposure to negative news


Given that stress-related disorders constitute one of the most severe societal and medical problems in modern societies, investigation of the predisposing factors is more than well justified. One potential source of stress is the constant and abundant flow of negative news via the media, including 24-hour TV coverage, the internet, and recently also constant access to negative newsfeeds through mobile devices such as tablets and smartphones. It has been relatively little explored, however, whether exposure to negative news via the mass media elevates the secretion of stress hormones such as cortisol in healthy individuals.

In their recent study, Marin et al. (2012) randomly assigned thirty women and thirty men to groups that read either twenty-four neutral or twenty-four negative news excerpts for a total of 10 minutes. After that, they were all administered the Trier Social Stress Test. Salivary cortisol samples were collected at 10-minute intervals throughout the experimental procedure. Free recall of the news was also tested one day after the experiment. Even though reading the news per se failed to change cortisol levels, cortisol levels were significantly elevated by the Trier Social Stress Test in those women who had first been exposed to negative news. Women also remembered the negative news excerpts better than men on the following day.

These findings identify exposure to negative news as a potential factor that might predispose individuals to elevated stress (especially women, even though similar patterns that failed to reach statistical significance were also noted in men) and thus, in part, increase the risk of developing stress-related disorders. The findings show that exposure to negative news modulates women’s stress reactivity to a subsequent psychosocial stressor and enhances their memory for the negative news. These results point out the importance of better understanding individual and societal reactions to negative information, which modern mass media bring to people more readily and abundantly than ever before in the history of our species.

Reference: Marin M-F, Morin-Major J-K, Schramek TE, Baupre A, Perna A, Juster R-P, Lupien SJ. There is no news like bad news: women are more remembering and stress reactive after reading real negative news than men. PLoS ONE (2012) 7: e47189. http://dx.doi.org/10.1371/journal.pone.0047189

10/05/2012

Brain-state based triggering of target sounds as a novel paradigm in selective attention research


The neural basis of selective attention (i.e., how one can select, out of the massive amount of stimuli that constantly bombard one’s senses, the currently relevant ones for further processing) is one of the most interesting research questions in cognitive neuroscience. The importance of research on selective attention is further enhanced by the central role that selective attention deficits play in a number of neurological and psychiatric disorders. Dichotic listening is one of the most often-used task paradigms in selective attention research: subjects are instructed to attend to sounds presented to one ear and to ignore sounds presented to the opposite ear. Average neural responses to attended vs. ignored sounds are then compared to disclose neural correlates of selective attention. Fluctuation of the focus of attention over the course of the experiment has been one potential shortcoming of this otherwise excellent paradigm.

In their recent study, Andermann et al. (2012) detected the attentional states of experimental subjects online and triggered the presentation of near-threshold target stimuli based on the presence of a correct vs. incorrect attentional state. Specifically, electroencephalogram epochs time-locked to stimulus onsets in the attended vs. ignored sound streams were first recorded during a dichotic listening task. These data were then used to teach a brain-computer interface algorithm to detect high vs. low selective-attention states, which triggered the presentation of the near-threshold targets in the experiment proper. Notably, when the near-threshold target stimuli were presented during an estimated correct (i.e., attention towards the designated ear) rather than incorrect (i.e., attention drifting away from the designated ear) attentional state, the target sounds were detected at a higher rate. Curiously, a correct attentional state also resulted in a higher number of “illusory percepts” (i.e., a target was detected when none was present). It was also observed that the attentional states of the subjects fluctuated considerably over the course of the experiment.
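The core logic of such brain-state-triggered stimulus delivery, train a classifier on epochs from the attended vs. ignored streams, then present targets only when the estimated attention state is high, can be sketched in a few lines of Python. The sketch below uses purely synthetic data and a deliberately simple mean-difference classifier; the feature extraction, classifier, and thresholds used by Andermann et al. were of course more sophisticated, so all names and parameter values here are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

def epoch_features(epochs):
    """Collapse each EEG epoch (channels x samples) to mean amplitude per channel."""
    return epochs.mean(axis=2)

# --- Training phase: synthetic epochs recorded under attended vs. ignored streams ---
n_epochs, n_channels, n_samples = 200, 8, 100
attended = rng.normal(1.0, 1.0, (n_epochs, n_channels, n_samples))
ignored = rng.normal(0.0, 1.0, (n_epochs, n_channels, n_samples))

X = np.vstack([epoch_features(attended), epoch_features(ignored)])
y = np.hstack([np.ones(n_epochs), np.zeros(n_epochs)])

# Minimal linear classifier: project onto the difference of the class means,
# and place the decision threshold at the midpoint of the class projections
w = X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0)
threshold = ((X[y == 1] @ w).mean() + (X[y == 0] @ w).mean()) / 2

def attention_state(epoch):
    """Classify a new epoch as high (True) or low (False) attention."""
    return epoch_features(epoch[None])[0] @ w > threshold

# --- Experiment proper: present a near-threshold target only in the high state ---
new_epoch = rng.normal(1.0, 1.0, (n_channels, n_samples))
if attention_state(new_epoch):
    print("trigger near-threshold target")
```

The point of the design is exactly this gating: the stimulus stream adapts to the subject's ongoing brain state instead of being delivered on a fixed schedule.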

The strength of this study is that it provides a new type of research paradigm for the investigation of the neural basis of selective attention. More generally, the authors suggest that the brain state-triggered stimulus delivery will enable efficient, statistically tractable studies of rare patterns of ongoing activity in single neurons and distributed neural circuits, and their influence on subsequent behavioral and neural responses.

Reference: Andermann ML, Kauramaki J, Palomaki T, Moore CI, Hari R, Jaaskelainen IP, Sams M. Brain state-triggered stimulus delivery: An efficient tool for probing ongoing brain activity. Open Journal of Neuroscience (2012) 2-5. 

9/22/2012

Novelty seeking and harm avoidance personality traits correlate with cerebellar volume in healthy volunteers


For a long time, the role of the cerebellum was widely thought to be restricted to motor coordination and the control of well-learned motor sequences. Recently, however, increasing evidence has pointed to other roles that the cerebellum possibly plays. One line of investigation has implicated the cerebellum in “purely cognitive” functions such as attention (as recently highlighted also in this blog) and the processing of musical features (Alluri et al. 2012). Furthermore, tentative findings of emotional disturbances in patients with cerebellar lesions, especially those involving the posterior lobe, have suggested an even more extensive role for the cerebellum in supporting human cognitive-affective functions (Schmahmann & Sherman 1998).

In their recent study, Laricchiuta et al. (2012) measured the cerebellar volumes of an extensive sample (N=125) of healthy volunteers using magnetic resonance imaging. The same subjects were assessed with a comprehensive personality inventory (the Temperament and Character Inventory developed by Cloninger). Temperamental traits (that are assumed to be relatively stable over time and to a large extent genetically/biologically determined) were derived from the inventory, including novelty seeking, harm avoidance, reward dependence, and persistence. Notably, the authors observed that novelty-seeking scores were positively correlated, and harm-avoidance scores were negatively correlated, with cerebellar volumes.
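The central statistical step, correlating a regional brain volume with trait scores across subjects, is straightforward to illustrate. The Python sketch below uses synthetic data constructed to mimic the direction of the reported effects; all numbers are invented, and the actual study naturally involved anatomically segmented cerebellar volumes and properly controlled analyses.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 125  # sample size matching the study

# Synthetic data: cerebellar volume (cm^3) built with a positive novelty-seeking
# and a negative harm-avoidance component, plus noise (illustrative only)
novelty_seeking = rng.normal(50, 10, n)
harm_avoidance = rng.normal(50, 10, n)
volume = 140 + 0.4 * novelty_seeking - 0.4 * harm_avoidance + rng.normal(0, 5, n)

def pearson_r(x, y):
    """Pearson correlation coefficient between two 1-D arrays."""
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

print(f"volume vs. novelty seeking: r = {pearson_r(volume, novelty_seeking):+.2f}")
print(f"volume vs. harm avoidance:  r = {pearson_r(volume, harm_avoidance):+.2f}")
```

On data generated this way, the first correlation comes out positive and the second negative, mirroring the pattern Laricchiuta et al. report.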

These highly interesting findings add to the growing pool of evidence indicating that the role of the cerebellum is much more extensive than was long assumed in cognitive neuroscience and neurology. While the involvement of the cerebellum in cognitive processing has been widely demonstrated, studies on its role in affective regulation and motivational, goal-directed behavior, functions that are closely associated with the novelty seeking and harm avoidance personality traits, have been scarcer. The findings of Laricchiuta et al. (2012) provide an important demonstration of an association between the novelty seeking and harm avoidance personality features and cerebellar structures, and pave the way for further studies on the role of the cerebellum in affective-cognitive regulation of behavior.

References:

Alluri V, Toiviainen P, Jaaskelainen IP, Glerean E, Sams M, Brattico E. Large­scale brain networks emerge from dynamic processing of musical timbre, key and rhythm. Neuroimage (2012) 59: 3677-3689. http://dx.doi.org/10.1016/j.neuroimage.2011.11.019

Laricchiuta D, Petrosini L, Piras F, Macci E, Cutuli D, Chiapponi C, Cerasa A, Picerni E, Caltagirone C, Girardi P, Tamorri SF, Spalletta G. Linking novelty seeking and harm avoidance personality traits to cerebellar volumes. Human Brain Mapping (2012) e-publication ahead of print. http://dx.doi.org/10.1002/hbm.22174

Schmahmann JD, Sherman JC. The cerebellar cognitive affective syndrome. Brain (1998) 121: 561–579. http://dx.doi.org/10.1093/brain/121.4.561

9/17/2012

Acclimation to hypoxia protects brain tissue against the effects of a stroke


Stroke refers to ischemic conditions in which blood flow to brain tissue is significantly reduced (typically when a clot blocks one of the arteries), causing a lack of oxygen that results in destruction of brain tissue. Stroke currently constitutes one of the most severe medical problems, with one-third of deaths in Western societies being caused by stroke. Measures have targeted prevention and reduction of the number of strokes (e.g., providing public information on factors that elevate the risk of stroke and intervening medically when high blood pressure is detected), development of treatments that reduce the brain damage caused by strokes (e.g., administration of clot-dissolving agents within three hours of the occurrence of a stroke), and rehabilitation methods. In addition to these, there is also research on ways to mitigate the damaging effects of stroke/ischemia.

In their recent study, Dunn et al. (2012) tested whether exposing experimental animals to hypoxia (i.e., keeping the animals in chambers at half the normal atmospheric pressure) over a longer period of time (in their study, three weeks) reduces the effects of an experimentally induced stroke (occlusion of the middle cerebral artery for 60 minutes). The authors note that hypoxia acclimation has previously been shown to induce changes that improve the capacity of tissue to survive low-oxygen conditions, including an increased capacity to supply oxygen (i.e., a higher proportion of red blood cells and higher vascular density), more robust removal of end-products, and anaerobic energy production. In accordance with previous findings, they observed increased oxygen-carrying capacity (increased hematocrit, capillary density, and tissue oxygen content) in the animals that had been acclimated to hypoxic conditions. Notably, the acclimated animals showed an over 50% reduction in the extent of the lesion caused by the experimentally induced stroke, a reduced inflammatory response, and less severe behaviorally measured dysfunction than control animals.

The authors suggest that increased oxygen levels and increased capillary density explain the beneficial effects of hypoxia acclimation, and point to possibilities for the development of targeted treatments (which would increase stroke resistance via mechanisms similar to hypoxia acclimation), especially for high-risk patients such as those who have already suffered a transient ischemic attack, which is a severe warning signal. These results are highly interesting and pave the way for clinical research on additional measures to reduce the devastating consequences of stroke.

Reference: Dunn JF, Wu Y, Zhao Z, Srinivasan S, Natah SS. Training the brain to survive stroke. PLoS ONE (2012) 7: e45108. http://dx.doi.org/10.1371/journal.pone.0045108

9/09/2012

Combat stress produces partially persistent changes to midbrain-prefrontal cortical circuitry and cognition


While acute short-lasting stress can be beneficial, such as when pushing to meet an important and potentially rewarding deadline at work, prolonged strong stress is known to cause cognitive impairments such as memory deficits. The precise nature and loci of anatomical and functional alterations due to chronic stress, and the extent to which they are (ir)reversible, constitutes an important topic in cognitive neuroscience that is being increasingly investigated.

In their recent follow-up study, van Wingen et al. (2012) investigated 33 healthy soldiers, using neuropsychological tests as well as functional and diffusion magnetic resonance imaging, first before a four-month combat-zone deployment to Afghanistan as part of the North Atlantic Treaty Organization International Security Assistance Force peacekeeping operation. Follow-up studies were then conducted 1.5 months (short term) and 1.6 years (long term) after the deployment. As a control group, 26 healthy soldiers who were not deployed were investigated at similar time intervals. During deployment, the combat group was exposed to typical combat-zone stressors, such as armed combat, combat patrols, exposure to enemy fire, and the risk of exposure to improvised explosive devices.

At the 1.5-month short-term follow-up, midbrain activity was reduced in the combat group, including in the area containing the substantia nigra. Functional connectivity between the midbrain area and the lateral prefrontal cortex was also weakened. Combat stress further reduced fractional anisotropy and increased mean diffusivity in the midbrain areas, suggesting a weakening of anatomical connectivity. Notably, these measures correlated with reduced performance in a sustained-attention task. At the long-term 1.6-year follow-up, the other deficits had normalized, but the reduced functional connectivity between the midbrain and prefrontal cortical areas persisted. The authors conclude that these persistent changes may increase vulnerability to subsequent stressors and promote the later development of difficulties with cognitive, social, and occupational functioning. More generally, these findings also provide important information about neurocognitive deficits that may develop when an individual is exposed to severe chronic stress in other types of contexts.

Reference: van Wingen GA, Geuze E, Caan MWA, Kozicz T, Olabarriaga SD, Denys D, Vermetten E, Fernández G. Persistent and reversible consequences of combat stress on the mesofrontal circuit and cognition. Proc Natl Acad Sci USA (2012) advance online publication. http://dx.doi.org/10.1073/pnas.1206330109

8/31/2012

Medial frontal cortical neurons code errors made by others


Learning from the errors of others is one of the most fundamental cognitive abilities, as so well captured by the phrase “The wise learn by the mistakes of others, fools by their own”. Neuroimaging evidence points to the medial frontal cortical areas as a candidate region that makes it possible to learn from the mistakes of others. However, a number of questions have remained open, including the precise loci of observed-error processing, whether or not self-generated errors and those made by others are processed by the same neurons, and how the neural mechanisms of observed-error processing shape one’s own behavior.

In their recent study, Yoshida et al. (2012) used neurophysiological single-cell recordings to investigate how medial frontal cortical neurons fire when macaque monkeys observe the errors of another monkey. Two monkeys facing one another alternated in a choice task and, indicating that the monkeys were learning from each other’s mistakes, correctly guided their own choices in most trials following the other monkey’s no-reward trials. Cells firing during observation of another’s errors were found in two medial frontal regions (convexity and sulcus), and about half of these neurons fired only when observing another’s errors (and not also during errors committed by the monkey itself). The authors further suggested that the convexity subregion is more specifically involved in the detection of others’ errors, whereas the sulcus subregion is more important for guiding one’s behavior based on the errors committed by others.

These highly interesting findings shed light on the neural mechanisms underlying observational learning, demonstrating that there are neurons that respond specifically to mistakes made by others and that fine regional specialization supports error detection and the shaping of one’s subsequent behavioral choices. This study also presents a very nice example of how specific aspects of a behavioral task can be isolated and associated with specific aspects of neural activity.

Reference: Yoshida K, Saito N, Iriki A, Isoda M. Social error monitoring in macaque frontal cortex. Nature Neuroscience (2012) advance online publication. http://dx.doi.org/10.1038/nn.3180

8/24/2012

Superior temporal sulcus as the hub for distributed brain networks that support social perception


Over the last decade, there has been a significant surge of interest in the neural underpinnings of social cognition, and several candidate brain structures have indeed been implicated based on the results of such studies. While the vast majority of these studies have utilized impoverished stimuli and task paradigms (e.g., contrasting two perceptual categories such as faces vs. bodies), recently there have also been studies that have utilized naturalistic stimuli, such as movie clips, to investigate the neural basis of social cognition.

In their recent study, Lahnakoski et al. (2012) presented healthy volunteers, during 3-Tesla functional magnetic resonance imaging, with a collection of short movie clips containing both social features (i.e., faces, human bodies, biological motion, goal-oriented actions, emotions, social interactions, pain, and speech) and non-social features (i.e., places, objects, rigid motion, people not engaged in social interaction, non-goal-oriented action, and non-human sounds). Brain activity patterns were then modeled based on the time courses of occurrence of these social and non-social features.
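This modeling approach, regressing feature occurrence time courses (convolved with a hemodynamic response) against voxel-wise BOLD signals, can be illustrated with a minimal general-linear-model sketch in Python. Everything below (the feature names, HRF shape, TR, and noise level) is a simplified synthetic stand-in, not the authors' actual analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
n_vols, tr = 300, 2.0  # 300 fMRI volumes, TR = 2 s (assumed values)

# Binary time courses marking when each feature is present in the movie
features = {"faces": rng.random(n_vols) < 0.3,
            "speech": rng.random(n_vols) < 0.4,
            "objects": rng.random(n_vols) < 0.5}

def hrf(t):
    """Very rough gamma-like bump standing in for the hemodynamic response."""
    return (t ** 5) * np.exp(-t) / 120.0

kernel = hrf(np.arange(0, 20, tr))

# Design matrix: each feature convolved with the HRF, plus an intercept column
X = np.column_stack([np.convolve(f.astype(float), kernel)[:n_vols]
                     for f in features.values()] + [np.ones(n_vols)])

# Synthetic voxel driven by "faces" and "speech" but not "objects"
betas_true = np.array([1.5, 0.8, 0.0, 10.0])
bold = X @ betas_true + rng.normal(0, 0.2, n_vols)

# Ordinary least squares recovers the per-feature weights for this voxel
betas_hat, *_ = np.linalg.lstsq(X, bold, rcond=None)
for name, b in zip(features, betas_hat):
    print(f"{name}: beta = {b:+.2f}")
```

Repeating this fit for every voxel yields a map of which features each patch of cortex responds to, which is the logic behind identifying a region, such as the posterior superior temporal sulcus, that loads on all social features but no non-social ones.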

Interestingly, the authors observed that the posterior superior temporal sulcus responded to all social features but not to any of the non-social features. Furthermore, there were four extended networks that participated in processing of specific social signals: 1) a fronto-temporal network responding to multiple social categories, 2) a fronto-parietal network preferentially activated by bodies, motion, and pain, 3) a temporal-lobe-amygdala network responding to faces, social interaction, and speech, and, finally, 4) a fronto-insular network that activated during perception of emotions, social interactions, and speech. Taken together, these results disclose the posterior superior temporal sulcus as a central hub for distributed brain networks that support social perception, and add to the accumulating pool of evidence indicating that the utilization of naturalistic stimuli in fMRI studies provides an effective tool for studying the neural basis of social cognition.

Reference: Lahnakoski JM, Glerean E, Salmi J, Jääskeläinen IP, Sams M, Hari R, Nummenmaa L. Naturalistic fMRI mapping reveals superior temporal sulcus as the hub for distributed brain network for social perception. Frontiers in Human Neuroscience (2012) 6: 233. http://dx.doi.org/10.3389/fnhum.2012.00233

8/17/2012

Cortical-cerebellar loops during attention to visual motion


For a long time, the role of the human cerebellum was thought to be limited to motor functions such as the coordination of well-learned motor sequences (e.g., an experienced golf player swinging a golf club). It is, however, increasingly recognized in the neuroscience community that specific parts of the cerebellum play a much wider role in human cognitive functions than previously assumed.

In a recent study by Kellermann et al. (2012), the involvement of cerebellar-cortical loops in a visual attention-to-motion task was investigated using functional magnetic resonance imaging. Healthy volunteers were shown stationary vs. moving grating stimuli; in the test condition, the subjects were instructed to attend to slight changes in the speed at which the bars were moving. In reality there were no changes in movement velocity, and thus the only factor that was experimentally manipulated in the test vs. control conditions was the level of attention to visual motion.

The authors observed increased effective connectivity between the cerebellum and neocortical dorsal visual stream structures with an increasing level of attention to visual motion. Further, under attention, functional connectivity from the cerebellum to visual area V5 (which processes visual motion) was enhanced, whereas connectivity from V5 to the posterior parietal cortex (a higher-order attention-directing structure in the brain) was attenuated.

The authors interpret these findings as indicating that, under conditions where visual motion is highly predictable (i.e., when internal models can strongly guide perception), the posterior parietal cortex feeds top-down predictions to the hierarchically lower motion-processing area V5 via crus I of the cerebellum (thus potentially enhancing the precision of the input predictions of V5 neurons), while at the same time the influence of bottom-up inputs from V5 to the posterior parietal cortex is suppressed. The authors further note that the task-specific input-output patterns of the cerebellum likely determine the functional role that the cerebellum plays in various cognitive processes. Overall, these findings highlight the importance of cerebellar-cortical loops in perceptual-cognitive functions, something that has regrettably often been neglected in the human functional neuroimaging literature.

Reference: Kellermann T, Regenbogen C, De Vos M, Mößnang C, Finkelmeyer A, Habel U. Effective connectivity of the human cerebellum during visual attention. Journal of Neuroscience (2012) 32: 11453–11460. http://dx.doi.org/10.1523/JNEUROSCI.0678-12.2012

8/11/2012

Presenting a rehearsed melody during slow-wave sleep enhances learning of the melody


It is increasingly recognized that the learning of skills is facilitated by sleep. Even relatively brief periods of sleep (napping) lead to increased performance on a task that has been rehearsed prior to sleeping. Furthermore, the extent of memory consolidation during sleep has been observed to depend on whether one has task-related dreams: in studies where subjects have been awakened in the middle of sleep and asked to recall their dream contents, task-related dream content predicted larger post-sleep increments in task performance.

In their recent study, Antony et al. (2012) studied whether the learning of skills could be facilitated by task-related external stimulation that does not wake the learner. Specifically, the authors hypothesized that the ability to produce a melody could be influenced by auditory cueing during sleep. Volunteers practiced two melodies for an equal amount of time. During an afternoon nap following the training session, one of the melodies was presented during slow-wave sleep, as detected with electroencephalography. Post-sleep testing revealed that performance of the melody that had been played during slow-wave sleep was better than performance of the other melody (prior to the nap there were no differences in performance between the two melodies). The performance enhancement further correlated with the amount of time that the subjects spent in slow-wave sleep.
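The key methodological ingredient, detecting slow-wave sleep online so that the cue can be delivered at the right moment, boils down to monitoring low-frequency (delta, roughly 0.5-4 Hz) EEG power. The Python sketch below illustrates the idea with synthetic signals; real sleep staging follows standardized criteria and uses more robust spectral estimation, so the threshold and parameters here are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(7)
fs = 100  # sampling rate in Hz (assumed)

def delta_power(eeg, fs):
    """Fraction of spectral power in the 0.5-4 Hz delta band."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), 1 / fs)
    band = (freqs >= 0.5) & (freqs <= 4.0)
    return spectrum[band].sum() / spectrum[1:].sum()  # skip the DC component

def should_cue(eeg, fs, threshold=0.5):
    """Trigger the melody cue only when delta power dominates (proxy for SWS)."""
    return delta_power(eeg, fs) > threshold

# Two synthetic 30-second EEG windows: slow-wave-like (1 Hz dominant)
# vs. wake-like (10 Hz alpha dominant), each with added noise
t = np.arange(0, 30, 1 / fs)
sws_like = np.sin(2 * np.pi * 1.0 * t) + 0.3 * rng.normal(size=t.size)
wake_like = np.sin(2 * np.pi * 10.0 * t) + 0.3 * rng.normal(size=t.size)

print(should_cue(sws_like, fs))   # cue during slow-wave-like activity
print(should_cue(wake_like, fs))  # withhold the cue otherwise
```

Gating the cue on an online sleep-stage estimate like this is what allows the stimulation to target slow-wave sleep specifically without waking the learner.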

These highly interesting results contribute to a rapidly growing and exciting area of research in cognitive neuroscience on the importance of sleep for memory consolidation and skill learning. They further underline the importance of sleep for learning and suggest that it is possible to enhance the beneficial effects of sleep on the learning of musical sequences by external stimulation during a specific period of sleep, slow-wave sleep.

Reference: Antony JW, Gobel EW, O’Hare JK, Reber PJ, Paller KA. Cued memory reactivation during sleep influences skill learning. Nature Neuroscience (2012) 15: 1114-1116. http://dx.doi.org/10.1038/nn.3152

8/02/2012

Functional connectivity of dorsal vs. ventral posterior parietal cortices during top-down vs. bottom-up attention to memory


In previous neuroimaging studies, the human posterior parietal cortex has been identified as a region intimately involved in attentional processes. Dorsal aspects of the posterior parietal cortex have been associated with “top-down” attention (i.e., focusing attention on external stimuli based on the internal goals of the subject), whereas ventral aspects have been associated with “bottom-up” attention (i.e., externally presented unexpected stimuli capturing one’s attention). Whilst studying attention to externally applied stimuli is experimentally convenient, focusing attention on internal events (e.g., conducting memory searches) is equally important. One might assume that the posterior parietal cortex governs attention to memorized items similarly to externally applied stimuli; however, it has not been systematically tested whether the posterior parietal cortex is functionally segregated into dorsal and ventral areas during cued vs. non-cued recognition memory trials.

In their recent study, Burianová et al. (2012) tested, by presenting cued vs. non-cued recognition memory trials during functional magnetic resonance imaging, whether the dorsal aspects of the posterior parietal cortex are more involved in top-down memory searches and the ventral aspects in non-cued, bottom-up recognition memory. The authors observed spatially dissociable networks of brain areas that overlapped only in the precuneus. During cued recognition memory trials (“top-down”), the dorsal posterior parietal cortex was functionally connected with areas comprising the dorsal attention network as well as with memory-related brain areas; there was furthermore a significant correlation between cued memory recognition performance and the activity of this network. In contrast, during uncued trials, the ventral posterior parietal cortex was functionally connected with the ventral attention system and with relevant memory areas. These findings thus disclose a nice double dissociation of roles between the dorsal and ventral posterior parietal cortical areas in recognition memory that closely resembles the distinct roles these areas play in top-down vs. bottom-up attention.

Reference: Burianová H, Ciaramelli E, Grady CL, Moscovitch M. Top-down and bottom-up attention-to-memory: mapping functional connectivity in two distinct networks that underlie cued and uncued recognition memory. Neuroimage (2012) e-publication ahead of print. http://dx.doi.org/10.1016/j.neuroimage.2012.07.057