7/20/2012

Hippocampal 5-HT4 serotonin receptor levels predict memory performance in human volunteers


The neurochemical basis of cognitive functions is one of the most fundamental research areas in cognitive neuroscience. Indeed, there is a vast body of literature documenting how manipulating the function of various neurotransmitter systems affects different cognitive and perceptual functions. Given the increasing incidence of memory deficits in the aging populations of western countries, the neurotransmitter basis of memory is one of the most prominent questions in this exciting area of research. In addition to findings linking acetylcholine function with memory consolidation, there is a body of literature suggesting a significant role for the serotonin system in memory functions.

A recent study by Dr. Mette Haahr et al. (2012) combined mapping of the levels of a specific serotonin receptor, 5-HT4R, in the hippocampi of 30 healthy volunteers with neuropsychological measures of memory performance. Two memory tests were utilized. In the Rey Auditory Verbal Learning Test, the participants were presented with a list of 15 words on five separate trials, each followed by free immediate recall of the list, as well as with an interference list; delayed recall of the list took place 30 minutes after the end of the task. This allowed deriving indices of both immediate and delayed recall. In the Rey-Osterrieth Complex Figure Test, the participants copied a complex geometric figure and then reproduced it from memory after delays of 3 and 30 minutes, with the number of figure elements reproduced after the 3-minute delay indexing immediate recall and the number reproduced after the 30-minute delay indexing delayed recall.
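
To make the derived indices concrete, here is a minimal sketch of one plausible scoring scheme; the exact index definitions used by Haahr et al. are not reproduced here, so treating total recall across the five learning trials as the immediate index and the 30-minute score as the delayed index is an illustrative assumption.

```python
# Illustrative scoring of RAVLT-style data. The index definitions below
# are assumptions for demonstration, not the exact ones from the paper.
def ravlt_indices(trial_recalls, delayed_recall):
    """trial_recalls: words correctly recalled on each of the five
    learning trials; delayed_recall: words recalled after 30 minutes."""
    immediate = sum(trial_recalls)   # learning performance across trials 1-5
    delayed = delayed_recall         # retention after the 30-minute delay
    return immediate, delayed

immediate, delayed = ravlt_indices([6, 9, 11, 13, 14], 12)
print(immediate, delayed)            # 53 12
```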

The memory performance of the participants was correlated with 5-HT4R density in hippocampal regions, quantified with positron emission tomography following injection of the radiotracer [11C]SB207145. Negative correlations were observed between immediate recall scores on the Rey Auditory Verbal Learning Test and 5-HT4R density in the hippocampus bilaterally, and between delayed recall scores and 5-HT4R density in the right hippocampus. The authors note that theirs is the first study to examine associations between hippocampal 5-HT4R density and memory function in humans. While the observed inverse relationship between receptor density and memory performance warrants further studies of the complex interactions between intrinsic serotonergic tonus and receptor levels, the authors suggest that their findings predict that stimulation of the human 5-HT4R could improve memory function.
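
The core statistic behind these findings is a simple bivariate correlation. The sketch below illustrates that computation with synthetic numbers standing in for the real memory scores and PET-derived binding values; the study's actual quantification of [11C]SB207145 binding involves additional modeling steps omitted here.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 30                                   # number of volunteers in the study

# Synthetic stand-ins for hippocampal 5-HT4R binding and recall scores,
# constructed with a built-in negative association for illustration.
binding = rng.normal(1.0, 0.2, n)
recall = 50.0 - 10.0 * binding + rng.normal(0.0, 2.0, n)

r = np.corrcoef(binding, recall)[0, 1]   # Pearson correlation coefficient
print(f"r = {r:.2f}")                    # negative, as in the reported finding
```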

Reference: Haahr ME, Fisher P, Holst K, Madsen K, Jensen CG, Marner L, Lehel S, Baare W, Knudsen G, Hasselbalch S. The 5-HT4 receptor levels in hippocampus correlates inversely with memory test performance in humans. Human Brain Mapping (2012) e-publication ahead of print. http://dx.doi.org/10.1002/hbm.22123

7/13/2012

Superior-posterior temporal cortex decodes distances to sound sources


Quick and accurate localization of perceptual objects in our environment is a fundamentally important ability in which the sense of hearing significantly complements that of vision. For instance, objects that are occluded or outside one's field of vision (such as a rare bird chirping on a branch behind a bird watcher) are efficiently and almost automatically segregated and localized by the auditory system in the three-dimensional space that surrounds us. While a number of previous studies have suggested that there are neurons in superior-posterior temporal cortical areas specialized in decoding the directions that sounds emanate from, it has remained less clear where and how distances to sound sources are processed in the human brain. Importantly, the most salient distance cue, the intensity of the sound, is not always reliable, as sound intensity can and does vary independently of source distance. It is therefore reasonable to assume that the auditory system also uses other cues to decode distances to sound sources.

A recent ingenious study by Dr. Norbert Kopčo et al. (2012) combined psychophysics, computational modeling, and functional magnetic resonance imaging to probe the neural basis of sound distance processing. The authors presented healthy volunteers with sounds at varying distances (15-100 cm) in a virtual reverberant environment. The behavioral results suggested that the direct-to-reverberant ratio is the most reliable of the intensity-independent distance cues, but that discrimination performance is best explained by a combination of direct-to-reverberant ratio and interaural level difference cues. Furthermore, the functional magnetic resonance imaging data collected during presentation of the sounds at varying distances identified the planum temporale and posterior superior temporal gyrus contralateral to the direction of stimulation as the auditory system structures underlying the decoding of distances to sound sources.
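
Both cues are straightforward to express as energy ratios. The sketch below is not the authors' model: it assumes a single room impulse response, treats everything within an arbitrary 2.5 ms of the strongest peak as the direct path, and computes a broadband interaural level difference from left- and right-ear signal energies.

```python
import numpy as np

def direct_to_reverberant_ratio(h, fs, direct_ms=2.5):
    """Direct-to-reverberant energy ratio (dB) of a room impulse response
    h sampled at fs Hz. The 2.5 ms direct-path window is an illustrative
    choice, not a value taken from Kopco et al."""
    onset = np.argmax(np.abs(h))
    split = onset + int(direct_ms * 1e-3 * fs)
    return 10.0 * np.log10(np.sum(h[:split] ** 2) / np.sum(h[split:] ** 2))

def interaural_level_difference(left, right):
    """Broadband ILD (dB): ratio of left- to right-ear signal energy."""
    return 10.0 * np.log10(np.sum(left ** 2) / np.sum(right ** 2))

# Toy impulse response: a direct impulse followed by exponentially
# decaying noise standing in for the reverberant tail.
fs = 44100
rng = np.random.default_rng(0)
h = np.zeros(fs // 2)
h[100] = 1.0                              # direct sound
t = np.arange(len(h) - 200) / fs
h[200:] = 0.05 * rng.standard_normal(t.size) * np.exp(-t / 0.3)

print(f"DRR: {direct_to_reverberant_ratio(h, fs):.1f} dB")
print(f"ILD: {interaural_level_difference(h, 0.5 * h):.1f} dB")  # 6 dB by construction
```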

Reference: Kopčo N, Huang S, Belliveau JW, Raij T, Tengshe C, Ahveninen J. Neuronal representations of distance in human auditory cortex. Proc Natl Acad Sci USA (2012) 109: 11019-11024. http://dx.doi.org/10.1073/pnas.1119496109

7/07/2012

Childhood maltreatment correlates with reactivity of the amygdala to subliminally presented negative facial expressions


The human amygdala is known to respond to emotional stimuli that are presented subliminally, such as photographs of facial expressions shown so briefly (a few tens of milliseconds) that no conscious percept of the stimuli is formed. Interestingly, hyper-responsiveness of the human amygdala to negative facial expressions has been observed in a number of psychiatric conditions, including clinical depression, anxiety disorders, and borderline personality disorder. One of the critical questions has been whether these deviations in pre-attentive amygdala responsiveness reflect a trait (caused, for example, by adverse childhood events) that predisposes individuals to psychiatric conditions, or whether the psychiatric conditions themselves (i.e., the state) cause the negative processing bias.

In their recent study, Dannlowski et al. (2012) investigated in a sizeable group of healthy volunteers (N=150) whether childhood maltreatment predicts amygdala hyper-responsiveness to subliminally presented negative facial expressions. During functional magnetic resonance imaging, pictures depicting neutral, positive, and negative facial expressions were presented briefly (33 ms), each followed immediately by a neutral facial expression that served as a masking stimulus. The authors assessed childhood maltreatment using the Childhood Trauma Questionnaire, a retrospective 25-item self-report instrument.
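
The backward-masking trial structure is simple to implement. The sketch below uses PsychoPy as an example presentation library (an assumption; the software actually used in the study is not stated here) and placeholder image files; on a 60 Hz display, two refresh frames are the closest frame-locked approximation to the 33 ms target duration.

```python
from psychopy import visual, core

# Placeholder stimuli; the actual face images from the study are not available.
win = visual.Window(fullscr=True)                        # assumes a 60 Hz display
target = visual.ImageStim(win, image='sad_face.png')     # hypothetical file name
mask = visual.ImageStim(win, image='neutral_face.png')   # hypothetical file name

# 33 ms at 60 Hz corresponds to two refresh frames (2 x ~16.7 ms).
for _ in range(2):
    target.draw()
    win.flip()

# The mask follows immediately, preventing a conscious percept of the target.
mask.draw()
win.flip()
core.wait(0.5)      # illustrative mask duration, not taken from the paper
win.close()
```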

As hypothesized by the authors, there was a significant correlation between Childhood Trauma Questionnaire scores and amygdala responsiveness to subliminally presented sad facial expressions, an association that was not confounded by trait anxiety, current depression level, age, gender, intelligence, education level, or recent stressful life events, all of which the authors carefully controlled for. While the authors quite correctly caution that only a prospective study could provide decisive evidence of a causal relationship between childhood maltreatment and pre-attentive amygdala hyper-responsiveness to negative facial expressions, these results nonetheless provide significant evidence for a link between maltreatment in childhood and aberrant automatic processing of negative emotional expressions in adulthood. Importantly, these findings might in part explain how childhood maltreatment predisposes individuals to the development of psychiatric conditions, such as clinical depression, later in life.
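
Controlling for that many covariates typically amounts to regressing them out of both measures and correlating the residuals. Below is a minimal sketch of such a partial correlation with synthetic data in place of the real measures; the authors' actual statistical model may well differ.

```python
import numpy as np

def partial_correlation(x, y, covariates):
    """Correlation between x and y after regressing an intercept plus
    the covariates out of both variables (ordinary least squares)."""
    design = np.column_stack([np.ones(len(x)), covariates])

    def residualize(v):
        beta, *_ = np.linalg.lstsq(design, v, rcond=None)
        return v - design @ beta

    rx, ry = residualize(x), residualize(y)
    return (rx @ ry) / (np.linalg.norm(rx) * np.linalg.norm(ry))

# Synthetic data standing in for the real measures (N = 150 in the study).
rng = np.random.default_rng(1)
n = 150
covariates = rng.standard_normal((n, 4))    # e.g. anxiety, depression, age, ...
ctq = rng.standard_normal(n)                # childhood trauma scores
amygdala = (0.4 * ctq + covariates @ np.array([0.2, -0.1, 0.3, 0.0])
            + rng.standard_normal(n))       # built-in true association

print(f"partial r = {partial_correlation(ctq, amygdala, covariates):.2f}")
```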

Reference: Dannlowski U, Kugel H, Huber F, Stuhrmann A, Redlich R, Grotegerd D, Dohm K, Sehlmeyer K, Konrad C, Baune BT, Arolt V, Heindel W, Zwitserlood P, Suslow T. Childhood maltreatment is associated with an automatic negative emotion processing bias in the amygdala. Human Brain Mapping (2012) e-publication ahead of print. http://dx.doi.org/10.1002/hbm.22112 

7/01/2012

Comparison of monkeys and humans reveals superior temporal sulcus as the region that has evolved for human language processing


It has often been remarked that speech and language represent a highly specialized skill unique to humans. It is, however, increasingly recognized that animals also use acoustic signals to communicate with conspecifics. This suggests that humans and certain other species are closer to each other with respect to the evolution of language than has traditionally been assumed, even though human language is much more complex and refined than animal communication calls. The species-specific vocalizations of non-human primates constitute a prime example of this; however, there have been relatively few attempts to compare the foci of brain responses to speech and communication calls versus non-communicative control sounds between humans and non-human primates.

In their recent functional magnetic resonance imaging (fMRI) study, Olivier Joly et al. (2012) presented humans and macaque monkeys with monkey vocalizations, human emotional non-linguistic vocalizations, intelligible speech, non-intelligible speech, bird songs, and scrambled control sounds. The authors observed widespread hemodynamic responses in temporal, frontal, and parietal cortical areas to both the vocalizations and the scrambled control sounds in both species. Non-primary auditory areas in the temporal cortex preferentially responded to the intact sounds. Interestingly, in macaques, parabelt areas extending into the superior temporal gyrus responded to monkey vocalizations, matching the areas activated by unintelligible speech and emotional sounds in humans. Further, the monkey superior temporal sulcus did not respond to species-specific sounds, in sharp contrast with the human superior temporal sulcus (and Broca's area), which responded specifically to intelligible speech.
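
Species comparisons like these ultimately rest on region-of-interest contrasts between sound conditions. As a schematic illustration only (synthetic per-subject response estimates, not the authors' pipeline), a paired t-test on responses in a superior temporal sulcus region could look like this:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_subjects = 12                     # illustrative sample size, not the study's

# Synthetic per-subject mean responses (e.g., GLM beta estimates) in an
# STS region of interest for two of the sound conditions.
intelligible = rng.normal(1.0, 0.3, n_subjects)
unintelligible = rng.normal(0.4, 0.3, n_subjects)

# Paired t-test: does the region respond more to intelligible speech?
t, p = stats.ttest_rel(intelligible, unintelligible)
print(f"t = {t:.2f}, p = {p:.4f}")
```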

Taken together, the results of this highly interesting study suggest that the evolution of language in humans has recruited most of the superior temporal sulcus, whereas in monkeys the much simpler species-specific vocalizations have not required corresponding involvement of this area. Methodologically, this pioneering study nicely demonstrates how macaque and human brain function can be compared at multiple levels of processing using non-invasive functional magnetic resonance imaging, in addition to shedding light on the intriguing question of which brain areas have developed in humans to enable the rich language skills that have, in large part, made it possible for human societies to emerge and develop.

Reference: Joly O, Pallier C, Ramus F, Pressnitzer D, Vanduffel W, Orban GA. Processing of vocalizations in humans and monkeys: a comparative fMRI study. Neuroimage (2012) 62: 1376-1389. http://dx.doi.org/10.1016/j.neuroimage.2012.05.070