Parahippocampal region

Do older adults forget as much as they think, or is it rather that they ‘misremember’?

A small study adds to evidence that gist memory plays an important role in false memories at any age, but older adults are more susceptible to misremembering because of their greater use of gist memory.

Gist memory is about remembering the broad story, not the details. We use schemas a lot. Schemas are concepts we build over time for events and experiences, in order to relieve the cognitive load. They allow us to respond and process faster. We build schemas for such things as going to the dentist, going to a restaurant, attending a lecture, and so on. Schemas are very useful, reminding us what to expect and what to do in situations we have experienced before. But they are also responsible for errors of perception and memory — we see and remember what we expect to see.

As we get older, we do of course build up more and firmer schemas, making it harder to really see with fresh eyes. This makes it harder for us to notice the details, and easier for us to misremember what we saw.

A small study involving 20 older adults (mean age 75) had participants look at 26 different pictures of common scenes (such as a farmyard or a bathroom) for about 10 seconds each, and asked them to remember as much as they could about the scenes. Later, they were shown 300 pictures of objects that were either in the scene, related to the scene (but not actually in it), or not commonly associated with the scene, and were required to say whether or not each object had been in the picture. Brain activity was monitored during these tests. Performance was also compared with that produced in a previous identical study involving 22 young adults (mean age 23).

As expected and as is typical, there was a higher hit rate for schematic items and a higher rate of false memories for schematically related lures (items that belong to the schema but didn’t appear in the picture). True memories activated the typical retrieval network (medial prefrontal cortex, hippocampus/parahippocampal gyrus, inferior parietal lobe, right middle temporal gyrus, and left fusiform gyrus).

Activity in some of these regions (frontal-parietal regions, left hippocampus, right MTG, and left fusiform) distinguished hits from false alarms, supporting the idea that it’s more demanding to retrieve true memories than illusory ones. This contrasts with younger adults who in this and previous research have displayed the opposite pattern. The finding is consistent, however, with the theory that older adults tend to engage frontal resources at an earlier level of difficulty.

Older adults also displayed greater activation in the medial prefrontal cortex for both schematic and non-schematic hits than young adults did.

While true memories activated the typical retrieval network, and there were different patterns of activity for schematic vs non-schematic hits, there was no distinctive pattern of activity for retrieving false memories. However, there was increased activity in the middle frontal gyrus, middle temporal gyrus, and hippocampus/parahippocampal gyrus as a function of the rate of false memories.

Imaging also revealed that, like younger adults, older adults engage the ventromedial prefrontal cortex when retrieving schematic information, and that they do so to a greater extent. Activation patterns also support the role of the mediotemporal lobe (MTL), and the posterior hippocampus/parahippocampal gyrus in particular, in distinguishing true memories from false ones. Note that this region is not concerned with schematic information, and there was no consistent difference in its activation for schematic vs non-schematic hits. Older adults, however, showed a shift within the hippocampus, with much of the activity moving to a more posterior region.

Sensory details are also important for distinguishing between true and false memories, but, apart from activity in the left fusiform gyrus, older adults — unlike younger adults — did not show any differential activation in the occipital cortex. This finding is consistent with previous research, and supports the conclusion that older adults don’t experience the recapitulation of sensory details in the same way that younger adults do. This, of course, adds to the difficulty they have in distinguishing true and false memories.

Older adults also showed differential activation of the right MTG, involved in gist processing, for true memories. Again, this is not found in younger adults, and supports the idea that older adults depend more on schematic gist information to assess whether a memory is true.

However, in older adults, increased activation of both the MTL and the MTG is seen as rates of false alarms increase, indicating that both gist and episodic memory contribute to their false memories. This is also in line with previous research, suggesting that memories of specific events and details can (incorrectly) provide support for false memories that are consistent with such events.

Older adults, unlike young adults, failed to show differential activity in the retrieval network for targets and lures (items that fit in with the schema, but were not in fact present in the image).

What does all this mean? Here’s what’s important:

  • older adults tend to use schema information more when trying to remember
  • older adults find it harder to recall specific sensory details that would help confirm a memory’s veracity
  • at all ages, gist processing appears to play a strong role in false memories
  • memory of specific (true) details can be used to endorse related (but false) details.

What can you do about any of this? One approach would be to make an effort to recall specific sensory details of an event rather than relying on the easier generic event that comes to mind first. So, for example, if you’re asked to go to the store to pick up orange juice, tomatoes and muesli, you might end up with more familiar items — a sort of default position, as it were, because you can’t quite remember what you were asked. If you make an effort to remember the occasion of being told — where you were, how the other person looked, what time of day it was, other things you talked about, etc — you might be able to bring the actual items to mind. A lot of the time, we simply don’t make the effort, because we don’t think we can remember.

https://www.eurekalert.org/pub_releases/2018-03/ps-fdg032118.php

Webb, C.E. & Dennis, N.A. (submitted). Differentiating True and False Schematic Memories in Older Adults. The Journals of Gerontology: Series B.

A meta-analysis of studies reporting brain activity in individuals with a diagnosis of PTSD has revealed differences between the brain activity of individuals with PTSD and that of groups of both trauma-exposed (those who had experienced trauma but didn't have a diagnosis of PTSD) and trauma-naïve (those who hadn't experienced trauma) participants.

The critical difference between those who developed PTSD and those who experienced trauma but didn't develop PTSD lay in the basal ganglia. Specifically:

  • PTSD brains compared with trauma-exposed controls showed differentially active regions of the basal ganglia
  • trauma-exposed brains compared with trauma-naïve controls revealed differences in the right anterior insula, precuneus, cingulate and orbitofrontal cortices, all known to be involved in emotional regulation
  • PTSD brains compared with both control groups showed differences in activity in the amygdala and parahippocampal cortex.

The finding is consistent with other new evidence from the researchers that other neuropsychiatric disorders are also associated with imbalances in specific brain networks.

The findings suggest that, while people who have experienced trauma may not meet the threshold for a diagnosis of PTSD, they may have similar changes within the brain, which might make them more vulnerable to PTSD if they experience a subsequent trauma.

The finding also suggests a different perspective on PTSD — that it “may not actually be abnormal or a 'disorder' but the brain's natural reaction to events and experiences that are abnormal”.

http://www.eurekalert.org/pub_releases/2015-08/uoo-tec080315.php

A pilot study involving 17 older adults with mild cognitive impairment and 18 controls (aged 60-88; average age 78) has found that a 12-week exercise program significantly improved performance on a semantic memory task, and also significantly improved brain efficiency, for both groups.

The program involved treadmill walking at a moderate intensity. The semantic memory tasks involved correctly recognizing the names of celebrities well known to adults born in the 1930s and 40s (difficulty in remembering familiar names is one of the first abilities affected in Alzheimer’s), and recalling words presented in a list. Brain efficiency was demonstrated by a decrease in activation intensity in the 11 brain regions involved in the memory task. The brain regions with improved efficiency corresponded to those involved in Alzheimer's disease, including the precuneus, the temporal lobe, and the parahippocampal gyrus.

Participants also improved their cardiovascular fitness, by about 10%.

http://www.eurekalert.org/pub_releases/2013-07/uom-emb073013.php

Smith, J.C. et al. 2013. Semantic Memory Functional MRI and Cognitive Function After Exercise Intervention in Mild Cognitive Impairment. Journal of Alzheimer’s Disease, 37 (1), 197-215.

I’ve reported before on how London taxi drivers increase the size of their posterior hippocampus by acquiring and practicing ‘the Knowledge’ (but perhaps at the expense of other functions). A new study in similar vein has looked at the effects of piano tuning expertise on the brain.

The study looked at the brains of 19 professional piano tuners (aged 25-78, average age 51.5 years; 3 female; 6 left-handed) and 19 age-matched controls. Piano tuning requires comparison of two notes that are close in pitch, meaning that the tuner has to accurately perceive the particular frequency difference. Exactly how that is achieved, in terms of brain function, has not been investigated until now.

The brain scans showed that piano tuners had increased gray matter in a number of brain regions. In some areas, the difference between tuners and controls was categorical — that is, tuners as a group showed increased gray matter in right hemisphere regions of the frontal operculum, the planum polare, superior frontal gyrus, and posterior cingulate gyrus, and reduced gray matter in the left hippocampus, parahippocampal gyrus, and superior temporal lobe. Differences in these areas didn’t vary systematically between individual tuners.

However, tuners also showed a marked increase in gray matter volume in several areas that was dose-dependent (that is, varied with years of tuning experience) — the anterior hippocampus, parahippocampal gyrus, right middle temporal and superior temporal gyrus, insula, precuneus, and inferior parietal lobe — as well as an increase in white matter in the posterior hippocampus.

These differences were not affected by actual chronological age, or, interestingly, level of musicality. However, they were affected by starting age, as well as years of tuning experience.

What these findings suggest is that achieving expertise in this area requires an initial development of active listening skills that is underpinned by categorical brain changes in the auditory cortex. These superior active listening skills then set the scene for the development of further skills that involve what the researchers call “expert navigation through a complex soundscape”. This process may, it seems, involve the encoding and consolidating of precise sound “templates” — hence the development of the hippocampal network, and hence the dependence on experience.

The hippocampus, apart from its general role in encoding and consolidating, has a special role in spatial navigation (as shown, for example, in the London cab driver studies, and the ‘parahippocampal place area’). The present findings extend that navigation in physical space to the more metaphoric one of relational organization in conceptual space.

The more general message from this study, of course, is confirmation for the role of expertise in developing specific brain regions, and a reminder that this comes at the expense of other regions. So choose your area of expertise wisely!

Back when I was young, sleep learning was a popular idea. The idea was that a tape would play while you were asleep, and learning would seep into your brain effortlessly. It was particularly advocated for language learning. Subsequent research, unfortunately, rejected the idea, and gradually it has faded (although not completely). Now a new study may presage a come-back.

In the study, 16 young adults (mean age 21) learned how to ‘play’ two artificially-generated tunes by pressing four keys in time with repeating 12-item sequences of moving circles — the idea being to mimic the sort of sensorimotor integration that occurs when musicians learn to play music. They then took a 90-minute nap. During slow-wave sleep, one of the tunes was repeatedly played to them (20 times over four minutes). After the nap, participants were tested on their ability to play the tunes.

A separate group of 16 students experienced the same events, but without the tune being played during sleep. A third group stayed awake during this 90-minute period, performing a demanding working memory task; white noise was played in the background during the task, with the melody covertly embedded in it.

Consistent with the idea that sleep is particularly helpful for sensorimotor integration, and that reinstating information during sleep produces reactivation of those memories, the sequence ‘practiced’ during slow-wave sleep was remembered better than the unpracticed one. Moreover, the amount of improvement was positively correlated with the proportion of time spent in slow-wave sleep.

Among those who didn’t hear any sounds during sleep, improvement likewise correlated with the proportion of time spent in slow-wave sleep. The level of improvement for this group was intermediate between that of the practiced and unpracticed tunes in the sleep-learning group.

The findings add to growing evidence of the role of slow-wave sleep in memory consolidation. Whether the benefits for this very specific skill extend to other domains (such as language learning) remains to be seen.

However, another recent study carried out a similar procedure with object-location associations. Fifty everyday objects were associated with particular locations on a computer screen, and presented together with characteristic sounds (e.g., a cat with a meow and a kettle with a whistle). The associations were learned to criterion before participants slept for two hours in an MR scanner. During slow-wave sleep, auditory cues related to half the learned associations were played, as well as ‘control’ sounds that had not been played previously. Participants were tested after a short break and a shower.

A difference in brain activity was found for associated sounds and control sounds — associated sounds produced increased activation in the right parahippocampal cortex — demonstrating that even in deep sleep some sort of differential processing was going on. This region overlapped with the area involved in retrieval of the associations during the earlier, end-of-training test. Moreover, when the associated sounds were played during sleep, parahippocampal connectivity with the visual-processing regions increased.

All of this suggests that, indeed, memories are being reactivated during slow-wave sleep.

Additionally, brain activity in certain regions at the time of reactivation (mediotemporal lobe, thalamus, and cerebellum) was associated with better performance on the delayed test. That is, those who had greater activity in these regions when the associated sounds were played during slow-wave sleep remembered the associations best.

The researchers suggest that successful reactivation of memories depends on responses in the thalamus, which if activated feeds forward into the mediotemporal lobe, reinstating the memories and starting the consolidation process. The role of the cerebellum may have to do with the procedural skill component.

The findings are consistent with other research.

All of this is very exciting, but of course this is not a strategy for learning without effort! You still have to do your conscious, attentive learning. But these findings suggest that we can increase our chances of consolidating the material by replaying it during sleep. Of course, there are two practical problems with this: the material needs an auditory component, and you somehow have to replay it at the right time in your sleep cycle.

Memory begins with perception. We can’t remember what we don’t perceive, and our memory of things is influenced by how we perceive them.

Our ability to process visual scenes has been the subject of considerable research. How do we process so many objects? Some animals do it by severely limiting what they perceive, but humans can perceive a vast array of features. We need some other way of filtering the information. Moreover, it’s greatly to our advantage that we can process the environment extremely quickly. So that’s two questions: how do we process so much, and so fast?

Brain region behind the scene-facilitation effect identified

A critical factor, research suggests, is our preferential processing of interacting objects — we pick out interacting objects more quickly than unrelated objects. A new study has now identified the region of the brain responsible for this ‘scene-facilitation effect’. To distinguish between the two leading contenders, the lateral occipital cortex and the intraparietal sulcus, transcranial magnetic stimulation was used to temporarily shut down each region in turn while volunteers viewed brief flashes of object pairs (half of which were shown interacting with each other) and decided whether the glimpsed objects matched a presented label.

The scene-facilitation effect was eliminated when the lateral occipital cortex was out of action, while shutting down the intraparietal sulcus made no difference.

The little we need to identify a scene

The scene-facilitation effect is an example of how we filter and condense the information in our visual field, but we also work in the opposite direction — we extrapolate.

When ten volunteers had their brains scanned while they viewed color photographs and line drawings of six categories of scenes (beaches, city streets, forests, highways, mountains and offices), brain activity was nearly identical regardless of whether participants were looking at a color photo or a simple line drawing. That is, researchers could tell, with a fair amount of success, what category of scene the participant was looking at just by looking at the pattern of brain activity in the ventral visual cortex. When the decoding made mistakes, the mistakes were similar for the photos and the drawings.

In other words, most of what the brain is responding to in the photo is also evident in the line drawing.

In order to determine what those features were, the researchers progressively removed some of the lines in the line drawings. Even when up to 75% of the pixels in a line drawing were removed, participants could still identify what the scene was 60% of the time — as long as the important lines were left in, that is, those showing the broad contours of the scene. If only the short lines, representing details like leaves or windows, were left, participants became dramatically less accurate.

The findings cast doubt on some models of human visual perception which argue that people need specific information that is found in photographs to classify a scene.

Consistent with previous research, activity in the parahippocampal place area and the retrosplenial cortex was of greatest importance.

The brain performs visual search near optimally

Visual search involves picking out a target in a sea of other objects, and it’s one of the most important visual tasks we do. It’s also (not surprisingly, considering its evolutionary importance) something we are very very good at. In fact, a new study reveals that we’re pretty near optimal.

Of course we make mistakes, and have failures. But these happen not because of our incompetence, but because of the complexity of the task.

In the study, participants were shown sets of lines that might or might not contain a line oriented in a particular way. Each display was shown for only a fraction of a second, and the contrast of each line was varied randomly, making the target easier or harder to detect. The variation in contrast was designed as a model for an important variable in visual search — the reliability of the sensory information. Optimally, an observer would take the varying reliability of the items into account, giving each piece of information a weight that reflects its reliability, and then combine the weighted information according to a specific integration rule. This had been calculated to be the optimal strategy, and participants' performance matched it.

The computer model that simulated this performance, and that matched the human performance, used groups of (simulated) neurons that responded differently to different line orientations.

In other words, it appears that we are able, very quickly, to integrate information coming from multiple locations, while taking into account the reliability of the different pieces of information, and we do this through the integration of information coming from different groups of neurons, each group of which is responding to different bits of information.
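To make "reliability-weighted integration" concrete, here is a minimal Python sketch. It is purely illustrative and not the study's actual model: the orientations, noise levels and the simple Gaussian assumptions are mine. The point is only that a low-contrast (noisy, unreliable) item contributes weak evidence, while a high-contrast item dominates the combined decision variable.

```python
import numpy as np

def target_present_log_odds(measurements, noise_sd,
                            target_ori=20.0, distractor_ori=0.0,
                            prior_present=0.5):
    """Toy reliability-weighted detector (hypothetical parameters)."""
    measurements = np.asarray(measurements, dtype=float)
    noise_sd = np.asarray(noise_sd, dtype=float)

    # Per-item log-likelihoods under "this item is the target" vs
    # "this item is a distractor"; the Gaussian normalizing constants
    # cancel in the ratio because the same noise applies to both.
    ll_target = -0.5 * ((measurements - target_ori) / noise_sd) ** 2
    ll_distractor = -0.5 * ((measurements - distractor_ori) / noise_sd) ** 2

    # Reliability enters automatically: high-noise items yield likelihood
    # ratios near 1 and barely move the decision.
    per_item_lr = np.exp(ll_target - ll_distractor)

    # If a target is present, exactly one item (location unknown) is it,
    # so the combination rule averages the per-item likelihood ratios.
    log_lr = np.log(per_item_lr.mean())

    prior_log_odds = np.log(prior_present / (1.0 - prior_present))
    return log_lr + prior_log_odds

# Four items: the third is high-contrast (low noise) and close to the
# target orientation, so it dominates the combined decision variable.
print(target_present_log_odds([2.0, -3.0, 19.0, 5.0],
                              noise_sd=[15.0, 15.0, 4.0, 15.0]))
```

A positive log-odds value means "respond target present"; in this sketch, swapping the reliable and unreliable items changes the answer even though the raw measurements stay the same, which is the essence of weighting by reliability.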

Another recent study into visual search has found that, when people are preparing themselves to look for very familiar object categories (people or cars) in natural scenes, activity in their visual cortex was very similar to that shown when they were actually looking at the objects in the scenes. Moreover, the precise activity in the object-selective cortex (OSC) predicted performance in detecting the target, while preparatory activity in the early visual cortex (V1) was actually negatively related to search performance. It seems that these two regions of the visual cortex are linked to different search strategies, with the OSC involved in relatively abstract search preparation and V1 to more specific imagery-like preparation. Activity in the medial prefrontal cortex also reflected later target detection performance, suggesting that this may be the source of top-down processing.

The findings demonstrate the role of preparatory and top-down processes in guiding visual search (and remind us that these processes can bias us against seeing what we’re looking for, just as easily as they help us).

'Rewarding' objects can't be ignored

Another aspect of visual search is that some objects just leap out at us and capture our attention. Loud noises and fast movement are the most obvious of the attributes that snag our attention. These are potential threats, and so it’s no wonder we’ve evolved to pay attention to such things. We’re also drawn to potential rewards: prospective mates, food, drink.

What about rewards that are only temporarily rewarding? Do we move on easily, able to ignore previously rewarding items as soon as they lose their relevance?

In a recent study, people spent an hour searching for red or green circles in an array of many differently colored circles. The red and green circles were always followed by a monetary reward (10 cents for one color, and 1 cent for the other). Afterwards, participants were asked to search for particular shapes, and color was no longer relevant or rewarded. However, when, occasionally, one of the shapes was red or green, reaction times slowed, demonstrating that these were distracting (even though the participants had been told to ignore this if it happened).

This distraction persisted for weeks after the original learning session. Interestingly, people who scored highly on a questionnaire measuring impulsivity were more likely to be distracted by these no-longer-relevant items.

The findings indicate that stimuli previously associated with reward continue to capture attention regardless of their relevance to the task in hand. There are implications here that may help in the development of more effective treatments for drug addiction, obesity and ADHD.

People make an image memorable

What makes an image memorable? It’s always been assumed that visual memory is too subjective to allow a general answer to this question. But an internet study has found remarkable consistency among hundreds of people who viewed images from a collection of about 10,000 images, some of which were repeated, and decided whether or not they had seen the image before. The responses generated a memorability rating for each image. Once this had been collated, the researchers made "memorability maps" of each image by asking people to label all the objects in the images. These maps were then used to determine which objects make an image memorable.

In general, images with people in them were the most memorable, followed by images of human-scale space — such as the produce aisle of a grocery store — and close-ups of objects. Least memorable were natural landscapes, although those could be memorable if they featured an unexpected element, such as shrubbery trimmed into an unusual shape.

Computer modeling then allowed various features for each image (such as color, or the distribution of edges) to be correlated with the image's memorability. The end result was an algorithm that can predict memorability of images the computational model hasn't "seen" before.
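This kind of feature-to-memorability mapping can be illustrated with a generic regression sketch in Python. It is not the study's actual algorithm: the features, the synthetic data and the ridge-regression method below are stand-ins. It simply shows how a model fitted on some images can predict scores for images it hasn't "seen".

```python
import numpy as np

# Illustrative only: synthetic image features and memorability scores.
rng = np.random.default_rng(0)
n_images, n_features = 2000, 50
X = rng.normal(size=(n_images, n_features))            # e.g. color/edge statistics per image
true_w = rng.normal(size=n_features)
y = X @ true_w + rng.normal(scale=2.0, size=n_images)  # memorability ratings (synthetic)

# Hold out images the model hasn't "seen", mirroring the prediction test.
train, test = slice(0, 1500), slice(1500, None)

# Ridge regression in closed form: w = (X'X + lam*I)^-1 X'y
lam = 1.0
A = X[train].T @ X[train] + lam * np.eye(n_features)
w = np.linalg.solve(A, X[train].T @ y[train])

pred = X[test] @ w
corr = np.corrcoef(pred, y[test])[0, 1]
print(f"correlation between predicted and actual memorability: {corr:.2f}")
```

The held-out correlation is the sketch's analogue of testing the algorithm on new images; with real data the features would be image statistics and the scores would come from the repeat-detection responses described above.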

The researchers are now doing a follow-up study to test longer-term memorability, as well as working on adding more detailed descriptions of image content.

Kim, J.G., Biederman, I. & Juan, C-H. 2011. The Benefit of Object Interactions Arises in the Lateral Occipital Cortex Independent of Attentional Modulation from the Intraparietal Sulcus: A Transcranial Magnetic Stimulation Study. The Journal of Neuroscience, 31 (22), 8320-8324.

Walther, D.B., Chai, B., Caddigan, E., Beck, D.M. & Fei-Fei, L. 2011. Simple line drawings suffice for functional MRI decoding of natural scene categories. Proceedings of the National Academy of Sciences, 108 (23), 9661-9666.

Ma, W.J., Navalpakkam, V., Beck, J.M., van den Berg, R. & Pouget, A. 2011. Behavior and neural basis of near-optimal visual search. Nature Neuroscience, 14 (6), 783-790.

Peelen, M.V. & Kastner, S. 2011. A neural basis for real-world visual search in human occipitotemporal cortex. Proceedings of the National Academy of Sciences, 108 (29), 12125-12130.

Anderson, B.A., Laurent, P.A. & Yantis, S. 2011. Value-driven attentional capture. Proceedings of the National Academy of Sciences, 108 (25), 10367-10371.

Isola, P., Xiao, J., Oliva, A. & Torralba, A. 2011. What makes an image memorable? Paper presented at the IEEE Conference on Computer Vision and Pattern Recognition, June 20-25, Colorado Springs.


A two-year study involving 53 older adults (60+) has found that those with a mother who had Alzheimer's disease had significantly more brain atrophy than those with a father or no parent with Alzheimer's disease. More specifically, they had twice as much gray matter shrinkage, and about one and a half times more whole brain shrinkage per year.

This atrophy was particularly concentrated in the precuneus and parahippocampal gyrus. Those with the APOE4 gene also had more atrophy in the frontal cortex than those who didn’t carry the ‘Alzheimer’s gene’.

This adds to evidence indicating that maternal history is a far greater risk factor for Alzheimer’s than paternal history. Eleven participants reported having a mother with Alzheimer's disease, 10 had a father with Alzheimer's disease and 32 had no family history of the disease. It has been estimated that people who have first-degree relatives with Alzheimer's disease are four to 10 times more likely to develop the disease.

Comparison of 17 people with severe obstructive sleep apnea (OSA) with 15 age-matched controls has revealed that those with OSA had reduced gray matter in several brain regions, most particularly in the left parahippocampal gyrus and the left posterior parietal cortex, as well as the entorhinal cortex and the right superior frontal gyrus. These areas were associated with deficits in abstract reasoning and executive function. Deficits in the left posterior parietal cortex were also associated with daytime sleepiness.

Happily, however, three months of treatment with continuous positive airway pressure (CPAP), produced a significant increase in gray matter in these regions, which was associated with significant improvement in cognitive function. The researchers suggest that the hippocampus, being especially sensitive to hypoxia and innervation of small vessels, is the region most strongly and quickly affected by hypoxic episodes.

The findings point to the importance of diagnosing and treating OSA.

A study involving young (average age 22) and older adults (average age 77) showed participants pictures of overlapping faces and places (houses and buildings) and asked them to identify the gender of the person. While the young adults showed activity in the brain region for processing faces (fusiform face area) but not in the brain region for processing places (parahippocampal place area), both regions were active in the older adults. Additionally, on a surprise memory test 10 minutes later, older adults who showed greater activation in the place area were more likely to recognize what face was originally paired with what house.

These findings confirm earlier research showing that older adults become less capable of ignoring irrelevant information, and show that this distracting information doesn’t merely interfere with what you’re trying to attend to, but is encoded in memory along with that information.

Children’s ability to remember past events improves as they get older. This has been thought by many to be due to the slow development of the prefrontal cortex. But now brain scans from 60 children (8-year-olds, 10- to 11-year-olds, and 14-year-olds) and 20 young adults have revealed marked developmental differences in the activity of the mediotemporal lobe.

The study involved the participants looking at a series of pictures (while in the scanner), and answering a different question about the image, depending on whether it was drawn in red or green ink. Later they were shown the pictures again, in black ink and mixed with new ones. They were asked whether they had seen them before and whether they had been red or green.

While the adolescents and adults selectively engaged regions of the hippocampus and posterior parahippocampal gyrus to recall event details, the younger children did not, with the 8-year-olds indiscriminately using these regions for both detail recollection and item recognition, and the 10- to 11-year-olds showing inconsistent activation. It seems that the hippocampus and posterior parahippocampal gyrus become increasingly specialized for remembering events, and these changes may partly account for long-term memory improvements during childhood.

Older news items (pre-2010) brought over from the old website

September 2009

Healthy older brains not significantly smaller than younger brains

A study of healthy older adults from the long-term Maastricht Aging Study in the Netherlands found that the 35 people who remained cognitively healthy and free of dementia showed no significant decline in gray matter, but the 30 people who showed substantial cognitive decline, although still dementia-free, showed a significant reduction in brain tissue in the hippocampus and parahippocampal areas, and in the frontal and cingulate cortices. The findings suggest that atrophy in the normal older brain may have been over-estimated in earlier studies, which did not screen out people whose undetected, slowly developing brain disease was killing off cells in key areas.

Burgmans, S. et al. 2009. The Prevalence of Cortical Gray Matter Atrophy May Be Overestimated In the Healthy Aging Brain. Neuropsychology, 23 (5), 541-550.

http://www.eurekalert.org/pub_releases/2009-09/apa-hob090309.php

June 2009

Perception affected by mood

An imaging study has revealed that when people were shown a composite image with a face surrounded by "place" images, such as a house, and asked to identify the gender of the face, those in whom a bad mood had been induced didn’t process the places in the background. However, those in a good mood took in both the focal and background images. These differences in perception were coupled with differences in activity in the parahippocampal place area. Increasing the amount of information is of course not necessarily a good thing, as it may result in more distraction.

Schmitz, T.W., De Rosa, E. & Anderson, A.K. 2009. Opposing Influences of Affective State Valence on Visual Cortical Encoding. Journal of Neuroscience, 29 (22), 7199-7207.

http://www.eurekalert.org/pub_releases/2009-06/uot-pww060309.php

January 2006

Fitness counteracts cognitive decline from hormone-replacement therapy

A study of 54 postmenopausal women (aged 58 to 80) suggests that being physically fit offsets cognitive declines attributed to long-term hormone-replacement therapy. Gray matter in four regions (left and right prefrontal cortex, left parahippocampal gyrus and left subgenual cortex) was progressively reduced with longer hormone treatment, with the decline beginning after more than 10 years of treatment. Therapy shorter than 10 years was associated with increased tissue volume. Higher fitness scores were also associated with greater tissue volume, and those undergoing long-term hormone therapy showed more modest tissue loss if their fitness level was high. Higher fitness levels were also associated with greater white matter volume in prefrontal regions and in the genu of the corpus callosum. The findings need to be replicated with a larger sample, but are in line with animal studies finding that estrogen and exercise have similar effects: both stimulate brain-derived neurotrophic factor.

Erickson, K.I., Colcombe, S.J., Elavsky, S., McAuley, E., Korol, D., Scalf, P.E. & Kramer, A.F. 2006. Interactive effects of fitness and hormone treatment on brain health in postmenopausal women. Neurobiology of Aging, In Press, Corrected Proof, Available online 6 January 2006

http://www.eurekalert.org/pub_releases/2006-01/uoia-fcc012406.php

September 2003

More learned about how spatial navigation works in humans

Researchers monitored signals from individual brain cells as patients played a computer game in which they drove around a virtual town in a taxi, searching for passengers who appeared in random locations and delivering them to their destinations. Previous research has found specific cells in the brains of rodents that respond to “place”, but until now we haven’t known whether humans have such specific cells. This study identifies place cells (primarily found in the hippocampus), as well as “view” cells (responsive to landmarks; found mainly in the parahippocampal region) and “goal” cells (responsive to goals, found throughout the frontal and temporal lobes). Some cells respond to combinations of place, view and goal — for example, cells that responded to viewing an object only when that object was a goal.

Ekstrom, A.D., Kahana, M.J., Caplan, J.B., Fields, T.A., Isham, E.A., Newman, E.L. & Fried, I. 2003. Cellular networks underlying human spatial navigation. Nature, 425 (6954), 184-7.

http://www.eurekalert.org/pub_releases/2003-09/uoc--vgu091003.php
