medial prefrontal cortex

The part of the prefrontal cortex close to the midline, implicated in social memory: in particular, self-reflection, as well as theory of mind and empathy. The region is also active when sounds evoke feelings.

Gist memory may be why false memories are more common in older adults

  • Gist processing appears to play a strong role in false memories.
  • Older adults rely on gist memory more.
  • Older adults find it harder to recall specific sensory details that would help confirm whether a memory is true.

Do older adults forget as much as they think, or is it rather that they ‘misremember’?

A small study adds to evidence that gist memory plays an important role in false memories at any age, but older adults are more susceptible to misremembering because of their greater use of gist memory.

Gist memory is about remembering the broad story, not the details, and it relies heavily on schemas. Schemas are concepts we build over time for events and experiences, in order to reduce cognitive load; they allow us to respond and process faster. We build schemas for such things as going to the dentist, going to a restaurant, attending a lecture, and so on. Schemas are very useful, reminding us what to expect and what to do in situations we have experienced before. But they are also responsible for errors of perception and memory: we see and remember what we expect to see.

As we get older, we naturally build up more, and firmer, schemas, making it harder to see with fresh eyes. This means it's harder for us to notice the details, and easier for us to misremember what we saw.

A small study involving 20 older adults (mean age 75) had participants look at 26 different pictures of common scenes (such as a farmyard or a bathroom) for about 10 seconds each, and asked them to remember as much as they could about the scenes. Later, they were shown 300 pictures of objects that were either in the scene, related to the scene (but not actually in it), or not commonly associated with the scene, and were required to say whether or not the objects had been in the picture. Brain activity was monitored during these tests. Performance was also compared with that from a previous identical study involving 22 young adults (mean age 23).

As expected and as is typical, there was a higher hit rate for schematic items and a higher rate of false memories for schematically related lures (items that belong to the schema but didn’t appear in the picture). True memories activated the typical retrieval network (medial prefrontal cortex, hippocampus/parahippocampal gyrus, inferior parietal lobe, right middle temporal gyrus, and left fusiform gyrus).

Activity in some of these regions (frontal-parietal regions, left hippocampus, right MTG, and left fusiform) distinguished hits from false alarms, supporting the idea that it’s more demanding to retrieve true memories than illusory ones. This contrasts with younger adults who in this and previous research have displayed the opposite pattern. The finding is consistent, however, with the theory that older adults tend to engage frontal resources at an earlier level of difficulty.

Older adults also displayed greater activation in the medial prefrontal cortex for both schematic and non-schematic hits than young adults did.

While true memories activated the typical retrieval network, and there were different patterns of activity for schematic vs non-schematic hits, there was no distinctive pattern of activity for retrieving false memories. However, there was increased activity in the middle frontal gyrus, middle temporal gyrus, and hippocampus/parahippocampal gyrus as a function of the rate of false memories.

Imaging also revealed that older adults, like younger adults, engage the ventromedial prefrontal cortex when retrieving schematic information, and that they do so to a greater extent. Activation patterns also support the role of the medial temporal lobe (MTL), and the posterior hippocampus/parahippocampal gyrus in particular, in distinguishing true memories from false. Note that schematic information is not this region's concern: there was no consistent difference in its activation for schematic vs non-schematic hits. Older adults did, however, show a shift within the hippocampus, with much of the activity moving to a more posterior region.

Sensory details are also important for distinguishing between true and false memories, but, apart from activity in the left fusiform gyrus, older adults — unlike younger adults — did not show any differential activation in the occipital cortex. This finding is consistent with previous research, and supports the conclusion that older adults don’t experience the recapitulation of sensory details in the same way that younger adults do. This, of course, adds to the difficulty they have in distinguishing true and false memories.

Older adults also showed differential activation of the right MTG, involved in gist processing, for true memories. Again, this is not found in younger adults, and supports the idea that older adults depend more on schematic gist information to assess whether a memory is true.

However, in older adults, increased activation of both the MTL and the MTG is seen as rates of false alarms increase, indicating that both gist and episodic memory contribute to their false memories. This is also in line with previous research, suggesting that memories of specific events and details can (incorrectly) provide support for false memories that are consistent with such events.

Older adults, unlike young adults, failed to show differential activity in the retrieval network for targets and lures (items that fit in with the schema, but were not in fact present in the image).

What does all this mean? Here’s what’s important:

  • older adults tend to use schema information more when trying to remember
  • older adults find it harder to recall specific sensory details that would help confirm a memory’s veracity
  • at all ages, gist processing appears to play a strong role in false memories
  • memory of specific (true) details can be used to endorse related (but false) details.

What can you do about any of this? One approach would be to make an effort to recall specific sensory details of an event rather than relying on the easier generic event that comes to mind first. So, for example, if you’re asked to go to the store to pick up orange juice, tomatoes and muesli, you might end up with more familiar items — a sort of default position, as it were, because you can’t quite remember what you were asked. If you make an effort to remember the occasion of being told — where you were, how the other person looked, what time of day it was, other things you talked about, etc — you might be able to bring the actual items to mind. A lot of the time, we simply don’t make the effort, because we don’t think we can remember.

https://www.eurekalert.org/pub_releases/2018-03/ps-fdg032118.php

Cognitive decline in old age related to poorer sleep

February, 2013

A new study confirms the role slow-wave sleep plays in consolidating memories, and reveals that one reason for older adults’ memory problems may be the quality of their sleep.

Recent research has suggested that sleep problems might be a risk factor in developing Alzheimer’s, and in mild cognitive impairment. A new study adds to this gathering evidence by connecting reduced slow-wave sleep in older adults to brain atrophy and poorer learning.

The study involved 18 healthy young adults (mostly in their 20s) and 15 healthy older adults (mostly in their 70s). Participants learned 120 word/nonsense-word pairs and were tested for recognition before going to bed. Their brain activity was recorded while they slept. Brain activity was also measured in the morning, when they were tested again on the word pairs.

As has been found previously, older adults showed markedly less slow-wave activity (both over the whole brain and specifically in the prefrontal cortex) than the younger adults. Again, as in previous studies, the biggest difference between young and older adults in terms of gray matter volume was found in the medial prefrontal cortex (mPFC). Moreover, significant differences were also found in the insula and posterior cingulate cortex. These regions, like the mPFC, have also been associated with the generation of slow waves.

When mPFC volume was taken into account, age no longer significantly predicted the extent of the decline in slow-wave activity — in other words, the decline in slow-wave activity appears to be due to the brain atrophy in the medial prefrontal cortex. Atrophy in other regions of the brain (precuneus, hippocampus, temporal lobe) was not associated with the decline in slow-wave activity when age was considered.

Older adults did significantly worse on the delayed recognition test than young adults. Performance on the immediate test did not predict performance on the delayed test. Moreover, the highest performers on the immediate test among the older adults performed at the same level as the lowest young adult performers — nevertheless, these older adults did worse the following day.

Slow-wave activity during sleep was significantly associated with performance on the next day’s test. Moreover, when slow-wave activity was taken into account, neither age nor mPFC atrophy significantly predicted test performance.

In other words, age relates to shrinkage of the prefrontal cortex, this shrinkage relates to a decline in slow-wave activity during sleep, and this decline in slow-wave sleep relates to poorer cognitive performance.
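This chain of reasoning rests on a standard mediation analysis: the predictor's effect should vanish once the mediator is included in the regression. A minimal sketch with synthetic, made-up data illustrates the logic (the numbers are invented for demonstration, not taken from the study):

```python
import numpy as np

# Hypothetical mediation sketch: age -> mPFC atrophy -> reduced slow-wave
# activity (SWA). If mPFC volume fully mediates the age effect, age should
# no longer predict SWA once volume is included in the regression.

rng = np.random.default_rng(1)
n = 500
age = rng.uniform(20, 80, n)
mpfc = -0.5 * age + rng.normal(scale=2.0, size=n)   # volume shrinks with age
swa = 0.8 * mpfc + rng.normal(scale=2.0, size=n)    # SWA driven by volume only

def ols_coefs(cols, y):
    """Ordinary least squares with an intercept column."""
    X = np.column_stack([np.ones(len(y))] + list(cols))
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_age_alone = ols_coefs([age], swa)[1]           # strongly negative
b_age_adjusted = ols_coefs([age, mpfc], swa)[1]  # near zero once volume is in
```

With the mediator (mPFC volume) in the model, the age coefficient collapses toward zero, mirroring the study's finding that atrophy, not age itself, accounts for the decline in slow-wave activity.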

The findings confirm the importance of slow-wave brainwaves for memory consolidation.

All of this suggests that poorer sleep quality contributes significantly to age-related cognitive decline, and that efforts should be made to improve quality of sleep rather than just assuming lighter, more disturbed sleep is ‘natural’ in old age!

Why acute stress makes it hard to think properly

October, 2012

A rat study indicates that acute stress disrupts feedback loops in the prefrontal cortex that may be keeping information alive in working memory.

Stress is a major cause of workplace accidents, and most of us are only too familiar with the effects of acute stress on our thinking. But although the cognitive effects are clear, research has offered little understanding of how stress produces them. A new rat study sheds some light.

In the study, brain activity was monitored while five rats performed a working memory task during acute noise stress. Under these stressful conditions, the rats performed dramatically worse on their working memory task, with performance dropping from an average of 93% success to 65%.

The stress also significantly increased the discharge rate of a subset of neurons in the medial prefrontal cortex during two phases of the task: planning and assessment.

This brain region is vital for working memory and executive functions such as goal maintenance and emotion regulation. The results suggest that the firing and re-firing of these neurons keeps recent information ‘fresh’. When the re-firing is delayed, the information can be lost.

What seems to be happening is that the stress is causing these neurons to work even more furiously, but instead of performing their normal task — concentrating on keeping important information ‘alive’ during brief delays — they are reacting to all the other, distracting and less relevant, stimuli.

The findings contradict the view that stress simply suppresses prefrontal cortex activity, and suggest a different approach to treatment, one that emphasizes shutting out distractions.

The findings are also exciting from a theoretical viewpoint, suggesting as they do that this excitatory recursive activity of neurons within the prefrontal cortex provides the neural substrate for working memory. That is, we 'hold' information in the front of our mind through reverberating feedback loops within this network of neurons, which keep information alive during the roughly 1.5 seconds of our working memory 'span'.
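The reverberating-loop idea can be illustrated with a toy simulation (a hypothetical sketch, not the study's model): a unit whose activity leaks away over time is kept alive only if recurrent excitation re-injects what the leak removes.

```python
# Toy sketch of a 'reverberating feedback loop': a single population whose
# activity decays with time constant tau, while recurrent excitation (gain g)
# re-injects its own output. With g near 1 the trace persists; with g = 0
# it dies away. All parameter values here are illustrative.

def simulate(gain, tau=0.2, dt=0.01, steps=150, initial=1.0):
    a = initial
    for _ in range(steps):
        a += dt * (-a / tau + gain * a / tau)  # leak + recurrent re-excitation
    return a

no_recurrence = simulate(gain=0.0)    # trace decays toward zero
with_recurrence = simulate(gain=1.0)  # trace is sustained
```

The point of the sketch is simply that without the recursive re-firing, the memory trace decays within a fraction of a second, consistent with the idea that delayed re-firing loses the information.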

Visual perception - a round-up of recent news

July, 2011

Memory begins with perception. Here's a round-up of recent research into visual perception.

Memory begins with perception. We can’t remember what we don’t perceive, and our memory of things is influenced by how we perceive them.

Our ability to process visual scenes has been the subject of considerable research. How do we process so many objects? Some animals do it by severely limiting what they perceive, but humans can perceive a vast array of features. We need some other way of filtering the information. Moreover, it’s greatly to our advantage that we can process the environment extremely quickly. So that’s two questions: how do we process so much, and so fast?

Brain region behind the scene-facilitation effect identified

A critical factor, research suggests, is our preferential processing of interacting objects — we pick out interacting objects more quickly than unrelated objects. A new study has now identified the region of the brain responsible for this ‘scene-facilitation effect’. To distinguish between the two leading contenders, the lateral occipital cortex and the intraparietal sulcus, transcranial magnetic stimulation was used to temporarily shut down each region in turn, while volunteers viewed brief flashes of object pairs (half of which were interacting with each other) and decided whether these glimpsed objects matched the presented label.

The scene-facilitation effect was eliminated when the lateral occipital cortex was shut down, while shutting down the intraparietal sulcus made no difference.

The little we need to identify a scene

The scene-facilitation effect is an example of how we filter and condense the information in our visual field, but we also work in the opposite direction — we extrapolate.

When ten volunteers had their brains scanned while they viewed color photographs and line drawings of six categories of scenes (beaches, city streets, forests, highways, mountains and offices), brain activity was nearly identical for the two formats. That is, researchers could tell, with a fair amount of success, what category of scene a participant was looking at just from the pattern of brain activity in the ventral visual cortex, regardless of whether the picture was a color photo or a line drawing. When they made mistakes, the mistakes were similar for the photos and the drawings.

In other words, most of what the brain is responding to in the photo is also evident in the line drawing.

In order to determine what those features were, the researchers progressively removed some of the lines in the line drawings. Even when up to 75% of the pixels in a line drawing were removed, participants could still identify what the scene was 60% of the time — as long as the important lines were left in, that is, those showing the broad contours of the scene. If only the short lines, representing details like leaves or windows, were left, participants became dramatically less accurate.

The findings cast doubt on some models of human visual perception which argue that people need specific information that is found in photographs to classify a scene.

Consistent with previous research, activity in the parahippocampal place area and the retrosplenial cortex was of greatest importance.

The brain performs visual search near optimally

Visual search involves picking out a target in a sea of other objects, and it's one of the most important visual tasks we do. It's also (not surprisingly, given its evolutionary importance) something we are very, very good at. In fact, a new study reveals that we're pretty near optimal.

Of course we make mistakes and miss things. But these failures happen not because of our incompetence, but because of the complexity of the task.

In the study, participants were shown sets of lines that might or might not contain a line oriented in a particular way. Each screen was shown for only a fraction of a second, and the contrast of each line was randomly varied, making the target easier or more difficult to detect. The variation in contrast was designed as a model for an important variable in visual search: the reliability of the sensory information. An optimal observer would take the varying reliability of the items into account, weighting each piece of information according to its perceived reliability, and then combine the weighted information according to a specific integration rule. This optimal process had been calculated in advance, and the participants' performance matched it.

The computer model that simulated this performance, and that matched the human performance, used groups of (simulated) neurons that responded differently to different line orientations.

In other words, it appears that we are able, very quickly, to integrate information coming from multiple locations, while taking into account the reliability of the different pieces of information, and we do this through the integration of information coming from different groups of neurons, each group of which is responding to different bits of information.
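In its simplest form, the reliability-weighted integration rule described above amounts to a precision-weighted average: each estimate is weighted by the inverse of its variance. A minimal sketch (illustrative only; the study's actual model operated on simulated neural population responses):

```python
import numpy as np

# Hypothetical sketch of reliability-weighted integration: each "location"
# provides a noisy estimate of a feature (e.g. line orientation) plus a
# variance (low variance = high contrast = reliable). The optimal combined
# estimate weights each input by its precision (inverse variance).

def combine_estimates(estimates, variances):
    """Precision-weighted average of independent noisy estimates."""
    estimates = np.asarray(estimates, dtype=float)
    precisions = 1.0 / np.asarray(variances, dtype=float)
    weights = precisions / precisions.sum()
    combined = (weights * estimates).sum()
    combined_variance = 1.0 / precisions.sum()  # never worse than best input
    return combined, combined_variance

# A high-contrast (reliable) item dominates a low-contrast (noisy) one:
est, var = combine_estimates([10.0, 20.0], [1.0, 9.0])
```

Note that the combined variance is smaller than that of even the most reliable input, which is why integrating across locations, weighted by reliability, outperforms trusting any single one.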

Another recent study into visual search has found that, when people prepare to look for very familiar object categories (people or cars) in natural scenes, activity in their visual cortex is very similar to that seen when they are actually looking at the objects in the scenes. Moreover, the precise activity in the object-selective cortex (OSC) predicted performance in detecting the target, while preparatory activity in the early visual cortex (V1) was actually negatively related to search performance. It seems that these two regions of the visual cortex are linked to different search strategies, with the OSC involved in relatively abstract search preparation and V1 in more specific, imagery-like preparation. Activity in the medial prefrontal cortex also reflected later target detection performance, suggesting that this region may be the source of top-down processing.

The findings demonstrate the role of preparatory and top-down processes in guiding visual search (and remind us that these processes can bias us against seeing what we’re looking for, just as easily as they help us).

'Rewarding' objects can't be ignored

Another aspect of visual search is that some objects just leap out at us and capture our attention. Loud noises and fast movement are the most obvious attributes that snag our gaze; these are potential threats, so it's no wonder we've evolved to pay attention to such things. We're also drawn to potential rewards: prospective mates, food, drink.

What about rewards that are only temporarily rewarding? Do we move on easily, able to ignore previously rewarding items as soon as they lose their relevance?

In a recent study, people spent an hour searching for red or green circles in an array of many differently colored circles. The red and green circles were always followed by a monetary reward (10 cents for one color, and 1 cent for the other). Afterwards, participants were asked to search for particular shapes, and color was no longer relevant or rewarded. However, when, occasionally, one of the shapes was red or green, reaction times slowed, demonstrating that these were distracting (even though the participants had been told to ignore this if it happened).

This distraction persisted for weeks after the original learning session. Interestingly, people who scored highly on a questionnaire measuring impulsivity were more likely to be distracted by these no-longer-relevant items.

The findings indicate that stimuli previously associated with reward continue to capture attention regardless of their relevance to the task in hand. There are implications here that may help in the development of more effective treatments for drug addiction, obesity and ADHD.

People make an image memorable

What makes an image memorable? It’s always been assumed that visual memory is too subjective to allow a general answer to this question. But an internet study has found remarkable consistency among hundreds of people who viewed images from a collection of about 10,000 images, some of which were repeated, and decided whether or not they had seen the image before. The responses generated a memorability rating for each image. Once this had been collated, the researchers made "memorability maps" of each image by asking people to label all the objects in the images. These maps were then used to determine which objects make an image memorable.

In general, images with people in them were the most memorable, followed by images of human-scale space — such as the produce aisle of a grocery store — and close-ups of objects. Least memorable were natural landscapes, although those could be memorable if they featured an unexpected element, such as shrubbery trimmed into an unusual shape.

Computer modeling then allowed various features for each image (such as color, or the distribution of edges) to be correlated with the image's memorability. The end result was an algorithm that can predict memorability of images the computational model hasn't "seen" before.
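As a rough illustration of this kind of feature-to-memorability mapping (a hypothetical sketch, not the authors' algorithm; the features and data below are invented), one can fit a simple ridge regression from image features to memorability scores:

```python
import numpy as np

# Illustrative sketch: predicting a memorability score from simple image
# features via ridge regression. The feature set and data are made up for
# demonstration; the real study used richer features and labeled objects.

rng = np.random.default_rng(0)

# Fake dataset: each row = one image's features
# (e.g. mean color, edge density, a "contains people" score).
X = rng.normal(size=(200, 3))
true_w = np.array([0.1, -0.2, 1.5])   # "contains people" dominates, as in the study
y = X @ true_w + rng.normal(scale=0.1, size=200)

# Ridge regression: w = (X^T X + lambda*I)^{-1} X^T y
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)

def predict_memorability(features):
    """Predict memorability for a new image's feature vector."""
    return features @ w
```

Fitting such a model to the crowd-sourced memorability ratings is what lets the algorithm score images it hasn't "seen" before.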

The researchers are now doing a follow-up study to test longer-term memorability, as well as working on adding more detailed descriptions of image content.

Reference: 

Kim JG, Biederman I, Juan C-H. 2011. The Benefit of Object Interactions Arises in the Lateral Occipital Cortex Independent of Attentional Modulation from the Intraparietal Sulcus: A Transcranial Magnetic Stimulation Study. The Journal of Neuroscience, 31(22), 8320-8324. http://www.jneurosci.org/content/31/22/8320.abstract

Walther DB, Chai B, Caddigan E, Beck DM, Fei-Fei L. 2011. Simple line drawings suffice for functional MRI decoding of natural scene categories. Proceedings of the National Academy of Sciences, 108(23), 9661-9666. http://www.pnas.org/content/108/23/9661.abstract

Ma WJ, Navalpakkam V, Beck JM, van den Berg R, Pouget A. 2011. Behavior and neural basis of near-optimal visual search. Nature Neuroscience, 14(6), 783-790. http://dx.doi.org/10.1038/nn.2814

Peelen MV, Kastner S. 2011. A neural basis for real-world visual search in human occipitotemporal cortex. Proceedings of the National Academy of Sciences, 108(29), 12125-12130. http://www.pnas.org/content/108/29/12125.abstract

Anderson BA, Laurent PA, Yantis S. 2011. Value-driven attentional capture. Proceedings of the National Academy of Sciences, 108(25), 10367-10371. http://www.pnas.org/content/108/25/10367.abstract

Isola P, Xiao J, Oliva A, Torralba A. 2011. What makes an image memorable? Paper presented at the IEEE Conference on Computer Vision and Pattern Recognition, June 20-25, Colorado Springs.

Natural scenes have positive impact on brain

October, 2010

Images of nature have been found to improve attention. A new study shows that natural scenes encourage different brain regions to synchronize.

A couple of years ago I reported on a finding that walking in the park, and (most surprisingly) simply looking at photos of natural scenes, could improve memory and concentration (see below). Now a new study helps explain why. The study examined brain activity while 12 male participants (average age 22) looked at images of tranquil beach scenes and non-tranquil motorway scenes. On half the presentations they concurrently listened to the same sound associated with both scenes (waves breaking on a beach and traffic moving on a motorway produce a similar sound, perceived as a constant roar).

Intriguingly, the natural, tranquil scenes produced significantly greater effective connectivity between the auditory cortex and medial prefrontal cortex, and between the auditory cortex and posterior cingulate gyrus, temporoparietal cortex and thalamus. It’s of particular interest that this is an example of visual input affecting connectivity of the auditory cortex, in the presence of identical auditory input (which was the focus of the research). But of course the take-home message for us is that the benefits of natural scenes for memory and attention have been supported.

Previous study:

Many of us who work indoors are familiar with the benefits of a walk in the fresh air, but a new study gives new insight into why, and how, it works. In two experiments, researchers found memory performance and attention spans improved by 20% after people spent an hour interacting with nature. The intriguing finding was that this effect was achieved not only by walking in the botanical gardens (versus walking along main streets of Ann Arbor), but also by looking at photos of nature (versus looking at photos of urban settings). The findings are consistent with a theory that natural environments are better at restoring attention abilities, because they provide a more coherent pattern of stimulation that requires less effort, as opposed to urban environments that provide complex and often confusing stimulation, capturing attention dramatically and requiring directed attention (e.g., to avoid being hit by a car).

Light shed on the cause of the most common learning disability

September, 2010

The discovery that the mutated NF1 gene inhibits working memory through too much GABA in the prefrontal cortex offers hope for an effective therapy for those with the most common learning disability.

Neurofibromatosis type 1 (NF1), caused by a mutation in the gene that makes the protein neurofibromin, is the most common cause of learning disabilities. Mouse research has now revealed that these mutations are associated with higher levels of the inhibitory neurotransmitter GABA in the medial prefrontal cortex. Brain imaging in humans with NF1 similarly showed reduced activity in the prefrontal cortex when performing a working memory task, with the levels of activity correlating with task performance. It seems, therefore, that this type of learning disability results from too much GABA in the prefrontal cortex inhibiting the activity of working memory. Potentially, these deficits could be corrected with a drug that normalizes the effect of the excess GABA. The researchers are currently studying the effect of the drug lovastatin on NF1 patients.
