Working Memory

Latest Research News

A small study fitted 29 young adults (aged 18-31) and 31 older adults (aged 55-82) with a device that recorded steps taken and the vigor and speed with which they were made. It found that older adults with a higher step rate performed better on memory tasks than those who were more sedentary. There was no such effect among the younger adults.

Improved memory was found for both visual and episodic memory, and was strongest with the episodic memory task. This required recalling which name went with a person's face — an everyday task that older adults often have difficulty with.

However, the effect on visual memory had more to do with time spent sedentary than step rate. With the face-name task, both time spent sedentary and step rate were significant factors, and both factors had a greater effect than they had on visual memory.

Depression and hypertension were both adjusted for in the analysis.

There was no significant difference in executive function related to physical activity, although previous studies have found an effect. Less surprisingly, there was also no significant effect on verbal memory.

Both findings might be explained in terms of cognitive demand. The evidence suggests that the effect of physical exercise is only seen when the task is sufficiently cognitively demanding. No surprise that verbal memory (which tends to be much less affected by age) didn't meet that challenge, but interestingly, the older adults in this study were also less impaired on executive function than on visual memory. This is unusual, and reminds us that, especially with small studies, you cannot ignore the individual differences.

This general principle may also account for the lack of effect among younger adults. It is interesting to speculate whether physical activity effects would be found if the younger adults were given much more challenging tasks (either by increasing their difficulty, or selecting a group who were less capable).

Step Rate was calculated as total steps taken divided by the total minutes spent in light, moderate, and vigorous activities, on the basis that this provides an independent indicator of physical activity intensity (how briskly one is walking). Sedentary Time was the total minutes spent sedentary.
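As a concrete sketch, the two measures might be computed from minute-level accelerometer data like this (a minimal illustration; the data structure and function name are my own assumptions, not taken from the study):

```python
def activity_measures(minutes):
    """Compute step rate and sedentary time from per-minute records.

    minutes: list of (steps, intensity) tuples, one per minute, where
    intensity is 'sedentary', 'light', 'moderate', or 'vigorous'.
    """
    total_steps = sum(steps for steps, _ in minutes)
    # Active minutes: all minutes spent in light, moderate, or vigorous activity
    active_minutes = sum(
        1 for _, intensity in minutes
        if intensity in ('light', 'moderate', 'vigorous')
    )
    # Step rate: steps per active minute, an indicator of walking intensity
    step_rate = total_steps / active_minutes if active_minutes else 0.0
    # Sedentary time: total minutes spent sedentary
    sedentary_time = sum(1 for _, intensity in minutes if intensity == 'sedentary')
    return step_rate, sedentary_time
```

So someone logging 1,100 steps over 10 active minutes has a step rate of 110 steps per minute, regardless of how many sedentary minutes surround them.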

http://www.eurekalert.org/pub_releases/2015-11/bumc-slp112415.php

[4045] Hayes SM, Alosco ML, Hayes JP, Cadden M, Peterson KM, Allsup K, Forman DE, Sperling RA, Verfaellie M. Physical Activity Is Positively Associated with Episodic Memory in Aging. Journal of the International Neuropsychological Society [Internet]. 2015 ;21(Special Issue 10):780 - 790. Available from: http://journals.cambridge.org/article_S1355617715000910

Brain imaging while 11 individuals with traumatic brain injury and 15 healthy controls performed a working memory task has revealed that those with TBI showed greater connectivity between the hemispheres in the fronto-parietal regions (involved in working memory) and less organized flow of information from posterior to anterior parts.

The study used a new task, known as CapMan, which allows working memory capacity and the mental manipulation of information in working memory to be distinguished from each other.

The discovery may help in the development of more effective therapies.

http://www.eurekalert.org/pub_releases/2015-10/kf-njs102015.php

I've written at length about implementation plans in my book “Planning to Remember: How to Remember What You're Doing and What You Plan to Do”. Essentially, they're intentions you make in which you explicitly tie together your intended action with a specific situational cue (such as seeing a post box).

A new study looked at the benefits of using an implementation intention for those with low working memory capacity.

The study involved 100 college students, half of whom were instructed to form an implementation intention for the event-based prospective memory task. This task was embedded in a lexical decision task, in which the student had to press a different key depending on whether a word or a pseudo-word was presented, and to press the spacebar when a waiting message appeared between trials. However (and this is the prospective element), if they saw one of four cue words, they were to stop doing the lexical decision task and say aloud both the cue word and its associated target word. They were then given the four word pairs to learn.

After they had mastered the word pairs, students in the implementation intention group were also given various sentences to say aloud, of the form: “When I see the word _______ (hotel, eraser, thread, credit) while making a word decision, I will stop doing the lexical decision task and call out _____-______ (hotel-glass, eraser-pencil, thread-book, credit-card) to the experimenter during the waiting message.” They said each sentence (relating to each word pair) twice.

Both groups were given a 5-minute survey to fill out before beginning the trials. At the end of the trials, their working memory was assessed using both the Operation Span task and the Reading Span task.

Overall, as expected, the implementation intention group performed significantly better on the prospective memory task. Unlike other research, there was no significant effect of working memory capacity on prospective memory performance. But this is because other studies haven't used implementation intentions — among those who made no such implementation plans, low working memory capacity did indeed negatively affect prospective memory performance. However, those with low working memory capacity did just as well as those with high WMC when they formed implementation intentions (in fact, they did slightly better).

The most probable benefit of the strategy is that it heightened sensitivity to the event cues, something which is of particular value to those with low working memory capacity, who by definition have poorer attentional control.

It should be noted that this was an attentionally demanding task — there is some evidence that working memory ability only relates to prospective memory ability when the prospective memory task requires a high amount of attentional demand. But what constitutes “attentionally demanding” varies depending on the individual.

Perhaps this bears on evidence suggesting that a U-shaped function might apply, with a certain level of cognitive ability needed to benefit from implementation intentions, while those above a certain level find them unnecessary. But again, this depends on how attentionally demanding the task is. We can all benefit from forming implementation intentions in very challenging situations. It should also be remembered that WMC is affected not only long-term, by age, but also temporarily, by stress, anxiety, and distraction.

Of course, this experiment framed the situation in a very short-term way, with the intentions only needing to be remembered for about 15 minutes. A more naturalistic study is needed to confirm the results.

This is just a preliminary study presented at a recent conference, so we can't give it too much weight, but the finding is consistent with what we know about working memory, and it is of some usefulness.

The study tested the ability of young-adult native English speakers to store spoken words in short-term memory. The English words were spoken either with a standard American accent or with a pronounced but still intelligible Korean accent. Every now and then, the listeners (all unfamiliar with a Korean accent) would be asked to recall the last three words they had heard.

While there was no difference for the last and second-last words, the third word back was remembered significantly better when it was spoken in the familiar accent (80% vs 70%).

The finding suggests that the effort listeners needed to put into understanding the foreign accent used up some of their working memory, reducing their ability to hold onto the information.

The finding is consistent with previous research showing that people with hearing difficulties or who are listening in difficult circumstances (such as over a bad phone line or in a loud room) are poorer at remembering and processing the spoken information compared to individuals who are hearing more clearly.

On a practical level, this finding suggests that, if you're receiving important information (for example, medical information) from someone speaking with an unfamiliar accent, you should make special efforts to remember and process the information: for example, by asking them to speak more slowly, by taking notes, and by asking for clarification. Those providing such information should take on board the idea that if their listeners are likely to be unfamiliar with their accent, they need to take greater care to speak slowly and clearly, with appropriate levels of repetition and elaboration. Gestures are also helpful for reducing the load on working memory.

http://www.eurekalert.org/pub_releases/2015-05/asoa-htu050715.php

Van Engen, K. et al. 2015. Downstream effects of accented speech on memory. Presentation 1aSC4 at the 169th meeting of the Acoustical Society of America, held May 18-22, 2015 in Pittsburgh, Pennsylvania.

I've reported before on the idea that the drop in working memory capacity commonly seen in old age is related to the equally typical increase in distractibility. Studies of brain activity have also indicated that lower WMC is correlated with greater storage of distractor information. So those with higher WMC, it's thought, are better at filtering out distraction and focusing only on the pertinent information. Older adults may show a reduced WMC, therefore, because their ability to ignore distraction and irrelevancies has declined.

Why does that happen?

A new, large-scale study using a smartphone game suggests that the root cause is a change in the way we hold items in working memory.

The study involved 29,631 people aged 18-69, who played a smartphone game in which they had to remember the positions of an increasing number of red circles. Yellow circles, which had to be ignored, could also appear — either at the same time as the red circles, or after them. Data from this game revealed both WMC (how many red circle locations the individual could remember), and distractibility (how many red circle locations they could remember in the face of irrelevant yellow circles).

Now this game isn't simply a way of measuring WMC. It enables us to make an interesting distinction based on the timing of the distraction. If the yellow circles appeared at the same time as the red ones, they are providing distraction when you are trying to encode the information. If they appear afterward, the distraction occurs when you are trying to maintain the information in working memory.

It would seem commonsensical that distraction at the time of encoding must be the main problem, but the fascinating finding of this study is that it was distraction during the delay (while the information is being maintained in working memory) that was the greater problem. And it was this distraction that became more and more marked with increasing age.

The study is a follow-up to a smaller 2014 study that included two experiments: a lab experiment involving 21 young adults, and data from the same smartphone game involving only the younger cohort (18-29 years; 3247 participants).

This study demonstrated that distraction during encoding and distraction during delay were independent contributory factors to WMC, suggesting that separate mechanisms are involved in filtering out distraction at encoding and maintenance.

Interestingly, analysis of the data from the smartphone game did indicate some correlation between the two in that context. One reason may be that participants in the smartphone game were exposed to higher load trials (the lab study kept WM load constant); another might be that they were in more distracting environments.

While in general researchers have till now assumed that the two processes are not distinct, it has been theorized that distractor filtering at encoding may involve a 'selective gating mechanism', while filtering during WM maintenance may involve a shutting down of perception. The former has been linked to a gating mechanism in the striatum in the basal ganglia, while the latter has been linked to an increase in alpha waves in the frontal cortex, specifically, the left middle frontal gyrus. The dorsolateral prefrontal cortex may also be involved in distractor filtering at encoding.

To return to the more recent study:

  • there was a significant decrease in WMC with increasing age in all conditions (no distraction; encoding distraction; delay distraction)
  • for older adults, the decrease in WMC was greatest in the delay distraction condition
  • when 'distraction cost' was calculated (((ND score − (ED or DD score))/ND score) × 100), there was a significant correlation between delay distraction cost and age, but not between encoding distraction cost and age
  • for older adults, performance in the encoding distraction condition was better predicted by performance in the no distraction condition than it was among the younger groups
  • this correlation was significantly different between the 30-39 age group and the 40-49 age group, between the 40s and the 50s, and between the 50s and the 60s — showing that this is a progressive change
  • older adults with a higher delay distraction cost (ie, those more affected by distractors during delay) also showed a significantly greater correlation between their no-distraction performance and encoding-distraction performance.
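The 'distraction cost' formula in the third point above can be written as a one-line function (a trivial sketch; the function and parameter names are my own):

```python
def distraction_cost(nd_score, distracted_score):
    """Percentage drop from the no-distraction (ND) baseline to the
    encoding-distraction (ED) or delay-distraction (DD) condition:
    ((ND - ED_or_DD) / ND) * 100.
    """
    return (nd_score - distracted_score) / nd_score * 100
```

So a participant who scored 8 without distraction and 6 under delay distraction would have a delay distraction cost of 25%.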

All of this suggests that older adults are focusing more attention during encoding even when there is no distraction, and that they are doing so to compensate for their reduced ability to maintain information in working memory.

This suggests several approaches to improving older adults' ability to cope:

  • use perceptual discrimination training to help improve WMC
  • make working memory training more about learning to ignore certain types of distraction
  • reduce distraction — modify daily tasks to make them more "older adult friendly"
  • (my own speculation) use meditation training to improve frontal alpha rhythms.

You can participate in the game yourself, at http://thegreatbrainexperiment.com/

http://medicalxpress.com/news/2015-05-smartphone-reveals-older.html

[3921] McNab F, Zeidman P, Rutledge RB, Smittenaar P, Brown HR, Adams RA, Dolan RJ. Age-related changes in working memory and the ability to ignore distraction. Proceedings of the National Academy of Sciences [Internet]. 2015 ;112(20):6515 - 6518. Available from: http://www.pnas.org/content/112/20/6515

McNab, F., & Dolan, R. J. (2014). Dissociating distractor-filtering at encoding and during maintenance. Journal of Experimental Psychology. Human Perception and Performance, 40(3), 960–7. doi:10.1037/a0036013

The number of items a person can hold in short-term memory is strongly correlated with their IQ. But short-term memory has recently been found to vary along another dimension as well: some people remember (‘see’) the items in short-term memory more clearly and precisely than other people. This discovery has led to the hypothesis that both of these factors should be considered when measuring working memory capacity. But do both these aspects correlate with fluid intelligence?

A new study presented 79 students with screen displays fleetingly showing either four or eight items. After a one-second blank screen, one item was returned and the subject was asked whether that object had previously been in a particular location. Their ability to detect large and small changes in the items provided an estimate of how many items the individual could hold in working memory, and how clearly they remembered them. These measures were compared with individuals’ performance on standard measures of fluid intelligence.

Analysis of the data found that these two measures of working memory — number and clarity — are completely independent of each other, and that it was the number factor only that correlated with intelligence.

This is not to say that clarity is unimportant! Only that it is not related to intelligence.

Cognitive decline is common in those with multiple sclerosis, but not everyone is so afflicted. What governs whether an individual will suffer cognitive impairment? One proposed factor is cognitive reserve, and a new study adds to the evidence that cognitive reserve does indeed help protect against cognitive decline, as it does with age-related decline.

The study involved 50 people with multiple sclerosis and a control group of 157 clinically healthy adults of similar age and education level. It found that those with more education (defined as more than 13 years of schooling) were protected against cognitive impairment. This is not simply a matter of the more educated starting off from a higher base! MS patients with low education performed more poorly on a demanding cognitive test than healthy controls with the same level of education, while MS patients with high education performed at the same level as their matched controls.

On the other hand, occupation (also implicated as a factor in cognitive reserve, though a less important one than education) did not have an effect. Nor did fatigue.

Cognitive performance was evaluated using the Paced Auditory Serial Addition Test (PASAT), in which a series of single digit numbers are presented and the two most recent digits must be summed. This test has high sensitivity in detecting MS-related cognitive deficits as it relies strongly on working memory and information processing speed abilities. The poorer performance of low-education MS patients was only found at higher speeds.
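To illustrate the task demand, here is a minimal sketch of the PASAT response rule (my own illustration, assuming each digit must be added to the one immediately before it, as described above):

```python
def pasat_targets(digits):
    """For each digit after the first, the correct spoken response is
    the sum of that digit and the immediately preceding one."""
    return [a + b for a, b in zip(digits, digits[1:])]
```

For the sequence 3, 5, 2, 8, the correct responses are 8, 7, 10; note that the participant must keep holding the previous digit in working memory while discarding their own spoken answer, which is what makes the task demanding at speed.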

http://www.eurekalert.org/pub_releases/2013-07/ip-hem070213.php

[3474] Scarpazza C, Braghittoni D, Casale B, Malagú S, Mattioli F, di Pellegrino G, Ladavas E. Education protects against cognitive changes associated with multiple sclerosis. Restorative Neurology and Neuroscience [Internet]. 2013 ;31(5):619 - 631. Available from: http://dx.doi.org/10.3233/RNN-120261

Evidence is accumulating that age-related cognitive decline is rooted in three related factors: processing speed slows down (because of myelin degradation); the ability to inhibit distractions becomes impaired; working memory capacity is reduced.

A new study adds to this evidence by looking at one particular aspect of age-related cognitive decline: memory search.

The study put 185 adults aged 29-99 (average age 67) through three cognitive tests: a vocabulary test, digit span (a working memory test), and the animal fluency test, in which you name as many animals as you can in one minute.

Typically, in the animal fluency test, people move through semantic categories such as ‘pets’, ‘big cats’, and so on. The best performers are those who move from category to category with optimal timing — i.e., at the point where the category has been sufficiently exhausted that efforts would be better spent on a new one.

Participants recalled on average 17 animal names, with a range from 5 to 33. While there was a decline with age, it wasn’t particularly marked until the 80s (an average of 18.3 for those in their 30s, 17.5 for those in their 60s, 16.5 for the 70s, 12.8 for the 80s, and 10 for the 90s). Digit span did show a decline, but it was not significant (from 17.5 down to 15.3), while vocabulary (consistent with previous research) showed no decline with age.

But all this is by the by — the nub of the experiment was to discover how individuals were searching their memory. This required a quite complicated analysis, which I will not go into, except to mention two important distinctions. The first is between:

  • global context cue: activates each item in the active category according to how strong it is (how frequently it has been recalled in the past);
  • local context cue: activates each item in relation to its semantic similarity to the previous item recalled.

A further distinction was made between static and dynamic processes: in dynamic models, it is assumed the user switches between local and global search. This, it is further assumed, is because memory is ‘patchy’ – that is, information is represented in clusters. Within a cluster, we use local cues, but to move from one cluster to another, we use global cues.

The point of all this was to determine whether age-related decline in memory search has to do with:

  • Reduced processing speed,
  • Persisting too long on categories, or
  • Inability to maintain focus on local cues (this would relate it back to the inhibition deficit).

By modeling the exact recall patterns, the researchers ascertained that the recall process is indeed dynamic, although the points of transition are not clearly understood. The number of transitions from one cluster to another was negatively correlated with age; it was also strongly positively correlated with performance (number of items recalled). Digit span, assumed to measure ‘cognitive control’, was also negatively correlated with number of transitions, but, as I said, was not significantly correlated with age.

In other words, it appears that there is a qualitative change with age, that increasing age is correlated with increased switching, and reduced cognitive control is behind this — although it doesn’t explain it all (perhaps because we’re still not able to fully measure cognitive control).

At a practical level, the message is that memory search may become less efficient because, as people age, they tend to change categories too frequently, before they have exhausted their full potential. While this may well be a consequence of reduced cognitive control, it seems likely (to me at least) that making a deliberate effort to fight the tendency to move on too quickly will pay dividends for older adults who want to improve their memory retrieval abilities.

Nor is this restricted to older adults — since age appears to be primarily affecting performance through its effects on cognitive control, it is likely that this applies to those with reduced working memory capacity, of any age.

[3378] Hills TT, Mata R, Wilke A, Samanez-Larkin GR. Mechanisms of Age-Related Decline in Memory Search Across the Adult Life Span. Developmental Psychology. 2013 :No - Pagination Specified.

Preliminary findings from a small study show that older adults, after learning to use Facebook, performed about 25% better on tasks designed to measure their ability to continuously monitor and to quickly add or delete the contents of their working memory (updating).

Report on Futurity

Another study looking into the urban-nature effect takes a different tack from those I’ve previously reported on, which look at the attention-refreshing benefits of natural environments.

In this study, a rural African people living in a traditional village were compared with those who had moved to town. Participants in the first experiment included 35 adult traditional Himba, 38 adolescent traditional Himba (mean age 12), 56 adult urbanized Himba, and 37 adolescent urbanized Himba. All traditional Himba had had little contact with the Western world and only spoke their native language; all adult urbanized Himba had grown up in traditional villages and only moved to town later in life (average length of time in town was 6 years); all adolescent urbanized Himba had grown up in the town and usually attended school regularly.

The first experiment assessed the ability to ignore peripheral distracting arrows while focusing on the direction (left or right) of a central arrow.

There was a significant effect of urbanization, with attention being more focused (less distracted) among the traditional Himba. Traditional Himba were also slower than urbanized Himba — but note that there was substantial overlap in response times between the two groups. There was no significant effect of age (that is, adolescents were faster than adults in their responses, but the effect of the distracters was the same across age groups), or a significant interaction between age and urbanization.

The really noteworthy part of this was that the urbanization effect on task performance was the same for the adults who had moved to town only a few years earlier as for the adolescents who had grown up and been educated in the town. In other words, this does not appear to be an educational effect.

The second experiment looked at whether traditional Himba would perform more like urbanized Himba if there were other demands on working memory. This was done by requiring them to remember three numbers (the number words in participants’ language are around twice as long as the same numbers in English, hence their digit span is shorter).

While traditional Himba were again more focused than the urbanized in the no-load condition, when there was this extra load on working memory, there was no significant difference between the two groups. Indeed, attention was de-focused in the traditional Himba under high load to the same degree as it was for urbanized Himba under no-load conditions. Note that increasing the cognitive load made no difference for the urbanized group.

There was also a significant (though not dramatic) difference between the traditional and urbanized Himba in terms of performance on the working memory task, with traditional Himba remembering an average of 2.46/3 digits and urbanized Himba 2.64.

Experiment 3 tested the two groups on a working memory task, a standard digit span test (although, of course, in their native language). Random sequences of 2-5 digits were read out, with the participant being required to say them aloud immediately after. Once again, the urbanized Himba performed better than the traditional Himba (4.32 vs 3.05).

In other words, the problem does not seem to be that urbanization depletes working memory, rather, that urbanization encourages disengagement (i.e., we have the capacity, we just don’t use it).

In the fourth experiment, this idea was tested more directly. Rather than the arrows used in the earlier experiments, black and white faces were used, with participants required to determine the color of the central face. Additionally, inverted faces were sometimes used (faces are stimuli we pay a lot of attention to, but inverting them reduces their ‘faceness’, thus making them less interesting).

An additional group of Londoners was also included in this experiment.

While urbanized Himba and Londoners were, again, more de-focused than traditional Himba when the faces were inverted, for the ‘normal’ faces, all three groups were equally focused.

Note that the traditional Himba were not affected by the changes in the faces, being equally focused regardless of the stimulus. It was the urbanized groups that became more alert when the stimuli became more interesting.

Because it may have been a race-discrimination mechanism coming into play, the final experiment returned to the direction judgment, with faces either facing left or right. This time the usual results occurred – the urbanized groups were more de-focused than the traditional group.

In other words, just having faces was not enough; it was indeed the racial discrimination that engaged the urbanized participants (note that both these urban groups come from societies where racial judgments are very salient – multicultural London, and post-apartheid Namibia).

All of this indicates that the attention difficulties that appear so common nowadays are less because our complex environments are ‘sapping’ our attentional capacities, and more because we are in a different attentional ‘mode’. It makes sense that in environments that contain so many more competing stimuli, we should employ a different pattern of engagement, keeping a wider, more spread, awareness on the environment, and only truly focusing when something triggers our interest.

[3273] Linnell KJ, Caparos S, de Fockert JW, Davidoff J. Urbanization Decreases Attentional Engagement. Journal of experimental psychology. Human perception and performance. 2013 .

An online study open to anyone, that ended up involving over 100,000 people of all ages from around the world, put participants through 12 cognitive tests, as well as questioning them about their background and lifestyle habits. This, together with a small brain-scan data set, provided an immense data set to investigate the long-running issue: is there such a thing as ‘g’ — i.e. is intelligence accounted for by just a single general factor; is it supported by just one brain network? — or are there multiple systems involved?

Brain scans of 16 healthy young adults who underwent the 12 cognitive tests revealed two main brain networks, with all the tasks that needed to be actively maintained in working memory (e.g., Spatial Working Memory, Digit Span, Visuospatial Working Memory) loading heavily on one, and tasks in which information had to be transformed according to logical rules (e.g., Deductive Reasoning, Grammatical Reasoning, Spatial Rotation, Color-Word Remapping) loading heavily on the other.

The first of these networks involved the insula/frontal operculum, the superior frontal sulcus, and the ventral part of the anterior cingulate cortex/pre-supplementary motor area. The second involved the inferior frontal sulcus, inferior parietal lobule, and the dorsal part of the ACC/pre-SMA.

Just a reminder of individual differences, however — when analyzed by individual, this pattern was observed in 13 of the 16 participants (who are not a very heterogeneous bunch — I strongly suspect they are college students).

Still, it seems reasonable to conclude, as the researchers do, that at least two functional networks are involved in ‘intelligence’, with all 12 cognitive tasks using both networks but to highly variable extents.

Behavioral data from some 60,000 participants in the internet study who completed all tasks and questionnaires revealed that there was no positive correlation between performance on the working memory tasks and the reasoning tasks. In other words, these two factors are largely independent.

Analysis of this data revealed three, rather than two, broad components to overall cognitive performance: working memory; reasoning; and verbal processing. Re-analysis of the imaging data in search of the substrate underlying this verbal component revealed that the left inferior frontal gyrus and temporal lobes were significantly more active on tasks that loaded on the verbal component.

These three components could also be distinguished when looking at other factors. For example, while age was the most significant predictor of cognitive performance, its effect on the verbal component was much later and milder than it was for the other two components. Level of education was more important for the verbal component than the other two, while the playing of computer games had an effect on working memory and reasoning but not verbal. Chronic anxiety affected working memory but not reasoning or verbal. Smoking affected working memory more than the others. Unsurprisingly, geographical location affected verbal more than the other two components.

A further test, involving 35 healthy young adults, compared performance on the 12 tasks and score on the Cattell Culture Fair test (a classic pen and paper IQ test). The working memory component correlated most with the Cattell score, followed by the reasoning component, with the Verbal component (unsurprisingly, given that this is designed to be a ‘culture-fair’ test) showing the smallest correlation.

All of this is to say that this is decided evidence that what is generally considered ‘intelligence’ is based on the functioning of multiple brain networks rather than a single ‘g’, and that these networks are largely independent. Thus, the need to focus on and maintain task-relevant information maps onto one particular brain network, and is one strand. Another network specializes in transforming information, regardless of source or type. These, it would seem, are the main processes involved in fluid intelligence, while the Verbal component most likely reflects crystallized intelligence. There are also likely to be other networks which are not perhaps typically included in ‘general intelligence’, but are nevertheless critical for task performance (the researchers suggest the ability to adapt plans based on outcomes might be one such function).

The obvious corollary of all this is that similar IQ scores can reflect different abilities for these strands — e.g., even if your working memory capacity is not brilliant, you can develop your reasoning and verbal abilities. All this is consistent with the growing evidence that, although fundamental WMC might be fixed (and I use the word ‘fundamental’ deliberately, because WMC can be measured in a number of different ways, and I do think you can, at the least, effectively increase your WMC), intelligence (because some of its components are trainable) is not.

If you want to participate in this research, a new version of the tests is available at http://www.cambridgebrainsciences.com/theIQchallenge

In my book on remembering intentions, I spoke of how quickly and easily your thoughts can be derailed, leading to ‘action slips’ and, in the wrong circumstances, catastrophic mistakes. A new study shows how a 3-second interruption while doing a task doubled the rate of sequence errors, while a 4s one tripled it.

The study involved 300 people, who were asked to perform a series of ordered steps on the computer. The steps had to be performed in a specific sequence, mnemonically encapsulated by UNRAVEL, with each letter identifying a step. The task rules for each step differed, requiring the participant to mentally shift gears each time. Moreover, task elements could play multiple roles — for example, the letter U could signal the step, be one of two possible responses for that step, or be a stimulus requiring a specific response when the step was N. Each step required the participant to choose between two possible responses based on one stimulus feature — features included whether the character was a letter or a digit, whether it was underlined or italic, whether it was red or yellow, and whether the character outside the outline box was above or below it. There were also more cognitive features, such as whether the letter was near the beginning of the alphabet or not. The identifying mnemonic for each step was linked to its possible responses (e.g., N step — near or far; U step — underline or italic).

At various points, participants were very briefly interrupted. In the first experiment, they were asked to type four characters (letters or digits); in the second experiment, they were asked to type only two (a very brief interruption indeed!).

All of this was designed to set up a situation emulating “train of thought” operations, where correct performance depends on remembering where you are in the sequence, and on producing a situation where performance would have a reasonably high proportion of errors — one of the problems with this type of research has been the use of routine tasks that are generally performed with a high degree of accuracy, thus generating only small amounts of error data for analysis.

In both experiments, interruptions significantly increased the rate of sequence errors on the first trial after the interruption (but not on subsequent ones). Nonsequence errors were not affected. In the first experiment (four-character interruption), the sequence error rate on the first trial after the interruption was 5.8%, compared to 1.8% on subsequent trials. In the second experiment (two-character interruption), it was 4.3%.

The four-character interruptions lasted an average of 4.36s, and the two-character interruptions lasted an average of 2.76s.
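Taking the reported figures at face value, the “doubled” and “tripled” multipliers can be checked with simple arithmetic. A small sketch (the 1.8% baseline is the rate reported for the first experiment; I'm assuming it applies to both):

```python
# Post-interruption sequence error rates vs. the baseline rate on
# trials not following an interruption (1.8%, from the first experiment).
baseline = 1.8  # % sequence errors on subsequent trials

# (interruption condition, post-interruption sequence error rate in %)
conditions = {
    "four-character (avg. 4.36 s)": 5.8,
    "two-character (avg. 2.76 s)": 4.3,
}

for label, rate in conditions.items():
    multiplier = rate / baseline
    print(f"{label}: {rate}% = {multiplier:.1f}x baseline")
```

The longer interruption comes out at roughly 3.2 times the baseline error rate, and the shorter one at roughly 2.4 times — consistent with the “tripled” and “doubled” framing.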

Whether the characters being typed were letters or digits made no difference, suggesting that the disruptive effects of interruptions are not overly sensitive to what’s being processed during the interruption (although of course these are not wildly different processes!).

The absence of effect on nonsequence errors shows that interruptions aren’t disrupting global attentional resources, but more specifically the placekeeping task.

As I discussed in my book, the step also made a significant difference — for sequence errors, middle steps showed higher error rates than end steps.

All of this confirms and quantifies how little it takes to derail us, and reminds us that, when engaged in tasks involving the precise sequence of sub-tasks (which so many tasks do), we need to be alert to the dangers of interruptions. This is, of course, particularly true for those working in life-critical areas, such as medicine.

[3207] Altmann EM, Gregory J, Hambrick DZ. Momentary Interruptions Can Derail the Train of Thought. Journal of Experimental Psychology: General. 2013. Advance online publication.

Being a woman of a certain age, I generally take notice of research into the effects of menopause on cognition. A new study adds weight, perhaps, to the idea that cognitive complaints in perimenopause and menopause are not directly a consequence of hormonal changes, and, more particularly, shows that early postmenopause may be the most problematic time.

The study followed 117 women from four stages of life: late reproductive, early and late menopausal transition, and early postmenopause. The late reproductive period is defined as when women first begin to notice subtle changes in their menstrual periods, but still have regular menstrual cycles. Women in the transitional stage (which can last for several years) experience fluctuation in menstrual cycles, and hormone levels begin to fluctuate significantly.

Women in the early stage of post menopause (first year after menopause), as a group, were found to perform more poorly on measures of verbal learning, verbal memory, and fine motor skill than women in the late reproductive and late transition stages. They also performed significantly worse than women in the late menopausal transition stage on attention/working memory tasks.

Surprisingly, self-reported symptoms such as sleep difficulties, depression, and anxiety did not predict memory problems. Neither were the problems correlated with hormone levels (although fluctuations could be a factor).

This seemingly contradicts earlier findings from the same researchers, who in a slightly smaller study found that those experiencing poorer working memory and attention were more likely to have poorer sleep, depression, and anxiety. That study, however, only involved women approaching and in menopause. Moreover, these aspects were not included in the abstract of the paper, only in the press release; because I don’t have access to this particular journal, I cannot say whether there is something in the data that explains the discrepancy. For that reason, I am not inclined to put too much weight on this point.

But we may perhaps take the findings as support for the view that cognitive problems experienced earlier in the menopause cycle are, when they occur, not a direct result of hormonal changes.

The important result of this study is the finding that the cognitive problems often experienced by women in their 40s and 50s are most acute during the early period of post menopause, and the indication that the causes and manifestations are different at different stages of menopause.

It should be noted, however, that there were only 14 women in the early postmenopause stage. So, we shouldn’t put too much weight on any of this. Nevertheless, it does add to the picture research is building up about the effects of menopause on women’s cognition.

While the researchers said that this effect is probably temporary — which was picked up as the headline in most media — this was not in fact investigated in this study. It would be nice to have some comparison with those, say, two or three and five years post menopause (but quite possibly this will be reported in a later paper).

[3237] Weber MT, Rubin LH, Maki PM. Cognition in perimenopause. Menopause: The Journal of The North American Menopause Society [Internet]. 2013. Available from: http://journals.lww.com/menopausejournal/Abstract/publishahead/Cognition_in_perimenopause___the_effect_of.98696.aspx

The issue of ‘chemo-brain’ — cognitive impairment following chemotherapy — has been a controversial one. While it is now (I hope) accepted by most that it is, indeed, a real issue, there is still an ongoing debate over whether the main cause is really the chemotherapy. A new study adds to the debate.

The study involved 28 women who received adjuvant chemotherapy for breast cancer, 37 who received radiotherapy, and 32 age-matched healthy controls. Brain scans while doing a verbal working memory task were taken before treatment and one month after treatment.

Women who underwent chemotherapy performed less accurately on the working memory task both before treatment and one month after treatment. They also reported a significantly higher level of fatigue. Greater fatigue correlated with poorer test performance and more cognitive problems, across both patient groups and at both times (although the correlation was stronger after treatment).

Both patient groups showed reduced function in the left inferior frontal gyrus before therapy, but those awaiting chemotherapy showed greater impairment than those in the radiotherapy group. Pre-treatment difficulty in recruiting this brain region in high-demand situations was associated with greater fatigue after treatment.

In other words, reduced working memory function before treatment began predicted how tired people felt after treatment, and how much their cognitive performance suffered. All of which suggests it is not the treatment itself that is the main problem.

But the fact that reduced working memory function precedes the fatigue indicates it’s not the fatigue that’s the main problem either. The researchers suggest that the main driver is level of worry — worry interfered with the task, and level of worry was related to fatigue. And worry, as we know, can reduce working memory capacity (because it uses up part of it).

All of which is to say that support for cancer patients aimed at combating stress and anxiety might do more for ‘chemo-brain’ than anything else. In this context, I note that sleep problems have also been linked to chemo-brain — a not unrelated issue!

Cimprich, B. et al. 2012. Neurocognitive impact in adjuvant chemotherapy for breast cancer linked to fatigue: A prospective functional MRI study. Presented at the 2012 CTRC-AACR San Antonio Breast Cancer Symposium, Dec. 4-8.

Providing some support for the finding I recently reported — that problems with semantic knowledge in those with mild cognitive impairment (MCI) and Alzheimer’s might be rooted in an inability to inhibit immediate perceptual information in favor of conceptual information — a small study has found that executive function (and inhibitory control in particular) is impaired in far more of those with MCI than was previously thought.

The study involved 40 patients with amnestic MCI (single or multiple domain) and 32 healthy older adults. Executive function was tested across multiple sub-domains: divided attention, working memory, inhibitory control, verbal fluency, and planning.

As a group, those with MCI performed significantly more poorly in all 5 sub-domains. All MCI patients showed significant impairment in at least one sub-domain of executive functioning, with almost half performing poorly on all of the tests. The sub-domain most frequently and severely impaired was inhibitory control.

The finding is in sharp contrast with standard screening tests and clinical interviews, which have estimated executive function impairment in only 15% of those with MCI.

Executive function is crucial for many aspects of our behavior, from planning and organization to self-control to (as we saw in the previous news report) basic knowledge. It is increasingly believed that inhibitory control might be a principal cause of age-related cognitive decline, through its effect on working memory.

All this adds weight to the idea that we should be focusing our attention on ways to improve inhibitory control when it declines. Although training to improve working memory capacity has not been very successful, specific training targeted at inhibitory control might have more luck. Something to hope for!

Organophosphate pesticides are the most widely used insecticides in the world; they are also, according to WHO, one of the most hazardous pesticides to vertebrate animals. While the toxic effects of high levels of organophosphates are well established, the effects of long-term low-level exposure are still controversial.

A meta-analysis involving 14 studies and more than 1,600 participants reveals that the majority of well-designed studies undertaken over the last 20 years have found a significant association between low-level exposure to organophosphates and impaired cognitive function. Impairment was small to moderate, and mainly concerned psychomotor speed, executive function, visuospatial ability, working memory, and visual memory.

A small study shows how those on the road to Alzheimer’s show early semantic problems long before memory problems arise, and that such problems can affect daily life.

The study compared 25 patients with amnestic MCI, 27 patients with mild-to-moderate Alzheimer's and 70 cognitively fit older adults (aged 55-90), on a non-verbal task involving size differences (for example, “What is bigger: a key or a house?”; “What is bigger: a key or an ant?”). The comparisons were presented in three different ways: as words; as images reflecting real-world differences; as incongruent images (e.g., a big ant and a small house).

Both those with MCI and those with AD were significantly less accurate, and significantly slower, in all three conditions compared to healthy controls, and they had disproportionately more difficulty on those comparisons where the size distance was smaller. But MCI and AD patients experienced their biggest problems when the images were incongruent – the ant bigger than the house. Those with MCI performed at a level between that of healthy controls and those with AD.

This suggests that perceptual information is having undue influence in a judgment task that requires conceptual knowledge.

Because semantic memory is organized according to relatedness, and because this sort of basic information has been acquired a long time ago, this simple test is quite a good way to test semantic knowledge. As previous research has indicated, the problem doesn’t seem to be a memory (retrieval) one, but one reflecting an actual loss or corruption of semantic knowledge. But perhaps, rather than a loss of data, it reflects a failure of selective attention/inhibition — an inability to inhibit immediate perceptual information in favor of more relevant conceptual information.

How much does this matter? Poor performance on the semantic distance task correlated with impaired ability to perform everyday tasks, accounting (together with delayed recall) for some 35% of the variance in scores on this task — while other cognitive abilities, such as processing speed, executive function, verbal fluency, and naming, did not have a significant effect. Everyday functional capacity was assessed using a short form of the UCSD Skills Performance Assessment scale (a tool generally used to identify everyday problems in patients with schizophrenia), which presents scenarios such as planning a trip to the beach, determining a route, dialing a telephone number, and writing a check.
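To make the ‘35% of the variance’ claim concrete: in a multiple regression, the R² statistic is the proportion of variance in the outcome (here, everyday functioning scores) that the predictors (semantic distance performance plus delayed recall) jointly explain. A toy sketch with entirely made-up numbers — none of these data come from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 70  # hypothetical sample size

# Made-up predictors: semantic-distance accuracy and delayed recall score
semantic = rng.normal(0, 1, n)
recall = rng.normal(0, 1, n)
# Made-up outcome: everyday functioning, partly driven by both predictors
functioning = 0.5 * semantic + 0.4 * recall + rng.normal(0, 1, n)

# Ordinary least squares fit (intercept plus two predictors)
X = np.column_stack([np.ones(n), semantic, recall])
beta, *_ = np.linalg.lstsq(X, functioning, rcond=None)
predicted = X @ beta

# R^2 = 1 - (residual sum of squares / total sum of squares)
ss_res = np.sum((functioning - predicted) ** 2)
ss_tot = np.sum((functioning - functioning.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(f"R^2 = {r_squared:.2f}")
```

An R² of 0.35, as reported, would mean roughly a third of the person-to-person differences in everyday functioning track these two cognitive measures — a substantial share for just two predictors.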

The finding indicates that semantic memory problems are starting to occur early in the deterioration, and may be affecting general cognitive decline. However, if the problems reflect an access difficulty rather than data loss, it may be possible to strengthen these semantic processing connections through training — and thus improve general cognitive processing (and ability to perform everyday tasks).

In my last report, I discussed a finding that intensive foreign language learning ‘grew’ the size of certain brain regions. This growth reflects gray matter increase. Another recent study looks at a different aspect: white matter.

In the study, monthly brain scans were taken of 27 college students, of whom 11 were taking an intensive nine-month Chinese language course. These brain scans were specifically aimed at tracking white matter changes in the students’ brains.

Significant changes were indeed observed in the brains of the language learners. To the researchers’ surprise, however, the biggest changes were observed in an area not previously considered part of the language network: the white matter tracts that cross the corpus callosum, the main bridge between the hemispheres. (I’m not quite sure why they were surprised, since a previous study had found that bilinguals showed higher white matter integrity in the corpus callosum.)

Significant changes were also observed within the left-hemisphere language network and in the right temporal lobe. The rate of increase in white matter was linear, showing a steady progression with each passing month.

The researchers suggest that plasticity in the adult brain may differ from that seen in children’s brains. While children’s brains change mainly through the pruning of unwanted connections and the death of unwanted cells, adult brains may rely mainly on neurogenesis and myelinogenesis.

The growth of new myelin is a process that is still largely mysterious, but it’s suggested that activity at the axons (the extensions of neurons that carry the electrical signals) might trigger increases in the size, density, or number of oligodendrocytes (the cells responsible for the myelin sheaths). This process is thought to be mediated by astrocytes, and in recent years we have begun to realize that astrocytes, long regarded as mere ‘support cells’, are in fact quite important for learning and memory. Just how important is something researchers are still working on.

The finding of changes in the tracts connecting the frontal hemispheres and the caudate nuclei is consistent with a previously expressed idea that language learning requires the development of a network to control switching between languages.

Does the development of such a network enhance the task-switching facility in working memory? Previous research has found that bilinguals tend to have better executive control than monolinguals, and it has been suggested that the experience of managing two (or more) languages reorganizes certain brain networks, creating a more effective basis for executive control.

As in the previous study, the language studied was very different from the students’ native language, and they had no previous experience of it. The level of intensity was of course much less.

I do wonder if the fact that the language being studied was Mandarin Chinese limits the generality of these findings. Because of the pictorial nature of the written language, Chinese has been shown to involve a wider network of regions than European languages.

Nevertheless, the findings add to the evidence that adult brains retain the capacity to reorganize themselves, and add to growing evidence that we should be paying more attention to white matter changes.

[3143] Schlegel AA, Rudelson JJ, Tse PU. White Matter Structure Changes as Adults Learn a Second Language. Journal of Cognitive Neuroscience [Internet]. 2012;24(8):1664-1670. Available from: http://dx.doi.org/10.1162/jocn_a_00240

Bialystok, E., Craik, F. I. M., & Luk, G. (2012). Bilingualism: consequences for mind and brain. Trends in Cognitive Sciences, 16(4), 240–250. doi:10.1016/j.tics.2012.03.001

Luk, G. et al. (2011) Lifelong bilingualism maintains white matter integrity in older adults. J. Neurosci. 31, 16808–16813

A small Swedish brain imaging study adds to the evidence for the cognitive benefits of learning a new language by investigating the brain changes in students undergoing a highly intensive language course.

The study involved an unusual group: conscripts in the Swedish Armed Forces Interpreter Academy. These young people, selected for their talent for languages, undergo an intensive course to allow them to learn a completely novel language (Egyptian Arabic, Russian or Dari) fluently within ten months. This requires them to acquire new vocabulary at a rate of 300-500 words every week.

Brain scans were taken of 14 right-handed volunteers from this group (6 women; 8 men), and of 17 controls who were matched for age, years of education, intelligence, and emotional stability. The controls were medical and cognitive science students. The scans were taken before the start of the course/semester, and three months later.

The brain scans revealed that the language students showed significantly greater changes in several specific regions. These included three areas in the left hemisphere: the dorsal middle frontal gyrus, the inferior frontal gyrus, and the superior temporal gyrus, all of which grew significantly. There was also some smaller, more selective growth in the middle frontal gyrus and inferior frontal gyrus of the right hemisphere. The hippocampus also grew significantly more in the interpreters than in the controls, an effect that was greater in the right hippocampus.

Among the interpreters, language proficiency was related to increases in the right hippocampus and left superior temporal gyrus. Increases in the left middle frontal gyrus were related to teacher ratings of effort — those who put in the greatest effort (regardless of result) showed the greatest increase in this area.

In other words, both learning, and the effort put into learning, had different effects on brain development.

The main point, however, is that language learning in particular is having this effect. Bear in mind that the medical and cognitive science students were presumably putting similar levels of effort into their studies, and yet no such significant brain growth was observed.

Of course, there is no denying that the level of intensity with which the interpreters are acquiring a new language is extremely unusual, and it cannot be ruled out that it is this intensity, rather than the particular subject matter, that is crucial for this brain growth.

Neither can it be ruled out that the differences between the groups are rooted in the individuals selected for the interpreter group. The young people chosen for the intensive training at the interpreter academy were chosen on the basis of their talent for languages. Although brain scans showed no differences between the groups at baseline, we cannot rule out the possibility that such intensive training only benefited them because they possessed this potential for growth.

A final caveat is that the soldiers all underwent basic military training before beginning the course — three months of intense physical exercise. Physical exercise is, of course, usually very beneficial for the brain.

Nevertheless, we must give due weight to the fact that the brain scans of the two groups were comparable at baseline, and the changes discussed occurred specifically during this three-month learning period. Moreover, there is growing evidence that learning a new language is indeed ‘special’, if only because it involves such a complex network of processes and brain regions.

Given that people vary in their ‘talent’ for foreign language learning, and that learning a new language does tend to become harder as we get older, it is worth noting the link between growth of the hippocampus and superior temporal gyrus and language proficiency. The STG is involved in acoustic-phonetic processes, while the hippocampus is presumably vital for the encoding of new words into long-term memory.

Interestingly, previous research with children has suggested that the ability to learn new words is greatly affected by working memory span — specifically, by how much information they can hold in that part of working memory called phonological short-term memory. While this is less important for adults learning another language, it remains important for one particular category of new words: words that have no ready association to known words. Given the languages being studied by these Swedish interpreters, it seems likely that much if not all of their new vocabulary would fall into this category.

I wonder if the link with STG is more significant in this study, because the languages are so different from the students’ native language? I also wonder if, and to what extent, you might be able to improve your phonological short-term memory with this sort of intensive practice.

In this regard, it’s worth noting that a previous study found that language proficiency correlated with growth in the left inferior frontal gyrus in a group of English-speaking exchange students learning German in Switzerland. Is this difference because the training was less intensive? because the students had prior knowledge of German? because German and English are closely related in vocabulary? (I’m picking the last.)

The researchers point out that hippocampal plasticity might also be a critical factor in determining an individual’s facility for learning a new language. Such plasticity does, of course, tend to erode with age — but this can be largely counteracted if you keep your hippocampus limber (as it were).

All these are interesting speculations, but the main point is clear: the findings add to the growing evidence that bilingualism and foreign language learning have particular benefits for the brain, and for protecting against cognitive decline.

Stress is a major cause of workplace accidents, and most of us are only too familiar with the effects of acute stress on our thinking. But although the cognitive effects are clear, researchers have had little understanding of how stress produces them. A new rat study sheds some light.

In the study, brain activity was monitored while five rats performed a working memory task during acute noise stress. Under these stressful conditions, the rats performed dramatically worse on their working memory task, with performance dropping from an average of 93% success to 65%.

The stress also significantly increased the discharge rate of a subset of neurons in the medial prefrontal cortex during two phases of the task: planning and assessment.

This brain region is vital for working memory and executive functions such as goal maintenance and emotion regulation. The results suggest that the firing and re-firing of these neurons keeps recent information ‘fresh’. When the re-firing is delayed, the information can be lost.

What seems to be happening is that the stress is causing these neurons to work even more furiously, but instead of performing their normal task — concentrating on keeping important information ‘alive’ during brief delays — they are reacting to all the other, distracting and less relevant, stimuli.

The findings contradict the view that stress simply suppresses prefrontal cortex activity, and suggests a different approach to treatment, one that emphasizes shutting out distractions.

The findings are also exciting from a theoretical viewpoint, suggesting as they do that this excitatory recursive activity of neurons within the prefrontal cortex provides the neural substrate for working memory. That is, that we ‘hold’ information in the front of our mind through reverberating feedback loops within this network of neurons that keep information alive during the approximately 1.5 seconds of our working memory ‘span’.

We know that stress has a complicated relationship with learning, but in general its effect is negative, and part of that is due to stress producing anxious thoughts that clog up working memory. A new study adds another perspective to that.

The brain scanning study involved 60 young adults, of whom half were put under stress by having a hand immersed in ice-cold water for three minutes under the supervision of a somewhat unfriendly examiner, while the other group immersed their hand in warm water without such supervision (cortisol and blood pressure tests confirmed the stress difference).

About 25 minutes after this (cortisol reaches peak levels around 25 minutes after stress), participants’ brains were scanned while they alternated between a classification task and a visual-motor control task. The classification task required them to look at cards with different symbols and learn to predict which combinations of cards announced rain and which sunshine. Afterward, they were given a short questionnaire to determine their knowledge of the task. The control task was similar, but there were no learning demands (participants looked at cards on the screen and made a simple perceptual decision).

In order to determine the strategy individuals used to do the classification task, ‘ideal’ performance was modeled for four possible strategies, of which two were ‘simple’ (based on single cues) and two ‘complex’ (based on multiple cues).
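The model-fitting step can be sketched as follows: each candidate strategy is expressed as a rule mapping a card pattern to a prediction, and a participant is assigned the strategy whose predictions best match their actual responses. This is a hypothetical reconstruction, not the researchers' code; the strategy rules and the participant data are invented for illustration:

```python
# Each trial shows four cards (1 = present, 0 = absent); the response is
# "rain" (1) or "sun" (0). Two 'simple' single-cue rules and two
# 'complex' multi-cue rules, all invented for illustration.
strategies = {
    "simple: follow card 1": lambda cards: cards[0],
    "simple: oppose card 4": lambda cards: 1 - cards[3],
    "complex: majority of cards": lambda cards: 1 if sum(cards) >= 2 else 0,
    "complex: card 1 and card 2": lambda cards: cards[0] & cards[1],
}

def best_fit(trials):
    """Return the strategy that agrees with the most responses."""
    scores = {
        name: sum(rule(cards) == response for cards, response in trials)
        for name, rule in strategies.items()
    }
    return max(scores, key=scores.get)

# A made-up participant who says 'rain' whenever card 1 is present
trials = [
    ((1, 0, 1, 0), 1),
    ((0, 1, 0, 1), 0),
    ((1, 1, 0, 0), 1),
    ((0, 0, 1, 1), 0),
    ((1, 0, 0, 1), 1),
]
print(best_fit(trials))  # prints "simple: follow card 1"
```

The actual study presumably used probabilistic ideal-observer models rather than this winner-take-all matching, but the logic — score each candidate strategy against the response record and pick the best fit — is the same.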

Here’s the interesting thing: while both groups were successful in learning the task, they learned to do it in different ways. Far more of the non-stressed group activated the hippocampus, pursuing a simple, deliberate strategy focused on individual symbols rather than combinations of symbols. The stressed group, on the other hand, were far more likely to rely on the striatum alone, in a more complex and subconscious processing of symbol combinations.

The stressed group also remembered significantly fewer details of the classification task.

There was no difference between the groups on the (simple, perceptual) control task.

In other words, it seems that stress interferes with conscious, purposeful learning, causing the brain to fall back on more ‘primitive’ mechanisms that involve procedural learning. Striatum-based procedural learning is less flexible than hippocampus-based declarative learning.

Why should this happen? Well, the non-conscious procedural learning going on in the striatum is much less demanding of cognitive resources, freeing up your working memory to do something important — like worrying about the source of the stress.

Unfortunately, such learning will not become part of your more flexible declarative knowledge base.

The finding may have implications for stress disorders such as depression, addiction, and PTSD. It may also have relevance for a memory phenomenon known as “forgotten baby syndrome”, in which parents forget their babies in the car. This may be related to the use of non-declarative memory, because of the stress they are experiencing.

[3071] Schwabe L, Wolf OT. Stress Modulates the Engagement of Multiple Memory Systems in Classification Learning. The Journal of Neuroscience [Internet]. 2012;32(32):11042-11049. Available from: http://www.jneurosci.org/content/32/32/11042

Memory problems in those with mild cognitive impairment may begin with problems in visual discrimination and vulnerability to interference — a hopeful discovery in that interventions to improve discriminability and reduce interference may have a flow-on effect to cognition.

The study compared the performance on a complex object discrimination task of 7 patients diagnosed with amnestic MCI, 10 older adults considered to be at risk for MCI (because of their scores on a cognitive test), and 19 age-matched controls. The task involved the side-by-side comparison of images of objects, with participants required to say, within 15 seconds, whether the two objects were the same or different.

In the high-interference condition, the objects were blob-like and presented as black and white line-drawings, with some comparison pairs identical, while others only varied slightly in either shape or fill pattern. Objects were rotated to discourage a simple feature-matching strategy. In the low-interference condition, these line-drawings were interspersed with color photos of everyday objects, for which discriminability was dramatically easier. The two conditions were interspersed by a short break, with the low interference condition run in two blocks, before and after the high interference condition.

A control task, in which the participants compared two squares that could vary in size, was run at the end.

The study found that those with MCI, as well as those at risk of MCI, performed significantly worse than the control group in the high-interference condition. There was no difference in performance between those with MCI and those at risk of MCI. Neither group was impaired in the first low-interference condition, although the at-risk group did show significant impairment in the second low-interference condition. It may be that they had trouble recovering from the high-interference experience. However, the degree of impairment was much less than it was in the high-interference condition. It’s also worth noting that the performance on this second low-interference task was, for all groups, notably higher than it was on the first low-interference task.

There was no difference between any of the groups on the control task, indicating that fatigue wasn’t a factor.

The interference task was specifically chosen as one that involved the perirhinal cortex, but not the hippocampus. The task requires the conjunction of features — that is, you need to be able to see the object as a whole (‘feature binding’), not simply match individual features. The control task, which required only the discrimination of a single feature, shows that MCI doesn’t interfere with this ability.

I do note that the amount of individual variability on the interference tasks was noticeably greater in the MCI group than the others. The MCI group was of course smaller than the other groups, but variability wasn’t any greater for this group in the control task. Presumably this variability reflects progression of the impairment, but it would be interesting to test this with a larger sample, and map performance on this task against other cognitive tasks.

Recent research has suggested that the perirhinal cortex may provide protection from visual interference by inhibiting lower-level features. The perirhinal cortex is strongly connected to the hippocampus and entorhinal cortex, two brain regions known to be affected very early in MCI and Alzheimer’s.

The findings are also consistent with other evidence that damage to the medial temporal lobe may impair memory by increasing vulnerability to interference. For example, one study has found that story recall was greatly improved in patients with MCI if they rested quietly in a dark room after hearing the story, rather than being occupied in other tasks.

There may be a working memory component to all this as well, since comparing two objects requires shifting attention back and forth. This, however, is separate from what the researchers see as the primary issue: a perceptual deficit.

All of this suggests that reducing “visual clutter” could help MCI patients with everyday tasks. For example, buttons on a telephone tend to be the same size and color, with the only difference lying in the numbers themselves. Perhaps those with MCI or early Alzheimer’s would be helped by a phone with buttons of varying sizes and colors.

The finding also raises the question: to what extent is the difficulty Alzheimer’s patients often have in recognizing a loved one’s face a discrimination problem rather than a memory problem?

Finally, the performance of the at-risk group — people who had no subjective concerns about their memory, but who scored below 26 on the MoCA (Montreal Cognitive Assessment — a brief screening tool for MCI) — suggests that vulnerability to visual interference is an early marker of cognitive impairment that may be useful in diagnosis. It’s worth noting that, across all groups, MoCA scores predicted performance on the high-interference task, but not on any of the other tasks.

So how much cognitive impairment rests on problems with interference?

Newsome, R. N., Duarte, A., & Barense, M. D. (2012). Reducing perceptual interference improves visual discrimination in mild cognitive impairment: Implications for a model of perirhinal cortex function. Hippocampus, 22, 1990–1999. doi:10.1002/hipo.22071

Della Sala S, Cowan N, Beschin N, Perini M. 2005. Just lying there, remembering: Improving recall of prose in amnesic patients with mild cognitive impairment by minimising interference. Memory, 13, 435–440.

Here’s an exciting little study, implying as it does that one particular aspect of information processing underlies much of the cognitive decline in older adults, and that this can be improved through training. No, it’s not our usual suspect, working memory, it’s something far less obvious: temporal processing.

In the study, 30 older adults (aged 65-75) were randomly assigned to three groups: one that received ‘temporal training’, one that practiced common computer games (such as Solitaire and Mahjong), and a no-activity control. Temporal training was provided by a trademarked program called Fast ForWord Language® (FFW), which was developed to help children who have trouble reading, writing, and learning.

The training, for both training groups, occupied an hour a day, four days a week, for eight weeks.

Cognitive assessment, carried out at the beginning and end of the study, and for the temporal training group again 18 months later, included tests of sequencing abilities (how quickly two sounds could be presented and still be accurately assessed for pitch or direction), attention (vigilance, divided attention, and alertness), and short-term memory (working memory span, pattern recognition, and pattern matching).

Only in the temporal training group did performance on any of the cognitive tests significantly improve after training — on the sequencing tests, divided attention, matching complex patterns, and working memory span. These positive effects still remained after 18 months (vigilance was also higher at the end of training, but this improvement wasn’t maintained).

This is, of course, only a small pilot study. I hope we will see a larger study, and one that compares this form of training against other computer training programs. It would also be good to see some broader cognitive tests — ones that are less connected to the temporal training. But I imagine that, as I’ve discussed before, an effective training program will include more than one type of training. This may well be an important component of such a program.

[3075] Szelag E, Skolimowska J. Cognitive function in elderly can be ameliorated by training in temporal information processing. Restorative Neurology and Neuroscience [Internet]. 2012;30(5):419-434. Available from: http://dx.doi.org/10.3233/RNN-2012-120240

I’ve reported, often, on the evidence that multitasking is a problem, something we’re not really designed to do well (with the exception of a few fortunate individuals), and that the problem is rooted in our extremely limited working memory capacity. I’ve also talked about how ‘working memory’ is a bit of a misnomer, given that we probably have several ‘working memories’, for different modalities.

It follows that tasks using different working memories should be easier to perform at the same time than tasks using the same working memory. A new study confirms this: multitasking is more difficult when both tasks draw on the same working memory modules.

In the study, 32 students carried out a visual pattern-matching task on a computer while giving directions to another person either via instant messaging (same modalities — vision and motor) or online voice chat (different modality — hearing).

While both simultaneous tasks significantly worsened performance on the pattern-matching task, communicating by IM (same modality) led to a 50% drop in visual pattern-matching performance (from a mean of 11 correct responses to a mean of 5), compared to only a 30% drop in the voice condition (mean of 7).
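A quick arithmetic check (mine, not the study’s) shows the reported percentages are loose roundings of the drops implied by the means given:

```python
def pct_drop(before: float, after: float) -> float:
    """Percentage drop in performance from a baseline mean."""
    return 100 * (before - after) / before

# Means reported in the study: 11 correct responses (pattern-matching
# alone), 5 in the IM condition, 7 in the voice-chat condition.
im_drop = pct_drop(11, 5)     # ≈ 54.5%, reported as ~50%
voice_drop = pct_drop(11, 7)  # ≈ 36.4%, reported as ~30%
print(f"IM: {im_drop:.1f}%, voice: {voice_drop:.1f}%")
```

Either way, the gap between the two conditions is substantial.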

The underlying reason for the reductions in performance seems to be in the effect on eye movement: the number and duration of eye fixations was reduced in both dual-task conditions, and more so in the IM condition.

Note that this is apparently at odds with general perception. According to one study, IM is perceived to be less disruptive than the phone. Moreover, in the current study, participants felt they performed better in the IM condition (although this palpably wasn’t true). This feeling may reflect the greater sense of personal control in instant messaging compared to chat. It may also reflect an illusion of efficiency generated by using the visual channel — because we are so strongly practiced in using vision, we may find visual tasks more effortless than tasks using other modalities. (I should note that most people, regardless of the secondary task, felt they did better than they had! But those in the IM condition were more deluded than those in the chat condition.)

The finding also explains why texting is particularly dangerous when driving — both rely heavily on the same modalities.

All this is consistent with the idea that there are different working memory resources which can operate in parallel, but share one particular resource which manages the other resources.

The idea of ‘threaded cognition’ — of maintaining several goal threads and strategically allocating resources as needed — opens up the idea that multitasking is not all bad. In recent years, we have focused on multitasking as a problem. This has been a very necessary emphasis, given that its downsides were unappreciated. But although multitasking has its problems, it may be that there are trade-offs that come from the interaction between the tasks being carried out.

In other words, rather than condemning multitasking, we need to learn its parameters. This study offers one approach.

What underlies differences in fluid intelligence? How are smart brains different from those that are merely ‘average’?

Brain imaging studies have pointed to several aspects. One is brain size. Although the history of simplistic comparisons of brain size has been turbulent (you cannot, for example, directly compare brain size without taking into account the size of the body it’s part of), nevertheless, overall brain size does count for something — 6.7% of individual variation in intelligence, it’s estimated. So, something, but not a huge amount.

Activity levels in the prefrontal cortex, research also suggests, account for another 5% of variation in individual intelligence. (Do keep in mind that these figures are not saying that, for example, prefrontal activity explains 5% of intelligence. We are talking about differences between individuals.)

A new study points to a third important factor — one that, indeed, accounts for more than either of these other factors. The strength of the connections from the left prefrontal cortex to other areas is estimated to account for 10% of individual differences in intelligence.
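If these variance-explained figures are squared correlations (an assumption on my part — the article reports only percentages), the implied correlations with intelligence are modest:

```python
# Convert variance explained (taken here to be r² — an assumption,
# not something the text states) back to a correlation coefficient r.
factors = {
    "overall brain size": 6.7,
    "prefrontal activity": 5.0,
    "left PFC global connectivity": 10.0,
}
for name, pct in factors.items():
    r = (pct / 100) ** 0.5
    print(f"{name}: r ≈ {r:.2f}")
```

Even the strongest factor corresponds to a correlation of only about 0.32 — a useful reminder of how much individual variation remains unexplained.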

These findings suggest a new perspective on what intelligence is. They suggest that part of intelligence rests on the functioning of the prefrontal cortex and its ability to communicate with the rest of the brain — what researchers are calling ‘global connectivity’. This may reflect cognitive control and, in particular, goal maintenance. The left prefrontal cortex is thought to be involved in (among other things) remembering your goals and any instructions you need for accomplishing those goals.

The study involved 93 adults (average age 23; range 18-40), whose brains were monitored while they were doing nothing and when they were engaged in the cognitively challenging N-back working memory task.

Brain activity patterns revealed three regions within the frontoparietal network that were significantly involved in this task: the left lateral prefrontal cortex, right premotor cortex, and right medial posterior parietal cortex. All three of these regions also showed signs of being global hubs — that is, they were highly connected to other regions across the brain.

Of these, however, only the left lateral prefrontal cortex showed a significant association between its connectivity and individuals’ fluid intelligence. This was confirmed by a second, independent measure — working memory capacity — which was also correlated with this region’s connectivity, and only this region’s.

In other words, those with greater connectivity in the left LPFC had greater cognitive control, which is reflected in higher working memory capacity and higher fluid intelligence. There was no correlation between connectivity and crystallized intelligence.

Interestingly, although other global hubs (such as the anterior prefrontal cortex and anterior cingulate cortex) also have strong relationships with intelligence and high levels of global connectivity, they did not show correlations between their levels of connectivity and fluid intelligence. That is, although the activity within these regions may be important for intelligence, their connections to other brain regions are not.

So what’s so important about the connections the LPFC has with the rest of the brain? It appears that, although it connects widely to sensory and motor areas, it is primarily the connections within the frontoparietal control network that are most important — as well as the deactivation of connections with the default network (the network active during rest).

This is not to say that the LPFC is the ‘seat of intelligence’! Research has made it clear that a number of brain regions support intelligence, as do other areas of connectivity. The finding is important because it shows that the left LPFC supports cognitive control and intelligence through a mechanism involving global connectivity and some other as-yet-unknown property. One possibility is that this region is a ‘flexible’ hub — able to shift its connectivity with a number of different brain regions as the task demands.

In other words, what may count is how many different connectivity patterns the left LPFC has in its repertoire, and how good it is at switching to them.

An association between negative connections with the default network and fluid intelligence also adds to evidence for the importance of inhibiting task-irrelevant processing.

All this emphasizes the role of cognitive control in intelligence, and perhaps goes some way to explaining why self-regulation in children is so predictive of later success, apart from the obvious.

Back in 2009, I reported briefly on a large Norwegian study that found that older adults who consumed chocolate, wine, and tea performed significantly better on cognitive tests. The association was assumed to be linked to the flavanols in these products. A new study confirms this finding, and extends it to older adults with mild cognitive impairment.

The study involved 90 older adults with MCI, who consumed either 990 milligrams, 520 mg, or 45 mg of a dairy-based cocoa drink daily for eight weeks. Their diet was restricted to eliminate other sources of flavanols (such as tea, red wine, apples and grapes).

Cognitive assessment at the end of this period revealed that, although scores on the MMSE were similar across all groups, those consuming higher levels of flavanol cocoa took significantly less time to complete Trail Making Tests A and B, and scored significantly higher on the verbal fluency test. Insulin resistance and blood pressure were also lower.

Those with the highest levels of flavanols did better than those on intermediate levels on the cognitive tests. Both did better than those on the lowest levels.

Changes in insulin resistance explained part, but not all, of the cognitive improvement.

One caveat: the group were generally in good health without known cardiovascular disease — thus, not completely representative of all those with MCI.


Our life-experiences contain a wealth of new and old information. The relative proportions of these change, of course, as we age. But how do we know whether we should be encoding new information or retrieving old information? It’s easy if the information is readily accessible, but what if it’s not? Bear in mind that (especially as we get older) most information and experiences we encounter bear some similarity to information we already have.

This question is made even more meaningful when you consider that it is the same brain region — the hippocampus — that’s involved in both encoding and retrieval, and these two processes are thought to depend on quite opposite operations. While encoding is thought to rely on pattern separation (looking for differences), retrieval is thought to depend on pattern completion (filling in the whole from a partial cue).

A recent study looked at what happens in the brain when people rapidly switch between encoding new objects and retrieving recently presented ones. Participants were shown 676 pictures of objects and asked to identify each one as being shown for the first time (‘new’), being repeated (‘old’), or as a modified version of something shown earlier (‘similar’). Recognizing the similar items as similar was the question of interest, as these items contain both old and new information and so the brain’s choice between encoding and retrieval is more difficult.

What they found was that participants were more likely to recognize similar items as similar (rather than old) if they had viewed a new item on the preceding trial. In other words, the experience of a new item primed them to notice novelty. Or to put it in another way: context biases the hippocampus toward either pattern completion or pattern separation.

This was supported by a further experiment, in which participants were shown both the object pictures, and also learned associations between faces and scenes. Critically, each scene was associated with two different faces. In the next learning phase, participants were taught a new scene association for one face from each pair. Each face-scene learning trial was preceded by an object recognition trial (new and old objects were shown and participants had to identify them as old or new) — critically, either a new or old object was consistently placed before a specific face-scene association. In the final test phase, participants were tested on the new face-scene associations they had just learned, as well as the indirect associations they had not been taught (that is, between the face of each pair that had not been presented during the preceding phase, and the scene associated with its partnered face).

What this found was that participants were more likely to pair indirectly related faces if those faces had been consistently preceded by old objects, rather than new ones. Moreover, they did so more quickly when the faces had been preceded by old objects rather than new ones.

This was interpreted as indicating that the preceding experience affects how well related information is integrated during encoding.

What all this suggests is that the memory activities you’ve just engaged in bias your brain toward the same sort of activities — so whether or not you notice changes to a café or instead nostalgically recall a previous meal, may depend on whether you noticed anyone you knew as you walked down the street!

An interesting speculation by the researchers is that such a memory bias (which only lasts a very brief time) might be an adaptive mechanism, reflecting the usefulness of being more sensitive to changes in new environments and less sensitive to irregularities in familiar environments.

I have said before that there is little evidence that working memory training has any wider benefits than to the skills being practiced. Occasionally a study arises that gets everyone excited, but by and large training benefits only the skill being practiced — despite the fact that working memory underlies so many cognitive tasks, and limited working memory capacity is thought to negatively affect performance on so many of them. However, one area that does seem to have had some success is working memory training for those with ADHD, and researchers have certainly not given up hope of finding evidence for wider transfer in other groups (such as older adults).

A recent review of the research to date has, sadly, concluded that the benefits of working memory training programs are limited. But this is not to say there are no benefits.

For a start, the meta-analysis (analyzing data across studies) found that working memory training produced large immediate benefits for verbal working memory. These benefits were greatest for children below the age of 10.

These benefits, however, were not maintained long-term (at an average of 9 months after training, there were no significant benefits) — although benefits were found in one study in which the verbal working memory task was very similar to the training task (indicating that the specific skill practiced did maintain some improvement long-term).

Visuospatial working memory also showed immediate benefits, and these did not vary across age groups. One factor that did make a difference was type of training: the CogMed training program produced greater improvement than the researcher-developed programs (the studies included 7 that used CogMed, 2 that used Jungle Memory, 2 Cognifit, 4 n-back, 1 Memory Booster, and 7 researcher-developed programs).

Interestingly, visuospatial working memory did show some long-term benefits, although it should be noted that the average follow-up was distinctly shorter than that for verbal working memory tasks (an average of 5 months post-training).

The burning question, of course, is how well this training transferred to dissimilar tasks. Here the evidence seems sadly clear — those using untreated control groups tended to find such transfer; those using treated control groups never did. Similarly, nonrandomized studies tended to find far transfer, but randomized studies did not.

In other words, when studies were properly designed (randomized trials with a control group that is given alternative treatment rather than no treatment), there was no evidence of transfer effects from working memory training to nonverbal ability. Moreover, even when found, these effects were only present immediately and not on follow-up.

Neither was there any evidence of transfer effects, either immediate or delayed, on verbal ability, word reading, or arithmetic. There was a small to moderate effect of training on attention (as measured by the Stroop test), but this only occurred immediately, not on follow-up.

It seems clear from this review that there are few good, methodologically sound studies on this subject. But three very important caveats should be noted in connection with the researchers’ dispiriting conclusion.

First of all, because this is an analysis across all data, important differences between groups or individuals may be concealed. This is a common criticism of meta-analysis, and the researchers do try to answer it. Nevertheless, I think it is still a very real issue, especially in light of evidence that the benefit of training may depend on whether the challenge of the training is at the right level for the individual.

On the other hand, another recent study, which compared young adults who received 20 sessions of training on a dual n-back task or a visual search program with those who received no training at all, did look for an individual-differences effect, and failed to find it. Participants were tested repeatedly on their fluid intelligence, multitasking ability, working memory capacity, crystallized intelligence, and perceptual speed. Although those taking part in the training programs improved their performance on the tasks they practiced, there was no transfer to any of the cognitive measures. When participants were analyzed separately on the basis of their improvement during training, there was still no evidence of transfer to broader cognitive abilities.

The second important challenge comes from the lack of skill consolidation — having a short training program followed by months of not practicing the skill is not something any of us would expect to produce long-term benefits.

The third point concerns a recent finding that multi-domain cognitive training produces longer-lasting benefits than single-domain training (the same study also showed the benefit of booster training). It seems quite likely that working memory training is a valuable part of a training program that also includes practice in real-world tasks that incorporate working memory.

I should emphasize that these results only apply to ‘normal’ children and adults. The question of training benefits for those with attention difficulties or early Alzheimer’s is a completely different issue. But for these healthy individuals, it has to be said that the weight of the evidence is against working memory training producing more general cognitive improvement. Nevertheless, I think it’s probably an important part of a cognitive training program — as long as the emphasis is on part.

A British study looking at possible gender differences in the effects of math anxiety involved 433 secondary school children (11-16 years old) completing customized (year appropriate) mental mathematics tests as well as questionnaires designed to assess math anxiety and (separately) test anxiety. These sources of anxiety are often confounded in research studies (and in real life!), and while they are indeed related, reported correlations are moderate, ranging from .30 to .50.

Previous research has been inconsistent as regards gender differences in math anxiety. While many studies have found significantly greater levels of math anxiety in females, many studies have found no difference, and some have even found higher levels in males. These inconsistencies may stem from differences in how math anxiety is defined or measured.

The present study looked at a rather more subtle question: does the connection between math anxiety and math performance differ by gender? Again, previous research has produced inconsistent findings.

Findings in this study were very clear: while there was no difference between boys and girls in math performance, there were marked differences in both math and test anxiety, with girls showing significantly greater levels of both. Both boys and girls showed a positive correlation between math anxiety and test anxiety, and negative correlations between each type of anxiety and math performance. However, these anxiety-performance relationships were stronger for girls than boys: for boys, the correlation between test anxiety and performance was only marginally significant (p<0.07), and the correlation between math anxiety and performance disappeared once test anxiety was controlled for.
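“Controlling for” test anxiety here means partialling it out of the math anxiety-performance correlation. As a sketch of the mechanics (using made-up correlation values for illustration, not the study’s actual figures), the first-order partial correlation is:

```python
from math import sqrt

def partial_corr(r_xy: float, r_xz: float, r_yz: float) -> float:
    """Correlation of x and y with z partialled out (first-order)."""
    return (r_xy - r_xz * r_yz) / sqrt((1 - r_xz**2) * (1 - r_yz**2))

# Hypothetical values: math anxiety vs performance r = -0.30,
# math anxiety vs test anxiety r = 0.50 (within the .30-.50 range
# the article mentions), test anxiety vs performance r = -0.40.
r_partial = partial_corr(-0.30, 0.50, -0.40)
print(f"partial r ≈ {r_partial:.2f}")  # much weaker than the raw -0.30
```

When the two anxieties overlap this much, a seemingly solid math anxiety-performance correlation can shrink toward zero, which is exactly the pattern reported for the boys.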

In other words, greater math anxiety was linked to poorer math performance, but it was significant only for girls. Moreover, anxiety experienced by boys may simply reflect test anxiety, rather than specific math anxiety.

It is worth emphasizing that there was no gender difference in performance — that is, despite laboring under the burden of greater levels of anxiety, the girls did just as well as boys. This suggests that girls might do better than boys if they were free of anxiety. It is possible, however, that levels of anxiety didn’t actually differ between boys and girls — that the apparent difference stems from girls feeling more free to express their anxiety.

However, the finding that anxiety is greater in girls than boys is in line with evidence that anxiety (and worry in particular) is twice as prevalent in women as in men. Further support for the idea that the girls are under-performing because of their anxiety comes from another recent study.

In this study, 149 college students performed a relatively simple task while their brain activity was measured. Specifically, they had to identify the middle letter in a series of five-letter groups. Sometimes the middle letter was the same as the other four ("FFFFF") while sometimes it was different ("EEFEE"). Afterward the students completed questionnaires about their anxiety and how much they worry (Penn State Worry Questionnaire and the Anxious Arousal subscale of the Mood and Anxiety Symptom Questionnaire).

Anxiety scores were significantly negatively correlated with accuracy on the task; worry scores were unrelated to performance.

Only women who identified themselves as particularly anxious or big worriers showed high brain activity when they made mistakes during the task (reflecting greater performance monitoring). Although these women performed about the same as the others on simple portions of the task, their brains had to work harder at it. Then, as the test became more difficult, the anxious women performed worse, suggesting that worrying got in the way of completing the task.

Greater performance monitoring was not evident among anxious men.

[A reminder: these are group differences, and don't mean that all men or all women react in these ways.]

I’ve reported before on the evidence suggesting that carriers of the ‘Alzheimer’s gene’, APOE4, tend to have smaller brain volumes and perform worse on cognitive tests, despite being cognitively ‘normal’. However, the research hasn’t been consistent, and now a new study suggests the reason.

The e4 variant of the apolipoprotein E (APOE) gene not only increases the risk of dementia, but also of cardiovascular disease. These effects are not unrelated: apolipoprotein E is involved in the transport of cholesterol. In older adults, other vascular risk factors (such as elevated cholesterol, hypertension, or diabetes) have been shown to worsen the cognitive effects of having this gene variant.

This new study extends the finding, by looking at 72 healthy adults from a wide age range (19-77).

Participants were tested on various cognitive abilities known to be sensitive to aging and to the effects of the e4 allele, including speed of information processing, working memory, and episodic memory. Blood pressure was measured, and brain scans and, of course, genetic tests were also carried out.

There are a number of interesting findings:

  • The relationship between age and hippocampal volume was stronger for those carrying the e4 allele (shrinkage of this brain region occurs with age, and is significantly greater in those with MCI or dementia).
  • Higher systolic blood pressure was significantly associated with greater atrophy (i.e., smaller volumes), slower processing speed, and reduced working memory capacity — but only for those with the e4 variant.
  • Among those with the better and more common e3 variant, working memory was associated with lateral prefrontal cortex volume and with processing speed. Greater age was associated with higher systolic blood pressure, smaller volumes of the prefrontal cortex and prefrontal white matter, and slower processing. However, blood pressure was not itself associated with either brain atrophy or slower cognition.
  • For those with the Alzheimer’s variant (e4), older adults with higher blood pressure had smaller volumes of prefrontal white matter, and this in turn was associated with slower speed, which in turn linked to reduced working memory.

In other words, for those with the Alzheimer’s gene, age differences in working memory (which underpin so much of age-related cognitive impairment) were produced by higher blood pressure, reduced prefrontal white matter, and slower processing. For those without the gene, age differences in working memory were produced by reduced prefrontal cortex and prefrontal white matter.

Most importantly, the blood pressure increases we are talking about are well within the normal range (although at the higher end).

The researchers make an interesting point: that these findings are in line with “growing evidence that ‘normal’ should be viewed in the context of an individual’s genetic predisposition”.

What it comes down to is this: those with the Alzheimer’s gene variant (and no doubt other genetic variants) have a greater vulnerability to some of the risk factors that commonly increase as we age. Those with a family history of dementia or serious cognitive impairment should therefore pay particular attention to controlling vascular risk factors, such as hypertension and diabetes.

This doesn’t mean that those without such a family history can safely ignore such conditions! When they get to the point of being clinically diagnosed as problems, then they are assuredly problems for your brain regardless of your genetics. What this study tells us is that these vascular issues appear to be problematic for Alzheimer’s gene carriers before they get to that point of clinical diagnosis.

A study involving 75 perimenopausal women aged 40 to 60 has found that those with memory complaints tended to show impairments in working memory and attention. Complaints were not, however, associated with verbal learning or memory.

Complaints were also associated with depression, anxiety, somatic complaints, and sleep disturbance. But they weren’t linked to hormone levels (although estrogen is an important hormone for learning and memory).

What this suggests to me is that a primary cause of these cognitive impairments may be poor sleep, and anxiety/depression. A few years ago, I reported on a study that found that, although women’s reports of how many hot flashes they had didn’t correlate with memory impairment, an objective measure of the number of flashes they experienced during sleep did. Sleep, as I know from personal experience, is of sufficient importance that my rule-of-thumb is: don’t bother looking for any other causes of attention and memory deficits until you have sorted out your sleep!

Having said that, depressive symptoms showed greater relationship to memory complaints than sleep disturbance.

It’s no big surprise to hear that it is working memory in particular that is affected, because what many women at this time of life complain of is ‘brain fog’ — the feeling that your brain is full of cotton-wool. This doesn’t mean that you can’t learn new information, or remember old information. But it does mean that these tasks will be impeded to the extent that you need to hold on to too many bits of information. So mental arithmetic might be more difficult, or understanding complex sentences, or coping with unexpected disruptions to your routine, or concentrating on a task for a long time.

These sorts of problems are typical of those produced by on-going sleep deprivation, stress, and depression.

One caveat to the findings is that the study participants tended to be of above-average intelligence and education. This would protect them to a certain extent from cognitive decline — those with less cognitive reserve might display wider impairment. Other studies have found verbal memory, and processing speed, impaired during menopause.

Note, too, that a long-running, large population study has found no evidence for a decline in working memory, or processing speed, in women as they pass through perimenopause and menopause.

A new study explains how marijuana impairs working memory. Its active component, THC, triggers the removal of AMPA receptors for the neurotransmitter glutamate in the hippocampus, meaning there are fewer receivers for the information crossing between neurons.

The research is also significant because it adds to the growing evidence for the role of astrocytes in neural transmission of information.

This is shown by the finding that genetically engineered mice lacking type-1 cannabinoid receptors in their astroglia do not show impaired working memory when exposed to THC, while those lacking the receptors in their neurons do. Activation of the cannabinoid receptor expressed by astroglia signals the neurons to begin the process that removes AMPA receptors, leading to long-term depression (a type of synaptic plasticity that weakens, rather than strengthens, neural connections).

See the Guardian and Scientific American articles for more detail on the study and the processes involved.

For more on the effects of marijuana on memory

A review of 10 observational and four intervention studies is said to provide strong evidence for a positive relationship between physical activity and academic performance in young people (6-18). While only three of the four intervention studies and three of the 10 observational studies found a positive correlation, these included the two studies (one intervention and one observational) that the researchers rated as “high-quality”.

An important feature of the high-quality studies was that they used objective measures of physical activity, rather than students' or teachers' reports. More high-quality studies are clearly needed. Note that the quality scores of the 14 studies ranged from a low of 22% to 75%.

Interestingly, a recent media report (NOT, I hasten to add, a peer-reviewed study in an academic journal) described data from public schools in Lincoln, Nebraska, which apparently runs a district-wide physical-fitness test. Those who passed the fitness test were significantly more likely to also pass state reading and math tests.

Specifically, data from the last two years apparently shows that 80% of the students who passed the fitness test either met or exceeded state standards in math, compared to 66% of those who didn't pass the fitness test, and 84% of those who passed the fitness test met or exceeded state standards in reading, compared to 71% of those who failed the fitness test.

Another recent study looks at a different aspect of this association between physical exercise and academic performance.

The Italian study involved 138 normally-developing children aged 8-11, whose attention was tested before and after three different types of class: a normal academic class; a PE class focused on cardiovascular endurance and involving continuous aerobic circuit training followed by a shuttle run exercise; a PE class combining both physical and mental activity by involving novel use of basketballs in varying mini-games that were designed to develop coordination and movement-based problem-solving. The two types of physical activity offered the same exercise intensity, but very different skill demands.

The attention test was a short (5-minute) paper-and-pencil task in which the children had to mark each “d” carrying two quotation marks (above it, below it, or one of each) in 14 lines of randomly mixed p and d letters, each letter bearing one to four single and/or double quotation marks above and/or below it.

Processing speed increased 9% after mental exercise (normal academic class) and 10% after physical exercise. These were both significantly better than the increase of 4% found after the combined physical and mental exertion.

Similarly, scores on the test improved 13% after the academic class, 10% after the standard physical exercise, and only 2% after the class combining physical and mental exertion.

Now it’s important to note that this is, of course, an investigation of the immediate arousal benefits of exercise, rather than of the long-term benefits of being fit, which is a completely different question.

But the findings do bear on the use of PE classes in the school setting, and the different effects that different types of exercise might have.

First of all, there’s the somewhat surprising finding that attention was at least as good after the academic class as after the standard PE class, if not better. It would not have been surprising if attention had flagged. It seems likely that what we are seeing here is a reflection of being in the right head-space — that is, the advantage of continuing with the same sort of activity.

But the main finding is the somewhat unexpected relative drop in attention after the PE class that combined mental and physical exertion.

It seems plausible that the reason for this lies in the cognitive demands of the novel activity, which is, I think, the main message we should take away from this study, rather than any comparison between physical and mental activity. However, it would not be surprising if novel activities that combine physical and mental skills tend to be more demanding than skills that are “purely” (few things are truly pure I know) one or the other.

Of course, it shouldn’t be overlooked that attention wasn’t hampered by any of these activities!

Back in 2008, I reported on a small study that found that daily doses of Pycnogenol® for three months improved working memory in older adults, and noted research indicating that the extract from the bark of the French maritime pine tree had reduced symptoms in children with ADHD. Now another study, involving 53 Italian university students, has found that cognitive performance improved in those taking 100 mg of Pycnogenol every day for eight weeks.

Students taking the supplement had higher scores on university exams than the control group, and they were apparently happier, less anxious, and more alert. It seems plausible that the improvement in academic performance results from working memory benefits.

The plant extract is an antioxidant, and benefits may have something to do with improved vascular function and blood flow in the brain.

However, the control group was apparently not given a placebo (I’m relying on the abstract and press release here, as this journal is not one to which I have access); they were simply “a group of equivalent students”. I cannot fathom why a double-blind, placebo-controlled procedure wasn’t followed, and it greatly weakens the conclusions that can be drawn from this study. Indeed, I wouldn’t ordinarily report on it, except that I have previously reported on this dietary supplement, and I am in hopes that a better study will come along. In the meantime, this is another small step, to which I wouldn’t give undue weight.

Luzzi R., Belcaro G., Zulli C., Cesarone M. R., Cornelli U., Dugall M., Hosoi M., Feragalli B. 2011. Pycnogenol® supplementation improves cognitive function, attention and mental performance in students. Panminerva Medica, 53(3 Suppl 1), 75-82.

We’re all familiar with the experience of going to another room and forgetting why we’ve done so. The problem has been largely attributed to a failure of attention, but recent research suggests something rather more specific is going on.

In a previous study, a virtual environment was used to explore what happens when people move through several rooms. The virtual environment was displayed on a very large (66 inch) screen to provide a more immersive experience. Each ‘room’ had one or two tables. Participants ‘carried’ an object, which they would deposit on a table, before picking up a different object. At various points, they were asked if the object was, say, a red cube (memory probe). The objects were not visible at the time of questioning. It was found that people were slower and less accurate if they had just moved to a new room.

To assess whether this effect depends on a high degree of immersion, a recent follow-up to this study replicated the study using standard 17” monitors rather than the giant screens. The experiment involved 55 students and once again demonstrated a significant effect of shifting rooms. Specifically, when the probe was positive, the error rate was 19% in the shift condition compared to 12% on trials when the participant ‘traveled’ the same distance but didn’t change rooms. When the probe was negative, the error rate was 22% in the shift condition vs 7% for the non-shift condition. Reaction time was less affected — there was no difference when the probes were positive, but a marginally significant difference on negative-probe trials.

The second experiment went to the other extreme. Rather than reducing the immersive experience, researchers increased it — to a real-world environment. Unlike the virtual environments, distances couldn’t be kept constant across conditions. Three large rooms were used, and no-shift trials involved different tables at opposite ends of the room. Six objects, rather than just one, were moved on each trial. Sixty students participated.

Once again, more errors occurred when a room-shift was involved. On positive-probe trials, the error rate was 28% in the shift condition vs 23% in the non-shift. On negative-probe trials, the error rate was 21% and 18%, respectively. The difference in reaction times wasn’t significant.

The third experiment, involving 48 students, tested the idea that forgetting might be due to the difference in context at retrieval compared to encoding. To do this, the researchers went back to using the more immersive virtual environment (the 66” screen), and included a third condition. In this, either the participant returned to the original room to be tested (return) or continued on to a new room to be tested (double-shift) — the idea being to hold the number of spatial shifts the same.

There was no evidence that returning to the original room produced the sort of advantage expected if context-matching was the important variable. Memory was best in the no-shift condition, next best in the shift and return conditions (no difference between them), and worst in the double shift condition. In other words, it was the number of new rooms entered that appears to be important.

This is in keeping with the idea that we break the action stream into separate events using event boundaries. Passing through a doorway is one type of event boundary. A more obvious type is the completion of an action sequence (e.g., mixing a cake — the boundary is the action of putting it in the oven; speaking on the phone — the boundary is the action of ending the call). Information being processed during an event is more available, foregrounded in your attention. Interference occurs when two or more events are activated, increasing errors and sometimes slowing retrieval.

All of this has greater ramifications than simply helping to explain why we so often go to another room and forget why we’re there. The broader point is that everything that happens to us is broken up and filed, and we should look for the boundaries to these events and be aware of the consequences of them for our memory. Moreover, these contextual factors are important elements of our filing system, and we can use that knowledge to construct more effective tags.

Read an article on this topic at Mempowered

This is another demonstration of stereotype threat, which is also a nice demonstration of the contextual nature of intelligence. The study involved 70 volunteers (average age 25; range 18-49), who were put in groups of 5. Participants were given a baseline IQ test, on which they were given no feedback. The group then participated in a group IQ test, in which 92 multi-choice questions were presented on a monitor (both individual and group tests were taken from Cattell’s culture fair intelligence test). Each question appeared to each person at the same time, for a pre-determined time. After each question, they were provided with feedback in the form of their own relative rank within the group, and the rank of one other group member. Ranking was based on performance on the last 10 questions. Two of each group had their brain activity monitored.

Here’s the remarkable thing. If you gather together individuals on the basis of similar baseline IQ, then you can watch their IQ diverge over the course of the group IQ task, with some dropping dramatically (e.g., 17 points from a mean IQ of 126). Moreover, even those little affected still dropped some (8 points from a mean IQ of 126).

Data from the 27 brain scans (one had to be omitted for technical reasons) suggest that everyone was initially hindered by the group setting, but ‘high performers’ (those who ended up scoring above the median) managed to largely recover, while ‘low performers’ (those who ended up scoring below the median) never did.

Personality tests carried out after the group task found no significant personality differences between high and low performers, but gender was a significant variable: 10/13 high performers were male, while 11/14 low performers were female (remember, there was no difference in baseline IQ — this is not a case of men being smarter!).

There were significant differences between the high and low performers in activity in the amygdala and the right lateral prefrontal cortex. Specifically, all participants had an initial increase in amygdala activation and diminished activity in the prefrontal cortex, but by the end of the task, the high-performing group showed decreased amygdala activation and increased prefrontal cortex activation, while the low performers didn’t change. This may reflect the high performers’ greater ability to reduce their anxiety. Activity in the nucleus accumbens was similar in both groups, and consistent with the idea that the students had expectations about the relative ranking they were about to receive.

It should be pointed out that the specific feedback given — the relative ranking — was not a factor. What’s important is that it was being given at all, and the high performers were those who became less anxious as time went on, regardless of their specific ranking.

There are three big lessons here. One is that social pressure significantly depresses talent (meetings make you stupid?), and this seems to be worse when individuals perceive themselves to have a lower social rank. The second is that our ability to regulate our emotions is important, and something we should put more energy into. And the third is that we’ve got to shake ourselves loose from the idea that IQ is something we can measure in isolation. Social context matters.

One of the few established cognitive differences between men and women lies in spatial ability. But in recent years, this ‘fact’ has been shaken by evidence that training can close the gap between the genders. In this new study, 545 students were given a standard 3D mental rotation task while the researchers manipulated their confidence levels.

In the first experiment, 70 students were asked to rate their confidence in each answer. They could also choose not to answer. Confidence level was significantly correlated with performance both between and within genders.

On the face of it, these findings could simply reflect people’s ability to predict their own performance reliably. However, the researchers claim that regression analysis shows clearly that when the effect of confidence was taken into account, gender differences were eliminated. Moreover, gender significantly predicted confidence.

But of course this is still just indicative.

In the next experiment, however, the researchers tried to reduce the effect of confidence. One group of 87 students followed the same procedure as in the first experiment (“omission” group), except they were not asked to give confidence ratings. Another group of 87 students was not permitted to miss out any questions (“commission” group). The idea here was that confidence underlay the choice of whether or not to answer a question, so while the first group should perform similarly to those in the first experiment, the second group should be less affected by their confidence level.

This is indeed what was found: men significantly outperformed women in the first condition, but didn’t in the second condition. In other words, it appears that the mere possibility of not answering makes confidence an important factor.

In the third experiment, 148 students replicated the commission condition of the second experiment with the additional benefit of being allowed unlimited time. Half of the students were required to give confidence ratings.

Unlimited time improved performance overall. More importantly, the results confirmed those produced earlier: confidence ratings produced significant gender differences; there were no gender differences in the absence of such ratings.

In the final experiment, 153 students were required to complete an intentionally difficult line judgment task, which men and women both carried out at near chance levels. They were then randomly informed that their performance had been either above average (‘high confidence’) or below average (‘low confidence’). Having manipulated their confidence, the students were then given the standard mental rotation task (omission version).

As expected (remember this is the omission procedure, where subjects could miss out answers), significant gender differences were found. But there was also a significant difference between the high and low confidence groups. That is, telling people they had performed well (or badly) on the first task affected how well they did on the second. Importantly, women in the high confidence group performed as well as men in the low confidence group.

I’ve reported before on evidence that young children do better on motor tasks when they talk to themselves out loud, and learn better when they explain things to themselves or (even better) their mother. A new study extends those findings to children with autism.

In the study, 15 high-functioning adults with Autism Spectrum Disorder and 16 controls (age and IQ matched) completed the Tower of London task, used to measure planning ability. This task requires you to move five colored disks on three pegs from one arrangement to another in as few moves as possible. Participants did the task under normal conditions as well as under an 'articulatory suppression' condition whereby they had to repeat out loud a certain word ('Tuesday' or 'Thursday') throughout the task, preventing them from using inner speech.

Those with ASD did significantly worse than the controls in the normal condition (although the difference wasn’t large), but they did significantly better in the suppression condition — not because their own performance changed, but because the controls were significantly impaired by having their inner speech disrupted.

On an individual basis, nearly 90% of the control participants did significantly worse on the Tower of London task when inner speech was prevented, compared to only a third of those with ASD. Moreover, the size of the effect among those with ASD was correlated with measures of communication ability (but not with verbal IQ).

A previous experiment had confirmed that these neurotypical and autistic adults showed similar patterns of serial recall for labeled pictures. Half the pictures had phonologically similar labels (bat, cat, hat, mat, map, rat, tap, cap), and the other half had phonologically dissimilar labels (drum, shoe, fork, bell, leaf, bird, lock, fox). Both groups were significantly affected by phonological similarity, and both were significantly affected when inner speech was prevented.

In other words, this group of ASD adults were perfectly capable of inner speech, but they were much less inclined to use it when planning their actions.

It seems likely that, rather than using inner speech, they were relying on their visuospatial abilities, which tend to be higher in individuals with ASD. Supporting this, visuospatial ability (measured by the block design subtest of the WAIS) was highly correlated with performance on the Tower of London test. Which may not seem surprising, but the association was minimal in control participants.

Complex planning is said to be a problem for many with ASD. It’s also suggested that the relative lack of inner speech use might contribute to some of the repetitive behaviors common in people with autism.

It may be that strategies targeted at encouraging inner speech may help those with ASD develop such skills. Such strategies include encouraging children to describe their actions out loud, and providing “parallel talk”, whereby an observer plays alongside the child while verbalizing their actions.

It is also suggested that children with ASD could benefit from learning their daily school schedule verbally, rather than through the visual timetables that are currently a common approach. This could occur in stages: moving from pictures to symbols, then symbols with words, before finally using words only.

ASD is estimated to occur in 1% of the population, but perhaps this problem could be considered more widely. Rather than seeing this as an issue limited to those with ASD, we should see this as a pointer to the usefulness of inner speech, and its correlation with communication skills. As one of the researchers said: "These results show that inner speech has its roots in interpersonal communication with others early in life, and it demonstrates that people who are poor at communicating with others will generally be poor at communicating with themselves.”

One final comment: a distinction has been made between “dialogic” and “monologic” inner speech, where dialogic speech refers to a kind of conversation between different perspectives on reality, and monologic speech is simply a commentary to oneself about the state of affairs. It may be that it is specifically dialogic inner speech that is so helpful for problem-solving. It has been suggested that ASD is marked by a reduction in this kind of inner speech only, and the present researchers suggest further that it is this form of speech that may have inherently social origins and require training or experience in communicating with others.

The corollary to this is that it is only in those situations where dialogic inner speech is useful in achieving a task, that such differences between individuals will matter.

Clearly there is a need for much more research in this area, but it certainly provides food for thought.

We know that physical exercise greatly helps you prevent cognitive decline with aging. We know that mental stimulation also helps you prevent age-related cognitive decline. So it was only a matter of time before someone came up with a way of combining the two. A new study found that older adults improved executive function more by participating in virtual reality-enhanced exercise ("exergames") that combine physical exercise with computer-simulated environments and interactive videogame features, compared to the same exercise without the enhancements.

The Cybercycle Study involved 79 older adults (aged 58-99) from independent living facilities with indoor access to a stationary exercise bike. Of the 79, 63 participants completed the three-month study, meaning that they achieved at least 25 rides during the three months.

Unfortunately, randomization was not as good as it should have been — although the researchers planned to randomize on an individual basis, various technical problems led them to randomize on a site basis (there were eight sites), with the result that the cybercycle group and the control bike group were significantly different in age and education. Although the researchers took this into account in the analysis, that is not the same as having groups that match in these all-important variables. However, at least the variables went in opposite directions: while the cybercycle group was significantly younger (average 75.7 vs 81.6 years), it was significantly less educated (average 12.6 vs 14.8 years).

Perhaps also partly off-setting the age advantage, the cybercycle group was in poorer shape than the control group (higher BMI, glucose levels, lower physical activity level, etc), although these differences weren’t statistically significant. IQ was also lower for the cybercycle group, if not significantly so (but note the high averages for both groups: 117.6 vs 120.6). One of the three tests of executive function, Color Trails, also showed a marked group difference, but the large variability in scores meant that this difference was not statistically significant.

Although participants were screened for disorders such as Alzheimer’s and Parkinson’s, and for functional disability, many in both groups were assessed as having MCI — 16 of the 38 in the cybercycle group and 14 of the 41 in the control bike group.

Participants were given cognitive tests at enrolment, one month later (before the intervention began), and after the intervention ended. The stationary bikes were identical for both groups, except the experimental bike was equipped with a virtual reality display. Cybercycle participants experienced 3D tours and raced against a "ghost rider," an avatar based on their last best ride.

The hypothesis was that cybercycling would particularly benefit executive function, and this was borne out. Executive function (measured by the Color Trails, Stroop test, and Digits Backward) improved significantly more in the cybercycle condition, and indeed was the only cognitive task to do so (other cognitive tests included verbal fluency, verbal memory, visuospatial skill, motor function). Indeed, the control group, despite getting the same amount of exercise, got worse at the Digits Backward test, and failed to show any improvement on the Stroop test.

Moreover, significantly fewer cybercyclists progressed to MCI compared to the control group (three vs nine).

There were no differences in exercise quantity or quality between the two groups — which does argue against the idea that cyber-enhanced physical activity would be more motivating. However, the cybercycling group did tend to comment on their enjoyment of the exercise. While the enjoyment may not have translated into increased activity in this situation, it may well do so in a longer, less directed intervention — i.e. real life.

It should also be remembered that the intervention was relatively short, and that other cognitive tasks might take longer to show improvement than the more sensitive executive function. This is supported by the finding that levels of the brain growth factor BDNF, assessed in 30 participants, showed a significantly greater increase in cybercyclists.

I should also emphasize that the level of physical exercise really wasn't that great, but nevertheless the size of the cybercycle's effect on executive function was greater than usually produced by aerobic exercise (a medium effect rather than a small one).

The idea that activities that combine physical and mental exercise are of greater cognitive benefit than the sum of benefits from each type of exercise on its own is not inconsistent with previous research, and in keeping with evidence from animal studies that physical exercise and mental stimulation help the brain via different mechanisms. Moreover, I have an idea that enjoyment (in itself, not as a proxy for motivation) may be a factor in the cognitive benefits derived from activities, whether physical or mental. Mere speculation, derived from two quite separate areas of research: the idea of “flow” / “being in the zone”, and the idea that humor has physiological benefits.

Of course, as discussed, this study has a number of methodological issues that limit its findings, but hopefully it will be the beginning of an interesting line of research.  

American football has been in the news a lot in recent years, as evidence has accumulated as to the brain damage incurred by professional footballers. But American football is a high-impact sport. Soccer is quite different. And yet the latest research reveals that even something as apparently unexceptional as bouncing a ball off your forehead can cause damage to your brain, if done often enough.

Brain scans on 32 amateur soccer players (average age 31) have revealed that those who estimated heading the ball more than 1,000-1,500 times in the past year had damage to white matter similar to that seen in patients with concussion.

Six brain regions were seen to be affected: one in the frontal lobe and five in the temporo-occipital cortex. These regions are involved in attention, memory, executive functioning and higher-order visual functions. The number of headings (obviously very rough estimates, based presumably on individuals’ estimates of how often they play and how often they head the ball on average during a game) needed to produce measurable decreases in white matter integrity varied by region. In four of the temporo-occipital regions, the threshold number was around 1,500; in the fifth it was only 1,000; in the frontal lobe, it was 1,300.

Those with the highest annual heading frequency also performed worse on tests of verbal memory and psychomotor speed (activities that require mind-body coordination, like throwing a ball).

This is only a small study and clearly more research is required, but the findings indicate that we should lower our threshold for what constitutes ‘harm’ to the brain — if repetition is frequent enough, even mild knocks can cause damage. This adds to the evidence I discussed in a recent blog post that even mild concussions can produce long-lasting trauma to the brain, and that it is important to give your brain time to repair itself.

At the moment we can only speculate on the effect such repetition might have on the vulnerable brains of children.

The researchers suggest that heading should be monitored to prevent players exceeding unsafe exposure thresholds.

Kim, N., Zimmerman, M., Lipton, R., Stewart, W., Gulko, E., Lipton, M. & Branch, C. (2011). Making Soccer Safer for the Brain: DTI-defined Exposure Thresholds for White Matter Injury Due to Soccer Heading. Presented November 30 at the annual meeting of the Radiological Society of North America (RSNA) in Chicago.

The study involved 1,292 children followed from birth, whose cortisol levels were assessed at 7, 15, and 24 months. Three tests related to executive functions were given at age 3. Measures of parenting quality (maternal sensitivity, detachment, intrusiveness, positive regard, negative regard, and animation, during interaction with the child) and household environment (household crowding, safety and noise levels) were assessed during the home visits.

Earlier studies have indicated that a poor environment in and of itself is stressful to children, and is associated with increased cortisol levels. Interestingly, in one Mexican study, preschool children in poor homes participating in a conditional cash transfer scheme showed reduced cortisol levels.

This study found that children in lower-income homes received less positive parenting and had higher levels of cortisol in their first two years than children in slightly better-off homes. Higher levels of cortisol were associated with lower levels of executive function abilities, and to a lesser extent IQ, at 3 years.

African American children were more affected than White children on every measure. Cortisol levels were significantly higher; executive function and IQ significantly lower; ratings of positive parenting significantly lower and ratings of negative parenting significantly higher. Maternal education was significantly lower, poverty greater, homes more crowded and less safe.

The model derived from this data shows executive function negatively predicted by cortisol, while the effect on IQ is marginal. However, both executive function and IQ are predicted by negative parenting, positive parenting, and household risk (although this last variable has a greater effect on IQ than executive function). Neither executive function nor IQ was directly predicted by maternal education, ethnicity, or poverty level. Cortisol level was inversely related to positive parenting, but was not directly related to negative parenting or household risk.

Indirectly (according to this best-fit model), poverty was related to executive function through negative parenting; maternal education was related to executive function through negative parenting and to a lesser extent positive parenting; both poverty and maternal education were related to IQ through positive parenting, negative parenting, and household risk; African American ethnicity was related to executive function through negative parenting and positive parenting, and to IQ through negative parenting, positive parenting, and household risk. Cortisol levels were higher in African American children and this was unrelated to poverty level or maternal education.

Executive function (which includes working memory, inhibitory control, and attention shifting) is vital for self-regulation and central to early academic achievement. A link between cortisol level and executive function has previously been shown in preschool children, as well as adults. The association partly reflects the fact that stress hormone levels affect synaptic plasticity in the prefrontal cortex, where executive functions are carried out. This is not to say that this is the only brain region so affected, but it is an especially sensitive one. Chronic levels of stress alter the stress response systems in ways that impair flexible regulation.

What is important about this study is the association between stress level and cognitive ability at an early age; the finding that the effect of parenting on cortisol is associated with positive aspects rather than negative ones; and the finding that the association between poverty and cognitive ability is mediated by both cortisol and parenting behavior — both its positive and negative aspects.

A final word on the subject of the higher cortisol levels in African Americans. Because of the lack of high-income African Americans in the sample (a reflection of the participating communities), it wasn’t possible to directly test whether the effect is accounted for by poverty. So this remains a possibility. It is also possible that there is some genetic difference. But it also might reflect other sources of stress, such as that relating to prejudice and stereotype threat.

Based on the mothers’ ethnic status, 58% of the families were Caucasian and 42% African American. Two-thirds of the participants had an income-to-need ratio (estimated total household income divided by the 2005 federal poverty threshold adjusted for number of household members) less than 200% of poverty. Just over half of the mothers weren’t married, and most of those (89%) had never been married. The home visits at 7, 15, and 24 months lasted at least an hour, and included a videotaped free play or puzzle completion interaction between mother and child. Cortisol samples were taken prior to an emotion challenge task, and 20 minutes and 40 minutes after peak emotional arousal.
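The income-to-need criterion is simple arithmetic. As a minimal sketch (the threshold figures below are illustrative placeholders, not the official 2005 federal poverty table):

```python
# Income-to-need ratio: estimated total household income divided by the
# federal poverty threshold for that household size. A ratio below 2.0
# corresponds to the "less than 200% of poverty" criterion in the study.
# NOTE: these thresholds are illustrative placeholders, not the official
# 2005 federal table.
PLACEHOLDER_2005_THRESHOLDS = {2: 12800, 3: 15600, 4: 20000}

def income_to_need(income: float, household_size: int) -> float:
    """Return the income-to-need ratio for a household."""
    return income / PLACEHOLDER_2005_THRESHOLDS[household_size]

ratio = income_to_need(30000, 4)
print(f"{ratio:.2f}", "below 200% of poverty" if ratio < 2.0 else "at or above")
```

With the placeholder threshold of $20,000 for a four-person household, a $30,000 income gives a ratio of 1.50, placing that family in the study’s “less than 200% of poverty” group.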

Long-term genetic effects of childhood environment

The long-term effects of getting off to a poor start are deeper than you might believe. A DNA study of forty 45-year-old males in a long-running UK study has found clear differences in gene methylation between those who experienced either very high or very low standards of living as children or adults (methylation of a gene at a significant point in the DNA reduces the activity of the gene). More than twice as many methylation differences were associated with the combined effect of the wealth, housing conditions and occupation of parents (that is, early upbringing) than were associated with the current socio-economic circumstances in adulthood (1252 differences as opposed to 545).

The findings may explain why the health disadvantages known to be associated with low socio-economic position can remain for life, despite later improvement in living conditions. The methylation profiles associated with childhood family living conditions were clustered together in large stretches of DNA, which suggests that a well-defined epigenetic pattern is linked to early socio-economic environment. Adult diseases known to be associated with early life disadvantage include coronary heart disease, type 2 diabetes and respiratory disorders.

Blair, C., Granger, D.A., Willoughby, M., Mills-Koonce, R., Cox, M., Greenberg, M.T., Kivlighan, K.T., Fortunato, C.K., & the FLP Investigators. (2011). Salivary Cortisol Mediates Effects of Poverty and Parenting on Executive Functions in Early Childhood. Child Development. Available from: http://dx.doi.org/10.1111/j.1467-8624.2011.01643.x

Fernald, L. C., & Gunnar, M. R. (2009). Poverty-alleviation program participation and salivary cortisol in very low-income children. Social Science and Medicine, 68, 2180–2189.

Borghol, N., Suderman, M., McArdle, W., Racine, A., Hallett, M., Pembrey, M., Hertzman, C., Power, C., & Szyf, M. (2011). Associations with early-life socio-economic position in adult DNA methylation. International Journal of Epidemiology. Available from: http://ije.oxfordjournals.org/content/early/2011/10/18/ije.dyr147.abstract

In yet another study of the effects of pollution on growing brains, it has been found that children who grew up in Mexico City (known for its very high pollution levels) performed significantly worse on cognitive tests than those from Polotitlán, a city with a strong air quality rating.

The study involved 30 children aged 7 or 8: 20 from Mexico City and 10 from Polotitlán. The ten Polotitlán children served as controls to the Mexico City group, of whom 10 had white matter hyperintensities in their brains and 10 did not. Regardless of the presence of lesions, Mexico City children were found to have significantly smaller white matter volumes in right parietal and bilateral temporal regions. These reduced volumes were correlated with poorer performance on a variety of cognitive tests, especially those relating to attention, working memory, and learning.

It’s suggested that exposure to air pollution disturbs normal brain development, resulting in cognitive deficits.

Math-anxiety can greatly lower performance on math problems, but just because you suffer from math-anxiety doesn’t mean you’re necessarily going to perform badly. A study involving 28 college students has found that some of the students anxious about math performed better than other math-anxious students, and such performance differences were associated with differences in brain activity.

Math-anxious students who performed well showed increased activity in fronto-parietal regions of the brain prior to doing math problems — that is, in preparation for it. Those students who activated these regions got an average 83% of the problems correct, compared to 88% for students with low math anxiety, and 68% for math-anxious students who didn’t activate these regions. (Students with low anxiety didn’t activate them either.)

The fronto-parietal regions activated included the inferior frontal junction, inferior parietal lobule, and left anterior inferior frontal gyrus — regions involved in cognitive control and reappraisal of negative emotional responses (e.g. task-shifting and inhibiting inappropriate responses). Such anticipatory activity in the fronto-parietal region correlated with activity in the dorsomedial caudate, nucleus accumbens, and left hippocampus during math activity. These sub-cortical regions (regions deep within the brain, beneath the cortex) are important for coordinating task demands and motivational factors during the execution of a task. In particular, the dorsomedial caudate and hippocampus are highly interconnected and thought to form a circuit important for flexible, on-line processing. In contrast, performance was not affected by activity in ‘emotional’ regions, such as the amygdala, insula, and hypothalamus.

In other words, what’s important is not your level of anxiety, but your ability to prepare yourself for it, and control your responses. What this suggests is that the best way of dealing with math anxiety is to learn how to control negative emotional responses to math, rather than trying to get rid of them.

Given that cognitive control and emotional regulation are slow to mature, it also suggests that these effects may be greater among younger students.

The findings are consistent with a theory that anxiety hinders cognitive performance by limiting the ability to shift attention and inhibit irrelevant/distracting information.

Note that students in the two groups (high and low anxiety) did not differ in working memory capacity or in general levels of anxiety.

Research into the effects of cannabis on cognition has produced inconsistent results. Much may depend on extent of usage, timing, and perhaps (this is speculation) genetic differences. But marijuana abuse is common among sufferers of schizophrenia and recent studies have shown that the psychoactive ingredient of marijuana can induce some symptoms of schizophrenia in healthy volunteers.

Now new research helps explain why marijuana is linked to schizophrenia, and why it might have detrimental effects on attention and memory.

In this rat study, a drug that mimics the psychoactive ingredient of marijuana (by activating the cannabinoid receptors) produced significant disruption in brain networks, with brain activity becoming uncoordinated and inaccurate.

In recent years it has become increasingly clear that synchronized brainwaves play a crucial role in information processing — especially that between the hippocampus and prefrontal cortex (see, for example, my reports last month on theta waves improving retrieval and the effect of running on theta and gamma rhythms). Interactions between the hippocampus and prefrontal cortex seem to be involved in working memory functions, and may provide the mechanism for bringing together memory and decision-making during goal-directed behaviors.

Consistent with this, during decision-making on a maze task, hippocampal theta waves and prefrontal gamma waves were impaired, and the theta synchronization between the two was disrupted. These effects correlated with impaired performance on the maze task.

These findings are consistent with earlier findings that drugs that activate the cannabinoid receptors disrupt the theta rhythm in the hippocampus and impair spatial working memory. This experiment extends that result to coordinated brainwaves beyond the hippocampus.

Similar neural activity is observed in schizophrenia patients, as well as in healthy carriers of a genetic risk variant.

The findings add to the evidence that working memory processes involve coordination between the prefrontal cortex and the hippocampus through theta rhythm synchronization. The findings are consistent with the idea that items are encoded and indexed along the phase of the theta wave into episodic representations and transferred from the hippocampus to the neocortex as a theta phase code. By disrupting that code, cannabis makes it more difficult to retain and index the information relevant to the task at hand.

In a new study, two rhesus monkeys were given a standard human test of working memory capacity: an array of colored squares, varying from two to five squares, was shown for 800 msec on a screen. After a delay, varying from 800 to 1000 msec, a second array was presented. This array was identical to the first except for a change in color of one item. The monkey was rewarded if its eyes went directly to this changed square (an infra-red eye-tracking system was used to determine this). During all this, activity from single neurons in the lateral prefrontal cortex and the lateral intraparietal area — areas critical for short-term memory and implicated in human capacity limitations — was recorded.

As with humans, the more squares in the array, the worse the performance (from 85% correct for two squares to 66.5% for five). Their working memory capacity was calculated at 3.88 objects — i.e., essentially the same as that of humans.

That in itself is interesting, speaking as it does to the question of how human intelligence differs from other animals. But the real point of the exercise was to watch what is happening at the single neuron level. And here a surprise occurred.

That total capacity of around four items was composed of two independent, smaller capacities in the right and left halves of the visual space. What matters is how many objects fall within each hemifield (each half of the visual space). Each hemifield can only handle two objects. Thus, if the left side of the visual space contains three items, and the right side only one, information about the three items on the left side will be degraded. If the left side contains four items and the right side two, the two on the right side will be fine, but information from the four items on the left will be degraded.

Notice that the effect of more items than two in a hemifield is to decrease the total information from all the items in the hemifield — not to simply lose the additional items.
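The pattern described above lends itself to a toy model. This sketch is my own illustration of the described rule (each hemifield independently holds about two objects, and overload degrades every item in that hemifield), not the researchers’ actual analysis:

```python
# Toy model of the hemifield finding: each half of the visual space has an
# independent capacity of ~2 objects. Overloading a hemifield degrades the
# information about ALL items in it, rather than simply dropping the extras.
HEMIFIELD_CAPACITY = 2

def item_quality(n_items_in_hemifield: int) -> float:
    """Crude per-item information quality: 1.0 at or under capacity,
    proportionally degraded when the hemifield is overloaded."""
    if n_items_in_hemifield <= HEMIFIELD_CAPACITY:
        return 1.0
    return HEMIFIELD_CAPACITY / n_items_in_hemifield

# Three items on the left, one on the right: every left-side item is
# degraded, while the single right-side item is unaffected.
print(item_quality(3), item_quality(1))  # ~0.67 vs 1.0
```

The key design point this captures is that the two hemifields are independent pools: adding a fourth item to an overloaded left hemifield further dilutes all left-side items, while leaving right-side items untouched.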

The behavioral evidence correlated with brain activity, with object information in LPFC neurons decreasing with increasing number of items in the same hemifield, but not the opposite hemifield, and the same for the intraparietal neurons (the latter are active during the delay; the former during the presentation).

The findings resolve a long-standing debate: does working memory function like slots, which we fill one by one with items until all are full, or as a pool that fills with information about each object, with some information being lost as the number of items increases? And now we know why there is evidence for both views, because both contain truth. Each hemisphere might be considered a slot, but each slot is a pool.

Another long-standing question is whether the capacity limit is a failure of perception or memory. These findings indicate that the problem is one of perception. The neural recordings showed information about the objects being lost even as the monkeys were viewing them, not later as they were remembering what they had seen.

All of this is important theoretically, but there are also immediate practical applications. The work suggests that information should be presented in such a way that it’s spread across the visual space — for example, dashboard displays should spread the displays evenly on both sides of the visual field; medical monitors that currently have one column of information should balance it in right and left columns; security personnel should see displays scrolled vertically rather than horizontally; working memory training should present information in a way that trains each hemisphere separately. The researchers are forming collaborations to develop these ideas.

Buschman, T.J., Siegel, M., Roy, J.E., & Miller, E.K. (2011). Neural substrates of cognitive capacity limitations. Proceedings of the National Academy of Sciences. Available from: http://www.pnas.org/content/early/2011/06/13/1104666108.abstract

A study comparing activity in the dorsolateral prefrontal cortex in young, middle-aged and aged macaque monkeys as they performed a spatial working memory task has found that while neurons of the young monkeys maintained a high rate of firing during the task, neurons in older animals showed slower firing rates. The decline began in middle age.

Neuron activity was recorded in a particular area of the dorsolateral prefrontal cortex that is most important for visuospatial working memory. Some neurons only fired when the cue was presented (28 CUE cells), but most were active during the delay period as well as the cue and response periods (273 DELAY neurons). Persistent firing during the delay period is of particular interest, as it is required to maintain information in working memory. Many DELAY neurons increased their activity when the preferred spatial location was being remembered.

While the activity of the CUE cells was unaffected by age, that of DELAY cells was significantly reduced. This was true both of spontaneous activity and task-related activity. Moreover, the reduction was greatest during the cue and delay periods for the preferred direction, meaning that the effect of age was to reduce the ability to distinguish preferred and non-preferred directions.

It appeared that the aging prefrontal cortex was accumulating excessive levels of an important signaling molecule called cAMP. When cAMP was inhibited or cAMP-sensitive ion channels were blocked, firing rates rose to more youthful levels. On the other hand, when cAMP was stimulated, aged neurons reduced their activity even more.

The findings are consistent with rat research that has found two of the agents used — guanfacine and Rp-cAMPS — can improve working memory in aged rats. Guanfacine is a medication that is already approved for treating hypertension in adults and prefrontal deficits in children. A clinical trial testing guanfacine's ability to improve working memory and executive functions in elderly subjects who do not have dementia is now taking place.

Working memory capacity and level of math anxiety were assessed in 73 undergraduate students, and their level of salivary cortisol was measured both before and after they took a stressful math test.

For those students with low working memory capacity, neither cortisol levels nor math anxiety made much difference to their performance on the test. However, for those with higher WMC, the interaction of cortisol level and math anxiety was critical. For those unafraid of math, the more their cortisol increased during the test, the better they performed; but for those anxious about math, rising cortisol meant poorer performance.

It’s assumed that low-WMC individuals were less affected because their performance is lower to start with (this shouldn’t be taken as an inevitability! Low-WMC students are disadvantaged in a domain like math, but they can learn strategies that compensate for that problem). But the effect on high-WMC students demonstrates how our attitude and beliefs interact with the effects of stress. We may all have the same physiological responses, but we interpret them in different ways, and this interpretation is crucial when it comes to ‘higher-order’ cognitive functions.

Another study investigated two theories as to why people choke under pressure: (a) they’re distracted by worries about the situation, which clog up their working memory; (b) the stress makes them pay too much attention to their performance and become self-conscious. Both theories have research backing from different domains — clearly the former applies more to the academic testing environment, and the latter to situations involving procedural skill, where explicit attention to the process can disrupt motor sequences that are largely automatic.

But it’s not as simple as one effect applying to the cognitive domain and one to the domain of motor skills, and it’s a little mysterious why pressure could have two such opposite effects (drawing attention away, or toward it). This new study carried out four experiments in order to define more precisely the characteristics of the environment that lead to these different effects, and to suggest solutions to the problem.

In the first experiment, participants were given a category learning task, in which some categories had only one relevant dimension and could be distinguished according to one easily articulated rule, and others involved three relevant dimensions and one irrelevant one. Categorization in this case was based on a complex rule that would be difficult to verbalize, and so participants were expected to integrate the information unconsciously.

Rule-based category learning was significantly worse when participants were also engaged in a secondary task requiring them to monitor briefly appearing letters. However, it was not affected when their secondary task involved them explicitly monitoring the categorization task and making a confidence judgment. On the other hand, the implicit category learning task was not disrupted by the letter-monitoring task, but was impaired by the confidence-judgment task. Further analysis revealed that participants who had to do the confidence-judgment task were less likely to use the best strategy, instead persisting in trying to verbalize a one- or two-dimension rule.

In the second experiment, the same tasks were learned in a low-pressure baseline condition followed by either a low-pressure control condition or one of two high-pressure conditions. One of these revolved around outcome — participants would receive money for achieving a certain level of improvement in their performance. The other put pressure on the participants through monitoring — they were watched and videotaped, and told their performance would be viewed by other students and researchers.

Rule-based category learning was slower when the pressure came from outcomes, but not when the pressure came from monitoring. Implicit category learning was unaffected by outcome pressure, but worsened by monitoring pressure.

Both high-pressure groups reported the same levels of pressure.

Experiment 3 focused on the detrimental combinations — rule-based learning under outcome pressure; implicit learning under monitoring pressure — and added the secondary tasks from the first experiment.

As predicted, rule-based categories were learned more slowly during conditions of both outcome pressure and the distracting letter-monitoring task, but when the secondary task was confidence-judgment, the negative effect of outcome pressure was counteracted and no impairment occurred. Similarly, implicit category learning was slowed when both monitoring pressure and the confidence-judgment distraction were applied, but was unaffected when monitoring pressure was counterbalanced by the letter task.

The final experiment extended the finding of the second experiment to another domain — procedural learning. As expected, the motor task was significantly affected by monitoring pressure, but not by outcome pressure.

These findings suggest two different strategies for dealing with choking, depending on the situation and the task. In the case of test-taking, good test preparation and a writing exercise can boost performance by reducing anxiety and freeing up working memory. If you're worried about doing well in a game or giving a memorized speech in front of others, you instead want to distract yourself so you don't become focused on the details of what you're doing.

Binge drinking occurs most frequently among young people, and there has been concern that consequences will be especially severe if the brain is still developing, as it is in adolescence. Because only some parts of the brain — most crucially the prefrontal cortex and the hippocampus — are still developing, it makes sense that only some functions will be affected.

I recently reported on a finding that binge-drinking university students performed more poorly on tests of verbal memory, but not on a test of visual memory. A new study looks at another function: spatial working memory. This task involves the hippocampus, and animal research has indicated that this region may be especially vulnerable to binge drinking. Spatial working memory is involved in such activities as driving, figural reasoning, sports, and navigation.

The study involved 95 adolescents (aged 16-19) from San Diego-area public schools: 40 binge drinking (27 males, 13 females) and 55 control (31 males, 24 females). Brain scans while performing a spatial working memory task revealed that there were significant gender differences in brain activation patterns for those who engaged in binge drinking. Specifically, in eight regions spanning the frontal cortex, anterior cingulate, temporal cortex, and cerebellum, female binge drinkers showed less activation than female controls, while male bingers exhibited greater activation than male controls. For female binge drinkers, less activation was associated with poorer sustained attention and working memory performances, while for male binge drinkers, greater activation was linked to better spatial performance.

The differences between male binge drinkers and controls were smaller than those seen in the female groups, suggesting that female teens may be particularly vulnerable. This is not the first study to find a gender difference in the brain’s response to excess alcohol. In this case it may have to do, at least partly, with differences in maturity — female brains mature earlier than males’.

Following on from research showing that long-term meditation is associated with gray matter increases across the brain, an imaging study involving 27 long-term meditators (average age 52) and 27 controls (matched by age and sex) has revealed pronounced differences in white-matter connectivity between their brains.

The differences reflect white-matter tracts in the meditators’ brains being more numerous, more dense, more myelinated, or more coherent in orientation (unfortunately the technology does not yet allow us to disentangle these) — thus, better able to quickly relay electrical signals.

While the differences were evident among major pathways throughout the brain, the greatest differences were seen within the temporal part of the superior longitudinal fasciculus (bundles of neurons connecting the front and the back of the cerebrum) in the left hemisphere; the corticospinal tract (a collection of axons that travel between the cerebral cortex of the brain and the spinal cord), and the uncinate fasciculus (connecting parts of the limbic system, such as the hippocampus and amygdala, with the frontal cortex) in both hemispheres.

These findings are consistent with the regions in which gray matter increases have been found. For example, the tSLF connects with the caudal area of the temporal lobe, the inferior temporal gyrus, and the superior temporal gyrus; the UNC connects the orbitofrontal cortex with the amygdala and hippocampal gyrus.

It’s possible, of course, that those who are drawn to meditation, or who are likely to engage in it long term, have fundamentally different brains from other people. However, it is more likely (and more consistent with research showing the short-term effects of meditation) that the practice of meditation changes the brain.

The precise mechanism whereby meditation might have these effects remains a matter of speculation. However, more broadly, we can say that meditation might induce physical changes in the brain, or it might protect against age-related reduction. Most likely of all, perhaps, both processes are going on, perhaps in different regions or networks.

Regardless of the mechanism, the evidence that meditation has cognitive benefits is steadily accumulating.

The number of years the meditators had practiced ranged from 5 to 46. They reported a number of different meditation styles, including Shamatha, Vipassana and Zazen.

An increasing number of studies have been showing the benefits of bilingualism, both for children and in old age. However, there’s debate over whether the apparent benefits for children are real, or a product of cultural factors (the stereotype that “Asians work harder”, or, more seriously, the fact that Asian children are taught more behavioral control from an early age) or environmental factors (such as socioeconomic status).

A new study aimed to disentangle these complicating factors, by choosing 56 4-year-olds with college-educated parents, from middle-class neighborhoods, and comparing English-speaking U.S. children, Korean-speaking children in the U.S. and in Korea, and Korean-English bilingual children in the U.S.

The children were tested on a computer-game-like activity designed to assess the alerting, orienting, and executive control components of executive attention (a child version of the Attention Network Test). They were also given a vocabulary test (the Peabody Picture Vocabulary Test-III) in their own language, if monolingual, or in English for the bilinguals.

As expected, given their young age, English monolinguals scored well above bilinguals (learning more than one language slows the acquisition of vocabulary in the short term). Interestingly, however, while Korean monolinguals in Korea performed at a comparable level to the English monolinguals, Korean monolinguals in the U.S. performed at the level of the bilinguals. In other words, monolinguals living in a country where their language is the majority language had comparable vocabulary scores, while children living where their primary language is a minority language — whether monolingual or bilingual — scored similarly, and worse.

That’s interesting, but the primary purpose of the study was to look at executive control. And here the bilingual children shone over the monolinguals. Specifically, the bilingual children were significantly more accurate on the attention test than the monolingual Koreans in the U.S. (whether they spoke Korean or English). Although their performance in terms of accuracy was not significantly different from that of the monolingual children in Korea, these children obtained their high accuracy at the expense of speed. The bilinguals were both accurate and fast, suggesting a different mechanism is at work.

The findings confirm earlier research indicating that bilingualism, independent of culture, helps develop executive attention, and point to how early this advantage begins.

The Korean-only and bilingual children from the United States had first generation native Korean parents. The bilingual children had about 11 months of formal exposure to English through a bilingual daycare program, resulting in them spending roughly 45% of their time using Korean (at home and in the community) and 55% of their time using English (at daycare). The children in Korea belonged to a daycare center that did offer a weekly 15-minute session during which they were exposed to English through educational DVDs, but their understanding of English was minimal. Similarly, the Korean-only children in the U.S. would have had some exposure to English, but it was insufficient to allow them to understand English instructions. The researchers’ informal observation of the Korean daycare center and the ones in the U.S. was that the programs were quite similar, and neither was more enriching.

It has been difficult to train individuals in such a way that they improve in general skills rather than the specific ones used in training. However, some success has recently been achieved using what is called an “n-back” task: a series of visual and/or auditory cues is presented, and the subject must respond whenever the current cue matches the one presented n items back, starting with n = 1. If the subject scores well, n is increased in the next round.

In the latest study, 62 elementary and middle school children completed a month of training on a computer program, five times a week, for 15 minutes at a time. While the active control group trained on a knowledge and vocabulary-based task, the experimental group was given a demanding spatial task in which images appeared one at a time, at a rate of one every three seconds, in one of six locations. The child had to press one key whenever the current image was in the same location as the one n items back in the series, and another key if it wasn’t. Both tasks employed themed graphics to make the training more appealing and game-like.

How far back the child needed to remember depended on their performance — if they were struggling, n would be decreased; if they were meeting the challenge, n would be increased.
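The adaptive rule described above can be sketched in a few lines of Python. This is only an illustration: the 90%/70% accuracy thresholds are my assumption, not the study’s actual parameters.

```python
def nback_targets(stream, n):
    """Mark each position where the current item matches the one n items back."""
    return [i >= n and stream[i] == stream[i - n] for i in range(len(stream))]

def adjust_level(n, accuracy, raise_at=0.9, lower_at=0.7):
    """Adaptive staircase: raise n after a good round, lower it (never below 1)
    after a poor one. The thresholds here are illustrative assumptions."""
    if accuracy >= raise_at:
        return n + 1
    if accuracy < lower_at:
        return max(1, n - 1)
    return n
```

So a child who identifies nearly all the targets in a round moves from, say, 2-back up to 3-back, while one who is struggling drops back toward 1-back, keeping the task pitched at the edge of their ability.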

Although the experimental and active control groups showed little difference on abstract reasoning tasks (reflecting fluid intelligence) at the end of the training, the story changed when the experimental group was divided into two subgroups on the basis of training gain. Those who showed substantial improvement on the training task over the month were significantly better than the others on the abstract reasoning task. Moreover, this improvement was maintained at follow-up testing three months later.

The key to success seems to be whether or not the games hit the “sweet spot” for the individual — fun and challenging, but not so challenging as to be frustrating. Those who showed the least improvement rated the game as more difficult, while those who improved the most found it challenging but not overwhelming.

You can try this task yourself at http://brainworkshop.sourceforge.net/.

Jaeggi, S.M., Buschkuehl, M., Jonides, J. & Shah, P. 2011. Short- and long-term benefits of cognitive training. Proceedings of the National Academy of Sciences, published online June 13, 2011. http://www.ncbi.nlm.nih.gov/pubmed/21670271

Jaeggi, S.M., Buschkuehl, M., Jonides, J. & Perrig, W.J. 2008. Improving fluid intelligence with training on working memory. Proceedings of the National Academy of Sciences, 105(19), 6829-6833. Available from: http://www.pnas.org/content/early/2008/04/25/0801268105.abstract

A number of studies have demonstrated the cognitive benefits of music training for children. Now research is beginning to explore just how long those benefits last. This is the second study I’ve reported on this month that points to childhood music training protecting older adults from aspects of cognitive decline. In this study, 37 adults aged 45 to 65, of whom 18 were classified as musicians, were tested on their auditory and visual working memory, and their ability to hear speech in noise.

The musicians performed significantly better than the non-musicians at distinguishing speech in noise, and on the auditory temporal acuity and working memory tasks. There was no difference between the groups on the visual working memory task.

Difficulty hearing speech in noise is among the most common complaints of older adults, but age-related hearing loss only partially accounts for the problem.

The musicians had all begun playing an instrument by age 8 and had consistently played an instrument throughout their lives. Those classified as non-musicians had no musical experience (12 of the 19) or less than three years at any point in their lives. The seven with some musical experience rated their proficiency on an instrument at less than 1.5 on a 10-point scale, compared to at least 8 for the musicians.

Physical activity levels were also assessed. There was no significant difference between the groups.

The finding that visual working memory was not affected supports the idea that musical training helps domain-specific skills (such as auditory and language processing) rather than general ones.

Once upon a time we drew a clear line between emotion and reason. Now increasing evidence points to the necessity of emotion for good reasoning. It’s clear the two are deeply entangled.

Now a new study has found that those with a higher working memory capacity (associated with greater intelligence) are more likely to automatically apply effective emotional regulation strategies when the need arises.

The study follows on from previous research that found that people with a higher working memory capacity suppressed expressions of both negative and positive emotion better than people with lower WMC, and were also better at evaluating emotional stimuli in an unemotional manner, thereby experiencing less emotion in response to those stimuli.

In the new study, participants were given a test, then given either negative or no feedback. A subsequent test, in which participants were asked to rate their familiarity with a list of people and places (some of which were fake), evaluated whether their emotional reaction to the feedback affected their performance.

This negative feedback was quite personal. For example: "your responses indicate that you have a tendency to be egotistical, placing your own needs ahead of the interests of others"; "if you fail to mature emotionally or change your lifestyle, you may have difficulty maintaining these friendships and are likely to form insecure relations."

The false items in the test were there to check for “overclaiming”, a reaction well known to make people feel better about themselves and control their reactions to criticism. Among those who received negative feedback, those with higher levels of WMC were found to overclaim the most. The people who overclaimed the most also reported, at the end of the study, the least negative emotions.

In other words, those with a high WMC were more likely to use an emotion regulation strategy automatically. Other such strategies include controlling your facial expression or reappraising negative situations as positive ones. Strategies such as these are often more helpful than suppressing emotion.

Schmeichel, Brandon J.; Demaree, Heath A. 2010. Working memory capacity and spontaneous emotion regulation: High capacity predicts self-enhancement in response to negative feedback. Emotion, 10(5), 739-744.

Schmeichel, Brandon J.; Volokhov, Rachael N.; Demaree, Heath A. 2008. Working memory capacity and the self-regulation of emotional expression and experience. Journal of Personality and Social Psychology, 95(6), 1526-1540. doi: 10.1037/a0013345

I’ve always been intrigued by neurofeedback training. But when it first raised its head, technology was far less sophisticated. Now a new study has used real-time functional Magnetic Resonance Imaging (fMRI) feedback from the rostrolateral prefrontal cortex to improve people's ability to control their thoughts and focus their attention.

In the study, participants performed tasks that either raised or lowered mental introspection in 30-second intervals over four six-minute sessions. Those with access to real-time fMRI feedback could see their RLPFC activity increase during introspection and decrease during non-introspective thoughts, such as mental tasks that focused on body sensations. These participants became significantly better at controlling their thoughts and performing the mental tasks. Moreover, the improved regulation was reflected only in activity in the rostrolateral prefrontal cortex. Those given inaccurate or no brain feedback showed no such improvement.

The findings point to a means of improving attentional control, and also raise hope for clinical treatments of conditions that can benefit from improved awareness and regulation of one's thoughts, including depression, anxiety, and obsessive-compulsive disorders.

As I’ve discussed on many occasions, a critical part of attention (and working memory capacity) is being able to ignore distraction. There has been growing evidence that mindfulness meditation training helps develop attentional control. Now a new study helps fill out the picture of why it might do so.

The alpha rhythm is particularly active in neurons that process sensory information. When you expect a touch, sight or sound, the focusing of attention toward the expected stimulus induces a lower alpha wave height in neurons that would handle the expected sensation, making them more receptive to that information. At the same time the height of the alpha wave in neurons that would handle irrelevant or distracting information increases, making those cells less receptive to that information. In other words, alpha rhythm helps screen out distractions.

In this study, six participants who completed an eight-week mindfulness meditation program (MBSR) were found to generate larger alpha waves, and generate them faster, than the six in the control group. Alpha wave activity in the somatosensory cortex was measured while participants directed their attention to either their left hand or foot. This was done on three occasions: before training, at three weeks of the program, and after the program.

The MBSR program involves an initial two-and-a-half-hour training session, followed by daily 45-minute meditation sessions guided by a CD recording. The program trains participants first to pay close attention to body sensations, then to focus on body sensations in a specific area, then to disengage and shift the focus to another body area.

Apart from helping us understand why mindfulness meditation training seems to improve attention, the findings may also explain why this meditation can help sufferers of chronic pain.

A study involving 117 older adults (mean age 78) found that those at greater risk of coronary artery disease had substantially greater risk for decline in verbal fluency and the ability to ignore irrelevant information. Verbal memory was not affected.

The findings add to a growing body of research linking cardiovascular risk factors and age-related cognitive decline, leading to the mantra: What’s good for the heart is good for the brain.

The study also found that the common classification into high and low risk groups was less useful in predicting cognitive decline than treating risk as a continuous factor. This is consistent with a growing view that no cognitive decline is ‘normal’, but is always underpinned by some preventable damage.

Risk for coronary artery disease was measured with the Framingham Coronary Risk Score, which uses age, cholesterol levels, blood pressure, presence of diabetes, and smoking status to estimate a person's 10-year risk of a coronary event. 37 (31%) had high scores. Age, education, gender, and stroke history were controlled for in the analysis.

Gooblar, J., Mack, W.J., Chui, H.C., DeCarli, C., Mungas, D., Reed, B.R. & Kramer, J.H. 2011. Framingham Coronary Risk Profile Predicts Poorer Executive Functioning in Older Nondemented Adults. Presented at the American Academy of Neurology annual meeting on Tuesday, April 12, 2011.

A study of 265 New York City minority children has found that those born with higher amounts of the insecticide chlorpyrifos had lower IQ scores at age 7. Those most exposed (top 25%) scored an average 5.3 points lower on the working memory part of the IQ test (WISC-IV), and 2.7 points lower on the full IQ test, compared to those in the lowest quartile.

The children were born prior to the 2001 ban on indoor residential use of the common household pesticide in the US. The babies' umbilical cord blood was used to measure exposure to the insecticide.

Previous research had found that, prior to the ban, chlorpyrifos was detected in all personal and indoor air samples in New York, and 70% of umbilical cord blood collected from babies. The amount of chlorpyrifos in babies' blood was associated with neurodevelopmental problems at age three. The new findings indicate that these problems persist.

While exposure to the organophosphate has measurably declined, agricultural use is still permitted in the U.S.

Similarly, another study, involving 329 7-year-old children in a farming community in California, has found that those with the highest prenatal exposure to organophosphate pesticides, as measured by dialkyl phosphate (DAP) metabolites, had an average IQ 7 points lower than children whose exposure was in the lowest quintile. Prenatal pesticide exposure was linked to poorer scores for working memory, processing speed, verbal comprehension, and perceptual reasoning, as well as overall IQ.

Prenatal exposure was measured by DAP concentration in the mother’s urine. Urine was also collected from the children at age 6 months and 1, 2, 3½ and 5 years. However, there was no consistent link between children’s postnatal exposure and cognition.

While this was a farming community where pesticide exposure would be expected to be high, the levels were within the range found in the general population.

It’s recommended that people wash fruit and vegetables thoroughly, and limit their use of pesticides at home.

Research has shown that people are generally poor at predicting how likely they are to remember something. A recent study tested the theory that the reason we’re so often inaccurate is that we make predictions about memory based on how we feel while we're encountering the information to be learned, and that can lead us astray.

In three experiments, each involving about 80 participants ranging in age from late teens to senior citizens, participants were serially shown words in large or small fonts and asked to predict how well they'd remember each (actual font sizes depended on the participants’ browsers, since this was an online experiment and participants were in their own homes, but the larger size was four times larger than the other).

In the first experiment, each word was presented either once or twice, and participants were told if they would have another chance to study the word. The length of time the word was displayed on the first occasion was controlled by the participant. On the second occasion, words were displayed for four seconds, and participants weren’t asked to make a new prediction. At the end of the study phase, they had two minutes to type as many words as they remembered.

Recall was significantly better when an item was seen twice. Recall wasn’t affected by font size, but participants were significantly more likely to believe they’d recall those presented in larger fonts. While participants realized seeing an item twice would lead to greater recall, they greatly underestimated the benefits.

Because people so grossly discounted the benefit of a single repetition, in the next experiment the comparison was between one and four study trials. This time, participants gave more weight to the extra repetitions, but their predictions were still well below the actual benefit.

In the third experiment, participants were given a simplified description of the first experiment and asked either what effect they’d expect font size to have, or what effect having two study trials would have. The results (similar levels of belief in the benefits of each condition) resembled neither the results of the first experiment (indicating that those participants’ predictions hadn’t been made on the basis of their beliefs about memory effects) nor the actual performance (demonstrating that people really aren’t very good at predicting their memory performance).

These findings were confirmed in a further experiment, in which participants were asked about both variables (rather than just one).

The findings confirm other evidence that (a) general memory knowledge tends to be poor, (b) personal memory awareness tends to be poor, and (c) ease of processing is commonly used as a heuristic to predict whether something will be remembered.

Addendum: a nice general article on this topic by the lead researcher Nate Kornell has just come out in Miller-McCune.

Kornell, N., Rhodes, M. G., Castel, A. D., & Tauber, S. K. (in press). The ease of processing heuristic and the stability bias: Dissociating memory, memory beliefs, and memory judgments. Psychological Science.

Comparison of young adults (mean age 24.5) and older adults (mean age 69.1) in a visual memory test involving multitasking has pinpointed the greater problems older adults have with multitasking. The study involved participants viewing a natural scene and maintaining it in mind for 14.4 seconds. In the middle of the maintenance period, an image of a face popped up and participants were asked to determine its sex and age. They were then asked to recall the original scene.

As expected, older people had more difficulty with this. Brain scans revealed that, for both groups, the interruption caused their brains to disengage from the network maintaining the memory and reallocate resources to processing the face. But the younger adults had no trouble disengaging from that task as soon as it was completed and re-establishing connection with the memory maintenance network, while the older adults failed both to disengage from the interruption and to reestablish the network associated with the disrupted memory.

This finding adds to the evidence that an important (perhaps the most important) reason for cognitive decline in older adults is a growing inability to inhibit processing, and extends the processes to which that applies.

Following earlier research suggesting mood affects attention, a new study tries to pin down exactly what it’s affecting.

To induce different moods, participants were shown either a video of a stand-up comedy routine or an instructional video on how to install flooring. This was followed by two tests: one of working memory capacity (the Running Memory Span, in which numbers are presented through headphones at a rate of four per second and, when the list stops, the subject must recall the last six numbers in order), and one of response inhibition (the Stroop task).
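What makes the Running Memory Span demanding is that the listener doesn’t know when the list will stop, so older digits must be continually discarded while the newest six are held. The updating requirement can be captured with a sliding window; the six-item span is from the task description, but the rest of this sketch is just illustration:

```python
from collections import deque

def running_span(stream, span=6):
    """Hold only the most recent `span` digits; when the stream ends,
    report them in presentation order, as the task requires."""
    window = deque(maxlen=span)
    for digit in stream:
        window.append(digit)  # the oldest digit falls out automatically
    return list(window)
```

A `deque` with `maxlen` set discards the oldest element on each append, mirroring the constant displacement the subject has to perform in working memory.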

Those who watched the comedy routine performed significantly worse on the RMS task but not on the Stroop task. To confirm these results, a second experiment used a different measure of response inhibition, the Flanker task. Again, those in a better mood performed worse on the span task but not the inhibition task.

These findings point to mood affecting storage capacity — something we already had evidence for in the case of negative mood, like anxiety, but a little more surprising to find it also applies to happy moods. Basically, it seems as if any emotion, whether good or bad, is likely to leave you less room in your working memory store for information processing. That shouldn’t be taken as a cue to go all Spock! But it’s something to be aware of.

A study involving 125 younger (average age 19) and older (average age 69) adults has revealed that while younger adults showed better explicit learning, older adults were better at implicit learning. Implicit memory is our unconscious memory, which influences behavior without our awareness.

In the study, participants pressed buttons in response to the colors of words and random letter strings — only the colors were relevant, not the words themselves. They then completed word fragments. In one condition, they were told to use words from the earlier color task to complete the fragments (a test of explicit memory); in the other, this task wasn’t mentioned (a test of implicit memory).

Older adults showed better implicit memory than explicit memory, and better implicit memory than the younger adults, while the reverse was true for the younger adults. However, on a further test, which required younger participants to perform a number task simultaneously with the color task, younger adults behaved like older ones.

The findings indicate that shallower and less focused processing goes on during multitasking, and (but not inevitably!) with age. The fact that younger adults behaved like older ones when distracted points to the problem, for which we now have quite a body of evidence: with age, we tend to become more easily distracted.

Readers of my books and articles will know that working memory is something I get quite excited about. It’s hard to overstate the importance of working memory in our lives. Now a new study tells us that working memory is in fact made up of three areas: a core focusing on one active item, a surrounding area holding at least three more active items (called the outer store), and a wider region containing passive items that have been tagged for later retrieval. Moreover, the core region (the “focus of attention”) has three roles (one more than previously thought): it not only directs attention to an item and retrieves it, but also updates it later, if required.

In two experiments, 49 participants were presented with up to four types of colored shapes on a computer screen, with each type (e.g., a red square) confined to a particular column. Each colored shape was first displayed with a number from 1 to 4, and then instances of the shapes appeared one at a time. The participants’ task was to keep a running count of each shape type. Different sequences involved one, two, three, or four shape types. Participants controlled how quickly the shapes appeared.

Unsurprisingly, participants were slower and less accurate as the set size (number of shape types) increased. There was a significant jump in response time when the set size increased from one to two, and a steady increase in RT and decline in accuracy as set size increased from two to four. Responses were also notably slower when the stimulus changed and participants had to shift their focus from one type of shape to another (this is called the switch cost). Moreover, this switch cost increased linearly with set size, at a rate of about 240 ms per item.

Without getting into all the ins and outs of this experiment and the ones leading up to it, what the findings all point to is a picture of working memory in which:

  • the focus contains only one item,
  • the area outside the focus contains up to three items,
  • this outer store has to be searched before the item can be retrieved,
  • more recent items in the outer store are not found any more quickly than older items in the outer store,
  • focus-switch costs increase as a direct function of the number of items in the outer store,
  • there is (as earlier theorized) a third level of working memory, containing passive items, that is quite separate from the two areas of active storage,
  • the number of passive items does not influence either response time or accuracy for recalling active items.
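The linear switch-cost finding in the list above amounts to a one-parameter model. A minimal sketch, using the roughly 240 ms/item slope reported in the study; the intercept is a free parameter I’ve added for illustration, since the text doesn’t give one:

```python
def switch_cost_ms(set_size, slope_ms=240, base_ms=0):
    """Predicted focus-switch cost: grows linearly with set size at
    roughly 240 ms per item. base_ms is an assumed (free) intercept."""
    return base_ms + slope_ms * set_size
```

Each added shape type thus adds about a quarter of a second to the cost of redirecting the focus, which fits a search of the outer store that does not privilege recently used items.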

It is still unclear whether the passive third layer is really a part of working memory, or part of long-term memory.

The findings do point to the need to use active loads rather than passive ones, when conducting experiments that manipulate cognitive load (for example, requiring subjects to frequently update items in working memory, rather than simply hold some items in memory while carrying out another task).

In the first of three experiments, 132 students were found to gesture more often when they had difficulties solving mental rotation problems. In the second experiment, 22 students were encouraged to gesture, while 22 were given no such encouragement, and a further 22 were told to sit on their hands to prevent gesturing. Those encouraged to gesture solved more mental rotation problems.

Interestingly, the amount of gesturing decreased with experience on these spatial problems, and when the gesture group were given new spatial visualization problems in which gesturing was prohibited, their performance was still better than that of the other participants. This suggests that the spatial computation supported by gestures becomes internalized. The third experiment extended these findings to a wider range of spatial visualization problems.

The researchers suggest that hand gestures may improve spatial visualization by helping a person keep track of an object in the mind as it is rotated to a new position, and by providing additional feedback and visual cues by simulating how an object would move if the hand were holding it.

Chu, M. & Kita, S. 2011. The nature of gestures' beneficial role in spatial problem solving. Journal of Experimental Psychology: General, 140(1), 102-116. Available from: http://psycnet.apa.org/journals/xge/140/1/102/

Full text of the article is available at http://www.apa.org/pubs/journals/releases/xge-140-1-102.pdf

We’ve all experienced the fading of our ability to concentrate when we’ve been focused on a task for too long. The dominant theory of why this should be so has been around for half a century, and describes attention as a limited resource that gets ‘used up’. Well, attention is assuredly a limited resource in the sense that you only have so much of it to apply. But is it limited in the sense of being used up and needing to refresh? A new study indicates that it isn’t.

The researchers make what strikes me as a cogent argument: attention is an endless resource; we are always paying attention to something. The problem is our ability to maintain attention on a single task without respite. Articulated like this, we are immediately struck by the parallel with perception. Any smell, touch, sight, sound, that remains constant eventually stops registering with us. We become habituated to it. Is that what’s happening with attention? Is it a form of habituation?

In an experimental study, 84 volunteers were tested on their ability to focus on a repetitive computerized task for 50 minutes under various conditions: one group had no breaks or distractions; two groups memorized four digits beforehand and were told to respond if they saw them on the screen during the task (but only one of these groups was actually shown them); and one group were shown the digits but told to ignore them.

As expected, performance declined significantly over the course of the task for most participants — with the exception of those who were twice shown the memorized digits and had to respond to them. That was all it took, a very brief break in the task, and their focus was maintained.

The finding suggests that prolonged attention to a single task actually hinders performance, but briefly deactivating and reactivating your goals is all you need to stay focused.

A link between positive mood and creativity is supported by a study in which 87 students were put into different moods (using music and video clips) and then given a category learning task to do (classifying sets of pictures with visually complex patterns). There were two category tasks: one involved classification on the basis of a rule that could be verbalized; the other was based on a multi-dimensional pattern that could not easily be verbalized.

Happy volunteers were significantly better at learning the rule to classify the patterns than sad or neutral volunteers. There was no difference between those in a neutral mood and those in a negative mood.

It had been theorized that positive mood might only affect processes that require hypothesis testing and rule selection, the mechanism being increased dopamine levels in the frontal cortex. Interestingly, however, although on the non-verbalizable task there was no difference in performance as a function of mood, analysis of how closely the subjects’ responses matched an optimal strategy found that, again, positive mood was of significant benefit.

The researchers suggest that this effect of positive mood may be the reason behind people liking to watch funny videos at work — they’re trying to enhance their performance by putting themselves in a good mood.

The music and video clips were rated for their mood-inducing effects. Mozart’s “Eine Kleine Nachtmusik—Allegro” was the highest rated music clip (at an average rating of 6.57 on a 7-point scale), Vivaldi’s Spring was next at 6.14. The most positive video was that of a laughing baby (6.57 again), with Whose Line is it Anyway sound effects scoring close behind (6.43).

Nadler, R.T., Rabi, R. & Minda, J.P. 2010. Better mood and better performance. Psychological Science, 21(12), 1770-1776. Available from: http://pss.sagepub.com/content/21/12/1770.abstract

A working memory training program developed to help children with ADHD has been tested by 52 students, aged 7 to 17. Between a quarter and a third of the children showed significant improvement in inattention, overall number of ADHD symptoms, initiation, planning/organization, and working memory, according to parental ratings. While teacher ratings were positive, they did not quite reach significance. It is worth noting that this improvement was maintained at the four-month follow-up.

The children used the software in their homes, under the supervision of their parents and the researchers. The program includes a set of 25 exercises in a computer-game format that students had to complete within 5 to 6 weeks. For example, in one exercise a robot speaks numbers in a certain order, and the student has to click on those numbers on the computer screen in the opposite order. Each session is 30 to 40 minutes long, and the exercises become progressively harder as the students improve.
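The backward-span exercise just described has a simple correctness rule: the clicked sequence must be the spoken sequence reversed. A trivial sketch (the function name is mine, not Cogmed's):

```python
def backward_span_correct(spoken, clicked):
    """True when the response reproduces the spoken digits in reverse order."""
    return list(clicked) == list(reversed(spoken))
```

The working memory load comes from holding the whole spoken list while producing it back-to-front, which is why lengthening the list makes the exercise progressively harder.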

The software was developed by a Swedish company called Cogmed in conjunction with the Karolinska Institute. Earlier studies in Sweden have been promising, but this is the first study in the United States, and the first to include children on medication (60% of the participants).

It’s well known that being too anxious about an exam can make you perform worse, and studies indicate that part of the reason is that your limited working memory gets clogged up with anxiety-related thoughts. However, for those who suffer from test anxiety, it’s not so easy to simply ‘relax’ and clear their heads. But now a new study has found that simply spending 10 minutes before the exam writing about your thoughts and feelings can free up brainpower previously occupied by testing worries.

In the first laboratory experiment, 20 college students were given two math tests. After the first test, the students were told that there would be a monetary reward for high marks, for both themselves and the student they had been paired with. They were then told that the other student had already sat the second test and improved their score, increasing the pressure. They were also told they’d be videotaped, and their performance analyzed by teachers and students. Having thus upped the stakes considerably, half the students were given 10 minutes to write down any concerns they had about the test, while the other half were given 10 minutes to sit quietly.

Under this pressure, the students who sat quietly did 12% worse on the second test. However, those who wrote about their fears improved by 5%. In a subsequent experiment, those who wrote about an unrelated, unemotional event did as badly as the control students (a drop of 7% this time, vs a 4% gain for the expressive writing group). In other words, it’s not enough simply to write; you need to be expressing your worries.

Moving out of the laboratory, the researchers then replicated their experiment in a 9th-grade classroom, in two studies involving 51 and 55 students sitting a biology exam. The students were scored for test anxiety six weeks before the exam. The control students were told to write about a topic that wouldn’t be covered in the exam (this being a common focus of thought prior to an exam). It was found that those who scored high in test anxiety performed poorly in the control condition, but performed at the level of those low in test anxiety when in the expressive writing condition (improving their own performance by nearly a grade point). Those who were low in test anxiety performed at the same level regardless of what they wrote about prior to the exam.

One of the researchers, Sian Beilock, recently published a book on these matters: Choke: What the Secrets of the Brain Reveal About Getting It Right When You Have To.

When stroke or brain injury damages a part of the brain controlling movement or sensation or language, other parts of the brain can learn to compensate for this damage. It’s been thought that this is a case of one region taking over the lost function. Two new studies show us the story is not so simple, and help us understand the limits of this plasticity.

In the first study, six stroke patients who had lost partial function in their prefrontal cortex, and six controls, were briefly shown a series of pictures to test the ability to remember images for a brief time (visual working memory) while electrodes recorded their EEGs. When the images were presented in the visual field processed by the damaged hemisphere, the intact prefrontal cortex (that is, the one in the hemisphere not directly receiving that visual input) responded within 300 to 600 milliseconds.

Visual working memory involves a network of brain regions, of which the prefrontal cortex is one important element, and the basal ganglia, deep within the brain, are another. In the second study, the researchers extended the experiment to patients with damage not only to the prefrontal cortex, but also to the basal ganglia. Those with basal ganglia damage had problems with visual working memory no matter which part of the visual field was shown the image.

In other words, basal ganglia lesions caused a broader network deficit, while prefrontal cortex lesions resulted in a more limited, and recoverable, deficit. The findings help us understand the different roles these brain regions play in attention, and emphasize that memory and attention are supported by networks. They also show us that the plasticity compensating for brain damage is more dynamic and flexible than we realized, with intact regions stepping in on a case-by-case basis, very quickly, but only when the usual region fails.

Because people with damage to their hippocampus are sometimes impaired at remembering spatial information even over extremely short periods of time, it has been thought that the hippocampus is crucial for spatial information irrespective of whether the task is a working memory or a long-term memory task. This is in contrast to other types of information. In general, the hippocampus (and related structures in the mediotemporal lobe) is assumed to be involved in long-term memory, not working memory.

However, a new study involving four patients with damage to their mediotemporal lobes has found that they were perfectly capable of remembering for one second the relative positions of three or fewer objects on a table — but incapable of remembering more. That is, as soon as the limits of working memory were reached, their performance collapsed. It appears, therefore, that there is indeed a fundamental distinction between working memory and long-term memory across the board, including the area of spatial information and spatial-object relations.

The findings also underscore how little working memory is really capable of on its own (although absolutely vital for what it does!) — in real life, long-term memory and working memory work in tandem.

Following on from earlier research suggesting that simply talking helps keep your mind sharp at all ages, a new study involving 192 undergraduates indicates that the type of talking makes a difference. Engaging in brief (10 minute) conversations in which participants were simply instructed to get to know another person resulted in boosts to their executive function (the processes involved in working memory, planning, decision-making, and so on). However when participants engaged in conversations that had a competitive edge, their performance showed no improvement. The improvement was limited to executive function; neither processing speed nor general knowledge was affected.

Further experiments indicated that competitive discussion could boost executive function — if the conversations were structured to allow for interpersonal engagement. The crucial factor seems to be the process of getting into another person’s mind and trying to see things from their point of view (something most of us do naturally in conversation).

The findings also provide support for the social brain hypothesis — that we evolved our larger brains to help us deal with large social groups. They also support earlier speculations by the researcher, that parents and teachers could help children improve their intellectual skills by encouraging them to develop their social skills.

Previous research has indicated that obesity in middle-age is linked to higher risk of cognitive decline and dementia in old age. Now a study of 32 middle-aged adults (40-60) has revealed that although obese, overweight and normal-weight participants all performed equally well on a difficult cognitive task (a working memory task called the 2-Back task), obese individuals displayed significantly lower activation in the right inferior parietal cortex. They also had lower insulin sensitivity than their normal weight and overweight peers (poor insulin sensitivity may ultimately lead to diabetes). Analysis pointed to the impaired insulin sensitivity mediating the relationship between task-related activation in that region and BMI.

This suggests that it is insulin sensitivity that is responsible for the higher risk of cognitive impairment later in life. The good news is that insulin sensitivity can be modified through exercise and diet.

A follow-up study to determine if a 12-week exercise intervention can reverse the differences is planned.

Inflammation in the brain appears to be a key contributor to age-related memory problems, and it may be that this has to do with the dysregulation of microglia that, previous research has shown, occurs with age. These specialized support cells normally produce cytokines when there’s an infection, but with age microglia start to produce them excessively, resulting in the typical behaviors that accompany illness (sleepiness, appetite loss, cognitive deficits and depression).

Now new cell and mouse studies suggest that the flavonoid luteolin, known to have anti-inflammatory properties, apparently has these benefits because it acts directly on the microglial cells to reduce their production of inflammatory cytokines. It was found that although microglia exposed to a bacterial toxin produced inflammatory cytokines that killed neurons, if the microglia were first exposed to luteolin, the neurons lived. Exposing the neurons themselves to luteolin had no effect.

Old mice fed a luteolin-supplemented diet for four weeks did better on a working memory test than old mice on an ordinary diet, and the supplemented diet restored levels of inflammatory cytokines in their brains to those of younger mice.

Luteolin is found in many plants, including carrots, peppers, celery, olive oil, peppermint, rosemary and chamomile.

Confirming earlier research, a study involving 257 older adults (average age 75) has found that a two-minute questionnaire filled out by a close friend or family member is more accurate than standard cognitive tests in detecting early signs of Alzheimer’s.

The AD8 asks questions about changes in everyday activities:

  • Problems with judgment, such as bad financial decisions;
  • Reduced interest in hobbies and other activities;
  • Repeating of questions, stories or statements;
  • Trouble learning how to use a tool or appliance, such as a television remote control or a microwave;
  • Forgetting the month or year;
  • Difficulty handling complicated financial affairs, such as balancing a checkbook;
  • Difficulty remembering appointments; and
  • Consistent problems with thinking and memory.

Problems with two or more of these are grounds for further evaluation. The study found that those with AD8 scores of 2 or more were significantly more likely to have early biomarkers of Alzheimer’s (abnormal Pittsburgh compound B binding and cerebrospinal fluid biomarkers), and that the AD8 was better at detecting early stages of dementia than the MMSE. The AD8 has now been validated in several languages and is used in clinics around the world.
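The AD8 cut-off rule is simple enough to express directly. This sketch is only an illustration of the "two or more changes" rule described above; the item keys are my own shorthand, not the official form's wording.

```python
def ad8_score(answers):
    """answers: dict mapping each of the 8 AD8 items to True (change observed) or False."""
    return sum(answers.values())

def needs_further_evaluation(answers, threshold=2):
    """Two or more reported changes are grounds for further evaluation."""
    return ad8_score(answers) >= threshold
```

So an informant reporting, say, repeated questions and bad financial decisions (score 2) would already flag the person for follow-up, even if every other item is answered "no change".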

A couple of years ago I reported on a finding that walking in the park, and (most surprisingly) simply looking at photos of natural scenes, could improve memory and concentration (see below). Now a new study helps explain why. The study examined brain activity while 12 male participants (average age 22) looked at images of tranquil beach scenes and non-tranquil motorway scenes. On half the presentations they concurrently listened to the same sound associated with both scenes (waves breaking on a beach and traffic moving on a motorway produce a similar sound, perceived as a constant roar).

Intriguingly, the natural, tranquil scenes produced significantly greater effective connectivity between the auditory cortex and medial prefrontal cortex, and between the auditory cortex and posterior cingulate gyrus, temporoparietal cortex and thalamus. It’s of particular interest that this is an example of visual input affecting connectivity of the auditory cortex, in the presence of identical auditory input (which was the focus of the research). But of course the take-home message for us is that the benefits of natural scenes for memory and attention have been supported.

Previous study:

Many of us who work indoors are familiar with the benefits of a walk in the fresh air, but a new study gives new insight into why, and how, it works. In two experiments, researchers found memory performance and attention spans improved by 20% after people spent an hour interacting with nature. The intriguing finding was that this effect was achieved not only by walking in the botanical gardens (versus walking along main streets of Ann Arbor), but also by looking at photos of nature (versus looking at photos of urban settings). The findings are consistent with a theory that natural environments are better at restoring attention abilities, because they provide a more coherent pattern of stimulation that requires less effort, as opposed to urban environments that provide complex and often confusing stimulation that captures attention dramatically and requires directed attention (e.g., to avoid being hit by a car).

Reports on cognitive decline with age have, over the years, come out with two general findings: older adults do significantly worse than younger adults; older adults are just as good as younger adults. Part of the problem is that there are two different approaches to studying this, each with their own specific bias. You can keep testing the same group of people as they get older — the problem with this is that they get more and more practiced, which mitigates the effects of age. Or you can test different groups of people, comparing older with younger — but cohort differences (e.g., educational background) may disadvantage the older generations. There is also argument about when it starts. Some studies suggest we start declining in our 20s, others in our 60s.

One of my favorite cognitive aging researchers has now tried to find the true story using data from the Virginia Cognitive Aging Project involving nearly 3800 adults aged 18 to 97 tested on reasoning, spatial visualization, episodic memory, perceptual speed and vocabulary, with 1616 tested at least twice. This gave a nice pool for both cross-sectional and longitudinal comparison (retesting ranged from 1 to 8 years and averaged 2.5 years).

From this data, Salthouse has estimated the size of practice effects and found them to be as large as or larger than the annual cross-sectional differences, although they varied depending on the task and the participant’s age. In general the practice effect was greater for younger adults, possibly because younger people learn better.

Once the practice-related "bonus points" were removed, age trends were flattened, with much less positive changes occurring at younger ages, and slightly less negative changes occurring at older ages. This suggests that change in cognitive ability over an adult lifetime (ignoring the effects of experience) is smaller than we thought.
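The adjustment Salthouse made can be illustrated with a toy calculation: model the observed longitudinal change as true age-related change plus a practice bonus, then subtract the estimated practice effect before computing the annual rate. The numbers below are invented for illustration only; they are not taken from the study.

```python
def adjusted_annual_change(observed_change, practice_effect, years_between_tests):
    """Remove the retest 'bonus points' and express the result per year.

    observed_change: raw score change between first test and retest
    practice_effect: estimated gain attributable to having taken the test before
    """
    return (observed_change - practice_effect) / years_between_tests

# Invented example: a gain of 1.5 points over 2.5 years looks like improvement,
# but if 1.2 of those points are practice, the true trend is nearly flat.
adjusted_annual_change(1.5, 1.2, 2.5)   # 0.12 points/year
```

The same subtraction turns a small apparent gain in an older participant (say, 0.5 points with the same 1.2-point practice estimate) into a decline, which is exactly the flattening of age trends described above.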

A study involving 65 older adults (59-80), who were very sedentary before the study (reporting less than two episodes of physical activity lasting 30 minutes or more in the previous six months), has found that those who joined a walking group improved their cognitive performance and the connectivity in important brain circuits after a year. However, those who joined a stretching and toning group showed no such improvement. The walking program involved three 40-minute walks at a moderate pace every week. The two affected brain circuits (the default mode network and the fronto-executive network) typically become less connected with age. It is worth emphasizing that the improvement was not evident at the first test, after six months, but only at the second 12-month test.

Interestingly, I noticed in the same journal issue a study into the long-term benefits of dancing for older adults. The study compared physical and cognitive performance of those who had engaged in amateur dancing for many years (average: 16.5 years) and those with no dancing or sporting engagement. The dancing group were overall significantly better than the other group on all tests: posture, balance, reaction time, motor behavior, cognitive performance. However, the best dancers weren’t any better than individuals in the other group; the group difference arose because none of the dancers performed poorly, while many of the other group did.

Neurofibromatosis type 1 (NF1), caused by a mutation in the gene that makes the protein neurofibromin, is the most common single-gene cause of learning disabilities. Mouse research has now revealed that these mutations are associated with higher levels of the inhibitory neurotransmitter GABA in the medial prefrontal cortex. Brain imaging in humans with NF1 similarly showed reduced activity in the prefrontal cortex when performing a working memory task, with the levels of activity correlating with task performance. It seems, therefore, that this type of learning disability is a result of too much GABA in the prefrontal cortex inhibiting the activity of working memory. Potentially, it could be corrected with a drug that normalizes the effect of the excess GABA. The researchers are currently studying the effect of the drug lovastatin on NF1 patients.

A new study explains why variable practice improves your memory of most skills better than practice focused on a single task. The study compared skill learning between those asked to practice one particular challenging arm movement, and those who practiced the movement with other related tasks in a variable practice structure. Using magnetic stimulation applied to different parts of the brain after training (which interferes with memory consolidation), it was found that interference to the dorsolateral prefrontal cortex, but not to the primary motor cortex, affected skill learning for those engaged in variable practice, whereas interference to the motor cortex, but not to the prefrontal cortex, affected learning in those engaged in constant practice.

These findings indicate that variable practice involves working memory (which happens in the prefrontal cortex) rather than motor memory, and that the need to re-engage with the task each time underlies the better learning produced by variable practice (which involves repeatedly switching between tasks). The experiment also helps set a time frame for this consolidation — interference four hours after training had no effect.

A study involving 117 six year old children and 104 eight year old children has found that the ability to preserve information in working memory begins at a much younger age than had previously been thought. Moreover the study revealed that, while any distraction between learning the words and having to recall them hindered recall, having to perform a verbal task was particularly damaging. This suggests that their remembering was based on “phonological rehearsal”, that is, verbally repeating the names of the items to themselves. Consistent with the research suggesting children begin to phonologically rehearse at around 7 years of age, the verbal task hindered the 8 year olds more than the 6 year olds.

While brain training programs can certainly improve your ability to do the task you’re practicing, there has been little evidence that this transfers to other tasks. In particular, the holy grail has been very broad transfer, through improvement in working memory. While there has been some evidence of this in pilot programs for children with ADHD, a new study is the first to show such improvement in older adults using a commercial brain training program.

A study involving 30 healthy adults aged 60 to 89 has demonstrated that ten hours of training on a computer game designed to boost visual perception improved perceptual abilities significantly, and also increased the accuracy of their visual working memory to the level of younger adults. There was a direct link between improved performance and changes in brain activity in the visual association cortex.

The computer game was one of those developed by Posit Science. Memory improvement was measured about one week after the end of training. The improvement did not, however, withstand multi-tasking, which is a particular problem for older adults. The participants, half of whom underwent the training, were college educated. The training challenged players to discriminate between two different shapes of sine waves (S-shaped patterns) moving across the screen. The memory test (which was performed before and after training) involved watching dots move across the screen, followed by a short delay and then re-testing for the memory of the exact direction the dots had moved.

A number of studies have demonstrated that negative stereotypes (such as “women are bad at math”) can impair performance in tests. Now a new study shows that this effect extends to learning. The study involved learning to recognize target Chinese characters among sets of two or four. Women who were reminded of the negative stereotypes involving women's math and visual processing ability failed to improve at this search task, while women who were not reminded of the stereotype got faster with practice. When participants were later asked to choose which of two colored squares, imprinted with irrelevant Chinese characters, was more saturated, those in the control group were slower to respond when one of the characters had been a target. However, those trained under stereotype threat showed no such effect, indicating that they had not learned to automatically attend to a target. It’s suggested that the women in the stereotype threat group tried too hard to overcome the negative stereotype, expending more effort but in an unproductive manner.

There are two problems here, it seems. The first is that people under stereotype threat have more invested in disproving the stereotype, and their efforts may be counterproductive. The second, that they are distracted by the stereotype (which uses up some of their precious working memory).

While studies have demonstrated that listening to music before doing a task can improve performance on that task, chiefly through its effect on mood, there has been little research into the effects of background music while doing the task. A new study had participants recall a list of 8 consonants in a specific order in the presence of five sound environments: quiet, liked music, disliked music, changing-state (a sequence of random digits such as "4, 7, 1, 6") and steady-state ("3, 3, 3"). The most accurate recall occurred when participants performed the task in the quiet and steady-state environments. The level of recall was similar for the changing-state and music backgrounds.
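Serial recall in tasks like this is typically scored by position: an item counts only if it is recalled in its original slot. A minimal sketch, assuming strict positional scoring (the report doesn't specify the exact scoring rule used):

```python
def serial_recall_score(presented, recalled):
    """Count items recalled in exactly their presented position."""
    return sum(p == r for p, r in zip(presented, recalled))

presented = list("BKTRQFNM")   # a list of 8 consonants
recalled  = list("BKRTQFN_")   # two items transposed, last item missing
serial_recall_score(presented, recalled)   # 5
```

Under this rule a transposition costs two points, which is why tasks like this are so sensitive to the order-disrupting effects of changing-state sound.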

Mind you, this task (recall of random items in order) is probably particularly sensitive to the distracting effects of this sort of acoustical variation in the environment. Different tasks are likely to be differentially affected by background music, and I’d also suggest that the familiarity of the music, and possibly its predictability, also influence its impact. Personally, I am very aware of the effect of music on my concentration, and vary the music, or don’t play at all, depending on what I’m doing and my state of mind. I hope we’ll see more research into these variables.

[1683] Perham N, Vizard J. Can preference for background music mediate the irrelevant sound effect?. Applied Cognitive Psychology [Internet]. 2010 ;9999(9999):n/a - n/a. Available from: http://dx.doi.org/10.1002/acp.1731


Another study has come out showing that older adults with low levels of vitamin D are more likely to have cognitive problems. The six-year study followed 858 adults who were age 65 or older at the beginning of the study. Those who were severely deficient in vitamin D were 60% more likely to have substantial cognitive decline, and 31% more likely to have specific declines in executive function, although there was no association with attention. Vitamin D deficiency is common in older adults in the United States and Europe (estimates range from 40% to 100%!), and has been implicated in a wide variety of physical diseases.

‘Working memory’ is thought to consist of three components: one concerned with auditory-verbal processing, one with visual-spatial processing, and a central executive that controls both. It has been hypothesized that the relationships between the components changes as children develop. Very young children are more reliant on visuospatial processing, but later the auditory-verbal module becomes more dominant. It has also been found that the two sensory modules are not strongly associated in younger (5-8) American children, but are strongly associated in older children (9-12). The same study found that this pattern was also found in Laotian children, but not in children from the Congo, none of whom showed a strong association between visual and auditory working memory. Now a new study has found that Ugandan children showed greater dominance of the auditory-verbal module, particularly among the older children (8 ½ +); however, the visuospatial module was dominant among Senegalese children, both younger and older. It is hypothesized that the cultural differences are a product of literacy training — school enrolment was much less consistent among the Senegalese. But there may also be a link to nutritional status.

Consistent with studies showing that gender stereotypes can worsen math performance in females, a year-long study involving 17 first- and second-grade teachers and their 52 boy and 65 girl students has found that boys' math performance was not related to their (female) teacher's math anxiety while girls' math achievement was. Early elementary school teachers in the United States are almost exclusively female. Math achievement was unrelated to teacher math anxiety in both boys and girls at the beginning of the school year. Moreover, achievement was negatively associated with belief in gender stereotypes. Girls who endorsed the belief that boys are better at math than girls scored six points lower in math achievement than did boys or girls who had not developed a belief in the stereotype (102 versus 108). Research has found that elementary education majors have the highest rate of mathematics anxiety of any college major.

[1450] Beilock SL, Gunderson EA, Ramirez G, Levine SC. Female teachers’ math anxiety affects girls’ math achievement. Proceedings of the National Academy of Sciences [Internet]. 2010 ;107(5):1860 - 1863. Available from: http://www.pnas.org/content/107/5/1860.abstract

Previous research has shown that older adults are more likely to incorrectly repeat an action in situations where a prospective memory task has become habitual — for example, taking more medication because they’ve forgotten they’ve already taken it. A new study has found that doing something unusual at the same time helps seniors remember having done the task. In the study, older adults told to put a hand on their heads whenever they made a particular response, reduced the level of repetition errors to that of younger adults. It’s suggested that doing something unusual, like knocking on wood or patting yourself on the head, while taking a daily dose of medicine may be an effective strategy to help seniors remember whether they've already taken their daily medications.

It’s now well established that older brains tend to find it harder to filter out irrelevant information. But now a new study suggests that that isn’t all bad. The study compared the performance of 24 younger adults (17-29) and 24 older adults (60-73) on two memory tasks separated by a 10-minute break. In the first task, they were shown pictures overlapped by irrelevant words, told to ignore the words and concentrate on the pictures only, and to respond every time the same picture appeared twice in a row. The second task required them to remember how the pictures and words were paired together in the first task. The older adults showed a 30% advantage over younger adults in their memory for the preserved pairs. It’s suggested that older adults encode extraneous co-occurrences in the environment and transfer this knowledge to subsequent tasks, improving their ability to make decisions.

[276] Campbell KL, Hasher L, Thomas RC. Hyper-binding: a unique age effect. Psychological Science: A Journal of the American Psychological Society / APS [Internet]. 2010 ;21(3):399 - 405. Available from: http://www.ncbi.nlm.nih.gov/pubmed/20424077

Full text available at http://pss.sagepub.com/content/early/2010/01/15/0956797609359910.full

Using a large data set of 241 brain-lesion patients, researchers have mapped the location of each patient's lesion and correlated that with each patient's IQ score to produce a map of the brain regions that influence intelligence. Consistent with other recent findings, and with the theory that general intelligence depends on the brain's ability to integrate several different kinds of processing, they found general intelligence was determined by a distributed network in the frontal and parietal cortex, critically including white matter association tracts and frontopolar cortex. They suggest that general intelligence draws on connections between regions that integrate verbal, visuospatial, working memory, and executive processes.

[173] Gläscher J, Rudrauf D, Colom R, Paul LK, Tranel D, Damasio H, Adolphs R. Distributed neural system for general intelligence revealed by lesion mapping. Proceedings of the National Academy of Sciences [Internet]. 2010 ;107(10):4705 - 4709. Available from: http://www.pnas.org/content/107/10/4705.abstract

A study of 80 pairs of middle-income Canadian mothers and their year-old babies has revealed that children of mothers who answered their children's requests for help quickly and accurately; talked about their children's preferences, thoughts, and memories during play; and encouraged successful strategies to help solve difficult problems, performed better at a year and a half and 2 years on tasks that call for executive skills, compared to children whose mothers didn't use these techniques.

A study involving over 1000 older men and women (60-75) with type-2 diabetes has found that those with higher levels of the stress hormone cortisol in their blood are more likely to have experienced cognitive decline. Higher fasting cortisol levels were associated with greater estimated cognitive decline in general intelligence, working memory and processing speed. This was independent of mood, education, metabolic variables and cardiovascular disease. Strategies aimed at lowering stress levels may be helpful for older diabetics.

Mindfulness Training had a positive effect on both working memory capacity and mood in a group of Marine reservists during the high-stress pre-deployment interval. While those who weren’t given the 8-week MT program, as well as those who spent little time engaging in mindfulness exercises, showed greater negative mood and decreased working memory capacity over the eight weeks, those who recorded high practice time showed increased capacity and decreased negative mood. A civilian control group showed no change in working memory capacity over the period. The program, called Mindfulness-based Mind Fitness Training (MMFT™), blended mindfulness skills training with concrete applications for the operational environment and information and skills about stress, trauma and resilience in the body. The researchers suggest that mindfulness training may help anyone who must maintain peak performance in the face of extremely stressful circumstances.

A new study challenges the popular theory that expertise is simply a product of tens of thousands of hours of deliberate practice. Not that anyone is claiming that this practice isn’t necessary — but it may not be sufficient. A study looking at pianists’ ability to sight-read music reveals working memory capacity helps sight-reading regardless of how much someone has practiced.

The study involved 57 volunteers who had played piano for an average of 18.6 years (range from one to 57 years). Their estimated hours of overall practice ranged from 260 to 31,096 (average: 5806), and hours of sight-reading practice ranged from zero to 9,048 (average: 1487 hours). Statistical analysis revealed that although hours of practice was the most important factor, nevertheless, working memory capacity did, independently, account for a small but significant amount of the variance between individuals.

It is interesting that not only did WMC have an effect independent of hours of practice, but hours of practice apparently had no effect on WMC — although the study was too small to tell whether a lot of practice at an early age might have affected WMC (previous research has indicated that music training can increase IQ in children).

The study is also too small to properly judge the effects of the 10,000 hours of deliberate practice claimed necessary for expertise: the researchers did not report the number of participants at that level, but the numbers suggest it was low.

It should also be noted that an earlier study involving 52 accomplished pianists found no effect of WMC on sight-reading ability (but did find a related effect: the ability to tap two fingers rapidly in alternation, and to press a computer key quickly in response to visual and acoustic cues, was unrelated to practice but correlated positively with sight-reading skill).

Nevertheless, the findings are interesting, and do agree with what I imagine is the ‘commonsense’ view: yes, becoming an expert is all about the hours of effective practice you put in, but there are intellectual qualities that also matter. The question is: do they matter once you’ve put in the requisite hours of good practice?

A study in which 60 young adult mice were trained on a series of maze exercises designed to challenge and improve their working memory ability (in terms of retaining and using current spatial information), has found that the mice improved their proficiency on a wide range of cognitive tests, and moreover better retained their cognitive abilities into old age.

Visual working memory, which can only hold three or four objects at a time, is thought to be based on synchronized brain activity across a network of brain regions. Now a new study has allowed us to get a better picture of how exactly that works. Both the maintenance and the contents of working memory were connected to brief synchronizations of neural activity in alpha, beta and gamma brainwaves across frontoparietal regions that underlie executive and attentional functions, and visual areas in the occipital lobe. Most interestingly, individual VWM capacity could be predicted by synchrony in a network centered on the intraparietal sulcus.

[458] Palva MJ, Monto S, Kulashekhar S, Palva S. Neuronal synchrony reveals working memory networks and predicts individual memory capacity. Proceedings of the National Academy of Sciences [Internet]. 2010 ;107(16):7580 - 7585. Available from: http://www.pnas.org/content/107/16/7580.abstract

Older news items (pre-2010) brought over from the old website

More light shed on distinction between long and short-term memory

The once clear-cut distinction between long- and short-term memory has increasingly come under fire in recent years. A new study involving patients with a specific form of epilepsy called 'temporal lobe epilepsy with bilateral hippocampal sclerosis' has now clarified the distinction. The patients, who all had severely compromised hippocampi, were asked to try to memorize photographic images depicting normal scenes. Their memory was tested and brain activity recorded after five seconds or 60 minutes. As expected, the patients could not remember the images after 60 minutes, but could distinguish seen-before images from new ones at five seconds. However, their memory was poor when asked to recall details about the images. Brain activity showed that short-term memory for details required the coordinated activity of a network of visual and temporal brain areas, whereas standard short-term memory drew on a different network, involving frontal and parietal regions, and independent of the hippocampus.

[996] Cashdollar N, Malecki U, Rugg-Gunn FJ, Duncan JS, Lavie N, Duzel E. Hippocampus-dependent and -independent theta-networks of active maintenance. Proceedings of the National Academy of Sciences [Internet]. 2009 ;106(48):20493 - 20498. Available from: http://www.pnas.org/content/106/48/20493.abstract

http://www.eurekalert.org/pub_releases/2009-11/ucl-tal110909.php

Short stressful events may improve working memory

We know that chronic stress has a detrimental effect on learning and memory, but a new rat study shows how acute stress (a short, sharp event) can produce a beneficial effect. The rats, trained to a level of 60-70% accuracy on a maze, were put through a 20-minute forced swim before being run through the maze again. Those who experienced this stressful event were better at running the maze 4 hours later, and a day later, than those not forced through the stressful event. It appears that the stress hormone corticosterone (cortisol in humans) increases transmission of the neurotransmitter glutamate in the prefrontal cortex and improves working memory. It also appears that chronic stress suppresses the transmission of glutamate in the prefrontal cortex of male rodents, while estrogen receptors in female rodents make them more resilient to chronic stress than male rats.

[1157] Yuen EY, Liu W, Karatsoreos IN, Feng J, McEwen BS, Yan Z. Acute stress enhances glutamatergic transmission in prefrontal cortex and facilitates working memory. Proceedings of the National Academy of Sciences of the United States of America [Internet]. 2009 ;106(33):14075 - 14079. Available from: http://www.ncbi.nlm.nih.gov/pubmed/19666502

http://www.eurekalert.org/pub_releases/2009-07/uab-sse072309.php

Individual differences in working memory capacity depend on two factors

A new computer model adds to our understanding of working memory, by showing that working memory can be increased by the action of the prefrontal cortex in reinforcing activity in the parietal cortex (where the information is temporarily stored). The idea is that the prefrontal cortex sends out a brief stimulus to the parietal cortex that generates a reverberating activation in a small subpopulation of neurons, while inhibitory interactions with neurons further away prevents activation of the entire network. This lateral inhibition is also responsible for limiting the mnemonic capacity of the parietal network (i.e. provides the limit on your working memory capacity). The model has received confirmatory evidence from an imaging study involving 25 volunteers. It was found that individual differences in performance on a short-term visual memory task were correlated with the degree to which the dorsolateral prefrontal cortex was activated and its interconnection with the parietal cortex. In other words, your working memory capacity is determined by both storage capacity (in the posterior parietal cortex) and prefrontal top-down control. The findings may help in the development of ways to improve working memory capacity, particularly when working memory is damaged.

[441] Edin F, Klingberg T, Johansson P, McNab F, Tegner J, Compte A. Mechanism for top-down control of working memory capacity. Proceedings of the National Academy of Sciences [Internet]. 2009 ;106(16):6802 - 6807. Available from: http://www.pnas.org/content/early/2009/03/31/0901894106.abstract

http://www.eurekalert.org/pub_releases/2009-04/i-id-aot040109.php

Some short-term memories die suddenly, no fading

We don’t remember everything; the idea of memory as a video faithfully recording every aspect of everything we have ever experienced is a myth. Every day we look at the world and hold much of what we see for no more than a few seconds before discarding it as no longer needed. Until now it was thought that these fleeting visual memories faded away, gradually becoming more imprecise. Now it seems that such memories remain quite accurate as long as they exist (about 4 seconds), and then simply vanish. The study involved testing memory for shapes and colors in 12 adults, and it was found that the memory for a shape or color was either there or not there – the answers either correct or random guesses. The probability of remembering correctly decreased between 4 and 10 seconds.

[941] Zhang W, Luck SJ. Sudden death and gradual decay in visual working memory. Psychological Science: A Journal of the American Psychological Society / APS [Internet]. 2009 ;20(4):423 - 428. Available from: http://www.ncbi.nlm.nih.gov/pubmed/19320861

http://www.eurekalert.org/pub_releases/2009-04/uoc--ssm042809.php
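The all-or-none pattern described above can be illustrated with a toy simulation (my own hypothetical sketch, not the study’s actual analysis): responses are either accurate-but-noisy reports or pure random guesses, and forgetting shows up as a drop in the proportion of accurate reports, not as a loss of precision in the reports that remain.

```python
import random

def simulate_color_reports(n_trials, p_in_memory, noise_sd=15.0, seed=1):
    """Simulate color-report errors (in degrees on a color wheel) under an
    all-or-none model: with probability p_in_memory the color is still in
    memory and the response is the true color plus small Gaussian noise;
    otherwise the response is a uniform random guess around the wheel."""
    rng = random.Random(seed)
    errors = []
    for _ in range(n_trials):
        if rng.random() < p_in_memory:
            err = rng.gauss(0.0, noise_sd)    # accurate but slightly noisy report
        else:
            err = rng.uniform(-180.0, 180.0)  # random guess
        errors.append((err + 180.0) % 360.0 - 180.0)  # wrap to [-180, 180)
    return errors

def close_fraction(errors, window=45.0):
    """Fraction of responses within +/- window degrees of the true color."""
    return sum(abs(e) <= window for e in errors) / len(errors)

# Shorter vs longer retention interval: the memory is available less often,
# but surviving reports stay equally precise (same noise_sd in both cases).
early = simulate_color_reports(5000, p_in_memory=0.9)
late = simulate_color_reports(5000, p_in_memory=0.4)
```

Under this model, lengthening the delay lowers `p_in_memory` (the numbers here are illustrative), so the overall fraction of near-correct responses falls even though in-memory responses remain accurate — "sudden death" rather than gradual blurring.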

Where visual short-term memory occurs

Working memory used to be thought of as a separate ‘store’, and now tends to be regarded more as a process, a state of mind. Such a conception suggests that it may occur in the same regions of the brain as long-term memory, but in a pattern of activity that is somehow different from LTM. However, there has been little evidence for that so far. Now a new study has found that information in WM may indeed be stored via sustained, but low, activity in sensory areas. The study involved volunteers being shown an image for one second and instructed to remember either the color or the orientation of the image. After then looking at a blank screen for 10 seconds, they were shown another image and asked whether it was the identical color/orientation as the first image. Brain activity in the primary visual cortex was scanned during the 10 second delay, revealing that areas normally involved in processing color and orientation were active during that time, and that the pattern only contained the targeted information (color or orientation).

[1032] Serences JT, Ester EF, Vogel EK, Awh E. Stimulus-Specific Delay Activity in Human Primary Visual Cortex. Psychological Science [Internet]. 2009 ;20(2):207 - 214. Available from: http://dx.doi.org/10.1111/j.1467-9280.2009.02276.x

http://www.eurekalert.org/pub_releases/2009-02/afps-sih022009.php
http://www.eurekalert.org/pub_releases/2009-02/uoo-dsm022009.php

The finding is consistent with that of another study published this month, in which participants were shown two examples of simple striped patterns at different orientations and told to hold either one or the other of the orientations in their mind while being scanned. Orientation is one of the first and most basic pieces of visual information coded and processed by the brain. Using a new decoding technique, researchers were able to predict with 80% accuracy which of the two orientations was being remembered 11 seconds after seeing a stimulus, from the activity patterns in the visual areas. This was true even when the overall level of activity in these visual areas was very weak, no different than looking at a blank screen.

[652] Harrison SA, Tong F. Decoding reveals the contents of visual working memory in early visual areas. Nature [Internet]. 2009 ;458(7238):632 - 635. Available from: http://dx.doi.org/10.1038/nature07832

http://www.eurekalert.org/pub_releases/2009-02/vu-edi021709.php
http://www.physorg.com/news154186809.html

Even toddlers can ‘chunk' information for better remembering

We all know it’s easier to remember a long number (say, a phone number) when it’s broken into chunks. Now a study has found that we don’t need to be taught this; it appears to come naturally to us. The study showed that 14-month-old children could track only three hidden objects at once, in the absence of any grouping cues, demonstrating the standard limit of working memory. With categorical or spatial cues, however, the children could remember more: for example, when four toys consisted of two groups of two familiar objects (cats and cars), or when six identical orange balls were grouped in three groups of two.

Feigenson, L. & Halberda, J. 2008. Conceptual knowledge increases infants' memory capacity. Proceedings of the National Academy of Sciences, 105 (29), 9926-9930.

http://www.eurekalert.org/pub_releases/2008-07/jhu-etg071008.php

Full text available at http://www.pnas.org/content/105/29/9926.abstract?sid=c01302b6-cd8e-4072-842c-7c6fcd40706f

Brain-training to improve working memory boosts fluid intelligence

General intelligence is often separated into "fluid" and "crystallized" components, of which fluid intelligence is considered more reflective of “pure” intelligence, and largely resistant to training and learning effects. However, in a new study in which participants were given a series of training exercises designed to improve their working memory, fluid intelligence was found to have significantly improved, with the amount of improvement increasing with time spent training. The small study contradicts decades of research showing that improving on one kind of cognitive task does not improve performance on other kinds, so has been regarded with some skepticism by other researchers. More research is definitely needed, but the memory task did differ from previous studies, engaging executive functions such as those that inhibit irrelevant items, monitor performance, manage two tasks simultaneously, and update memory.

Jaeggi, S.M., Buschkuehl, M., Jonides, J. & Perrig, W.J. 2008. Improving fluid intelligence with training on working memory. PNAS, 105 (19), 6829-6833.

http://www.physorg.com/news128699895.html
http://www.sciam.com/article.cfm?id=study-shows-brain-power-can-be-bolstered

Working memory has a fixed number of 'slots'

A study that showed volunteers a pattern of colored squares for a tenth of a second, and then asked them to recall the color of one of the squares by clicking on a color wheel, has found that working memory acts like a high-resolution camera, retaining three or four features in high detail. Unlike a digital camera, however, it appears that you can’t increase the number of images you can store by lowering the resolution. The resolution appears to be constant for a given individual. However, individuals do differ in the resolution of each feature and the number of features that can be stored.

[278] Zhang W, Luck SJ. Discrete fixed-resolution representations in visual working memory. Nature [Internet]. 2008 ;453(7192):233 - 235. Available from: http://dx.doi.org/10.1038/nature06860

http://www.physorg.com/news126432902.html
http://www.eurekalert.org/pub_releases/2008-04/uoc--wmh040208.php

And another study of working memory has attempted to overcome the difficulties involved in measuring a person’s working memory capacity (ensuring that no ‘chunking’ of information takes place), and concluded that people do indeed have a fixed number of ‘slots’ in their working memory. In the study, participants were shown two, five or eight small, scattered, different-colored squares in an array, which was then replaced by an array of the same squares without the colors, after which the participant was shown a single color in one location and asked to indicate whether the color in that spot had changed from the original array.

[437] Rouder JN, Morey RD, Cowan N, Zwilling CE, Morey CC, Pratte MS. An assessment of fixed-capacity models of visual working memory. Proceedings of the National Academy of Sciences [Internet]. 2008 ;105(16):5975 - 5979. Available from: http://www.pnas.org/content/105/16/5975.abstract

http://www.eurekalert.org/pub_releases/2008-04/uom-mpd042308.php

Impressive feats in visual memory

In light of all the recent experiments emphasizing how small our short-term visual memory is, it’s comforting to be reminded that, nevertheless, we have an amazing memory for pictures — in the right circumstances. Those circumstances include looking at images of familiar objects, as opposed to abstract artworks, and being motivated to do well (the best-scoring participant was given a cash prize). In the study, 14 people aged 18 to 40 viewed 2,500 images, one at a time, for a few seconds. Afterwards, they were shown pairs of images and asked to select the exact image they had seen earlier. The previously viewed item could be paired with either an object from a novel category, an object of the same basic-level category, or the same object in a different state or pose. Stunningly, participants on average chose the correct image 92%, 88% and 87% of the time, in each of the three pairing categories respectively.

[870] Brady TF, Konkle T, Alvarez GA, Oliva A. Visual long-term memory has a massive storage capacity for object details. Proceedings of the National Academy of Sciences [Internet]. 2008 ;105(38):14325 - 14329. Available from: http://www.pnas.org/content/early/2008/09/10/0803390105

Full text available at http://www.pnas.org/content/105/38/14325.abstract

Children's under-achievement could be down to poor working memory

A survey of over three thousand children has found that 10% of school children across all age ranges suffer from poor working memory that seriously affects their learning. However, poor working memory is rarely identified by teachers, who often describe children with this problem as inattentive or as having lower levels of intelligence. The researchers have developed a new tool, a combination of a checklist and computer programme called the Working Memory Rating Scale, that enables teachers to identify and assess children's memory capacity in the classroom from as early as four years old. The tool has already been piloted successfully in 35 schools across the UK, and is now widely available. It has been translated into ten foreign languages.

http://www.physorg.com/news123404466.html 
http://www.eurekalert.org/pub_releases/2008-02/du-cuc022608.php

More on how short-term memory works

It’s been established that visual working memory is severely limited — that, on average, we can only be aware of about four objects at one time. A new study explored the idea that this capacity might be affected by complexity, that is, that we can think about fewer complex objects than simple objects. It found that complexity did not affect memory capacity. It also found that some people have clearer memories of the objects than other people, and that this is not related to how many items they can remember. That is, a high IQ is associated with the ability to hold more items in working memory, but not with the clarity of those items.

[426] Awh E, Barton B, Vogel EK. Visual working memory represents a fixed number of items regardless of complexity. Psychological Science: A Journal of the American Psychological Society / APS [Internet]. 2007 ;18(7):622 - 628. Available from: http://www.ncbi.nlm.nih.gov/pubmed/17614871

http://www.eurekalert.org/pub_releases/2007-07/uoo-htb071107.php
http://www.physorg.com/news103472118.html

Executive function as important as IQ for math success

A study of 141 preschoolers from low-income homes has found that a child whose IQ and executive functioning were both above average was three times more likely to succeed in math than a child who simply had a high IQ. The parts of executive function that appear to be particularly linked to math ability in preschoolers are working memory and inhibitory control. In this context, working memory may be thought of as the ability to keep information or rules in mind while performing mental tasks. Inhibitory control is the ability to halt automatic impulses and focus on the problem at hand. Inhibitory control was also important for reading ability. The finding offers the hope that training to improve executive function will improve academic performance.

Blair, C. & Razza, R.P. 2007. Relating Effortful Control, Executive Function, and False Belief Understanding to Emerging Math and Literacy Ability in Kindergarten. Child Development, 78 (2), 647–663.

New research shows why too much memory may be a bad thing

People who are able to easily and accurately recall historical dates or long-ago events may have a harder time with word recall or remembering the day's current events. A new mouse study suggests why. Neurogenesis has been thought of as a wholly good thing — surely more neurons can only help — but this study found that stopping neurogenesis in the hippocampus improved working memory. Working memory is highly sensitive to interference from information previously stored in memory, so it may be that having too much information hinders performance on everyday working memory tasks.

Saxe, M.D. et al. 2007. Paradoxical influence of hippocampal neurogenesis on working memory. Proceedings of the National Academy of Sciences, 104 (11), 4642-4646.

http://www.physorg.com/news94384934.html
http://www.eurekalert.org/pub_releases/2007-03/cumc-nrs032807.php

Implicit stereotypes and gender identification may affect female math performance

Relatedly, another study has found that women enrolled in an introductory calculus course who held strong implicit gender stereotypes (for example, automatically associating "male" more than "female" with math ability and math professions), and who were likely to identify themselves as feminine, performed worse than their female counterparts who did not hold such stereotypes and who were less likely to identify with traditionally feminine characteristics. Strikingly, a majority of the women participating in the study explicitly disagreed with the idea that men have superior math ability, suggesting that even when consciously disavowing stereotypes, female math students are still susceptible to negative perceptions of their ability.

Kiefer, A.K., & Sekaquaptewa, D. 2007. Implicit stereotypes, gender identification, and math performance: a prospective study of female math students. Psychological Science, 18(1), 13-18.

http://www.eurekalert.org/pub_releases/2007-01/afps-isa012407.php

Reducing the racial achievement gap

And staying with the same theme, a study that came out six months ago, and was recently reviewed on the excellent new Scientific American Mind Matters blog, revealed that a single, 15-minute intervention erased almost half the racial achievement gap between African American and white students. The intervention involved writing a brief paragraph about which value, from a list of values, was most important to them and why. The intervention improved subsequent academic performance for some 70% of the African American students, but none of the Caucasians. The study was repeated the following year with the same results. It is thought that the effect of the intervention was to protect against the negative stereotypes regarding the intelligence and academic capabilities of African Americans.

[1082] Cohen GL, Garcia J, Apfel N, Master A. Reducing the Racial Achievement Gap: A Social-Psychological Intervention. Science [Internet]. 2006 ;313(5791):1307 - 1310. Available from: http://www.sciencemag.org/cgi/content/abstract/313/5791/1307

Highly accomplished people more prone to failure than others when under stress

One important difference between those who do well academically and those who don’t is often working memory capacity. Those with a high working memory capacity find it easier to read, understand and reason than those with a smaller capacity. However, a new study suggests there is a downside. Such people tend to rely heavily on their abundant supply of working memory, and are therefore disadvantaged when challenged to solve difficult problems, such as mathematical ones, under pressure — because the distraction caused by stress consumes their working memory. They then fall back on the less accurate short-cuts that people with smaller supplies of working memory tend to use, such as guessing and estimation. Such methods are not made any worse by working under pressure. In the study, involving 100 undergraduates, the performance of students with strong working memory declined to the same level as that of students with more limited working memory when put under pressure. Those with more limited working memory performed as well under added pressure as they did without the stress.

The findings were presented February 17 at the annual meeting of the American Association for the Advancement of Science.

http://www.eurekalert.org/pub_releases/2007-02/uoc-hap021607.php

Common gene version optimizes thinking but carries a risk

On the same subject, another study has found that the most common version of DARPP-32, a gene that shapes and controls a circuit between the striatum and prefrontal cortex, optimizes information filtering by the prefrontal cortex, thus improving working memory capacity and executive control (and thus, intelligence). However, the same version was also more prevalent among people who developed schizophrenia, suggesting that a beneficial gene variant may translate into a disadvantage if the prefrontal cortex is impaired. In other words, one of the things that make humans more intelligent as a species may also make us more vulnerable to schizophrenia.

[864] Kolachana B, Kleinman JE, Weinberger DR, Meyer-Lindenberg A, Straub RE, Lipska BK, Verchinski BA, Goldberg T, Callicott JH, Egan MF, et al. Genetic evidence implicating DARPP-32 in human frontostriatal structure, function, and cognition. Journal of Clinical Investigation [Internet]. 2007 ;117(3):672 - 682. Available from: http://www.jci.org/articles/view/30413?search%5Babstract_text%5D=&search%5Barticle_text%5D=&search%5Bauthors_text%5D=&search%5Bfpage%5D=672&search%5Bissue%5D=&search%5Btitle_text%5D=&search%5Bvolume%5D=117

http://www.sciencedaily.com/releases/2007/02/070208230059.htm
http://www.eurekalert.org/pub_releases/2007-02/niom-cgv020707.php

People remember prices more easily if they have fewer syllables

The phonological loop — an important component of working memory — can only hold 1.5 to 2 seconds of spoken information. For that reason, faster speakers have an advantage over slower speakers. Now a consumer study reveals that every extra syllable in a product's price decreases its chances of being remembered by 20%. Thus, people who shorten the number of syllables (e.g. read 5,325 as 'five three two five' as opposed to 'five thousand three hundred and twenty-five') have better recall. However, since we store information both verbally and visually, it’s also the case that unusual looking prices, such as $8.88, are recalled better than typical looking prices.

Vanhuele, M., Laurent, G., Dreze, X. & Calder, B. 2006. Consumers' Immediate Memory for Prices. Journal of Consumer Research, 33(2), 163-72.

http://www.sciencedaily.com/releases/2006/06/060623001231.htm
http://www.eurekalert.org/pub_releases/2006-06/uocp-prp062206.php
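Taken literally, a 20% drop per extra syllable compounds multiplicatively. A rough back-of-envelope sketch (my illustration of the stated statistic, not the paper’s model):

```python
def relative_recall(extra_syllables, drop=0.20):
    """Relative chance of recall after compounding a fixed per-syllable drop.
    Illustrative only: assumes the 20% figure applies multiplicatively."""
    return (1.0 - drop) ** extra_syllables

# 'five three two five' (4 syllables) vs
# 'five thousand three hundred and twenty-five' (10 syllables): 6 extra
print(round(relative_recall(6), 2))  # → 0.26
```

On that reading, the long-form price would be remembered only about a quarter as often as the digit-by-digit version — which is why the difference in phrasing matters so much.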

New view of hippocampus’s role in memory

Amnesiacs have overturned the established view of the hippocampus, and of the difference between long- and short-term memories. It appears the hippocampus is just as important for retrieving certain types of short-term memories as it is for long-term memories. The critical thing is not the age of the memory, but the requirement to form connections between pieces of information to create a coherent episode. The researchers suggest that, for the brain, the distinction between 'long-term' and 'short-term' memory is less relevant than that between ‘feature’ memory and ‘conjunction’ memory — the ability to remember specific things versus how they are related. The hippocampus may be thought of as the brain's switchboard, piecing individual bits of information together in context.

[817] Olson IR, Page K, Moore KS, Chatterjee A, Verfaellie M. Working Memory for Conjunctions Relies on the Medial Temporal Lobe. J. Neurosci. [Internet]. 2006 ;26(17):4596 - 4601. Available from: http://www.jneurosci.org/cgi/content/abstract/26/17/4596

http://origin.www.upenn.edu/pennnews/article.htm?id=963
http://www.eurekalert.org/pub_releases/2006-05/uop-aso053106.php

Learning and working memory

A 3-year research project on Working Memory and Cognition has reached its conclusion. The association between effective language learning and good short-term memory is, it seems, not a causal relationship. It is not that a good short-term memory is a prerequisite for long-term learning; it is that both short-term and long-term memory tasks tap the same ability to create representations of sufficient quality to support the maintenance of several of them at once.
Another finding is that metaphoric language often puts greater stress on working memory and so is harder to process than literal language.
Another study looked at differences between musicians and people who did not have music as an active hobby in their ability to remember series of notes presented in succession on a computer screen. The results show how expertise makes it possible to apparently bypass working memory limits, even when the memory items cannot be grouped into simple categories.

http://www.eurekalert.org/pub_releases/2006-03/uoh-nrd031306.php
http://www.sciencedaily.com/releases/2006/03/060320084440.htm

Discovery disproves simple concept of memory as 'storage space'

The idea of memory “capacity” has become more and more eroded over the years, and now a new technique for measuring brainwaves seems to finally knock the idea on the head. Consistent with recent research suggesting that a crucial problem with aging is a growing inability to ignore distracting information, this new study shows that visual working memory depends on your ability to filter out irrelevant information. Individuals have long been characterized as having a “high” working memory capacity or a “low” one — the assumption has been that these people differ in their storage capacity. Now it seems it’s all about a neural mechanism that controls what information gets into awareness. People with high capacity have a much better ability to ignore irrelevant information.

[1091] Vogel EK, McCollough AW, Machizawa MG. Neural measures reveal individual differences in controlling access to working memory. Nature [Internet]. 2005 ;438(7067):500 - 503. Available from: http://dx.doi.org/10.1038/nature04171

http://www.eurekalert.org/pub_releases/2005-11/uoo-dds111805.php

How much can your mind keep track of?

A recent study has tried a new take on measuring how much a person can keep track of. It’s difficult to measure the limits of processing capacity because most people automatically break down large complex problems into small, manageable chunks. To prevent this, researchers created problems the test subjects wouldn’t be familiar with. Thirty academics were presented with incomplete verbal descriptions of statistical interactions between fictitious variables, with an accompanying set of graphs representing the interactions. As the problems got more complex, participants performed less well and were less confident. They were significantly less able to accurately solve problems involving four-way interactions than those involving three-way interactions, and were completely incapable of solving problems with five-way interactions. The researchers concluded that we cannot process more than four variables at a time (and at that, four is a strain).

[415] Halford GS, Baker R, McCredden JE, Bain JD. How many variables can humans process?. Psychological Science: A Journal of the American Psychological Society / APS [Internet]. 2005 ;16(1):70 - 76. Available from: http://www.ncbi.nlm.nih.gov/pubmed/15660854

http://www.eurekalert.org/pub_releases/2005-03/aps-hmc030805.php

Cognitive therapy for ADHD

A researcher who has previously demonstrated that working memory capacity can be increased through training has now reported that the training software produced significant improvement in children with ADHD — a disability associated with deficits in working memory. The study involved 53 children with ADHD, aged 7-12, who were not on medication for their disability. 44 of these met the criterion of more than 20 days of training. Half the participants were assigned to the working memory training program and the other half to a comparison program. 60% of those who underwent the working memory training program no longer met the clinical criteria for ADHD after five weeks of training. The children were tested on visual-spatial memory, which has the strongest link to inattention and ADHD. Further research is needed to show that training improves ability on a wider range of tasks.

[583] Klingberg T, Fernell E, Olesen PJ, Johnson M, Gustafsson P, Dahlström K, Gillberg CG, Forssberg H, Westerberg H. Computerized Training of Working Memory in Children With ADHD-A Randomized, Controlled Trial. Journal of the American Academy of Child & Adolescent Psychiatry [Internet]. 2005 ;44(2):177 - 186. Available from: http://www.sciencedirect.com/science/article/B987N-4XKH91F-B/2/44e91ac6d66cbd1822ee93ad0b14ec59

http://www.sciam.com/article.cfm?articleID=000560D5-7252-12B9-9A2C83414B7F0000&sc=I100322

Anxiety adversely affects those who are most likely to succeed at exams

It has been thought that pressure harms performance on cognitive skills such as mathematical problem-solving by reducing the working memory capacity available for skill execution. However, a new study of 93 students has found that this applies only to those high in working memory. It appears that the advantage of a high working memory capacity disappears when that attention capacity is compromised by anxiety.

[355] Beilock SL, Carr TH. When high-powered people fail: working memory and "choking under pressure" in math. Psychological Science: A Journal of the American Psychological Society / APS [Internet]. 2005 ;16(2):101 - 105. Available from: http://www.ncbi.nlm.nih.gov/pubmed/15686575

http://www.eurekalert.org/pub_releases/2005-02/bpl-wup020705.php

Development of working memory with age

An imaging study of 20 healthy 8- to 30-year-olds has shed new light on the development of working memory. The study found that pre-adolescent children relied most heavily on the prefrontal and parietal regions of the brain during the working memory task; adolescents used those regions plus the anterior cingulate; and in adults, a third area of the brain, the medial temporal lobe, was brought in to support the functions of the other areas. Adults performed best. The results support the view that a person's ability to exert voluntary control over behavior improves with age because additional brain processes are recruited as the brain develops.

http://www.eurekalert.org/pub_releases/2004-10/uopm-dow102104.php

Training improves working memory capacity

Working memory capacity has traditionally been thought to be constant. Recent studies, however, suggest that working memory can be improved by training. In this recent imaging study, it was found that adults who practiced working memory tasks for 5 weeks showed increased brain activity in the middle frontal gyrus and superior and inferior parietal cortices. These changes could be evidence of training-induced plasticity in the neural systems that underlie working memory.

Olesen, P.J., Westerberg, H. & Klingberg, T. 2004. Increased prefrontal and parietal activity after training of working memory. Nature Neuroscience, 7(1), 75-9.

http://www.nature.com/cgi-taf/DynaPage.taf?file=/neuro/journal/v7/n1/abs/nn1165.html

Tests for working memory capacity more limited than thought

The so-called “magic number 7” has been a useful mnemonic for working memory capacity — how many items you can hold in your working memory at one time — but we’ve known for some time that it isn’t quite what it was originally thought to be. Apart from the fact that the “7” is an average, and that the idea of an “item” is awfully vague as far as informational content is concerned, what really matters is how long it takes you to say the words. Thus, Chinese speakers can hold on average 9 items, because the words used in the test are short and simple to pronounce, whereas Welsh speakers can hold only 5 on average, because of the length and complexity of their words. (Note: it’s not because we actually say these words out loud.) Similarly, the finding that deaf users of American Sign Language average only 5 items was thought to be because signs take longer to produce.

However, new research casts doubt on this theory. The researchers were trying to devise a sign-language test that would be comparable to a hearing-language test. To their surprise, they found that even when signs were faster to produce than spoken words, signers recalled only five items. Moreover, hearing individuals who were fluent in American Sign Language scored differently when asked to recall spoken lists in order versus signed lists (seven heard items remembered, five signed items remembered).

Until now, the predominant idea was that the number found by this test was a good measure of overall cognitive capacity, but this assumption must now be in doubt. It's suggested that a test requiring recall of items, but not in temporal order, is a better measure of cognitive capacity. The results have important implications for standardized tests, which often employ ordered-list retention as a measure of a person's mental aptitude.

[422] Boutla M, Supalla T, Newport EL, Bavelier D. Short-term memory span: insights from sign language. Nat Neurosci [Internet]. 2004 ;7(9):997 - 1002. Available from: http://dx.doi.org/10.1038/nn1298

http://www.eurekalert.org/pub_releases/2004-08/uor-stm083104.php

Hippocampus and subiculum both critical for short-term memory

A new animal study has revealed that the hippocampus shares its involvement in short-term memory with an adjacent brain region, the subiculum. Both regions act together to establish and retrieve short-term memories, alternating their activity: while one region is active, the other shuts off. The shortest memories (10-15s) were found to be controlled almost exclusively by the subiculum; after 15s, the hippocampus took over. It was also found that the hippocampus appeared to respond in a way influenced by previous experiences, allowing it to anticipate future events on the basis of past outcomes. This is an advantage, but it can also cause errors.

[349] Deadwyler SA, Hampson RE. Differential but Complementary Mnemonic Functions of the Hippocampus and Subiculum. Neuron [Internet]. 2004 ;42(3):465 - 476. Available from: http://www.cell.com/neuron/retrieve/pii/S0896627304001953

http://www.eurekalert.org/pub_releases/2004-05/wfub-nrs050604.php

Why working memory capacity is so limited

There’s an old parlor game whereby someone brings into a room a tray covered with a number of different small objects, which they show to the people in the room for one minute, before whisking it away again. The participants are then required to write down as many objects as they can remember. For those who perform badly at this type of thing, some consolation from researchers: it’s not (entirely) your fault. We do actually have a very limited storage capacity for visual short-term memory.
Now visual short-term memory is of course vital for a number of functions, and reflecting this, there is an extensive network of brain structures supporting this type of memory. However, a new imaging study suggests that the limited storage capacity is due mainly to just one of these regions: the posterior parietal cortex. An interesting distinction can be made here between registering information and actually “holding it in mind”. Activity in the posterior parietal cortex strongly correlated with the number of objects the subjects were able to remember, but only if the participants were asked to remember. In contrast, regions of the visual cortex in the occipital lobe responded differently to the number of objects even when participants were not asked to remember what they had seen.

[598] Todd JJ, Marois R. Capacity limit of visual short-term memory in human posterior parietal cortex. Nature [Internet]. 2004 ;428(6984):751 - 754. Available from: http://www.ncbi.nlm.nih.gov/pubmed/15085133

http://www.eurekalert.org/pub_releases/2004-04/vu-slo040704.php
http://tinyurl.com/2jzwe (Telegraph article)

Brain signal predicts working memory capacity

Our visual short-term memory may have an extremely limited capacity, but some people do have a greater capacity than others. A new study reveals that an individual's capacity for such visual working memory can be predicted by his or her brainwaves. In the study, participants briefly viewed a picture containing colored squares, followed by a one-second delay, and then a test picture. They pressed buttons to indicate whether the test picture was identical to the one seen earlier or differed from it by one color. The more squares a subject could correctly identify having just seen, the greater his or her visual working memory capacity, and the higher the spike of corresponding brain activity – up to a point. Neural activity of subjects with poorer working memory scores leveled off early, showing little or no increase when the number of squares to remember increased from 2 to 4, while those with high capacity showed large increases. Subjects averaged 2.8 squares.
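The report doesn't say how the capacity score of 2.8 was computed, but in this change-detection literature capacity is commonly estimated with Cowan's K formula: set size times the difference between the hit rate and the false-alarm rate. A minimal sketch, with illustrative numbers rather than the study's data:

```python
def cowans_k(set_size, hit_rate, false_alarm_rate):
    """Cowan's K: estimated number of items held in visual working memory.

    Assumes the observer stores K of the N items shown; a change is detected
    when the changed item happens to be among those stored, and otherwise the
    observer guesses at the false-alarm rate.
    """
    return set_size * (hit_rate - false_alarm_rate)

# Illustrative example: 4 squares, 80% hits, 10% false alarms.
print(cowans_k(4, 0.80, 0.10))  # 2.8 items
```

The formula's appeal is that it corrects raw accuracy for guessing: a subject who says "changed" indiscriminately gets a high hit rate but also a high false-alarm rate, and the two cancel out.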

[1154] Vogel EK, Machizawa MG. Neural activity predicts individual differences in visual working memory capacity. Nature [Internet]. 2004 ;428(6984):748 - 751. Available from: http://dx.doi.org/10.1038/nature02447

http://www.eurekalert.org/pub_releases/2004-04/niom-bsp041604.php

Small world networks key to working memory

A computer model may reveal an important aspect of working memory. The key, researchers say, is that the neurons form a "small world" network. In such a network, regardless of its size, any two nodes are always linked by only a small number of steps. Working memory may rely on the same property.
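To make the "small world" property concrete, here is a minimal Python sketch (an illustrative Watts-Strogatz-style construction, not the researchers' model): a regular ring lattice has long average paths, but rewiring a small fraction of edges into random shortcuts shrinks them dramatically.

```python
import random
from collections import deque

def ring_lattice(n, k):
    """Ring of n nodes, each linked to its k nearest neighbours on each side."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for step in range(1, k + 1):
            j = (i + step) % n
            adj[i].add(j)
            adj[j].add(i)
    return adj

def rewire(adj, p, seed=0):
    """Redirect each edge with probability p to a random node (a shortcut)."""
    rng = random.Random(seed)
    n = len(adj)
    for i in list(adj):
        for j in list(adj[i]):          # copy: we mutate adj[i] below
            if j > i and rng.random() < p:
                new = rng.randrange(n)
                if new != i and new not in adj[i]:
                    adj[i].discard(j)
                    adj[j].discard(i)
                    adj[i].add(new)
                    adj[new].add(i)
    return adj

def avg_path_length(adj):
    """Mean shortest-path length over all reachable node pairs (BFS per node)."""
    total = pairs = 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

regular = avg_path_length(ring_lattice(200, 2))
small_world = avg_path_length(rewire(ring_lattice(200, 2), 0.1))
print(regular, small_world)  # a few shortcuts sharply reduce the average path
```

The point of the demonstration is that "small number of steps" does not require dense wiring: rewiring only about 10% of the edges collapses the average path length, which is what makes the property plausible for sparse neural circuits.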

[2547] Micheloyannis S, Pachou E, Stam CJ, Vourkas M, Erimaki S, Tsirka V. Using graph theoretical analysis of multi channel EEG to evaluate the neural efficiency hypothesis. Neuroscience Letters [Internet]. 2006 ;402(3):273 - 277. Available from: http://www.sciencedirect.com/science/article/pii/S030439400600382X

http://www.newscientist.com/article/mg18224481.600-its-a-small-world-inside-your-head.html

Memory-enhancing drugs for elderly may impair working memory and other executive functions

Drugs that increase the activity of an enzyme called protein kinase A improve long-term memory in aged mice and have been proposed as memory-enhancing drugs for elderly humans. However, the type of memory improved by this activity occurs principally in the hippocampus. A new study suggests that increased activity of this enzyme has a deleterious effect on working memory (which principally involves the prefrontal cortex). In other words, a drug that helps you remember a recent event may worsen your ability to remember what you’re about to do (to take an example).

Ramos, B.P., Birnbaum, S.G., Lindenmayer, I., Newton, S.S., Duman, R.S. & Arnsten, A.F.T. 2003. Dysregulation of Protein Kinase A Signaling in the Aged Prefrontal Cortex: New Strategy for Treating Age-Related Cognitive Decline. Neuron, 40, 835-845.

http://www.eurekalert.org/pub_releases/2003-11/naos-mdf110303.php

Sleep deprivation affects working memory

A recent study investigated the working memory capacities of individuals who were sleep-deprived. For nine days, 7 of the 12 participants slept four hours each night, and 5 slept for eight hours. Each morning, participants completed a computer task to measure how quickly they could access a list of numbers they had been asked to memorize. The list could be one, three, or five items long. Then participants were presented with a series of single digits and asked to answer "yes" or "no" to indicate whether each digit was one they had memorized. Those who slept eight hours a night steadily increased their working memory efficiency on this task, but those who slept only four hours a night failed to show any improvement in memory efficiency. Motor skill did not change across days for either group of participants.
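The computer task described is essentially a Sternberg memory-scanning procedure. Below is a minimal sketch of the trial logic, assuming only what the summary states (digit lists of 1, 3, or 5 items; yes/no probe responses); the function names and the half-yes/half-no trial split are illustrative, not from the study:

```python
import random

def make_trials(n_trials, set_sizes=(1, 3, 5), seed=0):
    """Generate Sternberg-style trials: a memorized digit list plus a probe.

    Even-numbered trials draw the probe from the memorized list ("yes"
    trials); odd-numbered trials draw it from the remaining digits ("no").
    """
    rng = random.Random(seed)
    trials = []
    for t in range(n_trials):
        size = rng.choice(set_sizes)
        memorized = rng.sample(range(10), size)
        if t % 2 == 0:
            probe = rng.choice(memorized)
        else:
            probe = rng.choice([d for d in range(10) if d not in memorized])
        trials.append((memorized, probe, probe in memorized))
    return trials

def score(trials, responses):
    """Proportion of correct yes/no responses."""
    correct = sum(r == answer for (_, _, answer), r in zip(trials, responses))
    return correct / len(trials)

trials = make_trials(6)
perfect = [answer for (_, _, answer) in trials]
print(score(trials, perfect))  # 1.0 for a perfect responder
```

In the actual paradigm the measure of interest is response time rather than accuracy (accuracy is typically near ceiling), which is why the study could track "efficiency" improving across days in the rested group.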

Casement, M.D., Mullington, J.M., Broussard, J.L., & Press, D.Z. 2003. The effects of prolonged sleep restriction on working memory performance. Paper presented at the annual meeting of the Society for Neuroscience, New Orleans, LA.

http://www.eurekalert.org/pub_releases/2003-11/sfn-sfb_1111003.php

Gesturing reduces cognitive load

Why is it that people cannot keep their hands still when they talk? One reason may be that gesturing actually lightens cognitive load while a person is thinking of what to say. Adults and children were asked to remember a list of letters or words while explaining how they solved a math problem. Both groups remembered significantly more items when they gestured during their math explanations than when they did not gesture.

[1300] Goldin-Meadow S, Nusbaum H, Kelly SD, Wagner S. Explaining math: gesturing lightens the load. Psychological Science: A Journal of the American Psychological Society / APS [Internet]. 2001 ;12(6):516 - 522. Available from: http://www.ncbi.nlm.nih.gov/pubmed/11760141