News Topic attention

About these topic collections

I’ve been reporting on memory research for over ten years, and these topic pages are simply collections of all the news items I have written on a particular topic. They do not pretend to be in any way exhaustive! I cover far too many areas within memory to come anywhere near that. What I aim to do is provide breadth, rather than depth. Outside my own area of cognitive psychology, it is difficult to know how much weight to give to any study (I urge you to read my blog post on what constitutes scientific evidence). That (among other reasons) is why my approach in my news reporting is based predominantly on replication and consistency. It's about the aggregate. So here is the aggregate of those reports I have at one point considered of sufficient interest to discuss. If you know of any research you would like to add to the collection, feel free to write about it in a comment (please provide a reference).

Improving attention

Latest news

A survey of college students found that those who scored highest in multitasking ability were also least likely to multitask, while those who scored lowest were most likely to engage in it.

I’ve reported often on the perils of multitasking. Here is yet another one, with an intriguing new finding: it seems that the people who multitask the most are those least capable of doing so!

The study surveyed 310 undergraduate psychology students to find their actual multitasking ability, perceived multitasking ability, cell phone use while driving, use of a wide array of electronic media, and personality traits such as impulsivity and sensation-seeking.

Those who scored in the top quarter on a test of multitasking ability tended not to multitask. Some 70% of participants thought they were above average at multitasking, and perceived multitasking ability (rather than actual) was associated with multitasking. Those with high levels of impulsivity and sensation-seeking were also more likely to multitask (with the exception of using a cellphone while driving, which wasn’t related to impulsivity, though it was related to sensation seeking).

The findings suggest that those who multitask don’t do so because they are good at multitasking, but because they are poor at focusing on one task.

A comparison of traditional African villagers and those who have moved to town indicates that urban living improves working memory capacity even as it makes us more vulnerable to distraction.

Another study looking into the urban-nature effect takes a different tack from those I’ve previously reported on, which looked at the attention-refreshing benefits of natural environments.

In this study, a rural African people living in a traditional village were compared with those who had moved to town. Participants in the first experiment included 35 adult traditional Himba, 38 adolescent traditional Himba (mean age 12), 56 adult urbanized Himba, and 37 adolescent urbanized Himba. All traditional Himba had had little contact with the Western world and only spoke their native language; all adult urbanized Himba had grown up in traditional villages and only moved to town later in life (average length of time in town was 6 years); all adolescent urbanized Himba had grown up in the town and usually attended school regularly.

The first experiment assessed the ability to ignore peripheral distracting arrows while judging the direction (left or right) of a central arrow.

There was a significant effect of urbanization, with attention being more focused (less distracted) among the traditional Himba. Traditional Himba were also slower than urbanized Himba — but note that there was substantial overlap in response times between the two groups. There was no significant effect of age (adolescents were faster than adults in their responses, but the effect of the distracters was the same across age groups), nor was there a significant interaction between age and urbanization.

The really noteworthy part of this was that the urbanization effect on task performance was the same for the adults who had moved to town only a few years earlier as for the adolescents who had grown up and been educated in the town. In other words, this does not appear to be an educational effect.

The second experiment looked at whether traditional Himba would perform more like urbanized Himba if there were other demands on working memory. This was done by requiring them to remember three numbers (the number words in participants’ language are around twice as long as the same numbers in English, hence their digit span is shorter).

While traditional Himba were again more focused than the urbanized in the no-load condition, when there was this extra load on working memory, there was no significant difference between the two groups. Indeed, attention was de-focused in the traditional Himba under high load to the same degree as it was for urbanized Himba under no-load conditions. Note that increasing the cognitive load made no difference for the urbanized group.

There was also a significant (though not dramatic) difference between the traditional and urbanized Himba in terms of performance on the working memory task, with traditional Himba remembering an average of 2.46/3 digits and urbanized Himba 2.64.

Experiment 3 tested the two groups on a working memory task, a standard digit span test (although, of course, in their native language). Random sequences of 2-5 digits were read out, with the participant being required to say them aloud immediately after. Once again, the urbanized Himba performed better than the traditional Himba (4.32 vs 3.05).

In other words, the problem does not seem to be that urbanization depletes working memory; rather, urbanization encourages disengagement (i.e., we have the capacity, we just don’t use it).

In the fourth experiment, this idea was tested more directly. Rather than the arrows used in the earlier experiments, black and white faces were used, with participants required to determine the color of the central face. Additionally, inverted faces were sometimes used (faces are stimuli we pay a lot of attention to, but inverting them reduces their ‘faceness’, thus making them less interesting).

An additional group of Londoners was also included in this experiment.

While urbanized Himba and Londoners were, again, more de-focused than traditional Himba when the faces were inverted, for the ‘normal’ faces, all three groups were equally focused.

Note that the traditional Himba were not affected by the changes in the faces, being equally focused regardless of the stimulus. It was the urbanized groups that became more alert when the stimuli became more interesting.

Because it may have been a race-discrimination mechanism coming into play, the final experiment returned to the direction judgment, with faces either facing left or right. This time the usual results occurred – the urbanized groups were more de-focused than the traditional group.

In other words, just having faces was not enough; it was indeed the racial discrimination that engaged the urbanized participants (note that both these urban groups come from societies where racial judgments are very salient – multicultural London, and post-apartheid Namibia).

All of this indicates that the attention difficulties that appear so common nowadays are less because our complex environments are ‘sapping’ our attentional capacities, and more because we are in a different attentional ‘mode’. It makes sense that in environments that contain so many more competing stimuli, we should employ a different pattern of engagement, keeping a wider, more spread, awareness on the environment, and only truly focusing when something triggers our interest.

[3273] Linnell, K. J., Caparos, S., de Fockert, J. W., & Davidoff, J. (2013). Urbanization decreases attentional engagement. Journal of Experimental Psychology: Human Perception and Performance.

A new study provides more evidence that meditation changes the brain, and different types of meditation produce different effects.

More evidence that even an 8-week meditation training program can have measurable effects on the brain comes from an imaging study. Moreover, the type of meditation makes a difference to how the brain changes.

The study involved 36 participants from three different 8-week courses: mindful meditation, compassion meditation, and health education (control group). The courses involved only two hours class time each week, with meditation students encouraged to meditate for an average 20 minutes a day outside class. There was a great deal of individual variability in the total amount of meditation done by the end of the course (210-1491 minutes for the mindful attention training course; 190-905 minutes for the compassion training course).

Participants’ brains were scanned three weeks before the courses began, and three weeks after the end. During each brain scan, the volunteers viewed 108 images of people in situations that were either emotionally positive, negative or neutral.

In the mindful attention group, the second brain scan showed a decrease in activation in the right amygdala in response to all images, supporting the idea that meditation can improve emotional stability and response to stress. In the compassion meditation group, right amygdala activity also decreased in response to positive or neutral images, but, among those who reported practicing compassion meditation most frequently, right amygdala activity tended to increase in response to negative images. No significant changes were seen in the control group or in the left amygdala of any participant.

The findings support the idea that meditation can be effective in improving emotional control, and that compassion meditation can indeed increase compassionate feelings. Increased amygdala activation was also correlated with decreased depression scores in the compassion meditation group, which suggests that having more compassion towards others may also be beneficial for oneself.

The findings also support the idea that the changes brought about by meditation endure beyond the meditative state, and that the changes can start to occur quite quickly.

These findings are all consistent with other recent research.

One point is worth emphasizing, in the light of the difficulty in developing a training program that improves working memory rather than simply improving the task being practiced. These findings suggest that, unlike most cognitive training programs, meditation training might produce learning that is process-specific rather than stimulus- or task-specific, giving it perhaps a wider generality than most cognitive training.

A pilot study suggests declines in temporal processing are an important part of age-related cognitive decline, and shows how temporal training can significantly improve some cognitive abilities.

Here’s an exciting little study, implying as it does that one particular aspect of information processing underlies much of the cognitive decline in older adults, and that this can be improved through training. No, it’s not our usual suspect, working memory, it’s something far less obvious: temporal processing.

In the study, 30 older adults (aged 65-75) were randomly assigned to three groups: one that received ‘temporal training’, one that practiced common computer games (such as Solitaire and Mahjong), and a no-activity control. Temporal training was provided by a trademarked program called Fast ForWord Language® (FFW), which was developed to help children who have trouble reading, writing, and learning.

The training, for both training groups, occupied an hour a day, four days a week, for eight weeks.

Cognitive assessment, carried out at the beginning and end of the study, and for the temporal training group again 18 months later, included tests of sequencing abilities (how quickly two sounds could be presented and still be accurately assessed for pitch or direction), attention (vigilance, divided attention, and alertness), and short-term memory (working memory span, pattern recognition, and pattern matching).

Only in the temporal training group did performance on any of the cognitive tests significantly improve after training — on the sequencing tests, divided attention, matching complex patterns, and working memory span. These positive effects still remained after 18 months (vigilance was also higher at the end of training, but this improvement wasn’t maintained).

This is, of course, only a small pilot study. I hope we will see a larger study, and one that compares this form of training against other computer training programs. It would also be good to see some broader cognitive tests — ones that are less connected to the temporal training. But I imagine that, as I’ve discussed before, an effective training program will include more than one type of training. This may well be an important component of such a program.

[3075] Szelag, E., & Skolimowska, J. (2012). Cognitive function in elderly can be ameliorated by training in temporal information processing. Restorative Neurology and Neuroscience, 30(5), 419 - 434.

Multitasking is significantly worse if your tasks use the same modality. Instant messaging while doing another visual-motor task reduces performance more than talking on the phone.

I’ve reported, often, on the evidence that multitasking is a problem, something we’re not really designed to do well (with the exception of a few fortunate individuals), and that the problem is rooted in our extremely limited working memory capacity. I’ve also talked about how ‘working memory’ is a bit of a misnomer, given that we probably have several ‘working memories’, for different modalities.

It follows that tasks using different working memories should be easier to do at the same time than tasks that use the same working memory. A new study confirms this: multitasking is more difficult if you are trying to use the same working memory modules for both tasks.

In the study, 32 students carried out a visual pattern-matching task on a computer while giving directions to another person either via instant messaging (same modalities — vision and motor) or online voice chat (different modality — hearing).

While both simultaneous tasks significantly worsened performance on the pattern-matching task, communicating by IM (same modality) led to a roughly 50% drop in visual pattern-matching performance (from a mean of 11 correct responses to a mean of 5), compared to only a roughly 30% drop in the voice condition (mean of 7).
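For what it’s worth, the reported percentages appear to be rounded; a quick sanity check using only the means reported above looks like this:

```python
# Back-of-the-envelope check of the reported performance drops,
# using the group means given in the study report (11, 5, and 7
# correct responses). The "50%" and "30%" figures are rounded.
baseline = 11    # mean correct responses, pattern-matching alone
im_mean = 5      # mean with simultaneous instant messaging
voice_mean = 7   # mean with simultaneous voice chat

im_drop = (baseline - im_mean) / baseline        # about 0.55, i.e. roughly 50%
voice_drop = (baseline - voice_mean) / baseline  # about 0.36, i.e. roughly 30%

print(f"IM drop: {im_drop:.0%}, voice drop: {voice_drop:.0%}")
```

So the same-modality (IM) condition roughly halved performance, while the different-modality (voice) condition cost about a third — consistent with the study’s headline numbers once rounding is taken into account.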

The underlying reason for the reductions in performance seems to be in the effect on eye movement: the number and duration of eye fixations was reduced in both dual-task conditions, and more so in the IM condition.

Note that this is apparently at odds with general perception. According to one study, IM is perceived to be less disruptive than the phone. Moreover, in the current study, participants felt they performed better in the IM condition (although this palpably wasn’t true). This feeling may reflect the greater sense of personal control in instant messaging compared to chat. It may also reflect an illusion of efficiency generated by using the visual channel — because we are so strongly practiced in using vision, we may find visual tasks more effortless than tasks using other modalities. (I should note that most people, regardless of the secondary task, felt they did better than they had! But those in the IM condition were more deluded than those in the chat condition.)

The finding also explains why texting is particularly dangerous when driving — both rely heavily on the same modalities.

All this is consistent with the idea that there are different working memory resources which can operate in parallel, but share one particular resource which manages the other resources.

The idea of ‘threaded cognition’ — of maintaining several goal threads and strategically allocating resources as needed — opens up the idea that multitasking is not all bad. In recent years, we have focused on multitasking as a problem. This has been a very necessary emphasis, given that its downsides were unappreciated. But although multitasking has its problems, it may be that there are trade-offs that come from the interaction between the tasks being carried out.

In other words, rather than condemning multitasking, we need to learn its parameters. This study offers one approach.

Three recent studies show that meditation training reduces the stress of multitasking and reduces task-switching, that it improves white matter efficiency, and that the improved executive control may be largely to do with better emotional awareness and regulation.

Meditation may improve multitasking

I recently reported that developing skill at video action games doesn’t seem to improve general multitasking ability, but perhaps another approach might be more successful. Meditation has, of course, been garnering growing evidence that it can help improve attentional control. A new study extends that research to multitasking in a realistic work setting.

The study involved three groups of 12-15 female human resource managers, of whom one group received eight weeks of mindfulness-based meditation training, another received eight weeks of body relaxation training, and another initially received no training (control), before receiving the mindfulness training after the eight weeks.

Before and after each eight-week period, the participants were given a stressful test of their multitasking abilities, requiring them to use email, calendars, instant-messaging, telephone and word-processing tools to perform common office tasks (scheduling a meeting; finding a free conference room; writing a draft announcement of the meeting; eating snacks and drinking water; writing a memo proposing a creative agenda item for the meeting). Necessary information came from emails, instant messages, telephone calls, and knocks on the door. The participants had 20 minutes to complete the tasks.

The meditation group reported lower levels of stress during the multitasking test compared to the control and relaxation groups. They also spent more time on tasks and switched tasks less often, while taking no longer to complete the overall job than the others. Both meditation and relaxation groups showed improved memory for the tasks they were performing.

After the control group underwent the meditation training, their results matched those of the meditation group.

The meditation training emphasized:

  • control of attentional focus
  • focusing attention in the present moment or task
  • switching focus
  • breath and body awareness.

The relaxation training emphasized progressive tensing and relaxing of major muscle groups, aided by relaxation imagery.

It's interesting that overall time on task didn't change (the researchers remarked that the meditators didn't take any longer, but of course most of us would be looking for it to become shorter!), but I wouldn't read too much into it. The task was relatively brief. It would be interesting to see the effects over the course of, say, a day. Nor did the study look at how well the tasks were done.

But it is, of course, important that meditation training reduced task-switching and stress. Whether it also has a positive effect on overall time and quality of work is a question for another day.

IBMT improves white matter efficiency

A recent imaging study has found that four weeks of a form of mindfulness meditation called integrative body–mind training (IBMT) improved white matter efficiency in areas surrounding the anterior cingulate cortex, compared to controls given relaxation training.

The anterior cingulate is part of the brain network related to self-regulation. Deficits in activation in this part of the brain have been associated with attention deficit disorder, dementia, depression, schizophrenia, and other disorders.

Using the data from a 2010 study involving 45 U.S. college students, and another involving 68 Chinese students, researchers found that axon density (one factor in white matter efficiency) had improved after two weeks, but not myelin formation. After a month (about 11 hours of meditation), both had improved. Mood improved by two weeks.

Previous studies involving computer-based training for improving working memory have found changes in myelination, but not axon density.

Meditators’ better cognitive control may be rooted in emotional regulation

Previous work has found that people who engage in meditation show higher levels of executive control on laboratory tasks.

An electrical signal called the Error Related Negativity (ERN) occurs in the brain within 100 ms of an error being committed. When meditators and non-meditators were given the Stroop Test, meditators not only tended to do better on the test, but their ERNs were stronger.

The interesting thing about this is that the best performers were those who scored highest on emotional acceptance. Mindful awareness was less important. It’s suggested that meditators may be able to control their behavior better not because of their sharper focus, but because they are more aware of their emotions and regulate them better.

Something to think about!

Levy, D. M., Wobbrock, J. O., Kaszniak, A. W., & Ostergren, M. (2012). The effects of mindfulness meditation training on multitasking in a high-stress information environment. Proceedings of Graphics Interface 2012, 45 - 52. Full text available at http://faculty.washington.edu/wobbrock/pubs/gi-12.02.pdf

[3051] Tang, Y.-Y., Lu, Q., Fan, M., Yang, Y., & Posner, M. I. (2012). Mechanisms of white matter changes induced by meditation. Proceedings of the National Academy of Sciences, 109(26), 10570 - 10574.

[3052] Teper, R., & Inzlicht, M. (2012). Meditation, mindfulness and executive control: the importance of emotional acceptance and brain-based performance monitoring. Social Cognitive and Affective Neuroscience.

A comparison of skilled action gamers and non-gamers reveals that all that multitasking practice doesn’t make you any better at multitasking in general.

The research is pretty clear by this point: humans are not (with a few rare exceptions) designed to multitask. However, it has been suggested that the modern generation, with all the multitasking they do, may have been ‘re-wired’ to be more capable of this. A new study throws cold water on this idea.

The study involved 60 undergraduate students, of whom 34 were skilled action video game players (all male) and 26 did not play such games (19 men and 7 women). The students were given three visual tasks, each of which they did on its own and then again while answering Trivial Pursuit questions over a speakerphone (designed to mimic talking on a cellphone).

The tasks included a video driving game (“TrackMania”), a multiple-object tracking test (similar to a video version of a shell game), and a visual search task (hidden pictures puzzles from Highlights magazine).

While the gamers were (unsurprisingly) significantly better at the video driving game, the non-gamers were just as good as them at the other two tasks. In the dual-tasking scenarios, performance declined on all the tasks, with the driving task most affected. While the gamers were affected less by multitasking during the driving task compared to the non-gamers, there was no difference in the amount of decline between gamers and non-gamers on the other two tasks.

Clearly, the smaller effect of dual-tasking on the driving game for gamers is a product of their greater expertise at the driving game, rather than their ability to multitask better. It is well established that the more skilled you are at a task, the more automatic it becomes, and thus the less working memory capacity it will need. Working memory capacity / attention is the bottleneck that prevents us from being true multitaskers.

In other words, the oft-repeated (and somewhat depressing) conclusion remains: you can’t learn to multitask in general, you can only improve specific skills, enabling you to multitask reasonably well while doing those specific tasks.

[3001] Donohue, S., James, B., Eslick, A., & Mitroff, S. (2012). Cognitive pitfall! Videogame players are not immune to dual-task costs. Attention, Perception, & Psychophysics, 74(5), 803 - 809.

A small study provides more support for the idea that viewing nature can refresh your attention and improve short-term memory, and extends it to those with clinical depression.

I’ve talked before about Dr Berman’s research into Attention Restoration Theory, which proposes that people concentrate better after nature walks or even just looking at nature scenes. In his latest study, the findings have been extended to those with clinical depression.

The study involved 20 young adults (average age 26), all of whom had a diagnosis of major depressive disorder. Short-term memory and mood were assessed (using the backwards digit span task and the PANAS), and then participants were asked to think about an unresolved, painful autobiographical experience. They were then randomly assigned to go for a 50-minute walk along a prescribed route in either the Ann Arbor Arboretum (woodland park) or traffic heavy portions of downtown Ann Arbor. After the walk, mood and cognition were again assessed. A week later the participants repeated the entire procedure in the other location.

Participants exhibited a significant (16%) increase in attention and working memory after the nature walk compared to the urban walk. While participants felt more positive after both walks, the mood improvements did not correlate with the memory improvements.

The finding is particularly interesting because depression is characterized by high levels of rumination and negative thinking. It seemed quite likely, then, that a solitary walk in the park might make depressed people feel worse, and worsen working memory. It’s intriguing that it didn’t.

It’s also worth emphasizing that, as in earlier studies, this effect of nature on cognition appears to be independent of mood (which is, of course, the basic tenet of Attention Restoration Theory).

Of course, this study is, like the others, small, and involves the same demographic. Hopefully future research will extend the sample groups, to middle-aged and older adults.

A small study has found that ten hours of playing action video games produced significant changes in brainwave activity and improved visual attention for some (but not all) novices.

Following on from research finding that people who regularly play action video games show differences in brain activity related to visual attention, compared to non-players, a new study has investigated whether such changes could be elicited in 25 volunteers who hadn’t played video games in at least four years. Sixteen of the participants played a first-person shooter game (Medal of Honor: Pacific Assault), while nine played a three-dimensional puzzle game (Ballance). They played the games for a total of 10 hours, spread over one- to two-hour sessions.

Selective attention was assessed through an attentional visual field task, carried out prior to and after the training program. Individual learning differences were marked, and because of visible differences in brain activity after training, the action gamers were divided into two groups for analysis — those who performed above the group mean on the second attentional visual field test (7 participants), and those who performed below the mean (9). These latter individuals showed similar brain activity patterns as those in the control (puzzle) group.

In all groups, early-onset brainwaves were little affected by video game playing. This suggests that game-playing has little impact on bottom–up attentional processes, and is in keeping with earlier research showing that players and non-players don’t differ in the extent to which their attention is captured by outside stimuli.

However, later brainwaves — those thought to reflect top–down control of selective attention via increased inhibition of distracters — increased significantly in the group who played the action game and showed above-average improvement on the field test. Another increased wave suggests that the total amount of attention allocated to the task was also greater in that group (i.e., they were concentrating more on the game than the below-average group, and the control group).

The improved ability to select the right targets and ignore other stimuli suggests, too, that these players are also improving their ability to make perceptual decisions.

The next question, of course, is what personal variables underlie the difference between those who benefit more quickly from the games, and those who don’t. And how much more training is necessary for this latter group, and are there some people who won’t achieve these benefits at all, no matter how long they play? Hopefully, future research will be directed to these questions.

[2920] Wu, S., Cheng, C. K., Feng, J., D'Angelo, L., Alain, C., & Spence, I. (2012). Playing a first-person shooter video game induces neuroplastic change. Journal of Cognitive Neuroscience, 24(6), 1286 - 1293.

A review supports the benefits of physical activity for children’s and adolescents’ scholastic performance, but points to the need for better studies. A recent study looks at the effects of different types of physical activity on attention.

A review of 10 observational and four intervention studies is said to provide strong evidence for a positive relationship between physical activity and academic performance in young people (6-18). While only three of the four intervention studies and three of the 10 observational studies found a positive correlation, those included the two studies (one intervention and one observational) that the researchers described as “high-quality”.

An important feature of the high-quality studies was that they used objective measures of physical activity, rather than students' or teachers' reports. More high-quality studies are clearly needed. Note that the quality score of the 14 studies ranged from 22% (!) to 75%.

Interestingly, a recent media report (NOT, I hasten to add, a peer-reviewed study appearing in an academic journal) spoke of data from public schools in Lincoln, Nebraska, which apparently has a district-wide physical-fitness test. It found that those who passed the fitness test were significantly more likely to also pass state reading and math tests.

Specifically, data from the last two years apparently shows that 80% of the students who passed the fitness test either met or exceeded state standards in math, compared to 66% of those who didn't pass the fitness test, and 84% of those who passed the fitness test met or exceeded state standards in reading, compared to 71% of those who failed the fitness test.

Another recent study looks at a different aspect of this association between physical exercise and academic performance.

The Italian study involved 138 normally-developing children aged 8-11, whose attention was tested before and after three different types of class: a normal academic class; a PE class focused on cardiovascular endurance, involving continuous aerobic circuit training followed by a shuttle run exercise; and a PE class combining physical and mental activity, involving novel use of basketballs in varying mini-games designed to develop coordination and movement-based problem-solving. The two types of physical activity offered the same exercise intensity, but very different skill demands.

The attention test was a short (5-minute) paper-and-pencil task: 14 lines of randomly mixed “p” and “d” letters, each letter carrying one to four single and/or double quotation marks above and/or below it. The children had to mark every “d” that had double quotation marks either above or below it.

Processing speed increased 9% after mental exercise (normal academic class) and 10% after physical exercise. These were both significantly better than the increase of 4% found after the combined physical and mental exertion.

Similarly, scores on the test improved 13% after the academic class, 10% after the standard physical exercise, and only 2% after the class combining physical and mental exertion.

Now it’s important to note that this is of course an investigation of the immediate arousal benefits of exercise, rather than of the long-term benefits of being fit, which is a completely different question.

But the findings do bear on the use of PE classes in the school setting, and the different effects that different types of exercise might have.

First of all, there’s the somewhat surprising finding that attention was at least as good, if not better, after the academic class as after the PE class. It would not have been surprising if attention had flagged. It seems likely that what we are seeing here is a reflection of being in the right head-space — that is, the advantage of continuing with the same sort of activity.

But the main finding is the, also somewhat unexpected, relative drop in attention after the PE class that combined mental and physical exertion.

It seems plausible that the reason for this lies in the cognitive demands of the novel activity, which is, I think, the main message we should take away from this study, rather than any comparison between physical and mental activity. However, it would not be surprising if novel activities that combine physical and mental skills tend to be more demanding than skills that are “purely” (few things are truly pure I know) one or the other.

Of course, it shouldn’t be overlooked that attention wasn’t hampered by any of these activities!

It seems that what is said by deeper male voices is remembered better by heterosexual women, while memory is impaired for higher male voices. Pitch didn’t affect the memorability of female voices.

I had to report on this quirky little study, because a few years ago I discovered Leonard Cohen’s gravelly voice and then just a few weeks ago had it trumped by Tom Waits — I adore these deep gravelly voices, but couldn’t say why. Now a study shows that women are not only sensitive to male voice pitch, but this affects their memory.

In the first experiment, 45 heterosexual women were shown images of objects while listening to the name of the object spoken either by a man or woman. The pitch of the voice was manipulated to be high or low. After spending five minutes on a Sudoku puzzle, participants were asked to choose which of two similar but not identical versions of the object was the one they had seen earlier. After the memory test, participants were tested on their voice preferences.

Women strongly preferred the low pitch male voice and remembered objects more accurately when they had been introduced by the deeper male voice than the higher male voice (mean score for object recognition was 84.7% vs 77.8%). There was no significant difference in memory relating to pitch for the female voices (83.9% vs 81.7% — note that these are not significantly different from the score for the deeper male voice).

So is it that memory is enhanced for deeper male voices, or that it is impaired for higher male voices (performance on the female voices suggests the latter)? Or are both factors at play? To sort this out, the second experiment, involving a new set of 46 women, included unmanipulated male and female voices.

Once again, women were unaffected by the different variations of female voices. However, male voices produced a clear linear effect, with the unmanipulated male voices squarely in the middle of the deeper and higher versions. It appears, then, that both factors are at play: deepening a male voice enhances its memorability, while raising it impairs its memorability.

It’s thought that deeper voices are associated with more desirable traits for long-term male partners. Having a better memory for specific encounters with desirable men would allow women to compare and evaluate men according to how they might behave in different relationship contexts.

The voices used were supplied by four young adult men and four young adult women. Pitch was altered through software manipulation. Participants were told that the purpose of the experiment was to study sociosexual orientation and object preference. Contraceptive pill usage did not affect the women’s responses.

A mouse study finds that gamma waves in the hippocampus, critically involved in learning, grow stronger as mice run faster.

I’ve always felt that better thinking was associated with my brain working ‘in a higher gear’ — literally working at a faster rhythm. So I was particularly intrigued by the findings of a recent mouse study that found that brainwaves associated with learning became stronger as the mice ran faster.

In the study, 12 male mice were implanted with microelectrodes that monitored gamma waves in the hippocampus, then trained to run back and forth on a linear track for a food reward. Gamma waves are thought to help synchronize neural activity in various cognitive functions, including attention, learning, temporal binding, and awareness.

We know that the hippocampus has specialized ‘place cells’ that record where we are and help us navigate. But to navigate the world, to create a map of where things are, we need to also know how fast we are moving. Having the same cells encode both speed and position could be problematic, so researchers set out to find how speed was being encoded. To their surprise and excitement, they found that the strength of the gamma rhythm grew substantially as the mice ran faster.

The results also confirmed recent claims that the gamma rhythm, which oscillates between 30 and 120 times a second, can be divided into slow and fast signals (20-45 Hz vs 45-120 Hz for mice, consistent with the 30-55 Hz vs 45-120 Hz bands found in rats) that originate from separate parts of the brain. The slow gamma waves in the CA1 region of the hippocampus were synchronized with slow gamma waves in CA3, while the fast gamma in CA1 were synchronized with fast gamma waves in the entorhinal cortex.

The two signals became increasingly separated with increasing speed, because the two bands were differentially affected by speed. While the slow waves increased linearly, the fast waves increased logarithmically. This differential effect could have to do with mechanisms in the source regions (CA3 and the medial entorhinal cortex, respectively), or to mechanisms in the different regions in CA1 where the inputs terminate (the waves coming from CA3 and the entorhinal cortex enter CA1 in different places).
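
As a toy illustration of that differential speed effect (this is not the authors' model; the coefficients and speed values are arbitrary), the two bands' power can be sketched as functions of running speed:

```python
import math

# Toy model (hypothetical coefficients): slow-gamma power grows
# linearly with running speed, fast-gamma power logarithmically,
# so the two signals increasingly separate as speed rises.
def slow_gamma_power(speed, base=1.0, slope=0.1):
    return base + slope * speed

def fast_gamma_power(speed, base=1.0, gain=0.5):
    return base + gain * math.log1p(speed)

speeds = [5, 10, 20, 40]  # running speed in cm/s (illustrative)
gap = [slow_gamma_power(v) - fast_gamma_power(v) for v in speeds]
# gap widens as speed increases, mirroring the reported separation
```

The point of the sketch is simply that a linear curve must eventually outpace a logarithmic one, so any such pair of responses will diverge more and more at higher speeds.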

In the hippocampus, gamma waves are known to interact with theta waves. Further analysis of the data revealed that the effects of speed on gamma rhythm only occurred within a narrow range of theta phases — but this ‘preferred’ theta phase also changed with running speed, more so for the slow gamma waves than the fast gamma waves (which is not inconsistent with the fact that slow gamma waves are more affected by running speed than fast gamma waves). Thus, while slow and fast gamma rhythms preferred similar phases of theta at low speeds, the two rhythms became increasingly phase-separated with increasing running speed.

What’s all this mean? Previous research has shown that if inputs from CA3 and the entorhinal cortex enter CA1 at the same time, the kind of long-term changes at the synapses that bring about learning are stronger and more likely in CA1. So at low speeds, synchronous inputs from CA3 and the entorhinal cortex at similar theta phases make them more effective at activating CA1 and inducing learning. But the faster you move, the more quickly you need to process information. The stronger gamma waves may help you do that. Moreover, the theta phase separation of slow and fast gamma that increases with running speed means that activity in CA3 (slow gamma source) increasingly anticipates activity in the medial entorhinal cortex (fast gamma source).

What does this mean at the practical level? Well at this point it can only be speculation that moving / exercising can affect learning and attention, but I personally am taking this on board. Most of us think better when we walk. This suggests that if you’re having trouble focusing and don’t have time for that, maybe walking down the hall or even jogging on the spot will help bring your brain cells into order!

Pushing speculation even further, I note that meditation by expert meditators has been associated with changes in gamma and theta rhythms. And in an intriguing comparison of the effect of spoken versus sung presentation on learning and remembering word lists, the group that sang showed greater coherence in both gamma and theta rhythms (in the frontal lobes, admittedly, but they weren’t looking elsewhere).

So, while we’re a long way from pinning any of this down, it may be that all of these — movement, meditation, music — can be useful in synchronizing your brain rhythms in a way that helps attention and learning. This exciting discovery will hopefully be the start of an exploration of these possibilities.

A study of Korean preschoolers demonstrates that at least some of the cognitive benefits of bilingualism are due to learning two languages, not because of a more diligent culture or a more enriched environment.

An increasing number of studies have been showing the benefits of bilingualism, both for children and in old age. However, there’s debate over whether the apparent benefits for children are real, or a product of cultural factors (“Asians work harder!”, or more seriously, that such children are taught more behavioral control from an early age) or environmental factors (such as socioeconomic status).

A new study aimed to disentangle these complicating factors, by choosing 56 4-year-olds with college-educated parents, from middle-class neighborhoods, and comparing English-speaking U.S. children, Korean-speaking children in the U.S. and in Korea, and Korean-English bilingual children in the U.S.

The children were tested on a computer-game-like activity designed to assess the alerting, orienting, and executive control components of executive attention (a child version of the Attention Network Test). They were also given a vocabulary test (the Peabody Picture Vocabulary Test-III) in their own language, if monolingual, or in English for the bilinguals.

As expected, given their young age, English monolinguals scored well above bilinguals (learning more than one language slows the acquisition of vocabulary in the short-term). Interestingly, however, while Korean monolinguals in Korea performed at a comparable level to the English monolinguals, Korean monolinguals in the U.S. performed at the level of the bilinguals. In other words, the monolinguals living in a country where their language is a majority language have comparable language skills, and those living in a country in which their primary language is a minority language have similar, and worse, language skills.

That’s interesting, but the primary purpose of the study was to look at executive control. And here the bilingual children shone over the monolinguals. Specifically, the bilingual children were significantly more accurate on the attention test than the monolingual Koreans in the U.S. (whether they spoke Korean or English). Although their performance in terms of accuracy was not significantly different from that of the monolingual children in Korea, these children obtained their high accuracy at the expense of speed. The bilinguals were both accurate and fast, suggesting a different mechanism is at work.

The findings confirm earlier research indicating that bilingualism, independent of culture, helps develop executive attention, and points to how early this advantage begins.

The Korean-only and bilingual children from the United States had first generation native Korean parents. The bilingual children had about 11 months of formal exposure to English through a bilingual daycare program, resulting in them spending roughly 45% of their time using Korean (at home and in the community) and 55% of their time using English (at daycare). The children in Korea belonged to a daycare center that did offer a weekly 15-minute session during which they were exposed to English through educational DVDs, but their understanding of English was minimal. Similarly, the Korean-only children in the U.S. would have had some exposure to English, but it was insufficient to allow them to understand English instructions. The researchers’ informal observation of the Korean daycare center and the ones in the U.S. was that the programs were quite similar, and neither was more enriching.

[2351] Yang, S., Yang H., & Lust B. (2011).  Early Childhood Bilingualism Leads to Advances in Executive Attention: Dissociating Culture and Language. Bilingualism: Language and Cognition. 14(03), 412 - 422.

Receiving immediate feedback on the activity in a brain region enabled people to improve their control of that region’s activity, thus improving their concentration.

I’ve always been intrigued by neurofeedback training. But when it first raised its head, technology was far less sophisticated. Now a new study has used real-time functional Magnetic Resonance Imaging (fMRI) feedback from the rostrolateral prefrontal cortex to improve people's ability to control their thoughts and focus their attention.

In the study, participants performed tasks that either raised or lowered mental introspection in 30-second intervals over four six-minute sessions. Those with access to real-time fMRI feedback could see their RLPFC activity increase during introspection and decrease during non-introspective thoughts, such as mental tasks that focused on body sensations. These participants became significantly better at controlling their thoughts and performing the mental tasks. Moreover, the improved regulation was reflected only in activity in the rostrolateral prefrontal cortex. Those given inaccurate or no brain feedback showed no such improvement.

The findings point to a means of improving attentional control, and also raise hope for clinical treatments of conditions that can benefit from improved awareness and regulation of one's thoughts, including depression, anxiety, and obsessive-compulsive disorders.

New research suggests that meditation can improve your ability to control alpha brainwaves, thus helping you block out distraction.

As I’ve discussed on many occasions, a critical part of attention (and working memory capacity) is being able to ignore distraction. There has been growing evidence that mindfulness meditation training helps develop attentional control. Now a new study helps fill out the picture of why it might do so.

The alpha rhythm is particularly active in neurons that process sensory information. When you expect a touch, sight or sound, the focusing of attention toward the expected stimulus induces a lower alpha wave height in neurons that would handle the expected sensation, making them more receptive to that information. At the same time the height of the alpha wave in neurons that would handle irrelevant or distracting information increases, making those cells less receptive to that information. In other words, alpha rhythm helps screen out distractions.

In this study, six participants who completed an eight-week mindfulness meditation program (MBSR) were found to generate larger alpha waves, and generate them faster, than the six in the control group. Alpha wave activity in the somatosensory cortex was measured while participants directed their attention to either their left hand or foot. This was done on three occasions: before training, at three weeks of the program, and after the program.

The MBSR program involves an initial two-and-a-half-hour training session, followed by daily 45-minute meditation sessions guided by a CD recording. The program trains participants first to pay close attention to body sensations, then to focus on body sensations in a specific area, and then to disengage and shift the focus to another body area.

Apart from helping us understand why mindfulness meditation training seems to improve attention, the findings may also explain why this meditation can help sufferers of chronic pain.

A new study suggests a positive mood affects attention by using up some of your working memory capacity.

Following earlier research suggesting mood affects attention, a new study tries to pin down exactly what it’s affecting.

To induce different moods, participants were shown either a video of a stand-up comedy routine or an instructional video on how to install flooring. This was followed by two tests: one of working memory capacity (the Running Memory Span), in which numbers are presented through headphones at a rate of four per second and, when the stream ends, subjects must recall the last six numbers in order; and one of response inhibition (the Stroop task).
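
The scoring rule of a running span task is simple to state in code: whatever the stream length, the correct response is the last six items in presentation order. A minimal sketch (my own illustration, not the study's materials):

```python
from collections import deque

# A fixed-size deque keeps only the most recent k digits, which is
# exactly what a running memory span task asks the subject to do:
# continuously update, then report the final k items in order.
def running_span_answer(stream, k=6):
    buf = deque(maxlen=k)
    for digit in stream:
        buf.append(digit)
    return list(buf)

# e.g. for a 9-digit stream the correct recall is the final 6 digits
answer = running_span_answer([3, 1, 4, 1, 5, 9, 2, 6, 5])
```

What makes the task demanding is that the subject doesn't know when the stream will stop, so the buffer must be updated continuously at four items per second.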

Those that watched the comedy routine performed significantly worse on the RMS task but not on the Stroop task. To confirm these results, a second experiment used a different measure of response inhibition, the Flanker task. Again, those in a better mood performed worse on the span task but not the inhibition task.

These findings point to mood affecting storage capacity — something we already had evidence for in the case of negative mood, like anxiety, but a little more surprising to find it also applies to happy moods. Basically, it seems as if any emotion, whether good or bad, is likely to leave you less room in your working memory store for information processing. That shouldn’t be taken as a cue to go all Spock! But it’s something to be aware of.

[2180] Martin, E. A., & Kerns J. G. (2011).  The influence of positive mood on different aspects of cognitive control. Cognition & Emotion. 25(2), 265 - 265.

A new study suggests we lose focus because of habituation, and we can ‘reset’ our attention by briefly switching to another task before returning.

We’ve all experienced the fading of our ability to concentrate when we’ve been focused on a task for too long. The dominant theory of why this should be so has been around for half a century, and describes attention as a limited resource that gets ‘used up’. Well, attention is assuredly a limited resource in the sense that you only have so much of it to apply. But is it limited in the sense of being used up and needing to refresh? A new study indicates that it isn’t.

The researchers make what strikes me as a cogent argument: attention is an endless resource; we are always paying attention to something. The problem is our ability to maintain attention on a single task without respite. Articulated like this, we are immediately struck by the parallel with perception. Any smell, touch, sight, sound, that remains constant eventually stops registering with us. We become habituated to it. Is that what’s happening with attention? Is it a form of habituation?

In an experimental study, 84 volunteers were tested on their ability to focus on a repetitive computerized task for 50 minutes under various conditions: one group had no breaks or distractions; two groups memorized four digits beforehand and were told to respond if they saw them on the screen during the task (but only one group were shown them during the task); one group were shown the digits but told to ignore them if they saw them.

As expected, performance declined significantly over the course of the task for most participants — with the exception of those who were twice shown the memorized digits and had to respond to them. That was all it took, a very brief break in the task, and their focus was maintained.

The finding suggests that prolonged attention to a single task actually hinders performance, but briefly deactivating and reactivating your goals is all you need to stay focused.

A three-month trial comparing the effects of exercise programs on cognitive function in sedentary, overweight children, has found dose-related benefits of regular aerobic exercise.

A study involving 171 sedentary, overweight 7- to 11-year-old children has found that those who participated in an exercise program improved both executive function and math achievement. The children were randomly assigned either to a group that got 20 minutes of aerobic exercise in an after-school program, one that got 40 minutes of exercise in a similar program, or a group that had no exercise program. Those who got the greater amount of exercise improved more. Brain scans also revealed increased activity in the prefrontal cortex and reduced activity in the posterior parietal cortex, for those in the exercise group.

The program lasted around 13 weeks. The researchers are now investigating the effects of continuing the program for a full year. Gender, race, socioeconomic factors or parental education did not change the impact of the exercise program.

The effects are consistent with other studies involving older adults. It should be emphasized that these were sedentary, overweight children. These findings are telling us what the lack of exercise is doing to young minds. I note the report just previous, about counteracting what we have regarded as “normal” brain atrophy in older adults by the simple action of walking for 40 minutes three times a week. Children and older adults might be regarded as our canaries in the coal mine, more vulnerable to many factors that can affect the brain. We should take heed.

After 8 weeks practicing mindfulness meditation, measurable changes occurred in brain regions associated with memory and emotion.

Brain images of 16 participants in an 8-week mindfulness meditation program, taken two weeks before and after the program, have found measurable changes in brain regions associated with memory, sense of self, empathy and stress. Specifically, they showed increased grey-matter density in the left hippocampus, posterior cingulate cortex, temporo-parietal junction, and cerebellum, as well as decreased grey-matter density in the amygdala. Similar brain scans of a control group of non-meditators (those on a waiting list for the program) showed no such changes over time.

Although a number of studies have found differences in the brains of experienced meditators and those who don’t practice meditation, this is the first to demonstrate that those differences are actually produced by meditation.

The Mindfulness-Based Stress Reduction program involved weekly meetings that included practice of mindfulness meditation and audio recordings for guided meditation practice. Participants reported spending an average of 27 minutes each day practicing mindfulness exercises.

Students who watched a video of a laughing baby or listened to a peppy Mozart piece performed better on a classification task.

A link between positive mood and creativity is supported by a study in which 87 students were put into different moods (using music and video clips) and then given a category learning task to do (classifying sets of pictures with visually complex patterns). There were two category tasks: one involved classification on the basis of a rule that could be verbalized; the other was based on a multi-dimensional pattern that could not easily be verbalized.

Happy volunteers were significantly better at learning the rule to classify the patterns than sad or neutral volunteers. There was no difference between those in a neutral mood and those in a negative mood.

It had been theorized that positive mood might only affect processes that require hypothesis testing and rule selection. The mechanism by which this might occur is through increased dopamine levels in the frontal cortex. Interestingly, however, although mood made no difference to performance on the non-verbalizable task, analysis based on how closely the subjects’ responses matched an optimal strategy for that task found that, again, positive mood was of significant benefit.

The researchers suggest that this effect of positive mood may be the reason behind people liking to watch funny videos at work — they’re trying to enhance their performance by putting themselves in a good mood.

The music and video clips were rated for their mood-inducing effects. Mozart’s “Eine Kleine Nachtmusik—Allegro” was the highest rated music clip (at an average rating of 6.57 on a 7-point scale), Vivaldi’s Spring was next at 6.14. The most positive video was that of a laughing baby (6.57 again), with Whose Line is it Anyway sound effects scoring close behind (6.43).

[2054] Nadler, R. T., Rabi R., & Minda J. P. (2010).  Better Mood and Better Performance. Psychological Science. 21(12), 1770 - 1776.

Two recent studies suggest that caffeine is most effective in boosting your energy and alertness in small doses, and more effective for males.

A study involving 80 college students (34 men and 46 women) between the ages of 18 and 40, has found that those given a caffeinated energy drink reported feeling more stimulated and less tired than those given a decaffeinated soda or no drink. However, although reaction times were faster for those consuming caffeine than those given a placebo drink or no drink, reaction times slowed for increasing doses of caffeine, suggesting that smaller amounts of caffeine are more effective.

The three caffeine groups were given the energy drink at doses of 1.8 ml/kg, 3.6 ml/kg or 5.4 ml/kg of body weight. The computerized "go/no-go" test which measured their reaction times was given half an hour after consuming the drinks.
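
By way of illustration, a go/no-go task yields two basic measures: reaction time on "go" trials and failures to withhold a response on "no-go" trials. A minimal scoring sketch (the names and data structure are my own, not the study's software):

```python
from statistics import mean

# Score a go/no-go session. `responses` holds a reaction time in ms
# where the subject pressed, or None where they withheld a response.
def score_go_nogo(trials, responses):
    hits = [rt for t, rt in zip(trials, responses)
            if t == "go" and rt is not None]
    commissions = sum(1 for t, rt in zip(trials, responses)
                      if t == "nogo" and rt is not None)
    return {"mean_rt": mean(hits) if hits else None,
            "commission_errors": commissions}

result = score_go_nogo(["go", "nogo", "go", "nogo"],
                       [320, 280, 410, None])
```

In these terms, the study's finding is that mean_rt was faster with caffeine than without, but grew slower again as the dose increased.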

In another study, 52 children aged 12-17 drank flattened Sprite containing caffeine at four concentrations: 0, 50 mg, 100 mg or 200 mg. Changes in blood pressure and heart rate were then checked every 10 minutes for one hour, at which point they were given a questionnaire and an opportunity to eat all they wanted of certain types of junk food.

Interestingly, there were significant gender differences, with boys drinking high-caffeine Sprite showing greater increases in diastolic blood pressure (the lower number) than boys drinking the low-caffeine Sprite, but girls being unaffected. Boys were also more inclined to report consuming caffeine for energy or “the rush”, than girls were.

Those participants who ingested the most caffeine also ate more high-sugar snack foods in the laboratory, and reported higher protein and fat consumption outside the lab.

[2047] Howard, M. A., & Marczinski C. A. (2010).  Acute Effects of a Glucose Energy Drink on Behavioral Control. Experimental and Clinical Psychopharmacology. 18(6), 553 - 561.

[2074] Temple, J. L., Dewey A. M., & Briatico L. N. (2010).  Effects of Acute Caffeine Administration on Adolescents. Experimental and Clinical Psychopharmacology. 18(6), 510 - 520.

Being actively involved improves learning significantly, and new research shows that the hippocampus is at the heart of this process.

We know active learning is better than passive learning, but for the first time a study gives us some idea of how that works. Participants in the imaging study were asked to memorize an array of objects and their exact locations in a grid on a computer screen. Only one object was visible at a time. Those in the "active study” group used a computer mouse to guide the window revealing the objects, while those in the “passive study” group watched a replay of the window movements recorded in a previous trial by an active subject. They were then tested by having to place the items in their correct positions. After a trial, the active and passive subjects switched roles and repeated the task with a new array of objects.

The active learners learned the task significantly better than the passive learners. Better spatial recall correlated with higher and better coordinated activity in the hippocampus, dorsolateral prefrontal cortex, and cerebellum, while better item recognition correlated with higher activity in the inferior parietal lobe, parahippocampal cortex and hippocampus.

The critical role of the hippocampus was supported when the experiment was replicated with those who had damage to this region — for them, there was no benefit in actively controlling the viewing window.

This is something of a surprise to researchers. Although the hippocampus plays a crucial role in memory, it has been thought of as a passive participant in the learning process. This finding suggests that it is actually part of an active network that controls behavior dynamically.

Images of nature have been found to improve attention. A new study shows that natural scenes encourage different brain regions to synchronize.

A couple of years ago I reported on a finding that walking in the park, and (most surprisingly) simply looking at photos of natural scenes, could improve memory and concentration (see below). Now a new study helps explain why. The study examined brain activity while 12 male participants (average age 22) looked at images of tranquil beach scenes and non-tranquil motorway scenes. On half the presentations they concurrently listened to the same sound associated with both scenes (waves breaking on a beach and traffic moving on a motorway produce a similar sound, perceived as a constant roar).

Intriguingly, the natural, tranquil scenes produced significantly greater effective connectivity between the auditory cortex and medial prefrontal cortex, and between the auditory cortex and posterior cingulate gyrus, temporoparietal cortex and thalamus. It’s of particular interest that this is an example of visual input affecting connectivity of the auditory cortex, in the presence of identical auditory input (which was the focus of the research). But of course the take-home message for us is that the benefits of natural scenes for memory and attention have been supported.

Previous study:

Many of us who work indoors are familiar with the benefits of a walk in the fresh air, but a new study gives new insight into why, and how, it works. In two experiments, researchers found memory performance and attention spans improved by 20% after people spent an hour interacting with nature. The intriguing finding was that this effect was achieved not only by walking in the botanical gardens (versus walking along main streets of Ann Arbor), but also by looking at photos of nature (versus looking at photos of urban settings). The findings are consistent with a theory that natural environments are better at restoring attention abilities, because they provide a more coherent pattern of stimulation that requires less effort, as opposed to urban environments that provide complex and often confusing stimulation that captures attention dramatically and requires directed attention (e.g., to avoid being hit by a car).

[1867] Hunter, M. D., Eickhoff S. B., Pheasant R. J., Douglas M. J., Watts G. R., Farrow T. F. D., et al. (2010).  The state of tranquility: Subjective perception is shaped by contextual modulation of auditory connectivity. NeuroImage. 53(2), 611 - 618.

[279] Berman, M. G., Jonides J., & Kaplan S. (2008).  The cognitive benefits of interacting with nature. Psychological Science: A Journal of the American Psychological Society / APS. 19(12), 1207 - 1212.

We know language affects what we perceive, but a new study shows it can also improve our ability to perceive, even when an object should be invisible to us.

I’ve talked about the importance of labels for memory, so I was interested to see that a recent series of experiments has found that hearing the name of an object improved people’s ability to see it, even when the object was flashed onscreen in conditions and speeds (50 milliseconds) that would render it invisible. The effect was specific to language; a visual preview didn’t help.

Moreover, those who consider their mental imagery particularly vivid scored higher when given the auditory cue (although this association disappeared when the position of the object was uncertain). The researchers suggest that hearing the image labeled evokes an image of the object, strengthening its visual representation and thus making it visible. They also suggested that because words in different languages pick out different things in the environment, learning different languages might shape perception in subtle ways.

A month's training in sound discrimination reversed normal age-related cognitive decline in the auditory cortex in old rats.

A rat study demonstrates how specialized brain training can reverse many aspects of normal age-related cognitive decline in targeted areas. The month-long study involved daily hour-long sessions of intense auditory training targeted at the primary auditory cortex. The rats were rewarded for picking out the oddball note in a rapid sequence of six notes (five of them of the same pitch). The difference between the oddball note and the others became progressively smaller. After the training, aged rats showed substantial reversal of their previously degraded ability to process sound. Moreover, measures of neuron health in the auditory cortex had returned to nearly youthful levels.

Mindfulness Training had a positive effect on both working memory capacity and mood in a group of Marine reservists during the high-stress pre-deployment interval.

Mindfulness Training had a positive effect on both working memory capacity and mood in a group of Marine reservists during the high-stress pre-deployment interval. While those who weren’t given the 8-week MT program, as well as those who spent little time engaging in mindfulness exercises, showed greater negative mood and decreased working memory capacity over the eight weeks, those who recorded high practice time showed increased capacity and decreased negative mood. A civilian control group showed no change in working memory capacity over the period. The program, called Mindfulness-based Mind Fitness Training (MMFT™), blended mindfulness skills training with concrete applications for the operational environment and information and skills about stress, trauma and resilience in the body. The researchers suggest that mindfulness training may help anyone who must maintain peak performance in the face of extremely stressful circumstances.

In another demonstration of the many factors that affect exam success, three experiments have found that seeing the letter A before an exam makes a student more likely to perform better than if he sees the letter F instead.

In another demonstration of the many factors that affect exam success, three experiments involving a total of 131 college students have found that seeing the letter A before an exam makes a student more likely to perform better than if he sees the letter F instead. In the first experiment, 23 undergraduates took a word-analogies test, of which half were labeled "Test Bank ID: F" in the top right corner, and half "Test Bank ID: A". The A group got an average of 11.08 of 12 answers correct, compared to 9.42 for the F group. The same pattern was confirmed in two more studies. Moreover, performance of students whose exams were labeled "Test Bank ID: J" fell between those with the A and F test papers. While hard to believe, these findings are consistent with the many findings supporting the idea of "stereotype threat" (the tendency to do less well on a test when a person fears their performance could confirm a negative stereotype about their racial or gender group).

[154] Ciani, K. D., & Sheldon K. M. (2010).  A versus F: The effects of implicit letter priming on cognitive performance. British Journal of Educational Psychology. 80, 99 - 119.

Another study confirms the effects of meditation training on visual perception.

Another study showing the cognitive benefits of meditation has revealed benefits to perception and attention. The study involved 30 participants attending a three-month meditation retreat, during which they attended group sessions twice a day and engaged in individual practice for about six hours a day. The meditation practice involved sustained selective attention on a chosen stimulus (e.g., the participant’s breath). By midway through the retreat, meditators had become better at making fine visual distinctions, and better able to sustain attention during the half-hour test, compared to matched controls. Those who continued practicing meditation after the retreat still showed improvements in perception when they were retested about five months later.

An intriguing set of experiments has shown how you can improve vision by manipulating mindset.

An intriguing set of experiments showing how you can improve perception by manipulating mindset found significantly improved vision when:

  • an eye chart was arranged in reverse order (the letters getting progressively larger rather than smaller);
  • participants were given eye exercises and told their eyes would improve with practice;
  • participants were told athletes have better vision, and then performed either jumping jacks or skipping (the latter seen as less athletic);
  • participants flew a flight simulator, compared to pretending to fly a supposedly broken simulator (pilots are believed to have good vision).

[158] Langer, E., Djikic M., Pirson M., Madenci A., & Donohue R. (2010).  Believing Is Seeing. Psychological Science. 21(5), 661 - 666.

A study of over 3,100 older men (49-71) from across Europe has found that men with higher levels of vitamin D performed consistently better in an attention and speed of processing task. There was no difference on visual memory tasks. Although previous studies have suggested low vitamin D levels may be associated with poorer cognitive performance, findings have been inconsistent. Vitamin D is primarily synthesised from sun exposure but is also found in certain foods such as oily fish.

Older news items (pre-2010) brought over from the old website

Music training helps you hear better in noisy rooms

I’ve often talked about the benefits of musical training for cognition, but here’s a totally new benefit. A study involving 31 younger adults (19-32) with normal hearing has found that musicians (at least 10 years of music experience; music training before age 7; practicing more than 3 times weekly within previous 3 years) were significantly better than the non-musicians at hearing and repeating sentences in increasingly noisy conditions. The number of years of music practice also correlated positively with better working memory and better tone discrimination ability. Hearing speech in noisy environments is of course difficult for everyone, but particularly for older adults, who are likely to have hearing and memory loss, and for poor readers.

[960] Parbery-Clark, A., Skoe E., Lam C., & Kraus N. (2009).  Musician enhancement for speech-in-noise. Ear and Hearing. 30(6), 653 - 661.

http://www.eurekalert.org/pub_releases/2009-08/nu-tum081709.php

Meditation technique can temporarily improve visuospatial abilities

And continuing on the subject of visual short-term memory, a study involving experienced practitioners of two styles of meditation, Deity Yoga (DY) and Open Presence (OP), has found that, although meditators performed similarly to nonmeditators on two types of visuospatial tasks (mental rotation and visual memory), when they did the tasks immediately after meditating for 20 minutes (while the nonmeditators rested or did something else), practitioners of the DY style showed a dramatic improvement compared to OP practitioners and controls. In other words, although the claim that regular meditation practice can increase your short-term memory capacity was not confirmed, it does appear that some forms of meditation can temporarily (and dramatically) improve it. Since the form of meditation that had this effect was one that emphasizes visual imagery, it does support the idea that you can improve your imagery and visual memory skills (even if you do need to ‘warm up’ before the improvement is evident).

[860] Kozhevnikov, M., Louchakova O., Josipovic Z., & Motes M. A. (2009).  The enhancement of visuospatial processing efficiency through Buddhist Deity meditation. Psychological Science: A Journal of the American Psychological Society / APS. 20(5), 645 - 653.

http://www.sciencedaily.com/releases/2009/04/090427131315.htm
http://www.eurekalert.org/pub_releases/2009-04/afps-ssb042709.php

A walk in the park a day keeps mental fatigue away

Many of us who work indoors are familiar with the benefits of a walk in the fresh air, but a new study gives new insight into why, and how, it works. In two experiments, researchers found memory performance and attention spans improved by 20% after people spent an hour interacting with nature. The intriguing finding was that this effect was achieved not only by walking in the botanical gardens (versus walking along main streets of Ann Arbor), but also by looking at photos of nature (versus looking at photos of urban settings). The findings are consistent with a theory that natural environments are better at restoring attention abilities, because they provide a more coherent pattern of stimulation that requires less effort, as opposed to urban environments that provide complex and often confusing stimulation that captures attention dramatically and requires directed attention (e.g., to avoid being hit by a car).

[279] Berman, M. G., Jonides J., & Kaplan S. (2008).  The cognitive benefits of interacting with nature. Psychological Science: A Journal of the American Psychological Society / APS. 19(12), 1207 - 1212.

http://www.eurekalert.org/pub_releases/2008-12/afps-awi121808.php
http://www.physorg.com/news148663388.html

Even toddlers can ‘chunk' information for better remembering

We all know it’s easier to remember a long number (say a phone number) when it’s broken into chunks. Now a study has found that we don’t need to be taught this; it appears to come naturally to us. The study showed that 14-month-old children could track only three hidden objects at once, in the absence of any grouping cues, demonstrating the standard limit of working memory. However, with categorical or spatial cues, the children could remember more: for example, when four toys consisted of two groups of two familiar objects (cats and cars), or when six identical orange balls were grouped in three groups of two.

[196] Feigenson, L., & Halberda J. (2008).  From the Cover: Conceptual knowledge increases infants' memory capacity. Proceedings of the National Academy of Sciences. 105(29), 9926 - 9930.

http://www.eurekalert.org/pub_releases/2008-07/jhu-etg071008.php

Full text available at http://www.pnas.org/content/105/29/9926.abstract?sid=c01302b6-cd8e-4072-842c-7c6fcd40706f

Brain-training to improve working memory boosts fluid intelligence

General intelligence is often separated into "fluid" and "crystallized" components, of which fluid intelligence is considered more reflective of “pure” intelligence (for more on this, see my article at http://www.memory-key.com//memory/individual/wm-intelligence), and largely resistant to training and learning effects. However, in a new study in which participants were given a series of training exercises designed to improve their working memory, fluid intelligence was found to have significantly improved, with the amount of improvement increasing with time spent training. The small study contradicts decades of research showing that improving on one kind of cognitive task does not improve performance on other kinds, and so has been regarded with some skepticism by other researchers. More research is definitely needed, but the memory task did differ from those of previous studies, engaging executive functions such as those that inhibit irrelevant items, monitor performance, manage two tasks simultaneously, and update memory.

[1183] Jaeggi, S. M., Buschkuehl M., Jonides J., & Perrig W. J. (2008).  From the Cover: Improving fluid intelligence with training on working memory. Proceedings of the National Academy of Sciences. 105(19), 6829 - 6833.

http://www.physorg.com/news128699895.html
http://www.sciam.com/article.cfm?id=study-shows-brain-power-can-be-bolstered

Teaching older brains to regain youthful skills

Researchers have succeeded in training seniors to multitask at the same level as younger adults. Over the course of two weeks, both younger and older subjects learned to identify a letter flashed quickly in the middle of a computer screen and simultaneously localize the position of a spot flashed quickly in the periphery as well as they could perform either task on its own. The older adults did take longer than the younger adults to reach the same level of performance, but they did reach it.

[571] Richards, E., Bennett P. J., & Sekuler A. B. (2006).  Age related differences in learning with the useful field of view. Vision Research. 46(25), 4217 - 4231.

http://www.eurekalert.org/pub_releases/2006-10/mu-yct100206.php

Novelty aids learning

We’ve long suspected that the human brain is particularly attracted to new information. Research now reveals that the brain region that regulates our levels of motivation and our ability to predict rewards, by releasing dopamine in the frontal and temporal regions of the brain, responds better to novelty than to the familiar. Behavioral experiments also revealed that participants best remembered the images they had been shown when new images were mixed in with slightly familiar images during learning. It’s worth noting that this midbrain area (substantia nigra/ventral tegmentum) responded strongly only to completely new stimuli.

[1113] Bunzeck, N., & Duzel E. (2006).  Absolute Coding of Stimulus Novelty in the Human Substantia Nigra/VTA. Neuron. 51(3), 369 - 379.

http://www.eurekalert.org/pub_releases/2006-08/ucl-nal073106.php

Support for labeling as an aid to memory

A study involving an amnesia-inducing drug has shed light on how we form new memories. Participants in the study viewed words, photographs of faces and landscapes, and abstract pictures one at a time on a computer screen. Twenty minutes later, they were shown the words and images again, one at a time. Half of the items had been seen earlier, and half were new. They were then asked whether they recognized each one. For one session they were given midazolam, a drug used to relieve anxiety during surgical procedures that also causes short-term anterograde amnesia, and for one session they were given a placebo.
It was found that the participants' memory in the placebo condition was best for words and worst for abstract images. Midazolam impaired the recognition of words the most, impaired memory for the photos less, and impaired recognition of abstract pictures hardly at all. The finding reinforces the idea that the ability to recollect depends on the ability to link the stimulus to a context, and that unitization increases the chances of this linking occurring. While the words were very concrete and therefore easy to link to the experimental context, the photographs were of unknown people and unknown places and thus hard to distinctively label. The abstract images were also unfamiliar and not unitized into something that could be described with a single word.

[1216] Reder, L. M., Oates J. M., Thornton E. R., Quinlan J. J., Kaufer A., & Sauer J. (2006).  Drug-Induced Amnesia Hurts Recognition, but Only for Memories That Can Be Unitized. Psychological science : a journal of the American Psychological Society / APS. 17(7), 562 - 567.

http://www.sciencedaily.com/releases/2006/07/060719092800.htm

Language cues help visual learning in children

A study of 4-year-old children has found that language, in the form of specific kinds of sentences spoken aloud, helped them remember mirror image visual patterns. The children were shown cards bearing red and green vertical, horizontal and diagonal patterns that were mirror images of one another. When asked to choose the card that matched the one previously seen, the children tended to mistake the original card for its mirror image, showing how difficult it was for them to remember both color and location. However, if they were told, when viewing the original card, a mnemonic cue such as ‘The red part is on the left’, they performed “reliably better”.

The paper was presented by a graduate student at the 17th annual meeting of the American Psychological Society, held May 26-29 in Los Angeles.

http://www.eurekalert.org/pub_releases/2005-05/jhu-lc051705.php

Cognitive therapy for ADHD

A researcher who has previously demonstrated that working memory capacity can be increased through training has now reported that the training software produced significant improvement in children with ADHD — a disability that is associated with deficits in working memory. The study involved 53 children with ADHD, aged 7-12, who were not on medication for their disability. 44 of these met the criterion of more than 20 days of training. Half the participants were assigned to the working memory training program and the other half to a comparison program. 60% of those who underwent the working memory training program no longer met the clinical criteria for ADHD after five weeks of training. The children were tested on visual-spatial memory, which has the strongest link to inattention and ADHD. Further research is needed to show that training improves ability on a wider range of tasks.

[583] Klingberg, T., Fernell E., Olesen P. J., Johnson M., Gustafsson P., Dahlström K., et al. (2005).  Computerized Training of Working Memory in Children With ADHD-A Randomized, Controlled Trial. Journal of the American Academy of Child & Adolescent Psychiatry. 44(2), 177 - 186.

http://www.sciam.com/article.cfm?articleID=000560D5-7252-12B9-9A2C83414B7F0000&sc=I100322

Training improves working memory capacity

Working memory capacity has traditionally been thought to be constant. Recent studies, however, suggest that working memory can be improved by training. In this recent imaging study, it was found that adults who practiced working memory tasks for 5 weeks showed increased brain activity in the middle frontal gyrus and superior and inferior parietal cortices. These changes could be evidence of training-induced plasticity in the neural systems that underlie working memory.

[606] Olesen, P. J., Westerberg H., & Klingberg T. (2004).  Increased prefrontal and parietal activity after training of working memory. Nat Neurosci. 7(1), 75 - 79.

http://www.nature.com/cgi-taf/DynaPage.taf?file=/neuro/journal/v7/n1/abs/nn1165.html

Children who concentrate and switch attention better are more likely to cross streets safely

How can we help kids cross streets more safely? Improving their abilities to concentrate and switch their attention may be part of the answer. British psychologists studied these two central attentional skills in children ages four to 10 in relation to how safely they crossed the street. The results suggest that children who can concentrate and switch their attention better may cross more safely. The study used a computer game (the “Frog Game”) to gauge the “attention switching” skills of 101 children. Distractibility and impulsivity were also measured, in a representative sample of 35 children. These 35 children were then covertly videotaped crossing streets (with their parents). Attentional skills significantly correlated with pedestrian behavior, in different ways. Children who were better at switching attention on the Frog Game were more likely to look at traffic when about to cross a road. Children who were less able to concentrate in the lab when challenged by a distraction also tended to be more impulsive; children rated as more impulsive tended to cross the road in a less controlled way. The biggest improvements seemed to come between the group of four-five year olds and the group of five-six year olds, the difference between preschool and kindergarten age. Finally, concentration, but not switching, correlated with impulsivity, suggesting that these two skills (concentration and attention switching) represent distinct aspects of attention.

[385] Dunbar, G., Hill R., & Lewis V. (2001).  Children's attentional skills and road behavior. Journal of Experimental Psychology. Applied. 7(3), 227 - 234.

http://www.eurekalert.org/pub_releases/2001-09/apa-cwc091001.php

Skill-specific exercises better for people who suffer from attention problems following stroke or brain injury

Treatment programs for people who suffer from attention problems following a stroke or other traumatic brain injuries often involve abstract cognitive exercises designed to directly restore impaired attention processes. But a review of 30 studies involving a total of 359 participants shows that an alternative and lesser-used therapy that teaches patients to relearn the tasks that affect their daily lives the most may be more effective. In this specific skills approach, people with brain damage learn to perform attention skills in a way that is different from non-brain-damaged people. In one study, for example, participants whose brain injuries affected their ability to drive a car used small electric cars in the lab to practice specific driving exercises, such as steering between pylons that were moved closer and closer together. Those who practiced specific exercises showed substantial improvement on a variety of driving related tasks compared to those who drove the car, but did not practice the exercises.

[2548] Park, N. W., & Ingles J. L. (2001).  Effectiveness of attention rehabilitation after an acquired brain injury: A meta-analysis.. Neuropsychology. 15(2), 199 - 210.

http://www.eurekalert.org/pub_releases/2001-04/APA-Rlsm-0704101.php

Gestures

A new study claims to provide ‘some of the strongest evidence yet’ for the benefits of gesturing to help students learn.

The study involved 184 children aged 7-10, of whom half were shown videos of an instructor teaching math problems using only speech, while the rest were shown videos of the instructor teaching the same problems using both speech and gestures. The problem involved mathematical equivalence (i.e., 4+5+7=__+7), which is known to be critical to later algebraic learning.

Students who learned from the gesture videos performed substantially better on a test given immediately afterward than those who learned from the speech-only video (average proportion correct around 42% vs 31% — approximations because I’m eyeballing the graph), and, unlike the speech-only group, showed further improvement on a test 24 hours later (around 46% vs 30%). They also showed stronger transfer to different problem types (35% vs 22%).

http://www.futurity.org/society-culture/to-teach-kids-math-keep-hands-mo...

[3377] Cook, S. W., Duffy R. G., & Fenn K. M. (2013).  Consolidation and Transfer of Learning After Observing Hand Gesture. Child Development.

A study of deaf toddlers suggests that we can support children’s acquisition of language by providing physical links to words, through the use of gestures, facial expressions, and tone.

The relative ease with which children acquire language has produced much debate and theory, mirroring the similar quantity of debate and theory over how we evolved language. One theory of language evolution is that it began with gesture. A recent study looking at how deaf children learn sign language might perhaps be taken as partial support for this theory, and may also have wider implications for how children acquire language and how we can best support them.

The study, involving 31 deaf toddlers, looked at 89 specific signs understood and produced by the children. It was found that both younger (11-20 months) and older (21-30 months) toddlers understood and produced more signs that were iconic than signs that were less iconic. This benefit seemed to be greater for the older toddlers, supporting the idea that a certain amount of experience and/or cognitive development is needed to make the link between action and meaning.

Surprisingly, the benefits of iconicity did not seem to depend on how familiar, phonologically complex, or imageable the words were.

In contrast to spoken language, a high proportion of signs are iconic, that is, related to the concept being expressed (such as, bringing the hand to the mouth to indicate ‘eat’). Nevertheless, if iconicity is important in sign language, it is surely also important in spoken languages. This is supported by the role of gesture in speech.

The researchers suggest that iconic links between our perceptual-motor experience of the world and the form of a sign may provide an imitation-based mechanism that supports early sign acquisition, and that this might also apply to spoken language — with gestures, tone of voice, inflection, and facial expression helping make the link between words and their meanings less arbitrary.

This suggests that we can support children’s acquisition of language by providing and emphasizing such ‘scaffolding’.

Those learning a new language benefit from making suitable gestures as they repeat new vocabulary, and this can even extend to gestures arbitrarily linked to abstract adverbs.

I always like gesture studies. I think I’m probably right in saying that they started with language learning. Way back in 1980 it was shown that acting out action phrases meant they were remembered better than if the phrases had been only heard or read (the “enactment effect”). Enacted items, it turned out, “popped out” effortlessly in free recall tests — in other words, enactment had made the phrases highly accessible. Subsequent research found that this effect occurred for both older and younger adults, and in immediate and delayed recall tests — suggesting not only that such items are more accessible but that forgetting is slower.

Following these demonstrations, there have been a few studies that have specifically looked at the effect of gestures on learning foreign languages, which have confirmed the benefits of gestures. But there are various confounding factors that are hard to remove when using natural languages, which is why the present researchers have developed an artificial language (“Vimmi”) to use in their research. In their first study, as in most other studies, the words and phrases used related to actions. In a new study, the findings were extended to more abstract vocabulary.

In this study, 20 German-speakers participated in a six-day language class to study Vimmi. The training material included 32 sentences, each containing a subject, verb, adverb, and object. While the subject nouns were concrete agents (e.g., musician, director), the other words were all abstract. Here are a couple of sample sentences (translated, obviously): (The) designer frequently shapes (the) style. (The) pilot really enjoys (the) view. The length of the words was controlled: nouns all had 3 syllables; verbs and adverbs all had two.

For 16 of the sentences, participants saw the word in Vimmi and heard it. The translation of the word appeared on the screen fractionally later, while at the same time a video appeared in which a woman performed the gesture relating to the word. The audio of the word was replayed, and participants were cued to imitate the gesture as they repeated the word. For the other 16 sentences, a video with a still image of the actress appeared, and the participants were simply cued to repeat the word when the audio was replayed.

While many of the words used gestures similar to their meaning (such as a cutting gesture for the word “cut”), the researchers found that the use of any gesture made a difference as long as it was unique and connected to a specific word. For example, the abstract word “rather” does not have an obvious gesture that would go with it. However, a gesture attached to this word also worked.

Each daily session lasted three hours. From day 2, sessions began with a free recall and a cued recall test. In the free recall test, participants were asked to write as many items as possible in both German and Vimmi. Items had to be perfectly correct to be counted. From day 4, participants were also required to produce new sentences with the words they had learned.

Right from the beginning, free recall of items which had been enacted was superior to those which hadn’t been — in German. However, in Vimmi, significant benefits from enactment occurred only from day 3. The main problem here was not forgetting the items, but correctly spelling them. In the cued recall test (translating from Vimmi to German, or German to Vimmi), again, the superiority of the enactment condition only showed up from day 3.

Perhaps the most interesting result came from the written production test. Here, the number of learned sentences participants reproduced stayed the same across the three days of the test, and although enacted words were remembered at a higher rate, that rate didn’t alter and didn’t reach significance. However, the production of new sentences improved each day, and the benefits of enactment increased each day. These benefits were significant from day 5.

The main question, however, was whether the benefits of enactment depended on word category. As expected, concrete nouns were remembered better than verbs, followed by abstract nouns, and finally adverbs. When all the tests were lumped together, there was a significant benefit of enactment for all types of word. However, the situation became a little more nuanced when the data was separately analyzed.

In free recall, for Vimmi, enactment was only of significant benefit for concrete nouns and verbs. In cued recall, for translating German into Vimmi, the enactment benefit was significant for all except concrete nouns (I’m guessing concrete nouns have enough ‘natural’ power not to need gestures in this situation). For translating Vimmi into German, the benefit was only significant for verbs and abstract nouns. In new sentence production, interestingly, participants used significantly more items of all four categories if they had been enacted. This is perhaps the best evidence that enactment makes items more accessible in memory.

What all this suggests is that acting out new words helps you learn them, but some types of words may benefit more from this strategy than others. But I think we need more research before being sure about such subtleties. The pattern of results makes it clear that we really need longer training, and longer delays, to get a better picture of the most effective way to use this strategy.

For example, it may be that adverbs, although they showed the most inconsistent benefits, are potentially the category that stands to gain the most from this strategy — because they are the hardest type of word to remember. Because any embodiment of such an abstract adverb must be arbitrary — symbolic rather than representational — it naturally is going to be harder to learn (yes, some adverbs could be represented, but the ones used in this study, and the ones I am talking about, are of the “rather”, “really”, “otherwise” ilk). But if you persist in learning the association between concept and gesture, you may derive greater benefit from enactment than you would from easier words, which need less help.

Here’s a practical discussion of all this from a language teacher’s perspective.

[2688] Macedonia, M., & Knösche T. R. (2011).  Body in Mind: How Gestures Empower Foreign Language Learning. Mind, Brain, and Education. 5(4), 196 - 211.

Two recent studies in embodied cognition show that hand movements and hand position are associated with less abstract thinking.

I always like studies about embodied cognition — that is, about how what we do physically affects how we think. Here are a couple of new ones.

The first study involved two experiments. In the first, 86 American college students were asked questions about gears in relation to each other. For example, “If five gears are arranged in a line, and you move the first gear clockwise, what will the final gear do?” The participants were videotaped as they talked their way through the problem. But here’s the interesting thing: half the students wore Velcro gloves attached to a board, preventing them from moving their hands. The control half were similarly prevented from moving their feet — giving them the same experience of restriction without the limitation on hand movement.

Those who were free to gesture commonly used perceptual-motor strategies (simulating gear movements) to solve the puzzles. Those who were prevented from gesturing, as well as those who chose not to gesture, relied much more often on abstract, mathematical strategies.
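The abstract strategy here, incidentally, is simple parity reasoning: each meshed gear reverses the direction of its neighbor, so gears at odd positions turn the same way as the first. A minimal sketch (the function name is my own, purely for illustration):

```python
def final_gear_direction(n_gears, first="clockwise"):
    """Abstract (parity) strategy for the gear problem: each meshed
    gear reverses direction, so gears at odd positions turn like the
    first gear, and gears at even positions turn the opposite way."""
    opposite = {"clockwise": "counterclockwise",
                "counterclockwise": "clockwise"}
    return first if n_gears % 2 == 1 else opposite[first]

# Five gears in a line: the fifth (an odd position) matches the first.
print(final_gear_direction(5))  # clockwise
```

No mental simulation of spinning gears required — which is exactly the kind of shortcut the non-gesturing participants tended to find.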

The second experiment confirmed the results with 111 British adults.

The findings are consistent with the hypothesis that gestures highlight and structure perceptual-motor information, and thereby make such information more likely to be used in problem solving.

That can be helpful, but not always. Even when we are solving problems that have to do with motion and space, more abstract strategies may sometimes be more efficient, and thus an inability to use the body may force us to come up with better strategies.

The other study is quite different. In this study, college students searched for a single letter embedded within images of fractals and other complex geometrical patterns. Some did this while holding their hands close to the images; others kept their hands in their laps, far from the images. This may sound a little wacky, but previous research has shown that perception and attention are affected by how close our hands are to an object. Items near our hands tend to take priority.

In the first experiment, eight randomly chosen images were periodically repeated 16 times, while the other 128 images were only shown once. The target letter was a gray “T” or “L”; the images were colorful.

As expected, finding the target letter was faster the more times the image had been presented. Hand position didn’t affect learning.

In the second experiment, a new set of students was shown the same once-presented images, while 16 differently colored versions of each of the eight repeated images were created. In this condition, learning was slower when hands were held near the images. That is, people found it harder to recognize the commonalities among identical but differently colored patterns, suggesting they were too focused on the details to see the similarities.

These findings suggest that processing near the hands is biased toward item-specific detail. This is in keeping with earlier suggestions that the improvements in perception and attention near the hands are item-specific. It may indeed be that this increased perceptual focus comes at the cost of higher-order functions such as memory and learning. This would be consistent with the idea that there are two largely independent visual streams, one of which is mainly concerned with visuospatial operations, and the other of which is primarily for more cognitive operations (such as object identification).

All this may seem somewhat abstruse, but it is worryingly relevant in these days of hand-held technological devices.

The point of both these studies is not that one strategy (whether of hand movements or hand position) is wrong. What you need to take away is the realization that hand movements and hand position can affect the way you approach problems, and the things you perceive. Sometimes you want to take a more physical approach to a problem, or pick out the fine details of a scene or object — in these cases, moving your hands, or holding something in or near your hands, is a good idea. Other times you might want to take a more abstract/generalized approach — in these cases, you might want to step back and keep your body out of it.

Another study confirms the value of gestures in helping you solve spatial problems, and suggests that gesturing can help you develop better mental visualization.

In the first of three experiments, 132 students were found to gesture more often when they had difficulties solving mental rotation problems. In the second experiment, 22 students were encouraged to gesture, while 22 were given no such encouragement, and a further 22 were told to sit on their hands to prevent gesturing. Those encouraged to gesture solved more mental rotation problems.

Interestingly, the amount of gesturing decreased with experience with these spatial problems, and when the gesture group were given new spatial visualization problems in which gesturing was prohibited, their performance was still better than that of the other participants. This suggests that the spatial computation supported by gestures becomes internalized. The third experiment extended these findings to a wider range of spatial visualization problems.

The researchers suggest that hand gestures may improve spatial visualization by helping a person keep track of an object in the mind as it is rotated to a new position, and by providing additional feedback and visual cues by simulating how an object would move if the hand were holding it.

[2140] Chu, M., & Kita S. (2011).  The nature of gestures' beneficial role in spatial problem solving.. Journal of Experimental Psychology: General. 140(1), 102 - 116.

Full text of the article is available at http://www.apa.org/pubs/journals/releases/xge-140-1-102.pdf

A study involving problem-solving adds to recent research showing that gestures affect how you think and remember.

In a recent study, volunteers were asked to solve a problem known as the Tower of Hanoi, a game in which you have to move stacked disks from one peg to another. Later, they were asked to explain how they did it (very difficult to do without using your hands). The volunteers then played the game again. But for some of them, the weights of the disks had been secretly reversed, so that the smallest disk was now the heaviest and needed two hands.
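For readers unfamiliar with the puzzle: only one disk may be moved at a time, and no disk may sit on a smaller one. The standard recursive solution moves n disks in 2^n − 1 moves; a minimal sketch:

```python
def hanoi(n, source="A", target="C", spare="B", moves=None):
    """Recursively solve Tower of Hanoi: move n stacked disks from
    the source peg to the target peg, using the spare peg as a
    staging area. Returns the list of (from_peg, to_peg) moves."""
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, source, spare, target, moves)   # clear the way
        moves.append((source, target))               # move largest disk
        hanoi(n - 1, spare, target, source, moves)   # restack on top
    return moves

# Three disks take 2**3 - 1 = 7 moves.
print(len(hanoi(3)))  # 7
```

The point for the study, of course, is not the algorithm but the physical act of moving the disks, which is why the weight switch mattered.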

People who had used one hand in their gestures when talking about moving the small disk were in trouble when that disk got heavier. They took longer to complete the task than did people who used two hands in their gestures—and the more one-handed gestures they used, the longer they took.

Those who had not been asked to explain their solution (but replayed the game in the interval) were unaffected by the change in disk weights. So even though they had repeated the action with the original weights, they weren’t thrown by the unexpected change, as those who gestured with one hand were.

The findings add to the evidence that gestures make thought concrete. Related research has indicated that children can come to understand abstract concepts in mathematics and science more readily if they gesture (and perhaps if their teachers gesture).

[2043] Beilock, S. L., & Goldin-Meadow S. (2010).  Gesture Changes Thought by Grounding It in Action. Psychological Science. 21(11), 1605 - 1610.

Older news items (pre-2010) brought over from the old website

Connection between language and movement

A study of all three groups of birds with vocal learning abilities – songbirds, parrots and hummingbirds – has revealed that the brain structures for singing and learning to sing are embedded in areas controlling movement, and areas in charge of movement share many functional similarities with the brain areas for singing. This suggests that the brain pathways used for vocal learning evolved out of the brain pathways used for motor control. Human brain structures for speech also lie adjacent to, and even within, areas that control movement. The findings may explain why humans talk with our hands and voice, and could open up new approaches to understanding speech disorders in humans. They are also consistent with the hypothesis that spoken language was preceded by gestural language, or communication based on movements. Support comes from another very recent study finding that mice engineered to have a mutation to the gene FOXP2 (known to cause problems with controlling the formation of words in humans) had trouble running on a treadmill.
Relatedly, a study of young children found that 5-year-olds do better on motor tasks when they talk to themselves out loud (either spontaneously or when told to do so by an adult) than when they are silent. The study also showed that children with behavioral problems (such as ADHD) tend to talk to themselves more often than children without signs of behavior problems. The findings suggest that teachers should be more tolerant of this kind of private speech.

[436] Feenders, G., Liedvogel M., Rivas M., Zapka M., Horita H., Hara E., et al. (2008).  Molecular Mapping of Movement-Associated Areas in the Avian Brain: A Motor Theory for Vocal Learning Origin. PLoS ONE. 3(3), e1768 - e1768.

[1235] Winsler, A., Manfra L., & Diaz R. M. (2007).  "Should I let them talk?": Private speech and task performance among preschool children with and without behavior problems. Early Childhood Research Quarterly. 22(2), 215 - 231.

http://www.physorg.com/news124526627.html
http://www.sciam.com/article.cfm?id=song-learning-birds-shed
http://www.eurekalert.org/pub_releases/2008-03/gmu-pkd032808.php

Kids learn more when mother is listening

Research has already shown that children learn well when they explain things to their mother or a peer, but that could be because they’re getting feedback and help. Now a new study has asked 4- and 5-year-olds to explain their solution to a problem to their moms (with the mothers listening silently), to explain it to themselves, or simply to repeat the answer out loud. Explaining to themselves or to their moms improved the children's ability to solve similar problems, and explaining the answer to their moms helped them solve more difficult problems — presumably because explaining to mom made a difference in the quality of the child's explanations.

[416] Rittle-Johnson, B., Saylor M., & Swygert K. E. (2008).  Learning from explaining: Does it matter if mom is listening?. Journal of Experimental Child Psychology. 100(3), 215 - 224.

http://www.physorg.com/news120320713.html

Gesturing helps grade-schoolers solve math problems

Two studies of children in late third and early fourth grade, who made mistakes in solving math problems, have found that children told to move their hands when explaining how they’d solve a problem were four times as likely as kids given no instructions to manually express correct new ways to solve problems. Even though they didn’t give the right answer, their gestures revealed an implicit knowledge of mathematical ideas, and the second study showed that gesturing set them up to benefit from subsequent instruction. The findings extend previous research that body movement not only helps people to express things they may not be able to verbally articulate, but actually to think better.

[1170] Broaders, S. C., Cook S. W., Mitchell Z., & Goldin-Meadow S. (2007).  Making Children Gesture Brings Out Implicit Knowledge and Leads to Learning. Journal of Experimental Psychology: General. 136(4), 539 - 550.

http://www.eurekalert.org/pub_releases/2007-11/apa-ghg102907.php

Actors’ memory tricks help students and older adults

The ability of actors to remember large amounts of dialog verbatim is a marvel to most of us, and most of us assume they do it by painful rote memorization. But two researchers have been studying the way actors learn for many years and have concluded that the secret of actors' memories is in the acting: an actor learns lines by focusing on the character’s motives and feelings — by getting inside the character. To do this, they break a script down into a series of logically connected "beats" or intentions. The researchers call this process active experiencing, which uses "all physical, mental, and emotional channels to communicate the meaning of material to another person." This principle can be applied in other contexts. For example, students who imagined themselves explaining something to somebody else remembered more than those who tried to memorize the material by rote. Physical movement also helps — lines learned while doing something, such as walking across the stage, were remembered better than lines not accompanied with action. The principles have been found useful in improving memory in older adults: older adults who received a four-week course in acting showed significantly improved word-recall and problem-solving abilities compared to both a group that received a visual-arts course and a control group, and this improvement persisted four months afterward.

[2464] Noice, H., & Noice T. (2006).  What Studies of Actors and Acting Can Tell Us About Memory and Cognitive Functioning. Current Directions in Psychological Science. 15(1), 14 - 18.

http://www.eurekalert.org/pub_releases/2006-01/aps-bo012506.php

People remember speech better when it is accompanied by gestures

A recent study had participants watch someone narrating three cartoons. Sometimes the narrator used hand gestures and at other times they did not. The participants were then asked to recall the story. The study found that when the narrator used gestures as well as speech the participants were more likely to accurately remember what actually happened in the story rather than change it in some way.

The research was presented to the British Psychological Society Annual Conference in Bournemouth on Thursday 13 March.

Gesturing reduces cognitive load

Why is it that people cannot keep their hands still when they talk? One reason may be that gesturing actually lightens cognitive load while a person is thinking of what to say. Adults and children were asked to remember a list of letters or words while explaining how they solved a math problem. Both groups remembered significantly more items when they gestured during their math explanations than when they did not gesture.

[1300] Goldin-Meadow, S., Nusbaum H., Kelly S. D., & Wagner S. (2001).  Explaining math: gesturing lightens the load. Psychological Science: A Journal of the American Psychological Society / APS. 12(6), 516 - 522.

Age & individual differences

Older news items (pre-2010) brought over from the old website

When less attention improves behavior

An intriguing finding from a new study with confabulating patients has found that, unlike with normal individuals, or indeed other patients with damaged prefrontal lobes who don’t confabulate, memory accuracy improves when attention is reduced. It appears that lack of attention during memory retrieval is not the reason for confabulation; instead the problem might lie in over-processing irrelevant information. Training such patients to double-check the accuracy of their memories may not therefore be useful; instead they should be trained not to give too much attention to events.

[595] Ciaramelli, E., Ghetti S., & Borsotti M. (2009).  Divided attention during retrieval suppresses false recognition in confabulation. Cortex. 45(2), 141 - 153.

http://www.eurekalert.org/pub_releases/2009-01/e-wla012109.php

Children's under-achievement could be down to poor working memory

A survey of over three thousand children has found that 10% of school children across all age ranges suffer from poor working memory seriously affecting their learning. However, poor working memory is rarely identified by teachers, who often describe children with this problem as inattentive or as having lower levels of intelligence. The researchers have developed a new tool, a combination of a checklist and computer programme called the Working Memory Rating Scale, that enables teachers to identify and assess children's memory capacity in the classroom from as early as four years old. The tool has already been piloted successfully in 35 schools across the UK, and is now widely available. It has been translated into ten foreign languages.
http://www.physorg.com/news123404466.html 
http://www.eurekalert.org/pub_releases/2008-02/du-cuc022608.php

Changes in brain, not age, determine one's ability to focus on task

It’s been established that one of the reasons why older adults may do less well on cognitive tasks is that they have greater difficulty in ignoring distractions, which impairs their concentration. But not all older people are afflicted by this; some are as focused as young adults. An imaging study has now revealed a difference between the brains of people who are good at focusing and those who are poor at it. Those who have difficulty screening out distractions have less white matter in the frontal lobes, and activated neurons in the left frontal lobe as well as the right, whereas young people and high-functioning older adults tended to use only the right frontal lobe.

[1117] Colcombe, S. J., Kramer A. F., Erickson K. I., & Scalf P. (2005).  The implications of cortical recruitment and brain morphology for individual differences in inhibitory function in aging humans. Psychology and Aging. 20(3), 363 - 375.

http://www.eurekalert.org/pub_releases/2005-10/uoia-cib102605.php

Memory loss in older adults due to distractions, not inability to focus

We know that older adults often have short-term memory problems, and this has been linked to problems with attention. An imaging study now provides evidence that these short-term memory problems are associated with an inability to filter out surrounding distractions, rather than problems with focusing attention. It’s been suggested that an inability to ignore distracting information may indeed be at the heart of many of the cognitive problems that accompany aging. It should be noted that this is not an inevitable effect of age — in the study, 6 of the 16 older adults involved had no problems with short-term memory or attention.

[383] Gazzaley, A., Cooney J. W., Rissman J., & D'Esposito M. (2005).  Top-down suppression deficit underlies working memory impairment in normal aging. Nat Neurosci. 8(10), 1298 - 1300.

http://www.eurekalert.org/pub_releases/2005-09/uoc--mli090805.php

Insight into the processes of 'positive' and 'negative' learners

An intriguing study of the electrical signals emanating from the brain has revealed two types of learners. A brainwave event called an "event-related potential" (ERP) is important in learning; a particular type of ERP called "error-related negativity" (ERN), is associated with activity in the anterior cingulate cortex. This region is activated during demanding cognitive tasks, and ERNs are typically more negative after participants make incorrect responses compared to correct choices. Unexpectedly, studies of this ERN found a difference between "positive" learners, who perform better at choosing the correct response than avoiding the wrong one, and "negative" learners, who learn better to avoid incorrect responses. The negative learners showed larger ERNs, suggesting that "these individuals are more affected by, and therefore learn more from, their errors.” Positive learners had larger ERNs when faced with high-conflict win/win decisions among two good options than during lose/lose decisions among two bad options, whereas negative learners showed the opposite pattern.

[818] Frank, M. J., Woroch B. S., & Curran T. (2005).  Error-Related Negativity Predicts Reinforcement Learning and Conflict Biases. Neuron. 47(4), 495 - 501.

http://www.eurekalert.org/pub_releases/2005-08/cp-iit081205.php

Teen's ability to multi-task develops late in adolescence

A study involving adolescents between 9 and 20 years old has found that the ability to multi-task continues to develop through adolescence. The ability to use recall-guided action to remember single pieces of spatial information (such as looking at the location of a dot on a computer screen, then, after a delay, indicating where the dot had been) developed until ages 11 to 12, while the ability to remember multiple units of information in the correct sequence developed until ages 13 to 15. Tasks in which participants had to search for hidden items in a manner requiring a high level of multi-tasking and strategic thinking continued to develop until ages 16 to 17. "These findings have important implications for parents and teachers who might expect too much in the way of strategic or self-organized thinking, especially from older teenagers."

[547] Luciana, M., Conklin H. M., Hooper C. J., & Yarger R. S. (2005).  The Development of Nonverbal Working Memory and Executive Control Processes in Adolescents. Child Development. 76(3), 697 - 712.

http://www.eurekalert.org/pub_releases/2005-05/sfri-tat051205.php

Development of working memory with age

An imaging study of 20 healthy 8- to 30-year-olds has shed new light on the development of working memory. The study found that pre-adolescent children relied most heavily on the prefrontal and parietal regions of the brain during the working memory task; adolescents used those regions plus the anterior cingulate; and in adults, a third area of the brain, the medial temporal lobe, was brought in to support the functions of the other areas. Adults performed best. The results support the view that a person's ability to have voluntary control over behavior improves with age because with development, additional brain processes are used.

The findings were presented at the 2004 Annual Meeting of the Society for Neuroscience.

http://www.eurekalert.org/pub_releases/2004-10/uopm-dow102104.php

Making attention worse

Latest news

More than 10% of all babies are born preterm every year, and prematurity is a well-established risk factor for cognitive impairment at some level.

Prematurity affects working memory in particular

In a recent German study involving 1326 8-year-old children, it was found that being born preterm specifically affected the ability to solve tasks with a high cognitive load (i.e. greater demands on working memory), whereas tasks with a low load were largely unaffected.

These findings are consistent with other research suggesting that prematurity is associated in particular with difficulties in math, in complex problem-solving, and in simultaneous processing (such as occurs in recognition of spatial patterns).

There was also a clear dividing line, with deficits disproportionally higher for children born before the 34th week of pregnancy compared with children born after week 33.

Rate of cognitive impairment in premature infants

A Swedish study of 491 toddlers (2 ½ years) who had been born extremely preterm (less than 27 gestational weeks) found that 42% of them had no disability (compared with 78% of controls), 31% had mild disability, 16% had moderate disability, and 11% had severe disability. Unsurprisingly, there was an increase in moderate or severe disabilities with greater prematurity. There was no gender difference.

Cognitive impairment in premies linked to smaller brain tissue in specific regions

Why are some individuals affected by prematurity while others aren’t? An analysis of brain imaging data of 97 adolescents who had very low birth weights, and whose academic progress has been followed, found that more than half of the babies that weighed less than 1.66 pounds and more than 30% of those less than 3.31 pounds at birth later had academic deficits. Academic deficits were linked to smaller brain volumes, and in particular to reduced volume of the caudate and corpus callosum, which are involved in connectivity, executive attention and motor control.

J. Jäkel, N. Baumann, D. Wolke (2013): Effects of gestational age at birth on cognitive performance: a function of cognitive workload demands, PLOS ONE, http://dx.plos.org/10.1371/journal.pone.0065219

[3444] Serenius F, K. K. (2013).  Neurodevelopmental outcome in extremely preterm infants at 2.5 years after active perinatal care in sweden. JAMA. 309(17), 1810 - 1820.

[3442] Clark, C, A., Fang H., Espy K. A., Filipek P. A., Juranek J., Bangert B., et al. (2013).  Relation of neural structure to persistently low academic achievement: A longitudinal study of children with differing birth weights. Neuropsychology. 27(3), 364 - 377.

http://www.eurekalert.org/pub_releases/2013-05/rb-pba052713.php (1st study)

http://www.eurekalert.org/pub_releases/2013-04/tjnj-sen042513.php (2nd study)

http://www.eurekalert.org/pub_releases/2013-06/uoo-rbv061013.php (3rd study)

Three classroom experiments have found that students who meditated before a psychology lecture scored better on a quiz that followed than students who did not meditate. Mood, relaxation, and class interest were not affected by the meditation training.

The noteworthy thing is that the meditation was very basic — just six minutes of written meditation exercises.

The effect was stronger in classes where more freshmen students were enrolled, suggesting that the greatest benefit is to those students who have most difficulty in concentrating (who are more likely to drop out).

The finding suggests the value in teaching some active self-reflection strategies to freshmen, and disadvantaged ones in particular.

It’s reasonable to speculate that more extensive training might increase the benefits.

And in another recent meditation study, a two week mindfulness course significantly improved both Graduate Record Exam reading comprehension scores and working memory capacity.

The study involved 48 undergrads who either attended the mindfulness course or a nutrition class. Each 45-minute class met eight times over two weeks. Mindfulness training was associated with a 16-percentile boost in GRE scores, on average. Mind wandering also significantly decreased. The healthy nutrition course had no effect on any of these factors.

http://medicalxpress.com/news/2013-04-meditating-grades.html (first study)

[3382] Ramsburg, J. T., & Youmans R. J. (Submitted).  Meditation in the Higher-Education Classroom: Meditation Training Improves Student Knowledge Retention during Lectures. Mindfulness. 1 - 11.

http://www.scientificamerican.com/podcast/episode.cfm?id=mindfulness-may-improve-test-scores-13-03-28 (second study)

[3380] Mrazek, M. D., Franklin M. S., Phillips D. T., Baird B., & Schooler J. W. (2013).  Mindfulness Training Improves Working Memory Capacity and GRE Performance While Reducing Mind Wandering. Psychological Science.

Why do we find it so hard to stay on task for long? A recent study uses a new technique to show how the task control network and the default mode network interact (and fight each other for control).

The task control network (which includes the dorsal anterior cingulate and bilateral anterior insula) regulates attention to surroundings, controlling your concentration on tasks. The default mode network, on the other hand, becomes active when a person seems to be doing 'nothing', and becomes less active when a task is being performed.

The study shows that we work better and faster the better the default mode network is suppressed by the task control network. However, when the default mode network is not sufficiently suppressed by the task control network, it sends signals to the task control network, interfering with its performance (and we lose focus).

Interestingly, in certain conditions, such as autism, depression, and mild cognitive impairment, the default mode network remains unchanged whether the person is performing a task or interacting with the environment. Additionally, deficits in the functioning of the default mode network have been implicated in age-related cognitive decline.

The findings add a new perspective to our ideas about attention. One of the ongoing questions concerns the relative importance of the two main aspects of attention: focus, and resisting distraction. A lot of work in recent years has indicated that a large part of age-related cognitive decline is a growing difficulty in resisting distraction. Similarly, there is some evidence that people with a low working memory capacity are less able to ignore irrelevant information.

This recent finding, then, suggests that these difficulties in ignoring distracting / irrelevant stimuli reflect the failure of the task control network to adequately suppress the activity of the default mode network. This puts the emphasis back on training for focus, and may help explain why meditation practices are effective in improving concentration.

http://www.futurity.org/science-technology/why-your-seesaw-brain-cant-stay-on-task/

[3384] Wen, X., Liu Y., Yao L., & Ding M. (2013).  Top-Down Regulation of Default Mode Activity in Spatial Visual Attention. The Journal of Neuroscience. 33(15), 6444 - 6453.

A study of younger and older adults indicates that memory search tends to decline with age because, with reduced cognitive control, seniors’ minds tend to ‘flit’ too quickly from one information cluster to another.

Evidence is accumulating that age-related cognitive decline is rooted in three related factors: processing speed slows down (because of myelin degradation); the ability to inhibit distractions becomes impaired; working memory capacity is reduced.

A new study adds to this evidence by looking at one particular aspect of age-related cognitive decline: memory search.

The study put 185 adults aged 29-99 (average age 67) through three cognitive tests: a vocabulary test, digit span (a working memory test), and the animal fluency test, in which you name as many animals as you can in one minute.

Typically, in the animal fluency test, people move through semantic categories such as ‘pets’, ‘big cats’, and so on. The best performers are those who move from category to category with optimal timing — i.e., at the point where the category has been sufficiently exhausted that efforts would be better spent on a new one.

Participants recalled on average 17 animal names, with a range from 5 to 33. While there was a decline with age, it wasn’t particularly marked until the 80s (an average of 18.3 for those in their 30s, 17.5 for those in their 60s, 16.5 for the 70s, 12.8 for the 80s, and 10 for the 90s). Digit span did show a decline, but it was not significant (from 17.5 down to 15.3), while vocabulary (consistent with previous research) showed no decline with age.

But all this is by the by — the nub of the experiment was to discover how individuals were searching their memory. This required a quite complicated analysis, which I will not go into, except to mention two important distinctions. The first is between:

  • global context cue: activates each item in the active category according to how strong it is (how frequently it has been recalled in the past);
  • local context cue: activates each item in relation to its semantic similarity to the previous item recalled.

A further distinction was made between static and dynamic processes: in dynamic models, it is assumed the searcher switches between local and global search. This, it is further assumed, is because memory is ‘patchy’ – that is, information is represented in clusters. Within a cluster, we use local cues, but to move from one cluster to another, we use global cues.
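To make the dynamic model concrete, here is a deliberately simplified toy sketch — entirely my own construction, not the researchers' actual model. Recall proceeds locally within a cluster until the cluster is exhausted (or patience runs out), at which point a global cue selects a new cluster; each jump counts as a transition:

```python
import random

def dynamic_recall(clusters, patience=3, seed=0):
    """Toy dynamic memory search: recall items locally within a
    cluster, switching clusters (a 'global cue') after `patience`
    items or when the cluster runs out. Purely illustrative.
    Returns (items recalled in order, number of cluster transitions)."""
    rng = random.Random(seed)
    pools = [list(c) for c in clusters if c]
    recalled, transitions = [], 0
    idx, streak = rng.randrange(len(pools)), 0
    while pools:
        recalled.append(pools[idx].pop(0))   # local cue: next item in cluster
        streak += 1
        if not pools[idx] or streak >= patience:
            pools = [c for c in pools if c]  # drop exhausted clusters
            if not pools:
                break
            idx = rng.randrange(len(pools))  # global cue: jump to a cluster
            transitions += 1
            streak = 0
    return recalled, transitions
```

In a model like this, lowering `patience` — switching clusters before they are exhausted — produces more transitions for the same recall, which is the pattern the study associated with older, less efficient searchers.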

The point of all this was to determine whether age-related decline in memory search has to do with:

  • Reduced processing speed,
  • Persisting too long on categories, or
  • Inability to maintain focus on local cues (this would relate it back to the inhibition deficit).

By modeling the exact recall patterns, the researchers ascertained that the recall process is indeed dynamic, although the points of transition are not clearly understood. The number of transitions from one cluster to another was negatively correlated with age; it was also strongly positively correlated with performance (number of items recalled). Digit span, assumed to measure ‘cognitive control’, was also negatively correlated with number of transitions, but, as I said, was not significantly correlated with age.

In other words, it appears that there is a qualitative change with age, that increasing age is correlated with increased switching, and reduced cognitive control is behind this — although it doesn’t explain it all (perhaps because we’re still not able to fully measure cognitive control).

At a practical level, the message is that memory search may become less efficient because, as people age, they tend to change categories too frequently, before they have exhausted their full potential. While this may well be a consequence of reduced cognitive control, it seems likely (to me at least) that making a deliberate effort to fight the tendency to move on too quickly will pay dividends for older adults who want to improve their memory retrieval abilities.

Nor is this restricted to older adults — since age appears to be primarily affecting performance through its effects on cognitive control, it is likely that this applies to those with reduced working memory capacity, of any age.
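The patchy local/global search process described above can be captured in a toy simulation. This is purely illustrative; the threshold, switch cost, and cluster sizes are invented, not taken from the researchers' model. The point it demonstrates is the practical message: a searcher who abandons each cluster before exhausting it ends up recalling fewer items in the same time.

```python
def patchy_recall(switch_threshold, num_clusters=5, cluster_size=6,
                  time_budget=60.0, switch_cost=3.0):
    """Toy simulation of 'patchy' memory search (all parameters hypothetical).

    Items live in semantic clusters. The local cue's strength falls as the
    current cluster is depleted, so each successive retrieval takes longer.
    When local strength drops below `switch_threshold`, the searcher uses
    the global cue to jump to the fullest remaining cluster, paying a fixed
    time cost for the switch.
    """
    clusters = [cluster_size] * num_clusters
    current, recalled, switches, t = 0, 0, 0, 0.0
    while t < time_budget and any(clusters):
        strength = clusters[current] / cluster_size
        if strength < switch_threshold:
            best = max(range(num_clusters), key=clusters.__getitem__)
            if clusters[best] / cluster_size < switch_threshold:
                break  # no cluster anywhere is worth entering
            current = best
            switches += 1
            t += switch_cost  # global search is slower than a local step
            continue
        t += 1.0 / strength   # weaker local cue -> retrieval takes longer
        recalled += 1
        clusters[current] -= 1
    return recalled, switches

# A patient searcher (low threshold) nearly exhausts each cluster and
# recalls 25 items here; a hasty one (high threshold) abandons each
# cluster after only two retrievals and recalls just 10.
patient = patchy_recall(switch_threshold=0.2)
hasty = patchy_recall(switch_threshold=0.7)
```

The design choice worth noting is that switching is not bad in itself (both searchers must switch to reach new clusters); what hurts is switching before the local cue is spent, which mirrors the "changing categories too frequently" pattern described above.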

There’s been a certain amount of criticism of working memory training recently. Scott Barry Kaufman has put out an excellent article in Scientific American critiquing the criticism. Among his points (most of which I have previously made), he notes:

These nuanced effects suggest that personal characteristics should be taken into account when considering the effectiveness of cognitive training. …

Next, it’s important to consider that working memory training is most helpful for those who need it the most. …

[Nevertheless] The evidence suggests that the activities that show the strongest and most widespread effects on cognitive functioning are those that target the “whole person,” ...

Cook rightly notes that the effects of the large majority of working memory training programs don’t generalize well beyond the specific skills that are targeted. ...

It’s also important to keep in mind that regardless of the method, working memory improvements are transient. Repeated practice and challenge is essential to maintaining improvements in any kind of cognitive training or else they’ll very likely decline rapidly. ...

Read the full article

As many of you will know, I like nature-improves-mind stories. A new twist comes from a small Scottish study, in which participants were fitted with a mobile EEG monitor that enabled their brainwaves to be recorded as they walked for 25 minutes through one of three different settings: an urban shopping street, a path through green space, or a street in a busy commercial district. The monitors measured five ‘channels’ that are claimed to reflect “short-term excitement,” “frustration,” “engagement,” “arousal,” and “meditation level.”

Consistent with Attention restoration theory, walkers entering the green zone showed lower frustration, engagement and arousal, and higher meditation, and then showed higher engagement when moving out of it — suggesting that their time in a natural environment had ‘refreshed’ their brain.

http://richardcoyne.com/2013/03/09/the-brain-in-the-city/

[3375] Aspinall, P., Mavros P., Coyne R., & Roe J. (2013). The urban brain: analysing outdoor physical activity with mobile EEG. British Journal of Sports Medicine.

 

A humanoid robot has been designed, and shows promise, for teaching joint attention to children with ASD. Robots are particularly appealing to children, and even more so to those with ASD.

http://www.futurity.org/health-medicine/interactive-robot-trains-kids-with-autism/

[3351] Bekele, E. T., Lahiri U., Swanson A. R., Crittendon J. A., Warren Z. E., & Sarkar N. (2013).  A Step Towards Developing Adaptive Robot-Mediated Intervention Architecture (ARIA) for Children With Autism. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 21(2), 289 - 299.

Assuredly one of my biggest problems (and I'm sure I'm not alone!) is the ever-present difficulty of not getting sidetracked. So much to do! So much coming at us all the time! So I was intrigued to see this new book come out, promising to help us stop getting sidetracked — a new take on the information overload/attention deficit problem.

Harvard Business School’s Francesca Gino discusses her book at Scientific American:

In Sidetracked, I explain how even simple and seemingly irrelevant factors have profound consequences on our decisions and behavior, diverting us from our original plans. Most of us care a good deal about being consistent—we care about following through on our goals and wishes. And we also aim to behave in ways that are consistent with our self-image as capable, competent, and honest individuals. But often, without our knowledge, subtle influences—often unexpected—steer us away from what we initially planned or wanted. As a result, our decisions fail to align with our best intentions.

I wrote Sidetracked to discuss the main set of forces that prevent us from following through on our plans, and to identify a set of principles we can apply to stay on track going forward.

Scientific American article

A comparison of traditional African villagers and those who have moved to town indicates that urban living improves working memory capacity even as it makes us more vulnerable to distraction.

Another study looking into the urban-nature effect takes a different tack from those I’ve previously reported on, which looked at the attention-refreshing benefits of natural environments.

In this study, members of a rural African people (the Himba) still living in traditional villages were compared with those who had moved to town. Participants in the first experiment included 35 adult traditional Himba, 38 adolescent traditional Himba (mean age 12), 56 adult urbanized Himba, and 37 adolescent urbanized Himba. All traditional Himba had had little contact with the Western world and only spoke their native language; all adult urbanized Himba had grown up in traditional villages and only moved to town later in life (average length of time in town was 6 years); all adolescent urbanized Himba had grown up in the town and usually attended school regularly.

The first experiment assessed the ability to ignore peripheral distracting arrows while judging the direction (left or right) of a central arrow.

There was a significant effect of urbanization, with attention being more focused (less distracted) among the traditional Himba. Traditional Himba were also slower than urbanized Himba — but note that there was substantial overlap in response times between the two groups. There was no significant effect of age (that is, adolescents were faster than adults in their responses, but the effect of the distracters was the same across age groups), nor a significant interaction between age and urbanization.

The really noteworthy part of this was that the urbanization effect on task performance was the same for the adults who had moved to town only a few years earlier as for the adolescents who had grown up and been educated in the town. In other words, this does not appear to be an educational effect.
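For the curious, the standard measure of "focus" in flanker tasks like this is the interference cost: how much slower responses are when the flanking arrows point the opposite way from the central one. A minimal sketch with made-up reaction times (the study's actual data and analyses are more involved):

```python
from statistics import mean

def flanker_interference(trials):
    """Interference cost: mean RT on incongruent trials (flankers point the
    other way) minus mean RT on congruent trials, in milliseconds.
    A larger cost means more distraction by the irrelevant flankers.
    Each trial is a (condition, reaction_time_ms) pair; data are invented.
    """
    congruent = [rt for cond, rt in trials if cond == "congruent"]
    incongruent = [rt for cond, rt in trials if cond == "incongruent"]
    return mean(incongruent) - mean(congruent)

# Hypothetical illustration of the pattern reported: the 'urbanized'
# profile is faster overall but pays a larger interference cost (is more
# de-focused) than the slower but more focused 'traditional' profile.
traditional = [("congruent", 520), ("congruent", 540),
               ("incongruent", 545), ("incongruent", 555)]
urbanized = [("congruent", 430), ("congruent", 450),
             ("incongruent", 505), ("incongruent", 515)]
```

This separation of overall speed from interference cost is why the traditional Himba can be both slower and less distracted at the same time.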

The second experiment looked at whether traditional Himba would perform more like urbanized Himba if there were other demands on working memory. This was done by requiring them to remember three numbers (the number words in participants’ language are around twice as long as the same numbers in English, hence their digit span is shorter).

While traditional Himba were again more focused than the urbanized in the no-load condition, when there was this extra load on working memory, there was no significant difference between the two groups. Indeed, attention was de-focused in the traditional Himba under high load to the same degree as it was for urbanized Himba under no-load conditions. Note that increasing the cognitive load made no difference for the urbanized group.

There was also a significant (though not dramatic) difference between the traditional and urbanized Himba in terms of performance on the working memory task, with traditional Himba remembering an average of 2.46/3 digits and urbanized Himba 2.64.

Experiment 3 tested the two groups on a working memory task, a standard digit span test (although, of course, in their native language). Random sequences of 2-5 digits were read out, with the participant being required to say them aloud immediately after. Once again, the urbanized Himba performed better than the traditional Himba (4.32 vs 3.05).

In other words, the problem does not seem to be that urbanization depletes working memory, rather, that urbanization encourages disengagement (i.e., we have the capacity, we just don’t use it).

In the fourth experiment, this idea was tested more directly. Rather than the arrows used in the earlier experiments, black and white faces were used, with participants required to determine the color of the central face. Additionally, inverted faces were sometimes used (faces are stimuli we pay a lot of attention to, but inverting them reduces their ‘faceness’, thus making them less interesting).

An additional group of Londoners was also included in this experiment.

While urbanized Himba and Londoners were, again, more de-focused than traditional Himba when the faces were inverted, for the ‘normal’ faces, all three groups were equally focused.

Note that the traditional Himba were not affected by the changes in the faces, being equally focused regardless of the stimulus. It was the urbanized groups that became more alert when the stimuli became more interesting.

Because it may have been a race-discrimination mechanism coming into play, the final experiment returned to the direction judgment, with faces either facing left or right. This time the usual results occurred – the urbanized groups were more de-focused than the traditional group.

In other words, just having faces was not enough; it was indeed the racial discrimination that engaged the urbanized participants (note that both these urban groups come from societies where racial judgments are very salient – multicultural London, and post-apartheid Namibia).

All of this indicates that the attention difficulties that appear so common nowadays are less because our complex environments are ‘sapping’ our attentional capacities, and more because we are in a different attentional ‘mode’. It makes sense that in environments that contain so many more competing stimuli, we should employ a different pattern of engagement, keeping a wider, more spread, awareness on the environment, and only truly focusing when something triggers our interest.

[3273] Linnell, K. J., Caparos S., de Fockert J. W., & Davidoff J. (2013). Urbanization Decreases Attentional Engagement. Journal of Experimental Psychology: Human Perception and Performance.

A smallish study suggests that the cognitive effects of menopause are greatest in the first year after menopause.

Being a woman of a certain age, I generally take notice of research into the effects of menopause on cognition. A new study adds weight, perhaps, to the idea that cognitive complaints in perimenopause and menopause are not directly a consequence of hormonal changes, and, more particularly, shows that early postmenopause may be the most problematic time.

The study followed 117 women from four stages of life: late reproductive, early and late menopausal transition, and early postmenopause. The late reproductive period is defined as when women first begin to notice subtle changes in their menstrual periods, but still have regular menstrual cycles. Women in the transitional stage (which can last for several years) experience fluctuation in menstrual cycles, and hormone levels begin to fluctuate significantly.

Women in the early stage of post menopause (first year after menopause), as a group, were found to perform more poorly on measures of verbal learning, verbal memory, and fine motor skill than women in the late reproductive and late transition stages. They also performed significantly worse than women in the late menopausal transition stage on attention/working memory tasks.

Surprisingly, self-reported symptoms such as sleep difficulties, depression, and anxiety did not predict memory problems. Neither were the problems correlated with hormone levels (although fluctuations could be a factor).

This seemingly contradicts earlier findings from the same researchers, who in a slightly smaller study found that those experiencing poorer working memory and attention were more likely to have poorer sleep, depression, and anxiety. That study, however, only involved women approaching and in menopause. Moreover, these aspects were not included in the abstract of the paper but only in the press release, and because I don’t have access to this particular journal, I cannot say whether there is something in the data that explains this. Because of this, I am not inclined to put too much weight on this point.

But we may perhaps take the findings as support for the view that cognitive problems experienced earlier in the menopause cycle are, when they occur, not a direct result of hormonal changes.

The important result of this study is the finding that the cognitive problems often experienced by women in their 40s and 50s are most acute during the early period of post menopause, and the indication that the causes and manifestations are different at different stages of menopause.

It should be noted, however, that there were only 14 women in the early postmenopause stage. So, we shouldn’t put too much weight on any of this. Nevertheless, it does add to the picture research is building up about the effects of menopause on women’s cognition.

While the researchers said that this effect is probably temporary — which was picked up as the headline in most media — this was not in fact investigated in this study. It would be nice to have some comparison with those, say, two or three and five years post menopause (but quite possibly this will be reported in a later paper).

[3237] Weber, M. T., Rubin L. H., & Maki P. M. (2013).  Cognition in perimenopause. Menopause: The Journal of The North American Menopause Society.

A new study points to pre-treatment factors behind the cognitive decline seen after chemotherapy, and suggests that anxiety may be the main driver.

The issue of ‘chemo-brain’ — cognitive impairment following chemotherapy — has been a controversial one. While it is now (I hope) accepted by most that it is, indeed, a real issue, there is still an ongoing debate over whether the main cause is really the chemotherapy. A new study adds to the debate.

The study involved 28 women who received adjuvant chemotherapy for breast cancer, 37 who received radiotherapy, and 32 age-matched healthy controls. Brain scans while doing a verbal working memory task were taken before treatment and one month after treatment.

Women who underwent chemotherapy performed less accurately on the working memory task both before treatment and one month after treatment. They also reported a significantly higher level of fatigue. Greater fatigue correlated with poorer test performance and more cognitive problems, across both patient groups and at both times (although the correlation was stronger after treatment).

Both patient groups showed reduced function in the left inferior frontal gyrus, before therapy, but those awaiting chemotherapy showed greater impairment than those in the radiotherapy group. Pre-treatment difficulty in recruiting this brain region in high demand situations was associated with greater fatigue after treatment.

In other words, reduced working memory function before treatment began predicted how tired people felt after treatment, and how much their cognitive performance suffered. All of which suggests it is not the treatment itself that is the main problem.

But the fact that reduced working memory function precedes the fatigue indicates it’s not the fatigue that’s the main problem either. The researchers suggest that the main driver is level of worry — worry interfered with the task, and level of worry was related to fatigue. And worry, as we know, can reduce working memory capacity (because it uses up part of it).

All of which is to say that support for cancer patients aimed at combating stress and anxiety might do more for ‘chemo-brain’ than anything else. In this context, I note also that there have been suggestions that sleep problems have also been linked to chemo-brain — a not unrelated issue!

Cimprich, B. et al. 2012. Neurocognitive impact in adjuvant chemotherapy for breast cancer linked to fatigue: A Prospective functional MRI study. Presented at the 2012 CTRC-AACR San Antonio Breast Cancer Symposium, Dec. 4-8

Impairment in executive function is apparently far more common in those with MCI than previously thought, with the most common and severe impairment occurring in inhibitory control.

Providing some support for the finding I recently reported — that problems with semantic knowledge in those with mild cognitive impairment (MCI) and Alzheimer’s might be rooted in an inability to inhibit immediate perceptual information in favor of conceptual information — a small study has found that executive function (and inhibitory control in particular) is impaired in far more of those with MCI than was previously thought.

The study involved 40 patients with amnestic MCI (single or multiple domain) and 32 healthy older adults. Executive function was tested across multiple sub-domains: divided attention, working memory, inhibitory control, verbal fluency, and planning.

As a group, those with MCI performed significantly more poorly in all 5 sub-domains. All MCI patients showed significant impairment in at least one sub-domain of executive functioning, with almost half performing poorly on all of the tests. The sub-domain most frequently and severely impaired was inhibitory control.

The finding is in sharp contrast with standard screening tests and clinical interviews, which have estimated executive function impairment in only 15% of those with MCI.

Executive function is crucial for many aspects of our behavior, from planning and organization to self-control to (as we saw in the previous news report) basic knowledge. It is increasingly believed that inhibitory control might be a principal cause of age-related cognitive decline, through its effect on working memory.

All this adds weight to the idea that we should be focusing our attention on ways to improve inhibitory control when it declines. Although training to improve working memory capacity has not been very successful, specific training targeted at inhibitory control might have more luck. Something to hope for!

One reason for the association between poverty and poorer cognition in children may lie in how poverty affects attention, with poor children tending to use more cognitive resources in monitoring the environment.

There have been a number of studies in the past few years showing how poverty affects brain development and function. One of these showed specifically that children of high and low socioeconomic status showed differences in brain wave patterns associated with an auditory selective attention task. This was thought to indicate that the groups were using different mechanisms to carry out the task, with the lower SES children employing extra resources to attend to irrelevant information.

In a follow-up study, 28 young adolescents (12-14 years) from two schools in neighborhoods of different socioeconomic status answered questions about their emotional and motivational state at various points during the day, and provided saliva samples to enable monitoring of cortisol levels. At one point in the afternoon, they also had their brainwaves monitored while they carried out an auditory selective attention task (hearing different sounds played simultaneously into both ears, they were required to press a button as fast as possible when they heard one particular sound).

While performance on the task was the same for both groups, there were, once again, differences in the brain wave patterns. Higher SES children exhibited far larger theta waves in the frontal lobes in response to sounds they attended to than to those they should have ignored, while lower SES children showed much larger theta waves to the unattended sounds than to the attended sounds.

While the lower SES children had higher cortisol levels throughout the school day, like the higher SES children, they showed little change around the task, suggesting neither group was particularly stressed by the task. Both groups also showed similar levels of boredom and motivation.

What the findings suggest is that lower SES children have to exert more cognitive control to avoid attending to irrelevant stimuli than higher SES children — perhaps because they live in more threatening environments.

Persistent marijuana use beginning before age 18 (but not after) is associated with a significant drop in IQ in a large, long-running study.

A large long-running New Zealand study has found that people who started using cannabis in adolescence and continued to use it for years afterward showed a significant decline in IQ from age 13 to 38. This was true even in those who hadn’t smoked marijuana for some years.

The study has followed a group of 1,037 children born in 1972-73. At age 38, 96% of the 1004 living study members participated in the latest assessment. Around 5% were regularly smoking marijuana more than once a week before age 18 (cannabis use was ascertained in interviews at ages 18, 21, 26, 32, and 38 years, and this group was not more or less likely to have dropped out of the study).

This group showed an average decline in IQ of 8 points on cognitive tests at age 38 compared to scores at age 13. Such a decline was not found in those who began using cannabis after the age of 18. In comparison, those who had never used cannabis showed a slight increase in IQ. The effect was dose-dependent, with those diagnosed as cannabis dependent on three or more occasions showing the greatest decline.

While executive function and processing speed appeared to be the most seriously affected areas, impairment was seen across most cognitive domains and did not appear to be statistically significantly different across them.

The size of the effect is shown by a further measure: informants (nominated by participants as knowing them well) also reported significantly more attention and memory problems among those with persistent cannabis dependence. (Note that a decline of 8 IQ points in a group whose mean is 100 brings it down to 92.)
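To put an 8-point decline in perspective, assuming the conventional IQ scaling of mean 100 and SD 15 (the study's exact norming isn't given here), it corresponds to a drop from the 50th to roughly the 30th percentile:

```python
from statistics import NormalDist

# Conventional IQ scaling: mean 100, SD 15 (an assumption here,
# not a figure taken from the study itself).
iq = NormalDist(mu=100, sigma=15)

before = iq.cdf(100) * 100  # 50th percentile
after = iq.cdf(92) * 100    # about the 30th percentile
```

In other words, someone who started out squarely average would end up outperformed by roughly seven in ten of their peers.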

The researchers ruled out recent cannabis use, persistent dependence on other drugs (tobacco, alcohol, hard drugs), and schizophrenia, as alternative explanations for the effect. The effect also remained after years of education were taken into account.

The finding supports the view that the adolescent brain is vulnerable to the effects of marijuana, and that these effects are long-lasting and significant.

Some numbers for those interested: Of the 874 participants included in the analysis (those who had missed at least 3 interviews in the 25 years were excluded), 242 (28%) never used cannabis, 479 (55%) used it but were never diagnosed as cannabis-dependent, and 153 (17%) were diagnosed on at least one of the interviews as cannabis-dependent. Of these, 80 had been so diagnosed on only one occasion, 35 on two occasions, and 38 on three or more occasions. I note that the proportion of males was significantly higher in the cannabis-dependent groups (39% in never used; 49% in used but never diagnosed; 70%, 63%, 82% respectively for the cannabis-dependent).

A smallish study of women approaching and in menopause found that some experienced poorer working memory and attention, and these were more likely to have poorer sleep, depression, and anxiety.

A study involving 75 perimenopausal women aged 40 to 60 has found that those with memory complaints tended to show impairments in working memory and attention. Complaints were not, however, associated with verbal learning or memory.

Complaints were also associated with depression, anxiety, somatic complaints, and sleep disturbance. But they weren’t linked to hormone levels (although estrogen is an important hormone for learning and memory).

What this suggests to me is that a primary cause of these cognitive impairments may be poor sleep, and anxiety/depression. A few years ago, I reported on a study that found that, although women’s reports of how many hot flashes they had didn’t correlate with memory impairment, an objective measure of the number of flashes they experienced during sleep did. Sleep, as I know from personal experience, is of sufficient importance that my rule-of-thumb is: don’t bother looking for any other causes of attention and memory deficits until you have sorted out your sleep!

Having said that, depressive symptoms showed greater relationship to memory complaints than sleep disturbance.

It’s no big surprise to hear that it is working memory in particular that is affected, because what many women at this time of life complain of is ‘brain fog’ — the feeling that your brain is full of cotton-wool. This doesn’t mean that you can’t learn new information, or remember old information. But it does mean that these tasks will be impeded to the extent that you need to hold on to too many bits of information. So mental arithmetic might be more difficult, or understanding complex sentences, or coping with unexpected disruptions to your routine, or concentrating on a task for a long time.

These sorts of problems are typical of those produced by on-going sleep deprivation, stress, and depression.

One caveat to the findings is that the study participants tended to be of above-average intelligence and education. This would protect them to a certain extent from cognitive decline — those with less cognitive reserve might display wider impairment. Other studies have found verbal memory, and processing speed, impaired during menopause.

Note, too, that a long-running, large population study has found no evidence for a decline in working memory, or processing speed, in women as they pass through perimenopause and menopause.

A large, long-running study suggests both that children with attention difficulties tend to spend more time playing video games, and that extensive video game playing is bad for attention.

A three-year study involving 3,034 Singaporean children and adolescents (aged 8-17) has found that those who spent more time playing video games subsequently had more attention problems, even when earlier attention problems, sex, age, race, and socioeconomic status were statistically controlled. Those who were more impulsive or had more attention problems subsequently spent more time playing video games, even when initial video game playing was statistically controlled. These findings suggest that the cause-effect relationship between video game playing and attention problems/impulsiveness goes both ways.

While the particular content may have an effect on attention problems and impulsiveness (violent games appeared to be an additional, independent, factor in attention problems), it was the total time spent that was more important.

Participants completed questionnaires about their video game playing habits annually for three years running. They also completed questionnaires aimed at measuring attention and impulsiveness (the Current ADHD Symptoms Scale Self-Report and the Barratt Impulsiveness Scale-11, respectively). Regarding attention, the children answered questions such as how often they "fail to give close attention to details or make careless mistakes" in their work or "blurt out answers before questions have been completed." For the impulsivity test, they selected statements they felt described themselves, such as "I often make things worse because I act without thinking" or "I concentrate easily."

How does this finding relate to other evidence showing that playing video games can improve visual attention for rapid and accurate recognition of information from the environment? The answer lies in the different nature of attention — the attention needed for visual search differs in important ways from the attention necessary for sustained concentration in contexts that are often effortful and/or boring.

The example of many attention-challenged individuals makes this more understandable. Many parents of children with ADHD find that the only thing their child can concentrate on for a lengthy period is video games. The answer to that riddle is the rapidly changing nature of video games, and the way they are designed to grab the attention, with flashing lights and loud noises and moving images etc. The young person is not, therefore, improving their ability to focus in a way that is helpful for the school environment, or indeed for everyday life.

Unfortunately, this study suggests that it is precisely those people who are most in need of such ‘external supports’ for attention (‘grabbing’ stimuli such as lights and sounds and movement) — that is, those individuals who are least able to control their own attention — who are most likely to spend a lot of time playing such games. The games then weaken their attentional control even more, and so the cycle continues.

So this research answers the question ADHD parents tend to have: should I encourage my child to play video games a lot (given that it’s the only thing that holds their attention) or not? The answer, unfortunately, would seem to be: not. However, all is not lost. There are computer ‘games’ that are designed to help those with ADHD learn to concentrate in a way that is more useful (see the Topic collection on ADHD for more on this).

The American Academy of Pediatrics recommends one hour per day of total media screen time (including TV, DVDs, video games, Internet, iPad, etc.) for children in elementary school, and two hours for children in secondary school.

Gentile, D.A., Swing, E.L., Lim, C.G. & Khoo, A. 2012. Video game playing, attention problems, and impulsiveness: Evidence of bidirectional causality. Psychology of Popular Media Culture, Vol 1(1), Jan 2012, 62-70. doi: 10.1037/a0026969

Full text available at http://www.apa.org/pubs/journals/releases/ppm-1-1-62.pdf

Another small study indicates that the plant extract Pycnogenol may improve working memory.

Back in 2008, I reported on a small study that found that daily doses of Pycnogenol® for three months improved working memory in older adults, and noted research indicating that the extract from the bark of the French maritime pine tree had reduced symptoms in children with ADHD. Now another study, involving 53 Italian university students, has found that cognitive performance improved in those taking 100 mg of Pycnogenol every day for eight weeks.

Students taking the supplement had higher scores on university exams than the control group, and they were apparently happier, less anxious, and more alert. It seems plausible that the improvement in academic performance results from working memory benefits.

The plant extract is an antioxidant, and benefits may have something to do with improved vascular function and blood flow in the brain.

However, the control group was apparently not given a placebo (I’m relying on the abstract and press release here, as this journal is not one to which I have access), they were simply “a group of equivalent students”. I cannot fathom why a double-blind, placebo procedure wasn’t followed, and it greatly lessens the conclusions of this study. Indeed, I wouldn’t ordinarily report on it, except that I have previously reported on this dietary supplement, and I am in hopes that a better study will come along. In the meantime, this is another small step, to which I wouldn’t give undue weight.

Luzzi R., Belcaro G., Zulli C., Cesarone M. R., Cornelli U., Dugall M., Hosoi M., Feragalli B. 2011. Pycnogenol® supplementation improves cognitive function, attention and mental performance in students. Panminerva Medica, 53(3 Suppl 1), 75-82.

A large study shows the impact of having multiple family disadvantages on cognitive development. A brain scan study finds childhood maltreatment significantly reduces the size of the hippocampus, while another finds parental care can increase it.

Quarter of British children performing poorly due to family disadvantage

A British study involving over 18,000 very young children (aged 9 months to 5 years) has found that those exposed to two or more “disadvantages” (28% of the children) were significantly more likely to have impaired intellectual development, expressed in a significantly reduced vocabulary and behavioral problems.

These differences were significant at age three, and for the most part tended to widen between ages three and five (cognitive development, hyperactivity, peer problems and prosocial behaviors; the gap didn’t change for emotional problems, and narrowed for conduct problems). However, only the narrowing of the conduct problem gap and the widening of the peer problem gap were statistically significant.

Ten disadvantages were identified: living in overcrowded housing; having a teenage mother; parental depression; parental physical disability; parent with low basic skills; maternal smoking during pregnancy; excessive alcohol intake; financial stress; unemployment; and domestic violence.

Around 41% of the children did not face any of these disadvantages, and 30% faced only one. Of those facing two or more, half (14% of the total) had only two, 7% of the total group experienced three risk factors, and fewer than 2% had five or more.

There was no dominant combination of risks, but parental depression was the most common factor (19%), followed by parental disability (15%). Violence was present in only 4% of families, and both parents unemployed in only 5.5%. While there was some correlation between various risk factors, these correlations were relatively modest for the most part. The highest correlations were between unemployment and disability; violence and depression; unemployment and overcrowding.

There were ethnic differences in rate: at 48%, Bangladeshi children were most likely to be exposed to multiple disadvantages, followed by Pakistani families (34%), other (including mixed) (33%), black African (31%), black Caribbean (29%), white (28%) and Indian (20%).

There were also differences depending on family income. Among those in the lowest income band (below £10,400 pa) — into which 21% of the families fell, the same proportion as is found nationally — nearly half had at least two risk factors, compared to 27% of those in families above this threshold. Moreover, children in families with multiple risk factors plus low income showed the lowest cognitive development (as measured by vocabulary).

Childhood maltreatment reduces size of hippocampus

In this context, it is interesting to note a recent finding that three key areas of the hippocampus were significantly smaller in adults who had experienced maltreatment in childhood. In this study, brain scans were taken of nearly 200 young adults (18-25), of whom 46% reported no childhood adversity and 16% reported three or more forms of maltreatment. Maltreatment was most commonly physical and verbal abuse from parents, but also included corporal punishment, sexual abuse and witnessing domestic violence.

Reduced volume in specific hippocampal subfields (CA3, the dentate gyrus, and the subiculum) was still evident after such confounding factors as a history of depression or PTSD were taken into account. The findings support the theory that early stress affects the development of subregions in the hippocampus.

While mother’s nurturing grows the hippocampus

Supporting this, another study, involving 92 children aged 7 to 10 who had participated in an earlier study of preschool depression, has found that those children who received a lot of nurturing from their parent (generally mother) developed a larger hippocampus than those who didn’t.

‘Nurturing’ was assessed in a videotaped interaction at the time of the preschool study. In this interaction, the parent performed a task while the child waited for her to finish so they could open an attractive gift. How the parent dealt with this common scenario — the degree to which they helped the child through the stress — was evaluated by independent raters.

Brain scans revealed that children who had been nurtured had a significantly larger hippocampus than those whose mothers were not as nurturing, and (this was the surprising bit) the effect was greater among the healthy, non-depressed children. Among this group, those with a nurturing parent had hippocampi that were on average almost 10% larger than those whose parent had not been as nurturing.

First study:
Sabates, R., & Dex, S. (2012). Multiple risk factors in young children’s development. CLS Cohort Studies Working paper 2012/1.
Full text available at http://www.cls.ioe.ac.uk/news.aspx?itemid=1661&itemTitle=More+than+one+i...

Second study:
[2741] Teicher, M. H., Anderson C. M., & Polcari A. (2012).  Childhood maltreatment is associated with reduced volume in the hippocampal subfields CA3, dentate gyrus, and subiculum. Proceedings of the National Academy of Sciences.
Full text available at http://www.pnas.org/content/early/2012/02/07/1115396109.abstract?sid=f73...

Third study:
[2734] Luby, J. L., Barch D. M., Belden A., Gaffrey M. S., Tillman R., Babb C., et al. (2012).  Maternal support in early childhood predicts larger hippocampal volumes at school age. Proceedings of the National Academy of Sciences.

Stress in the lives of young children from low-income homes negatively affects their executive function and IQ, and these associations are mediated through parenting behavior and household risk.

The study involved 1,292 children followed from birth, whose cortisol levels were assessed at 7, 15, and 24 months. Three tests related to executive functions were given at age 3. Measures of parenting quality (maternal sensitivity, detachment, intrusiveness, positive regard, negative regard, and animation, during interaction with the child) and household environment (household crowding, safety and noise levels) were assessed during the home visits.

Earlier studies have indicated that a poor environment in and of itself is stressful to children, and is associated with increased cortisol levels. Interestingly, in one Mexican study, preschool children in poor homes participating in a conditional cash transfer scheme showed reduced cortisol levels.

This study found that children in lower-income homes received less positive parenting and had higher levels of cortisol in their first two years than children in slightly better-off homes. Higher levels of cortisol were associated with lower levels of executive function abilities, and to a lesser extent IQ, at 3 years.

African American children were more affected than White children on every measure. Cortisol levels were significantly higher; executive function and IQ significantly lower; ratings of positive parenting significantly lower and ratings of negative parenting significantly higher. Maternal education was significantly lower, poverty greater, homes more crowded and less safe.

The model derived from this data shows executive function negatively predicted by cortisol, while the effect on IQ is marginal. However, both executive function and IQ are predicted by negative parenting, positive parenting, and household risk (although this last variable has a greater effect on IQ than executive function). Neither executive function nor IQ was directly predicted by maternal education, ethnicity, or poverty level. Cortisol level was inversely related to positive parenting, but was not directly related to negative parenting or household risk.

Indirectly (according to this best-fit model), poverty was related to executive function through negative parenting; maternal education was related to executive function through negative parenting and to a lesser extent positive parenting; both poverty and maternal education were related to IQ through positive parenting, negative parenting, and household risk; African American ethnicity was related to executive function through negative parenting and positive parenting, and to IQ through negative parenting, positive parenting, and household risk. Cortisol levels were higher in African American children and this was unrelated to poverty level or maternal education.
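The mediation structure described above — poverty affecting executive function only indirectly, through parenting behavior — can be illustrated with a toy calculation of an indirect effect as the product of two path coefficients. This is a minimal sketch with simulated data and made-up coefficients, not the study’s actual structural equation model:

```python
import numpy as np

# Toy mediation sketch: poverty -> negative parenting -> executive function (EF).
# All data and coefficients here are simulated for illustration only.
rng = np.random.default_rng(0)
n = 1000
poverty = rng.normal(size=n)
neg_parenting = 0.5 * poverty + rng.normal(size=n)   # path a (assumed slope 0.5)
ef = -0.4 * neg_parenting + rng.normal(size=n)       # path b (assumed slope -0.4)

# Estimate each path coefficient by simple least-squares regression:
a = np.polyfit(poverty, neg_parenting, 1)[0]
b = np.polyfit(neg_parenting, ef, 1)[0]

# The indirect effect of poverty on EF, transmitted through parenting,
# is the product of the two paths (here roughly 0.5 * -0.4 = -0.2):
indirect_effect = a * b
```

A full analysis like the study’s would fit all paths simultaneously (positive parenting, household risk, cortisol, and so on) in a structural equation model, but the product-of-paths logic is the same.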

Executive function (which includes working memory, inhibitory control, and attention shifting) is vital for self-regulation and central to early academic achievement. A link between cortisol level and executive function has previously been shown in preschool children, as well as adults. The association partly reflects the fact that stress hormone levels affect synaptic plasticity in the prefrontal cortex, where executive functions are carried out. This is not to say that this is the only brain region so affected, but it is an especially sensitive one. Chronic levels of stress alter the stress response systems in ways that impair flexible regulation.

What is important about this study is this association between stress level and cognitive ability at an early age, that the effect of parenting on cortisol is associated with positive aspects rather than negative ones, and that the association between poverty and cognitive ability is mediated by both cortisol and parenting behavior — both positive and negative aspects.

A final word should be made on the subject of the higher cortisol levels in African Americans. Because of the lack of high-income African Americans in the sample (a reflection of the participating communities), it wasn’t possible to directly test whether the effect is accounted for by poverty. So this remains a possibility. It is also possible that there is some genetic difference. But it also might reflect other sources of stress, such as that relating to prejudice and stereotype threat.

Based on mother’s ethnic status, 58% of the families were Caucasian and 42% African American. Two-thirds of the participants had an income-to-need ratio (estimated total household income divided by the 2005 federal poverty threshold adjusted for number of household members) less than 200% of poverty. Just over half of the mothers weren’t married, and most of them (89%) had never been married. The home visits at 7, 15, and 24 months lasted at least an hour, and included a videotaped free play or puzzle completion interaction between mother and child. Cortisol samples were taken prior to an emotion challenge task, and 20 minutes and 40 minutes after peak emotional arousal.

Long-term genetic effects of childhood environment

The long-term effects of getting off to a poor start are deeper than you might believe. A DNA study of forty 45-year-old males in a long-running UK study has found clear differences in gene methylation between those who experienced either very high or very low standards of living as children or adults (methylation of a gene at a significant point in the DNA reduces the activity of the gene). More than twice as many methylation differences were associated with the combined effect of the wealth, housing conditions and occupation of parents (that is, early upbringing) than were associated with the current socio-economic circumstances in adulthood (1252 differences as opposed to 545).

The findings may explain why the health disadvantages known to be associated with low socio-economic position can remain for life, despite later improvement in living conditions. The methylation profiles associated with childhood family living conditions were clustered together in large stretches of DNA, which suggests that a well-defined epigenetic pattern is linked to early socio-economic environment. Adult diseases known to be associated with early life disadvantage include coronary heart disease, type 2 diabetes and respiratory disorders.

[2589] Blair, C., Granger D. A., Willoughby M., Mills-Koonce R., Cox M., Greenberg M. T., et al. (2011).  Salivary Cortisol Mediates Effects of Poverty and Parenting on Executive Functions in Early Childhood. Child Development. no - no.

Fernald, L. C., & Gunnar, M. R. (2009). Poverty-alleviation program participation and salivary cortisol in very low-income children. Social Science and Medicine, 68, 2180–2189.

[2590] Borghol, N., Suderman M., McArdle W., Racine A., Hallett M., Pembrey M., et al. (2011).  Associations with early-life socio-economic position in adult DNA methylation. International Journal of Epidemiology.

A small Mexican study provides more evidence for the negative effect of pollution on developing brains, with cognitive impairment linked to reduced white matter in specific regions.

In yet another study of the effects of pollution on growing brains, it has been found that children who grew up in Mexico City (known for its very high pollution levels) performed significantly worse on cognitive tests than those from Polotitlán, a city with a strong air quality rating.

The study involved 30 children aged 7 or 8, of whom 20 came from Mexico City and 10 from Polotitlán. The ten Polotitlán children served as controls to the Mexico City group, of whom 10 had white matter hyperintensities in their brains and 10 did not. Regardless of the presence of lesions, Mexico City children were found to have significantly smaller white matter volumes in right parietal and bilateral temporal regions. These reduced volumes were correlated with poorer performance on a variety of cognitive tests, especially those relating to attention, working memory, and learning.

It’s suggested that exposure to air pollution disturbs normal brain development, resulting in cognitive deficits.

A large, long-running study reveals that academic achievement for those with ADHD is hindered by attention problems not hyperactivity.

Data from parents and teachers of 2000 randomly selected children has revealed that only 29% of children with attention problems finished high school compared to 89% of children without such problems. When it came to hyperactivity, the difference was smaller: 40% versus 77%. After taking account of factors such as socioeconomic status and health issues that are correlated with ADHD, inattention was still a highly significant contributor, but hyperactivity was not.

Yearly assessments of the children were taken from age 6 to 12, and high school graduation status was obtained from official records. Attention problems were evaluated by teachers on the basis of behavior such as an inability to concentrate, absentmindedness, or a tendency to give up or be easily distracted. Hyperactivity was identified by behavior such as restlessness, running around, squirming and being fidgety.

The researchers make the excellent point that those with attention difficulties are often forgotten because, unlike hyperactive children, they don't disturb the class.

The findings point to the need to distinguish inattention and hyperactivity, and to provide early preventive intervention for attention problems.

Simple interventions can help college students improve their sleep. Regular sleep habits are important for young children. Sleep deprivation especially affects performance on open-ended problems.

One survey of nearly 200 undergraduate college students who were not living with a parent or legal guardian found that 55% reported getting less than seven hours sleep. This is consistent with other surveys. The latest study confirms such a result, but also finds that students tend to think their sleep quality is better than it is (70% of students surveyed described their sleep as "fairly good" or better). It’s suggested that this disconnect arises from students making comparisons in an environment where poor sleep is common — even though they realized, on being questioned, that poor sleep undermined their memory, concentration, class attendance, mood, and enthusiasm.

None of this is surprising, of course. But this study did something else — it tried to help.

The researchers launched a campuswide media campaign consisting of posters, student newspaper advertisements and a "Go to Bed SnoozeLetter", all delivering information about the health effects of sleep and tips to sleep better, such as keeping regular bedtime and waking hours, exercising regularly, avoiding caffeine and nicotine in the evening, and so on. The campaign cost less than $2,500, and nearly 10% (90/971) said it helped them sleep better.

Based on interviews conducted as part of the research, the researchers compiled lists of the top five items that helped and hindered student sleep:

Helpers

  • Taking time to de-stress and unwind
  • Creating a room atmosphere conducive to sleep
  • Being prepared for the next day
  • Eating something
  • Exercising

Hindrances

  • Dorm noise
  • Roommate (both for positive/social reasons and negative reasons)
  • Schoolwork
  • Having a room atmosphere not conducive to sleep
  • Personal health issues

In another study, this one involving 142 Spanish schoolchildren aged 6-7, children who slept less than 9 hours and went to bed late or at irregular times showed poorer academic performance. Regular sleep habits affected some specific skills independent of sleep duration.

69% of the children returned home after 9pm at least three evenings a week or went to bed after 11pm at least four nights a week.

And a recent study into the effects of sleep deprivation points to open-ended problem solving being particularly affected. In the study, 35 West Point cadets were given two types of categorization task. The first involved categorizing drawings of fictional animals as either “A” or “not A”; the second required the students to sort two types of fictional animals, “A” and “B.” The two tests were separated by 24 hours, during which half the students had their usual night’s sleep, and half did not.

Although the second test required the students to learn criteria for two animals instead of one, sleep deprivation impaired performance on the first test, not the second.

These findings suggest the fault lies in attention lapses. Open-ended tasks, as in the first test, require more focused attention than those that offer two clear choices, as the second test did.

News reports on sleep deprivation are collated here.

[2521] Orzech, K. M., Salafsky D. B., & Hamilton L. A. (2011).  The State of Sleep Among College Students at a Large Public University. Journal of American College Health. 59, 612 - 619.

[2515] Cladellas, R., Chamarro, A., del Badia, M. M., Oberst, U., & Carbonell, X. (2011). Efectos de las horas y los hábitos de sueño en el rendimiento académico de niños de 6 y 7 años: un estudio preliminar [Effects of sleeping hours and sleeping habits on the academic performance of six- and seven-year-old children: A preliminary study]. Cultura y Educación, 23(1), 119-128.

Maddox, W. T., Glass, B. D., Zeithamova, D., Savarie, Z. R., Bowen, C., Matthews, M. D., & Schnyer, D. M. (2011). The effects of sleep deprivation on dissociable prototype learning systems. Sleep, 34(3), 253-260.

More evidence of the importance of adequate folate consumption for cognitive functioning at all ages.

Most research into the importance of folate and B12 levels has centered on seniors, and it does seem clear now that having adequate levels of these vitamins is important for maintaining cognitive functioning as you get older. Folic acid levels are of course also regarded as crucial when the brain is developing, which is why pregnant women are urged to take supplements, and why some countries fortify their bread with it. There is less research in the extensive ground between these two end-points.

A Swedish study involving 386 15-year-olds has now found that those in the top third of folic acid intake (more than 253 micrograms per day for girls and 335 for boys) performed significantly better on their school grades compared to those in the bottom third (less than 173 micrograms folic acid per day for girls and 227 for boys).

Interestingly, while homocysteine levels in the blood were initially significant, this association disappeared after other significant predictors (gender, smoking, and SES) were controlled for. Neither was a genotype linked to higher homocysteine levels (MTHFR 677 TT homozygosity) significantly related to academic achievement. Low folate and B12 levels are associated with higher homocysteine levels in the blood, and there is evidence that it is this increase in homocysteine that is the reason for the cognitive impairment seen in age-related cognitive decline. This finding, then, suggests that this is only one part of the story.

Sweden does not fortify flour with folic acid as the United States, Canada and Australia do. Folate is a B vitamin found particularly in citrus fruit, green leafy vegetables, whole-wheat bread, and dried beans and peas; however, it is often destroyed by cooking or processing.

The sum of school grades in 10 core subjects obtained in the final semester of compulsory 9 years of schooling was used as the measure of academic achievement.

Memory begins with perception. Here's a round-up of recent research into visual perception.

Memory begins with perception. We can’t remember what we don’t perceive, and our memory of things is influenced by how we perceive them.

Our ability to process visual scenes has been the subject of considerable research. How do we process so many objects? Some animals do it by severely limiting what they perceive, but humans can perceive a vast array of features. We need some other way of filtering the information. Moreover, it’s greatly to our advantage that we can process the environment extremely quickly. So that’s two questions: how do we process so much, and so fast?

Brain region behind the scene-facilitation effect identified

A critical factor, research suggests, is our preferential processing of interacting objects — we pick out interacting objects more quickly than unrelated objects. A new study has now identified the region of the brain responsible for this ‘scene-facilitation effect’. To distinguish between the two leading contenders, the lateral occipital cortex and the intraparietal sulcus, transcranial magnetic stimulation was used to temporarily shut down each region in turn, while volunteers viewed brief flashes of object pairs (half of which were shown interacting) and decided whether the glimpsed objects matched a presented label.

The scene-facilitation effect was eliminated when the lateral occipital cortex was out of action, while the non-performance of the intraparietal sulcus made no difference.

The little we need to identify a scene

The scene-facilitation effect is an example of how we filter and condense the information in our visual field, but we also work in the opposite direction — we extrapolate.

When ten volunteers had their brains scanned while they viewed color photographs and line drawings of six categories of scenes (beaches, city streets, forests, highways, mountains and offices), brain activity was nearly identical regardless of format. That is, researchers could tell, with a fair amount of success, what category of scene a participant was looking at just from the pattern of brain activity in the ventral visual cortex, whether the picture was a color photo or a simple line drawing. When the decoding made mistakes, the mistakes were similar for the photos and the drawings.

In other words, most of what the brain is responding to in the photo is also evident in the line drawing.

In order to determine what those features were, the researchers progressively removed some of the lines in the line drawings. Even when up to 75% of the pixels in a line drawing were removed, participants could still identify what the scene was 60% of the time — as long as the important lines were left in, that is, those showing the broad contours of the scene. If only the short lines, representing details like leaves or windows, were left, participants became dramatically less accurate.

The findings cast doubt on some models of human visual perception which argue that people need specific information that is found in photographs to classify a scene.

Consistent with previous research, activity in the parahippocampal place area and the retrosplenial cortex was of greatest importance.

The brain performs visual search near optimally

Visual search involves picking out a target in a sea of other objects, and it’s one of the most important visual tasks we do. It’s also (not surprisingly, considering its evolutionary importance) something we are very very good at. In fact, a new study reveals that we’re pretty near optimal.

Of course we make mistakes, and have failures. But these happen not because of our incompetence, but because of the complexity of the task.

In the study, participants were shown sets of lines that might or might not contain a line oriented in a particular way. Each screen was shown for only a fraction of a second, and the contrast of each line was randomly varied, making the target easier or more difficult to detect. The variation in contrast was designed as a model for an important variable in visual search — that of the reliability of the sensory information. Optimally, an observer would take into consideration the varying reliability of the items, giving the information different weights as a result of that perceived reliability. That weighted information would then be combined according to a specific integration rule. That had been calculated as the optimal process, and the performance of the participants matched that expectation.

The computer model that simulated this performance, and that matched the human performance, used groups of (simulated) neurons that responded differently to different line orientations.

In other words, it appears that we are able, very quickly, to integrate information coming from multiple locations, while taking into account the reliability of the different pieces of information, and we do this through the integration of information coming from different groups of neurons, each group of which is responding to different bits of information.
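The reliability-weighted integration described above resembles standard Bayesian cue combination, in which each item’s evidence is weighted by its precision (inverse variance) before being combined. Here is a minimal sketch of that rule; the Gaussian assumption, the numbers, and the function name are mine for illustration, not taken from the study:

```python
import numpy as np

def weighted_evidence(measurements, variances):
    """Combine noisy measurements (e.g. perceived line orientations),
    weighting each by its reliability (inverse variance), as in
    standard Bayesian cue combination."""
    measurements = np.asarray(measurements, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)  # precision = reliability
    combined = np.sum(weights * measurements) / np.sum(weights)
    combined_variance = 1.0 / np.sum(weights)           # pooled uncertainty
    return combined, combined_variance

# Two low-contrast (noisy) items and one high-contrast (reliable) item:
estimate, var = weighted_evidence([10.0, 20.0, 12.0], [4.0, 4.0, 0.5])
# The reliable third item dominates, pulling the estimate toward 12.
```

Lowering an item’s contrast in the experiment corresponds to raising its variance here, so the model automatically discounts it — which is the near-optimal behavior the participants showed.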

Another recent study into visual search has found that, when people are preparing themselves to look for very familiar object categories (people or cars) in natural scenes, activity in their visual cortex was very similar to that shown when they were actually looking at the objects in the scenes. Moreover, the precise activity in the object-selective cortex (OSC) predicted performance in detecting the target, while preparatory activity in the early visual cortex (V1) was actually negatively related to search performance. It seems that these two regions of the visual cortex are linked to different search strategies, with the OSC involved in relatively abstract search preparation and V1 to more specific imagery-like preparation. Activity in the medial prefrontal cortex also reflected later target detection performance, suggesting that this may be the source of top-down processing.

The findings demonstrate the role of preparatory and top-down processes in guiding visual search (and remind us that these processes can bias us against seeing what we’re looking for, just as easily as they help us).

'Rewarding' objects can't be ignored

Another aspect of visual search is that some objects just leap out at us and capture our attention. Loud noises and fast movement are the most obvious of the attributes that snag our gaze. These are potential threats, and so it’s no wonder we’ve evolved to pay attention to such things. We’re also drawn to potential rewards. Prospective mates; food; liquids.

What about rewards that are only temporarily rewarding? Do we move on easily, able to ignore previously rewarding items as soon as they lose their relevance?

In a recent study, people spent an hour searching for red or green circles in an array of many differently colored circles. The red and green circles were always followed by a monetary reward (10 cents for one color, and 1 cent for the other). Afterwards, participants were asked to search for particular shapes, and color was no longer relevant or rewarded. However, when, occasionally, one of the shapes was red or green, reaction times slowed, demonstrating that these were distracting (even though the participants had been told to ignore this if it happened).

This distraction persisted for weeks after the original learning session. Interestingly, people who scored highly on a questionnaire measuring impulsivity were more likely to be distracted by these no-longer-relevant items.

The findings indicate that stimuli previously associated with reward continue to capture attention regardless of their relevance to the task at hand. There are implications here that may help in the development of more effective treatments for drug addiction, obesity and ADHD.

People make an image memorable

What makes an image memorable? It’s always been assumed that visual memory is too subjective to allow a general answer to this question. But an internet study has found remarkable consistency among hundreds of people who viewed images from a collection of about 10,000 images, some of which were repeated, and decided whether or not they had seen the image before. The responses generated a memorability rating for each image. Once this had been collated, the researchers made "memorability maps" of each image by asking people to label all the objects in the images. These maps were then used to determine which objects make an image memorable.

In general, images with people in them were the most memorable, followed by images of human-scale space — such as the produce aisle of a grocery store — and close-ups of objects. Least memorable were natural landscapes, although those could be memorable if they featured an unexpected element, such as shrubbery trimmed into an unusual shape.

Computer modeling then allowed various features for each image (such as color, or the distribution of edges) to be correlated with the image's memorability. The end result was an algorithm that can predict memorability of images the computational model hasn't "seen" before.
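As a toy illustration of the feature-to-memorability mapping described above: given per-image feature measurements and memorability scores, one can fit linear weights by least squares and predict scores for unseen images. Everything in this sketch (the feature set, dimensions, and weights) is invented for illustration; the actual model used far richer descriptors such as object labels, color statistics, and edge distributions:

```python
import numpy as np

# Simulated stand-in data: 500 images, each described by 3 made-up features
# (e.g. person-area, mean saturation, edge density), plus noisy memorability scores.
rng = np.random.default_rng(1)
n_images = 500
features = rng.normal(size=(n_images, 3))
true_weights = np.array([0.8, 0.1, 0.3])          # hypothetical ground truth
memorability = features @ true_weights + 0.2 * rng.normal(size=n_images)

# Fit linear weights by least squares, then score an "unseen" image:
weights, *_ = np.linalg.lstsq(features, memorability, rcond=None)
new_image = rng.normal(size=3)
predicted = new_image @ weights
```

The published algorithm will have involved more sophisticated regression over many more features, but the underlying idea — learn a mapping from image features to a crowd-sourced memorability score, then apply it to new images — is the same.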

The researchers are now doing a follow-up study to test longer-term memorability, as well as working on adding more detailed descriptions of image content.

[2291] Kim, J. G., Biederman I., & Juan C. - H. (2011).  The Benefit of Object Interactions Arises in the Lateral Occipital Cortex Independent of Attentional Modulation from the Intraparietal Sulcus: A Transcranial Magnetic Stimulation Study. The Journal of Neuroscience. 31(22), 8320 - 8324.

[2303] Walther, D. B., Chai B., Caddigan E., Beck D. M., & Fei-Fei L. (2011).  Simple line drawings suffice for functional MRI decoding of natural scene categories. Proceedings of the National Academy of Sciences. 108(23), 9661 - 9666.

[2292] Ma, W. J., Navalpakkam V., Beck J. M., van den Berg R., & Pouget A. (2011).  Behavior and neural basis of near-optimal visual search. Nat Neurosci. 14(6), 783 - 790.

[2323] Peelen, M. V., & Kastner S. (2011).  A neural basis for real-world visual search in human occipitotemporal cortex. Proceedings of the National Academy of Sciences. 108(29), 12125 - 12130.

[2318] Anderson, B. A., Laurent P. A., & Yantis S. (2011).  Value-driven attentional capture. Proceedings of the National Academy of Sciences. 108(25), 10367 - 10371.

Isola, P., Xiao, J., Oliva, A. & Torralba, A. 2011. What makes an image memorable? Paper presented at the IEEE Conference on Computer Vision and Pattern Recognition, June 20-25, Colorado Springs.

 

Two recent studies have come out implicating traffic pollutants as factors in age-related cognitive decline and dementia and as prenatal risk factors for attention problems.

A study in which mice were exposed to polluted air for three 5-hour sessions a week for 10 weeks, has revealed that such exposure damaged neurons in the hippocampus and caused inflammation in the brain. The polluted air was laden with particles collected from an urban freeway.

Another recent study found that, of 215 children, those whose cord blood showed high levels of combustion-related pollutants such as polycyclic aromatic hydrocarbons (PAH), had more attention (and anxiety) problems at ages 5 and 7. The children were born to nonsmoking African-American and Dominican women residing in New York City.

A new study further confirms the idea that a growing inability to ignore irrelevancies is behind age-related cognitive decline.

A study involving 125 younger (average age 19) and older (average age 69) adults has revealed that while younger adults showed better explicit learning, older adults were better at implicit learning. Implicit memory is our unconscious memory, which influences behavior without our awareness.

In the study, participants pressed buttons in response to the colors of words and random letter strings — only the colors were relevant, not the words themselves. They then completed word fragments. In one condition, they were told to use words from the earlier color task to complete the fragments (a test of explicit memory); in the other, this task wasn’t mentioned (a test of implicit memory).

Older adults showed better implicit than explicit memory, and better implicit memory than the younger adults, while the reverse was true for the younger group. However, on a further test which required younger participants to engage in a number task simultaneously with the color task, younger adults behaved like older ones.

The findings indicate that shallower and less focused processing goes on during multitasking and (though not inevitably!) with age. The fact that younger adults behaved like older ones when distracted points to the problem, for which we now have quite a body of evidence: with age, we tend to become more easily distracted.

A placebo-controlled study reveals a treatment for mild traumatic brain injury that sufferers can administer themselves.

A study involving 38 people suffering from mild traumatic brain injury (TBI) has found that those receiving acupressure treatments from trained experts (eight treatments over 4 weeks) scored significantly better on tests of working memory compared to those who received treatments from the same experts on places on the body that are not considered to be acupressure points.

Acupressure involves the practitioner using their fingertips to stimulate particular points on a person's body. The acupressure treatment used in the study was Jin Shin. This treatment can be taught to family and friends of those with TBI and can even be used as a self-treatment, making it a good candidate for an adjunct treatment for TBI.

A new study confirms that learning ability declines with time awake, and shows that stage 2 non-REM sleep, achieved during a long afternoon nap, can re-invigorate your brain.

In a study involving 44 young adults given a rigorous memorizing task at noon and another such task at 6pm, those who took a 90-minute nap during the interval improved their ability to learn on the later task, while those who stayed awake found it harder to learn.

The degree to which the nappers were refreshed correlated with the amount of stage 2 non-REM sleep they experienced. This sleep phase is characterized by sleep spindles, which are associated with brain activity between the hippocampus and prefrontal cortex. Spindle-rich sleep occurs mostly in the second half of the night, so those who don’t get their full quota of sleep are probably getting less of it.

The finding confirms the idea that learning ability decreases the more time you spend awake.

[2144] Mander, B. A., Santhanam S., Saletin J. M., & Walker M. P. (2011).  Wake deterioration and sleep restoration of human learning. Current Biology. 21(5), R183-R184.

The intraparietal sulcus appears to be a hub for connecting the different sensory-processing areas as well as higher-order processes, and may be key to attention problems.

If our brains are full of clusters of neurons resolutely only responding to specific features (as suggested in my earlier report), how do we bring it all together, and how do we switch from one point of interest to another? A new study using resting state data from 58 healthy adolescents and young adults has found that the intraparietal sulcus, situated at the intersection of visual, somatosensory, and auditory association cortices and known to be a key area for processing attention, contains a miniature map of all the things we can pay attention to (visual, auditory, motor stimuli etc).

Moreover, this map is copied in at least 13 other places in the brain, all of which are connected to the intraparietal sulcus. Each copy appears to do something different with the information. For instance, one map processes eye movements while another processes analytical information. This map of the world may be a fundamental building block for how information is represented in the brain.

There were also distinct clusters within the intraparietal sulcus that showed different levels of connectivity to auditory, visual, somatosensory, and default mode networks, suggesting they are specialized for different sensory modalities.

The findings add to our understanding of how we can shift our attention so precisely, and may eventually help us devise ways of treating disorders where attention processing is off, such as autism, attention deficit disorder, and schizophrenia.

[1976] Anderson, J. S., Ferguson M. A., Lopez-Larson M., & Yurgelun-Todd D. (2010).  Topographic maps of multisensory attention. Proceedings of the National Academy of Sciences. 107(46), 20110 - 20114.

A new study adds to the evidence that our ability to focus on one thing and ignore irrelevant information gets worse with age, and that this may be a crucial factor in age-related cognitive impairment.

A study involving young (average age 22) and older adults (average age 77) showed participants pictures of overlapping faces and places (houses and buildings) and asked them to identify the gender of the person. While the young adults showed activity in the brain region for processing faces (fusiform face area) but not in the brain region for processing places (parahippocampal place area), both regions were active in the older adults. Additionally, on a surprise memory test 10 minutes later, older adults who showed greater activation in the place area were more likely to recognize what face was originally paired with what house.

These findings confirm earlier research showing that older adults become less capable of ignoring irrelevant information, and show that this distracting information doesn’t merely interfere with what you’re trying to attend to, but is encoded in memory along with that information.

Alcohol and marijuana abuse are associated with specific cognitive impairments in adolescents; more surprisingly, a family history of substance abuse can also have an effect.

A study involving 48 adolescents, of whom 19 had been diagnosed with substance abuse/dependence, and 14 had a family history of substance abuse but no history of personal use, has found that greater alcohol use was associated with a significant decrease in attention and executive function (which is involved in planning and decision-making), while greater marijuana use was associated with poorer memory. Adolescents in the substance abuse group had lower scores in attention, memory, and processing speed, compared to the other groups, while those with a family history of abuse (but no personal history) had poorer visuospatial ability.

Fetal exposure to large amounts of alcohol is associated with reduced cognitive efficiency in perception, attention and recognition memory in older children.

Data from 217 children from Inuit communities in Arctic Quebec (average age 11), some of whom had mothers who reported binge drinking during pregnancy, has revealed that the alcohol-exposed group, while similar to the control group in accuracy and reaction time, showed significant differences in their brains’ electrical activity while doing those tasks (a Go/No-go response inhibition task and a continuous recognition memory task). The differences suggest that fetal alcohol exposure is associated with reduced efficiency in the initial extraction of the meaning of a stimulus, reduced allocation of attention to the task, and poorer conscious recognition memory processing.

A small study indicates that nurturing mothers and increased reward centers in the brain go hand-in-hand — although the jury’s still out on which comes first.

The issue of “mommy brain” is a complex one. Inconsistent research results make it clear that there is no simple answer to the question of whether or not pregnancy and infant care change women’s brains. But a new study adds to the picture.

Brain scans of 19 women two to four weeks and three to four months after they gave birth showed that grey matter volume increased by a small but significant amount in the midbrain (amygdala, substantia nigra, hypothalamus), prefrontal cortex, and parietal lobe. These areas are involved in motivation and reward, emotion regulation, planning, and sensory perception.

Mothers who were most enthusiastic about their babies were significantly more likely to show this increase in the midbrain regions. The authors speculated that the “maternal instinct” might be less of an instinctive response and more of a result of active brain building. Interestingly, while the brain’s reward regions don’t usually change as a result of learning, one experience that does have this effect is that of addiction.

While the reasons may have to do with genes, personality traits, infant behavior, or present circumstances, previous research has found that mothers who had more nurturing in their childhood had more gray matter in those brain regions involved in empathy and reading faces, which also correlated with the degree of activation in those regions when their baby cried.

A larger study is of course needed to confirm these findings.

A small study suggests that social activities are more tiring for extraverts than introverts, and that this personality trait may influence the effect of sleep loss on attention.

A study involving 48 healthy adults aged 18-39 has found that extraverts who were deprived of sleep for 22 hours after spending 12 hours in group activities performed worse on a vigilance task than did those extraverts who engaged in the same activities on their own in a private room. Introverts were relatively unaffected by the degree of prior social interaction.

The researchers suggest that social interactions are cognitively complex experiences that may lead to rapid fatigue in brain regions that regulate attention and alertness, and (more radically) that introverts may have higher levels of cortical arousal, giving them greater resistance to sleep deprivation.

Rupp TL; Killgore WDS; Balkin TJ. Socializing by day may affect performance by night: vulnerability to sleep deprivation is differentially mediated by social exposure in extraverts vs introverts. SLEEP 2010;33(11):1475-1485.

A very large study has found everyday memory problems among middle-aged and elderly are more likely in those with a history of cancer.

Confirming earlier indications from small studies, a very large nationwide survey has found that people who have had cancer are 40% more likely to experience memory problems that interfere with daily functioning.

The U.S. study involved nearly 10,000 people aged 40 and older, of whom 1,305 (13.3%) reported they had cancer or a history of cancer. Of these, 14% answered yes to the question "Are you limited in any way because of difficulty remembering or because you experience periods of confusion?" Of those who did not have a history of cancer, 8% answered yes to this question.

The degree to which these memory problems are related to the treatment or to the cancer itself (or even perhaps to the experience of having cancer) needs further investigation. The researcher suggests the finding points to memory issues being more common among cancer sufferers than realized, and recommends that cognitive assessment become a standard part of cancer treatment.

The study is noteworthy in including all cancers, rather than focusing on one. Nevertheless, I hope that we eventually see a published paper (these results were presented at a conference) that also analyses the data in terms of different cancers, different treatments, and length of time since the cancer.

Earlier reports on ‘chemobrain’, and possible ways to help

Results were presented at the Third AACR Conference on The Science of Cancer Health Disparities.

Recent rodent studies confirm attention and learning is more difficult for women when estrogen is high, but estrogen therapy can help menopausal women — if given during a critical window.

Recent rodent studies add to our understanding of how estrogen affects learning and memory. A study found that adult female rats took significantly longer to learn a new association when they were in periods of their estrus cycle with high levels of estrogen, compared to their ability to learn when their estrogen level was low. The effect was not found among pre-pubertal rats. The study follows on from an earlier study using rats with their ovaries removed, whose learning was similarly affected when given high levels of estradiol.

Human females have high estrogen levels while they are ovulating. These high levels have also been shown to interfere with women's ability to pay attention.

On the other hand, it needs to be remembered that estrogen therapy has been found to help menopausal and post-menopausal women; it has also, however, been found to be detrimental. Recent research has suggested that timing is important, and it’s been proposed that a critical period exists during which hormone therapy must be administered if it is to improve cognitive function.

This finds some support in another recent rodent study, which found that estrogen replacement increased long-term potentiation (a neural event that underlies memory formation) in young adult rats with their ovaries removed, through its effects on NMDA receptors and dendritic spine density — but only if given within 15 months of the ovariectomy. By 19 months, the same therapy couldn’t induce the changes.

Ritalin helps some survivors of childhood cancer with attention problems.

Many survivors of childhood cancer experience cognitive problems as a result of their treatment. The drug methylphenidate (marketed under several names, the best known of which is Ritalin) has previously been shown to help attention problems in such survivors in the short term. Now a new study demonstrates that it can also be of benefit in the long term.

The study tested attention, social skills and behavior in survivors who had been on the drug for a year, comparing them to a similar group of unmedicated survivors. Although the drug did not lead to a significant gain in measured academic skills in math, reading and spelling, many did show improvements to attention that put them back in the normal range.

Nevertheless, the results also emphasize the need for other approaches, given that many did not benefit from the drug, and some may not be good candidates for medical or other reasons. The treatment group included 35 survivors of brain tumors and 33 of acute lymphoblastic leukemia (ALL). Any who suffered from ADHD before their cancer were excluded from the study.

Watching another person do something can leave you with the memory of having done it yourself.

I’m not at all sure why the researcher says they were “stunned” by these findings, since it doesn’t surprise me in the least. A series of experiments into the role of imagination in creating false memories has revealed that people who had watched a video of someone else doing a simple action often remembered, two weeks later, doing the action themselves. In fact, in my book on remembering intentions, which includes a chapter on remembering whether you’ve done something, I mention the risk of imagining yourself doing something: you may then go on to believe you have actually done it. And given all the research on mirror neurons, it’s no big step from watching someone do something to remembering that you did it yourself. Nevertheless, it’s nice to get the confirmation.

The experiments involved participants performing several simple actions, such as shaking a bottle or shuffling a deck of cards. Then they watched videos of someone else doing simple actions—some of which they had performed themselves and some of which they hadn’t. Two weeks later, they were asked which actions they had done. They were much more likely to falsely remember doing an action if they had watched someone else do it — even when they had been warned about the effect.

It seems likely that this is an unfortunate side-effect of a very useful ability — namely our ability to learn motor skills by observing others (using the aforesaid mirror neurons) — and there’s probably not a great deal we can do to prevent it happening. It’s just a reminder of how easy it is to form false memories.

[1839] Lindner, I., Echterhoff G., Davidson P. S. R., & Brand M. (2010).  Observation Inflation. Psychological Science. 21(9), 1291 - 1299.

A large American study of middle- and high-school students has found that lower academic performance in core subjects was associated with three dopamine gene variants.

Analysis of DNA and lifestyle data from a representative group of 2,500 U.S. middle- and high-school students tracked from 1994 to 2008 in the National Longitudinal Study of Adolescent Health has revealed that lower academic performance was associated with three dopamine gene variants. Having more of the dopamine gene variants (three rather than one, say) was associated with a significantly lower GPA.

Moreover, each of the dopamine genes (on its own) was linked to specific deficits: there was a marginally significant negative effect on English grades for students with a specific variant in the DAT1 gene, but no apparent effect on math, history or science; a specific variant in the DRD2 gene was correlated with a markedly negative effect on grades in all four subjects; those with the deleterious DRD4 variant had significantly lower grades in English and math, but only marginally lower grades in history and science.

Precisely why these specific genes might impact academic performance isn’t known with any surety, but they have previously been linked to such factors as adolescent delinquency, working memory, intelligence and cognitive abilities, and ADHD, among others.

Adding to research suggesting type of background noise affects whether it impairs learning or not, a new study indicates white noise has different effects depending on whether the students have attention problems.

Five years ago I reported on a finding that primary school children exposed to loud aircraft noise showed impaired reading comprehension (see below). Now a small Norwegian study has found that playing white noise helped secondary school children with attention problems, but significantly impaired those who were normally attentive.

The adolescents were asked to remember as many items as possible from a list read out either in the presence or absence of white noise (78dB). The results were consistent with a computational model based on the concepts of stochastic resonance and dopamine related internal noise, postulating that a moderate amount of external noise would benefit individuals in hypodopaminergic states (such as those with ADHD). The results need to be verified with a larger group, but they do suggest a new approach to helping those with attention problems.
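The stochastic-resonance idea can be illustrated with a toy simulation. This is my own hedged sketch, not the model used in the study, and the signal, threshold and noise values are arbitrary: a simple threshold detector discriminates signal trials from no-signal trials best at a moderate noise level, and worse with either too little or too much noise, giving the inverted-U that would let external noise help someone with high "internal noise" while hurting someone without it.

```python
# Toy stochastic resonance demo (illustrative only, not the study's model).
# A subthreshold signal (0.8) never crosses the detector's threshold (1.0)
# on its own; added noise sometimes lifts it over, but too much noise also
# triggers false alarms on no-signal trials.
import random

def discriminability(noise_sd, signal=0.8, threshold=1.0,
                     trials=20000, seed=1):
    """Hit rate minus false-alarm rate for a simple threshold detector."""
    rng = random.Random(seed)
    hits = sum(signal + rng.gauss(0, noise_sd) > threshold
               for _ in range(trials)) / trials
    false_alarms = sum(rng.gauss(0, noise_sd) > threshold
                       for _ in range(trials)) / trials
    return hits - false_alarms

low, moderate, high = (discriminability(s) for s in (0.1, 0.4, 2.0))
# Inverted-U: moderate noise discriminates best.
assert moderate > low and moderate > high
```

The inverted-U means the "right" amount of external noise depends on how much internal noise the listener already has, which is the study's explanation for why the same 78dB white noise helped one group and hurt the other.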

The previous study referred to involved 2844 children aged 9-10. The children were selected from primary schools located near three major airports — Schiphol in the Netherlands, Barajas in Spain, and Heathrow in the UK. Reading age in children exposed to high levels of aircraft noise was delayed by up to 2 months in the UK and by up to 1 month in the Netherlands for each 5 decibel change in noise exposure. On the other hand, road traffic noise did not have an effect on reading and indeed was unexpectedly found to improve recall memory. An earlier German study found children attending schools near the old Munich airport improved their reading scores and cognitive memory performance when the airport shut down, while children going to school near the new airport experienced a decrease in testing scores.

In a study of young Mexican-American children, higher prenatal exposure to pesticides was significantly associated with ADHD symptoms at age 5.

A study following over 300 Mexican-American children living in an agricultural community has found that their prenatal exposure to organophosphate pesticides (measured by metabolites in the mother’s urine during pregnancy) was significantly associated with attention problems at age 5. This association was stronger among boys, and stronger with age (at 3 ½ the association, although present, did not reach statistical significance — perhaps because attention disorders are much harder to recognize in toddlers). Based on maternal report, performance on attention tests, and a psychometrician’s report, 8.5% of 5-year-olds were classified as having ADHD symptoms. Each tenfold increase in prenatal pesticide metabolites was linked to having five times the odds of scoring high on the computerized tests at age 5. The child’s own level of phosphate metabolites was not linked with attention problems.
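To make the dose-response figure concrete: an odds ratio of 5 per tenfold increase means the odds scale multiplicatively with the log10 of exposure. A minimal sketch of that arithmetic (the baseline exposure value is hypothetical, not from the study):

```python
# Hedged sketch: how "five times the odds per tenfold increase" compounds.
# The exposure units and baseline are hypothetical placeholders.
import math

OR_PER_DECADE = 5.0  # reported odds ratio per tenfold (log10) increase

def odds_multiplier(exposure, baseline_exposure):
    """Multiplier applied to baseline odds at a given exposure level."""
    decades = math.log10(exposure / baseline_exposure)
    return OR_PER_DECADE ** decades

print(odds_multiplier(10, 1))   # tenfold increase -> 5.0
print(odds_multiplier(100, 1))  # hundredfold increase -> 25.0, not 10
```

The point of the sketch is that a hundredfold increase in metabolites corresponds to 5 × 5 = 25 times the odds, not 10 times, because the ratio applies per decade of exposure.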

Organophosphate pesticides disrupt acetylcholine, which is important for attention and short-term memory. While the exposure of these children to pesticides is presumably higher and more chronic than that of the general U.S. population, food is a significant source of pesticide exposure among the general population.

Marks AR, Harley K, Bradman A, Kogut K, Barr DB, Johnson C, et al. 2010. Organophosphate Pesticide Exposure and Attention in Young Mexican-American Children. Environ Health Perspect. doi:10.1289/ehp.1002056
Full text available at http://ehp03.niehs.nih.gov/article/fetchArticle.action?articleURI=info%3...

Experiments with mice have found that inhibiting the production of kynurenic acid in the brain has dramatic benefits for cognitive performance.

Commercial use is a long way off, but research with mice offers hope for a ‘smart drug’ that doesn’t have the sort of nasty side-effects that, for example, amphetamines have. The mice, genetically engineered to produce dramatically less (70%) kynurenic acid, had markedly better cognitive abilities. The acid, unusually, is produced not in neurons but in glia, and abnormally high levels are produced in the brains of people with disorders such as schizophrenia, Alzheimer's and Huntington's. More acid is also typically produced as we get older.

The acid is produced in our brains after we’ve eaten food containing the amino acid tryptophan, which helps us produce serotonin (turkey is a food well-known for its high tryptophan levels). But serotonin helps us feel good (low serotonin levels are linked to depression), so the trick is to block the production of kynurenic acid without reducing the levels of serotonin. The next step is therefore to find a chemical that blocks production of the acid in the glia, and can safely be used in humans. Although no human tests have yet been performed, several major pharmaceutical companies are believed to be following up on this research.

Many studies have now shown that walking helps older brains fight cognitive decline, but a new study shows that this is also associated with improved connectivity in important brain networks.

A study involving 65 older adults (59-80), who were very sedentary before the study (reporting less than two episodes of physical activity lasting 30 minutes or more in the previous six months), has found that those who joined a walking group improved their cognitive performance and the connectivity in important brain circuits after a year. However, those who joined a stretching and toning group showed no such improvement. The walking program involved three 40-minute walks at a moderate pace every week. The two affected brain circuits (the default mode network and the fronto-executive network) typically become less connected with age. It is worth emphasizing that the improvement was not evident at the first test, after six months, but only at the second 12-month test.

Interestingly, I noticed in the same journal issue a study into the long-term benefits of dancing for older adults. The study compared physical and cognitive performance of those who had engaged in amateur dancing for many years (average: 16.5 years) and those with no dancing or sporting engagement. The dancing group were overall significantly better than the other group on all tests: posture, balance, reaction time, motor behavior, cognitive performance. However, the best dancers weren’t any better than individuals in the other group; the group difference arose because none of the dancers performed poorly, while many of the other group did.

A word experiment shows that unpleasant or traumatic events are likely to be inaccurately remembered, and this memory distortion increases with age. The findings have implications for eyewitness testimony.

Findings that children are less likely than adults to distort memories when negative emotions are evoked have significant implications for the criminal justice system. Experiments involving children aged seven and 11, and young adults (18-23), found that when they were shown lists of closely related emotional words (e.g. pain, cut, ouch, cry, injury), they would tend to mistakenly remember a related word (e.g. hurt) although it had not been present. Despite the prevailing theory that being involved in a very negative experience focuses your mind and helps you notice and remember details, words that had negative emotional content produced the highest levels of false memory. With arousal (such as would be evoked in a traumatic experience), memory was distorted more. These tendencies increased with age.

Older adults are more likely to forget that they've done something. A new study has found that doing something unusual (such as putting a hand on their head) at the same time helps seniors remember having done the task.

Previous research has shown that older adults are more likely to incorrectly repeat an action in situations where a prospective memory task has become habitual — for example, taking more medication because they’ve forgotten they’ve already taken it. A new study has found that doing something unusual at the same time helps seniors remember having done the task. In the study, older adults told to put a hand on their heads whenever they made a particular response, reduced the level of repetition errors to that of younger adults. It’s suggested that doing something unusual, like knocking on wood or patting yourself on the head, while taking a daily dose of medicine may be an effective strategy to help seniors remember whether they've already taken their daily medications.

A study has found fetuses exposed to elevated levels of the stress hormone cortisol may have trouble paying attention or solving problems at 17 months -- but only if they are not securely attached to their mothers.

A study involving 125 women has found the first direct human evidence that fetuses exposed to elevated levels of the stress hormone cortisol may have trouble paying attention or solving problems at 17 months. More hopefully, the association only occurred among children showing insecure attachment to their mothers, independent of socioeconomic factors. The findings suggest that a stressful prenatal environment may be effectively counteracted by good parental care. The children will be followed up when they turn 6.

[1114] [Anonymous] (2010).  Maternal Prenatal Cortisol and Infant Cognitive Development: Moderation by Infant-Mother Attachment. Biological Psychiatry. In Press, Corrected Proof.

A small study has found that regular use of ecstasy or cocaine is associated with impaired prospective memory (remembering things you plan to do).

A study involving 42 students who were ecstasy/polydrug users has found that ecstasy, or the regular use of several drugs, affects users' prospective memory (remembering things you plan to do), even when tests are controlled for cannabis, tobacco or alcohol use. Cocaine use in particular was prominently associated with prospective memory impairment. Deficits were evident in both lab-based and self-reported measurements.

[164] Hadjiefthyvoulou, F., Fisk J. E., Montgomery C., & Bridges N. J. (2010).  Everyday and prospective memory deficits in ecstasy/polydrug users. J Psychopharmacol. 0269881109359101.

Full text is available for a limited time at http://jop.sagepub.com/cgi/rapidpdf/0269881109359101v1

A review of 35 studies has found that depression does not always lead to cognitive impairment, and that processing speed is the cognitive function most consistently affected by depression.

A review of 35 studies published between 1991 and 2007 has found that depression does not always lead to cognitive impairment. Part of the variability in findings may be due to inconsistent measurement and diagnosis of depression. Processing speed was found to be the cognitive function most consistently affected by depression. Processing speed deficits can be helped by decreasing the amount of information to process at one time.

A large study of elementary school children and college students has found greater screen time (TV and video games) is associated with more attention problems.

A study following 1,323 children in Grades 3 to 5 and 210 college students has found that children who exceeded two hours per day of screen time (TV and video games) were 1.5 to 2 times more likely to be considered above average in attention problems by their teachers compared to children who met the guideline. A similar association between screen media time and attention problems (self-reported) was found for the college students. A study earlier this year found U.S. children aged eight to 18 devote an average of seven hours and 38 minutes per day to entertainment media (http://www.kff.org/entmedia/entmedia012010nr.cfm).

A sleep lab study has revealed that traffic noise during sleep produces significantly slower reaction times on a vigilance task in the morning.

It’s not just a matter of quantity; quality of sleep matters too. A study involving 72 adults (average age 40), whose sleep was monitored for 11 consecutive nights, has revealed that reaction times on a morning psychomotor vigilance task were significantly slower after exposure to recorded traffic noise during sleep. The slowing was directly related to the frequency and sound-pressure level of the nightly noise. Traffic noise has been identified as one cause of "environmental sleep disorder," which involves an environmental disturbance that causes a complaint of insomnia or daytime sleepiness. Other common causes include bright light and temperature extremes. The researchers also note that nighttime traffic noise may have even stronger effects on the performance of people who are more susceptible to sleep disturbances. Risk groups include children, shift workers, the elderly and people with chronic medical conditions. White noise, produced by fans, sound machines, and special applications for computers and smart phones, can be used to mask other noise.

Elmenhorst, E. et al. 2010. Nocturnal traffic noise and morning cognitive performance. Presented at SLEEP 2010, the 24th annual meeting of the Associated Professional Sleep Societies LLC, in San Antonio, Texas.

A Chicago study has found substantially lower reading scores in African-American children who were assessed directly after a local homicide. Hispanic children were not affected.

A study using data on reported homicides in Chicago 1994-2002 and two independent surveys of children and families in Chicago, has revealed that African-American children who were assessed directly after a local homicide occurred scored substantially lower on vocabulary and reading assessments than their peers from the same neighborhood who were assessed at different times. The impact of the homicide faded both with time and distance from the child's home. However, in both datasets, while the results were extremely strong for African Americans, there was no effect of local homicides for Hispanics. Because of the prevalence of homicide in the most violent neighborhoods in cities like Chicago, these results mean that some children spend about one week out of every month functioning at a low level. Whites and other ethnic groups were excluded from the study because they were almost never exposed to local homicides in the samples used.

[1631] Sharkey, P. (2010).  The acute effect of local homicides on children's cognitive performance. Proceedings of the National Academy of Sciences. 107(26), 11733 - 11738.

A recent study indicates that the alertness benefits of caffeine may simply reflect the reversal of the fatiguing effects of caffeine withdrawal.

A study involving 379 individuals who abstained from caffeine for 16 hours has revealed little variance in levels of alertness after receiving caffeine. Those who were medium/high caffeine consumers reported a decrease in alertness and an increase in headache if given the placebo, neither of which was reported by those who received caffeine. However, their post-caffeine levels of alertness were no higher than those of the non/low consumers who received a placebo, suggesting caffeine only brings coffee drinkers back up to 'normal'. In other words, the stimulatory effects of caffeine appear to be an illusion generated by the reversal of the fatiguing effects of acute caffeine withdrawal.

At the end of first grade, at-risk children showing strong self-regulation in preschool and kindergarten did dramatically better on math, reading and vocabulary than at-risk children with weaker self-regulation.

A study following nearly 1300 young children from birth through the first grade provides more evidence for the importance of self-regulation for academic achievement. The study found that children showing strong self-regulation in preschool and kindergarten did significantly better on math, reading and vocabulary at the end of first grade, independent of poverty, ethnic status, and maternal education (all of which had significant negative effects on reading, math, and vocabulary achievement in first grade). At-risk children with stronger self-regulation in kindergarten scored 15 points higher on a standardized math test in first grade, 11 points higher on an early reading test, and nearly seven points higher on a vocabulary test than at-risk children with weaker self-regulation. The findings emphasize the need to help children learn how to listen, pay attention, follow instructions, and persist on a task.

[1590] Sektnan, M., McClelland M. M., Acock A., & Morrison F. J. (Submitted).  Relations between early family risk, children's behavioral regulation, and academic achievement. Early Childhood Research Quarterly. In press.

A rat study shows how Ritalin improves concentration and, it now appears, speed of learning. The finding may help the development of better-targeted drugs.

A rat study shows how Ritalin improves concentration and, it now appears, speed of learning. The study reveals that it does this by increasing the activity of dopamine at two specific types of neurotransmitter receptors in the amygdala. The dopamine receptor tagged “D2” appears to control the ability to stay focused on a task, while the D1 receptor underlies learning efficiency. The finding may help the development of better-targeted drugs.

A study assessing multitasking ability has found that a very few participants (5 out of 200) were unaffected by doing two complex tasks simultaneously (indeed, their performance on the memory task improved!).

A study assessing the performance of 200 people on a simulated freeway driving task, with or without having a cell phone conversation that involved memorizing words and solving math problems, has found that, as expected, performance on both tasks was significantly impaired. However, for a very few, performance on these tasks was unaffected (indeed their performance on the memory task improved!). These few people — five of them (2.5%) — also performed substantially better on these tasks when performed alone.

Watson, J.M. & Strayer, D.L. 2010. Supertaskers: Profiles in extraordinary multitasking ability. Psychonomic Bulletin and Review. In Press.

Full text is available at http://www.psych.utah.edu/lab/appliedcognition/publications/supertaskers...

A new study provides more support for the idea that cognitive decline in older adults is a product of a growing inability to ignore distractions, and that forewarning doesn't help.

A new study provides more support for the idea that cognitive decline in older adults is a product of a growing inability to ignore distractions. Moreover, the study, involving 21 older adults (60-80) shown random sequences of pictures containing faces and scenes and asked to remember only the scene or the face, reveals that being given forewarning about which specific pictures would be relevant (say the second, or the fourth) did not help. The findings suggest that the failure to suppress irrelevant information is not due to a failure in quickly assessing what is relevant, but is related to mechanisms that occur early in the visual processing stream.

A study of medication administrations in hospitals has found scarily high rates of procedural and clinical failures, of which 2.7% were considered to be major errors — which were much more likely to occur after interruptions, particularly repeated interruptions. Nurse experience provided no protection and indeed was associated with higher procedural failure rates (common with procedural failures — expertise renders you more vulnerable, not less).

As we all know, being interrupted during a task greatly increases the chance we’ll go off-kilter (I discuss the worst circumstances and how you can minimize the risk of mistakes in my book Planning to remember). Medication errors occur as often as once per patient per day in some settings, and around one-third of harmful medication errors are thought to occur during medication administration. Now an in-depth study involving 98 nurses at two Australian teaching hospitals over 505 hours has revealed that at least one procedural failure occurred in 74.4% of administrations and at least one clinical failure in 25%. Each interruption was associated with a 12.1% increase in procedural failures and a 12.7% increase in clinical errors. Procedural failures include such errors as failing to check the patient's identification, record the medication administration, or use aseptic technique; clinical failures include such errors as wrong drug, dose, or route. Interruptions occurred in over half of the 4000 drug administrations. While most errors were rated as clinically insignificant, 2.7% were considered to be major errors — and these were much more likely to occur after interruptions, particularly after repeated interruptions. The risk of major error was 2.3% when there was no interruption; this rose to 4.7% with four interruptions. Nurse experience provided no protection against making a clinical error, and was indeed associated with higher procedural failure rates (this is common with procedural failures — expertise renders you more vulnerable, not less).

Older news items (pre-2010) brought over from the old website

Binge drinking affects attention and working memory in young university students

A Spanish study of 95 first-year university students, 42 of them binge drinkers, has found that those who engaged in binge drinking required greater attentional processing during a visual working memory task in order to carry it out correctly. They also had difficulties differentiating between relevant and irrelevant stimuli. Binge drinkers are defined as males who drink five or more standard alcohol drinks, and females who drink four or more, on one occasion and within a two-hour interval. Some 40% of university students in the U.S. are considered binge drinkers.

[231] Crego, A., Holguín S. R., Parada M., Mota N., Corral M., & Cadaveira F. (2009).  Binge drinking affects attentional and visual working memory processing in young university students. Alcoholism, Clinical and Experimental Research. 33(11), 1870 - 1879.

http://www.eurekalert.org/pub_releases/2009-08/ace-bda080509.php

Short stressful events may improve working memory

We know that chronic stress has a detrimental effect on learning and memory, but a new rat study shows how acute stress (a short, sharp event) can produce a beneficial effect. The rats, trained to a level of 60-70% accuracy on a maze, were put through a 20-minute forced swim before being run through the maze again. Those who experienced this stressful event were better at running the maze 4 hours later, and a day later, than those not forced through the stressful event. It appears that the stress hormone corticosterone (cortisol in humans) increases transmission of the neurotransmitter glutamate in the prefrontal cortex and improves working memory. It also appears that chronic stress suppresses the transmission of glutamate in the prefrontal cortex of male rodents, while estrogen receptors in female rodents make them more resilient to chronic stress than male rats.

[1157] Yuen, E. Y., Liu W., Karatsoreos I. N., Feng J., McEwen B. S., & Yan Z. (2009).  Acute stress enhances glutamatergic transmission in prefrontal cortex and facilitates working memory. Proceedings of the National Academy of Sciences of the United States of America. 106(33), 14075 - 14079.

http://www.eurekalert.org/pub_releases/2009-07/uab-sse072309.php

When emotions involved, older adults may perform memory tasks better than young adults

A study involving 72 young adults (20-30 years old) and 72 older adults (60-75) has found that regulating emotions – such as reducing negative emotions or inhibiting unwanted thoughts – is a resource-demanding process that disrupts the ability of young adults to simultaneously or subsequently perform tasks, but doesn’t affect older adults. In the study, most of the participants watched a two-minute video designed to induce disgust, while the rest watched a neutral two-minute clip. Participants then played a computer memory game. Before playing two further memory games, those who had watched the disgusting video were instructed either to change their negative reaction into positive feelings as quickly as possible or to maintain the intensity of their negative reaction, or given no instructions. Those young adults who had been told to turn their disgust into positive feelings performed significantly worse on the subsequent memory tasks, but older adults were not affected. The feelings of disgust in themselves did not affect performance in either group. It’s speculated that older adults’ greater experience allows them to regulate their emotions without cognitive effort.

[200] Scheibe, S., & Blanchard-Fields F. (2009).  Effects of regulating emotions on cognitive performance: what is costly for young adults is not so costly for older adults. Psychology and Aging. 24(1), 217 - 223.

http://www.eurekalert.org/pub_releases/2009-03/giot-oac030409.php

Inconsistent processing speed among children with ADHD

A new analytical technique has revealed that the problem with children with ADHD is not so much that they are slower at responding to tasks, but rather that their response is inconsistent. The study of 25 children with ADHD and 24 typically developing peers found that on a task in which a number on one screen needed to be mentally added to another number shown on a second screen, those with ADHD were much less consistent in their response times, although the responses they did give were just as accurate. Higher levels of hyperactivity and restlessness or impulsivity (as measured by parent survey) correlated with more frequent slow reaction times. The finding supports the idea that what underlies impaired working memory is a problem in how consistently a child with ADHD can respond during a working memory task.

[911] Buzy, W. M., Medoff D. R., & Schweitzer J. B. (2009).  Intra-Individual Variability Among Children with ADHD on a Working Memory Task: An Ex-Gaussian Approach. Child Neuropsychology. 15(5), 441 - 441.

http://www.eurekalert.org/pub_releases/2009-03/uoc--ips032409.php

Hyperactivity enables children with ADHD to stay alert

A study of 12 boys aged 8 to 12 with ADHD, and 11 boys without, has found that activity levels of those with ADHD increased significantly whenever they had to perform a task that placed demands on their working memory. In a highly stimulating environment where little working memory is required (such as watching a Star Wars video), those with ADHD kept just as still as their normal peers. It’s suggested that movement helps them stay alert enough to complete challenging tasks, and therefore trying to limit their activity (when non-destructive) is counterproductive. Providing written instructions, simplifying multi-step directions, and using poster checklists are all strategies that can be used to help children with ADHD learn without overwhelming their working memories.

[734] Rapport, M., Bolden J., Kofler M., Sarver D., Raiker J., & Alderson R. (2009).  Hyperactivity in Boys with Attention-Deficit/Hyperactivity Disorder (ADHD): A Ubiquitous Core Symptom or Manifestation of Working Memory Deficits?. Journal of Abnormal Child Psychology. 37(4), 521 - 534.

http://www.eurekalert.org/pub_releases/2009-03/uocf-ush030909.php

Poverty can physically impair brain, reducing children's ability to learn

We know that stress affects learning and memory, and there is considerable evidence confirming the commonsense intuition that low-income families are under a lot of stress. Now a long-term study involving 195 children from rural households above and below the poverty line has found that children who lived in impoverished environments for longer periods of time during childhood showed higher stress scores and suffered greater impairments in working memory at 17. Those who spent their entire childhood in poverty scored about 20% lower on working memory tests at 17 than those who were never poor.

[461] Evans, G. W., & Schamberg M. A. (2009).  Childhood poverty, chronic stress, and adult working memory. Proceedings of the National Academy of Sciences. 106(16), 6545 - 6549.

Full text available at http://www.pnas.org/content/early/2009/03/27/0811910106.abstract?sid=b4c74b57-a4a5-447b-8675-ba75e69f3ec2
http://www.physorg.com/news158594009.html
http://www.washingtonpost.com/wp-dyn/content/article/2009/04/05/AR2009040501719.html

New research shows why too much memory may be a bad thing

People who are able to easily and accurately recall historical dates or long-ago events may have a harder time with word recall or remembering the day's current events. A mouse study reveals why. Neurogenesis has been thought of as a wholly good thing — having more neurons surely can only help — but now a mouse study has found that stopping neurogenesis in the hippocampus improved working memory. Working memory is highly sensitive to interference from information previously stored in memory, so having too much information may hinder performing everyday working memory tasks.

[635] Saxe, M. D., Malleret G., Vronskaya S., Mendez I., Garcia D. A., Sofroniew M. V., et al. (2007).  Paradoxical influence of hippocampal neurogenesis on working memory. Proceedings of the National Academy of Sciences. 104(11), 4642 - 4646.

Full text is available at http://www.pnas.org/cgi/reprint/104/11/4642
http://www.physorg.com/news94384934.html
http://www.sciencedaily.com/releases/2007/03/070329092022.htm
http://www.eurekalert.org/pub_releases/2007-03/cumc-nrs032807.php

Implicit stereotypes and gender identification may affect female math performance

Another study has come out showing that women enrolled in an introductory calculus course who possessed strong implicit gender stereotypes (for example, automatically associating "male" more than "female" with math ability and math professions) and were likely to identify themselves as feminine, performed worse relative to their female counterparts who did not possess such stereotypes and who were less likely to identify with traditionally female characteristics. Strikingly, a majority of the women participating in the study explicitly expressed disagreement with the idea that men have superior math ability, suggesting that even when consciously disavowing stereotypes, female math students are still susceptible to negative perceptions of their ability.

[969] Kiefer, A. K., & Sekaquaptewa D. (2007).  Implicit stereotypes, gender identification, and math-related outcomes: a prospective study of female college students. Psychological Science: A Journal of the American Psychological Society / APS. 18(1), 13 - 18.

http://www.eurekalert.org/pub_releases/2007-01/afps-isa012407.php

Reducing the racial achievement gap

And staying with the same theme, a study that came out six months ago, and recently reviewed on the excellent new Scientific American Mind Matters blog, revealed that a single, 15-minute intervention erased almost half the racial achievement gap between African American and white students. The intervention involved writing a brief paragraph about which value, from a list of values, was most important to them and why. The intervention improved subsequent academic performance for some 70% of the African American students, but none of the Caucasians. The study was repeated the following year with the same results. It is thought that the effect of the intervention was to protect against the negative stereotypes regarding the intelligence and academic capabilities of African Americans.

[1082] Cohen, G. L., Garcia J., Apfel N., & Master A. (2006).  Reducing the Racial Achievement Gap: A Social-Psychological Intervention. Science. 313(5791), 1307 - 1310.

Highly accomplished people more prone to failure than others when under stress

One important difference between those who do well academically and those who don’t is often working memory capacity. Those with a high working memory capacity find it easier to read and understand and reason, than those with a smaller capacity. However, a new study suggests there is a downside. Such people tend to heavily rely on their abundant supply of working memory and are therefore disadvantaged when challenged to solve difficult problems, such as mathematical ones, under pressure — because the distraction caused by stress consumes their working memory. They then fall back on the less accurate short-cuts that people with less adequate supplies of working memory tend to use, such as guessing and estimation. Such methods are not made any worse by working under pressure. In the study involving 100 undergraduates, performance of students with strong working memory declined to the same level as those with more limited working memory, when the students were put under pressure. Those with more limited working memory performed as well under added pressure as they did without the stress.

The findings were presented February 17 at the annual meeting of the American Association for the Advancement of Science.

http://www.eurekalert.org/pub_releases/2007-02/uoc-hap021607.php

Common gene version optimizes thinking but carries a risk

On the same subject, another study has found that the most common version of DARPP-32, a gene that shapes and controls a circuit between the striatum and prefrontal cortex, optimizes information filtering by the prefrontal cortex, thus improving working memory capacity and executive control (and thus, intelligence). However, the same version was also more prevalent among people who developed schizophrenia, suggesting that a beneficial gene variant may translate into a disadvantage if the prefrontal cortex is impaired. In other words, one of the things that make humans more intelligent as a species may also make us more vulnerable to schizophrenia.

[864] Kolachana, B., Kleinman J. E., Weinberger D. R., Meyer-Lindenberg A., Straub R. E., Lipska B. K., et al. (2007).  Genetic evidence implicating DARPP-32 in human frontostriatal structure, function, and cognition. Journal of Clinical Investigation. 117(3), 672 - 682.

http://www.sciencedaily.com/releases/2007/02/070208230059.htm
http://www.eurekalert.org/pub_releases/2007-02/niom-cgv020707.php

Anxiety adversely affects those who are most likely to succeed at exams

It has been thought that pressure harms performance on cognitive skills such as mathematical problem-solving by reducing the working memory capacity available for skill execution. However, a new study of 93 students has found that this applies only to those high in working memory. It appears that the advantage of a high working memory capacity disappears when that attention capacity is compromised by anxiety.

[355] Beilock, S. L., & Carr T. H. (2005).  When high-powered people fail: working memory and "choking under pressure" in math. Psychological Science: A Journal of the American Psychological Society / APS. 16(2), 101 - 105.

http://www.eurekalert.org/pub_releases/2005-02/bpl-wup020705.php

Memory-enhancing drugs for elderly may impair working memory and other executive functions

Drugs that increase the activity of an enzyme called protein kinase A improve long-term memory in aged mice and have been proposed as memory-enhancing drugs for elderly humans. However, the type of memory improved by this activity occurs principally in the hippocampus. A new study suggests that increased activity of this enzyme has a deleterious effect on working memory (which principally involves the prefrontal cortex). In other words, a drug that helps you remember a recent event may worsen your ability to remember what you’re about to do (to take an example).

[1404] Ramos, B. P., Birnbaum S. G., Lindenmayer I., Newton S. S., Duman R. S., & Arnsten A. F. T. (2003).  Dysregulation of protein kinase a signaling in the aged prefrontal cortex: new strategy for treating age-related cognitive decline. Neuron. 40(4), 835 - 845.

http://www.eurekalert.org/pub_releases/2003-11/naos-mdf110303.php

Sleep deprivation affects working memory

A recent study investigated the working memory capacities of individuals who were sleep-deprived. For nine days, 7 of the 12 participants slept four hours each night, and 5 slept for eight hours. Each morning, participants completed a computer task to measure how quickly they could access a list of numbers they had been asked to memorize. The list could be one, three, or five items long. Then participants were presented with a series of single digits and asked to answer "yes" or "no" to indicate whether each digit was one they had memorized. Those who slept eight hours a night steadily increased their working memory efficiency on this task, but those who slept only four hours a night failed to show any improvement in memory efficiency. Motor skill did not change across days for either group of participants.

The findings were presented at the Society for Neuroscience 2003 annual conference.

http://www.eurekalert.org/pub_releases/2003-11/sfn-sfb_1111003.php

Cognitive impairment following bypass surgery may last longer than thought

More support for a link between cardiopulmonary bypass surgery and cognitive impairment comes from a new study. In particular, it seems that attention may be most affected. The study also found evidence of longer-lasting cognitive decline than previously thought. Bypass patients also demonstrated poorer cognitive performance before the surgery, and it is now being suggested that it may be the disease itself that is the major problem, rather than the surgery. This is consistent with recent research connecting cardiovascular risk factors with risk factors for cognitive decline.

[716] Keith, J. R., Puente A. E., Malcolmson K. L., Tartt S., Coleman A. E., & Marks H. F. (2002).  Assessing postoperative cognitive change after cardiopulmonary bypass surgery. Neuropsychology. 16(3), 411 - 421.

http://www.eurekalert.org/pub_releases/2002-07/apa-lci070802.php

Cocaine may permanently damage learning abilities in developing fetuses

Two recent studies investigating the effect of pre-natal exposure to cocaine in rats suggest that children exposed to cocaine while in the womb may have permanent changes to the part of the brain that helps control attention and memory, leading to learning deficits and symptoms that are very much like attention deficit hyperactivity disorder.

[1270] Morrow, B. A., Elsworth J. D., & Roth R. H. (2002).  Male rats exposed to cocaine in utero demonstrate elevated expression of Fos in the prefrontal cortex in response to environment. Neuropsychopharmacology: Official Publication of the American College of Neuropsychopharmacology. 26(3), 275 - 285.

[264] Morrow, B. A., Elsworth J. D., & Roth R. H. (2002).  Prenatal cocaine exposure disrupts non-spatial, short-term memory in adolescent and adult male rats. Behavioural Brain Research. 129(1-2), 217 - 223.

http://www.eurekalert.org/pub_releases/2002-02/yu-ucd021802.php

Multitasking

A survey of college students found that those who scored highest in multitasking ability were also least likely to multitask, while those who scored lowest were most likely to engage in it.

I’ve reported often on the perils of multitasking. Here is yet another one, with an intriguing new finding: it seems that the people who multitask the most are those least capable of doing so!

The study surveyed 310 undergraduate psychology students to find their actual multitasking ability, perceived multitasking ability, cell phone use while driving, use of a wide array of electronic media, and personality traits such as impulsivity and sensation-seeking.

Those who scored in the top quarter on a test of multitasking ability tended not to multitask. Some 70% of participants thought they were above average at multitasking, and perceived multitasking ability (rather than actual) was associated with multitasking. Those with high levels of impulsivity and sensation-seeking were also more likely to multitask (with the exception of using a cellphone while driving, which wasn’t related to impulsivity, though it was related to sensation seeking).

The findings suggest that those who multitask don’t do so because they are good at multitasking, but because they are poor at focusing on one task.

A new study quantifies the degree to which tasks that involve actions in a precise sequence are vulnerable to interruptions.

In my book on remembering intentions, I spoke of how quickly and easily your thoughts can be derailed, leading to ‘action slips’ and, in the wrong circumstances, catastrophic mistakes. A new study shows how a 3-second interruption while doing a task doubled the rate of sequence errors, while a 4-second one tripled it.

The study involved 300 people, who were asked to perform a series of ordered steps on the computer. The steps had to be performed in a specific sequence, mnemonically encapsulated by UNRAVEL, with each letter identifying a step. The task rules for each step differed, requiring the participant to mentally shift gears each time. Moreover, a single task element could play multiple roles — for example, the letter U could signal the step, be one of the two possible responses for that step, or be a stimulus requiring a specific response when the step was N. Each step required the participant to choose between two possible responses based on one stimulus feature — features included whether the character was a letter or a digit, whether it was underlined or italic, whether it was red or yellow, and whether the character outside the outline box was above or below it. There were also more cognitive features, such as whether the letter was near the beginning of the alphabet or not. The identifying mnemonic for the step was linked to the possible responses (e.g., N step — near or far; U step — underline or italic).

At various points, participants were very briefly interrupted. In the first experiment, they were asked to type four characters (letters or digits); in the second experiment, they were asked to type only two (a very brief interruption indeed!).

All of this was designed to set up a situation emulating “train of thought” operations, where correct performance depends on remembering where you are in the sequence, and on producing a situation where performance would have a reasonably high proportion of errors — one of the problems with this type of research has been the use of routine tasks that are generally performed with a high degree of accuracy, thus generating only small amounts of error data for analysis.

In both experiments, interruptions significantly increased the rate of sequence errors on the first trial after the interruption (but not on subsequent ones). Nonsequence errors were not affected. In the first experiment (four-character interruption), the sequence error rate on the first trial after the interruption was 5.8%, compared to 1.8% on subsequent trials. In the second experiment (two-character interruption), it was 4.3%.

The four-character interruptions lasted an average of 4.36s, and the two-character interruptions lasted an average of 2.76s.
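As a rough arithmetic check on the "doubled" and "tripled" characterizations, the error rates reported above can be compared directly against the baseline rate (a minimal sketch; the figures are taken from the study summary, and the rounding is mine):

```python
# Sequence error rates (%) reported in the study
baseline = 1.8   # trials not immediately following an interruption
after_long = 5.8  # first trial after a four-character (~4.4 s) interruption
after_short = 4.3  # first trial after a two-character (~2.8 s) interruption

# Multiplicative increase relative to baseline
print(round(after_long / baseline, 1))   # ~3.2x — roughly "tripled"
print(round(after_short / baseline, 1))  # ~2.4x — roughly "doubled"
```

In other words, even the shorter interruption more than doubled the chance of losing one's place in the sequence on the very next step.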

Whether the characters being typed were letters or digits made no difference, suggesting that the disruptive effects of interruptions are not overly sensitive to what’s being processed during the interruption (although of course these are not wildly different processes!).

The absence of effect on nonsequence errors shows that interruptions aren’t disrupting global attentional resources, but more specifically the placekeeping task.

As I discussed in my book, the step also made a significant difference — for sequence errors, middle steps showed higher error rates than end steps.

All of this confirms and quantifies how little it takes to derail us, and reminds us that, when engaged in tasks involving the precise sequence of sub-tasks (which so many tasks do), we need to be alert to the dangers of interruptions. This is, of course, particularly true for those working in life-critical areas, such as medicine.

[3207] Altmann, E. M., Gregory J., & Hambrick D. Z. (2013).  Momentary Interruptions Can Derail the Train of Thought. Journal of Experimental Psychology: General. No pagination specified.

Multitasking is significantly worse if your tasks use the same modality. Instant messaging while doing another visual-motor task reduces performance more than talking on the phone.

I’ve reported, often, on the evidence that multitasking is a problem, something we’re not really designed to do well (with the exception of a few fortunate individuals), and that the problem is rooted in our extremely limited working memory capacity. I’ve also talked about how ‘working memory’ is a bit of a misnomer, given that we probably have several ‘working memories’, for different modalities.

It follows that tasks that use different working memories should be easier to do at the same time than tasks that use the same working memory. A new study confirms that multitasking is more difficult if you are trying to use the same working memory modules for both tasks.

In the study, 32 students carried out a visual pattern-matching task on a computer while giving directions to another person either via instant messaging (same modalities — vision and motor) or online voice chat (different modality — hearing).

While both simultaneous tasks significantly worsened performance on the pattern-matching task, communicating by IM (same modality) led to a 50% drop in visual pattern-matching performance (from a mean of 11 correct responses to a mean of 5), compared to only a 30% drop in the voice condition (mean of 7).

The underlying reason for the reductions in performance seems to be in the effect on eye movement: the number and duration of eye fixations was reduced in both dual-task conditions, and more so in the IM condition.

Note that this is apparently at odds with general perception. According to one study, IM is perceived to be less disruptive than the phone. Moreover, in the current study, participants felt they performed better in the IM condition (although this palpably wasn’t true). This feeling may reflect the greater sense of personal control in instant messaging compared to chat. It may also reflect an illusion of efficiency generated by using the visual channel — because we are so strongly practiced in using vision, we may find visual tasks more effortless than tasks using other modalities. (I should note that most people, regardless of the secondary task, felt they did better than they had! But those in the IM condition were more deluded than those in the chat condition.)

The finding also explains why texting is particularly dangerous when driving — both rely heavily on the same modalities.

All this is consistent with the idea that there are different working memory resources which can operate in parallel, but share one particular resource which manages the other resources.

The idea of ‘threaded cognition’ — of maintaining several goal threads and strategically allocating resources as needed — opens up the idea that multitasking is not all bad. In recent years, we have focused on multitasking as a problem. This has been a very necessary emphasis, given that its downsides were unappreciated. But although multitasking has its problems, it may be that there are trade-offs that come from the interaction between the tasks being carried out.

In other words, rather than condemning multitasking, we need to learn its parameters. This study offers one approach.

Three recent studies show that meditation training reduces the stress of multitasking and reduces task-switching, that it improves white matter efficiency, and that the improved executive control may be largely to do with better emotional awareness and regulation.

Meditation may improve multitasking

I recently reported that developing skill at video action games doesn’t seem to improve general multitasking ability, but perhaps another approach might be more successful. Meditation has, of course, been garnering growing evidence that it can help improve attentional control. A new study extends that research to multitasking in a realistic work setting.

The study involved three groups of 12-15 female human resource managers, of whom one group received eight weeks of mindfulness-based meditation training, another received eight weeks of body relaxation training, and another initially received no training (control), before receiving the mindfulness training after the eight weeks.

Before and after each eight-week period, the participants were given a stressful test of their multitasking abilities, requiring them to use email, calendars, instant-messaging, telephone and word-processing tools to perform common office tasks (scheduling a meeting; finding a free conference room; writing a draft announcement of the meeting; eating snacks and drinking water; writing a memo proposing a creative agenda item for the meeting). Necessary information came from emails, instant messages, telephone calls, and knocks on the door. The participants had 20 minutes to complete the tasks.

The meditation group reported lower levels of stress during the multitasking test compared to the control and relaxation groups. They also spent more time on tasks and switched tasks less often, while taking no longer to complete the overall job than the others. Both meditation and relaxation groups showed improved memory for the tasks they were performing.

After the control group underwent the meditation training, their results matched those of the meditation group.

The meditation training emphasized:

  • control of attentional focus
  • focusing attention in the present moment or task
  • switching focus
  • breath and body awareness.

The relaxation training emphasized progressive tensing and relaxing of major muscle groups, aided by relaxation imagery.

It's interesting that overall time on task didn't change (the researchers remarked that the meditators didn't take any longer, but of course most of us would be looking for it to become shorter!), but I wouldn't read too much into it. The task was relatively brief. It would be interesting to see the effects over the course of, say, a day. Nor did the study look at how well the tasks were done.

But it is, of course, important that meditation training reduced task-switching and stress. Whether it also has a positive effect on overall time and quality of work is a question for another day.

IBMT improves white matter efficiency

A recent imaging study has found that four weeks of a form of mindfulness meditation called integrative body–mind training (IBMT) improved white matter efficiency in areas surrounding the anterior cingulate cortex, compared to controls given relaxation training.

The anterior cingulate is part of the brain network related to self-regulation. Deficits in activation in this part of the brain have been associated with attention deficit disorder, dementia, depression, schizophrenia, and other disorders.

Using data from a 2010 study involving 45 U.S. college students and another involving 68 Chinese students, researchers found that axon density (one factor in white matter efficiency) had improved after two weeks, but not myelin formation. After a month (about 11 hours of meditation), both had improved. Mood had improved by two weeks.

Previous studies involving computer-based training for improving working memory have found changes in myelination, but not axon density.

Meditators’ better cognitive control may be rooted in emotional regulation

Previous work has found that people who engage in meditation show higher levels of executive control on laboratory tasks.

An electrical signal called the Error Related Negativity (ERN) occurs in the brain within 100 ms of an error being committed. When meditators and non-meditators were given the Stroop Test, meditators not only tended to do better on the test, but their ERNs were stronger.

The interesting thing about this is that the best performers were those who scored highest on emotional acceptance. Mindful awareness was less important. It’s suggested that meditators may be able to control their behavior better not because of their sharper focus, but because they are more aware of their emotions and regulate them better.

Something to think about!

Levy, D. M., Wobbrock, J. O., Kaszniak, A. W., & Ostergren, M. (2012). The effects of mindfulness meditation training on multitasking in a high-stress information environment. Proceedings of Graphics Interface 2012, 45–52. Full text available at http://faculty.washington.edu/wobbrock/pubs/gi-12.02.pdf

[3051] Tang, Y. - Y., Lu Q., Fan M., Yang Y., & Posner M. I. (2012).  Mechanisms of white matter changes induced by meditation. Proceedings of the National Academy of Sciences. 109(26), 10570 - 10574.

[3052] Teper, R., & Inzlicht M. (2012).  Meditation, mindfulness and executive control: the importance of emotional acceptance and brain-based performance monitoring. Social Cognitive and Affective Neuroscience.

A comparison of skilled action gamers and non-gamers reveals that all that multitasking practice doesn’t make you any better at multitasking in general.

The research is pretty clear by this point: humans are not (with a few rare exceptions) designed to multitask. However, it has been suggested that the modern generation, with all the multitasking they do, may have been ‘re-wired’ to be more capable of this. A new study throws cold water on this idea.

The study involved 60 undergraduate students: 34 skilled action video game players (all male) and 26 non-gamers (19 men and 7 women). The students were given three visual tasks, each of which they did on its own and then again while answering Trivial Pursuit questions over a speakerphone (designed to mimic talking on a cellphone).

The tasks included a video driving game (“TrackMania”), a multiple-object tracking test (similar to a video version of a shell game), and a visual search task (hidden pictures puzzles from Highlights magazine).

While the gamers were (unsurprisingly) significantly better at the video driving game, the non-gamers were just as good as them at the other two tasks. In the dual-tasking scenarios, performance declined on all the tasks, with the driving task most affected. While the gamers were affected less by multitasking during the driving task compared to the non-gamers, there was no difference in the amount of decline between gamers and non-gamers on the other two tasks.

Clearly, the smaller effect of dual-tasking on the driving game for gamers is a product of their greater expertise at the driving game, rather than their ability to multitask better. It is well established that the more skilled you are at a task, the more automatic it becomes, and thus the less working memory capacity it will need. Working memory capacity / attention is the bottleneck that prevents us from being true multitaskers.

In other words, the oft-repeated (and somewhat depressing) conclusion remains: you can’t learn to multitask in general, you can only improve specific skills, enabling you to multitask reasonably well while doing those specific tasks.

[3001] Donohue, S., James B., Eslick A., & Mitroff S. (2012).  Cognitive pitfall! Videogame players are not immune to dual-task costs. Attention, Perception, & Psychophysics. 74(5), 803 - 809.

A new study reveals that older adults’ greater problems with multitasking stem from their impaired ability to disengage from an interrupting task and restore the original task.

Comparison of young adults (mean age 24.5) and older adults (mean age 69.1) in a visual memory test involving multitasking has pinpointed the greater problems older adults have with multitasking. The study involved participants viewing a natural scene and maintaining it in mind for 14.4 seconds. In the middle of the maintenance period, an image of a face popped up and participants were asked to determine its sex and age. They were then asked to recall the original scene.

As expected, older people had more difficulty with this. Brain scans revealed that, for both groups, the interruption caused their brains to disengage from the network maintaining the memory and reallocate resources to processing the face. But the younger adults had no trouble disengaging from that task as soon as it was completed and re-establishing connection with the memory maintenance network, while the older adults failed both to disengage from the interruption and to reestablish the network associated with the disrupted memory.

This finding adds to the evidence that an important (perhaps the most important) reason for cognitive decline in older adults is a growing inability to inhibit processing, and extends the processes to which that applies.

A new study further confirms the idea that a growing inability to ignore irrelevancies is behind age-related cognitive decline.

A study involving 125 younger (average age 19) and older (average age 69) adults has revealed that while younger adults showed better explicit learning, older adults were better at implicit learning. Implicit memory is our unconscious memory, which influences behavior without our awareness.

In the study, participants pressed buttons in response to the colors of words and random letter strings — only the colors were relevant, not the words themselves. They then completed word fragments. In one condition, they were told to use words from the earlier color task to complete the fragments (a test of explicit memory); in the other, this task wasn’t mentioned (a test of implicit memory).

Older adults showed better implicit memory than explicit memory, and better implicit memory than the younger adults, while the reverse was true for the younger adults. However, on a further test requiring younger participants to perform a number task simultaneously with the color task, the younger adults behaved like older ones.

The findings indicate that shallower, less focused processing goes on during multitasking, and also (though not inevitably!) with age. The fact that younger adults behaved like older ones when distracted points to the problem, for which we now have quite a body of evidence: with age, we tend to become more easily distracted.

A new study finds that overheard cell phone conversations are particularly distracting because we can't predict what will be said next.

Why are other people’s phone conversations so annoying? A new study suggests that hearing only half a conversation is more distracting than other kinds of conversations because we're missing the other side of the story and so can't predict the flow of the conversation. This finding suggests that driving a car might be impaired not only by the driver talking on the phone, but also by passengers talking on their phones.

It also tells us something about the way we listen to people talking — we’re actively predicting what the person is going to say next. This helps explain something I’ve always wondered about. Listen to people talking in a language you don’t know and you’re often amazed how fast they talk. Look at an audio recording of the soundwaves, and you’ll wonder how people know where one word ends and the next begins. Understanding what people are saying is not as easy as we believe it is — it takes a lot of experience. An important part of that experience, it seems, is learning the patterns of people’s speech, so we can predict what’s going to come next.

The study showed that people overhearing one side of a cell phone conversation did more poorly on everyday tasks demanding attention than people overhearing both sides of the conversation, which caused no drop in performance. By controlling for other acoustic factors, the researchers demonstrated that it was the unpredictable information content of the half-heard conversation that was so distracting.

Emberson, L. L., Lupyan, G., Goldstein, M. H., & Spivey, M. J. (2010). Overheard cell-phone conversations: When less speech is more distracting. Psychological Science, published online September 3, 2010. doi:10.1177/0956797610382126

A new study shows improvement in visual working memory in older adults following ten hours training with a commercial brain training program. The performance gains correlated with changes in brain activity.

While brain training programs can certainly improve your ability to do the task you’re practicing, there has been little evidence that this transfers to other tasks. In particular, the holy grail has been very broad transfer, through improvement in working memory. While there has been some evidence of this in pilot programs for children with ADHD, a new study is the first to show such improvement in older adults using a commercial brain training program.

A study involving 30 healthy adults aged 60 to 89 has demonstrated that ten hours of training on a computer game designed to boost visual perception improved perceptual abilities significantly, and also increased the accuracy of their visual working memory to the level of younger adults. There was a direct link between improved performance and changes in brain activity in the visual association cortex.

The computer game was one of those developed by Posit Science. The participants, half of whom underwent the training, were college educated. The training challenged players to discriminate between two different shapes of sine waves (S-shaped patterns) moving across the screen. The memory test (performed before and after training) involved watching dots move across the screen, followed by a short delay and then re-testing memory of the exact direction the dots had moved. Memory improvement was measured about one week after the end of training. The improvement did not, however, withstand multitasking, which is a particular problem for older adults.

A study assessing multitasking ability has found that a very few (5 out of 200) were unaffected by doing two complex tasks simultaneously (indeed their performance on the memory task improved!).

A study assessing the performance of 200 people on a simulated freeway driving task, with or without having a cell phone conversation that involved memorizing words and solving math problems, has found that, as expected, performance on both tasks was significantly impaired. However, for a very few, performance on these tasks was unaffected (indeed their performance on the memory task improved!). These few people — five of them (2.5%) — also performed substantially better on these tasks when performed alone.

Watson, J.M. & Strayer, D.L. 2010. Supertaskers: Profiles in extraordinary multitasking ability. Psychonomic Bulletin and Review. In Press.

Full text is available at http://www.psych.utah.edu/lab/appliedcognition/publications/supertaskers...

A study of medication administrations in hospitals has found scarily high rates of procedural and clinical failures, of which 2.7% were considered to be major errors — which were much more likely to occur after interruptions, particularly repeated interruptions. Nurse experience provided no protection and indeed was associated with higher procedural failure rates (common with procedural failures — expertise renders you more vulnerable, not less).

As we all know, being interrupted during a task greatly increases the chance we’ll go off-kilter (I discuss the worst circumstances and how you can minimize the risk of mistakes in my book Planning to remember). Medication errors occur as often as once per patient per day in some settings, and around one-third of harmful medication errors are thought to occur during medication administration.

Now an in-depth study involving 98 nurses at two Australian teaching hospitals over 505 hours has revealed that at least one procedural failure occurred in 74.4% of administrations, and at least one clinical failure in 25%. Each interruption was associated with a 12.1% increase in procedural failures and a 12.7% increase in clinical errors. Procedural failures include such errors as failing to check the patient's identification, record the medication administration, or use aseptic technique; clinical failures include such errors as wrong drug, dose, or route.

Interruptions occurred in over half of the 4,000 drug administrations. While most errors were rated as clinically insignificant, 2.7% were considered to be major errors — and these were much more likely to occur after interruptions, particularly after repeated interruptions. The risk of major error was 2.3% when there was no interruption; this rose to 4.7% with four interruptions. Nurse experience provided no protection against making a clinical error, and was indeed associated with higher procedural failure rates (this is common with procedural failures — expertise renders you more vulnerable, not less).

Older news items (pre-2010) brought over from the old website

Talking, walking and driving with cell phone users

Another cellphone-multitasking study! Compared with people walking alone, in pairs, or listening to their iPods, cell phone users were the group most prone to oblivious behavior: only 25% of them noticed a unicycling clown passing them on the street, compared to 51% of people walking alone, 61% of music player users, and 71% of people in pairs. In fact, cell phone users even had trouble walking: they walked more slowly, changed direction more often, were prone to weaving, and acknowledged other people more rarely.

Hyman, I.E.Jr, Boss, S. M., Wise, B. M., McKenzie, K. E., & Caggiano, J. M. (2009). Did you see the unicycling clown? Inattentional blindness while walking and talking on a cell phone. Applied Cognitive Psychology, 9999(9999), n/a. doi: 10.1002/acp.1638.

http://www.eurekalert.org/pub_releases/2009-10/w-tuc101909.php

Chronic media multitasking correlated with poor attention

Media multitasking — keeping tabs on email, texts, IM chat, the web — is routine among young people in particular. We know that humans can’t really multitask very successfully: what we actually do is switch tracks, and every switch carries a cost in efficiency. But what about the long-term costs of chronic multitasking? A study that selected, from a pool of 262 students, the 19 who multitasked the most and the 22 who multitasked least, found that those who multitasked least performed better on three cognitive tests thought to reflect the ability to ignore distracting information, to organize things in working memory, and to switch between tasks. The findings can’t tell us whether chronic media multitasking reduces these abilities, or whether people who are poor at these skills are more likely to succumb to chronic media multitasking, but they do demonstrate that chronic media multitasking is associated with this particular information-processing style.

[890] Ophir, E., Nass C., & Wagner A. D. (2009).  From the Cover: Cognitive control in media multitaskers. Proceedings of the National Academy of Sciences. 106(37), 15583 - 15587.

http://www.wired.com/wiredscience/2009/08/multitasking/

Cell phone ringtones can pose major distraction, impair recall

A cell phone ringing during a concert is not simply irritating. It appears that in a classroom, a cell phone left to ring for 30 seconds significantly affected students’ recall of the information presented just prior to and during the ringing. The effect was even greater when the phone’s owner rummaged frantically through her bag, and ringtones that were popular songs were greater distractions still. However, with repeated trials, people could be trained to reduce the negative effects; being warned about the distracting effects also helped people be less affected.

[1299] Shelton, J. T., Elliott E. M., Eaves S. D., & Exner A. L. (2009).  The distracting effects of a ringing cell phone: An investigation of the laboratory and the classroom setting. Journal of Environmental Psychology. 29(4), 513 - 521.

http://www.eurekalert.org/pub_releases/2009-06/wuis-cpr060209.php

Police with higher multitasking abilities less likely to shoot unarmed persons

In a study, police officers watched a video of an officer-involved shooting that resulted in the death of the officer, then took part in a computer-based simulation requiring split-second decisions whether to shoot or not, based on slides showing a person holding either a gun or a harmless object like a cell phone. Among those more stressed by the video, officers with lower working memory capacity were more likely to shoot unarmed people. Working memory capacity was not a significant factor for those who did not show heightened negative emotionality in response to the video.

[739] Kleider, H. M., Parrott D. J., & King T. Z. (2009).  Shooting behaviour: How working memory and negative emotionality influence police officer shoot decisions. Applied Cognitive Psychology. 9999(9999), n/a - n/a.

http://www.eurekalert.org/pub_releases/2009-03/gsu-pwh033009.php

Switchboard in the brain helps us learn and remember at the same time

We are often required to process new information while simultaneously recalling old information: in conversation, for example, we pay attention to what the other person is saying while preparing our own reply. A new study confirms what has been theorized: there is a bottleneck in our memory system preventing us from doing both simultaneously. Moreover, the study provides evidence that a specific region in the left prefrontal cortex can resolve the bottleneck, possibly by allowing rapid switching between learning and remembering. This is supported by earlier findings that patients with damage to this area have problems rapidly adapting to new situations and tend to persist in applying old rules. The same region is also affected in older adults.

[1355] Huijbers, W., Pennartz C. M., Cabeza R., & Daselaar S. M. (2009).  When Learning and Remembering Compete: A Functional MRI Study. PLoS Biol. 7(1), e1000011 - e1000011.

Full text is available at http://biology.plosjournals.org/perlserv/?request=get-document&doi=10.1371/journal.pbio.1000011
http://www.eurekalert.org/pub_releases/2009-01/plos-sit010909.php

Neural bottleneck found that thwarts multi-tasking

An imaging study has revealed just why we can’t do two things at once. The bottleneck appears to occur at the lateral frontal and prefrontal cortex and the superior frontal cortex. Both areas are known to play a critical role in cognitive control. These brain regions responded to tasks irrespective of the senses involved, and could be seen to 'queue' neural activity — that is, a response to the second task was postponed until the response to the first was completed. Such queuing occurred when two tasks were presented within 300 milliseconds of each other, but not when the time gap was longer.

[896] Dux, P. E., Ivanoff J., Asplund C. L., & Marois R. (2006).  Isolation of a Central Bottleneck of Information Processing with Time-Resolved fMRI. Neuron. 52(6), 1109 - 1120.

http://www.eurekalert.org/pub_releases/2007-01/vu-nbf011807.php

How multitasking impedes learning

A number of studies have come out in recent years demonstrating that the human brain can’t really do two things at once, and that when we do attempt to do so, performance is impaired. A new imaging study provides evidence that we tend to use a less efficient means of learning when distracted by another task. In the study, 14 younger adults (in their twenties) learned a simple classification task by trial and error. For one version of the task, they also had to keep a running mental count of high tones that they heard while learning. Imaging revealed that different brain regions were used for learning depending on whether the participants were distracted by the other task or not: the hippocampus was involved in the single-task learning, but not in the dual-task condition, when the striatum (a region implicated in procedural and habit learning) was active. Although the participants’ ability to learn didn’t appear to be affected at the time, the distraction did reduce their subsequent knowledge about the task during a follow-up session. In particular, on the task learned with the distraction, participants could not extrapolate from what they had learned.

[1273] Foerde, K., Knowlton B. J., & Poldrack R. A. (2006).  Modulation of competing memory systems by distraction. Proceedings of the National Academy of Sciences. 103(31), 11778 - 11783.

http://www.sciencedaily.com/releases/2006/07/060726083302.htm

Doing two things at once

Confirmation of what many of us know, and many more try to deny: you can't do two complex tasks simultaneously as well as you could do either one alone. Previous research has shown that when a single area of the brain, like the visual cortex, has to do two things at once, like tracking two objects, there is less brain activation than when it watches one thing at a time. This new study sought to find out whether something similar happens when two highly independent tasks, carried out in very different parts of the brain, are done concurrently. The two tasks used were language comprehension (carried out in the temporal lobe) and mental rotation (carried out in the parietal lobe). The language task alone activated 37 voxels of brain tissue. The mental rotation task alone also activated 37 voxels. But when both tasks were done at the same time, only 42 voxels were activated, rather than the sum of the two (74). While overall accuracy did not suffer, each task took longer to perform.
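To put that under-additivity in concrete terms, here is a quick back-of-envelope check of the figures above (a minimal sketch in plain Python; the variable names are mine, not the study's):

```python
# Voxels activated in each single-task condition (figures from the study)
language_only = 37
rotation_only = 37

# Voxels activated when both tasks were performed together
dual_task = 42

# If the two tasks simply summed, we would expect their combined activation:
expected_sum = language_only + rotation_only  # 74 voxels

# Shortfall of the observed dual-task activation relative to that prediction
shortfall = expected_sum - dual_task                       # 32 voxels
shortfall_pct = round(100 * shortfall / expected_sum)      # about 43%

print(f"Dual-task activation falls {shortfall_pct}% short of the additive prediction")
```

In other words, the brain devoted less than 60% of the expected combined resources to the two tasks, which fits the slower per-task performance the study observed.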

[2546] Just, M. A., Carpenter P. A., Keller T. A., Emery L., Zajac H., & Thulborn K. R. (2001).  Interdependence of Nonoverlapping Cortical Systems in Dual Cognitive Tasks. NeuroImage. 14(2), 417 - 426.

http://www.nytimes.com/2001/07/31/health/anatomy/31BRAI.html

The costs of multitasking

Technology increasingly tempts people to do more than one thing (and increasingly, more than one complicated thing) at a time. New scientific studies reveal the hidden costs of multitasking. In a study that looked at the amounts of time lost when people switched repeatedly between two tasks of varying complexity and familiarity, it was found that for all types of tasks, subjects lost time when they had to switch from one task to another, and time costs increased with the complexity of the tasks, so it took significantly longer to switch between more complex tasks. Time costs also were greater when subjects switched to tasks that were relatively unfamiliar. They got "up to speed" faster when they switched to tasks they knew better. These results suggest that executive control involves two distinct, complementary stages: goal shifting ("I want to do this now instead of that") and rule activation ("I'm turning off the rules for that and turning on the rules for this").

[1124] Rubinstein, J. S., Meyer D. E., & Evans J. E. (2001).  Executive Control of Cognitive Processes in Task Switching,. Journal of Experimental Psychology: Human Perception and Performance. 27(4), 763 - 797.

http://www.apa.org/journals/xhp/press_releases/august_2001/xhp274763.html

Brain's halves compete for attention

Claus Hilgetag, of Boston University, and his colleagues fired focused magnetic pulses through healthy subjects' skulls for 10 minutes to induce 'hemispatial neglect'. This condition, involving damage to one side of the brain, leaves patients unaware of objects in the opposite half of their visual field (which sends messages to the damaged half of the brain). The subjects showed the traditional symptoms of hemispatial neglect. They were worse at detecting objects opposite to the numb side of their brain, and worse still if there was also an object in the functioning half of the visual field. Yet numbed subjects were better at spotting objects with the unaffected half of their brains. This behavior confirms the idea that activity in one half of the brain usually eclipses that in the opposite half. The finding supports the idea that mental activity is a tussle between the brain's many different areas.

[720] Hilgetag, C. C., Theoret H., & Pascual-Leone A. (2001).  Enhanced visual spatial attention ipsilateral to rTMS-induced 'virtual lesions' of human parietal cortex. Nat Neurosci. 4(9), 953 - 957.

http://www.nature.com/nsu/010830/010830-5.html

Multitasking and driving

Why cell phones and driving don't mix

A host of studies have come out in recent years demonstrating that multitasking impairs performance and that talking on a cell phone while driving a car is a bad idea. A new study helps explain why. In two different experiments, subjects were found to be four times more distracted while preparing to speak or while speaking than while listening. The researcher expects the effect to be even stronger in real-life conversation. It was also found that subjects could complete the visual task in front of them more easily when the projected voice also came from in front of them. This suggests that it may be easier to cope when all the things requiring attention are in the same space.

[1132] Almor, A. (2008).  Why Does Language Interfere with Vision-Based Tasks?. Experimental Psychology (formerly "Zeitschrift für Experimentelle Psychologie"). 55(4), 260 - 268.

http://www.sciencedaily.com/releases/2008/05/080531084958.htm

Talking on a cellphone while driving as bad as drinking

Yet another study has come out rubbing it in that multitasking comes with a cost, and most particularly, that you shouldn’t do anything else while driving. This study demonstrates — shockingly — that drivers are actually worse off when using a cell phone than when legally drunk. The study had 40 volunteers use a driving simulator under four different conditions: once while legally intoxicated, once while talking on a hands-free cell phone, once while talking on a hand-held cell phone, and once with no distractions. There were differences in behavior: drunk drivers were more aggressive, tailgated more, and hit the brake pedal harder; cell phone drivers (whether hands-free or hand-held) took longer to hit the brakes and got in more accidents. But in both cases drivers were significantly impaired.

[1250] Strayer, D. L., Drews F. A., & Crouch D. J. (2006).  A Comparison of the Cell Phone Driver and the Drunk Driver. Human Factors: The Journal of the Human Factors and Ergonomics Society. 48(2), 381 - 391.

http://www.sciencentral.com/articles/view.htm3?article_id=218392815
http://www.eurekalert.org/pub_releases/2006-06/uou-doc062306.php
http://www.guardian.co.uk/mobile/article/0,,1809549,00.html

Performing even easy tasks impairs driving

In yet another demonstration that driving is impaired when doing anything else, a simulator study had students follow a lead car and brake as soon as they saw the lead car's brake lights illuminate. Students responded more slowly when they were also required to respond to a concurrent easy task, in which a stimulus (either a light flash in the lead car's rear window or an auditory tone) was randomly presented once or twice, and participants had to indicate how many times it occurred. The finding suggests that even using a hands-free device doesn’t make it okay to talk on a cell phone while driving.

[837] Levy, J., Pashler H., & Boer E. (2006).  Central interference in driving: is there any stopping the psychological refractory period?. Psychological Science: A Journal of the American Psychological Society / APS. 17(3), 228 - 235.

http://www.psychologicalscience.org/media/releases/2006/pr060303.cfm

Talking and listening impairs your ability to drive safely

A study involving almost 100 students driving virtual cars has provided evidence that people have greater difficulty maintaining a fixed speed when performing tasks that simulated conversing on a mobile phone. Speaking and listening were equally distracting.

[203] Kubose, T. T., Bock K., Dell G. S., Garnsey S. M., Kramer A. F., & Mayhugh J. (2006).  The effects of speech production and speech comprehension on simulated driving performance. Applied Cognitive Psychology. 20(1), 43 - 63.

http://www.eurekalert.org/pub_releases/2005-08/jws-cpu082205.php

Cell phone users drive like seniors

Another study on the evils of multitasking, in particular, of talking on a cellphone while driving. This one has a nice spin — the study found that when young motorists talk on cell phones, they drive like elderly people, moving and reacting more slowly and increasing their risk of accidents. Specifically, when 18- to 25-year-olds were placed in a driving simulator and talked on a cellular phone, they reacted to brake lights from a car in front of them as slowly as 65- to 74-year-olds who were not using a cell phone. Although elderly drivers became even slower to react to brake lights when they spoke on a cell phone, they were not as badly affected as had been expected. An earlier study by the same researchers found that motorists who talk on cell phones are more impaired than drunken drivers with blood alcohol levels exceeding 0.08.

[339] Strayer, D. L., & Drew F. A. (2004).  Profiles in Driver Distraction: Effects of Cell Phone Conversations on Younger and Older Drivers. Human Factors: The Journal of the Human Factors and Ergonomics Society. 46(4), 640 - 649.

http://www.eurekalert.org/pub_releases/2005-02/uou-cpu020105.php

Complex mental tasks interfere with drivers' ability to detect visual targets

The researchers studied 12 adults who drove for about four hours on the highway north from Madrid. During the journey, drivers listened to recorded audio messages with either abstract or concrete information (acquisition task), and later were required to freely reproduce what they had just listened to (production task). Although the more receptive tasks (listening and learning) had little or no effect on performance, there were significant differences in almost all of the measures of attention when drivers had to reproduce the content of the audio message they had just heard. Drivers also performed other tasks, either live or by phone. One was mental arithmetic (mentally converting between euros and Spanish pesetas), either with an experimenter in the car talking to the driver, or with the driver speaking by hands-free phone. Another was a memory task (giving detailed information about where they were and what they were doing at a given day and time). Both tasks significantly impaired the driver's ability to detect visual targets. In the experimental variation that examined the impact of hands-free phone conversation, message complexity made the difference: the relative safety of low-demand phone conversation, if hands-free and voice-operated, appeared to be about the same as that of live conversation. The findings also confirm that the risk of internal distraction (one’s own thoughts) is at least as relevant as external distraction.

Goldarecena, M.A.R., & González, L.M.N. (2003).  Mental Workload While Driving: Effects on Visual Search, Discrimination and Decision Making. Journal of Experimental Psychology: Applied. 9(2)

http://www.eurekalert.org/pub_releases/2003-06/apa-mcm062403.php

How attention works

Latest news

A recent study reveals that when we focus on searching for something, regions across the brain are pulled into the search. The study sheds light on how attention works.

In the experiments, brain activity was recorded as participants searched for people or vehicles in movie clips. Computational models showed how each of the roughly 50,000 locations in or near the cortex responded to each of the 935 categories of objects and actions seen in the movie clips.

When participants searched for humans, relatively more of the cortex was devoted to humans, and when they searched for vehicles, more of the cortex was devoted to vehicles.

Now this might not sound very surprising, but it appears to contradict our whole developing picture of the brain as having specialized areas for specific categories — instead, areas normally involved in recognizing categories such as plants or buildings were being switched to become attuned to humans or vehicles. The changes occurred across the brain, not just in those regions devoted to vision, and in fact, the largest changes were seen in the prefrontal cortex.

What this suggests is that categories are represented in highly organized, continuous maps, a ‘semantic space’, as it were. By increasing the representation of the target category (and related categories) at the expense of other categories, this semantic space is changed. Note that this did not come about in response to the detection of the target; it occurred in response to the direction of attention — the goal setting.

In other words, in the same way that gravity warps the space-time continuum (well, probably not the exact same way!), attention warps your mental continuum.
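As a loose sketch of that idea (my own analogy, not the study's actual computational model), you can think of each cortical location as having a tuning over categories, with attention shifting every location's tuning toward the target category:

```python
# Illustrative sketch only (my analogy, not the study's model): treat
# each cortical location as having a tuning over categories, and let
# attention shift that tuning toward the attended target category.

def warp(tuning, target, strength=0.3):
    """Shift a location's category tuning toward the attended target.

    `tuning` maps category name -> response weight (weights sum to 1).
    `strength` is a made-up parameter controlling how far tunings shift.
    """
    return {
        category: (1 - strength) * weight
                  + (strength if category == target else 0.0)
        for category, weight in tuning.items()
    }

# A location normally tuned mostly to buildings:
location = {"humans": 0.2, "vehicles": 0.1, "buildings": 0.7}

attending_humans = warp(location, "humans")
# The 'humans' weight grows while the others shrink proportionally, and
# total weight is conserved: representation of the target category
# expands at the expense of other categories, across every location.
```

The point of the sketch is that nothing is added or removed from the 'semantic space'; attention just redistributes it, which is why areas normally tuned to other categories can be recruited for the search target.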

You can play with an interactive online brain viewer which tries to portray this semantic space.

http://www.futurity.org/science-technology/to-find-whats-lost-brain-forms-search-party/

[3417] Çukur, T., Nishimoto S., Huth A. G., & Gallant J. L. (2013).  Attention during natural vision warps semantic representation across the human brain. Nature Neuroscience. Advance online publication.

Three classroom experiments have found that students who meditated before a psychology lecture scored better on a quiz that followed than students who did not meditate. Mood, relaxation, and class interest were not affected by the meditation training.

The noteworthy thing is that the meditation was very basic indeed: six minutes of written meditation exercises.

The effect was stronger in classes where more freshmen students were enrolled, suggesting that the greatest benefit is to those students who have most difficulty in concentrating (who are more likely to drop out).

The finding suggests the value in teaching some active self-reflection strategies to freshmen, and disadvantaged ones in particular.

It’s reasonable to speculate that more extensive training might increase the benefits.

And in another recent meditation study, a two week mindfulness course significantly improved both Graduate Record Exam reading comprehension scores and working memory capacity.

The study involved 48 undergrads who either attended the mindfulness course or a nutrition class. Each 45-minute class met eight times over two weeks. Mindfulness training was associated with a 16-percentile boost in GRE scores, on average. Mind wandering also significantly decreased. The healthy nutrition course had no effect on any of these factors.

http://medicalxpress.com/news/2013-04-meditating-grades.html (first study)

[3382] Ramsburg, J. T., & Youmans R. J. (Submitted).  Meditation in the Higher-Education Classroom: Meditation Training Improves Student Knowledge Retention during Lectures. Mindfulness. 1 - 11.

http://www.scientificamerican.com/podcast/episode.cfm?id=mindfulness-may-improve-test-scores-13-03-28 (second study)

[3380] Mrazek, M. D., Franklin M. S., Phillips D. T., Baird B., & Schooler J. W. (2013).  Mindfulness Training Improves Working Memory Capacity and GRE Performance While Reducing Mind Wandering. Psychological Science.

Why do we find it so hard to stay on task for long? A recent study uses a new technique to show how the task control network and the default mode network interact (and fight each other for control).

The task control network (which includes the dorsal anterior cingulate and bilateral anterior insula) regulates attention to surroundings, controlling your concentration on tasks. The default mode network, on the other hand, becomes active when a person seems to be doing 'nothing', and becomes less active when a task is being performed.

The study shows that we work better and faster the more thoroughly the default mode network is suppressed by the task control network. However, when the default mode network is not sufficiently suppressed, it sends signals to the task control network, interfering with its performance (and we lose focus).

Interestingly, in certain conditions, such as autism, depression, and mild cognitive impairment, the default mode network remains unchanged whether the person is performing a task or interacting with the environment. Additionally, deficits in the functioning of the default mode network have been implicated in age-related cognitive decline.

The findings add a new perspective to our ideas about attention. One of the ongoing questions concerns the relative importance of the two main aspects of attention: focus, and resisting distraction. A lot of work in recent years has indicated that a large part of age-related cognitive decline is a growing difficulty in resisting distraction. Similarly, there is some evidence that people with a low working memory capacity are less able to ignore irrelevant information.

This recent finding, then, suggests that these difficulties in ignoring distracting / irrelevant stimuli reflect the failure of the task control network to adequately suppress the activity of the default mode network. This puts the emphasis back on training for focus, and may help explain why meditation practices are effective in improving concentration.

http://www.futurity.org/science-technology/why-your-seesaw-brain-cant-stay-on-task/

[3384] Wen, X., Liu Y., Yao L., & Ding M. (2013).  Top-Down Regulation of Default Mode Activity in Spatial Visual Attention. The Journal of Neuroscience. 33(15), 6444 - 6453.

As many of you will know, I like nature-improves-mind stories. A new twist comes from a small Scottish study, in which participants were fitted with a mobile EEG monitor that enabled their brainwaves to be recorded as they walked for 25 minutes through one of three different urban settings: an urban shopping street, a path through green space, or a street in a busy commercial district. The monitors measured five ‘channels’ that are claimed to reflect “short-term excitement,” “frustration,” “engagement,” “arousal,” and “meditation level.”

Consistent with attention restoration theory, walkers entering the green zone showed lower frustration, engagement, and arousal, and higher meditation, and then showed higher engagement when moving out of it, suggesting that their time in a natural environment had ‘refreshed’ their brain.

http://richardcoyne.com/2013/03/09/the-brain-in-the-city/

[3375] Aspinall, P., Mavros P., Coyne R., & Roe J. (2013).  The urban brain: analysing outdoor physical activity with mobile EEG. British Journal of Sports Medicine.


A new study quantifies the degree to which tasks that involve actions in a precise sequence are vulnerable to interruptions.

In my book on remembering intentions, I spoke of how quickly and easily your thoughts can be derailed, leading to ‘action slips’ and, in the wrong circumstances, catastrophic mistakes. A new study shows how a three-second interruption while doing a task doubled the rate of sequence errors, while a four-second one tripled it.

The study involved 300 people, who were asked to perform a series of ordered steps on the computer. The steps had to be performed in a specific sequence, mnemonically encapsulated by UNRAVEL, with each letter identifying a step. The task rules for each step differed, requiring the participant to mentally shift gears each time. Moreover, a single task element could play multiple roles: for example, the letter U could signal the step, could be one of the two possible responses for that step, or could be a stimulus requiring a specific response when the step was N. Each step required the participant to choose between two possible responses based on one stimulus feature. Features included whether the character was a letter or a digit, whether it was underlined or italic, whether it was red or yellow, and whether the character outside the outline box appeared above or below it. There were also more cognitive features, such as whether the letter was near the beginning of the alphabet or not. The identifying mnemonic for each step was linked to its possible responses (e.g., N step: near or far; U step: underline or italic).

At various points, participants were very briefly interrupted. In the first experiment, they were asked to type four characters (letters or digits); in the second experiment, they were asked to type only two (a very brief interruption indeed!).

All of this was designed to set up a situation emulating “train of thought” operations, where correct performance depends on remembering where you are in the sequence, and to produce a situation where performance would have a reasonably high proportion of errors. One of the problems with this type of research has been the use of routine tasks that are generally performed with a high degree of accuracy, thus generating only small amounts of error data for analysis.

In both experiments, interruptions significantly increased the rate of sequence errors on the first trial after the interruption (but not on subsequent ones). Nonsequence errors were not affected. In the first experiment (four-character interruption), the sequence error rate on the first trial after the interruption was 5.8%, compared to 1.8% on subsequent trials. In the second experiment (two-character interruption), it was 4.3%.

The four-character interruptions lasted an average of 4.36s, and the two-character interruptions lasted an average of 2.76s.
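For the curious, here is a toy simulation of this kind of placekeeping failure. It is a sketch under my own assumptions: the slip probability and the ‘drift to a neighbouring step’ rule are invented for illustration, and are not taken from the study.

```python
import random

# Toy sketch (not the study's software): a placekeeping task in the
# spirit of UNRAVEL. Each trial applies the rule for the current step;
# an interruption can make the performer lose their place, producing a
# "sequence error" (doing the wrong step on the next trial).

STEPS = list("UNRAVEL")  # the seven steps, performed in fixed order

def next_step(current_index, interrupted, slip_prob, rng):
    """Return the index of the step actually performed next.

    Without an interruption, the correct next step is always chosen.
    After one, with probability `slip_prob` (a made-up parameter) the
    performer drifts to a step neighbouring the correct one instead.
    """
    correct = (current_index + 1) % len(STEPS)
    if interrupted and rng.random() < slip_prob:
        return (correct + rng.choice([-1, 1])) % len(STEPS)
    return correct

def run_trials(n_trials, interrupt_every, slip_prob, seed=0):
    """Count sequence errors on trials immediately after an interruption."""
    rng = random.Random(seed)
    idx, errors, post_interrupt = 0, 0, 0
    for t in range(1, n_trials + 1):
        interrupted = (t % interrupt_every == 0)
        performed = next_step(idx, interrupted, slip_prob, rng)
        correct = (idx + 1) % len(STEPS)
        if interrupted:
            post_interrupt += 1
            if performed != correct:
                errors += 1
        idx = correct  # assume the performer is re-cued to the right place

    return errors, post_interrupt

# slip_prob here mirrors the study's 5.8% post-interruption error rate,
# purely to make the simulated numbers comparable.
errors, n = run_trials(10_000, interrupt_every=10, slip_prob=0.058)
print(f"post-interruption sequence error rate: {errors / n:.3f}")
```

The sketch captures the study's key pattern: errors are confined to the first trial after an interruption, and their rate depends only on how disruptive the interruption is, not on the rest of the sequence.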

Whether the characters being typed were letters or digits made no difference, suggesting that the disruptive effects of interruptions are not overly sensitive to what’s being processed during the interruption (although of course these are not wildly different processes!).

The absence of effect on nonsequence errors shows that interruptions aren’t disrupting global attentional resources, but more specifically the placekeeping task.

As I discussed in my book, the step also made a significant difference — for sequence errors, middle steps showed higher error rates than end steps.

All of this confirms and quantifies how little it takes to derail us, and reminds us that, when engaged in tasks involving the precise sequence of sub-tasks (which so many tasks do), we need to be alert to the dangers of interruptions. This is, of course, particularly true for those working in life-critical areas, such as medicine.

[3207] Altmann, E. M., Gregory J., & Hambrick D. Z. (2013).  Momentary Interruptions Can Derail the Train of Thought. Journal of Experimental Psychology: General. Advance online publication.

One reason for the association between poverty and poorer cognition in children may lie in how poverty affects attention, with poor children tending to use more cognitive resources in monitoring the environment.

There have been a number of studies in the past few years showing how poverty affects brain development and function. One of these showed specifically that children of high and low socioeconomic status showed differences in brain wave patterns associated with an auditory selective attention task. This was thought to indicate that the groups were using different mechanisms to carry out the task, with the lower SES children employing extra resources to attend to irrelevant information.

In a follow-up study, 28 young adolescents (12-14 years) from two schools in neighborhoods of different socioeconomic status answered questions about their emotional and motivational state at various points during the day, and provided saliva samples to enable monitoring of cortisol levels. At one point in the afternoon, they also had their brainwaves monitored while they carried out an auditory selective attention task (hearing different sounds played simultaneously into both ears, they were required to press a button as fast as possible when they heard one particular sound).

While performance on the task was the same for both groups, there were, once again, differences in the brain wave patterns. Higher SES children exhibited far larger theta waves in the frontal lobes in response to sounds they attended to than to those they should have ignored, while lower SES children showed much larger theta waves to the unattended sounds than to the attended sounds.

While the lower SES children had higher cortisol levels throughout the school day, like the higher SES children, they showed little change around the task, suggesting neither group was particularly stressed by the task. Both groups also showed similar levels of boredom and motivation.

What the findings suggest is that lower SES children have to exert more cognitive control to avoid attending to irrelevant stimuli than higher SES children — perhaps because they live in more threatening environments.

A rat study indicates that acute stress disrupts feedback loops in the prefrontal cortex that may be keeping information alive in working memory.

Stress is a major cause of workplace accidents, and most of us are only too familiar with the effects of acute stress on our thinking. However, although the cognitive effects are only too clear, research has had little understanding of how stress has this effect. A new rat study sheds some light.

In the study, brain activity was monitored while five rats performed a working memory task during acute noise stress. Under these stressful conditions, the rats performed dramatically worse on their working memory task, with performance dropping from an average of 93% success to 65%.

The stress also significantly increased the discharge rate of a subset of neurons in the medial prefrontal cortex during two phases of the task: planning and assessment.

This brain region is vital for working memory and executive functions such as goal maintenance and emotion regulation. The results suggest that the firing and re-firing of these neurons keeps recent information ‘fresh’. When the re-firing is delayed, the information can be lost.

What seems to be happening is that the stress is causing these neurons to work even more furiously, but instead of performing their normal task — concentrating on keeping important information ‘alive’ during brief delays — they are reacting to all the other, distracting and less relevant, stimuli.

The findings contradict the view that stress simply suppresses prefrontal cortex activity, and suggest a different approach to treatment, one that emphasizes shutting out distractions.

The findings are also exciting from a theoretical viewpoint, suggesting as they do that this excitatory recursive activity of neurons within the prefrontal cortex provides the neural substrate for working memory. That is, we ‘hold’ information in the front of our mind through reverberating feedback loops within this network of neurons, which keep information alive during the approximately 1.5 seconds of our working memory ‘span’.

Emotionally arousing images that are remembered more vividly were seen more vividly. This may be because the amygdala focuses visual attention rather than more cognitive attention on the image.

We know that emotion affects memory. We know that attention affects perception (see, e.g., Visual perception heightened by meditation training; How mindset can improve vision). Now a new study ties it all together. The study shows that emotionally arousing experiences affect how well we see them, and this in turn affects how vividly we later recall them.

The study used images of positively and negatively arousing scenes and neutral scenes, which were overlaid with varying amounts of “visual noise” (like the ‘snow’ we used to see on old televisions). College students were asked to rate the amount of noise on each picture, relative to a specific image they used as a standard. There were 25 pictures in each category, and three levels of noise (less than standard, equal to standard, and more than standard).

Different groups explored different parameters: color; gray-scale; less noise (10%, 15%, 20% as compared to 35%, 45%, 55%); single exposure (each picture was only presented once, at one of the noise levels).

Regardless of the actual amount of noise, emotionally arousing pictures were consistently rated as significantly less noisy than neutral pictures, indicating that people were seeing them more clearly. This was true in all conditions.

Eye-tracking analysis ruled out the idea that people directed their attention differently for emotionally arousing images, but did show that more eye fixations were associated both with less noisy images and emotionally arousing ones. In other words, people were viewing emotionally important images as if they were less noisy.

One group of 22 students were given a 45-minute spatial working memory task after seeing the images, and then asked to write down all the details they could remember about the pictures they remembered seeing. The amount of detail they recalled was taken to be an indirect measure of vividness.

A second group of 27 students were called back after a week for a recognition test. They were shown 36 new images mixed in with the original 75 images, and asked to rate them as new, familiar, or recollected. They were also asked to rate the vividness of their recollection.

Although, overall, emotionally arousing pictures were not more likely to be remembered than neutral pictures, both experiments found that pictures originally seen as more vivid (less noise) were remembered more vividly and in more detail.

Brain scans from 31 students revealed that the amygdala was more active when looking at images rated as vivid, and this in turn increased activity in the visual cortex and in the posterior insula (which integrates sensations from the body). This suggests that the increased perceptual vividness is not simply a visual phenomenon, but part of a wider sensory activation.

There was another neural response to perceptual vividness: activity in the dorsolateral prefrontal cortex and the posterior parietal cortex was negatively correlated with vividness. This suggests that emotion is not simply increasing our attentional focus, it is instead changing it by reducing effortful attentional and executive processes in favor of more perceptual ones. This, perhaps, gives emotional memories their different ‘flavor’ compared to more neutral memories.

These findings clearly need more exploration before we know exactly what they mean, but the main finding from the study is that the vividness with which we recall some emotional experiences is rooted in the vividness with which we originally perceived them.

The study highlights how emotion can sharpen our attention, building on previous findings that emotional events are more easily detected when visibility is difficult, or attentional demands are high. It is also not inconsistent with a study I reported on last year, which found some information needs no repetition to be remembered because the amygdala decrees it of importance.

I should add, however, that the perceptual effect is not the whole story — the current study found that, although perceptual vividness is part of the reason for memories that are vividly remembered, emotional importance makes its own, independent, contribution. This contribution may occur after the event.

It’s suggested that individual differences in these reactions to emotionally enhanced vividness may underlie an individual’s vulnerability to post-traumatic stress disorder.

A large, long-running study suggests both that children with attention difficulties tend to spend more time playing video games, and that extensive video game playing is bad for attention.

A three-year study involving 3,034 Singaporean children and adolescents (aged 8-17) has found that those who spent more time playing video games subsequently had more attention problems, even when earlier attention problems, sex, age, race, and socioeconomic status were statistically controlled. Those who were more impulsive or had more attention problems subsequently spent more time playing video games, even when initial video game playing was statistically controlled. These findings suggest that the cause-effect relationship between video game playing and attention problems/impulsiveness goes both ways.

While the particular content may have an effect on attention problems and impulsiveness (violent games appeared to be an additional, independent, factor in attention problems), it was the total time spent that was more important.

Participants completed questionnaires about their video game playing habits annually for three years running. They also completed questionnaires aimed to measure attention and impulsiveness (the Current ADHD Symptoms Scale Self-Report, and the Barratt Impulsiveness Scale-11, respectively). Regarding attention, the children answered questions such as how often they "fail to give close attention to details or make careless mistakes" in their work or "blurt out answers before questions have been completed." For the impulsivity test, they selected points they felt described themselves, such as "I often make things worse because I act without thinking" or "I concentrate easily."

How does this finding relate to other evidence showing that playing video games can improve visual attention for rapid and accurate recognition of information from the environment? The answer lies in the different nature of attention — the attention needed for visual search differs in important ways from the attention necessary for sustained concentration in contexts that are often effortful and/or boring.

The example of many attention-challenged individuals makes this more understandable. Many parents of children with ADHD find that the only thing their child can concentrate on for a lengthy period is video games. The answer to that riddle is the rapidly changing nature of video games, and the way they are designed to grab the attention, with flashing lights and loud noises and moving images etc. The young person is not, therefore, improving their ability to focus in a way that is helpful for the school environment, or indeed for everyday life.

Unfortunately, this study suggests that it is precisely those people who are most in need of such ‘external supports’ for attention (‘grabbing’ stimuli such as lights and sounds and movement) — that is, those individuals who are least able to control their own attention — who are most likely to spend a lot of time playing such games. The games then weaken their attentional control even more, and so the cycle continues.

So this research answers the question ADHD parents tend to have: should I encourage my child to play video games a lot (given that it’s the only thing that holds their attention) or not? The answer, unfortunately, would seem to be: not. However, all is not lost. There are computer ‘games’ that are designed to help those with ADHD learn to concentrate in a way that is more useful (see the Topic collection on ADHD for more on this).

The American Academy of Pediatrics recommends one hour per day of total media screen time (including TV, DVDs, video games, Internet, iPad, etc.) for children in elementary school, and two hours for children in secondary school.

Gentile, D.A., Swing, E.L., Lim, C.G. & Khoo, A. 2012. Video game playing, attention problems, and impulsiveness: Evidence of bidirectional causality. Psychology of Popular Media Culture, Vol 1(1), Jan 2012, 62-70. doi: 10.1037/a0026969

Full text available at http://www.apa.org/pubs/journals/releases/ppm-1-1-62.pdf

Increasing evidence shows that perception is nowhere near the simple bottom-up process we once thought. Two recent perception studies add to the evidence.

Previous research has found practice improves your ability at distinguishing visual images that vary along one dimension, and that this learning is specific to the visual images you train on and quite durable. A new study extends the finding to more natural stimuli that vary on multiple dimensions.

In the small study, 9 participants learned to identify faces and 6 participants learned to identify “textures” (noise patterns) over the course of two hour-long sessions of 840 trials, held on consecutive days. Faces were cropped to show only internal features and were shown only briefly, so this was not a particularly easy task. Participants were then tested over a year later (range: 10-18 months; average 13 and 15 months, respectively).

On the test, participants were shown both images from training and new images that closely resembled them. While accuracy rates were high for the original images, they plummeted for the very similar new images, indicating that despite the length of time since they had seen the original images, they still retained much of the memory of them.

Although practice improved performance across nearly all items and for all people, there were significant differences between both participants and individual stimuli. More interestingly, individual differences (in both stimuli and people) were stable across sessions (e.g., if you were third-best on day 1, you were probably third-best on day 2 too, even though you were doing better). In other words, learning didn’t produce any qualitative changes in the representations of different items — practice had nearly the same effect on all; differences were rooted in initial difficulty of discriminating the pattern.

However, while it’s true that individual differences were stable, that doesn’t mean that every person improved their performance the exact same amount with the same amount of practice. Interestingly (and this is just from my eye-ball examination of the graphs), it looks like there was more individual variation among the group looking at noise patterns. This isn’t surprising. We all have a lot of experience discriminating faces; we’re all experts. This isn’t the case with the textures. For these, people had to ‘catch on’ to the features that were useful in discriminating patterns. You would expect more variability between people in how long it takes to work out a strategy, and how good that strategy is. Interestingly, three of the six people in the texture group actually performed better on the test than they had done on the second day of training, over a year ago. For the other three, and all nine of those in the face group, test performance was worse than it had been on the second day of training (but decidedly better than the first day).

The durability and specificity of this perceptual learning, the researchers point out, resembles that found in implicit memory and some types of sensory adaptation. It also indicates that such perceptual learning is not limited, as has been thought, to changes early in the visual pathway, but produces changes in a wider network of cortical neurons, particularly in the inferior temporal cortex.

The second, unrelated, study also bears on this issue of specificity.

We look at a scene and extract the general features — a crowd of people, violently riotous or riotously happy? — or we look at a scene and extract specific features that over time we use to build patterns about what goes with what. The first is called “statistical summary perception”; the second “statistical learning”.

A study designed to disentangle these two processes found that you can only do one or the other; you can’t derive both types of information at the same time. Thus, when people were shown grids of lines slanted to varying degrees, they could either assess whether the lines were generally leaning to the left or right, or they could learn to recognize pairs of lines that had been hidden repeatedly in the grids — but they couldn’t do both.

The fact that each of these tasks interfered with the other suggests that the two processes are fundamentally related.

Faces of people about whom something negative was known were perceived more quickly than faces of people about whom nothing, or something positive or neutral, was known.

Here’s a perception study with an intriguing twist. In my recent round-up of perception news I spoke of how images with people in them were more memorable, and of how some images ‘jump out’ at you. This study showed different images to each participant’s left and right eye at the same time, creating a contest between them. The amount of time it takes the participant to report seeing each image indicates the relative priority granted by the brain.

So, 66 college students were shown faces of people, and told something ‘gossipy’ about each one. The gossip could be negative, positive or neutral — for example, the person “threw a chair at a classmate”; “helped an elderly woman with her groceries”; “passed a man on the street.” These faces were then shown to one eye while the other eye saw a picture of a house.

The students had to press one button when they could see a face and another when they saw a house. As a control, some faces were used that the students had never seen. The students took the same length of time to register seeing the unknown faces and those about which they had been told neutral or positive information, but pictures of people about whom they had heard negative information registered around half a second quicker, and were looked at for longer.

A second experiment confirmed the findings, and showed that subjects saw the faces linked to negative gossip for longer than faces linked to upsetting personal experiences.

[2283] Anderson, E., Siegel E. H., Bliss-Moreau E., & Barrett L. F. (2011).  The Visual Impact of Gossip. Science. 332(6036), 1446 - 1448.

Memory begins with perception. Here's a round-up of recent research into visual perception.

Memory begins with perception. We can’t remember what we don’t perceive, and our memory of things is influenced by how we perceive them.

Our ability to process visual scenes has been the subject of considerable research. How do we process so many objects? Some animals do it by severely limiting what they perceive, but humans can perceive a vast array of features. We need some other way of filtering the information. Moreover, it’s greatly to our advantage that we can process the environment extremely quickly. So that’s two questions: how do we process so much, and so fast?

Brain region behind the scene-facilitation effect identified

A critical factor, research suggests, is our preferential processing of interacting objects — we pick out interacting objects more quickly than unrelated objects. A new study has now identified the region of the brain responsible for this ‘scene-facilitation effect’. To distinguish between the two leading contenders, the lateral occipital cortex and the intraparietal sulcus, transcranial magnetic stimulation was used to temporarily shut down each region in turn, while volunteers viewed brief flashes of object pairs (half of which were interacting with each other) and decided whether these glimpsed objects matched the presented label.

The scene-facilitation effect was eliminated when the lateral occipital cortex was out of action, while shutting down the intraparietal sulcus made no difference.

The little we need to identify a scene

The scene-facilitation effect is an example of how we filter and condense the information in our visual field, but we also work in the opposite direction — we extrapolate.

When ten volunteers had their brains scanned while they viewed color photographs and line drawings of six categories of scenes (beaches, city streets, forests, highways, mountains and offices), brain activity was nearly identical, regardless of whether participants were looking at a color photo or a simple line drawing. That is, researchers could tell, with a fair amount of success, what category of scene the participant was looking at, just by looking at the pattern of brain activity in the ventral visual cortex — regardless of whether the picture was a color photo or a line drawing. When they made mistakes, the mistakes were similar for the photos and the drawings.

In other words, most of what the brain is responding to in the photo is also evident in the line drawing.
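To give a sense of what “decoding” means here: classifying which scene category produced a given activity pattern can be as simple as comparing the pattern to the average pattern for each category. Here is a toy nearest-centroid sketch of that general idea (the data and function names are invented for illustration; the researchers’ actual method was more sophisticated):

```python
def train_centroids(patterns):
    """patterns: {category: [activity vectors]} -> mean vector per category."""
    return {cat: [sum(col) / len(vecs) for col in zip(*vecs)]
            for cat, vecs in patterns.items()}

def classify(centroids, vec):
    """Assign an activity pattern to the category with the nearest mean pattern."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda cat: dist(centroids[cat], vec))

# Invented three-voxel "activity patterns" for two scene categories.
centroids = train_centroids({
    "beach":  [[0.9, 0.1, 0.2], [0.8, 0.2, 0.1]],
    "forest": [[0.1, 0.9, 0.8], [0.2, 0.8, 0.9]],
})
# A new pattern, e.g. evoked by a line drawing of a beach:
print(classify(centroids, [0.7, 0.3, 0.2]))  # → beach
```

The point of the finding is that a classifier trained this way works about equally well whether the evoking image was a photo or a line drawing.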

In order to determine what those features were, the researchers progressively removed some of the lines in the line drawings. Even when up to 75% of the pixels in a line drawing were removed, participants could still identify what the scene was 60% of the time — as long as the important lines were left in, that is, those showing the broad contours of the scene. If only the short lines, representing details like leaves or windows, were left, participants became dramatically less accurate.

The findings cast doubt on some models of human visual perception which argue that people need specific information that is found in photographs to classify a scene.

Consistent with previous research, activity in the parahippocampal place area and the retrosplenial cortex was of greatest importance.

The brain performs visual search near optimally

Visual search involves picking out a target in a sea of other objects, and it’s one of the most important visual tasks we do. It’s also (not surprisingly, considering its evolutionary importance) something we are very, very good at. In fact, a new study reveals that we’re pretty near optimal.

Of course we make mistakes, and have failures. But these happen not because of our incompetence, but because of the complexity of the task.

In the study, participants were shown sets of lines that might or might not contain a line oriented in a particular way. Each screen was shown for only a fraction of a second, and the contrast of each line was randomly varied, making the target easier or more difficult to detect. The variation in contrast was designed as a model for an important variable in visual search — that of the reliability of the sensory information. Optimally, an observer would take into consideration the varying reliability of the items, giving the information different weights as a result of that perceived reliability. That weighted information would then be combined according to a specific integration rule. That had been calculated as the optimal process, and the performance of the participants matched that expectation.
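The weighting-and-integration rule described here is essentially inverse-variance (precision) weighting, the standard form of optimal cue combination. A minimal sketch of that idea (illustrative only; the study’s actual model operated on neural population responses):

```python
def integrate(estimates, variances):
    """Combine noisy estimates of the same quantity, weighting each by its
    reliability (inverse variance), as a near-optimal observer would."""
    weights = [1.0 / v for v in variances]
    return sum(w * e for w, e in zip(weights, estimates)) / sum(weights)

# Three orientation estimates (degrees); the low-contrast, high-variance
# reading contributes far less to the combined estimate.
combined = integrate([10.0, 14.0, 30.0], [1.0, 1.0, 16.0])
```

Note how the combined estimate stays close to the two reliable readings, rather than being dragged toward the noisy one as a simple average would be.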

The computer model that simulated this performance, and that matched the human performance, used groups of (simulated) neurons that responded differently to different line orientations.

In other words, it appears that we are able, very quickly, to integrate information coming from multiple locations while taking into account the reliability of each piece of information, and we do this through the integration of information coming from different groups of neurons, each group responding to different bits of information.

Another recent study into visual search found that, when people were preparing themselves to look for very familiar object categories (people or cars) in natural scenes, activity in their visual cortex was very similar to that shown when they were actually looking at the objects in the scenes. Moreover, the precise activity in the object-selective cortex (OSC) predicted performance in detecting the target, while preparatory activity in the early visual cortex (V1) was actually negatively related to search performance. It seems that these two regions of the visual cortex are linked to different search strategies, with the OSC involved in relatively abstract search preparation and V1 in more specific imagery-like preparation. Activity in the medial prefrontal cortex also reflected later target detection performance, suggesting that this may be the source of top-down processing.

The findings demonstrate the role of preparatory and top-down processes in guiding visual search (and remind us that these processes can bias us against seeing what we’re looking for, just as easily as they help us).

'Rewarding' objects can't be ignored

Another aspect of visual search is that some objects just leap out at us and capture our attention. Loud noises and fast movement are the most obvious of the attributes that snag our gaze. These are potential threats, and so it’s no wonder we’ve evolved to pay attention to such things. We’re also drawn to potential rewards: prospective mates, food, liquids.

What about rewards that are only temporarily rewarding? Do we move on easily, able to ignore previously rewarding items as soon as they lose their relevance?

In a recent study, people spent an hour searching for red or green circles in an array of many differently colored circles. The red and green circles were always followed by a monetary reward (10 cents for one color, and 1 cent for the other). Afterwards, participants were asked to search for particular shapes, and color was no longer relevant or rewarded. However, when, occasionally, one of the shapes was red or green, reaction times slowed, demonstrating that these were distracting (even though the participants had been told to ignore this if it happened).

This distraction persisted for weeks after the original learning session. Interestingly, people who scored highly on a questionnaire measuring impulsivity were more likely to be distracted by these no-longer-relevant items.

The findings indicate that stimuli previously associated with reward continue to capture attention regardless of their relevance to the task in hand. This may help in the development of more effective treatments for drug addiction, obesity and ADHD.

People make an image memorable

What makes an image memorable? It’s always been assumed that visual memory is too subjective to allow a general answer to this question. But an internet study has found remarkable consistency among hundreds of people who viewed images from a collection of about 10,000 images, some of which were repeated, and decided whether or not they had seen the image before. The responses generated a memorability rating for each image. Once this had been collated, the researchers made "memorability maps" of each image by asking people to label all the objects in the images. These maps were then used to determine which objects make an image memorable.

In general, images with people in them were the most memorable, followed by images of human-scale space — such as the produce aisle of a grocery store — and close-ups of objects. Least memorable were natural landscapes, although those could be memorable if they featured an unexpected element, such as shrubbery trimmed into an unusual shape.

Computer modeling then allowed various features for each image (such as color, or the distribution of edges) to be correlated with the image's memorability. The end result was an algorithm that can predict memorability of images the computational model hasn't "seen" before.
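The procedure — derive a memorability score per image from repeat-recognition rates, then fit a model from image features to those scores — can be sketched minimally like this (toy data and a single invented feature; the actual study used many features and a more powerful learner):

```python
def memorability_scores(responses):
    """responses: {image: (times_recognized, times_repeated)} ->
    fraction of repeat viewings on which the image was recognized."""
    return {img: hits / shows for img, (hits, shows) in responses.items()}

def fit_line(xs, ys):
    """Ordinary least squares for a single feature: y ~ a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Toy data: a 'contains people' feature (0 or 1) vs. measured memorability.
scores = memorability_scores({"landscape": (20, 100), "crowd": (80, 100)})
a, b = fit_line([0.0, 1.0], [scores["landscape"], scores["crowd"]])

def predict(feature):
    """Predicted memorability for an unseen image."""
    return a * feature + b
```

The real contribution of the study is exactly this last step: once the feature-to-memorability mapping is fit, memorability can be predicted for images no one has rated.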

The researchers are now doing a follow-up study to test longer-term memorability, as well as working on adding more detailed descriptions of image content.

[2291] Kim, J. G., Biederman I., & Juan C. - H. (2011).  The Benefit of Object Interactions Arises in the Lateral Occipital Cortex Independent of Attentional Modulation from the Intraparietal Sulcus: A Transcranial Magnetic Stimulation Study. The Journal of Neuroscience. 31(22), 8320 - 8324.

[2303] Walther, D. B., Chai B., Caddigan E., Beck D. M., & Fei-Fei L. (2011).  Simple line drawings suffice for functional MRI decoding of natural scene categories. Proceedings of the National Academy of Sciences. 108(23), 9661 - 9666.

[2292] Ma, W. J., Navalpakkam V., Beck J. M., van den Berg R., & Pouget A. (2011).  Behavior and neural basis of near-optimal visual search. Nat Neurosci. 14(6), 783 - 790.

[2323] Peelen, M. V., & Kastner S. (2011).  A neural basis for real-world visual search in human occipitotemporal cortex. Proceedings of the National Academy of Sciences. 108(29), 12125 - 12130.

[2318] Anderson, B. A., Laurent P. A., & Yantis S. (2011).  Value-driven attentional capture. Proceedings of the National Academy of Sciences. 108(25), 10367 - 10371.

Isola, P., Xiao, J., Oliva, A. & Torralba, A. 2011. What makes an image memorable? Paper presented at the IEEE Conference on Computer Vision and Pattern Recognition, June 20-25, Colorado Springs.

 

A new study provides further support for a three-tier model of working memory, where the core only holds one item, the next layer holds up to three, and further items can be passively held ready.

Readers of my books and articles will know that working memory is something I get quite excited about. It’s hard to overstate the importance of working memory in our lives. Now a new study tells us that working memory is in fact made up of three areas: a core focusing on one active item, a surrounding area holding at least three more active items (called the outer store), and a wider region containing passive items that have been tagged for later retrieval. Moreover, the core region (the “focus of attention”) has three roles (one more than thought) — it not only directs attention to an item and retrieves it, but it also updates it later, if required.

In two experiments, 49 participants were presented with up to four types of colored shapes on a computer screen, with particular types (e.g., a red square) confined to a particular column. Each colored shape was displayed in sequence at the beginning with a number from 1 to 4, and then instances of the shapes appeared one by one. The participants’ task was to keep a count of each shape. Different sequences involved only one shape, or two, three, or four shapes. Participants controlled how quickly the shapes appeared.

Unsurprisingly, participants were slower and less accurate as the set size (number of shape types) increased. There was a significant jump in response time when the set size increased from one to two, and a steady increase in RT and decline in accuracy as set size increased from two to four. Responses were also notably slower when the stimulus changed and participants had to shift their focus from one type of shape to another (this is called the switch cost). Moreover, this switch cost increased linearly with set size, at a rate of about 240 ms/item.
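That linear relationship is simple enough to state as a formula: switch cost ≈ slope × items in the outer store. A sketch (the one-item-in-focus assumption comes from the model; the slope is the study’s approximate 240 ms figure):

```python
SWITCH_COST_PER_ITEM_MS = 240  # approximate slope reported in the study

def switch_cost_ms(set_size):
    """Predicted extra response time on a switch trial, assuming one item
    sits in the focus of attention and the remaining set_size - 1 items
    must be searched for serially in the outer store."""
    outer_items = max(set_size - 1, 0)
    return SWITCH_COST_PER_ITEM_MS * outer_items

# With four shape types, three items wait in the outer store:
print(switch_cost_ms(4))  # → 720
```

A single shape type predicts no switch cost at all, which is consistent with switching only being costly when something must be retrieved from outside the focus.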

Without getting into all the ins and outs of this experiment and the ones leading up to it, what the findings all point to is a picture of working memory in which:

  • the focus contains only one item,
  • the area outside the focus contains up to three items,
  • this outer store has to be searched before the item can be retrieved,
  • more recent items in the outer store are not found any more quickly than older items in the outer store,
  • focus-switch costs increase as a direct function of the number of items in the outer store,
  • there is (as earlier theorized) a third level of working memory, containing passive items, that is quite separate from the two areas of active storage,
  • the number of passive items does not influence either response time or accuracy for recalling active items.

It is still unclear whether the passive third layer is really a part of working memory, or part of long-term memory.

The findings do point to the need to use active loads rather than passive ones, when conducting experiments that manipulate cognitive load (for example, requiring subjects to frequently update items in working memory, rather than simply hold some items in memory while carrying out another task).

The two measures of working memory capacity appear to be fully independent, and only one of them is related to intelligence.

The number of items a person can hold in short-term memory is strongly correlated with their IQ. But short-term memory has been recently found to vary along another dimension as well: some people remember (‘see’) the items in short-term memory more clearly and precisely than other people. This discovery has led to the hypothesis that both of these factors should be considered when measuring working memory capacity. But do both these aspects correlate with fluid intelligence?

A new study presented 79 students with screen displays fleetingly showing either four or eight items. After a one-second blank screen, one item was returned and the subject asked whether that object had been in a particular location previously. Their ability to detect large and small changes in the items provided an estimate of how many items the individual could hold in working memory, and how clearly they remembered them. These measures were compared with individuals’ performance on standard measures of fluid intelligence.

Analysis of the data found that these two measures of working memory — number and clarity — are completely independent of each other, and that it was the number factor only that correlated with intelligence.

This is not to say that clarity is unimportant! Only that it is not related to intelligence.

The intraparietal sulcus appears to be a hub for connecting the different sensory-processing areas as well as higher-order processes, and may be key to attention problems.

If our brains are full of clusters of neurons resolutely only responding to specific features (as suggested in my earlier report), how do we bring it all together, and how do we switch from one point of interest to another? A new study using resting state data from 58 healthy adolescents and young adults has found that the intraparietal sulcus, situated at the intersection of visual, somatosensory, and auditory association cortices and known to be a key area for processing attention, contains a miniature map of all the things we can pay attention to (visual, auditory, motor stimuli, etc.).

Moreover, this map is copied in at least 13 other places in the brain, all of which are connected to the intraparietal sulcus. Each copy appears to do something different with the information. For instance, one map processes eye movements while another processes analytical information. This map of the world may be a fundamental building block for how information is represented in the brain.

There were also distinct clusters within the intraparietal sulcus that showed different levels of connectivity to auditory, visual, somatosensory, and default mode networks, suggesting they are specialized for different sensory modalities.

The findings add to our understanding of how we can shift our attention so precisely, and may eventually help us devise ways of treating disorders where attention processing is off, such as autism, attention deficit disorder, and schizophrenia.

[1976] Anderson, J. S., Ferguson M. A., Lopez-Larson M., & Yurgelun-Todd D. (2010).  Topographic maps of multisensory attention. Proceedings of the National Academy of Sciences. 107(46), 20110 - 20114.

A new study adds to the evidence that our ability to focus on one thing and ignore irrelevant information gets worse with age, and that this may be a crucial factor in age-related cognitive impairment.

A study involving young (average age 22) and older adults (average age 77) showed participants pictures of overlapping faces and places (houses and buildings) and asked them to identify the gender of the person. While the young adults showed activity in the brain region for processing faces (fusiform face area) but not in the brain region for processing places (parahippocampal place area), both regions were active in the older adults. Additionally, on a surprise memory test 10 minutes later, older adults who showed greater activation in the place area were more likely to recognize which face was originally paired with which house.

These findings confirm earlier research showing that older adults become less capable of ignoring irrelevant information, and show that this distracting information doesn’t merely interfere with what you’re trying to attend to, but is encoded in memory along with that information.

Every moment a multitude of stimuli compete for our attention. Just how this competition is resolved, and how we control it, is not known. But a new study adds to our understanding.

Following on from earlier studies that found individual neurons were associated with very specific memories (such as a particular person), new research has shown that we can actually regulate the activity of specific neurons, increasing the firing rate of some while decreasing the rate of others.

The study involved 12 patients implanted with deep electrodes for intractable epilepsy. On the basis of each individual’s interests, four images were selected for each patient. Each of these images was associated with the firing of specific neurons in the mediotemporal lobe. The firing of these neurons was hooked up to a computer, allowing the patients to make their particular images appear by thinking of them. When another image appeared on top of the image as a distraction, creating a composite image, patients were asked to focus on their particular image, brightening the target image while the distractor image faded. The patients were successful 70% of the time in brightening their target image. This was primarily associated with increased firing of the specific neurons associated with that image.

I should emphasize that the use of a composite image meant that the participants had to rely on a mental representation rather than the sensory stimuli, at least initially. Moreover, when the feedback given was fake — that is, the patients’ efforts were no longer linked to the behavior of the image on the screen — success rates fell dramatically, demonstrating that their success was due to a conscious, directed action.

Different patients used different strategies to focus their attention. While some simply thought of the picture, others repeated the name of the image out loud or focused their gaze on a particular aspect of the image.

Resolving the competition of multiple internal and external stimuli is a process which involves a number of different levels and regions, but these findings help us understand at least some of the process that is under our conscious control. It would be interesting to know more about the relative effectiveness of the different strategies people used, but this was not the focus of the study. It would also be very interesting to compare effectiveness at this task across age, but of course this procedure is invasive and can only be used in special cases.

The study offers hope for building better brain-machine interfaces.

Two studies suggest that ADHD is being over-diagnosed among students who are the youngest in their classes.

Two independent studies have found that students whose birthdays fell just before their school's age enrollment cutoff date—making them among the youngest in their class—had a substantially higher rate of ADHD diagnoses than students who were born later. One study, using data from the Early Childhood Longitudinal Study-Kindergarten cohort, found that ADHD diagnoses among children born just prior to their state’s kindergarten eligibility cutoff date are more than 60% more prevalent than among those born just afterward (who therefore waited an extra year to begin school). Moreover, such children are more than twice as likely to be taking Ritalin in grades 5 and 8. While the child’s school starting age strongly affects teachers’ perceptions of ADHD symptoms, it only weakly affects parents’ perceptions (parents are more likely to compare their child with others of the same age, rather than with others in the same class). The other study, using data from the 1997 to 2006 National Health Interview Survey, found that 9.7% of those born just before the cutoff date were diagnosed with ADHD compared to 7.6% of those born just after.

The two findings suggest that many of these children are mistakenly being diagnosed with ADHD simply because they are less emotionally or intellectually mature than their (older) classmates.

A new study finds a decision-making advantage to the increased difficulty older brains have in filtering out irrelevant information.

It’s now well established that older brains tend to find it harder to filter out irrelevant information. But now a new study suggests that that isn’t all bad. The study compared the performance of 24 younger adults (17-29) and 24 older adults (60-73) on two memory tasks separated by a 10-minute break. In the first task, they were shown pictures overlapped by irrelevant words, told to ignore the words and concentrate on the pictures only, and to respond every time the same picture appeared twice in a row. The second task required them to remember how the pictures and words were paired together in the first task. The older adults showed a 30% advantage over younger adults in their memory for these picture-word pairs. It’s suggested that older adults encode extraneous co-occurrences in the environment and transfer this knowledge to subsequent tasks, improving their ability to make decisions.

[276] Campbell, K. L., Hasher L., & Thomas R. C. (2010).  Hyper-binding: a unique age effect. Psychological Science: A Journal of the American Psychological Society / APS. 21(3), 399 - 405.

Full text available at http://pss.sagepub.com/content/early/2010/01/15/0956797609359910.full

A paralyzed patient implanted with a brain-computer interface device has allowed scientists to determine the relationship between brain waves and attention.

A paralyzed patient implanted with a brain-computer interface device has allowed scientists to determine the relationship between brain waves and attention. Recordings found a characteristic pattern of activity as the subject paid close attention to the task. High-frequency beta oscillations increased in strength as the subject waited for the relevant instruction, with peaks of activity occurring just before each instructional cue. After receiving the relevant instruction and before the subject moved the cursor, the beta oscillation intensity fell dramatically to lower levels through the remaining, irrelevant instructions. On the other hand, the slower delta oscillation adjusted its frequency to mirror the timing of each instructional cue. The authors suggest that this "internal metronome" function may help fine-tune beta oscillations, so that maximum attention is paid at the appropriate time.

A new study suggests that our memory for visual scenes may depend not on how much attention we’ve paid to them or on what a scene contains, but on when the scene is presented.

A new study suggests that our memory for visual scenes may not depend on how much attention we’ve paid to it or what a scene contains, but when the scene is presented. In the study, participants performed an attention-demanding letter-identification task while also viewing a rapid sequence of full-field photographs of urban and natural scenes. They were then tested on their memory of the scenes. It was found that, even though their attention was focused on the target letter, only those scenes presented at the same time as a target letter (rather than a distractor letter) were reliably remembered. The results point to a brain mechanism that automatically encodes certain visual features into memory at behaviorally relevant points in time, regardless of the spatial focus of attention.

[321] Lin, J. Y., Pype A. D., Murray S. O., & Boynton G. M. (2010).  Enhanced Memory for Scenes Presented at Behaviorally Relevant Points in Time. PLoS Biol. 8(3), e1000337 - e1000337.

Full text available at doi:10.1371/journal.pbio.1000337

Older news items (pre-2010) brought over from the old website

More light shed on distinction between long and short-term memory

The once clear-cut distinction between long- and short-term memory has increasingly come under fire in recent years. A new study involving patients with a specific form of epilepsy called 'temporal lobe epilepsy with bilateral hippocampal sclerosis' has now clarified the distinction. The patients, who all had severely compromised hippocampi, were asked to try and memorize photographic images depicting normal scenes. Their memory was tested and brain activity recorded after five seconds or 60 minutes. As expected, the patients could not remember the images after 60 minutes, but could distinguish seen-before images from new ones at five seconds. However, their memory was poor when asked to recall details about the images. Brain activity showed that short-term memory for details required the coordinated activity of a network of visual and temporal brain areas, whereas standard short-term memory drew on a different network, involving frontal and parietal regions, and independent of the hippocampus.

[996] Cashdollar, N., Malecki U., Rugg-Gunn F. J., Duncan J. S., Lavie N., & Duzel E. (2009).  Hippocampus-dependent and -independent theta-networks of active maintenance. Proceedings of the National Academy of Sciences. 106(48), 20493 - 20498.

http://www.eurekalert.org/pub_releases/2009-11/ucl-tal110909.php

Attention is more about reducing the noticeability of the unattended

No visual scene can be processed in one fell swoop — we piece it together from the bits we pay attention to (which explains why we sometimes miss objects completely, and can’t understand how we could have missed them when we finally notice them). We know that paying attention to something increases the firing rate of neurons tuned for that type of stimulus, and until a recent study we thought that was the main process underlying our improved perception when we focus on something. However a macaque study has found that the main cause — perhaps four times as important — is a reduction in the background noise, allowing the information coming in to be much more noticeable.

[1093] Mitchell, J. F., Sundberg K. A., & Reynolds J. H. (2009).  Spatial Attention Decorrelates Intrinsic Activity Fluctuations in Macaque Area V4. Neuron. 63(6), 879 - 888.

http://esciencenews.com/articles/2009/09/23/rising.above.din

Brainwaves regulate our searching

A long-standing question concerns how we search complex visual scenes. For example, when you enter a crowded room, how do you go about searching for your friends? Now a monkey study reveals that visual attention jumps sequentially from point to point, shifting focus around 25 times per second. Intriguingly, and unexpectedly, it seems this timing is determined by brainwaves. The finding connects speed of thinking with the oscillation frequency of brainwaves, giving a new significance to brainwaves (whose function is rather mysterious, but of increasing interest to researchers), and also suggesting an innovative approach to improving attention.

[1118] Buschman, T. J., & Miller E. K. (2009).  Serial, Covert Shifts of Attention during Visual Search Are Reflected by the Frontal Eye Fields and Correlated with Population Oscillations. Neuron. 63(3), 386 - 396.

http://www.eurekalert.org/pub_releases/2009-08/miot-tme080609.php

Ability to ignore distraction most important for attention

Confirming an earlier study, a series of four experiments involving 84 students has found that students with high working memory capacity were noticeably better able to ignore distractions and stay focused on their tasks. The findings provide more evidence that the poor attentional capacity of individuals with low working memory capacity results from a reduced ability to ignore attentional capture (stimuli that involuntarily “capture” your attention, like a loud noise or a suddenly appearing object), rather than an inability to focus.

[828] Fukuda, K., & Vogel E. K. (2009).  Human Variation in Overriding Attentional Capture. J. Neurosci.. 29(27), 8726 - 8733.

http://www.eurekalert.org/pub_releases/2009-08/uoo-bbo080609.php

Individual differences in working memory capacity depend on two factors

A new computer model adds to our understanding of working memory, by showing that working memory can be increased by the action of the prefrontal cortex in reinforcing activity in the parietal cortex (where the information is temporarily stored). The idea is that the prefrontal cortex sends out a brief stimulus to the parietal cortex that generates a reverberating activation in a small subpopulation of neurons, while inhibitory interactions with neurons further away prevents activation of the entire network. This lateral inhibition is also responsible for limiting the mnemonic capacity of the parietal network (i.e. provides the limit on your working memory capacity). The model has received confirmatory evidence from an imaging study involving 25 volunteers. It was found that individual differences in performance on a short-term visual memory task were correlated with the degree to which the dorsolateral prefrontal cortex was activated and its interconnection with the parietal cortex. In other words, your working memory capacity is determined by both storage capacity (in the posterior parietal cortex) and prefrontal top-down control. The findings may help in the development of ways to improve working memory capacity, particularly when working memory is damaged.

[441] Edin, F., Klingberg T., Johansson P., McNab F., Tegner J., & Compte A. (2009).  Mechanism for top-down control of working memory capacity. Proceedings of the National Academy of Sciences. 106(16), 6802 - 6807.

http://www.eurekalert.org/pub_releases/2009-04/i-id-aot040109.php

Some short-term memories die suddenly, no fading

We don’t remember everything; the idea of memory as a video faithfully recording every aspect of everything we have ever experienced is a myth. Every day we look at the world and hold much of what we see for no more than a few seconds before discarding it as no longer needed. Until now it was thought that these fleeting visual memories faded away, gradually becoming more imprecise. Now it seems that such memories remain quite accurate as long as they exist (about 4 seconds), and then simply vanish. The study involved testing memory for shapes and colors in 12 adults, and it was found that the memory for shape or color was either there or not there – the answers either correct or random guesses. The probability of remembering correctly decreased between 4 and 10 seconds.

[941] Zhang, W., & Luck S. J. (2009).  Sudden death and gradual decay in visual working memory. Psychological Science: A Journal of the American Psychological Society / APS. 20(4), 423 - 428.

http://www.eurekalert.org/pub_releases/2009-04/uoc--ssm042809.php
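The “sudden death” account lends itself to a toy simulation. In the sketch below (plain Python; the decay rate and response noise are invented parameters, not values fitted in the study), each memory either survives the delay intact or vanishes entirely, and the signature pattern falls out: error among surviving memories stays flat across delays, while overall error climbs only because more memories have vanished.

```python
import math
import random

def simulate_trial(delay_s, death_rate=0.15, noise_sd=10.0):
    """One color-report trial under a 'sudden death' account. The memory
    either survives the delay intact or disappears outright; there is no
    gradual blurring of the stored value. Parameters are illustrative."""
    target = random.uniform(0, 360)                      # studied color (degrees)
    survives = random.random() < math.exp(-death_rate * delay_s)
    if survives:
        response = (target + random.gauss(0, noise_sd)) % 360
    else:
        response = random.uniform(0, 360)                # memory gone: pure guess
    error = abs((response - target + 180) % 360 - 180)   # circular error, 0-180
    return survives, error

def mean_errors(delay_s, n=20000):
    """Mean response error overall, and among trials whose memory survived."""
    alive_err, alive_n, total_err = 0.0, 0, 0.0
    for _ in range(n):
        survives, err = simulate_trial(delay_s)
        total_err += err
        if survives:
            alive_err += err
            alive_n += 1
    return total_err / n, alive_err / max(alive_n, 1)
```

Under a gradual-fading account, by contrast, the error of surviving memories would grow with delay, which is the pattern the study argues against.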

Where visual short-term memory occurs

Working memory used to be thought of as a separate ‘store’, and now tends to be regarded more as a process, a state of mind. Such a conception suggests that it may occur in the same regions of the brain as long-term memory, but in a pattern of activity that is somehow different from LTM. However, there has been little evidence for that so far. Now a new study has found that information in WM may indeed be stored via sustained, but low, activity in sensory areas. The study involved volunteers being shown an image for one second and instructed to remember either the color or the orientation of the image. After then looking at a blank screen for 10 seconds, they were shown another image and asked whether it was identical in color/orientation to the first image. Brain activity in the primary visual cortex was scanned during the 10 second delay, revealing that areas normally involved in processing color and orientation were active during that time, and that the pattern only contained the targeted information (color or orientation).

[1032] Serences, J. T., Ester E. F., Vogel E. K., & Awh E. (2009).  Stimulus-Specific Delay Activity in Human Primary Visual Cortex. Psychological Science. 20(2), 207 - 214.

http://www.eurekalert.org/pub_releases/2009-02/afps-sih022009.php
http://www.eurekalert.org/pub_releases/2009-02/uoo-dsm022009.php

The finding is consistent with that of another study published this month, in which participants were shown two examples of simple striped patterns at different orientations and told to hold one or the other of the orientations in their mind while being scanned. Orientation is one of the first and most basic pieces of visual information coded and processed by the brain. Using a new decoding technique, researchers were able to predict with 80% accuracy which of the two orientations was being remembered 11 seconds after seeing a stimulus, from the activity patterns in the visual areas. This was true even when the overall level of activity in these visual areas was very weak, no different from when participants were looking at a blank screen.

[652] Harrison, S. A., & Tong F. (2009).  Decoding reveals the contents of visual working memory in early visual areas. Nature. 458(7238), 632 - 635.

http://www.eurekalert.org/pub_releases/2009-02/vu-edi021709.php
http://www.physorg.com/news154186809.html

Stress disrupts task-switching, but the brain can bounce back

A new neuroimaging study involving 20 male M.D. candidates in the middle of preparing for their board exams has found that they had a harder time shifting their attention from one task to another after a month of stress than other healthy young men who were not under stress. The finding replicates what has been found in rat studies, and similarly correlates with impaired function in an area of the prefrontal cortex that is involved in attention. However, the brains recovered their function within a month of the end of the stressful period.

[829] Liston, C., McEwen B. S., & Casey B. J. (2009).  Psychosocial stress reversibly disrupts prefrontal processing and attentional control. Proceedings of the National Academy of Sciences. 106(3), 912 - 917.

Full text available at http://www.pnas.org/content/106/3/912.abstract
http://www.eurekalert.org/pub_releases/2009-01/ru-sdh012709.php

Attention, it’s all about connecting

An imaging study in which volunteers spent an hour identifying letters that flashed on a screen has shed light on what happens when our attention wanders. Reduced communication in the ventral fronto-parietal network, critical for attention, was found to predict slower response times 5-8 seconds before the letters were presented.

Daniel Weissman presented the results at the 38th annual meeting of the Society for Neuroscience, held Nov. 15 to 19 in Washington, DC.

http://www.newscientist.com/article/mg20026865.600-bored-your-brain-is-disconnecting.html

The importance of acetylcholine

A rat study suggests that acetylcholine, a neurotransmitter known to be important for attention, is critical for "feature binding" — the process by which our brain combines all of the specific features of an object and gives us a complete and unified picture of it. The findings may lead to improved therapies and treatments for a variety of attention and memory disorders.

[1265] Botly, L. C. P., & De Rosa E. (2008).  A Cross-Species Investigation of Acetylcholine, Attention, and Feature Binding. Psychological Science. 19, 1185 - 1193.

http://www.eurekalert.org/pub_releases/2008-11/afps-bba111808.php

Attention grabbers snatch lion's share of visual memory

It’s long been thought that when we look at a visually "busy" scene, we are only able to store a very limited number of objects in our visual short-term or working memory. For some time, this figure was believed to be four or five objects, but a recent report suggested it could be as low as two. However, a new study reveals that although it might not be large, it’s more flexible than we thought. Rather than being restricted to a limited number of objects, it can be shared out across the whole image, with more memory allocated for objects of interest and less for background detail. What’s of interest might be something we’ve previously decided on (i.e., we’re searching for), or something that grabs our attention.

Eye movements also reveal how brief our visual memory is, and that what our eyes are looking at isn’t necessarily what we’re ‘seeing’ — when people were asked to look at objects in a particular sequence, but the final object disappeared before their eyes moved on to it, it was found that the observers could more accurately recall the location of the object that they were about to look at than the one that they had just been looking at.

[1398] Bays, P. M., & Husain M. (2008).  Dynamic shifts of limited working memory resources in human vision. Science (New York, N.Y.). 321(5890), 851 - 854.

http://www.physorg.com/news137337380.html

How Ritalin works to focus attention

Ritalin has been widely used for decades to treat attention deficit hyperactivity disorder (ADHD), but until now the mechanism of how it works hasn’t been well understood. Now a rat study has found that Ritalin, in low doses, fine-tunes the functioning of neurons in the prefrontal cortex, and has little effect elsewhere in the brain. It appears that Ritalin dramatically increases the sensitivity of neurons in the prefrontal cortex to signals coming from the hippocampus. However, in higher doses, PFC neurons stopped responding to incoming information, impairing cognition. Low doses also reinforced coordinated activity of neurons, and weakened activity that wasn't well coordinated. All of this suggests that Ritalin strengthens dominant and important signals within the PFC, while lessening weaker signals that may act as distractors.

[663] Devilbiss, D. M., & Berridge C. W. (2008).  Cognition-Enhancing Doses of Methylphenidate Preferentially Increase Prefrontal Cortex Neuronal Responsiveness. Biological Psychiatry. 64(7), 626 - 635.

http://www.eurekalert.org/pub_releases/2008-06/uow-suh062408.php

Working memory has a fixed number of 'slots'

A study that showed volunteers a pattern of colored squares for a tenth of a second, and then asked them to recall the color of one of the squares by clicking on a color wheel, has found that working memory acts like a high-resolution camera, retaining three or four features in high detail. Unlike a digital camera, however, it appears that you can’t increase the number of images you can store by lowering the resolution. The resolution appears to be constant for a given individual. However, individuals do differ in the resolution of each feature and the number of features that can be stored.

[278] Zhang, W., & Luck S. J. (2008).  Discrete fixed-resolution representations in visual working memory. Nature. 453(7192), 233 - 235.

http://www.physorg.com/news126432902.html
http://www.eurekalert.org/pub_releases/2008-04/uoc--wmh040208.php

And another study of working memory has attempted to overcome the difficulties involved in measuring a person’s working memory capacity (ensuring that no ‘chunking’ of information takes place), and concluded that people do indeed have a fixed number of ‘slots’ in their working memory. In the study, participants were shown two, five or eight small, scattered, different-colored squares in an array, which was then replaced by an array of the same squares without the colors, after which the participant was shown a single color in one location and asked to indicate whether the color in that spot had changed from the original array.

[437] Rouder, J. N., Morey R. D., Cowan N., Zwilling C. E., Morey C. C., & Pratte M. S. (2008).  An assessment of fixed-capacity models of visual working memory. Proceedings of the National Academy of Sciences. 105(16), 5975 - 5979.

http://www.eurekalert.org/pub_releases/2008-04/uom-mpd042308.php
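For single-probe change detection of this kind, capacity is commonly estimated with Cowan's K formula, K = N × (hit rate − false-alarm rate), where N is the set size. A minimal version (the example rates below are invented, and this is the standard formula rather than necessarily the exact analysis used in the paper):

```python
def cowan_k(set_size, hit_rate, false_alarm_rate):
    """Cowan's K for single-probe change detection: K = N * (h - f).
    With K of the N items held in memory, a change is detected when the
    probed item happens to be one of the K."""
    return set_size * (hit_rate - false_alarm_rate)

# With hypothetical hit/false-alarm rates, the estimate plateaus once
# set size exceeds capacity, which is the fixed-slots signature:
# cowan_k(2, 0.95, 0.05)  ≈ 1.8   (everything fits at small set sizes)
# cowan_k(5, 0.60, 0.04)  ≈ 2.8
# cowan_k(8, 0.39, 0.04)  ≈ 2.8   (flat: slots are full)
```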

Impressive feats in visual memory

In light of all the recent experiments emphasizing how small our short-term visual memory is, it’s comforting to be reminded that, nevertheless, we have an amazing memory for pictures — in the right circumstances. Those circumstances include looking at images of familiar objects, as opposed to abstract artworks, and being motivated to do well (the best-scoring participant was given a cash prize). In the study, 14 people aged 18 to 40 viewed 2,500 images, one at a time, for a few seconds. Afterwards, they were shown pairs of images and asked to select the exact image they had seen earlier. The previously viewed item could be paired with either an object from a novel category, an object of the same basic-level category, or the same object in a different state or pose. Stunningly, participants on average chose the correct image 92%, 88% and 87% of the time, in each of the three pairing categories respectively.

[870] Brady, T. F., Konkle T., Alvarez G. A., & Oliva A. (2008).  Visual long-term memory has a massive storage capacity for object details. Proceedings of the National Academy of Sciences. 105(38), 14325 - 14329.

Full text available at http://www.pnas.org/content/105/38/14325.abstract

Disentangling attention

A new study provides more evidence that the ability to deliberately focus your attention is physically separate in the brain from the part that helps you filter out distraction. The study trained monkeys to take attention tests on a video screen in return for a treat of apple juice. When the monkeys voluntarily concentrated (‘top-down’ attention), the prefrontal cortex was active, but when something distracting grabbed their attention (‘bottom-up’ attention), the parietal cortex became active. The electrical activity in these two areas vibrated in synchrony as they signaled each other, but top-down attention involved synchrony that was stronger in the lower-frequencies and bottom-up attention involved higher frequencies. These findings may help us develop treatments for attention disorders.

[1071] Buschman, T. J., & Miller E. K. (2007).  Top-Down Versus Bottom-Up Control of Attention in the Prefrontal and Posterior Parietal Cortices. Science. 315(5820), 1860 - 1862.

http://dsc.discovery.com/news/2007/03/29/attention_hea.html?category=health

More on how short-term memory works

It’s been established that visual working memory is severely limited — that, on average, we can only be aware of about four objects at one time. A new study explored the idea that this capacity might be affected by complexity, that is, that we can think about fewer complex objects than simple objects. It found that complexity did not affect memory capacity. It also found that some people have clearer memories of the objects than other people, and that this is not related to how many items they can remember. That is, a high IQ is associated with the ability to hold more items in working memory, but not with the clarity of those items.

[426] Awh, E., Barton B., & Vogel E. K. (2007).  Visual working memory represents a fixed number of items regardless of complexity. Psychological Science: A Journal of the American Psychological Society / APS. 18(7), 622 - 628.

http://www.eurekalert.org/pub_releases/2007-07/uoo-htb071107.php
http://www.physorg.com/news103472118.html

Asymmetrical brains let fish multitask

A fish study provides support for a theory that lateralized brains allow animals to better handle multiple activities, explaining why vertebrate brains evolved to function asymmetrically. The minnow study found that nonlateralized minnows were as good as those bred to be lateralized (enabling them to favor one eye or the other) at catching shrimp. However, when the minnows also had to look out for a sunfish (a minnow predator), the nonlateralized minnows took nearly twice as long to catch 10 shrimp as the lateralized fish.

[737] Dadda, M., & Bisazza A. (2006).  Does brain asymmetry allow efficient performance of simultaneous tasks?. Animal Behaviour. 72(3), 523 - 529.

http://sciencenow.sciencemag.org/cgi/content/full/2006/623/2?etoc

Why are uniforms uniform? Because color helps us track objects

Laboratory tests have revealed that humans can pay attention to only 3 objects at a time. Yet there are instances in the real world — for example, in watching a soccer match — when we certainly think we are paying attention to more than 3 objects. Are we wrong? No. A new study shows how we do it — it’s all in the color coding. People can focus on more than three items at a time if those items share a common color. But, logically enough, no more than 3 color sets.

[927] Halberda, J., Sires S. F., & Feigenson L. (2006).  Multiple spatially overlapping sets can be enumerated in parallel. Psychological Science: A Journal of the American Psychological Society / APS. 17(7), 572 - 576.

http://www.eurekalert.org/pub_releases/2006-06/jhu-wau062106.php

People remember prices more easily if they have fewer syllables

The phonological loop — an important component of working memory — can only hold 1.5 to 2 seconds of spoken information. For that reason, faster speakers have an advantage over slower speakers. Now a consumer study reveals that every extra syllable in a product's price decreases its chances of being remembered by 20%. Thus, people who shorten the number of syllables (e.g. read 5,325 as 'five three two five' as opposed to 'five thousand three hundred and twenty five') have better recall. However, since we store information both verbally and visually, it’s also the case that unusual-looking prices, such as $8.88, are recalled better than typical-looking prices.

Vanhuele, M., Laurent, G., Dreze, X. & Calder, B. 2006. Consumers' Immediate Memory for Prices. Journal of Consumer Research, 33(2), 163-72.

http://www.sciencedaily.com/releases/2006/06/060623001231.htm
http://www.eurekalert.org/pub_releases/2006-06/uocp-prp062206.php
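The digit-by-digit strategy is easy to make concrete. A minimal sketch (the per-digit syllable counts, and the naive compounding of the article's 20%-per-syllable figure, are my own assumptions for illustration):

```python
# Syllables in the spoken English digit names: "zero" and "seven" have
# two, the rest one (my own counts, for illustration).
DIGIT_SYLLABLES = {'0': 2, '1': 1, '2': 1, '3': 1, '4': 1,
                   '5': 1, '6': 1, '7': 2, '8': 1, '9': 1}

def syllables_digit_by_digit(price):
    """Syllables needed to read a price digit by digit,
    e.g. '5,325' -> 'five three two five' -> 4 syllables."""
    return sum(DIGIT_SYLLABLES[c] for c in str(price) if c.isdigit())

def relative_recall(extra_syllables):
    """Naive reading of the reported effect: each extra syllable cuts
    the chance of recall by 20%."""
    return 0.8 ** extra_syllables
```

By my count, the full reading 'five thousand three hundred and twenty five' runs to 10 syllables against 4 for 'five three two five', so this naive model would put its memorability at 0.8**6, roughly a quarter of the shorter reading's.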

New view of hippocampus’s role in memory

Amnesiacs have overturned the established view of the hippocampus, and of the difference between long- and short-term memories. It appears the hippocampus is just as important for retrieving certain types of short-term memories as it is for long-term memories. The critical thing is not the age of the memory, but the requirement to form connections between pieces of information to create a coherent episode. The researchers suggest that, for the brain, the distinction between 'long-term' and 'short-term' memory is less relevant than that between ‘feature’ memory and ‘conjunction’ memory — the ability to remember specific things versus how they are related. The hippocampus may be thought of as the brain's switchboard, piecing individual bits of information together in context.

[817] Olson, I. R., Page K., Moore K. S., Chatterjee A., & Verfaellie M. (2006).  Working Memory for Conjunctions Relies on the Medial Temporal Lobe. J. Neurosci.. 26(17), 4596 - 4601.

http://origin.www.upenn.edu/pennnews/article.htm?id=963
http://www.eurekalert.org/pub_releases/2006-05/uop-aso053106.php

Discovery disproves simple concept of memory as 'storage space'

The idea of memory “capacity” has become more and more eroded over the years, and now a new technique for measuring brainwaves seems to finally knock the idea on the head. Consistent with recent research suggesting that a crucial problem with aging is a growing inability to ignore distracting information, this new study shows that visual working memory depends on your ability to filter out irrelevant information. Individuals have long been characterized as having a “high” working memory capacity or a “low” one — the assumption has been that these people differ in their storage capacity. Now it seems it’s all about a neural mechanism that controls what information gets into awareness. People with high capacity have a much better ability to ignore irrelevant information.

[1091] Vogel, E. K., McCollough A. W., & Machizawa M. G. (2005).  Neural measures reveal individual differences in controlling access to working memory. Nature. 438(7067), 500 - 503.

http://www.eurekalert.org/pub_releases/2005-11/uoo-dds111805.php

How much can your mind keep track of?

A recent study has tried a new take on measuring how much a person can keep track of. It's difficult to measure the limits of processing capacity because most people automatically break down large complex problems into small, manageable chunks. To keep people from doing this, therefore, researchers created problems the test subjects wouldn’t be familiar with. 30 academics were presented with incomplete verbal descriptions of statistical interactions between fictitious variables, with an accompanying set of graphs that represented the interactions. It was found that, as the problems got more complex, participants performed less well and were less confident. They were significantly less able to accurately solve the problems involving four-way interactions than the ones involving three-way interactions, and were completely incapable of solving problems with five-way interactions. The researchers concluded that we cannot process more than four variables at a time (and at that, four is a strain).

[415] Halford, G. S., Baker R., McCredden J. E., & Bain J. D. (2005).  How many variables can humans process?. Psychological Science: A Journal of the American Psychological Society / APS. 16(1), 70 - 76.

http://www.eurekalert.org/pub_releases/2005-03/aps-hmc030805.php

An advantage of age

A study comparing the ability of young and older adults to indicate which direction a set of bars moved across a computer screen has found that although younger participants were faster when the bars were small or low in contrast, when the bars were large and high in contrast, the older people were faster. The results suggest that the ability of one neuron to inhibit another is reduced as we age (inhibition helps us find objects within clutter, but makes it hard to see the clutter itself). The loss of inhibition as we age has previously been seen in connection with cognition and speech studies, and is reflected in our greater inability to tune out distraction as we age. Now we see the same process in vision.

[1356] Betts, L. R., Taylor C. P., Sekuler A. B., & Bennett P. J. (2005).  Aging Reduces Center-Surround Antagonism in Visual Motion Processing. Neuron. 45(3), 361 - 366.

http://psychology.plebius.org/article.htm?article=739
http://www.eurekalert.org/pub_releases/2005-02/mu-opg020305.php

Tests for working memory capacity more limited than thought

The so-called “magic number 7” has been a useful mnemonic for working memory capacity — how many items you can hold in your working memory at one time — but we’ve known for some time that it isn’t quite as originally conceived. Apart from the fact that the “7” is an average, and that the idea of an “item” is awfully vague as far as informational content is concerned, what really seems to matter is how long it takes to say the words. Thus, Chinese speakers can hold on average 9 items, because the words used in the test are short and simple to pronounce, whereas Welsh speakers can hold only 5 on average, because of the length and complexity of their words. (Note: it’s not because we actually say these words out loud.)

Similarly, the finding that deaf users of American Sign Language have an average span of only 5 items was thought to be because signs take longer to utter. However, new research casts doubt on this theory. The researchers were trying to devise a sign-language test that would be comparable to a hearing-language test. To their surprise, they found that even when signs were faster to produce than spoken words, signers recalled only five items. Also, hearing individuals who were fluent in American Sign Language scored differently when asked to recall spoken lists in order versus signed lists (seven heard items remembered, five signed items remembered).

Up until this time, the predominant idea was that the number found by this test was a good measure of overall cognitive capacity, but this assumption must now be in doubt. It's suggested that a test requiring recall of items, but not in temporal order, is a better measure of cognitive capacity. The results have important implications for standardized tests, which often employ ordered-list retention as a measure of a person's mental aptitude.

[422] Boutla, M., Supalla T., Newport E. L., & Bavelier D. (2004).  Short-term memory span: insights from sign language. Nat Neurosci. 7(9), 997 - 1002.

http://www.eurekalert.org/pub_releases/2004-08/uor-stm083104.php
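The articulation-time account that this study undermines amounts to simple arithmetic: span is however many items fit into the loop's roughly two-second window. A sketch of that arithmetic (the per-item articulation times are invented for illustration):

```python
def estimated_span(loop_seconds, seconds_per_item):
    """Span predicted by the time-based account of the phonological
    loop: the number of items whose spoken forms fit in the window."""
    return loop_seconds / seconds_per_item

# Hypothetical articulation times: quick digit names at ~0.22 s each
# predict a span of about 9; long number words at ~0.40 s each predict
# about 5, matching the spans reported above for Chinese and Welsh.
```

The sign-language finding is precisely what this arithmetic fails to predict: signers recalled five items even when their signs were faster to produce than the spoken words.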

We weren't made to multitask

A new imaging study supports the view that we can’t perform two tasks at once; rather, tasks must queue up, each waiting its turn at processing.

[1070] Jiang, Y., Saxe R., & Kanwisher N. (2004).  Functional magnetic resonance imaging provides new constraints on theories of the psychological refractory period. Psychological Science: A Journal of the American Psychological Society / APS. 15(6), 390 - 396.

http://www.eurekalert.org/pub_releases/2004-06/aps-wwm060704.php
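The queuing idea is usually formalized as a central-bottleneck model: each task needs perceptual, central, and motor stages, and only one central stage can run at a time, so the second task's central stage waits for the first to finish. The sketch below uses invented stage durations and is the textbook bottleneck account, not necessarily the specific model tested in the imaging study.

```python
def prp_rt2(soa, perceptual=0.10, central=0.15, motor=0.10):
    """Response time (s) to the second of two tasks presented soa
    seconds apart, under a single-channel bottleneck: task 2's central
    stage cannot start until task 1's central stage has finished."""
    t1_central_done = perceptual + central            # task 1 holds the channel
    t2_central_start = max(soa + perceptual, t1_central_done)
    return t2_central_start + central + motor - soa   # measured from task 2 onset
```

At long SOAs, RT2 bottoms out at the sum of its own stages (0.35 s with these numbers); at short SOAs, every second removed from the SOA comes straight back as waiting time, the classic psychological-refractory-period slope of -1.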

Hippocampus and subiculum both critical for short-term memory

A new animal study has revealed that the hippocampus shares its involvement in short-term memory with an adjacent brain region, the subiculum. Both regions act together to establish and retrieve short-term memories, each active at a different time, one shutting off while the other is active. The shortest memories (10-15s) were found to be controlled almost exclusively by the subiculum. After 15s, the hippocampus took over. It was also found that the hippocampus appeared to respond in a way influenced by previous experiences, allowing it to anticipate future events on the basis of past outcomes. This is an advantage, but can also cause errors.

[349] Deadwyler, S. A., & Hampson R. E. (2004).  Differential but Complementary Mnemonic Functions of the Hippocampus and Subiculum. Neuron. 42(3), 465 - 476.

http://www.eurekalert.org/pub_releases/2004-05/wfub-nrs050604.php

Why working memory capacity is so limited

There’s an old parlor game whereby someone brings into a room a tray covered with a number of different small objects, which they show to the people in the room for one minute, before whisking it away again. The participants are then required to write down as many objects as they can remember. For those who perform badly at this type of thing, some consolation from researchers: it’s not (entirely) your fault. We do actually have a very limited storage capacity for visual short-term memory.
Now visual short-term memory is of course vital for a number of functions, and reflecting this, there is an extensive network of brain structures supporting this type of memory. However, a new imaging study suggests that the limited storage capacity is due mainly to just one of these regions: the posterior parietal cortex. An interesting distinction can be made here between registering information and actually “holding it in mind”. Activity in the posterior parietal cortex strongly correlated with the number of objects the subjects were able to remember, but only if the participants were asked to remember. In contrast, regions of the visual cortex in the occipital lobe responded differently to the number of objects even when participants were not asked to remember what they had seen.

[598] Todd, J. J., & Marois R. (2004).  Capacity limit of visual short-term memory in human posterior parietal cortex. Nature. 428(6984), 751 - 754.

http://www.eurekalert.org/pub_releases/2004-04/vu-slo040704.php
http://tinyurl.com/2jzwe (Telegraph article)

Brain signal predicts working memory capacity

Our visual short-term memory may have an extremely limited capacity, but some people do have a greater capacity than others. A new study reveals that an individual's capacity for such visual working memory can be predicted by his or her brainwaves. In the study, participants briefly viewed a picture containing colored squares, followed by a one-second delay, and then a test picture. They pressed buttons to indicate whether the test picture was identical to the one seen earlier, or differed from it by one color. The more squares a subject could correctly identify having just seen, the greater his/her visual working memory capacity, and the higher the spike of corresponding brain activity – up to a point. Neural activity of subjects with poorer working memory scores leveled off early, showing little or no increase when the number of squares to remember increased from 2 to 4, while those with high capacity showed large increases. Subjects averaged 2.8 squares.

[1154] Vogel, E. K., & Machizawa M. G. (2004).  Neural activity predicts individual differences in visual working memory capacity. Nature. 428(6984), 748 - 751.

http://www.eurekalert.org/pub_releases/2004-04/niom-bsp041604.php

Small world networks key to working memory

A computer model may reveal an important aspect of working memory. The key, researchers say, is that the neurons form a "small world" network. In such a network, regardless of its size, any two points are always linked by only a small number of steps. Working memory may rely on the same property.

[2547] Micheloyannis, S., Pachou E., Stam C. J., Vourkas M., Erimaki S., & Tsirka V. (2006).  Using graph theoretical analysis of multi channel EEG to evaluate the neural efficiency hypothesis. Neuroscience Letters. 402(3), 273 - 277.

http://www.newscientist.com/article/mg18224481.600-its-a-small-world-inside-your-head.html
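The small-world property is easy to demonstrate with a toy graph. The sketch below (plain Python, standard library only) builds an orderly ring lattice and then adds a handful of random long-range links, the Newman-Watts variant of the Watts-Strogatz model; average path length collapses even though the network remains overwhelmingly local.

```python
import random
from collections import deque

def ring_lattice(n, k):
    """n nodes in a ring, each linked to its k nearest neighbours on
    either side: a large, orderly network with long average paths."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for step in range(1, k + 1):
            j = (i + step) % n
            adj[i].add(j)
            adj[j].add(i)
    return adj

def add_shortcuts(adj, count, rng):
    """Sprinkle in a few random long-range links (Newman-Watts style)."""
    nodes = list(adj)
    for _ in range(count):
        a, b = rng.sample(nodes, 2)
        adj[a].add(b)
        adj[b].add(a)

def avg_path_length(adj):
    """Mean shortest-path length over all ordered node pairs, by
    breadth-first search (assumes the graph is connected)."""
    n, total = len(adj), 0
    for start in adj:
        dist = {start: 0}
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
    return total / (n * (n - 1))
```

Plain breadth-first search suffices here because the links are unweighted; on a 200-node ring with two neighbours a side, a couple of dozen shortcuts cut the average path dramatically.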

More light shed on memory encoding

Anything we perceive contains a huge amount of sensory information. How do we decide what bits to process? New research has identified brain cells that streamline and simplify sensory information, markedly reducing the brain's workload. The study found that when monkeys were taught to remember clip art pictures, their brains reduced the level of detail by sorting the pictures into categories for recall, such as images that contained "people," "buildings," "flowers," and "animals." The categorizing cells were found in the hippocampus. As humans do, different monkeys categorized items in different ways, selecting different aspects of the same stimulus image, most likely reflecting different histories, strategies, and expectations residing within individual hippocampal networks.

[662] Hampson, R. E., Pons T. P., Stanford T. R., & Deadwyler S. A. (2004).  Categorization in the monkey hippocampus: A possible mechanism for encoding information into memory. Proceedings of the National Academy of Sciences of the United States of America. 101(9), 3184 - 3189.

http://www.eurekalert.org/pub_releases/2004-02/wfub-nfo022604.php

Neural circuits that control eye movements play crucial role in visual attention

Everyone agrees that to improve your memory it is important to “pay attention”. Unfortunately, no one really knows how to improve our ability to “pay attention”. An important step toward understanding how visual attention works was recently made in a study of the brain circuits that control eye movements. It appears that the brain circuits that program eye movements also govern whether the myriad signals pouring in from the locations where the eyes could move should be amplified or suppressed: the very act of preparing to move the eye to a particular location can cause an amplification (or suppression) of signals from that area. This is possible because humans and other primates can attend to something without moving their eyes to it.

[741] Moore, T., & Armstrong K. M. (2003).  Selective gating of visual signals by microstimulation of frontal cortex. Nature. 421(6921), 370 - 373.

http://www.eurekalert.org/pub_releases/2003-01/pu-ssh012303.php

Different aspects of attention located in different parts of the brain

We all know attention is important, but we’ve never been sure exactly what it is. Recent research suggests there’s good reason for this – attention appears to be multi-faceted, far less simple than originally conceived. Patients with specific lesions in the frontal lobes and other parts of the brain have provided evidence that different types of attentional problems are associated with injuries in different parts of the brain, suggesting that attention is not, as has been thought, a global process. The researchers have found evidence for at least three distinct processes, each located in different parts of the frontal lobes. These are: (1) a system that helps us maintain a general state of readiness to respond, in the superior medial frontal regions; (2) a system that sets our threshold for responding to an external stimulus, in the left dorsolateral region; and (3) a system that helps us selectively attend to appropriate stimuli, in the right dorsolateral region.

[260] Stuss, D. T., Binns M. A., Murphy K. J., & Alexander M. P. (2002).  Dissociations within the anterior attentional system: effects of task complexity and irrelevant information on reaction time speed and accuracy. Neuropsychology. 16(4), 500 - 513.

http://www.eurekalert.org/pub_releases/2002-10/apa-pda100702.php
