Individual differences

Video gamers don’t become expert multitaskers

August, 2012

A comparison of skilled action gamers and non-gamers reveals that all that multitasking practice doesn’t make you any better at multitasking in general.

The research is pretty clear by this point: humans are not (with a few rare exceptions) designed to multitask. However, it has been suggested that the modern generation, with all the multitasking they do, may have been ‘re-wired’ to be more capable of this. A new study throws cold water on this idea.

The study involved 60 undergraduate students, of whom 34 were skilled action video game players (all male) and 26 did not play such games (19 men and 7 women). The students were given three visual tasks, each of which they did on its own and then again while answering Trivial Pursuit questions over a speakerphone (designed to mimic talking on a cellphone).

The tasks included a video driving game (“TrackMania”), a multiple-object tracking test (similar to a video version of a shell game), and a visual search task (hidden pictures puzzles from Highlights magazine).

While the gamers were (unsurprisingly) significantly better at the video driving game, the non-gamers matched them on the other two tasks. In the dual-tasking scenarios, performance declined on all the tasks, with the driving task most affected. And while the gamers' driving performance suffered less from multitasking than the non-gamers' did, there was no difference between the groups in the amount of decline on the other two tasks.

Clearly, the smaller effect of dual-tasking on the driving game for gamers is a product of their greater expertise at the driving game, rather than their ability to multitask better. It is well established that the more skilled you are at a task, the more automatic it becomes, and thus the less working memory capacity it will need. Working memory capacity / attention is the bottleneck that prevents us from being true multitaskers.

In other words, the oft-repeated (and somewhat depressing) conclusion remains: you can’t learn to multitask in general, you can only improve specific skills, enabling you to multitask reasonably well while doing those specific tasks.

Reference:

Donohue, S., James, B., Eslick, A., & Mitroff, S. (2012). Cognitive pitfall! Videogame players are not immune to dual-task costs. Attention, Perception, & Psychophysics, 74(5), 803-809.

Old honeybees can regain youthful cognition when they return to youthful duties

August, 2012
  • A honey bee study shows how old foraging bees quickly start to decline cognitively, and how this can be reversed in some if they return to more social domestic duties in the hive.

I often talk about the importance of attitudes and beliefs for memory and cognition. A new honey bee study provides support for this in relation to the effects of aging on the brain, and suggests that this principle extends across the animal kingdom.

Previous research has shown that bees that stay in the nest and take care of the young remain mentally competent, but they don’t nurse for ever. When they’re older (after about 2-3 weeks), they become foragers, and foraging bees age very quickly — both physically and mentally. On the face of it, then, bees ‘retire’ to foraging, and their old age is brief (they begin to show cognitive decline after just two weeks of foraging).

But it’s not as simple as that, because in artificial hives where worker bees are all the same age, nurse bees of the same age as foragers don’t show the same cognitive and sensory decline. Moreover, nurse bees have been found to maintain their cognitive abilities for more than 100 days, while foragers die within 18 days and show cognitive declines after 13-15 days (although their ability to assess sweetness remains intact).

The researchers accordingly asked a very interesting question: what happens if the foragers return to babysitting?

To achieve this, they removed all of the younger nurse bees from the nest, leaving only the queen and babies. When the older, foraging bees returned to the nest, activity slowed down for several days, and then they re-organized themselves: some of the old bees returned to foraging; others took on the babysitting and housekeeping duties (cleaning, building the comb, and tending to the queen). After 10 days, around half of these latter bees had significantly improved their ability to learn new things.

This cognitive improvement was also associated with a change in two specific proteins in their brains: Prx6, which in humans has been linked to protection against the oxidative stress and inflammation associated with Alzheimer’s and Huntington’s diseases, and a “chaperone” protein, so called because it protects other proteins from damage when brain or other tissue is exposed to cell-level stress.

Precisely what it is about returning to the hive that produces this effect is a matter of speculation, but this finding does show that learning impairment in old bees can be reversed by changes in behavior, and this reversal is correlated with specific changes in brain protein.

Having said this, it shouldn’t be overlooked that only some of the worker bees showed this brain plasticity. This is not, apparently, due to differences in genotype, but may depend on the amount of foraging experience.

The findings add weight to the idea that social interventions can help our brains stay younger, and are consistent with growing evidence that, in humans, social engagement helps protect against dementia and age-related cognitive impairment.

The (probably) experience-dependent individual differences shown by the bees are perhaps mirrored in our idea of cognitive reserve, but with a twist. The concept of cognitive reserve emphasizes that accumulating a wealth of cognitive experience (whether through education or occupation or other activities) protects your brain from the damage that might occur with age. But perhaps (and I’m speculating now) we should also consider the other side of this: repeated engagement in routine or undemanding activities may have a deleterious effect, independent of and additional to the absence of more stimulating activities.

Review of working memory training programs finds no broader benefit

July, 2012

A meta-analysis of 23 studies has found no evidence that working memory training has wider cognitive benefits for normally developing children and healthy adults.

I have said before that there is little evidence that working memory training has any wider benefits than to the skills being practiced. Occasionally a study arises that gets everyone all excited, but by and large training only benefits the skill being practiced — despite the fact that working memory underlies so many cognitive tasks, and limited working memory capacity is thought to negatively affect performance on so many tasks. However, one area that does seem to have had some success is working memory training for those with ADHD, and researchers have certainly not given up hope of finding evidence for wider transfer among other groups (such as older adults).

A recent review of the research to date has, sadly, concluded that the benefits of working memory training programs are limited. But this is not to say there are no benefits.

For a start, the meta-analysis (analyzing data across studies) found that working memory training produced large immediate benefits for verbal working memory. These benefits were greatest for children below the age of 10.
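
For those unfamiliar with the mechanics of meta-analysis, pooling ‘data across studies’ typically means combining each study’s standardized effect size, weighted by its precision. Here is a minimal sketch of the standard fixed-effect approach, in Python; the numbers are invented for illustration, not taken from this review:

    import numpy as np

    # Hypothetical per-study effect sizes (Cohen's d) and sampling variances.
    d = np.array([0.9, 0.7, 1.1, 0.5])
    v = np.array([0.04, 0.06, 0.09, 0.05])

    w = 1 / v                            # inverse-variance weights
    pooled = (w * d).sum() / w.sum()     # fixed-effect pooled estimate
    se = (1 / w.sum()) ** 0.5            # standard error of the pooled estimate
    print(f"pooled d = {pooled:.2f}, 95% CI +/- {1.96 * se:.2f}")

Larger (more precise) studies thus count for more, which is why a pooled estimate can differ markedly from a simple average of published results.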

These benefits, however, were not maintained long-term (at an average of 9 months after training, there were no significant benefits) — although benefits were found in one study in which the verbal working memory task was very similar to the training task (indicating that the specific skill practiced did maintain some improvement long-term).

Visuospatial working memory also showed immediate benefits, and these did not vary across age groups. One factor that did make a difference was type of training: the CogMed training program produced greater improvement than the researcher-developed programs (the studies included 7 that used CogMed, 2 that used Jungle Memory, 2 Cognifit, 4 n-back, 1 Memory Booster, and 7 researcher-developed programs).

Interestingly, visuospatial working memory did show some long-term benefits, although it should be noted that the average follow-up was distinctly shorter than that for verbal working memory tasks (an average of 5 months post-training).

The burning question, of course, is how well this training transferred to dissimilar tasks. Here the evidence seems sadly clear — those using untreated control groups tended to find such transfer; those using treated control groups never did. Similarly, nonrandomized studies tended to find far transfer, but randomized studies did not.

In other words, when studies were properly designed (randomized trials with a control group that is given alternative treatment rather than no treatment), there was no evidence of transfer effects from working memory training to nonverbal ability. Moreover, even when found, these effects were only present immediately and not on follow-up.

Neither was there any evidence of transfer effects, either immediate or delayed, on verbal ability, word reading, or arithmetic. There was a small to moderate effect of training on attention (as measured by the Stroop test), but this only occurred immediately, and not on follow-up.

It seems clear from this review that there are few good, methodologically sound studies on this subject. But three very important caveats should be noted in connection with the researchers’ dispiriting conclusion.

First of all, because this is an analysis across all data, important differences between groups or individuals may be concealed. This is a common criticism of meta-analysis, and the researchers do try to answer it. Nevertheless, I think it is still a very real issue, especially in light of evidence that the benefit of training may depend on whether the challenge of the training is at the right level for the individual.

On the other hand, another recent study, which compared young adults who received 20 sessions of training on a dual n-back task or a visual search program, or no training at all, did look for an individual-differences effect, and failed to find it. Participants were tested repeatedly on their fluid intelligence, multitasking ability, working memory capacity, crystallized intelligence, and perceptual speed. Although those taking part in the training programs improved their performance on the tasks they practiced, there was no transfer to any of the cognitive measures. When participants were analyzed separately on the basis of their improvement during training, there was still no evidence of transfer to broader cognitive abilities.
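
For readers who haven’t met it, the n-back task requires a response whenever the current item matches the one presented n items earlier; the ‘dual’ version runs a visual stream and an auditory stream simultaneously. A minimal single-stream sketch (illustrative only, not the implementation used in the study):

    def nback_targets(stream, n):
        """Return the indices whose item matches the item n positions earlier."""
        return [i for i in range(n, len(stream)) if stream[i] == stream[i - n]]

    letters = list("CKCQKCKK")
    print(nback_targets(letters, 2))  # -> [2, 6]: the two 2-back matches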

The second important challenge comes from the lack of skill consolidation — having a short training program followed by months of not practicing the skill is not something any of us would expect to produce long-term benefits.

The third point concerns a recent finding that multi-domain cognitive training produces longer-lasting benefits than single-domain training (the same study also showed the benefit of booster training). It seems quite likely that working memory training is a valuable part of a training program that also includes practice in real-world tasks that incorporate working memory.

I should emphasize that these results only apply to ‘normal’ children and adults. The question of training benefits for those with attention difficulties or early Alzheimer’s is a completely different issue. But for these healthy individuals, it has to be said that the weight of the evidence is against working memory training producing more general cognitive improvement. Nevertheless, I think it’s probably an important part of a cognitive training program — as long as the emphasis is on part.

Reference: 

Melby-Lervåg, M., & Hulme, C. (2012). Is working memory training effective? A meta-analytic review. Developmental Psychology. doi:10.1037/a0028228
Full text available at http://www.apa.org/pubs/journals/releases/dev-ofp-melby-lervag.pdf

Redick, T. S., Shipstead, Z., Harrison, T. L., Hicks, K. L., Fried, D. E., Hambrick, D. Z., et al. (2012). No evidence of intelligence improvement after working memory training: A randomized, placebo-controlled study. Journal of Experimental Psychology: General.
Full text available at http://psychology.gatech.edu/renglelab/publications/2012/RedicketalJEPG.pdf

How exercise affects the brain, and who it benefits

June, 2012

New research indicates that the cognitive benefits of exercise depend on the gene variant you carry.

I’ve mentioned before that, for some people, exercise doesn’t seem to have a benefit, and that the benefits of exercise for fighting age-related cognitive decline may not apply to those carrying the Alzheimer’s gene.

New research suggests there is another gene variant that may impact on exercise’s effects. The new study follows on from earlier research that found that physical exercise during adolescence had more durable effects on object memory and BDNF levels than exercise during adulthood. In this study, 54 healthy but sedentary young adults (aged 18-36) were given an object recognition test before participating in either (a) a 4-week exercise program, with exercise on the final test day, (b) a 4-week exercise program, without exercise on the final test day, (c) a single bout of exercise on the final test day, or (d) remaining sedentary between test days.

Exercise both improved object recognition memory and reduced perceived stress — but only in one group: those who exercised for 4 weeks including the final day of testing. In other words, both regular exercise and recent exercise were needed to produce a memory benefit.

But there is one more factor — and this is where it gets really interesting — the benefit in this group didn’t happen for every member of the group. Only those carrying a specific genotype benefited from regular and recent exercise. This genotype has to do with the brain protein BDNF, which is involved in neurogenesis and synaptic plasticity, and which is increased by exercise. The BDNF gene comes in two flavors: Val and Met. Previous research has linked the less common Met variant to poorer memory and greater age-related cognitive decline.

In other words, it seems that the Met allele affects how much BDNF is released as a result of exercise, and this in turn affects cognitive benefits.

The object recognition test involved participants seeing a series of 50 images (previously selected as being highly recognizable and nameable), followed by a 15 minute filler task, before seeing 100 images (the previous 50 and 50 new images) and indicating which had been seen previously. The filler task involved surveys for state anxiety, perceived stress, and mood. On the first (pre-program) visit, a survey for trait anxiety was also completed.
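
The report doesn’t specify how recognition was scored, but old/new tests of this kind are conventionally scored by combining hits (old images correctly recognized) and false alarms (new images wrongly endorsed) into a sensitivity measure such as d′. A minimal sketch, with made-up numbers:

    from statistics import NormalDist

    def d_prime(hits, n_old, false_alarms, n_new):
        # The log-linear correction keeps the rates away from 0 and 1.
        hit_rate = (hits + 0.5) / (n_old + 1)
        fa_rate = (false_alarms + 0.5) / (n_new + 1)
        z = NormalDist().inv_cdf
        return z(hit_rate) - z(fa_rate)

    # e.g., 42 of 50 old images recognized, 6 of 50 new images falsely endorsed
    print(round(d_prime(42, 50, 6, 50), 2))  # ~ 2.1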

Of the 54 participants, 31 carried two copies of the Val allele, and 23 had at least one Met allele (19 Val/Met; 4 Met/Met). The population frequency of carrying at least one Met allele is 50% in Asians, 30% in Caucasians, and 4% in African-Americans.
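
As a quick sanity check (my own arithmetic, not something reported in the study), these genotype counts sit close to what Hardy-Weinberg proportions predict from the sample’s Met allele frequency:

    n = 54
    val_val, val_met, met_met = 31, 19, 4
    p = (val_met + 2 * met_met) / (2 * n)    # Met allele frequency = 0.25
    expected = [n * (1 - p) ** 2,            # Val/Val: ~30.4 (observed 31)
                n * 2 * p * (1 - p),         # Val/Met: ~20.3 (observed 19)
                n * p ** 2]                  # Met/Met: ~3.4 (observed 4)
    print(p, [round(e, 1) for e in expected])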

Although exercise decreased stress and increased positive mood, the cognitive benefits of exercise were not associated with mood or anxiety. Neither was genotype associated with mood or anxiety. However, some studies have found an association between depression and the Met variant, and this study is of course quite small.

A final note: this study is part of research looking at the benefits of exercise for children with ADHD. The findings suggest that genotyping would enable us to predict whether an individual — a child with ADHD or an older adult at risk of cognitive decline or impairment — would benefit from this treatment strategy.

How action videogames change some people’s brains

May, 2012

A small study has found that ten hours of playing action video games produced significant changes in brainwave activity and improved visual attention for some (but not all) novices.

Following on from research finding that people who regularly play action video games show visual attention related differences in brain activity compared to non-players, a new study has investigated whether such changes could be elicited in 25 volunteers who hadn’t played video games in at least four years. Sixteen of the participants played a first-person shooter game (Medal of Honor: Pacific Assault), while nine played a three-dimensional puzzle game (Ballance). They played the games for a total of 10 hours spread over one- to two-hour sessions.

Selective attention was assessed through an attentional visual field task, carried out prior to and after the training program. Individual learning differences were marked, and because of visible differences in brain activity after training, the action gamers were divided into two groups for analysis — those who performed above the group mean on the second attentional visual field test (7 participants), and those who performed below the mean (9). These latter individuals showed similar brain activity patterns as those in the control (puzzle) group.

In all groups, early-onset brainwaves were little affected by video game playing. This suggests that game-playing has little impact on bottom–up attentional processes, and is in keeping with earlier research showing that players and non-players don’t differ in the extent to which their attention is captured by outside stimuli.

However, later brainwaves — those thought to reflect top–down control of selective attention via increased inhibition of distracters — increased significantly in the group who played the action game and showed above-average improvement on the field test. Another increased wave suggests that the total amount of attention allocated to the task was also greater in that group (i.e., they were concentrating more on the game than the below-average group, and the control group).

The improved ability to select the right targets and ignore other stimuli suggests that these players are also improving their ability to make perceptual decisions.

The next question, of course, is what personal variables underlie the difference between those who benefit more quickly from the games, and those who don’t. And how much more training is necessary for this latter group, and are there some people who won’t achieve these benefits at all, no matter how long they play? Hopefully, future research will be directed to these questions.

Reference: 

Wu, S., Cheng, C. K., Feng, J., D'Angelo, L., Alain, C., & Spence, I. (2012). Playing a first-person shooter video game induces neuroplastic change. Journal of Cognitive Neuroscience, 24(6), 1286-1293.

Genes, brain size, brain atrophy, and Alzheimer’s risk

May, 2012

A round-up of genetic news.

  • Several genes are linked to smaller brain size and faster brain atrophy in middle and old age.
  • The main Alzheimer's gene is implicated in leaky blood vessels, and shown to interact with brain size, white matter lesions, and dementia risk.
  • Some evidence suggests early-onset Alzheimer's is not so dissimilar to late-onset Alzheimer's.

Genetic analysis of 9,232 older adults (average age 67; range 56-84) has implicated four genes in how fast your hippocampus shrinks with age (rs7294919 at 12q24, rs17178006 at 12q14, rs6741949 at 2q24, rs7852872 at 9p33). The first of these (implicated in cell death) showed a particularly strong link to reduced hippocampal volume — the average consequence being a hippocampus of the same size as that of a person 4-5 years older.

Faster atrophy in this crucial brain region would increase people’s risk of Alzheimer’s and cognitive decline, by reducing their cognitive reserve. Reduced hippocampal volume is also associated with schizophrenia, major depression, and some forms of epilepsy.

In addition to cell death, the genes linked to this faster atrophy are involved in oxidative stress, ubiquitination, diabetes, embryonic development and neuronal migration.

A younger cohort, of 7,794 normal and cognitively compromised people with an average age of 40, showed that these suspect gene variants were also linked to smaller hippocampus volume in this age group. A third cohort, comprised of 1,563 primarily older people, showed a significant association between the ASTN2 variant (linked to neuronal migration) and faster memory loss.

In another analysis, researchers looked at intracranial volume and brain volume in 8,175 elderly people. While they found no genetic associations for brain volume (although there was one suggestive association), they did discover that intracranial volume (the space occupied by the fully developed brain within the skull — this remains unchanged with age, reflecting brain size at full maturity) was significantly associated with two gene variants (at loci rs4273712, on chromosome 6q22, and rs9915547, on 17q21). These associations were replicated in a different sample of 1,752 older adults. One of these genes is already known to play a unique evolutionary role in human development.

A meta-analysis of seven genome-wide association studies, involving 10,768 infants (average age 14.5 months), found two loci robustly associated with head circumference in infancy (rs7980687 on chromosome 12q24 and rs1042725 on chromosome 12q15). These loci have previously been associated with adult height, but these effects on infant head circumference were largely independent of height. A third variant (rs11655470 on chromosome 17q21 — note that this is the same chromosome implicated in the study of older adults) showed suggestive evidence of association with head circumference; this chromosome has also been implicated in Parkinson's disease and other neurodegenerative diseases.

Previous research has found an association between head size in infancy and later development of Alzheimer’s. It has been thought that this may have to do with cognitive reserve.

Interestingly, the analyses also revealed that a variant in a gene called HMGA2 (rs10784502 on 12q14.3) affected intelligence as well as brain size.

Why ‘Alzheimer’s gene’ increases Alzheimer’s risk

Investigation into the so-called ‘Alzheimer’s gene’ ApoE4 (those who carry two copies of this variant have roughly eight to 10 times the risk of getting Alzheimer’s disease) has found that ApoE4 causes an increase in cyclophilin A, which in turn causes a breakdown of the cells lining the blood vessels. Blood vessels become leaky, making it more likely that toxic substances will leak into the brain.

The study found that mice carrying the ApoE4 gene had five times as much cyclophilin A as normal, in cells crucial to maintaining the integrity of the blood-brain barrier. Blocking the action of cyclophilin A brought blood flow back to normal and reduced the leakage of toxic substances by 80%.

The finding is in keeping with the idea that vascular problems are at the heart of Alzheimer’s disease — although it should not be assumed from this that other problems (such as amyloid-beta plaques and tau tangles) are unimportant. However, one thing that does seem clear now is that there is no single pathway to Alzheimer’s. This research suggests a possible treatment approach for those carrying this risky gene variant.

Note also that this gene variant is not only associated with Alzheimer’s risk, but also Down’s syndrome dementia, poor outcome following TBI, and age-related cognitive decline.

On which note, I’d like to point out recent findings from the long-running Nurses' Health Study, involving 16,514 older women (70-81), that suggest that the effects of postmenopausal hormone therapy on cognition may depend on apolipoprotein E (APOE) status, with the fastest rate of decline being observed among HT users who carried the APOE4 variant (in general, HT was associated with poorer cognitive performance).

It’s also interesting to note another recent finding: that intracranial volume modifies the effect of ApoE4 and white matter lesions on dementia risk. The study, involving 104 demented and 135 nondemented 85-year-olds, found that smaller intracranial volume increased the risk of dementia, Alzheimer's disease, and vascular dementia in participants with white matter lesions. However, white matter lesions were not associated with increased dementia risk in those with the largest intracranial volume. But intracranial volume did not modify dementia risk in those with the ApoE4 gene.

More genes involved in Alzheimer’s

More genome-wide association studies of Alzheimer's disease have now identified variants in BIN1, CLU, CR1 and PICALM genes that increase Alzheimer’s risk, although it is not yet known how these gene variants affect risk (the present study ruled out effects on the two biomarkers, amyloid-beta 42 and phosphorylated tau).

Same genes linked to early- and late-onset Alzheimer's

Traditionally, we’ve made a distinction between early-onset Alzheimer's disease, which is thought to be inherited, and the more common late-onset Alzheimer’s. New findings, however, suggest we should re-think that distinction. While the genetic case for early-onset might seem to be stronger, sporadic (non-familial) cases do occur, and familial cases occur with late-onset.

New DNA sequencing techniques applied to the APP (amyloid precursor protein) gene and the PSEN1 and PSEN2 (presenilin) genes (the three genes linked to early-onset Alzheimer's) have found that rare variants in these genes are more common in families where four or more members were affected with late-onset Alzheimer’s than in normal individuals. Additionally, mutations in the MAPT (microtubule associated protein tau) gene and GRN (progranulin) gene (both linked to frontotemporal dementia) were also found in some Alzheimer's patients, suggesting they had been incorrectly diagnosed as having Alzheimer's disease when they instead had frontotemporal dementia.

Of the 439 families in which at least four members had been diagnosed with Alzheimer's disease, rare variants in the three Alzheimer's-related genes were found in 60 (13.7%). While not all of these variants are known to be pathogenic, the frequency of mutations in these genes is significantly higher than it is in the general population.

The researchers estimate that about 5% of those with late-onset Alzheimer's disease have changes in these genes. They suggest that, at least in some cases, the same causes may underlie both early- and late-onset disease, the difference being that those who develop it later have more protective factors.

Another gene identified in early-onset Alzheimer's

A study of the genes from 130 families suffering from early-onset Alzheimer's disease has found that 116 had mutations in genes already known to be involved (APP, PSEN1, PSEN2 — see below for some older reports on these genes), while five of the remaining 14 families showed mutations in a new gene: SORL1.

I say ‘new gene’ because it hasn’t been implicated in early-onset Alzheimer’s before. However, it has been implicated in the more common late-onset Alzheimer’s, and last year a study reported that the gene was associated with differences in hippocampal volume in young, healthy adults.

The finding, then, provides more support for the idea that some cases of early-onset and late-onset Alzheimer’s have the same causes.

The SORL1 gene codes for a protein involved in the production of the beta-amyloid peptide, and the mutations seen in this study appear to cause an under-expression of SORL1, resulting in an increase in the production of the beta-amyloid peptide. Such mutations were not found in the 1500 ethnicity-matched controls.

 

Older news reports on these other early-onset genes (brought over from the old website):

New genetic cause of Alzheimer's disease

Amyloid protein originates when it is cut by enzymes from a larger precursor protein. In very rare cases, mutations appear in the amyloid precursor protein (APP), causing it to change shape and be cut differently. The amyloid protein that is formed now has different characteristics, causing it to begin to stick together and precipitate as amyloid plaques. A genetic study of Alzheimer's patients younger than 70 has found genetic variations in the promoter that increase the gene expression and thus the formation of the amyloid precursor protein. The higher the expression (up to 150%, as in Down syndrome), the younger the patient (starting between 50 and 60 years of age). Thus, the amount of amyloid precursor protein is a genetic risk factor for Alzheimer's disease.

Theuns, J., et al. (2006). Promoter mutations that increase amyloid precursor-protein expression are associated with Alzheimer disease. American Journal of Human Genetics, 78, 936-946.

http://www.eurekalert.org/pub_releases/2006-04/vfii-rda041906.php

Evidence that Alzheimer's protein switches on genes

Amyloid β-protein precursor (APP) is snipped apart by enzymes to produce three protein fragments. Two fragments remain outside the cell and one stays inside. When APP is produced in excessive quantities, one of the cleaved segments that remains outside the cell, called the amyloid β-peptide, clumps together to form amyloid plaques that kill brain cells and may lead to the development of Alzheimer’s disease. New research indicates that the short "tail" segment of APP that is trapped inside the cell might also contribute to Alzheimer’s disease, through a process called transcriptional activation: switching on genes within the cell. Researchers speculate that creation of amyloid plaque is a byproduct of a misregulation in normal APP processing.

Cao, X., & Südhof, T. C. (2001). A transcriptively active complex of APP with Fe65 and histone acetyltransferase Tip60. Science, 293(5527), 115-120.

http://www.eurekalert.org/pub_releases/2001-07/aaft-eta070201.php

Inactivation of Alzheimer's genes in mice causes dementia and brain degeneration

Mutations in two related genes known as presenilins are the major cause of early onset, inherited forms of Alzheimer's disease, but how these mutations cause the disease has not been clear. Since presenilins are involved in the production of amyloid peptides (the major components of amyloid plaques), it was thought that such mutations might cause Alzheimer’s by increasing brain levels of amyloid peptides. Accordingly, much effort has gone into identifying compounds that could block presenilin function. Now, however, genetic engineering in mice has revealed that deletion of these genes causes memory loss and gradual death of nerve cells in the mouse brain, demonstrating that the protein products of these genes are essential for normal learning, memory and nerve cell survival.

Saura, C. A., Choi, S.-Y., Beglopoulos, V., Malkani, S., Zhang, D., Shankaranarayana Rao, B. S., Chattarji, S., Kelleher, R. J. III, Kandel, E. R., Duff, K., Kirkwood, A., & Shen, J. (2004). Loss of presenilin function causes impairments of memory and synaptic plasticity followed by age-dependent neurodegeneration. Neuron, 42(1), 23-36.

http://www.eurekalert.org/pub_releases/2004-04/cp-ioa032904.php

Reference: 

Enhancing Neuro Imaging Genetics through Meta-Analysis (ENIGMA) Consortium & Cohorts for Heart and Aging Research in Genomic Epidemiology (CHARGE) Consortium (2012). Common variants at 12q14 and 12q24 are associated with hippocampal volume. Nature Genetics, 44(5), 545-551.

Taal, H. R., Pourcain, B. S., Thiering, E., Das, S., Mook-Kanamori, D. O., Warrington, N. M., et al. (2012). Common variants at 12q15 and 12q24 are associated with infant head circumference. Nature Genetics, 44(5), 532-538.

Cohorts for Heart and Aging Research in Genomic Epidemiology (CHARGE) Consortium & Early Growth Genetics (EGG) Consortium (2012). Common variants at 6q22 and 17q21 are associated with intracranial volume. Nature Genetics, 44(5), 539-544.

Stein, J. L., Medland, S. E., Vasquez, A. A., Hibar, D. P., Senstad, R. E., Winkler, A. M., et al. (2012). Identification of common variants associated with human hippocampal and intracranial volumes. Nature Genetics, 44(5), 552-561.

Bell, R. D., Winkler, E. A., Singh, I., Sagare, A. P., Deane, R., Wu, Z., et al. (2012). Apolipoprotein E controls cerebrovascular integrity via cyclophilin A. Nature.

Kang, J. H., & Grodstein, F. (2012). Postmenopausal hormone therapy, timing of initiation, APOE and cognitive decline. Neurobiology of Aging, 33(7), 1129-1137.

Skoog, I., Olesen, P. J., Blennow, K., Palmertz, B., Johnson, S. C., & Bigler, E. D. (2012). Head size may modify the impact of white matter lesions on dementia. Neurobiology of Aging, 33(7), 1186-1193.

Cruchaga, C., Chakraverty, S., Mayo, K., Vallania, F. L. M., Mitra, R. D., Faber, K., et al. (2012). Rare variants in APP, PSEN1 and PSEN2 increase risk for AD in late-onset Alzheimer's disease families. PLoS ONE, 7(2), e31039.
Full text available at http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0031039

Pottier, C., Hannequin, D., Coutant, S., Rovelet-Lecrux, A., Wallon, D., Rousseau, S., et al. (2012). High frequency of potentially pathogenic SORL1 mutations in autosomal dominant early-onset Alzheimer disease. Molecular Psychiatry.

McCarthy, J. J., Saith, S., Linnertz, C., Burke, J. R., Hulette, C. M., Welsh-Bohmer, K. A., et al. (2012). The Alzheimer's associated 5′ region of the SORL1 gene cis regulates SORL1 transcripts expression. Neurobiology of Aging, 33(7), 1485.e1-1485.e8.

Video game training benefits cognition in some older adults

April, 2012

A study has found that playing a cognitively complex video game improved cognitive performance in some older adults, particularly those with initially poorer cognitive scores.

A number of studies have found evidence that older adults can benefit from cognitive training. However, neural plasticity is thought to decline with age, which suggests that the younger-old, and/or the higher-functioning, may benefit more than the older-old, or the lower-functioning. On the other hand, because their performance may already be as good as it can be, higher-functioning seniors may be less likely to benefit. You can find evidence for both of these views.

In a new study, 19 of 39 older adults (aged 60-77) were given training in a multiplayer online video game called World of Warcraft (the other 20 formed a control group). This game was chosen because it involves multitasking and switching between various cognitive abilities. It was theorized that the demands of the game would improve both spatial orientation and attentional control, and that the multiple tasks might produce more improvement in those with lower initial ability compared to those with higher ability.

WoW participants were given a 2-hour training session, involving a 1-hour lecture and demonstration, and one hour of practice. They were then expected to play the game at home for around 14 hours over the next two weeks. There was no intervention for the control group. All participants were given several cognitive tests at the beginning and end of the two week period: Mental Rotation Test; Stroop Test; Object Perspective Test; Progressive Matrices; Shipley Vocabulary Test; Everyday Cognition Battery; Digit Symbol Substitution Test.

As a group, the WoW group improved significantly more on the Stroop test (a measure of attentional control) compared to the control group. There was no change in the other tests. However, those in the WoW group who had performed more poorly on the Object Perspective Test (measuring spatial orientation) improved significantly. Similarly, on the Mental Rotation Test, ECB, and Progressive Matrices, those who performed more poorly at the beginning tended to improve after two weeks of training. There was no change on the Digit Symbol test.

The finding that only those whose performance was initially poor benefited from cognitive training is consistent with other studies suggesting that training only benefits those who are operating below par. This is not really surprising, but there are a few points that should be made.

First of all, it should be noted that this was a group of relatively high-functioning young-old adults — poorer performance in this case could be (relatively) better performance in another context. What it comes down to is whether you are operating below the level of which you are capable — and this applies broadly; for example, experiments show that spatial training benefits females but not males (because males tend to have already practiced enough).

Given that, in expertise research, training has an ongoing, apparently limitless, effect on performance, it seems likely that the limited benefits shown in this and other studies are due to the extremely limited scope of the training. Fourteen hours is not enough to improve people who are already performing adequately — but that doesn’t mean that they wouldn’t improve with more hours. I have yet to see any interventions with older adults that give them the amount of cognitive training you would expect them to need to achieve some level of mastery.

My third and final point is the specific nature of the improvements. This has also been shown in other studies, and sometimes appears quite arbitrary — for example, one 3-D puzzle game apparently improved mental rotation, while a different 3-D puzzle game had no effect. The point being that we still don’t understand the precise attributes needed to improve different skills (although the researchers advocate the use of a tool called cognitive task analysis for revealing the underlying qualities of an activity) — but we do understand that it is a matter of precise attributes, which is definitely a step in the right direction.

The main thing, then, that you should take away from this is the idea that different activities involve specific cognitive tasks, and these, and only these, will be the ones that benefit from practicing the activities. You therefore need to think about what tasks you want to improve before deciding on the activities to practice.

Fluctuating sense of control linked to cognitive ability in older adults

April, 2012

A small study has found that, in older adults, their sense of control fluctuates over the course of a day, and this affects their cognitive abilities.

Previous research has pointed to a typical decline in our sense of control as we get older. Maintaining a sense of control, however, appears to be a key factor in successful aging. Unsurprisingly, in view of the evidence that self-belief and metacognitive understanding are important for cognitive performance, a stronger sense of control is associated with better cognitive performance. (By metacognitive understanding I mean the knowledge that cognitive performance is malleable, not fixed, and strategies and training are effective in improving cognition.)

In an intriguing new study, 36 older adults (aged 61-87, average age 74) had their cognitive performance and their sense of control assessed every 12 hours for 60 days. Participants were asked questions about whether they felt in control of their lives and whether they felt able to achieve goals they set for themselves.

The reason I say this is intriguing is that it’s generally assumed that a person’s sense of control — how much they feel in control of their lives — is reasonably stable. While, as I said, it can change over the course of a lifetime, until recently we didn’t think that it could fluctuate significantly in the course of a single day — which is what this study found.

Moreover, those who normally reported having a low sense of control performed much better on inductive reasoning tests during periods when they reported feeling a higher sense of control. Similarly, those who normally reported feeling a high sense of control scored higher on memory tests when feeling more in control than usual.

Although we can’t be sure (since this wasn’t directly investigated), the analysis suggests that the improved cognitive functioning stems from the feeling of improved control, not vice versa.

The study builds on an earlier study that found weekly variability in older adults’ locus of control and competency beliefs.

Assessment was carried out in the form of a daily workbook, containing a number of measures, which participants completed twice daily. Each assessment took around 30-45 minutes to complete. The measures included three cognitive tests (14 alternate forms of each of these were used, to minimize test familiarity):

  • Letter series test: 30 items in which the next letter in a series had to be identified. [Inductive reasoning]
  • Number comparison: 48 items in which two number strings were presented beside each other, and participants had to identify where there was any mismatch. [Perceptual speed]
  • Rey Auditory Verbal Learning Task: participants had to study a list of 15 unrelated words for one minute, then on another page recall as many of the words as they could. [Memory]

Sense of control over the previous 12 hours was assessed by 8 questions, to which participants indicated their agreement/disagreement on a 6-point scale. Half the questions related to ‘locus of control’ and half to ‘perceived competence’.

While, unsurprisingly, compliance wasn’t perfect (it’s quite an arduous regime), participants completed on average 115 of 120 workbooks. Of the possible 4,320 results (36 x 120), only 166 were missing.

One of the things that often annoys me is the subsuming of all within-individual variability in cognitive scores into averages. Of course averages are vital, but so is variability, and this too often is glossed over. This study is, of course, all about variability, so I was very pleased to see people’s cognitive variability spelled out.

Most of the variance in locus of control was of course between people (86%), but 14% was within-individual. Similarly, the figures for perceived competence were 88% and 12%. (While locus of control and perceived competence are related, only 26% of the variability in within-person locus of control was associated with competence, meaning that they are largely independent.)

By comparison, within-individual variability was much greater for the cognitive measures: for the letter series (inductive reasoning), 32% was within-individual and 68% between-individual; for the number comparison (perceptual speed), 21% was within-individual and 79% between-individual; for the memory test, an astounding 44% was within-individual and 56% between-individual.
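
For those curious how such percentages are derived: scores from a layout like this (36 people x 120 occasions) can be partitioned into between-person and within-person variance components. A minimal simulation sketch (the data are fabricated; the 68/32 split is chosen to echo the letter-series figures above):

    import numpy as np

    rng = np.random.default_rng(0)
    n_people, n_occasions = 36, 120

    # Stable person-level differences plus occasion-to-occasion fluctuation.
    person_effect = rng.normal(0, np.sqrt(0.68), (n_people, 1))
    occasion_noise = rng.normal(0, np.sqrt(0.32), (n_people, n_occasions))
    scores = person_effect + occasion_noise

    between = scores.mean(axis=1).var(ddof=1)    # variance of each person's mean
    within = scores.var(axis=1, ddof=1).mean()   # average within-person variance
    total = between + within
    print(f"between: {between / total:.0%}, within: {within / total:.0%}")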

Some of this within-individual variability in cognitive performance comes down to practice effects, which were significant for all cognitive measures. For the memory test, time of day was also significant, with performance being better in the morning. For the letter series and number comparison tests, previous performance also had a small effect on perceived competence. For the number comparison, the increase in competence subsequent to increased performance was greatest for those with lower scores. However, lagged analyses indicated that beliefs preceded performance to a greater extent than performance preceded beliefs.

While it wasn’t an aspect of this study, it should also be noted that a person’s sense of control may well vary according to domain (e.g., cognition, social interaction, health) and context. In this regard, it’s interesting to note the present findings that sense of control affected inductive reasoning for low-control individuals, but memory for high-control individuals, suggesting that the cognitive domain also matters.

Now this small study was a preliminary one and there are several limitations that need to be tightened up in subsequent research, but I think it’s important for three reasons:

  • as a demonstration that cognitive performance is not a fixed attribute;
  • as a demonstration of the various factors that can affect older adults’ cognitive performance;
  • as a demonstration that your beliefs about yourself are a factor in your cognitive performance.

Reference: 

Neupert, S. D., & Allaire, J. C. (2012). I think I can, I think I can: Examining the within-person coupling of control beliefs and cognition in older adults. Psychology and Aging. Advance online publication.

Scent of rosemary may help cognition

March, 2012

Rosemary is a herb long associated with memory. A small study now provides some support for the association, and for the possible benefits of aromatherapy. And a rat study indicates that your attitude to work might change how stimulants affect you.

A small study involving 20 people has found that those who were exposed to 1,8-cineole, one of the main chemical components of rosemary essential oil, performed better on mental arithmetic tasks. Moreover, there was a dose-dependent relationship — higher blood concentrations of the chemical were associated with greater speed and accuracy.

Participants were given two types of test: serial subtraction and rapid visual information processing. These tests took place in a cubicle smelling of rosemary. Participants sat in the cubicle for either 4, 6, 8, or 10 minutes before taking the tests (this was in order to get a range of blood concentrations). Mood was assessed both before and after, and blood was tested at the end of the session.

While blood levels of the chemical correlated with accuracy and speed on both tasks, the effects were significant only for the mental arithmetic task.

Participants didn’t know that the scent was part of the study, and those who asked about it were told it was left over from a previous study.

There was no clear evidence that the chemical improved attention, but there was a significant association with one aspect of mood, with higher levels of the scent correlating with greater contentment. Contentment was the only aspect of mood that showed such a link.

It’s suggested that this chemical compound may affect learning through its inhibiting effect on acetylcholinesterase (an important enzyme in the development of Alzheimer's disease). Most Alzheimer’s drugs are cholinesterase inhibitors.

While this is very interesting (although obviously a larger study needs to confirm the findings), what I would like to see is the effects on more prolonged mental efforts. It’s also a little baffling to find the effect being limited to only one of these tasks, given that both involve attention and working memory. I would also like to see the rosemary-infused cubicle compared to some other pleasant smell.

Interestingly, a very recent study also suggests the importance of individual differences. A rat study compared the effects of amphetamines and caffeine on cognitive effort. First of all, giving the rats the choice of easy or hard visuospatial discriminations revealed that, as with humans, individuals could be divided into those who tended to choose difficult trials (“workers”) and those who preferred easy ones (“slackers”). (Easy trials took less effort, but earned commensurately smaller reward.)

Amphetamine, it was found, made the slackers work harder, but made the workers take it easier. Caffeine, too, made the workers slack off, but had no effect on the slackers.

The extent to which this applies to humans is of course unknown, but the idea that your attitude to cognitive effort might change how stimulants affect you is an intriguing one. And of course this is a more general reminder that factors, whatever they are, have varying effects on individuals. This is why it’s so important to have a large sample size, and why, as an individual, you can’t automatically assume that something will benefit you, whatever the research says.

But in the case of rosemary oil, I can’t see any downside! Try it out; maybe it will help.

Group settings hurt expressions of intelligence, especially in women

March, 2012

Comparing performance on an IQ test when it is given under normal conditions and when it is given in a group situation reveals that IQ drops in a group setting, and for some (mostly women) it drops dramatically.

This is another demonstration of stereotype threat, and also a nice demonstration of the contextual nature of intelligence. The study involved 70 volunteers (average age 25; range 18-49), who were put in groups of 5. Participants were given a baseline IQ test, on which they received no feedback. Each group then took a group IQ test, in which 92 multiple-choice questions were presented on a monitor (both the individual and group tests were taken from Cattell’s culture fair intelligence test). Each question appeared to each person at the same time, for a pre-determined period. After each question, participants were given feedback in the form of their own rank within the group and the rank of one other group member, with ranking based on performance on the last 10 questions. Two members of each group had their brain activity monitored.

Here’s the remarkable thing. If you gather together individuals on the basis of similar baseline IQ, then you can watch their IQ diverge over the course of the group IQ task, with some dropping dramatically (e.g., 17 points from a mean IQ of 126). Moreover, even those little affected still dropped some (8 points from a mean IQ of 126).

Data from the 27 brain scans (one had to be omitted for technical reasons) suggest that everyone was initially hindered by the group setting, but ‘high performers’ (those who ended up scoring above the median) managed to largely recover, while ‘low performers’ (those who ended up scoring below the median) never did.

Personality tests carried out after the group task found no significant personality differences between high and low performers, but gender was a significant variable: 10/13 high performers were male, while 11/14 low performers were female (remember, there was no difference in baseline IQ — this is not a case of men being smarter!).

There were significant differences between the high and low performers in activity in the amygdala and the right lateral prefrontal cortex. Specifically, all participants had an initial increase in amygdala activation and diminished activity in the prefrontal cortex, but by the end of the task, the high-performing group showed decreased amygdala activation and increased prefrontal cortex activation, while the low performers didn’t change. This may reflect the high performers’ greater ability to reduce their anxiety. Activity in the nucleus accumbens was similar in both groups, and consistent with the idea that the students had expectations about the relative ranking they were about to receive.

It should be pointed out that the specific feedback given — the relative ranking — was not a factor. What’s important is that it was being given at all, and the high performers were those who became less anxious as time went on, regardless of their specific ranking.

There are three big lessons here. One is that social pressure significantly depresses talent (meetings make you stupid?), and this seems to be worse when individuals perceive themselves to have a lower social rank. The second is that our ability to regulate our emotions is important, and something we should put more energy into. And the third is that we’ve got to shake ourselves loose from the idea that IQ is something we can measure in isolation. Social context matters.
