How the brain works

The role of consolidation in memory

"Consolidation" is a term that is bandied about a lot in recent memory research. Here's my take on what it means.

Becoming a memory

Initially, information is thought to be encoded as patterns of neural activity — cells "talking" to each other. Later, the information is coded in more persistent molecular or structural formats (e.g., the formation of new synapses). It has been assumed that once this occurs, the memory is "fixed" — a permanent, unchanging representation.

With new techniques, it has indeed become possible to observe these changes directly. Researchers found that the changes to a cell that occurred in response to an initial stimulation lasted some three to five minutes and disappeared within five to ten minutes. If the cell was stimulated four times over the course of an hour, however, the synapse would actually split and new synapses would form, producing a (presumably) permanent change.

Memory consolidation theory

The hypothesis that new memories consolidate slowly over time was proposed 100 years ago, and continues to guide memory research. In modern consolidation theory, it is assumed that new memories are initially 'labile' and sensitive to disruption before undergoing a series of processes (e.g., glutamate release, protein synthesis, neural growth and rearrangement) that render the memory representations progressively more stable. It is these processes that are generally referred to as “consolidation”.

Recently, however, the idea has been gaining support that stable representations can revert to a labile state on reactivation.

Memory as reconstruction

In a way, this is not surprising. We already have ample evidence that retrieval is a dynamic process during which new information merges with and modifies the existing representation — memory is now seen as reconstructive, rather than a simple replaying of stored information.

Reconsolidation of memories

Researchers who have found evidence that supposedly stable representations become labile again after reactivation have called the process “reconsolidation”, and suggest that consolidation, rather than being a one-time event, occurs repeatedly every time the representation is activated.

This raises the question: does reconsolidation replace the previously stable representation, or establish a new representation that coexists with the old?

Whether reconsolidation creates a new representation or modifies the old one, is it something other than the reconstruction of memories as they are retrieved? In other words, is this recent research telling us something about consolidation (part of the encoding process), or something about reconstruction (part of the retrieval process)?

Hippocampus involved in memory consolidation

The principal player in memory consolidation research, in terms of brain regions, is the hippocampus. The hippocampus is involved in the recognition of place and the consolidation of contextual memories, and is part of a region called the medial temporal lobe (MTL), which also includes the perirhinal, parahippocampal, and entorhinal cortices. Lesions in the medial temporal lobe typically produce amnesia characterized by the disproportionate loss of recently acquired memories. This has been interpreted as evidence for a memory consolidation process.

Some research suggests that the hippocampus may participate only in consolidation processes lasting a few years. The entorhinal cortex, on the other hand, gives evidence of temporally graded changes extending up to 20 years, suggesting that it is this region that participates in memory consolidation over decades. The entorhinal cortex is damaged in the early stages of Alzheimer’s disease.

There is, however, some evidence that the hippocampus can be involved in older memories — perhaps when they are particularly vivid.

A recent idea that has been floated suggests that the entorhinal cortex, through which all information passes on its way to the hippocampus, handles “incremental learning” — learning that requires repeated experiences. “Episodic learning” — memories that are stored after only one occurrence — might be mainly stored in the hippocampus.

This may help explain the persistence of some vivid memories in the hippocampus. Memories of emotionally arousing events tend to be more vivid and to persist longer than do memories of neutral or trivial events, and are, moreover, more likely to require only a single experience.

Whether or not the hippocampus may retain some older memories, the evidence that some memories might be held in the hippocampus for several years, only to move on, as it were, to another region, is another challenge to a simple consolidation theory.

Memory more complex than we thought

So where does all this leave us? What is consolidation? Do memories reach a fixed state?

My own feeling is that, no, memories don't reach this fabled "cast in stone" state. Memories are subject to change every time they are activated (such activation doesn't have to bring the memory to your conscious awareness). But consolidation traditionally (and logically) refers to encoding processes. It is reasonable, and useful, to distinguish between:

  • the initial encoding, the "working memory" state, when new information is held precariously in shifting patterns of neural activity,
  • the later encoding processes, when the information is consolidated into a more permanent form with the growth of new connections between nerve cells,
  • the (possibly much) later retrieval processes, when the information is retrieved in, most probably, a new context, and is activated anew

I think that "reconsolidation" is a retrieval process rather than part of the encoding processes, but of course, if you admit retrieval as involving a return to the active state and a modification of the original representation in line with new associations, then the differences between retrieval and encoding become less evident.

When you add to this the possibility that memories might "move" from one area of the brain to another after a certain period of time (although it is likely that the triggering factor is not time per se), then you cast into disarray the whole concept of memories becoming stable.

Perhaps our best approach is to see memory as a series of processes, and consolidation as an agreed-upon (and possibly arbitrary) subset of those processes.

References: 

  • Frankland, P.W., O'Brien, C., Ohno, M., Kirkwood, A. & Silva, A.J. 2001. α-CaMKII-dependent plasticity in the cortex is required for permanent memory. Nature, 411, 309-313.
  • Gluck, M.A., Meeter, M. & Myers, C.E. 2003. Computational models of the hippocampal region: linking incremental learning and episodic memory. Trends in Cognitive Sciences, 7 (6), 269-276.
  • Haist, F., Gore, J.B. & Mao, H. 2001. Consolidation of human memory over decades revealed by functional magnetic resonance imaging. Nature Neuroscience, 4 (11), 1139-1145.
  • Kang, H., Sun, L.D., Atkins, C.M., Soderling, T.R., Wilson, M.A. & Tonegawa, S. 2001. An Important Role of Neural Activity-Dependent CaMKIV Signaling in the Consolidation of Long-Term Memory. Cell, 106, 771-783.
  • Lopez, J.C. 2000. Shaky memories in indelible ink. Nature Reviews Neuroscience, 1, 6-7.
  • Miller, R.R. & Matzel, L.D. 2000. Memory involves far more than 'consolidation'. Nature Reviews Neuroscience, 1, 214-216.
  • Slotnick, S.D., Moo, L.R., Kraut, M.A., Lesser, R.P. & Hart, J. Jr. 2002. Interactions between thalamic and cortical rhythms during semantic memory recall in human. Proceedings of the National Academy of Sciences USA, 99, 6440-6443.
  • Spinney, L. 2002. Memory debate focuses on hippocampal role. BioMedNet News, 18 March 2002.
  • Wirth, S., Yanike, M., Frank, L.M., Smith, A.C., Brown, E.N. & Suzuki, W.A. 2003. Single Neurons in the Monkey Hippocampus and Learning of New Associations. Science, 300, 1578-1581.
  • Zeineh, M.M., Engel, S.A., Thompson, P.M. & Bookheimer, S.Y. 2003. Dynamics of the Hippocampus During Encoding and Retrieval of Face-Name Pairs. Science, 299, 577-580.


Correlation between emotional intelligence and IQ

February, 2013

A study shows that IQ and conscientiousness significantly predict emotional intelligence, and identifies shared brain areas that underlie this interdependence.

Using brain scans from 152 Vietnam veterans with a variety of combat-related brain injuries, researchers claim to have mapped the neural basis of general intelligence and emotional intelligence.

There was significant overlap between general intelligence and emotional intelligence, both in behavioral measures and in brain activity. Higher scores on general intelligence tests and on personality measures reliably predicted higher performance on measures of emotional intelligence, and many of the same brain regions (in the frontal and parietal cortices) were found to be important to both.

More specifically, impairments in emotional intelligence were associated with selective damage to a network containing the extrastriate body area (involved in perceiving the form of other human bodies), the left posterior superior temporal sulcus (helps interpret body movement in terms of intentions), left temporo-parietal junction (helps work out another person’s mental state), and left orbitofrontal cortex (supports emotional empathy). A number of associated major white matter tracts were also part of the network.

Two of the components of general intelligence were strong contributors to emotional intelligence: verbal comprehension/crystallized intelligence, and processing speed. Verbal impairment was unsurprisingly associated with selective damage to the language network, which showed some overlap with the network underlying emotional intelligence. Similarly, damage to the fronto-parietal network linked to deficits in processing speed also overlapped in places with the emotional intelligence network.

Only one of the ‘big five’ personality traits contributed to the prediction of emotional intelligence — conscientiousness. Impairments in conscientiousness were associated with damage to brain regions widely implicated in social information processing, of which two areas (left orbitofrontal cortex and left temporo-parietal junction) were also involved in impaired emotional intelligence, suggesting where these two attributes might be connected (ability to predict and understand another’s emotions).

It’s interesting (and consistent with the growing emphasis on connectivity rather than the more simplistic focus on specific regions) that emotional intelligence was so affected by damage to white matter tracts. The central role of the orbitofrontal cortex is also intriguing – there’s been growing evidence in recent years of the importance of this region in emotional and social processing, and it’s worth noting that it’s in the right place to integrate sensory and bodily sensation information and pass that onto decision-making systems.

All of this is to say that emotional intelligence depends on social information processing and general intelligence. Traditionally, general intelligence has been thought to be distinct from social and emotional intelligence. But humans are fundamentally social animals, and – contra the message of the Enlightenment, that we have taken so much to heart – it has become increasingly clear that emotions and reason are inextricably entwined. It is not, therefore, all that surprising that general and emotional intelligence might be interdependent. It is more surprising that conscientiousness might be rooted in your degree of social empathy.

It’s also worth noting that ‘emotional intelligence’ is not simply a trendy concept, a pop-quiz matter of whether you ‘have a high EQ’ or not; when impaired, it can produce very real problems in everyday life.

Emotional intelligence was measured by the Mayer, Salovey, Caruso Emotional Intelligence Test (MSCEIT), general IQ by the Wechsler Adult Intelligence Scale, and personality by the Neuroticism-Extroversion-Openness Inventory.


The importance of cognitive control for intelligence

October, 2012

Brain imaging points to the importance of cognitive control, mediated by the connectivity of one particular brain region, for fluid intelligence.

What underlies differences in fluid intelligence? How are smart brains different from those that are merely ‘average’?

Brain imaging studies have pointed to several aspects. One is brain size. Although the history of simplistic comparisons of brain size has been turbulent (you cannot, for example, directly compare brain size without taking into account the size of the body it’s part of), nevertheless, overall brain size does count for something — 6.7% of individual variation in intelligence, it’s estimated. So, something, but not a huge amount.

Activity levels in the prefrontal cortex, research also suggests, account for another 5% of variation in individual intelligence. (Do keep in mind that these figures are not saying that, for example, prefrontal activity explains 5% of intelligence. We are talking about differences between individuals.)

A new study points to a third important factor — one that, indeed, accounts for more than either of these other factors. The strength of the connections from the left prefrontal cortex to other areas is estimated to account for 10% of individual differences in intelligence.

These findings suggest a new perspective on what intelligence is. They suggest that part of intelligence rests on the functioning of the prefrontal cortex and its ability to communicate with the rest of the brain — what researchers are calling ‘global connectivity’. This may reflect cognitive control and, in particular, goal maintenance. The left prefrontal cortex is thought to be involved in (among other things) remembering your goals and any instructions you need for accomplishing those goals.

The study involved 93 adults (average age 23; range 18-40), whose brains were monitored while they were doing nothing and when they were engaged in the cognitively challenging N-back working memory task.

Brain activity patterns revealed three regions within the frontoparietal network that were significantly involved in this task: the left lateral prefrontal cortex, right premotor cortex, and right medial posterior parietal cortex. All three of these regions also showed signs of being global hubs — that is, they were highly connected to other regions across the brain.

Of these, however, only the left lateral prefrontal cortex showed a significant association between its connectivity and individuals’ fluid intelligence. This was confirmed by a second independent measure — working memory capacity — which was also correlated with this region’s connectivity, and with no other region’s.

In other words, those with greater connectivity in the left LPFC had greater cognitive control, which is reflected in higher working memory capacity and higher fluid intelligence. There was no correlation between connectivity and crystallized intelligence.

Interestingly, although other global hubs (such as the anterior prefrontal cortex and anterior cingulate cortex) also have strong relationships with intelligence and high levels of global connectivity, they did not show correlations between their levels of connectivity and fluid intelligence. That is, although the activity within these regions may be important for intelligence, their connections to other brain regions are not.

So what’s so important about the connections the LPFC has with the rest of the brain? It appears that, although it connects widely to sensory and motor areas, it is primarily the connections within the frontoparietal control network that are most important — as well as the deactivation of connections with the default network (the network active during rest).

This is not to say that the LPFC is the ‘seat of intelligence’! Research has made it clear that a number of brain regions support intelligence, as do other areas of connectivity. The finding is important because it shows that the left LPFC supports cognitive control and intelligence through a mechanism involving global connectivity and some other as-yet-unknown property. One possibility is that this region is a ‘flexible’ hub — able to shift its connectivity with a number of different brain regions as the task demands.

In other words, what may count is how many different connectivity patterns the left LPFC has in its repertoire, and how good it is at switching to them.

An association between negative connections with the default network and fluid intelligence also adds to evidence for the importance of inhibiting task-irrelevant processing.

All this emphasizes the role of cognitive control in intelligence, and perhaps goes some way to explaining why self-regulation in children is so predictive of later success, apart from the obvious.


Sleep preserves your feelings about traumatic events

January, 2012

New research suggests that sleeping within a few hours of a disturbing event keeps your emotional response to the event strong.

Previous research has shown that negative objects and events are preferentially consolidated in sleep — if you experience them in the evening, you are more likely to remember them than more neutral objects or events, but if you experience them in the morning, they are no more likely to be remembered than other memories. However, more recent studies have failed to find this. A new study also fails to find such preferential consolidation, but does find that our emotional reaction to traumatic or disturbing events can be greatly reduced if we stay awake afterward.

Being unable to sleep after such events is of course a common response — these findings indicate there’s good reason for it, and we should go along with it rather than fighting it.

The study involved 106 young adults rating pictures on a sad-happy scale and their own responses on an excited-calm scale. Twelve hours later, they were given a recognition test: picking out pictures they had seen earlier from a mix of new and old pictures. They also rated all the pictures on the two scales. There were four groups:

  • 41 participants saw the first set late in the day and the second set 12 hours later on the following day (the ‘sleep group’);
  • 41 saw the first set early and the second set 12 hours later on the same day;
  • 12 saw both sets in the evening, with only 45 minutes between the sets;
  • 12 saw both sets in the morning (these last two groups were to rule out circadian effects).

Twenty-five of the sleep group had their brain activity monitored while they slept.

The sleep group performed significantly better on the recognition test than the same-day group. Negative pictures were remembered better than neutral ones. However, unlike earlier studies, the sleep group didn’t preferentially remember negative pictures more than the same-day group.

But, interestingly, the sleep group was more likely to maintain the strength of initial negative responses. The same-day group showed a weaker response to negative scenes on the second showing.

It’s been theorized that late-night REM sleep is critical for emotional memory consolidation. However, this study found no significant relationship between the amount of time spent in REM sleep and recognition memory, nor was there any relationship between other sleep stages and memory. There was one significant result: those who had more REM sleep in the third quarter of the night showed the least reduction of emotional response to the negative pictures.

There were no significant circadian effects, but it’s worth noting that even the 45-minute gap between the sets was sufficient to weaken the negative effect of negative scenes.

While there was a trend toward a gender effect, it didn’t reach statistical significance, and there were no significant interactions between gender and group or emotional value.

The findings suggest that the effects of sleep on memory and emotion may be independent.

The findings also contradict previous studies showing preferential consolidation of emotional memories during sleep, but are consistent with two other recent studies that have also failed to find this. At this stage, all we can say is that there may be certain conditions in which this occurs (or doesn’t occur), but more research is needed to determine what these conditions are. Bear in mind that there is no doubt that sleep helps consolidate memories; we are talking here only about emphasizing negative memories at the expense of emotionally-neutral ones.


Working memory capacity not 4 but 2+2

October, 2011

A monkey study finds that our very limited working memory capacity of around 4 items reflects two capacities of two items. The finding has practical implications for information presentation.

In the study, two rhesus monkeys were given a standard human test of working memory capacity: an array of colored squares, varying from two to five squares, was shown for 800 msec on a screen. After a delay, varying from 800 to 1000 msec, a second array was presented. This array was identical to the first except for a change in color of one item. The monkey was rewarded if its eyes went directly to this changed square (an infra-red eye-tracking system was used to determine this). During all this, activity from single neurons in the lateral prefrontal cortex and the lateral intraparietal area — areas critical for short-term memory and implicated in human capacity limitations — was recorded.

As with humans, the more squares in the array, the worse the performance (from 85% correct for two squares to 66.5% for five). Their working memory capacity was calculated at 3.88 objects — essentially the same as that of humans.

That in itself is interesting, speaking as it does to the question of how human intelligence differs from other animals. But the real point of the exercise was to watch what is happening at the single neuron level. And here a surprise occurred.

That total capacity of around 4 items was composed of two independent, smaller capacities in the right and left halves of the visual space. What matters is how many objects fall within each visual hemifield: each hemifield can handle only two objects. Thus, if the left side of the visual space contains three items, and the right side only one, information about the three items from the left side will be degraded. If the left side contains four items and the right side two, those two on the right side will be fine, but information from the four items on the left will be degraded.

Notice that the effect of more items than two in a hemifield is to decrease the total information from all the items in the hemifield — not to simply lose the additional items.
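The arithmetic of this account is easy to sketch in code. The model below is purely illustrative — the fixed capacity of two per hemifield and the simple shared-pool rule are assumptions for the sketch, not the study's fitted model:

```python
# Toy sketch of the 2+2 hemifield account. Illustrative only: the
# per-hemifield capacity of 2 and the shared-pool rule are assumptions,
# not the study's actual model.

HEMIFIELD_CAPACITY = 2  # items each hemifield can hold at full fidelity

def item_fidelity(n_items: int) -> float:
    """Fraction of full information retained per item in one hemifield.

    Up to capacity, each item is fully represented; beyond that, a fixed
    pool of information is shared among all items on that side.
    """
    if n_items <= 0:
        return 0.0
    return min(1.0, HEMIFIELD_CAPACITY / n_items)

def display_fidelity(left_items: int, right_items: int) -> dict:
    """Per-item fidelity for each half of a display, computed independently."""
    return {"left": item_fidelity(left_items), "right": item_fidelity(right_items)}

# Three items on the left degrade all three; the lone item on the right is unaffected:
print(display_fidelity(3, 1))
# A balanced 2+2 display keeps every item at full fidelity:
print(display_fidelity(2, 2))  # {'left': 1.0, 'right': 1.0}
```

The key property is that the two halves never interact: overloading one side costs nothing on the other, which is what motivates spreading information across the visual field.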

The behavioral evidence correlated with brain activity, with object information in LPFC neurons decreasing with increasing number of items in the same hemifield, but not the opposite hemifield, and the same for the intraparietal neurons (the latter are active during the delay; the former during the presentation).

The findings resolve a long-standing debate: does working memory function like slots, which we fill one by one with items until all are full, or like a pool that fills with information about each object, with some information being lost as the number of items increases? Now we know why there is evidence for both views: both contain some truth. Each hemifield might be considered a slot, but each slot is a pool.

Another long-standing question is whether the capacity limit is a failure of perception or of memory. These findings indicate that the problem is one of perception: the neural recordings showed information about the objects being lost even as the monkeys were viewing them, not later as they were remembering what they had seen.

All of this is important theoretically, but there are also immediate practical applications. The work suggests that information should be presented in such a way that it’s spread across the visual space — for example, dashboard displays should spread the displays evenly on both sides of the visual field; medical monitors that currently have one column of information should balance it in right and left columns; security personnel should see displays scrolled vertically rather than horizontally; working memory training should present information in a way that trains each hemisphere separately. The researchers are forming collaborations to develop these ideas.

Reference: 

Buschman, T.J., Siegel, M., Roy, J.E. & Miller, E.K. 2011. Neural substrates of cognitive capacity limitations. Proceedings of the National Academy of Sciences. Available from: http://www.pnas.org/content/early/2011/06/13/1104666108.abstract


Many genes are behind human intelligence

August, 2011

A large-scale genome-wide analysis has confirmed that half the differences in intelligence between people of similar background can be attributed to genetic differences — but it’s an accumulation of hundreds of tiny differences.

There has been a lot of argument over the years concerning the role of genes in intelligence. The debate reflects the emotions involved more than the science. A lot of research has gone on, and it is indubitable that genes play a significant role. Most of the research however has come from studies involving twins and adopted children, so it is indirect evidence of genetic influence.

A new technique has now enabled researchers to directly examine 549,692 single nucleotide polymorphisms (SNPs — places where people have single-letter variations in their DNA) in each of 3511 unrelated people (aged 18-90, but mostly older adults). This analysis produced an estimate of the size of the genetic contribution to individual differences in intelligence: 40% of the variation in crystallized intelligence and 51% of the variation in fluid intelligence (see http://www.memory-key.com/memory/individual/wm-intelligence for a discussion of the difference).

The analysis also reveals that there is no ‘smoking gun’. Rather than looking for a handful of genes that govern intelligence, it seems that hundreds if not thousands of genes are involved, each in their own small way. That’s the trouble: each gene makes such a small contribution that no gene can be fingered as critical.
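A quick numerical sketch shows why no single variant stands out. Assume, purely for illustration (this is not the study's analysis, and the equal-allele-frequency simplification is mine), a trait built from hundreds of independent variants with small random effect sizes; each variant's share of the genetic variance is then proportional to its squared effect, and even the largest accounts for only a few percent:

```python
import random

# Illustrative sketch, not the study's method: a trait shaped by many
# independent variants, each with a small random effect size. Assuming
# equal allele frequencies, a variant's share of the genetic variance
# is proportional to its squared effect size.
random.seed(7)

N_SNPS = 500
effects = [random.gauss(0, 1) for _ in range(N_SNPS)]  # per-variant effects

squared = [e * e for e in effects]
largest_share = max(squared) / sum(squared)
print(f"largest single variant explains {largest_share:.1%} of genetic variance")
```

With hundreds of variants, the largest share comes out well under 10% on essentially any random draw — which is the "no smoking gun" pattern the analysis found.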

Discussions that involve genetics are always easily misunderstood. It needs to be emphasized that we are talking here about the differences between people. We are not saying that half of your IQ is down to your genes; we are saying that half the difference between you and another person (unrelated but with a similar background and education — study participants came from Scotland, England and Norway — that is, relatively homogenous populations) is due to your genes.

If the comparison was between, for example, a middle-class English person and someone from a poor Indian village, far less of any IQ difference would be due to genes. That is because the effects of environment would be so much greater.

These findings are consistent with the previous research using twins. The most important part of these findings is the confirmation they provide of something that earlier studies have hinted at: no single gene makes a significant contribution to variation in intelligence.


Why our brains produce fewer new neurons in old age

August, 2011

New research explains why fewer new brain cells are created in the hippocampus as we get older.

It wasn’t so long ago we believed that only young brains could make neurons, that once a brain was fully matured all it could do was increase its connections. Then we found out adult brains could make new neurons too (but only in a couple of regions, albeit critical ones). Now we know that neurogenesis in the hippocampus is vital for some operations, and that the production of new neurons declines with age (leading to the idea that the reduction in neurogenesis may be one reason for age-related cognitive decline).

What we didn’t know is why this happens. A new study, using mice genetically engineered so that different classes of brain cells light up in different colors, has now revealed the life cycle of stem cells in the brain.

Adult stem cells differentiate into progenitor cells that ultimately give rise to mature neurons. It had been thought that the stem cell population remained stable, but that these stem cells gradually lose their ability to produce neurons. However, the mouse study reveals that during the mouse's life span, the number of brain stem cells decreased 100-fold. Although the rate of this decrease actually slows with age, and the output per cell (the number of progenitor cells each stem cell produces) increases, nevertheless the pool of stem cells is dramatically reduced over time.

The reason this happens (and why it wasn’t what we expected) is explained in a computational model developed from the data. It seems that stem cells in the brain differ from other stem cells. Adult stem cells in the brain wait patiently for a long time until they are activated. They then undergo a series of rapid divisions that give rise to progeny that differentiate into neurons, before ‘retiring’ to become astrocytes. What this means is that, unlike blood or gut stem cells (that renew themselves many times), brain stem cells are only used once.
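The contrast between the two kinds of stem cell behavior can be sketched with a toy simulation. The numbers below are arbitrary illustrations, not the study's data or its actual computational model:

```python
# Toy contrast between self-renewing stem cells (like those in blood or
# gut) and brain stem cells that retire after a single burst of activity.
# All numbers are arbitrary illustrations, not fitted to the study's data.

def simulate_pool(n_cells: int, activations: int, self_renewing: bool,
                  progeny_per_activation: int = 8):
    """Return (remaining stem cells, total neurons produced)."""
    pool, neurons = n_cells, 0
    for _ in range(activations):
        if pool == 0:
            break  # the pool has been used up
        neurons += progeny_per_activation  # rapid divisions yield new neurons
        if not self_renewing:
            pool -= 1  # the activated cell retires as an astrocyte
    return pool, neurons

# A self-renewing pool keeps producing indefinitely:
print(simulate_pool(100, 150, self_renewing=True))   # (100, 1200)
# A use-once pool is exhausted, capping lifetime output:
print(simulate_pool(100, 150, self_renewing=False))  # (0, 800)
```

The use-once rule is what makes neurogenesis a depletable resource in this picture: every activation permanently shrinks the pool, whatever triggered it.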

This raises a somewhat worrying question: if we encourage neurogenesis (e.g. by exercise or drugs), are we simply using up stem cells prematurely? The researchers suggest the answer depends on how the neurogenesis has been induced. Parkinson's disease and traumatic brain injury, for example, activate stem cells directly, and so may reduce the stem cell population. However, interventions such as exercise stimulate the progenitor cells, not the stem cells themselves.


Individual differences in learning motor skills reflect brain chemical

April, 2011

An imaging study demonstrates that people who are quicker at learning a sequence of finger movements have lower levels of the inhibitory chemical GABA.

What makes one person so much better than another in picking up a new motor skill, like playing the piano or driving or typing? Brain imaging research has now revealed that one of the reasons appears to lie in the production of a brain chemical called GABA, which inhibits neurons from responding.

The responsiveness of some brains to a procedure that decreases GABA levels (transcranial direct current stimulation, or tDCS) correlated both with greater brain activity in the motor cortex and with faster learning of a sequence of finger movements. Additionally, those with higher GABA concentrations at the beginning tended to have slower reaction times and less brain activation during learning.

It’s simplistic to say that low GABA is good, however! GABA is a vital chemical. Interestingly, though, low GABA has been associated with stress — and of course, stress is associated with faster reaction times and relaxation with slower ones. The point is, we need it in just the right levels, and what’s ‘right’ depends on context. Which brings us back to ‘responsiveness’ — more important than actual level, is the ability of your brain to alter how much GABA it produces, in particular places, at particular times.

However, baseline levels are important, especially where something has gone wrong. GABA levels can change after brain injury, and also may decline with age. The findings support the idea that treatments designed to influence GABA levels might improve learning. Indeed, tDCS is already in use as a tool for motor rehabilitation in stroke patients — now we have an idea why it works.


How we can control individual neurons

November, 2010

Every moment a multitude of stimuli compete for our attention. Just how this competition is resolved, and how we control it, is not known. But a new study adds to our understanding.

Following on from earlier studies that found individual neurons were associated with very specific memories (such as a particular person), new research has shown that we can actually regulate the activity of specific neurons, increasing the firing rate of some while decreasing the rate of others.

The study involved 12 patients implanted with deep electrodes for intractable epilepsy. On the basis of each individual’s interests, four images were selected for each patient. Each of these images was associated with the firing of specific neurons in the mediotemporal lobe. The firing of these neurons was hooked up to a computer, allowing the patients to make their particular image appear by thinking of it. When another image was superimposed on the target as a distraction, creating a composite image, patients were asked to focus on their own image; successful focus brightened the target image while the distractor faded. The patients were successful 70% of the time, and this success was primarily associated with increased firing of the specific neurons associated with the target image.

I should emphasize that the use of a composite image meant that the participants had to rely on a mental representation rather than the sensory stimuli, at least initially. Moreover, when the feedback given was fake — that is, the patients’ efforts were no longer linked to the behavior of the image on the screen — success rates fell dramatically, demonstrating that their success was due to a conscious, directed action.

Different patients used different strategies to focus their attention. While some simply thought of the picture, others repeated the name of the image out loud or focused their gaze on a particular aspect of the image.

Resolving the competition of multiple internal and external stimuli is a process which involves a number of different levels and regions, but these findings help us understand at least some of the process that is under our conscious control. It would be interesting to know more about the relative effectiveness of the different strategies people used, but this was not the focus of the study. It would also be very interesting to compare effectiveness at this task across age, but of course this procedure is invasive and can only be used in special cases.

The study offers hope for building better brain-machine interfaces.


Having a male twin improves mental rotation performance in females

October, 2010

A twin study suggests prenatal testosterone may be a factor in the innate male superiority in mental rotation*.

Because male superiority in mental rotation appears to be evident at a very young age, it has been suggested that testosterone may be a factor. To assess whether females exposed to higher levels of prenatal testosterone perform better on mental rotation tasks than females with lower levels of testosterone, researchers compared mental rotation task scores between twins from same-sex and opposite-sex pairs.

It was found that females with a male co-twin scored higher than did females with a female co-twin (there was no difference in scores between males from opposite-sex and same-sex pairs). Of course, this doesn’t prove that the differences are produced in the womb; it may be that girls with a male twin engage in more male-typical activities. However, the association remained after allowing for computer game playing experience.

The study involved 804 twins, average age 22, of whom 351 females were from same-sex pairs and 120 from opposite-sex pairs. There was no significant difference between females from identical same-sex pairs compared to fraternal same-sex pairs.

* Please do note that ‘innate male superiority’ does NOT mean that all men are inevitably better than all women at this very specific task! My words simply reflect the evidence that the tendency of males to be better at mental rotation is found in infants as young as 3 months.

