Perception

Eye movements get re-enacted when we remember

  • An imaging and eye-tracking study has shown that the brain uses eye movements to help us recall remembered images.

A small study has tested the eminent Donald Hebb’s hypothesis that visual imagery results from the reactivation of neural activity associated with viewing images, and that the re-enactment of eye-movement patterns helps both imagery and neural reactivation.

In the study, 16 young adults (aged 20-28) were shown a set of 14 distinct images for a few seconds each. They were asked to remember as many details of each picture as possible so they could visualize it later. They were then cued to mentally visualize the images within an empty rectangular box shown on the screen.

Brain imaging and eye-tracking technology revealed that the same pattern of eye movements and brain activation occurred when the image was learned and when it was recalled. During recall, however, the patterns were compressed (which is consistent with our experience of remembering, where recalling an event takes much less time than the original experience did).

Our understanding of memory is that it’s constructive — when we remember, we reconstruct the memory from separate bits of information in our database. This finding suggests that eye movements might be like a blueprint to help the brain piece together the bits in the right way.

https://www.eurekalert.org/pub_releases/2018-02/bcfg-cga021318.php

Why older adults lose working memory capacity

The root of age-related cognitive decline may lie in a reduced ability to ignore distractors. A new study indicates that older adults put more effort into focusing during encoding, in order to compensate for a reduced ability to hold information in working memory. The finding suggests a multi-pronged approach to improving cognitive ability in older adults.

I've reported before on the idea that the drop in working memory capacity commonly seen in old age is related to the equally typical increase in distractibility. Studies of brain activity have also indicated that lower WMC is correlated with greater storage of distractor information. So those with higher WMC, it's thought, are better at filtering out distraction and focusing only on the pertinent information. Older adults may show a reduced WMC, therefore, because their ability to ignore distraction and irrelevancies has declined.

Why does that happen?

A new, large-scale study using a smartphone game suggests that the root cause is a change in the way we hold items in working memory.

The study involved 29,631 people aged 18-69, who played a smartphone game in which they had to remember the positions of an increasing number of red circles. Yellow circles, which had to be ignored, could also appear — either at the same time as the red circles, or after them. Data from this game revealed both WMC (how many red circle locations the individual could remember) and distractibility (how many red circle locations they could remember in the face of irrelevant yellow circles).

This game isn't simply a way of measuring WMC. It enables an interesting distinction based on the timing of the distraction. If the yellow circles appear at the same time as the red ones, they provide distraction while you are trying to encode the information. If they appear afterward, the distraction occurs while you are trying to maintain the information in working memory.

Now it would seem commonsensical that distraction at the time of encoding would be the main problem, but the fascinating finding of this study is that distraction during the delay (while the information is being maintained in working memory) was the greater problem. And it was this distraction that became increasingly marked with age.

The study is a follow-up to a smaller 2014 study that included two experiments: a lab experiment involving 21 young adults, and data from the same smartphone game involving only the younger cohort (18-29 years; 3247 participants).

That study demonstrated that distraction during encoding and distraction during the delay were independent contributors to WMC, suggesting that separate mechanisms are involved in filtering out distraction at encoding and during maintenance.

Interestingly, analysis of the data from the smartphone game did indicate some correlation between the two in that context. One reason may be that participants in the smartphone game were exposed to higher load trials (the lab study kept WM load constant); another might be that they were in more distracting environments.

While researchers have generally assumed until now that the two processes are not distinct, it has been theorized that distractor filtering at encoding may involve a 'selective gating mechanism', while filtering during WM maintenance may involve a shutting down of perception. The former has been linked to a gating mechanism in the striatum of the basal ganglia, while the latter has been linked to an increase in alpha waves in the frontal cortex, specifically the left middle frontal gyrus. The dorsolateral prefrontal cortex may also be involved in distractor filtering at encoding.

To return to the more recent study:

  • there was a significant decrease in WMC with increasing age in all conditions (no distraction; encoding distraction; delay distraction)
  • for older adults, the decrease in WMC was greatest in the delay distraction condition
  • when 'distraction cost' was calculated as ((ND − D) / ND) × 100, where ND is the no-distraction score and D is the score under encoding (ED) or delay (DD) distraction, there was a significant correlation between delay distraction cost and age, but not between encoding distraction cost and age (see the sketch after this list)
  • for older adults, performance in the encoding distraction condition was better predicted by performance in the no distraction condition than it was among the younger groups
  • this correlation was significantly different between the 30-39 age group and the 40-49 age group, between the 40s and the 50s, and between the 50s and the 60s — showing that this is a progressive change
  • older adults with a higher delay distraction cost (i.e., those more affected by distractors during the delay) also showed a significantly greater correlation between their no-distraction performance and encoding-distraction performance.
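
To make the 'distraction cost' measure concrete, here is a minimal sketch in Python of the calculation described above. The scores are made up for illustration; the study's actual scores are not reproduced here.

    def distraction_cost(nd, distracted):
        # ((ND - D) / ND) x 100: the percentage drop from the
        # no-distraction score ND to the score D obtained under
        # encoding (ED) or delay (DD) distraction
        return (nd - distracted) / nd * 100

    # Hypothetical average scores (red-circle locations recalled)
    nd, ed, dd = 5.2, 4.6, 3.9
    print(f"encoding distraction cost: {distraction_cost(nd, ed):.1f}%")  # 11.5%
    print(f"delay distraction cost:    {distraction_cost(nd, dd):.1f}%")  # 25.0%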

All of this suggests that older adults are focusing more attention during encoding, even when there is no distraction, to compensate for their reduced ability to maintain information in working memory.

This suggests several approaches to improving older adults' ability to cope:

  • use perceptual discrimination training to help improve WMC
  • make working memory training more about learning to ignore certain types of distraction
  • reduce distraction — modify daily tasks to make them more "older adult friendly"
  • (my own speculation) use meditation training to improve frontal alpha rhythms.

You can participate in the game yourself, at http://thegreatbrainexperiment.com/

http://medicalxpress.com/news/2015-05-smartphone-reveals-older.html

Reference: 

[3921] McNab F, Zeidman P, Rutledge RB, Smittenaar P, Brown HR, Adams RA, Dolan RJ. Age-related changes in working memory and the ability to ignore distraction. Proceedings of the National Academy of Sciences [Internet]. 2015 ;112(20):6515 - 6518. Available from: http://www.pnas.org/content/112/20/6515

McNab, F., & Dolan, R. J. (2014). Dissociating distractor-filtering at encoding and during maintenance. Journal of Experimental Psychology. Human Perception and Performance, 40(3), 960–7. doi:10.1037/a0036013

Face Recognition

Older news items (pre-2010) brought over from the old website

Children recognize other children’s faces better than adults do

It is well known that people find it easier to distinguish between the faces of people from their own race than those from a different race. It is also known that adults recognize the faces of other adults better than the faces of children. This may relate to holistic processing of the face (seeing the face as a whole rather than analyzing it feature by feature) — it may be that we more easily recognize faces for which we have strong holistic ‘templates’. A new study has tested whether the same is true for children aged 8 to 13. The study found that children had stronger holistic processing for other children’s faces than adults did. This may reflect an own-age bias, but I’d love to see what happens with teachers, or any other adults who spend much of their time with many children.

[1358] Susilo T, Crookes K, McKone E, Turner H. The Composite Task Reveals Stronger Holistic Processing in Children than Adults for Child Faces. PLoS ONE [Internet]. 2009 ;4(7):e6460 - e6460. Available from: http://dx.doi.org/10.1371/journal.pone.0006460

Full text at http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0006460
http://dsc.discovery.com/news/2009/08/18/children-faces.html

Alcoholics show abnormal brain activity when processing facial expressions

Excessive chronic drinking is known to be associated with deficits in comprehending emotional information, such as recognizing different facial expressions. Now an imaging study of abstinent long-term alcoholics has found that they show decreased and abnormal activity in the amygdala and hippocampus when looking at facial expressions. They also show increased activity in the lateral prefrontal cortex, perhaps in an attempt to compensate for the failure of the limbic areas. The finding is consistent with other studies showing alcoholics invoking additional and sometimes higher-order brain systems to accomplish a relatively simple task at normal levels. The study compared 15 abstinent long-term alcoholics and 15 healthy, nonalcoholic controls, matched on socioeconomic backgrounds, age, education, and IQ.

[1044] Marinkovic K, Oscar-Berman M, Urban T, O'Reilly CE, Howard JA, Sawyer K, Harris GJ. Alcoholism and dampened temporal limbic activation to emotional faces. Alcoholism, Clinical and Experimental Research [Internet]. 2009 ;33(11):1880 - 1892. Available from: http://www.ncbi.nlm.nih.gov/pubmed/19673745

http://www.eurekalert.org/pub_releases/2009-08/ace-edc080509.php
http://www.eurekalert.org/pub_releases/2009-08/bumc-rfa081109.php

More insight into encoding of identity information

Different pictures of, say, Marilyn Monroe can evoke the same mental image — even hearing or reading her name can evoke the same concept. So how exactly does that work? A study using pictures, spoken names, and written names has revealed that single neurons in the hippocampus and surrounding areas respond selectively to representations of the same individual regardless of the sensory cue. Moreover, this occurs very quickly, and not only for very familiar people — the same process was observed with the researcher’s image and name, although he was unknown to the subject a day or two earlier. It also appears that the degree of abstraction reflects the hierarchical structure within the medial temporal lobe.

[1141] Quiroga QR, Kraskov A, Koch C, Fried I. Explicit Encoding of Multimodal Percepts by Single Neurons in the Human Brain. Current Biology [Internet]. 2009 ;19(15):1308 - 1313. Available from: http://www.cell.com/current-biology/abstract/S0960-9822(09)01377-3

http://www.eurekalert.org/pub_releases/2009-07/uol-ols072009.php

Monkeys and humans use the same mechanism to recognize faces

The remarkable ability of humans to distinguish faces depends on sensitivity to unique configurations of facial features. One of the best demonstrations of this sensitivity comes from our difficulty in detecting changes in the orientation of the eyes and mouth in an inverted face — what is known as the Thatcher effect. A new study has revealed that this effect is also demonstrated among rhesus macaque monkeys, indicating that our skills in facial recognition date back 30 million years or more.

[1221] Adachi I, Chou DP, Hampton RR. Thatcher Effect in Monkeys Demonstrates Conservation of Face Perception across Primates. Current Biology [Internet]. 2009 ;19(15):1270 - 1273. Available from: http://www.cell.com/current-biology/abstract/S0960-9822(09)01195-6

http://www.eurekalert.org/pub_releases/2009-06/eu-yri062309.php

Face recognition may vary more than thought

We know that "face-blindness" (prosopagnosia) may afflict as many as 2%, but until now it’s been thought that either a person has ‘normal’ face recognition skills, or they have a recognition disorder. Now for the first time a new group has been identified: those who are "super-recognizers", who have a truly remarkable ability to recognize faces, even those only seen in passing many years earlier. The finding suggests that these two abnormal groups are merely the ends of a spectrum — that face recognition ability varies widely.

[1140] Russell R, Duchaine B, Nakayama K. Super-recognizers: people with extraordinary face recognition ability. Psychonomic Bulletin & Review [Internet]. 2009 ;16(2):252 - 257. Available from: http://www.ncbi.nlm.nih.gov/pubmed/19293090

http://www.eurekalert.org/pub_releases/2009-05/hu-we051909.php

Oxytocin improves human ability to recognize faces but not places

The breastfeeding hormone oxytocin has been found to increase social behaviors like trust. A new study has found that a single dose of an oxytocin nasal spray resulted in improved recognition memory for faces, but not for inanimate objects, suggesting that different mechanisms exist for social and nonsocial memory. Further analysis showed that oxytocin selectively improved the discrimination of new and familiar faces — participants with oxytocin were less likely to mistakenly characterize unfamiliar faces as familiar.

[897] Rimmele U, Hediger K, Heinrichs M, Klaver P. Oxytocin Makes a Face in Memory Familiar. J. Neurosci. [Internet]. 2009 ;29(1):38 - 42. Available from: http://www.jneurosci.org/cgi/content/abstract/29/1/38

http://www.eurekalert.org/pub_releases/2009-01/sfn-hii010509.php

Insight into 'face blindness'

An imaging study has finally managed to see a physical difference in the brains of those with congenital prosopagnosia (face blindness): reduced connectivity in the region that processes faces. Specifically, a reduction in the integrity of the white matter tracts in the ventral occipito-temporal cortex, the extent of which was related to the severity of the impairment.

[1266] Thomas C, Avidan G, Humphreys K, Jung K-jin, Gao F, Behrmann M. Reduced structural connectivity in ventral visual cortex in congenital prosopagnosia. Nat Neurosci [Internet]. 2009 ;12(1):29 - 31. Available from: http://dx.doi.org/10.1038/nn.2224

http://www.eurekalert.org/pub_releases/2008-11/cmu-cms112508.php

Visual expertise marked by left-side bias

It’s been established that facial recognition involves both holistic processing (seeing the face as a whole rather than as the sum of its parts) and a left-side bias. The new study explored whether these effects are specific to face processing, by seeing how Chinese characters, which share many of the same features as faces, are processed by native Chinese and non-Chinese readers. It was found that non-readers tended to look at the Chinese characters more holistically, and that native Chinese readers preferred composite characters made of two left halves. These findings suggest that whether or not we use holistic processing depends on the task performed with the object and its features, and that holistic processing is not a general mark of visual expertise – but left-side bias is.

[1103] Hsiao JH, Cottrell GW. Not all visual expertise is holistic, but it may be leftist: the case of Chinese character recognition. Psychological Science: A Journal of the American Psychological Society / APS [Internet]. 2009 ;20(4):455 - 463. Available from: http://www.ncbi.nlm.nih.gov/pubmed/19399974

http://www.physorg.com/news160145799.html

Object recognition fast and early in processing

We see through our eyes and with our brain. Visual information flows from the retina through a hierarchy of visual areas in the brain until it reaches the temporal lobe, which is ultimately responsible for our visual perceptions and also sends information back along the line, solidifying perception. This much we know, but how much processing goes on at each stage, and how important feedback is compared to ‘feedforward’, is still under exploration. A new study involving children about to undergo surgery for epilepsy (using invasive electrode techniques) reveals that feedback from the ‘smart’ temporal lobe is less important than we thought: the brain can recognize objects under a variety of conditions very rapidly, at a very early processing stage. It appears that certain areas of the visual cortex selectively respond to specific categories of objects.

[1416] Liu H, Agam Y, Madsen JR, Kreiman G. Timing, Timing, Timing: Fast Decoding of Object Information from Intracranial Field Potentials in Human Visual Cortex. Neuron [Internet]. 2009 ;62(2):281 - 290. Available from: http://www.cell.com/neuron/abstract/S0896-6273(09)00171-8

http://www.sciencedaily.com/releases/2009/04/090429132231.htm
http://www.physorg.com/news160229380.html
http://www.eurekalert.org/pub_releases/2009-04/chb-aga042709.php

New brain region associated with face recognition

Using a new technique, researchers have found evidence for neurons that are selectively tuned for gender, ethnicity and identity cues in the cingulate gyrus, a brain area not previously associated with face processing.

[463] Ng M, Ciaramitaro VM, Anstis S, Boynton GM, Fine I. Selectivity for the configural cues that identify the gender, ethnicity, and identity of faces in human cortex. Proceedings of the National Academy of Sciences [Internet]. 2006 ;103(51):19552 - 19557. Available from: http://www.pnas.org/content/103/51/19552.abstract

http://www.sciencedaily.com/releases/2006/12/061212091823.htm

No specialized face area

Another study has come out casting doubt on the idea that there is an area of the brain specialized for faces. The fusiform gyrus has been dubbed the "fusiform face area", but a detailed imaging study has revealed that different patches of neurons within it respond to different images. Twice as many patches were predisposed to faces as to inanimate objects (cars and abstract sculptures), and patches that responded to faces outnumbered those that responded to four-legged animals by 50%. But patches that respond to the same images are not physically connected, implying that a "face area" may not even exist.

[444] Grill-Spector K, Sayres R, Ress D. High-resolution imaging reveals highly selective nonface clusters in the fusiform face area. Nat Neurosci [Internet]. 2007 ;10(1):133 - 133. Available from: http://dx.doi.org/10.1038/nn0107-133

http://www.sciencedaily.com/releases/2006/08/060830005949.htm

Face blindness is a common hereditary disorder

A German study has found 17 cases of the supposedly rare disorder prosopagnosia (face blindness) among 689 subjects recruited from local secondary schools and a medical school. All 14 of the subjects who consented to further familial testing had at least one first-degree relative who also had it. Because of the compensation strategies that sufferers learn to use at an early age, many do not realize that it is an actual disorder, or that other members of their family have it — which may explain why it has been thought to be so rare. The disorder is one of the few cognitive dysfunctions that has only one symptom and is inherited. It is apparently controlled by a defect in a single gene.

[1393] Kennerknecht I, Grueter T, Welling B, Wentzek S, Horst J, Edwards S, Grueter M. First report of prevalence of non-syndromic hereditary prosopagnosia (HPA). American Journal of Medical Genetics. Part A [Internet]. 2006 ;140(15):1617 - 1622. Available from: http://www.ncbi.nlm.nih.gov/pubmed/16817175

http://www.sciencedaily.com/releases/2006/07/060707151549.htm

Nothing special about face recognition

A new study adds to a growing body of evidence that there is nothing special about face recognition. The researchers have found experimental support for their model of how a brain circuit for face recognition could work. The model shows how face recognition can occur simply from selective processing of shapes of facial features. Moreover, the model equally well accounted for the recognition of cars.

[373] Jiang X, Rosen E, Zeffiro T, VanMeter J, Blanz V, Riesenhuber M. Evaluation of a Shape-Based Model of Human Face Discrimination Using fMRI and Behavioral Techniques. Neuron [Internet]. 2006 ;50(1):159 - 172. Available from: http://www.cell.com/neuron/abstract/S0896-6273(06)00205-4

http://www.eurekalert.org/pub_releases/2006-04/cp-eht033106.php

Rare learning disability particularly impacts face recognition

A study of 14 children with Nonverbal Learning Disability (NLD) has found that the children were poor at recognizing faces. NLD has been associated with difficulties in visual spatial processing, but this specific deficit with faces hasn’t been identified before. NLD affects less than 1% of the population and appears to be congenital.

[577] Liddell GA, Rasmussen C. Memory Profile of Children with Nonverbal Learning Disability. Learning Disabilities Research & Practice [Internet]. 2005 ;20(3):137 - 141. Available from: http://dx.doi.org/10.1111/j.1540-5826.2005.00128.x

http://www.eurekalert.org/pub_releases/2005-08/uoa-sra081005.php

Single cell recognition research finds specific neurons for concepts

An intriguing study surprises cognitive researchers by showing that individual neurons in the medial temporal lobe are able to recognize specific people and objects. It’s long been thought that concepts such as these require a network of cells, and this doesn’t deny that many cells are involved. However, this new study points to the importance of a single brain cell. The study of 8 epileptic subjects found that responses varied between subjects, but within each subject, responses to concepts were remarkably specific. For example, a single neuron in the left posterior hippocampus of one subject responded to all pictures of actress Jennifer Aniston, and also to Lisa Kudrow, her co-star on the TV hit "Friends", but not to pictures of Jennifer Aniston together with actor Brad Pitt, and not, or only very weakly, to other famous and non-famous faces, landmarks, animals or objects. In another patient, pictures of actress Halle Berry activated a neuron in the right anterior hippocampus, as did a caricature of the actress, images of her in the lead role of the film "Catwoman," and a letter sequence spelling her name. The results suggest an invariant, sparse and explicit code, which might be important in the transformation of complex visual percepts into long-term and more abstract memories.

[1372] Quiroga QR, Reddy L, Kreiman G, Koch C, Fried I. Invariant visual representation by single neurons in the human brain. Nature [Internet]. 2005 ;435(7045):1102 - 1107. Available from: http://dx.doi.org/10.1038/nature03687

http://www.eurekalert.org/pub_releases/2005-06/uoc--scr062005.php

Evidence faces are processed like words

It has been suggested that faces and words are recognized differently — that faces are identified as wholes, whereas words and other objects are identified by parts. However, a recent study devised a new test which finds that people use letters to recognize words and facial features to recognize faces.

[790] Martelli M, Majaj NJ, Pelli DG. Are faces processed like words? A diagnostic test for recognition by parts. Journal of Vision [Internet]. 2005 ;5(1). Available from: http://www.journalofvision.org/content/5/1/6.abstract

You can read this article online at http://www.journalofvision.org//5/1/6/.

http://www.eurekalert.org/pub_releases/2005-03/afri-ssf030705.php

Face blindness runs in families

A study of those with prosopagnosia (face blindness) and their relatives has revealed a genetic basis to the neurological condition. An earlier questionnaire study by the same researcher (himself prosopagnosic) suggests the impairment may be more common than has been thought. The study involved 576 biology students. Nearly 2% reported face-blindness symptoms.

[2545] Grueter M, Grueter T, Bell V, Horst J, Laskowski W, Sperling K, Halligan PW, Ellis HD, Kennerknecht I. Hereditary Prosopagnosia: the First Case Series. Cortex [Internet]. 2007 ;43(6):734 - 749. Available from: http://www.sciencedirect.com/science/article/pii/S0010945208705021

http://www.newscientist.com/article.ns?id=dn7174

Faces must be seen to be recognized

In an interesting new perspective on face recognition, a series of perception experiments have revealed that identifying a face depends on actually seeing it, as opposed to merely having the image of the face fall on the retina. In other words, attention is necessary.

[725] Moradi F, Koch C, Shimojo S. Face Adaptation Depends on Seeing the Face. Neuron [Internet]. 2005 ;45(1):169 - 175. Available from: http://www.cell.com/neuron/abstract/S0896-6273(04)00834-7

http://www.eurekalert.org/pub_releases/2005-01/cp-fmb122904.php

New insight into the relationship between recognizing faces and recognizing expressions

The quest to create a computer that can recognize faces and interpret facial expressions has given new insight into how the human brain does it. A study using faces photographed with four different facial expressions (happy, angry, screaming, and neutral), with different lighting, and with and without accessories (like sunglasses), tested how long people took to decide whether two faces belonged to the same person. Another group was tested to see how fast they could identify the expressions. It was found that people were quicker to recognize faces and facial expressions that involved little muscle movement, and slower to recognize expressions that involved a lot of movement. This supports the idea that recognition of faces and recognition of facial expressions are linked – it appears, through the part of the brain that helps us understand motion.

[1288] Martínez AM. Matching expression variant faces. Vision Research [Internet]. 2003 ;43(9):1047 - 1060. Available from: http://www.ncbi.nlm.nih.gov/pubmed/12676247

http://www.osu.edu/researchnews/archive/compvisn.htm

How the brain is wired for faces

The question of how special face recognition is — whether it is a process quite distinct from recognition of other objects, or whether we are simply highly practiced at this particular type of recognition — has been a subject of debate for some time. A new imaging study has concluded that the fusiform face area (FFA), a brain region crucially involved in face recognition, extracts configural information about faces rather than processing spatial information on the parts of faces. The study also indicated that the FFA is only involved in face recognition.

Yovel, G. & Kanwisher, N. 2004. Face Perception: Domain Specific, Not Process Specific. Neuron, 44 (5), 889–898.

http://www.eurekalert.org/pub_releases/2004-12/cp-htb112304.php

How the brain recognizes a face

Face recognition involves at least three stages. An imaging study has now localized these stages to particular regions of the brain. The inferior occipital gyrus was found to be particularly sensitive to slight physical changes in faces. The right fusiform gyrus (RFG) appeared to be involved in making a more general appraisal of the face, comparing it to the brain's database of stored memories to see if it is someone familiar. The third activated region, the anterior temporal cortex (ATC), is believed to store facts about people and is thought to be an essential part of the identifying process.

Rotshtein, P., Henson, R.N.A., Treves, A., Driver, J. & Dolan, R.J. 2005. Morphing Marilyn into Maggie dissociates physical and identity face representations in the brain. Nature Neuroscience, 8, 107-113.

http://news.bbc.co.uk/go/pr/fr/-/2/hi/health/4086319.stm

Memories of crime stories influenced by racial stereotypes

The influence of stereotypes on memory, a well-established phenomenon, has been demonstrated anew in a study concerning people's memory of news photographs. In the study, 163 college students (of whom 147 were White) examined one of four types of news stories, all about a hypothetical Black man. Two of the stories were not about crime, the third dealt with non-violent crime, while the fourth focused on violent crime. All four stories included an identical photograph of the same man. Afterwards, participants reconstructed the photograph by selecting from a series of facial features presented on a computer screen. It was found that selected features didn’t differ from the actual photograph in the non-crime conditions, but for the crime stories, more pronounced African-American features tended to be selected, particularly so for the story concerning violent crime. Participants appeared largely unaware of their associations of violent crime with the physical characteristics of African-Americans.

[675] Oliver MB, Jackson, II RL, Moses NN, Dangerfield CL. The Face of Crime: Viewers' Memory of Race-Related Facial Features of Individuals Pictured in the News. The Journal of Communication [Internet]. 2004 ;54(1):88 - 104. Available from: http://dx.doi.org/10.1111/j.1460-2466.2004.tb02615.x

http://www.eurekalert.org/pub_releases/2004-05/ps-rmo050504.php

Special training may help people with autism recognize faces

People with autism tend to activate object-related brain regions when they are viewing unfamiliar faces, rather than a specific face-processing region. They also tend to focus on particular features, such as a mustache or a pair of glasses. However, a new study has found that when people with autism look at a picture of a very familiar face, such as their mother's, their brain activity is similar to that of control subjects – involving the fusiform gyrus, a region in the brain's temporal lobe that is associated with face processing, rather than the inferior temporal gyrus, an area associated with objects. Use of the fusiform gyrus in recognizing faces is a process that starts early with non-autistic people, but does take time to develop (usually complete by age 12). The study indicates that the fusiform gyrus in autistic people does have the potential to function normally, but may need special training to operate properly.

Aylward, E. 2004. Functional MRI studies of face processing in adolescents and adults with autism: Role of experience. Paper presented February 14 at the annual meeting of the American Association for the Advancement of Science in Seattle.

Dawson, G. & Webb, S. 2004. Event related potentials reveal early abnormalities in face processing autism. Paper presented February 14 at the annual meeting of the American Association for the Advancement of Science in Seattle.

http://www.eurekalert.org/pub_releases/2004-02/uow-stm020904.php

How faces become familiar

With faces, familiarity makes a huge difference. Even when pictures are high quality and faces are shown at the same time, we make a surprising number of mistakes when trying to decide whether two pictures are of the same person – when the face is unknown to us. On the other hand, even when picture quality is very poor, we’re very good at recognizing familiar faces. So how do faces become familiar to us? Recent research led by Vicki Bruce (well-known in this field) showed volunteers video sequences of people: episodes of unfamiliar soap operas, and familiar but previously unseen characters from radio's The Archers, and voices from The Simpsons. The researchers confirmed previous findings that for unfamiliar faces, memory appears dominated by the 'external' features, but where the face is well-known it is 'internal' features, such as the eyes, nose and mouth, that are more important. The shift to internal features occurred rapidly, within minutes. Speed of learning was unaffected by whether the faces were experienced as static or moving images, or with or without accompanying voices, but faces which belonged to well-known, though previously unseen, personal identities were learned more easily.

Bruce, V., Burton, M. et al. 2003. Getting To Know You – How We Learn New Faces. A research report funded by the Economic and Social Research Council (ESRC).

http://www.eurekalert.org/pub_releases/2003-06/esr-hs061603.php
http://www.esrc.ac.uk/esrccontent/news/june03-5.asp

Face recognition may not be a special case

Many researchers have argued that the brain processes faces quite separately from other objects — that faces are a special class. Research has shown many ways in which face recognition does seem to be a special case, but it could be argued that the differences are due not to a separate processing system, but to people’s expertise with faces. We have, after all, plenty of evidence that babies are programmed right from the beginning to pay lots of attention to faces. A new study has endeavored to answer this question by looking at separate and concurrent perception of faces and cars, by people who were “car buffs” and those who were not. If expert processing of these objects depends on a common mechanism (presumed to be related to the perception of objects as wholes), then car perception would be expected to interfere with concurrent face perception. Moreover, such interference should get worse as the subjects became more expert at processing cars. This is indeed what was found. Experts recognized cars holistically, and this recognition interfered with their recognition of familiar faces, while novices processed the cars piece by piece, a slower process that did not interfere with face recognition. This study follows on from earlier research in which car fanciers and bird watchers were found to identify cars and birds, respectively, using the same area of the brain as is used in face recognition. A subsequent study found that people trained to identify novel, computer-generated objects began to recognize them holistically (as is done in face recognition). This latest study shows not only that experts’ car recognition occurs in the same brain region as face recognition, but that the same neural circuits are involved.

[1318] Gauthier I, Curran T, Curby KM, Collins D. Perceptual interference supports a non-modular account of face processing. Nat Neurosci [Internet]. 2003 ;6(4):428 - 432. Available from: http://dx.doi.org/10.1038/nn1029

http://www.eurekalert.org/pub_releases/2003-03/vu-cfe030503.php
http://www.nytimes.com/2003/03/11/health/11PERC.html

Detection of foreign faces faster than faces of your own race

A recent study tracked the time it takes for the brain to perceive the faces of people of other races as opposed to faces from the same race. The faces were mixed with images of everyday objects, and the subjects were given the distracting task of counting butterflies. The study found that the Caucasian subjects took longer to detect Caucasian faces than Asian faces. The study complements an earlier imaging study that showed that, when people are actively trying to recognize faces, they are better at recognizing members of their own race. [see Why recognizing a face is easier when the race matches our own]

[2544] Caldara R, Thut G, Servoir P, Michel CM, Bovet P, Renault B. Face versus non-face object perception and the ‘other-race’ effect: a spatio-temporal event-related potential study. Clinical Neurophysiology [Internet]. 2003 ;114(3):515 - 528. Available from: http://www.sciencedirect.com/science/article/pii/S1388245702004078

http://news.bmn.com/news/story?day=030108&story=1

Women better at recognizing female but not male faces

Women’s superiority in face recognition tasks appears to be due to their better recognition of female faces. There was no difference between men and women in the recognition of male faces.

[671] Lewin C, Herlitz A. Sex differences in face recognition--Women's faces make the difference. Brain and Cognition [Internet]. 2002 ;50(1):121 - 128. Available from: http://www.sciencedirect.com/science/article/B6WBY-46WVHDY-C/2/20e92b605a3fb8210460c4766ba66d35

Imaging confirms people knowledge processed differently

Earlier research has demonstrated that semantic knowledge for different classes of inanimate objects (e.g., tools, musical instruments, and houses) is processed in different brain regions. A new imaging study looked at knowledge about people, and found a unique pattern of brain activity was associated with person judgments, supporting the idea that person knowledge is functionally dissociable from other classes of semantic knowledge within the brain.

[766] Mitchell JP, Heatherton TF, Macrae NC. Distinct neural systems subserve person and object knowledge. Proceedings of the National Academy of Sciences of the United States of America [Internet]. 2002 ;99(23):15238 - 15243. Available from: http://www.pnas.org/content/99/23/15238.abstract

http://www.pnas.org/cgi/content/abstract/99/23/15238?etoc

Identity memory area localized

An imaging study investigating brain activation when people were asked to answer yes or no to statements about themselves (e.g. 'I forget important things', 'I'm a good friend', 'I have a quick temper'), found consistent activation in the anterior medial prefrontal and posterior cingulate. This is consistent with lesion studies, and suggests that these areas of the cortex are involved in self-reflective thought.

[210] Johnson SC, Baxter LC, Wilder LS, Pipe JG, Heiserman JE, Prigatano GP. Neural correlates of self-reflection. Brain [Internet]. 2002 ;125(8):1808 - 1814. Available from: http://brain.oxfordjournals.org/cgi/content/abstract/125/8/1808

http://brain.oupjournals.org/cgi/content/abstract/125/8/1808

Recognizing yourself is different from recognizing other people

Recognition of familiar faces occurs largely in the right side of the brain, but new research suggests that identifying your own face occurs more in the left side of your brain. Evidence for this comes from a split-brain patient (a person whose corpus callosum – the main bridge of nerve fibers between the two hemispheres of the brain - has been severed to minimize the spread of epileptic seizure activity). The finding needs to be confirmed in studies of people with intact brains, but it suggests not only that there is a distinction between recognizing your self and recognizing other people you know well, but also that memories and knowledge about oneself may be stored largely in the left hemisphere.

[1075] Turk DJ, Heatherton TF, Kelley WM, Funnell MG, Gazzaniga MS, Macrae NC. Mike or me? Self-recognition in a split-brain patient. Nat Neurosci [Internet]. 2002 ;5(9):841 - 842. Available from: http://dx.doi.org/10.1038/nn907

http://www.nature.com/neurolink/v5/n9/abs/nn907.html
http://www.sciencenews.org/20020824/fob8.asp

Differential effects of encoding strategy on brain activity patterns

Encoding and recognition of unfamiliar faces in young adults were examined using PET imaging to determine whether different encoding strategies would lead to differences in brain activity. It was found that encoding activated a primarily ventral system including bilateral temporal and fusiform regions and left prefrontal cortices, whereas recognition activated a primarily dorsal set of regions including right prefrontal and parietal areas. The type of encoding strategy produced different brain activity patterns. There was no effect of encoding strategy on brain activity during recognition. The left inferior prefrontal cortex was engaged during encoding regardless of strategy.

[566] Bernstein LJ, Beig S, Siegenthaler AL, Grady CL. The effect of encoding strategy on the neural correlates of memory for faces. Neuropsychologia [Internet]. 2002 ;40(1):86 - 98. Available from: http://www.ncbi.nlm.nih.gov/pubmed/11595264

http://tinyurl.com/i87v

Babies' experience with faces leads to narrowing of perception

A theory that infants' experience in viewing faces causes their brains (in particular an area of the cerebral cortex known as the fusiform gyrus) to "tune in" to the types of faces they see most often and tune out other types, has been given support from a study showing that 6-month-old babies were significantly better than both adults and 9-month-old babies in distinguishing the faces of monkeys. All groups were able to distinguish human faces from one another.

[526] Pascalis O, de Haan M, Nelson CA. Is Face Processing Species-Specific During the First Year of Life?. Science [Internet]. 2002 ;296(5571):1321 - 1323. Available from: http://www.sciencemag.org/cgi/content/abstract/296/5571/1321

http://www.eurekalert.org/pub_releases/2002-05/uom-ssi051302.php
http://news.bbc.co.uk/hi/english/health/newsid_1991000/1991705.stm
http://www.eurekalert.org/pub_releases/2002-05/aaft-bbl050902.php

Different brain regions implicated in the representation of the structure and meaning of pictured objects

Imaging studies continue apace! Having established that the part of the brain known as the fusiform gyrus is important in picture naming, a new study further refines our understanding by studying cerebral blood flow (CBF) changes in response to a picture-naming task that varied on two dimensions: familiarity (or difficulty: hard vs easy) and category (tools vs animals). Results show that although familiarity effects are present in the frontal and left lateral posterior temporal cortex, they are absent from the fusiform gyrus. The authors conclude that the fusiform gyrus processes information relating to an object's structure, rather than its meaning. The blood flow patterns suggest that it is the left posterior middle temporal gyrus that is involved in representing the object's meaning.

[691] Whatmough C, Chertkow H, Murtha S, Hanratty K. Dissociable brain regions process object meaning and object structure during picture naming. Neuropsychologia [Internet]. 2002 ;40(2):174 - 186. Available from: http://www.sciencedirect.com/science/article/B6T0D-4465750-6/2/0c2055de1cc1afdee26f18f2f0b0e848

Debate over how the brain deals with visual information

Neuroscientists can't agree on whether the brain uses specific regions to distinguish specific objects, or patterns of activity from different regions. The debate over how the brain deals with visual information has been re-ignited with apparently contradictory findings from two research groups. One group has pinpointed a distinct region in the brain that responds selectively to images of the human body, while another concludes that the representations of a wide range of image categories are dealt with by overlapping brain regions. (see below)

Specific brain region responds specifically to images of the human body

Cognitive neuroscientists have identified a new area of the human brain that responds specifically when people view images of the human body. They have named this region of the brain the 'extrastriate body area' or 'EBA'. The EBA can be distinguished from other known anatomical subdivisions of the visual cortex. However, the EBA is in a region of the brain called the posterior superior temporal sulcus, where other areas have been implicated in the perception of socially relevant information such as the direction that another person's eyes are gazing, the sound of human voices, or the inferred intentions of animate entities.

Brain scan patterns identify objects being viewed

National Institute of Mental Health (NIMH) scientists have shown that they can tell what kind of object a person is looking at — a face, a house, a shoe, a chair — by the pattern of brain activity it evokes. Earlier NIMH fMRI studies had shown that brain areas that respond maximally to a particular category of object are consistent across different people. This new study finds that the full pattern of responses — not just the areas of maximal activation — is consistent within the same person for a given category of object. Overall, the pattern of fMRI responses predicted the category with 96% accuracy. Accuracy was 100% for faces, houses and scrambled pictures.
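
To give a concrete sense of the matching idea behind this kind of pattern analysis, here is a toy sketch in Python: a response pattern is assigned to whichever category's reference pattern it correlates with most strongly. The data are simulated, and this is not the researchers' actual analysis pipeline.

    import numpy as np

    rng = np.random.default_rng(0)
    categories = ["face", "house", "shoe", "chair"]

    # Simulated reference response patterns (e.g., 200 voxels per category)
    templates = {c: rng.normal(size=200) for c in categories}

    def classify(pattern):
        # Assign the pattern to the category whose template correlates best
        return max(categories, key=lambda c: np.corrcoef(pattern, templates[c])[0, 1])

    # A noisy new "face" pattern should still be matched to "face"
    test = templates["face"] + rng.normal(scale=0.8, size=200)
    print(classify(test))  # -> face (with high probability)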

[683] Downing PE, Jiang Y, Shuman M, Kanwisher N. A Cortical Area Selective for Visual Processing of the Human Body. Science [Internet]. 2001 ;293(5539):2470 - 2473. Available from: http://www.sciencemag.org/cgi/content/abstract/293/5539/2470

[1239] Haxby JV, Gobbini IM, Furey ML, Ishai A, Schouten JL, Pietrini P. Distributed and Overlapping Representations of Faces and Objects in Ventral Temporal Cortex. Science [Internet]. 2001 ;293(5539):2425 - 2430. Available from: http://www.sciencemag.org/cgi/content/abstract/293/5539/2425

http://www.eurekalert.org/pub_releases/2001-09/niom-bsp092601.php
http://www.sciencemag.org/cgi/content/abstract/293/5539/2425

Why recognizing a face is easier when the race matches our own

We have known for a while that recognizing a face is easier when its owner's race matches our own. An imaging study now shows that greater activity in the brain's expert face-discrimination area occurs when the subject is viewing faces that belong to members of the same race as their own.

Golby, A. J., Gabrieli, J. D. E., Chiao, J. Y. & Eberhardt, J. L. 2001. Differential responses in the fusiform region to same-race and other-race faces. Nature Neuroscience, 4, 845-850.

http://www.nature.com/nsu/010802/010802-1.html

Boys' and girls' brains process faces differently

Previous research has suggested a right-hemisphere superiority in face processing, as well as adult male superiority at spatial and non-verbal skills (also associated with the right hemisphere of the brain). This study looked at face recognition and the ability to read facial expressions in young, pre-pubertal boys and girls. Boys and girls were equally good at recognizing faces and identifying expressions, but boys showed significantly greater activity in the right hemisphere, while the girls' brains were more active in the left hemisphere. It is speculated that boys tend to process faces at a global level (right hemisphere), while girls process faces at a more local level (left hemisphere). This may mean that females have an advantage in reading fine details of expression. More importantly, it may be that different treatments might be appropriate for males and females in the case of brain injury.

[2541] Everhart ED, Shucard JL, Quatrin T, Shucard DW. Sex-related differences in event-related potentials, face recognition, and facial affect processing in prepubertal children. Neuropsychology. 2001 ;15(3):329 - 341.

http://www.eurekalert.org/pub_releases/2001-07/aaft-pba062801.php
http://news.bbc.co.uk/hi/english/health/newsid_1425000/1425797.stm

Children's recognition of faces

Children aged 4 to 7 were found to be able to use both configural and featural information to recognize faces. However, even when trained to proficiency on recognizing the target faces, their recognition was impaired when a superfluous hat was added to the face.

[1424] Freire A, Lee K. Face Recognition in 4- to 7-Year-Olds: Processing of Configural, Featural, and Paraphernalia Information. Journal of Experimental Child Psychology [Internet]. 2001 ;80(4):347 - 371. Available from: http://www.sciencedirect.com/science/article/B6WJ9-457D48M-3/2/cb66483ea30cd07cb6c2047ade7b1e57

Differences in face perception processing between autistic and normal adults

An imaging study compared activation patterns of adults with autism and normal control subjects during a face perception task. While autistic subjects could perform the face perception task, none of the regions supporting face processing in normals were significantly active in the autistic subjects. Instead, in every autistic patient, faces maximally activated aberrant and individual-specific neural sites (e.g. frontal cortex, primary visual cortex, etc.), in contrast to the 100% consistency of maximal activation within the traditional fusiform face area (FFA) for every normal subject. It appears that, compared with normal individuals, autistic individuals 'see' faces utilizing different neural systems, with each patient doing so via a unique neural circuitry.

[704] Pierce K, Muller R-A, Ambrose J, Allen G, Courchesne E. Face processing occurs outside the fusiform 'face area' in autism: evidence from functional MRI. Brain [Internet]. 2001 ;124(10):2059 - 2073. Available from: http://brain.oxfordjournals.org/cgi/content/abstract/124/10/2059

http://brain.oupjournals.org/cgi/content/abstract/124/10/2059

It’s not the noise in the brain; it’s the noise in the input

A new study has found that errors in perceptual decisions occurred only when there was confused sensory input, not because of any ‘noise’ or randomness in the cognitive processing. The finding, if replicated across broader contexts, will change some of our fundamental assumptions about how the brain works.

May, 2013

Meditation can produce enduring changes in emotional processing

December, 2012

A new study provides more evidence that meditation changes the brain, and different types of meditation produce different effects.

More evidence that even an 8-week meditation training program can have measurable effects on the brain comes from an imaging study. Moreover, the type of meditation makes a difference to how the brain changes.

The study involved 36 participants from three different 8-week courses: mindful meditation, compassion meditation, and health education (control group). The courses involved only two hours of class time each week, with meditation students encouraged to meditate for an average of 20 minutes a day outside class. There was a great deal of individual variability in the total amount of meditation done by the end of the course (210-1491 minutes for the mindful attention training course; 190-905 minutes for the compassion training course).

Participants’ brains were scanned three weeks before the courses began, and three weeks after the end. During each brain scan, the volunteers viewed 108 images of people in situations that were either emotionally positive, negative or neutral.

In the mindful attention group, the second brain scan showed a decrease in activation in the right amygdala in response to all images, supporting the idea that meditation can improve emotional stability and response to stress. In the compassion meditation group, right amygdala activity also decreased in response to positive or neutral images, but, among those who reported practicing compassion meditation most frequently, right amygdala activity tended to increase in response to negative images. No significant changes were seen in the control group or in the left amygdala of any participant.

The findings support the idea that meditation can be effective in improving emotional control, and that compassion meditation can indeed increase compassionate feelings. Increased amygdala activation was also correlated with decreased depression scores in the compassion meditation group, which suggests that having more compassion towards others may also be beneficial for oneself.

The findings also support the idea that the changes brought about by meditation endure beyond the meditative state, and that the changes can start to occur quite quickly.

These findings are all consistent with other recent research.

One point is worth emphasizing, in light of the difficulty of developing a training program that improves working memory itself rather than simply performance on the practiced task. These findings suggest that, unlike most cognitive training programs, meditation training might produce learning that is process-specific rather than stimulus- or task-specific, perhaps giving it wider generality than most cognitive training.

How emotion keeps some memories vivid

September, 2012

Emotionally arousing images that are remembered more vividly were seen more vividly. This may be because the amygdala focuses visual attention rather than more cognitive attention on the image.

We know that emotion affects memory. We know that attention affects perception (see, e.g., Visual perception heightened by meditation training; How mindset can improve vision). Now a new study ties it all together. The study shows that emotional arousal affects how clearly we see an experience, and this in turn affects how vividly we later recall it.

The study used images of positively and negatively arousing scenes and neutral scenes, which were overlaid with varying amounts of “visual noise” (like the ‘snow’ we used to see on old televisions). College students were asked to rate the amount of noise on each picture, relative to a specific image they used as a standard. There were 25 pictures in each category, and three levels of noise (less than standard, equal to standard, and more than standard).

Different groups of participants explored different variants of the basic design: color images; gray-scale images; lower noise levels (10%, 15%, 20% rather than 35%, 45%, 55%); and single exposure (each picture presented only once, at one of the noise levels).
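
As an illustration of the stimulus manipulation, here is a minimal Python sketch of one way to overlay a given percentage of visual noise on a grayscale image. The study's actual stimulus-generation method isn't described here, so treat this as purely illustrative.

    import numpy as np

    def add_visual_noise(image, noise_pct, rng=None):
        # Replace noise_pct percent of pixels with random 'snow'
        # (image: 2D grayscale array with values in [0, 1])
        rng = rng or np.random.default_rng()
        noisy = image.copy()
        mask = rng.random(image.shape) < noise_pct / 100
        noisy[mask] = rng.random(int(mask.sum()))
        return noisy

    scene = np.full((128, 128), 0.5)        # placeholder "scene"
    for pct in (10, 15, 20, 35, 45, 55):    # noise levels mentioned above
        noisy_scene = add_visual_noise(scene, pct)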

Regardless of the actual amount of noise, emotionally arousing pictures were consistently rated as significantly less noisy than neutral pictures, indicating that people were seeing them more clearly. This was true in all conditions.

Eye-tracking analysis ruled out the idea that people directed their attention differently for emotionally arousing images, but did show that more eye fixations were associated both with less noisy images and emotionally arousing ones. In other words, people were viewing emotionally important images as if they were less noisy.

One group of 22 students were given a 45-minute spatial working memory task after seeing the images, and then asked to write down all the details they could remember about the pictures they remembered seeing. The amount of detail they recalled was taken to be an indirect measure of vividness.

A second group of 27 students were called back after a week for a recognition test. They were shown 36 new images mixed in with the original 75 images, and asked to rate them as new, familiar, or recollected. They were also asked to rate the vividness of their recollection.

Although, overall, emotionally arousing pictures were not more likely to be remembered than neutral pictures, both experiments found that pictures originally seen as more vivid (less noise) were remembered more vividly and in more detail.

Brain scans from 31 students revealed that the amygdala was more active when looking at images rated as vivid, and this in turn increased activity in the visual cortex and in the posterior insula (which integrates sensations from the body). This suggests that the increased perceptual vividness is not simply a visual phenomenon, but part of a wider sensory activation.

There was another neural response to perceptual vividness: activity in the dorsolateral prefrontal cortex and the posterior parietal cortex was negatively correlated with vividness. This suggests that emotion is not simply increasing our attentional focus; rather, it is changing it, reducing effortful attentional and executive processes in favor of more perceptual ones. This, perhaps, gives emotional memories their different ‘flavor’ compared to more neutral memories.

These findings clearly need more exploration before we know exactly what they mean, but the main finding from the study is that the vividness with which we recall some emotional experiences is rooted in the vividness with which we originally perceived them.

The study highlights how emotion can sharpen our attention, building on previous findings that emotional events are more easily detected when visibility is difficult, or attentional demands are high. It is also not inconsistent with a study I reported on last year, which found that some information needs no repetition to be remembered because the amygdala deems it important.

I should add, however, that the perceptual effect is not the whole story — the current study found that, although perceptual vividness is part of the reason for memories that are vividly remembered, emotional importance makes its own, independent, contribution. This contribution may occur after the event.

It’s suggested that individual differences in these reactions to emotionally enhanced vividness may underlie an individual’s vulnerability to post-traumatic stress disorder.

Perception

See also

Smell

Hearing

Vision

Older news items (pre-2010) brought over from the old website

Perception affected by mood

An imaging study has revealed that when people were shown a composite image with a face surrounded by "place" images, such as a house, and asked to identify the gender of the face, those in whom a bad mood had been induced didn’t process the places in the background. However, those in a good mood took in both the focal and background images. These differences in perception were coupled with differences in activity in the parahippocampal place area. Increasing the amount of information is of course not necessarily a good thing, as it may result in more distraction.

[1054] Schmitz TW, De Rosa E, Anderson AK. Opposing Influences of Affective State Valence on Visual Cortical Encoding. J. Neurosci. [Internet]. 2009 ;29(22):7199 - 7207. Available from: http://www.jneurosci.org/cgi/content/abstract/29/22/7199

http://www.eurekalert.org/pub_releases/2009-06/uot-pww060309.php

What we perceive is not what we sense

Perceiving a simple touch may depend as much on memory, attention, and expectation as on the stimulus itself. A study involving macaque monkeys has found that the monkeys’ perception of a touch (varied in intensity) was more closely correlated with activity in the medial premotor cortex (MPC), a region of the brain's frontal lobe known to be involved in making decisions about sensory information, than activity in the primary somatosensory cortex (which nevertheless accurately recorded the intensity of the sensation). MPC neurons began to fire before the stimulus even touched the monkeys' fingertips — presumably because the monkey was expecting the stimulus.

[263] de Lafuente V, Romo R. Neuronal correlates of subjective sensory experience. Nat Neurosci [Internet]. 2005 ;8(12):1698 - 1703. Available from: http://dx.doi.org/10.1038/nn1587

http://www.eurekalert.org/pub_releases/2005-11/hhmi-tsi110405.php

Varied sensory experience important in childhood

A new baby has far more connections between neurons than necessary; from birth to about age 12 the brain trims 50% of these unnecessary connections while at the same time building new ones through learning and sensory stimulation — in other words, tailoring the brain to its environment. A mouse study has found that without enough sensory stimulation, infant mice lose fewer connections — indicating that connections need to be lost in order for appropriate ones to grow. The findings support the idea that parents should try to expose their children to a variety of sensory experiences.

[479] Zuo Y, Yang G, Kwon E, Gan W-B. Long-term sensory deprivation prevents dendritic spine loss in primary somatosensory cortex. Nature [Internet]. 2005 ;436(7048):261 - 265. Available from: http://www.ncbi.nlm.nih.gov/pubmed/16015331

http://www.sciencentral.com/articles/view.htm3?article_id=218392607

Brain regions that process reality and illusion identified

Researchers have now identified the regions of the brain involved in processing what’s really going on, and what we think is going on. Macaque monkeys played a virtual reality video game in which the monkeys were tricked into thinking that they were tracing ellipses with their hands, although they actually were moving their hands in a circle. Monitoring of nerve cells revealed that the primary motor cortex represented the actual movement while the signals from cells in a neighboring area, called the ventral premotor cortex, were generating elliptical shapes. Knowing how the brain works to distinguish between action and perception will help efforts to build biomedical devices that can control artificial limbs, some day enabling the disabled to move a prosthetic arm or leg by thinking about it.

[1107] Schwartz AB, Moran DW, Reina AG. Differential Representation of Perception and Action in the Frontal Cortex. Science [Internet]. 2004 ;303(5656):380 - 383. Available from: http://www.sciencemag.org/cgi/content/abstract/303/5656/380

http://news-info.wustl.edu/tips/page/normal/652.html
http://www.eurekalert.org/pub_releases/2004-02/wuis-rpb020704.php

Memory different depending on whether information received via eyes or ears

Carnegie Mellon scientists using magnetic resonance imaging found quite different brain activity patterns for reading and listening to identical sentences. During reading, the right hemisphere was not as active as expected, suggesting a difference in the nature of comprehension experienced when reading versus listening. When listening, there was greater activation in a part of Broca's area associated with verbal working memory, suggesting that there is more semantic processing and working memory storage in listening comprehension than in reading. This should not be taken as evidence that comprehension is better in one or other of these situations, merely that it is different. "Listening to an audio book leaves a different set of memories than reading does. A newscast heard on the radio is processed differently from the same words read in a newspaper."

[2540] Michael EB, Keller TA, Carpenter PA, Just MA. fMRI investigation of sentence comprehension by eye and by ear: Modality fingerprints on cognitive processes. Human Brain Mapping [Internet]. 2001 ;13(4):239 - 252. Available from: http://onlinelibrary.wiley.com/doi/10.1002/hbm.1036/abstract

http://www.eurekalert.org/pub_releases/2001-08/cmu-tma081401.php

The chunking of our lives: the brain "sees" life in segments

We talk about "chunking" all the time in the context of memory. But the process of breaking information down into manageable bits occurs, it seems, right from perception. Magnetic resonance imaging reveals that when people watched movies of common, everyday, goal-directed activities (making the bed, doing the dishes, ironing a shirt), their brains automatically broke these continuous events into smaller segments. The study also identified a network of brain areas that is activated during the perception of boundaries between events. "The fact that changes in brain activity occurred during the passive viewing of movies indicates that this is how we normally perceive continuous events, as a series of segments rather than a dynamic flow of action."
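
This is fMRI work, not an algorithm, but if a toy analogy helps: segmenting a continuous stream is a bit like flagging the points where a signal's local average jumps. The sketch below is purely illustrative, with simulated data, and is in no way the study's method.

```python
import numpy as np

# Loose analogy only, not the study's fMRI method: mark "event boundaries"
# wherever a continuous stream's local average jumps sharply.

rng = np.random.default_rng(1)
# Three "events" with different mean activity levels, joined into one stream.
stream = np.concatenate([rng.normal(m, 0.3, 100) for m in (0.0, 2.0, 1.0)])

window = 20
smoothed = np.convolve(stream, np.ones(window) / window, mode="valid")
jumps = np.abs(np.diff(smoothed))
boundaries = np.where(jumps > 0.05)[0]
print("boundary candidates near samples:", boundaries[:5], "...")
```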

Zacks JM, Braver TS, Sheridan MA, Donaldson DI, Snyder AZ, Ollinger JM, Buckner RL, Raichle ME. Human brain activity time-locked to perceptual event boundaries. Nature Neuroscience. 2001;4(6):651-655.

http://www.eurekalert.org/pub_releases/2001-07/aaft-bp070201.php

Amygdala may be critical for allowing perception of emotionally significant events despite inattention

We choose what to pay attention to, what to remember. We give more weight to some things than others. Our perceptions and memories of events are influenced by our preconceptions, and by our moods. Researchers at Yale and New York University have recently published research indicating that the part of the brain known as the amygdala is responsible for the influence of emotion on perception. This builds on previous research showing that the amygdala is critically involved in computing the emotional significance of events. The amygdala is connected to those brain regions dealing with sensory experiences, and the theory that these connections allow the amygdala to influence early perceptual processing is supported by this research. Dr. Anderson suggests that “the amygdala appears to be critical for the emotional tuning of perceptual experience, allowing perception of emotionally significant events to occur despite inattention.”

[968] Anderson AK, Phelps EA. Lesions of the human amygdala impair enhanced perception of emotionally salient events. Nature [Internet]. 2001 ;411(6835):305 - 309. Available from: http://dx.doi.org/10.1038/35077083

http://www.eurekalert.org/pub_releases/2001-05/NYU-Infr-1605101.php

tags memworks: 

Older people find it harder to see the wood for the trees

September, 2011

A study indicates that difficulty in seeing the whole, as opposed to its elements, is associated with impaired perceptual grouping, which becomes more common with age.

A standard test of how we perceive local vs global features of visual objects uses Navon figures — large letters made up of smaller ones (see below for an example). As in the Stroop test, when colors and color words disagree (e.g., the word RED printed in blue ink), the viewer can focus either on the large letter or the smaller ones. When the viewer is faster at seeing the larger letter, they are said to be showing global precedence; when they’re faster at seeing the component letters, they are said to be showing local precedence. Typically, the greater the number of component letters, the easier it is to see the larger letter. This is consistent with the Gestalt principles of proximity and continuity — elements that are close together and form smooth lines will tend to be perceptually grouped together and seen as a unit (the greater the number of component letters, the closer they will be, and the smoother the line).

In previous research, older adults have often demonstrated local precedence rather than global, although the results have been inconsistent. One earlier study found that older adults performed poorly when asked to report in which direction (horizontal or vertical) dots formed smooth lines, suggesting an age-related decline in perceptual grouping. The present study therefore investigated whether this decline was behind the decrease in global precedence.

In the study 20 young men (average age 22) and 20 older men (average age 57) were shown Navon figures and asked whether the target letter formed the large letter or the smaller letters (e.g., “Is the big or the small letter an E?”). The number of component letters was systematically varied across five quantities. Under such circumstances it is expected that at a certain level of letter density everyone will switch to global precedence, but if a person is impaired at perceptual grouping, this will occur at a higher level of density.

The young men were, unsurprisingly, markedly faster than the older men in their responses. They were also significantly faster at responding when the target was the global letter, compared to when it was the local letter (i.e. they showed global precedence). The older adults, on the other hand, had equal reaction times to global and local targets. Moreover, they showed no improvement as the letter-density increased (unlike the young men).
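
For concreteness, here is one simple way to turn such reaction times into a precedence measure. The numbers below are invented for illustration, not taken from the study.

```python
import numpy as np

# A simple precedence index from reaction times (invented data):
# positive index -> faster on global targets (global precedence);
# negative index -> faster on local targets (local precedence).

rt_global = np.array([520, 510, 495, 530])  # ms, trials with a global target
rt_local = np.array([580, 565, 600, 575])   # ms, trials with a local target

precedence = rt_local.mean() - rt_global.mean()
print(f"precedence index: {precedence:+.0f} ms")  # +66 ms -> global precedence
```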

It is noteworthy that the older men, while they failed to show global precedence, also failed to show local precedence (remember that results are based on group averages; this suggests that the group was evenly balanced between those showing local precedence and those showing global precedence). Interestingly, previous research has suggested that women are more likely to show local precedence.

The link between perceptual grouping and global precedence is further supported by individual differences — older men who were insensitive to changes in letter-density were almost exclusively the ones that showed persistent local precedence. Indeed, increases in letter-density were sometimes counter-productive for these men, leading to even slower reaction times for global targets. This may be the result of greater distractor interference, to which older adults are more vulnerable, and to which this sub-group of older men may have been especially susceptible.

Example of a Navon figure (a large E composed of small Fs):

FFFFFF
F
FFFFFF
F
FFFFFF
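
If you like to see the construction in code, here's a minimal Python sketch that stamps a local letter into the bitmap of a global letter; the five-row bitmap for 'E' is just an illustration, and any letter could be encoded the same way.

```python
# Render a Navon figure: a global letter built from copies of a local letter.

E_BITMAP = [
    "######",
    "#.....",
    "######",
    "#.....",
    "######",
]

def navon(global_bitmap, local_letter):
    """Replace each filled cell of the global letter with the local letter."""
    return "\n".join(
        "".join(local_letter if cell == "#" else " " for cell in row)
        for row in global_bitmap
    )

print(navon(E_BITMAP, "F"))  # reproduces the large E made of small Fs above
```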

Reference: 

Source: 

tags memworks: 

tags problems: 

Topics: 

The durability and specificity of perceptual learning

September, 2011

Increasing evidence shows that perception is nowhere near the simple bottom-up process we once thought. Two recent perception studies add to the evidence.

Previous research has found practice improves your ability at distinguishing visual images that vary along one dimension, and that this learning is specific to the visual images you train on and quite durable. A new study extends the finding to more natural stimuli that vary on multiple dimensions.

In the small study, 9 participants learned to identify faces and 6 participants learned to identify “textures” (noise patterns) over the course of two hour-long sessions of 840 trials, on consecutive days. Faces were cropped to show only internal features and shown only briefly, so this was not a particularly easy task. Participants were then tested over a year later (range: 10-18 months; average 13 and 15 months, respectively).

On the test, participants were shown both images from training and new images that closely resembled them. While accuracy rates were high for the original images, they plummeted for the very similar new images, indicating that, despite the length of time since training, participants still retained much of their memory of the original images, and that this memory was quite specific.

Although practice improved performance across nearly all items and for all people, there were significant differences between both participants and individual stimuli. More interestingly, individual differences (in both stimuli and people) were stable across sessions (e.g., if you were third-best on day 1, you were probably third-best on day 2 too, even though you were doing better). In other words, learning didn’t produce any qualitative changes in the representations of different items — practice had nearly the same effect on all; differences were rooted in initial difficulty of discriminating the pattern.
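
A quick way to see what "stable individual differences" means in practice: if everyone improves by a similar amount, the scores change but the ordering of participants does not, and a rank correlation across sessions comes out high. The data below are invented for illustration.

```python
from scipy.stats import spearmanr

# Rank stability across sessions: higher scores on day 2, same ordering.
day1 = [62, 55, 71, 48, 66]            # accuracy (%) on training day 1
day2 = [74, 66, 85, 60, 78]            # day 2: everyone improved similarly

rho, p = spearmanr(day1, day2)
print(f"Spearman rho = {rho:.2f}")     # 1.00: identical ranking across days
```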

However, while it’s true that individual differences were stable, that doesn’t mean that every person improved their performance the exact same amount with the same amount of practice. Interestingly (and this is just from my eyeball examination of the graphs), it looks like there was more individual variation among the group looking at noise patterns. This isn’t surprising. We all have a lot of experience discriminating faces; we’re all experts. This isn’t the case with the textures. For these, people had to ‘catch on’ to the features that were useful in discriminating patterns. You would expect more variability between people in how long it takes to work out a strategy, and how good that strategy is. Notably, three of the six people in the texture group actually performed better on the test than they had done on the second day of training, over a year ago. For the other three, and all nine of those in the face group, test performance was worse than it had been on the second day of training (but decidedly better than the first day).

The durability and specificity of this perceptual learning, the researchers point out, resembles that found in implicit memory and some types of sensory adaptation. It also indicates that such perceptual learning is not limited, as has been thought, to changes early in the visual pathway, but produces changes in a wider network of cortical neurons, particularly in the inferior temporal cortex.

The second, unrelated, study also bears on this issue of specificity.

We look at a scene and extract the general features — a crowd of people, violently riotous or riotously happy? — or we look at a scene and extract specific features that over time we use to build patterns about what goes with what. The first is called “statistical summary perception”; the second “statistical learning”.

A study designed to disentangle these two processes found that you can only do one or other; you can’t derive both types of information at the same time. Thus, when people were shown grids of lines slanted to varying degrees, they could either assess whether the lines were generally leaning to the left or right, or they could learn to recognize pairs of lines that had been hidden repeatedly in the grids — but they couldn’t do both.

The fact that each of these tasks interfered with the other suggests that the two processes are fundamentally related, perhaps competing for a common pool of attentional resources.
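
If it helps to see the distinction in miniature, here's a toy sketch with invented stimuli (nothing here is from the study itself): summary perception reduces one display to a single aggregate, while statistical learning tracks which specific items recur together across displays.

```python
import numpy as np

# Toy contrast between the two processes (invented stimuli).
rng = np.random.default_rng(0)
displays = [rng.normal(0, 15, size=(3, 3)) for _ in range(50)]  # grids of line tilts (degrees)
for d in displays:
    d[0, 0], d[0, 1] = -30.0, 30.0  # the same hidden pair recurs in every display

# Statistical summary perception: overall lean of a single display.
print("display 0 leans", "right" if displays[0].mean() > 0 else "left")

# Statistical learning: the tracked positions always hold the same tilt pair.
pairs = {(d[0, 0], d[0, 1]) for d in displays}
print("distinct pairs at tracked positions:", len(pairs))  # 1 -> a learnable regularity
```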

Reference: 

Source: 

tags memworks: 

Topics: 

Meditation's cognitive benefits

A critical part of attention (and working memory capacity) is being able to ignore distraction. There has been growing evidence that meditation training (in particular mindfulness meditation) helps develop attentional control, and that this can start to happen very quickly.

For example:

  • after an eight-week course that included up to 30 minutes of daily meditation, novices improved their ability to quickly and accurately move and focus attention.
  • three months of rigorous training in Vipassana meditation improved attentional control.
  • after eight weeks of Mindfulness Training, Marine reservists during pre-deployment showed increased working memory capacity and decreased negative mood (this training also included concrete applications for the operational environment and information and skills about stress, trauma and resilience in the body).
  • after a mere four sessions of 20 minutes, students showed a significant improvement in critical cognitive skills — and a dramatic improvement when conditions became more stressful (stress created by increasingly challenging time constraints).

There seem to be several factors involved in these improvements: better control of brainwaves; increased gray matter density in some brain regions; improved white-matter connectivity.

Thus, after ten weeks of Transcendental Meditation (TM) practice, students showed significant changes in brainwave patterns during meditation compared to eyes-closed rest for the controls. These changes reflected greater coherence and power in brainwave activity in areas that overlap with the default mode network (the brain’s ‘resting state’). Similarly, after an eight-week mindfulness meditation program, participants had better control of alpha brainwaves. Relatedly, perhaps, experienced Zen meditators have shown that, after interruptions designed to mimic spontaneous thoughts, they could bring activity in most regions of the default mode network back to baseline faster than non-meditators.
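
For the technically curious, "control of alpha brainwaves" is usually quantified as power in the 8-12 Hz band of an EEG trace. Here's a hedged Python sketch of one standard way to compute it; the signal is simulated, and the sampling rate and band edges are common conventions rather than anything from these studies.

```python
import numpy as np
from scipy.signal import welch

# Estimate alpha-band (8-12 Hz) power from a simulated EEG trace.
fs = 256                                  # sampling rate (Hz), a common choice
t = np.arange(0, 10, 1 / fs)              # 10 seconds of "EEG"
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)   # Welch power spectral density
band = (freqs >= 8) & (freqs <= 12)
alpha_power = np.trapz(psd[band], freqs[band])   # integrate PSD over the band
print(f"alpha-band power: {alpha_power:.3f}")
```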

Thus, after an 8-week mindfulness meditation program, participants showed increased grey-matter density in the left hippocampus, posterior cingulate cortex, temporo-parietal junction, and cerebellum, as well as decreased grey-matter density in the amygdala. Similarly, another study found experienced meditators showed significantly larger volumes of the right hippocampus and the right orbitofrontal cortex, and to a lesser extent the right thalamus and the left inferior temporal gyrus.

These areas of the brain are all closely linked to emotion, and may explain meditators' improved ability in regulating their emotions.

Thus, long-term meditators showed pronounced differences in white-matter connectivity between their brains and those of age-matched controls, meaning that meditators’ brains were better able to quickly relay electrical signals. The brain regions linked by these white-matter tracts include many of those mentioned as showing increased gray matter density. Another study found that a mere 11 hours of meditation training (IBMT) produced measurable changes in the integrity and efficiency of white matter in the corona radiata (which links to the anterior cingulate cortex, an area where attention and emotion are thought to be integrated).

It’s an interesting question, the extent to which poor attentional control is a reflection of poor emotional regulation. Obviously there is more to distractibility than that, but emotion and attention are clearly inextricably entwined. So, for example, a pilot study involving 10 middle school students with ADHD found that those who participated in twice-daily 10 minute sessions of Transcendental Meditation for three months showed a dramatic reduction in stress and anxiety and improvements in ADHD symptoms and executive function.

The effects of emotion regulation are of course wider than the effects on attention. Another domain they impact is that of decision-making. A study involving experienced Buddhist meditators found that they used different brain regions than controls when making decisions in a ‘fairness’ game. The differences reflected less input from emotional reactions and more emphasis on the actual benefits.

Similarly, brain scans taken while experienced and novice meditators meditated found that periodic bursts of disturbing noise had less effect on brain areas involved in emotion and decision-making for experienced meditators compared to novices — and very experienced meditators (at least 40,000 hours of experience) showed hardly any activity in these areas at all.

Attention is also entwined with perception, so it’s also interesting to observe that several studies have found improved visual perception attendant on meditation training and/or experience. Thus, participants attending a three-month meditation retreat showed significant improvements in making fine visual distinctions, and in their ability to sustain attention.

But such benefits may depend on the style of meditation. A study involving experienced practitioners of two styles of meditation (Deity Yoga (DY) and Open Presence (OP)) found that DY meditators were dramatically better at mental rotation and visual memory tasks compared to OP practitioners and controls (and only if they were given the tasks immediately after meditating). Similarly, a study involving Tibetan Buddhist monks found that, during "one-point" meditation, monks were significantly better at maintaining their focus on one image, when two different images were presented to each eye. This superior attentional control was not found during compassion-oriented meditation. However, even under normal conditions the monks showed longer stable perception compared to meditation-naïve control subjects. And three months of intense training in Vipassana meditation produced an improvement in the ability of participants to detect the second of two visual signals half a second apart (the size of the improvement was linked to reduced brain activity to the first target — which was still detected with the same level of accuracy). Similarly, three months of intensive meditation training reduced variability in attentional processing of target tones.

References

You can read about these studies below in more detail. Three studies were mentioned here without having appeared in the news reports:

Lutz, A., Slagter, H. A., Rawlings, N. B., Francis, A. D., Greischar, L. L., & Davidson, R. J. (2009). Mental Training Enhances Attentional Stability: Neural and Behavioral Evidence. J. Neurosci., 29(42), 13418-13427. doi:10.1523/JNEUROSCI.1614-09.2009

Tang, Y.-Y., Lu, Q., Geng, X., Stein, E. A., Yang, Y., & Posner, M. I. (2010). Short-term meditation induces white matter changes in the anterior cingulate. Proceedings of the National Academy of Sciences, 107(35), 15649 -15652. doi:10.1073/pnas.1011043107

Travis, F., Haaga, D., Hagelin, J., Tanner, M., Arenander, A., Nidich, S., Gaylord-King, C., et al. (2010). A self-referential default brain state: patterns of coherence, power, and eLORETA sources during eyes-closed rest and Transcendental Meditation practice. Cognitive Processing, 11(1), 21-30. doi:10.1007/s10339-009-0343-2

Older news items (pre-2010) brought over from the old website

More on how meditation can improve attention

Another study adds to research showing meditation training helps people improve their ability to focus and ignore distraction. The new study shows that three months of rigorous training in Vipassana meditation improved people's ability to stabilize attention on target tones, when presented with tones in both ears and instructed to respond only to specific tones in one ear. Marked variability in response time is characteristic of those with ADHD.

[1500] Lutz A, Slagter HA, Rawlings NB, Francis AD, Greischar LL, Davidson RJ. Mental Training Enhances Attentional Stability: Neural and Behavioral Evidence. J. Neurosci. [Internet]. 2009 ;29(42):13418 - 13427. Available from: http://www.jneurosci.org/cgi/content/abstract/29/42/13418

http://www.physorg.com/news177347438.html

Meditation may increase gray matter

Adding to the increasing evidence for the cognitive benefits of meditation, a new imaging study of 22 experienced meditators and 22 controls has revealed that meditators showed significantly larger volumes of the right hippocampus and the right orbitofrontal cortex, and to a lesser extent the right thalamus and the left inferior temporal gyrus. There were no regions where controls had significantly more gray matter than meditators. These areas of the brain are all closely linked to emotion, and may explain meditators' improved ability in regulating their emotions.

[1055] Luders E, Toga AW, Lepore N, Gaser C. The underlying anatomical correlates of long-term meditation: Larger hippocampal and frontal volumes of gray matter. NeuroImage [Internet]. 2009 ;45(3):672 - 678. Available from: http://www.sciencedirect.com/science/article/B6WNP-4VCH6WN-8/2/fa4f305302758ca5631926fc44a5350f

http://www.eurekalert.org/pub_releases/2009-05/uoc--htb051209.php

Meditation technique can temporarily improve visuospatial abilities

And continuing on the subject of visual short-term memory, a study involving experienced practitioners of two styles of meditation (Deity Yoga (DY) and Open Presence (OP)) has found that, although meditators performed similarly to nonmeditators on two types of visuospatial tasks (mental rotation and visual memory), when they did the tasks immediately after meditating for 20 minutes (while the nonmeditators rested or did something else), practitioners of the DY style showed a dramatic improvement compared to OP practitioners and controls. In other words, although the claim that regular meditation practice can increase your short-term memory capacity was not confirmed, it does appear that some forms of meditation can temporarily (and dramatically) improve it. Since the form of meditation that had this effect was one that emphasizes visual imagery, it does support the idea that you can improve your imagery and visual memory skills (even if you do need to ‘warm up’ before the improvement is evident).

[860] Kozhevnikov M, Louchakova O, Josipovic Z, Motes MA. The enhancement of visuospatial processing efficiency through Buddhist Deity meditation. Psychological Science: A Journal of the American Psychological Society / APS [Internet]. 2009 ;20(5):645 - 653. Available from: http://www.ncbi.nlm.nih.gov/pubmed/19476594

http://www.sciencedaily.com/releases/2009/04/090427131315.htm
http://www.eurekalert.org/pub_releases/2009-04/afps-ssb042709.php

Transcendental Meditation reduces ADHD symptoms among students

A pilot study involving 10 middle school students with ADHD has found that those who participated in twice-daily 10 minute sessions of Transcendental Meditation for three months showed a dramatic reduction in stress and anxiety and improvements in ADHD symptoms and executive function. The effect was much greater than expected. ADHD children have a reduced ability to cope with stress.
A second, recently completed study has also found that three months’ practice of the technique resulted in significant positive changes in brain functioning during visual-motor skills, especially in the circuitry of the brain associated with attention and distractibility. After six months’ practice, measurements of distractibility moved into the normal range.

Grosswald, S. J., Stixrud, W. R., Travis, F., & Bateh, M. A. (2008, December). Use of the Transcendental Meditation technique to reduce symptoms of Attention Deficit Hyperactivity Disorder (ADHD) by reducing stress and anxiety: An exploratory study. Current Issues in Education [On-line], 10(2). Available: http://cie.ed.asu.edu/volume10/number2/

http://www.eurekalert.org/pub_releases/2008-12/muom-tmr122408.php

Meditation speeds the mind's return after distraction

Another study comparing brain activity in experienced meditators and novices has looked at what happens when people meditating were interrupted by stimuli designed to mimic the appearance of spontaneous thoughts. The study compared 12 people with more than three years of daily practice in Zen meditation with 12 others who had never practiced meditation. It was found that, after interruption, experienced meditators were able to bring activity in most regions of the default mode network (especially the angular gyrus, a region important for processing language) back to baseline faster than non-meditators. The default mode network is associated with the occurrence of spontaneous thoughts and mind-wandering during wakeful rest. The findings indicate not only the attentional benefits of meditation, but also suggest a value for disorders characterized by excessive rumination or an abnormal production of task-unrelated thoughts, such as obsessive-compulsive disorder, anxiety disorder and major depression.

[910] Pagnoni G, Cekic M, Guo Y. “Thinking about Not-Thinking”: Neural Correlates of Conceptual Processing during Zen Meditation. PLoS ONE [Internet]. 2008 ;3(9):e3083 - e3083. Available from: http://dx.doi.org/10.1371/journal.pone.0003083

Full text available at http://dx.plos.org/10.1371/journal.pone.0003083
http://www.eurekalert.org/pub_releases/2008-09/eu-zts082908.php

Improved attention with mindfulness training

More evidence of the benefits of meditation for attention comes from a study looking at the performance of novices taking part in an eight-week course that included up to 30 minutes of daily meditation, and experienced meditators who attended an intensive full-time, one-month retreat. Initially, the experienced participants demonstrated better executive functioning skills, the cognitive ability to voluntarily focus, manage tasks and prioritize goals. After the eight-week training, the novices had improved their ability to quickly and accurately move and focus attention, while the experienced participants, after their one-month intensive retreat, also improved their ability to keep attention "at the ready."

[329] Jha AP, Krompinger J, Baime MJ. Mindfulness training modifies subsystems of attention. Cognitive, Affective & Behavioral Neuroscience [Internet]. 2007 ;7(2):109 - 119. Available from: http://www.ncbi.nlm.nih.gov/pubmed/17672382

http://www.eurekalert.org/pub_releases/2007-06/uop-mtc062507.php

Brain scans show how meditation affects the brain

An imaging study comparing novice and experienced meditators found that experienced meditators showed greater activity in brain circuits involved in paying attention. But the most experienced meditators with at least 40,000 hours of experience showed a brief increase in activity as they started meditating, and then a drop to baseline, as if they were able to concentrate in an effortless way. Moreover, while the subjects meditated inside the MRI, the researchers periodically blasted them with disturbing noises. Among the experienced meditators, the noise had less effect on the brain areas involved in emotion and decision-making than among novice meditators. Among meditators with more than 40,000 hours of lifetime practice, these areas were hardly affected at all. The attention circuits affected by meditation are also involved in attention deficit hyperactivity disorder.

[1364] Brefczynski-Lewis JA, Lutz A, Schaefer HS, Levinson DB, Davidson RJ. Neural correlates of attentional expertise in long-term meditation practitioners. Proceedings of the National Academy of Sciences [Internet]. 2007 ;104(27):11483 - 11488. Available from: http://www.pnas.org/content/104/27/11483.abstract

Full text is available at http://tinyurl.com/3d6wx4
http://www.physorg.com/news102179695.html

Meditation may improve attentional control

Paying attention to one thing can keep you from noticing something else. When people are shown two visual signals half a second apart, they often miss the second one — this effect is called the attentional blink. In a study involving 40 participants training in Vipassana meditation (designed to reduce mental distraction and improve sensory awareness), one group of 17 attended a 3-month retreat during which they meditated for 10–12 hours a day (the practitioner group), while the other 23 simply received a 1-hour meditation class and were asked to meditate for 20 minutes daily for 1 week prior to each testing session (the control group). The three months of intense training resulted in a smaller attentional blink and reduced brain activity to the first target (which was still detected with the same level of accuracy). Individuals with the most reduced activity generally showed the greatest reduction in attentional blink size. The study demonstrates that mental training can result in increased attentional control.

[1153] Slagter HA, Lutz A, Greischar LL, Francis AD, Nieuwenhuis S, Davis JM, Davidson RJ. Mental Training Affects Distribution of Limited Brain Resources. PLoS Biol [Internet]. 2007 ;5(6):e138 - e138. Available from: http://dx.doi.org/10.1371/journal.pbio.0050138

Full text available at http://www.plosbiology.org/article/info:doi/10.1371/journal.pbio.0050138 
http://www.physorg.com/news97825611.html
http://www.eurekalert.org/pub_releases/2007-05/uow-mmf050407.php

Meditation skills of Buddhist monks yield clues to brain's regulation of attention

Recent research has suggested that skilled meditation can alter certain aspects of the brain's neural activity. A new study has now found evidence that certain types of trained meditative practice can influence the conscious experience of visual perceptual rivalry, a phenomenon thought to involve brain mechanisms that regulate attention and conscious awareness. Perceptual rivalry arises normally when two different images are presented to each eye, and it is manifested as a fluctuation in the "dominant" image that is consciously perceived. The study involved 76 Tibetan Buddhist monks with training ranging from 5 to 54 years. They were tested during the practice of two types of meditation: a "compassion"-oriented meditation (contemplation of suffering within the world), and "one-point" meditation (involving the maintained focus of attention on a single object or thought). Major increases in the durations of perceptual dominance were experienced by monks practicing one-point meditation, but not during compassion-oriented meditation. Additionally, under normal conditions the monks showed longer stable perception (average 4.1 seconds compared to 2.6 seconds for meditation-naïve control subjects). The findings suggest that processes particularly associated with one-point meditation can considerably alter the normal fluctuations in conscious state that are induced by perceptual rivalry.

[350] Carter O, Presti D, Callistemon C, Ungerer Y, Liu G, Pettigrew J. Meditation alters perceptual rivalry in Tibetan Buddhist monks. Current Biology [Internet]. 2005 ;15(11):R412-R413 - R412-R413. Available from: http://www.cell.com/current-biology/fulltext/S0960-9822(05)00558-0

http://www.eurekalert.org/pub_releases/2005-06/cp-mso060205.php

tags memworks: 

tags strategies: 

tags problems: 
