Perception

Eye movements get re-enacted when we remember

  • An imaging and eye-tracking study has shown that the brain uses eye movements to help us recall remembered images.

A small study has tested the eminent Donald Hebb’s hypothesis that visual imagery results from the reactivation of neural activity associated with viewing images, and that the re-enactment of eye-movement patterns helps both imagery and neural reactivation.

In the study, 16 young adults (aged 20-28) were shown a set of 14 distinct images for a few seconds each. They were asked to remember as many details of each image as possible so they could visualize it later on. They were then cued to mentally visualize the images within an empty rectangular box shown on the screen.

Brain imaging and eye-tracking technology revealed that the same pattern of eye movements and brain activation occurred when the image was learned and when it was recalled. During recall, however, the patterns were compressed (which is consistent with our experience of remembering, where memories take a much shorter time than the original experiences).
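
To give a concrete sense of how "the same pattern of eye movements" might be quantified, here is a minimal sketch (not the study's actual analysis) that bins fixation coordinates into a coarse spatial grid and correlates the resulting maps for viewing and recall. The fixation data and screen size are invented for illustration.

```python
import numpy as np

def fixation_map(fixations, grid=(8, 8), extent=(800, 600)):
    """Bin (x, y) fixation coordinates into a coarse spatial grid."""
    xs, ys = zip(*fixations)
    counts, _, _ = np.histogram2d(
        xs, ys, bins=grid, range=[[0, extent[0]], [0, extent[1]]]
    )
    return counts.ravel()

# Hypothetical fixation coordinates (in pixels) for one image.
viewing = [(120, 80), (400, 300), (410, 310), (650, 500), (630, 480)]
recall = [(130, 90), (395, 305), (640, 490)]   # fewer fixations: recall is 'compressed'

# Correlate the two spatial density maps; a high value means the recall
# scanpath revisits the regions that were fixated during viewing.
similarity = np.corrcoef(fixation_map(viewing), fixation_map(recall))[0, 1]
print(f"gaze-pattern similarity: {similarity:.2f}")
```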

Our understanding of memory is that it’s constructive — when we remember, we reconstruct the memory from separate bits of information in our database. This finding suggests that eye movements might be like a blueprint to help the brain piece together the bits in the right way.

https://www.eurekalert.org/pub_releases/2018-02/bcfg-cga021318.php

Reviving a failing sense of smell through training

January, 2012

A rat study reveals how training can improve or impair smell perception.

The olfactory bulb is in the oldest part of our brain. It connects directly to the amygdala (our ‘emotion center’) and our prefrontal cortex, giving smells a more direct pathway to memory than our other senses. But the olfactory bulb is only part of the system processing smells. It projects to several other regions, all of which are together called the primary olfactory cortex, and of which the most prominent member is the piriform cortex. More recently, however, it has been suggested that it would be more useful to regard the olfactory bulb as the primary olfactory cortex (primary in the sense that it is first), while the piriform cortex should be regarded as association cortex — meaning that it integrates sensory information with ‘higher-order’ (cognitive, contextual, and behavioral) information.

Testing this hypothesis, a new rat study has found that, when rats were given training to distinguish various odors, each smell produced a different pattern of electrical activity in the olfactory bulb. However, only those smells that the rat could distinguish from others were reflected in distinct patterns of brain activity in the anterior piriform cortex, while smells that the rat couldn’t differentiate produced identical brain activity patterns there. Interestingly, the smells that the rats could easily distinguish were ones in which one of the ten components in the target odor had been replaced with a new component. The smells they found difficult to distinguish were those in which a component had simply been deleted.
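
To make the notion of "distinct" versus "identical" activity patterns concrete, pattern distinctness is typically quantified with some similarity measure over the recorded responses. Here is a minimal sketch of that idea, using Pearson correlation on made-up activity vectors; this illustrates the general approach, not the study's actual analysis.

```python
import numpy as np

def pattern_similarity(a, b):
    """Pearson correlation between two activity vectors (one value per
    recording site); values near 1 mean the region treats the odors alike."""
    return np.corrcoef(a, b)[0, 1]

rng = np.random.default_rng(0)
odor_a = rng.normal(size=50)                                # responses at 50 hypothetical sites
odor_b_similar = odor_a + rng.normal(scale=0.1, size=50)    # near-identical pattern
odor_b_distinct = rng.normal(size=50)                       # unrelated pattern

print(pattern_similarity(odor_a, odor_b_similar))    # high: the region can't tell them apart
print(pattern_similarity(odor_a, odor_b_distinct))   # near zero: distinct representations
```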

When a new group of rats was given additional training (8 days vs the 2 days given the original group), they eventually learned to discriminate between the odors the first animals couldn't distinguish, and this was reflected in distinct patterns of brain activity in the anterior piriform cortex. When a third group was taught to ignore the difference between odors the first rats could readily distinguish, they became unable to tell the odors apart, and similar patterns of brain activity were produced in the piriform cortex.

The effects of training were also quite stable — they were still evident after two weeks.

These findings support the idea of the piriform cortex as association cortex: it was here that experience modified neuronal activity. In the olfactory bulb, where all the various odors were reflected in different patterns of activity right from the beginning (meaning that this part of the brain could discriminate between odors that the rat itself couldn't distinguish), training made no difference to the patterns of activity.

Having said that, it should be noted that this is not entirely consistent with previous research. Several studies have found that odor training produces changes in the representations in the olfactory bulb. The difference may lie in the method of neural recording.

How far does this generalize to the human brain? Human studies have suggested that odors are represented in the posterior piriform cortex rather than the anterior piriform cortex. They have also suggested that the anterior piriform cortex is involved in expectations relating to the smells, rather than representing the smells themselves. Whether these differences reflect species differences, task differences, or methodological differences, remains to be seen.

But whether or not exactly the same regions are involved, there are practical implications we can consider. The findings do suggest that one road to olfactory impairment is through neglect: if you learn to ignore differences between smells, you will become increasingly less able to tell them apart. An impaired sense of smell has been found in Alzheimer's disease, Parkinson's disease, schizophrenia, and even normal aging. While some of that may well reflect impairment earlier in the perception process, some of it may reflect the consequences of neglect. The burning question, then, is: would it be possible to restore smell function through odor training?

I’d really like to see this study replicated with old rats.

The durability and specificity of perceptual learning

September, 2011

Increasing evidence shows that perception is nowhere near the simple bottom-up process we once thought. Two recent perception studies add to the evidence.

Previous research has found that practice improves your ability to distinguish visual images that vary along one dimension, and that this learning is specific to the visual images you train on and quite durable. A new study extends the finding to more natural stimuli that vary on multiple dimensions.

In the small study, 9 participants learned to identify faces and 6 participants learned to identify “textures” (noise patterns) over the course of two hour-long sessions of 840 trials, held on consecutive days. Faces were cropped to show only internal features and shown only briefly, so this was not a particularly easy task. Participants were then tested over a year later (range: 10-18 months; an average of 13 months for the face group and 15 months for the texture group).

On the test, participants were shown both images from training and new images that closely resembled them. While accuracy rates were high for the original images, they plummeted for the very similar new images, indicating that, despite the length of time since they had seen the original images, participants still retained quite specific memories of them.

Although practice improved performance across nearly all items and for all people, there were significant differences between both participants and individual stimuli. More interestingly, individual differences (in both stimuli and people) were stable across sessions (e.g., if you were third-best on day 1, you were probably third-best on day 2 too, even though you were doing better). In other words, learning didn’t produce any qualitative changes in the representations of different items — practice had nearly the same effect on all; differences were rooted in initial difficulty of discriminating the pattern.
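
Saying that individual differences were "stable across sessions" is essentially a claim about rank order, which is what a rank correlation captures. A minimal sketch with made-up accuracy scores (not the study's data):

```python
from scipy.stats import spearmanr

# Hypothetical per-participant accuracy (proportion correct) on each training day.
day1 = [0.55, 0.62, 0.70, 0.48, 0.66, 0.59]
day2 = [0.63, 0.71, 0.80, 0.57, 0.75, 0.68]   # everyone improves by a similar amount

rho, p = spearmanr(day1, day2)
print(f"rank-order stability: rho = {rho:.2f} (p = {p:.3f})")
# rho near 1: the ordering of participants is preserved even though all improved.
```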

However, while it’s true that individual differences were stable, that doesn’t mean that every person improved their performance the exact same amount with the same amount of practice. Interestingly (and this is just from my eye-ball examination of the graphs), it looks like there was more individual variation among the group looking at noise patterns. This isn’t surprising. We all have a lot of experience discriminating faces; we’re all experts. This isn’t the case with the textures. For these, people had to ‘catch on’ to the features that were useful in discriminating patterns. You would expect more variability between people in how long it takes to work out a strategy, and how good that strategy is. Interestingly, three of the six people in the texture group actually performed better on the test than they had done on the second day of training, over a year ago. For the other three, and all nine of those in the face group, test performance was worse than it had been on the second day of training (but decidedly better than the first day).

The durability and specificity of this perceptual learning, the researchers point out, resembles that found in implicit memory and some types of sensory adaptation. It also indicates that such perceptual learning is not limited, as has been thought, to changes early in the visual pathway, but produces changes in a wider network of cortical neurons, particularly in the inferior temporal cortex.

The second, unrelated, study also bears on this issue of specificity.

We look at a scene and extract the general features — a crowd of people, violently riotous or riotously happy? — or we look at a scene and extract specific features that over time we use to build patterns about what goes with what. The first is called “statistical summary perception”; the second “statistical learning”.

A study designed to disentangle these two processes found that you can only do one or the other; you can’t derive both types of information at the same time. Thus, when people were shown grids of lines slanted to varying degrees, they could either assess whether the lines were generally leaning to the left or right, or they could learn to recognize pairs of lines that had been hidden repeatedly in the grids — but they couldn’t do both.
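
The distinction can be made concrete: a summary statistic is computed over a whole display at once, whereas statistical learning tracks which specific items co-occur across displays. A toy sketch (invented orientations in degrees, with positive values leaning right):

```python
import numpy as np
from collections import Counter

# A few hypothetical grids of line orientations (degrees; positive = leaning right).
# The pair (20, 15) is deliberately embedded in every grid.
grids = [
    [10, -5, 20, 15, -10, 25, 30, -20],
    [-15, 5, 20, 15, -60, -5, 25, -10],
    [10, 25, 20, 15, -5, -10, 30, 5],
]

# Statistical summary perception: one coarse judgement per display.
for g in grids:
    print("overall lean:", "right" if np.mean(g) > 0 else "left")

# Statistical learning: accumulate counts of adjacent pairs across displays,
# so a repeatedly embedded pair eventually stands out.
pair_counts = Counter()
for g in grids:
    pair_counts.update(zip(g, g[1:]))
print(pair_counts.most_common(1))   # [((20, 15), 3)]
```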

The fact that each of these tasks interfered with the other suggests that the two processes are fundamentally related.

Negative gossip sharpens attention

July, 2011

Faces of people about whom something negative was known were perceived more quickly than faces of people about whom nothing, or something positive or neutral, was known.

Here’s a perception study with an intriguing twist. In my recent round-up of perception news I spoke of how images with people in them were more memorable, and of how some images ‘jump out’ at you. This study showed different images to each participant’s left and right eye at the same time, creating a contest between them. The amount of time it takes the participant to report seeing each image indicates the relative priority granted by the brain.

So, 66 college students were shown faces of people, and told something ‘gossipy’ about each one. The gossip could be negative, positive or neutral — for example, the person “threw a chair at a classmate”; “helped an elderly woman with her groceries”; “passed a man on the street.” These faces were then shown to one eye while the other eye saw a picture of a house.

The students had to press one button when they could see a face and another when they saw a house. As a control, some faces were used that the students had never seen. The students took the same length of time to register seeing the unknown faces and those about which they had been told neutral or positive information, but pictures of people about whom they had heard negative information registered around half a second quicker, and were looked at for longer.
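
In this kind of design, "priority" is measured simply as how long each face takes to break through into awareness, averaged by condition. A minimal sketch of that comparison with invented times (not the study's data):

```python
import numpy as np

# Hypothetical times (in seconds) for a face to break through into awareness.
breakthrough = {
    "negative gossip": [1.9, 2.1, 1.8, 2.0, 2.2],
    "positive gossip": [2.5, 2.6, 2.4, 2.7, 2.5],
    "neutral gossip":  [2.6, 2.5, 2.7, 2.4, 2.6],
    "novel face":      [2.5, 2.7, 2.6, 2.5, 2.4],
}

for condition, times in breakthrough.items():
    print(f"{condition:>15}: mean breakthrough time = {np.mean(times):.2f} s")
# Faces paired with negative gossip break through roughly half a second sooner.
```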

A second experiment confirmed the findings and showed that subjects saw the faces linked to negative gossip for longer periods than faces of people about whom they had heard accounts of upsetting personal experiences.

Reference: 

Anderson E, Siegel EH, Bliss-Moreau E, Barrett LF. The Visual Impact of Gossip. Science [Internet]. 2011;332(6036):1446-1448. Available from: http://www.sciencemag.org/content/332/6036/1446.abstract

Visual perception - a round-up of recent news

July, 2011

Memory begins with perception. Here's a round-up of recent research into visual perception.

Memory begins with perception. We can’t remember what we don’t perceive, and our memory of things is influenced by how we perceive them.

Our ability to process visual scenes has been the subject of considerable research. How do we process so many objects? Some animals do it by severely limiting what they perceive, but humans can perceive a vast array of features. We need some other way of filtering the information. Moreover, it’s greatly to our advantage that we can process the environment extremely quickly. So that’s two questions: how do we process so much, and so fast?

Brain region behind the scene-facilitation effect identified

A critical factor, research suggests, is our preferential processing of interacting objects — we pick out interacting objects more quickly than unrelated objects. A new study has now identified the region of the brain responsible for this ‘scene-facilitation effect’. To distinguish between the two leading contenders, the lateral occipital cortex and the intraparietal sulcus, transcranial magnetic stimulation was used to temporarily shut down each region in turn, while volunteers viewed brief flashes of object pairs (half of which were shown as interacting with each other) and decided whether these glimpsed objects matched a presented label.

The scene-facilitation effect was eliminated when the lateral occipital cortex was out of action, while shutting down the intraparietal sulcus made no difference.

The little we need to identify a scene

The scene-facilitation effect is an example of how we filter and condense the information in our visual field, but we also work in the opposite direction — we extrapolate.

When ten volunteers had their brains scanned while they viewed color photographs and line drawings of six categories of scenes (beaches, city streets, forests, highways, mountains and offices), brain activity was nearly identical regardless of whether participants were looking at a color photo or a simple line drawing. That is, just from the pattern of brain activity in the ventral visual cortex, researchers could tell, with a fair amount of success, what category of scene the participant was viewing, whichever format it was shown in. Moreover, when the decoding went wrong, it made similar mistakes for the photos and the drawings.

In other words, most of what the brain is responding to in the photo is also evident in the line drawing.
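
Telling which category of scene someone is viewing from their pattern of brain activity is a pattern-classification problem. The sketch below shows the general shape of such an analysis, using a simple classifier on simulated "voxel" patterns; it is an illustration of the technique, not the authors' pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
categories = ["beach", "city", "forest", "highway", "mountain", "office"]

# Simulated data: each category has its own underlying activity pattern,
# and each trial is that pattern plus noise (a stand-in for real fMRI data).
n_voxels, trials_per_category = 200, 20
prototypes = rng.normal(size=(len(categories), n_voxels))
X = np.vstack([proto + rng.normal(scale=2.0, size=(trials_per_category, n_voxels))
               for proto in prototypes])
y = np.repeat(categories, trials_per_category)

# Cross-validated decoding accuracy; chance level is 1/6, about 17%.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.0%} (chance is about 17%)")
```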

In order to determine what those features were, the researchers progressively removed some of the lines in the line drawings. Even when up to 75% of the pixels in a line drawing were removed, participants could still identify what the scene was 60% of the time — as long as the important lines were left in, that is, those showing the broad contours of the scene. If only the short lines, representing details like leaves or windows, were left, participants became dramatically less accurate.

The findings cast doubt on some models of human visual perception which argue that people need specific information that is found in photographs to classify a scene.

Consistent with previous research, activity in the parahippocampal place area and the retrosplenial cortex was of greatest importance.

The brain performs visual search near optimally

Visual search involves picking out a target in a sea of other objects, and it’s one of the most important visual tasks we do. It’s also (not surprisingly, considering its evolutionary importance) something we are very very good at. In fact, a new study reveals that we’re pretty near optimal.

Of course we make mistakes, and have failures. But these happen not because of our incompetence, but because of the complexity of the task.

In the study, participants were shown sets of lines that might or might not contain a line oriented in a particular way. Each screen was shown for only a fraction of a second, and the contrast of each line was randomly varied, making the target easier or more difficult to detect. The variation in contrast was designed as a model for an important variable in visual search — the reliability of the sensory information. Optimally, an observer would take the varying reliability of the items into account, weighting each piece of information according to its perceived reliability, and would then combine the weighted information according to a specific integration rule. This is what the researchers calculated to be the optimal process, and the participants’ performance matched it.

A computer model that matched the human performance used groups of simulated neurons that responded differently to different line orientations.

In other words, it appears that we are able, very quickly, to integrate information coming from multiple locations, while taking into account the reliability of the different pieces of information, and we do this through the integration of information coming from different groups of neurons, each group of which is responding to different bits of information.
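
The "specific integration rule" described here is, at its core, a form of reliability weighting: evidence from high-contrast (reliable) items counts for more than evidence from low-contrast (noisy) items. Below is a minimal sketch of that general idea under simple Gaussian assumptions; it illustrates reliability-weighted combination, not the detection model used in the paper.

```python
import numpy as np

def combine(measurements, reliabilities):
    """Combine noisy measurements, weighting each by its reliability
    (inverse variance), as an ideal observer would under Gaussian noise."""
    w = np.asarray(reliabilities, dtype=float)
    m = np.asarray(measurements, dtype=float)
    return np.sum(w * m) / np.sum(w)

# Hypothetical measured orientations (degrees from vertical) at four locations,
# with reliabilities determined by each item's contrast.
measurements = [4.0, -1.0, 12.0, 2.0]
reliabilities = [0.2, 0.1, 2.0, 0.3]    # the 12-degree item was high contrast

print(f"reliability-weighted estimate: {combine(measurements, reliabilities):.1f} degrees")
print(f"unweighted average:            {np.mean(measurements):.1f} degrees")
# The high-contrast item dominates the weighted estimate; an unweighted
# average would dilute the most trustworthy piece of information.
```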

Another recent study into visual search has found that, when people were preparing themselves to look for very familiar object categories (people or cars) in natural scenes, activity in their visual cortex was very similar to that shown when they were actually looking at the objects in the scenes. Moreover, the precise activity in the object-selective cortex (OSC) predicted performance in detecting the target, while preparatory activity in the early visual cortex (V1) was actually negatively related to search performance. It seems that these two regions of the visual cortex are linked to different search strategies, with the OSC involved in relatively abstract search preparation and V1 in more specific imagery-like preparation. Activity in the medial prefrontal cortex also reflected later target detection performance, suggesting that this region may be the source of top-down processing.

The findings demonstrate the role of preparatory and top-down processes in guiding visual search (and remind us that these processes can bias us against seeing what we’re looking for, just as easily as they help us).

'Rewarding' objects can't be ignored

Another aspect of visual search is that some objects just leap out at us and capture our attention. Loud noises and fast movement are the most obvious of the attributes that snag our gaze. These are potential threats, and so it’s no wonder we’ve evolved to pay attention to such things. We’re also drawn to potential rewards. Prospective mates; food; liquids.

What about rewards that are only temporarily rewarding? Do we move on easily, able to ignore previously rewarding items as soon as they lose their relevance?

In a recent study, people spent an hour searching for red or green circles in an array of many differently colored circles. The red and green circles were always followed by a monetary reward (10 cents for one color, and 1 cent for the other). Afterwards, participants were asked to search for particular shapes, and color was no longer relevant or rewarded. However, when, occasionally, one of the shapes was red or green, reaction times slowed, demonstrating that these were distracting (even though the participants had been told to ignore this if it happened).
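
The signature of this kind of capture is simply a reaction-time cost on trials where a formerly rewarded colour appears as an irrelevant distractor. A minimal sketch of that comparison, with made-up reaction times:

```python
import numpy as np
from scipy.stats import ttest_ind

# Hypothetical reaction times (ms) in the test phase, where colour is irrelevant.
rt_no_distractor = [652, 640, 668, 655, 647, 660, 649, 658]
rt_reward_colour = [701, 688, 715, 695, 690, 708, 699, 705]   # a red/green item present

cost = np.mean(rt_reward_colour) - np.mean(rt_no_distractor)
t, p = ttest_ind(rt_reward_colour, rt_no_distractor)
print(f"capture cost: {cost:.0f} ms (t = {t:.2f}, p = {p:.4f})")
# A reliable slowing on reward-colour trials is the marker of value-driven capture.
```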

This distraction persisted for weeks after the original learning session. Interestingly, people who scored highly on a questionnaire measuring impulsivity were more likely to be distracted by these no-longer-relevant items.

The findings indicate that stimuli that have been previously associated with reward continue to capture attention regardless of their relevance to the task in hand. There are implications here that may help in the development of more effective treatments for drug addiction, obesity and ADHD.

People make an image memorable

What makes an image memorable? It’s always been assumed that visual memory is too subjective to allow a general answer to this question. But an internet study has found remarkable consistency among hundreds of people who viewed images from a collection of about 10,000 images, some of which were repeated, and decided whether or not they had seen the image before. The responses generated a memorability rating for each image. Once this had been collated, the researchers made "memorability maps" of each image by asking people to label all the objects in the images. These maps were then used to determine which objects make an image memorable.

In general, images with people in them were the most memorable, followed by images of human-scale space — such as the produce aisle of a grocery store — and close-ups of objects. Least memorable were natural landscapes, although those could be memorable if they featured an unexpected element, such as shrubbery trimmed into an unusual shape.

Computer modeling then allowed various features for each image (such as color, or the distribution of edges) to be correlated with the image's memorability. The end result was an algorithm that can predict memorability of images the computational model hasn't "seen" before.
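
The step described here (correlating image features with memorability scores and building a predictor) is, in essence, a regression from a feature vector to a memorability rating. Below is a minimal sketch of that general approach with invented features; the original work used far richer image descriptors.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# Invented per-image features, e.g. [contains person, human-scale space,
# close-up object, natural landscape, edge density, colour saturation].
n_images, n_features = 500, 6
X = rng.normal(size=(n_images, n_features))
assumed_weights = np.array([0.6, 0.3, 0.2, -0.4, 0.1, 0.05])   # made-up importances
memorability = X @ assumed_weights + rng.normal(scale=0.3, size=n_images)

X_train, X_test, y_train, y_test = train_test_split(X, memorability, random_state=0)
model = Ridge().fit(X_train, y_train)
print(f"held-out R^2: {model.score(X_test, y_test):.2f}")
print("feature weights learned:", np.round(model.coef_, 2))
```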

The researchers are now doing a follow-up study to test longer-term memorability, as well as working on adding more detailed descriptions of image content.

Reference: 

Kim JG, Biederman I, Juan C-H. The Benefit of Object Interactions Arises in the Lateral Occipital Cortex Independent of Attentional Modulation from the Intraparietal Sulcus: A Transcranial Magnetic Stimulation Study. The Journal of Neuroscience [Internet]. 2011;31(22):8320-8324. Available from: http://www.jneurosci.org/content/31/22/8320.abstract

Walther DB, Chai B, Caddigan E, Beck DM, Fei-Fei L. Simple line drawings suffice for functional MRI decoding of natural scene categories. Proceedings of the National Academy of Sciences [Internet]. 2011;108(23):9661-9666. Available from: http://www.pnas.org/content/108/23/9661.abstract

Ma WJ, Navalpakkam V, Beck JM, van den Berg R, Pouget A. Behavior and neural basis of near-optimal visual search. Nat Neurosci [Internet]. 2011;14(6):783-790. Available from: http://dx.doi.org/10.1038/nn.2814

Peelen MV, Kastner S. A neural basis for real-world visual search in human occipitotemporal cortex. Proceedings of the National Academy of Sciences [Internet]. 2011;108(29):12125-12130. Available from: http://www.pnas.org/content/108/29/12125.abstract

Anderson BA, Laurent PA, Yantis S. Value-driven attentional capture. Proceedings of the National Academy of Sciences [Internet]. 2011;108(25):10367-10371. Available from: http://www.pnas.org/content/108/25/10367.abstract

Isola P, Xiao J, Oliva A, Torralba A. What makes an image memorable? Paper presented at the IEEE Conference on Computer Vision and Pattern Recognition, June 20-25, 2011, Colorado Springs.

Simple training helps infants maintain ability to distinguish other-race faces

July, 2011

New research confirms the role of experience in the other race effect, and shows how easily the problem in discriminating faces belonging to other races might be prevented.

Our common difficulty in recognizing faces that belong to races other than our own (or more specifically, those we have less experience of) is known as the Other Race Effect. Previous research has revealed that six-month-old babies show no signs of this bias, but that by nine months, their ability to recognize faces has narrowed to the races they see around them.

Now, an intriguing study has looked into whether infants can be trained in such a way that they can maintain the ability to process other-race faces. The study involved 32 six-month-old Caucasian infants, who were shown picture books that contained either Chinese (training group) or Caucasian (control group) faces. There were eight different books, each containing either six female faces or six male faces (with names). Parents were asked to present the pictures in the book to their child for 2–3 minutes every day for 1 week, then every other day for the next week, and then less frequently (approximately once every 6 days) following a fixed schedule of exposures during the 3-month period (equating to approximately 70 minutes of exposure overall).

When tested at nine months, there were significant differences between the two groups: the group trained on the Chinese faces had maintained their ability to discriminate Chinese faces, while those trained on the Caucasian faces had lost it (specifically, they showed no preference for either novel or familiar faces, treating them both the same).

It’s worth noting that the babies generalized from the training pictures, all of which showed the faces in the same “passport photo” type pose, to a different orientation (three-quarter pose) during test trials. This finding indicates that infants were actually learning the face, not simply an image.

How the deaf have better vision; the blind better hearing

November, 2010

Two recent studies point to how those lacking one sense might acquire enhanced other senses, and what limits this ability.

An experiment with congenitally deaf cats has revealed how deaf or blind people might acquire other enhanced senses. The deaf cats showed only two specific enhanced visual abilities: visual localization in the peripheral field and visual motion detection. This was associated with the parts of the auditory cortex that would normally pick up peripheral and moving sound (the posterior auditory cortex for localization; the dorsal auditory cortex for motion detection) being reassigned to processing the same kinds of information for vision.

This suggests that only those abilities that have a counterpart in the unused part of the brain (auditory cortex for the deaf; visual cortex for the blind) can be enhanced. The findings also point to the plasticity of the brain. (As a side-note, did you know that apparently cats are the only animal besides humans that can be born deaf?)

The findings (and their broader implications) receive support from an imaging study involving 12 blind and 12 sighted people, who carried out an auditory localization task and a tactile localization task (reporting which finger was being gently stimulated). While the visual cortex was mostly inactive when the sighted people performed these tasks, parts of the visual cortex were strongly activated in the blind. Moreover, the accuracy of the blind participants correlated directly with the strength of the activation in the spatial-processing region of the visual cortex (the right middle occipital gyrus). This region was also activated in the sighted during spatial visual tasks.

Learning how to hear shapes

November, 2010

Researchers trained blindfolded people to recognize shapes through coded sounds, demonstrating the abstract nature of perception.

We can see shapes and we can feel them, but we can’t hear a shape. However, in a dramatic demonstration of just how flexible our brain is, researchers have devised a way of coding spatial relations in terms of sound properties such as frequency, and trained blindfolded people to recognize shapes by their sounds. They could then match what they heard to shapes they felt. Furthermore, they were able to generalize from their training to novel shapes.
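
As an illustration of what "coding spatial relations in terms of sound properties such as frequency" can look like, here is a toy sketch that sweeps across a small binary image column by column and maps each filled row to a tone frequency, so a shape's outline becomes a rising and falling pattern of pitches. This is a generic sonification scheme for illustration; the study's own coding may differ.

```python
import numpy as np

# A tiny binary image of a shape (1 = filled), rows ordered top to bottom.
shape = np.array([
    [0, 0, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [1, 1, 1, 1, 1],
])

low_hz, high_hz = 220.0, 880.0       # arbitrary frequency range
n_rows = shape.shape[0]

# Sweep left to right; each filled cell becomes a tone whose pitch encodes height
# (higher rows map to higher frequencies).
for col in range(shape.shape[1]):
    freqs = [low_hz + (high_hz - low_hz) * (n_rows - 1 - row) / (n_rows - 1)
             for row in range(n_rows) if shape[row, col]]
    print(f"time step {col}: frequencies (Hz) = {freqs}")
```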

The findings not only offer new possibilities for helping blind people, but also emphasize that sensory representations simply require systematic coding of some kind. This provides more evidence for the hypothesis that our perception of a coherent object ultimately occurs at an abstract level beyond the sensory input modes in which it is presented.

Reference: 

Kim J-K, Zatorre RJ. Can you hear shapes you touch? Experimental Brain Research [Internet]. 2010;202(4):747-754. Available from: http://www.springerlink.com/content/41gq1u30671q3737/

Changing sounds are key to understanding speech

July, 2010

New research reveals that understanding speech relies on sound changes, making "low" vowels most important and "stop" consonants least important.

As I get older, the question of how we perceive speech becomes more interesting (people don’t talk as clearly as they used to!). So I was intrigued by this latest research, which reveals that it is not so much a question of whether consonants or vowels are more important (although consonants do appear to be less important than vowels — the opposite of what is true for written language), but a matter of transitions. What matters are the very brief changes across amplitude and frequency that make sound-handling neurons fire more often and more readily — after all, as we know from other perception research, we’re designed to respond to change. Most likely to rate as high-change sounds are "low" vowels, sounds like the "ah" in "father" or "top" that draw the jaw and tongue downward. Least likely to cause much change are "stop" consonants like the "t" and "d" in "today." This physical measure of change corresponds closely with the linguistic construct of sonority (or vowel-likeness).
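
The underlying measure is essentially "how much does the sound's spectrum change from one brief slice to the next". Here is a rough sketch of that idea, comparing a steady tone with a gliding one; this is a simplified stand-in for the paper's cochlea-scaled entropy measure, not the actual metric.

```python
import numpy as np

def spectral_change(signal, sr=16000, slice_ms=16):
    """Sum the differences between magnitude spectra of successive short
    slices: larger values mean the sound is changing faster."""
    n = int(sr * slice_ms / 1000)
    slices = [signal[i:i + n] for i in range(0, len(signal) - n, n)]
    spectra = [np.abs(np.fft.rfft(s)) for s in slices]
    return sum(np.sum(np.abs(a - b)) for a, b in zip(spectra, spectra[1:]))

sr = 16000
t = np.linspace(0, 0.2, int(sr * 0.2), endpoint=False)
steady = np.sin(2 * np.pi * 220 * t)                    # an unchanging tone
gliding = np.sin(2 * np.pi * (220 + 1500 * t) * t)      # frequency sweeps upward

print(f"steady tone:  change score = {spectral_change(steady, sr):.0f}")
print(f"gliding tone: change score = {spectral_change(gliding, sr):.0f}")
# The gliding sound, with its moving frequency, scores much higher, just as
# transitions in and out of low vowels carry more change than stop consonants.
```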

Reference: 

Stilp CE, Kluender KR. Cochlea-scaled entropy, not consonants, vowels, or time, best predicts speech intelligibility. Proceedings of the National Academy of Sciences [Internet]. 2010;107(27):12387-12392. Available from: http://www.pnas.org/content/107/27/12387.abstract

Memory better if timing is right

March, 2010

A new study suggests that our memory for visual scenes may not depend on how much attention we’ve paid to them or what a scene contains, but on when the scene is presented.

A new study suggests that our memory for visual scenes may not depend on how much attention we’ve paid to them or what a scene contains, but on when the scene is presented. In the study, participants performed an attention-demanding letter-identification task while also viewing a rapid sequence of full-field photographs of urban and natural scenes. They were then tested on their memory of the scenes. Even though their attention was focused on the letter task, only those scenes which were presented at the same time as a target letter (rather than a distractor letter) were reliably remembered. The results point to a brain mechanism that automatically encodes certain visual features into memory at behaviorally relevant points in time, regardless of the spatial focus of attention.

Reference: 

Lin JY, Pype AD, Murray SO, Boynton GM. Enhanced Memory for Scenes Presented at Behaviorally Relevant Points in Time. PLoS Biol [Internet]. 2010;8(3):e1000337. Available from: http://dx.doi.org/10.1371/journal.pbio.1000337

Full text available at doi:10.1371/journal.pbio.1000337
