Language cues

A Canadian study involving French-speaking university students has found that repeating aloud, especially to another person, improves memory for words.

In the first experiment, 20 students read a series of words while wearing headphones that emitted white noise, in order to mask their own voices and eliminate auditory feedback. Four actions were compared:

  • repeating silently in their head
  • repeating silently while moving their lips
  • repeating aloud while looking at the screen
  • repeating aloud while looking at someone.

They were tested on their memory of the words after a distraction task. The memory test only required them to recognize whether or not the words had occurred previously.

There was a significant effect on memory, with performance improving across the conditions in the order listed above: worst for repeating silently in the head, best for repeating aloud while looking at someone.

In the second experiment, 19 students went through the same process, except that the stimuli were pseudo-words. In this case, there was no memory difference between the conditions.

The effect is thought to be due to the benefits of sensorimotor feedback, but the memory benefit of directing your words at a person rather than a screen suggests that such feedback goes beyond the obvious. Visual attention appears to be an important memory enhancer (no great surprise when we put it that way!).

Most of us have long ago learned that explaining something to someone really helps our own understanding (or demonstrates that we don’t in fact understand it!). This finding supports another, related, experience that most of us have had: the simple act of telling someone something helps our memory.

http://www.eurekalert.org/pub_releases/2015-10/uom-rat100615.php

Online social networking, such as Facebook, is hugely popular. A series of experiments has explored the intriguing question of whether our memories are particularly ‘tuned’ to remember the sort of information shared on such sites.

The first experiment involved 32 college students (27 female), who were asked to study either 100 randomly chosen Facebook posts, or 100 sentences randomly chosen from books on Amazon. After the study period (in which each sentence was presented for 3 seconds), the students were given a self-paced recognition test in which the 100 studied sentences were mixed with another 100 sentences from the same source. Participants responded with a number expressing their confidence that they had (or had not) seen the sentence before (e.g., ‘1’ indicating complete confidence that they hadn’t seen it before, ‘20’ complete confidence that they had).

Recognition of Facebook posts was significantly better than recognition of sentences from books (an average of 85% correct vs 76%). The ‘Facebook advantage’ remained even when only posts with more normal surface-level characteristics were analyzed (i.e., all posts containing irregularities of spelling and typography were removed).

In the next experiment, involving 16 students (11 female), Facebook posts (a new set) were compared with neutral faces. Again, memory for Facebook posts was substantially better than that for faces. This is quite remarkable, since humans have a particular expertise for faces and tend to score highly on recognition tests for them.

One advantage the Facebook posts might have is in eliciting social thinking. The researchers attempted to test this by comparing the learning achieved when people were asked to count the words of each sentence or post, against the learning achieved when they were asked to think of someone they knew (real or fictional) who could have composed such a sentence / post. This experiment involved 64 students (41 female).

The deeper encoding encouraged by the latter strategy did improve memory for the texts, but it did so equally for both kinds of text. The fact that it helped Facebook posts as much as it did book sentences argues against the idea that the Facebook advantage rests on social elaboration (if the posts were already being spontaneously elaborated in social terms, explicitly encouraging such elaboration should have added little extra benefit for them).

Another advantage the Facebook posts might have over book sentences is that they were generally complete in themselves, making sense in a way that randomly chosen sentences from books would not. Other possibilities have to do with the gossipy nature of Facebook posts, and the informal language used. To test these theories, 180 students (138 female) were shown text from two CNN Twitter feeds: Breaking News and Entertainment News. Texts included headlines, sentences, and comments.

Texts from Entertainment News were remembered significantly better than those from Breaking News (supporting the gossip advantage). Headlines were remembered significantly better than random sentences (supporting the completeness argument), but comments were remembered best of all (supporting the informality theory) — although the benefit of comments over headlines was much greater for Breaking News than Entertainment News (perhaps reflecting the effort the Entertainment News people put into making catchy headlines?).

It seems then, that three factors contribute to the greater memorability of Facebook posts: the completeness of ideas; the gossipy content; the casually generated language.

You’ll have noticed I made a special point of noting the gender imbalance in the participant pools. Given gender differences in language and social interaction, it’s a shame that the participants were so heavily skewed, and I would like this replicated with males before generalizing. However, the evidence for the advantage of more informal language is, at least, less likely to be skewed by gender.

[3277] Mickes, L., Darby R. S., Hwe V., Bajic D., Warker J. A., Harris C. R., et al.
(Submitted).  Major memory for microblogs.
Memory & Cognition. 1 - 9.

The relative ease with which children acquire language has produced much debate and theory, mirroring the similar quantity of debate and theory over how we evolved language. One theory of language evolution is that it began with gesture. A recent study looking at how deaf children learn sign language might perhaps be taken as partial support for this theory, and may also have wider implications for how children acquire language and how we can best support them.

The study, involving 31 deaf toddlers, looked at 89 specific signs understood and produced by the children. It was found that both younger (11-20 months) and older (21-30 months) toddlers understood and produced more signs that were iconic than signs that were less iconic. This benefit seemed to be greater for the older toddlers, supporting the idea that a certain amount of experience and/or cognitive development is needed to make the link between action and meaning.

Surprisingly, the benefits of iconicity did not seem to depend on how familiar, phonologically complex, or imageable the words were.

In contrast to spoken language, a high proportion of signs are iconic, that is, related to the concept being expressed (such as bringing the hand to the mouth to indicate ‘eat’). Nevertheless, if iconicity is important in sign language, it is surely also important in spoken languages. This is supported by the role of gesture in speech.

The researchers suggest that iconic links between our perceptual-motor experience of the world and the form of a sign may provide an imitation-based mechanism that supports early sign acquisition, and that this might also apply to spoken language — with gestures, tone of voice, inflection, and facial expression helping make the link between words and their meanings less arbitrary.

This suggests that we can support children’s acquisition of language by providing and emphasizing such ‘scaffolding’.

Here’s an intriguing study for those interested in how language affects how we think. It’s also of interest to those who speak more than one language or are interested in learning another language, because it deals with the long-debated question as to whether bilinguals working in their non-native language automatically access the native-language representations in long-term memory, or whether they can ‘switch off’ their native language and use only the target language memory codes.

The study follows on from an earlier study by the same researchers that indicated, through the demonstration of hidden priming effects, that bilinguals subconsciously access their first language when reading in their second language. In this new study, 45 university students (15 native English speakers, 15 native Chinese speakers, and 15 Chinese-English bilinguals) were shown two blocks of 90 word pairs. The pairs could have positive valence (e.g., honesty-program), negative valence (failure-poet), or neutral valence (aim-carpenter), and could be semantically related (virus-bacteria; love-rose) or unrelated (weather-gender). The English or Chinese words were flashed on the screen one at a time, with a brief interval between the first and second word. The students had to indicate whether the second word was related in meaning to the first, and their brain activity was monitored.

The English and Chinese speakers acted as controls — it was the bilinguals, of course, who were the real interest. Some of the English word pairs shared a sound in the Chinese translation. If the Chinese words were automatically activated, therefore, the sound repetition would have a priming effect.

This is indeed what was found (confirming the earlier finding and supporting the idea that native language translations are automatically activated) — but here’s the interesting thing: the priming effect occurred only for positive and neutral words. It did not occur when the bilinguals saw negative words such as war, discomfort, inconvenience, and unfortunate.

The finding, which surprised the researchers, is nonetheless consistent with previous evidence that expressions of anger, swearing, and discussions of intimate feelings carry more power in a speaker's native language. Parents, too, tend to speak to their infants in their native tongue. Emotion, it seems, is more strongly linked to our first language.

It’s traditionally thought that second language processing is fundamentally determined by the age of acquisition and the level of proficiency. The differences in emotional resonance have been, naturally enough, attributed to the native language being acquired first. This finding suggests the story is a little more complicated.

The researchers theorize that they have touched on the mechanism by which emotion controls our fundamental thought processes. They suggest that the brain is trying to protect us by minimizing the effect of distressing or disturbing emotional content, by shutting down the unconscious access to the native language (in which the negative words would be more strongly felt).

A few more technical details for those interested:

The Chinese controls demonstrated longer reaction times than the English controls, which suggests (given that 60% of the Chinese word pairs had overt sound repetitions but no semantic relatedness) that this conjunction made the task substantially more difficult. The bilinguals, however, had reaction times comparable to the English controls. The Chinese controls showed no effect of emotional valence, but did show priming effects of the overt sound manipulation that were equal for all emotion conditions.

The native Chinese speakers had recently arrived in Britain to attend an English course. Bilinguals had been exposed to English since the age of 12 and had lived in Britain for an average of 20.5 months.

[2969] Wu, Y J., & Thierry G.
(2012).  How Reading in a Second Language Protects Your Heart.
The Journal of Neuroscience. 32(19), 6485 - 6489.

I’ve reported before on evidence that young children do better on motor tasks when they talk to themselves out loud, and learn better when they explain things to themselves or (even better) their mother. A new study extends those findings to children with autism.

In the study, 15 high-functioning adults with Autism Spectrum Disorder and 16 controls (age and IQ matched) completed the Tower of London task, used to measure planning ability. This task requires you to move five colored disks on three pegs from one arrangement to another in as few moves as possible. Participants did the task under normal conditions as well as under an 'articulatory suppression' condition whereby they had to repeat out loud a certain word ('Tuesday' or 'Thursday') throughout the task, preventing them from using inner speech.

Those with ASD did significantly worse than the controls in the normal condition (although the difference wasn’t large), but they did significantly better in the suppression condition — not because their own performance changed, but because the controls were markedly impaired when their inner speech was disrupted.

On an individual basis, nearly 90% of the control participants did significantly worse on the Tower of London task when inner speech was prevented, compared to only a third of those with ASD. Moreover, the size of the effect among those with ASD was correlated with measures of communication ability (but not with verbal IQ).

A previous experiment had confirmed that these neurotypical and autistic adults showed similar patterns of serial recall for labeled pictures. Half the pictures had phonologically similar labels (bat, cat, hat, mat, map, rat, tap, cap), and the other half had phonologically dissimilar labels (drum, shoe, fork, bell, leaf, bird, lock, fox). Both groups were significantly affected by phonological similarity, and both groups were significantly affected when inner speech was prevented.

In other words, this group of ASD adults were perfectly capable of inner speech, but they were much less inclined to use it when planning their actions.

It seems likely that, rather than using inner speech, they were relying on their visuospatial abilities, which tend to be higher in individuals with ASD. Supporting this, among those with ASD, visuospatial ability (measured by the block design subtest of the WAIS) was highly correlated with performance on the Tower of London test. This may not seem surprising, but the association was minimal in control participants.

Complex planning is said to be a problem for many with ASD. It’s also suggested that the relative lack of inner speech use might contribute to some of the repetitive behaviors common in people with autism.

It may be that strategies targeted at encouraging inner speech will help those with ASD develop such skills. Such strategies include encouraging children to describe their actions out loud, and providing “parallel talk”, whereby an observer plays alongside the child while verbalizing the child's actions.

It is also suggested that children with ASD could benefit from learning their daily school schedule verbally, rather than through visual timetables, which are currently a common approach. This could occur in stages, moving from pictures to symbols, then to symbols with words, before finally using words only.

ASD is estimated to occur in 1% of the population, but perhaps this problem could be considered more widely. Rather than seeing this as an issue limited to those with ASD, we should see this as a pointer to the usefulness of inner speech, and its correlation with communication skills. As one of the researchers said: “These results show that inner speech has its roots in interpersonal communication with others early in life, and it demonstrates that people who are poor at communicating with others will generally be poor at communicating with themselves.”

One final comment: a distinction has been made between “dialogic” and “monologic” inner speech, where dialogic speech refers to a kind of conversation between different perspectives on reality, and monologic speech is simply a commentary to oneself about the state of affairs. It may be that it is specifically dialogic inner speech that is so helpful for problem-solving. It has been suggested that ASD is marked by a reduction in this kind of inner speech only, and the present researchers suggest further that it is this form of speech that may have inherently social origins and require training or experience in communicating with others.

The corollary to this is that it is only in those situations where dialogic inner speech is useful in achieving a task, that such differences between individuals will matter.

Clearly there is a need for much more research in this area, but it certainly provides food for thought.

Why are other people’s phone conversations so annoying? A new study suggests that hearing only half a conversation is more distracting than other kinds of conversations because we're missing the other side of the story and so can't predict the flow of the conversation. This finding suggests that driving a car might be impaired not only by the driver talking on the phone, but also by passengers talking on their phones.

It also tells us something about the way we listen to people talking — we’re actively predicting what the person is going to say next. This helps explain something I’ve always wondered about. Listen to people talking in a language you don’t know and you’re often amazed at how fast they talk. Look at the waveform of a speech recording, and you’ll wonder how people know where one word ends and the next begins. Understanding what people are saying is not as easy as we believe it is — it takes a lot of experience. An important part of that experience, it seems, is learning the patterns of people’s speech, so we can predict what’s going to come next.

The study showed that people overhearing one side of a cell phone conversation did more poorly on everyday tasks demanding attention than when they overheard both sides of the conversation, which produced no drop in performance. By controlling for other acoustic factors, the researchers demonstrated that it was the unpredictable information content of the half-heard conversation that was so distracting.

Emberson, L. L., Lupyan, G., Goldstein, M. H., & Spivey, M. J. (2010). Overheard Cell-Phone Conversations: When Less Speech Is More Distracting. Psychological Science, published online September 3, 2010. doi:10.1177/0956797610382126

I’ve talked about the importance of labels for memory, so I was interested to see that a recent series of experiments has found that hearing the name of an object improved people’s ability to see it, even when the object was flashed onscreen under conditions and at speeds (50 milliseconds) that would normally render it invisible. The effect was specific to language; a visual preview didn’t help.

Moreover, those who consider their mental imagery particularly vivid scored higher when given the auditory cue (although this association disappeared when the position of the object was uncertain). The researchers suggest that hearing the image labeled evokes an image of the object, strengthening its visual representation and thus making it visible. They also suggested that because words in different languages pick out different things in the environment, learning different languages might shape perception in subtle ways.

While most foreign language courses try hard to provide native speakers, a new study shows that adult learners find it easier to understand a second language when it is spoken in the same accent as their own. The study, involving 60 participants aged 18-26 (20 native Hebrew speakers, 20 recent adult immigrants to Israel from the former Soviet Union, and 20 Israeli Arabic speakers who began learning Hebrew at age 7-8), found that while accent made no difference to native Hebrew speakers, both the Russian and Arabic speakers needed less phonological information to recognize Hebrew words when the words were pronounced in the accent of their native language.

[167] Leikin, M., Ibrahim R., Eviatar Z., & Sapir S.
(2009).  Listening with an Accent: Speech Perception in a Second Language by Late Bilinguals.
Journal of Psycholinguistic Research. 38(5), 447 - 457.

Because Nicaraguan Sign Language is only about 35 years old, and still evolving rapidly, the language used by the younger generation is more complex than that used by the older generation. This enables researchers to compare the effects of language ability on other abilities. A recent study found that younger signers (in their 20s) performed better than older signers (in their 30s) on two spatial cognition tasks that involved finding a hidden object. The findings provide more support for the theory that language shapes how we think and perceive.

[1629] Pyers, J. E., Shusterman A., Senghas A., Spelke E. S., & Emmorey K.
(2010).  Evidence from an emerging sign language reveals that language supports spatial cognition.
Proceedings of the National Academy of Sciences. 107(27), 12116 - 12120.

Older news items (pre-2010) brought over from the old website

What I was doing vs. what I did: How verb aspect influences memory and behavior

A new study reveals that the way a statement is phrased (specifically, the aspect of the verbs used) affects our memory of the event being described, and may also influence our behavior. The study involved volunteers doing a word game and then being asked to stop and describe what they had been doing, using either the imperfective aspect (e.g., I was solving word puzzles) or the perfective (e.g., I solved word puzzles). The volunteers then completed either a memory test (for the word game) or a second word game similar to the first. Those who had described their behavior in the imperfective were able to recall more specific details of their experience compared to volunteers who had described their behavior in the perfective; they also performed better on the second word game and were more willing to complete the task. It seems likely that use of the perfective encouraged people to see the task as completed, and thus made them less likely to spend more time on it, either mentally or physically. The effects did, however, decay over time.

[676] Hart, W., & Albarracín D.
(2009).  What I was doing versus what I did: verb aspect influences memory and future actions.
Psychological Science: A Journal of the American Psychological Society / APS. 20(2), 238 - 244.

http://www.eurekalert.org/pub_releases/2009-03/afps-wiw031009.php

How alliteration helps memory

Previous studies have shown that alliteration can act as a better memory tool than either imagery or meaning. Now a series of experiments explains why, and demonstrates that the effect occurs whether you read aloud or silently, and whether the text is poetry or prose. The memory-enhancing property of alliteration appears to occur because alliterative cues reactivate readers' memories for earlier, similar-sounding words. Alliteration, then, is most powerful when the same alliterative sounds are repeated throughout the text.

[1408] Lea, B. R., Rapp D. N., Elfenbein A., Mitchel A. D., & Romine R S.
(2008).  Sweet silent thought: alliteration and resonance in poetry comprehension.
Psychological Science: A Journal of the American Psychological Society / APS. 19(7), 709 - 716.

http://www.physorg.com/news136632182.html
http://www.eurekalert.org/pub_releases/2008-07/afps-tpo073008.php

Connection between language and movement

A study of all three groups of birds with vocal learning abilities – songbirds, parrots and hummingbirds – has revealed that the brain structures for singing and learning to sing are embedded in areas controlling movement, and areas in charge of movement share many functional similarities with the brain areas for singing. This suggests that the brain pathways used for vocal learning evolved out of the brain pathways used for motor control. Human brain structures for speech also lie adjacent to, and even within, areas that control movement. The findings may explain why humans talk with our hands and voice, and could open up new approaches to understanding speech disorders in humans. They are also consistent with the hypothesis that spoken language was preceded by gestural language, or communication based on movements. Support comes from another very recent study finding that mice engineered to have a mutation to the gene FOXP2 (known to cause problems with controlling the formation of words in humans) had trouble running on a treadmill.

Relatedly, a study of young children found that 5-year-olds do better on motor tasks when they talk to themselves out loud (either spontaneously or when told to do so by an adult) than when they are silent. The study also showed that children with behavioral problems (such as ADHD) tend to talk to themselves more often than children without signs of behavior problems. The findings suggest that teachers should be more tolerant of this kind of private speech.

[436] Feenders, G., Liedvogel M., Rivas M., Zapka M., Horita H., Hara E., et al.
(2008).  Molecular Mapping of Movement-Associated Areas in the Avian Brain: A Motor Theory for Vocal Learning Origin.
PLoS ONE. 3(3), e1768 - e1768.

[1235] Winsler, A., Manfra L., & Diaz R. M.
(2007).  "Should I let them talk?": Private speech and task performance among preschool children with and without behavior problems.
Early Childhood Research Quarterly. 22(2), 215 - 231.

http://www.physorg.com/news124526627.html
http://www.sciam.com/article.cfm?id=song-learning-birds-shed

http://www.eurekalert.org/pub_releases/2008-03/gmu-pkd032808.php

Kids learn more when mother is listening

Research has already shown that children learn well when they explain things to their mother or a peer, but that could be because they’re getting feedback and help. Now a new study has asked 4- and 5-year-olds either to explain their solution to a problem to their moms (with the mothers listening silently), to explain it to themselves, or simply to repeat the answer out loud. Explaining to themselves or to their moms improved the children's ability to solve similar problems, and explaining the answer to their moms helped them solve more difficult problems — presumably because explaining to mom made a difference in the quality of the child's explanations.

Rittle-Johnson, B., Saylor, M., & Swygert, K. E. (2008). Learning from explaining: Does it matter if mom is listening? Journal of Experimental Child Psychology, in press.

http://www.physorg.com/news120320713.html

Poetry as a memory and concentration aid

A research group at Dundee and St Andrews universities claims that poems exercise the mind more than a novel does. They found that poetry generated far more eye movement, and that people read poems more slowly, concentrating on and re-reading individual lines more than they did with prose. Imaging also showed greater levels of cerebral activity when people listened to poems being read aloud. Interestingly, they also found this was true even when the poem and the prose text had identical content; it appears people read poems in a different way than prose. The researchers suggest the findings have implications for the way English literature is taught in schools, and may be helpful for children with certain learning difficulties, or even for age-related memory problems.

Carminati, M. N., Stabler, J., Roberts, A. M., & Fischer, M. H. (2006). Readers' responses to sub-genre and rhyme scheme in poetry. Poetics, 34(3),  204-218.

http://news.scotsman.com/arts.cfm?id=352752005

Support for labeling as an aid to memory

A study involving an amnesia-inducing drug has shed light on how we form new memories. Participants in the study viewed words, photographs of faces and landscapes, and abstract pictures, one at a time on a computer screen. Twenty minutes later, they were shown the words and images again, one at a time; half of the images had been seen earlier, and half were new. They were then asked whether they recognized each one. For one session they were given midazolam, a drug used to relieve anxiety during surgical procedures that also causes short-term anterograde amnesia, and for one session they were given a placebo.

It was found that the participants' memory in the placebo condition was best for words and worst for abstract images. Midazolam impaired the recognition of words the most, impaired memory for the photos less, and impaired recognition of abstract pictures hardly at all. The finding reinforces the idea that the ability to recollect depends on the ability to link the stimulus to a context, and that unitization increases the chances of this linking occurring. While the words were very concrete and therefore easy to link to the experimental context, the photographs were of unknown people and unknown places and thus hard to label distinctively. The abstract images were also unfamiliar and not unitized into something that could be described with a single word.

[1216] Reder, L. M., Oates J. M., Thornton E. R., Quinlan J. J., Kaufer A., & Sauer J.
(2006).  Drug-Induced Amnesia Hurts Recognition, but Only for Memories That Can Be Unitized.
Psychological Science: A Journal of the American Psychological Society / APS. 17(7), 562 - 567.

http://www.sciencedaily.com/releases/2006/07/060719092800.htm

Language cues help visual learning in children

A study of 4-year-old children has found that language, in the form of specific kinds of sentences spoken aloud, helped them remember mirror-image visual patterns. The children were shown cards bearing red and green vertical, horizontal and diagonal patterns that were mirror images of one another. When asked to choose the card that matched the one previously seen, the children tended to mistake the original card for its mirror image, showing how difficult it was for them to remember both color and location. However, if, when viewing the original card, they were given a verbal cue such as ‘The red part is on the left’, they performed “reliably better”.

The paper was presented by a graduate student at the 17th annual meeting of the American Psychological Society, held May 26-29 in Los Angeles.

http://www.eurekalert.org/pub_releases/2005-05/jhu-lc051705.php
