Self-regulation

A natural experiment involving 5,740 participants in a MOOC (massive open online course) has found that when students were asked to assess each other's work and the examples they saw were exceptional, a large proportion of students dropped the course.

In the MOOC, as is not uncommon practice, course participants were asked to write an essay and then to grade a random sample of their peers' essays. Those randomly assigned to evaluate exemplary peer essays were dramatically more likely to quit the course than those assigned to read more typical essays.

Specifically, around 68% of students who graded essays of average quality finished and passed the course, earning a certificate. Among those who graded slightly above average essays (more than one standard deviation above the class mean, 7.5/9), 64% earned a certificate. But among those who graded the best essays (those more than 1.6 SDs above the mean), only 45% earned a certificate.

These numbers can be compared to the fact that 75% of students who wrote an average essay earned a certificate, while 95% of those who wrote a 'perfect' essay (9/9) did. The gap between these two figures is about the same as (in fact, slightly smaller than) the effect of grading average vs top essays.
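As a quick sanity check on that comparison, here is the arithmetic as a minimal Python sketch. The certificate rates are the figures quoted above; everything else is just subtraction of percentage points:

```python
# Certificate rates (%) reported in the study summary above.

# By the quality of the essays a student was assigned to *grade*:
graded_average = 68  # graded average-quality peer essays
graded_top = 45      # graded essays > 1.6 SD above the class mean

# By the quality of the essay the student *wrote*:
wrote_average = 75   # wrote an average essay
wrote_perfect = 95   # wrote a 'perfect' (9/9) essay

grading_effect = graded_average - graded_top    # 23 percentage points
writing_effect = wrote_perfect - wrote_average  # 20 percentage points

print(grading_effect, writing_effect)  # 23 20
```

In other words, merely being exposed to exceptional peer work cost about as many certificates (23 points) as the entire gap between writing an average essay and writing a perfect one (20 points).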

A follow-up study, involving 361 participants, simulated this setting, in order to delve into what the students thought. Participants, recruited via Amazon's Mechanical Turk, were asked to write a minimum of 500 characters in response to a quote and essay prompt. They were told the best responses would go into a lottery to win a bonus. They were then asked to assess two very short essays (about 200 words) supposedly written by peers. These were either both well-written, or both poorly-written. This was followed by some questions about what they felt and thought, and an opportunity to write a second essay.

Unsurprisingly, those who were given exceptional essays to grade felt significantly less able to write an essay as good as the ones they had read. They also decided that the ability to write an excellent short answer to such philosophical questions was not very important or relevant to them, and were much less likely to write another essay (43% of those who read the poor essays went on to try again, while only 27% of those who read the excellent essays did so).

Until now, research has mainly focused on how students respond when peer work is of a standard that the student is likely to see as “attainable”. This research shows how comparisons that are seen as unattainable may do more harm than good.

http://www.eurekalert.org/pub_releases/2016-02/afps-sep020216.php

There's been a lot of talk in recent years about the importance of mindset in learning, with those who have a “growth mindset” (i.e., believe that intelligence can be developed) being more academically successful than those who believe that intelligence is a fixed attribute. A new study shows that a 45-minute online intervention can help struggling high school students.

The study involved 1,594 students in 13 U.S. high schools. They were randomly allocated to one of three intervention groups or the control group. The intervention groups either experienced an online program designed to develop a growth mindset, or an online program designed to foster a sense of purpose, or both programs (2 weeks apart). All interventions were expected to improve academic performance, especially in struggling students.

The interventions had no significant benefits for students who were doing okay, but were of significant benefit for those who had an initial GPA of 2 or less, or had failed at least one core subject (this group contained 519 students, a third of the total participants). For this group, each of the interventions was of similar benefit; interestingly, the combined intervention was less beneficial than either single intervention. The researchers plausibly suggest that this might be because the two messages weren't integrated, and students may have had some trouble taking on board two separate messages.

Overall, for this group of students, semester grade point averages improved in core academic courses and the rate at which students performed satisfactorily in core courses increased by 6.4%.

GPA average in core subjects (math, English, science, social studies) was calculated at the end of the semester before the interventions, and at the end of the semester after the interventions. Brief questions before and after the interventions assessed the students' beliefs about intelligence, and their sense of meaningfulness about schoolwork.

GPA before intervention was positively associated with a growth mindset and a sense of purpose, which may explain why the interventions had no effect on the better students. Only the growth mindset intervention led to a more malleable view of intelligence; only the sense-of-purpose intervention led to a change in the perceived value of mundane academic tasks. Note that the combined intervention showed no such effects, suggesting that it had confused rather than enlightened!

In the growth mindset intervention, students read an article describing the brain’s ability to grow and reorganize itself as a consequence of hard work and good strategies. The message that difficulties don't indicate limited ability but rather provide learning opportunities, was reinforced in two writing exercises. The control group read similar materials, but with a focus on functional localization in the brain rather than its malleability.

In the sense-of-purpose interventions, students were asked to write about how they wished the world could be a better place. They read about the reasons why some students worked hard, such as “to make their families proud”; “to be a good example”; “to make a positive impact on the world”. They were then asked to think about their own goals and how school could help them achieve those objectives. The control group completed one of two modules that didn't differ in impact. In one, students described how their lives were different in high school compared to before. The other was much more similar to the intervention, except that the emphasis was on economic self-interest rather than social contribution.

The findings are interesting in showing that you can help poor learners with a simple intervention, but perhaps even more, for their indication that such interventions are best done in a more holistic and contextual way. A more integrated message would hopefully have been more effective, and surely ongoing reinforcement in the classroom would make an even bigger difference.

http://www.futurity.org/high-school-growth-mindset-910082/

Three classroom experiments have found that students who meditated before a psychology lecture scored better on a quiz that followed than students who did not meditate. Mood, relaxation, and class interest were not affected by the meditation training.

The noteworthy thing is that the meditation training was very basic — just six minutes of written meditation exercises.

The effect was stronger in classes where more freshmen were enrolled, suggesting that the greatest benefit goes to those students who have the most difficulty concentrating (and who are more likely to drop out).

The finding suggests the value in teaching some active self-reflection strategies to freshmen, and disadvantaged ones in particular.

It’s reasonable to speculate that more extensive training might increase the benefits.

And in another recent meditation study, a two-week mindfulness course significantly improved both Graduate Record Exam reading comprehension scores and working memory capacity.

The study involved 48 undergrads who attended either the mindfulness course or a nutrition class. Each 45-minute class met eight times over two weeks. Mindfulness training was associated with an average boost of 16 percentile points in GRE scores. Mind wandering also significantly decreased. The healthy-nutrition course had no effect on any of these factors.

http://medicalxpress.com/news/2013-04-meditating-grades.html (first study)

[3382] Ramsburg JT, Youmans RJ. Meditation in the Higher-Education Classroom: Meditation Training Improves Student Knowledge Retention during Lectures. Mindfulness [Internet]. Submitted: 1-11. Available from: http://link.springer.com/article/10.1007/s12671-013-0199-5

http://www.scientificamerican.com/podcast/episode.cfm?id=mindfulness-may-improve-test-scores-13-03-28 (second study)

[3380] Mrazek MD, Franklin MS, Phillips DT, Baird B, Schooler JW. Mindfulness Training Improves Working Memory Capacity and GRE Performance While Reducing Mind Wandering. Psychological Science [Internet]. 2013. Available from: http://pss.sagepub.com/content/early/2013/03/27/0956797612459659

Working memory capacity and level of math anxiety were assessed in 73 undergraduate students, and their level of salivary cortisol was measured both before and after they took a stressful math test.

For those students with low working memory capacity, neither cortisol levels nor math anxiety made much difference to their performance on the test. However, for those with higher WMC, the interaction of cortisol level and math anxiety was critical. For those unafraid of math, the more their cortisol increased during the test, the better they performed; but for those anxious about math, rising cortisol meant poorer performance.

It’s assumed that low-WMC individuals were less affected because their performance is lower to start with (this shouldn’t be taken as an inevitability! Low-WMC students are disadvantaged in a domain like math, but they can learn strategies that compensate for that problem). But the effect on high-WMC students demonstrates how our attitude and beliefs interact with the effects of stress. We may all have the same physiological responses, but we interpret them in different ways, and this interpretation is crucial when it comes to ‘higher-order’ cognitive functions.

Another study investigated two theories as to why people choke under pressure: (a) they’re distracted by worries about the situation, which clog up their working memory; (b) the stress makes them pay too much attention to their performance and become self-conscious. Both theories have research backing from different domains — clearly the former applies more to the academic testing environment, and the latter to situations involving procedural skill, where explicit attention to the process can disrupt motor sequences that are largely automatic.

But it’s not as simple as one effect applying to the cognitive domain and one to the domain of motor skills, and it’s a little mysterious why pressure could have two such opposite effects (drawing attention away, or toward). This new study carried out four experiments in order to define more precisely the characteristics of the environment that lead to these different effects, and to suggest solutions to the problem.

In the first experiment, participants were given a category learning task, in which some categories had only one relevant dimension and could be distinguished according to one easily articulated rule, and others involved three relevant dimensions and one irrelevant one. Categorization in this case was based on a complex rule that would be difficult to verbalize, and so participants were expected to integrate the information unconsciously.

Rule-based category learning was significantly worse when participants were also engaged in a secondary task requiring them to monitor briefly appearing letters. However it was not affected when their secondary task involved them explicitly monitoring the categorization task and making a confidence judgment. On the other hand, the implicit category learning task was not disrupted by the letter-monitoring task, but was impaired by the confidence-judgment task. Further analysis revealed that participants who had to do the confidence-judgment task were less likely to use the best strategy, but instead persisted in trying to verbalize a one- or two-dimension rule.

In the second experiment, the same tasks were learned in a low-pressure baseline condition followed by either a low-pressure control condition or one of two high-pressure conditions. One of these revolved around outcome — participants would receive money for achieving a certain level of improvement in their performance. The other put pressure on the participants through monitoring — they were watched and videotaped, and told their performance would be viewed by other students and researchers.

Rule-based category learning was slower when the pressure came from outcomes, but not when the pressure came from monitoring. Implicit category learning was unaffected by outcome pressure, but worsened by monitoring pressure.

Both high-pressure groups reported the same levels of pressure.

Experiment 3 focused on the detrimental combinations — rule-based learning under outcome pressure; implicit learning under monitoring pressure — and added the secondary tasks from the first experiment.

As predicted, rule-based categories were learned more slowly during conditions of both outcome pressure and the distracting letter-monitoring task, but when the secondary task was confidence-judgment, the negative effect of outcome pressure was counteracted and no impairment occurred. Similarly, implicit category learning was slowed when both monitoring pressure and the confidence-judgment distraction were applied, but was unaffected when monitoring pressure was counterbalanced by the letter task.

The final experiment extended the finding of the second experiment to another domain — procedural learning. As expected, the motor task was significantly affected by monitoring pressure, but not by outcome pressure.

These findings suggest two different strategies for dealing with choking, depending on the situation and the task. In the case of test-taking, good test preparation and a writing exercise can boost performance by reducing anxiety and freeing up working memory. If you're worried about doing well in a game or giving a memorized speech in front of others, you instead want to distract yourself so you don't become focused on the details of what you're doing.

Whether IQ tests really measure intelligence has long been debated. A new study provides evidence that motivation is also a factor.

Meta-analysis of 46 studies where monetary incentives were used in IQ testing has revealed a large effect of reward on IQ score. The average effect was equivalent to nearly 10 IQ points, with the size of the effect depending on the size of the reward. Rewards greater than $10 produced increases roughly equivalent to 20 IQ points. The effects of incentives were greater for individuals with lower baseline IQ scores.

Follow-up on a previous study of 500 boys (average age 12.5) who were videotaped while taking IQ tests in the late 1980s also supports the view that motivation plays a part in IQ. The tapes had been evaluated by raters trained to detect signs of boredom, and each boy had been given a motivational score on this basis. Some 12 years later, half the participants agreed to interviews about their educational and occupational achievements.

As found in other research, IQ score was found to predict various life outcomes, including academic performance in adolescence and criminal convictions, employment, and years of education in early adulthood. However, after taking into account motivational score, the predictiveness of IQ score was significantly reduced.

Differences in motivational score accounted for up to 84% of the difference in years of education (no big surprise there if you think about it), but only 25% of the differences relating to how well they had done in school during their teenage years.

In other words, test motivation can be a confounding factor that has inflated estimates of the predictive validity of IQ, but the fact that academic achievement was less affected by motivation demonstrates that high intelligence (leaving aside the whole thorny issue of what intelligence is) is still required to get a high IQ score.

This is not unexpected — from the beginning of intelligence testing, psychologists have been aware that test-takers vary in how seriously they take the test, and that this will impact on their scores. Nevertheless, the findings are a reminder of this often overlooked fact, and underline the importance of motivation and self-discipline, and the need for educators to take more account of these factors.

It’s well-established that feelings of encoding fluency are positively correlated with judgments of learning, so it’s been generally believed that people primarily use the simple rule, easily learned = easily remembered (ELER), to work out whether they’re likely to remember something (as discussed in the previous news report). However, new findings indicate that the situation is a little more complicated.

In the first experiment, 75 English-speaking students studied 54 Indonesian-English word pairs. Some of these were very easy, with the English words nearly identical to their Indonesian counterparts (e.g., Polisi-Police); others required more effort but had a connection that helped (e.g., Bagasi-Luggage); others were entirely dissimilar (e.g., Pembalut-Bandage).

Participants were allowed to study each pair for as long as they liked, then asked how confident they were about being able to recall the English word when supplied the Indonesian word on an upcoming test. They were tested at the end of their study period, and also asked to fill in a questionnaire which assessed the extent to which they believed that intelligence is fixed or changeable.

It’s long been known that theories of intelligence have important effects on people's motivation to learn. Those who believe each person possesses a fixed level of intelligence (entity theorists) tend to disengage when something is challenging, believing that they’re not up to the challenge. Those who believe that intelligence is malleable (incremental theorists) keep working, believing that more time and effort will yield better results.

The study found that those who believed intelligence is fixed did indeed follow the ELER heuristic, with their judgment of how well an item was learned nicely matching encoding fluency.

However, those who saw intelligence as malleable did not follow the rule, but rather seemed to be following the reverse heuristic: that effortful encoding indicates greater engagement in learning, and thus is a sign that they are more likely to remember. This group therefore tended to be marginally underconfident for easy items, marginally overconfident for medium-level items, and significantly overconfident for difficult items.

However, the entanglement of item difficulty and encoding fluency weakens this finding, and accordingly a second experiment separated these two attributes.

In this experiment, 41 students were presented with two lists of nine words, one list of which was in small font (18-point Arial) and one in large font (48-point Arial). Each word was displayed for four seconds. While font size made no difference to their actual levels of recall, entity theorists were much more confident of recalling the large-size words than the small-size ones. The incremental theorists were not, however, affected by font-size.

It is suggested that the failure to find evidence of a ‘non-fluency heuristic’ in this case may be because participants had no control over learning time, therefore were less able to make relative judgments of encoding effort. Nevertheless, the main finding, that people varied in their use of the fluency heuristic depending on their beliefs about intelligence, was clear in both cases.

[2182] Miele DB, Finn B, Molden DC. Does Easily Learned Mean Easily Remembered? Psychological Science [Internet]. 2011;22(3):320-324. Available from: http://pss.sagepub.com/content/22/3/320.abstract

Research has shown that people are generally poor at predicting how likely they are to remember something. A recent study tested the theory that the reason we’re so often inaccurate is that we make predictions about memory based on how we feel while we're encountering the information to be learned, and that can lead us astray.

In three experiments, each involving about 80 participants ranging in age from late teens to senior citizens, participants were serially shown words in large or small fonts and asked to predict how well they'd remember each (actual font sizes depended on the participants’ browsers, since this was an online experiment and participants were in their own homes, but the larger size was four times larger than the other).

In the first experiment, each word was presented either once or twice, and participants were told if they would have another chance to study the word. The length of time the word was displayed on the first occasion was controlled by the participant. On the second occasion, words were displayed for four seconds, and participants weren’t asked to make a new prediction. At the end of the study phase, they had two minutes to type as many words as they remembered.

Recall was significantly better when an item was seen twice. Recall wasn’t affected by font size, but participants were significantly more likely to believe they’d recall those presented in larger fonts. While participants realized seeing an item twice would lead to greater recall, they greatly underestimated the benefits.

Because people so grossly discounted the benefit of a single repetition, in the next experiment the comparison was between one and four study trials. This time, participants gave more weight to having three repetitions versus none, but nevertheless, their predictions were still well below the actual benefits of the repetitions.

In the third experiment, participants were given a simplified description of the first experiment and asked either what effect they’d expect font size to have, or what effect having two study trials would have. The results (similar levels of belief in the benefits of each condition) resembled neither the results of the first experiment (indicating that those participants’ predictions hadn’t been made on the basis of their beliefs about memory effects) nor actual performance (demonstrating that people really aren’t very good at predicting their memory performance).

These findings were confirmed in a further experiment, in which participants were asked about both variables (rather than just one).

The findings confirm other evidence that (a) general memory knowledge tends to be poor, (b) personal memory awareness tends to be poor, and (c) ease of processing is commonly used as a heuristic to predict whether something will be remembered.

 

Addendum: a nice general article on this topic by the lead researcher Nate Kornell has just come out in Miller-McCune

Kornell, N., Rhodes, M. G., Castel, A. D., & Tauber, S. K. (in press). The ease of processing heuristic and the stability bias: Dissociating memory, memory beliefs, and memory judgments. Psychological Science.

A study involving 120 toddlers, tested at 14, 24, and 36 months, has assessed language skills (spoken vocabulary and talkativeness) and the development of self-regulation. Self-regulation is an important skill that predicts later academic and social success. Previous research has found that language skills (and vocabulary in particular) help children regulate their emotions and behavior. Boys have also been shown to lag behind girls in both language and self-regulation.

The present study hoped to explain inconsistencies in previous research findings by accounting for general cognitive development and possible gender differences. It found that vocabulary was more important than talkativeness, and 24-month vocabulary predicted the development of self-regulation even when general cognitive development was accounted for. However, girls seemed ‘naturally’ better able to control themselves and focus, but the ability in boys was much more associated with language skills. Boys with a strong vocabulary showed a dramatic increase in self-regulation, becoming comparable to girls with a strong vocabulary.

These gender differences suggest that language skills may be more important for boys, and that more emphasis should be placed on encouraging young boys to use words to solve problems, rather than accepting that ‘boys will be boys’.

Metamemory or metacognition — your ability to monitor your own cognitive processes — is central to efficient and effective learning. Research has also shown that, although we customarily have more faith in a person’s judgment the more confident they are in it, a person’s accuracy and their confidence in that accuracy are two quite separate things (which is not to say confidence isn’t a useful heuristic; just that it’s far from infallible). A new study involving 32 participants has looked at individual differences in judging personal accuracy when assessing a geometric image, comparing these differences to differences in the brain.

The perceptual test used simple stimuli, and each one was customized to the individual's level of skill in order to achieve a score of 71%. In keeping with previous research, there was considerable variation in the participants’ accuracy in assessing their own responses. But the intriguing result was that these differences were reflected in differences in the volume of gray matter in the right anterior prefrontal cortex. Moreover, those who were better at judging their own performance not only had more neurons in that region, but also tended to have denser connections between the region and the white matter connected to it. The anterior prefrontal cortex is associated with various executive functions, and seems to be more developed in humans compared to other animals.

The finding should not be taken to indicate a genetic basis for metacognitive ability. The finding implies nothing about whether the physical differences are innate or achieved by training and experience. However it seems likely that, like most skills and abilities, a lot of it is training.

A study following nearly 1300 young children from birth through the first grade provides more evidence for the importance of self-regulation for academic achievement. The study found that children showing strong self-regulation in preschool and kindergarten did significantly better on math, reading and vocabulary at the end of first grade, independent of poverty, ethnic status, and maternal education (all of which had significant negative effects on reading, math, and vocabulary achievement in first grade). At-risk children with stronger self-regulation in kindergarten scored 15 points higher on a standardized math test in first grade, 11 points higher on an early reading test, and nearly seven points higher on a vocabulary test than at-risk children with weaker self-regulation. The findings emphasize the need to help children learn how to listen, pay attention, follow instructions, and persist on a task.

It has been well-established that, compared to younger adults, older adults require more practice to achieve the same level of performance [1]. Sometimes, indeed, they may need twice as much [2].

In the present study, two groups of adult subjects were given paired items to learn during multiple study-test trials. During each trial items were presented at the subject's pace. Afterwards the subjects were asked to judge how likely they were to be able to recall each item in a test.

It was found that people were very good at accurately judging the likelihood of their correct recall. Correlations between judgments and the amount of time the subjects studied the items suggested that people were monitoring their learning and using this to allocate study time.

However, older adults (with a mean age of 67) used monitoring to a lesser degree than the younger adults (with a mean age of 22), and the results suggested that part of the reason for the deficit in recall commonly found with older adults is due to this factor.

References

1. For a review, see Kausler, D.H. 1994. Learning and memory in normal aging. New York: Academic Press.

2. Delbecq-Derousné, J. & Beauvois, M. 1989. Memory processes and aging: A defect of automatic rather than controlled processes? Archives of Gerontology & Geriatrics, 1 (Suppl), 121-150.

Salthouse, T.A. & Dunlosky, J. 1995. Analyses of adult age differences in associative learning. Zeitschrift für Psychologie, 203, 351-360.

Dunlosky, J. & Connor, L.T. (1997). Age differences in the allocation of study time account for age differences in memory performance. Memory and Cognition, 25, 691-700.

Older news items (pre-2010) brought over from the old website

Pointers for better learning

One of the crucial aspects to learning efficiently is being able to accurately assess your own learning process. Research has shown that in general people are not very accurate at judging how well they have learned complex materials. A review of recent research into how to improve judgment accuracy has concluded that rereading or summarizing text can help, as well as techniques that focus people’s attention on just the most important details of a text, such as trying to recall the key ideas from memory.

Dunlosky, J. & Lipko, A.R. 2007. Metacomprehension: A Brief History and How to Improve Its Accuracy. Current Directions in Psychological Science, 16 (4), 228–232.

http://www.eurekalert.org/pub_releases/2007-08/afps-rpt082307.php
http://www.sciencedaily.com/releases/2007/08/070823142827.htm