Academic Motivation

A natural experiment involving 5,740 participants in a MOOC (massive open online course) has found that when students were asked to assess each other's work, and the examples were exceptional, a large proportion of students dropped the course.

In the MOOC, as is not uncommon practice, course participants were asked to write an essay and then to grade a random sample of their peers' essays. Those randomly assigned to evaluate exemplary peer essays were dramatically more likely to quit the course than those assigned to read more typical essays.

Specifically, around 68% of students who graded essays of average quality finished and passed the course, earning a certificate. Among those who graded above-average essays (more than one standard deviation above the class mean, 7.5/9), 64% earned a certificate. But among those who graded the best essays (those more than 1.6 SDs above the mean), only 45% earned a certificate.

These numbers can be compared to the fact that 75% of students who wrote an average essay earned a certificate, and 95% of those who wrote a 'perfect' essay, 9/9, earned a certificate. The difference between these numbers is about the same as (in fact, slightly smaller than) the effect of grading average vs top essays.
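To make the comparison explicit (using only the percentages reported above), the gap associated with writing quality is indeed slightly smaller than the gap associated with grading the best essays:

```python
# Certificate rates reported above, in percent
writer_average, writer_perfect = 75, 95   # by quality of essay written
grader_average, grader_top = 68, 45       # by quality of essays graded

writing_gap = writer_perfect - writer_average  # benefit of writing a perfect essay
grading_gap = grader_average - grader_top      # cost of grading the best essays

print(writing_gap, grading_gap)  # → 20 23
```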

A follow-up study, involving 361 participants, simulated this setting, in order to delve into what the students thought. Participants, recruited via Amazon's Mechanical Turk, were asked to write a minimum of 500 characters in response to a quote and essay prompt. They were told the best responses would go into a lottery to win a bonus. They were then asked to assess two very short essays (about 200 words) supposedly written by peers. These were either both well-written, or both poorly-written. This was followed by some questions about what they felt and thought, and an opportunity to write a second essay.

Unsurprisingly, those who were given exceptional essays to grade felt significantly less able to write an essay as good as the ones they had read. They also decided that the ability to write an excellent short answer to such philosophical questions was not very important or relevant to them, and were much less likely to write another essay (43% of those who read the poor essays went on to try again, while only 27% of those who read the excellent essays did so).

Until now, research has mainly focused on how students respond when peer work is of a standard that the student is likely to see as “attainable”. This research shows how comparisons that are seen as unattainable may do more harm than good.

http://www.eurekalert.org/pub_releases/2016-02/afps-sep020216.php

A study of 438 first- and second-grade students and their primary caregivers has revealed that parents' math anxiety affects their children's math performance — but (and this is the surprising bit) only when they frequently help them with their math homework.

The study builds on previous research showing that students learn less math when their teachers are anxious about math. This is not particularly surprising, and it wouldn't have been surprising if this study had found that math-anxious parents had math-anxious children. But the story wasn't that simple.

Children were assessed in reading achievement, math achievement and math anxiety at both the beginning and end of the school year. Children of math-anxious parents learned significantly less math over the school year and had more math anxiety by the year end—but only if math-anxious parents reported providing help every day with math homework. When parents reported helping with math homework once a week or less often, children’s math achievement and attitudes were not related to parents’ math anxiety. Reading achievement (included as a control) was not related to parents' math anxiety.

Interestingly, the parents' level of math knowledge didn't change this effect (although this is less surprising when you consider the basic level of math taught in the 1st and 2nd grades).

Sadly, the effect still held even when the teacher was strong in math.

It's suggested that math-anxious parents may be less effective in explaining math concepts, and may also respond less helpfully when children make a mistake or solve the problem in a non-standard way. People with high math anxiety tend to have poor attitudes toward math, and also a high fear of failing at math. It's also possible (likely even) that they will have inflexible attitudes to how a math problem “should” be done. All of these are likely to demotivate the child.

Analysis also indicated that it was not that parents induced math anxiety in their children, who consequently did badly; rather, their homework help caused the children to do poorly, which in turn increased their math anxiety.

Information about parental anxiety and how often parents helped their children with math homework was collected by questionnaire. Math anxiety was assessed using the short (25-item) Math Anxiety Rating Scale. The question, “How often do you help your child with their math homework?” was answered on a 7-point scale (1 = never, 2 = once a month, 3 = less than once a week, 4 = once a week, 5 = 2–3 times a week, 6 = every day, 7 = more than once a day). The mean was 5.3.

The finding points to the need for interventions focused on both decreasing parents' math anxiety and scaffolding their skills in how to help with math homework. It also suggests that, in the absence of such support, math-anxious parents are better not to help!

http://www.eurekalert.org/pub_releases/2015-08/uoc-pma080715.php

http://www.futurity.org/parents-math-anxiety-979472/

There's been a lot of talk in recent years about the importance of mindset in learning, with those who have a “growth mindset” (ie believe that intelligence can be developed) being more academically successful than those who believe that intelligence is a fixed attribute. A new study shows that a 45-minute online intervention can help struggling high school students.

The study involved 1,594 students in 13 U.S. high schools. They were randomly allocated to one of three intervention groups or the control group. The intervention groups either experienced an online program designed to develop a growth mindset, or an online program designed to foster a sense of purpose, or both programs (2 weeks apart). All interventions were expected to improve academic performance, especially in struggling students.

The interventions had no significant benefits for students who were doing okay, but were of significant benefit for those who had an initial GPA of 2 or less, or had failed at least one core subject (this group contained 519 students, about a third of the total participants). For this group, each of the interventions was of similar benefit; interestingly, the combined intervention was less beneficial than either single intervention. It's plausibly suggested that this might be because the different messages weren't integrated, and students may have had some trouble taking on board two separate messages.

Overall, for this group of students, semester grade point averages improved in core academic courses and the rate at which students performed satisfactorily in core courses increased by 6.4%.

GPA average in core subjects (math, English, science, social studies) was calculated at the end of the semester before the interventions, and at the end of the semester after the interventions. Brief questions before and after the interventions assessed the students' beliefs about intelligence, and their sense of meaningfulness about schoolwork.

GPA before intervention was positively associated with a growth mindset and a sense of purpose, explaining why the interventions had no effect on better students. Only the growth mindset intervention led to a more malleable view of intelligence; only the sense-of-purpose intervention led to a change in the perceived value of mundane academic tasks. Note that the combined intervention showed no such effects, suggesting that it had confused rather than enlightened!

In the growth mindset intervention, students read an article describing the brain’s ability to grow and reorganize itself as a consequence of hard work and good strategies. The message that difficulties don't indicate limited ability but rather provide learning opportunities, was reinforced in two writing exercises. The control group read similar materials, but with a focus on functional localization in the brain rather than its malleability.

In the sense-of-purpose interventions, students were asked to write about how they wished the world could be a better place. They read about the reasons why some students worked hard, such as “to make their families proud”; “to be a good example”; “to make a positive impact on the world”. They were then asked to think about their own goals and how school could help them achieve those objectives. The control group completed one of two modules that didn't differ in impact. In one, students described how their lives were different in high school compared to before. The other was much more similar to the intervention, except that the emphasis was on economic self-interest rather than social contribution.

The findings are interesting in showing that you can help poor learners with a simple intervention, but perhaps even more so for their indication that such interventions are best done in a more holistic and contextual way. A more integrated message would hopefully have been more effective, and surely ongoing reinforcement in the classroom would make an even bigger difference.

http://www.futurity.org/high-school-growth-mindset-910082/

I’ve spoken before about the effects of motivation on test performance. This is displayed in a fascinating study by researchers at the Educational Testing Service, who gave one of their widely-used tests (the ETS Proficiency Profile, short form, plus essay) to 757 students from three institutions: a research university, a master's institution and a community college. Here’s the good bit: students were randomly assigned to groups, each given a different consent form. In the control condition, students were told: “Your answers on the tests and the survey will be used only for research purposes and will not be disclosed to anyone except the research team.” In the “Institutional” condition, the rider was added: “However, your test scores will be averaged with all other students taking the test at your college.” While in the “Personal” condition, they were told instead: “However, your test scores may be released to faculty in your college or to potential employers to evaluate your academic ability.”

No prizes for guessing which of these was more motivating!

Students in the “personal” group performed significantly and consistently better than those in the control group at all three institutions. On the multiple-choice part of the test, the personal group performed on average 0.41 of a standard deviation higher than the control group, and the institutional group performed on average 0.26 SD higher than the controls. The largest difference was 0.68 SD. On the essay, the largest effect size was 0.59 SD. (The results are reported this way because the focus of the study was on the use of such tests to assess and compare learning gains by colleges.)

The effect is perhaps less dramatic at the individual level, with the average sophomore scores on the multiple-choice test being 460, 458, and 455 for the personal, institutional, and control groups, respectively. Interestingly, this effect was greater at the senior level: 469 vs 466 vs 460. For the essay question, however, the effect was larger: 4.55 vs 4.35 vs 4.21 (sophomores); 4.75 vs 4.37 vs 4.37 (seniors). (Note that these scores have been adjusted for college admission scores.)
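As a rough sketch of how the standardized effect sizes above relate to raw scores: Cohen's d is simply the mean difference divided by the standard deviation. Note that the SD used below is hypothetical, for illustration only; the study reports effects in SD units without giving the raw SDs.

```python
def cohens_d(mean_treatment: float, mean_control: float, sd: float) -> float:
    """Standardized mean difference: raw score gap divided by the score SD."""
    return (mean_treatment - mean_control) / sd

# Hypothetical illustration (not the study's actual SD): if the test's SD
# were 12 points, the 5-point sophomore gap (460 vs 455) would correspond to
print(round(cohens_d(460, 455, 12), 2))  # → 0.42
```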

Students also reported on motivation level, and this was found to be a significant predictor of test performance, after controlling for SAT or placement scores.

Student participants had completed at least one year of college, or (for community college students) at least three courses.

The findings confirm recently expressed concern that students don’t put their best efforts into low-stakes tests, and that, when such tests are used to make judgments about institutional performance (how much value they add), they may well be significantly misleading, if different institutions are providing different levels of motivation.

On a personal level, of course, the findings may be taken as further confirmation of the importance of non-academic factors in academic achievement. Something looked at more directly in the next study.

Motivation, study habits—not IQ—determine growth in math achievement

Data from a large German longitudinal study assessing math ability in adolescents found that, although intelligence was strongly linked to students' math achievement, this was only in the initial development of competence. The significant predictors of growth in math achievement, however, were motivation and study skills.

Specifically (and excitingly for me, since it supports some of my recurring themes!), at the end of Grade 5, perceived control was a significant positive predictor for growth, and surface learning strategies were a significant negative predictor. ‘Perceived control’ reflects the student’s belief that their grades are under their control, that their efforts matter. ‘Surface learning strategies’ reflect the use of rote memorization/rehearsal strategies rather than ones that encourage understanding. (This is not to say, of course, that these strategies don’t have their place — but they need to be used appropriately).

At the end of Grade 7, however, a slightly different pattern emerged, with intrinsic motivation and deep learning strategies the significant positive predictors of growth, while perceived control and surface learning strategies were no longer significant.

In other words, while intelligence didn’t predict growth at either point, the particular motivational and strategy variables that affected growth were different at different points in time, reflecting, presumably, developmental changes and/or changes in academic demands.

Note that this is not to say that intelligence doesn’t affect math achievement! It is, indeed, a strong predictor — but through its effect on getting the student off to a good start (lifting the starting point) rather than having an ongoing benefit.

There was, sadly but unsurprisingly (and consistent with other research), an overall decline in motivation from grade 5 to 7. There was also a smaller decline in strategy use (any strategy! — presumably reflecting the declining motivation).

It’s also worth noting that (also sadly but unsurprisingly) the difference between school types increased over time, with those in the higher track schools making more progress than those in the lowest track.

The last point I want to emphasize is that extrinsic motivation only affected initial levels, not growth. The idea that extrinsic motivation (e.g., wanting good grades) is of only short-term benefit, while intrinsic motivation (e.g., being interested in the subject) is far more durable, is one I have made before, and one that all parents and teachers should pay attention to.

The study involved 3,520 students, following them from grades 5 to 10. The math achievement test was given at the end of each grade, while intelligence and self-reported motivation and strategy use were assessed at the end of grades 5 and 7. Intelligence was assessed using the nonverbal reasoning subtest of Thorndike’s Cognitive Abilities Test (German version). The 42 schools in the study were spread among the three school types: lower-track (Hauptschule), intermediate-track (Realschule), and higher-track (Gymnasium). These school types differ in entrance standards and academic demands.

In contradiction of some other recent research, a large new study has found that offering students rewards just before standardized testing can improve test performance dramatically. One important factor in this finding might be the immediate pay-off — students received their rewards right after the test. Another might be in the participants, who were attending low-performing schools.

The study involved 7,000 students in Chicago public schools and school districts in south-suburban Chicago Heights. Older students were given financial rewards, while younger students were offered non-financial rewards such as trophies.

Students took relatively short, standardized diagnostic tests three times a year to determine their grasp of mathematics and English skills. Unusually for this type of research, the students were not told ahead of time of the rewards — the idea was not to see how reward improved study habits, but to assess its direct impact on test performance.

Consistent with other behavioral economics research, the prospect of losing a reward was more motivating than the possibility of receiving a reward — those given money or a trophy to look at while they were tested performed better.

The most important finding was that the rewards only ‘worked’ if they were going to be given immediately after the test. If students were told instead that they would be given the reward sometime later, test performance did not improve.

Follow-up tests showed no negative impact of removing the rewards in successive tests.

Age and type of reward mattered. Elementary school students (who were given nonfinancial rewards) responded more to incentives than high-school students. Younger students have been found to be more responsive to non-monetary rewards than older students. Among high school students, the amount of money involved mattered.

It’s important to note that the students tested had low initial motivation to do well. I would speculate that the timing issue is so critical for these students because distant rewards are meaningless to them. Successful students tend to be more motivated by the prospect of distant rewards (e.g., a good college, a good job).

The finding does demonstrate that a significant factor in a student’s poor performance on tests may simply come from not caring to try.

Whether IQ tests really measure intelligence has long been debated. A new study provides evidence that motivation is also a factor.

Meta-analysis of 46 studies where monetary incentives were used in IQ testing has revealed a large effect of reward on IQ score. The average effect was equivalent to nearly 10 IQ points, with the size of the effect depending on the size of the reward. Rewards greater than $10 produced increases roughly equivalent to 20 IQ points. The effects of incentives were greater for individuals with lower baseline IQ scores.
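Assuming the conventional IQ scale (SD = 15, an assumption on my part, since meta-analyses report standardized effects), converting a standardized effect size into IQ points is a single multiplication:

```python
IQ_SD = 15  # SD of the conventional IQ scale (assumed, not stated above)

def d_to_iq_points(d: float) -> float:
    """Convert a standardized effect size (Cohen's d) into IQ points."""
    return d * IQ_SD

# A standardized effect of about 0.64 would match the "nearly 10 IQ points"
# reported above:
print(d_to_iq_points(0.64))  # → 9.6
```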

Follow-up on a previous study of 500 boys (average age 12.5) who were videotaped while undertaking IQ tests in the late 1980s also supports the view that motivation plays a part in IQ. The tapes had been evaluated by raters trained to detect signs of boredom, and each boy had been given a motivational score on this basis. Some 12 years later, half the participants agreed to interviews about their educational and occupational achievements.

As found in other research, IQ score predicted various life outcomes, including academic performance in adolescence, and criminal convictions, employment, and years of education in early adulthood. However, after taking motivational score into account, the predictiveness of IQ score was significantly reduced.

Differences in motivational score accounted for up to 84% of the difference in years of education (no big surprise there if you think about it), but only 25% of the differences relating to how well they had done in school during their teenage years.

In other words, test motivation can be a confounding factor that has inflated estimates of the predictive validity of IQ, but the fact that academic achievement was less affected by motivation demonstrates that high intelligence (leaving aside the whole thorny issue of what intelligence is) is still required to get a high IQ score.

This is not unexpected — from the beginning of intelligence testing, psychologists have been aware that test-takers vary in how seriously they take the test, and that this will impact on their scores. Nevertheless, the findings are a reminder of this often overlooked fact, and underline the importance of motivation and self-discipline, and the need for educators to take more account of these factors.

You may think that telling students to strive for excellence is always a good strategy, but it turns out that it's not quite that simple. A series of four experiments looked at how students' attitudes toward achievement influenced their performance on various tasks. Those with high achievement motivation did better on a task when they were also exposed to subconscious "priming" related to winning, mastery or excellence, while those with low achievement motivation did worse. Similarly, when given a choice, those with high achievement motivation were more likely to resume an interrupted task which they were told tested their verbal reasoning ability. However, those with high achievement motivation did worse on a word-search puzzle when they were told the exercise was fun.

The findings point to the fact that people have different goals (e.g., achievement vs enjoyment), and that effective motivation requires these differences to be taken into account.