
Students’ mistaken beliefs about how much their peers typically study could be harming their exam performance in some surprising ways

By Christian Jarrett

A lot of us use what we consider normal behaviour – based on how we think most other people like us behave – to guide our own judgments and decisions. When these perceptions are wide of the mark (known as “pluralistic ignorance”), this can affect our behaviour in detrimental ways. The most famous example concerns students’ widespread overestimation of how much their peers drink alcohol, which influences them to drink more themselves.

Now a team led by Steven Buzinski at the University of North Carolina at Chapel Hill has investigated whether students’ pluralistic ignorance about how much time their peers spend studying for exams could be having a harmful influence on how much time they devote to study themselves. Reporting their findings in Teaching of Psychology, the team did indeed find evidence of pluralistic ignorance about study behaviour, but some of its effects seemed to be the direct opposite of what they expected.

Across four studies with hundreds of social psych undergrads, the researchers found that, overall, students tended to underestimate how much time their peers spent studying for an upcoming exam (but there was a spread of perceptions, with some students overestimating the average). Moreover, students’ perceptions of the social norm for studying were correlated with their own study time, suggesting – though not proving – that their decisions about how much to study were influenced by what they felt was normal.

However, when Buzinski and his colleagues looked to see whether the students’ misconceptions about their peers’ study time were associated with their subsequent exam performance, they found the opposite pattern to the one they had expected.

The researchers had thought that underestimating typical study time would be associated with choosing to study less, and in turn that this would be associated with poorer exam performance. Instead, they found that it was those students who overestimated their peers’ study time who performed worse in the subsequent exam, and this seemed to be fully explained by their feeling unprepared for the exam (the researchers speculated that such feelings could increase anxiety and self-doubt, thus harming exam performance).

In a final study, one week before an exam, the researchers corrected students’ misconceptions about the average exam study time. This had the hoped-for effect of eliminating pluralistic ignorance about normal study behaviour; it also removed any link between beliefs about typical study time and feelings of unpreparedness.

Most promisingly, average exam performance was better after this intervention than in a similar exam earlier in the semester, suggesting that correcting misconceptions about others’ study behaviour is beneficial. Perhaps learning the truth about how much their peers studied gave the students a chance to adjust their own study behaviour, and it may have boosted the confidence of those who would otherwise have overestimated average study time (however, this wasn’t tested in the study, so it remains speculative).

Of course, the improved performance could simply have been due to practice effects over the course of the semester, but it’s notable that no such improvement in the late-semester exam was observed in earlier years, when the study-time-beliefs intervention was not applied.

Future research will be needed to confirm the robustness of these findings, including in more diverse student groups, and to test the causal role of beliefs about study time and feelings of preparedness – for example, by directly observing how correcting misconceptions affects students’ study behaviour and their confidence.

For now, Buzinski and his colleagues recommend using class discussions “…to correct potentially detrimental misperceptions”. They added: “Unless we as educators actively intervene, our students will approach their coursework from an understanding based upon flawed perceptions of the classroom norm, and those most at risk may suffer the most from their shared ignorance.”

Insidious Assumptions: How Pluralistic Ignorance of Studying Behavior Relates to Exam Performance

Christian Jarrett (@Psych_Writer) is Editor of BPS Research Digest

Article source: http://feedproxy.google.com/~r/BpsResearchDigest/~3/434H6Zg3yp8/

Growth mindset doesn’t only apply to learning – it’s better to encourage your child to help than to be “a helper”

Children primed to think of themselves as “helpers” were more discouraged when things didn’t go to plan

By Emma Young

According to mindset theory, if you tell a child repeatedly that they’re smart, it makes them less willing to push themselves when they get stuck on an intellectual challenge, presumably because failure would threaten their self-image of being a “smart kid”. For this reason, effort-based praise – rewarding kids for “working hard” rather than for “being smart” – is widely recommended (though the picture is not the same for adults). But does a similar effect occur in the social sphere? What if you ask a child – as so many parents, and surely many teachers, do – to “be a helper”, as if it’s a category that you either belong to or you don’t?

Earlier research has found that young kids are more likely to try to help others when they are asked to “be helpers” instead of “to help”. But as Emily Foster-Hanson and her fellow researchers at New York University note, “Setbacks and difficulties are common features of children’s experience throughout development and into adulthood,” so it’s important to examine the effects of category labelling – like “being smart” or “being a helper” – when things go wrong for the child. And in their new paper, published in Child Development, they find that setbacks are more detrimental to a child labelled “a helper” than a child asked “to help”.

The researchers recruited a total of 139 four- and five-year-olds who were visiting the Children’s Museum of Manhattan and tested each of them alone in a private room in the museum. At the start, half of them were primed with a short introduction to think of themselves as “a helper” (for example, “when someone needs to pick things up, you could be a helper”) and the others to think of themselves as someone who could “help” (“when someone needs to pick things up, you could help”).

Next, the researchers gave the children various hypothetical helping scenarios to act out with puppets, one of which represented the child, either “helping” or “being a helper” (the wording of the experimenter’s script was varied according to the child’s experimental group). Afterwards the children were quizzed about their attitudes towards helping, and the results suggested that, after role-playing a setback while helping (such as accidentally knocking over a cup of crayons when tidying them up), “helpers” had more negative attitudes towards helping than those who’d “helped”.

For a second study, with a fresh group of children, the researchers investigated the effect of real setbacks. These kids were set up to fail. In one scenario, for example, an experimenter prompted the child to help (or to be a helper) by putting away a box that was on the table. If the child didn’t immediately go to do it, they received a succession of prompts until they did. But the box had a loose bottom and was full of ping pong balls, which spilled onto the floor when the child picked it up. In another scenario, a child was prompted to put away a toy truck that had been disassembled and loosely reassembled so that it looked intact, but which fell apart as soon as it was picked up.

The researchers found that after experiencing these setbacks, the “helper” kids were less likely to voluntarily go and help in two other fairly demanding helping situations (such as going into another part of the room to put away bricks into bags) than the kids in the “helping” group. “This pattern is broadly consistent with the idea that children who had been told to ‘be helpers’ but then made mistakes were overall less motivated to help than the children who had been told ‘to help’,” the researchers write. 

The helper kids were, however, more likely to go on to voluntarily help with an easy task that involved bending down to pick up dropped crayons that they could then use. This was a low-effort task with a high chance of success. Perhaps they were taking advantage of a quick, virtually guaranteed way to restore a little of their dented “helper” image.

The researchers also found that children asked to be helpers – and who subsequently chose not to help on either of the more effortful tasks – afterwards gave lower self-evaluations of their helping abilities than children in the “helping” group who had also declined to help with those tasks. This suggests that the helper group were now thinking in a black-and-white way about helper status and helping abilities. 

“These data indicate that categorical language can have detrimental consequences for children’s behaviour, even in non-academic domains and even when the categorical input is not evaluative in content,” the researchers write. (In these studies, no one talked about being a “good helper” and there was no evaluation of this behaviour.) 

Do these scenarios accurately mirror real life? After the setbacks, the experimenter always responded in a neutral fashion, saying without emotion, “Oh well, I guess I can put those away later”, for instance. A parent or a teacher might respond differently, telling the child not to worry, and pointing out that it was a really tricky task. Might these kinds of encouraging, comforting responses ameliorate or even eradicate the effects of a setback on future helping? Only further research will tell. 

Still, this work does, as the researchers write, “provide an important caveat to previous messages to parents and teachers about how to use language to encourage pro-sociality in early childhood.” 

Asking Children to “Be Helpers” Can Backfire After Setbacks

Emma Young (@EmmaELYoung) is Staff Writer at BPS Research Digest

Article source: http://feedproxy.google.com/~r/BpsResearchDigest/~3/Kkxk_PAo4-Y/

There’s a fascinating psychological story behind why your favourite film baddies all have a truly evil laugh


By guest blogger David Robson

Towards the end of the Disney film Aladdin, our hero’s love rival, the evil Jafar, discovers Aladdin’s secret identity and steals his magic lamp. Jafar’s wish to become the world’s most powerful sorcerer is soon granted and he then uses his powers to banish Aladdin to the ends of the Earth. 

What follows is a lingering close-up of Jafar’s body. He leans forward, fists clenched, with an almost constipated look on his face. He then explodes in uncontrollable cackles that echo across the landscape. For many millennials growing up in the 1990s, it is an archetypal evil laugh.

Such overt displays of delight at others’ misfortune are found universally in kids’ films, and many adult thriller and horror films too. Just think of the rapturous guffaws of the alien in the first Predator film as it is about to self-detonate, taking Arnold Schwarzenegger with it. Or Jack Nicholson’s chilling snicker in The Shining. Or Wario’s manic crowing whenever Mario was defeated. 

A recent essay by Jens Kjeldgaard-Christiansen in the Journal of Popular Culture asks what the psychology behind this might be. Kjeldgaard-Christiansen is well placed to provide an answer, having previously used evolutionary psychology to explain the behaviours of heroes and villains in fiction more generally.

In that work, he argued that one of the core traits a villain should show is a low “welfare trade-off ratio”: they are free-riders who cheat and steal, taking from their community while contributing nothing. Such behaviour is undesirable for societies today, but it would have been even more of a disaster in prehistory, when the group’s very survival depended on everyone pulling their weight. As a result, Kjeldgaard-Christiansen argues that we are wired to be particularly disgusted by cheating free-riders – to the point that we may feel justified in removing them from the group, or even killing them.

However, there are degrees of villainy and the most dangerous and despised people are those who are not only free riders and cheats, but psychopathic sadists, who perform callous acts for sheer pleasure. Sure enough, previous studies have shown that it is people matching this description whom we consider to be truly evil (since there is no other way to excuse or explain their immorality) and therefore deserving of the harshest punishments. Crucially, Kjeldgaard-Christiansen argues that a wicked laugh offers one of the clearest signs that a villain harbours such evil, gaining “open and candid enjoyment” from others’ suffering – moreover, fiction writers know this intuitively, time and again using the malevolent cackle to identify their darkest characters. 

Part of the power of the evil laugh comes from its salience, Kjeldgaard-Christiansen says: it is both highly visual and vocal (as the close-up of Jafar beautifully demonstrates), and its staccato rhythm can be particularly piercing. What’s more, laughs are hard to fake: a genuine, involuntary laugh relies on the rapid oscillation of the “intrinsic laryngeal muscles”, movements that appear difficult to produce voluntarily without sounding artificial. As a result, a laugh is generally a reliable social signal of someone’s reaction to an event, meaning that we fully trust what we are hearing. Unlike dialogue – even the kind found in a children’s film – a sadistic or malevolent laugh leaves little room for ambiguity, so there can be little doubt about the villain’s true motives.

Such laughs are also particularly chilling because they run counter to the usual pro-social function of laughter – the way it arises spontaneously during friendly chats, for example, serving to cement social bonds. 

There are practical reasons too for the ubiquity of the evil laugh in children’s animations and early video games, Kjeldgaard-Christiansen explains. The crude graphics of the first Super Mario or Kung Fu games for Nintendo, say, meant it was very hard to evoke an emotional response in the player – but equipping the villain with an evil laugh helped to create some kind of moral conflict between good and evil that motivated the player to don their cape and beat the bad guys. “This is the only communicative gesture afforded to these vaguely anthropomorphic, pixelated opponents, and it does the job,” he notes. 

There are limits to the utility of the evil laugh in story-telling, though. Kjeldgaard-Christiansen admits that its crude power would be destructive in more complex narratives, since the display of pleasure at others’ expense would prevent viewers from looking for more subtle motivations, or for the role of context and circumstance in a character’s behaviour. But for stories dealing in black-and-white morality, such as those aimed at younger viewers who have not yet developed a nuanced understanding of the world, its potential to thrill is second to none.

Kjeldgaard-Christiansen’s article is certainly one of the most entertaining papers I have read in a long time [get open access here], and his psychological theories continue to be thought-provoking. It would be fun to see more experimental research on this subject – comparing the acoustic properties of laughs, for instance, to find out which sounds the most evil. But in my mind, it will always be Jafar’s.

Social Signals and Antisocial Essences: The Function of Evil Laughter in Popular Culture

Post written by David Robson (@d_a_robson) for the BPS Research Digest. His first book, The Intelligence Trap, will be published by Hodder & Stoughton (UK)/WW Norton (USA) in 2019.

Article source: http://feedproxy.google.com/~r/BpsResearchDigest/~3/MbGh35K94Xc/