
Bullying Interventions Increase Bullying (Or Do They?)

Recent study erroneously concludes that programs teach bullying

November signaled the close of Bullying Prevention Awareness Month. And wouldn’t you know, a study about bullying prevention programs went viral (see the original study here).

The authors and some news outlets (e.g., CBS, USNews) have taken the study to mean that not only do bullying prevention programs not work, but that they make bullying WORSE by (their best guess) actually teaching children bullying tactics. If you read the original study, however, you will quickly find that it suggests no such thing.

Rest assured, gentle reader, that studies published in journals are supposed to undergo quality checks. Namely, every study we (faculty at academic institutions, for example) write undergoes a rigorously painful “peer review”. Researchers from elsewhere are tasked to go over each and every one of our written words to ensure a) the study is sound, b) results are correctly interpreted, and c) the conclusions logically follow from the results. These high standards are set in place in part so that the public is not misled about the applications and implications of our scientific enterprise.

Yet in this case I fear the public has been misled. Take, for example, the web headline of my local paper, “Schools with anti-bullying may see more bullying as a result, research shows”. Sadly, the research cited in the article did not show this at all.

The study in question is an exemplar of what is known as a ‘secondary analysis’. That is, the authors did not collect data (surveys from teachers and kids, for example) from local schools. Instead, they analyzed already collected data from a large national study. This is not problematic. Indeed, some of the finest work comes from such analyses or re-analyses. But it is useful to note that it appears the authors never set foot on school grounds to conduct the study. Thus, anything they say about the programs in the schools is pure speculation.

One of the key findings is a correlation of .046 between “the presence of a bullying intervention program” and “victimization by peers”. It is probably this number that stimulated the lion’s share of the discussion in journalistic treatments because it is easiest to understand.1

A correlation is a number that reflects the strength and direction of a linear relationship. Correlations range from 0 to 1.0 if the relationship is positive (both variables rise and fall together), or from 0 to -1.0 if the variables go in opposite directions. Ideally, we want to see a negative correlation between the presence of a program and victimization: as participation in a program goes up, victimization goes down. This would signal a successful program.

Instead, the relationship reported by the authors was positive; participation in bullying prevention programs was associated with increased victimization. On the surface, this suggests our programs are doing more harm than good.

In truth, their correlation is very nearly zero. And zero means there is no relationship between the two variables in question. The study (correctly) reports this value as ‘significant’ because, in a technical statistical sense, it is. With a very large sample (over 7,000), however, it is not unusual to find very small but ‘significant’ effects. In these cases, follow-up tests are generally performed to look at ‘the size of the effect’. We teach this in introductory statistics to college sophomores.
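
For readers who want to see the arithmetic, here is a minimal sketch in Python. Only the correlation (.046) and the approximate sample size (over 7,000) come from the study; everything else is illustrative and is not the authors’ analysis.

```python
# Illustrative sketch (not the authors' analysis): with a sample this large,
# even a near-zero correlation clears the conventional p < .05 bar.
import math
from scipy import stats

r = 0.046   # correlation reported in the study
n = 7000    # approximate sample size ("over 7,000")

# Significance test for a Pearson correlation: t = r * sqrt((n - 2) / (1 - r^2))
t = r * math.sqrt((n - 2) / (1 - r**2))
p = 2 * stats.t.sf(abs(t), df=n - 2)  # two-tailed p-value

print(f"t = {t:.2f}, p = {p:.4f}")              # p falls well below .05
print(f"variance explained: r^2 = {r**2:.4%}")  # roughly 0.2% of the variance
```

The p-value clears the conventional threshold, yet the correlation accounts for roughly two tenths of one percent of the variance in victimization. That is the ‘size of the effect’ in question.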

But more importantly for our present purposes, the authors draw two incorrect but seductive conclusions from their results. The first conclusion (“…students attending schools with bullying prevention programs were more likely to have experienced peer victimization”, p. 7) is simply not valid. The correct conclusion would be, “…students attending schools with bullying prevention programs were more likely to have reported peer victimization”. The difference here hinges on a single word likely to go unnoticed by the untrained eye. If students are taught what bullying and victimization are at the outset of any prevention program, and are surrounded by intervention strategies that remind them every day that they need to be on the lookout for these things, they are more likely to report them because they know how to identify them. Interactions that were once overlooked by children and teachers are now noted. Education doesn’t necessarily cause the behavior; it causes the identification of the behavior. That is a win for the intervention. It is why women are encouraged to do breast self-exams: to find developing tumors that would be overlooked without education.

Second, from the above, one cannot then conclude, “bullying prevention had a negative effect on peer victimization” (p. 8) as the authors did. This statement asserts cause. And as nearly every second-year college student can tell you, “correlation does not imply causation”. The number of churches in a community may be positively correlated with the amount of violence in the community. Would it be correct to conclude that churches have a negative effect on violence (i.e., cause violence)? Of course not! Something else is causing both variables; here, the size of the city is the third variable driving both churches and violence.

Could the intervention-victimization link be similarly caused by a third variable? Yes. The presence of a bullying problem in the school in the first place.

Schools with bullying problems invest time, money, and energy in bullying prevention programs. Schools with little problem don’t. In the end, the schools with prevention programs have higher incidences of victimization than schools that don’t have such programs. The conclusion that the intervention caused the problem cannot be drawn from the present study. Giving weight to this alternative interpretation is the fact that all of the programs at the schools were correlated; that is, a school with bullying prevention was also more likely to have Safe Passage and Gang Prevention programs (Table 2).
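
To make the third-variable point concrete, here is a small toy simulation in Python. The numbers are made up; this is not the study’s data, just a sketch of how the confound alone can produce a positive correlation even when the program does nothing.

```python
# Toy simulation (hypothetical numbers, not the study's data): a school's
# pre-existing bullying problem drives BOTH program adoption AND reported
# victimization, so the program-victimization correlation comes out positive
# even though the program itself has no effect here.
import numpy as np

rng = np.random.default_rng(0)
n_schools = 500

# Third variable: severity of each school's pre-existing bullying problem
problem = rng.normal(size=n_schools)

# Schools with bigger problems are more likely to adopt a program
has_program = (problem + rng.normal(size=n_schools)) > 0

# Reported victimization tracks the underlying problem, not the program
victimization = problem + rng.normal(size=n_schools)

r = np.corrcoef(has_program.astype(float), victimization)[0, 1]
print(f"program-victimization correlation: {r:.2f}")  # positive, by confounding alone
```

In this sketch the intervention is inert by construction, yet it still ends up positively correlated with victimization because the troubled schools are the ones that adopt it.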

And that is why it’s important to study the fine print. The authors end the paper by stating, “Future studies need to utilize a longitudinal design in investigating the temporal ordering between the preventive measures and peer victimization in schools.” This is probably meaningless to a non-statistician. To specialists, however, this means, “we cannot legitimately make causal statements without actually studying the phenomenon in ways that illuminate cause”. Sadly, the titles of articles emerging online and in our local papers (e.g., “Schools with anti-bullying may see more bullying as a result, research shows”) give too much weight to the authors’ potentially erroneous conclusions that they themselves say they cannot legitimately draw. A more accurate title might read, “Schools with anti-bullying may see more bullying reported, research shows”. But admittedly, that is not nearly as interesting.

Additionally, it is not o.k. to measure ‘bullying prevention program’ with a yes-no question, as the authors did (Does your school have a bullying program?). What does that question even mean to the respondent? Did the schools really put up a poster and call it a program, as the authors’ utterances to the media suggest? Do teachers really give children lists of things not to do and call it bullying prevention? If so (and really, how could the authors know?), these are not considered sufficient interventions by anyone in the field. Instead, evidence-based bullying interventions do recognize the complexity of the issue and focus on the whole school culture, as the authors helpfully suggest they should. But these different types of “programs” appear to have been lumped together by the authors for inexplicable reasons. The results are misleading at best, dangerous at worst.

So, is it possible that some aspects of intervention programs cause increases in bullying? Yes, it is possible (see Ttofi & Farrington, 2011 below). Has the present study helpfully shown this? Not at all.

For an excellent handling of the effectiveness/ineffectiveness of bullying intervention programs and the elements that comprise them, please see: Ttofi, M. M., & Farrington, D. P. (2011). Effectiveness of school-based programs to reduce bullying: A systematic and meta-analytic review. Journal of Experimental Criminology, 7(1), 27-56.

1 For the statistically minded: The authors violated best statistical practices from the outset. First, one ought never dichotomize what is originally a continuous variable (e.g., victimization). Second, the table of correlations between 'individual-level and school-level covariates' is for the most part meaningless. And third, the high intercorrelations among the programs lead to problems interpreting the regressions that follow, especially since gang prevention programs appear to be related to victimization in the anticipated direction (Table 3). That is to say, the positive beta weight for bullying prevention represents the variance in victimization accounted for after the effects (in negative directions) associated with the other programs are removed. How can we clearly interpret this? Finally, the authors repeatedly claim that bullying prevention programs were "negatively related to peer victimization" (p. 7), when in fact they showed they are positively related.
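
To make that intercorrelation problem concrete, here is a toy illustration in Python with entirely made-up numbers (not the study's data): when two "program" variables are highly correlated, one of them can carry a negative zero-order correlation with the outcome and still receive a positive regression weight.

```python
# Toy illustration (hypothetical numbers, not the study's data) of why betas
# from highly intercorrelated predictors are hard to interpret: a predictor's
# simple correlation with the outcome can be negative while its regression
# weight comes out positive once the other program is in the model.
import numpy as np

rng = np.random.default_rng(1)
n = 5000

gang_prev = rng.normal(size=n)                            # "gang prevention"
bully_prev = 0.8 * gang_prev + 0.6 * rng.normal(size=n)   # correlated "bullying prevention"
victimization = -0.5 * gang_prev + 0.2 * bully_prev + rng.normal(size=n)

# Zero-order correlation of bullying prevention with victimization
r = np.corrcoef(bully_prev, victimization)[0, 1]

# Regression weights when both programs enter the model together
X = np.column_stack([np.ones(n), gang_prev, bully_prev])
betas, *_ = np.linalg.lstsq(X, victimization, rcond=None)

print(f"zero-order r (bullying prevention): {r:.2f}")              # negative
print(f"regression weight (bullying prevention): {betas[2]:.2f}")  # positive
```

The sign of the weight says little on its own; it reflects what is left over after the other, overlapping programs have absorbed their (negative) share of the variance.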

The opinions expressed in the present article are solely the views of the author and do not reflect the views and opinions of any employer of the author.
