When science is wrong: Abstinence Education
Statistical methodology is an interesting animal. Ideally one wants to follow the most scientific approach possible to an experiment in order to get the best results. Double-blind treatment groups, randomly assigned from similar samples, and all that jazz.
Unfortunately, methodology and heuristics sometimes cause us to set aside our critical thinking caps and wander through the land of “this is how things are done.”
Today I was pointed to a study about the effects of Abstinence Only Education. (I think as an offshoot of the whole Palin debate, witch hunt, revetting, what the hell? thing (whatever you want to call it) that is going on right now.)
Anyway, the study claimed:
The study found that youth in the four evaluated programs were no more likely than youth not in the programs to have abstained from sex in the four to six years after they began participating in the study. Youth in both groups who reported having had sex also had similar numbers of sexual partners and had initiated sex at the same average age.
Contrary to concerns raised by some critics of federal funding for abstinence education, however, youth in the abstinence education programs were no more likely to have engaged in unprotected sex than youth who did not participate in the programs.
Touted as the “gold standard,” this study (though it had a small sample and looked at only 4 of 900 existing programs) meticulously followed procedures that on the surface look like great experimental methods. In fact, they are ready to tell us all about how rigorous their methods were:
The study used the most rigorous, scientifically based approach to measure the impacts of the programs. Much like a clinical trial in medicine, this approach compares outcomes for two statistically equivalent groups—a program group and a control group—created by random assignment (similar to a lottery). Youth in the program group were eligible to receive the abstinence education program services, while those in the control group were not, and received only the usual health, family life, and sex education services available in their schools and communities. When coupled with sufficiently large sample sizes, longitudinal surveys conducted by independent data collectors, and appropriate statistical methods, this design is able to produce highly credible estimates of the impacts of the programs being studied.
The emphasis here is mine. You see, “statistically equivalent” set off some big alarm bells for me. Typically it is difficult to get true equivalence when you are looking at different geographical areas, even within the same city. At best you might be able to use comparable groups. Reading on, one discovers the following:
Youth were enrolled in the study sample over three consecutive school years, from fall 1999 through fall 2001, and randomly assigned within schools to either the program or the control group.
The creators of this study apparently assumed that kids would not swap information about something as mundane as sex education. They neglected to consider that a student could even have a potential sex partner in the OTHER GROUP!
By providing comprehensive sex education to HALF a class, you are in effect providing it to all of them.
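That contamination argument is easy to see in a toy simulation. Everything below is hypothetical (the effect sizes, the spillover probability, the “information score” outcome); it is just a sketch of how within-school randomization can wash out a measured difference between groups when students share what they learn:

```python
import random

random.seed(42)

def simulate(n_schools=50, students_per_school=40, spillover=0.0):
    """Toy model: an education program raises a student's 'information'
    score from 0 to 1.  With spillover, each control student in the
    same school has some probability of picking up the information
    from treated classmates anyway.  Illustrative numbers only."""
    treat_scores, control_scores = [], []
    for _ in range(n_schools):
        students = list(range(students_per_school))
        random.shuffle(students)
        treated = set(students[: students_per_school // 2])
        for s in students:
            info = 1.0 if s in treated else 0.0
            # Contamination: a control student may learn the material
            # from treated peers in the same school.
            if s not in treated and random.random() < spillover:
                info = 1.0
            score = info + random.gauss(0, 0.1)  # noisy outcome measure
            (treat_scores if s in treated else control_scores).append(score)
    # Measured program effect = difference in group means.
    return (sum(treat_scores) / len(treat_scores)
            - sum(control_scores) / len(control_scores))

clean = simulate(spillover=0.0)  # schools fully isolated: true effect shows
mixed = simulate(spillover=0.8)  # heavy within-school information sharing

print(f"measured effect, no spillover:  {clean:.2f}")
print(f"measured effect, 80% spillover: {mixed:.2f}")
```

With no spillover the measured effect is close to the true effect; with heavy within-school sharing the two groups look nearly the same, even though the program itself is exactly as effective as before. That is the study-design flaw in miniature: “no difference between groups” can mean “both groups got the information.”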
They contaminated their own freaking study because they followed the methodology instead of USING THEIR BRAINS. Further criticism here. So yeah, watch out for anyone holding this study up as “evidence” that abstinence only education is just the same as comprehensive sex ed. It is only “the same” if comprehensive sex ed is being taught in the exact same school.
(Ideal solution for parents who want their kids to have abstinence education? Teach both types of classes and allow the parents to choose which one their kids are enrolled in. That way the information is still available within the school community, but parents who want their kids brainwashed to protect their virginity can have them brainwashed.)