In a column in The Australian last Monday, Jennifer Buckingham from the Centre for Independent Studies claimed that the expansion of Independent Public Schools will benefit the students who have the most to gain. The basis for her claim is that independent public schools in the United States, called charter schools, achieve much higher results for low-income and minority students than do traditional public schools. However, her evidence fails to stack up, and she grossly exaggerates the differences in results.
Ms. Buckingham says that research published by the Center for Research on Education Outcomes (CREDO) at Stanford University shows that the results for Black, Hispanic and low-income students in charter schools are “strikingly different” from those in traditional public schools, with charter school advantages of between 14 and 50 days of school. These appear to be large advantages. However, the CREDO national charter school study published last year actually shows that the differences in results for these students are so trivial as to be meaningless. Let’s take them one by one.
The reading results for Black students in charter schools in 2013 were 0.01 standard deviations above those for Black students in traditional public schools, and there was no difference between the two groups in mathematics. Hispanic students in charter schools were 0.01 standard deviations below those in traditional public schools in both reading and mathematics, while low-income students in charter schools were 0.02 standard deviations above in reading and 0.03 standard deviations above in mathematics.
The standard deviation is a measure of variation, and differences in results between groups are often expressed in standard deviation units, known as effect sizes. In his book Visible Learning, Professor John Hattie categorized effect sizes of around 0.2 standard deviations as small, around 0.4 as medium and 0.6 or more as large.
By these standards, the effect sizes found by the CREDO research are trivial. A review of the national charter school study by the National Education Policy Center at the University of Colorado shows that a difference of 0.01 standard deviations means that only a quarter of a hundredth of one per cent (0.000025) of the variation in test scores can be attributed to a student being in a charter school rather than a traditional public school. Such effect sizes “are so close to zero as to be regarded as effectively zero” [p.7].
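The 0.000025 figure can be reconstructed as a rough sketch (the review may use a more exact formula) by applying the standard approximation that, for small effects and groups of roughly equal size, a standardized mean difference d corresponds to a correlation of about d/2, whose square is the share of variation explained:

\[
r \approx \frac{d}{2} = \frac{0.01}{2} = 0.005, \qquad r^{2} = (0.005)^{2} = 0.000025 = 0.0025\%.
\]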
To put this in perspective, consider the results of the US National Assessment of Educational Progress (NAEP) for 2011. The national 8th grade reading average was 268 with a standard deviation of 34. This means that a gain of 0.01 standard deviations is about one-third of a single point (0.34) on the NAEP scale, while a gain of 0.03 is approximately one point. As a Senior Fellow at the prestigious Brookings Institution stated in a review of the CREDO study: “I don’t know a single analyst who gets excited over a one point change in NAEP scores”.
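The conversion is simple arithmetic on the NAEP figures just cited:

\[
0.01 \times 34 = 0.34 \text{ points}, \qquad 0.03 \times 34 = 1.02 \approx 1 \text{ point}.
\]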
Ms. Buckingham’s claim of “striking differences” between the results of disadvantaged students in charter schools and traditional public schools is thus much ado about nothing. CREDO’s national study shows that charter schools are basically indistinguishable from traditional public schools in terms of their impact on the test performance of Black, Hispanic and low-income students. Similar findings are made in a recent CREDO study of charter schools in California.
Unfortunately, Ms. Buckingham has been misled by the controversial method CREDO used to translate differences in test scores, measured in standard deviations, into equivalent days of learning. Tiny effect sizes are mysteriously converted into seemingly impressive gains in learning days. The method has been described as “problematic” by the National Education Policy Center and “iffy” by another think tank. One highly regarded US education commentator who believes that many charter schools have positive benefits says that the CREDO conversion overstates the effect.
A major problem with the CREDO presentation is that using learning days as the measure of gains in student test scores tends to exaggerate the perceived size of the gains. As the Brookings Institution review shows, a gain of 0.03 standard deviations amounts to about one point on the NAEP scale, yet the CREDO conversion table published in several recent studies translates this tiny gain into 21 days of learning (that is, a month of schooling), which strikes most people as a very large gain. Other studies prefer to report effect sizes as the change in percentile rank for a student attending a charter school, an approach that does not create the same exaggerated impression of change.
CREDO does not explain how the test score differences have been translated into learning days. There is no justification or explanation of the measure in either the study itself or in the technical appendix to the report. It does seem odd that the methodology behind the table is not disclosed.
CREDO’s conversion of very small effect sizes into several weeks and months of learning gains must be treated with a great deal of scepticism. As the review by the National Education Policy Center states: “Without a clear (or indeed any) rationale for this choice, the ‘days of learning’ metric cannot be regarded as credible” [p.6].
At least CREDO does urge caution in interpreting its approach. Its national charter school report states that “…the days of learning are only an estimate and should be used as general guide rather than as empirical transformations” [p.13]. Its recent report on charter schools in California issued a stronger caveat: “Transforming the results into more accessible units is challenging and can be done only imprecisely” [p.15]. It says that the estimates of gains in days of learning “should be treated cautiously”.
Ms. Buckingham ignores these calls for caution in her desperation to find support for independent public schools. Instead, she inflates trivial differences in results between charter schools and traditional public schools into striking differences, and disregards CREDO’s own statistical analysis, which shows no real advantage for disadvantaged students in charter schools compared to traditional public schools. There is thus no basis to consider that charter schools provide a model for improving achievement amongst disadvantaged students in Australia.
Trevor Cobbold