The Federal Department of Education recently published the first report in its so-called independent evaluation of the Government’s school autonomy program. The report purports to be a literature review of academic research on school autonomy, but it is a ‘dud’ of a review.
The review was written by Professor Brian Caldwell as part of the evaluation being carried out by the Australian Council for Educational Research. Professor Caldwell is a long-standing advocate of school autonomy, so the review is hardly independent, and it shows. It relies heavily on somewhat dated research, much of which is ambiguous about the impact of school autonomy.
Professor Caldwell says that the weight of evidence since the turn of the century shows a positive impact on learning from school autonomy and that the view that school autonomy has no impact on student learning is a myth [p. 33]. However, the only evidence he cites consists of two studies using the PISA 2003 results, an OECD study of the 2006 results, a recent World Bank study, and a 1998 study by Caldwell himself and others on increased school autonomy in Victoria in the mid-1990s.
Caldwell ignores the latest PISA study on school autonomy and ignores a large number of recent studies from several countries. The weight of evidence from these studies is that school autonomy in staffing and budgeting has little to no effect on student outcomes.
The 2009 PISA study reports results from two types of analysis. One set of results is from a cross-country correlation analysis of education outcomes in reading and school autonomy in resource allocation (budgets and staffing) and curriculum and assessment. The other set is from multi-level regression analyses of the relationship between student performance and school and student characteristics within each participating country. Both analyses take account of differences in the socio-economic background of students and schools.
The cross-country correlation analysis found that education systems that provide schools with greater autonomy in selecting teachers and for school budgets do not achieve higher results in reading. The study concluded that “…greater responsibility in managing resources appears to be unrelated to a school system’s overall student performance” [p.41] and that “…school autonomy in resource allocation is not related to performance at the system level” [Note 7, p. 86]. In contrast, greater responsibility for curriculum and assessment was found to be positively related to student achievement.
The within-country analysis shows that in the vast majority of countries participating in PISA, including Australia, there was no statistically significant difference in student achievement between schools with a high degree of autonomy over hiring teachers and the school budget and schools with less autonomy over these decisions [Table IV.2.4c, p. 169]. A positive relationship was found in only four of the 65 countries participating in PISA 2009.
Cross-country correlation analysis of the PISA 2009 data shows that the combination of greater school autonomy and the publication of individual school results is associated with higher student achievement. However, the impact is trivial – amounting to only 2.6 points on the PISA scale where one year’s learning is equivalent to 35-40 points.
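To put that figure in perspective, a back-of-the-envelope calculation (using only the numbers above: a 2.6-point effect against the 35-40 points said to represent one year's learning, and assuming a school year of roughly 40 weeks) shows the effect amounts to only a few weeks of learning:

```python
# Illustrative arithmetic only: converts the reported 2.6-point effect
# into fractions of a school year, using the 35-40 points-per-year
# benchmark cited above and an assumed 40-week school year.
effect = 2.6                 # PISA points
weeks_per_year = 40          # assumption for illustration

for points_per_year in (35, 40):
    share = effect / points_per_year       # fraction of a year's learning
    weeks = share * weeks_per_year         # equivalent weeks of schooling
    print(f"{points_per_year} pts/year -> {share:.1%} of a year, "
          f"about {weeks:.1f} weeks")
```

On either benchmark, the association is equivalent to roughly three weeks of schooling or less, which underlines how trivial the effect is.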
The national report on Australia’s PISA 2009 results also shows virtually no difference in the correlation between school autonomy and student achievement in NSW, which has the lowest degree of autonomy of any jurisdiction, and Victoria, which has a high degree of autonomy [p. 274]. Moreover, it found no significant relationship between student performance and school autonomy in budgeting and staffing in any school sector – government, Catholic or Independent – even though government schools overall have significantly less autonomy than Independent schools.
New Zealand has the most decentralised school system in the OECD, with schools exercising full control over budgets and staffing. The head of research at the NZ Council for Educational Research says that there have not been any significant gains in overall student achievement, new approaches to learning, or greater equality of educational opportunity since this radical path was taken in 1989. Nor has there been any progress in reducing the number of low achievers or in closing the gaps between students from rich and poor families.
Charter schools in the United States are publicly funded independent schools that have operated for over 20 years. The most sophisticated studies show that some charter schools do better than traditional public schools, some do no better and some do worse. The major national studies show that, overall, charter schools do no better than traditional schools.
The ‘gold standard’ national study of charter schools, published by the Center for Research on Education Outcomes (CREDO) at Stanford University in 2009, found that the gains in maths results for nearly half of all charter schools (46%) were no different from those in comparable traditional public schools, while over one-third (37%) of charter schools had significantly worse results. Only 17% of charter schools had significantly higher maths results than comparable traditional public schools.
Another national study published by the same Center this year found that only 19% of charter schools performed better in reading and mathematics than competing traditional public schools in their local area.
The overview of a recent special issue of the journal Economics of Education Review on the charter school experience concluded:
… the existing literature is inconclusive about the aggregate effect charter schools have on student achievement. Some studies in some locations find charters outperform traditional public schools, some find they are no different than the traditional ones, and some find they perform worse. [p. 209]
The evidence from studies of school autonomy in other countries is similarly mixed. Studies of free schools in Sweden show inconsistent results, foundation schools in England have not improved student achievement, and the evidence on academies is equivocal.
Caldwell ignores all this evidence. Instead, he cites studies using the 2006 and 2003 PISA results.
The 2006 study used a cross-country multi-level regression analysis of the relationship between student performance and a range of school factors, including school autonomy, taking account of the socio-economic background of students. It found no significant association between student performance in science and the extent of autonomy individual schools have over educational content, staffing and budgeting. However, it did find a strong association at the country level between student performance and school autonomy in educational content and budgeting; that is, education systems that give schools more autonomy in these areas achieve higher results. This discrepancy between the school-level and system-level results is not explored in the study.
This 2006 PISA study was extensively criticised in a technical review by a RAND Corporation statistician, Laura Hamilton, at a high-level international conference of technical experts sponsored by the US National Center for Education Statistics in 2009.
Hamilton said that the study “…could lead readers to make conclusions that are not warranted based on the data and analyses used” [p. 7]. In particular, multi-level regression analyses based on cross-country data “…do not support the kinds of causal inferences that most readers would like to make” because “a host of unmeasured factors could influence the magnitude and even the direction of an observed relationship between achievement and a school or system characteristic” [p. 10]. Further:
The fact that PISA does not gather longitudinal achievement data for individual students makes it especially difficult to parse out important confounders. The possibility of unmeasured influences exists at the individual student, school, and country levels, which complicates the interpretation of relationships. [p. 11]
Hamilton also says that linking achievement in the PISA tests with school and system characteristics, such as school autonomy, is hindered by the fact that the tests measure cumulative knowledge and skill development that occurs over many years.
We would expect a specific characteristic of the school or system measured at one point in time to exert a limited influence on students’ test scores which reflect knowledge and skills gained over many years and across school-based and outside-of-school contexts. [p. 14]
These and other criticisms of the PISA cross-country regression analysis “…raise doubts about the extent to which PISA can be used to support causal inferences about education policies and practices” [p. 17].
Indeed, they may have been a factor in the decision of the PISA panel to use a simple cross-country correlation analysis for the 2009 study, which notes that there is little more to be gained from the sophisticated cross-country modelling used in the 2006 study [p. 30]. The other difference between the two studies is that the more recent one analyses the relationships between student performance and student and school characteristics within each country, using two-level regression models (student and school levels). While within-country regression analysis does not overcome Hamilton’s criticisms, it does remove one major source of unobserved and confounding factors, namely those operating between countries.
It should be noted that the above criticisms also apply to the cross-country analyses of the PISA 2003 results cited by Caldwell. The other studies he cites are no more convincing.
The World Bank review itself says that there is no convincing evidence of the effects of the school autonomy reforms in Australia, New Zealand and the UK on student achievement [p. 11]. The review focuses on studies of school autonomy in developing countries and notes that few rigorous studies are available and that the evidence of impact on student test scores is mixed [pp. 12, 103, 106, 131].
The other study cited is one by Caldwell and colleagues at the University of Melbourne, which explored the links between school autonomy and learning in Victoria over the five years from 1994 to 1998 following the Schools of the Future program introduced by the Kennett Government. As Caldwell acknowledges, that study concluded that “…decentralization of decision-making in planning and resource allocation does not, of and in itself, result in improved learning for students” [p. 14].
The website of the Federal Department of Education claims that the review summarises the key academic research on school autonomy. It is a ludicrous claim. The review is demonstrably inadequate: it ignores a large number of recent studies of school autonomy in several countries, and it ignores the latest PISA study of the relationship between student achievement and school autonomy. The weight of evidence from these studies is that greater school autonomy has little impact on student results. Even the few, somewhat dated, studies cited in the review indicate that the evidence is mixed.