Test-based accountability has been a key education policy in most OECD countries, including Australia, for many years. It was believed that publication of school test results would put pressure on schools and teachers to increase student achievement. A new OECD study shows this policy is an abject failure. It found no evidence that test-based accountability has affected education outcomes in higher income countries.
Our results suggest that across most OECD countries test-based accountability does not relate to academic achievement, nor has a substantial impact on educational inequality for the subject of mathematics. With some small variations we achieved similar results for the subjects of reading and science. [p. 25]
The findings have important implications for policy makers in higher income countries:
…the intensive competition across schools that test-based accountability promotes could be disruptive in some educational contexts, producing unintended consequences in school communities. The fact that these reforms do not seem to render the expected results implies important questions about the convenience of these policies. [p. 25]
The study carried out a cross-country statistical analysis based on four cycles of PISA data from 2006 to 2015, covering 63 countries, including 42 higher income countries. It measured test-based accountability by the proportion of students in each country attending schools whose test results were made public. The sample included Australia, where NAPLAN results for all schools except K-2, senior secondary and special schools have been published on the My School website since 2010.
The study also found no association between school autonomy, accountability, and educational outcomes. Accountability measures made no difference to school performance, regardless of the level of school autonomy countries adopted. Nor did the study find any relationship between school autonomy and education outcomes:
…we do not find any association between school autonomy in curriculum and assessment or teacher management and academic results. [p. 25]
As the study notes, this finding contradicts that of recent studies using PISA data.
The study did find that school accountability measures had positive effects on school performance in mathematics and science in low and medium income countries. However, the results did not hold up after taking account of the socio-economic status of students. Moreover, the improvement in school performance was offset by increased educational inequality.
This study raises serious questions for Australian education policy makers about the value of continuing to publish school results on the My School website. My School was launched with great fanfare and promises of better performance by the then Minister for Education, Julia Gillard, who said the new era of transparency “will drive vital improvements in school education” and “deliver an Education Revolution to all Australian schools”. Far from delivering a revolution, NAPLAN results have largely stagnated since then across the tested subjects and Year levels.
The publication of school results has been an unmitigated disaster. Instead of forcing improvements in school performance, it has harmed the education of a generation of Australian students in many ways. It has narrowed the curriculum, encouraged teaching to the test, unfairly stigmatised disadvantaged schools and their students, made it more difficult for disadvantaged schools to retain high quality teachers, discouraged co-operation and collaboration between schools and teachers, and increased social segregation and inequity in education.
Published school results are also an inaccurate and misleading measure of school quality. This is because of differences in school composition and because many other factors outside the control of schools affect school results, including student absenteeism, student turnover, funding, parent involvement in learning and the proportion of students receiving private tutoring. There are also significant statistical errors in school test results, especially in the case of smaller schools.
The new OECD study indicates that Australia should stop the publication of the results of individual schools to end the harm they do.
Accountability of education systems and governments for school results can be ensured without publishing the results of individual schools. It can be done by publishing the number of schools whose average score falls within different test score ranges. This information could be summarised in a bar chart for each strand tested in each Year level for both literacy and numeracy. This would provide enough information for the public to hold governments and education departments accountable for improving school results across the system.
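The banded summary proposed above amounts to a simple histogram of school averages. A minimal sketch of that tally is below; the band edges and school scores are hypothetical illustrations, not real NAPLAN figures, since the article specifies neither.

```python
def schools_per_band(avg_scores, band_edges):
    """Count how many schools have an average score in each [lo, hi) band."""
    counts = [0] * (len(band_edges) - 1)
    for score in avg_scores:
        for i in range(len(band_edges) - 1):
            if band_edges[i] <= score < band_edges[i + 1]:
                counts[i] += 1
                break
    return counts

# Hypothetical average scores for ten schools on one strand of one Year level.
scores = [412, 455, 478, 390, 505, 467, 433, 521, 449, 498]
bands = [350, 400, 450, 500, 550]  # four 50-point score ranges

print(schools_per_band(scores, bands))  # → [1, 3, 4, 2]
```

The resulting counts are what would feed a bar chart per strand and Year level: one bar per score range, showing how many schools sit in each band, without identifying any individual school.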