Last week, the UK Government acknowledged that the publication of school results and school league tables creates perverse incentives to ignore high- and low-achieving students. In the effort to improve or maintain rankings, schools often concentrate on lifting the performance of students just below benchmark cut-off scores and neglect other students.
The Government announced new performance measures designed to encourage schools to focus on all students instead of concentrating on borderline students. From this year, school performance tables will report the variation in performance of low-attaining students, high-attaining students and those in the middle.
The announcement follows the recommendation of an independent report in March to introduce a performance indicator that focuses on the whole distribution of results within a school, including those at the top and bottom ends of the distribution. The report found that school performance tables have created perverse incentives for schools to ignore their least successful and most successful students. It said that if a single measure of school performance is used it invites “gaming or worse” and will become “corrupted”.
Many studies in England and the United States have observed this effect. Improving the results of students just below benchmarks is seen as the easiest way to increase a school’s average score or the proportion of students achieving a benchmark.
There is evidence that schools in Australia have responded to the higher stakes associated with NAPLAN in the same way. A survey published last year by the Australian Primary Principals Association found that schools allocated more resources to students just below the benchmarks, while lower-achieving students received less attention for the first five months of the year, until the NAPLAN tests had been completed.
Booher-Jennings, J. 2005. Below the Bubble: “Educational Triage” and the Texas Accountability System. American Educational Research Journal, 42 (2): 231–268.
Booher-Jennings, J. 2006. Rationing Education in the Age of Accountability. Phi Delta Kappan, 87 (10): 756–761.
Burgess, S.; Propper, C.; Slater, H. & Wilson, D. 2005. Who Wins and Who Loses from School Accountability? The Distribution of Educational Gain in English Secondary Schools. Working Paper No. 05/128, Centre for Market and Public Organisation, University of Bristol.
Hamilton, L.S. & Berends, M. 2006. Instructional Practices Related to Standards and Assessments. Working Paper WR-374-EDU. RAND Corporation, Santa Monica, April.
Hamilton, L.S.; Stecher, B.M.; Marsh, J.A.; McCombs, J.S.; Robyn, A.; Russell, J.; Naftel, S. & Barney, H. 2007. Standards-based Accountability under No Child Left Behind: Experiences of Teachers and Administrators in Three States. RAND Corporation, Santa Monica.
Krieg, J.M. 2008. Are Students Left Behind? The Distributional Effects of the No Child Left Behind Act. Education Finance and Policy, 3 (2): 250–281.
Neal, D. & Schanzenbach, D. 2010. Left Behind By Design: Proficiency Counts and Test-Based Accountability. Review of Economics and Statistics, 92 (2): 263–283.
Wolf, A. 2011. Review of Vocational Education. UK Department for Education.