Swedish-Style School Privatisation for England

Wednesday May 26, 2010

The ideologues of choice and competition in schooling are at it again. Their latest miracle cure is Sweden, despite the fact that its next-door neighbour Finland has the best school results in the world and does not follow free-market ideology in education.

The new UK coalition government has announced that it will allow parents, teachers and charities to set up their own schools in England along the “free schools” model of Sweden. These schools will be privately run, but publicly funded. The plan also opens up state schools to be operated by for-profit companies.

While English law prohibits commercial operators from taking over schools, they can provide teaching and other services to the new schools, and there is nothing to stop school governors inviting commercial firms to operate state schools. Indeed, the US company Edison already operates one school in north London.

Several for-profit companies are already lining up to take advantage of the scheme. Global Education Management Systems (Gems), a company based in the United Arab Emirates that already runs 12 UK private schools, aspires to run state-funded schools in England. Anders Hultin, chief executive of Gems, said: “We are exploring opportunities right now, supporting groups of parents. That’s a natural starting point.” [The Guardian, 25 May]

Gems is not alone. Edison, the largest provider of state-funded private schools in the US, envisages running several academies. Kunskapsskolan, which runs 30 state-funded schools in Sweden, plans to sponsor two new academies, in Richmond, south London, and Suffolk.

The new Government’s plan draws on the Swedish “free schools” model introduced in the early 1990s which allowed new schools to be set up that are independent of government control. A variety of educational providers stepped in, ranging from non-profit co-operatives and religious groups to for-profit corporations. These organisations are now running schools funded with public money through a voucher system.

The changes were introduced to provide greater choice for parents unable to afford the fees for Sweden’s small private school sector. They were based on the free-market principle that competition and choice would raise standards in all schools as government schools would be forced to improve student achievement so as to maintain enrolments.

“Free schools” now account for just over 40 per cent of the 945 upper secondary schools in Sweden and about 15 per cent of schools teaching younger children. About 20 per cent of Swedish students attend these independent schools in the upper secondary sector. In larger urban areas such as Stockholm, half of all students attend independent schools.

There is strong evidence that the Swedish “free school” model has failed to improve student achievement. Far from improving school results, student outcomes in Sweden have declined since the reforms to its education system were introduced. The performance of its 15-year-olds has slipped steadily in international comparisons, its measures of social mobility and equity have declined, and it now lags behind other Nordic countries, having led them for decades.

Per Thulberg, Director-General of the Swedish National Agency for Education, says that the schools have “not led to better results” in Sweden [BBC Newsnight, 8 February 2010]. He said that where these schools had improved their results, it was because the pupils they took had “better backgrounds” than those who attended the institutions the free schools had replaced.

This competition between schools that was one of the reasons for introducing the new schools has not led to better results. The lesson is that it’s not easy to find a way to continue school improvement… The students in the new schools have, in general, better standards, but it has to do with their parents and backgrounds. They come from well-educated families.

Thulberg says that Sweden cannot give the world good examples of how to run an excellent school system [The Financial Times, 23 May 2010]. “We have had increasing segregation and decreasing results, so we can’t say that increasing competition between schools has led to better results” [The Times Educational Supplement, 12 March 2010].

Jan-Eric Gustafsson, professor of education at Gothenburg University, says there has been a long decline in Sweden’s school results.

What we observe is a decline, especially in science and mathematics, and especially from 1995. It has been quite a steep decline, by as much as one grade, so that students in Grade 8 (age 13 to 14) in 1995 performed at the level of Grade 9 students now…
There is increasing variation between schools. We have widened the difference between students with educated parents and those whose parents are not so highly educated, and girls who were ahead of boys in the 1990s are now even further ahead. [The Times Educational Supplement, 12 March 2010]

Andreas Schleicher, head of education indicators at the Organisation for Economic Co-operation and Development (OECD), said that although Sweden is still above the OECD average on equity, “…the slide, though not dramatic, is noteworthy” [The Times Educational Supplement, 12 March 2010].

These conclusions are supported by academic studies. A report published by Sweden’s National Education Agency earlier this year states:

To a certain extent, trends in pupils’ attainments can be seen as a measure of how successful the various reforms have been, and, with a relatively high degree of certainty, we can conclude that grade point averages within several central subject areas have declined over time. The National Agency for Education’s own national evaluations, as well as international studies, present a broadly consistent picture of Swedish school pupils’ results in mathematics, natural sciences and reading comprehension in later years of compulsory school, showing a decline in performance since the beginning/middle of the 1990s. In the systematic review, a re-interpretation is made of the results of the international investigations conducted up to and including 1995. These new analyses indicate that levels of attainment in Swedish compulsory schools at the beginning of the 1990s were particularly high and that the decline is therefore more dramatic than earlier analyses have shown. The decline in performance of Swedish pupils has been from a previous high in international comparisons. [What Influences Educational Achievement in Swedish Schools, p. 16]

The report also shows that achievement gaps have increased:

…the variation in results between schools and between various groups of pupils has become more pronounced. From 1993, attainment differentials have increased between various schools, particularly from 1998, when a new grade-setting procedure was introduced for the first time. The analyses have also pointed to increasing differences in grades attained by various groups of pupils (differentiated by social background, gender, and ethnicity) but most particularly between groups differentiated by parents’ educational background. [p.17]

The report says it is difficult to separate the effects of educational changes from those of ongoing social change, and that it is also difficult to disentangle the effects of different reforms implemented at the same time. However, it concludes that:

…an increasing differentiation of levels of attainment coincides with comprehensive changes in the Swedish school system that have occurred since the beginning of the 1990s. [p.18]

Recent academic studies concur.

A study by academics at Stockholm University and Uppsala University published in 2008 found that the introduction of independent schools in Sweden did not increase student achievement in high schools and had no impact on years of schooling or subsequent university attainment [Bohlmark & Lindahl 2008: 22-23]. It found a very small positive short-term effect on the grade point average in Year 9, but this was not sustained over the medium or longer term.

The study concluded that despite the large increase in enrolments in independent schools in Sweden between 1995 and 2003, the findings of the study show that “school choice and competition is not a panacea for improving overall educational achievement” [p.23].

Another recent technical study concluded that there was no evidence that competition from independent schools had either a positive or negative effect on the performance of public schools [Waldo 2007: 246].

A more general review published earlier this year concluded:

…17 years after the reform was enacted, it seems that it has not managed to bring decisive changes (either positive or negative) into the educational system. Despite almost 1000 new independent schools and 150,000 students attending them and many others changing between public schools, despite an intensive ideological offensive for competition and a vivid public debate on the effects, most researchers and evaluators still claim that the outcomes in terms of segregation, costs, and achievement at the national level are ambiguous or at best visible but small. [Bunar 2010: 13]

It is truly amazing how advocates of markets in education persist in ignoring evidence that rebuts their claims. Their approach is based on blind faith: “we believe; therefore it is true”. Yet, the countries that have dallied with the market in education for the longest – England, Sweden and the United States – have all failed to achieve improvements in student achievement, while inequity in student outcomes and social segregation in schools have increased.

Trevor Cobbold

References

Bohlmark, Anders and Lindahl, Mikael 2008. Does School Privatization Improve Educational Achievement? Evidence from Sweden’s Voucher Reform. Discussion Paper No. 3691, Institute for the Study of Labor, Bonn, September.

Bunar, Nihad 2010. Choosing for Quality or Inequality: Current Perspectives on the Implementation of School Choice Policy in Sweden. Journal of Education Policy 25: 1-18.

Skolverket 2010. What Influences Educational Achievement in Swedish Schools? Stockholm.

Waldo, Staffan 2007. Efficiency in Swedish Public Education: Competition and Voter Monitoring. Education Economics 15 (2): 231-251.


Gillard Concedes Under the Collective Pressure of Teachers

The agreement of Minister Gillard to set up a working party to oversee the My School website has been the consequence of many teachers across the country making it clear that we are unhappy with the government’s intolerable treatment: silencing us and leaving us without any say in our professional conduct whatsoever.

Misclassification Errors in My School Comparisons: A Response to Professor McGaw

Monday April 26, 2010

The Chairman of the Australian Curriculum, Assessment and Reporting Authority, Professor Barry McGaw, has rejected criticisms made by Save Our Schools (SOS) that the “like school” comparisons on the My School website are biased in favour of private schools. But, his response effectively proves the SOS case and he fails to produce any evidence to support his claims. Indeed, the available evidence supports the SOS case.

The basis of the SOS criticism is that My School uses a methodology to determine so-called “like schools” which is based on the socio-economic status (SES) characteristics of small geographical areas rather than on the SES of families. The problem is that each area contains a mix of families and the higher income families more often choose private schools. For example, 55% of high income families choose private secondary schools compared to 26% of low income families.

This causes My School to systematically over-estimate the level of socio-economic disadvantage in private schools and under-estimate disadvantage in government schools. Consequently, My School compares the test results of supposedly similar private and government schools which may in fact have large differences in the SES composition of their enrolments.

Professor McGaw says the SOS argument relies on the geographical areas being heterogeneous [Australian Financial Review, 17-18 April 2010]. But, “they are not,” he says.

They are fairly small districts of a couple of hundred households and the evidence is that they are fairly homogeneous.

Here Professor McGaw contradicts himself and concedes the point by saying that the areas are only “fairly homogeneous”. The My School methodology depends on the districts being homogeneous, and “fairly homogeneous” is an admission that they are not fully so. Districts that are “fairly homogeneous” will include families with different socio-economic characteristics, some with a higher SES and some with a lower SES.

Once it is conceded that there are differences in the SES of families within districts, the SOS criticisms of the My School methodology come into play, because higher SES families are more likely to choose private schools. This leakage, together with the use of area-based measures of SES to rate the SES of schools, leads to a bias in the comparison of school results which favours private schools.

The use by My School of an area-based measure of SES to approximate individual family characteristics in the area is based upon an assumption of population homogeneity. That is, it assumes that like people live near like people. The validity of this assumption has been tested by studies comparing individual or family SES values or scores with scores assigned on the basis of the average characteristics of residents living within small areas (called census collection districts). The evidence is that census collection districts are not homogeneous.

A study by the Australian Council of Educational Research has shown that the correlation between individual and census collection district measures of SES for a national sample of secondary school students was unacceptably low [Ainley & Long 1995]. The study reported correlations between 0.36 and 0.45 between individual and collection district measures in a sample of secondary school students [73]. They found that the greatest loss in precision occurs in moving from an individual based analysis to the collection district level, and that the additional loss of validity when moving from collection districts to larger geographical areas such as postcodes is not great [81-83].
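
The mechanics of this loss of precision are easy to demonstrate. The sketch below is a hypothetical simulation, not ACER’s analysis: the district counts and the between- and within-district spreads are invented, chosen only so that the resulting correlation lands near the 0.36 to 0.45 range reported above. It shows how assigning every family its district’s average SES both weakens the individual-level signal and misclassifies many families.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model: each census collection district has its own mean SES,
# and families scatter around that mean. The within-district spread is the
# heterogeneity at issue; all numbers are invented for illustration.
n_districts, per_district = 1000, 50
district_means = rng.normal(0.0, 1.0, n_districts)  # between-district variation
family_ses = (np.repeat(district_means, per_district)
              + rng.normal(0.0, 2.0, n_districts * per_district))  # within-district

# Area-based proxy: every family is assigned its district's average SES.
area_proxy = np.repeat(
    family_ses.reshape(n_districts, per_district).mean(axis=1), per_district)

r = np.corrcoef(family_ses, area_proxy)[0, 1]
print(f"correlation between individual and area-based SES: {r:.2f}")

# Misclassification: what share of bottom-quartile families live in areas
# that the area-based measure rates in the top half of districts?
low_family = family_ses < np.quantile(family_ses, 0.25)
high_area = area_proxy > np.quantile(area_proxy, 0.5)
print(f"bottom-quartile families rated in top-half areas: "
      f"{(low_family & high_area).sum() / low_family.sum():.1%}")
```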

The Australian Bureau of Statistics has demonstrated that some highly advantaged families live in low SES areas and some disadvantaged families live in high SES areas. In an analysis of census data for Western Australia it found that individual and family relative socio-economic disadvantage was quite diverse within small areas [Baker & Adhikari 2007].

It found that about 20 per cent of people in the most disadvantaged quartile of the individual SES measure lived in census collection districts that were in the highest three deciles of the area-based Index of Relative Socio-economic Disadvantage (IRSD). Over a third of people in the bottom quartile lived in areas in the top five IRSD deciles and six per cent of people in the lowest group in the individual based SES measure lived in collection districts found in the highest IRSD decile.

On the other hand, nearly 20 per cent of people in the most advantaged quartile for individual SES lived in areas that were classified in the bottom three deciles of the IRSD. Over a third of people in the most advantaged quartile lived in areas in the bottom five deciles. Five per cent of people in the highest individual based SES group lived in the collection districts found in the lowest IRSD decile.

The ABS researchers conclude from this study that “there is a large amount of heterogeneity in the socio-economic status of individuals and families within small areas” [1].

This conclusion directly refutes Professor McGaw’s claim that small areas are not heterogeneous. Rather, “there is a large amount of heterogeneity” as SOS has argued.

Heterogeneity in the SES of families in small districts means there will be errors in classifying the SES of individuals, such as students, on the basis of the average SES of the areas. This is emphasised by the ABS:

A relatively disadvantaged area is likely to have a high proportion of relatively disadvantaged people. However, such an area is also likely to contain people who are not disadvantaged, as well as people who are relatively advantaged. When area level indexes are used as proxy measures of individual level socio-economic status, many people are likely to be misclassified. This is known as the ecological fallacy. [ABS 2008: 3; see also Adhikari 2006: 6]

Moreover, the potential for error is quite significant:

These findings indicate that there is a high risk of the ecological fallacy when SEIFA is used as a proxy for the socio-economic status of smaller groups within an area and there is considerable potential for misclassification error. [Baker & Adhikari 2007: 1]

Other researchers have also noted this, for example:

Assigning a value of socioeconomic status to a student on the basis of the area in which they live will introduce a potential error and the magnitude of the error will be greater when the social background of those living in the area is relatively heterogeneous. [Ainley & Long 1995: 53; see also Preston 2010]

The potential for misclassification errors in using area-based measures of SES to approximate individual characteristics was also noted in a report commissioned by the Performance Measurement and Reporting Taskforce of the national education ministers’ council [Marks et al. 2000: 27].

Heterogeneity of family SES in small areas has fatal implications for the reliability of comparisons of so-called “like schools” by My School. It introduces the potential for significant errors in classifying students and measuring school SES. This is clearly demonstrated in an analysis of the Penrith statistical area using 2001 Census data [Preston 2010].

The study shows that a government school drawing students from the ten most disadvantaged collection districts in the region would have 16 students from low income families for every student from a high income family. In contrast, an independent school drawing from the same collection districts would have equal numbers of students from low and high income families. Yet, using an ABS area-based index of socio-economic status similar to that used by My School, the two schools would be classified as “like schools”.
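
The arithmetic behind this result is straightforward: an area-based school score depends only on which districts a school’s students come from, not on which families within those districts actually enrol. The sketch below makes the point with invented district scores and enrolment numbers, loosely modelled on the Penrith example.

```python
# Hypothetical area SES scores for three collection districts.
districts = {"d1": 850, "d2": 870, "d3": 890}

# Both schools draw the same numbers of students from the same districts...
gov_enrolments = {"d1": 40, "d2": 30, "d3": 30}
ind_enrolments = {"d1": 40, "d2": 30, "d3": 30}

def area_based_score(enrolments):
    """Enrolment-weighted average of district scores (ICSEA-style)."""
    total = sum(enrolments.values())
    return sum(districts[d] * n for d, n in enrolments.items()) / total

print(area_based_score(gov_enrolments))  # 868.0
print(area_based_score(ind_enrolments))  # 868.0 -- rated as "like schools"

# ...even though the family mix inside each school is nothing alike:
# 16 low-income students per high-income student in the government school
# versus equal numbers in the independent school. The index cannot see this.
```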

Professor McGaw and the Federal Education Minister claim that “like school” comparisons on My School are robust. The evidence so far available overwhelmingly indicates they are not. Apart from containing errors in the measurement of school SES and being biased in favour of private schools, the comparisons also exclude a range of other factors which have a significant bearing on the classification of so-called “like schools”, thereby distorting comparisons of school results [Cobbold 2010a, 2010b].

Trevor Cobbold

References

Adhikari, Pramod 2006. Socio-Economic Indexes for Areas: Introduction, Use and Future Directions. Research Paper, Australian Bureau of Statistics, Canberra.

Ainley, John and Long, Michael 1995. Measuring Student Socioeconomic Status. In John Ainley; Brian Graetz; Michael Long and Margaret Batten, Socioeconomic Status and School Education, Australian Government Publishing Service, Canberra, June.

Australian Bureau of Statistics 2008. Socio-Economic Indexes for Areas (SEIFA) – Technical Paper.

Baker, Joanne and Adhikari, Pramod 2007. Socio-economic Indexes for Individuals and Families. Research Paper, Analytic Services Branch, Australian Bureau of Statistics, Canberra, June.

Cobbold, Trevor 2010a. Like School Comparisons do not Measure Up. Research Paper, Save Our Schools, Canberra, February.

Cobbold, Trevor 2010b. ‘Like School’ Comparisons on My School are Flawed and Misleading. Save Our Schools, Canberra, February.

Marks, Gary; McMillan, Julie; Jones, Frank L. and Ainley, John 2000. The Measurement of Socioeconomic Status for the Reporting of Nationally Comparable Outcomes of Schooling. Report to the National Education Performance Monitoring Taskforce, MCEETYA, March.

Preston, Barbara 2010. Notes on the ecological fallacy when area-based indexes of disadvantage/advantage are applied to schooling in Australia, Canberra, March.


School Daze

Tuesday April 20, 2010

The ‘like school’ comparisons on the My School website purport to compare the test results of schools with similar socio-economic student populations. However, like is not consistently compared with like. My School’s measure of the socio-economic status (SES) of schools is systematically biased in favour of private schools when comparing their results with so-called ‘like’ government schools.

The bias works in two separate, but compounding ways. My School under-estimates the SES of private schools that draw enrolments from high SES families living in lower SES areas. It also over-estimates the SES of government schools because high SES families resident in their area tend to choose private schools.

There are two sources of this bias. One is that the Index of Community Socio-Educational Advantage (ICSEA) used to measure the SES of schools is based on the average socio-economic characteristics of the areas in which students live and not on the actual SES of their families. Studies by the Australian Bureau of Statistics show that some high income families live in low SES areas and vice versa, so the actual SES of some students will be above the area-average and others below the area-average.

ICSEA also fails to allow for differences in the proportion of high and low SES families that enrol in private and government schools. On average, 47% of high income families choose private schools compared to 24% of low income families. In the case of secondary schools, 55% of high income families choose private schools compared to 26% of low income families.

The greater leakage of high SES students from each area into private schools causes the ICSEA rating of private schools to under-estimate their actual SES because these students are classified according to their (lower) area SES measure rather than by their (higher) family SES.

On the other hand, the ICSEA rating of government schools over-estimates their actual SES because of the leakage of high SES students to private schools. Government schools take a greater proportion of low SES students, but these students are classified at the (higher) area SES rating rather than by the actual SES of their families. The lower SES students carry the higher area SES score influenced by high SES families in the area whose students do not attend government schools. Thus, the level of disadvantage in government schools is under-estimated by ICSEA.
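
The direction and size of this two-way bias can be illustrated with a toy simulation. Everything in the sketch below is hypothetical except the 55% and 26% secondary enrolment rates quoted above; the SES scores are invented numbers on an ICSEA-like scale.

```python
import numpy as np

rng = np.random.default_rng(1)

# One area: a mix of high- and low-SES families who all share a single
# area-average score, as under an area-based rating.
n = 1000
is_high = rng.random(n) < 0.5
family_ses = np.where(is_high, rng.normal(1100, 50, n), rng.normal(900, 50, n))
area_average = family_ses.mean()  # the one score an area-based index assigns

# Leakage: high-SES families choose private schools more often
# (55% vs 26%, the secondary-school rates quoted above).
p_private = np.where(is_high, 0.55, 0.26)
goes_private = rng.random(n) < p_private

for label, mask in [("private", goes_private), ("government", ~goes_private)]:
    print(f"{label:10s} true mean SES {family_ses[mask].mean():6.1f} "
          f"vs area-based rating {area_average:6.1f}")
# The private school's true SES sits well above the area rating (its SES is
# under-estimated); the government school's sits below it (its disadvantage
# is under-estimated).
```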

This systematic bias in the measurement of the SES of government and private schools can be illustrated by an example from My School.

My School classifies the wealthy King’s School in Sydney as having the same SES rating as Gundaroo Public School, a small school in a semi-rural area of NSW near Canberra. The King’s School has excellent test results, with many green colour codes for above-average results, while Gundaroo has many red colour codes for below-average results.

However, far from being ‘like schools’, they are very unalike schools.

The SES rating for the King’s School is likely under-estimated because it traditionally draws many students from farming families. About 30% of its enrolments are boarding students and only the wealthiest of rural families can afford tuition and boarding fees of over $36 000 a year for primary students. Yet, because these students are resident in lower SES rural areas, they carry a lower SES score than their actual family circumstances warrant. The relatively large proportion of these students attending the King’s School therefore significantly reduces its ICSEA rating.

On the other hand, the ICSEA rating for Gundaroo Public School is likely an over-estimate of its actual SES composition.

The Gundaroo area has a large proportion of high income, well educated, highly skilled households, but it also has a significant proportion of lower SES families. Census data shows that about 12% of households in the Gundaroo region are relatively low income. Some 32% of the population over 20 years of age did not finish Year 12, 25% have certificate-based qualifications and 30% are employed in lower skilled occupations.

Only about half of Gundaroo’s primary age children attend Gundaroo Public School. Many high SES families send their children to schools in Canberra leaving mostly lower and middle SES families at the local school. However, its ICSEA rating is based on the average SES characteristics of the area, including high SES families who do not attend the school, and therefore over-estimates its actual SES.

This comparison of unalike schools is not an isolated example. There are numerous others on My School.

Another source of bias is the exclusion from the ICSEA ratings of international students enrolled in many high-fee private schools. They are excluded because it is not possible to geo-code their addresses to a Census collection district.

This also artificially lowers the rating of some high SES schools, because only wealthy overseas families can afford the high tuition and boarding fees and associated costs of sending their children to Australia. This bias may not be large because of the relatively small number of international students, but it adds to the inherent bias of ICSEA.

Thus, the ‘like school’ comparisons on My School tend to pit higher SES private schools against lower SES government schools. This shows private schools in a more favourable light because students from higher SES families tend to have higher average results than students from lower SES families.

It should also be noted that ICSEA omits a range of factors that strongly influence school results. These include differences in the student composition of schools by gender, ethnic sub-groups and students with disabilities as well as differences in funding, school size, student mobility between schools, student selection and private tutoring.

Some of these omissions may further disadvantage government schools in comparisons with their “like” private schools. For example, schools with higher proportions of students with disabilities participating in tests may have lower average results than other schools with a similar ICSEA value. Government schools generally have higher proportions of students with disabilities than private schools.

My School is a travesty of ‘like school’ comparisons. Its biased comparisons in favour of private schools will unfairly affect the reputations of government schools and the careers of their teachers and principals. It will also mislead parents in choosing schools and mislead policy makers in drawing conclusions about best practice in schools.

Trevor Cobbold

This article was originally published in the April newsletter of the Australia Institute.


A Devastating Critique of Choice and Competition in Education by a Former Advocate: Part 3

Tuesday March 23, 2010

The final part of a review of Diane Ravitch’s new book: The Death and Life of the Great American School System: How Testing and Choice are Undermining Education.

Call to rejuvenate public education

Ravitch ends her book with a clarion call:

At the present time, public education is in peril. Efforts to reform public education are, ironically, diminishing its quality and endangering its very survival. We must turn attention to improving schools, infusing them with the substance of genuine learning and reviving the conditions that make learning possible.

Above all, it must be recognised that education improvement is hard work and there are no miracle cures: “…history teaches that there are no shortcuts, no quick fixes, no easy answers” [The New Republic, 15 March].

A quality curriculum for all is central to school improvement. She says that “what we need is not a marketplace, but a coherent curriculum that prepares all students” [Wall Street Journal, 9 March]. In her book, she says that a broad curriculum for all students is fundamental: “You can’t have a full and rich education by teaching only basic skills”.

Ravitch emphasises the need to deal with poverty and its effect on learning. She says that schools cannot be improved if this is ignored.

Disadvantaged children need… preschool and medical care. They need small classes [and] extra learning time. Their families need… coordinated social services that help them… acquire necessary social and job skills, and… obtain jobs and housing. While the school itself cannot do these things, it should be part of a web of public and private agencies that buttress families.

Ravitch says that every neighbourhood should have a good public school. She notes that neighbourhoods were once knitted together by a familiar local school that served all children in the neighbourhood, but this has been undermined by choice of schools. She says that most parents want a strong stable school that is within reasonable distance of their home.

Ravitch also supports a greater community voice in education decisions. She says that the combination of a corporate agenda for education and centralised education administration is denying a public voice in education. She points to Julia Gillard’s much-admired New York City model as a classic case of how corporate interests and autocratic control of public education combine to deny parents and teachers a voice in education policy.

It solves no problems to exclude parents and the public from important decisions about education policy or to disregard the educators who work with students daily. Public education is a vital institution in our democratic society, and its governance must be democratic, open to public discussion and public participation.

She argues that the promotion of democratic citizenship is the central mission of public schools, an important corrective to big business reformers who focus only on test scores and satisfying “consumers.” In the penultimate paragraph of her book she writes:

Our public education system is a fundamental element of our democratic society. Our public schools have been the pathway to opportunity and a better life for generations of Americans, giving them the tools to fashion their own lives and to improve the commonweal. To the extent we strengthen them, we strengthen our democracy.

Timely warning for Australia

Ravitch’s book is a timely warning for the Australian public as the Rudd Government embarks on an extension of the market in education that David Kemp could only dream of.

Test-based accountability is now entrenched in Australia with the launch of the My School website. There is no strong evidence that it will increase student achievement. Even the head of the Australian Curriculum, Assessment and Reporting Authority, Peter Hill, admits that there is very little evidence to support Gillard’s faith in reporting school results. Ravitch’s book shows that it narrows the curriculum and denies children a well-balanced education.

Gillard has threatened schools that fail to lift their performance with sanctions, such as firing principals and teachers and closing schools. Sanctions are now a feature of the American education system, and Ravitch clearly shows they have had little success in raising performance. Firing teachers is a punitive approach that demoralises the profession and turns people away from teaching. Closing schools in the US has reached a farcical stage, as many school districts are closing schools that were opened after closing other schools.

At least one Australian state is now experimenting with a teacher pay system partly based on student test scores. Gillard has also floated the idea of adopting the New York City approach of basing teacher tenure on student test scores. Once again, there is very little evidence to support such schemes as Ravitch shows.

The Rudd Government has continued the privatisation of Australian education through the SES funding model of the Howard Government which has underwritten the expansion of private schools and undermined public education.

Further privatisation of public education is behind the agenda of calls for greater autonomy and independence for schools within the public system. It will create a wedge for greater business involvement in, and control of, public education. The new system of autonomous schools in Western Australia is the start of a form of charter schools in Australia. The next stage will be a greater role for big business and private foundations. The recent call by former Victorian Premier, Steve Bracks, for big business to help schools improve and to play a greater role in funding and shaping public education should be seen as part of this agenda (The Age, 19 March). Bracks is now a senior adviser to the National Australia Bank.

The prospect is that these measures will cause Australian education to more and more resemble the parlous state of American education. There has to be a substantial re-think. Just as Diane Ravitch was forced to confront her own hopes about test-based accountability by the accumulating evidence that it does not work, Gillard and other education ministers around Australia should do so too. Ravitch’s new book is the place for them to start.

Trevor Cobbold


US School Cheating Scandal Sends Warning on My School

Monday February 15, 2010

One of the largest school cheating scandals ever in the US is under investigation in the state of Georgia. Last week, the New York Times and the Atlanta Journal-Constitution reported that one in five of Georgia’s public elementary and middle schools is under investigation for changing student answers on state tests.

The case has momentous implications for Australia now that My School is up and running and test scores have become the measure of a school’s worth, as well as that of its teachers and principal.

The Georgia State Board of Education has ordered investigations at 191 schools across the state where evidence has been found of tampering with answer sheets for the state’s standardised achievement test. Another 178 schools will see stepped-up monitoring during testing.

The investigation is the result of an analysis by the Georgia Governor’s Office of Student Achievement. It flagged any school that had an abnormal number of erasures on answer sheets where the answers were changed from wrong to right, suggesting deliberate interference by teachers, principals or other administrators.

According to the analysis, more than half of elementary and middle schools in the state had at least one classroom where erasure marks were so unusual that cheating may have occurred.

Four percent of schools were placed in the “severe concern” category, which meant that 25 percent or more of a school’s classes were flagged. Six percent were in the “moderate concern” category, which meant that 11 percent to 25 percent of its classes were flagged, and 10 percent raised “minimal concern,” meaning 6 percent to 10 percent of its classes were flagged.
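
The category logic itself is simple to express. In the sketch below the thresholds are the reported ones, while the schools and their flagged-class counts are invented; the upstream step of flagging a class in the first place, by comparing its wrong-to-right erasure count against what would be statistically expected, is not shown.

```python
# Bin a school by the share of its classes flagged for unusual
# wrong-to-right erasures, using the reported thresholds.
def concern_category(flagged_classes: int, total_classes: int) -> str:
    share = flagged_classes / total_classes
    if share >= 0.25:
        return "severe concern"
    if share >= 0.11:
        return "moderate concern"
    if share >= 0.06:
        return "minimal concern"
    return "not flagged"

# Invented schools for illustration.
for school, (flagged, total) in {
    "School A": (8, 30),   # 27% of classes flagged
    "School B": (4, 30),   # 13%
    "School C": (2, 30),   # 7%
    "School D": (1, 30),   # 3%
}.items():
    print(school, concern_category(flagged, total))
```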

The main focus of the investigation is Atlanta, where 70% of all elementary and middle schools face investigation. At one school, for example, an average of 27 of 70 answers on each fourth-grader’s math test was changed from wrong to right in one classroom. At another, an average of 26 of 70 answers on the fifth-grade math test was erased and corrected.

More than half the classes at 27 schools were flagged, and at four Atlanta schools more than 80 percent of the classes were flagged.

Experts said it could become one of the largest cheating scandals in the era of widespread standardized testing in the US. Gregory Cizek, Professor of Educational Measurement and Evaluation at the University of North Carolina, told the Atlanta Journal-Constitution (11 February) that the extent of the suspicious answer changes is stunning. He has studied cheating for more than a decade, but said he didn’t know of another state that has detected so many potential problems. He told the New York Times (11 February): “This is the biggest erasure problem I’ve ever seen”.

It is the second time in two years that cheating has been exposed in Georgia’s state tests. In 2009, four schools were found to have extensively changed student answer sheets. Court records show that in one case, the school principal and assistant principal had systematically erased and changed student answers to ensure that the school met the “adequate yearly progress” benchmark of the Federal government’s No Child Left Behind Act (Atlanta Journal-Constitution, 12 February).

The scandal is the latest in a series of cheating scandals across the United States since the No Child Left Behind legislation came into force. If schools fail to meet the federal benchmarks of the No Child Left Behind Act, they are placed in a “needs improvement” category and must offer extra tutoring and allow parents to transfer their children to higher performing schools.

The legislation has placed principals and teachers under intense pressure to improve school test scores and many come to think that they must cheat to survive. High personal stakes are involved for teachers and principals as they fear their careers will suffer from the failure to improve school test scores.

Last year a survey of public school teachers in Chicago by the Chicago Sun-Times (29 August) and the teachers’ union revealed that one-third of all teachers had been pressured in the last year by principals and boards to change student grades. Twenty per cent said that they had actually raised grades under this pressure.

Extra pressure is also being placed on Georgia teachers as a result of a new initiative that ties teachers’ pay to student performance.

Erasing and replacing student answers is just one type of cheating carried out by schools. Others include filling in unanswered questions, helping students with answers during tests and alerting teachers and students to test questions before the test is taken.

The new scandal has raised new questions about inadequate security procedures for tests in schools and the absence of audits of test results across most US states. Last August, a report by the US Government Accountability Office identified many gaps in the security of tests and found that many US states faced challenges in ensuring reliable and valid assessments.

The Georgia scandal was revealed by computer scanning to screen for erasures on test answer sheets. Testing experts say that most states fail to use even this most elementary means to monitor for cheating. Professor Cizek told the New York Times (12 February) that there is no incentive to vigorously pursue cheating because parents and administrators all like to see higher test scores.

Jennifer Jennings, Associate Professor of Sociology at New York University who studies school accountability, said that the Federal government should require states to check their test results. “It’s absolutely scandalous that we have no audit system in place to address any of this,” she said.

Some US states have better procedures in place. For example, South Carolina has been using erasure analysis on tests since the 1980s. If a school is flagged for suspicious activity, the state sends testing monitors the following year, and sometimes educators are criminally prosecuted or lose their teaching certificates.

In response to the latest cheating revelations, the Georgia State Board of Education is setting up new test security procedures. The Atlanta Journal-Constitution (11 February) reported that state testing monitors will be sent to each of the 74 “severe” schools and that monitors will conduct random checks at the 117 “moderate” schools.

Schools in either of those categories will have to randomly rotate teachers to different classrooms during testing; teachers cannot supervise tests for their own students. At an additional 178 schools, about which the state had “minimal” concerns, teachers will either have to rotate or take additional steps to ensure proper monitoring.

The Georgia House of Representatives will soon debate bills that would make it unlawful to tamper with state tests or to help others cheat on them. No law in Georgia currently makes cheating a crime. Under the bills being considered, violators would be guilty of a misdemeanour and subject to the loss of their pensions and possible fines.

The Georgia case raises serious questions about the adequacy of security around Australia’s national literacy and numeracy tests (NAPLAN) and the auditing of schools.

The administration of NAPLAN relies on the principals, school test co-ordinators and teachers who administer the tests adhering to the guidelines. It is open to considerable abuse by principals and teachers who face undue pressure to improve results. Some school co-ordinators and teachers have told Save Our Schools that the security arrangements for NAPLAN are totally inadequate to stop cheating.

Test booklets are delivered to schools a week or 10 days beforehand and there is little to stop an unethical principal or co-ordinator from opening them and alerting teachers about questions to practice in their class. Tests are mostly supervised alone by teachers in the classroom, except in cases where assistants may be present for students with disabilities, and there is no monitoring of whether some teachers help students with answers. There are also ample opportunities available after the tests are taken to change answers or fill in unanswered questions.

My School is now a maker and breaker of school reputations and careers. Soaring test scores may put a principal on the administrative ladder, where salaries can rise well into six figures. Conversely, bad scores can mean unfavourable assessments and may even cost principals their jobs, as Julia Gillard has threatened several times.

In this environment, the pressure on principals and teachers to improve their school’s test scores will be intense. There will be no surprise if many are tempted to take the path of their colleagues in Georgia.

Security and auditing arrangements for NAPLAN must be reviewed and upgraded before the next round of tests in May to ensure their integrity. It is a task for the Australian National Audit Office as an independent authority reporting directly to the Australian Parliament on public sector administration and accountability.


League Tables Damned by Major UK Report

Friday October 23, 2009

Just as Australia is introducing reporting of school test results and the inevitable league tables that will follow, a major review of the primary curriculum in England has issued damning conclusions on the impact of standardized tests and league tables.

The Cambridge Primary Review released last week says that the testing and reporting of school results in English and maths has distorted children’s learning and eroded their entitlement to a broad education. It says that 10- and 11-year-olds spend around half their time in the classroom studying English and maths and that this has “squeezed out” other subjects from the curriculum.

The Review recommends that the English and maths tests be replaced and that league tables that report school performance on these tests be axed as well.

The Review’s 608-page final report is the most comprehensive review of primary education in England in 40 years. It is based on 4,000 published reports and 1,000 submissions from around the world. It makes 78 recommendations for reforming the English system of primary education.

The Review says that the current focus on passing exams and hitting targets at a young age is “even narrower than that of the Victorian elementary schools”. It claims that the existing system causes significant “collateral damage” as children are drilled to pass exams, marginalising other subjects such as history, geography, art and science, which have been “squeezed out” of the curriculum. The report says:

The prospect of testing, especially high-stakes testing undertaken in the public arena, forces teachers, pupils and parents to concentrate their attention on those areas of learning to be tested, too often to the exclusion of other activities of considerable educational importance.

As children move through the primary phase, their statutory entitlement to a broad and balanced education is increasingly but needlessly compromised by a ‘standards’ agenda which combines high-stakes testing and the national strategies’ exclusive focus on literacy and numeracy.

The head of the Review, Professor Robin Alexander, wrote in the Daily Telegraph that primary education should amount to much more than basic literacy and numeracy, supremely important though these are. He said claims that tests in those areas can serve as a proxy for the rest of a child’s education are both wrong and misleading to parents.

The report proposes that the tests be replaced by a system of less formal teacher assessment throughout primary school, which could be externally moderated. A random sample of children could then be tested at age 11 to gauge national performance in all subjects.



School Results Fail to Measure Up

Sunday October 11, 2009

A testing expert has made some devastating criticisms of the reliability of school test results to be published later this year or early next year.

Professor Margaret Wu from the University of Melbourne says that linking school performance to student achievement on these tests is “pure conjecture”.

In a keynote paper delivered in Hong Kong in July, Professor Wu said that the NAPLAN tests have a high level of inaccuracy. She said that there are large measurement errors at the individual student and class levels.

She said that these errors meant that high stakes decisions such as judging school and teacher performance on student scores should not be made on the basis of these tests.

Professor Wu also said that the tests are not suitable for measuring achievement growth between two points in time for individual students or classes. She also made some technical criticisms which call into question the validity of the tests and the method used to equate the scores of students across different year levels on the same scoring scale.

The extent of the errors is quite large, even for individual students, and they carry through to judgments at the class and school levels. Professor Wu found that measurement errors in annual 40-item tests, such as those being used in NAPLAN, would lead to about 16 per cent of students appearing to go backward when they had actually made a year’s progress. She said this is a conservative estimate as it does not take account of other sources of error such as the assumption that two tests are assessing the same content. The errors could well be larger.

While the size of the measurement error reduces for classes and schools, it is still quite large. For example, Professor Wu found that the statistical uncertainty around the average results on these tests for classes of 30 students is equivalent to more than six months’ learning. Many schools around Australia have only this many students or fewer participating in the NAPLAN tests. For schools with two classes of 30 students tested, the error could amount to about four months of learning.
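
The rough arithmetic behind these class-level figures is worth seeing. The sketch below is hypothetical, not Professor Wu’s calculation: it assumes the spread of student scores within a class (true differences plus measurement error) is equivalent to about 0.8 of a year’s learning, a figure invented purely for illustration. The uncertainty of the class average then follows from dividing by the square root of the class size.

```python
import math

sd_per_student = 0.8  # hypothetical spread of student scores, in years of learning
class_size = 30

se_of_class_mean = sd_per_student / math.sqrt(class_size)
ci_width = 2 * 1.96 * se_of_class_mean  # width of a 95% confidence interval

print(f"standard error of the class mean: {se_of_class_mean:.2f} years")
print(f"95% interval spans about {ci_width * 12:.0f} months of learning")
# Roughly seven months for one class of 30. Halving the uncertainty takes
# four times the students, so even two classes (60 students) still leave
# an interval of four to five months under these assumptions.
```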

These results relate only to measurement error in the tests. There are also other sources of error, most notably sampling and equating errors, which add to the uncertainty and inaccuracy of the results.

Measurement error reflects inconsistency in test results: the same students may achieve different results on the same test on different days because of differences in their own well-being, such as lack of sleep or food, or because of variations in external factors such as the temperature of the room in which the tests are conducted. It also arises from differences in the items selected for testing and the way answers are scored.

Sampling error arises from differences in the selection of students to participate in tests. A group of students selected for a test are likely to achieve different results from another group simply because of differences in their composition. The group selected for testing may not reflect the average level of ability of all students. The smaller the sample, the more likely there will be a significant difference between the average results of the sample tested and the results if all students were tested.

Sampling error occurs even when all students in a year cohort are tested. This is because inferences are made about school performance by testing selected cohorts, such as Years 3, 5, 7 and 9 in the national literacy and numeracy assessments. Each cohort of students tested is a sample of the students in the school for the purpose of measuring school performance.

Equating errors arise in comparing tests over time and in creating a common scale of scores for students across different Year levels. For example, building a common score scale across several year levels involves sophisticated statistical methodology to ensure that the results are reliable and valid. Different methodologies produce different results.
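
To give a flavour of what equating involves, the sketch below shows the simplest member of the family, mean-sigma linear equating, with invented scores. NAPLAN’s actual equating uses far more sophisticated item-response-theory linking; the point here is only that the mapping is estimated, and different methods estimate it differently.

```python
import numpy as np

rng = np.random.default_rng(2)
form_x = rng.normal(500, 80, 2000)  # hypothetical scale scores on test form X
form_y = rng.normal(470, 70, 2000)  # hypothetical results on test form Y

a = form_x.std() / form_y.std()        # match the spreads
b = form_x.mean() - a * form_y.mean()  # then match the means
equated = a * form_y + b               # form Y expressed on form X's scale

print(f"mapping: x = {a:.2f} * y + {b:.1f}")
print(f"equated mean {equated.mean():.1f}, sd {equated.std():.1f}")
# If the two forms behave differently for different groups of students
# (the problem Wu raises), no single linear mapping fits everyone, and the
# choice of method itself becomes a source of equating error.
```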

Professor Wu says that equating error is a major source of inaccuracy. This is because test items often work differently for different groups of students across states, there are curriculum differences across states and some content areas are not fully covered.

Professor Wu has followed up her criticisms in a recent letter to The Age, saying that if student performance is not measured well by NAPLAN then the results cannot be used to assess school and teacher performance. She said it could mean that schools and teachers are accused of not doing their job when in fact they are.

Professor Wu says that the criticisms also apply to so-called ‘like school’ comparisons. The large error margins make these comparisons practically meaningless.

When schools are grouped into ‘like’ groups, we need even more precision in the measures to detect differences between schools. It will be easy to demonstrate the difference between a high-profile private school and a low socio-economic government school, but it will be more difficult to determine significant differences between two high-profile private schools.

These are devastating criticisms. Julia Gillard has given assurances that the new national school performance reporting system will provide accurate data on individual school performance. However, it appears that the national tests are not up to the mark.

The large statistical errors will wreak havoc when comparing school results.

It will not be possible to make reliable comparisons or rankings of schools because they may reflect chance differences in school performance rather than real differences. Such comparisons will mostly identify lucky and unlucky schools, not good and bad schools. It also means that current school performance is highly misleading as a guide to future school performance.

These statistical errors in school results also mean that school performance and school rankings are highly unstable from year-to-year. It is highly misleading to compare changes in school performance from one year to the next, especially in the case of smaller schools. It leads to unwarranted conclusions about changes and often unfairness in the inferences drawn about schools.

Professor Wu’s criticisms show that Julia Gillard’s faith in the ability of NAPLAN to identify successful schools is misplaced. Rather than accurately measuring school performance, as Gillard asserts, the new school performance reporting system is likely to mislead parents and policy makers.

Parents may be misled in choosing a school. Some schools may be recognised as outstanding while others are identified as unsuccessful simply as the result of chance and not because of actual programs and teaching practice.

The large error margins may also mislead policy makers because it will be difficult to identify effective school practices. It may mislead decision-makers and schools in recommending and adopting particular educational programs. Action taken to assist less successful schools may appear more effective than it is in practice.

Trevor Cobbold


New York City’s Bogus School Results

Thursday September 10, 2009

Diane Ravitch, Professor of Education at New York University and former US Assistant Secretary of Education, says that the latest school results in New York City are bogus.

Writing in the New York Daily News, Ravitch says the City’s school reporting system, so admired by Federal Education Minister Julia Gillard, makes a mockery of accountability. When nearly every school gets an A or B there is no accountability.

Ravitch attributes the massive increase in schools being graded A or B to a collapse in standards in the New York State tests in recent years.

Earlier this year, the Daily News (7 June) revealed that test questions have been getting easier. It reported an investigation by Columbia University’s Jennifer Jennings which found that the state asks nearly identical questions year after year. For example, at least 14 of the 30 multiple choice questions on the seventh-grade exam in 2009 had appeared in similar form in previous years. Only 55% of the specific math skills the state requires seventh-graders to learn were ever tested in the four years the exam has been given.

This predictability in test questions allows for intensive preparation of students which has corrupted the results. Test experts said that students are essentially tipped off as to which specific items are going to be on the test and this undermines the validity of the test.

With teachers administering daily practice tests containing questions very nearly the same as those that would appear on the state tests, it became easier for students to become “proficient.”

As a result, test scores are increasing massively. The number of students at the lowest level – those who are at risk of being held back in their grade – has dropped dramatically. In sixth-grade reading, 10.1% (7,019) were at Level 1 in 2006, but only 0.2% (146) were by 2009. In fifth-grade reading, the proportion of Level 1 students fell from 8.9% in 2006 (6,120) to 1.0% (654) in 2009. In seventh-grade math, the proportion of Level 1 students plummeted from 18.8% (14,231 students) in 2006 to 2.1% (1,457) in 2009.

In almost every grade, the state has lowered the bar, making it easier for students to get a higher score. In 2006, students had to earn around 60% of the points on the state math tests to reach Level 3, which the state defines as proficiency, but by 2009, they needed to earn only 50%.

Ravitch says that New York City’s school report cards should be revised or scrapped. “We are reaching a perilous stage where the test scores go up while real education – the kind that is available in the best schools – disappears.”


Gillard Renews Threat of Sanctions Against Lowly Ranked Schools

Saturday August 22, 2009

Julia Gillard has yet again raised the spectre of using school results to punish low performing schools. She said on the SBS Insight program that principals deserve to be sacked if they repeatedly fail to lift their school’s performance.

The Minister failed to produce any evidence that punishing schools succeeds in lifting their results. Her problem is that she has none to produce. There is no substantial evidence that applying sanctions against schools succeeds in improving school results. But, having no evidence to sustain her case does not seem to faze the Deputy Prime Minister. It has become a feature of her administration of education.

Gillard has previously threatened sanctions against schools with low student achievement, as has the Prime Minister. On her recent trip to the United States, she told a roundtable discussion on education reform at the Brookings Institution that schools which persistently fail “might eventually be closed”. In his address to the National Press Club on education last year, the Prime Minister threatened:

…where despite best efforts, these schools are not lifting their performance, the Commonwealth expects education authorities to take serious action – such as replacing the school principal, replacing senior staff, reorganising the school or even merging that school with more effective schools.

In all likelihood, these threats will apply only to government schools. Although the Federal Government has no constitutional power to sack staff or close schools, it will presumably implement its sanctions by holding state and territory governments to ransom over funding grants. There is no chance, however, that it will threaten any private school with forced closure or require one to sack its principal or staff.

Gillard and Rudd are taking their cues from England and New York City, where blaming teachers and principals has become established procedure. It has allowed politicians and education officials to dodge their own responsibility for the quality of education provided to highly disadvantaged communities.

In June 2008, the UK Schools Secretary threatened to close any English secondary school that failed to ensure, within three years, that at least 30 per cent of its pupils achieved five good General Certificates of Secondary Education, including English and maths. This put some 638 secondary schools, or 20 per cent of all secondary schools in England, under threat of closure. In October, the government also threatened to close primary schools whose results were below a performance threshold.

In the large majority of cases, the schools targeted for closure or other sanctions serve highly disadvantaged communities. Of the 638 schools threatened with closure, 542 have an above-average proportion of students who qualify for free school meals, an indicator of disadvantage used in England.

Julia Gillard’s hero, New York City’s Schools Chancellor Joel Klein, has also been sacking principals and staff and closing schools for several years because of persistently low performance on New York’s school grading system. The New York Times reported that 14 schools were marked for closure this year because they were deemed to be ‘failing’ schools. Since Klein took over the city education system, 92 low performing schools have been closed. Many have been turned over to charter schools.

The evidence is that none of this works. For example, a review of the use of sanctions and rewards across a wide range of programs, including education, published by the UK National Audit Office last September, found “no quantified evidence of the effect of sanctions and rewards on levels of performance for the programmes in the survey”. The sanctions covered in the review included closing schools and the harm to reputation from a low ranking on league tables.

A study of the impact of sanctions against low performing schools, recently published by the American Educational Research Association, refers to their “predictable failure”. It found a lack of evidence that sanctions have been effective as a universal treatment for raising achievement in low performing schools, and concluded that the sanctions applied under the No Child Left Behind legislation are more likely to result in “unproductive turbulence than in sustained school improvement”.

A report published last April by the Education and Public Interest Center at the University of Colorado concluded that there is little to no evidence that sanctions against low performing schools increase student achievement. It recommended that policy makers refrain from adopting restructuring sanctions such as sacking principals and staff, or closing schools and allowing them to be taken over by private operators or charter schools. It said that these sanctions have produced negative by-products without yielding systemic positive effects on student achievement.

Joel Klein’s sanctions against New York City schools have not worked either. National tests show that average student achievement in New York City schools has stagnated since Klein took over and there has been no reduction in achievement gaps.

Using school results to sanction low achieving schools and staff is likely to be highly arbitrary and unfair. A report published by a group of English education academics last week said that using test results to judge school performance can be very misleading. It cited extensive evidence in the UK that a large proportion of students have been marked incorrectly in tests in the past.

The report questioned making the fate of schools hang on a single set of test results, saying that raw test scores only measured part of what a school does and were influenced by factors beyond the control of schools. Sacking principals and school staff on the basis of these results is similarly unfair and arbitrary.

The use of league table results to target schools for sanctions is also often contradicted by other assessments of performance. For example, an analysis of reports by the UK Office for Standards in Education (Ofsted) showed that a quarter of the English secondary schools threatened with closure were graded “good” by Ofsted school inspectors, and 16 were judged to be “outstanding”. About a third of them were in the top 40 per cent on the government’s “value-added” league tables. Only one in ten needed special intervention according to the Ofsted inspectors.

The use of unreliable test data to apply sanctions against schools and teachers also encourages school responses which further corrupt the results. These include poaching high achieving students from other schools; denying entry to, or expelling, low achieving students; suspending low achieving students on test days; increasing the use of special dispensations for tests; encouraging students to take courses whose results are not used to compare schools; and outright cheating.

The underlying assumption behind Gillard’s threat is that if schools are failing to deliver quality education, it is the fault of the school’s leadership and teachers, and that they should be replaced. It assumes that creating a ‘culture of success’ in so-called failing schools is simply a matter of strong leadership.

A review of the Fresh Start initiative for ‘failing schools’ in England, published this month in the British Educational Research Journal, calls this assumption into question. It says the assumption ignores the ongoing impact of severe social inequalities and the context in which schools operate. The study concluded that “…managerial solutions are not sufficient to deal with problems that are both educational and social”.

By raising the spectre of sanctions against low performing schools, Julia Gillard has once again resorted to discredited schemes used in England and the United States. She continues to ignore the reality of the impact of poverty on education. Closing schools in poor communities will only disadvantage them further.

The threat itself is enough to set off a spiral of decline. The curse of failure will encourage parents and teachers to seek transfers to other schools. Few will make such a school their first preference for children starting school. Wholesale sacking of staff in schools serving poor communities will only make it harder to attract quality teachers. Few principals will elect to take on a challenging school if they face a higher risk of being sacked and branded a failure on the basis of dodgy statistics.

A different approach is needed as recommended by the review of the Fresh Start program in England:

If we are to improve achievement in inner-city schools, education policy needs to address fundamental matters concerning attainment, such as those related to resources, curricular innovation and pedagogy, and to design measures to raise, in particular, the attainments of pupils who are traditionally disadvantaged. [613]

Similarly, the AERA study of sanctions in the United States concluded:

…after about 15 years of state and federal sanctions-driven accountability that has yielded relatively little, it is time to try a new approach, one that centres on the idea of sharing responsibility among government, the teaching profession and low income parents. The hard cultural work of broader-based movements, nourished by government and civic action, will have to replace legal-administrative enforcement and mandates as the centrepiece of such an equity agenda. [361]

Julia Gillard would do well to heed this advice instead of grandstanding with populist rhetoric and discredited policy.

Trevor Cobbold
