Have Kids Stopped Trying on PISA and NAPLAN?

This is a summary of a new Education Research Brief, which can be downloaded below.

A much-ignored aspect of school results in Australia over the past decade or more is the sharp contrast between declining or stagnating scores on international and national tests for Years 9 and 10 and solid improvements in Year 12 results. How is it that trends in school outcomes only two or three Year levels apart are so different?

Are Finland’s Vaunted Schools Slipping?

Leading Finnish education expert Pasi Sahlberg comments on Finland’s slip down the rankings on international test results from the Programme for International Student Assessment (PISA).

The irony of Finland’s successful school system is that the Finns never aimed to be better than anyone else — except, it is often humorously claimed, Sweden. Since the announcement of the first results of the Organization for Economic Cooperation and Development’s Program for International Student Assessment, or PISA, in 2001, Finland has been the center of educational attention. Finland’s PISA scores topped the charts, and the Finnish approach to educational policy has stood in direct opposition to the path embraced by the United States, England, and much of the rest of the world.

Study Reveals the Damage to Education by NAPLAN and My School

Together with many teachers, academics and others around Australia, we can only feel vindicated by a new study by researchers at the University of Melbourne that shows the disastrous consequences of reporting school results on national literacy and numeracy tests. Incredibly, 75% of teachers say that they now teach to the test because of the focus on the NAPLAN tests and 70% say that less time is now spent on other subjects in schools.

Many Schools Have High Withdrawal Rates from NAPLAN

Monday November 26, 2012

While there have been large increases since 2008 in the percentage of students withdrawn from the NAPLAN tests, the average withdrawal rate remains low in all states and for Australia as a whole. However, these low averages disguise some very high withdrawal rates in many schools.

The average withdrawal rates across Australia in 2011 and 2012 were one to two per cent for the different Year levels tested. In contrast, 276 schools had withdrawal rates of 10% or more in 2011 according to the My School website (see table below). Seventy-eight schools had over 25% withdrawn and 32 schools had withdrawal rates of between 75 and 100%, most of the latter being Rudolf Steiner schools.

Victoria had the most schools with high withdrawal rates – 106 schools had withdrawal rates of 10% or more and 11 schools had between 75 and 100% of students withdrawn. Fifty-three schools in Queensland and 51 in South Australia had withdrawal rates of 10% or more.

Victorian schools accounted for 38% of all Australian schools with high withdrawal rates, yet Victoria has only 24% of the total number of schools in Australia. In contrast, NSW has 33% of all schools in Australia but only 14% of the schools with high withdrawal rates.

Schools with high withdrawal rates include both government and private schools. In 2011, 183 government schools and 93 private schools had high withdrawal rates. Private schools accounted for a slightly higher proportion than their share of total schools in Australia. They had 33% of schools with high withdrawal rates compared to 29% of the total of all schools.

Private schools accounted for the bulk of schools with very high withdrawal rates: 73 per cent of all schools that had 25% or more of their students withdrawn from NAPLAN were private schools.

Trevor Cobbold

Number of Schools and Withdrawal Rates from NAPLAN.pdf


Big Increase in Students Withdrawn from NAPLAN Tests

Thursday November 22, 2012

An increasing number of parents are withdrawing their children from the NAPLAN tests. There has been a four- to five-fold increase across Australia since 2008 in the percentage of children withdrawn from the numeracy tests. Withdrawals have increased in all Year levels tested and across all states and territories, with the largest increases in the ACT, Queensland, South Australia and Victoria.

It is not clear whether the increase is due to growing parent concerns about NAPLAN, increasing rorting of school results or a combination of factors. Certainly, more and more parents are becoming aware that NAPLAN is not compulsory despite the efforts of education authorities to suggest that the tests are mandatory. Equally, schools are under tremendous pressure to improve their results, and there is anecdotal evidence of schools encouraging parents of lower achieving students to withdraw them from the tests or keep them home on test days.

Although the percentages of students withdrawn are still small, the rapid growth poses a threat to the reliability of NAPLAN results for inter-school comparisons, inter-jurisdictional comparisons and trends in indicators of student achievement. This threat has been highlighted by the COAG Reform Council.

In the ACT, the percentage of Year 3 students withdrawn from the numeracy tests increased from 0.8% in 2008 to 4% in 2012 [Chart 1]. In South Australia, the percentage withdrawn increased from 0.6% to 3.3%; in Queensland it increased from 0.3% to 2.4% and in Victoria from 0.1% to 2.4%. For Australia, it has increased from 0.5% to 1.9% – a four-fold increase. The smallest increase was in NSW where the percentage withdrawn increased from 0.8% to 1%.

There was also a big increase in Year 9 students withdrawn. The percentage withdrawn from the Year 9 numeracy test across Australia increased from 0.3% to 1.4% – a five-fold increase [Chart 2]. In the ACT, it increased from 0.3% to 2.1%; in South Australia from 0.2% to 2.8%; in Queensland from 0.5% to 2.8%; and in Victoria from 0.1% to 1.3%. The increase in NSW was negligible – from 0.4% to 0.5%.

There are significant differences between the states and territories in the percentage of Year 3 students withdrawn from NAPLAN, ranging from 4% in the ACT to 1% in NSW. The ACT and South Australia (3.3%) had the highest percentages of students withdrawn from the Year 3 numeracy tests in 2012 while NSW, Tasmania (1.3%) and Western Australia (1.3%) had the lowest percentages withdrawn.

State/territory differences at the Year 9 level are smaller than in Year 3. Queensland and South Australia had the highest percentages of Year 9 students withdrawn – 2.8% and 2.3% respectively. NSW, Tasmania and Western Australia had only a very small percentage withdrawn – 0.5% to 0.7%.

The increase in students withdrawn accounts for a small but significant declining trend in the percentage of students sitting the NAPLAN tests since 2008. The percentage present for the Year 3 numeracy tests across Australia fell from 94.6% in 2008 to 93.1% in 2012 and from 91.8% to 89.8% in Year 9 [Charts 3 & 4].

In contrast to the trend in withdrawals, there has been little change since 2008 in the percentages of students absent on test days or exempt from the tests.

The percentage of students exempt from the tests shows little change. In Year 3 numeracy, it increased from 1.7% to 1.9% for Australia and from 1.1% to 1.6% in Year 9 [Charts 5 & 6]. The biggest increases were in NSW where the percentage of Year 3 exempt students increased from 0.9% to 1.7% and from 0.5% to 1.3% in Year 9.

There was also little change in the percentage of students absent on test day. In Year 3 numeracy, the percentage absent for Australia declined slightly from 3.3% in 2008 to 3.1% in 2012, while it increased slightly in Year 9 from 6.8% to 7.2% [Charts 7 & 8]. Absent students make up the largest proportion of students not sitting the NAPLAN tests. For example, 7.2% of Year 9 students in Australia were absent from the numeracy test in 2012 while 1.4% were withdrawn and 1.6% were exempt.

In 2012, 9% of Year 9 students in Australia were either withdrawn or absent, with a range from 7% in NSW to 11% in Tasmania and 17% in the NT. Five per cent of Year 3 students in Australia were either withdrawn or absent, ranging from 3% in NSW to 7% in the ACT and South Australia and 14% in the NT.

There is concern in official circles about decreasing participation in NAPLAN. Last year, the Ministerial Council for Education, Early Childhood Development and Youth Affairs commissioned work on participation rates by a strategic policy working group with the Australian Curriculum, Assessment and Reporting Authority (ACARA). The report of this group should be completed by the end of 2012.

Officials are becoming concerned about the trend because increasing numbers of students withdrawn or absent from NAPLAN will affect the reliability of the results (exempt students are included in the NAPLAN results by being deemed to be below minimum national standards). Changes in participation rates could affect the results of individual schools, of sub-groups of students such as Indigenous and low socio-economic status students, and of states and territories, as well as trends over time.

In its report on education performance in 2010, the COAG Reform Council emphasised the importance of high participation in NAPLAN for the reliability of results. It said:

In order to accurately report literacy and numeracy achievement, it is important that as many students as possible sit the NAPLAN tests. Small differences in participation may affect literacy and numeracy achievement because the ability of students who do not participate is likely to differ from students who do. [p. 22]

The impact on individual school results will depend on the background of the students who are withdrawn or absent. The withdrawal of lower achieving students will increase a school’s average result. As more students are withdrawn over time, a school’s results will be artificially boosted.

Research recently published by the COAG Reform Council shows that non-participants in NAPLAN tend to be lower scoring students. ACARA uses statistical imputation techniques to estimate test scores for absent and withdrawn students to reduce bias in state-wide comparisons of results. The COAG Reform Council research drew on this data to show that students who sat the NAPLAN tests have higher mean scores than withdrawn and absent students. In NSW, the difference between the mean numeracy scores of Present and Absent students in 2011 was 20 points in Year 3 and 39 points in Year 9. In Victoria, the differences were 13 points for Year 3 and 24 points for Year 9. The Year 9 differences are significant, being equivalent to nearly two years of learning in NSW and one year in Victoria. The differences between Present and Withdrawn students were much smaller: negligible in Year 3 and 15 points in Year 9 in both NSW and Victoria, the latter being equivalent to about six months of learning.
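The direction of this bias is easy to illustrate. Below is a minimal sketch with a hypothetical cohort (the cohort size, scores and withdrawal pattern are invented for illustration; this is not ACARA’s imputation procedure):

```python
# Illustrative sketch only: hypothetical cohort and scores, not ACARA's
# imputation procedure. Shows how losing the lowest scorers lifts a
# school's published mean.
import random
import statistics

random.seed(1)

# Hypothetical Year 9 cohort of 60 students with NAPLAN-like scores.
scores = [random.gauss(580, 70) for _ in range(60)]

full_mean = statistics.mean(scores)

# Suppose the six lowest scorers (10%) are withdrawn or kept home on test day.
sitting = sorted(scores)[6:]
reported_mean = statistics.mean(sitting)

print(f"Mean if all students sit:      {full_mean:.1f}")
print(f"Mean with lowest 10% missing:  {reported_mean:.1f}")
print(f"Artificial boost:              {reported_mean - full_mean:.1f} points")
```

Even a modest loss of the weakest scorers visibly lifts the published mean, which is the effect described above.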

As participation declines, the reliability of the average scores of individual schools will also decrease, particularly in small schools, where the statistical uncertainty or error band around the average score is relatively large because of the small number of students sitting the tests. This increases the unreliability of school rankings and league tables as a guide to school quality.
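The small-school problem can be seen with a back-of-the-envelope calculation. Assuming, purely for illustration, a within-school spread of about 70 NAPLAN points, the margin of error around a school’s mean grows quickly as fewer students sit the test:

```python
# Back-of-the-envelope sketch only. Assumes, purely for illustration, a
# within-school spread of about 70 NAPLAN points; shows how the error band
# around a school's mean widens as fewer students sit the tests.
import math

assumed_sd = 70  # hypothetical within-school standard deviation

for n_sitting in (100, 50, 25, 10):
    standard_error = assumed_sd / math.sqrt(n_sitting)
    margin_95 = 1.96 * standard_error  # approximate 95% confidence margin
    print(f"{n_sitting:>3} students sitting: mean pinned down only to "
          f"about +/- {margin_95:.0f} points")
```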

There are also implications for state-wide comparisons and trends. Lower participation rates mean that more and more test scores are imputed by ACARA, and this is not as accurate as having students actually sit the tests. If a state or territory has a low level of participation, then more scores in that jurisdiction will be imputed, and this could bias inter-jurisdictional comparisons and trends.

The research commissioned by the COAG Reform Council recommended that further work should be done on examining the impact of non-participation in NAPLAN on trends in achievement and the comparability of results across jurisdictions. It found that variability in participation rates may have “a substantially important impact” on jurisdictional comparisons of NAPLAN results and that their impact on achievement trends is “potentially of concern” [p. 15]. It also said that examination of the method of imputing test scores is warranted because it is statistically complex and the impact that it may or may not have on the indicators is not immediately clear.

To summarise, data on participation rates in NAPLAN clearly shows that parents are increasingly exercising their right to withdraw children from the tests and that a significant proportion of students, especially secondary students, are absent on test day. While the absolute percentages of students withdrawn or absent are small, they are likely to continue to increase as more and more parents become aware of their rights. Declining participation will affect the reliability of published school results, inter-school comparisons and league tables; trends in national and state/territory achievement; and inter-jurisdictional comparisons of results. There is evidence of growing official concern about the reliability of NAPLAN results.

Trevor Cobbold

Charts on the Withdrawal of Students from NAPLAN Tests.pdf

Big Increase in Students Withdrawn from NAPLAN Tests.pdf


The Great School Accountability Hoax

Sunday June 20, 2010

Diane Ravitch’s latest blog shows how school accountability in the United States is a great hoax. It serves as a major warning on the future of school education in Australia under a policy regime that slavishly follows the US lead without regard to the evidence.

The following is an edited version. Read the full version at Education Week.

The evidence continues to accumulate that our “accountability” policies are a great fraud and hoax, but our elected officials and policymakers remain completely oblivious to the harm caused by the policies they mandate.

Over the past several years, efforts to “hold teachers accountable” and “hold schools accountable” have produced perverse consequences. Instead of better education, we are getting cheating scandals, teaching to bad tests, a narrowed curriculum, lowered standards, and gaming of the system. Even if it produces higher test scores (of dubious validity), high-stakes accountability does not produce better education.

In their eagerness to show “results,” states are dumbing down their standards. The New York state education department dropped cut scores on the state tests from 2006 (the year that annual testing in grades 3-8 was introduced) to 2009. In 2006, a student in 7th grade could achieve “proficiency” by getting 59.6 percent of the points correct on the state math test; by 2009, a student in the same grade needed only 44 percent of the available points.

When New York state’s education department was criticized for dropping the cut scores on its tests, officials responded by insisting that the department dropped the cut scores because the tests were actually harder than in previous years. This was utter nonsense because the passing rates soared as the cut scores fell, which would not have been the case if the tests were “harder.” So, although it never acknowledged its past chicanery, the state education department claimed that the tests would really, really, truly be hard this year and that standards would once again be high.

The scandal of high-stakes testing is not limited to New York and Illinois. Last week, The New York Times reported about the ubiquity of cheating scandals across the nation. My guess is that it revealed only the tip of the iceberg.

I was in Baltimore on May 27, when The Baltimore Sun wrote about a major cheating scandal at an elementary school that had been widely recognized for its excellent test scores. In 2003, only one-third of the students in the school passed the state reading test, but within four years, almost all did. This was a “miracle” school; it won a federal Blue Ribbon for its remarkable gains. But it turned out that the school’s success was phony: Someone had erased and corrected many student answers.

The more that test scores are used to measure teacher effectiveness and to determine the fate of schools, the more we will see such desperate efforts by teachers and principals to save their jobs and their schools.

Yet even as more cheating scandals are documented, even as the perfidy of state testing agencies is documented, our federal policymakers plunge forward, blithely imposing unproven policies as well as “remedies” that have been tested and found wanting. Latest example: The June 9 issue of Education Week has a front-page story with this headline: “Merit-Pay Model Pushed by Duncan Shows No Achievement Edge”.

Merit pay has been tried and found ineffective again and again since the 1920s, but repeated failure never discourages its advocates, who are certain that if the incentives were larger, or if some other element was adjusted, it would surely work.

More emphasis on test scores. More money for teachers if the scores go up. More punishment for teachers and schools if the scores don’t go up. More cheating. More gaming the system. More concentration on basic skills (they count) and more indifference to the arts, history, science, foreign languages, etc. (they don’t count).

Diane Ravitch is Professor of Education at New York University and a former Assistant Secretary of Education under George Bush Snr. Her latest book is The Death and Life of the Great American School System.


Gillard Concedes Under the Collective Pressure of Teachers

The agreement of Minister Gillard to set up a working party to oversee the My School website has been the consequence of many teachers across the country making it clear that we are unhappy with the government’s intolerable treatment: silencing us and leaving us without any say in our professional conduct whatsoever.

Misclassification Errors in My School Comparisons: A Response to Professor McGaw

Monday April 26, 2010

The Chairman of the Australian Curriculum, Assessment and Reporting Authority, Professor Barry McGaw, has rejected criticisms made by Save Our Schools (SOS) that the “like school” comparisons on the My School website are biased in favour of private schools. But his response effectively proves the SOS case, and he fails to produce any evidence to support his claims. Indeed, the available evidence supports the SOS case.

The basis of the SOS criticism is that My School uses a methodology to determine so-called “like schools” which is based on the socio-economic status (SES) characteristics of small geographical areas rather than on the SES of families. The problem is that each area contains a mix of families and the higher income families more often choose private schools. For example, 55% of high income families choose private secondary schools compared to 26% of low income families.

This causes My School to systematically over-estimate the level of socio-economic disadvantage in private schools and under-estimate disadvantage in government schools. Consequently, My School compares the test results of supposedly similar private and government schools, but which may have large differences in the SES composition of their enrolments.

Professor McGaw says the SOS argument relies on the geographical areas being heterogeneous [Australian Financial Review, 17-18 April 2010]. But “they are not,” he says.

They are fairly small districts of a couple of hundred households and the evidence is that they are fairly homogenous.

Here Professor McGaw contradicts himself and concedes the point: describing the areas as only “fairly homogenous” is an admission that they are not fully homogeneous, yet the My School methodology depends on the districts being homogeneous. Districts that are only fairly homogeneous will include families with different socio-economic characteristics, some with a higher SES and some with a lower SES.

Once it is conceded that there are differences in the SES of families within districts, the SOS criticisms of the My School methodology come into play, because the higher SES families are more likely to choose private schools. This leakage, together with the use of area-based measures of SES to rate the SES of schools, leads to a bias in the comparison of school results which favours private schools.

The use by My School of an area-based measure of SES to approximate the characteristics of individual families in an area rests on an assumption of population homogeneity. That is, it assumes that like people live near like people. The validity of this assumption has been tested by studies comparing individual or family SES scores with scores assigned on the basis of the average characteristics of residents living within small areas (called census collection districts). The evidence is that census collection districts are not homogeneous.

A study by the Australian Council for Educational Research has shown that the correlation between individual and census collection district measures of SES for a national sample of secondary school students was unacceptably low, at between 0.36 and 0.45 [Ainley & Long 1995: 73]. The study found that the greatest loss in precision occurs in moving from an individual-based analysis to the collection district level, and that the additional loss of validity when moving from collection districts to larger geographical areas such as postcodes is not great [81-83].

The Australian Bureau of Statistics has demonstrated that some highly advantaged families live in low SES areas and some disadvantaged families live in high SES areas. In an analysis of census data for Western Australia it found that individual and family relative socio-economic disadvantage was quite diverse within small areas [Baker & Adhikari 2007].

It found that about 20 per cent of people in the most disadvantaged quartile of the individual SES measure lived in census collection districts that were in the highest three deciles of the area-based Index of Relative Socio-economic Disadvantage (IRSD). Over a third of people in the bottom quartile lived in areas in the top five IRSD deciles and six per cent of people in the lowest group in the individual based SES measure lived in collection districts found in the highest IRSD decile.

On the other hand, nearly 20 per cent of people in the most advantaged quartile for individual SES lived in areas that were classified in the bottom three deciles of the IRSD. Over a third of people in the most advantaged quartile lived in areas in the bottom five deciles. Five per cent of people in the highest individual based SES group lived in the collection districts found in the lowest IRSD decile.

The ABS researchers conclude from this study that “there is a large amount of heterogeneity in the socio-economic status of individuals and families within small areas” [1].

This conclusion directly refutes Professor McGaw’s claim that small areas are not heterogeneous. Rather, “there is a large amount of heterogeneity” as SOS has argued.

Heterogeneity in the SES of families in small districts means there will be errors in classifying the SES of individuals, such as students, on the basis of the average SES of the areas. This is emphasised by the ABS:

A relatively disadvantaged area is likely to have a high proportion of relatively disadvantaged people. However, such an area is also likely to contain people who are not disadvantaged, as well as people who are relatively advantaged. When area level indexes are used as proxy measures of individual level socio-economic status, many people are likely to be misclassified. This is known as the ecological fallacy. [ABS 2008: 3; see also Adhikari 2006: 6]

Moreover, the potential for error is quite significant:

These findings indicate that there is a high risk of the ecological fallacy when SEIFA is used as a proxy for the socio-economic status of smaller groups within an area and there is considerable potential for misclassification error. [Baker & Adhikari 2007: 1]

Other researchers have also noted this, for example:

Assigning a value of socioeconomic status to a student on the basis of the area in which they live will introduce a potential error and the magnitude of the error will be greater when the social background of those living in the area is relatively heterogeneous. [Ainley & Long 1995: 53; see also Preston 2010]

The potential for misclassification errors in using area-based measures of SES to approximate individual characteristics was also noted in a report commissioned by the Performance Measurement and Reporting Taskforce of the national education ministers’ council [Marks et al. 2000: 27].

Heterogeneity of family SES in small areas has fatal implications for the reliability of comparisons of so-called “like schools” by My School. It introduces the potential for significant errors in classifying students and measuring school SES. This is clearly demonstrated in an analysis of the Penrith statistical area using 2001 Census data [Preston 2010].

The study shows that a government school drawing students from the ten most disadvantaged collection districts in the region would have 16 students from low income families for every student from a high income family. In contrast, an independent school drawing from the same collection districts would have equal numbers of low and high income families. Yet, using an ABS area-based index of socio-economic status which is similar to that used by My School, the two schools would be classified as “like schools”.
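The scale of misclassification that area-based measures can produce is easy to see with a toy simulation. The area compositions below are invented for illustration (this is not the ABS or Preston analysis), but the logic is the same: when every family is labelled with its area’s average SES, a large share of families end up misclassified.

```python
# Illustrative simulation only: hypothetical area compositions, not census
# data. Every family is labelled with the average SES of its small area,
# and we count how often that label is wrong (the ecological fallacy).
import random

random.seed(2)

families = []
for area in range(500):                    # 500 hypothetical small areas
    share_low = random.uniform(0.2, 0.8)   # each area is a mix of low and high SES
    for _ in range(200):                   # roughly 200 households per area
        is_low_ses = random.random() < share_low
        families.append((is_low_ses, share_low))

misclassified = sum(
    1 for is_low_ses, share_low in families
    if (share_low > 0.5) != is_low_ses     # area-average label vs actual family SES
)

print(f"Families misclassified by their area label: "
      f"{100 * misclassified / len(families):.0f}%")
```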

Professor McGaw and the Federal Education Minister claim that “like school” comparisons on My School are robust. The evidence so far available overwhelmingly indicates they are not. Apart from the errors in measuring school SES and the bias in favour of private schools, the comparisons also exclude a range of other factors which have a significant bearing on the classification of so-called “like schools”, thereby distorting comparisons of school results [Cobbold 2010a, 2010b].

Trevor Cobbold

References

Adhikari, Pramod 2006. Socio-Economic Indexes for Areas: Introduction, Use and Future Directions. Research Paper, Australian Bureau of Statistics, Canberra.

Ainley, John and Long, Michael 1995. Measuring Student Socioeconomic Status. In John Ainley; Brian Graetz; Michael Long and Margaret Batten, Socioeconomic Status and School Education, Australian Government Publishing Service, Canberra, June.

Australian Bureau of Statistics 2008. Socio-Economic Indexes for Areas (SEIFA) – Technical Paper.

Baker, Joanne and Adhikari, Pramod 2007. Socio-economic Indexes for Individuals and Families. Research Paper, Analytic Services Branch, Australian Bureau of Statistics, Canberra, June.

Cobbold, Trevor 2010a. Like School Comparisons do not Measure Up. Research paper, Save Our Schools, Canberra, February. See Research section.

Cobbold, Trevor 2010b. ‘Like School’ Comparisons on My School are Flawed and Misleading. Save Our Schools, Canberra, February.

Marks, Gary; McMillan, Julie; Jones, Frank L. and Ainley, John 2000. The Measurement of Socioeconomic Status for the Reporting of Nationally Comparable Outcomes of Schooling. Report to the National Education Performance Monitoring Taskforce, MCEETYA, March.

Preston, Barbara 2010. Notes on the ecological fallacy when area-based indexes of disadvantage/advantage are applied to schooling in Australia, Canberra, March.


School Daze

Tuesday April 20, 2010

The ‘like school’ comparisons on the My School website purport to compare the test results of schools with similar socio-economic student populations. However, like is not consistently compared with like. My School’s measure of the socio-economic status (SES) of schools is systematically biased in favour of private schools when comparing their results with so-called ‘like’ government schools.

The bias works in two separate but compounding ways. My School under-estimates the SES of private schools that draw enrolments from high SES families living in lower SES areas. It also over-estimates the SES of government schools because high SES families resident in their area tend to choose private schools.

There are two sources of this bias. One is that the Index of Community Socio-Educational Advantage (ICSEA) used to measure the SES of schools is based on the average socio-economic characteristics of the areas in which students live and not on the actual SES of their families. Studies by the Australian Bureau of Statistics show that some high income families live in low SES areas and vice versa, so the actual SES of some students will be above the area-average and others below the area-average.

ICSEA also fails to allow for differences in the proportion of high and low SES families that enrol in private and government schools. On average, 47% of high income families choose private schools compared to 24% of low income families. In the case of secondary schools, 55% of high income families choose private schools compared to 26% of low income families.

The greater leakage of high SES students from each area into private schools causes the ICSEA rating of private schools to under-estimate their actual SES because these students are classified according to their (lower) area SES measure rather than by their (higher) family SES.

On the other hand, the ICSEA rating of government schools over-estimates their actual SES because of the leakage of high SES students to private schools. Government schools take a greater proportion of low SES students, but these students are classified at the (higher) area SES rating rather than by the actual SES of their families. The lower SES students carry the higher area SES score, which is influenced by high SES families in the area whose children do not attend government schools. Thus, the level of disadvantage in government schools is under-estimated by ICSEA.
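To see how the two effects compound, consider a deliberately simplified sketch. The family SES scores and area composition below are hypothetical; only the private-school enrolment shares (55% of high income and 26% of low income secondary families) are taken from the figures quoted above.

```python
# Illustrative sketch only: hypothetical ICSEA-like family scores and a
# hypothetical area of 200 high-SES and 800 low-SES families. The enrolment
# shares (55% and 26%) are the secondary-school figures quoted above.
high_ses_score = 1150   # hypothetical family-level score
low_ses_score = 950     # hypothetical family-level score

area_average = (200 * high_ses_score + 800 * low_ses_score) / 1000   # 990

# Students flowing to the private school at the quoted rates; the rest
# enrol at the local government school.
private = {"high": 0.55 * 200, "low": 0.26 * 800}                    # 110 and 208
govt = {"high": 200 - private["high"], "low": 800 - private["low"]}  # 90 and 592

def actual_mean(school):
    total = school["high"] + school["low"]
    return (school["high"] * high_ses_score + school["low"] * low_ses_score) / total

print(f"Area-based score assigned to both schools: {area_average:.0f}")
print(f"Private school's actual family SES:        {actual_mean(private):.0f}")
print(f"Government school's actual family SES:     {actual_mean(govt):.0f}")
```

Both schools carry the same area-based rating, yet the private school’s intake sits well above it and the government school’s well below it, which is the bias described above.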

This systematic bias in the measurement of the SES of government and private schools can be illustrated by an example from My School.

My School classifies the wealthy King’s School in Sydney as having the same SES rating as Gundaroo Public School, a small school in a semi-rural area of NSW near Canberra. The King’s School has excellent test results with many green colour codes for above average results while Gundaroo has many red colour codes for below average results.

However, far from being ‘like schools’, they are very unalike schools.

The SES rating for the King’s School is likely under-estimated because it traditionally draws many students from farming families. About 30% of its enrolments are boarding students, and only the wealthiest rural families can afford tuition and boarding fees of over $36 000 a year for primary students. Yet, because these students are resident in lower SES rural areas, they carry a lower SES score than their actual family circumstances warrant. The relatively large proportion of these students attending the King’s School therefore significantly reduces its ICSEA rating.

On the other hand, the ICSEA rating for Gundaroo Public School is likely an over-estimate of its actual SES composition.

The Gundaroo area has a large proportion of high income, well educated, highly skilled households, but it also has a significant proportion of lower SES families. Census data shows that about 12% of households in the Gundaroo region are relatively low income. Some 32% of the population over 20 years of age did not finish Year 12, 25% had certificate-based qualifications and 30% are employed in lower skilled occupations.

Only about half of Gundaroo’s primary age children attend Gundaroo Public School. Many high SES families send their children to schools in Canberra leaving mostly lower and middle SES families at the local school. However, its ICSEA rating is based on the average SES characteristics of the area, including high SES families who do not attend the school, and therefore over-estimates its actual SES.

This comparison of unalike schools is not an isolated example. There are numerous others on My School.

Another source of bias arises because international students enrolled in many high fee private schools are excluded from the ICSEA ratings. They are excluded because it is not possible to geo-code their addresses to a Census collection district.

This also artificially lowers the rating of some high SES schools because it is only wealthy overseas families who can afford the high tuition and boarding fees and associated costs of sending their children to Australia. This bias may not be large because of the relatively small number of international students, but it does add to the inherent bias of ICSEA.

Thus, the ‘like school’ comparisons on My School tend to pit higher SES private schools against lower SES government schools. This shows private schools in a more favourable light because students from higher SES families tend to have higher average results than students from lower SES families.

It should also be noted that ICSEA omits a range of factors that strongly influence school results. These include differences in the student composition of schools by gender, ethnic sub-groups and students with disabilities as well as differences in funding, school size, student mobility between schools, student selection and private tutoring.

Some of these omissions may further disadvantage government schools in comparisons with their “like” private schools. For example, schools with higher proportions of students with disabilities participating in the tests may have lower average results than other schools with a similar ICSEA value. Government schools generally have higher proportions of students with disabilities than private schools.

My School is a travesty of ‘like school’ comparisons. Its biased comparisons in favour of private schools will unfairly affect the reputations of government schools and the careers of their teachers and principals. It will also mislead parents in choosing schools and mislead policy makers in drawing conclusions about best practice in schools.

Trevor Cobbold

This article was originally published in the April newsletter of the Australia Institute.


US School Cheating Scandal Sends Warning on My School

Monday February 15, 2010

One of the largest school cheating scandals ever in the US is under investigation in the state of Georgia. Last week, the New York Times and the Atlanta Journal-Constitution reported that one in five of Georgia’s public elementary and middle schools is under investigation for changing student answers on state tests.

The case has momentous implications for Australia now that My School is up and running and test scores have become the measure of a school’s worth, as well as that of its teachers and principal.

The Georgia State Board of Education has ordered investigations at 191 schools across the state where evidence had been found of tampering on answer sheets for the state’s standardised achievement test. Another 178 schools will see stepped-up monitoring during testing.

The investigation is the result of an analysis by the Georgia Governor’s Office of Student Achievement. It flagged any school that had an abnormal number of erasures on answer sheets where the answers were changed from wrong to right, suggesting deliberate interference by teachers, principals or other administrators.

According to the analysis, more than half of elementary and middle schools in the state had at least one classroom where erasure marks were so unusual that cheating may have occurred.

Four percent of schools were placed in the “severe concern” category, which meant that 25 percent or more of a school’s classes were flagged. Six percent were in the “moderate concern” category, which meant that 11 percent to 25 percent of its classes were flagged, and 10 percent raised “minimal concern,” meaning 6 percent to 10 percent of its classes were flagged.
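As a rough illustration of the kind of screening described above, the sketch below flags classes whose average wrong-to-right erasure counts sit far above a state-wide baseline. All numbers and the threshold are hypothetical; this is the general idea, not the actual Georgia methodology.

```python
# Illustrative sketch only: a simplified version of the screening the Georgia
# analysis describes (flagging classes whose wrong-to-right erasure counts sit
# far above the state-wide average). All numbers here are hypothetical.

state_mean_erasures = 2.0      # hypothetical state-wide average per student
state_sd_erasures = 1.5        # hypothetical state-wide standard deviation
flag_threshold = state_mean_erasures + 4 * state_sd_erasures   # = 8.0

# Hypothetical average wrong-to-right erasures per student, by class.
classes = {"4A": 1.8, "4B": 2.4, "5A": 3.1, "5B": 26.0}

flagged = {name: avg for name, avg in classes.items() if avg > flag_threshold}
print("Classes flagged for investigation:", flagged)
```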

The main focus of the investigation is Atlanta, where 70% of all elementary and middle schools face investigation. At one school, for example, an average of 27 of 70 answers on each fourth-grader’s math test were changed from wrong to right in one classroom. At another, an average of 26 of 70 answers on the fifth-grade math test were erased and corrected.

More than half the classes at 27 schools were flagged, and at four Atlanta schools more than 80 percent of the classes were flagged.

Experts said it could become one of the largest cheating scandals in the era of widespread standardized testing in the US. Gregory Cizek, Professor of Educational Measurement and Evaluation at the University of North Carolina, told the Atlanta Journal-Constitution (11 February) that the extent of the suspicious answer changes is stunning. He has studied cheating for more than a decade, but said he didn’t know of another state that has detected so many potential problems. He told the New York Times (11 February): “This is the biggest erasure problem I’ve ever seen”.

It is the second time in two years that cheating has been exposed in Georgia’s state tests. In 2009, four schools were found to have extensively changed student answer sheets. Court records show that in one case, the school principal and assistant principal had systematically erased and changed student answers to ensure that the school met the “adequate yearly progress” benchmark of the Federal government’s No Child Left Behind Act (Atlanta Journal-Constitution, 12 February).

The scandal is the latest in a series of cheating scandals across the United States since the No Child Left Behind legislation came into force. If schools fail to meet the federal benchmarks of the No Child Left Behind Act, they are placed in a “needs improvement” category and must offer extra tutoring and allow parents to transfer their children to higher performing schools.

The legislation has placed principals and teachers under intense pressure to improve school test scores, and many come to think that they must cheat to survive. The personal stakes are high: teachers and principals fear their careers will suffer if they fail to lift their school’s results.

Last year a survey of public school teachers in Chicago by the Chicago Sun-Times (29 August) and the teachers’ union revealed that one-third of all teachers had been pressured in the last year by principals and boards to change student grades. Twenty per cent said that they had actually raised grades under this pressure.

Extra pressure is also being placed on Georgia teachers as a result of a new initiative that ties teachers’ pay to student performance.

Erasing and replacing student answers is just one type of cheating carried out by schools. Others include filling in unanswered questions, helping students with answers during tests and alerting teachers and students to test questions before the test is taken.

The scandal has raised fresh questions about inadequate security procedures for tests in schools and the absence of audits of test results across most US states. Last August, a report by the US Government Accountability Office identified many gaps in the security of tests and found that many US states faced challenges in ensuring reliable and valid assessments.

The Georgia scandal was revealed by computer scanning of test answer sheets to screen for erasures. Testing experts say that most states fail to use even this most elementary means of monitoring for cheating. Professor Cizek told the New York Times (12 February) that there is no incentive to vigorously pursue cheating because parents and administrators all like to see higher test scores.

Jennifer Jennings, Associate Professor of Sociology at New York University, who studies school accountability, said that the Federal government should require states to check their test results. “It’s absolutely scandalous that we have no audit system in place to address any of this,” she said.

Some US states have better procedures in place. For example, South Carolina has been using erasure analysis on tests since the 1980s. If a school is flagged for suspicious activity, the state sends testing monitors the following year, and sometimes educators are criminally prosecuted or lose their teaching certificates.

In response to the latest cheating revelations, the Georgia State Board of Education is setting up new test security procedures. The Atlanta Journal-Constitution (11 February) reported that state testing monitors will be sent to each of the 74 “severe” schools and that monitors will conduct random checks at the 117 “moderate” schools.

Schools in either of those categories will have to randomly rotate teachers to different classrooms during testing; teachers cannot supervise tests for their own students. At an additional 178 schools, about which the state had “minimal” concerns, teachers will either have to rotate or take additional steps to ensure proper monitoring.

The Georgia House of Representatives will soon debate bills that would make it unlawful to tamper with state tests or help others cheat on them. No law in Georgia currently makes cheating a crime. Under the bills being considered, violators would be guilty of a misdemeanour, could lose their pensions and could possibly be fined.

The Georgia case raises serious questions about the adequacy of security around Australia’s national literacy and numeracy tests (NAPLAN) and the auditing of schools.

The administration of NAPLAN relies on the principals, school test co-ordinators and teachers who administer the tests adhering to the guidelines. It is open to considerable abuse by principals and teachers who face undue pressure to improve results. Some school co-ordinators and teachers have told Save Our Schools that the security arrangements for NAPLAN are totally inadequate to stop cheating.

Test booklets are delivered to schools a week or 10 days beforehand, and there is little to stop an unethical principal or co-ordinator from opening them and alerting teachers to questions to practise in their classes. Tests are mostly supervised by a single teacher in the classroom, except in cases where assistants may be present for students with disabilities, and there is no monitoring of whether some teachers help students with answers. There are also ample opportunities after the tests are taken to change answers or fill in unanswered questions.

My School is now a maker and breaker of school reputations and careers. Soaring test scores may put a principal on the administrative ladder, where salaries can rise well into six figures. Conversely, bad scores can mean unfavourable assessments and may even cost principals their jobs, as Julia Gillard has threatened several times.

In this environment, the pressure on principals and teachers to improve their school’s test scores will be intense. It will be no surprise if many are tempted to take the path of their colleagues in Georgia.

Security and auditing arrangements for NAPLAN must be reviewed and upgraded before the next round of tests in May to ensure their integrity. It is a task for the Australian National Audit Office as an independent authority reporting directly to the Australian Parliament on public sector administration and accountability.
