There is extensive research evidence of the impact of family background on student results. Many studies from the United States, the United Kingdom, the OECD and Australia also show a school socio-economic composition (SEC) effect whereby students attending schools with a high concentration of students from poor families tend to have lower results than students from similar backgrounds attending schools with higher proportions of students from well-off backgrounds.
There is a “double jeopardy” effect for students from low socio-economic status (SES) families: they tend to be disadvantaged because of their circumstances at home, and when they are also segregated into low SES schools they are likely to fare even worse. As a result, increasing social segregation between schools tends to lead to worse results for low SES students and to widen the achievement gap between high SES and low SES students.
This school compositional effect has been questioned by some researchers. For example, the Australian researcher Gary Marks claims that it is a “statistical artefact”. More broadly, Marks argues that the impact of SES on student results is weak, that genetic factors are a much more substantial influence, that low SES students have lower intelligence, and that little can be done to offset the effects of SES. His studies are used by private school organisations such as Independent Schools Victoria, and by their advocates such as Kevin Donnelly, to claim there is no case for additional funding for disadvantaged students and schools.
A new study by Australian academics published in the British Journal of Sociology of Education conclusively debunks Marks’ claims that school composition has a negligible effect on student achievement. It shows that the statistical methods used by Marks are “very unlikely to detect significant SEC effects” [p. 10]. It says that the methods he uses actually remove variance in results attributable to school composition.
A significant issue is that one of the methods (called “residualised change models”) used by Marks and others to analyse the effect of SEC includes measures of prior achievement at both the student and school level to allow estimation of the effects of other variables. The problem with this approach is that it removes the effect of factors such as school resources, the SES of students and schools, parental involvement and teaching practices that also influenced prior achievement. For example, the new study analysed the 2017 NAPLAN results and found that prior school level achievement explained 50-74% of the variance in SEC in Year 5, depending on the domain tested. As a result, the methodology used by Marks and others likely underestimates the effect of SEC on student results.
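The mechanics of this under-estimation can be illustrated with a minimal simulation (a sketch with an assumed linear data-generating process, not the study’s own analysis). If SEC has already shaped prior achievement, then controlling for prior achievement absorbs that part of SEC’s influence, and the estimated SEC coefficient shrinks well below its total effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
sec = rng.normal(size=n)                        # school socio-economic composition
prior = 0.8 * sec + rng.normal(size=n)          # SEC has already shaped prior achievement
current = 0.8 * sec + 0.5 * prior + rng.normal(size=n)

def ols_coefs(cols, y):
    # ordinary least squares; returns [intercept, slope(s)]
    X = np.column_stack([np.ones(len(y))] + cols)
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_total = ols_coefs([sec], current)[1]          # total SEC effect (true value 1.2)
b_resid = ols_coefs([sec, prior], current)[1]   # SEC effect after controlling prior (0.8)
print(round(b_total, 2), round(b_resid, 2))
```

In this toy setup the “residualised” coefficient captures only the direct path from SEC to current achievement; the indirect effect that SEC exerted through prior achievement is stripped out, which is precisely the concern the study raises.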
The study compared its own analysis with that of Marks and found that including both SEC and prior school achievement yields much smaller SEC effects than when prior school achievement is excluded. The study found that SEC explains 79-87% of the difference between schools in the growth in student achievement from Year 3 to Year 5. Thus, the method used by Marks vastly underestimates the effect of SEC on school results.
Another method (called “fixed effects models”) used by Marks and others to analyse the impact of SEC also found very small effects. However, this method removes stable differences between students because it is designed to analyse the impact of changes in individual characteristics over time. The problem with using it to analyse the effect of school composition is that school composition tends to change little over time. As a result, this approach finds very small effects of SEC on student achievement growth:
… the limitations of the fixed-effects methodology largely explain why such models found that changes in SEC had little effect on academic achievement growth. [p. 5]
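That limitation can be shown in a small two-period panel simulation (again a sketch under assumed parameters, not the study’s data). Fixed-effects estimation identifies the SEC coefficient only from within-student *changes* in SEC; when composition barely moves between years, differencing strips out almost all of the SEC signal and the estimate becomes extremely imprecise:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
sec_t1 = rng.normal(size=n)                        # SEC of each student's school, year 1
sec_t2 = sec_t1 + rng.normal(scale=0.05, size=n)   # composition barely moves by year 2
ach_t1 = sec_t1 + rng.normal(size=n)
ach_t2 = sec_t2 + rng.normal(size=n)

def slope_se(x, y):
    # simple-regression slope and its standard error
    x, y = x - x.mean(), y - y.mean()
    b = (x @ y) / (x @ x)
    resid = y - b * x
    return b, np.sqrt(resid @ resid / (len(y) - 2) / (x @ x))

b_lvl, se_lvl = slope_se(sec_t1, ach_t1)                      # cross-sectional estimate
b_fe, se_fe = slope_se(sec_t2 - sec_t1, ach_t2 - ach_t1)      # fixed-effects (first difference)

signal_left = np.var(sec_t2 - sec_t1) / np.var(sec_t1)        # SEC variance surviving differencing
print(signal_left, se_fe / se_lvl)
```

With less than 1% of the SEC variance surviving the differencing, the fixed-effects standard error is an order of magnitude larger than the cross-sectional one, so near-zero and statistically insignificant estimates are the expected outcome even when SEC genuinely matters.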
The new study provides a detailed mathematical analysis of why these models find little effect of school composition on student achievement growth.
The study also notes that some researchers have argued that measurement error may have led to false or exaggerated findings of school composition effects, and many previous studies of the impact of school composition on student achievement did not control for it. However, structural equation modelling can account for measurement error in the variables. The study applied this approach to PISA 2015 data and found that school composition has a statistically significant effect on student achievement.
The study also compared the results from this method with those of methods used in previous studies. Interestingly, it found a significantly larger effect of school composition in some countries when controls for measurement error are used. As a result:
Our findings suggest that it is not reasonable to reject prior compositional research that has not controlled for measurement error… [p. 10]
The findings of the study confirm that school composition has a significant effect on student achievement. It shows that this effect remains significant and can be even larger when measurement error is accounted for.
These findings have major implications for education policy in Australia. According to My School, nearly 20% of schools in Australia are highly disadvantaged with 50% or more of their students from low SES families. Some 94% of these highly disadvantaged schools are public schools.
Yet, data from the OECD’s Programme for International Student Assessment (PISA) in 2018 show that Australia continues to allocate more and better quality teacher and physical resources to high SES schools than to low SES schools. The gaps are amongst the largest in the OECD. The data also show that private schools have far more, and better quality, teacher and physical resources than public schools. All this has to change if Australia is to address the large achievement gaps between rich and poor.