
Evidence on Educational Leadership: is that sufficient?

Evidence about educational leadership or innovation in education does not guarantee problem-free implementation. There is growing awareness that evidence from one context may not hold in other contexts (see my last News Alert *1). There are also growing doubts about the results of social and medical science in general (see my earlier blog on Evidence *2). The new focus (Yong Zhao) on the negative side effects of seemingly good solutions complicates the issue of evidence further.
The practitioner, the teacher or the school leader, might have the impression that she or he is left empty-handed: not for lack of approaches, theories or concepts that could inspire thinking about problems in practice, but for lack of relevant evidence to guide the choice of solutions.
 
Surprisingly, it seems, there is at the same time a tendency to train teachers and school leaders in research skills, and to establish data teams in schools. Why use research methodologies that do not deliver even when used by experts? My answer would be that these methodologies really can help solve a particular problem at a particular school when teachers and school leaders are enthusiastic about using them (provided they are kept simple).
Optimists, however, also see an appealing prospect: that experiences in one school can be translated to other schools to foster innovation processes there. In reality, transfer still proves very difficult. In the Netherlands we are experimenting with Laboratories for Research in Education, where practitioners and university researchers cooperate in solving practical problems. A recent report (in Dutch *3) states that these laboratories work, but it says little about the difficulties of transferring solutions, or about results (apart from research skills).
 
Even more surprising is that in some places there is large-scale funding for organisations that help teachers and school leaders find out what evidence exists: John Hattie, Australia (Visible Learning); Robert Slavin, USA (Best Evidence Encyclopedia); the Education Endowment Foundation, UK (Teaching and Learning Toolkit, TLK); Evidence for Learning, EfL (the Australian version of the TLK); and a new organisation to be established in Australia (see 11 below).
Experts from these organisations are involved in serious discussions about what counts as evidence.
Hattie has his critics, and recently Slavin became his main critic (see 4 below), the main issue being the value of meta-analyses. The TLK is questioned by Biesta. (When the gurus wrestle, step aside.)
The EEF and EfL have already discovered that their main problem might not be evidence but implementation, yet that is not their focus. What they add are rather simple project cycles (see 13 and 17); their focus remains on more (nuanced) evidence.

It would be nice if researchers could admit that delivering useful evidence to practitioners is becoming more and more complex. It would also be nice to admit that current evidence concerns very small issues or strands that bear little relation to the macro changes predicted for schools.
 
It seems rather safe to predict that within ten years we will discover that little evidence has been added to what we currently have. Of course there will be new developments and new foci, but they may not bring more clarity to the already small 'treatment effects' of schooling *4 (see 14).

The real burden is on the school leader and the teacher, who must judge what is best to do given the specifics of their school or their classes. For that reason I predict that the next focus of support to schools (after the current focus on evidence) will be supporting school leaders and teachers in choosing from limited evidence and in dealing with questions of implementation: support coming not mainly from experts outside the school but from colleagues both inside and outside the school (see 5).

The current situation, if well understood, is a blessing in disguise. Acknowledging the failure to deliver relevant evidence offers teachers (and leaders) the opportunity to (again) become the professionals we need: freed from excessive bureaucracy with its strict models and rules, and taking responsibility for dealing with a future of foreseen drastic, almost disruptive changes. We need all their creativity and out-of-the-box thinking to develop new environments in which all children flourish.
(Perhaps in contrast, I am also convinced that we need national rules and regulations to fight inequity in education, and that acting on them is not a free choice. But that is for another blog.)
 
Below is an overview of (publicly available) books, blogs and articles that I consider key publications of the first half of 2018 on these topics, the latest on top. Not all of them have evidence as their main theme, but all include important remarks on it. Welcome to my collection of tweets in a blog, my ‘tweetsblog'. Take your pick.

Here is my collection, with short quotes from the authors. Personal remarks are in red.

1. Yong Zhao, book, What Works May Hurt: Side Effects in Education (June 29)
https://www.tcpress.com/what-works-may-hurt%E2%80%94side-effects-in-education-9780807759059
In his new book, Yong Zhao, distinguished professor and specialist in education policy, shines a light on the long-ignored phenomenon of side effects of education policies and practices, bringing a fresh and perhaps surprising perspective to evidence-based practices and policies. Identifying the adverse effects of some of the “best” educational interventions with examples from classrooms to boardrooms, the author investigates causes and offers clear recommendations. (Publisher)
Education as a field has been slow to improve because it largely has failed to build on past experience (Bryk, 2015). One way to build on past experience is to consider side effects as an integral part of research (p. 127)
This book is intended to build a strong case for treating effects of educational treatments the same way as the medical field treats effect of medical products: effects and side effects as two sides of the same coin (p. 128)
Very important and original contribution to the field.

2. Evidence in practice with Philippa Cordingley from CUREE, webpage (June 29)
http://evidenceforlearning.org.au/news/evidence-in-practice-with-philippa-cordingley-from-curee/
Education research is generally developed to feed the interests of researchers and there isn't enough focus on practitioners' questions. CUREE systematically collects the questions of teachers and school leaders to shape the education research agenda in order to develop evidence that makes a difference in practice. Evidence in education would be much better if we were systematic about finding out what teachers and school leaders want to know….
Good use of evidence in classrooms is exemplified by three things:
1. Tools that help teachers to collect and record evidence to subsequently use it for reflection and interrogate it with more depth. Tools make the learning more visible and make it possible to record learning;
2. An approach to systematically collect more and richer evidence – which might involve working with a sub-group or sample of students, as it is near impossible to collect rich evidence for a class of 30 students;
3. Prior planning about the aspect of the teaching and learning to examine: teachers thinking ahead about the type of evidence that will tell them whether what they're trying to do is really connecting with their aspirations for their students.
A relevant approach, but it comes at a price; see the CUREE shop.

3. Yong Zhao, Alma Harris and Michelle Jones, webpage, Pisa: meaningless at best and destructive at worst (June 28)
https://www.tes.com/news/pisa-meaningless-best-and-destructive-worst
For nearly two decades, this triennial assessment program (Pisa) has been telling education systems what they should do, despite its claim of the opposite. In his book (World Class: how to build a 21st-century school system), Schleicher, the chief orchestrator of Pisa, brings together what Pisa has been telling, without telling, what education systems should do in order to become successful and secure better Pisa outcomes. Many of the recommendations, however, are confusing and meaningless at best and destructive at worst because they are drawn from self-contradicting evidence….
Pisa has been a major provider of educational cures but it has never discussed the side effects of its prescriptions. It is unlikely that it will voluntarily study and disclose the potential harms of its recommended strategies. Thus it is up to policymakers, education professionals, parents, and students to watch for negative side effects because they exist. What works can hurt.
Strong opinions on a very influential OECD initiative. See also Schleicher at 7.

4. Robert Slavin, blog, John Hattie is Wrong (June 21)
https://robertslavinsblog.wordpress.com/2018/06/21/john-hattie-is-wrong/
John Hattie is a professor at the University of Melbourne, Australia. He is famous for a book, Visible Learning, which claims to review every area of research that relates to teaching and learning. He uses a method called “meta-meta-analysis,” averaging effect sizes from many meta-analyses. The book ranks factors from one to 138 in terms of their effect sizes on achievement measures. Hattie is a great speaker, and many educators love the clarity and simplicity of his approach. How wonderful to have every known variable reviewed and ranked!
However, operating on the principle that anything that looks to be too good to be true probably is, I looked into Visible Learning to try to understand why it reports such large effect sizes. My colleague, Marta Pellegrini from the University of Florence (Italy), helped me track down the evidence behind Hattie’s claims. And sure enough, Hattie is profoundly wrong. He is merely shovelling meta-analyses containing massive bias into meta-meta-analyses that reflect the same biases.
For a reaction from Peter DeWitt, including Hattie's comments on this blog post, see note *5.
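The mechanics of Slavin's objection are simple arithmetic. Below is a minimal sketch (my own illustration with invented numbers, not Slavin's analysis): if every meta-analysis feeding a meta-meta-analysis overstates the true effect, the pooled average overstates it too, because averaging removes noise, not shared bias.

```python
# Hypothetical illustration of why a meta-meta-analysis inherits the
# biases of its inputs. All numbers are invented for the example.
true_effect = 0.10  # assumed true effect of some factor

# Three fictional meta-analyses, each inflated by the kinds of bias
# Slavin points to (small samples, narrow measures, weak comparisons).
meta_analyses = [
    {"effect": 0.55, "n_studies": 12},
    {"effect": 0.70, "n_studies": 8},
    {"effect": 0.48, "n_studies": 20},
]

# Pool them with a study-count-weighted average, one simple way to
# combine effect sizes across meta-analyses.
total_studies = sum(m["n_studies"] for m in meta_analyses)
pooled = sum(m["effect"] * m["n_studies"] for m in meta_analyses) / total_studies

print(f"pooled effect: {pooled:.2f}, assumed true effect: {true_effect:.2f}")
# Averaging narrows the spread around the inputs, but the pooled value
# stays far from the true effect because all inputs share the same bias.
```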

5. Andy Hargreaves, book, Collaborative Professionalism (June 12)
https://uk.sagepub.com/en-gb/eur/collaborative-professionalism/book247835
Collective autonomy means that educators have more independence from top-down bureaucratic authority, but less independence from each other. Collective autonomy values teachers’ professional judgment that is informed by a range of evidence, rather than marginalizing that judgment in favor of the data alone. But collective autonomy is not individual autonomy. Teachers are not individually inscrutable or infallible. The egg-crate has emptied; the sanctuary has gone. Instead, teachers’ work is open, and opened to each other, for feedback, inspiration, and assistance. (p. 83)
Educators working in the non-bureaucratic version of Professional Learning Communities are positive about PLCs. However, it is difficult, if not impossible, to get evidence on the effects of PLCs. Maybe we should refrain from trying to achieve that.

6. Adrian Simpson, blog, Meta-analysis: Magic or Reality (June 6)
http://evidencebasededucationalleadership.blogspot.com/2018/06/guest-post-meta-analysis-magic-or.html
So, when you are asked to conclude that one intervention is more effective than another because one study resulted in a larger effect size, check if ‘all other things equal’ holds (equal control treatment, equal spread of sample, equal measure and so on). If not, you should not draw the conclusion.
When the Teaching and Learning Toolkit invites you to draw the conclusion that the average intervention in one area is more effective than the average intervention in another because its average effect size is larger, check if ‘all other things equal’ holds for distributions of controls, samples and measures. If not, you should not draw the conclusion.
Compare with 9.
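To see Simpson's point in miniature, here is a small simulation (my own sketch, not from his post): two studies of the same intervention, with the same true gain, report very different effect sizes simply because one recruits a restricted-range sample, which shrinks the standard deviation in the denominator of the standardized effect size.

```python
import random
import statistics

random.seed(1)

def cohens_d(treatment, control):
    """Standardized mean difference: (M_t - M_c) / pooled SD."""
    m_t, m_c = statistics.mean(treatment), statistics.mean(control)
    s_t, s_c = statistics.stdev(treatment), statistics.stdev(control)
    pooled_sd = ((s_t ** 2 + s_c ** 2) / 2) ** 0.5
    return (m_t - m_c) / pooled_sd

def run_study(spread, true_gain, n=500):
    """One fictional study: same true gain, different sample spread."""
    control = [random.gauss(100, spread) for _ in range(n)]
    treatment = [random.gauss(100 + true_gain, spread) for _ in range(n)]
    return cohens_d(treatment, control)

# Both "studies" test an intervention with the same true gain of 5 points.
d_broad = run_study(spread=15, true_gain=5)      # population-like sample
d_restricted = run_study(spread=5, true_gain=5)  # restricted-range sample

print(f"broad sample:      d = {d_broad:.2f}")   # around 0.33
print(f"restricted sample: d = {d_restricted:.2f}")  # around 1.00
```

Nothing about the intervention differs between the two studies; only the spread of the sample does, yet the second reports an effect size roughly three times as large. That is why 'all other things equal' must hold before ranking interventions by effect size.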

7. Andreas Schleicher, blog, Five myths about education, debunked (May 29)
http://oecdeducationtoday.blogspot.com/2018/05/education-myths-debunked-world-class-andreas-schleicher.html
One of the reasons why we get stuck in education is that our thinking is framed by so many myths. So I start my new book, World Class: Building a 21st-century school system, by debunking some of the most common.
• “The poor will always do badly in school.” That’s not true: the 10% most disadvantaged kids in Shanghai do better in maths than the 10% most advantaged students in large American cities.
• “Immigrants will lower the performance of a country on international comparisons.” That’s not true: there is no relationship between the share of immigrant students and the quality of an education system; and the school systems in which immigrant students settle matter a lot more than the country where they came from.
• “Smaller classes mean better results.” That’s not true: whenever high-performing education systems have to make a choice between a smaller class and a better teacher, they go for the latter. Often it is small classes that have created the Taylorist culture where teachers end up doing nothing other than teaching, and don’t have the time to support individual students, collaborate with other teaching professionals or work with parents – activities that are hallmarks of high-performing education systems.
• “More time spent learning always means better results.” That’s not true: students in Finland spend little more than half the number of hours studying that students in the United Arab Emirates spend; but students in Finland learn a lot in a short time, while students in the United Arab Emirates learn very little in a lot of time.
• “The results in PISA are merely a reflection of culture.” That’s not true: rapidly improving education systems did not change their culture but their education policies and practices.
See also 3 on side-effects of PISA.

8. Managing skills in a time of disruption, webpage (24-25 May)
https://unevoc.unesco.org/go.php?q=TVET+Learning+Forum+Day+1
Rapid technological progress, threats to environmental sustainability, and demographic transitions are bringing about unprecedented disruptions in industries, economies, and societies globally. How can we adapt our training and educational systems for the changing world of work, which finds itself at the centre of these disruptions? The UNESCO-UNEVOC International Centre organized a two-day TVET Learning Forum, hosting more than 100 global TVET stakeholders, to understand the impact of different ongoing disruptions on skills and to discuss how TVET systems are responding to the challenges and opportunities in the era of digital disruption, sustainable development, and displacement of people.
It is no surprise that working in TVET heightens your sensitivity to the disruptive changes coming to education.

9. Jonathan Kay, Steve Higgins, Tanya Vaughan, webpage, The magic of meta-analysis (May 24)
http://evidenceforlearning.org.au/news/the-magic-of-meta-analysis/
For the purposes of this blog we will be looking at four main arguments against meta-analysis (Simpson, 2017) and what they mean for educators. The four main arguments raised are 1) comparison groups, 2) range restriction, 3) measure design and 4) that meta-analysis is a category error.
Compare with 4 and 6.

10. Andrew Davis, Evidence-based approaches to education, article: Direct instruction, anyone? (May 23)
http://journals.sagepub.com/doi/full/10.1177/0892020618765421
Evidence-based education is not merely flavour of the month, but of the year, or even of the decade. Education leaders, yearning for all that is best in terms of teaching and learning, may find the idea very appealing. It could feel like a wonderful gift – a seductive source of educational authority independent of subjective values. ‘Surely’, our leaders might think, ‘with such an approach, staff cannot dismiss us as pushing personal obsessions about direct instruction, group work or how reading should be taught. No – with evidence-based power, a new situation has arisen. We can now acquire ‘scientific’ support for our policies. We can put staff under moral pressure to follow us, under pain of being “unscientific” if they resist.’
Readers will sense a ‘but’ hovering in the wings. Cartwright and Hardie (2012) wrote a book about evidence-based policy, where they keep returning to the aphoristic question, ‘It worked there. Will it work here?’… (beginning of the article)
The very idea of a researchable teaching method such as ‘direct instruction’ is seriously problematic. School leaders cannot and should not support classroom approaches of this kind on the ground that they are evidence-based. (End of article).
You can imagine that I really appreciate this article.

11. Matthew Deeble, Tanya Vaughan, report, An evidence broker for Australian schools (May 9)
https://www.cse.edu.au/content/evidence-broker-australian-schools
The authors explain the role of their non-profit organisation, Evidence for Learning, which has developed over the last three years as a pilot to inform Australia’s education system and help educators increase learning through better evidence. The central premise of their work is to place the school, and leaders and teachers at the heart of a collective effort for improved outcomes. The authors argue that an independent, national and cross-sectoral body, which is focused on building, sharing and encouraging the use of evidence to improve educational outcomes, and which works collaboratively with all stakeholders in our complex system, represents the best model for Australia and our learners.
A publication in support of the creation of a National Education Evidence Broker, based on the experiences of Evidence for Learning.
I cannot yet figure out whether the support for a NEEB also takes the criticism of Hattie into account.

12. David Gonski et al, report, Through Growth to Achievement (May 8)
https://docs.education.gov.au/documents/through-growth-achievement-report-review-achieve-educational-excellence-australian-0
Establishing a national research and evidence institute will drive better practice and innovation (§ 5.5)
Reliable data on bottom-up innovations is critical to support schools and teachers to improve student outcomes. Commonwealth, state and territory governments should promote and accelerate school innovations by creating a national body responsible for ‘bottom-up’ evaluation of effective education policies, programs and teaching practices and for translating this into practical advice for teachers, school leaders and decision makers. (p. 103)
A key report to the Australian Government to provide advice on how to improve student achievement and school performance.

13. Pauline Ho, Tanya Vaughan, blog, Supporting system change through the Education Action Plan (April 27)
http://evidenceforlearning.org.au/news/supporting-system-change-through-the-education-action-plan/
Successful change that improves students’ learning outcomes is crucial to a thriving evidence ecosystem in Australia (Evidence for Learning, 2017a). Structuring a successful change process within a system or a school requires careful planning that needs to be based on the latest evidence (Education Endowment Foundation, 2017). At Evidence for Learning (E4L), the structuring of a change process has been part of our strategic approach since inception. It’s called the Impact Evaluation Cycle (Evidence for Learning, 2017b).
Similar to the implementation cycle of the Education Endowment Foundation. See 17.

14. Jaap Scheerens, article, The perspective of “limited malleability” in educational effectiveness (April 2)
https://www.tandfonline.com/doi/full/10.1080/13803611.2017.1455286
The article starts out with a review of definitions and operational criteria of school-effect measures. The different ways to estimate school effects depend on the way “gross” school effects are adjusted to what is usually referred to as “value-added” effects. In most applications, “value-added” school effects are adjusted performance levels, but in other cases progress or growth in achievement over time. The article also brings in substantive research outcomes from individual studies and meta-analyses, to conclude on the magnitude of treatment effects. The conclusion is that the most suitable adjustment variables, for example, prior achievement and general intelligence or aptitude, generally produce relatively small value-added or “net” school effects. The implication of this finding is that there is limited scope for effectiveness-enhancing factors when the margins for malleability are so small. Complicating factors in assessing treatment effects include study characteristics such as the nature of the test, sample size, and research design. Such study characteristics might partly explain the rather divergent results from meta-analyses focusing on similar effectiveness-enhancing conditions. Diversity and questionable quality of treatment measures are discussed as additional challenges for reliably assessing treatment effects of schooling. The discussion section considers implications of small treatment effects and limited malleability for policy and research. (Closing paragraph of introduction to the special issue).
One of the pioneers of the Effective School Movement, reflecting on the evolution and current state of research in his field.
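For readers unfamiliar with the mechanics, here is a minimal sketch of what a 'value-added' adjustment means (my own illustration with invented numbers, not Scheerens' data): regress current achievement on prior achievement across all pupils, then take a school's mean residual as its 'net' effect. The spread of those net effects comes out small compared with the spread of pupil achievement itself, which is the 'limited malleability' point.

```python
import random
import statistics

random.seed(2)

# Invented data: 20 schools, 50 pupils each. Prior achievement dominates
# outcomes; the true "net" school effect is deliberately small (SD = 2).
schools = {}
for school_id in range(20):
    school_effect = random.gauss(0, 2)
    pupils = []
    for _ in range(50):
        prior = random.gauss(100, 15)
        current = 0.8 * prior + school_effect + random.gauss(0, 8)
        pupils.append((prior, current))
    schools[school_id] = pupils

# Fit current = a + b * prior by ordinary least squares over all pupils.
xs = [p for pupils in schools.values() for p, _ in pupils]
ys = [c for pupils in schools.values() for _, c in pupils]
mean_x, mean_y = statistics.mean(xs), statistics.mean(ys)
b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
     / sum((x - mean_x) ** 2 for x in xs))
a = mean_y - b * mean_x

# A school's value-added score is its mean residual after adjustment.
value_added = {
    sid: statistics.mean(c - (a + b * p) for p, c in pupils)
    for sid, pupils in schools.items()
}

print(f"SD of pupil achievement:   {statistics.stdev(ys):.1f} points")
print(f"SD of value-added effects: {statistics.stdev(value_added.values()):.1f} points")
# The "net" school effects are several times smaller than the spread of
# pupil achievement: limited malleability in miniature.
```

With a richer adjustment (adding aptitude, as Scheerens notes), the remaining 'net' school effects would shrink further still.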

15. Elizabeth Farley-Ripple et al, article, Rethinking Connections Between Research and Practice in Education: A Conceptual Framework (March 8)
Educational Researcher, DOI: 10.3102/0013189X18761042, http://journals.sagepub.com/stoken/default+domain/JGwUhtWk3zRrhDkx3nus/full
Recent efforts to improve the quality and availability of scientific research in education, coupled with increased expectations for the use of research in practice, demand new ways of thinking about connections between research and practice. The conceptual framework presented in this paper argues that increasing research in educational decision-making cannot be simplified to an issue of dissemination or of motivating practitioners to access evidence-based research but rather is a bidirectional problem in which characteristics of both the research and practice communities must be understood and addressed in order to strengthen ties between research and practice in education……
As we seek to understand differences between research and practice communities in the education context of the 21st century, we interpret these five categories, or gaps, as relating to assumptions and perspectives about the usefulness of research products; the nature and quality of research; problems that research addresses; the structures, processes, and incentives surrounding research production and use; and the relationships between communities.
A fundamental paper that everyone focusing on evidence has to engage with.

16. Clinton, J.M., Aston, R., Quach, J., report, Promoting evidence uptake in schools: A review of the key features of research and evidence institutions (March)
https://docs.education.gov.au/system/files/doc/other/promoting_evidence_uptake_in_schools_accessible.pdf
Despite continued investment in a range of education reforms, national and international assessments have found little improvement in Australian student achievement outcomes. Australia has dropped in the international student outcome assessment rankings due to other countries improving at a greater rate.
The Secretariat for the Review to Achieve Educational Excellence in Australian Schools (Education Excellence Review Secretariat) commissioned the Centre for Program Evaluation at the University of Melbourne to conduct a rapid synthesis of existing evidence to understand: (i) how evidence in education can inform practice; (ii) what the enablers and barriers to evidence uptake are; and (iii) how schools can be supported to use evidence-informed practices. The rapid synthesis drew upon findings from the practices of multiple sectors that aim to support evidence-informed practice, including education, health promotion, public health, mental health, and tourism. These institutes were in Australia, the US, the UK, and the European Union.
One of the Australian reports supporting the upcoming decision to establish an institution backing evidence-informed practice and policy in Australia.

17. Jonathan Sharples, Bianca Albers, Stephen Fraser, report, Putting Evidence to Work: A school’s guide to implementation (February 8, updated May 10)
https://educationendowmentfoundation.org.uk/tools/guidance-reports/a-schools-guide-to-implementation/
Schools are learning organisations. They continually strive to do better for the children and young people in their charge. In doing so, they try new things, seek to learn from those experiences, and work to adopt and embed the practices that work best.
Implementation is a key aspect of what schools do to improve, and yet it is a domain of school practice that rarely receives sufficient attention. In our collective haste to do better for pupils, new ideas are often introduced with too little consideration for how the changes will be managed, and what steps are needed to maximise the chances of success.
The purpose of this guidance is to begin to describe and demystify the professional practice of implementation – to document our knowledge of the steps that effective schools take to manage change well.
It can be applied to any school improvement decision: programmes or practices; whole-school or targeted approaches; internal or externally generated ideas.
This is the Education Endowment Foundation's solution for the problem that high scores for a strand in the Teaching and Learning Toolkit do not guarantee easy implementation of that strand. The EEF suggests that it is demystifying implementation, as if it had not been studied before. They present a rather simple set of steps. See also 13.


*1 On Educational Leadership 2, https://freemanmc.com/index.php/acymailing.html
*4 Treatment effects are specific aspects of schooling and teaching, or educational programmes that may be associated with, or intended to improve, student outcomes. Treatment effects include such diverse variables as student–staff ratios, school resources, administrative arrangements, teacher qualifications, and experience in addition to specific educational interventions or programmes.