
07 May 2025

The Global Education Architecture is Broken

So says Nicholas Burnett in a (sadly, gated) essay for the International Journal of Educational Development.

Nicholas has some authority on the topic, as chair of the Board of UNESCO’s International Institute for Educational Planning, and previously Director of the UNESCO Education for All Global Monitoring Report.

In his opening paragraph he writes;
"The international architecture for education is failing the world. There is little leadership; global priorities are obscure; the major debates are increasingly irrelevant and divorced from reality on the ground; the number of children out-of-school has stagnated for a decade; little progress has been made in tackling the global learning crisis; knowledge about what works in education is surprisingly limited; global public goods are massively underfunded; huge global financing requirements show little prospect of being met; and the neediest low-income countries, mainly in sub-Saharan Africa, do not receive the external financial and technical support necessary if they are to develop their education systems."
Tell us what you really think, Nicholas.

He criticises:
"the work of the Right to Education Initiative lobby and its recent Abidjan Principles, which would have governments place severe restrictions on private education."
He points to the decline of UNESCO as a major problem:
"In the past, UNESCO could have been counted on to be the central voice advocating for education, including education as a human right, in international fora. UNESCO has become so weakened, however, by its internal politicization and inadequate budget, that it is no longer the respected international voice on education, rather just one of many rather weak voices. These problems preceded the withdrawal first of United States' financial support and, more recently, of US membership but these steps now mean that UNESCO cannot function effectively as it has insufficient resources. UNESCO’s total regular budget for education is now only $51 million per year."
With a few other choice quotes:
"It is a real paradox that those working in international education increasingly (and rightly) call for systems-wide approaches but fail to study their own non-functional international architecture system. 
... 
It is astonishing both how little we know about what works in education and how poorly we disseminate what we do know. 
... 
If the situation is bad regarding generating knowledge, it is even worse regarding promoting innovation in education. 
... 
There is thus no systematic regular review of how the international architecture is performing."
Well written and well worth reading in full.

17 April 2025

Coaching is better than training, but there is still a question mark over scalability

"So should governments switch to frequent coaching sessions? Possibly, but the next step should first be to try this type of intervention at scale. 
Finding three highly skilled coaches is one thing, but you might need hundreds or thousands of them if you were to run a similar programme across an entire country. 
One potential route to scale is through new uses of technology. A study in Brazil found positive impacts of a virtual-coaching programme run via Skype, for example. 
But perhaps the most straightforward type of technology to go for is scripts, which this paper suggests have positive effects on learning both when presented through centralised training and intensive coaching."

12 March 2025

"Maybe one of the most cost-effective interventions ever studied"

In this month's TES column (I'm calling it a column, it sounds better than a blog), I call parent-teacher meetings in Bangladesh "maybe one of the most cost-effective interventions ever studied". Here's the maths behind that claim. 

First, the intervention found a 0.377 standard deviation effect on Grade 5 scores and a 0.141 standard deviation effect (not statistically significant) on Grade 3 scores. If we take the average of those, that is 0.259. That's equivalent to around 1.7 extra years of school (based on Evans & Yuan's estimate that 1 standard deviation ~ 6.5 years of school).

The cost was $3 per student over the two years. The author Asad Islam does the conversion using only the 0.377 effect size for Grade 5, writing "Thus, the cost per average 0.1 SD increase in test scores per student is $0.66 or $1.58 for the full program over 2 years."

J-PAL put together a list of the cost-effectiveness of different interventions on their website, now gone, but replicated by Romero, Sandholtz, & Sandefur in the Partnership Schools for Liberia paper (copied below). Islam's $1.58 per 0.1 SD increase is equivalent to 6.3 standard deviations per $100. If we use the more conservative estimate of 0.259 SD (averaging across the Grade 5 and Grade 3 results), that still works out at 4.3 SD per $100 spent. That lower estimate still puts this intervention at third place in the ranking, so there you go: "maybe one of the most cost-effective interventions ever studied".
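For anyone who wants to check the unit conversions, here is a minimal sketch of the arithmetic. One assumption to flag: the roughly $6 total two-year cost per student is inferred from Islam's $1.58-per-0.1-SD figure, rather than being a number quoted directly above.

```python
# Minimal sketch of the cost-effectiveness arithmetic in this post.
# Assumption: the full two-year cost per student (~$6) is implied by
# Islam's $1.58-per-0.1-SD figure, not quoted directly in the post.

SD_PER_YEAR = 1 / 6.5                         # Evans & Yuan: 1 SD is roughly 6.5 years of school

effect_g5 = 0.377                             # Grade 5 effect (SD)
effect_g3 = 0.141                             # Grade 3 effect (SD, not significant)
avg_effect = (effect_g5 + effect_g3) / 2      # 0.259 SD
extra_years = avg_effect / SD_PER_YEAR        # ~1.7 extra years of schooling

cost_per_tenth_sd = 1.58                      # Islam: $ per 0.1 SD, full programme over 2 years
sd_per_100_islam = 0.1 / cost_per_tenth_sd * 100           # ~6.3 SD per $100

implied_cost = cost_per_tenth_sd * effect_g5 / 0.1          # ~$6 per student over 2 years
sd_per_100_conservative = avg_effect / implied_cost * 100   # ~4.3 SD per $100

print(round(avg_effect, 3), round(extra_years, 1),
      round(sd_per_100_islam, 1), round(sd_per_100_conservative, 1))
```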


11 March 2025

The Latest Economics Research on Global Education

Last week I was at the Society for Research on Educational Effectiveness (SREE) conference. Alex Eble made a big and apparently successful push to increase representation by researchers focused on developing countries. In time-honoured Dave Evans style, here's my one-sentence roundup of 22 idiosyncratically selected studies presented at the conference. You can see the full programme here. 

---

Public-private partnerships

A subsidy for private schools in Haiti led to higher enrolment (Adelman, Holland, and Heidelk) #Haiti

Chile has a universal school voucher and a higher voucher targeted at low-income students. The universal voucher is better for aggregate efficiency but worse for equity (Sanchez) #Chile #StructuralModel

Giving out vouchers to attend 5 years of low-cost private primary school in Delhi led to worse Hindi scores and no change in English or Maths (Crawfurd, Patel, and Sandefur) #India

Contracting out management of public schools to NGOs in Liberia led to a 60% increase in learning (Romero, Sandholtz, and Sandefur) #Liberia

School management

A mobile-phone based support programme for school councils in Pakistan led to no improvement for students (Asim) #Pakistan #Diff-in-Diff

A major school inspection reform in Madhya Pradesh led to no improvement in schools (Muralidharan and Singh) #India

Independent monitoring of teachers led to better student performance (Kim, Yang, Inayat) #Pakistan #Diff-in-Diff

Mindfulness

Mindfulness interventions reduced sadness and aggression of children in Niger (Kim, Brown, De Oca, Annan, Aber), improved concentration and prosocial behaviour in Sierra Leone (Brown, Kim, Annan, Aber), and increased prosocial behaviour amongst Syrian refugees (Keim and Kim) #Niger #SierraLeone #Syria

Information for parents

Giving parents information about their child’s performance led to some temporary improvements (Barrera-Osorio, Gonzalez, Lagos, Deming) #Colombia

Incentives for teachers 

The theoretically optimal “Pay for Percentile” incentive scheme works to increase effort, which is complementary to inputs (Gilligan, Karachiwalla, Kasirye, Lucas, Neal) #Uganda

BUT a simpler “threshold” incentive scheme can be as effective as the theoretically optimal “Pay for Percentile” (at least in the short-run) (Mbiti, Romero, Schipper) #Tanzania

Methodology

Studies commissioned by the developer of an intervention find effect sizes 80% larger than studies commissioned independently (Wolf, Morrison, Slavin, Risman) #USA #MetaAnalysis #EvaluatorIndependence

Tests designed specifically for evaluations produce effect sizes 63% larger than generic tests (Pellegrini, Inns, Lake, Slavin) #USA #MetaAnalysis #TestDesign

External validity bias (non-random selection of schools into trials) is twice as big as internal validity bias (from using observational not experimental methods) (White, Hansen, Lycurgus, Rowan) #USA #ExternalValidity

Technology

The One Laptop Per Child programme in Peru had zero effect on learning (Cristia, Ibarrarán, Cueto, Santiago and Severín) #Peru

In addition, providing internet had no effect on student learning (Malamud, Cueto, Cristia, Beuermann) #Peru

Peer effects

Being the weakest student in a better (selective) school can be worse than being the strongest student in a worse school (Fabregas) #RDD #Mexico

Finance

Temporary subsidies can have permanent effects on enrolment (Nakajima) #Indonesia #Diff-in-Diff

Merit-based scholarships have bigger effects than need-based scholarships (Barrera-Osorio, de Barros, Filmer) #Cambodia

Heat


Each additional degree Fahrenheit of school-year temperature reduces learning by 1 percent. Air conditioning entirely offsets this. (Goodman, Hurwitz, Park, Smith) #FE #USA

18 February 2025

Is testing good for education?

This post was first published on the Centre for Education Economics website. 
I blogged recently about a new RISE working paper by Annika Bergbauer, Eric Hanushek, and Ludger Woessmann, which finds that:
“standardized external comparisons, both school-based and student-based, is associated with improvements in student achievement.”
William Smith pointed me to his rebuttal blog written with Manos Antoninis, which argues that there are “multiple weaknesses in their analysis that undermine their conclusions”.

This blog is my attempt to make sense of the disagreement. The main issue appears to me to be a misunderstanding by Antoninis & Smith (“AS” from here on) of the mechanism proposed by Bergbauer, Hanushek, and Woessmann (BHW). AS presume that the main mechanism through which testing is hypothesised to improve outcomes is through school choice (allowing parents to shift their students to schools with better test scores) or through punitive government accountability for teachers and schools. But BHW make it clear that their main focus is on the principal-agent relationship between parents as the principal and both students and teachers as their agents. Parents can’t observe the effort made by students and teachers, but standardized testing can provide them with a proxy indicator for effort. This should induce greater effort from both students and teachers. This proposed mechanism has nothing to do with school choice or accountability from government.

First AS argue that
“Our review of the evidence found that evaluative policies promoting school choice exacerbated disparities by further advantaging more privileged children (pp. 49-52).”
This review of the evidence in pp 49-52 of the UNESCO Global Monitoring Report focuses on policies designed to promote school choice. But that is not at all the focus of the BHW analysis, which is on policies that allow for the comparison of schools and students with the purpose of incentivising greater effort. School choice doesn’t need to have anything to do with it. As BHW write:
“That is the focus of this paper: By creating outcome information, student assessments provide a mechanism for developing better incentives to elicit increased effort by teachers and students, thereby ultimately raising student achievement levels to better approximate the desires of the parents”
Second, AS argue that
“punitive systems had unclear achievement effects but troublesome negative consequences, including removing low-performing students from the testing pool and explicit cheating (pp. 52-56).”
As mentioned above, the proposed mechanism in BHW does not at all require a punitive system. BHW write
“accountability systems that use standardized tests to compare outcomes across schools and students produce greater student outcomes. These systems tend [my emphasis] to have consequential implications and produce higher student achievement than those that simply report the results of standardized tests.”
Having said that, there are some flaws in the literature review cited by AS. This section first cites studies on four individual countries (US, Brazil, Chile, South Korea), without noting that there are significantly positive results from two of them. One of the two papers they cite on Brazil (IDados 2017) concludes that there was “a large, continuous improvement in all those years in both absolute and relative terms when compared to other municipalities in the Northeastern region and in Brazil as a whole ” and “it is very likely that [the reform] is at least partially responsible for the changes.” On Chile, a paper not cited as it was published in 2017 just after the review was completed (Murnane et al) found that “On average, student test scores increased markedly and income-based gaps in those scores declined by one-third in the five years after the passage of [the reform]”.

Next the review cites two papers (Yi 2015; Gándara and Randall 2015) that present correlational analysis with no attempt to address any potential bias from omitted variables or reverse causality. The latter study is based on a small sub-sample of the fuller data used by BHW.

Next AS take issue with the way that BHW construct their 4 categories of test usage. For ease of reference I first reproduce below the 4 categories, along with the wording of the questions that go into constructing each category.

---
1. Standardized External Comparison
  • “In your school, are assessments of 15-year old students used to compare the school to district or national performance?” (PISA)
  • existence of national/central examinations at the lower secondary level (OECD, EAG)
  • National exams (primary) (Eurydice (EACEA))
  • Central exit exams at the end of secondary (Leschnig, Schwerdt, and Zigova 2017)

2. Standardized Monitoring
  • “Generally, in your school, how often are 15-year-old students assessed using standardized tests?” (PISA)
  • “During the last year, have [tests or assessments of student achievement] been used to monitor the practice of teachers at your school?” (PISA)
  • “In your school, are achievement data … tracked over time by an administrative authority[?]”

3. Internal testing
  • whether assessments are used “to inform parents about their child’s progress.”
  • use of assessments “to monitor the school’s progress from year to year.”
  • “achievement data are posted publicly (e.g. in the media).” (vaguely phrased, and likely to be understood by school principals to include such practices as posting the school mean of the grade point average of a graduating cohort, derived from teacher-defined grades rather than any standardized test, at the school’s blackboard.)

4. Internal teaching monitoring
  • whether assessments are used “to make judgements about teachers’ effectiveness.”
  • practice of teachers is monitored through “principal or senior staff observations of lessons.”
  • “observation of classes by inspectors or other persons external to the school” are used to monitor the practice of teachers.
---

First, AS argue that question 3c should really fall under category 1. The effect of this question on outcomes is primarily statistically insignificant, though for Maths and Science the direction of the coefficients in the interacted model are the same as the other variables in category 1 (positive in the base model, with a negative coefficient on the interaction with initial score). Would adding this one variable to the 4 variables already in the category make the results statistically insignificant overall? I think probably not, but can’t say for sure without looking at the raw data.

Second, AS claim that question 4a should really fall under category 1 or 2. This claim seems debatable. The theoretical mechanism that BHW put forward is that providing credible information to parents induces greater effort from teachers. This use of testing is internal to the school, and could well refer to internal school assessments rather than standardized assessments that allow for external comparison with teachers at other schools.

Third, AS criticise the inclusion of high stakes student assessments as indicators, as by placing the stakes on students and not schools they do not relate to accountability from government. But this is not what BHW claimed was driving the effect.

Fourth, AS suggest that the use of standardized testing of 15-year-olds may effectively be “teaching to the test”. This seems odd to me - they clearly aren’t literally teaching to the test, because it is a different test. BHW are looking at the effect of introducing high-stakes national standardized testing on student results in a totally separate, low-stakes sample-based test (PISA). AS then don’t really address the argument that “teaching to the test” can also be a positive thing if the test is well-designed and includes a good sample of the things that students are expected to have learnt.

Finally, AS focus only on those results that are statistically significant in the baseline model (estimating the average effect across all countries). However, they miss a really important conclusion from the paper, which is about heterogeneity: the effects of testing are largest for the weakest-performing systems. This is clear in Figure 3.




Looking at the interacted model (Table A5), both of the other 2 questions in category 2 (2b and 2c) are statistically significant.

To sum up, there are weaknesses in the interpretation by AS of BHW which undermine their criticism. BHW focus on the role that testing can play in increasing the effort of students and teachers, with or without government accountability systems. In addition, the review of government accountability systems presented in the UNESCO Global Monitoring Report also has weaknesses, and presents an unduly negative picture. My prior remains that standardized testing plays a positive role, particularly in weak systems.

Thanks to Gabriel Heller-Sahlgren, William Smith, and Manos Antoninis for comments on a draft of this post. This acknowledgement clearly does not imply that Smith and Antoninis agree with this post - they don’t!

06 February 2025

CfEE Blogging: Giving students information on future wages improves school outcomes

As of this January and following last year's Annual Research Digest from the Centre for Education Economics, I'll be co-editing the Monthly Digest, along with Gabriel Heller-Sahlgren.

This is basically an excuse and commitment device to get me actually blogging again on at least a monthly basis. Each issue will include commentary on new papers, plus a selection of abstracts from recent publications (lightly edited for jargon).


My first comment is on a new paper by Ciro Avitabile and Rafael de Hoyos: 
Did you know what career you wanted to do when you were in secondary school? I didn’t. Most pupils make critically important choices that will affect their lives throughout their educational career, often on the basis of poor information about what those choices will mean for their future. In most countries, there is little transparency about the costs and benefits of pursuing education, and little information on the various career paths available. 
In this paper, Ciro Avitabile and Rafael de Hoyos study whether providing pupils with better information about the earnings returns to education and the options available to them leads to greater effort and learning. Several studies have previously shown that providing information about the wage gains from schooling leads pupils to stay in school a bit longer, and affects their educational choices, but there is limited evidence that such information can affect learning per se, at least over a slightly longer time horizon.

15 January 2025

Testing, testing: the 123's of testing

Here's my summary of the new Annika Bergbauer, Eric Hanushek, and Ludger Woessmann working paper for CfEE.
"teachers tend to oppose standardised tests, partly because they perceive them to narrow the curriculum and crowd out wider learning. However, it is intuitive that the effects of testing could vary dramatically by context. Indeed, the impact may very well follow a so-called “Laffer curve”. At low levels of testing, an increase may lead to better performance as it provides relevant information and incentives to actors in the education system. Yet if there are already high levels of testing, further increases may very well decrease performance, due to stress, for example, or the effects of an overly-narrowed curriculum. If so, we should expect the impact of testing to follow an inverted U-curve - or at the very least display diminishing returns. Furthermore, the impact of tests is also likely to depend on exactly how they are used in the education system. 
This paper provides perhaps the first systematic evidence on these issues"
Read the rest here.

20 September 2024

Probably the best new research in global education economics

A couple of months ago the Centre for Education Economics asked me to edit their Annual Research Digest - a series of essays by leading thinkers on their favourite research paper from the past year.

The Digest is out today and I'm really pleased with how it's come out - a fantastic set of blogs summarising a fascinating set of papers.

Here's the summary, and you can download the full report here.


18 June 2025

How smart are teachers in developing countries?

Eric Hanushek, Marc Piopiunik, and Simon Wiederhold published some fascinating analysis in a 2014 NBER working paper (link) comparing the literacy and numeracy of teachers to the overall population (with a university degree) in a range of OECD countries. In the figures below, the grey bars show the gap between the 25th and 75th percentile of skill for all university-educated adults in each country, and the red marker in the middle shows the median skill level of teachers in that country. Perhaps unsurprisingly, teachers in Finland are the highest skilled internationally, but also within Finland they are drawn from relatively high in the distribution of adults. This fits with common narratives about teaching being a particularly well-regarded, and selective, profession in Finland.
 


As far as I'm aware, no such comparison exists for low- and lower-middle-income countries. Tessa Bold and coauthors present results on the tragically low absolute skill level of teachers in sub-Saharan Africa (link), and similarly Justin Sandefur presents data comparing the skill level of teachers in sub-Saharan Africa to students in OECD countries (link). But neither compares the skill level of teachers in Africa to teachers in high-income countries, or to the skills of other adults in Africa.

So I made one*. The World Bank's STEP Skills surveys use the same literacy assessment as the OECD PIAAC survey that Hanushek et al used, so I replicated and extended their graph, adding on the countries from the STEP data in which it was possible to identify teachers and their literacy level - specifically Vietnam, Colombia, Armenia, Georgia, Ghana, Kenya, and Bolivia. 
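Mechanically, the figure boils down to a simple aggregation of survey microdata: the 25th and 75th percentile of literacy scores among university-educated adults in each country, plus the median score of the teachers among them. A hedged sketch is below; the column names are hypothetical placeholders rather than the actual PIAAC or STEP variable names, and a real replication would also apply survey weights.

```python
# Hedged sketch: column names are hypothetical placeholders, not the real
# PIAAC/STEP variable names, and a proper replication would use survey weights.
import pandas as pd

df = pd.DataFrame({
    "country":  ["Ghana", "Ghana", "Ghana", "Kenya", "Kenya", "Kenya"],
    "literacy": [210.0, 245.0, 260.0, 230.0, 255.0, 280.0],   # placeholder scores
    "tertiary": [True, True, True, True, True, True],          # completed university
    "teacher":  [False, True, False, False, True, True],       # works as a teacher
})

grads = df[df["tertiary"]]
summary = grads.groupby("country").agg(
    p25=("literacy", lambda s: s.quantile(0.25)),   # bottom of the grey bar
    p75=("literacy", lambda s: s.quantile(0.75)),   # top of the grey bar
)
# The red marker: median literacy of teachers within the graduate pool
summary["teacher_median"] = grads[grads["teacher"]].groupby("country")["literacy"].median()
print(summary)
```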


The first point to note is how low the overall distribution in lower income countries is. The majority of adults (in urban areas) who have graduated from university fall into the Level 2 category, whereas in high income countries most fall into Level 3. The PIAAC guide (copied below) explains what these levels mean: Level 2 tasks “may require low-level inferences” whereas Level 3 tasks require “navigating complex texts”.

The second point to note from the figure is the remarkable regularity of average (median) teacher performance in each national distribution. There is some variation, but most teachers in high-income countries are roughly in the middle of the distribution of university educated adults. Teaching in lower-income countries tends to be more selective - the median teacher is much closer to the 75th percentile of adults in Colombia, Ghana, and Kenya.

These two facts, the low overall level of literacy skill amongst graduates in the lower middle income countries, and the position of teachers within the distribution, imply an upper bound on the ability of a more selective recruitment process to improve the average quality of teachers. If for example Ghana and Kenya managed to increase the selectivity of the teaching profession enough to raise average teacher skills to above the 75th percentile (something no other country has done), this level could still be well below Level 3 on the PIAAC scale.

Getting better teaching is critical for improving education in developing countries. This data highlights the scale of one aspect of the challenge. Education systems are going to have to figure out how to deliver for children with teachers who may be able to make "low-level inferences" but are unable to "navigate complex texts."

---

*Thanks to Laura Moscoviz at the Education Partnerships Group for assistance with the graph!

---

08 February 2025

Ark Blogging: The British government’s new plan to get children learning

Over at the Ark blog:
"The new DFID education policy “Get Children Learning” was published last week. As its name suggests, the policy is all about moving the needle on learning outcomes. It sets out a strategy to tackle the learning crisis in developing countries, which has left 90 percent of primary school leavers in low-income countries without basic literacy or numeracy.  As a strategy, it’s relevant and ambitious, and it’s been widely welcomed by the education sector. 
Of course, tackling a crisis this deep and complex is easier said than done. So how does DFID plan to do it? 
Three priorities underpin DFID’s strategy to tackle the learning crisis and form the backbone of their new policy: better teaching, education system reform, and targeted support to the most marginalised kids. And permeating the strategy are three themes - more and better research; more attention paid to the political economy of education reform; and the “Best of British” - how UK expertise can be better leveraged to improving schools in developing countries.  These themes are interesting because each represents a fairly fundamental shift in or crystallisation of thinking from DFID, and together they provide some insight into how the strategy will be executed. The “Best of British” theme in particular reflects a new willingness by DFID to think more strategically about how to facilitate cross-system learning.
Read the rest here.

16 October 2024

Open Data for Education

There’s a global crisis in learning, and we need to learn more about how to address it. Whilst data collection is costly, developing countries have millions of dollars worth of data about learning just sitting around unused on paper and spreadsheets in government offices. It’s time for an Open Data Revolution for Education.

The 2018 World Development Report makes clear the scale of the global learning crisis. Fewer than 1 in 5 primary school students in low income countries can pass a minimum proficiency threshold. The report concludes by listing 3 ideas on what external actors can do about it:
  1. Support the creation of objective, politically salient information
  2. Encourage flexibility and support reform coalitions
  3. Link financing more closely to results that lead to learning
The first of these, generating new information about learning, can be expensive. Travelling up and down countries to sit and test kids for a survey can cost a lot of money. The average RCT costs in the range of $0.5m. Statistician Morten Jerven added up the costs of establishing the basic census and national surveys necessary to measure the SDGs — coming to a total of $27 billion per year, far more than is currently spent on statistics.

And as expensive as they can be, surveys have limited value to policymakers as they focus on a limited sample and can only provide data about trends and averages, not individual schools. As my colleague Justin Sandefur has written: “International comparability be damned. Governments need disaggregated, high frequency data linked to sub-national units of administrative accountability.”

Even for research, much of the cutting edge education literature in advanced countries makes use of administrative rather than survey data. Professor Tom Kane (Harvard) has argued persuasively that education researchers in the US should abandon expensive and slow data collection for RCTs, and instead focus on using existing administrative testing and data infrastructure, linked to data on school inputs, for quasi-experimental analyses that can be done quickly and cheaply.

Can this work in developing countries?
My first PhD chapter (published in the Journal of African Economies) uses administrative test score data from Uganda, made available by the Uganda National Exams Board at no cost, saving data collection that would have cost hundreds of thousands of pounds and probably been prohibitively expensive. We’ve also analysed the same data to estimate the quality of all schools across the country, so policymakers can look up the effectiveness of any school they like, not just the handful that might have been in a survey (announced last week in the Daily Monitor).
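To give a flavour of what estimating the quality of all schools involves, here is a minimal sketch of a school value-added regression on synthetic data: the "value added" is the school fixed effect after conditioning on pupils' prior attainment. The variable names are illustrative, and the actual Uganda specification (variables, controls, shrinkage) will differ.

```python
# Sketch of a school value-added model on synthetic data; not the actual Uganda specification.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_schools, n_pupils = 50, 40
true_quality = rng.normal(0, 0.3, n_schools)          # what we want to recover

rows = []
for s in range(n_schools):
    prior = rng.normal(0, 1, n_pupils)                # e.g. entry exam scores
    exit_score = 0.7 * prior + true_quality[s] + rng.normal(0, 0.5, n_pupils)
    rows.append(pd.DataFrame({"school": s, "prior_score": prior, "exit_score": exit_score}))
pupils = pd.concat(rows, ignore_index=True)

# Value added = school fixed effect, conditional on pupils' prior attainment
fit = smf.ols("exit_score ~ prior_score + C(school)", data=pupils).fit()
value_added = fit.params.filter(like="C(school)")      # school effects, relative to school 0
print(value_added.sort_values(ascending=False).head())
```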

Another paper I’m working on is looking at the Public School Support Programme (PSSP) in Punjab province, Pakistan. The staged roll-out of the program provides a neat quasi-experimental design that lasted only for the 2016-17 school year (the control group have since been treated). It would be impossible to go in now and collect retrospective test score data on how students would have performed at the end of the last school year. Fortunately, Punjab has a great administrative data infrastructure (though not quite as open as the website makes out), and I’m able to look at trends in enrolment and test scores over several years, and how these trends change with treatment by the program. And all at next to no cost.

For sure there are problems associated with using administrative data rather than independently collected data. As Justin Sandefur and Amanda Glassman point out in their paper, official data doesn’t always line up with independently collected survey data, likely because officials may have a strong incentive to report that everything is going well. Further, researchers don’t have the same level of control or even understanding about what questions are asked, and how data is generated. Our colleagues at Peas have tried to use official test data in Uganda but found the granularity of the test is not sufficient for their needs. In India there are not one but several test boards, which end up competing with each other and driving grade inflation. But not all administrative data is that bad. To the extent that there is measurement error, this only matters for research if it is systematically associated with specific students or schools. If the low quality and poor psychometric properties of an official test are just noisy estimates of true learning, this isn’t such a huge problem.

Why isn’t there more research done using official test score data? Data quality is one issue, but another big part is the limited accessibility of data. Education writer Matt Barnum wrote recently about “data wars” between researchers fighting to get access to public data in Louisiana and Arizona. When data is made easily available it gets used; a Google Scholar search for the UK “National Pupil Database” finds 2,040 studies.

How do we get more Open Data for Education?
Open data is not a new concept. There is an Open Data Charter defining what open means (Open by default, timely and comprehensive, accessible and usable, comparable and interoperable). The Web Foundation ranks countries on how open their data is across a range of domains in their Open Data Barometer, and there is also an Open Data Index and an Open Data Inventory.

Developing countries are increasingly signing up to transparency initiatives such as the Open Government Partnership, attending the Africa Open Data conference, or signing up to the African data consensus.

But whilst the high-level political backing is often there, the technical requirements for putting together a National Pupil Database are not trivial, and there are costs associated with cleaning and labelling data, hosting data, and regulating access to ensure privacy is preserved.

There is a gap here for a set of standards to be established in how governments should organise their existing test score data, and a gap for financing to help establish systems. A good example of what could be useful for education is the Agriculture Open Data Package: a collaboratively developed “roadmap for governments to publish data as open data to empowering farmers, optimising agricultural practice, stimulating rural finance, facilitating the agri value chain, enforcing policies, and promoting government transparency and efficiency.” The roadmap outlines what data governments should make available, how to think about organising the infrastructure of data collection and publication, and further practical considerations for implementing open data.

Information wants to be free. It’s time to make it happen.

26 September 2024

JOB: Research Assistant on Global Education Policy

I’m hiring a full-time research assistant based in London, for more details see the Ark website here.
 
---
 
Research and evidence are at the heart of EPG’s work. We have:
  • Collaborated with JPAL on a large-scale field experiment on school accountability in Madhya Pradesh, India
  • Commissioned a randomized evaluation by IPA of Liberia’s public-private partnership in primary schooling
  • Led a five-year randomized trial of a school voucher programme in Delhi
  • Helped the Uganda National Examinations Board create new value-added measures of school performance
  • Commissioned scoping studies of non-state education provision in Kenya and Uganda 

Reporting to the Head of Research and Evaluation, the Research Assistant will contribute to EPG’s work through a mixture of background research, data analysis, writing, and organizational activities. S/he will support and participate in ongoing and future academic research projects and EPG project monitoring and evaluation activities.

The role is based in Ark’s London office with some international travel.

The successful candidate will perform a range of research, data analysis, and coordination duties, including, but not limited to, the following: 

  • Conduct literature and data searches for ongoing research projects.
  • Organize data, provide descriptive statistics, run other statistical analyses using Stata, and prepare publication-quality graphics
  • Collaborate with EPG’s project team to draft blogs, policy briefs, and notes on research findings.
  • Support EPG’s project team in the design and implementation of project monitoring and evaluation plan
  • Provide technical support and testing on the development of value-added models of school quality
  • Coordinate and update the EPG/GSF research repository
  • Organise internal research and policy seminars
  • Perform other duties as assigned. 

The successful candidate will have the following qualifications and skills: 

  • Bachelor’s (or Master’s) degree in international development, economics, political science, public policy, or a related field.
  • Superb written and verbal communication skills.
  • Competence and experience conducting quantitative research. Experience with statistical software desired.
  • Familiarity with current issues, actors and debates in global education
  • Proven ability to be a team player and to successfully manage multiple and changing priorities in a fast-paced, dynamic environment, all while maintaining a good sense of humor.
  • Outstanding organization and time management skills, with an attention to detail.
  • Essential software skills: Microsoft Office (specifically Excel) and Stata
  • Experience working in developing country contexts or international education policy -- a plus
  • Experience designing or supporting the implementation of research evaluations and interpreting data -- a plus
  • Fluency or advanced language capabilities in French -- a plus
 

05 September 2024

Why is there no interest in kinky learning?


Just *how* poor are *your* beneficiaries though? In the aid project business everybody is obsessed with reaching the *poorest* of the poor. The ultra poor. The extreme poor. Lant Pritchett has extensively criticised this arbitrary focus on getting people above a certain threshold, as if the people earning $1.91 a day (just above the international poverty line) really have substantively better lives than those on $1.89 (just below). Instead he argues we should be focusing on economic growth and lifting the whole distribution, with perhaps a much higher global poverty line to aim at of around $10-15 a day, roughly the poverty line in rich countries.

Weirdly, we have the opposite problem in global education, where it is impossible to get people to focus on small incremental gains for those at the bottom of the learning distribution. Luis Crouch gave a great talk at a RISE event in Oxford yesterday in which he used the term ‘cognitive poverty’ to define those at the very bottom of the learning distribution, below a conceptually equivalent (not yet precisely measured) ‘cognitive poverty line’. Using PISA data, he documents that the big difference between the worst countries on PISA and middling countries is precisely at the bottom of the distribution - countries with better average scores don’t have high levels of very low learning (level 1 and 2 on the PISA scale), but don’t do that much better at the highest levels.



But when people try and design solutions that might help a whole bunch of people get just across that poverty line, say from level 1 or 2 to level 3 or 4 (like, say, scripted lessons), there is dramatic push-back from many in education. Basic skills aren’t enough, we can’t just define low-bar learning goals, we need to develop children holistically with creative problem solving 21st century skills and art lessons, and all children should be taught by Robin Williams from Dead Poet’s Society.

Why have global poverty advocates been so successful at re-orientating an industry, but cognitive poverty advocates so unsuccessful?

03 April 2025

The Political Economy of Underinvestment in Education

According to this model, the returns to education take so long to materialise that leaders need at least a 30-year horizon to start investing in schools.
 
"In the context of developing economies, investing in schools (relative to roads) is characterized by much larger long-run returns, but also by a much more pronounced intertemporal substitution of labor and crowding-out of private investment. Therefore, the public investment composition has profound repercussions on government debt sustainability, and is characterized by a trade-o, with important welfare implications. A myopic government would not invest in social infrastructure at all. The model predicts an horizon of at least thirty years for political leaders to start investing in schools and twice-as-long an horizon for the size of expenditures to be comparable to the socially-optimal level."
 

30 March 2025

A research agenda on education & institutions

From Tessa Bold & Jakob Svensson for the DFID-OPM-Paris School of Economics research programme "EDI"
 
1. A focus on learning in primary is still essential - don’t get too distracted by secondary and tertiary
2. More focus on teachers’ effort, knowledge, and skills
3. How do we go from pilots to scaled-up programs? (and related - can we design interventions that explicitly allow for existing implementation constraints at scale)
4. How can we use ICT to bring down the cost of sharing information on performance?
5. More research on public-private partnerships such as voucher programs

09 March 2025

The key to better education systems is accountability. So how on earth do we do that?

And what do we even actually mean when we talk about accountability?

Perhaps the key theme emerging from research on reforming education systems is accountability. But accountability means different things to different people. To start with, many think first of bottom-up (‘citizen’ or ‘social’) accountability. But increasingly in development economics, enthusiasm is waning for bottom-up social accountability as studies show limited impacts on outcomes. The implicit conclusion then is to revisit top-down (state) accountability. As Rachel Glennerster (Executive Director of J-PAL) wrote recently: 
"For years the Bank and other international agencies have sought to give the poor a voice in health, education, and infrastructure decisions through channels unrelated to politics. They have set up school committees, clinic committees, water and sanitation committees on which sit members of the local community. These members are then asked to “oversee” the work of teachers, health workers, and others. But a body of research suggests that this approach has produced disappointing results."
One striking example of this kind of research is Ben Olken’s work on infrastructure in Indonesia, which directly compared the effect of a top-down audit (which was effective) with bottom-up community monitoring (ineffective).

So what do we mean by top-down accountability for schools?

Within top-down accountability there are a range of methods by which schools and teachers could be held accountable for their performance. Three broad types stand out:

  • Student test scores (whether simple averages or more sophisticated value-added models)
  • Professional judgement (e.g. based on lesson observations)
  • Student feedback
The Gates Foundation published a major report in 2013 on how to “Measure Effective Teaching”, concluding that each of these three types of measurement has strengths and weaknesses, and that the best teacher evaluation system should therefore combine all three: test scores, lesson observations, and student feedback.

By contrast, when it comes to holding head teachers accountable for school performance, the focus in both US policy reform and research is almost entirely on test scores. There are good reasons for this - education in the US has developed as a fundamentally local activity built on bottom up accountability, often with small and relatively autonomous school districts, with little tradition of supervision by higher levels of government. Nevertheless, as Helen Ladd, a Professor of Public Policy and Economics at Duke University and an expert in school accountability, wrote on the Brookings blog last year:
"The current test based approach to accountability is far too narrow … has led to many unintended and negative consequences. It has narrowed the curriculum, induced schools and teachers to focus on what is being tested, led to teaching to the test, induced schools to manipulate the testing pool, and in some well-publicized cases induced some school teachers and administrators to cheat
Now is the time to experiment with inspections for school accountability … 
Such systems have been used extensively in other countries … provide useful information to schools … disseminate information on best practices … draw attention to school activities that have the potential to generate a broader range of educational outcomes than just performance on test scores … [and] treats schools fairly by holding them accountable only for the practices under their control … 
The few studies that have focused on the single narrow measure of student test scores have found small positive effects."
A report by the US think tank “Education Sector” also highlights the value of feedback provided through inspection systems to schools.
"Like many of its American counterparts, Peterhouse Primary School in Norfolk County, England, received some bad news early in 2010. Peterhouse had failed to pass muster under its government’s school accountability scheme, and it would need to take special measures to improve. But that is where the similarity ended. As Peterhouse’s leaders worked to develop an action plan for improving, they benefited from a resource few, if any, American schools enjoy. Bundled right along with the school’s accountability rating came a 14-page narrative report on the school’s specific strengths and weaknesses in key areas, such as leadership and classroom teaching, along with a list of top-priority recommendations for tackling problems. With the report in hand, Peterhouse improved rapidly, taking only 14 months to boost its rating substantially."
In the UK, ‘Ofsted’ reports are based on a composite of several different dimensions, including test scores, but also as importantly, independent assessments of school leadership, teaching practices and support for vulnerable students.

There is a huge lack of evidence on school accountability

This blind spot on school inspections isn’t just a problem for education in the US, though. The US is also home to most of the leading researchers on education in developing countries, and that research agenda is skewed by the US policy and research context. The leading education economists don’t study inspections because there aren’t any in the places they live.

The best literature reviews in economics can often be found in the “Handbook of Economics” series and the Journal of Economic Perspectives (JEP). The Handbook article on "School Accountability" from 2011 exclusively discusses the kind of test-based accountability that is common in the US, with no mention at all of the kind of inspections common in Europe and other countries. A recent JEP symposium on Schools and Accountability includes a great article by Isaac Mbiti, a Research on Improving Systems of Education (RISE) researcher, on “The Need for Accountability in Education in Developing Countries”, which, however, includes only one paragraph on school inspections. Another great resource on this topic is the 2011 World Bank book, "Making Schools Work: New Evidence on Accountability Reforms”. This 'must-read' 250-page book has only two paragraphs on school inspections.

This is in part a disciplinary point - it is mostly a blind-spot of economists. School inspections have been studied in more detail by education researchers. But economists have genuinely raised the bar in terms of using rigorous quantitative methods to study education. In total, I count 7 causal studies of the effects of inspections on learning outcomes - 3 by economists and 4 by education researchers.


Putting aside learning outcomes for a moment, one study from leading RISE researchers, Karthik Muralidharan and Jishnu Das (with Alaka Holla and Aakash Mohpal), in rural India finds that “increases in the frequency of inspections are strongly correlated with lower teacher absence”, which could be expected to lead to more learning as a result. However, no such correlation was found for other countries in a companion study (Bangladesh, Ecuador, Indonesia, Peru, and Uganda).

There is also fascinating qualitative work by fellow RISE researcher, Yamini Aiyar (Director of the ‘Accountability Initiative’ and collaborator of RISE researchers Rukmini Banerji, Karthik Muralidharan, and Lant Pritchett) and co-authors, that looks into how local level education administrators view their role in the Indian state of Bihar. The most frequently used term by local officials to describe their role was a “Post Officer” - someone who simply passes messages up and down the bureaucratic chain - “a powerless cog in a large machine with little authority to take decisions." A survey of their time use found that on average a school visit lasts around one hour, with 15 minutes of that time spent in a classroom, with the rest spent “checking attendance registers, examining the mid-day meal scheme and engaging in casual conversations with headmasters and teacher colleagues … the process of school visits was reduced to a mechanical exercise of ticking boxes and collecting relevant data. Academic 'mentoring' of teachers was not part of the agenda.”

At the Education Partnerships Group (EPG) and RISE we’re hoping to help fill this policy and research gap, through nascent school evaluation reforms supported by EPG in Madhya Pradesh, India, that will be studied by the RISE India research team, and an ongoing reform project working with the government of the Western Cape in South Africa. Everything we know about education systems in developing countries suggests that they are in crisis, and that a key part of the solution is around accountability. Yet we know little about how school inspections - the main component of school accountability in most developed countries - might be more effective in poor countries. It’s time we changed that.

This post appeared first on the RISE website. 

02 March 2025

Introducing... the Global Schools Forum


There’s nothing like sitting in a room full of people who build and run schools in the developing world to make you feel pretty inadequate. At least that’s how I felt last week at the Global Schools Forum. It can feel like a pretty long and abstract chain from the kind of policy research and evaluation that I do through to better policies and better outcomes, and I envy those who can see directly a tangible difference for real people.

You may have heard of the emergence of some international low-cost private school chains such as Bridge International Academies, but the movement is growing quickly, and there are many new organisations trying to do similar things that you probably haven’t heard of - some profit-making, some non-profit, some that charge fees, some that don’t, international, local, big, small, and everything in between. The biggest school operator you don’t hear that much about is the Bangladeshi NGO BRAC, who run thousands and thousands of fee-free schools.

Last week a whole range of school operators and the donors who support them gathered at the 2nd Annual Meeting of the “Global Schools Forum” (GSF); a new membership organisation of 26 school networks (of which 14 for profit and 12 non-profit) operating in 25 countries, and 17 donors and financing organisations, with networks ranging from 1 to 48,000 schools. Running one school is hard enough; trying to disrupt a dysfunctional system by growing a chain of schools is harder. One of the goals of the GSF is to help create a community of practice for school operators, and a “How-to Guide” for scale, sharing information about the best IT and finance systems, the best assessments for tracking performance, or the best training consultants. It’s a place to connect operators with new people and new ideas.

This year we heard more about public-private partnerships than last year, in part because of the presence of several of the operators in the Partnership Schools for Liberia (PSL) pilot. Government schools (with government teachers) will be managed by a range of local and international providers, including the Liberian Stella Maris Polytechnic and LIYONET, international NGOs (Street Child, BRAC, More Than Me) and school chains (Bridge International Academies, Rising Academies, Omega Schools). Other operators at the forum came from India (Gyanshala, Seed Schools, Sodha Schools), South Africa (SPARK, Streetlight Schools, African School for Excellence, Nova Pioneer), and East Africa (Peas, Silverleaf Academies, Kidogo, Scholé), to name just a few.

So what?

The number of non-state school operators planning for scale is rapidly increasing, but even at dramatic rates of growth, it would take a long time to reach any kind of significant proportion of schools. There are two possible routes to scale - either growing chains and networks to scale themselves, and/or acting as demonstration projects for government, to prove what is possible. This second route was highlighted by a number of speakers. Anecdotally at least, many exciting school reforms seem to come from the personal experience of government Ministers actually seeing something better in practice with their own eyes. More rigorously, Roland Fryer has demonstrated in the US with a randomized experiment that it is possible to “inject charter school best practices into public schools” and achieve positive gains in learning.

To find out more about the Global Schools Forum keep an eye on the website (coming soon) globalschoolsforum.org and follow @GSF_talks on twitter.

The Global Schools Forum is supported by The Education Partnerships Group (EPG) @EPG_Edu, UBS Optimus Foundation @UBSOptimus, Pearson Affordable Learning Fund @AffordableLearn, and Omidyar Network @OmidyarNetwork.

25 January 2025

How to spend aid in fragile countries

The classic dilemma in figuring out how to spend aid money is the trade-off between:

     a) achieving scale and sustainability by supporting national government systems (but losing control), and
     b) keeping more direct control by working through NGOs, but sacrificing scale and sustainability.

This trade-off is less acute when the recipient government is an effective service provider and respects human rights. Often, however, the countries that most need external assistance need it in large part precisely because they aren’t blessed with capable, well-functioning governments.

One possible solution to this dilemma is providing mass cash transfers - a route to supporting poor individuals whilst side-stepping their government. Another (neglected?) route is supporting local service providers directly. An example of this is the Girls’ Education South Sudan provision of ‘capitation’ grants to schools (full disclosure, I was hired to do some analysis). This pipe provides both government and donor (currently DFID) finance direct to the school bank account (held by the school’s governing/managing committee).

Here is a ring-fenced pipe, separate from the main government treasury, at scale, that can send money direct to every school (whether public or private) in a country, with receipts, full government engagement in delivery, in-person monthly reporting, and disaggregated real-time data. All of which is potentially an exciting opportunity for donors if they want to fund education in emergencies.

What does the money do? Overall measured enrolment has been trending upwards over the last few years. My analysis suggests that at least some of this aggregate enrolment growth can be attributed to the grants. First, looking at the individual school level, schools tend to report higher levels of enrolment and attendance the year after receiving a grant, after allowing for school fixed effects by conditioning on past enrolment or attendance. Second, I exploit a natural experiment whereby the government-financed component of the grants (not the donor-financed component) was arbitrarily held up by state governments for a set of (~control) schools that had gone through all the same hoops as some other (~treatment) schools that did receive the grants. The estimated effect of receiving grants on enrolment and attendance levels remains similar. Similar gains are found for schools that qualified to receive cash transfers for girls. The results are robust to measuring enrolment using the national remote SMS reporting system (sssams.org), or the smaller in-person sample survey.
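For readers who want the mechanics of that first, within-school comparison, here is a minimal sketch on synthetic data: is enrolment higher the year after a grant, conditional on the school's own past enrolment? The variable names are illustrative, not those of the actual reporting data, and the real analysis uses a richer specification.

```python
# Sketch of the within-school check on synthetic data; not the actual GESS specification.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 500
schools = pd.DataFrame({
    "past_enrolment": rng.normal(300, 80, n),
    "grant_last_year": (rng.random(n) < 0.6).astype(int),
})
schools["enrolment"] = (0.9 * schools["past_enrolment"]
                        + 25 * schools["grant_last_year"]   # true effect in the fake data
                        + rng.normal(0, 30, n))

fit = smf.ols("enrolment ~ past_enrolment + grant_last_year", data=schools).fit()
print(fit.params["grant_last_year"])                        # recovers roughly 25
```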

What about learning outcomes? The focus of the cutting edge in global education research is rightly on what kids actually learn at school rather than just getting bums on seats. But there are a few countries, including South Sudan, where access and enrolment are still a major issue. You should probably take most statistics on South Sudan with a grain of salt, but one estimate of the net primary enrolment rate is just 43%, which is really pretty low.

However, one of the things that does seem to really matter for student learning outcomes is how teachers are motivated and held accountable. Private schools tend to get more effort out of their teachers, largely because they are paid directly by the school and not by a remote government office. For example, in Uganda, teachers in private schools spend more time in the classroom teaching than their counterparts in public schools. But this isn’t an inherent feature of the ownership and management of public versus private schools. In principle all public teachers could be hired and paid directly by schools, financed by a single central government school grant, rather than all teachers being put directly onto a single national payroll. This might inadvertently be happening in South Sudan anyway, as recent rapid inflation is reducing the value of local-currency-denominated teacher salaries, whilst the donor-financed (hard currency) school grants maintain more of their absolute value (and increase in value relative to teacher central payroll salaries). Shifting more funding directly to schools and allowing the list of eligible schools to include non-state providers could open the door to quality-focused international NGO chains such as Peas.

Table: Uganda Primary School Service Delivery Indicators


It’s very easy to just be incredibly depressed by news coming out of South Sudan, including warnings of a potential new genocide, but as ever, sanity lies in the stoic serenity prayer - we should focus on what we can change (and on the evidence needed to distinguish between what we can and can’t change).

11 October 2024

The Education Commission & RISE

This post first appeared on the RISE blog

The recently launched report by the Education Commission has confirmed that a "business as usual" expansion of inputs is not going to fix the global learning crisis.


The recently launched report by the Education Commission, led by Commission Chair Gordon Brown, a star-studded cast of global leaders (including Center for Global Development affiliates Larry Summers and Ngozi Okonjo-Iweala), and guided by Commission Directors Justin Van Fleet and Liesbet Steer, has brought fresh data and support to the research agenda at RISE. There is a global learning crisis on a massive scale and a “business as usual” expansion of inputs isn’t going to fix it.

First, we’re very happy to see the high frequency of the word “learning”. Although educationists highlighted learning deficits of those in school (eg the 1990 Jomtien Declaration’s opening paragraphs stressed: “...millions more satisfy the attendance requirements but do not acquire essential knowledge and skills”), the UN Millennium Development Goals distorted the agenda onto an exclusive focus on enrolment and primary completion. Learning is, of course, harder than enrolment to reduce to the thin measures that are easy for states to “see,” but that is a weak excuse.

We knew already that the majority of children who can’t read are now *in* school (eg Spaull and Taylor for Southern Africa, the Global Monitoring Report), but the Commission report draws out the implications of current trends for 2030. Their calculations suggest that if current trends continue, 69% of school-aged children in low income countries will not have learnt basic primary level skills by 2030 - despite high enrolment rates. Even in middle income countries, half of children will attain only primary level skills. Millions of children are going to sit through hours of school day after day, and still not acquire the skills they need to prepare them for the complex and rapidly changing world they will face.


Second, what can we do about the global learning crisis? The Commission report leads with a discussion of the need for reform to systems which are coherent around learning performance. They provide new evidence that spending more money alone cannot be the answer. For example, Vietnam spends less money on education than Tunisia, yet scores much better in terms of learning outcomes (one of the reasons RISE picked Vietnam as a focus country). The same pattern is observed across cities in Pakistan, where Khanewal spends a fraction of what other cities spend and yet achieves better results.


Even more shocking are the results from Africa. The Commission digs into an important new paper by Tessa Bold and co-authors (the draft presented at the RISE Conference) looking at the World Bank’s Service Delivery Indicators survey in seven countries. Analysis by the Commission reveals that less than half of spending on salaries and materials is actually used in teaching.


It is clearly possible to use existing resources more efficiently, and do more with less. RISE aims to understand the efficiency of schools in creating learning through a systems based approach. The best measure that is currently available for measuring features of systems, such as policies on teachers or student assessments, is the World Bank Systems Approach for Better Education Results (SABER) initiative. Each of our RISE Country Research Teams will carry out a baseline assessment of the system they are studying, based on SABER instruments. Analysis by the Commission report highlights the importance of systems in explaining performance - countries with stronger system features, as measured by the SABER surveys, score better on learning assessments.


The Education Commission report strengthens the case for research into how to reach high performing education systems to accelerate learning progress. It would be a tragedy if their “business as usual” projection becomes the sad reality, and when the end of the UN Sustainable Development Goals is reached in 2030, the majority of children emerge from school unprepared for the challenges they will face.

For more analysis listen to CGD Senior Fellows Bill Savedoff and Justin Sandefur discuss the report on the CGD podcast with Rajesh Mirchandani.