27 September 2018

The best thing about cash benchmarking is that it highlights just how small most aid is

The best take on the new cash benchmarking study from Rwanda was this one by Michael Kevane:
"the main takeaway is that neither intervention (when evaluated at the low Gikuriro cost of $141 per household) improved child outcomes." Yikes. I guess though if household size = 7 that is only $20 per person.
Should we really be surprised that giving someone $20 doesn't improve any measurable outcomes? Maybe Kevane's maths is wrong and household size is smaller, but $20 is so small we could double it and still not expect to see anything.

That's one big advantage of cold hard individual cash transfers: they make explicit what the actual amount per person is, which is not so obvious when it's one big community project costing loadsamoney and affecting loadsapeople. John Quiggin made this point years ago.

Last year, DFID gave £2.6 BILLION to countries in Africa. That's so much money! How are they still so poor when we give them BILLIONS of pounds every year? Well, hold on, there are 1.2 billion PEOPLE in Africa. So that works out at just over £2 each for the whole YEAR. Of course it doesn't go to everyone; let's say the money is perfectly targeted on the poorest 10% of people. So they get just over £20 each.
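To make the arithmetic explicit, here's the back-of-the-envelope calculation as a quick Python sketch, using the round figures above and Kevane's assumed household size of 7:

    # Back-of-the-envelope aid-per-person arithmetic, using the round figures above.
    def per_person(total_spend, recipients):
        return total_spend / recipients

    # Gikuriro cost per household, spread over an assumed household of 7
    print(per_person(141, 7))              # ~$20 per person

    # DFID's £2.6 billion spread over all 1.2 billion people in Africa...
    print(per_person(2.6e9, 1.2e9))        # ~£2.17 per person per year

    # ...or perfectly targeted on the poorest 10%
    print(per_person(2.6e9, 0.1 * 1.2e9))  # ~£21.67 per person per year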

There is a lot of magical thinking that by pooling money together it somehow automatically has totally outsized impacts. Of course it's possible that smart investment in research or better governance can have truly outsized impact if it can nudge a country toward a slightly higher growth rate, but that isn't what most aid is even trying to do, and even when it is, that stuff is wicked hard and we can expect most attempts to fail.

Aid is great, but less hubris please. And from the sceptics too, less ridiculous implicit expectations about what aid could plausibly achieve.

Don't Buy Local

This summer I finally got around to doing my first Park Run, joining the now millions of people around the world who turn up on Saturday mornings for a free 5km timed run around their local park (I managed a not too shabby 27 minute time, roughly average for my age). 

I also work on global development, so was pretty disappointed to receive an email announcing a new clothing line from the founder of Park Run that will be manufactured exclusively in Europe. Paul Sinton-Hewitt CBE was concerned with the “horrendously exploitative … factories in the Far East employing questionable practices, paying the lowest wages and exposing their workers to dangerous conditions”. 

Paul is right to be concerned about the wellbeing of East Asian factory workers, but moving those jobs to Europe is not the solution. Moving manufacturing jobs from places where jobs are scarce, to Europe where jobs are not scarce, is not a good thing. 

My point is not a new one; Paul Krugman, the Nobel prize-winning left-wing economist, wrote more than ten years ago in praise of sweatshops. But the point still stands, and is still apparently missed. The factories in which most of our sports gear is made have poor working conditions compared with jobs in rich countries, but they are usually preferable to the actual alternatives facing people unfortunate enough to be stuck in poor countries.

In 1980, before it became the workshop of the world, extreme poverty in China was close to 90 per cent. Today it is less than 1 per cent. That change would not have been possible without the manufacturing industry. It’s hard to think of anything else that has had a bigger positive impact on human wellbeing than the transformation of China’s economy.

And it’s not just China that has benefited. Now that wages are starting to rise in China, manufacturers are moving on to other lower-income countries, such as neighbouring Vietnam and Cambodia, but also further afield, to Ethiopia and Nigeria. 

Researchers Rachel Heath and Mushfiq Mobarak looked at what happened in Bangladesh when garment factories started opening. As factories rewarded basic literacy and numeracy, girls who lived in villages near to new factories chose to stay in school longer. The effect on education was bigger than a government social programme explicitly designed to increase schooling. In Ethiopia, Chris Blattman and Stefan Dercon found that people use factory jobs as a safety net. Pay and conditions may be poor in the new factory jobs, but they are always there, even when other informal means of getting by fall through. 

So what is a concerned Park Runner to do? Ultimately pay and conditions in poor countries will only improve when workers get better outside options. Thanks to the hard work of poor Chinese in the 1980s and 1990s, China’s economy has grown, wages have risen, and Chinese workers today can afford to ask for more and turn down the worst offers. The best way to encourage that trend is to continue to buy things made in poor countries. Other ways to give poor people more options are to just give them cash directly (you can do literally that at givedirectly.org), to go on holiday to poor countries and spend money on services from poor people, or if you’re a citizen of a rich country to encourage your government to make it easier for people from poor countries to come and work in your country. 

I have nothing but admiration for Paul Sinton-Hewitt’s founding of Park Run, and of his desire to create an inclusive and ethical line of sportswear. I just hope he reconsiders his decision not to support jobs where they are needed most.

20 September 2018

Probably the best new research in global education economics

A couple of months ago the Centre for Education Economics asked me to edit their Annual Research Digest - a series of essays by leading thinkers on their favourite research paper from the past year.

The Digest is out today and I'm really pleased with how it's come out: a fantastic set of blogs summarising a fascinating set of papers.

Here's the summary, and you can download the full report here.


18 June 2018

How smart are teachers in developing countries?

Eric Hanushek, Marc Piopiunik, and Simon Wiederhold published some fascinating analysis in a 2014 NBER working paper (link) comparing the literacy and numeracy of teachers to the overall population (with a university degree) in a range of OECD countries. In the figures below, the grey bars show the gap between the 25th and 75th percentile of skill for all university-educated adults in each country, and the red marker in the middle shows the median skill level of teachers in that country. Perhaps unsurprisingly, teachers in Finland are the highest skilled internationally, but also within Finland they are drawn from relatively high in the distribution of adults. This fits with common narratives about teaching being a particularly well-regarded, and selective, profession in Finland.
 


As far as I'm aware, no such comparison exists for low- and lower-middle-income countries. Tessa Bold and coauthors present results on the tragically low absolute skill levels of teachers in sub-Saharan Africa (link), and similarly Justin Sandefur presents data comparing the skill level of teachers in sub-Saharan Africa to students in OECD countries (link). But neither compares the level of skill of teachers in Africa to teachers in high-income countries, or to the skill of other adults in Africa.

So I made one*. The World Bank's STEP Skills surveys use the same literacy assessment as the OECD PIAAC survey that Hanushek et al used, so I replicated and extended their graph, adding on the countries from the STEP data in which it was possible to identify teachers and their literacy level - specifically Vietnam, Colombia, Armenia, Georgia, Ghana, Kenya, and Bolivia. 


The first point to note is how low the overall distribution in lower income countries is. The majority of adults (in urban areas) who have graduated from university fall into the Level 2 category, whereas in high income countries most fall into Level 3. The PIAAC guide (copied below) explains what these levels mean: Level 2 tasks “may require low-level inferences” whereas Level 3 tasks require “navigating complex texts”.

The second point to note from the figure is the remarkable regularity of average (median) teacher performance in each national distribution. There is some variation, but most teachers in high-income countries are roughly in the middle of the distribution of university educated adults. Teaching in lower-income countries tends to be more selective - the median teacher is much closer to the 75th percentile of adults in Colombia, Ghana, and Kenya.

These two facts, the low overall level of literacy skill amongst graduates in the lower-middle-income countries, and the position of teachers within the distribution, imply an upper bound on the ability of a more selective recruitment process to improve the average quality of teachers. If, for example, Ghana and Kenya managed to increase the selectivity of the teaching profession enough to raise average teacher skills above the 75th percentile of graduates (something no other country has done), that level could still be well below Level 3 on the PIAAC scale.
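To see roughly why, here's a minimal simulation sketch. The graduate score distribution is invented purely for illustration (a median in the middle of Level 2, with a spread similar to the figure); the 276 cut-off is the published lower bound of PIAAC Level 3:

    import numpy as np

    # Assumed, illustrative distribution of literacy scores for university graduates:
    # median in the middle of PIAAC Level 2 (226-275), spread similar to the figure.
    rng = np.random.default_rng(0)
    graduates = rng.normal(loc=250, scale=25, size=100_000)

    LEVEL_3_CUTOFF = 276  # PIAAC Level 3 ("navigating complex texts") starts at 276

    p75 = np.percentile(graduates, 75)
    print(f"75th percentile of graduates: {p75:.0f}")   # ~267 under these assumptions
    print(f"Below Level 3? {p75 < LEVEL_3_CUTOFF}")     # True

    # Even a teaching profession selective enough to draw its median teacher from above
    # the 75th percentile of graduates would, under these assumptions, still sit in Level 2.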

Getting better teaching is critical for improving education in developing countries. This data highlights the scale of one aspect of the challenge. Education systems are going to have to figure out how to deliver for children with teachers who may be able to make "low-level inferences" but are unable to "navigate complex texts."

---

*Thanks to Laura Moscoviz at the Education Partnerships Group for assistance with the graph!

---

08 June 2018

Are Rwandans really too scared to answer surveys honestly? (TL;DR: probably not)

Hi Roving Bandit blog fans! Long time no write. I've just relocated from London to Kigali, so I'm hoping that being in a more stimulating environment is going to encourage me to start blogging more frequently. Probably the best economics blog in Rwanda!

I'll kick things off by just regurgitating some recent Twitter chat. A couple of days ago I posted Gallup survey findings that Rwanda is the 3rd most accepting country in the world for migrants (pretty cool, Rwanda).

The brilliant Dina Pomeranz responded "A friend who works at Afrobarometer told me that survey responses from Rwanda are problematic in general, because people tend to answer "it's very good" to everything (out of fear of repercussions)."

This would indeed be worrying were it the case, but I was sceptical. Afrobarometer doesn't actually operate in Rwanda, but I thought I'd take a look at some responses on some of the surveys that have been done here.

First, the 2014 National Household Survey asks people about satisfaction with some government services.

- 45% said they were dissatisfied with water services
- 23% said they were dissatisfied with roads 

That seems to be a lot of people who were not afraid to give a negative answer to a survey. 

The 2015 DHS asks people about their experience of personal violence, and a large number of people do say they have been affected.

Finally, a 2016 government survey by the Rwanda Governance Board also asked people about their satisfaction with a range of government services. Some were pretty positive, some not so much. There was

-  31% net dissatisfaction with social protection services 
-  38% net dissatisfaction with hygiene & sanitation services. 
-  45% net dissatisfaction with infrastructure services 

These do not at all look to me like the kind of numbers you'd expect from people so cowed by fear that they are afraid to give negative responses to an anonymous survey. Maybe it's time for the team at Afrobarometer to take another look at Rwanda?

08 February 2018

Ark Blogging: The British government’s new plan to get children learning

Over at the Ark blog:
"The new DFID education policy “Get Children Learning” was published last week. As its name suggests, the policy is all about moving the needle on learning outcomes. It sets out a strategy to tackle the learning crisis in developing countries, which has left 90 percent of primary school leavers in low-income countries without basic literacy or numeracy.  As a strategy, it’s relevant and ambitious, and it’s been widely welcomed by the education sector. 
Of course, tackling a crisis this deep and complex is easier said than done. So how does DFID plan to do it? 
Three priorities underpin DFID’s strategy to tackle the learning crisis and form the backbone of their new policy: better teaching, education system reform, and targeted support to the most marginalised kids. And permeating the strategy are three themes – more and better research; more attention paid to the political economy of education reform; and the “Best of British” – how UK expertise can be better leveraged to improve schools in developing countries. These themes are interesting because each represents a fairly fundamental shift in or crystallisation of thinking from DFID, and together they provide some insight into how the strategy will be executed. The “Best of British” theme in particular reflects a new willingness by DFID to think more strategically about how to facilitate cross-system learning."
Read the rest here

21 December 2017

The Global Education Reform Movement (GERM) in 2017: A Year in Review

First posted at the Ark blog

If the Adam Smith Institute can reclaim the pejorative label ‘neoliberal’, maybe we should be reclaiming the pejorative ‘global education reform movement (GERM)’? Aside from the unfortunate acronym, it’s actually a pretty good description of the cluster of people and organisations trying to shake things up a bit and do things differently in global education policy.

Here’s a round-up of some of my highlights from the GERM in 2017. There has been some solid progress in documenting the scale of the global learning crisis, and steps towards building the kind of global data infrastructure we need to measure the SDG on learning. In 2018 I’d like to see more measurement, and also more experimentation. The conversation in education still needs to shift some more away from “we know how to fix this just give us the cash” to the maybe slightly less compelling but more honest “this is a crisis that needs fixing, we know where we want to get to but we don’t yet know how to get there, so we’re going to throw everything we have at it, and measure the hell out of everything so we can learn and course correct until we get there.”

---

January - The big global education jamboree kicks off each year with the Education World Forum in London, not to be confused with the World Education Forum, the Global Education and Skills Forum, or the World Global Forum for Education and Children Who Can’t Read Good (I might have made up the last one).

George K. Werner, the Minister of Education in Liberia, publishes the first of many articles written in 2017 on the controversial Partnership Schools programme. “Less than 60% of school-aged children in Liberia are in school … Those who do attend school may not fare much better: among adult women who reached fifth grade in Liberia, less than 20% can read a single sentence.”

In Colombia, 22 of Bogota’s ‘Concession schools’ (PPP schools, like charters in the US or academies in the UK) re-open for another 10 year contract, after their last contract ended and was renegotiated last year.

February - The second annual meeting of the Global Schools Forum takes place. The GSF event is a gathering where non-state school operators working in low- and middle-income countries and targeting the bottom half of the income distribution come together, network, and share ideas. I blogged about my general feelings of inferiority whilst rubbing shoulders with all these inspiring people who have set up their own schools here. Watch out for GSF in 2018, as the annual meeting promises to be even bigger and better!

March - The next big global education jamboree in the calendar is the Global Education and Skills Forum in Dubai. Daisy Christodoulou blogged about how she remarkably managed to win a debate in favour of teaching kids facts in the age of Google. She also reported on a fascinating debate about teaching global versus national values. If you don’t know Daisy, 2007 University Challenge champion, you should really read this superb feature profile by Laura McInerney in SchoolsWeek.

As we don’t (yet) have very good data on actual learning in many developing countries, a growing movement is spreading the idea started by Pratham in India of recruiting citizen-volunteers to administer simple learning assessments on a mass scale. The “People’s Action for Learning” (PAL) network held its 5th annual meeting in Mexico and welcomed Nepal as its newest member, bringing the total to 16 countries.

April - The Government of the Punjab province in Pakistan contracts out the management of around 1,000 failing schools to private operators, bringing the total for the first year of the Public School Support Programme (PSSP) to almost 5,000 schools - that’s about how many schools have been converted to Academies in the UK over 16 years!

June - The great and the good of global education research meet in Washington DC for the Research on Improving Systems of Education (RISE) conference (report here). Ark EPG launches its Rigorous Review of the evidence on public-private partnerships in education in developing countries (post 2009). The review found only 22 studies that met the quality criteria, but of these most were positive.

In the Philippines, the senior high school (Grade 11 and 12) voucher programme enters its second year - over 600,000 students started in Grade 11 through the programme last year.

July - The Western Cape government drafts legislation for the creation of a new independent School Evaluation Authority.

Amitav Virmani, CEO of the Education Alliance, blogged an update on South Delhi Municipal Government’s Partnership Schools. Starting with one (Ark) school in 2015, the programme is now up to 29 schools, “While the rest of the system lost 12,000 students last year, these schools doubled their enrolment to 1600 children.”

Education International and ActionAid publish a critical research report on Partnership Schools for Liberia based on qualitative research in “up to” 20 schools (how many was it?), just 2 months before the actual results would be published from the large-scale RCT which assessed the actual learning of thousands of children in 185 schools, which everyone agreed was a “really helpful” contribution to the dialogue.

August - The 3rd annual “State of the Nation” report on India’s Right to Education Act (Section 12(1)(c)) comes out, highlighting that 5 years after the policy was confirmed, still only 1 in 3 states in India have accessed any federal funding for the programme. This year’s report dives into some of the details of the private school lotteries in states that are actually implementing the policy at scale, including Gujarat, Karnataka, Madhya Pradesh, Maharashtra, and Rajasthan.

September - In a major contribution to the all-too-sparse global data on learning, the UNESCO Institute for Statistics releases a new global indicator for reading proficiency (based on PIRLS, TIMSS, and SACMEQ) - finding a headline figure of 387 million children worldwide who can’t read. Of these, 2 in 3 are actually attending school; they just aren’t learning.

Results from the 1st year of the Partnership Schools for Liberia (PSL) evaluation come in, with both supporters and critics hailing the results as proving them right. On average PSL schools saw solid improvements in teaching and learning - great for the first year of a pilot - but with some operators spending large amounts of the money they raised, and weaknesses in contracting allowing “the largest operator” to move some children and teachers out of their schools.

The World Bank publishes its first ever annual World Development Report focused entirely on education. The report carried many of the same themes as the RISE research programme - highlighting the global learning crisis and calling for more learning assessment and for systemic reforms to try and address the crisis in quality.

UNESCO publishes its flagship Global Education Monitoring Report on the topic of accountability, and easily comes away with the prize for best-illustrated report.

Under the above-mentioned South Delhi Government Partnership Schools programme, 2 new Ark-backed schools open this September, part of a plan to grow to 10 schools by 2022.

October - The Uganda National Examinations Board (UNEB) announces that starting in 2018, a new measure of secondary school effectiveness will be published based on value-added. This is a fairer way to compare schools than just looking at the level of test scores, which are much more heavily influenced by home background. This will make Uganda the first country in sub-Saharan Africa to use value-added to judge school quality.

November - The UK Parliament International Development Committee publishes its report on DFID’s work on global education. The report welcomed the results from the PSL study and called for more research into non-state schooling, as well as backing the RISE research programme focused on systems of accountability.

In South Africa, an op-ed war breaks out over the collaboration schools project, with Equal Education calling the pilot illegal, undemocratic, and unaccountable, to which Education Minister Debbie Shafer replies “We will not be deterred by the likes of Equal Education, who cannot come up with any feasible plan to address the inequalities that still exist as a result of our apartheid legacy,” and David Harrison from the DG Murray Trust adds that the programme “gives parents the right to decide whether or not they want to be part of the experiment.”

At the World Innovation Summit for Education (WISE) in Doha, more progress was made on the nascent “global education ecosystem” project being launched by the Education Commission, R4D, Teach for All, Asia Society, and Brookings. At the exact same time, Pratham were putting these ideas into action, sharing their promising “Teaching at the right level” programme with participants from 12 countries, an idea which according to JPAL is “the most consistently effective at improving learning” from all of their learning-focused RCTs.

December - Just in time for Christmas, the latest international test score results are published - this time in reading from PIRLS. These tests included no low-income countries, and just a handful of lower-middle income countries: Egypt, Georgia, and Morocco. The worst performing country though was upper-middle income South Africa, where 8 out of 10 children can’t read. Nic Spaull provided extensive coverage of the very depressing news for South Africa on his must-read blog.

What did I miss?

30 November 2017

How to achieve public policy reform by surprise and confusion

This is a great quote from Simeon Djankov, former Finance Minister of Bulgaria (and founder of the World Bank Doing Business indicators). Pulling slightly in the opposite direction of the Tony Blair school of thought on reform (ruthless prioritisation), Djankov instead suggests going off in 7 different directions at once in order to surprise and confuse the opposition.
"Well, one thing that did certainly affect it is the tactics of how to reform, in the sense that, certainly in academia, you are basically told you need to think deeply. Then there are a lot of pressure groups, lobbies, so you need to talk to them. You need to use the media for communicating the benefits of reform, and so on. Some of the reformers, successful reformers that I spoke with, before I joined the Bulgarian government, basically I said, 'You go, and on Day 1, you surprise everybody. So, you go in every direction you can, because they will be confused what's happening and you may actually be successful in some of the reforms. So, this is what I did. I went to Bulgaria in late July 2009; the Eurozone Crisis had already started around us. Greece was just about to collapse a few months later. So, there was kind of a feeling that something is to happen. But, instead of going, 'Let's now do labor reform,' then, 'Let's do business entry reform,' in the government we literally went 6 or 7 different directions hoping that Parliament will be, you know, confused or too happy to be elected--they were just elected. And we actually succeeded in most of these reforms. When I tried to do meaningful, well-explained reforms two years after, they all got bogged down, because lobbying will essentially take over and, 'Not now; let's wait for next year's government,' and so on."
From the always interesting Econ Talk.

23 November 2017

Innovations in Bureaucracy


Last week I was at the “World Innovation Summit for Education” (WISE) in Doha, and I don’t think I heard the word “bureaucrat” once. Clearly the organisers don’t read Blattman or they would know that Bureaucracy is so hot right now.

The World Bank might be a bit more ahead of the curve here, and held a workshop earlier this month on “Innovating Bureaucracy.” I wasn’t able to attend (ahem, wasn’t invited), and as the king of conference write-ups doesn’t seem to have gotten around to it yet, I’ve written up my notes from skimming through the slides (you can read the full presentations here).

Tim Besley — state effectiveness now lies at the heart of the study of development. Incentives, selection, and culture are key, and it is essential to study the 3 together not in isolation.

Michael Best — looks at efficiency of procurement across 100,000 government agencies (each with decentralised hiring) in Russia. Wide variation in prices paid by different individuals/agencies, with big potential for improvement.



Zahid Hasnain — presents Worldwide Bureaucracy Indicators (WWBI) for 79 countries. Public sector employment is 60% of formal employment in Africa & South Asia, and is usually better paid than private employment.



Richard Disney — provides a critique of simple public-private pay gap comparisons — need to consider conditions, pensions, and vocation. Lack of well-identified causal studies.


James L. Perry — 5 key lessons on motivating bureaucrats in developing countries.
(1) select for ‘vocation’
(2) work on prosocial culture
(3) leverage employee-service beneficiary ties
(4) teach newcomers public service values
(5) develop leaders who model public service values. (full paper here)

Erika Deserranno: Summary of the experimental literature on financial & non-financial incentives for workers. Both can work when well designed, or backfire when not. 3 conditions for effective performance-based incentives:
(1) Simple to understand
(2) Linked to measurable targets
(3) Workers can realistically affect targets 




Yuen Yuen Ang — How has China done so well in last 40 years without democratic reform? Through bureaucratic reform which has provided accountability, competition, limits on power. 50 million bureaucrats: 20% managers & 80% frontline workers. Managers have performance contracts focused on outcomes, with published league tables. Frontline workers have large performance-based informal compensation. (bonus podcast edition with Alice Evans here)

Stuti Khemani — research & policy rightly moving from short-route accountability to long-route. Need much more evidence on how public sector workers are selected. One example suggests elected Chairpersons have higher cognitive ability, higher risk aversion, lower integrity.


Jane Fountain — government IT projects fail in part because they’re too large — should move to agile development (build small and quick, get feedback, revise)

Arianna Legovini — improved inspections of health facilities in Kenya seem to be improving patient safety.



Daniel Rogger — new empirics of bureaucracy — World Bank bureaucracy lab investing in substantial new descriptive work on bureaucracy and bureaucrats using both surveys & administrative data, as well as RCTs on reforms

Jim Brumby, Raymond Muhula, Gael Rabelland — two helpful 2x2s — need to understand both capacity & incentive for reform, and then match data architecture to difficulty of measuring performance.

16 October 2017

Open Data for Education

There’s a global crisis in learning, and we need to learn more about how to address it. Whilst data collection is costly, developing countries have millions of dollars worth of data about learning just sitting around unused on paper and spreadsheets in government offices. It’s time for an Open Data Revolution for Education.

The 2018 World Development Report makes clear the scale of the global learning crisis. Fewer than 1 in 5 primary school students in low-income countries can pass a minimum proficiency threshold. The report concludes by listing 3 ideas for what external actors can do about it:
  1. Support the creation of objective, politically salient information
  2. Encourage flexibility and support reform coalitions
  3. Link financing more closely to results that lead to learning
The first of these, generating new information about learning, can be expensive. Travelling up and down countries to sit and test kids for a survey can cost a lot of money. The average RCT costs in the range of $0.5m. Statistician Morten Jerven added up the costs of establishing the basic census and national surveys necessary to measure the SDGs — coming to a total of $27 billion per year, far more than is currently spent on statistics.

And as expensive as they can be, surveys have limited value to policymakers, as they cover a limited sample and can only provide data about trends and averages, not individual schools. As my colleague Justin Sandefur has written: “International comparability be damned. Governments need disaggregated, high frequency data linked to sub-national units of administrative accountability.”

Even for research, much of the cutting edge education literature in advanced countries makes use of administrative not survey data. Professor Tom Kane (Harvard) has argued persuasively that education researchers in the US should abandon expensive and slow data collection for RCTs, and instead focus on using existing administrative testing and data infrastructure, linked to data on school inputs, for quasi-experimental analyses that can be done quickly and cheaply.

Can this work in developing countries?
My first PhD chapter (published in the Journal of African Economies) uses administrative test score data from Uganda, made available by the Uganda National Exams Board at no cost, saving data collection that would have cost hundreds of thousands of pounds and probably been prohibitively expensive. We’ve also analysed the same data to estimate the quality of all schools across the country, so policymakers can look up the effectiveness of any school they like, not just the handful that might have been in a survey (announced last week in the Daily Monitor).

Another paper I’m working on is looking at the Public School Support Programme (PSSP) in Punjab province, Pakistan. The staged roll-out of the program provides a neat quasi-experimental design that lasted only for the 2016–17 school year (the control group have since been treated). It would be impossible to go in now and collect retrospective test score data on how students would have performed at the end of the last school year. Fortunately, Punjab has a great administrative data infrastructure (though not quite as open as the website makes out), and I’m able to look at trends in enrolment and test scores over several years, and how these trends change with treatment by the program. And all at next to no cost.
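That kind of staged roll-out is typically analysed with a two-way fixed-effects difference-in-differences. Here's a minimal sketch with synthetic data (all variable names and numbers are hypothetical, not the actual Punjab data):

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical school-by-year panel: treated schools switch to PSSP management in 2016.
    rng = np.random.default_rng(1)
    years = [2013, 2014, 2015, 2016]
    df = pd.DataFrame([(s, y) for s in range(200) for y in years],
                      columns=["school_id", "year"])
    df["treated"] = (df["school_id"] < 100).astype(int)
    df["pssp"] = ((df["treated"] == 1) & (df["year"] >= 2016)).astype(int)
    df["score"] = (50 + 2 * df["treated"]              # fixed differences between schools
                   + 1.5 * (df["year"] - 2013)         # common time trend
                   + 5 * df["pssp"]                    # assumed treatment effect
                   + rng.normal(0, 10, len(df)))

    # School fixed effects absorb time-invariant differences, year fixed effects absorb
    # common shocks; the coefficient on `pssp` is the difference-in-differences estimate.
    model = smf.ols("score ~ pssp + C(school_id) + C(year)", data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["school_id"]})
    print(model.params["pssp"], model.bse["pssp"])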

For sure there are problems associated with using administrative data rather than independently collected data. As Justin Sandefur and Amanda Glassman point out in their paper, official data doesn’t always line up with independently collected survey data, likely because officials may have a strong incentive to report that everything is going well. Further, researchers don’t have the same level of control or even understanding about what questions are asked, and how data is generated. Our colleagues at Peas have tried to use official test data in Uganda but found the granularity of the test is not sufficient for their needs. In India there are not one but several test boards, who end up competing with each other and driving grade inflation. But not all administrative data is that bad. To the extent that there is measurement error, this only matters for research if it is systematically associated with specific students or schools. If the low quality and poor psychometric properties of an official test simply produce noisy estimates of true learning, this isn’t such a huge problem.

Why isn’t there more research done using official test score data? Data quality is one issue, but another big part is the limited accessibility of data. Education writer Matt Barnum wrote recently about “data wars” between researchers fighting to get access to public data in Louisiana and Arizona. When data is made easily available it gets used; a Google Scholar search for the UK “National Pupil Database” finds 2,040 studies.

How do we get more Open Data for Education?
Open data is not a new concept. There is an Open Data Charter defining what open means (Open by default, timely and comprehensive, accessible and usable, comparable and interoperable). The Web Foundation ranks countries on how open their data is across a range of domains in their Open Data Barometer, and there is also an Open Data Index and an Open Data Inventory.

Developing countries are increasingly signing up to transparency initiatives such as the Open Government Partnership, attending the Africa Open Data conference, or signing up to the African data consensus.

But whilst the high-level political backing is often there, the technical requirements for putting together a National Pupil Database are not trivial, and there are costs associated with cleaning and labelling data, hosting data, and regulating access to ensure privacy is preserved.

There is a gap here for a set of standards to be established in how governments should organise their existing test score data, and a gap for financing to help establish systems. A good example of what could be useful for education is the Agriculture Open Data Package: a collaboratively developed “roadmap for governments to publish data as open data to empowering farmers, optimising agricultural practice, stimulating rural finance, facilitating the agri value chain, enforcing policies, and promoting government transparency and efficiency.” The roadmap outlines what data governments should make available, how to think about organising the infrastructure of data collection and publication, and further practical considerations for implementing open data.

Information wants to be free. It’s time to make it happen.

11 October 2017

Why don’t parents value school effectiveness? (because they think LeBron’s coach is a genius)


A new NBER study exploits the NYC centralised school admissions database to understand how parents choose which schools to apply for, and finds (shock!) parents choose schools based on easily observable things (test scores) rather than very difficult to observe things (actual school quality as estimated (noisily!) by value-added).

Value-added models are great — they’re a much fairer way of judging schools than just looking at test scores. Whilst test scores conflate what the student’s home background does with what the school does, value-added models (attempt to) control for a student’s starting level (and therefore all the home inputs up to that point), and just look at the progress that students make whilst at a school.
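In its simplest form this is just a lagged-score regression with school indicators. A minimal sketch with made-up data (nothing here is the actual NYC or Uganda model), where each school coefficient is its estimated value-added:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic data: 50 schools, 40 pupils each, with unknown 'true' school effects.
    rng = np.random.default_rng(0)
    school = np.repeat(np.arange(50), 40)
    true_va = rng.normal(0, 0.2, 50)
    baseline = rng.normal(0, 1, len(school))            # intake, reflecting home background
    endline = 0.7 * baseline + true_va[school] + rng.normal(0, 0.5, len(school))
    df = pd.DataFrame({"school": school, "baseline": baseline, "endline": endline})

    # Regress endline scores on baseline scores plus a dummy for every school ('- 1'
    # drops the intercept so each school gets its own coefficient = its value-added).
    model = smf.ols("endline ~ baseline + C(school) - 1", data=df).fit()

    # The confidence intervals on the school effects are often wide relative to the
    # differences between schools - which is the 'noisy' caveat below.
    print(model.conf_int().filter(like="C(school)", axis=0).head())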

David Leonhardt put it well:
“For the most part, though, identifying a good school is hard for parents. Conventional wisdom usually defines a good school as one attended by high-achieving students, which is easy to measure. But that’s akin to concluding that all of LeBron James’s coaches have been geniuses.”
Whilst value-added models are fairer on average, they’re typically pretty noisy for any individual school, with large and overlapping confidence intervals. Here’s the distribution of school value-added estimates for Uganda (below). There are some schools at the top and bottom that are clearly a lot better or worse than average (0), but there are also a lot of schools around the middle that are pretty hard to distinguish from each other, and that is using an econometric model to analyse hundreds of thousands of data points. A researcher or policymaker who can observe the test score of every student in the country can’t distinguish between the actual quality of many pairs of schools, and we expect parents to be able to do so on the basis of just a handful of datapoints and some kind of econometric model in their head??




Making school quality visible

If parents don’t value school effectiveness when it is invisible, what happens if we make it visible by publishing data on value-added? There are now several studies looking at the effect of providing information to parents on test score levels, finding that parents do indeed change their behaviour, but there are far fewer studies directly comparing the provision of value-added information with test score levels.

One study from LA did do this, looking at the effect on local house prices of releasing value-added data compared to just test score levels, and finding no additional effect of providing the value-added data. But this might just be because not enough of the right people accessed the online database (indeed, another study from North Carolina found that providing parents with a 1-page sheet of information that had already been online for ages still caused a change in school choice).

It is still possible that publishing and properly targeting information on school effectiveness might change parent behaviour.

Ultimately though, we’re going to keep working on generating value-added data with our partners, because even if parents don’t end up valuing the value-added data, there are two other important actors who perhaps might — the government when it is considering how to manage school performance, and the schools themselves.

26 September 2017

JOB: Research Assistant on Global Education Policy

I’m hiring a full-time research assistant based in London, for more details see the Ark website here.
 
---
 
Research and evidence are at the heart of EPG’s work. We have:
  • Collaborated with JPAL on a large-scale field experiment on school accountability in Madhya Pradesh, India
  • Commissioned a randomized evaluation by IPA of Liberia’s public-private partnership in primary schooling
  • Led a five-year randomized trial of a school voucher programme in Delhi
  • Helped the Uganda National Examinations Board create new value-added measures of school performance
  • Commissioned scoping studies of non-state education provision in Kenya and Uganda 

Reporting to the Head of Research and Evaluation, the Research Assistant will contribute to EPG’s work through a mixture of background research, data analysis, writing, and organizational activities. S/he will support and participate in ongoing and future academic research projects and EPG project monitoring and evaluation activities.

The role is based in Ark’s London office with some international travel.

The successful candidate will perform a range of research, data analysis, and coordination duties, including, but not limited to, the following: 

  • Conduct literature and data searches for ongoing research projects.
  • Organize data, provide descriptive statistics, and run other statistical analyses using Stata, preparing publication-quality graphics
  • Collaborate with EPG’s project team to draft blogs, policy briefs, and notes on research findings.
  • Support EPG’s project team in the design and implementation of project monitoring and evaluation plan
  • Provide technical support and testing on the development of value-added models of school quality
  • Coordination and update of the EPG/GSF research repository
  • Organise internal research and policy seminars
  • Perform other duties as assigned. 

The successful candidate will have the following qualifications and skills: 

  • Bachelor’s (or Master’s) degree in international development, economics, political science, public policy, or a related field.
  • Superb written and verbal communication skills.
  • Competence and experience conducting quantitative research. Experience with statistical software desired.
  • Familiarity with current issues, actors and debates in global education
  • Proven ability to be a team player and to successfully manage multiple and changing priorities in a fast-paced, dynamic environment, all while maintaining a good sense of humor.
  • Outstanding organization and time management skills, with an attention to detail.
  • Essential software skills: Microsoft Office (specifically Excel) and Stata
  • Experience working in developing country contexts or international education policy -- a plus
  • Experience designing or supporting the implementation of research evaluations and interpreting data -- a plus
  • Fluency or advanced language capabilities in French -- a plus
 

05 September 2017

Why is there no interest in kinky learning?


Just *how* poor are *your* beneficiaries though? In the aid project business everybody is obsessed with reaching the *poorest* of the poor. The ultra poor. The extreme poor. Lant Pritchett has extensively criticised this arbitrary focus on getting people above a certain threshold, as if the people earning $1.91 a day (just above the international poverty line) really have substantively better lives than those on $1.89 (just below). Instead he argues we should be focusing on economic growth and lifting the whole distribution, with perhaps a much higher global poverty line to aim at of around $10–15 a day, roughly the poverty line in rich countries.

Weirdly, we have the opposite problem in global education, where it is impossible to get people to focus on small incremental gains for those at the bottom of the learning distribution. Luis Crouch gave a great talk at a RISE event in Oxford yesterday in which he used the term ‘cognitive poverty’ to define those at the very bottom of the learning distribution, below a conceptually equivalent (not yet precisely measured) ‘cognitive poverty line’. Using PISA data, he documents that the big difference between the worst countries on PISA and middling countries is precisely at the bottom of the distribution - countries with better average scores don’t have high levels of very low learning (level 1 and 2 on the PISA scale), but don’t do that much better at the highest levels.



But when people try and design solutions that might help a whole bunch of people get just across that poverty line, say from level 1 or 2 to level 3 or 4 (like, say, scripted lessons), there is dramatic push-back from many in education. Basic skills aren’t enough, we can’t just define low-bar learning goals, we need to develop children holistically with creative problem solving 21st century skills and art lessons, and all children should be taught by Robin Williams from Dead Poet’s Society.

Why have global poverty advocates been so successful at re-orientating an industry, but cognitive poverty advocates so unsuccessful?