26 Apr 2017, 4 p.m.
Different types of impact evaluations can tell us different things about the success or failure of civic technologies. Randomised evaluations aren't always feasible, but when they are, they can be a useful tool in establishing true causal links between technology implementation and relevant outcomes.
At J-PAL, a research center at the Massachusetts Institute of Technology, a network of affiliates is conducting randomised evaluations of programs that use technology to combat corruption and improve civic engagement. With governments around the world increasingly turning to technology, evidence from randomised evaluations tells us what's working — and what's not — in these efforts.
In this session, Eliza Keller, a senior policy associate on J-PAL’s governance team, will discuss case studies of randomised evaluations of large-scale civic technology programs, as well as some emerging evidence-based insights on citizen empowerment. The session will also include a conversation on how academics can best partner with civil society organisations to evaluate civic tech programs, share results widely, and scale up what works.
The civic tech conference that plugs a gap in debate, networking and research among practitioners, commentators, academics and funders of civic technology.
In association with
The William and Flora Hewlett Foundation