So Many Ideas, So Little Time: Using Adaptive Experiments for Efficient and Iterative Inquiry.

Tomoko Harigaya, Precision Agriculture for Development’s Director of Research, and Grady Killeen, a former PAD Research Associate, consider the merits of adaptive experiments.

How do we create space to take stock of what we are learning, and how do we ensure that high-potential ideas we have identified are sufficiently resourced and fast-tracked for more rigorous testing? How can we ensure that priors and biases don’t lead us to home in too early on a small set of ideas which we *think* will make a big difference, prompting us to set up many (expensive) trials across a range of sites before we observe results?

Research design and practice require weighing costs and benefits, as well as potential trade-offs. For an organization like PAD, which pursues research and development practice concurrently and in real time, poor decision-making can be both costly and hugely inefficient.

When there is a near-infinite number of ideas and we don’t have a strong sense of which ones will have a large impact, one efficient approach is to start with experiments on many different ideas in different locations (i.e., testing the ideas that seem most relevant for a given context). Over time, ideas that show promising results in one place can be picked up by other teams and adapted or replicated in other settings. Simple tweaks that don’t lead to improved outcomes can be put aside, and ideas that don’t show results but have a solid Theory of Change can be considered for further due diligence, major tweaks, and retesting.

This is similar to the concept of adaptive experiments, an experimental design approach advanced by Max Kasy and Anja Sautmann. Adaptive experiments are implemented as A/B tests with many arms and two or more rounds to enable adaptation. Researchers observe outcomes after the early rounds of testing, reduce the sample sizes allocated to the worst-performing arms, and increase them for the best-performing arms; this process is then repeated throughout the experiment. The key advantage of this design is that it maximizes the number of beneficiaries receiving the best intervention, and it does so empirically and efficiently.
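To make the allocation step concrete, here is a minimal sketch of one allocation rule from this literature, exploration sampling, which tempers Thompson sampling so that sample keeps flowing to all plausible contenders rather than piling onto a single arm. It assumes binary outcomes (e.g., survey completed or not) and independent Beta priors; it is an illustration of the general technique, not the exact specification used in PAD’s experiment.

```python
import numpy as np

def exploration_sampling_shares(successes, failures, n_draws=10_000, seed=0):
    """Assignment shares for the next round of an adaptive, multi-arm experiment.

    successes / failures: one count per arm observed so far (binary outcomes are
    an assumption of this sketch). Arms are given independent Beta(1, 1) priors.
    """
    rng = np.random.default_rng(seed)
    successes = np.asarray(successes, dtype=float)
    failures = np.asarray(failures, dtype=float)
    n_arms = len(successes)

    # Sample each arm's response rate from its Beta posterior and estimate the
    # probability that each arm is the best one (its Thompson probability).
    draws = rng.beta(1 + successes, 1 + failures, size=(n_draws, n_arms))
    p_best = np.bincount(draws.argmax(axis=1), minlength=n_arms) / n_draws

    # Exploration sampling assigns shares proportional to p * (1 - p): arms that
    # are almost surely best or almost surely not best receive less of the next
    # batch, so the experiment keeps learning about the remaining contenders.
    shares = p_best * (1 - p_best)
    total = shares.sum()
    return p_best if total == 0 else shares / total
```

In each round, the returned shares determine what fraction of the next batch of participants is assigned to each arm, and the counts are updated once outcomes come in.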

Some of PAD’s programs – characterized by a large user base and high-quality administrative data on frequent farmer feedback – are well suited to efficient learning through adaptive experiments. We worked with Sautmann and Kasy to implement this approach to test how best to increase response rates to the Interactive Voice Response (IVR) profiling survey for a large digital extension system that PAD has built and manages in a state in India. We were particularly interested in whether warning respondents ahead of time that the call would be robotic would improve farmers’ ability to respond to the IVR profiling survey. We tested whether sending this warning far in advance (24 hours) or close to the time of the call (1 hour before) was more effective, and whether morning (10 am) or evening (6:30 pm) calls yielded higher response rates, for a total of six treatment arms tested over one month.
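Purely for illustration, the snippet below applies the `exploration_sampling_shares` helper sketched above to six hypothetical arms loosely mirroring this design; the arm labels, their exact composition (for example, whether “no SMS” arms were included), and the tallies are invented for the example.

```python
# Hypothetical arms loosely mirroring the design described above (assumed, not
# PAD's actual arm structure), with made-up counts from an early round.
arms = [
    "no SMS, 10 am call",
    "no SMS, 6:30 pm call",
    "SMS 24 h before, 10 am call",
    "SMS 24 h before, 6:30 pm call",
    "SMS 1 h before, 10 am call",
    "SMS 1 h before, 6:30 pm call",
]
completed = [40, 28, 55, 37, 62, 41]            # usable survey responses per arm
not_completed = [260, 272, 245, 263, 238, 259]  # contacted, no usable response

shares = exploration_sampling_shares(completed, not_completed)
for arm, share in zip(arms, shares):
    print(f"{arm:30s} -> assign {share:.0%} of the next batch of calls")
```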

The experiment stopped when we reached a predetermined total of 10,000 farmers called. The success rate increased with a morning call and an SMS sent one hour beforehand. Some simple tweaks, such as changing the time of day of the call, increased success rates by several percentage points.

However, even with a relatively basic set of questions, the share of usable recordings was quite low, and as a result the ratio of usable profiles to farmers contacted was poor. We have therefore stopped using IVR profiling and continue to profile our farmers using agents operating from PAD’s call center. As we do so, we continue to explore other low-cost and more easily scalable profiling methodologies.

Notwithstanding these challenges, we continue to find the research proposition advanced through adaptive experiments compelling as we work to empower poor people with information efficiently and effectively, at low cost and at scale. Watch this space!

Read more about Anja Sautmann and Max Kasy’s experience implementing adaptive experiments with PAD >>> VoxDev