
Winners of Heart + Mind Donations Contest


Earlier I ran a contest for two free copies of More Than Good Intentions. The quick summary: we ran a randomized trial to test whether adding statistics, and worse yet scientific evidence, to a standard emotional appeal would raise more or less money in a direct mail solicitation for Freedom from Hunger (a charity I respect and do research with). We split the analysis by size of prior gift. The Freakonomics contest then asked two questions: how the scientific-evidence letter would change the response rate among small prior donors, and how it would change it among large prior donors.
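For concreteness, here is a minimal sketch (in Python) of the kind of stratified random assignment that sits behind a mailing experiment like this one. The donor records, the $100 small/large cutoff, and the treatment labels are invented for illustration; this is not the actual Freedom from Hunger design.

```python
# Minimal sketch: randomize prior donors between the standard emotional appeal
# and the version with scientific evidence added, stratifying by size of prior
# gift so each stratum gets a balanced split. All records and the $100 cutoff
# are hypothetical.
import random

donors = [
    {"id": 1, "prior_gift": 25},
    {"id": 2, "prior_gift": 40},
    {"id": 3, "prior_gift": 250},
    {"id": 4, "prior_gift": 500},
    # ... a real mailing list would have thousands of records
]

random.seed(0)  # reproducible assignment

def assign(donors, cutoff=100):
    """Stratified random assignment: half of each stratum gets the evidence letter."""
    assignments = {}
    for stratum in ("small", "large"):
        group = [d for d in donors
                 if (d["prior_gift"] < cutoff) == (stratum == "small")]
        random.shuffle(group)
        half = len(group) // 2
        for d in group[:half]:
            assignments[d["id"]] = ("emotional_only", stratum)
        for d in group[half:]:
            assignments[d["id"]] = ("emotional_plus_evidence", stratum)
    return assignments

print(assign(donors))
```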

Before I get to the punchline, there's an interesting result in the guesses themselves. In part, this contest was inspired by the idea of creating a prediction market for randomized trials (or for any sort of well-defined scientific test, for that matter). This contest was not that, but it did give me a simple way of learning what 73 individuals thought would happen.
The average guess for the small prior donors is an increase of 2.02 percentage points, i.e., people on average thought that mentioning scientific evidence would increase the response rate by 2.02 percentage points.
The average guess for the large prior donors is a smaller increase of just 1.37 percentage points.
So on average, people thought that mentioning scientific evidence would increase the likelihood that someone gives, and that small prior donors would respond more positively than large prior donors.
The answer is the opposite.
Here are the results:
Small prior donors were 0.9 percentage points less likely to give when presented with scientific evidence (statistically significant at the 90% level). The winner is Will H, who guessed -1.0 percentage points, with a reason that I think is dead-on too.
Large prior donors were 3.54 percentage points more likely to give when presented with scientific evidence (significant at the 99% level). The winner here is Helen Law, who won by a coin toss over Peter, who also guessed +4.0 percentage points.
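For readers curious what "significant at the 90% level" or "at 99%" refers to, here is a minimal sketch of a two-proportion z-test on the response rates of two mailing arms. The response rates and sample sizes below are hypothetical, chosen only so the numbers resemble the small-donor result; they are not the actual mailing data.

```python
# Minimal sketch of a two-proportion z-test on response rates.
# All counts below are hypothetical, not the actual Freedom from Hunger data.
from math import sqrt, erf

def two_proportion_ztest(success_t, n_t, success_c, n_c):
    """Return (treatment-minus-control rate difference, z statistic, two-sided p-value)."""
    p_t, p_c = success_t / n_t, success_c / n_c
    p_pool = (success_t + success_c) / (n_t + n_c)          # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_t + 1 / n_c))  # standard error of the difference
    z = (p_t - p_c) / se
    p_value = 1 - erf(abs(z) / sqrt(2))                     # two-sided normal p-value
    return p_t - p_c, z, p_value

# Hypothetical small-prior-donor arm: evidence letter responds at 5.1%, control at 6.0%
diff, z, p = two_proportion_ztest(success_t=204, n_t=4000,
                                  success_c=240, n_c=4000)
print(f"effect = {100 * diff:+.2f} pp, z = {z:.2f}, p = {p:.3f}")
# -> effect = -0.90 pp, z = -1.76, p = 0.079  (significant at 90% but not 95%)
```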
In fairness to the collective wisdom of the readers, even though the average prediction got it backwards, if I tally the simple vote on whether large or small donors would respond more positively, more people voted with large donors (i.e., the right answer): 34 out of 65 (52%) versus 23 out of 65 (35%).
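How can the mean point one way and the majority the other? A few readers who guessed very large effects for small donors can pull the average up even while most individual guesses favor large donors. A toy illustration, with invented guesses rather than the contest's actual entries:

```python
# Invented (small_donor, large_donor) guesses in percentage points, one pair per
# reader. These are NOT the contest's actual 73 entries.
guesses = [
    (12.0, 1.0), (8.0, 1.0),             # two very bullish guesses for small donors
    (0.5, 2.0), (0.5, 2.0), (0.5, 2.0),  # most readers lean toward large donors
    (0.5, 2.0), (0.5, 2.0),
]

avg_small = sum(s for s, _ in guesses) / len(guesses)
avg_large = sum(l for _, l in guesses) / len(guesses)
print(f"average guess, small donors: {avg_small:+.2f} pp")  # +3.21 pp
print(f"average guess, large donors: {avg_large:+.2f} pp")  # +1.71 pp

# The average says small donors respond more, but a head-to-head vote disagrees:
votes_large = sum(l > s for s, l in guesses)
votes_small = sum(s > l for s, l in guesses)
print(f"votes for large donors responding more: {votes_large} of {len(guesses)}")  # 5 of 7
print(f"votes for small donors responding more: {votes_small} of {len(guesses)}")  # 2 of 7
```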
One commenter, Aaron Miller, made a key point to keep in mind when deciding how much stock to put in these results for planning fundraising at other charities: Freedom from Hunger is known among its supporters and in the microfinance world for being more focused than most on using evidence and research to guide its programs. The data even support this empirically: to the best of my knowledge, they have done more randomized trials than any other microfinance organization. And they were the first to attempt a randomized trial on microcredit (as far as I know), well before Innovations for Poverty Action (IPA) and the M.I.T. Jameel Poverty Action Lab got started.
Aaron's point is a larger one about external validity, and it applies to any empirical exercise. With some theory on why people give, and more replication (something not rewarded enough in academia, and one of the motivations for starting IPA), we can and should tackle this. Personally, I'm very interested in how we can help donors use evidence on effectiveness without turning off the emotional triggers that inspire us to give, and I'm always looking for nonprofit organizations game to run tests that help us learn how to do this.
Anyway, thanks for participating, everyone. This was fun.

