Wednesday, December 15, 2010

The uncanny accuracy of European public opinion on the amount of foreign aid that governments give

Ok, this is probably the last post on this topic for a while. But a student (thanks Andrew!) put some of the data on European perceptions of how much foreign aid their governments give (from Eurobarometer 50.1, 1999) into nice electronic form, and I was able to calculate the modal response exactly. And really, the results surprised me: European public opinion turns out to be uncannily accurate on this question, far more so than American opinion, to the point where I wonder whether the results discussed in this post are not simply driven by the way the question is asked in the US. The accuracy of European public opinion on this topic actually looks like a striking confirmation of the models of "information aggregation" I invoked earlier: when signals are unbiased, public opinion should converge on the true answer.

The question Eurobarometer 50.1 asked is: "We are not talking about humanitarian aid, that is assistance provided in emergency situations, like wars, famine, etc, but about development aid. Do you think the (NATIONALITY) government helps the people in poor countries in Africa, South America, Asia, etc to develop? (IF YES) Roughly how much of its budget do you think the (NATIONALITY) government spends on this aid?"

The potential answers are:


1 No
2 Yes, less than 1%
3 Yes, between 1 and 4%
4 Yes, between 5 and 9%
5 Yes, between 10 and 14%
6 Yes, between 15 and 19%
7 Yes, between 20 and 24%
8 Yes, between 25 and 29%
9 Yes, 30% or more
10 Yes, but I do not know the percentage (SPONTANEOUS)
NSP No response/Don't know

The correct response is coded 3, between 1 and 4%.

So how did Europeans do in 1996-1998?

Their answers are collected in this table. As you can see, on average about 40-45% of Europeans say they don't know how much aid their governments give (though only about 20% don't know if their governments give any aid, or refuse to answer; another 20% say they think their governments give ODA (official development assistance), but don't know how much), and only about 16% give the correct response. So most Europeans seem to lack knowledge of how much ODA their governments give. (Though note the variance: the vast majority of Danes claim to know that their government gives aid, and something like 40% of them give the correct response).

But this is the wrong metric to focus on. To determine how accurate aggregate public opinion is, we have to do something like what Francis Galton did when he asked people at a country fair to estimate the weight of an ox, and calculate the median response among those who claim to know the answer (roughly, this is the answer that would emerge from a "democratic" vote). And here the results are quite different. In this table, I've included only the answers of people who claim to know the actual percentage of the budget given by European governments as ODA (the number represents the percentage of respondents who claim to know how much money their governments give as ODA), as well as their average and median responses. And Europeans get it exactly right: the median answer in both 1996 and 1998 was precisely 3 (the correct answer). The median in most countries was also very close to the truth: Germans and Belgians overestimate the amount of aid they give (their median answer is 4, meaning between 5 and 9% of the budget; perhaps Germans suffer from a status effect and Belgians have Brussels?), whereas Greece, Spain, Finland, and Sweden (and Italy in 1998) slightly underestimate the amount of aid they give.
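The Galton-style aggregation is simple enough to sketch. Here is a minimal illustration in Python, using an entirely hypothetical distribution of answer codes for one country (the counts are invented, not Eurobarometer data; codes follow the scale above, where 3 means "between 1 and 4%"):

```python
# Illustrative sketch (hypothetical counts): aggregate coded survey
# responses Galton-style by taking the median among respondents who
# claim to know the percentage (codes 2-9 on the Eurobarometer scale).
from statistics import median

# Hypothetical distribution: answer code -> number of respondents
counts = {2: 120, 3: 310, 4: 180, 5: 90, 6: 40, 7: 20, 8: 10, 9: 10}

# Expand into a flat list of individual responses
responses = [code for code, n in counts.items() for _ in range(n)]

# The "democratic" collective answer is the median response
print(median(responses))  # -> 3, the correct answer on this scale
```

Even though only a minority picks the correct code, the median of the claimed-knowledge responses can still land exactly on it, which is the point of the aggregation argument.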

So, collective opinion in the EU, in 1996-1998, "knew" the right answer to the question that seems to stump Americans. I wonder if the problem of bias in American estimates of ODA today is caused by the way the question is asked in PIPA's survey? Would Americans display such a large bias if the question of Eurobarometer 50.1 were asked of them?

[update: fixed some typos and other minor problems for the sake of clarity, 12/15/2010]

On the idea of Tolerable Outcomes (Epistemic Arguments for Conservatism V)

What does it mean for an institution to be associated with “tolerable” outcomes over a period of time? The question is more subtle than I thought at first; under prompting from a friend who commented on the paper I am writing, here’s a stab. (For an introduction to this series, see here; for all the other posts, click here; the idea of “tolerable” or "ok" outcomes is used here, here and here).

The first problem is to determine the sense in which we might say that some outcome (or some set of circumstances) is “tolerable.” One promising idea identifies tolerable outcomes with those outcomes that do not involve “clear evils.” By a “clear evil” I mean the sort of thing that all (reasonable?) individuals could identify as an evil: slavery, genocide, etc. (Though then, of course, we have the problem of sorting out the reasonable from the unreasonable; see here Estlund’s Democratic Authority). Some evils are not clear in this sense: reasonable individuals (in the Rawlsian sense of the term) might disagree about their importance, or their identification as an evil, given their (differing) beliefs about justice and the good.

A more problematic, but more substantive sense of “tolerable,” identifies tolerable outcomes with those outcomes that are above some threshold of badness on some substantive scale. Here the idea is not that some evils are necessarily clear in the sense discussed above, but that the determination of which evils are tolerable and which are not is an “easier” problem than the determination of which goods make a society optimal or fair or just, for example. Even if reasonable people disagree about whether, for example, persistent poverty is a tolerable evil, the conservative can still argue that determining whether persistent poverty is a tolerable evil is an “easy” problem relative to, for example, determining whether an egalitarian society is justified. (Perhaps the majority of people believe that poverty is a tolerable evil, while slavery is not; if we assume that the majority of people have some warrant for these beliefs, then the belief that persistent poverty is a tolerable evil might be epistemically justified, even if some reasonable individuals disagree). 

Taking some criterion of “tolerability” as given, a second problem emerges: institutions are associated with outcomes over time. Should a conservative discard any institution that is associated with even a single intolerable outcome? Or should the conservative somehow “average” these outcomes over time, or “discount” past outcomes at a specific rate?

For an example, consider the basic institutions of liberal democracy. If we look, say, at the institutions of the Netherlands or Sweden since 1960, we could easily agree that these institutions have been associated with tolerable outcomes since then, in the sense that they do not seem to have been associated with clear evils (or to have produced them, though by assumption we cannot tell whether outcomes associated with these institutions were produced by them).

But now consider the entire history of relatively liberal institutions in the USA since the late 18th century.  These institutions were not always associated with tolerable outcomes; they were in fact associated with slavery and ethnic cleansing, which count as clear evils if anything does, and with many other evils besides (aggressive war and colonialism among them). But at the time they were also not the same institutions as today; there has been a great deal of institutional change in the USA. Though the basic structure of the institutions, as specified in the US constitution, has not changed that much – e.g., we still have competitive elections, two legislative chambers with specific responsibilities, an executive, a relatively independent judiciary, a bill of rights, etc. – the actual workings of these institutions, the associated circumstances under which they operate, and the expectations that shape their use have changed quite a bit. Suffrage was extended to all adult males; then it was extended to women in the early 20th century. Slavery was abolished. The regulatory powers of the Federal government expanded. The country industrialized. And so on. Since (by assumption) we do not know which aspects of American institutions and circumstances produced clear evils and which aspects and circumstances did not, we cannot in general answer the question of whether liberal institutions in the USA have produced tolerable outcomes in all past circumstances; at best, we can say that American institutions that are in some ways similar to existing institutions were associated with intolerable (not ok) outcomes in the past.

What might a conservative say to this? One possibility would be for the conservative to have a particular “discount rate” for the past: the further back in the past an outcome is associated with an institution, the less it is to “count” towards an evaluation of whether the institution is to be preserved, on the assumption that the further back in time we go, the less we are talking about the same institutions. Early nineteenth century American institutions were only superficially similar to modern American institutions, on this view; and so the outcomes associated with them should be discounted when we consider whether or not American institutions should have “epistemic authority.”

The problem with this is that the smaller the discount rate is, the more intolerable outcomes it will "catch," so that the conservative is forced to discard almost all institutions. With a small discount rate, the conservative is forced to argue that American institutions should not, in general, be given the benefit of the doubt, since they (or similar enough institutions) have produced intolerable outcomes. But with a large discount rate, the conservative can be far less confident that the institutions in question will be associated with tolerable outcomes in the future, since he has less evidence to go on. So the conservative faces a sort of evidence/discount-rate tradeoff: the conservative position is most powerful the more evidence we have of the association of institutions with tolerable outcomes; but the more evidence we have of outcomes, the more likely it is that some of these will be intolerable, forcing the conservative to argue for changes.

(In more formal terms: consider the series of states of the world {X1, …, Xi, …, Xn}, associated with the institutions {I1, …, In}. For each Xi, we know whether it represents a tolerable or an intolerable outcome, and we know that it was associated with Ii, though we do not know whether Ii produced it. Suppose all intolerable outcomes are found in the past (i.e., in the series {X1, …, Xi}, where i is less than n). Suppose also that our confidence that institution In (today's incarnation of the institution) is similar enough to institution Ii decreases according to some discount rate d. The larger d is, the smaller the series of states that can serve as evidence that In will be associated with tolerable outcomes in the future; but the smaller d is, the more likely it is that the evidential series of states will include some states in the series {X1, …, Xi}.)
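The tradeoff can be made vivid with a toy numerical version, sketched below in Python. All the numbers are invented for illustration: outcomes are a 0/1 series with the intolerable ones (0) concentrated in the distant past, and evidence from t periods ago is weighted by (1 - d)^t for a discount rate d.

```python
# Toy model of the evidence/discount-rate tradeoff (invented numbers).
n = 200            # length of the historical record
bad_until = 60     # outcomes before this index are intolerable
outcomes = [0] * bad_until + [1] * (n - bad_until)  # 1 = tolerable

def evidence(d):
    """Return (total discounted evidence, discounted weight of
    intolerable outcomes caught in that evidence)."""
    # Outcome at index t happened (n - 1 - t) periods ago
    weights = [(1 - d) ** (n - 1 - t) for t in range(n)]
    total = sum(weights)
    bad = sum(w for w, x in zip(weights, outcomes) if x == 0)
    return total, bad

for d in (0.005, 0.05, 0.5):
    total, bad = evidence(d)
    print(f"d={d}: evidence={total:.1f}, intolerable share={bad/total:.3f}")
```

With a small d, the evidence base is large but a substantial share of its weight falls on intolerable past outcomes; with a large d, the evidence is "clean" but there is very little of it. That is exactly the dilemma the conservative faces.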

What do people think?

Tuesday, December 07, 2010

One hypothesis weakened

In an earlier post I wondered about the sensitivity of estimates of US foreign aid to the definition of foreign aid; if people included "military involvement" as foreign aid, then their estimates would be biased upwards. But apparently the good people at PIPA already thought of this in an earlier poll (Thanks Andrew, for doing what I was too lazy to do!):
Some have wondered whether the high estimate of foreign aid spending is due to Americans incorrectly including in their estimates the high costs of defending other countries militarily. To determine if this was the case, in June 1996 PIPA presented the following question: US foreign aid includes things like humanitarian assistance, aid to Israel and Egypt, and economic development aid. It does not include the cost of defending other countries militarily, which is paid for through the defense budget. Just based on what you know, please tell me your hunch about what percentage of the federal budget goes to foreign aid. Despite this clarification, the median estimate was 20% and the mean 23%.
Europeans, however, do appear to produce less biased estimates of foreign aid than Americans:
When Europeans are asked how much the government spends on overseas aid from the national budget, approximately one third of respondents do not know. Another third will choose between 1-5 per cent and 5-10 per cent. The smallest proportion will mention less than one per cent. The consistent trend across OECD countries is to overestimate the aid effort.
The figures cited appear to be from this report, though the question is not exactly comparable. Most citizens admit they don't know (57% or so). Here's a table (click for larger size):
The correct response is "around 100 Euros per European citizen." (Based on the figures in the table, however, it looks like most Europeans actually underestimate the amount of foreign aid the EU gives - which does not support the conclusion of the other report. I wonder what the results would be if the question were asked in these terms in the USA). Anyway, it seems like the evidence is inconsistent with the hypothesis that high foreign aid estimates are driven by the inclusion of military spending in the results, though the fact that European populations do produce lower estimates of aid spending (even though the questions are not exactly comparable) does suggest that perhaps military spending plays a small role.

Another option: perhaps this is driven in part by national status? "High status" (powerful) countries will tend to have a self-image that includes lots of aid to others. But disaggregated figures for all the EU countries do not appear to be easily available to test this sort of thing (e.g., maybe France, Britain, and Germany produce more incorrect estimates than small, peripheral countries like Latvia and the Czech Republic).

[Update 12/8/2010 - thanks again Andrew: A 1999 Eurobarometer report (p. 11) notes that "Approximately a quarter of Europeans thinks that their government actually contributes to development aid, but does not feel well enough informed to say how much. The largest proportions of votes go to the categories « Between 1 and 4% » (14%, -2 since 1996) and « Less than 1% » (10%, -2). Europeans are not far from reality when they make this choice." The question asked then was "We are not talking about humanitarian aid, that is assistance provided in emergency situations like wars, famine, etc, but about development aid. Do you think the (NATIONALITY) government helps the people in poor countries in Africa, South America, Asia, etc to develop? (IF YES) Roughly how much of its budget do you think the (NATIONALITY) government spends on this aid?" The correct answer is "between 1 and 4%". If I'm reading the accompanying table right, Denmark, Finland and Sweden give especially accurate answers - around 40% of people in Denmark give the correct answer.]

The Robustness or Resilience Argument in Practice: Noah Millman vs. Jim Manzi (Epistemic Arguments for Conservatism IV.55)

Noah Millman and Jim Manzi over at The American Scene (and Karl Smith at Modeled Behavior) have been debating the degree of deference we should give to economic science when considering what governments should do about a recession. Manzi emphasizes the large degree of uncertainty and difficulty attendant on any attempt to determine whether a particular policy actually works, and he is right about this: we do not know very well whether any policy intervention actually works (or worked), given the enormous number of potentially confounding variables. Lots of econometric ink is spilled trying to figure out this problem, but the problem is intrinsically hard, given the information available. By contrast, knowledge in physics or chemistry is far more certain, since it can be established by means of randomized experiments that are easily replicated. So, Manzi argues, we should give less deference to economists than we do to physicists when making decisions. Millman sensibly points out that the relevant analogy is not to physics or chemistry but to something like medicine. The knowledge produced by medical science is hard to apply in practice, and doctors base their treatment decisions on a combination of customary practice, experience, and some limited experimental and observational evidence. In particular cases, then, medical practice offers at best an informed guess about the causes of a disease and the best course of action. But Millman argues that this does not undermine the epistemic authority of medicine: in case of sickness, we should attend to the advice of doctors, and not to the advice of nonexperts.

I think Manzi’s argument would be more compelling if it were put as a robustness or resilience argument (discussed previously here and here). Consider first the case of medicine. If we get sick, we have three basic options for what to do: heed the advice of doctors, heed the advice of non-experts, and do nothing. It seems clear that heeding the advice of non-experts should (normally) be inferior to heeding the advice of doctors. But is heeding the advice of doctors always epistemically preferable to doing nothing? (Or, more realistically, to discounting the advice of doctors based on one’s own experience and information about one’s body). The answer to this question depends on our estimation of the potential costs of medical error vs doing nothing. Because medical knowledge is hard, doctors may sometimes come up (by mistake) with treatments that are actively harmful; in the 18th century, for example, people used “bleeding” as a treatment for various diseases, which may have been appropriate for some things (apparently bleeding with leeches is used effectively for some problems), but probably served to weaken most sick people further. At any rate, we may not know whether a treatment works or not any better than the doctor; all we know is that people treated by doctors sometimes die. If our estimate of medical knowledge is sufficiently low (e.g., if we think that in some area of medical practice medical knowledge is severely limited), our estimate of the potential costs of medical error sufficiently high (we could die), and our experience of what happens when we do nothing sufficiently typical (most illness goes away on its own, after all: the human immune system is a fabulously powerful thing, perfected to a high degree by millions of years of evolution!) it may well be the case that we are better off discounting medical advice for the sake of doing nothing. 
Of course, atypical circumstances may result in us dying from lack of treatment; that is one of the perversities to which this sort of argument may give rise. But given our epistemic limitations (and the epistemic limitations of medicine), there may be circumstances where “doing something” is equivalent to doing something randomly (because the limitations on our medical knowledge are so severe), and so we may be (prospectively) better off doing nothing (i.e., tolerating some bad outcomes that we hope are temporary, since our bodies have proven to be resilient in the past).

Consider now the case of a government that is trying to decide on what to do with respect to a moderately severe recession. Here the government can do nothing (or rather, rely on common sense, tradition, custom and the like: i.e., do what non-experts would do), heed the advice of professional economists (who disagree about the optimal policy), or heed the advice of some selected non-economists (or the advice of some mixture of economists and non-economists). When is “heeding the advice of economists” better than “doing nothing,” given our epistemic limitations? And when is “heeding the advice of non-economists” better than “heeding the advice of economists”?

We know that the current architecture of the economic system produces recessions with some frequency, some of which seem amenable to treatment via monetary policy (whenever certain interest rates are not too close to zero), some of which appear to be less so (these are atypical), but in general produces long-run outcomes that seem tolerable (not fair, or right, or just: merely tolerable) for the majority of people (there are possible distributional concerns that I am ignoring: maybe the outcomes are not tolerable for some people). The system is robust for some threshold of outcomes and some unknown range of circumstances: it tends to be associated with increasing wealth over the long run, though it is also associated with certain bad outcomes, and we do not know if it is indefinitely sustainable into the future (due to environmental and other concerns). We also know that there is some disagreement among economists about what is the optimal policy in an atypical recession (which suggests that there are limits to their knowledge, if nothing else). If we think that the limits on economic knowledge are especially severe for some area of policy (e.g., what to do in atypical recessions), historical evidence suggests that sometimes economists may prescribe measures that are associated with intolerable outcomes (e.g., massive unemployment, hyperinflation, etc.), and we think that most recessions eventually go away on their own, we may be justified in doing nothing on epistemic grounds. In other words, if we think that for some area of policy economists’ guesses about optimal policy are not likely to be better than random, and carry a significant risk of producing intolerable outcomes, then conservatism about economic policy is justified (doing what custom, tradition, etc. recommend, and heavily discounting the advice of economists).

But these are big ifs. Suppose that the epistemic limitations of economic science are such that most policy interventions recommended by professional economists have a net effect of zero in the long run; that is, economists recommend things more or less randomly, some good, and some bad, but in general tend not to recommend things that are very bad for an economy (or very good for it). (Historical evidence may support this; “Gononomics” is something of an achievement, not necessarily something common). In that case, we are probably better off heeding the advice of economists (and gaining the experience of the results) than doing nothing (and not gaining this experience); there may not be exceedingly large costs from heeding economic advice, but there may not be very large benefits either, and the result will still be “tolerable.” (At the limit, this sort of argument suggests that we ought to be indifferent about almost any policy intervention, so long as we have reasonable expectations that the outcomes will still be tolerable). Moreover, distributional concerns may dominate in these circumstances; doing nothing has a distributional cost that is passed to some particular group of people (e.g., the unemployed), so we may have reason to be concerned more about distribution than about long-run economic performance. And much depends on our estimates of the epistemic limitations of economic science: sure, economics is not like physics, but is it more like 20th century medicine, or more like 17th century medicine? (And the answer to this question may be different for different areas – different for macroeconomics than for microeconomics, for example).

Monday, December 06, 2010

Why are estimates of US foreign aid so biased?

A number of people have pointed to the latest reiteration of the fact that Americans do not appear to know what percentage of the budget goes to foreign aid. The median guess is 25% of the total budget, which is far higher than the actual 0.6%. Moreover, as far as I know, for as long as this question has been asked (since 1995), Americans have always hugely overestimated the percentage of the budget that goes to foreign aid; according to PIPA, the median guess has been about 20%. More educated people guess a bit lower, and less educated people a bit higher, but they mostly err on the high side. But why? As I mentioned in an earlier post, if people estimate such quantities on the basis of unbiased signals, they should converge on the true answer. So what is the source of this bias?

Eric Crampton suggests that voters count a lot of military spending as "foreign aid." This strikes me as plausible. Voters do not have in mind the same technical definition of "foreign aid" that the budget wonks use; they mostly see a large degree of involvement by the US in various countries, some of it justified on "nation building" grounds, which they can easily classify as "foreign aid/involvement." (These are the "signals" that they use to estimate the total amount of aid). And indeed the military accounted for about 23% of federal spending in FY2009 (a bit less this year), depending on how you count, which is close enough to the public guess for "foreign aid."

How would we know if this is what is going on? I wonder if answers to the question fluctuate in ways that are more or less correlated with the foreign wars of the US. Are answers to the question lower in times of peace? (I am too lazy to download the data and crunch it myself. But perhaps some enterprising soul could do it.) Also, has this question been asked in other countries, and does the magnitude of the bias remain constant? Or are the publics of countries with fewer foreign entanglements in war more likely to offer lower guesses of the amount of foreign aid spent? (If anybody kindly points me to easily downloadable data on this, I will make some graphs). I would also like to see a poll that asks this question but primes recipients by explicitly indicating that they are not to count military spending as foreign aid. (E.g., "Just based on what you know, please tell me your hunch about what percentage of the federal budget goes to foreign aid, not counting money spent by the military.") This may well produce a biased estimate, but would it be as biased as the current one? Has some enterprising public opinion researcher asked this question or something similar before?

And I would like to see the question asked in terms of the absolute number of dollars spent. (E.g., "Just based on what you know, please tell me your hunch about how many billions of dollars the Federal government spends on foreign aid, [not counting money spent by the military]."). Would the estimates be similarly biased upwards? I have a hunch that they might even be biased downwards, and also suspect that asking the question in terms of percentages limits guesses to a degree of coarseness that produces biased estimates. (Foreign aid is 0.6-2.6% of the budget, depending on how you calculate it. Assume people guess the true number based on relatively unbiased signals from the news, including perhaps signals about foreign military involvement, but their guesses are made in 1% increments. Since 0% is an implausible guess, the smallest guess would be 1%, which would inevitably bias the collective estimate upwards, though not necessarily nearly as much as the current estimate. Is this idea too harebrained?)
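The coarseness idea is easy to sanity-check with a quick simulation (all parameters here are invented for illustration): draw unbiased continuous signals around a true value of 0.6% of the budget, but force respondents to report in whole-percent increments with 1% as the smallest plausible guess.

```python
# Quick sanity check of the "coarse guesses" idea (invented parameters):
# unbiased signals around a true share of 0.6%, reported in whole
# percents with a floor of 1%, yield an upward-biased collective guess.
import random
from statistics import median

random.seed(0)
TRUE_SHARE = 0.6  # true percent of the budget

guesses = []
for _ in range(10_000):
    signal = random.gauss(TRUE_SHARE, 0.5)  # unbiased noisy signal
    guess = max(1, round(signal))           # whole percents, floor at 1%
    guesses.append(guess)

# Collective (median) estimate ends up above the 0.6% truth
print(median(guesses))
```

The median guess comes out at 1%, above the 0.6% truth, purely as an artifact of the response grid. The effect is real but small; it cannot by itself explain a 20-25% median guess, which is consistent with the caveat above.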

Another possibility is that answers to this question do not reflect factual beliefs, but rather what Julian Sanchez once called "symbolic beliefs." Here the idea would be that respondents interpret the question as a question about the evaluation of US commitments abroad. The high guesses merely mean "the US spends too much on foreign entanglements," and the 10% median answer to the question of how much the US should spend merely says something like "whatever it is, halve it." On this view, voters do not really believe that the US should spend 10% on foreign aid, only that it should spend less; educating them about the true amount that the US spends would have only a limited impact on their apparent misperceptions (though could education increase the amount that voters are willing to spend on foreign aid, maybe not to 10%, but perhaps to 3%?). There would be reason to suspect that this is the case if, as Robin Hanson notes, we never see politicians run on increasing foreign aid, even though they could conceivably explain to voters that the US actually spends very little on non-military foreign aid.

Could this sort of "symbolic" belief ever be consistently corrected? It would not do to simply tell the voters that the actual value of "foreign aid" is less than 1% of the budget; they might simply adjust their views to say that it should be less, or redefine "foreign aid" to include all sorts of things that the budget analyst would not include (like military spending). Even if the belief were truly a factual and not a symbolic belief, mere provision of information would not necessarily change it: these sorts of quantities are estimated on the basis of signals from the social world of the voters, not merely on the basis of remembered (or misremembered) facts. Since signals are constantly received but mere factual information is not, unless you change the bias in the signals, the public will continue to overestimate "foreign aid" (whatever they actually mean by this).

Other ideas?