"Uncontrolled" by Jim Manzi

"Uncontrolled: The Surprising Payoff of Trial-and Error for Business, Politics, and Society" has been getting a lot of interesting press. (You can find a roundup, chosen by Manzi himself, here.) The reviews made the book sound quite thoughtful, so I decided to read it for myself.

Writing in clear, straightforward prose, Manzi starts by explaining his thesis: that because non-experimental social science is unable to generate useful predictions of the effects of policy proposals, social scientists could improve the utility of their work by conducting more experiments. More specifically, he argues that we should establish mechanisms, analogous to the National Institutes of Health or the National Institute of Justice, to design and interpret randomized experiments in social policy.

Manzi opens with the 'evolutionary epistemology' theory of the American social scientist Donald T. Campbell: "any complex system, such as our society, evolves beliefs about what practices work by layering one kind of trial-and-error learning upon another." He then backs up, taking a tour of the history and philosophy of science in the first third of the book. It's a useful introduction (or review, for philosophy majors) covering the problem of induction, Aristotle, Bacon, and Hume, and then moving ahead through Karl Popper and Thomas Kuhn. Manzi makes and extends an analogy between the enterprise of science and the workings of markets. Both, he says, abjure authority and encourage competition, eliminate failures through trial and error, are subject to an invisible-hand phenomenon, seek predictive rules (science) or prices (markets), encourage conflict bounded by clear rules, and need resources from the public (funds for science, regulations for markets).

The next third of the book is devoted to expanding his argument. Science is of course not a monolith, and Manzi writes about predictions both in sciences where controlled experiments are impossible (geology, astronomy) and in increasingly complex ones such as the biology of the human body. The social sciences are more complex still, and we are very far from identifying any biological basis for the causal effect of a social program. Yet, he points out, some questions cannot be subjected to experiment but need decisions nonetheless.

So how do we do this? First, Manzi takes apart a couple of claims that have been widely disseminated in recent years, including Larry Bartels' 2008 claim that income inequality increases under Republican presidents, and Steven Levitt and Stephen Dubner's claim in their 2005 book "Freakonomics" that the legalization of abortion in the 1970s was responsible for the drop in crime rates in the 1990s. Perhaps this is why the political right has been so drawn to this book? (I haven't read Bartels' book, but I have read "Freakonomics.") Human society is so complex, he concludes, that social scientists just don't have the tools to make broad, general claims.

But there is a way, one pioneered by business and expanded in this era of big data: randomized experiments that test proposals under a variety of conditions (locations, items for sale, time of day, and so on). Experimental methods, carefully used, can work in business. But, he notes, just as in science, most business innovations fail, and those that succeed tend to deliver incremental improvements rather than strategic breakthroughs.
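
To make the mechanics concrete, here is a minimal sketch (mine, not Manzi's) of the kind of randomized test he describes: visits are randomly assigned to a control or treatment condition, and the estimated effect is simply the difference in mean outcomes. All names and numbers here are hypothetical.

```python
import random
import statistics

def estimated_lift(control, treatment):
    """Difference in mean outcomes between treatment and control groups."""
    return statistics.mean(treatment) - statistics.mean(control)

# Hypothetical example: randomly assign 1,000 store visits to the current
# promotion (control) or a new one (treatment), then record simulated spend.
random.seed(42)
control, treatment = [], []
for _ in range(1000):
    if random.random() < 0.5:                      # the randomization step
        control.append(random.gauss(20.0, 5.0))    # simulated spend, control
    else:
        treatment.append(random.gauss(20.5, 5.0))  # simulated spend, treatment

print(f"Estimated lift per visit: ${estimated_lift(control, treatment):.2f}")
```

Because assignment is random, the two groups differ only by chance, so the difference in means is an unbiased estimate of the new promotion's effect.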

Essentially, Manzi is saying that the scientific method developed by Bacon and refined by later thinkers is a model that can, with a lot of strictures, be applied to social science. He points to a few large-scale randomized experiments that have taken place in social policy, in particular the HighScope Perry Preschool study. He also reports on a wide variety of welfare reform experiments in the 1990s, before the passage of Temporary Assistance for Needy Families (TANF) legislation in 1996. But large-scale social science experimentation, he says, requires remembering some lessons from the experience of business and the scientific method: namely, that randomized, controlled experiments with replication can allow us to draw conclusions that can be used for policy decisions. And methodological humility requires us to remember three things: most new programs do not work; programs that change incentives are more likely to work than those that train; and, critically, there is no magic bullet (to which I would add that nothing will work every time, in every circumstance).
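
And here is a small illustration, again my own sketch rather than Manzi's, of why replication matters: if a hypothetical program has no real effect, any single randomized trial can still show a sizable effect by chance, and repeating the experiment across sites exposes the noise. All numbers are made up.

```python
import random
import statistics

def site_estimate(true_effect, n=200, noise=5.0):
    """Run one simulated site-level randomized trial; return the estimated effect."""
    control = [random.gauss(0.0, noise) for _ in range(n)]
    treated = [random.gauss(true_effect, noise) for _ in range(n)]
    return statistics.mean(treated) - statistics.mean(control)

# A program with zero true effect, replicated at ten hypothetical sites:
# individual sites scatter around zero, and the pooled estimate shrinks toward it.
random.seed(7)
estimates = [site_estimate(true_effect=0.0) for _ in range(10)]
print("Site-level estimates:", [round(e, 2) for e in estimates])
print("Pooled estimate:", round(statistics.mean(estimates), 2))
```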

Manzi makes several suggestions as to how we might regulate the market in social policy ideas. One of them, mentioned above, is that, in order to institutionalize social experimentation, the federal government should establish an agency, akin to the NIH, to oversee and fund the design and interpretation of randomized social policy experiments. Manzi is thinking about the big picture, and it's a pretty important suggestion.

I've worked in social services for a long time, and Manzi's comments about the country's approach to them rang true to me. What are now called "evidence-based programs" are becoming the remedy of choice in New York City, at least in some of the fields I've worked in (child welfare, mental health). But as they are implemented, all I'm hearing about are the details: are the new programs adhering to the model? Fidelity to the model is necessary for success, but it's not enough: as Manzi says, we need to look rigorously and carefully at each replication. Manzi offers a useful way of analyzing whether these expensive programs work.
