Thursday

Universal health care - around the world

The screenshot above is a map (via theatlantic.com) of universal health care around the world; the green countries are those that provide it. Notice which large, developed country, despite today's upholding of the Affordable Care Act, is not among them. (Here's another hint: it's in North America.) Here's a link to Max Fisher's column arguing that, aside from the US, the countries that don't provide universal health care are developing nations.

It's a really clear map. Costa Rica, Brazil, Chile, Argentina, Cuba and Sri Lanka are all green . . . but we're not.

And here are some links to charts showing how inefficient our health care spending is and an earlier post about how much we actually spend on health care.

Wednesday

Very helpful New York City Map

The screenshot above is a portion of NYCity Map, an interactive map that brings a huge amount of information into one place; it shows the area around Stuyvesant Square, with the five closest wi-fi hotspots identified. You can search by address, intersection, community district, or zip code. In addition to wi-fi hotspots, you can find greenmarkets, parks, schools, museums, hospitals, and so on. The map links to building and property information, elected official information, and neighborhood information (polling places, hurricane evacuation zones).

And the map has historical information, with photographs from 1996, 1951, and 1924. Here's how the same area looked in 1924:


It's a great resource. Have fun playing with it.

Tuesday

Measuring disease

I realize you may not actually have asked for it, but the chart in the screenshot is one of my favorite charts ever, starting with the name. It illustrates the cover of the New York City Department of Health and Mental Hygiene's annual package of vital statistics (2010 version available here). I like it for its clarity, for the way it puts even the horror of 9/11 into perspective, and for the short history it offers of successfully tackling public health problems.

I was reminded of it when I saw this broader take on theatlantic.com:
The bar chart comes from an interactive graphic from a New England Journal of Medicine article discussing the differences in disease between 1810, when the Journal published its first issue, and 2010. (Nephropathy, according to the Free Dictionary, is any disease of the kidney.) I found the discussion in the NEJM article about the social definitions of disease equally interesting:
Disease is always generated, experienced, defined, and ameliorated within a social world. Patients need notions of disease that explicate their suffering. Doctors need theories of etiology and pathophysiology that account for the burden of disease and inform therapeutic practice. Policymakers need realistic understandings of determinants of disease and medicine's impact in order to design systems that foster health. The history of disease offers crucial insights into the intersections of these interests and the ways they can inform medical practice and health policy.
And measurement, as always, is complex.

Monday

Mixed Metro US


Mixed Metro is a site created by researchers from the University of Georgia, the University of Washington, and Dartmouth College to visualize the racial and ethnic diversity of metropolitan neighborhoods in the US using census data. The picture above is a screenshot of the New York Metropolitan area. The site includes a set of maps of the 53 largest metro areas in the US and maps of all 50 states.

Two other interesting features are overlays showing census tracts with high numbers of mixed-race households, and what the researchers call "transition matrices" tallying the numbers of census tracts that changed classification during the decade. The matrices, which are not as successful as the maps, also include counts of the tracts that did not change. Take a look. Let me know if you are better at reading the transition matrices than I am.

The Atlantic Cities blog took the maps a step further, arguing that they show continuing segregation in cities even as the cities diversify.

Thursday

Visualizing Economics

Here's a link to an interesting website, Visualizing Economics, whose author, Catherine Mulbrandon, specializes in developing graphics illustrating the US economy. Here's her graph showing top marginal tax rates, 1916-2011, also available as a poster:


Each post contains links to similar data under the heading "you might like." It's a good site to explore.

Wednesday

Making the case for data collection and analysis in the criminal justice system

Given the vast resources our criminal justice system consumes - the courts, the corrections system, and the police work required after the point of arrest - it's astonishing that we haven't invested in figuring out how to make the system more efficient. Yesterday, for example, NPR reported that Illinois can no longer afford to operate its maximum-security prison.

Anne Milgram makes the case for data collection and analysis in today's Atlantic. She says:

The evidence is stark: Each year, there are approximately 13 million admissions to local jails, and, according to the FBI, fewer than 5 percent of arrests are for violent crimes. Yet many of those arrested are kept in jail for long periods even before they are convicted. According to Department of Justice (DOJ) data from 2004 (the latest available) on felony defendants in major counties, offenders who do not make bail spend an average of 121 days behind bars before even going to trial. And at any given time, nearly two-thirds of those in America's local jails are pretrial defendants. Housing this population, according to DOJ, costs state and local governments $9 billion a year.
The point is not that 13 million jail admissions are too many or too few. Nor is it that $9 billion is too much or too little to spend incarcerating defendants before trial. The point is simply that, without using technology to collect and analyze the relevant data, we simply don't know. And given the size of these figures -- not to mention their importance to public safety, government spending, and the fair administration of justice -- not knowing is no longer an option.
It's worth clicking through and reading the full article.

History of the world in two graphs

Update, June 21: Derek Thompson has updated his post, with some slightly more fine-grained charts. (This time he's paying attention to the x-axis. And citing Jared Diamond.)

It's not quite as complex an argument as Jared Diamond makes in his 1997 book "Guns, Germs and Steel," but in case you haven't yet seen it, here's a graph that Derek Thompson of The Atlantic has posted showing share of world GDP:

Pretty interesting - essentially, China and India were the economic powerhouses for most of human history. Italy, or perhaps I mean "Italy," since the country as we know it didn't exist then, has a bit part early on because of the Roman Empire. And Spain and the UK get their moments as their empires develop (not the Dutch, though - could they be embedded in Spain?).

There are some problems with the chart's layout. Can you identify them before you check that link? All the same, Thompson argues that the pre-1800 distribution reflects population, and the post-1800 distribution the Industrial Revolution. Do you agree?

Here's another look at similar data, based this time on purchasing power parity:

The numbers don't add up to 100%, but then not all countries are included: Spain, for example, is omitted here.
via bharatkalyan97

Tuesday

Drug Testing, Cycling, and Lance Armstrong

Update, October 11, 2012: You can read the full USADA decision, public statement, witness affidavits, and see other supporting documentation here. I haven't yet read any of it but press reports make the report sound pretty conclusive.

Are you wondering how it is that the United States Anti-Doping Agency (USADA) has decided to make a case against Lance Armstrong and his team's leader, trainer, and doctors despite all the years of denials and passed drug tests - and even though the federal investigation was closed without charges? (You can read a copy of the USADA charging letter here.)

One element to consider is that "passing" a drug test isn't the same thing as testing negative. As Kaiser Fung explains it in his excellent book Numbers Rule Your World:
Statistical testing shows that in steroid testing, a negative finding has far less value than a positive . . . for each doper caught red-handed (a true positive), one should expect about ten others to have escaped scot-free (false negatives). . . in particular, pay attention to these two numbers: the proportion of samples declared positive by drug-testing laboratories and the proportion of athletes believed to be using steroids. . .
 The Danish rider Bjarne Riis, who won the Tour de France in 1996, never tested positive either, yet he eventually admitted to using EPO, HGH, and cortisone. Marion Jones, the sprinter, denied drug use for years, until she eventually admitted it. The B sample [each sample is divided into two, with the second tested only if the first tests positive], Fung points out, acts as a protection against false positives. But the real issue is false negatives. As Fung says:
Statisticians say even if a test committed zero false positives, it would be far from "100 percent accurate" because of false-negative errors. Athletes only complain about false positives; the media wax on about false positives. We are missing the big story of steroid testing: false negatives. . . .
 The testers are timid because of the asymmetric costs from the two types of error. A false positive . . . will be rigorously litigated by the accused. An over-turned positive publicly humiliates the anti-doping authorities and diminishes the credibility of the testing program. By contrast, negative findings can only be proven false if athletes, like Riis, step forward to confess, so most false negatives never see the light of day.
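If you want to see how that ten-to-one figure could arise, here's a minimal back-of-the-envelope sketch in Python. The prevalence and positive-rate numbers are placeholders I made up, not figures from Fung's book; the point is just that when far more athletes dope than tests catch, the gap is all false negatives.

```python
# Back-of-the-envelope illustration of Fung's "two numbers" point.
# Both inputs are hypothetical placeholders, not figures from the book.

assumed_doping_prevalence = 0.10   # suppose 10% of athletes are doping
observed_positive_rate = 0.01      # suppose labs declare about 1% of samples positive

# If essentially every declared positive is a true positive, the dopers the
# tests miss are the difference between prevalence and the positive rate.
false_negative_share = assumed_doping_prevalence - observed_positive_rate

missed_per_caught = false_negative_share / observed_positive_rate
print(f"Dopers missed for every doper caught: about {missed_per_caught:.0f}")
# With these placeholder numbers, roughly nine dopers escape for each one
# caught -- the shape of the ten-to-one claim quoted above.
```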
Another element is the threshold point, that is, the dividing line between a positive and a negative screen. Where it is placed is up to the testers. Fung's example is the hematocrit level used during the 1990s: 50%. Normal is about 45%, but cycling used the higher number because some people, notably those from higher elevations, do have a higher level. "If the testers had used 60 percent, they could have reduced false positives, but without a doubt, more dopers would have evaded detection. Similarly, lowering the disqualifying hematocrit level would decrease false negatives at the expense of allowing more false positives." He concludes that testers have a structural reason for setting the bar high. "Because false positives make for lousy publicity, containing these errors is a top priority. However, this policy inevitably means some drug cheats are let loose."
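The threshold trade-off also lends itself to a quick simulation. This is a sketch under invented assumptions - normal hematocrit distributions, with made-up means and spreads for clean and doping riders - not a model of actual testing, but it shows why moving the cutoff from 50 percent toward 60 percent trades false positives for false negatives.

```python
import random

random.seed(0)

# Hypothetical hematocrit distributions (in percent). The means and spreads
# are invented for illustration; only the ~45% "normal" figure and the 50%
# cutoff come from the discussion above.
clean = [random.gauss(45, 3) for _ in range(100_000)]   # non-doping riders
dopers = [random.gauss(52, 3) for _ in range(100_000)]  # riders boosting with EPO

def error_rates(cutoff):
    false_positives = sum(x > cutoff for x in clean) / len(clean)
    false_negatives = sum(x <= cutoff for x in dopers) / len(dopers)
    return false_positives, false_negatives

for cutoff in (47, 50, 55, 60):
    fp, fn = error_rates(cutoff)
    print(f"cutoff {cutoff}%: false positives {fp:.1%}, false negatives {fn:.1%}")

# Raising the cutoff drives false positives toward zero while letting more
# dopers slip under the bar -- the structural reason Fung describes for
# setting the threshold high.
```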

So negative drug tests do not necessarily mean anything. The USADA charges go further than simple drug use, alleging a systematic plan by team management. The letter refers to eyewitnesses but doesn't identify them. Yesterday there were reports that four of Armstrong's former teammates had withdrawn their names from consideration for the US Olympic team.

Monday

"Uncontrolled" by Jim Manzi

"Uncontrolled: The Surprising Payoff of Trial-and Error for Business, Politics, and Society" has been getting a lot of interesting press. (You can find a roundup, chosen by Manzi himself, here.) The reviews made the book sound quite thoughtful, so I decided to read it for myself.

Writing in clear, straightforward prose, Manzi starts by explaining his thesis: that because non-experimental social science is unable to generate useful predictions of the effects of policy proposals, social scientists could improve the usefulness of their work by conducting more experiments. More specifically, he argues that we should establish mechanisms, analogous to the National Institutes of Health or the National Institute of Justice, to design and interpret randomized experiments in social policy.

Manzi begins with the 'evolutionary epistemology' theory of the American social scientist Donald T. Campbell, saying that "any complex system, such as our society, evolves beliefs about what practices work by layering one kind of trial-and-error learning upon another." He then backs up, taking a tour of the history and philosophy of science in the first third of the book. It's a useful introduction (or a review, for philosophy majors) covering the problem of induction, Aristotle, Bacon, and Hume, and then moving ahead through Karl Popper and Thomas Kuhn. Manzi makes and extends an analogy between the practice of science and the role of markets. Both, he says, abjure authority and encourage competition, eliminate failures through trial and error, are subject to an invisible-hand phenomenon, seek predictive rules (science) or prices (markets), encourage conflict bounded by clear rules, and need resources from the public (funds for science, regulations for markets).

The next third of the book is devoted to expanding his argument. Science is of course not a monolith, and Manzi writes about predictions both in non-experimental sciences (geology, astronomy) and in increasingly complex ones such as the biology of the human body. Social sciences are even more complex, and we are very far from identifying any biological basis for the causal effect of a social program. Yet, he points out, some questions cannot be subjected to experiment but need decisions nonetheless.

So how do we do this? First, Manzi takes apart a couple of claims that have been widely disseminated in recent years, including Larry Bartels' 2008 claim that income inequality increases under Republican presidents, and Steven Levitt and Stephen Dubner's claim, in their 2005 book "Freakonomics," that legalized abortion in the 1970s was responsible for the lower crime rates of the 1990s. Perhaps this is why the political right has been so drawn to the book? (I haven't read Bartels' book, but I have read "Freakonomics.") Human society is so complex, he concludes, that social scientists just don't have the tools to make broad, general claims.

But there is a way, one that has been pioneered by business and expanded in this era of big data: randomized experiments that test proposals under a variety of conditions (locations, items for sale, time of day, and so on). Experimental methods, carefully used, can work in business. But, he notes, just as in science, most business innovations fail, and those that succeed tend to deliver incremental improvements rather than strategic breakthroughs.

Essentially, Manzi is saying that the scientific method developed by Bacon and refined by later thinkers is a model that can, with a lot of strictures, be applied to social science. He points to a few large-scale randomized experiments that have taken place in social policy, in particular the HighScope Perry Preschool studies. He also reports on a wide variety of welfare reform experiments conducted in the 1990s, before the passage of the Temporary Assistance for Needy Families (TANF) legislation in 1996. But large-scale social science experimentation, he says, requires remembering some lessons from business and from the scientific method: randomization and controlled experiments with replication can allow us to draw some conclusions that can be used for policy decisions. And methodological humility requires us to remember three things: most new programs do not work; programs that change incentives are more likely to work than those that train; and, critically, there is no magic bullet (to which I would add that nothing will work every time in every circumstance).
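Since the argument turns on what randomized experiments can and cannot tell us, here is a minimal sketch of the kind of analysis Manzi has in mind: a simulated two-arm trial with a permutation test. The effect size, outcome, and sample size are all invented; the point is the mechanics of randomization plus a significance check, not any real policy result.

```python
import random

random.seed(1)

# Simulate a simple two-arm social-policy trial with invented numbers:
# a binary outcome (say, employed at follow-up) with a small true effect.
n_per_arm = 500
control = [1 if random.random() < 0.30 else 0 for _ in range(n_per_arm)]
treated = [1 if random.random() < 0.35 else 0 for _ in range(n_per_arm)]

observed_diff = sum(treated) / n_per_arm - sum(control) / n_per_arm

# Permutation test: shuffle the pooled outcomes many times to see how often
# chance alone produces a difference at least as large as the one observed.
pooled = control + treated
extreme = 0
n_permutations = 10_000
for _ in range(n_permutations):
    random.shuffle(pooled)
    diff = sum(pooled[n_per_arm:]) / n_per_arm - sum(pooled[:n_per_arm]) / n_per_arm
    if diff >= observed_diff:
        extreme += 1

print(f"observed difference in outcomes: {observed_diff:.3f}")
print(f"one-sided permutation p-value:  {extreme / n_permutations:.3f}")
# A single trial like this is exactly what Manzi warns against over-reading:
# replication across sites and conditions is what supports policy conclusions.
```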


Manzi makes several suggestions as to how we might regulate the market in social service policy ideas. One of them is that, in order to institutionalize social experimentation, the federal government should establish an agency, akin to the NIH, that can oversee and fund the design and interpretation of randomized social policy experiments. Manzi is thinking about the big picture, and it's a pretty important suggestion.

I've worked in social services a long time. Manzi's comments about the country's approach to social services rang true to me. What are now called "evidence-based programs" are starting to become the remedy of choice in New York City, at least in some of the fields I've worked in (child welfare, mental health). But as they are implemented, all I'm hearing about is the details: are the new programs adhering to the model? Fidelity to the model is necessary for success, but it's not enough: as Manzi says, we need to look rigorously, and carefully, at each replication. Manzi offers a useful way of analyzing whether these expensive programs work.

Tuesday

Outcome measures, and data skepticism, both in the NY Times

Yesterday's "On Education" column in the New York Times, by Michael Winerip, about the efforts by Florida's education officials to raise the standards students have to meet during testing, is a good illustration of how important it is to remember that establishing and using outcome measures is an iterative process. That is, you don't just identify outcome measures, set them in concrete, and look at them year after year. You look at each year's results, and you compare changes year to year. When you have enough data, you can compare changes from, say, the last two years with changes five, or even 10, years ago. You have to look at whether the measures are telling you what you want to know - or even if they're telling you what you think they're telling you. Unfortunately, Florida changed the standards, but not the scoring system, meaning that many fewer students passed. I've written about this issue before, here, for example.

Florida, Winerip makes clear, has many problems with its testing system. According to his column, it's not clear that the tests actually show competency in reading (though I would like to know more). The lesson I draw for my clients is that you can't simply stop and rest once you have a measurement system in place.

There's a good "On the Road" column in today's Times. In it, Joe Sharkey discusses results from two contradictory studies - one showing that anger in the air is increasing at distressing rates, the other that it is decreasing. Sharkey says:
There are at least two ways to explain the discrepancy. One is that perhaps Americans have become the world’s best-behaved airline passengers — which is at least possible. The other is that the F.A.A. and the Air Transport Association have different definitions of what constitutes “unruly behavior.”
This appears to be the case (though I rather liked the first explanation).
The F.A.A.’s annual unruly behavior statistics come from official reports filed by flight attendants or pilots of a passenger “interfering with the duties of a crew member” for incidents that do not involve security threats. That is a violation of federal law, with potential criminal penalties.
But the International Air Transport Association defines unruly passengers as those who “fail to respect the rules of conduct on board aircraft or to follow the instructions of crew members, and thereby disrupt the good order,” . . .
The IATA report, he adds, may include events that "reflect only a flight attendant's annoyance."

It's a good example of critical thinking - both because Sharkey didn't accept an initial news report at face value, and because he points out that the definitions, and who is categorizing events, matter. 

Monday

Cities, adaptability, and climate change

The Atlantic's Cities blog ran a column last week called "Which Cities are Most Prepared for Climate Change?" It came to a pretty grim conclusion:
95 percent of major Latin American cities are actively planning for climate change, according to the report.
Canadian cities are also preparing themselves, with 92 percent of its major cities currently undertaking adaptation planning efforts. Similar preparations are being made in 80 percent of African cities, 84 percent of European cities and 86 percent of cities in Australia and New Zealand. Asian cities are less involved, with 67 percent reporting climate adaptation planning. And at the bottom of the list is the U.S., where only about 59 percent of major cities are actively preparing for the impacts of climate change.
Wow. Sounds as if we have some catching up to do. I clicked through to the source material, a 2012 report from ICLEI - Local Governments for Sustainability. According to its website, ICLEI is a voluntary membership organization of local governments devoted to sustainable development. OK, so far, so good. The website says that ICLEI has 1,220 government members, representing 569,885,000 people.

But the methodology section of the survey raises some concerns. Check out this table:
Ten complete responses from Africa, which has 29 member cities? That seems like a low response rate. What about the other cities in Africa?

And there's more: the survey was sent to the 1,171 communities that were ICLEI members at the time, but the researchers had incorrect email addresses for 96 of them. Of the rest, 468 cities completed some or all of the questionnaire, and only 418 completed it fully.
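The implied response rates are easy to work out from those figures. Here's the arithmetic; the only assumption is treating the 96 bad addresses as communities that never received the survey.

```python
# Response-rate arithmetic from the figures in the report's methodology section.
surveyed = 1171        # member communities the survey was sent to
bad_addresses = 96     # incorrect email addresses
reached = surveyed - bad_addresses   # assume these never saw the survey

partial_or_full = 468
full = 418

print(f"communities reached: {reached}")
print(f"any response:   {partial_or_full / reached:.1%} of those reached")
print(f"full responses: {full / reached:.1%} of those reached")
print(f"full responses: {full / surveyed:.1%} of all members surveyed")
# Roughly four in ten members answered fully -- a sample nobody should treat
# as representative of cities in general without checking who responded.
```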

Oh, and how big are the members? Some are quite big, but others are very small, with populations around 25,000. New York City and Los Angeles are on the list of member cities, but did they complete the questionnaire? It would be helpful to know how many large and how many small cities responded to the questionnaire.

The lesson? When you read someone's interpretation of survey results, it's important to think critically - even if they did not. In this case, spending a few minutes thinking about the representativeness of the survey respondents should have caused the reporter to dial back the conclusion.

Friday

The Atlantic's Google Earth Puzzles



The Atlantic has set some entertaining geographical puzzles, using images from Google Earth. They're challenging, but the last two are multiple choice, so you have a one in three chance of getting it right even if you guess.

So far there have been three:

The first is here.

The second is here.

And the third is here.

The screenshot is one of the images - many of them are beautiful, and all are interesting. Take a guess before you look at the possible answers, and you'll see the availability heuristic in action.

Thursday

Soda, Obesity, and the studies behind the suggested ban

Update, June 9: Take a look at this entertaining column to see which large city will be the first to follow in New York's path. But feel free to guess in the comments before you click through to the link.

Daniel Engber, one of Slate's Explainers, has pulled together a roundup of studies about the risks of sugary soft drinks and concluded that the science behind Mayor Bloomberg's proposed ban on sales of large-size soft drinks is, well, soft. I'm not sure I agree, but his piece is worth reading. For the other side of the argument, see this blog post from NY Magazine. To get you thinking, here's a video illustrating Michael Pollan's food rules:

Wednesday

Recall vote in Wisconsin as harbinger for the 2012 general election? Not so fast

Governor Scott Walker of Wisconsin survived yesterday's recall vote despite the passion, hard work, and money poured into the recall efforts. Many news organizations are using the vote to discuss the future of organized labor. Others wonder whether the vote sends a signal about improving Republican chances in the November presidential elections.

Nate Silver posted a very good column yesterday examining his suggestion that races for governor can sometimes be contrary indicators for presidential elections. Here's what he says:
But one thing that the recall is unlikely to do is tell us much about how the presidential contest in Wisconsin is likely to evolve in November. The politics for a governor’s campaign are often subject to different currents than presidential ones, and historically the party identification of a state’s governor has said little about how presidential candidates will fare there.
Over the past 40 years, in fact, the relationship has run in the reverse direction than you might expect. The Democratic presidential candidate has typically done a little better when the state’s governor is a Republican, and vice versa.
Why is this so? As usual, Silver is clear in his explanations, providing two tables, one showing presidential vote margins by party of state governor, and the other, slightly more refined, showing presidential vote shift by party of state governor. But correlation is not causation, and Silver is always careful to remind readers of that fact. He offers two hypotheses and one caution. One hypothesis is that voters like balance in their elected officials; the other is that some voters tend to vote for the incumbent. The caution is that the aggregated data may hide some factors. This counterintuitive suggestion, backed up by numbers, is an interesting addition to the discussion. Do you agree?

Tuesday

Transit of Venus, June 5, 2012

Update: In case you missed it, or found it hard to watch, or it was cloudy, here's a NASA ultra high-def video of the 2012 Transit of Venus:



Today's Transit of Venus - the passage of Venus between the Earth and the Sun - is a rare event. The last transit, in 2004, was not easily viewed in the Eastern US. The next won't happen until 2117. I've embedded a video from NASA explaining more about the Transit of Venus and showing some great photos from 2004:

As with an eclipse, it's dangerous to look directly at the sun! Here are some tips on how to see the Transit of Venus safely. Or you can watch one of NASA's webcasts, here. New York City readers, here is more local information.

Monday

Statistical Anomalies in Baseball - it's all in the presentation

Update, June 20: Here's a link to a related article in The Economist, about the statistics of perfect games, "An Imperfect Measure of Excellence."

Johan Santana's no-hitter for the Mets on Friday, the first in the 51 years of the team's existence, has been getting a lot of local press. A LOT - four long articles in yesterday's New York Times print edition, for example. But, at least according to this article in Slate, it's the no-hitter that's the statistical fluke, not the long wait for one. As Jim Pagels, the writer, puts it:
Johan Santana’s no-hitter is certainly remarkable—but not for the reasons the sports media are citing today. It’s noteworthy because a no-hitter itself is incredibly rare—not as rare as a four home run game, hitting for the cycle, or an unassisted triple play, but infrequent enough that a team can fairly easily go decades without one. Such a streak, in fact, isn’t statistically that improbable.
According to his calculations, the Mets had about a 1 in 100 chance of such a long streak, only slightly below the average. There's a more detailed look at the issue here, in Baseball Prospectus. That analysis is also more nuanced, coming as it does from a fan's perspective. The author, Craig Glaser, argues that, while the Mets were overdue for a no-hitter, the streak wouldn't have become really rare until the Mets had played 10,000 or so games.
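Pagels's kind of back-of-the-envelope calculation is easy to reproduce. The per-game no-hitter probability below is an assumed round number and the game count is approximate, so treat this as a sketch of the reasoning rather than his exact figures.

```python
# Chance that a team goes a long stretch without one of its pitchers throwing
# a no-hitter. Both inputs are rough, assumed round numbers.
p_no_hitter_per_game = 1 / 1500   # assumed chance a team throws a no-hitter in any one game
games_played = 8000               # roughly the Mets' game count, 1962-2012

p_drought = (1 - p_no_hitter_per_game) ** games_played
print(f"chance of no no-hitter in {games_played} games: {p_drought:.2%}")
# With these inputs the drought comes out around 0.5% -- about 1 in 200, the
# same order of magnitude as the 1-in-100 figure cited above. The answer is
# sensitive to the assumed per-game rate, which is part of the point: a long
# drought is unlikely for any one team, but not freakishly so.
```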

Bottom line? An unusual individual achievement, absolutely. An unusually long streak, even a curse, that has now been broken? The answer, as usual, depends on how you frame the question.

Friday

Internet advertising


That chart, from Derek Thompson's blog, shows the opportunity for growth in mobile advertising. But blasting advertising at users on their mobile devices - or their Facebook pages - may not be the best approach. According to this post at the Harvard Business Review's blog, it's a myth to think that consumers want relationships with brands, and another myth to assume that more interactions are better. I certainly don't want to keep hearing from merchants - I have a separate email account that sits and collects emails from various merchants; I empty it occasionally but almost never read anything in it. As HBR puts it,
[the] relationship flattens much more quickly than most marketers think; soon, helpful interactions become an overwhelming torrent. Without realizing it, many marketers are only adding to the information bombardment consumers feel as they shop a category, reducing stickiness rather than enhancing it.
Perhaps that's why this article in the NY Times, about someone who, as a joke, liked a product on Facebook only to find that his joke had been turned into an ad, resonated so much. The article is on the front page of the print edition.

You can actually limit some of the uses Facebook makes of your information. First, if you don't "like" products or commercial pages, you will never be used in an ad. Second, you can edit your third-party ad settings and your social ad settings so that "no one," rather than "only my friends," can see your name or picture in ads if, as Facebook delicately puts it, "we allow this in the future." To do so, go to Account Settings > Facebook Ads > Ads shown by third parties > Edit third party ad settings, choose "no one," and save your changes. Do the same thing with "Ads and Friends."

Oh, and read the full Derek Thompson post. It has a link to a very interesting slideshow discussing Internet trends.
