
Thursday

The importance of context

The New York Times ran an article in today's paper describing how the methodology used to develop suicide rates for the military differs from the methodology used to develop the rate for the civilian population. The Times says that Pentagon medical statisticians use
a total population figure that includes all Guard members or reservists who spent any period of time on active duty in a given year, even if it was only a few days. According to that approach, the total active military population was about 1.67 million for all of 2009, a review of Pentagon data shows.
But at almost any given moment, the United States military is much smaller than that. Another office of the Pentagon, the Defense Manpower Data Center, the personnel record-keeping office, used a total population number of about 1.42 million service members in 2009. That figure was calculated by including only National Guard and reserve troops who had been on active duty for at least six months in a given year.
Therefore, because the denominator is too large, the military has been understating the suicide rate. (You can find a reasonable explanation of how a rate is calculated here.) Why is this important? Because an understated military rate looks comparable to the civilian rate, making the problem seem smaller than it is.
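To see how much the denominator matters, here's a minimal sketch of the arithmetic. The two population figures are the ones the Times reports; the suicide count is purely hypothetical, since the article excerpt doesn't include one.

```python
# Minimal sketch of how the denominator choice changes a rate per 100,000.
# The two population figures come from the Times article; the suicide count
# below (300) is purely hypothetical, used only to show the arithmetic.

def rate_per_100k(events: int, population: int) -> float:
    """Events per 100,000 people in the population."""
    return events / population * 100_000

hypothetical_suicides = 300      # assumption, not a reported figure
broad_population = 1_670_000     # anyone with any active-duty time in 2009
narrow_population = 1_420_000    # at least six months of active duty in 2009

print(f"Broad denominator:  {rate_per_100k(hypothetical_suicides, broad_population):.1f} per 100,000")
print(f"Narrow denominator: {rate_per_100k(hypothetical_suicides, narrow_population):.1f} per 100,000")
# The larger denominator always yields the lower (understated) rate.
```

Same number of deaths, two different rates - the only thing that changed is who gets counted in the denominator.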

Oregon Health Study - first results

Here's a little more information about the Oregon Health Study's first published results, reported today in the New York Times. Unfortunately, the full article, in the New England Journal of Medicine, here, is behind a paywall. But here's the best takeaway, from the study's web site:
For uninsured low-income adults, Medicaid significantly increased the probability of being diagnosed with diabetes, though it had no statistically significant effect on measured blood pressure or cholesterol. Medicaid reduced observed rates of depression by 30 percent and increased self-reported mental health. Medicaid virtually eliminated out-of-pocket catastrophic medical expenditures, and increased use of physician services, prescription drugs, and hospitalizations.

This is not nothing, and commenters who dismiss it are overstating their case. But as always, it's important to interpret statistical studies carefully. To its credit, the Times Economix Blog has a post, "What the Oregon Health Study Can't Tell," by reporter Annie Lowrey, that does so:
Where it says something, it says a lot: it provides strong evidence that Medicaid recipients will spend more, use more tests, experience less depression, have fewer bills sent to collection agencies, and so on. It shows health insurance working just the way insurance is supposed to work: protecting the financial stability of the people purchasing it.
The biometric results are compelling, too. The authors chose a handful of conditions that were common, important, easy to test for and treatable to include in the study. Medicaid does not seem to do much to improve health outcomes related to those conditions in two years.
But there are many more questions that the Oregon Health Study simply cannot answer, despite the overheated rhetoric out there today. Does Medicaid improve health over a decade? What might Medicaid do for lifetime health costs? We do not know, even if the study provides some clues. Nor could this study answer the question of whether the Medicaid expansion will be “worth it,” and why. What study could?

You can find a roundup of responses here.

Friday

American views on gun control - shifting at last?



The New York Times reports today that a recent Times/CBS News poll appears to show that Americans have - at last! - begun to agree that gun laws need tightening:
[T]he poll found that a majority of Americans — 54 percent — think gun control laws should be tightened, up markedly from a CBS News poll last April that found that only 39 percent backed stricter laws.
The rise in support for stricter gun laws stretched across political lines, including an 18-point increase among Republicans. A majority of independents now back stricter gun laws.
It gets better: support for background checks on all gun sales (including those by unlicensed sellers) and for a ban on high-capacity magazines is also increasing.

But since this is a blog about data use, what's wrong with the chart? Two things: it doesn't say when the reporting period starts, it just shows two dates in the middle, 2005 and 2010; and while the scale appears to start at 0, it's possible that it doesn't. So the chart could be better. The data, though, give me some cause for hope.
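To illustrate the second point, here's a quick sketch - using made-up numbers, not the Times/CBS data - of how the same series looks with and without a zero baseline.

```python
# Illustrative sketch (not the Times/CBS data): the same made-up series plotted
# with and without a zero baseline, to show why an unlabeled axis start matters.
import matplotlib.pyplot as plt

years = [2003, 2005, 2007, 2010, 2013]   # placeholder reporting dates
support = [45, 42, 40, 39, 54]           # placeholder percentages

fig, (ax_zero, ax_trunc) = plt.subplots(1, 2, figsize=(8, 3))

ax_zero.plot(years, support, marker="o")
ax_zero.set_ylim(0, 100)                 # honest baseline: full 0-100% scale
ax_zero.set_title("Axis starts at 0")

ax_trunc.plot(years, support, marker="o")
ax_trunc.set_ylim(38, 56)                # truncated axis exaggerates the swing
ax_trunc.set_title("Truncated axis")

for ax in (ax_zero, ax_trunc):
    ax.set_xlabel("Year")
    ax.set_ylabel("Percent favoring stricter laws")

plt.tight_layout()
plt.show()
```

A truncated axis isn't always wrong, but a chart that doesn't tell you where its scale starts makes it hard to judge how big the change really is.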

The Guardian has put together a very good chart on varying state gun laws. It's here.

Tuesday

Outcome measures, and data skepticism, both in the NY Times

Yesterday's "On Education" column in the New York Times, by Michael Winerip, about the efforts by Florida's education officials to raise the standards students have to meet during testing, is a good illustration of how important it is to remember that establishing and using outcome measures is an iterative process. That is, you don't just identify outcome measures, set them in concrete, and look at them year after year. You look at each year's results, and you compare changes year to year. When you have enough data, you can compare changes from, say, the last two years with changes five, or even 10, years ago. You have to look at whether the measures are telling you what you want to know - or even if they're telling you what you think they're telling you. Unfortunately, Florida changed the standards, but not the scoring system, meaning that many fewer students passed. I've written about this issue before, here, for example.

Florida, Winerip makes clear, has many problems with its testing system. According to his column, it's not clear that the tests actually show competency in reading (though I would like to know more). The lesson I draw for my clients is that you can't simply stop and rest once you have a measurement system in place.

There's a good "On the Road" column in today's Times. In it, Joe Sharkey discusses results from two contradictory studies - one showing that anger in the air is increasing at distressing rates, the other that it is decreasing. Sharkey says:
There are at least two ways to explain the discrepancy. One is that perhaps Americans have become the world’s best-behaved airline passengers — which is at least possible. The other is that the F.A.A. and the Air Transport Association have different definitions of what constitutes “unruly behavior.”
This appears to be the case (though I rather liked the first explanation).
The F.A.A.’s annual unruly behavior statistics come from official reports filed by flight attendants or pilots of a passenger “interfering with the duties of a crew member” for incidents that do not involve security threats. That is a violation of federal law, with potential criminal penalties.
But the International Air Transport Association defines unruly passengers as those who “fail to respect the rules of conduct on board aircraft or to follow the instructions of crew members, and thereby disrupt the good order,” . . .
The IATA report, he adds, may include events that "reflect only a flight attendant's annoyance."

It's a good example of critical thinking - both because Sharkey didn't accept an initial news report at face value, and because he points out that the definitions, and who is categorizing events, matter. 
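To make the definitional point concrete, here's a toy sketch - the categories are a caricature of the FAA and IATA definitions, and the incidents are invented - showing how the same log of events yields different counts under different definitions.

```python
# A toy illustration (not real FAA or IATA data) of how two definitions of
# "unruly behavior" produce different counts from the same incident log.

incidents = [
    {"description": "refused crew instructions", "interfered_with_crew": True,  "annoyance_only": False},
    {"description": "argued over seat recline",  "interfered_with_crew": False, "annoyance_only": True},
    {"description": "smoked in lavatory",        "interfered_with_crew": True,  "annoyance_only": False},
    {"description": "complained loudly",         "interfered_with_crew": False, "annoyance_only": True},
]

# Narrow definition (roughly the FAA's): interference with a crew member's duties.
narrow_count = sum(1 for i in incidents if i["interfered_with_crew"])

# Broad definition (roughly IATA's): any failure to follow the rules or crew
# instructions, including events that reflect only a flight attendant's annoyance.
broad_count = sum(1 for i in incidents if i["interfered_with_crew"] or i["annoyance_only"])

print(f"Narrow definition: {narrow_count} incidents")
print(f"Broad definition:  {broad_count} incidents")
# Same events, different definitions, different counts - and different trends.
```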
