Cancer screening, questioned

Here's a thoughtful article by Gina Kolata from Saturday's NY Times, assessing the spate of new guidelines suggesting that less cancer screening may be a better public health approach. I've discussed this issue in earlier posts, particularly my review of "Overdiagnosed," here (and the article quotes the book's principal author Dr. H. Gilbert Welch). Cost and a new understanding of cancer--including the fact that many cancers grow slowly or do not grow at all--have led to recommendations for less screening.


Improving data displays

I've just come across a website, Junk Charts, that identifies and dissects some of the many problems you can stumble into when presenting data in charts. It's a very useful resource, as the author Kaiser Fung always explains what is wrong with each chart and shows how to do it better. Take a look at his post on the USDA's pie chart explaining healthy diets, here, for example.

And here's a screenshot from another post, in which Fung argues that pie charts should be abolished.

And a final example, in which Fung illustrates exactly whose taxes would be lower under the 9-9-9 proposal (hint: it's not who you might think).

Altogether, an entertaining and instructive site.


"One for the Road," by Barron Lerner

UPDATE, November 17: Barron Lerner will be discussing the book on "All Things Considered" on NPR today at approximately 4:30 EST. 

If you came of age in the 1970s and 80s it seemed as if MADD—Mothers Against Drunk Driving—and its anti-drunk driving message were everywhere, and that US culture embraced a clear consensus: drinking and driving should not be done together, ever. But, according to Barron Lerner’s new history “One for the Road,” it didn’t have to be this way. (Even MADD changed its name; at the start it was ‘Mothers Against Drunk Drivers.’)

In the US, in contrast to the long history of regulating impaired driving elsewhere in the world, particularly in Europe, a habitual respect for individualism clashed with the imperative to protect everyone from an impaired person driving a car. Everything was fought over, from the reliability of the various mechanisms that measure the amount of alcohol in the blood, to the idea of linking a level of blood alcohol content to a degree of impairment (who knew that Indiana would be a leader in this regard?). Some people argued that social drinkers could drive safely. Others argued equally strenuously that it was heavy drinkers who could, since they knew how to hold their liquor.

I have written elsewhere (here and here for example) about the importance of understanding the context and uncertainties of statistics someone is wielding to prove a point, and Lerner highlights the issue in his book:

The debate [about the effectiveness of efforts to control drunk driving] nicely demonstrated a perpetual challenge of activist movements: balancing fervor for a cause with justification from the available scientific data. How much scientific ‘proof’ is necessary for activists to forge ahead with seemingly just and moral agenda? Successful public health movements to control infectious diseases, prevent smoking-related lung cancers, and remove lead from paint, to name just a few, relied on suggestive—not definitive—data. This strategy has been termed the ‘precautionary principle.’ Waiting for the science, in retrospect, would have cost lives.
In lucid and unadorned prose, Lerner steps back from these tangles and considers the social, cultural, and enforcement issues of driving while distracted (studies have shown that the cognitive effort of talking on the phone impairs the judgment and reaction times driving requires as much as drunk driving can; texting takes your eyes off the road entirely). Whether driving while impaired or distracted is viewed as a law enforcement problem, a public health problem, or an illness, American individualism, he concludes, will always complicate efforts to protect Americans from impaired drivers. Lerner has described this as his “preachiest” book ever; I don’t think so. In an earnest tone, he lays out a compelling case for strict driving laws, and his conclusion is clear: such laws protect all of us against the massive damage an impaired driver can cause.


School rating and sports ranking methodology (and tennis)

New York City's Department of Education engages in an extensive, and widely reported, school rating and report card system. It gives elementary schools a progress report rating from A to F based on several criteria, including student progress and student performance on state tests. Elementary school students must take state proficiency tests each year, and the Department of Education looks at both individual proficiency and overall progress. The problem, of course, for schools where many children are already reading and doing math at or above grade level, is that there's little room left to show progress: scores have an upper limit, and a school near that ceiling can't post large gains no matter how well it teaches. There's also an issue with the Department of Education's comparative metric.
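The ceiling effect described above can be sketched in a few lines. This is a hypothetical illustration only: the score cap and the numbers are invented, and this is not the Department of Education's actual formula.

```python
# Hypothetical illustration of the ceiling effect in a progress metric.
# Scores are capped at 100; a school already near the cap cannot show
# the same year-over-year gain as one starting lower.

CAP = 100

def progress(before, after, cap=CAP):
    """Year-over-year gain, with scores truncated at the cap."""
    return min(after, cap) - min(before, cap)

# A school starting at 60 can show a 15-point gain...
print(progress(60, 75))   # 15
# ...but a school already at 95 can show at most 5, however well it teaches.
print(progress(95, 110))  # 5
```

Any rating that rewards raw gains on a bounded score will systematically understate progress at high-performing schools, which is the heart of the complaint.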

I've occasionally thought that the Association of Tennis Professionals ranking methodology would provide some guidance or useful ideas for the folks who develop and report the New York City elementary school annual ratings. It's not an entirely fanciful notion. Tennis players must enter tournaments, and earn ranking points for progressing through the elimination rounds. Professional tennis and elementary education are not really analogous, of course. But all the same, the tennis rankings, while complex, take account of players who finished the previous year high in the rankings and therefore have little room to move up.

Now I've come across another website, Greatest Sporting Nation, whose methodology the New York City Department of Education might also want to take a look at. The Greatest Sporting Nation purports to identify which country competing in international competitions is the greatest overall at sports. (It also ranks countries on a per capita basis, as well as ranking each sport, and male and female athletes, separately.) The website is well written and explains itself quite clearly. The first metric I'd suggest the New York City Department of Education take a look at is the Global Ranking, a ranking based on performances over the previous four years. Looking at a longer time frame might give New York City parents a better sense of how the school their kids attend has been performing. I'd also like the schools to take a look at the Per Capita Cup, which measures performance taking population size into account.
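The per-capita idea is simple to sketch: divide raw performance by population so small countries (or small schools) aren't penalized for their size. All figures below are invented for illustration; this is not Greatest Sporting Nation's actual formula.

```python
# Hypothetical per-capita ranking: raw points divided by population
# (in millions). All figures are invented for illustration.

raw_points = {"A": 500, "B": 450, "C": 60}
population_m = {"A": 300, "B": 80, "C": 5}

per_capita = {
    country: raw_points[country] / population_m[country]
    for country in raw_points
}

# Sorted best-first: the small country C tops the per-capita table
# even though it finishes last on raw points.
ranking = sorted(per_capita, key=per_capita.get, reverse=True)
print(ranking)  # ['C', 'B', 'A']
```

The same normalization applied to schools would let a small school's gains count for as much as a large school's, which is one reason the comparison appeals to me.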

Just saying.

Medical statistics and decision-making

Back in April, I discussed Welch, Schwartz, and Woloshin's excellent book "Overdiagnosed: Making People Sick in the Pursuit of Health" in a post. The authors assume readers will want to focus on the numbers, and provide a clear guide to interpreting medical advice that comes your way. I was reminded of it several times in the past week. First the US Preventive Services Task Force issued a draft statement recommending against PSA screening for asymptomatic men.

Second, the NY Times Magazine discussed PSA testing in an article on Sunday titled, "Can Cancer Ever Be Ignored?" FWIW, the article quotes Welch as saying, “The European trial says 50 men have to be treated for a cancer that was never going to bother them to reduce one death. Fifty men. That’s huge. To me, prostate screening feels like an incredibly bad deal.”
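Welch's "50 men" figure is a number needed to treat (NNT): the reciprocal of the absolute risk reduction. Here is a minimal sketch of the arithmetic; the risk figures are invented to make the math come out to 50, and are not the European trial's actual data.

```python
# Number needed to treat (NNT): the reciprocal of the absolute risk
# reduction (ARR). The risks below are invented for illustration,
# not the European trial's actual figures.

def nnt(risk_control, risk_treated):
    """How many people must be treated to prevent one bad outcome."""
    arr = risk_control - risk_treated
    return 1 / arr

# If treatment cuts the risk of death from 4% to 2%, the ARR is two
# percentage points, so 50 people must be treated to prevent one death.
print(nnt(0.04, 0.02))
```

The point of the statistic is exactly Welch's: the other 49 men undergo treatment, with its side effects, for a cancer that was never going to kill them.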

Third, last week I read Jerome Groopman and Pamela Hartzband's newest book "Your Medical Mind: How to Decide What is Right for You." That book is not a numbers book by any means, but it does provide several spectra intended to help the general public figure out how to respond when a doctor proposes tests, medication, or other procedures. Written in the clear style we have come to expect from Dr. Groopman, it is pitched at a level you can take in while distracted by the emotions and time pressures of a medical crisis.


Protected Health Information, or using other people's data

The New York Times carried a front-page story yesterday about the release of patient data from Stanford Hospital that provides several useful lessons for those of us who spend our work lives mucking about in files and spreadsheets.

It appears that the hospital sent the file to someone they believed worked for one of their business associates. He did, but evidently as a marketing contractor, not an employee, though he did use an email address of the business associate. According to the story, the marketing contractor sent the file to a prospective employee of his, who posted it on a public paid homework help site.

What could have been done to prevent this breach? The file moved several times; each time, the sender or receiver could have at least wondered whether it contained live data. The hospital could have alerted the marketing contractor that the file had live data. The marketing contractor could have looked at the file before sending it on to the job applicant, or had her complete the assignment in his office.

So what are the lessons? Leaving technical issues aside (though you can read the next paragraph if you're interested in those) I think there are two: first, look at the file before you send it on! And second, when you get a file, think about what might be in it before you pass it on. (There's a third lesson in there too, about not asking for help publicly when you are trying to demonstrate a skill necessary for a job, but I'm inclined to skip over that one.)

More technical paragraph: Under HIPAA, the Health Insurance Portability and Accountability Act, "business associate" is a carefully defined term of art, usually an organization that provides computer, analytical, or other services to the health care provider. Business associates often need access to confidential health information (one of the services they provide is billing; another is reimbursement) and are generally hedged in with contracts spelling out what they can and cannot disclose. The people who know what is in the contracts may not be the same people with operating responsibilities, and I suspect that training around confidentiality issues is not enough. You need constant reminders too.


Two Strategy Articles from McKinsey Quarterly

I'm summarizing two helpful articles from the McKinsey Quarterly.

In "The Perils of Bad Strategy," Richard Rumelt lists four elements that result in poor strategy. They are:
     1. Failure to face (or identify) the problem - if you don't know what stands in your way, you won't be able to solve it.
     2. Confusing goals with strategy - establishing a goal isn't enough. You need to know why you are trying to reach that goal; as Rumelt puts it, the strategist's task is "to have a strategy worthy of the effort called upon."
     3. Poorly stated strategic objectives - if your goals are general or fuzzy or there are simply too many of them, you won't be able to focus on the crucial ones.
     4. Fluff - if your goals are fluffy, or "superficial abstraction[s]," they signal that you haven't thought them through.

Fortunately, Rumelt also lists three elements of good strategy: a diagnosis, a guiding policy, and coherent steps to get you from here to there.

As always, it's easy to say these things, and harder to do them. Rumelt provides some useful illustrations and examples. The second article, "Have you Tested Your Strategy Lately?" by Chris Bradley, Martin Hirt, and Sven Smit, digs down a little deeper by offering ways to test your overall strategy, as well as various aspects of it. While clearly written with for-profit businesses in mind, the tests, with a little tweaking, can be applied equally well to not-for-profits.
