
Friday

Hiatus week of April 22

I will be on a break from blogging next week, first for a last-minute large project and then for some travel. In the meantime, if you haven't already seen it, make sure you read Paul Krugman's New York Times column about the "Excel depression." It's his take on the economic paper that concluded that once national debt exceeds 90 percent of gross domestic product, economic growth drops off sharply. The claim gave some theoretical weight to the politicians who argued for economic austerity. Turns out the paper may have been, um, incorrect.

Yes, there was a coding error, and, according to Krugman, the authors also omitted some data and used 'questionable statistical procedures.' They've now released their data and original spreadsheet, which is how these errors came to light. You can see sections of the original spreadsheet here, if you're interested. But as is often the case, the issue was as much about how the original study was used and reported as about the errors themselves. As Krugman puts it:
[The] tipping-point claim was treated not as a disputed hypothesis but as unquestioned fact. For example, a Washington Post editorial earlier this year warned against any relaxation on the deficit front, because we are “dangerously near the 90 percent mark that economists regard as a threat to sustainable economic growth.” Notice the phrasing: “economists,” not “some economists,” let alone “some economists, vigorously disputed by other economists with equally good credentials,” which was the reality.
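Going back to the coding error for a moment: it's worth seeing how easily an averaging formula that silently drops rows can move a headline number. Here's a minimal sketch in Python - the growth figures are invented for illustration, not the actual Reinhart-Rogoff data:

# Invented growth figures - NOT the actual Reinhart-Rogoff data.
growth = [-0.5, 1.2, 0.3, 2.6, 3.1]

full_mean = sum(growth) / len(growth)

# The Excel error was equivalent to an AVERAGE() range that stopped
# short, silently excluding the last few rows:
truncated = growth[:-2]
truncated_mean = sum(truncated) / len(truncated)

print(f"mean over all rows:       {full_mean:.2f}")       # 1.34
print(f"mean over truncated rows: {truncated_mean:.2f}")  # 0.33

Nothing in the spreadsheet flags the difference; you only catch it by checking the formula's range against the data, which is exactly what happened once the spreadsheet was released.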
Read the full column. See you in a week.

Monday

Critical reading, of charts

Here's a very good look from the Harvard Business Review (free after registration) at how different presentations of the same data can reflect different interpretations. It's another reminder of how important critical thinking and skepticism are whenever you're given data. As author Jake Porway states:
The most troubling part of all this is that "we the people" rarely have the skills to see how data is being twisted into each of these visualizations. We tend to treat data as "truth," as if it is immutable and only has one perspective to present. If someone uses data in a visualization, we are inclined to believe it. This myopia is not unlike imagining the red velvet cake we see in front of us to be the only thing that could have been created from the eggs and milk we mixed together to make it. We don't see in the finished product the many transformations and manipulations of the data that were involved, along with their inherent social, political, and technological biases.

Friday

Context: it's how you use the numbers, not what they are

The normally extremely clear writer James Fallows (his terrific blog on politics, technology, and beer, among other things, is here) has posted a letter from one of his readers. The full blog post is here; the excerpt I'm interested in is:
First, our public policy discussion has become too wonkish, by being entirely focused on measurable outcomes at the expense of all others.  (Another example: the health care debate, the vast majority of which was about costs instead of the moral imperative of universal health care)...
It may just be the writing (to repeat, it's not Fallows' but a reader's comment), but this strikes me as an example of someone blaming the numbers, rather than the interpretation of the numbers, for the politics. Sometimes a mathematical model is our best chance of understanding what is happening in the world (even if that understanding is weak). But it's our interpretation of the numbers - the context we give them - that guides how we use them. Numbers are never just numbers.

Earlier this week I posted a map developed by the real estate website Trulia showing where straight, single men and women live in various US metropolitan areas. In the course of researching that post, I came across Trulia Hindsight, an interactive map that lets you track housing development from 1870 through the present.

That's a screenshot of San Francisco, above, but the animation (which I could not embed) is the best part. Housing is overlaid on a map of the countryside as it looks now, so you can see development relative to roads. You can also change the zoom for greater detail. It's a fascinating site, and a great use of data.

Tuesday

Hate PowerPoint? Here's why you should


I've mentioned Edward Tufte, the statistician and political scientist, before. Now I've read Tufte's 2003 essay "The Cognitive Style of PowerPoint: Pitching Out Corrupts Within," which makes a compelling case for ditching PowerPoint in favor of written reports or conversations.
In practice, PP slides are very low resolution compared to paper, most computer screens, and the immense visual capacities of the human eye-brain system. With little information per slide, many slides are needed. Audiences endure a relentless sequentiality . . . Information stacked in time makes it difficult to understand context and evaluate relationships . . . The statistical graphics produced by PowerPoint are astonishingly thin, nearly content-free.
I could go on, because it's all so well written. One of Tufte's key points is that PowerPoint is designed for the presenter, not for the audience and especially not for the content. He argues that using PowerPoint for presentations means adopting the underlying metaphor of the software corporation, i.e., "a big bureaucracy engaged in computer programming (deep hierarchical structures relentlessly sequential, nested, one-short-line-at-a-time)" (the italics are his). A better metaphor, Tufte says, is teaching. Thus the increasing use of PowerPoint in schools is cause for grave concern.

Tufte's examples are many; the most telling is his review of the slides from a presentation by engineers about the space shuttle Columbia's prospects for safe re-entry after foam debris struck the shuttle during liftoff. He also reproduces Peter Norvig's translation of the Gettysburg Address into PowerPoint (only six slides, and worth a look if you haven't already seen it).

So how do you improve presentations? Tufte says two things: (1) use MS Word, not PowerPoint, as your presentation software; and (2) develop a short briefing paper, including text, graphics, and sparklines, to use as a handout. The pamphlet is available from Tufte's website here; you can also read a further discussion here.

(That's Austin Kleon's mind map of Tufte's book "Beautiful Evidence" above, via Kleon's cool blog.)

Monday

A good discussion of confirmation bias

In an Atlantic blog post written last week, Robert Wright gives a detailed description of what looks like a series of confirmation-bias errors in the reporting of a speech by Mahmoud Ahmadinejad. In Wright's careful account, what appears to have happened is that several linguistic choices made during a series of translations changed a general statement into a particular one. (There were several steps in the process, and some interesting linguistic contrasts.) How could this happen? The individuals involved read things into the language that they wanted to find there. The errors, if that's what they were, were compounded by some not-so-critical thinking. As Wright puts it:
A striking thing about human self-deception is how diverse and subtle its sources can be. The classic form of confirmation bias is to choose the most convenient among competing pieces of evidence . . . But look at some other elements of self-deception that seem to have been at play here:
1) Unreflectively narrowing the meaning of vague or ambiguous words.  . . .
2) Accepting evidence uncritically. . . .
3) Making slight and essentially unconscious fudges.  . . .
I'm interested in the analogy here, not the politics, which is why I've elided Wright's discussion of the facts (I do recommend reading the full post). I've described confirmation bias before, here, for example. It's very easy to pull examples from data to "prove" to yourself that what you believe is true. (See, for example, the climate change debate.) It's much harder to look at what your numbers, or other evidence, are telling you without all sorts of unconscious biases pushing you to interpret them in certain ways. So I'm going to repeat the advice I pulled from Leonard Mlodinow's book "The Drunkard's Walk":
1. Remember that chance events can produce patterns.
2. Question perceptions and theories.
3. Spend as much time looking for evidence that you are wrong as you spend looking for evidence that you are right.
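Point 1 is easy to verify for yourself. Here's a small simulation in Python (my own illustration, not from Mlodinow's book) showing that impressive-looking streaks turn up in pure coin flips all the time:

import random

def longest_streak(flips):
    # Length of the longest run of identical outcomes.
    best = run = 1
    for prev, cur in zip(flips, flips[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

random.seed(1)
trials = [[random.choice("HT") for _ in range(100)] for _ in range(1000)]
streaks = [longest_streak(t) for t in trials]

# In 100 fair flips, a run of six or more identical outcomes shows
# up in the large majority of trials - pure chance, no cause needed.
print(sum(s >= 6 for s in streaks) / len(streaks))

A six-year winning streak by a fund manager deserves the same scrutiny.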

Tuesday

Outcome measures, and data skepticism, both in the NY Times

Yesterday's "On Education" column in the New York Times, by Michael Winerip, about the efforts by Florida's education officials to raise the standards students have to meet during testing, is a good illustration of how important it is to remember that establishing and using outcome measures is an iterative process. That is, you don't just identify outcome measures, set them in concrete, and look at them year after year. You look at each year's results, and you compare changes year to year. When you have enough data, you can compare changes from, say, the last two years with changes five, or even 10, years ago. You have to look at whether the measures are telling you what you want to know - or even if they're telling you what you think they're telling you. Unfortunately, Florida changed the standards, but not the scoring system, meaning that many fewer students passed. I've written about this issue before, here, for example.

Florida, Winerip makes clear, has many problems with its testing system. According to his column, it's not clear that the tests actually show competency in reading (though I would like to know more). The lesson I draw for my clients is that you can't simply stop and rest once you have a measurement system in place.

There's a good "On the Road" column in today's Times. In it, Joe Sharkey discusses results from two contradictory studies - one showing that anger in the air is increasing at distressing rates, the other that it is decreasing. Sharkey says:
There are at least two ways to explain the discrepancy. One is that perhaps Americans have become the world’s best-behaved airline passengers — which is at least possible. The other is that the F.A.A. and the Air Transport Association have different definitions of what constitutes “unruly behavior.”
The second explanation appears to be the case (though I rather liked the first).
The F.A.A.’s annual unruly behavior statistics come from official reports filed by flight attendants or pilots of a passenger “interfering with the duties of a crew member” for incidents that do not involve security threats. That is a violation of federal law, with potential criminal penalties.
But the International Air Transport Association defines unruly passengers as those who “fail to respect the rules of conduct on board aircraft or to follow the instructions of crew members, and thereby disrupt the good order,” . . .
The IATA report, he adds, may include events that "reflect only a flight attendant's annoyance."
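It's easy to see how much the definition matters. Here's a toy sketch - invented incidents and deliberately simplified criteria, not the actual FAA or IATA coding rules:

# Toy incident log - invented for illustration.
incidents = [
    {"interfered_with_crew": True,  "ignored_instructions": True},
    {"interfered_with_crew": False, "ignored_instructions": True},
    {"interfered_with_crew": False, "ignored_instructions": True},
    {"interfered_with_crew": False, "ignored_instructions": False},
]

# Narrow definition (roughly the FAA's): interference with crew duties.
narrow_count = sum(i["interfered_with_crew"] for i in incidents)

# Broad definition (roughly IATA's): any failure to follow instructions.
broad_count = sum(i["ignored_instructions"] for i in incidents)

print(narrow_count, broad_count)  # 1 3 - same flights, different counts

Same events, one tally three times the other - no change in passenger behavior required.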

It's a good example of critical thinking - both because Sharkey didn't accept an initial news report at face value, and because he points out that the definitions, and who is categorizing events, matter. 

Monday

Cities, adaptability, and climate change

The Atlantic's Cities blog ran a column last week called "Which Cities are Most Prepared for Climate Change?" It came to a pretty grim conclusion:
95 percent of major Latin American cities are actively planning for climate change, according to the report.
Canadian cities are also preparing themselves, with 92 percent of its major cities currently undertaking adaptation planning efforts. Similar preparations are being made in 80 percent of African cities, 84 percent of European cities and 86 percent of cities in Australia and New Zealand. Asian cities are less involved, with 67 percent reporting climate adaptation planning. And at the bottom of the list is the U.S., where only about 59 percent of major cities are actively preparing for the impacts of climate change.
Wow. Sounds as if we have some catching up to do. I clicked through to the source material, a 2012 report from ICLEI - Local Governments for Sustainability. According to its website, ICLEI is a voluntary membership organization of local governments devoted to sustainable development. OK, so far, so good. The website says that ICLEI has 1,220 government members, representing 569,885,000 people.

But the methodology section of the survey raises some concerns. Check out the response table: ten complete responses from Africa, which has 29 member cities? That seems like a low response rate. What about the other cities in Africa?

And there's more: the survey was sent to the 1,171 communities that were ICLEI members at the time, but the researchers had incorrect email addresses for 96 of them. Of the rest, 468 cities completed some or all of the questionnaire, and only 418 completed it fully.
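Run the arithmetic on those counts (a quick Python sketch, using only the numbers above) and the problem is plain:

surveys_sent = 1171
bad_addresses = 96
delivered = surveys_sent - bad_addresses      # 1075

partial_or_full = 468
full = 418

print(f"response rate (any):  {partial_or_full / delivered:.0%}")  # 44%
print(f"response rate (full): {full / delivered:.0%}")             # 39%
print(f"Africa, complete:     {10 / 29:.0%}")                      # 34%

Fewer than half the member cities answered at all, so the regional percentages in the report describe the cities that chose to respond, not cities in general.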

Oh, and how big are the members? Some are quite big, but others are very small - populations of around 25,000. New York City and Los Angeles are on the list of member cities, but did they complete the questionnaire? It would be helpful to know how many large and how many small cities responded.

The lesson? When you read someone's interpretation of survey results, it's important to think critically - even if the writer did not. In this case, spending a few minutes thinking about the representativeness of the survey respondents should have led the reporter to dial back the conclusion.
