About a week ago, the NY Times ran an article about websites like Daytum and TouchGraph that let you create charts and graphs of your personal data: how many miles you ran or calories you consumed (and where you consumed them). Daytum is free, though there is an inexpensive paid version as well; TouchGraph is expensive. The article also mentions several websites for runners tracking their training.
These are fun to play with -- I always like looking at my friend wheel on Facebook -- and provide more options than Excel charts.
Another very useful website not mentioned in the Times article is Thinklinkr, an outlining program. Thinklinkr is Internet-based, which means you can access it from any computer. It's simple to use, and you can share it with collaborators, with a live chat discussion feature so you can talk about what you're writing. I've used it for everything from travel planning to outlining blog posts.
Monday
Catching up with the Times
The NY Times has carried a series of articles about personal statistics over the last week . . . and I was away for a couple of days so am just catching up.
In reverse order of publication: on Saturday the Times carried an essay by Alina Tugend about writers, and others, defining themselves by whatever numbers are available online: most-emailed article, number of Twitter followers, Amazon sales rank. And why we care. But even though the article quotes authors and psychologists saying all the right things about what's wrong with measuring work this way, Ms. Tugend's article really misses the point of measurement.
Why do we measure? To find out what's happening in the world, or our corner of it. But it's important to look at the right measurement, not just the available one. Just because the Amazon sales ranking is there doesn't mean it's a useful measure of how good, or how popular, a book is. Sample sizes are very small, and, as Tugend points out, sales of only a few books can move a book up the list fast, with a concomitant drop the next day.
Equally important, context really does matter. Someone may write for intensely personal reasons, reasons that are probably not reflected in Amazon sales figures. Why someone follows tweets no doubt has a context too. That's not a context I have yet imagined myself into, which brings me to my final point: the potential denominator, the pool of users, or readers, or people with Twitter accounts (everyone who could have a Twitter account? everyone who might one day read a book?), might be huge.
And in this big a pool, with this little information, there's a lot of interpreting and not a lot of certainty. So looking at numbers like these is, well, beside the point.
Tuesday
Metrics for web sites
I use Google Analytics to keep an eye on traffic on my web site. It's free, and easy to include; you just copy the code and paste it into your web site. Today's Chronicle of Higher Education carries an article, "Colleges Rehab Their Web Sites for Major Payoffs," about colleges using metrics to understand traffic on their web sites. The article describes the costs of poor design in terms of foregone applications and the benefits of redesign. It's always worth thinking about how metrics can inform the way you do business!
Google Analytics is available here.
Monday
Siddhartha Mukherjee's article in yesterday's New York Times Magazine about cell phone use and brain cancer is a good reminder of the difficulty facing an individual trying to use statistics to make decisions: as an individual, you don't know in advance which group you're a part of. It's a good article, though not as good as his book about cancer (outside the scope of this blog but well worth reading).
Wednesday
Organizational Background and Capacity
This post is the third in a series of posts about writing successful grant proposals. See earlier posts about picking a funder, here, and paying attention to basic requirements like formatting, here.
You may think that a proposal should be focused on the terrific services your program will provide, but often, especially when you are responding to a government-sponsored Request for Proposals (RFP), the funder will want to know something about the organization submitting the proposal. Most funders will ask you to provide introductory material. Sometimes they want to know about the neighborhood in which you operate, and the people you serve.
Remember, when you submit a proposal, that you're entering a competition. You don't have to answer any questions that aren't asked. By the same token, you should answer all the questions that have been asked. But stay focused! You don't have to explain why it's critically important to fund early childhood programs if the funder is seeking to fund day care programs. In that situation, you'll want to focus on why your day care center, in your site in your neighborhood, should be the program that receives the funding. On the other hand, if you're applying to an organization that supports early childhood development, then you'd need to explain how funding a day care program like yours furthers the funder's goals.
The second part of the introductory material often requests information about your organization. This is also not a place to skip any questions. If the funder wants to know about your internal financial controls, find out about them and write a sentence or two describing them. Don't hesitate to boast about your experience, but don't exaggerate it either. Make sure you supply context for any claims. If you're providing statistics, be sure to provide both the percentage and the total number you served. Identify the source of any data you provide.
As always when writing responses or applications, stay within the prescribed page limits. It's always better to be under--if you are, that gives you more space to write your program description. (And that will be the next post in this series.)
Tuesday
A reporter in Baghdad lists his stats
Michael Schmidt, a young NY Times reporter who covered the Michael Vick and Roger Clemens legal entanglements, and broke the nasty story about US sports agents' links to Central American academies that develop young players for the majors, is now stationed in Baghdad. His blog about the life of a war reporter is well worth reading. Mike's most recent post counts up how he's spent his days.
Sunday
NY Times on visualizing data
There's an interesting column, "When the Data Struts Its Stuff," by Natasha Singer in today's NY Times about how important it is to look behind the pictures and think about the data being represented. At the same time, it's exciting to see how central visual analytics is becoming to data analysis! Well worth making it one of your 20 articles for the month if you're not a NY Times subscriber.
Friday
Overdiagnosed: What do we do when we're the one
Update, April 7 - Gail Collins' NY Times Op-Ed describes her head-spinning reaction to changes in recommendations about hormone therapy for menopausal women . . . you really have to be your own advocate.
Back in January, I mentioned the NY Times review of Overdiagnosed: Making People Sick in the Pursuit of Health by Drs. H. Gilbert Welch, Lisa M. Schwartz, and Steven Woloshin (Beacon Press, 2011, 191 pages). Now I've had a chance to read the book, and it is fascinating.
As a metrics person, I have often argued that it is important to look at people in large groups. You can accurately predict what will happen to some subset of them: i.e., when the City of New York reports that 63% of children in its public schools will graduate within four years of entry, I believe them. (Well, sort of. See this New York Times story about some recent recalculations.) The trouble is, there are rarely ways to predict ahead of time which person will end up in which group. Generally, more children from higher-income families graduate within four years, but not all of them do. And you can't tell, at the beginning of ninth grade, which kid will be in which group. This is one kind of problem when we're considering social or public services. It's another entirely when we're talking about medical screening and health care.
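A quick simulation makes the group-versus-individual distinction concrete. The 63% figure is the one cited above; the cohort size and the code itself are my own illustrative assumptions, not anything from the Times or the city:

```python
import random

random.seed(42)   # fixed seed so the sketch is reproducible
rate = 0.63       # the citywide four-year graduation rate cited above
n = 10_000        # hypothetical cohort size (my assumption)

# Simulate one cohort: each student graduates with probability `rate`.
graduated = sum(random.random() < rate for _ in range(n))
observed_rate = graduated / n

# Group-level prediction is reliable: the simulated rate lands very
# close to 63%. But the best individual-level prediction -- guessing
# "graduates" for every student -- is still wrong about 37% of the time.
individual_error = 1 - rate
```

The point of the sketch: the aggregate rate is a trustworthy prediction about the cohort as a whole, while telling you almost nothing about any particular ninth grader.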
Welch and his co-authors make a compelling case that new and improved diagnostic tests -- more sensitive blood tests, better scanners, decoding the human genome -- have meant that we are able to see more abnormalities and treat emerging conditions earlier. Sometimes, as in the case of high blood pressure, it's a good idea to treat an asymptomatic patient. But, Welch and Co. argue, in many cases we're treating people with mild or no symptoms. We've made an assumption that if treating early is good, then treating earlier must be better. The problem arises because if we've found an abnormality early -- before it's symptomatic -- there's no way of telling whether it will be nothing, or it will kill you.
As they go on to show in a number of contexts, including MRIs, CT scans, PSA testing, mammograms, and genetic screening, the more we test, or see on a scan, the more likely we are to find an abnormality. And when we see an abnormality, we are, for a variety of societal and cultural reasons, likely to treat it. But we forget, they point out, that the treatment may be worse than the abnormality. Or it may have side effects that set off a cascade of poor outcomes. Or, and this is hard to take in, we may have treated a cancer that was never going to grow, or not fast enough to kill the victim. Or, and this is even harder, we may be diagnosing a fast-moving, aggressive cancer earlier when the diagnosis changes nothing: the cancer kills the victim in the same amount of time, or perhaps just a little less, than it would have without any treatment.
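The authors' argument about screening is, at bottom, an argument about base rates. A back-of-the-envelope Bayes' rule calculation, using numbers I've invented purely for illustration (they are not from the book), shows why a positive screen for a rare condition is so often a false alarm:

```python
# All three numbers below are hypothetical, chosen only to illustrate
# the base-rate effect -- they are not from Welch and his co-authors.
prevalence = 0.005          # 0.5% of those screened have a dangerous cancer
sensitivity = 0.90          # the test catches 90% of dangerous cancers
false_positive_rate = 0.05  # 5% of healthy people still get a worrying result

# Bayes' rule: among all positive results, what fraction are truly dangerous?
true_positives = prevalence * sensitivity
false_positives = (1 - prevalence) * false_positive_rate
ppv = true_positives / (true_positives + false_positives)

# With these numbers, fewer than 1 in 10 positive screens reflects a
# dangerous cancer; the rest are the abnormalities the book warns about.
```

Change `prevalence` and the picture flips: the same test applied to a high-risk group yields a far more trustworthy positive result, which is exactly the kind of context the book keeps insisting on.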
Who wants to take the chance, when a screen turns up a possible cancer, that it's slow growing? We are acculturated to fight cancer with all we can. But the stories Welch and his co-authors provide, of harm done by what may have been unnecessary treatment, are compelling. As are the mounting costs of health care. What's the thinking person to do? I don't actually want to be treated for a cancer that's not going to kill me, but it's hard to turn down treatment against medical advice. Fortunately, Welch, Schwartz, and Woloshin offer some complex rules, and some common-sense ones, to bear in mind:
- To grossly oversimplify, cancer is a product of genes and environment. You can't control your genes, or even your entire environment, but you can control some important aspects of it. Don't smoke. Eat well. Get exercise.
- Symptoms are important. They are a signal something is wrong and are the best predictor of serious problems. So in the absence of symptoms, stay skeptical. Keep asking questions.
- Remember that while early is good, earlier is not necessarily better, and there may be costs and risks to treatment.