Category Archives: Evidence

Recommended: ‘Investigative Storytelling Gone Awry’

The NPR ombudsman, Edward Schumacher-Matos, produced an extraordinary investigation-of-an-investigation earlier this year:

My finding is that the series was deeply flawed and should not have been aired as it was.

The series committed five sins that violate NPR’s code of standards and ethics. They were:

  1. No proof for its main allegations of wrongdoing;
  2. Unfair tone in communicating these unproven allegations;
  3. Factual errors, shaky anecdotes and misleading use of data by quietly switching what was being measured;
  4. Incomplete reporting and lack of critical context;
  5. No response from the state on many key points.

I was convinced after the first couple of chapters that Schumacher-Matos was worth reading for those who are invested in the issue covered by the original report. I found his writing to be fair, cautious, and reflective.

To me, his report is a model of critical thinking applied to journalism, and I was disappointed that NPR responded with only a defensive memo.

The argumentative power of data and narrative in journalism

Nick Diakopoulos, summarizing discussion at the Computation + Journalism Symposium:

This brings us back to the raw cultural difference of the value of “theory” or “model” (i.e. understanding the central tendency and abstraction of data) versus the “anecdote” or “outlier” that is so important to journalists feeling they’ve got a good story to tell. We may be just at the beginning of understanding the benefits and tradeoffs of the narrative-dominant frame versus the analytic-dominant frame, but it’s certain that the cultural dilemma of how news communication is approached underscores a central challenge in integrating computation and journalism.

Read “Finding tools vs. making tools: Discovering common ground between computer science and journalism” at Nieman Lab.

The Niall Ferguson affair: Beyond fact-checkers to argument-checkers

The reaction to Niall Ferguson’s Newsweek attack against Barack Obama interested me as I tried to fill in the blank: It’s important that Newsweek published a poorly argued article because ______________________.

Some writers, taking the journalistic angle, focused on the fact that Newsweek employs no fact-checkers:

For these writers, the blank is filled: “it illustrates Newsweek’s disrespect for facts and shows how the magazine has fallen into disrepute.” The Ferguson article is shameful because Newsweek ignored basic journalistic standards.

I read some of the web’s debunking of Ferguson:

What I read in those articles was mostly a calling-out of Ferguson’s conflations, irrelevant claims, and misleading transitions, and less often of factual inaccuracies. More often it was rebuttal along these lines, from O’Brien:

“The most recent estimate for the difference between the net present value of federal government liabilities and the net present value of future federal revenues–what economist Larry Kotlikoff calls the true ‘fiscal gap’–is $222 trillion.”

That’s a lot of trillions! But if our fiscal gap is “really” this many trillions, why can we borrow for 30 years for a real rate of 0.64 percent? It’s because this number is meaningless. First of all, it seems to project many decades of growth figures and budget decisions that we simply don’t know will happen. It assumes the Bush tax cuts never ever expire and that the healthcare cost curve never ever bends. This is like projecting, in 1942, that the Empire of Japan will rule the entire Asian continent for 70 years based on a few years of battle outcomes. It’s an interesting prediction, but it’s not an empirical vision of the future.

or this, from Fallows:

He presents an ominous chart showing that, if Obama is reelected, China’s economy might become bigger than America’s around the time he leaves office … [the chart] … What this chart demonstrates is not “a nation losing ground” but the reality that China has four times as many people as America does. When its overall economy exceeds ours, its per capita output will be only one-quarter as great. A historian would presumably know that the conscious strategy of every president from Richard Nixon through Barack Obama has been to encourage rather than thwart China’s continued development, on the reasoning that a poor and festering China would be more dangerous to the United States than one that is becoming richer.

O’Brien and Fallows present rebuttals to poorly justified claims. But they do not write that Ferguson’s “$222 trillion” or “China’s economy” data are factually wrong. They write that the data are irrelevant, misleading, unconvincing, etc. That requires a certain amount of training in not just economics, but argument.

Would a “fact-checker” have noticed the same thing, or would they have been content with Ferguson’s provision of a source for the figure and the chart?

Ta-Nehisi Coates writes that “a culture of fact-checking, of honesty, is as important as the actual fact-checking”:

In my experience, seeing your name on the cover of a magazine will take you far in the journey toward believing your own bullshit. It is human to do so, and fact-checkers serve as a valuable check to prevent writers from lapsing into the kind of arrogant laziness which breeds plagiarism and the manufacture of facts.

But if there weren’t many “facts” out of sorts — as opposed to out-of-sorts arguments — then I don’t know how much the culture of fact-checking would have helped. What would have been more important is a culture that encourages you to have your reasons and evidence straight before opening your mouth, and I don’t know whether that’s the same thing. Maybe it is.

Then again, would a high-flying “celebrity” academic have been affected by such a culture of honesty? Might he “believe [at least some of] his own bullshit” when, as Stephen Marche noted, he takes $50,000 to $75,000 per speaking appearance?

Perhaps Ferguson’s view of himself wouldn’t matter. Suppose Newsweek had fact-checkers. Fallows supposes that they might have approached his work with a different lens than they would have applied to Coates or some other journalist:

Scholars are supposed to be different from mere pamphleteers and journalists. We give the judgments of academics — like those of doctors, scientists, renowned jurists, etc. — extra weight because we assume that they have considered evidence, precedent, and probabilities more carefully before offering conclusions. Think: E.O. Wilson on ants and ecological patterns more broadly.

He almost — but only almost — makes the case that even fact-checkers wouldn’t have helped. Even if they had read the story, might they have made the same assumption that Fallows suggests and trodden more softly around Ferguson’s work?

Finally, on the subject of intellectuals as valuable “brands,” and on the subject of a “higher standard” for scholars and academics, see Daniel W. Drezner’s post “Intellectual power and responsibility in an age of superstars”.

Thoughts about Wahl-Jorgensen’s ‘Strategic ritual of emotionality’

A few weeks ago, I sketched a post in response to Karin Wahl-Jorgensen’s article “The strategic ritual of emotionality”. But it seems I got delete-happy and erased the file.

I wanted to post about the article anyway because I found it stimulating. Here are some lightly edited notes and questions from my initial reading of the paper.

Replacing objectivity’s legal safeguard

I am less familiar with Gaye Tuchman’s work than I should be, but Wahl-Jorgensen’s article interested me in Tuchman’s work even more.

Wahl-Jorgensen summarizes Tuchman as arguing that reporters embraced traditional objectivity, with its standardized process, in part because it helped protect them from errors that could lead to expensive libel suits.

One position I recently took is that reporters who want to abandon traditional objectivity can use the principle of charity as part of a replacement ethical framework. The principle of charity, broadly stated, is that when we criticize an argument we have an obligation to represent that argument in its strongest form.

If Tuchman is correct that reporters have embraced objectivity in part for its legal protections, then it seems to me fair and practical for reporters to ask whether the principle of charity would provide them similar shelter. I couldn’t answer that question today. But I have no interest in seeing a chilling effect follow from use of the principle. I’d like to research the legal literature to see whether it has addressed objectivity in the context of libel laws.

Where is the evidence?

Regina Lawrence and Matthew Schafer recently found that journalists who labeled Sarah Palin’s “death panels” claim false surprisingly often did so without attribution.

Similarly, Wahl-Jorgensen found evidence that journalists justified claims, emotional or otherwise, without often reaching for quotations as evidence, which, she says, the “objective” style might lead you to expect. Instead, reporters tended to rely on their epistemic authority: their saying it was enough to justify it.

The two samples are not necessarily comparable: Wahl-Jorgensen studied Pulitzer-winning stories, while Lawrence and Schafer examined general-purpose stories. Still, what is going on here?

How do journalists argue using documents and data?

Professor C.W. Anderson recently presented his latest research project at a colloquium at UNC. The session was recorded (and embedded below), and after watching it, I think that the project is pretty interesting from the argument-in-journalism perspective.

Anderson’s project, as I understand it, is to place historical context around the way journalists use and have used documents to report.

Scholars of journalism have long examined how interviews and first-person witnessing of events become the building blocks of news stories, and how journalists’ use of interviews and witnessing has changed over the decades.

According to Anderson, for any historical period, we can ask three questions about these journalistic processes: how journalists collect interviews (or witnesses, or data); how journalists analyze them; and how the results are presented to readers and users.

Now, Anderson wants to study the use of documents and data journalism in a similar way: How are documents and data collected, analyzed, and the results eventually presented to readers and users?

The question of presentation is key. If and when journalists present an argument (or presented an argument, historically) that has documents or data journalism as a pillar, what is done with them to try to make a persuasive case?

With knowledge in hand of how journalists use data and documents to argue conclusions, using the analytical tools of argumentation, informal logic, and critical thinking is the easy next step. How strong are those arguments? Are they stronger or weaker than non-data-driven arguments? What is missing from them? And so on.

Critically thinking about reporters’ claims with Truth Goggles

“Truth Goggles” is a project by MIT’s Dan Schultz that provides a fact-checking layer on top of news articles.

Andrew Phelps described Truth Goggles at Nieman Lab last year:

Schultz is building what he calls truth goggles — not actual magical eyewear, alas, but software that flags suspicious claims in news articles and helps readers determine their truthiness. It’s possible because of a novel arrangement: Schultz struck a deal with fact-checker PolitiFact for access to its private APIs.

Phelps recently revisited Schultz’s project, which is in the middle of a public user study. Phelps took the study and reported on his experience:

Sure, my internal skepticism detector usually starts beeping whenever I see quotation marks, but this exercise also forced me to consider claims in a reporter’s copy — e.g., “About 2.5 million young adults from age 19 to 25 attained health coverage as a result of the Affordable Care Act…” — the kind of information I’m more likely to assume is true.

After using the goggles for a while, it was impossible to read articles without a skepticism bordering on …

As I wrote in my thesis, academic work on critical thinking and journalism usually suggests that students use their skills on opinion pieces, not regular news stories. But my experience in my thesis suggested to me that those news stories often lack sufficient evidence for their claims, too.

So Phelps’s remark that Truth Goggles caused him to “consider claims in a reporter’s copy” won me over to the project. I took the Truth Goggles study myself this weekend, and I had a similar experience. Why not take the study yourself?