Quantitative exceptionalism

Image courtesy of Wikimedia Commons

At its heart, the field of statistics deals with determining what inferences can be drawn from data. Causality, bias, significance, and experimental reproducibility are its lifeblood, and one doesn’t have to wander too many pages into a standard introductory statistics textbook before encountering these issues.

Most readers of this blog will have little trouble coming up with real-world situations in which the improper application of statistics leads to spurious conclusions. As a very simple example, if the average height of a population is estimated on the basis of a survey, and younger people (who tend to be shorter) have a lower response rate, the result may overestimate the average height.
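The height example is easy to see in a quick simulation. The numbers below (group means, response rates) are made up purely for illustration; the point is only that differential non-response skews the estimate in a predictable direction.

```python
import random

random.seed(0)

# Hypothetical population: 40% "young" with shorter average height.
# All parameters are illustrative, not real survey data.
population = []
for _ in range(100_000):
    young = random.random() < 0.4
    mean_height = 165 if young else 175   # cm; younger group shorter on average
    population.append((young, random.gauss(mean_height, 7)))

true_mean = sum(h for _, h in population) / len(population)

# Survey with differential non-response: young people respond less often.
respondents = [h for young, h in population
               if random.random() < (0.3 if young else 0.7)]
survey_mean = sum(respondents) / len(respondents)

print(f"true mean:   {true_mean:.1f} cm")
print(f"survey mean: {survey_mean:.1f} cm")  # skews tall: young are underrepresented
```

With these (invented) rates, young respondents make up far less of the sample than of the population, so the survey mean lands noticeably above the true mean.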

There is a substantial pop culture literature on such examples and how to avoid them (for instance, check out Darrell Huff’s classic How to Lie with Statistics or Joel Best’s Damned Lies and Statistics series). This phenomenon goes beyond statistics to all situations involving quantitative information or reasoning. Numbers and equations are apparently intoxicating to the uninitiated, like the narcotic lotus flowers of Greek mythology that reduced Odysseus’ crew to a state of peaceful apathy and nearly caused them to lose their way.

Too often, the curiosity and skepticism demonstrated by otherwise intelligent humans comes to a grinding halt when numbers are involved.

“Quantitative exceptionalism” is the widespread and often harmful belief that insights reached via quantitative means form an exceptional class. The term cuts both ways. Quantitative arguments are often assumed a priori to be of high quality, perhaps owing to their relative inaccessibility, and those who employ them to be erudite. Yet humans are by nature fallible, and what we do with numbers is subject to human error; people nonetheless trust quantitative arguments blindly. Sometimes the errors are subtle; other times, not so much. The result is lower standards for scientific and mathematical rigor, with immense downstream impact.

Quantitative exceptionalism is widespread in academic, business, political, and popular discourse. In many scholarly disciplines, numerical data and quantitative arguments are given less scrutiny than their qualitative counterparts. In business and government, decisions are based on questionable calculations at an ever-accelerating rate, fueled by a big data revolution that is a lot heavier on technology than it is on basic science. Educators in STEM fields could do more to encourage students to interrogate quantitative claims. Journalists wielding numbers and infographics could question them more critically. Politicians…don’t get me started.

So don’t judge a number by its cover. Be curious. Be a skeptic. Avoid the lotus.

(For curious readers, the coinage “quantitative exceptionalism” is inspired by the terminology “American exceptionalism” and MIT linguist Michel DeGraff’s “Creole exceptionalism” [pdf].)

We’ll be talking a lot more about quantitative exceptionalism on this blog. In the meantime, share your thoughts or examples you’ve witnessed in the comments.


2 Responses to “Quantitative exceptionalism”

  1. eat chocolate –> get wicked smaht? : Bittersweet Notes on October 19th, 2012

    [...] far too little skepticism – an excellent example of a phenomenon I’ve recently started calling quantitative exceptionalism. A comment cardiologist Sanjay Kaul provided to CardioBrief sums up the dangers well: “This [...]

  2. Nate Silver, Probabilistic Celebrity : data bitten on November 9th, 2012

    [...] Quantitative exceptionalism [...]
