Tools for Smart Thinking

This blog entry was written by Nick Desbarats of Perceptual Edge.

In recent decades, one of the best-supported findings from research in various sub-disciplines of psychology, philosophy, and economics is that we all commit elementary reasoning errors on an alarmingly regular basis. We attribute the actions of others to their fundamental personalities and values, but our own actions to the circumstances in which we find ourselves in the moment. We draw highly confident conclusions based on tiny scraps of information. We conflate correlation with causation. We see patterns where none exist, and miss very obvious ones that don’t fit with our assumptions about how the world works.

Even “expert reasoners” such as trained statisticians, logicians, and economists routinely make basic logical missteps, particularly when confronted with problems that were rare or non-existent until a few centuries ago, such as those involving statistics, evidence, and quantified probabilities. Our brains simply haven’t had time to evolve to think about these new types of problems intuitively, and we’re paying a high price for this evolutionary lag. The consequences of mistakes, such as placing anecdotal experience above the results of controlled experiments, range from annoying to horrific. In fields such as medicine and foreign policy, such mistakes have certainly cost millions of lives and, when reasoning about contemporary problems such as climate change, the stakes may be even higher.

As people who analyze data as part of our jobs or passions (or, ideally, both), we have perhaps more opportunities than most to make such reasoning errors, since we so frequently work with large data sets, statistics, quantitative relationships, and other concepts and entities that our brains haven’t yet evolved to process intuitively.

In his wonderful 2015 book, Mindware: Tools for Smart Thinking, Richard Nisbett uses more reserved language than I have above, pitching this “thinking manual” mainly as a guide to help individuals make better decisions or, at least, commit fewer reasoning errors in their day-to-day lives. I think that this undersells the importance of the concepts in this book, but the more personal appeal probably means that this crucial book will be read by more people, so Nisbett’s misplaced humility can be forgiven.

Mindware

Mindware consists of roughly 100 “smart thinking” concepts, drawn from a variety of disciplines. Nisbett includes only concepts that can be easily taught and understood, and that are useful in situations that arise frequently in modern, everyday life. “Summing up” sections at the end of each chapter usefully recap key concepts to increase retention. Although Nisbett is a psychologist, he draws heavily on fields such as statistics, microeconomics, epistemology, and Eastern dialectical reasoning, in addition to psychological research areas such as cognitive biases, behavioral economics, and positive psychology.

The resulting “greatest hits” of reasoning tools is an eclectic but extremely practical collection, covering concepts as varied as the sunk cost fallacy, confirmation bias, the law of large numbers, the endowment effect, and multiple regression analysis, among many others. For anyone who’s not yet familiar with most of these terms, however, Mindware may not be the gentlest introduction, and first tackling a few books by Malcolm Gladwell, the Heath brothers, or Jonah Lehrer (despite the latter’s unfortunate plagiarism infractions) may be a more accessible way in. Readers of Daniel Kahneman, Dan Ariely, or Gerd Gigerenzer will find themselves in familiar territory fairly often, but will still almost certainly come away with valuable new “tools for smart thinking,” as I did.

Being aware of the nature and prevalence of reasoning mistakes doesn’t guarantee that we won’t make them ourselves, however, and Nisbett admits that he catches himself making them with disquieting regularity. He cites research suggesting that knowledge of thinking errors does, nonetheless, reduce the risk of committing them. Possibly more importantly, it seems clear that knowledge of these errors makes it considerably more likely that we’ll spot them when they’re committed by others, and that we’ll be better equipped to discuss and address them when we see them. Because those others are so often high-profile journalists, politicians, domain experts, and captains of industry, this knowledge has the potential to make a big difference in the world, and Mindware should be on as many personal and academic reading lists as possible.

Nick Desbarats

5 Comments on “Tools for Smart Thinking”


By Berry. April 27th, 2016 at 8:53 am

The page 23 preview says “female-named hurricanes don’t seem as dangerous as male-named ones, so people take fewer precautions.”
Andrew Gelman, as I recall, doubted that statement: https://www.washingtonpost.com/news/monkey-cage/wp/2014/06/05/hurricanes-vs-himmicanes/
http://andrewgelman.com/2014/06/06/hurricanes-vs-himmicanes/

Do you feel like a lot of disputed results are presented as exciting and, implicitly, as being true and generalizable?
If so, (how) would that change your mind about the book?

By Stephen Few. April 27th, 2016 at 10:32 am

Berry,

Even though I didn’t write the book review, I’ll weigh in with an opinion. The partial sentence that you quoted from page 23 of Nisbett’s book Mindware is one of several examples that he cites to illustrate his claim that humans are often subject to irrational sources of influence. This particular study claimed that hurricanes with female names resulted in more harm to humans because we tend to perceive them as less dangerous and therefore take fewer precautions than we do for hurricanes with male names. As you point out, the statistician Andrew Gelman read the study and found that it lacked sufficient data to support this claim statistically. Whether the claim is true or not, we cannot say, but we can say that it is not statistically valid. In this case, however, many well-designed research studies exist to back Nisbett’s claim that we are often influenced by irrational factors, so the failure of this study about hurricanes to achieve statistical significance does not negate the claim. In other words, the fact that Nisbett cited this study does not reduce his book’s overall merits.

You do make a good point, however. Authors, speakers, academics, and so on should vet research studies carefully before citing them. Unfortunately, this practice is often neglected because it takes time and effort to review research papers carefully. I suspect that Nisbett did not read the original study regarding hurricanes with sufficient care. Had he done so, he would probably have refrained from citing it in his book.

By Nick Desbarats. April 27th, 2016 at 2:09 pm

Thanks for the question, Berry.

I think that it’s unfortunately pretty clear that researchers sometimes try to “spice up” their findings by drawing potentially unwarranted conclusions, and by downplaying opposing views, study limitations, or the narrowness of the applicability of their findings. Presumably, they do so in an effort to get the attention of the mainstream media, which in turn makes it easier to get subsequent research grants, book deals, etc. Equally unfortunately, the media are, of course, often all too happy to accept these findings unquestioningly and to layer on more sensationalism in order to have a “strong angle” on the story, often adding even more unwarranted certainty for the same reason.

In terms of how this affects my opinion of Nisbett’s book, I’d largely echo Steve’s comments. Mindware is generally well-researched and, as Steve pointed out, the validity or invalidity of this particular study doesn’t really alter the soundness of the underlying point that Nisbett was making by citing it.

I also obviously agree with Steve that speakers and writers who cite research findings should subject those findings to reasonable scrutiny, although I think that this raises the question of what constitutes “reasonable scrutiny.” The flaws in the hurricane names study appear to be quite real, but were presumably subtle enough not to be detected by the original peer reviewers (although the peer review process isn’t what it used to be in many fields). If Nisbett had devoted that level of scrutiny to every study that he referenced, however, he might never have finished his book. As such, unless they personally replicate every study they cite, writers and speakers must make a trade-off between vetting the research that they cite and being productive and, as far as I can tell, Nisbett made that trade-off reasonably throughout the book. That being said, some quick Googling to see whether others had found flaws in the hurricane names study probably falls within any useful definition of “reasonable scrutiny,” so Nisbett might have been a bit more thorough there.

By Dale Lehman. April 28th, 2016 at 4:20 am

I think you (Stephen and Nick) are letting Nisbett off the hook too easily, in particular with the claim that if the errors were serious enough, they would presumably have been found by the peer reviewers. Gelman and others have been seriously questioning the peer review process, and it is, in my opinion, fatally flawed. Claims that authors can’t be expected to check and replicate the validity of everything they cite, while true, are too lenient, far too lenient.

In this particular case, the hurricanes and himmicanes study, it is not just a matter of seeing how much criticism the study provoked. The claims and methodology were sensationalist and lacking in merit. It is not that they are necessarily wrong (as Gelman points out), but that the study could never have been able to demonstrate its claims. Too much research is ill-conceived from the start: the data are too noisy and the researchers’ choices of methodology too varied for any useful knowledge to be found. And, yes, it plays to the media, who are all too ready to pick up the story.

I would expect a book subtitled “Tools for Smart Thinking” to be more careful and insightful. I have not read the book, but I am likely to pass this one up.

By Nick Desbarats. April 28th, 2016 at 9:57 am

Thanks for the comment, Dale.

Steve and I have expressed concerns about the often woeful state of the peer review process, and I mentioned this concern in my previous comment on this post. Neither of us has ever suggested that it should be relied upon, and I only mentioned it to point out that the issues with the hurricane names study were not so obvious that they would have been spotted by a peer reviewer (implying that only obvious problems stand a decent, though by no means certain, chance of being spotted). We also agree that many studies are poorly designed, and Steve has pointed out many specific methodology problems in data visualization research in particular (e.g., http://www.perceptualedge.com/blog/?p=2313).

To write off the entire book because of this oversight would, however, be throwing the baby out with the bathwater, I think. On the scale of research negligence, this was a minor offense. There was no intent to deceive, and Nisbett wasn’t trying to prop up an otherwise weak point. That’s not to say that I think his oversight was OK, but rather that I have far more concern with the many other writers out there who take considerably greater liberties and apply far less scrutiny to the research that they cite.

Also, while I agree with you that writers need to thoroughly vet any research that they cite, in practice, I strongly suspect that there are few books, if any, that would clear the bar that you’ve set. If one were to subject every underlying study cited in any given book to a high degree of scrutiny, one would probably find that at least one of the cited studies was flawed (and probably more than one, given the preponderance of published, peer-reviewed studies that are flawed). As such, one’s reading list of “credible” books could be very short indeed.

Does that mean that we, as readers, should be any less critical or any less vigilant in spotting potentially problematic scientific claims? Absolutely not. Because we live in an imperfect world, however, the punishment must fit the crime when such problems are detected, and I think this one counts as a misdemeanor and not a capital crime.
