If you’ve read my last post, you know that a good model for human behavior is the moist robot: we’re nothing more than mere automatons, except that we’re wet and squishy. This post is about another common behavior among us squishy automatons: consistency and commitment.
Influence by Cialdini describes this tendency in detail as one of the six big principles by which others manipulate our squishy brains into doing things we otherwise wouldn’t do. The funny thing about these psychological tendencies is that we’re mostly blind to them, psychology researchers included! A recent blog post by Andrew Gelman discusses how scientific research in (certain parts of) the psychology community is quite lacking, particularly from a statistical point of view:
The short story is that Cuddy, Norton, and Fiske made a bunch of data errors—which is too bad, but such things happen—and then when the errors were pointed out to them, they refused to reconsider anything. Their substantive theory is so open-ended that it can explain just about any result, any interaction in any direction.
The interesting thing is that one of the main authors, Fiske, is a well-respected researcher in her field. Gelman is commenting on an open letter she wrote criticizing social media and all the negative comments her research has received. She asserts that such criticism should happen in a “moderated” venue (read: a research journal) where an editor can filter inappropriate comments. While that sounds good in theory, in practice these psychology researchers are falling for the consistency-and-commitment fallacy. They implicitly believe that because their research is published and they are well respected, it has to be good (or at least not bad), even in the face of obvious errors. From Gelman’s post (emphasis mine):
She’s implicitly following what I’ve sometimes called the research incumbency rule: that, once an article is published in some approved venue, it should be taken as truth. I’ve written elsewhere on my problems with this attitude—in short, (a) many published papers are clearly in error, which can often be seen just by internal examination of the claims and which becomes even clearer following unsuccessful replication, and (b) publication itself is such a crapshoot that it’s a statistical error to draw a bright line between published and unpublished work.
Ironically, from a psychological point of view it all makes sense. If your worldview differs from reality, which is easier to believe: the nice rose-tinted glasses of your existing worldview, or the unpleasant sting of cognitive dissonance? From Gelman’s post:
If you’d been deeply invested in the old system, it must be pretty upsetting to think about change. Fiske is in the position of someone who owns stock in a failing enterprise, so no wonder she wants to talk it up. The analogy’s not perfect, though, because there’s no one for her to sell her shares to. What Fiske should really do is cut her losses, admit that she and her colleagues were making a lot of mistakes, and move on. She’s got tenure and she’s got the keys to PPNAS, so she could do it. Short term, though, I guess it’s a lot more comfortable for her to rant about replication terrorists and all that.
Funny enough, this is not the only ironic situation I’ve come across. If you’ve hung around enough programmers and data scientists, I won’t have to tell you that they’re all about “logical reasoning” and “rational thought” (at least they say they are). But from what I’ve seen, they’re some of the most wet and squishy among us all. Just ask one what’s wrong with X, where X is any popular programming language, and be ready to sit through a half-hour rant heavily rooted in opinion rather than fact.
There are worse things than highly opinionated people, though. The worst-case scenario is when the consistency and commitment tendency blocks a person from seeing their own mistakes. From Gelman’s post:
We learn from our mistakes, but only if we recognize that they are mistakes. Debugging is a collaborative process. If you approve some code and I find a bug in it, I’m not an adversary, I’m a collaborator. If you try to paint me as an “adversary” in order to avoid having to correct the bug, that’s your problem.
Being open to collaborators who point out our flaws (hopefully in a congenial way) is really the first step in learning. Think about all the teachers you’ve had: probably half of the learning comes from them pointing out your mistakes (the other half from them teaching you a better way to solve the problem). Whether you’re a toddler or a famous researcher, everyone needs help finding their mistakes, and, more importantly, everyone can learn from them. It’s easier said than done, though, because, after all, we’re just moist robots.