If you spend any time on Twitter, LinkedIn, or Facebook, odds are you’ve seen people railing about biases. The term bias is thrown around a lot, often in reference to errors in decision-making (e.g., confirmation bias) or observed differences among groups or phenomena (e.g., gender bias in STEM fields).
Bias has now acquired a clearly derogatory definition, as can be seen by comparing Merriam-Webster’s current definition with those it offered in 1828 or even 1913. The original meaning was “a leaning of the mind” so as “to lean or incline from a state of indifference.” Though Herbert Spencer remarked in 1873 that biases can influence our beliefs far more than evidence does, it wouldn’t be accurate to conclude that biases themselves are bad. They simply represent a predisposition to favor one conclusion over others.
Some biases are hardwired into us from evolution, as I discussed in reference to error management theory. Other biases are learned through socialization (e.g., a tendency to accept the religious beliefs of one’s upbringing) or direct experience (e.g., being more accepting of medical advice from a doctor than an auto mechanic). Still other biases are more idiosyncratic in nature based on our unique combination of genetics and experience (e.g., generally having a more favorable view of pizza than of salad).
Biases evolved to allow us to make satisficing choices in an efficient way. Giving equal consideration to all possibilities is often a cognitively demanding process. Biases make decision-making easier by giving us a starting point, an initial prediction, or a “leaning of the mind” regarding which choice to make. We anchor our original judgment in the biased conclusion and then adjust it based on supplemental information.
In their discussion of social prediction, Bach and Schenke (2017) contended that any time we enter a given situation, we make a set of predictions based on our personality and past experiences. Situation-specific information is then used to test these initial predictions. In some cases, situation-specific information confirms our prediction (causing us to be more confident in our judgment), and in others, it contradicts our prediction (causing us to revise our judgment).
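Purely as an illustrative analogy (not a model taken from Bach and Schenke’s paper), this predict-then-revise cycle resembles a Bayesian update: a prior “leaning” is combined with situation-specific evidence, and confidence rises when the evidence confirms the prediction or falls when it contradicts it. All numbers below are made up for illustration.

```python
def update_belief(prior, likelihood_if_true, likelihood_if_false):
    """Bayes' rule: revise a prior 'leaning' after new evidence.

    prior: initial probability that our prediction is correct.
    likelihood_if_true / likelihood_if_false: probability of observing
    the evidence when the prediction is true / false (illustrative values).
    """
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator

# A biased starting point: we lean 70% toward our initial prediction.
prior = 0.70

# Confirming evidence (more likely if the prediction is true) raises confidence.
confirmed = update_belief(prior, likelihood_if_true=0.8, likelihood_if_false=0.3)

# Contradicting evidence (more likely if the prediction is false) forces a revision.
contradicted = update_belief(prior, likelihood_if_true=0.2, likelihood_if_false=0.7)

print(round(confirmed, 2))     # rises above the 0.70 starting point
print(round(contradicted, 2))  # drops below the 0.70 starting point
```

The point of the sketch is simply that the same starting bias can end up strengthened or weakened depending on what the situation-specific information shows.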
In many cases, biases can be quite adaptive. For example, because I generally tend to favor pizza over salad, I don’t have to engage in an extensive process of reviewing the upsides and downsides of my choice. That doesn’t mean that I will always decide pizza sounds better than salad, but over time, that preference will win out much more often than not.
Obviously, whether I tend to favor pizza or salad is a largely meaningless bias, but biases like that operate regularly and make satisficing decisions possible. In many cases, these biases are based on legitimate evidence. For example, we would generally be advised to favor the medical recommendations of doctors over those of people with no medical training. This doesn’t mean we should automatically accept medical advice from a doctor, but it does mean that when faced with contradictory information from different sources, a bias in favor of medical sources over non-medical ones will serve us better in the long run than listening to medical advice from Gwyneth Paltrow. Thus, biases are quite adaptive when (1) they do not meaningfully impact decision quality or (2) there is evidence to support them. Biases, though, can also be maladaptive.
Biases are typically maladaptive when (1) they cause us to rely on faulty information to reach a conclusion, or (2) they do not generalize to a given situation. In both cases, biases can cause us to reach a conclusion that, when subjected to a more thorough review of the evidence, is logically fallacious.
For example, many of us are biased by the similarity effect: that is, we tend to prefer people who are more like us to people who are less like us. When we encounter new people, actual similarity is difficult to assess because we know nothing about them beyond their physical appearance. This can bias us toward people who appear more like us (e.g., in age, race, sex, or body type) and against people who appear different from us.
We often assume (incorrectly) that someone who looks more like us will also have more in common with us. This is logically fallacious—just because someone looks like us doesn’t mean our personalities, interests, or worldview are also the same. Thus, when we are predisposed to rely on more superficial or easier-to-collect information to reach a conclusion, our biases may adversely affect decision quality.
We also make a similar mistake when we rely on a small number of our experiences (sometimes as few as one) to draw broader conclusions. The experiences on which we base those generalizations tend to be the ones associated with very strong negative or positive emotions, and of the two, negative experiences carry greater weight in our decision-making.
We see this play out a lot in product reviews on websites like Amazon. Based on one bad experience, individuals will conclude that a product is “the worst product they ever purchased” and that “other people shouldn’t waste their money.” While we can certainly appreciate the sentiment emotionally, logically the customer should be able to conclude that their bad experience may not generalize to other people’s experiences; it may simply be an outlier. But that’s not how negativity bias works.
The key takeaway here is that the “biases are bad” claim is more than a bit misleading. Bias is neither inherently good nor bad. Biases can clearly come with upsides—they improve decision-making efficiency. However, when the accuracy of the decision is of utmost importance, over-reliance on our initial judgment may cause us to seek out information to support it and neglect information that is inconsistent with it. This can create a confirmation bias that, when the stakes are high, may lead to disastrous outcomes.
At this point, it might be helpful to draw a distinction between biases and heuristics. Biases are the leanings we have when it comes to making judgments. You can think of them as the preferences, beliefs, or inclinations we bring to a given situation. Heuristics represent the mental shortcuts we use to solve problems, a sort of algorithm we rely on in different situations. You can think of them as if/then decision patterns we develop based on our experiences. In the context of Bach and Schenke (2017), biases would be the initial predictions, while heuristics influence the process we use to derive our final decision. My next post will discuss this distinction in more detail.
Notes:

1. How much more often may be influenced by the strength of the bias.
2. Though I’m sure that someone will likely dispute that claim.
3. Goop has been conspicuously absent from discussions of ways to cure or ward off COVID.
4. For example, a Kia and a Ford perform roughly the same on your criteria, so you decide to buy the Ford because of your bias toward American cars.
5. Even if a bias is grounded in evidence, the biased conclusion does not necessarily apply to a given situation.
6. Also called negativity bias.
7. It’s a bit like saying drugs are bad.