

AI and Unintended Consequences for Human Decision-Making

Allowing AI to be the default decision-maker can have unintended consequences.

Key points

  • AI can draw from a large volume of data to make recommendations for human decision-makers.
  • However, there’s often no transparency regarding how the recommendations are derived or what data are used to determine them.
  • In some situations, this can lead to a word-of-machine bias where AI recommendations are assumed to be valid.
  • This poses serious issues if such recommendations produce too many false positives or false negatives.
Source: Photo by Andrea De Santis on Unsplash

In my last post, I argued that AI has serious implications for choice architecture. At its most extreme, so-called hypernudging has the potential to continually adapt in ways that make it more difficult for human decision-makers to eschew the preferences of the choice architect.

But might AI, even when there is no obvious attempt to nudge, present undesired consequences for human decision-making?

Let’s start with a simple example. Many people rely on GPS to help them get from Point A to Point B, especially in unfamiliar areas. Google Maps, Waze, and other GPS apps that rely heavily on AI have made navigating such situations much less stressful.[1]

This would, therefore, be one of those situations where AI makes our lives easier. The AI points the way, but the human decision-maker still has control over the decisions themselves.[2] It seems, then, that AI-driven GPS is an application that is devoid of unintended consequences for human decision-making.[3]

Alas, this couldn’t be further from the truth. As it turns out, the more people rely on GPS, the more it erodes their internal navigation capabilities (Ishikawa, 2019). This means that when we rely on GPS to get from Point A to Point B, we may not encode the directions we followed to get there, which subsequently increases our reliance on GPS to make the same trip in the future (or to find our way back). GPS, therefore, can have a net negative effect on our directional capabilities.

When we learn a particular route, we tend to encode relevant landmarks, build a sequential series of steps based on the order of those landmarks, and then form a mental representation of the route (which Holly Taylor, a psychology professor at Tufts University, referred to as a survey representation).

When using GPS, we allocate our attentional resources to following its directions, which adversely affects our ability to follow the steps needed to create a mental map. There also seems to be little incentive to create such a map since the resources required to follow the AI-driven GPS are usually lower than those required for mental mapping. Therefore, people often default to the use of GPS instead of the alternative.

In this case, an argument can be made that the trade-offs of defaulting to GPS, especially in certain situations, are worth it. But the ease with which we default to relying on GPS has broader implications for human decision-making. Especially in the case of sophisticated, algorithm-driven technology, allowing that technology to become the default decision-maker can have significant unintended, and quite undesired, consequences.

For example, Meta (the parent company of Facebook) experienced a huge decline in revenue, leading it to lay off 60 contract staff.[4] Decision-makers relied on an algorithm to identify which 60 contract workers would lose their jobs (Encila, 2022; Fabino, 2022). It’s unclear whether the algorithm was intended to be the decision-maker,[5] but that is what ended up happening: the humans defaulted to the algorithm. Although doing so was cost-effective for the decision-makers (i.e., in terms of time, energy, and anxiety), it is difficult to say whether those savings translated into real benefits for the company itself.[6]

Perhaps one of the more concerning examples of human decision-makers defaulting to the algorithm was recently reported by Szalavitz (2021). Doctors, pharmacies, and hospitals in the U.S. rely on a system called NarxCare to “automatically identify a patient’s risk of misusing opioids” (para. 11). The system relies on machine-learning algorithms with access to huge amounts of data, including data from outside state drug registries, to produce some fancy visualizations along with a set of risk-indicator scores (see the Kansas Board of Pharmacy example).

From a decision-making standpoint, there’s a major problem. There’s no transparency regarding how the scores are derived or what data are used to determine them.[7] There’s also a dearth of evidence to support the validity of the scores themselves, with potentially problematic false-positive and false-negative rates.[8] Yet the scores are presented to human decision-makers in a way that conveys a high degree of confidence in those recommendations. It’s no wonder that many doctors and pharmacists simply default to the algorithm’s implied recommendations, often to the detriment of chronic pain patients.

Both examples highlight the potential for unintended consequences when AI tools end up becoming the default decision-maker. Shacklett (2020) argued that most companies don’t want to allow AI to make the actual decision. The problem, though, is that when such systems offer a straightforward recommendation (like a risk score or a suggested action), it becomes very easy for humans to develop a bias (i.e., a tendency) to accept that recommendation without any critical assessment of whether it is appropriate.

As Longoni and Cian (2020) detailed (though their research focused on consumer decision-making), this word-of-machine bias leads people to see the resulting decisions as grounded more in objective evidence (i.e., traditional sources of data) than in subjective input (e.g., attitudinal or experiential data).

Whether this applies to decision-making in other domains, such as management or medicine, remains to be seen, as this is an understudied phenomenon. However, humans tend to develop heuristics that conserve cognitive resources when making decisions, so if an AI’s recommendations are assumed to be valid, human decision-makers are likely to develop a heuristic rule of simply defaulting to them (regardless of how valid those recommendations actually are).

While such a heuristic likely has value for relatively simple decisions involving little to no uncertainty or error, that value decreases dramatically for more complex decisions, where greater uncertainty or error also means higher false-positive and false-negative rates. And that can have significant consequences for those affected by the decision.


Footnotes

[1] Additionally, a lot less pre-planning is often required before setting out on a trip.

[2] And even some control over constraints put on the directions, such as avoiding construction.

[3] Note that here I am strictly focused on unintended consequences for human decision-making related to the application’s primary use, not broader issues such as privacy threats, misuse of the technology for nefarious purposes, or other more general unintended consequences.

[4] I mentioned this as a footnote in my last post.

[5] It’s also unclear which information was used to make such determinations and how valid those recommendations were.

[6] Especially if it results in lawsuits over the terminations.

[7] There are also significant data privacy problems, but those are less relevant to the topic here.

[8] The best I could find was a study by Cochran et al. (2021), which Bamboo Health (the company that owns NarxCare) uses to justify the system’s validity. The problem, though, is that the scores had a 17.2% false-positive rate (patients classified as high risk when they aren’t) and a 13.4% false-negative rate (patients classified as low risk when they are actually high risk). Thus, there’s a lot of error in that system. Additionally, most of the evidence put out by Bamboo Health to support the use of NarxCare (e.g., this report) focuses on the observed decrease in prescription rates, with little actual attention to patient health outcomes.
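
To get a rough feel for what error rates like these can mean in practice, here is a minimal back-of-the-envelope sketch in Python. It assumes the standard definitions of those rates (17.2% of truly low-risk patients get flagged; 13.4% of truly high-risk patients get missed), and the 5% share of patients who are truly high risk is purely a hypothetical figure chosen for illustration; it does not come from Cochran et al. (2021) or Bamboo Health.

```python
# Hypothetical illustration only: the 5% prevalence is an assumption,
# not a figure from Cochran et al. (2021) or Bamboo Health.
false_positive_rate = 0.172   # truly low-risk patients incorrectly flagged as high risk
false_negative_rate = 0.134   # truly high-risk patients incorrectly flagged as low risk
prevalence = 0.05             # assumed share of patients who are truly high risk

patients = 10_000
truly_high_risk = patients * prevalence            # 500 patients
truly_low_risk = patients - truly_high_risk        # 9,500 patients

true_positives = truly_high_risk * (1 - false_negative_rate)   # ~433 correctly flagged
false_positives = truly_low_risk * false_positive_rate         # ~1,634 incorrectly flagged

flagged = true_positives + false_positives
false_alarm_share = false_positives / flagged

print(f"Patients flagged as high risk: {flagged:.0f}")
print(f"Share of those flags that are false alarms: {false_alarm_share:.0%}")
```

With these hypothetical numbers, roughly four out of every five patients flagged as high risk would not actually be high risk. The exact figures depend entirely on the assumed prevalence, but the sketch illustrates why error rates of this magnitude matter when decision-makers simply default to the score.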
