
Choosing How to Choose With Artificial Intelligence

Does commitment or computation drive preference and choice?

Key points

  • The digital world offers an increasing range of choices, but humans are limited in their capability to rank preferences and choices.
  • Artificial intelligence helps manage complex preferences and choices, but it triggers resistance or surrender in some people.
  • We must learn to collaborate with AI in making commitments, forming preferences, choosing how to choose, and evaluating outcomes.

Today’s hyper-connected world offers an enormous range of choices. Digital systems present a constant flow of options and alternative experiences. This is true in all spheres of life, both personal and social, for consumers and citizens.

Digitalization Expands Choice
Source: Gerd Altmann / Pixabay

But humans have limited capabilities to rank preferences and choices. Options may be complex or novel, making them hard to discriminate. Or the scope of choice is too broad and must be simplified. Many people then follow habits or routines, which often introduces bias. Other times they rely on prior commitments. No surprise, as Amartya Sen argues, that choice preferences are hard to compare and incompletely ranked.

Benefits and Risks of AI

Fortunately, artificial intelligence (AI) helps manage the accelerating speed and complexity of choice. Algorithms identify options based on previous choice behavior and preferences. AI then curates choice sets designed to maximize satisfaction. However, many people are averse to this type of algorithmic control. Others are more willing to delegate choice to AI, perhaps without realizing it.

Both trends pose risks for human freedom and effectiveness. If AI filters too many preferences and choices, people could either lose control of decision-making or fail to enjoy the benefits of digital innovation. Instead of being empowered by AI, they might become overly resistant or dependent. Neither response is beneficial.

Ironically, therefore, having many more options could ultimately erode freedom of choice. Humans would be less in control. Important consequences follow for personal autonomy, economic consumption, collective welfare, as well as democratic institutions. Whole societies could be distorted by artificial influence.

Managing Digitally Augmented Choice

To mitigate these risks, we need stronger skills in working with AI. Sometimes the preferential choice will require deep human involvement and ownership. In other situations, it will be appropriate to delegate control to AI. Either way, humans and artificial agents need to collaborate more closely.

Human-machine collaboration will therefore be key, grounded in mutual trust and respect. This may sound strange to some because it suggests artificial agents are starting to behave like humans. In fact, many people already trust AI to make choices on their behalf and respect artificial judgments. Every person with a smartphone does this to some degree.

Moving forward, we need to develop stronger capabilities that enable dynamic, adaptive choice, extending to collaborative metacognition and meta-preferences. That is, we need human-machine systems which can monitor and manage adaptive cognition and preferences. As Stuart Russell explains, we need to develop human-compatible AI.

Partnering with AI
Source: Anna Nekrashevich / Pexels

Many computer scientists are working on these problems. If their efforts are successful, future computers will know us deeply, sense our needs and wants in changing situations, and then support us to form and rank our preferences. These systems will be personalized and hopefully empathic. Of course, they will require strong supervision and safety controls.

Generating Commitments in Context

Going further, we also need a new approach to evaluating outcomes, because what counts as beneficial and successful is increasingly contextual and variable. Granted, most communities share norms of wellbeing and justice. But in other respects, different situations require customized commitments and alternative evaluation criteria.

For example, when choosing products and services, many individuals and organizations now prioritize sustainability concerns. They are even willing to pay more to minimize harm to the environment. At the same time, they may wish to improve efficiency by advancing AI, despite the risks and costs. In the future, these alternative benefits must be factored into the analysis of preferences and outcomes. Digital augmentation will make this possible.

I discuss this approach in my recent book Augmented Humanity. I argue that in a digitalized world, many commitments and preferences will be generated through human-machine collaboration in response to changing contexts. But this does not imply loose relativism. Rather, it recognizes that people and situations differ, and humans are not the only beneficiaries of choice. Particularly, ecological and virtual impacts also deserve recognition.

As a first step, we should be more mindful of the opportunities and risks of preferential choice in a digitalized world. Huge benefits are possible for all stakeholders. To enjoy them, we must overcome blunt resistance or surrender and learn to collaborate more deeply with AI to choose.

References

Bryant, P.T. (2021). Augmented Humanity: Being and Remaining Agentic in a Digitalized World. Palgrave Macmillan.

Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Penguin Books.

Sen, A. (2002). Rationality and Freedom. Cambridge, MA: Harvard University Press.
