
How Artificial Intelligence Impacts Moral Decision Making

Engaging the ethics of artificial intelligence.

Key points

  • AI is perceived as more likely than humans to make utilitarian choices when faced with moral dilemmas.
  • The perception of warmth explains the perceived differences in the way humans and AI make decisions.
  • Individuals may behave less ethically and be more willing to deceive others when interacting through AI.
Source: Gerd Altmann/Pixabay

Within many professions, the use of artificial intelligence (AI) can make jobs easier and more efficient. It can save companies time and money, though sometimes at a human cost. Although we worry about AI eliminating jobs for people who need them, in many cases it can augment rather than replace human labor by performing mundane or time-consuming tasks that employees are happy to delegate to digital assistants. But what about other job responsibilities, such as working in teams, interacting with others, and making collaborative business decisions?

The reality is that no one earns “Employee of the Month” simply by going through the motions. Can AI likewise be designed to pursue not merely competence but excellence? And what role do designers play in programming professionalism? Research reveals why these questions matter.

Moral Dilemmas

Zaixuan Zhang et al. (2022) examined the link between AI and moral dilemmas, evaluating how people perceive ethical decision-making by AI.[i] They found that AI is perceived as more likely than humans to make utilitarian choices when faced with moral dilemmas. They described the utilitarian approach as one that accepts harm and focuses on outcomes, in contrast to the deontological approach, which rejects harm and focuses instead on the nature of the moral action.

Zhang et al. also found that the perception of warmth explains the perceived differences in the way humans and AI make decisions, differences that were evident across a variety of moral dilemmas.

Taking a different angle, Jonathan Gratch and Nathanael J. Fast (2022) examined the extent to which AI assistants might facilitate unethical behavior.[ii] They explored the new ways in which AI is trained to exercise and experience power by performing interpersonal tasks such as negotiating deals, interviewing and hiring workers, and even managing and evaluating work, as well as the extent to which personalizing these assistants lets users dictate the ethical values that drive AI behavior.

Gratch and Fast recognize that acting through human agents (indirect agency) can weaken ethical judgment: people believe they are behaving ethically, yet they show less benevolence toward the recipients of their power, judge themselves less blameworthy for ethical lapses, and expect fewer negative consequences from unethical behavior. Gratch and Fast then reviewed research illustrating how, across a wide variety of social tasks, individuals may behave less ethically and be more willing to deceive others when interacting through AI.

The Personalization of Professionalism

It appears that important conversations and decisions should still be handled by humans, not machines. The fact that AI has advanced to the point where virtual assistants can participate both behind the scenes and interpersonally does not mean it is an adequate replacement for human interaction. Sometimes the best intelligence is authentic, not artificial, and many decisions are better made in person.

References

[i] Zhang, Zaixuan, Zhansheng Chen, and Liying Xu. 2022. “Artificial Intelligence and Moral Dilemmas: Perception of Ethical Decision-Making in AI.” Journal of Experimental Social Psychology 101 (July): 1–8. doi:10.1016/j.jesp.2022.104327.

[ii] Gratch, Jonathan, and Nathanael J. Fast. 2022. “The Power to Harm: AI Assistants Pave the Way to Unethical Behavior.” Current Opinion in Psychology 47 (October). doi:10.1016/j.copsyc.2022.101382.
