
Trust in Healthcare AI Can Be Hurt Intentionally or Innocuously

6 key factors can affect trust among AI providers’ buyers and clinical users.

Key points

  • Health AI customers and users rely heavily on their trust in the provider to dampen risk.
  • Contrary to common belief, trust in companies is not only depleted through wanton and Machiavellian actions.
  • Trust in health AI firms can be damaged through innocuous acts.

The race for supremacy among major artificial intelligence (AI) providers, including OpenAI, Anthropic, and Google, is approaching peak intensity. Alongside this growth, concerns about customer trust and distrust have become paramount. These concerns are appropriate—our own work suggests that in the face of the ambiguity and uncertainty typically accompanying a new technology such as healthcare AI, customers and users rely heavily on their trust in the provider to dampen risk and obtain peace of mind.

Healthcare buyers and users’ trust in their AI providers will likely drive their consumption and long-term loyalty to specific providers. Companies that stay mindful of how their actions and strategies shape user trust while developing and deploying AI products can avoid damaging missteps and course-correct, to the benefit of users and, ultimately, the adoption of their products.

Contrary to common belief, trust in companies is not only depleted through wanton and Machiavellian actions. Trust can be hurt, and distrust can be built, for reasons ranging from the relatively innocuous to the volitionally bad.

[Figure: Trust Can Be Damaged Unintentionally and Volitionally. Source: Deepak Sirdeshmukh]

Below, I outline a range of factors that can affect trust among AI providers’ buyers and clinical users. This framework can be extended further down the user chain to patients and their families.

Uncontrollable external factors

Trust among nascent clinical users of AI can be affected by the reputational halo of AI-related news in other industries that has nothing to do with a particular vendor in the healthcare space. Stories of AI-generated deepfake videos and personas being used to scam vulnerable individuals into paying fraudulent vendors receive extensive media coverage, creating a general fear of AI. Even without significant first-hand experience, negative perceptions formed around AI in other industries can influence buyers and users in healthcare. Monitoring sentiment toward AI in the broader marketplace and addressing perceived risks is therefore imperative.

Good intentions gone wrong

One of the most threatening aspects of AI to healthcare users is the collateral damage it might do to the nature and size of the workforce. AI’s powerful ability to routinize processes and add intelligence and efficiency to workflows is slowly, and perhaps inadvertently, leading to workforce reductions and a slowdown in hiring. This side effect could have a chilling effect both on executive leaders at healthcare organizations, who are reluctant to damage company sentiment, and on users, who may be unnerved by the prospect of disintermediating themselves or their colleagues out of a job. The growing effort to position AI as an augmenting rather than a substituting technology is a response to this potential threat.

Poor insight, cluelessness

A lack of understanding of the in situ, real-world impact of AI models and products can lead to uninformed and wrong decisions. Vendors may not understand the problem space and may build tools that do not fit workflows, leading to unanticipated disruption post-adoption. Burnt-out hospital staff (e.g., radiologists) could begin leaning on AI tools to make decisions rather than using them as decision-support assistants, leading to diagnostic errors. Another example is failing to adequately understand how AI might be used by young people, the indigent, and other vulnerable populations. The U.S. Senate held hearings in which parents spoke of the agony of losing children whose reliance on AI chatbots preceded their self-harm. Proactively extrapolating and projecting AI outcomes across vulnerable populations during the development phase can help companies introduce (and update) guardrails and promote trust by demonstrating their benevolent intentions toward users.

Hubris and complacency

AI teams at most healthcare sites are not staffed or equipped to continuously evaluate and improve models. AI providers that push their products into implementation at medical settings and then provide minimal support can increase provider-tech friction and sow distrust. Complacency about the performance of their AI products, or under-weighting of potential downsides, has led to limited “red teaming,” the simulated testing of real-world attacks on a company’s systems to identify vulnerabilities. The time and effort saved could boomerang through the loss of trust and the expense of rebuilding damaged reputations.

Pressure and expediency

The urgency to introduce newer and better products and outperform competitors is exacerbated in the case of AI, given the high expectations, massive financial outlays, and hype surrounding the technology. The pressure to prove that one vendor's model outperforms a competitor's has sometimes resulted in contrived evaluations that emphasize benchmark performance over real-world performance. In other cases, models are rushed out without adequate pressure testing. OpenAI’s GPT-5 was making fundamental errors in basic math and coding upon its release. Concerns about poor testing, particularly with respect to safety, created a mini-crisis for an otherwise substantial innovation, leading to an apology from OpenAI’s leaders.

Greed and malintent

Given their large capital outlays and massive operating budgets, AI companies face severe pressure to demonstrate financial returns through higher prices. Price increases seen as unjustified or too frequent can damage trust in the specific AI firm as well as in the entire industry cohort. For example, firms adopting so-called “surveillance pricing,” which tailors prices to a specific user’s usage, are raising alarms about privacy and fairness (even if price discrimination at the level of an individual is legal). Price increases could also trickle down to healthcare firms, which may have to raise their own fees for health services to patients. Price is perhaps the single most visible aspect of any company’s strategy that can breed distrust; AI companies can learn from past examples.

To promote their customers’ welfare, build their own franchises, and establish a positive and trustworthy reputation for their entire growing industry, AI companies will need to carefully monitor customer trust and take actions that guard against customer distrust.

References

Sirdeshmukh, D., Singh, J., & Sabol, B. (2002). Consumer Trust, Value, and Loyalty in Relational Exchanges. Journal of Marketing, 66(1), 15–37. https://doi.org/10.1509/jmkg.66.1.15.18449

Chen, H., & Magramo, K. (2024, February 4). Finance worker pays out $25 million after video call with deepfake ‘chief financial officer.’ CNN.

Godoy, J. (2025, September 16). US parents to urge Senate to prevent AI chatbot harms to kids. Reuters.
