
Can AI Make Work Safe Again?

Tech and data can boost health at work, but they can also create a surveillance culture.

We have spent much of the past decade excited, and worried, by the rise of AI. But in the context of the current pandemic, it is fair to say that AI's contribution has been underwhelming, to say the least. With smart scientists, big tech, big pharma, and powerful governments around the world mining a sea of personal health data, with access to the fastest supercomputers, the most advanced AI, and the most skilled talent on the planet, all trying to solve the exact same problem under the most heroic of incentives, there is still no indication that we are closing in on a vaccine or cure for COVID-19.

Although this is no doubt disappointing, it is also evident that data and AI have been quite helpful to certain countries (the most salient and widely discussed cases being Israel, Singapore, and of course China) in their ability to track patterns of virus spread across and within regions and to isolate positive cases. Data is typically captured through mobile phone apps that monitor residents' locations, connections, and contacts in order to model and reduce risk at the individual and collective level, amounting to a sort of AI-enabled quarantine.
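To make the underlying logic concrete, here is a minimal sketch, not the design of any specific national app, of how location pings might be turned into exposure flags; the field names, distance threshold, and time window are hypothetical.

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Ping:
    user_id: str
    lat: float
    lon: float
    timestamp: int  # seconds since epoch

def distance_m(a: Ping, b: Ping) -> float:
    """Haversine distance between two pings, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (a.lat, a.lon, b.lat, b.lon))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(h))

def exposure_contacts(pings, positive_ids, radius_m=2.0, window_s=900):
    """Flag users whose pings fall within radius_m and window_s of a positive case's ping."""
    positives = [p for p in pings if p.user_id in positive_ids]
    flagged = set()
    for p in pings:
        if p.user_id in positive_ids:
            continue
        if any(abs(p.timestamp - q.timestamp) <= window_s and distance_m(p, q) <= radius_m
               for q in positives):
            flagged.add(p.user_id)
    return flagged
```

Real systems rely on noisier signals (Bluetooth proximity, cell-tower data) and on epidemiological models, but the basic move is the same: convert individual traces into a probabilistic flag that triggers an intervention.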

If this sounds quite close to Minority Report, a movie focused on predictive policing rather than algorithmic healthcare, that is because the parallels are real. Where do reality and fiction converge? In the process of turning large masses of personal data into a probabilistic estimate of how likely someone is to do X, and then acting on that estimate in order to stop X from happening.

This approach raises a range of interesting ethical dilemmas, old moral conundrums that predate the AI age. For instance, a model could predict that a driver is too drunk or emotionally vulnerable to get into his car, sending a police officer (probably not Tom Cruise) to stop him and saving the lives of innocent pedestrians in the process. However, the model may also be wrong, resulting in the arrest of an innocent driver, perhaps even ruining his life or career. A non-trivial factor to consider is that without technology, police officers are still tasked with making these decisions under uncertain conditions, and that even when they have a great deal of expertise they often act in biased or prejudiced ways. So the challenge for any AI or automated data intervention is not to get it right all the time, but to get it right more often than humans do. In most instances, humans working together with AI will achieve the best results, so the key question is really whether AI or data can enhance human decision-making.

Interestingly, the initial reaction in the West to how Asia leveraged AI and tech for dealing with the pandemic was near moral indignation. As in “sure, this may work and keep people healthy, but I wouldn’t want to live in a surveillance state even if it improves my wellbeing.” Harari’s notion of digital dictatorship – the modern version of an Orwellian or Big Brother state – acquired a new dimension, a potential bright side: If this is the best way to keep people safe and alive, how much control and privacy are we willing to give up for it?

To be sure, what Western cultures appear to object to is not the use of data or tech per se, particularly if there is a benefit to the consumer, but an overactive autocratic government being entrusted with the handling and management of such data. This explains why many Western democracies, from Australia to Britain and the U.S. (though so far at the state rather than national level), have recently embarked on precisely the same tech- and data-centric surveillance, adapting and adopting tracking apps designed in Israel and China, and why other governments will surely follow.

It is also hard to take statements such as "I wouldn't want to give away my privacy or relinquish my personal data for the prospect of safety and health" seriously when we (across Western democracies) have already given away so much personal data, willingly devaluing our privacy, in exchange for much less: another YouTube clip, more relevant ads, one more Amazon product we didn't know we wanted to buy, or the ability to see what our high-school friend's cat had for breakfast on Facebook... oh, and of course, discovering another Netflix docuseries like Tiger King, though unfortunately it may be hard to replicate that masterpiece.

Another important question is what employers (rather than governments) should do now, especially as they think about making offices and workspaces safe again. With a growing number of cities, states, and countries focused on their reopening plans, deliberating how to bring people back to the office and what the new not-normal may look like, we can expect a growing number of emerging technologies, including AI, to become widely used tools for monitoring and improving employees' wellbeing. It should be noted that the workforce or people analytics revolution had already emerged, and even consolidated, within most large organizations, which means that multinational corporations and large employers have already created the organizational foundations to turn their HR and management practices into data-centric operations, albeit with a focus on improving productivity, performance, and efficiency rather than health.

Still, common practices for boosting employee morale, wellbeing, and engagement are now also centered around data, and of course, certain industries, from finance to IT security, have a long history of spotting and preventing counterproductive work behaviors via programmatic, automated, or disintermediated machine algorithms: for example, words that predict insider trading, credit card transactions that predict fraud, or email activity that predicts hacking. In short, large companies already possess a great deal of useful data to predict and manage workforce behaviors at scale, and it would not take much to repurpose those same data and techniques to predict outcomes related to the pandemic or health and keep people safe. In fact, we may even see a new range of data (external rather than internal) being added to the mix, and the boundaries of what companies could and should know eroding.
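As a rough illustration of how such behavioral flagging tends to work, here is a minimal anomaly-detection sketch using scikit-learn's IsolationForest on made-up per-employee features; the features, numbers, and contamination rate are hypothetical and not drawn from any real system.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-employee weekly features: transaction volume, after-hours
# email count, number of external recipients. Real systems use far richer signals.
rng = np.random.default_rng(0)
normal = rng.normal(loc=[50, 5, 3], scale=[10, 2, 1], size=(500, 3))
unusual = rng.normal(loc=[200, 40, 25], scale=[20, 5, 5], size=(5, 3))
X = np.vstack([normal, unusual])

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
scores = model.decision_function(X)  # lower = more anomalous
flags = model.predict(X)             # -1 = flagged for human review
print(f"{(flags == -1).sum()} records flagged out of {len(X)}")
```

The same pipeline could, in principle, be pointed at health-relevant signals instead of fraud-relevant ones, which is precisely the repurposing described above.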

The internal data companies have on their employees appear to be ethically, and certainly legally, less contentious here. For instance, e-mail data (content, or what you e-mail) and metadata (network and context, or whom you e-mail, when, and how often) could be mined via sentiment analysis, natural language processing, and social network mapping to detect whether someone may be displaying symptoms or may be close to people who are sick. This is the corporate version of Target knowing that a shopper is pregnant before she does (based on her shopping habits, perhaps chocolate ice cream?), or Google Trends predicting a flu epidemic based on online searches. Voice and speech mining algorithms could also identify changes in people's voices, linking them to anxiety and stress and quantifying emotional and physical vulnerability at the individual and company level. And of course, video-based AI, such as machine learning algorithms that can translate Zoom, Microsoft Teams, or Google Hangouts data into physical and psychological health markers, or digitally snoop on conversations that may reveal someone is unwell or at risk, is closer to reality than science fiction.
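The social-network-mapping idea, for example, could be sketched roughly as follows, treating email metadata as a crude proxy for workplace contact; the names, message counts, and threshold are invented purely for illustration.

```python
import networkx as nx

# Hypothetical metadata: (sender, recipient, number of messages this week).
email_edges = [
    ("ana", "ben", 12), ("ben", "carla", 4), ("ana", "carla", 1),
    ("carla", "dev", 9), ("dev", "elena", 2),
]
reported_symptomatic = {"ben"}

G = nx.Graph()
for sender, recipient, count in email_edges:
    G.add_edge(sender, recipient, weight=count)

# Treat frequent email contact as a proxy for proximity and flag
# first-degree contacts of anyone who has reported symptoms.
at_risk = {
    neighbor
    for person in reported_symptomatic
    for neighbor in G.neighbors(person)
    if G[person][neighbor]["weight"] >= 3
}
print(sorted(at_risk))  # -> ['ana', 'carla']
```

Whether email frequency is a meaningful proxy for physical contact is, of course, exactly the kind of assumption such systems would need to validate before anyone acts on their flags.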

And if this doesn't sound like a dystopian and apocalyptic Marxist nightmare or your favorite Black Mirror episode, then you may want to consider the addition of external data (currently not used or mined by employers) that we regularly produce with our phones. Imagine, for instance, that employers start accessing your Uber data to see where you went, your WhatsApp or WeChat data to see what you say, your Facebook data to see whom you see, or your CVS, Amazon, Visa, Netflix, and even Spotify data. With Spotify reporting that "chill" was a major search term during the past few weeks, one would imagine that even our choice of songs may reveal something about our current and general health and wellbeing (in fact, there is already scientific research to support this). To be clear, I am not saying organizations should try to do this, nor is it clear whether they would be allowed to even if they wanted to. All I am saying is that accessing and mining these and other data (the more the better, yes) would increase our ability to model risk and predict future wellbeing, including infection rates.

Perhaps more importantly, we should remember that AI and data are never ethical or unethical, moral or immoral. Only humans (as well as other animals) are capable of moral or immoral acts; technology is simply a tool that can serve either purpose. Humans are generally opposed to change, uncertainty, and the unfamiliar, but we also have the capacity to adapt and to consider the implications of change even before it arrives.

With regard to new technologies, data, and AI designed to monitor, predict, and manage people's behavior at work and beyond, we need to understand that there are ethical ways of implementing such seemingly intrusive and scary methodologies. If you explain very clearly to people what data are captured, what is being done with them, and what the benefits are for them, and if you engage in a rational, transparent, voluntary transaction in which you preserve the anonymity of their data and use it only for their own benefit (e.g., higher performance, engagement, and better health and wellbeing), then there is certainly a great deal of promise and potential for AI. But all of this requires trust: trust in our leaders. At the end of the day, we should expect AI to make ethical and prosocial leaders even better, and unethical or antisocial leaders even worse.
