In a recent New York Times article, author Rachel Botsman describes her experiences “Co-Parenting with Alexa,” Amazon’s digital assistant. As she explains, her three-year-old daughter Grace displays a shocking ease with Alexa, asking it (her?) about the weather, requesting her favorite songs, and even showing guilt after telling Alexa to shut up.
Botsman provides an insightful look at the shift in our relationship with technology this represents:
For generations, our trust in it has gone no further than feeling confident the machine or mechanism will do what it’s supposed or expected to do, nothing more, nothing less. We trust a washing machine to clean our clothes or an A.T.M. to dispense money, but we don’t expect to form a relationship with them or call them by name.
Today, we’re no longer trusting machines just to do something, but to decide what to do and when to do it. The next generation will grow up in an age where it’s normal to be surrounded by autonomous agents, with or without cute names. The Alexas of the world will make a raft of decisions for my kids and others like them as they proceed through life — everything from whether to have mac and cheese or a green bowl for dinner to the perfect gift for a friend’s birthday to what to do to improve their mood or energy and even advice on whom they should date.
In the article, an adapted excerpt from her forthcoming book, Who Can You Trust? How Technology Brought Us Together and Why It Might Drive Us Apart, Botsman focuses on the commercial implications of this newfound trust in technology, given that Alexa is a product designed to reduce what little barrier already exists between us and Amazon. She also mentions the Echo Look, a device that works with Alexa to help you make better fashion choices—selected from Amazon, including its own fashion lines. Echoing the work of legal scholar Frank Pasquale (The Black Box Society: The Secret Algorithms That Control Money and Information), Botsman calls Alexa “a corporate algorithm in a black box.”
There are also significant implications of our increasing reliance on algorithmic decision-making in terms of our autonomy, authenticity, and individuality. As I write in my new book The Decline of the Individual (after discussing the related dangers of relying too much on personal data and “the quantified self”):
Similar to issues with our own uses of personal data, the algorithmic predictions of businesses can become problematic if we let them have too much influence on our choices without due reflection on their validity and foundations, as well as the larger context in which we make choices. This is not to say we should never consider or accept Amazon’s book recommendations; sometimes, accepting the “advice” of business is completely reasonable. Certainly, if you’re just mindlessly watching Netflix for the evening—perhaps just looking forward to the “and chill” part—you might just let it play one show after another, generally satisfied with its choices, but this is because you really don’t care too much what you watch. But if you’re actively watching, you want to choose your next show or movie, and for all of Netflix’s data and algorithms, you know better than they do what you want to watch next, and in these cases, you should dismiss their recommendations. Many of us do, of course, but my concern lies with those who do not, those who sacrifice even that tiny bit of their own decision-making autonomy to other decision-makers they assume know better.
The practical problem with this is that algorithms such as those used by Amazon and Netflix can only track your past behavior and form guesses about your future behavior based on it. But they can’t know the reasons why you made those choices or the reasons you’ll make future ones—much less whether they’ll be the same reasons! In this sense, these algorithmic recommendations are just a more refined version of the stranger at the party who says, “Oh, you liked Stranger Things? Then you’ll love Sense8,” because they think the two shows are similar, even though that similarity may have nothing to do with why you liked Stranger Things.
More important, ceding our choices to algorithmic recommendations means that, in some small way, we are handing over choices about our lives to other parties whom we presume “know better.” To be fair, there are good arguments for doing this: as prolific scholar and writer Cass Sunstein explains in his book Choosing Not to Choose, we have an enormous number of decisions to make in any given day and only so many mental resources with which to make them, so it makes sense to offload some of that decision-making onto others. We let the radio station DJ—or, these days, Spotify or Pandora—choose our music, let Netflix choose our shows, and let Amazon choose our books, and this saves us mental resources we can devote to more important and meaningful tasks.
But as with most things, this can be taken too far. Insofar as our choices reflect and define who we are, we are sacrificing a little bit of our individuality, our identities, whenever we let someone else make choices for us. As I write in The Decline of the Individual:
If we are ceding choices consciously in order to devote our attention to more important tasks, in the active sense of “choosing not to choose,” that is fine. But if we are doing it because we believe other people know us better than we do, that is a problem, because we are giving up control over our own lives, even in areas that seem trivial.
Again, this would not be as much of a problem if the choices we cede to algorithms dealt only with songs and TV shows. But as Botsman’s story shows, the next generation may develop a degree of faith in the “wisdom” of technology that leads them to give up even more autonomy to machines, resulting in a decline in individual identity and authenticity as more and more decisions are left to other parties to make, guided by interests that are not the person’s own but may be very much the interests of those programming and controlling the algorithms.
Isn’t that right, Alexa?