

Computers Lack Social Judgment

Why self-driving cars may be a pipe dream


A recent fender bender in which a Google Self-Driving Car was struck by a bus while attempting to merge in front of it has cast a pall over the autonomous car industry. As an example of the demoralization the accident caused, a Google executive stated that it may take three decades to solve the problem the setback uncovered. The problem is that the car’s computer predicted that the bus would stop to let the merging car go in front of it, but the bus driver did not behave as predicted. Google took responsibility for the accident, while also stating that the human driver contributed to it by (my words) being discourteous. (For an account of the Google accident and its aftermath, see theverge.com.)
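To make the failure mode concrete, here is a minimal sketch of the kind of threshold decision the accident reports describe. The function names and numbers are my own illustrative assumptions, not Google’s code:

```python
# A minimal sketch (my illustration, not Google's actual code) of a merge
# decision that hinges on a predicted probability that the other driver yields.
def decide_merge(p_yield: float, threshold: float = 0.9) -> str:
    """Merge only if the model is confident the other vehicle will yield."""
    return "merge" if p_yield >= threshold else "wait"

# The failure mode in the bus incident: the model assigned a high yield
# probability, but the human behind the wheel did not behave as predicted.
predicted_p_yield = 0.95      # the computer's (over)confident estimate
bus_actually_yields = False   # what the bus driver in fact did
if decide_merge(predicted_p_yield) == "merge" and not bus_actually_yields:
    print("prediction and behavior diverged -> collision risk")
```

However the real system weighs its evidence, the structure of the error is the same: a confident prediction about a human, and a human who declines to cooperate.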

When I discussed this story with a self-driving car enthusiast recently, he said the real problem is public reaction. He pointed out that fender benders in cars operated by humans occur extremely frequently without triggering much concern, and that the goal of a completely autonomous car is closer to reality than most people realize. The PR issue obviously is a problem, although I see it as closely tied to the chimera of attaining perfection in the accident-avoidance quest. In a recent AAA survey, 75% of drivers indicated that they would be extremely reluctant to turn over complete control of their driving to their car. Likely the number of skeptics would decline substantially if self-driving cars were proven to be absolutely safe, but a few fatal or near-fatal accidents could, in my opinion, put the self-driving car initiative on the shelf for a long time, maybe forever. Does anyone other than a software visionary really believe that such safety perfection is ever attainable, or that widespread public acceptance, in the absence of near perfection, is ever likely? At the risk of putting myself in the same category as the skeptics who laughed at the first demonstration of ether, or any number of other innovations, my response to the autonomous-driving entrepreneurs is “dream on.”

Those who believe that self-driving cars are close to reality point to the Tesla Model S, a fully electric car with a self-driving (“autopilot”) mode that a driver can activate. According to my friend, one can today drive for hundreds of miles on the interstate in a self-driving Tesla while doing distracting things such as reading a newspaper. Finding that surprising, I checked out the assertion and found it to be a little exaggerated. According to Tesla founder and CEO Elon Musk, the self-driving option (which he described as an imperfect beta version) must always be used with one’s hands on the wheel, which would certainly rule out reading a newspaper. Musk acknowledged a couple of near-catastrophic incidents recently, but he blamed them on drivers who ignored this warning by taking their hands off the wheel. These incidents mainly involved what might be termed non-social (i.e., terrain-reading) errors, but almost every driving decision, especially when other cars are part of the terrain, makes some social judgment demand on the operator, whether the operator is a human or a computer. (For an account of some Model S near misses, see technologyreview.com.)

Support for the claim that one can today drive hundreds of miles on Tesla autopilot can be found in a well-publicized cross-country trip taken in a Model S that made the journey in record time. What may not be generally known, however, is that there were three incidents during the journey in which a serious accident was narrowly averted. In one, the car entered a curve at 90 miles an hour (well above the speed limit) and would have gone out of control if the human had not quickly taken over. The problem seemed to be that the car’s algorithm called for it to follow the lane or center-line marker at a standard distance, but experienced drivers know that at high speed one must correct (in addition to slowing down) by taking the curve closer than usual to its apex, in order to reduce the degree of wheel turn needed to negotiate it safely. The conclusion seems to be that the car cannot yet safely operate autonomously at high speed. Another near accident occurred when the car became confused by the action of another car and crossed over the road’s midline, steering the Model S into oncoming traffic and what could have been a fatal head-on collision.
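The physics behind that apex correction is simple enough to sketch: the grip needed to hold a curve grows as the square of speed divided by the radius of the path, so a line that enlarges the effective radius demands less of the tires. The radii below are illustrative assumptions of mine, not data from the trip:

```python
# A back-of-the-envelope sketch of why hugging the lane marker fails at
# speed: holding a circular path of radius r at speed v requires lateral
# acceleration v^2 / r, so a line that enlarges the effective radius (an
# apex line) demands less tire grip. Radii are illustrative assumptions.
G = 9.81  # gravitational acceleration, m/s^2

def lateral_g(speed_mph: float, radius_m: float) -> float:
    """Lateral acceleration, in units of g, needed to hold the curve."""
    speed_ms = speed_mph * 0.44704  # convert mph to m/s
    return (speed_ms ** 2 / radius_m) / G

for label, radius_m in [("lane-center line", 150.0), ("apex line", 200.0)]:
    print(f"{label}: {lateral_g(90, radius_m):.2f} g at 90 mph")
# Ordinary road tires give up at roughly 0.9 g, so the lane-center line
# (about 1.10 g here) is unholdable, while the wider apex line
# (about 0.82 g) at least stays within the tires' grip.
```

On these assumed numbers, the difference between following the painted line and taking the racing line is the difference between a curve that cannot be held and one that can.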

Interestingly, while Google partly blamed the bus driver for its accident, Tesla has entirely blamed its customers for such incidents, warning owners that in the event of an accident while in autopilot mode, financial liability would rest entirely with the driver. To which I say “lots of luck, Elon, telling that to a judge when some seriously injured customer or dead owner’s family sues you.” In fact, some commentators have taken Tesla to task on ethical and legal grounds for making available an imperfect autopilot device when it is predictable that many owners will wrongly assume it is a fully operational, and thus completely safe, self-driving system. Tesla’s motives for rushing a clearly unready product into production? Most likely ego (buttressing Musk’s claim that he is ahead of Google and other competitors in the AI race) and, of course, money (getting people to buy the company’s rather pricey products).

While it appears that non-social situations (such as ambiguous, confusing, or poorly designed center lines or on-ramps) remain a very tough nut for the self-driving auto industry to crack, I believe that the main impediment to its eventual success lies in the social domain. As the bus accident illustrated, a self-driving car’s decisions are based in part on predictions about what another car’s driver is likely to do, but if we know anything about human behavior, it is that it is not fully predictable, rational, or courteous (for proof of that, one need only look at recent political developments in the Middle East or, for that matter, in the United States). Furthermore, drivers sharing the road with an autonomous vehicle may have vision problems, be in an impaired, inattentive, or agitated state, or lack driving competence, all of which may cause them to make sudden dangerous moves that no computer, other than maybe the world’s largest supercomputer, will ever be able to adapt to quickly enough. The fact is that an experienced and competent driver is always “reading” the behavior and intentions of other drivers and making split-second responses, based not on probabilities but on real-time inferences and perceptions.

An illustration of the social-judgment basis of many accidents can be found in the experience of beginning adolescent drivers, whose parents pay very high insurance premiums for a reason. My hair-cutting lady recently told me the story of her 18-year-old son, who had two (fortunately) minor accidents within the first five days after obtaining his license. Neither was technically his fault, but the fact remains that neither would likely have happened had he been a year or two older. In one of the accidents, the young man was driving along when a car entered the road from a driveway on the right and plowed into his passenger-side door. His explanation was that he was concentrating on the road ahead and did not consider the possibility of such a thing happening. Almost certainly, there were subtle cues, such as the other car inching up to the road and looking as if it might exit. It is also likely that he himself unconsciously contributed to the problem, perhaps by seeming to signal that the other car could proceed. An experienced and careful driver is attuned to such (essentially social) subtle cues and is aware of the need to engage in an ongoing stream of risk-assessing calculations. An inexperienced driver, such as this young man, is profoundly egocentric, considering only his own point of view when driving, and also when doing much else. It is not clear to me how a computer-driven car would be any better than a neophyte driver at reading and emitting such subtle social cues.
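One way to picture the gap is as the difference between driving on base rates and driving on cues. The toy update below is my illustration, not any vendor’s algorithm: a fixed base rate says a driveway car almost never pulls out, while each cue an experienced driver notices should push that estimate sharply upward:

```python
# A toy Bayesian update (my illustration, not any vendor's algorithm)
# contrasting base-rate driving with cue-reading driving.
def update(prior: float, p_cue_if_pullout: float, p_cue_if_wait: float) -> float:
    """Revise P(driveway car pulls out) after observing one cue."""
    joint_pullout = prior * p_cue_if_pullout
    joint_wait = (1.0 - prior) * p_cue_if_wait
    return joint_pullout / (joint_pullout + joint_wait)

p = 0.05  # base rate: cars waiting at driveways rarely pull out
# Cues an experienced driver reads, each far likelier if the car will pull out:
for cue in ("inching toward the road", "wheels turning outward"):
    p = update(p, p_cue_if_pullout=0.8, p_cue_if_wait=0.2)
    print(f"after cue '{cue}': P(pulls out) = {p:.2f}")
# The base-rate driver stays at 0.05 and is surprised; the cue-reading
# driver ends near even odds (0.46 here) and covers the brake.
```

Whether a car’s sensors can extract such cues reliably, and fast enough to matter, is exactly the point in doubt.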

Software engineers are good at solving physical problems, but they have not yet solved, or even paid much attention to, social judgment problems. Awareness of this reality came to me a few years ago at an annual conference in Colorado that addresses the application of technology to people with cognitive disabilities. All of the erudite presentations dealt with the application of software and hardware to physical problems, such as mobility issues. When my turn came to speak, I talked about a film the audience had just watched, about an intellectually disabled man whose kind neighbors made it possible for him to continue to live in their New York apartment building. Every one of the challenges that imperiled his remaining in the building was social, such as his extreme gullibility (my special interest), reflected in such things as his lending his apartment key to street people (who terrorized the neighbors, stole things, and damaged the building in various ways). When I challenged the computing authorities (some legendary) gathered at the conference to say how technology could solve such social judgment- (and loneliness-) driven behaviors, the response was silence.

The self-driving movement is interesting not just as an automotive phenomenon, but as an illustration of developments in the field of artificial intelligence (AI). Google is an example of a company that is extremely bullish about the ability of its computer scientists, using continuing advances in chip miniaturization, to improve human functioning through AI. Perhaps the leading exemplar of this attitude is Raymond Kurzweil, a genius inventor of many self-named computer devices (and someone considered a modern-day Thomas Edison), who now works full-time as Google’s resident futurologist. Kurzweil has written several books and given many talks extolling his optimistic view of the future of “transhumanism” (the use of increasingly sophisticated technologies to enhance human intelligence) and the “singularity” (the use of technology to change human biology, such as by curing disease and extending the human life span). Central to these concepts is Kurzweil’s somewhat controversial belief that the essence of intelligence is pattern recognition (something computers are good at) and that computer scientists will thus eventually be able to construct an improved version of the human brain. Competent car driving, which depends heavily on pattern recognition, would thus seem to be an obvious extension of the AI concept. A problem with great geniuses (such as Kurzweil) and great companies (such as Google) is that they often have difficulty knowing when they have taken on a “problem too far.” My own belief is that creating a self-driving car capable of recognizing and responding successfully to the hundreds of social and physical judgment challenges involved in getting safely from point A to point B will prove to be a problem that cannot be solved in Kurzweil’s lifetime, even if he succeeds in altering his biochemistry (he takes 150 pills a day, down from a high of 200) and lives for another hundred years.

Copyright Stephen Greenspan
