Research by Elizabeth Loftus over thirty years established that eye-witnesses’ recall of incidents could be influenced by the language of their interrogation: for example, using words like “smash” in relation to a car accident instead of “bump” or “hit” causes witnesses to report higher speeds and more serious damage. But more recent research has revealed that this so-called misinformation effect is not found if a robot does the questioning.
Two groups, one questioned by a human interviewer and the other by a NAO robot (illustrated here), were asked identical questions that introduced false information about a crime the subjects had viewed. When posed by humans, the questions caused the witnesses’ accuracy of recall to drop by 40 per cent compared with subjects who did not receive misinformation: the former group “remembered” objects that were never there. But misinformation presented by the robot had no such effect, despite the fact that the scripts were identical and that the experimenters told the human interviewers “to be as robotic as possible”.
The explanation presumably lies in the fact that, although the 23-inch-high android robot has eyes, a synthesized voice, and a capacity for gesture, it cannot bring to an interview the subtle expressions that a human being could, and it is certainly not capable of the sophisticated mentalistic responses that might exert further, even subtler effects on those being interviewed. As the lead researcher points out in New Scientist (9 February, p. 21): “We have good strong mental models of humans, but we don’t have good models of robots.”
In fact, we relate to them rather as we would to aliens, and to the extent that robots like these mimic what we might expect of an encounter with an alien, they have the same “autistic” effect: diminishing mentalism but encouraging the kind of mechanistic, computer-like memory typically found in autistic savants like Kim Peek.
Elizabeth Loftus went on to research so-called “false memory syndrome” and did much to discredit the paranoia of child sex abuse witch-hunts. But these remarkable findings suggest that, were psychotherapy to be entrusted to suitably programmed computers, there would be much less risk of false memories being reported in the first place. And if being interviewed by a robot makes such a difference to the accuracy and objectivity of a person’s memory, what more could be expected where other aspects of mentalism were concerned, such as emotion, sociability, subjectivity, and self-consciousness? At the very least, a mechanistic psychotherapist would counter-balance the hyper-mentalism of psychotics, and even autistic clients might relate to it much better than to a human one.
It is now widely recognized that classical psychoanalysis is not an effective treatment for psychotics. Indeed, as a recent account points out: “The classic psychoanalytic approach (including free association and having the patient lying prone on a couch with the therapist out of sight) is contra-indicated.” Furthermore, “Therapists who work with schizophrenia patients need to have a high level of frustration tolerance and not have a need to derive narcissistic gratification from the patient’s efforts or progress.” Clearly, the role of the psychotherapist, and perhaps that of the psychoanalyst especially, is open to abuse and exploitation by the therapist for whatever reason, and there are a lot of reasons!
But no conceivable computerized psychotherapist would be subject to similar temptations. On the contrary, intelligent interfaces that might develop into computer psychotherapists could turn their very weaknesses, the absence of real human motives, memories, needs, emotions, and ego, into a guarantee of levels of objectivity, impartiality, and rationality to which few if any human psychotherapists could aspire. At the very least, their never-tiring silicon circuits would certainly guarantee a high level of frustration tolerance, and narcissistic gratification is something that only a hyper-mentalizing human being with an agenda of personal aggrandizement would seek!
Instead, like an alien intelligence from outer space, the machine mind would be ideally qualified to explore human mentality with an objectivity, detachment, and impartiality that no human being could ever achieve. Even better, the wholly mechanistic basis of the machine’s mind would mean that it was ideally tailored to help where psychotics need help the most: in de-hyper-mentalizing and re-balancing their cognitive configuration in the mechanistic direction.
So it is not just in general terms, in relation to the human race as a whole, that the alien invasion of the future (intelligent, Turing-tested machines) might prove crucial to our understanding of ourselves: it could also transform individual psychotherapy and give those who needed it unique and otherwise unobtainable insights into themselves, something psychoanalysis always promised but seldom if ever delivered.
(With thanks to Steven M. Silverstein, whose remarkable research on blindness and risk of psychosis was the subject of a previous post.)