The Real Turing Test: Curing Computer Autism

People will really believe that computers can think once they master mentalism.

Posted Jun 09, 2015

[Image: Alan Turing in slate at Bletchley Park. Source: Christopher Badcock]

The test of a new paradigm is often the extent to which it can settle old issues that other perspectives have failed to resolve. Where the diametric model of the mind is concerned, I have already suggested how true that is in relation to IQ—and particularly the paradoxes posed by the Flynn effect and ethnic differences in intelligence, not to mention where it all started with the symptoms of autism and psychosis. But what is true of autism, psychosis and IQ is also true of AI (artificial intelligence) in general—and in particular of the question of whether machines can think. This is epitomized in the Turing Test (named after Alan Turing (1912–54), whose likeness in slate at Bletchley Park is pictured above).

According to the diametric model of the mind, if a task can be programmed in such a way that a computer performs it reliably and accurately, it is mechanistic. But if the task requires skills, knowledge, or abilities relating to human beings and their minds that are hard to simulate in a machine, we are dealing with mentalism.
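To make the distinction concrete, here is a minimal sketch in Python (my own illustration; the function names are hypothetical): the mechanistic task can be specified completely and performed flawlessly, while the mentalistic one has no comparably reliable procedure.

```python
# Mechanistic: the rule is fully explicit, so a machine performs it
# reliably and accurately.
def sort_dates(dates):
    return sorted(dates)  # ISO-format date strings sort chronologically

# Mentalistic: success depends on reading another person's mind; there is
# no comparably reliable procedure to program.
def infer_speaker_intention(utterance):
    # Is "Nice job!" praise or sarcasm? No general rule exists.
    raise NotImplementedError("mentalistic skills resist explicit programming")

print(sort_dates(["2015-06-09", "1912-06-23"]))  # ['1912-06-23', '2015-06-09']
```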

Like people diagnosed with an autism spectrum disorder (ASD), present-day computer systems have symptomatic deficits in mentalism: They are severely limited in their ability to understand and to respond appropriately to written and spoken language, and lack social and interpersonal skills—or what I would call mentalistic intelligence. Like many autistics, they are narrowly focused on single tasks (even if they do perform them with a speed and reliability far exceeding that of ordinary humans), and like many people with ASD, they are rigidly and single-mindedly obedient to rules, and cannot tolerate changes or even minor deviations from their programming. Certainly, where mentalistic, social and interpersonal skills are concerned, you can forget it: your computer is not going to be able to carry on an intelligent conversation with you on some topic of mutual interest. Indeed, as I pointed out in a previous post, this is the core of the problem in the Turing Test: what is at issue is the machine’s mentalistic abilities, not its mechanistic ability to compute as such.

But, looking at the issue in terms of AI, there is another factor that must be considered: that of mentalism being an interface—in the human case between people’s brains, but in the computer case between the user and the machine. Early computers used the so-called command-line interface, in which the user typed in strings of symbols that were then interpreted by the computer’s operating system. But such commands were easier for the computer than for the user. In a phrase originated by the personal computer industry, the command-line interface was not very “user-friendly.”

The next major step occurred with the arrival of the graphical user interface, or GUI for short. Today, this is universal, found not simply in computers but on smartphones, tablets and all manner of similar devices. However, GUIs are not the ultimate in user interfaces. Clearly, the ultimate interface would be a computer system with the mental expertise to act like a human agent, carrying out any tasks within its ability just as surely as a person would. As such, it might become known as a mentalistic, psychological, or personal user interface—or perhaps simply as an intelligent one. Most people would probably find such a development immensely appealing simply because it would rely on abilities they had already acquired in interacting with other human beings and would not require them to master skills peculiar to the computer. A system that satisfied them in this respect would inevitably seem more intelligent than one that did not.

Furthermore, the system would probably require the user to give it a name so that it knew when it was being addressed, and the use of everyday names also given to persons would be almost unavoidable. Indeed, it would be altogether easier to speak to a machine designed to mimic many human mental functions as if it were in fact a person, and to use the full range of mentalistic expressions that might be appropriate. Such terminology would certainly include personification, and it would be difficult to avoid references to the system’s cognitive state as if it had a mind, with knowledge, intentions, memories, and so on. Clearly, such usages would give a whole new dimension of meaning to the term personal computer, and would probably be immensely appealing to many potential users. HAL, the supercomputer in 2001: A Space Odyssey, is a fictional example.

Inevitably, any well-engineered mentalistic interface would have to appreciate both the knowledge and the ignorance of its user, and ideally the system would be able to interpret this for itself, for example by only offering assistance when the user needed it, or only requesting information that the user actually possessed. This would require the system to keep track of its user’s state of knowledge about particular topics, and ideally to be able to predict how that knowledge will evolve.

For example, suppose the computer was programmed to remind its user of certain dates and the appropriate actions to be taken on them, such as meetings or anniversaries. Constant reminders would be irritating, and so the system might be designed to monitor the user to see if they were in fact going to remember the event, and only intervene when it became clear that they had forgotten it. So the system might not mention an impending birthday or wedding anniversary if it saw the user ordering flowers or booking a restaurant on the appropriate date, but would be certain to do so in good time if it did not.
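A sketch of how such a reminder heuristic might look in code (the names and signals here are hypothetical illustrations, not a specification): the system stays silent so long as it observes actions suggesting the user has remembered, and intervenes only as the date approaches without any such evidence.

```python
from datetime import date, timedelta

def user_showed_awareness(event, observed_actions):
    # Hypothetical signal: any observed action that relates to the event,
    # e.g. ordering flowers or booking a restaurant for the anniversary.
    return any(action["relates_to"] == event["name"] for action in observed_actions)

def should_remind(event, observed_actions, today, lead_days=3):
    # Intervene only when the date is near AND there is no evidence
    # that the user has remembered it for themselves.
    is_near = (event["date"] - today) <= timedelta(days=lead_days)
    return is_near and not user_showed_awareness(event, observed_actions)

anniversary = {"name": "wedding anniversary", "date": date(2015, 6, 12)}
observed = [{"relates_to": "project meeting"}]  # no flowers, no restaurant booking

if should_remind(anniversary, observed, today=date(2015, 6, 10)):
    print("Reminder: your wedding anniversary is on", anniversary["date"])
```

The design choice is the one described above: the default is silence, and the reminder fires only when the mentalistic model of the user suggests forgetting.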

Again, to avoid being intrusive, the system might not wish to confirm the user’s knowledge every time they proved to be correct about something. But the system would have to be able to detect and interpret a false belief on the part of its user, and be able to take appropriate action to correct it, at least where its own operations were concerned. To do this, the intelligent user interface would certainly have to be able to pass an appropriately formulated test of false belief, similar to those used to diagnose autism. Indeed, in my previous post on the Turing Test, I gave a practical example.
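By way of illustration, here is a toy false-belief check modelled loosely on the Sally–Anne test used with children (the scenario is my own illustrative sketch, not the example from the earlier post): a system that tracks an agent’s beliefs separately from reality predicts the agent’s behaviour correctly, while one that consults reality alone makes the classic error.

```python
# Toy Sally-Anne scenario: Sally leaves her marble in the basket;
# Anne moves it to the box while Sally is away.
reality = {"marble": "box"}
sally_belief = {"marble": "basket"}   # Sally's now-false belief

def predict_search_mentalistic(belief, obj):
    # Pass: predict from the agent's (possibly false) belief.
    return belief[obj]

def predict_search_mechanistic(world, obj):
    # Fail: predict from reality alone, ignoring the agent's mind.
    return world[obj]

print(predict_search_mentalistic(sally_belief, "marble"))  # "basket": correct
print(predict_search_mechanistic(reality, "marble"))       # "box": the classic error
```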

What this means is that the whole debate about computer intelligence has been misconceived because it wrongly assumed that intelligence is unitary and indivisible. In fact, it is dualistic, and based in the brain on "anti-correlated" neural circuits. But once you realize that the real issue for computers is their mentalistic rather than mechanistic intelligence, you can see the solution. Furthermore, you also see that the issue is not one of computation as such, but of mentalism as a user-interface.

In other words, if you could engineer an intelligent user interface that could relate to its user as you would to another human being, the whole question of whether a machine can think would cease to be philosophical and become a software selling point with huge appeal to potential users. The question would be not so much Can a computer think? as Can my computer understand me well enough to do what I want it to do? If enough users answered Yes! the Turing Test would have been passed in perhaps its most challenging and down-to-earth form: that of a product in the marketplace. And it’s bound to happen sooner or later. Indeed, it has already started…

(Extracted and condensed from my forthcoming book, The Diametric Mind: Insights into AI, IQ, society, and consciousness: a sequel to The Imprinted Brain.)