Shimon Edelman Ph.D.

The Happiness of Pursuit

From Beginnings to Hope

The human mind/brain has what it takes to make the world a better place.

Posted Dec 10, 2015

A beginning is the time for taking the most delicate care that the balances are correct. This every sister of the Bene Gesserit knows.                                  

From Manual of Muad’Dib by the Princess Irulan


A beginning is the time of imbalance. In learning how to live — a schooling that each of us undergoes, with variable success, for as long as life lasts — every new beginning is a time out of joint, a time of mistakes and of failings. Some of our failings are failings of aptitude: beginner Lego builders, guitar players, cooks, or surgeons are rarely much good from the outset. No matter: practice makes perfect, and if it doesn’t, one can always move on — trade Lego for crayons, put away the guitar in the attic, marry a cook, drop out of medical school and become a park ranger.

Failings of attitude are trickier. A toddler who, upon seeing his mother admiringly cradle in her arms a neighbor’s new baby, frowns and tells her to toss it from the balcony may seem merely cute, but this behavior should give us pause. What is it that would prevent the lovable little rascal from growing up to see nothing wrong in napalming a village or beheading unbelievers?

The need to instill a proper attitude — and the trickiness of doing so — holds also for entire societies. Many modern nation-states in central Europe, in the Middle East, and in south Asia are the shards of old multinational empires: Ottoman, Austro-Hungarian, British, Soviet; others, in the Americas and in Australia, arose out of a drive to take over territories that were, or seemed to be, empty enough. In both types of situation, the new states’ beginnings (and often enough the rest of their histories) are paved with nationalistic wars, religious strife, ethnic cleansing, slavery, genocide, the turning away of refugees — human behavior at its worst.

[Image: Joshua trees after a fire in the California desert — a metaphor for the Syrian refugee crisis. Source: Shimon Edelman]

Arguably, bad attitude toward “the Other” (and the atrocious behavior that such attitude may unleash) is something that comes very naturally to us — both to individual humans and to groups that are united by some perceived common characteristic or concept. In extreme cases such as slavery, humans see others merely as a resource to exploit and discard when exhausted — something that, as a species, we also do to the environment, on a planetary scale. For someone who abhors this state of affairs (and far from everyone does, which is part of the problem), a computational understanding of how the mind works offers a glimmer of hope.

On this computational understanding, a mind is a bundle of computations, carried out by the embodied brain in the service of survival and procreation. Some of these computations turn sensory data into estimates of the state of affairs in the environment; others evaluate possible courses of action, given the organism’s physiological state and goals. Among these valuation processes, some are experienced as moral choices, in that certain options feel to us more appropriate than others. There is nothing mysterious about this: feelings and emotions are merely the manifestations of particular types of computation — those that evolved to be so critical to the mind’s functioning that they appear to us as inexorable, felt rather than reasoned. Therein lies the hope.

Welcome to the (virtual) machine

If the moral trajectory of one’s life is like that of a ballistic missile — fixed from the moment of engine cutoff at launch — there is no alternative to having the balances correct from the outset. In contrast, insofar as course correction in mid-flight is at all feasible, there is room for improvement. Both individual people and entire nations are known to be amenable to moral course corrections: children can be acculturated by teaching them to shun cruelty, and adults can be persuaded to reevaluate their moral beliefs. A computational mechanism that makes it possible for a brain-based cognitive system to stretch the leash on which it is held by its genetic and experiential history is akin to what computer scientists call a virtual machine.

To grasp the idea of a virtual machine, we must first understand the concept of native computation. In the case of a natural computation device such as the brain, native computation is what evolutionary pressure requires it to do — detecting patterns in the environment, learning to adjust its actions on the basis of past outcomes, and so on. Likewise, an artificial computation device in its native mode does just what it has been designed to do. Some of us still remember pocket calculators, which had been designed to perform certain operations on numbers — and nothing more; there was absolutely nothing you could do to get a pocket calculator to play chess, or Candy Crush.

There exists, however, a class of computation devices, first described by Alan Turing and others in the 1930s, which are universal. A universal computer can be made to compute anything that is at all computable. To a pretty good approximation, your smartphone is one: even an old model can be used to run (slowly) the latest apps, which had not yet been invented when its hardware was built. Now, my own phone is many orders of magnitude more powerful than the one and only computer we had on campus when I was an undergraduate — more powerful in all respects but one: there is no computational task that my phone can do that the old IBM could not be programmed to carry out, slow as it was. Unlike a special-purpose computer such as the pocket calculator, a universal device can use its native computational faculties also in a virtual mode: to mimic the basic operations of any other computer, thereby inheriting all of that computer’s abilities (at the cost of a slow-down).
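For readers who like to see the idea in symbols, here is a minimal sketch, in Python, of a universal host mimicking a special-purpose device. The three-instruction "calculator" machine language below is invented purely for illustration; the point is only that the host's native operations can be repurposed to emulate another machine.

```python
# A universal host (here, Python) emulating a special-purpose
# "pocket calculator" machine. The toy instruction set — PUSH, ADD,
# MUL — is made up for this example.

def run_calculator(program):
    """Interpret a toy stack-machine program, instruction by
    instruction, the way a virtual machine would."""
    stack = []
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            stack.append(stack.pop() + stack.pop())
        elif op == "MUL":
            stack.append(stack.pop() * stack.pop())
        else:
            raise ValueError(f"unknown instruction: {op}")
    return stack.pop()

# (3 + 4) * 2, as the emulated calculator would compute it
program = [("PUSH", 3), ("PUSH", 4), ("ADD", None),
           ("PUSH", 2), ("MUL", None)]
print(run_calculator(program))  # 14
```

The emulation is slower than computing natively, of course — which is exactly the slow-down mentioned above.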

It turns out that the human brain, too, is capable of virtual-mode computation, thanks in part to an evolutionary innovation that largely distinguishes us from other animals: working memory. As you, the reader, may remember, in the last paragraph I mentioned the decade during which Turing had his insight into universal computation (don’t look back!). If you do recall those four digits, you should be able to mentally reverse their order. This ability — carrying out arbitrary operations on arbitrary items — should give us pause. Admittedly, it is not easy to use: it is slow, prone to interference, and chokes if fed too many items at once. Still, from the standpoint of evolution it is a minor miracle: there is, obviously, no brain circuit dedicated to remembering digits or to reversing the order of a sequence of items. In this sense, the computation in question is virtual, made possible by an emergent property of brains rather than by the native computational properties of their components.
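The digit-reversal exercise is, for a universal machine, a one-liner. A short Python sketch makes the generality explicit: the same operation works on any sequence of items, digits or otherwise, which is precisely what "arbitrary operations on arbitrary items" means.

```python
def reverse_items(items):
    """Reverse any sequence — the operation is generic,
    not tied to digits (or to anything else) in particular."""
    return items[::-1]

print(reverse_items("1930"))              # '0391'
print(reverse_items(["do", "re", "mi"]))  # ['mi', 're', 'do']
```

Unlike working memory, the machine version neither slows down nor chokes as the sequence grows — a reminder of how costly virtual computation is for brains.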

A capacity that is supported by virtual computation is at least once removed from the building blocks that ultimately implement it. This is why what our brains can compute in the virtual mode is much less constrained by evolutionary and developmental factors than are their various native faculties. Evolution may have “wired” us for being good at group foraging and at fighting off other groups, but because in the process it endowed us with virtual computational tools such as versatile working memory and language, we became capable of evolutionarily unheard-of things, such as doing math, writing poems — and debating morality.


The power of the virtual machine harbored by our stone-age brains is ours to wield and to build on. Even if our instincts in morally challenging situations are not to be trusted, we may still be open to persuasion by our betters. In the longer run, we may be amenable to education, putting in place a virtual moral engine that would override the native instincts. History suggests that there is little hope in the traditional remedies offered for the sorry state of the world; as the line from l’Internationale goes, “Il n’est pas de sauveurs suprêmes / ni dieu, ni césar, ni tribun” — “There are no supreme saviors: neither god, nor caesar, nor tribune.” Putting our small and clunky virtual machines to work on cultivating virtue is our only hope.


Further readings

In their thorough review of the state of the art in the psychology of morality, Haidt and Kesebir (2010, p.807) remark on the central role that intuition plays in ethical decision making:

“The modal view in moral psychology nowadays is that reasoning and intuition both matter, but that intuition matters more. This is not a normative claim (for even a little bit of good reasoning can save the world from disaster); it is a descriptive one.”

Clearly, this state of affairs can be improved on — by nurturing and promoting moral reasoning, as suggested earlier. A theory of reasoning that invokes the concept of a Turing machine has been outlined by Zylberberg, Dehaene, Roelfsema, and Sigman (2011).

Stress causes people to rely on intuition more than on reasoning. Margittai, Nave, Strombach, van Wingerden, Schwabe, and Kalenscher (2016) report that subjects who were given cortisol (a hormone that mediates the body’s stress response) engaged more in intuitive than in deliberative thinking, compared to subjects who received a placebo. This finding corroborates an old observation of Tolman’s (1948):

“[. . . ] The child-trainers and the world-planners of the future can only, if at all, bring about the presence of the required rationality [. . . ] if they see to it that nobody’s children are too over-motivated or too frustrated. Only then can these children learn to look before and after, learn to see that there are often round-about and safer paths to their quite proper goals — learn, that is, to realize that the well-beings of White and of Negro, of Catholic and of Protestant, of Christian and of Jew, of American and of Russian (and even of males and females) are mutually interdependent.”

A modern resurgence in the study of evolutionary aspects of ethics has been documented by Ruse (1986). In a review based on extensive field work, de Waal (2006) distinguishes three levels of morality in humans and apes: moral sentiments, social pressure, and reasoned judgment (the latter presumably requiring something that Dennis, Fisher, and Winfield (2015) call a “consequence engine”); according to de Waal, non-human primates have the first, aspects of the second, and only a little of the third. The need to socialize American children to ethical behavior is made poignant by the examples in Grier (1999).

The American pragmatist philosopher John Dewey wrote extensively on both morality and education (Dewey, 1903, 1916). Putnam (2004, p.105) notes in this connection:

“As his [Dewey’s] own primary contribution to bringing about a different sort of democracy, a ‘participatory,’ or better a ‘deliberative’ democracy, he focused his efforts on promoting what was then a new conception of education. If democracy is to be both participatory and deliberative, education must not be a matter of simply teaching people to learn things by rote and believe what they are taught. In a deliberative democracy, learning how to think for oneself, to question, to criticize, is fundamental. But thinking for oneself does not exclude — indeed, it requires — learning when and where to seek expert knowledge.”

Can religion help? Bloom (2012) concludes his review of religion, morality, and evolution with the observation that “There is surprisingly little evidence for a moral effect of specifically religious beliefs.”

Edelman (2008) offers a comprehensive treatment of minds as computational processes, including topics such as native computation in the brain and virtual machines; section 10.2 is an overview of computational ethics. A more compact and accessible treatment of all these themes can be found in Edelman (2012).


P. Bloom. Religion, morality, evolution. Annual Review of Psychology, 63:179–199, 2012.

F. de Waal. Primates and Philosophers. How Morality Evolved. Princeton University Press, Princeton, NJ, 2006.

L. A. Dennis, M. Fisher, and A. F. T. Winfield. Towards verifiably ethical robot behaviour, 2015. arXiv:1504.03592v1.

J. Dewey. Logical conditions of a scientific treatment of morality. Decennial Publications of the University of Chicago, First Series, 3:115–139, 1903.

J. Dewey. Democracy and education. Macmillan, New York, 1916.

S. Edelman. Computing the mind: how the mind really works. Oxford University Press, New York, NY, 2008.

S. Edelman. The Happiness of Pursuit. Basic Books, New York, NY, 2012.

K. C. Grier. Childhood socialization and companion animals: United States, 1820–1870. Society and Animals, 7:95–120, 1999.

J. Haidt and S. Kesebir. Morality. In S. Fiske, D. Gilbert, and G. Lindzey, editors, Handbook of Social Psychology, pages 797–832. Wiley, Hoboken, NJ, 2010. 5th Edition.

Z. Margittai, G. Nave, T. Strombach, M. van Wingerden, L. Schwabe, and T. Kalenscher. Exogenous cortisol causes a shift from deliberative to intuitive thinking. Psychoneuroendocrinology, 64:131–135, 2016.

H. Putnam. Ethics without ontology. Harvard University Press, Cambridge, MA, 2004.

M. Ruse. Evolutionary ethics: a phoenix arisen. Zygon, 21:95–112, 1986.

E. C. Tolman. Cognitive maps in rats and men. Psychological Review, 55:189–208, 1948.

A. Zylberberg, S. Dehaene, P. R. Roelfsema, and M. Sigman. The human Turing machine: a neural framework for mental programs. Trends in Cognitive Sciences, 15:293–300, 2011.