The Techno-Human Condition
What is the psychology of a techno-human cognitive network?
Posted May 4, 2011
We were challenged several years ago to undertake a study of how powerful emerging technologies (that is, the Five Horsemen: nanotechnology, biotechnology, robotics, information and communication technology, and applied cognitive science) were affecting human capabilities from an environmental perspective. The results of that effort are captured in our book The Techno-Human Condition. It did not surprise us to find that environmentalism had little of use to say about whether humans should or should not use these emerging technologies to augment their mental and physical performance, a question that turned out to be more interesting for its ability to stir passionate debate than for its ability to shed light on much of anything. But trying to make sense of that debate did lead us to a deeper, and much more important, observation about technology, truth, control, and what it means to be human in a world where the human, the natural, and the technological are increasingly indistinguishable.
Just think: you walk into a university classroom today. You, and every other student, flip open your computers, and automatically you couple with Google, thereby gaining access to the accumulated factual detritus of human existence as we know it (when a brand becomes a verb, you know something with serious cultural chops is going on). You open a side discussion on Facebook or IM, and when a difficult question is posed to the class, you're fortunately online with that cute girl from physics who knows the answer (she always does). But you have backup, because you just got an iPhone app that'll cover today's material nicely, and in two minutes, rather than the laborious hour that the prof is taking up front. With slides, no less.
Or, unfortunately, you're in combat in AfPak. Fortunately, you're getting fed plenty of data from the increasingly autonomous robots that inhabit your space, real and virtual. Unfortunately, it's way too much for your Cartesian brain. That's why technology systems increasingly pick up the slack: they survey the battlefield in ways you can't; identify potential threats; verify and prioritize them (often correctly, but not always); check your sensory inputs to determine which ones aren't overloaded; and feed you the information you need to stay alive when you need it. You hope.
In short, augmented cognition. Or, put another way, in a world where complexity is already overwhelming, and yet continues to accelerate, networked cognition is becoming increasingly critical: cognition as an emergent property of techno-human networks, rather than of the individual Cartesian brains that we are all so proud of. (An early discussion of this insight can be found in Hutchins's 1995 classic Cognition in the Wild.) The idea of cognition as an emergent, networked function raises some big questions about pretty much everything we have come to depend on in the world today: rationality, individual moral agency, and the idea that knowledge is power. For example, can components of a techno-human cognitive network (individual people, that is) understand the emergent cognitive products of that network? Can they hope to modify the output of the network in ways that they might prefer, for example to pursue and achieve morally desirable ends?
Put at its most basic level: what is the psychology of a techno-human network? And, as a shout-out to the increasingly dysfunctional myth of the Cartesian individual, what is the effect on human psychology of the dawning realization that, in some fundamental way, the world has grown too complex for us to understand it as individuals?