Most of the new field of neuroeconomics is about using economics as a language with which to study the neural aspects of decision-making—“value,” “subjective value,” “revealed preferences,” “risk.” Each of these terms has a well-defined meaning in the context of economics. Value, for example, is the amount of reward you can expect to get. Subjective value is the internal modification of that reward based on your preferences and goals. Because we cannot observe subjective value directly, we have to infer it from behavior; preferences are “revealed” by the choices people actually make. Risk is the second moment of the reward distribution, that is, the variability within that distribution.
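
To make these definitions concrete, here is a minimal sketch (using a made-up 50/50 gamble, not data from any study) of value as the first moment and risk as the second central moment of a reward distribution:

```python
# A made-up two-outcome gamble: 50% chance of $100, 50% chance of nothing.
outcomes = [100.0, 0.0]
probs = [0.5, 0.5]

# Value: the first moment (expected reward).
value = sum(p * x for p, x in zip(probs, outcomes))                 # 50.0

# Risk: the second central moment (variance around that expectation).
risk = sum(p * (x - value) ** 2 for p, x in zip(probs, outcomes))   # 2500.0

print(value, risk)
```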

The idea is that we can use this language developed by economists to study decision-making as an entry point into the neuroscience of decisions. Many papers have looked for value representations in tasks, or have controlled for value and looked for risk. My laboratory certainly does this too: We use the language of economics to develop specific tasks that allow us to talk about “value,” “risk,” and “reward” in quantitative ways. But I want to argue here that neuroeconomics is important not because it gives us new economics terms to fix (set) the neuroscience of decision-making, but rather because it allows neuroscience to fix (repair) microeconomics.

The problem is that the basic microeconomic definition of humans as simple value maximizers doesn’t seem to fit how humans actually behave. Of course, this has been known for decades. There is an entire field of behavioral (and even experimental) economics that adds factors to that simple story to account for experimental findings. Consider the classic differences in how humans respond to gains and losses: humans are risk-averse in gains but risk-seeking in losses. Or the surprising effect of anchoring, in which being cued with a high or a low number unrelated to the decision in question leads to differences in value estimation.
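
One standard way behavioral economists capture the gain/loss asymmetry is a value function that is concave for gains and convex (and steeper) for losses. Here is a sketch using the illustrative parameter fits reported by Tversky and Kahneman (1992):

```python
def prospect_value(x):
    """A prospect-theory-style value function: concave for gains,
    convex and steeper for losses (illustrative parameters from
    Tversky & Kahneman, 1992)."""
    alpha, beta, lam = 0.88, 0.88, 2.25
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** beta

# Risk attitudes fall out of the curvature:
print(prospect_value(50) > 0.5 * prospect_value(100))    # True: a sure gain beats the gamble
print(prospect_value(-50) > 0.5 * prospect_value(-100))  # False: the gamble beats a sure loss
```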

The problem (I argue) is that microeconomics (including even behavioral and experimental economics) has ignored the computational issues of information processing. Making a decision is about using information from the past to take better actions in the future—you need to use the similarities and differences between the current situation and past situations to guide your potential actions. How you categorize, process, and evaluate those similarities and differences changes which actions appear best in the future.

The argument being laid out here goes beyond the idea of satisficing or limited resources. The concept of satisficing argues that if we had infinite resources, we could evaluate all the possibilities and get an optimal answer, but because we have limited resources (and limited time), we have to stop the process early, which leads to suboptimal answers. Much of the satisficing argument is that we side-step this speed problem by using heuristics: calculations that get the answer close enough often enough to be useful. I argue, instead, that there is no optimal solution. The best action depends on how one relates one’s perception of the present to one’s memory of the past; there is an information-processing step that affects which actions are actually selected. That information process changes the computation itself.
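
For contrast, here is a toy sketch of the satisficing idea itself (my own illustration, not from the economics literature): stop searching once an option clears an aspiration level, rather than evaluating every option:

```python
import random

def maximize(options, evaluate):
    """Evaluate everything; guaranteed optimal, but cost grows with the option set."""
    return max(options, key=evaluate)

def satisfice(options, evaluate, aspiration):
    """Simon's satisficing: take the first option that is 'good enough'."""
    for option in options:
        if evaluate(option) >= aspiration:
            return option
    return None  # no option met the aspiration level

options = [random.random() for _ in range(10_000)]
print(maximize(options, evaluate=lambda x: x))                     # checks all 10,000
print(satisfice(options, evaluate=lambda x: x, aspiration=0.95))   # usually stops early
```

Note that even this framing presumes an optimal answer exists to be approximated; the claim above is that the answer itself is not well defined independent of the information processing.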

In a sense, this is like the issue raised by David Marr in his famous introduction to his book Vision, in which he lays out three levels of analysis—computation (what computation is the system performing), algorithm (how does the system represent data, what algorithms are being used), and implementation (how is the algorithm actually implemented). Much of microeconomics attempts to solve a fourth level, which we can call teleological (after Aristotle): What computations should the system perform? Behavioral (and experimental) economics tries to solve the computational level without caring about the other two levels. But what we’ve learned from neuroscience is that the algorithmic and implementational levels matter.

One of my favorite examples of this is the cash register example initially used by Marr—the computational level is addition, the algorithm is addition with an overflow at a maximum level (eventually the numbers get too big to store), and the implementation level is a mess of gears and levers (or transistors). The teleological level is adding costs and making change, the details of which depend on the currency being used. The key here is that the algorithmic level does not actually perform the computation perfectly, and that matters. Even more importantly, the implementation level makes a difference, depending on when gears break or when transistors misfire.
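
To see how the algorithmic level can fail to match the computational level, here is one toy rendering of Marr’s register (my own, with an arbitrary 16-bit capacity and wrap-around overflow): addition that is correct until the total exceeds what can be stored:

```python
MAX_STORABLE = 2 ** 16 - 1  # toy register capacity: the algorithm's limit, not addition's

def register_add(a, b):
    """Addition as a bounded register implements it: correct until the
    total exceeds what can be stored, at which point it overflows (wraps)."""
    return (a + b) % (MAX_STORABLE + 1)

print(register_add(40_000, 20_000))   # 60000: matches true addition
print(register_add(40_000, 40_000))   # 14464: the algorithm diverges from the computation
```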

Coming back to the two examples mentioned above (the gain/loss asymmetry and anchoring), both derive from neurophysiological, psychological, and environmental implementation issues.

Because we do not have full knowledge of the state of the world, we are always estimating that state. That means there is always uncertainty about whether we have noticed all of the important cues. Following this to its conclusion creates a difference between disappointment (loss of expected reward, which likely leads to a redefinition of the situation) and punishment (pain, aversive signals, which lead to avoidance of the situation). There must be differences between representations of euphoria (quality of reward), reinforcement (euphoria better than expected), and disappointment (loss of expected reward) on the one hand, and dysphoria (quality of antireward/punishment), aversion (dysphoria worse than expected), and relief (loss of expected punishment) on the other. These differences lead directly to differences in representations of gain and loss, and the inclusion of uncertainty leads to risk-aversion in gains and risk-seeking in losses.
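
One way to see why these must be distinct signals is to write each one as a separate comparison against expectation. The following is a schematic sketch of my own, not a model from the literature:

```python
def outcome_signals(expected_reward, obtained_reward,
                    expected_punishment, obtained_punishment):
    """Schematic: separate reward and punishment channels, each compared
    against expectation. Six distinct signals fall out of the comparison."""
    return {
        "euphoria":       obtained_reward,                                    # quality of reward
        "reinforcement":  max(obtained_reward - expected_reward, 0),          # better than expected
        "disappointment": max(expected_reward - obtained_reward, 0),          # expected reward lost
        "dysphoria":      obtained_punishment,                                # quality of punishment
        "aversion":       max(obtained_punishment - expected_punishment, 0),  # worse than expected
        "relief":         max(expected_punishment - obtained_punishment, 0),  # expected punishment avoided
    }

# An expected reward that never arrives is disappointment, not dysphoria:
print(outcome_signals(expected_reward=10, obtained_reward=0,
                      expected_punishment=0, obtained_punishment=0))
```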

Because we have to calculate value, there is an information process that estimates value—that estimation likely includes a content-addressable memory settling process. Content-addressable memory is a well-understood mechanism by which a network fills out a partial memory through its internal interactions. This means that these calculations are dependent on priming. Anchoring is a form of priming—changing the starting point can change the conclusion.
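
Here is a minimal sketch of a content-addressable (Hopfield-style) memory, with arbitrary toy sizes of my own choosing: the network fills out a corrupted cue through its recurrent weights, and the starting state determines which stored memory it settles into. That is exactly why changing the starting point (priming, anchoring) can change the conclusion:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two stored patterns (the "memories") in a toy Hopfield network of 64 units.
patterns = np.sign(rng.standard_normal((2, 64)))
W = patterns.T @ patterns     # Hebbian weights: sum of outer products
np.fill_diagonal(W, 0)

def settle(state, steps=20):
    """Repeatedly update all units until the network relaxes toward an attractor."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1.0   # break rare ties arbitrarily
    return state

# Start the network from a corrupted version of memory 0: the starting
# point (the "prime") determines which memory the network settles into.
cue = patterns[0].copy()
cue[rng.choice(64, size=12, replace=False)] *= -1   # corrupt 12 of 64 units
settled = settle(cue)

# Overlap with each stored memory; 64 means full recovery of that memory.
print(int(settled @ patterns[0]), int(settled @ patterns[1]))
```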

These are just two examples in which the implementation level affects the algorithmic level, which in turn affects the computational level. And because we are social creatures, evolved to live in a society in which our interacting cohorts are performing those modified computations, the teleological level is going to depend on that process as well.

If we want to get microeconomics right—in the sense of accurately describing reality—then we have to understand the underlying process that produces that calculation. Not doing so would be like trying to explain microscale physics without quantum mechanics because it’s too weird and doesn’t fit some preconceived mathematical notion of how reality should be. And just as macroscale physics depends on quantum mechanics (the machine you are reading this on depends on quantum effects within transistors, lasers depend on quantum interactions between photons and electrons), trying to build macroeconomics on an incorrect microeconomics is going to give you garbage.

About the Author

A. David Redish, Ph.D.

A. David Redish, Ph.D. is a professor in the Department of Neuroscience at the University of Minnesota.
