The power of population is indefinitely greater than the power in the earth to produce subsistence for man.
~ Thomas Malthus (1766–1834)
Few people doubt that overpopulation is a serious problem on our planet. Assuming that the human population grows faster than the food supply, Thomas Malthus (1798) predicted that disaster would ensue with mathematical certainty. More recently, Garrett Hardin (1968) cast our Malthusian dilemma in game-theoretic terms. The conflict between the mandates of human reproduction and the resilience of the environment poses a noncooperative game. Disaster is a matter of tragic inevitability, unless humanity somehow manages to change the game. Still more recently, Jared Diamond (2005) echoed Malthus and Hardin when reviewing the rise and fall of various human civilizations. Diamond sought to exorcize the specter of tragic collapse for our own global civilization, but to do so he had to ignore the implications of his own analysis. The dilemma is this: Being fundamentally self-interested, humans will seek to reproduce and consume resources. Each individual is better off not exercising self-restraint. Yet the collective – and hence every individual – ends up suffering. The collective – and hence every individual – would be better off if each individual limited the offspring begotten and the calories consumed. Diamond puts his hopes in humanity’s ability to learn from the failed civilizations of the past and to read the signs and portents abundantly available now. This hope has an air of desperation because there is little evidence that our current civilization has a unique willingness to respect wisdom.
Let’s review how Hardin described the human tragedy as a commons dilemma. A community of herdsmen has shared access to a pasture. Each herdsman is better off if he puts more cattle on this pasture. The total effect, however, is overgrazing and the destruction of the commons. If the herdsmen could somehow agree to limit the number of cattle they put out, and if they could enforce that agreement, all would be better off. The trouble is that they can’t agree. As a resource-replenishment dilemma, the commons dilemma gets at the second part of Malthus’s economy: the food and other resources humans need for survival. What about the issue of reproduction itself? A moment’s reflection reveals that the distinction between reproduction and consumption carries little weight. The latter is bound up with the former. Adding a hungry baby to the population increases the demand for food. It is in every individual’s Darwinian interest to have offspring, and to have more offspring than the neighbors do. Hence overpopulation, and hence the return of tragedy even if the resource dilemma were solved for a stable population.
The commons dilemma is characterized by the following set of payoffs. A free-rider, who defects when others cooperate, harvests the highest payoff T (for “Temptation”). Mutual cooperators are “rewarded” with payoff R, which is lower than T. Mutual defectors reap the “Penalty” payoff P, which is lower than R, and unilateral cooperators are “suckered” with payoff S, which is the lowest. The chain of inequalities T > R > P > S, together with the condition 2R > T + S, makes the commons dilemma a special type of prisoner’s dilemma.
Game theorists analyze sets of payoffs to find out which strategy a rational person will play. Users of game theory carefully analyze specific real-world contexts to find out which particular game provides the best fit. Then they consult formal game theory to identify the rational strategy, and they compare theoretical predictions with observational data. Often it turns out that people cooperate more than game theory predicts (Krueger, DiDonato, & Freestone, 2012). Such findings raise hopes that tragedy may be averted, or they merely lead to the less sanguine conclusion that tragedy has to wait. According to Malthus, we should all be dead already. He did not foresee technological advances in food production and resource extraction, and he did not foresee that the members of wealthy nations would lose their interest in unfettered breeding. Yet it appears that Malthus has only been postponed, not refuted.
If breeding is a commons dilemma, outbreeding others is the defecting strategy, and it is dominant: no matter whether others cooperate or defect, the defector is better off (because T > R and P > S). But perhaps breeding is not a commons dilemma. Perhaps the game of life on a finite planet is a volunteer’s dilemma (VoD), or a still different dilemma, yet unnamed. Consider a two-person VoD. A volunteer abstains from breeding for the sake of a fragile Earth. For this, she reaps the payoff R regardless of what the other person does. A defector reaps the payoff T if the other person volunteers, and the payoff P if the other person defects. Notice that there is no dominant strategy. A person is better off doing whatever the other one is not doing. When the “players” cannot communicate or make enforceable contracts, they must resort to choosing their response probabilistically. Setting the defector’s expected payoff equal to the volunteer’s R shows that volunteering with probability (R-P)/(T-P), which reduces to the R/T of Diekmann (1985) when P = 0, is a Nash equilibrium strategy; it cannot be exploited. It is not the most efficient strategy, however. Players would earn more if they all volunteered with the slightly higher probability 1-(T-R)/(2(T-P)).
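Both probabilities can be derived and checked numerically. A sketch in Python, with illustrative payoffs satisfying T > R > P (the values are assumptions, not from the text):

```python
T, R, P = 4.0, 3.0, 1.0  # illustrative VoD payoffs, T > R > P

# Equilibrium: choose the volunteering probability p that makes a defector
# (expected payoff p*T + (1-p)*P) indifferent to volunteering (payoff R).
p_nash = (R - P) / (T - P)

# Efficient common rate: maximize E(q) = q*R + (1-q)*(q*T + (1-q)*P),
# which yields q = 1 - (T-R)/(2*(T-P)).
q_eff = 1 - (T - R) / (2 * (T - P))

def expected_payoff(q):
    """Each player's expected payoff when both volunteer with probability q."""
    return q * R + (1 - q) * (q * T + (1 - q) * P)

print(p_nash, q_eff)  # the efficient rate exceeds the equilibrium rate
print(expected_payoff(q_eff) > expected_payoff(p_nash))  # True
```

At the equilibrium rate, each player’s expected payoff equals R, the volunteer’s payoff, which is exactly the indifference that makes the strategy unexploitable; the efficient rate pays more but invites defection.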
But then, the game of life may not be a volunteer’s dilemma. The assumption that the volunteer does not care about the choices of others seems unrealistic. Consider again an idealized, if reductive, example with two players. Each has two children (and no mate). Each can choose the status quo and do nothing, or make a sacrifice. If both choose the status quo, their children die and they themselves are sterilized. If both make the sacrifice, everyone dies. However, if only one of them makes the sacrifice, she gets to keep one child and her own life, while no one dies in the other family. In other words, the payoff ranking is T > S > P > R, and the mixed-strategy equilibrium is to make the sacrifice with probability (S-P)/(S-P+T-R). The game is similar to the VoD because there is no dominant strategy and each player wishes to know the other’s choice so that she herself might do the opposite. If, as implied by the example, T = 3, S = 2, P = 1, and R = 0, the rational player will make the sacrifice with p = .25. In this game, players would do better if they all sacrificed with the slightly higher probability (S+T-2P)/(2(S+T-P-R)), which is .375 for these values. Again, however, as in the VoD, making a sacrifice with this probability is not an equilibrium strategy. It only works if both players commit to it. If one knows that the other commits, there is an incentive to defect, which is why this game, like the VoD, is a difficult dilemma where rational play is not optimal play.
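The indifference condition behind these numbers can be verified directly from the example’s own values. A minimal sketch, deriving the equilibrium probability from the requirement that sacrifice and status quo pay equally well:

```python
T, S, P, R = 3.0, 2.0, 1.0, 0.0  # the example's payoff ranking T > S > P > R

# Equilibrium: the sacrifice probability p that equates the two choices,
#   sacrifice:   p*R + (1-p)*S
#   status quo:  p*T + (1-p)*P
p = (S - P) / ((S - P) + (T - R))

sacrifice  = p * R + (1 - p) * S
status_quo = p * T + (1 - p) * P
print(p, sacrifice, status_quo)  # 0.25 1.5 1.5 (indifference holds)

# Jointly committing to a somewhat higher sacrifice rate pays better,
# but it is not an equilibrium: each player would rather defect from it.
def expected_payoff(q):
    return q * (q * R + (1 - q) * S) + (1 - q) * (q * T + (1 - q) * P)

q_eff = (S + T - 2 * P) / (2 * (S + T - P - R))
print(q_eff, expected_payoff(q_eff) > expected_payoff(p))  # 0.375 True
```

With these payoffs the equilibrium sacrifice probability is .25 and the jointly efficient rate is .375, which earns each player more only so long as both stick to it.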
The game is also difficult because the psychological alternatives to game theory that have been proposed in the context of the prisoner’s and commons dilemmas are of no use here. Social projection, that is, the idea that others will act like oneself, is helpful in the prisoner’s dilemma because it encourages individuals to cooperate. In the game of life, projection creates only a mild preference for defection, which classic game theory already assumes without making psychological assumptions beyond simple self-interest and rationality. Social preferences do not matter either. Benevolence, that is, caring about the welfare of others, does not tell a person how to make a choice here. Indeed, benevolence can be counterproductive. If both players make the sacrifice, thinking that this way the other player might reap the temptation payoff, they will realize the worst outcome for both. A taste for fairness (or “inequality aversion”) is not helpful either. Indeed, the game demands that both players find a way to steer clear of equality. This game shows that social preferences are not categorically moral goods. The details of the strategic context matter.
Like Hobbes, Hardin felt that it falls to an authority (the state) to regulate behavior because individual rationality is not up to the task. Various governments have tried to influence how the people breed, with little success. China’s one-child policy is repressive, and the Chinese are ingenious in undermining it. Perhaps the partial failure of such a policy is intended, because if the policy were perfectly successful, the eventual disappearance of the population would be a mathematical certainty. Other governments try to encourage population growth, though for different reasons. Nazi Germany rewarded fecund mothers because kids would grow into soldiers and more mothers. The Federal Republic of Germany has rewarded fecundity because the unregulated trend of population shrinkage raises economic problems in the long term (Who will work and pay taxes to support your retirement?). As the state’s tweaking of the payoffs has only small, if non-zero, effects, the game of life remains for us to play.
The game of life is perhaps the most poignant of all social dilemmas because we ourselves are the payoffs. When we exploit the seas by overfishing, or when we pollute the air with exhausts, we destroy commons, which will ultimately destroy us. That so many of us have so many children aggravates all the other dilemmas that confront us. If only Jared Diamond would write another book and tell us how to avert that tragedy. Instead, Diamond has burrowed his academic head even deeper into the past. Last year, he explored the lives of hunter-gatherers (Diamond, 2012). Extracting lessons relevant to current problems can only be more difficult than extracting lessons from the collapse of civilizations in historical time.
Diamond, J. (2005). Collapse: How societies choose to fail or succeed. New York: Viking.
Diamond, J. (2012). The world until yesterday: What can we learn from traditional societies? New York: Viking.
Diekmann, A. (1985). Volunteer’s dilemma. Journal of Conflict Resolution, 29, 605–610.
Hardin, G. (1968). The tragedy of the commons. Science, 162, 1243–1248.
Krueger, J. I., DiDonato, T. E., & Freestone, D. (2012). Social projection can solve social dilemmas. Psychological Inquiry, 23, 1–27.