When we look back at the beliefs and practices of our ancestors, we are often shocked at what they found morally acceptable: the public torture of criminals, the trading of slaves, and the subjugation of women.
The history of moral change—change in what is, and is not, considered morally acceptable—encourages greater skepticism about our current moral beliefs and practices. We might like to think we have arrived at a state of great moral enlightenment, but there is reason to believe that further moral revolutions await. Our great-great-grandchildren may well look back at us in the same way that we look back at our great-great-grandparents: with a mixture of shock and disappointment. Could they really have believed and done that?
Taking this possibility seriously leads to two inquiries. First, we should investigate the mechanisms of moral change and revolution. Second, we must consider the ramifications of future moral revolutions for those of us alive today. Although elements of both inquiries are dotted across existing academic disciplines, there has been limited effort to undertake a single coherent and systematic study of them.
So why did historical moral revolutions take place? And what might cause them to occur again? In his Short History of Ethics, the philosopher Alasdair MacIntyre describes the relatively simple morality of the Homeric epics, where goodness and virtue (to the extent that those terms have synonyms in Homer) are associated with performing a set role in a hierarchical warrior culture. But by the time Socrates is annoying the Athenian authorities, the situation has changed dramatically. Denizens of the sea-faring, trade-dependent Greek city-states are much less certain about moral concepts and ideas and have to reshape their moral practices to accommodate new social realities.
MacIntyre’s comparison of moralities is consistent with the finding that geographical, ecological, and economic realities often shape our moral practices. For example, anthropologists have long noted that egalitarian resource-sharing norms are more common in hunter-gatherer tribes than in traditional agricultural societies. They have also noted that moral disruption is more common in open, trade-dependent societies than in closed, self-sustaining ones—as was the case in the Greek example. Indeed, the history of ethics is, in many respects, one of revolutions caused by changes in how open and closed societies have been to other ways of life.
Technology also plays a key role in changing moral beliefs and practices. Technologies give us new powers and choices, altering the balance of costs and benefits associated with decision-making. This can have profound moral consequences. A prime example is the impact effective contraceptives have had on attitudes toward extra-marital (particularly pre-marital) sex. Research by Jeremy Greenwood and Nezih Guner estimates that in 1900 fewer than 6 percent of unmarried women in the US engaged in premarital sex; by 2002, approximately 75 percent did. The researchers argue that this change in sexual practices can be largely explained by the effect contraceptive technology, particularly the oral contraceptive pill, had on the perceived costs and benefits of extra-marital sex. According to subsequent research by Greenwood, Guner, and their colleague Jesús Fernandez-Villaverde, this technology-induced shift in moral beliefs and practices prevails over the countervailing influence of traditional moral institutions, such as religion and law. In other words, even in places where religious organisations condemn sexual liberty or governments constrain it through law, extra-marital sexual activity remains common and widely destigmatized among younger people, due to technology’s impact on moral choices.
This is just one example among many. Medical and digital technologies have had other significant and disruptive effects on our moral beliefs and practices. For example, research by Robert Baker and Philip Nickel has shown how the invention of mechanical ventilators disrupted moral practices associated with death and dying. With this technology, it became possible to keep someone’s body alive after their brain had ceased to function. This led to a new definition of what it meant to die (brain death) and required the resolution of a new set of moral questions. Is it permissible to switch off the ventilation machine after brain death? Would this be equivalent to killing someone? Can we keep people artificially alive in order to harvest organs for the purposes of donation?
Likewise, the smartphone has not only changed our day-to-day behaviors but has also reshaped existing values and norms. The pressure placed on the value of privacy, and the associated claim that privacy is “dead,” is the most obvious example. A more subtle example is how smartphones and social media have changed the way we value daily experiences. Having a meal at a restaurant, for instance, is no longer simply an experience to be enjoyed in the moment. For some people, it is a moment to be captured, shared, and monetized.
There are no formal estimates of the rate at which society undergoes moral revolutions. Indeed, the term “revolution” can be slightly misleading. Some revolutions may be more akin to evolutions, occurring slowly and gradually and only becoming apparent in retrospect. Others may be stark and sudden. Nevertheless, the fact that moral disruption tends to be associated with more open societies and greater technological innovation suggests that we might expect to see more of it in the future than we did in the past.
This brings us to the other inquiry prompted by the fact of moral change. What are the ramifications of future moral revolutions for those of us alive right now? Most of us care about the world we are leaving to our children. This is the central moral message of the ecological and environmental movements. This message has found a home among proponents of “longtermism,” a philosophy espoused by various Silicon Valley gurus and members of the effective altruist movement, such as Nick Beckstead and William MacAskill. Longtermism maintains that positively influencing the long-term future of humanity is a key moral priority for our present era.
Some people argue that longtermism is a dangerous idea because it can encourage us to prioritize speculative futures over the actual present, but you don’t have to embrace the most extreme versions of this philosophy to agree that future generations are a source of present moral concern. If we concede this, we should also agree that what matters is not just the physical world they will inhabit, but the moral world too. If their moral framework will be radically different from our own, we need to factor that into how we plan for the future and which actions to take now.
There are two clear ways of doing this. The first is to take a progressivist view of the future—in other words, to assume that future generations will inhabit a better moral world, at least according to some current values, than our own. They will be more enlightened, tolerant, egalitarian, and so on. Our job, in the present, is to accelerate the transition to this more progressive future. A typical progressivist argument might focus on the need to expand the moral circle. (The “moral circle” refers to the set of individuals, animals, or things to which we owe moral duties and whose existence we treat as a matter of moral concern.) The history of morality can, to some extent, be told as a tale of continual outward expansions of the circle of moral concern, from family to tribe to nation and, eventually, to all of humanity. Some progressivists, such as Jacy Reece Anthis and Eze Paez, argue that we should continue this outward expansion, including all sentient animals and, perhaps one day, sentient machines.
Another option is to take a conservative or precautionary approach, to assume that future morality is likely to be worse than present morality. Proponents of this view can justify their caution by pointing to historical examples in which societies, in the name of progress, got things terribly wrong. For example, the 20th century’s dalliance with authoritarian communism—often pursued by people who saw themselves as moral revolutionaries—resulted in a number of moral catastrophes (famines, mass imprisonment, draconian thought control). Why risk repeating those mistakes?
The conservative perspective is tempting: There seem to be more ways to get values wrong than right. This is a point often made by those worried about the risks of superintelligent AI, such as Nick Bostrom and Eliezer Yudkowsky. They argue that the set of value systems that are “friendly” to humans is narrow and fragile. It is all too easy to fall outside that range of values and do things that are contrary to human flourishing. Given the history of moral error and the fragility of values, our job should be to preserve the existing moral order as much as possible, instead of seeking progressive change. Present-day regulation of emerging technologies is often guided by this precautionary ethos. We have our current system of values—freedom, dignity, equality, and so on—and we need to ensure that these values are not harmed or undermined by morally disruptive technologies.
Progressivism and conservatism are not mutually exclusive, at least not across the full range of moral concerns. We might take a progressive attitude toward certain values, thinking that we need to expand them or cast them off, and a conservative approach to others that seem too precious to risk any loss. But both progressivism and conservatism tend to imply a great deal of certainty about future morality. They assume that we can predict whether the future will get things right or wrong. Such certainty is not warranted. If morality does change radically over time, perhaps we should be humbler about our present moral beliefs and attitudes.
One way of embracing this uncertainty is to adopt a stance of axiological open-mindedness toward the future. We can get a sense of what this might entail by considering how it works today.
In his book The Geography of Morals, Owen Flanagan explores the moral differences between cultures and argues that we can learn from this diversity. As an example, he points out that many Western cultures are wedded to an ethic of individualism, while Buddhist cultures reject that ideology, arguing that fixation on the self and its flourishing is often a source of suffering and frustration. At first, these value systems might seem alien to each other, but both sustain meaningful ways of life. What’s more, people from these cultures often experiment with elements of both. Flanagan argues that there are often good reasons for them to do so and suggests that we remain willing to experiment with different moral views.
Where Flanagan focuses on geographical moral diversity, we can focus on temporal moral diversity. In other words, we can approach the moral future with a degree of curiosity and excitement, neither as zealots promoting change nor reactionaries opposing it, but as tourists willing to experiment with it.
How can we do this if we don’t know what the future holds? Two strategies present themselves. First, we can bear in mind the oft-quoted line from William Gibson: The future is already here; it’s just not evenly distributed. Scattered around our world today, perhaps in emerging subcultures and the imaginations of science fiction authors, are the seeds of future moralities. If we are willing to explore them, we can get a sense of where we might be headed.
Second, we can design our social institutions so that they enable greater normative flexibility. One way to do this would be to use abstract standards—rather than precise rules—when legislating for the future. For example, we can create laws that focus on transportation and communication in general, rather than on particular modes of transportation and communication, such as the automobile and phone. Another option would be to enable easy amendment of any formal rules (laws, regulatory codes, or guidelines) to streamline adaptation.
More important than either, however, would be adopting a more experimentalist approach to social morality. Instead of just waiting to see what will happen, we should actively create spaces (perhaps we could call them “moral sandboxes”) for subcultures to test the moral waters without committing an entire society to a new moral code. For instance, there are emerging technological developments in brain-to-brain communication that might allow people to feel what other people feel, see what they can see, or share their thoughts. To some, this nascent technology is terrifying, an attack on our ethic of individualism, and a step toward a Borg-like society. To others, it is exciting, holding out the possibility of greater intimacy, empathy, and collaborative problem-solving. Instead of committing to either of these views right now, we could facilitate controlled and carefully observed experimentation to explore the effects of this technology on existing values, such as autonomy, self-control, intimacy, and empathy.
There are, of course, limits to what we should experiment with. The Nazis were moral revolutionaries, but not in a good way. We cannot be so open-minded that we lose all sense of right and wrong. There are, perhaps, some values that should remain foundational, but there is a balance to be struck between the extremes of progressivism and conservatism.