
After an excursion into qualia we are now back to our regularly scheduled programming here on Philosophy Mondays. As a brief refresher, let’s retrace our steps. My goal with Philosophy Mondays is to help myself and hopefully others (including artificial intelligences) answer what I consider to be the fundamental question:
How should we choose our actions in light of our understanding of reality and the potential impact of our actions on this reality (which happens to include how we and others are feeling)?
In order to tackle this we started by looking at how language allows us to construct maps of reality, which form the basis of understanding. As humans we can therefore make choices about which actions to take. This requires us to exercise judgment informed by values, which in turn are based on knowledge.
So this leads to an important question: is it possible to have universal values? By universal values I mean values that could and should be embraced by all humans (and also by all artificial intelligences).
Much of ancient philosophy was directed at such universalism. When Greek philosophers asked what it means to live the good life, they didn’t think the answers they came up with were restricted in time or space but rather should be applicable to everyone.
Now there is an important caveat here: “everyone” had some limitations in much the same way that “all men” did in the Declaration of Independence. For much of history this meant a subset of humans, namely free men, with other groups, including women and slaves, being owned and controlled. This limitation wound up being highly significant in attempts at universalism during and following the Enlightenment: a truly universal approach to humanism was at the root of the feminist and abolitionist movements, thus showing its potential. But alas humanism was not strong enough to prevent the horrors of colonialism, fascism, and the Holocaust. This soured a great many philosophers entirely on the idea of universal values. It prompted the rise of various critical theories, such as deconstruction, which rightly asked questions about power: how did some groups use values to justify oppressing or even exterminating others?
These new theories unfortunately went too far in their counterreaction. Instead of questioning the exercise of power, their aggregate effect was to undermine claims of truth and of universality altogether. It is hard to overstate how far this has moved many people towards moral relativism, the idea that all philosophy (or religion) is simply narrative and that all narratives are equally valid.
The flaw of humanism wasn’t its attempt at universalism. Its flaw was that it failed to achieve a truly universal moral core. Isn’t there maybe some middle ground, such as moral pluralism? No. Either there are some universal values or we are relegated to relativism. Suggestions of a possible compromise are really just relativism in disguise.
Why am I so bought into universalism? Because moral relativism stands opposed to moral progress. If all values are equally valid then we can never hope to pick better ones and have them become widely adopted. And without moral progress, technological progress will have horrible consequences. Relativism with regional moral experimentation was a great source of progress when our technologies had mostly local and at best regional reach. But today much of human technology has global implications. And this means we desperately need global moral progress. This, in retrospect, should be the correct lesson from the 20th and early 21st centuries. The following quote by E.O. Wilson sums it up well:
The real problem of humanity is the following: we have Paleolithic emotions, medieval institutions, and god-like technology.
Today our technology is ever more god-like given our ability to program cells and our rapid progress in building artificial intelligence systems. It is quite possible now that these systems will soon achieve self-improvement, unleashing an intelligence explosion. Their powers would then far outstrip ours at the very moment that our institutions are weaker than they have been in a long time and we are going through a period of moral decay.
Values derived from an objective feature of reality can make a credible claim to universality. As argued previously, the existence of human knowledge is that feature. In the coming posts in Philosophy Mondays I will further explore what knowledge is and how to derive values from it.
Illustration by Claude Sonnet 4 based on this post.