
Between work, projects and travel I have missed many weeks of writing on Philosophy Mondays. Thankfully during this time I have also gained clarity that my fundamental goal with this series, which is to argue for a universal moral core on an objective basis, is more important than ever. As a result I am feeling a renewed sense of urgency for this work.
I was utterly fascinated by the conversation between Dwarkesh Patel and Richard Sutton, which is worth listening to in its entirety. Quite early on Patel makes the point that humans are in fact different from other animals because we have knowledge. Sutton rejects the salience of this and says he’s more interested in the commonality, asserting that “humans are animals.” Our commonality is likely relevant for understanding some basic learning mechanisms, but the bulk of human learning occurs in an entirely different way from animal learning, because we can learn from knowledge and animals cannot (they don’t have knowledge the way I have defined it). And of course access to knowledge is also what accounts for the surprising success of LLMs.
At a later point in the conversation, we have the following exchange:
Richard Sutton 01:04:16
Are there universal values that we can all agree on?
Dwarkesh Patel 01:04:21
I don’t think so, but that doesn’t prevent us from giving our kids a good education, right? Like we have some sense of wanting our children to be a certain way.
Denial of universal values is a widely held position, and yet I find it jarring to hear it expressed. In this series I have already stated that I am firmly in the camp that universal values exist. Our job is to find and elucidate them. And this is a job for us humans that we should not abdicate to the new gods we are trying to invoke, because we don't know what we would get.
I believe it is important to invert the question. What is the alternative to universal values? How will technological progress not end in ruin for humanity as a whole in the absence of some universal values? I recently went to China to give several talks about my book The World After Capital. As part of the preparation I revisited one of the core concepts from the book, the “space of the possible,” which grows as we develop new technology. It occurred to me that as we grow this space we simultaneously shrink the planet. Hunter-gatherers walked. In the Agrarian Age we added horses and sailboats. In the Industrial Age we worked our way up to jet planes. Today we have a global network operating at the speed of light and have built hypersonic missiles that can reach anywhere on the globe in less than an hour. There is no alternative to universal values in a world of sufficient technological capability. It is quite easy to see this in the extreme when you consider a weapon that would blow the planet to smithereens (picture Alderaan -- also the cover image for this post). It must be a universal value to not make use of such a weapon.
While this may seem obvious, it gets tricky when there is disagreement about the potential effects of a technology. This is central to the discussion around existential risk from AI, as reflected in the title of Eliezer Yudkowsky and Nate Soares’s book “If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All.” The universal value of not destroying the planet and all life on it makes sense in the context of a weapon designed specifically to do that, but in the case of AI a lot depends on what you believe. There are some who reject the existence of existential AI risk outright – a position I won’t engage with here. What’s more interesting is a position such as Sutton’s: he seems perfectly fine with AIs succeeding humans, largely because he has concluded that “[...] succession to digital intelligence or augmented humans is inevitable.” And if something is inevitable, then why worry about it?
Well, because even if you believe that succession is inevitable, there are possible good and bad outcomes from the position of humans. And here too, the central pivot is universal values. Working towards those, and towards ways of aligning (at least some) ASIs with them, is one of the few paths that could lead to good outcomes. The extreme view that alignment is impossible is intellectually more interesting than the outright denial of existential risk. The only consistent follow-on argument, though, is that we urgently need universal values in order to stop the development of ASI. How else would we sustain a global and actually enforced slowdown of ASI development?
One interesting objection that I have heard to universal values is that they cannot be converged upon, because at that very moment there would be no more outgroups (only one gigantic ingroup) and cohesion requires an outgroup. I don’t think this is true, for two reasons. First, there will always be a small group of people who would blow up the planet (or at least kill all humans). While this outgroup is likely small, it does exist. Second, we may not be alone in the universe, and even if we make progress on “universal” values here on Earth, there are potentially large outgroups that we may have to defend against eventually.
So more to come in Philosophy Mondays.
P.S. I am currently reading an interesting book called “Science and the Good” with the subtitle “The Tragic Quest for the Foundations of Morality” – obviously of great interest given my program of establishing an objective basis for universal values.
1 comment
It seems to me that the longing—I'll tentatively call it longing—for universal values, values shared by all people, overlaps with what the word morality refers to. The fact that the search for the foundations of morality has something tragic about it, as stated in the subtitle of the book by Davison Hunter and Paul Nedelisky, is probably due to the means by which one searches for it. Morality lies deeper and seems more powerful, but also more elusive, than ethics and conscience. Morality can only be felt, lived, recognized, and understood individually at first. Making moral principles into rules of conduct for everyone, as can happen in institutionalized religions, is a misuse of morality. Morality is not the opposite of freedom.
When we ask about the foundations of morality, do we not leave the material world behind and enter a realm that, by its very nature, cannot be grasped physically? And if we try to reduce it to neurological, chemical, and other brain processes, it simultaneously loses all meaning. Thus, the concept of morality—and this ultimately also applies to the shared, unbreakable values of humankind—leads us into a realm that is not sensory, yet is nevertheless highly real. The experience and unifying significance of morality thus breaks with the traditional scientific quest, which is always focused on relationships that can ultimately be represented physically and mathematically.
The question of what connects all human beings cannot be answered in sensory, material, or mathematical terms. At least not if the answer is to have a deep, powerful vitality and embody human freedom, as the word morality conveys. The question of shared fundamental values and morality leads us into the realm of spiritual inquiry. And is there truth? Is there any reality at all? I think we have to take this step in order to give the question the space it needs for answers. Otherwise, the question and answer fall back into meaninglessness. And the longing remains alone.