In my prior post I wrote about structural risk from AI. Today I want to start delving into existential risk. This broadly comes in two not entirely distinct subtypes: first, that we lose any grip on reality, which could result in a Matrix-style scenario or a global war of all against all; and second, that a superintelligence gets rid of humans directly in the pursuit of its own goals.
The loss of reality scenario was the subject of an op-ed in the New York Times the other day. And right around the same time an amazing AI-generated picture of the pope went viral.
I have long said that the key mistake of the Matrix movies was to posit a war between humans and machines. Instead, we will give ourselves willingly to the machines, more akin to the “Free wifi” scenario of “The Mitchells vs. the Machines.”
The loss of reality is a very real threat. It builds on a long tradition, such as Stalin having people edited out of historic photographs or Potemkin erecting fake villages to impress Empress Catherine II (why did I think of two Russian examples here?). And now that kind of capability is available to anyone at the push of a button. Anyone see those pictures of Trump getting arrested?
Still, I am not particularly concerned about this type of existential threat from AI (outside of the superintelligence scenario), for a number of reasons. First, distribution, rather than content creation, has been the bottleneck for manipulation for some time (it doesn’t take advanced AI tools to come up with a meme). Second, I believe that the same approach of more AI that can help with structural risk can also help with this type of existential risk, for example an AI copilot that points out content that appears to be manipulated as you browse the web. Third, we have an important tool available to us as individuals that can dramatically reduce the likelihood of being manipulated: mindfulness.
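To make the copilot idea slightly more concrete, here is a minimal sketch of one thing such a tool might do: check whether an image carries verifiable provenance metadata and, if it doesn’t, flag it when a detection model scores it as likely generated. Everything in the sketch is an illustrative assumption on my part; `PageImage`, `estimate_synthetic_score`, and the threshold are hypothetical stand-ins, not an existing product or API.

```python
# A minimal sketch of a "reading copilot" that flags possibly manipulated
# images on a web page. The detection model is a hypothetical placeholder;
# a real copilot would run an actual classifier and check provenance
# metadata (e.g. signed content credentials).

from dataclasses import dataclass
from typing import Iterator, List


@dataclass
class PageImage:
    url: str
    has_provenance: bool  # e.g. signed content credentials are present


def estimate_synthetic_score(url: str) -> float:
    """Hypothetical stand-in for a real detection model (0.0 real .. 1.0 generated)."""
    return 0.93 if "pope" in url else 0.10  # toy heuristic, demo only


def annotate(images: List[PageImage], threshold: float = 0.8) -> Iterator[str]:
    """Yield human-readable warnings for images the copilot would flag."""
    for img in images:
        if img.has_provenance:
            continue  # verifiable origin, nothing to flag
        score = estimate_synthetic_score(img.url)
        if score >= threshold:
            yield f"{img.url}: may be AI generated (score {score:.2f})"


if __name__ == "__main__":
    page = [
        PageImage("https://example.com/pope-in-puffer.jpg", has_provenance=False),
        PageImage("https://example.com/ordinary-cat.jpg", has_provenance=True),
    ]
    for warning in annotate(page):
        print(warning)
```

The plumbing here is trivial on purpose; the hard parts are the detection model itself and the adoption of provenance standards (such as C2PA), neither of which this sketch attempts to solve.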
In my book “The World After Capital,” in a chapter titled “Psychological Freedom,” I argue for the importance of developing a mindfulness practice in a world that’s already overflowing with information. Our brains evolved in an environment that was mostly real: when you saw a cat, there was a cat. Even before AI-generated cats, the Internet was able to serve up an endless stream of cat pictures, so we have already been facing this problem for some time. It is encouraging that studies show younger people are already more skeptical of the digital information they encounter.
The bottom line for me, then, is that “loss of reality” is an existential threat, but one that we have already been facing and where further AI advancement will both help and hurt. So I am not losing any sleep over it. There is, however, an overlap with the second type of existential risk, a superintelligence simply wiping out humanity: the AI could use the loss of reality to accomplish its goals. I will address the superintelligence scenario in the next post (preview: much more worrisome).