In my prior post I wrote about structural risk from AI. Today I want to start delving into existential risk. This broadly comes in two not entirely distinct subtypes: first, that we lose our grip on reality, which could result in a Matrix-style scenario or a global war of all against all; and second, a superintelligence getting rid of humans directly in pursuit of its own goals.
The loss of reality scenario was the subject of an op-ed in the New York Times the other day. And right around the same time there was an amazing viral AI-generated picture of the pope.

I have long said that the key mistake of the Matrix movies was to posit a war between humans and machines. Instead, we will be giving ourselves willingly to the machines, more akin to the “Free wifi” scenario of The Mitchells vs. the Machines.
The loss of reality is a very real threat. It builds on a long tradition, such as Stalin having people edited out of historical photographs or Potemkin building fake villages to impress Empress Catherine II (why did I think of two Russian examples here?). And now that kind of capability is available to anyone at the push of a button. Anyone see those pictures of Trump getting arrested?
Still, I am not particularly concerned about this type of existential threat from AI (outside of the superintelligence scenario), for a number of different reasons. First, distribution, rather than content creation, has been the bottleneck for manipulation for some time (it doesn’t take advanced AI tools to come up with a meme). Second, I believe that the same approach of using more AI to address structural risk can also help with this type of existential risk, for example an AI copilot that points out content that appears to be manipulated while you consume the web. Third, we have an important tool available to us as individuals that can dramatically reduce the likelihood of being manipulated, and that is mindfulness.
In my book “The World After Capital” I argue, in a chapter titled “Psychological Freedom,” for the importance of developing a mindfulness practice in a world that’s already overflowing with information. Our brains evolved in an environment that was mostly real: when you saw a cat, there was a cat. Even before AI-generated cats, the Internet was able to serve up an endless stream of cat pictures, so we have already been facing this problem for some time. It is encouraging that studies show younger people are already more skeptical of the digital information they encounter.
The bottom line for me, then, is that “loss of reality” is an existential threat, but one that we have already been facing and where further AI advancement will both help and hurt. So I am not losing any sleep over it. There is, however, an overlap with a second type of existential risk: a superintelligence simply wiping out humanity. The overlap is that the AI could use the loss of reality to accomplish its goals. I will address the superintelligence scenario in the next post (preview: much more worrisome).