There was a fun Twitter convo about strong AI between Patrick Collison and Marc Andreessen. I also love speculating about this topic but before I engage in that I want to point out that from an employment perspective this is a red herring. A car is not a horse. And yet cars replaced horses in transportation. A tractor is not a horse. And yet tractors replaced horses in agriculture. A tank is not a horse. And yet tanks replaced horses in war.
From an employment perspective it is irrelevant how a robot or machine intelligence solves a problem. What matters is whether or not it solves that problem at a lower unit cost than a human. In fact, even the problem statement itself is up for change, meaning that a machine may solve a slightly different problem or a sub-problem that can then be used (together with cheaper – read less trained – humans) to accomplish the same overall outcome.
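To make the unit cost comparison concrete, here is a minimal sketch with entirely hypothetical numbers (the wages, machine costs, and throughput figures are illustrative assumptions, not data from this post):

```python
# Hypothetical unit cost comparison: skilled human vs. machine plus less-trained human.
# All numbers are made up purely for illustration.

def unit_cost(hourly_cost: float, tasks_per_hour: float) -> float:
    """Cost to complete one task."""
    return hourly_cost / tasks_per_hour

# Option 1: a skilled human handles the whole problem.
skilled_human = unit_cost(hourly_cost=40.0, tasks_per_hour=5.0)

# Option 2: a machine solves a sub-problem and a less-trained human does the rest.
machine = unit_cost(hourly_cost=12.0, tasks_per_hour=20.0)            # amortized machine cost
less_trained_human = unit_cost(hourly_cost=18.0, tasks_per_hour=6.0)
combined = machine + less_trained_human

print(f"skilled human: ${skilled_human:.2f} per task")
print(f"machine + less-trained human: ${combined:.2f} per task")
# The employment impact follows from which number is lower,
# not from *how* the machine solves its piece of the problem.
```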
Let me illustrate that last statement with two examples. To get a car to its destination you could either use a trained driver who knows where everything is in the city, or an untrained one who simply knows how to drive, paired with routing software (see Semil’s blog post). Similarly, you could use skilled forklift drivers in a traditionally laid out warehouse, or you could dramatically change the layout so that robots can move the shelves.
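As a toy illustration of the routing half of that example, here is a sketch of the kind of shortest-path computation routing software performs in place of the driver's city knowledge; the street graph and travel times are invented for the example:

```python
import heapq

# A made-up street graph: node -> {neighbor: travel time in minutes}.
streets = {
    "depot":    {"5th Ave": 4, "Main St": 2},
    "Main St":  {"depot": 2, "5th Ave": 1, "customer": 10},
    "5th Ave":  {"depot": 4, "customer": 7},
    "customer": {},
}

def route(graph, start, goal):
    """Dijkstra's algorithm: the software, not the driver, holds the city knowledge."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, minutes in graph[node].items():
            if neighbor not in seen:
                heapq.heappush(queue, (cost + minutes, neighbor, path + [neighbor]))
    return None

print(route(streets, "depot", "customer"))
# (10, ['depot', 'Main St', '5th Ave', 'customer'])
```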
That’s why I think the “it will be a long time before computers/robots can do x” argument *underestimates* the labor market impact. As always, it will take a bit longer than we assume for these impacts to actually arrive, but when they do, they will be more profound than we anticipate.
So now for the speculation on strong AI. Personally, I am in the same camp as Patrick. The fact that it hasn’t happened in the first 70 years of having computers barely moves my prior on whether it can happen. After all, human intelligence took millions of years to emerge. Also, I don’t really subscribe to the idea that there is anything more to the functioning of our intelligence than, well, our brains.
Here too, I think we need to be careful about what exactly we expect to see. Airplanes fly, but they don’t fly exactly like birds. Yet I think we can all agree that the similarity in effect is more important than the differences in technique. The same goes for intelligence: machines may exhibit human-like intelligence and yet use a somewhat different way of getting there.