So I admit that I enjoyed watching Lucy, the Luc Besson movie with Scarlett Johansson as the eponymous Lucy, who by virtue of an inadvertent overdose of a designer drug manages to use 100% of her brain capacity. She turns into a superhero who can control the physical environment by thought. Wildly exaggerated in every which way but entertaining nonetheless. Of course people were quick to point out that the idea that we use only 10% of our brains is a myth in the first place.
But there is a way in which the situation is actually much worse for humans. Google just announced yesterday that one of their deep learning teams has built a Go program called AlphaGo that beat the European Champion. Now if you are interested in deep learning and either have a Nature subscription or are willing to pay for a single article, I highly recommend that you head over to Nature and look at the research publication which is both quite detailed and pretty accessible.
In it you will find that the policy and value networks used in AlphaGo have about 15 layers each. While all the details aren't there, I am estimating that these networks have the equivalent of about 1 million neurons each (thanks to Matthew Zeiler from Clarifai for correcting an initial mistake here). AlphaGo uses these networks to perform a Monte Carlo Tree Search, which roughly means that it looks ahead in the game at different moves but uses the neural networks to score those expansions and determine which ones to pursue further. The program uses 40 search threads, each of which can apparently run 1,000 game simulations per second (that's games, not moves), for a total of 40,000 game simulations per second.
Now the human brain has 86 billion or so neurons. In theory that means that if a human were to use their entire brain to play Go the same way AlphaGo does, they could run about 86,000 parallel versions of the smaller networks. The average game has about 150 moves, and the brain is quite slow in terms of clock cycles (I think something like 100 Hz), so each instance could simulate roughly two thirds of a game per second. Across 86,000 instances this adds up to about 60,000 game simulations per second as a rough approximation.
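To make the back-of-the-envelope explicit, here is a quick sketch of the arithmetic in Python using the rough figures above. The per-network neuron count, the 100 Hz clock rate, and the 150-move game length are all approximations from this post, not exact values from the paper:

```python
# Rough back-of-the-envelope comparison, using the approximate figures from the text above.

neurons_per_network = 1_000_000      # estimated equivalent neurons per AlphaGo network
brain_neurons = 86_000_000_000       # roughly 86 billion neurons in the human brain

# AlphaGo's reported search throughput
alphago_threads = 40
sims_per_thread_per_sec = 1_000
alphago_sims_per_sec = alphago_threads * sims_per_thread_per_sec     # 40,000 games/sec

# Hypothetical human running AlphaGo-style networks across the whole brain
parallel_instances = brain_neurons // neurons_per_network            # 86,000 instances
brain_clock_hz = 100                 # very rough neural "clock rate"
moves_per_game = 150                 # average game length
games_per_instance_per_sec = brain_clock_hz / moves_per_game         # about 0.67 games/sec
human_sims_per_sec = parallel_instances * games_per_instance_per_sec

print(alphago_sims_per_sec)                                  # 40000
print(round(human_sims_per_sec))                             # 57333, i.e. roughly 60,000
print(round(human_sims_per_sec / alphago_sims_per_sec, 1))   # 1.4, i.e. roughly 1.5x
```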
So what are we to conclude? With a professional player theoretically having 1.5x the capacity of AlphaGo but losing 5-0, I think these numbers suggest that even a human who has dedicated themselves to the game since age 12 can only muster a fraction of their brain for playing Go.
More importantly, I believe that many more tasks will be revealed to have the same property: networks that are small relative to the size of the human brain are all it takes. In AlphaGo each network is roughly 0.001% the size of the brain! We can then run many instances of these networks in parallel in hardware and they will outperform humans.
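The size comparison itself, again using the rough per-network figure from above:

```python
# Fraction of the brain's neurons represented by a single AlphaGo-sized network (rough estimate).
neurons_per_network = 1_000_000
brain_neurons = 86_000_000_000
print(f"{neurons_per_network / brain_neurons:.3%}")   # 0.001%
```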
So not only can you and I probably use less than 10% of our brain for a task like playing Go, but even if we could use 100% we would still lose to a computer.
PS I just dashed this off in a few minutes after a skim of the paper – so entirely possible that I am mis-reading and/or mis-calculating. If someone has more time to spend on this I would happily stand corrected – or more happily have my intuition and back of the envelope confirmed with better numbers …