Today in Uncertainty Wednesday I want to provide a big recap of the series, which I started last August. The key takeaways so far are that
Reality itself is not directly accessible to us. All we have to go on are observations and explanations.
Uncertainty exists because there are limits on observations and explanations.
Limits on observations include a fundamental limit, limits on resolution, measurement error, cost, and impact on the observed reality.
Explanations too have a fundamental limit.
The fundamental limits on explanations and observations mean that we always face irreducible uncertainty, even if there were no randomness in the underlying reality.
We then examined two examples: a hypothetical fortune telling machine and the flipping of coins.
Based on these examples I introduced the simplest possible model that lets us start to formalize the above: a world with two possible states and two possible signal (observation) values. The explanation here consists of the existence of the two states and of what the observation tells us about those states.
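To make this concrete, here is a minimal sketch of such a 2-state, 2-signal world in Python. The state and signal labels and the joint probabilities below are illustrative placeholders, not values taken from the series.

```python
# A minimal 2-state, 2-signal world (illustrative numbers only).
# Elementary events are (state, signal) pairs; the joint distribution
# assigns each pair a probability, and the probabilities sum to 1.
joint = {
    ("A", "a"): 0.45,
    ("A", "b"): 0.05,
    ("B", "a"): 0.10,
    ("B", "b"): 0.40,
}

def p_state(state):
    """Marginal probability of a state: sum over both signal values."""
    return sum(p for (s, _), p in joint.items() if s == state)

def p_state_given_signal(state, signal):
    """What observing a signal value tells us about the state."""
    p_signal = sum(p for (_, sig), p in joint.items() if sig == signal)
    return joint[(state, signal)] / p_signal

print(p_state("A"))                    # ~0.5
print(p_state_given_signal("A", "a"))  # ~0.82
```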
We then formalized the concept of probability and spelled out some axioms for how it should behave, and used those axioms to relate the probability of compound events to that of elementary events in our 2-state, 2-signal-value world.
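Spelled out for the 2-state, 2-signal world (in generic notation, as an illustration rather than the post's own symbols), the axioms are

\[
P(E) \ge 0, \qquad P(\Omega) = 1, \qquad P(E_1 \cup E_2) = P(E_1) + P(E_2) \ \text{ if } E_1 \cap E_2 = \emptyset,
\]

so that, for example, the compound event "signal value \(a\) is observed" gets its probability from the elementary events that make it up:

\[
P(\text{signal} = a) = P(\text{state} = A,\ \text{signal} = a) + P(\text{state} = B,\ \text{signal} = a).
\]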
With that in place I introduced the example of a cancer test, which we worked through in some detail (I happened to pick it because I had just turned 50, not because it was a personal issue – thanks though to everyone who checked in). Crucially, this revealed the potential for drawing false conclusions about reality from the signal we receive.
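As a rough illustration of that point, here is the structure of the calculation in Python. The prevalence, sensitivity, and specificity figures are assumptions for this sketch, not the numbers used in the original posts.

```python
# Cancer-test sketch with illustrative numbers (not the post's figures).
prevalence = 0.01    # P(cancer): share of people tested who have the disease
sensitivity = 0.90   # P(positive | cancer)
specificity = 0.95   # P(negative | no cancer)

p_pos_given_no_cancer = 1 - specificity
p_positive = sensitivity * prevalence + p_pos_given_no_cancer * (1 - prevalence)

# Probability of actually having cancer given a positive signal.
p_cancer_given_positive = sensitivity * prevalence / p_positive
print(round(p_cancer_given_positive, 3))  # ~0.154: most positives are false positives
```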
We further investigated this issue by looking at two common measures of tests: sensitivity and specificity. We can think of these measures as capturing how much an observation tells us about reality. In doing so we derived Bayes Theorem.
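Written in terms of those two measures (with generic symbols, as an illustration rather than the post's own derivation), Bayes Theorem for a positive test result reads:

\[
P(D \mid +) = \frac{P(+ \mid D)\,P(D)}{P(+ \mid D)\,P(D) + P(+ \mid \neg D)\,P(\neg D)}
= \frac{\text{sensitivity} \cdot P(D)}{\text{sensitivity} \cdot P(D) + (1 - \text{specificity}) \cdot \big(1 - P(D)\big)}.
\]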
In the upcoming weeks we will spend more time on what exactly Bayes Theorem tells us, but hopefully this recap has made the context clear: observations (signals) can help us reduce uncertainty about the state of the world (reality). We do so by using our explanation to update our probabilities from what they were prior to receiving the signal. Bayes Theorem tells us how to compute this update for a given explanation. After such an update we are still left with uncertainty, just less uncertainty than we had before.
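For readers who want to play with this, here is one way such an update could be written as a small Python function; the prior and likelihood values are again illustrative.

```python
# Generic update for the 2-state world: start from a prior over the states
# and an explanation given as P(signal | state), observe a signal, and
# return the posterior. Names and numbers are illustrative.
def bayes_update(prior, likelihood, signal):
    unnormalized = {s: prior[s] * likelihood[s][signal] for s in prior}
    total = sum(unnormalized.values())
    return {s: p / total for s, p in unnormalized.items()}

prior = {"A": 0.5, "B": 0.5}
likelihood = {"A": {"a": 0.9, "b": 0.1}, "B": {"a": 0.2, "b": 0.8}}

posterior = bayes_update(prior, likelihood, "a")
print(posterior)  # {'A': ~0.82, 'B': ~0.18}
```

Note that the posterior still assigns probability to both states: the signal has reduced the uncertainty, not eliminated it.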