Before covering p-values and why they are so problematic, I thought it would be a good idea to provide a bit of a recap of Uncertainty Wednesday. I have been writing this series for almost two years now, beginning in the summer of 2016. Uncertainty is everywhere in the world and yet we are generally poorly equipped to reason about it. That happens to be true even for people, like myself, who have studied some probability and statistics.
My approach in Uncertainty Wednesday has been to consider that there is an external reality which is accessible to us only through observations (which I also refer to as measurements and signals). Our task then is to make inferences about the underlying reality based on those observations, meaning we want to learn something about reality from the observations.
The key point I want to get across is that this should be an iterative process where we update explanations with the goal of improving those explanations over time. We should always be asking ourselves the following question: given the explanations we have so far and some new observations, what should we believe now? Or put differently: how do we update our explanations based on observations?
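To make this concrete, here is a minimal sketch of what such iterative updating can look like. This is my own illustration, not code from the post: a small set of candidate explanations for a coin's bias, each with a prior belief, updated via Bayes' rule as flips are observed. All the specific numbers and names are assumptions made for the example.

```python
# A minimal sketch of updating explanations with observations via Bayes' rule.
# Candidate explanations and priors below are illustrative assumptions.

def update(beliefs, likelihood, observation):
    """Return posterior beliefs over explanations after one observation."""
    unnormalized = {h: p * likelihood(h, observation) for h, p in beliefs.items()}
    total = sum(unnormalized.values())
    return {h: w / total for h, w in unnormalized.items()}

def coin_likelihood(h, flip):
    """Probability of a single flip under the explanation 'heads with probability h'."""
    return h if flip == "H" else 1 - h

# Three candidate explanations, initially believed equally.
beliefs = {0.3: 1 / 3, 0.5: 1 / 3, 0.7: 1 / 3}

# Update iteratively as observations arrive; no explanation is accepted or
# rejected outright, beliefs just shift toward the ones that fit better.
for flip in ["H", "H", "T", "H"]:
    beliefs = update(beliefs, coin_likelihood, flip)

print(beliefs)
```

Note that after four flips the belief in h = 0.7 has grown, but the other explanations retain some weight; they can come back if later observations favor them.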
While this sounds easy enough in principle, it turns out to be quite hard to do in practice for two reasons. First, as humans we have all sorts of built-in heuristics that make updating hard, such as confirmation bias. We are much more likely to simply discard observations that do not fit with our explanation than to update our explanation. Why? Because it takes virtually no effort to ignore something, whereas revising one's explanation takes a lot of effort.
Second, we are often taught a binary view of the world, even in statistics classes. An explanation is either right or it is wrong. When we get new observations we use them to either confirm or reject the explanation. This amounts to asking the (wrong) question: given an explanation, how likely are these observations? If they are not likely, then the explanation is declared wrong. This binary approach, however, often leads to wrong conclusions.
Why is that? An explanation establishes a hypothetical probability distribution, but our observations are a sample drawn based on the underlying reality (which may be quite different from our explanation!). And sample statistics, such as the sample mean and the sample correlation, have a distribution of their own, again based on the underlying reality. So instead of a binary accept / reject, we should use the information from the sample to update the probability distribution of our explanation.
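Here is a small simulation, again my own illustration rather than anything from the post, of the point that a sample statistic has a distribution of its own. We assume some underlying reality (a true mean), repeatedly draw small samples from it, and look at how widely the sample mean scatters; the particular parameters are arbitrary choices for the sketch.

```python
# Illustrative sketch: the sample mean is itself a random quantity.
import random
import statistics

random.seed(0)

TRUE_MEAN = 1.0    # the underlying reality, unknown to the observer
NOISE_SD = 2.0     # spread of individual observations (assumed)
SAMPLE_SIZE = 20   # a small sample, chosen arbitrarily

def one_sample_mean():
    # Draw one sample from reality and compute its mean.
    sample = [random.gauss(TRUE_MEAN, NOISE_SD) for _ in range(SAMPLE_SIZE)]
    return statistics.mean(sample)

means = [one_sample_mean() for _ in range(10_000)]

# The sample means scatter widely around the true mean, so a single sample
# that looks "unlikely" under an explanation is weak grounds for a binary
# reject; it is better used to shift our beliefs about the explanation.
print(round(min(means), 2), round(statistics.mean(means), 2), round(max(means), 2))
```

Running this shows individual sample means landing well above and well below the true value, which is exactly why a single accept/reject verdict based on one sample is so fragile.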
In the coming Uncertainty Wednesdays I will give both formal and informal examples of this fundamental difference in approach.