Today’s Uncertainty Wednesday will be quite short as I am super swamped. Last week I showed some code and an initial graph for sample means of size 100 from a Cauchy distribution. Here is a plot (narrowed down to the -25 to +25 range again) for sample size 10:

[Plot: distribution of sample means, sample size 10, range -25 to +25]
And here is one for sample size 1,000:

[Plot: distribution of sample means, sample size 1,000, same range]
Yup. They look essentially identical. As it turns out, this is no accident: the sample mean of the Cauchy distribution itself has a Cauchy distribution. And it has the same shape, no matter how big we make the sample!
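We can check this numerically. Here is a minimal sketch (assuming NumPy; it is not the exact code from last week's post) that simulates many sample means at both sizes and compares robust summaries. Since the Cauchy has no mean or variance, we use the median and interquartile range (IQR) instead:

```python
# Hedged sketch: do Cauchy sample means tighten as the sample size grows?
import numpy as np

rng = np.random.default_rng(42)

for n in (10, 1_000):
    # 100,000 sample means, each computed over n standard Cauchy draws
    means = rng.standard_cauchy(size=(100_000, n)).mean(axis=1)
    # Robust summaries: median and interquartile range
    q25, q50, q75 = np.percentile(means, [25, 50, 75])
    print(f"n={n:>5}  median={q50:+.3f}  IQR={q75 - q25:.3f}")
```

If the claim holds, the IQR comes out near 2 (the IQR of a standard Cauchy) for both sample sizes: nothing tightens as n grows.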
There is no convergence here. This is radically different from what we encountered with the sample mean for dice rolling. There we saw the sample mean following a normal distribution that converged ever tighter around the expected value as we increased the sample size.
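For contrast, the same kind of sketch for dice rolls (again just an illustration, not the original code) shows the familiar tightening around the expected value of 3.5:

```python
# Sample means of fair six-sided die rolls: the spread shrinks roughly
# like 1/sqrt(n), as the central limit theorem predicts.
import numpy as np

rng = np.random.default_rng(42)

for n in (10, 1_000):
    # rng.integers is exclusive of the high end, so this draws values 1..6
    means = rng.integers(1, 7, size=(100_000, n)).mean(axis=1)
    print(f"n={n:>5}  mean={means.mean():.3f}  std={means.std():.4f}")
```

Here the standard deviation of the sample mean drops by about a factor of 10 going from n = 10 to n = 1,000, exactly the behavior the Cauchy refuses to exhibit.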
Next week we will look at the takeaway from all of this. Why does the sample mean for some distributions (e.g. the uniform) follow a normal distribution and converge, but not for others? And, most importantly, what does that imply for what we can learn from the data we observe?