This will be the last Uncertainty Wednesday for 2017 as I am about to go away on vacation. In the last post I introduced the idea that sometimes when volatility is suppressed it comes back to bite us. I wanted a really simple model to demonstrate that, so I wrote some Python code that makes a 50:50 coin toss and, depending on the result, either increments or decrements a value by 2 (I set the initial value to 100). Here is a plot of a sample run:

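The post does not reproduce the code itself, but a minimal sketch along these lines would behave the same way (function and variable names are illustrative, not from the original):

    import random

    def random_walk(steps=1000, start=100, step_size=2):
        # 50:50 coin toss: heads adds step_size, tails subtracts it
        values = [start]
        for _ in range(steps):
            if random.random() < 0.5:
                values.append(values[-1] + step_size)
            else:
                values.append(values[-1] - step_size)
        return values

    values = random_walk()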
Now, to suppress volatility, I modified the program to increment or decrement the value by only 1 instead, i.e. half the original change. I added the suppressed half of each change into a “buffer”: up moves accumulate +1 in a positive buffer and down moves accumulate -1 in a negative buffer. I then gave each buffer a 1 in 1,000 chance of being released. Here is a plot where the buffer is not released:

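The modified code is likewise not shown in the post; one way to implement the buffering as described (that each buffer is checked for release on every step is my assumption) would be:

    def suppressed_walk(steps=1000, start=100, release_prob=0.001):
        # Realize only +/- 1 per toss; bank the other +/- 1 in a buffer
        values = [start]
        pos_buffer = 0
        neg_buffer = 0
        for _ in range(steps):
            if random.random() < 0.5:
                change = 1
                pos_buffer += 1    # suppressed half of an up move
            else:
                change = -1
                neg_buffer -= 1    # suppressed half of a down move
            # Each buffer independently has a 1 in 1,000 chance of releasing
            if random.random() < release_prob:
                change += pos_buffer
                pos_buffer = 0
            if random.random() < release_prob:
                change += neg_buffer
                neg_buffer = 0
            values.append(values[-1] + change)
        return values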
We can immediately see that there is less volatility. In fact, when we compute the sample variance, the first chart's sample comes in at 178, whereas the second chart's sample has a variance of only 42.
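Using the sketches above, that comparison can be reproduced with the standard library (the exact numbers will differ from run to run; 178 and 42 come from the post's particular runs):

    import statistics

    plain = random_walk()
    suppressed = suppressed_walk()

    print(statistics.variance(plain))       # sample variance of the +/- 2 walk
    print(statistics.variance(suppressed))  # much lower while buffers stay unreleased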
Here by contrast is a plot from a run where the negative buffer gets released.

We can see that once again the chart starts out looking as if it has rather low volatility. But then, all of a sudden, all the suppressed down movements are realized in one go and the value drops dramatically.
This super simple model provides an illustration of how suppressed volatility can easily fool us. Let us look at the sample variance in the three charts for the first 300 data points (each chart has 1,000 data points). The variances are: Chart 1: 139, Chart 2: 36, Chart 3: 26.
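With the sketches above, this amounts to restricting the variance to the first 300 observations of each run (again, the specific numbers will vary from run to run):

    print(statistics.variance(plain[:300]))
    print(statistics.variance(suppressed[:300]))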
So the lesson here should be clear: if we simply estimate the volatility of a process from the observed sample variance, we may be wildly underestimating potential future variance when dealing with a case of suppressed volatility.