Last week, I provided a recap of what we have covered so far in Uncertainty Wednesday. To go on from here we need to introduce some more concepts that are usually covered earlier but that I believe will make more sense in the framework that we have now established. The first one of these is the concept of independence. Two events are said to be independent if “the occurrence of one does not affect the probability of occurrence of the other.”
Now is a good time to remember that in our simplest model we have four elementary events AH, AL, BH and BL which represent all the possible combinations of the world being in either state A or state B and us receiving either signal H or signal L. We then figured out the probability of events, e.g. P(B), in terms of the underlying probabilities of the elementary events, e.g. P({BH}) and P({BL}). From there we went on to define the concept of a conditional probability, such as
P(B | H) = P({BH}) / P(H)
So if we look at the definition of independence above, what we are looking for are situations where observing the signal does not tell us anything about the state of the world. Expressed as a formula, we are looking for a situation where
P(B | H) = P(B)
Now we know that P(H) = P({AH}) + P({BH}) and so what we are looking for is
P(B | H) = P({BH}) / [P({AH}) + P({BH})] = P(B)
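As a quick sketch of this computation, here is the conditional probability worked out from elementary event probabilities. The specific numbers are illustrative assumptions, not values from the post; any non-negative values summing to 1 would do:

```python
# Illustrative elementary event probabilities for the 2-state, 2-signal
# model (these particular numbers are assumed for the example)
p = {"AH": 0.1, "AL": 0.4, "BH": 0.3, "BL": 0.2}

p_H = p["AH"] + p["BH"]      # P(H) = P({AH}) + P({BH})
p_B_given_H = p["BH"] / p_H  # P(B | H) = P({BH}) / P(H)
p_B = p["BH"] + p["BL"]      # unconditional P(B)

# With these numbers P(B | H) = 0.75 while P(B) = 0.5, so the signal
# does tell us something about the state: B and H are NOT independent
print(p_B_given_H, p_B)
```

With these (assumed) numbers the conditional and unconditional probabilities differ, which is exactly the dependent case; independence is the special situation where they coincide.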
Let’s remind ourselves what P({BH}) means. It is the elementary event where the world is in state B *AND* we receive signal H. Another way of writing that is as follows
P({BH}) = P(B ∩ H)
where ∩ denotes intersection. Why? Remember that events such as B and H are sets of elementary events, in particular
B = {BH, BL} and H = {AH, BH}
so B ∩ H = {BH}
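The set view above can be checked directly, since Python sets support intersection:

```python
# Events as sets of elementary events, matching the definitions above
B = {"BH", "BL"}  # world is in state B (regardless of signal)
H = {"AH", "BH"}  # signal H is received (regardless of state)

# The intersection B ∩ H contains the single elementary event BH
print(B & H)  # {'BH'}
```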
With that we can re-write the above condition for independence as follows
P(B | H) = P(B ∩ H) / [P(A ∩ H) + P(B ∩ H)] = P(B)
Now what should P(B ∩ H) be in terms of P(B) and P(H) to make this hold?
Let’s consider P(B ∩ H) = P(B) * P(H). Note that this forces the same product form on the other intersection, since P(A ∩ H) = P(H) − P(B ∩ H) = P(H) * [1 − P(B)] = P(A) * P(H). Substituting both gives
P(B | H) = P(B) * P(H) / [P(A) * P(H) + P(B) * P(H)] = P(B) / [P(A) + P(B)] = P(B)
The last step here is simply the result of P(A) + P(B) = 1, which holds by construction: the way we set up the problem, the world is either in state A or in state B, so the probability that it is in one or the other is 1.
While this definitely doesn’t pass as a rigorous proof, what we see is that in our 2-state, 2-signal world the following two conditions are equivalent and each marks independence:
Unconditional probability = conditional probability
and
Probability of any state-and-signal combination = product of the individual probabilities
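The equivalence can be sketched numerically: build the elementary probabilities in product form and check that conditioning on the signal leaves the unconditional probability unchanged. The marginal values 0.6/0.4 and 0.7/0.3 are assumed purely for illustration:

```python
# Marginal probabilities for state and signal (illustrative assumptions)
p_A, p_B = 0.6, 0.4  # P(A) + P(B) = 1
p_H, p_L = 0.7, 0.3  # P(H) + P(L) = 1

# Elementary probabilities in product form, i.e. built to satisfy
# P(state ∩ signal) = P(state) * P(signal)
p = {"AH": p_A * p_H, "AL": p_A * p_L,
     "BH": p_B * p_H, "BL": p_B * p_L}

# P(B | H) = P({BH}) / [P({AH}) + P({BH})]
p_B_given_H = p["BH"] / (p["AH"] + p["BH"])

# Conditional equals unconditional: observing H tells us nothing about B
print(abs(p_B_given_H - p_B) < 1e-12)  # True
```

Changing any one elementary probability away from the product form breaks the equality, which is one way to see how restrictive the independence assumption is.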
More to come on independence next week. For now you should ask yourself, based on the above, whether assuming that two events are independent imposes a lot of constraints or only a few. Or put differently: will we encounter many independent situations or few in the world?