In the last two Uncertainty Wednesdays, we went over the basic math of probability and the definition of events. Today we are going to come back to the underlying framework of reality, explanations and observations, which we have modeled as states and signals. As a reminder, we assumed two states, A and B, and two signal values, H and L, giving four possible elementary events AH, AL, BH, and BL.
The signal values are what we can observe, whereas the states are not directly accessible to us. So now let's ask the question: what is the probability of observing signal value H? Well, in the language we have introduced, H is an event, in particular H = {AH, BH}, and so putting everything together
P(H) = P({AH, BH}) = P({AH}) + P({BH})
and similarly
P(L) = P({AL, BL}) = P({AL}) + P({BL})
is the probability of observing the signal value L.
Along the same lines we can express the probabilities of the states A and B as follows:
P(A) = P({AH, AL}) = P({AH}) + P({AL})
P(B) = P({BH, BL}) = P({BH}) + P({BL})
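These marginal probabilities are just sums over the elementary events. Here is a minimal sketch in Python; the four elementary event probabilities are made-up numbers purely for illustration, not values from this post.

```python
# Hypothetical probabilities for the four elementary events (illustrative only)
p = {"AH": 0.4, "AL": 0.1, "BH": 0.2, "BL": 0.3}

# Signal probabilities: sum the elementary events containing that signal value
P_H = p["AH"] + p["BH"]  # P(H) = P({AH}) + P({BH})
P_L = p["AL"] + p["BL"]  # P(L) = P({AL}) + P({BL})

# State probabilities, along the same lines
P_A = p["AH"] + p["AL"]  # P(A) = P({AH}) + P({AL})
P_B = p["BH"] + p["BL"]  # P(B) = P({BH}) + P({BL})

# Either way of slicing exhausts the four elementary events,
# so each pair of probabilities sums to 1
assert abs((P_H + P_L) - 1.0) < 1e-9
assert abs((P_A + P_B) - 1.0) < 1e-9
```

Note that the signal probabilities and the state probabilities are two different ways of partitioning the same four elementary events.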
But now we need to be more careful about what this means. Why? Because we do not observe states directly! We only observe signal values.
So that raises a fairly big question, in fact the central question: where do these probabilities come from then?
We make them up! Just kidding. Well, only slightly kidding actually. This is the role of explanations. What makes an explanation interesting is that it connects states to signal values, or, put differently, reality to observations.
So wait, I seem to have just made the biggest punt ever. Where do explanations come from? Well, they are the combination of new conjectures with past explanations and observations. If this seems highly cyclical or self-referential to you, that's because it is. All of our knowledge is built up on prior knowledge (and hence is only as good as that prior knowledge).
All of this will hopefully become clearer as we use the math that we now have (and will continue to develop further) to look at a concrete example.