Uncertainty Wednesday: A Random Variable without Expected Value

I ended the previous Uncertainty Wednesday post asking whether an expected value always exists. Going back to the definition, the expected value is the “probability weighted average of a random variable.” So let’s construct an example of a random variable which does not have an expected value. We will consider a probability distribution with infinitely many discrete outcomes, in which the first outcome has probability ½, the second ¼, the third 1/8 and so on. This is a valid probability distribution because all the probabilities sum up to 1:

½ + ¼ + 1/8 + 1/16 + … = 1
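As a quick sanity check (my sketch, not from the post), here are the partial sums of that series, which each term moves halfway closer to 1:

```python
# Partial sums of 1/2 + 1/4 + 1/8 + ...
# Each term covers half the remaining distance to 1, so the sum converges to 1.
partial = 0.0
for k in range(1, 51):
    partial += 0.5 ** k
print(partial)  # very close to 1
```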

Whether or not an expected value exists depends on what the numeric values of the outcomes are. Consider for instance the random variable where the first outcome is 1, the second is 2, the third is 3 and so on. For this random variable we have an expected value, because

EV = ½ * 1 + ¼ * 2 + 1/8 * 3 + 1/16 * 4 + … = 2

Why is that so? Because even though our random variable includes ever larger outcomes, these very large outcomes occur with very small probability and so the probability weighted average is a convergent infinite sum.
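A minimal numeric check of that convergence (again a sketch, not from the post): summing the first sixty terms of the probability-weighted average already lands essentially on 2.

```python
# Probability-weighted average: sum of k * (1/2)^k for k = 1, 2, 3, ...
# The probabilities shrink faster than the outcomes grow,
# so the infinite sum converges (to 2).
ev = sum(k * 0.5 ** k for k in range(1, 61))
print(ev)  # approximately 2
```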

But now consider what happens when the outcomes themselves grow exponentially. Let’s consider the case where the first outcome is 2, the second is 4, the third is 8 and so on. Now we have

EV = ½ * 2 + ¼ * 4 + 1/8 * 8 + 1/16 * 16 + …
EV = 1 + 1 + 1 + 1 + …

Clearly the EV here is not a convergent sum; it diverges to infinity.

Now you might say: Albert, that's not an example of an expected value that doesn't exist; the expected value is simply infinite. This might take us into a separate discussion of the meaning of infinity, which might be fun to have, including the more sophisticated objection to the example, which would claim that all real processes have some finite upper bound.

For now though, let's focus on a different question: how does the sample mean behave for the random variable we just defined? This is a well-defined question. A sample has, by definition, a finite number of observations (that's what it means to be a sample), so each sample will have a mean. What is the implication of the divergence of the expected value for the behavior of the sample mean?
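One way to get a feel for this question is a small simulation (a sketch of mine, not the post's): draw the outcome 2^k with probability 2^-k by flipping a fair coin until it comes up tails, then compute sample means for samples of growing size.

```python
import random
import statistics

random.seed(0)  # arbitrary seed, for reproducibility

def draw():
    # Returns 2**k with probability (1/2)**k:
    # keep flipping a fair coin; k counts flips until the first "tails".
    k = 1
    while random.random() < 0.5:
        k += 1
    return 2 ** k

means = []
for n in (100, 10_000, 1_000_000):
    sample = [draw() for _ in range(n)]
    means.append(statistics.fmean(sample))
    print(n, means[-1])
```

Every sample mean is finite, but because the expected value diverges, larger samples tend to pick up ever larger outcomes and the means do not settle toward any fixed value.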
