Just as a reminder, we have been spending the last few weeks of Uncertainty Wednesday exploring different measures of uncertainty. We first looked at entropy, which is a measure based only on the states of the probability distribution itself. We then encountered random variables, which associate values or “payouts” with states, and learned about their expected value and variance (including for continuous random variables).
Today we will look at functions of random variables. We will assume that we have a random variable X and we are interested in looking at the properties of f(X) for some function f. Now you might say, gee, isn’t that just another random variable Y? And so why would there be anything new to learn here?
To motivate why we want to explore this, let’s go back to the post in which I introduced the different levels at which we can measure uncertainty. There I wrote:
Payouts are only the immediate outcomes. The value or impact of these payouts may be different for different people. What do I mean by this? Suppose that we look at a situation where you can either win $1 million with 60% probability or lose $10 thousand with 40% probability. This seems like a no-brainer situation. But for some people losing $10 thousand would be a rounding error on their wealth, whereas for others it would mean becoming homeless and destitute.
We now have the language to analyze the uncertainty in this. First, we can compute the entropy (using log base 2)
H(X) = - [0.6 * log 0.6 + 0.4 * log 0.4] = 0.971
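If you want to verify this number yourself, here is a quick sketch in Python:

```python
import math

# Entropy of the two-state distribution (log base 2, giving bits)
p = [0.6, 0.4]
H = -sum(pi * math.log2(pi) for pi in p)
print(round(H, 3))  # → 0.971
```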
We can also calculate the expected value and variance as follows:
EV(X) = 0.6 * 1,000,000 + 0.4 * (- 10,000) = 596,000
VAR(X) = 0.6 (1,000,000 - EV(X))^2 + 0.4 (-10,000 - EV(X))^2 = 244,824,000,000
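These two calculations can likewise be checked with a few lines of Python (the outcome list just encodes the probabilities and payouts from above):

```python
# Expected value and variance of the payout random variable X
outcomes = [(0.6, 1_000_000), (0.4, -10_000)]

ev = sum(p * x for p, x in outcomes)
var = sum(p * (x - ev) ** 2 for p, x in outcomes)

print(round(ev))   # → 596000
print(round(var))  # → 244824000000
```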
But as the text makes clear, none of these capture the vastly different impact these payoffs might have for different people.
One way to do that is to introduce the idea of a utility function U, which translates payoffs into how a person feels about or experiences these payoffs. Consider the following utility function
U(X) = log (IE + X)
where IE is the initial endowment, meaning the wealth someone has before encountering this uncertainty. The uncertainty faced by someone with IE = 10,000 is dramatically different from that faced by someone with IE = 1,000,000. In fact, for IE = 10,000, when the payoff is -10,000 the utility function goes to negative infinity (chart produced with Desmos; technically you’d have to consider a limit, but you get the idea).
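A small sketch makes the contrast concrete. The helper below simply evaluates log utility on final wealth IE + X for the two endowments (the names are mine, chosen for illustration):

```python
import math

def utility(wealth):
    # log utility of final wealth; diverges to -infinity as wealth approaches 0
    return math.log(wealth)

for ie in (1_000_000, 10_000):
    print(f"IE={ie:,}: U(win)  = {utility(ie + 1_000_000):.2f}")
    final = ie - 10_000
    if final > 0:
        print(f"IE={ie:,}: U(lose) = {utility(final):.2f}")
    else:
        # IE = 10,000 minus the 10,000 loss leaves zero wealth: log(0) diverges
        print(f"IE={ie:,}: U(lose) -> -infinity")
```

For the wealthy person the loss barely dents utility (log 990,000 is close to log 1,000,000), while for the poor person the same loss is catastrophic.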
So we can see that applying a function to a random variable can have dramatic effects on uncertainty. Next week we will dig deeper into what we can know about the impact of applying a function. In particular we will be interested in questions such as how EV[U(X)] relates to U[EV(X)] — meaning what can we say about taking the expected value of the function of the random variable versus plugging the expected value into the function?
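As a small numeric preview of that question, here is a sketch comparing the two quantities for the example above, assuming the log utility with IE = 1,000,000 (so that both outcomes leave positive wealth):

```python
import math

ie = 1_000_000
outcomes = [(0.6, 1_000_000), (0.4, -10_000)]

# Expected value of the utility: EV[U(X)]
ev_of_u = sum(p * math.log(ie + x) for p, x in outcomes)

# Utility of the expected value: U[EV(X)]
ev = sum(p * x for p, x in outcomes)
u_of_ev = math.log(ie + ev)

# For a concave function like log, EV[U(X)] <= U[EV(X)]
print(ev_of_u < u_of_ev)  # → True
```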