Just as a reminder, we have been spending the last few weeks of Uncertainty Wednesday exploring different measures of uncertainty. We first looked at entropy, which is a measure based only on the probability distribution itself. We then encountered random variables, which associate values or “payouts” with states, and learned about their expected value and variance (including for continuous random variables).
Today we will look at functions of random variables. We will assume that we have a random variable X and we are interested in looking at the properties of f(X) for some function f. Now you might say, gee, isn’t that just another random variable Y? And so why would there be anything new to learn here?
To motivate why we want to explore this, let’s go back to the following observation:
Payouts are only the immediate outcomes. The value or impact of these payouts may be different for different people. What do I mean by this? Suppose that we look at a situation where you can either win $1 million with 60% probability or lose $10 thousand with 40% probability. This seems like a no-brainer. But for some people losing $10 thousand would be a rounding error on their wealth, whereas for others it would mean becoming homeless and destitute.
We now have the language to analyze the uncertainty in this. First we can compute the entropy
H(X) = - [0.6 * log2 0.6 + 0.4 * log2 0.4] ≈ 0.971 bits
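As a quick sketch, the entropy calculation above can be reproduced in a few lines of Python (using log base 2, which gives the answer in bits):

```python
import math

# Two-state distribution from the example: win with p = 0.6, lose with p = 0.4.
probs = [0.6, 0.4]

# Entropy in bits: H(X) = -sum(p * log2(p)).
H = -sum(p * math.log2(p) for p in probs)
print(round(H, 3))  # 0.971
```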
We can also calculate the expected value and variance as follows:
EV(X) = 0.6 * 1,000,000 + 0.4 * (- 10,000) = 596,000
VAR(X) = 0.6 * (1,000,000 - EV(X))^2 + 0.4 * (-10,000 - EV(X))^2 = 244,824,000,000
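The expected value and variance follow the same pattern; here is a minimal sketch that reproduces both numbers:

```python
# Payouts and probabilities from the example.
payoffs = [1_000_000, -10_000]
probs = [0.6, 0.4]

# Expected value: EV(X) = sum(p * x) = 596,000.
ev = sum(p * x for p, x in zip(probs, payoffs))

# Variance: VAR(X) = sum(p * (x - EV(X))^2) = 244,824,000,000.
var = sum(p * (x - ev) ** 2 for p, x in zip(probs, payoffs))

print(ev, var)
```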
But as the text makes clear, none of these capture the vastly different impact these payoffs might have for different people.
One way to do that is to introduce the idea of a utility function U which translates payoffs into how a person feels or experiences these payoffs. Consider the following utility function
U(X) = log (IE + X)
where IE is the initial endowment, meaning the wealth someone has before encountering this uncertainty. The uncertainty faced by someone with IE = 10,000 is dramatically different from that faced by someone with IE = 1,000,000. In fact, for IE = 10,000, when the payoff is -10,000 the utility function goes to negative infinity (chart produced with Desmos; technically you’d have to consider a limit, but you get the idea).
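To make this concrete, here is a small sketch of the expected utility EV[U(X)] under this utility function. The function name `expected_utility` is just an illustrative choice; the key point is that Python’s `math.log` raises an error at zero, mirroring the utility diverging to negative infinity for IE = 10,000:

```python
import math

# Same payoffs and probabilities as in the example above.
payoffs = [1_000_000, -10_000]
probs = [0.6, 0.4]

def expected_utility(ie):
    """EV[U(X)] for U(x) = log(IE + x).

    When IE + x <= 0 for some payoff, math.log raises ValueError,
    corresponding to the utility going to negative infinity in the limit.
    """
    return sum(p * math.log(ie + x) for p, x in zip(probs, payoffs))

print(expected_utility(1_000_000))  # finite: the loss barely dents total wealth
# expected_utility(10_000) would raise ValueError, since IE + (-10,000) = 0:
# this is the "negative infinity" case discussed above.
```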

So we can see that applying a function to a random variable can have dramatic effects on uncertainty. Next week we will dig deeper into what we can know about the impact of applying a function. In particular, we will be interested in questions such as: how does EV[U(X)] relate to U[EV(X)]? That is, what can we say about taking the expected value of the function of the random variable versus plugging the expected value into the function?