There are two games involving dice that you can play. In the first game, you roll two dice at once and receive a dollar amount equivalent to the product of the rolls.
In the second game, you roll one die and get the dollar amount equivalent to the square of that value. Which has the higher expected value and why?
This is the same question as problem #27 in the Statistics Chapter of Ace the Data Science Interview!
One way to solve this problem is brute force: compute each game's expected value by listing every outcome together with its probability and payoff (a quick sketch of this enumeration appears below). However, there is an easier way to solve the problem.
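For concreteness, here is a minimal brute-force sketch in Python (not part of the original solution) that enumerates the outcomes of both games and compares their expected values:

```python
from itertools import product

# Game 1: roll two dice; the payoff is the product of the two rolls.
# Each of the 36 ordered pairs is equally likely.
game1_ev = sum(a * b for a, b in product(range(1, 7), repeat=2)) / 36

# Game 2: roll one die; the payoff is the square of the roll.
# Each of the 6 faces is equally likely.
game2_ev = sum(x ** 2 for x in range(1, 7)) / 6

print(f"Game 1 expected value: {game1_ev:.4f}")  # 12.25
print(f"Game 2 expected value: {game2_ev:.4f}")  # ~15.17
```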
Assume that the outcome of a single die roll is given by a random variable X, which takes each of the values 1 through 6 with equal probability. Since the two rolls in the first game are independent, the expected payoff of that game is E[X] * E[X] = (E[X])^2. The question is therefore asking: which is larger, (E[X])^2 (the expected value of the product of two independent rolls) or E[X^2] (the expected value of the square of a single roll)?
Recall that the variance of a random variable X is as follows: Var(X) = E[X^2] - (E[X])^2.
Notice that this variance is exactly the difference between the two games' expected payoffs: E[X^2] (the payoff of the second game) minus (E[X])^2 (the payoff of the first game).
The left-hand side, Var(X), is the expected value of a squared deviation, so it is never negative; and since a die roll is not constant, it is strictly positive here. The right-hand side, E[X^2] - (E[X])^2, must therefore be positive as well, so the second game has a higher expected value than the first.
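As a quick sanity check with the actual numbers for a fair die: E[X] = (1+2+3+4+5+6)/6 = 3.5, so (E[X])^2 = 12.25, while E[X^2] = (1+4+9+16+25+36)/6 = 91/6 ≈ 15.17. The gap, 35/12 ≈ 2.92, is exactly Var(X), so the second game pays about $2.92 more per play on average.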