Expected Value Problems for Quant Interviews
Expected value is the single most tested concept in quant interviews. Jane Street in particular includes an EV problem in almost every first-round screen. Below: what EV is, the two tools that solve nearly every problem, and five classic problems you must be able to solve fluently.
What Expected Value Is
The expected value of a random variable X is the probability-weighted average of all its possible outcomes: E[X] = Σ x · P(X = x). It answers the question “what does this random quantity average to over many repetitions?” Quant interviewers care about EV because trading and market-making are fundamentally about finding situations where E[payoff] > E[cost].
Linearity of Expectation: The Most Powerful Tool
Linearity of expectation states that E[X + Y] = E[X] + E[Y] for any two random variables X and Y — regardless of whether they are independent. This is the most powerful and most frequently applicable tool in quant probability. It lets you decompose complicated random variables into simple parts and compute each part separately.
Example: the expected sum of two dice is E[die 1] + E[die 2] = 3.5 + 3.5 = 7. No need to enumerate all 36 outcomes. The same idea extends to any number of terms: E[X₁ + X₂ + … + Xₙ] = E[X₁] + … + E[Xₙ].
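The dice example can be sanity-checked numerically. A minimal Python sketch (not from the original article): enumerate all 36 outcomes by brute force and compare with the linearity shortcut.

```python
from itertools import product

# Brute force: enumerate all 36 equally likely outcomes of two fair dice.
outcomes = [a + b for a, b in product(range(1, 7), repeat=2)]
brute_force_ev = sum(outcomes) / len(outcomes)

# Linearity: E[die1 + die2] = E[die1] + E[die2] = 3.5 + 3.5.
linearity_ev = 3.5 + 3.5

print(brute_force_ev, linearity_ev)  # both 7.0
```

Both routes give 7, but the linearity route stays the same size as the sum grows, while brute-force enumeration blows up exponentially in the number of dice.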
5 Classic Expected Value Problems
1. Expected Value of a Fair Die
What is the expected value of a single roll of a fair six-sided die?
E[die] = (1 + 2 + 3 + 4 + 5 + 6) / 6 = 21 / 6 = 3.5. This is the canonical warm-up. An interviewer who asks this wants to see you write the sum immediately rather than fumble.
2. Expected Number of Coin Flips for the First Head
You flip a fair coin repeatedly until you get heads. What is the expected number of flips?
This is a geometric distribution with p = 1/2. E[flips] = 1/p = 2. Derivation via the recursive trick: let E be the answer. E = 1/2 · 1 + 1/2 · (1 + E), so E/2 = 1, giving E = 2. Recognizing geometric distributions instantly is essential — they appear in disguise constantly.
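The recursion and the closed form E = 1/p can both be checked in a few lines. A hedged sketch (the helper name expected_first_success is mine, not from the article): compute the closed form, then verify it against the defining sum Σ k · (1/2)^k truncated where the tail drops below double precision.

```python
def expected_first_success(p):
    """Closed form from the recursion E = 1 + (1 - p) * E  =>  E = 1 / p."""
    return 1 / p

# Numerical check against the definition: E = sum over k >= 1 of
# k * P(first head on flip k) = sum of k * (1/2)^k.
# Truncating at k = 60 leaves a tail far below double precision.
ev_from_sum = sum(k * 0.5 ** k for k in range(1, 61))
print(expected_first_success(0.5), ev_from_sum)  # both ~2.0
```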
3. The Coupon Collector Problem
There are n different coupons, each equally likely. You collect one coupon at a time (with replacement). What is the expected number of coupons needed to collect all n types?
Break the problem into phases. When you have k distinct coupons, the probability the next one is new is (n − k) / n, so the expected draws to get a new one is n / (n − k). Summing over all phases:
E[total] = n/n + n/(n−1) + n/(n−2) + … + n/1 = n · (1 + 1/2 + 1/3 + … + 1/n) = n · Hₙ
where Hₙ is the n-th harmonic number. For n = 6 (like rolling all faces of a die): E = 6 · H₆ = 6 · (1 + 1/2 + 1/3 + 1/4 + 1/5 + 1/6) ≈ 6 · 2.45 ≈ 14.7 rolls. Jane Street loves this problem in various disguises.
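The phase decomposition translates directly into code. An illustrative sketch (the function name is mine): sum the per-phase expectations n / (n − k) exactly with Fraction, avoiding floating-point noise.

```python
from fractions import Fraction

def coupon_collector_ev(n):
    # Phase k (holding k distinct coupons): P(next draw is new) = (n-k)/n,
    # so the expected draws in that phase are n / (n - k). Sum the phases.
    return sum(Fraction(n, n - k) for k in range(n))

ev6 = coupon_collector_ev(6)
print(ev6, float(ev6))  # 147/10 14.7
```

For n = 6 the exact answer is 147/10 = 14.7, matching 6 · H₆ above.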
4. The Hat-Check Problem (Expected Fixed Points)
n people each hand in a hat. The hats are returned in a uniformly random order. What is the expected number of people who get their own hat back?
Define indicator variable Iᵢ = 1 if person i gets their own hat, 0 otherwise. P(Iᵢ = 1) = 1/n for each i. By linearity of expectation:
E[fixed points] = E[I₁ + I₂ + … + Iₙ] = n · (1/n) = 1
Remarkably, the expected number of fixed points is exactly 1 regardless of n. The indicator variable technique is what makes this tractable — computing this by direct counting would be a combinatorial nightmare. This is the canonical example of why linearity of expectation is so powerful.
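The "exactly 1 for every n" claim can be verified by exhaustive enumeration for small n. A quick sketch (helper name is mine): average the number of fixed points over all n! permutations.

```python
from itertools import permutations

def expected_fixed_points(n):
    # Average fixed-point count over all n! permutations of 0..n-1.
    perms = list(permutations(range(n)))
    total = sum(sum(p[i] == i for i in range(n)) for p in perms)
    return total / len(perms)

for n in range(1, 7):
    print(n, expected_fixed_points(n))  # 1.0 for every n
```

Each position i is fixed in exactly (n − 1)! of the n! permutations, so the total fixed-point count across all permutations is n · (n − 1)! = n!, and the average is always 1.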
5. The St. Petersburg Paradox
A casino game flips a fair coin until heads appears. If heads appears on flip k, you receive $2^k. What is the fair price to play?
E[payoff] = Σₖ₌₁^∞ P(heads on flip k) · 2^k = Σₖ₌₁^∞ (1/2)^k · 2^k = Σₖ₌₁^∞ 1 = ∞.
The expected value is infinite — yet no rational person would pay more than a few dollars to play. This is the St. Petersburg paradox. It illustrates why raw EV is insufficient: you also need to account for diminishing marginal utility, risk preferences, and the practical impossibility of a counterparty paying 2^k for large k. Interviewers ask this to see if you can argue beyond the formula.
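The "finite counterparty" objection is easy to make quantitative. A hedged sketch (the cap and helper name are my assumptions, not from the article): if the casino can pay out at most $2^K, the EV collapses from infinity to K + 1 dollars, since each of the first K terms contributes $1 and the capped tail sums to $1.

```python
def capped_ev(max_payout_exp, horizon=200):
    """EV of the St. Petersburg game when the payout is capped at
    2**max_payout_exp dollars. Sums P(first head on flip k) * min(2^k, cap),
    truncating the geometrically vanishing tail at `horizon` flips."""
    cap = 2 ** max_payout_exp
    return sum(0.5 ** k * min(2 ** k, cap) for k in range(1, horizon + 1))

# A roughly $1B cap (2^30 dollars) yields an EV of only about $31.
print(capped_ev(30))
```

This is a concrete version of the interview talking point: the infinite EV rests entirely on the vanishingly likely astronomical payouts, so any realistic cap makes the fair price small.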
How to Apply EV in Interviews
Always define your sample space explicitly. State what the outcomes are and their probabilities before computing anything. Interviewers penalise candidates who skip this step and land on the right answer by luck.
Default to indicator variables. Whenever you are computing expected counts — expected number of X that occur — define Iᵢ = 1 if event i happens and apply linearity. This avoids combinatorial sums in almost every case.
Check limiting cases. After computing an EV, verify it makes intuitive sense. For the coupon collector, n = 1 gives E = n · Hₙ = 1 — exactly one draw to collect the only coupon, as expected. If a limiting case gives a nonsensical answer, recheck your derivation.