To all the math-crunching engineers and devs out there… let’s play a game.
You flip a coin. The pot starts at $2.
Every time it lands tails, the pot doubles.
The game ends when you hit heads—and you take home whatever’s in the pot.
- First toss is heads? Cool, you win $2.
- Tails, then heads? You win $4.
- Two tails, then heads? $8.
- Keep going? $16, $32, $64, $128...
Now, here’s the real deal:
If this game had an entry fee, how much would you be willing to pay to play?
Most of us would cap out at around $20, the daring ones at maybe $50, and the outliers? (There are always a few, right?) Probably not past $100, I'd bet.
Well, here’s the catch:
The expected payout of this game is infinite, and being the rational, logical, quant-centric peeps we are, we should ideally be willing to pay any finite amount of money to play… but we don't, do we?
Let’s (Casually) Math This Out
You have a 1/2 chance to win $2.
A 1/4 chance to win $4.
A 1/8 chance to win $8.
And so on...
So the expected value is:
(1/2)($2) + (1/4)($4) + (1/8)($8) + … = 1 + 1 + 1 + … = ∞
Every term in that sum is exactly $1, and there are infinitely many terms, so the expected value diverges to infinity.
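Don't take my word for it; here's a quick simulation you can run yourself (a minimal sketch: the `play_once` helper and the sample sizes are my own choices, nothing standard). Watch how the average payout keeps creeping upward as you play more games; it never settles down, exactly because no finite expected value exists.

```python
import random

def play_once():
    """One game: the pot starts at $2 and doubles on every tails."""
    pot = 2
    while random.random() < 0.5:  # tails with probability 1/2
        pot *= 2                  # pot doubles, flip again
    return pot                    # heads: game over, take the pot

# The sample average never converges: it drifts upward roughly like
# log2(number of games) instead of settling near a fixed value.
for n in (1_000, 100_000, 1_000_000):
    total = sum(play_once() for _ in range(n))
    print(f"{n:>9,} games -> average payout ${total / n:,.2f}")
```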
In short, you have a chance in front of you to retire your entire lineage, and yet we hesitate to shell out more than 20 bucks to play. The geeks out there will know this as the St. Petersburg Paradox.
A 300-year-old riddle that continues to punch holes in economic theory, decision science, and how modern AI systems are trained to "think."
Through a Behavioral Scientist’s Looking Glass
In the modern world, where we all chase the highest-paying tech jobs, why does this 300-year-old riddle still have such an inexplicable hold over us?
The answer? We’re all just humans—and the way we’re wired fundamentally changes how money is viewed.
For rational creatures like us, money is not just a raw number. It's utility.
And this was proposed by none other than Daniel Bernoulli in 1738.
According to the famous Mr. Bernoulli:
The utility of money increases logarithmically, not linearly.
In other words:
U(money) = log(money)
When you rerun the coin-flip game through a utility lens, the “infinite jackpot” suddenly has a very finite appeal.
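Here's a minimal sketch of that lens in action (assuming the natural log, and ignoring your existing wealth, which Bernoulli's full treatment folds in): compute the expected utility of the game, then convert it back into a sure-cash figure, the so-called certainty equivalent.

```python
import math

# Expected utility under U(money) = log(money):
#   sum over k >= 1 of (1/2**k) * log(2**k)
# Truncating at k = 60 loses only a negligible tail.
expected_utility = sum((0.5 ** k) * math.log(2 ** k) for k in range(1, 60))

# Certainty equivalent: the sure amount of cash with the same utility.
certainty_equivalent = math.exp(expected_utility)

print(f"Expected utility     : {expected_utility:.4f}")       # ~1.3863 = ln(4)
print(f"Certainty equivalent : ${certainty_equivalent:.2f}")  # ~$4.00
```

Four bucks. Valued through log utility, the "infinite jackpot" is worth about the price of a coffee, and suddenly our $20 gut feeling doesn't look so irrational after all.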
Why It Still Matters (Maybe More Than Ever!!!!!)
In today’s fast and flashy world, this seemingly silly thought experiment plays a crucial role in designing the systems that run our lives.
AI, the buzzword of 2025, is often portrayed as making logical decisions based on a million calculations behind the scenes.
But what happens when it isn't aware of the utility viewpoint and makes decisions based only on expected values?
That would turn these systems into reckless gamblers rather than informed decision-makers (still a bit of a stretch, but you get the point). The toy example below makes the gap concrete.
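Here's a made-up illustration (the bet, the odds, and the $100 bankroll are all invented for the example): a gamble with positive expected value but near-ruin downside. The pure-EV agent takes it every time; the log-utility agent walks away.

```python
import math

WEALTH = 100.0  # hypothetical starting bankroll

# A made-up, high-variance bet: 50% chance wealth triples to $300,
# 50% chance it collapses to $1 (near-total ruin).
outcomes = [(0.5, 300.0), (0.5, 1.0)]

# Agent 1: pure expected value. EV = $150.50 > $100, so it accepts.
ev = sum(p * w for p, w in outcomes)
ev_accepts = ev > WEALTH

# Agent 2: expected log utility. E[U] = 2.85 < log(100) = 4.61, so it declines.
eu = sum(p * math.log(w) for p, w in outcomes)
eu_accepts = eu > math.log(WEALTH)

print(f"EV agent         : EV = ${ev:.2f} -> "
      f"{'ACCEPT' if ev_accepts else 'DECLINE'}")
print(f"Log-utility agent: E[U] = {eu:.2f} vs U(wealth) = {math.log(WEALTH):.2f} "
      f"-> {'ACCEPT' if eu_accepts else 'DECLINE'}")
```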
🤔 So What’s the Takeaway?
In 2025, we live in a world that runs on algorithms. From your investment portfolio to your food delivery app, decisions are being made by machines trained to optimize outcomes.
But here's the kicker: if these systems are trained to optimize purely for expected value, they'll chase infinite payouts with no regard for risk or context.
That's not just inefficient—it’s dangerous.
The St. Petersburg Paradox reminds us that rational decision-making requires more than math. It demands an understanding of how humans, and smart systems, value outcomes. Without this, AI systems risk becoming detached from reality, optimizing for theoretical gains at potentially catastrophic cost.
All right peeps, that's enough math for the day!!! See ya…