Your Players Are Bad at Math. That's a Design Problem.

Six cognitive biases that distort how players experience randomness and the two design strategies for dealing with them.

You can make your random systems perfectly fair. Balanced to the decimal. Doesn't matter. Players will still tell you the game is broken, because human brains don't process probability the way math says they should. If you design randomness without accounting for that, you're building for a player that doesn't exist.

Expected Value Isn't the Whole Story

Game designers lean on expected value calculations, and they should. But expected value alone doesn't predict whether a player will actually engage with a system.

Game A

Pay $1. Roll a d20. On 19 or 20, win $15.

That's a 50% player advantage. Easy yes.

Game B

Pay $1,499. Roll a d20. On 19 or 20, win $15,000.

Twice the expected profit per game. But most people won't touch it, because the stakes dwarf the edge.

Game C

Pay $100. Roll a d1,000,000. On a 1, win $1 billion.

A 900% player advantage. Nobody plays. The probability of winning is too low for the math to matter.

And at the other extreme: a game that pays one penny per round at 100% player advantage. Nobody plays that either, because the stakes are too low to justify the time.

The takeaway is that expected value, win probability, and stakes all have to land in an acceptable range simultaneously. If any one of them breaks, the math doesn't save you. Your EV calculations should confirm balance, but they won't predict player behavior at the edges.
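To make that concrete, here's a quick sanity check of the three games in Python. The stakes, odds, and payouts come straight from the examples above; the helper functions are just illustrative names, not anything from a real tool.

```python
# Sanity-check the three games above: expected profit and player advantage.

def expected_profit(stake, win_prob, payout):
    """Expected net profit per round: E[payout] minus the stake."""
    return win_prob * payout - stake

def player_advantage(stake, win_prob, payout):
    """Expected profit as a fraction of the stake."""
    return expected_profit(stake, win_prob, payout) / stake

games = {
    "A": (1,     2 / 20,        15),             # $1, 19-20 on a d20, $15
    "B": (1_499, 2 / 20,        15_000),         # $1,499, same odds, $15,000
    "C": (100,   1 / 1_000_000, 1_000_000_000),  # $100, 1 on a d1M, $1B
}

for name, (stake, p, payout) in games.items():
    print(f"Game {name}: EV per round = ${expected_profit(stake, p, payout):,.2f}, "
          f"advantage = {player_advantage(stake, p, payout):.2%}")

# Game A: EV per round = $0.50, advantage = 50.00%
# Game B: EV per round = $1.00, advantage = 0.07%
# Game C: EV per round = $900.00, advantage = 900.00%
```

Notice Game B in particular: twice the expected profit of Game A, but as a fraction of what's at risk, the edge is seven hundredths of a percent. That's the number the player's gut is actually responding to.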

The Six Biases That Will Wreck Your Random Systems

Even within a reasonable range of stakes and probabilities, players still misread what's happening. Their internal probability calculators are running flawed algorithms.

Selection Bias

Players estimate probability by how easily they can recall an event, and memory is a biased sample: they remember epic wins far more easily than routine losses, which means they overestimate their own win rate. Without hard stats, they'll also overestimate their skill, and then choose difficulty levels that are too hard for them. If your game doesn't include matchmaking or dynamic difficulty, you're relying on players to self-assess accurately. They won't.

Self-Serving Bias

Sid Meier flagged this in his 2010 GDC keynote: when players see a 75 to 80 percent win probability, they intuitively expect to win around 95 percent of the time. Losing a quarter of their battles at those odds feels broken. But flip it around, give them a 25 percent chance and they lose three out of four, and they accept that without complaint. The bias only runs one direction: in the player's favor.

Attribution Bias

When randomness helps the player, they internalize it. I made good decisions that led to this outcome. When randomness hurts them, they externalize it. The dice screwed me. The AI cheated. The game is broken. In video games where the RNG is hidden, some players genuinely believe the computer is altering numbers on purpose. The asymmetry between how wins and losses are processed is one of the most persistent problems in game design.

The Dunning-Kruger Effect

Low-skill players overestimate their ability because the skills needed to judge competence are the same skills they lack. When they lose in competitive settings, they don't conclude they need to improve; they conclude the game is unfair, opponents are cheating, or the balance is off. This is where your loudest balance-complaint forum posts come from.

Anchoring

The first number a player encounters becomes the reference point for everything after. Meier found that playtesters who were fine losing a third of the time at 2:1 odds would call the game unfair at 20:10 odds. The two ratios are mathematically identical, but 20 is a big number, so 10 feels tiny next to it. Casinos exploit this constantly: the jackpot number on a slot machine is always the biggest thing you see. In games, it means an RPG player who sees a small base damage number and then stacks bonuses may underestimate their total power, because they're anchored to that first low number.

The Gambler's Fallacy

Players expect randomness to look random, which means they don't expect streaks. But streaks are statistically normal. If your game involves coin flips and you have 3.2 million players, roughly 100,000 of them will experience six identical flips in a row as their very first interaction with the game. Half of those get six wins and develop unrealistic expectations. The other half get six losses and may quit, convinced the game is rigged. Neither group is experiencing a bug. Both groups will behave as if they are.
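The 100,000 figure falls straight out of the arithmetic: the chance that six fair flips all match is 2 × (1/2)^6 = 1/32, and 3,200,000 / 32 = 100,000. Here's a small simulation confirming it; the variable names and the fixed seed are mine, purely for illustration.

```python
import random

# The chance that a player's first six fair coin flips are all identical is
# 2 * (1/2)**6 = 1/32, so with 3.2 million players you expect
# 3_200_000 / 32 = 100_000 of them to start the game on a six-flip streak.

random.seed(42)  # fixed seed so the run is repeatable

PLAYERS = 3_200_000
streaks = sum(
    1
    for _ in range(PLAYERS)
    # 6 random bits: 0 means six losses in a row, 63 means six wins in a row
    if random.getrandbits(6) in (0, 63)
)

print(f"Players whose first six flips were identical: {streaks:,}")
print(f"Expected: {PLAYERS // 32:,}")
# The simulated count typically lands within a few hundred of 100,000.
```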

What You Actually Do About It

Knowing that players misread probability gives you two broad design directions: conform to the biases, or expose the truth. Both have tradeoffs.

Conform to the Biases

This is the pragmatic route. If players expect a displayed 75 percent to feel like 95 percent, then roll the dice at 95 percent behind the scenes and display 75 percent. If players hate long loss streaks, make each successive failure slightly less likely after the previous one, say, reduce the failure probability by 10 percent each time. Streaks become shorter, super-long streaks become impossible, and the experience matches what the player's brain expects.
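Here's a minimal sketch of both tricks combined: a hidden probability boost plus loss-streak dampening. The 75-to-95 mapping and the 10 percent dampening factor mirror the numbers above, but the class and its structure are my own illustration, not code from any shipped game.

```python
import random

# Sketch of a bias-conforming RNG: displayed odds map to friendlier internal
# odds, and each consecutive loss shrinks the remaining failure chance by 10%.

DISPLAY_TO_INTERNAL = {0.75: 0.95}  # show 75%, roll at 95% (illustrative)

class BiasConformingRNG:
    def __init__(self):
        self.loss_streak = 0

    def roll(self, displayed_prob: float) -> bool:
        # Use the boosted internal probability if one is defined.
        p = DISPLAY_TO_INTERNAL.get(displayed_prob, displayed_prob)
        # Each consecutive loss cuts the remaining failure chance by 10%,
        # so long losing streaks become geometrically less likely.
        fail_prob = (1 - p) * (0.9 ** self.loss_streak)
        won = random.random() >= fail_prob
        self.loss_streak = 0 if won else self.loss_streak + 1
        return won

rng = BiasConformingRNG()
results = [rng.roll(0.75) for _ in range(20)]  # reads 75%, feels like ~95%
```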

It's effective, and it's also dishonest. You're reinforcing incorrect mental models of how probability works. Whether that matters to you is a personal call.

The core tension: players like small gains a little and big gains a lot. They can tolerate small losses but hate big ones. Design your variance accordingly. Slot machines figured this out decades ago: frequent small losses, occasional big wins, never a surprise catastrophic loss.
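To put numbers on "design your variance accordingly," compare two made-up reward schedules with identical expected value. The slot-machine-shaped one has both lower variance and a bounded worst case:

```python
# Two reward schedules, each a list of (outcome, probability) pairs with the
# same EV per round but very different shapes. "skewed" follows the
# slot-machine pattern: frequent small losses, rare big wins, no catastrophe.

flat = [(-10, 0.5), (+10, 0.5)]      # coin-flip swings, EV = 0
skewed = [(-1, 0.95), (+19, 0.05)]   # slot-machine shape, EV = 0

for name, dist in (("flat", flat), ("skewed", skewed)):
    ev = sum(x * p for x, p in dist)
    var = sum(p * (x - ev) ** 2 for x, p in dist)
    worst = min(x for x, _ in dist)
    print(f"{name:6}: EV = {ev:+.2f}  variance = {var:5.1f}  worst loss = {worst}")

# flat  : EV = +0.00  variance = 100.0  worst loss = -10
# skewed: EV = +0.00  variance =  19.0  worst loss = -1
```

Same long-run payout, but only one of them ever hands the player a loss big enough to feel like punishment.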

Expose the Truth

The alternative is transparency. Track and display the actual distribution of random outcomes so players can verify fairness for themselves. The arcade version of Tetris did this elegantly: one half of the screen showed the game, the other half tracked how many of each piece type had appeared. Players who felt screwed could look at the data and see they weren't. It's surprisingly reassuring.

In a dice-heavy digital board game like Risk, you could let players pull up roll distribution stats at any time. In a card game against AI, you could reveal each hand's results and track cumulative winning-hand percentages. In competitive games, displaying actual win/loss records corrects the inflated self-assessment that selection bias creates. Cold numbers are hard to argue with.
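A tracker like that costs almost nothing to build. Here's a minimal sketch in the spirit of the Tetris stats panel; the class and method names are hypothetical, and the d6 loop stands in for whatever your game's random events are.

```python
from collections import Counter
import random

# Minimal outcome tracker: log every random result, let the player pull up
# the observed distribution and compare it against their intuition.

class RollTracker:
    def __init__(self):
        self.counts = Counter()
        self.total = 0

    def record(self, outcome):
        self.counts[outcome] += 1
        self.total += 1

    def report(self):
        for outcome in sorted(self.counts):
            observed = self.counts[outcome] / self.total
            print(f"{outcome}: {self.counts[outcome]:>5} rolls ({observed:.1%})")

tracker = RollTracker()
for _ in range(10_000):
    tracker.record(random.randint(1, 6))  # a d6; expect ~16.7% per face
tracker.report()
```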

You can also make randomness feel more legitimate by making it visible. Board games do this naturally. There's something about physically rolling dice or drawing cards that players trust. Digital games lose that tactile layer, and with it, some trust. Animating dice rolls, shuffling decks, and drawing cards doesn't change the math, but it makes the process feel real. Players are more likely to accept outcomes from a system they can watch operate.

The Uncomfortable Middle Ground

Most shipped games use some combination of both approaches. They fudge probabilities where it matters most (the early game, high-stakes moments, streaks) and provide transparency where it's cheap to do so. The fully honest approach works best in competitive games where the community values fairness. The bias-conforming approach works best in single-player or casual contexts where the feeling of the experience matters more than the integrity of the simulation.

There's a design ethics question buried in here that most teams skip past: every time you bend probability to match a player's flawed expectations, you're teaching them that their flawed expectations are correct. Games are teachers whether they intend to be or not. Poker is wildly popular and profitable, and it punishes every single probability misconception mercilessly. Players keep coming back.

The math doesn't end when the spreadsheet checks out. It ends when the player's experience matches what the numbers promise. If you're designing random systems without a model of how your players will perceive them, you're only solving half the problem.
