What Bayesian inference actually does, in pictures

How the system turns your questions into probability distributions — and why that's more useful than a single answer.

Every time you ask Simmis a question, you get back something more than a number. You get a distribution — a range of futures the system considers plausible, weighted by how likely each one is. To understand why, you need to understand Bayesian inference. Not the math. The shape of it.

Start with what you already believe

Before you show the system any data, it already has a belief. Not a guess in the pejorative sense — a structured representation of uncertainty. This is the prior.

[Figure: a wide prior curve, from low to high — x-axis: value of unknown quantity, y-axis: probability]
The prior is wide — the system is uncertain, and that uncertainty is honest.

A wide, flat prior says: “I don’t know much. Many values are plausible.” A narrow prior says: “I have strong domain knowledge — the answer is probably in this range.” Both are valid. The prior is where you encode what you know before you look at the evidence.

This matters because most decision-support tools skip this step. They assume a blank slate, then fill it with whatever the data says. Bayesian inference does the opposite: it asks you to make your assumptions explicit, then updates them in light of evidence.
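As a concrete sketch, a prior can be nothing more than a grid of candidate values with a probability attached to each. The sprint-velocity framing, grid range, and numbers below are illustrative assumptions, not Simmis internals:

```python
import numpy as np

# Hypothetical unknown: a team's sprint velocity, in story points.
# Discretize onto a grid and represent belief as probabilities that sum to 1.
grid = np.linspace(40, 120, 801)  # candidate values of the unknown

# Wide prior: "I don't know much — many values are plausible."
wide = np.exp(-0.5 * ((grid - 80) / 25) ** 2)
wide /= wide.sum()

# Narrow prior: "strong domain knowledge — probably close to 80."
narrow = np.exp(-0.5 * ((grid - 80) / 5) ** 2)
narrow /= narrow.sum()
```

Both priors are valid beliefs; they differ only in how much probability mass they spread across the grid.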

Data arrives. Beliefs shift.

Now you show the system some data — observations, measurements, historical outcomes. Each observation is a constraint: it makes some values more plausible and some less. The system computes how likely each value of the unknown was to produce this data. That’s the likelihood.

[Figure: the likelihood curve alongside the unchanged prior, with the observed data points marked]
Data clusters around a region. The likelihood peaks there — those values would most plausibly have produced this data.

The likelihood isn’t a belief. It’s a measurement of fit: for each possible value of the unknown, how well does it explain what we observed?
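In code, that measurement of fit looks like this — for each grid value, score how plausibly it would have produced every observation. The Gaussian noise model, the noise scale, and the data are illustrative assumptions:

```python
import numpy as np

grid = np.linspace(40, 120, 801)               # candidate values of the unknown
data = np.array([74.0, 79.0, 77.0, 81.0])      # hypothetical observations
sigma = 6.0                                    # assumed observation noise

# For each candidate value, multiply the per-observation densities
# (sum in log space for numerical stability, then exponentiate).
log_lik = -0.5 * ((data[:, None] - grid[None, :]) / sigma) ** 2
likelihood = np.exp(log_lik.sum(axis=0))

# The curve peaks near the data's mean (~77.75 here). Note it is not
# normalized — the likelihood is a fit score, not a belief.
```

Nothing about the prior appears in this computation; the likelihood depends only on the data and the noise model.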

The update

Multiply prior by likelihood, normalize so it sums to one, and you have the posterior — your updated belief after seeing the data.

[Figure: the posterior curve overlaid on the prior and the likelihood]
The posterior is narrower (more certain) and shifted toward where the data clusters. Prior belief and evidence have been combined.

The posterior is narrower than the prior because data reduces uncertainty. It’s centered on — but not identical to — the data cluster, because the prior pulls it slightly. That pull is the whole point: prior knowledge doesn’t disappear when evidence arrives; it just gets weighted appropriately.
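The update itself is two lines. Combining the illustrative prior and likelihood from above (all numbers are assumptions for the sketch):

```python
import numpy as np

grid = np.linspace(40, 120, 801)
data = np.array([74.0, 79.0, 77.0, 81.0])      # hypothetical observations
sigma = 6.0                                    # assumed observation noise

# Prior centered at 85: domain knowledge says velocity is usually higher.
prior = np.exp(-0.5 * ((grid - 85) / 15) ** 2)
prior /= prior.sum()

likelihood = np.exp((-0.5 * ((data[:, None] - grid) / sigma) ** 2).sum(axis=0))

# The update: multiply prior by likelihood, normalize so it sums to one.
posterior = prior * likelihood
posterior /= posterior.sum()

# The result sits near the data mean (~77.75) but is pulled slightly
# toward the prior mean (85), and is much narrower than the prior.
```

Run it and you can verify both claims from the text: the posterior's peak lands between the data cluster and the prior's center, and its spread is a fraction of the prior's.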

Why a distribution beats a number

Most systems give you a single answer: “Q3 velocity will be 82 story points.” Bayesian inference gives you a distribution: “Q3 velocity will most likely be around 78–86 story points, with a 12% chance of falling below 65 if onboarding runs long.”

That tail — that 12% — is information. Ignoring it means ignoring a real scenario the data supports.

[Figure: the posterior with the most likely value marked and the 12% of probability mass below the risk threshold shaded]
The point estimate misses the left tail — a real 12% scenario with real consequences.
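Tail risk like that drops straight out of a distribution. A sketch, with an illustrative posterior chosen so that roughly 12% of its mass sits below the threshold (the numbers are made up, not Simmis output):

```python
import numpy as np

grid = np.linspace(40, 120, 801)

# Hypothetical posterior over Q3 velocity, for illustration only.
posterior = np.exp(-0.5 * ((grid - 80) / 12.8) ** 2)
posterior /= posterior.sum()

point_estimate = grid[posterior.argmax()]   # the single "answer": ~80
p_below = posterior[grid < 65].sum()        # mass in the left tail: ~0.12

# The point estimate alone says nothing about p_below.
# The tail probability is information the distribution carries for free.
```

A system that reports only `point_estimate` has thrown `p_below` away, which is exactly the scenario the figure above describes.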

In Simmis, when you ask “will we hit the October release?”, the system doesn’t say yes or no. It says: “73% likely, given current data. The main risk is in the compliance review window, not engineering throughput.” That 73% is a posterior probability — prior beliefs about your team, updated by current velocity data and known blockers.

Sequential updating

One more thing. Bayesian inference is sequential. Today’s posterior becomes tomorrow’s prior. Every new observation makes the system sharper.

[Figure: the belief at the start and after weeks 1, 4, and 8 — more data, more certain]
Each week of new observations narrows the posterior. The system learns without forgetting what came before.

This is what “gets smarter over time” means concretely. The model isn’t just accumulating data — it’s accumulating evidence about the shape of your world. After eight weeks of sprint data, the system has a much sharper picture of your team’s velocity distribution than it did at week one.
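Sequential updating is just the same two-line update in a loop, with today’s posterior carried forward as tomorrow’s prior. The weekly batches and noise scale below are illustrative assumptions:

```python
import numpy as np

grid = np.linspace(40, 120, 801)
sigma = 6.0                                  # assumed observation noise
belief = np.ones_like(grid) / grid.size      # start flat: "we don't know"

# Hypothetical weekly batches of velocity observations.
weeks = [
    np.array([72.0, 75.0]),
    np.array([78.0, 74.0, 80.0]),
    np.array([76.0, 77.0, 79.0]),
]

widths = []
for obs in weeks:
    lik = np.exp((-0.5 * ((obs[:, None] - grid) / sigma) ** 2).sum(axis=0))
    belief = belief * lik
    belief /= belief.sum()                   # today's posterior...
    # ...becomes next week's prior on the following loop iteration.
    mean = (grid * belief).sum()
    widths.append((((grid - mean) ** 2) * belief).sum() ** 0.5)

# widths shrinks week over week: each batch narrows the belief
# without discarding what earlier weeks established.
```

Nothing is refit from scratch each week; the loop never touches old data again, yet the belief keeps everything those observations established.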

It also means the system can tell you when its beliefs are fragile. A wide posterior isn’t a failure — it’s honest uncertainty. Sometimes the honest answer is: “we don’t have enough data to be confident. Here’s what more data would resolve.”

Bayesian inference is the mathematical form of good epistemic practice: hold beliefs proportionate to evidence, update when evidence arrives, and never confuse a best guess with a fact. Every answer comes with the uncertainty that’s actually there.

That’s more useful than a number. It’s a map of what the system knows, and doesn’t.

Every scenario Simmis generates is a draw from a posterior. If you want to see what inference-backed decisions look like in practice, we're in early access.
