Enter Probabilities

  • Prior probability P(H): a value between 0 and 1 (e.g., 0.01 = 1%)
  • True positive rate (sensitivity) P(E|H): e.g., 0.90 = 90%
  • False positive rate P(E|~H): e.g., 0.05 = 5%

Bayes' Theorem
P(H|E) = P(E|H) * P(H) / P(E)
P(H|E) = Posterior | P(E|H) = Sensitivity | P(H) = Prior | P(E) = Evidence probability

Calculation Results

Example output for a 1% prior, 90% sensitivity, and a 5% false positive rate:

  • Posterior Probability P(H|E): 15.38% (Strong Update: evidence dramatically increases belief)
  • Evidence P(E): 5.85%
  • Prior Odds: 0.0101:1
  • Likelihood Ratio: 18.00
  • Posterior Odds: 0.1818:1
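These results can be reproduced in a few lines of Python. A minimal sketch, assuming the example inputs shown above (1% prior, 90% sensitivity, 5% false positive rate):

```python
def bayes_update(prior, sensitivity, false_positive_rate):
    """Return (posterior, evidence, prior_odds, likelihood_ratio, posterior_odds)."""
    # P(E) by the law of total probability
    evidence = sensitivity * prior + false_positive_rate * (1 - prior)
    # P(H|E) by Bayes' theorem
    posterior = sensitivity * prior / evidence
    prior_odds = prior / (1 - prior)
    likelihood_ratio = sensitivity / false_positive_rate
    posterior_odds = prior_odds * likelihood_ratio
    return posterior, evidence, prior_odds, likelihood_ratio, posterior_odds

post, ev, prior_odds, lr, post_odds = bayes_update(0.01, 0.90, 0.05)
print(f"Posterior P(H|E): {post:.2%}")   # 15.38%
print(f"Evidence P(E): {ev:.2%}")        # 5.85%
print(f"Likelihood ratio: {lr:.2f}")     # 18.00
```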

Base-Rate Caution

Despite high sensitivity, the low prior probability (base rate) limits how much a positive result increases your belief. Many positive results will be false positives.

Formula Breakdown

P(H|E) = P(E|H) * P(H) / P(E)
With the example inputs: P(H|E) = (0.90 * 0.01) / 0.0585 = 0.1538 = 15.38%

Interpretation Guide

Likelihood Ratio | Evidence Strength | Interpretation
> 10 | Strong | Strong evidence for hypothesis
3 - 10 | Moderate | Moderate evidence for hypothesis
1 - 3 | Weak | Weak evidence for hypothesis
0.3 - 1 | Weak Against | Weak evidence against hypothesis
0.1 - 0.3 | Moderate Against | Moderate evidence against hypothesis
< 0.1 | Strong Against | Strong evidence against hypothesis
CALCULATOR BY
Ryan O'Connell, CFA
CFA Charterholder & Finance Educator

Finance professional building free tools for options pricing, valuation, and portfolio management.

Model Assumptions

  • Binary hypothesis: The model assumes a single hypothesis H vs. its complement (not H)
  • Specified likelihoods: Sensitivity and false positive rate are assumed to be correctly estimated for the population of interest
  • Single evidence: This calculator handles one piece of evidence; sequential updating requires applying the posterior as the new prior
  • Prior elicitation: The prior must be specified externally based on domain knowledge or historical data

Educational Purpose: This calculator is for educational purposes only. Results should not be interpreted as medical diagnoses, investment recommendations, or professional advice. Actual applications require careful estimation of prior probabilities and test characteristics by qualified professionals.

Understanding Bayesian Updating

What is Bayesian Updating?

Bayesian updating is the process of revising a prior probability into a posterior probability after observing new evidence. Rather than starting from scratch each time new data arrives, you combine what you already knew with what the new evidence tells you.

Bayes' Theorem
P(H|E) = [P(E|H) * P(H)] / P(E)
Posterior = (Likelihood * Prior) / Evidence Probability

Where:

  • P(H) — Prior probability (your initial belief before observing evidence)
  • P(E|H) — Likelihood/Sensitivity (probability of evidence if hypothesis is true)
  • P(E) — Evidence probability (overall probability of observing the evidence)
  • P(H|E) — Posterior probability (updated belief after observing evidence)
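A quick worked check of these four quantities, using a hypothetical spam-filtering scenario (the 20%, 60%, and 8% figures below are invented for illustration): suppose 20% of email is spam, the word "free" appears in 60% of spam, and in 8% of legitimate mail.

```python
prior = 0.20          # P(H): message is spam, before looking at content
likelihood = 0.60     # P(E|H): "free" appears, given spam
p_e_given_not_h = 0.08  # P(E|~H): "free" appears, given legitimate mail

# P(E): overall probability of seeing "free", by total probability
evidence = likelihood * prior + p_e_given_not_h * (1 - prior)
# P(H|E): updated belief that the message is spam
posterior = likelihood * prior / evidence
print(f"P(E) = {evidence:.3f}, P(H|E) = {posterior:.3f}")
```

Seeing the word lifts the spam probability from the 20% prior to roughly 65%, which is exactly the prior-times-likelihood-over-evidence mechanic described above.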

The Base-Rate Fallacy

The most common error in probabilistic reasoning is base-rate neglect — focusing only on the signal's accuracy while ignoring how rare the event is. Even a highly accurate test produces many false positives when testing for rare conditions.

Key Insight: A test with 90% sensitivity and a 10% false positive rate applied to a 1% base-rate condition yields only an 8.3% posterior probability after a positive result. Most positive results are false positives!

How Base Rates Affect Posteriors

This table shows how the same test (90% sensitivity, 10% false positive rate) produces dramatically different posteriors depending on the base rate:

Base Rate (Prior) | Sensitivity | False Positive Rate | Posterior After Positive
50% | 90% | 10% | 90.0%
10% | 90% | 10% | 50.0%
5% | 90% | 10% | 32.1%
1% | 90% | 10% | 8.3%
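The table can be reproduced by sweeping the prior while holding the test fixed; a short sketch:

```python
def posterior_after_positive(prior, sens=0.90, fpr=0.10):
    """P(H | positive) for a test with the given sensitivity and false positive rate."""
    return sens * prior / (sens * prior + fpr * (1 - prior))

# Same test, four different base rates
for prior in (0.50, 0.10, 0.05, 0.01):
    print(f"prior {prior:>5.0%} -> posterior {posterior_after_positive(prior):.1%}")
```

The test never changes; only the prior does, yet the posterior swings from 90% down to 8.3%.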

The Odds Form

For sequential updates with multiple pieces of evidence, the odds form is often more convenient:

Bayes' Theorem (Odds Form)
Posterior Odds = Prior Odds * Likelihood Ratio
Simply multiply odds by each likelihood ratio in sequence
  • Prior Odds = P(H) / P(~H)
  • Likelihood Ratio = P(E|H) / P(E|~H)

A likelihood ratio of 10 means the evidence is 10 times more likely if the hypothesis is true. Ratios above 10 are considered strong evidence; below 0.1 is strong evidence against.
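Sequential updating in odds form is just repeated multiplication. A sketch assuming two conditionally independent pieces of evidence with hypothetical likelihood ratios of 18 and 3, starting from a 1% prior:

```python
def to_odds(p):
    """Convert a probability to odds, p : (1 - p)."""
    return p / (1 - p)

def to_prob(odds):
    """Convert odds back to a probability."""
    return odds / (1 + odds)

prior = 0.01
odds = to_odds(prior)        # prior odds, about 0.0101:1
for lr in (18.0, 3.0):       # one multiplication per piece of evidence
    odds *= lr
print(f"posterior after both pieces of evidence: {to_prob(odds):.1%}")
```

Converting back to a probability at the end gives a posterior of roughly 35%, versus about 15% after the first piece of evidence alone.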

Real-World Applications

  • Medical diagnosis: Interpreting test results given disease prevalence
  • Fraud detection: Updating risk scores based on transaction patterns
  • Credit analysis: Revising default probabilities with new financial data
  • Spam filtering: Classifying emails based on content features
  • Quantitative trading: Updating market views based on new information
Remember: The posterior depends on both signal quality and base rate. A strong signal applied to a rare event may still yield a low posterior probability.

Frequently Asked Questions

What is Bayesian updating?

Bayesian updating is a method for revising probability estimates when new evidence is observed. It uses Bayes' theorem to combine your prior belief (before evidence) with the likelihood of the evidence to produce a posterior belief (after evidence). The process is fundamental to rational decision-making under uncertainty.

What is the base-rate fallacy?

The base-rate fallacy occurs when people ignore or underweight the prior probability (base rate) when interpreting evidence. Even with a highly sensitive test, if the condition being tested for is rare, most positive results may be false positives. This is why medical screening for rare diseases can produce many false alarms.
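The false-alarm arithmetic can be made concrete with counts. A hypothetical screening of 10,000 people for a condition with a 1% base rate, using a test with 90% sensitivity and a 5% false positive rate:

```python
population = 10_000
prior, sens, fpr = 0.01, 0.90, 0.05

# Expected counts among the screened population
true_positives = population * prior * sens         # sick and correctly flagged
false_positives = population * (1 - prior) * fpr   # healthy but flagged anyway
share_false = false_positives / (true_positives + false_positives)
print(f"{false_positives:.0f} of {true_positives + false_positives:.0f} "
      f"positives are false alarms ({share_false:.0%})")
```

Of the roughly 585 expected positives, about 495 are false alarms: around 85% of positive results come from healthy people, purely because healthy people vastly outnumber sick ones.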

What is the difference between sensitivity and specificity?

Sensitivity (true positive rate) is P(positive test | condition present): the probability of detecting a condition when it exists. Specificity (true negative rate) is P(negative test | condition absent): the probability of a negative result when the condition is absent. The false positive rate used in this calculator equals 1 - specificity.
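The conversion between these quantities is a one-liner. A sketch assuming a hypothetical test with 90% sensitivity and 95% specificity:

```python
sensitivity = 0.90
specificity = 0.95
# The calculator's "false positive rate" input is the complement of specificity
false_positive_rate = 1 - specificity

# Likelihood ratios for positive and negative results
lr_positive = sensitivity / false_positive_rate     # P(+|H) / P(+|~H)
lr_negative = (1 - sensitivity) / specificity       # P(-|H) / P(-|~H)
print(f"FPR = {false_positive_rate:.2f}, "
      f"LR+ = {lr_positive:.1f}, LR- = {lr_negative:.3f}")
```

A 95% specificity corresponds to a 5% false positive rate, giving a positive likelihood ratio of 18 (strong evidence for) and a negative likelihood ratio of about 0.105 (close to strong evidence against).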

What does the likelihood ratio mean?

The likelihood ratio indicates how much more likely the evidence is under the hypothesis vs. the alternative. A ratio of 10 means the evidence is 10 times more likely if the hypothesis is true. Ratios above 10 are considered strong evidence for the hypothesis; ratios below 0.1 are strong evidence against. A ratio of 1 means the evidence provides no information.

How do I update on multiple pieces of evidence?

For independent evidence, you can chain updates: use the posterior from the first evidence as the prior for the second. The odds form makes this straightforward: just multiply the prior odds by each likelihood ratio in sequence. If evidence is not conditionally independent, more complex models are needed.
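Chaining in probability form looks like this. A hypothetical two-test example (each test: 90% sensitivity, 5% false positive rate), starting from a 1% prior:

```python
def update(prior, sens=0.90, fpr=0.05):
    """One Bayesian update after a positive result."""
    return sens * prior / (sens * prior + fpr * (1 - prior))

p = 0.01
p = update(p)   # after the first positive, about 15.4%
p = update(p)   # the first posterior becomes the prior for the second test
print(f"posterior after two positives: {p:.1%}")
```

Two positives lift the probability to roughly 77%, matching the odds-form result of multiplying the prior odds by a likelihood ratio of 18 twice.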

What are common real-world applications?

Common applications include medical diagnosis (interpreting test results given disease prevalence), fraud detection (updating risk scores based on transaction patterns), credit analysis (revising default probabilities with new financial data), spam filtering (classifying emails based on content features), and quantitative trading (updating market views based on new information).