Formulas
R² = 1 − SSR / SST
Adj R² = 1 − [(1 − R²)(n − 1) / (n − k − 1)]
where:
SSR = Sum of Squared Residuals
SST = Total Sum of Squares
n = Sample size
k = Number of regressors
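The two formulas above translate directly to code. A minimal sketch, using SSR, SST, n, and k as defined in the legend (the example inputs SSR = 500, SST = 2000, n = 100, k = 3 are illustrative, chosen to match the results shown on this page):

```python
def r_squared(ssr: float, sst: float) -> float:
    """R² = 1 − SSR / SST."""
    return 1 - ssr / sst

def adjusted_r_squared(ssr: float, sst: float, n: int, k: int) -> float:
    """Adj R² = 1 − (1 − R²)(n − 1) / (n − k − 1)."""
    r2 = r_squared(ssr, sst)
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# Illustrative inputs (assumed, not from the source)
print(round(r_squared(500, 2000), 4))                   # 0.75
print(round(adjusted_r_squared(500, 2000, 100, 3), 4))  # 0.7422
```

Note that for fixed R², the adjustment shrinks toward R² as n grows and away from it as k grows.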
Results
R-Squared: 0.7500
Adjusted R-Squared: 0.7422
AIC: 168.94
BIC: 179.36
SER (Standard Error of the Regression): 2.2822
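These figures can be reproduced with a reduced comparison form of AIC and BIC (constants omitted, consistent with the Model Assumptions & Notes on this page): AIC = n·ln(SSR/n) + 2p and BIC = n·ln(SSR/n) + p·ln(n), where p = k + 1 counts the regressors plus the intercept. The inputs below are assumed values that match the displayed results, not values taken from the source:

```python
import math

# Illustrative inputs (assumed; they reproduce the displayed results)
ssr, n, k = 500.0, 100, 3
p = k + 1  # estimated parameters: regressors plus intercept

aic = n * math.log(ssr / n) + 2 * p            # reduced form, constants omitted
bic = n * math.log(ssr / n) + p * math.log(n)  # heavier penalty once ln(n) > 2
ser = math.sqrt(ssr / (n - k - 1))             # standard error of the regression

print(round(aic, 2), round(bic, 2), round(ser, 4))  # 168.94 179.36 2.2822
```

Because the constant terms are dropped, raw values will differ from full-likelihood software output, but model rankings on the same dependent variable and sample are preserved.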
Color coding is a general guideline. Appropriate R² varies by field — cross-sectional data often has lower R².
Metric            Model 1    Model 2
R-Squared         --         --
Adj R-Squared     --         --
AIC               --         --
BIC               --         --
SER               --         --
Comparison valid only when both models use the same dependent variable and sample.
R² vs Adjusted R²
Complexity Penalty Gap: 0.0078
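The gap is simply R² minus adjusted R². A quick check against the result values shown above:

```python
r2, adj_r2 = 0.7500, 0.7422
gap = round(r2 - adj_r2, 4)  # complexity penalty gap
print(gap)  # 0.0078
```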
Model Assumptions & Notes
- Assumes standard OLS regression with an intercept.
- R² measures in-sample fit only, not out-of-sample predictive accuracy.
- Adjusted R² penalizes model complexity — more regressors is not necessarily better.
- AIC and BIC use a reduced OLS comparison form: constant terms are omitted, so raw values may differ from software output. Values are valid for ranking models estimated on the same dependent variable and sample.
- Lower AIC/BIC indicates a better fit-complexity tradeoff. BIC penalizes more heavily than AIC for n ≥ 8.
- For educational purposes. Not financial advice. Market conventions simplified.
Frequently Asked Questions
What is R-squared?
R-squared (R²), also called the coefficient of determination, measures the proportion of variance in the dependent variable explained by the independent variables in a regression model. An R² of 0.75 means 75% of the variation in y is explained by the model. It ranges from 0 to 1 when the model includes an intercept.
What is the difference between R-squared and adjusted R-squared?
R-squared never decreases when you add more variables to a regression, even if those variables are irrelevant. Adjusted R-squared penalizes for the number of regressors (k), so it can decrease when adding variables that do not improve fit. This makes adjusted R-squared more useful for comparing models with different numbers of independent variables.
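A hypothetical numeric illustration of that penalty: an irrelevant extra regressor nudges R² up slightly, yet adjusted R² falls (the fit values 0.750 and 0.751 are made up for the example):

```python
def adj_r2(r2: float, n: int, k: int) -> float:
    # Adj R² = 1 − (1 − R²)(n − 1) / (n − k − 1)
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

n = 100
before = adj_r2(0.750, n, k=3)  # hypothetical baseline fit
after  = adj_r2(0.751, n, k=4)  # R² barely rises after adding one regressor
print(round(before, 4), round(after, 4))  # 0.7422 0.7405
```

The tiny gain in R² does not cover the cost of the extra degree of freedom, so adjusted R² drops.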
Can R-squared or adjusted R-squared be negative?
R-squared is nonnegative in standard OLS regression with an intercept. Without an intercept, it can be negative. However, adjusted R-squared can be negative even with an intercept, which indicates the model fits worse than simply using the sample mean of y as the prediction — a sign of very poor model fit relative to its complexity.
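A hypothetical case where adjusted R² goes negative despite an intercept: a tiny R², many regressors, and a small sample (all numbers made up for illustration):

```python
# Hypothetical: R² = 0.02, n = 20 observations, k = 5 regressors
adj = 1 - (1 - 0.02) * (20 - 1) / (20 - 5 - 1)
print(round(adj, 2))  # -0.33
```

Even though the model "explains" 2% of the variance in-sample, the complexity penalty pushes the adjusted figure below zero.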
What are AIC and BIC?
The Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC) are relative model-comparison metrics that balance goodness of fit against model complexity. They are not standalone quality scores — they are only meaningful when comparing models estimated on the same dependent variable and sample. Lower values indicate a better fit-complexity tradeoff. BIC imposes a heavier penalty for additional parameters than AIC when the sample size is 8 or more.
Does a high R-squared mean my model is good?
Not necessarily. A high R-squared only means the model explains a large portion of in-sample variation. It does not guarantee correct causal interpretation, absence of omitted variable bias, or good out-of-sample prediction. Conversely, a low R-squared does not mean the model is useless — in cross-sectional studies, low R-squared values are common and the model may still provide unbiased estimates of important relationships.
How should I compare two regression models?
Compare models using adjusted R-squared (higher is better), AIC (lower is better), and BIC (lower is better). These metrics are valid when both models use the same dependent variable and sample. If the criteria disagree, BIC's preference for parsimony is often a reasonable tiebreaker. For nested models, the F-test answers a different question: whether the added regressors are jointly significant.
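That workflow can be sketched by summarizing each model by its SSR and regressor count on a shared sample and ranking with the reduced criteria (the SSR and k values here are hypothetical, not from the source):

```python
import math

def reduced_aic(ssr: float, n: int, k: int) -> float:
    # Reduced form: n·ln(SSR/n) + 2(k + 1), constants omitted
    return n * math.log(ssr / n) + 2 * (k + 1)

def reduced_bic(ssr: float, n: int, k: int) -> float:
    # Reduced form: n·ln(SSR/n) + (k + 1)·ln(n)
    return n * math.log(ssr / n) + (k + 1) * math.log(n)

n = 100  # same sample size for both models (required for a valid comparison)
models = {"Model 1": (500.0, 3), "Model 2": (480.0, 6)}  # name: (SSR, k), hypothetical

for name, (ssr, k) in models.items():
    print(name, round(reduced_aic(ssr, n, k), 2), round(reduced_bic(ssr, n, k), 2))

best_aic = min(models, key=lambda m: reduced_aic(models[m][0], n, models[m][1]))
print("Preferred by AIC:", best_aic)  # Model 1
```

Here Model 2's lower SSR does not offset its three extra regressors, so both criteria prefer Model 1.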
Disclaimer
This calculator is for educational purposes only. R-squared, adjusted R-squared, AIC, and BIC are in-sample fit metrics and do not guarantee out-of-sample predictive accuracy. AIC and BIC values use a reduced OLS comparison formula — raw values may differ from statistical software but rankings are preserved. Always verify results against your software output.