Enter Values

Number of observations
Regressors (excluding intercept)
Auxiliary R² — from the auxiliary regression (with intercept)
Auxiliary terms (excluding intercept)
OLS coefficient estimate
Usual (homoskedastic) SE
HC1 robust SE from regression output
Test Formulas
BP = n × R²aux ~ χ²(k)
White = n × R²aux ~ χ²(q)
n = observations | k = regressors | q = auxiliary terms | R²aux = auxiliary R-squared
Calculator by Ryan O'Connell, CFA
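The two test formulas can be evaluated directly from the summary inputs. A minimal SciPy sketch with hypothetical values (n = 100, R²aux = 0.12, k = 3 — the variable names are illustrative, not the calculator's own code):

```python
from scipy.stats import chi2

def lm_test(n, r2_aux, df, alpha=0.05):
    """n * R^2 LM statistic compared against a chi-squared(df) distribution."""
    stat = n * r2_aux               # BP or White test statistic
    crit = chi2.ppf(1 - alpha, df)  # critical value at significance level alpha
    p = chi2.sf(stat, df)           # right-tail p-value
    return stat, crit, p

# Breusch-Pagan with hypothetical inputs: n = 100, R^2_aux = 0.12, k = 3
bp_stat, bp_crit, bp_p = lm_test(100, 0.12, 3)
# For the White test, pass df = q (auxiliary terms, excluding intercept)
```

The same function serves both tests; only the degrees of freedom change.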

Test Results

Test Statistic: --
Critical Value: --
p-value: --
Degrees of Freedom: --
SE Ratio (Robust/OLS): --
OLS t-statistic: --
Robust t-statistic: --
Interpretation Note: The 1.2/0.8 ratio bands are heuristic guidelines, not formal statistical thresholds.
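The coefficient-level diagnostics in the results panel follow directly from the three entered values. A small sketch with made-up numbers:

```python
# Hypothetical entered values: coefficient, homoskedastic SE, HC1 robust SE
beta, se_ols, se_hc1 = 0.85, 0.20, 0.31

se_ratio = se_hc1 / se_ols   # > 1.2 suggests OLS understates uncertainty
t_ols = beta / se_ols        # t-statistic using the homoskedastic SE
t_robust = beta / se_hc1     # t-statistic using the HC1 robust SE
```

Here the robust t-statistic is noticeably smaller, which is the typical pattern when heteroskedasticity inflates the true sampling variance.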

χ² Distribution

Formula Breakdown

Breusch-Pagan: BP = n × R²aux ~ χ²(k)
Step-by-step computation

Model Assumptions

  • H0: Var(u|X1,...,Xk) = σ² (homoskedasticity — constant error variance)
  • Linear regression model with zero conditional mean: E(u|X) = 0
  • BP test assumes normally distributed errors (sensitive to non-normality)
  • White test does not require normality — more general, but a significant result can reflect heteroskedasticity and/or model misspecification
  • Robust SEs are valid under heteroskedasticity but require large samples; the entered robust SE is assumed to be HC1 from your regression output
  • Both k and q exclude the intercept; the auxiliary R² must come from a regression that includes an intercept
  • This is a cross-sectional OLS diagnostic calculator

For educational purposes only; not financial advice. Econometric assumptions are simplified for instructional use.

Interpretation Guide

Result | Condition | Action
Reject H0 | p-value < α | Use robust SEs or WLS
Fail to reject H0 | p-value ≥ α | OLS inference valid
SE Ratio > 1.2 | Robust SE >> OLS SE | OLS understates uncertainty for this coefficient
SE Ratio 0.8–1.2 | SEs broadly similar | Little practical difference
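The guide above can be expressed as a small decision helper. This is a sketch of the heuristic bands only; the branch for ratios below 0.8 is our addition, consistent with the 0.8 band mentioned in the results note:

```python
def interpret(p_value, se_ratio, alpha=0.05):
    """Map a test p-value and SE ratio to the guide's suggested actions."""
    if p_value < alpha:
        decision = "Reject H0: use robust SEs or WLS"
    else:
        decision = "Fail to reject H0: OLS inference valid"
    if se_ratio > 1.2:
        band = "Robust SE >> OLS SE: OLS understates uncertainty"
    elif se_ratio >= 0.8:
        band = "SEs broadly similar: little practical difference"
    else:
        # Not in the guide's table; an unusually low ratio merits a re-check
        band = "Robust SE well below OLS SE (uncommon; double-check inputs)"
    return decision, band
```

Remember these bands are heuristics, not formal thresholds.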

Understanding Heteroskedasticity Testing

What is Heteroskedasticity?

Heteroskedasticity occurs when the variance of regression errors is not constant across observations — formally, Var(u|X) changes with X. In a scatter of residuals against fitted values, heteroskedasticity often appears as a fan or cone shape, where the spread of residuals widens (or narrows) as the fitted values increase.

Null Hypothesis
H0: Var(u|X1,...,Xk) = σ² (constant variance)
H1: Var(u|X1,...,Xk) ≠ σ² (non-constant variance)
Under E(u|X) = 0, heteroskedasticity does not bias coefficients — only standard errors

Consequences of Heteroskedasticity

  • Coefficients remain unbiased — OLS estimates are still unbiased and consistent under E(u|X) = 0, regardless of heteroskedasticity
  • Standard errors are biased — OLS standard errors computed under homoskedasticity are incorrect, leading to unreliable t-tests, F-tests, and confidence intervals
  • OLS is no longer efficient — OLS is not the minimum-variance linear unbiased estimator (violates the Gauss-Markov theorem)

BP Test vs. White Test

Breusch-Pagan Test

Tests whether error variance depends linearly on the regressors. Assumes normally distributed errors. Uses k degrees of freedom. Best when you suspect variance increases with specific X variables.

White Test

More general — tests for any pattern in error variance using levels, squares, and cross-products. Does not require normality. Uses q degrees of freedom. Note: rejection can reflect model misspecification, not just heteroskedasticity.
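The White auxiliary regressor set (and hence q) can be enumerated mechanically. A sketch using the standard library, with illustrative regressor names:

```python
from itertools import combinations_with_replacement

def white_aux_terms(names):
    """Levels, squares, and cross-products of the regressors.

    Returns the auxiliary-regression term labels; q = len(result)
    (the intercept is excluded, matching the calculator's convention)."""
    levels = list(names)
    second_order = [f"{a}*{b}" for a, b in combinations_with_replacement(names, 2)]
    return levels + second_order

terms = white_aux_terms(["x1", "x2", "x3"])
q = len(terms)   # 3 levels + 6 squares/cross-products = 9
```

Note how quickly q grows with the number of regressors, which is exactly why the White test loses power in small samples.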

Robust Standard Errors

When heteroskedasticity is detected, two main remedies exist:

  • Robust (HC1) standard errors — Adjust the variance-covariance matrix to account for heteroskedasticity. Valid inference without modeling the variance function. The HC1 correction multiplies by n/(n-k-1) for small-sample adjustment. This calculator compares user-entered OLS and robust SEs.
  • Weighted Least Squares (WLS) — More efficient if you correctly specify how variance depends on X. Riskier if the variance function is misspecified.
Practical Advice: Many applied researchers report robust standard errors by default, even when heteroskedasticity tests do not reject H0. This provides insurance against undetected heteroskedasticity with minimal cost when errors are actually homoskedastic (Wooldridge, Chapter 8).
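For readers with raw data rather than summary statistics, the HC1 sandwich estimator can be sketched in a few lines of NumPy (an illustration under textbook assumptions, not the calculator's own computation):

```python
import numpy as np

def hc1_se(X, y):
    """HC1 robust standard errors for OLS.

    X must already contain the intercept column, so X.shape[1] = k + 1
    and the HC1 correction n/(n - k - 1) equals n/(n - X.shape[1])."""
    n, p = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    u = y - X @ beta                          # OLS residuals
    bread = np.linalg.inv(X.T @ X)
    meat = X.T @ (u[:, None] ** 2 * X)        # sum_i u_i^2 * x_i x_i'
    cov = n / (n - p) * bread @ meat @ bread  # HC1 sandwich estimator
    return np.sqrt(np.diag(cov))
```

The "bread–meat–bread" structure is why this family is called a sandwich estimator; HC0 omits the n/(n-p) factor.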

Frequently Asked Questions

What is heteroskedasticity and why does it matter for inference?

Heteroskedasticity occurs when the variance of regression errors is not constant across observations — i.e., Var(u|X) changes with X. While OLS coefficient estimates remain unbiased under the zero conditional mean assumption E(u|X) = 0, standard errors become biased, making t-tests, F-tests, and confidence intervals unreliable. Detecting heteroskedasticity is essential for valid statistical inference. Per Wooldridge Ch. 8, ignoring heteroskedasticity leads to incorrect conclusions about statistical significance, even though the point estimates themselves are correct.

How does the Breusch-Pagan test work?

The BP test regresses squared OLS residuals on the original independent variables. Under H0 (homoskedasticity), there is no systematic relationship between error variance and the regressors. The test statistic BP = n × R² from the auxiliary regression follows a chi-squared distribution with k degrees of freedom (where k excludes the intercept). A large BP statistic (small p-value) indicates heteroskedasticity. The test assumes normally distributed errors, which can affect its power with non-normal data.
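The auxiliary regression described above can be run end-to-end with NumPy and SciPy. A sketch under the stated assumptions (intercept added internally; X holds the k regressors as columns):

```python
import numpy as np
from scipy.stats import chi2

def breusch_pagan(X, y):
    """BP statistic: regress squared OLS residuals on X (with an intercept)."""
    n = len(y)
    Z = np.column_stack([np.ones(n), X])               # intercept + k regressors
    u = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]   # OLS residuals
    u2 = u ** 2
    fit = Z @ np.linalg.lstsq(Z, u2, rcond=None)[0]    # auxiliary regression
    r2 = 1.0 - np.sum((u2 - fit) ** 2) / np.sum((u2 - u2.mean()) ** 2)
    k = X.shape[1]                                     # df excludes the intercept
    stat = n * r2
    return stat, chi2.sf(stat, k)
```

With data whose error spread grows with x, the statistic is large and the p-value is small, as the test intends.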

How is the White test different from the Breusch-Pagan test?

The White test is more general because it does not assume a specific functional form for heteroskedasticity and does not require normally distributed errors. It regresses squared residuals on all regressors, their squares, and cross-products, testing for any pattern in the error variance. However, a significant White test can reflect heteroskedasticity and/or model misspecification. The White test uses more degrees of freedom (q can be large), which reduces power in small samples. Use BP when you suspect variance depends linearly on the regressors; use White when the heteroskedasticity pattern is unknown.

What are robust (HC1) standard errors?

Robust (heteroskedasticity-consistent) standard errors — specifically HC1 (White, 1980) — adjust the variance-covariance matrix of OLS estimates to account for non-constant error variance. The HC1 variant applies a small-sample correction of n/(n-k-1). They allow correct hypothesis testing and confidence intervals without requiring the researcher to model or correct the heteroskedasticity. This calculator compares user-entered OLS and robust SEs from your regression software; it does not compute HC1 from raw data.

Does heteroskedasticity bias OLS coefficient estimates?

No. Under the zero conditional mean assumption E(u|X) = 0, OLS coefficient estimates remain unbiased and consistent regardless of heteroskedasticity. The problem heteroskedasticity causes is that OLS is no longer efficient (not the minimum-variance estimator), and standard errors computed under the homoskedasticity assumption are biased, potentially leading to false conclusions about which variables are statistically significant. Note also that robust standard errors do not solve endogeneity or omitted-variable bias — those require different solutions (instrumental variables, proxy variables, etc.).

When should I use WLS instead of robust standard errors?

Weighted Least Squares (WLS) is more efficient than OLS when you can correctly specify the heteroskedasticity function — i.e., you know how Var(u|X) depends on X. If the functional form is correctly modeled, WLS produces lower-variance estimates than OLS with robust SEs. However, if the heteroskedasticity function is misspecified, WLS can produce worse estimates than OLS. Robust SEs are simpler and “safer” — they provide valid inference without modeling the variance function, at the cost of some efficiency. For most applied work, robust SEs are the default recommendation (Wooldridge Ch. 8).

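The WLS transformation is mechanically simple once the variance function is assumed known. A hedged NumPy sketch, assuming Var(u|X) = σ²·h with h known up to scale:

```python
import numpy as np

def wls(X, y, h):
    """WLS assuming Var(u_i | X) = sigma^2 * h_i with known h_i > 0.

    Dividing each observation by sqrt(h_i) makes the transformed errors
    homoskedastic, so OLS on the transformed data is the efficient GLS fit."""
    w = 1.0 / np.sqrt(h)
    return np.linalg.lstsq(X * w[:, None], y * w, rcond=None)[0]
```

The entire risk discussed above lives in the choice of h: a wrong h still yields consistent estimates here, but the claimed efficiency gain disappears.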
Disclaimer

This calculator is for educational purposes only and provides heteroskedasticity diagnostics for cross-sectional OLS regression. It uses summary statistics (R², standard errors) rather than raw data. Actual regression analysis involves additional considerations including functional form, omitted variables, and sample selection. This tool should not be used as the sole basis for econometric decisions.