AP Statistics Formula Sheet 2026 (with Explanations)

The complete AP Statistics 2026 reference sheet — every formula explained in plain English, with a Standard Normal (z) table, t critical values, χ² critical values, and links to the underlying Wikipedia articles.

Disclaimer: This page is compiled by the site author for study reference only. If you spot an error or notice anything that conflicts with the version published by the College Board, please defer to the latest official materials.

This page is a readable companion to the official College Board AP Statistics 2026 reference sheet. Every formula on the printed sheet appears below in clean LaTeX, alongside a short explanation of what it means and a link to the matching Wikipedia article so you can dive deeper.

Tip: On exam day you will receive a printed copy of this reference sheet — you do not need to memorize the formulas. Use this page during the school year to make sure you understand what each symbol means.

Quick facts about the 2026 sheet

Provided on the exam?

Yes. Both the multiple-choice and free-response sections give you the same formulas and tables shown here.

Calculator allowed?

Yes — a graphing calculator is allowed on every section. The formula sheet does not replace your calculator's distribution functions.

What still needs memorizing?

Conditions for inference, definitions of p-value and Type I/II error, and how to read the calculator output. The sheet only lists the algebra.

Formulas, section by section

I. Descriptive Statistics

Single-variable summaries (mean, standard deviation) and the basics of bivariate analysis (least-squares regression line, slope, correlation coefficient).

Sample mean

Explanation

The arithmetic average of the observed values. Add up every data point and divide by the sample size n. It estimates the population mean μ.
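In symbols (standard notation, with observations x₁, …, xₙ):

```latex
\bar{x} = \frac{x_1 + x_2 + \cdots + x_n}{n} = \frac{1}{n}\sum_{i=1}^{n} x_i
```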

When to use

Use whenever you need a one-number summary of the center of a quantitative distribution that is roughly symmetric and free of extreme outliers.

Sample standard deviation

Explanation

The typical distance between an observation and the sample mean. Dividing by n − 1 (Bessel's correction) makes the sample variance s² an unbiased estimator of the population variance.
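In symbols:

```latex
s_x = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2}
```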

When to use

Use to describe spread, to standardize values into z-scores, and inside almost every confidence-interval / test-statistic formula.

Least-squares regression line

Explanation

The predicted value of the response y for a given explanatory value x. The hat on ŷ marks it as an estimate, not an observed value.
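In symbols:

```latex
\hat{y} = a + bx
```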

When to use

Use after you have checked that the scatterplot is linear and the residuals are reasonably random.

Slope and intercept of the regression line

Explanation

The slope b says how many sample-y standard deviations the predicted y moves per one sample-x standard deviation, scaled by the correlation r. The intercept a is then chosen so that the line passes through the point (x̄, ȳ).
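In symbols:

```latex
b = r\,\frac{s_y}{s_x}, \qquad a = \bar{y} - b\,\bar{x}
```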

Correlation coefficient r

Explanation

A unit-free measure of the strength and direction of the linear relationship between x and y. r is always between −1 and +1; ±1 means a perfect line, 0 means no linear relationship.
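In symbols (each term is the product of the two z-scores of a data point):

```latex
r = \frac{1}{n-1}\sum_{i=1}^{n}\left(\frac{x_i - \bar{x}}{s_x}\right)\left(\frac{y_i - \bar{y}}{s_y}\right)
```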

When to use

Report r alongside any least-squares regression. r² is the proportion of variation in y explained by the linear model.

II. Probability and Distributions

Probability rules, expected value and standard deviation of a discrete random variable, plus the binomial and geometric distributions you are expected to know.

General addition rule

Explanation

Probability that A or B (or both) occurs. Subtracting P(A ∩ B) avoids counting the overlap twice. If A and B are mutually exclusive, P(A ∩ B) = 0 and the formula collapses to P(A) + P(B).
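In symbols:

```latex
P(A \cup B) = P(A) + P(B) - P(A \cap B)
```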

Conditional probability and the multiplication rule

Explanation

P(A | B) is the probability of A given that B has happened. Rearranging gives the multiplication rule P(A ∩ B) = P(A | B) · P(B). Independent events satisfy P(A | B) = P(A).
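In symbols:

```latex
P(A \mid B) = \frac{P(A \cap B)}{P(B)}, \qquad P(A \cap B) = P(A \mid B)\,P(B)
```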

Mean (expected value) of a discrete random variable

Explanation

Each possible value is weighted by its probability. The expected value is the long-run average if the random experiment is repeated many times under the same conditions.
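In symbols:

```latex
\mu_X = E(X) = \sum x_i \, P(x_i)
```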

Standard deviation of a discrete random variable

Explanation

Square-root of the probability-weighted average of squared deviations from the mean. It measures how far from μX a single trial typically lands.
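In symbols:

```latex
\sigma_X = \sqrt{\sum \left(x_i - \mu_X\right)^2 P(x_i)}
```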

Binomial distribution

Explanation

Counts the number of successes in n independent trials, each with success probability p. The PMF gives the probability of exactly x successes; np and √(np(1−p)) are its mean and standard deviation.
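In symbols:

```latex
P(X = x) = \binom{n}{x}\,p^x (1-p)^{n-x}, \qquad \mu_X = np, \qquad \sigma_X = \sqrt{np(1-p)}
```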

When to use

Use when the four BINS conditions hold: Binary outcomes, Independent trials, Number of trials n fixed in advance, and the same Success probability p on every trial.

Geometric distribution

Explanation

Counts the trial number on which the first success occurs in a sequence of independent Bernoulli trials with success probability p. The expected wait until the first success is 1/p.
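In symbols (the first x − 1 trials fail, then trial x succeeds):

```latex
P(X = x) = (1-p)^{x-1}\,p, \qquad \mu_X = \frac{1}{p}, \qquad \sigma_X = \frac{\sqrt{1-p}}{p}
```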

When to use

Use when trials are binary and independent with the same success probability p on every trial, but the number of trials is not fixed in advance — you count how many trials it takes to see the first success.

III. Sampling Distributions and Inferential Statistics

Generic test-statistic and confidence-interval templates, the chi-square statistic, and the sampling-distribution standard errors for one and two proportions, one and two means, and least-squares slope.

Standardized test statistic

Explanation

Generic recipe for every z-test, t-test, and χ² test on the AP exam. Subtract the value claimed by H0 from your sample statistic, then divide by the standard error of that statistic.
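In symbols:

```latex
\text{standardized test statistic} = \frac{\text{statistic} - \text{parameter}}{\text{standard error of statistic}}
```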

Confidence interval

Explanation

All AP confidence intervals share this shape: take your sample statistic, then add and subtract a critical value (z*, t*, or otherwise) times the standard error. The critical value sets the confidence level.
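In symbols:

```latex
\text{statistic} \pm (\text{critical value}) \times (\text{standard error of statistic})
```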

Chi-square statistic

Explanation

Compares observed counts in each cell of a one-way table or two-way contingency table to the counts expected under H0. Larger χ² values mean the observed data are farther from the null model.
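In symbols, summing over all cells:

```latex
\chi^2 = \sum \frac{(\text{observed} - \text{expected})^2}{\text{expected}}
```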

When to use

Use for goodness-of-fit, test of independence, and test of homogeneity. All expected counts should be at least 5.

Sampling distributions for proportions — for one population: p̂

Explanation

p̂ is the sample proportion. For a single population its sampling distribution has mean p (the true population proportion) and standard deviation √(p(1 − p)/n). When p is unknown, plug in p̂ to get the standard error.
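In symbols:

```latex
\mu_{\hat{p}} = p, \qquad \sigma_{\hat{p}} = \sqrt{\frac{p(1-p)}{n}}
```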

Sampling distributions for proportions — for two populations: p̂₁ − p̂₂

Explanation

For independent samples drawn from two populations, the difference p̂₁ − p̂₂ has mean p₁ − p₂. Use the unpooled standard error for confidence intervals; use the pooled p̂c standard error for hypothesis tests of H0: p₁ = p₂.
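In symbols:

```latex
\mu_{\hat{p}_1 - \hat{p}_2} = p_1 - p_2, \qquad
\sigma_{\hat{p}_1 - \hat{p}_2} = \sqrt{\frac{p_1(1-p_1)}{n_1} + \frac{p_2(1-p_2)}{n_2}}
```

The pooled estimate, where X₁ and X₂ are the success counts in the two samples:

```latex
\hat{p}_c = \frac{X_1 + X_2}{n_1 + n_2}
```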

When to use

p̂c pools the two samples into one estimate of the common proportion assumed by the null hypothesis.

Sampling distributions for means — for one population: X̄

Explanation

For a single population the sample mean X̄ is centered at the population mean μ and has standard deviation σ/√n. As n grows the distribution becomes more nearly normal — this is the Central Limit Theorem.
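In symbols:

```latex
\mu_{\bar{X}} = \mu, \qquad \sigma_{\bar{X}} = \frac{\sigma}{\sqrt{n}}
```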

When to use

Use σ when the population SD is known; use s (the sample SD) when it is not, and switch from z to t.

Sampling distributions for means — for two populations: X̄₁ − X̄₂

Explanation

For independent samples drawn from two populations, the difference in sample means X̄₁ − X̄₂ has mean μ₁ − μ₂. The standard error combines both populations' variances, scaled by their sample sizes.
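In symbols:

```latex
\mu_{\bar{X}_1 - \bar{X}_2} = \mu_1 - \mu_2, \qquad
\sigma_{\bar{X}_1 - \bar{X}_2} = \sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}}
```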

Sampling distribution for the slope of a least-squares regression line: b

Explanation

The least-squares slope b is an unbiased estimator of the true slope β. Its standard error depends on the residual standard deviation s and the spread of the explanatory variable.
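In symbols, using the standard-error form commonly used for the slope t-interval (s is the residual standard deviation):

```latex
\mu_b = \beta, \qquad s_b = \frac{s}{s_x\sqrt{n-1}}, \qquad
\text{where } s = \sqrt{\frac{\sum \left(y_i - \hat{y}_i\right)^2}{n-2}}
```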

When to use

Use to build a t-confidence interval for β or to test H0: β = 0 (no linear relationship).

Distribution tables

All three reference tables from the back of the official PDF, recreated here so you can search and copy individual values.

Table A — Standard Normal Probabilities

Each cell gives the cumulative probability P(Z ≤ z) for a standard normal random variable. Read the row for the integer and tenths of z, then the column for the hundredths.

Example: a 95% confidence level uses z* = 1.96 because P(Z ≤ 1.96) = 0.9750, leaving 2.5% in each tail.

| z | .00 | .01 | .02 | .03 | .04 | .05 | .06 | .07 | .08 | .09 |
|---|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| 0.0 | 0.5000 | 0.5040 | 0.5080 | 0.5120 | 0.5160 | 0.5199 | 0.5239 | 0.5279 | 0.5319 | 0.5359 |
| 0.1 | 0.5398 | 0.5438 | 0.5478 | 0.5517 | 0.5557 | 0.5596 | 0.5636 | 0.5675 | 0.5714 | 0.5753 |
| 0.2 | 0.5793 | 0.5832 | 0.5871 | 0.5910 | 0.5948 | 0.5987 | 0.6026 | 0.6064 | 0.6103 | 0.6141 |
| 0.3 | 0.6179 | 0.6217 | 0.6255 | 0.6293 | 0.6331 | 0.6368 | 0.6406 | 0.6443 | 0.6480 | 0.6517 |
| 0.4 | 0.6554 | 0.6591 | 0.6628 | 0.6664 | 0.6700 | 0.6736 | 0.6772 | 0.6808 | 0.6844 | 0.6879 |
| 0.5 | 0.6915 | 0.6950 | 0.6985 | 0.7019 | 0.7054 | 0.7088 | 0.7123 | 0.7157 | 0.7190 | 0.7224 |
| 0.6 | 0.7257 | 0.7291 | 0.7324 | 0.7357 | 0.7389 | 0.7422 | 0.7454 | 0.7486 | 0.7517 | 0.7549 |
| 0.7 | 0.7580 | 0.7611 | 0.7642 | 0.7673 | 0.7704 | 0.7734 | 0.7764 | 0.7794 | 0.7823 | 0.7852 |
| 0.8 | 0.7881 | 0.7910 | 0.7939 | 0.7967 | 0.7995 | 0.8023 | 0.8051 | 0.8078 | 0.8106 | 0.8133 |
| 0.9 | 0.8159 | 0.8186 | 0.8212 | 0.8238 | 0.8264 | 0.8289 | 0.8315 | 0.8340 | 0.8365 | 0.8389 |
| 1.0 | 0.8413 | 0.8438 | 0.8461 | 0.8485 | 0.8508 | 0.8531 | 0.8554 | 0.8577 | 0.8599 | 0.8621 |
| 1.1 | 0.8643 | 0.8665 | 0.8686 | 0.8708 | 0.8729 | 0.8749 | 0.8770 | 0.8790 | 0.8810 | 0.8830 |
| 1.2 | 0.8849 | 0.8869 | 0.8888 | 0.8907 | 0.8925 | 0.8944 | 0.8962 | 0.8980 | 0.8997 | 0.9015 |
| 1.3 | 0.9032 | 0.9049 | 0.9066 | 0.9082 | 0.9099 | 0.9115 | 0.9131 | 0.9147 | 0.9162 | 0.9177 |
| 1.4 | 0.9192 | 0.9207 | 0.9222 | 0.9236 | 0.9251 | 0.9265 | 0.9279 | 0.9292 | 0.9306 | 0.9319 |
| 1.5 | 0.9332 | 0.9345 | 0.9357 | 0.9370 | 0.9382 | 0.9394 | 0.9406 | 0.9418 | 0.9429 | 0.9441 |
| 1.6 | 0.9452 | 0.9463 | 0.9474 | 0.9484 | 0.9495 | 0.9505 | 0.9515 | 0.9525 | 0.9535 | 0.9545 |
| 1.7 | 0.9554 | 0.9564 | 0.9573 | 0.9582 | 0.9591 | 0.9599 | 0.9608 | 0.9616 | 0.9625 | 0.9633 |
| 1.8 | 0.9641 | 0.9649 | 0.9656 | 0.9664 | 0.9671 | 0.9678 | 0.9686 | 0.9693 | 0.9699 | 0.9706 |
| 1.9 | 0.9713 | 0.9719 | 0.9726 | 0.9732 | 0.9738 | 0.9744 | 0.9750 | 0.9756 | 0.9761 | 0.9767 |
| 2.0 | 0.9772 | 0.9778 | 0.9783 | 0.9788 | 0.9793 | 0.9798 | 0.9803 | 0.9808 | 0.9812 | 0.9817 |
| 2.1 | 0.9821 | 0.9826 | 0.9830 | 0.9834 | 0.9838 | 0.9842 | 0.9846 | 0.9850 | 0.9854 | 0.9857 |
| 2.2 | 0.9861 | 0.9864 | 0.9868 | 0.9871 | 0.9875 | 0.9878 | 0.9881 | 0.9884 | 0.9887 | 0.9890 |
| 2.3 | 0.9893 | 0.9896 | 0.9898 | 0.9901 | 0.9904 | 0.9906 | 0.9909 | 0.9911 | 0.9913 | 0.9916 |
| 2.4 | 0.9918 | 0.9920 | 0.9922 | 0.9925 | 0.9927 | 0.9929 | 0.9931 | 0.9932 | 0.9934 | 0.9936 |
| 2.5 | 0.9938 | 0.9940 | 0.9941 | 0.9943 | 0.9945 | 0.9946 | 0.9948 | 0.9949 | 0.9951 | 0.9952 |
| 2.6 | 0.9953 | 0.9955 | 0.9956 | 0.9957 | 0.9959 | 0.9960 | 0.9961 | 0.9962 | 0.9963 | 0.9964 |
| 2.7 | 0.9965 | 0.9966 | 0.9967 | 0.9968 | 0.9969 | 0.9970 | 0.9971 | 0.9972 | 0.9973 | 0.9974 |
| 2.8 | 0.9974 | 0.9975 | 0.9976 | 0.9977 | 0.9977 | 0.9978 | 0.9979 | 0.9979 | 0.9980 | 0.9981 |
| 2.9 | 0.9981 | 0.9982 | 0.9982 | 0.9983 | 0.9984 | 0.9984 | 0.9985 | 0.9985 | 0.9986 | 0.9986 |
| 3.0 | 0.9987 | 0.9987 | 0.9987 | 0.9988 | 0.9988 | 0.9989 | 0.9989 | 0.9989 | 0.9990 | 0.9990 |
| 3.1 | 0.9990 | 0.9991 | 0.9991 | 0.9991 | 0.9992 | 0.9992 | 0.9992 | 0.9992 | 0.9993 | 0.9993 |
| 3.2 | 0.9993 | 0.9993 | 0.9994 | 0.9994 | 0.9994 | 0.9994 | 0.9994 | 0.9995 | 0.9995 | 0.9995 |
| 3.3 | 0.9995 | 0.9995 | 0.9995 | 0.9996 | 0.9996 | 0.9996 | 0.9996 | 0.9996 | 0.9996 | 0.9997 |
| 3.4 | 0.9997 | 0.9997 | 0.9997 | 0.9997 | 0.9997 | 0.9997 | 0.9997 | 0.9997 | 0.9997 | 0.9998 |

Table B — t Distribution Critical Values

The body of the table gives t* with the upper-tail probability listed in the column header, for the degrees of freedom in the row header. The bottom row labels the matching two-sided confidence levels.

Example: a 95% CI on a sample mean with df = 9 uses t* = 2.262 (column 0.025, row df = 9).

| df | .25 | .20 | .15 | .10 | .05 | .025 | .02 | .01 | .005 | .0025 | .001 | .0005 |
|----|-----|-----|-----|-----|-----|------|-----|-----|------|-------|------|-------|
| 1 | 1.000 | 1.376 | 1.963 | 3.078 | 6.314 | 12.71 | 15.89 | 31.82 | 63.66 | 127.3 | 318.3 | 636.6 |
| 2 | 0.816 | 1.061 | 1.386 | 1.886 | 2.920 | 4.303 | 4.849 | 6.965 | 9.925 | 14.09 | 22.33 | 31.60 |
| 3 | 0.765 | 0.978 | 1.250 | 1.638 | 2.353 | 3.182 | 3.482 | 4.541 | 5.841 | 7.453 | 10.21 | 12.92 |
| 4 | 0.741 | 0.941 | 1.190 | 1.533 | 2.132 | 2.776 | 2.999 | 3.747 | 4.604 | 5.598 | 7.173 | 8.610 |
| 5 | 0.727 | 0.920 | 1.156 | 1.476 | 2.015 | 2.571 | 2.757 | 3.365 | 4.032 | 4.773 | 5.893 | 6.869 |
| 6 | 0.718 | 0.906 | 1.134 | 1.440 | 1.943 | 2.447 | 2.612 | 3.143 | 3.707 | 4.317 | 5.208 | 5.959 |
| 7 | 0.711 | 0.896 | 1.119 | 1.415 | 1.895 | 2.365 | 2.517 | 2.998 | 3.499 | 4.029 | 4.785 | 5.408 |
| 8 | 0.706 | 0.889 | 1.108 | 1.397 | 1.860 | 2.306 | 2.449 | 2.896 | 3.355 | 3.833 | 4.501 | 5.041 |
| 9 | 0.703 | 0.883 | 1.100 | 1.383 | 1.833 | 2.262 | 2.398 | 2.821 | 3.250 | 3.690 | 4.297 | 4.781 |
| 10 | 0.700 | 0.879 | 1.093 | 1.372 | 1.812 | 2.228 | 2.359 | 2.764 | 3.169 | 3.581 | 4.144 | 4.587 |
| 11 | 0.697 | 0.876 | 1.088 | 1.363 | 1.796 | 2.201 | 2.328 | 2.718 | 3.106 | 3.497 | 4.025 | 4.437 |
| 12 | 0.695 | 0.873 | 1.083 | 1.356 | 1.782 | 2.179 | 2.303 | 2.681 | 3.055 | 3.428 | 3.930 | 4.318 |
| 13 | 0.694 | 0.870 | 1.079 | 1.350 | 1.771 | 2.160 | 2.282 | 2.650 | 3.012 | 3.372 | 3.852 | 4.221 |
| 14 | 0.692 | 0.868 | 1.076 | 1.345 | 1.761 | 2.145 | 2.264 | 2.624 | 2.977 | 3.326 | 3.787 | 4.140 |
| 15 | 0.691 | 0.866 | 1.074 | 1.341 | 1.753 | 2.131 | 2.249 | 2.602 | 2.947 | 3.286 | 3.733 | 4.073 |
| 16 | 0.690 | 0.865 | 1.071 | 1.337 | 1.746 | 2.120 | 2.235 | 2.583 | 2.921 | 3.252 | 3.686 | 4.015 |
| 17 | 0.689 | 0.863 | 1.069 | 1.333 | 1.740 | 2.110 | 2.224 | 2.567 | 2.898 | 3.222 | 3.646 | 3.965 |
| 18 | 0.688 | 0.862 | 1.067 | 1.330 | 1.734 | 2.101 | 2.214 | 2.552 | 2.878 | 3.197 | 3.611 | 3.922 |
| 19 | 0.688 | 0.861 | 1.066 | 1.328 | 1.729 | 2.093 | 2.205 | 2.539 | 2.861 | 3.174 | 3.579 | 3.883 |
| 20 | 0.687 | 0.860 | 1.064 | 1.325 | 1.725 | 2.086 | 2.197 | 2.528 | 2.845 | 3.153 | 3.552 | 3.850 |
| 21 | 0.686 | 0.859 | 1.063 | 1.323 | 1.721 | 2.080 | 2.189 | 2.518 | 2.831 | 3.135 | 3.527 | 3.819 |
| 22 | 0.686 | 0.858 | 1.061 | 1.321 | 1.717 | 2.074 | 2.183 | 2.508 | 2.819 | 3.119 | 3.505 | 3.792 |
| 23 | 0.685 | 0.858 | 1.060 | 1.319 | 1.714 | 2.069 | 2.177 | 2.500 | 2.807 | 3.104 | 3.485 | 3.768 |
| 24 | 0.685 | 0.857 | 1.059 | 1.318 | 1.711 | 2.064 | 2.172 | 2.492 | 2.797 | 3.091 | 3.467 | 3.745 |
| 25 | 0.684 | 0.856 | 1.058 | 1.316 | 1.708 | 2.060 | 2.167 | 2.485 | 2.787 | 3.078 | 3.450 | 3.725 |
| 26 | 0.684 | 0.856 | 1.058 | 1.315 | 1.706 | 2.056 | 2.162 | 2.479 | 2.779 | 3.067 | 3.435 | 3.707 |
| 27 | 0.684 | 0.855 | 1.057 | 1.314 | 1.703 | 2.052 | 2.158 | 2.473 | 2.771 | 3.057 | 3.421 | 3.690 |
| 28 | 0.683 | 0.855 | 1.056 | 1.313 | 1.701 | 2.048 | 2.154 | 2.467 | 2.763 | 3.047 | 3.408 | 3.674 |
| 29 | 0.683 | 0.854 | 1.055 | 1.311 | 1.699 | 2.045 | 2.150 | 2.462 | 2.756 | 3.038 | 3.396 | 3.659 |
| 30 | 0.683 | 0.854 | 1.055 | 1.310 | 1.697 | 2.042 | 2.147 | 2.457 | 2.750 | 3.030 | 3.385 | 3.646 |
| 40 | 0.681 | 0.851 | 1.050 | 1.303 | 1.684 | 2.021 | 2.123 | 2.423 | 2.704 | 2.971 | 3.307 | 3.551 |
| 50 | 0.679 | 0.849 | 1.047 | 1.299 | 1.676 | 2.009 | 2.109 | 2.403 | 2.678 | 2.937 | 3.261 | 3.496 |
| 60 | 0.679 | 0.848 | 1.045 | 1.296 | 1.671 | 2.000 | 2.099 | 2.390 | 2.660 | 2.915 | 3.232 | 3.460 |
| 80 | 0.678 | 0.846 | 1.043 | 1.292 | 1.664 | 1.990 | 2.088 | 2.374 | 2.639 | 2.887 | 3.195 | 3.416 |
| 100 | 0.677 | 0.845 | 1.042 | 1.290 | 1.660 | 1.984 | 2.081 | 2.364 | 2.626 | 2.871 | 3.174 | 3.390 |
| 1000 | 0.675 | 0.842 | 1.037 | 1.282 | 1.646 | 1.962 | 2.056 | 2.330 | 2.581 | 2.813 | 3.098 | 3.300 |
| ∞ (z*) | 0.674 | 0.841 | 1.036 | 1.282 | 1.645 | 1.960 | 2.054 | 2.326 | 2.576 | 2.807 | 3.091 | 3.291 |
| Confidence level C | 50% | 60% | 70% | 80% | 90% | 95% | 96% | 98% | 99% | 99.5% | 99.8% | 99.9% |

Table C — Chi-Square Critical Values

The body of the table gives χ²* with the upper-tail probability listed in the column header, for the degrees of freedom in the row header.

Example: a chi-square test of independence on a 2 × 3 table has df = (2 − 1)(3 − 1) = 2; the 0.05 critical value is 5.99.

| df | .25 | .20 | .15 | .10 | .05 | .025 | .02 | .01 | .005 | .0025 | .001 | .0005 |
|----|-----|-----|-----|-----|-----|------|-----|-----|------|-------|------|-------|
| 1 | 1.32 | 1.64 | 2.07 | 2.71 | 3.84 | 5.02 | 5.41 | 6.63 | 7.88 | 9.14 | 10.83 | 12.12 |
| 2 | 2.77 | 3.22 | 3.79 | 4.61 | 5.99 | 7.38 | 7.82 | 9.21 | 10.60 | 11.98 | 13.82 | 15.20 |
| 3 | 4.11 | 4.64 | 5.32 | 6.25 | 7.81 | 9.35 | 9.84 | 11.34 | 12.84 | 14.32 | 16.27 | 17.73 |
| 4 | 5.39 | 5.99 | 6.74 | 7.78 | 9.49 | 11.14 | 11.67 | 13.28 | 14.86 | 16.42 | 18.47 | 20.00 |
| 5 | 6.63 | 7.29 | 8.12 | 9.24 | 11.07 | 12.83 | 13.39 | 15.09 | 16.75 | 18.39 | 20.51 | 22.11 |
| 6 | 7.84 | 8.56 | 9.45 | 10.64 | 12.59 | 14.45 | 15.03 | 16.81 | 18.55 | 20.25 | 22.46 | 24.10 |
| 7 | 9.04 | 9.80 | 10.75 | 12.02 | 14.07 | 16.01 | 16.62 | 18.48 | 20.28 | 22.04 | 24.32 | 26.02 |
| 8 | 10.22 | 11.03 | 12.03 | 13.36 | 15.51 | 17.53 | 18.17 | 20.09 | 21.95 | 23.77 | 26.12 | 27.87 |
| 9 | 11.39 | 12.24 | 13.29 | 14.68 | 16.92 | 19.02 | 19.68 | 21.67 | 23.59 | 25.46 | 27.88 | 29.67 |
| 10 | 12.55 | 13.44 | 14.53 | 15.99 | 18.31 | 20.48 | 21.16 | 23.21 | 25.19 | 27.11 | 29.59 | 31.42 |
| 11 | 13.70 | 14.63 | 15.77 | 17.28 | 19.68 | 21.92 | 22.62 | 24.72 | 26.76 | 28.73 | 31.26 | 33.14 |
| 12 | 14.85 | 15.81 | 16.99 | 18.55 | 21.03 | 23.34 | 24.05 | 26.22 | 28.30 | 30.32 | 32.91 | 34.82 |
| 13 | 15.98 | 16.98 | 18.20 | 19.81 | 22.36 | 24.74 | 25.47 | 27.69 | 29.82 | 31.88 | 34.53 | 36.48 |
| 14 | 17.12 | 18.15 | 19.41 | 21.06 | 23.68 | 26.12 | 26.87 | 29.14 | 31.32 | 33.43 | 36.12 | 38.11 |
| 15 | 18.25 | 19.31 | 20.60 | 22.31 | 25.00 | 27.49 | 28.26 | 30.58 | 32.80 | 34.95 | 37.70 | 39.72 |
| 16 | 19.37 | 20.47 | 21.79 | 23.54 | 26.30 | 28.85 | 29.63 | 32.00 | 34.27 | 36.46 | 39.25 | 41.31 |
| 17 | 20.49 | 21.61 | 22.98 | 24.77 | 27.59 | 30.19 | 31.00 | 33.41 | 35.72 | 37.95 | 40.79 | 42.88 |
| 18 | 21.60 | 22.76 | 24.16 | 25.99 | 28.87 | 31.53 | 32.35 | 34.81 | 37.16 | 39.42 | 42.31 | 44.43 |
| 19 | 22.72 | 23.90 | 25.33 | 27.20 | 30.14 | 32.85 | 33.69 | 36.19 | 38.58 | 40.88 | 43.82 | 45.97 |
| 20 | 23.83 | 25.04 | 26.50 | 28.41 | 31.41 | 34.17 | 35.02 | 37.57 | 40.00 | 42.34 | 45.31 | 47.50 |
| 21 | 24.93 | 26.17 | 27.66 | 29.62 | 32.67 | 35.48 | 36.34 | 38.93 | 41.40 | 43.78 | 46.80 | 49.01 |
| 22 | 26.04 | 27.30 | 28.82 | 30.81 | 33.92 | 36.78 | 37.66 | 40.29 | 42.80 | 45.20 | 48.27 | 50.51 |
| 23 | 27.14 | 28.43 | 29.98 | 32.01 | 35.17 | 38.08 | 38.97 | 41.64 | 44.18 | 46.62 | 49.73 | 52.00 |
| 24 | 28.24 | 29.55 | 31.13 | 33.20 | 36.42 | 39.36 | 40.27 | 42.98 | 45.56 | 48.03 | 51.18 | 53.48 |
| 25 | 29.34 | 30.68 | 32.28 | 34.38 | 37.65 | 40.65 | 41.57 | 44.31 | 46.93 | 49.44 | 52.62 | 54.95 |
| 26 | 30.43 | 31.79 | 33.43 | 35.56 | 38.89 | 41.92 | 42.86 | 45.64 | 48.29 | 50.83 | 54.05 | 56.41 |
| 27 | 31.53 | 32.91 | 34.57 | 36.74 | 40.11 | 43.19 | 44.14 | 46.96 | 49.64 | 52.22 | 55.48 | 57.86 |
| 28 | 32.62 | 34.03 | 35.71 | 37.92 | 41.34 | 44.46 | 45.42 | 48.28 | 50.99 | 53.59 | 56.89 | 59.30 |
| 29 | 33.71 | 35.14 | 36.85 | 39.09 | 42.56 | 45.72 | 46.69 | 49.59 | 52.34 | 54.97 | 58.30 | 60.73 |
| 30 | 34.80 | 36.25 | 37.99 | 40.26 | 43.77 | 46.98 | 47.96 | 50.89 | 53.67 | 56.33 | 59.70 | 62.16 |
| 40 | 45.62 | 47.27 | 49.24 | 51.81 | 55.76 | 59.34 | 60.44 | 63.69 | 66.77 | 69.70 | 73.40 | 76.09 |
| 50 | 56.33 | 58.16 | 60.35 | 63.17 | 67.50 | 71.42 | 72.61 | 76.15 | 79.49 | 82.66 | 86.66 | 89.56 |
| 60 | 66.98 | 68.97 | 71.34 | 74.40 | 79.08 | 83.30 | 84.58 | 88.38 | 91.95 | 95.34 | 99.61 | 102.7 |
| 80 | 88.13 | 90.41 | 93.11 | 96.58 | 101.9 | 106.6 | 108.1 | 112.3 | 116.3 | 120.1 | 124.8 | 128.3 |
| 100 | 109.1 | 111.7 | 114.7 | 118.5 | 124.3 | 129.6 | 131.1 | 135.8 | 140.2 | 144.3 | 149.4 | 153.2 |

Frequently asked questions

Will I get the formula sheet on the exam?

Yes. Every student receives a printed copy of the same formula sheet shown here, including all three distribution tables, on both the multiple-choice and free-response sections of the AP Statistics exam.