$$ %---- MACROS FOR SETS ----% \newcommand{\znz}[1]{\mathbb{Z} / #1 \mathbb{Z}} \newcommand{\twoheadrightarrowtail}{\mapsto\mathrel{\mspace{-15mu}}\rightarrow} % popular set names \newcommand{\N}{\mathbb{N}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\R}{\mathbb{R}} \newcommand{\C}{\mathbb{C}} \newcommand{\I}{\mathbb{I}} % popular vector space notation \newcommand{\V}{\mathbb{V}} \newcommand{\W}{\mathbb{W}} \newcommand{\B}{\mathbb{B}} \newcommand{\D}{\mathbb{D}} %---- MACROS FOR FUNCTIONS ----% % linear algebra \newcommand{\T}{\mathrm{T}} \renewcommand{\ker}{\mathrm{ker}} \newcommand{\range}{\mathrm{range}} \renewcommand{\span}{\mathrm{span}} \newcommand{\rref}{\mathrm{rref}} \renewcommand{\dim}{\mathrm{dim}} \newcommand{\col}{\mathrm{col}} \newcommand{\nullspace}{\mathrm{null}} \newcommand{\row}{\mathrm{row}} \newcommand{\rank}{\mathrm{rank}} \newcommand{\nullity}{\mathrm{nullity}} \renewcommand{\det}{\mathrm{det}} \newcommand{\proj}{\mathrm{proj}} \renewcommand{\H}{\mathrm{H}} \newcommand{\trace}{\mathrm{trace}} \newcommand{\diag}{\mathrm{diag}} \newcommand{\card}{\mathrm{card}} \newcommand\norm[1]{\left\lVert#1\right\rVert} % differential equations \newcommand{\laplace}[1]{\mathcal{L}\{#1\}} \newcommand{\F}{\mathrm{F}} % misc \newcommand{\sign}{\mathrm{sign}} \newcommand{\softmax}{\mathrm{softmax}} \renewcommand{\th}{\mathrm{th}} \newcommand{\adj}{\mathrm{adj}} \newcommand{\hyp}{\mathrm{hyp}} \renewcommand{\max}{\mathrm{max}} \renewcommand{\min}{\mathrm{min}} \newcommand{\where}{\mathrm{\ where\ }} \newcommand{\abs}[1]{\vert #1 \vert} \newcommand{\bigabs}[1]{\big\vert #1 \big\vert} \newcommand{\biggerabs}[1]{\Bigg\vert #1 \Bigg\vert} \newcommand{\equivalent}{\equiv} \newcommand{\cross}{\times} % statistics \newcommand{\cov}{\mathrm{cov}} \newcommand{\var}{\mathrm{var}} \newcommand{\bias}{\mathrm{bias}} \newcommand{\E}{\mathrm{E}} \newcommand{\prob}{\mathrm{prob}} \newcommand{\unif}{\mathrm{unif}} \newcommand{\invNorm}{\mathrm{invNorm}} \newcommand{\invT}{\mathrm{invT}} \newcommand{\P}{\text{P}} \newcommand{\pmf}{\text{pmf}} \newcommand{\pdf}{\text{pdf}} % real analysis \renewcommand{\sup}{\mathrm{sup}} \renewcommand{\inf}{\mathrm{inf}} %---- MACROS FOR ALIASES AND REFORMATTING ----% % logic \newcommand{\forevery}{\ \forall\ } \newcommand{\OR}{\lor} \newcommand{\AND}{\land} \newcommand{\then}{\implies} % set theory \newcommand{\impropersubset}{\subseteq} \newcommand{\notimpropersubset}{\nsubseteq} \newcommand{\propersubset}{\subset} \newcommand{\notpropersubset}{\not\subset} \newcommand{\union}{\cup} \newcommand{\Union}[2]{\bigcup\limits_{#1}^{#2}} \newcommand{\intersect}{\cap} \newcommand{\Intersect}[2]{\bigcap\limits_{#1}^{#2}} \newcommand{\intersection}[2]{\bigcap\limits_{#1}^{#2}} \newcommand{\Intersection}[2]{\bigcap\limits_{#1}^{#2}} \newcommand{\closure}{\overline} \newcommand{\compose}{\circ} % linear algebra \newcommand{\subspace}{\le} \newcommand{\angles}[1]{\langle #1 \rangle} \newcommand{\identity}{\mathbb{1}} \newcommand{\orthogonal}{\perp} \renewcommand{\parallel}[1]{#1^{||}} % calculus \newcommand{\integral}[2]{\int\limits_{#1}^{#2}} \newcommand{\limit}[1]{\lim\limits_{#1}} \newcommand{\approaches}{\rightarrow} \renewcommand{\to}{\rightarrow} \newcommand{\convergesto}{\rightarrow} % algebra \newcommand{\summation}[2]{\sum\nolimits_{#1}^{#2}} \newcommand{\product}[2]{\prod\limits_{#1}^{#2}} \newcommand{\by}{\times} \newcommand{\integral}[2]{\int_{#1}^{#2}} \newcommand{\ln}{\text{ln}} % exists commands \newcommand{\notexist}{\nexists\ } 
\newcommand{\existsatleastone}{\exists\ } \newcommand{\existsonlyone}{\exists!} \newcommand{\existsunique}{\exists!} \let\oldexists\exists \renewcommand{\exists}{\ \oldexists\ } % statistics \newcommand{\distributed}{\sim} \newcommand{\onetoonecorresp}{\sim} \newcommand{\independent}{\perp\!\!\!\perp} \newcommand{\conditionedon}{\ |\ } \newcommand{\given}{\ |\ } \newcommand{\notg}{\ngtr} \newcommand{\yhat}{\hat{y}} \newcommand{\betahat}{\hat{\beta}} \newcommand{\sigmahat}{\hat{\sigma}} \newcommand{\muhat}{\hat{\mu}} \newcommand{\transmatrix}{\mathrm{P}} \renewcommand{\choose}{\binom} % misc \newcommand{\infinity}{\infty} \renewcommand{\bold}{\textbf} \newcommand{\italics}{\textit} \newcommand{\step}{\text{step}} $$

How Random Assignment Minimizes Systematic Differences Between Groups

You want to design an experiment like an A/B test. You’re always told to randomly assign members into two different groups, treatment and control, in order to minimize systematic differences between the two groups. Why do you care about minimizing differences? Because this helps you minimize the number of confounding variables in your experiment, so that if you see an effect, you can be more confident it was due to the treatment itself.

Awesome, this sounds great. But how do you really know that random assignment minimizes systematic differences between two groups?

Randomly splitting into two groups

Say you have some finite population from which to choose members of both your treatment and control groups:

$$ x_1, \dots, x_{2N} $$

You do what you’re told and randomly assign members into two different groups. The first group:

$$ u_1, \dots, u_N $$

And the second group:

$$ v_1, \dots, v_N $$

So, summing over everyone, our finite population is just the combination of these two groups:

$$ \sum_{i=1}^{2N} x_i = \sum_{i=1}^{N} u_i + \sum_{i=1}^{N} v_i $$

If we divide both sides by $2N$ (noting that each group’s sum is $N$ times its sample mean), we get:

$$ \begin{aligned} \mu &= \frac{1}{2}\bar{u} + \frac{1}{2}\bar{v} \\\
&= \frac{1}{2}(\bar{u} + \bar{v}) \end{aligned} $$

So the average of the two sample means equals the population mean. That’s interesting.
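As a quick sanity check, here’s a minimal simulation of this identity (a sketch assuming `numpy`; the population values and group size are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# A made-up finite population of size 2N; any values would do.
N = 500
x = rng.normal(10.0, 3.0, size=2 * N)

# Randomly assign members to two groups of N each.
perm = rng.permutation(2 * N)
u, v = x[perm[:N]], x[perm[N:]]

# The population mean equals the average of the two sample means, exactly.
print(x.mean(), (u.mean() + v.mean()) / 2)
```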

Discovering the differences

But what do we know about the difference between the two groups? Well, rearranging the identity above gives $\bar{v} = 2\mu - \bar{u}$, so:

$$ \begin{aligned} \bar{u} - \bar{v} &= \bar{u} - (2\mu - \bar{u}) \\\
&= \bar{u} - 2\mu + \bar{u} \\\
&= 2\bar{u} - 2\mu \\\
&= 2(\bar{u} - \mu) \end{aligned} $$

So the difference between the two groups is just twice the distance from $\bar{u}$ to $\mu$. And since $u_1, \dots, u_N$ is a random sample from our population, we know:

$$ \E(\bar{u}) = \mu $$

and

$$ \begin{aligned} \var(\bar{u}) &= \frac{\sigma^2}{N} \cdot \frac{2N - N}{2N - 1} \\\
&= \frac{\sigma^2}{2N - 1} \\\
&\approx \frac{\sigma^2}{2N} \text{ since } 2N - 1 \approx 2N \text{ as } N \text{ gets larger} \end{aligned} $$

where $\frac{2N - N}{2N - 1}$ is a finite population correction factor, needed because each group makes up half of the finite population, far more than the usual 5% threshold. For a more detailed derivation of $\var(\bar{u})$, you should read my post about the Central Limit Theorem.
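We can check both results numerically. Below is a small Monte Carlo sketch (again assuming `numpy`, with a made-up population) that repeatedly re-randomizes the group assignment and compares the empirical mean and variance of $\bar{u}$ against the formulas above:

```python
import numpy as np

rng = np.random.default_rng(1)

N = 100
x = rng.normal(10.0, 3.0, size=2 * N)  # made-up finite population
mu, sigma2 = x.mean(), x.var()         # population mean and variance

# Repeatedly assign the first group at random and record its sample mean.
ubars = np.array([rng.permutation(x)[:N].mean() for _ in range(20_000)])

print(ubars.mean(), mu)                       # E(u-bar) should match mu
print(ubars.var(), sigma2 / (2 * N - 1))      # (sigma^2/N) * N/(2N-1), exact
print(sigma2 / (2 * N))                       # the large-N approximation
```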

Now that we have these two results, we can answer the question “what is the expected difference between both groups?”:

$$ \begin{aligned} \E(\bar{u} - \bar{v}) &= 2\E(\bar{u} - \mu) \\\
&= 2(\E(\bar{u}) - \mu) \\\
&= 2(\mu - \mu) \\\
&= 0 \end{aligned} $$

And what about the variance of the difference between both groups?

$$ \begin{aligned} \var(\bar{u} - \bar{v}) &= 2^2 \var(\bar{u} - \mu) \\\
&= 4 \var(\bar{u}) \\\
&= 4 \cdot \frac{\sigma^2}{2N} \\\
&= \frac{2\sigma^2}{N} \end{aligned} $$

So on average, the difference between $\bar{u}$ and $\bar{v}$ is 0, which tells us that the two groups will be about the same! But what about random volatility? In practice, large random differences are unlikely, since the variance of $\bar{u} - \bar{v}$ converges to 0 as the population size approaches infinity. This, of course, is nothing new: the variance of a sample mean decreases as the sample size increases.
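To see this convergence in action, here’s a rough simulation (a sketch with made-up population values) that estimates $\var(\bar{u} - \bar{v})$ for increasing $N$ and compares it to $2\sigma^2/N$:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_diffs(N, reps=5_000):
    """Simulate u-bar - v-bar over many random group assignments."""
    x = rng.normal(10.0, 3.0, size=2 * N)  # made-up finite population
    diffs = np.empty(reps)
    for i in range(reps):
        perm = rng.permutation(x)
        diffs[i] = perm[:N].mean() - perm[N:].mean()
    return diffs, x.var()

for N in (50, 500, 5_000):
    diffs, sigma2 = simulate_diffs(N)
    print(f"N={N}: mean(diff)={diffs.mean():+.4f}, "
          f"var(diff)={diffs.var():.5f} vs 2*sigma^2/N={2 * sigma2 / N:.5f}")
```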

Confidence about the differences

We just saw that large random differences are unlikely, and that the variance converges to 0 as we keep increasing the population size. In practice, however, increasing your population size might be impractical. So let’s say you just want to be 95% confident that the difference between the means of the two randomly assigned groups, with each group having $N$ members, is less than some small number $\epsilon$. In mathematical terms, this asks us to solve:

$$ 2\sigma(\bar{u} - \bar{v}) = \epsilon $$

Here $\sigma(X)$ denotes the standard deviation of $X$. We write $2 \sigma$ since any normal random variable falls within two standard deviations of its mean about 95% of the time. And we know $\bar{u} - \bar{v} = 2(\bar{u} - \mu)$ is approximately normal since $\bar{u}$ is approximately normal by the Central Limit Theorem.
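If you want to double-check the 95% figure, the standard library’s error function gives the exact probability that a normal variable lands within two standard deviations of its mean:

```python
import math

# P(|Z| <= 2) for a standard normal Z, via the error function.
print(math.erf(2 / math.sqrt(2)))  # ≈ 0.9545
```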

Solving for $N$, using $\sigma(aX) = a\,\sigma(X)$ for $a > 0$ and $\sigma(\bar{u} - \mu) = \sigma(\bar{u}) \approx \sigma / \sqrt{2N}$, we get:

$$ \begin{aligned} 2 \sigma(2(\bar{u} - \mu)) &= \epsilon \\\
4 \sigma(\bar{u} - \mu) &= \epsilon \\\
\frac{4 \sigma}{\sqrt{2N}} &= \epsilon \\\
\frac{16 \sigma^2}{2N} &= \epsilon^2 \\\
\frac{8 \sigma^2}{\epsilon^2} &= N \end{aligned} $$

Of course, this further reinforces the earlier point: by increasing $N$, we can make the two sample means arbitrarily close.
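As a final illustration, here’s a tiny helper (the function name is mine, purely illustrative) that computes the required group size from the formula above:

```python
import math

def required_group_size(sigma, eps):
    """Smallest N such that |u-bar - v-bar| < eps with ~95% confidence."""
    return math.ceil(8 * sigma**2 / eps**2)

# e.g. a population standard deviation of 3 and a tolerated difference of 0.5
print(required_group_size(sigma=3.0, eps=0.5))  # 288
```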

Conclusion

And there you have it! We’ve shown how random assignment ensures that any differences between the groups at the outset of the experiment are not systematic. This means that any observed differences between the groups at the end of the experiment can be more confidently attributed to the effects of the treatment itself, rather than to underlying differences between the groups.