
How P-Hacking Increases False Positives

Sometimes, when a hypothesis doesn’t yield a statistically significant result, there is a temptation to tweak the hypothesis a bit or try different selections of the data. But this is a trap called p-hacking, and it increases the chance of false positives.

Here’s how it happens. You’re performing a hypothesis test, and if you’re like most publications, you’re looking for a statistically significant result at the 5% level: a p-value below 0.05, meaning that if there were no real effect, a result at least this extreme would occur less than 5% of the time by chance alone.

Because your hypothesis didn’t work out, you do a post hoc analysis and decide you were considering the wrong variable or two. So you tweak your hypothesis and run the statistical test again. You continue doing this over and over until you achieve a significant result, as demonstrated in this xkcd comic.

However, every time you ran another hypothesis test, you increased the chance of getting a false positive. To see this in action, let’s say you repeated your hypothesis test 20 times (like the comic). With a single test, your chance of a false positive was just 5% (since we require a p-value below 0.05). But by your 20th test, the chance of at least one false positive had grown to about 64%!

Let’s verify this experimentally with a bit of code:

import random

num_with_false_positive = 0
num_experiments = 500_000

for _ in range(num_experiments):
    # Simulate 20 hypothesis tests, each with a 1/20 chance of a fluke
    # result; a roll of 1 stands in for a false positive.
    rolls = [random.randint(1, 20) for _ in range(20)]
    if 1 in rolls:
        num_with_false_positive += 1

# Show the percentage of experiments containing at least one false positive.
print(f'{num_with_false_positive / num_experiments * 100:.2f}%')
64.17%

In the code above, each simulated hypothesis test has a 1/20 chance of producing an erroneous result: random.randint(1, 20) rolls a fair 20-sided die, and a roll of 1 stands in for a false positive. Each experiment runs 20 such tests.

Since we’re interested in the approximate probability of getting at least one false positive, we ran this scenario a large number of times (500,000 in this case), arriving at 64.17%. Of course, if you shy away from approximations and fancy a little more rigor, we can use some basic probability theory to confirm the result.

The probability of not getting a false positive on a single test is $\frac{19}{20}$, so (assuming the tests are independent) the probability of no false positives across 20 tests is $\left(\frac{19}{20}\right)^{20} \approx 0.36$. The probability of at least one false positive in 20 tests is therefore $1 - 0.36 = 0.64$.
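And since this is a closed-form expression, one line of Python confirms both numbers:

print(f'{(1 - (19 / 20) ** 20) * 100:.2f}%')
64.15%

which matches the simulation up to sampling noise.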

But what if you’re in a field like genomics, where you genuinely need to perform many hypothesis tests at once, most without a strong basis for expecting a statistically significant result? Well, there are ways to control the increase in false positives, like the Bonferroni correction, though keep in mind that each method has its own strengths and weaknesses. A quick sketch of the Bonferroni idea follows.
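The Bonferroni correction simply divides the desired overall false-positive rate by the number of tests and holds each individual test to that stricter threshold. Here’s a minimal sketch (the function name and p-values are hypothetical, made up for this illustration):

def bonferroni_significant(p_values, alpha=0.05):
    # Hold each test to alpha / m (m = number of tests) so the chance of
    # at least one false positive across all tests stays at most alpha.
    adjusted_alpha = alpha / len(p_values)
    return [p < adjusted_alpha for p in p_values]

# Hypothetical p-values from four tests; only the first survives the
# stricter per-test threshold of 0.05 / 4 = 0.0125.
print(bonferroni_significant([0.001, 0.02, 0.04, 0.3]))
[True, False, False, False]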

In the end, be careful when performing hypothesis tests and drawing conclusions. And remember: the more hypothesis tests you perform, the more likely you are to get a false positive.
