
Why Doing Good Science is Hard and How to Do it Better


Doing good science is hard, and a lot of experiments fail. Although the scientific method helps reduce uncertainty and leads to discoveries, its path is full of potholes.

In this post, you’ll learn about common p-value misinterpretations, p-hacking, and the problem with performing multiple hypothesis tests. Of course, the post presents not only the problems but also their potential solutions.

By the end of the post, you should have a good idea of some of the pitfalls of hypothesis testing, how to avoid them, and an appreciation for why doing good science is so hard.

P-value misinterpretations

There are many ways to misinterpret a p-value. By definition, a p-value is the probability of obtaining a test statistic at least as extreme as the one that was actually observed, assuming the null hypothesis is true.
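To make the definition concrete, here’s a minimal sketch in Python; the sample data and the hypothesized mean of zero are made up for illustration.

```python
# A minimal sketch: computing a p-value for a one-sample t-test.
# The sample data and hypothesized mean are made up for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
sample = rng.normal(loc=0.3, scale=1.0, size=30)  # hypothetical measurements

# H0: the population mean is 0. The p-value is the probability of seeing a
# test statistic at least this extreme if H0 were true.
t_stat, p_value = stats.ttest_1samp(sample, popmean=0)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```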

What the p-value is not:

- a measure of the strength of the evidence or the size of an effect,
- the probability that the intervention is actually effective, or
- a verdict on whether a hypothesis is true or false.

If you want to measure the strength of the evidence or the size of an effect, then you need to calculate the effect size. This can be done with Pearson’s r correlation, the standardized difference of means (Cohen’s d), or other methods.

Reporting the effect size in your research is recommended: a p-value tells you how surprising your results would be if chance alone were at work, but not the relative magnitude of the experimental treatment or the size of the experimental effect.
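As a sketch of what calculating an effect size can look like, here’s Cohen’s d (the standardized difference of means) and Pearson’s r computed in Python; the groups and variables below are hypothetical stand-ins for real data.

```python
# A sketch of two common effect-size measures; the two hypothetical
# groups stand in for treatment and control data.
import numpy as np

def cohens_d(group1, group2):
    """Standardized difference of means (pooled standard deviation)."""
    n1, n2 = len(group1), len(group2)
    pooled_var = ((n1 - 1) * np.var(group1, ddof=1) +
                  (n2 - 1) * np.var(group2, ddof=1)) / (n1 + n2 - 2)
    return (np.mean(group1) - np.mean(group2)) / np.sqrt(pooled_var)

rng = np.random.default_rng(0)
treatment = rng.normal(1.0, 2.0, size=50)
control = rng.normal(0.0, 2.0, size=50)
print(f"Cohen's d: {cohens_d(treatment, control):.2f}")

# Pearson's r for two correlated variables:
x = rng.normal(size=50)
y = 0.5 * x + rng.normal(scale=0.5, size=50)
print(f"Pearson's r: {np.corrcoef(x, y)[0, 1]:.2f}")
```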

P-values also don’t tell you the chance that the intervention is effective; calculating precision (the positive predictive value) does, and base rates influence this calculation. If the base rate of effective interventions is low, there are many opportunities for false positives even when a hypothesis test shows a statistically significant result.

For example, if the precision is 65%, then even after a statistically significant result there is only a 65% chance that the intervention was actually effective, leaving a false discovery rate of 35%. Neglecting the impact of base rates is known as the base rate fallacy, and it happens more often than you might think.
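Here’s a back-of-the-envelope sketch of that calculation. The base rate, power, and significance level are illustrative assumptions, but they show how a 10% base rate drags precision down to roughly 64% even for a well-powered test.

```python
# A sketch of how a low base rate drags down precision; the base rate,
# power, and alpha below are illustrative assumptions.
def precision(base_rate, power, alpha):
    """P(intervention is effective | significant result) via Bayes' rule."""
    true_positives = power * base_rate
    false_positives = alpha * (1 - base_rate)
    return true_positives / (true_positives + false_positives)

# With a 10% base rate, 80% power, and alpha = 0.05:
print(f"precision = {precision(0.10, 0.80, 0.05):.2f}")  # 0.64
```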

Lastly, p-values can’t tell you whether a hypothesis is true or false. Statistics is an inferential framework, and there’s no way to know for sure whether a hypothesis is true. Remember, there’s no such thing as proof in science.

The p-hacking problem

As a scientist, one of your degrees of freedom when setting up a hypothesis test is deciding which variables to include in the data you test. Your hypothesis will, to a degree, influence which variables you include, and after testing the hypothesis with those variables, you might get a p-value greater than 5%.

At this point, you might be tempted to try different variables in your data and retest. But if you try enough combinations of variables and test each scenario, you’re likely to get a p-value of 5% or less, as demonstrated by the interactive app in this FiveThirtyEight blog post. This practice is called p-hacking, and it can let you achieve a p-value of 5% or less under competing alternative hypotheses.
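To see how little it takes, here’s a small simulation sketch; all of the data below is randomly generated noise, yet testing enough candidate variables against the outcome will usually turn up at least one “significant” correlation.

```python
# A simulation sketch of p-hacking: with pure-noise data, testing enough
# candidate variables against the outcome will eventually "find" p <= 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_subjects, n_variables = 100, 30

outcome = rng.normal(size=n_subjects)                     # pure noise
candidates = rng.normal(size=(n_variables, n_subjects))   # also pure noise

p_values = [stats.pearsonr(var, outcome)[1] for var in candidates]
print(f"smallest p-value across {n_variables} tests: {min(p_values):.4f}")
print(f"'significant' variables at 5%: {sum(p <= 0.05 for p in p_values)}")
```

On average, about 5% of the noise variables will clear the 5% threshold by chance alone, so with 30 candidates you should expect one or two spurious “discoveries” per run.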

There are at least a few problems with this:

- Every extra test is another chance for a false positive, so the effective false positive rate climbs well above 5%.
- The variables are chosen after seeing the data, so the reported p-value no longer means what it claims to mean.
- Competing, even contradictory, hypotheses can each be “confirmed” from the same dataset.

Addressing p-hacking

To help mitigate p-hacking, you should disclose the number of hypotheses explored during the study, all data collection decisions, all statistical analyses conducted, and all p-values computed.

If you performed multiple hypothesis tests without a strong basis for expecting the result to be statistically significant, as can happen in genomics where genotypes for millions of genetic markers can be measured and tested, you should verify that there was some sort of control for the family-wise error rate or false discovery rate (as discussed in the next two sections). Otherwise, the study might not be meaningful.

It might also be a good idea to report the power of the hypothesis test: the probability of rejecting the null hypothesis when it’s false. Keep in mind that power is influenced by the sample size, the significance level, the variability in the dataset, and how far the true parameter is from the parameter assumed by the null hypothesis.

In short, the greater the sample size, the greater the power. The greater the significance level, the greater the power. The lower the variability in the dataset, the greater the power. And the further away the true parameter is from the parameter assumed by the null hypothesis, the greater the power.
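To illustrate the first of these relationships, here’s a sketch using statsmodels, assuming a two-sample t-test with an illustrative effect size of 0.5 and a 5% significance level:

```python
# A sketch of a power calculation for a two-sample t-test; the effect size,
# sample sizes, and alpha below are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Holding the effect size and significance level fixed, power grows
# with the per-group sample size.
for n in (20, 50, 100):
    power = analysis.power(effect_size=0.5, nobs1=n, alpha=0.05)
    print(f"n = {n:3d} per group -> power = {power:.2f}")
```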

Testing multiple hypotheses with the Bonferroni correction

Since the probability of a false positive increases as the number of hypothesis tests performed increases, it is necessary to try to control this. As such, you might want to control the probability of one or more false positives across all hypothesis tests conducted. This probability is called the family-wise error rate.

One way to control for this is to set the significance level to $\alpha/n$, where $n$ is the number of hypothesis tests. This kind of correction is called the Bonferroni correction and ensures that the family-wise error rate is less than or equal to $\alpha$.
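As a minimal sketch, here’s the correction applied to some made-up p-values:

```python
# The Bonferroni correction on hypothetical p-values: compare each
# p-value against alpha / n instead of alpha.
alpha = 0.05
p_values = [0.001, 0.008, 0.039, 0.041, 0.20]  # made-up results
n = len(p_values)

for p in p_values:
    verdict = "significant" if p <= alpha / n else "not significant"
    print(f"p = {p:.3f} -> {verdict} at alpha/n = {alpha / n:.3f}")
```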

However, this correction can be too strict, especially if you’re performing many hypothesis tests. Because you’re controlling the family-wise error rate, you might also miss some true positives that would have been detected at a higher significance level.

Clearly there’s a balance to be struck between increasing the power of the hypothesis test (i.e. increasing the probability of rejecting the null hypothesis when the alternative hypothesis is true) and controlling for false positives.

Testing multiple hypotheses with the Benjamini-Hochberg procedure

Instead of trying to control the family-wise error rate, you can instead try to control the false discovery rate, which is the proportion of the hypothesis tests declared statistically significant that are actually false positives. In other words, the false discovery rate is equal to FP/(FP + TP).

Controlling the false discovery rate should help you identify as many hypothesis tests with statistically significant results as possible while still keeping the proportion of false positives relatively low. Just as $\alpha$ controls the false positive rate, we use another significance level, $\beta$, to control the false discovery rate. (This $\beta$ is the FDR target, not the Type II error rate from the discussion of power above.)

The procedure you can use to control the false discovery rate is called the Benjamini-Hochberg procedure. You first choose $\beta$, the significance level for the false discovery rate. Then calculate the p-values for all $m$ null hypothesis tests performed and sort them from lowest to highest, with $i$ being the rank of a p-value in the sorted list. Now find the rank $k$ of the largest p-value $p_{(i)}$ such that $p_{(i)} \le \frac{i}{m} \beta$. All null hypothesis tests with rank $i \le k$ are considered statistically significant by the Benjamini-Hochberg procedure.
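Here’s a sketch of the procedure in Python, run on the same made-up p-values as the Bonferroni example above:

```python
# A sketch of the Benjamini-Hochberg procedure following the steps above;
# beta is the target false discovery rate.
import numpy as np

def benjamini_hochberg(p_values, beta=0.05):
    """Return a boolean mask of the tests declared significant by BH."""
    p = np.asarray(p_values)
    m = len(p)
    order = np.argsort(p)                      # sort lowest to highest
    thresholds = (np.arange(1, m + 1) / m) * beta
    below = p[order] <= thresholds
    significant = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])       # largest rank with p_(i) <= (i/m) * beta
        significant[order[: k + 1]] = True     # everything up to rank k passes
    return significant

p_values = [0.001, 0.008, 0.039, 0.041, 0.20]  # made-up results
print(benjamini_hochberg(p_values, beta=0.05)) # [ True  True False False False]
```

If you’d rather not roll your own, statsmodels implements the same procedure via multipletests(p_values, alpha=0.05, method='fdr_bh').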

Conclusion

As you can see, doing good science does not just involve performing a null hypothesis test and publishing your findings when you get a p-value less than or equal to 5%. There are ways to misinterpret p-values, to tweak data to get the right p-value for a hypothesis that you’re convinced of, and to perform enough tests with different samples of data until you get the desired p-value.

But now that you’re aware of the potholes and are armed with some ways to avoid them, I hope this helps you improve the quality of your research and gets you closer to the truth.

