How to avoid P hacking (nature.com)
65 points by benocodes 4 days ago | 54 comments
zipy124 3 minutes ago [-]
The Bonferroni correction part of this article is the most important. The number of papers that don't account for this is shocking. Comparing 20 variables at a 0.05 significance threshold is extremely annoying, as you end up having to redo the analysis on a paper's data yourself, with the correction applied, to see whether the result is still significant.
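For anyone wanting to redo that check, a minimal sketch of the correction (assuming numpy; the p-values here are made up for illustration):

    import numpy as np

    alpha, m = 0.05, 20
    # With 20 independent tests at alpha = 0.05, the chance of at least
    # one false positive under the null is already ~64%:
    print(f"Family-wise error rate, uncorrected: {1 - (1 - alpha) ** m:.2f}")

    # Hypothetical p-values from a paper that tested 20 variables
    p_values = np.array([0.001, 0.012, 0.04] + [0.3] * 17)

    # Bonferroni: compare each p-value against alpha / m instead of alpha
    print(f"Bonferroni threshold: {alpha / m:.4f}")
    print(f"Survive correction: {(p_values < alpha / m).sum()} of {m}")

Only the 0.001 result survives here; the 0.04 "finding" evaporates.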
parpfish 7 hours ago [-]
I was heavily encouraged to do what would later be called “p-hacking”, but it looked different from what they describe here. This article describes p-hacks for people who aren’t into math/stats. I always ended up p hacking because I was into stats methods.

Somebody would say “here’s an old dataset that didn’t work out, I bet you can use one of those new stats methods you’re always reading about to find a cool effect!”, and then the fishing expedition takes off.

A couple weeks later you show off some cool effects that your new cutting edge results were able to extract from an old, useless dataset.

But instead of saying “that’s good pilot data, let’s see if it holds up with a new experiment”, you’re told “you can publish that! Keep this up and maybe you’ll be lucky enough to get a job someday!”

AstralStorm 3 hours ago [-]
The practice you describe is called data dredging, though. The problem with it is that you don't know enough of the experimental design details to be sure it was all on the up and up, and the older the dataset gets, the worse this is.

Normally when doing that you need multiple-comparison corrections and conservative stats. That won't get you published though, or if you do get published you won't get noticed, except perhaps by someone running a meta-analysis. Perhaps not even then. Usually you end up with negative results from reanalysis, evidence of tampering, or small effect sizes.

And even that does not reliably detect dataset manipulation, p hacking on the part of the experimenters, or accidental violations of the protocol, not even when the data collection included measures to prevent them.

In short: you cannot 100% trust any dataset you did not make. Not even as part of the team that makes it.

gwerbret 7 hours ago [-]
> Stopping an experiment once you find a significant effect but before you reach your predetermined sample size is classic P hacking.

Although much of the article is basic common sense, and although I'm not a statistician, I had to seriously question the author's understanding of statistics at this point. The predetermined sample size (from the power calculation) is usually based on an assumption about the effect size; if the effect size turns out to be much larger than you assumed, then a smaller sample size can be statistically sound.

Clinical trials very frequently do exactly this -- stop before they reach a predetermined sample size -- by design, once certain pre-defined thresholds have been passed. Other than not having to spend extra time and effort, the reasons are at least twofold: first, significant early evidence of futility means you no longer have to waste patients' time; second, early evidence of utility means you can move an effective treatment into practice that much sooner.

A classic example of this was with clinical trials evaluating the effect of circumcision on susceptibility to HIV infection; two separate trials were stopped early when interim analyses showed massive benefits of circumcision [0, 1].

In experimental studies, early evidence of efficacy doesn't mean you stop there, report your results, and go home; the typical approach, if the experiment is adequately powered, is to repeat it (three independent replicates is the informal gold standard).

[0]: https://pubmed.ncbi.nlm.nih.gov/17321310/

[1]: https://pubmed.ncbi.nlm.nih.gov/16231970/

bjornsing 7 hours ago [-]
There are of course statistical methods designed to support early stopping. But I don’t think you can run a regular significance test every day and decide to stop as soon as p < 0.05. That’s something else.
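A quick simulation backs this up (a sketch, assuming numpy and scipy): two groups drawn from the same distribution, "peeking" with an ordinary t-test every 10 samples and stopping at the first p < 0.05.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_sims, max_n, peek_every = 2000, 500, 10
    false_positives = 0

    for _ in range(n_sims):
        # The null is true by construction: both groups are identical
        a = rng.normal(size=max_n)
        b = rng.normal(size=max_n)
        for n in range(peek_every, max_n + 1, peek_every):
            # Peek at the data so far with a regular t-test
            if stats.ttest_ind(a[:n], b[:n]).pvalue < 0.05:
                false_positives += 1
                break  # "significant effect found", stop the experiment

    print(f"False positive rate with peeking: {false_positives / n_sims:.2f}")

The printed rate comes out several times higher than the nominal 0.05, which is exactly the early-stopping problem.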
AstralStorm 3 hours ago [-]
You use a full two-sided ANOVA F test with multiple comparison correction for that. Even these tests are sometimes not conservative enough, because the correction is a bit of a guess.

You will end up with a much higher number of trials required to hit the p value than the version with a predetermined number of trials and no stopping point based on p.

Say, in a single-variable single-run ABX test, 8 is the usual number needed according to the Fisher frequentist approach. If you do multiple comparisons to hit 0.05, you need, I believe, 21 trials instead. (Don't quote me on that; compute your own Bayesian beta prior probability.)

The typical comparison prior is the number of trials needed to differentiate from a fair coin, giving a beta distribution. You're trying to set up a ratio between the two of them, one fitted to your data, the other the null.
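For concreteness, a minimal sketch of that kind of ratio (my own illustration, not necessarily the parent's exact calculation): a uniform Beta(1,1) prior on the coin's bias against a fair-coin null, at the trial counts mentioned above.

    from math import exp, lgamma, log

    def betaln(a, b):
        # log of the Beta function B(a, b)
        return lgamma(a) + lgamma(b) - lgamma(a + b)

    def bayes_factor(k, n):
        # Marginal likelihood of k heads in n tosses under a uniform
        # Beta(1,1) prior on the bias, versus a fair coin. The binomial
        # coefficient is common to both models and cancels in the ratio.
        log_m_biased = betaln(k + 1, n - k + 1) - betaln(1, 1)
        log_m_fair = n * log(0.5)
        return exp(log_m_biased - log_m_fair)

    for n in (8, 21):
        print(f"{n}/{n} heads: Bayes factor = {bayes_factor(n, n):,.0f}")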

thelamest 30 minutes ago [-]
The general topic and some specific ways to estimate a correction are described under this term: https://en.wikipedia.org/wiki/Sequential_analysis
parpfish 7 hours ago [-]
In lots of human studies, you can’t just stop at an arbitrary number of participants because you’ve counterbalanced manipulations to decorrelate potential confounders (e.g., which color stimulus is paired with reward, the order of trials).
hiddencost 7 hours ago [-]
https://commons.m.wikimedia.org/wiki/File:P-hacking_by_early...

The author is absolutely correct. Early stopping is a classic form of p hacking. See attached image for an illustration.

If you want to be rigorous, you can define criteria for early stopping such that it isn't, but they require relatively stronger evidence.

Clinical trials that stop early do so typically at predefined times with higher significance thresholds.

mjburgess 2 hours ago [-]
The region where `p` hits the red line should be called "publish or perish".
coolcase 7 hours ago [-]
Sounds like a variable-cost experiment. Each observation costs x$, like an A/B split on Google Ads. Why keep paying for A when you already know B is better?
nialse 6 hours ago [-]
Small samples have more variability than large samples and thus more often show spurious large effects.
coolcase 3 hours ago [-]
So you end up with a higher threshold for confidence at p<0.05 or whatever you want p to be under. It comes out in the maths!

Toss a coin 10 times and it comes up heads 10 times. There is a 1 in 2^10 (approx 1 in 1000) chance that happens for an unbiased coin.

I'm convinced it is biased.

20 times I am freaking convinced.

I don't need another 1000 tosses.
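The exact version of that arithmetic, as a scipy sketch:

    from scipy import stats

    # Two-sided exact binomial test against a fair coin
    for n in (10, 20):
        p = stats.binomtest(n, n, 0.5).pvalue
        print(f"{n}/{n} heads: p = {p:.2g}")

(The two-sided p is 2/2^n, double the 1-in-2^n figure, since all-tails would be just as surprising.)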

rrr_oh_man 7 hours ago [-]
Google Optimize used to tell you to let an experiment run for one to two weeks (?), exactly because early strong results tend not to hold up in the long run.

-> https://en.wikipedia.org/wiki/Regression_toward_the_mean

dr_dshiv 55 minutes ago [-]
Seasonality effects, too
ekianjo 7 hours ago [-]
There is another reason to keep clinical trials as long as designed. To understand the safety and side effects implications.
neilv 9 hours ago [-]
> As any gambler knows, if you roll the dice often enough, eventually you’ll get the result you want by chance alone

You never count your results, when you're sitting at the lab bench, there will be time enough for counting, when the experiments are done.

boulos 8 hours ago [-]
Nicely done. Since many folks may not know the original song: https://en.m.wikipedia.org/wiki/The_Gambler_(song)

(And TIL, this wasn't original to Kenny Rogers!)

neilv 3 hours ago [-]
I almost quoted the lyrics verbatim, since they parallel the article's sentence and are relevant to P-hacking, but they're the wrong advice:

    Every gambler knows
    That the secret to survivin'
    Is knowin' what to throw away
    And knowin' what to keep
saghm 27 minutes ago [-]
I don't know, maybe knowing when to "hold them" versus "fold them" and "walk away" would be a valuable skill here. The phrasing sounds off in the part you quote because in poker you can only play a given hand once, and after you've lost, you need to draw an entirely new dataset and start fresh.
cypherpunks01 8 hours ago [-]
Like the old saying goes,

"It is difficult to get a researcher to stop P hacking, when his career depends on his not stopping P hacking."

bjornsing 7 hours ago [-]
Yeah that was kind of my feeling too while skimming through this: ”Good luck with that…”

It’s not a knowledge problem. It’s a values and incentives problem.

pizlonator 8 hours ago [-]
The worst part about this:

> Running experiments until you get a hit

Is that it's literally what we software optimization engineers do. We keep writing optimizations until we find one that is a statistically significant speed-up.

Hence we are running experiments until we get a hit.

The only defense I know against this is to have a good perf CI. If your patch seemed like a speed-up before committing, but perf CI doesn't see the speed-up, then you just p-hacked yourself. But even that isn't foolproof.

You just have to accept that statistics lie and that you will fool yourself. Prepare accordingly.

starspangled 7 hours ago [-]
> Is that it's literally what us software optimization engineers do. We keep writing optimizations until we find one that is a statistically significant speed-up.

I don't think that is what it is saying. It is saying you would write one particular optimization (your hypothesis), and then you would run the experiment (measuring speed-up) multiple times until you see a good number.

It's fine to keep trying more optimizations and use the ones that have a genuine speedup.

Of course the real world is a lot more nuanced -- oftentimes measuring the performance speed-up involves a hypothesis as well ("Does this change to the allocator improve network packet transmission performance?"). You might find that it does not, but you might run the same change on disk IO tests to see if it helps that case. That is presumably okay too, if you're careful.

LegionMammal978 6 hours ago [-]
"Multiple times" doesn't have to mean "no modifications". Suppose the software is currently on version A. You think that changing it to a version B might make it more performant, so you implement and profile it. You find no difference, so you figure that your B implementation isn't good enough, and write a slight variation B', perhaps moving around some loops or function calls. If that makes no difference, you keep writing variations B'', B''', B'''', etc., until one of them finally comes out faster than version A. You finally declare that version B (when properly implemented) is better than version A, when you've really just tried a lot more samples.
starspangled 5 hours ago [-]
Well, it does mean "no modifications" to the hypothesis, the hypothesis being about the performance of code A and B. Code B' would be a change.

It's just semantics, but the point is that the article wasn't saying the same thing OP was worried about. There's nothing wrong with testing B, B', B'', etc. until you find a significant performance improvement. You just wouldn't test B several times and take the last set of data when it looks good. Almost goes without saying really.

throwanem 8 hours ago [-]
Why is this bad for you? You're optimizing software, not trying to describe reality. Monte Carlo and Drunkard's Walk are fine.
analog31 8 hours ago [-]
You're churning the user experience for no reason. Maybe constant optimization churn is one of the reasons why UIs are so bad.
throwanem 8 hours ago [-]
Perf, though? If a perf optimization changes the UI noticeably other than by making it smoother or otherwise less janky, someone is lying to someone about what "performance" means. Likely though that be, we needn't embarrass ourselves by following the sad example.

No, UIs churn because when they get good and stay that way, PMs start worrying no one will remember what they're for. Cf. 90% of UI changes in iOS since about version 12.

babuloseo 7 hours ago [-]
I thought languages such as Rust, plus flamegraphs and the like, were supposed to help us avoid all this testing and optimization, right? I use the built-in analysis tools that come with cargo, along with whatever my OS provides and tools like Cutter for reverse engineering. Even in Python I use the standard profiling and optimization tools. I sometimes wonder if I'm not doing enough, but the recommended default tools should cover most edge cases and performance cases, right?
pizlonator 7 hours ago [-]
Yeah!

And software ultimately fails at perfect composability. So if you add code that purports to be an optimization then that code most likely makes it harder to add other optimizations.

Not to mention bugs. Security bugs even

babuloseo 7 hours ago [-]
Heck, even AI doesn't start with security by default, from the models I have tested. It's really, really weird.
cortesoft 7 hours ago [-]
Well, what is the test you are using to measure performance? Maybe the optimizations help performance in some cases and hurt it in others... your test might not fully match all real-world workloads.
jean_lannes 8 hours ago [-]
These seem like two different things. Testing many different optimizations is not the same experiment; it's many different experiments. The SE equivalent of the practice being described would be repeatedly benchmarking code without making any changes and reporting results only from the favorable runs.
pizlonator 7 hours ago [-]
Doesn’t matter if it’s the same experiment or not.

Say I’m after p<0.05. That means that if I try 40 different purported optimizations that are all actually neutral duds, one of them will seem like a speedup and one of them will seem like a slowdown, on average.
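A sketch of that arithmetic (numpy/scipy; every "optimization" here is a dud by construction):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n_patches, n_runs = 40, 20
    hits = 0

    for _ in range(n_patches):
        # Benchmark timings before and after a patch that does nothing:
        # both samples come from the same distribution.
        before = rng.normal(loc=100, scale=5, size=n_runs)
        after = rng.normal(loc=100, scale=5, size=n_runs)
        if stats.ttest_ind(before, after).pvalue < 0.05:
            hits += 1

    print(f"{hits} of {n_patches} dud patches test as 'significant'")

On average about two of the forty duds cross the threshold, one in each direction.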

daveFNbuck 7 hours ago [-]
That's not p hacking. That's just the nature of p values. P hacking is when you do things to make a particular experiment more likely to show as a success.
bbertelsen 6 hours ago [-]
There's another cheeky example of this where you select a pseudo-random seed that makes your result significant. I have a personal seed, I use it in every piece of research that uses random number generation. It keeps me honest!
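For anyone who doubts how easy that hack is, a sketch (numpy/scipy) that shops for a seed under a true null:

    import numpy as np
    from scipy import stats

    # Seed hacking: same null data-generating process every time,
    # but keep trying seeds until the result looks "significant".
    for seed in range(1000):
        rng = np.random.default_rng(seed)
        a, b = rng.normal(size=20), rng.normal(size=20)
        if stats.ttest_ind(a, b).pvalue < 0.05:
            print(f"'Significant' with seed {seed}")
            break

A winning seed typically turns up within a few dozen tries, which is why committing to one seed up front, as the parent does, keeps you honest.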
doubletwoyou 7 hours ago [-]
what they’re referring to might be better put as applying a patch once and then running the benchmark 500 times until you get a result that’s better than baseline for some reason

which is understandably a bit more loony

pizlonator 7 hours ago [-]
Nah it could be 20 different patches.
babuloseo 7 hours ago [-]
How can I do this in Python? What modules?
smallmancontrov 9 hours ago [-]
It might be below the fold, but it looks like they're missing the most important p-hacking strategy of all: the dogshit null hypothesis. It's very reliable and it's the most common type of p-hacking that I see.

It's easy to create a dogshit null hypothesis by negligence or by "negligence", and it's easy to reject a dogshit null hypothesis by simply collecting enough data, as it automatically crumbles on contact with the real world -- that's what makes it dogshit. One might hope that this would be caught by peer review (insist on controls!), but I see enough dogshit null hypotheses roaming around the literature that such hopes are about as realistic as fairy dust. In practice, the dogshit null hypothesis reigns supreme, or more precisely it quietly scoots out of the way so that its partner in crime, the dogshit alternative hypothesis, can have an unwarranted moment in the spotlight.

nmca 9 hours ago [-]
This would be much better with an example
smallmancontrov 8 hours ago [-]
"I ran a t-test on the untreated / treated samples and the difference is significant! The treatment worked!"

...but the data table shows a clear trend over time across both groups, because the samples were being irradiated by intense sunlight from a nearby window. The null model didn't account for this possibility, so it was rejected, just not because the treatment worked.

That's a relatively trivial example and you can already imagine ways in which it could have occurred innocently and not-so-innocently. Most of the time it isn't so straightforward. The #1 culprit I see is failure to account for some kind of obvious correlation, but the ways in which a null hypothesis can be dogshit are as numerous and subtle as the number of possible statistical modeling mistakes in the universe because they are the same thing.
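A synthetic sketch of that trivial example (numpy/scipy): the treatment does nothing, but the untreated samples are measured earlier in the day than the treated ones, and everything drifts under the sunlight.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    n, drift = 30, 0.1

    # Untreated measured in hours 0..29, treated in hours 30..59;
    # sunlight adds a steady upward drift to every sample.
    untreated = 10 + drift * np.arange(n) + rng.normal(size=n)
    treated = 10 + drift * (np.arange(n) + n) + rng.normal(size=n)  # no real effect

    # The naive t-test's null ignores the time trend, so it gets rejected
    print(f"p = {stats.ttest_ind(untreated, treated).pvalue:.2g}")

The test comes back wildly "significant" even though the treatment effect is exactly zero; the dogshit null, not the treatment, is what got falsified.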

somenameforme 8 hours ago [-]
I think you're more observing an issue with experimental models not challenging a null hypothesis, than with poor null hypotheses themselves. In other words, papers creating experiments that don't actually challenge the hypothesis. There was a major example of this with COVID. A typical way observational studies assessed the efficacy of the vaccines was by looking at outcomes between normalized samples of nonvaccinated and vaccinated individuals who came to the hospital and seeing their overall outcomes. Unvaccinated individuals generally had worse outcomes, so therefore the vaccines must be effective.

This logic was used repeatedly, but it fails to account for numerous obvious biases. For instance unvaccinated people are generally going to be less proactive in seeking medical treatment, and so the average severity of a case that causes them to go to the hospital is going to be substantially greater than for a vaccinated individual, with an expectation of correspondingly worse overall outcomes. It's not like this is some big secret - most papers mentioned this issue (among many others) in the discussion, but ultimately made no effort to control for it.

aw1621107 9 hours ago [-]
> looks like they're missing the most important p-hacking strategy of all: the dogshit null hypothesis

Would you mind giving an example(s) of such and how it differs from a "good" null hypothesis?

eviks 8 hours ago [-]
The irony of this article appearing in the "career" section, when following its advice means you won't have a career.
p4ul 10 hours ago [-]
If the conclusion is "be transparent", I'm strongly supportive.

And moreover, I would be even more supportive if we found a way to change the incentives for tenure and promotion such that reproducibility was an important factor in how we make decisions about grants, tenure, and promotion.

analog31 8 hours ago [-]
Just make it even more cutthroat than it already is. Replacing one hackable incentive system with another will just produce a new set of hacks.

Disclosure: I left academia before I had to worry about any of this.

gregwebs 9 hours ago [-]
This is one of the most disturbing articles I have seen related to reproducibility because it seems to imply that scientists don’t already know this.
a_bonobo 9 hours ago [-]
As a biologist, all the field wants is p < 0.05. What it actually means is beside the point. It's a hurdle to pass to have another paper on your CV.
spinf97 6 hours ago [-]
> Ending the experiment too early

> Running experiments until you get a hit

But if I'm running an experiment, how do I know how many times to run it?

remus 5 hours ago [-]
Before you start your experiment, you calculate how many samples you need based on the estimated effect size you're looking for and how small you want your confidence interval to be.

Small effect with high confidence => more samples

Big effect with low confidence => fewer samples
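A sketch of that calculation for a two-sample t-test (assuming statsmodels):

    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    for d in (0.2, 0.8):  # Cohen's d: small vs large effect
        n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.8)
        print(f"d = {d}: ~{n:.0f} samples per group")

Roughly 394 per group for the small effect versus about 26 for the large one, which is the asymmetry described above.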

notpushkin 7 hours ago [-]
> You have full access to this article via your institution.

Huh. I’m not on a university connection or anything. Is it just open access?

shoo 9 hours ago [-]
see also: Andrew Gelman's blog

> The problem with p-hacking is not the "hacking," it’s the "p." Or, more precisely, the problem is null hypothesis significance testing, the practice of finding data which reject straw-man hypothesis B, and taking this as evidence in support of preferred model A.

https://statmodeling.stat.columbia.edu/2021/09/30/the-proble...

See also this post from 2014 with a discussion of Confirmationist and falsificationist approaches to reasoning in science: https://statmodeling.stat.columbia.edu/2014/09/05/confirmati...

> I understand falsificationism to be that you take the hypothesis you love, try to understand its implications as deeply as possible, and use these implications to test your model, to make falsifiable predictions. The key is that you’re setting up your own favorite model to be falsified.

> In contrast, the standard research paradigm in social psychology (and elsewhere) seems to be that the researcher has a favorite hypothesis A. But, rather than trying to set up hypothesis A for falsification, the researcher picks a null hypothesis B to falsify, and presents that as evidence in favor of A.

> As I said above, this has little to do with p-values or Bayes; rather, it’s about the attitude of trying to falsify the null hypothesis B rather than trying to falsify the researcher’s hypothesis A.

> Take Daryl Bem, for example. His hypothesis A is that ESP exists. But does he try to make falsifiable predictions, predictions for which, if they happen, his hypothesis A is falsified? No, he gathers data in order to falsify hypothesis B, which is someone else’s hypothesis. To me, a research program is confirmationalist, not falsificationist, if the researchers are never trying to set up their own hypotheses for falsification.

> That might be ok—maybe a confirmationalist approach is fine, I’m sure that lots of important things have been learned in this way. But I think we should label it for what it is.

See also: Andrew Gelman and Eric Loken's 2014 "garden of forking paths" paper: https://sites.stat.columbia.edu/gelman/research/unpublished/...
