Hacker News

Think about it this way: Suppose my theory is that a fair coin is not actually fair but always lands on heads.

If I flip a coin one million times and only publish the ~500,000 results where the coin came up heads, I could lead people to believe that a fair coin will always land on heads.

That's how misleading this bias can be.
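The selective-publication arithmetic here can be sketched in a few lines of Python (a toy simulation, purely illustrative; the seed and sample size are arbitrary):

```python
import random

random.seed(42)

# Flip a fair coin one million times; True means heads.
flips = [random.random() < 0.5 for _ in range(1_000_000)]

# Selective publication: report only the flips that came up heads.
published = [f for f in flips if f]

# The full record is ~50% heads, but the published record is 100% heads.
print(f"true heads rate:      {sum(flips) / len(flips):.3f}")
print(f"published heads rate: {sum(published) / len(published):.3f}")
```

No single published flip is false; the bias lives entirely in what was left out.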

But anyone can toss a fair coin, so I'd be found out pretty quickly. With more complicated and murky fields like psychology, though, this can be a huge problem, especially if people begin using this shaky work as a foundation for things like public policy, how to live their lives, and how to do their work.

After all, they can't just toss a coin to check my work on the more complicated stuff.



I think what you're describing is just fraud.

To take your fair coin example, publication bias is when the only studies that get published are the ones that show an unfairness in the coin – the rest of the studies that simply find 50% heads and 50% tails (aka a fair coin) don't get published because they're not interesting.


> I think what you're describing is just fraud.

Fraud requires intent. Now, this extreme of bias would typically require said intent, sure. But let's say the experimenter has anterograde amnesia.

They get an idea, flip a coin - oh hey, it's heads! They publish the interesting result. They get an idea, flip a coin - oh, it was tails, nevermind that theory. They get an idea... each time, they don't remember that they've already attempted the theory. No intent, no fraud, just bias.

The good news is this is a pretty unlikely extreme in an individual. Anterograde amnesia is rare. The bad news is nobody can remember that someone else attempted a theory and came up with a negative result that they didn't bother to publish, and even individuals will have a hard time perfectly accounting for and preventing their own bias within a single study. The even worse news is that fraud exists as well.


Pre-registration solves this issue: if I announce the study before I have the results, we can make sure that uninteresting outcomes get published too.
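A minimal sketch of why a registry helps (hypothetical Python; the study counts and IDs are made up): even if only "interesting" results get written up, the gap between registered and published studies is itself public.

```python
import random

random.seed(1)

registered, published = [], []

for study_id in range(1000):
    registered.append(study_id)   # announced before the coin flip
    heads = random.random() < 0.5
    if heads:                     # only "interesting" results get written up
        published.append(study_id)

# With a registry, the file drawer is visible: anyone can count how many
# registered studies never reported a result.
missing = set(registered) - set(published)
print(f"registered: {len(registered)}, "
      f"published: {len(published)}, missing: {len(missing)}")
```

Roughly half the registered studies go unpublished, and that shortfall is exactly the signal a reader needs to discount the published record.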


What if funding for a study gets withdrawn because it looks like it'll produce negative/unfavorable results?


This should be publicly recorded too and maybe eventually we'll have a list of funding bodies that like to minmax profits by damaging research.


This doesn't seem very feasible given the current level of privacy afforded to companies.

I don't think there's a way to distinguish between studies that lost funding because there were genuinely fewer funds to go around (and the foo experiment just happened to be one that didn't make the cut) and studies that had their funding pulled for harmful reasons, like negative results.

Mainly because it's devilishly easy to mask: you'd just reduce funding to research and redistribute it into something plausible, like marketing. Of course, this isn't sustainable when lots of studies are producing negative results.

Of course, one way to counter that is to insist that the data is published regardless, but I'm sure that the data could be hidden with clever use of NDAs and court arguments along the lines of "We need to prohibit the distribution of company assets".


Yeah, I guess.

But something else came to my mind now:

> Of course, one way to counter that is to insist that the data is published regardless, but I'm sure that the data could be hidden with clever use of NDAs and court arguments along the lines of "We need to prohibit the distribution of company assets".

I find myself more and more treating complexity as a proxy for dishonesty. Companies that are trustworthy seem to have a relatively simple business model that they don't try to hide; the more convoluted it is, the more likely it is that someone is scamming you. I wonder if that could be used as an effective metric: the more complex the reason someone weasels away from publishing the data, the more their "reputability" score goes down?


What about starting by implementing these measures in public universities? That seems very feasible, and could have a snowball effect on how studies are broadly analyzed and perceived.



