
Speed is important for a number of reasons: mistakes cost less if they can be quickly corrected, so taking risks and publishing your failures becomes useful (no one publishes failures...). Speed also reduces duplicate work.

Peer review is slow because publishing and retracting used to take a long time; now it is instant. And as for the quality of the reviewer, perhaps it would make sense for the journal to employ professional scientists who only review -- they'll be up to date on all the current research (that's their job), can highlight gaps, and can help cross-pollinate other disciplines by reviewing 2 or 3 different but related topics.

The idea that a scientist should do everything seems to be more and more inefficient. We need to break down tasks like we do in the commercial world so that experts can move faster.



> Peer review is slow because publishing and retracting used to take a long time; now it is instant.

Have you ever done a peer review? It took me hours to do each one. To do it well, you have to stare at each procedure and each conclusion and try to imagine what could be going wrong.

You may also have to read a lot of literature, both in general (so that you can recognize the difference between something genuinely new and something that was already published in 1975) and in particular (so that you can make sure the manuscript is not misrepresenting its own references).

And, given what's at stake in a review – you're basically holding a year or more of some grad student's life in your hands, at a minimum – it's only respectful to take it seriously.

> perhaps it would make sense for the journal to employ professional scientists who only review...

This is either obvious, or silly. Obvious, to the extent that journals always have employed scientifically trained editors – they're called "editors" – to do everything from tweaking bad phrasing to offering criticism to ultimately making the final decision about what gets published. Silly, because journal editors are rarely experts in your specific field. How can they be? There are a lot of scientific fields. It's impossible to be "up to date on all the current research" in every single one of them, to the requisite level of detail. And most fields aren't awash in money to the extent that they can support a full-time editor with complete expertise. In the general case, you have to rely on peer review because only your peers have the incentive to be experts in your field.


I'm not saying that peer review doesn't take a long time to do right, but perhaps a new job can be created whose purpose is to review and stay current on the literature. I'm not saying one person can know everything, but there may be a better way. We have more scientists and more papers being published every year, and I'm making the argument that we've outgrown current procedures. This is not an editorial job; this job does exactly what current peer reviewers do -- but it's their full-time job.

Could I be wrong? Sure -- but if the field is so big that a dedicated person can't keep on top of it, then there is no chance a working scientist can either. The biggest issue is whether there is an incentive for someone with the skills to do it: why would I want to just review the work of others when I have the skills to do my own?


The point is that the skill sets needed to do science and to judge scientific manuscripts are very close together. Keeping on top of one is keeping on top of the other.


Journals should probably pay people to review works (really, it would be included in the submission cost). But many people are resistant to the idea.


>Speed is important for a number of reasons: mistakes cost less if they can be quickly corrected, so taking risks and publishing your failures becomes useful (no one publishes failures...). Speed also reduces duplicate work.

You're missing the point of scientific publication. The entire point of publication is so others can reproduce your work. Speed doesn't work if you need a billion data points taken over 20 years to prove a long-term issue.

>The idea that a scientist should do everything seems to be more and more inefficient. We need to break down tasks like we do in the commercial world so that experts can move faster.

The problem here is that for a lot of cutting-edge theoretical research, the experts are the only ones qualified to really vet the paper. And the set of experts who: 1. Are capable of understanding everything in the paper, and 2. Are willing to put aside their personal research to review it, is VERY small.

For example, the claimed P=NP proofs produced over the last few years were presented to only a small group of about 20 very qualified mathematicians to vet. I, with an undergrad CS degree, could barely understand the summary of the proofs. As far as I know, those proofs still have not been completely refuted or accepted. I don't think this is inefficient; it's simply the fact that there aren't enough people with enough time to really work with extraordinarily complex concepts and ideas.


> 1. Are capable of understanding everything in the paper

Agreed. In my academic field (debuggers), to understand and know enough to give a real review of the subject requires reading what I would estimate to be somewhere around a thousand papers. It also requires keeping up with the major industrial producers of the product. I think the field has something like two to three thousand papers right now; I haven't checked in a while.

For my work, I am just hitting the seminal papers and trying to avoid spending time reading work that led nowhere. I've read maybe two to three hundred articles (I haven't tracked), and my thesis cites over one hundred.

So no, ordinary people won't do that. Ordinary computer programmers won't do that. Most people are not able (or prepared) to understand everything in a given paper and to comment on what has gone before and what has been tried before.

Academic knowledge maintenance is the total philosophical opposite of tl;dr and Twitter.


Out of curiosity, is there some place to locate a clear listing of the seminal works in your field?

I'm genuinely curious how much of the difficulty here is actual breadth/depth of the subject matter, and how much of the difficulty is due to some systemic inefficiency in the way research is published and consumed.


The trick is usually to look at the references section of a paper. If there's an old paper in there (say, >10 years old), then it's probably seminal. If you see the same paper in a lot of reference sections, then it's probably seminal.
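
That heuristic is mechanical enough to script. Here's a minimal sketch in Python (the corpus, paper IDs, and thresholds are all invented for illustration; a real version would pull reference lists from a citation database):

    from collections import Counter
    from datetime import date

    # Toy corpus: each paper has a publication year and a list of reference
    # IDs. Everything here is made up for illustration.
    papers = {
        "p1": {"year": 1975, "refs": []},
        "p2": {"year": 2001, "refs": ["p1"]},
        "p3": {"year": 2005, "refs": ["p1", "p2"]},
        "p4": {"year": 2010, "refs": ["p1", "p3"]},
    }

    MIN_AGE = 10        # "old" paper threshold, in years
    MIN_CITATIONS = 2   # appears in "a lot of" reference sections

    # Count how often each paper shows up in other papers' reference sections.
    citations = Counter(ref for p in papers.values() for ref in p["refs"])

    this_year = date.today().year
    seminal = [
        pid for pid, meta in papers.items()
        if this_year - meta["year"] >= MIN_AGE and citations[pid] >= MIN_CITATIONS
    ]
    print(seminal)  # ['p1']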

So: to know what's seminal so you can skip reading a bunch of papers, you need to read a bunch of papers. Right.


Sounds like a machine learning problem.


Not that I've seen, but it may simply be that I haven't stumbled into the right FAQ.


> You're missing the point of scientific publication. The entire point of publication is so others can reproduce your work. Speed doesn't work if you need a billion data points taken over 20 years to prove a long-term issue.

In theory, that's the point. In practice, no journals ever publish replications, so nobody wastes their time reproducing others' work when they could be working on something publishable or their next grant proposal.


>In practice, no journals ever publish replications, so nobody wastes their time reproducing others' work when they could be working on something publishable or their next grant proposal.

Uh, this is exactly how science works. It's not worth publishing the exact same results of the exact same experiment by multiple people. That only adds noise to the discussion.

If I arrive at the same conclusion after running the experiment again, then there is little benefit to anyone if I do a full writeup and publish it. On the other hand, if I am unable to arrive at the same results, there is tremendous value in publishing my findings. Was the original study flawed? Were my own methods? That's what peer review and publishing results help determine.


Not publishing positive replications is just as bad a problem as not publishing negative original results. We have things like BigTable and Hadoop now; if 100 laboratories repeat an experiment and publish their results, that just means we can raise our confidence in the result by the product of their likelihood ratios (equivalently, the sum of their log-likelihood ratios).

Getting more data improves the accuracy of your results even more than using more sophisticated algorithms: http://www.catonmat.net/blog/theorizing-from-data-by-peter-n...
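
As a quick sketch of that arithmetic in Python (all numbers are invented; assumes the replications are independent):

    import math

    # Posterior odds after independent replications. The prior odds and the
    # likelihood ratios P(data | H) / P(data | not H) are toy numbers.
    prior_odds = 1 / 9                        # prior probability of 0.1
    likelihood_ratios = [3.0, 2.5, 4.0, 1.8]  # one per published replication

    # Independent evidence multiplies the odds; in log space it simply adds.
    log_odds = math.log(prior_odds) + sum(math.log(lr) for lr in likelihood_ratios)
    posterior_odds = math.exp(log_odds)
    posterior_prob = posterior_odds / (1 + posterior_odds)

    print(f"posterior probability: {posterior_prob:.2f}")  # ~0.86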


I remember seeing an article suggesting that the incentives related to publication (specifically in medicine) set us up for conditions where a majority of published results are wrong, and we have no way of knowing it:

Effectively requiring a positive result for publication means two unrelated phenomena can be the subject of multiple studies, with the one fluke that finds a correlation being the one that gets published. At that point, we only get corrected if someone actually attempts to replicate the result, but replication effort may well be seen as a waste of resources.
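
A toy simulation of that selection effect (every parameter is invented for illustration): when only "significant" results get published and most tested hypotheses are false, a large share of the published record is flukes.

    import random

    random.seed(0)

    N_STUDIES = 100_000
    P_TRUE = 0.1   # fraction of tested hypotheses that are actually true
    ALPHA = 0.05   # false-positive rate at the significance threshold
    POWER = 0.8    # chance a real effect yields a significant result

    published_true = published_false = 0
    for _ in range(N_STUDIES):
        is_true = random.random() < P_TRUE
        significant = random.random() < (POWER if is_true else ALPHA)
        if significant:  # only positive results get published
            if is_true:
                published_true += 1
            else:
                published_false += 1

    total = published_true + published_false
    print(f"published findings that are false: {published_false / total:.0%}")
    # Expected: 0.9*0.05 / (0.9*0.05 + 0.1*0.8) = ~36%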


Speaking as an academic mathematician, I don't think it is so small.

If the paper is groundbreaking, everyone will want to read it anyway.

If it is so-so, an expert will be able to determine if it is correct by skimming it pretty quickly.


I'd make a slight correction and say that the point of publication is not so others can reproduce your work; it is so others can confirm your results.

Exact reproduction can be problematic in science because if there was a flaw in the original experimental design or method, an exact reproduction could "confirm" that same flawed result. It's better if other scientists can learn enough to understand the result, and design their own experiments to confirm or disprove it.

A super simple example is if I drop something and then report a value for gravitational acceleration. But what if I dropped a feather? If you simply reproduce the experiment, you'll get the same (wrong) result. Whereas if you choose your own object to drop, there's a better chance you'll pick something denser and get a different result.
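
To put toy numbers on that (all invented): inferring g from a timed drop via g = 2d/t^2 "replicates" cleanly even when the measurement itself is off.

    # Infer gravitational acceleration from a timed drop: g = 2*d / t**2.
    # Heights and times below are invented for illustration.
    def inferred_g(height_m: float, fall_time_s: float) -> float:
        return 2 * height_m / fall_time_s ** 2

    # A dense ball dropped from 2 m lands in about 0.64 s; a feather, slowed
    # by air resistance, takes much longer. Repeating the same feather drop
    # "reproduces" the same wrong value every time.
    print(inferred_g(2.0, 0.64))  # ~9.8 m/s^2 (close to the true value)
    print(inferred_g(2.0, 2.5))   # ~0.6 m/s^2 (replicated, but wrong)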



