>Does that process need to be any different from PRs on github? Can't all that be distributed across a team of researchers working on the topic of the journal, just as PRs on github
It's not the same as a github PR because wading through a mountain of badly written papers is work researchers do not want to do. That's what a publisher (like Elsevier) does. See example of Elsevier employee Angelica Kerr.[1] It should be easy to see that scientists would think it's a waste of their time to do Angelica Kerr's work.
I didn't get a chance to respond to the reply by allenz that suggested that journal chiefs should hire their own editing and administration staff. Again, that "solution" makes the same mistake of thinking scientists will do something they have no interest in doing. They don't want the hassles of "contracting" an editing team.
Research publishing is not a "software problem" solvable by a "software platform solution." You can't fix it by applying social-platform mechanics such as github PRs or StackOverflow/Reddit-style votes. (My parent comment tries to explain this.[2])
So I read your links. How is Angelica Kerr's job any different from a spam-filter function? We have production-grade techniques for handling that now, and without going into technical details, everything you listed among her tasks could be implemented with ML and NLP techniques, most of them not even cutting edge. In many ways an algorithmic approach could do her job better: some "crackpot" articles will not be crackpot but genius, and if you give researchers the ability to tweak the filter by adjusting sensitivity/specificity to their own tastes, you're mathematically guaranteed to increase the likelihood of getting the next breakthrough out there.
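To make the tunable-filter idea concrete, here's a minimal sketch: a naive Bayes bag-of-words score with an adjustable decision threshold, so each researcher can trade sensitivity against specificity. This is a toy illustration with hypothetical function names and training data, not a claim about how any real journal filter works; a production system would need far more than word counts.

```python
import math
from collections import Counter

def train(docs):
    """Count word frequencies per class from (text, label) pairs.
    label is True for crackpot, False for legitimate."""
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    for text, label in docs:
        words = text.lower().split()
        counts[label].update(words)
        totals[label] += len(words)
    return counts, totals

def crackpot_score(text, model):
    """Log-odds that the text is crackpot, via naive Bayes
    with add-one smoothing."""
    counts, totals = model
    vocab = set(counts[True]) | set(counts[False])
    score = 0.0
    for w in text.lower().split():
        p_crack = (counts[True][w] + 1) / (totals[True] + len(vocab))
        p_legit = (counts[False][w] + 1) / (totals[False] + len(vocab))
        score += math.log(p_crack / p_legit)
    return score

def filter_paper(text, model, threshold=0.0):
    """Reject iff the score exceeds the threshold. A researcher who
    fears missing a breakthrough raises the threshold (fewer false
    rejections); one drowning in submissions lowers it."""
    return crackpot_score(text, model) > threshold
```

The threshold is the knob described above: each reader sets their own rejection bar rather than accepting one gatekeeper's fixed judgment.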
The actual problem, which I think you implicitly identify when talking about Nature and Cell, is the "prestige" factor. As long as researchers are motivated by a journal's prestige, because they fear being ostracized by their peers or not getting recognition for their research (which carries monetary and career costs), it will be very difficult to convince anyone to switch, regardless of how effective the platform could be.
I haven't thought about that problem before so I'm not sure how to address it as of now.
To whom should you send it in order to get an expert review?
Which of those people has an ax to grind with one of the authors?
Of the remaining people, who do you think you could convince to invest a day or more to carefully evaluate the paper?
Okay, so you got replies back from two of your carefully selected referees (after two months of badgering one of them):
Referee A thinks the paper is great, insightful, and advances the field, but wants extensive changes.
Referee B thinks the paper is derivative drek and should be rejected because his friend C has already done something similar.
Your journal publishes twenty such articles a week but receives three hundred. The careers of the authors are partially on the line, as is the prestige of the journal, the attention of the readership, and future submissions from prospective authors.
Good luck training a neural net to do this well. I suspect a neural net can be trained to reject the worst crackpots, but not much more than that without also rejecting insightful but unconventional, important papers.
The problem is that most of the (substantial) work you highlight above is done by the academic community (mostly paid by the public purse), while most of the (substantial and above-market) profits go to the private publishers.
> In addition to basic copy-editing, she prevents crackpot articles such as "Darwin's Theory of Evolution is proven wrong by Trump" from reaching the journal editors and wasting their time. (She may inadvertently forward some bad articles but she has enough education to reject many of them outright to minimize the effort by the journal's scientists.)
Do I read this right, that an Elsevier manager with no degree in the subject rejects research papers without consulting the editors? To me as a scientist this is a big red flag and a concern about the journal's quality. While the example given here looks obvious (and probably contrived), most real examples are less so. And while recognizing those takes little time and effort for a trained eye, the task certainly should not be left to a subject non-specialist. It will not save much time and will create more concerns than it resolves.
[1] https://news.ycombinator.com/item?id=15278621
[2] https://news.ycombinator.com/item?id=15269673