
Consider you are another researcher (not the one who has just published).

You may want to keep up to date with your field. This is worthwhile: it helps foster new ideas, both now and for future proposals. There will be a selection of journals relevant to you, and the work they publish will generally be tiered by the scope or impact of the journal. Journal A publishes a lot of papers you find insightful or useful; journal B publishes work that is good to know but not profound. If you have maybe 20 minutes, you will probably read the one paper that catches your eye from journal A.

The truest surface-level metric of how good your work is is how well cited it is: how many people read it, and whether they care once they put it down. Putting your paper in a high-impact journal doesn't necessarily get you a lot of citations, but it's a lot of exposure, and it maximises those chances.



Maybe I am wrong here, but shouldn't the quality metric be whether it acts as a basis for practical applications? As far as I understand, research is supposed to give theoretical answers and frameworks: X works like this, Y doesn't work like that. This knowledge should then be applied to the real world, where it will be proved right or wrong. Maybe a silly thought, but I suppose pre-1800 science didn't have to pay such fees to prove the quality of its work; they just applied it.

(I understand not all papers can be immediately applied, but is that the majority of the work? And if so, is that a good thing?)


As a society, we want to have a "portfolio" of research. We want short-term research with tangible results, moderate-term research that builds on the current state of the art and advances it, and long-term research that might pay off decades from now. We also want to mix in "moonshot" research that likely won't pay off, but that would be revolutionary if it did. The proportion to invest in each is up for debate.

Generally, as the time horizon increases, risk increases, and it becomes less and less likely that any particular research direction will pay off. But you still need to do it, or you won't have anything left after the current directions have dried up, matured, or otherwise run their course.

I think the error people make is assuming academia is supposed to cover all three. Largely, academia covers long-term research, plus some moderate-term work and moonshots. Short-term research is better handled by industry, where there is a more direct measure of applicability ($$). But business is generally not interested in the longer-term stuff because it is too risky.

So academia as a whole is about forming the bricks that industry uses to build useful things. While any particular paper is not going to be useful to society anytime soon (if at all), the aggregate knowledge will likely be useful at some point in the future.


Unpublished science done for practical applications of course exists, but that is called R&D, and it is generally not made public because it's corporate IP. If that's what you mean.


Most scientific research has no practical applications (or at least, not within the lifetime of the people doing it). Consider e.g. cosmology. The quality of research in cosmology varies, as in any other discipline, but even the very best research in cosmology has few if any immediate practical applications.



