Hacker News | kraetzin's comments

This is normal practice. To publish in a respectable journal you are charged £1000+. To publish your paper as open access, you can be charged another ~£1000 for the privilege (IEEE).


Depends on the field. For example, practically all of machine learning and related fields are $0 to publish and are open access by default. It’s been a huge boon for everyone except I guess traditional publishers.


>It’s been a huge boon for everyone except I guess traditional publishers.

My heart bleeds for them!


And the reviewers will get exactly zero out of this.


And I see all these people cautioning against publishing in predatory journals, which can be distinguished by the fact that they require me to pay to publish. Then we have the big respectable journals, which also require payment to publish. Hmm... sounds like the only real difference is that the respectable journals are considered respectable and the predatory journals are considered predatory.


This is a relatively new practice, invented by Robert Maxwell in the 1950s or so.


I think it's laughable they still charge extra for color pictures.


They charge extra to have your pictures printed in color; online, everything is in color anyway. I think that is a fair practice, no?


No. You might want to submit color figures regardless of the medium. Besides, color printing isn't that costly compared to their high margins. It's in their favor too if people use color images; the age of simple scatterplots has passed for most fields.


Never encountered paying for non-open access publication. I have only encountered "article processing fees" in relation to open access publishing.

If you're okay with having your paper paywalled, you do not need to pay for journal publication.

So I'm curious in which domain/subdomain you have encountered this.


In my area of semiconductor engineering/detectors, we generally have to pay to publish. Journals do seem to be slowly moving away from this for plain publication; however, as you say, you still need to pay for open access. Depending on the funding provider for the research, it can be compulsory for the papers to be open access, so we would end up paying anyway.


Half of my PhD thesis is considered "unpublishable" because, after doing the work, my supervisors felt it's actually "unsurprising" that it didn't work out. We took methods that had been exploited to improve on previous results for over a decade to their logical extreme, and found that this method no longer leads to improvements. After doing the work it seems obvious. A paper on the subject would almost be considered uninteresting, and a high ranking journal would ignore it (which is why it's considered "unpublishable"). However, nobody has published this information, and it would help others to not make the same mistake.

I wonder how many times similar "mistakes" have been made by PhD students across disciplines.


My experience: I proposed a thesis to an advisor who deemed it unlikely to work. He ran it by a colleague who came to the same conclusion. I switched to a different major, pursued the same thesis, and invited the faculty who had initially turned it down to the defense because it was relevant to their field.

It was frustrating to hear them voice the opinion at the defense that, “of course, it would work.” After seeing the data, they took the exact opposite side, claiming it was obvious to the point of being of limited publishing value.


Academia is 'full' of people who lack intellectual integrity. Imagine if Frege's response to Bertrand Russell's letter concerning the barber's paradox had been 'it's obvious!' Instead of doing that, Frege openly acknowledged Russell's criticism in his book.


Imagine how much worse it is outside academia, where there's not even a pretense of adhering to any intellectual rigor.

Also, within academia there is still a wide spectrum of intellectual rigor across the disciplines. Some things are just more verifiable than others.



I recall Dan Ariely mentioning this in one of his books. His field is psychology/behavioural economics, a field where very often either outcome of an experiment can seem obvious after the fact. (Questions like Do newborn babies have an intuitive understanding of gravity?)

As I recall, he restructured his lectures, asking upfront for a show of hands as to which outcome everyone anticipated, before the big reveal. After making this change, he had fewer people approaching him after lectures saying how obvious the outcome was.


> Do newborn babies have an intuitive understanding of gravity?

I really had no idea about this one, but how it's studied is very interesting.

https://www.livescience.com/18101-infants-grasp-gravity.html


That’s interesting, because much of the thesis was rooted in behavioral economics. The faculty who initially turned it down were in the economics department.


Do you know which book?


It's one of the three of his books I own, I'm afraid I can't easily narrow it down further. (I own all three as audiobooks, and I recall Simon Jones reading it, but it turns out he read all three.)

• Predictably Irrational

• The (Honest) Truth About Dishonesty

• The Upside of Irrationality


Tangentially related: I wish there were a way to “search” within audiobooks. Once you’ve finished the book, it’s almost impossible to figure out where a specific chapter or passage is if you’d like to go back.


The semantic data format people have had a point all along. Just because digital audiobooks are inspired by books on cassette is no reason the data format can't support all sorts of metadata. We could have a format for written and read-aloud works that, in the proper player software, highlights every word of the text on screen as it's read, with user annotations, bookmarks, indexes, and full-text search.
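As a toy sketch of what such a format could carry (the structure and timings below are invented for illustration, not any real standard such as EPUB Media Overlays): pairing each word of the text with its audio timestamp is enough to support both full-text seek and live highlighting.

```python
from bisect import bisect_right
from dataclasses import dataclass

@dataclass
class TimedWord:
    text: str
    start_ms: int  # when the narrator begins this word

# Hypothetical sync track for one sentence of an audiobook.
track = [TimedWord("It", 0), TimedWord("was", 300), TimedWord("a", 520),
         TimedWord("dark", 640), TimedWord("night", 980)]

def seek_to(phrase):
    """Full-text search: return the audio position (ms) where a phrase starts."""
    words = phrase.split()
    texts = [w.text for w in track]
    for i in range(len(texts) - len(words) + 1):
        if texts[i:i + len(words)] == words:
            return track[i].start_ms
    return None

def word_at(ms):
    """Which word the player should highlight at playback time ms."""
    starts = [w.start_ms for w in track]
    return track[bisect_right(starts, ms) - 1].text
```

With per-word timestamps, "go back to that passage" becomes a text search that returns an audio offset, and highlighting is a binary search over start times.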


Kindle supports this with Whispersync. I don't know how the file format works.


They'd rather charge you separately for a DRM-locked ebook version.


I've been noticing a variant of this in myself lately. Maybe I think about some problem and stew on it and think to myself that it's probably not possible, or at least I can't come up with ideas. Then I hear that somebody else has made progress and suddenly I have a bunch of ideas. Somehow switching from "how could this work" or "can this work" to "how did they do it" leads me down entirely different paths.

I've been trying to get better at recognizing the bias and switching viewpoints without the external push.


I often think about this Michael Abrash story: (chapter introduction) http://orangeti.de/OLD/graphics_programming_black_book/html/...

When I'm stuck, or getting close to stuck, I always try to assume what I want to do has already been done in some way. Long Google searches or discussions with domain experts, purposely vague, looking for similar ideas. Even a ridiculously unrelated paper, or a passing mention in one, will launch me into an idea-generation frenzy and I'll quickly build confidence.

Love M. Abrash's books. Shame he didn't keep writing them, they were inspirational for me.


This is an interesting perspective that I think may exhibit itself in many domains. I’m reminded of the fact that the sub-four-minute mile was considered impossible, and then when it was first broken, many others completed the same feat within a relatively short period of time.


I like the linked Egg of Columbus:

https://en.wikipedia.org/wiki/Egg_of_Columbus


I guess the lesson is, when an advisor rejects your thesis idea, get them to put their reasons in writing.


I had their rejections in email. Ultimately, the committee brought them around and accepted my thesis so I didn’t feel it was worth burning those bridges.


What would that get you though, other than knowing you were right (which you already knew without having it in writing) and (if you decide to publicly call them out on it) enemies for life in your chosen field of study?


Oh, I wouldn't do it publicly. The point would be to push back, in private discussions, against the argument that the result was not interesting enough to publish.


I believe this quote from J.B.S. Haldane is relevant here:

I suppose the process of acceptance will pass through the usual four stages:

(i) This is worthless nonsense;

(ii) This is an interesting, but perverse, point of view;

(iii) This is true, but quite unimportant;

(iv) I always said so


It's actually pretty impressive when a scientist goes through the full cycle, especially if they're already at the top of their field. Usually they never make it past (ii), hence the Planck principle: "science advances one funeral at a time" (see The Structure of Scientific Revolutions by Kuhn).


Didn't you ask them, "But didn't you say this was unpublishable?" And what was their response?


I did not. Maybe I was being weak, or maybe I didn’t want to make them look bad in front of their peers, but I did not bring up any of the previous conversations during the defense.


Thanks for the response. I'm always just curious how people react to things like that.


Since the first and second interactions were 4 or more years apart, it is entirely conceivable that the field had moved enough during those years to warrant a genuine change in opinion.


It's also possible that they were just trying to protect a student from investing years into a project that they deemed to have a low (but perhaps non-zero) chance of success. This is a thing that good advisors should do to protect their students from career-wasting wild goose chases.

The fact that the two interactions were very different with four years and a completed thesis between them doesn't surprise me at all. My own embarrassing story is that I advised Jason Donenfeld to submit his WireGuard paper to NDSS, forgot about the meeting entirely after a few months, then complained (in retrospect, unfairly) when NDSS accepted it. Advisors do stupid, embarrassing, forgetful things all the time. The OP's story isn't even a misdemeanor.


Well I guess the problem there is inviting them to your defence. People gonna people.


True. In retrospect, I was a bit naïve.


I would have challenged them to a duel.


But be sure to put all your brilliant thoughts into writing and send it to a friend, you just might get a whole area of mathematics named after you...


That sucks to hear. Null results are important, as you say, if only to dissuade others from doing the same.

See also the "file-drawer problem" (https://en.wikipedia.org/wiki/Publication_bias). Also, with regards to the incentives in the field and the lack of null results, there's always Ioannidis's classic work (https://journals.plos.org/plosmedicine/article?id=10.1371/jo...).


I disagree. Negative results are important. Null results are of very limited interest. The two are worlds apart.

A null result simply means you tried something and it didn't work. But you don't know why. You haven't proven it didn't work. There are literally millions of reasons why something might not work. For instance, you could try to use compound X to cure disease Y, observe no effect, and conclude that X doesn't cure Y. But what if somewhere in the process of making X you made an uncaught mistake and you instead used X'?

A negative result means that you tried something and you came to the proven conclusion it doesn't work. This is, crucially, as hard to obtain as a positive result. In my example, it would imply a much longer process than simply "apply X, see no effect in Y, make a few robustness checks, done".

You could say, "Well, publish the null anyway, somebody will catch the mistake." Unlikely. There are already so many papers out there that keeping up is impossible. If we also published null results, this number would grow tenfold at the very least. Nobody could possibly check everything. They would see a paper "X doesn't cure Y" and call it knowledge, stifling a possible cure virtually forever.

Am I splitting hairs? Perhaps. But I think HN prides itself on being a scientifically minded community, and thus it has a mandate to use terms correctly. Confusing "null" with "negative" is a sin.

I hope one day I'll find a way to strongly and passionately argue against the "null results are as important as positive results" position. It is a bad meme. Charitably, I consider it most of the time an honest mistake. But sometimes it gives me the impression it is a cheap trick used by people to erode the reputation of academia.


I understand that people here generally mean 'negative results', by your definition, when they say 'null results'. That's the intent.

Null results are also important.

Suppression of null results allows for p-hacking and confirmation biases to creep into research, and greatly reduces the power of literature reviews.


True, but I'm not really arguing against what you're saying. It is true that, when you have a positive (or a negative!) result you should also report on the nulls you obtained on the way (most likely in the supplementary materials) as a compendium of the result, to put it into context.

What I'm arguing against is publishing a null result as a stand-alone publication. This creates the illusion of it being somehow a "result", which it is not (in fact, we should stop calling them "results" altogether). With a null you haven't proven anything, and thus it is not a sufficient basis for a publication.


I see. Thanks for adding to the clarification. I think that the presentation of nulls as "results" can definitely be disingenuous. Ideally, science would have a better database to keep track of what people find, where we could add nulls in a way that doesn't highlight their "importance". As the person above says, reporting nulls is still useful to prevent p-hacking and publication bias.

(Of course, ideally I think we'd be better off focusing on reporting the data in a Bayesian approach, but that hasn't really gotten traction in the broader community.)


> Negative results are important. Null results are of very limited interest.

Correct. There is a highly cited paper in CS where the author showed that a mathematical model that was widely used in research didn't actually work (anymore) in reality. That paper was the starting point of a lot of new research in that field.


Can you add a citation for that paper?


> I disagree. Negative results are important. Null results are of very limited interest. The two are worlds apart.

I agree they're different, but disagree that they're worlds apart. There's a spectrum between them, caused by uncertainty and statistics. If I say the average treatment effect of my new drug is probably somewhere between -x and +y, it could be a negative result or a null result. It's the fuzzy line between statistically insignificant and materially insignificant.

Maybe I only had two patients per experimental cell, so I barely learned anything. The drug's treatment effect on lifespan is between -30 years and +10 years. It's "null" in that we didn't learn much of anything.

Maybe I had a billion patients per cell and I learned that the average treatment effect on lifespan is between -0.001 days and +0.1 days. It's "negative" in that we learned the drug doesn't materially affect lifespan.

The position we seem to be in is that most conventional experiments are powered at 80% for a moderate effect size, meaning that many of our null-or-negative (-x, +y) results will sit right around the region where it's unclear whether they are null or negative.
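That fuzzy line can be made concrete with a quick calculation: under the usual normal approximation, the 95% confidence-interval half-width for a difference in means between two equal arms shrinks as 1/sqrt(n). A minimal sketch (the standard deviation and sample sizes are invented for illustration, not real trial data):

```python
import math

def ci_halfwidth(sd, n, z=1.96):
    """95% CI half-width for a difference in means between two
    arms of n patients each, with common standard deviation sd."""
    return z * sd * math.sqrt(2.0 / n)

# Hypothetical standard deviation of 10 years in lifespan effect.
tiny_trial = ci_halfwidth(10, 2)              # ~19.6 years wide: a "null"
huge_trial = ci_halfwidth(10, 1_000_000_000)  # well under a day: a "negative"
```

With two patients per arm the interval spans decades and you've learned essentially nothing; with a billion it pins the effect near zero, which is a genuine negative finding. Most real experiments sit in between.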


I generally agree in the sense that "null results" should not be published as "results." But, especially in the experimental sciences, I think it would be an incredible (and very useful) feat to have well-documented experiments that turned out to be ultimately null or failed, to prevent others from doing the same. (Or, on the other hand, to have people improve on the given methods in order to get a positive/negative result in some specific sense. For example, photonics is returning to lithium niobate platforms, which were essentially abandoned in the '80s but have had incredible successes lately. I'm sure there's been a lot of replicated work here.)

Of course, the problem with all of this is that there really aren't very good incentives to accurately and carefully report null experimental results (except as a kind of "folk knowledge" within a given lab) which would limit its general usefulness. But the "platonic ideal," so to speak, of a null result journal I think would be relatively useful.


I think you need to rework your definitions. Avoid using the word "proven." Most of the time science proves things false; you can't prove anything to be true.

The difference between a null and a negative is just that a negative is an interesting null. In your null example, to create a proper negative you'd probably report several compound synthesis methods instead of one. You'd probably also want to use more mice/data in your analysis.


Those are some good reads, and have absolutely been my experience. It's depressing how many publications that I've come across don't provide the whole story, and are probably false.

I've found that looking at what a paper doesn't report can be far more important than what it claims.


I wonder how many novel techniques could come out of these types of reports if they were actually analyzed by ML or NLP.


I definitely think there's more room for this sort of guided/ML analysis, but I'm not quite sure how to make traction on extracting the structure of scientific papers... hopefully someone with more experience can chime in.

I think paper discovery has recently received a huge boost thanks to ConnectedPapers, though. [https://www.connectedpapers.com/]


From experience, I want to say: way too many. Journals and published articles make research look like a lab full of PhD students working on their own, without access to all of the lab's previous results (good, bad, and everything in between), who don't talk to each other except in a 10-minute seminar every six months to quickly show some work.


IMO, top-tier conference papers nowadays tend to focus too much on telling an interesting story. This makes researchers show only the surprising results in the paper; the unsurprising ones are hardly mentioned.

However, the uninteresting part would definitely help others not make the same mistake. An uninteresting result is still a result (and a contribution), isn't it?


Yeah, that's the TED Talk effect, as some people put it

You cannot make a TED Talk about something that people already know


What if you presented it with a bunch of single-word slides and had a compelling frame? “What my 10 years among uncontacted tribes in the Amazon taught me about the boiling point of water”


> We took methods that had been exploited to improve on previous results for over a decade to their logical extreme, and found that this method no longer leads to improvements.

This actually sounds like a really good review paper! Review papers serve multiple purposes: getting people up to speed on a subject, and putting your own spin on a subject to guide future investigation.


One of the most important things my PhD advisor taught me was to design an experiment so that whichever way the result comes out, it tells you something interesting (even if one of those ways might be more surprising, and more interesting).


1. Other than your time, why not preprint it?

2. As an aside, I can't tell you how many times I've tried to work on stuff, it ends up working, and then I find papers and people saying what we did would never work. Sometimes the ignorance is good.


What field is this? In the physics papers I've worked on, we generally try to state all the assumptions we made when we rule something out, but we do sometimes miss things, and I suppose that in some fields the preparation might be messier.


> then I find papers and people saying what we did would never work.

Seriously? What kind of scientific paper makes such claims?


To be fair, it's mostly people who do this, because publishing negative results is rare. There have definitely been papers saying this, though; plenty of shade gets thrown at basically every new method in its infancy, with papers saying why it won't work (you can find plenty of academic papers dunking on the Human Genome Project and shotgun sequencing, next-gen sequencing, TALENs/CRISPR, gene therapy, immunotherapy, AI, etc.).


Sounds like a Columbus' Egg kind of situation. Your conclusions may be obvious in retrospect, but they weren't obvious at the time that you chose to pursue them and your supervisors gave you their blessing.


> A paper on the subject would almost be considered uninteresting, and a high ranking journal would ignore it (which is why it's considered "unpublishable").

There is a range of journals from high ranking to solid mid-level to lower tiers to somewhat suspicious to downright obviously pay-to-publish. You can always find a level that will publish your article.


The problem in such cases usually isn't finding a willing journal, but constraints on the authors. For example, during my PhD at a leading biological research institute in India, there was an informal ban on sending manuscripts to open access journals, a rule instituted by the Director, whose office had to approve every submission. At some point this ban was extended to conference proceedings and to journals below a certain Impact Factor. These rules might have been overturned by subsequent administrations; I don't know.


What's even worse though is this creates enormous pressure to tweak results such that the findings are publishable.

I know one person who basically couldn't get their PhD because they couldn't reproduce another experiment and after several years of trying is pretty much certain the original results were faked in order to be publishable.


Nature Scientific Reports? A number of people have mixed feelings about the journal, but I think it does encourage people to publish work that is technically correct but not exciting. I remember going through the pain of publishing a boring piece of work that corrected a boring but incorrect study by someone else. It's tedious, but I try to think of it as community service.


You could publish this in a blog (summarized, of course). At least other scientists/researchers would be warned.


That's all well and good, but PhD students have zero incentive to do this, and the blog would likely go completely ignored anyway. Not only does a blog post not help you graduate, but for anyone to care about the results posted on your blog, you have to market it!


Would it be possible for you to upload the paper to researchgate.net ?


Same experience for my master's thesis.


In terms of orbital mechanics, a polar orbit around the moon isn't significantly more complicated than an equatorial one. A small midcourse correction after entering the flight path towards the moon is all that's needed to ensure capture into orbit around the poles (plus a capture burn), without the need for a plane change (note that this is also based on experience from KSP).
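The plane-change cost being avoided can be estimated with the standard impulsive formula, dv = 2·v·sin(theta/2). A rough sketch (the ~1.6 km/s circular speed in low lunar orbit is a ballpark figure, not mission data):

```python
import math

def plane_change_dv(v, theta_deg):
    """Delta-v (m/s) for an impulsive plane change of theta_deg degrees
    at orbital speed v (m/s): dv = 2 * v * sin(theta / 2)."""
    return 2.0 * v * math.sin(math.radians(theta_deg) / 2.0)

v_llo = 1600.0  # approximate circular orbital speed in low lunar orbit, m/s
dv_polar = plane_change_dv(v_llo, 90.0)  # cost of equatorial -> polar after capture
```

A 90° plane change after capture would cost well over 2 km/s of delta-v, which is why aiming for the polar plane with a cheap midcourse correction during the transfer is the sensible approach.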


I see what you mean, though I'm not sure of all the details; you might want to get into an equatorial orbit (there is an orbiter, though I'm not sure about its orbit) just because the error margin is smaller (?), or for some other reason.

(But then of course the moon is neither too far nor too big, so I might just be barking up the wrong tree here.)


As far as I can tell, the lunar orbiter itself is in a polar orbit, so it would make sense that it entered orbit as such. In terms of complexity, the main part of the journey to the moon is the midcourse correction and ensuring that the engines fire at the correct time, in the correct direction, and for a certain duration. Depending on the engines used (i.e. engines that can be fired multiple times), more than one midcourse adjustment may have been possible. It seems that all of these things worked out fine, since the Chandrayaan-2 orbiter has been there in a polar orbit since the 20th of August [0]. The lander has since separated, and the loss of contact a few seconds before the expected landing time implies that the lander reached the surface at a higher velocity than expected.

[0]https://timesofindia.indiatimes.com/city/mangaluru/ISRO-chie...


I concur. It's important to be engaged with what's going on locally and things that will directly impact you and people around you. If everybody ignored these things and just kept their heads down then I'd argue we'd all be worse off. Your own mental state could be improved by ignoring them, but I do believe that we all have some responsibility to try and improve the society around us.

I've found that keeping exposure to media down to every week or two and mostly reading summaries of events after the fact has helped immensely. It filters out all of the noise you get from live reporting. This helps get down to the facts of what's happened, and allows one to keep a certain emotional detachment from the events and focus on the important things.


It's relatively common for people to fly the English flag during important world sporting events, so it might be for the Women's World Cup, which England are tipped to do well in. Outside of important world sporting events, though, the flag is generally flown by English nationalists and white supremacists, at least in my experience.

