
> If a scientist uses an LLM to write a paper with fabricated citations - that’s a crappy scientist.

Really? Regardless of whether it's a good paper?

Citations are a key part of the paper. If the paper isn’t supported by the citations, it’s not a good paper.

Have you ever followed citations before? In my experience, they often don't support what is being cited; some say the opposite or aren't related at all. It's probably only 60%-ish that actually cite something relevant.

I follow them a lot. I’ve also had cases where they don’t support the paper.

This doesn’t make it okay. Bad human writer and reviewer practices are also bad.


Well yes, but just because that’s bad doesn’t mean this isn’t far worse.

How is it a good paper if the info in it can't be trusted lmao

Whether the information in the paper can be trusted is an entirely separate concern.

Old Chinese mathematics texts are difficult to date because they often purport to be older than they are. But the contents are unaffected by this. There is a history-of-math problem, but there's no math problem.


You are totally correct that hallucinated citations do not invalidate the paper. The paper sans citations might be great too (I mean the LLM could generate great stuff, it's possible).

But the author(s) of the paper are almost by definition bad scientists (or whatever field they're in). When a researcher submits a paper for publication, even if they're not expected to write the whole thing themselves, they should at least be responsible for checking the accuracy of the contents, and citations are part of the paper...


The problem is that most ML papers today are not independently verifiable proofs - in most, you have to trust that the scientist didn't fraudulently produce their results.

There is so much BS being submitted to conferences; decreasing the amount of BS reviewers see would result in less skimpy reviews and also less apathy.


Not really true nowadays. Stuff in whitepapers needs to be verifiable, which is kinda difficult with hallucinations.

Whether the students directly used LLMs, or just read and cited online content that was produced with them, shows how difficult these tools have made gathering verifiable information.


> Stuff in whitepapers needs to be verifiable which is kinda difficult with hallucinations.

That's... gibberish.

Anything you can do to verify a paper, you can do to verify the same paper with all citations scrubbed.

Whether the citations support the paper, or whether they exist at all, just doesn't have anything to do with what the paper says.


I don't think you know how whitepapers work, then.


