Hacker News | Miguel07Code's comments

Thanks! I don't track anything; it just shows sorted papers from the current date.

Maybe I'll implement a basic recommendation algorithm locally, because right now, an LLM implementation wouldn't be sustainable.


> an LLM implementation wouldn't be sustainable

Also not the right tool for the job. Not everything has to be an LLM.


Yes, definitely not. But I think this could be a legitimately good use case if it were implementable. Filtering on keywords will throw away papers of interest, while a recommendation algorithm wouldn't give you much control over content. But a language model could probably do a decent job of ranking a day's worth of papers by relevance to a short description of your research interests.

I'm in a field where there are 50+ postings a day, but only 5-10% are relevant to my focus. A good filter would save me a lot of tedium.

But OP says it wouldn't be sustainable to implement, so that's that. Maybe I'll try this myself and see how it goes.
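For what it's worth, a rough version of that ranking doesn't even need an LLM to prototype: plain TF-IDF cosine similarity between an "interests" blurb and each abstract already gives a crude relevance ordering. A minimal stdlib-only sketch (all function names are mine, purely hypothetical, nothing to do with the site):

```python
import math
import re
from collections import Counter

def tokenize(text):
    # Lowercase and keep alphabetic runs only; good enough for a sketch.
    return re.findall(r"[a-z]+", text.lower())

def rank_by_interests(interests, abstracts):
    """Return abstract indices sorted by TF-IDF cosine similarity
    to the interests blurb, most relevant first."""
    docs = [tokenize(a) for a in abstracts]
    n = len(docs)

    # Document frequency of each term across the day's abstracts.
    df = Counter()
    for d in docs:
        df.update(set(d))

    def vec(tokens):
        tf = Counter(tokens)
        # Sublinear TF, smoothed IDF; terms in every doc score ~0.
        return {t: (1 + math.log(c)) * math.log((1 + n) / (1 + df[t]))
                for t, c in tf.items()}

    def cosine(a, b):
        num = sum(w * b[t] for t, w in a.items() if t in b)
        den = (math.sqrt(sum(w * w for w in a.values()))
               * math.sqrt(sum(w * w for w in b.values())))
        return num / den if den else 0.0

    query = vec(tokenize(interests))
    scored = [(cosine(query, vec(d)), i) for i, d in enumerate(docs)]
    return [i for _, i in sorted(scored, reverse=True)]
```

It won't catch papers that are relevant without sharing vocabulary, which is exactly where a language model (or at least an embedding model) would earn its keep; but as a baseline filter it's free and runs anywhere.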


Exactly. I think that in this case, ML models specialized in papers could be more useful, and that kind of model can run locally, so it's the best option.


I'm rolling out a new version now; it'll be live in 2 minutes.


I'll try to improve that in a moment.


Added! Try it out.


Yeah, I'll add it quickly.


I'll add it as a source too.


Just imagining it, it sounds like a lot of fun haha.


That has potential...


Wow, I didn't know that... it's really ironic.


Mm, maybe running a local LLM for this would be a good idea. I'll try that, and if it doesn't work well, I'll consider using an API.


Please don't follow the original comment's suggestion; I feel that "easily digestible" is not compatible with what makes the idea shine in the first place. Delegating that functionality to a local LLM is a nice choice, but adding it as core functionality is antithetical to leveraging the arXiv part, without which everything reverts to a bland, generic whateverTok format.

Although the suggestion seems aware of this and offers both good reasoning and a quite good solution (progressively deepening explanations), the implicit information and nuance lost in a summary from an unreliable LLM would turn this from a useful, interesting idea into a cool party trick no one uses for more than 5 minutes.


Yes, I strongly agree with this. I also want to read the original abstracts.


Thanks for the feedback. I think it would work as two modes with a toggle: unhinged summaries from local LLMs, or the real abstracts.


Cool :) I am a scientist, so having an easier way to parse the abstracts would be most welcome. Keep up the good work.


Thanks! I'll reply here when I add the feature. It wouldn't be core, so I think having two modes, where you can read papers either with LLM summaries or as-is, will be a great help.

