Yes, definitely not. But I think this could be a legitimately good use case if it were implementable. Filtering on keywords will throw away papers of interest, while a recommendation algorithm wouldn't give you much control over the content. A language model, though, could probably do a decent job of ranking a day's worth of papers against a short description of your research interests.
I'm in a field where there are 50+ postings a day, but only 5-10% are relevant to my focus. A good filter would save me a lot of tedium.
But OP says it wouldn't be sustainable to implement, so that's that. Maybe I'll try this myself and see how it goes.
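For anyone else who wants to try this themselves, here's a minimal sketch of the ranking loop. Everything in it is an assumption on my part (not from the app being discussed): `score_relevance` is a placeholder you'd swap for a real LLM call, local or hosted, and the prompt shape is just a guess at something workable.

```python
# Sketch: rank a day's arXiv postings against a stated research interest.
# score_relevance is a stand-in for an LLM call; here a crude keyword
# overlap fakes the model's judgment so the harness runs on its own.

def build_prompt(interests: str, title: str, abstract: str) -> str:
    # Hypothetical prompt shape; a real setup would want rubric examples.
    return (
        f"Research interests: {interests}\n"
        f"Paper title: {title}\n"
        f"Abstract: {abstract}\n"
        "On a scale of 0-10, how relevant is this paper? Answer with a number."
    )

def score_relevance(prompt: str) -> float:
    # Placeholder scoring: count words shared between interests and abstract.
    lines = prompt.splitlines()
    interests = set(lines[0].lower().split())
    abstract = set(lines[2].lower().split())
    return float(len(interests & abstract))

def rank_papers(papers: list[dict], interests: str, top_k: int = 5) -> list[dict]:
    # Score every paper, then keep the top_k most relevant.
    scored = [
        (score_relevance(build_prompt(interests, p["title"], p["abstract"])), p)
        for p in papers
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [p for _, p in scored[:top_k]]
```

With 50+ postings a day, even a cheap local model only has to make ~50 short judgments, which is why the per-user cost stays tolerable.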
Exactly. In this case, ML models specialized in papers could be more useful, and models of that kind can run locally, which makes them the best option.
Please don't follow the original comment's suggestion; I feel that "easily digestible" is not compatible with what makes the idea shine in the first place.
Your suggestion to delegate this functionality to a local LLM is a nice choice, but adding it as core functionality is antithetical to leveraging the arXiv part, without which everything reverts to a bland, generic whateverTok format.
Although the suggestion seems aware of this and offers both good reasoning and a decent solution (progressively deepening explanations), the implicit information and nuance lost in a summary from an unreliable LLM would turn this from a useful, interesting idea into a cool party trick no one uses for more than five minutes.
Thanks! I'll reply here when I add the feature. It wouldn't be core, so I think having two modes, one where you read papers with LLM assistance and one without, would be a great help.
Maybe I'll implement a basic recommendation algorithm locally, since right now an LLM implementation wouldn't be sustainable.
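For what it's worth, a basic local recommender can be as simple as cosine similarity between bag-of-words vectors of each abstract and the user's interest description. This is a generic baseline sketch under my own assumptions, not code from the app:

```python
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    # Lowercased bag-of-words; a real version would stem and drop stopwords.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def recommend(papers: list[dict], interests: str, top_k: int = 10) -> list[dict]:
    # Rank papers by similarity of their abstract to the interest description.
    query = vectorize(interests)
    ranked = sorted(
        papers,
        key=lambda p: cosine(vectorize(p["abstract"]), query),
        reverse=True,
    )
    return ranked[:top_k]
```

It won't match an LLM on nuance, but it runs on anything and costs nothing, which fits the sustainability concern.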