Hacker News

Wow, the 2.5 Pro summary is far better; it reads like coherent English instead of a list of bullet points.


Someone should start a Gemini-powered blog that distills the top HN posts into concise summaries.


Yes, agreed. Context length might be a factor, as the total number of prompt tokens is >120k. LLM performance generally degrades at longer context lengths.
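To illustrate the point about long prompts, here is a minimal sketch of checking whether a prompt likely exceeds a token budget before sending it. The ~4 characters-per-token heuristic and the 120k limit are assumptions for illustration; a real tokenizer (e.g. the model provider's own) gives exact counts.

```python
# Rough pre-flight token check for a long LLM prompt.
# Assumption: ~4 characters per token, a common rule of thumb for
# English text; actual tokenizers vary by model.

def estimate_tokens(text: str) -> int:
    """Return a rough token estimate (~4 chars per token)."""
    return max(1, len(text) // 4)

def exceeds_context(text: str, limit: int = 120_000) -> bool:
    """True if the estimated token count exceeds the assumed limit."""
    return estimate_tokens(text) > limit

prompt = "word " * 1000  # 5,000 characters
print(estimate_tokens(prompt))   # 1250
print(exceeds_context(prompt))   # False
```

This kind of check is only a coarse guard; degradation at high context length happens well before the hard limit, so summarization pipelines often chunk or truncate input instead of relying on the full window.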



