jiggawatts · 5 months ago | on: GPT-5
Wow, the 2.5 Pro summary is *far* better; it reads like coherent English instead of a list of bullet points.
mustaphah · 5 months ago
Someone should start a Gemini-powered blog that distills the top HN posts into concise summaries.
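A minimal sketch of what that bot could look like, using the public HN Firebase API plus Google's google-generativeai Python SDK; the model name and the summary prompt are assumptions, not anything specified in the thread:

    # Sketch: summarize today's top HN stories with Gemini.
    # Assumes `pip install requests google-generativeai` and a GEMINI_API_KEY
    # env var; "gemini-1.5-pro" is an assumed model name, swap in a current one.
    import os
    import requests
    import google.generativeai as genai

    HN = "https://hacker-news.firebaseio.com/v0"

    genai.configure(api_key=os.environ["GEMINI_API_KEY"])
    model = genai.GenerativeModel("gemini-1.5-pro")

    # Top 5 story IDs from the official HN Firebase API.
    top_ids = requests.get(f"{HN}/topstories.json", timeout=10).json()[:5]

    for story_id in top_ids:
        story = requests.get(f"{HN}/item/{story_id}.json", timeout=10).json()
        prompt = (
            "Summarize this Hacker News story in 2-3 sentences of plain, "
            f"coherent English (no bullet points):\n\n{story.get('title')}\n"
            f"{story.get('url', '')}"
        )
        summary = model.generate_content(prompt).text
        print(f"{story['title']}\n{summary}\n")

A real version would also pull the comment tree for each story before summarizing, which is where the context-length concern below comes in.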
primaprashant · 5 months ago
Yes, agreed. Context length might be a factor, as the total number of prompt tokens is >120k, and LLM performance generally degrades at longer context lengths.
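A quick way to sanity-check a prompt against that threshold is to count tokens before sending. A rough sketch using OpenAI's tiktoken; tokenizers differ per model (Gemini tokenizes differently), so treat the count as an approximation, and the input filename here is hypothetical:

    # Sketch: estimate prompt size before sending it to a model.
    # `pip install tiktoken`; cl100k_base is an OpenAI tokenizer, so this is
    # only an approximation for other models such as Gemini.
    import tiktoken

    CONTEXT_BUDGET = 120_000  # the rough threshold mentioned above

    def estimate_tokens(text: str) -> int:
        enc = tiktoken.get_encoding("cl100k_base")
        return len(enc.encode(text))

    prompt = open("hn_thread_dump.txt").read()  # hypothetical input file
    n = estimate_tokens(prompt)
    print(f"~{n} tokens ({n / CONTEXT_BUDGET:.0%} of a 120k budget)")
    if n > CONTEXT_BUDGET:
        print("Prompt exceeds the budget; expect degraded long-context quality.")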