Hacker News

Anecdotally, I use Google search much less frequently now and instead opt for GPT4. A number of my colleagues are doing the same.


I often use ChatGPT4 for technical info. It's easier than scrolling through pages when it works. But the accuracy is inconsistent, to put it mildly. Sometimes it gets stuck on a wrong idea.

Interesting how far LLMs can get. It looks like we're close to the scale-up limit: it's technically difficult to build bigger models. The way forward is probably to add assisting sub-modules. Web search is one example, and models already have it. Others would be a database of facts (similar to search), compilers, image analyzers, etc. With this approach the LLM is only responsible for generic decisions and doesn't need to be that big, since there's no need to memorize all the data. Even logic can be partially outsourced to a sub-module.
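The sub-module idea above can be sketched as a simple dispatch table, where the model only picks which tool handles a request and the tools hold the facts and logic. Everything here (the tool names, the keyword-based router standing in for the LLM's decision) is a hypothetical illustration, not any real system's API:

```python
from typing import Callable, Dict

# Registry of assisting sub-modules the model can route requests to.
# Each one is a stand-in: a toy search, a tiny fact database, a calculator.
TOOLS: Dict[str, Callable[[str], str]] = {
    "search": lambda q: f"search results for '{q}'",
    "facts": lambda q: {"capital of france": "Paris"}.get(q.lower(), "unknown"),
    "calc": lambda expr: str(eval(expr, {"__builtins__": {}})),  # toy only
}

def route(query: str) -> str:
    """Toy router: in a real system the LLM itself would make this choice."""
    if any(ch.isdigit() for ch in query) and any(op in query for op in "+-*/"):
        return TOOLS["calc"](query)
    if query.lower().startswith("capital"):
        return TOOLS["facts"](query)
    return TOOLS["search"](query)

print(route("2+3*4"))             # delegated to the calc sub-module
print(route("capital of france"))  # delegated to the fact database
```

The point of the pattern is that the dispatcher stays small and generic while each sub-module can be swapped or scaled independently.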


I expect a 5x improvement before EOY; I think GPT5 will come out.



