Hacker News

Yeah, this concerns me. If someone believes they can upper-bound LLM capabilities, the onus is on them to explain where and why scaling laws break down. Regardless, it seems likely we'll get to AGI relatively soon (say, within a century), whether via transformers and LLMs or not.
