Hacker News

> As a technologist I want to solve problems effectively (by bringing about the desired, correct result), efficiently (with minimal waste) and without harm (to people or the environment).

> LLMs-as-AGI fail on all three fronts. The computational profligacy of LLMs-as-AGI is dissatisfying, and the exploitation of data workers and the environment unacceptable.

It's a bit unsatisfying that the last paragraph only argues against the second and third points, while missing an explanation of how LLMs fail at the first goal, as was claimed. As far as I can tell, they are already quite effective and correct at what they do, and will only get better, with no skill ceiling in sight.



Do you find that a 40-60% failure rate fits your definition of correctness? I don't think they really needed to spell this failure out...

https://www.salesforce.com/blog/why-generic-llm-agents-fall-...


They are not “correct” most of the time in my experience. I assumed the author left that “proof” out because it was obvious.




