Hacker News | kranuck's comments

> Because that's what many AI experts are also talking about from a variety of angles.

Wow, in that case I'm convinced. Such an unbiased group with nothing at all to gain from massive AI hype.


If you ignore the part where their proofs are meandering drivel, sure.


Even if you don't ignore this part they (e.g. o1-preview) are still better at proofs than the average human. Substantially better even.


That's no guarantee that AI will continue advancing at the same pace, and no one has been arguing that overall technological progress will slow.

Refining technology is easier than the original breakthrough, but it doesn't usually lead to a great leap forward.

LLMs were the result of breakthroughs, but refining them isn't guaranteed to lead to AGI. It's not guaranteed (or likely) to improve at an exponential rate.


It's gonna be massive because companies love to replace humans at any opportunity and they don't care at all about quality in a lot of places.

For example, why hire any call center workers? They already outsourced the jobs to the lowest bidder and their customers absolutely hate it. Fire those people and get some AI in there so it can provide shitty service for even cheaper.

In other words, it will just make things a bit worse for everyone but those at the very top. The usual shit.


Really? Because I remember an endless stream of people pointing out problems with blockchain and crypto and being constantly assured that those problems were being worked on, that they would be solved, and that crypto was inevitable.

For example, transaction costs/latency/throughput.

I realize the conversation is about blockchain, but I say my point still stands.

With blockchain the main problem was always "why do I need this?" and that's why it died without being the world changing zero trust amazing technology we were promised and constantly told we need.

With LLMs the problem is they don't actually know anything.


> have no idea whether the LLM understood what I’m asking

That's easy. The answer is it doesn't. It has no understanding of anything it does.

> if it’s able to do it

This is the hard part.


That problem feels somewhat fundamental to claiming that these things have any ability to reason at all.


> For example, in the future we may wish to monitor the chain of thought for signs of manipulating the user

I'm sick of these clowns couching everything in "look how amazing and powerful and dangerous our AI is."

This is their excuse for why they hid a bunch of model output they still charge you for.


> 11. This is all low-level pedantry

Yeah pretty much.

Maybe don't write contradictory, unexplained nonsense.


Yeah, I stopped at 5 and 6 and will never give the slightest care what this person has to say ever again.

