
> This is just not happening anywhere around me.

Don't worry about where AI is today, worry about where it will be in 5-10 years. AI is brand-new, bleeding-edge technology right now, and adoption always takes time, especially when the integration with IDEs and such is even more bleeding edge than the underlying AI systems themselves.

And speaking of the future, I wouldn't just worry about it replacing the programmer, I'd worry about it replacing the program. The future we are heading into might be one where the AI is your OS. If you need an app to do something, you can just make it up on the spot, and a lot of classic programs will no longer need to exist.



> Don't worry about where AI is today, worry about where it will be in 5-10 years.

And where will it be in 5-10 years?

Because right now, the trajectory looks like "right about where it is today, with maybe some better integrations".

Yes, LLMs experienced a period of explosive growth over the past 5-8 years or so. But then they hit diminishing returns, and they hit them hard. Right now, it looks like a veritable plateau.

If we want the next 5-10 years to bring a jump comparable to the last 5-10, we're going to need a new breakthrough. And those don't come on command.


Right about where it is today with better integrations?

One year is the difference between Sonnet 3.5 and Opus 4.5. We're not hitting diminishing returns yet (mostly because of exponential capex scaling, but still). We're already committed to ~3 years of the current trajectory, which means we can expect similar performance boosts year over year.

The key thing to keep in mind is that LLMs are a giant bag of capabilities, and just because we hit diminishing returns on one capability, that says little, if anything, about our ability to keep scaling the others.


You buried the lede with “exponential capex scaling”. How is this technology not like oil extraction?

The bulk of that capex is chips, and those chips are straight up depreciating assets.


The depreciation schedule is debatable (and that's currently a big issue!). We've been depreciating based on the availability of next-generation chips rather than useful life, but I've seen 8-year-old research clusters with low replacement rates. If we stop spending on infra now, that would still give us an engine well into the next decade.
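As a toy illustration of how much the assumed useful life swings the annual numbers (the purchase cost and lifespans here are made-up figures, not actual accelerator economics):

    # Toy straight-line depreciation; the cluster cost is a hypothetical round number.
    cluster_cost = 10_000_000_000  # assume a $10B accelerator buildout

    for useful_life_years in (3, 5, 8):
        annual_expense = cluster_cost / useful_life_years
        print(f"{useful_life_years}-year schedule: ${annual_expense:,.0f} per year")
    # 3-year schedule: ~$3.33B/year vs. 8-year schedule: ~$1.25B/year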


> We're already committed to ~3 years of the current trajectory

How do you mean committed?


Better integrations won't do anything to fix the fact that these tools are, by their mathematical nature, unreliable and always will be.


So are people.


But humans have vastly lower error rates than LLMs, and in a multi-step process those error rates compound. When that happens, you end up with 50/50 odds or worse.
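A rough sketch of that compounding arithmetic (the per-step accuracies below are illustrative assumptions, not measured error rates for any model or person):

    # Illustrative only: assumed per-step success rates, not measured data.
    def chance_all_steps_succeed(per_step_accuracy: float, steps: int) -> float:
        """Probability that every step in an independent multi-step chain succeeds."""
        return per_step_accuracy ** steps

    # A 99%-reliable worker vs. a 95%-reliable one across a 15-step task:
    print(chance_all_steps_succeed(0.99, 15))  # ~0.86
    print(chance_all_steps_succeed(0.95, 15))  # ~0.46 -- roughly the "50/50 or worse" above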


And, more importantly, a given human can, and usually will, learn from their mistakes and do better in a reasonably consistent pattern.

And when humans do make mistakes, they're also in patterns that are fairly predictable and easy for other humans to understand, because we make mistakes due to a few different well-known categories of errors of thought and behavior.

LLMs, meanwhile, make mistakes simply because they happen to have randomly generated incorrect text that time. Or, to look at it another way, they get things right simply because they happen to have randomly generated correct text that time.

Individual humans can be highly reliable. Humans can consciously make tradeoffs between speed and reliability. Individual unreliable humans can become more reliable through time and effort.

None of these are true of LLMs.


It's a trope that people say this, and then someone points out that while the comment was being drafted, another model or product was released that took a substantial step up in problem-solving power.


I use LLMs all day every day. There is no plateau. Every generation of models has resulted in substantial gains in capability. The types of tasks (in both complexity and scope) that I can assign to an LLM with high confidence are frankly absurd, and I could not even have dreamed of it eight months ago.



