> The question is whether it’s more important to be able to do things, or more important to have a good sense and a keen eye for what to do at any given moment.
Those aren't mutually exclusive.
"People who do things" can do both, and doing the latter is a function of doing the former, so they tend to do the latter sufficiently well.
"People who prompt things" can only do the latter, and they routinely do it poorly.
> “People who prompt things” can only do the latter, and they routinely do it poorly.
Right, but what I don’t agree with here is the idea that this category of people will never be able to improve into the first category of people. The value of an experienced anything is that they realize there is a big chasm between something that works now and something that will continue to work long into the future.
I don’t agree that doing everything yourself manually is the only thing that can grant you that understanding, because I don’t think that understanding is domain-specific. It evolves naturally as soon as someone realizes that their list of unknown unknowns is FAR larger than their list of known anythings, and that the first step in attempting to solve a problem is to prune that list as far as you can get it while realizing you will never ever be able to reduce it to zero.
You can do that by spending two weeks to build a brick wall by hand, or you can do that by spending two weeks having your magical helpers build ten brick walls that eventually collapse. I don’t think the tools are some sort of fundamental threat to cognition, I think they’re - within this society - a fundamental threat to safety, because the relentless pursuit of profit means even those that realize those ten brick walls should never actually ever be used to hold anything up will find themselves pressured to put a roof on them and hope, pray, they hold.
And this isn’t an LLM-specific thing. The vast diverse space of building codes around the world proves this, and coincidentally, the countries with laxer building codes tend to get a lot more done a lot faster; and they also tend to deal with a big tragic collapse every now and then, which I suppose someone will file away as collateral somewhere.
> I don’t agree that doing everything yourself manually is the only thing that can grant you that understanding, because I don’t think that understanding is domain-specific. It evolves naturally as soon as someone realizes that their list of unknown unknowns is FAR larger than their list of known anythings, and that the first step in attempting to solve a problem is to prune that list as far as you can get it while realizing you will never ever be able to reduce it to zero.
This isn't true: a car mechanic never evolves into an engineer, and a nurse never evolves into a doctor. A car mechanic can learn to do some tasks you'd normally need an engineer for, and the same goes for nurses, but they never build the entire core set of skills that separates engineers from mechanics and doctors from nurses.
There are maybe some exceptions to this, but those exceptions are so rare that they don't matter for this discussion. A few people still learning it properly won't save anything.
> He had been working as a programmer for a significantly longer time than me. Long enough that for many years of his career, he had been programming in assembly.
Please, not this pre-canned BS again!
Comparing abstractions to AI is an apples to oranges comparison. Abstractions are dependable due to being deterministic. When I write a function in C to return the factorial of a number, and then reuse it again and again from Java, I don't need a damn set of test cases in Java to verify that factorial of 5 is 120.
With LLMs, you do. They aren't an abstraction, and seeing this worn out, tired and routinely debunked comparison being presented in every bloody thread is wearing a little thin at this point.
We've seen this argument hundreds of times on this very site. Repeating it doesn't make it true.
> The thing is, agents aren’t going away. So if Bob can do things with agents, he can do things.
"Being able to deliver using AI" wasn't the point of the article. If it was the point, your comment would make sense.
The point of the program referred to in the article is not to deliver results, but to deliver an Alice. Delivering a Bob is a failure of the program.
Whether you think that a Bob+AI delivers the same results is not relevant to the point of the article, because the goal is not to deliver the results, it's to deliver an Alice.
People never cared about delivering Alices; they were an implementation detail. I think the article argues that they're still an important one, but one that isn't produced automatically anymore.
> I am aware of that - I was adding something along the lines of: I don’t think people care if we deliver Alices any more.
That's irrelevant to the goal of the program - they care. Once they stop caring, they'd shut that program down.
Maybe it would be replaced with a new program that has the goal of delivering Bobs+AI, but what would be the point? I mean, the article explained in depth that there is no market for the results currently, so what would be the point of efficiently generating those results?
The market currently does not want the results, so replacing the current program with something that produces Bobs+AI would be for... what, exactly?
There’s no market for the results, but there was a market for Alices, because they were the only people who could produce similar results historically. Now maybe there’s less of a market for Alices. Yes, maybe that means the program disappears.
> It doesn't matter if Bob can be normal. There was no point to him being paid to be on the program.
Yeah, I'm surprised at the number of people who read the article and came away with the conclusion that the program was designed to churn deliverables, and then they conclude that it doesn't matter if Bob can only function with an AI holding his hand, because he can still deliver.
That isn't the output of the program; the output is an Alice. That's the point of the program. They don't want the results generated by Alice, they want the final Alice.
Improving the agent means improving the code base such that the agent can work on it effectively.
It cannot come as a surprise that an agent is better at working on a well-documented code base with a clear architecture.
On the other hand, if you expect an agent to add the right amount of ketchup to your undocumented spaghetti code, then you will continue to have a bad time.
Well, it sounds pretty funny.
What can be said factually is that the Steam Deck is based on an immutable distro, like Bazzite, which is a lot more important than Flatpaks themselves. Flatpaks are just a popular way to get apps onto such a setup. I see a lot of resistance to them in general, not just from specific individuals or communities on the web, but they're actually pretty solid and see lots of investment from other companies. And with SteamOS itself being an immutable distro, Gabe is indeed, at least indirectly, sinking his yacht money into them.
> So then what happens if some people's payment method fails once you do charge?
I expect it's a pre-auth, like car rental companies do. A pre-auth gives you a code from the card issuer and an expiry; the issuer reserves the amount on the cardholder's account, and only performs the transaction to the merchant once the merchant sends a second message with the pre-auth code.
Sure, if it were just a matter of typing. But in practice it means sitting and staring at nothing for minutes, with a "thinking" indicator, until something finally happens.
I mean, my local 122b is only 20t/s, so it can be used for background stuff. But not for anything interactive, IME.
> I mean, my local 122b is only 20t/s, so it can be used for background stuff. But not for anything interactive, IME.
What are you running that local 122b on? I mean, this looks attractive to me at $5/m for unlimited use at 20t/s-25t/s, but if I could buy hardware to get that running locally, I wouldn't mind doing so.