> Someone approves a PR they didn’t really read. We’ve all done it (don’t look at me like that). It merges. CI takes 45 minutes, fails on a flaky test, gets re-run, passes on the second attempt (the flaky test is fine, it’s always fine, until it isn’t and you’re debugging production at 2am on a Saturday in your underwear wondering where your life went wrong. Ask me how I know… actually, don’t). The deploy pipeline requires a manual approval from someone who’s in a meeting about meetings. The feature sits in staging for three days because nobody owns the “get it to production” step with any urgency.
This is the company I (soon no longer) work at (anyone hiring?).
The thing is that they don’t even allow the use of AI. I’ve been assured that the vast majority of the code was human-written. I have my doubts but the timeline does check out.
Apart from that, this article uses a lot of words to completely miss the fact that (A) “use agents to generate code” and “optimize your processes” are not mutually exclusive things; (B) sometimes, for some tickets - particularly ones stakeholders like to slide in unrefined a week before the sprint ends - the code IS the bottleneck, and the sooner you can get the hell off of that trivial but code-heavy ticket, the sooner you can get back to spending time on the actual problems; and (C) doing all of this is a good idea completely regardless of whether you use LLMs or not; and anyone who doesn’t do any of it and thinks the solution is to just hire more devs will run into the exact same roadblocks.
That would be a lot easier to believe if this law in question actually, you know, helped society. Or did anything to affect how it runs, let alone “effectively.”
As it stands, it reads more like “I’ve used my free will to decide to suspend all critical thinking and accept that anything that anyone with authority decides should be a rule must be unquestioningly accepted.”
> so that when the subsidies end and subscription costs shoot up
Subscription costs are capped to API rates as their ceiling (and, realistically, way lower than that - why would you even subscribe if you could just go pay-what-you-use instead), and those are already at a big margin for Anthropic. What still costs them a fuckton of money comparatively is training, but that is only going to get more efficient with more purpose-built hardware on the way.
Basically, I don’t see much of a reason to hike subscription prices dramatically. I don’t think they’ll stay at $100/$200, but anyone who’s paying that already knows how much value they’re getting out of it and probably wouldn’t mind paying more.
I'm not sure what you mean - if you max out your subscription, perhaps? If you pay $100 and don't use it, you don't get refunded $100 just because it's 'capped to API rates', which would've been $0.
He means that Anthropic cannot increase the price of the sub because users can just switch to regular API pricing, which consequently puts a ceiling on the cost of the sub.
Nobody would use a $1k sub if using the API pricing would only cost $500 for comparative service.
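The ceiling logic is just arithmetic; a toy sketch with entirely made-up numbers (not real Anthropic or anyone else's rates):

```python
# Toy break-even check: a subscription only makes sense while it undercuts
# pay-as-you-go API pricing for your actual usage. All numbers below are
# hypothetical illustrations, not real rates.

def api_cost(tokens_used: int, api_rate_per_mtok: float) -> float:
    """What the same usage would cost at pay-as-you-go API rates."""
    return api_rate_per_mtok * tokens_used / 1_000_000

def sub_is_worth_it(sub_price: float, tokens_used: int,
                    api_rate_per_mtok: float) -> bool:
    """True if the flat subscription beats API pricing for this usage."""
    return sub_price < api_cost(tokens_used, api_rate_per_mtok)

# A heavy user pushing 40M tokens/month against a hypothetical $15/Mtok rate:
print(sub_is_worth_it(200, 40_000_000, 15))    # $200 sub vs $600 API -> True
# Hike the sub to $1000 and the API becomes the cheaper option:
print(sub_is_worth_it(1000, 40_000_000, 15))   # $1000 sub vs $600 API -> False
```

The ceiling is whatever usage level the sub's heaviest users hit: past that price, they defect to the API.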
For the record, I'm only explaining what he put forward.
I don't agree with the opinion, mainly for two reasons:
1. The API cost can be increased in tandem, hence the ceiling is just as variable.
2. The harness is even more important than the model IME, and Claude Code is getting better every month. Even though the alternatives are getting better too, they're currently significantly worse IME - I'd say at least 3-6 months behind (compounded by the model, ofc).
And as a third point, unrelated to the original argument: there is no way Anthropic is actually treating the sub as a loss leader. It is not cheap. It's only cheap compared to their API pricing, which they can freely set however they want. Compare their pricing to open models like Kimi k2.5 etc. I sincerely doubt Anthropic's model costs more to run than theirs, and they're profitable at 30% of the price Anthropic charges.
> He means that anthropic cannot increase the price of the sub because the users can just switch to the regular API pricing
Not that they cannot increase the price, just that there's a cap on how high they realistically can go. Sure, they can always hike API prices to compensate, but I think people are seriously sleeping on open models these days, because…
> *The harness is even more important then the model ime*, and Claude Code is getting better every month.
…I fully agree with this, and that’s actually the other reason why I don’t think we’ll approach predatory pricing. Right now, the moat is still mostly the model, but as open models improve and become more capable, this is quickly going to shift.
And the truth is that Claude Code just isn’t that great of a harness. Anyone who uses an open-source harness and optimizes it for their personal, individual workflow will quickly realize this. And I’m not even blaming Anthropic or the CC team or calling them incompetent; they are in the unenviable position to have been trailblazers. There weren’t any comparable tools before CC that they could’ve learned from.
The future lies in harnesses that are multi-model, extensible, and have full access to and control over the model’s API, context, and system prompt. Claude Code has none of those things. You can only ever bend it into a shape that approximates your workflow; you can never use it as a tool that natively supports it.
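To make "full access to and control over the model's API, context, and system prompt" concrete, here is a hypothetical sketch of what such a harness surface could look like - every name here is invented for illustration, not any real tool's API:

```python
# Hypothetical harness interface: the user, not the vendor, owns the model
# choice, the system prompt, and the shaping of the context window.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class HarnessConfig:
    model: str                     # swappable per task ("multi-model")
    system_prompt: str             # fully user-controlled, not vendor-fixed
    # Hook to prune/reshape the message history before every model call:
    context_filter: Callable[[list], list] = lambda msgs: msgs
    plugins: list = field(default_factory=list)  # extensibility hook

# A workflow-specific profile, with made-up values:
coding = HarnessConfig(
    model="some-open-model",
    system_prompt="You are a terse reviewer. Cite file paths.",
    context_filter=lambda msgs: msgs[-20:],  # keep only the recent turns
)
```

The design point is that each of these knobs is a first-class, user-facing parameter rather than something you have to approximate with workarounds.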
Oh, on that we can agree! I was using opencode for the last few months; the main reason I went back to CC was Opus, plus preferring the sub over regular API pricing, as I'm not using it professionally, only as a hobby. (At work I'm constrained to Copilot, which is fine at this point - not great, but definitely improving, esp. when run as a CLI.)
I am still hoping for a local-first model approach with voice commands to generate the main prompt which kicks off plan mode.
Like interactively going through the project - pointing at files in the UI (and possibly the browser) with the mouse and explaining things while "talking" to a dumber but super-quick model that acts as a questioner - then wrapping it all up over the wire, at higher latency, with the highly capable models.
I suspect that approach is still a few months to years away from viability for latency reasons, but I'm definitely looking forward to that UX
Right now, a huge amount of investment pays for training. That investment expects returns; to both turn a profit and continue training, rates must be much, much higher.
The point is that if the harness’ workflow gives contradictory and confusing instructions to the model, it’s a harness issue, not necessarily a model issue.
First it was a model issue, then it was a prompting issue, then it was a context issue, then it was an agent issue, now it's a harness issue. AI advocates keep accusing AI skeptics of moving goalposts. But it seems like every 3-6 months another goalpost is added.
Your comment doesn’t make as strong of a point as you think it does; it might make the opposite point.
Because, yes, first, it was a model issue, and then more advanced models started appearing and prompting them correctly became more important. Then models learned through RLHF to deal with vague prompting better, and context management became more important. Then models became better (though not great) at inherent context recollection and attention distribution, so now, you need to be careful what instructions a model receives and at what points because it’s literally better at following them. It’s not so much that the goalposts are being moved, it’s that they’re literally being, like, *cleared*.
This isn’t a tech that’s already fully explored and we just need to make it good now, it’s effectively an entirely new field of computing. When ChatGPT came out years ago no one would have DREAMT of an LLM ever autonomously using CLI tools to write entire projects worth of code off of a single text prompt. We’d only just figured out how to turn them into proper chatbots. The point is that we have no idea where the ceiling is right now, so demanding well-defined goalposts is like saying we need to have a full geological map of Mars before we can set foot on it, when part of the point of going to Mars is to find out about that.
As a side point, the agent is the harness; or, rather, an agent is a model called on a loop, and the harness is where that loop lives (and where it can be influenced/stopped). So what I can say about most - not all, but most, including you, seemingly - AI skeptics is that they tend to not actually be particularly up-to-date and/or engaged with how these systems actually work and how capable they actually are at this point. Which is not supposed to be a dig or shade, because I’m pretty sure we’ve never had any tech move this fast before. But the general public is so woefully underinformed about this. I’ve recently had someone tell me in awe about how ChatGPT was able to read their handwritten note and solve a few math equations.
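The "agent is a model called on a loop" point can be sketched in a few lines - this is an illustrative skeleton, with all function names invented, not any particular product's internals:

```python
# Minimal agent loop: the *harness* is this loop plus everything around it
# (tool dispatch, stop conditions, context assembly). The *model* is just
# the stateless call inside it. All names here are hypothetical stand-ins.

def run_agent(task: str, call_model, run_tool, max_steps: int = 10) -> str:
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):              # the loop lives in the harness
        reply = call_model(history)         # model: one call in, one reply out
        history.append({"role": "assistant", "content": reply["content"]})
        if reply.get("tool") is None:       # no tool requested -> we're done
            return reply["content"]
        result = run_tool(reply["tool"], reply.get("args", {}))
        history.append({"role": "tool", "content": result})
    return "step budget exhausted"
```

Everything skeptics and advocates argue about - which tools exist, when the loop stops, what goes into `history` - is a harness decision, made outside the model.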
> the headline deliberately tries to blow this up into a big deal
I do not understand how “company that runs half the internet has had major recent outages and now explicitly names lax/non-existent LLM usage guidelines as a major reason” can possibly not be a big deal in the midst of an industry-wide hype wave over how the world’s biggest companies now run agent teams shipping 150 pull requests an hour.
The chain of events is “AWS has been having a pretty awful time as far as outages go”, and now “result of an operational meeting is that the company will cut down on the use of autonomous AI.” You don’t need CoT-level reasoning to come to the natural conclusion here.
If we could, as a species, collectively, stop measuring the relevance of a piece of news proportionally by how much we like hearing it, please?
And too many people have their egos tied to its failure, too.
I'm a massive AI skeptic. If anyone were to be jumping up and down on the corpse of AI and this incessant drive to use it everywhere, it'd be me. But I also work at Amazon. I got the email. I attended the meeting. I can personally attest that there are no new requirements for AI-generated code. The articles about this meeting are extremely misleading, if not outright wrong. But instead of believing the person who was actually there in the room, this thread is full of people dismissing my first-hand account of the situation because it doesn't align with the "haha AI failed" viewpoint.
Not just their egos, but their paychecks. This place is either going to get very quiet or really weird when the hype train derails and the AI bubble bursts.
The subject of the media coverage is not AWS, it is a peer organization to AWS that runs using significant amounts of non-AWS infrastructure. They are both part of an umbrella called Amazon but are not at all the same thing.
It's hard to take this objection seriously. The publication is literally called the Financial Times. It's not exactly crazy for them to think that their readers might care about the entity that shows up on the stock ticker rather than how the company happens to divide things up internally.
Even if it weren't a finance publication, I have trouble imagining you making this argument if a headline said something like "Google deals with outages in the cloud" because of the idea that it's misleading to refer to it as anything other than GCP. I think you're fundamentally not understanding how people communicate about this sort of thing if you actually think that someone saying "Amazon" is misleading in any meaningful way.
You’re describing reasonable misunderstandings, but they are still misunderstandings.
The cause and effect statements just don’t correspond to reality.
I guess I’m stuck on the idea that the actual facts are relevant. If the question instead is how the dance of optics and PR is going in the minds of people who don’t know enough to doubt what they read, I don’t know what to say about that.
The message and meeting being discussed here have nothing to do with AWS or any outages AWS has faced recently. I think you’re missing the point of the discussion.
I don’t blame you, because this is just bad reporting (and potentially intentionally malicious, to make you think it’s about AWS). But the meeting and discussion were with the Amazon retail teams, talking about Amazon retail processes and Amazon retail services. The teams and processes that handle this are entirely separate from any AWS outages you are thinking of.
The outages that Amazon retail has faced also have nothing to do with AI, and there was no “explicit call out” about AI causing anything.
> while taking the joyful bits of software development away from you
Quick question: by "joyful bits of software development," do you mean the bit where you design robust architectures, services, and their communication/data concepts to solve specific problems, or the part where you have to assault a keyboard for extended periods of time _after_ all that interesting work so that it all actually does anything?
Because I sure know which of these has been "taken from me," and it's certainly not the joyful one.
I guess I enjoy solving problems, and recognize that the devil is always in the details, so I don't get much satisfaction until I see the whole stack working in concert. I never had much esteem for "architects" who sketch some blobs on the whiteboard and then disappear. I certainly wouldn't want to be "that guy" for anyone else, and I'm not even sure I could do it to an LLM.
It’s perplexing; it’s like the majority of people who insist that using AI coding assistance is guaranteed to rob you of application understanding and business context aren’t considering that not every prompt has to be an instruction to write code. You can, like, ask the agent questions. “What auth stack is in use? Where does the event bus live? Does the project follow SoC or are we dealing with pasta here? Can you trace these call chains and let me know where they’re initiated?”
If anything, I know more about the code I work on than ever before, and at a fraction of the effort, lol.
The project managers and CEOs who are vibe-coding apps on the weekend don't know what an "auth stack" is, much less that they should consider which auth stack is in use. Then when it breaks, they hand their vibe-coded black box to their engineers and say "fix this, no mistakes"
> But the pure output of a generative model cannot be copyrighted, regardless of how complex the prompt is
If that’s how the court interpreted it, then the software industry is hosed, since that’d mean none of the generated code running in production right now is under any sort of copyright or otherwise protection, lol.
I doubt that much software is entirely AI-generated with no human review or testing. It’s probably more like integrating some public-domain snippets you found online into your code (which doesn’t invalidate copyright on the rest of it, or on the way it’s put together), or having some files auto-generated by a script (like a C header containing a lookup table for a simple mathematical function - the table itself may not be copyrightable, but the software as a whole still is).
If a deterministic machine transformation from a copyrightable prompt results in an uncopyrightable image, what do you think a compiler is doing to source code?
AI is specifically not deterministic from the end user's perspective: providers throw randomness into it, hence why the exact same prompt won't produce the same exact result.
A compiler, on the other hand, is generally pretty deterministic. The non-determinism that we see in output is usually non-determinism (such as generated dates) in the code that it consumes.
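The randomness point can be made concrete with a toy next-token sampler - softmax sampling with a temperature, not any real model's API:

```python
import math
import random

# Toy next-token sampler: identical input ("logits"), different outputs,
# unless you pin the RNG seed. Real LLM APIs behave analogously whenever
# temperature > 0; a compiler has no RNG in this position at all.
def sample_token(logits: dict, temperature: float, rng: random.Random) -> str:
    weights = {t: math.exp(l / temperature) for t, l in logits.items()}
    total = sum(weights.values())
    r = rng.random() * total
    for token, w in weights.items():
        r -= w
        if r <= 0:
            return token
    return token  # fallback for floating-point edge cases

logits = {"foo": 2.0, "bar": 1.9, "baz": 1.5}

# Unseeded runs can disagree on identical input:
print(sample_token(logits, 1.0, random.Random()))
# Seeded runs are reproducible, compiler-style:
a = sample_token(logits, 1.0, random.Random(42))
b = sample_token(logits, 1.0, random.Random(42))
print(a == b)  # True
```

Lowering the temperature squeezes the distribution toward the highest-logit token, which is why low-temperature output feels more deterministic without actually being guaranteed to be.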
Because they are just translating code (which everyone agrees is copyrightable) in a deterministic manner into another medium.
I'm not saying AI art should or shouldn't be copyrightable. One can argue the inputs into the AI generator are copyrightable, but if the output isn't a deterministic translation of the input, it's a different argument.
The original argument was that AI works wouldn't be copyrightable because they are deterministic, i.e. are just an algorithmic transformation lacking in creativity.
It sounds like they might be under the impression that having any AI-generated output in the code even if parts are human authored would invalidate the copyright, which isn’t true
>If that’s how the court interpreted it, then the software industry is hosed, since that’d mean none of the generated code running in production right now is under any sort of copyright or otherwise protection, lol.
I'm not sure this is really true, since copyright applies to distribution.
If you have a substantial amount of backend code (as with most SaaS projects) you're never actually distributing the code, and copyright is never at play. Computer generated artifacts are already in this boat and are protected by virtue of being trade secrets not by copyright.
This could maybe be true of shipping JavaScript to the browser, which presumably is not going to qualify as a trade secret, but I don't think that's where most companies derive value.
The idea that copyright applies solely to distribution is a popular myth, but it has no support in the actual copyright law. The core exclusive rights in copyright are (in the US, 17 USC § 106):
---
(1) to reproduce the copyrighted work in copies or phonorecords;
(2) to prepare derivative works based upon the copyrighted work;
(3) to distribute copies or phonorecords of the copyrighted work to the public by sale or other transfer of ownership, or by rental, lease, or lending;
(4) in the case of literary, musical, dramatic, and choreographic works, pantomimes, and motion pictures and other audiovisual works, to perform the copyrighted work publicly;
(5) in the case of literary, musical, dramatic, and choreographic works, pantomimes, and pictorial, graphic, or sculptural works, including the individual images of a motion picture or other audiovisual work, to display the copyrighted work publicly; and
(6) in the case of sound recordings, to perform the copyrighted work publicly by means of a digital audio transmission.
---
OTOH, distributing copies created in violation of copyright is a good way to cause legally-cognizable harms to the copyright holder that will increase the potential damage award when you are found liable for copyright infringement, and it also makes it much more likely that someone will notice the infringement in the first place. But it's not where the law, on its own terms, begins to apply.
Doing any of those without permission (unless it falls into one of the exceptions to copyright protection, like fair use) is a violation of copyright.
The idea of copyright is to prohibit unauthorized use and reproduction, but none of this actually happens with a proprietary software SaaS backend. You don't actually give anybody the code - they connect to the service.
Access to the service is already governed by computer access laws, which don't depend on copyright. And if you never intentionally distributed your code outside of your org, you can call it a trade secret and nobody else has any legitimate right to access it - whether or not it is copyrightable.
There are other things that aren't copyrightable that are trade secrets already. This would be true of any kind of automated data collection for example. You couldn't copyright it but you can call it a trade secret.
And for any of that stuff, if you want to share it and limit distribution, you just have whoever wants access explicitly agree to be bound by contract law.
>The idea of copyright is to prohibit unauthorized use and reproduction, but none of this actually happens with a proprietary software SaaS backend. You don't actually give anybody the code - they connect to the service.
The point isn't that you have to give it to people, but okay?
>Access to the service is already governed by computer access laws, which don't depend on copyright
Yeah, copyright doesn't control everything, and?
>There are other things that aren't copyrightable that are trade secrets already. This would be true of any kind of automated data collection for example. You couldn't copyright it but you can call it a trade secret.
Okay?
>And for any of that stuff, if you want to share it and limit distribution, you just have whoever wants access explicitly agree to be bound by contract law.
Your point being? You're just rambling assumptions about copyright and other things, which don't even track the actual law.
> Your point being? You're just rambling assumptions about copyright and other things, which don't even track the actual law.
I'm replying to the post that claimed:
> If that’s how the court interpreted it, then the software industry is hosed, since that’d mean none of the generated code running in production right now is under any sort of copyright or otherwise protection, lol.
There is in fact "otherwise protection" for the software industry by... not distributing the code. They don't need copyright over the generated code if they vibe code a SaaS backend. Whether there's copyright or not is irrelevant for the business model.
Copyright is the strongest legal protection available. It does not have a state of mind element. Breach of contract is much more complicated and context-dependent.
>There is in fact "otherwise protection" for the software industry by... not distributing the code.
Copyright protects against reverse engineering in some circumstances, for example.
>Whether there's copyright or not is irrelevant for the business model.
Yeah, I'm going to continue to disagree with you as I'm actually a litigator.
> Yeah, I'm going to continue to disagree with you as I'm actually a litigator.
OK, can you explain to me why this is a disaster for a vibe-coded SaaS? Why are computer access and/or contract laws insufficient and why would a vibe-coded backend be a huge risk?
I really don't understand where copyright on the code itself is necessary to protect these business models, and hopefully you can help fill the gaps.
I didn't say it would be a huge risk, I just disagree that any of those features of the law cover what copyright does. They don't. If a trade secret is ever revealed, all protection is lost. Breach of contract is very complex compared to an infringement claim and would have to be negotiated. As a customer, why would I want to indemnify a software supplier? If there's no indemnity, it's not going to get anyone very far. CFAA basically requires that something get hacked so it's not going to cover the vast majority of scenarios...
>I really don't understand where copyright on the code itself is necessary to protect these business models, and hopefully you can help fill the gaps.
Well, did you ever try to understand? It's so exhausting coming to these threads when people are just making assumptions about how the law works without any regards to what actually happens, and then suggesting policy changes in response.
Here's a scenario: a disgruntled ex-employee leaks the code. Now it's free for anyone to use, because there is nothing you can do to stop them - you have no rights in the code once the trade secret is broken. You can sue the employee, but they are probably judgment-proof, won't have a lot of money anyway, and suing them will still not stop a competitor from spinning up the same exact thing.
Trade secret was your suggestion, by the way... So, do you actually know how trade secrets work, or are you just making things up?