Yep, that's why my forks of all their libraries with bugs fixed, such as https://github.com/pmarreck/zigimg/commit/52c4b9a557d38fe1e1..., will never go back to upstream, just because an LLM did it. Lame, but oh well; their loss. It's also counterproductive: anyone who wants fixes like these will have to find a fork like mine that carries them, which is an increased maintenance burden.
The PR doesn't disclose that "an LLM did it", so maybe the project allowed a violation of their policy by mistake. I guess they could revert the commit if they happen to see the submitter's HN comment.
Dunno, but a commenter already noted that some projects have begun to say "no LLM-generated PRs, but we'll accept your prompt," and another person answered that they had seen that too.
Hugely unpopular opinion on HN, but I'd rather use code that is flawed while written by a human than code that has been generated by an LLM, even if it fixes bugs.
I'd gladly take a bug report, sure, but then I'd fix the issues myself. I'd never allow LLM code to be merged.
Because human errors are, well, human. And producing code that contains those errors is a human endeavor. It is built on years, decades of learning. Mistakes were made, experience was gained, skills were improved. Reasoning by humans is relatable.
Generating slop using LLMs takes seconds, has no human element, no work goes into it. Mistakes made by an LLM are excused without sincerity, without real learning, without consequence. I hate everything about that.
This is nonsense. There's plenty of work that goes into it. In fact, if no human work goes into it, then it is unlikely to pass human muster/judgment. It is just a tool for accelerated work, like every technological advance before it. But hey, you can continue banging away at your loom making bespoke textiles; no one's gonna stop you.
For the parent, there's intangible value in knowing that code was written by a human. From what I read in your comment, you see code more as a means to an end. I think I understand where the parent is coming from. Writing code myself, and accomplishing what I set out to build, sometimes feels like a form of art, and knowing that I built it gives me a sense of accomplishment. And gives me energy. Writing code solely as a means to an end, or letting it be generated by some model, doesn't give that same energy.
This thinking has nothing to do with not caring about being a good teammate or the business. I've no idea why you put that on the same pile.
Sure, but back in reality no you’re not? No more than any other contributor?
If I want to use an auto-complete then I can, and I will? Restricting that is as regressive as a project trying to specify that I write code from a specific country or… standing on my head.
Sure, if they want me to add a “I’m writing this standing on my head” message in the PR then I will… but I’m not.
No, you can't. See, that's where you are just wrong: when you don't respect the boundaries an open source project sets that you want to contribute to then you are a net negative.
Restricting this is their right, and it is not for you to attempt to overrule that right. Besides the fact that you do not oversee the consequences it also makes you an asshole.
They're not asking for you to write standing on your head, they are asking for you to author your contributions yourself.
They are asking me to author my contributions in a way that they approve of. The essence of the request is the same as asking someone to author them whilst standing on their head.
Except they don’t, won’t and can’t control that: the very request is insulting.
I’ll make a change any way I choose, upright, sideways, using AI. My choice. Not theirs.
Their choice is to accept it or reject it based purely on the change itself, because that’s all there is.
If you’re going to lie and say there was no LLM involved, what else are you going to lie about? Copying code from another codebase with incompatible license terms, perhaps?
I would say people should be wary of any contributions whatsoever from a filthy fucking liar.
Nothing? Everything? Does it fucking matter? Assigning trust across a boundary like this is stupid, and that’s my point.
Oh, would you just accept my blatantly, verbatim copied-from-another-codebase-and-relicensed PR just because I said “I solemnly swear this is not blatantly, verbatim copied from another codebase and relicensed”?
That’s on you for stupidly assigning any trust to the author of the change. It’s the internet: nobody knows you’re a dog.
> Oh, would you just accept my blatantly, verbatim copied-from-another-codebase-and-relicensed PR just because I said “I solemnly swear this is not blatantly, verbatim copied from another codebase and relicensed”?
At that point you've proven intention, meaning you'll get the chance to argue your viewpoint in front of a judge.
Many major projects now require a signed DCO with a real name. That can be a nickname if you have a reasonable online presence under that name, but generally it has to identify you as an individual.
So you wouldn't sign it as "xXImADogOnTheInternet86Xx", but as "Tom Forbes (orf)".
And even if there won't be direct legal consequences, it'd certainly affect your ability to contribute to this or other projects in the future.
I'm really struggling to understand why you would burn down a decade+ old reputation over this particular issue. Is this really the hill you wanted to die on?
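For readers who haven't seen one: a DCO sign-off is just a commit trailer that git appends for you via the `-s` flag. A quick throwaway-repo sketch, reusing the example name above (the email address is made up for illustration):

```shell
# Demo in a throwaway repo; name/email are the example from above, not real settings
cd "$(mktemp -d)" && git init -q .
git config user.name  "Tom Forbes (orf)"
git config user.email "tom@example.com"

echo 'fix' > parser.c && git add parser.c
# -s appends the Signed-off-by trailer that certifies the DCO
git commit -q -s -m "Fix off-by-one in parser"

git log -1 --format=%B | grep Signed-off-by
# Signed-off-by: Tom Forbes (orf) <tom@example.com>
```

Projects that enforce the DCO typically run a bot that rejects any PR whose commits lack this trailer.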
It’s an abstract argument with one pretty clear point that you can’t seem to grasp: people lie, on the internet, all the time. Any system, policy or discussion that pretends this isn’t the case is worthless.
This is not an abstract argument, you are showing a willingness to do the wrong thing in spite of being told not to, repeatedly, by many other participants here. I see only two things here:
(1) you would lie
(2) you fundamentally don't understand the concept of consent
> "I’ll make a change any way I choose, upright, sideways, using AI. My choice. Not theirs."
The fact that other people would lie is beside the point: those other people would get the exact same treatment if found out. Whether or not they would be found out is moot; it is the act of lying and ignoring consent that makes this what it is: asshole behavior. By extension, anybody who practices this behavior is an asshole as well, and by extension of that, tying your own rep to people who would behave like that makes you an asshole too, and I highly doubt that was your intention.
So now you've, over endless comments, shown that you fundamentally don't get this very important concept. Yes, people lie. But there are mechanisms for dealing with liars. Misrepresentation and fraud are serious things: lawsuits, fines, and in extreme cases jail, but on a more immediate level, ostracism. It makes you as a person into an undesirable. It also makes the world as a whole a worse place to live in, which is why such behavior is strongly discouraged, even if it is possible.
That's why we don't structurally go around clubbing old ladies over the head as a revenue model, not because we can't do it or because it would be acted upon by the law (that's for the few who don't get it) but because it is simply a bad thing to do. It is a matter of ethics. That's why if an open source project has a 'No AI' policy you either abide by the policy or you can expect massive backlash.
To think that you could do this and even should do this to make the point is as stupid as walking out and grabbing some old lady's hand bag to prove that it can be done: you are hurting an innocent to prove your point and it will cause a reaction that is at a minimum proportional to what you did and worst case you will be made an example of. This can be the proverbial career ending move. If you are Elon level rich and your inner asshole seeks a way out then yes, you could probably do it. But for normal folks such behavior is highly discouraged. Actions usually have consequences.
Finally: open source is a massive gift to society. The whole reason you can use AI in the first place is because that gift got abused in a way that open source contributors did not anticipate. If you're going around to pollute open source with AI contributions to effectively karma farm you have to wonder why you are so intent on doing that. Is it your purpose to destroy open source? Or is it just because you enjoy destroying stuff in general? I don't see any other options, this is a pathology and it would do you good to introspect on this for a bit instead of to respond with yet another ill conceived reply digging yourself in further. You've gone from 'mildly annoying' to 'wouldn't work with this person for any amount of money because they are a massive liability' in the space of 15 comments. I hope it was worth it to you.
This is a lot of words and I’m honestly not sure it’s worth reading. At a skim it seems naive at best, at worst a pretty stupid, pearl-clutching interpretation of the discussion.
> If you're going around to pollute open source with AI contributions to effectively karma farm you have to wonder why you are so intent on doing that? Is it your purpose to destroy open source? Or is it just because you enjoy destroying stuff in general? I don't see any other options, this is a pathology and it would do you good to introspect on this for a bit instead of to respond with yet another ill conceived reply digging yourself in further
Just in case you misunderstood things (it’s easy when you get so upset about trivial arguments on the internet!), I don’t use AI when contributing to open source projects.
Thanks for the imaginary psychoanalysis though I guess.
You not only broke the site guidelines badly with this comment, you actually escalated how bad the thread was by quite a margin. Please don't do that.
If you'd please review https://news.ycombinator.com/newsguidelines.html and stick to the rules when posting here, we'd appreciate it. Note this one: "Don't feed egregious comments by replying; flag them instead."
Lying that you didn't use an LLM when told that contributions made using LLMs are banned does indeed make you a sociopath. Whether you have also committed sexual assault is an independent axis, but when someone shows such blatant disregard for boundaries and consent, it does raise questions.
Instead of arguing for violating the boundaries of a "slow, bespoke" no-LLM project, you can simply start one that enjoys all the benefits of LLMs by NOT having that boundary. Very simple solution.
Their boundaries. If they don’t want to accept the code, cool. Nobody is forcing them to, and I respect that.
But if they can't enforce their boundaries, because they can't tell the difference between AI code and non-AI code without being told, then the boundaries they made up are unenforceable nonsense.
About as nonsensical and unenforceable as asking me to code upside down.
I'll make this blunt: if you're a guy, then half the population is not capable of 'enforcing their boundaries' against you, more so if you count children. The problem you seem to have is thinking that if someone is not capable of enforcing their boundaries, then they are not allowed to have those boundaries, and that it is your god-given right to do whatever the F* you want just because you can. That's not how the world works, nor is it how it should work.
Boundaries - of all kinds - are not unenforceable nonsense, they are rights that you willingly and knowingly violate.
This is such an easily refuted assertion. Tell me, if something is wrong with the submitted code, who or what is responsible? If it's not "the LLM", then your opinion makes zero sense. The responsible party is always a human; therefore the responsible party rightfully deserves the credit whether it succeeds or fails.
I am authoring my contributions, using Claude Code as a tool. It doesn't make me an asshole.
If the maintainers don't want to accept it, fine. Someone will eventually fork and advance and we move on. The Uncles can continue to play in their no AI playground, and show each other how nice their code is.
The world is moving on from the "AI is bad" crowd.
Forking the code can be perfectly reasonable, with this or any other disagreement about policy. The main point of contention in this thread is whether you ought to lie about having used an LLM. I agree with Jacques: doing something like that would make you an asshole.
So is fraudulently claiming your code has a different author & copyright (you) than it actually has (whether that's someone else's code, or LLM-generated code).
You can, in fact, be pursued both civilly and criminally for fraud.
Your admissions here are enough that if you tried to contribute to any of my own Open Source projects, I would reject your contributions, and if I had accepted any prior ones I would pursue legal remedies.
I’d really like to know the specific legal remedies you’d pursue, assuming that I had contributed to one of your projects, based on this hacker news thread.
Can you stop LARPing and walk me through it? Please?
You stated that you will fraudulently misrepresent the origin of contributions you make to projects if you feel like it, and that nobody has any recourse. That’s you LARPing, by thinking there’s no recourse for fraud.
First of all, I don’t take anonymous or pseudonymous contributions to any of my projects, so if you had made any contributions I would have your real-world identity. That should tell you right away that recourse is possible.
Then, if I learned or had reasonable suspicion that your real-world identity mapped to Hacker News user “orf,” I would instruct my attorney to send a formal contributor agreement to you to sign within a certain period of time that certifies that you are indeed the sole author of all of the content you submitted to the project, and that you did not copy it from another codebase without proper attribution or license, or use an LLM to write it.
If you refused to sign such an agreement, or signed it and were discovered to be lying, I would file a lawsuit for the cost of having to remove your contributions for possible fraudulent misrepresentation of their origin, for the cost of having to hire one or more developers to recreate any important downstream work that depended upon your contributions using clean-room techniques, and for punitive damages to ensure you were dissuaded from making fraudulent misrepresentations in the future.
That’s not LARPing, that’s what any business will do in the event of a possible breach of contract. Just because many open source projects don’t have someone like me involved with the financial resources to pursue such a suit as far as necessary doesn’t mean that none do.
You’d send me a contributor agreement, after I’ve contributed, to retroactively ask if I used a LLM to write the code, and if I refused you’d then sue me for nebulous ill-defined damages and for breaching a non-existent contract?
So in your head, I could contribute a change that introduces a bug and as a result you could sue me for the time it took you to fix it?
…
Are you OK?
I was hoping for something with a “I’m a big strong serious tough guy” vibe but that’s a bit much. However I guess you can file a civil case for practically anything in some countries, and if you’re retired/unemployed maybe writing this kind of internet police fan-fiction is considered fun?
Do another one, this time where it’s not thrown out as a clearly frivolous suit with no legal basis.
You broke the site guidelines repeatedly in this thread, including by crossing into all sorts of personal attacks. I realize that you were provoked, but you were also provoking.
We've actually been asking you not to do this for years. This is bad:
I'm not going to ban you for this episode because everyone goes on tilt sometimes. But if you'd please review https://news.ycombinator.com/newsguidelines.html and do what it takes to recalibrate so that you're using the site as intended going forward, we'd be grateful.
No, you’re still either being intentionally obtuse or unintentionally clueless.
A condition of making a contribution to one of my projects is that you haven’t used an LLM to create that contribution. By making a contribution, you are agreeing to this restriction, even without having any formal document signed.
If I then found out that you may have defrauded the project by lying about the origin of your contribution—say because you said openly and publicly “I would just lie about using an LLM”—then I would first give you a chance to declare that no, really, you didn’t commit fraud in these cases because even though you publicly said you would just lie, I’m betting that you wouldn’t lie in signing a multipage contract with specific penalties for breach.
If you wouldn’t sign that contract, then I would sue you to address the damage your fraud caused the project, which would include removing all of your contributions and anything depending upon them from not just the present codebase but the project history, as well as documenting and hiring someone from outside the project to clean-room recreate anything I deem important that did depend upon them.
These damages are not nebulous or ill-defined: Because of the untrustworthy provenance of your contributions, they *must* be removed, and they also taint anything dependent upon them.
In all of your replies on this topic you really sound like a teenager who hasn’t quite understood that your actions really can have consequences.
If you look into why it was historically very difficult to find GNU emacs code for older versions, it's because of a situation exactly like this: Stallman just copied some code from Unipress (Gosling) emacs into GNU emacs, presumably thinking he could get away with the copyright violation. (He evidently hadn't learned from getting smacked down for directly copying Symbolics code into the LMI codebase.) The end result is that FSF and mirrors had to stop distributing the versions of GNU emacs containing the Unipress-originated code.
This is not a LARP, this is stuff that actually happens in the software industry including in Open Source, and anyone involved in the industry needs to actually take it seriously because to do otherwise is to invite substantial liability.
You broke the site guidelines repeatedly in this thread, including by crossing into quite vicious personal attack. I realize that you were provoked, but you were also provoking.
I'm not going to ban you for this episode because everyone goes on tilt sometimes. But if you'd please review https://news.ycombinator.com/newsguidelines.html and do what it takes to recalibrate so that you're using the site as intended going forward, we'd be grateful.
Surely you know that you can't do this on HN. "sociopathic piece of shit [...] Do the world a favor and remove yourself" isn't just bannable, it's 100x what we'd ban an account for.
You've been a good user generally* so I'm going to put this down to the unfortunate circumstances of this thread, but please don't do it again.
Even before AI, getting a fix into an open source project required a certain level of time and effort. If you prefer to spend your time on other things, and you assume it will eventually get fixed by someone else, using an LLM to fix it just for yourself makes sense.
If you rely on LLMs, you're simply not going to make it. The person who showed their work on the math test is, 9 times out of 10, doing better in life than the person who only knew how to use a calculator. Now how do we think things are going to turn out for the person who doesn't even think they need to learn how to use a calculator?
Just like when people started losing their ability to navigate without a GPS/Maps app, you will lose your ability to write solid code, solve problems, hell maybe even read well.
I want my brain to be strong in old age, and I actually love to write code unlike 99% in software apparently (like why did you people even start doing this career.. makes no sense to me).
I'm going to keep writing the code myself! Stop paying billionaires for their thinking machines; it's not going to work out well for you.
I went into software because I like building things and coming up with solid solutions to business problems that are of use to society. I would not describe myself with "love to code". It's a means to an end to pay bills and have a meaningful career. I think of myself more like a carpenter or craftsman.
I used a coding agent for the majority of my current project and I still got the "build stuff" itch scratched because Engineers are still responsible for the output and they are needed to interface between technical teams, UX, business people etc
> I think of myself more like a carpenter or craftsman.
> I used a coding agent for the majority of my current project and I still got the "build stuff" itch scratched because Engineers are still responsible for the output and they are needed to interface between technical teams, UX, business people etc
Then you are the opposite of a carpenter or a craftsman, no matter what you think about it yourself.
Taking AI out of the equation for a minute - they don't build anything, engineers do. A carpenter builds a chair, table etc using the skill he has accumulated over the years.
And yet, I find a coding agent makes it even more fun. I spend less time working on the boilerplate crap that I hate, and a lot less time searching Google and trying to make sense of a dozen half-arsed StackOverflow posts that don't quite answer my question.
I just went through that yesterday with Unity. I did all the leg work to figure out why something didn't work like I expected. Even Google's search engine agent wasn't answering the question. It was a terrible, energy-draining experience that I don't miss at all. I did figure it out in the end, though.
Prior to yesterday, I was thinking that using AIs to do that was making it harder for me to learn things because it was so easy. But comparing what I remember from yesterday to other things I did with the AI, I don't really think that. The AI lets me do it repeatedly, quickly, and I learn by the repetition, and a lot of it. The slow method has just 1 instance, and it takes forever.
This is certainly an exciting time for coders, no matter why they're in the game.
Cool you had it do something for you, this isn't building or learning no matter what you tell yourself. Your brain is going to atrophy. The process of building can be frustrating, so what, so is training for a marathon or anything rewarding in life.
> The person who showed their work on the math test is 9/10 times is doing better in life than the person that only knew how to use a calculator
Sure but once you learn long multiplication/division algorithms by hand there's not much point in using them. By high school everyone is using a calculator.
> Just like when people started losing their ability to navigate without a GPS/Maps app
Are you suggesting people shouldn't use Google Maps? Seems kind of nuts. Similar to calculators, the lesson here is that progress works by obviating the need to think about some thing. Paper maps and compasses work the same way, they render some older skill obsolete. The written word made memorization infinitely less valuable (and writing had its critics).
I don't think "LLMs making us dumber" is a real concern. Yes, people will lose some skills. Before calculators, adults were probably way better at doing arithmetic. But this isn't something worth prioritizing.
However, it is worth teaching people to code by hand, just like we still teach arithmetic and times tables. But ultimately, once we've learned these things, we're going to use tools that supersede them. There's nothing new or scary about this, and it will be a significant net win.
>I don't think "LLMs making us dumber" is a real concern. Yes, people will lose some skills. Before calculators, adults were probably way better at doing arithmetic.
But it's a problem of scale.
Calculators are very specific tools. If you are trying to run a computation of some arithmetic/algebraic expression, then they are a great tool. But they're not going to get you far if you need help understanding how to file your taxes.
LLMs are multi-faceted tools. They can help with math, doing taxes, coding, doing research, writing essays, summarizing text, etc. Basically anything that can be condensed into an embedding that the LLM can work with is fair game.
If you're willing to accept that using a tool slowly erodes the skill that tool was made for, then you should also accept that you will see an erosion of MANY skills you currently have.
So the question is whether this is all worth it. Is an increase in productivity worth eroding a strong foundation of general-purpose knowledge? Perhaps even the ability to learn in the first place?
I would argue no a million times over, but I'm starting to think that I'm an outlier.
Yeah, I agree. However, people use LLMs for the same reason people drive 3 blocks to a store rather than walk: laziness and convenience. They simply don't care if their leg muscles atrophy. However, I think people aren't taking into account how much more important your thinking "muscles" are, and it's way more consequential to let those atrophy.
Everyone is vulnerable to the allure of taking shortcuts in life, but I've learned over the years that there is no free lunch. This is going to be quite an expensive trade-off for many.
People will have to be more intentional about using their increased leisure time in a healthy way. There was no point in exercising if you were a peasant who worked the field all day. Today, if you sit down in an office all day, you need to exercise intentionally. People have figured this out!
Along the same lines, AI will necessitate a shift where people intentionally use their extra intellectual leisure time. Reading, writing, chess, learning a new language, etc.
Not everyone will do this. Some people will be the intellectual equivalent of obese. But people will figure it out eventually.
People are figuring it out in real time. The next generation is going to be way less fat than the current one, because everyone exercises. It took time for people to adjust to a world where physical exertion is optional and delicious food is cheap, but we are getting there. I see no reason to assume the same thing won't happen with AI.
Where are the stats backing this claim? Obesity levels have not dropped significantly in recent times. Also, any significant change will require government oversight, and we are increasingly heading in a direction where private interests overrule what's best for the public at large.
>I see no reason to assume the same thing won't happen with AI.
You have the ability to choose what and how much you eat. Will you have the ability to forsake AI if your employer forces it upon you, or if, to stay competitive in school, you have to rely on it? In the same way it's hard to live in society without a smartphone, it's already becoming hard to operate in society without relying on AI. Now extrapolate this out by a decade.
The written word isn't a very specific tool. Before writing, people had to memorize things. In some sense, writing has made us dumber as memorization has been deemphasized. But was it worth the trade? Yes.
If you want a more recent example, google search is an extremely broad tool that has operated similarly.
I think AI will be another rung in the ladder of abstraction. Something will be lost, but it's worth the trade.
I don't agree that writing, or Google search are on the same level here. A problem about having this argument on HN is that I think most people are already firmly entrenched in the pro-AI position, and will not consider any possible downsides.
There are lots of anti-AI commenters on HN. Also, I didn't say there are no downsides. There are downsides to writing! And some people were against writing, like Socrates.
You should ask yourself why you're okay with innovations that happened in the past but not okay with innovations happening now. It could just be reflexive conservativism.
Of course there's no guarantee that AI will be more positive than negative, but I see no compelling reason to believe that. Most of the anti-AI sentiment is just people not liking new things.
>You should ask yourself why you're okay with innovations that happened in the past but not okay with innovations happening now.
Because these innovations are not congruent with our most important biological advantage! We are here precisely because we developed the capacity to think critically about hard problems. To relegate our critical faculties to an activity you engage with during a small window of time each day similar to a muscle you exercise at the gym is asinine in my opinion. I firmly believe in the future, people like you will become a new underclass as they have willingly given up their ability to think.
Again, people said the same thing about computers, about writing, and so on. Maybe it's true this time, but I think the presumption should be that it's not.
If people who use AI become an "underclass" then people will adapt and...not use AI. But that won't happen. People will use it to augment rather than replace, just like we use other, similar technologies.
> Sure but once you learn long multiplication/division algorithms by hand there's not much point in using them. By high school everyone is using a calculator.
And many lose the ability to do long division by high school, where they'll have to relearn it for polynomial long division, which typical school calculators can't handle easily.
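For what it's worth, the algorithm those students relearn is small. A rough pure-Python sketch of long division on coefficient lists (my own illustration, highest-degree coefficient first) that works for polynomials exactly the way the pencil-and-paper version does:

```python
def poly_divmod(num, den):
    """Long division on coefficient lists, highest-degree coefficient first.
    Returns (quotient, remainder) as coefficient lists."""
    num = list(num)          # working copy; becomes the running remainder
    quot = []
    for i in range(len(num) - len(den) + 1):
        coef = num[i] / den[0]           # next quotient coefficient
        quot.append(coef)
        for j, d in enumerate(den):      # subtract coef * divisor, shifted by i
            num[i + j] -= coef * d
    rem = num[len(quot):]
    while len(rem) > 1 and rem[0] == 0:  # strip leading zeros of the remainder
        rem.pop(0)
    return quot, rem

# (x^3 - 2x^2 + 3) / (x - 1)  ->  quotient x^2 - x - 1, remainder 2
q, r = poly_divmod([1, -2, 0, 3], [1, -1])
```

The numeric version taught in grade school is the same loop with digits instead of coefficients, plus carrying.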
>I want my brain to be strong in old age, and I actually love to write code unlike 99% in software apparently (like why did you people even start doing this career.. makes no sense to me).
I am old now, and the unfortunate truth is that my brain isn't working as fast or as precise as when I was young. LLMs help me maintain some of my coding abilities.
It's like having a non-judgemental co-coder sitting at your side: you can discuss the code you wrote, and it will point out things you didn't think of.
Or I can tap into the immense knowledge LLMs have about APIs to keep up with change. I wouldn't be able to still read that much documentation and keep all of it in my head.
I agree but only in the very long term. I think short-medium term, it's not going to matter as the MBA types get so caught up in the mania that results matter even less than they normally do.
One doesn't exclude the other. I still program myself; I actually have more time to do so because the LLM I pay some billionaire for is taking care of the mundane stuff. Before I had to do the mundane stuff myself. What I pay the billionaire is a laughable fraction compared to the time and energy I now have extra to spend on meaningful innovation.
It would be helpful if you answered the question about web API usage; most of that is not relevant.
The only suggestion I see there from a quick skim that would avoid the above is for customers to set up a Google Maps proxy server for every usage, which adds security and hides the key. That is a completely impractical suggestion for the majority of users of embedded Google Maps.
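To be fair to the suggestion, the code side of such a proxy is tiny; the impractical part is hosting, quota management, and abuse prevention. A rough stdlib-only sketch (the upstream base URL and routing here are my assumptions for illustration, not a vetted setup):

```python
# Hypothetical key-hiding proxy sketch; upstream URL and routing are assumptions.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlsplit, parse_qsl, urlencode
from urllib.request import urlopen

UPSTREAM = "https://maps.googleapis.com/maps/api"
API_KEY = "server-side-secret"  # lives only on the server, never in page source

def upstream_url(path, query):
    """Rebuild the client's request URL for the upstream API, injecting the key."""
    params = [(k, v) for k, v in parse_qsl(query) if k != "key"]  # drop any client key
    params.append(("key", API_KEY))
    return f"{UPSTREAM}{path}?{urlencode(params)}"

class MapsProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        parts = urlsplit(self.path)
        with urlopen(upstream_url(parts.path, parts.query)) as resp:  # forward upstream
            body = resp.read()
        self.send_response(200)
        self.send_header("Content-Type", resp.headers.get("Content-Type", ""))
        self.end_headers()
        self.wfile.write(body)

# HTTPServer(("127.0.0.1", 8080), MapsProxy).serve_forever()  # start serving
```

Even so, asking every small site embedding a map to run and monitor one of these is exactly the burden the comment above objects to.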
They are talking about pnpm (which they said would be the uv equivalent for node, though I disagree given that what pnpm brings on top of npm is way less than the difference between uv and the status quo in Python).
Prices are not public, but comments over the years have hinted it's in the $500K to $1M range.
Their hardware is multiple generations behind at this point, however. I wonder if they’re starting to reduce the price because it’s hard to justify paying so much for old hardware. They could just be targeting customers who don’t care as much about performance or efficiency as they do the software stack.
Being a few generations behind is kinda par for the course for any server hardware that's put in production, this is not a gaming PC build. Hopefully they're working on bringing their hardware up to date, since efficiency is a key consideration for the class of workloads they're aiming at.
> Being a few generations behind is kinda par for the course for any server hardware that's put in production
No it’s not. Normally if you’re buying server hardware you don’t start with a CPU that’s already 5 years old and last-generation RAM that isn’t even manufactured at scale any more.
CPUs have advanced a lot in recent years. The jump from Zen 3 to Zen 5 is very substantial.
From what they say and from their podcast, it's pretty clear that they already have the next-generation sled and likely already sell it to existing customers; it's just not on the website.
I'm just repeating some comments I read some time ago, so take this with a grain of salt. It was my understanding that if you were spending about $300k-$500k a year on cloud services, this type of solution would make sense, so the expected price would be something between $500k-$1M depending on the configuration.
I am getting AI agents to build an expense-tracker Telegram bot. I would like to have one for myself and my family members, since we are heavy Telegram users. I am also using this as a way to learn more about AI agents (what they are good at, their limitations, etc.) with (hopefully) proper guardrails, guidelines, checks, etc.
As you may see from the git history and "contributors", it's mostly Claude and AMP making the changes.
I am not entirely sold on these agents and not particularly excited by these. But I also feel that I can't afford to sit out this transition so here I am...
https://codeberg.org/ziglang/zig#strict-no-llm-no-ai-policy