If the AI slop were that valuable, a project regular, who actually knows and understands the project, would be just as capable of asking the AI to produce it.
Not according to ghostty maintainer Mitchell Hashimoto, per the above.
It takes multiple attempts, verifying the result behaves as desired, and iterative prompting to adjust. And it takes a lot of time waiting on agents between those steps (this work isn't a one-shot response). You're being reductive.
We may be talking at cross purposes. I read the grandparent poster as discussing provably untested patches.
I have no idea about ghostty specifically, but I've seen plenty of stuff that doesn't compile, much less pass tests. And I assert there is nothing but negative value in such "contributions".
If real effort went into it, then maybe there is value -- though it's not clear to me: when a project regular does the same work, at least they know the process. If there is some big PR moving things around, at least the author knows it's unlikely to have slipped in a backdoor. Once the change is reduced to some huge diff, it's much harder to gain that confidence.
In some projects, direct PRs for programmatic mass renames and the like have been prohibited in favor of requiring submission of the script that produces the change, because it's easier to review the script carefully. The same may be necessary for AI.
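For illustration, here's a sketch of the kind of script that might be submitted instead of the raw diff (the paths and identifiers are made up, not taken from any real project): reviewers audit these few lines rather than a diff touching hundreds of files.

```python
#!/usr/bin/env python3
"""Hypothetical mass-rename script submitted in place of the diff it produces."""
import pathlib
import re

# Assumed rename for illustration: old_widget_init -> widget_init.
OLD, NEW = "old_widget_init", "widget_init"

for path in pathlib.Path("src").rglob("*.c"):
    text = path.read_text()
    # Whole-word replacement only, so unrelated identifiers are untouched.
    updated = re.sub(rf"\b{re.escape(OLD)}\b", NEW, text)
    if updated != text:
        path.write_text(updated)
        print(f"rewrote {path}")
```

Anyone can rerun the script against the same tree and confirm the resulting diff matches the PR, which is exactly the property the huge diff alone doesn't give you.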
Having the original prompts (in sequence, and across potentially multiple models) can be valuable, but it is not necessarily useful for replicating the results because of the slot-machine nature of it.
> This whole original HN post is about ghostty btw
Sure, though I believe few commenters care much about ghostty specifically; they are primarily discussing the policy in the abstract!
> because of the slot machine nature of it
One could use deterministically sampled LLMs with exact integer arithmetic... There is nothing fundamental preventing it from being completely reproducible.
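As a rough sketch of what that means in practice (using Hugging Face transformers and "gpt2" purely as a stand-in model): greedy decoding removes the sampling randomness, though bit-exact results across hardware would additionally need deterministic kernels or the exact integer arithmetic mentioned above.

```python
# Greedy (temperature-free) decoding on an open-weights model: the same
# prompt yields the same tokens, with no random number generator involved.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Rename old_widget_init to widget_init in", return_tensors="pt")
# do_sample=False selects the argmax token at every step.
output = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(output[0]))
```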
You can't do that with state-of-the-art LLMs, and there's no sign of that changing (the vendors like to retain control over model behavior). I would not want to use or contribute to a project that embraces LLMs yet disallows the leading models.
Besides, the output of an LLM is not really any more trustworthy (even if reproducible) than the contribution of an anonymous actor. Both require review. Reproducing the output from the prompt doesn't mean the output followed traceable logic that lets you skip a full manual code review, as in your mass-renaming example. LLMs produce antagonistic output from innocuous prompting from time to time, too.