This comment, along with your comment upthread, is a good example of what we don't want on HN. It's a snarky internet putdown with no information and certainly no curiosity in it.
There are other places on the internet to post like this; please don't post like this here.
Dang has spent more time reflecting on this topic than you or I probably ever will, and for all the myriad accusations, I have yet to see an example of dang's public actions being influenced by anything YC is doing. HN moderation hasn't changed: HN activity has changed, and that has more to do with seemingly everyone outside YC hitching their wagon to it.
In general, yes: but these things don't work. We know they don't work. They don't work in theory, and I don't think I'm exaggerating when I say this is the hundredth article demonstrating that even people invested in making it work can't make it work because it doesn't work.
Past a certain point, shallow dismissals are all that an intellectually curious person has left: only those with the unfortunate habit of repeating well-trodden arguments on the internet have much more to say.
The whole article is "we couldn't make the autocomplete bot solve this task", written by somebody who (for whatever reason) isn't using that framing and has tried things that couldn't possibly work. The article even calls that out!
> some might argue LLMs are architecturally incapable of that
And yet, they consider it "remarkable" that a technique with a theoretical basis (clustering of sentence vectors) can do a substantially better job. No, there's nothing in this article worth commenting on.
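For concreteness, that baseline is roughly the following, a minimal sketch assuming sentence-transformers and scikit-learn (the model name, sentences, and cluster count are illustrative placeholders, not from the article):

    # Embed sentences, then cluster the vectors: the technique the article
    # finds "remarkable". Model and cluster count are placeholder choices.
    from sentence_transformers import SentenceTransformer
    from sklearn.cluster import KMeans

    sentences = [
        "The cat sat on the mat.",
        "A kitten dozed on the rug.",
        "Stock prices fell sharply today.",
        "Markets dropped amid rate fears.",
    ]

    model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose encoder
    embeddings = model.encode(sentences)             # one vector per sentence

    labels = KMeans(n_clusters=2, n_init=10).fit_predict(embeddings)
    for label, sentence in zip(labels, sentences):
        print(label, sentence)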
Generic, yes. Tangent, no: the article is of the form "we tried this thing that obviously doesn't work, and it doesn't work, and we are surprised". A comment of the form "of course it still doesn't work, you numpty" could be nicer, but it's not a tangent.
I fully understand moderating away "LLMs bad" comments in response to an article that isn't a snowclone, but removing just the low-effort negative comments, without removing the low-effort positive comments, introduces a bias that risks exacerbating a happy death spiral.
It's not sustainable to have to read the latest doomed attempt in enough detail to write a specific and substantial criticism when we know it's doomed from the start. Prompt engineering is, in many cases, little more than p-hacking, so many claimed positive results are actually negative results. Why are generic comments forbidden on generic articles?
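To see why the p-hacking comparison fits, here's a toy simulation (every number is invented for illustration, nothing here is from the article): even when every prompt variant is a coin flip, reporting the best of thirty variants measured on a small eval set looks like a positive result.

    # Toy simulation: every "prompt variant" has true accuracy 0.5, but the
    # maximum measured accuracy over many variants on a small eval set is
    # reliably inflated.
    import random

    random.seed(0)
    TRUE_ACCURACY = 0.5  # every variant is actually a coin flip
    EVAL_SIZE = 20       # small eval set, as is common when prompt-tinkering
    N_VARIANTS = 30      # number of prompt tweaks tried

    def measured_accuracy() -> float:
        hits = sum(random.random() < TRUE_ACCURACY for _ in range(EVAL_SIZE))
        return hits / EVAL_SIZE

    best = max(measured_accuracy() for _ in range(N_VARIANTS))
    print(f"best variant: {best:.0%} measured vs {TRUE_ACCURACY:.0%} true")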
I am generally a fan of the news guidelines. Their bias towards the perspective that every article is valuable is a feature, not a bug. But this particular topic feels to me like a failure mode of that very feature, and an easily exploitable one at that.
I'm all but certain this isn't the first instance of this failure mode (just the first I've noticed), and that you've written multiple relevant essays in the past three months. Most likely the flagging was all done by users, and you're just here to explain the rules. I don't think it's worth changing the news guidelines over, but I still think the moderation decision is wrong.
Not only do LLMs require real work before they produce anything useful; they are also fuzzy and nondeterministic, so you have to tweak your prompting tricks from time to time, and on top of that they are quite expensive.
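In practice the nondeterminism means wrapping every call in validation and retries. A toy sketch, where call_llm is a hypothetical stand-in for whatever model or API you use, not a real library call:

    # Toy sketch of coping with fuzzy, nondeterministic output: validate the
    # response and retry. `call_llm` is a hypothetical stand-in, and each
    # retry is another paid call, which is part of the expense.
    import json

    def call_llm(prompt: str) -> str:
        raise NotImplementedError("stand-in for your model/API of choice")

    def extract_json(prompt: str, retries: int = 3) -> dict:
        for _ in range(retries):
            raw = call_llm(prompt)
            try:
                return json.loads(raw)  # the model usually, not always, complies
            except json.JSONDecodeError:
                continue  # same prompt, different output next time
        raise ValueError(f"no valid JSON after {retries} attempts")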