I would like to introduce you to the concepts of interfaces and memory safety.
Well-designed interfaces enforce decoupling where it matters most. And believe it or not, you can do review passes after an LLM writes code to catch bugs, security issues, and bad architecture, and to reduce complexity.
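To make the decoupling point concrete, here's a minimal sketch of my own (the `Storage`, `InMemoryStorage`, and `remember_greeting` names are hypothetical, not from any codebase under discussion): the caller depends on an interface, so a review pass can audit it without reading any backend.

```python
from typing import Protocol


class Storage(Protocol):
    """Interface the business logic depends on; no concrete backend leaks through."""

    def save(self, key: str, value: str) -> None: ...
    def load(self, key: str) -> str: ...


class InMemoryStorage:
    """One interchangeable implementation; swapping it never touches callers."""

    def __init__(self) -> None:
        self._data: dict[str, str] = {}

    def save(self, key: str, value: str) -> None:
        self._data[key] = value

    def load(self, key: str) -> str:
        return self._data[key]


def remember_greeting(store: Storage, name: str) -> str:
    # Depends only on the Storage interface, which is exactly the seam
    # a post-hoc review pass can check in isolation.
    store.save("greeting", f"hello, {name}")
    return store.load("greeting")


print(remember_greeting(InMemoryStorage(), "world"))  # hello, world
```

The point is that the interface is the review surface: if the LLM-written backend is garbage, you replace it behind `Storage` and nothing upstream changes.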
This research presentation from Benn Jordan will hopefully change your mind on the noise issue and its consequences. I highly recommend it. https://www.youtube.com/watch?v=_bP80DEAbuo
Well, language evolves, and I personally prefer compute as a noun when talking about resources. It's great though because we can each say it in our preferred way without judging one another.
I agree. This is language evolving. If someone from the 16th century could hear a modern well-educated person speak English today, they would likely be horrified at how degenerate it would sound to them.
So I don't think current English is in some perfect state that should not change.
> Not really sure what this obsession with calling things you don't like AI generated is but it's poor form
Admonishing someone for correctly identifying AI-written or AI-edited blog posts is poor form, friend.
It is without a doubt written by an LLM. All of the telltale signs are there. I work with these tools 8-20 hours a day and after a while the verbiage and grammatical structures stick out like a sore thumb.
Get off the high horse. I too think this is a very interesting read. I was fascinated by the subject, but the presentation was nauseatingly distracting and immediately sets off yellow flags about how Percepta operates and what kind of quality they're willing to settle for. It tells me they are more interested in appearances and superficiality.
The numbers that are there categorically cannot be trusted, because hallucinating those details is quite common for models. There is simply no indication that a human adequately proofread this, and therefore any of its claims must be taken with a grain of salt. Don't forget the recent Cloudflare+Matrix debacle: https://news.ycombinator.com/item?id=46781516
I share the same concerns as OP; this post lacks metrics and feels like someone did something cool and raced to get an AI to post about it, instead of giving it a proper treatment.
I don't care how sure you are. Honestly, it's irrelevant. 99% of the time, it's a more pleasant and productive conversation for everyone involved if you just focus on issues you had with the text itself than any nebulous AI involvement.
From my point of view, all you've done is said a lot of nonsense and fabricated a convoluted explanation for why you think the text is bad. I'm fine on my horse thanks.
So people can no longer freely point out that a piece of work being automated, and its lack of meat, are red flags as to the veracity of the content, but your antagonistic metacommentary aimed at people pointing out factual information is welcome discourse?
You claimed "this obsession with calling things you don't like AI generated" is "poor form", attacking the parent commenter by claiming they are lying about the nature of the content. However, multiple people have pointed out the clear signs which you missed, and the consensus is that you were wrong. Now you suddenly don't care about this point, and have introduced a new argument instead.
"From my point of view, all you've done is said a lot of nonsense and fabricated a convoluted explanation for why you think the text is bad"
What a bad-faith response. Categorically dismissive, vague, antagonistic and ultimately failing to critically engage with anything I said.
Whether a piece of work is automated and 'lacks meat' is ultimately not something you can know for sure as a reader. Articles like this existed plenty pre-AI and will exist plenty post-AI, AI involvement or not, so yeah, pretty pointless to focus on that. It adds nothing, and all we have to go on is your own surety, which is fallible. If you can't recognize that, then there's not much to say.
I didn't miss anything. I never cared about it one way or another. What clear signs have people pointed out? This is the problem. It's apparently so obvious, yet even the original commenter admits "It's things humans do too". What is clear about that?
Your inability to recognize the clear imprint of current-generation language models on this article doesn't mean they're not present.
All knowledge is ultimately fallible, but ignoring or not being able to appreciate the high statistical likelihood of this article being LLM edited/generated doesn't change reality.
You're asking me to share my expertise with you so that you can understand, but your antagonistic overtones make it not feel worth the time and effort. Other readers have also pointed out that it has characteristic idiosyncrasies. Feel free to look into it yourself, but it would also be wise to learn to defer these kinds of attacks until you have all the information.
The post is the perfect example of the kind of writing about AI that dupes people who don't really understand how things like LLMs actually work and are trained. Anyone who properly understands these things finds the complete and total lack of detail about training and the loss function (and of course real metrics / benchmarks) to be a monstrous red flag here.
Especially egregious to me is the claim "Because the execution trace is part of the forward pass, the whole process remains differentiable: we can even propagate gradients through the computation itself". This is total weasel-language: we can propagate gradients through any transformer architecture and all sorts of other much more insane architectural designs, but that is irrelevant if you don't have a continuous and differentiable loss function that can properly weight partially-correct solutions or the likelihood / plausibility of arbitrary model outputs. You also need a clear source of training data (or a way to generate synthetic data).
So for e.g. AlphaFold, we needed to figure out a loss function that continuously approximated the energy configuration of various molecular configurations, and this is what really allowed it to actually do something. Otherwise, you are stuck with slow and expensive reinforcement-based systems.
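To illustrate why "the forward pass is differentiable" isn't enough, here's a toy sketch of my own (the functions and numbers are made up for illustration, not from the article): a 0/1 accuracy-style loss has zero gradient almost everywhere, so gradient descent gets no signal from it, while a smooth surrogate like the logistic loss does.

```python
import math


def accuracy_loss(w: float, x: float, label: int) -> float:
    # Non-differentiable 0/1 loss: piecewise constant, flat almost everywhere.
    return 0.0 if (w * x > 0) == (label == 1) else 1.0


def logistic_loss(w: float, x: float, label: int) -> float:
    # Smooth surrogate loss: differentiable in w everywhere.
    y = 1 if label == 1 else -1
    return math.log(1 + math.exp(-y * w * x))


def finite_diff(f, w: float, eps: float = 1e-6) -> float:
    # Numerical gradient of f with respect to the weight w.
    return (f(w + eps) - f(w - eps)) / (2 * eps)


x, label, w = 2.0, 1, -0.5  # a single misclassified training point
grad_acc = finite_diff(lambda w_: accuracy_loss(w_, x, label), w)
grad_log = finite_diff(lambda w_: logistic_loss(w_, x, label), w)
print(grad_acc)  # 0.0 -- no learning signal at all
print(grad_log)  # negative: pushes w toward classifying the point correctly
```

Both losses sit on top of a perfectly differentiable "forward pass" (`w * x`), but only one of them can actually train anything. That's the gap the article's claim glosses over.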
The other tells are garbage analogies ("Humans cannot fly. Building airplanes does not change that; it only means we built a machine that flies for us"). Such analogies add nothing to understanding, and indeed distract from serious, real understanding. Only dupes and fools think you can gain any meaningful understanding of mathematics and computer science through simplistic linguistic analogies and metaphors without learning the actual (visuospatial, logical, etc.) models. Thus, people with real and serious mathematical understanding despise such trite metaphors.
But then, since understanding something like this properly requires serious mathematical understanding, copy like that is a huge tell that the authors / company / platform puts bullshitting and sales above truth and correctness. I.e., yes, a huge yellow flag.
Authoritarianism rarely happens overnight; it happens one step at a time, and at every step the useful idiots [0] exclaim "It's just one step! What's the big deal? Stop overreacting!".
Next thing you know you've walked 100 miles and it's too late to turn back.
The comment you replied to only said the first "it's just one step" part. You're imagining the rest. Are we not even allowed to make factual statements when something is, in fact, just one step? "It's bad to factually describe what's happening because it will get worse" is a terrible way to make your case.
> I've heard a proposal that "age verification passes" be sold at liquor stores and porno shops, for example, who already seem to do an acceptable job of checking ID without destroying people's privacy.
This is not being supported because the size of the step is small, but because the step itself makes sense.
The slippery slope argument says that open source software is a stepping stone to a world where all commercial activity is banned. Should we therefore oppose open source software?
"makes sense" does a lot of heavy lifting. Explain the justification for this restriction on 1A rights and mandatory compulsion of speech for anyone writing software.
And your provided "slippery slope argument" is just a straw man; no one in this thread made that argument. The slippery slope is the authoritarian ratchet.
If you want to restrict your kid's access to the internet, install software that does that. That said, in 2026, when kids have personal devices (key word "personal", meaning there is an expected level of privacy we should respect), I don't think effective insulation against the bad parts of the internet will be achieved through software. Meanwhile, this legislation will be used to prevent children from turning into organized free thinkers.
It's not; you need actual parental controls. Furthermore, if this were just for the children, it would be a setting you opt into: fully optional and unregulated, like parental controls are now.
And no, there is absolutely no argument for a slippery slope here. Especially considering we've had OSS for what, 50 years now? And corporations are doing just fine. Contrast that with the vast history of authoritarianism and government oppression and violence, which occurs exactly through this mechanism of giving an inch over and over again until they have taken miles.
Slippery slope is famously a fallacious argument. My first exposure to it was people insisting that legalizing gay marriage would end up legalizing marriage to animals.
A slippery slope is only a fallacy if there is no demonstrated history of it existing. I think we're all aware that that is not the case for surveillance laws.
I'll go further. As a human being, I am responsible for myself. I grew up in an extremely abusive, impoverished, cult-like religious home where anything not approved by White Jesus was disallowed.
I owe everything about who I am today to learning how to circumvent firewalls and other forms of restriction. I would almost certainly be dead if I hadn't learned to socialize and program on the web despite it being strictly forbidden at home. Most of my interests, politics and personality were forged at 2am, as quiet as possible, browsing the web on live discs. I now support myself through those interests.
We're so quick to forget that kids are people, too. And today, they often know how to safely navigate the internet better than their aging caretakers who have allowed editorial "news" and social media to warp their minds.
Even for people who think they're really doing a good thing by supporting these kinds of insane laws that are designed to restrict our 1A rights: the road to hell is paved with good intentions.
This is obviously where it's going to go, at least in the US. Things that are non-religious, and especially non-Christian, pro-LGBT, and similar will be disproportionately pulled under "adult content" to ensure that children cannot be exposed to unapproved ideas during their formative years.
The scary thing about legislation and software is that they can negatively reinforce each other if not properly designed and implemented. We run the risk of the codification of morality-of-the-week becoming deeply embedded in the compute stack, which will not self-correct until there is a great political movement for the liberation of compute.
Exactly. Having lived through it already, I know what it did to me and I would never wish that upon another child. The internet saved me from being a religious, colonial, racist piece of shit like the rest of my family.
Did you have an actual point to make or did you just choose two random words and hurl an insult with zero context? Were you looking for an actual discussion or is this all you had to offer?
My M2 has an IDE and a couple active Firefox tabs open and I'm sitting at 30GB RAM usage, with about 5GB more on swap. It's a 32GB machine and I'm constantly opening Activity Monitor to kill Firefox tabs whose memory usage just seems to grow unbounded over time.
Software shouldn't be written this way. I shouldn't have to disable mds_stores because it likes to take up 2-3 cores at full throttle when I'm at 10% remaining battery. But it is, and 32GB isn't enough for me to even have a basic computing experience anymore, it seems.