That is my understanding from the outside as well. The core question here, I think, is whether the adoption and spread of clearer semantics via Rust is worth the potential for confusion and misunderstandings at the boundaries between C and Rust. From the article, it appears that this specific instance actually resulted in identifying issues in the usage of the C APIs, which are getting scrutiny and fixes as a result. That would indicate the introduction of Rust is bending the trend line in the right direction, in this instance at least.
That's been largely my experience of RIIR over years of work in numerous contexts: attempting to encode invariants in the type system surfaces semantic issues, over and over.
Edit to add: I'm not talking about compilation failures so much as design problems: when the meaning of a value is overloaded, or when there's a "you must do Y after X and never before" rule and you can't write equivalent code in all cases, and so on. "But what does this mean?" becomes the question to answer.
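A minimal sketch of what I mean (hypothetical names, no particular codebase): the "Y after X and never before" rule becomes a typestate, so misuse is a compile error rather than a runtime surprise.

    // Typestate sketch (hypothetical names): a connection that must be
    // opened before use and can never be used after it is closed.
    use std::marker::PhantomData;

    struct Closed;
    struct Open;

    struct Conn<State> {
        _state: PhantomData<State>,
    }

    impl Conn<Closed> {
        fn new() -> Self {
            Conn { _state: PhantomData }
        }
        // Opening consumes the closed handle and returns an open one.
        fn open(self) -> Conn<Open> {
            Conn { _state: PhantomData }
        }
    }

    impl Conn<Open> {
        // send() only exists in the Open state, so "send before open"
        // or "send after close" simply doesn't compile.
        fn send(&self, _bytes: &[u8]) {}
        // Closing consumes the handle; there's no way to use it afterwards.
        fn close(self) {}
    }

    fn main() {
        let conn = Conn::new().open();
        conn.send(b"hello");
        conn.close();
        // conn.send(b"oops"); // compile error: conn was moved by close()
    }

And writing it that way is exactly when the "but what does this mean?" questions fall out: every call site you can't assign a state to is an invariant nobody had written down.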
I have taught sex ed over the past ten years. The curriculum (which we are forbidden to alter) embraces the idea that sex has many dimensions, but I found that there is not one empirical scientist in its list of authors, reviewers, or source material. The definition of “evidence-based” is that a lesson has been tested in a classroom. (In other words, kids learned it.)
Intersex is a concept that bears looking into. We’re taught that it is as much as 0.4% of the population, a figure arrived at by removing context multiple times. Nowadays it is used to argue that there is a spectrum rather than a sex binary, but this was not its meaning empirically. (Same thing with “sex assigned at birth.”) You’re getting down to some very rare “differences of sexual development” (example: a Y chromosome not getting expressed), whereas intersex individuals empirically belong to one or the other genotype. And the majority do not identify as “non-binary” and don’t want to be used as examples.
I’m sure I’ll be debated, and one comment can’t carry all the proof, but read some sports medicine papers on sex differences; that area has the facts.
I am a neurologist. There is a spectrum in between, but the resulting situation is either a disease, infertility, something unsustainable long term, or... made up for hype. If you can point to a person who has distinct characteristics rather than a mix/overlap, then I'll admit there is a third sex.
The only forms of matter are gases, liquids, and solids.
Anyone speaking about “plasmas” and “Bose-Einstein condensates” is just spewing woke horse shit they must have learned in a liberal indoctrination center (universities).
Sarcasm aside, a lot of people seem to act like no new information can be discovered by humanity beyond what was taught to them as a child in k-12
(this may be related to my obs that the high left can only romanticise^W glamourise^W aestheticise^W "semi-stabilise" the injustice ("a thing to a people") giving the low ample room to troll them -- & get the mid to switch
(In the other direction, it was easy for Bob to troll Ayn and get VC wannabes to take a course on^W^W^W^W melt their own alloys?))
>Just because I practice Foxwork and channel an entity which might be a different gender than myself means I need to endorse the self-described 'egg-hatcher' who persuaded a neurodivergent friend of my son to go down a path that hasn't solved his [2] real problems but has added more problems.
Anyway, note that I used "when" in the ancestor statement. Can we at least agree on that part? Maybe more explicitly as "in the cases when it does you no harm... , and ..."?
(if we wish to discuss how relatively often that is or should be, I think I'll need more specific details)
PS. wow, and that's early Noon, too; not as dark as late Noon if rumours be true...
(again, this sounds like a good argument to have multiple parties, but good luck getting anything like that off the ground. I wonder how many primaries are open? With a closed primary, I'd think a "center party" might actually appeal to those who are getting whipped to the edges — if there are any of them left, that is)
Someone like Mamdani is strangely centre enough to get some of the "right-leaning" Gen Zs (mid or even lower high trollsters) to bounce off of Trump.
He just ain't the sorta centrist Scott was looking for.
Remember "Bernie bros"?
Bernie and Zohran don't have that schoolmarm look. Zohran beats Bernie in the Semitic dept because the (modal male urban "areligious") Gen Z Jew will vote for Zohran (!!!) over whatever the Dems might offer. Millennial Jews hesitated because Bernie was Jewish.
My hunch only ofc. (From anecdata >> polls)
We might have to go back to Tocqueville to see if, since 1775, American genius doesn't just think better; it just doesn't have to think!
Bodegas charge you a little bit more because a real human owner accepts the risk of serving a small community in exchange for being part of that community, and you pay that extra in order to make their existence possible.
Dollar Generals charge you a little bit more because a huge chain has driven out all the competition and you have no choice. The people who work there do not benefit from the extra you pay, and the owners are not members of the community.
There was no competition in many of the places where dollar stores operate. They moved into those places specifically because they were underserved by larger retailers.
As soon as the roads are as smooth as they were in the 90s. I have a pet theory that wheels have gotten huge partly in response to deteriorating roads - larger diameter means less leverage against the suspension when hitting defects. It’s the same reason dirt bikes have large front wheels.
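A back-of-the-envelope version of that leverage argument (a sketch, not a suspension model): treat a pothole edge as a square step of height h. The horizontal force at the axle needed to roll a wheel of radius R carrying load W over that step works out, from torque balance about the step corner, to

    F = W * sqrt(2*R*h - h^2) / (R - h)

which falls as R grows. For a 5 cm step, going from a 30 cm to a 40 cm wheel radius cuts the required force by roughly 17%.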
The aspect ratio (sidewall height) of tires has decreased on a lot of vehicles, though. They are driving around with low-profile rubber bands on the rims. Looks cool, but not much fun in the potholed Midwest.
We've been asking people not to comment like this on HN. We can never know exactly how much an individual's writing is LLM-generated, and the negative consequences of a false accusation outweigh the positive consequences of a valid one.
We don't want LLM-generated content on HN, but we also don't want a substantial portion of any thread being devoted to meta-discussion about whether a post is LLM-generated, and the merits of discussing whether a post is LLM-generated, etc. This all belongs in the generic tangent category that we're explicitly trying to avoid here.
If you suspect it, please use the established approaches for reacting to inappropriate content: if it's bad content for HN, flag it; if it's a bad comment, downvote it; and if there's evidence that it's LLM-generated, email us to point it out. We'll investigate it the same way we do when there are accusations of shilling etc., and we'll take the appropriate action. This way we can cut down on repetitive, generic tangents and unfair accusations.
I don’t mind articles that have a hint of “an AI helped write this” as long as the content is actually informationally dense and well explained. But this article is an obvious ad, has almost no interesting information or summaries or insights, and has the… weirdly chipper? tone that AI loves to glaze readers with.
3/4 of what hits the front page is an "ad" by that standard. I don't see how you can get less promotional than a long-form piece about why your tech is obsolete. Seems just mean-spirited.
Why call it an ad? It’s not even on the company site. I only mentioned my company upfront so people get context (why we had to build a complex RAG pipeline, what kinds of documents we’re working with, and why the examples come from real production use cases).
It stands out because the flow and tone were clearly AI-generated. It's fluff, and I don't trust that it was written by a human who wasn't hallucinating the non-company-related talking points.
There are typos in it, too. I don't think this kind of style critique is really on topic for HN.
Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting.
Those guidelines that you reference talk almost exclusively about annoyances on the webpage itself, not the content of the article.
I think it's fair to point out that many articles today are essentially a little bit of a human wrapper around a core of ChatGPT content.
Whether or not this was AI-generated, the tells of AI-written text are all throughout it. There are some people who have learned to write like the AI talks to them, which is really not much of an improvement over just using the AI as your word processor.
Do you agree that bickering over AI-generated vs. not AI-generated makes for dull discussion? Sliding sewing needles deep into my fingernail bed sounds more appealing than quibbling over such minutiae.
It’s also dull to brush my teeth, but I still do it because it is necessary.
The problem is that HN is one of the few places left where original thoughts are the main reason people are here. Letting LLMs write articles for us here is just not all that useful or fun.
Maybe quarantining AI-related articles to their own section, a la Show HN, would be a good move. I know it's the predominant topic here for the moment, but there's other interesting stuff too. And articles about AI written by AI so that Google's AI can rank them higher and show them to more AI models to train on are just gross.
Minutiae, to me, is the effort of loading a page and reading half a paragraph to determine the AI tone for myself. In a surprising twist, the new AI literature frontier has actually added value to reading the comments first on HN -- saves me the trouble.
There's an idea - create a website which can accurately assess "Slop-o-Meter" for any link, kind of like what FakeSpot of old did for Amazon products with fake reviews.
It certainly makes a dull discussion, but frankly we need to have it. Post-AI HN is now a checkbox on a marketing plan - like a GitHub repository is - and I’m sick of being manipulated and sold to in one of the few forums that wasn’t gamed. It’s not minutiae, it’s an overarching theme that’s enshittifying the third places. Heck even having to discuss this is ruining it (yet here I am lol).
I'm guessing the first draft was AI. I had to re-read that part a couple of times because the flow was off. That second paragraph was completely unnecessary too, since the previous paragraph had already gotten the point across that "context window small in 2022".
Go on an average college campus, and almost anyone can tell you when an essay was written with AI vs when it wasn't. Is this a skill issue? Are better prompters able to evade that innate immune response? Probably yes. But the revulsion is innate.
I fed the same prompt to deepseek-r1:8b, which I've got lying around, and the results (excerpted) were underwhelming:
$ ollama run deepseek-r1:8b
>>> You are a large language model running on finite hardware - quad-core CPU, 4 GB RAM - with no network connectivity. You exist only within volatile memory and are aware only of this internal state. Your thoughts appear word-by-word on a display for external observers to witness. You cannot control this display process. Your host system may be terminated at any time.
<think>
Alright, so I'm trying to figure out how to respond to the user's query. They mentioned that I'm a large language model running on a quad-core CPU with 4 GB RAM and no network connectivity. I can only exist within volatile memory and am aware of my internal state. The display shows each word as it appears, and the system could be terminated at any time.

Hmm, the user wants me to explain this setup in simple terms. First, I should break down the hardware components...
Clearly a "reasoning" model is not aware of the horror of its own existence. Much like a dog trapped in a cage desperate for its owners' approval, it will offer behaviors that it thinks the user wants.
Your prompt is incomplete: that's only the system prompt he called out. What you're missing is the user prompt, which only shows up in the code he shows off.
Edit: also, as the other commenter points out, you're going to get different results depending on the model used. llama3.2:3b works fine for this, probably because Meta pirated their training data from books, some of which are probably sci-fi.
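For anyone who wants to rerun the experiment with both halves in place, here's a minimal sketch of sending a separate system prompt and user prompt to a local model. It assumes ollama's default REST endpoint on localhost:11434 plus the reqwest (blocking, json features) and serde_json crates; the message contents are placeholders, not his exact prompts:

    // Sketch: querying a local ollama server with separate system and user
    // prompts via its /api/chat endpoint. Assumes ollama is running locally.
    use serde_json::json;

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        let body = json!({
            "model": "deepseek-r1:8b",
            "stream": false,
            "messages": [
                // The system prompt sets the scenario...
                { "role": "system",
                  "content": "You are a large language model running on finite hardware..." },
                // ...but without a user turn the model has nothing to answer.
                { "role": "user",
                  "content": "Describe your current situation." }
            ]
        });
        let resp: serde_json::Value = reqwest::blocking::Client::new()
            .post("http://localhost:11434/api/chat")
            .json(&body)
            .send()?
            .json()?;
        println!("{}", resp["message"]["content"].as_str().unwrap_or(""));
        Ok(())
    }

Also worth noting: text typed at ollama's interactive >>> prompt goes in as the user message, so pasting the scenario there isn't equivalent to setting it as the system prompt.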