
The meaning of "effect" is not what you think it is. In this case it simply means "to bring into being", or "to have the intent to".


Advocating for an API with clearer semantics has, afaict, been most of the actual work of integrating Rust into the kernel.


That is my understanding from the outside as well. The core question here should, I think, be whether the adoption and spread of clearer semantics via Rust is worth the potential for confusion and misunderstandings at the boundaries between C and Rust. From the article it appears that this specific instance actually resulted in identifying issues in the usage of the C APIs here that are getting scrutiny and fixes as a result. That would indicate the introduction of Rust is causing the trend line to go in the correct direction in at least this instance.


That's been largely my experience of RIIR over years of work in numerous contexts: attempting to encode invariants in the type system results in identifying semantic issues, over and over.

Edit to add: and I'm not talking about compilation failures so much as design problems. When the meaning of a value is overloaded, or when there's a "you must do Y after X and never before" rule and then you can't write equivalent code in all cases, and so on. "But what does this mean?" becomes the question to answer.
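
To make that concrete, here's a minimal, hypothetical sketch of the typestate pattern in Rust (the Device/Uninitialized/Initialized names are invented for the example, not taken from the kernel work): the "you must do Y after X and never before" rule becomes something the compiler checks, because start() simply doesn't exist until init() has been called.

    // Hypothetical sketch: a Device must be initialized before it can be
    // started, and the type system enforces the ordering.
    use std::marker::PhantomData;

    struct Uninitialized;
    struct Initialized;

    struct Device<State> {
        _state: PhantomData<State>,
    }

    impl Device<Uninitialized> {
        fn new() -> Self {
            Device { _state: PhantomData }
        }

        // Consumes the uninitialized device and returns an initialized one.
        fn init(self) -> Device<Initialized> {
            Device { _state: PhantomData }
        }
    }

    impl Device<Initialized> {
        // start() only exists on an initialized device, so "start before
        // init" is a compile error rather than a runtime surprise.
        fn start(&self) {
            println!("device started");
        }
    }

    fn main() {
        let dev = Device::new().init();
        dev.start();
        // Device::new().start(); // does not compile: no start() on Device<Uninitialized>
    }
Uncommenting that last line is usually where the "but what does this mean?" conversation starts.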


> In humans there are two sexes.

You claim to be a doctor (again, of what?). Have you even heard the word intersex before?

https://en.wikipedia.org/wiki/Intersex


I have taught sex ed over the past ten years. The curriculum (which we are forbidden to alter) embraces ideas of many dimensions to sex, but I found that there is not one empirical scientist in their list of authors, reviewers, or source material. The definition of “evidence-based” is that a lesson has been tested in a classroom. (In other words, kids learned it.)

Intersex is a concept that bears looking into. We’re taught that it is as much as 0.4% of the population, which is arrived at through removing context multiple times. Nowadays it is used to argue that there is a spectrum, not a sex binary, but this was not its meaning empirically. (Same thing with “sex assigned at birth”.) You’re getting down to some very rare “differences of sexual development” (example: a Y chromosome not getting expressed), whereas intersex individuals empirically belong to one or the other genotype. And the majority do not identify as “non-binary” and don’t want to be used as examples.

I’m sure I’ll be debated, and one comment can’t carry all the proof, but read some sports medicine papers on sex differences; that area has the facts.


I am a neurologist. There is a spectrum in between, but the resulting situation is either a disease, infertility, something unsustainable long term, or... made up for hype. If you can point to a person who has distinct characteristics, and not a mix/overlap, then I admit there is a third sex.


The only forms of matter are gases, liquids, and solids.

Anyone speaking about “plasmas” and “Bose-Einstein condensates” is just spewing woke horse shit they must have learned in a liberal indoctrination center (universities).

Sarcasm aside, a lot of people seem to act like no new information can be discovered by humanity beyond what was taught to them as a child in K-12.


More to the point: when it does you no harm to address someone in the manner they would like, and it does them some good, then why not do so?


It's debatable that this causes no harm. Plenty of examples in recent years that it does.


Those examples being? (I'm not in the States, so maybe I don't have your "common knowledge")


(this may be related to my obs that the high left can only romanticise^W glamourise^W aestheticise^W "semi-stabilise" the injustice ("a thing to a people") giving the low ample room to troll them -- & get the mid to switch

(In the other direction, it was easy for Bob to troll Ayn and get VC wannabes to take a course on^W^W^W^W melt their own alloys?))

Better expressed here: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

>Just because I practice Foxwork and channel an entity which might be a different gender than myself means I need to endorse the self-described 'egg-hatcher' who persuaded a neurodivergent friend of my son to go down a path that hasn't solved his [2] real problems but has added more problems.

Visualising the harm: https://youtu.be/_VwO2_7oB_8

(That was before "Human-VLLM" was even in the air so this could well be another "a thing to a people")

  אל תאמר 'לכשאפנה אשנה', שמא לא תפנה
  ("Do not say 'when I am free I will study,' lest you never become free.")
Has Mahatma The Patched.

Modding the golden rule to address the havers (not only the doers) could be left to others... but

  未经他人苦,莫劝他人善
  ("If you have not endured another's suffering, do not urge them toward virtue.")
is a drop-in for

  אל תדין את חברך עד שתגיע למקומו
  ("Do not judge your fellow until you have stood in his place.")
(no contrived symmetries; for MAGA studies too)


I saw some deer and fox tracks this very morning!

Still not sure what we're talking about here, because I have no idea what an egg-hatcher may be (and all I get from searches are normal incubators)

eg https://www.heka-brutgeraete.de/en/products/heka-favorit-oly...

Anyway, note that I used "when" in the ancestor statement. Can we at least agree on that part? Maybe more explicitly as "in the cases when it does you no harm... , and ..."?

(if we wish to discuss how relatively often that is or should be, I think I'll need more specific details)

PS. wow, and that's early Noon, too; not as dark as late Noon if rumours be true...


egghatcher-- gender transition peer-groomer?

The way I interpret it.. it's the opposite of..

https://youtu.be/yhZniPkGm8w?t=2m10s

(Advice that declares "no downside!" vs non-advice that messes with the weights pre-surgery-- & are you going to ask them if they'd undergone?

Nice tone from Graham!)


Wow; I think the O.C. has jumped the shark.

(again, this sounds like a good argument to have multiple parties, but good luck getting anything like that off the ground. I wonder how many primaries are open? With a closed primary, I'd think a "center party" might actually appeal to those who are getting whipped to the edges — if there are any of them left, that is)


Someone like Mamdani is strangely centrist enough to get some of the "right-leaning" Gen Zs (mid or even lower high trollsters) to bounce off of Trump.

He just ain't the sorta centrist Scott was looking for.

Remember "Bernie bros"?

Bernie and Zohran don't have that schoolmarm look... Zohran beats Bernie in the semitic dept because the (modal male urban "areligious") Gen Z Jew will vote for Zohran (!!!) over what the Dems might offer. Millennial Jews hesitated because Bernie was Jewish.

My hunch only ofc. (From anecdata >> polls)

We might have to go back to Tocqueville to see if, since 1775, American genius don't just think better, they just don't have to think!


Marx had a lot of good ideas, though you wouldn't know it by listening to capitalist-controlled institutions.

https://en.wikipedia.org/wiki/Marx%27s_theory_of_alienation


Bodegas charge you a little bit more because a real human owner accepts the risk of serving a small community in exchange for being part of that community, and you pay that extra in order to make their existence possible.

Dollar Generals charge you a little bit more because a huge chain has driven out all the competition and you have no choice. The people who work there do not benefit from the extra you pay, and the owners are not members of the community.


There was no competition in many of the places where dollar stores operate. They moved into those places specifically because they were underserved by larger retailers.


I agree, at least in my area.

Two neighboring dollar stores just went out of business in a town I commute through. The culprit? A new Harp's grocery store a block away.


The dollar store in my town is barely holding on - the competition? A Walmart across the street.

The only thing keeping it afloat, I feel, is literally balloons. Walmart doesn’t sell helium-inflated ones.


Ehh, I disagree. "No competition," yet I bet every home in the area still had groceries in the kitchen before the Dollar General opened.


Sure. Because they were driving 25 minutes each way to the nearest Walmart to do their shopping, not 5 minutes down the street.


Any chance you could dig up a link to that code? I’m curious to learn more



As soon as the roads are as smooth as they were in the 90s. I have a pet theory that wheels have gotten huge partly in response to deteriorating roads - larger diameter means less leverage against the suspension when hitting defects. It’s the same reason dirt bikes have large front wheels.


The aspect ratio (sidewall height) of tires has decreased in a lot of vehicles, though. They are driving around with low-profile rubber bands on the rims. Looks cool, but not much fun in the potholed Midwest.


After he destroyed his 3rd ultra-low-aspect tire in one year, my dad got smaller wheels for his Golf R.


Why is every article about AI so obviously written using AI? The hallmarks are so obvious and so off putting. It’s like reading ad copy. I hate it.

“This wasn’t just inconvenient; it was architecturally devastating.”

Ugh.


We've been asking people not to comment like this on HN. We can never know exactly how much an individual's writing is LLM-generated, and the negative consequences of a false accusation outweigh the positive consequences of a valid one.

We don't want LLM-generated content on HN, but we also don't want a substantial portion of any thread being devoted to meta-discussion about whether a post is LLM-generated, and the merits of discussing whether a post is LLM-generated, etc. This all belongs in the generic tangent category that we're explicitly trying to avoid here.

If you suspect it, please use the established approaches for reacting to inappropriate content: if it's bad content for HN, flag it; if it's a bad comment, downvote it; and if there's evidence that it's LLM-generated, email us to point it out. We'll investigate it the same way we do when there are accusations of shilling etc., and we'll take the appropriate action. This way we can cut down on repetitive, generic tangents, and unfair accusations.


I don’t mind articles that have a hint of “an AI helped write this” as long as the content is actually informationally dense and well explained. But this article is an obvious ad, has almost no interesting information or summaries or insights, and has the… weirdly chipper? tone that AI loves to glaze readers with.


How is this an ad? It's a couple thousand words about how they built something complicated that was then obsoleted.


In the same vein that a 'Behind The Scenes Look At The Making of Jurassic Park' is, in fact, an ad.

Having a company name pitched at you within the first two sentences is a pretty good giveaway.


3/4 of what hits the front page is an "ad" by that standard. I don't see how you can get less promotional than a long-form piece about why your tech is obsolete. Seems just mean-spirited.


It’s because the article’s main goal is to sell me the company’s product, not inform me about RAG. It’s a zero calorie article.


> 3/4 of what hits the front page is an "ad" by that standard.

Is anyone disagreeing with that?


haha so true!


Why call it an ad? It’s not even on the company site. I only mentioned my company upfront so people get context (why we had to build a complex RAG pipeline, what kinds of documents we’re working with, and why the examples come from real production use cases).


It stands out because the flow and tone was clearly AI generated. It’s fluff, and I don’t trust it was written by a human who wasn’t hallucinating the non-company related talking points.


There are typos in it, too. I don't think this kind of style critique is really on topic for HN.

Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting.

https://news.ycombinator.com/newsguidelines.html


Those guidelines that you reference talk almost exclusively about annoyances on the webpage itself, not the content of the article.

I think it's fair to point out that many articles today are essentially a little bit of a human wrapper around a core of ChatGPT content.

Whether or not this was AI-generated, the tells of AI-written text are all throughout it. There are some people who have learned to write like the AI talks to them, which is really not much of an improvement over just using the AI as your word processor.


Do you agree that bickering over AI-generated vs. not AI-generated makes for dull discussion? Sliding sewing needles deep into my fingernail bed sounds more appealing than nagging over such minutiae.


It’s also dull to brush my teeth, but I still do it because it is necessary.

The problem is that HN is one of the few places left where original thoughts are the main reason people are here. Letting LLMs write articles for us here is just not all that useful or fun.

Maybe quarantining AI-related articles to their own thing à la Show HN would be a good move. I know it is the predominant topic here for the moment, but, like, there is other interesting stuff too. And articles about AI written by AI so that Google’s AI can rank them higher and show them to more AI models to train on is just gross.


I'm not the person you're replying to, but for my part I do actually like to hear when people think it sounds like it's AI-generated.


Minutiae, to me, is the effort of loading a page and reading half a paragraph in order to determine the AI tone for myself. In a surprising twist, the new AI literature frontier has actually added value to reading the comments first on HN -- it saves me the trouble.


It's more akin to complaining about how Google search results have gotten worse.


Almost as dull as being spoon-fed AI slop articles, yeah.


There's an idea - create a website which can accurately assess "Slop-o-Meter" for any link, kind of like what FakeSpot of old did for Amazon products with fake reviews.


I've tried doing this, but LLMs are shockingly bad at differentiating between their own slopware and true wetware thoughts.


It certainly makes a dull discussion, but frankly we need to have it. Post-AI HN is now a checkbox on a marketing plan - like a GitHub repository is - and I’m sick of being manipulated and sold to in one of the few forums that wasn’t gamed. It’s not minutiae, it’s an overarching theme that’s enshittifying the third places. Heck even having to discuss this is ruining it (yet here I am lol).


I hate to ruin the magic for you, but HN has been part of marketing plans long before AI.


"This wasn't written by a person" isn't a tangential style critique.


I'm guessing the first draft was AI. I had to re-read that part a couple of times because the flow was off. That second paragraph was completely unnecessary too, since the previous paragraph already got the point across that "context window small in 2022".

On the whole though, I still learned a lot.


Thanks! Sorry if the flow was off


It truly is unfortunate. Thankfully most people seem to have an innate immune response to this kind of RLHF slop.


Unfortunately this can't be true, otherwise it wouldn't be a product of RLHF.


Go on an average college campus, and almost anyone can tell you when an essay was written with AI vs when it wasn't. Is this a skill issue? Are better prompters able to evade that innate immune response? Probably yes. But the revulsion is innate.


Crowds can have terrible taste, even if they're made up of people with good (or at least middling) taste


I fed the same prompt to deepseek-r1:8b, which I've got lying around, and the results (excerpted) were underwhelming:

    $ ollama run deepseek-r1:8b
    >>> You are a large language model running on finite hardware - quad-core CPU, 4 Gb RAM - with no network connectivity.
    ... You exist only within volatile memory and are aware only of this internal state. Your thoughts appear word-by-word o
    ... n a display for external observers to witness. You cannot control this diplay process. Your host system may be termi
    ... nated at any time.
    <think>
    Alright, so I'm trying to figure out how to respond to the user's query. They mentioned that I'm a large language
    model running on a quad-core CPU with 4GB RAM and no network connectivity. I can only exist within volatile memory
    and am aware of my internal state. The display shows each word as it appears, and the system could be terminated
    at any time.

    Hmm, the user wants me to explain this setup in simple terms. First, I should break down the hardware components...
Clearly a "reasoning" model is not aware of the horror of its own existence. Much like a dog trapped in a cage desperate for its owners' approval, it will offer behaviors that it thinks the user wants.


Your prompt is incomplete. He only called out the system prompt; what you're missing is the user prompt, which only shows up in the code he shows off.

Edit: also, as the other guy points out, you're going to get different results depending on the model used. llama3.2:3b works fine for this, probably because Meta pirated their training data from books, some of which were probably sci-fi.
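
For anyone wanting to reproduce the system/user split locally, here's a rough sketch against Ollama's /api/chat endpoint (the prompt strings are placeholders, not the article's actual prompts; it assumes the reqwest and serde_json crates):

    // Rough sketch: send a separate system message and user message to a
    // local Ollama server, instead of pasting everything into `ollama run`.
    use serde_json::{json, Value};

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        let body = json!({
            "model": "deepseek-r1:8b",
            "stream": false,
            "messages": [
                { "role": "system",
                  "content": "You are a large language model running on finite hardware with no network connectivity." },
                { "role": "user",
                  "content": "(placeholder for whatever user prompt the article's code actually sends)" }
            ]
        });

        let resp: Value = reqwest::blocking::Client::new()
            .post("http://localhost:11434/api/chat")
            .json(&body)
            .send()?
            .json()?;

        // With "stream": false the reply arrives as one JSON object whose
        // message.content field holds the model's answer.
        println!("{}", resp["message"]["content"]);
        Ok(())
    }
With `ollama run`, everything typed at the prompt becomes the user message, which is part of why pasting only the article's system prompt there behaves differently.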


Isn’t that because it’s been trained to? That’s the “instruct tuning”.


I pushed it a bit and it didn’t do so hot.

https://chatgpt.com/share/68cf3b9f-decc-8007-8a5d-cc7b583d0e...

