Hacker News | the_af's comments

> Everyone who's not terribly worried about privacy always uses the line 'if you're not doing anything wrong, you have nothing to worry about'

The right way to reply to that is: not everything that's legal must be public.

You probably don't want the rest of the world to see you poop, or pick your nose, or hear every word you say. Almost everyone has things they'd be embarrassed to disclose to other people. And this can be weaponized against you should any rival gain access to it.


But they must have received this fine-tuning, right?

Otherwise it's hard to explain why they follow these negations in most cases (until they make a catastrophic mistake).

I often test this in ChatGPT with ad-hoc word games: I give it increasingly convoluted wordplay instructions, forbid it from using certain words, make it do substitutions (sometimes quite creative ones; I can elaborate), etc, and it mostly complies until I very intentionally manage to trip it up.

If it were incapable of following negations, my wordplay games wouldn't work at all.

I did notice that once it trips up, the mistakes start to pile up faster and faster. Once it's made a serious mistake, it's like the context becomes irreparably tainted.


> It is absolutely not a pointless war. If this war is won, it secures long-term peace in the region

If there's one thing I'm absolutely, 100% sure of, it's that this war won't secure any long-term peace in the region.

We're in fairy tale narrative mode, I see.


In support of your comment, the FBI under Trump has become increasingly politicized, to the point where it merely does and says whatever the Trump administration wants. Nothing coming from it is credible. Of course they are going to inflate the chance of the Iranians magically developing a mini-sub and striking Florida or whatever.

> A UN report in the mid-1990s claimed US sanctions had killed 500,000 Iraqi children. Then UN ambassador and later Secretary of State Madeleine Albright responded by saying "the price was worth it"

These kinds of statements make my blood curdle. Much like Iran today, Iraq back then posed no threat to the US. So when Albright made these calculations, that "as much as it pained her", those 500,000 dead Iraqi children were "worth it", as long as Iraq threatened its neighbors blah blah blah... she was being very, very callous. A war criminal couldn't have said it better.

"I'm willing to sacrifice children from some other country, as long as our military objectives are met, for the greater good!".

I wonder if Albright would have made the same calculus were those children from her home town.

Also, this in my mind goes to show this isn't a partisan issue. Democrats/Republicans, they are all pretty callous with wars in foreign lands. Trump is just very obnoxious about it, and pretty bad at planning, but they are all universally terrible.


Is it an "oh well" situation in this case though?

There seem to be actual people getting killed, in an actual war (by another name, but we all know it's a war, with missiles and airplanes and bombs).


When do you need to spellcheck or polish an HN comment?

I've never, ever, ever ever ever, seen anybody complain about spelling mistakes in a comment here. As long as you can understand the comment, people respond to it.


Extend spellcheck to asking questions like "does it meet HN rules", "how can I improve my writing", etc. Though these are the kinds of questions that do, at the very least, still meet the spirit of the rule, I suppose.

Do you really need an automated tool to tell you whether you're breaking common sense guidelines?

And why would you want to "improve your writing" for an HN comment? I think people here value raw authenticity more than polished writing.


> Do you really need an automated tool to tell you whether you're breaking common sense guidelines?

Lots of people break HN guidelines. I see it virtually every day.

> And why would you want to "improve your writing" for an HN comment?

Some people like to write well regardless of the medium. Why is that a problem for you?

> I think people here value raw authenticity more than polished writing.

Classic false dichotomy. Asking an LLM for feedback is not making your comment less authentic. As I pointed out elsewhere, it can make your comment more authentic by ensuring that what you had in your head and what you wrote match.

Go and study writing and psychology. For anything of value, it's rare that your first attempt reflects what you meant to say. It's also rare that the first attempt, even if it reflects what you meant, will be absorbed by the recipient as intended. Saying what you mean, and having it understood as you meant it, is a difficult skill.


> Lots of people break HN guidelines. I see it virtually every day.

Yes, and AI won't help here. People will use AI to better break the guidelines.

> Go and study writing and psychology

Is this a case where you should have read the guidelines? Maybe an LLM could have helped you here? Please don't tell me to go study anything; you know what they say about ASSuming.

> Some people like to write well regardless of the medium. Why is that a problem for you?

HN is more like talking than writing. And LLMs don't help you write well, they help you sound like a clone, which is unwanted.

> For anything of value, it's rare that your first attempt reflects what you meant to say.

You can always edit your comment. And in any case, HN is like a live conversation. Imagine if your friend AI-edited their speech in real-time as they talked to you.


Depends on how you use the AI. If you use it a bit like you'd ask a human to proof-read your work, AI can actually be quite helpful.

The other important thing you can do is have an AI check your claims before you post. Even with Google and PubMed, a quick check against sources by hand can take 30 minutes or longer, while with AI tooling it takes 5. Guess which one is more likely to actually lead to people checking their facts before they post (even if imperfectly!).

I'm not talking about people who lazily ask the AI to write their post for them, or those who don't actually go through and get the AI to find primary sources. Those people are not being as helpful. Though consider educating them on more responsible tool use as well?


To clarify my thoughts on this, I'm not against using AI to research/hone your arguments. It's no different to using Wikipedia or googling.

I don't think that's what this new HN guideline is against either.

What I object to is the AI writing your comments for you. I want to engage with other human beings, not the bot-mediated version of them.


> To clarify my thoughts on this, I'm not against using AI to research/hone your arguments. It's no different to using Wikipedia or googling.

> I don't think that's what this new HN guideline is against either.

This is actually how many commenters here are interpreting it, though - and that's what I'm pushing back against. They are actively advocating against using LLMs this way.

I don't have the LLM write the comment for me. I (sometimes) give it my draft, along with all the parents to the root, and get feedback. I look for specific things (Am I being too argumentative? Am I invoking a logical fallacy? Is it obvious I misinterpreted a comment that I'm replying to? Is my comment confusing? etc). Adding things like "Am I violating an HN guideline?" is fair game.

Earlier today I wrote a lot of comments without using the LLM's feedback. In one particular thread I repeatedly misunderstood the original context of the discussion and wasted people's time. I reposted my draft to the LLM and it alerted me to my problematic comment. Had I used it originally, I would have saved a lot of people's time.

Incidentally, since I started doing this (a few months ago), I've only edited my comment once or twice based on its feedback. Most of the time it just tells me my comment looks good.


The problem is that there's a vast range of behaviors between “using AI to research/hone your arguments” and “AI writing your comments for you”, and between the rule itself and dang's various remarks on it, where exactly the rule draws the line is about as clear as mud.

> Yes, and AI won't help here. People will use AI to better break the guidelines.

AI is a general purpose tool. People will use AI for multiple reasons, including yours. I'll wager, though, that your use case is much more challenging to do than mine, and that my use case will dominate in number.

> HN is more like talking than writing.

Says you. Many disagree.

> And LLMs don't help you write well, they help you sound like a clone, which is unwanted.

Patently false on both counts. Sorry, you're cherry picking and not addressing the part of my comment that discusses this.

> Imagine if your friend AI-edited their speech in real-time as they talked to you.

When a conversation is heated (as it occasionally is on HN), I actually would rather he AI-edit in real time - provided that the output reflects what he intended.


> I'll wager, though, that your use case is much more challenging to do than mine, and that my use case will dominate in number.

I don't know how comparatively challenging, I only know your use case is now (fortunately!) against HN rules.

> Patently false on both counts. Sorry, you're cherry picking and not addressing the part of my comment that discusses this.

It's not false. It's one of the major reasons people have come to dislike AI written comments and articles. It all ends up sounding the same.

> When a conversation is heated (as it occasionally is on HN), I actually would rather he AI-edit in real time - provided that the output reflects what he intended.

In real life? Sounds like a fucking dystopia. But everyone is free to choose the hell they want to live in.


> Do you really need an automated tool to tell you whether you're breaking common sense guidelines?

I say this on behalf of all of my neurospicy friends… sometimes, yes. Especially having taken a look at the whole list of guidelines, I definitely am friends with people who could struggle to determine whether a given comment fits or not.


People who are particular about spelling do not want to write misspelled words! It's not about whether you/others will tolerate it. I have my standards, and I hold to them.

I personally don't use an LLM to spellcheck (browser spellcheck works fine), but I see no problem with someone using an LLM to point out spelling errors.

And while I don't complain about others' spelling errors, I sure do notice them. And if someone writes a long wall of text as one giant paragraph that has lots of spelling/grammatical issues, chances are very high I won't read it.

Some people write very poorly by almost any standard. If an LLM helps the person write better, I'm all for it. There's a world of difference between copy/pasting from the LLM and asking it for feedback.


> I have my standards, and I hold to them.

Spellcheckers exist, you don't need an AI to change your voice.

Also, if you have standards, you can always train yourself to spell better!


> Spellcheckers exist, you don't need an AI to change your voice.

How is using an AI to spell check changing my voice?

Yes, thank you - I know spellcheckers exist, as my comment clearly states. The amusing thing is that an LLM that had access to the thread would have alerted you to a basic error you're making.

> Also, if you have standards, you can always train yourself to spell better!

"You can always ..." is not an argument against alternatives.


Calm down. You're getting defensive, but it's not warranted. I'm not attacking you.

> The amusing thing is that an LLM who had access to the thread would have alerted you to a basic error you're making.

I didn't make the "basic error" of assuming you didn't know spellcheckers existed. I was stressing that since spellcheckers already exist, you don't need an AI assisting your comment-writing. More basic, non-style-altering alternatives exist and are better.

> "You can always ..." is not an argument against alternatives.

The argument I'm making is that if you care so much about standards you can always hone them yourself instead of taking the lazy way out of having an AI write for you.

Alternatively, if you're lazy then your standards aren't too high.

And yes, this is an argument against the alternative you're suggesting.


> The argument I'm making is that if you care so much about standards you can always hone them yourself instead of taking the lazy way out of having an AI write for you.

It's pretty clear that in this case the use of AI is not a matter of laziness, but rather quality/consistency assurance. I use code formatters not because I'm too lazy to indent code myself, but because it helps guarantee that it's formatted consistently. I use a stud finder when mounting things to walls not because I'm too lazy to do the “knock on the wall” trick, but because the stud finder is more precise and reliable at it.

I don't use AI to edit my comments, but if I did, it would be not because I'm too lazy to check for all the things I want to avoid putting in my comments, but as an extra layer of assurance on top of what I've already trained myself to do.


> It's pretty clear that in this case the use of AI is not a matter of laziness, but rather quality/consistency assurance

But that's not something anybody wants of you in an informal context such as this (HN). It will flatten your voice and make you sound like a drone. We value a human voice.

Code is different. Outside of hobbies, code is not a form of self-expression. There's a reason why following your company's coding styles & practices is valued in software engineering. Companies value coders being interchangeable with each other; they do not want a "unique voice". I think it's completely unrelated to what we're discussing here.

> I don't use AI to edit my comments

What are we even debating, then?


I think that people subconsciously perceive grammatically correct and stylistically appropriate writing as more authoritative, and the author as a smarter and/or better-educated person.

At least that was the case before LLMs became a thing, now I'm not sure anymore.


Obvious spelling mistakes are usually ignored, but there are certain types of writing mistakes that really trigger the type of people that frequent HN.

For example, use "literally" for exaggeration rather than in the original meaning of the word and you'll likely trigger somebody.


I've never seen this, unless "literally" really clashed with the intent of the comment (as in, it changed the meaning).

It's against the HN guidelines to focus on punctuation, spelling, etc, as long as the comment is understood.

And, in any case, it's now against the guidelines to write using an AI :)


Perhaps not for the word "literally", but you've never seen anybody make a pedantic correction about word usage?

To be clear, I've seen it in the wild, but not here where it's discouraged to pick on words instead of focusing on the substance of what's being said.

Here's a better example. Use "a few bad apples" wrong, and you'll likely get a response. A few bad apples will cause the entire barrel to spoil rapidly, so a few bad apples is a big deal. But it's often used to say the opposite, that a few bad apples isn't a big deal.

Wow, I guess I never thought about the "few bad apples" figure of speech! Interesting. But regardless, everyone understands what it means in common use, even if it's logically wrong, and I swear I've never seen anybody be a pedant about it here.

And really, it goes against the spirit of HN to hyperfocus on idioms instead of addressing the meat of the argument...

As a personal observation, if an LLM was figuratively looking over my shoulder and pointed out something like "well, ackshually, 'a few bad apples' means..." I would delete the fucker.


A few bad apples is a great idiom though that applies to so many places. For example, teachers often report that more than 2 troublemakers in a classroom ruin the entire class. A few bad cops destroy trust in all policemen, ruining the entire force, et cetera.

And more relevant to us, a couple bad lines of code sprinkled in the millions in your code base can ruin the entire thing....


I wish I had posted a better example, but I couldn't recall anything at the moment and still can't. It's usually a more interesting complaint than the old-man-yells-at-clouds gripe about the usage of the word "literally".

OK, but let's dig deeper.

Would you prefer to be corrected on some logical fallacy/mistake you made in your argument, by another human being (and yes, maybe get slightly upset about it, we're human beings after all), or have both sides present bot-mediated iron-clad comments, like operators sparring with robots?

I prefer the raw, flawed human version. Even if, yes, I make a silly, avoidable mistake, or get upset, or make you upset in the heat of the argument. Maybe when I cool down I will have learned something.

I don't want flawless robotic arguments. I want human beings. (Fuck, that last bit sounded like an AI-ism, but I promise it's me, a human!).


I've been hit by spelling/grammar noise once or twice. Those are usually downvoted and/or flagged.

Typos like an/as, of/or, an/and waste the reader's time. That some care be taken to avoid them is no more than common courtesy.

If you finish faster, you'll be given another task. You're not freeing yourself sooner or spending less effort, you're working the same number of hours for the same pay. Your reward is not joining the ranks of those laid off.

> At no time did David display a lack of social skills, lack of empathy, or antisocial behaviour

I don't remember David much, but let it be noted that the essay uses "sociopath" in a different way than the commonly understood definition, much like the essay's use of "losers" doesn't mean what people usually mean by loser (as in "so and so is such a loser!"), it means "made a bad economic bargain / they are losing in the capitalist maximum profits & power game".


According to this theory, the Clueless are the ones who suffer the most.

They invest most, they care about made up goals nobody else cares about, they play by rules everyone else thinks are dumb, they feel loyal to a company that doesn't love them back, and because they are more invested in the company, they are the ones who feel the loss the most when the sociopaths pull the rug.

I think it's actually the Losers who have it better: they are simply not invested enough, they are replaceable but also find their place in other companies, and in any case, failure affects us -- I mean, them -- less, simply because they are not invested as much and they never felt any loyalty.

"Loser" is a loaded term because it sounds like the cultural, lowercase loser ("so and so is such a loser!") but it actually means "loser in the game of maximum capitalist profit and power". But if you're not really playing that game, being a loser at it isn't so bad.


The Clueless is the person who actually believes his work makes a difference and wants to do a good job. Not necessarily a terrible way to live, although it should be acknowledged that the Loser frees up time and energy to devote to other things, notably family.

(According to the theory) it is a terrible way to live, because everything the Clueless believes is false.

The Clueless believe their work makes a difference, but it doesn't. They believe it matters they do a good job, but it doesn't truly matter except for the advancement and power plays of the Sociopaths. They believe themselves "company men", and are loyal to a company that despises them and sees them as completely expendable.

The Losers understand this, and therefore devote their energy to other things outside work, where they find meaning in life.

(Again, I understand this is what the theory states and doesn't necessarily reflect reality. But I do think there's a kernel of truth to it.)


You are assuming that there's something bad about everything you believe being false. There's a fair amount of evidence that it can be a good thing, e.g. religious people being happier and living longer.

Yeah perhaps a better term for Loser is Abstainer. Because the Sociopaths also can certainly lose at the game of maximum capitalist profit. Loser/Abstainer just chooses not to play the game.

The problem with these theories is that they fall apart as soon as you start adding or modifying the types. Because they aren't actually correct, just simple and flattering.

I think it'd be more accurate to say that in their neat essay form they are incorrect/incomplete, but that there's a kernel of truth to them.

Essays like this want to package an idea into a nice, easy to understand thing that has some punch to it. Reality is more complex.


Fully agreed. I think "Loser" is a misnomer. And indeed, going by the essay, the Sociopaths can also lose big... they are willing to risk it all for personal gain, but it can end very badly for them if they miss their window, their manipulations get exposed, or they decide to do illegal things to get ahead (high-profile cases in my mind: Enron, Epstein, etc).

The names come from a cartoon that predates Rao's essay. He simply reused them because they mostly work. Just like the Sociopaths are not all literal sociopaths, the Losers are not all literal losers.

Yes, I understand this. I was simply making this explicit, it was a good idea to clarify that neither Losers nor Sociopaths match the common definition of those terms.
