
It’s not my business, but could someone shed light on how this would better serve their respective customers versus keeping them separate? Or in other words, “what will be possible by this merger that isn’t possible now?”

Except the bots.

>We have reached a point where the shareholders are a company's real customers and that is who they all try to attract.

We currently have a handful of AI companies that make no profit and have revenue far below operating costs; their entire business runs on investment, and they're positioning themselves for IPOs. Meaning the only reason they can keep the lights on is that they keep attracting investors (and it will likely be that way for the foreseeable future).


That's not unique to AI though. That's very common for tech startups.

If they keep doing it, it must be because sometimes it works.


[Slaps roof of barge]

You can fit so many tulips in this bad boy


While AI is an example, it's an extreme one - the uniqueness here is that the AI companies have very large spend commitments that exceed expected cash generation, even presuming no missteps and under very strong revenue assumptions, because infrastructure costs outpace revenue by a significant margin.(1)

This differs quite a bit from a typical venture-backed or bootstrapped entity, which has a realistic pathway to profitability.

https://www.analyticsinsight.net/news/hsbc-warns-openai-coul...


Ponzi schemes work* too.

*At a specific point in time and for certain investors


Please just talk about capital and leverage like an adult. Do you expect a CFO and their team to look at the math and say, "Well, we figured out that we can speed up adoption and bring forward billions of dollars of revenue by spending fewer billions from capital injection and debt deals this year" and then not do it?

Adults tell jokes too, especially gallows humor, and to great effect.

Ergo I propose the grandparent commenter inject more humor into their clear understanding of leverage and debt, to widen your, my, and their audiences' understanding of debt and leverage beyond your proposed metaphor of the toddler CFO failing the marshmallow challenge.


What doesn't work are the predictions of Uber's collapse, of which there were many, cheered on by a great many who still gather here looking for the next thing to see through.

I am personally betting on Uber’s collapse for the obvious reason: it won’t compete with robotaxis, and AV companies would rather have customers on their own apps than on Uber’s platform.

Just unsure about the timing


> Just unsure about the timing

Right after we get nuclear fusion and a million people on Mars.


Lol I can’t remember the last time I was driven by a human.

That sounds like a pretty bad memory. Unless you're like 3 and learned to read/write pretty fast, I guess?

profound insight

Uber actually has a service that's worth paying for. I can't say I feel the same about most AI slop factories.

Attributing it to private company behaviour really minimises what Valve chooses to do. Per your counter example: Epic Games has been having a very public meltdown this week regarding Steam's inclusion of Gen-AI labelling - here we have two private companies, with two very different priorities.

It's also worth reminding ourselves that Epic settled with the FTC for over half a billion dollars for tricking kids into making unwanted purchases in Fortnite.(1) Epic also stonewalled parents' attempts at obtaining refunds, going so far as to delete Fortnite accounts in retaliation against those who arranged chargebacks.

Furthermore the FTC's evidence included internal communications showing that Epic deliberately schemed and implemented these dark patterns specifically to achieve the fraudulent result, even testing different approaches to optimise it.

https://www.ftc.gov/news-events/news/press-releases/2022/12/...


I don't really get it myself. I personally don't give Steam credit for weakly saying 'hey, you need to label something'. Let me know when they really enforce it. Heck, let me know when they at least add a filter. That's when you can really impact the behaviour (or prove consumers really don't care).

But yes, both private companies do their own forms of evil.


Yeah we also need to get out of the dichotomous thinking that companies are either all good or all bad.

Companies will do things that represent their interests: sometimes their goals align well with their customers' or with the greater good, and sometimes they do unpopular things where they believe the profitability will outweigh the blowback.*

It's a lesson in not being too attached or needlessly loyal - our connection to a business is not a personal one.

*The Epic example is useful because their actions represent a steady pattern of deceptive conduct.


>EU vehicle safety regulations have supported a 36% reduction in European road deaths since 2010. By contrast, road deaths in the US over the same period increased 30%, with pedestrian deaths up 80% and cyclist deaths up 50%

There might be something in those stats other than anecdotal vibes.


Devil's advocate

How do we really know that? If people walk more and drive less, one could argue that road deaths go down too. The US has a lot more cars and roads than the EU. And we have this massive Interstate system.


Have you verified your numbers? With some basic searching I found that the number of cars registered in the EU seems to be comparable to (if not slightly more than) the USA, while the total length of public roads in the USA is about 10% more than that of the EU. Keep in mind that in the EU you have a lot of European routes which can stretch vast distances over several countries, similar to the US interstate system. The biggest factor I can think of is the lack of sidewalks and bike lanes on many US roads; additionally, there's a disregard for bicyclists by car users. Both discourage walking and cycling from being as prevalent on the roads as in the EU, since everyone is incentivized to just get a car anyway.

You might want to double check your own numbers. EU having “comparable or slightly more” cars than the US depends entirely on whether you count the EU as a single bloc or as individual nations. Per capita car ownership is still higher in the US. Road length is also not the relevant metric. What matters is road design, lane width, speed environment, lighting, and pedestrian exposure.

Pointing to “a lot of European routes” does not explain why US pedestrian deaths climbed 80 percent in 15 years while EU rates fell. Road geometry, car size, and enforcement patterns do. Sidewalks and bike lanes are part of the story but not the whole story.

If we are trading verification requests, the burden applies both ways.


>How do we really know that?

As the Devil's advocate, the burden is upon you to propose a viable alternative.

Merely asking "what if it's not that" is called sowing doubt, a practice that aims to undermine trust in established information.

Suggest a viable reason for any of the below figures, and then others can chime in with their criticisms of your rationale.

USA car fatalities over the last 15 years:

- 30% increase in road deaths

- 80% increase in pedestrian fatalities by car

- 50% increase in cyclist fatalities by car


You are mixing up “Devil's advocate” with “prove the negative for me.” The point of Devil's advocate is to test assumptions, not to accept the first correlation as gospel.

If pedestrian and cyclist deaths rise 80% and 50% while vehicle size, road design, lighting, speeding, and impairment trends also shift, then asking whether those factors matter is not “sowing doubt.” It is literally how causal analysis works. If your position is that questioning causality is illegitimate unless I hand you a fully formed alternative theory, then you are not defending evidence. You are defending certainty.


Nope, and arguing the point was anticipated. You've still not presented anything.

You're free to suggest an alternative concept, and that would be discussed because this is a forum, and not a place to play transparent political games.


No. I am not required to present an alternative explanation just to question someone’s claim. Challenging an inference is valid on its own.

Keep in mind that the US stats are derived from cities that are designed around personal automobile transportation, so they're likely muted.

Europe on the other hand has a much higher level of intermingling between pedestrians and vehicles. This puts pedestrians in harm's way more often, and will likely lead to out-sized dangers that aren't seen as frequently in the USA. Pedestrian safety is a key requirement of European car safety.

If the EU is politically forced into accepting the US standards, the slack will need to be picked up by European insurance companies, who should charge extreme premiums for unsafe designs, effectively blocking the sale of those vehicles to dangerous, young, or casual drivers and limiting those designs to those who truly need them (which I suspect is very few).

This should also go a long way in addressing inexpensive Chinese vehicles that ape the American designs, since those are more likely to be what ends up on the roads.


>>If the EU is politically forced into accepting the US standards, the slack will need to be picked up by European insurance companies, who should charge extreme premiums for unsafe designs, effectively blocking the sale of those vehicles to dangerous, young, or casual drivers and limiting those designs to those who truly need them (which I suspect is very few).

That only works if there are big penalties for killing people with your car. As it is, as long as you are not drunk and have your license, you get away with a minor slap on the wrist. You pay if you damage someone else's car, but if you kill them there is usually no financial responsibility, and thus no reason to raise insurance premiums.


If the EU forced Apple to adopt Wi-Fi Aware then Apple would just fence it to EU users.

The attempt to paint this as a power play by the EU is tenuous:

- Apple, along with Microsoft and Intel, is a founding member of the Wi-Fi Alliance, whose objective was to introduce a standard of interoperability through Wi-Fi Aware.(1)

- This work commenced long before the EU showed any interest in regulating tech.

- Apple have a pretty solid history of fencing EU-mandated changes to EU devices.

- Microsoft's Windows, also deemed by the EU to be a "gatekeeper", hasn't deployed Wi-Fi Aware, with no public plans to do so.(2)

1. https://www.washingtoninformer.com/wi-fi-aware-aims-to-conne...

2. https://learn.microsoft.com/en-us/answers/questions/2284386/...


Apple _did_ adopt and support Wi-Fi Aware as a protocol iOS supports. It just doesn’t use it for AirDrop.

I never said they didn't. They did, they announced they would, and it's now shipped.

This entire thread is about whether or not they were forced to do so by the EU.

As another dot point: We also don't have Margrethe Vestager or Thierry Breton (or other EU figures) doing a victory lap on social media as they usually do.


There are plenty of reasons why having a chatbot partner is a bad idea (especially for young people), but here's just a few:

- The sycophantic and unchallenging behaviours of chatbots leave a person unconditioned for human interactions. Real relationships have friction, and from this we develop important interpersonal skills such as setting boundaries, settling disagreements, building compromise, standing up for oneself, understanding one another, and so on. These also have an effect on one's personal identity and self-value.

- Real relationships have the input from each participant, whereas chatbots are responding to the user's contribution only. The chatbot doesn't have its own life experiences and happenings to bring to the relationship, nor does it instigate autonomously, it's always some kind of structured reply to the user.

- The implication of being fully satisfied by a chatbot is that the person is seeking a partner who does not contribute to the relationship, but rather just an entity that only acts in response to them. It can also be an indication of some kind of problem that the individual needs to work through with why they don't want to seek genuine human connection.


That's the default chatbot behavior. Many of these people appear to be creating their own personalities for the chatbots, and it's not too difficult to make an opinionated and challenging chatbot, or one that mimics someone who has their own experiences. Though designing one's ideal partner certainly raises some questions, and I wouldn't be surprised if many are picking sycophantic over challenging.

People opting for unchallenging pseudo-relationships over messy human interaction is part of a larger trend, though. It's why you see people shopping around until they find a therapist who will tell them what they want to hear, or why you see people opt to raise dogs instead of kids.


You can make an LLM play pretend at being opinionated and challenging. But it's still an LLM. It's still being sycophantic: it's only "challenging" because that's what you want.

And the prompt / context is going to leak into its output and affect what it says, whether you want it to or not, because that's just how LLMs work, so it never really has its own opinions about anything at all.


> But it's still an LLM. It's still being sycophantic: it's only "challenging" because that's what you want.

This seems tautological to the point where it's meaningless. It's like saying that if you try to hire an employee that's going to challenge you, they're going to always be a sycophant by definition. Either they won't challenge you (explicit sycophancy), or they will challenge you, but that's what you wanted them to do so it's just another form of sycophancy.

To state things in a different way - it's possible to prompt an LLM in a way that it will at times strongly and fiercely argue against what you're saying. Even in an emergent manner, where such a disagreement will surprise the user. I don't think "sycophancy" is an accurate description of this, but even if you do, it's clearly different from the behavior that the previous poster was talking about (the overly deferential default responses).


The LLM will only be challenging in the way you want it to be challenging. That is probably not the way that would be really challenging for you.

I only challenge LLMs in a way I don't want them to be challenging.

It's not meaningless. What do you do with a person who contradicts you or behaves in a way that is annoying to you? You can't always just shut that person up or change their mind or avoid them in some other way, can you? And I'm not talking about an employment relationship. Of course, you can simply replace employees or employers. You can also avoid other people you don't like. But if you want to maintain an ongoing relationship with someone, for example, a partnership, then you can't just re-prompt that person. You have a thinking and speaking subject in front of you who looks into the world, evaluates the world, and acts in the world just as consciously as you do.

Sociologists refer to this as double contingency. The nature of the interaction is completely open from both perspectives. Neither party can assume that they alone are in control. And that is precisely what is not the case with LLMs. Of course, you can prompt an LLM to snap at you and boss you around. But if your human partner treats you that way, you can't just prompt that behavior away. In interpersonal relationships (between equals), you are never in sole control. That's why it's so wonderful when they succeed and flourish. It's perfectly clear that an LLM can only ever give you the papier-mâché version of this.

I really can't imagine that you don't understand that.


> Of course, you can simply replace employees or employers. You can also avoid other people you don't like. But if you want to maintain an ongoing relationship with someone, for example, a partnership, then you can't just re-prompt that person.

You can fire an employee who challenges you, or you can reprompt an LLM persona that doesn't. Or you can choose not to. Claiming that that power - even if unused - makes everyone a sycophant by default is a very odd use of the term (to me, at least). I don't think I've ever heard anyone use the word in such a way before.

But maybe it makes sense to you; that's fine. Like I said previously, quibbling over personal definitions of "sycophant" isn't interesting and doesn't change the underlying point:

"...it's possible to prompt an LLM in a way that it will at times strongly and fiercely argue against what you're saying. Even in an emergent manner, where such a disagreement will surprise the user. I don't think "sycophancy" is an accurate description of this, but even if you do, it's clearly different from the behavior that the previous poster was talking about (the overly deferential default responses)."

So feel free to ignore the word "sycophant" if it bothers you that much. We were talking about a particular behavior that LLMs tend to exhibit by default, and ways to change that behavior.


I didn't use that word, and that's not what I'm concerned about. My point is that an LLM is not inherently opinionated and challenging just because you've put it together accordingly.

> I didn't use that word, and that's not what I'm concerned about.

That was what the "meaningless" comment you took issue with was about.

> My point is that an LLM is not inherently opinionated and challenging just because you've put it together accordingly.

But this isn't true, any more than claiming "a video game is not inherently challenging just because you've put it together accordingly." Just because you created something or set up the scenario doesn't mean it can't be challenging.


I think they have made clear what they are criticizing. And a video game is exactly that: a video game. You can play it or leave it. You don't seem to be making a good faith effort to understand the other points of view being articulated here. So this is a good point to end the exchange.

> And a video game is exactly that: a video game. You can play it or leave it.

No one is claiming you can't walk away from LLMs, or re-prompt them. The discussion was whether they're inherently unchallenging, or whether it's possible to prompt one to be challenging and not sycophantic.

"But you can walk away from them" is a nonsequitur. It's like claiming that all games are unchallenging, and then when presented with a challenging game, going "well, it's not challenging because you can walk away from it." This is true, and no one is arguing otherwise. But it's deliberately avoiding the point.


"I'm leaving you for a new context window."

> This seems tautological to the point where it's meaningless. It's like saying that if you try to hire an employee that's going to challenge you, they're going to always be a sycophant by definition. Either they won't challenge you (explicit sycophancy), or they will challenge you, but that's what you wanted them to do so it's just another form of sycophancy.

I think this insight is meaningful and true. If you hire a people-pleaser employee, and convince them that you want to be challenged, they're going to come up with either minor challenges on things that don't matter or clever challenges that prove you're pretty much right in the end. They won't question deep assumptions that would require you to throw out a bunch of work, or start hard conversations that might reveal you're not as smart as you think; that's just not who they are.


Hmm. I think you may be confusing sycophancy with simply following directions.

Sycophancy is a behavior. Your complaint seems more about social dynamics and whether LLMs have some kind of internal world.


Even "simply following directions" is something the chatbot will do, that a real human would not -- and that interaction with that real human is important for human development.

>> That's the default chatbot behavior. Many of these people appear to be creating their own personalities for the chatbots, and it's not too difficult to make an opinionated and challenging chatbot, or one that mimics someone who has their own experiences. Though designing one's ideal partner certainly raises some questions, and I wouldn't be surprised if many are picking sycophantic over challenging.

> You can make an LLM play pretend at being opinionated and challenging. But it's still an LLM. It's still being sycophantic: it's only "challenging" because that's what you want.

Also: if someone makes it "challenging" it's only going to be "challenging" with the scare quotes, it's not actually going to be challenging. Would anyone deliberately, consciously program in a real challenge and put up with all the negative feelings a real challenge would cause and invest that kind of mental energy for a chatbot?

It's like stepping on a thorn. Sometimes you step on one and you've got to deal with the pain, but no sane person is going to go out stepping on thorns deliberately because of that.


> and it's not too difficult to make an opinionated and challenging chatbot

Funnily enough, I've saved instructions for ChatGPT to always challenge my opinions with at least 2 opposing views, and never to agree with me if it seems that I'm wrong. I've also saved instructions for it to cut down on pleasantries and compliments.

Works quite well. I still have to slap it around for being too supportive / agreeing from time to time - but in general it's good at digging up opposing views and telling me when I'm wrong.
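For anyone who wants the same behaviour outside the ChatGPT UI: the saved-instructions trick is essentially a pinned system message, so it can be reproduced via the API. A minimal sketch, assuming the official openai Python package (v1+); the model name and instruction wording here are illustrative, not the exact ones used above:

    from openai import OpenAI  # assumes: pip install openai (v1+), OPENAI_API_KEY set

    client = OpenAI()

    # Standing instructions, playing the role of ChatGPT's saved/custom instructions.
    SYSTEM_PROMPT = (
        "Always challenge my opinions with at least 2 opposing views. "
        "Never agree with me if it seems that I'm wrong. "
        "Cut down on pleasantries and compliments."
    )

    def ask(question: str) -> str:
        # Every request re-sends the standing instructions as the system message.
        response = client.chat.completions.create(
            model="gpt-4o",  # illustrative model name
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content

    print(ask("My plan can't fail. Agree?"))

The system message is re-applied on every call, which is roughly what the ChatGPT UI does with saved instructions behind the scenes.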


>People opting for unchallenging pseudo-relationships over messy human interaction is part of a larger trend, though.

I don't disagree that some people take AI way too far, but overall, I don't see this as a significant issue. Why must relationships and human interaction be shoved down everyone's throats? People tend to impose their views on what is "right" onto others, whether it concerns religion, politics, appearance, opinions, having children, etc. In the end, it just doesn't matter - choose AI, cats, dogs, family, solitude, life, death, fit in, isolate - it's just a temporary experience. Ultimately, you will die and turn to dust like around 100 billion nameless others.


I lean toward the opinion that there are certain things people (especially young people) should be steered away from because they tend to snowball in ways people may not anticipate, like drug abuse and suicide: situations where they wind up much more miserable than they realize, not understanding that the various crutches they've adopted to hide from pain/anxiety have kept them from happiness (this is simplistic, though; many introverts are happy and fine).

I don't think I have a clear-enough vision on how AI will evolve to say we should do something about it, though, and few jurisdictions do anything about minors on social media, which we do have a big pile of data on, so I'm not sure it's worth thinking/talking about AI too much yet, at least as it relates to regulating for minors. Unlike social media, too, the general trajectory for AI is hazy. In the meantime, I won't be swayed much by anecdotes in the news.

Regardless, if I were hosting an LLM, I would certainly be cutting off service to any edgy/sexy/philosophy/religious services to minimize risk and culpability. I was reading a few weeks ago on Axios of actual churches offering chatbots. Some were actually neat; I hit up an Episcopalian one to figure out what their deal was and now know just enough to think of them as different-Lutherans. Then there are some where the chatbot is prompted to be Jesus or even Satan. Which, again, could actually be fine and healthy, but if I'm OpenAI or whoever, you could not pay me enough.


> chatbots are responding to the user's contribution only

Which is also why I feel the label "LLM Psychosis" has some merit to it, despite sounding scary.

Much like auditory hallucinations where voices are conveying ideas that seem-external-but-aren't... you can get actual text/sound conveying ideas that seem-external-but-aren't.

Oh, sure, even a real human can repeat ideas back at you in a conversation, but there's still some minimal level of vetting or filtering or rephrasing by another human mind.


> even a real human can repeat ideas back at you in a conversation, but there's still some minimal level of vetting or filtering or rephrasing by another human mind.

The mental corruption due to surrounding oneself with sycophantic yes men is historically well documented.


Excellent point. It’s bad for humans when humans do it! Imagine the perfect sycophant: never tires or dies, never slips, never pulls a bad facial expression, can immediately swerve their thoughts to match yours with no hiccups.

It was a danger for tyrants and it’s now a danger for the lonely.


South Park isn't for everyone, but they covered this pretty well recently with Randy Marsh going on a sycophant bender.

Interesting, thanks I’ll check it out.

I wonder if in the future that'll ever be a formal medical condition: Sycophancy poisoning, with chronic exposure leading to a syndrome of some sort...

That explains why Elon Musk is such an AI booster. The experience of using an LLM is not so different from his normal life.


> The sycophantic and unchallenging behaviours of chatbots leave a person unconditioned for human interactions.

To be honest, the alternative for a good chunk of these users is no interaction at all, and that sort of isolation doesn't prepare you for human interaction either.


> To be honest, the alternative for a good chunk of these users is no interaction at all, and that sort of isolation doesn't prepare you for human interaction either.

This sounds like an argument in favor of safe injection sites for heroin users.


Hey hey, safe injecting rooms have real harm minimisation impacts. Not convinced you can say the same for chatbot boyfriends.

That's exactly right, and that's fine. Our society is unwilling to take the steps necessary to end the root cause of drug abuse epidemics (privatization of healthcare industry, lack of social safety net, war on drugs), so localities have to do harm reduction in immediately actionable ways.

So too is our society unable to do what's necessary to reduce the startling alienation happening (halt suburban hyperspread, reduce working hours to give more leisure time, give workers ownership of the means of production so as to eliminate alienation from labor), so, ai girlfriends and boyfriends for the lonely NEETs. Bonus, maybe it'll reduce school shootings.


And there we are . . . "Our society is unable to do what's necessary on issue X, and what's necessary is this laundry list of my unrelated political hobby horses."

The person who introduced the topic did so derisively. I think you ought to re-read the comment to which you replied and a few of those leading to it for context.

If you don't deny that the USA is plagued by a drug addiction crisis, what's your solution?

Seeing society as responsible for drug abuse issues, of their many varieties, is very Rousseau.

Rousseau and Hobbes were just two dudes. I'd wager neither of them cracked the code entirely.

To claim that addicts have no responsibility for their addiction is as absurd as the idea that individual humans can be fully identified separate from the society that raised them or that they live in.


Given that those tend to have positive effects for the societies that practice them, is that what you wanted to say?

Wouldn't they be seeking a romantic relationship otherwise?

Using AI to fulfill a need implies a need which usually results in action towards that need. Even "the dating scene is terrible" is human interaction.


> Even "the dating scene is terrible" is human interaction.

For some subset of people, this isn't true. Some people don't end up going on a single date or get a single match. And even for those who get a non-zero number there, that number might still be hovering around 1-2 matches a year and no actual dates.


Are we talking people trying to date or "trying to date"?

I am not even talking dates BTW but the pre-cursors to dates.

If you bring up Tinder etc then I would point out that AI has been doing bad things for quite a while obviously.


> Are we talking people trying to date or "trying to date"?

The former. The latter I find is naught more than a buzzword used to shut down people who complain about a very real problem.

> If you bring up Tinder etc then I would point out that AI has been doing bad things for quite a while obviously.

Clearly. But we've also been cornered into Tinder and other dating apps being one of the very few social arenas where you can reasonably expect dating to actually happen.[1] There are also friend circles and other similar close social circles, but once you've exhausted those options, assuming no other possibilities reveal themselves, what else is there? There's uni or college, but if you're past that time of your life, tough shit I guess. There's work, but people tend to have the sense not to let their love life and their work mix. You could hook up after someone changes jobs, but that's not something that happens every day.

[1] https://www.pnas.org/doi/full/10.1073/pnas.1908630116


Swiping on thousands of people without getting a single date is not human interaction and that's the reality for some people.

I still don't think an AI partner is a good solution, but you are seriously underestimating how bad the status quo is.


> Swiping on thousands of people without getting a single date is not human interaction and that's the reality for some people.

For some people, yes, but 99% of those people are men. The whole "women with AI boyfriends" thing is an entirely different issue.


If you have 100 men to 100 women on an imaginary Tinder platform, and most of the men get rejected by all 100 women, it's easy to see where the problem would arise for women too.

In real dating apps, the ratio is never 1:1, there's always way more men.

The "problem" will arise anyway, of course, but as I said, it's a different problem - the women aren't struggling to find dates, they're just choosing not to date the men they find. Even classifying it as a "problem" is arguable.


> the ratio is never 1:1, there's always way more men.

Isn't it weird? There should be approximately equal numbers of unmarried men and women, so there should be some reason why there are fewer women on dating platforms. Is it because women work more and have less free time? Or because men are so bad? Or because they have an AI boyfriend? Or do married men using dating apps shift the ratio?


Obviously men are people and therefore can vary, but a lot of them rely on women to be their sole source of emotional connection. Women tend to have more and closer friends and just aren't as lonely or desperate.

A lot of dudes are pretty awful to women in general, and dating apps are full of that sort. Add in the risks of meeting strange men, and it's not hard to see why a lot of women go "eh" and hang out with friends instead.


What else do you expect them to do if none of the choices are worthwhile?

Expectations and reality will differ. Ultimately we will have soft eugenics. This is a good thing in the long run, especially with how crowded the global south is.

Nature always finds a way, and it's telling you not to pass your genetics on. It seems cruel, but it is efficient and very elegant. Now we just need to find an incentive structure to encourage the intelligent to procreate.


Maybe lower their standards to the point that they can be satisfied by a real person, not a text completion algorithm that literally worships the ground they walk on and outputs some of the cheesiest, cringiest text I've ever read.

>Maybe lower their standards to the point that they can be satisfied by a real person, not a text completion algorithm that literally worships the ground they walk on and outputs some of the cheesiest, cringiest text I've ever read.

The vast majority of women are not replacing dating with chatbots, not even close. If you want women to stop being picky, you would have to reduce the "demand" in the market, stop men from being so damn desperate for any pair of legs in a skirt.

They are suffering through the exact same dating apps, suffering through their own problems. Try talking to one some time about how much it sucks.

Remember, the apps are not your friend, and not optimized to get you a date or a relationship. They are optimized to make you spend money.

The apps want you to feel hopeless, like there is no other way than the apps, and like only the apps can help you, which is why you should pay for their "features" which are purposely designed to screw you over. The Match company purposely withholds matches from you that are high quality and promising. They own nearly the entire market.


Making a lot of assumptions there, my dude.

Despite the name, the subreddit community has both men and women and both ai boyfriends and ai girlfriends.

I looked through a bunch of posts on the front page (and almost died from cringe in the process) and basically every one of them was a woman with an AI "boyfriend".

Interesting. I guess it's changed a lot since I looked at it last time. I remember it being about 50/50.

We do see it - from 'crazy cat lady' to 'incel', from 'where have all the good men gone' to the rapid decline in the number of 25-year-olds who have had sexual experiences, not to mention the 'loneliness epidemic' that has several governments, especially in Europe, alarmed enough to make it an agenda point: No, they would not. Not all of them. Not even a majority.

AI in these cases is just a better 'litter of 50 cats', a better, less-destructive, less-suffering-creating fantasy.


Not all human interaction is a net positive in the end.

In this framing “any” human interaction is good interaction.

This is true if the alternative to “any interaction” is “no interaction”. Bots alter this, and provide “good interaction”.

In this light, the case for relationship bots is quite strong.


Why would that be the alternative?

These are only problems if you assume the person later wants to come back to having human relationships. If you assume AI relationships are the new normal and the future looks kinda like The Matrix, with each person having their own constructed version of reality while their life-force is bled dry by some superintelligent machine, then it is all working as designed.

Human relationships are part of most families, most work, etc. Could get tedious constantly dealing with people who lack any resilience or understanding of other perspectives.

The point is you wouldn't deal with people. Every interaction becomes a transaction mediated by an AI that's designed to make you happy. You would never genuinely come in contact with other perspectives; everything would be filtered and altered to fit your preconceptions.

It's like all those dystopias where you live in a simulation but your real body is wasting away in a vat or pod or cryochamber.


Someone has to make the babies!

don't worry, "how is babby formed" is surely in every llm training set

“how girl get pragnent”

It could be the case that society is responding to overpopulation in many strange ways that serve to reduce/reverse the growth of a stressed population.

Perhaps not making as many babies is the longterm solution.


Wait, how did this work in The Matrix exactly?

Artificial wombs – we're on it.

When this gets figured out, all hell will break loose, the likes of which we have not seen

Decanting jars, a la Brave New World!

ugh. speak of the devil and he shall appear.

I don’t know. This reminds me of how people talked about violent video games 15 years back. Do FPS games desensitize and predispose gamers to violence, or are they an outlet?

I think for essentially all gamers, games are games and the real world is the real world. Behavior in one realm doesn’t just inherently transfer to the other.


Unless someone is harming themselves or others, who are we to judge?

We don't know that this is harmful. Those participating in it seem happier.

If we learn in the course of time (a decade?) that this degrades lives with some probability, we can begin to caution or intervene. But how in God's name would we even know that now?

I would posit this likely has measurable good outcomes right now. These people self-report as happier. Why don't we trust them? What signs are they showing otherwise?

People were crying about dialup internet being bad for kids when it provided a social and intellectual outlet for me. It seems to be a pattern as old as time for people to be skeptical about new ways for people to spend their time. Especially if it is deemed "antisocial" or against "norms".

There is obviously a big negative externality with things like social media or certain forms of pay-to-play gaming, where there are strong financial interests to create habits and get people angry or willing to open their wallets. But I don't see that here, at least not yet. If the companies start saying, "subscribe or your boyfriend dies", then we have cause for alarm. A lot of these bots seem to be open source, which is actually pretty intriguing.


It seems we're not quite there, yes. But you should have seen the despair when GPT 5 was rolled out to replace GPT 4.

These people were miserable. Complaining about a complete personality change of their "partner", the desperation in their words seemed genuine.


Words can never be a substitute for sentience; they are separate processes.

Words are simulacra. They're models, not games; we do not use them as games in conversation.

> The sycophantic and unchallenging behaviours of chatbots leave a person unconditioned for human interactions

I saw a take that the AI chatbots have basically given us all the experience of being a billionaire: being coddled by sycophants, but without the billions to protect us from the consequences of the behaviors that encourages.


This. If you never train stick, you can never drive stick, just automatic. And if you never let a real person break your heart or otherwise disappoint you, you'll never be ready for real people.

AI friends need a "Disasters" menu like SimCity.

One of the first things many Sims players do is to make a virtual version of their real boyfriend/girlfriend to torture and perform experiments on.


Ah, 'suffering builds character'. I haven't had that one in a while.

Maybe we should not want to get prepared for RealPeople™ if all they can do is break us and disappoint us.

"But RealPeople™ can also elevate, surprise, and enchant you!" you may intervene. They sure than. An still, some may decide no longer to go for new rounds of Russian roulette. Someone like that is not a lesser person, they still have real™ enjoyment in a hundred other aspects in their life from music to being a food nerd. they just don't make their happiness dependant on volatile actors.

AI chatbots as relationship replacements are, in many ways, flight simulators:

Are they 'the real thing'? Nah, sitting in a real Cessna almost always beats a computer screen and a keyboard.

Are they always a worse situation than 'the real thing'? Simulators sure beat reality when reality is 'dual engine flameout halfway over the North Pacific'

Are they cheaper? YES, significantly!

Are they 'good enough'? For many, they are.

Are they 'sycophantic'? Yes, insofar as the circumstances are decided beforehand. A 'real' pilot doesn't get to choose 'blue skies, little sheep clouds in the sky'; they only get to choose not to fly that day. And the standard weather settings? Not exactly 'hurricane, category 5'.

Are they available, while real flight is not, to some or all members of the public? Generally yes. The simulator doesn't make you have a current medical.

Are they removing pilots/humans from 'the scene'? No, not really. In fact, many pilots fly simulators for risk-free training of extreme situations.

Your argument is basically 'A flight simulator won’t teach you what it feels like when the engine coughs for real at 1000 ft above ground and your hands shake on the yoke.' No, it doesn't. And frankly, there are experiences you can live without - especially those you may not survive (emotionally).

Society has always had the tendency to pathologize those who do not pursue a sexual relationship as lesser humans. (Especially) single women who were too happy in the medieval age? Witches that needed burning. A guy who preferred reading to dancing? A 'weirdo and a creep'. English knows 'master' for the unmarried, 'incomplete' man, and 'mister' for the one who got married. And today? Those who are incapable or unwilling to participate in the dating scene are branded 'girlfailure' or 'incel', with the latter group considered a walking security risk. Let's not add to the stigma by playing another tune for the 'oh, everyone must get out there' scene.


One difference between "AI chatbots" in this context and common flight simulator games is that someone else is listening in and has the actual control over the simulation. You're not alone in the same way that you are when pining over a character in a television series or books, or crashing a virtual jumbo jet into a skyscraper in MICROS~1 Flight Simulator.

You are aware that you can, in fact, run models on your own, fully airgapped machine, right? Ollama exists.

The fact that most people choose not to is no argument for 'mandatory' surveillance, just a laissez-faire attitude towards it.
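To illustrate how low the barrier is: once a model has been pulled (e.g. 'ollama pull llama3' - the model name here is illustrative), everything talks to a local HTTP endpoint and nothing leaves the machine. A minimal sketch using only the Python standard library, assuming an Ollama server on its default port:

    import json
    import urllib.request

    # Ollama's default local endpoint; no cloud service involved.
    OLLAMA_URL = "http://localhost:11434/api/chat"

    def chat(prompt: str, model: str = "llama3") -> str:
        # One non-streaming chat request to the local server.
        payload = json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "stream": False,
        }).encode("utf-8")
        request = urllib.request.Request(
            OLLAMA_URL,
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(request) as response:
            # The assistant's reply sits under message.content in the JSON body.
            return json.loads(response.read())["message"]["content"]

    print(chat("In one sentence: why run a model locally?"))

Unplug the network cable and it still works, which is the whole point.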


Yes. I have never connected to any of the SaaS-models and only use Nx/Bumblebee and sometimes Ollama.

In this context it's not about people like me.


Good for you!

Now ... why you want to police the decisions others make (or choose not to make) with their data ... it has a slightly paternalistic aspect to it, wouldn't you agree?


This is the exact kind of thinking that leads to this in the first place. The idea that a human relationship is, in the end, just about what YOU can get from it. That it's just simply a black box with an input and output, and if it can provide the right outputs for your needs, then it's sufficient. This materialistic thinking of other people is a fundamentally catastrophic worldview.

A meaningful relationship necessarily requires some element of giving, not just getting. The meaning comes from the exchange between two people, the feedback loop of give and take that leads to trust.

Not everyone needs a romantic relationship, but to think a chatbot could ever fulfill even 1% of the very fundamental human need of close relationships is dangerous thinking. At best, a chatbot can be a therapist or a sex toy. A one-way provider of some service, but never a relationship. If that's what is needed, then fine, but anything else is a slippery slope to self destruction.


> This is the exact kind of thinking that leads to this in the first place. The idea that a human relationship is, in the end, just about what YOU can get from it. That it's just simply a black box with an input and output, and if it can provide the right outputs for your needs, then it's sufficient. This materialistic thinking of other people is a fundamentally catastrophic worldview.

> A meaningful relationship necessarily requires some element of giving, not just getting. The meaning comes from the exchange between two people, the feedback loop of give and take that leads to trust.

This part seems all over the place. Firstly, why would an individual do something he/she has no expectation to benefit from or control in any way? Why would he/she cast away his/her agency for unpredictable outcomes and exposure to unnecessary and unconstrained risk?

Secondly, for exchange to occur there must be a measure of inputs and outputs, and an assessment of their relative values. Any less effort or thought amounts to an unnecessary gamble. Both the giver and the intended beneficiary can only speak for their respective interests. They have no immediate knowledge of the other person's desires, and few individuals ever make their expectations clear and simple to account for.

> Not everyone needs a romantic relationship, but to think a chatbot could ever fulfill even 1% of the very fundamental human need of close relationships is dangerous thinking. At best, a chatbot can be a therapist or a sex toy. A one-way provider of some service, but never a relationship. If that's what is needed, then fine, but anything else is a slippery slope to self destruction.

A relationship is an expectation. And like all expectations, it is a conception of the mind. People can be in a relationship with anything, even figments of their imaginations, so long as they believe it and no contrary evidence arises to disprove it.


> This part seems all over the place. Firstly, why would an individual do something he/she has no expectation to benefit from or control in any way? Why would he/she cast away his/her agency for unpredictable outcomes and exposure to unnecessary and unconstrained risk?

It happens all the time. People sacrifice anything, everything, for no gain, all the time. It's called love. When you give everything for your family, your loved ones, your beliefs. It's what makes us human rather than calculating machines.


You can easily argue that the warm, fuzzy dopamine push you call 'love', triggered by positive interactions, is basically a "profit". Not all generated value is expressed in dollars.

"But love can be spontaneous and unconditional!" Yes, bodies are strange things. Aneuryisms also can be spontaneous, but are not considered intrinsically altruistic functionality to benefit humanity as a whole by removing an unfit specimen from the gene pool.

"Unconditional love" is not a rational design. It's an emergent neural malfunction: a reward loop that continues to fire even when the cost/benefit analysis no longer makes sense. In psychiatry, extreme versions are classified (codependency, traumatic bonding, obsessional love); the milder versions get romanticised - because the dopamine feels meaningful, not because the outcomes are consistently good.

Remember: one of the significant narratives our culture has about love - Romeo and Juliet - involves a double suicide due to heartbreak and 'unconditional love'. But we focus on the balcony, and conveniently forget about the crypt.

You call it "love" when dopamine rewards self-selected sacrifices. A casino calls it "winning" when someone happens to hit the right slot machine. Both experiences feel profound, both rely on chance, and pursuing both can ruin you. Playing Tetris is just as blinking, attention-grabbing, and loud as a slot machine, but much safer, with similar dopamine outcomes.

So ... why would a rational actor invest significant resources to hunt for a maybe dopamine hit called love when they can have a guaranteed 'companionship-simulation' dopamine hit immediately?


Yes, great comment.

What do you think of the idea that people generally don't really like other people - that they do generally disappoint and cause suffering. (We are all imperfect, imperfectly getting along together, daily initiating and supporting acts of aggression against others.) And that, if the FakePeople™ experience were good enough, probably most people would opt out of engaging with others, similar to how most pilot experiences are on simulators?


Ultimately, that's the old Star Trek 'the holodeck would - in a realistic scenario - be the last invention of a civilization' argument.

I think that there will always be several strata of the population who will not be satisfied with FakePeople™, either because they are unable to interact with the system effectively due to cognitive or educational deficiencies, or because they are in a belief that RealPeople™ somehow have a hidden, non-measurable capacity (let's call it, for the lack of a better term, a 'soul'), that cannot be replicated or simulated - which makes it, ultimately, a theological question.

There is probably a tipping point at which the number of RealPeople™ enthusiasts is so low that reasonable relationship matching is no longer possible.

But I don't really think the problem is 'RealPeople™ are generally horrible'. I believe that the problem is availability and cost of relationship - in energy, time, money, and effort:

Most pilot experiences are on simulators because RealFlight is expensive, and the vast majority of pilots don't have access to an aircraft (instead sharing one), which also limits potential flight hours (because when the weather is good, everyone wants to fly. No-one wants the plane up in bad conditions, because it's dangerous to the plane, and - less important for the ownership group - the pilot.)

Similarly: Relationship-building takes planning effort, carries significant opportunity cost, consumes monetary resources, and has a low probability of the desired outcome (whatever that may be; it's just as true for the 'long-term, potentially married' relationship as it is for the one-night stand). That's incompatible with what society expects from a professional these days (e.g. work 8-16 hours a day, keep physically fit, save for old age and/or a potential health crisis, invest in your professional education; the list goes on).

Enter the AI model, which gives a pretty good simulation of a relationship for the cost of a monthly subway card, carries very little opportunity cost (simulation will stop for you at any time if something more important comes up), and needs no planning at all.

Risk of heartbreak (aka a potentially catastrophic psychiatric crisis - yes, such cases are common) and hell being other people don't even have to factor in to make the relationship simulator appear like a good deal.

If people think 'relationship chatbots' are an issue, just you wait for when - not if - someone builds a reasonably-well-working 'chatbot in a silicone-skin-body' that's more than just a glorified sex doll. A physically existing, touchable, cooking, homemaking, reasonably funny, randomly-sensual, and yes, sex-simulation-capable 'Joi' (and/or her male-looking counterpart) is probably the last invention of mankind.


Soul, yes.

You may be right, that RealPeople do seek RealInteraction.

But, how many of each RealPerson's RealInteractions are actually that - it seems to me that lots of my own historical interactions were/are RealPersonProjections. RealPersonProjections and FakePerson interactions are pretty indistinguishable from within - over time, the characterisation of an interaction can change.

But, then again, perhaps the FakePerson interactions (with AI), will be a better developmental training ground than RealPersonProjections.

Ah - I'll leave it here - it's already too meta! Thanks for the exchange.


Disturbing and sad.

> Maybe we should not want to get prepared for RealPeople™ if all they can do is break us and disappoint us.

Good thing that "if" is clearly untrue.

> AI chatbots as relationship replacements are, in many ways, flight simulators:

If only! It's probably closer to playing star fox than a flight sim.


> Good thing that "if" is clearly untrue.

YMMV

> If only! It's probably closer to playing star fox than a flight sim.

But it's getting better, every day. I'd say we're in 'MS Flight Simulator 4.0' territory right now.


Love your thoughts about needing input from others! In Autistic / ADHD circles, the lack of input from other people, and the feedback of thoughts being amplified by oneself is called rumination. It can happen for many multiple ways-- lack of social discussion, drugs, etc. AI psychosis is just rumination, but the bot expands and validates your own ideas, making them appear to be validated by others. For vulnerable people, AI can be incredibly useful, but also dangerous. It requires individuals to deliberately self-regulate, pause, and break the cycle of rumination.

> In Autistic / ADHD circles

i.e. HN comments


Nah, most circles of neurodivergent people I've been around have humility and are aware of their own fallibility.

Is this clearly AI-generated comment part of the joke?

The comment seems less clearly-written (e.g., "It can happen for many multiple ways") than how a chatbot would phrase it.

Good call. I stand corrected: this is a human written comment masquerading as AI, enough so that I fell for it at my initial quick glance.

Excellent satire!


That just means they used a smaller and less focused model.

It doesn't. Name a model that writes like that by default.

We’re all just in a big LLM-generated self-licking-lollipop content farm. There aren’t any actual humans left here at all. For all you know, I’m not even human. Maybe you’re not either.

... and with this, you named the entire retention model of the whole AI industry. Kudos!

I share your concerns about the risks of over-reliance on AI companions—here are three key points that resonate deeply with me:

• Firstly, these systems tend to exhibit excessively agreeable patterns, which can hinder the development of resilience in navigating authentic human conflict and growth.

• Secondly, true relational depth requires mutual independent agency and lived experience that current models simply cannot provide autonomously.

• Thirdly, while convenience is tempting, substituting genuine reciprocity with perfectly tailored responses may signal deeper unmet needs worth examining thoughtfully. Let’s all strive to prioritize real human bonds—after all, that’s what makes life meaningfully complex and rewarding!


From reading both posts, there's a few things that come to my mind:

- It seems this is how the author is processing her father's passing, and it's not really up to us to make moral calls on the content of the posts. They are thoughts with gaps of missing context against a real life of highs and lows which is not readily condensed into a blog post.

- I'm peering into the life of a private person, that feels like a violation. Even though they have passed, the people around them are very much alive.

- We can't make guesses at what a person truly values, neither positively nor negatively. What can be seen as promiscuity can also be seen as seeking validation; human motives and emotions exist in the grey area.

- This is a person who was deprived of the sort of genuine sexual and emotional attention that we take for granted from puberty age. They lived as a type of outsider in school, work, and their daily norms. The integrity of their actions shouldn't be evaluated against our own values which were likely built from a different life experience.

- It's ok not knowing or judging. One has to practice a type of "radical acceptance" when reviewing these sorts of life matters.


I agree with all of what you say, and while I thought the author was very good, I think calling him a coward was an unnecessary stroke of vanity and bitterness. For the same reason that no one can ever know what's inside another person's mind, much less can a child understand their parents.


> I think calling him a coward was an unnecessary stroke of vanity and bitterness.

Maybe it is. Maybe it isn't.

In the process of grieving, when the emotions are at their rawest, it is difficult to not have knee-jerk reactions to the emotions that are piling-up fast and strong.

Except for that very slip, I actually found the piece impressively objective, level-headed, compassionate and open-minded.


I'd have to agree with you here. I often tell people that while we have control over our actions, we don't really have control over our emotions in the same sense. Feeling anger, happiness, sadness, bitterness, resentment, or anything else is something that will happen regardless of whether we want to or not, and all we can do is learn how to process our emotions to be able to learn how to react in ways that will hopefully hurt others less. I can't even begin to imagine the magnitude of emotions that the author has dealt with both the initial loss and the flurry of findings after the fact, so their reactions in both of the posts were quite mild all things considered. I'm lucky enough not to have lost either of my parents yet, and even with the hope that I don't find out anything anywhere close to as drastic as these revelations when I inevitably do, I still don't have any trouble imagining acting far more vain and bitter based just on the sadness I feel without needing to add any of the other bombshells into the mix.

Having taken some flak here for my reaction to that line, I'd clarify that I don't blame the writer for feeling that way. That's perfectly natural. My critique is that the injection of a summary opinion cheapened the writing and flattened the complexity. In a literary sense, the author was doing a great job of showing rather than telling the reader what to think, up to that point. The reader may very well have drawn the conclusion that the father was a coward. In a legal sense, the flash of bitterness actually harms their case. It draws into question their reliability as a witness. Calling it a "slip" was insightful, as it implies both.

I think she absolutely has a right to her judgment. She clearly has empathy for her father, but the rest of her family also suffered--greatly, it seems--from how he went about his life.

She wrote most of it centered on his perspective (as she understands it), and if you read that line the same way, then you're right. I took that line as bringing in another perspective, to show the damage he caused. She has a lot to unpack, and showing those conflicts demonstrated it.

So when can we judge someone a coward then?

> I think calling him a coward was an unnecessary stroke of vanity and bitterness.

Given that the writer, who lived in the same culture with the same dangers and expectations, decided to accept the risks by coming out, I don't think it's vanity. They did what their father was too afraid to do.

It is absolutely bitterness, but I don't think you're in any position to judge the appropriate level of bitterness for a child to have towards their deadbeat parent.


He was so afraid of coming out that he FORCED his wife never to divorce him, kept a lover on the side whom he also cheated on with multiple partners at the height of the AIDS epidemic, and lied to that lover about a future he was too scared to ever make real. If that's not a coward, then I don't think you and I can agree on very simple definitions of words like "coward".

"Coward" is really not the kind of word that admits "very simple definitions" that people can agree on, no.

It is their own dad; they are allowed to be bitter.

It ain't all waiting on you. That's vanity.


For those reasons, I won't even look for such letters when my parents die. I will take a photo or two. There are of course reasons to dig into the past, but that should be done cautiously, not for sensation, and even then with the understanding that we know only a little and may not understand. The past is past. Nothing you learn can change it, but it can seriously fuck up your future.


> It's ok not knowing or judging

> One has to practice a type of "radical acceptance"

Here's a funny thing: what I got from that story was that it must have been a hard and sad life for the dad, and probably for the mom, and an especially horrifying discovery for the mom. These are not judgments, but tidbits of empathy and sadness for all the parties involved. I didn't have to force myself into that, probably because it didn't clash with my personality or values.

If something in the story rubbed you the wrong way and you want to condemn one of the people in it, I wonder whether forcing your brain into "accepting" would make any difference. The real question is what you feel for the other person. I think it might come out as a judgment if it clashes with your actual values and personality: if you don't recognize yourself in their approach and would have acted differently, you might take a negative view of the people in the story.

I'm extremely lucky to be a straight dude in today's progressive society. Had I been a gay guy in the traditional Chinese culture of the 80s, I'd probably have had the same life as that dad and employed some of the same strategies. So it's easy for me not to judge. But some people are more upfront, active, liberated, and for them it might be harder not to judge; and I think that's fine.


Your comment about not judging their integrity because they had different life experiences doesn't make sense to me. Integrity is absolute; you don't get slack on your integrity because you were dealt a bad hand. That being said, we ought not to judge anyway.

> I'm peering into the life of a private person, that feels like a violation. Even though they have passed, the people around them are very much alive.

Absolutely. A person's right to privacy doesn't die with them.


It kind of does. They’re dead.

[flagged]


If it were written to seek attention in 2025, it would be on Substack or Twitter, not on a quiet personal website that can't go viral unless someone else posts it on Hacker News.

She has a right to seek attention? And you are right: the truth untold and the moments that never were cannot be recounted. They can be grieved, and part of grief is anger.

I re-learned through my tears when reading this that the only thing that counts in life is love and connection. Connections not made are missed opportunities.

I lost a parent in my early twenties. Alas, anger was a very large part of my emotional arsenal then. The writer could have had a role model in her father, if only the truth had been there between father and daughter. Layers upon layers of difficult interactions. Thinking about your parents' death, and the period of time in which they made you, cared for you, formed you, hindered you, and burdened you with emotional baggage, is different with each passing of a few springs.


[flagged]


So what if it is? Incels are obviously suffering too; the fact that they presently hold a toxic and sexist opinion doesn't make them a moral non-entity. Instead they're just tragic. That doesn't mean you can, or should, help them, nor that you should tolerate their attitude when it threatens others. But it's tragedy all the same.

Why link closeted gay men with incels? There was no shade of “deserve” or “victim” in the parent comment. The fact is that gay men have historically had a very hard time finding love, while incels are a weird subgroup of hateful men with negative viewpoints, unless I'm out of touch with their zeitgeist.

I just think the comparison comes off as unkind to gay men.


People seem to disagree with this comment but it makes sense. Lots of people get no genuine sexual or emotional attention due to severe disabilities, cultural incompatibility, weight issues, or simply because they don't know how to socialize properly. It's odd to say they can't live a full life just because they didn't kiss a girl in the 9th grade.

If relationships are so key to the human experience, the incels would be right. They argue society should feel bad for them and accommodate them, because not being able to get sexual attention keeps them from having a normal life.

Not that I agree with them, but it seems odd to place so much value on relationships, except when people complain that not being able to get one is a problem. I have a severely disabled friend who talks about wanting to get married every day. No one has ever shown him that kind of affection, and I don't think anyone ever will. That's life for some people, unfortunately. If you keep telling them they're missing out on the most important part of life, of course it just makes them more frustrated.


Just because something is key to the human experience doesn’t mean any particular person owes the sacrifice of their literal bodily autonomy to accommodate someone who is missing out. We don’t have to pretend most people can live a happy life as a sexless hermit (we just ran a large natural experiment on this during COVID) in order to hold that nobody has to date someone they don’t want to.

Human relationships are a key part of most lives. Incels might have a point, but that does not imply that they have a solution to their own woes.

Yep. The world's a cruel place. The only choice for some people is to look beyond it for happiness.

That's individuals deciding not to be with other individuals. It's not two people of the same sex who want to be with each other but are arbitrarily prohibited by the rest of society.

Most people do deserve to be able to form emotional and sexual connections, and most people who are unable to in practice are not incels and deserve sympathy without complication. They’re victims, but only in the same sense that someone can be the victim of a hurricane. The important bit is that no person has a duty to be the one to provide those connections.


> They’re victims, but only in the same sense that someone can be the victim of a hurricane.

What about those who can't form connections because of emotional abuse in their past? I wouldn't call them victims of a hurricane, as if it were some kind of unpreventable natural disaster. They're victims of their abusers and of the people in their life who didn't intervene to stop the abuse.


That’s absolutely true. I was mostly trying to stake out a weaker claim against someone who seemed to think anyone who feels bad about not being able to find sex is an unsympathetic incel. And I don’t think even the people who are lonely simply due to vague social trends, like fewer tight-knit communities, have an unpreventable problem at the societal level. It’s just that there’s not an obvious perpetrator (pet theories about the causes of social decay notwithstanding).

Not at all: human connection and love are important and hard for most people to live without, and there’s nothing wrong with acknowledging that. The problem with incels is that they feel entitled to it, and use that as a basis to fuel hate towards others for denying them what they feel they are owed; there is no sense of that sentiment in the comment you replied to.


I hope this comment finally seals my departure from HN. Lots of very thoughtful people here, but the small toxic fraction is still too high.


Yes, there are some toxic people here, as in any community or population, but there are also thoughtful and compassionate people. The comments on this article seem to be mostly filled with the latter. I don’t know what your experience on HN has been, but I encourage you to look beyond that unpleasant post and consider the humane majority on this site before you make your decision.

We can also pick up hints from discordant production value. This is quite noticeable on websites such as Amazon/Alibaba/Etsy/Ebay/etc., where a lot of scam listings use AI images for cheap or basic items.

So even when the image shown doesn't present obvious flaws, the fact that it is suspiciously high quality for the item can itself be the tell-tale sign of AI generation.

This also isn't something that can be easily fixed: even if a scammer uses AI to produce convincing low-production-value imagery, the listing no longer achieves its goal, because it looks like junky crap.
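
A toy version of that heuristic, purely as a sketch: the quality scorer below is a hypothetical stand-in (e.g. for a no-reference image-quality model), not any real marketplace or library API.

    # Illustrative sketch of the "discordant production value" heuristic.
    # image_quality_score is a hypothetical stand-in, not a real API.

    def image_quality_score(image_path: str) -> float:
        # Stand-in: would return 0.0 (junky snapshot) .. 1.0 (flawless studio render).
        raise NotImplementedError("plug in a real image-quality model here")

    def expected_quality(price: float) -> float:
        # Crude prior: cheap commodity items rarely ship with studio-grade photos.
        return 0.3 if price < 20 else 0.6

    def looks_suspicious(image_path: str, price: float, margin: float = 0.3) -> bool:
        # Flag listings whose photo quality far exceeds what the price suggests.
        return image_quality_score(image_path) - expected_quality(price) > margin

The interesting design point is that the flag fires on quality that is too high relative to the listing, which is why lowering AI image quality doesn't help the scammer: it defeats the flag only by making the listing itself unappealing.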

