


Is it really here to stay? If the wheels fell off the investment train and ChatGPT etc. disappeared tomorrow, how many people would be running inference locally? I suspect most people either wouldn't meet the hardware requirements or would be too frustrated with the slow token generation to bother. My mom certainly wouldn't be talking to it anymore.

Remember that a year or two ago, people were saying something similar about NFTs: that they were the future of sharing content online and we should all get used to it. Now, they still might exist, it's true, but they're much less pervasive and annoying than they once were.


Maybe you don't love your mom enough to do this, but if ChatGPT disappeared tomorrow and it was something she really used and loved, I wouldn't think twice before buying her a rig powerful enough to run a quantized downloadable model on, though I'm not current on which model or software would be best for her purposes. I get that your relationship with your mother, or your financial situation might be different though.
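
(For the curious, a minimal sketch of what that local setup could look like, assuming the llama-cpp-python bindings; the model file name is illustrative, not a recommendation:)

    # Local inference sketch, assuming the llama-cpp-python bindings.
    # The GGUF file name below is hypothetical, not a recommendation.
    from llama_cpp import Llama

    llm = Llama(model_path="some-chat-model.Q4_K_M.gguf")
    out = llm("Good morning! How do I repot a fern?", max_tokens=128)
    print(out["choices"][0]["text"])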


> Maybe you don't love your mom enough to do this

I actually love my mom enough not to do this.


Maybe you should talk more to your mother so she does not need an imaginary friend.


Please tell me this is satire.


It's just your average AI user. Too much "you are right" makes them detached from reality.


No man, this must be satire.


> I get that your relationship with your mother, or your financial situation might be different though.

Fucking hell


> that they were the future of sharing content online

nobody was saying that


People right here on HN were adamant my next house would be purchased using an NFT. And similar absurd claims about blockchain before that.


And it's at least interesting that it's a lot of the same people pitching AI now who were all so excited about blockchain and NFTs and the metaverse.


I don't agree it is 'almost worse' than the slop, but it sure can be annoying. On one hand, it seems somewhat positive that some people have developed a more critical attitude and question things they see; on the other hand, they're not critical enough to realize their own criticism might be invalid. Plus I feel bad for all the resources (both human and machine) wasted on this: perfectly normal things being shown, and people who know nothing about the subject chiming in to claim it must be AI because they see something they don't fully understand.


My main exposure to this was just in a couple of online social communities.

1. AI happens.
2. Every response (often a meme in itself) is complaining about the AI. Hell, some of them were clever in the way a brand new meme template was in 2015.
3. Memeing about the AI escalates to the point where a few borderline freaking death threats start sneaking in.
4. Someone posts thoughtful original content, and the whole place degrades into a "thank god it's not AI" meme.

Or, let’s fragment our already tiny community into NO AI SLOP

I’ve seen this exact thing happen in three very niche communities.


"You know what's almost worse than something bad? People complaining about something bad."


Shrug. Sure.

Point still stands. It’s not going anywhere. And the literal hate and pure vitriol I’ve seen towards people on social media, even when they say “oh yeah; this is AI”, is unbelievable.

So many online groups have just become toxic shitholes because someone posts something AI-generated once or twice a week.


This kind of pressure is good, actually, because it helps fight "lazy AI use" while letting people use AI in addition to their own brain.

And that's a good thing, because as much as I like LLMs as a technology, I really don't want people blindly copy-pasting stuff from them without thinking.


The entire US GDP for the last few quarters is being propped up by GPU vendors and one singular chatbot company, all betting that they can make a trillion dollars on $20-per-month "it's not just X, it's Y" Markov chain generators. We have six to twelve more months of this before the first investor says "wait a minute, we're not making enough money", and the house of cards comes tumbling down.

Also, maybe consider why people are upset about being consistently and sneakily lied to about whether or not an actual human wrote something. What's more likely: that everyone who's angry is wrong, or that you're misunderstanding why they're upset?


Fascinatingly, as we found out from this HN post, Markov chains don't work when scaled up, for technical reasons, so that whole transformers thing is actually necessary for the current generation of AI.

https://news.ycombinator.com/item?id=45958004
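
(For anyone who hasn't seen one: a Markov chain generator conditions each next word on only the last word or two, with no long-range context. A toy bigram sketch in Python, purely illustrative:)

    import random
    from collections import defaultdict

    # Toy bigram Markov text generator: each next word depends only on
    # the single previous word; there is no long-range attention at all.
    def train(text):
        table = defaultdict(list)
        words = text.split()
        for prev, nxt in zip(words, words[1:]):
            table[prev].append(nxt)
        return table

    def generate(table, start, length=20):
        word, out = start, [start]
        for _ in range(length):
            nexts = table.get(word)
            if not nexts:
                break
            word = random.choice(nexts)
            out.append(word)
        return " ".join(out)

    table = train("the cat sat on the mat and the cat ran off the mat")
    print(generate(table, "the"))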


I feel like this is the kind of dodgy take that'll be dispelled by half an hour's concerted use of the thing you're talking about

short of massive technological regression, there's literally never going to be a situation where something that amounts to a second brain with access to all the world's public information isn't incredibly marketable

I dare you to try building a project with Cursor or a better cousin and then come back and repeat this comment

> What's more likely: that everyone who's angry is wrong, or that you're misunderstanding why they're upset?

your patronising tone aside, GP didn't say everyone was wrong, did he? if he didn't, which he didn't, then it's a completely useless and fallacious rhetorical question. what he actually said was that it's very common, and, factually, it is. I can't count the number of these types of Instagram comments I've seen on obviously real videos. most people have next to no understanding of AI and its limitations and typical features, and "surprising visual occurrence in video" or "article with correct grammar and punctuation" is enough for them to think they've figured something out


> I dare you to try building a project with Cursor or a better cousin and then come back and repeat this comment

I always try every new technology to understand how it works and expand my perspective. I've written a few simple websites with Cursor (one mistake and it wiped everything, and I could never get it to produce any acceptable result again), tried writing the script for a YouTube video with ChatGPT and Claude (full of hallucinations, which – after a few rewrites – led to us writing a video about hallucinations), generated subtitles with Whisper (with every single sentence having at least one mistake), and finally used Suno and ChatGPT to generate some songs and images (both of which were massively improved once I just made them myself).

Whether Android apps or websites, scripts, songs, or memes, so far AI is significantly worse at internet research and creation than a human. And cleaning up the work the AI did always ended up taking longer than just doing it myself from scratch. AI certainly makes you feel more productive, and it seems like you're getting things done faster, even though you're not.


simply, you're using them wrongly


Let's assume that's true — I'm just bad at using AI.

If that were the case, everyone else's AI creations would have a significantly higher quality than my own.

But that's not what we observe in the real world. They're just as bad as what I managed to create with AI.

The only ones I see who are happy with the AI output are people who don't care about the quality of the end result, or its details, just the semblance of a result.


you're ignoring survivorship bias. anything text-based you can tell was made with AI input is something that was made using the AI poorly


If that was the case, that'd be great. I don't necessarily care how something was achieved, as long as the software engineering and architecture was properly done, requirements were properly considered, edge cases documented, tests written, and bugs reported upstream.

But it's not the case. Of course, I could be wrong – maybe it's not AI, maybe it's just actual incompetence instead.

That said, humans usually don't approach tasks the way LLMs do. Humans generally build a mental model that they refine over time, which means that each change, each bit of code written, closely resembles other code written at the same time, but often bears little resemblance to code nearby. This is also why humans need refactoring – our mental model has changed, and we need to adjust the old code to match the new model.

Whereas LLMs are influenced most by the most recent tokens, which means that any change is affected by the code surrounding it much more than by other code written at the same time. That's also why, when something is wrong, LLMs struggle with fixing it (as even just reading the broken code distorts the probabilities, making it more likely to make the same mistake again), which is why it's typically best to recreate a piece of code from scratch instead.


this doesn't really negate or address the fact that the sample you're basing your position upon clearly doesn't account for the content that you couldn't tell was made using AI

I only gave AI coding assistants as a secondary example of why AI obviously isn't something that people are suddenly going to realise they don't need, and you're over-focusing on it because you clearly have an existing and well-thought-out position on the topic, but it's completely beside the point

this thread is about AI generated text content online


What isn't going anywhere? You're kidding yourself if you think every single place AI is used will withstand the test of time. You're also kidding yourself if you think consumer sentiment will play no part in determining which uses of AI will eventually die off.

I don't think anyone seriously believes the technology will categorically stop being used anytime soon. But then again, we still keep using tech that's 50+ years old as it is.




