I feel there should be a government department where all the "I turned this into DOOM" folks get hired to build the next spicy interplanetary propulsion systems. They clearly need an exotic task to stop twiddling their thumbs.
At this point, I’m just waiting for someone to implement a CSS-only browser inside this CSS-only Doom, so we can achieve full recursive insanity. The 'Can it run Doom?' meme has officially transcended hardware and entered the realm of pure Turing-complete masochism.
Within a week they would have a rocket prototype where you can plug in a mouse and keyboard and play Doom on the exhaust flames by mixing different fuel chemicals.
Man, I find the HN crowd so cross and fickle sometimes. I think it's just because when companies get a bad rep, it affects how people view the products? I'm autistic and tend to focus on the tech.
Sora (whatever that means) was one of the most astounding demos I've probably ever seen (ChatGPT was more gradual).
The shock and awe of rendered AI video blew my mind.
Yes, months later everyone can do it, is bored by it, and has strong opinions about what is or isn't right for society.
But it was a monumental piece of tech, and I personally (clearly incorrectly) think the top comments should be appreciative of the release and the impact.
Personally I think the lack of nudity destroyed the adult market, but I don't know enough tbh.
Sora was a bit like seeing a new weapon being demoed. No matter how much engineering went into it, the overwhelming feeling was
“this is bad for society and the consequences will be massive.”
So far that’s been exactly it. Now AI generated videos are primarily used to scam, deceive, and ragebait.
Exactly! While there may be some neutral to slightly positive uses of this tech (haha funny video), I can only really see the evil uses of it: scams, misinformation, propaganda, all easily created by anyone at massive scale.
I really don't see the argument for this tech being any kind of good, unless you think moving into an era where you cannot trust any image or video is somehow a neutral outcome, AND are happy about the people who are in control of this tech. Which I guess captures a larger part of the HN crowd than I'd hoped.
My perspective is different: we never could trust videos and images in the past. Our hope, back then, was that the cost of faking such media (despite us being in the age of information and media) would remain permanently high and would deter people from doing so. But this was always wishful thinking.
GenAI has presented tangible proof of such risks and is forcing society to reevaluate the way we trust evidence. In my eyes, it serves as an opportunity to move our foundations of trust away from the goodwill of random authorities and toward something more objective.
Also, I haven't really seen anyone celebrating the large corporations who control AI tech. Could be simply the people I'm involved with, but most AI enthusiasts I've seen are more interested in, at the least, open-weights AI models.
IMO what's really wishful thinking is believing that society will necessarily adapt for the better in response to a deluge of AI spam/ads/propaganda.
You could have said the same about say, pre-AI deceptively edited/ragebait/made up content going viral on FB, "actually this is good because soon people will realize they are being tricked/lied to, they'll think extra-critically before sharing dubious content next time".
Which has not happened. I can only see AI videos/images making the problem worse as people are fed personalized, narrowly targeted content that seems to perfectly appeal to their own beliefs/biases/emotions/etc.
Also, if anything it seems like we will have to trust authoritative groups more thanks to GenAI. If I have to consider every video on the internet from e.g. Iran as fake, I'm going to turn to NYT or WSJ who can be relied on to (usually) share only original content, or highly vetted 3rd party content.
I agree that the solution we may find might not necessarily be for the better. In fact, there are a couple of solutions I've seen that fall into that category, like banning GenAI (which does nothing to solve the underlying issue, while control over economic production always requires increased authoritarianism).
I can't really provide a truly good solution, as this problem has large ramifications in philosophy and ethics, but I'd think it would involve approaches like attestation and certificates, and, primarily, thinking of shared media (text, images, videos, etc.) not as facts but strictly as allegations.
"I'm autistic and tend to focus on the tech" is not a justification, and I would advise to stop using it as such.
Would you apply the same to killing robots? Hey, the Hyperthrasher 2000 mauls people and shreds them to pieces, but it's the most impressive TECH demo I've ever seen!
Totally disagree that this is what would happen. The Hyperthrasher 2000 breaks through my door to eat me, and all I can think is: first time I've seen a man-made, human-eating werewolf bot.
It doesn't matter whether you agree that would happen; the analogy is valid. You're essentially admitting that you're ignoring the negative impacts of the tech for the sake of how impressive it is.
You can feel that way if you want, but to answer the confusion you posed in your initial post, most people do consider all aspects of a technology rather than just focus on the technical achievements. We live in a society of billions of humans interacting with each other, and whether or not you personally care or understand those interactions, they still do exist and still impact all of our lives. A particular technology may be cool, but if it threatens the lives of me or my family, I'm going to have a negative view of it.
Nothing exists in a vacuum and the way technologies affect people living in the world is a fundamentally important aspect of the technology itself. To ignore them would be like celebrating a cool new engine design but overlooking the fact that it has a tendency to explode and kill everyone in the car. If the primary effect of a technology is human suffering, then it isn't cool!
The tech was fine/interesting for what it is. The product itself is awful and something from nightmares. It's not an enjoyable experience for me watching some uncanny valley slop. I'm not impressed with the "creativity" of someone typing in a prompt and having a plagiarismbox spit something out. The ingenuity and resourcefulness of someone actually making something is what I like. The emotion and reasons behind a work of art make it inspiring. The details of their perspective and choices they make when creating it are beautiful and interesting.
The impact of easy AI generated video is a less certain and less secure world. You can't trust your eyes anymore because of how fast and easy it is to fake video and moments. You can't trust communications with someone because how easy it is to impersonate them over video and voice. Scams involving tools like this are already running rampant and it will only get worse. The sheer level of distrust these tools have unleashed into the world makes me wish they never existed. They have burned millions (billions?) of dollars on this when that money would have been better served going to the creators whose work they stole to build it. It's rotten.
> I think the lack of nudity destroyed the adult market
As we've seen from Grok, building a system for producing non-consensual nude images of other people will get the legal and PR hammer brought down on you fairly quickly. It's just an incredibly unethical thing to do.
I have gladly been paying $20/month for ChatGPT since the day web search was available and I use codex-cli every day instead of Claude and never have to think about limits.
I also use ChatGPT as my default search engine and to help me learn Spanish.
But image generation and video generation were a nice parlor trick, not useful for me except for generating icons for diagrams.
But like you said, porn makes money, and there are people who pay $300 a month for Grok to generate AI porn.
> there are people who pay $300 a month for Grok to generate AI Porn.
Did you just make that up?
Grok barely makes "M-rated" nudity, let alone porn. Musk recently claimed it can do "R-Rated content", but his post got a community note saying otherwise.
Grok has gotten a lot stricter about video from uploaded images. But it is still able to make realistic X-rated porn from AI-generated images it creates.
There are various jailbreaks that have been working for the longest time and still work. From just a brief look, half of them involve "anime borders" and "transparent anime watermarks" over videos.
Your comment made it sound like "out of the box" Grok can generate AI porn. It can't.
That Reddit sub you mention is tame compared to something like unstable_diffusion, where the AI-porn hobbyists use locally installed models. Some of the comments in the grok_porn sub are complaining about censorship, and literally complaining about how the anime hack isn't working. So you've only confirmed my point and contradicted your own.
I've been messing around with sci-fi horror themes including graphic gore. Grok now does gore when before it wouldn't. When I tried nudity, it refused. This is with AI-generated images from scratch, nothing uploaded.
Even "romantic love scene between consenting adults" was denied by Grok. It did 6 seconds of lightweight kissing, then refused to continue. The overwhelming evidence is that Grok does not ordinarily do "AI Porn". It doesn't advertise that it does, and won't produce it in normal circumstances when prompted.
I am not going to post links I saw on r/grok_porn, where within the past two weeks they posted Grok generating oral, vaginal, and anal sex using the anime hack, extended up to 30 seconds with the "extend video" feature. I am trying to keep this somewhat appropriate.
That's not even counting all of the prompts that are never shared on Reddit; they talk about sending them privately via DM so xAI won't patch them.
I'm not talking about that. Grok is really strict now about what you are allowed to do with uploaded pictures, but there are well-known techniques to get it to create X-rated realistic video using pictures it generates from scratch.
Anyone can use a range of offline tools and processes to generate nasty images, then blame whoever they want for that image. But who cares about that when there's outrage to spread, am I right?
Also, using Musk as a source... yeah, sure. As if that's any better than sourcing his ex. If Musk says he's seen none, then there are none; after all, he never lies and always takes criticism about his companies seriously. Good job playing down the situation, classy act. We're not talking about some difference in opinions here; it's about deepfakes, including of children. Remember, it would be an issue even without children being involved; that just makes it a magnitude worse.
Interesting to hear your perspective. There was no shock and awe to me, ChatGPT changed what I thought was possible with computers, and everything else as far as photorealistic generation and then video just seemed inevitable. I decided to abstain from watching any video I know is AI, but of course now it’s mixed in with television and advertisements. I’ve started data hoarding old TV shows thinking it will be nice to have something to watch when the internet goes down.
While Apple's own use of the tracking was little more than a party trick, the foundational technology they created for it is currently the best low-budget facial tracking solution and is heavily used in VTubing (online streamers who use an avatar with live facial tracking instead of showing their face via webcam).
Are these the Memojis or whatever Apple calls them these days? Pretty much every iOS update mentions them near the top of the list and I still have no idea where to find / create / care about them...
I think Sora is an excellent way to see how people's beliefs clash with reality. Even in this post, I see people likening Sora to unveiling "a weapon", it filling them with "bland dread", or comparing it to creating "killing robots". But now that Sora is being shut down, what impact did Sora actually have on society, other than getting a couple of people to waste their time making some funny meme videos? Did any of those negative externalities actually play out?
If you are autistic, I feel that it causes you to see reality more accurately than most here on this thread.
At least according to the Head of Product at X, Sora was by far the most widely used tool to create fake war videos[0] aiming to push various false narratives. Given how popular fake content is at Meta I can only imagine what they see there (if they even have anybody looking at this kind of thing).
On X, viewing actual war footage was locked behind age-gating and identity verification, while any idiots' fake war footage was uncensored and consumable by anyone.
I understand that misinformation is a bad thing, and your point is taken that I was probably too quick to brush off the worst thing that Sora did as 'some funny memes'. But still. Photoshop is used to make a lot of misinformation, probably 1000x to 10,000x as much as Sora did, or even more than that. Does anyone say the latest version of Photoshop is like unveiling a weapon? Does anyone say that AI driven generative fill in Photoshop is like creating killing robots?
Sora was one of the earliest demos of a "wow okay that is good enough to be mistaken for real" GenAI model, which is what that comment was referencing with the "weapon" reference (the tech behind it not just Sora™ Videos).
Sure, by the time they productized it, Sora was no longer SOTA thanks to the AI arms race. And ultimately positioned as a TikTok for Slop with an annoying watermark so didn't take the world by storm on its own.
But since it was unveiled GenAI videos as a whole have become commonplace everywhere else on the internet, with plenty of negative impact already in terms of spam or manipulation, and we're barely in year 2 so far.
It's not that. The demo was impressive, but when it became widely available, the reality never lived up to what was demoed, and it later came out that some of the shorts they did with directors had a lot of editing anyway.
"The [AI researchers] have known sin, and this is a knowledge which they cannot lose."[0]
That's what I would hope would happen, but looking at their 7-figure salaries, they're probably fine not thinking about the consequences of their actions.
Sure, the tech was cool, but people already hated YouTube Shorts when they were added. I think the "HN crowd" is probably the type to dislike short-form content, so that might be where some of the dislike comes from.
The tone of a discussion is shaped as much by who doesn't comment as who does. A product comes out and a lot of people are excited by it, they comment accordingly. People who aren't, don't, unless there is something outrageous about it. Maybe there is in this case but the point still stands that when the product fails, it's a very different set of people who feel compelled to comment. And this is totally expected because "that's a shame, I liked it" doesn't seem to contribute to the discussion. Neither does "this product doesn't excite me", even more so because that's kind of the default assumption. So an online community or institution or publication can seem very fickle, especially when the commenters are pseudonymous.
Also, I remember the excitement of a new game that looked different from others.
Somehow, even as a child, I just knew that it would be a whole new emergent gameplay experience.
Of course I didn't know what went into making RollerCoaster Tycoon, but I could tell by just a couple of screenshots that this was clearly a ground-up new game with new mechanics that would be extremely fun to play.
I don't get this feeling anymore, as I generally just assume everything is a clone of another game in the same engine.
Unless it's been a decade in production like Breath of the Wild or GTA 5, I just don't expect much.
I'm now out of the workforce and can't even imagine the complexity of the systems as management and everyone else communicate plans and executions through Claude. It must already be the case that some codebases are massive behemoths few devs understand. Is Claude good enough to help devs maintain and stay on top of the codebase?
Have we reached the point where it's "normal" to mostly use AI to code? I'm just wondering because I'm sure it was less than a month ago that I said I hadn't coded manually for over 6 months, and I got several comments about how my code must be terrible.
I'm not butthurt, I'm just wondering if the Overton window has shifted yet.