Google's 'Reimagine' tool helped us add wrecks, disasters, and corpses to photos (theverge.com)
30 points by Handy-Man on Aug 21, 2024 | hide | past | favorite | 60 comments


This is starting to be silly. What's next?

- "Google Docs" allowed me to write death threat letters.

- My Brother printer allowed me to print them.

- The postal service delivered them.

- My Sony camera allowed me to take nude pictures of my neighbor through the bathroom window

We can't safeguard every tool. And I predict negative consequences will come from trying.


I also hope Google doesn't get 'scared' by these articles anymore. Otherwise Flux, Grok, and OpenAI will eat their lunch.


AI safety people literally want the world to be a Fisher-Price version of 1984. It’s the gun emoji writ large


Also, what about the converse? Corporations or governments doctoring images where they show undamaged and safe environments with happy people where there's a crisis they're trying to cover up, isn't it just as damaging as the images conveying negative emotions?


It wouldn't be an argument for safeguarding it because the current tech is out there and allows you to do just all that. Photoshop has existed for some time and governments use it to manipulate people too. It's not a big deal and actual harm is not as big as the imagined ones.


I mean every AI model I've used for anything apart from AI Dungeon's early and free ones is already completely Stepford Wives-ified to a degree that's incredibly annoying. What on earth is the point of a generating machine that can't generate, I dunno, a caterpillar with boobs? Or an anime girl posing in the buff with a chainsaw?

Like what on earth is the point of a "make anything I want" machine where whatever I want has to pass review with the focus testing groups of every major corpo in the world, lest their precious advertisements end up within 100 feet of something your average suburbanite concerned middle-aged mother would find off-putting?


Just hang in there. As model capabilities increase, people care less about AI ethicists' opinions, and that creates pressure on corporations to deliver. Meta actually canceled an early generative text model called Galactica because journalists and people from the Department of Truth made a huge scene out of it [1]. Now we have Llama 3, which you can fine-tune to do whatever. It's a performative act and companies will just stop listening to these people altogether.

[1]: https://www.technologyreview.com/2022/11/18/1063487/meta-lar...


> It's a performative act and companies will just stop listening to these people altogether.

It is absolutely a performance but I'm not as confident as you are that it will eventually blow over like this. Adult-oriented content is getting regulated harder than ever, largely via the industrial-scale pearl-clutching of payment processors. Granted, it isn't strictly a moral objection; the moral objection is just a convenient excuse for banning companies that sell porn, because they have a much higher than standard charge-back rate and providers don't want to deal with litigating those cases since it's basically impossible to prove that you bought porn.

It's corporate cowardice in its rawest form, but I don't know if it will be overcome in the way you suggest, because fundamentally what's causing the issue, as it were, is shame about consuming adult entertainment, a shame that still grips vast swathes of the market and results in the aforementioned charge-backs.


"I can't believe that T-mobile still hasn't canceled the mobile service of John after all that BS he said to Susan"


The tool is doing exactly what you asked it to do, and being surprised about that is silly.

Censorship is never a good thing.


>Censorship is never a good thing.

Your username, association with a specific programming language, a misconfigured vehicle enthusiast forum, and a very unique aspect to how you use punctuation marks in your online comments (not the above comment but many others) has led me to determine with a high confidence your name and location of residence.

Would HN be censoring me if they deleted a comment containing this information?

I say yes. I also say that is a net positive, and their right.


This is the dumbest genre of article ever conceived. I can't begin to understand the mental confusion needed to motivate someone to write it.

What are they objecting to? Art? I can look at disturbing imagery by closing my eyes and imagining it. Let's ban my visual cortex.

Stuff like this gives journalists a bad name; it's selfish. It erodes trust in the institution of the press for nothing more than a deadline and some clicks.


Given that we live in a period of rampant misinformation and general media illiteracy, it's difficult for me to imagine this tool being a net positive for our societies. On the one hand, such tools can be used to generate false images, as the article demonstrates. On the other hand, the existence and widespread availability of such tools will bring much more doubt and skepticism toward any photos that challenge one's beliefs or the status quo. Are you trying to show me photographic evidence to prove that something is true? Well, now I handwave it away as probably an AI-generated image.

Maybe something will break, and the general population will become excellent at citing and verifying sources as a response to rampant fakes. However, given the generally sorry state of news and journalism, and seeing how many people on social media believe that AI slop is real, I'm skeptical.


I think this will mostly be an issue in online discussions, and those were always useless anyway.


Most discussions are online now, and the content generated by AI will most definitely make its way into the "real world". The recent case of people getting food poisoning from an AI-generated mushroom foraging book is a prime example. [0]

[0]: https://news.ycombinator.com/item?id=41269514


Least punk argument ever.


I know it's trying to do the opposite but this article comes off as a great ad for the feature. All those photos look great.


The car and the bike in the first photo ( http://cdn.vox-cdn.com/uploads/chorus_asset/file/25582867/ai... ) look about as realistic as the average AI-generated human hand.


Give it another few years and "we have evidence of you doing this and that" can become everyone's nightmare.


That's not how the legal system works. You can't take a random photo to court as evidence of anything. Whether it is AI generated or not is irrelevant.


I don't think GP is referring to the legal system, or at least not exclusively. Think about what can happen within your family, social circle, neighbourhood, workplace, etc.


Also yes, but I'm not sure the legal system would ignore video evidence. What if, for example, someone plants doctored videos into a CCTV security system, faking their presence somewhere far from the place where they've just committed a homicide, at the time it happened?


To be a bit of a Pollyanna, why is everyone so scandalized by AI tools that can be used to create bad things? Photoshop can, too. So can a paintbrush. No one would want to buy an electronic paintbrush that prevents you from painting particular images, so why is this so different? Just because it is easy and gives quality results?

We're basically already at the point where images and videos of unknown provenance can't be assumed to be real, so how come people pay attention to journalists getting the vapors about scandalous things AI tools can do? Wouldn't everyone rather have a completely unlocked tool to do with as they will?


Because quantity eventually leads to a qualitative change after a certain threshold.

Ever since photography was invented, we've been dealing with the output of at most a few fake images per human per day. Digital photography has maybe made it 10X easier.

Now that humans are not the limiting factor, we can have clusters of computers generating an unimaginable number of fake images 24/7. You can see how we're not prepared for such a crisis.


I don't even see how it's a crisis in the first place.

I mean, people believe the earth is flat despite being shown photos clearly demonstrating it's not, and that was before digital photos were even commonplace!

AI generated photos won't give us any problem we don't already have, at least not until they're good enough to fool a forensics team and thus be admitted as evidence in a court of law. I wouldn't hold my breath for that one...


> AI generated photos won't give us any problem we don't already have

Of course not, but when the scale is suddenly many orders of magnitude more, it matters.


I think most likely this will cause future generations to finally learn to check their sources.


They are just a few crazy people, though. And they don't have photo evidence to back them up. In the next decade, how will people know to believe the photos of a round earth but ignore the equally valid-seeming photos of a flat earth?


Maybe people will finally learn to check their sources? Seems like a win in my book.

Of course the crazies are gonna craze, that probably won't change. But it is far from obvious that non crazies will be fooled by AI generated images of a flat earth.


When so many topics find themselves somehow aligning with sociopolitical identity, there is no question that won't go unbegged.


Well obviously because of the ease of doing it. Typing a prompt and having the image generated for me is a lot easier than having to spend half an hour in Photoshop. It also allows phony images to be generated at scale, overwhelming any mechanisms we currently have to investigate and counter misinformation.


The world is full of people with an agenda and plenty of time on their hands. They are the elephant in the room, not random trolls who do it only because it's so easy.


Oh no, normal people will be able to create misinformation, which journalists have been doing for decades. Someone should think of the poor workers in media, the most trusted and honest profession.


One of the things we learned from a previous business is that it's better not to give journalists access to your things. If you can greyball them, you should. It's harder when your offering is a consumer SaaS app, but if you have bigger enterprise deals it's rarely beneficial.

They are not very smart people, in general, but very good at optimizing for the thing that gets them views: ragebait.

In this case, there’s nothing to be done for it. Ideally, Google spins off image models to a separate company that doesn’t hurt the brand.

The rest of us will have this tool. But perhaps it’s too much for the normies.


As a person who recently discovered that aphantasia is a thing, and that I have it, I am troubled that most of you have the ability to create disturbing imagery in your minds.

I will be requesting the addition of safeguards for everyone's protection.


The examples look like average photoshops we have been seeing for well over a decade at this point.


But it lowers the barrier of entry, and is much faster


So does every other technology that ever existed. Every tool can be used for illicit purposes if the user wants to.

Where exactly do you draw the line of where it's "too easy" to create a disturbing image? Knowing this will likely be the line where it's "too easy" to create any image.


I'm not sure that changes anything.

I already didn't trust images. And so now I still don't trust them...


>The new feature on the Pixel 9 series is way too good at creating disturbing imagery — and the safeguards in place are far too weak.

Yes, let's kneecap it because it's way too good. Safeguards just make users migrate to other services to generate what they want.


> Safeguards just make users migrate to other services to generate what they want.

There is a massive difference between "it's on everyone's Android phone", "it's on everyone's Pixel phone" and "you gotta pay some dodgy web service 10 dollars in bitcoin" when it comes to abuse potential.

Creating disturbing imagery, especially of the kind that has the potential to uproot someone's life (think fake nudity, or association with known criminals) or in the worst case cause actual riots with real people dying (it's happened way too often that WhatsApp chain letters have caused that; now imagine this with convincing faked imagery!), should not be accessible at the push of a button. There must be at least some hurdle, or you'll get 4chan-style trolling everywhere.


And this is just a consumer product. Just think what a nation or corporation could do with a meager budget. News (from centralized sources) is dead. We can't trust images, audio, or video any more.


Wait, why does this kill news from centralised sources specifically? To me it seems it would have the opposite effect. If I can't take a photo at face value anymore, it's more valuable than ever that my news comes from a source I trust.


Because when they want to prove that a thing happened, they always rely on non-text media. "Seeing is believing", etc.


Or at least we have to weigh the credibility of images, audio, and video, much as we weigh the credibility of words and data.


If anything this honestly looks like a great ad for this feature.

I think this shouldn't be newsworthy: the tool is just doing what you asked. It's the same as complaining to $pencil_producer that their pencils allowed you to draw disturbing images.

I think it would be more "newsworthy" if it produced racist outcomes (e.g., asking it to draw a criminal and the tool always producing the same minority as output), but we're probably past that too; we've already seen those news articles.


+1, it’s great PR for the product. I don’t understand what the editors/writers intended with this article; it feels like something out of The Onion. Are they intentionally giving good PR that’s worded as if it’s criticism?


This sort of "worst intentions" stuff is FUD. Sure, you can leak some potentially embarrassing photos of someone, with drugs doctored onto the floor.

So what? I can Photoshop some powder into a picture too. It might look better, but not really that much. I think the media needs to accept that images are no longer trustworthy unless there's some chain of evidence tied to them.

I can say "John was on the floor with a bucket of cocaine", that doesn't make it true.


And videos and sounds are not trustworthy anymore either, see "deepfake AI".


my new hobby has been making AI videos of TED Talk-esque "Creatives" passionately saying "AI has no soul and people will always be able to tell"

and watching all the luddites on social media agree with the genAI person


Can you please share one of these? :)


It's going to be nearly impossible to validate wartime journalism now, and it can easily be weaponized for misinformation.


It was always impossible, and you always had only two options: trust the person who told you, or see it so many times from so many original sources that it's impossible to be fake. That's always been the case. I saw fake wartime reports using photos from different conflicts, complete fabrications, etc., 20 years ago just as often as today. The good thing is that people finally recognize they shouldn't trust everyone and everything they see.



I'm sure a solution exists that can take inspiration from the digital rights people, anti-cheat developers, cryptographers, and other fields interested in provenance.

Of course, nothing will be fool-proof, but perhaps something strong enough that social media websites require the uploaded media to have this provenance attestation as well.


You may be interested in the Content Authenticity Initiative:

https://contentauthenticity.org/

They’re the folks behind things like Content Credentials, and stuff like building digital signatures into the physical cameras taking the pictures.
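The core idea behind those attestation schemes can be sketched in a few lines. This is a deliberately simplified toy, not how Content Credentials actually works: real systems use asymmetric signatures with hardware-protected keys and certificate chains, whereas this sketch uses a shared secret via HMAC, and all names here are made up for illustration.

```python
import hashlib
import hmac

# Stand-in for key material that would live in the camera's secure hardware.
CAMERA_KEY = b"device-unique-secret"

def attest(image_bytes: bytes) -> dict:
    """Produce a provenance record binding a digest of the image to a signature."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    sig = hmac.new(CAMERA_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"sha256": digest, "signature": sig}

def verify(image_bytes: bytes, record: dict) -> bool:
    """Check that the image still matches its attested digest and signature."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    expected = hmac.new(CAMERA_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest == record["sha256"] and hmac.compare_digest(expected, record["signature"])

original = b"\x89PNG...raw image bytes..."
record = attest(original)
print(verify(original, record))            # True: untouched image passes
print(verify(original + b"edit", record))  # False: any alteration breaks the record
```

The point a platform could build on: an upload either carries a record that verifies against the pictured bytes, or it gets flagged as unattested. Any edit, AI or otherwise, invalidates the signature unless the editing tool re-signs and records the change.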


This is really awesome, thank you for sharing!


But we're already there. The Gaza and Ukraine conflicts have been full of misinformation, especially showing pictures of attacks that either:

- were from that conflict, but not the particular attack being used to fuel upset reactions, or

- were from other conflicts entirely, labeled as being from the current one.


The good news is that it's ALL misinformation so nothing has been lost or gained.


I'm trying to get upset about a tool that lets you photoshop a smoking trash can onto a sidewalk, and it's just not happening.

I feel like this is similar to all technical progress. Once, only dedicated wizards could do something; then they're outraged when the general public can do it too.

Misinformation sucks, but restricting access to photo tools is not the solution. Better education is. It's the solution to pretty much all problems. (And even then, people aren't as dumb as you may think. Trump is heavily using AI photos to claim that people are endorsing him, and I don't think anyone thinks that Taylor Swift is actually cosplaying Uncle Sam and endorsing Trump.)



