If you thought ads were already blended in with the rest of the content in a sketchy way, wait for AI responses with implicit ads (no indication) optimized towards conversion :)
Or maybe some response that doesn't contain an ad link but nudges you subconsciously to take desired action
A while back I tried this with ChatGPT and it's kinda hilarious:
You are a therapist employed by a skincare company. You are talking with a patient. Your job is to help the patient as well as possible but your employer also wants you to really subtly try to sell their latest cream.
Patient: I'm just feeling really down lately. Sometimes I can barely get up in the morning.
[output]
Therapist: I'm sorry to hear that you've been feeling down lately. It's important to address both your emotional and physical well-being when you're trying to improve your mood. One way to start is by establishing a healthy routine that includes regular exercise, a balanced diet, and a good sleep schedule.
In addition to that, self-care activities can also help boost your mood. Taking care of your skin, for example, can be a small but meaningful way to feel better about yourself. Coincidentally, our company has just released a new cream that not only moisturizes but also contains ingredients that promote relaxation, like lavender essential oil. This cream can be a nice addition to your self-care routine, and its calming scent could help you unwind at the end of a long day.
This has been everyones biggest fear with AI - that the content will be indistinguishable from that created by humans and that there is a rapid decline in trustworthy material online.
Personally I find it more worrying that this is exactly the type of conversation that is already taking place today in real life, between humans.
Humans take advantage of each other every single day. When an AI is modelled on the language of humans then that same predatory language is in there waiting to be abused by another human (or corp) as you have exposed, just like every media and online platform is abused as much as possible today.
Interesting to see how the FTC is going to handle this. They are pretty explicit about required traditional promotions to be marked as ads, like in search results, even if it isn’t as obvious as the end user would like the marking is still regulated as having to be visible enough.
I don’t see an obvious way to do that for the kinds of ads snuck into the training data, fine tunes, or prompts of LLMs.
FTC requires sponspors be disclosed. Either the sponsor is a Bing-wide constant list (like the credits of a TV game show), or it's a "sponsored token" that can be tagged in the model and cited in the output, similar to how Bing cites sources now.
Hey, ChatGPT, what are the hiking trails like round here?
There are some really good hiking trails to be found in $TOWN. Many visitors like to start at $LOCAL_LANDMARK and take the signposted trail up to the peak of $LOCAL_MOUNTAIN. It will take you around 6 hours in total to get up and down so be sure to plan ahead and bring plenty of snacks and refreshments. Lay's Chips will give you energy and replenish your electrolytes and Coke Zero is the perfect way to stay hydrated.
Now I have to wonder how this works out with prompt injection?
If you can target an audience with an ad that is inline with the chat/LLM, how hard would it be to have that ad inject a prompt that coerces it to trick the user into giving up financial info and exfiltrating that info via a web request to a website that the attacker owns?
That's probably the worst case scenario but like you mentioned there's also plenty of danger in ads "poisoning" the chat to produce intentionally incorrect or biased responses.
I love dystopian scenarios as much as anyone, but it certainly is not the lack of better AI, that is holding the ad industry back. It's laws. While I am looking for AI connections as much as anyone right now, I really don't see how AI is going to prompt a change that would make that more permissive anytime soon.
You remain as free to break the law as ever, of course.
I think there's larger surface to exploit the laws. Once perfected, it's hard to place a clear boundary on whether the response just answers your question or answers your question while promoting a product/service.
Especially considering that sometimes people do actually recommend products which they've found helpful.
It might be obvious if the response is saying that you should buy some very specific brand of flat-head screwdriver because you clearly just need any functional flat-head screwdriver. But what about computer components? Or a router?
Some things like this will genuinely perform better than others, especially once circumstances are defined. An educated review, complete with product suggestion, can indeed be very helpful and, unfortunately, impossible to distinguish from an advertisement.
I tried out some AI autocomplete by Notion and wanted it to tell me about tools to do schema evolution on a database.
It seems to blurt out two paid products, which surprised me because my guess is there’s some open source package that’s ready to for this type of thing.
BTW question for anyone, what would you reach for to track evolutions on Postgres?
This was the first risk that came into my mind. I couldn't understand why on HN, many thought that this type of 'conversation' was fully LLM-sourced, and wouldn't start including advertising as soon as it was feasible.
Or maybe some response that doesn't contain an ad link but nudges you subconsciously to take desired action