>It begins with a person witnessing a crime scene involving a knife, then shows an AI system introducing misinformation by asking about a non-existent gun, and concludes with the witness developing a false memory of a gun at the scene. This sequence demonstrates how AI-guided questioning can distort human recall, potentially compromising the reliability of eyewitness testimony and highlighting the ethical concerns surrounding AI’s influence on human memory and perception.
I'm sorry, what is "AI" about it? That's just basic human psychology. How is this different from being manipulated in the same manner by a human?
I would view this from a practical perspective: Institutions are considering moving their dumb questionnaires behind chat-bots, because they think it'll somehow be more efficient, and here's this research showing that in at least one important case there's a big unexpected danger.
While it is possible that hiring an unbiased human intern to "guide people through the paperwork" would give a similar effect... as a practical matter no institution wants to pay for that, so it's not an option that's on the table.
That said, I wouldn't rule out the idea that an LLM could be worse than an average human helper, since their correlation-following may cause them to introduce stuff into the conversation where a human wouldn't think to or would know better. [0]
You’re correct, it’s nothing special. Police have been implanting memories in suspects using similar techniques to get false confessions for decades.
A few months ago I called up a reporting hotline to report an incident.
Now it wasn't an in-person law enforcement encounter, and I wasn't the suspect but the victim. The agent on the phone was only tasked with taking down the report and forwarding it.
She listened to my narrative and took down the facts. Then she began to relate it back to me, and at every turn, she gave me the wrong details and altered the story.
So I found myself correcting her again and again, ironing out the actual facts until she had them right. And I came to realize that her mistakes were probably not accidents: she was intentionally prompting me to reinforce the same narrative I'd stated, because someone who is lying, fabricating, or embellishing the truth won't be able to repeatedly insist on the facts as retained in their memory.
Conversely, I've had interactions with authority figures who seem to intentionally misspeak as a test. They want to see whether I will challenge the veracity of what they said, or whether I can accept that their knowledge counts for more and that perhaps I shouldn't openly question them on every trivial matter.
So if the police succeed in implanting false memories, then maybe that person just had a shitty memory to begin with. If someone's involved in a crime, even as an eyewitness, it's important to work with their perspective, because testimonies consist of a lot of subjective information and different people have different capacities for recall. Ask 3 eyewitnesses what happened and you may get 3 different but true stories, which you then reconcile. Just ask Matthew, Mark, and Luke.
Is this part of LE training? I'm an attorney. I had a client who was a detective. He must have asked me 50 times about the same thing, slightly varying the order of the facts, hoping, I suppose, to catch me giving a different answer. I don't think he liked the answer, because it cost him money, but it was what it was.
Last time I had to contact the police, I wrote down my story (as the victim) first, and then, after talking to them, I went over my notes again just to make sure. To be fair, I normally do that before talking to any authority figure.
> So I found myself correcting her again and again, ironing out the actual facts until she had them right. And I came to realize that her mistakes were probably not accidents: she was intentionally prompting me to reinforce the same narrative I'd stated, because someone who is lying, fabricating, or embellishing the truth won't be able to repeatedly insist on the facts as retained in their memory.
I think I misunderstood you the first time I read this, so let me verify my revised understanding:
You're saying that she was purposely feeding back false information to check whether you were a reliable narrator? If you fail to correct misinformation, then you have a loose relationship with the truth (either because you're lying, or you're confused, or perhaps you have a mental illness).
If she had the goal of altering MY story for the org's benefit, she would be far more likely to let me believe that she took my story at face value and then change it later without my knowing.
Why else would she tell me a falsified version of my own narrative and bring those errors to my attention?
Legally, that would be a terrible idea for them. If she's tasked with receiving allegations, then their legal team will want to know exactly what is believed and what accusations are on the table. They definitely do not want some clerical worker faking a story and masking issues that have a real chance of being substantiated or argued in a court case someday. And if I'm lying or fabricating, they'd also want a reliable record in their favor in court. The reporting office is motivated to be accurate, and that's exactly why she challenged my facts: so that I could reinforce them through repetition and clarification.
It’s not different, that’s the point. But it’s worth pointing out because of the general misunderstanding among the public that “AI” is impartial or less biased (absurd, I know).
I think it’s good to have research like this pointing out these flawed uses of AI before they’re inevitably used as a means of laundering accountability. It’ll happen anyway.
> How is this different from being manipulated in the same manner by a human?
It can be done at scale for very little. I'm not touching AI because I can't know its biases; it can be enshittified and slip in advertising without the user noticing.