I agree with this as good practice in general, but I think the human vs LLM thing is not a great comparison in this case.
When I ask a friend something I assume that they are in good faith telling me what they know. Now, they could be wrong (which could be them saying "I'm not 100% sure on this") or they could not be remembering correctly, but there's some good faith there.
An LLM, on the other hand, just makes up facts and doesn't know whether they're correct, or even how confident it should be. And to top it off, it speaks with absolute certainty the whole time.
That's why I never make friends with my LLMs. It's also true that a motorized push lawn mower has a different safety operating model than a weed whacker, a reel mower, or an industrial field cutter and baling system. But we use all of these regularly, and no one points out that the industrial machine is extraordinarily dangerous; there's a continuum of safety, with different techniques the user adopts to address the risks. Arguably LLMs shouldn't be used by the uninformed to make medical decisions, and maybe it's dangerous that people do. But in the meantime I'm fine with having access to powerful tools, using them with caution, and using them for what gives me value. I'm sure we'll safety-wrap everything soon enough, to the point that it's useless and wrapped in advertisements, for our safety.