Of course, I’m not an idiot and I understand LLMs very well. Generally, for well-documented stuff that actually exists, it’s almost 100% accurate. It’s when you ask it to extrapolate or discuss topics that are fiction (even without realizing it) that you stray. Asking it to reason is a bad idea, as it is fundamentally unable to reason and any approximation of reasoning is precisely that. Still, for effective information retrieval on well-documented subjects it’s consistently accurate and can answer relatively nuanced questions.
Because I’m a well-educated grown-up and am familiar with a great many subjects that I want to learn more about. How do you? I can’t help you with that. You might be better off waiting for the technology to mature more. It’s very nascent, but I’m sure in the fullness of time you might feel comfortable asking it questions on basic optics, photography, and other well-documented subjects with established agreement on process, etc., once you establish your own basis for what those subjects are. In the meantime I’m super excited for this interface to mature for my own use!! (It is true though, I do love and live dangerously!)
> You might be better off waiting for the technology to mature more. It’s very nascent but I’m sure in the fullness of time you might feel comfortable asking it questions on basic optics and photography and other well documented subjects
I agree with this as good practice in general, but I think the human vs LLM thing is not a great comparison in this case.
When I ask a friend something, I assume that they are in good faith telling me what they know. Now, they could be wrong (in which case they might say "I'm not 100% sure on this") or they could be misremembering, but there's some good faith there.
An LLM, on the other hand, just makes up facts and doesn't know whether they're correct, or even how confident it is in them. And to top things off, it will speak with absolute certainty the whole time.
That’s why I never make friends with my LLMs. It’s also true that a push motorized lawn mower has a different safety operating model than a weed whacker, a reel mower, or an industrial field cutter and baling system. But we still use all of these regularly, and no one points out that the industrial device is extraordinarily dangerous; there’s a continuum of safety, with different techniques for addressing the challenges that the user has to adopt. Arguably LLMs maybe shouldn’t be used by the uninformed to make medical decisions, and maybe it’s dangerous that people do. But in the meantime I’m fine with having access to powerful tools, using them with caution, and using them for what gives me value. I’m sure we will safety-wrap everything soon enough, to the point it’s useless and wrapped in advertisements for our safety.
I do similar stuff; I'm just willing to learn a lot more at the cost of a small percentage of my knowledge being incorrect due to hallucinations. Just a personal opinion. Sure, human-produced sources of info are gonna be more accurate (more accurate, though still not 100%), and I'll default to those for important stuff.
But the difference is I actually want to and do use this interface more.
Just like learning from another human: a person can teach you the higher-level concepts of some programming language but wouldn't remember the entire standard library.