Hacker News

That’s fundamentally different, and I think you know that.

It’s one thing to ask an algorithm how to build an A* driving map from point A to point B. It’s another to ask one how to be a better person and go to Heaven.

I’m not religious, and I’m not arguing this from a pro-religion POV. I happily work in AI, and I’m not arguing this from an anti-AI POV. I am highly technical. I love computers. I’m excited about the future. I rely on deterministic algorithms to make my days better. And yet, I do not want to trust the words of an LLM to counsel me on how to be a better husband or father. At this stage, the AI does not know me in the way a counselor or advisor, or even pastor or priest would. And yes, I think that’s a crucial difference.




I three-quarters agree: LLM advice is only one step up from an Agony Aunt column in a newspaper.

And I'd expect whatever Target's stock-scheduling system does for employees restocking shelves to be an A* search or something similar.

But also, Google Maps has directed people to their deaths: https://gizmodo.com/three-men-die-after-google-maps-reported... That isn't even what I was originally looking for, which was: https://www.cbsnews.com/news/google-sued-negligence-maps-dri...


Sure, people die from regular programming. Mistakes happen. That’s not good or ok, but it seems unavoidable given today’s technologies and tools.

However, I think that’s in a different category than giving life advice. How is an LLM to know that God forgives Joe for stealing a loaf of bread to feed his children, but doesn’t forgive Tom for doing the same thing because Tom had money but was saving up to buy cooler shoes and didn’t want to spend it? A priest’s advice might be “Joe, don’t make a habit of it, but you didn’t hurt anyone and your children were hungry. Tom, would you freaking knock it off already?” An LLM might reply “that’s a wonderful idea!” to both.

Again, I’m firmly not anti-AI. I use it every day. I absolutely do not want to hear its advice on how to navigate the complexities of life as a human being.


Yeah, no. What you described here and what I described before are not programming errors, they're data errors. An A* route finder isn't going to know a bridge is out unless it is told, an LLM won't know that case history unless it is told.
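To make the data-error point concrete, here is a minimal sketch of a route finder over a toy road graph (A* with a zero heuristic, i.e. Dijkstra). The node names and edge costs are invented for illustration; the point is that the algorithm is "correct" in both runs, and only the data it is given about the bridge changes the answer.

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra over graph = {node: {neighbor: cost}}; returns a node list or None."""
    dist = {start: 0}
    prev = {}
    pq = [(0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            # Walk the predecessor chain back to the start.
            path = [goal]
            while path[-1] != start:
                path.append(prev[path[-1]])
            return list(reversed(path))
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nxt, cost in graph.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                prev[nxt] = node
                heapq.heappush(pq, (nd, nxt))
    return None  # goal unreachable

# Hypothetical road data: a short bridge route and a long detour.
roads = {
    "A": {"bridge": 1, "detour": 5},
    "bridge": {"B": 1},
    "detour": {"B": 5},
}
print(shortest_path(roads, "A", "B"))  # routes over the bridge

# The data still says the bridge exists; delete it to model "bridge is out".
del roads["A"]["bridge"]
print(shortest_path(roads, "A", "B"))  # now forced onto the detour
```

The algorithm never errs here: given data that includes the bridge, it routes over the bridge; only once the data is corrected does it pick the detour. That is the sense in which the fatal-directions stories are data errors rather than programming errors.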

I'd say the real problem with using an LLM for this kind of thing is not what the LLM writes, but that the act of writing helps the human understand their community, so when it is skipped that understanding remains absent. It's like cheating on your homework.


It’s not fundamentally different: it’s people taking physical actions in the real world based on trust in some system.

Whether it’s a human or not, they’re trusting the system with their existential outcomes.

That is literally exactly the same thing.

The fact that you think the rules of being a father are somehow different from the rules of driving to an appointment indicates that you have a completely incoherent world view, based on two incompatible models of epistemology.

As usual, dualists will come up with an incoherent model and then try to act like it’s valid.


> The fact that you think the rules of being a father are somehow different from the rules of driving to an appointment indicates that you have a completely incoherent world view, based on two incompatible models of epistemology

Two ways to look at this, both of which are coherent:

1. Current AI is better at some stuff than others. Saying "I'm okay driving in a waymo, but not taking spiritual advice from an AI" makes sense if you think it has not advanced to a near-human level in the spiritual advice domain.

2. Even if you don't think that's true, it's reasonable to just want a human for certain activities, because communion with other humans in the same existential boat you're in can be the whole point of an activity. I'd argue it is a significant reason for a majority of social activities.


Disclaimer: raised Catholic, now Atheist, married to devout Catholic.

The Church as defined by the institution is a community. I do not see it as a contradiction that the head of the institution is instructing the leaders to not add more layers of abstraction between them and the community, especially when those messages are on the subject of what it means to be human.


> The fact that you think the rules of being a father are somehow different from the rules of driving to an appointment indicates that you have a completely incoherent world view, based on two incompatible models of epistemology

lol



