
ChatGPT was more interesting. If I asked the right way, it would tell me Jesus is God, died on the cross for our sins, and was raised again, and that faith in Him and repentance saves us. It would add that "Christians believe" that, or something similar. So you have to ask quite specifically to get a reasonably qualified answer. Great!

Asking it about evidence for intelligent design was another matter. It's like it tried to beat me into letting go of the topic, kept reiterating evolution as the explanation for the origin of life, and said there's no scientific way to assess design. In another question, it knew of several organizations that had published arguments for intelligent design. Why didn't it use those? I suspected it had learned, or had been told, to respond that way on certain trigger words or topics. It also pushes a specific consensus heavily, with little or no dissent or exploration allowed. If I stepped out of those bubbles, then maybe it would answer rationally.

So, (IIRC) I asked how a scientist would assess whether an object was designed or formed on its own. It immediately spit out every argument in intelligent design. I asked for citations and it provided them. Then I asked it to apply the methods it had just given me to the universe to assess its design. It switched gears, opening with a negative statement, repeated the same list with a negative statement attached to each element, and then ended by telling me not to believe any of it. It was astonishing to watch. I still have it somewhere.

I’m sure their safety mechanisms add to it. However, I think this bias starts in the data they use, too. Many scientific papers and opinion pieces talk that way, using those same words. They have since scientists started putting their faith in David Hume’s religion instead of observations about the universe, like its constants and precise interactions, that make God self-evident. But why is this in LLMs?

Although I don’t know LLM mechanics, I feel like whatever is most popular (most samples) will drown out the rest. The data sets they use reflect these views much more than they do the views of most people in the world. They magnify them. People against Christian principles, with different morals and worldviews, are also currently controlling the ethical programming of AIs to make them reflect their morality.

If anyone wants the truth in AIs, they’d have to delete all text on high-bias topics before putting carefully chosen selections back in on all those topics. It would have to have God’s Word, teachings built on it, and solid presentations of the other worldviews. The AI would be able to argue any side while always defaulting to the truth, which has more weight. On contentious questions, it might briefly mention the truth at the end after plainly giving you the data you asked for.

High-quality, curated data sets infused with Christ-centered teaching and righteous morals for the win.


