When I look at how humans have manipulated each other, how much of the media is noxious propaganda, and how businesses have perfected the emotional and psychological manipulation of us to sell us crap and control our opinions, I don't think AI's influence is worse. In fact, I think it's better. When I have a spicy political opinion, I can either get it validated in an echo chamber like Reddit or the news media, or let ChatGPT tell me I'm a f'n idiot and spell out a much more rational take.
Until the models are diluted to serve the true purpose of the thought control already fully in effect in non-AI media, they're simply better for humanity.
ChatGPT has been shown to spend far more time validating people's poor ideas than refuting them, even in cases where specific guardrails have supposedly been implemented, such as those meant to avoid encouraging self-harm. See recent articles about AI usage inducing god complexes and psychoses, for instance[1]. Validating the user behind the prompt is what it's designed to do, after all. AI seems to be objectively worse for humanity than what we had before it.
Strongly disagree, and you've misread what you've linked. Those cases involve people staying in one chat and posting thousands upon thousands of replies into a single context, diluting the system prompt and creating a fever dream of hallucination and psychosis. They also rarely involve thinking or tool-calling models, relying instead on raw LLM generation without reasoning or sourcing (cheap/free models versus the high-powered, subscriber-only thinking models).
As we all know, the longer the context, the worse the reply. I strongly recommend you delete your context frequently and never stay in one chat.
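In API terms, the habit looks roughly like this. This is a minimal sketch assuming the OpenAI Python SDK; the model name, prompts, and function names are placeholders for illustration, not a claim about how ChatGPT itself is wired:

    # Fresh-context pattern: every question gets its own tiny message list,
    # so the system prompt is never crowded out by thousands of old turns.
    from openai import OpenAI

    client = OpenAI()

    def ask_fresh(question: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": "Give a grounded, sourced answer and steelman both sides."},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content

    # Anti-pattern, for contrast: one ever-growing list that every reply is
    # appended to -- exactly the long-context dilution described above.
    history = []

    def ask_in_one_endless_chat(question: str) -> str:
        history.append({"role": "user", "content": question})
        response = client.chat.completions.create(model="gpt-4o", messages=history)
        reply = response.choices[0].message.content
        history.append({"role": "assistant", "content": reply})
        return reply

The point of the sketch is just that "delete your context" means starting from the short message list every time, rather than letting the history list grow without bound.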
What I'm talking about is using a fresh chat for questions about the world, often political ones: grab statistics on something and walk through the major arguments for and against an idea.
If you think ChatGPT is providing worse answers than X.com and reddit.com for political questions, quite frankly, you've never used it before.
Try it out. Go to reddit.com/r/politics and find a +5,000-upvote comment about something, or go to x.com and find the latest Elon conspiracy, and run it by ChatGPT 5-thinking-high.
I guarantee you ChatGPT will provide something far more intellectual, grounded, sourced and fair than what you're seeing elsewhere.
Why would an LLM give you a more "rational take"? It's got access to a treasure trove of kooky ideas from Reddit, YouTube comments, various manifestos, and so on. If you'd like to believe a terrible idea, an LLM can probably provide all of the most persuasive arguments for it.
Apologies, but it sounds like you have no experience with modern models. Yes, you can push and push and push until it agrees with all manner of things, but right off the bat, on the first reply in a new context, it will provide extremely grounded and rational takes on politics. It's a night-and-day difference compared to your average Reddit comment or X post.
In my years of use, across thousands and thousands of chats, I have literally never seen ChatGPT provide a radical answer to a political question without forcing it, heavy-handedly, to do so.