There's a more basic problem: "can the machine reason about the plausibility of claims and sources?" and "how does it score on an evaluation against a list of authoritative truths and proven lies?" are two very different questions. A machine that thinks critically will perform poorly on the latter, because if it's able to doubt a bad actor's falsehood, it's just as capable of doubting an authoritative source (often wrongly or overeagerly; sometimes not). It's always reasoning with incomplete information: many wrong things are plausible given limited knowledge, and many true things aren't easy to support.
The system that would score best tested against a list of known-truths and known-lies, isn't the perceptive one that excels at critical thinking: it's the ideological sycophant. It's the one that begins its research by doing a from:elonmusk search, or whomever it's supposed to agree with—whatever "obvious truths" it's "expected to understand".