> Second, your scenario doesn't analyze another human and issue judgement, as the AI detection algorithms do.
> When a human is miscategorized as a bot, they could find themselves in front of academic fraud boards, skipped over by recruiters, placed in the spam folder, etc.
Is the problem here the algorithms or how people choose to use them?
There’s a big difference between treating the result of an AI detection algorithm as infallible and treating it as just one piece of probabilistic evidence, to be combined with others to produce a probabilistic conclusion.
“AI detector says AI wrote student’s essay, therefore it must be true, so let’s fail/expel/etc them” vs “AI detector says AI wrote student’s essay, plus I have other independent reasons to suspect that, so I’m going to investigate the matter further”
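To make the "one piece of probabilistic evidence" point concrete, here's a minimal Bayes' rule sketch with made-up numbers (the prior, true positive rate, and false positive rate below are illustrative assumptions, not measurements of any real detector):

```python
# Hypothetical numbers: 10% of essays are AI-written (prior); the
# detector flags 90% of AI essays (TPR) but also 10% of human essays (FPR).

def posterior_ai(prior: float, tpr: float, fpr: float) -> float:
    """P(AI-written | detector flagged), via Bayes' rule."""
    p_flagged = tpr * prior + fpr * (1 - prior)  # total P(flagged)
    return tpr * prior / p_flagged

print(f"P(AI | flagged) = {posterior_ai(0.10, 0.90, 0.10):.0%}")  # 50%

# Independent suspicion (e.g. an abrupt style change) raises the prior,
# and only then does the detector's flag become strong evidence:
print(f"With stronger prior: {posterior_ai(0.40, 0.90, 0.10):.0%}")  # 86%
```

Under those assumptions, a flag alone is literally a coin flip, which is exactly why the second approach (treat it as a lead to investigate, not a verdict) is the defensible one.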
That's exactly why the stock analogy doesn't work. People don't buy algorithms; they buy products, such as detectors or predictors. You necessarily have to sell judgement alongside the algorithm. So debating the merits of an algorithm in a vacuum, when the issue being raised is the human harm caused by detector products, is a strawman.
> People don't buy algorithms, they buy products - such as detectors or predictors. You necessarily have to sell judgement alongside the algorithm.
Two people can buy the same product yet use it in very different ways: some educators take the output of anti-cheating software with a grain of salt, while others treat it as infallible gospel.
Neither approach is determined by the product design itself; rather, it is shaped by the broader business context (sales, marketing, education, training, implementation) and by factors entirely external to the vendor (differences in professional culture among educational institutions and systems).