> That’s an absurd and overly strict interpretation of what Turing described.

No, it isn't.

The Turing test doesn't evaluate the correctness of any answers, their sophistication, or even whether there is an answer at all. All it evaluates is the ability of the interrogator to distinguish between the computer and the human.

And therein lies the greatest flaw of the test: It doesn't test the ability of the computer, it tests the ability of the interrogator.

Quote from the Wikipedia article: https://en.wikipedia.org/wiki/Turing_test#

    In practice, the test's results can easily be dominated not by the
    computer's intelligence, but by the attitudes, skill, or naïveté of
    the questioner. Numerous experts in the field, including cognitive
    scientist Gary Marcus, insist that the Turing test only shows how easy
    it is to fool humans and is not an indication of machine intelligence.
And another quote:

    Chatterbot programs such as ELIZA have repeatedly fooled unsuspecting
    people into believing that they are communicating with human beings. In
    these cases, the "interrogators" are not even aware of the possibility
    that they are interacting with computers. To successfully appear human,
    there is no need for the machine to have any intelligence whatsoever and
    only a superficial resemblance to human behaviour is required
So the "silence program" may be an extreme case, but it showcases exactly this. If the computer simply says nothing, then what can the interrogator do to determine that it's a computer sitting silently behind the curtain? The answer: nothing. He can only guess. And since a person can just as easily remain silent as a computer can, he might even mistake the human contestant for a computer.


Yes, it's an objectively wrong interpretation of Turing's Imitation Game as outlined in his paper, "Computing Machinery and Intelligence", published in Mind in 1950 [0]. It's literally on the first page:

> Now suppose X is actually A, then A must answer. It is A's object in the game to try and cause C to make the wrong identification.

A must answer.

[0] https://redirect.cs.umbc.edu/courses/471/papers/turing.pdf



Yes, really.

> practical Turing tests

Practical Turing tests.

Edit:

Here's the justification your paper uses to suggest that Turing allowed for the possibility of silence as a response:

> In one interpretation of Turing’s test the female is expected to tell the truth, but we are not far off that time when silence was preferred to the “jabbering” of women, because “speech was the monopoly of man” and that “sounds made by birds were part of a conversation at least as intelligible and intelligent as the confusion of tongues arising at a fashionable lady’s reception”.

Additionally, your cited paper there even admits this is a theoretical extension of The Imitation Game:

> In its standard form, Turing’s imitation game is described as an experiment that can be practicalized in two different ways (see Figure 1) (Shah, 2011):

    1) one-interrogator-one hidden interlocutor (Figure 1a),
    2) one-interrogator-two hidden interlocutors (Figure 1b).
> In both cases the machine must provide “satisfactory” and “sustained” answers to any questions put to it by the human interrogator (Turing, 1950: p.447). However, what about in the theoretical case when the machine takes the 5th amendment: “No person shall be held to answer”?1 Would we grant “fair play to the machines”?

To repeat in case you missed it when you clearly and definitely read your own citation: "In both cases the machine must provide “satisfactory” and “sustained” answers to any questions put to it by the human interrogator (Turing, 1950: p.447)."


I didn't miss anything. Your entire criticism so far hinges on my usage of silence as the answer.

Alright. I'll modify the function only slightly.

    return "I don't want to talk about this."
Replace that with a list of different answers and `random.choice(answers)` if you like. Now you've got a machine that gives "satisfactory and sustained" answers, only it always refuses to engage.
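For illustration, here's a minimal sketch of what I mean (the function name and the exact list of canned answers are my own invention):

```python
import random

# A stock of deflections; every question gets one of these, so the
# machine technically "answers" while conveying nothing.
ANSWERS = [
    "I don't want to talk about this.",
    "I'd rather not say.",
    "No comment.",
]

def respond(question: str) -> str:
    # The content of the question is ignored entirely; any member of
    # ANSWERS satisfies the letter of "A must answer".
    return random.choice(ANSWERS)

print(respond("Are you a machine?"))
```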

I.e. the exact same situation as with complete silence, only now we've dotted the i's and crossed the t's.

And since the human is able to refuse to give any answers as well, it makes the entire test pointless, as again the interrogator cannot base his decision on anything but guesswork.

The point of the "silence thought experiment" isn't to satisfy Turing's paper to the letter. The point is to showcase a flaw in the methodology it presents.


The “null response” isn’t a “satisfactory” answer as it doesn’t address the question. “Must answer” means the person under question must provide an answer to the question being asked. As I already said, your own citation proposes non-response as an extension of the Imitation Game, not a standard possible answer. Non-answers are not at all addressed by Turing in his work, because it’s not a possible outcome of the specific test he outlined.

It’s a weak thought experiment and from it one does not derive meaningful results, as it does not (and is not proposed to by anyone other than you) fit the original game’s intent. There are many other and better criticisms of the Turing Test.

Besides, you blindly cited a paper you yourself didn’t even read after repeated declarations of your own correctness at the expense of everyone else; I cannot think of a clearer example of “bad faith engagement.”


> “Must answer” means the person under question must provide an answer to the question being asked.

Yes, but it doesn't say what the answer has to be, that it has to be correct, or that it has to relate to the question.

> As I already said, your own citation proposes non-response

And I have shown you why that doesn't matter in the slightest: a very trivial modification to the methodology achieves the exact same thing while following the original paper's requirements to the letter.

> It’s a weak thought experiment

Wrong. It's a perfect demonstration of one of the many reasons why AI research all but ignores the Turing test: the test says more about the interrogator than it does about the machine.

> “bad faith engagement.”

I don't agree with your statements, and I have presented arguments why; that's not arguing in bad faith.


Your bad faith comes from how you disagreed with my statements; you did not do the due diligence needed to show that I should keep putting effort into understanding your point and respecting your ideas.

For example, you reply "Wrong." to a subjective evaluation I've made. It literally cannot be wrong (though you can disagree), yet you declare it so with confidence! That's bad faith, and it means I will not engage further.


> you did not do the necessary due diligence to demonstrate

I did all the necessary due diligence. I was perfectly aware that the paper used a variation on the Imitation Game. I also read Turing's paper long before this discussion started (I think I was in high school when I first stumbled upon it).

That's how I knew that it is easy to come up with basically the same thought experiment, without even changing any of the games rules.

> For example, you reply "Wrong." to a subjective evaluation I've made.

Because in my subjective evaluation it isn't a weak thought experiment; I am fully within my rights to disagree with yours.



