It's kind of weird that one of the most logical answers to the first question is not in the answer sheets. I bet that if it were there, 100% of non-programmers would tick it, and they wouldn't be wrong.
The question:
a = 10;
b = 20;
a = b;
The logical answer, of course, is that the computer should throw an error or return false, because a does not equal b. Fourteen years of schooling should have hammered that in quite thoroughly.
If you really want to test non-programmers' native skill for working with computers, you should at least briefly explain how the computer will read these statements: i.e. that the computer interprets the statements sequentially, and reads the '=' symbol as 'becomes', not as 'equals'.
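For what it's worth, here is a minimal sketch of that sequential, '='-as-'becomes' reading in TypeScript (one plausible concrete syntax for the snippet above; the question itself is language-agnostic):
// Sequential reading of the question's three statements; '=' is assignment.
let a = 10;        // a becomes 10
let b = 20;        // b becomes 20
a = b;             // a becomes the current value of b
console.log(a, b); // prints "20 20": b's value is copied into a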
Responding with "false" as the result seems logically incoherent, as it assumes that the first two lines of the three-line program are "true" when there is no reason to assume that is the case.
Without any previous understanding of what computer programming is, what it does, or how it works, and relying solely on elementary mathematical learning, is there a particular reason that one would assume that the first two lines are directives and the third line is what we are being asked to validate? I am too far down the rabbit hole to intuitively know if that is the case; can someone else suggest whether this is a plausible conjecture?
It's been maybe 15 or so years, so I'm similarly pretty far down the rabbit hole, but I definitely remember having a lot of trouble with:
x = x + 1
At the time, it seemed patently obvious that it was a false statement, because there is no single value of x for which this is true.
If the situations are analogous, my guess would be that you would assume that each of these statements is an assertion, and that at least one of them must be false. Intuitively, I'd guess that it's the last one that people would assume would be false, because as you're reading from top to bottom, you've already "accepted" the first two.
When teaching JavaScript to kids about 10 years old, I tend to use x += 1 over x = x + 1.
It seems to be much easier for people to attach the new construct += to a new idea. I don't bring in the x = x + 1 form until they have had plenty of practice assigning other expressions to x. Kids don't seem to have any problem with x = y + 1. They just need a little time for that idea to set properly before they start mixing things up.
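A quick illustration of the equivalence (sketched in TypeScript rather than the plain JavaScript used in class, but the behaviour is the same):
let y = 7;
let x = 5;
x += 1;         // the "add one to x" construct: x becomes 6
x = x + 1;      // the same effect written with plain assignment: x becomes 7
x = y + 1;      // assigning a different expression to x: x becomes 8
console.log(x); // 8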
If you take X to mean "an infinite set", "X + 1" to mean "the set X with one more element added to it", and "X = Y" to mean "X and Y have the same cardinality", then "X = X + 1" is entirely true.
Mathematics, like programming, is ultimately founded on definitions.
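A minimal sketch of that reading, assuming X is the countably infinite set \mathbb{N} and \star is a fresh element not in \mathbb{N}:
f : \mathbb{N} \cup \{\star\} \to \mathbb{N}, \qquad f(\star) = 0, \qquad f(n) = n + 1 \text{ for } n \in \mathbb{N}
Since f is a bijection, |\mathbb{N} \cup \{\star\}| = |\mathbb{N}|, i.e. \aleph_0 + 1 = \aleph_0, which is exactly the sense in which "X = X + 1" comes out true.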
> is there a particular reason that one would assume that the first two lines are directives and the third line is what we are being asked to validate?
A much less narrow assumption is required to reach answers of "false". Think of each example as a system of simultaneous equations.
Since this test was seemingly designed around the idea of destructive updates, none of the given examples has a satisfying assignment. But of course a standout easy-pick answer of "no solution" would ruin the test. I'd really like to see a similar study of these psychologies that took different programming backgrounds into account. Perhaps such a test would even be a good way to sort students into separate intro classes that used a language suited to their preexisting mental model.
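As a hedged sketch of that "simultaneous equations" reading (TypeScript, with a hypothetical brute-force search standing in for a proper constraint solver): treat the three lines as constraints on (a, b) and look for a satisfying assignment.
// Constraints from the question, read as simultaneous equations.
const satisfies = (a: number, b: number) => a === 10 && b === 20 && a === b;

// Brute-force search over a small range; no pair can satisfy all three,
// since a = 10 and b = 20 already force a !== b.
let found = false;
for (let a = 0; a <= 100 && !found; a++) {
  for (let b = 0; b <= 100 && !found; b++) {
    found = satisfies(a, b);
  }
}
console.log(found); // false: the system has no solution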
Several older languages, including Pascal and the Algol family, use the := operator for assignment, on the grounds that assignment is a fundamentally asymmetric operation. In the ML family of languages, there are immutable definitions and mutable reference cells, with a different operator for each:
(* multiple bindings, so the inner x shadows the
* outer x---indeed, this code would give you a
* warning about the unused outer x *)
let x = 5 in
let x = 6 in x
(* y is a mutable reference cell being modified *)
let y = ref 5 in
y := 6; !y  (* dereference the cell: evaluates to 6 *)
Haskell makes a distinction between name-binding and name-binding-with-possible-side-effect, but still reserves = to signify an immutable definition and not assignment:
-- this defines x to be one more than itself---which
-- causes an infinite loop of additions when x is used
let x = x + 1 in x
-- a different assignment operator is used in a context
-- in which side-effects might occur, and can be
-- interspersed with non-side-effectful bindings:
do { x <- readLn :: IO Int  -- does IO (reads an Int from stdin)
   ; let y = 1              -- doesn't do IO
   ; return (x + y)
   }
> "The logical answer of course being that the computer should throw an error or return false"
But that's not the question that was asked. The question is "what are the final values of a and b?". Someone adopting your interpretation would say "a=10, b=20". Otherwise, why would you be claiming "a does not equal b"?
> "If you really want to test non-programmers native skill for working with computer, you should at least briefly explain how the computer will read this statements. i.e. the computer interprets the statements sequentially, and reads the '=' symbol as 'becomes', not as 'equals'."
That would somewhat defeat the point of the test, which is to gauge what mental models (if any) people have before they've been told anything about programming.