There is none; the people who keep promulgating this falsehood about Stallman base it on one paragraph. Anyone who can pass the SAT reading-comprehension test, parse standard English, or follow basic logical reasoning would see that their interpretation is simply wrong, wrong at the level of 1 + 1 = 3.
It would be much better to accuse Stallman, in that instance, of mansplaining and contributing to workplace toxicity, and/or to accuse him of a historical pattern of objectifying women. Those accusations are better because they are closer to the truth. But people like simpler narratives: that he said this one horrible thing. Yet it is not even far-fetched to show, using the exact same quote, that he was actually arguing quite an opposite, or orthogonal, point.
The other thing is that people ignore the context of the speaker; in this case Stallman was talking about the media's abuse of terminology. Complaining about media manipulation is a very standard leftist position. So, as above, implicitly suggesting that he was using this position as cover for his personal misogyny has merit, but nobody cares about this angle, because again it's too complicated to sort through.
Actually, the SAT reading-comprehension questions are not that simple, and they trip people up, which is really the point of my analogy. They're the kind of questions where you either got the right answer or you didn't, but explaining it to the person who got it wrong can be non-trivial.
I find this approach strangely condescending. For example, the author says:
> Understanding the value attributed to X, Y, and Z in that particular text requires assessment of the rhetorical strategies of the author(s).
They could have just said: if you want to know why the author thinks X, Y, and Z are important, look at what they say about them.
I'm a hardcore postmodern leftist, but I don't see how writing in such a contorted way helps practicing scientists. In fact, I would argue that this kind of listing obscures a politics of its own: it is so busy prescribing citation practices that it never examines its own politics.
That said, this is the first time I've seen this guide, so maybe I need to read up on the issues; still, a list of dos and don'ts isn't the best way to introduce the issues and help people understand them.
> Too many of the people I meet are ground down, sliding by, holding on, grappling with their identity and place in the world, somewhere on the scale of mildly scarred to full blown post traumatic, anxiety ridden, and/or profoundly lost.
How do you diagnose this? Is there a checklist of signs you look for? Do they eventually tell you, etc.?
Just curious, because I don't get to meet that many people all the time, so I can't collect sociological samples and reach conclusions the way you seem to be doing.
I think it's clearer to understand the Church-Turing thesis as two theses, not one. Turing's thesis was that (intuitive) computability equals (formalizable) Turing computability; Church's thesis was that it equals definability by the recursive functions (equivalently, the lambda calculus).
So you could completely ignore the recursive functions part (and so, Church's Thesis) and there would still be the same sort of lay confusion about the computability thesis.
OK, but if saving 1000 lives a year required, as a side effect, that you personally be among the fatalities, would that be OK with you? I hope not. Think of this as a technical corner case; the question is the soundness of the analysis (for example, the distribution of deaths and what that means for safety), and we shouldn't let facile logic get in the way of that work.
Let’s imagine that auto-driving tech saves 2001 gross lives per year and kills 1001 people who wouldn’t have died in an all human driving world, for a net of 1000 lives saved.
I think that’s a win, even if I now have an even statistical chance of being in the 1001 and no chance of being in the 2001.
Requiring that I be in the 1001 is not OK, no more than requiring that I donate all my organs tomorrow. Allowing that I might be in the 1001 is OK, just as registering for organ donation is.
>> Let’s imagine that auto-driving tech saves 2001 gross lives per year and kills 1001 people who wouldn’t have died in an all human driving world, for a net of 1000 lives saved.
You're saying that auto-driving would save the lives of 1000 people who would have died without it, by causing the death of another 1001 that wouldn't have died if it wasn't for auto-driving?
So you're basically exchanging the lives of the 1001 for the lives of the 1000? That looks like a lot less of a no-brainer than your comment makes it sound.
Not to mention, the 1001 people who wouldn't have died if it wasn't for auto-driving would most probably prefer to not have to die. How is it that their opinion doesn't matter?
It saves 2001 (not 1000 as you said); or, put differently, I'm exchanging the lives of the 1001 to preserve the lives of the 2001.
It kills 1001.
Net lives saved = 1000.
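To make the arithmetic above concrete, here is a minimal sketch; the population size is a made-up assumption purely for illustrating the per-person risk, not a figure from the thread:

```python
# Hypothetical numbers from the comment above.
saved_gross = 2001   # people who would have died in an all-human-driving world
killed_new = 1001    # people killed who would otherwise have lived

net_saved = saved_gross - killed_new
print(f"net lives saved per year: {net_saved}")  # 1000

# Assumed road-user population (illustrative only), to show individual odds.
population = 250_000_000
p_new_fatality = killed_new / population
p_saved = saved_gross / population
print(f"chance of being a new fatality: {p_new_fatality:.2e}")
print(f"chance of being saved:          {p_saved:.2e}")
```

Under these numbers, any individual's chance of being saved is roughly double their chance of being a new fatality, which is the statistical framing behind calling it a win.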
> How is it that their opinion doesn't matter?
The 2001 who are saved by auto-driving were also most probably not interested in dying. How is it that their opinion doesn't matter?
It's a trolley problem[1]. Individual people have been killed by seatbelts, yet you probably think it's OK that we have seatbelts, because many more people have been saved and/or had their injuries reduced. Individual people have been killed by airbags, yet you probably think it's OK that we have them. Many people have been lost to obesity-related mortality because cars shifted walkers and bikers into seats, yet you probably think it's OK that we have cars.
Right. And net lives lost = 1001. So we've killed 1001 people to let another 1000 live. We exchanged their lives.
>> The 2001 who are saved by auto-driving were also most probably not interested in dying. How is it that their opinion doesn't matter?
Of course it matters, but they were dying already, until we intervened and killed another 1001 people with our self-driving technology.
Besides, some of the people who would be dying without self-driving technology had control of their destiny, much unlike in the (btw, very theoretical) trolley problem. Some of them probably made mistakes that cost them their lives. Some of them were obviously the victims of others' mistakes. But the people killed because of self-driving cars were all victims of the self-driving cars' mistakes (they were never the driver).
>> Individual people have been killed by airbags, yet you probably think it's OK that we have them.
An airbag or a seatbelt can't drive out onto the road and run someone over. The class of accident that airbags cause is the same kind of accident you get when you fall off a ladder, etc. But the kind of accident that auto-cars cause is one where some intelligent agent takes an action, and the action causes someone else harm. An airbag is not an intelligent agent, and neither is a seatbelt, but an AI car is.
Let's change tack slightly. Say that we had a vaccine for a deadly disease and 1 million people were vaccinated with it. And let's say that out of that 1 million people, 1000 died as a side effect of the vaccine, while 2001 people avoided certain death (and let's say that we are in a position to know that with absolute certainty).
Do you think such a vaccine would be considered successful?
I guess I should clarify that when I say "considered successful" I mean: a) by the general population and b) by the medical profession.
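The vaccine hypothetical can be put in the same per-person terms; this is just the thread's made-up numbers, not real vaccine data:

```python
# Hypothetical numbers from the comment above.
vaccinated = 1_000_000
deaths_from_vaccine = 1000   # died as a side effect
deaths_averted = 2001        # certain deaths avoided

p_harm = deaths_from_vaccine / vaccinated     # per-vaccinee risk of dying
p_benefit = deaths_averted / vaccinated       # per-vaccinee chance of being saved

print(f"risk of dying from the vaccine: {p_harm:.4%}")
print(f"chance of being saved by it:    {p_benefit:.4%}")
print(f"benefit-to-risk ratio:          {p_benefit / p_harm:.3f}")
```

A 0.1% chance of death from the intervention against a roughly 0.2% chance of being saved by it is exactly the kind of trade the thread is arguing over: the ratio favors the vaccine, but the absolute harm is large enough that "considered successful" is a real question.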
That's not really a very good argument. If you change parameters in a complex system then the odds are that you are going to find pathologies in new places.
People claim seatbelts have caused lots of deaths, and I'm sure at least some of those claims are fair [0]. I still think it's safer to drive a car with a seatbelt than without.
The downside of a laissez-faire policy towards self-driving cars could fall on anyone, but so could the upside. Again, a lot of human-driven car crash victims have done nothing wrong and were victimized pretty much at random: run over, rear-ended, T-boned at an intersection, and they could not reasonably have done anything to prevent it.
>> Again, a lot of human driven car crash victims have done nothing wrong and were victimized pretty much at random.
But all self-driving car victims (will) have done nothing wrong. Whether they were riding in the car that killed them or not, they were not in control of it, so they're not responsible for the decision that led to their deaths.
Unless the decision to go for a walk, go for a cycle, or ride in a car makes you responsible for dying in a car accident?
Well, the real issue is defining what's better. And the worry some of us have is that this new trend of grappling with increased complexity seems short-sighted; we don't like the direction it's going. As much as Sussman shows understanding of the current situation, what he's saying is also a criticism: doing this sort of ad hoc "science by poking but not really science" is the heart of the issue.
And what's interesting is that if you look at actual scientific research around programming, say on concurrency and advanced tools like model checking, all of that is very theoretical stuff that assumes you know SICP or have equivalent foundations. So it's not really an argument that theory is dying; in the face of this new level of complexity in practice, perhaps we could benefit from theoretical research now more than ever.
It's hard to argue with productive results, however. If this "poking" results in valuable engineering more than SICP does, then "poking" needs to at least be seriously looked at and understood, if not outright taught to the next generation of software engineers.
The real problem, I think we can all agree, is the lack of widespread specialization of degree programs. Computer Science is still the blanket degree everyone gets, when in reality, some kind of trade program is likely sufficient to train many of today's "developers" (the "pokers").
SICP is programming for the theoretically inclined. It seems analogous to calculus in math versus calculus for physics: you can study it more formally with all the proper proofs and derivations (and bizarre cases), or pick up just the applied bits (such as chain rule and dot notation) that you need for doing AP physics. This analogy suggests the existence of two approaches, with different implications and consequences for programming culture.
I think that is true to some extent, but Knuth's approach (which I'm more drawn to) is also theoretically inclined, albeit with a different flavor than SICP's. Knuth (and I) see programming as fundamentally being about computers, and Knuth starts with what a computer can do and builds from there. Abelson and Sussman see programming as more about computation, so they start with a model of computation (based on Scheme, the lambda calculus, etc.). These two approaches are quite different, and I don't think you can reconcile them easily. Not that either one is all that much better than the other, though Knuth's is certainly more efficient. It seems to me that a large part of which one you favor comes down to how you're wired.
Are programs meant to tell the computer what to do? (Knuth)
Or are computers meant to execute our programs? (SICP)
Personally, I lean towards the second view, for a simple reason: we design programs much more often than we design computers. Computer design doesn't take much of humanity's time, compared to programming them. So I'd rather have the hardware (and compiler suite) bend over backwards to execute our pretty programs efficiently than have our programs bend over backwards to exploit our hardware efficiently.
The MacKenzie book chapter discusses both the Fetzer and the DeMillo-Lipton-Perlis controversies.
It's really interesting that the debate got into questions like: can one formally verify a bridge?