
No. Stallman does care about the freedom of the people. The problem is that he focuses almost entirely on negative freedom[1]. If you restrict yourself to Free Software, you can do whatever you want with it (except restricting others' freedom, which is precisely what negative freedom is about). But there are additional degrees of liberty[2] to be gained if you also use proprietary software from time to time. Some capabilities just aren't in the realm of free software: the latest fashionable computer game, some device drivers, the Raspberry Pi (it runs some proprietary code).

Now, note that even someone very much aware of positive liberty could act the same way Stallman does. This is because we would all be more capable if all software were free.[3] The only way to get there is to stop using and making proprietary software. The problem is that this requires a personal sacrifice. At the consumer end, it means not enjoying some software (the proprietary kind). At the producer end, it means making less money.

In other words, we have a prisoner's dilemma[4]. Stallman is currently cooperating, and is urging everyone else to do the same. The GPL, by the way, is consistent with this: to some extent, it forces you to cooperate. You seem to think this is unacceptable; you'd prefer to be able to defect. But then I ask you: how do you justify this? If you plan to cooperate, you don't need the freedom to defect. If you plan to defect, what is your moral basis for making the world a slightly worse place?
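To make the dilemma concrete, here is a small sketch. The payoff numbers are mine, purely illustrative (only their ordering matters: temptation > mutual cooperation > mutual defection > sucker's payoff); "cooperate" stands for releasing free software, "defect" for going proprietary.

```python
# Hypothetical payoffs for the free-software prisoner's dilemma:
# (my_payoff, their_payoff) for each pair of moves.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # everyone's software is free
    ("cooperate", "defect"):    (0, 5),  # you sacrifice, they profit
    ("defect",    "cooperate"): (5, 0),  # you profit from their sacrifice
    ("defect",    "defect"):    (1, 1),  # the proprietary status quo
}

def best_response(their_move):
    """Whatever the other player does, defecting pays more individually."""
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, their_move)][0])

# Defection is dominant for each player taken alone...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
# ...yet mutual cooperation beats mutual defection for both:
assert PAYOFFS[("cooperate", "cooperate")] > PAYOFFS[("defect", "defect")]
```

That tension (individually rational defection, collectively better cooperation) is exactly why urging everyone to cooperate, as Stallman does, is not incoherent.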

This works even if we do not talk about you. If you think people should cooperate, why give them the freedom to defect? If you let them defect (and they do defect), it again makes the world a slightly worse place. How would you justify this? (Note: my own moral alarm went off when I wrote that last paragraph. I suppose yours has as well. Just remember that this is probably a false positive, for the GPL actually is a give-and-take licence. It is something like a clever Timeless Decision Theory[5] agent that will cooperate if and only if it knows you will cooperate if and only if you know it will cooperate if... an infinite recursion resolved by symmetrical information: the text is laid out for all to see.)
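The conditional-cooperation idea can be sketched too. This is my own toy model, not anything from the GPL text: the licence behaves like an agent whose strategy is public, so the "I'll cooperate if you will if I will..." regress collapses into a single known rule.

```python
# Toy model of the GPL as a conditional cooperator: it cooperates with you
# (grants you the code) if and only if you cooperate back (keep derivative
# works free). The rule is published in advance, which is what resolves
# the infinite mutual-prediction recursion.
def gpl_agent(your_move):
    return "cooperate" if your_move == "cooperate" else "defect"

# Facing a known conditional cooperator, cooperating is the best reply:
assert gpl_agent("cooperate") == "cooperate"  # share and share alike
assert gpl_agent("defect") == "defect"        # no licence for proprietary forks
```

Because the strategy is common knowledge (the licence text is public), the only way to get the mutual-cooperation payoff is to cooperate yourself.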

[1]: http://en.wikipedia.org/wiki/Negative_liberty

[2]: http://en.wikipedia.org/wiki/Positive_liberty

[3]: This point is central. If you don't believe it, the rest of my argument doesn't work. That's why, if anyone has reasons to reject it, I'd like to know about them (links to high walls of text are okay).

[4]: http://en.wikipedia.org/wiki/Prisoner%27s_dilemma

[5]: http://wiki.lesswrong.com/wiki/Timeless_decision_theory
