
It is impossible to write a real "use for good, not evil" [1] license, because there are no formal, universally accepted notions of good and evil. While there are things that are universally considered good, or evil, the areas around them are large, nebulous, and anything but clearly outlined. Hence legally avoiding "anti-evil" license terms will always be a relatively easy option for a willing party. Moreover, there is a large range of issues and causes that are considered "good" by some and "evil" by others, so there will always be controversy and disagreement even without any lawsuits, with everyone considering themselves sincerely right, not just technically correct while violating the spirit.

A weapon that only a lawful good character can wield is the stuff of fairy tales and board games, which do not reflect reality fully enough.

Unlike morality, freedom is comparatively well-defined, which is why e.g. the GPL has been upheld by courts.

[1]: https://www.json.org/license.html





I tend to think that, in reality, there's no such thing as objectively 'good' or objectively 'bad'.

It's all context and timing.

Almost everyone who attacks this idea presents actions that are loaded with context: murder is killing when it's bad; self-defence is killing when it's good.

If you look at anything, and consider its non-contextual action, then you can easily find contextually 'good' and contextually 'bad' instances of that thing.

Even further, the story of the man who lost his horse [0] shows us that even if we say something that happens is contextually good or bad, the resulting timeline could turn out to be the complete opposite, meaning that, ultimately, we can never really know whether something is good or bad.

[0] https://oneearthsangha.org/articles/the-old-man-who-lost-his...


I think this is one of those cases where talking in abstract terms does not help people agree.

What I am hearing is: if you remove context (and timing, let's say it is part of context), then there is no good or bad. But who said to remove context? Aren't we then saying there is good and bad depending on context?

Many people, including myself, would agree in the abstract, while at the same time finding some situations very clear once you get down to a real example.

It reminds me of people claiming pain is an illusion or that facts don't exist (very edgy), until someone slaps them in the face to prove "I did slap you, that is a fact". I think that is reality, and specific examples are easier.

P.S. I would add values into the context.


How do you make good or bad resolvable? Is a piece of code being used by Tyson Foods okay? A vegetarian software engineer who contributed to the package might say “no, that use contributes to the killing of animals for food, which is bad.”

If you need to evaluate all the context to know whether a license is usable, it makes it extremely hard for “good guys” to use code under that license. (It’s generally very easy for “bad guys” to just use it quietly.)


> How do you make good or bad resolvable?

It is not a computer program, but an ethics problem. We can resolve it by reasoning about the context and the ethics involved.

I realize it is the topic of this thread, but OP did not mention anything in relation to licenses, and was just talking about good and bad not existing objectively (i.e. without context).

I think, if we came up with a specific situation, most people with similar values would reach the same good/bad verdict, and a small minority would reach a different one.

I believe the Tyson Foods example is overly simplistic and still too abstract, because one can be vegetarian for many reasons, and these would affect the "verdict". In the real world, if we were working on that piece of software, the question would be: should this specific HR SAP module for Tyson Foods be implemented by me, a vegetarian opposed to animals suffering unnecessarily, etc., as opposed to the abstract idea of any piece of code and any vegetarian. If a friend called you and said, "I have this situation at work, they are asking me to write software to do x and I feel bad about it," etc., I bet it would not be difficult to know what is right and wrong.

Another aspect of it: we could agree something is wrong (bad) and you might still do it. That does not mean there is no objective reality, just that you might not have options, or that your values might not be the ones you think (or say) they are, for example.


But in a typical FOSS scenario, your decision to open source the code and Tyson Foods' decision to use it are decoupled. You don't know who all the potential users are when you open source it, so you can't consider all the concrete cases and make sure that the license reflects them. In the same way, Tyson Foods isn't going to contact all the creators of libraries they want to use and ask whether their concrete use case is in line with each creator's ethics.

Agreed. This would be a logistical nightmare on both ends, especially if the licenses could be revoked if and when Tyson Foods changes some of its policies and/or the author changes their political views.

I believe that this would effectively make sure that nobody uses these licenses.


> I think, if we came with a specific situation, most people with similar values might reach the same good/bad verdict, and a small minority might reach a different one.

All you're doing is agreeing that the context of the situation determines whether the action is "good" or "bad" (which was my point).


In classical times there was no general concept of good or evil. The question was whether something was fitting in its context. The general concept of good and evil came with the rise of Christianity.

Even that evolved with time.

This was one of the many disagreements between Catholics and Protestants during the 16th-17th century, for instance, with some of the most powerful Catholic currents (e.g. Jesuits) being very much in favor of rethinking morality to take into account context, while the most powerful Protestant currents pushed for taking morality back to [their interpretation of] the manichean early Christian dogmas.


Come on. A quick search suggests that Zoroastrianism already had this a good six hundred years before Christianity. And ancient Greek philosophers were trying to define good, evil, and "God" for generations before Christianity (source: I've been reading about early Christianity for two years). Certainly, Judaism had it, and that's what inspired early Christianity (with the exception of Paul, the early leaders were devout Jews).

> "the Software shall be used for Good, not Evil."

For JSLint, Crockford gave an exemption though: "I give permission to IBM, its customers, partners, and minions, to use JSLint for evil."

https://gist.github.com/kemitchell/fdc179d60dc88f0c9b76e5d38...


The very fact that this instantly reads as ironic jest illustrates how impossible it is to seriously restrict licenses with broad moral clauses.

One could come up with clauses that would be admissible in court, e.g. "this software is expressly not licensed to be used for anything intended to kill humans". It would not be licensed for military targeting software, but would likely still be licensed for a military transport system, or even an anti-drone weapon.

The best actionable clause I could come up with is something like: "your license to use this software for any purpose terminates as soon as a court of [insert jurisdiction] finds that it has been used for [something you are opposed to, but also sufficiently clearly defined, like genocide, or incarceration of peaceful political dissidents], which has resulted from the use of your products and services, and with your prior knowledge of such use". I think I've even seen similar clauses in many commercial licenses, just with provisions unrelated to morals.


> there are things that are universally considered good, or considered evil

What a bold claim.



