Principled doesn't mean stupid. You can research and calculate all you want to show someone that it's cheaper to euthanize their severely disabled child than to raise them. Some people will still never do it.
> Adhering to axiomatic principles like self-ownership to the degree that you can justify scenarios like 'an addict legally selling their kidney' is a mistake. It's the equivalent of trying to stick to Newton's laws to explain the universe: they work great right up until there's a contradiction.
That is precisely when it's most needed! There is no entity on Earth that can make a better claim to authority over a person's kidney than the owner of that kidney. The fact that you call them an "addict" as a way to delegitimize their decisions is an appeal to emotion; ultimately, every person is the sole determining authority over their own kidneys, without exception. This is not something the opinions of others have any bearing on. In fact, "in cases of addiction" is a good canary in this coal mine: it's frequently trotted out by people who think that circumstances—any circumstances—are legitimate reasons to override another person's self-determination about their own physical person, or about choices that harm no other party, in an appeal to "common sense". They're not!
There isn't a contradiction in an addict deciding to destroy themselves by continuing an addiction, and this decision is no more or less legitimately made than an addict's decision to end their addiction. In all cases it is theirs alone to make.
Many of my friends have made each choice. It is always theirs to make, and neither is correct and neither is incorrect—unless it is forced upon them against their will!
It’s not a matter of “vulnerable”, or “decriminalization”, or “regulation”, or of markets. It is the basic philosophical premise that you don’t get to make these decisions for others unless you are willing to declare yourself their master and dominate them, using some handwaving excuse (e.g. "addiction") as justification for substituting your wishes for theirs—over their own body or choices.
The fact that you may have widespread opinion supporting "people shouldn't get addicted to drugs and destroy themselves" is irrelevant. No person can legitimately claim this power over another in any circumstance, especially and most importantly the extreme ones. Self-destruction of any kind—euthanasia, to use one example—is not legitimate only in cases of terminal illness, or only when loved ones agree and empathize with it. It is every person's own right to determine what happens physically to their own body, at all times and in all circumstances.
You’re using a lot of words and emotional context to dodge this plain and simple (and perhaps inconvenient) fact; the truth is, unfortunately, that such arguments have no moral basis, and are fundamentally immoral in that they violate consent.
Our bodies are our own to destroy and mutilate in any way we see fit. To use force to impose your will over top of someone's own and prohibit them from exercising that basic right of self-determination is the blackest of crimes against an adult human being; in my view, in the violation of consent it imposes, it is logically equivalent to rape.
Please abandon this way of thinking, that you (or anyone else, or "society", or "government") are entitled to choices over the body of another person—for any reason, including and especially the ones you find disgusting, destructive, distasteful, or repugnant.
I once posted a personals ad written in SQL on Craigslist, back when they had that section. A DBA replied and asked if I wanted to go hawking. She had a Cooper's hawk. I met her at a commercial park on a Saturday morning. She was driving a Honda CRV; the hawk was in the front passenger seat, and I hopped into the back seat.
She started driving and spotted some crows. The hawk saw them as well. Wearing a "don't kill me with your claws" glove, she moved her hand to the hawk, who gleefully jumped on. She rolled down her window, stuck the hawk outside, and it was basically a drive-by shooting with a bird bullet. This happened three times.
My most vivid memory of this was her ripping the crows apart into pieces and putting them into a bucket, like it was sushi you'd order from KFC.
Can you quantify what you think that choice looks like?
Say you seized the entirety of Elon Musk's assets: that could pay for about a year and a half of the Dept. of Transportation. Say you (somehow) seized all the wealth of the world's 100 richest people. That's 2 years of US non-discretionary spending (Social Security, Medicare, and Medicaid). I often see comments that assume all problems could be solved by just taxing the rich more, but I just don't think that's true.
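To make the arithmetic concrete, here's a back-of-the-envelope sketch; every dollar figure below is a rough, illustrative assumption rather than an exact number:

```python
# Back-of-the-envelope check; all figures are rough, illustrative assumptions.
musk_net_worth = 250e9          # assumed net worth, ~$250B
dot_annual_budget = 165e9       # assumed annual DoT budget (hypothetical figure)
print(musk_net_worth / dot_annual_budget)       # ~1.5 years of DoT

top_100_wealth = 5.0e12         # assumed combined wealth of the 100 richest
entitlements_per_year = 2.5e12  # assumed annual SS + Medicare + Medicaid
print(top_100_wealth / entitlements_per_year)   # ~2 years of entitlements
```

Even doubling any of these assumptions doesn't change the conclusion by much: one-off seizures are small next to recurring spending.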
The Nordic countries pay for their social safety nets by taxing the middle class more heavily than we do in the US. If you want to change that, it's less about capital vs. labor, and more about your dentist vs. labor (dentists being the classic example of a job that earns a high income without being a "capital owner").
> due to the end of ZIRP and AI productivity gains.
I think you're missing the mark with this analysis.
If you go back to the original dot-com bubble, it was as much a hardware bubble as a software one. Same thing with the mobile bubble. The AI bubble we are in has NOTHING to do with productivity and everything to do with hardware. I, as a software engineer, am not going to come up with a product that can compete with any of the major players without a massive capital investment.
Meanwhile, the price to play as a software engineer is also driven by high costs. AWS, for better or worse, is the model and the go-to, and it is NOT cheap by any measure. Its pricing model looks more like the iPhone's and less like an efficient market's. AWS provides MOST of Amazon's profit margin. It makes tech companies more like franchisees renting the location for their fast food joint and less like independent entities.
The thing is, there are TONS of gaps in the software marketplace that need to be filled. These are companies that are going to be in the $2-3 million a year range and capable of being run by a small team (think ~5 people). Nothing that would appeal to the Y Combinator set. You don't need Kubernetes, Kafka, high-performance bleeding-edge Rust, or massive autoscaling to run these services. They are never going to get huge, and in fact they offer enough room to start another company of the same scale if one is ambitious and wants to diversify.
Does your average 18-year-old know this? No, because most people who write code for a living don't seem to know where these gaps are. Do the math on what it takes to make $100k a year at 10 bucks a month... add a zero for a million, multiply by 3 for a "small team"... The number is shockingly small.
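Here's that math worked out as a quick sketch (the $10/month price is the one from above):

```python
# Subscribers needed at $10/month to hit each annual revenue target.
price_per_year = 10 * 12
for label, target in [("$100k/yr", 100_000),
                      ("$1M/yr", 1_000_000),
                      ("$3M/yr, small team", 3_000_000)]:
    print(f"{label}: ~{target / price_per_year:,.0f} subscribers")
# $100k/yr: ~833; $1M/yr: ~8,333; $3M/yr: ~25,000. Tiny next to any real market.
```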
Does your average 19-year-old have the chops to figure this out? No, because 20- and 30-something laid-off software engineers can't seem to figure it out either, even ones with "top degrees".
That doesn't mean there isn't a path for the sharp young kid to "skip school" and go directly into industry. That path is open source. A history of strong contributions and being smart will build a better network than any CS degree ever would... However, if you can do both open source and a degree (from anyplace), you're even better off! The same could be said for working at FedEx, Walmart, or Costco while you get a CS degree from anyplace and then seeking a job in a corporate office. You have a set of experiences that make you invaluable as a contributor.
Lastly, no one talks about the bad guys. There are plenty of scammers and thieves abusing technical skills who lack formal education and do well for themselves. If we remove all the options and leave only a narrow path, will we end up with more criminals and fewer contributors? This is part of why "Russian hackers" is one of the givens in the industry (crime did, and does, pay well).
I still think software engineering is a good career choice for a smart kid, but you have to bring more to the table than just code if you want to prosper!
So basically running a microwave for about 800 seconds, or a bit more than 13 minutes, per model?
Oh my god - the world is gonna end. Too bad we panicked over exaggerated energy consumption numbers for individual use of an LLM.
Yes - when a lot of people do a lot of prompting, this one-tenth of a second to 8 seconds of running the microwave per prompt adds up. But I strongly suggest that we could all drop our energy consumption significantly by other means, instead of blaming the blog post's author for his energy consumption.
The "lot of burned coal" is probably not that much in this blog post's case given that 1 kWh is about 0.12 kg coal equivalent (and yes, I know that we need to burn more than that for 1kWh. Still not that much, compared to quite a few other human activities.
If you want to read up on it, James O'Donnell and Casey Crownhart try to pull together a detailed account of AI energy usage for MIT Technology Review.[1] I found that quite enlightening.
EV can't actually work. It was always about branding for the for-profit CAs so that they have a premium product which helps the line go up. Let me give you a brief history - you did ask.
In about 2005, the browser vendors and the Certificate Authorities began meeting to see if they could reach some agreement as neither had what they wanted and both might benefit from changes. This is the creation of the CA/Browser Forum aka CA/B Forum which still exists today.
From the dawn of SSL the CAs had been on a race to the bottom on quality and price.
Initially, maybe somebody from a huge global audit firm that owns a CA turns up on a plane and talks to your VP of New Technology about this exciting new "World Wide Web" product, maybe somebody signs a $1M deal over ten years, and they issue a certificate for "Huge Corporation, Inc" with all the HQ address details, etc., and oh yeah, "www.hugecorp.example" should be on there because of that whole web thing, whatever that's about. Nerd stuff!
By 2005, your web designer clicks a web page owned by some bozo in a country you've never heard of, types in the company credit card details, and the company gets charged $15 because it's "on sale": a 3-year cert for www.mycompany.example, with mail.mycompany.example thrown in for free, so that's nice. Is it secure? Maybe? I dunno, I think it checked my email address? Whatever. The "real world address" field in this certificate now says "Not verified / Not verified / None", which is weird, but maybe that's normal?
The CAs can see that if this keeps up, in another decade they'll be charging $1 each for 10-year certificates; they need a better product, and the browser vendors can make that happen.
On the other hand, the browser vendors have noticed that whereas auditors arriving by aeroplane was a bit much, "our software checked that their email address matched the From line" is kinda crap as an "assurance" of "identity".
So, the CA/B Baseline Requirements, aka BRs, are one result. Every CA agreed they'd do at least what the "baseline" required, and in practice that's basically all they do, because it'd cost extra to do more, so why bother. The BRs started out pretty modest - but it's amazing what you find people were doing when you begin writing down the basics of what they obviously shouldn't do.
For example, how about "no issuing certificates for names which don't exist"? Sounds easy enough, right? When "something.example" first comes into existence, there shouldn't already be certificates for "something.example", because it didn't exist... right? Oops, lots of CAs had been issuing those certificates, reasoning that it's probably fine and hey, free money.
Gradually the BRs got stricter, improving the quality of this baseline product in terms of both the technology and the business processes. This has been an enormous boon: because it's an agreement for the whole industry, it ratchets things up for everybody, so there's no race to the bottom on quality, since your competitors aren't allowed to do worse than the baseline. The same can't be said on price; zero-cost certificates are what Let's Encrypt is most famous for, after all.
The other side of the deal is what the CAs wanted: UI for their new premium product. That's EV. Unlike many of the Baseline Requirements, it is very product-focused (although, to avoid being an illegal cartel, the CA/B Forum is forbidden from discussing products, pricing, etc.), and so it doesn't make much technical sense.
The EV documents basically say you get all of the Baseline, plus we're going to check the name of your business - here's how we'll check - and then the web browser is going to display that name. It's implied that these extra checks cost more money (they do, and so this product is much more expensive). Improvements to the baseline still help, but they also help everybody who didn't buy the premium EV product.
Now, why doesn't this work in practice? The DNS name or IP address in an "ordinary" certificate can be compared to reality automatically by the web browser. This site says it is news.ycombinator.com; it has a certificate for news.ycombinator.com; that's the same, OK. Your browser performs this check, automatically and seamlessly, for every single HTTP transaction. Here on HN that's per page load, but on many sites you're doing transactions as you click UI or scroll the page, and each is checked.
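For illustration, this is roughly the machine check in question, sketched with Python's standard library (browsers do the same thing natively):

```python
# The name in the certificate is matched against the hostname automatically;
# no human judgment is involved anywhere in this check.
import socket, ssl

hostname = "news.ycombinator.com"
ctx = ssl.create_default_context()   # verifies the chain AND the hostname
with socket.create_connection((hostname, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
        # A mismatched name would raise ssl.SSLCertVerificationError above.
        print(tls.getpeercert()["subject"])
```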
With EV the checks must be done by a human: is this site really "Bob's Burgers"? Actually wait, is it really "Bob's Burgers of Ohio, US"? Worse: although you know them as Bob's Burgers, legally (as you'd see on their papers of incorporation) they are probably "Smith Restaurant Holdings Inc.", and they're registered in Delaware, because of course they are.
So now you're staring at the formal company name of a business and trying to guess whether that's legitimate or a scam. But remember, you can't just do this check once; scammers might find a way to redirect some traffic, so you need to check every individual transaction the way your web browser does. Of course, it's a tireless machine and you are not.
This has got to be one of the best quotes illustrating what I believe to be wrong about everything related to Apple and its products: the blatant and worrisome repackaging of ideas and words.
I'm not being facetious at all when I say: thank you for this quote. I'll save it and use it in all the discussions I'll have on the subject from now on. It has really added a key puzzle piece to my understanding of the Apple mindset.
The "notch" (Who came up with the term anyway? I don't believe Apple actually identifies it with a name.) is most definitely meant to be a notch: when applications are full-screen the notch will actually "eat" a part from your screen. This was shown since day 1 of the introduction where a phone was on display with the Wonder Woman movie full-screened and HDR activated. This is Apple's intended and expected behavior. It's Apple's choice to put the "notch" front and center, not to hide it with software and even set up guidelines to ignore it in application development.
Personally, I have an issue with notches and I will never own a device that has one. I find it a lazy, ugly, and uninteresting way to increase the screen-to-body ratio of phones. But I'm somewhat glad about the current experimental designs being released by other manufacturers. It's refreshing to see different takes on the issue, whether pop-up cameras, flip-over cameras, or now even cameras hidden under the screen. Now that is innovation, that is design, that is actually looking for a solution to a very difficult problem. Instead, Apple chose to put the "notch" front and center and to ignore it, going so far as to almost market it as a feature. Because look at all the high-tech stuff you get because of it! Sorry, I'm not buying it.
And this shows the insidious marketing that Apple partakes in. It actively redefines words and ideas:
- A motherboard isn't a motherboard, it's a logic board. It does exactly the same thing, it is exactly the same thing, and it's even produced in exactly the same way as motherboards. But somehow the brand on the shell makes it different.
- A Mac is different from a Personal Computer, and as Louis Rossmann has indicated, a Mac can "regress" into becoming a PC. How is this possible? It does the exact same thing, is built in the same manner, uses the same technology, and serves the exact same purpose.
- An "App" is basically a term that collects all the things that are software-y. A deamon? That's an app that runs in the background. A service? That's an app. A compiler? That's an app. A game? That's an app. A script? That's an app. A shell? That's an app. Etc...
- A repository with a GUI is suddenly an app store. No, it's a software repo with DRM included for free.
- During the introduction of the then-new "EarPods" design of the corded headphones, the statement was made that they were engineered to guide audio waves into your ears. Gee whiz, Batman, what are all the other headphones doing then?
- The iPhone 5C was "unapologetically plastic". So it was just plain and simple plastic. Just like all the other manufacturers' phones out there.
- The famous "I'm a Mac and I'm a PC" commercial is so obvious that it almost hurts. No, they both are PCs; they just look a little different.
This repackaging of words and ideas is a very worrisome trend. It muddies the water when it comes to the definitions of words, and it eventually leads to the muddying of the truth. Not only that, but if we accept this sort of repackaging with our PC and phone hardware, why should we not accept it in other aspects of our lives? Why should there not be alternative facts, when there are alternative PCs? It's a mechanism in our psyche that is prone to abuse, and therefore we should not partake in it, even if it maximizes profits.
It's all actually pretty simple: look at the definition of the word, and if all of it applies, it sticks. How you feel about that does not matter.
> I love Linux on the server side, but I just don't have time to deal with frustrating bullshit issues caused by this bazaar-engineering on the desktop side, even if it's FOSS. I'd rather pay and have something that works.
Which is fair. As an enthusiast who enjoys nitpicking my config files to make it all work perfectly, I've slowly come to understand people who don't want to do that. Sometimes you just want to get your work done, and it can definitely be frustrating when every utility has a different config file format, some don't even have config files but instead use gconf, some require a restart and some don't, some rely on new libraries that your distro doesn't have yet, and every application has a different-looking UI... it all adds up if you just want a macOS- or Windows-like experience. And unfortunately, that level of simplicity isn't quite a thing yet in the Linux world.
Almost all UEFI firmware allows replacing the default public keys with your own, and then you can sign everything yourself with private keys only you possess.
Your description of the European intellectual climate is admittedly accurate, if somewhat hyperbolic.
But I think your analysis of the root causes is lacking. European humans aren't different organisms from non-European humans. Culture does not form in a vacuum: it comes from base properties of the geography and the experiences of humans in the region.
In my opinion, it's primarily down to 3 things:
1) Population density: Europe has 3x the population density (and subsequent urbanization) of the US. Urbanization leads to more collectivist attitudes. If you compare European attitudes to those in a high-density American location like New York City, you'll be amazed how similar they are (in everything from religiosity to socialist economic leanings to political philosophy). It's the higher proportion of low-population-density areas in the US that leads to the major differences in political philosophy.
2) Experience of downside risk: European risk aversion is quite easily explained by the fact that Europeans have experienced the most extreme version of downside risk imaginable in recent memory (WWII). The worst thing the US knows is a depression. Not total annihilation, firebombing, and markets going to 0.
3) Ethnocentrism: Europe's nationalist ethno-states are far less culturally diverse than the broader US. This leads to a higher capacity for empathy toward strangers (because people in a monoculture are more similar to you, you empathize with them more easily). Ironically, though, this empathy is what leads to a higher percentage of GDP being driven by centralized government spending (50% in Europe vs. 30-35ish% in the USA). The market is less empathetic, but ultimately more efficient, and it grows the overall pie faster... even accounting for the additional increase in inequality.
Growth compounds exponentially, so this gets more dramatic over time. People in podunk US Midwest states now have a higher disposable income per capita (on both mean and median measures) than people in London, a place you would traditionally think of as among the richest in the world.
But again, the US didn't get here because of 'culture.' It got here due to design decisions made hundreds of years earlier. Being a multicultural society reveals base human racist leanings, which results in more individualist governance, which leads to a greater embrace of markets and private capital, which leads to faster growth.
1) Get your resume in order. I'm going to assume that you are in the US, in which case, as a new graduate, your resume should be 1 page long (rules vary in Europe and elsewhere). Having reviewed thousands of resumes, here's what stands out: clean, concise, and no grammatical errors or typos (if you make mistakes on the most important 1-page doc you'll ever write, then what does that say about how careful you'll be when writing code?). This is a one-page document (at this point in your career) that you have complete control over and that sums up your life, so get it right. That means having someone, such as a friend, review it for mistakes. Don't put any photos in. Avoid colors (especially bright ones). No need for a summary of who you are at the top. Most people think that they need to make their resume stand out with colors and gimmicks, but as a hiring manager, I've seen this so many times that I'm jaded. What stands out is a clean resume that communicates effectively. Oh, and always upload/send resumes as a PDF.
2) Applying to jobs by filling out forms online is going to be mostly fruitless. You will occasionally get bites, but even as a senior engineer, I've found these to mostly be wastes of time. Instead, find companies that you are interested in. Then, go on LinkedIn and find engineering managers, team leads or even senior engineers and reach out to them with direct messages. Everyone thinks they need to reach out to the CEO or CTO, but they get 1,000 messages a day. If you reach out to someone lower down, they will typically have enough sway to get you in front of the people making hiring decisions.
3) Even for the best engineers, there's a randomness and luck component to interviews. The best way to practice for interviews is to actually do them. Cast a wide net and get as many interviews as possible, even at companies you aren't interested in working for. The experience will get you in the groove. And remember, it's a numbers game. You might be great at LeetCode, but get that one question about heaps that you can't answer. Just learn from your mistakes and move on.
4) If you can, try to appear employed. For better or worse, people will see you as more desirable if you are employed. This might mean setting up an LLC called Foo Software, giving yourself a role of Software Engineer, and then bidding on projects on Upwork. You don't have to lie about what you are doing if anyone asks (and most won't ask for details), but this gives you something to put on your resume and some experience.
5) Everyone on HN talks about having amazing side projects on your GitHub account. Take this with a grain of salt. I find that almost no one ever asks about or looks at my GitHub account when making a hiring decision. There's a small but vocal group of people who hate standard interviews and therefore like to say that looking at projects should be the metric by which candidates are judged. There's merit to this philosophical argument, but in practice, most companies just don't care.
6) As far as "unskilled labor" goes: if you need the money, go for it. But I don't personally think that having it on your resume will help you.
7) There are lots of remote jobs today, and by all means apply, but you'll have the biggest leg up if you apply to jobs that want people in person and you're willing to go into the office.
Just reviewed the paper. Extremely elegant flamingo dance. But let me confine my comments to the Hacker News theme of an enlarged brain (or really an enlarged neocortex). The key panels on this topic are Fig. 1g and 1h. OMG, the dissections of these "best examples" are sad: the olfactory bulbs are seriously torn and the paraflocculi are detached. The histogram that is Fig. 1h has a grand total of 6 measurements for the humanized transgenic mouse line and 4 (four) points for the control. Hey, when you have a statistically significant result, why mess it up by evaluating 10 of each of the two groups? You might not get published in Nature or get airplay on HN, and would that not be a shame?
As for the measurement of brain size, it is a planimetric projection of the area of the dorsal surface of the neocortex. This is a lame way to do morphometry. Squish the brain a bit and the area will expand beautifully, and by way more than 6.5%.
One last comment on the genetics of these animals. They are probably incorrectly stated to be at least F8 progeny of the mixed 129-B6N embryonic stem cells. I hope they mean N8 backcross progeny, that is to say, 8th-generation congenic lines. But in that case they appear to have backcrossed, weirdly, to a different type of B6: the standard C57BL/6J strain. All of this means that even in the best case, they have three different genomes banging around in their supposedly co-isogenic cases and controls: 1. chunks of 129-strain chromosomes, which will still be common even at 8 generations; 2. chunks of B6N chromosomes, which will also be common at 8 generations; and 3. of course the B6J background strain. You would have to carry out sparse whole-genome sequencing or use the GigaMUGA array to unconfound the genetics in this study.
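To put a number on the sample-size complaint, here's a quick power sketch using statsmodels; the effect sizes are illustrative assumptions, not values from the paper:

```python
# Approximate power of a two-sample t-test with n=6 vs n=4.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (0.8, 1.5, 2.0):  # Cohen's d: large, very large, enormous
    p = analysis.solve_power(effect_size=d, nobs1=6, ratio=4/6, alpha=0.05)
    print(f"d = {d}: power ~ {p:.2f}")
# Only enormous effects are reliably detectable at these group sizes.
```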
That statistic could only possibly exist for solved murders. Only ~50% of murders are solved, and it's much easier to solve a murder when the victim knows the killer. This sounds like a "looking for your keys under the streetlight" scenario.
This is going to sound snotty, and I'm not really trying to be, but... Unix and its derivatives were made for people who sort of knew what they were doing. The reason you send exceptional events to STDERR is so the STDOUT output, if it exists, can flow into the next command in the pipe. Asking for help is an exceptional event. If you want the error output of a Unixish (Linux, macOS, Solaris, etc.) machine running bash to be lessable, redirect it to STDOUT with `2>&1`. You probably shouldn't be touching the shell if you don't know what that does. These tools were developed assuming the users would have a basic understanding of the system they were running on.
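Here's the convention in miniature, as a Python sketch (the upcasing filter is a made-up example):

```python
# Data goes to STDOUT so it can flow down the pipe; help and errors go to
# STDERR so they never pollute the data stream.
import sys

def main(argv):
    if "--help" in argv:
        print("usage: upcase [--help] < input > output", file=sys.stderr)
        return 2
    for line in sys.stdin:              # the actual data path
        sys.stdout.write(line.upper())
    return 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1:]))
```

And if you want that help text to be lessable: `python3 upcase.py --help 2>&1 | less`.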
The GNU Project has published tools of varying quality, depending on who was around to write the tool, debug it, give feedback, etc. It is not the exemplar of high-quality software. (But it's far from crap.) The important bit about GNU (and any other software) is that it was written to suit its authors' uses. Other people have different requirements. Telling people to "write your software like GNU writes their software" is to misunderstand personal agency and one of the major points of open source software.
Your comments sound like you're saying "Software freedom means you're free to write software the way I want you to write software."
I just installed python3 on Debian and Alpine. It takes 16s vs. 4s (I ran the test three times and kept the fastest measurement for Debian and the slowest measurement for Alpine).
Sure, a one-time wait of 12s doesn't change my life. But I won't use apt/apk _once_; I'll use it every single time I install something. It low-key bothers me when my flow is interrupted by having to wait for machines to do their job, and increasing that wait by 300% doesn't help.
This wouldn't be a deciding factor for me. But it doesn't add points for the Debian-based approach.
Thread is a Wi-Fi replacement; the devices talk IP over Thread.
And it has an encrypted pairing process to your vendor-controlled hub. Said vendor can allow or disallow which other vendors may speak with said hub.
Here is the landscape we have:
- HomeKit: fully closed, requires certification from Apple. Very expensive and limited functionality.
- Zigbee: fully open; anyone can make Zigbee devices and sell them without any restriction. Operates on the same frequency all over the world. Devices are super cheap. As a vendor, you can extend the protocol however you like.
- Z-Wave: fully closed, several incompatible frequencies, requires certification to sell devices.
- Thread and Matter: semi-closed; same IEEE standard as Zigbee for data transfer. Vendors can allow it to talk to devices of other vendors. Requires certification. Same price tag as HomeKit, i.e. 3-4x more expensive than Zigbee.
All of them require hubs. And only with Zigbee are you guaranteed interop between all vendors and all devices sold across the globe, thanks to Home Assistant.
With Thread, the vendor can simply disallow you from using your devices with Home Assistant, which is unacceptable to me.
Governments that think this way set speed limits on roads at 45 mph, since that's the speed at which the most cars pass per second on a busy road.
Those same governments then act all surprised when it turns out their whole population is depressed and overworked: they work a 9-hour workday, commute 1.5 hours each way, and have no free time for life, relationships, hobbies, etc.
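The throughput claim itself is easy to sketch with a toy traffic model (Greenshields); the free-flow speed and jam density below are illustrative assumptions, picked so the peak lands at 45 mph:

```python
# Greenshields model: flow = density * speed, where density falls linearly
# as speed rises. Flow peaks at half the free-flow speed.
vf = 90.0   # assumed free-flow speed, mph
kj = 200.0  # assumed jam density, vehicles per mile

def flow(v):
    return kj * (1 - v / vf) * v   # vehicles per hour past a point

best = max(range(1, int(vf)), key=flow)
print(best, round(flow(best)))     # 45 mph maximizes throughput here
```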
"press pay to think for me button"
"press pay to think for me button"
"press pay to think for me button"
"press pay to think for me button"
"press pay to think for me button"
I love it
Unfortunately, then you will eventually be forced to work in environments without it and feel like you've just had a brain aneurysm because of how insufferably slow and clunky everything else is. Ignorance is bliss; just look at all those cute commenters here who think Notepad++ is the best thing ever. "It even supports regexp!" It makes the editor wars feel so pointless, but Windows users are adorable.
I have hidden recovery information in a few places on the internet; someone stumbling across it would not know what they were looking at, or what it's for. For example, you can hide the TOTP secret for an authenticator app, but it's useless unless you know what account and service it's for, plus the associated master password.
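A sketch of why that works, using the third-party pyotp library (the secret here is a made-up example, not a real one):

```python
# A bare TOTP secret will happily generate valid codes, but without the
# service, the account, and the master password, the codes unlock nothing.
import pyotp

secret = "JBSWY3DPEHPK3PXP"       # hypothetical base32 secret, hidden in plain sight
print(pyotp.TOTP(secret).now())   # a 6-digit code... for which account, where?
```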
> If you could create some score of peoples intelligence, how they react/respond to people, their normal physical movements and put that into a distribution the autistic people would definitely be on the tail end of the curves.
Some measurable aspects of human physiology and behavior would fall into a normal distribution (i.e. a "bell curve") if you graphed them, yes. Bell curves are common enough that there is a lay assumption that all distributions look like that, but that isn't the case at all. Certain forces are required to produce that flavor of distribution, and in their absence you can get graphs of all sorts of shapes.
For example, if you were to graph "how feminine" a set of people are, you're unlikely to get a bell curve where most people are androgynous. Instead, you get a bimodal distribution with a hump on one side for dudes and one on the other for ladies. In fact, graphs of height show a similar distribution.
If you graph how many fingers people have, you'll get something closer to a power law, where almost everyone is on the left at "10", a few unfortunate at "9", even fewer very unlucky at "8", and so on (plus a couple of odd points for polydactyly, etc.). This is the famous "long tail" you hear about in economics.
So there is certainly variation in all attributes, but that does not imply that the center of that range is the most populated. And thus, there may not be a "normal" in the typical "average and most common" sense of the word.
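If you want to see the three shapes side by side, here's a quick sketch on synthetic data (the proportions are made up for illustration):

```python
# Bell curve vs. bimodal vs. heavily skewed, on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
bell = rng.normal(0.0, 1.0, 10_000)                    # one central hump
bimodal = np.concatenate([rng.normal(-2, 0.5, 5_000),  # two humps, like the
                          rng.normal(+2, 0.5, 5_000)]) # "how feminine" graph
fingers = rng.choice([10, 9, 8], 10_000, p=[0.97, 0.025, 0.005])  # skewed

for name, data in [("bell", bell), ("bimodal", bimodal), ("fingers", fingers)]:
    counts, _ = np.histogram(data, bins=5)
    print(name, counts)
```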
That's probably true. But offending people with prudish attitudes or without a sense of humor is a pretty common old-school hacker goal.
It serves a valuable community function by protecting the community from corporate interests, non-technical bike-shedding, and overload by "bug" reports that are actually just the user not being technical enough to use the tool properly.
Edit: Also FWIW, I've had no problem introducing GIMP to church groups and in various professional settings.
Double edit: since you mention schools, I should also add that I taught my wife GIMP and she had no problem sharing it in her school (US public high school).
In some contexts, JSX and markdown are also competing.
I think HTML is the true winner.