While you're making good points, this shows that engineers and industry intentionally make work more complex than necessary in order to justify higher prices for labor. This is not so uncommon in today's economy, especially in white-collar and regulated work that most people don't understand, but it's worth thinking about regardless.
To be fair, it's hard to imagine the economy and civilization crashing hard enough to force us to be more efficient. But who knows.
The actual hard question is probably making even 10% of such wisdom and good intentions survive once the program is bombarded by contributor patches or people picking up Jira tickets. TFA talks about this in the context of strategy and tactics.
The issue would be organizationally enforcing strategy. There's also the fact that the people most interested in making rules for others in an organization may not be the ones best qualified to program. And automated tools (linters) by necessity focus on very surface-level, local things.
That's how you get the argument for the small-team productivity camp.
It would be cool to see a linter, or a new language, that makes good architecture easy and bad architecture hard.
Like making state machines easier than channels. (Rust is sort of good at state machines compared to C++, but it has one huge issue stemming from the ownership model, which makes good state machines a little clumsy.)
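A minimal sketch of what I mean, using the typestate pattern (the types and method names here are made up for illustration): each state is its own type and each transition consumes `self`, so an invalid transition simply doesn't compile. The ownership clumsiness shows up when such a machine lives in a long-lived struct field, since every transition has to move the value out and back (the usual Option::take dance).

    // Typestate-style state machine: states are types, transitions consume self.
    struct Disconnected;
    struct Connected {
        session_id: u64,
    }

    impl Disconnected {
        // Consuming `self` means a Disconnected value can't be reused
        // after connecting; the stale state is gone at compile time.
        fn connect(self, session_id: u64) -> Connected {
            Connected { session_id }
        }
    }

    impl Connected {
        fn disconnect(self) -> Disconnected {
            Disconnected
        }
    }

    fn main() {
        let conn = Disconnected;
        let conn = conn.connect(42);
        println!("session {}", conn.session_id);
        let conn = conn.disconnect();
        // conn.disconnect(); // would not compile: Disconnected has no such method
        let _ = conn;
    }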
Or making it slightly inconvenient to do I/O buried in the middle of business logic.
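For contrast, a hedged sketch of the shape a language or linter could nudge you toward here, roughly "functional core, imperative shell": business logic as a pure function over plain data, with all I/O confined to the outer layer. The file name and function names are invented for the example (it assumes an `input.txt` exists next to the binary).

    use std::fs;
    use std::io;

    // Pure business logic: no I/O, trivially unit-testable.
    fn total_word_count(text: &str) -> usize {
        text.lines().map(|line| line.split_whitespace().count()).sum()
    }

    // Imperative shell: all I/O happens here, at the edge.
    fn main() -> io::Result<()> {
        let text = fs::read_to_string("input.txt")?;
        println!("{} words", total_word_count(&text));
        Ok(())
    }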
Bad architecture is a communication problem, not a technical one. It’s rushing in without knowing the domain and its constraints.
Doing I/O in the middle of business logic is just bad coding. It's usually the developer not caring about the architecture (tornado coding or slum coding), or the architecture not existing.
The language is English, the linter is us. These are things ultimately solved by establishing good processes and frameworks, making it difficult to do a task in a way other than the intended one.
I would just establish that all references to "theft" and "stealing" in the realm of copyright (with the notable exception of plagiarism) are metaphor and emotional rhetoric. Historically this comes from copyright interest groups who want(ed) to use criminal police to enforce their state-granted copyright privileges[1] against regular people.
Sadly, these things are often decided by rhetoric in society; but then again, there's no actual debate if it's just throwing slogans around.
Now some of the same rhetoric is used in the AI battle. The only question worth asking here is what the social benefit is, since human culture is by nature all commons and derivation. But in this case the AI companies are also accumulating power, and LLMs strip attribution, which arguably discourages publishing new works more than piracy does. A "pirate" may learn about you and later buy from you in various ways; an LLM user won't even know that you exist.
[1] Not even discussing how exaggerated these privileges are from what would be reasonable.
Because if you present yourself as the author, it follows that the actual author is deprived of attribution. So you are actually taking something from that person.
An LLM could commit plagiarism if authorship of the generated media were claimed for either the LLM or its creators.
I sympathize with what you're saying. In theory Docker and Snaps and such are supposed to more explicitly package Linux programs along with their dependencies. Though Docker especially depends heavily on being networked and servers being up.
I'm not a fan of bundling everything under the sun, personally. But it could work if people had the discipline to add only a minimal number of dependencies that are themselves lightweight, OR that are big, common, and backwards-compatible so they can be deduplicated. So, sort of the opposite of the culture of putting everything through HTTP APIs, deprecating things left and right every month, Electron (which drags browser complexity into anything), and pulling in whole trees of dependencies in dynamic languages.
This is probably one of the biggest pitfalls of Linux, and I say this as someone for whom it's still the sanest available OS. But the root of the problem is wider: we tend to dump the reduction of development costs onto all users as higher resource usage. Unless some big corp cares to make things more economical, or the project is right for some mad hobbyist. As someone else said, corps don't really care about the Linux desktop.
Many historians work on manuscripts and/or large archives of documents that might not be digitized, let alone accessible on the internet. The proportion of human knowledge that is available on the internet, especially if we further constrain it to English-language material that is neither darkweb nor pirated, is greatly exaggerated. So there are infrastructure problems that LLMs by themselves don't solve.
On the other hand, people tend to be happy with a history that ignores 90+% of what happened, focusing instead on a "central" narrative, which traditionally centered on maybe five Euro-Atlantic great powers, and nowadays somewhat pretends not to.
That being said, I don't like the subjectivist take on historical truth advanced by the article. Maybe it's hard to positively establish facts, but that doesn't mean one cannot negatively establish falsehoods, and in practice this matters more in the end. It feels salient when the article touches on the opinions of Carr, a Soviet-friendly historian.
My dad, who is a professor of history, always used to say that being a historian is like being a detective: piecing together incomplete or false information from many different unreliable sources, assessing the motivations behind actions, and so on.
You may indeed be able to establish some facts with high confidence. Many others will be suppositions or just possibilities. Establishing "facts" though is not really the point (despite how history is taught in school).
You try to weave all these different things into a bigger narrative or picture. It is most definitely an act of interpretation, which itself is embedded in our current conceptions (some of which are invisible to us and which future historians may then riff on).
Saying that you don't like the subjectivist take on history means you think there is an objective history out there to be had, one we could all agree on; but that does not exist.
> I don't like the subjectivist take on historical truth [...]
The work of historians is to make inferences based on incomplete and contradictory sources.
Historians aren't simple fact-checkers. They make judgements in an attempt to understand the sweep of history.
You can see what kind of work they have to do every time you stare at some bullshit narrative put out by a company about how its fracking operation was really good for the local economy, and how the waste water really was filtered three times so it couldn't be causing the overabundance of three-legged frogs, and how last year they funded two scientific studies that prove it. (I just made this up; hope you get the idea.)
I wouldn't automatically say this is bad. If the money that would otherwise end up as more profit percolates through society, employees, communities, etc., and even the founders themselves (as opposed to concentrated capital), it is actually fine and could produce a healthier society. On the other hand, I grant you that it might (also) feed corruption. But then, I wouldn't bet on concentrated capital not being corrupt as well.
If there's an argument here, it's a mess. You start with speech. Commerce is barely speech--it's actually using the public market--and there is a legitimate view that extending civil rights to companies is already a corrupt abuse of our society. Perjury is strictly limited to one context that has existed since the dawn of time (courts); what they can ask you is also highly proceduralized, and even then there's a carve-out against self-incrimination. Conspiracy and blackmail are only secondarily about speech: there's a criminal intent that you either made clear yourself or that has to be proven.
The internet is like the press, or communication by letters. Both are extremely well established in terms of guaranteeing freedom of speech and, in the latter case, secrecy as well. And the ID verification (which you then build your argument on) is only loosely related to free speech strictly speaking. It's about being constantly searched and surveilled under a presumption of crime.
Honestly, the SEO talk sounds like reflexive coping in this discourse. I get that the WWW has cheapened quality, but we now have the tech to defeat most SEO and other trash tactics on the search-engine side. Text analysis as a task has been cracked open. Google and the like could detect dark patterns with LLMs, or even just deep learning; that would probably be more reliable than answering factual queries.
The problem is that there's no money or fame in using it that way, or at least so people think at the moment. But we could return to enforcing some sort of clear, pro-reader writing and bury the 2010s-2020s SEO garbage on page 30.
Not to mention that LLMs randomly lie to you with fewer secondary hints at trustworthiness (author, website, other articles, design, etc.) than you get in any other medium. And there's the sustainability side of incentivizing people to publish anything at all. I really see the devil of convenience as the only argument for LLM summaries here.
> But we could return to enforcing some sort of clear, pro-reader writing and bury the 2010s-2020s SEO garbage on page 30.
We could.
But it will absolutely not happen unless and until it can be more profitable than Google's current model.
What's your plan?
> Not to mention that LLMs randomly lie to you with fewer secondary hints at trustworthiness (author, website, other articles, design, etc.) than you get in any other medium. And there's the sustainability side of incentivizing people to publish anything at all. I really see the devil of convenience as the only argument for LLM summaries here.
Well, yes. That's the problem. Why rely on the same random liars as taste-makers?
> I believe robots.txt was invented in 1994 (thx chatgpt).
Not to pick on you, but I find it quicker to open a new tab and type "!w robots.txt" (for search engines supporting bang notation) or "wiki robots.txt"<click> (for Google, I guess). The answer is right there: no need to explain to an LLM what I want, or to verify it [1].
[1] OK, Wikipedia can be wrong, but at least it's a commonly accessible source of wrongness I can point people to if they call me out. Plus, my predictive model of Wikipedia's wrongness assigns pretty low likelihood to something like this, while for ChatGPT it's more random.