I have heard of malware like this, and engineers who found it at Google were instructed by higher-ups, without explanation, to ignore it and never talk about it.
Good luck getting anyone close to this to go on the record about it though given such things normally come with corporate or government gag orders.
There are hundreds of privileged vendor binary blobs on most flagship devices that not even Google gets source code to, so supply chain attacks should be assumed.
It's not as black-and-white as "Brenda good, AI bad". It's much more nuanced than this.
When it comes to (traditional) coding, for the most part, when I program a function to do X, every single time I run that function from now until the heat death of the sun, it will always produce Y. Forever! When it does, we understand why, and when it doesn't, we also can understand why it didn't!
When I use AI to perform X, every single time I run that AI from now until the heat death of the sun it will maybe produce Y. Forever! When it does, we don't understand why, and when it doesn't, we also don't understand why!
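To make the contrast concrete, here's a toy sketch in Python (hypothetical names; `llm_call` is a stand-in for any sampling-based model, not a real API):

```python
import random

def classic_function(x):
    # traditional code: same input, same output, every single time
    return x * 2

def llm_call(prompt, temperature=0.8):
    # stand-in for a sampled model: same input, maybe-different output
    return prompt.upper() if random.random() < temperature else prompt

# the deterministic path can be verified by brute repetition
assert all(classic_function(21) == 42 for _ in range(1000))

# the sampled path can only ever be characterized statistically
outputs = {llm_call("hello") for _ in range(1000)}
# outputs will almost certainly contain both "HELLO" and "hello"
```

Real models add far more sources of nondeterminism (sampling, batching, floating-point ordering), but the asymmetry is the same: one side is provable, the other only observable.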
We know that Brenda might screw up sometimes but she doesn't run at the speed of light, isn't able to produce a thousand lines of Excel Macro in 3 seconds, doesn't hallucinate (well, let's hope she doesn't), can follow instructions etc. If she does make a mistake, we can find it, fix it, ask her what happened etc. before the damage is too great.
In short: when AI does anything at all, we only have, at best, a rough approximation of why it did it. With Brenda, it only takes a couple of questions to figure it out!
Before anyone says I'm against AI, I love it and am neck-deep in it all day when programming (not vibe-coding!) so I have a full understanding of what I'm getting myself into but I also know its limitations!
I can't find the article where I read it, many years ago now, but it was about strategies that small communities can adopt to keep their culture from being subsumed by the mainstream.
One was to pick a set of norms repugnant to the mainstream that everyone currently in the community can tolerate and enforce them rigorously on all new members. This will limit the appeal of the community to people like the ones currently there and will make sure that it never grows too big.
Thus your community is as appetising to activists attempting a hostile takeover as a toxic slug is to a bird.
As an example from six years ago, when the code of conduct madness had just reached its peak:
>I believe OpenBSD's code of conduct can be summed up as "if you are the type of person who needs a code of conduct to teach to you how to human then you are not welcome here".
It is/was a vaccine; I think the terminology is correct. It just didn't work effectively because the variants mutated too quickly, since you're not supposed to vaccinate in the middle of a pandemic. This caused an explosion of variants, and they couldn't make vaccines that tracked the new variants fast enough.
So instead they decided to move the goalposts and said "This vaccine that worked on the variant of two years ago will still protect from severe symptoms", when in fact it did nothing and people kept getting infected.
It wasn't the vaccine itself; it was how it was sold to us by Pfizer, Moderna and the politicians.
Soft power isn't a thing. As people have recently pointed out with the USAID situation: helping someone and then stopping the help is far worse than not helping at all. Therefore, soft power isn't power, it's actually more like soft debt. Every time you do charity, you add on to your moral obligations. The less charity you do, the fewer the requirements on you.
I built a proxy a while ago to make this easier - it lets you stick with IMAP/POP/SMTP as-is. No need for your client to even know that OAuth exists. See here: https://github.com/simonrob/email-oauth2-proxy
As far as I'm concerned, it did. Linux is far and away the best OS for my needs so I'll keep using it.
Did it "win" more of some metric of perfusion / capital versus the other big two? Perhaps some, mostly not. Who cares. The market is dumb.
What matters here is whether the capability exists at all. When it comes to phones, I'm still leery about linux. Support isn't quite wide enough, and for a device that I need 110% reliability from, we ain't there yet.
I do know one thing: the effects of closed ecosystems that caused 99.99999% of servers to use linux will eventually come for interface hardware. Companies have periodic bouts of psychosis that make their walled gardens inherently unreliable. It's just a whole lot slower in a realm that doesn't iterate at web speed. Will that mean everybody uses linux phones in the future? Of course not. But I do hope it will mean I get to put my own phone together with an OS I own, someday. That would be an unequivocal good.
If you do this sort of thing often, I'd love to chat further. I'm basically trying to automate this sort of manual research around companies with a library of deep research APIs.
We launched corporate hierarchy research and are working on UBO (ultimate beneficial ownership) now. From the corporate hierarchy standpoint, it looks like the Delaware entity fully owns the Estonian entity. Auto-generated mermaid diagram from the deep research:
I think history will eventually agree that his timing was unfortunate. The changes in technology (social media, smartphones, etc.) and the 2008 financial crisis combined to produce large-scale social change and dissatisfaction. That, along with aging politicians stuck in their old ways, was a huge challenge. And he could maybe have overcome all that, except that even in Obama's own lifetime half the country didn't want black people to have the same rights as whites, so he had to deal with the racism.
It all comes down to money and the center of gravity of finance. Those who wanted in on commerce and rising wealth used racial attacks against him to inflame a discontented society, and the figurehead of that inflammation seized power. As they say, "America sneezes and the rest of the world catches cold"; it would have been Britain, the Ottomans, Hispania, Portugal, Baghdad, Ctesiphon, Karakorum, Venice, Rome, etc. in different eras of history, but it is the US now, and as a result the world caught the fascism fever. I think that means Obama inadvertently was instrumental in the collapse of the US-centric world order and in the shifting of the center of gravity once again. I don't see Beijing picking up the slack; it would be less chaotic if it were that simple. But I'm concerned the US itself won't make it to the end of this decade, and I don't know what will come afterwards.
China has in times past been set to take on the throne, but they've been complacent and isolationist. That, I think, means a contraction of the US's reach and influence with an unfilled vacuum, starting in Europe and spanning the globe. It might be decades before there is any kind of stability. It's basically wealthy people of the West not wanting to accept reality that's keeping things afloat so far.
If McCain had won in '08 and Obama in '12, the swing may have been wildly different. If Romney had won in '12, there wouldn't be a Trump admin. You'll notice that a lot of people agree that things started going really bad around 2013-15; that's Obama's second term, after the Snowden leaks. Brexit and other far-right movements also peaked then. He isn't responsible and he didn't mean to, but the current state of things wouldn't have occurred without him.
One thing he could have helped, though: he could have avoided making fun of an insecure billionaire at the White House Correspondents' Dinner. Then that certain billionaire (with a long documented history of discriminating against blacks and working for the Russians) wouldn't have made it his mission in life to dismantle and revert everything that represented Obama.
Another option would be to set up a WireGuard server listening on port 53. WireGuard traffic is UDP, so it would work even if TCP DNS requests are blocked. And it would also make the client configuration much easier, i.e. just connect to the WireGuard server.
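A minimal sketch of what that might look like (hypothetical keys, addresses and endpoint; the only unusual detail versus a stock WireGuard setup is putting `ListenPort`/`Endpoint` on 53):

```ini
# server: /etc/wireguard/wg0.conf
[Interface]
Address = 10.8.0.1/24
# UDP 53, which restrictive networks often leave open for DNS
ListenPort = 53
PrivateKey = <server-private-key>

[Peer]
PublicKey = <client-public-key>
AllowedIPs = 10.8.0.2/32
```

```ini
# client: route everything through the tunnel
[Interface]
Address = 10.8.0.2/24
PrivateKey = <client-private-key>

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:53
AllowedIPs = 0.0.0.0/0
PersistentKeepalive = 25
```

Caveat: this assumes the network merely blocks ports rather than forcing all UDP 53 traffic through its own resolver or inspecting it for DNS-shaped packets.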
The exaltation displayed in this discussion thread is something everyone should ponder. Some stupidity specific to a certain era and place on Earth, just another tumour of the uncontrolled bureaucracy that always grows, is discussed as some eternal property of a God-given Universe.
A hijacked plane is a popular media spectacle with lots of ties to other images and scenes. Millions are ready to discuss it, or listen to the thrilling stories. "This is important for security!" is a shazam in that context. At the same time, much closer and more routine dangers directly affecting many people (power plants, refineries, railroads and so on) are kept in check by underpaid workers who can't even make companies fix sensors or replace something until it is rusted through. Effectively, "this is not important for anything", nor is the public interested in TV shows about a working pipeline that is not getting blown up. Those who want money and power naturally stick to the impressions that work for the crowd they are given.
Propaganda is most successful when people do the required thing on their own, agree that it's absolutely impossible to evade, and even encourage each other. Something in this day and age makes people themselves adore certain forms of propaganda, and even demand to be told specific lies. Among other things, images of stupid social machines crushing someone (“they'll put you on the list”, etc.) seem to weirdly stimulate the crowd.
Even in the so-called globalised world there are examples that crack the habituation. In country A, any big gathering of people needs to be formally approved, supplied with hordes of policemen (thankfully, not tanks), fences (thankfully, not barbed wire), entrance searches (thankfully, without stripping). When you ask anyone about that, they promptly respond with "What if terrorists/enemies decide to attack the crowd?" or "What if they start to riot?" (notice that "they"), etc. Even the most obvious security-theatre acts are automatically accepted, with promotion to "psychological stuff that helps to detect those people in the crowd". In country B, no less "civilised", the same event is handled by some private company that is mostly worried about portable toilets or electric generators, and people come freely to the venue if they like it (just buy the ticket).
The odds of something wrong happening are roughly the same, but people reason about themselves and those around them very differently. That mental picture of the world shapes the thing that happens, not the alleged expert opinions or calculations.
> Because dwm is customized through editing its source code, it's pointless to make binary packages of it. This keeps its userbase small and elitist.
This is the zen of suckless. Unlike other projects that desperately reach for users anywhere they can get them and then suffer the resulting bloat, suckless knows exactly who they are building for.
It’s a tough thing to swallow but embracing the opposite of the suckless zen is probably a large part of why modern software is so bad. The more we try to pretend it should be possible for all people to program or use computers in general, the worse software becomes.
I think this hits at the heart of why you and so many people on HN hate AI.
You see yourselves as the disenfranchised proletariat of tech, crusading righteously against AI companies and myopic, trend-chasing managers, resentful of their apparent success at replacing your hard-earned skill with an API call.
It’s an emotional argument, born of tribalism. I’d find it easier to believe many claims on this site that AI is all a big scam and such if it weren’t so obvious that this underlies your very motivated reasoning. It is a big mirage of angst that causes people on here to clamor with perfunctory praise around every blog post claiming that AI companies are unprofitable, AI is useless, etc.
Think about why you believe the things you believe. Are you motivated by reason, or resentment?
I'm not sure that AI has as much impact on resources like SO as one might imagine. There is one reason why I resort to using AI, and two reasons why I always double check its answers.
The reason I resort to AI is to find alternative solutions quickly. But quite honestly, that's more a problem with SO moderation. People are willing to answer even stale, actually-or-mistakenly duplicate, or only slightly or seemingly irrelevant questions with good-quality solutions and alternatives. But I always felt that the moderation dissuaded contributors from doing so.
Meanwhile, the first reason I always double-check the AI results is that they hallucinate way too much. They fake completely believable answers far too often. The second reason is that AI often neglects interesting or relevant extra information that humans recognize as important. This is very evident if you read elaborate SO answers or official documentation like MDN, docs.rs or the archwiki. One particular example of this is the XY problem. People tend to make similar mistaken assumptions, and SO answers are very good at catching those. Recipe-book/cookbook documentation also addresses these situations well. Human-generated content (even static or archived content) seems to anticipate, catch and address human misconceptions and confusions much better than AI.
They didn't cite papers directly even before the web. It's not a bounce or engagement issue.
Journalists don't make it easy for you to access primary sources because of a mentality and culture issue. They see themselves as gatekeepers of information and convince themselves that readers can't handle the raw material. From their perspective, making it easy to read primary sources is pure downside:
• Most readers don't care or don't have time.
• Of the tiny number who do, the chances of them finding a mistake in your reporting or in the primary source are high.
• It makes it easier to misrepresent the source to bolster the story.
Eliminating links to sources is a pure win: readers care a lot about mistakes but won't go looking for them, so raising the bar for the few who would is ideal.
> Being correct comes second to being agreeable in human-human interactions
Prioritizing agreeableness above correctness is the reason the space shuttle Challenger blew up.
The bcachefs fracas is interesting and important because it's like a stain making some damn germ's organelles visible: it highlights a psychological division in tech and humanity in general between people who prioritize
1) deferring to authority, reading the room, knowing your place
and people who prioritize
2) insisting on your concept of excellence, standing up against a crowd, and speaking truth to power.
I am disturbed to see the weight position #1 has accumulated over the past decade or two. These people argue that Linus could be arbitrarily wrong and Overstreet arbitrarily right and it still wouldn't matter because being nice is critical to the success of a large scale project or something.
They get angry because they feel comfort in understanding their place in a social hierarchy. Attempts to upend that hierarchy in the name of what's right create cognitive dissonance. The rule-followers feel a tension they can relieve only by ganging up and asserting "rules are rules and you need to follow them!" --- whether or not, at the object level, a) there are rules, b) the rules are beneficial, and c) the rules are applied consistently. a, b, and c are exactly those object-level does-the-o-ring-actually-work-when-cold considerations that the rule-following, rule-enforcing kind of person rejects in favor of a reality built out of words and feelings, not works and facts.
They know it, too. They need Overstreet and other upstarts to fail: the failure legitimizes their own timid acquiescence to rules that make no sense. If other people are able to challenge rules and win, the #1 kind of person would have to ask himself serious and uncomfortable questions about what he's doing with his life.
It's easier and psychologically safer to just tear down anyone trying to do something new or different.
The thing is, all technological progress depends on the #2 people winning in the end. As Feynman noted when diagnosing this exact phenomenon as the root cause of the Challenger disaster, mother nature (who appears to have taken up corrupting filesystems as a personal hobby of hers) does not care one bit about these word games or how nice someone is. The only thing that matters when solving a problem of technology is whether something works.
I think a lot of people in tech have entirely lost sight of this reality. I can't emphasize enough how absurd it is to state "[b]eing correct comes second to being agreeable in human-human interactions", and how dangerously anti-technology, anti-science, anti-civilization, and anti-human this poison mindset is.
To some extent Ukraine has also given people a very distorted impression of what a modern war in other contexts would look like, adding an unhelpful data point to the other, outdated one, which is WW2.
WW2 was probably the last time you could fight a war and do things like convert your local industry to produce weapons and tanks that were relevant. And even then, it only really happened because the US mainland was not contested territory during the conflict; it had the luxury of choosing when to enter the war.
Ukraine is simply not a "normal"-looking modern conventional war. Both sides have been receiving significant external imports which, for various reasons, are mostly untouchable by kinetic strikes until they cross the relevant borders (in this way it is much more like Vietnam in logistical respects). So you see assumptions like "mass production of drones will be key to the future!" in a context where the bulk of the critical components - microprocessors, cameras etc. - are not produced in the countries in conflict, and are imported from factories which are in no danger of ever being directly targeted.
So cheap mass-producible systems have held the line in areas, but they're obviously drop-ins for something you'd prefer to use instead - i.e. artillery - there's just a shortage of that. But conversely they haven't moved the line in a lot of areas: some of the biggest strikes of the war have come from conventional exploitation of defensive failures - i.e. the Kharkiv breakthrough - or from espionage operations which might be notable for using a lot of drones, but where the real accomplishment was getting them in position and the real success was still very typical: Operation Spiderweb taking out a large number of Russian long-range strategic bombers.
Now people will point to the latter and say "see! strategic bombers are useless!" ... and yet that can hardly be true if a substantial operation to destroy strategic bombers was worth doing. A system being vulnerable in a way it previously wasn't does not make it ineffective (i.e. if strategic bombers left intact at airfields would endanger the Ukrainian position, then they're still an obviously necessary system, but they now need better protection than they had - or Russian counter-espionage just sucks).
Take turmeric for example. It contains curcumin, a chemical that has quite good evidence for anti-inflammatory properties. However, curcumin is not present in turmeric in clinically relevant quantities. People taking turmeric medicinally are not actually interested in the curcumin; if they were, they would be taking a concentrated extract. They are interested in the ritual and cultural associations of turmeric.
In most cases when we do find evidence for something clinically relevant in traditional medicine we either discover that the effect is something other than it is traditionally associated with and/or that you need to take it at extreme doses for it to do anything at all.
I think I have an old comment about this, but there is an actual `adb sideload` command for installing a package on your phone from your computer (these days it's mainly used for flashing OTA/update zips from recovery; plain apks go through `adb install`). Since it's from your computer and not the phone itself, it's sideloading and not frontloading, I guess. Weirdly, and wrongly, people have also started to use the term to refer to just installing apps from outside the official appstores, but that's not sideloading. It's just installing an app. It's a normal Android feature. You can just grab a .apk file with your browser and install it like you would a .exe file on Windows.
iOS on the other hand historically required a jailbreak for this. I think that's where the confusion started. Android doesn't need a jailbreak, it doesn't need root (privileges), it doesn't need a custom ROM. You can just install stuff, it's normal. I think iOS users don't realize how different Android is and they just start repeating words like sideload and root without knowing what they mean, assuming it's just Android-speak for a jailbreak. They don't realize there's no jail in the first place.
I am aware English is a living language, and if enough people are wrong for long enough, they stop being wrong, but it's certainly painful to witness.
There's a direct line from mandating seatbelts to mandating developer certificates. If you accept in one domain that it's legitimate for power to reduce freedom to protect people from themselves, you'll accept it in every domain.
Look: in order for a mandate to be justifiable, it needs to at least provide superlinear benefit to linear adoption. That is, it has to solve a coordination problem.
Do seat belts solve any coordination problem? Do they benefit anyone but those wearing them? No. Therefore, the state has no business mandating them no matter the harm prevented.
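A toy model of that distinction (made-up numbers; Metcalfe-style pair counting stands in for a genuine coordination good such as a shared communication protocol):

```python
def private_benefit(adopters, per_person=1.0):
    # seatbelt-style good: each adopter gains a fixed amount,
    # and nobody else gains anything at all
    return adopters * per_person

def coordination_benefit(adopters):
    # protocol-style good: the value is in the pairs that can
    # interoperate, so it grows quadratically with adoption
    return adopters * (adopters - 1) / 2

# doubling adoption exactly doubles the private benefit...
assert private_benefit(200) == 2 * private_benefit(100)
# ...but roughly quadruples the coordination benefit, which is the
# superlinear payoff the argument says a mandate must produce
assert coordination_benefit(200) > 3 * coordination_benefit(100)
```

On this admittedly cartoonish accounting, seatbelts fall on the linear side, which is the point being made.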
A certain kind of person thinks differently though. He sees "harm" and relishes the prospect of "protecting" people from that "harm". He doesn't recognize the legitimacy of individual bad decisions. The self is just another person trying to hurt you. This kind of person would turn the whole world into a rubberized playground if he could.
Your comment is a perfect example of the worldview I described. Your argument is essentially that without rules stupid people will do foolish things and get hurt.
Yes, they will. So what? That's the price of freedom. I've never been a fan of slave morality.
> Who should bear the burden of treating these people?
You're arguing that we're all on the hook if we let people do dangerous things and then clean up after them when they screw up. There are two ways out of this situation, not one.
They're already learning how to handle this in SF. (I don't live there anymore, so I can't give specific examples.)
Waymo markets itself as an automated driver - same reason they're using off-the-shelf cars and not the cartoony concepts they originally showed. Like real drivers, they take the law as guidelines more than rules.
De jure (what the law says) and de facto (what a cop enforces) rules have had a gap between them for decades. It's built into the system - police judgement is supposed to be an exhaust valve. As a civil libertarian, it's maddening in both directions:
- It's bad that we have a system where everyone is expected to go 15 mph faster than posted, because it gives police an avenue to harass anyone simply for behaving as expected, and
- It's also dystopian to see police judgement be replaced with automated enforcement. There are whole classes of things that shouldn't be penalized that are technically illegal, and we've historically relied on police to be reasonable about what they enforce. Is it anybody's business if you're speeding where there's nobody to harm? Maybe encoding "judgment" into rules will be more fair in the long run, but it is also coaching new generations to expect there to be more rules and more enforcement. Feels like a ratchet where things that weren't meant to be penalized are becoming so over time, as more rules beget more automated, pedantic enforcement.
A slight digression, and clearly one I have a lot of thoughts on.
It's really interesting to see how automation is handling the other side of this: how you build machines to follow laws that aren't enforced at face value. They can't program them to behave like actual robots, going 24 mph, stopping exactly 12" before the stop line, waiting until there are no pedestrians anywhere before moving. Humans won't know how to interact with them (because they're missing all the nonverbal communication that happens on the road), and those who understand their limits will take advantage of them in the ways you've stated.
So Waymo is programming a driver, trying to encode the behaviors and nonverbal communication that a human learns by participating in the road system. That means they have to program robots that go a bit over the speed limit, creep into the intersection before the turn is all the way clear, defend against being cut off, etc. In other words, they're building machines that follow the de facto rules of the road, which means they may need to be ready to break the de jure laws like everyone else does.
On the contrary, the US-led coalition achieved military victory in Afghanistan in under 60 days. Which is an incredible feat. Though what that coalition failed to achieve, and where people try to adjust the definition of tactical victory, was the nation-building goal of creating a functional, independent Afghanistan government. The counterinsurgency aspect was the process of protecting that fledgling "nation".
The very uncomfortable truth here is that Israel is demonstrating how to effectively destroy insurgencies in Gaza and Lebanon. You cannot pussyfoot around, avoiding nasty, brutal tactics, and expect to accomplish anything. This was a lesson the West learned in the world wars, and we seem to have collectively forgotten it again.
Yes, BSD is a single coherent system, but so are many Linux distros. It's just that we've come to accept bad documentation as the norm for Linux-based tools. In my experience there are several types of problems that are very common for Linux tools:
* Extremely short documentation. Everyone has seen these, a tool where the man page exists but provides almost no actual information.
* Unfriendly reference-type documentation. GNU programs are often guilty of this, coreutils certainly comes to mind. On the upside, it's usually comprehensive. But it's not good - it's a short description followed by a sequential list of every option, so the functionality is described in detail but there are no usage examples, no list of the most common options, or anything like that. Great reference, poor usage documentation.
* Too much info about ancient systems or historical details. Yes, it's great that many of these utilities are portable and can run on different systems or work with files from different systems. The man pages for zip/unzip mention MS-DOS, Minix and Atari systems, while defining the zip format as "commonly found on MS-DOS systems". The man page for less explains that it's a program "similar to more(1)" - completely useless info now - and mentions that it has some support for hardcopy terminals, again information that's not important enough for the first paragraph in 2025.
* Poor keywords in the description. There's the theoretically useful apropos command. My Xorg wouldn't start so I tried to remember how to start my wifi up. apropos 'wlan|wi-fi|wifi|wireless' doesn't mention nmcli, which I was thinking of, though it does at least provide the much more difficult iw command.
* Technical project-specific jargon that makes it easy to find the solution - if you already know it, that is. For example, Xorg documentation generally doesn't use the word "resolution". It's not in the xrandr or Xserver man page, and in the xorg.conf page it's only a reference to virtual screens. Because X uses the term screen size. That's fine, understandable and even accurate but most people would first search for 'resolution'.
I date a lot of public school teachers for some reason (hey, once you have a niche it's easy to relate, and they like you), and I assure you you'd have a better, more human conversation with an LLM than with most middle school teachers.
One of the biggest problems with a lot of the modern theory of democracy is that it sees democratic mechanisms as being not just necessary but sufficient to justify any action undertaken by the state.
Another major problem is the lack of clear bounding principles to distinguish public questions from private ones (or universal public questions from public questions particular to a localized context).
Together these problems result in political processes that (a) treat every question as a global problem affecting society as an undifferentiated mass, and (b) use majoritarianism applied to arbitrary, large-scale aggregations of people as a means of answering those questions.
This leads to concepts like "one man, one vote" implying that everyone should have an equal say on every question regardless of the stake any given individual might have in the outcome of that question.
And that, in turn, leads to the dominant influence on every question -- in either mode of democracy Rothbard refers to -- being not the people who face the greatest impact from the answer, nor the people who understand its details the best, but rather vast numbers of people who really have no basis for any meaningful opinions in the first place.
Every question comes down to opposing parties trying to win over uninformed, disinterested voters through spurious arguments and vague appeals to emotion. Public choice theory hits the nail on the head here, and this is why the policy equilibrium in every modern political state is a dysfunctional mess of special-interest causes advanced at everyone else's expense.
Democracy is necessary, but not sufficient. And I think the particular genius of the American approach has been to embed democracy within a constitutional framework that attempts to define clear lines regarding what is a public question open to political answers and what is not. The more we erode that framework, the more the reliability of our institutions will fray.