Hacker News | csallen's comments

List of things that the public despised when they were new:

- Cars (expensive toys for the rich that endangered normal people and spooked horses)

- Recorded music (similar complaints about it not supporting artists)

- Bicycles (commonly called the devil's work)

- Novels (morally dangerous)

- Headphones / Sony Walkman (anti-social)

I remember when chatting online was nerdy, anti-social, and uncool. Now celebrities casually talk about sliding into each other's DMs.

The initial "it's unfashionable" backlash to new, useful, and threatening technology has been so repetitive and predictable throughout history that it's almost passé now. Most people aren't students of history, of course, so history will repeat itself.

But that also means the second act will repeat, not just the first act. And the useful technology will almost certainly become fashionable and accepted once it's more commonplace.


Please, please stop with the AI analogies. Just make your argument on its own terms.

"It's different from X" is no more meaningful than "it's the same as X".


> "It's different from X"

The post doesn't even say "it's different from X". It just says "it's unfashionable," with no comparison or mention of history at all, as if this is the first time a new technology has ever been unfashionable immediately after its release.

> Just make your argument on its own terms.

I feel like my argument is obvious? The "unfashionable" period for useful-but-jarringly-new consumer-facing technology is common, predictable, and short-lived.


You can't predict culture, you can't predict fashion, you can't predict the course of history, you can't predict innovations, you can't predict any of this creativity-mediated stuff.

Yes you can, in many cases. Go read a book about culture lol

OK, I choose "The Poverty of Historicism". Which book did you have in mind?

Ironically, nothing makes me question my stance of human supremacy over AI more than the weakness and triteness of human defenses of AI.

Or maybe the defenses are AI generated, who knows.


If you ask me, for many people headphones are still a sign of being antisocial, especially for the people who want to be antisocial. Online chat and online dating are now so monetized and hyped that I'd be happy if we went back to the old days when they were a nerdy thing, or if we could erase them from history completely.

So yes, all the things I accepted at first, I hate now. The others I was born into, so I can't say much about them. Maybe people are right at first and just accept the bad stuff later.


They were right about cars, to be fair

Only cars? I would extend the list with bicycles, online chat/dating, and headphones at least.

Okay but sometimes people despise things and then they go away or get stigma'd into a corner. There is selection bias in that list.

But either way, I'm talking about *the present* which is the time we all live in. Opining that in the future maybe it will be different is like - sure? Not super relevant though.


> Okay but sometimes people despise things and then they go away or get stigma'd into a corner.

Sure, but has that ever happened to a technology that was useful, convenient, affordable, etc.? Definitely gotta be rare. I think the utility tends to win in the end.

> But either way, I'm talking about *the present* which is the time we all live in.

Yeah that's why I didn't disagree with you. I think you're right about the present. But I wouldn't call my response irrelevant. It's pretty normal in a conversation to carry things forward and respond with your own thoughts.


> When code production gets cheap, the cost doesn't disappear. It migrates.

I'm surprised people aren't taking the time to edit this very specific kind of phrasing out of their writing. It's such a common AI tell now that, even when writing by hand, I'd just avoid it entirely.

Then again, I hated that LLMs co-opted the em-dash, and I refuse to stop using it, so I suppose I get it.


> to edit this very specific kind of phrasing out of their writing

Even without touching moral/ethical/normative reasons, it's impractical. LLMs will continue to incorporate the most popular phrasings or grammars, and touchy readers will simply pivot to a new "telltale" du jour.

Eventually any personal or organic writing will be gone, as writers twist themselves into an artificial form of "the inverse of the LLM."

> Michael Bolton: "No way, why should I change? He's the one who sucks."


> Even without touching moral/ethical/normative reasons, it's impractical.

It's impractical to edit your AI-generated writing to put it in your own voice? People have been writing unique stuff for millennia.


> your AI-generated writing

Hold up, the subject was writing which is not "AI generated", and the edits authors might make out of fear of being falsely accused.

If that's not what you intended, then I think your earlier comment is in error, since I'm not the only one who read it that way.


Why would they have to? Just to avoid being accused of using a slop machine? If that is the only criticism you have against LLM produced text, then there is no problem.

And I'm saying this as somebody who is strongly against LLM-generated content of this form.


I have no problem with AI-generated text.

But I do have somewhat of a problem with unedited text. Personally, I even take the time to edit my HN comments.

And, for the same reason I'd have a problem watching the same episode of the same show every day, I have a problem with reading text that feels like a super derivative clone of tons of other writing. Which is usually what you get when you don't edit your AI-generated text.


All good, and I agree.

But the question was about somebody who does write the text themselves, who edits it themselves, no AI has ever touched it, but the result still has elements of what AI text typically has. Because it's their style. Why should such people have to adapt? Just so they don't end up in a witch hunt? How about texts older than 2, 5, 10 years? Should they be changed too? And what if "LLM style" changes over time?


On top of that, we have to continually tend to our bodies, feed them fuel (sometimes at risk of life and limb), exercise them, clean them, tend to them, visit the doctor, take medicine/drugs, etc., just to keep them in good shape. And they eventually have a 100% failure rate.

I think this is a problem in perspective/framing. Or phrasing, if you will.

"Being economic entities in the workforce" could alternatively be phrased, "performing a skilled role or responsibility that's useful for your tribe."

That sounds much less sinister. It's something humans have been doing for millions of years. It feels good, it engages our brains, it's helpful to others, and it's helpful to ourselves. And I can't help but feel the modern "anti-capitalist" trend is unfair in its approach of disparaging it.

Of course, play and socializing are important, too! Life isn't all work and contribution. And there are many ways to work or contribute outside of having a formal job, anyway. So I do agree with you that it's a bit sad that people don't have ideas for how to do either of these things unless it's through their long-term career.


They were specifically talking about a commercial labor-for-money transaction though. Not just any useful work.

Absolutely!

But also: with age more and more doors are closed to you. Many hobbies become inaccessible. You may end up with a bunch of choices that all just sound outright depressing. Losing a job is losing one more choice, restricting yourself to the possibly more boring options that you can still physically pull off.

It's just not fun being old.


Multi-generation households - which, as you noted, can also keep older people active - are mostly gone. You can't do much for your tribe from a retirement home on a random Saturday afternoon every few months in summer, so work and hobbies are the remaining activity centers, but you know which of the two is lionized as a virtue in American culture. Some hobbies are unfortunately only discovered in retirement, so perhaps some criticism of the economic system as imperfect is due.

Sadly, polarization pushes people towards either wholesale “burn it down” anti-capitalism or full-throated corporate bootlicking, and I don’t think either tack is particularly useful. There’s a more subtle critique - about our indoctrination in the West towards concepts like the “efficiency of the free market” demanding that we overlook rampant alienation among the working population - that is more what a lot of people are vibing on, but it’s being expressed as diet anarchism because that feels more poignant online.

I think most folks do, in fact, want to “perform a skilled role or responsibility that's useful for your tribe”, but find themselves railroaded into bullshit office jobs full of performative nonsense, soul crushing frontline service work, or body destroying blue collar work with no safety net, all of which are recipes for burnout later in life. Compare Keynes’ “Economic Possibilities for our Grandchildren” [1] to what we ended up with and you’ll find the root of the discontent is perhaps warranted.

[1] http://www.econ.yale.edu/smith/econ116a/keynes1.pdf


I don’t think being anti-capitalist necessitates being anti “perform a skilled role or responsibility that's useful for your tribe”. To me, that’s the big benefit: under capitalism you’re not working for your tribe, you’re working for a tiny few shareholders.

I’m pretty sure the world overall and certainly “my tribe” would be better off if the job I’m working just never got done


> under capitalism you’re not working for your tribe, you’re working for a tiny few shareholders

The first half of this sentence is false, but the second half is true.

I don't know about you, but when I look out my window every day, I see thousands of people working at their jobs: making delicious food that others can eat, stocking store shelves so others can shop, trimming trees so the city will look nice, driving trucks full of goods that others can have, designing good website UX so others can use it better, repairing broken cars, etc. It's an intricate dance of millions of people waking up every day and doing selfless things for others in their tribe, in just the right amounts, because we've (miraculously) given them an incentive to do so.

To me what's depressing is that we can live in such a wonderful world, but with a cynical pessimistic culture in which it's commonplace to ignore the chief output of everyone's work.


The ‘little incentive’ being that if you don’t do it, you starve to death.

There’s tons of work being done because it feels meaningful, later today I’m cooking a meal for a potluck, etc. but if you want your Job to be meaningful that comes at a huge premium.


Good on you for cooking for the potluck! I think that's meaningful.

I don't think having a meaningful job comes at a huge premium, though:

1. I don't think it's true that if you don't work, you'll starve to death. At least, not in the west. You won't have the high quality things compared to your peers, but the state will provide you with housing, food, and resources, so long as you're psychologically capable of using them.

2. But even so, is there any other creature on earth that doesn't have to do some sort of work so it won't starve to death? Even hunter gatherers had to hunt, forage, raise kids, make tools, or otherwise contribute to their tribes, in an endless grind, just to get enough calories to survive.

3. And that doesn't seem… wrong? Many of us enjoy an incredible abundance of options for food, shelter, safety, entertainment, etc., produced by our peers in our tribes and communities. Why shouldn't we have to contribute as well if we want to partake?

4. The idea that "meaning" comes at a premium is the story I want to contradict. It's just that: a story. I know someone who delivers the mail. He loves delivering mail. He feels a ton of meaning. He says, "Yeah there's a lot of junk, but without me, people wouldn't get their wedding invitations. And they wouldn't get their bills paid." Most jobs contribute something, and contribution is meaning. The sad thing to me is we have so many voices telling everyone, "Your job is meaningless!" that people are starting to believe it, and they're ignoring the lives that their work touches.


Delivering mail is meaningful for sure! So is teaching, people want to do these things and sometimes it lines up that there’s a market for it too.

The premium is that stuff like my job where I’m fiddling on Azure is to the benefit of no one and making four times as much.

If you want something meaningful you have to accept worse conditions because all the wonderful lovely people of the world who care and want to make a difference want to work there and not somewhere else.

And it’s interesting you picked mail as an example, when at least in the USA it’s run by the state ;p

I don’t really think it’s horrible that it’s not possible to mooch off your community and give back nothing forever but I don’t think ‘a little incentive’ is the right way of putting it, especially for all the people that hate their jobs for reasonable reasons but stay at it because of the alternative.


I don't think the modern "anti-capitalist" trend is disparaging "performing a skilled role that's useful for your tribe". It's disparaging various of these things:

- being arm-twisted to perform a low-skill, low-utility role because economic weirdness and bad luck makes it the only work you can get. Your tribe could use your <furniture making skill>, but it's cheaper to import furniture from China, so tough. Your tribe might like your music, but you aren't as good as Adele, so shut up. You could grow decent fruit, but it doesn't pay well enough for you to afford the land to do it, and farms using illegal migrants can undercut your work, so find something else.

- systems parasitically exploiting your desire to provide useful work, to extract maximum value from you beyond what is satisfying and fulfilling, while treating you as disposable waste. You like cooking? Become a chef for 14 hours a day including evenings and weekends, or get out. From Amazon warehouse workers to programmers in the video game industry; intense grind, burnout, fired. Tribes don't tend to do that to people they value.

- systems distorting skills and responsibilities, e.g. not providing good tools, Kafka-esque bureaucracy, firing people in your 'tribe' at will, having your day micromanaged so your skilled work is entirely at the behest of other people, taking away agency from your work, demanding lower quality but faster, demanding higher quality and faster, demanding higher quality and paperwork, so that even if something is using a fulfilling skill, it actually doesn't feel that way.

- removing options to do multiple things; a job is usually reduced to one role from day start to end. There's not much room for someone who is the local baker, tends the canal lock, sells eggs in the market, and does mountain rescue or whatever.

- taking over your life; e.g. controlling your days off, providing your healthcare, owning all the land so there aren't 'commons' you can opt to live off, lobbying and bribing the lawmakers, mandating 37 pieces of flare, setting your start and finish time, making you justify sickness, demanding you be on-call or available at night.

Consultants with high-demand skills still have some opportunity to avoid this, but huge numbers of people don't.


Pro tip: introduce your friends to your other friends. Build a network. Networks get stronger as the number of connections increases, i.e. as more people in the network know each other. People are more excited to hang, because they know more people, and the hangs are more exciting. And hangs become more frequent, because more people can initiate. It makes awkward moments less common, too.

This is much more durable, reliable, and (quite frankly) fun than the hub-and-spokes model of friendship, where you just have a bunch of 1-on-1 catchups with people who know you but not each other.

Also, it's somewhat easy to do! In this guy's story, this could be as simple as, "Hey I want to get a few of us from the gym together for dinner sometime. Would you be down?" People are usually more receptive to this than they are to a 1-on-1 invite, too.


> One defining constraint must shape the product... Minecraft is built entirely from blocks. IKEA is flat-pack, self-assembly furniture.

I've been calling these things product primitives. I can't remember where I heard that term, but it refers to things like...

Blocks in Notion. Messages and conversations in Telegram. Frames and layers in Figma. Tweets in Twitter. Cells and sheets in Excel. Tools and layers in Photoshop. Commands in a CLI.

I think what makes for good product design is having a very small number of primitives. A bad product doesn't know what its primitives are. Or it has a very large number of primitives. It feels like everything in the product is some unique thing that works in its own unique way. So users have to learn a ton of different top-level primitives/concepts. It's confusing and intimidating and hard to teach. Ideally you just want one or two or three main primitives.

The complexity/power in an app comes from choosing powerful primitives that have depth, that are composable, etc. You can do a lot with Notion blocks. You can do a lot with Excel cells. You can do a lot with a CLI command. You can do a lot with a Minecraft block. There's depth there.
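A toy sketch of the idea (this is an illustration of the principle, not any real product's data model; all names here are made up): if everything in the product is one nestable primitive, then every feature only has to understand that one thing.

```python
# One composable primitive: every piece of content is a Block, and
# blocks nest. Pages, headings, and todos are all the same primitive,
# so a feature like search or export only needs to understand Block.

class Block:
    def __init__(self, kind, text="", children=None):
        self.kind = kind          # "page", "heading", "todo", "text", ...
        self.text = text
        self.children = children or []

    def count(self):
        # One uniform operation works over the whole document tree.
        return 1 + sum(c.count() for c in self.children)

page = Block("page", "Trip plan", [
    Block("heading", "Packing"),
    Block("todo", "Passport", [Block("text", "Renewed 2023")]),
])
```

The depth comes from composition: a small primitive with a recursive shape gives users a lot of expressive power without adding new top-level concepts.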


We used to call this “concept count”. You usually want to minimize the number of core concepts that make up your product. I’ve also heard it as the “nouns and verbs” of a product.


> the “nouns and verbs” of a product

Insightful, to think of a product and its interface as a "language" that the user learns. Some products give you a small and powerful vocabulary, where just a few words can accomplish a lot. Other products are like a badly designed language that lacks coherence and ease of use, where tasks that should be simple require many words, or some words don't fit together well with others.


I think this philosophy might be oversimplified. Tana has basically two primitives (bullets and supertags) and manages to be devastatingly complex to use to the point you have to watch hours of tutorials to do very simple things. Conversely Google Maps has a lot of “primitives” but the UX is fairly tight for 90% of use cases.


It applies more to design software, where a user is creating durable things and needs to understand those things themselves. Google Maps is more of an agent: It's responsible for understanding its own complexity and answering your queries.


My point is that, sometimes, going for the lowest common denominator or "noun" and declaring it to be your primitive (focusing on minimalism), is a worse approach than picking a larger set of primitives that suits your design. Take Hangul (한글) for example, where the primitives are designed to serve a goal, and there's no effort to "ruthlessly" minimize the number of primitives, and this is something you can learn to read in 10 minutes, or at least in a day. Whereas if you go over to something like Chinese, your numbers 1, 2, and 3 look nice thanks to your stroke primitive, but your needs quickly overwhelm your design and you end up with something quite unwieldy compared to if you had picked a more complex set of primitives — you will never learn to read all Chinese in your whole life even as a native speaker. It's a counter-intuitive design lesson.


Doesn't Jira have only one primitive: the ticket? Everything else just augments it. You could say those augmentations are separate primitives, but then the same would apply to all the tools in the other cited examples, like Photoshop, too.


Tana is basically a programming environment disguised as a text editor (in this way, it follows in the grand tradition of emacs you could say)


Vaguely feels like "Atomic Design" but applied to engineering.


what is Tana?


This I think: https://tana.inc/

Seems like there's quite a bit more to it: https://outliner.tana.inc/learn


Yeah... this sounds a bit like the Alexandrian Pattern Language concepts which directly inspired the Gang of Four's Design Patterns.

I wonder, though, if what you're describing as "product primitives" actually maps more closely to what Alexander later called "Centers," rather than the patterns themselves.

From what I understand, while the software world heavily adopted his patterns, Alexander spent his later career arguing that the ultimate building block of a system is actually a Center: localized focal points of utility and coherence, eg a well-lit courtyard, window seat, or fireplace. A strong center is naturally composable; it "resolves local tension," is made of smaller centers, and acts as a building block to generate larger ones.

When a product feels confusing or bloated, it's rarely out of bad design intent. It's just that user needs—while not necessarily glaring—are empirically discoverable, while the true, underlying "centers" that could elegantly solve them are incredibly subtle and hard to identify. The path of least resistance is almost always to just build a unique, rigid interface for the immediate user need right in front of you. Doing the deep architectural work to discover a core primitive that naturally absorbs those needs is difficult.

So maybe that's why we build so many faster horses.


I used a similar metric when judging programming languages. A language can get huge, but if it's conceptually small, one can learn it and then leave the rest to the compounding of experience. Conceptually large languages were a barrier for me. The case where I felt this most was Perl.


> Commands in a CLI … I think what makes for good product design is having a very small number of primitives.

Small, but not too small. Case in point: shell scripts (POSIX shell, Bash), where it was decided to model the scripting constructs as commands rather than introduce another set of concepts. We all know the result (a hot, slow mess).


I know it's in vogue to bash Bash but I feel that criticism is unfair.

Shell scripting is a victim of its own success: it is _so easy_ to get started that most users get value out of knowing the first one percent and never bother to actually learn the rest.

There aren't many who have read the Bash manual, or know what zsh can do that Bash cannot, etc.

"Shell scripting is a hot, slow mess" is the same hot slow mess that you get wherever the barrier to entry is extremely low (e.g. early PHP, early JavaScript/frontend development, game development with a game engine where you can just click around in the editor, etc).


There’s also the fact that shell scripting is for automation of what you may do interactively. It’s not for stuff where you want data structures to manipulate in memory. Trying to use it like python is an exercise in frustration.


I remember a friend saying they had a university assignment to implement something like a spreadsheet program in Bash. I suppose it was to teach them the intricacies of Bash, but the big takeaway was meant to be: don't use Bash for anything too complicated.


When I think about products with too many primitives, I instantly think of Snapchat and Instagram - my two least favorite apps.


The whole point of that comment is that it's not that easy. There are potential side effects and consequences that are difficult to architect around.


The fix IS easy. The side effects need to be dealt with accordingly. Why do you defend shit like this?


Except it is.

If you can't easily architect around it, then don't do what you're trying to do.

"Oh I needed to disclose user data in order to make more money" isn't an acceptable excuse.


No one's talking about excuses.


Looks like everyone does talk about excuses though.


> Oh I needed to disclose user data in order to make more money

hmm maybe they should've paywalled?


Reached for money, I take it.


The challenge with the world is that it requires nuance, ad hoc thinking, and effortful thinking. The human brain doesn't like putting effort into thinking. It's uncomfortable. It's easier for us to just have one rule, one heuristic, that we can simply apply to many similar situations. This is why ideology exists and is so powerful. You can always find people chanting the same phrase or slogan, over and over, regardless of the circumstance. Because it's easier for them to do that than it is for them to treat every situation as unique and to reason through it from first principles. Hell, sometimes that's just not feasible.

In this situation, yeah, sometimes powerful people do dumb shit. And ideologues come by and say, "You just don't understand the 4d chess!"

But also, sometimes it's the opposite! And the powerful person does something smart, but that's unclear or unfamiliar to the average person without massive wealth/access/power. And ideologues from the peanut gallery come by and say, "Another powerful person doing stupid stuff!"

And of course, the right (but alas more effortful) approach is to evaluate each situation individually, and reason through the factors, and also to wait to see how it turns out, before evaluating.

For example, the author evaluates Elon's purchase of Twitter as an irredeemably stupid decision. And I agree, many things about how that went down seem very stupid. But at the same time, the dude has launched an AI lab that's gotten tons of press and exposure thanks in large part to X, combined it with his other companies, and is about to IPO for $1.5T+. Maybe you don't like it. Maybe I don't like it. Maybe there's lots to complain about here, but it's difficult to describe this as a "stupid" move.

Does that mean he was playing 4D chess? Also, maybe not! Maybe he just lucked into this situation. Maybe he didn't foresee it initially, but figured it out later. Or maybe, much more reasonably, he figured that he has tons of optionality and tons of leeway, so even if he doesn't have a good plan to begin with, he'll likely figure it out. Who knows.

It's tough to be a speculator judging from the sidelines with incomplete knowledge. And it's even tougher to avoid allowing our biases and ideologies to compel us to simply shout our beliefs rather than being objective and analytical.


Yeah, the Twitter acquisition wasn't obviously irredeemably stupid. I think it was a bad decision, but paying a 38% premium above market price for an acquisition of a public company is within the normal range. You can argue the markets were irrational and Twitter was overvalued, but you shouldn't argue it's obvious because it clearly isn't obvious to a huge number of investors. You could argue Twitter is a poor fit for Musk's goals, but that wasn't obvious either. Twitter didn't have to change radically to be worth its price. It just had to grow profits a bit and maybe help some of Musk's other projects (like Grok).


Yeah, I don't buy that xAI story. Musk could have gotten a popular AI lab even without Twitter if he'd wanted one.

Except... his AI lab is not popular. It has zero value, and people only use it because it's on Twitter and they don't pay for it.

The fact that he had to merge it with a successful unrelated company tells you all you need to know.


Recently an xAI recruiter reached out to me for a position at xAI. They mentioned that they had been impressed by my GitHub profile and wanted to fast track me to an interview.

Fun fact: I have not a single repository on my GitHub.

I think the position was about creating training data and realistic coding scenarios or something.


I thought the 4D chess explanation was that he bought it (was forced to buy it by the courts) because he was entering politics and wanted to be able to ensure the deaths of half a million children in Africa:

https://www.impactcounter.com/dashboard?view=table&sort=inte...

It looks like he succeeded wildly at that.


There is a better 4D chess explanation of why Elon bought Twitter, and I hate to bring it up.

When he bought it, right-wing publications all over the world got excited, because Twitter was the global/official communications channel for a lot of entities and was largely considered left-leaning. When your plan is to disrupt that bubble and amplify right-leaning narratives (he helped get Trump elected with it), you'd better cover your tracks and make the purchase look like an accident.


I think this is spot on. Poor man wanna be rich, rich man wanna be King, king ain't satisfied till he rules everything.


[flagged]


Your overly simplistic graph about population growth omits steadily falling birth rates and, more importantly, why they are falling.

But why should I explain, when your mind is occupied with Africa the moment it's mentioned in a side note, and when it comes from an article that opens like this:

> To understand what’s at stake regarding the Mediterranean

... occupied with Africa and implying harmful migration without ever substantiating it.

But a cheap 100-year projection expressed _in a single graph_ does, in a way, fit into a conversation about overconfident stupid people.


Straight to the white nationalist racists to back him up, this guy gets where Musk is coming from and approves.

Citing this guy:

> Steven Sailer is an American far-right writer and blogger.[1][2] He is a columnist for Taki's Magazine and VDARE, a website associated with white supremacy.[3][4][5] Earlier writing by Sailer appeared in some mainstream outlets, and his writings have been described as prefiguring Trumpism.[2] Sailer popularized the term "human biodiversity" for a right-wing audience in the 1990s as a euphemism for scientific racism.[2][6]

https://en.wikipedia.org/wiki/Steve_Sailer

On this site:

https://en.wikipedia.org/wiki/The_Unz_Review

> The Unz Review is an American website and blog founded and edited by Ron Unz, an American far-right activist and Holocaust denier. It is known for its publication of far-right, conspiracy theory, white nationalist, and antisemitic writings.[1]


This is confusing to me. What is composability if not calling a program, getting its output, and feeding it into another program as input? Why does it matter if that output is stored in the LLM's context, or in a file, or ephemerally?

Maybe I'm misunderstanding the definition of composability, but it sounds like your issue isn't that MCP isn't composable, but that it's wasteful because it adds data from interstitial steps to the context. But there are numerous ways to circumvent this.

For example, it wouldn't be hard to create a tool that just runs an LLM, so when the main LLM convo calls this tool it's effectively a subagent. This subagent can do work, call MCPs, store their responses in its context, and thereby feed that data as input into other MCPs/CLIs, and continue in this way until it's done with its work, then return its final result and disappear. The main LLM will only get the result and its context won't be polluted with intermediary steps.

This is pretty trivial to implement.
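A minimal sketch of that subagent idea, with stand-in functions (`call_llm` and `fetch_data` are placeholders I made up, not any real framework's API): the subagent keeps its own message list, so large intermediate tool output never enters the main conversation's context.

```python
def call_llm(messages):
    # Stand-in for a real model call; deterministic for the demo.
    return f"RESULT({messages[-1]['content']})"

def fetch_data(query):
    # Stand-in for an MCP/CLI tool whose verbose output we want to keep
    # out of the main context.
    return f"rows-for-{query}" + ("x" * 1000)  # large intermediate payload

def run_subagent(task):
    # The subagent's private context; intermediate tool output lives here.
    messages = [{"role": "user", "content": task}]
    raw = fetch_data(task)
    messages.append({"role": "tool", "content": raw})
    summary = call_llm([{"role": "user",
                         "content": f"summarize {len(raw)} bytes"}])
    return summary  # only this short result crosses back to the caller

main_context = [{"role": "user", "content": "analyze sales"}]
result = run_subagent("sales")
main_context.append({"role": "tool", "content": result})
# The main context grows by one short message, not by the 1000+ byte payload.
```

The design choice is just scoping: the interstitial tool responses live and die in the subagent's context, and the composition happens there, exactly as it would through files or pipes.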


> Why does it matter if that output is stored in the LLM's context

Context window is expensive and precious. Much better to offload to some medium where it isn’t.

