That's a false-dichotomy. Capitalism was good for artisanal workers before the industrial revolution, and then it became pretty goddamn bad for them. We're worried we're staring down the barrel of that right now - just saying 'well it was even worse before capitalism' does nothing for us.
Yes it does: it says that trying to prevent technology in order to protect the interests of some special class of people at the expense of everyone else is dumb and shortsighted.
If people had actually listened to the people wailing "but what about the horse carriage business!!!" in the 20th century, it would have been a disaster.
Sure, but AI pessimism is allowed to be personal. Am I supposed to be optimistic when I feel I'm about to get shafted? Should I be less concerned about needing to provide for my family because, in the long term, this is going to be a great step forward for humanity?
You are addressing something totally different from the original claim, which tried to say that capitalism is inherently exploitative of labour, which is just outdated Marxism.
To be frank, I thought trying to twist this into an argument about whether capitalism is inherently exploitative was a complete waste of time and I replied as such. If you'll recall what we were originally talking about here - "AI, should HN users be optimistic?"
That's a good idea and FWIW I agree that as a person who might lose their job to AI, you do deserve to feel apprehensive, even if it might lead to some good later.
Well this is HN so a lot of us are pretty terrified of your 1). We went from 'you have a good job for the next couple of decades' to 'your job is at extreme risk of disruption from AI' in the space of like 5 years. Personally I have a family, I'm a bit old to retrain, and I never worked at a high-comp FAANG or anything, so I can't just focus on painting unless my government helps me (note: not US/China). That's extremely anxiety-inducing, and a vague promise of novel new things does not come close to compensating.
I'm 33 and I feel sort of lucky that I'll still potentially have time to retrain. I'm fully prepared for the likelihood that within the next 5 years or so (potentially much less) I'll need to retrain into a trade or something to stay relevant in any sort of field.
Many people claim it's going to become a tool we use alongside our daily work, but it's clear to me that's not how anybody managing a company sees it, and even the AI labs that previously tried to emphasize how much it's going to augment existing workforces are now pushing being able to do more with less.
Most companies are holding onto their workforce only begrudgingly while the tools advance and they still need humans for "something", not because they're doing us some sort of favor.
The way I see it unless you have specialized knowledge, you are at risk of replacement within the next few years.
I also have contemplated just retraining now to try and get ahead of the curve, but I'm not confident that trades can absorb the shock of this - both in terms of supply (more unemployment) and demand (anything non-commercial will be hit by capital flight on the customer-side). I figure I will just try and make as much money on a higher wage as I can and hope for the best...
> I'm 33 and I feel sort of lucky that I'll still potentially have time to retrain. I'm fully prepared for the likelihood that within the next 5 years or so (potentially much less) I'll need to retrain into a trade or something to stay relevant in any sort of field.
The problem is that there are not many fields that are going to be immune to AI based cost cutting and there surely will not be enough work for all of us even if we all retrain.
If we all do, it will create an absolutely massive downward pressure on wages due to massive oversupply in other lines of work too.
> But the entire value is that it can be automated. If you try to automate a small model to look for vulnerabilities over 10,000 files, it's going to say there are 9,500 vulns. Or none.
'Or none' is ruled out since it found the same vulnerability - I agree that there is a question on precision on the smaller model, but barring further analysis it just feels like '9500' is pure vibes from yourself? Also (out of interest) did Anthropic post their false-positive rate?
The smaller model is clearly the more automatable one IMO if it has comparable precision, since it's just so much cheaper - you could even run it multiple times for consensus.
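To make the consensus idea concrete, here's a rough sketch. Everything in it is hypothetical: `scan_file` stands in for whatever small-model call you'd actually make, and the run count and vote threshold are numbers I made up.

```python
# Hypothetical consensus wrapper around a cheap, noisy scanner.
# `scan_file` is any callable returning finding identifiers for a path.
from collections import Counter

def consensus_findings(scan_file, path, runs=5, threshold=3):
    """Keep only findings reported by at least `threshold` of `runs` scans."""
    votes = Counter()
    for _ in range(runs):
        votes.update(set(scan_file(path)))  # de-dupe within a single run
    return sorted(f for f, n in votes.items() if n >= threshold)
```

You're trading tokens for precision, which the small model's price makes affordable in a way the frontier model's doesn't.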
Admittedly just vibes from me, having pointed small models at code and asked them questions, no extensive evaluation process or anything. For instance, I recall models thinking that every single use of `eval` in javascript is a security vulnerability, even something obviously benign like `eval("1 + 1")`. But then I'm only posting comments on HN, I'm not the one writing an authoritative thinkpiece saying Mythos actually isn't a big deal :-)
My proof-in-pudding test is still the fact that we haven't seen gigantic mass firings at tech companies, nor a massive acceleration on quality or breadth (not quantity!) of development.
Microsoft has been going heavy on AI for 1y+ now. But then they replace their cruddy native Windows Copilot application with an Electron one. If tests and dev only have marginal cost now, why aren't they going all in on writing extremely performant, almost completely bug-free native applications everywhere?
And this repeats itself across all the big tech and AI hype companies. They all have these supposedly earth-shattering gains in productivity, but then... there hasn't been anything to show for it in years? Despite that whole subset of tech plus big tech dropping trillions of dollars on it?
And then there is also the really uncomfortable question for all tech CEOs and managers: LLMs are better at 'fuzzy' things like writing specs or documentation than they are at writing code. And LLMs are supposedly godlike. Leadership is a fuzzy thing. At some point the chickens will come home to roost, and tech companies with LLM CEOs / managers and human developers, or even completely LLM'd ones, will outperform human-led / managed companies. The capital class will jeer about that for a while, but the cost of tokens will continue to drop to near zero. At that point, they're out of leverage too.
Your proof-in-pudding test seems to assume that AI is binary -- either it accelerates everyone's development 100x ("let's rewrite every app into bug-free native applications") or nothing ("there hasn't been anything to show for that in years"). I posit reality is somewhere in between the two.
Considering that we were promised "AI will replace nearly all devs" and "AI will give a 100x boost" and such, it makes sense to question this.
After all, almost all hyped technology lands "somewhere between the two" extremes of not doing what it promises at all and fully doing it. The question is which edge it's closer to.
LLMs are capable of searching information spaces and generating outputs that one can use to do their job.
But it's not taking anyone's job, ever. People are not bots; a lot of the work they do is tacit and goes well beyond the capabilities and abilities of LLMs.
Many tech firms are essentially mature and are currently using too much labour. This will lead to a natural cycle of lay-offs if they cannot figure out projects to allocate the surplus labour to. This is normal and healthy - only a deluded economist believes in 'perfect' stuff.
> Someone in power doesn’t get to choose - the board of directors does. Whose job is to act in the best interest of shareholders.
Alas, shareholder value is a great ideal, but it tends to be honoured in practice rather less strictly.
As you can also see when sudden competition leads to rounds of efficiency improvements, cost cutting and product enhancements: even without competition, a penny saved is a penny earned for shareholders. But only when fierce competition threatens to put managers' jobs at risk do they really kick into overdrive.
Since the majority shareholder(s) can decide to replace the board of directors, it’s not the board of directors who holds the (ultimate) power, it’s the majority shareholder(s).
> LLMs are better at 'fuzzy' things like writing specs or documentation than they are at writing code.
At least for writing specs, this is clearly not true. I am a startup founder/engineer who has written a lot of code, but I've written less and less code over the last couple of years and very little now. Even much of the code review can be delegated to frontier models now (if you know which ones to use for which purpose).
I still need to guide the models to write and revise specs a great deal. Current frontier LLMs are great at verifiable things (quite obvious to those who know how they're trained), including finding most bugs. They are still much less competent than expert humans at understanding many 'softer' aspects of business and user requirements.
> My proof-in-pudding test is still the fact that we haven't seen gigantic mass firings at tech companies
This assumes that companies will announce such mass firings (yeah, I'm aware of WARN Act); when in reality they will steadily let go of people for various reasons (including "performance").
From my (tech heavy) social circle, I have noticed an uptick in the number of people suddenly becoming unemployed.
For Jevons paradox to be a win-win, you need these 3 statements to be true:
1) Workers get more productive thanks to AI.
2) Higher worker productivity translates into lower prices.
3) Most importantly, consumer demand needs to explode in reaction to lower prices. And we're finding out in real time that the demand is inelastic.
Around 1900, 40% of American workers worked in agriculture. Today, it's < 2%.
Which is similar to what we may see with coding: demand for food did not explode enough to offset the job-killing effect of each farmer being able to produce more of it.
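The inelasticity point in toy numbers (all figures invented, purely to show the arithmetic):

```python
# If productivity halves prices but demand only grows 20%, total spending
# on the good (and hence the total work being paid for) shrinks.
old_price, old_quantity = 100.0, 1000
new_price = old_price * 0.5           # productivity gains passed to prices
new_quantity = old_quantity * 1.2     # inelastic demand: only +20%
old_spend = old_price * old_quantity
new_spend = new_price * new_quantity  # 40% less to pay workers with
```

Jevons only rescues employment if that quantity response is big enough to overwhelm the price drop.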
> Microsoft has been going heavy on AI for 1y+ now. But then they replace their cruddy native Windows Copilot application with an Electron one.
This.
Also, Microsoft is going heavy on AI, but it's primarily chatbot gimmicks they call Copilot agents, and they need to deeply integrate them with all their business products and have customers grant access to all their communications and business data to give the chatbot something to work with. They go on and on in their AI presentations with their example of how a company can run on agents alone, and they tell everyone their job is obsoleted by agents, but they don't seem to dogfood any of their own products.
What's a situation where one needs to use `eval` in benign way in JS? If something is precomputable (e.g. `eval("1 + 1")` can just be replaced by 2), then it should be precomputed. If it's not precomputable then it's dependent on input and thus hardly benign -- you'll need to carefully verify that the inputs are properly sanitized.
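For what it's worth, the same trap exists in Python, where the stdlib offers a safe alternative for the "input that should only ever be a literal" case (the untrusted string here is invented for illustration):

```python
import ast

# Pretend this arrived over the network; eval() on it would execute
# arbitrary code if the string were crafted maliciously,
# e.g. "__import__('os').remove(...)".
untrusted = '{"retries": 3, "hosts": ["a", "b"]}'

# ast.literal_eval parses only Python literals (numbers, strings, lists,
# dicts, ...) and raises ValueError on anything else, including calls.
config = ast.literal_eval(untrusted)
```

If the input genuinely needs to be an arbitrary expression, then it's no longer benign and the sanitization burden is real.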
With LLMs (and colleagues) it might be a legitimate problem since they would load that eval into context and maybe decide it’s an acceptable paradigm in your codebase.
I remember a study from a while back that found something like "50% of 2nd graders think that french fries are made out of meat instead of potatoes. Methodology: we asked kids if french fries were meat or potatoes."
Everyone was going around acting like this meant 50% of 2nd graders were stupid with terrible parents. (Or, conversely, that 50% of 2nd graders were geniuses for "knowing" it was potatoes at all)
But I think that was the wrong conclusion.
The right conclusion was that all the kids guessed and they had a 50% chance of getting it right.
And I think there is probably an element of this going on with the small models vs big models dichotomy.
I think it also points to the problem of implicit assumptions. Fish is meat, right? Except for historical reasons, the grocery store's marketing says "Fish & Meat."
And then there's nut meats. Coconut meat. All the kinds of meat from before meat meant the stuff in animals. The meat of the problem. Meat and potatoes issues.
If you asked that question before I'd picked up those implicit assumptions, or if I never did, I would have to guess.
I’ve got many catholic relatives that describe themselves as vegetarians and eat fish. Language can be surprisingly imprecise and dependent upon tons of assumptions.
> I’ve got many catholic relatives that describe themselves as vegetarians and eat fish
Those are pescatarians.
It's like how a tomato is a fruit but is used as a vegetable. Meat has traditionally been the flesh of warm-blooded animals; fish is the flesh of cold-blooded animals, making it meat, but for religious reasons it's not considered meat.
> 'Or none' is ruled out since it found the same vulnerability
It's not, though. It wasn't asked to find vulnerabilities over 10,000 files - it was asked to find a vulnerability in the one particular place in which the researchers knew there was a vulnerability. That's not proof that it would have found the vulnerability if it had been given a much larger surface area to search.
I don't think the LLM was asked to check 10,000 files given these models' context windows. I suspect they went file by file too.
That's kind of the point - I think there are three scenarios here
a) this is just the first time an LLM has done such a thorough minesweeping
b) previous versions of Claude did not detect this bug (seems the least likely)
c) Anthropic have done this several times, but the false positive rate was so high that they never checked it properly
Between a) and c) I don't have a high confidence either way to be honest.
Mythos was also asked to find a vulnerability in one file, in turn for each file. Maybe the small model needs to be asked about each function instead of each file. Okay, you can still automate that.
I think it's completely normal. Whenever automation comes knocking, people are inclined to think it's going to flatline conveniently before their job is at risk. LLMs can code now? Cool, they can't code well though can they? Oh they can code pretty well now? Cool, coding was never the hard part of SWE anyway, it's [thing we have no reason to think AI can't beat 99% of humans at at some point], etc
I think SWE as a mainstream profession is much nearer to the end than the beginning, I'm curious and quite scared about what becomes of us.
The problem is that software development contains domain independent and domain specific skills. Since information processing is domain independent, replacing software developers in general will require beating them not only in the domain independent skills, which is what the recent breakthroughs have been about, but also in every single domain dependent skill.
This makes software development AGI-complete. If you have an LLM that can write software for every domain, then for every task you assign it, it could build software that performs the assigned task and thereby solves every problem in existence.
What I'm trying to get at here is that an "SWE" is a biological machine-building machine. If you have a digital machine that can build any machine, you haven't solved the first step, you've solved the final step in all of human history that ever needs to be done, whatever that means. Beyond that point, human work no longer exists, because the machines have taken over everything.
I don't think you understand. Frankly, AI is a failure if all it does is replace coders. AI needs (given its current investment levels) to conquer all forms of knowledge work. This is an example of tech/industry needing to impose itself on society, rather than society needing it.
That's how human progress works. No one can want or need it because they cannot conceptualize wanting it until someone shows that it is possible. Now, many of those wants become needs.
We can absolutely conceptualize what we want or need. I was born in 1980 in NYC. When I was a boy my father took me to a tech conference where they had a demo of ordering TV shows on demand. It was a miracle, to my young mind. Was this what I needed?
Growing up I had a friend group of misfit boys, who discovered h4ck1ng and phr34king. But we also discovered Slackware Linux on 3.5" floppies. We also had to learn ASM and how to compile the Linux kernel in order to do anything with it. Boys with machines. That wasn't what I needed either.
Later on we did have great things with tech. Google made the world searchable in ways AltaVista didn't. I remember strapping the original iPod on my arm to go for runs outside. I didn't even need a car for a while, when investors subsidized my Uber rides to and from the office.
Now, it seems the US is balanced on a precipice. The economy seems to have an incredible amount of money desperate to grow, but to what purpose? In my lifetime, and in my parents', and their parents' before them, when the dollar becomes restless the flag goes forth. The dollar follows the flag.
You wouldn't have known about TV on demand had you not seen it. That is what I mean: people generally can't conceptualize what they want or need until they see it.
My point was not about the difference, it was about the fact that average people cannot conceptualize new ideas until one person or team invents it, then the average person will want or need it.
As for AI, I and many others want it, and some even need it, in certain use cases. Speak for yourself.
I believe the idea that you (or I) might know better than the 'average people' to be incredibly conceited, arrogant, and frankly wrong. It is an attitude that gives you superiority for having achieved nothing.
I think your numbers are off. TAM for office workers is ~20T a year, of which SWE compensation is ~3T. So if they can make 3T x 10% x 5 years = 1.5T, that covers their current valuations. It's not as insane as you make out, even without taking into account the other high-risk areas like legal, accounting, etc.
Hit the nail on the head with that framing. So many articles are now coming out addressing the anxieties about adoption of a new technology, but we genuinely don’t really need it as a society.
I still wonder if we really needed the iPhone or many of the other things we're told are "progress" and innovation, as if following an inevitable arrow of time. The future is not set in stone, and things need not play out in this manner at all. Unlike the iPhone, where most were excited by its possibilities (even if they traded precious privacy in the name of convenience), there's no clear reason to believe this version of LLM-driven technology represents significantly more upside than downside.
The pandas API is awful, but it's kind of interesting why. It was started as a financial time series manipulation library ('panels') in a hedge fund and a lot of the quirks come from that. For example the unique obsession with the 'index' - functions seemingly randomly returning dataframes with column data as the index, or having to write index=False every single time you write to disk, or it appending the index to the Series numpy data leading to incredibly confusing bugs. That comes from the assumption that there is almost always a meaningful index (timestamps).
I hate to be the "you're holding it wrong" guy but 90% of "Pandas bad!" posts I find are either outright misinformed or mischaracterizing one person's particular opinion as some kind of common truth. This one is both!
> That comes from the assumption that there is almost always a meaningful index (timestamps)
The index can be literally any unique row label or ID. It's idiosyncratic among "data frames" (SQL has no equivalent concept, and the R community has disowned theirs), but it's really not such a crazy thing to have row labels built into your data table. Excel supports this in several different ways (frozen columns, VLOOKUP) and users expect it in just about any table-oriented GUI tool.
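For instance, a label-keyed lookup (toy data, made up for illustration):

```python
import pandas as pd

# Unique row IDs as the index; .loc gives a VLOOKUP-style lookup.
users = pd.DataFrame(
    {"name": ["Ada", "Grace"], "plan": ["free", "pro"]},
    index=["u001", "u002"],
)
plan = users.loc["u002", "plan"]  # label-based, independent of row order
```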
> having to write index=False every single time you write to disk
If you're actually using the index as it's meant to be used, you'd see why this isn't the default setting.
> functions seemingly randomly returning dataframes with column data as the index
I assume you're talking about the behavior of .groupby() and .rolling()? It's never been random. Under-documented and hard to reason about group_keys= and related options, yes. But not random.
> appending the index to the Series numpy data leading to incredibly confusing bugs
I've been using Pandas professionally almost daily since 2015 and I have no idea what this means.
I think the commenter you are replying to might well understand these nuances. The point is not that Pandas is inscrutable, but instead that it's annoying to use in many common use-cases.
> but it's really not such a crazy thing to have row labels built into your data table.
Sometimes you need data in a certain order. Sometimes there is no primary key. And it is nuts how janky the pandas API is if you just want the index to mean the current order of the dataframe and nothing else. Oh you did a pivot? I'm just going to make those pivot columns a row label now if that's alright with you. I don't do that for all functions though, you're going to have to remember which ones. Oh you want to sort a dataframe? You better make damn sure you reindex if you're planning to use that with data from another dataframe (e.g. x + y on data from separate dataframes), otherwise I'm going to align the data on indices, and you can't stop me. Also - want to call pyplot.plot(df['column'])? Yeah I'm giving it the data in index order obviously I don't care about that sort you just did. Oh you want to port this data to excel? Well if your row labels aren't meaningful and you don't want "Unnamed: 0" you're going to have to tell me not to. You need to manipulate a multi-index? You're so cute. Have fun with that buddy.
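The sort-then-combine trap, in a minimal example (toy data):

```python
import pandas as pd

df = pd.DataFrame({"x": [10, 20, 30]})
desc = df.sort_values("x", ascending=False)  # index is now [2, 1, 0]

other = pd.Series([1, 2, 3])                 # index is [0, 1, 2]
result = desc["x"] + other
# Values are paired by index label (10+1, 20+2, 30+3), NOT by position
# (30+1, 20+2, 10+3) as you might expect after the sort.
```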
There is a reason no other dataframe library does this - because it's confusing and cognitive overhead that doesn't need to exist. I've used pandas since ~2013, had this chat with colleagues, and many recommend just giving in and maintaining an index throughout. Except I've read their pandas code and it sucks, because now _you_ need to reason about what is currently the index - because it actually needs to change a lot to do normal things with data. I just use .reset_index copiously and try to make it behave like a normal dataframe library, because it's just easier to understand later. Pandas has not earned the right to redefine what a dataframe means.
At the absolute least, index behaviour should be opt-in, not something imposed on the user.
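For the record, the .reset_index pattern I mean:

```python
import pandas as pd

df = pd.DataFrame({"x": [10, 20, 30]})
# drop=True discards the old index instead of keeping it as a column,
# so the index is a plain 0..n-1 matching the visible row order again.
ordered = df.sort_values("x", ascending=False).reset_index(drop=True)
```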
> After careful consideration of Oracle’s current business needs, we have made the decision to eliminate your role as part of a broader organizational change.
That is being laid off, not being fired - big difference. Being fired means being let go for poor performance / bad behaviour; no severance or grace period is necessary there (it will be written in the contract). Being made redundant, particularly a redundancy of this size, is quite well protected in the EU. Typically negotiations between HR and representatives of the laid-off group are required, and you will continue to work (officially at least) until negotiations are over, as you are not officially out yet. This usually takes a few weeks.
Is HN in complete denial about what is happening to the younger generations right now? My whole family are teachers, and they are all sounding the alarm. A majority of kids are basically unable to read books now. Not just children - young adults studying English literature at college...
Parents are up against some of the wealthiest companies on earth, and the fear of socially excluding their kids by limiting their usage. Systemic change is never going to come from parents on this one.
The problem seems to be that many students going to college can't seem to read any substantial texts anymore, while somehow getting themselves into college. It's pretty worrying imo. There's a bunch of articles about this as well: https://www.theatlantic.com/magazine/archive/2024/11/the-eli...
It's their attention span. My SIL is an English professor and she stopped assigning long texts. The kids won't read it, will get an AI to summarize, and then give her poor reviews at the end for making them read.
HN is in denial about a lot of stuff. The tech bubble exists somewhere else to most people's reality.
A lot of my youngest's peers are pretty illiterate still at 13. They have trouble with more than a few minutes of concentration. They track reading age and the average is declining every year as they arrive at secondary school which is causing a big panic in UK education. I think some of this data is driving the legislation changes as well.
I'd have preferred the government to have targeted the social media and attention companies personally. Extremely high taxation would be a good start much as we do for cigarettes and alcohol. If the business is no longer viable at that point they can quite frankly fuck off.
The verification controls are possibly a bigger problem, with serious consequences for society going forwards. Things aren't too bad now, but in the future, the information and data that is available will make the Nazis and the Stasi look like amateurs.
Drawing a false equivalence between the internet and literal chemical poisons like alcohol and cigarettes - substances that aren't safe at any dose, that at best cause severe physical addictions which take away the choice to stop, and that at worst disable and kill millions of people every year - is quite a stretch.
At some point, you have to ask how much of the rhetoric is driven by hysteria and moral panic and how much of it is driven by what the actual evidence shows.
From the Guardian[1]:
> Social media time does not increase teenagers’ mental health problems – study
> Research finds no evidence heavier social media use or more gaming increases symptoms of anxiety or depression
> Screen time spent gaming or on social media does not cause mental health problems in teenagers, according to a large-scale study.
> With ministers in the UK considering whether to follow Australia’s example by banning social media use for under-16s, the findings challenge concerns that long periods spent gaming or scrolling TikTok or Instagram are driving an increase in teenagers’ depression, anxiety and other mental health conditions.
> Researchers at the University of Manchester followed 25,000 11- to 14-year-olds over three school years, tracking their self-reported social media habits, gaming frequency and emotional difficulties to find out whether technology use genuinely predicted later mental health difficulties.
From Nature[2]:
> Time spent on social media among the least influential factors in adolescent mental health
From the Atlantic[3] with citations in the article:
> The Panic Over Smartphones Doesn’t Help Teens, It may only make things worse.
> I am a developmental psychologist[4], and for the past 20 years, I have worked to identify how children develop mental illnesses. Since 2008, I have studied 10-to-15-year-olds using their mobile phones, with the goal of testing how a wide range of their daily experiences, including their digital-technology use, influences their mental health. My colleagues and I have repeatedly failed to find[5] compelling support for the claim that digital-technology use is a major contributor to adolescent depression and other mental-health symptoms.
> Many other researchers have found the same[6]. In fact, a recent[6] study and a review of research[7] on social media and depression concluded that social media is one of the least influential factors in predicting adolescents’ mental health. The most influential factors include a family history of mental disorder; early exposure to adversity, such as violence and discrimination; and school- and family-related stressors, among others. At the end of last year, the National Academies of Sciences, Engineering, and Medicine released a report[8] concluding, “Available research that links social media to health shows small effects and weak associations, which may be influenced by a combination of good and bad experiences. Contrary to the current cultural narrative that social media is universally harmful to adolescents, the reality is more complicated.”
Way to cherry-pick citations. Have you considered writing a meta-analysis for a journal and failing to disclose your interests and funding? That'd really top it off.
I can do the same if I want the other way. But it's not worth my time.
You're going to drop a bombshell like "social media is as bad as alcohol and cigarettes, we need to ban it" and not provide any evidence?
There are a lot of strong feelings around social media, and I'm no fan, but I'm not going to walk head first into a moral panic, or participate in witch hunt, without knowing the facts.
In the end, ad hominem arguments don't affect the validity of evidence. I was hoping to have an interesting discussion, but I see that if you aren't politically correct on this topic, evidence will be outright dismissed and the messenger shot for delivering it.
Everyone likes to say the UK is a police state. It's a bit of a meme. I mean, we are literally going through legal reform at the moment to make it less of one, while people with a masked presidential police force scream at us for being a police state.
Keep in mind that the UK government is currently locking people up for FB posts. Not exactly a police state, but close enough that it's a distinction without a difference. Oh, and they are debating whether to get rid of jury trials so they can just lock people up for FB posts without a trial. If it quacks like a duck...
Firstly the incumbent legislation is actually being rolled back at the moment by Mahmood. The FB posts are all inciting violence against others which should not be protected speech. As for the jury trials, have you ever been in a jury? I'd rather not thanks myself. My peers are mostly fucking idiots. And they're changing that as well.
Are all the news items about people being arrested for exercising speech not true?
I've heard from multiple people already that there is a massive prosecution effort going on in the UK against people who say "hateful" things on the internet. Whereby "hateful" is vaguely defined, but usually in relation to religious feelings.
- actual incitement to violence, like the hotel arson
And if you look at the actual convictions, the first offense for most things usually gets a suspended sentence. I'd be interested to see if you can find a case on BAILII (no, not social media, actual court transcripts only) which matches:
- first offense custodial sentence
- one off post, not a pattern of harassment
- between strangers
- does not include even implied threats of violence
(Last one I can think of was the Robin Hood Airport one, which hinged on whether a joke threat to blow up an airport should have been taken seriously.)
Honestly that is happening, and I think it's an overstep. I have never heard anyone talk about this in real life (that could be a London bubble though). I will say - I am nearly 40, and I've spent the last 25 years online reading about the Orwellian hell my life is (or is about to become). It has never felt like it comes from a place of lived experience. For example we infamously have a lot of cameras in the UK. 90% of them are on closed circuits in shops and it doesn't affect anything.
Right now the biggest issue in the UK is the same as most places - lack of money. It's killing our services, poisoning our politics. Everything else feels abstract in comparison.
>the effects of social media on kids is too strong, and too negative to deny at this point.
Bold claim that really needs some evidence. Is there research which shows that kids who grow up with social media are less likely to succeed as adults because of social media exposure?
To be fair, it is still pretty remarkable what the human brain does, especially in the early years - there is no text embedded in the brain, just a crazily efficient mechanism for learning hierarchical systems. As far as I know, AI cannot do anything similar: it generally relies on giga-scaling, or on fine-tuning for tasks similar to those it already knows. Regardless of how this arose, or whether it's relevant to AGI, this is still a uniqueness of sorts.
Human babies "train" their brain on literally gigabytes of multi-modal data dumped on them through all their sensory organs every second.
In a very real sense, our magic superpower is that we "giga-scale" with such low resource consumption, especially considering how large (in terms of parameters) the brain is compared to even the most advanced models we have running on those thousands of GPUs today. But that's where all those millions of years of evolution pay off. Don't diss the wetware!