I was really hoping that he would present an argument for why we shouldn't be worried about massive unemployment, but he didn't. All he did was point to a prior era and a technology very different from AI, and say "look, that didn't cause massive unemployment!"
Such apples-and-oranges comparisons are not reassuring.
I keep actively looking for why this concern shouldn't be concerning and coming up empty.
One of my family members has permanently lost their job in transcription. It's not just that they've lost their job, either; essentially the entire field across all specialities (including legal and medical) just doesn't exist anymore as of ~this year (2023).
There are still some 'editor' positions, but they don't pay very well, and I'd imagine they won't be around long either.
Language translation services have thrived despite Google Translate etc. because the machine translations were so poor. However, LLMs do a pretty amazing job translating idiomatic language, so I think translating text as a career is effectively over, beyond editorial revision of the automated output.
Try asking ChatGPT to translate some passage of relative complexity into idiomatic Thai. Start a new chat. Paste in the Thai and ask it to translate it into idiomatic English. This round trip was always a way to show exactly how bad Google Translate is. With ChatGPT it's essentially flawless.
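The round-trip test described above is easy to turn into a tiny harness. This is just a sketch, not a real client: `translate` stands in for whatever model call you're testing (each invocation should be a fresh chat/session so the model can't remember the original text), and the scoring metric is a deliberately crude word-overlap check.

```python
def round_trip(text: str, translate, pivot_lang: str = "th") -> str:
    """Translate text into a pivot language and back to English.

    `translate` is any callable (text, target_lang) -> str wrapping the
    model under test. Each call should be a fresh session so the model
    can't "cheat" by remembering the original English.
    """
    pivot = translate(text, pivot_lang)
    return translate(pivot, "en")


def degradation_score(original: str, round_tripped: str) -> float:
    """Crude fidelity metric: fraction of original words lost.

    0.0 means every original word survived the round trip; 1.0 means
    none did. Real evaluation would compare meaning, not word sets.
    """
    orig_words = set(original.lower().split())
    back_words = set(round_tripped.lower().split())
    if not orig_words:
        return 0.0
    return 1.0 - len(orig_words & back_words) / len(orig_words)
```

Scoring Google Translate and an LLM on the same passages with a harness like this is one way to make the "essentially flawless" claim measurable rather than anecdotal.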
I showed my brother-in-law, who is Thai, lives in Thailand, and works for an English-speaking multinational. It's revolutionized his job. He struggled to be understood, and struggled with Google Translate to make something understandable. Now he's able to conduct his business without friction.
It'll probably remain for the very complex translation tasks that are performed by highly qualified experts, like translating poetry from archaic versions of modern languages and such. Also diplomatic exchanges and official translations of documents, where you can't just deflect blame to an AI tool for something that could damage international relations or leave someone without a visa or nationality.
More regular "business" translation will probably be displaced by AI tools.
Reminds me of a story about a surgeon who was an angry opponent of robot surgery until he watched it. When it went in through a tiny hole for what used to require cutting a door in the patient, he was convinced.
I had some modest experience with this, and "thrived" doesn't line up with how the translators I worked with described it even a decade ago. They loved our project because we valued quality, whereas most of the other work they were seeing paid rapidly declining wages, because Google Translate was good enough for businesses to decide they didn't need to pay much for humans.
I’m pretty sure you’re right that this is the death knell for translation as a career unless it’s in combination with some other scholarly skill.
As someone who worked in transcription last year I would say the biggest reason for transcription job losses in the West is competition from India where English speakers are willing to do transcription for a fraction of the cost.
Fully automatic transcription by AI is a very, very hard problem, based on my experience of many of the jobs I got*. The automatic transcription makes the work 10X easier now though by applying a first pass that gets you 90% of the way and then humans like me come in and fix it all up.
* One hard example was a motocross race commentary, where it was necessary to know which riders the commentators were referring to based on their amusing and sarcastic comments and the riders' nicknames, and to deal with the fact that the audio was 99% revving engines.
I’m sorry your family member lost their job, that’s always rough. That being said, the field of transcription is nowhere near dead. Bighand does transcription and dictation solutions and their annual revenue is over 70 million.
I think they meant the job doing transcription is dead, not that corporations don't provide the service anymore. In keeping with the topic, AI taking jobs, corporations are mostly using digital tooling rather than people to do transcription.
I think that’s what they mean, but they are wrong. Ask any large legal firm in the UK if they’ve gotten rid of all transcriptionists, I’ll buy you a pint if they say yes
But the claim wasn't "some jobs have been eliminated", it was ”the entire field across all specialities (including legal and medical) just doesn't exist anymore as of ~this year (2023).”
Parent claimed that ”the entire field across all specialities (including legal and medical) just doesn't exist anymore as of ~this year (2023).”, and that's patently false - the field still definitely exists and plenty of people work in transcription.
Do they employ humans to do the transcription? A look at their jobs page has zero such openings. This points to consolidation of an entire industry into the pockets of a small number of individuals. That's no comfort to the legions of recently-unemployed.
The legal offices employ humans, Bighand provides tools to enable said work. If there was no transcription happening, dictation tools would not be selling for tens of millions per year
One thing we've seen for transcriptions in medical field (which may also be relevant in legal and others) is a need for a qualified specialist (i.e. the doctor who dictated the thing or did the patient interview) to review and sign-off the final transcription. Currently a transcriptionist does the bulk of the work, and the doctor reviews it; however, with better tools, the doctor would just use the dictation tool directly and review the automated output directly, cutting the 'transcription office' out entirely.
My argument that it won't cause mass unemployment is that most jobs in the economy are not jobs AI can automate because (1) they involve physical work that current robots can't do (like plumbers or any of the building trades), or (2) they involve interaction with other human beings (child care workers, nurses, baristas, waiters, etc.). To the extent jobs are eliminated in AI-adjacent sectors, people will need to shift to these other jobs that AI can't do.
Furthermore, the jobs not impacted will be squeezed by increased competition from displaced workers, driving salaries down. We are going to face massive social disruption because of AI; there is no escaping this.
On #2, there is some indirect effect. AI being able to shift long-haul trucking to self-driving, for example. That would eventually affect not just truck drivers, but all the hotels, truck stops, and restaurants along the way.
I mean not necessarily. You don't need to shift the job to AI before you fire people, you just need to think you can. In many cases in a highly technical & inhuman sense you'll cut hidden inefficiencies by shifting more of the work to the remaining workers, who will be more miserable as a result.
We've seen this a lot of different ways to varying extents. There aren't robots making fast food. But there are demand prediction and just in time scheduling systems that allow fast food businesses to run shifts with fewer employees. Nothing "replaced" those workers, but they don't work there anymore, the service is worse, the remaining workers are overworked and lack consistency in their scheduling.
Examples like this are all over most industries over the last 10-15 years. We'll see more of it and in novel ways. The AI won't replace your job, you just won't have a job anymore because of the AI. Should you care about the difference? Depends on how much stock you own vs how much you need to work, I think.
As white collar professionals we're insulated from a lot of this or at least think we are. It's going to suck shit for most everyone else though.
> people will need to shift to these other jobs that AI can't do.
And why do you think that there will be enough of these other jobs for everybody?
The funny thing is that when automation caused a large loss of blue-collar jobs, the answer was "just retrain for white-collar jobs". Now that it looks like there's a real threat to white-collar jobs, the answer is "just retrain for blue-collar jobs?"
I discussed this in the final section: "The level of employment across the economy is ultimately driven by macroeconomic factors: If consumers spend more money, then businesses will respond by hiring more workers. The last three years have illustrated how powerful this can be: In the wake of the pandemic, Congress and the Fed worked a little too hard to boost the economy, producing a super-tight labor market and rising inflation. If AI starts replacing workers in the coming years, that will put downward pressure on wages and prices while growing the economic pie. That will give the Fed more leeway to cut interest rates and give Congress more room to raise spending or cut taxes. As long as Congress and the Fed are doing their jobs, there’s no reason for the total number of jobs, economy-wide, to decrease."
>If consumers spend more money, then businesses will respond by hiring more workers.
Unless they can expand by spending capital instead. Workers are an expensive long-term liability. This is why every industry that has been able to automate at a cost 'around' human labor or less has done so. You don't harvest wheat with 100 people with scythes; we invest huge amounts into capital-intensive combines that can do the work of a few hundred people. At the same time, some fragile fruits have not been well automated yet, and we use unskilled, low-paid labor for work like that.
What you are not doing is answering the question of "What will happen if we can automate a significant fraction of white collar labor"? Blue collar work moved to the service/information industries for higher pay. If that market shrinks, there is not any other high paying skill based market for these people to move into that I know about.
It is highly possible we could see general wage suppression. Plenty of jobs, very rich companies enriched by the control of AI, not plenty of pay. Now just contemplate the economic problems that's going to cause.
If consumers spend more money, then economies of scale increase, which makes it more attractive for companies to increase supply with extra capital investment in automation instead of more labor.
I think the fallacy here is thinking that the number of workers will go to zero.
In reality, the productivity improvements will reduce overall costs, and either the field gets easier to break into, reducing wages, or incumbents are able to squeeze out other workers and keep more profits for themselves.
There are billions in tech investment flowing into medicine, construction, and hospitality from people hoping to make workers more efficient and aiming to capture a fraction of the wages from those efficiency gains.
There's a very real possibility that jobs get replaced by things like The Home Depot app, which can diagnose an issue, find and order the necessary parts from the store, and walk people through all the steps to fix the problem. Again, it doesn't have to completely eliminate jobs, just shift how money flows around the market.
IMHO, the revolution won't be televised. It will be like self-service machines at the grocery, restaurants, hotels, etc. We just slowly get used to using kiosks and apps to do more and more things for ourselves.
Most automation of physical jobs didn't happen by dropping a robot into the same situation as a human, but by adapting the environment for automation. I think the essay spends too much time positing that since most times robots can't be dropped into existing situations, there is no threat of automation for these industries. That might be true for things that can't be changed rapidly (such as how plumbing works in residential housing built over the last hundred years), but for many things it is reasonable to expect capitalists to shift the environment in which that job is done to allow for scalable automation.
E.g. Grocery store cashiers haven't been replaced by humanoid robots, but they have been significantly offset by computers and scales and by making the customer do the manual steps of scanning and bagging via self-checkout.
One company will create an AI that can do anything, from your taxes to drone delivery. If you want to do your own taxes, it can create any interface you want on the fly: QuickBooks, FreshBooks, etc. Need a CRM, blog, or ERP? It can be all those things. Need a fantasy getaway? Put your VR on and it can give you that too.
i.e. one piece of software with unlimited scope that's constantly iterating new versions of itself, self-improving in near real time.
The software company that creates this will be the last software company, and probably the last unicorn so to speak.
That's why there's an arms race now, because whoever gets AGI first could be good or bad and everybody thinks they can do it better and safer, but let's fire our ethics team because they slow us down...
The team that's first will reach such growth that nobody will ever be able to catch them.
I think we will be fine because corporations tend to scale based on their profits more than their actual needs. I think this phenomenon is caused by tiers of management getting budgets and quotas. AI will make industries more profitable but managers have a talent for burning whatever budget they are allocated, more profits will mean higher budgets which means more employees and services consumed.
Look at huge tech companies, how did FB become a 75k employee company that runs a social media site with ads on it?
I do fear for people who are a one to one replacement in low margin industries like fast food.
To me there is a single defining difference between this and every prior revolution: in the past it was the poor people whose jobs disappeared. Now it's the middle class and potentially upper middle class. I anticipate it not actually affecting all that many more people than prior revolutions, but expect it to be far noisier since the people whose skills are becoming irrelevant have much more power and influence than in the past.
> in the past it was the poor people whose jobs disappeared
I'm not sure that's true. When the industrial revolution displaced factory workers making stuff by hand, they were a new middle class that had moved off subsistence farms to work in the city.
> in the past it was the poor people whose jobs disappeared. Now it's the middle class and potentially upper middle class.
This is not true in the US. In the past, it was lower and middle class jobs that disappeared. Now the threat is to lower, middle, and upper-class jobs.
The upper class people will profit from the lost jobs. Just like they've always grabbed the lion's share of productivity improvements.
The VP of Widgets is going to push an AI Initiative that will save money and increase revenue for the company, and as a result, they will get a nice bonus and a promotion to Chief Widget Officer. Then maybe segue that into a "consultancy" where they overcharge other companies to advise them to do the same.
There's really no such thing as upper-class jobs; upper class is pretty much defined by not needing a job - the capitalists, landlords, CEOs, investors, and "heirs" - unlike the upper end of the middle class, which consists of professionals like lawyers, doctors, and (recently) software developers, who may have high incomes but aren't upper class because they do need to continuously work for them.
There are jobs that are predominantly held by people in the upper class, but being in the upper class (haute bourgeoisie) means that those jobs are not the significant way they engage with the economy.
Okay, things like 'museum curator' or 'charity foundation board member' come to mind, all kinds of positions that are high status but often don't even involve any serious pay and thus are generally taken by independently wealthy. Those jobs won't be automated even if they can be automated, these people will ensure that they stay the way they are.
Cue the wave of replies. Um, akshually, its not sentient yet. It's just predicting the next word. Doesnt matter that it can automate most jobs since its not sentient. We just need to stop sentient AI
I think this round of LLMs will decimate some parts of 'tech' previously considered core, in one fundamental way: those already skilled in tech will be able to garner huge productivity gains and displace the need for additional human support. I'll support this with two perspectives.
1) I began in tech in the mid 90s as a software engineer on a small team building a rudimentary ecommerce type app, written in C, compiled as an apache module, without a database. Now, multiple generations of programming tech later, look at the frameworks and higher level languages, tools and IaC, and you have amazingly complex and powerful systems being built by the same number of engineers in a fraction of the effort.
2) I, now as an engineering leader, can crank out a marketing blog if I put my mind and interest to it. But, I can write an outline and ask ChatGPT to finish it and it's decent. I can ask it to re-write my customer facing docs in a different style. I can do all sorts of amazing transformations of content, extract key snippets for headlines, write summaries, expand ideas, and I focus on the important knowledge work.
I think that this type of AI will cause those with experience that can leverage the tech to become even more entrenched as they can leverage experience and wield their influence via these tools creating much more widely felt impacts. Whether it be in marketing, sales, support, documentation, etc, the few will be able to accomplish much more with much less. Small teams with better tools will _always_ be more productive than larger teams.
1) AI is Software. Marc was just a little off on market timing, and the eating starts for real right now.
2) I worked on contract for a large private equity company for a bit. The team I was on was analyzing the likely impact of AI, robotics, and automation on true unemployment in the US as of 2030. Largely due to the impact on food service, retail, agriculture, customer support, and entry-level corporate jobs, the conclusion was that we'll see true unemployment in the 40% range at that time.
Unemployment in the 40% range is beyond total societal collapse. The unemployment rate in Ukraine is 11% and they are currently being invaded. Sudan's unemployment rate was still under 20% while they were in a civil war.
So the OP is probably using "unemployment rate" incorrectly. But if they are in fact using the International Labor Organization's definition, then this is probably a self-correcting problem. Society will collapse, and those working on the research and implementation of AI and robotics will find themselves on the streets begging for scraps (or getting hacked up by machete-wielding gangs) long before they achieve success in building robots to displace workers.
Don't underestimate what a strong leader can achieve with an army of starving people willing to do practically anything to feed their families.
This is a hard bet to take because we also don't know what other changes will take effect by then.
Perhaps the government reclassifies full-time employment to mean 24 hours/week. Now everyone can stay employed full-time, but the same number of human hours of work are getting done.
In any event, I don't think this will happen by 2030, but by 2040, absolutely.
Our economy would have to be truly ridiculous if having robots do 40% of the work was a bad thing. I mean, that means we have almost twice as many people available to do the stuff that couldn’t get automated! The most obvious solution would just be for everyone to work a little bit more than half as hard as they used to.
If you chop incomes in half, for example, Europe's various aging welfare states would quickly implode due to lack of tax revenue. Every advanced welfare state is built on tax revenue continuing to pour in. Many of the affluent nations in Europe are already struggling with demographics v tax revenue v welfare state needs, a rapid drop in income and it'd all go down quickly.
People to repair, direct, and organize the robots.
I mean, it's kind of a laughable example. Because we don't have humanoid robots today that can do anything meaningful in society, at any kind of meaningful scale. Without that, we are a far cry from having humanoid robots that can literally do everything.
Who said anything about humanoid robots? A humanoid form is generally a terrible form for a robot outside of a small handful of use cases (mostly centering around tending directly to humans).
I'll take the way under on that bet. Despite "software eating the world" there are currently protests in France about raising the retirement age. And we don't have enough workers in many industries (unemployment is at 3.5% in US). We're also facing a demographic cliff as baby boomers retire, so any automation we can get is desirable, in my opinion. But as the article mentions, robotics are far, far from replacing service industry jobs.
You can't point backward on an exponential innovation curve and say nothing will change. Not a compelling argument. Software has already eaten the world. Industries are already getting turned upside down by gen ai.
In HN speak, "It's difficult to see why one would assume we're on an AI S-curve, given the rapid development and global focus on generative AI. It appears we're on an exponential curve, though it remains to be seen how this will unfold. The burden of proof lies with you to support your assertion and provide evidence for an S-curve rather than an exponential one."
S-curves and exponential curves look exactly alike until the S-curve tops out, and nobody would argue generative AI is late in the hype cycle. And we've seen a lot of S-curves in the returns to particular innovations...
The burden of proof lies on the person making the extraordinary claim that returns to this particular technology breakthrough will accelerate infinitely, not the person doubting that. Suffice to say I've heard more persuasive arguments in favour of this time it's different than "software has already eaten the world"...
> Even if they do, it’ll take many years to manufacture enough humanoid robots to have a major impact on the labor market. And in the short-to-medium term, we’d get a lot of new jobs in the robotics sector.
I'm sorry, this is in response to the concept of us being able to make machines that can act physically like humans right? That would rather suggest that they could take on the manual robotics tasks of constructing large numbers of robots.
Manufacturing processes can already be significantly automated, and we are talking here about quite possibly the most valuable thing to have ever been created - the economic machine should be focusing an absurd amount of capital to get them out as quickly as possible.
"How (But Not Why) Marc Andreessen Was Wrong About Software Eating More Of The World Than It Actually Did Between 2012 and 2022, And *Handwave* Maybe AI Will Turn Out That Way Too"
We sometimes talk about "super-human" and "sub-human", but let's invert that and consider the idea of humans rated as "sub-computer" or "super-computer" instead.
Right now most of us are "sub-computer" when it comes to doing math or remembering millions or billions of bits of data, but still "super-computer" when it comes to "physical goods and services like homes, cars, restaurant meals, and haircuts".
As computers (and robots) improve exponentially most of those areas where humans are commonly "super-computer" will contract, maybe all of them. Robots already build cars and cook hamburgers. If homes were built by robots we could make so many so cheaply that homelessness would disappear (except for a few people who like to be nomads.) As for haircuts, there are some hair artists for whom hair care and styling is an art. Other than that it's typically a marginal job that one does when you can't get a better job. I don't mean to disrespect haircutters! That's not where I'm coming from.
Any job that can be done by machine should be done by machine. Humans should not have to "earn" a living in a world that contains transistors, magnets (for motors), and the Turing machine. We are done. We win. Now it's time to relax and lick our wounds and heal each other and ourselves.
"Let the robots do the work and we'll take their pay."
AI should cause mass unemployment and that should be a good thing!
> AI should cause mass unemployment and that should be a good thing!
But there's a wide gulf between "should" and "would".
In order for that utopia to happen, we need to have an economic system that is 100% different from the one we have now, and there is no path to that level of transformation that doesn't require a period of large-scale human suffering. And large-scale human suffering means societal collapse that could very well derail any movement toward a different economic system.
In the system we actually have, mass unemployment means a mass of people who have little to no income, and without income, they can't get the things they need to live.
FWIW I'm working towards just such an economic shift right now. I just acquired some land for regenerative ag and a "living neighborhood"[1]
Something like a cross between Village Homes[2] and Riverbed Ranch[3].
It's meant to be a nucleus or catalyst for us to jump-start a Star Trek-style future without waiting for anybody else. (If it works they can always join later, after we get more land...)
I think part of the problem is due to scientists and philosophers speculating as they do and regular folks taking them at their word. This causes an over-estimation of the abilities of current AI.
Nick Bostrom Says AI Chatbots May Have Some Degree of Sentience: More and more people are saying this.
I think folks on our side of the equation are underestimating the value of generative AI by focusing on its obvious flaws. But those weak areas (reasoning, goal formation, agency, optimization, information retrieval) are all well-established sciences with excellent models and algorithms behind them. What those fields have lacked is the ability to do semantic synthesis in abstract spaces, which LLMs do quite well. Take a multimodal AI and present it with a picture of a bar with cowboys playing poker. Playing the role of a goal-based agent, tell the LLM you need $5. It will analyze the image, discern that there is a poker table, and note that you can make money from the poker game. It can describe the scene in detail and annotate the image to show where to go. That can inform a navigation model, and the navigation system can optimize and execute a route. Once there, it can take images of the poker game, and the LLM can describe the hand and the state of the game to a poker-playing agent. The agent can tell the LLM what moves to make, and the LLM can convert that into instructions for the navigation system.
The LLM itself can’t create agency, goals, reliably win at poker, navigate, manage control systems, etc. But it can very well “glue” these traditional AI systems in a novel way that’s never been possible to date.
Btw, I tried this as an experiment (with me playing the traditional agents). By not delegating to the LLM the parts that require agency, reasoning, calculation, or optimization, I was more than able to prove to myself that this approach is entirely feasible. With finetuning I was able to get it to output popular encodings as well, so the input and output were properly structured.
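The "glue" architecture described above can be sketched roughly like this. Every component name here is an illustrative stub, not a real API: the LLM and the specialist systems (perception, navigation, game-playing, control) are injected as plain callables, and the loop's only job is translating between them.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class GlueLoop:
    """LLM-as-glue: translate between specialist systems that each do
    one thing well but share no common data format. All four fields are
    hypothetical stand-ins for real models/systems."""
    describe_scene: Callable[[bytes], str]    # multimodal LLM: image -> text description
    plan_route: Callable[[str], list]         # navigation system: description -> steps
    poker_agent: Callable[[str], str]         # game-playing agent: game state -> move
    to_controls: Callable[[str], str]         # LLM: abstract move -> actuator instruction

    def earn_five_dollars(self, bar_image: bytes) -> list:
        # 1. LLM perceives the scene and produces text the other systems can use.
        scene = self.describe_scene(bar_image)
        # 2. Navigation system plans a route from the description.
        actions = list(self.plan_route(scene))
        # 3. Specialist agent decides the move; the LLM converts it to controls.
        move = self.poker_agent(scene)
        actions.append(self.to_controls(move))
        return actions
```

The point is the shape, not the stubs: the LLM never does the planning, optimization, or game-play itself; it only renders each system's output into something the next system can consume.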
What happens when no human knows how to do some things anymore, and the company that possesses the technology that drives your business decides to raise its prices 1000%?
It’s not like it never happened.
If it's that much easier to do some things now, how will your company resist being destroyed by competition? Everyone can do it so easily. Maybe those companies won't have meaning anymore. So it's not only jobs but entire industries and services that will disappear.
This is very broad reasoning using analogy and metaphor. It can be useful, but only for broadening your imagination by coming up with scenarios you might not have thought of.
So, maybe AI will/won't have big effects on employment? It's plausible either way, depending on how big you're thinking.
The human nature of cognitive dissonance permits arbitrary beliefs and the denial of reality.
The income security of 2000's software engineering, 1990's trucking, and 1940's farmers will evaporate eventually.
Long term, the only people with large amounts of power and money will be a handful of *illionaire owners and their hangers-on in government and media. The vast majority of humans will be poor, dying, starving climate refugees continually looking for their next meal. It will be a regression.
I think AI will cause mass unemployment -- for whichever companies lay off staff after their AI pivot fails to generate significant new revenue.
Which is not to say AI is going away; rather, every Silicon Valley tech company chasing this holy grail of miraculous untapped revenue is going to burn whatever cash/stock price it has left in the process...
They are exactly the type of company that I hope to god never tries to innovate very much. They provide exactly the type of service many of us need- 100% reliable file syncing.
I don't need them to do anything else and I don't want them to do anything else. So I'm not really sure what AI has to do with them..
"You will be the first Product Manager hire in the AI Center of Excellence, a company wide team mandated with driving AI centric product development across Dropbox."
Social networks for humans are the exact opposite of at risk. They're among the services most likely to continue to be used by humans, even if we suffer mass unemployment. Facebook, Instagram, WhatsApp are free to use (if you're unemployed because AI shredded your job, you can keep your FB account regardless of your income) and have extreme operating margins.
The desire by humans to be social online isn't going anywhere. Rapidly advancing AI isn't going to make people stop wanting to connect with other people they know via social networking.
Generative AI and deep learning affect all types of office work. Picking on 2 companies would be entirely arbitrary and miss the point of what AI replaces.
The author is absolutely correct that AI can't do physical labor or interact with other humans very well. It's also worth pointing out that there will be new job functions created by AI. There will always be productive ways for humans to spend their time, aka work for humans to do. People predicting mass unemployment focus exclusively on the work we have now. People saying "it's different this time" minimize the innovations of the past.