
As I see it, the purpose of AI is the same as the purpose of every technology ever since the hand axe - to reduce labor. Humans have a strong drive to figure out ways to achieve more with less effort.

Obviously there are higher order effects, but just as we wouldn't expect Homo Erectus to stop playing with stone tools because they'd disrupt their society (which of course they did), I don't understand why we should decide to halt technological progress now.





I'm not sure what from my comment suggested pausing technological progress, unless you assume technical progress can only be achieved by exploiting workers. While that is happening, I don't think it is a requirement.

What we see happening in the workforce with AI isn't reducing labor. We see firms making fewer workers do more work and laying off the rest, as in this case, where workers are talking about "hardly sleeping". Similarly, in my org, workers aren't expected to do any less work since adopting AI tools. This case suggests quality is down as well, but maybe that's subjective.


I don't see any evidence of worker exploitation being caused in any even semi-direct way by AI integration. In the field of animation and VFX in particular, while I've never worked in that area myself, essentially everyone I've heard talk about it over decades has mentioned unbearable crunch being routine.

You mentioned earlier that AI makes labor weaker, but I really don't see a case for it. If anything, given how relatively cheap GenAI is, it should allow most anyone with artistic sensibilities and skill in the area who is willing to leverage it to go into business themselves with minimal capital. Why should GenAI give power to employers, especially if they're just paying another company for the AI models?


I mean, we're watching the world try to figure out how to use a new set of tools. As with so many disruptive technologies, the initial stages of development bring a drop in quality and results inferior to the status quo. That usually reverses within five to ten years.

That said, I agree with you that AI is not going to lead to people doing less work, in the same way that computers didn't lead to people doing less work.


The non-technical folks don't understand the very real limitations, hallucinations, and security risks that LLMs introduce into workflows. Tech CEOs and leadership are shoving it down everyone's throats without understanding it. Google/Microsoft are shoving it down everyone's throats without asking, and with all the layoffs that have happened? People are understandably rejecting it.

The entire premise is also CURRENTLY built around copyright infringement, which makes any material produced by an LLM legally questionable. Unless the provider you are using has a clause saying they will pay for all your legal bills, you should NOT be using an LLM at work. This includes software development, btw. Until the legal issue is settled once and for all, any company using an LLM may risk becoming liable for copyright infringement. Possibly any individual too, depending on the setup.


My comment has been weirdly controversial, but I'm not sure why.

I get that LLMs have problems.

I was recently looking into the differences between a flash drive, an SSD, and an NVMe drive. Flash memory is one of the technologies I had in mind when I wrote my comment.

Flash has a bunch of problems. It can only be written over so many times before it dies. So it needs a wear-leveling layer: the controller exposes a smaller virtual storage space, maps it onto the actual storage, distributes writes evenly across the physical cells, and routes around dead cells as they appear.
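
Roughly that idea as a toy sketch (purely illustrative; real flash translation layers are far more involved, and the class and numbers here are made up):

    class WearLeveler:
        def __init__(self, physical_blocks, virtual_blocks, max_writes=3000):
            assert virtual_blocks < physical_blocks   # spare blocks are the whole point
            self.max_writes = max_writes
            self.writes = [0] * physical_blocks       # per-block write count
            self.mapping = {}                         # virtual block -> physical block
            self.free = set(range(physical_blocks))   # erased, still-usable blocks
            self.dead = set()                         # worn-out blocks

        def write(self, vblock, data):
            # New data always lands on the least-worn free block.
            target = min(self.free, key=lambda p: self.writes[p])
            old = self.mapping.get(vblock)
            if old is not None and old not in self.dead:
                self.free.add(old)                    # stale copy can be erased and reused
            self.free.discard(target)
            self.mapping[vblock] = target
            self.writes[target] += 1
            if self.writes[target] >= self.max_writes:
                self.dead.add(target)                 # cell exhausted: never write here again
            # ... actually program `data` into physical block `target` here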

NVMe extends that with a protocol that supports very deep command queues, which lets the controller reorder operations to maximize throughput and makes NVMe drives far more performant. Virtual address space + reordered operations = successful HDD replacement.
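
And a toy of why the deep queue matters (not the actual NVMe command set, and the channel math is made up): with lots of commands in flight, the controller can bucket them by flash channel and service the channels in parallel, instead of working strictly in arrival order.

    from collections import defaultdict

    NUM_CHANNELS = 8

    def schedule(pending):                  # pending: list of (block, op) commands
        per_channel = defaultdict(list)
        for block, op in pending:
            per_channel[block % NUM_CHANNELS].append((block, op))
        return per_channel                  # each channel's list can be serviced concurrently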

My point here is that LLMs are young, and that we're going to compose them into larger workflows that allow for predictable results. But that composition, and the trial and error behind it, take time. We don't yet have the remedies necessary to make up for the weaknesses of LLMs. I think we will as we explore more, but the technology is still young.
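
The sort of composition I mean, as a sketch (`call_llm` is a hypothetical stand-in for whatever model API you use): the workflow only ever sees validated, typed output, and garbage answers are retried instead of propagating downstream.

    import json

    def extract_invoice_total(invoice_text, call_llm, retries=3):
        prompt = ('Return only JSON like {"total": <number>} for this invoice:\n'
                  + invoice_text)
        for _ in range(retries):
            raw = call_llm(prompt)                        # hypothetical model call
            try:
                return float(json.loads(raw)["total"])    # validated, typed result
            except (ValueError, KeyError, TypeError):
                continue                                  # hallucinated/garbled output: retry
        raise RuntimeError("model never produced valid output")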

As for copyright infringement, I think copyright has been broken for a long time. It is too brittle in its implementation. Google did essentially the same thing as OpenAI when they indexed webpages, but we all wrote it off as fair use because traffic was directed to the website (presumably to aggregate ad revenue). Now that traffic is diverted from the website, everyone has an issue with the crawling. That is not a principled argument, but rather an argument centered around "Do I get paid?". I think we need to be more honest with ourselves about what we actually believe.


This seems a bit like trying to replace stone axes with axes made out of pottery, though. Neolithic pottery, of course, had its uses, but didn't make for a good axe. This is clearly worse than actually doing the job properly in every conceivable way.

I'm not convinced that generative AI video will _ever_ hit the 'acceptable' threshold, at least with current tech. Fundamentally it lacks a world model, so you get all this nightmarish _wrongness_.


So let’s add a world model!

> As I see it, the purpose of AI is the same as the purpose of every technology ever since the hand axe - to reduce labor. Humans have a strong drive to figure out ways to achieve more with less effort.

Yes.

> Obviously there are higher order effects, but just as we wouldn't expect Homo Erectus to stop playing with stone tools because they'd disrupt their society (which of course they did), I don't understand why we should decide to halt technological progress now.

The difference is the relationship of that technology to the individual/masses. When a Homo Erectus invented a tool, he and every member of his species (who learned of it) directly benefited from the technology, but with capitalism that link has been broken. Now Homo Sapiens can invent technologies that may greatly benefit a few, but will be broadly harmful to individuals. AI is likely one of those technologies, as it's on a direct path to eliminating broad classes of jobs with no replacement.

This situation would be very different if we either had some kind of socialism or a far more egalitarian form of capitalism (e.g. with extremely diffuse and widespread ownership).


I think you might have an overly noble view of Homo Erectus. I believe a fellow member of the species is at least as likely to get that hand axe smashed into their skull as to benefit from it.

You completely forgot to include the technologies invented to enslave, imprison, and monitor labor. Back to the library with you!

PS: include the technology built to kill enemy labor in large numbers. Start with the atomic bomb in Japan... that saved a lot of labor, right?


Well, obviously it's not the most representative example, but yes, if a country intends to kill hundreds of thousands of people, then an atomic bomb is probably the most cost-effective way, even after accounting for R&D. Moreover, if the calculus is how to win the war with the lowest number of additional lives lost, the atomic bombs dropped on Japan were quite likely significantly less deadly, even when comparing just against the expected number of Japanese civilian casualties from the alternative scenario of a Normandy-like invasion of Japan.

EDIT: It's worth saying that humans have been killing each other from the dawn of humanity. Studies on both present-day and historical tribal societies generally show a significantly higher homicide rate than what we're used to seeing in even our most dangerous cities and across our biggest wars.

A bit old, but extensive numbers - https://ourworldindata.org/ethnographic-and-archaeological-e...


> if the calculus is how to win the war with the lowest number of additional lives lost, the atomic bombs dropped on Japan were quite likely significantly less deadly

This is just US propaganda. These numbers come from the fact that the US was "anticipating" a ground invasion of Japan or vice versa.

Which, to be clear, was always a made-up alternative. By the time the atomic bomb was dropped, Japan had already tried to surrender multiple times, both to us and to the Soviets. The reality is we just wanted to drop an atomic bomb.


You are probably right that Japan was close to surrendering and had begun some signaling around it, particularly via the Soviets, but my understanding is that they hadn't actually done so, and according to the sources I read, they absolutely weren't willing to unconditionally surrender and demilitarize before the bomb.

But I don't understand why you put the ground invasion plans in quotes - are you claiming that all the effort spent on Operation Downfall[0] was just a misdirection intended to fool everyone, including the high-ranking officers involved in the planning?

[0] https://en.wikipedia.org/wiki/Operation_Downfall


I don't think it was a misdirection, but I do think it was an obviously bad idea that should've never materialized, and it didn't. I'm arguing it wouldn't have materialized anyway, and if it did, it wouldn't have been necessary.

Things did work out for Japan in the long run, but I still believe a conditional surrender + no atomic bombs should have been the solution. The US was very greedy with its demands, and I think a large part of that is our history of militarism and our desire to use new weaponry. The atomic bomb was already made, and I think realistically we were just itching to use it.



