This is only worth arguing about because software has value. Putting this in context of a world where the cost of writing code is trending to 0, there are two obvious futures:
1. The cost continues to trend to 0, and _all_ software loses value and becomes immediately replaceable. In this world, proprietary, copyleft and permissive licenses do not matter, as I can simply have my AI reimplement whatever I want and not distribute it at all.
2. The coding cost reduction is all some temporary mirage, to be ended soon by drying VC money/rising inference costs, regulatory barriers, etc. In that world we should be reimplementing everything we can as copyleft while the inferencing is good.
There’s another option: the cost of copying existing software trends to 0, but the cost of writing new software stays far enough above 0 that it remains relatively expensive.
There will always be some cost, though. Even if perfect code is getting one-shotted out, that code still has to be constantly maintained and adapted to changing conditions and technology. It can't stay at 0 forever, because one day the power is surely going to go out!
More and more I am drawn to these kinds of ideas lately, perhaps as a kind of ethical sidestep, but still:
It's not going to solve any general issue here, but the one thing these freaks need that can't be generated by their models is energy, tons of it. So, the one thing I can do as an individual and in my (digital) community is work to be, in a word, self-sustainable. And depending on my company I guess, if I was a CEO I would hope I was wise enough to be thinking on the same lines.
Everyone is making beautiful mountains from paper and wire. I will just be happy to build a small dollhouse of stone; I think it will be worth it. Otherwise, how can we not see at least some small level of hubris in all this?
There was a recent ruling that LLM output is inherently public domain (presumably unless it infringes some existing copyright). In which case it's not possible to use them to "reimplement everything we can as copyleft".
It's more complicated: the ruling was that an AI can't be an author, and the thing in question is (de facto) public domain because it has no author, in the context of the "dev" claiming it was fully built by AI.
But AI-assisted code does have an author, and claiming code is AI-assisted even when it is fully AI-built is trivial (if you don't make it public that you didn't do anything).
Also, some countries have laws that treat AI like a tool, in the sense that whoever used it is the author by default, AFAIK.
The article is proceeding from the premise that a reimplementation is legal (but evil). To help my understanding of your comment, do you mean:
1. An LLM recreating a piece of software violates its copyright and is illegal, in which case LLM output can never be legally used because someone somewhere probably has a copyright on some portion of any software that an LLM could write.
2. You read my example as "copying a project without distributing it", vs. "having an LLM write the same functionality just for me"
There would be no GPL if anybody could have cheaply and trivially reproduced the software for printers and Lisp machines Stallman was denied access to. There is no reason to force someone to give you the source code if it takes no effort to reproduce.
Mind you, that isn't what happened here. The effort involved in getting a LLM to write software comes from three things: writing a clear unambiguous spec that also gives you a clean exported API, more clean unambiguous specs for the APIs you use, and a test suite the LLM can use to verify it has implemented the exported API correctly. Dan got them all for free, from the previous implementation which I'm sure included good documentation. That means his contribution to this new code consisted of little more than pressing the button.
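That third ingredient, a test suite the model can run against the exported API, can be surprisingly small. A minimal sketch of what such black-box conformance tests look like (the `KVStore` class and its methods are hypothetical, purely for illustration; in the real scenario the tests come from the previous implementation and the LLM iterates until they pass):

```python
# Hypothetical black-box tests for a reimplemented key-value API.
# They exercise only the exported interface, never implementation
# details, which is exactly what makes a cleanroom rewrite checkable.

class KVStore:
    """Toy stand-in for the module under test."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key, default=None):
        return self._data.get(key, default)
    def delete(self, key):
        # Returns True if the key existed and was removed.
        return self._data.pop(key, None) is not None

def test_roundtrip():
    s = KVStore()
    s.put("a", 1)
    assert s.get("a") == 1

def test_missing_key_returns_default():
    s = KVStore()
    assert s.get("nope", default=42) == 42

def test_delete_reports_presence():
    s = KVStore()
    s.put("a", 1)
    assert s.delete("a") is True
    assert s.delete("a") is False

if __name__ == "__main__":
    test_roundtrip()
    test_missing_key_returns_default()
    test_delete_reports_presence()
    print("all API tests passed")
```

Hand an LLM a suite like this plus a clean API description, and "press the button" really is most of the remaining work.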
Sadly, if you wrote some GPL software with excellent documentation, a thorough test suite, a clean API, and an implementation built on well-understood libraries, the cost of creating a cleanroom reproduction has indeed gone to near zero over the past 24 months. The GPL licence is irrelevant.
Welcome to the brave new world.
PS: Sqlite keeping their test suite proprietary is looking like a prescient masterstroke.
PPS: The recent ruling that an API isn't copyrightable just took on a whole new dimension.
I think this article just speaks to the immaturity of our use of AI at this "moment."
Production grade systems might be written by agents running on filesystem skills, but the production systems themselves will run on consistent and scalable data structures.
Meanwhile the UI of AI agents will almost certainly evolve away from desktop computers and toward audio/visual interfaces. An agent might get more context from a zoom call with you, once tone and body language can be used to increase the bandwidth between you.
I don't think written prompting will ever go away. Writing helps you organize your thoughts in a way that speaking, umm, ah, wait no, hang on, does not. With writing I can go back and change what I've already written before I hit send. Anybody who's prompted with speech for any length of time has hit "wait no nevermind start over". So STT will get better, sure; it's already quite good. I just don't see text entry entirely going away, because Human Intelligence (HI) just doesn't work in a way where speech could be the only interface.
Totally agree. Speech is powerful and it will always have its place. It will continue to evolve and become far more useful than it is today. But at its core, it remains a highly lossy medium compared with text, especially when it comes to expressing (and consuming expressions thereof) ideas. Even the best voice memo cannot rival a clear, well-structured email when it comes to explaining something even moderately complicated.
Voice assistants, AI pins, and whatever other speech-based interfaces they come up with next will always be "nice to have", but I don't think anybody should be throwing away their keyboards anytime soon. We may have transformed how we make computers work for us, yet the ways we interact with them are much harder to revolutionize, because they are grounded in the physical, neurological, and habitual constraints of human existence. All of which is to say, when I look at the future, I still see a lot of typing.
Saw this video recently, by an AI company working to get contextual cues from tone and body language. I think they're converting it to text and feeding it into an LLM, so not natively multimodal, but I still thought it was really cool.
The signatories of this (letter, petition, whatever) are the same folks who profit from creating this Pandora's Box. If you don't want it opened, stop making it?
Not all products will get abused: either better tools already exist (like matches, lighters, etc.) or there are just no good abusive use cases. Some products, though, are just begging to be abused. You can't really play tit for tat with a household appliance here; these straw men aren't from the same planet.
Then start your own company where you control the direction of the products. All these people make millions and only speak up after they are set for life.
I’m torn. On the one hand it’s nice that the rank and file take a stand against extreme overreach. On the other hand these rank and file scientists, engineers, whatever are fostering a technology which has so many at-best questionable effects on all of society.
Idealists who “genuinely”[1] want to change the world “for the better”[1] will just move on to the next Interesting Problem if it ends up making the world worse.
I still think Jevons Paradox has a role to play here. It was not obvious at all during the industrial revolution that the middle class would end up processing information, and to the displaced middle class, their prospects must have felt exactly like the article and your statement here.
If we accept that information processing and process automation are about to become ludicrously cheap compared to now, what previously-impossible projects become feasible-but-hard now?
Security oversight and trust management for AI services seems like a good stepping stone. Air traffic control innovations that enable a flying vehicle per person? Dynamic MMORPGs where great storytellers can build and manage whole worlds for adventurers to explore? Organic food production so well managed that it becomes accessible to normal folks? Perhaps our ability to consume resources before robots will flip everything around and mining basic resources will become the valuable human labor.
Even without new categories, there are plenty of service professions where a human touch will be valued over anything that any machine can provide. These might be unlikely to pay doctor-level pay (except perhaps... doctors).
And all of that sounds great on paper, until - again - you remember that what's being pitched and sold is not the cotton gin, or industrial automation, or robotic assembly lines, but generalized artificial intelligence. The sales pitch is the eventual replacement for all human labor in a consumption-driven society.
Who is going to buy flying vehicles when there's no jobs, and said vehicles are manufactured in dark factories like current Chinese EVs are? Certainly not the swaths of the unemployed.
Who is going to pay for MMORPGs curated by human GMs (again), their compute infrastructure, their content, their maintenance? Not the unemployed, not absent profound societal and policy changes!
The fact that you're falling all the way back to basic resource extraction as a potential outlet for labor is just... I don't even have the words to describe how profoundly out of touch you are with the effort involved, the harms caused (mining is one of the single most dangerous professions a human can possibly do, and your pitch is to send more humans into the mines? Or onto asteroids? Have you ever talked to a miner?), and the pittance of pay doled out to these humans as-is, never mind in a post-AI society where labor availability far outstrips available roles.
The problem was never "what will humans do when they don't have to work", but "how will humans survive when they can't find work in a society predicated upon it for survival", and wow, you've done nothing to address that question beyond "send 'em to the mines".
Jevons Paradox falls to pieces in the face of a proposed tool that can replace human labor wholesale, and y'all know it. Stop trying to wallpaper over this with historical context alone and actually sit, think critically, and address the core question at hand:
If human labor is no longer required due to generalized artificial intelligence and robotics rendering humans obsolete, how do we prepare for such a potentiality ahead of time without risking the collapse of society due to sudden and mass displacement of labor?
The answer drops out quite naturally: nationalise or heavily regulate/tax inference. Until then it's a wide-eyed free-for-all wealth-transfer arbitrage for capital. Net loss for society while it's permitted.
You are overinflating how useful AI is. Moreover, most FOSS people actually don't want any AI-written code unless the human driving it has done an equivalent amount of work understanding and designing it from scratch.
It isn't about hatred of the human drivers for me. Waymo's service is so safe and consistent that I would trust my 10-yr-old to take a ride in it solo if it were permitted by the ToS. Most Uber/Lyft/etc. rides are just as safe, but due to the inconsistency I would never reach that level of trust.
I don't live in a covered area, but when I am in range I will gladly pay 10-20% more for a Waymo ride than an Uber/Lyft/etc.
Nowadays nesting just adds operating-system overhead and costs I/O performance if your VM doesn't have paravirtualization drivers installed, since CPUs all have hardware virtualization support.