Absolutely incorrect. A market is just any structure, place, or mechanism that allows buyers and sellers to exchange goods, services, information, or assets. There can be one seller and many buyers, one buyer and many sellers, or anything in between.
If I go to your stall and coercively take a product you want 1,000 tokens for, but only leave 10 tokens, is that still a market? It certainly fits the definition you present.
I would argue that price discovery is a big part of a market. Again, these things are already codified, e.g. wash trades, insider trading, etc.
In managing a large, enterprise-sized code base, I experience the opposite: I can guarantee a much more homogeneous quality across the code base.
What I am seeing is the opposite of slop, and at a lower cost.
Today, I literally made a large and complex migration of all of our endpoints. Took AI 30 minutes, including all frontends using these endpoints. Works flawlessly, debt principal down.
All big tech companies are mandating employees to use AI for tasks. Unless there's a similar movement to open source that is AI-free, you're going to need to be tech-free if you want to avoid companies that use AI.
Look friend, I really hope you can realize how you sound in your post. You're extraordinarily confidently claiming that you refactored some ambiguous endpoints in 30 minutes. Whenever I see someone act that confidently about a refactoring, a thousand alarms go off in my head. I hope you see how it sounds to others. Like, at least spend longer than a lunch break on it, with just a tad more diligence. Or hell, maybe even consider lying about how much time you spent on it. But my point is that your shortcuts will burn you. If you want to go down that path, I'm happy to be a witness to the eventual schadenfreude.
My issue isn't with the fact that you used AI. My issue is with how confident you are that it worked well and exactly to spec. I'm very well aware of what these systems can do. Hell, I've been able to get postgres to boot inside linux inside postgres inside linux inside postgres recently with these tools. But I'm also acutely aware of the aggressive modes in which these systems can break.
So again, which company should we all avoid so that we can avoid your, specifically your, refactoring?
One point: yes, you're speaking from the power position. God-mode over a fleet of minions has always been an engineer's wet dream. That's not even bad per se. It's the collateral damage downstream that's at issue. Maybe you don't see any damage, but that's largely the point. Is it really up to you to say?
Let's not debate that it's possible to make very large very safe changes. It is possible that you did that.
This is about "slop bias". I'd wager that empowering everyone, especially those in power positions, to ship 50x more code will produce more code that is slop than not. You strongly oppose this because it's possible for you to update an API?
I'm stuck on the power-position thing because I'm living it. I'm pro-AI, but there are AI-transformation waves coming in, mandated top-down. From their green-field position it's undeniable crush-mode, killin' it. Maintenance of all kinds is separate, and the leaders and implementers don't pay this cost. Maybe AI will address everything at every level. But those imposing this world assume that to be true, while it's the line engineers and sales and customer service reps that will bear the reality.
> Maybe AI will address everything at every level.
I think this is the idea you need to entertain / ponder more on.
I largely agree with you; what I don't agree with is the weighting of the individual elements.
My point was that I could do a 30-minute cleanup in order to streamline hundreds of endpoints. Without AI I would not have been able to justify this migration for business reasons.
We get to move faster, also because we can shorten deprecation tails and generally keep code bases fit more easily.
In particular, we have dropped the external backoffice tool, so we have a single mono repo.
AI does tasks all the way from the infrastructure (setting policies on resources) down to the frontends.
Equally, if resources are not referenced in our codebase, we know with 100% certainty they are not in use and can be cleaned up.
Unused code audits are done on a weekly schedule, like our security audits, robustness audits, etc.
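For what it's worth, a minimal sketch of what such an unused-resource audit could look like. The resource names and file extensions below are illustrative assumptions, not details from the setup described above:

```python
import os
import re

def unused_resources(resource_names, repo_root):
    """Return the subset of resource_names never referenced in any source
    file under repo_root. A resource that appears nowhere in the codebase
    is a candidate for cleanup."""
    remaining = set(resource_names)
    # Hypothetical set of source extensions worth scanning.
    source_exts = (".py", ".ts", ".tsx", ".tf", ".yaml")
    for dirpath, _dirs, files in os.walk(repo_root):
        for fname in files:
            if not fname.endswith(source_exts):
                continue
            path = os.path.join(dirpath, fname)
            with open(path, encoding="utf-8", errors="ignore") as f:
                text = f.read()
            # Drop any resource mentioned as a whole word in this file.
            remaining -= {r for r in remaining
                          if re.search(rf"\b{re.escape(r)}\b", text)}
            if not remaining:
                return set()  # everything is referenced somewhere
    return remaining
```

In practice a real audit would also scan infrastructure state (the deployed resources themselves), but the idea is the same: if the mono repo is the single source of truth, absence of a reference is strong evidence of dead weight.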
Yeah, the more I debate the AI-lovers, the more I can empathize with the possibility that it may very well turn out that everything is an agent. Encodable.
I'm not a doomer either, but I do think this arc is a human arc: there's going to be a lot of collateral damage. To your point, Agents with good stewardship can also implement hygiene and security practices.
It's important we surface potential counter metrics and unintended side effects. And even in doing so the unknown unknowns will get us. With that said, I like this positive stewardship framing, I'll choose to see and contribute to that, thanks!
I definitely don't identify as an AI lover. For me, year 0 of AI was February 6th, 2026, and the release of Opus 4.6.
Until that day we had roughly zero AI code in the code base (additions or subtractions). So by all reasonable measures I am a late adopter.
For code bases, AI does not concern me. We have for quite some time worked with systems that are too complex for a single person to comprehend, so this is a natural extension of abstraction.
On the other hand, I am super concerned about AI and society: the impact on human well-being of "easy" AI relations over difficult human connection, and the continued human alienation and relational violation (I think the "woke" discourse will go on steroids).
I think society is going to be much less tolerant. And that frightens me.
I don't doubt it completed the initial coding work in a short time, but the fact that you've equated that with flawless execution is on the concerning-to-scary spectrum. I can only assume you're talking "compiles, runs, ship it".
The danger is not generating obvious slop, it's accepting decent and convincing outputs as complete and absolving ourselves of responsibility.
You are right, and it happens that the output looks decent.
Code idioms, or patterns if you will, are largely our solution.
We have small pattern/[pattern].md files throughout the code base where we explain how certain things should be done.
In this case, the migration was a normalization to the specific pattern specified in the pattern file for the endpoints.
Semantics were not changed and the transform was straightforward. Just not a task I would have been able to justify spending time on from a business perspective.
Now, the more patterns you have, and the more your code base adheres to these patterns, the easier you can verify the code (as you recognize the patterns) and the easier you can call out faulty code.
It is easier to hear an abnormality in music than in atmospheric noise. It is the same with code.
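As a toy illustration of that pattern-based verification idea, here is a sketch of a check that flags endpoints deviating from an agreed convention. The naming rule is invented for illustration; the actual pattern files described above would encode whatever the team agreed on:

```python
import re

# Hypothetical convention from a pattern/[pattern].md file:
# every endpoint path is versioned and kebab-case, e.g. /v1/user-profile
ENDPOINT_PATTERN = re.compile(r"^/v\d+/[a-z]+(?:-[a-z]+)*$")

def audit_endpoints(paths):
    """Return the endpoint paths that deviate from the agreed pattern.
    Deviations stand out the same way a wrong note does in music."""
    return [p for p in paths if not ENDPOINT_PATTERN.match(p)]
```

A check like this can run as one of the weekly audits: conforming code becomes background noise, and anything the audit returns is, by construction, an abnormality worth a human look.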
Seeing plenty of this. The quality of agentic code is a function of the quantity and quality of adversarial quality gates. I have seen no proof that an agentic system is incapable of delivering code that is as functional, performant and maintainable as code from a great team of developers, and enough anecdotes in the other direction to suggest that AI "slop" is going to be a problem that teams with great harnesses will be solving fairly soon if they haven't already.
I take your point, but then it makes me wonder: is there no more value in diversity?
[Philosophy disclaimer] So in a code-base diversity is probably a bad idea, ok that makes sense. But in an agentic world, if everything is run through the Perfect Harness then humans are intentionally just triggers? Not even that, like what are humans even needed for? Everything can be orchestrated. I'm not against this world, this is an ideal outcome for many and it's not my place to say whether it's inevitable.
What I'm conflicted on is whether it even "works" in terms of outcomes. Like, have we lost the plot? Why have any humans at all? The one-person billion-dollar company is incoming. Software aside, is the premise even valid? One person's inputs multiplied by N thousand agents -> ??? -> profit.
Why have humans do work at all? We could have a radically better existence. It would mean that the few at the top of the pyramid lose their privileged position relative to the rest of us, but we could, actually, have that world of abundance for all.
Work in the current sense arguably isn't even desirable.
> Today, I literally made a large and complex migration of all of our endpoints. Took AI 30 minutes, including all frontends using these endpoints. Works flawlessly, debt principal down.
This is either a very remarkable or a very frightening statement. You're claiming flawless execution within the same day as the change.
If you're unable to tell us which product this is, can you at least commit to report back in a month as to how well this actually went?
The public at large doesn't seem to care about this distinction.
Here's proof: search Google for "ai data centers heat island". Around 80 websites published articles based on a preprint which was later shown to be completely wrong and misleading.
It matters because for medical questions you [are supposed to] go to a medical professional, and they very much care about and make that distinction.
Which is exactly the problem here; it "used to be" that reasonable people would disbelieve random things they find on the internet at least to some degree. "Media literacy". LLMs don't seem to have that capability, and a good number of people are using LLMs in blissful ignorance of that fact. They very confidently exclaim things that make them sound like experts in the field in question.
Would it have made a difference for the AI data center heat island thing you're quoting? Maybe not. But for medical matters? Most people wouldn't even have caught wind of this odd fake disease; LLMs just amplify it and serve it to everyone.
I agree with you, and I don't think the companies have solved it. I think they should be more skeptical of medical articles in general and be more conservative.
> Which is exactly the problem here; it "used to be" that reasonable people would disbelieve random things they find on the internet at least to some degree. "Media literacy". LLMs don't seem to have that capability, and a good number of people are using LLMs in blissful ignorance of that fact.
I completely disagree with this part. LLMs absolutely have the ability to be skeptical, but skepticism comes at a cost. LLMs did what used to be a reasonable thing: trust articles published in reputable sources. But maybe they shouldn't do that; they should spend more time and processing power on being skeptical.
The definition of a preprint is that it isn't peer reviewed. Unless you're an expert in the field, you IMHO shouldn't be looking at preprints. Might be OK if they come recommended by multiple unaffiliated experts (i.e. kinda half reviewed), but definitely not by default.
The stupidity that makes depriving yourself of one of your senses seem like a sensible thing to do in a busy, chaotic environment.
I don’t actually mind people doing that though. What is annoying is the entitled attitude that there should be no consequence for that choice, and everyone else should orbit/compensate around their lack of situational awareness.
This. So many people are coerced to wear a straitjacket, run the treadmill, and do the rat race, conditioned that "hey work is life, life is work" and is divided into workweek and weekend. For some it may work well, if their job aligns with their interests and passions. Many others are in it with only half their heart. They may show better productivity metrics to the manager class, but what they gain there is lost in 'quality of service' to society. You get automatons at the desk, not interested to handle your edge case. But my, are they 'productive' for the bottom line.
People also love type 2 fun. It's not fun in the moment, but you're happy that you did it.
If your work is type 1, more power to you. A lot more falls under type 2's umbrella. I find writing to be type 2 more often than not. Making complicated designs is often not fun in the moment. I like exercise, but sprint workouts are type 2.
A complete accounting of fun types has to include Type 0 fun as well: fun when it's happening, but not fun later. Drugs, gambling, crime, and most traditional vices fall in this category.
For that type of publishing please use Encyclopedia Britannica.
You will get the url in the 2027 edition on print.