Hacker News | tossandthrow's comments

This is Wikipedia.

For that type of publishing please use Encyclopedia Britannica.

You will get the URL in the 2027 print edition.


Oh how I wish the print editions were still being released.

There is only a market if there is a commodity.

La Liga is not a commodity, as I cannot equally make a La Liga myself.

This is the basis for antitrust regulations.

So no, there is no market. And as such, there is no market that functions as it should.


Absolutely incorrect. A market is just any structure, place, or mechanism that allows buyers and sellers to exchange goods, services, information, or assets. There can be one seller and many buyers, one buyer and many sellers, or anything in between.

If I go to your stall and coercively take a product you want 1000 tokens for, but leave only 10 tokens, is it still a market? It certainly fits the definition you present.

I would argue that price discovery is a big part of a market. Again, these things are already codified, e.g. wash trades, insider trading, etc.


This has not so much to do with the law as with the execution of it.

It seems vastly disproportionate, and is likely severe overstepping.

The issue is that Spain does not have a backstop. It is a complete institutional failure.

That's why you can laugh at them. Because this level of institutional failure should not happen where I come from.


I use the null uuid as primary key - never had any DB scaling issues.

Yeah, no NULL is ever equal to any other NULL, so they are basically unique.

You are also guaranteed to be able to retrieve your data, just query for '... is null'. No complicated logic needed!
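The joke actually executes, at least in SQLite, which (as a documented compatibility quirk) permits NULLs in non-INTEGER PRIMARY KEY columns and treats each NULL as distinct for uniqueness checks. A minimal sketch:

```python
import sqlite3

# SQLite quirk: PRIMARY KEY columns other than INTEGER PRIMARY KEY accept
# NULLs, and every NULL counts as distinct for uniqueness purposes.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id TEXT PRIMARY KEY, payload TEXT)")
conn.execute("INSERT INTO t (id, payload) VALUES (NULL, 'a')")
conn.execute("INSERT INTO t (id, payload) VALUES (NULL, 'b')")  # no conflict!

# And "retrieval" works exactly as advertised:
rows = conn.execute("SELECT payload FROM t WHERE id IS NULL").fetchall()
print(rows)  # [('a',), ('b',)]
```

Most other databases (and the SQL standard's `WITHOUT ROWID`-style strict tables in SQLite itself) reject NULL primary keys outright, which is rather the point of the joke.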

You are very strong on the "slop" bias. Why?

Managing a large to enterprise-sized code base, I experience the opposite. I can guarantee a much more homogeneous quality across the code base.

It is the opposite of slop I am seeing. And that at a lower cost.

Today, I literally made a large and complex migration of all of our endpoints. Took AI 30 minutes, including all frontends using these endpoints. Works flawlessly, debt principal down.


Which company do you work at so we can avoid your migrated endpoints?

All big tech companies are mandating employees to use AI for tasks. Unless there's a similar movement to open source that is AI-free, you're going to need to be tech-free if you want to avoid companies that use AI.

Wtf. You don't even know what the migration was about?

I mean, I'm always down for learning something new. But I hope what I learn includes the name of the company I'd like to avoid.

Your tone is in conflict with the statement that you are curious.

It's because you're deflecting. :)

Deflecting from what? Telling the company name so you can avoid it due to your incredibly curious nature?

Sigh.

Look friend, I really hope you can realize how you sound in your post. You're extraordinarily confidently saying that you refactored some ambiguous endpoints in 30 minutes. Whenever I see someone act that confidently about refactoring, thousands of alarms go off in my head. I hope you see how it sounds to others. Like, at least spend longer than a lunch break on it, with just a tad more diligence. Or hell, maybe even consider LYING about how much time you spent on it. But my point is that your shortcuts will burn you. If you want to go down that path, I'm happy to be a witness to the eventual schadenfreude.

My issue isn't with the fact that you used AI. My issue is with how confident you are that it worked well and exactly to spec. I'm very well aware of what these systems can do. Hell, I've been able to get postgres to boot inside linux inside postgres inside linux inside postgres recently with these tools. But I'm also acutely aware of the aggressive modes that these systems can break in.

So again, which company should we all avoid so that we can avoid your, specifically your, refactoring?


I definitely did not say anything about ambiguous endpoints.

The migration was relatively straightforward and could likely have been implemented as automatic code transforms.

What I did say was that it was complex.


Yikes. Have a good one.

One point: yes, you're speaking from the power position. God-mode over a fleet of minions has always been an engineer's wet dream. That's not even bad per se. It's the collateral damage downstream that's at issue. Maybe you don't see any damage, but that's largely the point. Is it really up to you to say?

What is the collateral damage? In ensuring that a bunch of endpoints use the same structure using LLMs?

Let's not debate that it's possible to make very large very safe changes. It is possible that you did that.

This is about "slop bias". I'd wager that empowering everyone, especially power positions, to ship 50x more code will produce more code that is slop than not. You strongly oppose this because it's possible for you to update an API?

I'm stuck on the power-position thing because I'm living it. I'm pro-AI but there are AI-transformation waves coming in and mandating top-down. From their green-field position it's undeniable crush-mode killin' it. Maintenance of all kinds is separate and the leaders and implementors don't pay this cost. Maybe AI will address everything at every level. But those imposing this world assume that to be true, while it's the line-engineers and sales and customer service reps that will bear the reality.


> Maybe AI will address everything at every level.

I think this is the idea you need to entertain / ponder more on.

I largely agree with you; what I don't agree with is the weighting of the individual elements.

My point was that I could do a 30-minute cleanup in order to streamline hundreds of endpoints. Without AI I would not have been able to justify this migration for business reasons.

We get to move faster, also because we can shorten deprecation tails and generally keep code bases fit more easily.

In particular, we have dropped the external backoffice tool, so we have a single mono repo.

AI does tasks all the way from the infrastructure (setting policies on resources) to the frontends.

Equally, if resources are not referenced in our codebase, we know with 100% certainty they are not in use, and they can be cleaned up.

Unused code audits are being done on a weekly schedule. Like our sec audits, robustness audits, etc.


Yeah, the more I debate the AI-lovers the more I can empathize with the possibility it may very well turn out to be everything is an Agent. Encodable.

I'm not a doomer either, but I do think this arc is a human arc: there's going to be a lot of collateral damage. To your point, Agents with good stewardship can also implement hygiene and security practices.

It's important we surface potential counter metrics and unintended side effects. And even in doing so the unknown unknowns will get us. With that said, I like this positive stewardship framing, I'll choose to see and contribute to that, thanks!


I definitely don't identify as an AI lover. For me, year 0 of AI was February 6th 2026 and the release of Opus 4.6.

Until that day we had roughly zero AI code in the code base (additions or subtractions). So in all reasonable terms I am a late adopter.

For code bases, AI does not concern me. We have for quite some time worked with systems that are too complex for any single person to comprehend, so this is a natural extension of abstraction.

On the other hand, I am super concerned about AI and society: the impact on human well-being of "easy" AI relations over difficult human connection, and the continued human alienation and relational violation (I think the "woke" discourse will go on steroids).

I think society is going to be much less tolerant. And that frightens me.


>> Works flawlessly, debt principal down.

I don't doubt it completed the initial coding work in a short time, but the fact that you've equated that with flawless execution is on the concerning-to-scary spectrum. I can only assume you're talking "compiles, runs, ship it".

The danger is not generating obvious slop, it's accepting decent and convincing outputs as complete and absolving ourselves of responsibility.


You are right, and it happens that the output looks decent.

Code idioms, or patterns if you will, are largely our solution.

We have small pattern/[pattern].md files throughout the code base where we explain how certain things should be done.

In this case, the migration was a normalization to the specific pattern specified in the pattern file for the endpoints.

Semantics were not changed and the transform was straightforward. Just not a task I would be able to justify spending time on from a business perspective.

Now, the more patterns you have, and the more your code base adheres to these patterns, the easier you can verify the code (as you recognize the patterns) and the easier you can call out faulty code.

It is easier to hear an abnormality in music than in atmospheric noise. It is the same with code.
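As a purely hypothetical sketch of such a file (the name and conventions here are invented for illustration, not taken from the commenter's codebase), a pattern file for endpoints might look like:

```markdown
# pattern/endpoint.md (hypothetical example)

All HTTP endpoints follow this shape:

- Validate input with a schema at the boundary, never inside handlers.
- Handlers return a typed result object; serialization lives in one shared layer.
- Errors map to the shared error envelope ({ "code", "message" }).
- Every endpoint has a matching e2e test named after its route.
```

The value is that both humans and agents can be pointed at the same short file, and deviations become pattern-matching exercises rather than judgment calls.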


Seeing plenty of this. The quality of agentic code is a function of the quantity and quality of adversarial quality gates. I have seen no proof that an agentic system is incapable of delivering code that is as functional, performant and maintainable as code from a great team of developers, and enough anecdotes in the other direction to suggest that AI "slop" is going to be a problem that teams with great harnesses will be solving fairly soon if they haven't already.

I take your point, but then it makes me think: is there no more value in diversity?

[Philosophy disclaimer] So in a code-base diversity is probably a bad idea, ok that makes sense. But in an agentic world, if everything is run through the Perfect Harness then humans are intentionally just triggers? Not even that, like what are humans even needed for? Everything can be orchestrated. I'm not against this world, this is an ideal outcome for many and it's not my place to say whether it's inevitable.

What I'm conflicted on is does it even "work" in terms of outcomes. Like have we lost the plot? Why have any humans at all. 1 person billion dollar company incoming. Software aside, is the premise even valid? 1 person's inputs multiplied by N thousand agents -> ??? -> profit


> Why have any humans at all

Why have humans do work at all? We could have a radically better existence. It would mean that the few at the top of the pyramid lose their privileged position relative to the rest of us, but we could, actually, have that world of abundance for all.

Work in the current sense arguably isn't even desirable.

Maybe I've just read too many Culture books.


These are the right questions to ask.

> Today, I literally made a large and complex migration of all of our endpoints. Took AI 30 minutes, including all frontends using these endpoints. Works flawlessly, debt principal down.

This is either a very remarkable or a very frightening statement. You're claiming flawless execution within the same day as the change.

If you're unable to tell us which product this is, can you at least commit to report back in a month as to how well this actually went?


It is a part of the smoke testing process right now.

But we run 90% test coverage, e2e tests, etc. None of which have been altered, and all are passing.

Migrations are generally not that high risk if you have a code base in alright shape.


Ironically the post saying it is not slop sounds exactly like ai slop.

Too many spelling errors for that to be slop...

Yes, and all of Spain is learning how to use VPNs

Seems to be a failure of the publishing system.

For humans, or Ai, to have any knowledge, we need to have trustworthy sources.

Naturally, when you use publishing systems considered trustworthy, the output is going to be trusted.


A preprint isn't a published work.

Why does that difference matter?

The public at large doesn't seem to care about this distinction.

Here's proof. Search for this in Google: "ai data centers heat island". Around 80 websites published articles based on a preprint which was later shown to be largely wrong and misleading.

https://edition.cnn.com/2026/03/30/climate/data-centers-are-...

https://www.theregister.com/2026/04/01/ai_datacenter_heat_is...

https://hackaday.com/2026/04/07/the-heat-island-effect-is-wa...

https://dev.ua/en/news/shi-infrastruktura-pochala-hrity-mist...

https://www.newscientist.com/article/2521256-ai-data-centres...

https://fortune.com/2026/04/01/ai-data-centers-heat-island-h...

You may not believe it but the impact this had on general population was huge. Lots of people took it as true and there seem to be no consequences.


It matters because for medical questions, you [are supposed to] go to a medical professional, and they very much care about and make that distinction.

Which is exactly the problem here; it "used to be" that reasonable people would disbelieve random things they find on the internet at least to some degree. "Media literacy". LLMs don't seem to have that capability, and a good number of people are using LLMs in blissful ignorance of that fact. They very confidently exclaim things that make them sound like experts in the field at question.

Would it have made a difference for the AI data center heat island thing you're quoting? Maybe not. But for medical matters? Most people wouldn't even have caught wind of this odd fake disease. LLMs just amplify it and serve it to everyone.


I agree with you, and I don't think the companies have solved it. I think they should be more skeptical of medical articles in general and be more conservative.

> Which is exactly the problem here; it "used to be" that reasonable people would disbelieve random things they find on the internet at least to some degree. "Media literacy". LLMs don't seem to have that capability, and a good number of people are using LLMs in blissful ignorance of that fact.

I completely disagree with this part. LLMs absolutely have the ability to be skeptical, but skepticism comes at a cost. LLMs did what used to be a reasonable thing: trust articles published in reputed sources. But maybe they shouldn't do that; they should spend more time and processing power on being skeptical.


> LLMs did what used to be a reasonable thing - trust articles published in reputed sources.

That's absolutely not what happened in this case though; neither posts on Medium nor random preprints are reputed sources.


It was published in Preprints.org, a multidisciplinary preprint server run by MDPI.

I'm not an expert here - is it correct to treat anything from there or arXiv with default skepticism?


The definition of a preprint is that it isn't peer reviewed. Unless you're an expert in the field, you IMHO shouldn't be looking at preprints. Might be OK if they come recommended by multiple unaffiliated experts (i.e. kinda half reviewed), but definitely not by default.

I agree, but the public and media outlets don't practice this either: https://news.ycombinator.com/item?id=47716699

Human stupidity? As in allowing so much noise in the cities that people need to protect their minds?

The stupidity that makes depriving one of your senses seem like a sensible thing to do in a busy chaotic environment.

I don’t actually mind people doing that though. What is annoying is the entitled attitude that there should be no consequence for that choice, and everyone else should orbit/compensate around their lack of situational awareness.


Stockholm is a very quiet city, people still wear noise-cancelling headphones all the time.

People don't tend to wear ANC headsets when walking in the forest.

Maybe the issue is the noise in the cities?


There’s more than one issue. It’s not wrong to try to solve one of them.

Some people wear them there.

As perfectly captured in "don't _tend_ to ..."

Play. People love to play.

The culturally assigned meaning to work seems more like a social coercion.

If given the choice, a free choice of mind included, then people will likely choose community and play.


This. So many people are coerced to wear a straitjacket, run the treadmill, and do the rat race, conditioned that "hey work is life, life is work" and is divided into workweek and weekend. For some it may work well, if their job aligns with their interests and passions. Many others are in it with only half their heart. They may show better productivity metrics to the manager class, but what they gain there is lost in 'quality of service' to society. You get automatons at the desk, not interested to handle your edge case. But my, are they 'productive' for the bottom line.

Play is type 1 fun: fun in the moment.

People also love type 2 fun. It's not fun in the moment, but you're happy that you did it.

If your work is type 1, more power to you. A lot more falls under type 2's umbrella. I find writing to be type 2 more often than not. Making complicated designs is often not fun in the moment. I like exercise, but sprint workouts are type 2.


You're forgetting Type 3 Activities, which are neither fun at the time, nor afterwards on reflection.

League of Legends players are very familiar with type 3 activities.

A complete accounting of fun types has to include Type 0 fun as well: fun when it's happening, but not fun later. Drugs, gambling, crime, and most traditional vices fall in this category.

And yet they are a type of fun!
