Hacker News | Towaway69's comments

Is there a site which implements the ideas behind the film Indecent Proposal[1]?

It would be a site where folks could start auctions based on stuff they want from other folks. So Bob wants Jane's jumper. He goes onto the site and creates an auction with an initial offer of $5. Jane is informed that Bob wants to buy her jumper. She turns down the offer. Bob raises his offer to $10. She declines again. Then Cane joins the auction and offers $15. Jane refuses that too.

One can see where this is going. The point is that Jane cannot shut down the auction and anyone can make a bid. The trajectory is that Jane will reach a point where she is forced to make a moral choice (much like in the film). Everyone has their price, even Jane in this case.

It is easy enough to imagine what, besides jumpers, will be placed on the site. It is a highly immoral idea, something akin to cyber mobbing or AI fake porn. So does the site exist already? Asking for a friend.

[1] https://en.wikipedia.org/wiki/Indecent_Proposal


This sounds like a great Black Mirror episode

What the article doesn't touch on is the vendor lock-in that is currently underway. Many corps are now moving to an AI-based development process that is reliant on the big AI providers.

Once the codebase has become fully agentic, i.e., only agents fundamentally understand it and can modify it, the prices will start rising. After all, these loss-making AI companies will eventually need to recoup their investments.

Sure, it will perhaps be possible to swap out the underlying AI used to develop the codebase, but will the alternatives be significantly cheaper? Of course, the invisible hand of the market will solve that problem. Something that OPEC has successfully done for the oil market.

Another issue: once the codebase is agentic and the price of developers falls enough that it becomes significantly cheaper to hire humans again, will they be able to understand the agentic codebase? Is this a one-way transition?

I'm sure the pro-AIs will explain that technology will only get cheaper and better and that fundamentally it ain't an issue. Just like oil prices and the global economy, fundamentally everything is getting better.


I have similar concerns.

We will miss SaaS dearly. I think history is repeating itself, just as with DVDs and streaming: we simply bought the same movie twice.

AI feels more and more the same. Half a year ago Claude Opus was Anthropic's most expensive model. Boy, using Claude Opus 4.6 in the 500k version is like paying 1 dollar per minute now. My once decent budgets get exhausted not after weeks but after days (!) now.
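To put "1 dollar per minute" in perspective, here is a back-of-envelope sketch. The rate and the hours-per-day figure are illustrative assumptions, not vendor pricing:

```python
# Back-of-envelope cost sketch for the "$1 per minute" figure above.
# The rate and usage numbers are illustrative assumptions, not vendor pricing.

def monthly_cost(dollars_per_minute: float, hours_per_day: float, workdays: int = 22) -> float:
    """Estimate a month of interactive LLM usage at a flat per-minute rate."""
    return dollars_per_minute * 60 * hours_per_day * workdays

# At $1/minute, two hours of active use per workday:
cost = monthly_cost(1.0, 2.0)
print(f"${cost:,.0f}/month")  # $2,640/month
```

Even at only two hours of active use per workday, the bill lands in engineer-salary territory for some countries, which is why "days, not weeks" budget exhaustion is plausible.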

And I am not even using agents or subagents, which would only multiply the costs. For what?

So what we arrive at, more and more, is the same as always: low, medium, and luxury tiers. A boring service with different quality and payment structures.

Proof: you cannot compensate with prompt engineering anymore. A month ago you could fix any model discrepancies by being more clever and elaborate with your prompts.

Not anymore. There is a hidden factor now that accounts for exactly that. It seems the reliance on skills and different tiers simply moves us away from prompt engineering, which is increasingly treated as jailbreaking rather than guidance.

Prompt engineering lately became so mundane that I wonder what vendors were really doing with the usage data. It seems vendors tied certain inquiries to certain outcomes modeled by multistep prompting, reduced internally to certain trigger sentences, creating the illusion of having prompted your result when in fact you haven't.

All you did was ask for the same result thousands of users asked for before, and the LLM took a statistical approach to deliver it.


Are you saying we increasingly get ML results and not LLM results?

> we simply bought the same movie twice

Maybe you did, but I certainly didn't.


> 1 dollar per minute

So $60/hour? A plumber earns more.

if this allows you to produce features that bring you money, it's a no-brainer


Plumbers are working on very standard systems, they have to front costs and secure work. It only works for them because enough people use basic plumbing services to sustain them. But how many have return customers and guaranteed work for long term projects?

> Once the codebase has become fully agentic, i.e., only agents fundamentally understand it

What exactly do we mean by this? Because it is obviously common for human coders to tackle learning how an unfamiliar and complex codebase works so that they can modify it (new hires do it all the time). I can see this meaning one of two things:

* The code and architecture being produced by agents take approaches that are abnormally complex or inscrutable to human reviewers. Is that what folks working with cutting-edge agents are seeing? In which case, such code obviously isn't being reviewed; it can't be.

* The code and architecture being produced by agents can still be understood by human reviewers, but it isn't actually being reviewed by anyone, since reviewing pull requests isn't always fun or easy, and injecting in-depth human review slows everything down a lot; and so no one understands how the code works. (I keep thinking about the AI maximalist who recently said he woke up to 75 pull requests from his agent, like that was a good thing.)

And maybe it’s a combination of the two: agent-generated pull requests are incrementally harder to grok, which makes reviewing more painful and take longer, which means more of them go without in-depth reviews.

But if your claim is true, the bottom line is that it means no one is fully reviewing code produced by agents.


Folks are reviewing the code, but the standard shape of a review is a PR. This diff assumes you have an underlying knowledge of the system, one that is most realistically gained by having written the code. Could you “just remember” every diff you’ve seen? Maybe, but I don’t think it’s realistic; we learn far better from doing than from reading.

> What exactly do we mean this? Because it is obviously common for human coders to tackle learning how an unfamiliar and complex codebase works so that they can modify it (new hires do it all the time).

I agree with you, BUT: I find it much harder to get my head around a medium-sized vibe-coded project than a medium-sized bespoke-coded project. It's not even close.

I don't know what codebases will look like if/when they become "fully agentic". Right now, LLM agents get worse, not better, as a codebase grows, and as more of it is coded (or worse, architected) by LLMs.

Humans get better over time in a project and LLMs get worse, and this seems fundamental to the LLM architecture, really. The only real way I see for codebases to become fully agentic right now is if they're small enough. That size grows as the context sizes new models can deal with grow.

If that's how this plays out - context windows get large enough that LLM-agents can work fine in perpetuity in medium or large size projects - I wonder if the resulting projects will be extremely difficult for humans to wrap their heads around. That is, if the LLM relies on looking at massive chunks of the codebase all at once, we could get to the point of fully agentic codebases without having to tackle the problem of LLMs being terrible at architecture, because they don't need it.


And is "model collapse" a thing when LLMs are trained on 100% LLM-generated code? Fun times ahead.

What examples in history can be learned from here?

For your points:

- Garden path approaches are definitely a thing, but I don't think this is necessarily catastrophic. A lot depends on the language and framework in question, and also the driver of the change.

- I think it's that plus the fact it's easy to just generate ever more code. Solutions scale in every dimension until they hit a limit where it's not feasible to go further. If AI tools will allow you to write a project with a million or 10 million lines of code, you can bet it will eventually happen. Who's ever gonna fix that?


No one ever asks how much it costs Facebook or Uber to serve requests because it is irrelevant, they set prices to maximize their profit like any good monopolist. Similarly the future cartel of big providers will charge their captive users whatever they can get away with, not the cost of inference.

The current discourse around "AI", swarms of agents producing mountains of inscrutable spaghetti, is a tell that this is the future the big players are looking for. They want to create a captive market of token tokers who have no hope of untangling the mess they made when tokens were cheap without buying even more at full price.


This is a great point, and I routinely use it as an argument for why seasoned professionals should work hard to keep their skills and why new professionals should build them in the first place. I would never be comfortable leasing my ability to perform detailed knowledge work from one of these companies.

Sometimes the argument lands, very often it doesn't. As you said, a common refrain is, "but prices won't go up, cost to serve is the highest it will ever be." Or, "inference is already massively profitable and will become more so in the future--I read so on a news site."

And that remark, for me, is unfortunately a discussion-ender. I just haven't ever had a productive conversation with somebody after they make these remarks. Somebody saying these things has placed their bets already and is about to throw the dice.


There is no such thing as an agentic codebase. If humans don't understand it, nothing really does. Agents give zero fucks about anything. If they burn 100 or a million tokens to add a feature, they don't care. It's the developer's responsibility to keep it under control.

100% this. With these new tools it's tempting to one-shot massive changesets crossing multiple concerns in preexisting, stable codebases.

The key is to keep any changes to code small enough to fit in your own "context window." Exceed that at your own risk. Constantly exceeding your capacity for understanding the changes being made leads to either burnout or indifference to the fires you're inevitably starting.
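One way to make that "context window" metaphor concrete is a crude guardrail over the diff itself. This is a sketch, not a real tool: the ~4-characters-per-token heuristic and the 2,000-token budget are illustrative assumptions, not calibrated limits.

```python
# Rough guardrail sketch: flag changesets that likely exceed what a reviewer
# can hold in their head. The ~4-chars-per-token heuristic and the 2,000-token
# budget are illustrative assumptions, not calibrated limits.

def estimated_tokens(diff_text: str) -> int:
    """Very rough token estimate for a unified diff (assumes ~4 chars/token)."""
    return len(diff_text) // 4

def review_budget_exceeded(diff_text: str, budget_tokens: int = 2000) -> bool:
    """True when the change is probably too large to review in one sitting."""
    return estimated_tokens(diff_text) > budget_tokens

small = "+def add(a, b):\n+    return a + b\n"
print(review_budget_exceeded(small))        # False
print(review_budget_exceeded("x" * 20_000)) # True
```

Wiring something like this into a pre-push hook or CI check would at least force a conscious decision before merging a changeset nobody can actually hold in their head.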

Be proactive with these tools w.r.t. risk mitigation, not reactive. Don't yolo out unverified shit at scales beyond basic human comprehension limits. Sure, you can now randomly generate entirely new (unverified) software into being, but 95% of the time that's a really, really bad idea. It's just gambling, and likely some part of our lizard brains finds it enticing, but in order to prevent the slopification of everything, we need to apply some basic fucking discipline.

As you point out, it's our responsibility as human engineers to manage the risk reward tradeoffs with the output of these new tools. Anecdotally, I can tell you, we're doing a fucking bad job of it rn.


The big AI projects I've seen at work are...

- A Kafka topic visualization dashboard

and

- A Chrome extension the original "developer" can no longer work on, because the bots will wreck something else with every new feature he tries to add or bug he tries to fix

I think we're a ways out from truly complex code bases that only agents understand.

I've seen a bunch of hype videos where people spend lord knows how much money to have a bunch of these things run around and, I guess, use Facebook and make reports to distribute amongst themselves, and then the human comes in and spends all their time tweaking this system. And then apparently one day it's going to produce _something_, but it's been two years and counting, and much like bitcoin, I've yet to see much of this _something_ materialize in the form of actual, working, quality software that I want to use.

My buddy made a thing that tells him how many people are at the gym by scraping their API and pushing it into a small app package... I guess that's kind of nice.


Lately I also wonder about geopolitical lock-in and the balkanization of the internet. The US won't have this problem, I guess. But with all that's happening in the world right now and the current trends, the rest of us need to think hard about which AI company we trust with our data, or trust to still have access to once we're on the other side of the wall.

> geopolitical lock-in and balkanization of the internet. US won't have this problem I guess

This reminds me of the apocryphal headline from the dying days of the British Empire:

> Fog in Channel; Continent Cut Off


If only the AI understands your code, then vendor lock-in and exposure to price hikes will be the least of your problems. I don't think that you will be able to add Claude as the Dev-On-Call to your pagerduty schedule. If you are in an industry that requires due diligence and you get sued for bugs that cause material damage and human suffering, then I don't think the "blame it on Claude" defense is going to land well in court. I cover these topics on https://www.exploravention.com/blogs/soft_arch_agentic_ai/ which is a blog I wrote recently.

I'm beginning to develop the opinion that the next step in this process will (or at least should) be local and/or self-hosted inference.

The latest qwen models are already very useful, and the smaller ones can be run locally on my laptop. These are obviously not as good as the latest frontier models, and that's extremely noticeable for the development workflow, but maybe in a year or two, they will be competitive with the proprietary models we have today, which are incredibly capable. I also expect compute for inference to continue getting cheaper.

The current lock in for me is the UX of Claude Code / codex cli, but this is a very small moat that will definitely be commoditized soon.


I've been thinking this for a while now as well. If they keep subsidizing for long enough, there might be a large gap of humans who changed jobs or didn't get into the field in the first place. Then the only way out is to keep buying those tokens.

What do you mean about vendor lock-in? I haven’t yet seen any meaningful barriers to switching between different companies’ coding agents. Are you talking about AI market lock-in and not vendor-specific lock-in?

> these loss making AI companies will eventually need to recoup

This is true, and while AI spend continues to rise, I'm starting to think that once the dust settles, the true costs emerge, and stable profits are achieved, it may be expensive enough to be a limiting force.


Then you aren’t a true vibe coder using replit

I think it will be more similar to the cloud. I remember people predicted that once you move to the cloud, you'll realize how expensive it actually is, but the cost of migration back will be high. While, yes, the cloud is expensive, most people realized that it is kinda worth it.

Code is so low entropy that smaller and more economical models will be up to the task the same as gigantic models from big providers are today.

No worries there: the huge improvements we see today from GPT and Claude are at their heart just reinforcement learning (chain-of-thought and thinking tokens are just one example of many). RL is the cheapest kind of training one can perform, as far as I understand. Please correct me if that's not the case.

In the economy the invisible hand manages to produce everything cheaper and better all the time, but in the digital space the open source invisible hand makes everything completely free.


> the open source invisible hand makes everything completely free.

In this case the limitation is compute. Very few people have the compute required to run AI/LLMs locally or for free (at performance comparable to Claude). So yes, there are plenty of open-source models that can be used locally, but you need to invest in hardware to make that happen, especially if you want the quality that is available from the commercial offerings.

Not to speak of the training of those models. It's all there to make it possible to do this locally, but where's the hardware? AWS? Google? There are hidden costs to the open-source model in this case.
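A rough back-of-envelope shows why the hardware investment is non-trivial. These are weights-only estimates, ignoring KV cache and runtime overhead; the 32B parameter count is an illustrative assumption, roughly the size of mid-range open coding models:

```python
# Rough memory estimate for hosting an open-weights model locally.
# Weights only: ignores KV cache, activations, and runtime overhead.

def weights_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate memory for model weights at a given quantization level."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

# An illustrative 32B-parameter model:
print(f"fp16:  {weights_gb(32, 16):.0f} GB")  # fp16:  64 GB
print(f"4-bit: {weights_gb(32, 4):.0f} GB")   # 4-bit: 16 GB
```

Even aggressively quantized, that puts you into workstation-GPU or high-memory-laptop territory, which is exactly the upfront hardware cost hiding behind "free" open-source models.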


>In this case the limitation is the compute.

I agree with most of your points, but computation can be transferred from a place where energy is cheap to a place where it is expensive. Energy for cooking cannot be transferred that way.

See for example the Amazon and Google datacenters in the Gulf region. We've also got a whole continent, Australia, on which to put as many solar panels as we desire. Australia goes dark for half of every day? Put solar panels on the opposite side of the planet.

Energy is a concern, for cooking, transportation etc. Energy for computation is not.


This is a good point. Some of the AI companies are trying to hook CS students so they'll only know "dev" as a function of their products. The first one's free, as they say (the drug dealers).

I agree; the great danger is that CS students aren't even taught the fundamentals of "computer science" any longer. It would be the equivalent of physics students not learning Newton's laws or E = mc².

Probably there is an issue with how much there is in CS - each programming language basically represents a different fundamental approach to coding machines. Each paradigm has its application, even COBOL ;)

Perhaps CS has not - yet - found its fundamental rules and approaches. Unlike other sciences that have hard rules and well trodden approaches - the speed of light is fixed but not the speed of a bit.


Oil market doesn't have an equivalent of open-source LLMs, self-hosting and cloud providers.

"Just like oil prices and the global economy, fundamentally everything is getting better." (implied /s)

I remember having to pay a pretty penny to have a 3 minute conversation with my dad working half way across the world. Now I can video call my nephew for 45 minutes without blinking an eye. What happened?

Why will Intelligence be like Oil and not Broadband?


> the prices will start rising. After all, these loss making AI companies will eventually need to recoup on their investments.

I would bet a lot of money that the price of LLM assistance will go down, not up, as the hardware and software advance.

Every genre-defining startup seems to go through this same cycle where the naysayers tell us that it's all going to collapse once the investment money runs out. This was definitely true for technologies without use cases (remember the blockchain-all-the-things era?) but it is not true for businesses that have actual users.

Some early players may go bust by chasing market share without a real business plan, like the infamous Webvan grocery delivery service. But even Webvan was directionally correct, with delivery services now a booming business sector.

Uber is another good example. We heard for years that ridesharing was a fad that would go away as soon as the VC money ran out. Instead, Uber became a profitable company and almost nobody noticed because the naysayers moved on to something else.

AI is different because the hardware is always getting faster and cheaper to operate. Even if LLM progress stalled at Opus 4.6 levels today, it would still be very useful and it would get cheaper with each passing year as hardware improved.

> I'm sure the pro-AIs will explain that technology will only get cheaper and better and that fundamentally it ain't an issue. Just like oil prices

Comparing compute costs to oil prices is apples to oranges. Oil is a finite resource that comes out of the ground and the technology to extract it doesn't improve much over decades. AI compute gets better and cheaper every year because the technology advances rapidly. GPU servers that were as expensive as cars a few years ago are now deprecated and available for cheap because the new technology is vastly faster. The next generation will be faster still.

If you're mentally comparing this to things like oil, you're not on the right track


> almost nobody noticed

Rideshare costs are much higher than they have been in years past. Everyone noticed


> Oil is a finite resource that comes out of the ground

Yes but the chips, hardware, copper cables, silicon and all the rest of the components that make up a server are finite. Unless these magically appear from outer space, we'll face the same resource constraints as everything else that is pulled out of the ground.

These components are also far more fragile to source; see COVID and the collapse of global supply chains. The factories that create these components are also expensive to build and fragile to maintain. See the Dutch company that seems to be the sole supplier of certain manufacturing capabilities.[1]

> I would bet a lot of money that the price of LLM assistance will go down, not up, as the hardware and software advance.

My bet would be that it fuels the profits of AI companies rather than bringing the price of AI down. Oversupply makes prices come down, but if supply is kept artificially low, prices stay high.

That's the comparison to OPEC and oil. There is plenty of oil to go around yet the supply is capped and thereby prices kept high. There is no guarantee that savings in hardware or supply will be passed on by AI corps.

Indeed there is no guarantee that there will be serious competition in the market; OPEC is a cartel, so why not an AI cartel? At the moment, all major players in AI are based in the same geopolitical sphere, making a monopoly more likely, IMHO.

In the end, it's all speculation about what will happen. It just depends on which fairy tale one believes in.

[1]: https://en.wikipedia.org/wiki/ASML_Holding


> Yes but the chips, hardware, copper cables, silicon and all the rest of the components that make up a server are finite. Unless these magically appear from outer space, we'll face the same resource constraints as everything else that is pulled out of the ground.

Raw material cost is not a driver of datacenter GPU costs.

> Over supply makes price come down but if supply is kept artificially low, then prices stay high.

Where are you getting "supply kept artificially low" when we're in the middle of an explosion of datacenter buildouts and AI companies?

We're in a race to the bottom on pricing. I haven't seen a realistic argument for why you think prices are going to go up. You're starting with a conclusion and trying to find reasons it might be true.


> Where are you getting "supply kept artificially low"

If a resource is controlled by a small group of coordinated actors (for example, the large US-controlled corporations who have/are these datacenters), the resource may be limited artificially, because access to it is controlled by said corporations.

Exploding numbers of datacenters and AI companies, yes, but true competition, probably not. Most AI companies use the datacenters of said corporations; if those corporations decide that compute costs one cent more, then all AI providers become more expensive.

What we should learn from OPEC and oil is that it is not resource amounts that define the price; it is access to the resource.


While I fundamentally agree with the premise of compute getting cheaper by the year, I think a missed consideration here is that these models also require exponentially more compute to train with each iteration, in a way that arguably has outpaced the advances in compute.

Whether a generalized and broadly usable model can be trained within some N multiple of our current compute availability, allowing the price to come down with iterative compute advances, is yet to be seen. With the current race to the top in SOTA models and the increasingly marginal improvements over previous generations, I have a feeling the scaling need for compute will outpace the improvements in our hardware architecture, and that's if Moore's law even holds as we start to reach the bounds of physics rather than engineering.

However, as it stands today, essentially none of these providers are profitable, so it's really a question of whether that disconnect resolves within their current runway, or whether they'll be required to raise their price point to stay alive and/or raise more capital. It's pure conjecture either way.


At the same time, systems have become far more complex. Back when version control was crap, there weren't a thousand APIs to integrate and a million software package dependencies to manage.

Sure everything seems to have gotten better and that's why we now need AIs to understand our code bases - that we created with our great version control tooling.

Fundamentally we're still monkeys at keyboards just that now there are infinitely many digital monkeys.


Perrow’s book Normal Accidents postulates that, given advances which could improve safety, people just decide to emphasize throughput, speed, profits, etc. He turned out to be wrong about aviation (it got much safer over time) and maritime shipping (there was a perception of a safety crisis in the late 1970s, with oil tankers exploding; now you just hear about the odd exceptional event).

> Perrow argues that multiple and unexpected failures are built into society's complex and tightly coupled systems, and that accidents are unavoidable and cannot be designed around.[1]

This is definitely something that is happening with software systems. The question is: is having an AI that is fundamentally undecipherable in its intentions extend these systems a good approach? Or is slowing down and fundamentally trying to understand the systems we have created a better approach?

Has software become safer? Well, planes don't fall from the sky, but the number of zero-day exploits built into our devices has vastly increased. Is this an issue? Does it matter that software is shipped broken, only to be fixed with the next update?

I think it's hard to have the same measure of safety for software. A bridge is safe because it doesn't fall down. Is email safe when there are spam and phishing attacks? Fundamentally email is a safe technology; it just allows attacks via phishing. Is that an email safety problem? Probably not, just as someone having a car accident on a bridge is generally not the fault of the bridge.

I think that we don't learn from our mistakes. As developers we tend to gloss over the accidents of our software. When was the last time a developer was sued for shipping broken software? When was the last time an engineer was sued for building a broken bridge? Notice that there is an incentive for an engineer to build better and safer bridges; for developers those incentives don't exist.

[1]: https://en.wikipedia.org/wiki/Normal_Accidents


The other day I was thinking about how the stupid little things in the JavaScript ecosystem, where you have to change your configuration file "just because", are a real billion-dollar mistake, and speculating that I could sue some of the developers in small claims court.

Right away I scoffed when I heard people had 20 agents running in parallel, because I've been at my share of startups with 20-person teams that tend to break down somewhere between:

- 20 people who get about as much done as an optimal 5-person team, with a lot more burnout and backlash

- There is a sprint every two weeks but the product is never done

and people who are running those teams don't know which one they are!

I'm sure there are better teams out there, but even one or two SD north of the mean you find that people are in over their heads. All the ceremony of agile hypnotizes people into thinking they are making progress (we closed tickets!), have a plan (sprint board!), and know what they are doing (user stories!).

Put on your fieldworker hat, interview the manager about how the team works [1] and the state of the code base, and compare that to the ground truth of the code, and you tend to find the manager's mental model is somewhere between "just plain wrong" and "not even wrong". Teams like that get things done because there are a few members, maybe even dyads and triads, who know what time it is and quietly make sure the things that are important-but-ignored-by-management are taken care of.

Take away those moral subjects and eliminate the filtering mechanisms that make that 20-person manager better than average, and I can't help but think 'gas town' is a joke that isn't even funny. Seems folks have forgotten that Yegge used to blog that he owed all his success in software development to chronic cannabis use, like if it wasn't for all that weed there wouldn't be any Google today.

[1] I'll take even odds he doesn't know how long the build takes!


> Seems folks have forgotten that Yegge used to blog that he owed all his success in software development to chronic cannabis use, like if it wasn't for all that weed there wouldn't be any Google today.

I remember a lot of Steve Yegge's impressive claims from back when he and Zed Shaw were what I would call "fringe contemporaries" in the early 2010s - like all the time he spent gassing on about his unmaintainable, barely usable nightmare of a Javascript mode for Emacs. (I did like the MozRepl integration, for what that's worth.)

I don't particularly recall him talking about smoking pot, and I think I would have, if he'd been as memorably effusive there as about js2-mode. But it's been a lot of years and I couldn't begin to remember where to look for an archive of his old blog. Would you happen to have a link?


The most obvious one is this brilliant piece on complexity:

https://steve-yegge.blogspot.com/2009/04/have-you-ever-legal...

It doesn't match OP's description, but it certainly fits talk about his pot use.

There may be others.


I remember thinking of him as a skillful writer and a sometimes incisive thinker, back then. Apparently my taste has significantly improved in the interim; for a piece ostensibly about complexity, this is an embarrassingly superficial analysis from priors that already don't make any sense.

I'm not going to knock a guy today based on an almost twenty-year-old piece, especially on subjects (cannabis legalization, the quality and direction of Obama administration policy initiatives) that were widely misunderstood at the time, including by such luminaries as the Nobel committee. But Yegge really wasn't starting from so strong a position as I had misrecalled. Thanks for the link.


I haven't read it in at least ten years myself - maybe it's not as good as I recall.

I do remember that I appreciated his grasp of the fact that if you aren't deep in the weeds, you really cannot understand just how complex a system really is.

I also appreciated the slow build to the actual point, which I think could help people who wouldn't hear a direct explanation understand what he was getting at.

"'Shit's Easy' syndrome" is real, and I wonder if the prevalence of LLMs doing the scutwork will lead to an entire generation of programmers who suffer from it.


Well, sure. Trying to plan events at incomprehensibly large scale is like that, as the 20th century collectivist states failed largely in consequence of too late discovering. You have to retain a sense of scale in these things, not to say humility. Meanwhile, cannabis legalization in the US proceeds apace as a fifty-state patchwork, with simple possession still a major felony some places, while commercial distribution in others is a wholly legitimate storefront affair, and someone will eventually reap a small political windfall through federal recognition of the situation in being. No one is really planning anything. It is the assumption someone must that I'm criticizing, because for all the decades of planning indulged by the interminable old-times legalization advocates, their desideratum in practice looks nothing like they ever came close to seriously imagining or predicting.

To his dubious credit, I think Yegge has in the interim learned this lesson, possibly at the cost of some others. Looking at his "Gas Town" makes the hair stand up on the back of my neck, not least for that I once had ferrets and I know what chaos they embody and wreak (and how f—ing expensive they are!); I'm sure he was intentional in his choice of the metaphor, but he's always been one of those for whom consensus reality and good sense are likewise mostly optional. So in entire fairness I have to admit I really can't see any just criticism that he's planning too much these days. But the value in such a swing from one extreme to another, versus something more closely resembling moderation, charitably has yet to be demonstrated.

(As a programmer of both fintech and actual finance experience, btw, it's very comical to me to see the Big Design Up Front approach being applied in this way to this specific example, precisely because it so little resembles how anyone genuinely approaching the task does so. It is very much how I would expect the Google of 2009 to look at things. It isn't that much like how a bank or a startup does. But I said I wasn't going to beat up on old work, and I can't pretend I had so broad a perspective myself so long ago.)


Good points.

I was similarly appalled and shocked at Gas Town. Maybe something like it is the future, but I really didn't expect Yegge to be a genAI booster.

If Gas Town has "the Quality Without a Name," I will eat my hat.

https://sites.google.com/site/steveyegge2/tour-de-babel


Oh, God, spare me from the architect who must be sure he is seen to be one with the Tao. Its name is 无为 (wu wei, "effortless action"), and Emacs, which I have used exclusively since 2010, does not "have" it, although a given human Emacs user may. (But see previously my comments with respect to js2-mode; Yegge's enthusiasm of the moment notwithstanding, he was at least not then the most obviously reliable judge.)

It isn't something that can exist in the absence of consciousness, because only in the presence of consciousness can it not exist. I grant some computer programs sensu lato may conceivably experience qualia, but even today would be taken sorely aback to discover Emacs among them.


> planes don’t fall from the sky

Boeing would like a word (; https://en.wikipedia.org/wiki/Maneuvering_Characteristics_Au...


> that's why we now need AIs to understand our code bases

I don't need an AI to understand my code base, and neither do you. You're smarter than you give yourself credit for.


The better processes and tools made larger projects possible.

Why? What happened to the magic bar? There are many things Apple changed their minds on.

I won’t be surprised if there is a rollback in 27 (I’m hoping there will be - else I'm going to buy a retro Mac with a magic bar once nothing but Tahoe runs).


No need for a retro Mac. macOS Sequoia runs great on all but the very newest Macs, and is still getting maintenance updates. And unlike with iPhones, it's perfectly possible to downgrade Macs back to older macOS versions if you want.


They can't frame it as a rollback, at least. They'll have to sync it up with a new OS release or something as the "new" best thing.


There are people who have OCD and can’t help but see these things. It’s great for coding and spotting minor changes, but it’s shit for real life - trust me.

The number of times an auto-update of some app has caused the thought process “but that wasn’t like that yesterday… or was it… hm… oh, it was an update”. Just minor things, small and mostly unnoticeable if you don’t have an “eye for detail”.


That’s not OCD, it’s just paying attention to detail.


True, it's not OCD but in combination with OCD you can get into strange thought loops.


> There are people who have OCD and can’t help but see these things. It’s great for coding and spotting minor changes, but it’s shit for real life - trust me.

I don't have OCD, but easily notice inconsistencies in various design choices these mega-corporations continue to fumble.

It's less "OMG I can't focus on coding because Calculator and TextEdit aren't sharing the same border radius" but more "The UX/UI department seems like they're on perpetual vacation if Apple is letting simple things like this slip through", and this specific case is just an example, every version of macOS seems to get worse when it comes to consistency.


Aren't there blockchain agents? Surely there must be agents running on the blockchain as smart contracts.


I wonder on what scale the cosmic timeframe counts as recent.

It's turtles all the way down ....

;)


I guess for the author it's learning about how Linux can be ported to the browser. For us, it's more of a nice amusement.

But then again, I've never understood why Buddhist monks create sand mandalas[1] and then let them be blown away (the mandalas not the monks!).

I think one should see it from the author's PoV instead of thinking "what's in it for me". If I were to use this, it would be to create digital sand mandalas in the browser! ;)

[1]: https://en.wikipedia.org/wiki/Sand_mandala


Germany has a great layer of "consultants" who fudge the books and make everything look profitable and rosy. It's the land of "Arbeitsgruppen" (working groups) and "Berater" (consultants) - folks who ensure things get buried and forgotten.

But there is no investment in the future, no investment in infrastructure, and no investment in anything creative. In fact, that's where cuts are made: in the arts and culture.

Once a society can no longer afford the arts, you know something is going wrong - and Germany is going wrong. Perhaps it's "klagen auf hohem Niveau" (complaining at a high level), but the higher they are, the further they fall.


Not forgetting the VW "Abgasskandal"[1].

Fake it 'til you make it, I guess. Or in this case make it fake 'til you fail.

[1]: (de) https://de.wikipedia.org/wiki/Abgasskandal (en) https://en.wikipedia.org/wiki/Volkswagen_emissions_scandal

