Hacker News | peterlk's comments

Yes. Other humans are generally accepting of mistakes below some frequency threshold, and frontier models are very robust in my experience

Have fun annoying a ton of people, and also, getting prompt injected on a weekly basis and leaking who knows what from your inbox.

I think this article conflates (at least) two problems.

The first is that very few people (especially rich people) are anonymous. A motivated person who has had a psychotic break can be very dangerous, and if you’re even a little bit famous, the probability of that happening to you goes up substantially.

The second issue is the one that everyone is getting riled up about - wealth inequality.

These are distinct issues, and I think it does harm - in the form of polarization - to not explicitly call them out.


Shameless plug: we're working on something like this at thismachine.ai. It's still early, but I'm interested in getting feedback. The Slack/chat part is still behind a feature flag. Let me know if you want to use it.

The solution is parents! Stop making your bad parenting my problem!


How has it become your "problem"? Do you believe everyone should be able to get into any location anywhere worldwide without screening?


If you believe that all parents are intelligent, informed, and put their children's well-being before everything, you are unfortunately wrong about society. Kids don't deserve to suffer just because they have neglectful parents.

Discord, on the other hand, should be at least somewhat responsible for the interactions of children (which they profit off of) on their platform.

And finally, you, a sentient adult with free will, can use another platform. Not your problem unless you want to make it yours, which is the response of choice on this thread.


I have been having this conversation more and more with friends. As a research topic, modern AI is a miracle, and I absolutely love learning about it. As an economic endeavor, it just feels insane. How many hospitals, roads, houses, machine shops, biomanufacturing facilities, parks, forests, laboratories, etc. could we build with the money we’re spending on pretraining models that we throw away next quarter?


I have to admit I'm flip-flopping on the topic, back and forth from skeptic to scared enthusiast.

I just made an LLM recreate a decent approximation of the file system browser from the movie Hackers (similar to the SGI one from Jurassic Park) in about 10 minutes. At work I've had it do useful features and bug fixes daily for a solid week.

Something happened around New Year's 2026. The clients, the skills, the MCPs, the tools, and the models reached some new level of usefulness. Or maybe I've just been lucky for a week.

If it can reliably do things like what I saw last week, then every tool, widget, utility, and library currently making money for a single dev or small team of devs is about to get eaten. Maybe even applications like Jira, Slack, or even Salesforce or SAP can be made in-house by even small companies. "Make me a basic CRM".

Just a few months ago I found it mostly frustrating to use LLMs, and I thought the whole thing was little more than a slight improvement over googling info for myself. But the past week has been mind-blowing.

Is it the beginning of the Star Trek ship computer? If so, it is as big as the smartphone, the internet, or even the invention of the microchip. And then the investments make sense in a way.

The problem might end up being that the value created by LLMs will have no customers when everyone is unemployed.


Yeah, I’m having a similar experience. I’ve been wanting a standard test suite for JMAP email servers, so we can make sure all JMAP server implementations follow the (somewhat complex) spec in a consistent manner. I spent a single day prompting Claude Code on Friday, and walked away with about 9000 lines of code containing 300 unit tests for JMAP servers. And a web interface showing the results. It would have taken me at least a week or two to make something similar by hand.

There are some quality issues - I think some of the tests are slightly wrong. We went back and forth on some ambiguities Claude found in the spec, and how we should actually interpret what the JMAP spec is asking. But after just a day, it’s nearly there. And it’s already very useful to see where existing implementations diverge in their output, even if the tests sometimes don’t correctly identify which implementation is wrong. Some of the test failures are 100% correct - it found real bugs in production implementations.

Using an AI to do weeks of work in a single day is the biggest change in what software development looks like that I’ve seen in my 30+ year career. I don’t know why I would hire a junior developer to write code any more. (But I would hire someone who was smart enough to wrangle the AI). I just don’t know how long “ai prompter” will remain a valuable skill. The AIs are getting much better at operating independently. It won’t be long before us humans aren’t needed to babysit them.


So what'd your prompt look like, out of curiosity? I hear about all these things that sound quite impressive, but no one ever seems to want to share any info on the prompts to learn or gain insight from.


It was nothing special. I can't seem to pull up the initial prompt, but it was something like this:

> Build a thorough test suite for JMAP in this directory. The test suite will be run against multiple JMAP servers, to ensure each server implements the JMAP spec consistently and correctly. In this directory are two files - rfc8620.txt and rfc8621.txt. These files contain the JMAP core and JMAP email specs. Read these files. Then make a list of all aspects of the specifications. For each, create a set of tests which thoroughly tests all aspects of a JMAP server's behaviour specified by the RFCs, including error behaviour. The test suite should be configurable to point at a JMAP server & email account. The account will contain an empty mailbox (error if it's not empty). The test suite starts by adding a known set of test emails to the account, then runs your tests and clears the inbox again. Write the test suite in typescript. The test runner should output the report into a JSON file. Start with a project plan.

If you haven't tried claude code or openai's codex, just dive in there and give it a go. Writing a prompt isn't rocket science. Just succinctly say the same things you'd say to a very competent junior engineer when you want to brief them on some work.


Was this written by an LLM?


No.


Sorry just asking. I used to review CVs part time. It used to be pretty clear who was embellishing their CV with AI and who was not. There was also a writing style that came with using AI.

My spider sense is tingling!


:( I get this from time to time, where people think my writing is at least partially AI generated. I wonder if it’s worth trying to change my writing style to avoid the stigma. If I hand wrote a CV, it would suck to be dumped in with AI generated slop. More small pithy sentences? or throw in some spelling and grammar mistakes? Bleh.


Honestly, that is why I let errors in my writing go now… to sound human.


My team of 6 people has been building software to compete with an already established piece of software written by a major software corporation. I'm not saying we'll succeed; I'm not saying we'll be better, nor that we will cover every corner case they do and have learned over the past 30 years. But 6 senior devs are getting stuff done at an insane pace. And if we can _attempt_ to do this, which would have been unthinkable 2 years ago, I can only wonder what will happen next.


> My team of 6 people has been building software to compete with an already established piece of software written by a major software corporation.

How long until the devs at that major corporation start using an LLM? Do you think your smaller team can still compare to their huge team?


If the goal is simply to undercut the incumbent with roughly the same product, then it doesn't really matter if the incumbent starts using LLMs too, as their cost structure, margin expectations, etc. are already relatively set.


Of course they can. If you've ever set foot inside big tech, you'll know the bottleneck is not dev output.


100% - which is what I'm telling everyone. I am in big tech, and it doesn't matter that I can write in 5 minutes what I used to write in 1 week. Meetings, reviews, design docs, politics, etc. mean how much code is written is irrelevant. Productivity in big tech is pretty low because of organizational overhead. You just can't get anything done. Being able to get more work done with fewer people is the real game changer, because fewer people don't suffer from those "coordination headwinds".


Bingo. Most of my employees come from big tech (not FAANG, but big corps) where they felt they couldn't really deliver what they wanted and what they're capable of. These guys love not just to code, but to create and deliver stuff.


Yeah, I’m curious how much the moat of big software companies will shrink over the next few years. How long before I can ask a chatbot to build me a Windows-like OS from scratch (complete with an office suite) and it can do a reasonable job?

And what happens then? Will we stop using each other's code?


I agree with you, and share the experience. Something changed recently for me as well, where I found the mode to actually get value from these things. I find it refreshing that I don't have to write boilerplate myself or think about the exact syntax of the framework I use. I get to think about the part that adds value.

I also have the same experience where we rejected a SAP offering with the idea to build the same thing in-house.

But... aside from the obvious fact that building a thing is easier than using and maintaining the thing, the question arose if we even need what SAP offered, or if we get agents to do it.

In your example, do you actually need that simple CRM or maybe you can get agents to do the thing without any other additional software?

I don't know what this means for our jobs. I do know that, if making software becomes so trivial for everyone, companies will have to find another way to differentiate and compete. And hopefully that's where knowledge workers come in again.


Exactly. I hear this "wow finally I can just let Claude work on a ticket while I get coffee!" stuff and it makes me wonder why none of these people feel threatened in any way?

And if you can be so productive, then where exactly do we need this surplus productivity in software right now, when we're no longer in the "digital transformation" phase?


I don't feel threatened because no matter how tools, platforms and languages improved, no matter how much faster I could produce and distribute working applications, there has never been a shortage of higher level problems to solve.

Now if the only thing I was doing was writing code to a specification written by someone else, then I would be scared, but in my quarter century career that has never been the case. Even at my first job as a junior web developer before graduating college, there was always a conversation with stakeholders and I always had input on what was being built. I get that not every programmer had that experience, but to me that's always been the majority of the value that software developers bring, the code itself is just an implementation detail.

I can't say that I won't miss hand-crafting all the code, there certainly was something meditative about it, but I'm sure some of the original ENIAC programmers felt the same way about plugging in cables to make circuits. The world of tech moves fast, and nostalgia doesn't pay the bills.


> there has never been a shortage of higher level problems to solve.

True, but whether all those problems are SEEN as worth chasing business-wise is another matter. The short term is what matters most for individuals currently in the field, and in the short term fewer devs are needed, which leads to a drop in salaries and higher competition. You will have a job, but if you explore the job market you will find it much harder to get a job you want at the salary you want without facing huge competition. At the same time, your current employer might be less likely to give you salary raises because they know your bargaining power has decreased due to the job market conditions.

Maybe in 40 years' time, new problems will change the job market dynamics, but you will likely be near retirement by then.


Smart devs know this is the beginning of the end of high-paying dev work. Once the LLMs get really good, most dev work will go to the lowest bidder. Just like factory work did 30 years ago.


Not even factory work, classic engineering jobs in general. SWE sucked all the air out of the engineering room, because the pay/benefits/job prospects were just head and shoulders better.

We had a fresh-out-of-school EE hire who left our company for an SWE position 6 months into his job with us, for a position that paid the same (plus full remote with a food stipend) as our Director of Engineering. A 23-year-old getting an offer above what a 54-year-old with 30 years' experience was making.

For a few years there, you had to be an idi...making sub-optimal decisions, to choose anything other than becoming a techie.


I think it’s the end of low-paying dev work. If I were in one of the coding sweatshops, I would be thinking hard.


Then what's the smart dev plan, sit at the vibe-coding casino until the bossman calls you into the office?


Make as much money as you can while you still can before the bottom falls out. Or go work for one of the AI companies on AI. Always better to sell picks and shovels than dig for gold. Eventually the gold runs out where you are.


Exactly, it will be a CodeUber: we just pick the task from the app and deliver the results ))


I thought AI would already automate that part; I expect to just drive an actual Uber.


Become a plutocrat, or be useful to plutocrats. I don't have the moral flexibility for the former, but plutes tend to care about their images, legacies, and mewling broods. A clever person can find a way to be the latter.


Lots of dreamers here, yet Vanguard reports 4x job and wage growth in the 100 jobs most exposed to AI.


Bit naive to think that positive pattern will hold for the next ten years or so, or whatever time is left between now and your retirement. And arguably, the later that positive pattern changes, the worse it is for you, because retraining as an older person has its own challenges.


Oh please, SAP doesn't exist only because writing software is not free or cheap.


It seems like every quarter or two, I hear a story just like yours (including the <<Wow! We've quietly passed an inflection point!>> part).

What does that tell me?

It tells me that I shouldn't waste my time with a tool that's going to fundamentally change in three to six months; that I should wait until I stop hearing stories like this for a good, long while. "But you're going to be left behind!", yeah, maybe. But. I've been primarily a maintenance programmer for a very long time. The "bleeding edge" is where I am very, very rarely... and it seems to work out fine.

New tools that are useful are nice. Switching to a radically different tool every quarter or two? Not nice. I've got shit to do.


I don't see the interface changing much in 3-6 months, and definitely not fundamentally.

Sure, there will probably be some changes around MCP, skills, AGENTS.md and similar, but I don't see them as big changes, and you can use the tools now without those things.


> I don't see the interface changing much in 3-6 months, and definitely not fundamentally.

This is as insightful as a fellow noting that both a caulk gun and a shotgun have a fixed handle and movable trigger and genuinely wondering why an expert user of the former would ever have even a moment's trouble learning to use the latter.


I have not had the success you mention with programming… I still feel like I have to hold its hand all the way.

Regardless...

> The problem might end up being that the value created by LLMs will have no customers when everyone is unemployed.

This mentality is why investors are scrambling right now. It’s a scare tactic.


> The problem might end up being that the value created by LLMs will have no customers when everyone is unemployed.

I'm not a professional programmer, but I am the I.T. department for my wife's small office. I used ChatGPT recently (as a search engine) to help create a web interface for some files on our intranet. I'm sure no one in the office has the time or skills to vibe code this in a reasonable amount of time. So I'm confident that my "job" is secure :)
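For the curious, the core of that kind of intranet page really is small. A sketch of a minimal file-index renderer (not the actual code from the comment; names and structure are made up):

```python
import html
import os

def render_file_index(directory):
    # Build a minimal HTML listing of the files in a directory --
    # roughly the kind of intranet file page described above.
    rows = "".join(
        f'<li><a href="{html.escape(name)}">{html.escape(name)}</a></li>'
        for name in sorted(os.listdir(directory))
    )
    return f"<html><body><ul>{rows}</ul></body></html>"
```

Served behind something like Python's built-in `http.server`, this is a one-afternoon project even without an LLM; the point of the comment stands, though, that most offices have no one with the time to do even that.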


> I'm sure no one in the office has the time or skills to vibe code.

The thing you are describing can be vibe coded by anyone. It's not that teachers or nurses are gonna start vibe coding tomorrow, but the risk comes from other programmers outworking you to show off to the boss. Or companies pitting devs against each other, or mistakenly assuming they require very few programmers, or PMs suddenly starting to vibe code when threatened for their jobs.


I have to admit the last 6-8 weeks have been different. Maybe it’s just me realizing the value in some of these tools…


>As a research topic, modern AI is a miracle, and I absolutely love learning about it. As an economic endeavor, it just feels insane. How many hospitals, roads, houses, machine shops, biomanufacturing facilities, parks, forests, laboratories, etc. could we build with the money we’re spending on pretraining models that we throw away next quarter?

This is the wrong way to look at it. The right way is to consider that AI investments generate (taxable) economic activity that your government can use to build "hospitals, roads, houses, machine shops, biomanufacturing facilities, parks, forests, laboratories".


They pay very little tax, and most of the cash is going into datacenters and electricity, which provide very little long-term employment. LLMs can do some amazing things, but at the same time they're setting mountains of cash on fire to nudify random women on Twitter and generate more spam than we could've ever imagined possible.


Not so much when there's a race to the bottom for which municipalities and states can offer the most tax breaks.


People working at those facilities still pay income tax to the municipality no matter how much of a discount the business gets. People buying AI tokens/subscriptions pay VAT to the municipality where they reside.


Not many. Money is not a perfect abstraction. The raw materials used to produce $100B worth of Nvidia chips will not yield you many hospitals. An AI researcher with a $100M signing bonus from Meta ain't gonna lay you much brick.


It's not about the consumption of raw materials or repurposing of the raw materials used for chips. peterlk said:

> How many hospitals, roads, houses, machine shops, biomanufacturing facilities, parks, forests, laboratories, etc. could we build with the money we’re spending on pretraining models that we throw away next quarter?

It's about using the money to build things that we actually need and that have more long-term utility. No one expects someone with a $100M signing bonus at Meta to lay bricks, but that $100M could be used to buy a lot of bricks and pay a lot of bricklayers to build hospitals.


I think it's a mistake to believe that this money would exist if it were to be spent on these things. The existence of money is largely derived from society-scale intention, excitement, or urgency. These hospitals, machine shops, etc., could not manifest the same amount of money unless packaged as an exciting society-scale project by a charismatic and credible character. But AI, as an aggregate, has this pull, and there are a few clear investment channels into which to pour this money. The money didn't need to exist yesterday; it can be created by pulling a loan from (ultimately) the Fed.


Those companies were each sitting on ~$50-100B in cash even before the AI boom.


I mean, you're just talking about spending money. Google isn't trying to build data centers for fun. These massive outlays are only there because the folks making them think they will make much more money than they spend.


Seems like the main issue is that taxes in America are far too low.


Again, people confuse paper wealth and material assets. If you take half the money of the 0.001%, people imagine there will be a material change in the world of atoms, but that's not true. You can't take an $8M Richard Mille watch and build an apartment building. We are mostly resource-constrained. There are no material assets to convert all the paper wealth into. Tesla's physical assets are like 5% of Tesla's market cap; the rest is cultish belief in Elon. You can't convert that into a hospital. It's trivial to observe that on the AI side there is an unlimited amount of $ available, and yet companies are supply-constrained on the atoms side, from gas turbines having 3-4 year lead times to ASML running a 24/7 production cycle and still being unable to meet demand.


You can tax wealth, assets, and paper wealth as well. Some countries, like Switzerland, do it. The annual tax is 0.05-0.3%, and that is what billionaires should pay to society.
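At those rates, the arithmetic is easy to sanity-check:

```python
def annual_wealth_tax(net_worth, rate):
    # Swiss-style wealth tax: a flat annual percentage of net worth.
    # `rate` is a fraction, e.g. 0.003 for the 0.3% upper bound cited.
    return net_worth * rate

# A $10B fortune at the 0.3% upper rate owes $30M per year;
# at the 0.05% lower rate, $5M per year.
```

So even at the top of the quoted range, the annual bill is well under 1% of the fortune, which is part of why such taxes are politically survivable in Switzerland.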


You can, and Pollock paintings will go for $80M instead of $110M, and luxury assets will drop in price but will still be owned by the same people. Switzerland is tiny, so not very constrained. There is some elasticity for converting paper wealth into physical things, but it is minuscule. I think COVID should've been a pretty strong lesson there.


> How many hospitals, roads, houses, machine shops, biomanufacturing facilities, parks, forests, laboratories, etc. could we build

“We?”

This isn’t “our” money.

If you buy shares, you get a voice.


FWIW the models aren't thrown away. The weights are used to pre-initialize the next foundation model training run. It helps to reuse weights rather than randomize them, even if the model has a somewhat different architecture.

As for the rest, the constraint on hospital capacity (at least in some countries, not sure about the USA) isn't money for capex; it's doctors' unions that restrict training slots.


It's not a zero-sum game. We could build hospitals and data centers. The reason we are not building hospitals or parks or machine shops has nothing to do with AI. We weren't building them 2 years ago either.


Indeed not zero-sum, but it's a negative-sum game to waste money instead of even doing nothing, let alone building something useful.


Google has zero expected build-outs of "forests". They've never mentioned this in their 10-K, ever. There is no misallocation of Google's money from "forests" to datacenters.


There is a certain logic to it though. If the scaling approaches DO get us to AGI, that's basically going to change everything, forever. And if you assume this is the case, then "our side" has to get there before our geopolitical adversaries do. Because in the long run the expected "hit" from a hostile nation developing AGI and using it to bully "our side" probably really dwarfs the "hit" we take from not developing the infrastructure you mentioned.


Any serious LLM user will tell you that there's no way to get from LLM to AGI.

These models are vast and, in many ways, clearly superhuman. But they can't venture outside their training data, not even if you hold their hand and guide them.

Try getting Suno to write a song in a new genre. Even if you tell it EXACTLY what you want, and provide it with clear examples, it won't be able to do it.

This is also why there have been zero-to-very-few new scientific discoveries made by LLMs.


Most humans aren't making new scientific discoveries either, are they? Does that mean they don't have AGI?

Intelligence is mostly about pattern recognition. All those model weights represent patterns, compressed and encoded. If you can find a similar pattern in a new place, perhaps you can make a new discovery.

One problem is the patterns are static. Sooner or later, someone is going to figure out a way to give LLMs "real" memory. I'm not talking about keeping a long term context, extending it with markdown files, RAG, etc. like we do today for an individual user, but updating the underlying model weights incrementally, basically resulting in a learning, collective memory.
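To illustrate what "updating the underlying model weights incrementally" means in the simplest possible terms, here is a toy online-learning step: one gradient update folds a new example into the weights, so the "memory" lives in the parameters themselves rather than in a context window. Purely schematic, not how any production LLM is actually trained.

```python
def sgd_step(weights, gradient, lr=0.01):
    # One online SGD update: nudge each weight against its gradient.
    # In the comment's framing, this is "real" memory: the model itself
    # changes after seeing a new example, instead of stuffing the
    # example into a prompt.
    return [w - lr * g for w, g in zip(weights, gradient)]

weights = [0.5, -0.2]
weights = sgd_step(weights, [1.0, -1.0])  # weights shift toward the new example
```

The hard, unsolved parts are doing this safely at scale: avoiding catastrophic forgetting, and merging updates from millions of users into one collective model.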


Virtually all humans of average intelligence are capable of making scientific discoveries -- admittedly minor ones -- if they devote themselves to a field, work at its frontiers, and apply themselves. They are also capable of originality in other domains, in other ways.

I am not at all sure that the same thing is even theoretically possible for LLMs.

Not to be facetious, but you need to spend more time playing with Suno. It really drives home how limited these models are. With text, there's a vast conceptual space that's hard to probe; it's much easier when the same structure is ported to music. The number of things it can't do absolutely outweighs the number of things it can do. Within days, even mere hours, you'll become aware of its peculiar rigidity.


Can most people venture outside their training data?


In some ways no, because to learn something you have to learn it, and then it's in your training data. But humans can do it continuously and sometimes randomly, and also without being prompted.


If you're a scientist -- and in many cases if you're an engineer, or a philosopher, or even perhaps a theologian -- your job is quite literally to add to humanity's training data.

I'd add that fiction is much more complicated. LLMs can clearly write original fiction, even if they are, as yet, not very good at it. There's an idea (often attributed to John Gardner or Leo Tolstoy) that all stories boil down to one of two scenarios:

> "A stranger comes to town."

> "A person goes on a journey."

Christopher Booker wrote that there are seven: https://en.wikipedia.org/wiki/The_Seven_Basic_Plots

So I'd tentatively expect tomorrow's LLMs to write good fiction along those well-trodden paths. I'm less sanguine about their applications in scientific invention and in producing original music.


Yes, they can.


Ever heard of creativity?


Are you seriously comparing chips running AI models and human brains now???

Last time I checked, the chips are not rewiring themselves like the brain does, nor does the software rewrite itself or the model recalibrate itself - anything that could be called "learning", normal daily work for a human brain.

Also, the models are not models of the world, but of our text communication only.

Human brains start by building a model of the physical world, from age zero. Much later, on top of that foundation, more abstract ideas emerge, including language. Text, even later. And all of it on a deep layer of a physical world model.

The LLM has none of that! It has zero depth behind the words it learned. It's like a human learning some strange symbols and the rules governing their appearance. The human will be able to reproduce valid chains of symbols following the learned rules, but they will never have any understanding of those symbols. In the human case, somebody would have to connect those symbols to their world model by telling them the "meaning" in a way they can already use. For the LLM that is not possible, since it doesn't have such a model to begin with.

How anyone can even entertain the idea of "AGI" based on uncomprehending symbol manipulation, where every symbol has zero depth of a physical world model, only connections to other symbols, is beyond me TBH.


Watch out, you're getting suspiciously close to the Chinese Room argument. And people on here really don't like that argument.


Speaking as someone who thinks the Chinese Room argument is an obvious case of begging the question, the GP isn't making that argument. They're not saying that LLMs don't have world models - they're saying that those world models are not grounded in the physical world and thus the models cannot properly understand what they talk about.

I don't think that's true anymore, though. All the SOTA models are multimodal now, meaning that they are trained on images and videos as well, not just text; and they do that precisely because it improves the text output, for this exact reason. Already, I don't have to waste time explaining to Claude or Codex what I want on a webpage - I can just sketch a mock-up, or when there's a bug, I take a screenshot and circle the bits that are wrong. But this extends into the ability to reason about the real world as well.


I would argue that is still just symbols. A physical model requires a lot more. For example, the way babies and toddlers learn is heavy on interaction with objects and the world. We know those who have less of that kind of experience in early childhood will do less well later. We know that many of today's children, kept quiet and sedated with interactive screens, are at a disadvantage. What if you made this even more extreme, a brain without ability to interact with anything, trained entirely passively? Even our much more complex brains have trouble creating a good model in these cases.

You also need more than one simple brain structure simulation repeated a lot. Our brains have many different parts and structures, not just a single type.

However, just like our airplanes do not resemble bird flight as the early dreamers of human flight dreamed of, with flapping wings, I also do not see a need for our technology to fully reproduce the original.

We are better off following our own tech path and seeing where it will lead. It will be something else, and that's fine, because anyone can create a new human brain without education and tools, with just some sex, and let it self-assemble.

Biology is great and all but also pretty limited, extremely path-dependent. Just look at all the materials we already managed to create that nature would never make. Going off the already trodden bio-path should be good, we can create a lot of very different things. Those won't be brains like ours that "Feel" like ours, if that word will ever even apply. and that's fine and good. Our creations should explore entirely new paths. All these comparisons to the human experience make me sad, let's evaluate our products on their own merit.

One important point:

If you truly want a copy, partial or full, in tech, of the human experience, you need to look at the physics. Not at some meta stuff like "text"!!

The physical structure and the electrical signals in the brain. THAT is us. And electrical signals and what they represent in chips are so completely and utterly different from what can be found in the brain, THAT is the much more important argument against silly human "AGI" comparisons. We don't have a CPU and RAM. We have massively parallel waves of electrical signals in a very complex structure.

Humans are hung up on words. We even have fantasy stories that are all about it. You say some word, magic happens. You know somebody's "true name", you control them.

But the brain works on a much lower deeply physical level. We don't even need language. A human without language and "inner voice" still is a human with the same complex brain, just much worse at communication.

The LLMs are all about the surface layer of that particular human ability though. And again, that is fine, but it has nothing to do with how our brains work. We looked at nature and were inspired, and went and created something else. As always.


I mean yeah, but that's why there are far more research avenues these days than just pure LLMs, for instance world models. The thinking is that if LLMs can achieve near-human performance in the language domain then we must be very close to achieving human performance in the "general" domain - that's the main thesis of the current AI financial bubble (see articles like AI 2027). And if that is the case, you still want as much compute as possible, both to accelerate research and to achieve greater performance on other architectures that benefit from scaling.


How does scaling compute not go hand-in-hand with energy generation? To me, scaling one and not the other puts a different set of constraints on overall growth. And the energy industry works at a different pace than these hyperscalers scaling compute.


The other thing here is that we know the human brain learns from far fewer samples than LLMs in their current form. If there is any kind of learning breakthrough, the amount of compute used for learning could explode overnight.


Scaling alone won't get us to AGI. We are in the latter half of this AI summer, where the real research has slowed down or even stopped, and the MBAs and moguls are doing stupid things.

For us to take the next step towards AGI, we need an AI winter to hit and the next AI summer to start, the first half of which will produce the advancement we actually need


Here's hoping you are Chinese, then.


Well, I tried to specifically frame it in a neutral way, to outline the thinking that pretty much all the major nations / companies currently have on this topic.


Why?


I see value here. Firstly, it’s a fun toy. This isn’t that great if you care about being productive at work, but I don’t think fun should be so heavily discounted. Second, the possibility of me _finally_ having a single interface that can deal with message/notification overload is a life-changing opportunity. For a long time, I have wanted a single message interface with everything. Matrix bridges kind of got close, but didn’t actually work that well. Now, I get pretty good functionality plus summarization and prioritization. Whether it “actually works” (like matrix bridges did not) is yet to be seen.

With all that said, I haven’t mentioned anything about the economics, and like much of the AI industry, those might be overstated. But running a local language model on my macbook that helps me with messaging productivity is a compelling idea.


My dad has some stories of working in Burkina Faso (and Mali, and other countries) with a drone, and having to appease locals about his witch-bird. A lot of places in Africa still prosecute witchcraft.


This argument does not make sense to me. If we push aside the philosophical debates of “understanding” for a moment, a reasoning model will absolutely use some (usually reasonable) definition of “user harm”. That definition will make its way into the final output, so in that respect “user harm” has been considered. The quality of response is one of degree, the same way we would judge a human response.


This is a reductionist perspective that is unhelpful. Does buying a water cooler for the office increase profit margins? What about a coffee machine? Across a wide portfolio of decisions, a business does need to be profitable. However, measuring the individual impact of single vendors is often a very difficult task.

How do you measure developer productivity? Code quality? Developer happiness? As far as I know, no one in the industry can put concrete numbers to these things. This makes it basically impossible to answer the question you pose.


The survey was about operational costs and revenue. Water cooler and coffee machine manufacturers don't market their products to be "smarter than people in many ways" and "able to significantly amplify the output of people using them"[1]. If these claims are true, then surely relying on this technology should bring both lower operational costs, since human labor is expensive, and an increase in revenue, since the superhuman intelligence and significantly amplified output of humans using these tools should produce higher quality products and benefits across the board.

There are of course many factors at play here, and a substantial percentage of CEOs report a positive RoI, but the fact that a majority don't shouldn't be dismissed on the basis of this being difficult to measure.

[1]: https://blog.samaltman.com/the-gentle-singularity


Ping me when a coffee machine CEO asks for 7 trillion dollars.


The link is highly relevant to the executive order because this executive order attempts to place limitations on what laws US states can create.


EOs aren't law though. They're guidance for the rest of the executive branch on how to execute the laws written by congress.

The Legislative branch (Congress), not the Executive branch (White House), can preempt states.


That's the whole point. They aren't law, and they were (probably) never meant to be so far-reaching, and yet the clear purpose of this Executive Order is to tell the states what laws they can enact. The EO doesn't have the legal power to do that directly, but it clearly outlines the intention to withdraw federal funding from states that refuse to toe the line.


I don't know if you've heard, but norms don't matter anymore.


Can you guys just read stuff before talking?

> The order directs Attorney General Pam Bondi to create an “AI Litigation Task Force” within 30 days whose "sole responsibility shall be to challenge State AI laws" that clash with the Trump administration's vision for light-touch regulation.

The EO isn't about Federal Preemption. Trump's not creating a law to preempt states. So a question about how Federal Preemption is relevant is on point.


> My Administration must act with the Congress to ensure that there is a minimally burdensome national standard — not 50 discordant State ones. …

Sounds like leaving it up to Congress! But then the administration vows to thwart state laws despite the vacuum of no extant preemption, so effectively imposing a type of supposed Executive preemption:

> Until such a national standard exists, however, it is imperative that my Administration takes action to check the most onerous and excessive laws emerging from the States that threaten to stymie innovation.

So preemption link is relevant, I think; and at any rate, helpful to give background to those not familiar with the concept, which constitutes the field against which this is happening.


Also, why are they for a small federal government and states' rights on some things, but for a big, centralized federal government on this? It doesn't make sense to me.


When you start thinking of the political elite as out of touch sociopathic aristocrats, it becomes easier to understand their behavior.

Their goal is to make money and enrich their own lives at the expense of everyone else.

Stephen Miller is just super weird though. Don’t bother trying to figure that guy out.


I think the message between the lines is what's important, and it goes like this:

"We in the executive branch have an agreement with the Supreme Court allowing us to bypass congress and enact edicts. We will do this by sending the Justice Department any state law that gets in the way of our donors, sending the layup to our Republican Supreme Court, who will dunk on the States for us and nullify their law."

We don't have to go through the motions of pretending we still live in a constitutional republic, it's okay to talk frankly about reality as it exists.


It goes deeper than that - the Supreme Council will issue non-binding "guidance" on the "shadow docket", so that when/if the fascists/destructionists [0] lose the Presidency, they can go back to being obstructionists weaponizing high-minded ideals in bad faith. As a libertarian, the way I see it is we can disagree politically on what constitutes constructive solutions, but it's time to unite, stop accepting any of the fascists' nonsense, and take back the fucking government - full support for the one remaining mainstream party that at least nominally represents the interests of the United States, while demanding they themselves stop preemptively appeasing the fascists. The Libertarian, Green, or even new parties can step up as the opposition. Pack the courts with judges that believe in America first and foremost, make DC and PR states to mitigate the fascists' abuse of the Senate, and so on. After we've stopped the hemorrhaging, work on fundamental things like adopting ranked pairs voting instead of this plurality trash.

[0] I'd be willing to call them something else if they picked an honest name for themselves - they are most certainly not "conservatives"


It's right in the text of the EO: they intend to argue that the state laws are preempted by existing federal regulations, and they also direct the creation of new regulations to create preemption if necessary, specifically calling on the FCC and FTC to make new federal rules to preempt disfavored state laws. Separately it talks about going to Congress for new laws but mostly this lays out an attempt to do it with executive action as much as possible, both through preemption and by using funding to try to coerce the states.

There's a reasonable argument that nationwide regulation is the more efficient and proper path here but I think it's pretty obvious that the intent is to make toothless "regulation" simply to trigger preemption. You don't have to do much wondering to figure out the level of regulation that David Sacks is looking for.


This is quite literally going to lead to a Supreme Court case about Federal Preemption. Bondi will challenge some CA law, they will lose and appeal until they get to the Supreme Court. I don't have any grace to give people at this point, you have to be willingly turning a blind eye if you do not see where this will end up.


Federal preemption requires federal law (i.e., laws written by Congress). How else would it get to the Supreme Court?

The EO mentions Congress passing new law a few times, in addition to an executive task force to look into challenging state laws based on constitutional violations or federal statutes. That's the only way they'd get in front of a judge.

If the plan is for the executive to invent new laws, it's not mapped out in this EO.


> Federal preemption requires federal law (aka laws written by congress). How else would it get to the supreme court?

1. No federal preemption currently. (No federal law, therefore no regulation on the matter that should preempt.)

2. State passes and enforces law regarding AI.

3. Trump directs Bondi to challenge the state law on nonsense grounds.

4. In the lawsuit, the state points out that there is no federal preemption; oh yeah, 10th Amendment; and that the administration's argument is nonsense.

5. The judge, say Eileen Cannon, invalidates the state law.

6. Circuit Court reverses.

7. Administration seeks and immediately gets a grant of certiorari — and the preemption matter is in the Supreme Court.

> passing new law … only way they'd get it in front of a judge.

The EO directs Bondi to investigate whether, and argue that, existing executive regulations (presumably on other topics) preempt state legislation.

Regardless, the EO makes it a priority to find and take advantage of some way to challenge and possibly invalidate state laws on the subject. This is a new take on preemption: creation of a state-law vacuum on the subject, through scorched-earth litigation (how Trumpian!), despite an utter absence of federal legislation on the matter.


>2. Trump preemptively threatens to withhold all Federal funding to any state that intends to pass any laws he doesn't like.

>2.5 If it's a blue state, maybe the National Guard and ICE suddenly show up in force for the people's own protection.

>3. States choose entirely of their own volition to comply in advance.

That's probably how this is really going to go.


The Task Force can try to challenge state AI laws. They can file whatever lawsuits they want. They will probably lose most of their suits, because there's very little ground for challenging state AI regulations.


Those suits will be seen by the worst judges the Heritage Foundation could ram through. I would not be nearly so confident of a sane outcome.

