Hacker News | asgraham's comments

The good is hidden: court systems are already overwhelmed. If the arbitration cases were added, then it’d take even longer to get a court date.

(Which isn’t to say I think the system as it is is good, just that there is a good)


Then fix the court system? Create more courts, hire more judges/clerks... I mean, I know it isn't "as simple" as that, but that's the proper solution instead of creating a half-legal, half-favoritist system where a company can force you into arbitration where, more often than not, the arbitrator is paid by the company, and therefore rules in its favor.

Or reduce the demand on the legal system? Just adding more expense to an outright broken thing isn't an actual fix. It's a half-measure patch at best. And no, I don't mean create a workaround like arbitration.

Why do court cases take so long and suck up so many resources? Start with that. Perhaps reduce the amount of legislation/laws/etc. on the books, and write laws that limit the litigious society we find ourselves living in.

That is of course easier said than done, but we've chosen this path and can choose to unwind it if we have enough desire to.


Sounds like the solution is just hire more judges instead

Are you arguing that eventually a competitor will emerge that does support OpenClaw with a subscription model? Wouldn’t that just be more expensive for the exact same reason Anthropic is banning it?

OpenAI have literally gone out of their way to explicitly support this sort of thing. As they did with OpenCode.

Honestly, this just looks like what Dylan of SemiAnalysis suggested on Dwarkesh – that they've massively under-provisioned capacity / under-spent on infrastructure.

That would honestly be a comforting answer if true, because I would gladly take 'we can't afford to do this right now' over 'we are self-preferencing, and the FTC should really take a look at us, even if we're technically not a monopoly right now, since we're the only strongly-instruction-following model in town and we clearly know it'.


OpenAI is burning cash to stay relevant, as I understand it, i.e. they will keep subsidizing.

You can use these tools with most providers today, just with no subscription plan. If you have enough spend, you can likely get bulk deals.


> we are self-preferencing, and the FTC should really take a look at us, even if we're technically not a monopoly right now

Tell me you have zero clue what a monopoly is or what the law is, without telling me.

Monopoly law relies on broad categories, not narrow ones. You can’t call Microsoft a monopoly because they are the only company that makes Windows. You can’t call Amazon a monopoly because they are the only company that makes AmazonBasics. You can’t call Anthropic a monopoly because their product is 20% better for your use case, otherwise by definition no company has any incentive to do a good job at anything.


Somehow this was coming up a few years ago where people kept saying that Apple could face antitrust because they were the only company who made iOS and controlled the App Store. Given that android exists, and has roughly equal market share, that didn’t make much sense to me, but I kept seeing it being discussed.

And Apple did eventually lose that case, so those people were correct: sometimes one can be a monopolist in a market one created.

> Tell me you have zero clue what a monopoly is or what the law is, without telling me.

Monopoly law is subject to reinterpretation over time, and anybody who has studied its history knows this. The only people who argue for "strict" interpretations of current monopoly law are those who currently benefit from the status quo.

> Monopoly law relies on broad categories, not narrow ones.

And this is currently a gigantic problem. Because of relying on broad categories to define "monopoly", every single supply chain has been allowed to collapse into a small handful of suppliers who have no downstream capacity thanks to Always Late Inventory(tm). This prevents businesses from mounting effective competition since their upstream suppliers have no ability to support such activities thanks to over-optimization.

To be effective on the modern incarnation of businesses, monopoly law needs to bust every single consolidated narrow vertical over and over and over until they have enough downstream capacity to support competition again.


Well, Apple did recently lose as they're the monopolist in their walled garden for app distribution.

Oh, give me a break. I know the law around this incredibly well. Reasonable people can disagree about whether the law is appropriate. The whole point of laws is that they should match intent – and as for '20%': "tell me you don't understand how a small quantitative gap can result in a step change in capability."

> Oh, give me a break. I know the law around this incredibly well.

Then don’t make BS up like implying Anthropic is a monopolist for the crime of competence.

> tell me you don't understand how a small quantitative gap can result in a step change in capability

The law does not give a darn about this. Being a good competitive option does not put you in a league of your own. If I invent a new flavor of shake, the Emerald Slide, am I a monopolist in shakes because I'm the only one selling Emerald Slides? If you then go and start a local business reselling shakes and I'm your only supplier, am I a monopolist then? Absolutely not.


You do realize that I called out in my post they are absolutely not a monopoly by the law, right? I know all-too-well what the definition is.

We have a similar situation in mobile where Apple may not be considered a monopoly, but people have walked around for a decade with a supercomputer in their pocket that is wildly underused.

Things have gotten faster; things are different than they were decades ago when a lot of this was devised.

The reality of the matter is that some of us just want to see innovation actually happen apace, and not see 5, 10, or 30 years of slowdown while we litigate whether or not such a company is holding all the cards, while everyone is collectively waiting at the spigot for a company to get its shit together because we're not allowed to fix the situation.

For what it's worth, I'm hopeful that the other model providers will catch up and put us in a situation where this conversation is irrelevant.

What I'm afraid of is a situation where we see continued divergence, and we end up with another Apple situation.


> “we are self-preferencing, and the FTC should really take a look at us, even if we're technically not a monopoly right now”

That is not calling out that they are "absolutely not a monopoly by the law" in any way, shape, or form. You're framing it as though they aren't by a technicality, when they aren't anywhere near discussion by even the most extreme of legal theories. You won't find Lina Khan or Margrethe Vestager, both ousted for going too far, complaining about Anthropic.

> “We have a similar situation in mobile where Apple may not be considered a monopoly, but people have walked around for a decade with a supercomputer in their pocket that is wildly underused.”

In that we can't run a torrent client to download illegally redistributed media 99% of the time? Otherwise, in what way are they underused? Given the degree of public addiction, a more underutilized phone would be a social benefit.


Let me back up what you're saying. They absolutely are not a monopoly today by any definition, by any stretch, in any conceivable way.

I'm looking forward. Things are moving very quickly. As I said above, I'm afraid of us diverging into another Apple situation in the future. If I suggest that they should be looked at and thought about, it's not for today, it's for tomorrow. If divergence continues. Because as with everything in AI, it might hit us a lot faster than people expect. Hell, given their approach to morality, I suspect that Anthropic folks have already thought deeply about these sorts of concerns. That's why it's actually a lot more in character for them to be doing this not due to self-preferencing, but due to unaffordability, which - if you look at my first post - is what I said seems to be happening.

Suffice to say that I have a graveyard of things that I think phones could have been, where unfortunately we've ended up with these - as you say - addicting consumerist messes.

Gonna stop here so I don't flood the thread. We're getting very off topic.


You’re welcome to start OpenSpigot yourself, and see how investors feel about you giving away your technical / IP / market advantage on launch day.

Some of the Chinese labs with cheaper per token costs do support it, like say minimax: https://agent.minimax.io/max-claw

I haven't tried it to see if it's any good but it's $20/mo.


Doesn't OpenAI allow this today?

It's a good way to win market share and build goodwill, but one has to wonder whether this class of usage is marginally profitable for them (or anyone) and how sustainable their lenient policies will be for them long term.

Kimi seems to support this with their $39/month plan.

You mean whether another competitor will emerge? Right now we have OpenAI.

The real competitors that Anthropic sees as threats in the long term are the AI labs building open-weight models, especially the AI labs in China.

I know for a fact [1] that the neuroscientific discoveries were not independent of physics: the people doing the developing were largely former physicists. They likely didn't cite anything because why would you cite phase transitions or criticality? You learn about them in class as a physicist. I strongly suspect the ecology results weren't independent either, but all the theoretical ecologists I know are relatively young (if mostly former physicists), so I have no first-person accounts.

The part of this that could totally be true is that a clinical application somewhere along the way "independently" "reinvented" it. There's a hilarious collection of peer-reviewed journal articles out there inventing a "new" method of calculating the sizes of shapes and areas under the curve. The method involves adding up really small rectangles. (I think a top comment already mentioned the Tai article [2])

[1] source: my doctoral advisor was a really really old theoretical neuroscientist who trained as an electrical engineer and mathematician. If you want a more concrete example, the work of Bard Ermentrout on neural criticality starting in the 70's or 80's. He read a lot of physics textbooks.

[2] https://science.slashdot.org/story/10/12/06/0416250/medical-...
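For context on the rectangle anecdote: the "new" method in the Tai article is just elementary numerical integration, summing the areas of small rectangles (or trapezoids) between sampled points. A minimal sketch with made-up sample data (the function name and values are hypothetical, not from the paper):

```python
# The "rediscovered" method: approximate area under a sampled curve by
# summing small trapezoids between adjacent points (the trapezoidal rule).
def trapezoid_area(xs, ys):
    """Area under the piecewise-linear curve through (xs[i], ys[i])."""
    return sum((xs[i + 1] - xs[i]) * (ys[i + 1] + ys[i]) / 2
               for i in range(len(xs) - 1))

# Hypothetical readings (samples of y = x^2 at times 0..4).
xs = [0, 1, 2, 3, 4]
ys = [0, 1, 4, 9, 16]

print(trapezoid_area(xs, ys))  # → 22.0, near the exact integral 64/3 ≈ 21.33
```

This is textbook material going back centuries, which is exactly why its "independent reinvention" in a peer-reviewed clinical journal became a running joke.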


Good correction! Ermentrout is a fair example. You're right that a lot of neuroscience criticality work came from retrained physicists. The paper distinguishes between independent derivation and cross-trained import. The title for this post over-simplifies this. I made this change to try to increase engagement, since the full detailed title got zero engagement.

Where I'd push back: even after physicists brought the tools into neuroscience, the receiving field didn't connect it back to the parallel work in ecology or cardiology. Ermentrout's neural work and Goldberger's cardiac work used the same underlying math but didn't cross-cite. The silos reformed around the imported tools.

You're correct that "none of them knew" is too strong. Fair point. "Most of them didn't talk to each other even after import" is closer to what the citation data actually shows.


> because why would you cite phase transitions or criticality? You learn about them in class as a physicist

I'm not sure if you're being entirely serious with that remark, but clearly citing the earlier work would have bolstered their credibility: interdisciplinary research is a plus and hardly something to hide. If it's something that's taught in physics class, you can cite a common textbook.


The disease of having 100 citations in each paper had not yet broken out when the papers in question were written. A good paper in 1994 probably had about 8 references, and certainly not any to common textbooks.


I would read it as there being a different threshold for what is citation-worthy versus presumed background knowledge.

Imagine if every graphics paper had to cite every concept they use from arithmetic, trigonometry, and linear algebra textbooks...


This was citation worthy because it's new knowledge to the field. Even in a graphics paper, you can cite whatever basic techniques you're using if it's not clear that everyone will be familiar with them.


The irony is that youth are simultaneously the biggest consumers of (new) social media and its staunchest haters [EDIT: this is directly contradicted by the research article I found below…]. I can't find the source, so take it with a grain of salt, but I've read that something like 80% of TikTok users under some age think they'd be happier if it didn't exist and/or wish it didn't exist.

I don’t think this is really an issue of censorship to a lot of people (though that may be how it shakes out in the government) but rather of control over their digital environment and sanity.

EDIT: I don’t think this is what I’m remembering, but it has concrete numbers somewhat lower than I thought (48% of teens think social media harms people their age, but only 14% think it harms them personally) https://www.pewresearch.org/internet/2025/04/22/teens-social...


It's not even irony? They want to quit, but it's too hard.


I was so with you the first half of that. But the notion that everything should be capitalism is just as wrong as the notion that nothing should be capitalism (or, that capitalism only leads to bad things; obviously wrong but somehow a broadly accepted truism).

Capitalism works when a market works; capitalism fails when a market fails. Healthcare is a great example, because there’s an obvious and inherent imbalance in demand vs supply. Firefighting is another great example. These also have externalities to the community as a whole that everyone gets, even when you don’t pay/need the service; so it makes sense to make everyone pay (taxes). Even if you never have a child, even if you send your kids to private school, you live in a society that could only exist because of a (formerly, relatively) high standard of public education. So everyone pays for schools.

The idea of government bureaucrats lining their pockets is also (formerly, relatively) ridiculous: who would get into US government bureaucracy to make money? They are all (formerly, relatively) doing it almost uniformly because they believe in the mission, because they would almost all make more money going private.


Lots of good suggestions. However, for Svelte in particular I've had a lot of trouble. You can get good results as long as you don't care about runes and Svelte 5. It's too new, and there's too much good Svelte code in the training data that doesn't use Svelte 5. If you want AI-generated Svelte code, restricting yourself to <5 is going to improve your results.

(YMMV: this was my experience as of three or four months ago)


Those prices seem geared toward people who are completely price insensitive, who just want "the best" at any cost. If the margins on that premium model are as high as they should be, it's a smart business move to give them what they want.


Really cool dataset! Love seeing people actually doing the hard work of generating data rather than just trying to analyze what exists (I say this as someone who’s gone out of his way to avoid data collection).

Have you played at all with thought-to-voice? Intuitively I’d think EEG readout would be more reliable for spoken rather than typed words, especially if you’re not controlling for keyboard fluency.


Yeah we do both text and voice (roughly 70% of data collection is typed, 30% spoken). Partly this is to make sure the model is learning to decode semantic intent (rather than just planned motor movements). Right now, it's doing better on the typed part, but I expect that's just because we have more data of that kind.

It does generalize between typed and spoken, i.e. it does much better on spoken decoding if we've also trained on the typing data, which is what we were hoping to see.


> we do both text and voice (roughly 70% of data collection is typed, 30% spoken). Partly this is to make sure the model is learning to decode semantic intent (rather than just planned motor movements)

Both of these modes are incredibly slow thinking. Consciously shifting from thinking in concepts to thinking in words is like slamming on the brakes for a school zone on an autobahn.

I've gathered most people think in words they can "hear in their head", most people can "picture a red triangle" and literally see one, and so on. Many folks who are multi-lingual say they think in a language, or dream in that language, and know which one it is.

Meanwhile, some people think less verbally or less visually, perhaps not verbally or visually at all, and there is no language (words).

A blog post shared here last month discussed a person trying to access this conceptual mode, which he thinks is like "shower thoughts" or physicists solving things in their heads while staring into space, except "under executive function". He described most of his thoughts as words he can hear in his head, with these concepts more like vectors. I agree with that characterization.

I'm curious what % of folks you've scanned may be in this non-word mode, or if the text and voice requirement forces everyone into words.


I agree that thinking in words is much slower than thinking in concepts would be -- that's the point of training models like this, so that ideally people can always just think in concepts. That said, we do need to get some kind of ground truth of what they're thinking in order to train the model, so we do need them to communicate that (in words).

One thing that's particularly exciting here is that the model often gets the high-level idea correct, without getting any words correct (as in some of the examples above), which suggests that it is picking up the idea rather than the particular words.


> ideally people can always just think in concepts

Are you pursuing an idea of how to help people like this author* access this mode that some of us are always in unless kicked out of it by the need for words?

Very needed right now — the opposite of the YouTube-ization of idea transfer.

It doesn't seem clear this is accessible without other changes in wiring? The inability to "picture" things as visuals seems to swap out for "conceptualizing" things in -- well, I don't have words for this.

An attempt from that essay:

> This is not what Hadamard is talking about when he describes the wordless thought of the mathematicians and researchers he has surveyed. Instead, what they seem to be doing is something similar to this subconscious, parallelized search, except they do it in a “tensely” focused way.

> The impression I get is that Hadamard loads a question into his mind (either in a non-verbal way, or by reading a mathematical problem that has been written by himself or someone else), and then he holds the problem effortfully centered in his mind. Effortfully, but wordlessly, and without clear visualizations. Describing the mental image that filled his mind while working on a problem concerning infinite series for his thesis, Hadamard writes that his mind was occupied by an image of a ribbon which was thicker in certain places (corresponding to possibly important terms). He also saw something that looked like equations, but as if seen from a distance, without glasses on: he was unable to make out what they said.

I’m not sure what is going on here.

* https://www.henrikkarlsson.xyz/p/wordless-thought

A couple of this author's speculations aren't how I'd say it works when this is one's default mode, but most are in the neighborhood. He comes the closest of what I've read by people who do think the way the author thinks — which seems to be most people.


Interesting! I imagine speech-related motor artifacts don't help matters either, even if noise starts mattering less at scale.


Yeah -- we have the participants use chinrests as well, which reduces head-motion artifacts for typing, but less so for speaking (because they have to move their heads for that, of course). So a lot of the data is with them keeping their heads quite still, although the model is becoming much more robust to this over time.


This isn’t really “Show HN” so you might want to remove that, but looks really awesome!

https://news.ycombinator.com/showhn.html


Thank you! I changed that. Yeah, some really awesome freebies this year. I've been following music production freebies for over a decade and there's never been a year like this one.


I was initially skeptical of this claim because I'd previously learned that to cross the blood-brain barrier, particles need to be around ~200 nm or smaller (PM2.5 covers particles up to 2500 nm). However, PM2.5 does seem to be an important category of particles for brain damage: somehow these particles can access the brain [1]. Obviously, yes, whether a particle is "neurotoxic" depends on exactly which particle it is, but generally "unnatural" particles in the brain are not going to do good things. (I am not an expert in particulates.) Things much larger than this don't seem to penetrate the blood-brain barrier, so they can't be neurotoxic. So PM2.5 is probably at the intersection of large enough to be unhealthy but small enough that the blood-brain barrier doesn't help (probably some evolutionary argument to be made here).

[1] https://pmc.ncbi.nlm.nih.gov/articles/PMC9491465/#:~:text=PM...


The article does suggest the particles travel "from the nose to the brain", but I think that may be a bit of hyperbole.

In the studies described, they weren't looking for these particles in the brain.

There is potentially a case to be made that the particles result in systemic inflammation, or some other pathway which leads to effects in the brain, rather than a direct action.

