Hacker News | thenaturalist's comments

The Irish siding with a colonial terrorist power that flaunts its genocidal ambitions freely, financed by a petro state that also flaunts its genocidal ambitions freely, instead of with the indigenous people of a land whose artefacts and scriptures bear the name of that land and are dug up from its ground, has to be the biggest moral confusion of the 21st century.


You mean the Philistines?


The argument conveniently always goes such that Israel is the baddie.

Curious how that goes, especially since Israel's ulterior motives are always implied; they're not taken at their word.

And the Islamists, who share their motives openly with anyone willing to listen, are ignored.


Genocide was literally in Hamas’s charter and yet somehow they’re the good guys because modern leftists can’t think past “colonialism bad”.


I never said they were the good guys. Fuck Hamas.

But let's not equate an inexperienced group of starved and impoverished guerrilla fighters with a first-world, nuclear-armed genocidal ethnostate.


Ah, the association where anyone paying a 30 USD annual fee can become a member...


Do you have a better authority that says otherwise?


What is the HN community doing to use tech to combat terrorism and defend civilian security and freedom?


Well, you could join Palantir and fight terrorism: https://www.palantir.com/careers/


Well, you could join Palantir and f̶i̶g̶h̶t̶ ̶t̶e̶r̶r̶o̶r̶i̶s̶m̶ help develop Israel's terrorism in Gaza.


> The magic

Hilarious that you start with that, as TAO requires:

- Continuous adaptation makes it challenging to track performance changes and troubleshoot issues effectively.

- Advanced monitoring tools and sophisticated logging systems become essential to identify and address issues promptly.

- Adaptive models could inadvertently reinforce biases present in their initial training data or in ongoing feedback.

- Ethical oversight and regular audits are crucial to ensure fairness, transparency, and accountability.

Not much magic in there if it requires good old human oversight every step of the way, is there?


Goalposts whooshing by at maglev speed.

Of course it needs human supervision; see IBM, 1979. Oversight, however, doesn't mean the robots wait for approvals while doing R&D, and that's where the magic is: the robots overseeing their own training and the improvement of their harnesses.

IOW, only the ethics and deployment decisions need to be gated by humans. The rest just chugs along: 1% a month, 1% a week, 1% a day…


How would that work out, barring a complete retraining or human-in-the-loop evals?


I highly doubt that, and it's in OP's article.

First, a vendor will have the best context on the inner workings of their software and on best practices for extending its current state. The pressure on vendors to make this accessible and digestible to agents/LLMs will increase, though.

Secondly, if you have coded with LLM assistance (not vibe coding), you will have experienced the limited ability of one-shot stochastic approaches to build out well-architected solutions that go beyond the immediate functionality encapsulated in a prompt.

Thirdly, as the article mentions, the opportunity cost will never make this favorable, unless the SaaS vendor was extorting prices before. The direct cost in mental overhead and time of an internal team member hand-holding an agent, writing specs, debugging, and firefighting some LLM-assisted or vibe-coded solution will not outweigh the upside potential of expanding your core business, unless you're a stagnant enterprise product on life support.


Pretty worthless take, posting an ad hominem attack instead of addressing the actual content of the article/statement.


???

Use an API Key and there's no problem.

They literally put that in plain words in the ToS.
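For illustration, here is a minimal sketch of what API-key auth (as opposed to subscription/OAuth auth) looks like against the Anthropic Messages endpoint. Header names follow Anthropic's public API docs; the model name and key value are placeholders, and the request is built but deliberately not sent.

```python
import json
import urllib.request

# Placeholder key; a real one comes from the Anthropic console.
API_KEY = "sk-ant-..."

# Build (but do not send) a Messages API request authenticated
# with an API key, which is billed per token rather than covered
# by a Claude subscription.
req = urllib.request.Request(
    "https://api.anthropic.com/v1/messages",
    data=json.dumps({
        "model": "claude-sonnet-4-5",  # placeholder model name
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "Hello"}],
    }).encode("utf-8"),
    headers={
        "x-api-key": API_KEY,               # API-key auth
        "anthropic-version": "2023-06-01",  # required version header
        "content-type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would actually send it.
```

The point of the distinction: requests carrying an `x-api-key` header are metered usage under the API terms, which is exactly the path the ToS allows for third-party tooling.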


Using an API key is orders of magnitude more expensive. That's the difference here. The Claude Code subscriptions are being heavily subsidized by Anthropic, which is why people want to use their subscriptions in everything else.


They are subsidized by people who underuse their subscriptions. There must be a lot of them.


I think the people who use more than they pay for vastly outnumber those who pay for more than they use. It takes intention to sign up (it's not the default, unlike health care), and once you do, you quickly get into the habit of using it.


Of course, like gym memberships. VCs subsidizing powerlifters…


This move feels poorly timed. Their latest ad campaigns are about not having ads, and the goodwill they'd earned lately in my book was just decimated by this. I'm sure I'm not the only one who's still just dipping their toes into the AI pool, and I am very much a user who underutilizes what I pay for because of that. I have several clients who are scrambling to get on board with Cowork. Eliminating API usage for subscription members right before a potentially large wave of turnover not only chills that motivation, it signals a lack of faith in their own marketing, which, from my POV, put out the only AI Super Bowl campaign to escape virtually unscathed.


> the goodwill they'd earned lately in my book was just decimated by this

That sounds absurd to me. Committing to not building in advertising is very important and fundamental to me. Asking people who pay for a personal subscription, rather than paying by the API call, to use that subscription themselves sounds to me like it is just clarifying the social compact that was already implied.

I WANT to be able to pay a subscription price. Rather like the way I pay for my internet connectivity with a fixed monthly bill. If I had to pay per packet transmitted, I would have to stop and think about it every time I decided to download a large file or watch a movie. Sure, someone with extremely heavy usage might not be able to use a normal consumer internet subscription; but it works fine for my personal use. I like having the option for my AI usage to operate the same way.


The problem with fixed subscriptions in this model is that the service has an actual consumption cost. For something like internet service, the cost is primarily maintenance, unless the infrastructure is being expanded. But using LLMs is more like using water: the more you use, the greater the depletion of a resource (electricity in this case, which is likely being produced with fossil fuels that have to be sourced and transported, etc.). Anthropic et al. would be setting themselves up for a fall if they allowed wholesale use at a fixed price.


There is a lot more VC cash.


There are. It's like health care: the healthy don't use it as much and pay for the sick.


Be the economics what they may, there is no lock-in as OP claims.

That statement is plainly wrong.

If you boost and praise AI usage, you have to face the real cost.

Can't have your cake and eat it, too.


The people mad about this feel they are entitled to the heavily subsidized usage in any context they want, not in the context explicitly allowed by the subsidizer.

It's kind of like a new restaurant handing out coupons for "90% off" to attract diners. Customers started coming in, ordering bulk meals, then immediately packaging them in tupperware containers and taking them home, violating the spirit of the arrangement, even if not the letter. So the restaurant changed the terms on the discount to say "limited to in-store consumption only, not eligible for take-home meals." And instead of still being grateful that they're getting food at 90% off, the cheapskate customers are getting angry that they're no longer allowed to exploit the massive subsidy however they want.


Not proofreading quotes you dispatched an AI to fetch, ignoring that the website in question has blocked LLM scraping and hence your quotes are made up?

For a senior tech writer?

Come on, man.

> Any of us who use these tools could make a mistake of this kind.

No, no, not any of us.

And, as Benji will know himself, certainly not if accuracy is paramount.

Journalistic integrity - especially when quoting someone - is too valuable to be rooted in AI tools.

This is a big, big L for Ars and Benji.

