There is such a mind-bogglingly huge amount of waste in IT services worldwide, particularly in the consulting and offshoring areas, that big swings, up and down, in that area don’t actually have anything to do with what works well or doesn’t. Decisions are made to offshore work or drop offshore contracts based on the latest hype cycle, not whether it is effective or worthwhile.
So while there may be lots of consultants losing their jobs, that’s not because AI tools do the work better. It’s because management thinks investors will accept the story that AI tools will do the work better and save money. Management, and investors, don’t know, can’t judge, and honestly don’t actually care if it’s better or worse. And they run things so poorly it would be impossible to tell anyway.
It just means Cursor is sharing data with a Chinese LLM provider, which enables them to improve their LLM by training on the inputs and outputs of all the data Cursor collects.
Claude Code might be subsidized, but there are other risks.
For one, if any agent can use Claude models, it exposes Anthropic to distillation risk: data gathered from millions of agent sessions can easily be used to train a competing model, eroding their model's edge.
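To make the distillation risk concrete, here is a minimal sketch of the classic knowledge-distillation loss. All numbers and names (`teacher`, `aligned`, `random_`) are made up for illustration; the point is only that a competitor needs nothing but the teacher's *outputs*, at scale, to pull a student model toward it:

```python
# Minimal knowledge-distillation sketch (illustrative, not any lab's
# actual pipeline): a student is trained to match a teacher's output
# distribution, which is exactly what large-scale agent logs expose.
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student), the standard distillation objective."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher = [4.0, 1.0, 0.5]   # hypothetical next-token preferences
aligned = [3.8, 1.1, 0.4]   # student already close to the teacher
random_ = [1.0, 1.0, 1.0]   # untrained student

# Training lowers this loss, i.e. copies the teacher's behavior.
assert kd_loss(teacher, aligned) < kd_loss(teacher, random_)
```

Collected prompt/response pairs play the role of the teacher signal here; with enough of them, the gap between `aligned` and `random_` is what a distilled competitor closes.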
Second, to improve their own coding model they need predictable input. If the input to the model is all over the place (different harnesses add extra entropy to the data), it's hard to improve the model along one axis.
Caching is a money saver in computing. Their own client is probably a lot better at exploiting prompt caching than any other agent, so they don't want to lose money on cache misses and still end up with disgruntled customers complaining that Claude isn't working as well.
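A toy sketch of why the harness matters for caching. This is a deliberate simplification (real prompt caching reuses token-prefix state inside the serving stack, not whole strings); the prompts and the `cached_tokens` helper are invented for illustration:

```python
# Toy prefix cache: a harness that keeps a stable prompt prefix lets
# the server reuse work, while one that injects changing content up
# front (here, a timestamp) invalidates the shared prefix every call.

def cached_tokens(previous_prompt, new_prompt):
    """Length of the shared prefix, i.e. work the server can reuse."""
    n = 0
    for a, b in zip(previous_prompt, new_prompt):
        if a != b:
            break
        n += 1
    return n

stable_prev = "SYSTEM: you are a coding agent\nUSER: fix the bug"
stable_next = "SYSTEM: you are a coding agent\nUSER: add a test"

messy_prev = "ts=101 SYSTEM: you are a coding agent\nUSER: fix the bug"
messy_next = "ts=102 SYSTEM: you are a coding agent\nUSER: add a test"

# The stable harness reuses the entire system prompt; the messy one
# diverges at the timestamp and pays full price for every request.
assert cached_tokens(stable_prev, stable_next) > cached_tokens(messy_prev, messy_next)
```

Under per-token pricing the provider passes the miss cost on; under a flat subscription, every wasted prefix is the provider's own money, which is one reason to prefer a client they control.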
And also, if a user can simply switch models inside an agent, what moat does Anthropic have? Claude Code will not include other companies' models, which lets them make Claude Code more "complex" over time, so that its workflows become ingrained in users' habits to the point where using anything else is very difficult and the user quickly returns to Claude Code.
They are not entitled to a moat, and their customers do not owe them one. Several companies have narrow or no moats. Dell and HP are two examples when it comes to their PC business.
This idea that companies should be allowed to lock down their products just so they can have moats is how we ended up with printer ink being more expensive than crude oil or champagne.
Companies are absolutely allowed to lock down their own products. Netflix is a great example: you don't bring your own client for Netflix.
The whining/entitlement in this thread is ridiculous. The API is always there for you to use as you desire.
If you want to use the loss leader on the other hand, you agree to abide by certain terms. But if you don't want to do that, just use the API. It's not that hard.
> Caching is a money saver in computing. Their own client is probably a lot better at exploiting prompt caching than any other agent, so they don't want to lose money on cache misses and still end up with disgruntled customers complaining that Claude isn't working as well.
I’d bet a reasonable amount that this is the case. They are strongly incentivized to maximize cache use when subscribers aren't paying per token.
This is literally the first time I've heard this. What is your source? I can type the exact same query three times and though the general meaning may be the same, the actual output is unique every single time. How do you explain this if it's cached?
In this case LLMs were obviously used to dress the code up as more legitimate, adding more human or project relevant noise. It's social engineering, but you leave the tedious bits to an LLM. The sophisticated part is the obscurity in the whole process, not the code.
i do that when i don't trust the person's ability to translate to english without error. if they are using a tool to translate to english, then i might as well use that tool myself, with the benefit that i then have the original untranslated message too and can get a second opinion if the translation doesn't make sense. if all i have is the translation, then i am stuck with that.
in fact, i go and implement dumb AI models in many companies, and executives immediately ask "how many people can we fire with this advancement".