Hacker News | ej88's comments

They do; in the paper they mention that they evaluate the LLM without tools

ai skeptic fanfic evolves in fascinating ways every day

Take it a step further: AI generated AI skeptic fanfic :D

huh? this is completely inaccurate

You're absolutely right!

I'm starting to believe that the biggest moats will be in the application layer, and that people are starting to realize traditional SaaS moats apply to those companies too

network effects, distribution, proprietary data, systems of record

companies like opencode have none of the above

cursor's distribution has been faltering and they're hard pivoting to training their own models with their proprietary data to try to build their moat back


The network effects and distribution moats are subject to the same erosion; proprietary data less so, and systems of record the least. The real value-add over the next decades is gonna be around providing "stability", in multiple domains, from service stability to cultural stability. The "disruption" formula will be flipped on its head, and people will be motivated to "move slow and fix things".

Exactly this. Also global sales and marketing and support.

1. On prem. 2. Extremely strict data controls: if one of Palantir's big customers found out data got leaked, people are going to prison.

Amen.

People seem to struggle with the concept of private datacenters these days. Palantir customers tend to be the sorts of orgs that are pretty paranoid about their data, and they wouldn't be handing it over to some schmucks without being confident that those concerns were addressed. Militaries and governments generally aren't fuckin around with things like intelligence data, so I think it's reasonable that Palantir is able to make a convincing case to the world's most paranoid orgs that their data isn't being sent anywhere (and it'd likely be air gapped anyway).

Just because everything you touch is in the cloud doesn't mean other orgs aren't still building their own datacenters and then buying software to run inside.


You think people go to prison for this sort of thing? How laughably quaint.

yes

am close with a few employees there


The on prem solution is probably 2x the TCO of the hosted solution. I'm sure many orgs that should be strictly "on prem" are running hosted solutions due to budgetary concerns.

there doesn't seem to be a limit on the ceiling of what companies can do with software, probably the most elastic demand of any industry ever

the swe role is going to change, but problem-solving systems thinkers with initiative won't go away


That's a possible outcome. Another possibility is that AI will handle all of the thinking and problem solving, so the market value of thinking will drop. The bottleneck will still be humans, but their input will be (1) doing physical, real-world stuff and (2) providing data that the AI doesn't have, e.g. information about a specific problem domain or how a user interface feels.


assuming no asi, the market value of thinking without accountability trends to zero, the bottleneck will be thinking + accountability, at least for knowledge work

if ai truly solves novel thinking then nothing is a barrier. the physical world is downstream from robotics, which is downstream from software. it'll be able to persuade nation states to collect data for itself, etc etc (insert sci-fi ending)


I feel like this article doesn't really contribute much to the discourse and is somewhat spoiled by the author's biases.

I think the point about lacking precise language to describe LLMs is reasonable, then the author follows it up with claims that the machines can't count and that they are incapable of math (easily disproven). Then says "talking rock" is a better alternative, which to the average person would be even more confusing. Then says AI researchers tend to not consider LLMs AI (like.. what?)

The point on Turing's Imitation Game was reasonable too, but then the author confidently proclaims that LLMs are not doing anything intelligent and are pure mimicry. Intelligence is notoriously poorly defined, and the stochastic-parrot meme has already died now that RL enables out-of-distribution behavior.

The chat point and talking dog syndrome are both reasonable and I generally agree with them.


Yeah, this is a lot of what irks me about that article as well. The author hasn't made any groundbreaking discoveries about the inner workings of LLMs; he just claims LLMs work a certain way and then complains that current language use doesn't align with his assertions.


the role of a 'developer' is ever-shifting, even now a lot of swe roles expect a good amount of product or customer-facing skills


the majority of everyday customers have never heard of an API and prefer to call in via phone

in that medium, llms are so much better than old phone trees and waiting on hold


I think the point is: If there is an API somewhere in Company's systems that does what the customer wants, why have a phone tree or an LLM in the way? Just add a button to the app itself that calls that API.


most support volume comes through voice, and you need a layer to interpret what the customer's intent is

additionally, for many use cases it's not feasible from an eng standpoint to expose a separate api for each entire workflow; instead they typically have many smaller composable steps that need to be strung together in a certain order depending on the situation

it's a good fit for an llm + tools
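The pattern described above can be sketched roughly like this. Everything here is a hypothetical stand-in: the tool names, the intent-to-plan mapping (which in a real system would be the LLM choosing the next tool from the conversation), and the context passing between steps.

```python
# Minimal sketch of "llm + tools": small composable steps exposed as tools,
# strung together in an order that depends on the caller's intent.

def lookup_account(ctx):
    # Hypothetical step: resolve the caller to an account record.
    return {"account_id": "A-123"}

def get_order_status(ctx):
    # Hypothetical step: fetch order status for the resolved account.
    assert "account_id" in ctx  # an earlier step must have run first
    return {"order_id": "O-456", "status": "shipped"}

TOOLS = {"lookup_account": lookup_account, "get_order_status": get_order_status}

def plan_from_intent(utterance):
    # Stand-in for the model: a real system would let the LLM pick each
    # next tool from the transcript rather than use a fixed plan.
    if "order" in utterance.lower():
        return ["lookup_account", "get_order_status"]
    return []

def run_support_call(utterance):
    ctx = {"utterance": utterance}
    for tool in plan_from_intent(utterance):
        ctx.update(TOOLS[tool](ctx))  # later steps see earlier outputs
    return ctx

print(run_support_call("Where is my order?")["status"])  # "shipped"
```

The point of the sketch is that no single "handle order inquiry" API has to exist; the orchestration layer composes the smaller steps per call.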


There's no reason the app itself couldn't string together those composable steps into an action performed when the user invokes it. OP's point is that neither an LLM nor a voice layer is really required, unless you're deliberately aiming to frustrate the user by adding extra steps (chat, phone call). Customer intent can be determined with good UX.


it's the opposite: the majority of users prefer to get support via chat or phone

navigating ux is still difficult in 2026

the average hn user is leagues above what the average customer or even smb knows about tech and ux, just not realistic for them to redesign their apis


Accounting for inflation, rents barely moved from 2022-2024 in D.C.: https://ipropertymanagement.com/research/average-rent-by-yea...

I suspect most of the rent increase is due to inflation (sticker price). Nonetheless rents are still rising, which means 60k units were not enough to keep up with demand.
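To make the inflation adjustment concrete, here's a toy calculation. The rent figures are hypothetical (not the linked dataset) and the CPI rates are rough US annual values; the point is just how an ~8% sticker increase can be nearly flat in real terms.

```python
# Deflate each year's nominal rent by cumulative CPI to get real rent.
nominal_rent = {2022: 2000, 2023: 2100, 2024: 2160}    # $/mo, hypothetical
cpi_rate = {2022: 0.080, 2023: 0.041, 2024: 0.029}     # rough US CPI

deflator = 1.0
real_rent = {}
for year in sorted(nominal_rent):
    deflator *= 1 + cpi_rate[year]           # cumulative price level
    real_rent[year] = nominal_rent[year] / deflator

nominal_change = nominal_rent[2024] / nominal_rent[2022] - 1
real_change = real_rent[2024] / real_rent[2022] - 1
print(f"nominal: {nominal_change:+.1%}, real: {real_change:+.1%}")
```

With these numbers the nominal change is +8.0% while the real change comes out under 1%, which is the "barely moved after inflation" framing.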


I'm sure 3 years noted for their high inflation rates are indicative of the preceding 7 years which were not. /s


either way, it's clear they didn't build enough


It's not, because it's not clear that building actually lowers prices.

