
> As far as I've ever heard, "le code" used in a codebase is uncountable

Now I can't get the Pulp Fiction dialogue out of my head.

- Do you know what they call code in France?

- No

- Le code


As an additional wrinkle, the word seems quite French in origin in this case.

1. No one asked them.

2. Half (or more) of those things they bought.


> Anthropic is the power company that has a 3D printer to make a faster Maglev than anyone.

And yet they can't: https://news.ycombinator.com/item?id=47281246


> Slack won't open up their data moat to AI, which is shameful.

Ah yes. It's shameful that Slack won't open up its data moat to AI. You know, those millions of chats (including private data) by people who didn't consent to this.


> You know, those millions of chats (including private data) by people who didn't give consent to this

I'm pretty sure the company you work for owns your work chat, and that what you say on company slack constitutes business information.

There are a lot of things people don't consent to. Being born. Breathing in the air molecules that come from other people's bodies. Looking at ugly things. Hearing annoying sounds. It'll be okay.


> It'll be okay.

Could there ever exist anything that wouldn't be okay? What's the difference between something that will be okay and something that won't? I'm guessing the only things that won't be okay are the things that might pose an obstacle to AI "progress".


> I'm pretty sure the company you work for owns your work chat, and that what you say on company slack constitutes business information.

That’s not a valid argument. The company itself would still need to consent.


The company in the very article this thread is about wants this.

Lots of companies want this.

Companies should have the option. Right now they're completely locked out of taking advantage of AI with their business data locked away in Slack.

Slack is a graveyard.


In general, the companies are the ones showing reluctance, much more than their employees. There's still a morass of unanswered security, privacy, and legal questions around LLM use in general. Not to mention the huge unknown of total lifecycle costs.

The company writing the article this HN thread is about wants this. Lots of companies do.

Today there is no option because Slack is scared to death of losing their leverage.

Companies want full rights to their data, and Slack is lording over it like a dragon protecting treasure.


It's amazing how every reply failed to realize you (and the post) were talking about (a) enterprise Slack usage and (b) AI use by the company itself.

I operate with the assumption that the company can access my private DMs on enterprise slack if they want to. With that, users are still allowed to be concerned if the company is going to use that information for AI use cases. I’d prefer that all AI stay away from my private DMs.

> I'm pretty sure the company you work for owns your work chat, and that what you say on company slack constitutes business information.

It does. And a lot of this information is highly sensitive. Imagine my company's surprise if Slack stopped being "shameful" and just opened up its data moat to AI.

> There are a lot of things people don't consent to. Being born.

Demagoguery and non sequiturs are not arguments.

But I guess that's what passes for "arguments" for AI maximalists.


> Claude Code could absolutely build a chat client in the hands of someone who could also build the rest around it.

So why can't Anthropic build a CLI client that doesn't flicker and doesn't consume 68 GB of RAM to run a CLI wrapper on top of their API? https://x.com/jarredsumner/status/2026497606575398987


That's still light years better than Slack.

The thing lags a few seconds while typing a message on a 20-core, 128 GB RAM machine. That's with their desktop (Electron) app. Mercifully, the web app works better.

Still, CC blows it out of the water. Slack is that bad.


Something important must be different about our Slack environments. Maybe it's the number of users, or possibly the OS?

We're a small company (about 150 Slack users), and I've run the Slack (Electron) app on a 16GB M2 (macOS) and a 4GB Chromebook (running a non-ChromeOS Linux), and it has never had any noteworthy performance issues.

It still sucks, but not because of performance.


How is it "light years better than Slack"?

It's a terminal wrapper for the Anthropic API. It somehow balloons to 68 gigabytes when all it needs to do is call an API and slowly draw a few hundred characters on screen. And they can't even do that without flickering. Oh yes, and until very recently it would also consume a significant percentage of CPU just waiting for input to a slash command.

Yes, on that same 20-core, 128 GB RAM machine.

You surely must be kidding. Slack is amazing cutting-edge, high-performance tech in comparison, as it has about two orders of magnitude more features than a TUI API wrapper.
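For what it's worth, the rendering complaint in this thread really is about a solved problem. A minimal sketch (mine, nothing to do with Claude Code's actual implementation): the classic flicker-free way to redraw a terminal frame is to move the cursor home and overwrite each line in place, erasing only stale tails, instead of wiping the whole screen (`ESC[2J`) on every frame.

```python
import sys

def render_frame(lines):
    """Return the escape-code string that redraws `lines` in place."""
    out = ["\x1b[H"]                   # cursor home; no full-screen clear (that's what flickers)
    for line in lines:
        out.append(line + "\x1b[K\n")  # overwrite line, erase its stale tail
    out.append("\x1b[J")               # erase any leftovers below the new frame
    return "".join(out)

# Draw two successive "frames"; the second overwrites the first in place.
sys.stdout.write(render_frame(["status: thinking...", "tokens: 0"]))
sys.stdout.write(render_frame(["status: streaming", "tokens: 42"]))
```

Each redraw only touches the cells that changed their visible content, which is why even a naive TUI done this way doesn't flicker.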


Your instance does that. Mine does no such thing, and I don't know anyone for whom it does.

Not to say it doesn’t, but it’s clearly not a universal issue.


They are using react for that

Not even joking


Can’t != not prioritizing

No. They literally can't.

E.g. they claim it's a difficult task to render a few hundred characters on screen, and that their CLI wrapper is a tiny game engine: https://x.com/trq212/status/2014051501786931427

They literally had to buy Bun to get someone who understands how things work to fix this.


that is 1/8 of Slack so it’d be progress :)

Slack doesn't require nearly as much to run. And Slack has about two orders of magnitude more functionality.

Anthropic? The company whose CLI wrapper for their own API was consuming 68 GB RAM (yes, that's 68 gigabytes)? https://x.com/jarredsumner/status/2026497606575398987

You'll rue the day when they decide to release a Slack lookalike.


> Is there anything Flash could produce that wouldn't render these days with SVG + CSS + JS?

This sounds like asking, "is there anything you can do in C++ or JavaScript that you couldn't do in Brainfuck?"

Flash was a complete authoring environment. Yes, you can replicate the output in JS+CSS (or more likely JS+Canvas/WebGL/WebGPU), but at what cost and with how much effort?


> Why be angry at Steve for playing around with a fun hobby project that goes nowhere?

People dislike scam artists, hype artists and bullshitters. Especially when said artists had something actually useful to contribute once upon a time. E.g. Yegge's Platform Rant [1] is still required reading IMO.

Now he uncritically and unapologetically pushes extremely low-quality AI slop: first trying to prop up Amp, then trying to sell a book, then a crypto scam, now a vibe-coded database. All the while proclaiming that his projects have the basest of basic ideas but somehow need hundreds of thousands of lines of AI-generated low-quality slop code to barely function.

The contempt here is the same as for idiots who uncritically run clawdbot and other AI bullshitters and grifters.

Compare this to @simonw who constantly evaluates coding agents, explains what he does in a coherent clear language that doesn't use ChatGPT to invent new inane terms for existing things, and stands behind his work and motivations: https://simonwillison.net

[1] https://gist.github.com/chitchcock/1281611


People should stop giving Steve Yegge as much attention as they do.

It's slop on top of slop on top of slop. It's not even quality slop. Apart from the bloviating, self-aggrandising blog posts, the ideas are trivial and the execution is beyond horrendous.

Look beyond the ChatGPT-generated terminology to see the supervisors, loops and worker processes of Gas Town. Now it's a freelance board with AI agents as freelancers. But sure, polecats, mayors, stamps and character sheets. Whatever floats your upcoming crypto rug pull [1].

This is what he writes: "build stuff really, really fast. So fast that your biggest problem will be ideas." We've yet to see ideas built using the slopcoded monstrosities.

[1] https://pivot-to-ai.com/2026/01/22/steve-yegges-gas-town-vib...


I suspect ns;nt will be one of those HN stories that help reframe things here for the better. It will stand the test of time.

ns;nt (No Skill; No Taste)

https://blog.kinglycrow.com/no-skill-no-taste/


> Why can’t LLMs understand the big picture?

Because LLMs don't understand things to begin with.

Because LLMs only have access to source code and whatever .md files you've given them.

Because they have biases in their training data that overfit them on certain solutions.

Because LLMs have a tiny context window.

Because LLMs largely suck at UI/UX/design, especially when they don't have reference images.

Because...


> Because LLMs don't understand things to begin with.

Ok, that’s fair. But I think the comment was making a distinction between the big picture and other types of “understanding.” I agree that it is incorrect to say LLMs understand anything, but I think that was just an informal turn of phrase. I’m saying I don’t think there’s something special about “big picture” information processing tasks, compared to in-detail information processing tasks, that makes them uniquely impossible for LLM.

The other objections seem mostly to be issues with current tooling, or the sort of capacity problems that the LLM developers are constantly overcoming.


> I’m saying I don’t think there’s something special about “big picture” information processing tasks, compared to in-detail information processing tasks, that makes them uniquely impossible for LLM.

LLMs can do neither reliably, because to do that you need understanding, which LLMs don't have. You need to learn from the codebase and the project, which LLMs can't do.

On top of that, to have the big picture, LLMs would have to be inside your mind. To know and correlate the various Google Docs and Figma files, the Slack discussions, the various notes scattered across your system, etc.

They can't do that either because, well, they don't understand or learn (and no, clawdbot will not help you with that).

> The other objections seem mostly to be issues with current tooling, or the sort of capacity problems that the LLM developers are constantly overcoming.

These are not limitations of tooling, and no, LLM developers are not even close to overcoming them, especially not "constantly". The only "overcoming" has been the gimmicky "1 million token context", which doesn't really work.


I would say that it's very germane to my original statement. Understanding is absolutely fundamental to strategy and it is pretty much why I can say LLMs can't be strategic.

To really strategize you have to have a mental model of, well, everything, and be able to sift through that model to know which elements are critical and which aren't. And it includes absolutely everything: human psychology, to understand how people might feel about certain features or usage models; the outlook for which popular framework to choose and whether it will be as viable next year as it is today; the geography and geopolitics of which cloud provider to use; the knowledge of human sentiment around ethical or moral concerns; the financial outlook for VC funding and interest rates. The list goes on and on. The scope of what information may be relevant is unlimited in time and space. It needs creativity, imagination, intuition, inventiveness, discernment.

LLMs are fundamentally incapable of this.


Claude is perfectly capable of all of this. Give it access to meeting notes and Notion/Linear and it can elegantly connect the dots within the context of a given problem.

Yes, it's just a matter of capability, not skill.

It routinely can't "connect dots" on a 10 kloc project with design notes right there in the same project.

It routinely cannot read files more than 2k lines long.

You can't even provide detailed CLAUDE.md instructions because "file is too large and will affect context".

But sure. "Just give it access to a magnitude more info and it will be able to do stuff".


Yeah, it's strange to me that the default assumption is that current LLMs are already human-level AGI.
