> Slack won't open up their data moat to AI, which is shameful.
Ah yes. It's shameful that Slack won't open its data moat to AI. You know, those millions of chats (including private data) by people who didn't give consent to this.
> You know, those millions of chats (including private data) by people who didn't give consent to this
I'm pretty sure the company you work for owns your work chat, and that what you say on company slack constitutes business information.
There are a lot of things people don't consent to. Being born. Breathing in the air molecules that come from other people's bodies. Looking at ugly things. Hearing annoying sounds. It'll be okay.
Could there ever exist anything that wouldn't be okay? What's the difference between something that will be okay and something that won't? I'm guessing the things that won't be okay are the things that might pose an obstacle for AI "progress".
In general, companies are the ones showing reluctance, much more than their employees. There's still a morass of unanswered security, privacy, and legal questions about LLM use in general, not to mention the huge unknown of total lifecycle costs.
I operate with the assumption that the company can access my private DMs on enterprise slack if they want to. With that, users are still allowed to be concerned if the company is going to use that information for AI use cases. I’d prefer that all AI stay away from my private DMs.
> I'm pretty sure the company you work for owns your work chat, and that what you say on company slack constitutes business information.
It does. And a lot of this information is highly sensitive. Imagine my company's surprise if Slack would not be shameful and would just open up its data moat to AI.
> There are a lot of things people don't consent to. Being born.
Demagoguery and non sequiturs are not arguments.
But I guess that's what passes for "arguments" for AI maximalists.
The thing lags a few seconds while typing a message on a 20-core, 128 GB RAM machine. That's with their desktop (Electron) app. Mercifully, the web app works better.
Still, CC blows it out of the water. Slack is that bad.
Something important must be different about our Slack environments. Maybe it's the number of users, or possibly the OS?
We're a small company (about 150 Slack users), and I've run the Slack (Electron) app on a 16GB M2 (macOS) and a 4GB Chromebook (running a non-ChromeOS Linux), and it has never had any noteworthy performance issues.
It's a terminal wrapper for the Anthropic API. It somehow balloons to 68 gigabytes when all it needs to do is call an API and slowly draw a few hundred characters on screen. And they can't even do that without flickering. Oh yes, and until very recently it would also consume a significant percentage of CPU just waiting for input to a slash command.
Yes, on that same 20-core, 128 GB RAM machine.
You surely must be kidding. Slack is amazing, cutting-edge, high-performance tech in comparison, as it has about two orders of magnitude more features than a TUI API wrapper.
> Is there anything Flash could produce that wouldn't render these days with SVG + CSS + JS?
This sounds like a "is there anything you can do in C++ or Javascript that you couldn't do in Brainfuck?".
Flash was a complete authoring environment. Yes, you can replicate the output in JS+CSS (or more likely JS+Canvas/WebGL/WebGPU), but at what cost and with how much effort?
> Why be angry at Steve for playing around with a fun hobby project that goes nowhere?
People dislike scam artists, hype artists and bullshitters. Especially when said artists had something actually useful to contribute once upon a time. E.g. Yegge's Platform Rant [1] is still required reading IMO.
Now he uncritically and unapologetically pushes extremely low-quality AI slop while first trying to prop up Amp, then trying to sell a book, then trying to sell a crypto scam, now trying to sell a vibe-coded database. All the while, his projects have the basest of basic ideas but somehow need hundreds of thousands of lines of AI-generated low-quality slop code to barely function.
The contempt here is the same as for idiots who uncritically run clawdbot and other AI bullshitters and grifters.
Compare this to @simonw, who constantly evaluates coding agents, explains what he does in clear, coherent language that doesn't rely on ChatGPT to invent inane new terms for existing things, and stands behind his work and motivations: https://simonwillison.net
People should stop giving Steve Yegge as much attention as they do.
It's slop on top of slop on top of slop. It's not even quality slop. Apart from bloviated self-aggrandising blog posts, the ideas are trivial, and the execution is beyond horrendous.
Look beyond the ChatGPT-generated terminology to see the supervisors, loops and worker processes of Gas Town. Now it's a freelance board with AI agents as freelancers. But sure, polecats, mayors, stamps and character sheets. Whatever floats your upcoming crypto rug pull [1].
This is what he writes: "build stuff really, really fast. So fast that your biggest problem will be ideas." We've yet to see ideas built using the slopcoded monstrosities.
> Because LLMs don't understand things to begin with.
Ok, that’s fair. But I think the comment was making a distinction between the big picture and other types of “understanding.” I agree that it is incorrect to say LLMs understand anything, but I think that was just an informal turn of phrase. I’m saying I don’t think there’s something special about “big picture” information processing tasks, compared to in-detail information processing tasks, that makes them uniquely impossible for LLM.
The other objections seem mostly to be issues with current tooling, or the sort of capacity problems that the LLM developers are constantly overcoming.
> I’m saying I don’t think there’s something special about “big picture” information processing tasks, compared to in-detail information processing tasks, that makes them uniquely impossible for LLM.
LLMs can do neither reliably, because to do that you need understanding, which LLMs don't have. You need to learn from the codebase and the project, which LLMs can't do.
On top of that, to have the big picture, LLMs would have to be inside your mind: to know and correlate the various Google Docs and Figma files, the Slack discussions, the various notes scattered on your system, etc.
They can't do that either because, well, they don't understand or learn (and no, clawdbot will not help you with that).
> The other objections seem mostly to be issues with current tooling, or the sort of capacity problems that the LLM developers are constantly overcoming.
These are not limitations of tooling, and no, LLM developers are not even close to overcoming them, especially not "constantly". The only "overcoming" has been the gimmicky "1 million token context", which doesn't really work.
I would say that it's very germane to my original statement. Understanding is absolutely fundamental to strategy and it is pretty much why I can say LLMs can't be strategic.
To really strategize you have to have a mental model of, well, everything, and be able to sift through that model to know which elements are critical or not. And it includes absolutely everything: human psychology, to understand how people might feel about certain features or usage models; the future outlook for which popular framework to choose and whether it will be as viable next year as it is today; the geography and geopolitics of which cloud provider to use; the knowledge of human sentiment around ethical or moral concerns; the financial outlook for VC funding and interest rates. The list goes on and on. The scope of what information may be relevant is unlimited in time and space. It needs creativity, imagination, intuition, inventiveness, discernment.
Claude is perfectly capable of all of this. Give it access to meeting notes and notion/linear and it can elegantly connect the dots within the context of a given problem.
Now I can't get the Pulp Fiction dialog out of my head.
- Do you know what they call code in France?
- No
- Le code