AI is making everyone I've seen faster. I'd say 30% of the tickets I've seen in the last month have been solved just by clicking the delegate-to-AI button.
I'll be honest, just the idea of working there makes me feel like vomiting. For me, they are bizarrely evil. They're not evil like, "we're going to destroy our competition through anti competitive practices," (which they do), but "let's destroy a whole generation of minds."
And now with the glasses. I mean, jeeze. Can there be a stronger signal of not caring for others?
It's as if Meta sees people as cattle.
Though I think a lot of techies see humans as cattle, truthfully.
What was your rationale?
I guess this question is out-of-the-blue, and I don't mean for you to justify your existence, but I've never understood why people choose to work for Meta.
I feel the same - would I like a meta paycheck, sure, but I couldn't look at myself in the mirror knowing what the company I'm giving my work to does to people's brains (not just the young, though that is the most reprehensible).
I told my son I would disown him if he worked for Facebook, for the reasons stated above.
Then he took a contracting gig for Meta. His rationalization was that the project was an ill-specified prototype that would never see the light of day - if they wanted to throw money at him for stuff like that, he would accept it.
That gig is finished, and he's now thoroughly disillusioned with working for big tech.
From this angle, what's the difference between Meta and a junk food company?
Both sell things that are bad for you, but that the consumer has complete control over whether or not to consume.
And not all of what Meta is selling is bad. There's a lot of information exchanged on Facebook, Instagram, etc. that is good for society, like health and nutrition advice.
I've always attributed it to people being very good at convincing themselves they aren't one of the bad guys. A big paycheck makes it even easier to ignore what you are a part of.
Where livelihood is concerned, rational individuals with strong morals can do irrational and immoral things (e.g., work at the Palantirs of the world).
TLDR: incentives don't just shape perception, they form it
I wrote a SaaS project over the weekend and was amazed at how fast Claude implemented features. One sentence turned into a TDD that looked right to me, and the features worked.
But now, three weeks later, I only have an outline of how it works, and regaining context on the system sounds painful.
In projects I hand-wrote, I could probably still locate the major files and recall the system architecture after years away.
I used to enjoy going to them too. I remember price comparing though, and thinking that stuff seemed too expensive. Fry's started to seem run down over time.
Mermaid diagrams are even better because you don't waste characters on the visual representation; you spend them on the relationships between nodes. For example:
```mermaid
graph TD
    User -->|Enters Credentials| Frontend[React App]
    Frontend -->|POST /auth| API[NodeJS Service]
    API -->|Query| DB[(PostgreSQL)]
    API --x|Invalid| Frontend
    DB -->|User Object| API
    API -->|JWT| Frontend
```
Mermaid diagrams automatically render in Markdown and in IDE chat windows such as VS Code or Cursor. So you get the best of both worlds: a graph you can look at and manipulate with the mouse, but also in a format LLMs can read.
FWIW, I do aim for inbox zero for email, and similar for chat apps (Slack/Teams); otherwise it piles up and gets overwhelming. I'm referring more to the "only the exact thing you're currently working on open" part.

I agree systems are needed. For me it's Obsidian for notes, inbox zero, and the OneTab extension, which lets me close tabs without fear of "losing" them completely. I've learned that it's also a trap to over-complicate my system: even Todoist, which is fairly minimal, was semi-problematic, though I may come back to it. For now I just use manual TODO checklists in Obsidian, with a small table that pulls them all into a single dashboard file for reference.
Subsidized plans that only work with their agent (Claude Code), and fine-tuning their models to work best with that agent. But it's not much of a moat once every leading model is great at tool calling.
I do think Claude Code as a tool gave Anthropic some advantages over others. They have plan mode, a todo list, an askUserQuestion tool, hooks, etc., which greatly extend Opus's capabilities. I agree that others (Codex, Cursor) quickly copy these features, but that's the nature of the race, and Anthropic has to keep innovating to maintain its edge.
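For illustration, a tool like the askUserQuestion one mentioned above boils down to a schema the model can call plus a local handler that blocks on the human's answer. This is a minimal sketch, not Anthropic's actual API; the names (`ask_user_question`, `dispatch_tool_call`) and schema shape are hypothetical.

```python
# Hypothetical sketch of an "ask the user a question" agent tool.
# The tool definition is what the model sees; the dispatcher runs locally.

ASK_USER_QUESTION = {
    "name": "ask_user_question",
    "description": "Pause and ask the human a clarifying question.",
    "input_schema": {
        "type": "object",
        "properties": {
            "question": {"type": "string"},
            "options": {"type": "array", "items": {"type": "string"}},
        },
        "required": ["question"],
    },
}

def dispatch_tool_call(name: str, args: dict, read=input) -> str:
    """Route a model-issued tool call to a local handler.

    `read` defaults to input() so the agent blocks until the user answers;
    it is injectable so the loop can be tested without a terminal.
    """
    if name == "ask_user_question":
        prompt = args["question"]
        if args.get("options"):
            prompt += " [" + " / ".join(args["options"]) + "]"
        return read(prompt + " ")
    raise ValueError(f"unknown tool: {name}")
```

The point is that the "tool" is mostly plumbing: the leverage comes from the model knowing when to call it instead of guessing.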
The biggest advantage by far is the data they collect along the way. Data that can be bucketed by real devs, with top-tier signals extracted from it. All that data + signals + whatever else they cook up can be added back into the training corpus and the models retrained / version-bumped on the new set. Rinse and repeat.
(This is also why all the labs, including some Chinese ones, are subsidizing / me-too-ing coding agents.)
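The rinse-and-repeat loop described above can be sketched roughly like this. Everything here is an assumption about how such a flywheel might work; the field names (`edit_accepted`, `final_diff`, etc.) are invented for illustration, not anything a lab has documented.

```python
# Illustrative sketch of a coding-agent data flywheel: keep sessions whose
# edits the developer accepted and didn't revert, and fold them back into
# the training corpus for the next model version. All field names are
# hypothetical.

def build_next_corpus(sessions: list[dict], base_corpus: list[tuple]) -> list[tuple]:
    # "Signal extraction": acceptance without a quick revert is treated
    # as a weak label that the agent's output was actually good.
    accepted = [
        s for s in sessions
        if s.get("edit_accepted") and not s.get("reverted_soon_after")
    ]
    # Each accepted session becomes a (prompt, completion) training pair.
    new_pairs = [(s["prompt"], s["final_diff"]) for s in accepted]
    return base_corpus + new_pairs
```

Each retraining round would run on `build_next_corpus(...)` of the previous one, which is the "rinse and repeat" part.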
(I work at Cursor.) We have all of these! Plan mode with a GUI plus the ability to edit plans inline. Todos. A tool for asking the user questions, which is called automatically or can be invoked manually. Hooks. And you can use Opus or any other model with them.