Yeah, I suspect the people railing against MCP don’t actually use agents much at all. MCP is super useful for giving your agent access to tools. The main alternative is CLI tools, if they exist, but they often don’t, or they’re just more awkward than a well-designed MCP. I let my agent use the GitHub CLI, but I also have MCPs for remote database access and Bugsnag access, so it can debug issues on prod more easily.
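To make that concrete, here’s roughly the shape of a tiny database MCP server, a minimal sketch using the official Python SDK’s FastMCP helper; the tool name, the read-only replica, and the query handling are placeholders for illustration, not my actual setup:

    # Minimal sketch of an MCP server (official Python SDK); the tool name,
    # replica path, and query handling are placeholders, not a real setup.
    import sqlite3
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("prod-db")

    @mcp.tool()
    def read_only_query(sql: str) -> str:
        """Run a read-only SQL query against a local replica and return the rows."""
        conn = sqlite3.connect("file:replica.db?mode=ro", uri=True)
        try:
            rows = conn.execute(sql).fetchall()
            return "\n".join(str(row) for row in rows)
        finally:
            conn.close()

    if __name__ == "__main__":
        mcp.run()

The agent then sees read_only_query as just another tool it can call, the same way it shells out to the GitHub CLI.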
If choosing the "wrong" model, or not wording your prompt in just the right way, is sufficient to not just degrade your output but make it actively misleading and worse than useless, then what does that say about the narrative that all this sort of work is about to be replaced?
I don't recall which bot he was using; it was a rushed portion of the presentation, meant to make the point that "yes, these tools exist, but be mindful of the output - they're not a magic wand".
What? People do this all the time. Sometimes manually, by invoking another agent with a different model and asking it to review the changes against the original spec. I just set up some reviewer/verifier sub-agents in Cursor that I can invoke with a slash command. I use Opus 4.5 as my daily driver, but I have reviewer sub-agents running Gemini 3 Pro and GPT-5.2-codex, and they each review the plan as well, and then the final implementation against the plan. Both sometimes identify issues, and Opus then integrates that feedback.
It’s not perfect, so I still review the code myself, but it helps decrease the number of defects I then have to have the AI correct.
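The manual version of that loop is simple enough to sketch; this is just the general idea, not the actual Cursor sub-agent config, and the model name and prompt wording are placeholders:

    # Rough sketch of a cross-model reviewer pass; model name and prompt
    # wording are placeholders, not the actual Cursor sub-agent setup.
    from openai import OpenAI

    client = OpenAI()

    def review_against_spec(spec: str, diff: str, model: str = "gpt-5.2-codex") -> str:
        response = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system", "content": (
                    "You are a strict code reviewer. Compare the diff against the "
                    "spec and list deviations, missed requirements, and likely defects."
                )},
                {"role": "user", "content": f"Spec:\n{spec}\n\nDiff:\n{diff}"},
            ],
        )
        return response.choices[0].message.content

The primary agent then gets that feedback back and decides what to integrate, which is essentially what the slash command automates.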
HN is such a bubble. ChatGPT is wildly successful, and about to be an order of magnitude more so, once they add ads. And I have never heard a non-technical person mention Altman. I highly doubt they have any idea who he is, or care. They’re all still using ChatGPT.
I have multiple friends who have 6+ cars. To be fair, they're pretty well-off (mid-six-figure income), but one family, for example, has:
- Husband Tesla daily driver
- Wife Bronco daily driver
- Truck to pull their boat
- Campervan for outdoor adventures
- Older car for teenager to drive
- 90s convertible for summer fun
Yeah, but your "cameras" also have a bunch of capabilities that hardware cameras don't, plus they're mounted on a flexible stalk in the cockpit that can move in any direction to update the view in real-time.
Also, humans kinda suck at driving. I suspect that in the endgame, even if AI can drive with cameras only, we won't want it to. If we could upgrade our eyeballs and brains to have real-time 3D depth mapping information as well as the visual streams, we would.
A complete inability to get true 360° coverage, which the neck has to swivel wildly across windows and mirrors to somewhat compensate for? Being able to get high FoV or high resolution, but never both? An IPD so low that stereo depth estimation unravels beyond 5m, which, in self-driving terms, is point-blank range?
Human vision is a mediocre sensor kit, and the data it gets has to be salvaged in post. The human brain was doing computational photography before it was cool.
What do you believe the frame rate and resolution of Tesla cameras are? If a human can tell the difference between two virtual reality displays, one with a frame rate of 36 Hz and a per-eye resolution of 1448x1876, and another display with numerically greater values, then the cameras that Tesla uses for self-driving are inferior to human eyes. The human eye typically has an effective resolution of 5 to 15 megapixels in the fovea, and the current highest-definition automotive cameras that Tesla uses just about clear 5 megapixels across the entire field of view. By your criterion, the cameras Tesla uses today are never high resolution. I can physically saccade my eyes by a millimeter here or there and see something that their cameras would never be able to resolve.
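For the curious, the arithmetic behind that comparison, using the figures quoted above (not measured by me):

    # Quick arithmetic on the figures quoted above.
    per_eye_pixels = 1448 * 1876
    print(per_eye_pixels / 1e6)   # ~2.72 megapixels per eye, at 36 Hz
    # Humans can reliably tell that display apart from a better one, yet even a
    # ~5 megapixel automotive camera spreads those pixels over its entire FoV,
    # while the fovea alone delivers an effective 5-15 megapixels.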
I can't figure out your position, then. You were saying that human eyes suck and are inferior to sensors because human eyes require interpretation by a human brain. You're also saying that if self-driving isn't possible with only camera sensors, then no amount of extra sensors will make up for the deficiency.
This came from a side conversation with other parties: one person noted that driving is possible with only human eyes, another said that human eyes are superior to cameras, and you disagreed. Then, when told that the only company approaching self-driving with cameras alone has cameras with worse visual resolution and worse temporal resolution than human eyes, you said you respect the grind because the cameras require processing by a computer.
If I understand correctly, you believe:
1. Driving should be possible with vision alone, because human eyes can do it, and human eyes are inferior to camera sensors and require post-processing, so obviously with superior sensors it must be possible.
2. Even when told that current automotive camera sensors are not actually superior to human eyes and also require post-processing, that just means camera-only approaches are the only way forward, and you "respect the grind" of a single company trying to make it work.
Is that correct? Okay, maybe that's understandable, but I'm confused, because 1 and 2 contradict each other. Help me out here.
My position is: sensors aren't the blocker, AI is the blocker.
Tesla put together a sensor suite that's amenable to AI techniques and gives them good enough performance. Then they moved on to getting better FSD hardware and rolling out newer versions of AI models.
Tesla gets it. They located the hard problem and put themselves on the hard problem. LIDAR wankers don't get it. They point at the easy problem and say "THIS IS WHY TESLA IS BAD, SEE?"
Outperforming humans in the sensing department hasn't been "hard" for over a decade now. You can play with sensors all day long and watch real-world driving performance vary by a measurement error. Because "sensors" was never where the issue was.
Yeah, Tesla gets it, except they’ve been promising actual FSD for a decade now and have yet to deliver. Their “robotaxi” service has like 30 cars, all with human safety monitors, and still crashes all the time. They’re a total fucking joke.
Meanwhile, Waymo (the LiDAR wankers) is doing hundreds of thousands of paid rides every week.
Yes, they are a mixed bag, but still useful.
And if on-device models get to the point where they're not a "mixed bag" and are genuinely useful, won't larger data center models be even more so?