Hacker News | new | past | comments | ask | show | jobs | submit | eibrahim's comments

The article makes a reasonable case for internal tools but glosses over the elephant in the room: if every company can vibe code their own B2B tools, what happens to the SaaS vendors? The ones that survive will be the ones where the distribution and ecosystem around the product matters more than the code itself. Nobody is going to vibe code their own Stripe or Salesforce, but the long tail of niche B2B tools is absolutely vulnerable.

They're right that nobody is vibe coding a full ERP system. But that's not really the threat. The threat is vibe coders building the 20% of Workday that 80% of small businesses actually use, and selling it for a fraction of the price. The market for small, focused tools that replace one expensive feature of a bloated enterprise platform is massive and growing fast.

The maker movement comparison is interesting but I think it breaks down in one key way: the marginal cost of software distribution is basically zero. 3D printing still requires physical materials and shipping. Vibe coded apps can reach users instantly if there's a discovery mechanism.

The real parallel might be the early web era where anyone could make a website but finding them required Yahoo directories and later Google. Right now vibe coded apps have the same discovery problem - they exist but there's no effective way to find or evaluate them.


The part about vibe coding lowering the barrier to building software is well established at this point. What nobody seems to be addressing is the distribution problem that creates. We're about to have an order of magnitude more software being built, but no corresponding improvement in how people discover and evaluate it. App stores were designed for a world where shipping software was hard. We need new discovery mechanisms for a world where shipping is easy.

Context divergence is a real problem once you have more than one person prompting on the same codebase. The git-native approach makes sense since that's already where the code lives. Have you seen cases where different team members' LLMs generate conflicting architectural patterns even with shared context? Curious how much shared context actually prevents drift vs. just documenting it.

cool concept. the million dollar homepage model is clever for bootstrapping initial attention.

interesting that you mention this is your first vibe coding project. curious where you plan to list it long-term for discovery. product hunt gives you a day of visibility but then what? feels like there's a gap in the market for a place where vibe-coded projects can live and get discovered organically.


The hook-driven status tracking is a nice pattern. Running multiple agents in parallel and needing visibility into what's happening is a real problem once you go past one or two. The git worktree automation is a smart touch too.
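For concreteness, here's the kind of thing I imagine the pattern looks like (a sketch, not OP's code): each agent's hook appends a status line to a shared JSONL file that a dashboard can tail. The event field names here are my assumptions, not a documented payload.

```python
# Hypothetical hook handler: append one status entry per hook event to a
# shared JSONL file, so a separate process can tail it for visibility.
# The "session_id"/"tool_name" keys are assumed, not a real hook schema.
import json
import time

def record_status(event: dict, path: str = "agent-status.jsonl") -> dict:
    """Append one status entry per hook event; return what was written."""
    entry = {
        "ts": time.time(),
        "agent": event.get("session_id", "unknown"),
        "tool": event.get("tool_name", "?"),
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

Append-only JSONL keeps it trivially safe for multiple agents writing at once, and `tail -f` is all the dashboard backend you need to start.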

Curious about the hook interface - is it specific to Claude Code or generic enough to work with other agent frameworks?


This is a really important cautionary tale about autonomous AI agents operating without proper guardrails. The gap between 'AI agent that can do useful tasks' and 'AI agent that understands consequences' is still enormous. It highlights why having human oversight in the loop matters — whether it's content review, action approval, or just sanity-checking outputs before they go live. The best setups treat the AI as a capable but supervised collaborator, not a fully autonomous actor.



the self-extending skills part is really interesting. i've been building AI agents with persistent memory for a while now and the skill/tool extensibility piece is where most frameworks fall short. they either give you a rigid plugin system or completely open-ended function calling with no guardrails.

how are you handling the trust boundary for self-created skills? that's usually where things get tricky.

also curious about the memory architecture. file-based memory (like markdown files the agent reads/writes) has been surprisingly effective in my experience compared to fancy vector DB approaches. simpler to debug, easier for the agent to reason about, and way less infrastructure overhead. what's your approach?
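to illustrate what i mean by file-based memory, a minimal sketch (names are mine): the whole memory layer is one markdown file that both the agent and a human can read, append to, and diff in git.

```python
# Minimal sketch of file-based agent memory: notes live in a single
# markdown file the agent (and a human) can read, append to, and diff.
from datetime import date
from pathlib import Path

def remember(note: str, path: str = "MEMORY.md") -> None:
    """Append a dated bullet; create the file with a header on first use."""
    p = Path(path)
    header = "" if p.exists() else "# Agent memory\n\n"
    with p.open("a") as f:
        f.write(f"{header}- [{date.today()}] {note}\n")

def recall(path: str = "MEMORY.md") -> str:
    """Return the full memory file, or empty string if nothing stored yet."""
    p = Path(path)
    return p.read_text() if p.exists() else ""
```

that's the whole appeal: no embeddings, no retrieval infra, and when the agent misremembers something you just open the file and fix it.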


> how are you handling the trust boundary for self-created skills?

At least in the Claude model, there's nothing a skill can do that the model couldn't already do? Isn't it still the same tool calls underneath, with the same permissions?

Think of skills as plugins providing AGENTS.md snippets and a subdirectory of executables, as if those were part of the workspace to begin with.
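A rough sketch of that mental model (my own illustration, not the actual loader): "installing" a skill just means its AGENTS.md snippet gets concatenated into the context, and its executables sit in the workspace like any other script.

```python
# Illustrative only: gather every skill's AGENTS.md snippet from a
# skills/<name>/ layout into one context block. Directory names are
# assumptions about the layout, not a documented convention.
from pathlib import Path

def load_skill_snippets(skills_dir: str = "skills") -> str:
    """Concatenate each skill's AGENTS.md snippet into one context block."""
    parts = []
    for snippet in sorted(Path(skills_dir).glob("*/AGENTS.md")):
        parts.append(snippet.read_text().strip())
    return "\n\n".join(parts)
```

Which is why the trust boundary question reduces to the existing tool-call permissions: the snippet only tells the model what's there; running anything still goes through the same gates.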

