Been going back and forth on this with open source tools I've built. The training data argument is valid, but honestly the more immediate version of the same problem is that someone can just take your repo, feed it to an agent, and have their own fork in an afternoon.
The moat used to be effort: nobody wants to rewrite this from scratch (especially when it's free). What's left is actually understanding why the thing works the way it does. Not sure that's enough to sustain open source long-term? I guess we all have to get used to it?
> but honestly the more immediate version of the same problem is that someone can just take your repo, feed it to an agent, and have their own fork in an afternoon.
Indeed, I've got a few applications I've built or contributed to that are (A)?GPL, and for those I do worry about this AI-washing technique. For libraries that are MIT or otherwise permissive, I don't really care. (I default to *GPL for applications, MIT/Apache/etc for libraries.)
Same energy here. I was sitting on 50+ .env files across various projects with plaintext API keys and it always bothered me but never enough to actually fix it. AI dropped the effort enough that I just had a dedicated agent run at it for a few days — kept making iterations while I was using it day to day until it landed on a pretty solid Touch ID-based setup.
This mix of doing my main work on complex stuff (healthcare) with heavy AI input, and then having 1-2 agents building lighter tools on the side, has been surprisingly effective.
What I find interesting is that AI is surprisingly bad at writing agentic flows. We're setting up a lightweight system where the core logic lives in markdown files (agent identity, rules, context, memory) to create specific report outputs.
Every time we ask the model to help build out this system, it tries to write a Python script instead. It cannot stop itself from reaching for "real code." The idea that the orchestration layer is just structured text that another LLM reads is somehow alien to it, even though that's literally how it works.
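For anyone picturing what "the orchestration layer is just structured text" means in practice, here's a minimal illustrative sketch (file names and contents are made up, not our actual setup):

```markdown
<!-- report-builder.md: read verbatim by the LLM at the start of each run -->
# Agent: report-builder

## Identity
You produce the weekly usage report. Nothing else.

## Rules
- Read context.md before writing anything.
- Append one line to memory.md after each run: date, report name, anomalies.
- Output only the report body. No commentary, no code.
```

There's no interpreter and no Python; the "program" is just prose that the next model invocation reads and follows.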
It's like asking a native speaker to teach their language and they start building Duolingo
Curious, why remove Touch ID? I've been moving everything into it; seems like a really good mix of convenience + security (especially if the alternative is copying your key into AI :) )
We've been working on exactly this problem from the credential layer side. The root issue isn't that frameworks lack auth features — it's that .env files are the path of least resistance, and every framework optimizes for that path. Not just a problem for OpenClaw but also for the more 'trusted' regular CLI agents.
One thing the report doesn't cover: even with perfect credential injection, agents can still leak secrets through their output. An agent that received a key via a proxy can print it into a chat window, a log, or a commit message.
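To make the failure mode concrete, here's a minimal output-redaction sketch (purely illustrative, not any particular product's implementation): before an agent's output reaches a chat window, log, or commit, scrub any credential the proxy injected during the session.

```python
# Exact-match scrubbing of injected secrets from agent output.
# The secrets list is whatever the credential proxy injected this session.

def redact(text: str, injected_secrets: list[str]) -> str:
    """Replace every known injected secret with a placeholder."""
    for secret in injected_secrets:
        text = text.replace(secret, "[REDACTED]")
    return text

agent_output = "Done. For reference, the key I used was sk-live-abc123."
print(redact(agent_output, ["sk-live-abc123"]))
# -> Done. For reference, the key I used was [REDACTED].
```

Exact-match scrubbing is the easy part; an agent can still base64-encode a key or split it across lines, so redaction is a backstop, not a fix.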
Same here!! 39, working through a backlog of side projects that never got built before. The velocity is insanely fun compared to even two years ago.
Best part has been building stuff with my kids. They come up with game ideas, we prototype them together, and they actually ship. Watching them go from "what if it did this" to a working thing they can show their friends has been incredibly enjoyable (instead of them asking why I'm behind my computer again)
I always liked coding but honestly liked the end result more.
We've tried a bunch of approaches, always comes down to the same few things:
- Built internal tooling to keep keys out of AI chats and anywhere they could leak. The moment a raw key enters a conversation or a shared space, you've lost control of it.
- LLM gateways with capped virtual keys per developer and separate service accounts. If a key leaks, it's easy to kill, doesn't affect the product, and the damage is capped — not your whole billing account.
- A scoped intermediary layer for any autonomous agents. Anything running without a human in the loop gets its own access that we can kill in seconds.
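The "capped virtual key" idea from the second bullet can be sketched in a few lines (names and numbers are hypothetical): each developer gets a virtual key mapped to the real one, with a spend cap and a kill switch, so a leak is cheap to contain.

```python
from dataclasses import dataclass

@dataclass
class VirtualKey:
    owner: str
    spend_cap_usd: float
    spent_usd: float = 0.0
    revoked: bool = False

    def authorize(self, cost_usd: float) -> bool:
        """Allow the request only if the key is live and under its cap."""
        if self.revoked or self.spent_usd + cost_usd > self.spend_cap_usd:
            return False
        self.spent_usd += cost_usd
        return True

key = VirtualKey(owner="dev-alice", spend_cap_usd=50.0)
assert key.authorize(10.0)   # within cap: allowed
key.revoked = True           # kill switch flips one flag, not a billing account
assert not key.authorize(1.0)
```

The point of the indirection is the blast radius: revoking `dev-alice`'s virtual key kills her agent's access in seconds while the real provider key, and everyone else's work, stays untouched.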
We ended up building some custom tooling here specifically for working with AI agents. There's always this tension between the easy way (just paste it into the chat, it'll be fine) and the proper way, which usually ends up being too cumbersome for anyone to actually follow.
The approaches make sense for teams with engineering resources to build internal tooling. The LLM gateway layer is smart — virtual keys with caps is exactly the right mental model. The hard part is most solo devs and small teams never get around to building that layer, which is where the incidents happen. We built CloudSentinel specifically for that gap — automatic revocation on raw request count, no internal tooling required. Happy to share more if useful.