I just got this WordPress plugin approved on wp.org.
It lets site owners control automated access from AI crawlers and agents with per-site and per-post rules: allow, deny, teaser previews, and rate limits. It can also return 402 if you want to experiment with paid access.
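To make the rule cascade concrete, here is a minimal sketch of per-post policy resolution. This is not the plugin's actual code; the rule names, paths, and status mapping are illustrative assumptions (with "paid" served as HTTP 402, as described above).

```python
# Hypothetical sketch of per-post policy resolution (not the plugin's code).
# A post-level rule, when present, overrides the site-wide default.

SITE_DEFAULT = "teaser"  # assumed site-wide rule

POST_RULES = {  # assumed per-post overrides
    "/pricing": "deny",
    "/blog/launch": "allow",
    "/premium/guide": "paid",  # experiment with paid access via HTTP 402
}

def resolve_policy(path: str) -> tuple[str, int]:
    """Return (action, http_status) for a crawler request to `path`."""
    action = POST_RULES.get(path, SITE_DEFAULT)
    status = {"allow": 200, "teaser": 200, "deny": 403, "paid": 402}[action]
    return action, status

print(resolve_policy("/premium/guide"))  # ('paid', 402)
```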
If a crawler signs requests, the plugin can verify RFC 9421 HTTP Message Signatures. Most crawlers do not sign yet, but I wanted the path to be there.
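For anyone unfamiliar with RFC 9421: the verifier reconstructs a "signature base" string from covered request components and checks the crawler's signature over it. The derived component names (`@method`, `@authority`, `@path`) are from the RFC itself; the `keyid` value and the crypto step (Ed25519 over this string) are omitted and assumed here.

```python
# Sketch of RFC 9421 signature-base construction. A verifier builds this
# string from the request, then checks the Ed25519 signature over it
# (the crypto step is omitted; keyid "site-key" is a made-up example).

def signature_base(components: dict[str, str], params: str) -> str:
    lines = [f'"{name}": {value}' for name, value in components.items()]
    lines.append(f'"@signature-params": {params}')
    return "\n".join(lines)

base = signature_base(
    {"@method": "GET", "@authority": "example.com", "@path": "/post/42"},
    '("@method" "@authority" "@path");created=1700000000;keyid="site-key"',
)
```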
It also publishes llms.txt, a JSON feed, and per-post Markdown endpoints.
We are building open source tooling around the Web Bot Auth IETF draft, which builds on RFC 9421 HTTP Message Signatures (the standard Cloudflare and others are pushing).
This week we shipped the following packages for website operators. They verify signed HTTP requests so crawler and agent traffic can be tied to a public/private keypair. Once you can verify identity, you can do allowlists, rate limits, analytics, and policy enforcement at the origin or edge.
- Node verifier middleware: @openbotauth/verifier-client (Express and Next.js style)
- Python verifier: openbotauth-verifier (FastAPI and Flask)
- Zero code reverse proxy: @openbotauth/proxy
- Registry signer utilities: @openbotauth/registry-signer (Ed25519 + JWKS)
- Test crawler and key generation: @openbotauth/bot-cli
- WordPress plugin, if you run WP (pending wp.org review)
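The registry-signer package publishes Ed25519 keys as JWKS, so a verifier's first step is pulling the raw public key out of the JWKS document by key id. Here is a sketch of that lookup; the JWK shape (`kty: "OKP"`, `crv: "Ed25519"`, raw key in `x`) follows RFC 7517/8037, while the `kid` and key bytes are made up for illustration.

```python
import base64

# Pull a raw Ed25519 public key out of a JWKS document by key id, as a
# registry-backed verifier would before checking a signature. The kid and
# key bytes are placeholders; the JWK fields follow RFC 7517/8037.

JWKS = {
    "keys": [
        {"kty": "OKP", "crv": "Ed25519", "kid": "bot-2024",
         "x": base64.urlsafe_b64encode(bytes(32)).rstrip(b"=").decode()},
    ]
}

def find_ed25519_key(jwks: dict, kid: str) -> bytes:
    for jwk in jwks["keys"]:
        if jwk.get("kid") == kid and jwk.get("kty") == "OKP" \
                and jwk.get("crv") == "Ed25519":
            x = jwk["x"]
            # restore base64url padding before decoding
            return base64.urlsafe_b64decode(x + "=" * (-len(x) % 4))
    raise KeyError(kid)

key = find_ed25519_key(JWKS, "bot-2024")  # 32 raw public-key bytes
```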
I tried building a tool like this a couple of months ago; it definitely helps to be able to visualize what the AI agents are building. How does this keep its diagrams accurate as the codebase evolves (e.g. PyTorch refactors)? In particular, do you diff the static-analysis graph against the live repo on each commit, or do you have another way to flag drift and re-validate the LLM layer?
To keep the diagram up to date with commits, we use the git diff of the Python files. An agent first evaluates whether the change is big enough to trigger a full clean analysis.
If it is not, we apply the same evaluation component by component, recursively, and update only the components affected by the change.
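The decision above can be sketched as a toy function: a large diff triggers a full clean analysis, a small one re-validates only the components whose files were touched. The threshold and the file-to-component mapping are made-up placeholders, not the actual system's values.

```python
# Toy sketch of the full-vs-incremental decision. Threshold and component
# mapping are assumed placeholders for illustration.

FULL_REANALYSIS_THRESHOLD = 500  # changed lines across *.py files (assumed)

COMPONENT_OF = {  # file -> diagram component (assumed mapping)
    "models/attention.py": "encoder",
    "models/heads.py": "decoder",
    "data/loader.py": "input-pipeline",
}

def plan_update(changed: dict[str, int]) -> tuple[str, set[str]]:
    """`changed` maps file path -> lines touched in the commit's git diff."""
    py_lines = sum(n for f, n in changed.items() if f.endswith(".py"))
    if py_lines >= FULL_REANALYSIS_THRESHOLD:
        return "full", set(COMPONENT_OF.values())
    # otherwise re-validate only components whose files were touched
    return "incremental", {COMPONENT_OF[f] for f in changed if f in COMPONENT_OF}

mode, comps = plan_update({"models/attention.py": 40, "README.md": 5})
# mode == "incremental", comps == {"encoder"}
```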
Comparing control-flow graphs probably makes more sense for big refactor commits, since a large raw diff might blow the context window. So far, though, we haven't seen this be an issue.
Curious to hear what your approach was when building the diagram representation!
Source and other integration surfaces (Node, Python, proxy): https://github.com/OpenBotAuth/openbotauth and https://openbotauth.com/developers
If you run a content site or a crawler team, I would love feedback on signing defaults and replay protection expectations.