Hacker News | rastriga's comments

Part 2 of a small personal investigation into how Claude Code actually works. This one focuses on the system prompt and how much behavioral governance is encoded there. Part 3 will look at tool execution.


Fair point — the stateless prompt+context pattern itself is standard, and the headline probably leans too much into “surprise.” The takeaway for me was the opposite of hidden complexity: start simple, don’t over-engineer. Claude Code feels like a good example of a system that’s straightforward, reliable, and easy to reason about.


Thanks — glad it resonated! Part 2 should uncover a lot of the magic behind the scenes. And thanks for sharing the link. Running Claude Code against a local LLM is a really interesting direction, but I need more RAM...


Thanks! I tried doing a similar comparison with Codex CLI and Cursor, but hit a wall. Codex doesn’t seem to respect standard proxy env vars, and Cursor uses gRPC. Claude Code was the only one that was straightforward to inspect. Opencode looks like a great next candidate.
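For anyone wanting to reproduce this: by "standard proxy env vars" I mean HTTP_PROXY/HTTPS_PROXY. A quick sketch of what honoring them looks like; Python's stdlib, like most well-behaved HTTP clients, picks them up automatically (127.0.0.1:8080 here is just mitmproxy's default listen address):

```python
import os
import urllib.request

# Point outbound HTTPS at a local intercepting proxy
os.environ["HTTPS_PROXY"] = "http://127.0.0.1:8080"

# A client that honors the convention resolves the proxy from the env
proxies = urllib.request.getproxies()
print(proxies["https"])  # http://127.0.0.1:8080
```

A client that ignores these vars (as Codex CLI appeared to in my testing) never routes through the proxy, so there's nothing to capture without more invasive tricks.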


You can change the system prompt Claude Code sends, which changes how the agent frames behavior, but Claude still has internal and server-side safety layers. Removing or rewriting the client system prompt won't let you magically bypass those. I think of the client system prompt more as agent configuration than as the primary safety net: it shapes behavior, but it's not the final authority. I'm covering this in Part 2, breaking down what's actually in the system prompt and how the client-side safety framing is constructed.


If they have all of this stuff server side, why are they recreating it client side? That's the part I can't figure out.


Statelessness simplifies scaling and operational complexity. They cache the system prompt, but otherwise each request is fully self-contained. It’s an obvious tradeoff, and I wouldn’t be surprised if they move toward some form of server-side state or delta encoding once the product stabilizes.
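A toy illustration of that tradeoff; the field names ("system", "messages") are assumptions loosely modeled on a Messages-style API, not the actual wire format:

```python
# Sketch of a stateless client: every request re-sends the system prompt
# and the full conversation replay, so the server keeps no session state.
history = []

def build_request(system_prompt, user_msg):
    history.append({"role": "user", "content": user_msg})
    return {
        "system": system_prompt,    # identical every turn, so it caches well
        "messages": list(history),  # entire history, snapshotted per request
    }

req1 = build_request("You are a coding agent.", "list the repo files")
req2 = build_request("You are a coding agent.", "now read main.py")
# req2 replays both turns; the server can handle it with zero session lookup
```

The cost is bandwidth and tokens per request; the win is that any server can handle any request.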


Yes, it's obvious the data goes to Anthropic. What wasn't obvious to me was what exactly is included and how it's structured: system prompt size, full conversation replay, file contents, git history, tool calls. The goal was to understand what the traffic looks like at the wire level. On Azure/Bedrock, good point! My understanding is that they route requests through their infrastructure rather than Anthropic directly, which does change the trust boundary, but my focus here was strictly on what the client sends; the payload structure is the same regardless of backend.


What specifically do you mean when you say git history is sent? How much of it is included in each request?


It's the last 5 commits, not the full history. Here's what actually gets sent in the system prompt:

  gitStatus: This is the git status at the start of the conversation...
  Current branch: main
  Main branch: main
  Status: (clean)

  Recent commits:
  6578431 chore: Update security contact email (#417)
  0dc71cd chore: Open source readiness fixes (#416)
  ...
Enough for Claude to understand what you've been working on without sending your entire repo history.
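For reference, that snapshot is easy to reproduce yourself. A sketch below; the exact git invocations Claude Code runs are my assumption, not something I confirmed from the capture:

```python
import subprocess

def format_git_context(branch, commits):
    # Mirrors the shape of the gitStatus block seen on the wire
    return ("Current branch: " + branch + "\n\n"
            "Recent commits:\n" + "\n".join(commits))

def gather_git_context(n=5):
    # Assumed equivalents of what the client runs; only the last n commits
    branch = subprocess.run(["git", "branch", "--show-current"],
                            capture_output=True, text=True).stdout.strip()
    log = subprocess.run(["git", "log", f"-{n}", "--oneline"],
                         capture_output=True, text=True).stdout.splitlines()
    return format_git_context(branch, log)
```

Note it's `git log --oneline`-style summaries, not diffs, so the token cost stays small.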


That makes sense :) Cool analysis!


thank you!


I built a MITM proxy to inspect Claude Code’s network traffic and was surprised by how much context is sent on every request. This is Part 1 of a 4-part series focusing only on the wire format and transport layer. Happy to answer technical questions.
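If it helps, this is the shape of the per-request summary the proxy made possible. A simplified sketch against a Messages-style JSON body; treat the field names and the sample payload as illustrative assumptions, not the exact wire format:

```python
import json

def summarize_request(body: bytes) -> dict:
    # Boil one captured request body down to the numbers that matter:
    # system prompt size, messages replayed, tools declared.
    payload = json.loads(body)
    system = payload.get("system", "")
    if isinstance(system, list):  # system may arrive as content blocks
        system = "".join(block.get("text", "") for block in system)
    return {
        "system_chars": len(system),
        "messages_replayed": len(payload.get("messages", [])),
        "tools_declared": len(payload.get("tools", [])),
    }

# Hypothetical captured body, just to show the shape
sample = json.dumps({
    "system": "You are Claude Code...",
    "messages": [{"role": "user", "content": "hi"}],
    "tools": [{"name": "Bash"}, {"name": "Read"}],
}).encode()
```

Watching `messages_replayed` and `system_chars` grow turn over turn is what made the full-replay pattern obvious.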

