Wow, very nice. Thank you. That's very well thought out.
I'm particularly intrigued by the large bold letters: "Success must be verifiable by the AI / LLM that will be writing the code later, using tools like Codex or Cursor."
May I ask what your testing strategy is like?
I think you've encapsulated a good best practices workflow here in a nice condensed way.
I'd also be interested to know how you handle documentation, but I don't want to bombard you with too many questions.
I added that line because otherwise the LLM would generate goals that are not verifiable during development (e.g. "certain pages must render in <300ms" - that is not something you can test on your local machine).
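To make that concrete, a locally verifiable success criterion can be phrased as an ordinary test the coding LLM (or CI) can run without any production infrastructure. A minimal sketch, where `render_home_page` is a hypothetical stand-in for the project's actual render logic:

```python
# Hypothetical placeholder for the project's real render function.
def render_home_page() -> str:
    return "<html><body><h1>Welcome</h1></body></html>"


# Verifiable locally: run with `pytest`, no production environment needed.
# Contrast with "renders in <300ms in production", which cannot be checked here.
def test_home_page_renders_expected_heading():
    html = render_home_page()
    assert "<h1>Welcome</h1>" in html
```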
Documentation is a different topic - I have not yet found a way to do it well. But I am reading about it and might soon test some ideas for co-generating documentation from the PRD and the actual code. The challenge is that the code normally evolves and drifts away from the original PRD.