OP here; to anticipate a few questions: I modeled the impact of AI-assisted development on the SDLC as a non-linear dynamical system.
By calibrating an ODE for validation capacity against 1.6 million file-touch events from 27 repositories, the model reveals a saddle-node bifurcation. Simply put: AI increases generation volume, but if QA interception capacity isn't scaled proportionally, the queue saturates with rework and net delivery collapses.
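To make the bifurcation concrete, here's a minimal toy sketch (not the paper's calibrated model; the parameters `a`, `c`, `k` and the Michaelis-Menten validation term are illustrative assumptions): a rework queue Q fed by generation rate g plus a rework-feedback term, drained by a validation service that saturates at capacity c. Below a critical g the queue settles at a finite backlog; above it, the stable fixed point vanishes in a saddle-node and the queue diverges.

```python
def dQ(Q, g, a=0.05, c=1.0, k=1.0):
    # Hypothetical dynamics: generation inflow g, rework feedback a*Q,
    # and validation throughput c*Q/(Q+k) that saturates at capacity c.
    return g + a * Q - c * Q / (Q + k)

def steady_state(g, Q0=0.0, dt=0.01, steps=200_000, blowup=1e3):
    """Euler-integrate; return the settled queue length, or None on divergence."""
    Q = Q0
    for _ in range(steps):
        Q += dt * dQ(Q, g)
        if Q > blowup:
            return None  # past the saddle-node: rework queue grows without bound
    return Q

print(steady_state(0.2))  # low generation rate: finite backlog (~0.27)
print(steady_state(0.9))  # high generation rate: no fixed point -> None
```

The qualitative mechanism is the same as in the post: the validation term is bounded, the generation-plus-rework term is not, so past a threshold they can no longer balance.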
The paper includes the formal derivation, the empirical validation (including an operational regime classifier based on file closure rates), and the full Python replication suite.
I'd appreciate any mathematical or architectural critique on the queueing model and the filter chain formalization.
The saddle-node bifurcation framing is genuinely useful for making the case to engineering leadership. "Your QA capacity is a control variable, and if it doesn't scale with generation volume, you're past the tipping point" is a much more compelling argument than "AI code is sometimes bad."
The practical implication that stands out to me: the solution isn't to slow down AI generation - it's to automate QA interception so it scales at the same rate. That's why there's been a wave of tools focused on automated checks that run immediately post-generation (linting, SAST, SCA) rather than relying on human review to absorb the volume increase.
We've been building in this space with LucidShark (lucidshark.com) - the core hypothesis is exactly what your model suggests: the constraint isn't the generation side, it's the validation side, and automation is the only way to keep validation capacity proportional to throughput. Would love to see your model applied to teams that add an automated gate - does it change the bifurcation threshold significantly?
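On the threshold question, a back-of-the-envelope under a toy saturating-validation model (illustrative assumptions, not the paper's calibrated ODE: queue dynamics dQ/dt = g + a*Q - c*Q/(Q+k), where c is validation capacity): the critical generation rate g* is the largest g for which a stable fixed point still exists, i.e. the maximum over Q of the net drain. Scanning that numerically shows g* growing faster than linearly in c, so an automated gate that raises effective capacity should move the tipping point substantially.

```python
def critical_g(c, a=0.05, k=1.0, Qmax=200.0, n=20_000):
    # The stable regime exists while g < max_Q [ c*Q/(Q+k) - a*Q ].
    # Coarse grid scan; fine enough since the maximum sits at small Q.
    best = 0.0
    for i in range(1, n):
        Q = Qmax * i / n
        best = max(best, c * Q / (Q + k) - a * Q)
    return best

print(critical_g(1.0))  # baseline tipping point (~0.60)
print(critical_g(2.0))  # doubled validation capacity: threshold more than doubles
```

In this toy setup g* = (sqrt(c*k) - sqrt(a*k))^2 / k in closed form, which is why doubling c more than doubles the threshold; whether the calibrated model behaves the same way is exactly the question worth asking the OP.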