
When debugging an agile process I'd distinguish two kinds of "out of control" conditions:

(1) Individual sprints more or less hit their goals. Maybe you consistently do 80% of what you expected, but there never seems to be a last sprint (e.g. new requirements keep coming up, new problems get discovered, etc.)

(2) Each sprint is a disaster. You deliver 20 or 30% of what you expected in the sprint.

If you ask the people on the team and other stakeholders, you might even find that some believe (1) is the case and others believe (2) is.

I would look for the following mismatch: the conventional sprint planning process assumes the work is a big bucket of punchclock time, where there are no dependency orderings, one team member can do the work of another, etc.

In some cases this is close to the truth, in other cases it is nowhere near the truth.
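
To make the mismatch concrete, here's a toy sketch of what the conventional planning math effectively checks versus a check that respects dependencies. All the task names and numbers are made up for illustration:

    # Toy sketch of the "big bucket of punchclock time" assumption.
    # Tasks, hours, and capacity are invented for illustration.
    tasks = {
        "design":    {"hours": 8,  "after": []},
        "implement": {"hours": 24, "after": ["design"]},
        "test":      {"hours": 8,  "after": ["implement"]},
    }
    team_capacity_hours = 2 * 40  # two people, one-week sprint

    # What conventional sprint planning effectively checks:
    # total effort fits in the bucket, so the sprint "should" succeed.
    total = sum(t["hours"] for t in tasks.values())
    print(total <= team_capacity_hours)  # True: 40 <= 80, looks like lots of slack

    # But the chain design -> implement -> test is strictly serial, so the
    # elapsed time is at least the longest dependency chain, no matter how
    # many people you have:
    def chain_hours(name):
        t = tasks[name]
        return t["hours"] + max((chain_hours(d) for d in t["after"]), default=0)

    critical_path = max(chain_hours(n) for n in tasks)
    print(critical_path)  # 40 serial hours: the second person buys you nothing

The bucket math says the sprint is half-loaded; the dependency chain says one person must work flat out with zero slack, and any slip blows the sprint.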

For instance, if you plan to have work implemented and tested within the boundary of one sprint, there is a point at which the work is sent over the wall to the tester. I worked on one project where each iteration included a machine learning model that took two days to train (most of that time happened outside "punchclock time"). If everything went right you could start two days before the end of the sprint and still have a model, but often things didn't go right, and if you really wanted the sprint to succeed you would start training the model as early as you could, maybe even over the first weekend.

If wallclock time and temporal dependencies are the real issue, you have to address them directly.
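
Concretely, that can be as simple as planning backwards from hard wallclock durations instead of effort hours. A minimal sketch; the dates, durations, and buffer here are invented, not from any real project:

    # Plan backwards from wallclock durations instead of effort hours.
    # Dates and durations are hypothetical, for illustration only.
    from datetime import datetime, timedelta

    sprint_end = datetime(2024, 6, 14, 17, 0)  # hypothetical sprint boundary

    # Wallclock durations, including time nobody is "on the clock":
    train_model    = timedelta(days=2)   # runs unattended
    test_and_fix   = timedelta(days=1)   # over-the-wall testing
    retrain_buffer = timedelta(days=2)   # one retry if training goes wrong

    latest_safe_start = sprint_end - (train_model + test_and_fix + retrain_buffer)
    print(latest_safe_start)  # kick off training by here, weekend or not

Once the latest safe start date is explicit, "start training over the first weekend" stops being heroics and becomes part of the plan.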


