Wednesday, March 3, 2021

overlapping sprints to deal with the harsh realities of scrum

Scrum, an "agile" methodology to make the rhythm of software development more predictable, tends to be built on some iffy assumptions.

Some of these methodologies assume that any given ticket will be equally difficult no matter which dev, QA, or designer works on it. (Kanban, in its purest form of "the top ticket is the most important thing that should be picked up next", also falls prey to that 'everyone is interchangeable' fallacy, but in practice people don't have to scoop the ticket right off the top - they'll go down a few to find one that makes sense for their role.)

Some teams avoid thinking of people as interchangeable parts, but fall prey to completely ignoring the "Gantt chart"-like dependencies within a sprint - the way many tickets will need some work by Designers, which will then be handed off to be implemented by Developers, and then finally to QA (with some time needed for Dev to fix what QA finds).
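
To put rough numbers on that chain - these phase lengths are just my own made-up example, not anything from a real team - here's what stacking the phases end to end inside a ten-working-day sprint looks like:

# A toy sketch (made-up phase lengths) of the handoff chain inside one
# ten-working-day sprint: Design -> Dev -> QA -> Dev fixing what QA finds.

SPRINT_DAYS = 10

# (role, working days needed), in dependency order
PHASES = [
    ("Design", 2),
    ("Dev", 5),
    ("QA", 2),
    ("Dev (fixing what QA finds)", 1),
]

def lay_out(phases, sprint_days=SPRINT_DAYS):
    """Stack the phases end to end and print each role's busy window."""
    day = 0
    for role, length in phases:
        start, end = day + 1, day + length
        day = end
        note = "  <-- spills past the sprint!" if end > sprint_days else ""
        print(f"days {start:2}-{end:2}: {role}{note}")

lay_out(PHASES)

Even in this best case, QA has nothing to do on this ticket for the first seven days, and Design has nothing for the last eight.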

(But at least such teams have QA as its own thing! - the trendy thing to do has been to say "devs QA their own work", armed with their own sense of diligence and the power of automated tests. But A. it's really hard to get someone to be good at hunting for something they don't want to find, i.e. ego-bruising bugs in their own code, and B. as for automated tests... if the devs could have thought of what would go wrong ahead of time, they would have already fixed it in the code! So tests are mostly good for confirming that what was developed hasn't been broken over time. (There are some other things tests are good for, but still, in my experience they aren't as load-bearing as people want to pretend, and are much more brittle and prone to needing fixes than the code itself.))

Anyway. Here's theoretically what a sprint should look like:

[diagram: one sprint in order - Design at the start, Dev in the middle, QA at the end with time left for fixes]

That's a weird setup, right? What is QA doing for the first part of the sprint? What is Design doing for the last? In practice there are always things to be doing - extra chores, light prep for future sprints, learning, or other useful tasks - but I find it bizarre that this kind of dependency isn't accounted for in the system. And of course, let's be honest, thanks to human psychology, this is more what a sprint actually looks like:

[diagram: the same sprint in practice - Dev work running almost to the end, QA squashed at the very end]

QA always gets squashed! Devs are thinking more in terms of the whole sprint rather than aiming to get done halfway through. And so the system fixes itself by carrying work over to the next sprint... but that feels bad, like failure, even though the failure is more in the design of the system than in the people doing the work.

The sanest, most clear-eyed response I've seen to these problems was at Alleyoop, a little subproject of Pearson Education that unfortunately never found its sustainable business model mojo. They would do overlapping sprints, something like this:

[diagram: overlapping sprints - Design working one sprint ahead of Dev, QA one sprint behind]

I'm not sure how the naming actually worked (overlapping full sprints vs "subsprints") but hopefully you get the idea: from the devs' point of view, in any given sprint, designers were prepping stuff for dev that would be coded in the next sprint, and QA was checking what had been dev'd up in the previous sprint.
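
Here's a little sketch of the idea in code - my own reconstruction, so the batch numbering and labels are guesses rather than how Alleyoop actually ran it:

# A toy model of overlapping sprints: each batch of work is designed in one
# sprint, developed in the next, and QA'd in the one after that.

def overlapping_sprints(num_sprints=4):
    for sprint in range(1, num_sprints + 1):
        print(f"Sprint {sprint}:")
        print(f"  Design: prepping batch {sprint + 1} (to be dev'd next sprint)")
        print(f"  Dev:    building batch {sprint}")
        if sprint > 1:
            print(f"  QA:     checking batch {sprint - 1} (dev'd last sprint)")
        else:
            print("  QA:     nothing dev'd yet (the pipeline is still filling)")

overlapping_sprints()

That way, in any given sprint, everybody has a full lane of work instead of waiting on a handoff.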

The trouble is, most tools companies rely on don't model this particularly well. (Alleyoop was back in pre-COVID times and independent, so it was free to use good old "stickies on a whiteboard" without having to use online tracking tools.) And it's a tough sell. But at least it acknowledges that any given ticket has dependencies in time, and if you want to keep everyone productive, you should include those dependencies in your system.
