I just read Joel Semeniuk’s blog about personas. If you don’t feel like jumping over to Joel’s blog and would rather get the executive summary, he’s saying that personas are really useful for understanding the value and direction of a user story. I’m with Joel on the value of personas. There is something else, however, related to users that I felt moved to write about – user stories and what users “want”.
Regardless of whether a team has devised personas or not, I tend to see user stories written too much from a what-the-software-does perspective. I’ll use Joel’s off-the-cuff user story, reproduced below, to highlight this failure mode that so many teams suffer from:
Feature: Shovel Snow
As a Home Owner
I want to Shovel Snow
So that I can get out of my driveway to get to work
Fairly simple and universally understandable user story, right? (Assuming you know what snow is. Not all people do. Although even Floridians seem to get blizzards these days.)
So what’s the failure mode here? Home owners don’t want to shovel snow. They want snow to be removed from their driveway. Shoveling snow is something they accept as a solution because they don’t have an alternative in mind. What we should aim to do with user stories is to provide the connection between a user’s goals – what they need – and a placeholder for how we might give them just that.
Who knows, maybe the home owner would rather pay for a heated driveway than pick up a snow shovel. If we write down stuff about shovels, it’s that much less likely that anyone will think of offering a drastically different alternative such as laying pipes under the driveway. Or replacing the Mini Cooper with a 4×4 SUV.
To illustrate what we’re talking about here, let’s see how we might rephrase Joel’s example:
Feature: Accessible Driveway
As a Home Owner
I want my driveway to be cleared of snow
So that I can drive in and out of my driveway to get to work
Stumbling onto Ron Jeffries’ blog about testing everything but not accessors I was reminded of a question I get asked surprisingly often – how can I test private methods in [programming language]? When somebody pops this question I always say, you shouldn’t test private methods. And you shouldn’t.
The conversation then typically continues with a “But…” and I end up elaborating on why I say that. This time I’ll elaborate on the Internet, which hopefully helps spread this thought.
“I want to test a private method”
The fundamental motivation for wanting to test private methods is fear. We fear that our code coverage report’s 100% result isn’t the whole truth (and it isn’t) and we don’t trust our unit tests – the ones that already test the private method indirectly – to have kept out all critters from that piece of code encapsulated behind the magic keyword private.
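To make “indirectly” concrete, here is a minimal sketch – all class and method names are invented for illustration, not taken from any real code base – of a private method that is already fully exercised through the public API:

```java
// Minimal sketch: the private method is covered only through the public API.
// Class and method names here are invented for illustration.
class OrderTotal {

    public int total(int[] lineCents) {
        int sum = 0;
        for (int cents : lineCents) {
            sum += cents;
        }
        // The private helper is exercised by every test of total().
        return roundToNearestFive(sum);
    }

    private int roundToNearestFive(int cents) {
        return (cents + 2) / 5 * 5;
    }
}

public class OrderTotalTest {
    public static void main(String[] args) {
        OrderTotal order = new OrderTotal();
        // These tests never mention roundToNearestFive(), yet a coverage
        // report will show the private method as fully covered.
        if (order.total(new int[] {3, 4}) != 5) throw new AssertionError();
        if (order.total(new int[] {3, 5}) != 10) throw new AssertionError();
        System.out.println("private method covered indirectly");
    }
}
```

The coverage report says 100%, and yet the fear remains – which is exactly the situation described above.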
Why don’t we trust our tests? First of all, in most of the cases when this question is popped the code hasn’t been written test-first. Tests that are written afterward tend to suffer from confirmation bias – we write them to prove to ourselves that we are indeed the great programming geniuses who never make a mistake. Such tests can indeed be much less than a full-cover insurance.
The code is also to blame for our lack of trust. We rationalize our need to write direct tests for a private method by pointing out how hideously long, complex and ugly that method is. This line of thinking exhibits itself quite clearly when people respond to my strict statement with, “…but you just said that we should test ‘everything that could possibly break’ and this private method here looks so ugly that it could break at any time!”
“I understand. And you should not test that method if it’s private.”
In this phase of the dialogue we’ve typically established one thing: the person asking the question really wants to test that private method, usually for the reasons mentioned above. And that is perfectly reasonable. Yet, I persist and restate that they should not write tests for that private method. So what am I thinking? Why shouldn’t we test it directly?
For one, writing tests for a private method might not be technically possible in your chosen programming language without obnoxious trickery like invoking a private method through Java’s Reflection API. Such tests are also bound to break when you swoosh ahead with those lightning-fast refactorings your IDE vendor has graciously automated for you – because referring to methods and classes through literal strings makes the IDE blind to that link, effectively cutting your tests off from the reach of the Rename Method refactoring you just pulled off in Eclipse.
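To show what that trickery looks like and why it’s so brittle, here is a hedged sketch of a reflection-based test – the class and method names are invented for illustration:

```java
import java.lang.reflect.Method;

// Sketch of the "obnoxious trickery": invoking a private method via
// reflection. All class and method names are invented for illustration.
class PriceCalculator {
    private int applyDiscount(int cents) {
        return cents * 90 / 100;
    }
}

public class ReflectionTestExample {
    public static void main(String[] args) throws Exception {
        // The method is located by a literal string, so the IDE's Rename
        // Method refactoring won't touch it: rename applyDiscount() and
        // this "test" blows up at runtime with NoSuchMethodException.
        Method discount = PriceCalculator.class
                .getDeclaredMethod("applyDiscount", int.class);
        discount.setAccessible(true);

        int result = (int) discount.invoke(new PriceCalculator(), 1000);
        if (result != 900) throw new AssertionError("expected 900, got " + result);
        System.out.println("fragile test passed: " + result);
    }
}
```

The test works today, but the string literal has cut it off from every automated refactoring your IDE offers.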
Second, private methods are details that you clearly don’t want to expose through the class’s public API. After all, if you were OK with making that private method public, you wouldn’t have asked the question in the first place. Package-private and protected methods are a bit of a gray area in this regard. Bumping up the private method’s visibility so that the tests can invoke it, but not so much as to make it part of the public API, kind of solves the problem of resorting to ugly hacks – you can now invoke the method without reflection. However, the artery is still exposed and your class’s internals are leaking on the floor. Your tests are testing the object’s implementation, not its intended behavior.
“So what would you do?”
At this point we’ve usually established that the method in question is covered indirectly by other tests but that it’s so complex or important that you want to test it directly. So what would I do?
I would test it as a public method on another class.
The whole pain of seeing that private method, wondering whether it truly works or not, and struggling to decide whether to leave it like it is, whether to increase its visibility and test it directly, or whether to resort to the mortal sin of reflection – all of this pain is trying to tell us something. The code is trying to tell us something.
Clearly that piece of code is important and essential enough that it should get a new home and become a public method on some other class – quite possibly on a whole new class. Writing direct tests for it over there would not be an issue anymore.
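A hedged sketch of that move – all names invented for illustration, not a prescription – where the once-private logic becomes a collaborator with a public, directly testable method:

```java
// Sketch: the hairy private method becomes a public method on a new
// collaborator class. All names here are invented for illustration.
class LateFeePolicy {
    /** Returns the late fee in cents for the given number of overdue days. */
    public int feeFor(int overdueDays) {
        if (overdueDays <= 0) {
            return 0;
        }
        return Math.min(overdueDays * 50, 2000); // fee capped at 20.00
    }
}

// The original class now delegates; there's no private logic left to test.
class Invoice {
    private final LateFeePolicy policy = new LateFeePolicy();

    public int totalWithLateFee(int baseCents, int overdueDays) {
        return baseCents + policy.feeFor(overdueDays);
    }
}

public class LateFeePolicyTest {
    public static void main(String[] args) {
        LateFeePolicy policy = new LateFeePolicy();
        // Direct tests against a public API: no reflection, no widened
        // visibility, no leaking implementation details.
        if (policy.feeFor(0) != 0) throw new AssertionError();
        if (policy.feeFor(3) != 150) throw new AssertionError();
        if (policy.feeFor(100) != 2000) throw new AssertionError();
        if (new Invoice().totalWithLateFee(1000, 3) != 1150) throw new AssertionError();
        System.out.println("policy tested as a public method");
    }
}
```

The original class gets simpler, the logic gets a name, and the tests get a public seam.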
A place for everything and everything in its place. It seems that Mr. Franklin was a programmer himself.
My personal interests have shifted through a number of phases since I found the iterative and incremental world of agile methods in the beginning of this past decade. While my intent with this article is to share some of my thoughts and experiences with merging the discipline of user interaction design into agile methods, I find it useful to first explain how my interests have evolved over the years to the point where I stand today.
My journey from code to design
At first my attention was consumed by the engineering practices that XP imposed on the kind of cowboy programming I had seen around me – and performing myself – both in the context of ad hoc and of supposedly highly disciplined methods. I forced myself to write my code test-first, I asked to pair program with others, and I checked in code in far smaller batches than before.
Looking back to around five years ago, I can see my interests starting to move towards teamwork and project management. There was still plenty of improvement to make as far as engineering practices went, but that work wasn’t consuming as much of my learning cycles as it had before. I routinely ran classes on test-driven development and spent a lot of time pair programming with clients, coaching them in writing better unit tests, but I was increasingly curious about the kind of insanity and waste I observed happening outside of an individual programmer’s sphere of influence.
Since about 2005 the majority of product development efforts I’ve been involved with have been “Scrum projects”. Considering the Scrum trifecta of a development team, a Scrum Master, and a Product Owner, I can see a slight trend in my work, starting from the more technically oriented end of the development team, through the more teamwork- and facilitation-oriented domain of the Scrum Master, and lately more and more towards the prioritization and product design-oriented world of the Product Owner.
The constant through this change
The single common thread throughout these years has been that my goal has been to help clients succeed with their product development efforts. At first I saw severe problems with code quality and discipline that I could help alleviate with certain engineering practices and processes. Later I saw dysfunction in collaboration between people in different roles and strived to improve that collaboration with facilitation and coaching, applying methods like Scrum. Then I saw dysfunction in the Product Backlog and around the Product Owner’s role, seeking to help the people involved understand the system and how they can influence its behavior in more productive, more effective ways.
Other people have had the same goal of helping clients succeed with their product development efforts for at least as long as I have. I work with some of those people. Many of them have also had a vastly different approach to reaching that goal. After all, we all have our individual experience, skills, and quirks that we must play to and make the most of.
Introducing user experience
In 2005 we hired a couple of professional interaction designers. They brought in just such vastly different skills. Up until then we had survived with our engineering-oriented, albeit multi-disciplined, staff, every now and then reaching for outsourced help with interface design. We had almost invariably been disappointed with the results from such outsourcing and yet we knew that we weren’t delivering the best user experience we could for our clients. Not that the clients were complaining but we knew and we do have a thing for perfection.
From the beginning our interaction designers were loudly proclaiming that “programming should only start when the interaction design is done.” Essentially their message was that interaction design was king. That didn’t go down too well with the XP/Scrum-minded folks, who were almost 100% programmers, very much into short iterations and incremental development, and generally allergic to big design up front. After all, most of us had worked for major consulting companies, and even the smallest hint of waterfall would give us the chills.
After a while the exaggerated soundbites started making room for more constructive discussion and a more open-minded search for a Better Way. From one project to another our engineers and designers gained experience and formed a way of working that seemed to yield the best of both worlds, accommodating iterative and incremental development without a big design up front while producing the kind of smart user interface that truly was fit for purpose. We became more and more pleased with the approach. We had a gold standard for how we wanted to run our application development projects.
We still had a problem, as we only had a couple of interaction designers and plenty of projects that desperately needed their precious attention. We have since found ways to alleviate that bottleneck – including hiring, of course – but we’ll come back to that topic later. Right now, I’d like to explain how we execute UI-intensive software projects today, combining user interaction design and agile methods into what I personally consider the best way I’m aware of to build such software products or applications.
Just enough design up front
Some of our engagements don’t involve any code to be written, some of our software delivery projects don’t involve a single graphical user interface, and some of our software delivery projects have a user interface that quite frankly isn’t all that important. When we do engage in a project where delivering a great user experience is essential, this is how we work.
To start with who’s involved, there’s a dedicated development team made up of generalist software developers capable of turning requirements into working software. There’s also a dedicated user interaction designer who’s capable of interpreting what different people are saying into a functional user interface design.
We start with what some might call “just enough design up front” where, over a time span of a couple of weeks, the user interaction designer digs into the problem domain and carries out a number of interviews with different people. Those different people may hold titles such as Product Owner, Product Manager, VP of Product Development or, if we’re lucky, titles such as Junior Sales Associate or Cashier. We want to talk to users, not somebody who pays the bills. Sometimes this luxury is available and sometimes we’re not that lucky.
Based on this work the user interaction designer identifies and documents the essential usage scenarios that represent the most significant, most valuable use of the system. With these scenarios to work with the designer starts iterating toward a user interface design that supports these scenarios with the best possible design, starting with just one scenario and incrementally expanding and editing the design one additional scenario at a time. At some point the designer starts involving the client’s staff to help validate the designs, which at this point are generally sketches or paper prototypes.
When the designer feels confident enough that the main cruces of the problem domain and the main usage scenarios have been solved, the design is considered stable enough to start programming. Up until this point in time, the development team has usually set up their infrastructure or worked on specific bits of implementation that do not involve the user interface.
The very reason we don’t start implementing the user interface on day one along with the user interaction design work is that it’s much, much more expensive to iterate in code than it is to iterate on paper. There’s a lot of rework to be saved by investing some time up front on this work.
Working with the designs
Once implementation begins, what the development team works with is a Product Backlog derived from general system requirements and the functional design – the sketches and paper prototypes – provided by the user interaction designers. By the iteration planning meeting the Product Owner and the user interaction designer have identified vertical slices of end-to-end functionality that could be implemented in the next iteration.
Sometimes those vertical slices are straight from the designs, e.g. a panel that displays information. Sometimes, those vertical slices are a stripped down version of the design resulting from a decision to down-prioritize a particular usage scenario. Sometimes, however, there’s a need for an intermediary design that introduces partial functionality with the intent that this is a temporary solution. In those cases, the development team and the user interaction designer are looking for a compromise that delivers the best user experience feasible with the boundaries of implementation effort and iteration length.
During an iteration the interaction designer reviews and validates the implementation as it progresses and serves as a quality gate – “reviewed by interaction designer” is often part of the team’s Definition of Done. Essentially, the interaction designer’s attention is split between this iteration and the next iteration(s), very much like the Product Owner’s. Just like the Product Backlog continues to live and evolve, our user interface designs continue to live and evolve.
Beyond the bottleneck
As I said before, we hired a couple of trained professionals in interaction design back in 2005 and that wasn’t even remotely enough to staff all of our web development or desktop application projects with a full-time interaction designer. I also said that we’ve found ways to alleviate that problem.
The most obvious solution is, of course, to hire. That has proven to be somewhat difficult and even though we received almost a thousand applications last year only a tiny fraction of those applicants have had the right profile. Instead, we’ve had to find ways to source that talent from within.
A couple of years ago we started an internal apprenticeship program and a handful of solid programmers jumped on the user interaction train. Our experienced interaction designers taught their apprentices in regular training days and mentored them almost on a daily basis. In practice this meant that our senior designers had to take on much less work than before. On the other hand, we immediately had a wider spread of limited knowledge and skill – enough to fulfill the immediate needs of a development team – and bigger designs would still be reviewed together with a senior colleague.
Seeing the apprenticeship program become as successful as it has makes me marvel at the courage and drive of the individuals who saw its importance and dared to make the leap. Others have also taken part in the trainings, and the whole company is more or less familiar – at least on a theoretical level – with how user interfaces are designed with our iterative method.
Some time ago I was a developer on a software project where we had an apprentice user interaction designer on staff and a senior designer paying us a visit one or two days a week. It was a relatively big project for one designer, however, with multiple teams working on an application where a good user interface was considered crucial for commercial success. Our sole full-time designer frequently had her hands full when a larger UI change was approaching and user stories would pile up towards the “interaction designer’s review” column on our story board.
I took part in one of those larger UI changes in a small team of four developers. The whole thing had bubbled through the Product Backlog very quickly and we didn’t have a single sketch or prototype to work with. We could’ve said, “no, we can’t do this before the designer has time” but instead we said, “yes, we can do it.” After all, it’s not a problem for us to take collective ownership of code given that we all know a bit of everything in the code base so why should the UI be any different?
Art of the possible
We decided to grab reality by the horns and make the best of what we had. We started preparing for the upcoming implementation between ourselves, trying to verify that we really did understand the big picture, identify the relevant usage scenarios, and sketch solutions that would support those scenarios as best we could. Stepping outside of our respective comfort zones wasn’t anywhere near as frightening as it had sounded a couple of weeks earlier.
Once we had agreed on the overall design within the team, I set off to create a more detailed design for one particular user interface geared at managing orders and their logistics. I looked at the usage scenarios and specified what kind of information the user needs at each stage, what kind of information the service provider needs, and what kind of workflow and interface design would tie it all together so that all of the scenarios would be supported and I wouldn’t be leading our team down the front-end programmer’s equivalent of Dante’s Hell.
That was hard. Not because I couldn’t have designed a user interface, but because nobody on our team knew what the people in logistics actually do, how they currently work, and what kind of user interface would best support them in their task. Our Product Owner knew something about it, but he, too, had only second-hand information from the management of the logistics department. This was when it finally sunk in for me how crucially important it is to get to talk to actual users – I had no idea what the warehouse dudes actually have to do before Tom’s new mobile phone and Jean’s shiny new iPod leave the premises with the correct transport manifests and everything.
Again, I knew that I could only do my best with what we had. Besides, I had been through the basic training on user interface design, I’d read Alan Cooper’s classic about inmates and an asylum, I’d read “About Face”, I’d familiarized myself with user interface design patterns, and I’d seen many designs created by our awesome interaction designers. It wasn’t difficult to design a good user interface. Maybe not a great user interface but it’s perfectly doable to design a decent solution by following our method, focusing on the usage scenarios, and iterating the design, simulating against the scenarios.
At this point I had a design that I thought was good, but I also knew that I was relying on the hearsay and babble of the Product Owner, which injected some insecurity into my work. I had just leaned back in my chair, looking at the sketch in front of me and wondering whether it really was good enough, when our senior interaction designer walked in. He wasn’t supposed to be at our disposal – he was there for another project – but I thought, “screw it, he’s here and this will only take a few minutes.” After all, I might get an opinion back in two seconds that could be compressed into something along the lines of “that’s the crappiest solution I remember seeing”, but at least I would know. So I walked over and asked if he had a moment to take a look at my design.
I explained the usage scenarios for which the design was made and what kind of widgets and behavior I had drawn in it. He asked me some clarifying questions and I did my best to answer them. Most of the questions were related to what happens at the logistics department when orders are coming in and how the logistics people juggle the orders internally, alone or collectively, grouping by customer or by product, whether the delivery address influences the way they are processed, etc.
After some five to ten minutes I walked away having crossed out one panel that wasn’t actually necessary for the scenarios and with a couple of other minor changes to make. We not only had a better design but also a good feeling about having done our best despite the limited availability of user interaction design expertise. We had likely done a good job (which later proved to be a correct assumption) and most certainly better than the CRUD-crap that we’d seen some competitors produce in the past.
It was definitely worth taking that step outside of the comfort zone. Looking back at this experience, it has provided me with a lot more perspective and tools for my coaching work with Product Owners. In fact, I’m certain that it wouldn’t hurt for a Product Owner to invest some time and effort to learn about user interaction design. After all, it’s all about product design, learning and knowing what your users need, working that knowledge into the Product Backlog, and refining your designs to allow for iterative and incremental implementation of that Product Backlog.
My last article was about the role of a Scrum Master. I’d like to continue on that theme, exploring a pattern I’ve seen at many companies. The pattern I’m talking about can be observed in discussions and the kind of words used for describing the role of or talking about Scrum Masters. In short, many companies adopting Scrum are struggling to get over the particular misconception of the Scrum Master being a management role.
Scrum Masters are not supposed to be managers. Scrum Masters are not some kind of coordinating bodies between teams and Product Owners. Scrum Masters don’t manage anyone but themselves. That’s one of the reasons why it’s often easier for a non-manager to take on this role – lacking the baggage of old habits of managing others.
It shouldn’t be a surprise that we may have established such mental models, however. After all, even some Scrum tool vendors get it somewhat wrong, and Wikipedia’s agile software development-related articles are notoriously flawed. Even the Scrum Alliance, which is supposed to be the official center of Scrum-related knowledge and information (along with the competing Scrum.org), has such utter bull on its website (I’m referring to the sidebar of that page and the Scrum Master having “three primary responsibilities in addition to leading the daily scrums”…) that we’re left without an authoritative source of information beyond individuals we trust.
Personally, for such an authoritative source of information, I recommend reading the writings of people such as Ken Schwaber, Craig Larman, and Bas Vodde. These are predominantly books. One notable exception is the Scrum Primer (PDF), which I find to be perhaps the best description of Scrum available online.
The Scrum Alliance’s role description is not total bull, however. Namely, their list of Scrum Master’s responsibilities isn’t far from what Certified Scrum Trainers have taught for almost a decade now:
What the Scrum Alliance says about the Scrum Master’s responsibilities
The Scrum Master is a facilitative team leader who ensures that the team adheres to its chosen process and removes blocking issues.
- Ensures that the team is fully functional and productive
- Enables close cooperation across all roles and functions
- Removes barriers
- Shields the team from external interferences
- Ensures that the process is followed, including issuing invitations to daily scrums, sprint reviews, and sprint planning
- Facilitates the daily scrums
Notice how the Scrum Alliance’s definition doesn’t put the Scrum Master in between anyone else except between the team and “external interferences”. The Scrum Master is not supposed to act as a single point of contact towards other teams or towards the Product Owners. The Scrum Master, according to the Scrum Alliance, does have administrative responsibilities such as issuing meeting invitations to the Scrum ceremonies (standup, review, planning) and the facilitation of the daily standups.
I would personally go even further and say that the first two bullets in the above list imply that the Scrum Master should seek to detach himself from much of that administrative work over time, as the team begins to take responsibility for their own productivity and for cooperation across roles and functions, removing more and more of their barriers as they empower themselves, learning to stand strong in the face of external interference, and facilitating their own collaboration.
Note that this doesn’t necessarily mean that the Scrum Master’s role wouldn’t be a full-time job.
For a description of the Scrum Master’s responsibilities that is more in line with the original spirit of Scrum and the role, this is how Ken Schwaber and Jeff Sutherland, original creators of the method, describe the role in the Scrum Guide:
The Scrum Master
The Scrum Master is responsible for ensuring that the Scrum Team adheres to Scrum values, practices, and rules. The Scrum Master helps the Scrum Team and the organization adopt Scrum. The Scrum Master teaches the Scrum Team by coaching and by leading it to be more productive and produce higher quality products. The Scrum Master helps the Scrum Team understand and use self-organization and cross-functionality. The Scrum Master also helps the Scrum Team do its best in an organizational environment that may not yet be optimized for complex product development. When the Scrum Master helps make these changes, this is called “removing impediments.” The Scrum Master’s role is one of a servant-leader for the Scrum Team.
This is much closer to the mindset that I’d like to see adopted as far as the role or responsibilities of the Scrum Master are concerned. Scrum was developed within a software company over several years as the group devised and refined their engineering and product management practices. It’s a designed system that creates a dynamic equilibrium. If we move one lever the others will inevitably move as well. If we compromise on one front or one aspect of our system we shouldn’t expect too much on the other fronts either.
Assuming we want to make the most of Scrum we need to pay attention to Scrum, its roles, and what they entail. We need patience and drive to study, learn, and understand Scrum. Skimming a book or sitting through a presentation is clearly not enough.
The next time you catch yourself or someone else blurting that the Scrum Master “removes impediments” or “facilitates a standup”, stop for a moment to make sure that the parties involved in that conversation understand what these soundbites really mean and, perhaps even more importantly, what they don’t mean.
Speaking of the role of the Scrum Master, one of the most common questions I get is whether it’s a full-time job and whether the Scrum Master can also be a member of the development team. Realizing that this is such a prevalent topic, it’s worth exploring in a bit more depth than “it depends”.
Let’s begin by acknowledging the most common arguments for and against the Scrum Master being a dedicated, full-time job.
Visiting a kick-ass Scrum Team, it’s obvious how big a difference many small things can make. One of those small things is having a Scrum Master. Good Scrum Masters nurture their team with strong roots in the principles and values of Scrum and gentle, facilitative guidance over the team’s daily business. Good Scrum Masters help teams resolve conflicts, reflect on their own values and behavior, and generally develop the team. None of us is born with the skills required to navigate this world of “peopleware” matters. It’s all about awareness and practice – acquiring information and integrating it into knowledge through application. It is this fundamental need for practice that best makes the case for the Scrum Master role being a full-time job – we need to do it to become good at it.
On the other hand, the Scrum Master’s job is fuzzy. Trying to enumerate what the Scrum Master does for a living, day in and day out, feels a bit like waving hands to the sound of metaphors, analogies, and figures of speech. It’s comparatively easy to see what a programmer or a test engineer does. Most of what a Scrum Master does isn’t visible in the sense that you could walk into a room and say, “she’s now doing X.” Furthermore, the most visible duties tend to be the ones least essential for a Scrum Master, such as facilitating a daily stand-up meeting. For someone who hasn’t internalized the role, it’s difficult to imagine what a Scrum Master would do besides the obvious administrative tasks (which would be better off handled by the team anyway). This easily leads to doubts about whether it makes any sense for a Scrum Master not to do development work. After all, otherwise she’ll be idle most of the day, right?
These two perspectives constitute the apparent conflict between a full-time and a part-time Scrum Master, which are mutually exclusive (one can’t be both at the same time). What both of these stands have in common is the very same concern over the team’s productivity. There is no conflict around the end goal – only around the means of how to best get to that goal.
The following “conflict cloud” summarizes the dilemma we are facing here:
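Sketched in text, the cloud looks roughly like this (my reconstruction of the structure, following the standard evaporating-cloud layout rather than the original figure):

```
A: high team productivity
  ◀── B: good Scrum Masters        ◀── D:  a full-time Scrum Master
  ◀── C: more engineering capacity ◀── D': a part-time Scrum Master

D and D' are in direct conflict: one person cannot be both at once.
```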
You can read the above diagram from right to left as follows:
We need to have full-time Scrum Masters because we need them to be good at what they do and good Scrum Masters help increase our productivity. At the same time, we need to have part-time Scrum Masters because their technical contribution increases our productivity.
That’s a conflict alright. Now let’s see how we can begin to resolve this conflict.
Resolving the Conflict
The above type of diagram is sometimes called the conflict resolution diagram, also known as the evaporating cloud. The way it helps us resolve a conflict is by providing a structure within which we can focus our analysis of the situation in two steps: first, uncovering the underlying assumptions we have and, second, uncovering potential solutions from those very same assumptions.
More specifically, we’re interested in the kind of assumptions we have related to the dependencies. I just went through the exercise of listing some of the assumptions I’ve identified previously with my coaching clients around this particular conflict scenario. I’ve grouped those assumptions below, organized by the dependency they are associated with.
Note that these are actual examples of assumptions I have uncovered during one-on-one coaching sessions with perfectly sensible, smart, well-educated IT and management professionals where we’ve sketched such an evaporating cloud on paper and stickies.
“We need full-time Scrum Masters in order to have better Scrum Masters”
- Being a full-time Scrum Master is the only way to become good at it.
- All of the time we spend being a Scrum Master contributes equally to our learning the skill.
- Any engineering work carried out by a Scrum Master is a major distraction from learning the skill.
“We need better Scrum Masters in order to have higher productivity”
- The skills of a team’s Scrum Master are the biggest contributor to a team’s productivity.
- Any team’s productivity is significantly improved by having a good Scrum Master.
- The Scrum Master is the only person who can coach a team.
“We need part-time Scrum Masters in order to have more engineers”
- All Scrum Masters can write code and test as well as any other engineer.
- Scrum Masters are the only source of additional engineering skill available to our teams.
- The only way to increase a team’s engineering power is by adding more people.
“We need more engineers in order to have higher productivity”
- Delivering more code and more features is always the best way to improve productivity.
- Any person added to a team contributes an equal positive impact on the team’s productivity.
- Taking time away from the Scrum Master’s role so he can work on the team’s backlog is always a good trade-off.
Now, you may have noticed that these assumptions are somewhat, well, extreme. That is entirely intentional. Formulating our thinking into the most outrageously extreme and polarized phrasing helps us see the essence of the assumption, and being able to look at the essence without muddying the waters is an enormously powerful thinking tool. With our assumptions crystallized into extreme statements like the ones above, we can proceed to criticize those assumptions and, hopefully, uncover potential solutions to our conflict.
So our next step is to start scrutinizing the assumptions we’ve identified behind the dependencies in our conflict cloud. Somehow that should lead us to potential solutions. As usual this process can be best understood by looking at a concrete example so we’ll do just that. Let’s pick one of the dependencies from our diagram and take a closer look at the identified assumptions:
“We need better Scrum Masters in order to have higher productivity”
- The skills of a team’s Scrum Master are the biggest contributor to a team’s productivity.
- Any team’s productivity is significantly improved by having a good Scrum Master.
- The Scrum Master is the only person who can coach a team.
Do you agree that the skills of a team’s Scrum Master are the biggest contributor to a team’s productivity? Always? I don’t think so. I’ve met plenty of programmers who most of all need a senior programmer to work with in order to improve their personal productivity – not a Scrum Master. Clearly I (and, I assume, you) don’t believe that the first assumption above is a fair one. Now, let’s see how we might change that assumption so that we could agree with it.
Would we agree that a good Scrum Master can be a major contributor to some teams’ productivity? I think that’s a fair assumption. Now, the question becomes: what are those teams like, and are some of our teams like that? We just stumbled on a potential solution to the conflict – some Scrum Masters can be part-time while others are full-time.
The second assumption turns out to be very close to the first one, leading us to the same question of which teams would benefit significantly from having a top-notch master of Scrum as their servant leader. The third assumption, on the other hand, takes us to a new place so let’s explore that assumption properly.
Do we agree that the Scrum Master is the only person who can coach a team? Nope. Too extreme. Of course a team’s Scrum Master doesn’t have exclusive rights to coach the team. This was clear from the very moment we jotted down that extreme assumption on a whiteboard – but we let it be because we were exaggerating on purpose. Now, however, is the right time to rip these assumptions to pieces. If I recall things correctly, when we did pick up this assumption, I was just about to point out that “obviously other teams’ Scrum Masters could also coach that team” when my client interposed with, “obviously team members can also coach each other.” At this point we obviously had two potential solutions to resolve our conflict – seeking to create other coaching relationships both within a team and between teams.
If you go through this process of questioning the rest of our assumptions one by one, toning them down, you’re bound to find a bunch of additional avenues that might make the conflict go away – with the prerequisites of our conflicting ideas being sufficiently fulfilled through other means. You might also find out that some of those extreme assumptions are, in fact, not that far-fetched. Most of our assumptions, however, turn out to be false in the extreme and only true towards the opposite end of the spectrum. It’s really that simple – turning the assumption knobs all the way to 11 and then back – and it works. Every time.
Illusion of The Answer
What this thinking tool, the evaporating cloud, drives home for those who use it is that very few things are black and white. The elusive “right” answer rarely exists and instead we find that most of our conflicts prove to be matters of tunnel vision. We realize that we weren’t looking at the situation from enough angles and that we’d dug into our respective positions determined by our personal bias and diverse experiences.
Quite frequently I catch myself digging into such positions because of fear. I might fear that, for example, if we don’t make a clear statement with a decision we will inevitably get into trouble due to the resulting ambiguity. That happens from time to time no matter how well I know that it’s not true. The world is still not black and white and there’s always a way to steer clear of the disasters we are so dearly afraid of.
For our question of whether a Scrum Master should be a full-time job I want to point out that whatever decision we come to today will not be the “right” answer forever. The world keeps changing around us, we change and we learn, and our situation changes. We might come to agreement that for a certain team that’s just beginning to use Scrum it’s best to dedicate a full-time Scrum Master. For another team we might see that their biggest problems lie with their engineering practices and that the benefits of a part-time Scrum Master tip the scale. Two years down the road our judgment might be the exact opposite.
I would like to conclude this article by explicitly pointing out that I am by no means suggesting that beginning teams should or shouldn’t have a full-time Scrum Master or that a full-time Scrum Master wouldn’t be a good idea for an experienced Scrum team that’s worked together for years. I have held all of these opinions at one time and later on found them to be flawed.
None of my answers are right and none of yours will be either. The beauty of conflict is that there is none – not until we make them up. And once we’ve done that we have the means to make them go away as fast as they came.
If you’ve heard of Scrum you’ve probably heard of something called the Definition of Done. If you’ve worked in a Scrum team, it’s next to impossible that you haven’t. Yet, I’ve found so many teams struggling to find a common understanding of what it actually is. This is my attempt at increasing the software community’s awareness of the concept and to offer a chance for Scrum teams to reflect on what I am writing about and how that maps to their context.
Let’s start by taking a quick look at what Scrum says about the Definition of Done and, perhaps most importantly, why we should pay attention to such a thing in the first place.
What Scrum Says and Why We Should Have It
Scrum doesn’t really say anything – it’s a method – but there is a clear definition for what the term means in the context of Scrum. You can read Ken Schwaber’s view on page 20 of the Scrum Guide. Please do read that page – it’s going to be 2 minutes well spent.
To summarize the perspective of the community-at-large, the Definition of Done is a joint agreement between the team and their Product Owner about what it means for a Product Backlog Item to be “done”.
“When someone describes something as done, everyone must understand what done means.”
The simple value proposition of having a Definition of Done is to avoid misunderstandings, flawed assumptions, and the resulting aftermath of nasty surprises and broken trust. By agreeing on what we actually have when we’ve done something, all parties have a common frame of reference: this is what has been done and everything else is still to be done before the release.
With an explicit Definition of Done the Product Owner can prepare for the upcoming release, trade show, etc. as he has an idea of what remains to be done before the sufficient level of quality has been ensured and the necessary bits and pieces are all in place. “Updating the online help isn’t in the Definition of Done? OK, I’ll set aside some time before the release for getting that stuff done.”
For the teams themselves the Definition of Done provides a simple way of knowing when to move on. If the Definition of Done is fulfilled, it’s time to move this story card over to “done” and start working on something else. An explicitly agreed definition also gives the team members a formal and social permission to nag if someone isn’t complying with the joint agreement.
Who Should Define It
The “DoD” is a joint agreement between the team and their Product Owner. Therefore, both parties should be involved in defining what ”done” means for us. In the end, the Definition of Done represents our current capability of delivering potentially shippable increments. Hopefully our capability improves over time and the Definition of Done should reflect that development. It’s not something you carve in stone.
We’re not talking about a wish list or a daydream, however. We’re talking about a matter-of-fact truth in all of its ugliness. “This is how much we can feasibly do for a backlog item within a sprint.” If we’re really good, we can deliver a high-quality increment to production 5 minutes before the sprint ends and forget about it. If we’re not so good, we have a parallel “staggered sprint” that works for two weeks to take what we’ve churned out and get it into good enough shape that it can be deployed to production – because we hadn’t tested it much.
It’s worth noting that there’s a bit of a trade-off involved here. To a certain degree, we are capable of doing more extensive work – adopting a more extensive Definition of Done – by taking fewer user stories into our sprints. Or we can churn out more stuff in a sprint by cutting back on the level of scrutiny. I suggest, however, that it’s better to err towards a more extensive definition, as I’ve found that it tends to correlate with less inventory and a higher throughput. We all know how productive we are when the code base is full of crap, riddled with bugs, everybody working on their own feature in parallel, and the whole project devoid of any test automation whatsoever.
We should have a Definition of Done from the beginning. A practice that’s become more or less the norm with our software delivery projects at Reaktor is to start every new project with a “Ways of Working” workshop where the team(s) and the Product Owner(s) agree on, among other things, their Definition of Done. Sometimes we fail and end up agreeing on it later on, realizing that we do indeed need one.
What Should Be In It
The Definition of Done should describe what we have and what we’ve done by the time we say we’re “done”.
Do we have end user documentation for the feature, user story or scenario in question? Do we have automated regression tests for it? How many of them? With what kind of coverage? Developer tests? System tests? Integration tests? Have we reviewed or pair programmed all code written for the feature? Has the UX guy taken a look at it and given his blessing? Have we shown it to the Product Owner and gotten his blessing that it’s OK?
In essence, the Definition of Done is a description of what kind of scrutiny we have put our work through and what kind of work still needs to be done before we push the yellow button labeled “Deploy to Production”.
In the end, we need to be able to tell whether we’ve fulfilled the Definition of Done or not. That is why we should strive to formulate our definition in clear terms that facilitate a binary yes/no answer to “have we done this?”
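To illustrate, here’s a sketch of what such a definition might look like for one team, collecting the kinds of items mentioned above into binary criteria – an example, not a template:

```
A Product Backlog Item is "done" when:
- its Acceptance Criteria are fulfilled and demoed to the Product Owner
- all code written for it has been pair programmed or peer reviewed
- developer tests are written and passing
- at least one automated system test is in place and passing
- the end user documentation has been updated
- the UX guy has given his blessing
```

Every line can be answered with a yes or a no, which is exactly what makes the definition easy to verify.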
Once again, however, there’s a potential trade-off. A common subject that leads to such a trade-off is unit testing. Most teams agree that unit testing should be done but many teams are also quick to point out that in their context it doesn’t make sense to require unit tests for everything. In such cases, it might make sense to let go of a binary criterion for “everything being unit tested” and settle for a less ambitious standard, such as agreeing that the overall test coverage must remain above 90% or – compromising our ability to objectively verify our compliance with our agreed standards – agreeing that everything is “sufficiently unit tested”.
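If the team settles on a coverage threshold, it’s worth wiring that number into the build so the criterion stays binary. With Python’s coverage.py, for instance (the 90% figure is just the example number from above), the check could look like this:

```
coverage run -m pytest             # run the test suite under coverage measurement
coverage report --fail-under=90    # non-zero exit status if total coverage < 90%
```

The second command fails the build when coverage drops below the agreed threshold, turning “sufficiently unit tested” into a yes/no answer.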
How Is This Different From Acceptance Criteria
We sometimes mix terms such as the Definition of Done and Acceptance Criteria rather liberally in conversations, using them interchangeably. That is a mistake, however, as there is a clear difference between these two concepts.
As we’ve established, the Definition of Done describes the level of scrutiny and activity applied to a given Product Backlog Item. This is about process and quality. Acceptance Criteria, however, are all about functionality and requirements. Whereas the Definition of Done might include stuff like “at least one automated system test is in place and passing”, the Acceptance Criteria for a backlog item might include stuff like “non-numeric input is rejected and results in the transaction being canceled.”
There is a relation between the two concepts, though, in the sense that fulfilling a Product Backlog Item’s Acceptance Criteria is (at least implicitly) the very first element of the Definition of Done. After all, if we haven’t built the right functionality it’s somewhat irrelevant whether or not the code was peer reviewed.
As the Definition of Done is about process and quality, it’s also a product- or project-level global standard. Everything we do, we do abiding by that definition. There is no “definition of done for user story 145.” As the Acceptance Criteria are about functionality and requirements, they are a backlog-item-level definition. In other words, each Product Backlog Item has its own Acceptance Criteria.
Definition of Done is a joint agreement between the team and their Product Owner about what it means for a Product Backlog Item to be ”done.” This common understanding is necessary for us to be able to prepare for the work that remains “undone” – the work that is not included in our Definition of Done. It also tells us when we should move on with our sprint work and creates a social contract between team members, making it easier for the team to hold themselves accountable for their agreed-upon behavior.
For these reasons, the Definition of Done should be as unambiguous and clear as possible so that it’s trivial to verify whether we’ve followed through with our agreed behavior, i.e. whether we’ve given our work the kind of scrutiny we’ve set as our standard.
Is there anything that you find confusing? Is there anything that you feel should’ve been addressed? What does your Definition of Done look like?
Note: This is a republication of the article originally published in my previous blog.
I’m sitting in the hallway in Limerick, Ireland, attending the XP2008 conference, downloading something from the company server to my laptop, eavesdropping on an open space session hosted by J.B. He’s talking about user stories and roughly 4 minutes ago he mentioned he’s got a blog post up on his website that shows an example of four ways to split a story.
Since I’m so Web 2.0, I tried to blog about this while they were running their open space session two meters from where I was sitting – but I’m so darn slow and old school that it took me 2 days to get this written! So much for blogging it as it happens. I should’ve recorded a podcast, I guess.
I’ll first reproduce J.B.’s list of four ways to split:
- splitting stories along process lines
- splitting stories along architectural lines
- splitting stories along procedural lines
- splitting stories into smaller stories
What J.B. is saying in his blog post (among other things) is that teams often progress through this list, starting from the worst way to split down stories and (hopefully) ending up with splitting stories smaller so that they’re still “self-contained increments of value.”
Great. I’ve seen this pattern.
Now, having seen this pattern and having found that people find it difficult to split in any other way than what they’ve done so far unless they can look at examples, I thought I should share what I teach about splitting user stories.
Let’s start with my list of ways to split user stories (not in any specific order):
- by implementation (J.B.’s first two bullets)
- by quality
- by data/details
- by operations (CRUD)
- by major effort
- by role
Let me explain what I mean by these, along with an example of each.
Splitting by implementation
First of all, this should be your last option. It’s very intuitive for an engineer, but you should only do this if you honestly can’t think of another way to split the story. Only then should you consider looking at the technical tasks you need to carry out to make the original story come to life in the software, or splitting along architectural boundaries, components, or other technical seams.
Once you’re done splitting it down like this, you call your mother to apologize, check Google Maps for the shortest route to a nearby Catholic church for a confession during the lunch hour, and pick up a new brush from Walmart on your way home. You’ll need the brush while sitting in the corner of your shower, scrubbing your back violently, chanting “I feel dirty.” That’s how bad a way this is to split stories.
Now for the example I promised. Let’s imagine we’re building an online retail system like Amazon.com and we’ve got a user story like this:
As a potential buyer I want to see available multi-item discounts involving the product I’m currently looking at.
If this is too big for us, how could we split it further by implementation? We could, for example, split it into two stories – the original and a smaller story that’s a dependency of the original:
As a product owner I want the discount subsystem to support multi-item campaigns so that I can deliver value to the user in a later iteration.
See? With this split down story we’re not delivering any value to the end user because it’s a technical split. To emphasize this, I’ve expressed the story in a form that makes it explicit that we’re doing this in order to enable a later value-delivering story.
Now that we’re over the evil implementation split, let’s look at some more useful ways to split stories.
Splitting by quality
When I think about the goodness of a user interface, I divide the question into two: utility and usability. Utility is about whether the user can achieve the goals he has with the system. Usability is about how easy it is for the user to reach that goal. This dual model can help us split down user stories because the two aspects – utility and usability – can be valuable as such and have different priorities.
Let’s look at an example from an online store selling photography equipment:
As a beginning photographer I want to get recommended a camera kit to buy so that I don’t need to spend hours reading reviews to figure out which camera would suit me well.
Now how would we split this story along the lines of quality, separating the concerns of usability from pure utility?
Well, the utility aspect – the goal – is for the user to be able to figure out which camera to buy. This would be a lot easier if the system could make a smart recommendation. That recommendation might, however, be too difficult to implement right now, making the story too big. With that said, we can support the user’s goal (utility) with less quality (usability) through, for example, a split to these two smaller user stories:
As a beginning photographer I want to see a numeric sales rank so that I can better decide which camera to buy by comparing the sales of my alternatives.
As a beginning photographer I want to see a numeric sales rank grouped by buyer expertise level so that I can better decide which camera to buy by comparing the sales of my alternatives.
These two stories aren’t quite as valuable to the user as getting a clear recommendation but they are valuable in that they help the user make that buying decision. They’re the cobblestone road solution that’s not quite the asphalt highway we’d eventually want but it’s already better than no road at all or a bumpy dirt road.
Splitting by data/details
One of the easy ways to split down user stories is by data and details. The classic example is searching:
As a user looking for camera accessories I want to search for products so that I can avoid browsing through the whole product catalog.
Now, let’s say that it’s too much work for us to implement the search facility with all of the functionality and polish we’d eventually like to have. We can, however, implement a subset of that fully-featured search by splitting along the kinds of data and details we support. For example:
As a user looking for camera accessories I want to search for products by their name and description so that I can avoid browsing through the whole product catalog.
As a user looking for camera accessories I want to search for products by their price and availability so that I can avoid browsing through items I wouldn’t buy anyway.
In other words, we might first implement a search that only looks for matches in the name and description of a product. In the next iteration, we might follow up with extending the search to other data fields such as price and availability.
In some cases the difference between supporting 2 or 4 data fields can be negligible and therefore splitting along these lines might not make much sense. However, it could be that the type of data in question makes that difference significant enough that splitting actually does produce multiple significantly smaller stories than the original. In our example above, we’re not really going to match the exact price but rather a price range. Similarly, we might want to search for availability in a specific location rather than a simple “yes, we have it” match.
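To make the split concrete, here’s a minimal sketch of the two search increments using an in-memory SQLite catalog – the schema, table, and data are all made up purely for illustration:

```python
import sqlite3

# A toy in-memory catalog; the schema and data are hypothetical.
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE products (name TEXT, description TEXT, price REAL, in_stock INTEGER)"
)
db.executemany(
    "INSERT INTO products VALUES (?, ?, ?, ?)",
    [
        ("Carbon tripod", "Lightweight travel tripod", 199.0, 1),
        ("Tripod head", "Ball head for tripods", 89.0, 0),
        ("Lens cloth", "Microfibre cleaning cloth", 5.0, 1),
    ],
)

def search_v1(term):
    """First increment: match the search term in name and description only."""
    like = f"%{term}%"
    return db.execute(
        "SELECT name FROM products WHERE name LIKE ? OR description LIKE ?",
        (like, like),
    ).fetchall()

def search_v2(term, min_price, max_price):
    """Later increment: the same search narrowed by price and availability.
    Note that price is matched as a range, not exactly."""
    like = f"%{term}%"
    return db.execute(
        "SELECT name FROM products"
        " WHERE (name LIKE ? OR description LIKE ?)"
        " AND price BETWEEN ? AND ? AND in_stock = 1",
        (like, like, min_price, max_price),
    ).fetchall()

print(search_v1("tripod"))            # both tripod products match
print(search_v2("tripod", 100, 300))  # only the in-stock tripod in the price range
```

The point of the sketch is that `search_v1` is a complete, shippable story on its own; `search_v2` arrives in a later iteration without invalidating the first.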
Splitting by operations
Another intuitive way to split down user stories is along the lines of operations and procedures. An archetypal example of this could be a CRUD (create-read-update-delete) scenario of managing products in a database:
As a shop keeper I want to manage the products being sold in my online store so that I can sell what people want to buy.
If this is too big for us, we might split along the lines of the CRUD operations like this:
As a shop keeper I want to add and remove products from my online store so that I can sell what people want to buy.
As a shop keeper I want to edit product details in my online store so that I can avoid recreating a product to fix a typo etc.
These are both valuable stories. We could just implement the first story and, for now, deal with updating product details by removing and recreating the same product in the system with the new details. We could just implement the latter story and accept that we need to add new products into the system with raw SQL.
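As for what “adding new products with raw SQL” would mean in practice, here’s a sketch against a minimal, made-up products table (in reality this would be the shop’s actual database):

```python
import sqlite3

# A minimal, hypothetical products table standing in for the shop's database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE products (name TEXT, price REAL)")

# With only the "edit product details" story implemented, there is no UI for
# creating products, so a new product gets added with a hand-written INSERT:
db.execute(
    "INSERT INTO products (name, price) VALUES (?, ?)",
    ("Lens hood", 19.90),
)
db.commit()

print(db.execute("SELECT name, price FROM products").fetchall())
```

Clearly not something a shop keeper would do, but a developer could – which is exactly the “doable, not usable” trade-off in question.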
Usable? No. Doable? Yes. Acceptable? Depends.
Splitting by major effort
Yet another way to split down too big user stories is by identifying where the effort would go. Let’s use the classic credit card example:
As a buyer I want to be able to pay with a VISA, Mastercard, Diners Club, or American Express credit card.
Now, our first thought might be to split along the lines of data and details – the different credit cards we support – but that would give us four user stories (one for each card type) with near-identical effort estimates, each conditional on which card gets implemented first. Why? Because implementing support for any one card takes at least as long as adding support for all the rest. Recognizing that this is how the effort is divided, we might split the above story into these two smaller user stories:
As a buyer I want to be able to pay with a credit card (one of VISA, Mastercard, Diners Club, American Express).
As a buyer I want to be able to pay with four types of credit cards (VISA, Mastercard, Diners Club, American Express).
There is a dependency (the first story must be implemented before the latter) but we might have small enough stories, provided that building the necessary plumbing for credit card processing isn’t too dominant in the overall effort.
Splitting by role
Last but not least, we could think about the user story at hand from the perspective of different users and the value to those users.
Let’s say we’ve got this universal story about error handling:
- user friendly error messages
- detailed stack trace in log file
- unique error code displayed to user and in log file
Now, getting an error message that the user can understand probably doesn’t make him happy but certainly reduces the degree of frustration if his understanding of the situation improves.
The detailed stack trace and unique error code on the other hand aren’t really something the user would specifically appreciate. Their value is for the programmer who wants to be able to locate the source of an error as fast and easily as possible.
In other words, there are two distinct users involved in this ambiguous, two-word “user story” – the user and the programmer. Splitting along this division, we might get the following three smaller user stories:
As a user I want to see an error message I can understand when something goes wrong.
As a programmer I want to see the full stack trace in the log file for any exception thrown during runtime so that I can better debug error situations.
As a programmer I want to show the user a unique error situation identifier so that I can locate the relevant portion of the log file faster and more reliably.
Each of these stories is valuable as such and independent of the others, which is nice. This example also nicely illustrates the value of the explicit “as a role I want something so that benefit” template. If we had tried to write the original user story using the template, it would’ve been obvious that we’re talking about things that are valuable to two distinct roles – a clear hint at the chance of a split.
Another cue in the original story is the bullet list. Sometimes you can simply look at a story and identify a low-hanging-fruit split by scanning for keywords such as “and” and “or”, periods, and other kinds of separators.
I’ll stop writing now. It took me a lot longer than I thought to get all of this out of my head but I hope it’s useful. Perhaps needless to say but I’d appreciate any feedback, suggestions, pointers, etc.