How Do You Know When You’re Delivering Customer Value From Your Software Project?
Many of us are relatively familiar with the notion that stories should be expressed in terms of the value delivered, and with how important the "Why" is for maximizing the outcome for the customer. In other words, the project outcome should reflect delivering what the customer needs rather than simply what they ask for. Without that focus, many projects never see the light of day, or are never used, because they don't meet the true need.
In addition to this, when we talk about good stories, we often refer to the acronym "INVEST" (Independent, Negotiable, Valuable, Estimable, Small, Testable) as a means of validating that our stories are well written. I think this is a great tool for writing user stories, and we may even include acceptance criteria to help the team confirm that the story has been completed in a way that allows the value to be realized. With that in mind, I'd like to propose going a step further.
Expected Vs. Actual
Whichever way the story is written, the assumption is that the product owner has determined the value of the story and prioritized it accordingly. But value is a nebulous term that encapsulates all sorts of thoughts, many of which are assumptions, plain guesses, or personal preferences. We also assume that the story will successfully deliver the value we intend, rather than accepting that it is a hypothesis about how to achieve our goal.
Up to this point, we have assumed that the product owner always makes the right decisions and that their assessment of the value delivered by a story is infallible. Speaking as a product owner myself, let me attest that this is a rather hopeful assumption: these are often value judgements based on little more than an educated guess and very subjective opinions about value.
Even market research is guesswork to some extent, and particularly with new products or internal systems there is little opportunity for effective predictions of value. I don't want to take anything away from the product owner and their authority to make these judgements, as this is a main function of their role. But as a product owner, I value a feedback loop that enables me to validate whether my decisions were right or wrong, and that gives me the opportunity to course correct accordingly.
In other words, it is necessary for us to make judgement calls, but getting feedback on the accuracy of those decisions is quite beneficial.
So What Can You Do?
One idea is to extend the acceptance criteria to include some additional validation. Acceptance criteria help us validate that a story is implemented the way we intended, but they do not always enable us to measure whether the value is fully realized.
Assumption: We believe that adding a picture to listings on a product website will increase sales by 10% (our market research says so).
As an online customer, I want to see pictures of products so that I can make more informed buying decisions (and thus buy more products). The business value: marketing estimates a sales increase of 10%.
Our acceptance criteria may stipulate the positioning and size of the picture, or what to display if a picture is not available. We may even add performance acceptance criteria such as average page load time. But these dimensions are still not enough to validate that the value was achieved.
How Do We Validate that a Story Delivers Value?
So how do we validate that this story actually delivers the value we expect? How can we be confident that showing a picture fulfills some aspect of the value and leads to better informed decisions? It might be that we are missing out by not offering the ability to zoom in on the picture, or our users may not care about the picture at all and would prefer another feature, such as "lead time" or "quantity in stock."
What if, as part of this story, we not only implement the feature to show the picture, but also include analytics measuring page load times, and even the number of sales per product per day? From there, we could show the new feature to, say, 50% of users, leave the pictures out for the other 50%, and compare the results. Or we could run focus groups or usability studies on the feature to get more subjective but detailed feedback.
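As an illustrative sketch (the function names and data shapes here are hypothetical, not from the article), the 50/50 split could be assigned deterministically from the user ID, and the resulting sales log summarized per variant:

```python
import hashlib

def variant_for(user_id: str) -> str:
    """Deterministically bucket a user into the 50/50 experiment."""
    digest = hashlib.sha256(user_id.encode("utf-8")).digest()
    return "pictures" if digest[0] % 2 == 0 else "no_pictures"

def summarize(sales_log):
    """sales_log: iterable of (user_id, order_amount) tuples.
    Returns order count and revenue per variant for comparison."""
    summary = {v: {"orders": 0, "revenue": 0.0} for v in ("pictures", "no_pictures")}
    for user_id, amount in sales_log:
        bucket = summary[variant_for(user_id)]
        bucket["orders"] += 1
        bucket["revenue"] += amount
    return summary
```

Because the bucketing is a pure function of the user ID, a returning visitor always sees the same variant, which keeps the comparison clean.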
As part of the story, we could also add an additional layer of validation criteria: similar to acceptance criteria, but serving as a way to measure the value actually realized by the user.
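A minimal sketch of what such a validation criterion might look like as data, assuming a hypothetical metric name and a simple baseline/target comparison:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ValidationCriterion:
    """A measurable hypothesis about value, checked after release."""
    metric: str                       # e.g. "sales_per_day" (illustrative name)
    baseline: float                   # value observed before the feature shipped
    target: float                     # value the story's hypothesis predicts
    measured: Optional[float] = None  # filled in once real usage data arrives

    def achieved(self) -> bool:
        # No data yet means the hypothesis is still unvalidated.
        return self.measured is not None and self.measured >= self.target

# The picture story's hypothesis: sales rise 10% from a baseline of 100/day.
picture_criterion = ValidationCriterion("sales_per_day", baseline=100.0, target=110.0)
```

Unlike an acceptance criterion, this record stays open after the story is "done": it is only satisfied once real measurements arrive and meet the target.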
What Do We Gain?
Consider this point: Would including functionality or activities that enable us to measure that we have delivered the value we are expecting make the stories better? Would that information help shape our product and build a better product? Would it help us prioritize our backlog as we get a better understanding of value actually delivered vs. value expected to be delivered?
We could either add stories for these measurements or consider them to be encapsulated in the delivery of this story. Essentially, we are asking whether feedback is valuable and, if it is, how valuable it is to us.
Return on Investment
When I discussed this idea with a colleague, his first response was that it adds more upfront work and would be a challenge for the "lazy." In Agile, "lazy" is a virtue, so this is important feedback.
Naturally, there is overhead here, but as with all feedback loops, the information is valuable; knowledge is power, and we just need to fine-tune our efforts. We need to adjust our feedback volume to the level that yields valuable information with the minimum necessary effort. In Lean, we talk about Andon cords: alerts that inform us of an issue with quality or process. What we're talking about here could itself act as an Andon cord: when the effort becomes too great, producing either too much information or nothing of value, that is our signal to retune our feedback loops until they give us enough valuable feedback to act on.
Also, many of these measurements will be applicable to multiple stories, so while the investment may end up being very limited, the feedback may be far reaching. And, once automated, the ongoing feedback can be tweaked to add extra sensors to give us more and more valuable information.
Given that, let us consider some examples of value-assessing measurements:
- Website analytics: Hit rates, click-through rates, hot spots, etc. The cost of these is minimal, and they can often be applied even after development.
- Writing stories: We could write stories to build measurement into our application, capturing how our product is used or how it performs.
- Research: We could add usability testing, focus groups, or user surveys.
- Feature flags: By using feature flags, we could set up effective A/B testing to get feedback for structured hypothesis validation.
Please note that not all measurements need to be software driven; increased subscribers, for example, may be measured entirely independently of your application.
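As a sketch of the feature-flag idea above (the class and flag names are invented for illustration), a percentage rollout can be derived from a stable hash of the flag name and user ID:

```python
import hashlib

class FeatureFlag:
    """Rolls a feature out to a fixed percentage of users, deterministically."""

    def __init__(self, name: str, rollout_percent: int):
        self.name = name
        self.rollout_percent = rollout_percent

    def enabled_for(self, user_id: str) -> bool:
        # Hash flag name + user ID into a stable bucket in [0, 100).
        digest = hashlib.sha256(f"{self.name}:{user_id}".encode("utf-8")).digest()
        bucket = int.from_bytes(digest[:2], "big") % 100
        return bucket < self.rollout_percent

# 50% of users see product pictures; the rest form the control group.
pictures_flag = FeatureFlag("product-pictures", rollout_percent=50)
```

Off-the-shelf feature-flag services provide the same capability with dashboards attached, but even this handful of lines is enough to run the picture experiment described earlier.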
Ultimately, the biggest change would be in your initial vision creation: asking whether you know your product goals and whether you have a way to measure success.
Whether your goal is increased sales, time saved, efficiency improvements, more users, or cost savings, do you have a plan for measuring whether your product is achieving it?
This may seem like stating the obvious, but you would be surprised by the number of projects I have seen where the stated aims were cost savings or revenue generation, with numbers attached, yet after the project was authorized no one ever went back and assessed whether it was a success or achieved any of its aims. Having an aim was simply enough to get the project started. But a claimed 10% increase in sales or reduction in costs should be something you can measure – so measure it.
Incidentally, being able to map a story to one of your stated goals for the product could be another way to filter out unnecessary stories: if the expected impact does not serve one of your product goals, perhaps the story is not needed.
This is a very simple change to your story-writing process – an extra little consideration that could have significant implications for the success of your product, by adding a very valuable feedback loop on value delivered (rather than value expected).
Try using the following formula:
- As a …
- I want to …
- So that we can …
- And so I can verify this by …