Introduction

Features are the key vehicle for value flow in SAFe, yet they are also the source of much confusion amongst those implementing it. The framework specifies that “Each feature includes a Benefit Hypothesis and acceptance criteria, and is sized or split as necessary to be delivered by a single Agile Release Train (ART) in a Program Increment (PI)”.
It is obvious that you need a little more information about the feature, and on what feels like countless occasions I have facilitated the definition of a Feature Template with a Product Management group. People in classes ask me for a sample, and of course the only samples I have belong to my clients and aren’t mine to share.
The new emphasis on Lean UX in SAFe 4.5 finally inspired me to put some time into crafting a Feature Template of my own that I could share. The result is a synthesis of recurring patterns I have observed in my coaching, and focuses on “essential components” with guidance on additional information that might be required.
How much detail is needed, and by when?
I divide the refinement lifecycle of the Feature into two phases.
- Prior to WSJF assessment
- Prior to PI Planning
There is a minimum level of detail a feature requires to enable Cost of Delay and sizing assessment. Information is inventory, and we want a lightweight holding pattern until the Feature’s priority indicates it should be prepared for PI planning.
To this end, my template focuses on taking a canvas approach to supporting WSJF assessment, and I provide some guidance on likely extensions of detail in readiness for PI Planning.
Leveraging the work of Jeff Gothelf in Lean UX, we base the feature on a definition of the problem it is designed to address. Gothelf provides two excellent template statements here:
“The current state of the [domain] has focussed primarily on [customer segments, pain points, etc].
What existing products/services fail to address is [this gap]
Our product/service will address this gap by [vision/strategy]
Our initial focus will be [this segment]”
“Our [service/product] is intended to achieve [these goals].
We have observed that the [product/service] isn’t meeting [these goals] which is causing [this adverse effect] to our business.
How might we improve [service/product] so that our customers are more successful based on [these measurable criteria]?”
Again leveraging Gothelf’s work, we form a hypothesis as to the impact our Feature might achieve. The template takes the form:
“We believe this [business outcome] will be achieved if [these users] successfully achieve [this user outcome] with [this feature]”.
Objectives and Key Results (OKRs)
Features are intended to provide verifiable impact, and this detail is critical to enabling effective Cost of Delay estimation and the post-deployment verification of impact. We want to ensure that quantifiable movement of identified Leading Indicators supports the ongoing evolution of our Product Management strategy.
As previously documented in this post, I have become quite a fan of the OKR model initiated at Intel and popularized by Google, and find it a useful discipline in defining feature impacts.
If OKRs seem a little daunting, you can instead list Leading Indicators and their expected movements in this section.
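To make the idea of "quantifiable movement" concrete, here is a minimal sketch of a feature OKR expressed as data. The objective, indicators, baselines, and targets are all invented for illustration; the point is that each key result names a Leading Indicator with a measurable expected movement you can verify after deployment.

```python
# A hypothetical feature OKR: one objective, key results as measurable
# movements of Leading Indicators. All names and numbers are invented.
okr = {
    "objective": "Make online reservations the preferred booking channel",
    "key_results": [
        {"indicator": "share of reservations made online", "baseline": 0.20, "target": 0.50},
        {"indicator": "reservation no-show rate",          "baseline": 0.15, "target": 0.10},
    ],
}

# Post-deployment, each key result can be checked mechanically.
for kr in okr["key_results"]:
    direction = "increase" if kr["target"] > kr["baseline"] else "decrease"
    print(f"{kr['indicator']}: {direction} from {kr['baseline']:.0%} to {kr['target']:.0%}")
```

Expressing the OKR this explicitly is what enables the post-deployment verification of impact discussed above.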
Cost of Delay Components
The effectiveness and objectivity of Cost of Delay estimation workshops is heavily driven by the data on the table. The three sections for User/Business Value, Timing Criticality and Risk Reduction/Opportunity Enablement provide the opportunity to highlight supporting data for the assessment of the three Cost of Delay components.
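As a reminder of how the three components come together, the standard SAFe calculation is WSJF = Cost of Delay / Job Size, where Cost of Delay is the sum of the three components above. The sketch below ranks a few invented candidate features (the names, scores, and relative scale are illustrative only):

```python
# Illustrative WSJF ranking. Scores use a relative scale (e.g. modified
# Fibonacci); all feature names and numbers here are invented.
features = {
    "Restaurant reservations": {"value": 13, "timing": 8,  "rr_oe": 5, "size": 8},
    "Loyalty points":          {"value": 8,  "timing": 3,  "rr_oe": 2, "size": 5},
    "Menu translation":        {"value": 5,  "timing": 13, "rr_oe": 1, "size": 3},
}

def wsjf(f):
    # Cost of Delay = User/Business Value + Timing Criticality
    #                 + Risk Reduction/Opportunity Enablement
    cost_of_delay = f["value"] + f["timing"] + f["rr_oe"]
    return cost_of_delay / f["size"]

# Highest WSJF first: do the weightiest, shortest jobs soonest.
for name, f in sorted(features.items(), key=lambda kv: wsjf(kv[1]), reverse=True):
    print(f"{name}: WSJF = {wsjf(f):.2f}")
```

The arithmetic is trivial; the value of the workshop is in the conversation that produces the relative scores, which is why the canvas sections exist to put supporting data on the table.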
Key Subject Matter experts
I rarely if ever work with an ART where Product Management is self-sufficient with their domain expertise. Deliberate identification of and engagement with subject matter experts early in the lifecycle of a feature is critical.
Nothing brings a feature unstuck faster than unidentified external dependencies. These should be flushed out early, and should inform prioritization and roadmapping discussions.
Non Functional Requirements
We know that an ART will have a standing set of non-functional requirements that applies to all features, but occasionally features come with specific NFRs.
Sample Completed Canvas
It was incredibly difficult to invent a sample feature because my head kept running to real features, but eventually I settled on a fictitious feature for restaurant reservations.
A glimpse at how you might visualise your next WSJF estimation workshop
Detail beyond the Canvas
As a feature is selected as a candidate for an upcoming PI, it triggers the collection of additional framing detail. How much or how little detail is appropriate tends to vary ART by ART and at different stages in their evolution.
At a minimum, it will require acceptance criteria. However, some other things to consider would include:
- User Journeys: Some framing UX exploration is often very useful in preparing a Feature, and makes a great support to teams during PI planning.
- Architectural Impact Assessment: Some form of deliberate architectural consideration of the potential impact of the feature is critical in most complex environments. It should rarely be more than a page – I find a common approach is one to two paragraphs of text accompanied by a high level sequence diagram identifying expected interactions between architectural layers.
- Change Management Impacts: How do we get from deployed software to realised value? Who will need training? Are Work Instructions required?
Tuning your Template
Virtually every ART I have worked with has oscillated between “too much up-front information” and “not enough up-front information”. You want to know enough to enable teams and product owners to effectively plan and execute iteratively, yet not so much that you constrain the opportunity for teams and product owners to innovate and take ownership of their interpretation.
Every PI is a learning opportunity. Take stock a week after PI planning and look at both the information you wished you’d had, and your observations as to the value proposition of the information you had provided.
Then, take stock again late in the PI. Look at how the features played out over the PI, and the moments you wish you could have avoided 😊
Who completes the Canvas/Template?
The Product Manager is the content authority at the Program Backlog level, hence they are the ultimate owners. However, one of the nice influences Lean UX has brought to SAFe 4.5 is a real emphasis on “collaborative design”. To avoid the waste of knowledge handoff, the best people to work through the majority of the detail (including preparing the canvas itself) are the product owners and teams likely to be implementing the feature.