Readers of this blog will be no strangers to the fact that I'm a strong believer in the value of kanban visualization at all levels of a SAFe implementation. Further, if you've been reading my Metrics Series, you may have been wondering how to collect all those Speed Metrics.
Given that the Feature is the primary unit of "small batch value flow" in SAFe, effective application of kanban tools to the feature life-cycle is critical in supporting study and improvement of the flow of value for an Agile Release Train (ART).
The first application for most ARTs is enabling flow in the identification and preparation of Features for PI planning. Many ARTs emerge from their first PI planning promising themselves to do a better job of starting early on identifying and preparing features for their next PI, only to suddenly hit panic-stations two weeks out from the start of PI 2. The introduction of a kanban to visualize this process is extremely valuable in creating the visibility and momentum needed to solve the problem.
However, I vividly remember the debate my colleague Em Campbell-Pretty and I had with Dean Leffingwell and Alex Yakyma regarding their proposed implementation of the SAFe Program Kanban over drinks at the 2015 SAFe Leadership Retreat in Scotland. Their initial cut positioned the job of the program kanban as "delivering features to PI planning", whilst we both felt the life-cycle needed to extend all the way to value realization. This was driven in part by our shared belief that a feature kanban makes a great visualization to support Scrum-of-Scrums during PI execution, but primarily by our drive to enable optimization of the full “Idea to Value” life-cycle. Dean bought in and adjusted the representation in the framework (the graphic depicting the backlog as an interim rather than an end-state was in fact doodled by Em on her iPad during the conversation).
A good Kanban requires a level of granularity appropriate to exposing the bottlenecks, queues and patterns throughout the life-cycle. Whilst the model presented in SAFe acts much as the Portfolio Kanban in identifying the overarching life-cycle states, it leaves a more granular interpretation as an exercise for the implementer.
Having now built (and rebuilt) many Program Kanban walls over the years while coaching, I've come to a fairly standard starting blueprint (depicted below). This article will cover the purpose and typical usage of each column in the blueprint.
My previous article on Kanban tips-and-tricks is worthwhile pre-reading in order to best understand and leverage the presented model. Avatars should indicate both the Product Management team members and the Development teams associated with the Feature as it moves through its life-cycle.
The fundamental premise of lean is global optimization of the flow from idea to value. Any truly valuable Feature Kanban will cover this entire life-cycle. The reality is that many ARTs do not start life in control of the full cycle, but I believe this should not preclude visualization and monitoring of the full flow. In short, “don’t let go just because you’re not in control”. If you’re not following the feature all the way into production and market realization you’re missing out on vital feedback.
The Kanban States
Funnel

This is the entry point for all new feature ideas. They might arrive here as features decomposed out of the Epic Kanban, features decomposed from Value Stream Capabilities, or as independently identified features. In the words of SAFe, "All new features are welcome in the Feature Funnel".
No action occurs in this state; it is simply a queue with (typically) no exit policies.
In this state, we prepare the feature for prioritization. My standard recommendation is that ARTs adopt a "half to one page summary" Feature Template (sample coming soon in a future article).
Exit Policies would typically dictate that the following be understood about the feature in order to support an effective Cost of Delay estimation and WSJF calculation:
- Motivation (core problem or opportunity)
- Desired outcomes
- Key stakeholders and impacted users
- Proposed benefits (aligned to Cost of Delay drivers)
- Key dependencies (architectural or otherwise)
- Very rough size.
Features are taken to a Cost of Delay estimation workshop, have their WSJF calculated, and are either rejected or approved to proceed to the backlog.
Exit Policies would typically indicate:
- Initial Cost of Delay agreed
- WSJF calculated
- Feature has requisite support to proceed.
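To make the prioritization mechanics concrete, here is a minimal sketch of the WSJF calculation in Python, using the standard SAFe formulation: Cost of Delay is the sum of user-business value, time criticality, and risk reduction/opportunity enablement, and WSJF divides that by job size. The feature names and scores below are purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    user_business_value: int   # relative score, typically modified-Fibonacci
    time_criticality: int
    risk_opportunity: int      # risk reduction / opportunity enablement
    job_size: int              # the "very rough size" from analysis

    @property
    def cost_of_delay(self) -> int:
        return self.user_business_value + self.time_criticality + self.risk_opportunity

    @property
    def wsjf(self) -> float:
        return self.cost_of_delay / self.job_size

# Hypothetical features scored in the workshop
features = [
    Feature("Self-service signup", 8, 13, 3, 5),
    Feature("Audit logging", 3, 5, 8, 8),
]

# Highest WSJF first: do the weightiest, shortest jobs first
for f in sorted(features, key=lambda f: f.wsjf, reverse=True):
    print(f"{f.name}: WSJF = {f.wsjf:.1f}")
```

Scores would normally be agreed relatively across the whole set of candidate features in the workshop rather than assigned in isolation.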
Backlog

This is simply a holding queue. We have a feature summary and a calculated WSJF. Features are stored here in WSJF order, but held to avoid investing further analysis effort until the feature is close to play. If applying a WIP limit to this state, it would likely be based on ART capacity and limited to 2-3 PIs' worth of capacity.
Exit Policies would typically surround confirmation that the Feature has been selected as a candidate for the next PI and any key dependencies have been validated sufficiently to support the selection. I find most Product Management teams will make a deliberate decision at this point rather than just operating on “pull from backlog when ready”.
Next PI Candidate
Again, this state is simply a holding queue. Movement from the “Backlog” to this state indicates that the Feature can be pulled for “Preparation” when ready.
Generally, there are no exit policies, but I like to place a spanning WIP limit over this and the following state (Preparing). The logical WIP limit is based on capacity rather than number of features, and should roughly match the single-PI capacity of the ART.
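A capacity-based spanning WIP limit like this is easy to check mechanically. The sketch below is illustrative only; the feature sizes and the single-PI capacity of 100 points are assumed values:

```python
def within_spanning_wip(candidates, preparing, pi_capacity_points):
    """Capacity-based WIP check spanning 'Next PI Candidate' and 'Preparing'.

    The limit counts total estimated size in flight across both states,
    not the number of feature cards.
    """
    in_flight = sum(f["size"] for f in candidates + preparing)
    return in_flight <= pi_capacity_points

# Hypothetical wall contents
candidates = [{"name": "F1", "size": 20}, {"name": "F2", "size": 35}]
preparing = [{"name": "F3", "size": 40}]

print(within_spanning_wip(candidates, preparing, 100))  # 95 points in flight
```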
Preparing

Here, we add sufficient detail to the Feature to enable it to be successfully planned. The Exit Policy is equivalent to a “Feature Definition of Ready”. Typically, this would specify the following:
- Acceptance Criteria complete
- Participating Dev Teams identified and briefed
- Dependencies validated and necessary external alignment reached
- High-level Architectural Analysis complete
- Journey-level UX complete
- Required Technical Spikes complete
This is the one state in the Feature Kanban that is almost guaranteed to be decomposed to something more granular when applied. The reality is that feature preparation involves a number of activities, and the approach taken will vary significantly based on context. A decomposition I have often found useful is as follows:
- Product Owner onboarding (affected Product Owners are briefed on the Feature by Product Management and perform some initial research, particularly with respect to expected benefits)
- Discovery Workshops (led by Product Owner(s) and including affected team(s), architecture, UX and relevant subject matter experts to explore the feature and establish draft acceptance criteria and high level solution options)
- Finalization (execution of required technical spikes, validation of architectural decisions, finalization of acceptance criteria, updates to size and benefit estimates).
The planning event itself is not represented on the Kanban, but following the conclusion of PI planning all features which were included in the plan are pulled from "Preparing" to "Planned".
Planned

This is a queue state. Just because a feature was included in the PI plan does not mean teams are working on it from Day 1 of the PI. We include the state deliberately to provide greater accuracy (particularly with respect to cycle time) in the following states. There are generally no exit policies.
A feature is pulled into this state the instant the first team pulls the first story for it into a Sprint, and the actual work done here is the build/test of the feature.
Exit policies are based on the completion of all story level build/test activities and readiness for feature level validation. Determination of appropriate WIP limit strategies for this state will emerge with time and study. In the beginning, the level of WIP observed here provides excellent insight into the alignment strategy of the teams and the effectiveness of their observation of Feature WIP concepts during PI planning.
Feature Validation

A mature ART will eliminate this state (given that maturity includes effective DevOps). However, until such time as the ART reaches that maturity, the types of activities we expect to occur here are:
- Feature-level end-to-end testing
- Feature UAT
- Feature-level NFR validation
Exit Policies for this state are equivalent to a “Feature Definition of Done”. They are typically based around readiness of the feature for Release level hardening and packaging. The size of the queues building in this Done state will provide excellent insights into the batch-size approach being taken to deployments (and "time-in-state" metrics will reveal hard data about the cost of delay of said batching).
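Those "time-in-state" metrics fall straight out of a state-transition log. A minimal sketch with hypothetical features and dates; note that both features leave "Feature Validation" on the same day, which is exactly the batching signal described above:

```python
from collections import defaultdict
from datetime import date

# Hypothetical transition log from the kanban tool:
# (feature, state entered, date entered)
log = [
    ("F1", "Feature Validation", date(2016, 3, 1)),
    ("F1", "Release Hardening",  date(2016, 3, 20)),
    ("F2", "Feature Validation", date(2016, 3, 12)),
    ("F2", "Release Hardening",  date(2016, 3, 20)),
]

# Group transitions per feature, in order
transitions = defaultdict(list)
for feature, state, entered in log:
    transitions[feature].append((state, entered))

# Days each feature spent in each state it has since left
time_in_state = {
    (feature, state): (left - entered).days
    for feature, states in transitions.items()
    for (state, entered), (_, left) in zip(states, states[1:])
}
print(time_in_state)
```

Here F1 queued far longer than F2 before the two were moved on together, quantifying the cost of delay imposed by batching deployments.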
Release Hardening

Once again, a mature ART will eliminate this state. Until that maturity is achieved, we will see a range of pre-deployment finalization activities occurring here.
Exit Policies will be the equivalent of a "Release Definition of Done", and might include:
- Regression Testing complete
- Release-level NFR Validation (e.g. Penetration, Stress and Volume Testing) complete
- Enterprise-level integration testing complete
- Deployment documentation finalized
- Deployment approvals sought and granted and deployment window approved
The set of features planned for release packaging will be pulled as a batch from "Feature Validation" into this state, and the set of features to be deployed (hopefully the same) will progress together to “Done” once the exit policies are fulfilled.
Deployment

Yet another "to-be-eliminated" state. When the ART's DevOps strategy matures, this state will last seconds - but in the meantime it will often last days. The batch of features sitting in "Release Hardening" will be pulled simultaneously into this state at the commencement of Production Deployment activities, and moved together to Done at the conclusion of post-deployment verification activities.
Exit Policies will be based on enterprise deployment governance policy. For many of my clients, they are based on the successful completion of a Business Verification Testing (BVT) activity, in which a number of key business SMEs manually verify a set of mission-critical scenarios prior to signalling successful deployment.
This state covers the finalization of operational readiness activities. An ART that has matured well along Lean lines will already have performed much of the operational readiness work prior to deployment, but we are interested in the gap between "Feature is available" and "Feature is realizing value". Typical activities we might see here depend on whether the solution context is internal or external, but might include:
- Preparation and introduction of Work Instructions
- Preparation and Delivery of end-user training
- Preparation and execution of marketing activities
- Education of sales channel
Exit Policies should be based around “first use in anger” by a production user in a real (non-simulated) context.
A (hopefully not too) long time ago, our feature had some proposed benefits. It's time to see whether the hypothesis was correct (the Measure and Learn cycles in Lean Startup). I typically recommend this state be time-boxed to 3 months. During this time, we are monitoring the operational metrics which will inform the correlation between expected and actual benefits.
Whilst learning should be harvested and applied regularly throughout this phase, it should conclude with some form of postmortem, with participants at minimum including Product Management and Product Owners but preferably also relevant subject matter experts, the RTE and representative team members. Insights should be documented, and fed back into the Program Roadmap.
Exit Policies would be based upon the completion of the “learning validation workshop” and the incorporation of the generated insights into the Program Roadmap.
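The correlation between expected and actual benefits can be as simple as a variance table over the operational metrics being monitored. The metric names and values below are entirely hypothetical:

```python
# Hypothetical benefit hypothesis vs. the operational metrics observed
# during the 3-month validation window
expected = {"signup_conversion": 0.25, "support_calls_per_week": 120}
actual   = {"signup_conversion": 0.31, "support_calls_per_week": 140}

# Positive variance means we beat the hypothesis on that metric
variance = {metric: actual[metric] - expected[metric] for metric in expected}
for metric, delta in variance.items():
    print(f"{metric}: {delta:+g} vs hypothesis")
```

Whether a given variance confirms or refutes the feature's hypothesis is a judgment for the postmortem; the point is that both sides of the comparison are written down before and after.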
Done

Everybody needs a "brag board"!
Once established, this Kanban will provide a great deal of value. Among other things, it can support:
- Visualization and maintenance of the Program Roadmap
- Management of the flow of features into the PI
- Visualization of the current state of the PI to support Scrum of Scrums, PO Sync, executive gemba walks, and other execution steering activities
- Visualization and management (or at least monitoring) of the deployment, operational rollout and outcome validation phases of the feature life-cycle
- Collection of cumulative flow and an abundance of Lead and Cycle time metrics at the Feature level
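As a closing illustration, the Lead and Cycle time metrics mentioned above fall directly out of the dates a feature entered each state. A minimal sketch with hypothetical dates, treating "Funnel" to "Done" as lead time (idea to value) and "Preparing" to "Done" as cycle time:

```python
from datetime import date

# Hypothetical per-feature dates of entry into each kanban state
history = {
    "F1": {
        "Funnel":    date(2016, 1, 4),
        "Preparing": date(2016, 2, 1),
        "Done":      date(2016, 5, 2),
    }
}

def days_between(states, start, end):
    """Days elapsed between entering two states."""
    return (states[end] - states[start]).days

lead_time  = days_between(history["F1"], "Funnel", "Done")     # idea -> value
cycle_time = days_between(history["F1"], "Preparing", "Done")  # active work -> value
print(lead_time, cycle_time)
```

Collected across every feature flowing through the wall, these durations are the raw material for the cumulative flow diagrams and Speed Metrics the series has been building towards.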