Sunday, February 5, 2017

Revamping SAFe's Program Level PI Metrics Part 5/6 - Speed

“Changing the system starts with changing your vantage point so you can ‘see’ the system differently.  Development speed is often attributed to quick decisions.  Early definition of the requirements and freezing specification quickly are often highlighted as keys to shortening the product development cycle.  Yet the key steps required to bring a new product to market remain the creation and application of knowledge, regardless of how quickly the requirements are set.  The challenge in creating an effective and efficient development system lies in shortening the entire process.” – Dantar Oosterwal, The Lean Machine

Series Context

Part 1 – Introduction and Overview
Part 2 – Business Impact Metrics
Part 3 – Culture Metrics
Part 4 – Quality Metrics
Part 5 – Speed Metrics (You are here)
Part 6 – Conclusion and Implementation


Introduction

As mentioned in my last post, the categorization of metrics went through some significant reshaping in the review process.  The “Speed” (or “Flow”) quadrant didn’t originally exist; its all-important metrics were divided between “Business Impact” and “Deployment Health”.

Lead Time is arguably the most important metric in Lean, as evidenced by Taiichi Ohno’s famous statement that “All we are doing is looking at the customer time line, from the moment the customer gives us the order to the point when we collect the cash”.  Not only does it measure our (hopefully) increasing ability to respond rapidly to opportunity, but it is also a critical ingredient in enabling a focus on global rather than local optimization.

In this quadrant, the core focus is on two perspectives on Lead Time.  The first (Feature Lead Time) relates to the delivery of feature outcomes, and the second (MTTR from Incident) to our ability to rapidly recover from production incidents.

The other proposed metrics highlight the cycle time of key phases in the idea-to-value life-cycle as an aid to understanding “where we are slow, and where we are making progress”.  In particular, they will highlight failure to gain traction in XP and DevOps practices.

There is, however, a caveat.  Many (if not most) Agile Release Trains do not begin life in control of the entire idea-to-value life-cycle.  On the one hand, it’s very common for features to be handed off to an enterprise release management organisation for production release.  On the other, whilst Lean Principles are at the heart of SAFe, the framework centers on hardware/software development.  The (traditionally business) skill-sets in areas such as operational readiness, marketing and sales required to move from “Deployed product” to “Value-generating product” are nowhere on the big picture.

ARTs focused on bringing the SAFe principles to life will address these gaps as they inspect and adapt, but in the meantime there is a temptation to “not measure what we are not in control of”.  As a coach, I argue that ARTs should “never let go until you’ve validated the outcome”.  You may not be in control, but you should be involved – if for nothing else than in pursuit of global optimization.

Basic Definitions


Basic Metrics Rationale

Average Feature Lead Time (days)

This is the flagship metric.  However, the trick is determining "when the timer starts ticking".  For an ART maintaining the recommended 3-PI roadmap, feature lead time would rarely be shorter than a depressing 9 months.
To measure it, one needs two things: a solid Feature Kanban, and agreement on which stage triggers the timer.  A good feature kanban will of necessity be more granular than the sample illustrated in the framework (fuel for a future post), but the trigger point I most commonly look for is "selection for next PI".  In classic kanban parlance, this is the moment when a ticket moves from "backlog" to "To Do", and in most ARTs it triggers the deeper preparation activities necessary to ready a feature for PI planning.  The end-point for the measure is the moment at which the feature starts realizing value.  This is dependent on solution context – often triggered by deployment for digital solutions, but only after business change management activities for internal solutions.
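As a rough illustration only, here is a minimal sketch (in Python, with purely hypothetical field names) of how the metric could be derived from an export of feature kanban transition dates, assuming each feature records when it was selected for the next PI and when it first realized value:

```python
from datetime import date

# Hypothetical export of feature kanban transitions; field names are illustrative only.
features = [
    {"id": "F-101", "selected_for_pi": date(2016, 9, 5), "value_realized": date(2017, 1, 20)},
    {"id": "F-102", "selected_for_pi": date(2016, 9, 5), "value_realized": date(2016, 12, 2)},
    {"id": "F-103", "selected_for_pi": date(2016, 11, 14), "value_realized": None},  # still in flight
]

def average_feature_lead_time(features):
    """Average days from 'selected for next PI' to 'value realized'.

    Features still in flight are excluded rather than guessed at; only
    completed items contribute to the average.
    """
    completed = [f for f in features if f.get("value_realized")]
    if not completed:
        return None
    total_days = sum((f["value_realized"] - f["selected_for_pi"]).days for f in completed)
    return total_days / len(completed)

print(f"Average Feature Lead Time: {average_feature_lead_time(features):.1f} days")
```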

Average Deployment Cycle Time (days)

This metric was inspired by the recently released DevOps Handbook by Gene Kim and friends.  In essence, we want to measure “time spent in the tail”.  I have known ART after ART that accelerated their development cycle whilst never making inroads on their path to production.  If everything you build has to be injected into a 3-month enterprise release cycle, it’s almost pointless accelerating your ability to build!
Whilst our goal is to measure this in minutes, I have selected days as the initial measure because for most large enterprises the starting point will be weeks, if not months.
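By way of illustration, a sketch of the “tail” calculation, assuming the timer starts when a feature satisfies its Definition of Done and stops when it is live in production (both trigger points and field names are assumptions on my part):

```python
from datetime import date

# Hypothetical records: 'done' = satisfied Feature Definition of Done, 'deployed' = live in production.
features = [
    {"id": "F-101", "done": date(2016, 11, 28), "deployed": date(2017, 1, 18)},
    {"id": "F-102", "done": date(2016, 10, 31), "deployed": date(2017, 1, 18)},  # waited for the quarterly release
]

def average_deployment_cycle_time(features):
    """Average days a 'done' feature waits before it is running in production."""
    finished = [f for f in features if f.get("done") and f.get("deployed")]
    if not finished:
        return None
    return sum((f["deployed"] - f["done"]).days for f in finished) / len(finished)

print(f"Average Deployment Cycle Time: {average_deployment_cycle_time(features):.1f} days")
```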

Average Mean Time to Restore (MTTR) from Incident (mins)

When a high-severity incident occurs in production, how long does it take us to recover?  In severe cases, these incidents can cause losses of millions of dollars per hour.  Gaining trust in our ability to safely deploy regularly can only occur with a demonstrated ability to recover quickly from issues.  Further, since these incidents are typically easy to quantify in bottom-line impact, we gain the ability to start measuring the ROI of investment in DevOps enablers.
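As a sketch (with hypothetical incident records and an assumed cost-per-hour figure), MTTR and a rough bottom-line impact might be derived from incident timestamps like this:

```python
from datetime import datetime

# Hypothetical high-severity incident log; timestamps and the cost figure are illustrative only.
incidents = [
    {"id": "INC-9001", "detected": datetime(2017, 1, 9, 14, 0), "restored": datetime(2017, 1, 9, 17, 30)},
    {"id": "INC-9002", "detected": datetime(2017, 1, 23, 2, 15), "restored": datetime(2017, 1, 23, 3, 0)},
]

def mttr_minutes(incidents):
    """Mean minutes from detection to restoration across resolved incidents."""
    durations = [(i["restored"] - i["detected"]).total_seconds() / 60
                 for i in incidents if i.get("restored")]
    return sum(durations) / len(durations) if durations else None

def estimated_outage_cost(incidents, cost_per_hour):
    """Rough bottom-line impact, given an assumed cost per hour of outage."""
    total_minutes = sum((i["restored"] - i["detected"]).total_seconds() / 60
                        for i in incidents if i.get("restored"))
    return total_minutes / 60 * cost_per_hour

print(f"MTTR: {mttr_minutes(incidents):.0f} minutes")
print(f"Estimated impact: ${estimated_outage_cost(incidents, cost_per_hour=100_000):,.0f}")
```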

Prod Deploys Per PI (#)

Probably the simplest measure of all listed on the dashboard - how frequently are we deploying and realizing value?
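A minimal sketch of the count, assuming we have a list of production deployment dates and each PI’s date boundaries (all names and dates illustrative):

```python
from datetime import date

# Hypothetical PI boundaries and production deployment dates.
pi_boundaries = {"PI-4": (date(2016, 10, 3), date(2016, 12, 16)),
                 "PI-5": (date(2017, 1, 9), date(2017, 3, 24))}
deploy_dates = [date(2016, 11, 2), date(2016, 12, 14), date(2017, 2, 1)]

def deploys_per_pi(deploy_dates, pi_boundaries):
    """Count production deployments falling within each PI's date range."""
    counts = {pi: 0 for pi in pi_boundaries}
    for d in deploy_dates:
        for pi, (start, end) in pi_boundaries.items():
            if start <= d <= end:
                counts[pi] += 1
    return counts

print(deploys_per_pi(deploy_dates, pi_boundaries))
```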

Advanced Definitions


Advanced Metrics Rationale

Average Feature Execution Cycle Time (days)

This is one of the sub-phases of lead time worth measuring in isolation, and it is once again dependent on the presence of an appropriately granular feature kanban.
The commencement trigger is "first story played", and the finalization trigger is "feature ready for deployment packaging" (satisfies the Feature Definition of Done).  The resultant measure is an excellent indicator of train behaviors when it comes to Feature WIP during the PI.  Is the train working on all features simultaneously throughout the PI, or effectively collaborating across teams to shorten the execution cycles at the feature level?

One (obvious) use of the metric is determination of PI length.  Long PIs place an obvious overhead on Feature Lead Time, but if the average Feature Execution Cycle Time is 10 weeks, it is pointless to consider an 8-week PI.
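As an illustrative sketch (field names are hypothetical), both the cycle time itself and a crude Feature WIP signal – the peak number of features in execution at once – could be derived from the execution-phase timestamps:

```python
from datetime import date

# Hypothetical execution-phase timestamps: 'started' = first story played,
# 'ready' = satisfies the Feature Definition of Done.
features = [
    {"id": "F-101", "started": date(2016, 10, 10), "ready": date(2016, 11, 25)},
    {"id": "F-102", "started": date(2016, 10, 12), "ready": date(2016, 12, 9)},
    {"id": "F-103", "started": date(2016, 11, 7),  "ready": date(2016, 12, 2)},
]

def average_execution_cycle_time(features):
    """Average days from 'first story played' to 'ready for deployment packaging'."""
    done = [f for f in features if f.get("started") and f.get("ready")]
    if not done:
        return None
    return sum((f["ready"] - f["started"]).days for f in done) / len(done)

def peak_feature_wip(features):
    """Peak number of features in execution simultaneously - a rough Feature WIP indicator."""
    events = [(f["started"], 1) for f in features] + [(f["ready"], -1) for f in features]
    current = peak = 0
    for _, delta in sorted(events, key=lambda e: (e[0], e[1])):  # close events before opens on ties
        current += delta
        peak = max(peak, current)
    return peak

print(f"Average Feature Execution Cycle Time: {average_execution_cycle_time(features):.1f} days")
print(f"Peak Feature WIP: {peak_feature_wip(features)}")
```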

Average Deploy to Value Cycle Time (days)

This sub-phase of feature lead time measures "how long a deployed feature sits on the shelf before realizing value".  
The commencement trigger is "feature deployed", and the finalization trigger is "feature used in anger".  It will signal the extent to which true system level optimization is being achieved, as opposed to local optimization for software build.  In a digital solution context it is often irrelevant (unless features are being shipped toggled-off in anticipation of marketing activities), but for internal solution contexts it can be invaluable in signalling missed opportunities when it comes to organizational change management and business readiness activities.

Average Deployment Outage (mins)

How long an outage do our users and customers experience in relation to a production deployment?  Lengthy outages will severely limit our aspirations to deliver value frequently.

Conclusion

We’ve now covered all four quadrants and their accompanying metrics.  The next post will conclude the series with a look at dashboard representation, implementation and utilisation.

“High performers [in DevOps practices] were twice as likely to exceed profitability, market share, and productivity goals.  And, for those organizations that provided a stock ticker symbol, we found that high performers had 50% higher market capitalization growth over three years.” – Gene Kim, Jez Humble, Patrick Debois, John Willis, The DevOps Handbook

