Sunday, March 12, 2017

Improving SAFe Product Management Strategy with OKRs

Introduction

In my last article, I introduced the concept of Objectives and Key Results (OKRs) and illustrated them with a personal application. I started with personal OKRs because I wanted to "learn by doing" before trying out some of my thinking on incorporating them into SAFe. This article explores the first application in SAFe: improving Product Strategy.

Late last year, Product Management guru John Cutler wrote a brilliant article called “12 signs you’re working in a Feature Factory”.  Some highlights include:

  • No connection to core metrics.  Infrequent discussions about desired customer and business outcomes.
  • “Success Theater” around shipping with little discussion about impact.
  • Culture of hand-offs.  Front-loaded process in place to "get ahead of the work" so that items are "ready for engineering".  Team is not directly involved in research, problem exploration, or experimentation and validation.
  • Primary measure of success is delivered features, not delivered outcomes.
  • No measurement.  Teams do not measure the impact of their work.
  • Mismatch between prioritization rigour (deciding what gets worked on) and validation rigour (deciding if it was, in fact, the right thing to work on).

His blog is a gold-mine for Product Managers, and much of his thinking resonated with me when it came to the difference between a healthy ART and a sick one.  OKRs are a very handy tool for combating the feature-factory syndrome.

Introducing OKRs to Feature Definitions

The official guidance says "every feature must have defined benefits".  We teach it in every class, and our Cost of Delay estimates should be based on those defined benefits, yet it is frightening how few ARTs actually define benefits or expected outcomes for their features.  At best, most scratch out a few rarely referenced qualitative statements.

Imagine if every feature that went into a Cost of Delay (COD) estimation session or PI planning had a clearly defined set of objectives and key results!

Following is an illustration based on an imaginary feature for a restaurant wanting to introduce an automated SMS-based reservation reminder capability.
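The OKRs for such a feature might look something like this (the objective wording and target numbers are purely illustrative):

  • Objective: Reduce revenue lost to reservation no-shows by making it effortless for diners to confirm or cancel.
  • Key Result 1: No-show rate reduced from 12% to 6% within three months of launch.
  • Key Result 2: 80% of SMS reminders receive a confirm or cancel response within two hours of being sent.
  • Key Result 3: Staff time spent on manual reminder phone calls reduced by 75%.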


Applying OKRs to PI Objectives

I've lost count of the long discussions I've had about the difference between PI Objectives and Features.  They almost always start the same way: "Surely our PI Objective is to deliver Feature X".  Or, in an ART with a lot of component teams, it might be "Support Team A in delivering Feature X".

WRONG! 
 
Good PI objectives are based on outcomes.  Over the course of the planning event, teams unpack their features and generate stories to capture the ongoing conversation and identify key inter-dependencies.  Eventually, we consolidate what we've learned about the desired outcomes by expressing them as PI objectives.  The teams don't commit to delivering the stories; they commit to achieving the objectives.  They should be working with their Product Owners all the way through the PI to refine and reshape their backlogs to maximize their outcomes.

If every feature that enters PI planning has a well-defined set of OKRs, we have a great big hint as to what our objectives are.  An ART with good DevOps maturity may well transpose the Feature OKRs straight into its PI objectives (subject to trade-off decisions made during planning).  It will ship multiple times during the PI, leveraging the feedback from each deployment to guide its ongoing backlog refinement.
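To make that concrete with the restaurant feature above (targets again purely illustrative), a mature ART might carry "no-show rate reduced from 12% to 6%" straight across as a team PI objective, deploy early in the PI, and track the live no-show rate as the measure of whether the objective has been met.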

SAFe Principle 9 suggests enabling decentralization by defining the economic logic behind the decision rather than the decision itself.   Using well-defined OKRs for our PI objectives provides this economic logic, enabling Product Owners and teams to make far more effective trade-off decisions as they execute.

However, without good DevOps maturity the features may not have been deployed in the market long enough by the end of the PI to measure the predefined Key Results.  In this case, the team needs to define a set of Key Results they will be able to measure by the time the PI is over.  These should, at a minimum, demonstrate some level of incremental validation of the target KRs.  Perhaps they could be based on results from User Testing (the UX kind, not UAT), or on results measured in integration test environments.  For example:
  • 50% of test users indicated they were highly likely to recommend the feature to a friend or colleague
  • End-to-end processing time reduced by 50% in test environments. 

Last but not least, we've started to solve some of the challenges ARTs experience with the "Business Value" measure on PI objectives.  It should be less subjective during the assignment stage at PI Planning and completely quantitative once we reach Inspect and Adapt!
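As a purely illustrative example of how that scoring might work: if an objective was assigned a planned Business Value of 8 on the strength of a Key Result targeting a 40% reduction in no-shows, and the measured reduction at Inspect and Adapt is 30%, the actual Business Value might reasonably be scored at 6 (three quarters of the target achieved) rather than being settled by gut feel in the room.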

Outcome Validation

By this point, we've started to address a number of the warning signs.  But unless we're some distance down the path to good DevOps, we haven't really done much about validation.  If you've read my recent Feature Kanban article, you'll have noticed that the final stage before we call a feature "done" was "Impact Validation".

This is the moment to leverage our Feature OKRs one final time.  What happens once the feature is deployed and used in anger?  Do the observed outcomes match expectations?  Do they trigger the generation of an enhancement feature to feed back through our Program Backlog for the next Build/Measure/Learn cycle?  Do they affect the priorities and value propositions of other features currently in the Program Backlog?

Linking it back to the ART PI Metrics dashboard

In the business impact quadrant of my PI metrics model, I proposed that every ART should have a "fitness function" defining how the ART measures its impact on the business.  This function is designed to be enduring rather than varying feature by feature; the intended use is to support trend-based analysis.  The job of effective product managers is, of course, to identify features which will move the strategic needles in the desired direction.  Business Features should contribute to Business Impacts, while Enablers might either generate learning or contribute to the other three quadrants.
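For the restaurant scenario above, such a fitness function might be something like covers served per week or revenue lost to no-shows (again, purely illustrative); it stays stable from PI to PI while individual features, like the SMS reminder, come and go.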

Thus, what we should see is Key Result measures in every feature that exert force on the strategic success measures for the ART.  Our team-level metric models should vary with feature content to enable teams to steer.  This helps with the first warning sign Cutler identifies:
  1. No measurement.  Teams do not measure the impact of their work.  Or, if measurement happens, it is done in isolation by the product management team and selectively shared.  You have no idea if your work worked.

Conclusion

Whilst OKRs obviously have a lot to offer in the sphere of Product Strategy, this is not the only area in which they might help us with SAFe.  In my next article, I’ll explore applying them to Relentless Improvement.
