Friday, July 7, 2017

Facilitating Release Train Design with the ART Canvas Part 3 - Launch Planning

Following a hectic couple of months launching four new ARTs, it’s time to conclude my three-part series on facilitating Release Train design with the ART Canvas.  Covering the facilitation of a 1-day workshop, the previous posts dealt with the creation of a shared ART Vision accompanied by a supporting design.  In this conclusion, I tackle the closing session of the day: the preparation of a launch plan.

Before commencing the launch planning, we generally shrink the audience.  We release the stakeholders who were critical to establishing a shared vision and design, and narrow our focus to the leadership team who will be executing the launch preparation activities.

Agile is a team sport, and the ART is intended to be a self-organizing team of agile teams.  So, your leadership team needs to work as an agile team too!   The vision workshop is essentially a team formation activity for the leaders, who then have the chance to operate as an agile team as they prepare for the launch.

Since our final objective for the workshop is to generate a launch plan, the metaphor that makes most sense is that of a short “enablement PI” which is executed by the leadership team with the support of the product owners.  Thus, the final segment of the workshop is a mini PI Planning.  I open with the following challenge objective for their enablement PI:

“Your product is an Agile Release Train that can fulfil the vision we’ve defined here today.  8 weeks from now (or whatever launch date we’ve set), 150 people are going to walk into a room to spend 2 days in their first PI Planning.  What will make this a success?”
To assist them in their planning, I then distribute the following Features before sending them into breakouts to create a set of stories and objectives that will realize it:







The depth to which the plan is developed depends on our time-boxing.  I’ve generally run this workshop over the course of a single day, which means the resulting set of stories and backlog will be very rudimentary given the constrained time-box they will have for breakouts.  I have become increasingly tempted to extend it to a second day, allowing time to create a more effective plan, establish PI objectives, identify dependencies and create a Program Board.  This would also allow some time on the second day to work through some team formation activities with the Leadership Team such as:

  • Development of a social contract
  • Design of the team Kanban
  • Planning of the team demo approach (the team should be demonstrating launch preparation progress back to stakeholders)
  • Agreement on standup timing
  • Iteration planning for the first iteration

Closing the session

In closing, we should have the following outcomes:

  • A name and launch date for the ART
  • A completed ART canvas, ready to be hung on a wall (and eventually digitised)
  • An agreement for the newly formed ART leadership team to function as an agile team during the launch preparation
  • An appropriately scary launch preparation backlog that motivates movement by the leadership team and is accompanied by an agreed “Agile monitoring” process using Team Demos to enable progress transparency and feedback. 

I like to close out with a final check-in.  I love the definition of consensus Jean Tabaka taught me: “I can live with that, and support it”.  It’s a great way to bring a day of very hard work to a close, perhaps using a Roman vote.  “We’ve formed a vision, an ART design and a plan today.  There’s a lot of hard work ahead of us, but I’d like to check that we have achieved consensus before we leave this room.”



Sunday, June 11, 2017

Reflecting on the 2017 SAFe Leadership Retreat

Earlier this week, I spent a few days at the very picturesque Dalmahoy Hotel in Edinburgh attending the 2017 SAFe Leadership Retreat.  This was our 3rd gathering, having travelled to Banff (Canada) in 2016 and Crieff (Scotland) in 2015.

If I had to describe the retreat in one word, it would be “community”.  Given the general application of SAFe in large enterprises and the prevailing “partner consultancy” based model, creation of an environment where both consultants and practitioners come together to transcend boundaries and share is no mean feat.  Full credit must go to +Martin Burns and his better half Lucy for their tireless efforts in this regard.

As always, the retreat was attended by +Dean Leffingwell  along with a number of other representatives from SAI.  Also in attendance were a mix of SAFe Fellows, SPCTs, consultants and practitioners along with a “chaos monkey” in the form of +Simon Wardley.

Having now had a few days to reflect on the experience, I’d like to share some of the key themes that emerged.

SAFe “next”

In the opening session, Dean spent a couple of hours running us through the soon-to-be-released next version of SAFe.  Whilst we’re under NDA when it comes to specifics, I can say that the changes were enthusiastically received – with much improved guidance in some critical areas and, best of all, a simpler big picture for the first time ever!

SAFe beyond technology

Whilst the framework is squarely focused on the HW/FW/SW side of “Lean Product Development”, those with a truly lean mindset know that there’s a lot more than technology involved in the creation of a Lean Enterprise and optimization of the flow from Idea to Value.  I’ve long held the belief that great ARTs extend beyond technology into Sales, Marketing, Business Enablement and Business Change Management, and it was great to see many others not just talking about this but doing it.

HR as an enabler rather than an impediment in a Lean-Agile transformation

We were lucky to have a number of attendees who were either practicing HR professionals or came from an HR background, and numerous open-space sessions devoted considerable attention to Lean|Agile and the world of HR.  Whilst Fabiola Eyholzer’s guidance article last year made a great start on this front, many are grappling with the practical realities of such questions as how to address career progression in the ScrumMaster/RTE and Product Manager/Owner spaces.  Hopefully in the coming months we’ll see some of the outcomes of those discussions synthesized and in the public domain.

SAFe for Systems of Record

When it comes to true Business Agility, there are always Systems of Record involved.  A number of sessions and discussions focused on this, with a particularly robust session on SAP.   The general conclusion was that whilst it’s a lot more convoluted than digital, many are doing it successfully and common patterns are emerging.

Active Learning in SAFe courses

Many of the attendees were passionate believers in experiential and/or active learning who have struggled with the orientation of the SAFe courseware towards “lecture-style” training.  The great news for all of us is that this is becoming a significant focus area for SAI.  The newly introduced RTE course is a radical departure from the past, and the preview we were given of the new Leading SAFe course shows marked improvement in this direction.

SAFe is taking off in Europe

I’ve been coming to Europe every few months in a SAFe context for a couple of years now (starting with Crieff in 2015), and it has clearly been lagging the US and Australian markets in enterprise adoption.  But it appears the time has come.  Of the roughly 50 attendees at this year’s retreat, perhaps 40 were based in Europe and the vast majority were actively involved in significant SAFe implementations – a radical departure from 2015.

Closing Thoughts

Great agile is all about collaboration, relationship and learning.  The manifesto expressed it beautifully with the words “We are uncovering better ways of developing software by doing it and helping others do it”.   This year’s retreat lived the philosophy, and I enjoyed deepening existing relationships, forming new ones, sharing my experiences and learning from others’.  Bring on 2018!

Sunday, April 23, 2017

Facilitating Release Train Design with the ART Canvas Part 2 - Design


Introduction 

In my last article, I introduced the concept of facilitating a 1-day launch planning workshop for an Agile Release Train using the ART Canvas.  The workshop covers the generation of a vision for the ART, preparing an initial design and generating a launch plan.

The more ARTs I launch, the greater the time I devote in the workshop to developing a clear, aligned vision using the approach described in the preceding article.  With this in hand, we are well equipped to move forward with designing the ART itself – the focus of this article.

Workshop Agenda


  • Vision (Details in last article)
  • Design
    • What is the Development Value Stream and to what extent will the ART cover it?
    • Who will fill the key Leadership roles for the ART?
    • What supporting leadership roles will the ART need?
    • Which technical assets will the ART impact?
    • What is our Team Design strategy for the ART?
    • What is the Name and Launch Date for the ART?
  • Planning (Details in next article)

What is the Development Value Stream and to what extent will the ART cover it?

I was torn between calling this section of the canvas “Development Value Stream” or “Solution Context”.  Introduced in SAFe 4.0, the Solution Context “identifies critical aspects of the target solution environment” and “is often driven by factors that are outside the control of the organisation that develops the solution”.  I settled on the Development Value Stream because this tends to be how I facilitate the conversation.

Our focus here is really to delineate the difference between “the ART we want” and “the ART we can have”.  The “ART we want” can take an “Idea” directly from the Operational Value Stream and perform all activities necessary to carry that Idea through the life cycle until it ends in realized “Value”.  However, reality tends to intervene.  Perhaps the ART is not capacity funded and “Ideas” need to be routed through some type of PMO function before reaching the Product Manager.  A particular supporting system may not be able to be worked on by the ART due to the nature of the contract with a vendor or its ownership by an entrenched organisational silo.  Further, many ARTs cannot deploy independently and must participate in an enterprise release process.

Lack of awareness of the pragmatic compromises in your ART design can and will derail you.  In Good to Great, Jim Collins summarized it beautifully: “You must maintain unwavering faith that you can and will prevail in the end, regardless of difficulties, AND, at the same time, have the discipline to confront the most brutal facts about your current reality, whatever they may be”.

My favorite way to represent this is to depict a Development Value Stream running from “Idea to Value”, and represent using shading or call-outs those components of the Value Stream which fall outside the boundaries of the ART.   There will be some obvious exclusions we can use to frame the following discussions, and as design progresses we will find ourselves adding some fine details.  By the end of the workshop, we should have highlighted all of the ART’s key external dependencies in fulfilling its mission.

Who will fill the key Leadership Roles for the ART?

If you have been comparing my canvas to the one Dean documented, you will notice a couple of differences.  One of these is that I define 4 key leadership roles, whilst Dean sticks to the “classic troika”:

  • Release Train Engineer
  • Product Manager
  • System Architect
  • UX Lead (My addition)

The reason for my extension is simple: experience and scars.  The vast majority of ARTs have at least some digital component to them, and if you have a digital component you will have a UX competency.  I have learnt a great deal about the field of UX in the last 5 years, and much of it I’ve learned the hard way.  Two critical lessons have emerged:

  • There is regularly significant confusion between the roles of UX and Product Management
  • When it comes to embracing iterative and incremental approaches, UX specialists are often the new architects in terms of mindset battleground.

My response is to embrace their critical leadership role in enabling effective digital outcomes by recognizing them formally as part of the ART Leadership team.

Facilitating the selection of candidates is generally a formality.  If we’ve prepared well for the workshop, the nominees are sitting in the room and putting their names in the boxes simply puts the stamp of approval on the nomination.  If we walk out of the room with one of those boxes empty, we have a critical priority, as all of them are going to be incredibly busy in the coming weeks executing on the launch plan and every day will count for each of them.

What supporting leadership roles will the ART need?

Large ARTs will often need some additional folks to support the key leaders.  Following are some sample questions I use to seed this discussion:

  • Will one Product Manager be enough?  (In the case of multiple Product Managers, I’m always looking to see one designated as “chief”, but if we remember the SAFe guidance of 1 Product Manager per 2-4 Product Owners, it will be far from uncommon to see 2-3 people in this role)
  • Will one Architect be enough? (Given that system architects are often tied to a limited set of technology assets, I often see 2-3 architects dedicated to large ARTs to provide adequate coverage to all assets)
  • Do we need a Release Manager? (If the ART is facing into an Enterprise Release process and has low or no DevOps maturity, liaising with the ITIL folks can be a full-time role)
  • Do we need a Test Manager?  (The concept of a Test Manager might be anathema in a mature agile implementation, but as we launch our ART we’re not yet mature.  Who is going to mentor the testers embedded in the teams and assist them with evolving their testing strategy whilst also meeting the organisation’s governance and quality articulation needs?)
  • Do we need a Change Manager? (Many of my ARTs have a full change management team, in which case of course we would tackle this in our team design, but in cases where we don’t have a full team I often see this role staffed for the ART to enable effective liaison with the Operational Value Stream with respect to operational readiness and go-to-market strategies).


Whilst on the one hand we want to “minimize management overhead by creating a self-organizing team of teams”, on the other this will not happen overnight.   Experience has taught me that for any significant ART, people in these roles will exist – our choice is whether to make them a “part of the team” sharing ownership of our transformative change, or a part of “external governance functions”.


What technical assets will the ART impact?

Begin with the list of supporting systems from your Operational Value Stream outputs.  Then, because representation in the value stream is often in terms of “meta platforms”, add in more granularity as needed.  (For example, middleware is part of most ARTs but often considered to be implied during the value stream discussion).

Now examine the list through the lens of “Frequency/Extent of Impact”.  Framing the discussion in the context of the “typical Features” defined in the Solution section of the canvas, a simple mechanism such as “*” for frequently impacted and “**” for always impacted can be useful. Below is a simplified and de-sensitized example from a recent workshop:

  • Web**
  • SAP Bus Suite*
  • Mainframe*
  • Mobile*
  • Security
  • ESB*
  • ESS
  • WFM
  • Data Integration
  • MI/EDW*

What is our Team Design strategy for the ART?


It surprises me that there aren’t entire books devoted to the topic of agile team design and the trade-offs between feature and component teams.  My advice: "Be pragmatic yet ambitious".  

In brutal honesty, I’m a reformed “feature team purist”.  I love the theory, but I’ve lived the reality.  Your culture has to be ready for them.  Not just your management and silo owners, but most importantly your team members.  They just don’t work without openness to cross-skilling.  Have a conversation with an iOS dev about writing Android code.  Try another conversation with a 20-year COBOL veteran about learning to write JavaScript.  Recognize that there are specializations within specializations – there is no better place to learn this than your middleware folks.  The naïve will think “surely I can put a middleware person in each team”, only to discover there are 6-7 discrete specializations within middleware.

Recognize that your ART has to start somewhere, and inspect and adapt itself to a better place.  I establish a “feature teams are the goal” mindset, then climb my way back to a decent starting place – generally a combination of “nearly feature teams” with 2-3 specialized component teams.  For example, for the asset stack above we wound up with “nearly Feature Teams” containing Web, SAP and Mainframe skills, a “Data and Analytics” team, a “specialized assets” team and a “Business Enablement” team focused on change management and operational readiness.  By the end of the first PI planning we’d also added a lawyer to the Business Enablement Team.  And don’t forget to define the flavor of your System Team while you’re here.  Nowhere will you tackle more organisational silos than in staffing a good system team, so you want to seize the momentum you have established on vision to lock this in.

Do not attempt to match names to teams in this workshop.  Focus on defining “team templates” covering both technical and non-technical skills (e.g. BA, UX, Functional Analysts if you have an ERP, etc.).  Working out the quantity of each team flavor and the match of names to teams can come later.  If you’ve put the right people in the room (line managers), you are reaching alignment on commitment to dedication and cross-functionality.

What is the Name and Launch Date for the ART?

Let me be brutally clear: if you leave the room without a launch date, you’ve missed a huge opportunity.  A committed date is a forcing function.  You are planting the seeds of the “fix the date, float the scope” mindset the ART needs to have.  Look for something between 2 and 8 weeks, with a preference for 6-8 weeks.  Anything less than 4 weeks requires a lot of special circumstances, so if you’re new to the game don’t try it.  Anything more than 8 will hurt your sense of urgency and momentum.

On the name front, my colleagues and I are huge believers in the power of a name in establishing a shared sense of identity.  All the data is on the table to find one, and the sooner this new creation has a name, the sooner it will begin to take life.  Some recent examples for me:
  • SHERPA (Secure High value Engaged Reliable Payment Assurance)
  • DART (Digital Assurance Release Train)
  • START (Student Transformation Agile Release Train)

Conclusion

With an agreed vision and design for the ART, the final step is to generate the launch plan and supporting approach - the topic of my final article in this series.

Saturday, April 8, 2017

Facilitating Release Train design with the ART Canvas Part 1 - Vision

“It became clear to me after I began to read the four hundred or so vision statements I received: they all read the same.  Every organisation cares about customers, values teamwork, exists for shareholders or the community, believes in excellence in all they do.  If we all threw our vision statements in a hat and then drew one out, we would think it was ours regardless of where it came from.  I finally realised that it was the act of creating a vision that matters, not so much the content of what it was.” – Peter Block, Flawless Consulting

Introduction

In January 2014, I wrote my first article on launching an ART.   It focused on the concept of a launch planning workshop that formed a cross-functional Agile Leadership team and generated a vision and a backlog they could execute on over 6-8 weeks to launch their ART.

I’ve launched over 30 ARTs since then, and whilst my core approach has remained consistent, it has benefited from a great deal of iteration.  The biggest change has been adjusting the proportion of time I spend on the ART vision and design as opposed to the launch preparation backlog.  Alignment is a key component of ART success, and if the leaders aren’t aligned on the vision there is little hope for the teams.

Walking out of a brilliant launch workshop last year, I sat down to reflect on the key conversations and capture some notes to feed forward into my next launch.  As I reviewed the photos, I had an aha moment.  Why not use a Lean Canvas to anchor the key conversations?   I could print it out as a big wall poster, structure an agenda around completing it and walk out of the workshop with a “Train on a Page” diagram.

I’ve used it a number of times since then to good effect, and when I shared it with +Dean Leffingwell on a visit to Boulder in January, he was quick to ask about including it in his new Implementation Roadmap.  While he beat me to the publish button a few weeks ago with his Prepare for ART Launch article, I felt there would be value in sharing the approach I take to facilitating it.


All of my ART launches begin with what I call the “Launch Planning Workshop”.  Generally held within a week of the Leading SAFe course, this is a one-day facilitated workshop attended by the Business Owners and ART Leadership team with the objective of clarifying the vision, boundary and key success measures for the ART and generating the launch preparation backlog.

The workshop progresses through 3 phases:
  • Define the vision
  • Design the ART
  • Generate the Launch Plan
This article will tackle opening the workshop and the most important phase: the Vision.  My next two articles will address ART design and generation of the launch plan.

Opening the Workshop

The workshop opens with a blank A0 copy of the Canvas pinned to the wall and a 2-part objective:
  • Complete the ART Canvas to define the Vision, Leadership and Design strategy for our ART
  • Generate a backlog detailing the activities necessary to bring it to life
The bulk of the attendees should have recently attended Leading SAFe, and key participants include:
  • ART Leadership Candidates (at minimum RTE, Product Manager, System Architect)
  • Key sponsors from both Business and IT
  • Relevant technology Line Management (those who are likely to be contributing staff to the ART)
As a general note, I do not directly populate the Canvas.  I operate on either 2 whiteboards or a combination of flip-charts.  At the completion of each section, a scribe will transcribe the outputs onto the canvas.

Workshop Agenda

  • Vision
    • Which Customers will be impacted by the ART?
    • Which Operational Value Stream(s) will the ART support?
    • Who are the Business Owners for the ART?
    • What kind of Features will the ART deliver?
    • How will we measure success for the ART?
    • What is the Vision statement for the ART?
  • Design (Details in next article)
  • Planning (Details in future article)

Vision

The first 6 topics on the agenda address the vision for the ART, culminating in the creation of a Vision Statement.  

Which Customers will be impacted by the ART?

Customer centricity is at the heart of Lean and Agile, so it is where we begin.  This should take no more than 5 minutes.  Personally, I use the “internal Customer” and “external Customer” metaphor to frame the conversation and ensure we are covering both internal users of the solution and true customers of the organisation.

Which Operational Value Stream(s) will the ART support? 

The mission for any ART involves improving business outcomes, and succeeding in that mission begins with understanding the flow of value in the areas of the business the ART is to support.  
If a Value Stream workshop has already been conducted, this is the moment to review the outputs from that workshop.  However, for most ARTs this is not the case and now is the moment to identify and map the relevant operational value stream(s).  Provided the right people are in the room, this can be generated at an appropriate level in 20-30 minutes.   Generally, we are concerned with a single value stream and we look to our business folks to generate the core flow and architects to map the underlying systems.  Note that we do not need a great deal of detail.  

Service Assurance Operational Value Stream for a Telco
For more information (and examples) on approaching this, see the excellent new guidance provided in the Identify Value Streams and ARTs article from the Implementation Roadmap.

Who are the Business Owners for the ART?

I have to confess that for a number of years, I wondered why Dean insisted on providing a new term instead of just talking about Stakeholders.  An embarrassingly short time ago, it finally hit me.  Every ART will have many stakeholders, but in general 3-5 of them will be critical.  These are your Business Owners.  SAFe calls out specific responsibilities for them, defining them as “a critical group of three to five stakeholders who have shared fiduciary, governance, efficacy and ROI responsibility for the value delivered by a specific Agile Release Train”.  

Having now lived through a number of ARTs where either the membership of this group was unclear or the identified members took little accountability, I am now very focused on identifying them early and ensuring they have been engaged and signed up for their responsibilities prior to the launch.

What kind of Features will the ART deliver (Solution)?

Over time, the ART will deliver many Features.  In general, by the time you are at launch there is a known seed.  Perhaps it is being triggered by a waterfall program with preexisting expectations “converting” to agile.  Perhaps a critical project, or an urgent set of priorities.  But we’re also looking to consider the future – if for no other reason than to assist both in communicating to the broader organisation and in framing decision rules on work that should be routed to the ART.  Following is a sample from a recent workshop:
  • New, digitally enabled user experience on available technology
  • Process automation, improved data exchange with third parties
  • Remediation of current platform
In short, by this point we know which operational value stream we're supporting, who the customers and key stakeholders are - we just want to clarify "what kind of stuff" they can expect from the ART.

How will we measure success for the ART?

Given that 9 of my last 10 articles have focused on metrics, it should come as no surprise that I want to talk about them as part of our vision workshop.  There are many good reasons to do this, but the most important one lies in establishing the mindset that success for the ART involves business outcomes, not “delivering features on time and budget” or “improving velocity”.  It enables validation of the vision with the Business Owners, assists Product Managers in establishing their WSJF prioritization approach for the Program Backlog, and frames the kind of benefit propositions that should be appearing in Features.  Further, it enables us to capture baselines during our pre-launch phase so we are positioned to understand the impact the ART is having.

For those who have followed my Metrics series, this area seeds the Fitness Function from the Business Impact quadrant.  Whilst you may not exit the room with a full set of quantifiable measures, you can at least define qualitative ones and a follow-up activity to establish quantitative approaches to them.  Below, you’ll find a (slightly desensitized) sample set from a recent workshop:
  • Staff
    • Automation of simple work
    • Able to add more value by focusing on complex work
    • Stable and trustworthy technical platform
  • Customers
    • Simple, intuitive, digitally enhanced experience
    • Increased accuracy and timeliness 
    • Tailored service offer based on individual circumstances
    • Seamless movement between channels (tell us once)
  • Organisation
    • Straight-through processing increased
    • Authenticated digital transactions increased
    • Accuracy increased in the areas of debt, outlays and fraud
    • Operational Costs reduced
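To make that follow-up activity concrete, here is a minimal sketch (purely illustrative; the metric names, baselines and targets are my own assumptions, not client data) of how one of the qualitative measures above might be given a quantitative definition so a baseline can be captured before launch:

```python
# Hypothetical sketch only: giving a qualitative success measure a quantitative
# definition so a baseline can be captured pre-launch and trended per PI.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SuccessMeasure:
    statement: str             # qualitative measure agreed in the workshop
    metric: str                # quantitative proxy the ART commits to tracking
    baseline: Optional[float]  # captured during launch preparation (None = not yet measured)
    target: Optional[float]    # aspiration agreed with the Business Owners

measures = [
    SuccessMeasure(
        statement="Straight-through processing increased",
        metric="% of transactions completed with no manual touch",
        baseline=42.0, target=60.0),   # illustrative numbers
    SuccessMeasure(
        statement="Authenticated digital transactions increased",
        metric="% of transactions initiated via an authenticated digital channel",
        baseline=None, target=None),   # quantification is the follow-up activity
]

for m in measures:
    progress = "baseline pending" if m.baseline is None else f"{m.baseline}% -> target {m.target}%"
    print(f"{m.statement}: {m.metric} ({progress})")
```

The point is simply that each qualitative statement gains an agreed metric and a place to record a baseline before the ART launches.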

What is the Vision Statement for the ART?

The discussion we have facilitated to this point has been focused on identifying “non-generic” components of the vision statement by exploring the following questions:
  • Who are our customers?
  • How do we interact with them? (the operational value stream)
  • Who are our critical stakeholders? (who have to bless the vision)
  • What kind of Features are we going to deliver?
  • What difference are we to make? (Success Measures)
Now we want to bring it all together.  I’m a big fan of the elevator pitch template popularized by Geoffrey Moore in Crossing the Chasm and leveraged by SAFe in the Epic Value Statement.   I will generally split the room into two sub-groups (ensuring a mix of backgrounds in each), and give each 20 minutes to create their pitch.   I then help them pick a favorite, and optionally spend another 10 minutes polishing it by incorporating key ideas from the discarded pitch.

For the brave, an alternative which can really get the creative juices flowing is the Pixar Pitch.  My colleague +Em Campbell-Pretty wrote an article on it a few years ago which includes a sample used as the vision for an ART in a higher education setting.  Whilst confidentiality precludes me from sharing a specific example here, I have included one written by a leadership team describing what success with their SAFe implementation would look like:


Conclusion

Getting this far will take at least until lunch, and probably a little further.  In all likelihood, the longer it takes, the more value you are getting from the discussion as assumptions are flushed out and clarity emerges both on “what the mission of the ART is” and, just as importantly, “what it is not”!
The alignment and sense of mission achieved here will stand you in good stead as you progress into the ART Design agenda items addressed in my next article and begin to tackle some ivory towers and entrenched silos.

Sunday, March 26, 2017

Using OKRs to accelerate Relentless Improvement

Introduction

In my last 2 articles, I first introduced the concept of Objectives and Key Results (OKRs) with an illustration based on personal goal-setting, then explored applying them to the improvement of Product Strategy.

After experimenting on myself with personal OKRs, my first applications of the concept in a coaching setting have been in the area of improvement objectives.  I was particularly excited about this idea as I encounter so many teams and ARTs who lose heart with their retrospectives and Inspect and Adapt sessions due to failure to follow through on their great ideas.

I’ve lost count of how many discussions I’ve had in either the training room or a coaching setting with people who drag themselves to retrospectives feeling they are a waste of time.   The common refrain is that “we just keep talking about the same issues and nothing ever changes”.  A little probing generally exposes the following:

  • Teams use the same format every time (my pet hate is the “What went well, what didn’t go well, what puzzles me” retro).
  • Teams spend 80-90% of the session examining the past.
  • The (rushed) improvement idea generation yields concepts that are either beyond the team’s power to change or very fuzzy.
  • Few teams retrospect on the effectiveness or application of the ideas they do generate.

I’ve gifted many a copy of Esther Derby and Diana Larsen’s classic “Agile Retrospectives” to ScrumMasters and RTEs.  Likewise, I’ve facilitated countless sessions on time-boxing your retrospective or I&A to leave at least 50% of it for the “improvement” part of the discussion.

Good facilitation will at least get you to the point of emerging from the session with some good ideas.  However, in the words of Michael Josephson “Ideas without action are like beautifully gift-wrapped boxes with nothing inside”.
 

Crystallizing your great ideas with OKRs

A couple of years ago, I started experimenting with the Improvement Kata to assist teams with actioning their ideas.  As I researched it, I came across a great article describing an “Improvement Theme Canvas” by Crisp’s Jimmy Janlen.  The canvas was beautifully simple, taking teams on a great journey with 4 questions:

  • Where are we now?
  • Where do we dream of being (What does awesome look like?)
  • Where can we get to next? (And how would we measure it?)
  • What steps would start us moving?

I used it again and again, in settings ranging from team retrospectives to executive workshops.  And it helped.     But it was still missing something.  There was a recurring trend – a lack of measures.
As I started experimenting with OKRs, there was an obvious synergy.   I had found the Key Result setting to be the moment where I did my deepest thinking myself, so I tweaked the canvas a little:


My first opportunity to apply it was the second day of a management workshop.  The first day had been spent exploring challenge areas and selecting focus areas for improvement.  We had a fairly clear view of current state and definition of awesome (which represented 18 months+ of hard work), and the job on Day 2 was to address the right side of the canvas with a 3 month view. 

We began with some objectives, then the hard work began – how would we measure them?  The result was incredibly powerful.  We moved from some very fuzzy, amorphous objectives into real clarity on how to generate momentum.  When you start to talk about how to measure movement, something changes in the conversation.  On the one hand, it grounds you in reality but on the other the fact that you are looking for a measure ambitious enough to only have a 50% chance of success stretches you out again. 

As an illustration, I’ve developed a sample based on a commonly occurring theme in Inspect and Adapt sessions – struggles to develop momentum with Test Automation.
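To give a feel for the shape such a theme can take, below is a minimal, hypothetical sketch of a Test Automation improvement theme expressed as a target-state OKR with enabling activities; the current state, key results and activities are illustrative assumptions rather than the actual canvas from that session.

```python
# Hypothetical illustration only: one way to capture a Test Automation
# improvement theme as a target-state OKR with enabling activities.
improvement_theme = {
    "current_state": "Regression is largely manual; a full pass takes ~2 weeks",   # assumed
    "definition_of_awesome": "Any build can be regression-tested overnight, unattended",
    "target_state_okr": {
        "objective": "Build real momentum on test automation this PI",
        "key_results": [               # ambitious: roughly a 50% chance of hitting each
            "60% of priority-1 regression scenarios automated",
            "Automated regression suite runs on every merge to main",
            "Manual regression effort per release reduced by 30%",
        ],
    },
    "enabling_activities": [
        "Stand up a shared test-data service",
        "Pair testers with developers on automation in every team",
        "Add automation progress to the iteration review agenda",
    ],
}
```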


As a final hint on the canvas generation, the order in which you complete it is crucial.  There’s a great temptation to rush to “Enabling Activities” because people love to begin with solutions.  
  • Start with “Current state”: fishbone diagrams, 5 whys, and causal loop diagrams are all great tools for generating the data and shared understanding
  • Move to “Definition of Awesome”.  The word “Awesome” is incredibly powerful in helping move minds into the art of the possible, and also encourages alignment around desired direction.
  • Step 3 is to agree on your timebox (often 3 months or one Program Increment) and figure out how far you can move from your current state to the definition of awesome as you set your “Target State OKRs”.  You will start flushing out assumptions, and, more importantly, discovering alignment challenges.  The movement to quantified measures with your Key Results will expose many an unvoiced insight.
  • Only now do you set the “Enabling activities”.  They will come very quickly.  If you’ve set your OKRs well, your key activities will be very obvious.


So far we only have an Idea, what about Action? 

I can recommend nothing so strongly as the weekly priority-setting cycle from Radical Focus I described in my first OKR article.  The reality is that, despite the many schemes people use to create space for follow-up on improvement ideas, it is amazing how often they stumble.  Life gets busy, delivery pressure is high, and the first thing that gets sacrificed is time to work on improvement ideas.

The beauty of setting yourself a few priorities each week on moving towards your OKRs and spending a little time reflecting on them at the end of the week is that it helps you “find time”.  A few months into applying personal OKRs, I can attest to this.  I live a very hectic life.  Most weeks involve 5 long days facilitating and coaching with clients, 2-3 flights, and lots of “after hours admin” whilst coming up for air to be a dad, husband and human 😊  It’s very easy to find excuses.  But every week starts with “how can I move forward this week?”  And in the back of my head every day are my weekly goals.  To be brutally honest, my learning cycle at the end of the week often focuses on “how to find time”.  I’m not waiting 3 months to look back and find out I made no progress and lose heart.

My final hint is this: make them public.  If you’re applying the technique to Inspect and Adapt, your improvement OKRs should appear in your PI Objectives and their progress should be discussed in your System Demos and PI Demo.  Include the Key Result evaluation in the metric portion of your Inspect and Adapt.  There is both power and motivation in transparency about improvement objectives.

Sunday, March 12, 2017

Improving SAFe Product Management Strategy with OKRs

Introduction

In my last article, I introduced the concept of Objectives and Key Results (OKRs) and provided an illustration based on personal application.  However, I only tried out personal OKRs because I wanted a way to “learn by doing” before I started trying out some of my thinking on incorporating them in SAFe.   This article will explore the first application in SAFe: Improving Product Strategy.

Late last year, Product Management guru John Cutler wrote a brilliant article called “12 signs you’re working in a Feature Factory”.  Some highlights include:

  • No connection to core metrics.  Infrequent discussions about desired customer and business outcomes.
  • “Success Theater” around shipping with little discussion about impact.
  • Culture of hand-offs.  Front-loaded process in place to “get ahead of the work” so that items are “ready for engineering”.  Team is not directly involved in research, problem exploration, or experimentation and validation.
  • Primary measure of success is delivered features, not delivered outcomes.
  • No measurement.  Teams do not measure the impact of their work.
  • Mismatch between prioritization rigour (deciding what gets worked on) and validation rigour (deciding if it was, in fact, the right thing to work on).

His blog is a gold-mine for Product Managers, and I found much of his thinking resonated with me when it came to the difference between a healthy ART and a sick one.  OKRs are a very handy tool in combating the syndrome.

Introducing OKRs to Feature Definitions

The official guidance says “every feature must have defined benefits”.  We teach it in every class, and our Cost of Delay estimates should be based on the defined benefits, but it is frightening how few ARTs actually define benefits or expected outcomes for features.  At best, most scratch out a few rarely referenced qualitative statements.

Imagine if every feature that went into a Cost of Delay (COD) estimation session or PI planning had a clearly defined set of objectives and key results!

Following is an illustration based on an imaginary feature for a restaurant wanting to introduce an automated SMS-based reservation reminder capability.


Applying OKRs to PI Objectives

I’ve lost count of the number of long discussions I’ve had about the difference between PI Objectives and Features.  They almost always start the same way.  “Surely our PI Objective is to deliver Feature X”.  Or, in an ART with a lot of component teams, it might be “Support Team A in delivering Feature X”.

WRONG! 
 
Good PI objectives are based on outcomes.  Over the course of the planning event, the team unpacks their features and generates stories to capture the ongoing conversation and identify key inter-dependencies.  Eventually, we consolidate what we’ve learned about the desired outcomes by expressing them as PI objectives.  The teams don’t commit to delivering the stories, they commit to achieving the objectives.  They should be working with their product owners all the way through the PI to refine and reshape their backlogs to maximize their outcomes.  

If every feature that enters PI planning has a well-defined set of OKRs, we have a great big hint as to what our objectives are.  An ART with good Devops maturity may well transpose the Feature OKRs straight into their PI objectives (subject to trade-off decisions made during planning).  They will ship multiple times during the PI, leveraging the feedback from each deployment to guide their ongoing backlog refinement.   

SAFe Principle 9 suggests enabling decentralization by defining the economic logic behind the decision rather than the decision itself.   Using well-defined OKRs for our PI objectives provides this economic logic, enabling Product Owners and teams to make far more effective trade-off decisions as they execute.

However, without good DevOps maturity the features may not have been deployed in market for sufficient time by the end of the PI to be measuring the predefined Key Results.  In this case, the team needs to work to define a set of Key Results they will be able to measure by the time the PI is over.  These should at minimum be demonstrating some level of incremental validation of the target KRs.  Perhaps they could be based on results from User Testing (the UX kind, not UAT).  Or maybe, measured results in integration test environments.  For example:
  • 50% of test users indicated they were highly likely to recommend the feature to a friend or colleague
  • End-to-end processing time reduced by 50% in test environments. 
Last but not least, we’ve started to solve some of the challenges ARTs experience with the “Business Value” measure on PI objectives.  It should be less subjective during the assignment stage at PI Planning and completely quantitative once we reach Inspect and Adapt!

Outcome Validation

By this point, we’ve started to address a number of the warning signs.   But, unless we’re some distance down the path to good DevOps we haven’t really done much about validation.   If you read my recent Feature Kanban article, you’ll have noticed that the final stage before we call a feature “done” was “Impact Validation”. 

This is the moment for final leverage of our Feature OKRs.  What happens once the feature is deployed and used in anger?  Do the observed outcomes match expectations?  Do they trigger the generation of an enhancement feature to feed back through our Program Backlog for the next Build/Measure/Learn cycle?  Do they affect the priorities and value propositions of other features currently in the Program Backlog?

Linking it back to the ART PI Metrics dashboard

In the business impact quadrant of my PI metrics model, I proposed that every ART should have a “fitness function” defining the fashion in which the ART could measure its impact on the business.  This function is designed to be enduring rather than varying feature by feature – the intended use is to support trend-based analysis.  The job of effective product managers is, of course, to identify features which will move the strategic needles in the desired direction.  Business Features should be contributing to Business Impacts, Enablers might be either generating learning or contributing to the other three quadrants.

Thus, what we should see is Key Result measures in every feature that exert force on the strategic success measures for the ART.   Our team level metric models should be varying with feature content to enable them to steer.  This helps with the first warning sign Cutler identifies:
  1. No measurement.  Teams do not measure the impact of their work.  Or, if measurement happens, it is done in isolation by the product management team and selectively shared.  You have no idea if your work worked.

Conclusion

Whilst OKRs obviously have a lot to offer in the sphere of Product Strategy, this is not the only area in which they might help us with SAFe.  In my next article, I’ll explore applying them to Relentless Improvement.

Sunday, March 5, 2017

An Introduction to Objectives and Key Results (OKRs)

Introduction

In late January I was working on a final draft of one of my metrics articles when someone suggested I take a look at OKRs.  I’d heard of them, but hadn’t really paid much attention other than to file them on my always-long backlog of “things I need to learn something about”.  Then a couple of days later I was writing my abstract for a metrics session at the Agile Alliance 2017 conference and, in scanning the other submissions in the stream, discovered a plethora of OKR talks.  This piqued my interest, so I started reading the OKR submissions.  At this point I panicked a little.  The concept sounded fascinating (and kind of compelling when you realize it’s how Google and Intel work).  I was facing into the idea that my lovingly crafted “SAFe PI Metrics revamp” might be invalidated before I even finished the series.

So, I went digging.  The references that seemed to dominate were Radical Focus by Christina Wodtke, a presentation by Rick Klau of Google, and an OKR guide by Google.  A few days of reading later, I relaxed a little.  I hadn’t read anything that had invalidated my metric model, but I could see a world of synergies.  A number of obvious and compelling opportunities to apply OKR thinking both personally and to SAFe had emerged.

In this article, I will provide a brief overview of the concept and use my early application to personal objective setting as a worked example.  My next article will detail a number of potentially powerful applications in the context of SAFe.

OKRs – an Overview

As Wodtke states in Radical Focus, “this is a system that originated at Intel and is used by folks such as Google, Zynga, LinkedIn and General Assembly to promote rapid and sustained growth.  O stands for Objective, KR for key results.  Objective is what you want to do (Launch a killer game!), Key Results are how you know if you’ve achieved them (Downloads of 25K/day, Revenue of 50K/day).  OKRs are set annually and/or quarterly and unite the company behind a vision.”

Google provides some further great clarifying guidance:

  • “Objectives are ambitious and may feel somewhat uncomfortable”
  • “Key results are measurable and should be easy to grade with a number (Google uses a scale of 0-1.0)”
  • “The sweet spot for an OKR grade is 60-70%; if someone consistently fully attains their objectives, their OKRs are not ambitious enough and they need to think bigger”
  • “Pick just three to five objectives – more can lead to overextended teams and a diffusion of effort”.
  • “Determine around three Key Results for each objective”
  • “Key results express measurable milestones which, if achieved, will directly advance the objective”.
  • “Key results should describe outcomes, not activities”

Wodtke dials it up a notch when it comes to ambition:

  • OKRs are always stretch goals.  A great way to do this is to set a confidence level of five out of ten on the OKR.  By confidence level of five out of ten, I mean “I have confidence I only have a 50/50 shot of making this goal”.
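To make the grading mechanics concrete, here is a minimal sketch (my own illustration rather than anything from Wodtke or Google) of how an objective and its key results might be represented and graded on the 0-1.0 scale described above, using the “killer game” targets from the quote and assumed actuals:

```python
# Minimal sketch of OKR grading on a 0-1.0 scale (illustrative example only).
from dataclasses import dataclass

@dataclass
class KeyResult:
    description: str
    target: float    # the ambitious, measurable milestone
    actual: float    # what was actually achieved at the end of the quarter

    def grade(self) -> float:
        # Grade = fraction of the target achieved, capped at 1.0
        return min(self.actual / self.target, 1.0)

@dataclass
class Objective:
    description: str
    key_results: list

    def grade(self) -> float:
        # An objective's grade is the average of its key-result grades
        return sum(kr.grade() for kr in self.key_results) / len(self.key_results)

launch_game = Objective(
    description="Launch a killer game",
    key_results=[
        KeyResult("Daily downloads", target=25_000, actual=17_000),   # assumed actuals
        KeyResult("Daily revenue ($)", target=50_000, actual=31_000),
    ],
)

print(f"Objective grade: {launch_game.grade():.2f}")   # 0.65, inside the 60-70% sweet spot
```

Per the guidance above, a team that consistently grades close to 1.0 should treat that as a sign its objectives are not ambitious enough.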


Personal OKRs – a worked example

I’m a great believer in experimenting on myself before I experiment on my customers, so after finishing Radical Focus I decided to try quarterly personal OKRs.  Over the years, I’ve often been asked when I’m going to write a book.  After my colleague +Em Campbell-Pretty  published Tribal Unity last year, the questions started arriving far more frequently.

After a particularly productive spurt of writing over my Christmas holidays, I was starting to think it might be feasible, so I began to toy with the objective of publishing a book by the end of the year.  Em had used a book-writing retreat as a focusing tool, so it seemed like planning to hole up in a holiday house for a few weeks late in the year would be a necessary ingredient, but that was far away (and a large commitment), so the whole thing still felt very daunting.

As I focused in on the first quarter, the idea for an objective to commit to scheduling the writing retreat emerged.  Then the tough part came – what were the quantitative measures that would work as Key Results?  Having now facilitated a number of OKR-setting workshops, I'm coming to learn that Objectives are easy, but Key Results are where the brain really has to engage.  My first endeavors to apply it personally were no exception.  Eventually, I came to the realization that I needed to measure confidence.  Confidence that I could generate enough material, and confidence that people would be interested in reading the book.  After a long wrestle, I reached a definition:



Working the Objectives

As Wodtke states when concluding Radical Focus, “The Google implementation of OKRs is quite different than the one I recommend here”.  Given that I had started with her work, I was quite curious to spot the differences as I read through Google’s guidance.  At first, it seemed very subtle.  The biggest difference I could see was that for Google not all objectives were stretch.  Then the gap hit me.  Wodtke added a whole paradigm for working your way towards the objective with a lightweight weekly cycle.   I had loved it, as I’d mentally translated it to “weekly iterations”.

The model was simple.  At the start of each week, set three “Priority 1” goals and two “Priority 2” goals that would move you towards your objectives.  At the end of the week, celebrate your successes and reflect on your insights.  Whilst she had a lot more to offer on application and using the weekly goals as a lightweight communication tool, the simplicity was very appealing personally.  After all, I had a “day job” training and coaching.  My personal objectives were always going to be extra-curricular so I wanted a lightweight focusing tool.

Whilst writing a book was not my only objective, it was one of two, so it features heavily in my weekly priorities.  Following is an example excerpt:

Clarifying notes:
  • Oakleigh is a suburb near me with a big Greek population.  They have a bunch of late-night coffee shops, and it’s my favorite place to write.  I head up at 10pm after the kids are in bed and write until they kick me out around 2am.  This article is being written from my newly found “Brisbane Oakleigh”.
  • In week 2 I utilised OKRs in an executive strategy workshop I was facilitating (with a fantastic result), and shared the concept with a friend who was struggling with goal clarity.  In both cases, I used my writing OKR as an example.

Conclusion

Hopefully by this point you are both starting to understand the notion of the OKR and perhaps inspired enough to read some of the source material that covers it more fully.  I can’t recommend Radical Focus highly enough.  It’s an easy read, fable style (think The Goal, The Phoenix Project). 

You might have noticed along the way that my priority goal this week was to “Publish OKR article”.  As seems to happen regularly to me, once I start exploring an idea it becomes too big for one post.  I never actually got to the “SAFe” part of OKRs.  To whet your appetite, here are some things I plan to address in the second article:

  • Specifying OKRs as part of a Feature Definition
  • Applying OKR thinking to PI Objectives
  • Applying OKR thinking to Inspect and Adapt and Retrospective Outcomes


Sunday, February 26, 2017

Getting from Idea to Value with the ART Program Kanban

Introduction


Readers of this blog will be no strangers to the fact that I'm a strong believer in the value of kanban visualization at all levels of the SAFe implementation.  Further, if you've been reading my Metrics Series, you may have been wondering how to collect all those Speed Metrics.

Given that the Feature is the primary unit of "small batch value flow" in SAFe, effective application of kanban tools to the feature life-cycle is critical in supporting study and improvement of the flow of value for an Agile Release Train (ART).

The first application for most ARTs is enabling flow in the identification and preparation of Features for PI planning.  Many ARTs emerge from their first PI planning promising themselves to do a better job of starting early on identifying and preparing features for their next PI, only to suddenly hit panic stations 2 weeks out from the start of PI 2.  The introduction of a kanban to visualize this process is extremely valuable in starting to create the visibility and momentum needed to solve the problem.

However, I vividly remember the debate my colleague +Em Campbell-Pretty and I had with +Dean Leffingwell and +Alex Yakyma regarding their proposed implementation of the SAFe Program Kanban over drinks at the 2015 SAFe Leadership Retreat in Scotland.  Their initial cut positioned the job of the program kanban as "delivering features to PI planning", whilst we both felt the life-cycle needed to extend all the way to value realization.  This was in part driven by our shared belief that a feature kanban made a great visualization to support Scrum-of-Scrums during PI execution, but primarily by our drive to enable optimization of the full “Idea to Value” life-cycle.  Dean bought in and adjusted the representation in the framework (the graphic depicting the backlog as an interim rather than an end-state was in fact doodled by Em on her iPad during the conversation).

A good Kanban requires a level of granularity appropriate to exposing the bottlenecks, queues and patterns throughout the life-cycle.  Whilst the model presented in SAFe acts much as the Portfolio Kanban in identifying the overarching life-cycle states, it leaves a more granular interpretation as an exercise for the implementer.

Having now built (and rebuilt) many Program Kanban walls over the years while coaching, I've come to a fairly standard starting blueprint (depicted below).  This article will cover the purpose and typical usage of each column in the blueprint.



Note: My previous article on Kanban tips-and-tricks is worthwhile pre-reading in order to best understand and leverage the presented model.  Avatars should indicate both Product Management team members and Development teams associated with the Feature as it moves through its life-cycle.

Background thinking

The fundamental premise of lean is global optimization of the flow from idea to value.  Any truly valuable Feature Kanban will cover this entire life-cycle.  The reality is that many ARTs do not start life in control of the full cycle, but I believe this should not preclude visualization and monitoring of the full flow.  In short, “don’t let go just because you’re not in control”.  If you’re not following the feature all the way into production and market realization you’re missing out on vital feedback.

The Kanban States


Funnel

This is the entry point for all new feature ideas.  They might arrive here as features decomposed out of the Epic Kanban, features decomposed from Value Stream Capabilities, or as independently identified features.  In the words of SAFe, "All new features are welcome in the Feature Funnel".
No action occurs in this state; it is simply a queue with (typically) no exit policies.

Feature Summary

In this state, we prepare the feature for prioritization.  My standard recommendation is that ARTs adopt a "half to one page summary" Feature Template (sample coming soon in a future article).

Exit Policies would typically dictate that the following be understood about the feature in order to support an effective Cost of Delay estimation and WSJF calculation:

  • Motivation (core problem or opportunity)
  • Desired outcomes
  • Key stakeholders and impacted users
  • Proposed benefits (aligned to Cost of Delay drivers)
  • Key dependencies (architectural or otherwise)
  • Very rough size.

Prioritization


Features are taken to a Cost of Delay estimation workshop, their WSJF is calculated, and they are either rejected or approved to proceed to the backlog.

Exit Policies would typically indicate:

  • Initial Cost of Delay agreed
  • WSJF calculated
  • Feature has requisite support to proceed.
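For readers who haven't seen the arithmetic behind this state, here is a small sketch of the WSJF calculation; the feature names and relative estimates below are hypothetical.

```python
# Sketch of the WSJF calculation used in the Prioritization state.
# Cost of Delay = User-Business Value + Time Criticality + Risk Reduction/Opportunity Enablement
# WSJF = Cost of Delay / Job Size (all components are relative estimates, e.g. modified Fibonacci)

def wsjf(business_value: int, time_criticality: int, rr_oe: int, job_size: int) -> float:
    cost_of_delay = business_value + time_criticality + rr_oe
    return cost_of_delay / job_size

# Hypothetical features from a prioritization workshop
features = {
    "SMS reservation reminders": wsjf(business_value=8, time_criticality=5, rr_oe=3, job_size=5),
    "Legacy platform remediation": wsjf(business_value=3, time_criticality=2, rr_oe=8, job_size=13),
}

# Higher WSJF = do sooner; this ordering is what feeds the Backlog state
for name, score in sorted(features.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: WSJF = {score:.2f}")
```

Sorting approved features by the resulting WSJF score is what gives the following Backlog state its order.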

Backlog

This is simply a holding queue.  We have a feature summary and a calculated WSJF.  Features are stored here in WSJF order, but held to avoid investing more work in analysis until the feature is close to play.  If applying a WIP limit to this state, it would likely be based on ART capacity and limited to 2-3 PIs’ worth of capacity.

Exit Policies would typically surround confirmation that the Feature has been selected as a candidate for the next PI and any key dependencies have been validated sufficiently to support the selection.  I find most Product Management teams will make a deliberate decision at this point rather than just operating on “pull from backlog when ready”.

Next PI Candidate

Again, this state is simply a holding queue.  Movement from the “Backlog” to this state indicates that the Feature can be pulled for “Preparation” when ready.

Generally, there are no exit policies, but I like to place a spanning WIP limit over this and the following state (Preparing).  The logical WIP limit is based on capacity rather than number of features, and should roughly match the single-PI capacity of the ART.
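
Expressed as a quick check, the spanning limit might look something like the sketch below.  It assumes features carry the very rough size from their Feature Summary, and the single-PI capacity figure is illustrative.

def spanning_wip_ok(next_pi_candidates, preparing, pi_capacity_points):
    """Check the spanning WIP limit over 'Next PI Candidate' + 'Preparing'.

    Both inputs are lists of (feature_name, rough_size_points); the limit is
    expressed in capacity rather than a count of features.
    """
    committed = sum(size for _, size in next_pi_candidates + preparing)
    return committed <= pi_capacity_points, committed

within_limit, committed = spanning_wip_ok(
    next_pi_candidates=[("Feature A", 40), ("Feature B", 25)],
    preparing=[("Feature C", 30)],
    pi_capacity_points=110,   # illustrative single-PI capacity of the ART
)
print(f"Committed {committed} points - within limit: {within_limit}")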

Preparing

Here, we add sufficient detail to the Feature to enable it to be successfully planned.  The Exit Policy is equivalent to a “Feature Definition of Ready”.  Typically, this would specify the following:

  • Acceptance Criteria complete
  • Participating Dev Teams identified and briefed
  • Dependencies validated and necessary external alignment reached
  • High level Architectural Analysis complete
  • Journey-level UX complete
  • Required Technical Spikes complete

This is the one state in the Feature Kanban that is almost guaranteed to be decomposed to something more granular when applied.  The reality is that feature preparation involves a number of activities, and the approach taken will vary significantly based on context.   A decomposition I have often found useful is as follows:

  • Product Owner onboarding (affected Product Owners are briefed on the Feature by Product Management and perform some initial research, particularly with respect to expected benefits)
  • Discovery Workshops (led by Product Owner(s) and including affected team(s), architecture, UX and relevant subject matter experts to explore the feature and establish draft acceptance criteria and high level solution options)
  • Finalization (execution of required technical spikes, validation of architectural decisions, finalization of acceptance criteria, updates to size and benefit estimates).


Planned 

The planning event itself is not represented on the Kanban, but following the conclusion of PI planning all features which were included in the plan are pulled from "Preparing" to "Planned".

This is a queue state.  Just because a feature was included in the PI plan does not mean teams are working on it from Day 1 of the PI.  We include it deliberately to provide more accuracy (particularly with respect to cycle time) to the following states.  There are generally no exit policies.
  

Executing 

A feature is pulled into this state the instant the first team pulls the first story for it into a Sprint, and the actual work done here is the build/test of the feature.

Exit policies are based on the completion of all story-level build/test activities and readiness for feature-level validation.  Appropriate WIP limit strategies for this state will emerge with time and study.  In the beginning, the level of WIP observed here provides excellent insight into the alignment strategy of the teams and how effectively they applied Feature WIP concepts during PI planning.

Feature Validation

A mature ART will eliminate this state (given that maturity includes effective DevOps).  However, until such time as the ART reaches maturity, the type of activities we expect to occur here are:

  • Feature-level end-to-end testing
  • Feature UAT
  • Feature-level NFR validation

Exit Policies for this state are equivalent to a “Feature Definition of Done”.  They are typically based around readiness of the feature for Release-level hardening and packaging.   The size of the queue building in this state's "Done" sub-column will provide excellent insight into the batch-size approach being taken to deployments (and "time-in-state" metrics will reveal hard data about the cost of delay of such batching).

Release Validation

Once again, a mature ART will eliminate this state.  Until this maturity is achieved we will see a range of activities occurring here around pre-deployment finalization.

Exit Policies will be the equivalent of a "Release Definition of Done", and might include:

  • Regression Testing complete
  • Release-level NFR Validation (eg Penetration, Stress and Volume Testing) complete
  • Enterprise-level integration testing complete
  • Deployment documentation finalized 
  • Deployment approvals sought and granted, and deployment window confirmed


The set of features planned for release packaging will be pulled as a batch from "Feature Validation" into this state, and the set of features to be deployed (hopefully the same) will progress together to “Done” once the exit policies are fulfilled.

Deployment

Yet another "to-be-eliminated" state.  When the ARTs DevOps strategy matures, this state will last seconds - but in the meantime it will often last days.  The batch of features sitting in "Release Hardening" will be simultaneously pulled into this state at the commencement of Production Deployment activities, and moved together to Done at the conclusion of post-deployment verification activities.

Exit Policies will be based on enterprise deployment governance policy.  For many of my clients, they are based on the successful completion of a Business Verification Testing (BVT) activity where a number of key business SMEs manually verify a set of mission-critical scenarios prior to signalling successful deployment.

Operational Readiness

This state covers the finalization of operational readiness activities.  An ART that has matured well along Lean lines will already have performed much of the operational readiness work prior to deployment, but we are interested in the gap between "Feature is available" and "Feature is realizing value".   Typical activities we might see here depend on whether the solution context is internal or external, but might include:

  • Preparation and introduction of Work Instructions
  • Preparation and Delivery of end-user training
  • Preparation and execution of marketing activities
  • Education of sales channel

Exit Policies should be based around “first use in anger” by a production user in a real (non-simulated) context.

Impact Validation

A (hopefully not too) long time ago, our feature had some proposed benefits.  It's time to see whether the hypothesis was correct (the Measure and Learn cycles in Lean Startup).  I typically recommend this state be time-boxed to 3 months.  During this time, we are monitoring the operational metrics which will inform the correlation between expected and actual benefits.

Whilst learning should be harvested and applied regularly throughout this phase, it should conclude with some form of “learning validation workshop” (postmortem), with participants at minimum including Product Management and Product Owners but preferably also relevant subject matter experts, the RTE and representative team members.  Insights should be documented, and fed back into the Program Roadmap.

Exit Policies would be based upon the completion of the learning validation workshop and the incorporation of the generated insights into the Program Roadmap.

Done

Everybody needs a "brag board"!


Conclusion

Once established, this Kanban will provide a great deal of value.  Among other things, it can support:

  • Visualization and maintenance of the Program Roadmap
  • Management of the flow of features into the PI
  • Visualization of the current state of the PI to support Scrum of Scrums, PO Sync, executive gemba walks, and other execution steering activities.
  • Visualization and management (or at least monitoring) of the deployment, operational rollout and outcome validation phases of the feature life-cycle.
  • Collection of cumulative flow and an abundance of Lead and Cycle time metrics at the Feature level.
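
As an indication of how those flow metrics fall out, here is a minimal sketch.  It assumes nothing more than a record of the date each feature entered each column (whether exported from an ALM tool or transcribed from the wall); the features, dates and column subset are invented for illustration.

from datetime import date

# Entry date of each feature into each column (illustrative data).
transitions = {
    "Feature A": {"Backlog": date(2017, 1, 2), "Preparing": date(2017, 1, 23), "Executing": date(2017, 2, 20)},
    "Feature B": {"Backlog": date(2017, 1, 9), "Preparing": date(2017, 2, 6)},
    "Feature C": {"Backlog": date(2017, 1, 16)},
}

COLUMNS = ["Backlog", "Preparing", "Executing"]   # subset of the blueprint, for brevity

def state_on(feature, day):
    """The right-most column the feature had entered on or before the given day."""
    entered = [c for c in COLUMNS if c in transitions[feature] and transitions[feature][c] <= day]
    return entered[-1] if entered else None

def flow_snapshot(day):
    """Features per column on the given day - the band widths of a cumulative flow diagram."""
    return {c: sum(1 for f in transitions if state_on(f, day) == c) for c in COLUMNS}

print(flow_snapshot(date(2017, 2, 27)))   # {'Backlog': 1, 'Preparing': 1, 'Executing': 1}

Lead and cycle times fall out of the same data by simply subtracting entry dates between any two columns of interest.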


Saturday, February 18, 2017

Revamping SAFe's Program Level PI Metrics - Conclusion

“Base controls on relative indicators and trends, not on variances against plan” – Bjarte Bogsnes, Implementing Beyond Budgeting

Introduction

The series began with an overview of a metric model defined to address the following question:
"Is the ART sustainably improving in its ability to generate value through the creation of a passionate, results-oriented culture relentlessly improving both its engineering and product management capabilities?"
The ensuing posts delved into the definitions and rationale for the Business Impact, Culture, Quality and Speed quadrants.  In this final article, I will address dashboard representation, implementation and application.

Dashboard Representation

The model is designed such that the selected set of metrics will be relatively stable unless the core mission of the ART changes.  The only expected change would result from either refinement of the fitness function or incorporation of the advanced measures as the ART becomes capable of measuring them.


Given that our focus is on trend analysis rather than absolutes, my recommendation is that for each measure the dashboard reflects the value for the PI just completed, the previous PI and the average of the last 3 PIs.   On the assumption that most will initially implement the dashboard in Excel (sample available here), I would further suggest the use of conditional formatting to color-code movement (dark green for strongly positive through dark red for strongly negative). 
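
For those who graduate beyond the spreadsheet, the same trend logic is only a few lines of code.  The sketch below is illustrative: the sample lead-time values, thresholds and colour bands are assumptions rather than part of the model.

def trend_row(history):
    """Summarize one measure from a list of per-PI results, oldest first."""
    current, previous = history[-1], history[-2]
    rolling_average = sum(history[-3:]) / len(history[-3:])
    movement = (current - previous) / previous if previous else None
    return {"current PI": current, "previous PI": previous,
            "3-PI average": round(rolling_average, 1), "movement": movement}

def colour(movement, higher_is_better=True):
    """Map relative movement onto the suggested colour coding (thresholds illustrative)."""
    if movement is None:
        return "grey"
    signed = movement if higher_is_better else -movement
    if signed >= 0.10:
        return "dark green"
    if signed >= 0.0:
        return "light green"
    if signed > -0.10:
        return "light red"
    return "dark red"

feature_lead_time = [182, 170, 151, 138]    # average feature lead time (days) per PI
row = trend_row(feature_lead_time)
print(row, colour(row["movement"], higher_is_better=False))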

Implementation

In The Art of Business Value, Mark Schwartz proposes the idea of “BI-Driven Development (BIDD?)”.  His rationale?  “In the same sense that we do Test-Driven-Development, we can set up dashboards in the BI or reporting system that will measure business results even before we start writing our code”.

I have long believed that if we are serious about steering product strategy through feedback, every ART should either have embedded analytics capability or a strong reach into the organisation's analytics capability.  While the applicability extends far beyond the strategic dashboard (e.g. to per-Feature measurement), I would suggest that the more rapidly one can move from a manually collated and completed spreadsheet to an automated analytics solution, the more effective the implementation will be.

Virtually every metric on the dashboard can be automatically captured, whether it be from the existing enterprise data warehouse for Business Metrics, the Feature Kanban in the agile lifecycle management tool, SonarQube, the logs of the Continuous Integration and Version Control tools, or the Defect Management System.  Speed and Quality will require deliberate effort to configure tooling such that the metrics can be captured, and hints as to approach were provided in the rationales of the relevant deep-dive articles.  NPS metrics will require survey execution, but are relatively trivial to capture using such tools as SurveyMonkey.

Timing

I cannot recommend base-lining your metrics prior to ART launch strongly enough.  If you do not know where you are beginning, how will you understand the effectiveness of your early days?  Additionally, the insights derived from the period from launch to end of first PI can be applied in improving the effectiveness of subsequent ART launches across the enterprise.

With sufficient automation, the majority of the dashboard can be in a live state throughout the PI, but during the period of manual collation the results should be captured in the days leading up to the Inspect & Adapt workshop.

Application

The correct mindset is essential to effective use of the dashboard.  It should be useful for multiple purposes:
  • Enabling the Portfolio to steer the ART and the accompanying investment strategy
  • Enabling enterprise-level trend analysis and correlation across multiple ARTs
  • Improving the effectiveness of the ART’s Inspect and Adapt cycle
  • Informing the strategy and focus areas for the Lean Agile Centre of Excellence (LACE)

Regardless of application specifics, our focus is on trends and global optimization.  Are the efforts of the ART yielding the desired harvest, and are we ensuring that our endeavors to accelerate positive movement in a particular area are not causing sub-optimizations elsewhere in the system?

It is vital to consider the dashboard as a source not of answers, but of questions.   People are often puzzled by the Taiichi Ohno quote “Data is of course important … but I place the greatest emphasis on facts”.   Clarity lies in appreciating his emphasis on not relying on reports, but rather going to the “gemba”.  For me, the success of the model implementation lies in the number and quality of questions it poses.  The only decisions made in response to the dashboard should be what areas of opportunity to explore – and of course every good question begins with why.   For example:
  • Why is our feature execution time going down but our feature lead time unaffected?
  • Why has our deployment cycle time not reduced in response to our DevOps investment?
  • Why is Business Owner NPS going up while Team NPS is going down?
  • Why is our Program Predictability high but our Fitness Function yield low?
  • Why is our Feature Lead Time decreasing but our number of production incidents rising?

Conclusion

It’s been quite a journey working through this model, and I’m grateful for all the positive feedback I have received along the way.   The process has inspired me to write a number of supplementary articles.  

The first of these is a detailed coverage of the Feature Kanban (also known as the Program Kanban).  Numerous people have queried me as to the most effective way of collecting the Speed Metrics, and this becomes trivial with the development of an effective Feature Kanban (to say nothing of the other benefits).

I’ve also wound up doing a lot of digging into “Objectives and Key Results” (OKRs).  Somehow the growing traction of this concept had passed me by, and when my attention was first drawn to it I panicked at the thought it might invalidate my model before I had even finished publishing it.  However, my research concluded that the concepts were complementary rather than conflicting.  You can expect an article exploring this to follow closely on the heels of my Feature Kanban coverage.

There is no better way to close this series than with a thought from Deming reminding us of the vital importance of mindset when utilising any form of metric.
“People with targets and jobs dependent upon meeting them will probably meet the targets – even if they have to destroy the enterprise to do it.” – W. Edwards Deming
 

Sunday, February 5, 2017

Revamping SAFe's Program Level PI Metrics Part 5/6 - Speed

“Changing the system starts with changing your vantage point so you can ‘see’ the system differently.  Development speed is often attributed to quick decisions.  Early definition of the requirements and freezing specification quickly are often highlighted as keys to shortening the product development cycle.  Yet the key steps required to bring a new product to market remain the creation and application of knowledge, regardless of how quickly the requirements are set.  The challenge in creating an effective and efficient development system lies in shortening the entire process.” – Dantar Oosterwal, The Lean Machine.

Series Context

Part 1 – Introduction and Overview
Part 2 – Business Impact Metrics
Part 3 – Culture Metrics
Part 4 – Quality Metrics
Part 5 – Speed Metrics (You are here)
Part 6 – Conclusion and Implementation


Introduction

As mentioned in my last post, the categorization of metrics went through some significant reshaping in the review process.  The “Speed” (or “Flow”) quadrant didn't exist in the original draft; its all-important metrics were divided between “Business Impact” and “Deployment Health”.

Lead Time is arguably the most important metric in Lean, as evidenced by Taiichi Ohno's famous statement that “All we are doing is looking at the customer time line, from the moment the customer gives us the order to the point when we collect the cash”.  Not only does it measure our (hopefully) increasing ability to respond rapidly to opportunity, but it is also a critical ingredient in enabling a focus on global rather than local optimization.

In this quadrant, the core focus is to take two perspectives on Lead Time.  The first (Feature Lead Time) relates to the delivery of feature outcomes, and the second (MTTR from Incident) to our ability to rapidly recover from production incidents.

The other proposed metrics highlight the cycle time of key phases in the idea-to-value life-cycle as an aid to understanding “where we are slow, and where we are making progress”.  In particular, they will highlight failure to gain traction in XP and DevOps practices.

There is, however, a caveat.  Many (if not most) Agile Release Trains do not begin life in control of the entire idea-to-value life-cycle.  On the one hand, it's very common for features to be handed off to an enterprise release management organisation for production release.  On the other, whilst Lean principles are at the heart of SAFe, the framework centers on hardware/software development.  The (traditionally business) skill-sets in areas such as operational readiness, marketing and sales that are required to move from “Deployed product” to “Value generating product” are nowhere on the big picture. 

ARTs focused on bringing to life the SAFe principles will address these gaps as they inspect and adapt, but in the meantime  there is a temptation to “not measure what we are not in control of”.  As a coach, I argue that ARTs should “never let go until you’ve validated the outcome”.  You may not be in control, but you should be involved – if for nothing else than in pursuit of global optimization.  

Basic Definitions


Basic Metrics Rationale

Average Feature Lead Time (days)

This is the flagship metric.   However, the trick is determining "when the timer starts ticking".   For an ART maintaining the recommended 3-PI roadmap, feature lead time would rarely be shorter than a depressing 9 months.  
To measure it, one needs two things: a solid Feature Kanban, and agreement on which stage triggers the timer.  A good feature kanban will of necessity be more granular than the sample illustrated in the framework (fuel for a future post), but the trigger point I most commonly look for is "selection for next PI".  In classic kanban parlance, this is the moment when a ticket moves from "backlog" to "To Do", and in most ARTs it triggers the deeper preparation activities necessary to prepare a feature for PI planning.  The end-point for the measure is the moment at which the feature starts realizing value; this is dependent on solution context, often triggered by deployment for digital solutions but after business change management activities for internal solutions.
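
Given those trigger points, the measurement itself is straightforward once the Feature Kanban records entry dates per state.  A minimal sketch follows; the state names, feature names and dates are assumptions for illustration, and the same helper yields Feature Execution Cycle Time simply by swapping the trigger states.

from datetime import date

def average_lead_time(features, start_state="Next PI Candidate", end_state="Value Realized"):
    """Average feature lead time in days between the agreed trigger states.

    `features` maps feature name -> {state: entry_date}; only features that
    have reached the end state are counted.
    """
    durations = [
        (entries[end_state] - entries[start_state]).days
        for entries in features.values()
        if start_state in entries and end_state in entries
    ]
    return sum(durations) / len(durations) if durations else None

# Illustrative data: two features have realized value, one is still in flight.
sample = {
    "Feature A": {"Next PI Candidate": date(2016, 6, 6), "Value Realized": date(2016, 12, 19)},
    "Feature B": {"Next PI Candidate": date(2016, 8, 1), "Value Realized": date(2017, 1, 30)},
    "Feature C": {"Next PI Candidate": date(2016, 11, 7)},
}
print(average_lead_time(sample))   # 189.0 days across the two completed features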

Average Deployment Cycle Time (days)

This metric was inspired by the recently released DevOps Handbook by Gene Kim and friends.  In essence, we want to measure “time spent in the tail”.  I have known ART after ART that accelerated their development cycle whilst never making inroads on their path to production.  If everything you build has to be injected into a 3-month enterprise release cycle, it's almost pointless accelerating your ability to build!  
Whilst our goal is to measure this in minutes, I have selected days as the initial unit of measure, since for most large enterprises the starting point will be weeks if not months.

Average Mean Time to Restore (MTTR) from Incident (mins)

When a high severity incident occurs in production, how long does it take us to recover?  In severe cases, these incidents can cause losses of millions of dollars per hour.  Gaining trust in our ability to safely deploy regularly can only occur with demonstrated ability to recover fast from issues.  Further, since these incidents are typically easy to quantify in bottom-line impact, we gain the ability to start to measure the ROI of investment in DevOps enablers.
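
Provided incident detection and restoration timestamps are captured (most incident management tools record both), the measurement itself is trivial.  A tiny sketch, with invented timestamps:

from datetime import datetime

# (detected, restored) timestamps for high-severity production incidents this PI
incidents = [
    (datetime(2017, 1, 12, 9, 30), datetime(2017, 1, 12, 11, 45)),
    (datetime(2017, 2, 3, 14, 0),  datetime(2017, 2, 3, 14, 45)),
]

mttr_minutes = sum((restored - detected).total_seconds() / 60
                   for detected, restored in incidents) / len(incidents)
print(f"MTTR: {mttr_minutes:.0f} minutes")   # 90 minutes on this sample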

Prod Deploys Per PI (#)

Probably the simplest measure of all listed on the dashboard - how frequently are we deploying and realizing value?

Advanced Definitions


Advanced Metrics Rationale

Average Feature Execution Cycle Time (days)

This is one of the sub-phases of the lead time which are worth measuring in isolation, and is once again dependent on the presence of an appropriately granular feature kanban.  
The commencement trigger is "first story played", and the finalization trigger is "feature ready for deployment packaging" (satisfies Feature Definition of Done).  The resultant measure will be an excellent indicator of train behaviors when it comes to Feature WIP during the PI.  Are they working on all features simultaneously throughout the PI or effectively collaborating across teams to shorten the execution cycles at the feature level?

One (obvious) use of the metric is determination of PI length.  Long PIs place an obvious overhead on Feature Lead Time, but if average Feature Execution time is 10 weeks it's pointless considering an 8-week PI.  

Average Deploy to Value Cycle Time (days)

This sub-phase of feature lead time measures "how long a deployed feature sits on the shelf before realizing value".  
The commencement trigger is "feature deployed", and the finalization trigger is "feature used in anger".  It will signal the extent to which true system level optimization is being achieved, as opposed to local optimization for software build.  In a digital solution context it is often irrelevant (unless features are being shipped toggled-off in anticipation of marketing activities), but for internal solution contexts it can be invaluable in signalling missed opportunities when it comes to organizational change management and business readiness activities.

Average Deployment Outage (mins)

How long an outage will our users and customers experience in relation to a production deployment?  Lengthy outages will severely limit our aspirations to deliver value frequently.  

Conclusion

We’ve now covered all 4 quadrants and their accompanying metrics.  The next post will conclude the series with a look at dashboard representation, implementation and utilisation. 

“High performers [in DevOps practices] were twice as likely to exceed profitability, market share, and productivity goals.  And, for those organizations that provided a stock ticker symbol, we found that high performers had 50% higher market capitalization growth over three years.” – Gene Kim, Jez Humble, Patrick Debois, John Willis, The DevOps Handbook