Friday, July 7, 2017

Facilitating Release Train Design with the ART Canvas Part 3 - Launch Planning

Following a hectic couple of months launching four new ARTs, it's time to conclude my three-part series on facilitating Release Train design with the ART Canvas.  Covering the facilitation of a 1-day workshop, the previous posts dealt with the creation of a shared ART Vision accompanied by a supporting design.  In this conclusion, I tackle the closing session of the day: the preparation of a launch plan.

Before commencing the launch planning, we generally shrink the audience.  We release the stakeholders who were critical to establishing a shared vision and design, and narrow our focus to the leadership team who will be executing the launch preparation activities.

Agile is a team sport, and the ART is intended to be a self-organizing team of agile teams.  So, your leadership team needs to work as an agile team too!   The vision workshop is essentially a team formation activity for the leaders, who then have the chance to operate as an agile team as they prepare for the launch.

Since our final objective for the workshop is to generate a launch plan, the metaphor that makes the most sense is that of a short “enablement PI” executed by the leadership team with the support of the product owners.  Thus, the final segment of the workshop is a mini PI Planning.  I open with the following challenge objective for their enablement PI:

“Your product is an Agile Release Train that can fulfil the vision we’ve defined here today.  8 weeks from now (or whatever launch date we’ve set), 150 people are going to walk into a room to spend 2 days in their first PI Planning.  What will make this a success?”
To assist them in their planning, I then distribute the following Features before sending them into breakouts to create the set of stories and objectives that will realize it:

The depth to which the plan is developed depends on our time-boxing.  I've generally run this workshop over the course of a single day, which means the set of stories and the backlog they produce will be very rudimentary given the constrained time-box available for breakouts.  I have become increasingly tempted to extend it to a second day, allowing time to create a more effective plan, establish PI objectives, identify dependencies and create a Program Board.  This would also allow some time on the 2nd day to work through some team formation activities with the Leadership Team such as:

  • Development of a social contract
  • Design of the team Kanban
  • Planning of team demo approach (the team should be demonstrating back to stakeholders on launch preparation progress)
  • Agreement on standup timing
  • Iteration planning for the first iteration

Closing the session

In closing, we should have the following outcomes:

  • A name and launch date for the ART
  • A completed ART canvas, ready to be hung on a wall (and eventually digitised)
  • An agreement for the newly formed ART leadership team to function as an agile team during the launch preparation
  • An appropriately scary launch preparation backlog that motivates movement by the leadership team and is accompanied by an agreed “Agile monitoring” process using Team Demos to enable progress transparency and feedback. 

I like to close out with a final check-in.  I love the definition of consensus Jean Tabaka taught me: “I can live with that, and support it”.  It's a great way to bring a day of very hard work to a close, perhaps using a Roman vote.  “We've formed a vision, an ART design and a plan today.  There's a lot of hard work ahead of us, but I'd like to check that we have achieved consensus before we leave this room.”



Sunday, June 11, 2017

Reflecting on the 2017 SAFe Leadership Retreat

Earlier this week, I spent a few days at the very picturesque Dalmahoy Hotel in Edinburgh attending the 2017 SAFe Leadership Retreat.  This was our 3rd gathering, having travelled to Banff (Canada) in 2016 and Crieff (Scotland) in 2015.

If I had to describe the retreat in one word, it would be “community”.  Given the general application of SAFe in large enterprises and the prevailing “partner consultancy” based model, creating an environment where both consultants and practitioners come together to transcend boundaries and share is no mean feat.  Full credit must go to +Martin Burns and his better half Lucy for their tireless efforts in this regard.

As always, the retreat was attended by +Dean Leffingwell  along with a number of other representatives from SAI.  Also in attendance were a mix of SAFe Fellows, SPCTs, consultants and practitioners along with a “chaos monkey” in the form of +Simon Wardley.

Having now had a few days to reflect on the experience, I’d like to share some of the key themes that emerged.

SAFe “next”

In the opening session, Dean spent a couple of hours running us through the soon-to-be-released next version of SAFe.  Whilst we're under NDA when it comes to specifics, I can say that the changes were enthusiastically received – much improved guidance in some critical areas and, best of all, for the first time ever a simpler big picture!

SAFe beyond technology

Whilst the framework is squarely focused on the HW/FW/SW side of “Lean Product Development”, those with a truly lean mindset know that there’s a lot more than technology involved in the creation of a Lean Enterprise and optimization of the flow from Idea to Value.  I’ve long held the belief that great ARTs extend beyond technology into Sales, Marketing, Business Enablement and Business Change Management and it was great to see many others not just talking about this but doing it.

HR as an enabler rather than an impediment in a Lean-Agile transformation

We were lucky to have a number of attendees who were either practicing HR professionals or came from an HR background, and numerous open-space sessions devoted considerable attention to Lean|Agile and the world of HR.  Whilst Fabiola Eyholzer’s guidance article last year made a great start on this front, many are grappling with the practical realities of such questions as how to address career progression in the ScrumMaster/RTE and Product Manager/Owner spaces.  Hopefully in the coming months we’ll see some of the outcomes of those discussions synthesized and in the public domain.

SAFe for Systems of Record

When it comes to true Business Agility, there are always Systems of Record involved.  A number of sessions and discussions focused on this, with a particularly robust session on SAP.   The general conclusion was that whilst it’s a lot more convoluted than digital, many are doing it successfully and common patterns are emerging.

Active Learning in SAFe courses

Many of the attendees were passionate believers in experiential and/or active learning who have struggled with the orientation of the SAFe courseware towards “lecture-style” training.  The great news for all of us is that this is becoming a significant focus area for SAI.  The newly introduced RTE course is a radical departure from the past, and the preview we were given of the new Leading SAFe course shows marked improvement in this direction.

SAFe is taking off in Europe

I've been coming to Europe every few months in a SAFe context for a couple of years now (starting with Crieff in 2015), and it has clearly been lagging the US and Australian markets in enterprise adoption.  But it appears the time has come.  Of the roughly 50 attendees at this year's retreat, perhaps 40 were Europe-based and the vast majority were actively involved in significant SAFe implementations – a radical departure from 2015.

Closing Thoughts

Great agile is all about collaboration, relationship and learning.  The manifesto expressed it beautifully with the words “We are uncovering better ways of developing software by doing it and helping others do it”.   This year’s retreat lived the philosophy, and I enjoyed deepening existing relationships, forming new ones, sharing my experiences and learning from others’.  Bring on 2018!

Sunday, April 23, 2017

Facilitating Release Train Design with the ART Canvas Part 2 - Design


Introduction 

In my last article, I introduced the concept of facilitating a 1-day launch planning workshop for an Agile Release Train using the ART Canvas.  The workshop covers the generation of a vision for the ART, preparing an initial design and generating a launch plan.

The more ARTs I launch, the greater the time I devote in the workshop to developing a clear, aligned vision using the approach described in the preceding article.  With this in hand, we are well equipped to move forward with designing the ART itself – the focus of this article.

Workshop Agenda


  • Vision (Details in last article)
  • Design
    • What is the Development Value Stream and to what extent will the ART cover it?
    • Who will fill the key Leadership roles for the ART?
    • What supporting leadership roles will the ART need?
    • Which technical assets will the ART impact?
    • What is our Team Design strategy for the ART?
    • What is the Name and Launch Date for the ART?
  • Planning (Details in next article)

What is the Development Value Stream and to what extent will the ART cover it?

I was torn between calling this section of the canvas “Development Value Stream” or “Solution Context”.  Introduced in SAFe 4.0, the Solution Context “identifies critical aspects of the target solution environment” and “is often driven by factors that are outside the control of the organisation that develops the solution”.  I settled on the Development Value Stream because this tends to be how I facilitate the conversation.

Our focus here is really to delineate the difference between “the ART we want” and “the ART we can have”.  The “ART we want” can take an “Idea” directly from the Operational Value Stream and perform all activities necessary to carry that Idea through the life cycle until it ends in realized “Value”.  However, reality tends to intervene.  Perhaps the ART is not capacity funded and “Ideas” need to be routed through some type of PMO function before reaching the Product Manager.  A particular supporting system may be off-limits to the ART due to the nature of the contract with a vendor or its ownership by an entrenched organisational silo.  Further, many ARTs cannot deploy independently and must participate in an enterprise release process.

Lack of awareness of the pragmatic compromises in your ART design can and will derail you.  In Good to Great, Jim Collins summarized it beautifully: “You must maintain unwavering faith that you can and will prevail in the end, regardless of difficulties, AND, at the same time, have the discipline to confront the most brutal facts about your current reality, whatever they may be”.

My favorite way to represent this is to depict a Development Value Stream running from “Idea to Value”, using shading or call-outs to mark those components of the Value Stream which fall outside the boundaries of the ART.  There will be some obvious exclusions we can use to frame the following discussions, and as design progresses we will find ourselves adding finer details.  By the end of the workshop, we should have highlighted all of the ART's key external dependencies in fulfilling its mission.

Who will fill the key Leadership Roles for the ART?

If you have been comparing my canvas to the one Dean documented, you will notice a couple of differences.  One of these is that I define 4 key leadership roles, whilst Dean sticks to the “classic troika”:

  • Release Train Engineer
  • Product Manager
  • System Architect
  • UX Lead (My addition)

The reason for my extension is simple: experience and scars.  The vast majority of ARTs have at least some digital component to them, and if you have a digital component you will have a UX competency.  I have learnt a great deal about the field of UX in the last 5 years, and much of it I’ve learned the hard way.  Two critical lessons have emerged:

  • There is regularly significant confusion between the roles of UX and Product Management
  • When it comes to embracing iterative and incremental approaches, UX specialists are often the new architects: the next mindset battleground.

My response is to embrace their critical leadership role in enabling effective digital outcomes by recognizing them formally as part of the ART Leadership team.

Facilitating the selection of candidates is generally a formality.  If we've prepared for the workshop at all well, the nominees are sitting in the room and putting their names in the boxes simply stamps approval on the nomination.  If we walk out of the room with one of those boxes empty, we have a critical priority: all of these people are going to be incredibly busy in the coming weeks executing on the launch plan, and every day will count for each of them.

What supporting leadership roles will the ART need?

Large ARTs will often need some additional folks to support the key leaders.  Following are some sample questions I use to seed this discussion:

  • Will one Product Manager be enough?  (in the case of multiple Product Managers, I’m always looking to see one designated as “chief” but if we remember the SAFe guidance of 1 Product Manager per 2-4 Product Owners it will be far from uncommon to see 2-3 people in this role)
  • Will one Architect be enough? (Given that system architects are often tied to a limited set of technology assets, I often see 2-3 architects dedicated to large ARTs to provide adequate coverage to all assets)
  • Do we need a Release Manager? (If the ART is facing into an Enterprise Release process and has low or no DevOps maturity, liaising with the ITIL folks can be a full-time role)
  • Do we need a Test Manager?  (The concept of a Test Manager might be anathema in a mature agile implementation, but as we launch our ART we’re not yet mature.  Who is going to mentor the testers embedded in the teams and assist them with evolving their testing strategy whilst also meeting the organisation’s governance and quality articulation needs?)
  • Do we need a Change Manager? (Many of my ARTs have a full change management team, in which case of course we would tackle this in our team design, but in cases where we don’t have a full team I often see this role staffed for the ART to enable effective liaison with the Operational Value Stream with respect to operational readiness and go-to-market strategies).


Whilst on the one hand we want to “minimize management overhead by creating a self-organizing team of teams”, on the other hand this will not happen overnight.  Experience has taught me that for any significant ART, people in these roles will exist – our choice is whether to make them “part of the team”, sharing ownership of our transformative change, or part of “external governance functions”.


What technical assets will the ART impact?

Begin with the list of supporting systems from your Operational Value Stream outputs.  Then, because representation in the value stream is often in terms of “meta platforms”, add in more granularity as needed.  (For example, middleware is part of most ARTs but often considered to be implied during the value stream discussion).

Now examine the list through the lens of “Frequency/Extent of Impact”.  Framing the discussion in the context of the “typical Features” defined in the Solution section of the canvas, a simple mechanism such as “*” for frequently impacted and “**” for always impacted can be useful. Below is a simplified and de-sensitized example from a recent workshop:

  • Web**
  • SAP Bus Suite*
  • Mainframe*
  • Mobile*
  • Security
  • ESB*
  • ESS
  • WFM
  • Data Integration
  • MI/EDW*
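
If you eventually digitise the canvas, this impact rating is easy to capture as structured data.  A minimal sketch, using the asset names and ratings from the list above (the representation itself is just one option, not a prescribed format):

    # Hypothetical sketch of the asset impact map as structured data, using
    # "**" for always impacted and "*" for frequently impacted (as above).
    art_assets = {
        "Web": "**",
        "SAP Bus Suite": "*",
        "Mainframe": "*",
        "Mobile": "*",
        "Security": "",
        "ESB": "*",
        "ESS": "",
        "WFM": "",
        "Data Integration": "",
        "MI/EDW": "*",
    }

    # Frequently or always impacted assets are the first candidates for
    # feature-team skill coverage; the rest may warrant component teams.
    core_assets = [name for name, rating in art_assets.items() if rating]
    print(core_assets)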

What is our Team Design strategy for the ART?


It surprises me that there aren’t entire books devoted to the topic of agile team design and the trade-offs between feature and component teams.  My advice: "Be pragmatic yet ambitious".  

In brutal honesty, I'm a reformed “feature team purist”.  I love the theory, but I've lived the reality.  Your culture has to be ready for them.  Not just your management and silo owners, but most importantly your team members.  They just don't work without openness to cross-skilling.  Have a conversation with an iOS dev about writing Android code.  Try another conversation with a 20-year COBOL veteran about learning to write JavaScript.  Recognize that there are specializations within specializations – there is no better place to learn this than your middleware folks.  The naïve will think “surely I can put a middleware person in each team”, only to discover there are 6-7 discrete specializations within middleware.

Recognize that your ART has to start somewhere, and inspect and adapt itself to a better place.  I establish a “feature teams are the goal” mindset, then climb my way back to a decent starting place – generally a combination of “nearly feature teams” with 2-3 specialized component teams.  For example, for the asset stack above we wound up with “nearly Feature Teams” containing Web, SAP and Mainframe skills, a “Data and Analytics” team, a “specialized assets” team and a “Business Enablement” team focused on change management and operational readiness.  By the end of the first PI planning we'd also added a lawyer to the Business Enablement Team.  And don't forget to define the flavor of your System Team while you're here.  Nowhere will you tackle more organisational silos than in staffing a good system team, so you want to seize the momentum you have established on vision to lock this in.

Do not attempt to match names to teams in this workshop.  Focus on defining “team templates” covering both technical and non-technical skills (e.g. BA, UX, Functional Analysts if you have an ERP, etc.).  Working out the quantity of each team flavor and the match of names to teams can come later.  If you've put the right people in the room (line managers), you are reaching alignment on commitment to dedication and cross-functionality.

What is the Name and Launch Date for the ART?

Let me be brutally clear: if you leave the room without a launch date, you've missed a huge opportunity.  A committed date is a forcing function.  You are sowing the seeds of the “fix the date, float the scope” mindset the ART needs to have.  Look for something between 2 and 8 weeks out, with a preference for 6-8 weeks.  Anything less than 4 weeks requires a lot of special circumstances, so if you're new to the game don't try it.  Anything more than 8 will hurt your sense of urgency and momentum.

On the naming front, my colleagues and I are huge believers in the power of a name in establishing a shared sense of identity.  All the data is on the table to find one, and the sooner this new creation has a name, the sooner it will begin to take life.  Some recent examples for me:
  • SHERPA (Secure High value Engaged Reliable Payment Assurance)
  • DART (Digital Assurance Release Train)
  • START (Student Transformation Agile Release Train)

Conclusion

With an agreed vision and design for the ART, the final step is to generate the launch plan and supporting approach - the topic of my final article in this series.

Saturday, April 8, 2017

Facilitating Release Train design with the ART Canvas Part 1 - Vision

“It became clear to me after I began to read the four hundred or so vision statements I received: they all read the same.  Every organisation cares about customers, values teamwork, exists for shareholders or the community, believes in excellence in all they do.  If we all threw our vision statements in a hat and then drew one out, we would think it was ours regardless of where it came from.  I finally realised that it was the act of creating a vision that matters, not so much the content of what it was.” – Peter Block, Flawless Consulting

Introduction

In January 2014, I wrote my first article on launching an ART.   It focused on the concept of a launch planning workshop that formed a cross-functional Agile Leadership team and generated a vision and a backlog they could execute on over 6-8 weeks to launch their ART.

I’ve launched over 30 ARTs since then, and whilst my core approach has remained consistent it has benefited from a great deal of iteration.  The biggest change has been adjusting the proportion of time I spend on the ART vision and design as opposed to the launch preparation backlog.    Alignment is a key component of ART success, and if the leaders aren’t aligned on the vision there is little hope for the teams.

Walking out of a brilliant launch workshop last year, I sat down to reflect on the key conversations and capture some notes to feed forward into my next launch.  As I reviewed the photos, I had an aha moment.  Why not use a Lean Canvas to anchor the key conversations?   I could print it out as a big wall poster, structure an agenda around completing it and walk out of the workshop with a “Train on a Page” diagram.

I've used it a number of times since then to good effect, and when I shared it with +Dean Leffingwell on a visit to Boulder in January he was quick to ask about including it in his new Implementation Roadmap.  While he beat me to the publish button a few weeks ago with his Prepare for ART Launch article, I felt there would be value in sharing the approach I take to facilitating it.


All of my ART launches begin with what I call the “Launch Planning Workshop”.  Generally held within a week of the Leading SAFe course, this is a one-day facilitated workshop attended by the Business Owners and ART Leadership team with the objective of clarifying the vision, boundary and key success measures for the ART and generating the launch preparation backlog.

The workshop progresses through 3 phases:
  • Define the vision
  • Design the ART
  • Generate the Launch Plan
This article will tackle opening the workshop and the most important phase: the Vision.  My next two articles will address ART design and generation of the launch plan.

Opening the Workshop

The workshop opens with a blank A0 copy of the Canvas pinned to the wall and a 2-part objective:
  • Complete the ART Canvas to define the Vision, Leadership and Design strategy for our ART
  • Generate a backlog detailing the activities necessary to bring it to life
The bulk of the attendees should have recently attended Leading SAFe, and key participants include:
  • ART Leadership Candidates (at minimum RTE, Product Manager, System Architect)
  • Key sponsors from both Business and IT
  • Relevant technology Line Management (those who are likely to be contributing staff to the ART)
As a general note, I do not directly populate the Canvas.  I operate on either 2 whiteboards or a combination of flip-charts.  At the completion of each section, a scribe will transcribe the outputs onto the canvas.

Workshop Agenda

  • Vision
    • Which Customers will be impacted by the ART?
    • Which Operational Value Stream(s) will the ART support?
    • Who are the Business Owners for the ART?
    • What kind of Features will the ART deliver?
    • How will we measure success for the ART?
    • What is the Vision statement for the ART?
  • Design (Details in next article)
  • Planning (Details in future article)

Vision

The first 6 topics on the agenda address the vision for the ART, culminating in the creation of a Vision Statement.  

Which Customers will be impacted by the ART?

Customer centricity is at the heart of Lean and Agile, so it is where we begin.  This should take no more than 5 minutes.  Personally, I use the “internal Customer” and “external Customer” metaphor to frame the conversation and ensure we are covering both internal users of the solution and true customers of the organisation.

Which Operational Value Stream(s) will the ART support? 

The mission for any ART involves improving business outcomes, and succeeding in that mission begins with understanding the flow of value in the areas of the business the ART is to support.  
If a Value Stream workshop has already been conducted, this is the moment to review the outputs from that workshop.  However, for most ARTs this is not the case and now is the moment to identify and map the relevant operational value stream(s).  Provided the right people are in the room, this can be generated at an appropriate level in 20-30 minutes.   Generally, we are concerned with a single value stream and we look to our business folks to generate the core flow and architects to map the underlying systems.  Note that we do not need a great deal of detail.  

[Figure: Service Assurance Operational Value Stream for a Telco]
For more information (and examples) on approaching this, see the excellent new guidance provided in the Identify Value Streams and ARTs article from the Implementation Roadmap.

Who are the Business Owners for the ART?

I have to confess that for a number of years, I wondered why Dean insisted on providing a new term instead of just talking about Stakeholders.  An embarrassingly short time ago, it finally hit me.  Every ART will have many stakeholders, but in general 3-5 of them will be critical.  These are your Business Owners.  SAFe calls out specific responsibilities for them, defining them as “a critical group of three to five stakeholders who have shared fiduciary, governance, efficacy and ROI responsibility for the value delivered by a specific Agile Release Train”.  

Having now lived through a number of ARTs where either the membership of this group was unclear or the identified members took little accountability, I am now very focused on identifying them early and ensuring they have been engaged and signed up for their responsibilities prior to the launch.

What kind of Features will the ART deliver (Solution)?

Over time, the ART will deliver many Features.  In general, by the time you are at launch there is a known seed.  Perhaps it is being triggered by a waterfall program with preexisting expectations “converting” to agile.  Perhaps a critical project, or an urgent set of priorities.  But we're also looking to consider the future – if for no other reason than to assist both in communicating to the broader organisation and in establishing decision rules about which work should be routed to the ART.  Following is a sample from a recent workshop:
  • New, digitally enabled user experience on available technology
  • Process automation, improved data exchange with third parties
  • Remediation of current platform
In short, by this point we know which operational value stream we're supporting, who the customers and key stakeholders are - we just want to clarify "what kind of stuff" they can expect from the ART.

How will we measure success for the ART?

Given that 9 out of 10 of my last articles have focused on metrics, it should come as no surprise that I want to talk about them as part of our vision workshop.  There are many good reasons to do this, but the most important one lies in establishing the mindset that success for the ART involves business outcomes, not “delivering features on time and budget” or “improving velocity”.  It enables validation of the vision with the Business Owners, assists Product Managers in establishing their WSJF prioritization approach for the Program Backlog and frames the kind of benefit propositions that should be appearing in Features.  Further, it enables us to capture baselines during our pre-launch phase so we are positioned to understand the impact the ART is having.
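
For readers who want the prioritization mechanics spelled out, SAFe's WSJF divides Cost of Delay by job size, and these success measures are exactly what informs the value and time-criticality components of that Cost of Delay.  A minimal sketch, with made-up relative estimates:

    # Minimal WSJF sketch. In SAFe, Cost of Delay is estimated relatively from
    # user/business value, time criticality, and risk reduction/opportunity
    # enablement; WSJF = Cost of Delay / Job Size. Numbers below are illustrative.
    def wsjf(business_value, time_criticality, risk_opportunity, job_size):
        cost_of_delay = business_value + time_criticality + risk_opportunity
        return cost_of_delay / job_size

    # Two hypothetical features scored with modified-Fibonacci relative estimates.
    print(wsjf(business_value=8, time_criticality=5, risk_opportunity=3, job_size=5))   # 3.2
    print(wsjf(business_value=13, time_criticality=2, risk_opportunity=1, job_size=8))  # 2.0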

For those who have followed my Metrics series, this area seeds the Fitness Function from the Business Impact quadrant.  Whilst you may not exit the room with a full set of quantifiable measures, you can at least define qualitative ones and a follow-up activity to establish quantitative approaches to them.  Below, you'll find a (slightly desensitized) sample set from a recent workshop:
  • Staff
    • Automation of simple work
    • Able to add more value by focusing on complex work
    • Stable and trustworthy technical platform
  • Customers
    • Simple, intuitive, digitally enhanced experience
    • Increased accuracy and timeliness 
    • Tailored service offer based on individual circumstances
    • Seamless movement between channels (tell us once)
  • Organisation
    • Straight-through processing increased
    • Authenticated digital transactions increased
    • Accuracy increased in the areas of debt, outlays and fraud
    • Operational Costs reduced

What is the Vision Statement for the ART?

The discussion we have facilitated to this point has been focused on identifying “non-generic” components of the vision statement by exploring the following questions:
  • Who are our customers?
  • How do we interact with them? (the operational value stream)
  • Who are our critical stakeholders? (who have to bless the vision)
  • What kind of Features are we going to deliver?
  • What difference are we to make? (Success Measures)
Now we want to bring it all together.  I’m a big fan of the elevator pitch template popularized by Geoffrey Moore in Crossing the Chasm and leveraged by SAFe in the Epic Value Statement.   I will generally split the room into two sub-groups (ensuring a mix of backgrounds in each), and give each 20 minutes to create their pitch.   I then help them pick a favorite, and optionally spend another 10 minutes polishing it by incorporating key ideas from the discarded pitch.
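
For reference, the template runs roughly along these lines (paraphrased from Moore's structure rather than quoted verbatim):

    For <target customers>
    who <statement of the need or opportunity>,
    the <ART / solution name> is a <solution category>
    that <key benefit: the compelling reason to act>.
    Unlike <the primary alternative>,
    our solution <statement of primary differentiation>.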

For the brave, an alternative which can really get the creative juices flowing is the Pixar Pitch.  My colleague +Em Campbell-Pretty wrote an article on it a few years ago which includes a sample used as the vision for an ART in a higher education setting.  Whilst confidentiality precludes me from sharing a specific example here, I have included one written by a leadership team describing what success with their SAFe implementation would look like:


Conclusion

Getting this far will take at least until lunch, and probably a little further.  In all likelihood, the longer it takes, the more value you are getting from the discussion as assumptions are flushed out and clarity emerges both on “what the mission of the ART is” and, just as importantly, “what it is not”!
The alignment and sense of mission achieved here will stand you in good stead as you progress into the ART Design agenda items addressed in my next article and begin to tackle some ivory towers and entrenched silos.

Sunday, March 26, 2017

Using OKRs to accelerate Relentless Improvement

Introduction

In my last 2 articles, I first introduced the concept of Objectives and Key Results (OKRs) with an illustration based on personal goal-setting, then explored applying them to the improvement of Product Strategy.

After experimenting on myself with personal OKRs, my first applications of the concept in a coaching setting have been in the area of improvement objectives.  I was particularly excited about this idea as I encounter so many teams and ARTs who lose heart with their retrospectives and Inspect and Adapt sessions due to failure to follow through on their great ideas.

I’ve lost count of how many discussions I’ve had in either the training room or a coaching setting with people who drag themselves to retrospectives feeling they are a waste of time.   The common refrain is that “we just keep talking about the same issues and nothing ever changes”.  A little probing generally exposes the following:

  • Teams use the same format every time (my pet hate is the “What went well, what didn’t go well, what puzzles me” retro).
  • Teams spend 80-90% of our session examining the past
  • The (rushed) improvement idea generation yields concepts that are either beyond their power to change or very fuzzy
  • Few teams retrospect on the effectiveness or application of the ideas they do generate 

I've gifted many a copy of Esther Derby and Diana Larsen's classic “Agile Retrospectives” to ScrumMasters and RTEs.  Likewise, I've facilitated countless sessions on time-boxing your retrospective or I&A to leave at least 50% of it for the “improvement” part of the discussion.

Good facilitation will at least get you to the point of emerging from the session with some good ideas.  However, in the words of Michael Josephson “Ideas without action are like beautifully gift-wrapped boxes with nothing inside”.
 

Crystallizing your great ideas with OKRs

A couple of years ago, I started experimenting with the Improvement Kata to assist teams with actioning their ideas.  As I researched it, I came across a great article describing an “Improvement Theme Canvas” by Crisp’s Jimmy Janlen.  The canvas was beautifully simple, taking teams on a great journey with 4 questions:

  • Where are we now?
  • Where do we dream of being? (What does awesome look like?)
  • Where can we get to next? (And how would we measure it?)
  • What steps would start us moving?

I used it again and again, in settings ranging from team retrospectives to executive workshops.  And it helped.  But it was still missing something.  There was a recurring trend – a lack of measures.

As I started experimenting with OKRs, there was an obvious synergy.  I had found Key Result setting to be the moment where I did my own deepest thinking, so I tweaked the canvas a little:


My first opportunity to apply it was the second day of a management workshop.  The first day had been spent exploring challenge areas and selecting focus areas for improvement.  We had a fairly clear view of current state and definition of awesome (which represented 18 months+ of hard work), and the job on Day 2 was to address the right side of the canvas with a 3 month view. 

We began with some objectives, then the hard work began – how would we measure them?  The result was incredibly powerful.  We moved from some very fuzzy, amorphous objectives into real clarity on how to generate momentum.  When you start to talk about how to measure movement, something changes in the conversation.  On the one hand, it grounds you in reality but on the other the fact that you are looking for a measure ambitious enough to only have a 50% chance of success stretches you out again. 

As an illustration, I’ve developed a sample based on a commonly occurring theme in Inspect and Adapt sessions – struggles to develop momentum with Test Automation.


As a final hint on the canvas generation, the order in which you complete it is crucial.  There’s a great temptation to rush to “Enabling Activities” because people love to begin with solutions.  
  • Start with “Current state”: fishbone diagrams, 5 whys, and causal loop diagrams are all great tools for generating the data and shared understanding
  • Move to “Definition of Awesome”.  The word “Awesome” is incredibly powerful in helping move minds into the art of the possible, and also encourages alignment around desired direction.
  • Step 3 is to agree on your timebox (often 3 months or one Program Increment) and figure out how far you can move from your current state to the definition of awesome as you set your “Target State OKRs”.  You will start flushing assumptions, and more importantly discovering alignment challenges.  The movement to quantified measures with your Key Results will expose many an unvoiced insight.
  • Only now do you set the “Enabling activities”.  They will come very quickly.  If you’ve set your OKRs well, your key activities will be very obvious.
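
Pulling those steps together for the test automation theme mentioned earlier, the right-hand side of the canvas might read something like the sketch below.  The targets are purely illustrative, not the actual workshop sample:

    # Illustrative Target State OKR for a test automation improvement theme.
    # All figures are hypothetical, chosen to feel like 50/50 stretch goals.
    improvement_okr = {
        "timebox": "1 Program Increment",
        "objective": "Build genuine momentum with test automation",
        "key_results": [
            "Automated regression coverage of the top 20 customer journeys",
            "Regression cycle time reduced from 10 days to 3 days",
            "Every team running automated smoke tests on every build",
        ],
        "enabling_activities": [
            "Stand up a shared test data service",
            "Pair each team's testers with the System Team for one iteration",
        ],
    }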


So far we only have an Idea, what about Action? 

I can recommend nothing so strongly as the weekly priority-setting cycle from Radical Focus I described in my first OKR article.  The reality is that, despite the many schemes people use to create space for follow-up on improvement ideas, it is amazing how often they stumble.  Life gets busy, delivery pressure is high, and the first thing that gets sacrificed is time to work on improvement ideas.

The beauty of setting yourself a few priorities each week on moving towards your OKRs and spending a little time reflecting on them at the end of the week is that it helps you “find time”.  A few months into applying personal OKRs, I can attest to this.  I live a very hectic life.  Most weeks involve 5 long days facilitating and coaching with clients, 2-3 flights, and lots of “after hours admin”, whilst coming up for air to be a dad, husband and human 😊  It's very easy to find excuses.  But every week starts with “how can I move forward this week?”  And in the back of my head every day are my weekly goals.  To be brutally honest, my learning cycle at the end of the week often focuses on “how to find time”.  I'm not waiting 3 months to look back, find out I made no progress and lose heart.

My final hint is this: make them public.  If you're applying the technique to Inspect and Adapt, your improvement OKRs should appear in your PI Objectives and progress should be discussed in your System Demos and PI Demo.  Include the Key Result evaluation in the metrics portion of your Inspect and Adapt.  There is both power and motivation in transparency about improvement objectives.

Sunday, March 12, 2017

Improving SAFe Product Management Strategy with OKRs

Introduction

In my last article, I introduced the concept of Objectives and Key Results (OKRs) and provided an illustration based on personal application.  However, I only tried out personal OKRs because I wanted a way to “learn by doing” before I started trying out some of my thinking on incorporating them in SAFe.   This article will explore the first application in SAFe: Improving Product Strategy.

Late last year, Product Management guru John Cutler wrote a brilliant article called “12 signs you’re working in a Feature Factory”.  Some highlights include:

  • No connection to core metrics.  Infrequent discussions about desired customer and business outcomes.
  • “Success Theater” around shipping with little discussion about impact.
  • Culture of hand-offs.  Front-loaded process in place to “get ahead of the work” so that items are ready for engineering.  Team is not directly involved in research, problem exploration, or experimentation and validation.
  • Primary measure of success is delivered features, not delivered outcomes.
  • No measurement.  Teams do not measure the impact of their work.
  • Mismatch between prioritization rigour (deciding what gets worked on) and validation rigour (deciding if it was, in fact, the right thing to work on).

His blog is a gold-mine for Product Managers, and I found much of his thinking resonated with me when it came to the difference between a healthy ART and a sick one.  OKRs are a very handy tool in combating the syndrome.

Introducing OKRs to Feature Definitions

The official guidance says “every feature must have defined benefits”.  We teach it in every class, and our Cost of Delay estimates should be based on the defined benefits, but it is frightening how few ARTs actually define benefits or expected outcomes for features.  At best, most scratch out a few rarely referenced qualitative statements.

Imagine if every feature that went into a Cost of Delay (COD) estimation session or PI planning had a clearly defined set of objectives and key results!

Following is an illustration based on an imaginary feature for a restaurant wanting to introduce an automated SMS-based reservation reminder capability.
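
In rough form, the definition might carry OKRs like these (the specific targets are invented purely for illustration):

    # Hypothetical Feature OKRs for an automated SMS reservation reminder
    # capability (all targets invented for illustration).
    feature_okrs = {
        "feature": "Automated SMS reservation reminders",
        "objective": "Dramatically reduce revenue lost to no-shows",
        "key_results": [
            "No-show rate reduced from 12% to 5% of bookings",
            "80% of reminded guests confirm or cancel via SMS reply",
            "Freed-up tables re-booked within 30 minutes at least 50% of the time",
        ],
    }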


Applying OKRs to PI Objectives

I’ve lost count of the number of long discussions I’ve had about the difference between PI Objectives and Features.  They almost always start the same way.  “Surely our PI Objective is to deliver Feature X”.  Or, in an ART with a lot of component teams it might be “Support Team A in delivering Feature X”.

WRONG! 
 
Good PI objectives are based on outcomes.  Over the course of the planning event, the team unpacks their features and generates stories to capture the ongoing conversation and identify key inter-dependencies.  Eventually, we consolidate what we’ve learned about the desired outcomes by expressing them as PI objectives.  The teams don’t commit to delivering the stories, they commit to achieving the objectives.  They should be working with their product owners all the way through the PI to refine and reshape their backlogs to maximize their outcomes.  

If every feature that enters PI planning has a well-defined set of OKRs, we have a great big hint as to what our objectives are.  An ART with good DevOps maturity may well transpose the Feature OKRs straight into their PI objectives (subject to trade-off decisions made during planning).  They will ship multiple times during the PI, leveraging the feedback from each deployment to guide their ongoing backlog refinement.

SAFe Principle 9 suggests enabling decentralization by defining the economic logic behind the decision rather than the decision itself.   Using well-defined OKRs for our PI objectives provides this economic logic, enabling Product Owners and teams to make far more effective trade-off decisions as they execute.

However, without good DevOps maturity, the features may not have been deployed in market for sufficient time by the end of the PI to measure the predefined Key Results.  In this case, the team needs to define a set of Key Results they will be able to measure by the time the PI is over.  These should at minimum demonstrate some level of incremental validation of the target KRs.  Perhaps they could be based on results from User Testing (the UX kind, not UAT), or maybe on measured results in integration test environments.  For example:
  • 50% of test users indicated they were highly likely to recommend the feature to a friend or colleague
  • End-to-end processing time reduced by 50% in test environments. 
Last but not least, we’ve started to solve some of the challenges ARTs experience with the “Business Value” measure on PI objectives.  It should be less subjective during the assignment stage at PI Planning and completely quantitative once we reach Inspect and Adapt!
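
One possible way to make that mechanical (my interpretation rather than official SAFe guidance) is to keep the planned Business Value assignment at PI Planning as-is, then derive the achieved value at Inspect and Adapt from the measured Key Result grades:

    # Sketch: deriving achieved business value for a PI objective from measured
    # Key Result grades (0.0-1.0 each), instead of a subjective score at I&A.
    # One possible interpretation only, not official SAFe guidance.
    def achieved_business_value(planned_bv, kr_grades):
        average_grade = sum(kr_grades) / len(kr_grades)
        return round(planned_bv * average_grade, 1)

    # Hypothetical objective: planned BV of 8, three KRs graded after measurement.
    print(achieved_business_value(planned_bv=8, kr_grades=[0.7, 0.5, 0.9]))  # 5.6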

Outcome Validation

By this point, we’ve started to address a number of the warning signs.   But, unless we’re some distance down the path to good DevOps we haven’t really done much about validation.   If you read my recent Feature Kanban article, you’ll have noticed that the final stage before we call a feature “done” was “Impact Validation”. 

This is the moment for final leverage of our Feature OKRs.  What happens once the feature is deployed and used in anger?  Do the observed outcomes match expectations?  Do they trigger the generation of an enhancement feature to feed back through our Program Backlog for the next Build/Measure/Learn cycle?  Do they affect the priorities and value propositions of other features currently in the Program Backlog?

Linking it back to the ART PI Metrics dashboard

In the business impact quadrant of my PI metrics model, I proposed that every ART should have a “fitness function” defining the fashion in which the ART could measure its impact on the business.  This function is designed to be enduring rather than varying feature by feature – the intended use is to support trend-based analysis.  The job of effective product managers is, of course, to identify features which will move the strategic needles in the desired direction.  Business Features should be contributing to Business Impacts, while Enablers might be either generating learning or contributing to the other three quadrants.

Thus, what we should see is Key Result measures in every feature exerting force on the strategic success measures for the ART.  Our team-level metric models should then vary with feature content, enabling the teams to steer.  This helps with the first warning sign Cutler identifies:
  1. No measurement.  Teams do not measure the impact of their work.  Or, if measurement happens, it is done in isolation by the product management team and selectively shared.  You have no idea if your work worked.

Conclusion

Whilst OKRs obviously have a lot to offer in the sphere of Product Strategy, this is not the only area in which they might help us with SAFe.  In my next article, I’ll explore applying them to Relentless Improvement.

Sunday, March 5, 2017

An Introduction to Objectives and Key Results (OKRs)

Introduction

In late January I was working on a final draft of one of my metrics articles when someone suggested I take a look at OKRs.  I'd heard of them, but hadn't really paid much attention other than to file them on my always long backlog of “things I need to learn something about”.  Then a couple of days later I was writing my abstract for a metrics session at the Agile Alliance 2017 conference and, in scanning the other submissions in the stream, discovered a plethora of OKR talks.  This piqued my interest, so I started reading the OKR submissions.  At this point I panicked a little.  The concept sounded fascinating (and kind of compelling when you realize it's how Google and Intel work).  I was facing into the idea that my lovingly crafted “SAFe PI Metrics revamp” might be invalidated before I even finished the series.

So, I went digging.  The references that seemed to dominate were Radical Focus by Christina Wodtke, a presentation by Rick Klau of Google, and an OKR guide by Google.  A few days of reading later, I relaxed a little.  I hadn't read anything that invalidated my metric model, but I could see a world of synergies.  A number of obvious and compelling opportunities to apply OKR thinking, both personally and to SAFe, had emerged.

In this article, I will provide a brief overview of the concept and use my early application to personal objective setting as a worked example.  My next article will detail a number of potentially powerful applications in the context of SAFe.

OKRs – an Overview

As Wodtke states in Radical Focus, “this is a system that originated at Intel and is used by folks such as Google, Zynga, LinkedIn and General Assembly to promote rapid and sustained growth.  O stands for Objective, KR for key results.  Objective is what you want to do (Launch a killer game!), Key Results are how you know if you've achieved them (Downloads of 25K/day, Revenue of 50K/day).  OKRs are set annually and/or quarterly and unite the company behind a vision.”

Google provides some further great clarifying guidance:

  • “Objectives are ambitious and may feel somewhat uncomfortable”
  • “Key results are measurable and should be easy to grade with a number (Google uses a scale of 0-1.0)”
  • “The sweet spot for an OKR grade is 60-70%; if someone consistently fully attains their objectives, their OKRs are not ambitious enough and they need to think bigger”
  • “Pick just three to five objectives – more can lead to overextended teams and a diffusion of effort”.
  • “Determine around three Key Results for each objective”
  • “Key results express measurable milestones which, if achieved, will directly advance the objective”.
  • “Key results should describe outcomes, not activities”

Wodtke dials it up a notch when it comes to ambition:

  • OKRs are always stretch goals.  A great way to do this is to set a confidence level of five out of ten on the OKR.  By a confidence level of five out of ten, I mean “I have confidence I only have a 50/50 shot of making this goal”.
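
To make the grading guidance concrete, here is a minimal sketch using the Key Results from Wodtke's killer-game example; the “actual” figures are invented:

    # Minimal OKR grading sketch using Google's 0-1.0 scale (actuals are invented).
    key_results = [
        {"name": "Downloads per day", "target": 25000, "actual": 16000},
        {"name": "Revenue per day",   "target": 50000, "actual": 35000},
    ]

    grades = [min(kr["actual"] / kr["target"], 1.0) for kr in key_results]
    okr_grade = sum(grades) / len(grades)
    print(f"OKR grade: {okr_grade:.2f}")  # 0.67 - inside the 0.6-0.7 sweet spot
    # Consistently grading 1.0 would suggest the objective wasn't ambitious enough.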


Personal OKRs – a worked example

I’m a great believer in experimenting on myself before I experiment on my customers, so after finishing Radical Focus I decided to try quarterly personal OKRs.  Over the years, I’ve often been asked when I’m going to write a book.  After my colleague +Em Campbell-Pretty  published Tribal Unity last year, the questions started arriving far more frequently.

After a particularly productive spurt of writing over my Christmas holidays, I was starting to think it might be feasible, so I began to toy with the objective of publishing a book by the end of the year.  Em had used a book-writing retreat as a focusing tool, so it seemed like planning to hole up in a holiday house for a few weeks late in the year would be a necessary ingredient, but that was far away (and a large commitment) so the whole thing still felt very daunting.

As I focused in on the first quarter, the idea emerged of an objective to commit to scheduling the writing retreat.  Then the tough part came – what were the quantitative measures that would work as Key Results?  Having now facilitated a number of OKR setting workshops, I'm coming to learn that Objectives are easy, but Key Results are where the brain really has to engage.  My first endeavors to apply it personally were no exception.  Eventually, I came to the realization that I needed to measure confidence.  Confidence that I could generate enough material, and confidence that people would be interested in reading the book.  After a long wrestle, I reached a definition:



Working the Objectives

As Wodtke states when concluding Radical Focus, “The Google implementation of OKRs is quite different than the one I recommend here”.  Given that I had started with her work, I was quite curious to spot the differences as I read through Google’s guidance.  At first, it seemed very subtle.  The biggest difference I could see was that for Google not all objectives were stretch.  Then the gap hit me.  Wodtke added a whole paradigm for working your way towards the objective with a lightweight weekly cycle.   I had loved it, as I’d mentally translated it to “weekly iterations”.

The model was simple.  At the start of each week, set three “Priority 1” goals and two “Priority 2” goals that would move you towards your objectives.  At the end of the week, celebrate your successes and reflect on your insights.  Whilst she had a lot more to offer on application and using the weekly goals as a lightweight communication tool, the simplicity was very appealing personally.  After all, I had a “day job” training and coaching.  My personal objectives were always going to be extra-curricular so I wanted a lightweight focusing tool.
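
The cadence is simple enough to capture in a few lines.  A rough sketch, with placeholder goals:

    # Rough sketch of the weekly cycle described above (goal text is placeholder only).
    week_priorities = {
        "P1": ["Draft chapter outline", "Confirm writing retreat dates", "Publish OKR article"],
        "P2": ["Collect reader feedback", "Shortlist holiday houses"],
    }

    def friday_checkin(priorities, completed):
        """Celebrate what moved the OKRs forward; reflect on what didn't and why."""
        all_goals = priorities["P1"] + priorities["P2"]
        return {
            "celebrate": [g for g in all_goals if g in completed],
            "reflect_on": [g for g in all_goals if g not in completed],
        }

    print(friday_checkin(week_priorities, completed={"Publish OKR article"}))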

Whilst writing a book was not my only objective, it was one of two, so it features heavily in my weekly priorities.  Following is an example excerpt:

Clarifying notes:
  • Oakleigh is a suburb near me with a big Greek population.  They have a bunch of late-night coffee shops, and it's my favorite place to write.  I head up at 10pm after the kids are in bed and write until they kick me out around 2am.  This article is being written from my newly found “Brisbane Oakleigh”.
  • In week 2 I utilised OKRs in an executive strategy workshop I was facilitating (with a fantastic result), and shared the concept with a friend who was struggling with goal clarity.  In both cases, I used my writing OKR as an example 

Conclusion

Hopefully by this point you are both starting to understand the notion of the OKR and feeling inspired enough to read some of the source material that covers it more fully.  I can't recommend Radical Focus highly enough.  It's an easy read, in fable style (think The Goal or The Phoenix Project).

You might have noticed along the way that my priority goal this week was to “Publish OKR article”.  As seems to happen regularly to me, once I start exploring an idea it becomes too big for one post.  I never actually got to the “SAFe” part of OKRs.  To whet your appetite, here are some things I plan to address in the second article:

  • Specifying OKRs as part of a Feature Definition
  • Applying OKR thinking to PI Objectives
  • Applying OKR thinking to Inspect and Adapt and Retrospective Outcomes