Tuesday, December 27, 2016

Team Backlog Evolution in SAFe - from 3 words on a sticky to a Ready to Play story

Introduction

The PI planning event provides a great pattern for filling a backlog, but it’s all too easy for teams to fail to capitalise on that great beginning with effective ongoing backlog refinement.

How do we get from those hastily scrawled post-its to fully articulated stories in user-voice form with comprehensive acceptance criteria? Given that many Release Trains move from “all-waterfall” to “all-agile” via a one-week quick-start, a solid set of starter pattern practices is a useful tool to help them accelerate their journey. This article focuses on the evolution of an individual backlog item. A follow-up article will cover techniques for managing the overall backlog.

Whose job is it?

In my early agile days, I learnt that a story was a token for an ongoing conversation between the team and their customer. The important thing was the conversation, some details of which needed to be preserved by writing them down.

Unfortunately, all too many people interpret user stories as “the new form of requirements”. They want to know who’s responsible. Does the Product Owner write the detailed stories and hand them to the team? Does the BA work with the Product Owner to do so? These are depressingly common anti-patterns, and our first mission is to avoid them. Story evolution is a collaborative activity regardless of the stage of maturity of the story, and I always start teams off with the mantra that nobody writes alone.

PI planning sets the scene perfectly. Each team starts with an empty backlog, and over the course of the breakouts works with their product owner, product manager and subject matter experts to explore their features – recording the results of the exploration as a set of backlog items.

So we’ve started well by having a conversation and recording a set of tokens for the details of the conversation on post-its – how do we keep that good pattern going after we leave PI planning?

Does it always have to be the whole team?

Having the whole team involved in an activity is an expensive (and not always productive) exercise. While the fully shared context is desirable, it’s not always necessary or valuable. Involving the whole team in workshops early in its life for the shared learning is great, but most teams evolve reasonably rapidly to some variation of Jeff Patton’s “3 Amigos” or triad approach. Jeff describes a triad as “Product Owner, BA and developer” (the 3 amigos). I don’t tend to be quite as prescriptive – what I’m looking for is diversity of input, so I tend to recommend “one of each discipline”. The most common format for this is “Product Owner, BA, Dev, Tester” or “Product Owner, UX, Dev, Tester”.

One caveat to this is to make sure triad memberships rotate to spread knowledge. The Product Owner will be a constant, and if the team has a BA they’ll also usually be relatively constant, but what we don’t want is a pattern where it’s always the “dev lead and lead tester”. The more the understanding of the problem domain is spread around the team, the better the outcome!

Moving to User Voice Form

Teams rarely generate great user voice form stories during PI planning. There just isn’t time. If we’re lucky, they might have done so for their first sprint but most of the backlog will consist of items like “pay by the month”.

It’s highly tempting to detail someone to go off and “upgrade the stories”, but doing so misses something. The point of user-voice form is to help us look through the eyes of our users. The discussion as we identify roles and “so that” reasons enriches our understanding of what we’re trying to accomplish. The last thing we want is for the team to miss out on that conversation.

So, the team should set a goal of having a “rolling 3 sprints” of backlog refined to user voice form, and move as rapidly as possible to achieve this after PI planning. It’s a pretty rapid activity, so usually not all that hard to achieve with a set of well-facilitated one-hour workshops. There’s a good reason for the urgency: as we move to user voice form, we flush out new insights – often in the form of missed stories or missed complexity. It’s good to learn about these early in the PI.

Establishing a Definition of Ready

Nothing destroys a sprint like a poorly defined set of stories entering the gate. All teams are taught to establish a definition of done, but I’m a huge personal fan of a “Definition of Ready”. This defines the set of conditions a story must meet in order to be accepted into Sprint Planning.

Exactly how much or how little is required to “be ready” is highly dependent on context. In highly regulated environments this often includes signoffs from legal and risk on acceptance tests. These are folks not renowned for moving fast - discovering you need an approval from legal in the middle of the sprint is a recipe for an unfinished story and missed sprint goals.

A good question to seed the definition is “what are you likely to discover in the middle of the story that might take more than a couple of days to get an answer to?”. This then forms the guidelines for key things to answer when preparing the story. The rule of thumb I’ve found quite useful is getting the acceptance tests to “80% lockdown”. There are always some questions that will be revealed once developers crack the story open, but the “80% rule” enables an approach where “if the expectation just changed by more than the 20% we allowed for, the new detail is fuel for another story and a future sprint”.

My strong recommendation to all teams is “provide all the detail in your acceptance tests”. The temptation to elaborate the story by filling in tables of business rules and bullet-point acceptance criteria before developing acceptance tests just leads to waste and redundancy. Techniques like Specification by Example or BDD workshops take you straight to acceptance tests and provide very useful ways of exploring the problem domain collaboratively.
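To make “all the detail in your acceptance tests” concrete, here’s a minimal Python sketch of what executable acceptance tests for a hypothetical “pay by the month” story might look like. The rates, discount rule and function names are all invented for illustration; they stand in for whatever examples the team agrees in its specification workshop.

```python
# A hypothetical "pay by the month" story, with acceptance criteria captured
# directly as executable examples rather than a separate bullet list.
# All pricing rules below are invented for illustration only.

MONTHLY_RATE = 12.50          # assumed rate
ANNUAL_DISCOUNT = 0.10        # assumed: 10% off when billed annually

def monthly_charge(months: int, billed_annually: bool = False) -> float:
    """Charge for a subscription, per the (hypothetical) agreed examples."""
    if months < 1:
        raise ValueError("subscription must run for at least one month")
    total = months * MONTHLY_RATE
    if billed_annually and months >= 12:
        total *= (1 - ANNUAL_DISCOUNT)
    return round(total, 2)

# Key examples agreed in the workshop ("80% lockdown"):
assert monthly_charge(1) == 12.50
assert monthly_charge(12) == 150.00
assert monthly_charge(12, billed_annually=True) == 135.00
```

The examples double as the story’s detail: when a new rule surfaces mid-sprint, it either fits within the agreed 20% or becomes a new example in a future story.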

Employing a definition of ready as well as a definition of done establishes a two-way contract between the team and product owner:
  • The product owner takes accountability (with the support of the team) for ensuring that stories meet the Definition of Ready before being brought to sprint planning.
  • The team takes accountability (with the support of the product owner) for ensuring that stories meet the Definition of Done before being reported as complete.
The instant benefit most teams will observe is a shorter sprint planning meeting whose focus shifts away from “what will be done” to “how they will work together to do it”; their ability to hit sprint goals will also increase significantly.

The “Next Sprint” Kanban

Story detail should leverage the principle of the last responsible moment. We want it neither too early nor too late. My encouragement is to go no deeper than a user-voice form version of the story until the sprint before it is due to play.

Each sprint thus has a dual purpose. Complete the current sprint’s committed stories and flesh out the next sprint’s stories to meet the definition of ready. The first ingredient in doing so is to introduce the “Next Sprint Kanban” to visually manage this journey.

The board illustrated below was from a high assurance environment, where acceptance tests were detailed in the “Specifying” state, then stories were sent off to legal and risk in the “sign-off” state prior to hitting “Ready”.


Figure 1 - Team Aliens: A slightly different approach to visualizing next sprint preparation and a helping hand from Dilbert

Specification Workshops

“Specification by Example” is a great way to flesh out a story. Whilst there are whole books written on the concept (my favourite is definitely Gojko Adzic’s), I have found 3 keys to success:
  1. Put them on cadence. Two to three one-hour sessions per week will usually feed a team pretty well. Scheduling them straight after standup is great, as you can use the standup to organise who (whole team or designated amigos) will participate.
  2. Don’t produce polished detail; your focus is shared context. Use a whiteboard (preferably in the team area to allow for osmotic communication) and lots of photos as you explore to keep the pace moving and the energy up. Word-smithing can be done either solo or by a pair (Product Owner/Tester works well) as follow-up.
  3. The Product Owner needs to order and do some pre-thinking on the stories both to help them identify any subject matter experts or other stakeholders who might need to be involved and to assist the team in identifying appropriate amigos to participate. 

Story Splitting

Good backlogs are, of course, iceberg shaped. As Jeff Patton puts it so eloquently, “stories are like cheese, best sliced fresh”. Story splitting will tend to occur naturally as a by-product of user-voice-form workshops and specification workshops. Whatever you do, don’t bury yourself in deep story hierarchies. One layer is good. “Here is my feature, and here are the stories that represent it”. When you split a story, tear up the original and replace it with the newly identified split stories.

What about Kanban?

Everything in this article works as well for a Kanban team as a Scrum team. In fact, I find that the more mature a team becomes in applying these practices the more it moves towards operating in flow with Kanban and leaving Scrum behind. Adaptation is simple – instead of thinking about “next 3 sprints” or “next Sprint”, apply WIP limits to identify the optimal amount of detailed story inventory to hold.
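As a sketch of that adaptation, the “ready” inventory can be modelled as a simple WIP-limited queue. The Python below is illustrative only; the limit of 8 stories is an arbitrary assumption a team would tune empirically.

```python
from collections import deque

class ReadyQueue:
    """Sketch of a WIP-limited 'ready' inventory for a Kanban team.

    wip_limit caps how many fully detailed stories are held at once,
    replacing the Scrum team's 'next sprint' timebox. The default of 8
    is an assumption; tune it to your team's flow.
    """
    def __init__(self, wip_limit: int = 8):
        self.wip_limit = wip_limit
        self.ready = deque()

    def can_refine_more(self) -> bool:
        # Only pull a story into a specification workshop when there is
        # room in the ready inventory - the last responsible moment.
        return len(self.ready) < self.wip_limit

    def add_ready(self, story: str) -> None:
        if not self.can_refine_more():
            raise RuntimeError("WIP limit reached - play a story before refining more")
        self.ready.append(story)

    def play_next(self) -> str:
        # Stories are played in the order the Product Owner readied them.
        return self.ready.popleft()
```

The WIP limit does the job the sprint boundary did for the Scrum team: it signals when to hold a specification workshop and when to stop refining.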


Sunday, December 25, 2016

Want a supercharged ART? Don't settle for a Proxy Product Manager!

In a recent SPC class, we wound up in a very interesting discussion about the role of the RTE and the potential need for multiple RTEs. Given that in our very first Release Train we had employed a “paired RTE” approach (described here by my colleague Em Campbell-Pretty), we had a fairly public opening position on the discussion. I have a client who has a rotational RTE system where each train has two RTEs. One leads execution of the current PI, and the other focuses on preparation for the next. We’ve also seen and heard of numerous other variations. As the discussion progressed, something crystallized for me. In every single case where it’s been necessary, the Product Management function has either been weak/uncommitted or fulfilled in proxy by the IT organisation!

One of the great strengths of Dean Leffingwell’s approach to defining SAFe has been his pragmatism. One example of this has been his historical approach to defining the type of people (traditional role descriptions) who might fulfill a SAFe role. As with the entire framework, he distills and documents ‘observed patterns’ rather than theory. I vividly remember a moment of shock and horror when I saw that he had listed “BAs” as potential product managers. Whilst he had acknowledged the occasional need to utilise “IT proxy” product owners, the notion of a BA as Product Manager triggered the following email:

Hi Dean,
I'm teaching an SPC and just had the product manager slide onscreen and someone in the audience pulled up the guidance article and said this could be a BA.
After I fell off my chair I suggested someone must have applied the content to the wrong article (ie meant product owner). Are you able to confirm I was right and someone hasn't started smoking strange drugs?

In typical Dean style he reminded me that many organisations have extremely senior strategic BAs who could well fulfill the role – which got me thinking about a couple of BCG BAs I had worked with and resulted in a humble apology.

The drawback, however, is that where Dean describes something that “can work in particular contexts”, it is all too common for organisations to interpret the guidance in unfortunate fashions – in this case as implicit license to put an IT person in the role. This likelihood is inflated given that most large IT organisations have formed a “wedge division” of “business-facing IT” who act as interpreters and demand managers. The value proposition of such an organisation is threatened by the direct connection between business and developers underpinning agile, and it desperately seeks survival: “surely we are the logical product managers and owners!”

What Dean is looking for is someone who can fulfill the responsibilities and accountabilities of the role. So, what are they? A Product Manager is a person entrusted by the organisation with identifying the best mix of features and steering the optimal economic trade-offs between them to maximise the ROI on an investment of somewhere between $10M and $30M per year.

In recent months I’ve been fortunate enough to launch two Release Trains with not just exceptionally strong co-located business Product Managers but also dedicated co-located business Product Owners for every team. In both cases, they have matured further in a single PI than I have seen in many others across the course of 18 months. Following are some of my thoughts on the key differences:
  • Agile is about eliminating the concept of “business and IT”. It’s about working together on a shared mission. It’s not about something “IT does to the business”. Proxies simply maintain the status quo wearing a slightly different disguise.
  • The container of the system shapes the system being optimised. As long as the Release Train contains only IT, it will suffer from local optimisation with the relevant area of the business forming a separate system also locally optimising.
  • Business people come with business networks. Whilst a short-lived team can source a product owner with deep domain knowledge, a long-lived team will over time work on features covering many domains – to survive the product manager and owners need to know how to bring the right subject matter experts to the table to provide the deep domain knowledge. This is far easier when you’re accessing your mobile phone contact list than researching an org chart!
  • Business Product Managers/Owners know how to get the right people to the System Demos and demonstrate in business language about business outcomes being achieved.
  • Business leaders bring more to the table than just content authority. They’ve also often received far more development in people leadership than their IT counterparts. Watching somebody bring their call centre team management experience to the table with dev teams is simply amazing. 
  • Proxies will be proxies. They will naturally preserve their value proposition by acting as translators rather than connectors. 
  • It’s very different when a business leader elects to defer something than when an IT leader asks a business leader to defer something!
Chatting to the executive sponsor during the second PI planning event for one of these trains recently, he shared that one of the things that particularly excited him was watching his business teams reach insights and form alignment at a business process level as they collaborated with the dev teams and their product owners in breaking down their features. He had a grin from ear to ear, but it wasn’t quite as big as the grin I was wearing the night before when I was forwarded an email from another train. It contained the following story:
Once upon a time there was a new Agile Release Train (ART) that had just finished its third sprint. Everyone was amazed at the velocity the teams had achieved. But at the retrospective 2 things became clear:
(1) the teams had worked long hours and many nights to achieve a higher velocity to overcome what they saw as a hump necessary to meet the desired business date for production after 2 more sprints.
(2) the teams were doing "wagile", coding in week 1, then testing in week 2.
The Product Manager thanked the team for their extra efforts, but told them, and - importantly - all the attendees at the showcase, that the velocity achieved was not sustainable, and long days and nights were not to be repeated.
The ART then reviewed the way the teams were working, and decided that extra attention needed to be paid to the adoption of Test Driven Development (TDD). The coach also refreshed the teams on how to conduct sprint planning for sustainable pace.
The Product Manager then told the teams to take as long as necessary to plan for the next sprint, including how to adopt TDD. With some further guidance from the Release Train Engineer (RTE) and Systems Team, the teams re-planned how they would work, as well as setting a more sustainable velocity target for sprint 4.
After I shot an email to the Product Manager saying “hey, can I please quote your story”, she regretfully informed me that she hadn’t been the author but had the following to add:
It was a tough decision knowing that maximum capability in the short term is what senior management are looking for, but also being aware the greater value lay in ensuring the team had the time and space to embed the practices and achieve a sustainable long term velocity and you know the saying, short term pain, long term gain.
On a closing note, my few weeks of pondering the topic have left me with a realization. We obsess about the critical role of the Release Train Engineer. At every gathering of SAFe people, discussions center around how to find, retain and avoid burning out great RTE’s. I think perhaps we’re guilty of spending too much effort on finding and growing RTE’s and not enough on Product Managers. In fact, I’d go so far as to say that in pursuit of great business outcomes and a thriving ART a great Product Manager can cover for a weak RTE far better than a great RTE can compensate for a weak Product Manager!

Saturday, November 26, 2016

Getting the most from the Management Problem Solving session at SAFe PI Planning

The longer I have been facilitating PI planning, the more I have come to believe that the primary purpose of Day 1 is to set the scene for the Management Problem Solving session. The energy of the visioning, team breakouts and plan reviews are well-known, but the management problem solving at the end of Day 1 is a crucial part of the event often glossed over.

Let’s start with a simplified version of the PI Planning agenda:

Day 1:
  • Present the vision (Leadership)
  • Create a draft plan revealing the flaws and challenges in achieving the vision (Teams)
  • Face into the insights generated by the teams (Leadership)
Day 2:
  • Communicate the response to the team insights (Leadership)
  • Breathe a sigh of relief and craft an achievable set of objectives with a supporting plan (Teams)
My goal in this post is to share thoughts on the best way to facilitate the session, but before getting into details I’d like to dwell for a moment on why it’s so important. In the early days, this event is often the only real time senior leaders spend at the gemba. Instilling a culture of lean leadership and management by walking around takes time – particularly with busy executives. They’re not only getting a taste of agile collaboration, but they get to listen to teams wrestle with the system of work as they try to find a way to achieve the desired outcomes. There is no “please explain” on the problems. No need to prepare briefing decks. They’re all writ-large in the transparency of the planning room discussions. They cross silos and boundaries, there is no hiding from reality. And the problems that would normally be quietly hidden within a given organisational silo are out in the open. We’ve preached about Leadership’s duty to fix the system of work in the training room, and now they’re at the coalface with the opportunity to mirror the collaboration the teams have been displaying all day. 

As a coach or facilitator, you have a golden opportunity – you need to make the most of it. The first hint is that if you’re the RTE, try and get an external facilitator (be it a coach or somebody else). You want a voice in the discussions.


Expectation Setting

The “official agenda” shows management problem solving occurring from 5-6pm. The reality is most sessions will run 2-3 hours – I’ve run as late as 11pm, and I’ve heard stories from Dean of post-midnight finishes. Participants need to know you’ll be asking them to work late and understand why. It gives them the chance to fly in the night before rather than trying to stay awake after an early morning flight, and to warn their families.

When you’re scheduling, provide a break (it’s been a busy day) and preferably some food (brains need food to face into tough discussions). I typically suggest scheduling the session for 6-9pm to set appropriate expectations, then work towards finishing between 8 and 8:30. Also, work with your scrum-masters. Help them understand that their primary goal at the end of day 1 is to provide clarity on the problems that need solving. Be ready to remind them of this at the final scrum of scrums.


Structuring the facilitation

I don’t tend to start with a fixed approach to facilitating the session. I’m observing during the day, looking at the types of problems emerging and thinking about the best way to structure the session to help achieve the optimal outcomes. While the leaders are having a break, I take the opportunity to create a working space and set up my visualisations. If at all possible, do it in the PI planning room so you’re surrounded by the day’s work.



At a minimum, you need a visual space for actions and decisions. Further visualisations will depend on approach. After much experimentation, I have settled on 3 primary variations.


The Retrospective Option

For this option, I structure virtually the entire conversation around a 4 L’s activity. Four feedback areas are established (Loved, Learned, Loathed and Longed For), and participants are invited to respond with post-its, which are then affinity grouped.

Open with “Loved” – it’s good to get some positive acknowledgements flowing, and it opens up the conversation and tends to provide a little more safety for when the discussion gets tough. Then, open each affinity group in turn by asking participants to elaborate. Virtually every affinity group will hold the seeds of 1-3 actions and/or decisions. Allow the conversation to flow, probing with open questions if necessary, but listen for the seed of an action.

It’s not uncommon to have 2 flip-charts full of recorded actions and decisions by the time you’ve processed your feedback.


The Simple Study Option

Open with a reflection on what went well. This should be short, but helps prepare to enter quality discussions about challenges.

Next, send your group out to study the work. Ask them to tour the room (as individuals) and study each team’s plan visualisation. At its simplest, you might arm them with a pad of post-its and a sharpie and ask them to note down the problems they believe need to be discussed for each team. You may also ask something more specific. On one occasion, I created a grid with a row for each team and a request that participants rate (on a scale of 0 to 10) both the achievability and the clarity of each team’s plan. A timebox of 15 minutes generally suffices for this.

Upon their return, collate the results. This might involve the creation of an affinity grouped set of post-its around discussion topics or problems to solve, or some other form of visualisation. The photo below shows the whiteboard where I collated the results of the “achievability and clarity” rating mission.

When collating, you might also choose to help them set some priorities. One technique is to provide 3 separate areas – “Must discuss tonight”, “Should discuss tonight”, “Nice to discuss tonight”. Whatever you don’t make it to becomes a takeaway list for the RTE to facilitate follow-up discussions around in Release Management Meetings.

Now you need to timebox the discussions. I believe Lean Coffee provides a very handy control mechanism. The topic timebox will vary depending on your approach – “team by team” will need longer timeboxes than “problem by problem”. Your goal with each timebox is to arrive at one or more actions or decisions in response to the prompt.


The "Hello Game" Study Option

This is a slightly more structured study option based on Thiagi’s “Hello Game” frame game. The approach involves dividing participants into 3-4 “study groups”. As with the Simple Study Option, participants are given time to roam the room and study the plans. However, they are given a more structured mission. As the facilitator, you need to come up with one “framing question” per group. As members roam the room, they are asked to take notes in response to all of the framing questions. However, you then enter a second phase. Each study group is assigned one of the framing questions to focus on, and receives an additional 15-20 minutes to survey the room and determine an approach to aggregating, visualising and playing back the key insights in response to their framing question.



This is currently my favourite approach, as I find it yields deep engagement and more challenging discussions. There’s a little mayhem as people survey each other, but the one-on-one discussions held to gather their data and the effort the study groups put into visualising it (eg graphs, diagrams) is brilliant. The job of the facilitator, of course, is to spend all day pondering the framing questions that will yield the best insights. As a coach, there are usually some key themes I can see, but I’d like to give my participants the opportunity to generate their own learnings rather than hand them out myself. Good question selection facilitates this. I almost always create one or two questions based on the day’s events, but I also have some stock questions up my sleeve such as:
  • How have we improved since our last PI planning event? 
  • How could we have improved our preparation activities to enable a more effective PI Planning? 
  • What systemic issues are evident in the teams planning that we do not have improvement plans in train to address? 
  • What are the top issues most likely to derail our PI execution? 
  • What issues are most likely to prevent the teams reaching a committed plan tomorrow if not addressed? 
  • In which areas are we facing scope trade-offs, and what are the extent and contributing factors of those trade-offs? 
Following is an example of a framing question I used recently to draw attention to something the leadership group really needed to ponder:
  • What are the key factors contributing to the overabundance of tech stories and widget-based objective statements?
Once the study groups have created their visualisations, ask each group in turn to present their outputs. Harvest actions and decisions as these conclusions are presented, explored and debated.


Closing

Once you’ve exhausted the topics opened up by your information gathering approach, there’s a final question to ask:
“What have we not yet discussed that we should have?”
It’s amazing how often the toughest topic or two of the night open up at this point. Facilitated well, the group should have reached the safety needed to delve into something that felt too controversial in the beginning.


Preparing for Playback

At this point, you can release all the attendees except the RTE and the Product Manager. Your final bit of work is to agree on their playback approach for the morning. It’s fairly straightforward.
  • Agree who will present the “positive feedback” 
  • For each identified action and decision, identify whether it needs to be fed back to the team in the morning and who (RTE or Product Manager) will speak to it. 
  • Agree on any pre-briefing that will be given to the Scrum Masters and Product Owners prior to the general Playback session.


Selecting a facilitation approach

I circulated the draft of this post with my colleagues to test it prior to publishing. Much of the resulting discussion involved “when to pick which option”, with particular focus on “how mature does a train have to be before you take the ‘hello game’ study option?”. My view is that it’s not so much a matter of train maturity as facilitator bravery. For the first PI planning of any train, I usually start with the retrospective option. You are looking for obvious, surface level challenges. On all other occasions, I use the “hello game” study option. It provides the deepest discussions and insights, but as the facilitator you need the courage and conviction to ask your participants to do real work rather than sit around a table responding while the facilitator does most of the work :)


Final Note

On average, I find 30-40% of the actions and decisions to emerge from a good Problem Solving session affect Day 2. The remainder are for action by the leadership team during the PI. An executive in one of these sessions last year commented that the leadership team needed to hold themselves accountable to the teams for following through on these. The result was the inclusion of a section in the System Demo agenda for leadership to report back to teams and stakeholders on their progress with the actions emerging from Management Problem Solving. I loved it, as it demonstrates to the team that their hard work in planning drives leadership focus as well as their own work.

Saturday, September 10, 2016

Improving SAFe Cost of Delay - A response to Josh Arnold

In recent months, there has been considerable public critique of the SAFe interpretation of Cost of Delay.  This culminated recently in Josh Arnold’s June blog post suggesting improvements.  As the author of the simulation most broadly used to teach the SAFe interpretation (SAFe City) and the veteran of having introduced it to perhaps more implementations than most in the SAFe world, I thought I’d weigh in.

What’s the debate?


At heart, the debate hinges on whether you drive straight to “fully quantified Cost of Delay” or adopt SAFe’s approach of using a proxy as a stepping stone.  The debate then digs deeper by questioning the validity of the SAFe proxy.

What is full quantification as opposed to proxy COD?


By “fully quantified”, we refer to a situation where the Cost of Delay (COD) has been defined as “$x/period” (eg $80K/week).  A proxied approach establishes no formal “value/period”, but tends to produce an outcome whereby we can establish the approximate ratio of Cost of Delay between two options (eg Option A has twice the cost of delay of Option B) without actually quantifying it for either option.
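A small illustration of why a well-behaved proxy can still drive the same decisions: if the proxy preserves the ratios between options, ranking by CD3 (cost of delay divided by duration) produces the same order whether the COD is in dollars or relative points. The figures below are invented for illustration.

```python
# Two options with quantified cost of delay, and the same pair expressed
# as a relative proxy. Values are illustrative only.
quantified = {"A": 80_000, "B": 40_000}   # $/week
proxy      = {"A": 8,      "B": 4}        # relative points

duration = {"A": 4, "B": 1}               # weeks to deliver

def cd3_order(cod):
    """Rank options by CD3 (cost of delay divided by duration), highest first."""
    return sorted(cod, key=lambda k: cod[k] / duration[k], reverse=True)

# The proxy preserves the ratio (A is twice B), so prioritisation agrees:
assert cd3_order(quantified) == cd3_order(proxy) == ["B", "A"]
```

Note the order only survives the translation because the proxy kept the ratios intact; a badly calibrated proxy would break this equivalence.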

Where there’s smoke there’s fire


When the topic of Cost of Delay arises, it’s easy to get lost in intellectual debate.   The reality is that the primary use is to apply it to prioritization to enable maximisation of economic outcomes from a fixed capacity – and a well-implemented proxy will get pretty close to the same results on this front as an attempt at full quantification.  Both approaches seek to expose underlying assumptions and provide a model to assist in applying objective rationale to select between competing priorities.

After a workshop I held on it a few months ago one audience member came up to me to passionately argue about the theoretical flaws in the SAFe interpretation.  My response was to ask how many organisations he had implemented it in – to which the answer was zero.  At heart, full quantification is hard and many organisations don’t begin because the theory sounds good but the practical application seems too daunting.  A proxy is far easier to get moving with, and likely to gradually lead towards full quantification anyway.

“The other thing to realize about our economic decisions is that we are trying to improve them, not to make them perfect. We want to make better economic choices than we make today, and today, this bar is set very low. We simply do not need perfect analysis” - Reinertsen
However, there’s no escaping the fact that there are applications of COD that cannot be achieved by using a proxy.  Further, the SAFe proxy approach is not without its flaws.

What do we lose by not achieving full quantification?


Cost of Delay can be used for a lot more than prioritization.  Key applications that are lost with a proxy include the following:

Economic formulae for optimum utilisation rates


Quantified COD allows us to calculate the economic impact of queues at key bottlenecks and establish an economic basis for the amount of slack to build in.  In short, the hard maths behind why we might staff a particular specialist function to operate at 60-70% utilisation in pursuit of flow efficiency.
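As a rough sketch of what those hard maths can look like (using the textbook M/M/1 queueing approximation, not anything prescribed by SAFe, and with invented numbers), quantified COD lets us price the queue at different utilisation levels:

```python
# Sketch only: pricing the queue in front of a specialist team using the
# M/M/1 result that average queue wait = rho/(1-rho) * service time.
# All figures are hypothetical.

def avg_queue_wait(utilisation, avg_service_weeks):
    """Average time an item spends waiting in queue (M/M/1 approximation)."""
    return utilisation / (1 - utilisation) * avg_service_weeks

def weekly_delay_cost(utilisation, avg_service_weeks, cod_per_week, arrivals_per_week):
    # Every arriving item pays the queue wait at its cost of delay.
    return avg_queue_wait(utilisation, avg_service_weeks) * cod_per_week * arrivals_per_week

# At 95% load items wait ~19x their service time; at 70% only ~2.3x.
for rho in (0.70, 0.95):
    print(rho, round(avg_queue_wait(rho, avg_service_weeks=1), 1))
```

With a COD figure attached, the gap between those two waits becomes a dollar argument for staffing to 60-70% rather than “as full as possible”.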

SAFe’s approach to this problem is basically a blunt instrument: build in overall slack using “IP sprints”, and build in further slack by committing plans at only 80% load.  It then relies on effective application of dependency visualisation and inspect-and-adapt cycles to optimise for slack.  The destination is similar in a good implementation, but the people footing the bill would certainly buy in much faster with the hard numbers quantified cost of delay can provide.

Economic Decision Making for Trade-offs


Reinertsen makes much of the fact that all decisions are economic decisions, and are better made in the presence of an economic framework.  Quantified cost of delay allows us to apply economic criteria to decisions such as “do we ship now without Feature X or wait for Feature X to be ready?”, or “if reducing the cost of operations by 10% would delay time to market by 4 weeks, would that be a good trade-off?”.
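As a sketch (all figures hypothetical), the second of those trade-offs reduces to simple arithmetic once COD is quantified:

```python
# Hypothetical trade-off: a cheaper design saves 10% of operating cost
# but delays time to market by 4 weeks. Quantified COD decides it.

COD_PER_WEEK = 80_000                    # value forgone per week of delay
DELAY_WEEKS = 4
OPEX_SAVING_PER_YEAR = 0.10 * 500_000    # 10% off a $500k/year run cost
HORIZON_YEARS = 3                        # period over which savings accrue

delay_cost = COD_PER_WEEK * DELAY_WEEKS              # cost of waiting
total_saving = OPEX_SAVING_PER_YEAR * HORIZON_YEARS  # benefit of waiting

# Wait only if the lifetime saving exceeds the cost of the delay.
print("wait for the saving" if total_saving > delay_cost else "ship now")
```

Without a dollar figure for the delay, the same conversation usually ends as a matter of opinion.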

SAFe currently has no answer to this gap other than stressing that you must develop an economic framework.

Application of global economics to local prioritisation for specialist teams


Quantified Cost of Delay is a global property, whereas delay time is local.  If, for example, a highly capacity-constrained penetration testing team is attempting to balance demands from an entire organisation, it can apply its local “delay estimate” in conjunction with the supplied cost of delay to easily arrive at globally optimised priorities for its work.  A cost of delay of $60K/week means the same thing regardless of where in the organisation it is identified.  Relative cost of delay, by contrast, is local to the domain of estimation: a 13 from one domain will never be the same as a 13 from another domain.
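A sketch of this idea (team names and figures are invented): the specialist team combines each request’s globally supplied COD with its own local duration estimate to get a globally consistent order:

```python
# Hypothetical: a capacity-constrained specialist team ordering requests
# from across the organisation. COD ($/week) means the same everywhere;
# only the duration estimate is local to this team.

requests = [
    {"from": "Payments train", "cod_per_week": 60_000, "local_weeks": 3},
    {"from": "Mobile team",    "cod_per_week": 90_000, "local_weeks": 6},
    {"from": "Data platform",  "cod_per_week": 25_000, "local_weeks": 1},
]

# Schedule by CD3: cost of delay divided by the team's own duration estimate.
ordered = sorted(
    requests,
    key=lambda r: r["cod_per_week"] / r["local_weeks"],
    reverse=True,
)
print([r["from"] for r in ordered])
```

Note that the highest raw COD (the mobile request) ends up last: its long local duration makes it the least efficient use of the constrained capacity right now.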

SAFe’s approach to this problem is largely to remove it.  Complex Value Streams and Release Trains sweep up specialist skills and create dedicated local capacity whose priority is set “globally for the system” using the PI planning construct.

What do we lose by achieving full quantification?


Perhaps it’s heresy to suggest a proxy can be better, but if one takes the perspective that COD is “value foregone over time”, then a quantified COD inherently caters only to value which can be quantified.  Let me take a couple of examples:

Improved NPS


Every customer I work with (even the Australian Tax Office) considers “customer value” to be a key economic consideration when prioritizing.  Most have moved from measuring customer satisfaction to customer loyalty using Net Promoter Score (NPS). 

The creators (Bain & Company) argue that any mature NPS implementation will eventually be able to quantify the P&L impact of an NPS movement in a particular sector/demographic/service.  However, I have yet to work with a company that has reached this maturity in their NPS implementation.  Losing NPS movement from our value considerations in COD would not be a good thing. 

Learning Value


What of the feature whose core value proposition is to understand how customers respond to an idea?  To validate an assumption the company has about them?  You could argue that the COD would then centre on the value that would be lost if the company guessed wrong, but I suspect the resulting numbers would be highly suspect.

Pirate Metrics


More and more I see customers implementing leading measures such as the Pirate Metrics (Acquisition, Activation, Retention, Referral and Revenue).  With enough time (and lagging feedback) you can quantify these into hard dollars, but the reality is that for a significant period when introducing new products they don’t quantify well.

With enough work, I’m sure there’s a way to solve these problems in a fully quantified world, but none of the examples I have researched have done so.  The reality is that the vast majority of COD science is based on Reinertsen’s work, and his focus is the introduction of products, whereas in the software world we are not simply introducing new products but selecting how to evolve them iteratively and incrementally – it’s a different paradigm.  Achieving an objective balance of qualitative and quantitative inputs is one of the things I have found the proxy model does well.

Is there a killer argument one way or the other?


Personally, I don’t feel it’s an open-and-shut case.  The reason I like the proxy is simple: it’s easy for customers to get started with.  Full quantification (particularly at feature level) sounds scary and all too easily raises the barrier to entry out of reach.  The longer the proxy is employed, the more hard data is brought to the table – full quantification is almost inevitable.  Having said that, Josh and others have successfully kick-started people straight into full quantification – having read their material and attended one of their workshops, I find the starting journey and discussions sound remarkably similar (flush out the assumptions!).

What (if anything) is wrong with the proxy?


I agree with Josh - the current proxy is flawed when it comes to time sensitivity.  The simplest proof is to use a legislative requirement.  Imagine that I have a new piece of legislation coming into place on 1 July 2017.  If I am not compliant, I will be charged $100k/week that I am in breach.  It will take me 4 weeks to implement the feature.  Today, in September 2016, there is no value whatsoever to me implementing the feature (COD=0).  As April/May 2017 approach, my COD suddenly escalates.  There is no way to reflect this using the current SAFe approach.
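The example can be sketched as a function of the assessment date (dates and figures taken from the scenario above; the step-function shape is my simplification):

```python
# Sketch of the legislative example: COD for a compliance feature is zero
# while there is still enough runway to deliver before the deadline, then
# jumps to the penalty rate once the deadline is inside the lead time.

from datetime import date, timedelta

DEADLINE = date(2017, 7, 1)      # legislation comes into force
PENALTY_PER_WEEK = 100_000       # cost per week in breach
LEAD_TIME_WEEKS = 4              # time needed to implement the feature

def cod_today(assessed_on):
    last_responsible_start = DEADLINE - timedelta(weeks=LEAD_TIME_WEEKS)
    # Before the last responsible start, delaying a week costs nothing;
    # after it, every week of delay is a week in breach.
    return 0 if assessed_on < last_responsible_start else PENALTY_PER_WEEK

print(cod_today(date(2016, 9, 1)))   # September 2016: no cost to waiting
print(cod_today(date(2017, 6, 10)))  # June 2017: penalty rate applies
```

The point is precisely that the answer depends on *when* you ask, which is what the SAFe time-criticality component cannot express.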

How did Josh fix it?


Josh suggested using time as a multiplier component, which is most definitely an improvement but not the approach I would take.  For one thing, I tend to find people over-fixate on timing in their early days with COD.  For another, a multiplier is a fairly blunt instrument (think double and triple) unless you are careful to talk in percentages.

However, the real challenge is that the question we are really trying to ask ourselves is “how does value change over time”, or “how is value sensitive to time”.   Let’s take two examples based on a federal government client of mine:

  • They have many processes they are seeking to digitally enable/enhance which occur yearly.  A feature which is delivered just before peak lodgement activity is far more valuable than a feature delivered just after peak.  In fact, if you miss the peak you might choose to defer the feature for 9-10 months.
  • They have numerous features which must be delivered in time for a legislative change.  There is very little advantage other than risk mitigation to delivering these features 6 months early.  Using time as a multiplier would allow us to arrive at "zero COD" for early delivery of a legislative change, but this is entirely dependent on the date at which I assess it.

How would I fix it?



My belief is that we need to focus on the timing of the COD assessment, rather than the timing component of the COD.  At any given point that we assess it, we are effectively assessing it “based on today’s urgency”. 

At this point, we can leverage Josh’s great work with urgency profiles.   Each urgency profile takes the form of a graph, and the graphs tend to have inflection points representing moments (timing) when the value goes through significant change.  

This is what I would do:
  • When first assessing COD for an item (be it an Epic or a Feature), first assign an "Urgency Profile" to it and make explicit the dates believed to be associated with the inflection points.
  • Eliminate the time criticality component of the COD formula.
  • Separate the User and Business Value components.  Most implementations I work with tend to do this, defining "Customer value" and "Business Value" to separate considerations of customer loyalty from hard-nosed underlying business.  This would also open the door to more easily introducing quantification (with relative values based on dollar value brackets perhaps).
  • Make explicit in the guidance the fact that when applying the Proxy to your organisation you need to consider the relative weighting of the proxy components.
  • When assessing the Customer, Business and Risk Reduction/Opportunity Enablement values, do so in light of "today's urgency".
  • Based on the identified inflection points on the urgency profile, flag the date when the COD of the item needs to be re-assessed.
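A minimal sketch of what that could look like in practice (the field names, weights and dates are mine, not SAFe's):

```python
# Sketch of the proposed process: assign an urgency profile with explicit
# inflection-point dates, score the proxy components "as of today", and
# flag when the item must be re-assessed. All values are hypothetical.

from datetime import date

item = {
    "name": "Pre-lodgement peak feature",     # hypothetical feature
    "urgency_profile": "peak-seasonal",       # chosen profile shape
    "inflection_points": [date(2017, 5, 1)],  # value changes sharply here
    # Proxy components assessed in light of today's urgency.
    "customer_value": 8,
    "business_value": 5,
    "rr_oe": 3,   # risk reduction / opportunity enablement
}

# Relative weights agreed when tailoring the proxy to the organisation.
weights = {"customer_value": 2, "business_value": 1, "rr_oe": 1}

def proxy_cod(item, weights):
    """Weighted proxy COD with the time-criticality component removed."""
    return sum(item[k] * w for k, w in weights.items())

def reassess_on(item, today):
    """Next inflection point still ahead of us, i.e. when to re-score."""
    future = [d for d in item["inflection_points"] if d > today]
    return min(future) if future else None

print(proxy_cod(item, weights), reassess_on(item, date(2016, 9, 1)))
```

The re-assessment date is what turns the proxy from "set and forget" into a scheduled conversation.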

This solves two fundamental problems:
  • It ensures we have a systematic (and meaningful) approach to considering time sensitivity and its impact on COD without suffering from the flaws of the current world
  • It establishes a rhythm for re-assessment, sadly lacking for most current implementations of the SAFe COD model which tend to be "set and forget".




Friday, July 8, 2016

Agile Musical Chairs - A Facilitation Guide

This is a fantastic game I picked up at the Agile Alliance conference in Florida a couple of years ago.  It offers deep learning with respect to self-organisation and learning cycles, and a number of people I’ve facilitated it for in conference settings have asked for a facilitation guide.
The mission of the team playing the game is to “beat the facilitator” by preventing them from sitting in a vacant chair for 1 minute.  You should allow roughly an hour for the activity.

Setup

The smallest group I have played the game with is 7, and the largest about 100.  With large numbers, you will want to break into groups of 25-30 with a facilitator per group (it’s very easy to recruit and instruct a facilitator on the spot).
To set up, you need an open space with one chair per player, one for the facilitator, and plenty of room to move.  Ask each team member to grab a chair and to randomly organise themselves in the space available.  You don’t want a regular arrangement, and certainly not an orderly circle.  You want chairs facing in multiple directions, fairly evenly distributed throughout the space (some in the middle, some on the edges).

Briefing

The rules are very simple.  Inform the participants that the purpose of the activity is to learn about self-organisation and learning cycles, and that their goal is to block you from occupying a vacant seat for 60 seconds.  The constraints are as follows:
  • Any number of people can be moving at once to occupy the chair the facilitator is aiming for
  • Once you have stood up (or even half stood up), you cannot sit back down in the same chair; you must find a new chair to sit in.
  • The chairs cannot be moved from their starting position.
  • No physical blocking is allowed. You can’t push the facilitator out of the way or impede them from moving in their desired direction.
  • The facilitator can only move at a walk, the team can move as fast as they like. It’s not a running race, it’s a strategy game.
After each failure, the group will have exactly 60 seconds to conduct a retrospective and alter strategy before the facilitator starts moving again.

The play


You may or may not choose to warn participants that they are likely to take 4 to 5 attempts just to reach 10 seconds.  Most groups will take 30-40 attempts to solve it.  You have various hints to offer along the way, with the hints later in the list being more powerful accelerators.
  • Are they hearing every voice at their retrospectives, or allowing dominators?
  • Have they remembered that the game is about self-organisation (if they make the inevitable mistake of attempting to have one or two people co-ordinate)?
  • Are they trying to reinvent the wheel every retrospective?  What about making a small tweak and then testing it?  It generally takes 10 seconds to decide whether a change was positive or negative (as long as it takes to fail), as opposed to debating it for 40 seconds.
  • The most important ingredient in a self-organising team is trust!  (This is usually the last hint that helps the breakthrough.  Most teams find a winning strategy then have one or two people who keep panicking and breaking it.)

The Debrief

The debrief usually drives itself, however some key learnings I look to draw out are:
  • A self-organising team still needs a strategy and some agreed rules. It needs to be developed by the team to guide the way they work together.
  • The uniting power of a shared mission
  • The notion of improvement through small course corrections. A retrospective should identify a small change which can be actioned within 2 weeks, then evaluated as an input to selecting the next small change. It’s too easy to fall into the trap of large “all or nothing” changes.

Places and Times to use the activity

I most commonly use this activity as part of team formation.  It’s brilliant to run with a freshly formed team as it really unites them, drops some traditional barriers (nothing like sprinting around a room to do that) and sets the scene for what their retrospectives should feel like.  I also use it as part of release train launches (teams of teams) for similar outcomes.
Alternatively, it’s great to run with a team instead of a regular retrospective.  It will both drive some interesting reflections on how they work as a team and re-invigorate their retrospectives.

Lastly, you can use it anytime as a community building activity.

Saturday, January 23, 2016

The case for the SAFe QuickStart

In May 2012 I was struck speechless as I listened to +Dean Leffingwell describe the “1-week QuickStart” model for launching an Agile Release Train on Day 3 of the inaugural (beta) SAFe Program Consultant course.   The coach and trainer in me couldn’t reconcile my visceral reaction to the scale of the event with the confidence with which Dean described the many occasions on which he had employed it.  I drank the kool-aid on much of SAFe, but not that!

Fast forward to September 2013.  With a few more trains and lots of training under the belt I was back in Boulder – this time to spend a week at the new SAI headquarters to participate in the alpha SPCT program.  A very heated discussion took place regarding the pre-requisites for becoming an SPCT.  The proposal to require the SPCT candidate to have completed two QuickStarts caused me to passionately argue that there are many SAFe coaches out there (myself included) who would never buy into running them, and the requirement needed to specify successful launches rather than prescribed launch techniques. 

Fast forward once more to October 2015.  On day 1 of the inaugural SAFe summit in Westminster Colorado, I found myself on-stage describing the fact that the QuickStart is now my strongly preferred model for launching a train. 

So what is the QuickStart and what changed my mind?



One common misconception that needs to be dispelled is the belief that the launch actually begins with the QuickStart (Notice the "Prepare" arrow).  Like any other large-scale event, a great deal of preparation goes into making it a success.  The preparation typically commences with a Leading SAFe course for the leaders of the “train-to-be” followed by a workshop to create a launch plan.   I’ve previously described my preferred approach to this here.

With that cleared up, let’s return to the QuickStart itself.  Whilst PI planning has been strongly advocated by many in the Scaled Agile community, the particular value of starting with the full event rather than employing a “soft-launch” was covered by my colleague +Em Campbell-Pretty in a recent blog post.

That leaves us with the training.  One of the key tenets of SAFe is “train everyone”, but why do we have to do it all at once?  This was the piece that took me years to wrap my head around.  I’ve been training for over 20 years, and throughout that time have loved the intimacy of small classes.  Somewhere between 12 and 20, and you can make a unique experience and form a real connection with every member of the class.  How on earth do you get a high impact training experience with 100 people in the room?

This led to me feeling I knew better than Dean for my first few launches.  I worked with my clients to schedule 4 or 5 team-level courses over the period leading up to the first PI planning.  I’d request that they send entire teams to the same course so they could sit and learn together, and they would promise to do their best.  Then the pain would start.  Firstly, the teams would often be in flux up until the last moment.  Then they would be too busy on current commitments to all come together, so they would dribble through 2 or 3 at a time.  And of course distributed team members would go to different courses.  The training was still hugely valuable, but I came to understand the motivation and some of the benefits of the big room – and eventually became convinced enough to try it.

After the first “big room training”, I was blown away and spent some time sorting through how on earth it could be so powerful.  Following are some of the key insights it yielded:
  • The teams will be fully formed. The whole team can sit at the same table. Not only do they get to learn together and share their insights as they learn, but it’s actually a very powerful team formation event. We give teams some time to choose their names on Day 1, and watch team identity grow before our eyes.
  • The team engages in collective learning, with the chance to dissect their different interpretations in discussions and exercises. They are not reliant on “1 brain – the ScrumMaster” to ensure they get value from the agile approach, they have many brains who each captured different nuances.
  • The features for the PI will be ready. The very long (and effective) series of exercises involving the identification, splitting, estimation and evolution of stories can actually be done as practice on real features the teams will be dealing with in PI planning. 
  • Not only do the teams form their own identities, but they begin to form the shared identity of the train. As the discussions and debriefs progress, they start to learn about each other’s worlds.
  • Logistics are easier and more cost-effective. You’re already booking a large venue and flying people in – you get to double-dip on both the venue logistics and the return from the investment in collocating people for the planning event.

The biggest takeaway, however, is the momentum that builds.  The team members don’t leave the training room to head back to their old day-jobs while they wait for the train to launch and give them a chance to put their ideas into practice.  The day after training finishes they’re back in the room to apply their newfound techniques.

Now zoom back out to the QuickStart as a whole.    A train succeeds when 100 or so people come into alignment, form a shared identity and sense of mission and collaborate to both execute and learn together.    Can you think of any better way to accelerate the beginning of that journey?