Saturday, September 10, 2016

Improving SAFe Cost of Delay - A response to Josh Arnold

In recent months, there has been considerable public critique of the SAFe interpretation of Cost of Delay.  This culminated recently in Josh Arnold’s June blog post suggesting improvements.  As the author of the simulation most broadly used to teach the SAFe interpretation (SAFe City), and having introduced it to perhaps more implementations than most in the SAFe world, I thought I’d weigh in.

What’s the debate?


At heart, the debate hinges on whether you drive straight to “fully quantified Cost of Delay” or adopt SAFe’s approach of using a proxy as a stepping stone.  The debate then digs deeper by questioning the validity of the SAFe proxy.

What is full quantification as opposed to proxy COD?


By “fully quantified”, we refer to a situation where the Cost of Delay (COD) has been defined as “$x/period” (eg $80K/week).  A proxied approach establishes no formal “value/period”, but tends to produce an outcome whereby we can establish the approximate ratio of Cost of Delay between two options (eg Option A has twice the cost of delay of Option B) without actually quantifying it for either option.

Where there’s smoke there’s fire


When the topic of Cost of Delay arises, it’s easy to get lost in intellectual debate.   The reality is that the primary use is to apply it to prioritization to enable maximisation of economic outcomes from a fixed capacity – and a well-implemented proxy will get pretty close to the same results on this front as an attempt at full quantification.  Both approaches seek to expose underlying assumptions and provide a model to assist in applying objective rationale to select between competing priorities.

After a workshop I held on it a few months ago one audience member came up to me to passionately argue about the theoretical flaws in the SAFe interpretation.  My response was to ask how many organisations he had implemented it in – to which the answer was zero.  At heart, full quantification is hard and many organisations don’t begin because the theory sounds good but the practical application seems too daunting.  A proxy is far easier to get moving with, and likely to gradually lead towards full quantification anyway.

“The other thing to realize about our economic decisions is that we are trying to improve them, not to make them perfect. We want to make better economic choices than we make today, and today, this bar is set very low. We simply do not need perfect analysis” - Reinertsen
However, there’s no escaping the fact that there are applications of COD that cannot be achieved by using a proxy.  Further, the SAFe proxy approach is not without its flaws.

What do we lose by not achieving full quantification?


Cost of Delay can be used for a lot more than prioritization.  Key applications negated by a proxy include the following:

Economic formulae for optimum utilisation rates


Quantified COD allows us to calculate the economic impact of queues at key bottlenecks and establish an economic basis for the amount of slack to build in.  In short, the hard maths behind why we might staff a particular specialist function to operate at 60-70% utilization in pursuit of flow efficiency.

SAFe’s approach to this problem is basically a blunt hammer.  Build in overall slack using “IP sprints”, and build in further slack by only committing plans at 80% load.  It then relies on effective application of dependency visualization and inspect and adapt cycles to optimize for slack.  The destination is similar in a good implementation, but the people footing the bill would certainly buy in much faster with the hard numbers quantified cost of delay can provide.

Economic Decision Making for Trade-offs


Reinertsen makes much of the fact that all decisions are economic decisions and better in the presence of an economic framework.  Quantified cost of delay allows us to apply economic criteria to such decisions as “do we ship now without Feature x or wait for Feature x to be ready”, or “If reducing the cost of operations by 10% would cause a delay in time to market of 4 weeks, would this be a good tradeoff?”. 

SAFe currently has no answer to this gap other than to stress that you must develop an economic framework.

Application of global economics to local prioritisation for specialist teams


Quantified Cost of Delay is a global property, whereas delay time is local.  If, for example, I have a highly capacity-constrained penetration testing team attempting to balance demands from an entire organisation, they can apply their local “delay estimate” in conjunction with the supplied cost of delay to easily achieve globally optimised priorities for their work.   A cost of delay of $60K/week is the same regardless of where in the organisation it is identified.  Relative cost of delay, by contrast, is local to the domain of estimation, and a 13 from one domain will never be the same as a 13 from another domain.

SAFe’s approach to this problem is to largely negate it.   Complex Value Streams and Release Trains sweep up specialist skills and create dedicated local capacity whose priority is set “globally for the system” using the PI planning construct. 
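A minimal sketch of that global optimisation, with hypothetical requests and figures: each business area supplies a quantified COD in $/week, the testing team supplies its own local duration estimates, and ordering by COD divided by duration yields priorities that are comparable across the whole organisation.

```python
# Hypothetical queue for a shared, capacity-constrained penetration-testing
# team.  Each request carries a globally quantified COD ($/week) supplied by
# its business area, plus the team's local duration estimate (weeks).
requests = [
    ("Payments release",  60_000, 2),   # $60K/week COD, 2 weeks of testing
    ("Mobile app launch", 90_000, 6),
    ("Internal portal",   20_000, 1),
]

# Ordering by COD / duration minimises total delay cost across the
# organisation, because $/week means the same thing everywhere.
ranked = sorted(requests, key=lambda r: r[1] / r[2], reverse=True)
```

Note that the same sort applied to relative scores from different domains would be meaningless, since a 13 in one domain is not a 13 in another.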

What do we lose by achieving full quantification?


Perhaps it’s heresy to suggest a proxy can be better, but if one takes the perspective that COD is “value foregone over time” then a quantified COD inherently only caters to value which can be quantified.  Let me take a couple of examples:

Improved NPS


Every customer I work with (even the Australian Tax Office) considers “customer value” to be a key economic consideration when prioritizing.  Most have moved from measuring customer satisfaction to customer loyalty using Net Promoter Score (NPS). 

The creators (Bain & Company) argue that any mature NPS implementation will eventually be able to quantify the P&L impact of an NPS movement in a particular sector/demographic/service.  However, I have yet to work with a company that has reached this maturity in their NPS implementation.  Losing NPS movement from our value considerations in COD would not be a good thing. 

Learning Value


What of the feature whose core value proposition is to understand how customers respond to an idea?   To validate an assumption the company has about them?   You could argue that the COD would then centre on the value that would be lost if the company got its guess wrong, but I suspect the resulting numbers would be highly suspect.

Pirate Metrics


More and more I see customers implementing leading measures such as the Pirate Metrics (Acquisition, Activation, Retention, Referrals and Revenue).  With enough time (and lagging feedback) you can quantify these into hard dollars, but there’s a reality that for a significant period when introducing new products they don’t quantify well.     

With enough work, I’m sure these problems could be solved in a fully quantified world, but none of the examples I have researched have done so.   The reality is that the vast majority of COD science is based on Reinertsen’s work, and his focus is the introduction of products.  In the software world we are not simply introducing new products but selecting how to evolve them iteratively and incrementally: it’s a different paradigm.  Achieving an objective balancing of qualitative and quantitative inputs is one of the things I have found the proxy model does well.

Is there a killer argument one way or the other?


Personally, I don’t really feel like it’s an open-and-shut case.   The reason I like the proxy is simple: it’s easy for customers to get started with.   Full quantification (particularly at feature level) sounds scary and all too easily raises the barrier to entry out of reach.  The longer the proxy is employed, the more hard data is brought to the table, and full quantification becomes almost inevitable.  Having said that, Josh and others have successfully kick-started people straight into quantification; having read all of their material and attended one of their workshops, the starting journey and discussions sound remarkably similar (flush assumptions!).

What (if anything) is wrong with the proxy?


I agree with Josh - the current proxy is flawed when it comes to time sensitivity.  The simplest proof is to use a legislative requirement.  Imagine that I have a new piece of legislation coming into place on 1 July 2017.  If I am not compliant, I will be charged $100k/week that I am in breach.  It will take me 4 weeks to implement the feature.  Today, in September 2016, there is no value whatsoever to me implementing the feature (COD=0).  As April/May 2017 approach, my COD suddenly escalates.  There is no way to reflect this using the current SAFe approach.
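The escalation in that example can be sketched directly (dates and figures are taken from the hypothetical above): the COD of a deadline-driven item is zero until you pass the last date at which you can still start and finish in time, then jumps to the full penalty rate.

```python
from datetime import date, timedelta

# Hypothetical figures from the legislative example above.
PENALTY_PER_WEEK = 100_000          # $100k/week once in breach
DEADLINE = date(2017, 7, 1)         # legislation takes effect
LEAD_TIME = timedelta(weeks=4)      # time needed to implement the feature

def cod_per_week(assessed_on: date) -> int:
    """Marginal cost of delaying the work by one week, as seen on a given day."""
    last_safe_start = DEADLINE - LEAD_TIME
    # Before the last safe start date a week's delay costs nothing;
    # from then on, every week of delay is a week in breach.
    return PENALTY_PER_WEEK if assessed_on >= last_safe_start else 0
```

Assessed in September 2016 the function returns 0; assessed in June 2017 it returns the full $100k/week, which is exactly the step change the current proxy cannot express.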

How did Josh fix it?


Josh suggested using time as a multiplier component, which is most definitely an improvement but not the approach I would take.  For one thing, I tend to find people over-fixate on timing in their early days with COD.  A multiplier is a fairly blunt instrument (think double and triple), and you would have to be careful to talk percentages.

However, the real challenge is that the question we are really trying to ask ourselves is “how does value change over time”, or “how is value sensitive to time”.   Let’s take two examples based on a federal government client of mine:

  • They have many processes they are seeking to digitally enable/enhance which occur yearly.  A feature which is delivered just before peak lodgement activity is far more valuable than a feature delivered just after peak.  In fact, if you miss the peak you might choose to defer the feature for 9-10 months.
  • They have numerous features which must be delivered in time for a legislative change.  There is very little advantage other than risk mitigation to delivering these features 6 months early.  Using time as a multiplier would allow us to arrive at "zero COD" for early delivery of a legislative change, but this is entirely dependent on the date at which I assess it.

How would I fix it?



My belief is that we need to focus on the timing of the COD assessment, rather than the timing component of the COD.  At any given point that we assess it, we are effectively assessing it “based on today’s urgency”. 

At this point, we can leverage Josh’s great work with urgency profiles.   Each urgency profile takes the form of a graph, and the graphs tend to have inflection points representing moments (timing) when the value goes through significant change.  

This is what I would do:
  • When first assessing COD for an item (be it an Epic or a Feature), first assign an "Urgency Profile" to it and make explicit the dates believed to be associated with the inflection points.
  • Eliminate the time criticality component of the COD formula.
  • Separate the User and Business Value components.  Most implementations I work with tend to do this, defining "Customer value" and "Business Value" to separate considerations of customer loyalty from hard-nosed underlying business.  This would also open the door to more easily introducing quantification (with relative values based on dollar value brackets perhaps).
  • Make explicit in the guidance the fact that when applying the Proxy to your organisation you need to consider the relative weighting of the proxy components.
  • When assessing the Customer, Business and Risk Reduction/Opportunity Enablement values, do so in light of "today's urgency".
  • Based on the identified inflection points on the urgency profile, flag the date when the COD of the item needs to be re-assessed.

This solves two fundamental problems:
  • It ensures we have a systematic (and meaningful) approach to considering time sensitivity and its impact on COD without suffering from the flaws of the current world
  • It establishes a rhythm for re-assessment, sadly lacking for most current implementations of the SAFe COD model which tend to be "set and forget".
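As a rough sketch of how the proposal above might be captured in practice (all class names, field names, weights and dates below are hypothetical, and would be tailored per organisation):

```python
from dataclasses import dataclass
from datetime import date
from typing import List, Optional, Tuple

@dataclass
class UrgencyProfile:
    name: str
    inflection_points: List[date]   # dates when value changes significantly

@dataclass
class CodAssessment:
    customer_value: int             # each component scored at "today's urgency"
    business_value: int
    risk_opportunity: int           # risk reduction / opportunity enablement
    weights: Tuple[int, int, int] = (1, 1, 1)   # org-specific component weighting

    def score(self) -> int:
        wc, wb, wr = self.weights
        return (wc * self.customer_value
                + wb * self.business_value
                + wr * self.risk_opportunity)

def next_reassessment(profile: UrgencyProfile, assessed_on: date) -> Optional[date]:
    """Flag the next inflection point as the date the COD must be re-scored."""
    future = [d for d in profile.inflection_points if d > assessed_on]
    return min(future) if future else None
```

The key mechanics are that time criticality no longer appears as a component, the User and Business values are separated, and every assessment carries a flagged re-assessment date derived from the urgency profile.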




Friday, July 8, 2016

Agile Musical Chairs - A Facilitation Guide

This is a fantastic game I picked up at the Agile Alliance conference in Florida a couple of years ago.  It offers deep learning with respect to self-organisation and learning cycles, and a number of people I’ve facilitated it for in conference settings have asked for a facilitation guide.
The mission of the team playing the game is to “beat the facilitator” by preventing them from sitting in a vacant chair for 1 minute.  You should allow roughly an hour for the activity.

Setup

The smallest group I have played the game with is 7, and the largest about 100.  With large numbers, you will want to break into groups of 25-30 with a facilitator per group (it’s very easy to recruit and instruct a facilitator on the spot).
To set up, you need an open space with one chair per player, one for the facilitator and plenty of room to move.  Ask each team member to grab a chair, and to randomly organise themselves in the space available.  You don’t want a regular arrangement, and certainly not an orderly circle.  You want chairs facing in multiple directions, fairly evenly distributed throughout the space (some in the middle, some on the edges).

Briefing

The rules are very simple.  Inform the participants that the purpose of the game is to learn about self-organisation and learning cycles.  Their goal is to block you from occupying a vacant seat for 60 seconds.  The constraints are as follows:
  • Any number of people can be moving at once to occupy the chair the facilitator is aiming for
  • Once you have stood up (or even half stood up), you cannot sit down in the same chair; you must find a new chair to sit in.
  • The chairs cannot be moved from their starting position.
  • No physical blocking is allowed. You can’t push the facilitator out of the way or impede them from moving in their desired direction.
  • The facilitator can only move at a walk, the team can move as fast as they like. It’s not a running race, it’s a strategy game.
After each failure, the group will have exactly 60 seconds to conduct a retrospective and alter strategy before the facilitator starts moving again.

The play


You may or may not choose to warn participants they are likely to take 4 to 5 attempts just to reach 10 seconds.  Most groups will take 30-40 attempts to solve it.  You have various hints to offer along the way, with the hints later on the list being more powerful accelerators.
  • Are they hearing every voice at their retrospectives or allowing dominators?
  • Have they remembered that the game is about self-organisation (if they make the inevitable mistake of attempting to have one or two people co-ordinate)?
  • Are they trying to reinvent the wheel every retrospective?  What about making a small tweak then testing it?  It generally takes 10 seconds to decide whether a change was positive or negative (as long as it takes to fail), as opposed to debating it for 40 seconds.
  • The most important ingredient in a self-organising team is trust!  (This is usually the last hint that helps the breakthrough.  Most teams find a winning strategy then have one or two people who keep panicking and breaking it.)

The Debrief

The debrief usually drives itself, however some key learnings I look to draw out are:
  • A self-organising team still needs a strategy and some agreed rules. It needs to be developed by the team to guide the way they work together.
  • The uniting power of a shared mission
  • The notion of improvement through small course corrections. A retrospective should identify a small change which can be actioned within 2 weeks, then evaluated as an input to selecting the next small change. It’s too easy to fall into the trap of large “all or nothing” changes.

Places and Times to use the activity

I most commonly use this activity as part of team formation.  It’s brilliant to run with a freshly formed team as it really unites them, drops some traditional barriers (nothing like sprinting around a room to do that) and sets the scene for what their retrospectives should feel like.  I also use it as part of release train launches (teams of teams) for similar outcomes.
Alternatively, it’s great to run with a team instead of a regular retrospective.  It will both drive some interesting reflections on how they work as a team and re-invigorate their retrospectives.

Lastly, you can use it anytime as a community building activity.

Saturday, January 23, 2016

The case for the SAFe QuickStart

In May 2012 I was struck speechless as I listened to +Dean Leffingwell describe the “1-week QuickStart” model for launching an Agile Release Train on Day 3 of the inaugural (beta) SAFe Program Consultant course.   The coach and trainer in me couldn’t reconcile my visceral reaction to the scale of the event with the confidence with which Dean described the many occasions on which he had employed it.  I drank the kool-aid on much of SAFe, but not that!

Fast forward to September 2013.  With a few more trains and lots of training under the belt I was back in Boulder – this time to spend a week at the new SAI headquarters to participate in the alpha SPCT program.  A very heated discussion took place regarding the pre-requisites for becoming an SPCT.  The proposal to require the SPCT candidate to have completed two QuickStarts caused me to passionately argue that there are many SAFe coaches out there (myself included) who would never buy into running them, and the requirement needed to specify successful launches rather than prescribed launch techniques. 

Fast forward once more to October 2015.  On day 1 of the inaugural SAFe summit in Westminster Colorado, I found myself on-stage describing the fact that the QuickStart is now my strongly preferred model for launching a train. 

So what is the QuickStart and what changed my mind?



One common misconception that needs to be dispelled is the belief that the launch actually begins with the QuickStart (Notice the "Prepare" arrow).  Like any other large-scale event, a great deal of preparation goes into making it a success.  The preparation typically commences with a Leading SAFe course for the leaders of the “train-to-be” followed by a workshop to create a launch plan.   I’ve previously described my preferred approach to this here.

With that cleared up, let’s return to the QuickStart itself.  Whilst PI planning has been strongly advocated by many in the Scaled Agile community, the particular value of starting with the full event rather than employing a “soft-launch” was covered by my colleague +Em Campbell-Pretty in a recent blog post.

That leaves us with the training.  One of the key tenets of SAFe is “train everyone”, but why do we have to do it all at once?  This was the piece that took me years to wrap my head around.  I’ve been training for over 20 years, and throughout that time have loved the intimacy of small classes.  Somewhere between 12 and 20, and you can make a unique experience and form a real connection with every member of the class.  How on earth do you get a high impact training experience with 100 people in the room?

This led to me feeling I knew better than Dean for my first few launches.  I worked with my clients to schedule 4 or 5 team-level courses over the period leading up to the first PI planning.  I’d request that they send entire teams to the same course so they could sit and learn together, and they would promise to do their best.  Then the pain would start.  Firstly, the teams would often be in flux up until the last moment.  Then they would be too busy on current commitments to all come together so they would dribble through 2 or 3 at a time.  And of course distributed team members would go to different courses.  The training was still hugely valuable, but I came to understand the motivation and some of the benefits of the big room – and eventually got convicted enough to try it. 

After the first “big room training”, I was blown away and spent some time sorting through how on earth it could be so powerful.  Following are some of the key insights it yielded:
  • The teams will be fully formed. The whole team can sit at the same table. Not only do they get to learn together and share their insights as they learn, but it’s actually a very powerful team formation event. We give teams some time to choose their names on Day 1, and watch team identity grow before our eyes.
  • The team engages in collective learning, with the chance to dissect their different interpretations in discussions and exercises. They are not reliant on “1 brain – the ScrumMaster” to ensure they get value from the agile approach, they have many brains who each captured different nuances.
  • The features for the PI will be ready. The very long (and effective) series of exercises involving the identification, splitting, estimation and evolution of stories can actually be done as practice on real features the teams will be dealing with in PI planning. 
  • Not only do the teams form their own identities, but they begin to form the shared identity of the train. As the discussions and debriefs progress, they start to learn about each other’s worlds.
  • Logistics are easier and more cost-effective. You’re already booking a large venue and flying people in – you get to double-dip on both the venue logistics and the return from the investment in collocating people for the planning event.

The biggest takeaway, however, is the momentum that builds.  The team members don’t leave the training room to head back to their old day-jobs while they wait for the train to launch and give them a chance to put their ideas into practice.  The day after training finishes they’re back in the room to apply their newfound techniques.

Now zoom back out to the QuickStart as a whole.    A train succeeds when 100 or so people come into alignment, form a shared identity and sense of mission and collaborate to both execute and learn together.    Can you think of any better way to accelerate the beginning of that journey?


Friday, April 3, 2015

Taking the Status out of Standups

Today was a busy coaching day on a large release train with lots of good coaching conversations.  When I got to the "warm and fuzzy moments" part of my coaching journal at the end of the day, memories of the scrum of scrums in the morning put such a huge smile on my face that I had to come home and blog.
One of the first things I will do walking in the door as a coach is to go and 'chicken' at stand-ups. 99 times out of a hundred, they'll sound like a status report.  How can it be a self-organising team if they have a daily status report?  It's meant to be the timeout where the team gets together for a moment to work out how best to work together that day on moving towards their goal.  Often, I find the real standup happens just after the formal one.  The scrum-master walks off, and all of a sudden the team lights up and has a great conversation.    
Fixing a standup is a great place to start as a coach.  Changing that one conversation the team is guaranteed to have every day will lay a strong foundation for much of the deeper work to build on.  And I've built up quite a library of tips and tricks from conferences, blogs, colleagues and blind experiments.  The best ones always vary according to the team - I just offer a smorgasboard and encourage the team to pick one or two and try them out.

Simple Tricks for changing the dynamic

Use a talking ball

Very simple technique, takes control of who speaks when out of the hands of the scrum-master and into the hands of the team members. Also bound to get a laugh at some point as a throw goes crazy or when a team member brings in a soft toy that makes funny noises to use as the ball.  It's simple.  Last person to arrive gets thrown the ball.  They speak, then pick someone to throw it to.  Repeat until done.

Start with a soundtrack

Pick a team sound.  A favourite song, a sound-bite aligned to the team's name.  Program an alarm on a phone or a computer to play the sound at standup-time.  It's no longer the scrum-master summoning people in.

Talk to the wall 

There are a few variants on this one.  One is to have the team-member talking step up to the kanban wall and point to the cards they are talking about as they're talking.  Goes well with the talking ball as the person speaking then throws the ball out to the huddle to call the next one in.  Alternatively, have a rotation between the team members of someone to stand at the wall and walk through it from right to left calling for comment card by card.  Changes things up from the "person by person" routine.

Get out of the way

The conversation's meant to be between the team members; if they're all facing the scrum-master while they talk, change it up.  They can stand behind the huddle if you're talking to the wall, or, when extreme measures are called for, make a moving target: get the scrum-master to constantly move so there are always other team members between them and the person talking.  It'll move eyelines.

Make a standup agreement

Often a good activity for a retrospective.   Have the team create an agreement listing the ingredients of a great standup.  Make it visible.  If nothing else, it'll prompt a good discussion about why the standup exists.  Many team-members just assume it's meant to be a status report.  Works really well in conjunction with red-cards.

Introduce red-cards 

This is one of my favourites.  Standups that get stuck in the weeds and drag on forever are horrible.  If the scrum-master is forever having to intervene to offline stuff it puts them far too squarely back in the centre.  Establish a stash of pink index cards (or flags or some other fun device) in the standup area and get every team member to pick one up as they arrive.  The system is simple:
  • "if you're bored or disinterested in the current conversation, hold up your card"
  • "if you agree with someone else who just held up a card, hold yours up too"
Now the team is voting conversations off the island.  They have control; the scrum-master just needs to help park the topic after the vote.  It might be a really important conversation, just not for the whole team.

Visualise your impediments

You're trying to discover impediments.  They're important.  They're half the reason to have the standup discussion.  It's a shame most teams just watch their scrum-master either try to memorise them or take notes, while all the conversation is about status rather than the things stopping progress.  Make an impediment radiator: write impediments up on post-its instantly and visualise them.

Stage a revolt

This one's a bit cheeky, but I've used it a few times to good effect.  Separate the team from the scrum-master.  Make sure they understand that the standup is meant to be for them not the scrum-master, then suggest they go tell the scrum-master they're taking over the standup.  Interestingly enough, the bulk of the times I've used it has been with the scrum-master's full support.  We have a discussion where they explain to me how frustrated they are with their standups.  They try a few things and nothing's working.  Then the team stages a revolt and all of a sudden it's fixed.  
You'll notice a recurring theme above - take it out of the control of the scrum-master and into the team's hands.  But these are no guarantee the right conversations will happen.  They'll almost certainly change the dynamic, but I've saved the best for last.

Change the questions

The "3 question standup" goes stale even faster than the good/bad/confused retrospective: it's used every day instead of every 2 weeks.  What's worse, the questions almost seem designed to promote a status flavour.  And of course, 99 times out of a hundred it's "no blockers" all around, even for a team screaming from pain.  Change the questions, then change them again.  Find the questions that create the best conversations.  I had one friend who ran for a while with only one question: "What's blocking you?".  He finished each standup by filling in the blocker of the day on the team wall.   See my sample scrum of scrums questions below for some inspiration on new questions.
And .. make sure you visualise the questions.  Nothing worse than team members trying to remember what the question was.  Make a BVIR.  

Scrum of Scrums

This one's easy.  It's just a standup.  All comments above apply, they're just even more important.  Getting a good scrum of scrums going is often painful.  It's typically at more risk of being a status report than normal standups.  Also, never forget a standup is a "daily standup".   The best way to make a painful SoS is to only do it once a week.
So, back to the inspiration to sit down and write tonight.  Today's Scrum of Scrums was for a group of about 10 teams who are 2 sprints into scrum of scrums after spending 8 months without.  And it's been evolving well with some great conversations.  They were just ready for a nudge, and I suggested some new questions to try to the Release Train Engineer last night.  Then I just stood there with a big goofy coaching smile on my face this morning as the richness of the conversation ratcheted up another notch.
For more thoughts on creating a great scrum of scrums, take a look at this post by my colleague +Em Campbell-Pretty delving into visualisation.

Friday, December 5, 2014

Moving from belief to action - implementing WSJF for value based prioritisation in SAFe (Part 2)

In my last post, I described the simulation I use to teach Cost of Delay (CoD) and Weighted Shortest Job First (WSJF).  This often provides the impetus to begin, and one of the beauties of the model is that you can adopt it before you’ve even begun to change delivery method.  Beginning your change with a shared definition of value and a rich supporting discussion is a great launch point!

The first step is to make sure you know your vision and overarching strategy.  This need has usually emerged in the debrief on the simulation.  One group will mention how they paused with 10 minutes to go to say “what kind of city are we really building here?”  Every other group will look around and say “aaah, if only we’d started there!”.   Pulling it back to SAFe terminology, we are looking at the workshop(s) to define the strategic themes and portfolio vision.

With a sound strategic vision in place, we look next at adapting the model to the organisational context.  Three key questions need to be answered:

  • Do we use Dean’s components for CoD?
  • Do we weight the components equally or otherwise?
  • What specific factors contribute to each component?


The official SAFe formula is:
Cost of Delay = User/Business Value + Timing Criticality + Risk Reduction/Opportunity Enablement
Typical variations I have seen include:

  • Separation of “Business Value” and “Customer Value”
  • Separation of “Risk Reduction” and “Opportunity Enablement”
  • Introduction of a component for “Alignment to Strategy”
  • Weighting Business Value more highly than the other components.

In practice, I prefer to defer the weighting question.  Revisiting it once you have scored a set of options and gained a feel for how you are scaling each component seems to land you in a better position.

Another key theme that always emerges in the simulation debrief is the importance of getting to a shared understanding of ‘what contributes to’ business value, timing and opportunity/risk reduction.  Groups will discuss how alignment around scoring became easier as they aligned around the definition of value.  They’ll also regularly highlight the fact that once they got to opportunity/risk reduction they realised they had been ‘double-counting’ it in earlier components.

Leveraging this insight, my next workshop after the strategy is defined begins with identifying the contributing factors.  This is fairly easily facilitated with flip-charts and post-its, and typically takes 60-90 minutes.  Again, it becomes an alignment tool, as it enables a great discussion on what constitutes value and opens up shared understanding among the group.  Below is a (slightly desensitised) example of the definition from a group I recently helped develop a roadmap using WSJF.



By this point you have an overarching strategy, a model people understand how to use, and you’ve filled in and clarified the intent in using it.

The next step is to take your current list of opportunities and get them estimated.  Most groups I work with want to retain the index-card/planning-poker approach.  This is handy, because every good coach knows how to help people navigate a good planning poker estimation session.  I will also often split the estimation into one workshop per component.  The conversation is intense, and with a decent-sized backlog (25-30 items) allowing a couple of hours per component works well.  It also allows time for homework and clarifications: we regularly park a couple of items for people to go away and assemble supporting data.  We conclude the series by pulling all the calculated results together and reviewing and adjusting the final ordering.  Once your first list is prioritised, you just need a cadenced working group to evaluate new items or adjust previous evaluations as new information emerges or timing considerations change.
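Pulling the calculated results together is simple arithmetic: WSJF is Cost of Delay divided by job size, and the backlog is ordered highest-first.  A minimal sketch, with made-up epic names and scores:

```python
# Illustrative sketch of ordering a backlog by WSJF = Cost of Delay / Job Size.
# Epic names and scores below are invented for the example.
backlog = [
    {"name": "Epic A", "cod": 21, "job_size": 13},
    {"name": "Epic B", "cod": 13, "job_size": 3},
    {"name": "Epic C", "cod": 8,  "job_size": 8},
]

for item in backlog:
    item["wsjf"] = item["cod"] / item["job_size"]

# Highest WSJF first: the "weightiest, shortest" jobs rise to the top.
ranked = sorted(backlog, key=lambda item: item["wsjf"], reverse=True)
print([item["name"] for item in ranked])  # ['Epic B', 'Epic A', 'Epic C']
```

Note how Epic B, with a modest Cost of Delay but a small job size, jumps ahead of the "bigger" Epic A; that counter-intuitive reordering is exactly the conversation the model is meant to provoke.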

My closing hint is to remember not to be a slave to the numbers.  There are often good reasons to adjust the final ordering.  And naturally you want to put a kaizen approach in place to start to measure realised benefits and use that feedback to improve your initial value estimation discussions.

The true value is not in calculating WSJF to two decimal places.  It lies in having a tool that facilitates objective conversations, helping a group of execs or product managers align around a common understanding of value for the organisation.  In doing so, it’s amazing how much they start to understand about each other.

Friday, August 22, 2014

Getting from theory to practice with Cost of Delay and WSJF in SAFe

Recently, I’ve noticed a number of threads in the SAFe LinkedIn groups indicating people are struggling to get started with the SAFe “Weighted Shortest Job First” (WSJF) prioritisation model.

When I first heard +Dean Leffingwell teach the model I thought “wow, this is fantastic.”  His use of a proxy model for Cost of Delay (CoD) seemed inspired.  +Don Reinertsen had provided compelling arguments for the power of fully quantified CoD.  On the other hand, all too many people abandon all attempts to have a value discussion because it’s “just too hard to quantify”.  The 3-part proxy in SAFe seemed simple enough to enable people to begin.

The problem, I learned, was one often encountered by people digesting Don’s principles of product development flow.  It sounds like fantastic theory, but the complexity makes it challenging to find a starting point.  My first breakthrough came in early 2013.  I was teaching a public Leading SAFe class and had 5 people from one company at the same table.  While most people tackle the WSJF exercise as individuals, this group had a shared list of features and attacked it as a group exercise.  They made up planning poker cards from post-its and started voting.  As I listened to the conversation, I got inspired.  Not only did I hear a great discussion, but I could hear what they had misunderstood from the theory.

My next step was to take their example and turn it into something every class member could experience.  They’d had the benefit of a list of features all of them recognised, but this is rarely true.  For an answer, I decided to feed every group a recognisable set of things to debate for Cost of Delay.  The result was a simulation I call “SAFe City”.  The setting involves a property developer building a satellite city and evaluating the Cost of Delay of various housing estates, shops and other developments.  Initially framed at the Epic level, every group receives a list of 9 Epics to evaluate along with a good supply of planning poker cards.



I started to run it for every class.  And the results astonished me.  Not only did it become apparent how confused people became in interpreting the theory, but I also started to understand the true beauty of the model myself.

Sample Epic


If you’re interested in the simulation, it takes roughly 1 hour to run and debrief and all the required materials can be found at SAFe City Epic Prioritisation.  It runs perfectly well standalone, and I’ve run it with groups up to and including C-level.

The activity gives groups both the confidence and interest to take it back into their own context.  I’ve now seen numerous groups elect to adopt the model irrespective of whether the initiatives are being delivered through waterfall or agile.  In my next post I will tackle the first steps as we take it out of the classroom and into reality.

Wednesday, July 2, 2014

Self-selecting teams with SAFe

Whilst the energy surrounding the launch of an Agile Release Train is immense, there is an all-too-real risk that our new teams will be more facade than reality once the hype dies down.

Forming around the delivery of value, we draw a "kidney shape" that inevitably cuts across significant organisational boundaries.  In so doing, we ask both line management and team members to take a major leap out of their comfort zone.

Whilst as agilists we love the mantra of "embracing uncertainty", it is far too easy to gloss over the human impact this creates.

For line management, it involves a significant amount of letting go.  No longer will they determine the day-to-day priorities of those they manage.  Much as we rail against traditional performance management, it does not disappear overnight.  How will they address this in a matrixed world?  How will they address their duty of care with respect to career progression?  How do they deal with a situation where some of their people are embedded in a train and some not?  What happens when they lose their "go-to" problem solvers to an agile team?  Do they really understand what "dedicated team member" means?

For team members, the upheaval is even greater.  No longer will they be surrounded by others with a similar skill-set and conversational language.  In many cases, their new team-mates will be people they have literally thought of as "the enemy" for years.

There is a pattern I have been guilty of, and heard from fellow coaches many times, around the design of teams for a new train.  Full of passion to create "self-organising teams", enable decentralisation, and change our language from "resources" to "people", we fill a room with leadership.  Wanting to be agile, we bring lots of post-its.  We put flip charts on the wall (one per team) and put people's names on colour-coded post-its (blue for devs, red for testers, green for BAs, and so on).  Then we allocate them.  Who goes where?  How do we get a balanced team?  Can we stick to "100% committed" and avoid "shared resources"?  Full of triumph at our "agile team selection process", the workshop concludes with a moment to form the "comms plan" for announcing the new team structures.

Do the leaders in the room really understand what they're committing to?  Do they feel safe to voice their concerns?  Do we know the personalities and personal histories of the people we are teaming up?  Are we discussing people or post-its?

Any hesitation in answering the questions above in the affirmative will likely undermine our train.  But beyond that, doesn't this feel like a very management-centric approach to creating "autonomous self organising teams"?

I've felt for a long time that there has to be a better way.   I've been reading for years about the power of self-selecting teams.  I've loved it not just from the "team effectiveness" perspective but also when thinking about the psychology of change.  Every Agile coach knows that people dislike having change forced upon them; we should be trying to create the conditions in which people can be part of choosing the change for themselves.  But I just hadn't been able to bring it to life.

Then last year I read a blog post by Sandy Mamoli of Nomad8 about "squadification", and I was in awe.  She had convinced an entire IT department to throw their current structure into the air and hold a 1 day event which enabled all their people to organise themselves into new Agile teams (or squads).  She had been generous enough to share her facilitation design, and as I read it light bulbs started going off in my head.

Her premise was that the job of leadership was to design the teams, their missions and the skill-sets they would need available to fulfil them.  So far so good: this is what we're doing when we commence the design of a release train.  Then you hold an all-day facilitated event where candidate team members organise themselves into the teams whose shape you have defined.  What a replacement for the "people as post-its" workshop!

I was fortunate recently to be coaching a train with a courageous leader.  The train had originally formed with component teams and was looking to re-organise into feature teams, and its leader shared my excitement about Sandy's thinking.  In the spirit of servant leadership, he did not want to thrust it upon his leadership team, but invited me to facilitate a series of workshops with them to explore it.

Three weeks and many nervous moments later, we embarked on a "Team Fair".  He has shared the story on his blog, and I have to say the result was amazing.  The vast majority of our "What if?" scenarios about things going wrong did not eventuate, and some very surprising and creative outcomes did.  In designing it, we had the chance to explicitly redesign the role of line management in the new world and give them the time to walk their people through the upcoming change and involve them in the preparation.  The fair itself became a symbolic moment as leadership "released" their people into the freedom they had designed.

With every train I help to launch, I take away some new learnings for the next one - backed by the conviction of experience to give me the courage.  Facilitated team self-selection through a "team fair" is now firmly in the kitbag, with thanks to +Sandy Mamoli for the inspiration and +Andy Kelk and his team for the courage to experiment.