Monday, February 25, 2019

Practical Finance and Cost Allocation in SAFe



SAFe provides some wonderful yet daunting guidance when it comes to funding and the application of Lean Budgets.  As of SAFe 4.6, we have three key tenets of Lean Budgeting:

  1. Fund Value Streams, not projects
  2. Guide Investments by horizon
  3. Participatory Budgeting

My purpose in this article is not to restate the SAFe recommendations.  They’re well documented in the Lean Budget article.  As always, however, I’d rather talk practice than theory – in this case in the area of “Fund Value Streams, not projects”.

In summary, the theory is that we provide guaranteed funding to establish a standing capacity (in the form of an Agile Release Train), then use a combination of “guardrails”, Epic level governance and Product Management accountability to ensure that this funded capacity is well used (building valuable features) and tune it over time based on results achieved.

When I take a room of leadership through this approach it’s usually pretty confronting, and often provokes strong reactions.  Then I share a story from an early SAFe implementation.  It was with an organization that didn’t have Lean or Agile friendly funding; we just had a stable ART with stable teams that were fed by projects.  And we built a record for “100% on time/on budget”.  We did it in a very simple way.  When something was delivered under budget, we still charged the quoted price, and as such we were able to build up a buffer fund.  When something ran over budget, we drew down on the buffer fund.  It was closely managed, and it all came out in the wash!  By the conclusion of the story, the same room that was reacting strongly a few minutes earlier is suddenly grinning (in some cases rather nervously).  I’m pretty sure every organization I’ve worked with in the past 10 years has survived historically using some variation of this technique.

Some SAFe implementations are lucky enough to start with capacity-funded ARTs fully following the framework guidance, but in most cases they’re still realistically project funded.  It’s either a really big amorphous project acting as an umbrella that masks the standing funding model, or quite literally a single ART being funded by numerous projects and needing to charge back.

And to take it a bit further, even if we are capacity funded we usually need more granular data on how the money is spent.  At minimum, we generally have some work which is Opex funded and some Capex funded.  We also want some data on how much our features are costing.   Hopefully we’re not falling into the trap of funding feature by feature, but when your ART is costing $3-5M per PI the organization will require us to be able to break down where those millions are going.

About now, the timesheet police come out!  Somebody somewhere figures out a set of WBS codes the ART should be charging against (I recently heard an example where a theoretically capacity funded ART had up to 15 WBS codes per Feature), and everyone on the ART spends some time every week trying to break up the 40 hours they worked across a bewildering array of WBS codes that will then be massaged, hounded and reconciled by a small army in a PMO.

There’s a much simpler, less wasteful way.  We’ve used it time and again, blessed by Finance department after Finance department – once they understand the SAFe framework.  I’m going to start at the team level, then build it up to the ART.

Team Level Actuals

We usually work this on a “per Sprint” basis, and need to know two things:

  • What did the team cost?
  • What did they work on?

What did the team cost?

Figuring out what the team cost should be straightforward.  Given that membership of an Agile team means 100% dedication, all we need is your daily rate and the days you worked.  How this daily rate is calculated for permanent employees versus contract/outsourced staff varies too much between organizations to cover here, but for each of your agile teams you should know your “burn rate per sprint”.  You then need a way of dealing with leave and (hopefully not) overtime.  We’re not trying to totally kill timesheeting here, but we can make it much simpler.  Each team member should only have one WBS code to allocate their time against – the one that says “I worked a day as a member of this team”.  Thus, using your knowledge of burn rates and actual days worked, you have the total cost for the team for the sprint.

What did they work on?

If you’re part of an Agile team, the only thing you could possibly work on is your backlog!  So, we know that the entire team cost should be allocated against their backlog.  From  a funding perspective, aggregating this is based on effective categorization of backlog items: by parent Feature, item type or both.  Consider the following example:


The Sprint in Aggregate (Velocity: 28)

  • Feature A: 10 points
  • Feature B: 13 points
  • Production Defects: 3 points
  • BAU: 2 points


Based on a team burn rate of $100K/sprint, we then have:

  • Feature A: $36,000
  • Feature B: $46,000
  • Production Defects: $11,000
  • BAU: $7,000
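The arithmetic behind these figures can be sketched in a few lines.  This assumes the sprint’s 28 points split 10/13/3/2 across the four categories (consistent with the dollar figures above, rounded to the nearest $1,000) – any categories and point values work the same way:

```python
# Allocate a team's sprint burn rate across backlog categories in
# proportion to accepted story points, rounding to the nearest $1,000.

def allocate(burn_rate, points_by_category, rounding=1000):
    velocity = sum(points_by_category.values())
    return {category: round(burn_rate * points / velocity / rounding) * rounding
            for category, points in points_by_category.items()}

# Worked example: $100K burn rate, velocity of 28
sprint = {"Feature A": 10, "Feature B": 13, "Production Defects": 3, "BAU": 2}
print(allocate(100_000, sprint))
# {'Feature A': 36000, 'Feature B': 46000, 'Production Defects': 11000, 'BAU': 7000}
```

Note that the rounding granularity is a conversation to have with Finance – the point, as discussed below, is that precision beyond this is illusory anyway.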

If your world can’t cope with ragged hierarchies, you can create “Fake Features” for the purpose of aggregation.  I quite commonly see features like:

  • Feature C: “PI 3.1 Production Defect Fixes”
  • Feature D: “PI 3.1 BAU Work”

The richness and detail of your aggregation basically depend on your “Story Type” classifications.  For example, work done on exploration/discovery typically can’t be capitalized, which would lead you to introduce a story type of “Exploration”.

Applying the results

At this point, we have all the costs booked against each team’s temporary WBS.  All that remains is to journal the costs across from the temporary WBS to the real WBS codes based on your aggregates.

If you’re clever, you’ll automate the entire process.  Extract the timesheet data, extract the backlog data, calculate the journals and submit!
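The journal step itself is simple enough to sketch.  The WBS codes and figures below are hypothetical, and the timesheet/backlog extracts are assumed to have already produced the actual cost and point aggregation:

```python
# Turn a sprint's actuals and backlog aggregation into journal lines
# that move cost from the team's temporary WBS to the real WBS codes.

def journal_lines(temp_wbs, actual_cost, points_by_wbs):
    """points_by_wbs maps a real WBS code to its share of sprint points."""
    total_points = sum(points_by_wbs.values())
    return [(temp_wbs, real_wbs, round(actual_cost * points / total_points, 2))
            for real_wbs, points in points_by_wbs.items()]

# e.g. $98,400 of timesheeted actuals, split by the sprint's aggregation
for line in journal_lines("TEMP-TEAM-01", 98_400,
                          {"WBS-FEAT-A": 10, "WBS-FEAT-B": 13,
                           "WBS-DEFECTS": 3, "WBS-BAU": 2}):
    print(line)
```

Each tuple is a (from, to, amount) journal line; the amounts always sum back to the actuals, so nothing is lost or invented in the transfer.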

Dealing with the naysayers

At this point, some people get worried.  The size of the stories was estimated, not actual.  What happens if Story 1 was estimated as 3 points and was actually 5?  What about stories that didn’t get accepted?  Can Fibonacci sizing really be accurate enough?  Our old timesheets recorded their time spent against different activities in minutes!

It’s time for a reality check.  I’m going to illustrate with a story.  I was recently facilitating some improvement OKR setting with the folks responsible for timesheets in a large division (still mostly waterfall).  One of the objectives they wanted to set involved reducing the number of timesheet resets.  I asked what a timesheet reset was and why it was important to reduce them.  It turned out to be when a timesheet had been submitted and was some way through the approval process when they realized there was an error and it needed to be reset, fixed and resubmitted.  Obviously, this was a pain.  I asked them how often it happened and why.  The response: “Usually when there’s a public holiday we get a lot of them.  Everyone enters 8, 8, 8, 8, 8 (hours worked) and the approver approves on autopilot then halfway through their approval run remembers the public holiday!”

Everyone (and most especially finance) knows timesheet data is approximate at best.  The person filling it out knows they worked 8 hours, guesses their way through how much they spent on what, and finds whatever hours are left unaccounted for and chooses something to attach them to!  And the person approving it does little review, knowing they have no meaningful way to validate the accuracy.

While the backlog aggregation will always have a level of inaccuracy, few will argue that it is any less accurate than the practices employed by their timesheet submitters (unless you’re a law firm steeped in the 6-minute charging increment).  And at this level, your CFO should realize that any variation is not material to the overall investment and is accepted under GAAP (Generally Accepted Accounting Principles).

Moving from Team to Train

On the surface, life gets a little more complex moving from team to train.  You have lots of people in supporting roles not necessarily working on specific features.  Again, however, it’s not all doom and gloom when you look at prevailing practices.

In many an IT organization, pre-Agile daily rates attract a “tax” used to cover the costs of those less directly attributable such as management, governance, QA teams and the like.  Every project they’ve ever estimated has attracted a percentage surcharge (or a series thereof).

In the same vein, we can calculate a “burdened run rate” for the teams.  We do this by taking the burn rate for every member of the ART not associated with a delivery team, summing those costs, and distributing the total across the team burn rates.  In theory, these roles exist because the teams couldn’t deliver value without them – so they must be contributing in some fashion.  Consider the following example:
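The numbers here are hypothetical – four delivery teams and $80K/sprint of supporting roles (RTE, Product Management, architects and so on) – but they illustrate both distribution options:

```python
# Fold an ART's support cost into "burdened" team burn rates, either
# as an equal share per team or proportionally to each team's burn rate.

team_burn = {"Team 1": 100_000, "Team 2": 110_000,
             "Team 3": 90_000, "Team 4": 100_000}
support_cost = 80_000  # sum of supporting-role burn rates per sprint

def burdened_flat(teams, support):
    per_team = support / len(teams)            # equal share per team
    return {t: rate + per_team for t, rate in teams.items()}

def burdened_proportional(teams, support):
    total = sum(teams.values())                # share scales with burn rate
    return {t: rate + support * rate / total for t, rate in teams.items()}

print(burdened_flat(team_burn, support_cost))
# {'Team 1': 120000.0, 'Team 2': 130000.0, 'Team 3': 110000.0, 'Team 4': 120000.0}
```

Under the flat approach each team carries an extra $20K of burden; under the proportional approach the larger teams carry proportionally more.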


Support costs can be distributed either proportionally to burn rate or at a flat per-team rate (the choice is usually based on discussion with Finance).



This becomes more nuanced in the case of a support team that does directly attributable work.  The classic example is the System Team.  They should be spending a certain percentage of their capacity in general support and the rest building enabler features (hopefully DevOps enabler features).  In this case, we can use the team-level backlog aggregation approach illustrated above, provided their support work is clearly categorized so we know which percentage of their cost to distribute into burdened burn rates and which to attribute directly to features.

All that remains is for our supporting staff to timesheet against a temporary “I worked on this ART WBS”, and we have the means to attribute our costs at the ART level just as we did for the team.

We get one other thing for free.  I like the word "Burdened" when it comes to burn rates.  There's usually a world of potential waste to be eradicated in burden costs.  Calculating some heuristics allows your CFO to start asking some rather pointed questions about whether ARTs are really "running lean".

Conclusion

I work with one small client who has none of the “enterprise fiscal responsibilities” of most SAFe implementations.  In theory, they have no need for any of the above discipline and in fact ran their early implementation without it.  But then they wanted to start analyzing cost/benefit on the features they were building.  In fact, it was the delivery folks who wanted to know the answers so they could change up the cost/benefit conversations when feature requests came in.

I don’t think it matters how “ideally Lean or Agile” you are, if you’re in the type of enterprise using SAFe you will need to be able to allocate your costs across Opex and Capex, and need to analyze your Feature costs to tune your investment strategy and provide data to your improvement initiatives.  The techniques illustrated in this article require good backlog discipline and some walking through to get blessing from your Finance departments, but they’re far from rocket science.  And best of all, they work regardless of whether you’re project funded, capacity funded, or some combination of the two!

Because, in the end, my experience with Leaning up governance comes down to one thing: until you have a viable alternative that enables the organization to fulfil its fiscal and governance responsibilities, you’ll never dislodge the onerous, wasteful practices of the past.

Friday, February 15, 2019

Dealing with Unplanned and/or BAU work in SAFe

In the Agile world in general, we have long preached the move from “Project mindset” to “Product mindset”.   “Wouldn’t the world be simpler if we just talked about work instead of projects and BAU?” is a mantra on many an agilist’s lips. 

Whilst the notion of forming teams and trains that “just do the most important work regardless of its nature” is a great aspiration, it comes with a number of caveats:

  • Funding and capitalization are generally significantly different for the two
  • Planning and commitment are difficult when some (or much) of the team’s work is unplanned

Enterprises have typically solved for the problem through structural separation.  The first step into Agile is often to move from separate “Plan”, “Build” and “Run” structures to separate “Plan and Build” and “Run” structures.  Projects are fed through “Plan and Build”, then after some warranty period transitioned to “Run”.  Funding is separate, and “Run” is driven more by SLAs than plans.

A truly product-oriented mindset requires the establishment of teams and ARTs that can “Plan, Build and Run”, and this post will tackle in-depth the issue of planning and commitment and introduce some tools for tackling the funding side of the equation.

Funding

I’ll tackle the topic of funding in greater detail in a future post, but the short version follows.  If backlog items are categorized, the categories can be mapped to funding constructs.  We can then take the burn rate for a team and the percentage of its capacity dedicated to each funding construct, and allocate funding accordingly.
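A minimal sketch of that mapping follows.  The category names and the Capex/Opex assignments are illustrative only – your Finance department owns the real mapping (for example, exploration work typically can’t be capitalized):

```python
# Map backlog categories to funding constructs, then split the team's
# burn rate by the capacity share of each category.

FUNDING_MAP = {"Feature": "Capex", "Exploration": "Opex",
               "Production Defect": "Opex", "BAU": "Opex"}

def fund_split(burn_rate, capacity_share_by_category):
    out = {}
    for category, share in capacity_share_by_category.items():
        construct = FUNDING_MAP[category]
        out[construct] = round(out.get(construct, 0) + burn_rate * share, 2)
    return out

print(fund_split(100_000, {"Feature": 0.7, "Production Defect": 0.1,
                           "Exploration": 0.1, "BAU": 0.1}))
# {'Capex': 70000.0, 'Opex': 30000.0}
```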

Planning and Commitment

Both the PI cadence of SAFe and the Sprint cadence of Scrum seem to invalidate the incorporation of BAU.  After all, if we fix our Feature priorities for 8-12 weeks in SAFe and our Story priorities for 2 weeks in Scrum how do we deal with the unplanned?

Known BAU work can be represented by planned backlog items, but the answer to unplanned work lies in the effective utilization of Capacity Allocation.  We can reserve a given percentage of the team’s (or train’s) capacity for unplanned work, and plan and commit based on the remaining capacity.

Team-level Illustration: Production Defects

One of the first benefits we find with persistent teams is that we can feed production defects back to the team responsible for introducing them.  This provides them with valuable feedback, typically dramatically improving quality. 

We might reserve 10% of team capacity to cater for this.  Thus, if the team’s velocity is 40 they would only plan to a velocity of 36 and reserve 4 points for production defects. 

Mechanically, the following occurs:

  • If less than 4 points of production defects arrive, the team pulls forward work from the following sprint.
  • If more than 4 points of defects arrive, the Product Owner makes an informed decision: defer new feature work or defer low-priority defects.
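The mechanics above can be sketched as follows – the numbers come from the example, and the decision logic is deliberately a simplification of the Product Owner’s judgment call:

```python
# Reserve a share of velocity for unplanned work, then decide each
# sprint what to do with the variance against that reserve.

def plan_capacity(velocity, reserve_pct):
    reserved = round(velocity * reserve_pct)
    return velocity - reserved, reserved   # (plannable, reserved)

def sprint_decision(defect_points, reserved):
    if defect_points <= reserved:
        return f"pull forward {reserved - defect_points} points from next sprint"
    return f"PO decides: defer {defect_points - reserved} points of features or defects"

plannable, reserved = plan_capacity(40, 0.10)
print(plannable, reserved)  # 36 4
print(sprint_decision(2, reserved))
```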

My preferred implementation of this technique is slightly different.  A number of times, we have reserved the 10% for a combination of Production Defects and Innovation.  If the team has shipped clean code, they get to work on their innovation ideas rather than pulling forward work!

ART-level illustration: BAU work

When staffing ARTs, we often find that some (or many) of the key staff are only available “if they bring their BAU work with them”.  In these cases, we plan the known BAU work, apply a PI-level capacity allocation based on the percentage of capacity we expect “unplanned BAU” to consume, and withhold that capacity when planning out the PI.

Dealing with fluctuations in unplanned work levels at the ART/PI level is a little more consequential.  Whilst the sprint-to-sprint mechanism of the production defect illustration still applies, we need to be monitoring for potential impact on PI objectives.

  • If less than the expected amount of unplanned work arrives for a team, we can either use the spare capacity to absorb work from other teams struggling with their PI objectives or pull forward Features from future PIs.
  • If more than the expected amount arrives, we monitor the impact on committed objectives.  We can absorb a certain amount by sacrificing capacity allocated to stretch objectives, but if committed objectives are at risk this should trigger a management decision: defer or deflect the unplanned work, or accept a compromised committed objective because the unplanned work is the more significant.

Discipline is a must

Applying these techniques will quickly run into a challenge.  Teams are often sloppy with BAU/unplanned work.  They “just do it”, viewing the effort of creating, sizing and running backlog items for it as unnecessary overhead.  This leaves us without the visibility required for the deliberate, proactive decision making illustrated above.  We end up at the close of the Sprint or PI somewhat embarrassingly apologizing for missing a commitment “because BAU was more than expected”, without any hard data to back it up and, even more importantly, without having given the Product Owner/Product Manager/Business Owners the opportunity to intervene and deflect the unplanned work so the commitment could be maintained.

Further, I find most teams dramatically underestimate the capacity consumed by BAU work.  We’ve routinely worked with teams who set 30% of their capacity aside for BAU, then – when they’ve finally missed enough objectives to buy into actually tracking their BAU work – find it to be 50-60%.

However, the true benefit of discipline goes further – the data generated is a goldmine.


Reaping the Benefit of Discipline

Whilst the first benefit of discipline is obviously that of gaining an accurate understanding of your capacity and being able to more confidently make and keep commitments, exponential gains can be realized once you start to analyze the data generated.  A key first step is developing an awareness of failure demand and value demand.

Failure Demand vs Value Demand

“Failure demand is demand caused by a failure to do something or do something right for the customer” – John Seddon

The first illustration that was given to me for failure demand many years ago was in the context of call centers.  It’s the 2nd and 3rd phone call you have to make because your issue wasn’t fully resolved on the first call.   If we take a typical agile team or ART, we can find many examples:

  • A late-phase defect is caused by failure to “build quality in”.
  • A production defect is caused by failure to deploy a quality product.
  • A request for information is caused by failure to have provided that information previously or failure to have made the requester aware of where the information is published.
  • An issue is often caused by failure to effectively mitigate a risk.
  • Time spent issuing reminders or nagging is failure demand, as more effectively establishing the awareness of the “why” and clearly setting the expectation would have avoided it.
  • Managing the politics of a missed commitment results from both failure to meet the commitment and failure to effectively manage the possibility that the commitment would be compromised.

“Value demands are demands from customers that we ‘want’, the reason we are in business” – John Seddon

Value demand for teams and ARTs should be obvious – the features and stories the teams are working on!  However, this can become a little more nuanced very quickly:

  • Is work done on an improvement initiative value demand?  Our customer probably didn’t directly ask for it.  In fact, many improvement initiatives are effectively failure demand as they are driven by addressing previous failures.
  • A great deal of BAU/Unplanned work is falsely perceived as value demand.  “I run this script or extract every morning”, “We produce and consolidate this report every month” are all great examples.  In theory someone values the result of the script or extract, and values the report – but the need to dedicate capacity to it results from a failure to automate it, or failure to fix a broken process.

Applying the Insights from Demand Patterns

Assuming we’ve had the discipline to channel all demand on a team through their backlog, and the further discipline to categorize it appropriately as failure or value demand, we can now start to drive significant improvement on the following basis:

  • If I reduce failure demand, I have more capacity to devote to value demand
  • If I find a more effective way to respond to value demand, I have more capacity to devote to value demand
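Assuming each backlog item carries a demand-type tag when created (the item shape below is hypothetical), the capacity split falls out of a simple aggregation:

```python
from collections import Counter

# Every demand on the team flows through the backlog, tagged as
# value or failure demand at creation time.
items = [
    {"points": 8, "demand": "value"},    # new feature story
    {"points": 5, "demand": "value"},
    {"points": 3, "demand": "failure"},  # production defect
    {"points": 2, "demand": "failure"},  # request for information
    {"points": 2, "demand": "failure"},  # manual monthly report
]

totals = Counter()
for item in items:
    totals[item["demand"]] += item["points"]

total_points = sum(totals.values())
shares = {demand: points / total_points for demand, points in totals.items()}
print(shares)
# {'value': 0.65, 'failure': 0.35}
```

Trend this sprint over sprint and the improvement targets pick themselves: every point of failure demand removed is a point returned to value demand.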

In “Four Types of Problems: from reactive troubleshooting to creative innovation”, Lean expert Art Smalley defines a hierarchy of problem types and accompanying resolution strategies.  Three of these are pertinent to this situation:

  • Type 1: Troubleshooting – “Reactive problem solving based upon quick responses to immediate symptoms”.
  • Type 2: Gap from Standard – “Structured problem solving focused on problem definition, goal setting, root cause analysis, counter-measures, checks, standards and follow-up activities”.
  • Type 3: Target Condition – “Continuous improvement that goes beyond existing performance of a stable process or value stream.  It seeks to eliminate waste, overburden, unevenness, and other concerns systemically, rather than responding to one specific problem”.

When you form a good Agile team, their ability to jump to each other’s aid, rally around problems and move from individual work to teamwork tends to exhibit a lot of troubleshooting – particularly in the case of unplanned work.  Good troubleshooting skills are fundamental to any team.  As Smalley comments, “to address each [issue] with a deeper root cause problem-solving approach would require tracking and managing a problem list that runs, literally, hundreds of miles long.  No organization can hold that many problem-solving meetings … in an efficient manner”.

Our response to most failure demand is to apply troubleshooting techniques.  However, while these will help us survive the prevailing conditions they won’t help us change them.  Change requires the use of Type 2 problem solving techniques.  We need to leverage our data to identify recurring trends, and act to remove the root cause of the failure demand.  Smalley devotes great attention to problem definition, and opens with two pieces of critical advice when framing the problem for attention:

  • “The first step is to clarify the initial problem background using facts and data to depict the gap between how things should be (current standard) versus how they actually are (current state).”
  • “Why does this problem deserve time and resources?  How does it relate to organizational priorities?  Strive to show why the problem matters or else people might not pay attention or might question the problem-solving effort.”

As we are successful with the reduction of failure demand with our Type 2 activities, we can move on to Type 3 problem solving, driving activity to establish new target conditions.  If we accurately understand the capacity being devoted to various types of value demand we can more accurately assess whether the value being generated justifies the capacity being consumed – triggering informed continuous improvement.   An enterprise PMO we have been working with provided a wonderful example recently:

They had historically applied a QA process to every project the organization ran.  This, of course, was characterized as “BAU” work.  It had to be done every time a project passed through a particular phase in its lifecycle.  As they gathered data on how much of their capacity it actually consumed, they started to question the value proposition.  How regularly did the QA check actually expose an issue?  What were the typical consequences of the issues exposed?  What other high-value discretionary activities were unable to proceed due to capacity constraints?  Eventually, they were able to make an informed decision to move to a sampling approach, freeing up capacity for the high-value initiatives they had previously been unable to pursue.

Conclusion

Capacity allocation allows us to deal with BAU/Unplanned work, but my experience has been that it never works well without the accompanying discipline of actually channeling that work formally through your backlog.  It might require some creativity to make it meaningful (e.g. a single backlog item for the sprint representing the capacity devoted to a daily BAU activity).  Beginning with the reduction of failure demand in BAU/Unplanned work will both improve performance and free capacity which can then be devoted to true continuous improvement initiatives.

However, the usefulness of the Sprint or PI cadence-driven cycle seems to fall apart at the point where more than 30-40% of capacity is being reserved for unplanned work.  Some form of cadence-driven alignment cycle will always be valuable, but adaptation from the standard events and agendas will be necessary to make them meaningful, and Kanban is far more likely to provide a useful lifecycle model.  The ARTs I have worked with in this situation have tended to wind up with shortened planning events far more focused on “priority alignment” than detailed planning.

Above all, the benefit comes from the mindfulness generated in the presence of data reflecting “where you really spend your time” as opposed to “what your value priorities are”, and the accompanying discipline of acting on that data to achieve better alignment.