
Wednesday, January 25, 2017

Revamping SAFe's Program Level PI Metrics Part 3/6: Culture

"Organizational culture can be a major asset or a damaging liability that hinders all efforts to grow and become more successful. Measuring and managing it is something few companies do well." - Mark Graham Brown, Business Finance Magazine



Introduction

After exploring the Business Impact quadrant in Part 2 of this series, our focus now moves to Culture. I have been involved with over 30 release trains since I started working with SAFe in early 2012, and I have come to the passionate belief over that time that positive movement in culture is the most accurate predictor of sustained success.

While most agree that it is impossible to truly measure culture, there are certainly indicators that can be measured which help us in steering our path.

In selecting the mix of measures proposed, I was looking for a number of elements:
  • Are our people happy?
  • Are our stakeholders happy?
  • Are we becoming more self-organizing?
  • Are we breaking down silos?

The basic metrics address the first 2 elements, while the advanced metrics tackle self-organization and silos.

Basic Definitions



Basic Metrics Rationale

Team Net Promoter Score (NPS) - "Are our people happy?"

In his book The Ultimate Question 2.0, Fred Reichheld describes how many companies also apply NPS surveys to their employees - altering the question from "how likely are you to recommend [Company Name]?" to "how likely are you to recommend working for [Company Name]?".

My recommendation is to frame the question as "how likely are you to recommend being a member of [Release Train name]?". SurveyMonkey provides a very easy mechanism for running the surveys.
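However the survey is administered, the arithmetic behind NPS is simple: the percentage of promoters (scores of 9-10) minus the percentage of detractors (scores of 0-6). A minimal Python sketch, with an illustrative response set (not real survey data):

```python
def nps(scores):
    """Compute Net Promoter Score from 0-10 survey responses.

    Promoters score 9-10, detractors 0-6; NPS is the percentage of
    promoters minus the percentage of detractors (range -100 to +100).
    """
    if not scores:
        raise ValueError("no responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Example: a 10-person team survey
print(nps([10, 9, 9, 8, 8, 7, 7, 6, 5, 3]))  # 3 promoters, 3 detractors -> 0
```

Note that the passives (7-8) still dilute the score, which is why a team can feel "pretty happy" and still sit at an NPS of zero.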

For a more detailed treatment, see this post by my colleague +Em Campbell-Pretty. Pay particular attention to the value of the verbatims and the inclusion of vendor staff in the survey – they’re team members too!

As a coach, I often ponder what “mission success” looks like. What is the moment when the ART I’ve been nurturing is set for greatness and my job is done? Whilst not enough of my ARTs have adopted the team NPS discipline to give me great data, the data I do have has led me to believe the signal is Team NPS moving above +20.

Business Owner Net Promoter Score (NPS) - "Are our stakeholders happy?"

This is a more traditional treatment of NPS based on the notion that business owners are effectively internal customers of the ART. The question is framed as "how likely are you to recommend the services of [Release Train Name] to a friend or colleague?"

If you’re truly serious about the Lean mindset, you will consider your vendors when identifying the relevant Business Owners for this metric. There is vendor involvement in virtually every ART I work with; team members sourced from vendors are a key part of our culture, and vendor management needs to be satisfied that the model is working for their people and their organization.

Staff Turnover %

In one sense, this metric could be focused on "Are our people happy", however I believe it is more holistic in nature. Staff turnover can be triggered either by people being unhappy and leaving, or by lack of organizational commitment to maintaining long-lived train membership. Either will have negative impacts.
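The post doesn't prescribe a formula for turnover, but a common convention is leavers divided by average headcount over the measurement period (here, a PI). A small illustrative sketch - the function name and the averaging convention are my assumptions:

```python
def turnover_pct(leavers, headcount_start, headcount_end):
    """Staff turnover for a period: leavers as a percentage of
    average headcount over that period.

    One common HR convention; the metric definition itself does not
    mandate a particular formula.
    """
    avg_headcount = (headcount_start + headcount_end) / 2
    return round(100 * leavers / avg_headcount, 1)

# Example: 4 people left a train that shrank from 80 to 76 members
print(turnover_pct(4, 80, 76))  # 4 leavers on an average of 78 -> 5.1
```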

Advanced Definitions


Advanced Metrics Rationale

Developer % (IT) - "Are we becoming more self-organizing?"

When an ART is first formed, it classically finds “a role in SAFe” for all relevant existing IT staff (often a criticism levelled by the anti-SAFe crowd). However, as the train matures and evolves, the people might stay but their activities change. People who have spent years doing nothing but design start writing code again. Great business analysts move from the IT organisation to the business organisation. Project managers either return to a practical skill they had prior to becoming project managers or roll off the train. In short, the only people who directly create value in software development are software developers; all other IT roles are useful only insofar as they enable alignment (and the greater our self-organisation maturity, the less the need for dedicated alignment functions). If we seek true productivity gains, we seek a greater proportion of doers.

One of my customers started using this metric to measure progress on this front and I loved it. One of the early cost-saving aspects of agile is reduction in management overhead, whether it be the instant win of preventing duplication of management functions between the implementing organization and their vendors or the conversion of supervision roles (designers, project managers) to contribution roles.

Obviously, this is a very software-centric view of the ART. As the “Business %” metric will articulate, maturing ARTs will tend to deliberately incorporate more people with skills unrelated to software development. Thus, this measure focuses on IT-sourced Train members (including leadership) who are developers.

As a benchmark, the (Federal Government) organization who inspired the incorporation of this metric had achieved a ratio of 70%.

Business % - "Are we breaking down silos?"

While most ARTs begin life heavily staffed by IT roles, as the mission shifts towards global optimization of the “Idea to Value” life-cycle they discover the need for more business-related roles. This might be the move from “proxy Product Owners” to real ones, but equally powerfully it sees the incorporation of business readiness skill-sets such as business process engineering, learning and development, and marketing.

Whilst the starting blueprint for an ART incorporates only one mandatory business role (the Product Manager) and a number of recommended business roles (Product Owners), evolution should see this mix change drastically.

The purpose of this measure could easily have been written as "Are we achieving system-level optimization?", however my personal bent for the mission of eliminating the terms "business" and "IT" led to the silo focus in the question.
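Both advanced ratios are simple headcount arithmetic over the train roster. A sketch using a hypothetical roster (the names, roles, and data shape are illustrative, not from the post):

```python
# Illustrative train roster: (name, source_org, role) tuples.
roster = [
    ("Ana", "IT", "developer"), ("Ben", "IT", "developer"),
    ("Cy", "IT", "developer"), ("Dee", "IT", "architect"),
    ("Evi", "IT", "scrum master"), ("Flo", "business", "product owner"),
    ("Gus", "business", "product manager"), ("Hal", "business", "process engineer"),
]

# Developer %: developers as a share of IT-sourced members only.
it_members = [m for m in roster if m[1] == "IT"]
developer_pct = 100 * sum(1 for m in it_members if m[2] == "developer") / len(it_members)

# Business %: business-sourced members as a share of the whole train.
business_pct = 100 * sum(1 for m in roster if m[1] == "business") / len(roster)

print(f"Developer % (IT): {developer_pct:.0f}%")  # 3 of 5 IT members -> 60%
print(f"Business %: {business_pct:.0f}%")         # 3 of 8 members -> 38%
```

The key design point, per the definitions above, is that the two denominators differ: Developer % deliberately excludes business-sourced members so that healthy growth in Business % does not mask stagnation on the IT side.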

Conclusion

When it comes to culture, I have a particular belief in the power of a change in language employed to provide acceleration. A number of ARTs I coach are working hard to eliminate the terms “Business” and “IT” from their vocabulary, but the most powerful language change you can make is to substitute the word “person” for “resource”!


Series Context

Part 1 – Introduction and Overview
Part 2 – Business Impact Metrics
Part 3 – Culture Metrics (You are here)
Part 4 – Quality Metrics
Part 5 – Speed Metrics 
Part 6 – Conclusion and Implementation

"Instead of trying to change mindsets and then change the way we acted, we would start acting differently and the new thinking would follow." - David Marquet, Turn the Ship Around!

Saturday, January 14, 2017

Revamping SAFe's Program Level PI Metrics Part 1/6 - Overview

"Performance of management should be measured by potential to stay in business, to protect investment, to ensure future dividends and jobs through improvement of product and service for the future, not by the quarterly dividend" - Deming

Whilst the Scaled Agile Framework (SAFe) has evolved significantly over the years since its inception, one area that has lagged is that of metrics. Since the Agile Release Train (ART) is the key value-producing vehicle in SAFe, I have a particular interest in Program Metrics - especially those produced on the PI boundaries.

In tackling this topic, I have numerous motivations. Firstly, the desire to acknowledge that it is easier to critique than create. I have often harassed +Dean Leffingwell over the need to revamp the PI metrics, but not until recently have I developed a set of thoughts which I believe meaningfully contribute to progress. Further, I wish to help organisations avoid the all-too-common traps of mistaking velocity for productivity or simply adopting the default “on time, on budget, on scope” measures inherited from phase-gate delivery. It is one thing to tout Principle 5 - Base milestones on objective evaluation of working systems, and quite another to provide a sample set of measures which offer a convincing alternative to traditional milestones and measures.

Scorecard Design

It is not enough to look at value alone. One must take a balanced view not just of the results being achieved but of the sustainability of those results. In defining the PI scorecard represented here, I was in pursuit of a set of metrics which answered the following question:

"Is the ART sustainably improving in its ability to generate value through the creation of a passionate, results-oriented culture relentlessly improving both its engineering and product management capabilities?"

After significant debate, I settled on 4 quadrants, each focused on a specific aspect of the question above:

For each quadrant, I have defined both a basic and an advanced set of metrics. The basics represent “the essentials”, the bare minimum that should be measured for a train. However, if one desires to truly use metrics both to measure and to identify opportunities for improvement, some additional granularity is vital - and this is the focus of the additional advanced metrics.




Business Impact

Whilst at first glance this quadrant might look sparse, the trick is in the “Fitness Function”. Wikipedia defines it as “a particular type of objective function that is used to summarise, as a single figure of merit, how close a given design solution is to achieving the set aims”. Jeff Bezos at Amazon quite famously applied it, insisting that every team in the organization develop a fitness function to measure how effectively they were impacting the customer. It will be different for every ART, but should at minimum identify the key business performance measures that will be impacted as the ART fulfils its mission.
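One simple way to realise such a fitness function is a weighted average of measures normalised against their targets. This is purely an illustrative sketch - the measure names, targets, and weighting scheme are my assumptions, and every ART would choose its own:

```python
def fitness(measures, targets, weights):
    """Summarise several business measures as a single figure of merit.

    Each measure is normalised against its target (capped at 1.0, and
    framed so that higher is better) and combined as a weighted
    average; 1.0 means all targets are met.
    """
    score = sum(
        weights[k] * min(measures[k] / targets[k], 1.0) for k in targets
    )
    return score / sum(weights.values())

# Hypothetical ART: conversion rate matters twice as much as cycle-time gain.
measures = {"conversion_rate": 0.03, "cycle_time_gain": 0.5}
targets = {"conversion_rate": 0.04, "cycle_time_gain": 0.5}
weights = {"conversion_rate": 2, "cycle_time_gain": 1}
print(round(fitness(measures, targets, weights), 2))  # (2*0.75 + 1*1.0)/3 -> 0.83
```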

Culture

The focus in culture is advocacy. Do our people advocate working here? Do our stakeholders advocate our services? Are we managing to maintain a stable ART?

Quality

For quality, our primary question is “are we building quality in?” Unit test coverage demonstrates progress with unit test automation, while “Mean Time Between Green Builds” and “MTTR from Red Build” provide good clues as to the establishment of an effective Continuous Integration mindset. From there we look at late-phase defect counts and validation capacity to understand the extent to which our quality practices are “backloaded” - in short, how much is deferred to “end-to-end” feature validation and pre-release validation activities. And finally, we are looking to see incidents associated with deployments dropping.
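To make one of these concrete, “MTTR from Red Build” is just the average duration of red periods in the CI history. A small sketch with made-up timestamps (the data shape is an assumption; in practice this would be pulled from the CI server):

```python
from datetime import datetime, timedelta

# Illustrative CI history: (went_red, back_to_green) timestamp pairs.
red_periods = [
    (datetime(2017, 1, 3, 9, 0), datetime(2017, 1, 3, 9, 40)),   # 40 min red
    (datetime(2017, 1, 9, 14, 0), datetime(2017, 1, 9, 15, 30)),  # 90 min red
]

# Mean time to recover: average length of a red period.
mttr = sum((green - red for red, green in red_periods), timedelta()) / len(red_periods)
print(f"MTTR from red build: {mttr}")  # average of 40 and 90 minutes -> 1:05:00
```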

Speed

This quadrant is focused on responsiveness - how rapidly can our ART respond to a newly identified opportunity or threat? Thus, we start with Feature Lead Time - "how fast can we realise value after identifying a priority feature?". Additionally, we are looking for downward trends in time spent “on the path to production”, mean time to recover from incidents, and frequency of deployments as our DevOps work pays dividends.
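Feature Lead Time can be summarised as, say, the median days from identification to release. A sketch with made-up feature dates (using the median rather than the mean is my choice, to dampen the effect of the occasional long-running feature):

```python
import statistics
from datetime import date

# Illustrative feature records: (identified, released) date pairs.
features = [
    (date(2017, 1, 2), date(2017, 2, 20)),
    (date(2017, 1, 9), date(2017, 3, 1)),
    (date(2017, 1, 16), date(2017, 2, 27)),
]

lead_times = [(done - start).days for start, done in features]
print("Median feature lead time (days):", statistics.median(lead_times))  # 49
```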

Conclusion

In parts 2 through 5 of this series, I will delve into each quadrant in turn, exploring the definitions of and rationale for each measure and in part 6 wrap it all up with a look at usage of the complete dashboard.

Series Context

Part 1 – Introduction and Overview (You are here)
Part 2 – Business Impact Metrics
Part 3 – Culture Metrics
Part 4 – Quality Metrics
Part 5 – Speed Metrics 
Part 6 – Conclusion and Implementation 


"Short term profits are not a reliable indicator of performance of management. Anybody can pay dividends by deferring maintenance, cutting out research, or acquiring another company" – Deming