12/20/2022

You Can’t Measure What You Can’t See: Getting to know the 4 Metrics of Software Delivery Performance

Written by
Copado Team

The title says it all, doesn't it? You can't optimize what you don't measure, and you can't measure what you can't see. One of the central aims of DevOps is to continually improve collaboration and processes in order to increase the quality and speed of innovation, thereby delivering more value to customers faster. And the only way to continually improve your processes is to measure your processes, team performance, and value flow. When you can visualize and quantify how your team is operating, you gain the insights you need to reduce waste and drive improvements.

One effective way to do this is through value stream maps: visual depictions of each stage of your process that quantify lead time, change failure rate, and other key performance metrics. Viewed holistically, the value stream map documents the end-to-end flow of delivering value, which helps you quantify the full resource investment required to build and deliver a feature. By then quantifying performance at each stage of this value flow, you can assess where there may be gaps, inefficiencies, or breakdowns in your processes. When teams address these issues, they improve both the specific performance area and the entire value stream.

That all sounds great in theory, but how do you get started with documenting a value stream map in order to actually realize this potential value? Let’s take a look.

The Value of Visibility: Building Your Value Stream Map

Mapping your value stream and tracking the key metrics within it helps to quantify a team’s performance and pinpoint areas of improvement.

All of this can be done manually. In fact, putting “pen to paper” is a great way to start conceptualizing what processes you need to outline and eventually automate. Getting Started with Value Stream Maps will give you more background on the genesis of value stream maps, how to get your thoughts down on paper or a whiteboard, and how you can manually track each piece of the process. But to get started, you can follow these steps to make sure your stream is as accurate as possible:

  • Identify which team’s process you are mapping. Different teams may have different processes, so clarity is key!
  • State which customer the flow is serving, and what value is to be delivered (i.e. what is the team’s main objective?). Value could range from increasing revenue by cutting down time to close (if your customer is a sales team) to reducing case resolution time (if the customer is a support team). Be specific!
  • Define the process you are mapping. Is it an investigation process? The user story lifecycle?
  • List the people and roles that contribute to this process.
  • Describe each stage of the delivery process. Map them out in the order they occur, from left to right. If there are any parallel processes, note those as well.
  • Taking it a level deeper, within each stage, identify the “process blocks” that occur. Process blocks are parallel work types within a stage that are not interdependent, such as manual and automated testing or frontend and backend development.
  • Identify where work “waits,” what handoff looks like, and how work is classified as waiting, blocked, or done. This will help you list out all the statuses a user story can have. Knowing each status is important, because you can start to figure out the amount of time each distinct piece of work spends in any given state and stage.
  • Finally, aggregating these metrics for each stage, process block, and the stream as a whole allows you to see the bigger picture and quantify waste and forecast ROI for making improvements.
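The aggregation in that final step can be sketched as a small data model. The example below is a minimal, hypothetical illustration (the stage names and hour figures are invented, and this is not Copado's data model): summing active and waiting time per stage yields total lead time and flow efficiency for the stream.

```python
from dataclasses import dataclass

@dataclass
class Stage:
    """One stage in the value stream, with time figures in hours."""
    name: str
    active_hours: float   # time spent actually working on the item
    waiting_hours: float  # time the item sat idle (queued, blocked, in handoff)

def total_lead_hours(stages):
    """End-to-end elapsed time across the whole stream."""
    return sum(s.active_hours + s.waiting_hours for s in stages)

def flow_efficiency(stages):
    """Share of total elapsed time spent on value-adding work."""
    active = sum(s.active_hours for s in stages)
    return active / total_lead_hours(stages)

# Hypothetical stream for a single user story
stream = [
    Stage("Development", active_hours=16, waiting_hours=4),
    Stage("Testing (manual + automated)", active_hours=6, waiting_hours=18),
    Stage("Release to production", active_hours=2, waiting_hours=34),
]

print(f"Total lead time: {total_lead_hours(stream)} h")   # 80 h
print(f"Flow efficiency: {flow_efficiency(stream):.0%}")  # 30%
```

Even a back-of-the-envelope model like this makes waste visible: in the invented numbers above, 70% of elapsed time is spent waiting, which points at handoffs and queues, rather than the work itself, as the improvement target.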

While “pen-to-paper” value stream maps are a great start, they only capture a single moment in time and they’re highly reliant on estimates. For this exact reason, the IT industry is seeing the rise of value stream management platforms. An automated tool like Copado’s DevOps 360 Value Stream Maps allows you to visualize your specific value stream through an easy-to-configure UI. The tool allows you to select specific metrics to track at each stage, and these metrics are tied directly to user stories to provide you a real-time, accurate view of how work is flowing (or not flowing!) through your value stream.

Once you’ve thought through and outlined each of these areas, you’ll be able to physically see the invisible processes that comprise your development lifecycle and how teams deliver value to customers. But that’s only the first part of the puzzle. The real value comes from digging into four main indicators of performance and understanding, monitoring, and influencing how they change over time.

The 4 Metrics of Software Delivery Performance

The benefits of value stream mapping go beyond process visualization. Value stream maps also track changes over time, quantifying trends and allowing you to better identify anomalies. Together, process visualization and performance trends help you quickly identify bottlenecks, inefficiencies, and risk.

So, what are those all-important metrics? DORA (DevOps Research and Assessment) defines four metrics to assess across the value stream:

  1. Lead Time
  2. Deployment Frequency
  3. Change Failure Rate
  4. Mean Time to Recovery

While lead time and deployment frequency are indicators of the velocity of innovation, change failure rate and mean time to recovery are indicators of reliability and trust. Together, they paint a picture of how effective a development team and the processes they follow are, as well as how and where they can improve to bring more value to the organization as a whole.
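All four metrics reduce to simple arithmetic over a deployment log. The sketch below uses an invented four-deployment log; the timestamps, record shape, and field names are assumptions made for illustration, not Copado's or DORA's data model.

```python
from datetime import datetime
from statistics import mean

# Hypothetical deployment records: when dev finished, when the change shipped,
# whether it disrupted production, and (if so) when service was restored.
deployments = [
    {"done": datetime(2022, 12, 1, 9),  "shipped": datetime(2022, 12, 2, 9),
     "failed": False, "recovered": None},
    {"done": datetime(2022, 12, 5, 9),  "shipped": datetime(2022, 12, 7, 9),
     "failed": True,  "recovered": datetime(2022, 12, 7, 12)},
    {"done": datetime(2022, 12, 12, 9), "shipped": datetime(2022, 12, 14, 9),
     "failed": False, "recovered": None},
    {"done": datetime(2022, 12, 19, 9), "shipped": datetime(2022, 12, 20, 9),
     "failed": False, "recovered": None},
]

def hours(delta):
    return delta.total_seconds() / 3600

# 1. Lead time: dev-complete -> production, averaged across deployments
lead_time_h = mean(hours(d["shipped"] - d["done"]) for d in deployments)

# 2. Deployment frequency: releases per week over the observed window
window_weeks = hours(deployments[-1]["shipped"] - deployments[0]["shipped"]) / (24 * 7)
deploys_per_week = len(deployments) / window_weeks

# 3. Change failure rate: share of releases that disrupted production
change_failure_rate = sum(d["failed"] for d in deployments) / len(deployments)

# 4. Mean time to recovery: average hours from failed release to restoration
mttr_h = mean(hours(d["recovered"] - d["shipped"]) for d in deployments if d["failed"])

print(lead_time_h, deploys_per_week, change_failure_rate, mttr_h)
# lead time 36.0 h, ~1.56 deploys/week, change failure rate 0.25, MTTR 3.0 h
```

The hard part in practice isn't the arithmetic; it's capturing accurate "done," "shipped," and "recovered" timestamps, which is exactly what tying metrics to user story statuses gives you.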

Let’s dive into more detail to learn how each impacts value delivery success.

Lead Time

Simply put, lead time is the time it takes to release a feature to production after development is complete. A piece of work (a feature or application, for example) only becomes valuable when the end-user can access it. Until that point, the value of the work cannot be realized. So, it becomes important to know how long work “sits around” in a completed state before a customer can begin to use it. If that time is considerably long, value delivery is delayed and ROI on that development effort is left on the table.

When it comes to lead time, remember: shorter is better. Delivering work to end users quickly enables faster feedback cycles for the product and development teams, allowing them to adapt quickly to users’ needs and changes in the market and ultimately deliver more features that meet those needs, faster. These feedback cycles are critical to staying relevant with a user group and to reducing the amount of work that goes unused because it either sat on the shelf too long or didn’t incorporate the most up-to-date feedback since the last release. The value of decreasing lead time is two-fold: not only do end-users get access to valuable features faster, but the continual relevance of future work means the product is more likely to meet their needs.

Deployment Frequency

Deployment frequency measures how often teams release work to production. As mentioned above, work only creates value when end-users can use it. If work is deployed and made available more often, it follows that more value is realized over time, which ultimately drives more significant business outcomes.  

Over time, deployment frequency should ideally increase. Teams that deploy more frequently generally see lower risk in each deployment and faster time to value for many reasons, which we’ll dive into in Part 2 of this series.

Change Failure Rate

Next up is change failure rate, which measures the percentage of production releases that result in a disruption to the business, and those disruptions can have very costly consequences. This measurement is a key indicator of quality: higher quality code and more rigorous testing are less likely to introduce breaking changes. Driving this number closer to zero over time should be one of your optimization objectives, as it helps mitigate the risk of exposing customers to bugs and failures and is essential for maintaining business value (and trust!) over time.

Mean Time to Recovery

Mean time to recovery indicates how long it takes your team to troubleshoot, fix, and/or roll back a deployment failure from production. A lower number is better here—if it takes teams considerable time to address failures, the business will undoubtedly suffer. Downtime of your systems not only causes low productivity internally, but it also affects how customers interact with your services. If this happens semi-frequently or for long stretches of time, it can begin to erode customer trust. All of these outcomes have financial impact, and that negative financial impact increases as time to recover increases. One goal should be to minimize downtime as much as possible, and working to optimize mean time to recovery is a key way to keep that metric in check.

Driving Outcomes through Value Stream Maps: Progressing through the Software Innovation Performance Matrix

While powerful on their own, all of these metrics come together to create a Software Innovation Performance Matrix that shows how your company compares to others overall. When compared in this way, it becomes clear that the highest performing companies balance both speed and quality. They deliver innovation more rapidly and with fewer errors than companies in the less effective tiers of this matrix. Most companies will fall primarily into the same category (elite, high performer, etc.) for each metric due to the interconnectedness of these measurements, but it is possible to be stronger in one area and weaker in others.
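Placing a measurement in the matrix amounts to bucketing it against benchmark thresholds. The sketch below classifies lead time only; the cut-offs are illustrative approximations loosely modeled on published DORA benchmarks, not official values, so substitute the current report's figures in practice.

```python
def lead_time_tier(lead_time_hours: float) -> str:
    """Bucket a lead-time measurement into a performance tier.

    Thresholds are illustrative, loosely modeled on published DORA
    benchmarks; replace them with the current report's values.
    """
    if lead_time_hours < 24:          # under a day
        return "elite"
    if lead_time_hours < 24 * 7:      # under a week
        return "high"
    if lead_time_hours < 24 * 30:     # under a month
        return "medium"
    return "low"

print(lead_time_tier(36))       # "high": a day and a half
print(lead_time_tier(24 * 45))  # "low": a month and a half
```

Running the same bucketing for each of the four metrics, per team, tells you which row of the matrix you occupy today and gives you a concrete target for reaching the next tier.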

 

[Image: The 4 Metrics of Software Delivery Performance - Copado]

 

There are multiple factors that influence the “numeric outcome” of each performance indicator across the value stream. Understanding where you stand by benchmarking your teams’ performance against your peers is only one part of the equation. The real value comes from taking that information, using it to identify areas of improvement (which of those metrics is lagging? Where is the waste? Is there an easily identifiable bottleneck?), and then acting on that area of opportunity to improve not only your teams’ work output but HOW they accomplish that output.

As you practice identifying these areas and working to address them, you’ll see real ROI from implementing value stream maps and will watch your teams progress toward “elite” DevOps innovation performance in each of the four key areas. Improvement is a continual process that comes to life in small, focused, discernible stages that build on one another. By using Copado’s Value Stream Maps to establish your starting point and measure improvements over time, you’ll be one step closer to the “top” and will capture real value for the business as a whole with each step.

Looking for more detail on how to identify areas of improvement and begin optimizing your process to realize these value gains faster? Stay tuned for Part 2, Driving Outcomes through Value Stream Maps: 4 Strategies for Progressing through the Software Innovation Performance Matrix. And if you don’t want to wait, we’d love to help you get started now. 

 

 
