2/18/2021
10 minutes

Driving Outcomes through Value Stream Maps: 4 Strategies for Progressing through the Platform Innovation Performance Matrix


In Part 1 of our two-part “You Can’t Measure What You Can’t See: Driving Outcomes through Value Stream Maps” series, we covered how to create a value stream map and took a high-level look at the four metrics to track at each stage of your value stream. In this installment, we’re going to dig into how you can turn this data into insights to increase your DevOps maturity. 

Innovation speed is the number one reason companies adopt DevOps. However, the Software Delivery Performance Matrix shows that speed isn’t the only benefit. In fact, the highest performing DevOps teams balance speed with quality, increasing not only how fast they work, but also how well they work.

The question becomes, “how do you optimize speed and quality in parallel?” And luckily, value stream maps give you the data you need to address both. The strategies detailed below will help you to get started on increasing both speed and quality in your development lifecycle. Remember, it's very important to focus on one area of improvement at a time. If you focus on everything, the focus is on nothing. While elite organizations have high performance across the board, they got there by making step-wise improvements over time. Treat your journey in the same way; marginal improvement in one area will ultimately have outsized impact on overall business outcomes.


Side note: If you haven’t mapped out your value stream, we recommend starting your optimization journey here—understanding your value stream is the essential first step to realizing speed and quality performance gains. Everything is theory until you have the data—without it, you can’t accurately identify inefficiencies, reduce waste, or drive improvement.

1: Reduce lead time.


Lead time—the time it takes to release a feature to production after development is complete—is the first indicator of speed. Remember, your team isn’t only delivering work, you’re delivering capabilities that increase business value, so the longer you delay releasing your work, the longer it takes the business to realize value.

Understanding your current lead time is step 1. Step 2 is benchmarking it against industry peers, your team’s previous performance, or even other development teams in your organization. These comparative metrics give you a baseline for where your team stands and also help you identify what lead time you should be able to achieve. The top 13% of Salesforce organizations have a lead time of less than one hour, while high performing teams have a lead time of less than one day. How do they do it?
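As a rough illustration, lead time can be computed directly from timestamps most teams already track. The story data below is entirely hypothetical:

```python
from datetime import datetime

FMT = "%Y-%m-%d %H:%M"

def lead_time_hours(dev_complete: str, released: str) -> float:
    """Hours between development completion and production release."""
    delta = datetime.strptime(released, FMT) - datetime.strptime(dev_complete, FMT)
    return delta.total_seconds() / 3600

# Hypothetical user stories: (development complete, released to production)
stories = [
    ("2021-02-01 09:00", "2021-02-01 17:00"),   # 8 hours
    ("2021-02-02 10:00", "2021-02-04 10:00"),   # 48 hours
    ("2021-02-03 14:00", "2021-02-03 15:30"),   # 1.5 hours
]

times = sorted(lead_time_hours(done, out) for done, out in stories)
median = times[len(times) // 2]
print(f"Median lead time: {median} hours")   # 8.0 -> within the "less than one day" tier
```

A median (rather than a mean) keeps one outlier story from distorting the picture of how the team typically performs.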

While there are many factors at play, two strategies for reducing lead time include:

  • Writing smaller user stories. DevOps experts recommend promoting work upstream at least daily to ensure this flow is steady, but if user stories are too big, developers can’t promote as frequently. Smaller, bite-sized chunks of work can typically be completed, reviewed, and tested faster because they are generally less complex and touch fewer related but tangential pieces of metadata at once. This makes the flow of work more consistent and digestible, thereby reducing time to value for any one user story.
  • Reducing the amount of stories developers are working on at once. We’ve all been told time and again that multitasking is far less effective than working on one task at a time. The same is true with development teams: developers should focus on finishing and delivering one user story before starting another. When developers focus on pushing one piece of work through rather than splitting their time between multiple user stories, they help to keep the entire development lifecycle moving forward.
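The effect of limiting work in progress can be sketched with a toy model (the story sizes and schedules here are invented for illustration): two stories that each need two units of effort deliver value sooner on average when worked sequentially than when a developer splits time between them.

```python
def avg_finish_time(schedule):
    """schedule: which story is worked on in each unit of time, in order.
    A story's finish time is the last slot spent on it."""
    finish = {}
    for slot, story in enumerate(schedule, start=1):
        finish[story] = slot
    return sum(finish.values()) / len(finish)

sequential = ["A", "A", "B", "B"]   # finish story A before starting B
multitask  = ["A", "B", "A", "B"]   # alternate between both stories

print(avg_finish_time(sequential))  # 3.0: A done at t=2, B at t=4
print(avg_finish_time(multitask))   # 3.5: A done at t=3, B at t=4
```

Both schedules finish everything at t=4, but the sequential schedule ships story A two slots earlier, so the business starts realizing value sooner.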

2: Increase deployment frequency.


Deployment frequency is the second indicator of speed, as it measures how often a team releases work to production. Higher performing organizations are far more likely to integrate developers’ changes on an ongoing basis, and when those changes are integrated more frequently, they are likely to be less complex and less costly. 

However, if teams just began moving faster and merging code into production more often, quality could plummet. Instead, increasing deployment frequency is best accomplished with guardrails provided by DevOps methodologies and tools.

Strategies that can contribute to increased deployment frequency include:

  • Release smaller batches. You can think of deployment frequency as somewhat analogous to lead time: where lead time can be decreased by working on smaller units of work (user stories) and fewer at a time, deployment frequency can be increased by releasing smaller batches of work at each deployment. These smaller batches allow teams to achieve more effective test coverage and to roll back changes more easily if necessary.
  • Automate testing and deployments. When processes are repeatable and follow a prescribed set of steps each time, they should be automated rather than manual. By moving to automated processes, your team will be able to move faster while reducing potential for human error.
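Deployment frequency falls straight out of your release log. A minimal sketch, using made-up deployment dates:

```python
from collections import Counter
from datetime import date

# Hypothetical production deployment dates pulled from a release log
deploys = [date(2021, 2, d) for d in (1, 1, 3, 8, 10, 12, 12, 15)]

# ISO week number -> number of deployments that week
per_week = Counter(d.isocalendar()[1] for d in deploys)
avg_per_week = sum(per_week.values()) / len(per_week)
print(f"Average deploys per week: {avg_per_week:.1f}")   # 2.7 over three weeks
```

Tracking the count per week (not just a grand total) also exposes gaps—weeks with zero deployments are often where batches are silently growing.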

You might notice that teams who deploy more frequently tend to see lower risk and faster time to value. That’s because while each of these strategies can contribute to increased deployment frequency, they may also help to ensure increased quality at the same time. Building higher levels of security and compliance into your DevOps processes ultimately enables a faster, more reliable, and more consistent value flow from IT to end user. 

3: Decrease change failure rate.


Ideally, errors would never make it to production to disrupt any business process or workflow. But, as this is real life and every part of the development lifecycle requires human input at one point or another (either doing the work or setting up the automated systems and tests), errors do happen. The first quality indicator—change failure rate—measures the percentage of production releases that cause deployment errors resulting in business disruptions. This measurement helps show delivery leaders how their work affects the business.
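In its simplest form, change failure rate is just failed releases divided by total releases. The release records below are hypothetical:

```python
# Hypothetical release records: did the deployment cause a
# business-disrupting failure (hotfix, rollback, outage)?
releases = [
    {"id": "r101", "failed": False},
    {"id": "r102", "failed": False},
    {"id": "r103", "failed": True},    # required an emergency hotfix
    {"id": "r104", "failed": False},
    {"id": "r105", "failed": False},
]

failures = sum(r["failed"] for r in releases)
change_failure_rate = failures / len(releases)
print(f"Change failure rate: {change_failure_rate:.0%}")   # 20%
```

The hard part in practice isn’t the arithmetic—it’s agreeing up front on what counts as a “failure” so the numerator is recorded consistently.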

Decreasing change failure rate and maintaining a low number is critical for the business—not just in terms of value creation or loss in the moment, but for overall end user trust and business continuity. Customer exposure to bugs and any amount of downtime due to breaking changes can be incredibly costly.

That said, lowering this number can be complex because it’s tied to factors across people, process, and product. Let’s take a look at a few methods for addressing each:

  • People: First, identify the manual steps in the development process and determine whether that work can be automated. Where there is more manual work—i.e., manually packaging or testing high volumes of work—there are greater chances for error. These teams often see the highest change failure rates. Automating as many of these processes as possible (such as automated regression testing and CI/CD) can eliminate a large portion of breaking changes caused by human error.
  • Process: Remember our discussions about how to increase innovation speed and velocity? These are process optimizations. By increasing your speed through increasing deployment frequency and reducing deployment sizes, QA teams are better able to test the work that comes across their desk, meaning errors in code are more likely to be caught. Additionally, implementing quality gates such as not allowing changes to be made directly into production will go a long way in reducing failures.
  • Product: Tech debt is notorious for causing ongoing problems. Keep your code scalable and up to date: complete needed refactors so that code stays simple and old, brittle code is removed.

Change failure rate needs to be as close to zero as possible. Decreasing it mitigates risk and is essential for maintaining business value over time.

 

4: Decrease mean time to recovery.


When errors are deployed, how long does it take your team to troubleshoot, fix, or rollback those changes? The answer to that question is your mean time to recovery. You can think of change failure rate and mean time to recovery as a pair. Failures should happen as infrequently as possible, but when they do happen, your team should be able to fix them quickly. 
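Mean time to recovery averages the duration of production incidents, from detection to restoration. A small sketch using invented incident timestamps:

```python
from datetime import datetime

FMT = "%Y-%m-%d %H:%M"

def minutes(start: str, end: str) -> float:
    """Minutes elapsed between two timestamps."""
    return (datetime.strptime(end, FMT) - datetime.strptime(start, FMT)).total_seconds() / 60

# Hypothetical incidents: (failure detected, service restored)
incidents = [
    ("2021-02-01 10:00", "2021-02-01 10:45"),   # 45 minutes
    ("2021-02-05 14:00", "2021-02-05 16:15"),   # 135 minutes
]

durations = [minutes(detected, restored) for detected, restored in incidents]
mttr = sum(durations) / len(durations)
print(f"Mean time to recovery: {mttr:.0f} minutes")   # 90 minutes
```

Note that the clock starts when the failure is detected—so improving monitoring and alerting shrinks this number just as surely as faster fixes do.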

A short mean time to recovery is essential for many of the reasons discussed above—not only does downtime potentially erode customer trust, but it also contributes to low productivity and even lost revenue when systems are unreliable. Clearly, your organization wants to avoid all of these outcomes.

Here are a few practices to think about in order to decrease mean time to recovery:

  • Version control. When a failure occurs, are your developers able to identify the root cause of the issue in a timely manner? CI/CD tools with version control provide an audit trail of all the changes included in a release, which can help developers track down what went wrong more efficiently. Seeing the difference between the “original” state and the current failure state can also help to indicate what a potential fix might include, or serve as a baseline for what state to roll back to while fixes are in progress.
  • Speaking of rollbacks—practice makes perfect. Consider practicing rollbacks with your team so they understand how to “undo” a production failure, what types of changes to roll back, and how to roll fixes forward when appropriate.

Of course, speed and quality are intertwined. If your team has reduced lead time and is deploying frequently, the odds that you can also reduce your mean time to recovery are greater because it’s likely that the code developers will need to assess is less complex and can therefore be fixed faster.

Speed + Quality = Elite Performance 

Teams who regularly monitor development metrics perform better, but more effective performance isn’t based on monitoring metrics alone. The crucial step is turning insight into action and implementing strategies that help your team improve on areas of weakness and scale areas of strength.

There are many ways to approach a problem and optimize a team’s performance, but realizing performance gains in all areas is more likely to occur when a comprehensive CI/CD tool is implemented and common DevOps practices are followed.

Speed at the price of quality is no longer a tradeoff. With a strong DevOps culture and value stream mapping bringing visibility to your processes and their effectiveness, it’s clear that you can “optimize for stability without sacrificing speed.”


See how Copado Value Stream Maps can bring more visibility into your development lifecycle.

