The ROI of Data-Driven Development: Improving How Teams Work
The window to get digital transformation right is rapidly closing. Every year, companies spend millions on digital solutions and platforms, hoping to accelerate their transformations. Development teams are at the heart of delivering that value and innovation, and the DevOps practices software teams have begun to adopt are critical to helping them optimize their software delivery value streams. Value streams illustrate the way teams work together to deliver value, but without visibility into the delivery pipeline, companies are leaving millions in ROI for their transformations on the table.
But, where does the ROI of digital transformation actually come from? How does it get realized?
Put simply, ROI comes from improving the way teams work. While the work development teams produce is clearly important, how that work is accomplished typically holds the key to opportunities to drive business value faster. Optimizing the how shapes the what and the when.
When teams improve the way they work, they gain small, yet impactful, efficiencies continuously. Improvements to the software delivery value stream help reach revenue goals faster, allowing the business to realize ROI sooner and increase its ROI potential. In fact, in an in-depth study of 12 Salesforce customers, analyst firm IDC1 found that two of the four primary value drivers of the Salesforce platform were increasing the productivity of the application development team and optimizing the use of IT staff time and IT infrastructure. The way in which work is done matters.
This is all well and good, but the question remains: how can teams strategically improve the ways in which they work together?
The Value of Visibility
In reality, implementing DevOps practices is just the first step to optimizing a value stream. In order to unlock innovation at scale in an orchestrated fashion, development teams need to gain visibility into the value stream itself. Teams cannot improve without clarity. Gartner’s research has revealed that companies' inability to measure and optimize the value of application development is directly tied to their lack of visibility into the flow of work. While they may “know” how to work together, teams are often siloed, without visibility into processes, flow, and what’s working versus what’s breaking down.
As mentioned, value streams represent the series of steps an organization uses to provide a continuous flow of value to a customer. The steps of that process are typically some version of planning, building, testing, deploying, and monitoring. If those sound familiar, it’s because the structure of Copado’s integrated DevOps value stream delivery platform is derived from this flow. All of these steps are needed in order to maximize value delivery in a systematic way. But, if the steps aren’t explicitly defined, visualized, and measured, it’s difficult to know what a team is doing or how it could improve at any stage.
The interesting thing is that the majority of teams who have adopted DevOps practices are unsure of how to take the next step to begin to measure and improve their organization’s processes. While their teams’ actions throughout the development lifecycle produce all the data they need to do so, data-driven development can only happen when development data is surfaced in a clear way. Few companies give their teams the tools to access and visualize their data, and even fewer have the ability to see the data organized in a way that leads to insightful decision-making. This is the critical gap organizations need to fill in order to maximize potential ROI from digital transformation based in DevOps practices.
Performance data visibility enables and expedites data-driven decisions about future investments in the product, brings to light opportunities for business process re-engineering, and improves delivery velocity and quality while mitigating risk factors. Monitoring and analyzing how work is done is an essential part of an effective DevOps practice: DevOps ROI is directly rooted in visibility into your value stream.
Insightful decision-making is based on data.
A single source of DevOps truth can help give teams the insights they need to measure and improve DevOps performance and throughput. Many people talk about monitoring the development lifecycle as the last step in DevOps maturity, but the reality is that it is one of the linchpins to digital transformation. Monitoring should be applied to every step of the DevOps lifecycle in order to gain visibility into processes, gather data on what’s working and what’s not, and make adjustments accordingly. It’s integral to every step of your DevOps process, right from the start.
Historically, a value stream map has been an essential lean tool for an organization wanting to plan, implement, and improve while on its lean journey. Value stream mapping helps users create a solid implementation plan that maximizes their available resources and ensures that materials and time are used efficiently.
If we extend this over to the software space and apply it to DevOps, it starts to make sense why the tool holds such value. Mapping the value stream of your Salesforce delivery process helps you unlock opportunities to drive more value because even if your team is working hard, they may still struggle to deliver quality work on time. Since you cannot physically see where breakdowns in your process are happening, you need to map out each phase of your delivery process: plan, build, test, deliver, and monitor. Copado’s Value Stream Maps bring visibility to the development lifecycle by displaying work stages, processes within stages, and metrics associated with the processes for each phase, while DevOps 360 Analytics provides data on over 20 other software development metrics. Each metric represents an opportunity for incremental improvements in time to value and ROI.
Together, these products allow you to assess adherence to process and quality at each phase, as well as measure throughput of the entire system. Metrics tracked at every stage of the process, such as change failure rate, mean time to restore, lead time, and deployment frequency, serve as red flags of sorts. Changes in these numbers over time help indicate which stages of the development process to dig into further. This information is crucial not only for having a holistic view of where components of the roadmap stand in the development lifecycle, but also for reporting on the value the development team creates and for digging in to optimize the system.
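As an illustration of how these four metrics are derived, the sketch below computes them from a simple log of deployment events. This is a minimal example using entirely hypothetical records and field names, not Copado’s implementation or data model:

```python
from datetime import datetime
from statistics import mean

# Hypothetical deployment records: when work started, when it shipped,
# whether the change failed in production, and how long recovery took.
deployments = [
    {"started": datetime(2023, 1, 2), "deployed": datetime(2023, 1, 5),
     "failed": False, "recovery_hours": 0},
    {"started": datetime(2023, 1, 3), "deployed": datetime(2023, 1, 9),
     "failed": True, "recovery_hours": 4},
    {"started": datetime(2023, 1, 8), "deployed": datetime(2023, 1, 12),
     "failed": False, "recovery_hours": 0},
]

# Lead time: average days from starting work to deploying it.
lead_time_days = mean((d["deployed"] - d["started"]).days for d in deployments)

# Deployment frequency: deployments per week over the observed window.
window_days = (max(d["deployed"] for d in deployments)
               - min(d["started"] for d in deployments)).days
deploys_per_week = len(deployments) / (window_days / 7)

# Change failure rate: share of deployments that caused a production failure.
change_failure_rate = sum(d["failed"] for d in deployments) / len(deployments)

# Mean time to restore: average recovery time over failed deployments only.
failed = [d for d in deployments if d["failed"]]
mttr_hours = mean(d["recovery_hours"] for d in failed) if failed else 0.0

print(lead_time_days, deploys_per_week, change_failure_rate, mttr_hours)
```

The point of the sketch is that all four numbers fall out of event data teams already generate; the hard part, as the article argues, is surfacing and organizing that data.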
Realistically, you can’t make improvements across all indicators at once. Just as agile is rooted in simplifying work to drive value faster, DevOps value stream optimization benefits from starting with smaller pieces. The important thing to remember, though, is that increases or decreases in these metrics over time are just symptoms of underlying causes. Target one at a time for improvement, and often multiple areas will benefit.
In this way, both Value Stream Maps and DevOps 360 Analytics help bring to light breakdowns in process, such as where bottlenecks may prevent the team from delivering value in a reasonable time frame; stages with excess waiting time; or when, where, and how far back in the process work is being sent. These all affect the throughput of the system and are opportunities for business process improvement. Here are a few ways to spot and assess potential process breakdowns:
If lead time increases, ask:
- Assess how the work is planned: is the work scoped too big? Can user stories be broken down into smaller, independent pieces?
- Analyze what’s happening during the build phase: are developers context-switching throughout the day as they work on multiple user stories at once?
- Look at work quality: do tests or QA checks routinely fail, causing work to be sent back to development?
- Check whether deployments routinely fail. If so, dig into whether testing or QA is being bypassed, test coverage is lacking, or the build itself was low-quality.
If deployments become less frequent, ask:
- Has the complexity of the planned work increased, with each user story biting off too much? What’s the size of the deployment? If the planned builds are too big, the complexity of the release can delay deployments. Consider re-scoping projects to release in smaller batches.
- If teams are still manually testing, how tedious is the process? Could any of it be automated?
If mean time to recover starts to increase, ask:
- Are teams implementing version control practices appropriately?
- How complex are the issues that cause failures?
- Does the team know how to roll back or roll forward fixes in production?
If change failure rate creeps further and further from 0%, ask:
- Which tests (or testers) are failing to catch code that introduces breaking changes into production? Consider implementing automated tests or reviewing existing tests to make sure they are written in a way that provides more coverage.
- Have deployment sizes increased? Larger deployments often stem from increasingly complex work, which, again, is more difficult to test and accurately troubleshoot.
If work in progress varies widely, ask:
- Are developers working on too many user stories at once?
- Are there bottlenecks to work flow that aren’t being addressed?
- Where are there dependencies that can be restructured?
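One simple way to treat these metrics as red flags, as described above, is to compare a recent window of samples against an earlier baseline and flag large drifts. The sketch below is a minimal, hypothetical illustration (the function name, window size, and threshold are all assumptions, not part of any product):

```python
from statistics import mean

def flag_regression(history, window=4, threshold=0.25):
    """Flag a metric whose recent `window` samples drift more than
    `threshold` (as a fraction) from the earlier baseline.
    Purely illustrative; real tooling would be more sophisticated."""
    if len(history) < 2 * window:
        return False  # not enough data to compare
    baseline = mean(history[:-window])  # everything before the recent window
    recent = mean(history[-window:])    # the most recent samples
    if baseline == 0:
        return recent > 0
    return abs(recent - baseline) / baseline > threshold

# Weekly lead-time samples (days): stable at first, then creeping upward.
lead_time = [4, 5, 4, 5, 4, 7, 8, 9, 8]
print(flag_regression(lead_time))  # flags the upward drift
```

A flag like this only tells you where to look; as the article notes, the metric movement is a symptom, and the diagnostic questions above are how you find the underlying cause.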
Re-engineering Business Processes
As Gartner reports, “the access to unified analytical insights across the toolchain enables product owners and business stakeholders to assess the level of business risk, release cadence, responsiveness to change and cross-team collaboration.” This enables leaders to make decisions in an agile manner and course-correct as needed.
Business process management is a dedicated strategy for improving workflows and processes throughout an organization. All of the areas above represent processes that can be trained (or re-trained). Many are also workflows that present opportunities to apply DevOps practices - such as version control, automated testing, embedding compliance into builds, or implementing a CI/CD tool - that may not already be present.
So, while the numbers themselves are interesting, visibility is not about tracking metrics to see incremental improvement in them over time. It’s about understanding where to look to uncover which processes are working and which aren’t in your development lifecycle. This, in turn, helps you understand where to apply or refine DevOps practices to optimize a team’s work output and the way they work, in order to deliver value to customers faster. In fact, a report from Forrester shows that 64% of all companies practicing business process management are placing emphasis on re-engineering customer-servicing functions.
Business process management in and of itself is a virtuous feedback loop that resembles a DevOps mindset: watch the data to understand which process to optimize, adjust the process, watch how the data changes, rinse and repeat.
As your teams practice this process of end-to-end optimization over time, they’ll become more efficient, building and delivering with higher quality at faster speeds. This is the core of digital transformation: you can’t transform (or continue transforming) without refining your processes.
In 2018, nearly $1.3 trillion was spent on digital transformation globally. Of that, more than $900 billion is estimated to have gone to waste.2 As a result, the need for DevOps, agile development, and value stream visibility isn’t a nice-to-have, it’s a business necessity. Businesses that promote continuous innovation in their work—and the way they work—will be the digital transformation winners.
1 Carvalho, Larry; Marden, Matthew; Arora, Ustav. The ROI of Building Apps on Salesforce. IDC, 2016.
2 Tabrizi, Behnam; Lam, Ed; Girard, Kirk; Irvin, Vernon. Digital Transformation Is Not About Technology. Harvard Business Review, 2019.