There’s a feature arms race underway, and SaaS is fueling the fire. The quality of collaboration in software development is ultimately measured by a direct line of sight into the customer experience. Read more about how we got here in my prior post.
DevOps is a given in today’s software engineering world. It has certainly unlocked a measure of collaboration, productivity, feature velocity, and innovation within SaaS software development previously unseen. Short of dogs and cats living together, Dev-plus-Ops playing nicely seems to be the right direction, if only judging by how often we hear about it from prospective customers with SaaS ambitions.
Cultural alignment within an engineering organization is necessary, but not sufficient. To gauge whether it also makes a difference to customers (those nice people with the money), we need to go beyond the false dichotomy of sequence built into the term and measure what matters. Software delivery is measured by results, not just feature velocity.
Five or so years ago, the writing was on the wall, and the DevOps Research and Assessment (DORA) project set out to quantify what separates the leaders (the companies whose software is eating the world) from the laggards (who are now really struggling in the face of the pandemic).
The State of DevOps report (and its hard-core companion book, Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations) provides a wealth of information about how 31,000 people over the last six years have tackled DevOps problems, and lets you compare yourself to others in your space.
The output your customers see reduces to 4 functions of time: deployment frequency, lead time, mean time to restore (MTTR), and change failure rate. Collectively, they are the measure of feature friction.
The DORA report, as it’s known, derives four results-based criteria to gauge how effective the hard work of software development is at your organization, in the context of your industry (a rough sketch in code of how they might be computed follows the list):
- Deployment frequency: cadence of the release of code to production and/or end-users
- Lead time for changes: from code committed, to code successfully running in production
- Time to restore service: elapsed time from a user-impacting incident or defect (e.g., unplanned outage or service impairment) to restoration of service
- Change failure rate: the fraction of changes to production that result in degraded service (e.g., unplanned outage or service impairment) and/or need subsequent remediation (e.g., hotfix, rollback, fix forward, patch)
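To make the four criteria concrete, here is a minimal sketch of how they might be computed from raw delivery data. Nothing below is DORA’s official tooling; the two event logs (one row per production deployment, one row per user-impacting incident) and every field name are hypothetical, chosen only to show that each metric is a simple function of timestamps you are probably already recording.

```python
# A hypothetical sketch, not DORA's tooling: the four metrics as
# functions of timestamps from two assumed event logs.
from dataclasses import dataclass
from datetime import datetime
from statistics import mean

@dataclass
class Deployment:
    committed_at: datetime   # when the change was committed
    deployed_at: datetime    # when it reached production
    caused_failure: bool     # degraded service or needed remediation?

@dataclass
class Incident:
    started_at: datetime     # when users were first impacted
    restored_at: datetime    # when service was restored

def hours(delta):
    """Convert a timedelta to hours."""
    return delta.total_seconds() / 3600

def dora_metrics(deploys: list[Deployment], incidents: list[Incident], window_days: int):
    """The four DORA metrics over a reporting window of `window_days`."""
    return {
        "deployment_frequency": len(deploys) / window_days,  # deploys per day
        "lead_time_hours": mean(hours(d.deployed_at - d.committed_at) for d in deploys),
        "mttr_hours": mean(hours(i.restored_at - i.started_at) for i in incidents),
        "change_failure_rate": sum(d.caused_failure for d in deploys) / len(deploys),
    }
```

The point is not the code but the shape of it: all four numbers fall out of commit, deploy, incident, and restore timestamps, which is why they are hard to game.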
Go ahead and try the survey yourself here; it takes less than 2 minutes. The image at right shows the benchmark for a hypothetical manufacturing company.
So much of the discussion around developer productivity (without doubt, an essential ingredient of feature agility) revolves around methodology. The choices in building out an Agile capability are non-trivial. As I’ve written elsewhere, the best teams reserve capacity to pay down technical debt and seek ways to be better architected. But how good is good enough? Respondents from high-performing organizations in the DORA benchmark survey reported their companies were:
- 2x more likely to exceed profitability, market share & productivity goals
- 2x more likely to achieve organizational and mission goals, customer satisfaction, and quantity & quality goals
- 2.2x higher employee Net Promoter Score
- 50% higher market capitalization growth over three years
The winners in SaaS deliver on feature agility by minimizing feature friction. Hewing to these four metrics gives everyone in your organization, from interns to executives, a single, clear-cut focus: how hard you have to work to keep your customers confident and committed.
This doesn’t make all the hard work of metrics, instrumentation, architectural improvement, time boxing, test automation, bug remediation counts, CI/CD automation, production readiness (and more) go away. Those are the internals of the software sausage factory; they’re necessary, but not sufficient.
Software Delivery: DevOps as the last mile of SaaS
Seen another way, the output your customers experience reduces to 4 functions of time: lead time, deployment frequency, mean time to restore (MTTR), and change failure rate. Collectively, they are the measure of feature friction. For the last mile of software development, productivity is measured in the delivery of results over time.
There are at least four lessons on addressing feature friction to be drawn from the DORA software delivery benchmark.
- Software innovation doesn’t count until it’s in production. All the software development doctrines and debates about collaboration models, from Agile to DevOps to Lean to Scrum, only matter inasmuch as they combine to deliver reliable net value to customers.
- The best in class is not about trading off velocity vs. stability. Top performers excel at both reliability and velocity. The value of the benchmark is binding inputs and outputs. Together, they combine to give you a full picture of whether your product development work makes a difference to your customers and users.
- Especially in a SaaS business, metrics aggregated over time matter most. In the context of delivering software as a service to customers, trends over time matter more than points in time. Human attention spans don’t do well at comprehending trends (something that did not start with the struggle to grasp the trajectory of COVID-19). A rolling window helps; see the sketch after this list.
- The nature of a SaaS business is not only in feature agility but in eliminating sources of feature friction. In most software organizations, there are many more developers writing software than there are operations people who keep the software running. It’s natural that there is an asymmetry in the technical workforce. Be careful that this asymmetry doesn’t become a false democracy, in which the people who write the code overrule the people who run it.
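On the “trends over time” point above, here is a small sketch of one way to do it: a trailing 30-day window recomputed week over week, so a single bad incident is read in context rather than as the whole story. The incident log and the window size are made up for illustration.

```python
# A hypothetical sketch: rolling MTTR over a trailing window,
# so you see a trend line rather than a point in time.
from datetime import date, timedelta

# (day the incident started, hours to restore) -- made-up incident log
incidents = [
    (date(2020, 5, 3), 4.0),
    (date(2020, 5, 20), 1.5),
    (date(2020, 6, 2), 9.0),
    (date(2020, 6, 18), 2.0),
]

def rolling_mttr(log, as_of, window_days=30):
    """Mean time to restore over the trailing window ending at `as_of`."""
    cutoff = as_of - timedelta(days=window_days)
    in_window = [hours for day, hours in log if cutoff < day <= as_of]
    return sum(in_window) / len(in_window) if in_window else None

# Recompute week over week and plot it: the trend, not the drama.
for week in range(5):
    as_of = date(2020, 6, 1) + timedelta(weeks=week)
    print(as_of, rolling_mttr(incidents, as_of))
```

The same windowing applies to any of the four metrics; the design choice is simply to report every number as a series, never as a single snapshot.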
Bear in mind there will be seemingly endless input metrics (microservice logging, anyone?), and that’s ok. Knowing which symptoms matter tells you where to look among the candidate causes of disease. The beauty of a lagging indicator is that it saves you from the agony of prediction. Reality always has the last word.
Taking a combined view of feature agility and feature friction allows DevOps to go deeper than the drama of “was there an outage?” It goes beyond the classic SaaS business metrics of churn, renewals, subscriptions, and other feature-driven indicators. It aligns the hard work of your development team with the one thing customers will never have enough of: time.