Continuous integration and continuous delivery represent one of the clearest productivity levers available to software engineering teams. Yet in our conversations with engineering leaders across the organizations we work with, we consistently find a wide gap between what teams know they should be doing and what their pipelines actually do.

This gap is not primarily a knowledge problem. Most engineers understand the principles of CI/CD and can articulate what a mature delivery pipeline looks like. The gap is a prioritization and investment problem. CI/CD infrastructure is classic platform work: it accrues technical debt quietly, the cost of a slow or fragile pipeline is diffuse rather than acute, and the work of improving it rarely makes it to the top of a product roadmap.

In this article, we map the stages of CI/CD maturity, describe what separates elite delivery organizations from the rest, and share our perspective on where the most interesting investment opportunities exist in the delivery automation space.

The Four Stages of CI/CD Maturity

Stage 1: Scripted Builds. The baseline stage. Teams have automated build scripts, often bash or Makefile-based, that can reproduce the build process without manual steps. Continuous integration in this stage usually means running these scripts on a shared CI server whenever code is pushed. Test coverage is minimal and inconsistent. Deployments are largely manual, often performed by a specific person who "knows how to do it."

This stage is better than fully manual processes, but it creates significant problems at scale. Build scripts become complex and brittle. The person who "knows how to deploy" becomes a critical bottleneck. The absence of reliable automated testing means that every deployment carries significant risk, which causes teams to deploy infrequently, which in turn makes each deployment larger and riskier.

Stage 2: Automated Testing Integration. Teams at this stage have invested in automated test suites and integrated them into the CI pipeline. Every code commit triggers a build and a test run. Failing tests block merges. Deployment is still partially manual, but there is a defined process and a set of quality gates that code must pass before production.

The key challenge at Stage 2 is pipeline performance. As test suites grow, build times grow with them. Teams that do not invest in parallelization, caching, and test infrastructure end up with pipelines that take 30 to 60 minutes or longer, which imposes a significant friction cost on developers and reduces the effective deployment frequency.
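To make the parallelization idea concrete, here is a minimal sketch of deterministic test sharding: each test is assigned to a shard by a stable hash, so every CI worker can compute its own slice of the suite without any coordination. The function names are illustrative, not any particular CI platform's API.

```python
import hashlib

def shard_for(test_name: str, num_shards: int) -> int:
    """Assign a test to a shard using a stable content hash, so every
    CI worker computes the same partition independently."""
    digest = hashlib.sha256(test_name.encode()).hexdigest()
    return int(digest, 16) % num_shards

def select_shard(tests: list[str], shard_index: int, num_shards: int) -> list[str]:
    """Return the subset of tests this worker should run."""
    return [t for t in tests if shard_for(t, num_shards) == shard_index]

tests = [f"test_case_{i}" for i in range(20)]
shards = [select_shard(tests, i, 4) for i in range(4)]
# Every test lands in exactly one shard, so the union of all shards
# covers the full suite with no duplication.
assert sorted(sum(shards, [])) == sorted(tests)
```

Hash-based sharding keeps shard assignment stable across runs; more sophisticated systems rebalance shards by historical test duration rather than by count.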

Stage 3: Continuous Delivery. At this stage, every code change that passes the automated quality gates can be deployed to production at any time, though deployment is still a human decision. The deployment process itself is fully automated — pushing the button is the only manual step. Rollback is also automated, and the team has confidence that they can deploy and recover quickly if something goes wrong.

Teams at Stage 3 deploy frequently by most industry standards, but they are still constrained by the human decision to deploy. This creates batching behavior — deploying multiple changes together to reduce the overhead of the deployment decision — which in turn increases the risk of each deployment.

Stage 4: Continuous Deployment and Elite Performance. The elite stage. Every code change that passes automated quality gates is automatically deployed to production without human intervention. Deployment frequency is measured in tens or hundreds of deployments per day. Lead time from commit to production is measured in minutes. Change failure rate is low, and mean time to restore is measured in minutes rather than hours.

Reaching Stage 4 requires a combination of technical investment (comprehensive automated testing, feature flags, progressive delivery, and automated rollback) and cultural investment (trust in automation, tolerance for small controlled failures, and a deployment culture that treats every commit as potentially production-ready).

What Elite Delivery Organizations Do Differently

The Accelerate research, conducted by DORA (now part of Google Cloud) across tens of thousands of survey respondents from thousands of organizations, has identified a consistent set of technical practices that separate elite software delivery organizations from the rest. These practices are worth examining in detail, because they inform where we see the most compelling investment opportunities.

Trunk-based development is one of the strongest predictors of elite delivery performance. Teams that work in long-lived feature branches accumulate integration debt that makes every merge a significant risk event. Trunk-based development, combined with feature flags for work-in-progress features, allows teams to integrate continuously without exposing incomplete features to users. The technical requirement for this practice — a robust feature flagging and progressive delivery system — is itself a significant product opportunity.
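The core mechanism behind a feature flag system is simple enough to sketch. The following illustrative example buckets users deterministically, so a feature at a partial rollout percentage is stable for any given user across requests; the flag name and API are hypothetical, not a specific vendor's.

```python
import hashlib

def flag_enabled(flag: str, user_id: str, rollout_percent: int) -> bool:
    """Deterministically bucket a user into [0, 100) for a given flag.
    The same user always gets the same decision, so a partially
    rolled-out feature does not flicker between requests."""
    bucket = int(hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest(), 16) % 100
    return bucket < rollout_percent

# A flag at 0% is off for everyone; at 100% it is on for everyone.
assert not flag_enabled("new-checkout", "user-42", 0)
assert flag_enabled("new-checkout", "user-42", 100)
```

Production-grade systems layer targeting rules, kill switches, and audit logs on top of this bucketing primitive, which is where much of the product opportunity lies.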

Comprehensive automated testing at multiple layers (unit, integration, end-to-end, performance) is table stakes for elite delivery. But the subtler practice we observe in the best organizations is test investment discipline: regularly reviewing the test suite to identify slow, flaky, or redundant tests, and treating test infrastructure as a first-class concern with dedicated engineering time.

Database change management is one of the most underappreciated challenges in continuous delivery. Schema migrations that cannot be safely rolled back create a hard constraint on deployment automation. Elite organizations invest heavily in backward-compatible migration strategies, expand-contract patterns, and automated schema validation tooling.
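The expand-contract pattern can be illustrated in miniature. This sketch models a column rename with a dict standing in for a database row; the column names are hypothetical. During the expand phase the application writes both columns and readers fall back to the old one, so any deploy (or rollback) remains compatible; only after all readers use the new column is the old one dropped in a separate contract migration.

```python
def write_row_expand_phase(name: str) -> dict:
    """Expand phase: the new column exists alongside the old one and
    the application writes both, so old and new code can both read."""
    return {"fullname": name, "full_name": name}

def read_row_any_phase(row: dict) -> str:
    """Readers prefer the new column but fall back to the old one,
    which keeps every deploy and rollback backward compatible."""
    return row.get("full_name") or row["fullname"]

# Contract phase happens later, as an independent migration that drops
# the old column once no reader depends on it.
row = write_row_expand_phase("Ada Lovelace")
assert read_row_any_phase(row) == "Ada Lovelace"
assert read_row_any_phase({"fullname": "legacy row"}) == "legacy row"
```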

The Build Speed Problem

Build speed is the most universally cited pain point in CI/CD. When we ask engineering leaders what they would change about their delivery pipeline if they could change one thing, slow builds are the most common answer by a significant margin.

The economics of slow builds are straightforward: a developer waiting for a 20-minute CI run can context switch away from the task at hand, and returning to it carries a significant cognitive cost. Multiply this across a team of 100 engineers, each running 10 CI cycles per day, and the accumulated productivity cost of slow builds is substantial — often equivalent to several full-time engineering salaries annually.
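The arithmetic is worth making explicit. The following back-of-envelope calculation uses the team size and CI frequency from the scenario above; the minutes lost per run, workdays per year, and hourly cost are assumptions chosen purely for illustration.

```python
# Back-of-envelope cost of slow builds. Every constant below is an
# assumption for illustration, not data from a real organization.
engineers = 100
ci_runs_per_day = 10          # per engineer, as in the scenario above
lost_minutes_per_run = 5      # assumed focus lost to context switching
workdays_per_year = 220       # assumed
loaded_cost_per_hour = 100    # assumed fully loaded hourly cost, USD

lost_hours_per_year = (engineers * ci_runs_per_day * lost_minutes_per_run
                       / 60 * workdays_per_year)
annual_cost = lost_hours_per_year * loaded_cost_per_hour

print(f"{lost_hours_per_year:,.0f} engineer-hours, ${annual_cost:,.0f} per year")
# → 18,333 engineer-hours, $1,833,333 per year
```

Even with these conservative assumptions, the annual cost lands well into seven figures, which is why build acceleration has such a clear ROI story.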

The solutions to slow builds are well-understood in principle: parallelization, test sharding, build caching, and selective test execution based on what code actually changed. The challenge is that implementing these optimizations in existing CI infrastructure requires significant investment and specialized expertise. This gap — between the known solution and the ability to implement it — is exactly the kind of problem that creates compelling product opportunities for developer tools companies.

Security Integration in the Delivery Pipeline

The shift-left security movement — bringing security testing earlier in the development process — has driven significant investment in security tooling integrated into CI/CD pipelines. Static application security testing (SAST), software composition analysis (SCA), container image scanning, and infrastructure-as-code security validation have all become standard components of mature delivery pipelines.

The challenge with security tooling in CI/CD is signal-to-noise ratio. Security scanners that produce hundreds of findings per run create alert fatigue, and developers quickly learn to ignore the output. The most effective security tooling in delivery pipelines is highly curated, provides clear remediation guidance, and distinguishes sharply between actionable findings and noise.
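What "highly curated" means in practice can be sketched as a triage filter: surface only findings that are severe, actually reachable from exercised code paths, and fixable, and summarize everything else. The finding fields below are hypothetical, not any particular scanner's schema.

```python
# Illustrative triage filter over scanner output. The field names
# (severity, reachable, fix_available) are assumptions for this sketch.
def actionable(findings: list[dict]) -> list[dict]:
    """Keep only findings worth blocking a pipeline over."""
    return [
        f for f in findings
        if f["severity"] in {"critical", "high"}
        and f.get("reachable", False)       # code path actually exercised
        and f.get("fix_available", False)   # a remediation exists
    ]

findings = [
    {"id": "CVE-A", "severity": "critical", "reachable": True,  "fix_available": True},
    {"id": "CVE-B", "severity": "high",     "reachable": False, "fix_available": True},
    {"id": "CVE-C", "severity": "low",      "reachable": True,  "fix_available": True},
]
# Only the reachable, fixable, high-severity finding survives triage.
assert [f["id"] for f in actionable(findings)] == ["CVE-A"]
```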

From an investment perspective, we are particularly interested in companies that are integrating security tooling into the developer workflow in a way that feels native and helpful rather than imposed. The developer security tooling market has grown substantially, but we believe there is still significant room for products that genuinely improve developer experience while providing meaningful security coverage.

Where We See Investment Opportunities

The CI/CD tooling market is large and competitive, but several categories remain underpenetrated from a seed-stage investment perspective. Build acceleration tooling — solutions that provide intelligent caching, remote build execution, and test selection — is a category we are actively tracking. The core insight is that build speed is a measurable, quantifiable problem, and solutions that demonstrably reduce build times have a clear ROI story that resonates with both developers and engineering leadership.
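The primitive underneath intelligent build caching is the content-addressed cache key: a key derived from the content of every build input, so the key changes exactly when an input changes. A minimal sketch, with in-memory inputs standing in for files and compiler flags:

```python
import hashlib

def cache_key(inputs: dict[str, bytes]) -> str:
    """Derive a build-cache key from the content of every input.
    Names are sorted so the key is independent of insertion order."""
    h = hashlib.sha256()
    for name in sorted(inputs):
        h.update(name.encode())
        h.update(b"\0")          # separator prevents boundary ambiguity
        h.update(inputs[name])
        h.update(b"\0")
    return h.hexdigest()

a = cache_key({"main.c": b"int main(){}", "flags": b"-O2"})
b = cache_key({"flags": b"-O2", "main.c": b"int main(){}"})
assert a == b                                    # order does not matter
assert a != cache_key({"main.c": b"int main(){}", "flags": b"-O3"})
```

Real systems extend this idea to whole dependency graphs, which is what makes remote build execution and cross-machine cache sharing possible.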

Progressive delivery infrastructure — feature flags, canary deployments, and controlled rollouts — is another category we believe has substantial room for growth. The concept is well-established, but the tooling required to implement it with confidence across different infrastructure environments is less mature than it should be, and the governance and experimentation capabilities layered on top of basic feature flagging are almost entirely unbuilt.
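A canary rollout ultimately reduces to an automated promote-or-rollback decision. This hypothetical gate compares the canary's error rate against the stable baseline; real systems would also weigh latency, saturation, and statistical significance before promoting.

```python
def canary_decision(baseline_error_rate: float,
                    canary_error_rate: float,
                    tolerance: float = 0.005) -> str:
    """Return 'promote' if the canary's error rate is within tolerance
    of the baseline, otherwise 'rollback'. The tolerance value here is
    an arbitrary illustration, not a recommended threshold."""
    if canary_error_rate <= baseline_error_rate + tolerance:
        return "promote"
    return "rollback"

assert canary_decision(0.010, 0.012) == "promote"   # within tolerance
assert canary_decision(0.010, 0.030) == "rollback"  # regression detected
```

Encoding this decision in software, rather than leaving it to a human watching dashboards, is what lets teams run controlled rollouts at Stage 4 deployment frequencies.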

Finally, we are watching the emerging category of AI-assisted delivery optimization — systems that analyze pipeline performance data, identify optimization opportunities, and automatically tune build configuration. The data required to train these systems is readily available in modern CI platforms, and the potential for meaningful developer productivity gains is significant.

Key Takeaways

  • CI/CD maturity progresses through four stages: scripted builds, automated testing, continuous delivery, and continuous deployment.
  • Trunk-based development with feature flags is among the strongest predictors of elite delivery performance.
  • Build speed is the most universally cited pain point, with measurable productivity costs.
  • Security tooling in CI/CD pipelines must prioritize signal-to-noise ratio to avoid developer fatigue.
  • Investment opportunities exist in build acceleration, progressive delivery, and AI-assisted pipeline optimization.