There is an old saying that you can’t teach an old dog new tricks (the original, recorded by Nathan Bailey, was “An old dog will learn no tricks”). The saying isn’t literally true, but it isn’t meant to be taken literally either. A related phrase, “old habits die hard,” plays out every day in the rapidly evolving software development world. We meet team after team who have taken their continuous integration (CI) tool (most frequently Jenkins) and attempted to “teach” it how to do continuous delivery (CD). But there is a better way, one that was actually designed to help organizations reach the ultimate goal of continuous dev/test/deploy/repeat.
I believe the core of the entanglement we see in the market stems from the use of the term “pipeline” for both CI and CD. CI and CD pipelines are two very different things built for two different purposes. They are connected and interdependent, but they should not be confused as doing the same thing. Let’s dive deeper.
Below are the main differences between what CI and CD do:
CI—Take code from developers, test it, build it, output software artifacts.
CD—Deploy artifacts across dev, test, QA, and production, and test, verify, and roll back as needed at every stage.
What we want to focus on today are the shortcomings of attempting to build a CD pipeline with Jenkins, and the benefits of constructing a pipeline purpose-built for CD (with CI as the first step in the process).
As the Jenkins documentation puts it: “Jenkins Pipeline (or simply “Pipeline” with a capital “P”) is a suite of plugins which supports implementing and integrating continuous delivery pipelines into Jenkins.”
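For concreteness, a minimal declarative Jenkinsfile looks like the sketch below. The Maven commands are placeholders for whatever build and test steps a project actually uses:

```groovy
// Minimal declarative Pipeline: the classic CI loop of build, then test.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B -DskipTests package'  // compile and package the artifact
            }
        }
        stage('Test') {
            steps {
                sh 'mvn test'                    // run the unit test suite
            }
        }
    }
}
```

Note how naturally this maps onto the CI definition above: code in, tests run, artifact out.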
Now plugins in and of themselves are not necessarily a bad thing, of course, but there are shortcomings:
The plugins are open source, and multiple competing plugins are often available for the same “headline” function (Docker, Kubernetes, and so on).
Each plugin can have any number of other plugin dependencies.
Maintaining them can be a daunting task, especially these days, when everything changes so fast. Plus, numerous changes (in plugins and plugin dependencies) can render your pipeline inoperative at any given moment.
An even bigger missing piece is that Jenkins was designed to do CI, not CD, so you have to “teach” it many concepts before it can even attempt CD. Further, you would have to somehow make it flexible across stages and environments (for example, so it can adapt user credentials and secrets, or understand the difference between a container and another kind of artifact). The way people do this is by scripting these things, which is extremely complex and eventually leads to an almost hardcoded “pipeline” that requires constant maintenance. Many teams find they spend more time maintaining the pipeline than actually building and shipping code.

Finally, even if you’ve accepted this reality with a Jenkins pipeline, you are still going to hit a wall. Application and testing environments are growing so large, with so many moving parts (artifacts, containers, environments, test tools and test types, deployment strategies, rollbacks, and roll-forwards), that many teams are now simply unable to catch up, which brings me to the final point.
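To make that point concrete, here is the kind of scripted-pipeline logic teams end up hand-rolling to push Jenkins beyond CI. This is a sketch only: the environment names, credential IDs, and deploy script are hypothetical, and every environment difference must be encoded by hand.

```groovy
// Hypothetical hand-rolled CD logic in a Jenkins scripted pipeline.
// Each branch and credential mapping is one more thing to maintain
// whenever an environment, secret, or promotion rule changes.
node {
    def envs = ['dev', 'qa', 'prod']
    for (e in envs) {
        stage("Deploy to ${e}") {
            if (e == 'prod') {
                // Manual approval gate, scripted by hand.
                input message: 'Promote to production?'
            }
            // Credentials are wired per environment by naming convention.
            withCredentials([usernamePassword(credentialsId: "deploy-${e}",
                                              usernameVariable: 'DEPLOY_USER',
                                              passwordVariable: 'DEPLOY_PASS')]) {
                sh "./deploy.sh --env ${e}"      // hypothetical deploy script
            }
        }
    }
}
```

Everything a CD tool would model natively (environments, approval gates, per-stage secrets) lives here as imperative code that only the pipeline’s author fully understands.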
We are all at the edge (some have already crossed it) of not being able to maintain a CD environment without the aid of algorithmic AI to help manage the growing stream of flowing features, functions, and apps. That leads to the conclusion: to scale agile, you need an intelligent pipeline.
The idea of an intelligent CD pipeline is one that is:
Built specifically for CD.
Equipped with built-in AI capabilities that allow it to scale to any size of organization and any level of complexity.
A couple additional points to consider:
Built for CD means it includes all the concepts needed to model a CD pipeline natively (as opposed to having to programmatically “teach” Jenkins or other CI tools). This includes concepts such as environment, integration user, manual step, test suite, and plan versus actual.
Built for CD also means it has built-in AI capabilities, such as predicting failure and proactively taking action at any stage of the CD chain (not CI), allowing the pipeline to choose which tests to run at each stage by identifying which tests are relevant to a given change and which are not.
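To illustrate the kind of selection logic such a capability implies, here is a hedged sketch of predictive test selection. The function name, scoring weights, and data shapes are invented for illustration and are not any product’s actual algorithm; real systems train models on far richer signals.

```python
# Sketch of predictive test selection (hypothetical, for illustration only):
# rank tests by overlap with the changed files and historical failure rate,
# then run only the tests most likely to catch a regression at this stage.

def select_tests(tests, changed_files, history, budget):
    """Return up to `budget` tests, highest estimated value first.

    tests         -- {test_name: set of source files it covers}
    changed_files -- set of files modified in this release candidate
    history       -- {test_name: past failure rate, 0.0..1.0}
    budget        -- max number of tests this stage has time to run
    """
    def score(name):
        covered = tests[name]
        overlap = len(covered & changed_files) / max(len(covered), 1)
        # Invented weights: relevance to the change matters most.
        return 0.7 * overlap + 0.3 * history.get(name, 0.0)

    ranked = sorted(tests, key=score, reverse=True)
    # Skip tests with no relevance to this change and no failure history.
    relevant = [t for t in ranked if score(t) > 0]
    return relevant[:budget]

tests = {
    "test_login":    {"auth.py", "session.py"},
    "test_checkout": {"cart.py", "payment.py"},
    "test_search":   {"search.py"},
}
history = {"test_checkout": 0.4, "test_search": 0.05}
picked = select_tests(tests, {"payment.py"}, history, budget=2)
# picked → ["test_checkout", "test_search"]; test_login is skipped entirely.
```

The payoff is the same one described above: each stage spends its time budget on the tests that matter for this particular change, instead of rerunning everything everywhere.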
There is nothing inherently wrong with using Jenkins for CI; however, bending and stretching it to attempt enterprise-scale CD is likely to prove a futile effort, an attempt to “teach an old dog new tricks.” The best way forward is to lay out your strategy and plans for scale, with carefully designated CI and CD solutions working in unison.
Scott Willson has over 20 years of technology and leadership experience spanning verticals such as manufacturing, technology, finance, sales, and marketing. Among other accomplishments during that time, he led the data transformation effort for the $6.6B US Robotics/3Com merger and led the automation initiative that automated regulatory compliance for over 10,000 registered reps at a broker-dealer. He was one of the published authors of Gene Kim's DevOps Forum papers and is active in the DevOps community. He is currently in product marketing for the Broadcom Enterprise Software Division.