There is an old saying that you can’t teach an old dog new tricks (the original version, from Nathan Bailey, is “An old dog will learn no tricks”), and while the saying isn’t literally true, the sentiment behind it is familiar. A related phrase is “old habits die hard”—and we see this playing out every day in the rapidly evolving software development world. We meet team after team who have taken their continuous integration (CI) tool (most frequently it's Jenkins) and attempted to “teach” it how to do continuous delivery (CD). But there is a better way—one actually designed to help organizations reach the ultimate goal of continuous dev/test/deploy/repeat.
I believe the core of the confusion we see in the market stems from the use of the term “pipeline” in both CI and CD. CI and CD pipelines are two very different things, built for two different purposes. They are connected and interdependent, but they shouldn’t be mistaken for doing the same job. Let’s dive deeper.
Below are the main differences between what CI and CD do:
Figure 1.
What we want to focus on today is the shortcomings of attempting to build a CD pipeline with Jenkins, and the benefits of constructing a more intelligent pipeline for CD (with CI as the first step in the process).
A Jenkins pipeline is defined as the following in the Jenkins documentation:
“Jenkins Pipeline (or simply “Pipeline” with a capital “P”) is a suite of plugins that supports implementing and integrating continuous delivery pipelines into Jenkins.”
Now plugins in and of themselves are not necessarily a bad thing, of course, but there are shortcomings:
Maintaining them can be a daunting task, especially these days, when everything changes so fast. Plus, numerous changes (in plugins and plugin-dependencies) can render your pipeline inoperative at any given moment.
An even bigger missing piece is that Jenkins was designed to do CI, not CD, so you have to “teach” it many concepts before it can even attempt CD. Further, you would have to somehow enable it to be flexible across stages and environments (for example, so it can handle different user credentials and secrets per environment, understand the difference between a container and another kind of artifact, and so on). The way people do this in CI is by scripting these things, which is extremely complex and eventually leads to an almost hardcoded “pipeline” that requires constant maintenance. Many teams find they spend more time maintaining the pipeline than actually building and shipping code.

Finally, even if you’ve accepted this reality with a Jenkins pipeline, you are still going to hit a wall. Application and testing environments are quickly growing so large, with so many moving parts (artifacts, containers, environments, test tools and test types, deployment strategies, rollbacks, and roll-forwards), that many teams are simply unable to catch up—which brings me to the final point.
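To make the scripting problem concrete, here is a hypothetical declarative Jenkinsfile sketch (the stage names, credential IDs, and `deploy.sh` script are all invented for illustration) showing how CD concerns tend to get hand-coded into a CI tool:

```groovy
// Hypothetical Jenkinsfile: CD concerns scripted into a CI pipeline.
// Credential IDs, environments, and deploy.sh are illustrative only.
pipeline {
    agent any
    stages {
        stage('Build & Test') {          // the part Jenkins was designed for
            steps {
                sh 'make build && make test'
            }
        }
        stage('Deploy to Staging') {     // CD concerns start creeping in
            environment {
                // per-environment secret, wired in by hand
                DEPLOY_CREDS = credentials('staging-deploy-creds')
            }
            steps {
                sh './deploy.sh staging' // environment logic hardcoded in a script
            }
        }
        stage('Deploy to Production') {  // near-duplicate stage, separate secrets
            when { branch 'main' }
            environment {
                DEPLOY_CREDS = credentials('prod-deploy-creds')
            }
            steps {
                input message: 'Promote to production?' // manual gate, scripted by hand
                sh './deploy.sh production'
            }
        }
    }
}
```

Note how every new environment, artifact type, or deployment strategy means another hand-maintained stage and script—which is exactly the maintenance burden described above.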
We are all at the edge (some have already crossed it) of not being able to maintain a CD environment without algorithmic AI helping us manage the growing stream of features, functions, and apps. This leads to a simple conclusion: to scale agile, you need an intelligent pipeline.
The idea of an intelligent CD pipeline is:
Figure 2.
A couple of additional points to consider:
There is nothing inherently wrong with using Jenkins for CI; however, bending and stretching it to attempt enterprise-scale CD may well prove a futile effort—an attempt to teach an old dog new tricks. The best way forward is to lay out your strategy and plans for scale, with carefully designated CI and CD solutions working in unison.
Scott Willson has over 20 years of technology and leadership experience spanning verticals such as manufacturing, technology, finance, sales, and marketing. Among other accomplishments during that time, he led the data transformation effort for the $6.6B US Robotics/3Com merger, and he led the automation initiative that automated regulatory compliance for over 10,000 registered reps at a broker-dealer. He was one of the published authors of Gene Kim's DevOps Forum papers and is active in the DevOps community. He is currently in product marketing for the Broadcom Enterprise Software Division.