Optimized CD Pipelines: Employing a Hybrid Mix of As-Code and Low-Code Approaches

    Integrating as-code with low-code, model-based solutions for more reliable, scalable hybrid DevOps pipelines.

    Introduction

    “Infrastructure as code” (IAC) techniques have emerged as a popular paradigm for codifying and managing infrastructure as versioned software, which can help drive automated deployments in DevOps pipelines.

    The concept of IAC has been extended to “everything as code” (EAC), which applies the paradigm to other aspects of DevOps, such as testing, security, databases, and operations.

    While treating everything as code provides many benefits, it also has its drawbacks. Code sprawl and complexity create their own quality and maintenance challenges.

    Some of the biggest impediments to continuous delivery (CD) include environment configuration, provisioning, and deployment of applications. IAC approaches automate the configuration and deployment, but often require detailed scripting to define these topologies, provision them, and then deploy applications into environments that have different configurations. In addition, such scripts proliferate depending on the number and types of environments (for example, containerized, cloud, on-premises, hybrid, and so on).
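
    To make the point concrete, here is a minimal, hypothetical Python sketch of the kind of as-code deployment scripting that tends to proliferate; the deployment names, hosts, and paths are illustrative only:

        # Hypothetical as-code deployment scripts; a near-duplicate variant tends to
        # exist for every environment type (containerized, cloud, on-premises, ...).
        import subprocess

        def deploy_to_kubernetes(image: str, namespace: str) -> None:
            # Containerized environments: apply the manifests, then roll the image.
            subprocess.run(["kubectl", "-n", namespace, "apply", "-f", "manifests/"], check=True)
            subprocess.run(["kubectl", "-n", namespace, "set", "image",
                            "deployment/web", f"web={image}"], check=True)

        def deploy_to_vm(artifact: str, host: str) -> None:
            # On-premises VMs: copy the artifact and restart the service.
            subprocess.run(["scp", artifact, f"deploy@{host}:/opt/app/"], check=True)
            subprocess.run(["ssh", f"deploy@{host}", "sudo systemctl restart app"], check=True)

        # ...a third variant for cloud targets, a fourth for hybrid ones, and so on.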

    During a recent webinar we conducted, polling indicated that over 70% of the attendees were using Jenkins for managing CD pipelines as code.

    Specifically, while Jenkins is a great tool for continuous integration (CI), it presents several challenges for complex, enterprise CD scenarios.

    Most of the attendees indicated that they faced a variety of challenges implementing as-code Jenkins pipelines, including buggy script sprawl; dependency on untested third-party plugins; difficulty in creating, debugging, and maintaining complex pipelines; and difficulty in supporting compliance and audit management. In addition, as-code pipelines require scarce technical and development resources, and make usage difficult for functional experts, such as testers, deployment specialists, and release train managers.

    In this blog, we propose a hybrid approach that includes as-code and low-code models for developing and executing complex CD pipelines.

    What is a Model?

    A model, in our context, is a form of abstraction for different types of entities in a CD system, including code, tests, data, infrastructure, and so on. For example, model-based testing is an emerging discipline that allows us to represent tests as models that can be used to generate actual tests. Similarly, model-based software engineering allows us to design software as a model from which code can be generated.
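
    As a minimal illustration of the idea, consider model-based testing: the tests below are not written by hand but generated from a model. The release states and transitions are hypothetical examples:

        # A minimal sketch of model-based testing: the model (a small state machine
        # describing valid release-state transitions) is used to generate concrete tests.
        from itertools import product

        # The model: which release states may legally follow which.
        TRANSITIONS = {
            "built":  ["tested"],
            "tested": ["staged", "rejected"],
            "staged": ["released", "rejected"],
        }

        def generate_transition_tests():
            """Yield one test case per (from, to) pair, marking whether it should succeed."""
            states = set(TRANSITIONS) | {s for targets in TRANSITIONS.values() for s in targets}
            for src, dst in product(sorted(states), repeat=2):
                yield {"from": src, "to": dst, "should_succeed": dst in TRANSITIONS.get(src, [])}

        for case in generate_transition_tests():
            print(case)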

    A model-based approach offers many advantages, such as:

    • Models are visual, easy to understand, and better at representing relationships between different components. Modeling can be a great way to express complex behavior as well as to disambiguate such behavior.
    • The associated entity (for which the model is an abstraction—for example, code or tests) can be generated from the model with greater precision, accuracy, and quality, using sophisticated algorithms.
    • Change management is easier, since the model can be used to define the change and predict the impact of change, thereby promoting greater agility and optimization of effort.

    The following figure captures some of the key differences between as-code and as-model approaches:

    Figure 1

    As-code and as-model approaches need not be mutually exclusive. They may co-exist in the same DevOps ecosystem and address the needs of different audiences at different stages of the CD pipeline. For example, modeling may be used extensively by functional SMEs during the design process, while coding is done more by developers during development. For co-existence to be possible, however, it must be possible to translate models to code and vice-versa. See figure below.

    Figure 2

    Model-Based Approaches for DevOps Pipelines

    Modeling enables teams to abstract the elements of applications (the “what”), environments (the “where”), and the deployment workflow (the “how”) into an overall recipe. Based on this recipe, lower-level artifacts can be created using code generation. See figure below.

    Figure 3

    Essentially, this allows the modeling of the entire DevOps pipeline along with the topologies of the environments that make up the pipeline.
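
    As a minimal sketch of the “recipe” idea (the step names, hosts, and paths are hypothetical), the application, environment, and workflow can be combined in one model from which lower-level commands are generated:

        # Hypothetical "recipe" that combines the what (application), the where
        # (environment), and the how (workflow), and from which lower-level
        # deployment commands are generated.
        from dataclasses import dataclass

        @dataclass
        class Recipe:
            application: dict  # the "what": artifacts, versions, service names
            environment: dict  # the "where": hosts, paths, settings
            workflow: list     # the "how": ordered, abstract step names

        def generate_commands(recipe: Recipe) -> list:
            """Expand abstract workflow steps into concrete, environment-specific commands."""
            app, env, commands = recipe.application, recipe.environment, []
            for step in recipe.workflow:
                if step == "copy_artifact":
                    commands.append(f"scp {app['artifact']} deploy@{env['host']}:{env['app_dir']}")
                elif step == "restart_service":
                    commands.append(f"ssh deploy@{env['host']} sudo systemctl restart {app['service']}")
            return commands

        recipe = Recipe(
            application={"artifact": "web-1.4.2.jar", "service": "web"},
            environment={"host": "qa01.example.com", "app_dir": "/opt/app"},
            workflow=["copy_artifact", "restart_service"],
        )
        print("\n".join(generate_commands(recipe)))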

    Let us look at each of the modeling dimensions.

    Application Models (the “What”)

    Applications are made up of numerous binary artifacts, configuration settings, scripts, and dependent services. Artifacts originate from multiple sources, including repositories, build servers, file shares, and FTP servers. And each artifact requires exact versions to match the current deployment. Designing a logical model of your application provides a layer of abstraction between all the physical bits and bytes and your CD pipeline, enabling the same underlying automation mechanics to execute in any environment. Some CD release automation or deployment automation solutions directly map physical artifacts to modeled application components or environment deployment targets. This means each new application version requires a redesign of your logical model and, with it, your deployment pipeline. Such one-to-one mapping (not modeling) does not provide the benefits one expects when troubleshooting, since the automation mechanics themselves must be evaluated as part of your problem-resolution efforts.

    A proper application model will provide consistency and repeatability across all environments and release stages. This reduces audit concerns and troubleshooting efforts. Additionally, a logical application model will allow the automated deployment mechanics to execute on a centralized development server or container, or across distributed production servers or containers.

    Application models should, in turn, provide a packaging construct that acts like a bill of materials (BoM) for each deployment pipeline, representing a mash-up of the different versioned components that should be deployed together. Packages should also provide a state flow that allows you to accept or reject them for promotion. This provides key insights into the productivity and efficiency of your CD release automation practice and supplies the forensics for process improvement.
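
    A minimal sketch of such a package might look like the following; the component names, versions, and sources are hypothetical:

        # Hypothetical application model: a package acts as a bill of materials (BoM)
        # of versioned components, with a simple accept/reject state flow for promotion.
        from dataclasses import dataclass, field

        @dataclass
        class Component:
            name: str
            version: str
            source: str  # repository, build server, file share, ...

        @dataclass
        class Package:
            components: list = field(default_factory=list)
            state: str = "candidate"  # candidate -> accepted or rejected

            def accept(self):
                self.state = "accepted"

            def reject(self, reason: str):
                self.state = "rejected"
                print(f"rejected: {reason}")

        pkg = Package(components=[
            Component("web-ui", "1.4.2", "artifact-repo://releases/web-ui"),
            Component("orders-db", "0.9.1", "build-server://orders-db/nightly"),
        ])
        pkg.accept()  # promote this exact set of versions to the next stage
        print(pkg.state, [f"{c.name}@{c.version}" for c in pkg.components])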

    Pipeline Models (the “How”)

    Pipeline models can be of different types: declarative (describing the desired end state) or imperative/workflow (describing how to get to that end state). Declarative models prescribe how the deployment process will occur as defined by the software vendor, leaving DevOps teams little flexibility to determine their own best practices or adapt to technology changes.

    Workflow models, by contrast, provide DevOps teams with complete flexibility and visibility into how software will be delivered.

    Workflows replace the manual tasks that are required to install, upgrade, patch, or roll back an application. And they can be reused, time and again, across any environment the application operates in. Mature DevOps pipeline solutions allow us to model workflows visually on a canvas, much like a Microsoft Visio-type block diagram. Workflows are assembled by dragging and dropping steps or tasks from a library of existing behaviors and integrations for most of the common application hosts.

    Workflows are able to handle retries, if-then-else branching, partial failure or partial success processing, runtime decision making, workload distribution, rollbacks, and much, much more.
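
    The following is a minimal sketch (not any particular product's engine) of what a workflow model with retries and rollback might reduce to; the step functions are placeholders:

        # Minimal sketch of a workflow model that handles retries and rollback,
        # two of the behaviors described above; the steps themselves are placeholders.
        import time

        def run_step(step, retries=2, delay=1):
            """Run a single step, retrying on failure; return True if it eventually succeeds."""
            for attempt in range(retries + 1):
                try:
                    step()
                    return True
                except Exception as exc:
                    print(f"{step.__name__} failed (attempt {attempt + 1}): {exc}")
                    time.sleep(delay)
            return False

        def run_workflow(steps, rollback):
            """Run steps in order; on an unrecoverable failure, invoke the rollback step."""
            for step in steps:
                if not run_step(step):
                    print("workflow failed; rolling back")
                    rollback()
                    return False
            return True

        def stop_service():  print("stopping service")
        def copy_files():    print("copying new files")
        def start_service(): print("starting service")
        def restore_previous_version(): print("restoring previous version")

        run_workflow([stop_service, copy_files, start_service], restore_previous_version)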

    The examples below illustrate the breadth of infrastructures and tools for which out-of-the-box workflows can be assembled:

    • Application servers, such as Oracle WebLogic, IBM WebSphere, and JBoss/IIS
    • Database servers, such as Oracle and Microsoft SQL Server
    • Integration servers, such as BizTalk
    • Container technologies, such as Docker, Kubernetes, and LXC
    • Cloud providers, such as AWS, Microsoft Azure, and others
    • OS commands, package managers, and source control systems, such as GitHub and Bitbucket
    • CI tools, such as Jenkins, Bamboo, and Travis CI

    Out-of-the-box integrations provide a standard and quality-assured way to consistently deploy new changes to various servers. These integrations can also be executed using the different credentials (impersonation) that are required during the process, without compromising security. These integrations are also automatically audited 100% of the time.

    Environment Models (the “Where”)

    Many errors are caused by differences in the environment settings of a particular deployment, not by workflow bugs. The problem is that software and applications are susceptible to even the smallest variations in environment and settings. A simple example is how file folder structures can vary between QA and production servers. More subtle differences might include the port configuration of a service, execution credentials, and even the configuration of the environment itself, such as cluster sizes. Accounting for these important variables during deployment execution is a tedious and error-prone process. Users have to verify and adjust for all of them on every target machine they touch, as well as the system as a whole. When you consider tens or even hundreds of servers that need to be updated, the matrix quickly becomes unmanageable by manual methods, and scripting these adjustments requires hard-coded logic that is frequently written on a one-off basis.

    Environment models let operators and developers control runtime settings in a single place. The system propagates the correct settings to the correct executions at the correct moment, automatically. The model takes care of the variations that exist between environments (such as the number and types of servers), server settings (such as directory and version differences), and configuration data (such as variable values). The model includes deployment targets, profiles, logins, and more. It not only ensures the correct settings are propagated at the right time to the correct execution, but also plays a crucial role in keeping workflows simple and manageable. In fact, not having a built-in model results in very complex workflows and forces runtime settings to be stored in external sources, such as XML files or databases. This can be both insecure and complicated, as the user needs to read, parse, and branch execution flows as part of the workflow.
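
    A minimal sketch of an environment model, with hypothetical hosts, paths, and ports, might look like this; note that the deployment logic itself never changes between environments:

        # Hypothetical environment model: the workflow stays identical everywhere;
        # only the settings resolved from the model differ per environment.
        ENVIRONMENTS = {
            "qa":   {"hosts": ["qa01"], "app_dir": "/opt/app", "port": 8081},
            "prod": {"hosts": ["prod01", "prod02", "prod03"], "app_dir": "/srv/app", "port": 443},
        }

        def deploy(artifact: str, env_name: str) -> None:
            env = ENVIRONMENTS[env_name]
            for host in env["hosts"]:
                # The same generic steps run on every target; the model supplies the specifics.
                print(f"copy {artifact} to {host}:{env['app_dir']}")
                print(f"configure service on {host} to listen on port {env['port']}")

        deploy("web-1.4.2.jar", "qa")
        deploy("web-1.4.2.jar", "prod")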

    Release Models (the “Payload”)

    Mature DevOps pipelines decouple releases from deployments. Release modeling also helps us map actual application requirements (features/stories) to deployments. Large enterprises typically have multiple release trains (or deployment pipelines) active at the same time, tied to different teams working on different backlogs (see figure below). As the payload of these trains/pipelines changes, manually tracking and adapting to such changes can be a challenge. Modeling release trains with application payload information allows the deployment packages to be re-configured automatically, without manual intervention.

    Figure 4
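
    As a minimal sketch (the train name, stories, and component versions are hypothetical), a release model might map features to component versions, with the deployment package re-derived whenever the payload changes:

        # Hypothetical release-train model: each train carries a payload of features,
        # each mapped to the component versions that implement it; the deployment
        # package is re-derived from the model whenever the payload changes.
        RELEASE_TRAINS = {
            "2024.06": {
                "STORY-101": {"web-ui": "1.4.2"},
                "STORY-117": {"orders-svc": "2.1.0", "orders-db": "0.9.1"},
            },
        }

        def build_deployment_package(train_id: str) -> dict:
            package = {}
            for components in RELEASE_TRAINS[train_id].values():
                package.update(components)  # the latest mapping wins per component
            return package

        print(build_deployment_package("2024.06"))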

    Hybrid DevOps Pipelines

    As we have mentioned before, it is possible for as-code and as-model approaches to co-exist. If we think of the DevOps pipeline in terms of CI and CD stages, “as-code” solutions clearly make more sense for CI, since work there is dominated by developers, is typically less complex, and has less stringent governance controls.

    The CD stages, by contrast, may be more model-driven, since other functional SMEs (such as deployment engineers, release engineers, and SREs) are more involved in the CD pipeline. These stages are also typically more complex (since they aggregate the outputs from multiple CI engines across teams) and have much stricter governance and compliance requirements. See figure below:

    Figure 5

    Jenkins is a popular CI tool of choice for developers and represents a clear “as-code” choice for them. The rest of the CD pipeline, however, may benefit from low-code CD platforms. In fact, some of these platforms support bi-directional integration with Jenkins to enable hybrid DevOps pipelines that support both as-code and low-code mechanisms.
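
    The hand-off between the two halves can be as simple as a webhook call at the end of the CI job. The sketch below is purely illustrative; the endpoint, token placeholder, and payload fields do not correspond to any specific product's API:

        # Hypothetical hand-off from an as-code CI job to a low-code CD platform.
        import json
        import urllib.request

        def trigger_cd_release(build_number: str, artifact_url: str) -> None:
            payload = json.dumps({
                "source": "jenkins",
                "build": build_number,
                "artifact": artifact_url,
            }).encode("utf-8")
            req = urllib.request.Request(
                "https://cd.example.com/api/releases",  # placeholder endpoint
                data=payload,
                headers={"Content-Type": "application/json",
                         "Authorization": "Bearer <token>"},
            )
            with urllib.request.urlopen(req) as resp:
                print("CD platform responded:", resp.status)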

    Continuous Modeling?

    In the spirit of the “continuous everything” philosophy in DevOps, we propose “continuous modeling” as a new approach to managing CD pipelines. Continuous modeling is, in fact, an established concept in mathematics, in which modelers continuously update models of systems with new data to test their validity (or roll back if they fail). This approach results in continuous refinement. In many ways, our approach to continuous modeling for CD is similar. It allows for rapid management of change, early testing (with fast failure detection), and continuous refinement of the application system. In an age of digital transformation driven by big data, analytics, and autonomics, we feel this capability is critical as we build self-healing systems that analyze data in large streams, use machine learning to make intelligent decisions, and continuously refine our applications.