CI/CD Infrastructure Lab

Infrastructure-as-Code (IaC), that is, the entirety of your infrastructure represented in code form, is one of the key elements of the evolving DevOps world. Besides having everything "As-Code" as a guideline, we have two additional guidelines for our CI/CD ecosystem:

  • Reproducibility - we want to be able to re-create any of our assets or infrastructure elements on demand from our own artifacts in scripted form
  • Availability - we want to minimize dependencies on third party "web" downloads as much as possible, i.e., no external access required

While the above guidelines appear minor, they can be difficult to accomplish, especially when dealing with third party applications. As such, it's always a good idea to pose this question for each application you are managing:

Can you re-create this application when your external network is down?

Scripted Installs

As the primary focus is on third party applications, we want to ensure that all installations and configurations are fully scripted and abide by a simple sanity check:

Can this application be installed without a single user ever having logged into the application's web interface?

That means no wizards, no usage of web-browsers to configure, everything done from command line only, including verification of successful installation.
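As a hedged sketch of what that rule implies in practice, the snippet below installs a package non-interactively and verifies health from the command line only. The package name "exampleapp", the port 8080, and the /status endpoint are assumptions for illustration, not part of this lab.

```shell
#!/bin/sh
# Hypothetical sketch: fully scripted install with command-line-only verification.
# "exampleapp", port 8080, and the /status endpoint are illustrative assumptions.
set -e

install_app() {
  # Suppress every interactive prompt from the package manager.
  DEBIAN_FRONTEND=noninteractive apt-get install -y exampleapp
}

# Treat an HTTP 200 from the status endpoint as a verified install.
verify_status() {
  [ "$1" = "200" ]
}

# Usage (not executed here):
#   install_app
#   code=$(curl -s -o /dev/null -w '%{http_code}' http://localhost:8080/status)
#   verify_status "$code" && echo "install verified, no web UI login needed"
```

The point of the split between install and verify is that both halves run unattended, so the whole sequence can live in a pipeline with no browser involved.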

After installation, can this application be used as part of your CI/CD ecosystem without a single admin user ever having logged into the application's web interface?

This last element may not always be achievable but should still be an initial goal to pursue until all options are exhausted.

While we understand that not every third party application may be fully auto-installable and configurable, having the option to do so is a huge plus. So use that as a metric in the future when evaluating third party applications.

Docker for Processes

The majority of this lab leverages Docker for process isolation. To keep Linux distributions simple, we try to use the same distro across the board whenever possible, whether on cloud VMs or inside Docker containers; in our case, Ubuntu 20.04 LTS. Remember, leveraging Docker to manage third party applications is going to be substantially different from managing your own applications. While you may even go distroless for your own applications, the anti-pattern of container shell access is generally going to be needed when working with third party apps, or apps that are not quite as container-native as you need them to be.

As part of the initial lab we will produce the following assets:

  • Ubuntu 20.04 - Java 8 - Nexus 3
  • Ubuntu 20.04 - Java 8 - Bitbucket
  • Ubuntu 20.04 - Java 11 - Jenkins
  • Ubuntu 20.04 - Java 11 - Sonarqube

This will give us the initial portion of Source, Build, Output, Verify.
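As one hedged sketch of how such an asset might be defined, the Dockerfile below builds an Ubuntu 20.04 + Java 11 base. The exact package choices are assumptions for illustration, not prescribed by this lab.

```dockerfile
# Hypothetical sketch of a self-controlled Ubuntu 20.04 / Java 11 base image.
# Package choices below are assumptions for illustration.
FROM ubuntu:20.04

# Keep apt fully non-interactive so no prompt can block a scripted build.
ENV DEBIAN_FRONTEND=noninteractive

RUN apt-get update && \
    apt-get install -y --no-install-recommends openjdk-11-jdk-headless curl && \
    rm -rf /var/lib/apt/lists/*
```

A build would then be tagged explicitly, e.g. `docker build -t registry.local/base/ubuntu2004-java11:1.0.0 .` (a hypothetical registry name), rather than relying on :latest.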

Environment Independence

As we are using Docker for processes, we have additional rules to follow: the ability to build and run from the same source without environment-specific hardcoded reference points. That includes any type of URL, network location, or secret, but can also extend to license keys that may need to be externalized.

A developer should be able to spin up all elements of the application ecosystem, including the respective CI/CD applications, anywhere.
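One hedged way to enforce this, sketched below, is to fail fast when an environment-specific value is missing and pass everything in at run time. The variable names (APP_URL, LICENSE_KEY_FILE) and the image name are assumptions for illustration.

```shell
#!/bin/sh
# Hypothetical sketch: environment-specific values are supplied at run time,
# never baked into the image. Variable and image names are assumptions.
run_args() {
  # Fail fast if the per-environment URL was not provided.
  : "${APP_URL:?APP_URL must be set per environment}"
  printf '%s' "-e APP_URL=${APP_URL} -e LICENSE_KEY_FILE=${LICENSE_KEY_FILE:-/run/secrets/license}"
}

# Usage (not executed here):
#   docker run -d $(APP_URL=https://ci.internal run_args) myorg/exampleapp:1.2.3
```

Because nothing environment-specific lives in the image, the same artifact runs unchanged on a developer laptop, a VM, or Docker Desktop.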

The final part is ensuring that your IaC can properly run on various host systems and runtimes without issues. This means being able to build and run via Docker Desktop on Windows and macOS, while also being able to run without changes on VMs directly.

CI/CD vs Application Environments

As you look at your various environments such as development, staging and production, it is important to ensure proper isolation between the environments of your CI/CD toolkit and the environments of the applications whose lifecycle is managed by your CI/CD.

Your production CI/CD environment will control the development, staging and production environments of the applications it manages.


As you are dealing with any "Element-As-Code", version and dependency management has to be introduced; in our case, usage of Semantic Versioning (SemVer) for all our Docker images. Given the challenges of public Docker images, all our images are built "from scratch" and controlled by ourselves. Read more in our Blue Box of Info: Public Images.
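As a minimal sketch of what SemVer'd image tags could look like in practice (the registry name and the helper function are assumptions, not part of the lab):

```shell
#!/bin/sh
# Hypothetical sketch: every image gets an explicit SemVer tag, never :latest.
# Helper to bump the PATCH component of a MAJOR.MINOR.PATCH version string.
bump_patch() {
  major=${1%%.*}
  rest=${1#*.}
  minor=${rest%%.*}
  patch=${rest#*.}
  printf '%s.%s.%s' "$major" "$minor" "$((patch + 1))"
}

# Usage (not executed here):
#   NEXT=$(bump_patch 1.4.2)   # 1.4.3
#   docker build -t registry.local/cicd/jenkins:${NEXT} .
```

An explicit tag per build means any image in the registry can be traced back to the exact source that produced it.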

Blue Box of Information: Challenges of Public Images

When most DevOps'ers get started with Docker, they generally follow tutorials such as this one and refer to base images made available on Docker Hub. While this is a good starting point and convenient as you start out, it is likely not the best option if you want to follow our core infrastructure guidelines.

Distro and Package Managers

When looking at popular Docker images on Docker Hub, you will generally end up with a mix of different distros, each with its own pre-installed packages and sometimes even a different package manager. So depending on which distro you are on, you will have to use a different package manager, resulting in potentially different software being selected for installation.

So if you ever try to add debugging tools or configure other elements, being on the same base image is going to save you a lot of future headaches.
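The distro-to-package-manager mix can be sketched as below; the mapping is a simplified assumption for illustration:

```shell
#!/bin/sh
# Hypothetical sketch: the same "install curl" step differs per base image.
# The mapping below is simplified for illustration.
install_cmd() {
  case "$1" in
    ubuntu|debian) echo "apt-get install -y curl" ;;
    alpine)        echo "apk add --no-cache curl" ;;
    fedora)        echo "dnf install -y curl" ;;
    *)             echo "unknown distro: $1" >&2; return 1 ;;
  esac
}

# Standardizing on one base image (Ubuntu 20.04 in this lab) removes this branching entirely.
```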

Dockerfile FROM and :latest

As you manage your own Dockerfile relying on a Docker Hub base image, the image you select in your FROM may change between fresh build cycles. So a Dockerfile you build today may differ from the same exact Dockerfile built one day later, even if none of your own code has changed. This has the possibility of introducing incompatibilities into your Docker containers, build failures, or worse, post-build failures when running in production.

As you add new application containers to your production ecosystem, you are now possibly mixing different tags and versions of underlying base images, creating future complexity and an inability to effectively control your final Docker artifacts and dependencies.

Imagine you build a Dockerfile that uses image:latest, which itself builds on an underlying image with another FROM image:latest; any change anywhere in that tree means a new pull or build will create a different image. While tools such as SHA-pinning or docker-lock are available, thus far they do not seem to integrate nicely or easily into overall Dockerfile development. We assume this will eventually change, but for now, let's stick with the harder but more consistent way.
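For completeness, a hedged sketch of what digest pinning looks like in a Dockerfile; the digest below is a placeholder, not a real value:

```dockerfile
# Hypothetical sketch: pin the base image by digest so a rebuild tomorrow pulls
# byte-identical content. The digest shown is a placeholder, not a real value.
FROM ubuntu:20.04@sha256:0000000000000000000000000000000000000000000000000000000000000000
```

Unlike a tag, a digest can never be re-pointed at new content, which is exactly the reproducibility guarantee the guidelines above ask for; the cost is that digest updates become a manual, deliberate step.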