Jenkins Headless Install Docker Build

The next part of our Jenkins infrastructure is the ability to build Docker images from our dockerized Jenkins instances. This brings a unique challenge: how can we build Docker images inside another Docker container? The majority of recommendations boil down to mounting the Docker socket into your Jenkins container:

services:
  jenkins:
    [...]
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    [...]

While this works and lets you build your Dockerfiles, it ties your builds to the local host environment: its pre-existing operating system, Docker version, daemon configuration, build caches, credentials, and so on.

To solve this problem we will be using Docker-in-Docker, or dind. Our dind implementation is still based on https://github.com/jpetazzo/dind. However, we are in the process of moving to the "from scratch" dind-rootless image located at https://github.com/docker-library/docker/blob/master/20.10/dind-rootless/Dockerfile.

The concept is straightforward:

  • dind sidecar - a Docker-in-Docker instance exposing the Docker API
  • jenkins - installs the Docker client and docker-compose, which talk to a remote daemon over the Docker API; the remote daemon is our dind instance (see the connectivity sketch after this list)
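As a minimal sketch of how the two sides connect (the hostname and port are the ones used in the compose files later on this page; the sample build tag is hypothetical), the Jenkins side simply points its Docker client at the dind daemon via DOCKER_HOST:

# Assumes the dind service is reachable as "dind" on port 2375 (matching the compose file below)
export DOCKER_HOST=tcp://dind:2375

# Every docker / docker-compose call now talks to the dind daemon instead of a local socket
docker info
docker build -t example/helloworld:1.0.0 .   # hypothetical image tag, for illustration only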

Docker in Docker (DinD)

The Docker installation is relatively straightforward: the package manager installs a specific version of docker-ce, which pulls in the related dependencies.
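As a reference sketch, pinning the version with apt looks roughly like this (the version string is the one used in the Dockerfile below; apt-cache madison shows what the proxy repository actually offers):

# List the docker-ce versions available from the configured repository
apt-cache madison docker-ce

# Install a pinned version so the image stays reproducible
apt-get install -y docker-ce=5:20.10.5~3-0~ubuntu-focal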

images/dind/docker-compose.yml:

version: '3.3'

services:
  dind:
    image: infra/docker/dind:1.0.0
    build:
      context: .
      network: host
      args:
        ARG_ART_URL: http://d1i-doc-ngbuild:3001
      extra_hosts:
        - "d1i-doc-ngbuild:172.22.90.2"

images/dind/docker.list.base:

deb [arch=amd64] APT_URL/repository/apt-proxy-ubuntu-focal-docker/ focal stable

images/dind/Dockerfile:

FROM infra/ubuntu/focal:1.0.0

ARG ARG_ART_URL

RUN sed -e "s|APT_URL|${ARG_ART_URL}|" /etc/apt/sources.list.base > /etc/apt/sources.list \
    && apt-get update \
    && apt-get install -y curl dos2unix apt-transport-https ca-certificates software-properties-common \
    && apt-get clean \
    && rm /etc/apt/sources.list

COPY docker.list.base /etc/apt/sources.list.d/docker.list.base

RUN curl -fsSL $ARG_ART_URL/repository/apt-keys/docker/gpg | apt-key add - \
    && sed -e "s|APT_URL|${ARG_ART_URL}|" /etc/apt/sources.list.d/docker.list.base > /etc/apt/sources.list.d/docker.list \
    && sed -e "s|APT_URL|${ARG_ART_URL}|" /etc/apt/sources.list.base > /etc/apt/sources.list \
    && apt-get update \
    && apt-get install -y docker-ce=5:20.10.5~3-0~ubuntu-focal \
    && apt-get clean \
    && rm /etc/apt/sources.list \
    && rm /etc/apt/sources.list.d/docker.list

# Install the magic wrapper.
ADD ./wrapdocker /usr/local/bin/wrapdocker
RUN chmod +x /usr/local/bin/wrapdocker \
    && dos2unix /usr/local/bin/wrapdocker

CMD ["wrapdocker"]

images/dind/wrapdocker:

#!/bin/bash

# Ensure that all nodes in /dev/mapper correspond to mapped devices currently loaded by the device-mapper kernel driver
dmsetup mknodes

# First, make sure that cgroups are mounted correctly.
CGROUP=/sys/fs/cgroup
: ${LOG:=stdio}

[ -d $CGROUP ] ||
	mkdir $CGROUP

mountpoint -q $CGROUP ||
	mount -n -t tmpfs -o uid=0,gid=0,mode=0755 cgroup $CGROUP || {
		echo "Could not make a tmpfs mount. Did you use --privileged?"
		exit 1
	}

if [ -d /sys/kernel/security ] && ! mountpoint -q /sys/kernel/security
then
    mount -t securityfs none /sys/kernel/security || {
        echo "Could not mount /sys/kernel/security."
        echo "AppArmor detection and --privileged mode might break."
    }
fi

# Mount the cgroup hierarchies exactly as they are in the parent system.
for SUBSYS in $(cut -d: -f2 /proc/1/cgroup)
do
        [ -d $CGROUP/$SUBSYS ] || mkdir $CGROUP/$SUBSYS
        mountpoint -q $CGROUP/$SUBSYS ||
                mount -n -t cgroup -o $SUBSYS cgroup $CGROUP/$SUBSYS

        # The two following sections address a bug which manifests itself
        # by a cryptic "lxc-start: no ns_cgroup option specified" when
        # trying to start containers within a container.
        # The bug seems to appear when the cgroup hierarchies are not
        # mounted on the exact same directories in the host, and in the
        # container.

        # Named, control-less cgroups are mounted with "-o name=foo"
        # (and appear as such under /proc/<pid>/cgroup) but are usually
        # mounted on a directory named "foo" (without the "name=" prefix).
        # Systemd and OpenRC (and possibly others) both create such a
        # cgroup. To avoid the aforementioned bug, we symlink "foo" to
        # "name=foo". This shouldn't have any adverse effect.
        echo $SUBSYS | grep -q ^name= && {
                NAME=$(echo $SUBSYS | sed s/^name=//)
                ln -s $SUBSYS $CGROUP/$NAME
        }

        # Likewise, on at least one system, it has been reported that
        # systemd would mount the CPU and CPU accounting controllers
        # (respectively "cpu" and "cpuacct") with "-o cpuacct,cpu"
        # but on a directory called "cpu,cpuacct" (note the inversion
        # in the order of the groups). This tries to work around it.
        [ $SUBSYS = cpuacct,cpu ] && ln -s $SUBSYS $CGROUP/cpu,cpuacct
done

# Note: as I write those lines, the LXC userland tools cannot setup
# a "sub-container" properly if the "devices" cgroup is not in its
# own hierarchy. Let's detect this and issue a warning.
grep -q :devices: /proc/1/cgroup ||
	echo "WARNING: the 'devices' cgroup should be in its own hierarchy."
grep -qw devices /proc/1/cgroup ||
	echo "WARNING: it looks like the 'devices' cgroup is not mounted."

# Now, close extraneous file descriptors.
pushd /proc/self/fd >/dev/null
for FD in *
do
	case "$FD" in
	# Keep stdin/stdout/stderr
	[012])
		;;
	# Nuke everything else
	*)
		eval exec "$FD>&-"
		;;
	esac
done
popd >/dev/null


# If a pidfile is still around (for example after a container restart),
# delete it so that docker can start.
rm -rf /var/run/docker.pid

# If we were given a PORT environment variable, start as a simple daemon;
# otherwise, spawn a shell as well
if [ "$PORT" ]
then
	exec dockerd -H 0.0.0.0:$PORT -H unix:///var/run/docker.sock \
		$DOCKER_DAEMON_ARGS
else
	if [ "$LOG" == "file" ]
	then
		dockerd $DOCKER_DAEMON_ARGS &>/var/log/docker.log &
	else
		dockerd $DOCKER_DAEMON_ARGS &
	fi
	(( timeout = 60 + SECONDS ))
	until docker info >/dev/null 2>&1
	do
		if (( SECONDS >= timeout )); then
			echo 'Timed out trying to connect to internal docker host.' >&2
			break
		fi
		sleep 1
	done
	[[ $1 ]] && exec "$@"
	exec bash --login
fi
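Before wiring it into Jenkins, the dind image can be smoke-tested on its own. A rough check under the assumptions of this lab (image tag and port as defined here; --privileged is required by wrapdocker's cgroup setup):

# Start dind in daemon-only mode by setting PORT (see the wrapdocker script above)
docker run -d --privileged --name dind-test -e PORT=2375 -p 2375:2375 infra/docker/dind:1.0.0

# Point a local client at it and confirm the remote API answers
docker -H tcp://localhost:2375 version
docker -H tcp://localhost:2375 info

# Clean up the test container
docker rm -f dind-test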

Jenkins Docker Image

images/jenkins-job-pipeline-docker/docker-compose.yml:

version: '3.3'

services:
  jenkins-job-pipeline-docker:
    image: infra/jenkins-job-pipeline-docker:1.0.0
    build:
      context: .
      network: host
      args:
        ARG_ART_URL: http://d1i-doc-ngbuild:3001
      extra_hosts:
        - "d1i-doc-ngbuild:172.22.90.2"

images/jenkins-job-pipeline-docker/Dockerfile:

FROM infra/jenkins:1.0.0

ARG ARG_ART_URL
ARG user=jenkins

# to root so we can add our file
USER root

## Install Docker
RUN curl -fsSL $ARG_ART_URL/repository/dml/docker/docker/docker-20.10.5.tgz | tar zxvf - --strip 1 -C /usr/bin docker/docker

## Install Docker-Compose 
RUN curl -fsSL $ARG_ART_URL/repository/dml/docker/docker/docker-compose-1.29.2 -o /usr/local/bin/docker-compose \
    && chmod +x /usr/local/bin/docker-compose

COPY init.sh /init.sh
RUN chmod +x /init.sh \
    && dos2unix /init.sh

RUN mkdir -p /var/jenkins_home/.docker \
    && chown -R ${user}:${user} /var/jenkins_home/.docker

COPY init-02.groovy /usr/share/jenkins/ref/init.groovy.d/init-02.groovy
RUN chmod +x /usr/share/jenkins/ref/init.groovy.d/init-02.groovy
COPY pipeline-job.xml /tmp/pipeline-job.xml

# back to jenkins
USER ${user}

ENTRYPOINT ["/bin/tini", "--", "/init.sh"]
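A quick, optional sanity check that the client tooling landed in the image as expected (this simply overrides the entrypoint; it is not part of the build):

# Print the Docker client and docker-compose versions baked into the image
docker run --rm --entrypoint /bin/sh infra/jenkins-job-pipeline-docker:1.0.0 \
    -c "docker --version && docker-compose --version"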

images/jenkins-job-pipeline-docker/init-02.groovy:

import hudson.model.*;
import jenkins.model.*;

import hudson.tasks.Maven.MavenInstallation;
import hudson.tools.InstallSourceProperty;
import hudson.tools.ToolProperty;
import hudson.tools.ToolPropertyDescriptor;
import hudson.util.DescribableList;

import com.cloudbees.plugins.credentials.impl.*;
import com.cloudbees.plugins.credentials.*;
import com.cloudbees.plugins.credentials.domains.*;

Thread.start {
      
      println "--> starting init-02"

      def env = System.getenv()
      
      /*
       * Step 1 - add the default job
       *          To get the XML, create a job and use following URL:
       *                http://localhost:8080/job/default/config.xml
       *          with [default] being the job name.
       */

      
      def jobXmlFile = new File('/tmp/pipeline-job.xml')
      def jobName = "default"
      def configXml = jobXmlFile.text
            .replace('$pipeline.scm.url', env['pipeline.scm.url'])
            .replace('$pipeline.scm.credentials', env['pipeline.scm.credentials'])
            .replace('$pipeline.scriptPath', env['pipeline.scriptPath'])
            .replace('$pipeline.branch', env['pipeline.branch'])

      def xmlStream = new ByteArrayInputStream( configXml.getBytes() )

      Jenkins.instance.createProjectFromXML(jobName, xmlStream)

      println "--> added default job based on env"
      

      /*
       * Step 2 - start the job
       */

      def job = hudson.model.Hudson.instance.getJob("default")
      hudson.model.Hudson.instance.queue.schedule(job, 0)

      println "--> running job"

      sleep 10000

      println "--> Job output available at http://localhost:8080/job/default/1/consoleText"
      
}     

images/jenkins-job-pipeline-docker/init.sh:

#!/bin/bash

# copy the mounted config file
cp /var/jenkins_home/.docker/config.json.tmp /var/jenkins_home/.docker/config.json

/usr/local/bin/jenkins.sh

images/jenkins-job-pipeline-docker/pipeline-job.xml:

<?xml version='1.1' encoding='UTF-8'?>
<flow-definition plugin="workflow-job@2.41">
  <actions>
    <org.jenkinsci.plugins.pipeline.modeldefinition.actions.DeclarativeJobAction plugin="pipeline-model-definition@1.9.1"/>
    <org.jenkinsci.plugins.pipeline.modeldefinition.actions.DeclarativeJobPropertyTrackerAction plugin="pipeline-model-definition@1.9.1">
      <jobProperties/>
      <triggers/>
      <parameters/>
      <options/>
    </org.jenkinsci.plugins.pipeline.modeldefinition.actions.DeclarativeJobPropertyTrackerAction>
  </actions>
  <description></description>
  <keepDependencies>false</keepDependencies>
  <properties/>
  <definition class="org.jenkinsci.plugins.workflow.cps.CpsScmFlowDefinition" plugin="workflow-cps@2.93">
    <scm class="hudson.plugins.git.GitSCM" plugin="git@4.8.2">
      <configVersion>2</configVersion>
      <userRemoteConfigs>
        <hudson.plugins.git.UserRemoteConfig>
          <url>$pipeline.scm.url</url>
          <credentialsId>$pipeline.scm.credentials</credentialsId>
        </hudson.plugins.git.UserRemoteConfig>
      </userRemoteConfigs>
      <branches>
        <hudson.plugins.git.BranchSpec>
          <name>$pipeline.branch</name>
        </hudson.plugins.git.BranchSpec>
      </branches>
      <doGenerateSubmoduleConfigurations>false</doGenerateSubmoduleConfigurations>
      <submoduleCfg class="empty-list"/>
      <extensions/>
    </scm>
    <scriptPath>$pipeline.scriptPath</scriptPath>
    <lightweight>true</lightweight>
  </definition>
  <triggers/>
  <disabled>false</disabled>
</flow-definition>

Both the Jenkins and dind images share the same Docker engine version (20.10.5) to minimize client/server conflicts.
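Once the stack from the next sections is up, a mismatch would show up immediately in docker version. A quick way to confirm the dind daemon reports 20.10.5 (port 2375 as published below); the Jenkins-side client can be checked with the sanity command shown earlier:

# Server version is reported by the dind daemon; the client column is whatever docker binary runs this command
DOCKER_HOST=tcp://localhost:2375 docker version --format '{{.Client.Version}} / {{.Server.Version}}'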

Building Both Images

Simply build each image with docker-compose, then push both to the private registry:

docker-compose -f images/jenkins-job-pipeline-docker/docker-compose.yml build
docker-compose -f images/dind/docker-compose.yml build

#docker login docker-private.acme.com
images/docker-push.sh infra/jenkins-job-pipeline-docker:1.0.0 docker-private.acme.com
images/docker-push.sh infra/docker/dind:1.0.0 docker-private.acme.com
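The images/docker-push.sh helper is not reproduced here; under the hood, pushing to the private registry is presumably just the usual tag-and-push sequence, roughly:

# Hypothetical equivalent of images/docker-push.sh for one image: retag for the registry, then push
docker tag infra/docker/dind:1.0.0 docker-private.acme.com/infra/docker/dind:1.0.0
docker push docker-private.acme.com/infra/docker/dind:1.0.0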

Running Both Images

There are a couple of elements to cover when running both images: the credentials, the Docker client and daemon configuration files, and the compose file that ties the two services together.

composed/secrets/jenkins-passwords:

admin123
admin123

composed/docker/config.json.base:

{
    "auths": {
            "docker-private.acme.com": {
                    "auth": "BASE64_USER_PASSWORD"
            }
    }
}

composed/docker/daemon.json:

{
    "insecure-registries": [
      "docker-private.acme.com"
    ]
  }

composed/docker-compose-jenkins-job-pipeline-docker.yml:

version: '3.3'

services:
  jenkins-job-pipeline-docker:
    image: infra/jenkins-job-pipeline-docker:1.0.0
    ports:
      - "8080:8080"
    volumes:
      - ./docker/config.json.tmp:/var/jenkins_home/.docker/config.json.tmp
    environment:
      - DOCKER_HOST=tcp://dind:2375
      - artifacts.id=nexus
      - artifacts.user=admin
      - scm.id=bitbucket
      - scm.user=admin
      - pipeline.scm.url=https://bitbucket.acme.com/scm/hey/helloworld-ops.git
      - pipeline.scm.credentials=bitbucket
      - pipeline.scriptPath=jenkins-ops/Jenkinsfile
      #- pipeline.scriptPath=jenkins/Jenkinsfile-local
      - pipeline.branch=*/master
    secrets:
      - jenkins-passwords
    networks:
      - ops-network
    extra_hosts:
      - "bitbucket.acme.com:172.22.90.1"
      - "docker-private.acme.com:172.22.90.1"
  dind:
    image: infra/docker/dind:1.0.0
    ports:
      - 2375:2375
    privileged: true
    volumes:
      - ./docker/daemon.json:/etc/docker/daemon.json
    environment:
      - PORT=2375
    networks:
      - ops-network
    extra_hosts:
      - "docker-private.acme.com:172.22.90.1"
networks:
  ops-network:
    name: ops-network
secrets:
  jenkins-passwords:
    #artifacts-pwd.txt
    #scm-pwd.txt
    #ssh-passphrase.txt
    #ssh-private-key.txt
    file: secrets/jenkins-passwords
    

The dind image needs a daemon.json so that Docker daemon configuration can be injected into the otherwise generic image; in our case we mark our private registry, which uses a self-signed certificate, as an insecure registry. On the Jenkins image, we inject our Docker credentials (the Base64-encoded user:password combination, as before) so that builds can log in to that registry easily; init.sh copies the mounted config.json.tmp into place at startup.
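The Base64 value is simply the user:password pair encoded; for the lab credentials used here it can be generated with:

# Encode admin:admin123 for config.json (-n avoids a trailing newline sneaking into the value)
echo -n 'admin:admin123' | base64
# -> YWRtaW46YWRtaW4xMjM=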

sed -e "s|BASE64_USER_PASSWORD|YWRtaW46YWRtaW4xMjM=|" composed/docker/config.json.base > composed/docker/config.json.tmp

docker-compose -f composed/docker-compose-jenkins-job-pipeline-docker.yml up --force-recreate
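
Once the stack is up, a couple of rough smoke tests (service and job names as defined above; add credentials to the curl call if anonymous read access is disabled in your Jenkins setup):

# DOCKER_HOST inside the Jenkins container points at dind, so this exercises the whole chain
docker-compose -f composed/docker-compose-jenkins-job-pipeline-docker.yml exec jenkins-job-pipeline-docker docker info

# Console output of the auto-created "default" job (see init-02.groovy)
curl -s http://localhost:8080/job/default/1/consoleText | tail -n 20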