Chuan Chuan Law

DevOps | Software Automation | Continuous Integration

Year: 2017 (page 1 of 2)

Jenkinsfile – Build & Publish Docker In Docker

The Jenkinsfile below shows how to build and publish a Docker image to a Docker registry on a Dockerized Jenkins node:

//Running on a Docker node
node('<docker node label>') {

checkout([$class: 'GitSCM', branches: [[name: "${git_branch}"]], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: 'abc', url: GIT_URL]]])

stage('docker build, tag & push'){

//Credentials for the Docker registry

withCredentials([usernamePassword(credentialsId: 'dockerpush', passwordVariable: 'pass', usernameVariable: 'user')]) {

//Build the Docker image
sh 'docker build -t "${docker_source_image_tag}" .'

//Tag the image
sh 'docker tag "${docker_source_image_tag}" "${docker_target_image_tag}"'

docker.withRegistry('', 'dockerpush') {

//Log into the Docker registry
sh "docker login -u ${user} -p ${pass}"

//Push the image
sh 'docker push "${docker_target_image_tag}"'
}
}
}
}



How To Write Jenkinsfile

Jenkinsfile is another great feature of Jenkins 2.

Below is an example of a Jenkinsfile:



//Parameters of a Jenkins build
properties([parameters([
text(defaultValue: '', description: 'URL', name: 'ARTIFACT'),
choice(choices: 'qa', description: 'Deploy_Env', name: 'DEPLOY_ENV'),
string(defaultValue: 'master', description: 'Branch', name: 'BRANCH')
])])

//Which node the job should run on
node('<node label>') {

//Delete the workspace directory before the job starts
deleteDir()

//Git checkout of a certain branch using defined Git credentials
checkout([$class: 'GitSCM', branches: [[name: "${BRANCH}"]], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: 'abc', url: GIT_URL]]])

//Name of the stage of the task that is running
stage('<stage name>') {

//Credentials with a secret file configured in Jenkins
withCredentials([file(credentialsId: 'PASS', variable: 'FILE')]) {

//Execute a shell command
sh 'ansible-galaxy install -r requirements.yml --force'

//Ansible command (Ansible plugin)
ansiblePlaybook playbook: 'deploy.yml',
inventory: 'inventory/qa.inventory',
artifact_url: "${ARTIFACT}",
extras: '--diff --vault-password-file ${FILE} --tags ${ACTION}',
colorized: true
}
}
}




Enter the Jenkinsfile into Jenkins 2 as below:

(Screenshot: the Pipeline script field in the Jenkins job configuration)


How To Install Tomcat8 On Ubuntu 14 Trusty By Extracting Package


The Tomcat 8 package is not yet available for Ubuntu 14.04. However, we can set up Tomcat 8.0.32 on Ubuntu 14.04 by using the package for Ubuntu 16.

A Tomcat 8 installation consists of the following packages:

  • authbind
  • libcommons-collections3-java
  • libcommons-dbcp-java
  • libcommons-pool-java
  • libecj-java
  • libtomcat8-java
  • tomcat8
  • tomcat8-common


  • Install Tomcat8.0.32 on an Ubuntu 16 box
  • Inspect the contents of the package

dpkg -c ./path/to/tomcat8_8.0.32-1ubuntu1.3_all.deb

The paths shown will be a reference for the file structure of a Tomcat 8 installation, which we will need to mimic on the Ubuntu 14.04 box

  • Download Tomcat8.0.32 package for Ubuntu 16
  • Extract files from the Debian package:

ar -x tomcat8_8.0.32-1ubuntu1.3_all.deb

  • Extract files from data.tar.gz using tar

tar -xf data.tar.gz

  • The above will produce usr, etc and var folders
  • Move the folders above into /usr, /etc and /var respectively
  • Do the same for libtomcat8-java and tomcat8-common packages
  • Apt-get install the rest of the required packages:

  • authbind
  • libcommons-collections3-java
  • libcommons-dbcp-java
  • libcommons-pool-java
  • libecj-java

Note: if you are getting a dependency issue during the install of libecj-java, fix it with "apt-get install -f"

  • Create tomcat8 user

groupadd tomcat8

useradd -s /bin/false -g tomcat8 -d /usr/share/tomcat8 tomcat8

  • /usr/share/tomcat8/lib contains symlinks to /usr/share/java/tomcat8, which are extracted from libtomcat8-java
  • Fix file permissions and ownership in /usr, /etc, /var. Some will require ownership of root:tomcat8
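The extraction steps above can be sketched end to end. The run below substitutes a fabricated .deb for tomcat8_8.0.32-1ubuntu1.3_all.deb so the mechanics are visible; with the real package, the extracted folders would then be moved into /usr, /etc and /var as described:

```shell
set -e
work=$(mktemp -d)
cd "$work"

# Fabricate a data.tar.gz with the usr/etc/var layout a real package has
mkdir -p usr/share/tomcat8 etc/tomcat8 var/lib/tomcat8
tar -czf data.tar.gz usr etc var
rm -r usr etc var
ar r fake.deb data.tar.gz   # a .deb is an ar archive

# The actual extraction steps from the list above
ar -x fake.deb              # extract files from the Debian package
tar -xf data.tar.gz         # unpack data.tar.gz into usr, etc and var
ls -d usr etc var
```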

Consul – Alerts

Consul alerts come in really handy to notify you whenever the services that you are monitoring go down. There are many channels that we could integrate; in this blog, I'll be using HipChat.

To set up:

  • Install consul alerts

/usr/local/go/bin/go get -u

  • Set the HipChat keys. We could do this via:
  1. Key/Value on Consul UI



2. Ansible Consul module for Ansible version >= 2.0

- name: set consul HipChat enable key
  consul_kv:
    key: consul-alerts/config/notifiers/hipchat/enabled
    value: true
  when: ansible_version.major|int >= 2

  • Create a start script in /etc/init/consul-alert.conf

description "Consul alert process"

start on (local-filesystems and net-device-up IFACE=eth0)
stop on runlevel [!12345]


setuid consul
setgid consul

exec /opt/alert/bin/consul-alerts start --alert-addr=localhost:9000 --consul-addr=localhost:8500 --consul-dc=dev --consul-acl-token="" --watch-events --watch-checks --log-level=err

To use the Ansible consul module, you will also need to install the python-consul library it depends on:

- name: install the library for the Ansible consul module
  pip:
    name: python-consul
    state: present

  • Start service

service consul-alert start

Consul – Using Consul Backinator For Backup Purposes

We can use consul-backinator as a Consul KV pair backup and restore tool.

To implement this on the Consul Leader server:


  • Set up an S3 repo for the backups


  • Download Go

    sudo curl -O

  • Unarchive the tar file

sudo tar -xvf go1.8.linux-amd64.tar.gz

sudo mv go /usr/local

  • Install consul-backinator

sudo /usr/local/go/bin/go get -u

  • Add consul-backinator to cron job

sudo crontab -e

0 * * * * /root/consul-backinator/bin/consul-backinator backup -file s3://consul-backup/consul/$(uname -n)+$(date +\%Y-\%m-\%d-\%H:\%M).bak?region=us-east-1
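In the cron entry above, the % signs are escaped as \% because cron treats an unescaped % as the start of the command's stdin. The resulting backup destination can be previewed in a normal shell, where no escaping is needed:

```shell
# Preview of the backup file name generated by the cron entry
# (hostname and timestamp vary per machine and per run)
file="s3://consul-backup/consul/$(uname -n)+$(date +%Y-%m-%d-%H:%M).bak?region=us-east-1"
echo "$file"
```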


How To Setup Consul On Ubuntu


Consul is a very lightweight and simple DevOps tool by HashiCorp to enable monitoring of servers and the services on them. It comprises:

  • A cluster – a group of servers acting as Consul managers. In this blog, we are setting up a cluster of 3 servers or managers.
  • Agents – the servers that you want to monitor

The following blog is based on Ubuntu 14.04, with commands run as the root user.

Consul Cluster


  • Unzip the binary package


  • Create consul directory

mkdir -p /etc/consul.d/server

  • Create data directory

mkdir /var/consul

  • Create consul user

useradd -m consul

  • Generate consul key

consul keygen

  • Write the server config file in /etc/consul.d/server/config.json

{
"bind_addr": "<server's IP address>",
"datacenter": "dc1",
"data_dir": "/var/consul",
"encrypt": "<consul key you generated>",
"log_level": "INFO",
"enable_syslog": true,
"enable_debug": true,
"client_addr": "",
"server": true,
"bootstrap_expect": 3,
"leave_on_terminate": false,
"skip_leave_on_interrupt": true,
"rejoin_after_leave": true,
"retry_join": [
"<IP address for server 1>:8301",
"<IP address for server 2>:8301",
"<IP address for server 3>:8301"
]
}

  • Write server start script into /etc/init/consul.conf

description "Consul server process"

start on (local-filesystems and net-device-up IFACE=eth0)
stop on runlevel [!12345]


setuid consul
setgid consul

exec consul agent -config-dir /etc/consul.d/server -ui

  • Start consul:

start consul
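As an aside, the encrypt value in the config above is the output of consul keygen, which just emits random bytes, base64-encoded (16 bytes on the Consul versions of this era). When Consul is not yet installed on the box doing the provisioning, an equivalent stand-in key can be generated with openssl:

```shell
# Equivalent of `consul keygen` for the 16-byte key format
key=$(openssl rand -base64 16)
echo "$key"
```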

Consul Agent

As in Consul Server:

  • Install Consul
  • Unzip binary package
  • Create consul user
  • Create consul directory

mkdir -p /etc/consul.d/client

  • Write the client config file in /etc/consul.d/client/config.json

{
"bind_addr": "<agent's server IP>",
"datacenter": "dc1",
"data_dir": "/var/consul",
"encrypt": "<Consul key>",
"log_level": "INFO",
"enable_syslog": true,
"enable_debug": true,
"server": false,
"node_meta": {
"name": "Consul client"
},
"rejoin_after_leave": true,
"retry_join": [
"<IP of Consul server 1>",
"<IP of Consul server 2>",
"<IP of Consul server 3>"
]
}

  • Write client start script in /etc/init/consul.conf

description "Consul Client process"

start on (local-filesystems and net-device-up IFACE=eth0)
stop on runlevel [!12345]


setuid consul
setgid consul

exec consul agent -config-dir /etc/consul.d/client

  • Start the Consul agent with:

start consul

How To Perform SonarQube Upgrade on Ubuntu

SonarQube is used as an integration with Jenkins to enable code quality analysis.

Installation is pretty straightforward, as discussed in the official documentation. In this blog, I would like to share some "hiccups" that I encountered during the server upgrade scenario.

  1. Set up the PostgreSQL server

This blog has pretty comprehensive notes on this

2. Back up the existing database using pg_dump (as the postgres user)

pg_dump sonardb > sonar_bck.sql

3. Start the Sonar server on an empty database (this will not be the actual database that we will use)

$SONAR_HOME/bin/ console

4. Copy the SQL dump over from the old database to the new database

psql -h -d sonar -U postgres -f "sonar_bck.sql"

5. Point Sonar to the newly created database that was populated with data by modifying the config


6. Delete /opt/sonar/data/main/es to clear the Elasticsearch indexes if "issues" are not showing up in Sonar

7. Restart Sonar
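Pointing Sonar at the new database is done in $SONAR_HOME/conf/sonar.properties. A minimal sketch, assuming a local PostgreSQL with database and user both named sonar (adjust host, database and credentials to your setup):

```properties
# $SONAR_HOME/conf/sonar.properties
sonar.jdbc.username=sonar
sonar.jdbc.password=<password>
sonar.jdbc.url=jdbc:postgresql://localhost/sonar
```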

Jenkins – Making Data Persistent In Docker Slave

As described in Jenkins Docker Slave, every time a job runs, a new Docker container is started and then terminated upon job completion. This means all the dependencies are downloaded every time. To avoid this, we will use the volume function in Docker.

As we are using the Docker Plugin, there is a field to do this – Volumes.

(Screenshot: the Volumes field in the Docker Jenkins plugin configuration)


In the example above, we map the path /var/lib/jenkins/tools on the slave machine to the path /home/jenkins/tools in the Docker container, and /home/jenkins/.m2 on the slave machine to /home/jenkins/.m2 in the Docker container. We can specify the mode (Read/Write, Read-Only, etc.); the default is Read/Write.

Therefore, all the Jenkins tools and Maven dependencies are stored on the host (slave) machine every time a Docker container runs and will not need to be downloaded when the next Docker container starts, so build time is saved.
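For reference, the Volumes field takes one host-path:container-path mapping per line, in the same form as docker run -v, with an optional mode suffix. The example described above would be entered as:

```
/var/lib/jenkins/tools:/home/jenkins/tools
/home/jenkins/.m2:/home/jenkins/.m2:rw
```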


How To Set Up Jenkins 2.0 Master & Slaves On Docker

Jenkins 2.0 Master In Docker

We can just use the official Jenkins Docker image, or install Jenkins 2.0 normally.

 Jenkins Slave In Docker


  • Install Docker
  • Open up TCP port by adding the following in /etc/default/docker
    DOCKER_OPTS="-H tcp:// -H unix:///var/run/docker.sock"
  • On Jenkins Master, install Docker Plugin on Master->Manage Jenkins->Manage Plugins
  • On Master->Manage Jenkins->Cloud configure the communication with Jenkins slave node
  • Docker URL is the IP of the Slave machine in TCP
  • Click on Test Connection; if successful, it will show the Docker version and the Docker API version


  • Have Docker Jenkins Slave images in the slave box
  • On Master->Manage Jenkins->Manage Plugins->Cloud->Add Docker Template
  • The SSH credentials used to access the Docker Jenkins slave container are those of the Jenkins user set up in the Dockerfile and on the Jenkins master
Docker Slave Image

There are some existing Docker images that you can use, like the Docker Slave image, or you can write your own Dockerfile.

The Docker image contains, at a minimum:

  • Ubuntu
  • Java
  • Jenkins user and password
  • Git
  • OpenSSH for Jenkins Master to SSH to Slave machine
  • Maven, Ruby, etc., depending on what your project needs
  • Version managers like NVM or RVM cannot be used in Docker, because we cannot "source" files like .bashrc

The slave runs an sshd service on port 22; therefore, in the Dockerfile you will need:


CMD ["/usr/sbin/sshd", "-D"]
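Putting the requirements above together, a minimal slave Dockerfile might look like the sketch below. The package names and the jenkins password are illustrative assumptions, not a definitive image:

```dockerfile
FROM ubuntu:14.04

# Java, Git and sshd so the Jenkins master can connect and run builds
RUN apt-get update && \
    apt-get install -y openjdk-7-jdk git openssh-server && \
    mkdir -p /var/run/sshd

# Jenkins user the master will SSH in as (password is illustrative)
RUN useradd -m -s /bin/bash jenkins && \
    echo 'jenkins:jenkins' | chpasswd

EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
```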

  • We restrict Jenkins jobs to run based on the label we give in the Docker template
  • When a job runs, the Jenkins master will spin up a container from the Docker image on the slave machine
  • When the job completes, the container is terminated
  • Therefore, we can have multiple Docker images for different types of jobs (Ruby, Maven, NodeJS) on a single slave machine

Docker – Using Tiller To Overwrite Environment Specific Variables

To use Tiller, we specify in the Dockerfile:

ADD deploy/tiller/*.yaml /etc/tiller/
ADD deploy/tiller/environments/dev /etc/tiller/environments/
ADD deploy/tiller/templates/* /etc/tiller/templates/

The tiller folder comprises the following:

  • environments

This is where the YAML for environment-specific variables is written:

#These are the variables to overwrite the service.conf.erb template

service.conf.erb:
  target: /etc/appleapi/application.ini
  config:
    environment_name: "prod"

#These are the variables to overwrite the template.conf.erb template

template.conf.erb:
  target: /etc/appleapi/application.conf
  config:
    appleapi_database_url: jdbc:postgresql://appledb:5432/appleapi
    deliveryapi_database_user: apple
    deliveryapi_database_password: apple

  • templates

These are the files on the server to be overwritten:

# PostgreSQL configuration (default datasource)

  • common.yaml

data_sources: [ "defaults", "file", "environment" ]
template_sources: [ file ]
deep_merge: true

Load the defaults, file and then environment data source to overwrite the templates

  • default.yaml

application_name: "appleapi"
application_port: 9001

Default variables that are shared regardless of environment

Run it in Dockerfile:

RUN tiller -b /etc/tiller -n -v -e production
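For completeness, a template under /etc/tiller/templates consumes those environment variables through ERB placeholders. A hypothetical template.conf.erb matching the environment file above (the db.default.* property names are assumptions about the application's own config format):

```erb
# PostgreSQL configuration (default datasource)
db.default.url="<%= appleapi_database_url %>"
db.default.user="<%= deliveryapi_database_user %>"
db.default.password="<%= deliveryapi_database_password %>"
```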


© 2020 Chuan Chuan Law
