
Top 10 DevOps Tools Or Services Used

Below is the list of the top 10 tools I use on a daily basis in my job.

  1. Ansible

As a configuration management tool, Ansible takes up 80% to 90% of my daily work. All server provisioning, software installation, and management are automated with it. Automation through a configuration management tool allows repetition across multiple servers and avoids human error.
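The repetition can be as simple as one ad-hoc Ansible command. A minimal sketch, assuming a hypothetical inventory group called webservers:

# Install nginx on every host in the "webservers" group in one pass
ansible webservers -m apt -a "name=nginx state=present" --become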

2. Jenkins

All software compilation, builds, and deployments are automated on Jenkins. This includes writing Jenkins pipelines, and installing and upgrading Jenkins and its plugins.
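Jobs can also be triggered outside the UI through Jenkins' remote API. A minimal sketch, with a hypothetical job name, user, and API token:

# Trigger a build of the "my-app-build" job remotely
curl -X POST "http://jenkins.example.com:8080/job/my-app-build/build" --user deployer:API_TOKEN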

3. AWS

This is where all the servers and resources live: EC2, DNS, and other services such as Elasticsearch.

4. Terraform

This is used to provision the services and resources in AWS. I view it as the configuration management tool for AWS: it allows repetition and eliminates human error.
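Much of that error elimination comes from Terraform's plan-and-apply workflow, sketched below; the plan you review is exactly what gets applied:

# Review the changes before they touch AWS, then apply that same reviewed plan
terraform init
terraform plan -out=plan.tfplan
terraform apply plan.tfplan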

5. Elasticsearch

This is where all the logs go. Maintenance work such as automating snapshots, backups, and Curator clean-ups is part of the job.

6. Operating system

System administration work on operating systems such as Ubuntu: diagnosing and troubleshooting issues, installing and upgrading packages.

7. Nginx

Load balancing for applications and services.

8. Docker

Containerization has become important these days due to the cost savings it brings, so most servers are being shifted towards provisioning in Docker.
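As a minimal sketch with a hypothetical image name, provisioning in Docker can be as simple as:

# Run the app as a container instead of on a dedicated server
docker run -d --name web -p 8080:8080 --restart unless-stopped myapp:latest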

9. Monitoring tools

Integrating monitoring software into applications, services, and databases using services such as New Relic, AppDynamics, and Datadog.

10. HashiCorp Vault

Used to store all secrets and sensitive information for applications.
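A minimal sketch of the day-to-day usage, assuming a KV secrets engine mounted at secret/ and a hypothetical path:

# Write an application secret, then read it back
vault kv put secret/myapp/db password=s3cr3t
vault kv get secret/myapp/db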

How To Fix 0% Coverage On SonarQube

  • Make sure the JaCoCo Overall Coverage Summary is showing the test coverage results
  • Ensure the maven-surefire-plugin is configured

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>${dep.apache.maven.test.plugins}</version>
  <configuration>
    <argLine>${surefireArgLine}</argLine>
    <includes>
      <include>**/*Test.java</include>
    </includes>
    <excludes>
      <exclude>**/*IntegrationTest.java</exclude>
    </excludes>
  </configuration>
</plugin>

  • Ensure the jacoco-maven-plugin is configured

<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <version>${jacoco-maven-plugin.version}</version>
  <executions>
    <execution>
      <id>pre-unit-test</id>
      <phase>initialize</phase>
      <goals>
        <goal>prepare-agent</goal>
      </goals>
      <configuration>
        <propertyName>surefireArgLine</propertyName>
      </configuration>
    </execution>
    <execution>
      <id>pre-int-test</id>
      <phase>pre-integration-test</phase>
      <goals>
        <goal>prepare-agent-integration</goal>
      </goals>
    </execution>
    <execution>
      <id>report-generation</id>
      <phase>verify</phase>
      <goals>
        <goal>report</goal>
        <goal>report-integration</goal>
      </goals>
    </execution>
  </executions>
</plugin>
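With both plugins in place, a typical invocation runs the tests, generates the JaCoCo reports during verify, and pushes the analysis to SonarQube. A sketch, assuming the sonar-maven-plugin is available and using a hypothetical host URL:

mvn clean verify sonar:sonar -Dsonar.host.url=http://sonarqube.example.com:9000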

Ref: https://docs.sonarqube.org/display/PLUG/Usage+of+JaCoCo+with+SonarJava

How To Migrate An Elasticsearch Cluster To A New Cluster

Introduction

There are times when we will choose to set up a new cluster and migrate the data instead of upgrading the existing one, especially when the version we want to upgrade to is many versions ahead of the current one. In that case, the risk associated with an in-place upgrade is higher, and the downtime of the existing cluster during the upgrade is longer.

Set Up The New ELK Cluster

Install the new versions in the following sequence:

  • Elasticsearch
  • Kibana
  • Logstash

The official Elasticsearch guide covers the installation comprehensively, so this blog will focus on personal experience and the problems encountered that are not part of the installation guide.

Elasticsearch

The main task here is to snapshot the current Elasticsearch cluster and restore it onto the new one.

  • Register a backup repository with the current ES cluster. In this blog, we use the S3 repository

curl -X PUT "http://oldelasticsearch:9200/_snapshot/s3_repository?verify=false" -H 'Content-Type: application/json' -d'
{
  "type": "s3",
  "settings": {
    "bucket": "es-snapshot",
    "region": "us-east-1"
  }
}'

  • Snapshot to S3

curl -X PUT "http://oldelasticsearch:9200/_snapshot/s3_repository/snap1?pretty&wait_for_completion=true"
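Snapshots of a large cluster can take a while, so it helps to poll the progress with the snapshot status API:

curl -X GET "http://oldelasticsearch:9200/_snapshot/s3_repository/snap1/_status?pretty"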

  • Register the backup repository on the new cluster

curl -X PUT "http://newelasticsearch:9200/_snapshot/s3_repository?verify=false" -H 'Content-Type: application/json' -d'
{
  "type": "s3",
  "settings": {
    "bucket": "es-snapshot",
    "region": "us-east-1"
  }
}'

  • Restore from S3 bucket on the new ES cluster

curl -X POST "http://newelasticsearch:9200/_snapshot/s3_repository/snap1/_restore"
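Once the restore completes, a quick way to confirm that the indices made it across is the cat API:

curl "http://newelasticsearch:9200/_cat/indices?v"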

Kibana

The hiccups you will see from the newly installed Kibana will be due to conflicting .kibana indices between the new and old clusters.

To solve this, do the following:

  • Close the kibana index

curl -X POST "http://newelasticsearch:9200/.kibana/_close"

  • Delete the kibana index

curl -X DELETE "http://newelasticsearch:9200/.kibana"

  • Restore kibana index from S3 (old cluster)

curl -X POST "http://newelasticsearch:9200/_snapshot/s3_repository/snap1/_restore" -H 'Content-Type: application/json' -d'
{
  "indices": ".kibana",
  "ignore_unavailable": true,
  "include_global_state": true
}'

  • Open the kibana index again

curl -X POST "http://newelasticsearch:9200/.kibana/_open"

  • Restart kibana
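To confirm that everything worked, check that the .kibana index is back and open:

curl "http://newelasticsearch:9200/_cat/indices/.kibana?v"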

Another issue with the Kibana setup will be the Logtrail plugin. The plugin version needs to match the Kibana version exactly, so we will need to do some manual hacks.

Below are the hacks as Ansible tasks:

  • Download Logtrail

- name: download logtrail
  get_url:
    url: https://github.com/sivasamyk/logtrail/releases/download/v0.1.23/logtrail-5.6.5-0.1.23.zip
    dest: /tmp

  • Unzip Logtrail

- name: unzip logtrail
  unarchive:
    src: /tmp/logtrail-5.6.5-0.1.23.zip
    dest: /tmp
    remote_src: yes  # the zip is already on the target host

  • Modify the kibana version in package.json

- name: modify the kibana version in package.json
  lineinfile:
    path: /tmp/kibana/logtrail/package.json
    regexp: '"version": "5.6.5"'
    line: '"version": "5.6.15"'

  • Zip it back

- name: zip logtrail back
  archive:
    path: /tmp/kibana
    dest: /usr/share/kibana/bin/logtrail-5.6.5-0.1.23.zip
    format: zip
    mode: 0664

  • Install the modified Logtrail

- name: install modified logtrail
  shell: ./kibana-plugin install file:///usr/share/kibana/bin/logtrail-5.6.5-0.1.23.zip
  args:
    chdir: /usr/share/kibana/bin

Consul – Integrating Nagios Checks

We can integrate script checks into Consul. To do so with Nagios:

  • Install nagios-plugins-basic, which contains some basic checks such as disk utilization and CPU load

apt-get install nagios-plugins-basic

  • There are also other custom Nagios checks available on the internet, e.g. a check for open connections

Put the script in:

/usr/lib/nagios/plugins
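It is worth testing a plugin by hand before wiring it into Consul, for example the stock disk check that ships with nagios-plugins-basic:

# Warn below 20% free disk space, go critical below 10%
/usr/lib/nagios/plugins/check_disk -w 20% -c 10%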

  • Create JSON config (connections.json) and put it under /etc/consul.d/client

{
  "check": {
    "name": "Open Connections",
    "interval": "60s",
    "args": ["/usr/lib/nagios/plugins/check_connections", "-c", "{{connection_limit_critical}}", "-w", "{{connection_limit_warn}}"],
    "status": "passing"
  }
}

  • Restart Consul service
  • You should see the check appear on the Consul UI
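You can also verify that the check registered from the command line, via the local agent's HTTP API:

curl http://localhost:8500/v1/agent/checks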

Consul – Alerts

consul-alerts comes in really handy to notify you whenever the services you are monitoring go down. There are many channels we could integrate with; in this blog, I'll be using HipChat.

To set up:

  • Install consul-alerts

/usr/local/go/bin/go get -u github.com/AcalephStorage/consul-alerts

  • Set the HipChat keys. We could do this via:
  1. Key/Value on the Consul UI


2. The Ansible consul_kv module, for Ansible version >= 2.0

- name: set consul HipChat enable key
  consul_kv:
    key: consul-alerts/config/notifiers/hipchat/enabled
    value: true
  when: ansible_version.major|int >= 2

  • Create a start script in /etc/init/consul-alert.conf

description "Consul alert process"

start on (local-filesystems and net-device-up IFACE=eth0)
stop on runlevel [!12345]

respawn

setuid consul
setgid consul

exec /opt/alert/bin/consul-alerts start --alert-addr=localhost:9000 --consul-addr=localhost:8500 --consul-dc=dev --consul-acl-token="" --watch-events --watch-checks --log-level=err

To use the Ansible consul_kv module, you will need to install its Python dependency:

- name: install ansible consul module
  pip:
    name: python-consul
    state: present

  • Start the service

service consul-alert start
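As a quick sanity check that the HipChat key was actually set, read it back raw from the Consul KV store:

curl "http://localhost:8500/v1/kv/consul-alerts/config/notifiers/hipchat/enabled?raw"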

Consul – Using Consul Backinator For Backup Purposes

We can use consul-backinator as a Consul KV pair backup and restore tool.

To implement this on the Consul Leader server:

Prerequisites:

  • S3 bucket for backups
  • AWS CLI

Steps:

  • Download Go

    sudo curl -O https://storage.googleapis.com/golang/go1.8.linux-amd64.tar.gz

  • Unarchive the tar file

sudo tar -xvf go1.8.linux-amd64.tar.gz

sudo mv go /usr/local

  • Install consul-backinator

sudo /usr/local/go/bin/go get -u github.com/myENA/consul-backinator

  • Add consul-backinator to cron job

sudo crontab -e

0 * * * * /root/consul-backinator/bin/consul-backinator backup -file s3://consul-backup/consul/$(uname -n)+$(date +\%Y-\%m-\%d-\%H:\%M).bak?region=us-east-1
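For completeness, restoring the KV pairs from one of those backups is the mirror operation. A sketch with a hypothetical backup file name, assuming the same S3 layout as the cron job above:

/root/consul-backinator/bin/consul-backinator restore -file "s3://consul-backup/consul/myhost+2017-08-31-12:00.bak?region=us-east-1"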

 

How To Set Up Consul On Ubuntu

Introduction

Consul is a very lightweight and simple DevOps tool by HashiCorp that enables monitoring of servers and the services on them. It comprises:

  • A cluster – a group of servers acting as Consul managers. In this blog, we are setting up a cluster of 3 servers or managers.
  • Agents – the servers that you want to monitor

The following blog is based on Ubuntu 14.04, with all commands run as the root user.

Consul Cluster

  • Download the Consul binary package

wget https://releases.hashicorp.com/consul/0.8.0/consul_0.8.0_linux_amd64.zip

  • Unzip the binary package

unzip consul_0.8.0_linux_amd64.zip

  • Create consul directory

mkdir -p /etc/consul.d/server

  • Create data directory

mkdir /var/consul

  • Create consul user

useradd -m consul

  • Generate consul key

consul keygen

  • Write the server config file to /etc/consul.d/server/config.json

{
  "bind_addr": "<server's IP address>",
  "datacenter": "dc1",
  "data_dir": "/var/consul",
  "encrypt": "<consul key you generated>",
  "log_level": "INFO",
  "enable_syslog": true,
  "enable_debug": true,
  "client_addr": "0.0.0.0",
  "server": true,
  "bootstrap_expect": 3,
  "leave_on_terminate": false,
  "skip_leave_on_interrupt": true,
  "rejoin_after_leave": true,
  "retry_join": [
    "<IP address for server 1>:8301",
    "<IP address for server 2>:8301",
    "<IP address for server 3>:8301"
  ]
}

  • Write the server start script to /etc/init/consul.conf

description "Consul server process"

start on (local-filesystems and net-device-up IFACE=eth0)
stop on runlevel [!12345]

respawn

setuid consul
setgid consul

exec consul agent -config-dir /etc/consul.d/server -ui

  • Start consul:

start consul
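Once all 3 servers are started, you can confirm that the cluster has formed:

# Should list all 3 servers as alive
consul members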

Consul Agent

As on the Consul servers:

  • Install Consul
  • Unzip the binary package
  • Create the consul user
  • Create the consul directory

mkdir -p /etc/consul.d/client

  • Write the client config file to /etc/consul.d/client/config.json

{
  "bind_addr": "<agent's server IP>",
  "datacenter": "dc1",
  "data_dir": "/var/consul",
  "encrypt": "<Consul key>",
  "log_level": "INFO",
  "enable_syslog": true,
  "enable_debug": true,
  "server": false,
  "node_meta": {
    "name": "Consul client"
  },
  "rejoin_after_leave": true,
  "retry_join": [
    "<IP of Consul server 1>",
    "<IP of Consul server 2>",
    "<IP of Consul server 3>"
  ]
}

  • Write the client start script to /etc/init/consul.conf

description "Consul client process"

start on (local-filesystems and net-device-up IFACE=eth0)
stop on runlevel [!12345]

respawn

setuid consul
setgid consul

exec consul agent -config-dir /etc/consul.d/client

  • Start the Consul agent with:

start consul

How To Perform SonarQube Upgrade on Ubuntu

SonarQube is used as an integration with Jenkins to enable code quality analysis.

Installation is pretty straightforward, as discussed in the official documentation. In this blog, I would like to share some hiccups that I encountered during a server upgrade.

  1. Set up the PostgreSQL server

This blog has pretty comprehensive notes on this

2. Back up the existing database using pg_dump (as the postgres user)

pg_dump sonardb > sonar_bck.sql

3. Start the Sonar server on an empty database (this will not be the actual database that we will use)

$SONAR_HOME/bin/sonar.sh console

4. Copy over the SQL dump from the old database to the new database

psql -h newSonar.host.com -d sonar -U postgres -f "sonar_bck.sql"

5. Point Sonar to the newly created database that was populated with data by modifying the config

$SONAR_HOME/conf/sonar.properties

6. Delete /opt/sonar/data/main/es to clear the Elasticsearch indexes if "issues" are not showing up on Sonar

7. Restart Sonar
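Recent SonarQube versions expose a system status endpoint, which is a quick way to confirm the server came back up after the upgrade (assuming the default port on the local host):

curl http://localhost:9000/api/system/status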
