Chuan Chuan Law

DevOps | Software Automation | Continuous Integration

Category: DevOps

DevOpsDays NYC 2020


I was delighted to attend DevOpsDays NYC 2020, held on March 3–4, 2020 at the New York Academy of Medicine. I wanted to share my experience from the two days, so here we go.

Day #1

The venue was a fair distance uptown in Manhattan from where I live in Brooklyn. However, after getting off the subway, the walk to the venue through Central Park was nice.

I arrived around 10am and only caught the last bit of the morning session before the coffee break.

Next was a series of Ignite Talks, which comprised multiple 5-minute talks. The key takeaways from these were:

  • We should modularize notebooks when productionizing data science models: make them maintainable using modules and versions, and decouple and specialize child modules
  • Incorporate security tools into the CI/CD pipeline
  • Resilience engineering is a community; it also means we should adapt to change and learn from other industries such as medicine and aviation

After the Ignite Talks was lunch. It was fine apart from the lack of vegan options. I ended up having pasta with cheese. 😐

After lunch were the Open Spaces, where attendees get to suggest the topics they want to talk about. A subject matter expert facilitates each talk.

I selected these three topics, with the following takeaways:


  • Bake security into process and tools
  • Automate as much as you can
  • Security-driven development – use tools to check for security flaws
  • Have someone as security champion in the team
  • Plug in security checks early, before pull request
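A minimal, hypothetical sketch of "plug in security checks early": a pre-commit style scan that fails before the pull request if obvious hard-coded secrets are found. The pattern and message are illustrative only, not any real tool's rules.

```shell
#!/usr/bin/env bash
# Hypothetical pre-commit/CI check: fail fast on likely hard-coded credentials.
scan_for_secrets() {
    # Return non-zero if a likely hard-coded credential exists under directory $1
    if grep -rnE '(password|secret|api_key)[[:space:]]*=' "$1" 2>/dev/null; then
        echo "Possible hard-coded secret found; fix before raising the PR"
        return 1
    fi
    echo "No obvious secrets found"
}
```

Hooked into a git pre-commit hook or the CI pipeline, this kind of check shifts security left, before code review even starts.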


  • Do not use multi-cloud, use namespace instead
  • kubectl weakness – async and does not know when it finishes
  • Use security tools to scan images


  • Use distributed tracing
  • Cloudwatch
  • Key in trace id
  • Logs in json format
  • Incorporate tracing before going live
  • Use auto-instrumentation
  • Use open source tools
  • Use industry standard
  • Incorporate into pull requests
  • Tracing platforms (APM) include Datadog, New Relic, and Elastic APM
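A hypothetical sketch combining two of the tips above: logs in JSON format, keyed with a trace id so a tracing/APM platform can correlate them. It assumes a Linux box with jq installed; the field names are illustrative.

```shell
# Emit a single JSON log line with a trace id (assumes Linux + jq).
emit_json_log() {
    local trace_id msg
    trace_id=$(cat /proc/sys/kernel/random/uuid)   # stand-in for a propagated trace id
    msg="$1"
    jq -cn --arg trace_id "$trace_id" --arg msg "$msg" \
       '{timestamp: (now | todate), trace_id: $trace_id, level: "INFO", message: $msg}'
}

emit_json_log "payment processed"
```

In a real service the trace id would come from the incoming request (auto-instrumentation), not be generated per line.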

Key takeaways from the afternoon sessions on CI/CD Agility and Controlling Pipeline Sprawl by Angel Rivera:

  • Avoid clear text in CI/CD
  • Use tools like HashiCorp Vault to protect passwords
  • Use random password generator to change passwords often
  • Auto rotate the passwords
  • Pipeline in YAML format
  • 1 pipeline in 1 repo is not a good practice
  • Do not hardcode in pipeline, use scripts
  • Create vendor libraries for reusability
  • Minimize vendor lock-in
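A minimal sketch of the password tips above: generate a random password for rotation, and keep it in a secret store instead of clear text in the pipeline. The Vault path in the comments is made up for illustration.

```shell
# Generate a random password suitable for frequent rotation.
generate_password() {
    openssl rand -base64 24   # 24 random bytes, base64-encoded
}

# Rotation would then store the fresh value in Vault rather than in the
# pipeline YAML, e.g. (hypothetical secret path):
#   vault kv put secret/ci/db password="$(generate_password)"
#   vault kv get -field=password secret/ci/db
generate_password
```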

Day #2

James Meickle – Cooperative Economics for Engineers; or, Why you have more in common with Pirate Fleets Than With Your Manager

Key takeaways from the Ignite Talks:

  • DevOps principle – have a production mindset
  • Is K8s really necessary? Automate everything, test twice, change the architecture instead
  • All tech is debt, people are gold – stop building new technology
  • When a software incident happens, mitigate or roll back first, learn from it, and practice (drills)

Next were Open Spaces. I went to the salary negotiation, learning from software incidents, and talk pay sessions.

Below are key takeaways from salary negotiation:

  • Do not give a number in the initial interview process
  • Focus on how you can give value to the company
  • Have multiple offers
  • Negotiate at the end of the interview process
  • It's hard to negotiate within the same company

Learning from software incidents:

  • Incidents are operational surprises
  • When there is a problem, implement more metrics and have processes in place to prevent the problem
  • Test more
  • Think of different ways a problem could have happened
  • Learn from things that did not fail, how we did it right

Open Space #3 was interesting, as it had attendees enter their base salaries by role (dev, ops, or others such as QA), regardless of experience level. This session had the most attendees, for obvious reasons.

The ranges varied widely, from five figures to high six figures.

Key takeaways from afternoon sessions:

  • Product management is customer-focused, provides strategy + vision, alignment + leadership
  • Product = Customer * Business * Technology
  • Product managers gather requirements, synthesize feedback, prioritize against business goals, and broadcast value
  • Name your services and be specific – say what they do
  • Version your API, have clear documentation and examples
  • Update runbooks regularly
  • Alert at the SLO level
  • After an alert is triggered, tune it, see patterns, and prune
  • All alerts should be actionable
  • Need to understand the business impact of the alerts
  • DevOps should be low context: carefully construct defaults, have ubiquitous documentation, and document as much as you can

It was a very productive conference, as it was relevant to what I do. Looking forward to another DevOpsDays!

How To Fix Passenger + Nginx Issue On Ubuntu 18


Nginx will not start due to the following error on Ubuntu 18:

nginx: [emerg] unknown directive "passenger_enabled" in /etc/nginx/sites-enabled/default:25

nginx: configuration file /etc/nginx/nginx.conf test failed


Install Passenger + Nginx module

apt-get install -y libnginx-mod-http-passenger

Add this line at the top of /etc/nginx/nginx.conf, pointing at the module file the package installed:

load_module /usr/lib/nginx/modules/ngx_http_passenger_module.so;

Restart Nginx

service nginx restart
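With the module loaded, the `passenger_enabled` directive from the error message works again. A minimal, hypothetical server block might look like this (paths and server_name are placeholders):

```nginx
server {
    listen 80;
    server_name example.com;
    root /var/www/myapp/public;   # your app's public directory
    passenger_enabled on;         # the directive that previously failed
}
```

Run `nginx -t` to validate the configuration before restarting.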

Transitioning From SDET To DevOps

I started my career in Australia, progressing from junior QA to Software Engineer in Test, before a slight change in my IT career to DevOps engineer in NYC.


I would like to share the technical skills that transfer from SDET and the extra skills that need to be picked up to become a DevOps engineer.

Transferable skills

  • Programming/automation

The programming skills from writing automated tests will be helpful in DevOps as part of the job requires programming to automate processes.

An SDET will use programming languages such as Java and Ruby more, while DevOps work relies more on shell and Bash scripting.

  • CI/CD or Deployment

An SDET's involvement in deployment pipeline automation with tools like Jenkins is definitely a core part of a DevOps engineer's job too. The difference is that an SDET will usually use Jenkins to set up the automated test build or integrate it into the CI/CD pipeline.

A DevOps engineer's involvement is helping the development team build the entire pipeline, from compilation through deployment.

  • Tools

An SDET uses lots of open source tools such as Selenium and Cucumber to develop test automation frameworks.

A DevOps job leverages tools heavily as well, just different sets. We will discuss this more in the "Skills to be picked up" section.

  • General computing/system knowledge

General computing knowledge, such as operating systems, will be used, but in more depth in a DevOps role.
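As a small, hypothetical illustration of the programming/automation overlap above, here is the kind of glue scripting a DevOps engineer writes often (the threshold and mount point are arbitrary; GNU df is assumed):

```shell
#!/usr/bin/env bash
# Warn when a filesystem crosses a usage threshold.
check_disk_usage() {
    local mount="$1" threshold="$2" usage
    usage=$(df --output=pcent "$mount" | tail -1 | tr -dc '0-9')
    if [ "$usage" -ge "$threshold" ]; then
        echo "WARN: $mount is ${usage}% full"
        return 1
    fi
    echo "OK: $mount is ${usage}% full"
}

check_disk_usage / 90 || echo "time to clean up the disk"
```

The same scripting instincts used for test automation carry straight over to checks like this.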

Skills to be picked up

  • New automation tool – configuration management tools

Configuration management tools such as Ansible and Puppet are a key part for DevOps to automate deployment and server configuration.

  • New tools

A major portion of a DevOps engineer's tasks involves installing, configuring, setting up, upgrading, and managing a bunch of tools used by the development teams. The list includes:

  1. Jenkins
  2. Git
  3. Artifactory
  4. Docker
  5. Kubernetes
  6. Nginx
  7. Consul
  8. HashiCorp
  9. Elasticsearch
  10. SonarQube
  11. New Relic
  12. Datadog
  • More in-depth operating system knowledge

A lot of DevOps work involves troubleshooting system issues. Therefore, knowledge of the operating system in use, such as Linux distributions like Ubuntu and Red Hat, is very important.

  • Cloud

This means working with data centers or, more applicably nowadays, the cloud, such as AWS and Google Cloud.

That will include usage of related tools such as Terraform.
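To make the configuration management point above concrete, here is a minimal, hypothetical Ansible playbook (the inventory group and package are placeholders) that installs and starts Nginx idempotently:

```yaml
# site.yml - hypothetical example playbook
- hosts: webservers        # placeholder inventory group
  become: true
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
    - name: Ensure nginx is running and enabled on boot
      service:
        name: nginx
        state: started
        enabled: true
```

Running `ansible-playbook site.yml` repeatedly converges servers to the same state, which is the main draw of these tools over ad hoc scripts.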


Both SDET and DevOps are exciting jobs. I would not recommend one over the other. However for those who want to transition, it’s definitely not a difficult task. There are transferable skills which you could leverage but also new skills to learn. Learning new skills is unavoidable in the IT world anyway as technology keeps on evolving.

General Guidelines To Follow While Doing A System Upgrade

Upgrading systems and applications is an important and major part of a DevOps role. Below are general guidelines to follow while performing an upgrade.

Take a snapshot

Taking a snapshot, creating a backup, or imaging before an upgrade is very important in case we need to roll back.

Practice the upgrade process

Work out the process on a development, staging, or test box before doing the actual upgrade, especially if it is for the production environment.

Automate as much as you can

Try to automate the upgrade as much as you can, so we can repeat the process on other boxes or environments.

Make notes

Take notes, especially on the things that we cannot automate.

Have a runbook

List all the automated and manual steps to run in a runbook, which we can follow during the upgrade process.

Prepare a rollback strategy

Ensure we know the steps to roll back if needed.

Have peer reviews

Get peers or seniors to review the automated steps and runbook.

Have a helpline or support ready

Have a helpline or support channel ready in case things go south during the upgrade.
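The guidelines above can be tied together in a runbook skeleton like this hypothetical sketch, where each step body (a plain echo here) stands in for real commands:

```shell
#!/usr/bin/env bash
# Hypothetical upgrade runbook skeleton: snapshot first, verify after,
# and roll back automatically if verification fails.
snapshot() { echo "snapshot: $1"; }   # e.g. a cloud snapshot CLI call
upgrade()  { echo "upgrade: $1"; }    # e.g. apt-get update && apt-get upgrade
verify()   { echo "verify: $1"; }     # e.g. smoke tests against the service
rollback() { echo "rollback: $1"; }   # restore from the snapshot taken above

run_upgrade() {
    local host="$1"
    snapshot "$host"
    if upgrade "$host" && verify "$host"; then
        echo "upgrade of $host succeeded"
    else
        rollback "$host"
        echo "upgrade of $host rolled back"
    fi
}

run_upgrade "app-server-01"   # hypothetical host name
```

Because every step is a function, the automatable parts are repeatable across boxes, and the runbook doubles as documentation.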

How To Publish Artifacts To Artifactory Using Curl

We can publish an artifact onto Artifactory in an unconventional way, using curl.

This is how we can do that:

  • Generate a pom file using a bash script, so the artifact will have versioning, etc.
#!/usr/bin/env bash
# Writes a minimal pom for the artifact; pass the version as the first argument
cat << EOF
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.org.test</groupId>
  <artifactId>apple</artifactId>
  <version>${1:-1.0.0}</version>
</project>
EOF
  • Post onto Artifactory using curl

curl -i -X PUT -u username:password --data-binary @"apple.tar" "https://artifactory/snapshot/com/org/test/apple/apple.tar"

Starting Dropwizard Application Using Jar Packaging & SysV

By default, the Debian package will generate Upstart and SysV scripts.

This blog will show how we can ship an application as a Jar file and start it using SysV.

  • Put your jar file somewhere. E.g: /opt
  • Copy the config generated by the Debian package in /etc into the new app config directory. You might need to reformat the YAML file a bit.

For example:

Some vars might look like this:

timeout: ${sys.TIMEOUT!"5000ms"}

We want to change it to look like:

timeout: 5000ms

  • Uninstall Debian packages

apt-get purge <package>

  • Delete the generated Upstart (/etc/init/<package>.conf) and SysV script (/etc/init.d/<package>)
  • Create a SysV script which looks like this:
#!/bin/sh
# dropwizard    This shell script takes care of starting and stopping Dropwizard applications
# chkconfig: - 80 20
# Provides: dropwizard
# Required-Start: $network $syslog
# Required-Stop: $network $syslog
# Default-Start:
# Default-Stop:
# Short-Description: start and stop dropwizard

# You just need to replace the {{ var }} below for your application
APPLICATION_NAME="{{ app_name }}"
APPLICATION_USER="{{ app_user }}"
APPLICATION_HOME="{{ app_home }}"
APPLICATION_JAR="{{ app_name }}.jar"
APPLICATION_CONFIG="{{ app_name }}.yml"
APPLICATION_CONFIG_DIR="{{ app_config_dir }}"
# Assumed launch command for a Dropwizard app; adjust for your application
APPLICATION_CMD="java -jar ${APPLICATION_HOME}/${APPLICATION_JAR} server ${APPLICATION_CONFIG_DIR}/${APPLICATION_CONFIG}"
APPLICATION_SHUTDOWN_WAIT=20

dropwizard_pid() {
    ps aux | grep "${APPLICATION_CMD}" | grep -v grep | awk '{ print $2 }'
}

start() {
    pid=$(dropwizard_pid)
    if [ -n "$pid" ]; then
        echo "${APPLICATION_NAME} is already running (pid: $pid)"
    else
        # Start dropwizard
        echo "Starting ${APPLICATION_NAME}"
        su ${APPLICATION_USER} -c "cd ${APPLICATION_HOME}; ${APPLICATION_CMD} > /dev/null &"
    fi
    return 0
}

stop() {
    pid=$(dropwizard_pid)
    if [ -n "$pid" ]; then
        echo "Stopping ${APPLICATION_NAME}"
        kill $pid

        count=0
        count_by=5
        kwait=${APPLICATION_SHUTDOWN_WAIT}
        until [ $(ps -p $pid | grep -c $pid) = '0' ] || [ $count -gt $kwait ]; do
            echo "Waiting for processes to exit. Timeout before we kill the pid: ${count}/${kwait}"
            sleep $count_by
            count=$((count + count_by))
        done

        if [ $count -gt $kwait ]; then
            echo "Killing processes which didn't stop after ${APPLICATION_SHUTDOWN_WAIT} seconds"
            kill -9 $pid
        fi
    else
        echo "${APPLICATION_NAME} is not running"
    fi
    return 0
}

status() {
    pid=$(dropwizard_pid)
    if [ -n "$pid" ]; then
        echo "${APPLICATION_NAME} is running with pid: $pid"
    else
        echo "${APPLICATION_NAME} is not running"
        exit 1
    fi
}

case $1 in
    start)   start ;;
    stop)    stop ;;
    restart) stop; start ;;
    status)  status ;;
    *)
        echo "Usage: $0 {start|stop|restart|status}"
        exit 1
        ;;
esac

exit 0
  • Finally, we can start the application by:

service <package> start

How To Set Up JSON Logging In Logback

  1. In pom.xml, add the logstash-logback-encoder dependency (use a current release version):

<dependency>
  <groupId>net.logstash.logback</groupId>
  <artifactId>logstash-logback-encoder</artifactId>
  <version>6.3</version>
</dependency>
2. In logback.xml, add the LogstashEncoder appender

<appender name="JSON" class="ch.qos.logback.core.rolling.RollingFileAppender">

  <!-- This is the JSON formatter -->
  <encoder class="net.logstash.logback.encoder.LogstashEncoder" />

  <!-- This is where the JSON log will be produced (example path) -->
  <file>/var/log/myapp/myapp.json</file>

  <!-- On a daily basis, the JSON log will be rolled over -->
  <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
    <fileNamePattern>/var/log/myapp/myapp.%d{yyyy-MM-dd}.json</fileNamePattern>

    <!-- We only store the rolled-over logs for a max of 7 days -->
    <maxHistory>7</maxHistory>
  </rollingPolicy>

</appender>

3. Add the appender to the Root Level

<root level="DEBUG">
  <appender-ref ref="JSON" />
</root>

© 2020 Chuan Chuan Law
