Unikernel: Another paradigm for the cloud

In this cloud era it is hard to imagine a world without access to services in the cloud. From contacting someone by email to storing work-related documents on an online drive and accessing them across devices, many of the services we use on a daily basis live in the cloud.

To reduce the cost of compute power, virtualization has been adopted to offer more services with less hardware. Then came the concept of containers, where you deploy an application in isolated containers built from lightweight images containing only the few binaries and libraries needed to run it. But we still need underlying VMs to deploy such solutions, and all these VMs come with a cost. While large data centers offer services in the cloud, they are also hungry for electric power, which is a growing concern as our planet is being drained of its resources. What we need now are less power-hungry solutions.

What if, instead of virtualizing an entire operating system, you loaded an application with only the components it requires from the operating system, effectively reducing the virtual machine to its bare-minimum resource footprint? This is where unikernels come into play.


The unikernel is a relatively new concept, first introduced around 2013 by Anil Madhavapeddy in a paper titled “Unikernels: Library Operating Systems for the Cloud” (Madhavapeddy, et al., 2013).

You can find more details on unikernels by searching the scholarly articles on Google.

Unikernels are defined by the community at Unikernel.org as follows.

“Unikernels are specialized, single-address-space machine images constructed by using library operating systems.”


A unikernel is an application that has been boiled down to a small, secure, lightweight virtual machine, eliminating general-purpose operating systems such as Linux or Windows. Unikernels aim to be much more secure than Linux, through several thrusts: not having the notion of users, running a single process per VM, and limiting the amount of code incorporated into each VM. This means there are no users and no shell to log in to and, more importantly, you can’t run anything besides the one program you want to run inside. Despite their relatively young age, unikernels borrow from age-old concepts rooted in the dawn of the computer era: microkernels and library operating systems. A unikernel holds a single application. “Single address space” means that, at its core, a unikernel has no separate user and kernel address spaces. Library operating systems are the core of unikernel systems: only the kernel routines the application actually needs are linked in. Unikernels are provisioned directly on the hypervisor, without a traditional system like Linux underneath, so a single server can run far more unikernel VMs than full virtual machines (proponents claim up to 1000x).

You can look here for details about microkernels, monolithic kernels, and library operating systems.

Virtual Machines vs. Linux Containers vs. Unikernels

Virtualization of services can be implemented in various ways. One of the most widespread methods today is virtual machines, hosted on hypervisors such as VMware’s ESXi or the Linux Foundation’s Xen Project.

Hypervisors allow hosting multiple guest operating systems on a single physical machine. These guest operating systems are executed in what are called virtual machines. The widespread use of hypervisors is due to their ability to better distribute and optimize the workload on physical servers, as opposed to legacy infrastructures of one operating system per physical server.

Containers are another method of virtualization, which differs from hypervisors by creating virtualized environments that share the host’s kernel. This is a lighter approach than hypervisors, which require each guest to have its own copy of the operating system kernel, making a hypervisor-virtualized environment resource-heavy in contrast to containers, which share parts of the existing operating system.

As mentioned above, unikernels leverage the abstraction of hypervisors in addition to using library operating systems to include only the required kernel routines alongside the application, presenting the lightest of all three solutions.

The figure above shows the major difference between the three virtualization technologies. Here we can clearly see that virtual machines present a much larger load on the infrastructure as opposed to containers and unikernels.

Additionally, unikernels are in direct “competition” with containers. By providing services in the form of reduced virtual machines, unikernels improve on the container model with increased security. By sharing the host kernel, containerized applications share the same vulnerabilities as the host operating system. Furthermore, containers do not possess the same level of host/guest isolation as hypervisors/virtual machines, potentially making container breaches more damaging than breaches of both virtual machines and unikernels.

Virtual Machines
Pros:
– Allows deploying different operating systems on a single host
– Complete isolation from the host
– Orchestration solutions available
Cons:
– Requires compute power proportional to the number of instances
– Requires large infrastructures
– Each instance loads an entire operating system

Linux Containers
Pros:
– Lightweight virtualization
– Fast boot times
– Orchestration solutions available
– Dynamic resource allocation
Cons:
– Reduced isolation between host and guest due to the shared kernel
– Less flexible (i.e., dependent on the host kernel)
– Less flexible networking

Unikernels
Pros:
– Lightweight images
– Specialized applications
– Complete isolation from the host
– Higher security, since absent functionality cannot be exploited (e.g., no remote command execution via a shell)
Cons:
– Not yet mature enough for production
– Requires developing applications from the ground up
– Limited deployment possibilities
– Lack of complete IDE support
– Static resource allocation
– Lack of orchestration tools

A comparison of the three solutions

Docker, containerization technology, and container orchestrators like Kubernetes and OpenShift are two steps forward for the world of DevOps, and the principles they promote are forward-thinking and largely on target for a more secure, performance-oriented, and easy-to-manage cloud future. However, an alternative approach leveraging unikernels and immutable servers will result in smaller, easier-to-manage, more secure deployment units that will be simpler for existing enterprises to adopt. As DevOps matures, the shortcomings of cloud application deployment and management are becoming clear: virtual machine image bloat, large attack surfaces, legacy executables, base-OS fragmentation, and an unclear division of responsibilities between development and IT for cloud deployments are all causing significant friction (and opportunities for the future).

For example: it remains virtually impossible to create a Ruby or Python web server virtual machine image that DOESN’T include build tools (gcc), ssh, and multiple latent shell executables. All of these components are detrimental on production systems, as they increase image size, increase attack surface, and increase maintenance overhead.

Compared to VMs running operating systems like Windows and Linux, a unikernel has only a tenth of one percent of the attack surface. In the case of a unikernel, sysdig, tcpdump, and mysql-client are not installed, and you can’t just “apt-get install” them either; you have to bring them along with your exploit. To take it further, even a simple cat /etc/hosts or a grep of /var/log/nginx/access.log simply won’t work, because those would be separate processes.
So unikernels are highly resistant to remote code execution attacks, more specifically shellcode exploits.

Immutable Servers & Unikernels

Immutable Servers are a deployment model that mandates that no application updates, security patches, or configuration changes happen on production systems. If any of these layers needs to be modified, a new image is constructed, pushed, and cycled into production. Heroku is a great example of immutable servers in action: every change to your application requires a ‘git push’ to overwrite the existing version. The advantages of this approach include higher confidence in the code running in production, integration of testing into deployment workflows, and easy verification that systems have not been compromised.

Once you become a believer in the concept of immutable servers, then speed of deployment and minimizing vulnerability surface area become objectives. Containers promote the idea of single-service-per-container (microservices), and unikernels take this idea even further.

Unikernels allow you to compile and link your application code all the way down to, and including, the operating system. For example, if your application doesn’t require persistent disk access, no device drivers or OS facilities for disk access would even be included in the final production image. Since unikernels are designed to run on hypervisors such as Xen, they only need interfaces to standardized resources such as networking and persistence. Device drivers for thousands of displays, disks, and network cards are completely unnecessary. Production systems become minimalist, requiring only the application code, the runtime environment, and the OS facilities the application actually uses. The net effect is smaller VM images with less surface area that can be deployed faster and maintained more easily.

Traditional operating systems (Linux, Windows) will become extinct on servers. They will be replaced with single-user, bare-metal hypervisors optimized for the specific hardware, taking decades of multi-user, hardware-agnostic code cruft with them. A more mature build-deploy-manage tool set based on these technologies will be truly game-changing for hosted and enterprise clouds alike.

ClickOS: C++; runs on Xen; aimed at Network Function Virtualization
IncludeOS: C++; runs on KVM, VirtualBox, ESXi, Google Cloud, OpenStack; orchestration tool available
Nanos Unikernel: C, C++, Go, Java, Node.js, Python, Rust, Ruby, PHP, etc.; runs on QEMU/KVM; orchestration tool available
OSv: Java, C, C++, Node, Ruby; runs on VirtualBox, ESXi, KVM, Amazon EC2, Google Cloud; targets cloud and IoT (ARM)
Rumprun: C, C++, Erlang, Go, Java, JavaScript, Node.js, Python, Ruby, Rust; runs on Xen, KVM
Unik: Go, Node.js, Java, C, C++, Python, OCaml; runs on VirtualBox, ESXi, KVM, Xen, Amazon EC2, Google Cloud, OpenStack, PhotonController; a unikernel compiler toolbox with orchestration possible through Kubernetes and Cloud Foundry
ToroKernel: FreePascal; runs on VirtualBox, KVM, Xen, Hyper-V; a unikernel dedicated to running microservices

A comparison of a few unikernel solutions from active projects

Out of the various existing projects, some stand out due to their wide range of supported languages. For the active projects, the table above describes the languages they support, the hypervisors they can run on, and remarks concerning their functionality.

We are currently experimenting with unikernels on AWS and Google Cloud Platform and will share the results in another post soon.

Source: Medium, github, containerjournal, linuxjournal

Helm 3 – Sans tiller – Really?

Helm has recently announced its much-awaited version 3. The surprise factor in this release is that the server component added in the Helm 2 release is missing. Yes, you got it right: Tiller is missing in Helm 3, which means we have a server-less Helm. Let us check out in this post why it was removed and why that matters.

As an introduction, Helm, the package manager for Kubernetes, is a useful tool for installing, upgrading, and managing applications on a Kubernetes cluster. Helm has two parts: a client (helm) and a server (tiller). Tiller runs inside your Kubernetes cluster as a pod in the kube-system namespace, in the current context specified in your kubeconfig (which can be switched using the --kube-context flag). Tiller manages both the releases (installations) and revisions (versions) of charts deployed on the cluster. When you run helm commands, your local Helm client sends instructions to Tiller in the cluster, which in turn makes the requested changes. In short, Helm is our package manager for Kubernetes and our client tool: we use the helm CLI for all of our commands, and Tiller is the service that actually communicates with the Kubernetes API to manage our Helm packages.

Helm packages are called charts. Charts are Helm’s deployable artifacts. Charts are always versioned using semantic versioning, and come either packed in versioned .tgz files or in a flat directory structure. They are abstractions describing how to install packages onto a Kubernetes cluster. When a chart is deployed, Helm works as a templating engine, populating the YAML files for the package and its dependencies with the required variables, and then applies the rendered manifests to the cluster to install the package.
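As a sketch, a minimal chart layout and template might look like the following; the chart name, image, and value names here are illustrative assumptions, not taken from any published chart:

```yaml
# mychart/Chart.yaml -- chart metadata, versioned with semantic versioning
apiVersion: v1
name: mychart
version: 0.1.0

# mychart/values.yaml -- default variables consumed by the templates
image: nginx:1.17
replicas: 2

# mychart/templates/deployment.yaml -- a templated manifest (fragment)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web
spec:
  replicas: {{ .Values.replicas }}
  template:
    spec:
      containers:
        - name: web
          image: {{ .Values.image }}
```

Deploying the chart (for example, helm install ./mychart) renders the {{ ... }} placeholders from values.yaml before the manifests are applied to the cluster.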

As we already mentioned, Tiller is the tool Helm uses to deploy almost any Kubernetes resource. To do this, Helm takes maximum permissions to make changes in Kubernetes. Because of this, anyone who can talk to Tiller can deploy or modify any resource on the Kubernetes cluster, just like a system admin (think of the ‘root’ user on a Linux host). This can cause security issues in the cluster if Helm has not been properly deployed, following certain security measures. Also, authentication is not enabled in Tiller by default, so if any pod is compromised and has permission to talk to Tiller, the complete cluster in which Tiller is running is compromised as well. This is the main reason for the removal of Tiller.

The Tiller method:

Tiller was used as an in-cluster operator by Helm to maintain the state of a Helm release. It was also used to save the release information of all the releases done through Tiller; it used ConfigMaps to save this release information, in the same namespace in which Tiller was deployed. This release information was required by Helm for upgrades and for state changes in any of the releases. So whenever a helm upgrade command was run, Tiller compared the new manifest with the old manifest of the release and made changes accordingly. Thus Helm was dependent on Tiller to provide the previous state of the release.

The tillerless method:

The main need for Tiller was to store release information; Helm now uses Kubernetes Secrets for this, saving them in the same namespace as the release. Whenever Helm needs the release information, it gets it from that namespace. To make a change, Helm now simply fetches information from the Kubernetes API server, makes the changes on the client side, and stores a record of the installation in Kubernetes. The benefit of tillerless Helm is that, since Helm now makes changes in the cluster from the client side, it can make only those changes the client has been granted permission for.

Tiller was a good addition in Helm 2, but running it securely in production added extra learning steps for DevOps and SRE teams. With Helm 3 those learning steps have been reduced, and security management is left in the hands of Kubernetes. Helm can now focus on package management.

Data courtesy : Medium, codecentric, Helm

Quick and Dirty tutorial on Jenkins and Git plugin

Jenkins is a popular open-source Continuous Integration tool which is widely used for project development, deployment, and automation. Continuous Integration is a process in which all development work is integrated as early as possible, and the resulting artifacts are automatically created and tested; this process should identify errors as early as possible. The basic functionality of Jenkins is to execute a predefined list of steps. The trigger for this execution can be time- or event-based: for example, every half an hour, or after a new commit in a Git repository. Jenkins also monitors the execution of the steps and allows the process to be stopped if one of the steps fails. Jenkins can also send out notifications about build success or failure. Jenkins can be extended with additional plug-ins, e.g., for supporting the Git version control system, provisioning with Docker, or integration with Puppet.

In this post, we’ll discuss the installation of Jenkins and how to integrate Jenkins with GitHub in such a way that a Git commit can trigger a Jenkins build. Jenkins can be started via the command line or can run in a web application server. In Linux, you can also install Jenkins as a system service; for almost all Linux platforms there is a native Jenkins package available. Please check the Jenkins home page for more details.

All the demonstrations in this post are shown using an Amazon EC2 RHEL 7 instance. We’re using an Amazon instance because we need to establish a connection to the system’s public IP via GitHub webhooks. If you are using any other Linux distribution, you can make use of most of the steps described here, except the RHEL-specific commands like yum, systemctl, etc. Also, you can find your OS-specific installation details in the Jenkins Wiki.

We will be installing Jenkins using the native package available for RHEL/Fedora/CentOS. There is another way of installing Jenkins, which involves deploying the Jenkins WAR file, which you can see here.

Installing Java:

The prerequisite for installing Jenkins is to set up a Java Virtual Machine on your system. This will be done by installing the latest OpenJDK package. Before that, we will install the EPEL repository for RHEL 7. You can download the EPEL package from the Fedora Project URL using the wget command; if wget is not installed on your machine, install it using yum.

# yum install wget

# wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

# yum install epel-release-latest-7.noarch.rpm

You can run yum search openjdk to see the available JDK packages.

# yum install java-1.8.0-openjdk-devel.x86_64

Verify the java installation as below,

# java -version
openjdk version "1.8.0_121"
OpenJDK Runtime Environment (build 1.8.0_121-b13)
OpenJDK 64-Bit Server VM (build 25.121-b13, mixed mode)

The next step is to set the environment variables JAVA_HOME and JRE_HOME. These are used by Java applications to identify the Java Virtual Machine path. This is accomplished by adding the environment variables to the file /etc/profile.

export JAVA_HOME=/usr/lib/jvm/jre-1.8.0-openjdk
export JRE_HOME=/usr/lib/jvm/jre

Once this is added you can reload the environment variables from /etc/profile by executing,

# source /etc/profile

Installing Apache Ant and Apache Maven:

The next step is to install the Apache Foundation siblings Ant and Maven. These siblings are used while building Java-based applications in Jenkins. Both packages can be downloaded from the respective project sites as tarballs.


# wget http://redrockdigimark.com/apachemirror//ant/binaries/apache-ant-1.10.0-bin.tar.gz

# wget http://redrockdigimark.com/apachemirror/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.tar.gz


The commands below extract the downloaded tarballs into /opt. Then we change to the /opt directory and create symbolic links.

# tar -zxvf apache-maven-3.3.9-bin.tar.gz -C /opt/
# tar -zxvf apache-ant-1.10.0-bin.tar.gz -C /opt/

# cd /opt
# ln -s apache-maven-3.3.9/ maven
# ln -s apache-ant-1.10.0/ ant

A final listing of /opt will show something like the output below:

# ls /opt
ant apache-ant-1.10.0 apache-maven-3.3.9 maven

Environment Variables

Now we will set up environment variables for these Apache siblings and verify the installation. The variables are set by creating the files ant.sh and maven.sh under /etc/profile.d and reloading the environment variables.

# cat > /etc/profile.d/ant.sh <<'EOF'
export ANT_HOME=/opt/ant
export PATH=${ANT_HOME}/bin:${PATH}
EOF

# cat > /etc/profile.d/maven.sh <<'EOF'
export M2_HOME=/opt/maven
export PATH=${M2_HOME}/bin:${PATH}
EOF

# source /etc/profile

At this stage, if everything goes well, you'll see output similar to the one below:

# echo $JAVA_HOME;echo $JRE_HOME

# ant -version
Apache Ant(TM) version 1.10.0 compiled on December 27 2016

# mvn --version
Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T11:41:47-05:00)
Maven home: /opt/maven
Java version: 1.8.0_121, vendor: Oracle Corporation
Java home: /usr/lib/jvm/java-1.8.0-openjdk-
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "3.10.0-514.el7.x86_64", arch: "amd64", family: "unix"

All our prerequisites for the Jenkins installation are ready, and we’ll now proceed to install Jenkins. We will add the RHEL 7-specific Jenkins repository and install it with the yum command.

# wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat-stable/jenkins.repo
# rpm --import http://pkg.jenkins-ci.org/redhat-stable/jenkins-ci.org.key
# yum install jenkins -y

Once the installation is complete, we will enable the service and start it.

# systemctl enable jenkins.service
# systemctl start jenkins.service
# systemctl status jenkins.service

If you have followed the installation so far, you will find Jenkins running at the following URL:

http://<ip of your ec2 instance>:8080/

Note: if there are any connection issues or the page does not load in the browser, check for firewalls inside the system and flush the rules if necessary with the command iptables -F. Also check the security groups of your EC2 instance to allow traffic on port 8080.

You’ll see a Jenkins page similar to the one above and you can use the password as described there and setup a user to create Jenkins builds.

#  cat /var/lib/jenkins/secrets/initialAdminPassword

Now we will install the git package in the system and Jenkins GitHub plugin to create a build job.

# yum install git

Create a repository in GitHub by importing the repository https://github.com/ajoybharath/hello-world.git

You can clone your newly created repository to your system and make it ready to accept pushes from your repo to GitHub. You can see more on Git installation and setup here.

Once all the steps shown in the screenshots above are done, we’ll be ready to create our first build. You can follow the sequence of images below to set up a Jenkins build job. We’re using the following Git repository in this demonstration.


Apply and Save the job and go ahead and build the project.

If you find an issue in the build similar to the one below,

FATAL: command execution failed
java.io.IOException: Cannot run program "mvn" (in directory

Please add the following to the Jenkins config: open /etc/sysconfig/jenkins in vi, add the line source /etc/profile at the end of the file, and then stop and start Jenkins.

Try building the job again. Next, we will add the GitHub webhook to trigger a build whenever a change is pushed to the repository.

On the Jenkins home page, navigate to Manage Jenkins => Configure System and find the GitHub section.

Click on the Advanced settings there, check “Specify another hook URL for GitHub configuration”, and copy the URL specified there.

This URL is the webhook to our Jenkins, and we’ll add it to our GitHub repository. Navigate to your repository and click on the Settings tab as shown below.

Also, go to the Jenkins dashboard, select the job we created, click Configure, navigate to Build Triggers, and check the box “GitHub hook trigger for GITScm polling”.

If everything goes fine, we can make a change to the README file of the repository and push it, which will trigger an automatic build in Jenkins.


Jenkins – An Introduction

Jenkins is a Continuous Integration / Continuous Deployment (CI/CD) tool. Jenkins is written in Java and released as an open-source product. It’s used heavily in Continuous Integration, as it allows code to be built, deployed, and tested automatically. For software development, you can hook it up with most code repositories, and there are loads of plugins that integrate with various software tools for better technical governance. Think of it as a very advanced task scheduler that can run workflows, or as a middleman between your code repository and your build server that triggers a build for every change made in the source code repository.

Jenkins offers the following major features out of the box, and much more can be added through plugins:

  1. Easy installation: Just run java -jar jenkins.war, deploy it in a servlet container or install with any native package manager. No additional install, no database.
  2. Easy configuration: Jenkins can be configured entirely from its user-friendly web GUI.
  3. Rich plugin ecosystem: Jenkins integrates with virtually every SCM or build tool that exists.
  4. Extensibility: Most parts of Jenkins can be extended and modified, and it’s easy to create new Jenkins plugins.
  5. Distributed builds: Jenkins can distribute build/test loads to multiple computers with different operating systems.

Jenkins is much easier to understand once you understand the terms Continuous Integration and Continuous Deployment (Delivery).

The relevant terms here are “Continuous Integration” and “Continuous Deployment”, often used together and abbreviated as CI/CD. Originally, Continuous Integration meant that you run your “integration tests” at every code change, while Continuous Delivery meant that you automatically deploy every change that passes your tests. Recently, however, the term CI/CD has been used in a more general way to represent everything related to automating the pipeline from the moment a developer adds a change to a central repository until that code ends up in production. This post is based on the latter meaning of CI/CD.

A CI system gathers the code from all your developers and makes sure it compiles and builds fine. Once the code is built, it is deployed on a test server for testing. Once you’ve made sure the build works, Jenkins can be tuned to deploy the application to the production server, satisfying the goal of CD. There are many different ways in which Jenkins can be set up to drive a CI/CD pipeline. The details provided below describe a simple yet powerful configuration:

The pipeline consists of five steps: “Build”, “Unit”, “Integration”, “System”, and “Deploy”.

Each step is implemented as a Jenkins job of type “free-style software project”.

The first step (“Build”) is connected to a version control system. This step can either poll the repository, get notified via a webhook, or run on a schedule or on demand. Subsequent steps have a build trigger that starts the job when the previous step finishes.
Artifacts are copied between the different jobs via the “Copy Artifact” plugin, which requires the “Archive Artifacts” setting to be enabled. The “Deploy” step hands the artifact from “Copy Artifact” to a configuration manager, which triggers a container service to deploy the new application.
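The five steps above can also be sketched as a declarative Jenkins pipeline, an alternative to chaining free-style jobs; the Maven goals and script names below are illustrative assumptions, not taken from the post:

```groovy
pipeline {
    agent any
    stages {
        stage('Build')       { steps { sh 'mvn -B -DskipTests package' } }
        stage('Unit')        { steps { sh 'mvn -B test' } }
        stage('Integration') { steps { sh 'mvn -B verify' } }
        stage('System')      { steps { sh './run-system-tests.sh' } }  // hypothetical script
        stage('Deploy')      { steps { sh './deploy.sh' } }            // hypothetical script
    }
    post {
        // archive the artifact so downstream jobs (or Copy Artifact) can pick it up
        success { archiveArtifacts artifacts: 'target/*.jar' }
    }
}
```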

As mentioned earlier in this post, we can install Jenkins using the native package available for the respective operating system; this is detailed in another post, which you can read here. There is another way of installing Jenkins, which we’ll discuss here, consisting of two steps:

  1. Install the latest version of Apache tomcat
  2. Download and Deploy Jenkins war file

As a prerequisite, we will install Java, which can be done with the yum command.

# yum install java-1.8.0-openjdk-devel.x86_64

[root@devopsnode1 ~]# java -version
openjdk version "1.8.0_121"
OpenJDK Runtime Environment (build 1.8.0_121-b13)
OpenJDK 64-Bit Server VM (build 25.121-b13, mixed mode)
[root@devopsnode1 ~]#

After Java is installed we’ll set the Java path as below:

[root@devopsnode1 ~]# cp /etc/profile /etc/profile_orig
[root@devopsnode1 ~]# echo "export JAVA_HOME=/usr/lib/jvm/jre-1.8.0-openjdk" | tee -a /etc/profile
export JAVA_HOME=/usr/lib/jvm/jre-1.8.0-openjdk
[root@devopsnode1 ~]# echo "export JRE_HOME=/usr/lib/jvm/jre" | tee -a /etc/profile
export JRE_HOME=/usr/lib/jvm/jre
[root@devopsnode1 ~]# source /etc/profile
[root@devopsnode1 ~]# echo $JAVA_HOME;echo $JRE_HOME
[root@devopsnode1 ~]#

Now we’ll install the latest version of Apache Tomcat. We’ll use the wget command to download the latest Apache Tomcat tarball from the website http://tomcat.apache.org/download-90.cgi

[ajoy@devopsnode1 ~]$ wget http://www-us.apache.org/dist/tomcat/tomcat-9/v9.0.0.M17/bin/apache-tomcat-9.0.0.M17.tar.gz

After the tarball is downloaded, we’ll extract it and rename the directory to tomcat9. Though renaming is not a mandatory step, we’ll do it to keep the Tomcat paths simple 🙂

[ajoy@devopsnode1 ~]$ tar zxf apache-tomcat-9.0.0.M17.tar.gz
[ajoy@devopsnode1 ~]$ mv apache-tomcat-9.0.0.M17 tomcat9
[ajoy@devopsnode1 ~]$ ls
apache-tomcat-9.0.0.M17.tar.gz tomcat9

We’ll have to set roles and users in the tomcat configuration and then start the tomcat server:

[ajoy@devopsnode1 ~]$ cp /home/ajoy/tomcat9/conf/tomcat-users.xml /home/ajoy/tomcat-users.xml_orig
[ajoy@devopsnode1 ~]$ vi tomcat9/conf/tomcat-users.xml

<tomcat-users xmlns="http://tomcat.apache.org/xml"
              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xsi:schemaLocation="http://tomcat.apache.org/xml tomcat-users.xsd"
              version="1.0">
<role rolename="manager-gui"/>
<role rolename="manager-script"/>
<role rolename="manager-jmx"/>
<role rolename="manager-status"/>
<role rolename="admin-gui"/>
<role rolename="admin-script"/>
<user username="ajoy" password="ajoy" roles="manager-gui,manager-script,manager-jmx,manager-status,admin-gui,admin-script"/>
</tomcat-users>

Starting Apache tomcat server:

[ajoy@devopsnode1 tomcat9]$ cd bin/
[ajoy@devopsnode1 bin]$ ./startup.sh
Using CATALINA_BASE: /home/ajoy/tomcat9
Using CATALINA_HOME: /home/ajoy/tomcat9
Using CATALINA_TMPDIR: /home/ajoy/tomcat9/temp
Using JRE_HOME: /usr/lib/jvm/jre
Using CLASSPATH: /home/ajoy/tomcat9/bin/bootstrap.jar:/home/ajoy/tomcat9/bin/tomcat-juli.jar
Tomcat started.
[ajoy@devopsnode1 bin]$

If everything is fine, you’ll be able to browse to http://localhost:8080, which will open the default Tomcat web page.

Shutting down Apache tomcat server:

[ajoy@devopsnode1 tomcat9]$ cd bin/
[ajoy@devopsnode1 bin]$ ./shutdown.sh
Using CATALINA_BASE: /home/ajoy/tomcat9
Using CATALINA_HOME: /home/ajoy/tomcat9
Using CATALINA_TMPDIR: /home/ajoy/tomcat9/temp
Using JRE_HOME: /usr/lib/jvm/jre
Using CLASSPATH: /home/ajoy/tomcat9/bin/bootstrap.jar:/home/ajoy/tomcat9/bin/tomcat-juli.jar
[ajoy@devopsnode1 bin]$

Now that our Tomcat server is up and running, we’ll download the Jenkins WAR file and deploy it in Tomcat.

[ajoy@devopsnode1 ~]$ wget http://ftp.yz.yamagata-u.ac.jp/pub/misc/jenkins/war/2.44/jenkins.war

Click on the Manager App button. It will ask for the username and password which we put in tomcat-users.xml.

We’ll provide a context path, specify our Jenkins WAR file with an absolute path, and then click Deploy.

Once it’s deployed you can access the Jenkins as shown above.

Using the cat command, open the file below, copy the password, and paste it on the Unlock screen.

[ajoy@devopsnode1 ~]$ cat /home/ajoy/.jenkins/secrets/initialAdminPassword

Follow the below screenshot sequences to complete the Jenkins setup:

That’s all, folks: you’re ready to build your project with an award-winning, cross-platform continuous integration and continuous delivery application that increases your productivity. You can read another post describing the build steps here.

Data Source Courtesy: Jenkins, Jenkins Wiki, Quora

Git Cheat Sheet

This post is a Git cheat sheet which Git users can bookmark, so that the most basic Git commands are always handy. A more detailed post about Git installation and commits can be seen here.

General Git Commands

$ git --version ==> Display the version of the installed Git

$ git config --global alias.st status ==> Create an alias for git status

$ git help ==> Display help on the git command

$ git init ==> Initialize a Git repository

$ git add . ==> Make everything in the CWD ready to commit

$ git add index.html ==> Make one file ready to commit

$ git commit -m "Message" ==> Commit changes

$ git commit -am "Message" ==> Add files and commit in one command

$ git rm index.html ==> Remove a file from Git

$ git add -u ==> Stage all changes to tracked files

$ git rm --cached index.html ==> Stop tracking a file without deleting it

$ git mv index.html dir/index_new.html ==> Move or rename files

$ git checkout -- index.html ==> Restore a file from the latest commit

$ git checkout 6eb715d -- index.html ==> Restore a file from a specific commit of the current branch

$ git clean -n ==> Dry run of deleting untracked files

$ git clean -f ==> Delete untracked files

$ git reset HEAD index.html ==> Undo an add (unstage)

$ git commit --amend -m "Message" ==> Amend the most recent commit

$ git commit --amend -m "New Message" ==> Update the most recent commit message
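The add/commit/amend commands above are easiest to see in action. Here is a minimal sketch in a throwaway repository (the file name, identity, and messages are illustrative, not from the original post); it assumes git is installed:

```shell
#!/bin/sh
set -e

# Throwaway repository so nothing real is touched
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"   # illustrative identity
git config user.name  "Demo User"

# Stage and commit a file
echo "hello" > index.html
git add index.html
git commit -q -m "Mesage"                  # oops: a typo in the commit message

# git commit --amend rewrites the most recent commit instead of adding a new one
git commit --amend -q -m "Message"

git log --oneline -1
```

Note that --amend rewrites history, so it is best reserved for commits that have not been pushed yet.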

Tagging, Branching and Merging

$ git tag ==> Show all released versions

$ git tag -l -n1 ==> Show all released versions with comments

$ git tag v1.0.0 ==> Create a release version

$ git tag -a v1.0.0 -m 'Message' ==> Create a release version with a comment

$ git checkout v1.0.0 ==> Check out a specific release version

$ git branch ==> Show branches

$ git branch branchname ==> Create a branch

$ git checkout branchname ==> Change to a branch

$ git checkout -b branchname ==> Create and change to a branch

$ git branch -m branchname new_branchname

or ==> Rename a branch

$ git branch --move branchname new_branchname

$ git branch --merged ==> List all branches completely merged into the current one

$ git branch -d branchname

or ==> Delete a merged branch

$ git branch --delete branchname

$ git branch -D branch_to_delete ==> Delete a branch even if it is not merged

$ git merge branchname ==> True merge

$ git merge --ff-only branchname ==> Fast-forward merge only (refuse if not possible)

$ git merge --no-ff branchname ==> Merge and force a new merge commit

$ git merge --abort ==> Abort a merge
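The branch and merge commands above can be strung together into a small end-to-end sketch (the branch and file names are made up, and git is assumed to be installed):

```shell
#!/bin/sh
set -e

repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name  "Demo User"

echo base > file.txt
git add file.txt
git commit -q -m "initial commit"
main=$(git symbolic-ref --short HEAD)   # master or main, depending on the Git version

# Create a branch, switch to it, and commit on it
git checkout -q -b feature
echo feature > feature.txt
git add feature.txt
git commit -q -m "add feature"

# Merge back, forcing a real merge commit, then delete the merged branch
git checkout -q "$main"
git merge --no-ff -m "merge feature" feature
git branch -d feature
```

With --ff-only instead of --no-ff, the same merge would simply fast-forward the branch pointer and no merge commit would be created.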

Working with remote Git/GitHub

$ git remote ==> Show remotes

$ git remote -v ==> Show remote details

$ git remote add origin https://github.com/user/project.git ==> Add remote origin from a GitHub project

$ git remote add origin ssh://root@ ==> Add remote origin from an existing empty project

$ git remote rm origin ==> Remove origin

$ git branch -r ==> Show remote branches

$ git branch -a ==> Show all branches

$ git diff origin/master..master ==> Compare the remote master with the local master

$ git push -u origin master ==> Push and set the default remote with -u

$ git push origin master ==> Push to the default remote

$ git fetch origin ==> Fetch

$ git pull ==> Pull

$ git pull origin branchname ==> Pull a specific branch

$ git merge origin/master ==> Merge fetched commits

$ git clone https://github.com/user/project.git or: git clone ssh://user@domain.com/~/dir/.git ==> Clone to the local system

$ git clone https://github.com/user/project.git ~/dir/folder ==> Clone to a local folder

$ git clone -b branchname https://github.com/user/project.git ==> Clone a specific branch to the local host

$ git push origin :branchname or: git push origin --delete branchname ==> Delete a remote branch (push nothing)
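Since the remote commands above need a remote to talk to, here is a sketch that substitutes a local bare repository for GitHub (all paths are temporary and made up), so it runs without a network or an account:

```shell
#!/bin/sh
set -e

# A local bare repository stands in for GitHub
remote=$(mktemp -d)
git init -q --bare "$remote"

work=$(mktemp -d)
cd "$work"
git init -q
git config user.email "demo@example.com"
git config user.name  "Demo User"

echo hello > readme.txt
git add readme.txt
git commit -q -m "initial commit"
branch=$(git symbolic-ref --short HEAD)

# The same commands as with a GitHub URL, just a local path instead
git remote add origin "$remote"
git push -q -u origin "$branch"
git remote -v
```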

Git Logs

$ git log ==> Show commits

$ git log --oneline ==> Show a one-line summary of commits

$ git log --oneline -3 ==> Show a one-line summary of the last three commits

$ git log --author="Sven" ==> Show commits by a given author
$ git log --grep="Message" ==> Show commits whose message matches a pattern
$ git log --until=2013-01-01 ==> Show commits until a date
$ git log --since=2013-01-01 ==> Show commits since a date

$ git log -p ==> Show changes

$ git log --stat --summary ==> Show status and summary of commits

$ git log --graph ==> Show history of commits as a graph

$ git log --oneline --graph --all --decorate ==> Show history of commits as a graph summary

Cherry pick

$ git cherry-pick <first_commit>..<nth_commit> ==> Apply a range of commits, excluding the first commit itself

$ git cherry-pick <initial_commit_hash>^..<terminal_commit_hash> ==> Apply a range of commits, including the first commit

$ git cherry-pick 6e30dac520faef4a1..fd7d0d7873461a26 ==> Example with commit hashes

$ git cherry-pick --continue ==> Continue cherry-picking after resolving conflicts
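Here is a runnable sketch of the inclusive ^.. range form; the hashes are resolved with git rev-parse rather than typed in, and all names are illustrative:

```shell
#!/bin/sh
set -e

repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name  "Demo User"

echo base > base.txt
git add base.txt
git commit -q -m "base"
main=$(git symbolic-ref --short HEAD)

# Two commits on a side branch that we will pick as a range
git checkout -q -b side
echo one > one.txt
git add one.txt
git commit -q -m "one"
first=$(git rev-parse HEAD)
echo two > two.txt
git add two.txt
git commit -q -m "two"
last=$(git rev-parse HEAD)

# first^..last includes 'first' itself; first..last would exclude it
git checkout -q "$main"
git cherry-pick "$first^..$last"
```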


$ git archive --format zip --output filename.zip master ==> Create a zip archive

$ git log --author=sven --all > log.txt ==> Write custom logs to a file

And finally, a useful template for creating a .gitignore:

.gitignore Templates


Git and GitHub: An Introduction


Git is a distributed version control system. It is primarily used for software development, but it can be used to keep track of changes to any set of files. Git was created by Linus Torvalds in 2005 for the development of the Linux kernel, with other kernel developers contributing to its initial development, after Linus found none of the available free systems satisfactory as a replacement for BitKeeper. Think of Git as a series of snapshots (commits) of your code: you see a path of these snapshots and the order in which they were created, and you can make branches to experiment and come back to snapshots you took earlier. Currently, it is used by many popular open source projects, such as the Android and Eclipse developer teams, as well as many commercial organizations.


GitHub is a web-based Git repository hosting service, founded in 2008, which offers all of the distributed revision control and source code management (SCM) functionality of Git while adding its own features. It hosts remote repositories and allows code collaboration with anyone who has access to GitHub. It adds enhanced features such as a web UI and bug tracking on top of Git. It’s free for hosting public repositories and, like many open source applications, has an Enterprise version.

This post discusses git essentials like repositories, branches, commits, pull, fetch and clone. You’ll create your own repository and learn git’s workflow, a popular way to create and review code.

Initial Steps:

The two things you’ll need to do in order to play around with Git and GitHub are to install Git on your operating system and then create a free account on GitHub. The steps below assume a Linux machine, though I’ll provide quick links for installation on other operating systems.

Installing git:

Linux: Debian and its derivatives – sudo apt-get install git

Linux: Fedora, RH and its derivatives – yum install git or dnf install git

Mac OS – https://git-scm.com/download/mac

Windows – https://git-scm.com/download/win

We’ll use an Ubuntu machine to install Git; the steps are as follows:

ajoy@testserver:~$ sudo apt-get update
ajoy@testserver:~$ sudo apt-get install git
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
0 upgraded, 1 newly installed, 0 to remove and 3 not upgraded.
Need to get 0 B/2,586 kB of archives.
After this operation, 20.5 MB of additional disk space will be used.
(Reading database ... 62961 files and directories currently installed.)
Preparing to unpack .../git_1%3a1.9.1-1ubuntu0.3_amd64.deb ...
Unpacking git (1:1.9.1-1ubuntu0.3) ...
Setting up git (1:1.9.1-1ubuntu0.3) ...

A more flexible way of installing Git is to compile the software from source. This procedure is more time-consuming, and the result will not be maintained by aptitude, but it allows you to use the latest release and gives you some control over the installation if you wish to customize it. Before compiling from source, you need to install a couple of dependency packages that allow you to build Git.

Another way of installing the latest version is to download the Git source and package it, with all your customizations, using an easy packaging tool called FPM. This process produces a package in .deb, .rpm or another format that is unique to your customization. It is a little more time-consuming than the previous approach.

Setting up git:

ajoy@testserver:~$ git --version
git version 1.9.1

ajoy@testserver:~$ git config --global user.name "ajoy"
ajoy@testserver:~$ git config --global user.email "ajoy.XXXX@XX.com"
ajoy@testserver:~$ git config --list

These details are stored in a file called .gitconfig, which you can also edit manually to add different configurations; just make sure you follow the syntax and formatting properly.
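For reference, after the commands above a ~/.gitconfig would look roughly like the following; the [alias] and [core] sections are optional additions shown only as examples:

```ini
[user]
    name = ajoy
    email = ajoy.XXXX@XX.com
[alias]
    st = status
[core]
    editor = vim
```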

GitHub account:

The next step is to go to the GitHub site and sign up. GitHub accounts are free for setting up public repositories, but there is a nominal charge for setting up private repositories.

Creating a local repository:

Now we’ll be creating a local repository on our Linux machine.

ajoy@testserver:~$ mkdir mytestrepo
ajoy@testserver:~$ cd mytestrepo/

Then we’ll initialize this directory as a repository by executing the git init command at the root of the new directory.

ajoy@testserver:~/mytestrepo$ git init
Initialized empty Git repository in /home/ajoy/mytestrepo/.git/

We’ll now create a file in the directory with the touch command, or with any of your favorite editors:

ajoy@testserver:~/mytestrepo$ touch file1
ajoy@testserver:~/mytestrepo$ ls

Once we add a file or change any existing file in a directory that holds a Git repo, Git will be aware of the change. But Git won’t keep track of it unless you tell it to do so. This process is called committing, and it is a crucial task while using Git.

At this point, you can use the git status command to see how Git views your files:

ajoy@testserver:~/mytestrepo$ git status
On branch master

Initial commit

Untracked files:
(use "git add <file>..." to include in what will be committed)


nothing added to commit but untracked files present (use "git add" to track)

This means that Git has noticed the creation of a new file called file1, but unless you execute the git add command, Git is not going to do anything with it.

One of the most confusing parts while learning git is the concept of the staging environment and how it relates to a commit. A commit is a record of what files you have changed since the last time you made a commit. Essentially, you make changes to your repo (for example, adding a file or modifying one) and then tell git to put those files into a commit. Commits make up the essence of your project and allow you to go back to the state of a project at any point.

So, how do you tell Git which files to put into a commit? This is where the staging environment, or index, comes in. As seen above, when you make changes to your repo, Git notices that a file has changed but won’t do anything with it (like adding it to a commit).

To add a file to a commit, you first need to add it to the staging environment. To do this, you can use the git add <filename> command.
Once you’ve used git add to add all the files you want to the staging environment, you can tell Git to package them into a commit using the git commit command.

ajoy@testserver:~/mytestrepo$ git add file1
ajoy@testserver:~/mytestrepo$ git status
On branch master

Initial commit

Changes to be committed:
(use "git rm --cached <file>..." to unstage)

new file: file1


Now the file is added to the staging environment and ready to commit.

Note: The staging environment, also called ‘staging’, is the new preferred term for this, but you can also see it referred to as the ‘index’.

Our First Commit:

It’s now time for our first commit. A commit is made with git commit -m "<message about the commit>"

ajoy@testserver:~/mytestrepo$ git commit -m "My First Commit"
[master (root-commit) e198044] My First Commit
1 file changed, 0 insertions(+), 0 deletions(-)
create mode 100644 file1

The message at the end of the commit should relate to what the commit contains: maybe it’s a new feature, maybe it’s a bug fix, maybe it’s just fixing a typo. Don’t put a message like “asdfadsf” or “1234” or any such junk. It should be something meaningful describing the commit, so that other contributors or users who see it will understand what the commit is all about.
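As a quick sketch (the repository and file here are made up), the commit subject is what later appears in git log --oneline, which is why junk messages hurt:

```shell
#!/bin/sh
set -e

repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name  "Demo User"

echo "<h1>About</h1>" > about.html
git add about.html
# A descriptive message tells other contributors what changed and why
git commit -q -m "Add About page with basic heading"

git log --oneline -1
```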

Pushing our repository to the GitHub:

Now we have our repository created locally on our system. The next step is to push this repository to GitHub so that it will be publicly available: users can see our code, pull it to their systems, modify it, and push it back. If we only need to keep track of the code locally, we don’t need to push it to GitHub; a GitHub push is recommended if we need to work collaboratively on the code. The pictures below show a demo of creating a repository on GitHub.

When you’re done filling out the information, press the ‘Create repository’ button to make your new repo. GitHub will ask if you want to create a new repo from scratch or add a repo you have created locally. In this case, since we’ve already created a new repo locally, we want to push it onto GitHub, so follow the ‘….or push an existing repository from the command line’ section. But first we’ll see what that means, and then do our push.

In order to push the local repository to GitHub, we need to set the remote URL of the newly created GitHub repository in our local repository. The command git remote -v shows whether our local repo is linked to any remote URL.

ajoy@testserver:~/mytestrepo$ git remote -v
ajoy@testserver:~/mytestrepo$ git remote add origin https://github.com/ajoybharath/mytestrepo.git
ajoy@testserver:~/mytestrepo$ git remote -v
origin https://github.com/ajoybharath/mytestrepo.git (fetch)
origin https://github.com/ajoybharath/mytestrepo.git (push)

The command git remote add origin <url> sets the remote repo URL for our local repository. This can be verified by executing git remote -v again.

Now we’ll be pushing our repo to remote GitHub,

ajoy@testserver:~/mytestrepo$ git push -u origin master
Username for 'https://github.com': <username>
Password for 'https://<username>@github.com':
Counting objects: 3, done.
Writing objects: 100% (3/3), 211 bytes | 0 bytes/s, done.
Total 3 (delta 0), reused 0 (delta 0)
To https://github.com/ajoybharath/mytestrepo.git
* [new branch] master -> master
Branch master set up to track remote branch master from origin.

The username and password are the ones we created during the GitHub signup process.

Now that we have done a push to GitHub, let’s try something a little more advanced.

Assume that you want to make a new feature but are worried about making changes to the main project while developing the feature. This is where git branches are applicable.

Branches allow you to move back and forth between ‘states’ of a project. For instance, if you want to add a new page to your website, you can create a new branch just for that page without affecting the main part of the project. Once you’re done with the page, you can merge your changes from your branch into the master branch. When you create a new branch, Git keeps track of which commit your branch ‘branched’ off of, so it knows the history behind all the files.

Let’s say you are on the master branch and want to create a new branch to develop your web page. Here’s what you’ll do: execute git checkout -b <my branch name>. This command automatically creates a new branch and then ‘checks you out’ on it, meaning Git moves you to that branch, off of the master branch. After running the command, you can use git branch to confirm that your branch was created.

ajoy@testserver:~/mytestrepo$ git checkout -b devel
Switched to a new branch 'devel'
ajoy@testserver:~/mytestrepo$ git branch
* devel

The branch name with the asterisk next to it indicates which branch you’re on at that given time.

Now, if you switch back to the master branch and make some more commits, your new branch won’t see any of those changes until you merge those changes onto your new branch.

The next step is to push the devel branch we created to the remote repository. This allows other people to see the changes you’ve made. If they’re approved by the repository’s owner, the changes can then be merged into the master branch. To push changes to a new branch on GitHub, you’ll want to run git push origin <yourbranchname>. GitHub will automatically create the branch for you on the remote repository.

ajoy@testserver:~/mytestrepo$ git push origin devel
Username for 'https://github.com': <username>
Password for 'https://<username>@github.com':
Total 0 (delta 0), reused 0 (delta 0)
To https://github.com/ajoybharath/mytestrepo.git
* [new branch] devel -> devel

Now let’s discuss the word “origin” in the command above. When you clone a remote repository to your local machine, Git creates an alias for it; in nearly all cases this alias is called “origin”. It’s essentially shorthand for the remote repository’s URL. So, to push your changes to the remote repository, you could have used either git push git@github.com:git/git.git <your branch name> or git push origin <your branch name>.

In order to get the most recent changes that you or others have merged on GitHub, use the git pull origin master command (when working on the master branch).

ajoy@testserver:~/mytestrepo$ git pull origin master
From https://github.com/ajoybharath/mytestrepo
* branch master -> FETCH_HEAD
Already up-to-date.

Now we can use the git log command again to see all new commits.

ajoy@testserver:~/mytestrepo$ git log
commit e19804462d4b87336e4607d26e467f2ca14a5d3a
Author: ajoy <email>
Date: Wed Feb 1 21:10:23 2017 +0530

My First Commit

Finally, three more basic Git commands: git pull, git fetch and git clone.

Git pull – git pull will pull down from a remote whatever you ask for (that is, whichever branch you’re asking for) and instantly merge it into the branch you’re in when you make the request. Pull is a high-level request that runs a ‘fetch’ and then a ‘merge’ by default, or a rebase with ‘--rebase’. You could do without it; it’s just a convenience.

Git fetch – git fetch is similar to pull, except it won’t do any merging. The fetch pulls down the remote branch and puts it into a local remote-tracking branch (for example, “origin/remoteBranch”), which you shouldn’t manipulate directly; instead, create a proper local branch and work on that. ‘git checkout’ has a confusing feature, though: if you check out a local copy of a remote branch, it creates a local branch and sets up a merge to the remote one by default.

Git clone – git clone will clone a repo into a newly created directory. It’s useful when you’re setting up your local repo. It additionally creates a remote called ‘origin’ for the repo cloned from, sets up a local branch based on the remote’s active branch (generally master), and creates remote-tracking branches for all the branches in the repo.
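To make the pull/fetch/clone relationship concrete, here is a hedged sketch using a local bare repository in place of GitHub (all names and paths are invented); the final fetch + merge pair is exactly what git pull would have done in one step:

```shell
#!/bin/sh
set -e

# A bare repository stands in for the remote; two clones play two collaborators
remote=$(mktemp -d)
git init -q --bare "$remote"

alice=$(mktemp -d)/alice
git clone -q "$remote" "$alice"
cd "$alice"
git config user.email "alice@example.com"
git config user.name  "Alice"
echo v1 > notes.txt
git add notes.txt
git commit -q -m "add notes"
branch=$(git symbolic-ref --short HEAD)
git push -q -u origin "$branch"

# A second clone, made after the first push, already has the commit
bob=$(mktemp -d)/bob
git clone -q "$remote" "$bob"

# The first clone pushes a new commit...
cd "$alice"
echo v2 >> notes.txt
git commit -q -am "update notes"
git push -q

# ...and the second picks it up with fetch + merge, the two halves of 'git pull'
cd "$bob"
git fetch -q origin
git merge -q "origin/$branch"
cat notes.txt
```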

You can find more usage and command options in this post.