Understanding API

API stands for Application Programming Interface. An API is a software intermediary that allows two applications to talk to each other. In other words, an API is the messenger that delivers your request to the provider that you’re requesting it from and then delivers the response back to you. We can say that an API is a set of programming instructions and standards for accessing a Web-based software application or Web tool.

As we have seen, an API is a software-to-software interface, not a user interface. The most important part of this name is “interface,” because an API essentially talks to a program for you. You still need to know the language to communicate with the program, but without an API, you won’t get far. With APIs, applications talk to each other without any user knowledge or intervention. When programmers decide to make some of their data available to the public, they “expose endpoints,” meaning they publish a portion of the language they’ve used to build their program. Other programmers can then pull data from the application by building URLs or using HTTP clients (special programs that build the URLs for you) to request data from those endpoints.

Endpoints return text that’s meant for computers to read, so it won’t make complete sense if you don’t understand the computer code used to write it. A software company releases its API to the public so that other software developers can design products that are powered by its service.


When bloggers put their Twitter handle on their blog’s sidebar, WordPress enables this by using Twitter’s API.

Amazon.com released its API so that Web site developers could more easily access Amazon’s product information. Using the Amazon API, a third party Web site can post direct links to Amazon products with updated prices and an option to “buy now.”

When you buy movie tickets online and enter your credit card information, the movie ticket Web site uses an API to send your credit card information to a remote application that verifies whether your information is correct. Once payment is confirmed, the remote application sends a response back to the movie ticket Web site saying it’s OK to issue the tickets. As a user, you only see one interface — the movie ticket Web site — but behind the scenes, many applications are working together using APIs. This type of integration is called seamless, since the user never notices when software functions are handed from one application to another.

The Docker engine comes with an API. Docker provides an API for interacting with the Docker daemon (called the Docker Engine API), as well as SDKs for Go and Python. The SDKs allow you to build and scale Docker apps and solutions quickly and easily. If Go or Python don’t work for you, you can use the Docker Engine API directly. The Docker Engine API is a RESTful API accessed by an HTTP client such as wget or curl, or the HTTP library that is part of most modern programming languages.
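For example, assuming the Docker daemon is listening on its default Unix socket, you can list running containers through the Engine API with curl (the API version segment in the path may differ on your installation):

curl --unix-socket /var/run/docker.sock http://localhost/v1.40/containers/json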

Types of APIs

There are four main types of APIs:

Open APIs: Also known as public APIs, these have no access restrictions because they are publicly available.
Partner APIs: A developer needs specific rights or licenses to access this type of API because they are not available to the public.
Internal APIs: Also known as private APIs, these are exposed only by internal systems. They are usually designed for internal use within a company, which uses them among its different internal teams to improve its products and services.
Composite APIs: This type of API combines different data and service APIs. It is a sequence of tasks that runs as the result of a single request rather than a series of individual task requests. Its main uses are to speed up execution and improve the performance of listeners in web interfaces.

API architecture types:

APIs can vary by architecture type but are generally used for one of three purposes:

System APIs access and maintain data. These types of APIs are responsible for managing all of the configurations within a system. To use an example, a system API unlocks data from a company’s billing database.

Process APIs take the data accessed with system APIs and synthesize it to create a new way to view or act on data across systems. To continue the example, a process API would take the billing information and combine it with inventory information and other data to fulfill an order.

Experience APIs add context to system and process APIs. These types of APIs make the information collected by system and process APIs understandable to a specified audience. Following the same example, an experience API could translate the data from the process and system APIs into an order status tracker that displays information about when the order was placed and when the customer should expect to receive it.

Apart from the main web APIs, there are also web service APIs:

The following are the most common types of web service APIs:

SOAP (Simple Object Access Protocol): This is a protocol that uses XML as a format to transfer data. Its main function is to define the structure of the messages and methods of communication. It also uses WSDL, or Web Services Description Language, a machine-readable document that publishes a definition of its interface.

XML-RPC: This is a protocol that uses a specific XML format to transfer data, whereas SOAP uses its own proprietary XML format. It is also older than SOAP. XML-RPC uses minimal bandwidth and is much simpler than SOAP. For example, the YUM command in Linux uses XML-RPC calls.

JSON-RPC (JavaScript Object Notation-RPC): This protocol is similar to XML-RPC, but instead of using the XML format to transfer data it uses JSON.

REST (Representational State Transfer): REST is not a protocol like the other web services; instead, it is a set of architectural principles. A REST service needs to have certain characteristics, including a simple interface in which resources are easily identified within the request and manipulated through that interface.

Here is how SOAP compares with REST:

  • SOAP has strict rules and advanced, built-in security to follow; REST has loose guidelines, allowing developers to implement recommendations easily.
  • SOAP is driven by function; REST is driven by data.
  • SOAP requires more bandwidth; REST requires minimal bandwidth.
  • SOAP supports only text and numbers; REST supports various types of data, for example text, numbers, images, graphs, and charts.
  • SOAP focuses mainly on the document (the XML message); REST focuses mainly on the data (the resource).
  • SOAP has more built-in security (WS-Security on top of SSL); REST relies on the underlying transport (HTTPS) for security.

Web service APIs honor all the HTTP methods, such as POST, GET, PUT, PATCH, and DELETE. If we compare these with the CRUD (create, read, update, delete) operations:

HTTP methods vs CRUD operations

POST – The POST verb is most often utilized to create new resources. On success it returns HTTP status 201, along with a Location header with a link to the newly created resource. POST is neither safe nor idempotent.

GET – The HTTP GET method is used to read (or retrieve) a representation of a resource. GET returns a representation in XML or JSON and an HTTP response code of 200 (OK). GET is safe and idempotent.

PUT – PUT is most often utilized for update capabilities. PUT is not a safe operation, in that it modifies (or creates) state on the server, but it is idempotent.

PATCH – PATCH is used for modify capabilities. A PATCH request only needs to contain the changes to the resource, not the complete resource. PATCH is neither safe nor idempotent, although a PATCH request can be issued in such a way as to be idempotent.

DELETE – DELETE is pretty easy to understand: it is used to delete a resource identified by a URI. There is a caveat about DELETE idempotence: calling DELETE on a resource a second time will often return a 404 (NOT FOUND), since it was already removed and is therefore no longer available.
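To make the mapping concrete, here is a minimal Go sketch (not the code from the repository used later in this post) of a single handler that dispatches each HTTP method to the matching CRUD-style action and status code:

package main

import (
    "fmt"
    "net/http"
)

// eventHandler maps each HTTP method onto a CRUD action.
func eventHandler(w http.ResponseWriter, r *http.Request) {
    switch r.Method {
    case http.MethodGet: // Read
        w.WriteHeader(http.StatusOK) // 200
        fmt.Fprintln(w, `[{"id": "1", "title": "sample event"}]`)
    case http.MethodPost: // Create
        w.WriteHeader(http.StatusCreated) // 201
    case http.MethodPut, http.MethodPatch: // Update (full or partial)
        w.WriteHeader(http.StatusOK) // 200
    case http.MethodDelete: // Delete
        w.WriteHeader(http.StatusNoContent) // 204
    default:
        w.WriteHeader(http.StatusMethodNotAllowed) // 405
    }
}

func main() {
    http.HandleFunc("/events", eventHandler)
    http.ListenAndServe(":8001", nil)
}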

We will now try creating a RESTful API with Golang. Those who haven’t tried their hands at Golang can click here to follow the basics of Golang.

We are starting the Go program which creates an API. The source code for this is available in this Git repository. A discussion of the implementation of the API is out of scope for this blog post; we can discuss that in another post.

Here we concentrate only on the API and the requests and responses we send to and receive from the API endpoint. So, let’s start our dummy API interface.

executing the code

Now we can check whether our API is accessible and giving responses to our requests. We are doing it on the command line with a simple curl request. Our application is listening on port 8001, which can be modified as you wish in the code. We are now hitting the endpoint ‘/’.

curl request

Next we try hitting the endpoint /events to get all the events in the dummy database created with a slice and struct in the main.go file.

curl request to get all events
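In plain text, those two requests look roughly like this; the response body shown is illustrative, since the exact payload and field names depend on the dummy data defined in main.go:

$ curl http://localhost:8001/
$ curl http://localhost:8001/events
[{"ID":"1","Title":"Introduction to Golang","Description":"..."}]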

Now we will simulate the requests with RESTer, an open source Firefox extension (also available for Chrome). To hit the endpoint “/events” we are using the GET method. The response code 200 states that the request was successful.

GET method

In this simulation, we are using the POST method to create another event in the dummy database inside the application programmatically. Response code 201 is returned on successful creation of the event.

POST method to create new event
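An equivalent request with curl might look like this (the field names are assumptions based on a typical event struct, not the exact schema from the repository):

$ curl -X POST -H "Content-Type: application/json" \
  -d '{"ID":"2","Title":"Second event","Description":"created via POST"}' \
  http://localhost:8001/events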

Now we can try hitting the endpoint “/events/{id}”, which retrieves a single event with a GET method; “/events/2” will display the newly added event.

GET method on /events/2

We will now hit the “/events” endpoint and see whether both the events are in the response.

GET method on /events endpoint

In the next simulation we are using the PATCH method to modify/update an existing event. In this case we are hitting the endpoint “/events/2” to modify event id 2.

PATCH method to modify event id 2
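The curl equivalent, under the same assumed schema, sends only the fields being changed:

$ curl -X PATCH -H "Content-Type: application/json" \
  -d '{"Title":"Second event (updated)"}' \
  http://localhost:8001/events/2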

Try a GET on the endpoint “/events” to verify the result of our PATCH request.

GET method to verify PATCH request

In the final simulation we are hitting the endpoint “/events/1” with a DELETE method so that event id 1 will be removed.

DELETE Method to delete event id 1
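The curl equivalent is simply:

$ curl -X DELETE http://localhost:8001/events/1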

Voila..!! We just created an API and tested it with dummy data. I believe we got a quick overview of what an API is and how we can use different HTTP methods to retrieve or modify data using an API.

Site Reliability: SLI, SLO & SLA

Service Level Indicator (SLI), Service Level Objective (SLO) and Service Level Agreement (SLA) are parameters with which the reliability, availability and performance of a service are measured. The SLA, SLO, and SLI are related but distinct concepts.

It’s easy to get lost in a fog of acronyms, so before we dig in, here is a quick and easy definition:

  • SLA or Service Level Agreement is a contract in which the service provider makes promises to customers about service availability, performance, etc.
  • SLO or Service Level Objective is a goal that the service provider wants to reach.
  • SLI or Service Level Indicator is a measurement the service provider uses to gauge that goal.

Service Level Indicator
SLIs are the parameters that indicate the successful transactions and requests served by the service over predefined intervals of time. These parameters allow us to measure the required performance and availability of the service. Measuring them also enables us to improve the service gradually.

Key Examples are:

  • Availability/Uptime of the service.
  • Number of successful transactions/requests.
  • Consistency and durability of the data.

Service Level Objective
An SLO defines the acceptable downtime of the service. For multiple components of the service, there can be different parameters that define the acceptable downtime. It is a common pattern to start with a low SLO and gradually tighten it.

Key Examples are:

  • Durability of disks should be 99.9%.
  • Availability of service should be 99.95%
  • Service should successfully serve 99.999% requests/transactions.

Service Level Agreement
An SLA defines the penalty that the service provider must pay in the event of service unavailability for a pre-defined period of time. The service provider should clearly define the failure factors for which they will be accountable (their domain of responsibility). It is a common pattern to have a looser SLA than SLO, for instance an SLA of 99% and an SLO of 99.5%. If the service is more available than required, the gap between the SLA/SLO and actual availability can be spent as an error budget to deploy complex releases to production.

Key Examples of Penalty are:

  • Partial refund of service subscription fee.
  • Additional subscription time added for free.

So here is the relationship: the service provider collects metrics based on SLIs, defines metric thresholds based on SLOs, and monitors those thresholds so that the service does not break the SLA. In practice, the SLIs are the metrics in the monitoring system, the SLOs are the alerting rules, and the SLAs are the numbers of the monitoring metrics applied to the SLOs.

Usually the SLO and the SLA are similar, with the SLO being tighter than the SLA. SLOs are generally used internally only, while SLAs are for external use. If service availability violates the SLO, operations need to react quickly to avoid breaking the SLA; otherwise, the company might need to refund some money to customers.

The SLA, SLO, and SLI are based on the assumption that the service will not be 100% available. Instead, we guarantee that the system will be available more than a certain amount of time, for example 99.5%.
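To see what such a number means in practice, here is a tiny Go sketch (using the 99.5% figure from the example above) that converts an availability target into a monthly downtime budget:

package main

import (
    "fmt"
    "time"
)

func main() {
    slo := 0.995                 // availability target, e.g. 99.5%
    month := 30 * 24 * time.Hour // a 30-day month
    // The error budget is the fraction of the month we are allowed to be down.
    budget := time.Duration(float64(month) * (1 - slo))
    fmt.Printf("Allowed downtime per month at %.1f%%: %s\n", slo*100, budget)
    // Prints: Allowed downtime per month at 99.5%: 3h36m0s
}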

When we apply this definition to availability, for example, SLIs are the key measurements of the availability of a system, SLOs are the goals we set for how much availability we expect out of a system, and SLAs are the legal contracts that explain what happens if our system doesn’t meet its SLO.

SLIs exist to help engineering teams make better decisions. Your SLO performance is critical information to have when you’re making decisions about how hard and fast you can push your systems. SLOs are also important data points for other engineers when they’re making assumptions about their dependencies on your service or system. Lastly, your larger organization should use your SLIs and SLOs to make informed decisions about investment levels and about balancing reliability work against engineering velocity.

Note: this abstract is taken from SRE Fundamentals, CRE and the book Site Reliability Engineering: How Google Runs Production Systems.

Unikernel: Another paradigm for the cloud

In this cloud era it is hard to imagine a world without access to services in the cloud. From contacting someone through mail, to storing work-related documents on an online drive and accessing them across devices, there are a lot of services we use on a daily basis that live in the cloud.

To reduce the cost of compute power, virtualization has been adopted to offer more services with less hardware. Then came the concept of containers, where you deploy the application in isolated containers with lightweight images that contain only the few binaries and libraries needed to run your application. But we still need the underlying VMs to deploy such solutions, and all these VMs come with a cost. While large data centers are offering services in the cloud, they are also hungry for electric power, which is becoming a growing concern as our planet is drained of its resources. So what we need now are less power-hungry solutions.

What if, instead of virtualizing an entire operating system, you were to load an application with only the required components from the operating system, effectively reducing the size of the virtual machine to its bare minimum resource footprint? This is where unikernels come into play.


The unikernel is a relatively new concept that was first introduced around 2013 by Anil Madhavapeddy in a paper titled “Unikernels: Library Operating Systems for the Cloud” (Madhavapeddy, et al., 2013).

You can find more details on unikernels by searching the scholarly articles on Google.

Unikernels are defined by the community at Unikernel.org as follows.

“Unikernels are specialized, single-address-space machine images constructed by using library operating systems.”

For more detailed reading about the concepts behind Unikernel, please follow this link,

A unikernel is an application that has been boiled down to a small, secure, lightweight virtual machine, eliminating general-purpose operating systems such as Linux or Windows. Unikernels aim to be much more secure than Linux. They do this through several thrusts: not having the notion of users, running a single process per VM, and limiting the amount of code incorporated into each VM. This means there are no users and no shell to log in to and, more importantly, you can’t run more than the one program you want to run inside. Despite their relatively young age, unikernels borrow from age-old concepts rooted in the dawn of the computer era: microkernels and library operating systems. A unikernel holds a single application. Single address space means that, at its core, the unikernel does not have separate user and kernel address spaces. Library operating systems are the core of unikernel systems. Unikernels are provisioned directly on the hypervisor without a traditional system like Linux, so a single server can run many times more VMs (proponents claim up to 1000x).

You can have a look here for details about microkernel, monolithic and library operating systems.

Virtual Machines VS Linux Containers VS Unikernel

Virtualization of services can be implemented in various ways. One of the most widespread methods today is the virtual machine, hosted on hypervisors such as VMware’s ESXi or the Linux Foundation’s Xen Project.

Hypervisors allow hosting multiple guest operating systems on a single physical machine. These guest operating systems are executed in what is called virtual machines. The widespread use of hypervisors is due to their ability to better distribute and optimize the workload on the physical servers as opposed to legacy infrastructures of one operating system per physical server.

Containers are another method of virtualization, which differs from hypervisors by creating virtualized environments that share the host’s kernel. This provides a lighter approach than hypervisors, which require each guest to have its own copy of the operating system kernel, making a hypervisor-virtualized environment resource-heavy in contrast to containers, which share parts of the existing operating system.

As mentioned above, unikernels leverage the abstraction of hypervisors in addition to using library operating systems to include only the required kernel routines alongside the application, presenting the lightest of all three solutions.

The figure above shows the major difference between the three virtualization technologies. Here we can clearly see that virtual machines present a much larger load on the infrastructure as opposed to containers and unikernels.

Additionally, unikernels are in direct “competition” with containers. By providing services in the form of reduced virtual machines, unikernels improve on the container model by its increased security. By sharing the host kernel, containerized applications share the same vulnerabilities as the host operating system. Furthermore, containers do not possess the same level of host/guest isolation as hypervisors/virtual machines, potentially making container breaches more damaging than both virtual machines and unikernels.

Virtual Machines
Pros:
  • Allow deploying different operating systems on a single host
  • Complete isolation from the host
  • Orchestration solutions available
Cons:
  • Require compute power proportional to the number of instances
  • Require large infrastructures
  • Each instance loads an entire operating system

Linux Containers
Pros:
  • Lightweight virtualization
  • Fast boot times
  • Orchestration solutions
  • Dynamic resource allocation
Cons:
  • Reduced isolation between host and guest due to the shared kernel
  • Less flexible (i.e. dependent on the host kernel)
  • Network is less flexible

Unikernels
Pros:
  • Lightweight images
  • Specialized applications
  • Complete isolation from the host
  • Higher security against absent functionality (e.g. remote command execution)
Cons:
  • Not yet mature enough for production
  • Require developing applications from the ground up
  • Limited deployment possibilities
  • Lack of complete IDE support
  • Static resource allocation
  • Lack of orchestration tools

A comparison of the three solutions

Docker, containerization technology, and container orchestrators like Kubernetes and OpenShift are two steps forward for the world of DevOps, and the principles they promote are forward-thinking and largely on target for a more secure, performance-oriented, and easy-to-manage cloud future. However, an alternative approach leveraging unikernels and immutable servers will result in smaller, easier to manage, more secure containers that will be simpler for existing enterprises to adopt. As DevOps matures, the shortcomings of cloud application deployment and management are becoming clear: virtual machine image bloat, large attack surfaces, legacy executables, base-OS fragmentation, and an unclear division of responsibilities between development and IT for cloud deployments are all causing significant friction (and opportunities for the future).

For example: it remains virtually impossible to create a Ruby or Python web server virtual machine image that DOESN’T include build tools (gcc), ssh, and multiple latent shell executables. All of these components are detrimental to production systems, as they increase image size, attack surface, and maintenance overhead.

Compared to VMs running operating systems like Windows and Linux, a unikernel has only a tenth of one percent of the attack surface. In the case of a unikernel, sysdig, tcpdump, and mysql-client are not installed, and you can’t just “apt-get install” them either; you have to bring them with your exploit. To take it further, even a simple cat /etc/hosts or a grep of /var/log/nginx/access.log simply won’t work, since once again those are separate processes.
So unikernels are highly resistant to remote code execution attacks, more specifically shell-code exploits.

Immutable Servers & Unikernels

Immutable servers are a deployment model that mandates that no application updates, security patches, or configuration changes happen on production systems. If any of these layers needs to be modified, a new image is constructed, pushed and cycled into production. Heroku is a great example of immutable servers in action: every change to your application requires a ‘git push’ to overwrite the existing version. The advantages of this approach include higher confidence in the code that is running in production, integration of testing into deployment workflows, and easy verification that systems have not been compromised.

Once you become a believer in the concept of immutable servers, then speed of deployment and minimizing vulnerability surface area become objectives. Containers promote the idea of single-service-per-container (microservices), and unikernels take this idea even further.

Unikernels allow you to compile and link your application code all the way down to, and including, the operating system. For example, if your application doesn’t require persistent disk access, no device drivers or OS facilities for disk access would even be included in the final production image. Since unikernels are designed to run on hypervisors such as Xen, they only need interfaces to standardized resources such as networking and persistence. Device drivers for thousands of displays, disks, and network cards are completely unnecessary. Production systems become minimalist, requiring only the application code, the runtime environment, and the OS facilities the application needs. The net effect is smaller VM images with less surface area that can be deployed faster and maintained more easily.

Traditional operating systems (Linux, Windows) will become extinct on servers. They will be replaced with single-user, bare-metal hypervisors optimized for the specific hardware, taking decades of multi-user, hardware-agnostic code cruft with them. A more mature build-deploy-manage tool set based on these technologies will be truly game-changing for hosted and enterprise clouds alike.

  • ClickOS: C++; runs on Xen; targets Network Function Virtualization.
  • IncludeOS: C++; runs on KVM, VirtualBox, ESXi, Google Cloud, OpenStack; orchestration tool available.
  • Nanos Unikernel: C, C++, Go, Java, Node.js, Python, Rust, Ruby, PHP, etc.; runs on QEMU/KVM; orchestration tool available.
  • OSv: Java, C, C++, Node, Ruby; runs on VirtualBox, ESXi, KVM, Amazon EC2, Google Cloud; aimed at cloud and IoT (ARM).
  • Rumprun: C, C++, Erlang, Go, Java, JavaScript, Node.js, Python, Ruby, Rust; runs on Xen, KVM.
  • Unik: Go, Node.js, Java, C, C++, Python, OCaml; runs on VirtualBox, ESXi, KVM, Xen, Amazon EC2, Google Cloud, OpenStack, PhotonController; a unikernel compiler toolbox with orchestration possible through Kubernetes and Cloud Foundry.
  • ToroKernel: FreePascal; runs on VirtualBox, KVM, Xen, Hyper-V; a unikernel dedicated to running microservices.

Comparing a few unikernel solutions from active projects

Out of the various existing projects, some stand out due to their wide range of supported languages. For the active projects, the list above describes the languages they support, the hypervisors they can run on, and remarks concerning their functionality.

I am currently experimenting with unikernels on AWS and Google Cloud Platform and will update you with another post on that soon.

Source: Medium, github, containerjournal, linuxjournal

Helm 3 – Sans tiller – Really?

Helm has recently announced its much-awaited version 3. The surprise factor in this release is that the server component added in the Helm 2 release is missing. Yeah, you got it right: Tiller is missing in Helm 3, which means we have a server-less Helm. Let us check out in this post why it was removed and why it matters.

As an introduction: Helm, the package manager for Kubernetes, is a useful tool for installing, upgrading and managing applications on a Kubernetes cluster. Helm has two parts: a client (helm) and a server (tiller). Tiller runs inside your Kubernetes cluster as a pod in the kube-system namespace, in the cluster pointed to by the current context in your kubeconfig (which can be overridden with the --kube-context flag). Tiller manages both the releases (installations) and revisions (versions) of charts deployed on the cluster. When you run helm commands, your local Helm client sends instructions to Tiller in the cluster, which in turn makes the requested changes. So Helm is our package manager for Kubernetes and our client tool; we use the helm CLI for all of our commands. Tiller is the service that actually communicates with the Kubernetes API to manage our Helm packages.

Helm packages are called charts. Charts are Helm’s deployable artifacts. Charts are always versioned using semantic versioning, and come either packed in versioned .tgz files or in a flat directory structure. They are abstractions describing how to install packages onto a Kubernetes cluster. When a chart is deployed, Helm works as a templating engine to populate multiple YAML files for package dependencies with the required variables, and then applies the rendered manifests to the cluster (much like kubectl apply) to install the package.

As already mentioned, Tiller is the tool used by Helm to deploy almost any Kubernetes resource. To do this, Helm takes maximum permission to make changes in Kubernetes. Because of this, anyone who can talk to Tiller can deploy or modify any resource on the Kubernetes cluster, just like a system admin (think of the ‘root’ user on a Linux host). This can cause security issues in the cluster if Helm has not been deployed properly, following certain security measures. Also, authentication is not enabled in Tiller by default, so if any pod has been compromised and has permission to talk to Tiller, then the complete cluster in which Tiller is running is compromised. This is the main reason for the removal of Tiller.

The Tiller method:

Tiller was used by Helm as an in-cluster operator to maintain the state of a Helm release. It was also used to save the release information of all the releases it performed, using ConfigMaps in the same namespace in which Tiller was deployed. This release information was required by Helm when it updated a release or when there were state changes in any release. So whenever a helm update command was used, Tiller compared the new manifest with the old manifest of the release and made changes accordingly. Helm was thus dependent on Tiller to provide the previous state of the release.

The Tiller-less method:

The main need for Tiller was to store release information; Helm now does this using Secrets, saved in the same namespace as the release. Whenever Helm needs the release information, it gets it from the namespace of the release. To make a change, Helm now just fetches information from the Kubernetes API server, makes the changes on the client side, and stores a record of the installation in Kubernetes. The benefit of Tiller-less Helm is that, since Helm now makes changes in the cluster from the client side, it can only make those changes the client has been granted permission to make.
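As a quick illustration of the Tiller-less flow (the release and chart names here are placeholders), you can install a chart with the Helm 3 client alone and then find the release state stored as a Secret in the release namespace:

helm install myapp ./mychart
kubectl get secrets --namespace default
# expect an entry such as sh.helm.release.v1.myapp.v1 of type helm.sh/release.v1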

Tiller was a good addition in Helm 2, but to run it in production it had to be properly secured, which added extra learning steps for DevOps and SRE teams. With Helm 3 the learning curve has been reduced and security management has been left in the hands of Kubernetes. Helm can now focus on package management.

Data courtesy : Medium, codecentric, Helm

Bare-Metal K8s Cluster with Raspberry Pi – Part 3

This is a continuation of the post series Bare-metal K8s cluster with Raspberry Pi; Part 1 and Part 2 are here.

Another option for running a bare-metal K8s cluster on the Raspberry Pi that I tried and tested was MicroK8s, which we discuss in this post.

MicroK8s is lightweight upstream Kubernetes: the smallest, simplest, pure production K8s, for clusters, laptops, IoT and Edge, on Intel and ARM.

MicroK8s is a CNCF-certified upstream Kubernetes deployment that runs entirely on your workstation or edge device. Being a snap, it runs all Kubernetes services natively (i.e. no virtual machines) while packing the entire set of libraries and binaries needed. Installation is limited only by how fast you can download a couple of hundred megabytes, and removing MicroK8s leaves nothing behind.

And to give some context on snaps: snaps are app packages for desktop, cloud and IoT that are easy to install, secure, cross-platform and dependency-free. Snaps are discoverable and installable from the Snap Store, the app store for Linux with an audience of millions.

A snap is a bundle of an app and its dependencies that works without modification across Linux distributions.

We are going to use the same components list as described in the Part 1 of this series.

Each Pi is going to need an Ubuntu server image and you’ll need to be able to SSH into them. The link here will help you reach this stage.

Kubernetes Cluster Preparation with SSH connection to the Pi from your terminal

Installing MicroK8s
Follow this section for each of your Pis. Once completed you will have MicroK8s installed and running everywhere.

SSH to your first Pi and install the MicroK8s snap:

sudo snap install microk8s --classic

As MicroK8s is a snap, it will be automatically updated to newer releases of the package, which follow upstream Kubernetes releases closely, so we don’t need to worry about the K8s version we’re installing. If you want to pin to a specific release instead, you can select a channel at install time:

sudo snap install microk8s --classic --channel=1.15/stable
Channels are made up of a track (or series) and an expected level of stability, based on MicroK8s releases (Stable, Candidate, Beta, Edge). For more information about which releases are available, run:

snap info microk8s

Cheat Sheet for MicroK8s
Before going further here is a quick intro to the MicroK8s command line:

  • microk8s.start – start all enabled Kubernetes services
  • microk8s.inspect – inspect the state of the services and generate a report
  • microk8s.stop – stop all Kubernetes services
  • microk8s.enable dns – enable a Kubernetes add-on, here “kubedns”
  • microk8s.kubectl cluster-info – status of the cluster

MicroK8s is easy to use and comes with plenty of Kubernetes add-ons you can enable or disable.
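For example, to enable a couple of common add-ons and check the result (add-on availability varies by MicroK8s release; microk8s.status lists what your version offers):

microk8s.enable dns dashboard
microk8s.status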

Master node and leaf nodes
Now that you have MicroK8s installed on all boards, pick the one that will be the master node of your cluster.

On the chosen Master node, run the following command:

sudo microk8s.add-node
This command will generate a connection string in the form <master-ip>:<port>/<token>.

Adding a node
Now, you need to run the join command from another Pi you want to add to the cluster.
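The join command takes the connection string printed by add-node; your master IP, port and token will differ:

microk8s.join 10.55.90.1:25000/<token>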

You should be able to see the new node in a few seconds on the master with the following command:

microk8s.kubectl get node

For each new node, you need to run the microk8s.add-node command on the master, copy the output, then run microk8s.join on the leaf.

Removing nodes
To remove a node, run the following command on the master:

sudo microk8s.remove-node <node name>
The node names are available on the master by running microk8s.kubectl get node.

Alternatively, you can leave the cluster from a leaf node by running:

sudo microk8s.leave

Once the Pis are set up with MicroK8s, adding and removing nodes is easy and you can scale up or down as you go.

If you followed this series, you are now in control of your Kubernetes cluster: one built with native Kubernetes and Docker, and the other with MicroK8s, which is easier to install and manage.

This completes the three-part series for K8s on Raspberry Pi. In a follow-up blog post we can see how to use kubectl and Helm charts to deploy an Nginx service, Prometheus, and a few other services from the DevOps tool set to the cluster.

Bare-Metal K8s Cluster with Raspberry Pi – Part 2

This is a continuation of the post series Bare-metal K8s cluster with Raspberry Pi.

With one master node and three worker nodes set up, we continue by installing Kubernetes.

Install Kubernetes

We are using version 1.15.3. There shouldn’t be any errors; however, during my installation the repos were down and I had to retry a few times.

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | \
sudo apt-key add - && echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | \
sudo tee /etc/apt/sources.list.d/kubernetes.list && sudo apt-get update -q

sudo apt-get install -qy kubelet=1.15.3-00 kubectl=1.15.3-00 kubeadm=1.15.3-00

Repeat steps for all of the Raspberry Pis.

Kubernetes Master Node Configuration
Note: You only need to do this for the master node (in this deployment I recommend only 1 master node). Each Raspberry Pi is a node.

Initiate Master Node

sudo kubeadm init
Enable Connections to Port 8080
Without this Kubernetes services won’t work

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Add Container Network Interface (CNI)
I’ve chosen to use Weave Net; however, you can get others working, such as Flannel (I’ve verified it works with this cluster).
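At the time this cluster was built, Weave Net could be applied with its documented one-liner (run this on the master; check the Weave Net documentation for the current URL):

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"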

Get Join Command
This will be used in the next section to join the worker nodes to the cluster:

kubeadm token create --print-join-command

It will return something like:

kubeadm join --token X.Y --discovery-token-ca-cert-hash sha256:XYZ

Kubernetes Worker Node Configuration
Note: You only need to do this for the worker nodes (in this deployment I recommend 3 worker nodes).

Join Cluster
Use the join command provided at the end of the previous section
sudo kubeadm join --token X.Y --discovery-token-ca-cert-hash sha256:XYZ

Verify Node Added Successfully (SSH on Master Node)
The node should show status Ready after ~30 seconds:
kubectl get nodes

Another option for running a bare-metal K8s cluster on the Raspberry Pi that I tried and tested was MicroK8s, which is covered in Part 3 of this series.

A sneak peek into the K8s cluster on Raspberry Pi

Bare-Metal K8s Cluster with Raspberry Pi – Part 1

There are multiple ways we can use a Kubernetes cluster to deploy our applications. Most of us opt to use a Kubernetes service from a public cloud provider; GKE, EKS and AKS are the most prominent ones. Deploying a Kubernetes cluster on a public cloud provider is relatively easy, but what if you want a private bare-metal K8s cluster? Having worked extensively in the data center and started my career as a sysadmin, I personally prefer a piece of tangible hardware to get the feel of building it. This blog post walks you through the steps I took to build a bare-metal K8s cluster to play with.

K8s is an open source container orchestration platform that helps manage distributed, containerized applications at massive scale. Born at Google out of its internal Borg system, version 1.0 was released in July 2015. It has continued to evolve and mature and is now offered as a PaaS by all of the major cloud vendors.

Google has been running containerized workloads in production for more than a decade. Whether it’s service jobs like web front-ends and stateful servers, infrastructure systems like Bigtable and Spanner, or batch frameworks like MapReduce and Millwheel, virtually everything at Google runs as a container.

You can find the paper here.

Kubernetes traces its lineage directly from Borg. Many of the developers at Google working on Kubernetes were formerly developers on the Borg project. We’ve incorporated the best ideas from Borg in Kubernetes, and have tried to address some pain points that users identified with Borg over the years.

More than just enabling a containerized application to scale, Kubernetes has release-management features that enable updates with near-zero downtime, version rollback, and clusters that can ‘self-heal’ when there is a problem. Load balancing, auto-scaling and SSL can easily be implemented. Helm, the package manager for Kubernetes, has revolutionized the world of server management by making multi-node data stores like Redis and MongoDB incredibly easy to deploy. Kubernetes gives you the flexibility to move your workload where it is best suited. This complements the hybrid cloud story, and in my career it has become more apparent that my customers see this as well, helping them resolve issues like cost, availability and compliance. In parallel, software vendors are starting to embrace containers as a standard deployment model, leading to a recent increase in requests for container solutions.

As you can see in the workflow comparison below, there is greater room for error when deploying on-premises. Public clouds provide automation and reduce the risk of error, as fewer steps are required. But as mentioned above, a private cloud provides more options when you have unique requirements.


Advantages of Kubernetes:

  • Using Kubernetes and its huge ecosystem can improve your productivity
  • Kubernetes and a cloud-native tech stack attract talent
  • Kubernetes is a future-proof solution
  • Kubernetes helps make your application run more stably
  • Kubernetes can be cheaper than its alternatives


Disadvantages:

  • Kubernetes can be overkill for simple applications
  • Kubernetes is very complex and can reduce productivity
  • The transition to Kubernetes can be cumbersome
  • Kubernetes can be more expensive than its alternatives



The hardware used for this build:

Boards:
3 x Raspberry Pi 4 Model B with 2 GB RAM
1 x Raspberry Pi 3 Model B+ with 1 GB RAM

Storage:
4 x 16GB high-speed SanDisk micro-SD cards

Network and peripherals:
1 x network switch – local LAN for K8s internal connectivity
1 x network router – for WiFi (my default ISP router was used here); only the master node had internet connectivity once the setup was completed
4 x Ethernet cables
1 x keyboard, HDMI cable, mouse (for initial setup only)

Initial Raspberry Pi Configuration:

Flash Raspbian to the Micro-SD Cards

Download image from the below link,

Raspbian OS

I have used BalenaEtcher to flash image onto micro-SD card

Perform Initial Setup on Boot: on the startup screen, we need a keyboard, monitor, and mouse connected for this setup.

Choose Country, Language, Timezone
Define new password for user ‘pi’
Connect to WiFi or skip if using ethernet
Skip update software (We will perform this activity manually later).
Choose restart later

Configure Additional Settings: click the Raspberry Pi icon (top left of screen) > Preferences > Raspberry Pi Configuration


Configure Hostname
Boot: To CLI

SSH: Enable

Choose restart later

Configure Static Network: perform one of the following:
Define Static IP on Raspberry Pi: right-click the arrow logo at the top right of the screen and select ‘Wireless & Wired Network Settings’

Define Static IP on DHCP Server: Configure your DHCP server to define a static IP on the Raspberry Pi Mac Address.

Reboot and Test SSH
Username: pi

Password: Defined in step 2 above

From the terminal ssh pi@[IP Address]

Repeat steps for all of the Raspberry Pis.

Kubernetes cluster preparation starts with an SSH connection to the Pi from your terminal. I am using a 12-year-old Lenovo laptop running MX Linux. Open a terminal and establish an SSH connection to the Pi.

Perform Updates

apt-get update: Updates the package indexes
apt-get upgrade: Performs the upgrades

Configure net.ipv4 settings: edit /etc/sysctl.conf (sudo vi /etc/sysctl.conf), uncomment net.ipv4.ip_forward=1 and add net.ipv4.ip_nonlocal_bind=1.
Note: This is required to allow traffic forwarding, for example NodePorts from containers to/from non-cluster devices.
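The same change can be made non-interactively and applied without a reboot; a small sketch, assuming the stock Raspbian /etc/sysctl.conf:

sudo sed -i 's/^#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf
echo 'net.ipv4.ip_nonlocal_bind=1' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p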

Install Docker

curl -sSL get.docker.com | sh

Grant privilege for user pi to execute docker commands

sudo usermod pi -aG docker

Disable Swap

sudo systemctl disable dphys-swapfile.service
sudo reboot

We can verify this with the top command: in the top left area, the total next to ‘MiB Swap’ should be 0.0.

Having completed the initial steps for our K8s bare-metal cluster, we can see how to build the cluster in Part 2 of this blog post.

An Introduction to Golang

The Go programming language is an open source project to make programmers more productive.

Go is expressive, concise, clean, and efficient. Its concurrency mechanisms make it easy to write programs that get the most out of multicore and networked machines, while its novel type system enables flexible and modular program construction. Go compiles quickly to machine code yet has the convenience of garbage collection and the power of run-time reflection. It’s a fast, statically typed, compiled language that feels like a dynamically typed, interpreted language.

It combines the best of both statically typed and dynamically typed languages and gives you the right mixture of efficiency and ease of programming. It is primarily suited for building fast, efficient, and reliable server side applications.

Following are some of the most noted features of Go

  • Type safety and Memory safety.
  • Good support for Concurrency and communication.
  • Efficient, low-latency garbage collection
  • High speed compilation
  • Excellent Tooling support

Now 10 years in the wild, Google’s Go programming language has certainly made a name for itself. Lightweight and quick to compile, Go has stirred significant interest due to its generous libraries and abstractions that ease the development of concurrent and distributed (read: cloud) applications.

But the true measure of success of any programming language is the projects that developers create with it. Go has proven itself as a first choice for fast development of network services, software infrastructure projects, and compact and powerful tools of all kinds.

Here are 5 noteworthy projects written in Go, many of which have become more famous than the language they were written in. All of them have made a significant mark in their respective domains. All of the projects featured here are hosted on GitHub, so it’s easy for the Go-curious to take a peek at the Go code that makes them tick.

  • Docker
  • Kubernetes
  • Terraform
  • Fedora CoreOS
  • InfluxDB

Installing GO

As a Linux user of more than 17 years, I describe installation and configuration here for Linux; it will work on almost all Linux distros. For installing Go on Windows or Mac you can follow the official Go documentation here.

Go binary distributions are available for all major operating systems: Linux, Windows, and macOS. It’s super simple to install Go from a binary distribution. The following link provides more information on the releases.

If a binary distribution is not available for your operating system, you can install Go from source.

We will be installing from the binary by downloading the tarball:

Curl in action for download

Extracting the tarball:

Extract using tar

Now we need to export the path so that the go binary is available in your OS environment PATH variable.

Exporting PATH variable

If all the above steps are completed successfully, we will be able to execute go from the command line and test it by printing the version details.
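For reference, the commands behind those screenshots look roughly like this; the Go version shown is illustrative, so substitute the current release from the downloads page:

curl -LO https://dl.google.com/go/go1.13.4.linux-amd64.tar.gz
sudo tar -C /usr/local -xzf go1.13.4.linux-amd64.tar.gz
export PATH=$PATH:/usr/local/go/bin
go version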

GO Tool or Go command

Go is a tool for managing Go source code.

$ go <command> [arguments]


bug - start a bug report
build - compile packages and dependencies
clean - remove object files and cached files
doc - show documentation for package or symbol
env - print Go environment information
fix - update packages to use new APIs
fmt - gofmt (reformat) package sources
generate - generate Go files by processing source
get - add dependencies to current module and install them
install - compile and install packages and dependencies
list - list packages or modules
mod - module maintenance
run - compile and run Go program
test - test packages
tool - run specified go tool
version - print Go version
vet - report likely mistakes in packages

We can execute “go help <command>” for more information about a command.


Go requires you to organize your code in a specific way. All your Go code, and the code you import, must reside in a single workspace. A workspace is a directory in your file system whose path is stored in the environment variable GOPATH. A Go workspace is how Go manages our source files, compiled binaries, and cached objects used for faster compilation later. It is typical, and also advised, to have only one Go workspace, though it is possible to have multiple workspaces. The GOPATH acts as the root folder of a workspace. On Linux and other UNIX derivatives GOPATH defaults to $HOME/go, but this can be changed if needed by exporting the appropriate path in the GOPATH variable.

We will now set the GOPATH variable and test it by printing the environment variable.

Variable GOPATH exported
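In command form (the directory shown is the conventional default; adjust it if you chose another location):

export GOPATH=$HOME/go
echo $GOPATH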

Anatomy of Go workspace (GOPATH)

Inside of a Go Workspace, or GOPATH, there are three directories: bin, pkg, and src. Each of these directories has special meaning to the Go tool chain.

├── bin
├── pkg
└── src
    └── github.com/foo/bar
        └── bar.go

Let’s take a look at each of these directories:

src: contains Go source files.

The src directory typically contains many version control repositories containing one or more Go packages. Every Go source file belongs to a package. You generally create a new subdirectory inside your repository for every separate Go package.

bin: contains the binary executables.

The Go tool builds and installs binary executables to this directory. All Go programs that are meant to be executables must contain a source file with a special package called main and define the entry point of the program in a special function called main().

pkg: contains Go package archives

All the non-executable packages (shared libraries) are stored in this directory. You cannot run these packages directly as they are not binary files. They are typically imported and used inside other executable packages.

Start Coding with a Hello World

First, make sure that you have created the Go workspace directory (GOPATH) at $HOME/go, or any other directory you wish, and exported the GOPATH variable. Next, create a new directory src/hello inside your workspace.

Creating $GOPATH/src/hello directory

Now we can create a file named hello.go with the following code:

package main

import "fmt"

func main() {
    fmt.Printf("Hello, World\n")
}

creating hello.go and listing it

The easiest way to run the above program is using the go run command as below.

run the hello.go program

The go run command compiles and runs your program in one step. If, however, you want to produce a binary from your Go source that can be run as a standalone executable without using the Go tool, use the go build command. The go build command creates an executable binary with the same name as your immediate package (hello). You can run the binary file like so:

Building hello app from hello.go
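The full session from the screenshots looks roughly like this:

$ go run hello.go
Hello, World
$ go build
$ ./hello
Hello, World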

Learning GO

The Go Tour is an interactive introduction to Go in three sections. The first section covers basic syntax and data structures; the second discusses methods and interfaces; and the third introduces Go’s concurrency primitives. Each section concludes with a few exercises so you can practice what you’ve learned.

You can take the tour online:

Or can install locally:

$ go get golang.org/x/tour

That’s all for now folks! I hope you’re all set to deep dive and learn more about the Go programming language.

Quick and Dirty tutorial on Jenkins and Git plugin

Jenkins is a popular open source Continuous Integration tool which is widely used for project development, deployment, and automation. Continuous integration is a process in which all development work is integrated as early as possible; the resulting artifacts are automatically created and tested, and this process should identify errors very early. Jenkins is one open source tool to perform continuous integration and build automation. The basic functionality of Jenkins is to execute a predefined list of steps. The trigger for this execution can be time or event based: for example, every half hour, or after a new commit in a Git repository. Jenkins also monitors the execution of the steps and allows the process to be stopped if one of the steps fails. Jenkins can also send out notifications about build success or failure. Jenkins can be extended with additional plug-ins, e.g. for supporting the Git version control system, provisioning with Docker, or integration with Puppet.

In this post, we’ll discuss the installation of Jenkins and how to integrate Jenkins with GitHub in such a way that a git commit can trigger a Jenkins build. Jenkins can be started via the command line or can run in a web application server. On Linux, you can also install Jenkins as a system service. For almost all Linux platforms there is a native Jenkins package available. Please check the Jenkins home page for more details.

All the demonstrations in this post are shown using an Amazon EC2 RHEL 7 instance. We’re using an Amazon instance because we need to establish a connection to the system’s public IP via GitHub webhooks. If you are using any other Linux distribution, you can make use of most of the steps described here except the RHEL-specific commands like yum, systemctl, etc. Also, you can find your OS-specific installation details in the Jenkins wiki.

We will be installing Jenkins using the native package available for RHEL/Fedora/CentOS. There is another way of installing Jenkins which involves deploying the Jenkins war file, which you can see here.

Installing Java:

The prerequisite for installing Jenkins is to set up a Java Virtual Machine on your system. This is done by installing the latest OpenJDK package. Before that, we will install the EPEL repository for RHEL 7. You can download the EPEL package from the Fedora project URL using the wget command; if wget is not installed on your machine, install it using yum.

# yum install wget

# wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

# yum install epel-release-latest-7.noarch.rpm

You can do a yum search openjdk to see the available JDK packages.

# yum install java-1.8.0-openjdk-devel.x86_64

Verify the java installation as below,

# java -version
openjdk version "1.8.0_121"
OpenJDK Runtime Environment (build 1.8.0_121-b13)
OpenJDK 64-Bit Server VM (build 25.121-b13, mixed mode)

The next step is to set the environment variables JAVA_HOME and JRE_HOME. These are used by Java applications to identify the Java virtual machine path. This is accomplished by adding the environment variables to the file /etc/profile.

export JAVA_HOME=/usr/lib/jvm/jre-1.8.0-openjdk
export JRE_HOME=/usr/lib/jvm/jre

Once this is added, you can reload the environment variables from /etc/profile by executing:

# source /etc/profile

Installing Apache Ant and Apache Maven:

The next step is to install the Apache Foundation siblings Ant and Maven. These are used while building Java-based applications in Jenkins. Both packages can be downloaded from their respective project sites as tarballs.


# wget http://redrockdigimark.com/apachemirror//ant/binaries/apache-ant-1.10.0-bin.tar.gz

# wget http://redrockdigimark.com/apachemirror/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.tar.gz


The commands provided below will extract the downloaded tarballs and put them in /opt. Then we go to the /opt directory and create symbolic links.

# tar -zxvf apache-maven-3.3.9-bin.tar.gz -C /opt/
# tar -zxvf apache-ant-1.10.0-bin.tar.gz -C /opt/

# cd /opt
# ln -s apache-maven-3.3.9/ maven
# ln -s apache-ant-1.10.0/ ant

The final listing of /opt will show something like this:

# ls /opt
ant apache-ant-1.10.0 apache-maven-3.3.9 maven

Environment Variables

Now we will set up environment variables for these Apache siblings and verify the installation. The variables are set by creating the files maven.sh and ant.sh under /etc/profile.d and reloading the environment variables.

# cat > /etc/profile.d/ant.sh
export ANT_HOME=/opt/ant
export PATH=${ANT_HOME}/bin:${PATH}

# cat > /etc/profile.d/maven.sh
export M2_HOME=/opt/maven
export PATH=${M2_HOME}/bin:${PATH}

# source /etc/profile

At this stage, if everything goes well, you'll see output similar to the one below:

# echo $JAVA_HOME;echo $JRE_HOME

# ant -version
Apache Ant(TM) version 1.10.0 compiled on December 27 2016

# mvn --version
Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T11:41:47-05:00)
Maven home: /opt/maven
Java version: 1.8.0_121, vendor: Oracle Corporation
Java home: /usr/lib/jvm/java-1.8.0-openjdk-
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "3.10.0-514.el7.x86_64", arch: "amd64", family: "unix"

All our prerequisites for the Jenkins installation are ready, and we’ll now proceed to install Jenkins. We will add the RHEL 7 specific Jenkins repository and install it with the yum command.

# wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat-stable/jenkins.repo
# rpm --import http://pkg.jenkins-ci.org/redhat-stable/jenkins-ci.org.key
# yum install jenkins -y

Once the installation is complete, we will enable the service and start it.

# systemctl enable jenkins.service
# systemctl start jenkins.service
# systemctl status jenkins.service

If you have followed the installation so far, you will find Jenkins running at the following URL:

http://<ip of your ec2 instance>:8080/

Note: if there are any connection issues or the page does not load in the browser, check the firewall rules on the system and flush them if necessary with the command iptables -F. Also check the security groups of your EC2 instance to allow traffic on port 8080.

You’ll see a Jenkins page similar to the one above; use the password as described there and set up a user to create Jenkins builds.

#  cat /var/lib/jenkins/secrets/initialAdminPassword

Now we will install the git package on the system and the Jenkins GitHub plugin so we can create a build job.

# yum install git

Create a repository in GitHub by importing the repository https://github.com/ajoybharath/hello-world.git

You can clone your newly created repository to your system and make it ready to accept pushes to GitHub. You can see more on git installation and setup here.
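For example (substitute your own GitHub username for the placeholder):

# git clone https://github.com/<your-user>/hello-world.git
# cd hello-world
# git remote -v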

Once all the steps provided in the above screenshots are done, we’ll be ready to create our first build. You can follow the sequence of images below to set up a Jenkins build job. We’re using the following git repository in this demonstration.


Apply and save the job, then go ahead and build the project.

If you find an issue in the build similar to the one below,

FATAL: command execution failed
java.io.IOException: Cannot run program "mvn" (in directory

Please update the Jenkins configuration as follows:

# vi /etc/sysconfig/jenkins and add the line source /etc/profile at the end of the file, then stop and start Jenkins.

Try building the job again. Next, we will add the GitHub webhook to trigger a build whenever a change is pushed to the repository.

From the Jenkins home page, navigate to Manage Jenkins => Configure System and find the GitHub section.

Click on the Advanced settings there, check “Specify another hook URL for GitHub configuration”, and copy the URL shown there.

This URL is the webhook to our Jenkins instance, and we'll add it to our GitHub repository. Navigate to your repository, click on the Settings tab, and add the URL under Webhooks.

Also, go to the Jenkins dashboard, select the job we created, click on Configure, navigate to Build Triggers, and check the box “GitHub hook trigger for GITScm polling”.

If everything is set up correctly, any change to the README file of the repository, once pushed, will trigger an automatic build in Jenkins.
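
For example, a push like the following (from the clone made earlier) should start a new build on the job's page within a few seconds:

$ echo "webhook test" >> README.md
$ git commit -am "Trigger Jenkins via webhook"
$ git push origin master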


Jenkins – An Introduction

Jenkins is a Continuous Integration, Continuous Deployment (CI/CD) tool. Jenkins is written in Java and released as an open-source product. It's used heavily in Continuous Integration as it allows code to be built, deployed, and tested automatically. For software development, you can hook it up with most code repositories, and there are loads of plugins that integrate with various software tools for better technical governance. Think of it as a very advanced task scheduler that can drive workflows, or as a middleman between your code repository and your build server that triggers a build for every change made in the source code repository.

Jenkins offers the following major features out of the box, and much more can be added through plugins:

  1. Easy installation: Just run java -jar jenkins.war (see the sketch after this list), deploy it in a servlet container, or install it with any native package manager. No additional install, no database.
  2. Easy configuration: Jenkins can be configured entirely from its user-friendly web GUI.
  3. Rich plugin ecosystem: Jenkins integrates with virtually every SCM or build tool that exists.
  4. Extensibility: Most parts of Jenkins can be extended and modified, and it’s easy to create new Jenkins plugins.
  5. Distributed builds: Jenkins can distribute build/test loads to multiple computers with different operating systems.
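
As a quick illustration of point 1, the standalone war needs nothing but Java on the machine (a sketch; the --httpPort flag is optional if the default port 8080 is free):

$ java -jar jenkins.war --httpPort=8080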

Jenkins is much easier to understand once you understand the terms Continuous Integration and Continuous Deployment (Delivery).

The relevant terms here are “Continuous Integration” and “Continuous Deployment”, often used together and abbreviated as CI/CD. Originally, Continuous Integration meant that you run your “integration tests” at every code change, while Continuous Delivery meant that you automatically deploy every change that passes your tests. Recently, however, the term CI/CD has come to be used more generally for everything related to automating the pipeline from the moment a developer pushes a change to a central repository until that code ends up in production. This post is based on the latter meaning of CI/CD.

A CI system gathers code from different developers and makes sure it compiles and builds cleanly. Once the code is built, it is deployed to a test server for testing. Once the build is verified, Jenkins can be configured to deploy the application to the production server, satisfying the goal of CD. There are many different ways in which Jenkins can be set up to drive a CI/CD pipeline; the details below describe a simple yet powerful configuration:

The pipeline consists of five steps: “Build”, “Unit”, “Integration”, “System”, and “Deploy”.

Each step is implemented as a Jenkins job of type “free-style software project”.

The first step (“Build”) is connected to a version control system. This step can either poll the repository, get notified via a webhook, or run on a schedule or on demand. Each subsequent step has a build trigger that starts its job when the previous step finishes.
Artifacts are copied between the different jobs via the “Copy Artifact” plugin, which requires the “Archive Artifacts” setting to be enabled in the producing job. The “Deploy” step hands the artifact from “Copy Artifact” to a configuration manager, which triggers a container service to deploy the new application.
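
As an illustration, the “Build” job in such a pipeline often boils down to a single Execute shell step like the one below (a sketch assuming a Maven project; adjust the goals to your build):

mvn clean package -DskipTests    # compile and package only; tests run in the later pipeline steps

The job then enables “Archive Artifacts” on target/*.jar so the “Copy Artifact” plugin in the next job can pick the artifact up.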

As mentioned earlier in this post, we can install Jenkins using the native package available for the respective operating system; this is detailed in another post which you can read here. There is another way of installing Jenkins, which we'll discuss now, and it consists of two steps:

  1. Install the latest version of Apache tomcat
  2. Download and Deploy Jenkins war file

As a prerequisite, we will install Java, which can be done with the yum command.

# yum install java-1.8.0-openjdk-devel.x86_64

[root@devopsnode1 ~]# java -version
openjdk version "1.8.0_121"
OpenJDK Runtime Environment (build 1.8.0_121-b13)
OpenJDK 64-Bit Server VM (build 25.121-b13, mixed mode)
[root@devopsnode1 ~]#

After Java is installed we’ll set the Java path as below:

[root@devopsnode1 ~]# cp /etc/profile /etc/profile_orig
[root@devopsnode1 ~]# echo "export JAVA_HOME=/usr/lib/jvm/jre-1.8.0-openjdk" | tee -a /etc/profile
export JAVA_HOME=/usr/lib/jvm/jre-1.8.0-openjdk
[root@devopsnode1 ~]# echo "export JRE_HOME=/usr/lib/jvm/jre" | tee -a /etc/profile
export JRE_HOME=/usr/lib/jvm/jre
[root@devopsnode1 ~]# source /etc/profile
[root@devopsnode1 ~]# echo $JAVA_HOME;echo $JRE_HOME
/usr/lib/jvm/jre-1.8.0-openjdk
/usr/lib/jvm/jre
[root@devopsnode1 ~]#

Now we'll install the latest version of Apache Tomcat. We'll use the wget command to download the Tomcat tarball from http://tomcat.apache.org/download-90.cgi

[ajoy@devopsnode1 ~]$ wget http://www-us.apache.org/dist/tomcat/tomcat-9/v9.0.0.M17/bin/apache-tomcat-9.0.0.M17.tar.gz

After the tarball is downloaded, we'll extract it and rename the directory to tomcat9. The renaming is not a mandatory step, but we'll do it to keep the Tomcat paths simple 🙂

[ajoy@devopsnode1 ~]$ tar zxf apache-tomcat-9.0.0.M17.tar.gz
[ajoy@devopsnode1 ~]$ mv apache-tomcat-9.0.0.M17 tomcat9
[ajoy@devopsnode1 ~]$ ls
apache-tomcat-9.0.0.M17.tar.gz tomcat9

We’ll have to set roles and users in the tomcat configuration and then start the tomcat server:

[ajoy@devopsnode1 ~]$ cp /home/ajoy/tomcat9/conf/tomcat-users.xml /home/ajoy/tomcat-users.xml_orig
[ajoy@devopsnode1 ~]$ vi tomcat9/conf/tomcat-users.xml

<tomcat-users xmlns="http://tomcat.apache.org/xml"
              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xsi:schemaLocation="http://tomcat.apache.org/xml tomcat-users.xsd"
              version="1.0">
  <role rolename="manager-gui"/>
  <role rolename="manager-script"/>
  <role rolename="manager-jmx"/>
  <role rolename="manager-status"/>
  <role rolename="admin-gui"/>
  <role rolename="admin-script"/>
  <user username="ajoy" password="ajoy" roles="manager-gui,manager-script,manager-jmx,manager-status,admin-gui,admin-script"/>
</tomcat-users>

Starting Apache tomcat server:

[ajoy@devopsnode1 tomcat9]$ cd bin/
[ajoy@devopsnode1 bin]$ ./startup.sh
Using CATALINA_BASE: /home/ajoy/tomcat9
Using CATALINA_HOME: /home/ajoy/tomcat9
Using CATALINA_TMPDIR: /home/ajoy/tomcat9/temp
Using JRE_HOME: /usr/lib/jvm/jre
Using CLASSPATH: /home/ajoy/tomcat9/bin/bootstrap.jar:/home/ajoy/tomcat9/bin/tomcat-juli.jar
Tomcat started.
[ajoy@devopsnode1 bin]$

If everything is fine, you'll be able to browse to http://localhost:8080, which will open the default Tomcat web page.
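
You can also verify from the shell that Tomcat is answering (an HTTP/1.1 200 status in the response headers means the default page is being served):

$ curl -I http://localhost:8080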

Shutting down Apache tomcat server:

[ajoy@devopsnode1 tomcat9]$ cd bin/
[ajoy@devopsnode1 bin]$ ./shutdown.sh
Using CATALINA_BASE: /home/ajoy/tomcat9
Using CATALINA_HOME: /home/ajoy/tomcat9
Using CATALINA_TMPDIR: /home/ajoy/tomcat9/temp
Using JRE_HOME: /usr/lib/jvm/jre
Using CLASSPATH: /home/ajoy/tomcat9/bin/bootstrap.jar:/home/ajoy/tomcat9/bin/tomcat-juli.jar
[ajoy@devopsnode1 bin]$

Now that our Tomcat server is up and running, we'll download the Jenkins war file and deploy it in Tomcat.

[ajoy@devopsnode1 ~]$ wget http://ftp.yz.yamagata-u.ac.jp/pub/misc/jenkins/war/2.44/jenkins.war

Browse to the Tomcat home page and click on the Manager App button. It will ask for the username and password that we put in tomcat-users.xml.

We'll provide a Context Path, specify our Jenkins war file with an absolute path, and then click on Deploy.
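
Since our tomcat-users.xml also grants the manager-script role, the same deployment can be scripted through the manager's text interface (a sketch using the username, password, and paths from this walkthrough; adjust them to your setup):

$ curl -u ajoy:ajoy "http://localhost:8080/manager/text/deploy?path=/jenkins&war=file:/home/ajoy/jenkins.war"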

Once it's deployed, you can access Jenkins under the context path you provided.

Use the cat command to open the file below, copy the password, and paste it on the Unlock screen.

[ajoy@devopsnode1 ~]$ cat /home/ajoy/.jenkins/secrets/initialAdminPassword

Follow the setup wizard to complete the Jenkins setup, installing the suggested plugins and creating an admin user.

That's all, folks: you're ready to build your project with an award-winning, cross-platform continuous integration and continuous delivery application that increases your productivity. You can read another post describing the build steps here.

Data Source Courtesy: Jenkins, Jenkins Wiki, Quora.