Helm 3 – Sans tiller – Really?

Helm has recently announced its much-awaited version 3. The surprise in this release is that the server component added in Helm 2 is gone. Yes, you got it right: Tiller is missing in Helm 3, which means we have a server-less Helm. Let us check out in this post why it was removed and why it matters.

As an introduction, Helm, the package manager for Kubernetes, is a useful tool for installing, upgrading and managing applications on a Kubernetes cluster. Helm 2 has two parts: a client (helm) and a server (tiller). Tiller runs inside your Kubernetes cluster as a pod in the kube-system namespace, in the cluster pointed to by the current context in your kubeconfig (this can be overridden using the --kube-context flag). Tiller manages both the releases (installations) and revisions (versions) of charts deployed on the cluster. When you run helm commands, your local Helm client sends instructions to Tiller in the cluster, which in turn makes the requested changes. So Helm is our package manager for Kubernetes and our client tool; we use the helm CLI for all of our commands. Tiller is the service that actually communicates with the Kubernetes API to manage our Helm packages.
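To make the split concrete, here is a quick sketch of the Helm 2 workflow (the chart name is just an example):

$ helm init                          # installs Tiller into the cluster
$ helm install stable/nginx-ingress  # client asks Tiller to create a release
$ helm ls                            # Tiller reports the releases it manages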

Helm packages are called charts. Charts are Helm's deployable artifacts. Charts are always versioned using semantic versioning, and come either packed, as versioned .tgz files, or in a flat directory structure. They are abstractions describing how to install packages onto a Kubernetes cluster. When a chart is deployed, Helm works as a templating engine, populating the chart's YAML templates and its dependencies with the required variables, and then applies the rendered configuration to the cluster to install the package.
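For reference, a minimal chart in its flat directory form is just a layout like this (the chart name is an example):

mychart/
  Chart.yaml          # chart name and version
  values.yaml         # default variables for the templates
  templates/
    deployment.yaml   # templated Kubernetes manifests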

As we already mentioned, Tiller is the tool used by Helm to deploy almost any Kubernetes resource. To do this, Helm takes the maximum permissions to make changes in Kubernetes. Because of this, anyone who can talk to Tiller can deploy or modify any resource on the Kubernetes cluster, just like a system admin (think of the 'root' user on a Linux host). This can cause security issues in the cluster if Helm has not been deployed properly, following certain security measures. Also, authentication is not enabled in Tiller by default, so if any pod is compromised and has permission to talk to Tiller, the complete cluster in which Tiller is running is compromised as well. This is the main reason for the removal of Tiller.

The Tiller method:

Tiller was used by Helm as an in-cluster operator to maintain the state of a Helm release. It was also used to save the release information of all the releases it made; it used ConfigMaps to save the release information in the same namespace in which Tiller was deployed. This release information was required by Helm whenever a release was upgraded or changed state. So whenever a helm upgrade command was run, Tiller compared the new manifest with the old manifest of the release and made changes accordingly. Thus Helm was dependent on Tiller to provide the previous state of the release.
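You can see this storage model for yourself on a Helm 2 cluster (a sketch, assuming Tiller runs in the default kube-system namespace; OWNER=TILLER is the label Tiller puts on its release ConfigMaps):

$ kubectl get configmaps -n kube-system -l OWNER=TILLER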

The Tiller-less method:

The main need for Tiller was to store release information; Helm now does this using Secrets, saved in the same namespace as the release. Whenever Helm needs the release information, it gets it from the namespace of the release. To make a change, Helm now just fetches information from the Kubernetes API server, makes the changes on the client side, and stores a record of the installation in Kubernetes. The benefit of Tiller-less Helm is that, since Helm now makes changes in the cluster from the client side, it can only make the changes that the client has been granted permission to make.
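A quick sketch of what this looks like in Helm 3 (the release, chart and namespace names are examples; sh.helm.release.v1... is the naming Helm 3 uses for its release-record Secrets):

$ helm install myapp ./mychart -n demo
$ kubectl get secrets -n demo -l owner=helm
NAME                          TYPE                 DATA   AGE
sh.helm.release.v1.myapp.v1   helm.sh/release.v1   1      10s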

Tiller was a good addition in Helm 2, but to run it in production it had to be properly secured, which added extra learning steps for DevOps and SRE teams. With Helm 3 those learning steps have been reduced, and security management is left in the hands of Kubernetes itself. Helm can now focus on package management.
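In practice this means a Helm 3 action succeeds or fails based on your own RBAC permissions, which you can check up front (a sketch; the resources and namespace are examples):

$ kubectl auth can-i create deployments -n demo
yes
$ kubectl auth can-i create secrets -n demo    # Helm 3 needs this to store its release record
yes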

Data courtesy: Medium, codecentric, Helm

Linux Inside Win 10

I am a zealous fan of Linux and FOSS, and I have been using Linux and its TUI with the bash shell for more than seventeen years. When I moved to my new role I found it a bit difficult: I had a Windows 10 laptop and was literally fumbling with PowerShell and the cmd line when I tried working with tools like Terraform, Git, etc. But luckily I figured out a solution for old-school *NIX users like me who are forced to use a Windows laptop, and that solution is WSL.

Windows Subsystem for Linux, a.k.a. WSL, is an environment in Windows 10 for running unmodified Linux binaries in a way similar to Linux Containers. Please go through my earlier post on Linux Containers for more details. When it was first introduced, WSL ran Linux binaries by implementing a Linux compatibility layer partly in the Windows kernel. The second iteration, WSL 2, uses the Linux kernel itself in a lightweight VM to provide better compatibility with native Linux installations.

To use WSL in Win 10, you have to enable the WSL feature from the Windows optional features. Being more of an aficionado of the command line than the GUI, I'll now list out the step-by-step commands to enable WSL, install your favourite Linux distribution and start using it.

  1. Open PowerShell as administrator and execute the command below:
     Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux
     Note: Restart is required when prompted
  2. Download the distro package:
     Invoke-WebRequest -Uri https://aka.ms/wsl-ubuntu-1804 -OutFile Ubuntu.appx -UseBasicParsing

In the above command you can use any of the Linux distros available for WSL. For example, if you want to use Kali Linux instead of Ubuntu, you can change the distro URL in the step 2 command to "https://aka.ms/wsl-kali-linux". Please refer to this guide for all available distros.
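For instance, the Kali download would look like this (same command with only the URL changed; the output file name is my choice):

Invoke-WebRequest -Uri https://aka.ms/wsl-kali-linux -OutFile Kali.appx -UseBasicParsing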

Once the download is complete, you need to add that package to the Windows 10 application list, which can be done using the command below.

Add-AppxPackage .\Ubuntu.appx

Once all these steps are completed, type Ubuntu in the Win 10 search (the magnifying glass in the bottom-left corner near the Windows logo) and you will see your Ubuntu entry (search for whichever WSL distro you have added). To complete the initialization of your newly installed distro, launch a new instance, which is Ubuntu in our case, by selecting and running Ubuntu from the search results.
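Alternatively, on recent Windows 10 builds you can do this from PowerShell as well (a sketch; the distro name must match what wsl --list reports):

wsl --list       # show the registered distros
wsl -d Ubuntu    # launch a specific distro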

This will start the installation of your chosen Linux distro's binaries and libraries. It will take some time to complete the installation and configuration (approximately 5 to 10 minutes depending on your laptop/desktop configuration).

Once installation is complete, you will be prompted to create a new user account (and its password).

Note: You can choose any username and password you wish – they have no bearing on your Windows username.

If everything goes well, we'll have Ubuntu installed as a subsystem. When you open a new distro instance, you won't be prompted for your password, but if you elevate your privileges using sudo, you will need to enter your password.

The next step is to update your package catalog and upgrade your installed packages with the Ubuntu package manager, apt. To do so, execute the command below at the prompt.

$ sudo apt update && sudo apt upgrade

Now we will proceed to install Git.

$ sudo apt install git

$ git --version

And finally, test your Git installation with the above command. You can also install other tools and packages available in the repositories. I have installed git, terraform, aws-cli, azure-cli and ansible. You can install Python, Ruby and Go programming environments as well; Python pip and Ruby gem installations are also supported. You can use this subsystem as an alternative for your day-to-day Linux operations, and as an alternative terminal to PowerShell if you are using Sublime Text, Atom or Visual Studio Code.
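As an example of that last point, this is roughly the settings.json entry that points Visual Studio Code's integrated terminal at WSL (a sketch; setting names can change between VS Code releases):

{
  "terminal.integrated.shell.windows": "C:\\Windows\\System32\\wsl.exe"
}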

Arch installation – Part 1

This is a continuation of my earlier post on Arch Linux. In this blog we'll try to install and tame Arch Linux in an easy and quick way.

The default installation covers only a minimal base system and expects the end user to configure and use it. Based on the KISS (Keep It Simple, Stupid!) principle, Arch Linux focuses on elegance, code correctness, minimalism and simplicity.

Arch Linux follows the rolling release model and has its own package manager, pacman. With the aim of providing a cutting-edge operating system, Arch never fails to keep its repositories up to date. The fact that it provides a minimal base system gives you the choice to install it even on low-end hardware and then install only the required packages over it.

Also, it's one of the most popular OSes for learning Linux from scratch. If you like to experiment with a DIY attitude, you should give Arch Linux a try. It's what many Linux users consider the core Linux experience.

You have been warned...!!! The method we are going to discuss here wipes out the existing operating system(s) from your computer and installs Arch Linux in their place. So if you are going to follow this tutorial, make sure that you have backed up your files (or else you'll lose all of them), or install into a fresh VM (you can deploy a VM easily with VirtualBox, VMware Workstation, KVM, Xen, etc.; deploying a VM with any of those aforementioned tools is out of scope for this post). Again, I would recommend you try this installation on an old laptop or in a new VM.

Requirements for installing Arch Linux:

  • An x86_64 (i.e. 64-bit) compatible machine
  • Minimum 512 MB of RAM (recommended 2 GB)
  • At least 1 GB of free disk space (recommended 20 GB for basic usage)
  • An active internet connection
  • A USB drive with minimum 2 GB of storage capacity
  • Familiarity with Linux command line

You have been warned..!!! If you are a person addicted to GUIs and don't want a DIY kind of Linux installation, IMHO, please stop reading here and go with Arch derivatives like Manjaro.

Step 1: Download the ISO

You can download the ISO from the official website.
I prefer using the magnet link and downloading the ISO with a torrent client.

Step 2: Create a live USB of Arch Linux

We will have to create a live USB of Arch Linux from the ISO you just downloaded.

If you are on Linux, you can use the dd command to create a live USB, but I recommend trying Balena Etcher to create your USB. It is my favourite way of doing it: a portable piece of software that can be used on Windows/macOS/Linux.

You can download it from the link here.
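If you do go the dd route, it looks roughly like this (a sketch: the ISO name will match the file you downloaded, and you must replace /dev/sdX with your actual USB device, double-checked with lsblk first, because dd will overwrite whatever it points at):

$ lsblk
$ sudo dd if=archlinux-x86_64.iso of=/dev/sdX bs=4M status=progress && sync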

Once you have created a live USB for Arch Linux, shut down your PC. Plug in your USB and boot your system. While booting, keep pressing ESC, F2, F10 or F12 (depending on your system) to get into the boot settings. In there, select boot from USB or removable disk. You will be presented with a Linux terminal, automatically logged in as the root user.

Now that the initial steps are complete, we'll discuss the installation part in my next post.

Linux Containers

The Linux Containers project or LinuX Containers (LXC) is an open source container platform that provides a set of tools, templates, libraries, and language bindings. LXC has a simple command line interface that improves the user experience when starting containers. LXC is a user-space interface for the Linux kernel containment features. Through a powerful API and simple tools, it lets Linux users easily create and manage system or application containers.

LXC offers an operating-system-level virtualization environment that can be installed on many Linux-based systems. Your Linux distribution may have it available through its package repository. LXC is free software; most of the code is released under the terms of the GNU LGPLv2.1+ license, some Android compatibility bits are released under a standard 2-clause BSD license, and some binaries and templates are released under the GNU GPLv2 license.

LXC is an OS-level virtualization technology that allows the creation and running of multiple isolated Linux virtual environments (VEs) on a single control host. These containers can be used either to sandbox specific applications or to emulate an entirely new host. LXC uses the kernel's cgroups functionality, introduced in kernel version 2.6.24, to limit and account for resource usage, together with kernel namespaces to give each container an isolated view of the system. Note that a VE is distinct from a virtual machine (VM).

The idea of what we now call container technology first appeared in 2000 as FreeBSD jails, a technology that allows the partitioning of a FreeBSD system into multiple subsystems, or jails. Jails were developed as safe environments that a system administrator could share with multiple users inside or outside of an organization.

In 2001, an implementation of an isolated environment made its way into Linux, by way of Jacques Gélinas’ VServer project. Once this foundation was set for multiple controlled userspaces in Linux, pieces began to fall into place to form what is today’s Linux container.

Very quickly, more technologies combined to make this isolated approach a reality. Control groups (cgroups) is a kernel feature that controls and limits resource usage for a process or group of processes. And systemd, an initialization system that sets up userspace and manages its processes, uses cgroups to provide greater control over these isolated processes. Both of these technologies, while adding overall control for Linux, were the framework for how environments could stay successfully separated.

Current LXC uses the following kernel features to contain processes:

  • Kernel namespaces (ipc, uts, mount, pid, network and user)
  • AppArmor and SELinux profiles
  • Seccomp policies
  • Chroots (using pivot_root)
  • Kernel capabilities
  • CGroups (control groups)
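You can verify that your kernel has these features enabled with the lxc-checkconfig tool that ships with the LXC userspace tools:

$ lxc-checkconfig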

LXC containers are often considered to be something in the middle between a chroot and a full-fledged virtual machine. The goal of LXC is to create an environment as close as possible to a standard Linux installation, but without the need for a separate kernel. LXC is currently made of a few separate components:

  • The liblxc library
  • Several language bindings for the API:
      • python3 (in-tree, long-term support in 2.0.x)
      • lua (in-tree, long-term support in 2.0.x)
      • Go
      • ruby
      • python2
      • Haskell
  • A set of standard tools to control the containers
  • Distribution container templates

Now we will quickly explore LXC containers.

For most modern Linux distributions, the kernel is enabled with cgroups, but you most likely still will need to install the LXC utilities.

If you’re using Red Hat or CentOS, you’ll need to install the EPEL repositories first. For other distributions, such as Ubuntu or Debian, simply type:

$ sudo apt-get install lxc
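On Red Hat or CentOS, the rough equivalent after enabling EPEL would be something like this (a sketch; exact package names can vary by release):

$ sudo yum install epel-release
$ sudo yum install lxc lxc-templates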

Create the ~/.config/lxc directory if it doesn’t already exist, and copy the /etc/lxc/default.conf configuration file to ~/.config/lxc/default.conf. Append the following two lines to the end of the file:

lxc.id_map = u 0 100000 65536
lxc.id_map = g 0 100000 65536
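Put together, those steps look like this at the shell (a sketch of the same edits):

$ mkdir -p ~/.config/lxc
$ cp /etc/lxc/default.conf ~/.config/lxc/default.conf
$ cat >> ~/.config/lxc/default.conf <<'EOF'
lxc.id_map = u 0 100000 65536
lxc.id_map = g 0 100000 65536
EOF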

Append the following to the /etc/lxc/lxc-usernet file (replace the first column with your user name):

akbharat veth lxcbr0 10

The quickest way for these settings to take effect is either to reboot the host or log the user out and then log back in.

Once logged back in, verify that the veth networking driver is currently loaded:

$ lsmod|grep veth
veth 16384 0

$ sudo modprobe veth

If the veth module is not loaded, you can use the command above to load it into the kernel.

Next, download a container image and name it “my-container”. When you type the following command, you’ll see a long list of supported containers under many Linux distributions and versions:

$ sudo lxc-create -t download -n my-container

You’ll be given three prompts to pick the distribution, release and architecture. I chose the following:

Distribution: ubuntu
Release: xenial
Architecture: amd64

Once you press Enter, the rootfs will be downloaded locally and configured. For security reasons, containers do not ship with an OpenSSH server or user accounts, and a default root password is not provided either. In order to change the root password and log in, you must run either lxc-attach or chroot into the container's directory path (after it has been started). We will use the command below to start the container:

$ sudo lxc-start -n my-container -d

The -d option daemonizes the container, and it will run in the background. If you want to observe the boot process, replace the -d with -F, and it will run in the foreground, ending at a login prompt.

Open up a second terminal window and verify the status of the container:

$ sudo lxc-info -n my-container
Name: my-container
State: RUNNING
PID: 1356
IP: 10.0.3.28
CPU use: 0.29 seconds
BlkIO use: 16.80 MiB
Memory use: 29.02 MiB
KMem use: 0 bytes
Link: vethPRK7YU
TX bytes: 1.34 KiB
RX bytes: 2.09 KiB
Total bytes: 3.43 KiB

There is also another way to see a list of all installed containers:

$ sudo lxc-ls -f
NAME STATE AUTOSTART GROUPS IPV4 IPV6
my-container RUNNING 0 - 10.0.3.28 -

You still won't be able to log in to it, though. For that, attach directly to the running container with the LXC tool set and work inside it.

$ sudo lxc-attach -n my-container
root@my-container:/#

If you create a user account now from within the container, you can then use the lxc-console command to connect to the container with your newly created username and password.
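For example, from the attached root shell (a sketch; 'myuser' is a hypothetical username):

root@my-container:/# useradd -m -s /bin/bash myuser
root@my-container:/# passwd myuser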

$ sudo lxc-console -n my-container

If you want to connect to the container using SSH instead, you can install the OpenSSH server in the container, figure out the container's IP address with the commands below, and connect to that IP from your Linux host.

root@my-container:/# apt-get install openssh-server

root@my-container:/# ip addr show eth0|grep inet

From your Linux host now ssh to the container using the IP we captured from the earlier command.

$ ssh 10.0.3.28

On the host system, and not within the container, it’s interesting to observe which LXC processes are initiated and running after launching a container:

$ ps aux|grep lxc|grep -v grep
root 861 0.0 0.0 234772 1368 ? Ssl 11:01 0:00 /usr/bin/lxcfs /var/lib/lxcfs/
lxc-dns+ 1155 0.0 0.1 52868 2908 ? S 11:01 0:00 dnsmasq -u lxc-dnsmasq --strict-order --bind-interfaces --pid-file=/run/lxc/dnsmasq.pid --listen-address 10.0.3.1 --dhcp-range 10.0.3.2,10.0.3.254 --dhcp-lease-max=253 --dhcp-no-override --except-interface=lo --interface=lxcbr0 --dhcp-leasefile=/var/lib/misc/dnsmasq.lxcbr0.leases --dhcp-authoritative
root 1196 0.0 0.1 54484 3928 ? Ss 11:01 0:00 [lxc monitor] /var/lib/lxc my-container
root 1658 0.0 0.1 54780 3960 pts/1 S+ 11:02 0:00 sudo lxc-attach -n my-container
root 1660 0.0 0.2 54464 4900 pts/1 S+ 11:02 0:00 lxc-attach -n my-container

And finally, to stop the container and verify that it has stopped, we can use the following commands in sequence:

$ sudo lxc-stop -n my-container

$ sudo lxc-ls -f
NAME STATE AUTOSTART GROUPS IPV4 IPV6
my-container STOPPED 0 - - -

$ sudo lxc-info -n my-container
Name: my-container
State: STOPPED

You might also want to destroy the container:

$ sudo lxc-destroy -n my-container
Destroyed container my-container

This is a very basic example of container capability. In the modern container era, containerization is much more sophisticated. Docker is a significant improvement on LXC's capabilities, and its obvious advantages are gaining it a growing following of adherents. In fact, it is getting dangerously close to negating the advantage of VMs over VEs, thanks to its ability to quickly and easily transfer and replicate any Docker-created package. We'll discuss more on Docker in my next post.