Bare-Metal K8s Cluster with Raspberry Pi – Part 3

This is a continuation of the post series Bare-metal K8s cluster with Raspberry Pi (Part 1 & Part 2).

Another option for running a bare-metal K8s cluster on the Raspberry Pi that I tried and tested is MicroK8s, which we discuss in this post.

MicroK8s is lightweight upstream K8s: the smallest, simplest, pure production Kubernetes. For clusters, laptops, IoT and Edge, on Intel and ARM.

MicroK8s is a CNCF certified upstream Kubernetes deployment that runs entirely on your workstation or edge device. Being a snap it runs all Kubernetes services natively (i.e. no virtual machines) while packing the entire set of libraries and binaries needed. Installation is limited by how fast you can download a couple of hundred megabytes and the removal of MicroK8s leaves nothing behind.

And to give some context on snaps: snaps are app packages for desktop, cloud and IoT that are easy to install, secure, cross-platform and dependency-free. Snaps are discoverable and installable from the Snap Store, the app store for Linux with an audience of millions.

A snap is a bundle of an app and its dependencies that works without modification across Linux distributions.

We are going to use the same components list as described in Part 1 of this series.

Each Pi is going to need an Ubuntu server image, and you'll need to be able to SSH into them. The link below will help you reach that stage:

Kubernetes Cluster Preparation with SSH connection to the Pi from your terminal

Installing MicroK8s
Follow this section for each of your Pis. Once completed you will have MicroK8s installed and running everywhere.

SSH to your first Pi and install the MicroK8s snap:

sudo snap install microk8s --classic

As MicroK8s is a snap, it will be automatically updated to newer releases of the package, which closely follows upstream Kubernetes releases, so we don't need to worry about the K8s version we're installing. If you do want to pin a specific version, install from a channel:

sudo snap install microk8s --classic --channel=1.15/stable
Channels are made up of a track (or series) and an expected level of stability, based on MicroK8s releases (Stable, Candidate, Beta, Edge). For more information about which releases are available, run:

snap info microk8s

Cheat Sheet for MicroK8s
Before going further here is a quick intro to the MicroK8s command line:

  • microk8s.start – start all enabled Kubernetes services
  • microk8s.inspect – run an inspection and report the status of services
  • microk8s.stop – stop all Kubernetes services
  • microk8s.enable dns – enable a Kubernetes add-on (here, "kubedns")
  • microk8s.kubectl cluster-info – status of the cluster

MicroK8s is easy to use and comes with plenty of Kubernetes add-ons you can enable or disable.
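For example, a couple of commonly used add-ons can be enabled in a single command; dns and dashboard are standard MicroK8s add-on names:

microk8s.enable dns dashboard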

Master node and leaf nodes
Now that you have MicroK8s installed on all boards, pick one to be the master node of your cluster.

On the chosen Master node, run the following command:

sudo microk8s.add-node
This command will generate a connection string in the form of <master_ip>:<port>/<token>.

Adding a node
Now, you need to run the join command from another Pi you want to add to the cluster:

microk8s.join 10.55.60.14:25000/JHpbBYMIevZSAMnmjMHmFwanrOYCWZLu
You should be able to see the new node in a few seconds on the master with the following command:

microk8s.kubectl get node

For each new node, you need to run the microk8s.add-node command on the master, copy the output, then run microk8s.join on the leaf.
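If you are adding several Pis, that add-node/join dance can be scripted from your workstation. A rough sketch, assuming passwordless SSH and hypothetical hostnames pi-master and pi-node1..pi-node3 (each join needs its own freshly generated connection string):

for node in pi-node1 pi-node2 pi-node3; do
    # generate a fresh connection string on the master and extract the join command
    join_cmd=$(ssh pi-master "sudo microk8s.add-node" | grep -o "microk8s.join.*" | head -1)
    # run the join on the leaf node
    ssh "$node" "sudo $join_cmd"
done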

Removing nodes
To remove a node, run the following command on the master:

sudo microk8s.remove-node <node name>
The names of the nodes are available on the master by running microk8s.kubectl get node.

Alternatively, you can leave the cluster from a leaf node by running:

sudo microk8s.leave

Once the Pis are set up with MicroK8s, adding and removing nodes is easy, and you can scale up or down as you go.

Voila..!!
If you followed this series, you are now in control of two Kubernetes clusters: one built with native Kubernetes and Docker, and the other with the much easier to install and manage MicroK8s.

This completes the 3-part series for K8s on Raspberry Pi. In a follow-up blog post we will see how to use kubectl & Helm charts to deploy an Nginx service, Prometheus and a few other services from the DevOps tool set to the cluster.

Bare-Metal K8s Cluster with Raspberry Pi – Part 2

This is a continuation of the post series Bare-metal K8s cluster with Raspberry Pi.

As we have 1 master node and 3 worker nodes set up, we continue by installing Kubernetes.

Install Kubernetes

We are using version 1.15.3. There shouldn't be any errors; however, during my installation the repos were down and I had to retry a few times.

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | \
sudo apt-key add - && echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | \
sudo tee /etc/apt/sources.list.d/kubernetes.list && sudo apt-get update -q

sudo apt-get install -qy kubelet=1.15.3-00 kubectl=1.15.3-00 kubeadm=1.15.3-00
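To keep the cluster pinned to this exact version, it also helps to hold the packages so a routine apt-get upgrade does not move them (a standard step in kubeadm installs):

sudo apt-mark hold kubelet kubeadm kubectl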

Repeat steps for all of the Raspberry Pis.

Kubernetes Master Node Configuration
Note: You only need to do this for the master node (in this deployment I recommend only 1 master node). Each Raspberry Pi is a node.

Initiate Master Node

sudo kubeadm init
Enable Connections to Port 8080
Without this, kubectl commands will fail (kubectl falls back to localhost:8080 when no kubeconfig is present):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Add Container Network Interface (CNI)
I've chosen to use Weave Net; however, you can get others working, such as Flannel (I've verified it works with this cluster).
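As a sketch, installing Weave Net was a single kubectl apply on the master, using the manifest URL Weaveworks published at the time:

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"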

Get Join Command
This will be used in the next section to join the worker nodes to the cluster:

kubeadm token create --print-join-command

It will return something like:

kubeadm join 192.168.0.101:6443 --token X.Y --discovery-token-ca-cert-hash sha256:XYZ

Kubernetes Worker Node Configuration
Note: You only need to do this for the worker nodes (in this deployment I recommend 3 worker nodes).

Join Cluster
Use the join command provided at the end of the previous section
sudo kubeadm join 192.168.0.101:6443 --token X.Y --discovery-token-ca-cert-hash sha256:XYZ

Verify Node Added Successfully (SSH on Master Node)
Each node should show status Ready after ~30 seconds:
kubectl get nodes

Another option for running a bare-metal K8s cluster on the Raspberry Pi that I tried and tested is MicroK8s, which will be posted in Part 3 of this series.

A sneak peek into the K8s cluster on Raspberry Pi

Bare-Metal K8s Cluster with Raspberry Pi – Part 1

There are multiple ways we can use a Kubernetes cluster to deploy our applications. Most of us opt for a managed Kubernetes service from a public cloud provider; GKE, EKS and AKS are the most prominent ones. Deploying a Kubernetes cluster on a public cloud provider is relatively easy, but what if you want a private bare-metal K8s cluster? Having worked extensively in the data center and started my career as a Sys Admin, I personally prefer a piece of tangible hardware to get the feel of building it. This blog post walks you through the steps I took in order to have a bare-metal K8s cluster to play with.

K8s is an open source container orchestration platform that helps manage distributed, containerized applications at massive scale. Born at Google out of its internal Borg system, version 1.0 was released in July 2015. It has continued to evolve and mature and is now offered as a PaaS service by all of the major cloud vendors.

Google has been running containerized workloads in production for more than a decade. Whether it’s service jobs like web front-ends and stateful servers, infrastructure systems like Bigtable and Spanner, or batch frameworks like MapReduce and Millwheel, virtually everything at Google runs as a container.

You can find the paper here.

Kubernetes traces its lineage directly from Borg. Many of the developers at Google working on Kubernetes were formerly developers on the Borg project. We’ve incorporated the best ideas from Borg in Kubernetes, and have tried to address some pain points that users identified with Borg over the years.

More than just enabling a containerized application to scale, Kubernetes has release-management features that enable updates with near-zero downtime, version rollback, and clusters that can 'self-heal' when there is a problem. Load balancing, auto-scaling and SSL can easily be implemented. Helm, a package manager for Kubernetes, has revolutionized the world of server management by making multi-node data stores like Redis and MongoDB incredibly easy to deploy. Kubernetes gives you the flexibility to move your workload where it is best suited. This complements the hybrid cloud story, and in my career it has become apparent that my customers see this too, helping them resolve issues like cost, availability and compliance. In parallel, software vendors are starting to embrace containers as a standard deployment model, leading to a recent increase in requests for container solutions.

Comparing the deployment workflows, there is greater room for error when deploying on-premises. Public clouds provide automation and reduce the risk of error, as fewer steps are required. But as mentioned above, a private cloud gives you more options when you have unique requirements.

Pros:

  • Using Kubernetes and its huge ecosystem can improve your productivity
  • Kubernetes and a cloud-native tech stack attracts talent
  • Kubernetes is a future-proof solution
  • Kubernetes helps make your applications run more stably
  • Kubernetes can be cheaper than its alternatives

Cons:

  • Kubernetes can be an overkill for simple applications
  • Kubernetes is very complex and can reduce productivity
  • The transition to Kubernetes can be cumbersome
  • Kubernetes can be more expensive than its alternatives

Pre-requisites:

Compute:

3 x Raspberry Pi 4 Model B with 2 GB RAM
1 x Raspberry Pi 3 Model B+ with 1 GB RAM

Storage:

4 x 16 GB high-speed SanDisk micro-SD cards

Network:

1 x Network Switch – for local LAN for k8s internal connectivity
1 x Network Router – for WiFi (my default ISP router was used here); only the master node had internet connectivity once the setup was complete
4 x Ethernet Cables
1 x Keyboard, HDMI, Mouse (for initial setup only)

Initial Raspberry Pi Configuration:

Flash Raspbian to the Micro-SD Cards

Download the image from the link below:

Raspbian OS

I used BalenaEtcher to flash the image onto the micro-SD cards.

Perform Initial Setup on Boot: for this step we need a keyboard, monitor and mouse connected to reach the startup screen.

  1. Choose country, language and timezone
  2. Define a new password for user 'pi'
  3. Connect to WiFi, or skip if using ethernet
  4. Skip the software update (we will perform this activity manually later)
  5. Choose restart later

Configure Additional Settings: click the Raspberry Pi icon (top left of screen) > Preferences > Raspberry Pi Configuration

System

Configure Hostname
Boot: To CLI

Interfaces
SSH: Enable

Choose restart later

Configure Static Network
Perform one of the following:

Define Static IP on the Raspberry Pi: right-click the network arrow logo at the top right of the screen and select 'Wireless & Wired Network Settings'.

Define Static IP on the DHCP Server: configure your DHCP server to assign a static IP to the Raspberry Pi's MAC address.

Reboot and Test SSH
Username: pi

Password: Defined in step 2 above

From the terminal ssh pi@[IP Address]

Repeat steps for all of the Raspberry Pis.

Kubernetes Cluster Preparation with an SSH connection to the Pi from your terminal. I am using a 12-year-old Lenovo laptop running MX Linux. Open a terminal and establish an SSH connection to the Pi.

Perform Updates

sudo apt-get update: updates the package indexes
sudo apt-get upgrade: performs the upgrades

Configure net.ipv4 Settings: edit /etc/sysctl.conf (sudo vi /etc/sysctl.conf), uncomment net.ipv4.ip_forward = 1 and add net.ipv4.ip_nonlocal_bind=1.
Note: This is required to allow traffic forwarding, for example NodePorts from containers to/from non-cluster devices.
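If you prefer doing this from the shell, a minimal sketch that appends both keys and loads them without a reboot (for a given key, later entries in sysctl.conf win, so appending has the same effect as uncommenting):

echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.conf
echo 'net.ipv4.ip_nonlocal_bind = 1' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p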

Install Docker

curl -sSL get.docker.com | sh

Grant the user pi the privilege to execute docker commands:

sudo usermod pi -aG docker
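Log out and back in (or run newgrp docker) so the group change takes effect, then verify Docker works as the pi user; the hello-world image has an ARM variant, so it runs fine on the Pi:

docker version
docker run hello-world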

Disable Swap

sudo systemctl disable dphys-swapfile.service
sudo reboot

We can verify this with the top command: in the top left area, the value next to MiB Swap should be 0.0.

With the initial steps to create our K8s bare-metal cluster completed, we will see how to build the cluster in Part 2 of this blog post.

Linux Inside Win 10

I am a zealous fan of Linux and FOSS. I have been using Linux and its TUI with the bash shell for more than seventeen years. When I moved to my new role I found it a bit difficult to be handed a Windows 10 laptop, and I was literally fumbling with PowerShell and the cmd line when I tried working with tools like Terraform, Git etc. But luckily I figured out a solution for old-school *NIX users like me who are forced to use a Windows laptop, and that solution is WSL.

Windows Subsystem for Linux, a.k.a. WSL, is an environment in Windows 10 for running unmodified Linux binaries in a way similar to Linux Containers. Please go through my earlier post on Linux Containers for more details. When it was first introduced, WSL ran Linux binaries by implementing a Linux API compatibility layer partly in the Windows kernel. The second iteration, WSL 2, uses the Linux kernel itself in a lightweight VM to provide better compatibility with native Linux installations.

To use WSL in Win 10, you have to enable the WSL feature from the Windows optional features. Being more of a command-line aficionado than a GUI user, I'll now list the step-by-step commands to enable WSL, install your favourite Linux distribution and start using it.

  1. Open PowerShell as administrator and execute the command below:
     Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux
     Note: A restart is required when prompted.
  2. Download the distro package:
     Invoke-WebRequest -Uri https://aka.ms/wsl-ubuntu-1804 -OutFile Ubuntu.appx -UseBasicParsing

In the above command you can use any of the Linux distros available for WSL. For example, if you want to use Kali Linux instead of Ubuntu, change the distro URL in the step 2 command to "https://aka.ms/wsl-kali-linux". Please refer to this guide for all available distros.

Once the download is complete, you need to add the package to the Windows 10 application list, which can be done using the command below.

  3. Add-AppxPackage .\Ubuntu.appx

Once all these steps are completed, type Ubuntu into the Win 10 search (the magnifying glass in the bottom left corner near the Windows logo) and you will see your Ubuntu entry (search for whichever WSL distro you have added). To complete the initialization of your newly installed distro, launch a new instance, in our case by selecting and running Ubuntu from the search results.

This will start the installation of your chosen Linux distro's binaries and libraries. It will take some time to complete the installation and configuration (approximately 5 to 10 minutes, depending on your laptop/desktop configuration).

Once installation is complete, you will be prompted to create a new user account (and its password).

Note: You can choose any username and password you wish – they have no bearing on your Windows username.

If everything goes well, we'll have Ubuntu installed as a subsystem. When you open a new distro instance you won't be prompted for your password, but if you elevate your privileges using sudo, you will need to enter it.

The next step is updating your package catalog and upgrading your installed packages with the Ubuntu package manager, apt. To do so, execute the command below at the prompt.

$ sudo apt update && sudo apt upgrade

Now we will proceed to install Git.

$ sudo apt install git

$ git --version

Finally, test your Git installation with the above command. You can also install other tools and packages available in the repository. I have installed git, terraform, aws-cli, azure-cli and ansible. You can install Python, Ruby or Go programming environments as well; Python pip and Ruby gem installations are also supported. You can use this subsystem as an alternative for your day-to-day Linux operations, and as an alternative terminal to PowerShell if you are using Sublime Text, Atom or Visual Studio Code.
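For example, a couple of the tools mentioned above install straight from the Ubuntu repositories (Terraform is the exception; it ships from HashiCorp's site rather than apt):

$ sudo apt install -y ansible awscli
$ ansible --version
$ aws --version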

Arch installation – Part 1

This is a continuation of my earlier post on Arch Linux. In this blog we'll try to install and tame Arch Linux in the easiest and quickest way.

The default installation covers only a minimal base system and expects the end user to configure and use it. Based on the KISS (Keep It Simple, Stupid!) principle, Arch Linux focuses on elegance, code correctness, minimalism and simplicity.

Arch Linux supports the Rolling release model and has its own package manager – pacman. With the aim to provide a cutting-edge operating system, Arch never misses out to have an up-to-date repository. The fact that it provides a minimal base system gives you a choice to install it even on low-end hardware and then install only the required packages over it.

Also, it's one of the most popular OSes for learning Linux from scratch. If you like to experiment with a DIY attitude, you should give Arch Linux a try. It's what many Linux users consider a core Linux experience.

You have been warned…!!! The method we are going to discuss here wipes out the existing operating system(s) from your computer and installs Arch Linux on it. So if you are going to follow this tutorial, make sure that you have backed up your files, or else you'll lose all of them; alternatively, install into a fresh VM (you can deploy a VM easily with VirtualBox, VMware Workstation, KVM, Xen etc.). Deploying a VM with any of those aforementioned tools is out of scope for this post. Again, I would recommend trying this installation on an old laptop or in a new VM.

Requirements for installing Arch Linux:

  • An x86_64 (i.e. 64-bit) compatible machine
  • Minimum 512 MB of RAM (recommended 2 GB)
  • At least 1 GB of free disk space (recommended 20 GB for basic usage)
  • An active internet connection
  • A USB drive with minimum 2 GB of storage capacity
  • Familiarity with Linux command line

You have been warned..!!! If you are addicted to GUIs and don't want a DIY kind of Linux installation, IMHO, please stop reading here and go with an Arch derivative like Manjaro.

Step 1: Download the ISO

You can download the ISO from the official website.
I prefer using the magnet link and downloading the ISO with a torrent client.

Step 2: Create a live USB of Arch Linux

We will have to create a live USB of Arch Linux from the ISO you just downloaded.

If you are on Linux, you can use the dd command to create a live USB, but I recommend trying Balena Etcher for creating your USB. It is my favourite way of doing it: a portable piece of software that can be used on Windows/macOS/Linux.

You can download it from the link here
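If you do go the dd route, a minimal sketch (the ISO filename is whatever you downloaded; /dev/sdX stands in for your USB stick, so double-check the device name with lsblk first, as dd overwrites the target without asking):

sudo dd if=archlinux-x86_64.iso of=/dev/sdX bs=4M status=progress
sync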

Once you have created a live USB for Arch Linux, shut down your PC. Plug in your USB and boot your system. While booting, keep pressing ESC, F2, F10 or F12 (depending on your system) to get into the boot settings. There, select boot from USB or removable disk. You will be presented with a Linux terminal, automatically logged in as the root user.

With the initial steps completed, we'll discuss the installation part in my next post.

Demystifying Arch

There is always a lot of buzz around the Arch Linux distribution in the Linux community. Those who are switching to Linux as a desktop operating system are very reluctant to go the Arch way, and so are many who already have experience with Linux. IMHO it's only because of the mystery prevailing around this distro. In this blog we'll try to address some of those concerns and help everyone embrace the Arch world without worry.

Arch principles are: Simplicity, Modernity, Pragmatism, User Centrality and Versatility.

Arch Linux is an independently developed, x86-64 general-purpose GNU/Linux distribution that strives to provide the latest stable versions of most software by following a rolling-release model. The default installation is a minimal base system, configured by the user to add only what is purposely required. In other words, Arch is a DIY Linux installation where the user has full control over each and every component that gets installed and configured.

The original creator of Arch Linux, Judd Vinet, was in total love with the simplicity and elegance of Slackware, BSD, etc. He wanted a distribution that could be DIY: you make it yourself from the ground up, and it has only what you want it to have.

The solution was a shell-based manual installation instead of the automatic ncurses or graphical installers that teach you almost nothing, leaving you unaware of what exactly was installed on the system. The zsh-based installer environment of Arch Linux means that you do everything using commands, so you know exactly what is happening, since you are the one doing it. You can customize everything right from the pre-installation steps. You don't 'install the OS' in Arch; you download the base system and install it into the root partition/directory, then install the other packages you need by either chrooting into the system or booting into it. At that point it will be just a terminal, as a base system install is all you have done. To install and configure each package, you must know which packages you require and how they are installed and configured. That is how Arch Linux helps you install your own custom Linux (DIY), and helps you learn by doing, breaking & repeating.

Arch Linux doesn't care about being easy to set up in the Ubuntu/Debian/Red Hat/Fedora style. But there do exist easy-to-install variants for anyone who wants a touch and feel of Arch Linux; Antergos & Manjaro are essentially Arch with a graphical installer.

In somewhat simplified terms, an Arch installation looks like this:

  • Download the ISO image
  • Burn the image to a DVD/USB
  • Boot Arch from the media
  • Create disk partitions
  • Set up the network with/without DHCP, including wired or wireless
  • Optimize gcc for the specific CPU
  • Configure/compile the Linux kernel & modules
  • Install the base packages
  • Environment configuration
  • Necessary software/tools/applications
  • Set up the X server and GUI

In short, the installers of distributions like Debian and CentOS do all of the above in a more user-friendly way, or even automatically, whereas in Arch Linux you do it manually, step by step. But it is not nearly as hard as it sounds. We'll walk through the installation in the next post.
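To give a taste of those manual steps, here is a minimal sketch of the core commands run from the live ISO, assuming a single ext4 root partition on /dev/sda1 and a wired DHCP connection; the full walkthrough follows in the next post:

fdisk /dev/sda                             # create the disk partitions
mkfs.ext4 /dev/sda1                        # format the root partition
mount /dev/sda1 /mnt                       # mount the install target
pacstrap /mnt base linux linux-firmware    # install the base system
genfstab -U /mnt >> /mnt/etc/fstab         # generate the fstab
arch-chroot /mnt                           # chroot in and configure the rest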

Linux Containers

The Linux Containers project or LinuX Containers (LXC) is an open source container platform that provides a set of tools, templates, libraries, and language bindings. LXC has a simple command line interface that improves the user experience when starting containers. LXC is a user-space interface for the Linux kernel containment features. Through a powerful API and simple tools, it lets Linux users easily create and manage system or application containers.

LXC offers an operating-system-level virtualization environment that can be installed on many Linux-based systems; your Linux distribution may have it available through its package repository. LXC is free software: most of the code is released under the terms of the GNU LGPLv2.1+ license, some Android compatibility bits are released under a standard 2-clause BSD license, and some binaries and templates are released under the GNU GPLv2 license.

LXC is an OS-level virtualization technology that allows the creation and running of multiple isolated Linux virtual environments (VEs) on a single control host. These isolated environments, or containers, can be used either to sandbox specific applications or to emulate an entirely new host. LXC uses the kernel's cgroups functionality, introduced in kernel version 2.6.24, to partition resources such as CPU and memory, combined with kernel namespaces to provide the isolation. Note that a VE is distinct from a virtual machine (VM).

The idea of what we now call container technology first appeared in 2000 as FreeBSD jails, a technology that allows the partitioning of a FreeBSD system into multiple subsystems, or jails. Jails were developed as safe environments that a system administrator could share with multiple users inside or outside of an organization.

In 2001, an implementation of an isolated environment made its way into Linux, by way of Jacques Gélinas’ VServer project. Once this foundation was set for multiple controlled userspaces in Linux, pieces began to fall into place to form what is today’s Linux container.

Very quickly, more technologies combined to make this isolated approach a reality. Control groups (cgroups) is a kernel feature that controls and limits resource usage for a process or groups of processes. And systemd, an initialization system that sets up the userspace and manages their processes, is used by cgroups to provide greater control over these isolated processes. Both of these technologies, while adding overall control for Linux, were the framework for how environments could be successful in staying separated.

Current LXC uses the following kernel features to contain processes:

  • Kernel namespaces (ipc, uts, mount, pid, network and user)
  • Apparmor and SELinux profiles
  • Seccomp policies
  • Chroots (using pivot_root)
  • Kernel capabilities
  • CGroups (control groups)

LXC containers are often considered something in the middle between a chroot and a full-fledged virtual machine. The goal of LXC is to create an environment as close as possible to a standard Linux installation but without the need for a separate kernel. LXC is currently made up of a few separate components:

  • The liblxc library
  • Several language bindings for the API:
    • python3 (in-tree, long-term support in 2.0.x)
    • lua (in-tree, long-term support in 2.0.x)
    • Go
    • ruby
    • python2
    • Haskell
  • A set of standard tools to control the containers
  • Distribution container templates

Now we will quickly explore the LXC containers.

For most modern Linux distributions, the kernel is enabled with cgroups, but you most likely will need to install the LXC utilities.

If you’re using Red Hat or CentOS, you’ll need to install the EPEL repositories first. For other distributions, such as Ubuntu or Debian, simply type:

$ sudo apt-get install lxc

Create the ~/.config/lxc directory if it doesn’t already exist, and copy the /etc/lxc/default.conf configuration file to ~/.config/lxc/default.conf. Append the following two lines to the end of the file:

lxc.id_map = u 0 100000 65536
lxc.id_map = g 0 100000 65536
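A quick way to do all of that in one go from the shell (using the same mapping values as above):

mkdir -p ~/.config/lxc
cp /etc/lxc/default.conf ~/.config/lxc/default.conf
cat >> ~/.config/lxc/default.conf <<'EOF'
lxc.id_map = u 0 100000 65536
lxc.id_map = g 0 100000 65536
EOF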

Append the following to the /etc/lxc/lxc-usernet file (replace the first column with your user name):

akbharat veth lxcbr0 10

The quickest way for these settings to take effect is either to reboot the host or log the user out and then log back in.

Once logged back in, verify that the veth networking driver is currently loaded:

$ lsmod|grep veth
veth 16384 0

If the veth module does not appear in the lsmod output, load it into the kernel with:

$ sudo modprobe veth

Next, download a container image and name it “my-container”. When you type the following command, you’ll see a long list of supported containers under many Linux distributions and versions:

$ sudo lxc-create -t download -n my-container

You’ll be given three prompts to pick the distribution, release and architecture. I chose the following:

Distribution: ubuntu
Release: xenial
Architecture: amd64

Once you press Enter, the rootfs will be downloaded locally and configured. For security reasons, containers do not ship with an OpenSSH server or user accounts, and a default root password is not provided. In order to change the root password and log in, you must run lxc-attach or chroot into the container directory path (after it has been started). We will use the command below to start the container:

$ sudo lxc-start -n my-container -d

The -d option dæmonizes the container, and it will run in the background. If you want to observe the boot process, replace the -d with -F, and it will run in the foreground, ending at a login prompt.

Open up a second terminal window and verify the status of the container:

$ sudo lxc-info -n my-container
Name: my-container
State: RUNNING
PID: 1356
IP: 10.0.3.28
CPU use: 0.29 seconds
BlkIO use: 16.80 MiB
Memory use: 29.02 MiB
KMem use: 0 bytes
Link: vethPRK7YU
TX bytes: 1.34 KiB
RX bytes: 2.09 KiB
Total bytes: 3.43 KiB

There is also another way to see a list of all installed containers, which is provided below:

$ sudo lxc-ls -f
NAME STATE AUTOSTART GROUPS IPV4 IPV6
my-container RUNNING 0 - 10.0.3.28 -

The container is running, but to actually use it you can attach to it directly with the LXC tool set and work from there:

$ sudo lxc-attach -n my-container
root@my-container:/#

If you now set up a username from within the container, you can use the console command to connect to the container with your newly created username and password:

$ sudo lxc-console -n my-container

If you want to connect to the container using SSH, install the OpenSSH server in the container, find the container's IP address with the commands below, and connect to that IP from your Linux host.

root@my-container:/# apt-get install openssh-server

root@my-container:/# ip addr show eth0|grep inet

From your Linux host now ssh to the container using the IP we captured from the earlier command.

$ ssh 10.0.3.28

On the host system, and not within the container, it’s interesting to observe which LXC processes are initiated and running after launching a container:

$ ps aux|grep lxc|grep -v grep
root 861 0.0 0.0 234772 1368 ? Ssl 11:01 0:00 /usr/bin/lxcfs /var/lib/lxcfs/
lxc-dns+ 1155 0.0 0.1 52868 2908 ? S 11:01 0:00 dnsmasq -u lxc-dnsmasq --strict-order --bind-interfaces --pid-file=/run/lxc/dnsmasq.pid --listen-address 10.0.3.1 --dhcp-range 10.0.3.2,10.0.3.254 --dhcp-lease-max=253 --dhcp-no-override --except-interface=lo --interface=lxcbr0 --dhcp-leasefile=/var/lib/misc/dnsmasq.lxcbr0.leases --dhcp-authoritative
root 1196 0.0 0.1 54484 3928 ? Ss 11:01 0:00 [lxc monitor] /var/lib/lxc my-container
root 1658 0.0 0.1 54780 3960 pts/1 S+ 11:02 0:00 sudo lxc-attach -n my-container
root 1660 0.0 0.2 54464 4900 pts/1 S+ 11:02 0:00 lxc-attach -n my-container

And finally, to stop the container and verify that it has stopped, we can use the following commands in sequence:

$ sudo lxc-stop -n my-container

$ sudo lxc-ls -f
NAME STATE AUTOSTART GROUPS IPV4 IPV6
my-container STOPPED 0 - - -

$ sudo lxc-info -n my-container
Name: my-container
State: STOPPED

You might also want to destroy the container:

$ sudo lxc-destroy -n my-container
Destroyed container my-container

This is a very basic example of container capability. Containerization in the modern container era is much more sophisticated, and Docker is a significant improvement over LXC's capabilities. Its obvious advantages are gaining Docker a growing following of adherents. In fact, it is getting dangerously close to negating the advantage of VMs over VEs, given its ability to quickly and easily transfer and replicate any Docker-created package. We'll discuss more on Docker in my next post.

Linux Interview Questions

These are some of the interview questions we were using while conducting L1, L2 & L3 level interviews. I will not be posting answers to these questions, as it's up to you to search and find them; doing so will help you understand the topic, and you'll remember the answers far better.

What is the difference between kernel image and initrd image?

Why is root file system mounted read-only initially?

What will happen if it is mounted read-write?

What are the process states in Linux?

Which log files holds the information of currently logged users?

What is called a fork bomb?What is its relevance?

What is the difference between commands last and lastb?

How do you ensure your users have hard-to-guess passwords?

What is log rotate and Why is it used in a Linux system?

How will you increase the priority of a process?

What is the range for the priority of the processes?

I have a file named “/ \+Xy \+\8_/ \” How do I get rid of it?

What does the command pvscan do?

How will you probe for a new SCSI drive?

How will you take a system backup image for recovery?

What will the command chage -E -1 do?

What will the option -a do for usermod?

What is the command gpasswd?

When debugging a core in gdb, what does the command `bt` give?

What do you mean by man (1) (5) and (8)?

How many fields in crontab and what are those?

To display a list of all manual pages containing the keyword "date", which command can be used?

What do you mean by restricted deletion flag?

What’s the difference between `telnet` and `ssh`? What’s a good use for each?

What will vgcfgbackup and vgcfgrestore do and can you explain with a scenario?

What is the difference between insmod and modprobe?

What to do if the newly built kernel does not boot?

What are TCP wrappers? What is the library responsible for TCP wrappers?

In how many ways you can block the traffic to ssh from a specific host?

How will you run ssh in debugging mode to identify a failure in establishing a connection?

Which file is responsible for kernel parameter changes?

What do you mean by hardening of a Linux server?How will you achieve this?

How will you tune a Linux server for maximum performance?

What is the use of commands clustat clusvcadm?

What is multipathing?Why is it used?

What is black-listing of devices in multipath?

Which command will display the current multipath configuration gathered from sysfs, the device mapper, and all other available components on the system?

What is multipath interactive console?

What is GFS?

How will you create a GFS file system?

What is called NIC bonding or teaming?

How will you repair a GFS file system?

 

SHELL SCRIPTING

How do you read arguments in a shell program?

How do you send a mail message to somebody within a script?

Which command is used to count lines and/or characters in a file?

How do you start a process in the background?

How will you define a function in a shell script?

How do you get character positions 10-20 from a text file?

What do you mean by getopts?

How do you test for file properties in a shell script?

How will you define an array?

How can you take in key-board inputs to a shell script?

What is meant by traps in a shell scripts?

Is it possible to use alphanumeric in for loop?

Write a Palindrome script?

Write a script to check no of machines up in network ( Provided list of hosts are available in a file)?

If you have the words f1;f2;f3, write a script to display f3 first, then f1, and finally f2.

Print the numbers 1 to 1000 in the format 0001, 0002, 0003 … 1000.

Magic SysRq key of Linux

The magic SysRq key is a key combination in the Linux kernel which allows the user to perform various low-level commands regardless of the system’s state.

It is often used to recover from freezes or to reboot a computer without corrupting the filesystem. The key combination consists of Alt+SysRq+commandkey. In many systems, the SysRq key is the PrintScreen key.

First, you need to enable the SysRq key, as shown below.

# echo "1" > /proc/sys/kernel/sysrq
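This setting lasts only until the next reboot; to make it persistent, add the kernel.sysrq key to /etc/sysctl.conf and reload:

# echo "kernel.sysrq = 1" >> /etc/sysctl.conf
# sysctl -p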

List of SysRq Command Keys

Following are the command keys available for Alt+SysRq+commandkey.

‘k’ – Kill all processes running on the current virtual console.
’s’ – Attempt to sync all mounted file systems.
‘b’ – Immediately reboot the system, without unmounting partitions or syncing.
‘e’ – Send SIGTERM to all processes except init.
‘m’ – Output current memory information to the console.
‘i’ – Send the SIGKILL signal to all processes except init.
‘r’ – Switch the keyboard from raw mode (the mode used by programs such as X11) to XLATE mode.
‘t’ – Output a list of current tasks and their information to the console.
‘u’ – Remount all mounted filesystems read-only.
‘o’ – Shut down the system immediately.
‘p’ – Print the current registers and flags to the console.
‘0-9’ – Set the console log level, controlling which kernel messages will be printed to your console.
‘f’ – Call oom_kill to kill the process using the most memory.
‘h’ – Display help; any key other than those listed above will also print help.

We can also do this by echoing the keys to the /proc/sysrq-trigger file. For example, to reboot a system you can perform the following.

# echo "b" > /proc/sysrq-trigger

Perform a Safe reboot of Linux using Magic SysRq Key

To perform a safe reboot of a Linux computer which has hung, do the following; this avoids an fsck during the next boot. Press Alt+SysRq+<the letter highlighted below>, one key at a time:

unRaw (take control of the keyboard back from X11),
tErminate (send SIGTERM to all processes, allowing them to terminate gracefully),
kIll (send SIGKILL to all processes, forcing them to terminate immediately),
Sync (flush data to disk),
Unmount (remount all filesystems read-only),
reBoot.
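The same R-E-I-S-U-B sequence can also be driven from a root shell (for example over SSH when the console is unresponsive) via /proc/sysrq-trigger. A minimal sketch, assuming SysRq was enabled as shown earlier:

for key in r e i s u b; do
    echo "$key" > /proc/sysrq-trigger   # send each SysRq command key in turn
    sleep 2                             # give processes a moment to react; the final 'b' reboots immediately
done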

Finding the ‘find’ command

The common problem users or admins run into when first dealing with a Linux machine is how to find the files they are looking for.

We discuss here the GNU/Linux find command, one of the most important and most used commands on Linux systems. find is used to search for and locate a list of files and directories based on conditions you specify. It can be used with a variety of criteria: you can find files by permissions, users, groups, file type, date, size and more.

1. Find files under the current directory or in a specific directory:

ajoy@testserver:~$ find . -name file2.txt 
./file2.txt
ajoy@testserver:~$ find /home/ajoy/ -name file5.txt 
/home/ajoy/file5.txt
ajoy@testserver:~$

2. Find only files or only directories

ajoy@testserver:~$ find /home/ajoy -type f -name file1.txt
/home/ajoy/file1.txt
ajoy@testserver:~$ find /home/ajoy -type f -name "*.txt"
/home/ajoy/File1.txt
/home/ajoy/File5.txt
/home/ajoy/file3.txt
/home/ajoy/File2.txt
==result truncated=====
/home/ajoy/file9.txt
/home/ajoy/File9.txt
/home/ajoy/File7.txt
ajoy@testserver:~$

In the above example if we replace -type f to -type d find will look for only directories.

ajoy@testserver:~$ find . -type d -name "test"
./test
ajoy@testserver:~$

3. Find files ignoring the case

ajoy@testserver:~$ find . -iname file2.txt
./file2.txt
ajoy@testserver:/tmp$ find /home/ajoy/ -iname file5.txt
/home/ajoy/File5.txt
/home/ajoy/file5.txt
ajoy@testserver:~$

4. Find files, limiting the directory traversal

ajoy@testserver:~$ find test/ -maxdepth 3 -name "*.py"
test/subdir1/subdir2/github_summary.py
test/subdir1/github_summary.py
test/github_summary.py
ajoy@testserver:~$ find test/ -maxdepth 2 -name "*.py"
test/subdir1/github_summary.py
test/github_summary.py
ajoy@testserver:~$

5. Find files, inverting the match

ajoy@testserver:~$ find test/ -not -name "*.py"
test/
test/File1.txt
test/File5.txt
test/File2.txt
test/subdir1
test/subdir1/subdir2
test/File3.txt
test/File6.txt
test/File4.txt
test/File8.txt
test/File9.txt
test/File7.txt
ajoy@testserver:~$

6. Find with multiple search criteria

ajoy@testserver:~$ find test/ -name "*.txt" -o -name "*.py"
test/File1.txt
test/File5.txt
test/File2.txt
test/subdir1/subdir2/github_summary.py
test/subdir1/github_summary.py
test/File3.txt
test/File6.txt
test/File4.txt
test/File8.txt
test/github_summary.py
test/File9.txt
test/File7.txt
ajoy@testserver:~$

7. Find files with certain permissions

ajoy@testserver:~$ find . -type f -perm 0664
./file3.txt
./file6.txt
./file1.txt
./.gitconfig
./file5.txt
./file4.txt
./file2.txt
./file8.txt
./file7.txt
ajoy@testserver:~$

ajoy@testserver:~$ sudo find / -maxdepth 2 -perm /u=s 2>/dev/null
/bin/ping6
/bin/ping
/bin/fusermount
/bin/mount
/bin/umount
/bin/su
ajoy@testserver:~$

The above example shows files with the setuid permission set. Redirecting to the /dev/null bit bucket discards permission-related errors while find traverses directories it cannot read.

ajoy@testserver:~$ sudo find / -maxdepth 2 -type d -perm /o=t 2>/dev/null
/tmp
/var/tmp
/var/crash
/run/shm
/run/lock
ajoy@testserver:~$

The above example shows the directories with the sticky bit set.

8. Find files based on users and groups

ajoy@testserver:~$ sudo find /var -user www-data
/var/cache/apache2/mod_cache_disk
ajoy@testserver:~$

ajoy@testserver:~$ sudo find /var -group crontab
/var/spool/cron/crontabs
ajoy@testserver:~$

9. Find files by access, modification and change time

ajoy@testserver:~$ find / -maxdepth 2 -mtime 50 ==> last modified 50 days back

ajoy@testserver:~$ find / -maxdepth 2 -atime 50 ==> last accessed 50 days back

ajoy@testserver:~$ find /  -mtime +50 -mtime -100 ==> modified between 50 to 100 days ago

ajoy@testserver:~$ find .  -cmin -60 ==> changed in last 60 minutes (1 hour)

ajoy@testserver:~$ find / -mmin -60 ==> modified in last 60 minutes

ajoy@testserver:~$ find / -amin -60 ==> accessed in last 60 minutes

10. Find files based on size

ajoy@testserver:~$ find .  -size 50M ==> all files of exactly 50 MB (without the M suffix, -size counts 512-byte blocks)

ajoy@testserver:~$ find /  -size +50M -size -100M ==> all files greater than 50 MB & less than 100 MB

ajoy@testserver:~$ find /var -type f -empty ==> empty file

ajoy@testserver:~$ find /var -type d -empty ==> empty directory

11. Listing out files found with find

ajoy@testserver:~$ find . -maxdepth 1 -name "*.txt" -exec ls -l {} \;
-rw-rw-r-- 1 ajoy ajoy 0 Jan 31 17:40 ./file3.txt
-rw-rw-r-- 1 ajoy ajoy 0 Jan 31 17:40 ./file6.txt
-rw-rw-r-- 1 ajoy ajoy 0 Jan 31 17:40 ./file1.txt
-rw-rw-r-- 1 ajoy ajoy 0 Jan 31 17:40 ./file5.txt
-rw-rw-r-- 1 ajoy ajoy 0 Jan 31 17:40 ./file4.txt
-rw-rw-r-- 1 ajoy ajoy 0 Jan 31 17:40 ./file2.txt
-rw-rw-r-- 1 ajoy ajoy 0 Jan 31 17:40 ./file8.txt
-rw-rw-r-- 1 ajoy ajoy 0 Jan 31 17:40 ./file7.txt
-rw-rw-r-- 1 ajoy ajoy 0 Jan 31 17:40 ./file9.txt
ajoy@testserver:~$

The above command long-lists the files matching the find criteria, and the one given below removes the files matching the given criteria (a safer alternative follows the listing).

ajoy@testserver:~$ find . -maxdepth 1 -name "*.txt" -exec rm -f {} \;
ajoy@testserver:~$ ls
script.sh test
ajoy@testserver:~$
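With GNU find, the built-in -delete action is a sketch of the same cleanup that avoids spawning an rm process for every match (note that -delete implies -depth):

ajoy@testserver:~$ find . -maxdepth 1 -name "*.txt" -delete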

12. Finding Smallest and Largest file

The 4 largest files in the current directory and subdirectories (sorting numerically on the size column, field 5 of the ls -l output):

ajoy@testserver:~$ sudo find /var -type f -exec ls -l {} \; |sort -k5 -n -r |head -4
-rwxr-xr-x 1 root root 998 Dec 5 2012 /var/lib/dpkg/info/sgml-base.preinst
-rwxr-xr-x 1 root root 991 Mar 25 2013 /var/lib/dpkg/info/ureadahead.postinst
-rwxr-xr-x 1 root root 980 Sep 23 2014 /var/lib/dpkg/info/man-db.postrm
-rwxr-xr-x 1 root root 972 May 15 2014 /var/lib/dpkg/info/locales.preinst

The 2 smallest files in the current directory and subdirectories:

ajoy@testserver:~$ sudo find /var -type f -exec ls -l {} \; |sort -k5 -n |head -2
-rw------- 1 daemon daemon 2 Jan 31 16:36 /var/spool/cron/atjobs/.SEQ
-rw------- 1 root ajoy 40 Jan 31 20:16 /var/lib/sudo/ajoy/0
ajoy@testserver:~$