Microsoft Linux CBL-Mariner

This post is the first of a two-part series on CBL-Mariner Linux. If you’re interested just in the installation, you can skip this part and switch to Part 2.

It was a well-known fact for many years that Microsoft and Linux didn’t mix. Gone are the days when we couldn’t even put Microsoft and Linux together in a sentence. Former Microsoft CEO Steve Ballmer (in)famously branded Linux “a cancer that attaches itself in an intellectual property sense to everything it touches” back in 2001. Fast-forward to 2021: Microsoft quietly released CBL-Mariner on its GitHub as open source. Anyone can use it, build it, edit it and reuse it, fulfilling the four essential freedoms defined by the Free Software Foundation.

Microsoft’s stance on open source has also changed over the years. There have been signs of Microsoft embracing Linux for quite some time. The Windows Subsystem for Linux (WSL), initially released in 2016 and followed by the stable WSL 2 release in 2019, is evidence of that. The constant rise of cloud and edge computing has increased the dominance of Linux. Microsoft’s surprising acquisition of GitHub back in 2018 was another strategic move toward embracing the OSS/FOSS model.

In this blog, we’ll discuss Microsoft’s Linux (distro?) CBL-Mariner. IMHO it doesn’t quite qualify to be called a distro; we’ll see why.

CBL-Mariner is a Linux distribution developed by the Linux Systems Group at Microsoft, the team behind the WSL compatibility layer. The CBL part of its name stands for Common Base Linux. It is fully open source and built to power Microsoft’s Azure Edge services. Although Microsoft states that it’s an internal distribution for managing their edge infrastructure, the entire project is available publicly on GitHub. It’s a minimal, lightweight Linux that users can run as a container or a container host. CBL-Mariner shows that Microsoft is on the right track when it comes to free software and Linux. The company that once stood steadfast against its open-source rival has seemingly come to terms with the changing reality of the IT industry. Let’s see what the future holds for this new strategy.

“This initiative is part of Microsoft’s increasing investment in a wide range of Linux technologies, such as SONiC, Azure Sphere OS and Windows Subsystem for Linux (WSL). CBL-Mariner is being shared publicly as part of Microsoft’s commitment to Open Source and to contribute back to the Linux community,”

CBL-Mariner GitHub pages

Looking at this release strategically, CBL-Mariner is similar in concept and philosophy to Amazon Linux, which is used explicitly in AWS; Microsoft may have been thinking along the same lines for Azure. Another aspect to consider is that Red Hat’s CoreOS was deprecated in May 2020. CoreOS, also known as Container Linux, was an open-source lightweight operating system based on the Linux kernel and designed to provide infrastructure for clustered deployments, focusing on automation, ease of application deployment, security, reliability and scalability; it was predominantly used in Microsoft Azure. This led Microsoft to look for an alternative Linux that they control, rather than one from the other enterprise Linux players. Having said that, we’ll wait and see how widely CBL-Mariner is adopted.

Whether deployed as a container or a container host, CBL-Mariner consumes limited disk and memory resources. Its lightweight characteristics also provide faster boot times and a minimal attack surface. By limiting the core image to just what Microsoft’s internal cloud customers need, there are fewer services to load and fewer attack vectors. When security vulnerabilities arise, CBL-Mariner supports both a package-based and an image-based update model. Leveraging the common RPM package manager system, CBL-Mariner makes the latest security patches and fixes available for download, with the goal of fast turnaround times.

Microsoft uses it as the base Linux for containers in the Azure Stack HCI implementation of Azure Kubernetes Service. It’s also used in Azure IoT Edge to run Linux workloads.

In Part 2 of this series we’ll look at CBL-Mariner in detail: its features, how to install it, and more.

Microsoft Linux CBL Mariner – Part 2

This post is Part 2 of the CBL-Mariner blog series. If you haven’t read Part 1 yet, please do so here.

The CBL-Mariner philosophy is that you only need a small core set of (RPM-based) packages on top of the CBL core to address the needs of cloud and edge computing.

What’s in it?

It shares major components with:

  • VMware’s Photon OS project for SPEC files
  • The Fedora Project for Qt, DNF etc.
  • Linux From Scratch for SPEC files and for simplified installation
  • OpenMamba for SPEC files
  • GNU/FSF core compilers and utilities
  • And finally, it requires Ubuntu 18.04 as the build environment to create CBL-Mariner binaries and to bake an ISO

How to install it?

CBL-Mariner is open source, released under a mix of licenses including the GNU GPL, LGPL, MIT and Apache licenses.

  • It has its own repo under Microsoft’s GitHub organization
  • No ISOs or images of Mariner are provided
  • The repo has instructions to build them on Ubuntu 18.04
  • It can be deployed on VMware, Hyper-V or VirtualBox
  • It doesn’t include a desktop, aka a GUI

What does it look like?

  • It’s completely command-line based (CUI)
  • Boots very quickly due to its lightweight nature
  • Runs with a very minimal memory footprint
  • The CBL-Mariner package system is RPM-based
  • The package management system uses both dnf and tdnf (tiny dnf); see the sketch after this list
  • It has two package repositories, base and update
  • Around 3,300 packages are available between the two repositories
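
As a quick illustration, day-to-day package management on a running CBL-Mariner system looks like any other RPM distro. A minimal sketch (the package name is just an example and may not exist in the Mariner repos):

# Search for and install a package with tdnf, the lightweight default client
tdnf search nginx
sudo tdnf install nginx

# dnf works the same way if you prefer the full-featured client
sudo dnf update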

How does it operate?

Apart from DNF, CBL-Mariner also supports an image-based update mechanism for atomic servicing and rollback using RPM-OSTree. rpm-ostree is an open-source tool, based on OSTree, for managing bootable, immutable, versioned filesystem trees. The idea behind rpm-ostree is to use a client-server architecture to keep Linux hosts updated and in sync with the latest packages in a reliable manner.
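
To give a feel for the image-based model, here’s roughly what an rpm-ostree update cycle looks like on a host managed this way (a sketch of generic rpm-ostree usage; CBL-Mariner’s exact tooling and output may differ):

rpm-ostree status          # show the booted and any staged filesystem trees
sudo rpm-ostree upgrade    # download and stage a new immutable tree
sudo systemctl reboot      # boot into the staged tree
sudo rpm-ostree rollback   # revert to the previous tree if something breaks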

This is not a regular Linux distro of the kind you’d install on hardware or as a virtual machine, load with applications and use like Ubuntu, Fedora, Arch etc. Whether you’re a professional developer, a sysadmin or a mere hobbyist, you can build custom CBL-Mariner images and play around. If you have previous exposure to Makefiles, the make command, rpm builds and general Linux work, it’ll be easy for you. The prerequisites listed on its GitHub page roughly include Docker, RPM tools, ISO build tools and Golang, amongst others, to set up an environment on Ubuntu 18.04 (recommended) for building an ISO.

Finally, how do you use it?

CBL-Mariner doesn’t ship any ISO image by default. The GitHub page provides a quick-start guide to build an ISO, and a separate document provides details on building a custom CBL-Mariner ISO and/or image.

There are a couple of prerequisites for building your ISO. The first thing we require is an Ubuntu 18.04 system; as per CBL-Mariner’s GitHub pages, all the requirements are tested and validated on Ubuntu 18.04.

We’ll follow the quick-start guide as an initial step; if you’re interested in a custom build you can do that afterwards. A quick build takes approximately 20 to 30 minutes, while my custom build took a little over 3 hours.

I’ve tested this build on AWS with a t2.medium EC2 instance with a 50 GB disk. I bumped it up to 50 GB because I did the quick build as well as the custom image; a custom build rebuilds the RPM packages, which consumes a fair amount of space in your file system.

# Install required dependencies.
sudo apt -y install make tar wget curl rpm qemu-utils genisoimage python-minimal bison gawk parted gcc g++

# Recommended but not required: `pigz` for faster compression operations.
sudo apt -y install pigz

# Install Docker.
$ curl -fsSL https://get.docker.com -o get-docker.sh
$ sudo sh get-docker.sh
$ sudo usermod -aG docker $USER

# I've explicitly installed golang from the tarball
$ wget https://dl.google.com/go/go1.16.7.linux-amd64.tar.gz
$ tar zxvf go1.16.7.linux-amd64.tar.gz
$ sudo mv go /usr/local
$ export GOROOT=/usr/local/go
$ export PATH=$GOROOT/bin:$PATH
$ go version   # test the installation

For the build to work we need to create a symbolic link to the go binary in /usr/bin:

$ sudo ln -s /usr/local/go/bin/go /usr/bin/go

Now we can clone the CBL-Mariner repository and start our build.

$ git clone https://github.com/microsoft/CBL-Mariner.git
$ cd CBL-Mariner
$ git checkout 1.0-stable
$ cd toolkit
$ sudo make image REBUILD_TOOLS=y REBUILD_PACKAGES=n CONFIG_FILE=./imageconfigs/full.json

I faced a lot of issues and errors while building from non-stable branches/tags. If you’re comfortable with bleeding-edge releases, you can get your hands dirty with those. Once the build is completed you can find your image in the out directory.

$ cd ../out/images/full/

$ ls -lh full.1.0.20210813.0520.iso

A quick view of the build I’ve done is provided in the video below.

CBL-Mariner build

Now that we have the ISO image, we can install CBL-Mariner in a virtual machine. To do this, I’m going to use Oracle VirtualBox, which is free; if you have another virtualization tool, you can use that as well.

Steps to Follow:

  • Open VirtualBox.
  • Click the New button to create a new VM.
  • The virtual machine creation wizard starts.
  • Enter the name you want.
  • Choose “Linux”, and version “Linux 2.6/3.x/4.x (64-bit)”, then press Next.
  • Follow the wizard and choose the defaults.
  • For CBL-Mariner we must configure at least 1 CPU, 1 GB of RAM and 8 GB of disk.
  • Click Next until the wizard completes.
  • Back on the main VirtualBox screen, right-click the entry with the name we gave it and select Configuration from the menu, or select the entry and click the Settings button at the top.
  • Go to Storage, click the optical disk icon (Empty) under Optical Drive, and choose “Select a disk file” to load the ISO image. In the file browser that appears, select the ISO we generated in the previous step.
  • It’s time to start the virtual machine with CBL-Mariner.

Once we have started the virtual machine, it will boot and after a few moments show us the installation menu. The steps to follow are:

  • Choose the option “Graphical Installer” for a graphical installation. There are also options for text mode, but the graphical one is nicer. Once selected, press Next. (Move through the menu with the keyboard arrows and ENTER to select.)
  • Now we see an installer very similar to that of any other distro. In the Installation Type menu, choose “CBL-Mariner Full” for a full installation. In any case, since it includes hardly any packages, both Full and Core install quickly.
  • The next screen shows the license terms to accept.
  • Then comes the hard drive partitioning assistant. Create the necessary partitions or leave the defaults.
  • Next, choose the hostname (free text), as well as the username and password.
  • Provide a complex password with a combination of uppercase, lowercase, numbers and special characters.
  • CBL-Mariner now begins the actual installation and starts installing packages. When it’s done, reboot the virtual machine.
  • On startup we’ll see the login prompt, where we enter the username and password.
  • Now we can use CBL-Mariner as we would any server distro, without a GUI.
CBL-Mariner Installation

Thanks to all who have taken the time to go through this blog. I hope it helps you try building and installing CBL-Mariner, the latest addition to the Linux family.

RHEL 8: What’s new?

Red Hat Enterprise Linux 8 was released in beta on November 14, 2018. There are many features and improvements that distinguish it from its predecessor, RHEL 7. In this blog, I attempt to provide a quick glance at those improvements, the deprecations and the upgrade path.

Improvements:

  • The YUM command is replaced by DNF (yum remains as a symbolic link to dnf). If you’ve worked on Fedora, DNF has been the default package manager there.
  • chronyd is the default network time protocol daemon instead of ntpd.
  • Repo channel names changed, but their content is mostly the same. The CodeReady Linux Builder repository was added; it is similar to EPEL and supplies additional packages that are not supported for production use.
  • One of the biggest improvements in RHEL 8 is the new upper limit on physical memory capacity: it now supports 4 PB of physical memory, compared to 64 TB of system memory in RHEL 7.
  • The RPM tooling is also upgraded. The rpmbuild command can now run all build steps from a source package directly, the new --reinstall option allows reinstalling a previously installed package, and there is a new rpm2archive utility for converting an rpm payload to a tar archive.
  • The TCP networking stack is improved. Red Hat claims that kernel 4.18 provides higher performance, better scalability and more stability.
  • RHEL 8 supports OpenSSL 1.1.1 and the TLS 1.3 cryptographic standard by default.
  • BIND version is upgraded to 9.11 by default and introduces new features and feature changes compared to version 9.10.
  • Apache HTTP Server has been updated from version 2.4.6 to version 2.4.37 between RHEL 7 and RHEL 8. This updated version includes several new features but maintains backwards compatibility with the RHEL 7 version.
  • RHEL 8 introduces nginx 1.14, a web and proxy server supporting HTTP and other protocols
  • OpenSSH was upgraded to version 7.8p1.
  • Vim runs the default.vim script if no ~/.vimrc file is available.
  • The ‘nobody’ and ‘nfsnobody’ users and groups are merged into a single ‘nobody’ ID (65534).
  • In RHEL 8, for some daemons like cups, logs are no longer stored in specific files under /var/log as they were in RHEL 7; instead they are stored only in systemd-journald.
  • You are now forced to switch to chronyd; the old NTP implementation is not supported in RHEL 8.
  • NFS over UDP (NFSv3) is no longer supported. The NFS configuration file moved to /etc/nfs.conf; when upgrading from RHEL 7 the file is migrated automatically.
  • For desktop users, Wayland is the default display server as a replacement for the X.org server. Yet X.Org is still available. Legacy X11 applications that cannot be ported to Wayland automatically use Xwayland as a proxy between the X11 legacy clients and the Wayland compositor.
  • Iptables were replaced by nftables as the default network filtering framework. This update adds the iptables-translate and ip6tables-translate tools to convert existing iptables or ip6tables rules into their nftables equivalents.
  • The GCC toolchain is based on GCC 8.2.
  • The Python version installed by default is 3.6, which introduces incompatibilities with scripts written for Python 2.x, but Python 2.7 is available in the python2 package.
  • Perl 5.26 is distributed with RHEL 8. The current directory . has been removed from the @INC module search path for security reasons. PHP 7.2 is also added.
  • For working with containers, Red Hat expects you to use the podman, buildah, skopeo and runc tools. The podman tool manages pods, container images, and containers on a single node. It is built on the libpod library, which enables management of containers and groups of containers, called pods.
  • The basic installation provides a new version of the ifup and ifdown scripts which call NetworkManager through the nmcli tool. The NetworkManager-config-server package is only installed by default if you select either the Server or Server with GUI base environment during the setup. If you selected a different environment, use the yum install NetworkManager-config-server command to install the package.
  • Node.js, a software development platform in the JavaScript programming language, is provided for the first time in RHEL. It was previously available only as a Software Collection. RHEL 8 provides Node.js 10.
  • DNF modules improve package management (see the sketch after this list).
  • A new tool called Image Builder enables users to create customized RHEL images. Image Builder is available in AppStream in the lorax-composer package. Among other things, it allows creating live ISO disk images and images for Azure, VMware and AWS. See Composing a customized RHEL system image.
  • Some new storage management capabilities were introduced. Stratis is a new local storage manager that provides managed file systems on top of pools of storage, with additional features for the user. It also supports file system snapshots and LUKSv2 disk encryption with Network-Bound Disk Encryption (NBDE).
  • VMs are managed via Cockpit by default; virt-manager can also be installed if required. The Cockpit web console is available by default and provides basic stats of the server, much like Nagios, as well as access to logs. Packages for the RHEL 8 web console, also known as Cockpit, are now part of the Red Hat Enterprise Linux default repositories and can therefore be installed immediately on a registered RHEL 8 system. (You should be using this extensively if you’re using KVM implementations of RHEL 8 virtual machines.)
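
To make the DNF modules idea concrete, here’s a hedged sketch of working with module streams (the module and stream names are examples; availability varies by release):

# List the streams available for a module
dnf module list nodejs

# Enable a specific stream, then install from it
sudo dnf module enable nodejs:10
sudo dnf module install nodejs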

Deprecations:

  • Yum package is deprecated and Yum command is just a symbolic link to dnf.
  • The old NTP implementation is not supported in RHEL 8.
  • Network scripts are deprecated; ifup and ifdown map to nmcli.
  • The Digital Signature Algorithm (DSA) is considered deprecated. Authentication mechanisms that depend on DSA keys do not work in the default configuration.
  • rdist is removed, as are rsh and all the r-utilities.
  • The X.Org display server was replaced by Wayland in GNOME.
  • tcp_wrappers were removed. It’s not clear what happens to programs previously compiled with tcp_wrappers support, such as Postfix.
  • Iptables are deprecated.
  • Limited support for Python 2.6.
  • KDE support has been deprecated.
  • Upgrading from KDE on RHEL 7 to GNOME on RHEL 8 is unsupported.
  • Btrfs support is removed.
  • Docker is not included in RHEL 8.0.

Upgrade:

The release of RHEL 8 gives those still using RHEL 6 the opportunity to skip RHEL 7 completely for new server installations. RHEL 7 has five years before EOL (June 30, 2024), while many servers now last more than five years. Theoretically an upgrade from RHEL 6 to RHEL 8 is possible via an upgrade to RHEL 7 first, but it is too risky. RHEL 8 is distributed through two main repositories, described below. Please follow the RHEL 8 upgrade path.

BaseOS

Content in the BaseOS repository is intended to provide the core set of underlying OS functionality that provides the foundation for all installations. This content is available in the RPM format and is subject to support terms similar to those in previous releases of RHEL. For a list of packages distributed through BaseOS, see the RHEL 8 Package manifest.

AppStream

Content in the Application Stream repository includes additional userspace applications, runtime languages and databases in support of varied workloads and use cases. Application Streams are available in the familiar RPM format, as an extension to the RPM format called modules, or as Software Collections. For a list of packages available in AppStream, see the RHEL 8 Package manifest.

In addition, the CodeReady Linux Builder repository is available with all RHEL subscriptions. It provides additional packages for use by developers. Packages included in the CodeReady Linux Builder repository are unsupported. Please check RHEL 8 Package manifest.

With the idea of Application Streams, RHEL 8 is following Fedora’s modularity lead. Fedora 28, released earlier in the year by the Fedora project (considered the bleeding-edge community edition of RHEL), introduced the concept of modularity: userspace components can be updated more frequently than core operating system packages, without waiting for the next version of the operating system. Application streams also make it possible to install multiple versions of the same package (such as an interpreted language or a database).

Theoretically, RHEL 8 will be able to withstand heavier loads thanks to the optimized TCP/IP stack and improvements in memory handling.

The installation has not changed much from RHEL 7. RHEL 8 still pushes LVM for the root filesystem in the default installation. Without a subscription you can still install packages from the ISO, either directly or by making it a repo. The default filesystem remains XFS. Red Hat Enterprise Linux 8 supports installing from a repository on a local hard drive; you only need to specify the directory instead of the ISO image.

For example:

inst.repo=hd:<device>:<path>

Kickstart has also changed, but not much (auth and authconfig are deprecated; you need to use authselect instead).

Source: Red Hat RHEL8 release notes, Red Hat Blogs, Linux Journal etc

Understanding API

API stands for Application Programming Interface. An API is a software intermediary that allows two applications to talk to each other. In other words, an API is the messenger that delivers your request to the provider that you’re requesting it from and then delivers the response back to you. We can say that an API is a set of programming instructions and standards for accessing a Web-based software application or Web tool.

As we understood API is a software-to-software interface, not a user interface. The most important part of this name is “interface,” because an API essentially talks to a program for you. You still need to know the language to communicate with the program, but without an API, you won’t get far. With APIs, applications talk to each other without any user knowledge or intervention. When programmers decide to make some of their data available to the public, they “expose endpoints,” meaning they publish a portion of the language they’ve used to build their program. Other programmers can then pull data from the application by building URLs or using HTTP clients (special programs that build the URLs for you) to request data from those endpoints.

Endpoints return text that’s meant for computers to read, so it won’t make complete sense if you don’t understand the computer code used to write it. A software company releases its API to the public so that other software developers can design products that are powered by its service.

Examples:

When bloggers put their Twitter handle on their blog’s sidebar, WordPress enables this by using Twitter’s API.

Amazon.com released its API so that Web site developers could more easily access Amazon’s product information. Using the Amazon API, a third party Web site can post direct links to Amazon products with updated prices and an option to “buy now.”

When you buy movie tickets online and enter your credit card information, the movie ticket Web site uses an API to send your credit card information to a remote application that verifies whether your information is correct. Once payment is confirmed, the remote application sends a response back to the movie ticket Web site saying it’s OK to issue the tickets. As a user, you only see one interface — the movie ticket Web site — but behind the scenes, many applications are working together using APIs. This type of integration is called seamless, since the user never notices when software functions are handed from one application to another.

Docker engine comes with an API. Docker provides an API for interacting with the Docker daemon (called the Docker Engine API), as well as SDKs for Go and Python. The SDKs allow you to build and scale Docker apps and solutions quickly and easily. If Go or Python don’t work for you, you can use the Docker Engine API directly. The Docker Engine API is a RESTful API accessed by an HTTP client such as wget or curl , or the HTTP library which is part of most modern programming languages.
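
As a quick illustration, the Docker Engine API can be queried over the local Unix socket with curl (a sketch; the daemon must be running and your user needs access to the socket):

# Ask the daemon for its version information
curl --unix-socket /var/run/docker.sock http://localhost/version

# List running containers, the API equivalent of 'docker ps'
curl --unix-socket /var/run/docker.sock http://localhost/containers/json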

Types of APIs

There are four main types of APIs:

Open APIs: Also known as Public API, there are no restrictions to access these types of APIs because they are publicly available.
Partner APIs: A developer needs specific rights or licenses in order to access this type of API because they are not available to the public.
Internal APIs: Also known as Private APIs, only internal systems expose this type of API. These are usually designed for internal use within a company. The company uses this type of API among the different internal teams to be able to improve its products and services.
Composite APIs: This type of API combines different data and service APIs. It is a sequence of tasks that run synchronously as a result of the execution, and not at the request of a task. Its main uses are to speed up the process of execution and improve the performance of the listeners in the web interfaces.

API architecture types:

APIs can vary by architecture type but are generally used for one of three purposes:

System APIs access and maintain data. These types of APIs are responsible for managing all of the configurations within a system. To use an example, a system API unlocks data from a company’s billing database.

Process APIs take the data accessed with system APIs and synthesize it to create a new way to view or act on data across systems. To continue the example, a process API would take the billing information and combine it with inventory information and other data to fulfill an order.

Experience APIs add context to system and process APIs. These types of APIs make the information collected by system and process APIs understandable to a specified audience. Following the same example, an experience API could translate the data from the process and system APIs into an order status tracker that displays information about when the order was placed and when the customer should expect to receive it.

Apart from the main web APIs, there are also web service APIs:

The following are the most common types of web service APIs:

SOAP (Simple Object Access Protocol): This is a protocol that uses XML as the format to transfer data. Its main function is to define the structure of the messages and methods of communication. It also uses WSDL, or Web Services Description Language, a machine-readable document, to publish a definition of its interface.

XML-RPC: This is a protocol that uses a specific XML format to transfer data, compared to SOAP, which uses a proprietary XML format. It is also older than SOAP. XML-RPC uses minimal bandwidth and is much simpler than SOAP. Example: the YUM command in Linux uses XML-RPC calls.

JSON-RPC: Based on JavaScript Object Notation, this protocol is similar to XML-RPC, but instead of using an XML format to transfer data it uses JSON.

REST (Representational State Transfer): REST is not a protocol like the other web services; instead, it is a set of architectural principles. A REST service needs to have certain characteristics, including a simple interface in which resources are easily identified within the request and manipulated via the interface.

| SOAP | REST |
| --- | --- |
| It has strict rules and advanced security to follow. | There are loose guidelines to follow, allowing developers to make recommendations easily. |
| It is driven by function. | It is driven by data. |
| It requires more bandwidth. | It requires minimum bandwidth. |
SOAP vs REST
| JSON | XML |
| --- | --- |
| Supports only text and numbers. | Supports various types of data, for example text, numbers, images, graphs, charts etc. |
| Focuses mainly on data. | Focuses mainly on the document. |
| It has low security. | It has more security. |
JSON vs XML

The web service APIs honor all the HTTP methods: POST, GET, PUT, PATCH, DELETE. If we compare these with CRUD operations:

| HTTP Method | CRUD |
| --- | --- |
| POST | Create |
| GET | Read |
| PUT | Update/Replace |
| PATCH | Update/Modify |
| DELETE | Delete |
HTTP methods vs CRUD operations

POST – The POST verb is most often utilized to create new resources. On success it returns HTTP status 201, along with a Location header with a link to the newly created resource. POST is neither safe nor idempotent.

GET – The HTTP GET method is used to read (or retrieve) a representation of a resource. GET returns a representation in XML or JSON and an HTTP response code of 200 (OK). GET is idempotent.

PUT – PUT is most often utilized for update capabilities. PUT is not a safe operation, in that it modifies (or creates) state on the server, but it is idempotent.

PATCH – PATCH is used for modify capabilities. The PATCH request only needs to contain the changes to the resource, not the complete resource. PATCH is neither safe nor idempotent by default, although a PATCH request can be issued in such a way as to be idempotent.

DELETE – DELETE is pretty easy to understand. It is used to delete a resource identified by a URI. There is a caveat about DELETE idempotence: calling DELETE on a resource a second time will often return a 404 (NOT FOUND), since it was already removed and therefore is no longer available.

We will now try creating a RESTful API with Golang. Those who haven’t tried their hands at Golang can click here to follow the basics of Golang.

We start with the Go program which creates the API. The source code for this is available in this Git repository. A discussion of the implementation of the API is out of scope for this blog post; we can cover that in another post.

Here we concentrate only on the API and the requests and responses we send to and receive from the API endpoint. So, let’s start our dummy API interface.

executing the code

Now we can check whether our API is accessible and giving responses to our requests. We do it on the command line with a simple curl request. Our application is listening on port 8001, which can be changed in the code as you wish. We first hit the endpoint ‘/’.

curl request

Next we hit the endpoint /events to get all the events in the dummy database, created with a slice and struct in the main.go file.

curl request to get all events
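
If you’re following along without the screenshots, the two requests look roughly like this (the app listens on port 8001, as set in the code):

curl http://localhost:8001/
curl http://localhost:8001/events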

Now we will simulate the requests with RESTer, an open-source Firefox extension (also available for Chrome). To hit the endpoint “/events” we use the GET method; the 200 response states that the request was successful.

GET method

In this simulation, we use the POST method to create another event in the dummy database inside the application. Response code 201 indicates successful creation of the event.

POST method to create new event
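
A sketch of the equivalent POST from the command line (the JSON field names are illustrative; match whatever the event struct in main.go actually defines):

curl -X POST http://localhost:8001/events \
  -H "Content-Type: application/json" \
  -d '{"id": "2", "title": "New event", "description": "Created via the API"}'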

Now we can hit the endpoint “/events/{id}”, which retrieves a single event with a GET method; “/events/2” will display the newly added event.

GET method on /events/2

We will now hit the “/events” endpoint again and see whether both events are in the response.

GET method on /events endpoint

In the next simulation we use the PATCH method to modify/update an existing event. In this case we hit the endpoint “/events/2” to modify event id 2.

PATCH method to modify event id 2
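
The same update can be issued with curl (again, the field name is illustrative):

curl -X PATCH http://localhost:8001/events/2 \
  -H "Content-Type: application/json" \
  -d '{"title": "Updated event title"}'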

GET the results from the endpoint “/events” to verify our PATCH request.

GET method to verify PATCH request

In the final simulation we hit the endpoint “/events/1” with the DELETE method, so that event id 1 is removed/deleted.

DELETE Method to delete event id 1

Voila..!! We just created an API and tested it with dummy data. I believe this gave a quick overview of what an API is and how we can use different HTTP methods to retrieve or modify data using an API.

Site Reliability: SLI, SLO & SLA

Service Level Indicator (SLI), Service Level Objective (SLO) and Service Level Agreement (SLA) are parameters with which the reliability, availability and performance of a service are measured. The SLA, SLO and SLI are related, though distinct, concepts.

It’s easy to get lost in a fog of acronyms, so before we dig in, here is a quick and easy definition:

  • SLA or Service Level Agreement is a contract in which the service provider promises customers a certain level of service availability, performance, etc.
  • SLO or Service Level Objective is a goal the service provider wants to reach.
  • SLI or Service Level Indicator is a measurement the service provider uses to track progress toward that goal.

Service Level Indicator
SLIs are the parameters that indicate the successful transactions and requests served by the service over predefined intervals of time. These parameters allow us to measure the much-required performance and availability of the service. Measuring them also enables us to improve the service gradually.

Key Examples are:

  • Availability/Uptime of the service.
  • Number of successful transactions/requests.
  • Consistency and durability of the data.

Service Level Objective
The SLO defines the acceptable downtime of the service. For multiple components of the service, there can be different parameters defining the acceptable downtime. It is a common pattern to start with a low SLO and gradually tighten it.

Key Examples are:

  • Durability of disks should be 99.9%.
  • Availability of service should be 99.95%
  • Service should successfully serve 99.999% requests/transactions.

Service Level Agreement
The SLA defines the penalty the service provider must pay in the event of service unavailability for a predefined period of time. The service provider should clearly define the failure factors for which they will be accountable (their domain of responsibility). It is a common pattern to have a looser SLA than SLO, for instance an SLA of 99% and an SLO of 99.5%. If the service is more available than promised, the gap between SLA/SLO and actual availability can be used as an error budget to deploy complex releases to production.

Key Examples of Penalty are:

  • Partial refund of service subscription fee.
  • Additional subscription time added for free.

So here is the relationship: the service provider collects metrics based on SLIs, defines metric thresholds based on SLOs, and monitors those thresholds so that the service won’t break the SLA. In practice, the SLIs are the metrics in the monitoring system, the SLOs are alerting rules, and the SLAs are the values of the monitoring metrics applied to the SLOs.

Usually the SLO and the SLA are similar, with the SLO being tighter than the SLA. SLOs are generally internal-only, while SLAs are external. If service availability violates the SLO, operations need to react quickly to avoid breaking the SLA; otherwise, the company might need to refund some money to customers.

The SLA, SLO and SLI are based on the assumption that the service will not be 100% available. Instead, we guarantee that the system will be available more than a certain amount, for example 99.5%.
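
As a worked example, a 99.5% availability target over a 30-day month leaves an error budget of 0.5%: 30 days × 24 hours × 60 minutes × 0.005 ≈ 216 minutes, i.e. roughly 3.6 hours of tolerable downtime per month. A tighter 99.95% SLO shrinks that budget to about 21.6 minutes.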

When we apply these definitions to availability, for example: SLIs are the key measurements of the availability of a system; SLOs are goals we set for how much availability we expect out of a system; and SLAs are the legal contracts that explain what happens if our system doesn’t meet its SLO.

SLIs exist to help engineering teams make better decisions. Your SLO performance is critical information to have when you’re making decisions about how hard and fast you can push your systems. SLOs are also important data points for other engineers when they’re making assumptions about their dependencies on your service or system. Lastly, your larger organization should use your SLIs and SLOs to make informed decisions about investment levels and about balancing reliability work against engineering velocity.

Note: this abstract is taken from SRE Fundamentals, CRE and the book Site Reliability Engineering: How Google Runs Production Systems.

Unikernel: Another paradigm for the cloud

In this cloud era it is hard to imagine a world without access to services in the cloud. From contacting someone through mail to storing work-related documents on an online drive and accessing them across devices, there are a lot of services we use on a daily basis that live in the cloud.

To reduce the cost of compute power, virtualization has been adopted to offer more services with less hardware. Then came the concept of containers, where you deploy the application in isolated containers with lightweight images that contain only the binaries and libraries needed to run your application. But we still need underlying VMs to deploy such solutions, and all these VMs come with a cost. While large data centers offer services in the cloud, they are also hungry for electric power, which is a growing concern as our planet is drained of its resources. What we need now are less power-hungry solutions.

What if, instead of virtualizing an entire operating system, you were to load an application with only the required components from the operating system, effectively reducing the size of the virtual machine to its bare-minimum resource footprint? This is where unikernels come into play.

Unikernel

Unikernel is a relatively new concept that was first introduced around 2013 by Anil Madhavapeddy in a paper titled “Unikernels: Library Operating Systems for the Cloud” (Madhavapeddy, et al., 2013).

You can find more details on unikernels by searching the scholarly articles on Google.

Unikernels are defined by the community at Unikernel.org as follows.

“Unikernels are specialized, single-address-space machine images constructed by using library operating systems.”

For more detailed reading about the concepts behind Unikernel, please follow this link,

A unikernel is an application that has been boiled down to a small, secure, lightweight virtual machine, eliminating the general-purpose operating system such as Linux or Windows. Unikernels aim to be a much more secure system than Linux. They do this through several thrusts: not having the notion of users, running a single process per VM, and limiting the amount of code incorporated into each VM. This means there are no users and no shell to log in to and, more importantly, you can’t run more than the one program you want to run inside. Despite their relatively young age, unikernels borrow from age-old concepts rooted in the dawn of the computer era: microkernels and library operating systems. A unikernel holds a single application. Single address space means that, at its core, the unikernel does not have separate user and kernel address spaces. Library operating systems are the core of unikernel systems. Unikernels are provisioned directly on the hypervisor without a traditional system like Linux, so a server can run vastly more VMs (claims go as high as 1000x).

You can have a look here for details about microkernel, monolithic and library operating systems.

Virtual Machines VS Linux Containers VS Unikernel

Virtualization of services can be implemented in various ways. One of the most widespread methods today is the virtual machine, hosted on hypervisors such as VMware’s ESXi or the Linux Foundation’s Xen Project.

Hypervisors allow hosting multiple guest operating systems on a single physical machine; these guests are executed in what are called virtual machines. The widespread use of hypervisors is due to their ability to better distribute and optimize the workload on physical servers, as opposed to legacy infrastructures of one operating system per physical server.

Containers are another method of virtualization, which differs from hypervisors by creating virtualized environments that share the host’s kernel. This provides a lighter approach than hypervisors, which require each guest to have its own copy of the operating system kernel, making a hypervisor-virtualized environment resource-heavy in contrast to containers, which share parts of the existing operating system.

As mentioned above, unikernels leverage the abstraction of hypervisors in addition to using library operating systems to include only the required kernel routines alongside the application, presenting the lightest of all three solutions.

The figure above shows the major difference between the three virtualization technologies. Here we can clearly see that virtual machines present a much larger load on the infrastructure as opposed to containers and unikernels.

Additionally, unikernels are in direct “competition” with containers. By providing services in the form of reduced virtual machines, unikernels improve on the container model with increased security. By sharing the host kernel, containerized applications share the same vulnerabilities as the host operating system. Furthermore, containers do not possess the same level of host/guest isolation as hypervisors/virtual machines, potentially making container breaches more damaging than with either virtual machines or unikernels.

Virtual Machines
Pros:
  • Allows deploying different operating systems on a single host
  • Complete isolation from host
  • Orchestration solutions available
Cons:
  • Requires compute power proportional to number of instances
  • Requires large infrastructures
  • Each instance loads an entire operating system

Linux Containers
Pros:
  • Lightweight virtualization
  • Fast boot times
  • Orchestration solutions
  • Dynamic resource allocation
Cons:
  • Reduced isolation between host and guest due to shared kernel
  • Less flexible (i.e. dependent on host kernel)
  • Network is less flexible

Unikernels
Pros:
  • Lightweight images
  • Specialized application
  • Complete isolation from host
  • Higher security against absent functionalities (e.g. remote command execution)
Cons:
  • Not mature enough yet for production
  • Requires developing applications from the ground up
  • Limited deployment possibilities
  • Lack of complete IDE support
  • Static resource allocation
  • Lack of orchestration tools

A comparison of solutions

Docker, containerization technology and container orchestrators like Kubernetes and OpenShift are two steps forward for the world of DevOps, and the principles they promote are forward-thinking and largely on target for a more secure, performance-oriented and easy-to-manage cloud future. However, an alternative approach leveraging unikernels and immutable servers will result in smaller, easier-to-manage, more secure workloads that will be simpler for existing enterprises to adopt. As DevOps matures, the shortcomings of cloud application deployment and management are becoming clear: virtual machine image bloat, large attack surfaces, legacy executables, base-OS fragmentation, and an unclear division of responsibilities between development and IT for cloud deployments are all causing significant friction (and opportunities for the future).

For example: it remains virtually impossible to create a Ruby or Python web server virtual machine image that DOESN’T include build tools (gcc), ssh, and multiple latent shell executables. All of these components are detrimental to production systems, as they increase image size, attack surface and maintenance overhead.

Compared to VMs running operating systems like Windows and Linux, a unikernel has only a tenth of 1% of the attack surface. In the case of a unikernel, sysdig, tcpdump and mysql-client are not installed, and you can’t just “apt-get install” them either; you have to bring them with your exploit. To take it further, even a simple cat /etc/hosts or grep of /var/log/nginx/access.log simply won’t work, since once again these are separate processes. So unikernels are highly resistant to remote code execution attacks, more specifically shell-code exploits.

Immutable Servers & Unikernels

Immutable servers are a deployment model that mandates that no application updates, security patches or configuration changes happen on production systems. If any of these layers needs to be modified, a new image is constructed, pushed and cycled into production. Heroku is a great example of immutable servers in action: every change to your application requires a ‘git push’ to overwrite the existing version. The advantages of this approach include higher confidence in the code running in production, integration of testing into deployment workflows, and ease of verifying that systems have not been compromised.

Once you become a believer in the concept of immutable servers, speed of deployment and minimizing vulnerability surface area become the objectives. Containers promote the idea of single-service-per-container (microservices), and unikernels take this idea even further.

Unikernels allow you to compile and link your application code all the way down to, and including, the operating system. For example, if your application doesn’t require persistent disk access, no device drivers or OS facilities for disk access would even be included in the final production image. Since unikernels are designed to run on hypervisors such as Xen, they only need interfaces to standardized resources such as networking and persistence. Device drivers for thousands of displays, disks and network cards are completely unnecessary. Production systems become minimalist, requiring only the application code, the runtime environment, and the OS facilities the application needs. The net effect is smaller VM images with less surface area that can be deployed faster and maintained more easily.

Traditional operating systems (Linux, Windows) will become extinct on servers. They will be replaced by single-user, bare-metal hypervisors optimized for the specific hardware, taking decades of multi-user, hardware-agnostic code cruft with them. A more mature build-deploy-manage tool set based on these technologies will be truly game-changing for hosted and enterprise clouds alike.

| Unikernel | Language | Targets | Functions |
| --- | --- | --- | --- |
| ClickOS | C++ | Xen | Network Function Virtualization |
| HalVM | Haskell | Xen | |
| IncludeOS | C++ | KVM, VirtualBox, ESXi, Google Cloud, OpenStack | Orchestration tool available |
| MirageOS | OCaml | KVM, Xen, RTOS/MCU | |
| Nanos Unikernel | C, C++, Go, Java, Node.js, Python, Rust, Ruby, PHP, etc. | QEMU/KVM | Orchestration tool available |
| OSv | Java, C, C++, Node, Ruby | VirtualBox, ESXi, KVM, Amazon EC2, Google Cloud | Cloud and IoT (ARM) |
| Rumprun | C, C++, Erlang, Go, Java, JavaScript, Node.js, Python, Ruby, Rust | Xen, KVM | |
| Unik | Go, Node.js, Java, C, C++, Python, OCaml | VirtualBox, ESXi, KVM, Xen, Amazon EC2, Google Cloud, OpenStack, PhotonController | Unikernel compiler toolbox with orchestration possible through Kubernetes and Cloud Foundry |
| ToroKernel | FreePascal | VirtualBox, KVM, Xen, HyperV | Unikernel dedicated to running microservices |
Comparing a few unikernel solutions from active projects

Out of the various existing projects, some stand out due to their wide range of supported languages. For the active projects, the table above describes the languages they support, the hypervisors they can run on, and remarks concerning their functionality.

I’m currently experimenting with unikernels on AWS and Google Cloud Platform and will update you with another post on that soon.

Source: Medium, github, containerjournal, linuxjournal

Bare-Metal K8s Cluster with Raspberry Pi – Part 3

This is a continuation of the post series Bare-metal K8s cluster with Raspberry Pi; Part 1 & Part 2 are here.

Another option for running a bare-metal K8s cluster on the Raspberry Pi that I tried and tested was MicroK8s, which we discuss in this post.

MicroK8s is lightweight upstream K8s: the smallest, simplest, pure production Kubernetes, for clusters, laptops, IoT and edge, on Intel and ARM.

MicroK8s is a CNCF-certified upstream Kubernetes deployment that runs entirely on your workstation or edge device. Being a snap, it runs all Kubernetes services natively (i.e. no virtual machines) while packing the entire set of libraries and binaries needed. Installation is limited only by how fast you can download a couple of hundred megabytes, and removing MicroK8s leaves nothing behind.

And to give some context on snaps: snaps are app packages for desktop, cloud and IoT that are easy to install, secure, cross-platform and dependency-free. Snaps are discoverable and installable from the Snap Store, the app store for Linux with an audience of millions.

A snap is a bundle of an app and its dependencies that works without modification across Linux distributions.

We are going to use the same components list as described in the Part 1 of this series.

Each Pi is going to need an Ubuntu server image and you’ll need to be able to SSH into them. Following the link here will help us reach this stage.

Kubernetes Cluster Preparation with SSH connection to the Pi from your terminal

Installing MicroK8s
Follow this section for each of your Pis. Once completed you will have MicroK8s installed and running everywhere.

SSH to your first Pi and install the MicroK8s snap:

sudo snap install microk8s --classic

As MicroK8s is a snap, it will be automatically updated to newer releases of the package, which follow upstream Kubernetes releases closely, so we don’t need to worry about the K8s version we’re installing. If you want to pin to a specific release, install from a channel:

sudo snap install microk8s --classic --channel=1.15/stable
Channels are made up of a track (or series) and an expected level of stability, based on MicroK8s releases (Stable, Candidate, Beta, Edge). For more information about which releases are available, run:

snap info microk8s

Cheat Sheet for MicroK8s
Before going further here is a quick intro to the MicroK8s command line:

  • microk8s.start – start all enabled Kubernetes services
  • microk8s.inspect – status of services
  • microk8s.stop – stop all Kubernetes services
  • microk8s.enable dns – enable a Kubernetes add-on, e.g. “kubedns”
  • microk8s.kubectl cluster-info – status of the cluster

MicroK8s is easy to use and comes with plenty of Kubernetes add-ons you can enable or disable.
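
Putting the cheat sheet together, a typical first session on a fresh node looks something like this (a sketch):

# Wait until the node reports ready
microk8s.status --wait-ready

# Enable the DNS add-on and confirm the node is up
microk8s.enable dns
microk8s.kubectl get nodes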

Master node and leaf nodes
Now that you have MicroK8s installed on all boards, pick one to be the master node of your cluster.

On the chosen Master node, run the following command:

sudo microk8s.add-node
This command will generate a connection string in the form <master_ip>:<port>/<token>.

Adding a node
Now, you need to run the join command from another Pi you want to add to the cluster:

microk8s.join 10.55.60.14:25000/JHpbBYMIevZSAMnmjMHmFwanrOYCWZLu
You should be able to see the new node in a few seconds on the master with the following command:

microk8s.kubectl get node

For each new node, you need to run the microk8s.add-node command on the master, copy the output, then run microk8s.join on the leaf.

Removing nodes
To remove a node, run the following command on the master:

sudo microk8s remove-node <node name>
The names of the nodes are available on the master by running microk8s.kubectl get node

Alternatively, you can leave the cluster from a leaf node by running:

sudo microk8s.leave

Once the Pis are set up with MicroK8s, adding and removing nodes is easy and you can scale up or down as you go.

Voila..!!
If you followed this series, you are now in control of your Kubernetes cluster: one with native Kubernetes and Docker, and the other with the much easier to install and manage MicroK8s.

This completes the three-part series on K8s on the Raspberry Pi. In a follow-up blog post we can see how to use kubectl and Helm charts to deploy an Nginx service, Prometheus and a few other services from the DevOps tool set to the cluster.

Bare-Metal K8s Cluster with Raspberry Pi – Part 2

This is a continuation of the post series Bare-metal K8s cluster with Raspberry Pi.

With 1 master node and 3 worker nodes set up, we continue by installing Kubernetes.

Install Kubernetes

We are using version 1.15.3. There shouldn’t be any errors; however, during my installation the repos were down and I had to retry a few times.

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | \
sudo apt-key add - && echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | \
sudo tee /etc/apt/sources.list.d/kubernetes.list && sudo apt-get update -q

sudo apt-get install -qy kubelet=1.15.3-00 kubectl=1.15.3-00 kubeadm=1.15.3-00

Repeat steps for all of the Raspberry Pis.

Kubernetes Master Node Configuration
Note: You only need to do this for the master node (in this deployment I recommend only 1 master node). Each Raspberry Pi is a node.

Initiate Master Node

sudo kubeadm init
Enable Connections to Port 8080
Without this Kubernetes services won’t work

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Add Container Network Interface (CNI)
I’ve chosen to use Weave Net, however you can get others working, such as Flannel (I’ve verified this works with this cluster).
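
The post doesn’t show the install command; at the time of writing, Weave Net’s documented one-liner looked like the following (run it on the master; check the Weave Net docs for the current URL):

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"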

Get Join Command
Run the following command on the master; its output will be used in the next section to join the worker nodes to the cluster:

kubeadm token create --print-join-command

It will return something like:

kubeadm join 192.168.0.101:6443 --token X.Y --discovery-token-ca-cert-hash sha256:XYZ

Kubernetes Worker Node Configuration
Note: You only need to do this for the worker nodes (in this deployment I recommend 3 worker nodes).

Join Cluster
Use the join command provided at the end of the previous section
sudo kubeadm join 192.168.0.101:6443 --token X.Y --discovery-token-ca-cert-hash sha256:XYZ

Verify Node Added Successfully (SSH on Master Node)
The node should have status Ready after ~30 seconds:
kubectl get nodes
You can also check the health of the control-plane components with kubectl get componentstatuses.

Another option for running a bare-metal K8s cluster on the Raspberry Pi that I tried and tested is MicroK8s, which is covered in Part 3 of this series.

A sneak peek into the K8s cluster on the Raspberry Pi

Bare-Metal K8s Cluster with Raspberry Pi – Part 1

There are multiple ways we can use a Kubernetes cluster to deploy our applications. Most of us opt to use a Kubernetes service from a public cloud provider; GKE, EKS and AKS are the most prominent ones. Deploying a Kubernetes cluster on a public cloud provider is relatively easy, but what if you want a private bare-metal K8s cluster? Having worked extensively in data centers and started my career as a sysadmin, I personally prefer a piece of tangible hardware to get the feel of building it. This blog post walks you through the steps I took to get a bare-metal K8s cluster to play with.

K8s is an open-source container orchestration platform that helps manage distributed, containerized applications at massive scale. Born at Google out of its internal Borg system, version 1.0 was released in July 2015. It has continued to evolve and mature, and is now offered as a PaaS by all of the major cloud vendors.

Google has been running containerized workloads in production for more than a decade. Whether it’s service jobs like web front-ends and stateful servers, infrastructure systems like Bigtable and Spanner, or batch frameworks like MapReduce and Millwheel, virtually everything at Google runs as a container.

You can find the paper here.

Kubernetes traces its lineage directly from Borg. Many of the developers at Google working on Kubernetes were formerly developers on the Borg project. We’ve incorporated the best ideas from Borg in Kubernetes, and have tried to address some pain points that users identified with Borg over the years.

More than just enabling a containerized application to scale, Kubernetes has release-management features that enable updates with near-zero downtime, version rollback, and clusters that can ‘self-heal’ when there is a problem. Load balancing, auto-scaling and SSL can easily be implemented. Helm, a package manager for Kubernetes, has revolutionized the world of server management by making multi-node data stores like Redis and MongoDB incredibly easy to deploy. Kubernetes gives you the flexibility to move your workload where it is best suited. This complements the hybrid cloud story, and in my career it has become more apparent that my customers see this too, helping them resolve issues like cost, availability and compliance. In parallel, software vendors are starting to embrace containers as a standard deployment model, leading to a recent increase in requests for container solutions.

As you can see in the workflow comparison below, there is greater room for error when deploying on-premises. Public clouds provide automation and reduce the risk of error, as fewer steps are required. But as mentioned above, a private cloud gives you more options when you have unique requirements.

Pros:

  • Using Kubernetes and its huge ecosystem can improve your productivity
  • Kubernetes and a cloud-native tech stack attracts talent
  • Kubernetes is a future proof solution
  • Kubernetes helps to make your application run more stable
  • Kubernetes can be cheaper than its alternatives

Cons:

  • Kubernetes can be an overkill for simple applications
  • Kubernetes is very complex and can reduce productivity
  • The transition to Kubernetes can be cumbersome
  • Kubernetes can be more expensive than its alternatives

Pre-requisites:

Compute:

3 x Raspberry Pi 4 Model B with 2 GB RAM
1 x Raspberry Pi 3 Model B+ with 1 GB RAM

Storage:

4 x 16 GB high-speed SanDisk micro-SD cards

Network:

1 x network switch – local LAN for k8s internal connectivity
1 x network router – for WiFi (my default ISP router was used here); only the master node had internet connectivity once the setup was complete
4 x Ethernet cables
1 x keyboard, HDMI cable, mouse (for initial setup only)

Initial Raspberry Pi Configuration:

Flash Raspbian to the Micro-SD Cards

Download image from the below link,

Raspbian OS

I have used BalenaEtcher to flash image onto micro-SD card

Perform the initial setup on first boot; on the startup screen we need a keyboard, monitor and mouse connected.

Choose Country, Language, Timezone
Define new password for user ‘pi’
Connect to WiFi or skip if using ethernet
Skip update software (We will perform this activity manually later).
Choose restart later

Configure Additional Settings Click the Raspberry Pi icon (top left of screen) > Preferences > Raspberry Pi Configuration

System

Configure Hostname
Boot: To CLI

Interfaces
SSH: Enable

Choose restart later

Configure Static Network Perform one of the following:
Define Static IP on Raspberry Pi: Right Click the arrow logo top right of screen and select ‘Wireless & Wired Network Settings’

Define Static IP on DHCP Server: Configure your DHCP server to define a static IP on the Raspberry Pi Mac Address.

Reboot and Test SSH
Username: pi

Password: Defined in step 2 above

From the terminal ssh pi@[IP Address]

Repeat steps for all of the Raspberry Pis.

Kubernetes cluster preparation starts with an SSH connection to the Pi from your terminal. I am using a 12-year-old Lenovo laptop running MX Linux. Open a terminal and establish an SSH connection to the Pi.

Perform Updates

apt-get update: Updates the package indexes
apt-get upgrade: Performs the upgrades

Configure the net.ipv4 settings: edit the file with sudo vi /etc/sysctl.conf, uncomment net.ipv4.ip_forward = 1 and add net.ipv4.ip_nonlocal_bind=1.
Note: This is required to allow traffic forwarding, for example NodePorts from containers to/from non-cluster devices.
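
To apply these settings without a reboot, reload the file (a small addition to the original steps; sysctl -p re-reads /etc/sysctl.conf):

sudo sysctl -p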

Install Docker

curl -sSL get.docker.com | sh

Grant privilege for user pi to execute docker commands

sudo usermod -aG docker pi

Disable Swap

sudo systemctl disable dphys-swapfile.service
sudo reboot

We can verify this with the top command; in the summary area, the value next to MiB Swap should be 0.0.

Now that we’ve completed the initial steps to create our K8s bare-metal cluster, we’ll see how to build the cluster in Part 2 of this blog post.

Linux Inside Win 10

I am a zealous fan of Linux and FOSS. I have been using Linux and its TUI with the bash shell for more than seventeen years. When I moved to my new role I found it a bit difficult to have a Windows 10 laptop, and I was literally fumbling with PowerShell and the command line when working with tools like terraform, git etc. But luckily I figured out a solution for old-school *NIX users like me who are forced to use a Windows laptop, and that solution is WSL.

Windows Subsystem for Linux, a.k.a. WSL, is an environment in Windows 10 for running unmodified Linux binaries in a way similar to Linux containers. Please go through my earlier post on Linux containers for more details. When first introduced, WSL ran Linux binaries by implementing a Linux API compatibility layer partly in the Windows kernel. The second iteration, WSL 2, uses the Linux kernel itself in a lightweight VM to provide better compatibility with native Linux installations.

To use WSL in Windows 10, you have to enable the WSL feature in the Windows optional features. Being an aficionado of the command line rather than the GUI, I’ll now list the step-by-step commands to enable WSL, install your favourite Linux distribution, and use it.

  1. Open PowerShell as administrator and execute the command below: Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux
    Note: Restart is required when prompted
  2. Invoke-WebRequest -Uri https://aka.ms/wsl-ubuntu-1804 -OutFile Ubuntu.appx -UseBasicParsing

In the above command you can use any of the Linux distros available for WSL. For example, if you want to use Kali Linux instead of Ubuntu, you can change the distro URL in the step 2 command to “https://aka.ms/wsl-kali-linux”. Please refer to this guide for all available distros.

Once the download is completed, you need to add that package to the Windows 10 application list which can be done using the command below.

  1. Add-AppxPackage .\Ubuntu.appx
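
Optionally, you can confirm the distro registered correctly from PowerShell (wsl.exe ships with recent Windows 10 builds; older builds use wslconfig /l instead):

wsl --list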

Once all these steps are completed, type Ubuntu in the Windows 10 search (the magnifying glass in the bottom left corner near the Windows logo) and you’ll see your Ubuntu entry (search for whichever WSL distro you added). To complete the initialization of your newly installed distro, launch a new instance, which is Ubuntu in our case, by selecting and running Ubuntu from the search as seen in the screenshot below.

This will start the installation of your chosen Linux distro’s binaries and libraries. It will take some time to complete the installation and configuration (approximately 5 to 10 minutes depending on your laptop/desktop configuration).

Once installation is complete, you will be prompted to create a new user account (and its password).

Note: You can choose any username and password you wish – they have no bearing on your Windows username.

If everything goes well, we’ll have Ubuntu installed as a subsystem. When you open a new distro instance, you won’t be prompted for your password, but if you elevate your privileges using sudo, you will need to enter it.

The next step is updating your package catalog and upgrading your installed packages with apt, the Ubuntu package manager. To do so, execute the command below at the prompt.

$ sudo apt update && sudo apt upgrade

Now we will proceed to install Git.

$ sudo apt install git

$ git --version

Finally, test the git installation with the above command. You can also install other tools and packages available in the repository. I have installed git, terraform, aws-cli, azure-cli and ansible; you can install Python, Ruby or Go programming environments as well. Python pip and Ruby gem installations are also supported. You can use this subsystem as an alternative for your day-to-day Linux operations, and as an alternative terminal to PowerShell if you are using Sublime Text, Atom or Visual Studio Code.