Becoming an SRE

This is a continuation of an earlier post about SRE. In that post we saw what an SRE is and the key skills required to become one. Building on that, in this post we'll look at the path to becoming an SRE.

Cloud

  • AWS (recommended)
  • Azure
  • Google Cloud

Operating Systems

  • Linux (recommended)
  • Windows

Programming

  • Python (recommended)
  • Golang (recommended)
  • NodeJS

IaC – Infrastructure as Code

  • Terraform (recommended)
  • Container Orchestration (recommended)
  • Configuration Management

CI & CD Tools

  • Jenkins (recommended)
  • Git & GitHub (recommended)
  • GitLab
  • Circle CI
  • Go continuous delivery
  • Bamboo

Continuous Monitoring

  • Prometheus (recommended)
  • AppDynamics (recommended)
  • Nagios
  • Zabbix
  • NewRelic

Networking/Connectivity

  • Protocols
  • Subnet/CIDR (see the quick sketch after this list)
  • Network Components (TGW, VPC, SG etc)
  • APIs (REST, SOAP, XML-RPC)
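
To make the Subnet/CIDR item above a little more concrete, here is a minimal Python sketch using only the standard-library ipaddress module. The 10.0.0.0/16 range and the /24 prefix are just illustrative values, not anything specific to a real environment.

import ipaddress

# Assume a hypothetical VPC CIDR block and carve it into /24 subnets
vpc = ipaddress.ip_network("10.0.0.0/16")
for subnet in list(vpc.subnets(new_prefix=24))[:3]:
    print(subnet, "->", subnet.num_addresses, "addresses")

# Check whether a given host address falls inside the VPC range
print(ipaddress.ip_address("10.0.42.7") in vpc)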

Fun with Linux CLI

I have been using Linux for the past 20 years and I'm still in love with this operating system. My daily driver for official use is a Windows laptop with WSL on it, but my personal laptops (a 15-year-old Lenovo and a newer HP) run MX Linux and Arch respectively. I believe most computer geeks are enthusiastic about Linux distros and open-source software. Everyone has their own reasons for loving Linux; mine are as follows:

  • Linux Is Free – distributions are available for download free of charge.
  • Linux Is Open – the Linux kernel (the heart of the OS), other operating system components, and many user programs are free and open-source, meaning anyone can look at the source code and make changes. As Richard Stallman says, this software is "free as in speech."
  • Linux Command Line – the command line offers the most control over the computer. Many Linux programs, including developer tools, only use the command line. This may repel casual users, but technical users appreciate it.
  • Community Support – a wide choice of support channels, ranging from IRC, web forums, wikis and Discord servers to in-person user groups. For most issues someone has already posted a solution somewhere on the web, thanks to the community spirit Linux seems to inspire in its users.
  • Programming Tools in Abundance – Linux comes with many of the tools developers need to do their jobs: editors, compilers, interpreters, debuggers, you name it, they're often included in the default system. If not, they're only a package manager command away.
  • Rapid Prototyping – thanks to its affinity for scripting languages.
  • Linux Is Customizable – to the core, including desktop environments, window managers, apps etc., and one can even run Linux without a GUI.
  • Linux Runs Everywhere – from x86 to ARM to your network devices and your mobile phone.
  • Interoperability – one of Linux's strengths is its ability to interoperate with other file formats and systems.

Today we're going to discuss the fun side of the Linux command line. If you are bored, you should definitely try these fun commands.

Neofetch: A system utility written in bash to get customizable system info.

Installing neofetch:

sudo apt install neofetch - Debian/Ubuntu & its derivatives
sudo dnf install neofetch - Fedora/RHEL and its derivatives
sudo pacman -S neofetch - Arch/Manjaro and its derivatives

FIGlet: This command-line utility is used to create ASCII-art banners and logos. You might have already seen these on the login banners of some remote servers. It doesn't have a character limit, so you can create ASCII art of any length with this CLI tool.

Installing figlet

sudo apt install figlet - Debian/Ubuntu & its derivatives
sudo dnf install figlet - Fedora/RHEL and its derivatives
sudo pacman -S figlet - Arch/Manjaro and its derivatives

Cowsay: an ASCII-art command-line tool that displays your input as a speech bubble from an ASCII cow.

Installing cowsay

sudo apt install cowsay - Debian/Ubuntu & its derivatives
sudo dnf install cowsay - Fedora/RHEL and its derivatives
sudo pacman -S cowsay - Arch/Manjaro and its derivatives

sl: This Linux command-line utility brings the good old steam locomotive to your terminal. Funny, right? Do try it out.

Installing sl

sudo apt install sl - Debian/Ubuntu & its derivatives
sudo dnf install sl - Fedora/RHEL and its derivatives
sudo pacman -S sl - Arch/Manjaro and its derivatives

xeyes: This is a kind of stress buster that brings a pair of eyes to your desktop. The eyeballs move to follow your mouse pointer's position.

Installing xeyes

sudo apt install x11-apps - Debian/Ubuntu & its derivatives
sudo dnf install xeyes - Fedora/RHEL and its derivatives
sudo pacman -S xorg-xeyes - Arch/Manjaro and its derivatives

aafire: This utility will light up your terminal. The aafire command starts an ASCII fire inside your terminal.

Installing aafire

sudo apt install libaa-bin - Debian/Ubuntu & its derivatives
sudo dnf install aalib - Fedora/RHEL and its derivatives
sudo pacman -S aalib - Arch/Manjaro and its derivatives

rig: This command-line tool helps you rig up some user info. It quickly generates a fake identity that is readable by both apps and users.

Installing rig

sudo apt install rig - Debian/Ubuntu & its derivatives
sudo dnf install rig - Fedora/RHEL and its derivatives
sudo pacman -S rig - Arch/Manjaro and its derivatives

Finally, want to watch movies or play music/MP3 files on the Linux command line? Give these a try; not many CLI tools will let you do this.

mpg123: for playing mp3 files/playlists
cmus: ncurses-based utility for playing mp3 files/playlists
mpv: command-line utility for playing videos

sudo apt install cmus/mpg123 - Debian/Ubuntu & its derivatives
sudo dnf install cmus/mpg123 - Fedora/RHEL and its derivatives
sudo pacman -S cmus/mpg123 - Arch/Manjaro and its derivatives

sudo apt install mpv - Debian/Ubuntu & its derivatives
sudo dnf install mpv - Fedora/RHEL and its derivatives
sudo pacman -S mpv - Arch/Manjaro and its derivatives

Architectural Diagrams Made Easy

A picture is worth a thousand words, and architectural diagrams help convey complex information in a single image.

  • Architectural diagrams show systems. Displaying information visually allows the viewer to see everything at a glance, including how elements interact. This is especially useful when making changes. You’ll be able to see the downstream effects of a given change more clearly.
  • Architectural diagrams also break down complex systems and processes into layers. So, rather than trying to comprehend everything at once, you can zoom in and focus on smaller sub-processes or systems.

One of the main issues software engineers face is consistency. When you’re working on anything that involves multiple people, there’s always a risk of miscommunication and discrepancies between project teams and developers. It’s crucial to standardize information, which is where an architectural diagram becomes helpful.

There are many ways to create architectural diagrams; a Google search will pull up a few thousand results on how to create one. The method we discuss here is a GitHub project called "diagrams" created by Min-Jae Kwon. It is an open-source project released under the MIT license. Diagrams lets you draw cloud system architecture in Python code and was born for prototyping new system architecture designs without any design tools. A Golang fork is also available for those who are familiar with Go.

Diagrams requires Python 3.6 or higher, so check your Python version first.

We can start by updating pip3 itself before we install diagrams:

/usr/bin/python3 -m pip install --upgrade pip

pip3.7 install diagrams

Diagrams uses Graphviz to render the diagram, so you need to install Graphviz to use diagrams.

sudo apt install python-pydot python-pydot-ng graphviz

We now have a working diagrams installation, so let's see how we can use it.

Open any IDE, type in the code below, and save it as a .py file.

from diagrams import Diagram
from diagrams.aws.compute import EC2

with Diagram("Simple Diagram"):
    EC2("web")

You can create a diagram context with the Diagram class; Diagram represents a global diagram context. The first parameter of the Diagram constructor is used for the output filename.

If you run the above script with the command below,

python diagram.py

It will generate an image file with a single EC2 node, saved as simple_diagram.png in your working directory, and open the created image file immediately.

In the example described here I'm using Jupyter Notebook to create diagrams with 'Diagrams'. The prerequisite is to have Jupyter Notebook installed on your system, which runs on localhost:8888. Installation and configuration of Jupyter Notebook is pretty straightforward on Linux and is out of scope for this post; you can do a Google search and follow any documentation/blog post for that. Diagrams can also be rendered directly inside the notebook, like this:

from diagrams import Diagram
from diagrams.aws.compute import EC2

with Diagram("Simple Diagram") as diag:
    EC2("web")
diag

You can specify the output file format with the outformat parameter. The default is png; png, jpg, svg, and pdf are allowed.

from diagrams import Diagram
from diagrams.aws.compute import EC2

with Diagram("Simple Diagram", outformat="jpg"):
    EC2("web")
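
As a slightly bigger sketch of the same idea, nodes can also be wired together and grouped. The snippet below assumes the ELB, EC2 and RDS node classes and the Cluster helper from the diagrams package, following the pattern of the simple examples above; the names and topology are purely illustrative.

from diagrams import Cluster, Diagram
from diagrams.aws.compute import EC2
from diagrams.aws.database import RDS
from diagrams.aws.network import ELB

with Diagram("Web Service", show=False):   # show=False: don't auto-open the image
    lb = ELB("lb")
    with Cluster("web tier"):              # draws a box around the grouped nodes
        web = [EC2("web1"), EC2("web2")]
    db = RDS("users")
    lb >> web >> db                        # >> draws directed edges between nodes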

I'm still exploring this GitHub project. I'll write another post later covering system design and the additional options available with this awesome tool.

For a quick demo, please check the video posted below.

Diagrams as Code

Site Reliability Engineering (SRE)

I'm a bit late to post on this blog in 2022 due to some personal exigencies. With three months of the year already gone, and considering the widespread reach of the term Site Reliability Engineering, I believe the acronym SRE is a good way to start off this year. I'm trying to convey what I've learned about SRE, having been a system admin for more than a decade and an SRE for another half a decade.

According to Ben Treynor Sloss, the person who coined the term and the senior VP overseeing technical operations at Google, SRE is

“what happens when a software engineer is tasked with what used to be called operations.”

In other words, Site Reliability Engineering (SRE) is a discipline that incorporates aspects of software engineering and applies them to infrastructure and operations problems. Summarising this, we can say that an SRE is a professional with a solid background in coding/automation who uses that experience to solve problems in infrastructure and operations.

If you think of DevOps as a philosophy and an approach to working, you can argue that SRE implements some of the philosophy that DevOps describes, and is somewhat closer to a concrete definition of a job or role than, say, "DevOps engineer". So, in a way, we can say:

class SRE implements DevOps;

abstract class DevOps {
  // Reduce organization silos
  abstract reduceOrganizationSilos(): BetterCollaboration;

  // Accept failure as normal
  abstract acceptFailureAsNormal(): ReliabilityGoal;

  // Implement gradual change
  abstract implementGradualChange(): ErrorBudget;

  // Leverage tooling and automation
  abstract leverageAutomation(): LongTermValue;

  // Measure everything
  abstract measureEverything(): BetterObservability;
}

class SRE implements DevOps {
  ...
}

I will explain more about SRE in this blog post, quoting from the introduction of the SRE Book [Site Reliability Engineering: How Google Runs Production Systems], written by Ben Treynor Sloss and edited by Betsy Beyer.

“Hope is not a strategy.”
-Traditional SRE saying
It is a truth universally acknowledged that systems do not run themselves. How, then, should a system — particularly a complex computing system that operates at a large scale — be run?

https://sre.google/sre-book/introduction/

When we say "Hope is not a strategy" we mean: we need to apply best practices instead of just launching software and new features and trusting that they will be successful. We use it to call out anyone who is letting something happen (such as a launch or running a system) without applying the proper principles and best practices. The book clearly lays out the principles, practices and management of Site Reliability Engineering.

A site reliability engineer can be a generalist or a specialist. Depending on the individual's skill set, organizations can engage an SRE in a number of general or specialist roles: educator, SLO guard, infra architect, incident response leader etc. Details about SLA, SLO and SLI can be found in a previous post here. SREs may contribute to the code base of a product or write development policies and procedures as and when needed. Workflows, priorities and day-to-day operations for SREs vary from team to team, but they all share a set of basic responsibilities for the service(s)/product(s)/platform(s) they support and always adhere to the core responsibilities of availability, latency, performance, monitoring, efficiency, change management, emergency response and capacity planning. As defined in the SRE book, Google caps operational work for SREs at 50% of their time; the remainder should be spent on coding and project work. They achieve this by reintegrating developers into on-call rotations, routing excess operational work to the product development team, and even reassigning bugs and tickets to development or engineering managers.

One of the key responsibilities of an SRE is to quantify confidence in the systems they maintain. Confidence can be measured by both past and future reliability: past reliability is captured by analysing historical monitoring data, and future reliability by predictions based on past system behavior. We will discuss the principles, practices and management of Site Reliability Engineering further in later posts, which will follow shortly after this one.
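
As a toy illustration of quantifying that confidence, the sketch below computes availability and the remaining error budget from hypothetical request counts against a 99.9% SLO; all of the numbers are made up for the example.

# Hypothetical monthly numbers for a service with a 99.9% availability SLO
slo = 0.999
total_requests = 10_000_000
failed_requests = 4_200

availability = 1 - failed_requests / total_requests
error_budget = (1 - slo) * total_requests      # failures we can "afford" this month
budget_left = error_budget - failed_requests

print(f"availability: {availability:.5%}")
print(f"error budget: {error_budget:.0f} requests")
print(f"budget left:  {budget_left:.0f} requests")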

An SRE has responsibility for all these areas:

  • General systems uptimes
  • Systems performance
  • Latency
  • Incident and outage management
  • Systems and application monitoring
  • Change management
  • Capacity planning

In a nutshell, the service reliability hierarchy is as follows.

Service Reliability Hierarchy

It's easy to define what site reliability engineers do, but exactly which skills SREs need to perform their jobs is a much more open-ended question. As mentioned earlier, although SRE skills vary widely from team to team depending on multiple factors (types of systems managed, types of reliability challenges faced etc.), modern and aspiring SREs need a core set of standard skills that helps them understand, manage and deploy complex distributed systems at any typical organization today.

Now we can look into the skill sets that an SRE should master:

Coding:

Coding is an essential skill to master for an SRE role. Depending on the role, understanding development and coding can go a long way. As the day-to-day tasks of an SRE include automating processes and dealing with systems, knowing Bash, Python, YAML and Golang can help you in the long run, as the small sketch below suggests.
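
A lot of day-to-day SRE scripting looks like the minimal sketch below: probe an endpoint and record its latency. The URL and the use of the third-party requests library are assumptions for the example, not a prescription.

import time
import requests   # third-party HTTP client, assumed to be installed

URL = "https://example.com/healthz"   # hypothetical health endpoint

start = time.monotonic()
try:
    resp = requests.get(URL, timeout=5)
    latency = time.monotonic() - start
    print(f"{URL} -> {resp.status_code} in {latency:.3f}s")
except requests.RequestException as exc:
    print(f"{URL} check failed: {exc}")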

Version Control Tools:

As an SRE working with code, you'll be using Git or some other kind of version control tool, so it makes sense to learn about version control, mainly distributed version control systems. It's best to have a good understanding of Git and GitHub.

Cloud Computing:

Cloud computing is one of the niche skills that modern SREs can't live without. Around 90% of businesses use the cloud in some form: private, public or hybrid. The reliability of a cloud platform cannot be managed if you don't understand cloud architecture, cloud networking, data storage, observability and so on.

Distributed Computing:

Knowing how distributed computing works and understanding the concept of microservices are both significant advantages for an SRE. You'll be handling large, distributed systems, so having some experience with these topics can really help you progress as an SRE.

Agile & DevOps:

As we mentioned earlier, class SRE implements DevOps. Many would say that SRE is to DevOps what Scrum is to Agile. DevOps is not a role; it is more of a cultural aspect that can't be assigned to a person but should be practised as a team. 'DevOps engineer' is often just a title used to hire system admins. SREs focus more on system availability, observability and scaling. DevOps is a practice of bringing development and operations teams together, whereas Agile is an iterative approach that focuses on collaboration, customer feedback and small rapid releases; DevOps focuses on constant testing and delivery while Agile focuses on constant change. Automation is the key to DevOps, and we need tools to do DevOps. Understanding these toolsets and the aforementioned cultural aspect of DevOps is very much needed to be an SRE.

Operating Systems:

Basically, a good understanding of the operating systems common in most organisations, usually Linux or Windows, will be helpful. In this cloud and DevOps era, most public cloud management tools and the toolsets that are part of DevOps follow the conventions of the Linux CLI. Cloud-native systems like Kubernetes and containers also follow the same CLI principles even if you run them in a Windows environment. So being able to work with Linux or *NIX systems is an essential skill for any SRE, even if you come from a Windows background.

Understanding of Databases:

There are many types of NoSQL databases, and each has pretty specific use cases where it excels; compare and contrast them with relational databases like MySQL. This is an excellent time to dive into understanding what a data model is, why data models are necessary, and how the data model should inform your choice of database and your service architecture.

Cloud Native Applications:

Knowing cloud-native applications is another skill to master as an SRE. You don't have to know them in depth, but some knowledge areas can help you and your organization as you get on the road to becoming a successful SRE: knowing what Docker is, having some idea of how containers work, and understanding how to run a secure application on Kubernetes.

Networking:

In modern distributed environments at scale, networking plays a pivotal role; it is also often blamed when something goes wrong. Even if the organization has dedicated networking engineers and/or a connectivity team, SREs need an in-depth understanding of networking and the different protocols and topologies used in modern system design, to know when the network is the root cause of an incident and how to resolve those issues efficiently and effectively.

Monitoring:

As we mentioned earlier, monitoring is an integral part of the service reliability hierarchy. Monitoring tools make your life easier when you're an SRE: they give you a quick look at your system's performance and the issues it is dealing with. Implementing these tools and getting insights from them is a primary goal of SRE, so that the system experiences as little downtime as possible. Prometheus and Grafana are widely used monitoring solutions, so it makes sense to learn those.
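
On the instrumentation side, the sketch below assumes the prometheus_client Python library and shows the general idea of exposing metrics for Prometheus to scrape; the metric names, the port and the simulated "work" are arbitrary choices for the example.

import random
import time
from prometheus_client import start_http_server, Counter, Histogram

REQUESTS = Counter("demo_requests_total", "Total requests handled")
LATENCY = Histogram("demo_request_latency_seconds", "Request latency in seconds")

if __name__ == "__main__":
    start_http_server(8000)            # metrics exposed at http://localhost:8000/metrics
    while True:
        with LATENCY.time():           # observe how long the simulated work takes
            time.sleep(random.random() / 10)
        REQUESTS.inc()                 # count one handled request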

CI/CD Pipelines:

It's hard to address reliability problems that emerge from the source code or the deployment process if an SRE doesn't have a good understanding of how the CI/CD process works and which tools are used in that area. Even though SREs don't typically develop software, they must know how software is written and deployed, and most organizations today rely on CI/CD pipelines for this. So this is another key skill for SREs.

Security Engineering/Response:

SREs who don't understand security fundamentals are at risk of implementing solutions that are effective from a reliability standpoint but not really secure. Though this is a domain SREs don't own, they require significant skills in this area.

Incident Management:

SREs must know how incident response roles are structured and have to take the lead in organizing the incident response team, communicating with stakeholders and devising the best strategy to ensure rapid and effective incident resolution.

Problem Management:

As we mentioned earlier in the service reliability hierarchy, postmortem/root cause analysis is a must for reliability engineering. Knowing how to run a postmortem and derive an RCA is considered an important skill an SRE should possess.

Communication:

As an SRE, you'll need to report critical incidents that affect applications, and you'll be working with software engineers and others. In all these situations, having effective, well-developed communication skills makes life much easier and ensures there are no miscommunications while reporting incidents. This is also a skill to master if you are on the path to becoming an SRE.

The list of SRE skills could go on indefinitely, but the ones mentioned here are the best skills to have whether you want to transition to an SRE role or excel in your current role as an SRE.

I have worked as a system admin, architect etc., and what I enjoyed most was my tenure as an SRE and SRE lead. If you enjoy working on the backend and want to get closer to your system's performance, reliability, and scalability, then an SRE role might just be perfect for you!

Microsoft Linux CBL-Mariner

This post is the first of a two-part series on CBL-Mariner Linux. If you're interested just in the installation, you can skip this part and switch to Part 2.

It was a well-known fact for many years that Microsoft and Linux didn't get along. Gone are the days when we couldn't even put Microsoft and Linux together in a sentence. Former Microsoft CEO Steve Ballmer (in)famously branded Linux "a cancer that attaches itself in an intellectual property sense to everything it touches" back in 2001. Fast forward to 2021, and news says that Microsoft quietly released CBL-Mariner on its GitHub as open source. Anyone can use it, build it, edit it and reuse it, fulfilling the four essential freedoms of the Free Software Foundation.

Microsoft's stance on open source has also changed over the years, and there have been signs of Microsoft embracing Linux for quite some time. The Windows Subsystem for Linux (WSL), initially released in 2016 and followed by the stable WSL2 release in 2019, is evidence of that. The constant rise of cloud and edge computing has increased the dominance of Linux. Microsoft's surprising acquisition of GitHub back in 2018 was another strategic move towards accepting the OSS/FOSS model.

In this blog, we'll discuss Microsoft's Linux (distro?), CBL-Mariner. IMHO it doesn't quite qualify to be called a distro; we'll see why.

CBL-Mariner is a Linux developed by the Linux System Group at Microsoft, the team behind the WSL compatibility layer. The CBL part of its name stands for Common Base Linux. It is fully open-source & built for powering Microsoft’s Azure Edge services. Although Microsoft stated that it’s an internal distribution for managing their Edge infrastructure, the entire project is available publicly via GitHub. It’s a minimal and lightweight Linux that users can use as a container or container host. CBL-Mariner proves that Microsoft is on the right track when it comes to free software and Linux. The company that once stood steadfast against its open-source rival has seemingly come to terms with the changing reality of the IT industry. Let’s see what the future holds for this new strategy.

“This initiative is part of Microsoft’s increasing investment in a wide range of Linux technologies, such as SONiC, Azure Sphere OS and Windows Subsystem for Linux (WSL). CBL-Mariner is being shared publicly as part of Microsoft’s commitment to Open Source and to contribute back to the Linux community,”

CBL-Mariner GitHub pages

Looking at this release strategically, CBL-Mariner has similarities in concept and philosophy with Amazon Linux, which is used explicitly in AWS; Microsoft may have been thinking in the same direction for Azure. Another aspect to consider is that Red Hat's CoreOS was deprecated in May 2020. CoreOS, also known as Container Linux, is a discontinued open-source lightweight operating system based on the Linux kernel, designed to provide infrastructure for clustered deployments while focusing on automation, ease of application deployment, security, reliability and scalability, and it was predominantly used in Microsoft Azure. This led Microsoft to look for an alternative Linux that they control rather than one from the other enterprise Linux players. Having said that, we'll wait and see how widely CBL-Mariner is adopted.

Whether deployed as a container or a container host, CBL-Mariner consumes limited disk and memory resources. The lightweight characteristics of CBL-Mariner also provides faster boot times and a minimal attack surface. By focusing the features in the core image to just what is needed for our internal cloud customers there are fewer services to load, and fewer attack vectors. When security vulnerabilities arise, CBL-Mariner supports both a package-based update model and an image based update model. Leveraging the common RPM Package Manager system, CBL-Mariner makes the latest security patches and fixes available for download with the goal of fast turn-around times.

Microsoft uses it as the base Linux for containers in the Azure Stack HCI implementation of Azure Kubernetes Service. It’s also used in Azure IoT Edge to run Linux workloads.

Now we'll talk explicitly about CBL-Mariner (what its features are, how to install it, etc.) in part 2 of this series.

Microsoft Linux CBL-Mariner – Part 2

This post is part 2 of the CBL-Mariner blog series. If you've not read Part 1 yet, please do so here.

The CBL-Mariner philosophy is that you only need a small core set of packages (RPM-based) on top of the CBL core to address the needs of cloud and edge computing.

What’s in it?

It shares major components from:

  • VMware's Photon OS project for SPEC files
  • The Fedora Project for Qt, DNF etc.
  • Linux From Scratch for SPEC files and for a simplified installation
  • OpenMamba for SPEC files
  • GNU/FSF core compilers and utilities
  • And finally, it requires Ubuntu 18.04 as the build environment to create CBL-Mariner binaries and to bake an ISO

How to install it?

CBL-Mariner is open source, released under the GNU GPL, LGPL, MIT and Apache licenses.

  • It has its own repo under Microsoft’s GitHub organization.
  • No ISOs or images of Mariner are provided
  • The repo has instructions to build them on Ubuntu 18.04.
  • It can be deployed on VMware, Hyper-V or VirtualBox
  • It doesn't include a desktop, aka a GUI

What does it look like?

  • It's completely command-line based (CUI)
  • Boots very quickly due to its lightweight nature
  • Runs with a very minimal memory footprint
  • The CBL-Mariner package system is RPM-based
  • The package management system uses both dnf and tdnf (tiny DNF)
  • It has two package repositories, base and update
  • Around 3300 packages are available between the two repositories

How does it operate?

Apart from DNF, CBL-Mariner also supports an image-based update mechanism for atomic servicing and rollback using rpm-ostree, an open-source tool based on OSTree to manage bootable, immutable, versioned filesystem trees. The idea behind rpm-ostree is to use a client-server architecture to keep Linux hosts updated and in sync with the latest packages in a reliable manner.

This is not a regular Linux distro of the kind you'd install on hardware or as a virtual machine, load with the necessary applications and start using like Ubuntu, Fedora, Arch etc. Whether you're a professional developer, sysadmin or a mere hobbyist, you can build custom CBL-Mariner images and play around. If you're not faint-hearted and have previous exposure to Makefiles, the make command, RPM builds and general Linux proficiency, it'll be easy for you. The prerequisites listed on its GitHub page roughly include Docker, RPM tools, ISO build tools and Golang, amongst others, to set up an environment on Ubuntu 18.04 (recommended) for building an ISO.

Finally, how do you use it?

CBL-Mariner doesn’t have any ISO image available by default. The GitHub page provides a quick start guide to build an ISO. There is another description which provides details on building a custom CBL-Mariner ISO and/or image.

There are a couple of prerequisites needed to build your ISO. The first thing we require is an Ubuntu 18.04 system; as per the CBL-Mariner GitHub pages, all the requirements are tested and validated on Ubuntu 18.04.

We'll now follow the quick start guide as an initial step; if you're interested in a custom build you can do that afterwards. A quick build takes approximately 20 to 30 minutes, while my custom build took a little over 3 hours.

I've tested this build in AWS on a t2.medium EC2 instance with a 50 GB disk. I bumped it up to 50 GB because I built the quick image as well as the custom image; the custom build requires RPM packages to be rebuilt, and that consumes quite a bit of space in your filesystem.

# Install required dependencies.
sudo apt -y install make tar wget curl rpm qemu-utils genisoimage python-minimal bison gawk parted gcc g++

# Recommended but not required: `pigz` for faster compression operations.
sudo apt -y install pigz

# Install Docker.
$ curl -fsSL https://get.docker.com -o get-docker.sh
$ sudo sh get-docker.sh
$ sudo usermod -aG docker $USER

# I've explicitly installed golang from the tar ball
$ wget https://dl.google.com/go/go1.16.7.linux-amd64.tar.gz
$ tar zxvf go1.16.7.linux-amd64.tar.gz
$ sudo mv go /usr/local            # GOROOT below expects the Go tree at /usr/local/go
$ export GOROOT=/usr/local/go
$ export GOPATH=$HOME/go
$ export PATH=$GOPATH/bin:$GOROOT/bin:$PATH
$ go version                       # to test the installation

For the build to work, we need to create a symbolic link to the go binary in /usr/bin:

$ sudo ln -s /usr/local/go/bin/go /usr/bin/go

Now we can clone the CBL-Mariner repository and start our build.

$ git clone https://github.com/microsoft/CBL-Mariner.git
$ cd CBL-Mariner
$ git checkout 1.0-stable
$ cd toolkit
$ sudo make image REBUILD_TOOLS=y REBUILD_PACKAGES=n CONFIG_FILE=./imageconfigs/full.json

I've faced a lot of issues and errors while building from non-stable branches/tags. If you are comfortable with bleeding-edge releases, you can get your hands dirty with those. Once the build is complete, you can find your image in the out directory.

$ cd ../CBL-Mariner/out/images/full/

$ ls -lh full.1.0.20210813.0520.iso

A quick view of the build I've done is provided in the video below.

CBL-Mariner build

Now that we have the ISO image, we can install CBL-Mariner on a virtual machine. To do this, I'm going to use Oracle VirtualBox, which is free. If you have another virtualization tool, you can use that as well.

Steps to Follow:

  • Open VirtualBox.
  • Click the New button to create a new VM.
  • Now start the virtual machine creation wizard.
  • Enter the name we want.
  • Choose "Linux" and version "Linux 2.6/3.x/4.x (64-bit)", then press Next.
  • Follow the wizard and choose the defaults.
  • For CBL-Mariner we must configure at least 1 CPU, 1GB of RAM, and 8GB of disk.
  • Keep pressing Next until the wizard completes.
  • Now that we are back on the main VirtualBox screen,
  • we can right-click the entry that appears with the name we gave it and then select Settings from the menu,
  • or we can select the entry and click the Settings button at the top.
  • Go to Storage, click the optical disk icon (Empty), then click Optical Drive and choose "Select a disk file" to load the ISO image. In the file browser that appears, select the ISO we generated in the previous step.
  • It's time to start the virtual machine with CBL-Mariner.

Once we have started the virtual machine, it will boot and after a few moments show us an installation menu. The steps we must follow are:

  • Choose the "Graphical Installer" option for graphical installation. There is also a text-mode option, but the graphical one is nicer. Once selected, press Next. [We have to move through the menu with the keyboard arrows and ENTER to select.]
  • Now we will see an installer very similar to that of any other distro. In the Installation Type menu we have to choose "CBL-Mariner Full" for a full installation. In any case, whether Full or Core, since it includes hardly any packages, it will be fast.
  • The next screen shows the license terms to accept.
  • Then comes the hard drive partitioning assistant. There we have to create the necessary partitions or leave the ones that come by default.
  • Next, choose the hostname, as well as the username and password. The hostname is free text.
  • Please provide a complex password with a combination of upper-case letters, lower-case letters, numbers and special characters.
  • CBL-Mariner now begins the actual installation and starts installing packages. When it's done, reboot the virtual machine.
  • When it starts we will see the login prompt, where we have to enter the login data (username and password).
  • Now we can use CBL-Mariner as we would any server distro, without a GUI.

CBL-Mariner Installation

Thanks to all who have taken the time to go through this blog. I hope this helps you try building and installing CBL-Mariner, the latest addition to the Linux family.

Telegram: a new alternative to the Dark Web?

What is Telegram?

Telegram is a popular instant messaging service from Telegram Messenger Inc. It's a platform like Instagram and Facebook; in other words, it's a messaging app. It is compatible with Android, iOS, Windows, Mac, and Linux operating systems, and the official website is telegram.org. At the beginning of the year the service had 200 million monthly active users, and this number has surely risen since.

Telegram is a freeware, cross-platform, cloud-based instant messaging (IM) software. The service also provides end-to-end encrypted video calling, VoIP, file sharing and several other features. It was launched for iOS on 14 August 2013 and Android in October 2013 by brothers Nikolai and Pavel Durov. The servers of Telegram are distributed worldwide to decrease data load with five data centers in different regions, while the operational center is based in Dubai. Various client apps are available for desktop and mobile platforms including official apps for Android, iOS, Windows, macOS and Linux. There are also two official Telegram web twin apps – WebK and WebZ – and numerous unofficial clients that make use of Telegram’s protocol. All of Telegram’s official components are open source, with the exception of the server which is closed-source and proprietary. And this is a big step up from a closed source client. If you are using WhatsApp, or any closed source app for that matter (like Messenger, Skype, etc.), you can’t know what it’s doing with your Mobile/Desktop/Laptop.

Users can send text and voice messages, animated stickers, make voice and video calls, and share an unlimited number of images, documents (2 GB per file), user locations, contacts, and audio files. In January 2021, Telegram surpassed 500 million monthly active users. It was the most downloaded app worldwide in January 2021. Unfortunately, popularity always attracts cybercriminals. Of course, big companies have better resources to ensure that their customers are safe and content, but securing Telegram is not always easy. We'll be discussing this in a while.

What is Dark Web?

The dark web is the hidden part of the internet that is not indexed by Google, and mostly there are illegal things going on there. The dark web and the deep web are two quite different things, and I've explained this in detail in an earlier blog post. You can read it here.

So, what's there to see on the dark web? It features 90s-styled web pages and hard-to-read text due to the minimized TOR browser window, with no video playback and no downloading. You can see links to various illegitimate content: red rooms, pedophilia, hitmen for hire, drug trafficking (remember Silk Road), arms and weapons, human trafficking etc. There is an onion wiki website which provides details about most of the sites available on the dark web. It is unlikely that you'll actually stumble across any of the aforementioned "dark content" today, because these kinds of websites or links are regularly taken down either by government agencies (the FBI) or by hacking groups like Anonymous and other white-hat hackers and groups.

So if curiosity is killing you by now, go ahead and download TOR and try exploring the dark web. As I said earlier, accessing and browsing the dark web is detailed in my earlier blog post.

P.S. – Don't use Windows to access the dark web; Linux is the safer and recommended choice. Due to the number of "back doors and vulnerabilities" in Windows, the most innocent thing that could happen if someone from the dark web gets onto your PC is that your system becomes part of a planned massive DDoS attack on some government sites, and that's the best-case scenario; things only get worse from there.

Is Telegram really an alternative to the dark web?

Over the last year, many users have shifted to chatting and texting apps such as Signal and Telegram. One of the older and more reputable texting applications, Telegram saw a torrential flow of users leaving WhatsApp and coming to it. It has some great interactive features for texting which enhance the way users communicate with each other. An investigation by cybersecurity researchers into the messaging platform has revealed that the private data of millions of people is being shared openly in groups and channels that have thousands of members.

Another investigation conducted by NortonLifeLock has found evidence of a “thriving illegal marketplace” on Telegram where everything from Covid-19 vaccines, personal data, pirated software to fake IDs are up for sale. The vpnMentor researchers have detailed their findings in a report where they examine the growing trend of cybercriminals sharing leaked data on Telegram. Their team joined several cybercrime-focused Telegram groups and channels to experience the illicit exchanges between bad actors for themselves. To their surprise they discovered hackers openly posting data dumps on channels, some with over 10,000 members. More worryingly, the unscrupulous users don’t even shy away from discussions on how to exploit the data dumps in various criminal enterprises.

Traditionally, data dumps like these are usually exchanged over the dark web. Moving these exchanges to Telegram has its advantages, including "protecting the privacy of its members". Also, Telegram has a lower barrier to entry compared to the dark web, and the messaging platform is also immune to the Distributed Denial of Service (DDoS) attacks and web takedowns that can threaten how cybercriminals work on the normal web.

Research from VPN provider vpnMentor further cements Telegram’s position as a safe haven for cybercriminals, finding cybercriminals are using the popular encrypted communications platform to share and discuss massive data leaks exposing millions of people to unprecedented levels of online fraud, hacking, and attack.

Is there a way to keep Telegram safe?

If you are worried about privacy and security on Telegram, we will help you keep your Telegram safe. The vpnMentor report mentions that Telegram has taken "limited steps" to remove groups related to hacking, but that hasn't made much of a difference. If you have not secured your Telegram account, the risk of someone hacking it is much bigger. Generally, when an attacker hacks an account, the login is compromised; hackers cannot do much unless they gain access to your insecure Telegram account. So, how exactly can hackers breach seemingly well-guarded accounts? They often employ brute-force attacks to guess login data. If you think that cybercriminals spend hours typing in random password and username combinations to make a correct guess, you are mistaken: hacking techniques are much more advanced nowadays, and attackers can use hardware and software to perform a successful brute-force attack within minutes or mere seconds. The task is especially easy if the password and username are predictable, such as password123 and admin123. Of course, passwords are not generally used to sign into Telegram.
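
To put rough numbers on that, here is a back-of-the-envelope Python sketch; the guess rate is a made-up assumption, but it shows why length and character variety matter so much against brute force.

# Very rough keyspace / cracking-time estimate (all figures are assumptions)
guesses_per_second = 10_000_000_000     # assumed offline brute-force rate

def years_to_exhaust(charset_size, length):
    keyspace = charset_size ** length
    return keyspace / guesses_per_second / (3600 * 24 * 365)

print(f"8 lowercase letters : {years_to_exhaust(26, 8):.6f} years")
print(f"8 mixed chars (~94) : {years_to_exhaust(94, 8):.3f} years")
print(f"14 mixed chars (~94): {years_to_exhaust(94, 14):.2e} years")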

When you sign into Telegram, you need to enter your phone number to receive a verification code that grants you access to your account. If you think that that makes your Telegram secure, you are not 100% right. In 2016, hackers were able to compromise Telegram accounts in Iran using a flaw in the SMS protocol. According to Reuters, the verification codes sent via SMS were intercepted and leaked to hackers. This allowed them to gain full access to the affected Telegram accounts, as well as add new devices to the same account to continue the attack. The flaw also made it possible for hackers to identify 15 million unique phone numbers registered with Telegram. In a situation like this, unless hackers change passwords and block your access to the account, or they send messages that you can see in your chat history, you might be unaware of the hack at all. Ultimately, if insecure Telegram accounts are hacked, attackers can spy on users and gather sensitive information that, later on, could be used to hack bigger accounts and do more harm.

We'll start with using complex passwords and then move on to different methods of enabling the layers of security features that are already available. If you have already set those up, you can skip this section.

How to set Password Complexity?

Telegram enables you to create a password that is strong.

Tips for complex passwords:

Use multiple letter cases along with numbers, symbols, special characters etc.

thisislowercase
THISISUPPERCASE
ThisIsPascalCase
thisIsCamelCase
This_is_snake_case
THIS_IS_SCREAMING_SNAKE_CASE
this-is-kebab-case

Few examples,

“I love you so much” – IL0v3Y0U5OMuch
“Humpty Dumpty sat on a wall” — HumtyDumty$@t0nAwa11
“It is raining cats and dogs”– 1tsR@in1NGc@ts&Dogs!

Try adding some additions to the above,

“I love you so much.” - IL0v3Y0U5OMuchPer10d
“Humpty Dumpty sat on a wall” + Google — HumtyDumty$@t0nAwa11+G00gl
Netflix + “Humpty Dumpty sat on a wall” — humTdumt$@t0nAwa114netFLX
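
Here is a toy Python sketch of the substitution trick used in the examples above; the substitution table is obviously just illustrative, and you should not rely on a predictable mapping alone.

# Illustrative only: turn a memorable phrase into a denser string
SUBS = {"o": "0", "i": "1", "e": "3", "s": "5", "a": "@", " ": ""}

def mangle(phrase, suffix=""):
    # Title-case the phrase, then apply the (toy) substitution table
    out = "".join(SUBS.get(ch.lower(), ch) for ch in phrase.title())
    return out + suffix

print(mangle("I love you so much"))
print(mangle("Humpty Dumpty sat on a wall", suffix="+G00gl"))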

How to Set a Passcode on Telegram?

First of all,

  • Open the application
  • You will notice a three-line menu on the top left, tap on it.
  • Select ‘Settings’ and tap on ‘Privacy and Security’.
  • Under 'Security', tap 'Passcode Lock', and you will be asked to enter a four-digit passcode.
  • Enter the passcode twice to save it. Your telegram passcode will be active.

How to Set Fingerprint Lock on Telegram?

Once you have set up the passcode, whenever you try to open the app, you will also see an option of unlocking the app with your fingerprint. You can also disable the feature if you don’t want it. Do note that the fingerprint option will only come if you have entered fingerprint data for unlocking the device under the device settings itself. It will use the same biometric data entered into the system of your device in the first place.

How to Set Auto Lock on Telegram?

For further safety of your chat data, you can also set an 'auto-lock' timer. By default, Telegram auto-locks the application after 1 hour, but that is too long. To change it, go to the 'Auto-Lock' option in the application settings, where you can keep the timer anywhere between 1 minute and 1.45 hours. You can also disable the feature from here.

How to enable two-step verification on Telegram?

You must be familiar with two-step verification, also known as 2FA or two-factor authentication. You probably have it set up on several other accounts, such as Gmail, AWS etc. If you have not secured Telegram using the two-step verification feature, we suggest you take care of that as soon as possible. Once that is done, you will need to enter a password when you sign in from a new device. Here’s a guide that shows how to secure Telegram by setting up two-step verification via the Telegram app.

  • Open the Telegram app and sign in.
  • Tap the “menu” button on the top-right corner.
  • Go to “Settings” and then to “Privacy and Security”.
  • Tap “Two-Step Verification”.
  • Create a strong password and re-enter it for confirmation.
  • Create a hint for the password.
  • Enter your email address and tap the “green check” icon.
  • Go to your inbox, open the email, and click the “confirmation link”.

How to terminate Active Sessions on Telegram?

If someone has signed into your Telegram account, you can see it in the Active Sessions menu. The feature enables you to terminate unwanted sessions, which, hopefully, should help you kick hackers to the curb.

  • Open the Telegram app and sign in.
  • Tap the “menu” button on the top-right corner.
  • Go to “Settings” and then to “Active Sessions”.
  • Tap “Terminate All Other Sessions” or select one session at a time and tap “OK” to terminate.

Disclaimer

This is a personal blog. All content provided on this blog is for informational purposes only. It is collated from different sources and some is my own. The owner of this blog makes no representations as to the accuracy or completeness of any information on this site or found by following any link on this site. The owner of this blog will not be liable for any errors or omissions in this information nor for the availability of this information. The owner will not be liable for any losses, injuries, or damages from the display or use of this information.

RHEL 8 What’s new?

Red Hat Enterprise Linux 8 was released in beta on November 14, 2018. There are many features and improvements that distinguish it from its predecessor, RHEL 7. In this blog, I'm attempting to provide a quick glance at those improvements, the deprecations and the upgrade path.

Improvements:

  • The YUM command is replaced by DNF (yum remains only as a symbolic link to dnf). If you've worked on Fedora, DNF was already the default package manager there.
  • chronyd is the default Network Time Protocol daemon instead of ntpd.
  • Repo channel names have changed, but their content is mostly the same. The CodeReady Linux Builder repository was added; it is similar to EPEL and supplies additional packages that are not supported for production use.
  • One of the biggest improvements in RHEL 8 system performance is the new upper limit on physical memory capacity: RHEL 8 now supports 4 PB of physical memory compared to 64 TB of system memory in RHEL 7.
  • The RPM tooling is also upgraded. The rpmbuild command can now do all build steps from a source package directly, the new --reinstall option allows reinstalling a previously installed package, and there is a new rpm2archive utility for converting an RPM payload to a tar archive.
  • The TCP networking stack is improved. Red Hat claims that kernel version 4.18 provides higher performance, better scalability, and more stability.
  • RHEL 8 supports OpenSSL 1.1.1 and the TLS 1.3 cryptographic standard by default.
  • The BIND version is upgraded to 9.11 by default, which introduces new features and feature changes compared to version 9.10.
  • The Apache HTTP Server has been updated from version 2.4.6 to version 2.4.37 between RHEL 7 and RHEL 8. This updated version includes several new features but maintains backwards compatibility with the RHEL 7 version.
  • RHEL 8 introduces nginx 1.14, a web and proxy server supporting HTTP and other protocols
  • OpenSSH was upgraded to version 7.8p1.
  • Vim runs the default.vim script if no ~/.vimrc file is available.
  • The 'nobody' and 'nfsnobody' users and groups are merged into a single 'nobody' ID (65534).
  • In RHEL 8, for some daemons like cups, the logs are no longer stored in specific files within the /var/log directory as they were in RHEL 7. Instead, they are stored only in systemd-journald.
  • You are now forced to switch to chronyd; the old NTP implementation is not supported in RHEL 8.
  • NFS over UDP (NFS3) is no longer supported. The NFS configuration file moved to /etc/nfs.conf; when upgrading from RHEL 7 the file is moved automatically.
  • For desktop users, Wayland is the default display server as a replacement for the X.org server. Yet X.Org is still available. Legacy X11 applications that cannot be ported to Wayland automatically use Xwayland as a proxy between the X11 legacy clients and the Wayland compositor.
  • iptables was replaced by nftables as the default network filtering framework. This update adds the iptables-translate and ip6tables-translate tools to convert existing iptables or ip6tables rules into the equivalent nftables rules.
  • The GCC toolchain is based on GCC 8.2.
  • The Python version installed by default is 3.6, which introduces incompatibilities with scripts written for Python 2.x; Python 2.7 is available in the python2 package.
  • Perl 5.26 is distributed with RHEL 8; the current directory (.) has been removed from the @INC module search path for security reasons. PHP 7.2 is also added.
  • For working with containers, Red Hat expects you to use the podman, buildah, skopeo, and runc tools. The podman tool manages pods, container images, and containers on a single node. It is built on the libpod library, which enables management of containers and groups of containers, called pods.
  • The basic installation provides a new version of the ifup and ifdown scripts which call NetworkManager through the nmcli tool. The NetworkManager-config-server package is only installed by default if you select either the Server or Server with GUI base environment during the setup. If you selected a different environment, use the yum install NetworkManager-config-server command to install the package.
  • Node.js, a software development platform in the JavaScript programming language, is provided for the first time in RHEL. It was previously available only as a Software Collection. RHEL 8 provides Node.js 10.
  • DNF modules improve package management.
  • A new tool called Image Builder enables users to create customized RHEL images. Image Builder is available in AppStream in the lorax-composer package. Among other things, it allows creating live ISO disk images and images for Azure, VMware and AWS; see Composing a customized RHEL system image.
  • Some new storage management capabilities were introduced. Stratis is a new local storage manager that provides managed file systems on top of pools of storage with additional features for the user. RHEL 8 also supports file system snapshots and LUKSv2 disk encryption with Network-Bound Disk Encryption (NBDE).
  • VMs are managed via Cockpit by default; if required, virt-manager can also be installed. The Cockpit web console is available by default and provides basic server stats, much like Nagios, plus access to logs. Packages for the RHEL 8 web console, also known as Cockpit, are now part of the Red Hat Enterprise Linux default repositories and can therefore be installed immediately on a registered RHEL 8 system. (You should be using this extensively if you're running KVM implementations of RHEL 8 virtual machines.)

Deprecations:

  • The yum package is deprecated and the yum command is just a symbolic link to dnf.
  • The old NTP implementation is not supported in RHEL 8.
  • Network scripts are deprecated; ifup and ifdown now call nmcli.
  • The Digital Signature Algorithm (DSA) is considered deprecated. Authentication mechanisms that depend on DSA keys do not work in the default configuration.
  • rdist is removed, as well as rsh and all r-utilities.
  • The X.Org display server was replaced by Wayland from GNOME.
  • tcp_wrappers were removed. It is not clear what happens with programs previously compiled with tcp-wrapper support, such as Postfix.
  • iptables is deprecated.
  • Limited support for Python 2.6.
  • KDE support has been deprecated.
  • Upgrading from KDE on RHEL 7 to GNOME on RHEL 8 is unsupported.
  • Removal of Btrfs support.
  • Docker is not included in RHEL 8.0.

Upgrade:

The release of RHEL 8 gives those still using RHEL 6 the opportunity to skip RHEL 7 completely for new server installations. RHEL 7 has five years before EOL (June 30, 2024), while many servers now last more than five years. Theoretically an upgrade from RHEL 6 to RHEL 8 is possible by upgrading to RHEL 7 first, but it is too risky. RHEL 8 is distributed through two main repositories (please follow the RHEL 8 upgrade path):

Base OS

Content in the BaseOS repository is intended to provide the core set of underlying OS functionality that provides the foundation for all installations. This content is available in the RPM format and is subject to support terms similar to those in previous releases of RHEL. A list of packages distributed through BaseOS is in the RHEL 8 package manifest.

AppStream

Content in the Application Stream repository includes additional user space applications, runtime languages, and databases in support of varied workloads and use cases. Application Streams are available in the familiar RPM format, as an extension to the RPM format called modules, or as Software Collections. A list of packages available in AppStream is in the package manifest as well.

In addition, the CodeReady Linux Builder repository is available with all RHEL subscriptions. It provides additional packages for use by developers. Packages included in the CodeReady Linux Builder repository are unsupported. Please check RHEL 8 Package manifest.

With the idea of Application Streams, RHEL 8 is following Fedora's Modularity lead. Fedora 28, released earlier this year by the Fedora Linux distribution (considered the bleeding-edge community edition of RHEL), introduced the concept of modularity. Userspace components can update faster than core operating system packages, without waiting for the next version of the operating system. Installing multiple versions of the same package (such as an interpreted language or a database) is also possible by using an application stream.

Theoretically RHEL 8 will be able to withstand more heavy loads due to optimized TCP/IP stack and improvements in memory handling.

Installation has not changed much from RHEL 7. RHEL 8 still pushes LVM for the root filesystem in the default installation. Without a subscription you can still install packages from the ISO, either directly or by making it a repo. The default filesystem remains XFS. Red Hat Enterprise Linux 8 supports installing from a repository on a local hard drive; you only need to specify the directory instead of the ISO image.

For example:

inst.repo=hd::.

Kickstart has also changed, but not much (auth and authconfig are deprecated and you need to use authselect instead).

Source: Red Hat RHEL 8 release notes, Red Hat blogs, Linux Journal etc.

Understanding API

API stands for Application Programming Interface. An API is a software intermediary that allows two applications to talk to each other. In other words, an API is the messenger that delivers your request to the provider that you’re requesting it from and then delivers the response back to you. We can say that an API is a set of programming instructions and standards for accessing a Web-based software application or Web tool.

As we understood, an API is a software-to-software interface, not a user interface. The most important part of this name is "interface," because an API essentially talks to a program for you. You still need to know the language to communicate with the program, but without an API, you won't get far. With APIs, applications talk to each other without any user knowledge or intervention. When programmers decide to make some of their data available to the public, they "expose endpoints," meaning they publish a portion of the language they've used to build their program. Other programmers can then pull data from the application by building URLs or using HTTP clients (special programs that build the URLs for you) to request data from those endpoints.

Endpoints return text that’s meant for computers to read, so it won’t make complete sense if you don’t understand the computer code used to write it. A software company releases its API to the public so that other software developers can design products that are powered by its service.
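
As a small illustration of hitting an exposed endpoint, the sketch below uses Python's third-party requests library against GitHub's public REST API; the specific repository and fields are just an example of the kind of machine-readable text such endpoints return.

import requests   # assumed to be installed: pip install requests

# Public, unauthenticated endpoint used purely as an example
resp = requests.get("https://api.github.com/repos/microsoft/CBL-Mariner",
                    timeout=10)
resp.raise_for_status()
data = resp.json()                    # the endpoint returns JSON, not a web page
print(data["full_name"], "-", data.get("description"))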

Examples:

When bloggers put their Twitter handle on their blog’s sidebar, WordPress enables this by using Twitter’s API.

Amazon.com released its API so that Web site developers could more easily access Amazon’s product information. Using the Amazon API, a third party Web site can post direct links to Amazon products with updated prices and an option to “buy now.”

When you buy movie tickets online and enter your credit card information, the movie ticket Web site uses an API to send your credit card information to a remote application that verifies whether your information is correct. Once payment is confirmed, the remote application sends a response back to the movie ticket Web site saying it's OK to issue the tickets. As a user, you only see one interface, the movie ticket Web site, but behind the scenes, many applications are working together using APIs. This type of integration is called seamless, since the user never notices when software functions are handed from one application to another.

The Docker Engine comes with an API. Docker provides an API for interacting with the Docker daemon (called the Docker Engine API), as well as SDKs for Go and Python. The SDKs allow you to build and scale Docker apps and solutions quickly and easily. If Go or Python don’t work for you, you can use the Docker Engine API directly. The Docker Engine API is a RESTful API accessed by an HTTP client such as wget or curl, or by the HTTP library that is part of most modern programming languages.
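As a quick illustration (assuming the Docker daemon is listening on its default Unix socket; the exact API version prefix can vary by installation), listing the running containers through the Engine API with curl looks roughly like this:

curl --unix-socket /var/run/docker.sock http://localhost/containers/json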

Types of APIs

There are four main types of APIs:

Open APIs: Also known as Public APIs, these have no access restrictions because they are publicly available.
Partner APIs: A developer needs specific rights or licenses to access this type of API because they are not available to the public.
Internal APIs: Also known as Private APIs, these are exposed only by internal systems and are usually designed for use within a company. The company uses this type of API among its different internal teams to improve its products and services.
Composite APIs: This type of API combines different data and service APIs. It is a sequence of tasks that run as the result of a single execution rather than at the request of each individual task. Its main uses are to speed up execution and improve the performance of listeners in web interfaces.

API architecture types:

APIs can vary by architecture type but are generally used for one of three purposes:

System APIs access and maintain data. These types of APIs are responsible for managing all of the configurations within a system. To use an example, a system API unlocks data from a company’s billing database.

Process APIs take the data accessed with system APIs and synthesize it to create a new way to view or act on data across systems. To continue the example, a process API would take the billing information and combine it with inventory information and other data to fulfill an order.

Experience APIs add context to system and process APIs. These types of APIs make the information collected by system and process APIs understandable to a specified audience. Following the same example, an experience API could translate the data from the process and system APIs into an order status tracker that displays information about when the order was placed and when the customer should expect to receive it.

Apart from the main web APIs, there are also web service APIs. The following are the most common types:

SOAP (Simple Object Access Protocol): This is a protocol that uses XML as a format to transfer data. Its main function is to define the structure of the messages and methods of communication. It also uses WSDL, or Web Services Description Language, a machine-readable document, to publish a definition of its interface.

XML-RPC: This is a protocol that uses a specific XML format to transfer data, whereas SOAP uses a proprietary XML format. It is also older than SOAP. XML-RPC uses minimal bandwidth and is much simpler than SOAP. For example, the yum command in Linux uses XML-RPC calls.

JSON-RPC: Based on JSON (JavaScript Object Notation), this protocol is similar to XML-RPC, but instead of using an XML format to transfer data it uses JSON.

REST (Representational State Transfer): REST is not a protocol like the other web services; instead, it is a set of architectural principles. A REST service needs to have certain characteristics, including simple interfaces, resources that are easily identified within the request, and manipulation of those resources through the interface.

SOAP vs REST:

  • SOAP has strict rules and advanced security to follow; REST has loose guidelines, allowing developers to implement recommendations easily.
  • SOAP is driven by function; REST is driven by data.
  • SOAP requires more bandwidth; REST requires minimal bandwidth.

JSON vs XML:

  • JSON supports only text and numbers; XML supports various types of data, for example text, numbers, images, graphs and charts.
  • JSON focuses mainly on data; XML focuses mainly on documents.
  • JSON has low security; XML has more security.

The web service APIs honor all the HTTP methods like POST, GET, PUT, PATCH and DELETE. If we compare these with the CRUD operations:

HTTP methods vs CRUD operations:

  • POST – Create
  • GET – Read
  • PUT – Update/Replace
  • PATCH – Update/Modify
  • DELETE – Delete

POST – The POST verb is most often utilized to create new resources. On success it returns HTTP status 201 (Created), along with a Location header containing a link to the newly created resource. POST is neither safe nor idempotent.

GET – The HTTP GET method is used to read (or retrieve) a representation of a resource. On success, GET returns a representation in XML or JSON and an HTTP response code of 200 (OK). GET is safe and idempotent.

PUT – PUT is most-often utilized for update capabilities. PUT is not a safe operation, in that it modifies (or creates) state on the server, but it is idempotent.

PATCH – PATCH is used for modify capabilities. The PATCH request only needs to contain the changes to the resource, not the complete resource. PATCH is neither safe nor idempotent; however, a PATCH request can be issued in such a way as to be idempotent.

DELETE – DELETE is pretty easy to understand: it is used to delete a resource identified by a URI. There is a caveat about DELETE idempotence, as calling DELETE on a resource a second time will often return a 404 (Not Found) since it was already removed and is therefore no longer available.

We will now try creating a RESTful API with Golang. Those who haven’t tried their hands at Golang can click here to follow the basics of Golang.

We start the Go program which creates the API. The source code is available in this Git repository. A discussion of the API implementation is out of scope for this blog post; we can cover that in another post.

Here we concentrate only on the API and on the requests and responses we send to and receive from the API endpoints. So, let’s start our dummy API interface; a minimal sketch of what such a server might look like is shown below.
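As an illustration only (the actual code lives in the Git repository linked above; the Event fields and handler names here are assumptions, not the repository’s exact code), a bare-bones Go version of such a dummy events API could look like this:

// main.go – a minimal, illustrative dummy events API (read-only; the POST/PATCH/DELETE handlers are omitted for brevity)
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"strings"
)

// Event is an assumed shape for the dummy data; the real struct may differ.
type Event struct {
	ID          string `json:"id"`
	Title       string `json:"title"`
	Description string `json:"description"`
}

// events is the in-memory "database" built from a slice of structs.
var events = []Event{
	{ID: "1", Title: "First event", Description: "A dummy event"},
}

// allEvents returns the full list of events as JSON.
func allEvents(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(events)
}

// oneEvent looks up a single event by the ID taken from the URL path.
func oneEvent(w http.ResponseWriter, r *http.Request) {
	id := strings.TrimPrefix(r.URL.Path, "/events/")
	for _, e := range events {
		if e.ID == id {
			w.Header().Set("Content-Type", "application/json")
			json.NewEncoder(w).Encode(e)
			return
		}
	}
	http.NotFound(w, r)
}

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("Welcome to the dummy events API"))
	})
	http.HandleFunc("/events", allEvents)
	http.HandleFunc("/events/", oneEvent)
	log.Fatal(http.ListenAndServe(":8001", nil))
}

Running go run main.go starts the server listening on port 8001.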

executing the code

Now we can check whether our API is accessible and giving responses to our requests. We do it on the command line with a simple curl request. Our application is listening on port 8001, which can be changed in the code as you wish. We are now hitting the endpoint ‘/’.

curl request
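For reference, that request is simply (assuming the server runs locally):

curl http://localhost:8001/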

Next we try hitting the endpoint /events to get all the events in the dummy database, created with a slice and struct in the main.go file.

curl request to get all events
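Equivalently:

curl http://localhost:8001/events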

Now we will simulate the requests with RESTer, an open-source Firefox extension that is also available for Chrome. To hit the endpoint “/events” we use the GET method; the response code 200 states that the request was successful.

GET method

In this simulation, we use the POST method to create another event in the dummy database inside the application programmatically. Response code 201 indicates successful creation of the event.

POST method to create new event
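Outside RESTer, an equivalent curl request might look like this (the JSON fields are assumptions based on the dummy Event structure, not the exact schema in the repository):

curl -X POST -H "Content-Type: application/json" -d '{"id":"2","title":"Second event","description":"Created via POST"}' http://localhost:8001/events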

Now we can try hitting the endpoint “/events/{id}”, which retrieves a single event, with a GET method; “/events/2” will display the newly added event.

GET method on /events/2
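Or with curl:

curl http://localhost:8001/events/2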

We will now hit the “/events” endpoint and see whether both the events are in the response.

GET method on /events endpoint

In the next simulation we are using the PATCH method to modify/update an existing event. In this case we are hitting the endpoint “/events/2” to modify the event id 2.

PATCH method to modify event id 2
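An equivalent curl request (again, the field names are illustrative):

curl -X PATCH -H "Content-Type: application/json" -d '{"title":"Updated second event"}' http://localhost:8001/events/2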

We then GET the endpoint “/events” again to verify our PATCH request.

GET method to verify PATCH request

In the final simulation we are hitting the endpoint “/events/1” with a DELETE method so that event id 1 will be removed/deleted.

DELETE Method to delete event id 1
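Or with curl:

curl -X DELETE http://localhost:8001/events/1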

Voila..!! We just created an API and tested it with dummy data. I believe we got a quick overview of what an API is and how we can use different HTTP methods to retrieve data or modify data using an API.

Site Reliability: SLI, SLO & SLA

Service Level Indicator (SLI), Service Level Objective (SLO) and Service Level Agreement (SLA) are parameters with which the reliability, availability and performance of a service are measured. The SLA, SLO, and SLI are related, but distinct, concepts.

It’s easy to get lost in a fog of acronyms, so before we dig in, here is a quick and easy definition:

  • SLA, or Service Level Agreement, is a contract in which the service provider promises customers a certain level of service availability, performance, etc.
  • SLO, or Service Level Objective, is a goal that the service provider wants to reach.
  • SLI, or Service Level Indicator, is a measurement the service provider uses to track progress toward that goal.

Service Level Indicator
SLIs are the parameters which indicate the successful transactions and requests served by the service over predefined intervals of time. These parameters allow us to measure the much-needed performance and availability of the service, and measuring them also enables us to improve them gradually.

Key Examples are:

  • Availability/Uptime of the service.
  • Number of successful transactions/requests.
  • Consistency and durability of the data.

Service Level Objective
The SLO defines the acceptable downtime of the service. For multiple components of the service, there can be different parameters which define the acceptable downtime. It is a common pattern to start with a low SLO and gradually increase it.

Key Examples are:

  • Durability of disks should be 99.9%.
  • Availability of the service should be 99.95%.
  • Service should successfully serve 99.999% requests/transactions.

Service Level Agreement
The SLA defines the penalty the service provider must pay in the event of service unavailability for a pre-defined period of time. The service provider should clearly define the failure factors for which they will be accountable (their domain of responsibility). It is a common pattern to have a looser SLA than SLO, for instance an SLA of 99% and an SLO of 99.5%. If the service is overly available, the slack against the SLO/SLA can be used as an error budget to deploy complex releases to production.

Key Examples of Penalty are:

  • Partial refund of service subscription fee.
  • Additional subscription time added for free.

So here is the relationship: the service provider collects metrics based on the SLIs, defines thresholds for those metrics based on the SLOs, and monitors those thresholds so that the service does not break the SLA. In practice, the SLIs are the metrics in the monitoring system, the SLOs are the alerting rules, and the SLAs are the contractual numbers those SLOs are designed to protect.

Usually the SLO and the SLA are similar, with the SLO being tighter than the SLA. SLOs are generally used internally only, while SLAs are external. If service availability violates the SLO, operations need to react quickly to avoid breaking the SLA; otherwise, the company might need to refund some money to customers.

The SLA, SLO, and SLI are based on the assumption that the service will not be available 100% of the time. Instead, we guarantee that the system will be available more than a certain number, for example, 99.5%.
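As a quick worked example, a 99.5% availability target over a 30-day month leaves an error budget of (1 − 0.995) × 30 × 24 × 60 ≈ 216 minutes, i.e. roughly 3.6 hours of acceptable downtime.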

When we apply this definition to availability, for example, SLIs are the key measurements of the availability of a system; SLOs are goals we set for how much availability we expect out of a system; and SLAs are the legal contracts that explain what happens if our system doesn’t meet its SLO.

SLIs exist to help engineering teams make better decisions. Your SLO performance is critical information to have when you’re making decisions about how hard and fast you can push your systems. SLOs are also important data points for other engineers when they’re making assumptions about their dependencies on your service or system. Lastly, your larger organization should use your SLIs and SLOs to make informed decisions about investment levels and about balancing reliability work against engineering velocity.

Note: this summary is taken from SRE Fundamentals, CRE, and the book Site Reliability Engineering: How Google Runs Production Systems.