Telegram: A New Alternative to the Dark Web

What is Telegram?

Telegram is a popular instant messaging service from Telegram Messenger Inc. It is a platform like Instagram and Facebook; in other words, it’s a messaging app. It is compatible with Android, iOS, Windows, macOS, and Linux. At the beginning of the year, the service was used by 200 million monthly active users, and this number has surely risen throughout the year.

Telegram is freeware, cross-platform, cloud-based instant messaging (IM) software. The service also provides end-to-end encrypted video calling, VoIP, file sharing and several other features. It was launched for iOS on 14 August 2013 and for Android in October 2013 by brothers Nikolai and Pavel Durov. Telegram’s servers are distributed worldwide to decrease data load, with five data centers in different regions, while the operational center is based in Dubai. Various client apps are available for desktop and mobile platforms, including official apps for Android, iOS, Windows, macOS and Linux. There are also two official Telegram web twin apps – WebK and WebZ – and numerous unofficial clients that use Telegram’s protocol. All of Telegram’s official components are open source, with the exception of the server, which is closed-source and proprietary. Even so, this is a big step up from a fully closed-source app: if you are using WhatsApp, or any closed-source app for that matter (like Messenger or Skype), you can’t know what it’s doing on your mobile, desktop or laptop.

Users can send text and voice messages, animated stickers, make voice and video calls, and share an unlimited number of images, documents (2 GB per file), user locations, contacts, and audio files. In January 2021, Telegram surpassed 500 million monthly active users and was the most downloaded app worldwide. Unfortunately, popularity always attracts cybercriminals. Big companies have better resources to keep their customers safe and content, but securing Telegram is not always easy. We’ll discuss this shortly.

What is Dark Web?

The dark web is the hidden part of the internet that is not indexed by Google, and much of what goes on there is illegal. The dark web and the deep web are two quite different things; I’ve explained the difference in detail in my earlier blog post. You can read it here.

So, what’s there to see on the dark web? It features ’90s-styled web pages, text that is hard to read in a minimized TOR browser window, no video playback and no downloads. You will come across links to various illegitimate content: red rooms, pedophilia, hitmen for hire, drug trafficking (remember Silk Road), arms and weapons, human trafficking and so on. There is an onion wiki website which provides details about most of the sites available on the dark web. It is unlikely that you will actually stumble across any of the aforementioned “dark content” today, because these kinds of websites and links are regularly taken down either by government agencies (such as the FBI) or by hacking groups like Anonymous and other white-hat hackers and groups.

So if curiosity is killing you by now, go ahead, download TOR and try exploring the dark web. As I said earlier, accessing and browsing the dark web is detailed in my earlier blog post.

P.S. – Don’t use Windows to access the dark web; Linux is the safer, recommended choice. Given the number of back doors and vulnerabilities in Windows, the most innocent outcome of someone from the dark web getting into your PC is that your system becomes part of a massive planned DDoS attack on some government site. And that is the best-case scenario; things only get worse from there.

Is Telegram really an alternative to the dark web?

Over the last year, many users have shifted to chat and texting apps such as Signal and Telegram. One of the older and more reputable texting applications, Telegram saw a torrent of users arriving from WhatsApp. It has some great interactive texting features that enhance the way users communicate with each other. However, an investigation by cybersecurity researchers into the messaging platform has revealed that the private data of millions of people is being shared openly in groups and channels with thousands of members.

Another investigation, conducted by NortonLifeLock, has found evidence of a “thriving illegal marketplace” on Telegram where everything from Covid-19 vaccines and personal data to pirated software and fake IDs is up for sale. The vpnMentor researchers detailed their findings in a report examining the growing trend of cybercriminals sharing leaked data on Telegram. Their team joined several cybercrime-focused Telegram groups and channels to witness the illicit exchanges between bad actors for themselves. To their surprise, they discovered hackers openly posting data dumps in channels, some with over 10,000 members. More worryingly, the unscrupulous users don’t shy away from discussing how to exploit the data dumps in various criminal enterprises.

Traditionally, data dumps like these are exchanged over the dark web. Moving these exchanges to Telegram has its advantages, including “protecting the privacy of its members”. Telegram also has a lower barrier to entry than the dark web, and the platform is immune to the Distributed Denial of Service (DDoS) attacks and web takedowns that can threaten how cybercriminals operate on the normal web.

Research from VPN provider vpnMentor further cements Telegram’s position as a safe haven for cybercriminals: it found criminals using the popular encrypted communications platform to share and discuss massive data leaks that expose millions of people to unprecedented levels of online fraud, hacking, and attack.

Is there a way to keep Telegram safe?

If you are worried about privacy and security on Telegram, this section will help you keep your account safe. The vpnMentor report mentioned that Telegram has taken “limited steps” to remove hacking-related groups, but that hasn’t made much of a difference, so securing your own account matters all the more. If you do not secure your Telegram account, the risk of someone hacking it is much bigger: hackers can do very little unless they gain access to an insecure account. So how exactly do hackers breach seemingly well-guarded accounts? They often employ brute-force attacks to guess login data. If you picture cybercriminals spending hours typing in random password and username combinations, you are mistaken. Hacking techniques are far more advanced nowadays; with the right hardware and software, a successful brute-force attack can take minutes or mere seconds. The task is especially easy if the password and username are predictable, such as password123 and admin123. Of course, passwords are not generally used to sign in to Telegram in the first place.
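To get a feel for why password length and character variety matter against brute force, here is a back-of-the-envelope sketch. The guess rate is an assumption for illustration, not a measured figure:

```shell
#!/bin/sh
# Back-of-the-envelope brute-force math. Assumed guess rate: 1e10 guesses
# per second (GPU-class cracking hardware; illustrative only).
rate=10000000000

# 8 lowercase letters: 26^8 possible combinations
secs=$(awk -v r="$rate" 'BEGIN { printf "%d", (26^8)/r }')
echo "8 lowercase letters fall in ~${secs} seconds"

# 14 characters drawn from ~72 symbols (upper, lower, digits, punctuation)
years=$(awk -v r="$rate" 'BEGIN { printf "%.0e", (72^14)/(r*31536000) }')
echo "14 mixed characters take ~${years} years"
```

The jump from seconds to hundreds of millions of years is why the complex-password tips below are worth the effort.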

When you sign in to Telegram, you enter your phone number to receive a verification code that grants you access to your account. If you think that makes your Telegram secure, you are not 100% right. In 2016, hackers were able to compromise Telegram accounts in Iran using a flaw in the SMS protocol. According to Reuters, the verification codes sent via SMS were intercepted and leaked to hackers. This allowed them to gain full access to the affected Telegram accounts, as well as add new devices to those accounts to continue the attack. The flaw also made it possible for hackers to identify 15 million unique phone numbers registered with Telegram. In a situation like this, unless hackers change passwords and block your access to the account, or send messages that you can see in your chat history, you might not be aware of the hack at all. Ultimately, if insecure Telegram accounts are hacked, attackers can spy on users and gather sensitive information that could later be used to hack bigger accounts and do more harm.

We’ll start with using complex passwords and then move further with different methods to enable layers of security features which are already available. If you have already set those, you can skip this section.

How to Set a Complex Password?

Telegram enables you to create a password that is strong.

Tips for complex passwords:

Use a mix of upper- and lower-case letters along with numbers, symbols, special characters, etc.


A few examples:

“I love you so much” – IL0v3Y0U5OMuch
“Humpty Dumpty sat on a wall” — HumtyDumty$@t0nAwa11
“It is raining cats and dogs”– 1tsR@in1NGc@ts&Dogs!

Try adding some extras to the above:

“I love you so much.” - IL0v3Y0U5OMuchPer10d
“Humpty Dumpty sat on a wall” + Google — HumtyDumty$@t0nAwa11+G00gl
Netflix + “Humpty Dumpty sat on a wall” — humTdumt$@t0nAwa114netFLX
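If you’d rather not invent mnemonics by hand, you can generate a throwaway password straight from /dev/urandom. This is a minimal sketch; the character set and 16-character length are just reasonable defaults:

```shell
#!/bin/sh
# Throwaway password generator: tr filters random bytes down to the
# allowed character classes, head keeps the first 16 characters.
# Widen the set or lengthen the output to taste.
pw=$(LC_ALL=C tr -dc 'A-Za-z0-9!@#$%^&*' < /dev/urandom | head -c 16)
echo "$pw"
```

A generated password like this belongs in a password manager, since it has no mnemonic to remember it by.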

How to Set a Passcode on Telegram?

First of all,

  • Open the application
  • You will notice a three-line menu on the top left, tap on it.
  • Select ‘Settings’ and tap on ‘Privacy and Security’.
  • Under ‘Security’, tap ‘Passcode Lock’, and you will be asked to enter a four-digit passcode.
  • Enter the passcode twice to save it. Your Telegram passcode will now be active.

How to Set Fingerprint Lock on Telegram?

Once you have set up the passcode, whenever you try to open the app you will also see an option to unlock it with your fingerprint. You can disable the feature if you don’t want it. Do note that the fingerprint option only appears if you have registered fingerprint data for unlocking the device in the device settings itself; Telegram uses the same biometric data already enrolled on your device.

How to Set Auto Lock on Telegram?

For further safety of your chat data, you can also set an ‘auto-lock’ timer. By default, Telegram auto-locks the application after 1 hour. That is too long; to change it, go to the ‘Auto-Lock’ option in the application settings, where you can set the timer to as little as 1 minute. You can also disable the feature from here.

How to enable two-step verification on Telegram?

You must be familiar with two-step verification, also known as 2FA or two-factor authentication. You probably have it set up on several other accounts, such as Gmail, AWS etc. If you have not secured Telegram using the two-step verification feature, we suggest you take care of that as soon as possible. Once that is done, you will need to enter a password when you sign in from a new device. Here’s a guide that shows how to secure Telegram by setting up two-step verification via the Telegram app.

  • Open the Telegram app and sign in.
  • Tap the “menu” button on the top-right corner.
  • Go to “Settings” and then to “Privacy and Security”.
  • Tap “Two-Step Verification”.
  • Create a strong password and re-enter it for confirmation.
  • Create a hint for the password.
  • Enter your email address and tap the “green check” icon.
  • Go to your inbox, open the email, and click the “confirmation link”.

How to terminate Active Sessions on Telegram?

If someone has signed into your Telegram account, you can see it in the Active Sessions menu. The feature enables you to terminate unwanted sessions, which, hopefully, should help you kick hackers to the curb.

  • Open the Telegram app and sign in.
  • Tap the “menu” button on the top-right corner.
  • Go to “Settings” and then to “Active Sessions”.
  • Tap “Terminate All Other Sessions” or select one session at a time and tap “OK” to terminate.



This is a personal blog. All content provided on this blog is for informational purposes only. They are collated from different sources and some are my own. The owner of this blog makes no representations as to the accuracy or completeness of any information on this site or found by following any link on this site. The owner of THIS BLOG will not be liable for any errors or omissions in this information nor for the availability of this information. The owner will not be liable for any losses, injuries, or damages from the display or use of this information 

RHEL 8 – What’s new?

Red Hat Enterprise Linux 8 was released in Beta on November 14, 2018. There are many features and improvements that distinguish it from its predecessor, RHEL 7. In this blog, I attempt to provide a quick glance at those improvements, the deprecations, and the upgrade path.


New features and improvements:

  • The yum command is replaced by DNF: dnf is now the default package manager, with yum kept as a symbolic link to it. If you’ve worked on Fedora, DNF has been the default package manager there for a while.
  • chronyd is the default Network Time Protocol daemon, replacing ntpd.
  • Repository channel names have changed, but their content is mostly the same. A CodeReady Linux Builder repository was added; it is similar to EPEL and supplies additional packages that are not supported for production use.
  • One of the biggest improvements in RHEL 8 system performance is the new upper limit on physical memory capacity: RHEL 8 supports up to 4 PB of physical memory, compared to 64 TB in RHEL 7.
  • RPM has also been upgraded. The rpmbuild command can now run all build steps directly from a source package, the new --reinstall option allows reinstalling a previously installed package, and there is a new rpm2archive utility for converting an rpm payload to a tar archive.
  • The TCP networking stack is improved; Red Hat claims that kernel 4.18 provides higher performance, better scalability, and more stability.
  • RHEL 8 supports OpenSSL 1.1.1 and the TLS 1.3 cryptographic standard by default.
  • BIND is upgraded to version 9.11 by default, introducing new features and feature changes compared to version 9.10.
  • The Apache HTTP Server has been updated from version 2.4.6 in RHEL 7 to version 2.4.37 in RHEL 8. The updated version includes several new features but maintains backwards compatibility with the RHEL 7 version.
  • RHEL 8 introduces nginx 1.14, a web and proxy server supporting HTTP and other protocols.
  • OpenSSH is upgraded to version 7.8p1.
  • Vim runs the default.vim script if no ~/.vimrc file is available.
  • The ‘nobody’ and ‘nfsnobody’ users and groups are merged into a single ‘nobody’ ID (65534).
  • In RHEL 8, for some daemons such as cups, logs are no longer stored in dedicated files under /var/log as they were in RHEL 7; instead, they are stored only in systemd-journald.
  • You are now forced to switch to chronyd; the old NTP implementation is not supported in RHEL 8.
  • NFS over UDP (NFSv3) is no longer supported. The NFS configuration file moved to /etc/nfs.conf; when upgrading from RHEL 7 the file is moved automatically.
  • For desktop users, Wayland is the default display server, replacing the X.Org server, though X.Org is still available. Legacy X11 applications that cannot be ported to Wayland automatically use Xwayland as a proxy between the X11 legacy clients and the Wayland compositor.
  • iptables has been replaced by nftables as the default network filtering framework. This update adds the iptables-translate and ip6tables-translate tools to convert existing iptables or ip6tables rules into their nftables equivalents.
  • The GCC toolchain is based on GCC 8.2.
  • The Python version installed by default is 3.6, which introduces incompatibilities with scripts written for Python 2.x; Python 2.7 remains available in the python2 package.
  • Perl 5.26 is distributed with RHEL 8. The current directory . has been removed from the @INC module search path for security reasons. PHP 7.2 is also added.
  • For working with containers, Red Hat expects you to use the podman, buildah, skopeo, and runc tools. The podman tool manages pods, container images, and containers on a single node. It is built on the libpod library, which enables management of containers and groups of containers, called pods.
  • The basic installation provides a new version of the ifup and ifdown scripts which call NetworkManager through the nmcli tool. The NetworkManager-config-server package is only installed by default if you select either the Server or Server with GUI base environment during setup. If you selected a different environment, use yum install NetworkManager-config-server to install the package.
  • Node.js, a software development platform in the JavaScript programming language, is provided for the first time in RHEL; it was previously available only as a Software Collection. RHEL 8 provides Node.js 10.
  • DNF modules improve package management.
  • A new tool called Image Builder enables users to create customized RHEL images. Image Builder is available in AppStream in the lorax-composer package. Among other things, it allows creating live ISO disk images and images for Azure, VMware and AWS; see Composing a customized RHEL system image.
  • Some new storage management capabilities were introduced. Stratis is a new local storage manager that provides managed file systems on top of pools of storage, with additional features for the user. It also supports file system snapshots and LUKSv2 disk encryption with Network-Bound Disk Encryption (NBDE).
  • VMs are managed via Cockpit by default; virt-manager can still be installed if required. The Cockpit web console is available by default and provides basic server statistics, much like Nagios, along with access to logs. Packages for the RHEL 8 web console (Cockpit) are now part of the default Red Hat Enterprise Linux repositories and can therefore be installed immediately on a registered RHEL 8 system. (You should be using this extensively if you’re running KVM-based RHEL 8 virtual machines.)
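As a sketch of the iptables-to-nftables change mentioned above: the translate tools emit nftables syntax for old rules, and a native nftables ruleset reads roughly like the file below. The ruleset here is a made-up example written to /tmp, so the demo runs without root or a RHEL 8 box:

```shell
#!/bin/sh
# On RHEL 8 the translate tools convert legacy rules, e.g.:
#   iptables-translate -A INPUT -p tcp --dport 22 -j ACCEPT
# prints roughly: nft add rule ip filter INPUT tcp dport 22 counter accept
# An equivalent native nftables ruleset file looks like this:
cat > /tmp/demo.nft <<'EOF'
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    ct state established,related accept
    tcp dport 22 accept
  }
}
EOF
grep 'tcp dport' /tmp/demo.nft
```

On a real RHEL 8 system the ruleset would be loaded with `nft -f`, but the syntax comparison is the point here.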
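Of the changes above, the switch to Python 3.6 as the default is the one most likely to bite existing scripts. Two classic Python 2 habits break under python3, and they are easy to check from the shell (this behaves the same on any Python 3):

```shell
#!/bin/sh
# Two Python 2 habits that break under RHEL 8's default python3:
# print is now a function, and / is true division (// is integer division).
python3 -c 'print("ok")'     # py2's bare "print" statement is a SyntaxError
python3 -c 'print(7 / 2)'    # 3.5 -- Python 2 printed 3 here
python3 -c 'print(7 // 2)'   # 3   -- the Python 3 spelling of integer division
```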


Deprecated and removed:

  • The yum package is deprecated and the yum command is just a symbolic link to dnf.
  • The ntpd NTP implementation is not supported in RHEL 8.
  • Network scripts are deprecated; ifup and ifdown map to nmcli.
  • The Digital Signature Algorithm (DSA) is considered deprecated. Authentication mechanisms that depend on DSA keys do not work in the default configuration.
  • rdist is removed, as are rsh and all the r-utilities.
  • The X.Org display server is replaced by Wayland in GNOME.
  • tcp_wrappers were removed. It is not clear what happens to programs previously compiled with tcp_wrappers support, such as Postfix.
  • iptables is deprecated.
  • Limited support for Python 2.6.
  • KDE support has been deprecated.
  • Upgrading from KDE on RHEL 7 to GNOME on RHEL 8 is unsupported.
  • Btrfs support is removed.
  • Docker is not included in RHEL 8.0.


The release of RHEL 8 gives those still running RHEL 6 an opportunity to skip RHEL 7 completely for new server installations. RHEL 7 has five years before EOL (June 30, 2024), while many servers now last more than five years. Theoretically an upgrade from RHEL 6 to RHEL 8 is possible by upgrading to RHEL 7 first, but it is too risky. RHEL 8 is distributed through two main repositories, described below; please follow the RHEL 8 upgrade path.

BaseOS

Content in the BaseOS repository is intended to provide the core set of underlying OS functionality that is the foundation for all installations. This content is available in the RPM format and is subject to support terms similar to those in previous releases of RHEL. For a list of packages distributed through BaseOS, see the RHEL 8 Package manifest.


AppStream

Content in the AppStream repository includes additional user-space applications, runtime languages, and databases in support of varied workloads and use cases. Application Streams are available in the familiar RPM format, as an extension to the RPM format called modules, or as Software Collections. For a list of packages available in AppStream, see the RHEL 8 Package manifest.

In addition, the CodeReady Linux Builder repository is available with all RHEL subscriptions. It provides additional packages for use by developers. Packages included in the CodeReady Linux Builder repository are unsupported. Please check RHEL 8 Package manifest.

With the idea of Application Streams, RHEL 8 is following Fedora’s modularity lead. Fedora 28, released earlier this year (Fedora is considered the bleeding-edge community edition of RHEL), introduced the concept of modularity: userspace components can be updated more frequently than core operating system packages, without waiting for the next version of the operating system. Application Streams also make it possible to install multiple versions of the same package (such as an interpreted language or a database).

Theoretically, RHEL 8 will be able to withstand heavier loads thanks to the optimized TCP/IP stack and improvements in memory handling.

Installation has not changed much from RHEL 7. RHEL 8 still pushes LVM for the root filesystem in the default installation, and the default filesystem remains XFS. Without a subscription you can still install packages from the ISO, either directly or by turning it into a repo. Red Hat Enterprise Linux 8 also supports installing from a repository on a local hard drive; you only need to specify the directory instead of the ISO image.

For example:
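A minimal sketch of a local-directory repo definition follows. Here /mnt/rhel8-repo is a hypothetical path where the ISO contents were copied; the file is written to /tmp for the demo, but on a real system it belongs in /etc/yum.repos.d/:

```shell
#!/bin/sh
# Point dnf/yum at a local directory holding the copied ISO contents
# (/mnt/rhel8-repo is a hypothetical path). No subscription needed.
cat > /tmp/rhel8-local.repo <<'EOF'
[local-baseos]
name=RHEL 8 BaseOS (local)
baseurl=file:///mnt/rhel8-repo/BaseOS
enabled=1
gpgcheck=0

[local-appstream]
name=RHEL 8 AppStream (local)
baseurl=file:///mnt/rhel8-repo/AppStream
enabled=1
gpgcheck=0
EOF
grep -c '^baseurl=file://' /tmp/rhel8-local.repo
```

With the file in place under /etc/yum.repos.d/, a plain `dnf install <package>` resolves from the local directory.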


Kickstart has also changed, though not much: auth and authconfig are deprecated, and you need to use authselect instead.
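A hypothetical kickstart fragment showing that change (sssd is a stock authselect profile; the RHEL 7 line is kept only as a comment for contrast):

```shell
#!/bin/sh
# Write a kickstart snippet illustrating the auth -> authselect change.
# Written to /tmp here so the demo runs anywhere.
cat > /tmp/ks-snippet.cfg <<'EOF'
# RHEL 7 style (deprecated):
#   auth --enableshadow --passalgo=sha512
# RHEL 8 style:
authselect select sssd with-faillock
EOF
grep '^authselect' /tmp/ks-snippet.cfg
```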

Source: Red Hat RHEL 8 release notes, Red Hat blogs, Linux Journal, etc.

Unikernel: Another paradigm for the cloud

In this cloud era it is hard to imagine a world without access to services in the cloud. From contacting someone through mail to storing work-related documents on an online drive and accessing them across devices, there are a lot of services we use on a daily basis that live in the cloud.

To reduce the cost of compute power, virtualization has been adopted to offer more services with less hardware. Then came the concept of containers, where you deploy the application in isolated containers built from lightweight images containing only the binaries and libraries needed to run it. But we still need underlying VMs to deploy such solutions, and all these VMs come at a cost. While large data centers offer services in the cloud, they are also hungry for electric power, which is a growing concern as our planet is drained of its resources. What we need now are less power-hungry solutions.

What if, instead of virtualizing an entire operating system, you were to load an application with only the required components of the operating system, effectively reducing the virtual machine to its bare minimum resource footprint? This is where unikernels come into play.


Unikernel is a relatively new concept that was first introduced around 2013 by Anil Madhavapeddy in a paper titled “Unikernels: Library Operating Systems for the Cloud” (Madhavapeddy, et al., 2013).

You can find more details on unikernels by searching the scholarly articles on Google.

Unikernels are defined by the community as follows.

“Unikernels are specialized, single-address-space machine images constructed by using library operating systems.”

For more detailed reading about the concepts behind unikernels, please follow this link.

A unikernel is an application that has been boiled down to a small, secure, lightweight virtual machine, eliminating the need for a general-purpose operating system such as Linux or Windows. Unikernels aim to be a much more secure system than Linux, through several thrusts: having no notion of users, running a single process per VM, and limiting the amount of code incorporated into each VM. This means there are no users and no shell to log into and, more importantly, you can’t run anything beyond the one program you want to run inside. Despite their relatively young age, unikernels borrow from age-old concepts rooted in the dawn of the computer era: microkernels and library operating systems. A unikernel holds a single application. Single address space means that, at its core, the unikernel does not have separate user and kernel address spaces. Library operating systems are the core of unikernel systems. Unikernels are provisioned directly on the hypervisor without a traditional operating system like Linux, so a server can reportedly run up to 1000x more VMs.

You can have a look here for details about microkernels, monolithic kernels and library operating systems.

Virtual Machines vs Linux Containers vs Unikernels

Virtualization of services can be implemented in various ways. One of the most widespread methods today is the virtual machine, hosted on hypervisors such as VMware’s ESXi or the Linux Foundation’s Xen Project.

Hypervisors allow hosting multiple guest operating systems on a single physical machine; these guests are executed in what are called virtual machines. The widespread use of hypervisors is due to their ability to better distribute and optimize the workload on physical servers, as opposed to legacy infrastructures of one operating system per physical server.

Containers are another method of virtualization, differing from hypervisors by creating virtualized environments that share the host’s kernel. This is a lighter approach than hypervisors, which require each guest to have its own copy of the operating system kernel, making a hypervisor-virtualized environment resource-heavy in contrast to containers, which share parts of the existing operating system.

As mentioned above, unikernels leverage the abstraction of hypervisors and use library operating systems to include only the required kernel routines alongside the application, making them the lightest of the three solutions.

The figure above shows the major difference between the three virtualization technologies. Here we can clearly see that virtual machines present a much larger load on the infrastructure as opposed to containers and unikernels.

Additionally, unikernels are in direct “competition” with containers. By providing services in the form of reduced virtual machines, unikernels improve on the container model with increased security. Because they share the host kernel, containerized applications share the same vulnerabilities as the host operating system. Furthermore, containers do not possess the same level of host/guest isolation as hypervisors/virtual machines, potentially making container breaches more damaging than breaches of either virtual machines or unikernels.

Virtual Machines
– Allows deploying different operating systems on a single host
– Complete isolation from host
– Orchestration solutions available
– Requires compute power proportional to number of instances
– Requires large infrastructures
– Each instance loads an entire operating system

Linux Containers
– Lightweight virtualization
– Fast boot times
– Orchestration solutions available
– Dynamic resource allocation
– Reduced isolation between host and guest due to shared kernel
– Less flexible (i.e. dependent on host kernel)
– Network is less flexible

Unikernels
– Lightweight images
– Specialized applications
– Complete isolation from host
– Higher security against absent functionalities (e.g. remote command execution)
– Not mature enough yet for production
– Requires developing applications from the ground up
– Limited deployment possibilities
– Lack of complete IDE support
– Static resource allocation
– Lack of orchestration tools
A Comparison of solutions

Docker, containerization technology and container orchestrators like Kubernetes and OpenShift are two steps forward for the world of DevOps, and the principles they promote are forward-thinking and largely on-target for a more secure, performance-oriented, easy-to-manage cloud future. However, an alternative approach leveraging unikernels and immutable servers will result in smaller, easier-to-manage, more secure workloads that will be simpler for existing enterprises to adopt. As DevOps matures, the shortcomings of cloud application deployment and management are becoming clear: virtual machine image bloat, large attack surfaces, legacy executables, base-OS fragmentation, and an unclear division of responsibilities between development and IT for cloud deployments are all causing significant friction (and opportunities for the future).

For example: it remains virtually impossible to create a Ruby or Python web server virtual machine image that doesn’t include build tools (gcc), ssh, and multiple latent shell executables. All of these components are detrimental on production systems, as they increase image size, attack surface, and maintenance overhead.

Compared to VMs running operating systems like Windows and Linux, a unikernel has only a tenth of 1% of the attack surface. In the case of a unikernel, sysdig, tcpdump and mysql-client are not installed, and you can’t just “apt-get install” them either; an attacker has to bring all of that with their exploit. To take it further, even a simple cat /etc/hosts or a grep of /var/log/nginx/access.log simply won’t work, because once again those are separate processes.
So unikernels are highly resistant to remote code execution attacks, more specifically shell-code exploits.

Immutable Servers & Unikernels

Immutable servers are a deployment model that mandates that no application updates, security patches, or configuration changes happen on production systems. If any of these layers needs to be modified, a new image is constructed, pushed, and cycled into production. Heroku is a great example of immutable servers in action: every change to your application requires a ‘git push’ that overwrites the existing version. The advantages of this approach include higher confidence in the code running in production, integration of testing into deployment workflows, and easy verification that systems have not been compromised.

Once you become a believer in the concept of immutable servers, then speed of deployment and minimizing vulnerability surface area become objectives. Containers promote the idea of single-service-per-container (microservices), and unikernels take this idea even further.

Unikernels allow you to compile and link your application code all the way down to and including the operating system. For example, if your application doesn’t require persistent disk access, no device drivers or OS facilities for disk access are included in the final production image. Since unikernels are designed to run on hypervisors such as Xen, they only need interfaces to standardized resources such as networking and persistence; device drivers for thousands of displays, disks and network cards are completely unnecessary. Production systems become minimalist, requiring only the application code, the runtime environment, and the OS facilities the application actually uses. The net effect is smaller VM images with less attack surface that can be deployed faster and maintained more easily.

Traditional operating systems (Linux, Windows) will become extinct on servers. They will be replaced by single-user, bare-metal hypervisors optimized for specific hardware, taking decades of multi-user, hardware-agnostic code cruft with them. A more mature build-deploy-manage toolset based on these technologies will be truly game-changing for hosted and enterprise clouds alike.

  • ClickOS – languages: C++; hypervisors: Xen; remarks: Network Function Virtualization
  • IncludeOS – languages: C++; hypervisors: KVM, VirtualBox, ESXi, Google Cloud, OpenStack; remarks: orchestration tool available
  • Nanos Unikernel – languages: C, C++, Go, Java, Node.js, Python, Rust, Ruby, PHP, etc.; hypervisors: QEMU/KVM; remarks: orchestration tool available
  • OSv – languages: Java, C, C++, Node, Ruby; hypervisors: VirtualBox, ESXi, KVM, Amazon EC2, Google Cloud; remarks: cloud and IoT (ARM)
  • Rumprun – languages: C, C++, Erlang, Go, Java, JavaScript, Node.js, Python, Ruby, Rust; hypervisors: Xen, KVM
  • Unik – languages: Go, Node.js, Java, C, C++, Python, OCaml; hypervisors: VirtualBox, ESXi, KVM, Xen, Amazon EC2, Google Cloud, OpenStack, PhotonController; remarks: unikernel compiler toolbox with orchestration possible through Kubernetes and Cloud Foundry
  • ToroKernel – languages: FreePascal; hypervisors: VirtualBox, KVM, Xen, Hyper-V; remarks: unikernel dedicated to running microservices

Comparing a few unikernel solutions from active projects

Out of the various existing projects, some stand out due to their wide range of supported languages. For the active projects, the list above describes the languages they support, the hypervisors they can run on, and remarks concerning their functionality.

I am currently experimenting with unikernels on AWS and Google Cloud Platform and will follow up with another post on that soon.
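To give a feel for the workflow, here is a hedged sketch using OPS, the orchestration tool for the Nanos unikernel listed in the table above. The command name and config keys below are recalled from OPS's documentation, so treat the details as assumptions rather than a reference:

```text
# Boot an ordinary Linux ELF binary as a Nanos unikernel under QEMU/KVM:
#   ops run ./myserver
#
# config.json tells OPS what the image needs besides the binary itself;
# no shell, no package manager, and no unused drivers end up in the image:
{
  "Args": ["./myserver", "--port", "8080"],
  "Env":  {"LOG_LEVEL": "info"}
}
```

The resulting image contains only the application and the kernel facilities it actually uses, which is exactly the minimalism described above.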

Sources: Medium, GitHub, Container Journal, Linux Journal

Search engine – A different view

Most of us search the internet for almost everything that comes to mind. If you’re one who does so, it’s very likely you use a search engine for that, and though there is more than one search engine available, chances are you’re using Google.

Google is one of the world’s largest companies. Though their products have spread into many areas of our lives, they are still best known for their search engine; the phrase “You can just google that” emphasizes it. But there is one thing that you should keep in mind when you’re “googling” for information: to keep their services free, Google records a staggering amount of data about your online habits. Google in turn uses this data for targeted advertising, which is Google’s primary source of income.

If you’re looking for a privacy-focused alternative to Google search, then you should get your hands dirty with DuckDuckGo. DDG’s tagline describes it as “the search engine that doesn’t track you”. For those who want to take anonymity a step further, Tor Browser users are presented with DDG search results by default. If you’re familiar with the Tor Browser and the .onion TLD on the Dark Web, you might have used DDG there. I mentioned DDG briefly in my earlier post on the Dark Web.

Most search engines collect and store search data, but Google links that data to your account. This collected information is used to personalize your search results, and that’s how Google shows you targeted advertising. DDG, on the other hand, doesn’t track you and opts not to personalize your search results; it focuses on search result quality over quantity.

DuckDuckGo has grown steadily since its inception in 2008, skyrocketing from an average of 79,000 daily searches in 2010 to around 40 million daily, with a cumulative total of some 30 billion searches as of this writing. They have partnered with many Linux distributions and have native apps for both Android and iOS. DuckDuckGo is also a search provider option on most mainstream browsers, which has helped it gain traction, and Chrome, Firefox, Opera, and Safari users can also install the DuckDuckGo Privacy Essentials extension. The extension blocks hidden advertising trackers, forces sites to switch to HTTPS where possible, and gives you quick access to DDG’s search.

Most of you will be wondering about the profitability of the company behind DuckDuckGo. DuckDuckGo was first launched in 2008 by its founder and CEO, Gabriel Weinberg. It is still owned and operated by Weinberg under the privately held company DuckDuckGo Inc. The company currently has over 65 employees working behind the scenes to continue development of DDG. Before creating DDG, Gabriel Weinberg developed one of the first-wave social networks, Names Database, which he sold for approximately $10 million in 2006. That money was used to self-fund the development of DDG through the company’s early years. Weinberg later co-authored Traction, a book about startup growth.

Weinberg’s initial cash injection carried the company for some years. In 2011, the venture capital firm Union Square Ventures invested in DDG; according to Crunchbase, that initial funding round netted DDG an additional $3 million. To date, their external fundraising has generated $13 million. However, venture capital investments don’t make a company profitable. To create a financially sustainable business model, DDG displays advertising. Unlike other search engines, the adverts are not based on targeted data. Instead, the ads are based exclusively on the keywords in your search. All of DDG’s advertising is syndicated by Yahoo, which is part of the Yahoo-Microsoft search alliance.

While DDG doesn’t provide any personal data to either company, the inclusion of two technology giants with questionable attitudes to privacy might make you uncomfortable. That’s why DDG allows you to head over to the settings and disable advertisements. This is one of the most important points in the DuckDuckGo vs. Google battle. DuckDuckGo is also part of Amazon and eBay’s affiliate programs. If you click through to either site from your search results and make a purchase, DDG receives a small percentage of the sale. However, no personal information is passed through to either company.

Developments in recent years have shown that many technology companies can’t be trusted with your data. From Facebook selling your data to unscrupulous third parties to Timehop losing the personal information of over 21 million accounts, there are many data breaches that may have put you at risk. So, it’s only natural that you would question why you should trust DuckDuckGo. The founder’s privacy-focused background and the company’s admirable business model are excellent starting points, but there are plenty more reasons to trust DDG. If you’ve been wondering is DuckDuckGo safe, then these points may reassure you.

Privacy Policy
Their clearly written Privacy Policy also makes for reassuring reading, providing detail on the small amount of information they do collect. The key takeaways are that they do not store IP addresses or unique User-Agent identification and will set a cookie only for saving site settings.

It ends with the assuring statement:

“…we will comply with court-ordered legal requests. However, in our case, we don’t expect any because there is nothing useful to give them since we don’t collect any personal information.”

Open Source
As well as being built using free and open source software (FOSS), DDG has made parts of their software open source. Many of the site’s designs, mobile apps, browser extensions, whitelists, and instant answers are available on DuckDuckGo’s GitHub page. Although the primary search core is proprietary, open-sourcing most other parts of the site means that, given the inclination, anyone can view the code.

Donations to Privacy
Like many companies, DDG also donates a portion of their income to good causes. They specifically select organizations which share their “vision of raising the standard of trust online.” Each year DuckDuckGo selects a new group of organizations, even reaching out to Reddit for suggestions. To date, they have donated $1.3 million to their chosen beneficiaries. The Donations page on their website lists each donation they’ve made, arranged by year.

Beyond Search
In January 2018, DuckDuckGo moved beyond search, releasing a suite of tools to help you maintain your privacy across the internet. They revamped their browser extensions and mobile apps to include tracking protection, encryption, and quick access to their private search. The update also added a Site Privacy Grade rating from A through to F, for you to gauge how much a site maintains your privacy. Many of the features found in the browser extension and mobile apps aim to stop tracking and protect your privacy. In other words, DuckDuckGo’s privacy apps want to keep you safe online.

DuckDuckGo has another trick up its sleeve: bangs.

Bangs allow you to search third-party sites directly from DuckDuckGo. Say you wanted to search a specific site: Google lets you narrow results with a site search operator, but with DDG’s bangs you simply type !muo (the bang for MakeUseOf) followed by your search term. There are plenty of bangs that make Google search look slow. What’s more, searching a site with any of the thousands of available bangs takes you directly to that site, rather than to the search engine’s results. And if you find yourself missing Google’s tailored results, adding !g to your query will take you straight there.
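Under the hood a bang is just part of the query string, so you can even build bang searches programmatically. A minimal sketch in Python, assuming only DDG’s public search URL format (duckduckgo.com/?q=...):

```python
from urllib.parse import urlencode

def ddg_bang_url(bang: str, query: str) -> str:
    """Build a DuckDuckGo URL whose query triggers a !bang redirect."""
    # DDG reads the leading "!bang" token in q and forwards you to the
    # target site's own search results.
    return "https://duckduckgo.com/?" + urlencode({"q": f"!{bang} {query}"})

print(ddg_bang_url("g", "unikernels"))
# https://duckduckgo.com/?q=%21g+unikernels
```

Opening such a URL in any browser behaves exactly like typing the bang into the DDG search box.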

Google became the dominant force in search by offering you personalized search. They built incredibly useful apps and services which captured even more of our data to improve your search results further. However, in light of several privacy scandals in recent years, we are becoming more cautious with our data.

Alongside search, Google operates some of the web’s most used software including Gmail, Docs, Drive, Calendar, and more besides. Google’s access to vast amounts of your data means that its results can be deeply personalized and their search page pulls it all together in one place.

DuckDuckGo doesn’t have any personal data to draw from, and so makes itself stand out in other ways. It’s one of the many ways that DuckDuckGo protects your personal information online.

This privacy-focused environment is almost the exact reverse of Google’s highly targeted surroundings. There are no personalized ads, no personal search results, and no filter bubble. Depending on your point of view, this is either one of DDG’s best or worst features. For the privacy-minded, this lack of tracking is likely to seal the deal.

Before we conclude, a simple demo of Google’s targeted suggestions is illustrated below. This will help you understand how Google is tracking you right from your very first search, even in incognito mode.

Open Google Chrome in incognito mode and search for something like “Stock Market”:

Now remove the “Stock Market” keyword from the search bar, type something random like how, when, what, or why, and look at the suggestions.

You can even try some random words like Best, King, Joke, etc.

Google Search manipulates the suggestions in order to engage you more and more with the topic you were looking at. This is just the beginning of how Google can manipulate your search behavior, which results in lost time and information bombardment. There are many more ways Google uses your personal data.
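You can inspect the raw suggestion data yourself. The snippet below builds the request URL for the widely known, but unofficial and undocumented, suggestqueries.google.com autocomplete endpoint, and parses a canned response in the shape that endpoint returns. Both the endpoint and the response shape are assumptions here, not a supported API:

```python
import json
from urllib.parse import urlencode

def google_suggest_url(term: str) -> str:
    """URL for Google's (unofficial) autocomplete endpoint."""
    return ("https://suggestqueries.google.com/complete/search?"
            + urlencode({"client": "firefox", "q": term}))

print(google_suggest_url("how"))
# https://suggestqueries.google.com/complete/search?client=firefox&q=how

# The firefox-client response is a JSON array: [query, [suggestions...]].
# A canned example response, for illustration only:
sample = '["how", ["how to buy stocks", "how does the stock market work"]]'
query, suggestions = json.loads(sample)
print(suggestions[0])  # how to buy stocks
```

Fetching that URL in a fresh session versus one with search history is an easy way to see how much the suggestions are steered.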

DDG, on the other hand, doesn’t behave like this. We’ll now do the same exercise with DuckDuckGo so you can visualize the difference.

DuckDuckGo proves that user privacy and usefulness aren’t mutually exclusive. Isn’t it the clear winner of the DuckDuckGo vs. Google fight?

DuckDuckGo appeals to the privacy-minded, but importantly, it isn’t a niche product. There are a range of useful features and some DuckDuckGo search tricks that don’t even work on Google.

If you do decide to stick with Google, take a look at the ways to customize your Google Search results.

Clear Web, Deep Web and Dark Web

An Introduction

All of you might have heard about the Internet and the WWW, but today we’re about to examine the other side of the internet. How much do we really know about it? As we always say, there are two sides to a coin, and there exists a dark side of the internet too.

Before going too deep into the WWW, we’ll see how to distinguish between the Clear, Deep, and Dark webs. The illustration below sheds some light on the things we are discussing here. ##

  • Clear Web: This is the most common one. It’s the internet we use on a daily basis, on mobiles or on desktops/laptops, to check email, read news, access Facebook, Twitter, Instagram, etc., and for online shopping and booking tickets.
  • Deep Web: This is the part, or subset, of the internet that isn’t necessarily malicious but is simply too large and/or obscure to be indexed, due to the limitations of crawling and indexing software (like Google/Bing/Baidu). This means that you have to visit those places directly instead of being able to search for them. There aren’t directions to get there, but they’re waiting if you have an address.
  • Dark Web: The Dark Web (also called the Darknet) is the ill-famed subset of the Deep Web that is not only unindexed, but is also used by those who purposely try to control access, either because they have a strong desire for privacy or because what they’re doing is illegal. The Dark Web often sits on top of additional sub-networks, such as Tor, I2P, and Freenet, and is often associated with criminal activity of various degrees, including buying and selling drugs, pornography, gambling, etc. Though the deep web makes up 95% of all the internet, the dark web consists of only about 0.03%, yet that small section has millions of monthly users. The dark web is usually what people actually mean when they refer to the deep web. Although both are technically correct, it is important to keep them separate so there is a standardised phrase.

Deep Web & Dark Web

Computer scientist Mike Bergman is credited with coining the term in 2000. The Dark Web is a small part of the Deep Web which is not indexed by search engines, and sometimes the term “deep web” is mistakenly used to refer specifically to the dark web. Most of the web’s information is buried far down on sites, and standard search engines do not find it: traditional search engines like Google, Bing, etc. cannot see or retrieve content in the deep web. The portion of the web that is indexed by standard search engines is known as the surface web, clear net, or clear web. As of 2001, the deep web was several orders of magnitude larger than the surface web.

Methods which prevent web pages from being indexed by traditional search engines may be categorized as one or more of the following: **
  • Dynamic content: dynamic pages which are returned in response to a submitted query or accessed only through a form, especially if open-domain input elements (such as text fields) are used; such fields are hard to navigate without domain knowledge.
  • Unlinked content: pages which are not linked to by other pages, which may prevent web crawling programs from accessing the content. This content is referred to as pages without backlinks (also known as inlinks). Also, search engines do not always detect all backlinks from searched web pages.
  • Private Web: sites that require registration and login (password-protected resources).
  • Contextual Web: pages with content varying for different access contexts (e.g., ranges of client IP addresses or previous navigation sequence).
  • Limited access content: sites that limit access to their pages in a technical way (e.g., using the Robots Exclusion Standard or CAPTCHAs, or a no-store directive which prohibits search engines from browsing them and creating cached copies).
  • Scripted content: pages that are only accessible through links produced by JavaScript, as well as content dynamically downloaded from web servers via Flash or Ajax solutions.
  • Non-HTML/text content: textual content encoded in multimedia (image or video) files or specific file formats not handled by search engines.
  • Software: certain content is intentionally hidden from the regular internet, accessible only with special software, such as Tor, I2P, or other darknet software. For example, Tor allows users to access websites using the .onion host suffix anonymously, hiding their IP address.
  • Web archives: web archival services such as the Wayback Machine enable users to see archived versions of web pages across time, including websites which have become inaccessible and are not indexed by search engines such as Google.
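To make the “Limited access content” category above concrete, here is what opting out of indexing typically looks like in practice. The paths and values are illustrative; the mechanisms themselves are the standard Robots Exclusion conventions:

```text
# robots.txt at the site root: ask compliant crawlers to skip /private/
User-agent: *
Disallow: /private/

<!-- per page: an HTML robots meta directive -->
<meta name="robots" content="noindex, nofollow">

# per response: an HTTP header with the same effect
X-Robots-Tag: noindex
```

Note that these are requests, not enforcement: well-behaved crawlers honour them, which is precisely why such pages end up in the deep web rather than the search index.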

How to access

Now we will discuss the most important part: how do we access the Deep Web and the Dark Web?
If you have an email ID, a Facebook account, a Twitter account (collectively speaking, any social media account), or if you use internet banking, you are accessing the Deep Web on a daily basis.
Wondering how? The databases which store your user information and other details about your account reside in the deep web.
So now the sole enigma is how to access the Dark Web.
You cannot simply access the dark web from a normal web browser like Chrome; you can only access it through a dark web browser. The most famous of these is called Tor, and it is the one we recommend you get if you’re looking to get onto the dark web.


Tor is free software for enabling anonymous communication. The name is derived from an acronym for the original software project name “The Onion Router”. Tor directs Internet traffic through a free, worldwide, volunteer network consisting of more than seven thousand relays to conceal a user’s location and usage from anyone conducting network surveillance or traffic analysis. Using Tor makes it more difficult for Internet activity to be traced back to the user: this includes “visits to Web sites, online posts, instant messages, and other communication forms”. Tor’s use is intended to protect the personal privacy of users, as well as their freedom and ability to conduct confidential communication by keeping their Internet activities from being monitored.
It was developed in the mid-1990s by United States Naval Research Laboratory employees, mathematician Paul Syverson and computer scientists Michael G. Reed and David Goldschlag, with the purpose of protecting U.S. intelligence communications online. Onion routing was further developed by DARPA in 1997
Downloads of Tor soared in August by almost 100% as the general population became more and more concerned about their privacy amid revelations about US and UK intelligence agencies monitoring web traffic. In short, more and more people are turning to the deep web to get their internet fix and protect their information.
This is because when you’re using Tor, or any other deep web browser, you are largely anonymous: your location cannot easily be picked up and neither can your browsing habits. Very little of what you do in the deep web can be monitored, and as such the deep web is becoming a more attractive option for all internet users, those who know about it at least.
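The layering that gives onion routing its name can be sketched with a toy example. Here XOR with per-hop keys stands in for real encryption (Tor actually negotiates proper cipher keys with each relay through circuit handshakes); the point is only that each relay can peel exactly one layer and learns nothing about the rest of the path:

```python
from itertools import cycle

def toy_layer(data: bytes, key: bytes) -> bytes:
    # XOR keystream: a stand-in for a real cipher, NOT actual secrecy.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

hop_keys = [b"entry-key", b"middle-key", b"exit-key"]
message = b"hello hidden service"

# The client wraps the message once per relay; the innermost layer is
# the one only the exit relay can remove.
packet = message
for key in reversed(hop_keys):
    packet = toy_layer(packet, key)

# Each relay, in path order, peels one layer with the key only it shares.
for key in hop_keys:
    packet = toy_layer(packet, key)

print(packet.decode())  # hello hidden service
```

The entry relay sees who you are but not what you asked for; the exit sees the request but not who made it. That split is the core of Tor’s anonymity.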


.onion is a special-use top level domain suffix designating an anonymous hidden service reachable via the Tor network. Such addresses are not actual DNS names, and the .onion TLD is not in the Internet DNS root, but with the appropriate proxy software installed, Internet programs such as web browsers can access sites with .onion addresses by sending the request through the network of Tor servers. The purpose of using such a system is to make both the information provider and the person accessing the information more difficult to trace, whether by one another, by an intermediate network host, or by an outsider. Addresses in the .onion TLD are generally opaque, non-mnemonic, 16-character alpha-semi-numeric hashes which are automatically generated based on a public key when a hidden service is configured. These 16-character hashes can be made up of any letter of the alphabet, and decimal digits from 2 to 7, thus representing an 80-bit number in base32. The “onion” name refers to onion routing, the technique used by Tor to achieve a degree of anonymity.
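The 16-character hash described above is straightforward to reproduce. For legacy (v2) hidden services, the address label is the base32 encoding of the first 80 bits of the SHA-1 digest of the service’s DER-encoded public key; a sketch with a dummy byte string standing in for a real key (current v3 services use longer, Ed25519-based addresses):

```python
import base64
import hashlib

def v2_onion_address(der_public_key: bytes) -> str:
    """Legacy v2 scheme: base32 of the first 80 bits of SHA-1(pubkey)."""
    digest = hashlib.sha1(der_public_key).digest()[:10]   # 80 bits
    return base64.b32encode(digest).decode().lower() + ".onion"

# Dummy bytes stand in for a real DER-encoded RSA public key.
addr = v2_onion_address(b"not-a-real-key")
print(addr)  # 16 base32 characters followed by .onion
```

Since 80 bits encode to exactly 16 base32 characters (a–z and 2–7), this also explains why the addresses look like opaque, non-mnemonic hashes.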
Apologies for deviating from the topic of accessing the Dark Web, but without a mention of Tor and .onion we couldn’t jump straight into it.

Steps to accessing the Dark Web

The method mentioned below is the simplest way to access the Dark Web. There are other methods, and Tor-specific OSes, for accessing the contents of the Dark Web.
Download and install TOR Browser Bundle from the TOR project site.
TOR Browser Bundle is a browser configured to browse using the Tor relay network. The browser component is actually built on the Firefox codebase, so if you have used Firefox you will find it pretty familiar. It comes properly configured and won’t install any malware or cruft on your computer, as it is written and maintained by freedom fighters, not some large corporation.
Once you install it, fire up your tor browser and you are good to go into the Dark Web.
You can browse Dark Web sites with this browser by pasting the links into its address bar. The following link will give you a good start. Don’t forget to go through the links “The Matrix” and “How to Exit the Matrix” in this link. Also, I personally recommend using Linux, or any other non-Windows OS, for playing with Tor and the Dark Web. Please check this link for some security measures: Do’s and Don’ts


Search Engines TOR specific:

More onion URLs are found in the Hidden Wiki

## – This illustration is just for demonstration purpose

** – This content is as is from Wikipedia


This is a personal blog. All content provided on this blog is for informational purposes only. They are collated from different sources and some are my own.The owner of this blog makes no representations as to the accuracy or completeness of any information on this site or found by following any link on this site.

The owner of THIS BLOG will not be liable for any errors or omissions in this information nor for the availability of this information. The owner will not be liable for any losses, injuries, or damages from the display or use of this information 
Data courtesy: Rational Wiki, Popular Science, TORProject, Hacked

Your Facebook Account – Tips for a secure profile


Security and privacy are not interchangeable, and we must have both in order to protect our data. So what’s the difference between the two? It’s an important one.

Information security is commonly referred to as the confidentiality, availability, and integrity of data. In other words, it is all of the practices and processes that are in place to ensure data isn’t being used or accessed by unauthorized individuals or parties

Information privacy is suitably defined as the appropriate use of data. When companies and merchants use data or information that is provided or entrusted to them, the data should be used according to the agreed purposes

Ultimately, security and privacy must go hand-in-hand. So talking about Facebook data you’re sharing on your page, either it’s a post, a photo or a status update, you should be well aware of the magnitude of its reach.

Sharing information on your Facebook timeline may seem harmless, but sometimes it can turn into a nightmare and/or can be objectionable to your friends and family.

Never accept a friend request from someone you don’t know, even if they know a friend of yours. Don’t share information that you don’t want to become public. Providing too much information in your profile can leave you exposed to people who want to steal your identity or other sensitive information.

There are a lot of ways to protect your privacy on Facebook, and a lot of people are not aware that most information is available to everyone. After some brief research I did on how to set your settings to the highest security level (actually, I did this when one of my Facebook friends posted some X-rated content on my timeline 🙂 ), I thought of sharing it.

So, here you go –

Who views your Facebook posts?

Log into Facebook. Go to Settings, Privacy.

There you have three subjects: ‘Who can see my stuff?’, ‘Who can contact me?’, and ‘Who can look me up?’.

To maximize the security on your page, choose ‘Only me’ or ‘Only friends’. This prevents other people from seeing stuff that you post.

Under ‘Who can look me up?’, Facebook will ask, “Do you want other search engines to link to your timeline?” Do not let other search engines link to your timeline.

What about the privacy of your Timeline and Tags?

When you post pictures on your timeline you can change the settings on who can see, comment or add things to it. You can also determine who is allowed to post stuff on your timeline.

Log into Facebook. Go to Settings, Timeline and Tagging.

There you have three subjects: ‘Who can add things to my timeline?’, ‘Who can see things on my timeline?’, and ‘How can I manage tags people add, and tagging suggestions?’.

The best thing to do is to make this available to just you or your friends only. Don’t have friends of friends put stuff or comments on your pictures and keep this as private as you can.

When you’re “tagged” in a post, it means that someone has created a link to your profile. You can turn on Tag Review to review tags friends add to your content before they appear on Facebook. In the ‘How can I manage tags people add and tagging suggestions?’ section, click Edit and change the setting from Disabled to Enabled.

How safe is your profile picture?

A new update to Facebook allows you to change the size of your profile picture. It also allows you to make the picture private and unclickable.

Through Photoshop, Microsoft Paint, or any other photo-editing software, you can change the size to 180 x 180 pixels. This will make it a smaller, square image. Save it and use it as your profile picture on Facebook.

Then, click on your profile picture. You will see a new edit button.

Click on the ‘Public’ dropdown menu and select ‘Only Me’.

Disclaimer: I have not tried the above one but came to know from reliable sources that it works 🙂

Alright so now your photos and your status updates etc. are “Friends Only” which of course they always should be, all of the time.

But there are other things that shouldn’t be visible to the public as well. There are few things that many people still make public on their Facebook (and other social media) accounts, but they really should be private, or better yet, not on there at all.

Did you share your D.O.B.?

Okay, so you have to tell Facebook how old you are so they know you’re old enough to use the site. But make sure this information is set to Friends Only. Not only that, but we also recommend keeping your birth year set to Only Me. After all, your friends don’t really need to know how old you’re getting!

Your DOB is an essential and integral part of your personal information, and contributes to the “security questions” to many online and offline accounts, including your bank! Thus keeping this information away from strangers is important.

The privacy settings for your Date of Birth can be found in your About section under Contact & Basic Info.

How many friends are there in your list & who are they?

Did you know that if you have a public friends list then you are a prime target for a Facebook cloning scam? Scammers clone your account by creating a new account with your profile picture and Facebook account name. They then go to your friends list and start sending friend requests to your contacts, trying to lure them into believing it’s you!

To hide your friends list, go to your profile and click Friends. Click the Edit Privacy option and select Only Me.

Do you have your correct Address in your profile?

Okay, we know your address isn’t set to public. You’re not that naïve.

But we thought we’d include it here, mainly because it really shouldn’t be on Facebook AT ALL, and we know many people do include it.

So we hope the reasons for not having your address public are self-evident, but even having it as Friends Only or Only Me is not good enough. Despite Facebook having a place to add your address, there really is no good reason why they need it. If a scammer compromises a friend’s account, or your account, they could obtain your address, and that is the last thing you want a scammer to know.

And how about your place of work?

Facebook and the professional lives of its users are becoming increasingly intertwined, and in many cases this really isn’t a good thing. The number of people getting fired from their jobs because of what they post on Facebook is forever increasing, and the last thing you want is for some ill-thought-out post to get reported to your place of work.

The business world is slowly getting a handle on how to mediate social media, and as a result an increasing number of employers are forcing their employees to sign contracts with social media policies, including not posting things that will reflect poorly on them.

Whilst some people are always going to post regrettable content on Facebook, you can try to contain the damage it does by ensuring that people who see it don’t know where you work.

It’s time to get over the illusion that just because you use Facebook for your own personal reasons, it cannot impact your professional life. Things that people say over a keyboard via their personal social media accounts are getting them into trouble, and it’s happening more frequently than ever before.

Therefore, either don’t include your workplace on Facebook at all, or if you do, keep it Friends Only.

Huh.. What do you know about Facebook tracking?

On your settings page, click Ads and where it says Ads based on my use of websites and apps, select Off.

This prevents Facebook from serving you ads based on websites that you have visited outside of Facebook, for example through third-party tracking cookies. However, you should be aware that Facebook may still track you even though this setting is off; it just prevents them from showing you adverts related to your activity.

What the heck is a Facebook app?

Facebook Apps that friends install can access information about you. To stop this happening go to your Settings page and click Apps. Select Apps Others Use and ensure all the checkboxes are deselected.

How many times and from where have you accessed Facebook? a.k.a. Stale Sessions

Every time you log in to Facebook and don’t log out, the session remains open. Most Facebook users will have a number of sessions open at one time, for example for their mobile phone, tablet, and computer, and possibly after logging in from a friend’s or family member’s computer.

A session will have the approximate location recorded next to it, often the nearest city or the location of your nearest ISP point. We recommend checking your open sessions and closing any that are suspicious (I recommend closing whichever are backdated), i.e. nowhere near a location where you’ve logged in. This prevents that device from auto-logging you back into Facebook.

To do this, go to your Settings page and click Security and select Where You’re Logged In and review the open sessions. Hover the mouse over the location and it will show you the IP from which you logged in.

Few more recommendations before conclusion:-

Regularly check installed applications by selecting the Apps option in your privacy settings. Make sure you remove untrusted apps.

Be aware that both your profile picture and cover photo are public and this cannot be changed. Do not make either of these photos something you don’t want people seeing.

Regularly check your activity log for public photos that you are tagged in, and untag photos you do not wish to be tagged in.

Suggest to your friends to tighten their privacy settings as well, especially if they regularly upload photos with you in them!

And finally, the most important part of the article, and advice we certainly recommend…

Facebook is a social networking site, designed for sharing. It can be argued that social media privacy is an oxymoron – for this reason never post or upload information onto the site that you cannot risk falling into the wrong hands. 

Always remember: Think before you post. Stay safe. Happy Posting..!!!!



Data courtesy: military and Wiki