Linux memory usage demystified!

Have you ever worried about memory usage on your Linux server?

OK, let’s take a look at Linux memory.

Memory Architecture

To execute a process, the Linux kernel allocates a portion of memory to the requesting process. The process uses that memory as its workspace to perform the required work. Think of the desk assigned to you in your office cubicle, or the desktop of your Windows/Linux machine: most of us scatter text files, documents, PDFs, PPTs and other things across it to get our work done, and it ends up looking muddled. We are very poor at managing a desktop area of, at most, 1366×768 pixels. Now think of the Linux kernel managing gigabytes of memory and imagine how cluttered that could get. Yet the kernel is remarkably efficient at managing its memory area, and it has to allocate space far more dynamically: the number of running processes can reach tens of thousands, while the amount of memory is usually limited. The kernel must therefore handle memory efficiently, or you will end up with an unresponsive system.

32-bit vs. 64-bit

On 32-bit architectures such as the i386, the Linux kernel can directly address only the first gigabyte of physical memory (896 MB when considering the reserved range). Physical memory above ZONE_NORMAL, the region called ZONE_HIGHMEM, must be temporarily mapped into the kernel’s lower 1 GB address range before the kernel can use it. This mapping is completely transparent to applications, but allocating a memory page in ZONE_HIGHMEM causes a small performance degradation.

On the other hand, with 64-bit architectures such as x86_64, ZONE_NORMAL extends all the way to 64 GB, or to 128 GB in the case of IA-64 systems. As you can see, the overhead of mapping memory pages from ZONE_HIGHMEM into ZONE_NORMAL can be eliminated by using a 64-bit architecture.
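On a running system you can see which zones the kernel actually manages; this is a quick sanity check, and the zone names vary by architecture:

```shell
# List the memory zones the kernel manages on this machine.
# On x86_64 you will typically see DMA, DMA32 and Normal; a 32-bit
# kernel with enough RAM would also show a HighMem zone.
grep '^Node' /proc/zoneinfo
```

Each line names a NUMA node and one of its zones, e.g. `Node 0, zone Normal`.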

On 32-bit architectures, the maximum address space that a single process can access is 4 GB. This is a restriction derived from 32-bit virtual addressing. In a standard implementation, the virtual address space is divided into a 3 GB user space and a 1 GB kernel space. There are variants, such as the 4 G/4 G split, that implement a different layout.
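A one-liner shows which side of this limit your userland is on:

```shell
# LONG_BIT is the word size of the userland: 32 means a 4 GB per-process
# virtual address space (3 GB usable under the standard 3G/1G split);
# 64 means the limit is practically out of reach.
getconf LONG_BIT
```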

On the other hand, on 64-bit architectures such as x86_64 and IA-64, no such restriction exists. Each process can benefit from a vast address space.

Memory Usage

On a Linux system, many programs run at the same time. These programs support multiple users, and some are busier than others. Some of these programs use a portion of memory while the rest are “sleeping.” When an application’s data is found in the cache, performance improves, because the data is retrieved from memory, eliminating the need to access slower disks. The OS uses an algorithm to decide which programs stay in physical memory and which are paged out. This is transparent to user programs.

Paging and Swapping

In Linux, and in *NIXes generally, there is a difference between paging and swapping. Paging moves individual pages to swap space on disk, while swapping is a bigger operation that moves the entire address space of a process to swap space in one go.

If your server is constantly paging to disk (a high page-out rate), it is time to increase your physical RAM. On systems with a low page-out rate, however, paging might not affect performance. Paging becomes a serious performance problem when the number of free memory pages falls below the specified minimum: the paging mechanism can no longer keep up with requests for physical pages, and the swap mechanism is called in to free more of them. This significantly increases disk I/O and will quickly degrade a server’s performance.
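You can see the kernel’s cumulative swap counters directly in /proc/vmstat; a minimal check looks like this:

```shell
# Cumulative count of pages swapped in and out since boot.
# Sample these over time: if pswpout climbs steadily, the box is
# paging out to disk and more RAM is likely needed.
grep -E '^pswp(in|out) ' /proc/vmstat
```

The same numbers drive the `si`/`so` columns of vmstat, which reports them as rates rather than totals.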

Memory Available

This indicates how much physical memory is available for use. If, after you start your application, this value has decreased significantly, you might have a memory leak. Check the application that is causing it and make the necessary adjustments.

System Cache

This is the common memory space used by the file system cache.

Page Faults

There are two types of page faults: soft page faults, when the page is found in memory, and hard page faults, when the page is not found in memory and must be fetched from disk. Accessing the disk will slow your application considerably.
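The kernel keeps both counters per process in /proc/&lt;pid&gt;/stat; as a small illustration, this inspects the awk process itself via /proc/self (field 10 is minflt, field 12 is majflt):

```shell
# Minor (soft) and major (hard) page-fault counters for a process.
# A minor fault is resolved from memory; a major fault needs a disk read.
awk '{print "soft faults:", $10, " hard faults:", $12}' /proc/self/stat
```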

Private Memory

This represents the memory used by each process running on the server.
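A quick way to see which processes hold the most memory is to sort by resident set size (note that RSS slightly overstates truly private memory, since it also counts shared pages):

```shell
# Resident set size (RSS, in kB) per process, biggest consumers first.
ps -eo pid,comm,rss --sort=-rss | head -n 5
```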

OK, some of you probably find this too boring by now, so let’s look at it in real time.

To explain memory usage, I’ll illustrate with examples from my virtual test server running a PostgreSQL DB instance on CentOS 6.3.
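The snapshot discussed next is produced with free in megabyte mode (note that newer procps-ng releases have replaced the -/+ buffers/cache line with an “available” column, so your output may differ from a CentOS 6 box):

```shell
# Report memory totals in MB. On CentOS 6's procps this prints the
# Mem:, "-/+ buffers/cache" and Swap: lines; newer versions show an
# "available" column instead of the -/+ line.
free -m
```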

Looking at typical free output from this server, we can conclude that the system has approximately 2 GB of RAM and that nearly 95% of it is in use. If we look at the Swap: line, we see that the swap space appears to be unused.

In between the Mem: and Swap: lines, we see a line labeled -/+ buffers/cache. This is probably the trickiest part of interpreting free’s output. This line shows how much of the physical memory is used by the buffer/cache; in other words, it shows how much memory is being used (a better word would be borrowed) for disk caching. Disk caching makes the system run much faster.

So, while at first glance, this system appears to be running short of memory, it’s actually just making good use of memory that’s currently not needed for anything else. The key number to look at in the output above is, therefore, 1703. This is the amount of memory that would be made available to your applications if they need it.
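You can reproduce free’s “-/+ buffers/cache” arithmetic straight from /proc/meminfo: effectively free memory is MemFree plus Buffers plus Cached (a close approximation; the kernel’s newer MemAvailable field is a more precise estimate):

```shell
# Effectively-free memory, the way old free's -/+ buffers/cache line
# computed it. /proc/meminfo values are in kB.
awk '/^MemTotal:/ {t=$2}
     /^MemFree:/  {f=$2}
     /^Buffers:/  {b=$2}
     /^Cached:/   {c=$2}
     END {printf "total: %d MB, effectively free: %d MB\n", t/1024, (f+b+c)/1024}' /proc/meminfo
```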

Now let’s look at the swap and paging activity.

NOTE: If your system is busy and you want to watch how memory usage is changing, you can run free with a -s (seconds) argument, which makes the command print totals every given number of seconds. You need to start worrying about memory if and only if you see that swap usage is high.
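For example (the -c count flag is a procps-ng option that stops after a fixed number of samples; on older procps you simply interrupt with Ctrl-C):

```shell
# Print memory totals in MB every second, 2 samples, then exit.
free -m -s 1 -c 2
```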

Why Linux?

Why do I use Linux? Why do I spend much of my time suggesting others use it? Is it just because it’s available for free? These are interesting questions that are not discussed very often.

I post my personal motives for using Linux below. Some are practical, while others are more a matter of principle.

If you’re one of those quivering on the brink of switching to Linux, reading this list might be a good place to start, and you may find some inspiration to make the leap.

  • Control over my system

There’s no “right way” or “wrong way” of doing things (although there are sensible and efficient ways of doing things, of course). I have the freedom to do what I want with Linux. In the Linux community, you’ll never hear somebody say, “Hey! You’re not supposed to do that!” or, “Serves you right for doing it the wrong way!” Instead, what you’re more likely to hear is, “Hey! I didn’t know you could do that! That’s cool!” Innovative solutions are encouraged. There is more than one way to do it, a.k.a. TIMTOWTDI.

This freedom extends to my choice of software applications I use on my system. If I don’t like a particular piece of software, I can use an alternative. This is true even of desktop or system components.

  • Community

All operating systems tend to generate communities around them, but the community around Linux is proactive rather than passive. What does that mean? Well, a typical posting in a Windows community forum might be something like this: “Hey! Feature X doesn’t work as it should! This sucks! I sure hope Microsoft fixes it soon!” The same posting in a Linux forum is more likely to be like this: “Hey! Feature X doesn’t work as it should! Here’s a solution…” Not only that, but in the replies there will be several other solutions from other people, or the original solution will be improved by others. Not only that, but a developer might read the posting and offer a tweak for the original program, or even start his or her own alternative project.

People in the Linux community share what they know. This is the whole damn point. Linux is based on the fundamental concept that knowledge wants to be free. Personally, I think this is awe-inspiring, not least because Linux brings out the very best in people.

What this also means is that, if you solve a problem and take just a few moments to share it, you’re making Linux stronger. You are both a user and a contributor.

  • Virus-Free

This is a very practical reason for liking Linux, but no less valid: when using Linux, I don’t have to worry about viruses. Viruses are an ever-present threat to Windows, a kind of terrorism for computers.

I won’t brag that there are no viruses for Linux, but I’m fairly certain that there are fewer viruses in active circulation. Any that arise tend to die out quickly, simply because it’s much harder to infect a Linux system due to the way it’s built.

In addition, there’s a secret I read on some hacker blogs and sites that is rarely discussed: the kind of people who create viruses respect and like Linux. They don’t want to damage it, either in practical terms or by damaging its reputation. There’s also the fact that Linux is still a minority operating system, and virus writers tend to target the biggest fish in the pond.

The lack of viruses means I don’t have to have an annoying antivirus program installed on my computer. There are no irritating pop-ups from the virus checker telling me it’s doing a good job, and no daily/weekly scans rendering my computer almost unusable. (IMHO nearly all antivirus programs on Windows are almost as bad as the viruses they claim to protect the user from.)

  • It’s Free Of Charge

This is another very practical reason, but it really can’t be overstated. Linux doesn’t cost anything, and everybody in the world, therefore, has access to it. Who can argue with that?

And finally, if I’m to list my personal reasons for liking Linux, I’ll put it this way:

  • It’s free.
  • It can run on pretty much any hardware.
  • It is highly scalable… I can install it on a 486 or a dual core.
  • Help is readily available and free of charge.
  • Well documented.
  • An inbuilt standard help system that is actually useful (man pages).
  • Powerful CLI.
  • Linux can be configured to run without a GUI for max performance. This is especially useful for servers. Other operating systems don’t have this luxury.
  • Linux will actually give you a reason why something had an error.
  • You can choose which type of desktop environment you want.
  • With Linux, we can compile our own kernel, so we don’t have to wear a one-size-fits-all hat.
  • When you “end a task” in Linux, it actually works (SIGTERM, SIGHUP, SIGKILL).
  • The command line auto complete feature works the way you expect it to.
  • Linux tends not to hide details.
  • You can choose a filesystem that better fits your needs. 
  • When there is a security exploit, I can expect a patch almost immediately, often the very next day.


Linux – Most Popular Operating System

Before Linux

In order to understand the popularity of Linux, we need to travel back in time, about 30 years ago.

Imagine computers as big as houses, even stadiums. While the sizes of those computers posed substantial problems, there was one thing that made this even worse: every computer had a different operating system. Software was always customized to serve a specific purpose, and software for one given system didn’t run on another system. Being able to work with one system didn’t automatically mean that you could work with another. It was difficult, both for the users and the system administrators.

Computers were extremely expensive then, and sacrifices had to be made even after the original purchase just to get the users to understand how they worked. The total cost per unit of computing power was enormous.

Technologically the world was not quite that advanced, so they had to live with the size for another decade. In 1969, a team of developers in the Bell Labs laboratories started working on a solution for the software problem, to address these compatibility issues. They developed a new operating system, which was

  • Simple and elegant.

  • Written in the C programming language instead of in assembly code.

  • Able to recycle code.

The Bell Labs developers named their project “UNIX.”

The code recycling features were very important. Until then, all commercially available computer systems were written in a code specifically developed for one system. UNIX on the other hand needed only a small piece of that special code, which is now commonly named the kernel. This kernel is the only piece of code that needs to be adapted for every specific system and forms the base of the UNIX system. The operating system and all other functions were built around this kernel and written in a higher programming language, C. This language was especially developed for creating the UNIX system. Using this new technique, it was much easier to develop an operating system that could run on many different types of hardware.

The software vendors were quick to adapt, since they could sell ten times more software almost effortlessly. Strange new situations came into existence: imagine, for instance, computers from different vendors communicating on the same network, or users working on different systems without needing extra education to use another computer. UNIX did a great deal to help users move between different systems.

Throughout the next couple of decades, the development of UNIX continued. More things became possible to do and more hardware and software vendors added support for UNIX to their products.

UNIX was initially found only in very large environments with mainframes and minicomputers (note that a PC is a “micro” computer). You had to work at a university, for the government or for large financial corporations in order to get your hands on a UNIX system.

But smaller computers were being developed, and by the end of the ’80s, many people had home computers. By that time, there were several versions of UNIX available for the PC architecture, but none of them was truly free and, more importantly, they were all terribly slow, so most people ran MS-DOS or Windows 3.1 on their home PCs.

Linus and Linux

By the beginning of the ’90s, home PCs were finally powerful enough to run a full-blown UNIX. Linus Torvalds, a young man studying computer science at the University of Helsinki, thought it would be a good idea to have some sort of freely available academic version of UNIX, and promptly started to code.

He started to ask questions, looking for answers and solutions that would help him get UNIX on his PC. Below is one of his first posts in comp.os.minix, dating from 1991:

From: torvalds@klaava.Helsinki.FI (Linus Benedict Torvalds)
Newsgroups: comp.os.minix
Subject: Gcc-1.40 and a posix-question
Message-ID: <1991Jul3.100050.9886@klaava.Helsinki.FI>
Date: 3 Jul 91 10:00:50 GMT
Hello netlanders,
Due to a project I’m working on (in minix), I’m interested in the posix
standard definition. Could somebody please point me to a (preferably)
machine-readable format of the latest posix rules? Ftp-sites would be

From the start, it was Linus’ goal to have a free system that was completely compliant with the original UNIX. That is why he asked for POSIX standards, POSIX still being the standard for UNIX.

In those days plug-and-play wasn’t invented yet, but so many people were interested in having a UNIX system of their own that this was only a small obstacle. New drivers became available for all kinds of new hardware, at a continuously rising speed. Almost as soon as a new piece of hardware became available, someone bought it and submitted it to the Linux test, as the system was gradually being called, releasing more free code for an ever wider range of hardware. These coders didn’t stop at their PCs; every piece of hardware they could find was useful for Linux.

Back then, those people were called “nerds” or “freaks”, but it didn’t matter to them, as long as the supported hardware list grew longer and longer. Thanks to these people, Linux is now not only ideal to run on new PCs, but is also the system of choice for old and exotic hardware that would be useless if Linux didn’t exist.

Two years after Linus’ post, there were 12,000 Linux users. The project, popular with hobbyists, grew steadily, all the while staying within the bounds of the POSIX standard. All the features of UNIX were added over the next couple of years, resulting in the mature operating system Linux has become today. Linux is a full UNIX clone, fit for use on workstations as well as on middle-range and high-end servers. Today, a lot of the important players on the hardware and software market each have their own team of Linux developers; at your local dealer’s you can even buy pre-installed Linux systems with official support, even though there is still a lot of hardware and software that is not supported.

Current application of Linux systems

Today Linux has joined the desktop market. Linux developers concentrated on networking and services in the beginning, and office applications have been the last barrier to be taken down. We don’t like to admit that Microsoft is ruling this market, so plenty of alternatives have been started over the last couple of years to make Linux an acceptable choice as a workstation, providing an easy user interface and MS compatible office applications like word processors, spreadsheets, presentations and the like.

On the server side, Linux is well known as a stable and reliable platform, providing database and trading services for companies like Amazon, the well-known online bookshop, the US Post Office, the German army and many others. Internet service providers in particular have grown fond of Linux as a firewall, proxy and web server, and you will find a Linux box within reach of every UNIX system administrator who appreciates a comfortable management station. Clusters of Linux machines were used in the creation of movies such as “Titanic” and “Shrek”. In post offices, they are the nerve centers that route mail, and in large search engines, clusters are used to perform Internet searches. These are only a few of the thousands of heavy-duty jobs that Linux performs day to day across the world.

It is also worth noting that modern Linux not only runs on workstations and mid- and high-end servers, but also on “gadgets” like PDAs, mobile phones, a shipload of embedded applications and even experimental wristwatches. This makes Linux the only operating system in the world covering such a wide range of hardware.

Current Development

Torvalds continues to direct the development of the kernel. Stallman heads the Free Software Foundation, which in turn supports the GNU components. Finally, individuals and corporations develop third-party non-GNU components. These third-party components comprise a vast body of work and may include both kernel modules and user applications and libraries. Linux vendors and communities combine and distribute the kernel, GNU components, and non-GNU components, with additional package management software, in the form of Linux distributions.

Linux is a Unix-like computer operating system assembled under the model of free and open source software development and distribution. The defining component of any Linux system is the Linux kernel, an operating system kernel first released October 5, 1991, by Linus Torvalds. Linux system distributions may vary in many details of system operation, configuration, and software package selections.

Linux runs on a wide variety of computer hardware, including mobile phones, tablet computers, network routers, televisions, video game consoles, desktop computers, mainframes and supercomputers. Linux is a leading server operating system and runs the 10 fastest supercomputers in the world. In addition, more than 90% of today’s supercomputers run some variant of Linux.

The development of Linux is one of the most prominent examples of free and open source software collaboration: the underlying source code may be used, modified, and distributed—commercially or non-commercially—by anyone under licenses such as the GNU General Public License. Typically Linux is packaged in a format known as a Linux distribution for desktop and server use. Some popular mainstream Linux distributions include Debian (and its derivatives such as Ubuntu), Fedora and openSUSE. Linux distributions include the Linux kernel, supporting utilities and libraries and usually a large amount of application software to fulfill the distribution’s intended use.