Wednesday, March 12, 2008

Swap over NFS on SLE11

1. Some Introduction

1a. Swap space

We all know swap space is an area on disk that temporarily holds memory images of processes. When demand for physical memory eases, process memory images are brought back into physical memory from the swap area on disk. Having sufficient swap space enables the system to keep some physical memory free at all times, and we have always needed to run applications that may require more memory than the physical memory available.

Linux supports two forms of swap space:
  • Swap partition - an independent partition of the hard disk dedicated to swapping.
  • Swap file - a special file in the file system that resides amongst your system and data files. The advantage of swap files is that you don't need to find an empty partition or repartition a disk to add additional swap space.
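
For example, you could add a 256 MB swap file on a running system without repartitioning (the path and size here are arbitrary):
dd if=/dev/zero of=/var/swapfile bs=1024 count=262144
mkswap /var/swapfile
swapon /var/swapfile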

1b. Swap over NFS

Swap over NFS allows you to have your swap on a remote NFS filesystem. It is very useful in the case of thin-client workstations, where primary or secondary storage is a cost issue and may not be available at all. It is also useful for diskless clusters and for virtualization, where putting the storage on a networked storage unit makes migration trivial.

Once you are no longer restricted to local storage for swap space, you can cut costs dramatically: you can use less expensive diskless servers and simplify administration, thereby reducing implementation, administration, and management costs. By using swap over NFS, you can also protect your systems against out-of-memory application restarts and expensive downtime.

Swap over NFS is an in-demand feature that is not currently supported by the mainline Linux kernel. Though there has been a good amount of review and discussion following the latest patchset posted by Peter Zijlstra, it has not been merged upstream yet.

2. Swap over NFS on SLE11

Support for Swap over NFS was recently added to SUSE Linux Enterprise Server 11. This allows you to use the Network File System (NFS) over Internet Protocol (IP) networks to satisfy a server's local swap needs with remote storage. To my knowledge, SLE 11/openSUSE 11.1 is the only distro that supports Swap over NFS.

2a. Swap over NFS in action

If you don't have a diskless workstation but want to get a feel for what it would be like to have your swap on NFS, you can try the following simple steps to see Swap over NFS in action on an SLE11 desktop.

* Append mem=256M to the kernel command line during booting. This restricts physical memory usage to 256M and ensures that swap gets used even if you don't run big applications.
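For example, on a GRUB-based system you would append it to the kernel line in /boot/grub/menu.lst (the kernel image path and root device below are illustrative):
kernel /boot/vmlinuz root=/dev/sda2 splash=silent mem=256M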
* Disable all devices marked as swap by doing
swapoff -a
* On the NFS server, create a swap file on the NFS export (the command below creates a 1 GB file)
dd if=/dev/zero of=swapfile.swp count=1048576 bs=1024
mkswap swapfile.swp
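* Mount the NFS export on the client (the export path server:/export and the mount point /mnt/nfs below are assumptions; adjust them to your setup)
mount -t nfs server:/export /mnt/nfs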
* From the client, enable swapping on the swap file we just created
swapon /mnt/nfs/swapfile.swp
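* Verify that the NFS swap file is active
cat /proc/swaps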

* Run multiple memory-intensive applications and watch swap being exercised using the `free' command
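For example, to watch the numbers update every second:
watch -n 1 free -m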

And of course you will have noticed the slowness; that's expected, since Swap over NFS is slow in general except with faster network cards. "Network swapping", as it is usually called, is obviously much slower than swapping to a faster, more capable secondary storage device. Swap over NFS allows any thin-client workstation to supplement its built-in RAM with virtual memory. Virtual memory is usually backed by a secondary storage device (i.e., a hard drive, flash memory, etc.); since a true thin client has no facility for secondary storage devices for this purpose, the network is the only alternative. You can get away with as little as around 32MB of memory on the thin-client workstation, though there can be momentary slowness if the RAM is much smaller. The faster the network speed and the more bandwidth available, the less often this slowness will be experienced. It would be optimal to have nothing less than a 100 Mb/s switched network.

3. Linux Implementation

Traditionally, swapping out is performed directly to block devices. Block device drivers are written to pre-allocate any memory that might be needed during write-out, and to block when the pre-allocated memory is exhausted and no extra memory is available. They can be sure not to block forever, as the pre-allocated memory is returned as soon as the data it was used for has been written out. Mempools (memory pools) are used to help out in such situations, where a memory allocation must succeed but sleeping is not an option. Mempools pre-allocate a pool of memory and reserve it until it is needed. Mempools make life easier in some situations, but they should be used with caution, as each mempool takes a chunk of kernel memory out of circulation and increases the minimum amount of memory the kernel needs to run effectively.
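
As a rough sketch of the mempool pattern described above (generic illustrative kernel code, not taken from the swap-over-NFS patchset; the cache name, object size, and reserve size are all made up):

#include <linux/init.h>
#include <linux/slab.h>
#include <linux/mempool.h>

#define REQ_RESERVE 16                  /* objects held in reserve */

static struct kmem_cache *req_cache;    /* hypothetical request cache */
static mempool_t *req_pool;

static int __init req_pool_init(void)
{
        req_cache = kmem_cache_create("req_cache", 256, 0, 0, NULL);
        if (!req_cache)
                return -ENOMEM;

        /* Pre-allocate REQ_RESERVE objects. When a normal allocation
         * fails, mempool_alloc() falls back on this reserve (or sleeps
         * until an element is freed), so the write-out path is
         * guaranteed to make progress. */
        req_pool = mempool_create_slab_pool(REQ_RESERVE, req_cache);
        if (!req_pool) {
                kmem_cache_destroy(req_cache);
                return -ENOMEM;
        }
        return 0;
}

The write-out path would then allocate with mempool_alloc(req_pool, GFP_NOIO) and return objects with mempool_free(), which refills the reserve.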

The above approach does not work for writing anonymous pages (i.e., swapping) over a network using, e.g., NFS or a Network Block Device (NBD). The main reason it does not work is that when data from an anonymous page is written to the network, we must wait for a reply to confirm the data is safe. Receiving that reply will consume memory and, significantly, we need to allocate memory for an incoming packet before we can tell whether or not it is the reply we are waiting for. Another reason is that much of the network subsystem code is not written to use mempools or fixed-size allocations (it uses kmalloc()) and in most cases does not need to. Changing all allocations in the networking layer to use mempools would be quite intrusive, would waste memory, and would probably cause a slow-down in the common case of not swapping over the network.

These problems are addressed in the patchset in different parts.

* The first part provides a generic memory reserve framework and uses it on the slow paths - when we're low on memory. Currently, it supports SLAB and SLUB but not SLOB.

* The second part provides the generic network infrastructure that is needed.

* The third part makes use of the generic memory reserve system in the network stack. Note that, unlike the BIO layer, we need memory allocations on both the send and the receive paths, so a little pool is reserved to act as a receive buffer. This way, we can pick out the packets that ensure write-back completion and discard the other packets.

* The fourth part provides generic VM infrastructure to handle swapping to a file system instead of a block device.

* The final part converts NFS to make use of the new network and VM infrastructure to provide swap over NFS.

There are many deeper details that I'm not going to delve into here, as it would get rather complex and verbose.

Despite a few drawbacks like slowness, suboptimal memory usage, etc., Swap over NFS is a feature that fits quite well in certain scenarios, like diskless clusters. Given the current level of stability and the reviews that have already happened, I would think it will make it into the upstream kernel soon.

Acknowledgements/Credits: To Peter Zijlstra (Implementation) and Neil Brown (nice documentation).
