Here's a follow-up to my previous post, "Local caching for CIFS network file system".
Since the previous post, I have worked on improving the patches that add
local caching, fixed a few bugs, addressed review comments from the
community and re-posted the patches. I also gave a talk about it at
the SUSE Labs Conference 2010, held in Prague. The slides can be
found here: FS-Cache aware CIFS.
This patchset was merged in the upstream Linux kernel yesterday (Yay!),
which means this feature will be available starting with kernel
version 2.6.35-rc1.
The primary aim of caching data on the client side is to reduce
network calls to the CIFS server whenever possible, thereby reducing
the server load as well as the network load. This will indirectly improve
the performance and scalability of the CIFS server and will
increase the clients-per-server ratio. This feature could be
useful in a number of scenarios:
- Render farms in the entertainment industry - used to distribute
  textures to individual rendering units
- Read-only multimedia workloads
- Accelerating distributed web servers - web server cluster nodes
  serve content from the cache
- /usr distributed by a network file system - to avoid spamming
  servers when there is a power outage
- Caching servers with SSDs re-exporting netfs data - where a
  persistent cache that remains across reboots is useful
However, be warned that local caching may not be suitable for all
workloads, and a few workloads could suffer a slight performance hit
(e.g. read-once workloads).
When I re-posted this patchset, I was asked whether I had done any
benchmarking and could share the performance numbers. Here are the
results from a 100 Mb/s network:
Environment
------------
I used my T60p laptop as the CIFS server (running Samba) and one of
my test machines as the CIFS client, connected over Ethernet with a
reported speed of 1000 Mb/s. ethtool was used to throttle the speed to
100 Mb/s. The TCP bandwidth as seen by a pair of netcats between the
client and the server is about 89.555 Mb/s.
Client: 2.8 GHz Pentium D CPU with 2 GB RAM
Server: 2.33 GHz Core2 CPU (T7600) with 2 GB RAM
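As a sanity check, the measured netcat bandwidth puts a lower bound on how fast the 200 MB test file used below could possibly move over this link. A quick calculation (my assumption: MB here means 2^20 bytes):

```shell
# Lower bound on wall-clock time to move 200 MB over a link that
# sustains 89.555 Mb/s; protocol overhead only adds to this.
awk 'BEGIN {
    bits = 200 * 1024 * 1024 * 8     # payload size in bits
    rate = 89.555 * 1000 * 1000     # measured TCP bandwidth, bits/s
    printf "%.1f s\n", bits / rate   # ideal transfer time
}'
```

This prints about 18.7 s, so the ~26 s uncached CIFS numbers below sit within a reasonable factor of the raw TCP limit.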
Test
-----
The benchmark involves pulling a 200 MB file over CIFS on the client,
using cat to read it and discard the output to /dev/zero, under `time'.
The reported wall-clock time was recorded.
First, the test was run on the server twice and the second result was
recorded (noted as Server below, i.e. the time taken by the server when
the file is already in RAM).
Second, the client was rebooted and the test was run with caching
disabled (noted as None below).
Next, the client was rebooted, the cache contents (if any) were erased
with mkfs.ext3, and the test was run again with cachefilesd running (noted
as COLD).
Next, the client was rebooted and the test was run with caching enabled,
this time with a populated disk cache (noted as HOT).
Finally, the test was run again without unmounting or rebooting, to
ensure the pagecache remained valid (noted as PGCACHE).
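The measurement step can be sketched as follows. Note this is illustrative: a local file under /tmp stands in for the file on the CIFS mount, and the path is hypothetical; on the real client the file sat under the CIFS mount and the page cache was cleared by rebooting.

```shell
# Create a small stand-in for the 200 MB test file (8 MiB to keep the
# sketch quick), then time a sequential read that discards the data.
dd if=/dev/urandom of=/tmp/cifs-test.bin bs=1M count=8 2>/dev/null

# Writes to /dev/zero are discarded, just like writes to /dev/null.
time cat /tmp/cifs-test.bin > /dev/zero
```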
The benchmark was repeated twice:
Cache (state)    Run #1      Run #2
=============    ========    ========
Server            0.104 s     0.107 s
None             26.042 s    26.576 s
COLD             26.703 s    26.787 s
HOT               5.115 s     5.147 s
PGCACHE           0.091 s     0.092 s
As can be seen, when the cache is hot the read is roughly 5x faster
than reading over the network. Note that the scalability improvement
due to reduced network traffic cannot be seen here, as the test
involves only a single client and the server. The read performance
with more clients would be more interesting, as the cache can
positively impact scalability.
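The ratios behind that claim can be checked directly from the Run #1 column of the table:

```shell
# Speedup ratios from the Run #1 numbers above.
awk 'BEGIN {
    none = 26.042; hot = 5.115; pgcache = 0.091
    printf "HOT vs None:     %.1fx\n", none / hot       # cold net read vs warm disk cache
    printf "PGCACHE vs None: %.0fx\n", none / pgcache   # vs page cache
}'
```

This prints roughly 5.1x for HOT and about 286x for PGCACHE, which is why a hot persistent cache is the interesting middle ground between the network and RAM.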
Wednesday, August 4, 2010
Local caching for CIFS network file system - followup
Monday, June 14, 2010
Hackweek V: Local caching for CIFS network file system
Hackweek
It's that time of the year when SUSE/Novell developers use their Innovation Time-off to do a project of their interest - called Hackweek. Last week was Hackweek V. I worked on making the Common Internet File System (CIFS) cache-aware, i.e. adding local caching for the CIFS network file system.
Linux FS-Cache
Caching can result in performance improvements in network filesystems where access to the network and media is slow. The cache can indirectly improve performance of the network and the server by reducing network calls. Caching can also be viewed as preparatory work for making disconnected (offline) operation work with network filesystems.
The Linux kernel recently added a generic caching facility (FS-Cache) that any network filesystem, like NFS or CIFS, or other service can use to cache data locally. FS-Cache supports a variety of cache backends, i.e. different types of caches with different trade-offs (CacheFiles, CacheFS, etc.). FS-Cache mediates between the cache backends and the network filesystems. Some network filesystems, such as NFS and AFS, are already integrated with FS-Cache.
Making CIFS FS-Cache capable
To make any network filesystem FS-Cache aware, there are a few things to consider. Let's consider them step by step (though not in detail):
- First, we need to define the network filesystem, and it should be able to register/unregister with the FS-Cache interface.
- The network filesystem has to define the index hierarchy, which can be used to locate a file object or discard a certain subset of all the files cached.
- We need to define the objects and their associated methods.
- All the indices in the index hierarchy and the data files need to be registered. This is done by requesting a cookie for each index or data file; upon successful registration, a corresponding cookie is returned.
- Functions to store and retrieve pages in the cache.
- A way to identify whether the cache for a file is valid or not.
- A function to release any in-memory representation of the network filesystem page.
- A way to invalidate a data file or index subtree and relinquish cookies.
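As a sketch, the first two steps look roughly like this with the FS-Cache netfs API of that kernel generation. The identifiers below mirror the shape of the interface rather than the exact patchset code, so treat them as illustrative:

```c
#include <linux/fscache.h>

/* Definition of the netfs, registered once at module init. */
static struct fscache_netfs cifs_fscache_netfs = {
	.name    = "cifs",
	.version = 0,
};

/* Index definition for the top (server) level of the hierarchy; a
 * get_key callback would serialize the server address into the key. */
static const struct fscache_cookie_def cifs_fscache_server_index_def = {
	.name = "CIFS.server",
	.type = FSCACHE_COOKIE_TYPE_INDEX,
};

int cifs_fscache_register(void)
{
	return fscache_register_netfs(&cifs_fscache_netfs);
}

/* Acquiring a cookie for a server object hangs it off the netfs
 * primary index; the Share and Inode levels follow the same pattern. */
void cifs_fscache_get_server_cookie(void *server)
{
	/* server->fscache = */ fscache_acquire_cookie(
		cifs_fscache_netfs.primary_index,
		&cifs_fscache_server_index_def,
		server);
}
```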
I wanted to get the prototype working within a week, so the way I have implemented it is rudimentary and has a lot of room for improvement.
The index hierarchy is not very deep. It has three levels: Server, Share and Inode. The only way that I know of identifying files with CIFS is by 'UniqueId', which is supposed to be unique. However, some servers do not ensure that the 'UniqueId' is always unique (for example, when there is more than one filesystem in the exported share). Cache coherency is currently ensured by verifying the 'LastWriteTime' and the size of the file. This is not a reliable way of detecting changes, as some CIFS servers do not update the time until the filehandle is closed.
The rudimentary implementation is ready and the cumulative patch can be found here:
http://www.kernel.org/pub/linux/kernel/people/jays/patches/
[WARNING: The patch is lightly tested and of prototype quality.]
Here are some initial performance numbers with the patch:
Copying one big file of size ~150 MB:

$ time cp /mnt/cifs/amuse.zip .
(Cache initialized)
real    1m18.603s
user    0m0.016s
sys     0m8.569s

$ time cp /mnt/cifs/amuse.zip /
(Read from Cache)
real    0m28.055s
user    0m0.008s
sys     0m1.140s
Tuesday, September 1, 2009
LWN.net quotes me
LWN.net, a leading online magazine covering quite a bit of Linux kernel development, quoted my name in the security advisory on CIFS multiple vulnerabilities.
To quote LWN:
kernel: multiple vulnerabilities
Package(s): linux-2.6 CVE #(s): CVE-2009-1630 CVE-2009-1633 CVE-2009-1758
Created: June 2, 2009 Updated: August 20, 2009
Description: From the Debian advisory:
Frank Filz discovered that local users may be able to execute files without execute permission when accessed via an nfs4 mount. CVE-2009-1630
Jeff Layton and Suresh Jayaraman fixed several buffer overflows in the CIFS filesystem which allow remote servers to cause memory corruption. CVE-2009-1633
Jan Beulich discovered an issue in Xen where local guest users may cause a denial of service (oops). CVE-2009-1758
Forgot to stick the URL: http://lwn.net/Articles/335751/
(Thanks, Nikanth)
Wondering how I missed this... Pleasantly surprised.
Wednesday, March 12, 2008
Swap over NFS on SLE11
1. Some Introduction
1a. Swap space
We all know swap space is an area on disk that temporarily holds memory images of processes. When physical memory demand is sufficiently low, process memory images are brought back into physical memory from the swap area on disk. Having sufficient swap space enables the system to keep some physical memory free at all times. We have always had a need to run applications that might require more memory than the available physical memory. Swap space can take two forms:
- Swap partition - an independent partition of the hard disk dedicated to swapping.
- Swap file - a special file in the filesystem that resides amongst your system and data files. The advantage of swap files is that you don't need to find an empty partition or repartition a disk to add additional swap space.
1b. Swap over NFS
Swap over NFS allows you to have your swap on a remote NFS filesystem. It is very useful in the case of thin-client workstations, where primary or secondary storage is a cost issue and may or may not be available. It's also useful for disk-less clusters, and in virtualization, where putting the storage on a networked storage unit makes for trivial migration.
Once you are no longer restricted to local storage for swap space, you can cut costs dramatically. You can use less expensive diskless servers and simplify administration, thereby reducing implementation, administration and management costs. By using swap over NFS, you can also protect your systems against application restarts and expensive downtimes.
Swap over NFS is a feature in demand that is currently not supported by the mainline Linux kernel. Though there has been a good amount of review and discussion of the latest patchset posted by Peter Zijlstra, it has not been merged upstream yet.
2. Swap over NFS on SLE11
SUSE Linux Enterprise Server recently added support for Swap over NFS on SLE 11. This allows you to use a network file system (NFS) over Internet protocols (IP) to use remote storage for local server swap needs. To my knowledge, SLE 11/openSUSE 11.1 is the only distro that supports Swap over NFS.
2a. Swap over NFS in action
If you don't have a diskless workstation and want to get a feel of how it would be to have your swap on NFS on your desktop, you can try the following simple steps to see Swap over NFS in action on SLE11.
* Append mem=256M to the kernel command line during booting. This restricts physical memory usage to 256 MB and ensures that swap gets used even if you don't run big applications.
* Disable all devices marked as swap:
swapoff -a
* On the NFS server, create a 1 GiB swap file on the NFS export:
dd if=/dev/zero of=swapfile.swp count=1048576 bs=1024
mkswap swapfile.swp
* From the client, enable swapping on the swap file we created:
swapon /mnt/nfs/swapfile.swp
* Run multiple (memory-intensive) applications and watch swap getting exercised using the `free' command.
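The swap-file preparation step can be tried locally without NFS or root privileges (the path below is illustrative, and the file is shrunk to 16 MiB for the demonstration); mkswap simply writes a swap signature into the file:

```shell
# Create a 16 MiB file and format it as swap space; on the real setup
# this file would live on the NFS export and be 1 GiB.
dd if=/dev/zero of=/tmp/demo-swapfile.swp bs=1024 count=16384 2>/dev/null
chmod 600 /tmp/demo-swapfile.swp    # mkswap warns on looser permissions
mkswap /tmp/demo-swapfile.swp       # writes the swap signature
```

Actually enabling it with swapon requires root, and for the NFS case a kernel carrying the swap-over-NFS patches.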
And of course you will have noticed the slowness; that's expected, since swap over NFS is slow in general unless you have faster network cards. "Network swapping", as it is usually called, is obviously much slower than using a faster and more capable secondary storage device for virtual memory swapping. Swap over NFS allows any thin-client workstation to supplement its built-in RAM with virtual memory. Usually virtual memory is backed by a secondary storage device (a hard drive, flash memory, etc.); since a true thin-client has no facility for secondary storage devices, the network is the only alternative. You can get away with as little as around 32 MB of memory on the thin-client workstation, though there can be momentary slowness if the RAM is much smaller. The faster the network speed and available bandwidth, the less often this slowness will be experienced. It would be optimal to have nothing less than a 100 Mb/s switched network.
3. Linux Implementation
Traditionally, swapping out is performed directly to block devices. Block devices are written to pre-allocate any memory that might be needed during write-out, and to block when the pre-allocated memory is exhausted and no extra memory is available. They can be sure not to block forever, as the pre-allocated memory will be returned as soon as the data it is being used for has been written out. Mempools (memory pools) are used to help out in such situations, where a memory allocation must succeed but sleeping is not an option. Mempools pre-allocate a pool of memory and reserve it until it is needed. Mempools make life easier in some situations, but they should be used with caution, as each mempool takes a chunk of kernel memory out of circulation and increases the minimum amount of memory the kernel needs to run effectively.
The above approach does not work for writing anonymous pages (i.e. swapping) over a network, using e.g. NFS or a Network Block Device (NBD). The main reason it does not work is that when data from an anonymous page is written to the network, we must wait for a reply to confirm that the data is safe. Receiving that reply will consume memory and, significantly, we need to allocate memory for an incoming packet before we can tell whether it is the reply we are waiting for or not. Another reason is that much of the network subsystem code is not written to use mempools or fixed-size allocations (it uses kmalloc()), and in most cases does not need to. Changing all allocations in the networking layer to use mempools would be quite intrusive, would waste memory, and would probably cause a slow-down in the common case of not swapping over the network.
These problems are addressed in the patchset in different parts.
* The first part provides a generic memory reserve framework and uses it on the slow paths - when we're low on memory. Currently, it supports SLAB/SLUB but not SLOB.
* The second part provides some generic network infrastructure that is needed.
* The third part makes use of the generic memory reserve system in the network stack. Note that, unlike the BIO layer, we need memory allocations in both the send and the receive path, so we reserve a little pool to act as a receive buffer. This way, we can filter out those packets that ensure write-back completion and disregard the other packets.
* The fourth part provides generic VM infrastructure to handle swapping to a file system instead of a block device.
* The final part converts NFS to make use of the new network and VM infrastructure to provide swap over NFS.
There are deeper details that I'm not going to delve into here, as it can get complex and verbose.
Despite a few drawbacks like slowness and suboptimal memory usage, Swap over NFS is a feature that fits quite well in certain scenarios, like diskless clusters. With the current level of stability and the reviews that have already happened, I would think it will make it to the upstream kernel soon.
Acknowledgements/Credits: To Peter Zijlstra (Implementation) and Neil Brown (nice documentation).