Wednesday, August 4, 2010

Local caching for CIFS network file system - followup

Here's a follow-up to my previous post on
Local caching for CIFS network file system

Since the previous post, I worked on improving the patches that add
local caching, fixed a few bugs, addressed review comments from the
community and re-posted the patches. I also gave a talk about it at
the SUSE Labs Conference 2010, held in Prague. The slides can be
found here: FS-Cache aware CIFS.

This patchset was merged into the upstream Linux kernel yesterday (Yay!),
which means this feature will be available starting from kernel
version 2.6.35-rc1.

The primary aim of caching data on the client side is to reduce
network calls to the CIFS server whenever possible, thereby reducing
the server load as well as the network load. This will indirectly improve
the performance and scalability of the CIFS server and will
increase the number of clients each server can handle. This feature could be
useful in a number of scenarios:

- Render farms in the entertainment industry - used to distribute
textures to individual rendering units
- Read-only multimedia workloads
- Accelerating distributed web servers - web server cluster nodes can
serve content from the cache
- /usr distributed by a network file system - to avoid swamping the
servers when there is a power outage
- A caching server with SSDs re-exporting netfs data - where a
persistent cache that remains across reboots is useful

However, be warned that local caching may not be suitable for all
workloads, and a few workloads could suffer a slight performance hit
(e.g. read-once type workloads).

When I reposted this patchset, I was asked whether I had done any
benchmarking and could share the performance numbers. Here are the
results from a 100 Mb/s network:

Environment
------------

I'm using my T60p laptop as the CIFS server (running Samba) and one of
my test machines as the CIFS client, connected over an Ethernet link with
a reported speed of 1000 Mb/s. ethtool was used to throttle the speed to
100 Mb/s. The TCP bandwidth as seen by a pair of netcats between the
client and the server is about 89.555 Mb/s.

Client has a 2.8 GHz Pentium D CPU with 2 GB of RAM
Server has a 2.33 GHz Core2 CPU (T7600) with 2 GB of RAM


Test
-----
The benchmark involves pulling a 200 MB file over CIFS to the client
using cat redirected to /dev/zero under `time'. The wall clock time
reported was recorded.

First, the test was run on the server twice and the second result was
recorded (noted as Server below, i.e. the time taken by the server when
the file is already cached in RAM).

Second, the client was rebooted and the test was run with caching
disabled (noted as None below).

Next, the client was rebooted, the cache contents (if any) were erased
with mkfs.ext3, and the test was run again with cachefilesd running
(noted as COLD).

Next, the client was rebooted and the test was run with caching enabled,
this time with a populated disk cache (noted as HOT).

Finally, the test was run again without unmounting or rebooting to
ensure the pagecache remained valid (noted as PGCACHE).
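The measurement itself can be sketched as follows. The paths here are illustrative (in the actual runs the file was read from the CIFS mount after each cache-state change described above); the sketch just recreates the shape of the test with a locally generated file:

```shell
# Create a 200 MB file standing in for the one on the CIFS share
# (illustrative path; the real test read from the CIFS mount):
dd if=/dev/zero of=/tmp/cifs_bench_testfile bs=1M count=200 2>/dev/null

# Time a full sequential read, discarding the data. The post used
# /dev/zero as the sink, which behaves like /dev/null for writes:
time cat /tmp/cifs_bench_testfile > /dev/null
```

Only the wall-clock ("real") time reported by `time' was recorded.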

The benchmark was repeated twice:

Cache (state)    Run #1      Run #2
=============    ========    ========
Server            0.104 s     0.107 s
None             26.042 s    26.576 s
COLD             26.703 s    26.787 s
HOT               5.115 s     5.147 s
PGCACHE           0.091 s     0.092 s

As can be seen, when the cache is hot, the read is roughly 5X faster
than reading over the network. Note that the scalability improvement
due to reduced network traffic cannot be seen here, as the test
involves only a single client and the server. The read performance
with a larger number of clients would be more interesting, as the
cache can positively impact scalability.

Monday, June 14, 2010

Hackweek V: Local caching for CIFS network file system

Hackweek

It's that time of the year when SUSE/Novell developers use their Innovation Time-off to work on a project of their interest - called Hackweek. Last week was Hackweek V. I worked on making the Common Internet File System (CIFS) cache aware, i.e. adding local caching to the CIFS network file system.

Linux FS-Cache

Caching can result in performance improvements in network filesystems where access to the network and media is slow. The cache can indirectly improve performance of the network and the server by reducing network calls. Caching can also be viewed as preparatory work for making disconnected (offline) operation work with network filesystems.

The Linux kernel recently added a generic caching facility (FS-Cache) that any network filesystem like NFS or CIFS, or other services, can use to cache data locally. FS-Cache supports a variety of cache backends, i.e. different types of caches with different trade-offs (like CacheFiles, CacheFS, etc.). FS-Cache mediates between the cache backends and the network filesystems. Some network filesystems such as NFS and AFS are already integrated with FS-Cache.

Making CIFS FS-Cache capable

To make any network filesystem FS-Cache aware, there are a few things to consider. Let's go through them step by step (though not in detail):
  • First, we need to define the network filesystem, which should be able to register/unregister with the FS-Cache interface.
  • The network filesystem has to define the index hierarchy, which can be used to locate a file object or discard a certain subset of all the files cached.
  • We need to define the objects and their associated methods.
  • All the indices in the index hierarchy and the data files need to be registered. This is done by requesting a cookie for each index or data file; upon successful registration, a corresponding cookie is returned.
  • Functions to store and retrieve pages in the cache.
  • A way to identify whether the cache for a file is valid or not.
  • A function to release any in-memory representation of a network filesystem page.
  • A way to invalidate a data file or index subtree and relinquish cookies.
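As a rough sketch, the first and fourth steps map onto the 2.6.35-era FS-Cache kernel API something like this. The names follow what fs/cifs/fscache.c uses, but this is an illustrative outline rather than the exact merged code (it depends on kernel headers and is not a standalone program):

```c
/* Sketch only: relies on linux/fscache.h and the 2.6.35-era
 * FS-Cache netfs API; not compilable outside the kernel tree. */

/* Step 1: describe the netfs and register it with FS-Cache */
struct fscache_netfs cifs_fscache_netfs = {
        .name    = "cifs",
        .version = 0,
};

int cifs_fscache_register(void)
{
        return fscache_register_netfs(&cifs_fscache_netfs);
}

/* Steps 2 and 4: each level of the index hierarchy gets a cookie
 * definition, and a cookie is acquired under its parent index */
static const struct fscache_cookie_def cifs_fscache_server_index_def = {
        .name = "CIFS.server",
        .type = FSCACHE_COOKIE_TYPE_INDEX,
};

void cifs_fscache_get_server_cookie(struct TCP_Server_Info *server)
{
        server->fscache =
                fscache_acquire_cookie(cifs_fscache_netfs.primary_index,
                                       &cifs_fscache_server_index_def,
                                       server);
}
```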
Implementation

I wanted to get the prototype working within a week, so the way I have implemented it is rudimentary and has a lot of room for improvement.

The index hierarchy is not very deep. It has three levels - Server, Share and Inode. The only way that I know of identifying files in CIFS is by 'UniqueId', which is supposed to be unique. However, some servers do not ensure that the 'UniqueId' is always unique (for example, when there is more than one filesystem in the exported share). Cache coherency is currently ensured by verifying the 'LastWriteTime' and the size of the file. This is not a reliable way of detecting changes, as some CIFS servers will not update the time until the file handle is closed.

The rudimentary implementation is ready and the cumulative patch can be found here:

http://www.kernel.org/pub/linux/kernel/people/jays/patches/

[WARNING: The patch is lightly tested and of prototype quality.]
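If you want to try the patch, caching is enabled per mount. A rough sketch of the setup (server name, share and mount point below are placeholders; this assumes a kernel with the patch applied, the CacheFiles backend enabled and the cachefilesd userspace daemon installed):

```shell
# Start the cache daemon; it manages the on-disk cache (by default
# under /var/fscache, as configured in /etc/cachefilesd.conf)
/etc/init.d/cachefilesd start

# Mount the CIFS share with the 'fsc' option to request local caching
mount -t cifs //server/share /mnt/cifs -o user=guest,fsc
```

These commands need root and a reachable CIFS server, so treat them as a setup fragment rather than something to paste verbatim.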

Here are some initial performance numbers with the patch:

Copying one big file of size ~150 MB.

$time cp /mnt/cifs/amuse.zip .
(Cache initialized)

real 1m18.603s
user 0m0.016s
sys 0m8.569s

$time cp /mnt/cifs/amuse.zip /
(Read from Cache)

real 0m28.055s
user 0m0.008s
sys 0m1.140s

Tuesday, September 15, 2009

openSUSE Conference 2009

I'm heading over to Nuremberg on Wednesday (16 Sep) for a few days to participate in the openSUSE Conference! This is the first-ever openSUSE Conference, an opportunity for openSUSE contributors to give/attend talks, workshops and Birds of a Feather sessions, and to collaborate face to face. The conference will be held from September 17 to September 20 in Nuremberg, Germany.

The interactive event aims to bring the openSUSE contributor community together to share ideas and experiences, learn, hack and help guide the direction of the project. The different tracks include Desktop Development, System and Toolchain (openSUSE Build Service, YaST, Kernel, Packaging), Community, Quality and Appliances (Moblin, SUSE Studio). There will be a lot of Fun in the form of Birds of a Feather (BoF) sessions, roundtable discussions, unconferences and hackfests, apart from the scheduled talks.

I will be doing an Unconference session. The topic is "Roads Less Travelled - Making Technology Previews succeed".

Check out the full schedule here.

Wednesday, September 2, 2009

VIM tip of the day!

Ever wondered how to avoid VIM creating those annoying backup files like foo~ ?

Add the following to your vimrc:

" Don't backup files like foo~
set nobackup
set nowritebackup


Tuesday, September 1, 2009

LWN.net quotes me

LWN.net, a leading online magazine covering quite a bit of Linux kernel development, quoted my name in the security advisory on CIFS multiple vulnerabilities.

To quote LWN:

kernel: multiple vulnerabilities
Package(s): linux-2.6 CVE #(s): CVE-2009-1630 CVE-2009-1633 CVE-2009-1758
Created: June 2, 2009 Updated: August 20, 2009
Description: From the Debian advisory:

Frank Filz discovered that local users may be able to execute files without execute permission when accessed via an nfs4 mount. CVE-2009-1630

Jeff Layton and Suresh Jayaraman fixed several buffer overflows in the CIFS filesystem which allow remote servers to cause memory corruption. CVE-2009-1633

Jan Beulich discovered an issue in Xen where local guest users may cause a denial of service (oops). CVE-2009-1758

Forgot to stick the url: http://lwn.net/Articles/335751/
(Thanks Nikanth)

Wondering how I missed this.. Pleasantly surprised.

Sunday, August 2, 2009

HackWeek IV: Fun with BlogPost (a blog publisher)


As part of Novell's HackWeek IV, I decided to learn and develop a GUI application that allows me to post blog entries quickly, without much effort (without using a browser).

Why I wrote this?
  • I wanted to make myself capable of developing desktop applications (as a kernel developer I have spent little to no time on GUI development). Learning new stuff is always a lot of Fun!
  • I have always found using browsers for writing blogs time consuming; it takes a little more effort for me.
  • None of the existing applications convinced me.
What it is, what it is not
  • BlogPost is a simple, easy-to-use blog publisher that aims to make the blogging experience better. It currently supports blogger.com only.
  • It's alpha software, tested only to a limited extent, so it will have rough edges (use it with care :-)).
  • I wrote this application for Fun and Learning (I actually learnt GTK/PyGTK and Python while developing it). So don't expect it to be bug-free or particularly solid.
  • It's GPLv2 software.
  • It's not a feature-rich (a.k.a. bloated) application intended to replace web blogging.
  • It's aimed at developers/users, not at professional bloggers who might need more features.
List of Features
  • Supports posting to blogger.com
  • Offline blogging (save drafts locally and send later)
  • Basic formatting
  • Select blog names to post to
  • Labels/Tags support
Screenshot


Want to try BlogPost?

Prerequisites:
  • python-gdata (gdata APIs) package
  • python-base and python-devel if not already installed (these are usually present in a default installation of openSUSE).
Once you have the prerequisites installed, grab the BlogPost rpm from here:
(Currently x86_64 and i386 rpms are available)

BlogPost x86-64 RPM
BlogPost i386 RPM

Install the rpm the usual way:
$rpm -ivh

The tar ball can be found here:
BlogPost tar ball

To install from the source tar ball:
  • Extract the source: tar -xvjf blogpost-0.1.tar.bz2
  • cd blogpost-0.1 and run ./setup.py install
  • Run `blogpost' to launch the application after installation
I had Fun; Hope you'll like it!
Feel free to leave your comments, feedback!