Tuesday, December 8, 2009

Migrating My File Server to LVM

The other day I ran across an article on some really neat tricks that can be done with LVM (Logical Volume Management) on a Linux file server. The snapshot feature was of particular interest to me.

For those of you not familiar with the term, I'll try to briefly explain what a snapshot is and what it's good for. A more detailed explanation can be found here.

When you take a snapshot of a dataset, you instantly get two datasets. One dataset is the data exactly as it was when you took the snapshot. The other dataset is the data as it is currently (with all the changes since the snapshot was taken).

You may ask "How does it instantly make a complete copy of the dataset? Wouldn't that take hours for even moderately sized datasets?"

The magic comes from a bit of technology called "Copy on Write" (COW). When you hit the snapshot button, all your current data is frozen as is, and from that point on the system tracks only the changes made to the original dataset. Instantly, you have two virtual copies of your dataset. One is a read-only dataset frozen in time the instant you hit the snapshot button, and the other is a read/write dataset which keeps on functioning as if nothing of interest happened.
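This isn't how LVM implements it internally, but the idea can be sketched in a few lines of Python: a "volume" is a list of block references, a snapshot just copies the references (instant, no data moved), and only a block that actually gets written is replaced on the live side.

```python
# Toy sketch of copy-on-write snapshotting (illustration only, not LVM internals):
# blocks are shared between the snapshot and the live volume until one is written.

class CowVolume:
    """A 'volume' as a list of block references."""
    def __init__(self, blocks):
        self.blocks = list(blocks)   # references to blocks, not copies of data

    def snapshot(self):
        # Instant: copy only the references; all block data stays shared.
        return CowVolume(self.blocks)

    def write(self, index, data):
        # Copy-on-write: only the block being changed gets a new, private version.
        self.blocks[index] = data

origin = CowVolume(["aaaa", "bbbb", "cccc"])
snap = origin.snapshot()      # frozen view, created instantly
origin.write(1, "BBBB")       # change recorded only on the live volume

print(snap.blocks)            # ['aaaa', 'bbbb', 'cccc'] -- frozen in time
print(origin.blocks)          # ['aaaa', 'BBBB', 'cccc'] -- live, with changes
```

Note that after the write, blocks 0 and 2 are still the very same objects in both views; only block 1 was duplicated. That sharing is where the "instant copy" comes from.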

It should be noted, however, that once you freeze your dataset with a snapshot, it will never occupy less space than it does at that moment. When you delete a piece of data on the read/write dataset, that data is only flagged as deleted; it still occupies space in the read-only dataset.

To reclaim storage space, you have two options. You can "delete" the snapshot, which causes all the changes in your read/write dataset to be applied to the original dataset. Or you can "revert" to the snapshot, which simply discards the changes tracked in your read/write dataset and gives you read/write access to the frozen dataset.
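In terms of the toy model, the two options come down to which side wins: the changes or the frozen copy. A minimal sketch (the names delete_snapshot and revert_to_snapshot are mine for illustration, not LVM commands):

```python
# Two ways to reclaim the space held by a snapshot (toy illustration only):
frozen = ["aaaa", "bbbb", "cccc"]   # read-only dataset, frozen at snapshot time
live   = ["aaaa", "BBBB", "cccc"]   # read/write dataset, with changes since then

def delete_snapshot(frozen, live):
    # "Delete" the snapshot: the changes win; the frozen copy is discarded.
    return list(live)

def revert_to_snapshot(frozen, live):
    # "Revert": the changes are thrown away; the frozen data becomes read/write again.
    return list(frozen)

print(delete_snapshot(frozen, live))     # ['aaaa', 'BBBB', 'cccc']
print(revert_to_snapshot(frozen, live))  # ['aaaa', 'bbbb', 'cccc']
```

Either way, only one copy of each block survives, so the space held for tracking differences is freed.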

That's "snapshots" in a nutshell, but that barely scratches the surface of all the neat tricks Copy on Write can do. One of the neatest tricks, which I plan on implementing over the next month or so, is the ability to create multiple read/write volumes all based on a common snapshot.

This is what makes Copy on Write terribly useful.

If you compare the contents of the hard drives of ten computers running Windows 7, you'll notice that about 10GB worth of files on each HD is identical (the OS files). If you consolidate your storage in some form of SAN, you can install Windows 7 to a (virtual) drive in your SAN, then take a snapshot of that drive. At this point, you can create 10 new virtual drives based on that snapshot, all tracking their changes individually. Map your 10 computers to the new virtual drives in your SAN, and you're only using that 10GB once, instead of storing it separately 10 times (once for each computer) on local drives. 10GB may not sound like a lot, but let's pretend it's not 10 computers we're talking about, but 1000. That's almost 10TB (9990GB) of wasted storage. And that's just the Windows OS. Throw on some standard software packages like Office or Photoshop, and the wasted space gets bigger and bigger.
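The arithmetic behind those numbers is simple enough to sanity-check:

```python
# Back-of-envelope check on the savings from sharing one OS image
# across many machines instead of storing a copy per machine.
os_image_gb = 10     # identical OS files per machine
machines = 1000

naive  = machines * os_image_gb   # every machine stores its own copy: 10,000 GB
shared = os_image_gb              # one golden image; clones track only their deltas

print(naive - shared)             # 9990 GB wasted by the naive approach, ~10 TB
```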

Now we're saving a ton on storage. But wait, there are more cool tricks we can do.

1000 people accessing the same 10GB of data can get pretty slow, if you're using a slow volume. But since we know that 1000 users are going to be accessing this particular 10GB of data, we spend a little more money to put this 10GB on a very fast volume (RAM disk, or lots of HDs in a RAID10 array), which will actually improve the users' access time over an internal HD, assuming the SAN fabric doesn't become a choke-point. For this to work effectively in an iSCSI environment, you'd probably need a 10Gbps network backbone and 1Gbps to the desktops. If, however, you're not running 1000 machines off a single volume, then you don't need the 10Gbps backbone to see massive improvements.

I'll be migrating my current, non-LVM-based file server to LVM-based storage in the coming weeks. I'll post from time to time during the transition.