The Home Cloud 2009-03-25


My home server is now fully virtualized. When I ssh home I reach "Outland", an OpenVZ container. Outland has limited access to the rest of my home network, but isn't completely isolated. I've taken the pragmatic view that allowing some access further "in" is ok too, though it's a two-step process (ssh in to Outland, then on to whatever else), and only stuff that's running on Outland is visible to the outside.
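
The two-step hop is easy enough to live with, and ssh can even chain it for you. A minimal client-side sketch, with made-up hostnames (the config variant assumes netcat is available on Outland):

    # one-shot: chain both hops in a single command
    ssh -t outland.example.com ssh fileserver

    # or automate the hop in ~/.ssh/config:
    Host fileserver
        ProxyCommand ssh outland.example.com nc %h %p
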
While there are some security benefits (Outland doesn't hold anything important, so you'd have to penetrate both Outland and a server beyond it to get at anything really interesting, like my boring holiday photos), my main motivation was maintenance and flexibility. After slowly deploying OpenVZ across more and more of our servers at Aardvark Media, I've seen just how much we've benefited from being able to seamlessly move services between machines, which makes hardware upgrades easier and helps us get the most out of the hardware we have.

Whenever I've upgraded my home servers in the past, I've either dragged a lot of cruft with me and gone through a really painful process of recreating my setup, or I've wiped everything and just moved my data over.

With OpenVZ now the only thing that's allowed to run directly on the host, I can subdivide my "workspace" much more finely, which makes regular partial upgrades far easier and decouples them from each other. I can now upgrade service by service. I'm never looking back.

Outland holds various junk I want easily accessible via ssh. It's a "scratchpad" of sorts that I expect to exist semi-permanently, but that can be wiped clean now and again without any big consequences. Another container is used for backups and holds snapshots of the other containers, config files from the host, as well as my blog and various other externally hosted stuff. Other containers hold my development projects, or act as short-lived scratchpads to keep me from messing up anything I want to keep. Anything I want to keep for the long term is isolated into containers holding databases, my Git repositories (and a legacy SVN repository or two) and raw files.

I opted for Debian Lenny on the host, partly because I now use mostly Debian at work, and partly because Lenny has great support for OpenVZ - installation is as simple as "apt-get install linux-image-openvz-amd64" (assuming a 64-bit machine) and a reboot.
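
Spelled out, the whole host setup amounts to something like this (the vzctl package provides the container management tools):

    apt-get install linux-image-openvz-amd64 vzctl   # OpenVZ kernel + management tools
    reboot
    uname -r   # after the reboot, this should show an -openvz kernel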

Creating a new container is a command or two (vzctl create [id] --ostemplate=...; vzctl set [id] --name ... --hostname ... --ipadd ... --save), and writing scripts to customize the templates is easy enough (for the most part a list of packages to install, and a set of config files to overwrite or append to), so I can rapidly set up customized containers for my Ruby projects, for example.
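
With concrete (made-up) values filled in, setting up a container for a Ruby project looks roughly like this:

    vzctl create 101 --ostemplate debian-5.0-amd64-minimal   # id and template name are examples
    vzctl set 101 --name ruby-dev --hostname ruby-dev.home --ipadd 192.168.0.101 --save
    vzctl start 101
    vzctl exec 101 apt-get -y install ruby rake git-core     # the "customization script", boiled down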

I love the level of isolation I get between different pieces of work, in particular because it makes testing dependencies trivial - I can rebuild a container from scratch at a moment's notice, check out the code and run my tests, and know that the only thing I've "lost" is accumulated cruft, such as packages I don't need or files I'd strewn all over the place.
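
A sketch of what such a rebuild boils down to, with a hypothetical container id, template and repository:

    #!/bin/sh
    # throw away the old dev container and test in a pristine one
    ID=120
    vzctl stop $ID; vzctl destroy $ID
    vzctl create $ID --ostemplate debian-5.0-amd64-ruby
    vzctl set $ID --name myproject-test --ipadd 192.168.0.120 --save
    vzctl start $ID
    vzctl exec $ID "git clone git://gitserver/myproject.git && cd myproject && rake test"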

The thought is that individual containers should be ok to kill at almost any time. The "scratchpads" because they don't hold any important data; the dev containers because they contain only clones of Git repositories, and I have scripts to rebuild them from scratch; the database and file containers because their data lives in a couple of well-defined directories that can be trivially copied out and moved into replacement containers (and there are backups).
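
That copying is trivial because a container's filesystem is just a directory tree on the host (under /var/lib/vz/private on Debian). For example, with a hypothetical database container:

    vzctl stop 110   # quiesce the container before copying its data
    rsync -a /var/lib/vz/private/110/var/lib/postgresql/ /backup/postgresql/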

The end result is my personal "cloud" where server instances are ephemeral and will perhaps in some cases be provisioned and destroyed automatically (see Rebuilding the build server on every build), but their functions are persistent. 

Add sufficient automated tests into the mix, and I can bring up a new container from scratch with a new OS version, for example, copy data into it, run my tests, and wipe the old one with reasonable confidence (though the paranoia in me makes me keep backup snapshots for a while). Working this way also forces me to keep the scripts that effectively define each "service" in my home cloud up to date, and it leaves me the option of trivially farming these images out to hosted servers if I should want to in the future (an option which is more relevant when doing this at work).
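
The swap itself, sketched with hypothetical ids (110 is the old container, 111 its replacement built from a newer template):

    vzctl create 111 --ostemplate debian-5.0-amd64-minimal
    vzctl set 111 --name db-new --ipadd 192.168.0.111 --save
    vzctl stop 110    # stop the old container before copying its data across
    rsync -a /var/lib/vz/private/110/var/lib/postgresql/ /var/lib/vz/private/111/var/lib/postgresql/
    vzctl start 111
    vzctl exec 111 /root/run-tests.sh   # hypothetical test script baked into the template
    # if the tests pass, 110 stays stopped as a snapshot for a while, then gets destroyed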

These days, whenever I get new hardware, whether at home or at work, the first step is always to install OpenVZ or another virtualization technology - everything else belongs in a container. OpenVZ uses a shared kernel, so it can only run Linux containers, and only on the same kernel as the host. That's a deal-breaker in some cases, but it's also what makes it extremely lightweight when those limitations are acceptable - and nothing stops you from running OpenVZ and KVM or Xen on the same box to get the best of both on a container-by-container basis.
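
The shared kernel is easy to see for yourself (the container id is hypothetical, and the version string is what Lenny's OpenVZ kernel reports):

    uname -r                  # on the host: 2.6.26-2-openvz-amd64
    vzctl exec 101 uname -r   # inside the container: the exact same kernel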
