libvirt

Containers Reloaded


I've been busy lately trying to learn more about Docker. I'm not much of a fan of "application containers" and still prefer a full-blown "distro container" like that provided by LXC (good) or OpenVZ (better)... but I have to admit that the disk image / layering provided by Docker is really the feature everyone loves... which provides almost instantaneous container creation and start-up. If OpenVZ had that, it would be even more awesome.

OpenVZ certainly has done a lot of development over the past couple of years. They realized that simfs just wasn't cutting it and introduced ploop storage... and then made that the default. ploop is great. It provides instant snapshots, which is really handy for doing zero-downtime backups. I wonder how ploop differs these days from qcow2? I wonder how hard it would be to add disk layering features like Docker's to OpenVZ with ploop snapshots?
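For the curious, a zero-downtime ploop backup is basically a snapshot / mount / copy / clean-up cycle. A sketch from memory; the container ID, paths, and snapshot UUID are just placeholders:

    vzctl snapshot 101 --skip-suspend          # take a ploop snapshot; a UUID gets generated
    vzctl snapshot-list 101                    # note the UUID of the new snapshot
    vzctl snapshot-mount 101 --id <uuid> --target /mnt/ct101-snap
    tar czf /backup/ct101.tar.gz -C /mnt/ct101-snap .   # copy the data out of the mounted snapshot
    vzctl snapshot-umount 101 --id <uuid>
    vzctl snapshot-delete 101 --id <uuid>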

Application Containers in the Beginning

Ok, so Docker has taken off but I really can't figure out why. I mean Red Hat introduced OpenShift some time ago. First it was a service, then a product, and lastly an open source product that you can deploy yourself if you don't need support. A couple of years ago I attended an OpenShift presentation and at that time it provided "Gears" which were basically chrooted processes with a custom SELinux policy applied... and cgroup resource management? Something like that. While (non-OpenVZ) containers don't contain, with SELinux added, OpenShift gears seemed to be secure enough.

OpenShift offered easy deployment via some git-based scheme (if I remember correctly) and a bunch of pre-packaged stacks, frameworks, and applications called "cartridges" which I see as functionally equivalent to the Docker registry. It didn't have the disk image layering and instant startup of Docker so I guess that was a minus.

These days I guess OpenShift is shifting, or has already shifted, to using Docker.

Docker Crawls Before It Can Walk

Docker started off using aufs but that was an out-of-tree filesystem that isn't going to make it into mainline. Luckily Red Hat helped by adapting Docker to use device mapper-based container storage... and then btrfs-based container storage was added. What you get as default seems to depend on what distro you install Docker on. Which of the three is performant and which one(s) sucks... again that depends on who you talk to and what the host distro is.
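If you want to see which backend a given install ended up with, docker info will tell you, and the daemon takes a storage-driver option if you want to force one. Something like this; the config file path varies by distro, so treat it as an example:

    docker info | grep -i 'storage driver'
    # e.g. on Fedora/RHEL-style systems, forcing a backend via the daemon options
    # (in /etc/sysconfig/docker or wherever your distro keeps them):
    #   OPTIONS='--storage-driver devicemapper'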

Docker started off using LXC. I'm not sure what that means exactly. We all know that LXC is "LinuX Containers" but LXC seems to vary greatly depending on what kernel you are running and what distro you are using... and the state of the LXC userland packages. Docker wised up there and decided to take more control (and provide more consistency) and created their own libcontainer.
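docker info also reports which execution driver is in play, and for a while the daemon kept a switch to fall back to the old LXC behavior (hedging here, this has moved around between releases):

    docker info | grep -i 'execution driver'   # native (libcontainer) vs. lxc
    # older daemons accepted something along the lines of: docker -d --exec-driver=lxc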

The default networking of Docker containers seems a bit sloppy. A container gets a private network address (either via DHCP or manually assigned, you pick) and then if you want to expose a service to the outside world you have to map that to a port on the host. That means if you want to run a lot of the same service... you'll be doing so mostly on non-standard ports... or end up setting up a more advanced solution like a load balancer and/or a reverse proxy.
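Concretely, running two copies of the same service means picking a second host port. A quick sketch, with the nginx image just as an example:

    docker run -d -p 80:80 --name web1 nginx     # the first one gets the standard port on the host
    docker run -d -p 8080:80 --name web2 nginx   # the second one has to live somewhere else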

Want to run more than one application / service inside of your Docker container? Good luck. Docker was really designed for a single application and as a result a Docker container doesn't have an init system of its own. Yeah, there are various solutions to this. Write some shell scripts that start up everything you want... which is basically creating your own ghetto init system. That seems so backwards considering the gains that have been made in recent years with the switch to systemd... but people are doing it. There is something called supervisor which I think is a slight step up from a shell script but I don't know much about it. I guess there are also a few other solutions from third-parties.
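From what I've seen, the supervisor approach boils down to installing supervisord in the image, listing each service in its config, and making supervisord the container's command. A rough sketch, with paths and service names that may not match your distro:

    ; /etc/supervisord.conf inside the image -- one [program:...] block per service
    [supervisord]
    nodaemon=true

    [program:sshd]
    command=/usr/sbin/sshd -D

    [program:httpd]
    command=/usr/sbin/httpd -DFOREGROUND

The image's command then just runs supervisord in the foreground and it babysits the lot.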

Due to the complexity of the networking and the single-app design... and given the fact that most web-services these days are really a combination of services that are interconnected, a single Docker container won't get you much. You need to make two or three or more and then link them together. Links can be private between the containers but don't forget to expose to the host the port(s) you need to get your data to the outside world.
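In practice that looks something like this: the database container publishes nothing to the host, the web container reaches it over the link, and only the web port gets exposed (my-web-app is a made-up image name):

    docker run -d --name db postgres                              # nothing published to the outside
    docker run -d --name web --link db:db -p 80:8000 my-web-app   # only the web port reaches the host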

While there are ways (hacks?) that make Docker do persistent data (like mapping one or more directories as "volumes" into the container or doing a "commit"), Docker really seems more geared toward non-persistent or stateless use.
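Both workarounds are one-liners, for what they're worth; the image name and paths here are placeholders:

    docker run -d -v /srv/appdata:/data --name app some-image   # host directory outlives the container
    docker commit app app-with-my-changes                       # freeze the container's filesystem into a new image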

Docker Spaghetti

Because of all of these complexities, which I really see as the result of an over-simplified Docker design, there are a ton of third-party solutions. Docker has been trying to solve some of these things themselves too. Some of Docker's newer stuff has been seen by some (for example CoreOS) as a hijacking of the original platform and as a result... additional, currently incompatible container formats and tools have been created. There seems to be a new third-party Docker problem-solver start-up appearing weekly. I mean there are a ton of add-ons... and not many of them are designed to work together. It's kind of like Christian denominations... they mostly believe the same stuff but there are some important things they disagree on. :)

Application Containers Are Real

Ok, so I've vented a little about Docker but I will admit that application containers are useful to certain people... those into "livestock" virtualization rather than "pet" virtualization aka "fleet computing". Those are the folks running big web-services that need dozens, hundreds or thousands of instances of the same thing serving a large number of clients. I'm just not one of those folks, so I prefer the more traditional full-distro style of containers provided by OpenVZ.

Working On Fedora 22

I've already blogged about working on my own Fedora 22 remix but I've also made a Fedora 22 OpenVZ OS Template that I've submitted to contrib. Yeah, it is pre-release but I'll update it over time... and Fedora 22 is slated for release next week unless there are additional delays.
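Using it on an OpenVZ host is the usual template routine; something like this, where the container ID, hostname, address, and exact template name are just examples:

    vzctl create 122 --ostemplate fedora-22-x86_64
    vzctl set 122 --hostname f22.example.com --ipadd 10.0.0.122 --nameserver 8.8.8.8 --save
    vzctl start 122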

Like so many OpenVZ OS Templates my contributed Fedora 22 OS Template doesn't have a lot of software installed and is mainly for use as a server. For my own use though I've added to that with the MATE desktop, x2goserver, Firefox, LibreOffice, GIMP, Dia, Inkscape, Scribus, etc. It makes for a pretty handy yet light desktop environment. It was a little tricky to build because adding any desktop environment will drag in NetworkManager, which will overpower ye 'ole network service and break networking in the container upon the next container start. So while building it, "vzctl enter" access from the OpenVZ host node was required. With a handful of systemctl disable / mask commands it was in working order again. Don't forget to change the default target back to multi-user from graphical... and yeah, you can turn off the display manager because you don't need that since x2go is the access method of choice.
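The clean-up boils down to a handful of commands along these lines; the container ID and the display manager name are just examples of whatever the desktop install dragged in:

    vzctl enter 122                           # get a shell in the container from the host node
    systemctl mask NetworkManager             # keep ye 'ole network service in charge
    systemctl disable NetworkManager
    systemctl enable network
    systemctl set-default multi-user.target   # back from graphical
    systemctl disable lightdm                 # no display manager needed with x2go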

BTW, there was a libssh update that broke x2go but they should have that fixed RSN.

Multi-purpose OS Templates

I also decided to play with LXC some on my Fedora 22 physical desktop. I found a libvirt-related recipe for LXC on Fedora. Even though it was a little dated it was very helpful.

The yum-install-in-chroot method of building a container filesystem really didn't work for me. I guess I just didn't have a complete enough package list or maybe a few things have changed since Fedora 20. I decided to re-purpose my Fedora 22 OpenVZ OS Template. I extracted it to a directory and then edited a few network related files (/etc/sysconfig/network, removed /etc/sysconfig/network-scripts/ifcfg-venet*, and added an ifcfg-eth0 file). I also chroot'ed into the directory and set a root password and created a user account that I added to the wheel group for sudo access.
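The whole conversion is only a few commands; roughly like this, with the paths, template file name, and user name as placeholders:

    mkdir -p /var/lib/libvirt/filesystems/f22
    tar xzf fedora-22-x86_64.tar.gz -C /var/lib/libvirt/filesystems/f22
    # swap the venet config for a plain eth0 one
    rm /var/lib/libvirt/filesystems/f22/etc/sysconfig/network-scripts/ifcfg-venet*
    vi /var/lib/libvirt/filesystems/f22/etc/sysconfig/network-scripts/ifcfg-eth0
    vi /var/lib/libvirt/filesystems/f22/etc/sysconfig/network
    # set a root password and a sudo-capable user from inside a chroot
    chroot /var/lib/libvirt/filesystems/f22 /bin/bash
    passwd root
    useradd -G wheel someuser
    passwd someuser
    exit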

After a minute or so for the minor modifications (and having left the chroot'ed environment) I did the virt-install command to create a libvirt managed LXC container using the new Fedora 22 directory / filesystem... and bingo bango that worked. I also added some GUI stuff and just like with OpenVZ I had to disable NetworkManager or it broke networking in the container. Anyway... running an LXC container is a lot like OpenVZ on a mainline kernel... just without all of the resource management and working security. Baby steps.
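For reference, a virt-install run for a directory-backed libvirt LXC container looks something like this (the name, sizes, and filesystem path are examples; the virt-install man page covers the container options):

    virt-install --connect lxc:/// \
        --name f22lxc --ram 1024 --vcpus 2 \
        --filesystem /var/lib/libvirt/filesystems/f22,/ \
        --init /sbin/init

    virsh -c lxc:/// list              # then manage it like any other libvirt guest
    virsh -c lxc:/// console f22lxc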

Containers Taken Too Far?

While hunting down some videos on Docker I ran into RancherVM. What is that? To quote from their description:

RancherVM is a new open source project from Rancher Labs that makes it simple to run KVM inside a Docker container.

What the heck? Run KVM VMs inside of Docker containers? Why would anyone want to do that? Well, so you can embed KVM VM disk images inside of Docker images... and deploy a KVM VM (almost) as easily as a Docker container. That kind of makes my head hurt just thinking about running a Windows 7 Desktop inside of a Docker container... but someone out there is doing that. Yikes!

-----
Thanks to Vlada Catalic for translating this article into Bosnian.

libvirt begins to add OpenVZ support


I noticed a blog posting by Daniel Veillard on Fedora People about initial support for OpenVZ being added to libvirt. If you aren't familiar with libvirt, it is an underlying library/API that can be used by higher level tools to create, manage, and monitor virtual machines. libvirt is trying to be technology agnostic by supporting several virtualization technologies. They started off with Xen and QEMU but have since added KVM. libvirt is used by the GUI tool Virtual Machine Manager, which first appeared in Fedora Core (now Fedora) and later became part of Red Hat Enterprise Linux 5.

Looking at some of the postings in the libvirt mailing list archive for this month, it is mentioned that adding OpenVZ support is a bit different from previous technologies because the OpenVZ tools are already GPLed, "simple and straight forward", and that the OpenVZ additions to libvirt "ends up looking very close to the original". I don't know how far away complete support for OpenVZ is in libvirt nor when it will show up in Virtual Machine Manager but I definitely look forward to it... although I doubt it would completely replace vzctl and the other OpenVZ tools for me.
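If the OpenVZ driver ends up working like the existing ones, I'd expect managing containers through virsh to look something like this; the connection URI and container ID here are my guess at how it will shake out:

    virsh -c openvz:///system list --all   # see the containers libvirt knows about
    virsh -c openvz:///system start 101    # start container 101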

