Let’s Brew!

Where to start? There are many blogs on homebrewing from folks with lots of experience, so I won’t be providing any earth-shattering information for knowledgeable brewers. But I can offer a novice’s perspective that might be interesting for others considering diving into homebrewing. Along the way perhaps I’ll be lucky enough to get some feedback and tips from the pros! I’ll first talk briefly about the ingredients used to make beer and then discuss the process of turning those ingredients into the tasty beverage we all love, using the Clawhammer BIAB system I discussed in a previous post.

Beer is actually made from a simple list of ingredients: water, barley, hops, and yeast. All are very important to the quality of the final product. Although the ingredient list is simple, the chemical characteristics of water, the types of barley and the degree of malting, the families of hops, and the strains of yeast vary greatly, which accounts for the seemingly endless varieties of beer we see. Some beers include ingredients like fruit, honey, and other sweeteners, but IMO those are for amateurs and kids and will not be considered in this blog post. We’ll save those ingredients for the ice cream and pies!

The beer we are brewing here is a recipe from a good friend and fellow brewer, which he named Baldy Chutes Pale Ale in honor of the famed ski run at Alta Ski Area. BTW, in the winters when not brewing or working I can often be spotted at Alta :-). Ok, back to making beer. Let’s review the ingredients: water, malted barley, hops, and yeast. Many brewers say that yeast is the secret sauce of a beer. I don’t have enough experience to confirm or deny this rumor, but I understand its role in the process well enough to know it is important. Therefore I make a yeast starter three days before brewing. Being one who likes simplicity, I use a canned yeast starter instead of preparing one traditionally. I add the bagged yeast, the canned starter, and water to a flask and drop in a magnetic stir bar.

The flask is placed on a stir plate that slowly stirs the starter to ensure the yeast has oxygen to flourish. I try to keep the concoction at a constant temperature, ideally 68F, but it’s really at the mercy of the temperature of my office. On the morning of brewing, I place the flask in the refrigerator so the yeast will separate from the starter and settle to the bottom.

Another task I’ve learned to do the night before brewing is filling the boil pot with cold tap water, particularly in the winter months when the water is very cold. This lets the water rise to room temperature overnight without using electricity, saving a few electrons for the conservation-minded folks. One other item to consider the night before brewing, or even before starting your yeast starter, is that brewing takes some time. I’ve found I need to reserve 5-6 hours for preparation, actual brewing, and cleanup. I don’t start a brew session unless I have the day free, or at least 8 hours to give some buffer in case of catastrophe.

For my last brew, I did all the preparation the night before. I installed the boil pot heating element, inserted the grain basket, connected the circulation pump, set up the controller and temperature probe, filled the pot with water per the recipe, and added a Campden tablet per gallon of water to help neutralize the city water’s chlorine. On brew day I only had to turn on the controller and heat the water to mash-in temperature.

Once the water is at mash-in temperature, it is time to slowly stir in the milled grains of your recipe. Avoid dumping in too much grain at once; otherwise it is likely to form dough balls that act like little safes for the grain’s sugars, and we won’t be able to extract the sugar locked inside them. So pour in your grains slowly, giving the mixture a good stir each time grain is added, being careful not to damage the grain basket. After all the grain is thoroughly mixed with the water in the grain basket and the mixture has returned to mash-in temperature, start circulating the water over the grains using the system’s pump. Typical beer recipes call for an hour or so of mashing, a process meant to extract the grain’s sugars and produce our prized wort.

The Clawhammer system makes mash-out easy: the grain basket and hooks form a temporary ledge where the basket can rest while the sugar-saturated water drains from the grains. While the grains are draining, the controller can be adjusted to begin bringing the wort to boil temperature. After the grains have mostly drained, they can be removed from the pot and placed in a 5-gallon bucket for further draining.

This allows covering the boil pot with the lid and a towel or other insulation to help reach boil temperature faster. I’ve found it can take 15-20 minutes to reach boil temperature with the 110 volt element. You should experiment to find your boil temperature; most recipes I brew boil around 200F at my house, which sits at 4,600 feet elevation. The primary advice I can give is to keep a close eye on the pot once you’ve reached 195F to avoid the dreaded boil-over. A wort boil-over can make an awful mess and extend your brew time with extra cleaning, not to mention the hazard of erupting sugar water at boil temperature!

Most of the recipes I’ve attempted thus far call for a 60-minute boil. At various intervals of the boil your recipe may call for adding hops, yeast nutrient, and Whirlfloc tablets or Irish moss, the latter two being used to help clarify your beer. Hops not only add unique flavors to beer but also bitter it to offset the sweetness of the barley sugars. I enjoy taking the occasional whiff of the hops before adding them to the boil; there’s something about the smell of fresh hop pellets that’s hard to resist. Interestingly, hops are a relative of marijuana and are grown similarly, where only the female plants are allowed to thrive and male plants are removed as soon as they are identified.

It’s all downhill after the boil is complete. There is still lots of cleaning to do, but the real time-consuming part of the brew is done. It is time to chill the wort to fermenting temperature, which is dictated by your recipe; most ales are fermented at 68F. Lagering is a more time-consuming process where fermentation is done slowly at various temperatures. I have not yet done any lagering since you really need another chest freezer or refrigerator with a dual-stage controller to precisely control the temperature of the fermenting beer.

Sanitation of all equipment is a must after the boil to avoid contaminating and spoiling your beer. I use Star San non-rinse sanitizer for all of my post-boil brew equipment: carboys, kegs, siphon tubes, etc. After thoroughly sanitizing a carboy and siphon tube I transfer the wort into the carboy for fermentation, collecting a bit of the wort along the way to take a starting gravity (SG) reading.

Before adding the yeast, I like to aerate the wort with a small pump, similar to a fish tank aerator. This ensures there is plenty of oxygen in the wort for the yeast to thrive and turn all that sweet sugar into alcohol! I then decant most of the yeast starter, leaving just enough liquid to loosen the thick yeast layer on the bottom of the flask so that the all-important yeast can be poured completely out of the flask.

After the yeast is added to the wort it is time to seal the carboy with an airlock and let the fermentation begin! I wrap my carboy in a heater to keep the temperature consistent. For ales I can always find a place in my house that will be at or below 68F, even in the summer months.

I always like to keep an eye on my beer for a few days into the fermentation process. Once the yeast really gets after the sugars, the mixture will look very cloudy and hopefully you’ll see lots of bubbles in the airlock, indicating the yeast is happy and doing its job. I always find fermentation fascinating. To me, it is a bit of magic. I agree with the old adage that brewers make wort and yeast makes beer!

About halfway through the fermentation process I like to transfer the beer to a secondary carboy. A lot of yeast waste, hop bits, and small particles of barley will settle to the bottom of the primary fermentor, so this is a good time to transfer the contents to another fermentor and leave the baggage behind. Strictly speaking it is not needed, but I like a non-cloudy beer so I put in a bit of extra time with the secondary fermentor.

After the second-stage fermentation is complete it is time to transfer the beer to a keg for carbonation. This is also the time to collect a bit of the flat, fermented beer for a final gravity (FG) reading, which allows us to determine the alcohol content of the beer. A common approximation for Alcohol By Volume (ABV) is: ABV% = (SG - FG)/0.00776. Or, for the pedantic math nerds out there, the roughly equivalent: ABV% = (SG - FG)*131.
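
As a quick sanity check with made-up gravity readings of 1.052 and 1.012:

# back-of-the-envelope ABV calculation (example numbers, not from a real batch)
echo "(1.052 - 1.012) * 131" | bc -l
# => 5.240, i.e. roughly 5.2% ABV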

If there are no existing kegs on tap, I carbonate a freshly kegged beer at 22 PSI for a week. If my gas system is in use for pouring (10-12 PSI) I’ll let the new keg carbonate for two weeks, at which point we reap the rewards of our labor! Fresh beer with no bullshit sweeteners, additives, or preservatives. Beer as it was meant to be enjoyed!


BIAB Homebrewing

Being a beer lover, I always wanted to brew my own beer but lacked the time, money, and space required for making my own liquid bread. Over my travels and the many places I’ve called home I’ve met several homebrewers and even spent afternoons helping brew batches of beer. While lending a helping hand I quickly realized the equipment needed to make beer from all-grain ingredients was the barrier for me. That all changed last fall when I helped a good friend make a batch of an Alaskan Amber clone. He had recently invested in a new Brew In A Bag (BIAB) system from Clawhammer Supply and I was immediately impressed with the functionality and usability of the system. I decided that afternoon it was time to start brewing my own beer!  I had found a brew system that provided the balance of simplicity and functionality I was looking for. Along with more free time in my life now that the kids are older and more independent, it’s a great time to start a new hobby!

The Clawhammer BIAB system includes all the components you need to create your favorite all-grain beer. It consists of a 10 gallon stainless steel boil pot, a 110 volt heating element (a 220 volt option is available at substantially more cost), a controller for heat and pump, a temperature probe, a 110 volt pump, a plate wort chiller, grain and hop baskets, high-temperature restaurant-grade rubber hose, and all the quick-release fittings and valves needed to connect the system components. The system requires assembly, but that was made easy by the helpful videos provided by Clawhammer Supply. That leads me to my only complaint about the system: lack of written documentation. Videos are fine, but I’m old school and generally like to read shit :-).


When deciding on the 110 vs 220 volt system, I read many reviews stating the 110 system was adequate if the boil pot was insulated. Clawhammer even sells an insulation kit for their boil pots. So I opted for the 110 volt element and fabricated my own insulation wrap for the pot with some insulation bought at the local hardware store.

I already mentioned how I like the Clawhammer system’s simplicity. You have a boil pot that heats water with an electric element, a large grain basket that fits inside the pot, and a pump to circulate hot water over the grains to extract their all-important sugars. After mashing the grains and boiling the wort, simply insert the wort chiller in the system to cool the wort before transferring it to fermentation storage. The system is also relatively mobile, opening up the possibility of taking it to the Henry’s Lake place for some Idaho brewing! Perhaps the picture below of the entire system in action chilling a pale ale wort will help you understand my excitement around this system. In a followup post I’ll describe the steps for creating an all-natural, preservative- and additive-free beer with the Clawhammer BIAB system. Until then, enjoy a few cold ones!

[Photo: the complete Clawhammer BIAB system in action, chilling a pale ale wort]

libvirt support for Xen’s new libxenlight toolstack

I had the pleasure of meeting Russell Pavlicek, who shares Xen community management responsibilities with Lars Kurth, at SUSECon last November and he, along with Dario Faggioli of the Xen community, poked me about writing a blog post describing the state of Xen support in libvirt.  I found this to be an excellent idea and good reason to get off my butt and write!  Far too much time passes between my musings.

Xen has had a long history in libvirt.  In fact, it was the first hypervisor supported by libvirt.  I’ve witnessed an incredible evolution of libvirt over the years and now not only does it support managing many hypervisors such as Xen, KVM/QEMU, LXC, VirtualBox, Hyper-V, ESX, etc., but it also supports managing a wide range of host subsystems used in a virtualized environment such as storage pools and volumes, networks, network interfaces, etc.  It has really become the Swiss Army knife of virtualization management on Linux, and Xen has been along for the entire ride.

libvirt supports multiple hypervisors via a hypervisor driver interface, which is defined in $libvirt_root/src/driver.h – see struct _virDriver.  libvirt’s virDomain* APIs map to functions in the hypervisor driver interface, which are implemented by the various hypervisor drivers.  The drivers are located under $libvirt_root/src/<hypervisor-name>.  Typically, each driver has a $libvirt_root/src/<hypervisor-name>/<hypervisor-name>_driver.c file which defines a static instance of virDriver and fills in the functions it implements.  As an example, see the definition of libxlDriver in $libvirt_root/src/libxl/libxl_driver.c, the first few lines of which are

static virDriver libxlDriver = {
    .no = VIR_DRV_LIBXL,
    .name = "xenlight",
    .connectOpen = libxlConnectOpen, /* 0.9.0 */
    .connectClose = libxlConnectClose, /* 0.9.0 */
    .connectGetType = libxlConnectGetType, /* 0.9.0 */
    ...
};

The original Xen hypervisor driver is implemented using a variety of Xen tools: xend, xm, xenstore, and the hypervisor domctl and sysctl interfaces.  All of these “sub-drivers” are controlled by an “uber driver” known simply as the “xen driver”, which resides in $libvirt_root/src/xen/.  When an API in the hypervisor driver is called on a Xen system, e.g. virDomainCreateXML, it makes its way to the xen driver, which funnels the request to the most appropriate sub-driver.  In most cases, this is the xend sub-driver, although the other sub-drivers are used for some APIs.  And IIRC, there are a few APIs for which the xen driver will iterate over the sub-drivers until the function succeeds.  I like to refer to this xen driver, and its collection of sub-drivers, as the “legacy Xen driver”.  Due to its heavy reliance on xend, and xend’s deprecation in the Xen community, the legacy driver became just that: legacy.  With the introduction of libxenlight (aka libxl), libvirt needed a new driver for Xen.

In 2011 I had a bit of free time to work on a hypervisor driver for libxl, committing the initial driver in 2b84e445.  As mentioned above, this driver resides in $libvirt_root/src/libxl/.  Subsequent work by SUSE, Univention, Red Hat, Citrix, Ubuntu, and other community contributors has resulted in a quite functional libvirt driver for the libxl toolstack.

The libxl driver only supports Xen >= 4.2.  The legacy Xen driver should be used on earlier versions of Xen, or on installations where the xend toolstack is used.  In fact, if xend is running, the libxl driver won’t even load.  So if you want to use the libxl driver but have xend running, xend must be shut down, followed by a restart of libvirtd to load the libxl driver.  Note that if xend is not running, the legacy Xen driver will not load.
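
For example, on a SUSE-style init setup (service names will differ on other distros), the switch amounts to something like:

# stop the xend toolstack so the libxl driver can be used
rcxend stop
# restart libvirtd so it loads the libxl driver instead of the legacy xen driver
rclibvirtd restart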

Currently, there are a few differences between the libxl driver and the legacy Xen driver.  First, the libxl driver is clueless about domains created by other libxl applications such as xl.  ‘virsh list’ will not show domains created with ‘xl create …’.  This is not the case with the legacy Xen driver, which is just a broker to xend; any domains managed by xend are also manageable with the legacy Xen driver.  Users of the legacy Xen driver in libvirt are probably well aware that ‘virsh list’ will show domains defined with ‘xm new …’ or created with ‘xm create …’, and might be a bit surprised to find this is not the case with the libxl driver.  But this could be addressed by implementing functionality similar to the ‘qemu-attach’ capability supported by the QEMU driver, which allows “importing” a QEMU instance created directly with e.g. ‘qemu -m 1024 -smp …’.  Contributions are warmly welcomed if this functionality is important to you :-).
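
For reference, here is roughly what the existing QEMU capability looks like from virsh; nothing comparable exists for the libxl driver yet, and the PID below is just a placeholder:

# import an externally started qemu process into the libvirt QEMU driver
virsh qemu-attach 12345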

A second difference between the libxl and legacy Xen drivers is related to the first one.  xend is the stateful service in the legacy stack, maintaining the state of defined and running domains.  As a result, the legacy libvirt Xen driver is stateless, generally forwarding requests to xend and allowing xend to maintain state.  In the new stack, however, libxl is stateless.  Therefore, the libvirt libxl driver itself must now maintain the state of all domains.  An interesting side effect of this is losing all your domains when upgrading from libvirt+xend to libvirt+libxl.  For a smooth upgrade, all running domains should be shut down and their libvirt domXML configuration exported for post-upgrade import into the libvirt libxl driver.  For example, roughly:

# before the upgrade, while still connected to the legacy Xen driver
for dom in $(virsh list --name); do
    virsh shutdown "$dom"
    virsh dumpxml "$dom" > "$dom.xml"
done
# perform the xend -> libxl upgrade, then restart libvirtd
for xml in *.xml; do
    virsh define "$xml"
done

It may also be possible to import xend managed domains after upgrading to libxl.  On most installations, the configuration of xend managed domains is stored in /var/lib/xend/domains/<dom-uuid>/config.sxp.  Since the legacy Xen driver already supports parsing SXP, this code could be used to read any existing xend managed domains and import them into libvirt.  I will need to investigate the feasibility of this approach, and report any findings in a future blog post.

The last (known) difference between the drivers is the handling of domain0.  The legacy xen driver handles domain0 as any other domain.  The libxl driver currently treats domain0 as part of the host, thus e.g. it is not shown in ‘virsh list’.  This behavior is similar to the QEMU driver, but is not necessarily correct.  After all, domain0 is just another domain in Xen, which can have devices attached and detached, memory ballooned, etc., and should probably be handled as such by the libvirt libxl driver.  Contributions welcomed!

Otherwise, the libxl driver should behave the same as the legacy Xen driver, making xend to libxl upgrades quite painless, outside of the statefulness issue discussed above.  Any other differences between the legacy Xen driver and the libxl driver are bugs, or missing features.  After all, the goal of libvirt is to insulate users from underlying churn in hypervisor-specific tools.

At the time of this writing, the important missing features in the libxl driver relative to the legacy Xen driver are PCI passthrough and migration.  Chunyan Liu has provided patches for both of these features, the first of which is close to committing upstream, IMO:

https://www.redhat.com/archives/libvir-list/2014-January/msg00400.html
https://www.redhat.com/archives/libvir-list/2013-September/msg00667.html

The libxl driver is also in need of improved parallelization.  Currently, long running operations such as create, save, restore, core dump, etc. lock the driver, blocking other operations, even those that simply get state.  I have some initial patches that introduce job support in the libxl driver, similar to the QEMU driver.  These patches allow classifying driver operations into jobs that modify state, and thus block any other operations on the domain, and jobs that can run concurrently.  Bamvor Jian Zhang is working on a patch series to make use of libxl’s asynchronous variants of these long running operations.  Together, these patch sets will greatly improve parallelism in the libxl driver, which is certainly important in, for example, cloud environments where many virtual machine instances can be started in parallel.

Beyond these sorely needed features and improvements, there is quite a bit of work required to reach feature parity with the QEMU driver, where it makes sense.  The hypervisor driver interface currently supports 193 functions, 186 of which are implemented in the QEMU driver.  By contrast, only 86 functions are implemented in the libxl driver.  To be fair, quite a few of the unimplemented functions don’t apply to Xen and will never be implemented.  Nonetheless, for any enthusiastic volunteers, there is quite a bit of work to be done in the libvirt libxl driver.

Although I thoroughly enjoy working on libvirt and have healthy respect for the upstream community, my available time to work on upstream libvirt is limited.  Currently, I’m the primary maintainer of the Xen drivers, so my limited availability is a bottleneck.  Other libvirt maintainers review and commit Xen stuff, but their primary focus is on the rapid development of other hypervisor drivers and host subsystems.  I’m always looking for help in not only implementation of new features, but also reviewing and testing patches from other contributors.  If you are part of the greater Xen ecosystem, consider lending a hand with improving Xen support in libvirt!

Xen live migration in OpenStack Grizzly

I recently experimented with live migration in an OpenStack Grizzly cluster of Xen compute nodes and thought it useful to write a short blog post for the benefit of others interested in OpenStack Xen deployments.

My OpenStack Grizzly cluster consisted of three nodes: One controller node hosting most of the OpenStack services (rabbitmq, mysql, cinder, keystone, nova-api, nova-scheduler, etc.) and two Xen compute nodes running nova-compute, nova-network, and nova-novncproxy.  All nodes were running fully patched SLES11 SP2.  devstack was used to deploy the OpenStack services.

For the most part, I used the OpenStack documentation for configuring live migration to set up the environment.  The main configuration tasks include the following (a consolidated sketch of these steps follows the list):

  1. Configuring shared storage on the compute nodes involved in live migration.  I took the simple approach and used NFS for shared storage between the compute nodes, mounting an NFS share at /opt/stack/data/nova/instances in each compute node.
  2. Ensure that the UID and GID of your nova (or stack) and libvirt users are identical between each of your servers. This ensures that the permissions on the NFS mount will work correctly.
  3. Ensure the shared directory has ‘execute/search’ bit set, e.g. chmod o+x /opt/stack/data/nova/instances
  4. Ensure the firewall is properly configured to allow migrations.  For Xen, port 8002 needs to be open; this is the port xend listens on for migration requests.
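
For concreteness, here is a rough sketch of steps 1 through 4 on a compute node; the NFS export path, user name, and firewall command are examples rather than prescriptions, so adjust to your environment.

# mount the shared instances directory (step 1); the export path is an example
mount -t nfs nfs-server:/export/nova-instances /opt/stack/data/nova/instances
# verify the nova/stack user and group IDs match across nodes (step 2)
id stack
# make the shared directory searchable by other users (step 3)
chmod o+x /opt/stack/data/nova/instances
# open the xend relocation port for migration requests (step 4)
iptables -A INPUT -p tcp --dport 8002 -j ACCEPT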

In addition to the steps described in the OpenStack documentation, the following configuration needs to be performed on the Xen compute nodes (the snippets are gathered into a single view after the list)

  1. Enable migration (aka relocation) in /etc/xen/xend-config.sxp: (xend-relocation-server yes)
  2. Define a list of hosts allowed to connect to the migration port in /etc/xen/xend-config.sxp.  To allow all hosts, leave the list empty:  (xend-relocation-hosts-allow '')
  3. Set the ‘live_migration_flag’ option in /etc/nova/nova.conf.  In the legacy xm/xend toolstack, xend implements all of the migration logic.  Unlike the libvirt qemu driver, the libvirt legacy xen driver can only pass the migration request to the Xen toolstack, so the only migration flags needed are VIR_MIGRATE_LIVE and VIR_MIGRATE_UNDEFINE_SOURCE:  live_migration_flag=VIR_MIGRATE_LIVE,VIR_MIGRATE_UNDEFINE_SOURCE
  4. Set the live_migration_uri option in /etc/nova/nova.conf.  The default for this option is ‘qemu+tcp://%s/system’.  For Xen, this needs to be ‘xenmigr://%s/system’:  live_migration_uri = xenmigr://%s/system
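
Gathered into one place, the relevant bits of the two files on each compute node end up looking like this (these are just the snippets from the list above):

# /etc/xen/xend-config.sxp
(xend-relocation-server yes)
(xend-relocation-hosts-allow '')

# /etc/nova/nova.conf
live_migration_flag=VIR_MIGRATE_LIVE,VIR_MIGRATE_UNDEFINE_SOURCE
live_migration_uri=xenmigr://%s/system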

After these configuration steps, restart xend and nova-compute on the Xen compute nodes to reload the new configuration.  Your OpenStack Xen cluster should now be able to perform live migration as per the OpenStack Using Migration documentation.

On my small cluster, xen71 is the controller node and xen76 and xen77 are the Xen compute nodes.  I booted an instance of a SLES11 SP2 Xen HVM image that was provisioned on xen76.

stack@xen71:~> nova list
+--------------------------------------+------------------------+--------+--------------------+
|ID                                    | Name                   | Status | Networks           |
+--------------------------------------+------------------------+--------+--------------------+
| 6b45baa2-3dc2-420c-a7ab-aad25fc1aa2a | sles11sp2-xen-hvm-test | ACTIVE | private=10.4.128.2 |
+--------------------------------------+------------------------+--------+--------------------+
stack@xen71:~> nova show 6b45baa2-3dc2-420c-a7ab-aad25fc1aa2a
+-------------------------------------+----------------------------------------------------------+
| Property                            | Value                                                    |
+-------------------------------------+----------------------------------------------------------+
| status                              | ACTIVE                                                   |
| updated                             | 2013-04-05T17:27:16Z                                     |
| OS-EXT-STS:task_state               | None                                                     |
| OS-EXT-SRV-ATTR:host                | xen76                                                    |
| key_name                            | None                                                     |
| image                               | SLES11SP2-xen-hvm (5b39e6b3-bc3f-4fb0-81a0-b115cb8ada80) |
| private network                     | 10.4.128.2                                               |
| hostId                              | cca619a77da34c0c26001fb2438d7cce6a5da6408ae8ec111401f627 |
| OS-EXT-STS:vm_state                 | active                                                   |
| OS-EXT-SRV-ATTR:instance_name       | instance-0000000a                                        |
| OS-EXT-SRV-ATTR:hypervisor_hostname | xen76.virt.lab.novell.com                                |
| flavor                              | m1.tiny (1)                                              |
| id                                  | 6b45baa2-3dc2-420c-a7ab-aad25fc1aa2a                     |
| security_groups                     | [{u'name': u'default'}]                                  |
| user_id                             | 86b77dee688e4eff957865205d27464a                         |
| name                                | sles11sp2-xen-hvm-test                                   |
| created                             | 2013-04-05T17:10:04Z                                     |
| tenant_id                           | 0833047bb70d4b38874328aad83b7140                         |
| OS-DCF:diskConfig                   | MANUAL                                                   |
| metadata                            | {}                                                       |
| accessIPv4                          |                                                          |
| accessIPv6                          |                                                          |
| progress                            | 0                                                        |
| OS-EXT-STS:power_state              | 1                                                        |
| OS-EXT-AZ:availability_zone         | nova                                                     |
| config_drive                        |                                                          |
+-------------------------------------+----------------------------------------------------------+

Now let’s migrate the instance to the xen77 compute node

stack@xen71:~> nova live-migration 6b45baa2-3dc2-420c-a7ab-aad25fc1aa2a xen77

While the migration is in progress, we can see the status and task state as migrating

stack@xen71:~> nova show 6b45baa2-3dc2-420c-a7ab-aad25fc1aa2a
+-------------------------------------+----------------------------------------------------------+
| Property                            | Value                                                    |
+-------------------------------------+----------------------------------------------------------+
| status                              | MIGRATING                                                |
| updated                             | 2013-04-05T20:16:27Z                                     |
| OS-EXT-STS:task_state               | migrating                                                |
| OS-EXT-SRV-ATTR:host                | xen76                                                    |
| key_name                            | None                                                     |
| image                               | SLES11SP2-xen-hvm (5b39e6b3-bc3f-4fb0-81a0-b115cb8ada80) |
| private network                     | 10.4.128.2                                               |
| hostId                              | cca619a77da34c0c26001fb2438d7cce6a5da6408ae8ec111401f627 |
| OS-EXT-STS:vm_state                 | active                                                   |
| OS-EXT-SRV-ATTR:instance_name       | instance-0000000a                                        |
| OS-EXT-SRV-ATTR:hypervisor_hostname | xen76.virt.lab.novell.com                                |
| flavor                              | m1.tiny (1)                                              |
| id                                  | 6b45baa2-3dc2-420c-a7ab-aad25fc1aa2a                     |
| security_groups                     | [{u'name': u'default'}]                                  |
| user_id                             | 86b77dee688e4eff957865205d27464a                         |
| name                                | sles11sp2-xen-hvm-test                                   |
| created                             | 2013-04-05T17:10:04Z                                     |
| tenant_id                           | 0833047bb70d4b38874328aad83b7140                         |
| OS-DCF:diskConfig                   | MANUAL                                                   |
| metadata                            | {}                                                       |
| accessIPv4                          |                                                          |
| accessIPv6                          |                                                          |
| OS-EXT-STS:power_state              | 1                                                        |
| OS-EXT-AZ:availability_zone         | nova                                                     |
| config_drive                        |                                                          |
+-------------------------------------+----------------------------------------------------------+

Once the migration completes, we can see that the instance is now running on xen77

stack@xen71:~> nova show 6b45baa2-3dc2-420c-a7ab-aad25fc1aa2a
+-------------------------------------+----------------------------------------------------------+
| Property                            | Value                                                    |
+-------------------------------------+----------------------------------------------------------+
| status                              | ACTIVE                                                   |
| updated                             | 2013-04-05T20:11:37Z                                     |
| OS-EXT-STS:task_state               | None                                                     |
| OS-EXT-SRV-ATTR:host                | xen77                                                    |
| key_name                            | None                                                     |
| image                               | SLES11SP2-xen-hvm (5b39e6b3-bc3f-4fb0-81a0-b115cb8ada80) |
| private network                     | 10.4.128.2                                               |
| hostId                              | cabdc8468130edd0f85440f1b2922419b359b3da36a40de98713dbda |
| OS-EXT-STS:vm_state                 | active                                                   |
| OS-EXT-SRV-ATTR:instance_name       | instance-0000000a                                        |
| OS-EXT-SRV-ATTR:hypervisor_hostname | xen76.virt.lab.novell.com                                |
| flavor                              | m1.tiny (1)                                              |
| id                                  | 6b45baa2-3dc2-420c-a7ab-aad25fc1aa2a                     |
| security_groups                     | [{u'name': u'default'}]                                  |
| user_id                             | 86b77dee688e4eff957865205d27464a                         |
| name                                | sles11sp2-xen-hvm-test                                   |
| created                             | 2013-04-05T17:10:04Z                                     |
| tenant_id                           | 0833047bb70d4b38874328aad83b7140                         |
| OS-DCF:diskConfig                   | MANUAL                                                   |
| metadata                            | {}                                                       |
| accessIPv4                          |                                                          |
| accessIPv6                          |                                                          |
| progress                            | 0                                                        |
| OS-EXT-STS:power_state              | 1                                                        |
| OS-EXT-AZ:availability_zone         | nova                                                     |
| config_drive                        |                                                          |
+-------------------------------------+----------------------------------------------------------+

You might notice a small bug here: OS-EXT-SRV-ATTR:hypervisor_hostname has not been updated to reflect that xen77 is now running the instance.  A minor issue that I will add to my list of bugs needing investigation.

libvirt sanlock integration in openSUSE Factory

A few weeks back I found some time to package sanlock for openSUSE Factory, which subsequently allowed enabling the libvirt sanlock driver.  And how might this be useful?  When running qemu/kvm virtual machines on a pool of hosts that are not cluster-aware, it may be possible to start a virtual machine on more than one host, potentially corrupting the guest filesystem.  To prevent such an unpleasant scenario, libvirt+sanlock can be used to protect the virtual machine’s disk images, ensuring we never have two qemu/kvm processes writing to an image concurrently.  libvirt+sanlock provides protection against starting the same virtual machine on different hosts, or adding the same disk to different virtual machines.

In this blog post I’ll describe how to install and configure sanlock and the libvirt sanlock plugin.  I’ll briefly cover lockspace and resource creation, and show some examples of specifying disk leases in libvirt, but users should become familiar with the wdmd (watchdog multiplexing daemon) and sanlock man pages, as well as the lease element specification in libvirt domainXML.  I’ve used SLES11 SP2 hosts and guests for this example, but have also tested a similar configuration on openSUSE 12.1.

The sanlock and sanlock-enabled libvirt packages can be retrieved from a Factory repository or a repository from the OBS Virtualization project.  (As a side note, for those that didn’t know, Virtualization is the development project for virtualization-related packages in Factory.  Packages are built, tested, and staged in this project before submitting to Factory.)

After configuring the appropriate repository for the target host, update libvirt and install sanlock and libvirt-lock-sanlock.
# zypper up libvirt libvirt-client libvirt-python
# zypper in sanlock libsanlock1 libvirt-lock-sanlock

Enable the watchdog and sanlock daemons.
# insserv wdmd
# insserv sanlock

Specify the sanlock lock manager in /etc/libvirt/qemu.conf.
lock_manager = "sanlock"

The suggested libvirt sanlock configuration uses NFS for shared lock space storage.  Mount a share at the default mount point.
# mount -t nfs nfs-server:/export/path /var/lib/libvirt/sanlock

These installation steps need to be performed on each host participating in the sanlock-protected environment.

libvirt provides two modes for configuring sanlock.  The default mode requires a user or management application to manually define the sanlock lockspace and resource leases, and then describe those leases with a lease element in the virtual machine XML configuration.  libvirt also supports an auto disk lease mode, where libvirt will automatically create a lockspace and lease for each fully qualified disk path in the virtual machine XML configuration.  The latter mode removes the administrator burden of configuring lockspaces and leases, but only works if the administrator can ensure stable and unique disk paths across all participating hosts.  I’ll describe both modes here, starting with the manual configuration.

Manual Configuration:
First we need to reserve and initialize host_id leases.  Each host that wants to participate in the sanlock-enabled environment must first acquire a lease on its host_id number within the lockspace.  The lockspace requirement for 2000 leases (2000 possible host_ids) is 1MB (8MB for 4k sectors).  On one host, create a 1M lockspace file in the default lease directory (/var/lib/libvirt/sanlock/).
# truncate -s 1M /var/lib/libvirt/sanlock/TEST_LS

And then initialize the lockspace for storing host_id leases.
# sanlock direct init -s TEST_LS:0:/var/lib/libvirt/sanlock/TEST_LS:0

On each participating host, start the watchdog and sanlock daemons and restart libvirtd.
# rcwdmd start; rcsanlock start; rclibvirtd restart

On each participating host, we’ll need to tell the sanlock daemon to acquire its host_id in the lockspace, which will subsequently allow resources to be acquired in the lockspace.
host1:
# sanlock client add_lockspace -s TEST_LS:1:/var/lib/libvirt/sanlock/TEST_LS:0
host2:
# sanlock client add_lockspace -s TEST_LS:2:/var/lib/libvirt/sanlock/TEST_LS:0
hostN:
# sanlock client add_lockspace -s TEST_LS:<hostidN>:/var/lib/libvirt/sanlock/TEST_LS:0

To see the state of host_id leases read during the last renewal
# sanlock client host_status -s TEST_LS
1 timestamp 50766
2 timestamp 327323

Now that we have the hosts configured, time to move on to configuring a virtual machine resource lease and defining it in the virtual machine XML configuration.  First we need to reserve and initialize a resource lease for the virtual machine disk image.
# truncate -s 1M /var/lib/libvirt/sanlock/sles11sp2-disk-resource-lock
# sanlock direct init -r TEST_LS:sles11sp2-disk-resource-lock:/var/lib/libvirt/sanlock/sles11sp2-disk-resource-lock:0

Then add the lease information to the virtual machine XML configuration
# virsh edit sles11sp2

<lease>
  <lockspace>TEST_LS</lockspace>
  <key>sles11sp2-disk-resource-lock</key>
  <target path='/var/lib/libvirt/sanlock/sles11sp2-disk-resource-lock'/>
</lease>

Finally, start the virtual machine!
# virsh start sles11sp2
Domain sles11sp2 started

Trying to start the same virtual machine on a different host will fail, since the resource lock is already leased to another host
other-host:~ # virsh start sles11sp2
error: Failed to start domain sles11sp2
error: internal error Failed to acquire lock: error -243

Automatic disk lease configuration:
As can be seen even with the trivial example above, manual disk lease configuration puts quite a burden on the user, particularly in an ad hoc environment with only a few hosts and no central management service to coordinate all of the lockspace and resource configuration.  To ease this burden, Daniel Berrange added support in libvirt for automatically creating sanlock disk leases.  Once the environment is configured for automatic disk leases, libvirt will handle the details of creating lockspace and resource leases.

On each participating host, edit /etc/libvirt/qemu-sanlock.conf, setting auto_disk_leases to 1 and assigning a unique host_id.
auto_disk_leases = 1
host_id = 1

Then restart libvirtd
# rclibvirtd restart

Now libvirtd+sanlock is configured to automatically acquire a resource lease for each virtual machine disk.  No lease configuration is required in the virtual machine XML configuration.  We can simply start the virtual machine and libvirt will handle all the details for us.

host1 # virsh start sles11sp2
Domain sles11sp2 started

libvirt creates a host lease lockspace named __LIBVIRT__DISKS__.  Disk resource leases are named using the MD5 checksum of the fully qualified disk path.  After starting the above virtual machine, the lease directory contained
host1 # ls -l /var/lib/libvirt/sanlock/
total 2064
-rw-------  1 root root 1048576 Mar 13 01:35 3ab0d33a35403d03e3ad10b485c7b593
-rw-------  1 root root 1048576 Mar 13 01:35 __LIBVIRT__DISKS__

Finally, try to start the virtual machine on another participating host
host2 # virsh start sles11sp2
error: Failed to start domain sles11sp2
error: internal error Failed to acquire lock: error -243

Feel free to try the sanlock and sanlock-enabled libvirt packages from openSUSE Factory or our OBS Virtualization project. One thing to keep in mind is that the sanlock daemon protects resources for some process, in this case qemu/kvm.  If the sanlock daemon is terminated, it can no longer protect those resources and kills the processes for which it holds leases.  In other words, restarting the sanlock daemon will terminate your virtual machines!  If the sanlock daemon is SIGKILL’ed, the watchdog daemon intervenes by resetting the entire host.  With this in mind, it would be wise to choose an appropriate disk cache mode such as ‘none’ or ‘writethrough’ to improve the integrity of your disk images in the event of a mass virtual machine kill-off.
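
For example, the cache mode is set on the disk’s driver element in the domain XML; the snippet below is just an illustration with a made-up image path:

<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source file='/var/lib/libvirt/images/example/disk0.raw'/>
  <target dev='vda' bus='virtio'/>
</disk>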

Removal of 32-bit Xen from openSUSE

As announced in July 2011, the openSUSE Xen maintainers intended to discontinue support for the 32-bit Xen host in openSUSE12.1. Now that 12.1 has been released, we are hearing complaints from users virtualizing on older P4-based systems. I understand their frustration, but given that the upstream Xen community has ignored the 32-bit host, and no other distros are supporting it, we can no longer justify the effort required to support it. Supported 32-bit Xen packages are going the way of the dodo, and dropping them in openSUSE may very well mean extinction.

That said, users still have a few options. First, we have quite stable 32 and 64-bit Xen packages in openSUSE11.4. The Xen version is 4.0.3, which has all the latest upstream fixes and improvements for the 4.0 branch. In fact, the package sources are shared with SLES11 SP1 and benefit from the broader user-base and QA of the enterprise product. openSUSE11.4 contains kernel version 2.6.37, which has excellent support for older P4-based hardware.

Another option is using the openSUSE Build Service to maintain your own 32-bit Xen packages. In fact, the community itself can maintain 32-bit Xen in the Virtualization project if there is enough interest. We will be happy to accept any patches that do not break 64-bit environments :-). One benefit of this option is that the openSUSE Factory Xen packages are developed in the Virtualization project. A community maintained, 32-bit Xen host in this project would be submitted to Factory, and hence included in the next openSUSE release, as part of the overall Xen package submission done by the openSUSE maintainers.

Updated libvirt for openSUSE12.1 RC1

Last week I updated the libvirt package for openSUSE12.1 RC1 / Factory to version 0.9.6. The package was also submitted for SLE11 SP2 Beta8. Changes since the last update include a backport of the AHCI controller patch for the qemu driver. With this patch it is possible to use SATA drives with qemu instances. The following controller device XML is used to specify an AHCI controller

<controller type='sata' index='0'>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</controller>

The libvirt qemu driver supports many AHCI controllers, each with one bus and 6 units. To attach a SATA disk to a unit on an AHCI controller, use the following disk device XML

<disk type='file' device='disk'>
  <driver name='qemu' type='raw'/>
  <source file='/var/lib/libvirt/images/test/disk0.raw'/>
  <target dev='sda' bus='sata'/>
  <address type='drive' controller='0' bus='0' unit='0'/>
</disk>

Also new in this libvirt update is opt-in AppArmor confinement of qemu instances. /etc/libvirt/qemu.conf has been patched to explicitly set the security driver to ‘none’. If AppArmor is enabled on the host, libvirtd is generously confined since it needs access to many utilities and libraries, but users must opt in to also have qemu instances launched by libvirtd confined. Simply edit /etc/libvirt/qemu.conf and change security_driver to ‘apparmor’. Of course, SELinux is also available if users prefer it over AppArmor.
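
In other words, the opt-in is a one-line change in qemu.conf, shown here next to the shipped default for context:

# /etc/libvirt/qemu.conf
# openSUSE ships with confinement of qemu instances explicitly disabled:
#   security_driver = "none"
# change it to opt in to AppArmor confinement:
security_driver = "apparmor"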