<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<body>
<h1>Virtual machine lock manager, virtlockd plugin</h1>

<ul id="toc"></ul>

<p>
This page describes use of the <code>virtlockd</code>
service as a <a href="locking.html">lock driver</a>
plugin for virtual machine disk mutual exclusion.
</p>
<h2><a id="background">virtlockd background</a></h2>
<p>
The virtlockd daemon is a single purpose binary which
focuses exclusively on the task of acquiring and holding
locks on behalf of running virtual machines. It is
designed to offer a low overhead, portable locking
scheme that can be used out of the box on virtualization
hosts with minimal configuration overhead. It makes
use of the POSIX fcntl advisory locking capability
to hold locks, which is supported by the majority of
commonly used filesystems.
</p>
<h2><a id="sanlock">virtlockd daemon setup</a></h2>
<p>
On most operating systems, the virtlockd daemon itself will not
require any upfront configuration work. It is installed by default
when libvirtd is present, and a systemd socket unit is
registered such that the daemon will be automatically
started when first required. On operating systems that predate
systemd, however, it will be necessary to start it at boot time,
prior to libvirtd being started. On RHEL/Fedora distros,
this can be achieved as follows:
</p>

<pre>
# chkconfig virtlockd on
# service virtlockd start
</pre>
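<p>
On systemd based hosts no equivalent manual step should normally
be needed, since the daemon is socket activated on demand. If
desired, the socket activation can be checked or enabled
explicitly; the unit name shown here is the one normally shipped
by libvirt packages, but may vary across distributions:
</p>

<pre>
# systemctl status virtlockd.socket
# systemctl enable virtlockd.socket
</pre>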
<p>
The above instructions apply to the instance of virtlockd
that runs privileged and is used by the privileged libvirtd
daemon. If libvirtd is run as an unprivileged user, it will
always automatically spawn an unprivileged instance of the
virtlockd daemon as well. This requires no setup at all.
</p>
<h2><a id="lockdplugin">libvirt lockd plugin configuration</a></h2>
<p>
Once the virtlockd daemon is running, or set up to autostart,
the next step is to configure the libvirt lockd plugin.
There is a separate configuration file for each libvirt
driver that is using virtlockd. For QEMU, we will edit
<code>/etc/libvirt/qemu-lockd.conf</code>.
</p>
<p>
The default behaviour of the lockd plugin is to acquire locks
directly on the virtual disk images associated with the guest
&lt;disk&gt; elements. This ensures it can run out of the box
with no configuration, providing locking for disk images on
shared filesystems such as NFS. It does not provide any cross
host protection for storage that is backed by block devices,
since locks acquired on device nodes in /dev only apply within
the host. It may also be the case that the filesystem holding
the disk images is not capable of supporting fcntl locks.
</p>
<p>
To address these problems it is possible to tell lockd to
acquire locks on an indirect file. Essentially lockd will
calculate the SHA256 checksum of the fully qualified path,
and create a zero length file in a given directory whose
filename is the checksum. It will then acquire a lock on
that file. Assuming the block devices assigned to the guest
are using stable paths (eg /dev/disk/by-path/XXXXXXX) then
this will allow for locks to apply across hosts. This
feature can be enabled by setting a configuration parameter
that specifies the directory in which to create the lock
files. The directory referred to should of course be
placed on a shared filesystem (eg NFS) that is accessible
to all hosts which can see the shared block devices.
</p>
<pre>
$ su - root
# augtool -s set \
    /files/etc/libvirt/qemu-lockd.conf/file_lockspace_dir \
    "/var/lib/libvirt/lockd/files"
</pre>
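<p>
After running the above command, <code>/etc/libvirt/qemu-lockd.conf</code>
should contain an entry along these lines (shown purely as an
illustration of the expected result):
</p>

<pre>
file_lockspace_dir = "/var/lib/libvirt/lockd/files"
</pre>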
<p>
If the guests are using either LVM or SCSI block devices
for their virtual disks, there is a unique identifier
associated with each device. It is possible to tell lockd
to use this UUID as the basis for acquiring locks, rather
than the SHA256 sum of the file path. The benefit of this
is that the locking protection will work even if the file
paths to the given block device are different on each
host.
</p>
<pre>
$ su - root
# augtool -s set \
    /files/etc/libvirt/qemu-lockd.conf/scsi_lockspace_dir \
    "/var/lib/libvirt/lockd/scsi"
# augtool -s set \
    /files/etc/libvirt/qemu-lockd.conf/lvm_lockspace_dir \
    "/var/lib/libvirt/lockd/lvm"
</pre>
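<p>
These commands should leave <code>/etc/libvirt/qemu-lockd.conf</code>
with entries similar to the following (illustrative only):
</p>

<pre>
scsi_lockspace_dir = "/var/lib/libvirt/lockd/scsi"
lvm_lockspace_dir = "/var/lib/libvirt/lockd/lvm"
</pre>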
<p>
It is important to remember that the changes made to the
<code>/etc/libvirt/qemu-lockd.conf</code> file must be
propagated to all hosts before any virtual machines are
launched on them. This ensures that all hosts are using
the same locking mechanism.
</p>
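<p>
Any file distribution mechanism will do; as one simple sketch,
using a hypothetical host name <code>otherhost</code>, the file
could be copied with scp:
</p>

<pre>
# scp /etc/libvirt/qemu-lockd.conf root@otherhost:/etc/libvirt/qemu-lockd.conf
</pre>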
<h2><a id="qemuconfig">QEMU/KVM driver configuration</a></h2>
<p>
The QEMU driver is capable of using the virtlockd plugin
since release <span>1.0.2</span>.
The out of the box configuration, however, currently
uses the <strong>nop</strong> lock manager plugin.
To get protection for disks, it is thus necessary
to reconfigure the QEMU driver to activate the <strong>lockd</strong>
plugin. This is achieved by editing the QEMU driver
configuration file (<code>/etc/libvirt/qemu.conf</code>)
and changing the <code>lock_manager</code> configuration
tunable.
</p>
<pre>
$ su - root
# augtool -s set /files/etc/libvirt/qemu.conf/lock_manager lockd
# service libvirtd restart
</pre>
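<p>
The resulting <code>/etc/libvirt/qemu.conf</code> should then
contain a line similar to this (shown for illustration):
</p>

<pre>
lock_manager = "lockd"
</pre>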
<p>
Every time you start a guest, the virtlockd daemon will acquire
locks on the disk files directly, or in one of the configured
lookaside directories based on SHA256 sum. To check that locks
are being acquired as expected, the <code>lslocks</code> tool
can be run.
</p>
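<p>
For example, while a guest is running, the locks held by the
privileged virtlockd instance can be listed with something like
the following (assuming a single instance; the output will vary
by host):
</p>

<pre>
# lslocks -p $(pidof virtlockd)
</pre>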
</body>
</html>