When building RPMs, the host kernel cannot be assumed to match
the target OS kernel. Thus auto-detecting /selinux vs
/sys/fs/selinux based on the host kernel can result in the
wrong choice (e.g. F18 builds on a RHEL6 host kernel).
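A hedged sketch of how the spec can pin the path from the target
distro version instead of probing the host (the macro name and the
exact version cutoffs are assumptions):

%if 0%{?fedora} >= 17 || 0%{?rhel} >= 7
%global selinux_mount /sys/fs/selinux
%else
%global selinux_mount /selinux
%endif

%configure --with-selinux-mount=%{selinux_mount}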
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
A previous patch forced libnl-3 and netcf-0.2.2 (which itself requires
libnl-3) when *building* for Fedora 18+ (and RHEL 7+), but the
install-time Requires: for netcf has always been implicit due to
libvirtd linking with libnetcf.so. However, since the API of netcf
didn't change when it was rebuilt to use libnl-3, the internal library
version didn't change either, making it possible (from rpm's point of
view) to upgrade libvirt without upgrading netcf (in reality, that
leads to a segfault - see
https://bugzilla.redhat.com/show_bug.cgi?id=853381).
The solution is to put an explicit Requires: line in libvirt's
specfile for fedora >= 18 and rhel >= 7.
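A hedged sketch of the added lines (the exact package name is an
assumption):

%if 0%{?fedora} >= 18 || 0%{?rhel} >= 7
Requires: netcf-libs >= 0.2.2
%endif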
The libvirt storage driver uses librbd.so for its functionality.
RPM will automatically add a dependency on the library, so there
is no need to have an explicit dependency on the ceph RPM itself.
This allows newer Fedora distros to avoid pulling in the huge
ceph RPM, in favour of just having the libraries installed.
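As a hedged illustration, the automatically generated dependency can
be inspected with rpm (the package name and soname shown are
illustrative):

# rpm -q --requires libvirt-daemon | grep librbd
librbd.so.1()(64bit)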
Everything is ready in both netcf and libvirt to switch over to libnl3
in future releases of both Fedora and RHEL. This needs to be done more
or less simultaneously in both packages, though, because you can't mix
libnl1.1 and libnl3 in the same process (e.g. libvirtd using
libnl-3.so and libnetcf.so, while libnetcf.so uses libnl.so)
This patch does two things when (fedora >= 18 || rhel >= 7):
1) requires libnl3-devel
2) requires netcf-devel-0.2.2 or greater
(the idea is that a similar patch is going into netcf's specfile, so
that when a build of netcf is done on F18 or later (or RHEL7 or later)
netcf will be guaranteed to be built with libnl3 rather than
libnl-1.1)
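A hedged sketch of the corresponding BuildRequires conditional in the
spec:

%if 0%{?fedora} >= 18 || 0%{?rhel} >= 7
BuildRequires: libnl3-devel
BuildRequires: netcf-devel >= 0.2.2
%else
BuildRequires: libnl-devel
%endif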
* configure.ac, spec file: firewalld defaults to enabled if dbus is
available, otherwise it is disabled. If --with-firewalld is explicitly
requested and dbus is not available, configure will fail.
* bridge_driver: add dbus filters to get the FirewallD1.Reloaded
signal and DBus.NameOwnerChanged on org.fedoraproject.FirewallD1.
When these are encountered, reload all the iptables rules of all
libvirt's virtual networks (similar to what happens when libvirtd is
restarted).
* iptables, ebtables: use firewall-cmd's direct passthrough interface
when available, otherwise use the iptables and ebtables commands (see
the example after this list). This decision is made once, the first
time libvirt calls iptables/ebtables, and that decision is maintained
for the life of libvirtd.
* Note that the nwfilter part of this patch was separated out into
another patch by Stefan in V2, so that needs to be revised and
re-reviewed as well.
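A hedged example of the direct passthrough interface mentioned above
(the chain and rule are illustrative, not taken from the patch):

# insert a rule via firewalld instead of running iptables directly
firewall-cmd --direct --passthrough ipv4 -I FORWARD -i virbr0 -j ACCEPT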
================
All the configure.ac and specfile changes are unchanged from Thomas'
V3.
V3 re-ran "firewall-cmd --state" every time a new rule was added,
which was extremely inefficient. V4 uses VIR_ONCE_GLOBAL_INIT to set
up a one-time initialization function.
The VIR_ONCE_GLOBAL_INIT(x) macro references a static function called
vir(Ip|Eb)OnceInit(), which will then be called the first time that
the static function vir(Ip|Eb)TablesInitialize() is called (that
function is defined for you by the macro). This is
thread-safe, so there is no chance of any race.
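The one-time pattern the macro wraps is equivalent to pthread_once().
A minimal self-contained sketch follows (the function names mirror the
description above, but this is an illustration rather than the actual
macro expansion, and the firewall-cmd probe shown is an assumption):

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_once_t ipTablesOnce = PTHREAD_ONCE_INIT;
static bool useFirewalld;

/* Runs exactly once, no matter how many threads race to initialize. */
static void virIpTablesOnceInit(void)
{
    /* "firewall-cmd --state" exits 0 only when firewalld is running. */
    useFirewalld = system("firewall-cmd --state >/dev/null 2>&1") == 0;
}

/* The macro generates an equivalent of this thread-safe entry point. */
static int virIpTablesInitialize(void)
{
    return pthread_once(&ipTablesOnce, virIpTablesOnceInit) == 0 ? 0 : -1;
}

int main(void)
{
    if (virIpTablesInitialize() < 0)
        return EXIT_FAILURE;
    printf("using %s\n", useFirewalld ? "firewall-cmd" : "iptables");
    return EXIT_SUCCESS;
}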
IMPORTANT NOTE: I've left the VIR_DEBUG messages in these two init
functions (one for iptables, one for ebtables) as VIR_WARN so that I
don't have to turn on all the other debug messages just to see
these. Even if this patch doesn't need any other modification, those
messages need to be changed to VIR_DEBUG before pushing.
This one-time initialization works well. However, I've encountered
problems with testing:
1) Whenever I have enabled the firewalld service, *all* attempts to
call firewall-cmd from within libvirtd end with firewall-cmd hanging
internally somewhere. This is *not* the case if firewall-cmd returns
non-0 in response to "firewall-cmd --state" (i.e. *that* command runs
and returns to libvirt successfully.)
2) If I start libvirtd while firewalld is stopped, then start
firewalld later, this triggers libvirtd to reload its iptables rules;
however, it also spits out a *ton* of complaints about deletion failing
(I suppose because firewalld has nuked all of libvirt's rules). I
guess we need to suppress those messages (which is a more annoying
problem to fix than you might think, but that's another story).
3) I noticed a few times during this long line of errors that
firewalld made a complaint about "Resource temporarily
unavailable". Having libvirtd access iptables commands directly at the
same time as firewalld is doing so is apparently problematic.
4) In general, I'm concerned about the "set it once and never change
it" method - if firewalld is disabled at libvirtd startup, causing
libvirtd to always use iptables/ebtables directly, this won't cause
*terrible* problems, but if libvirtd decides to use firewall-cmd and
firewalld is later disabled, libvirtd will not be able to recover.
The 'make check' was rebuilding the binaries that had just been
overridden, so for more safety also override the C program.
Also, daemon-conf isn't built anymore, so remove it from the list.
Parallels Cloud Server is a cloud-ready virtualization
solution that allows users to simultaneously run multiple virtual
machines and containers on the same physical server.
More information can be found here: http://www.parallels.com/products/pcs/
A beta version of Parallels Cloud Server can also be downloaded there.
Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
libvirt-daemon-driver-XXX should be a dependency only when with_driver_modules
is 1.
libvirt-daemon-driver-libxl should be a dependency only when with_libxl is 1.
libvirt-daemon-driver-lxc should be a dependency only when with_lxc is 1.
libvirt-daemon-driver-qemu should be a dependency only when with_qemu is 1.
libvirt-daemon-driver-uml should be a dependency only when with_uml is 1.
libvirt-daemon-driver-xen should be a dependency only when with_xen is 1.
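A hedged sketch of how these conditionals look in the spec (one driver
shown; the others follow the same pattern):

%if %{with_driver_modules}
%if %{with_qemu}
Requires: libvirt-daemon-driver-qemu = %{version}-%{release}
%endif
%endif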
Turning on the building of driver modules in libvirt.spec.in
means that installing 'libvirt' no longer pulls in all the
drivers. For upgrade compatibility we need to list all driver
module sub-RPMs against the 'libvirt' RPM.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
This patch brings support for managing sheepdog pools and volumes to libvirt.
It uses the "collie" command-line utility that comes with sheepdog for that.
A sheepdog pool in libvirt maps to a sheepdog cluster.
It needs a host and port to connect to, which in most cases
is just going to be the default of localhost on port 7000.
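A hedged sketch of such a pool definition (the element layout is
assumed from this description, not copied from the patch):

<pool type='sheepdog'>
  <name>mysheepcluster</name>
  <source>
    <host name='localhost' port='7000'/>
  </source>
</pool>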
A sheepdog volume in libvirt maps to a sheepdog vdi.
To create one, specify the pool, a name, and the capacity.
Volumes can also be resized later.
In the volume XML the vdi name has to be put into the <target><path>.
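A hedged sketch of a volume definition following that description
(the name and capacity are placeholders, and exact schema details are
assumptions):

<volume>
  <name>myvdi</name>
  <capacity>10737418240</capacity>
  <target>
    <path>myvdi</path>
  </target>
</volume>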
To use the volume as a disk source for virtual machines specify
the vdi name as "name" attribute of the <source>.
The host and port information from the pool are specified inside the host tag.
<disk type='network'>
...
<source protocol="sheepdog" name="vdi_name">
<host name="localhost" port="7000"/>
</source>
</disk>
To work correctly this patch parses the output of collie,
so it relies on the raw output option. There was recently a bug that caused
size information to be reported incorrectly. This is fixed upstream already and
will be in the next release.
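For reference, a hedged example of the kind of collie invocation whose
output gets parsed (the flags shown for raw output, address and port
are assumptions about collie's CLI):

$ collie vdi list -r -a localhost -p 7000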
Signed-off-by: Sebastian Wiedenroth <wiedi@frubar.net>
Apart from fixing the check build without sanlock, there is also a
little fix for qemu (EXTRA_DIST included qemu.conf and others even if
the build was supposed to be without qemu).
Turn on loadable modules for libvirtd. Add new sub-RPMs
libvirt-daemon-driver-XXX, one for each loadable .so.
Modify the libvirt-daemon-YYY RPMs to depend on each of
the individual drivers they require
* libvirt.spec.in: Enable driver modules
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
This patch adds support for a new storage backend with RBD support.
RBD is the RADOS Block Device and is part of the Ceph distributed storage
system.
It comes in two flavours: Qemu-RBD and Kernel RBD. This storage backend only
supports Qemu-RBD, thus limiting the use of this storage driver to Qemu.
To function, this backend relies on librbd and librados being present on the
local system.
The backend also supports Cephx for securely authenticating with
the Ceph cluster.
For storing credentials it uses the built-in secret mechanism of libvirt.
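A hedged sketch combining these pieces into a pool definition (the
host, the names, and the secret UUID are placeholders, and the exact
element layout is an assumption):

<pool type='rbd'>
  <name>cephpool</name>
  <source>
    <name>rbdpool</name>
    <host name='ceph-mon.example.com' port='6789'/>
    <auth username='libvirt' type='ceph'>
      <secret uuid='...'/>
    </auth>
  </source>
</pool>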
Signed-off-by: Wido den Hollander <wido@widodh.nl>
Introduce a set of sub-RPMs, one per hypervisor, which can be used
as dependency targets by applications wishing to pull in the
full stack of packages required for a specific hypervisor. This
avoids the application needing to know what the hypervisor specific
package set is.
i.e. applications should not need to know that using the libvirt
Xen hypervisor requires the 'xen' RPM - libvirt should take care
of that knowledge. All the application wants is 'libvirt-daemon-xen'
There are 5 sub-RPMs:
libvirt-daemon-qemu - non-native TCG based emulators
libvirt-daemon-kvm - native KVM hypervisor
libvirt-daemon-uml - User Mode Linux
libvirt-daemon-xen - Xen, either via XenD or libxl
libvirt-daemon-lxc - Linux native containers
When driver modules get turned on, these sub-RPMs will also
gain dependencies on the appropriate driver module .so files
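A hedged sketch of one such sub-RPM stanza (the summary wording and
the exact dependency names are assumptions):

%package daemon-kvm
Summary: Server side daemon & driver required to run KVM guests
Requires: libvirt-daemon = %{version}-%{release}
Requires: qemu-kvm

%files daemon-kvm
# intentionally empty: a pure dependency package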
Take the libvirt RPM and split it into three pieces
- libvirt-daemon - libvirtd & other mandatory bits for its operation
- libvirt-daemon-config-network - the virbr0 config definition
- libvirt-daemon-config-nwfilter - the firewall config rules
For backwards compatibility with existing installs / application RPM
deps, the 'libvirt' RPM is retained, but will have a dependency on
the 3 new RPMs.
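A hedged sketch of the compatibility metapackage dependencies
(conventional version macros assumed):

Requires: libvirt-daemon = %{version}-%{release}
Requires: libvirt-daemon-config-network = %{version}-%{release}
Requires: libvirt-daemon-config-nwfilter = %{version}-%{release}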
Currently documentation is split between the libvirt RPM and the
libvirt-devel RPM. In the client-only build there is no libvirt
RPM, so the docs need to live elsewhere. The obvious answer is a
dedicated libvirt-docs RPM. For back-compatibility make the
libvirt-devel RPM require the libvirt-docs RPM
* libvirt.spec.in: Create separate libvirt-docs RPM
* configure.ac docs/news.html.in libvirt.spec.in: update for the release
* po/*.po*: updated a number of language translations, including new
Indian languages, and regenerated
* libvirt.spec.in: Remove obsolete --with-remote-pid-file arg.
Add missing %{without_libxl} statement. Fix handling of docs
in client only build. Put systemtap files in -client RPM
instead of -daemon RPM
* examples/xml/nwfilter/Makefile.am: Don't install examples if
nwfilter is disabled.
There are a number of flaws with our packaging of the libvirtd
daemon:
- Installing 'libvirt' does not install 'qemu-kvm' or 'xen'
etc., which are required to actually run the hypervisor in
question
- Installing 'libvirt' pulls in the default configuration
files which may not be wanted & cause problems if installed
inside a guest
- It is not possible to explicitly require all the pieces
required to manage a specific hypervisor
This change takes the 'libvirt' RPM and changes it thus:
- libvirt: just a virtual package with dep on libvirt-daemon,
libvirt-daemon-config-network & libvirt-daemon-config-nwfilter
- libvirt-daemon: the libvirt daemon and related pieces
- libvirt-daemon-config-network: the default network config
- libvirt-daemon-config-nwfilter: the network filter configs
- libvirt-docs: the website HTML
We then introduce some more virtual (empty) packages:
- libvirt-daemon-qemu: Deps on libvirt-daemon & 'qemu'
- libvirt-daemon-kvm: Deps on libvirt-daemon & 'qemu-kvm'
- libvirt-daemon-lxc: Deps on libvirt-daemon
- libvirt-daemon-uml: Deps on libvirt-daemon
- libvirt-daemon-xen: Deps on libvirt-daemon & 'xen'
- libvirt-qemu: Deps on libvirt-daemon-qemu & libvirt-daemon-config-{network,nwfilter}
- libvirt-kvm: Deps on libvirt-daemon-kvm & libvirt-daemon-config-{network,nwfilter}
- libvirt-lxc: Deps on libvirt-daemon-lxc & libvirt-daemon-config-{network,nwfilter}
- libvirt-uml: Deps on libvirt-daemon-uml & libvirt-daemon-config-{network,nwfilter}
- libvirt-xen: Deps on libvirt-daemon-xen & libvirt-daemon-config-network
My intent in the future is to turn on the driver modules by
default, at which time 'libvirt-daemon' will cease to include
any specific drivers; instead we'll get libvirt-daemon-driver-XXXX
packages for each driver. The libvirt-daemon-XXX packages will
then pull in each driver that they require.
It is recommended that applications requiring a locally installed
libvirtd daemon use either 'Requires: libvirt-daemon-XXXX' or
'Requires: libvirt-XXX' and *not* 'Requires: libvirt-daemon'
or 'Requires: libvirt'.
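For example (hedged illustration), an application managing KVM guests
would declare:

Requires: libvirt-daemon-kvm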
* libvirt.spec.in: Refactor RPMs
* docs/packaging.html.in, docs/sitemap.html.in: Document
new RPM split rationale
After adding the libvirt-guests service into the usual runlevels, we
used to start the libvirt-guests service. However, this is usually not
good practice. As mentioned on the fedoraproject wiki, installations
can be in chroots, in an installer context, or in other situations
where we don't want the services autostarted.
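A hedged sketch of the resulting scriptlet behaviour, registering the
service without starting it (the exact scriptlet in the spec may
differ):

%post
# register the service in its default runlevels, but do not start it
/sbin/chkconfig --add libvirt-guests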
Currently, if scrub (used for wiping algorithms) is not present
at compile time, we don't support any wiping algorithms other than
zeroing, even if scrub is installed later. Switch to runtime detection
instead.
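A minimal self-contained sketch of such a runtime check (libvirt has
its own helper for this, virFindFileInPath(); this plain-libc version
is only an illustration):

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Return true if "scrub" is executable somewhere in $PATH. */
static bool scrubAvailable(void)
{
    const char *path = getenv("PATH");
    char *copy, *dir, *save = NULL;
    bool found = false;

    if (!path || !(copy = strdup(path)))
        return false;
    for (dir = strtok_r(copy, ":", &save); dir && !found;
         dir = strtok_r(NULL, ":", &save)) {
        char candidate[4096];
        snprintf(candidate, sizeof(candidate), "%s/scrub", dir);
        found = access(candidate, X_OK) == 0;
    }
    free(copy);
    return found;
}

int main(void)
{
    /* Decide at wipe time, not at configure time. */
    printf("scrub %s\n", scrubAvailable() ? "available" : "missing");
    return EXIT_SUCCESS;
}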
Language bindings may well want to use the libvirt-api.xml and
libvirt-qemu-api.xml files to either auto-generate themselves,
or sanity check the manually written bindings for completeness.
Currently these XML files are not installed as standard, merely
ending up as a %doc file in the RPM.
This changes them to be installed into $prefix/share/libvirt/apis/
The *-refs.xml files are not installed, since those are only
useful during generation of the online API doc files.
The pkg-config file is enhanced so that you can query the install
location of the API files, e.g.
# pkg-config --variable=libvirt_qemu_api libvirt
/home/berrange/builder/i686-pc-mingw32/sys-root/mingw/share/libvirt/libvirt-qemu-api.xml
* docs/Makefile.am: Install libvirt-api.xml & libvirt-qemu-api.xml
* libvirt.pc.in: Add vars for querying API install location
* libvirt.spec.in, mingw32-libvirt.spec.in: Include API XML files
See: https://bugzilla.redhat.com/show_bug.cgi?id=785269
The specfile requires avahi during install if libvirt was built with
avahi support, but there are many situations where it is undesirable
to install avahi due to security concerns. This patch requires only
the avahi-libs package, which is needed by libvirt to call the
function that tries to attach to the avahi daemon; that call will
instead silently fail, because the avahi daemon itself is in the main
avahi package, and that package isn't installed.
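A hedged sketch of the resulting spec change:

%if %{with_avahi}
# pull in only the client library, not the daemon
Requires: avahi-libs
%endif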
To assist people in verifying that their host is operating in an
optimal manner, provide a 'virt-host-validate' command. For each
type of hypervisor, it will check any prerequisites and other
good recommendations, and report what's working and what is not,
e.g.
# virt-host-validate
QEMU: Checking for device /dev/kvm : FAIL (Check that the 'kvm-intel' or 'kvm-amd' modules are loaded & the BIOS has enabled virtualization)
QEMU: Checking for device /dev/vhost : WARN (Load the 'vhost_net' module to improve performance of virtio networking)
QEMU: Checking for device /dev/net/tun : PASS
LXC: Checking for Linux >= 2.6.26 : PASS
This warns people if they have vmx/svm but don't have /dev/kvm. It
also warns about a missing /dev/vhost-net.