libvirt is released under the GNU Lesser
General Public License; see the file COPYING.LIB in the distribution
for the precise wording. The only library that libvirt depends upon is
the Xen store access library, which is also licensed under the LGPL.

Can I embed libvirt in a proprietary application?

Yes. The LGPL allows you to embed libvirt into a proprietary
application. It would be graceful to send back bug fixes and improvements
as patches for possible incorporation in the main development tree. It
will decrease your maintenance costs anyway if you do so.

I can't install the libvirt/libvirt-devel RPM packages due to
failed dependencies

The most generic solution is to re-fetch the latest src.rpm and
rebuild it locally with

  rpm --rebuild libvirt-xxx.src.rpm

If everything goes well it will generate two binary rpm packages (one
providing the shared libs and virsh, and the other one, the -devel
package, providing includes, static libraries and scripts needed to build
applications with libvirt) that you can install locally.

One can also rebuild the RPMs from a tarball:

  rpmbuild -ta libvirt-xxx.tar.gz

Or from a configured tree with:

  make rpm

Failure to use the API for non-root users

Large parts of the API may only be accessible with root privileges,
however read-only access to the xenstore data does not have to be
forbidden to users, at least for monitoring purposes. If "virsh dominfo"
fails to run as a user, change the mode of the xenstore read-only socket
with:

  chmod 666 /var/run/xenstored/socket_ro

and also make sure that the Xen daemon is running correctly with the local
HTTP server enabled; this is defined in
/etc/xen/xend-config.sxp, which needs the following line to be
enabled:

  (xend-http-server yes)

If needed, restart the xend daemon after making the change with the
following command run as root:

Troubles compiling or linking programs using libvirt

To simplify the process of reusing the library, libvirt comes with
pkgconfig support, which can be used directly from autoconf support or
via the pkg-config command line tool, like:

  pkg-config libvirt --libs
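As a concrete illustration, here is a minimal sketch of a client program built
against libvirt with the flags reported by pkg-config. The file name and the
use of virConnectOpenReadOnly() are illustrative choices, not mandated by the
FAQ above:

  /* minimal.c - minimal sketch of a libvirt client (illustrative only) */
  #include <stdio.h>
  #include <libvirt/libvirt.h>

  int main(void) {
      virConnectPtr conn = virConnectOpenReadOnly(NULL); /* NULL = default hypervisor */
      if (conn == NULL) {
          fprintf(stderr, "failed to connect to the hypervisor\n");
          return 1;
      }
      printf("active domains: %d\n", virConnectNumOfDomains(conn));
      virConnectClose(conn);
      return 0;
  }

which could then be compiled with something like:

  gcc $(pkg-config libvirt --cflags) minimal.c $(pkg-config libvirt --libs) -o minimal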
diff --git a/docs/Makefile.am b/docs/Makefile.am
index fc1153f340..8cb5f06bc6 100644
--- a/docs/Makefile.am
+++ b/docs/Makefile.am
@@ -4,45 +4,45 @@ SUBDIRS= . examples devhelp
# The directory containing the source code (if it contains documentation).
DOC_SOURCE_DIR=../src
-PAGES= index.html bugs.html FAQ.html remote.html
-
man_MANS=
-html = \
- book1.html \
+apihtml = \
index.html \
- libvirt-conf.html \
- libvirt-lib.html \
libvirt-libvirt.html \
libvirt-virterror.html
-png = \
+apipng = \
left.png \
up.png \
home.png \
right.png
+png = \
+ 16favicon.png \
+ 32favicon.png \
+ et_logo.png \
+ footer_corner.png \
+ footer_pattern.png \
+ libvirHeader.png \
+ libvirLogo.png \
+ libvirt-header-bg.png \
+ libvirt-header-logo.png \
+ libvirtLogo.png \
+ libvirt-net-logical.png \
+ libvirt-net-physical.png \
+ madeWith.png \
+ windows-cygwin-1.png \
+ windows-cygwin-2.png \
+ windows-cygwin-3.png
+
gif = \
Libxml2-Logo-90x34.gif \
architecture.gif \
node.gif \
redhat.gif
-dot_html = \
- FAQ.html \
- architecture.html \
- bugs.html \
- downloads.html \
- errors.html \
- format.html \
- hvsupport.html \
- index.html \
- intro.html \
- libvir.html \
- news.html \
- python.html \
- remote.html \
- uri.html
+dot_html_in = $(wildcard *.html.in)
+dot_html = $(dot_html_in:%.html.in=%.html)
xml = \
libvirt-api.xml \
@@ -57,12 +57,16 @@ rng = \
libvirt.rng \
network.rng
+fig = \
+ libvirt-net-logical.fig \
+ libvirt-net-physical.fig
+
EXTRA_DIST= \
libvirt-api.xml libvirt-refs.xml apibuild.py \
- site.xsl newapi.xsl news.xsl \
- $(dot_html) $(gif) html \
- $(xml) $(rng) \
- virsh.pod
+ site.xsl newapi.xsl news.xsl page.xsl ChangeLog.xsl \
+ $(dot_html) $(dot_html_in) $(gif) html/*.html html/*.png \
+ $(xml) $(rng) $(fig) $(png) \
+ virsh.pod ChangeLog.awk
all: web $(top_builddir)/NEWS $(man_MANS)
@@ -73,18 +77,30 @@ virsh.1: virsh.pod
api: libvirt-api.xml libvirt-refs.xml $(srcdir)/html/index.html
-web: $(PAGES)
+web: $(dot_html)
-$(PAGES): libvir.html site.xsl
- -@(if [ -x $(XSLTPROC) ] ; then \
- echo "Rebuilding the HTML Web pages from libvir.html" ; \
- $(XSLTPROC) --nonet --html $(top_srcdir)/docs/site.xsl $(top_srcdir)/docs/libvir.html > index.html ; fi );
- -@(if [ -x $(XMLLINT) ] ; then \
- echo "Validating the HTML Web pages" ; \
- $(XMLLINT) --nonet --valid --noout $(PAGES) ; fi );
+ChangeLog.xml: ../ChangeLog ChangeLog.awk
+ awk -f ChangeLog.awk < $< > $@
+
+ChangeLog.html.in: ChangeLog.xml ChangeLog.xsl
+ @(if [ -x $(XSLTPROC) ] ; then \
+ echo "Generating $@"; \
+ name=`echo $@ | sed -e 's/.tmp//'`; \
+ $(XSLTPROC) --nonet $(top_srcdir)/docs/ChangeLog.xsl $< > $@ || (rm $@ && exit 1) ; fi )
+
+%.html.tmp: %.html.in site.xsl page.xsl sitemap.html.in
+ @(if [ -x $(XSLTPROC) ] ; then \
+ echo "Generating $@"; \
+ name=`echo $@ | sed -e 's/.tmp//'`; \
+ $(XSLTPROC) --stringparam pagename $$name --nonet --html $(top_srcdir)/docs/site.xsl $< > $@ || (rm $@ && exit 1) ; fi )
+
+%.html: %.html.tmp
+ @(if [ -x $(XMLLINT) ] ; then \
+ echo "Validating $@" ; \
+ $(XMLLINT) --nonet --format --valid $< > $@ || : ; fi );
-$(srcdir)/html/index.html: libvirt-api.xml $(srcdir)/newapi.xsl
+$(srcdir)/html/index.html: libvirt-api.xml newapi.xsl page.xsl sitemap.html.in
-@(if [ -x $(XSLTPROC) ] ; then \
echo "Rebuilding the HTML pages from the XML API" ; \
$(XSLTPROC) --nonet $(srcdir)/newapi.xsl libvirt-api.xml ; fi )
@@ -115,11 +131,11 @@ install-data-local:
$(srcdir)/redhat.gif $(srcdir)/Libxml2-Logo-90x34.gif \
$(DESTDIR)$(HTML_DIR)
$(mkinstalldirs) $(DESTDIR)$(HTML_DIR)/html
- for h in $(html); do \
+ for h in $(apihtml); do \
$(INSTALL) -m 0644 $(srcdir)/html/$$h $(DESTDIR)$(HTML_DIR)/html; done
- for p in $(png); do \
+ for p in $(apipng); do \
$(INSTALL) -m 0644 $(srcdir)/html/$$p $(DESTDIR)$(HTML_DIR)/html; done
uninstall-local:
- for h in $(html); do rm $(DESTDIR)$(HTML_DIR)/html/$$h; done
- for p in $(png); do rm $(DESTDIR)$(HTML_DIR)/html/$$p; done
+ for h in $(apihtml); do rm $(DESTDIR)$(HTML_DIR)/html/$$h; done
+ for p in $(apipng); do rm $(DESTDIR)$(HTML_DIR)/html/$$p; done
diff --git a/docs/apps.html b/docs/apps.html
new file mode 100644
index 0000000000..252103e363
--- /dev/null
+++ b/docs/apps.html
@@ -0,0 +1,154 @@
Applications using libvirt
+
+ This page provides an illustration of the wide variety of
+ applications using the libvirt management API. If you know
+ of interesting applications not listed on this page, send
+ a message to the mailing list
+ to request that it be added here. If your application uses
+ libvirt as its API, the following graphic is available for
+ your website to advertise support for libvirt:
+
+
+
+
Command line tools
+
virsh
+ An interactive shell, and batch scriptable tool for performing
+ management tasks on all libvirt managed domains, networks and
+ storage. This is part of the libvirt core distribution.
+
+      Provides a way to provision new virtual machines from an
+      OS distribution install tree. It supports provisioning from
+      local CD images, and over the network via NFS, HTTP and FTP.
+
+ Allows the disk image(s) and configuration for an existing
+ virtual machine to be cloned to form a new virtual machine.
+ It automates copying of data across to new disk images, and
+      updates the UUID, MAC address and name in the configuration.
+
+ Provides a way to deploy virtual appliances. It defines a
+ simplified portable XML format describing the pre-requisites
+ of a virtual machine. At time of deployment this is translated
+ into the domain XML format for execution under any libvirt
+ hypervisor meeting the pre-requisites.
+
+ Examine the utilization of each filesystem in a virtual machine
+ from the comfort of the host machine. This tool peeks into the
+ guest disks and determines how much space is used. It can cope
+ with common Linux filesystems and LVM volumes.
+
+ A general purpose desktop management tool, able to manage
+ virtual machines across both local and remotely accessed
+      hypervisors. It is targeted at home and small office usage,
+      up to managing 10-20 hosts and their VMs.
+
+ A lightweight tool for accessing the graphical console
+ associated with a virtual machine. It can securely connect
+ to remote consoles supporting the VNC protocol. Also provides
+      an optional Mozilla browser plugin.
+
+ oVirt provides the ability to manage large numbers of virtual
+ machines across an entire data center of hosts. It integrates
+ with FreeIPA for Kerberos authentication, and in the future,
+ certificate management.
+
+ A tool for converting a physical machine into a virtual machine. It
+ is a LiveCD which is booted on the machine to be converted. It collects
+ a little information from the user and then copies the disks over to
+ a remote machine and defines the XML for a domain to run the guest.
When running in a Xen environment, programs using libvirt have to execute
in "Domain 0", which is the primary Linux OS loaded on the machine. That OS
kernel provides most if not all of the actual drivers used by the set of
domains. It also runs the Xen Store, a database of information shared by the
hypervisor, the kernels, the drivers and the Xen daemon, xend. The Xen daemon
supervises the control and execution of the sets of domains. The hypervisor,
drivers, kernels and daemons communicate through a shared system bus
implemented in the hypervisor. The figure below tries to provide a view of
this environment:

The library can be initialized in 2 ways depending on the level of
privilege of the embedding program. If it runs with root access,
virConnectOpen() can be used; it will use three different ways to connect to
the Xen infrastructure:

  a connection to the Xen Daemon through an HTTP RPC layer
  a read/write connection to the Xen Store
  use of Xen Hypervisor calls
  when used as non-root, libvirt connects to a proxy daemon running
  as root and providing read-only support

The library will usually interact with the Xen daemon for any operation
changing the state of the system, but for performance and accuracy reasons
may talk directly to the hypervisor when gathering state information, at
least when possible (i.e. when the running program using libvirt has root
privilege access).

If it runs without root access, virConnectOpenReadOnly() should be used to
connect and initialize the library. It will then fork a libvirt_proxy
program running as root and providing read-only access to the API; this is
then only useful for reporting and monitoring.

The model for QEmu and KVM is very similar; basically KVM is based
on QEmu for the process controlling a new domain, and only small details
differ between the two. In both cases the libvirt API is provided by a
controlling process forked by libvirt in the background which launches and
controls the QEmu or KVM process. That program, called libvirt_qemud, talks
through a specific protocol to the library, and connects to the console of
the QEmu process in order to control and report on its status. Libvirt tries
to expose all the emulation models of QEmu; the selection is done when
creating the new domain, by specifying the architecture and machine type
targeted.

The code controlling the QEmu process is available in the
qemud/ directory.

As the previous section explains, libvirt can communicate using different
channels with the current hypervisor, and should also be able to use
different kinds of hypervisors. To simplify the internal design and code,
ease maintenance and simplify the support of other virtualization engines,
the internals have been structured as one core component, the libvirt.c
module acting as a front-end for the library API, and a set of hypervisor
drivers defining a common set of routines. That way the Xen Daemon access,
the Xen Store access and the hypervisor hypercalls are all isolated in
separate C modules implementing at least a subset of the common operations
defined by the drivers present in driver.h:

  xend_internal: implements the driver functions through the Xen
  Daemon
  xs_internal: implements the subset of the driver available through the
  Xen Store
  xen_internal: provides the implementation of the functions possible via
  direct hypervisor access
  proxy_internal: provides read-only Xen access via a proxy; the proxy code
  is in the proxy/ directory
  xm_internal: provides support for Xen domains which are defined but not
  running
  qemu_internal: implements the driver functions for the QEmu and
  KVM virtualization engines. It also uses a qemud/ specific daemon
  which interacts with the QEmu process to implement the libvirt API
  test: this is a test driver useful for regression tests of the
  front-end part of libvirt

Note that a given driver may only implement a subset of those functions
(for example saving a Xen domain state to disk and restoring it is only
possible through the Xen Daemon); in that case the driver entry points for
unsupported functions are initialized to NULL.
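A hedged sketch of the two initialization paths described above: full access
via virConnectOpen() when running as root, and read-only monitoring via
virConnectOpenReadOnly() otherwise. Using geteuid() to decide is an
illustrative choice, not something mandated by the text above:

  #include <stdio.h>
  #include <unistd.h>
  #include <libvirt/libvirt.h>

  int main(void) {
      virConnectPtr conn;

      if (geteuid() == 0)
          conn = virConnectOpen(NULL);          /* root: read/write access */
      else
          conn = virConnectOpenReadOnly(NULL);  /* non-root: via the read-only proxy */

      if (conn == NULL) {
          fprintf(stderr, "failed to initialize the library\n");
          return 1;
      }
      printf("connected to: %s\n", virConnectGetType(conn));
      virConnectClose(conn);
      return 0;
  }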
+ The diagrams below illustrate some of the network configurations
+ enabled by the libvirt networking APIs
+
+
VLAN 1. This virtual network has connectivity
+ to LAN 2 with traffic forwarded and NATed.
+
VLAN 2. This virtual network is completely
+ isolated from any physical LAN.
+
Guest A. The first network interface is bridged
+ to the physical LAN 1. The second interface is connected
+ to a virtual network VLAN 1.
+
Guest B. The first network interface is connected
+ to a virtual network VLAN 1, giving it limited NAT
+      based connectivity to LAN 2. It has a second network interface
+      connected to VLAN 2. It acts as a router allowing limited
+ traffic between the two VLANs, thus giving Guest C
+ connectivity to the physical LAN 2.
+
Guest C. The only network interface is connected
+ to a virtual network VLAN 2. It has no direct connectivity
+ to a physical LAN, relying on Guest B to route traffic
+ on its behalf.
+
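As a sketch of how such a virtual network can be created through the API.
The network name, bridge name and addresses below are made-up examples,
loosely modelled on the isolated "VLAN 2" case above:

  #include <stdio.h>
  #include <libvirt/libvirt.h>

  /* an isolated network: no <forward> element, so no connectivity to a physical LAN */
  static const char *net_xml =
      "<network>"
      "  <name>vlan2</name>"
      "  <bridge name='virbr2'/>"
      "  <ip address='192.168.152.1' netmask='255.255.255.0'>"
      "    <dhcp><range start='192.168.152.2' end='192.168.152.254'/></dhcp>"
      "  </ip>"
      "</network>";

  int main(void) {
      virConnectPtr conn = virConnectOpen(NULL);
      if (conn == NULL)
          return 1;
      virNetworkPtr net = virNetworkDefineXML(conn, net_xml);  /* define persistent config */
      if (net != NULL) {
          virNetworkCreate(net);                               /* start the network */
          virNetworkFree(net);
      }
      virConnectClose(conn);
      return 0;
  }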
+ The storage management APIs are based around 2 core concepts
+
+
+ Volume - a single storage volume which can
+ be assigned to a guest, or used for creating further pools. A
+ volume is either a block device, a raw file, or a special format
+ file.
+
+ Pool - provides a means for taking a chunk
+ of storage and carving it up into volumes. A pool can be used to
+      manage things such as a physical disk, an NFS server, an iSCSI target,
+ a host adapter, an LVM group.
+
+
+ These two concepts are mapped through to two libvirt objects, a
+ virStorageVolPtr and a virStoragePoolPtr,
+ each with a collection of APIs for their management.
+
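A small sketch of how those two objects are typically used together, looking
up a pool and listing its volumes. The pool name "default" is an assumption
made for illustration only:

  #include <stdio.h>
  #include <stdlib.h>
  #include <libvirt/libvirt.h>

  int main(void) {
      virConnectPtr conn = virConnectOpen(NULL);
      if (conn == NULL)
          return 1;

      virStoragePoolPtr pool = virStoragePoolLookupByName(conn, "default");
      if (pool != NULL) {
          char *names[64];
          int n = virStoragePoolListVolumes(pool, names, 64);  /* fills names[] */
          for (int i = 0; i < n; i++) {
              printf("volume: %s\n", names[i]);
              free(names[i]);                                  /* names are caller-owned */
          }
          virStoragePoolFree(pool);
      }
      virConnectClose(conn);
      return 0;
  }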
diff --git a/docs/auth.html b/docs/auth.html
index 43910cfab5..748656f421 100644
--- a/docs/auth.html
+++ b/docs/auth.html
@@ -1,16 +1,51 @@
-Access control
Access control
+
+
+
+
+
+
+ libvirt: Access control
+
+
+
+
+
+
+
+
+
+
+
+
Access control
+
When connecting to libvirt, some connections may require client
authentication before allowing use of the APIs. The set of possible
authentication mechanisms is administrator controlled, independent
of applications using libvirt.
-
The libvirt daemon allows the administrator to choose the authentication
mechanisms used for client connections on each network socket independently.
This is primarily controlled via the libvirt daemon master config file in
@@ -19,21 +54,30 @@ have its authentication mechanism configured independently. There is
currently a choice of none, polkit, and sasl.
The SASL scheme can be further configured to choose between a large
number of different mechanisms.
-
If libvirt does not contain support for PolicyKit, then access control for
the UNIX domain socket is done using traditional file user/group ownership
and permissions. There are 2 sockets, one for full read-write access, the
other for read-only access. The RW socket will be restricted (mode 0700) to
only allow the root user to connect. The read-only socket will
be open access (mode 0777) to allow any user to connect.
-
+
+
To allow non-root users greater access, the libvirtd.conf file
can be edited to change the permissions via the unix_sock_rw_perms,
config parameter and to set a user group via the unix_sock_group
parameter. For example, setting the former to mode 0770 and the
latter wheel would let any user in the wheel group connect to
the libvirt daemon.
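In libvirtd.conf terms, the example above corresponds to roughly the
following settings (a sketch; the exact syntax is documented in the comments
shipped inside the file itself):

  unix_sock_group = "wheel"
  unix_sock_rw_perms = "0770"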
-
If libvirt contains support for PolicyKit, then access control options are
more advanced. The unix_sock_auth parameter will default to
polkit, and the file permissions will default to 0777
@@ -43,24 +87,31 @@ RW daemon socket will require any application running in the current desktop
session to authenticate using the user's password. This is akin to sudo
auth, but does not require that the client application ultimately run as root.
Default policy will still allow any application to connect to the RO socket.
-
+
+
The default policy can be overridden by the administrator using the PolicyKit
master configuration file in /etc/PolicyKit/PolicyKit.conf. The
PolicyKit.conf(5) manual page provides details on the syntax
available. The two libvirt daemon actions available are named org.libvirt.unix.monitor
for the RO socket, and org.libvirt.unix.manage for the RW socket.
-
+
+
+As an example, to allow a user fred full access to the RW socket,
while requiring joe to authenticate with the admin password,
would require adding the following snippet to PolicyKit.conf.
-
The plain TCP socket of the libvirt daemon defaults to using SASL for authentication.
The SASL mechanism configured by default is DIGEST-MD5, which provides a basic
username+password style authentication. It also provides for encryption of the data
@@ -68,28 +119,38 @@ stream, so the security of the plain TCP socket is on a par with that of the TLS
socket. If desired the UNIX socket and TLS socket can also have SASL enabled by
setting the auth_unix_ro, auth_unix_rw, auth_tls
config params in libvirt.conf.
-
+
+
Out of the box, no user accounts are defined, so no clients will be able to authenticate
on the TCP socket. Adding users and setting their passwords is done with the saslpasswd2
command. When running this command it is important to tell it that the appname is libvirt.
As an example, to add a user fred, run
-
+
+
# saslpasswd2 -a libvirt fred
Password: xxxxxx
Again (for verification): xxxxxx
-
+
+
To see a list of all accounts the sasldblistusers2 command can be used.
This command expects to be given the path to the libvirt user database, which is kept
in /etc/libvirt/passwd.db
-
The plain TCP socket of the libvirt daemon defaults to using SASL for authentication.
The SASL mechanism configured by default is DIGEST-MD5, which provides a basic
username+password style authentication. To enable Kerberos single-sign-on instead,
@@ -98,19 +159,22 @@ The mech_list parameter must first be changed to gssapidigest-md5. If SASL is enabled on the UNIX
and/or TLS sockets, Kerberos will also be used for them. Like DIGEST-MD5, the Kerberos
mechanism provides data encryption of the session.
-
+
+
Some operating systems do not install the SASL kerberos plugin by default. It
may be necessary to install a sub-package such as cyrus-sasl-gssapi.
To check whether the Kerberos plugin is installed run the pluginviewer
+ program and verify that gssapi is listed, e.g.:
-
Next it is necessary for the administrator of the Kerberos realm to issue a principal
for the libvirt server. There needs to be one principal per host running the libvirt
daemon. The principal should be named libvirt/full.hostname@KERBEROS.REALM.
@@ -118,7 +182,8 @@ This is typically done by running the kadmin.local command on the K
server, though some Kerberos servers have alternate ways of setting up service principals.
Once created, the principal should be exported to a keytab, copied to the host running
the libvirt daemon and placed in /etc/libvirt/krb5.tab
-
+
+
# kadmin.local
kadmin.local: add_principal libvirt/foo.example.com
Enter password for principal "libvirt/foo.example.com@EXAMPLE.COM":
@@ -135,9 +200,90 @@ kadmin.local: quit
# scp /root/libvirt-foo-example.tab root@foo.example.com:/etc/libvirt/krb5.tab
# rm /root/libvirt-foo-example.tab
-
+
+
Any client application wishing to connect to a Kerberos enabled libvirt server
merely needs to run kinit to gain a user principal. This may well
be done automatically when a user logs into a desktop session, if PAM is set up
to authenticate against Kerberos.
-
Libvirt comes with bindings to support other languages than
+pure C. First, the headers embed the necessary declarations to
+allow direct access from C++ code, but we also have bindings for
+higher level kinds of languages:
+
Python: Libvirt comes with direct support for the Python language
+ (just make sure you installed the libvirt-python package if not
+ compiling from sources). See below for more information about
+ using libvirt with python
Support, requests or help for libvirt bindings are welcome on
+the mailing
+list, as usual try to provide enough background information
+and make sure you use a recent version, see the help
+page.
diff --git a/docs/bugs.html b/docs/bugs.html
index ca05707cef..534bdabd29 100644
--- a/docs/bugs.html
+++ b/docs/bugs.html
@@ -1,20 +1,127 @@
-Reporting bugs and getting help
Reporting bugs and getting help
There is a mailing-list libvir-list@redhat.com for libvirt,
-with an on-line
-archive. Please subscribe to this list before posting by visiting the associated Web
-page and follow the instructions. Patches with explanations and provided as
-attachments are really appreciated and will be discussed on the mailing list.
-If possible generate the patches by using cvs diff -u in a CVS checkout.
We use Red Hat Bugzilla to track bugs and new feature requests to libvirt.
-If you want to report a bug or ask for a feature, please check the existing open bugs, then if yours isn't a duplicate of
-an existing bug:
Don't forget to attach any patch or extra data that you may have available. It is always a good idea to also
-to post to the mailing-list
-too, so that everybody working on the project can see it, thanks !
Some of the libvirt developers may be found on IRC on the OFTC
-network. Use the settings:
server: irc.oftc.net
-
port: 6667 (the usual IRC port)
-
channel: #virt
-
But there is no guarantee that someone will be watching or able to reply,
-use the mailing-list if you don't get an answer there.
+ The Red Hat Bugzilla Server
+ should be used to report bugs and request features against libvirt.
+ Before submitting a ticket, check the existing tickets to see if
+ the bug/feature is already tracked.
+
+
General libvirt bug reports
+
+ If you are using official libvirt binaries from a Linux distribution
+ check below for distribution specific bug reporting policies first.
+ For general libvirt bug reports, from self-built releases, CVS snapshots
+ and any other non-distribution supported builds, enter tickets under
+ the Virtualization Tools product and the libvirt
+ component.
+
+      If you are using official binaries from the Red Hat Enterprise Linux distribution,
+      enter tickets against the Red Hat Enterprise Linux 5 product and
+ the libvirt component.
+
+ If you are using official binaries from another Linux distribution first
+ follow their own bug reporting guidelines.
+
+
How to file high quality bug reports
+
+ To increase the likelihood of your bug report being addressed it is
+ important to provide as much information as possible. When filing
+ libvirt bugs use this checklist to see if you are providing enough
+ information:
+
+
The version number of the libvirt build, or date of the CVS
+ checkout
The hardware architecture being used
The name of the hypervisor (Xen, QEMU, KVM)
The XML config of the guest domain if relevant
For Xen hypervisor, the XenD logfile from /var/log/xen
For QEMU/KVM, the domain logfile from /var/log/libvirt/qemu
+
+ If requesting a new feature attach any available patch to the ticket
+ and also email the patch to the libvirt mailing list for discussion
+
diff --git a/docs/contact.html b/docs/contact.html
new file mode 100644
index 0000000000..11809832fa
--- /dev/null
+++ b/docs/contact.html
@@ -0,0 +1,107 @@
Contacting the development team
+
Mailing list
+
+ There is a mailing-list libvir-list@redhat.com for libvirt,
+ with an on-line archive.
+ Please subscribe to this list before posting by visiting the
+ associated Web
+      page and follow the instructions. Patches with explanations, provided as
+      attachments, are really appreciated and will be discussed on the mailing list.
+ If possible generate the patches by using cvs diff -up in a CVS
+ checkout.
+
+
IRC discussion
+
+ Some of the libvirt developers may be found on IRC on the OFTC IRC
+ network. Use the settings:
+
+
server: irc.oftc.net
port: 6667 (the usual IRC port)
channel: #virt
+
+ NB There is no guarantee that someone will be watching or able to reply
+ promptly, so use the mailing-list if you don't get an answer on the IRC
+ channel.
+
+ The libvirt API is now available in all major Linux distributions
+      so the simplest deployment approach is to use your distribution's
+ package management software to install the libvirt
+ module.
+
+
Self-built releases
+
+ libvirt uses GNU autotools for its build system, so deployment
+ follows the usual process of configure; make ; make install
+
+
+
+ # ./configure --prefix=$HOME/usr
+ # make
+ # make install
+
+
Built from CVS / GIT
+
+      When building from CVS it is necessary to generate the autotools
+ support files. This requires having autoconf,
+ automake, libtool and intltool
+ installed. The process can be automated with the autogen.sh
+ script.
+
+
+
+ # ./autogen.sh --prefix=$HOME/usr
+ # make
+ # make install
+
The latest versions of libvirt can be found on the libvirt.org server ( HTTP, FTP). You will find there the released
-versions as well as snapshot
-tarballs updated from CVS head every hour
Anonymous CVS is also
-available, first register onto the server:
it will request a password, enter anoncvs. Then you can
-checkout the development tree with:
cvs -d :pserver:anoncvs@libvirt.org:2401/data/cvs co
-libvirt
Use ./autogen.sh to configure the local checkout, then make
-and make install, as usual. All normal cvs commands are now
-available except commiting to the base.
+ Once an hour, an automated snapshot is made from the latest CVS server
+      source tree. These snapshots should be usable, but we make no guarantees
+ about their stability:
+
+ The master source repository uses CVS
+      and anonymous access is provided. Prior to accessing the server it is necessary
+ to authenticate using the password anoncvs. This can be accomplished with the
+ cvs login command:
+
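Presumably along the lines of the following, reusing the CVSROOT shown in
the checkout command below:

  # cvs -d :pserver:anoncvs@libvirt.org:2401/data/cvs login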
+ Once authenticated, a checkout can be obtained using
+
+
+
+ # cvs -d :pserver:anoncvs@libvirt.org:2401/data/cvs co libvirt
+
+
+ The libvirt build process uses GNU autotools, so after obtaining a checkout
+      it is necessary to generate the configure script and Makefile.in templates
+ using the autogen.sh command. As an example, to do a complete
+ build and install it into your home directory run:
+
+
+
+ ./autogen.sh --prefix=$HOME/usr
+ make
+ make install
+
+
GIT repository mirror
+
+ The CVS source repository is also mirrored using GIT, and is available
+ for anonymous access via:
+
+ The libvirt public API delegates its implementation to one or
+ more internal drivers, depending on the connection URI
+ passed when initializing the library. There is always a hypervisor driver
+ active, and if the libvirt daemon is available there will usually be a
+ network and storage driver active.
+
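As a hedged illustration of how the driver is selected by the connection URI;
the URIs shown are common examples rather than a list taken from this page:

  #include <libvirt/libvirt.h>

  int main(void) {
      /* the scheme of the URI picks the internal driver */
      virConnectPtr xen  = virConnectOpen("xen:///");         /* Xen driver */
      virConnectPtr qemu = virConnectOpen("qemu:///system");  /* QEMU/KVM driver */

      if (xen)  virConnectClose(xen);
      if (qemu) virConnectClose(qemu);
      return 0;
  }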
+
Hypervisor drivers
+
+      The hypervisor drivers currently supported by libvirt are:
+
+ The libvirt QEMU driver can manage any QEMU emulator from version 0.8.1
+ or later. It can also manage anything that provides the same QEMU command
+      line syntax and monitor interaction. This includes KVM and Xenner.
+
+
Deployment pre-requisites
+
+ QEMU emulators: The driver will probe /usr/bin
+ for the presence of qemu, qemu-system-x86_64,
+      qemu-system-mips, qemu-system-mipsel,
+      qemu-system-sparc, qemu-system-ppc. The results
+ of this can be seen from the capabilities XML output.
+
+ KVM hypervisor: The driver will probe /usr/bin
+ for the presence of qemu-kvm and /dev/kvm device
+      node. If both are found, then KVM fully virtualized, hardware accelerated
+ guests will be available.
+
+ Xenner hypervisor: The driver will probe /usr/bin
+ for the presence of xenner and /dev/kvm device
+ node. If both are found, then Xen paravirtualized guests can be run using
+ the KVM hardware acceleration.
+
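The capabilities XML output mentioned in the list above can be retrieved
programmatically; a minimal sketch, with error handling kept to a minimum:

  #include <stdio.h>
  #include <stdlib.h>
  #include <libvirt/libvirt.h>

  int main(void) {
      virConnectPtr conn = virConnectOpen("qemu:///system");
      if (conn == NULL)
          return 1;
      char *caps = virConnectGetCapabilities(conn);  /* XML describing probed emulators */
      if (caps != NULL) {
          puts(caps);
          free(caps);                                /* string is caller-owned */
      }
      virConnectClose(conn);
      return 0;
  }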
+ The libvirt Xen driver provides the ability to manage virtual machines
+ on any Xen release from 3.0.1 onwards.
+
+
Deployment pre-requisites
+
+ The libvirt Xen driver uses a combination of channels to manage Xen
+ virtual machines.
+
+
+ XenD: Access to the Xen daemon is a mandatory
+ requirement for the libvirt Xen driver. It requires that the UNIX
+ socket interface be enabled in the /etc/xen/xend-config.sxp
+      configuration file. Specifically the config setting
+ (xend-unix-server yes). This path is usually restricted
+ to only allow the root user access. As an alternative,
+ the HTTP interface can be used, however, this has significant security
+ implications.
+
+ XenStoreD: Access to the Xenstore daemon enables
+ more efficient codepaths for looking up domain information which
+ lowers the CPU overhead of management.
+
+ Hypercalls: The ability to make direct hypercalls
+ allows the most efficient codepaths in the driver to be used for
+ monitoring domain status.
+
+ XM config: When using Xen releases prior to 3.0.4,
+ there is no inactive domain management in XenD. For such releases,
+ libvirt will automatically process XM configuration files kept in
+ the /etc/xen directory. It is important not to place
+ any other non-config files in this directory.
+
+ Below are some example XML configurations for Xen guest domains.
+ For full details of the available options, consult the domain XML format
+ guide.
+
+
Paravirtualized guest bootloader
+
+ Using a bootloader allows a paravirtualized guest to be booted using
+      a kernel stored inside its virtual disk image.
+
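A hedged sketch of what such a configuration can look like; the guest name,
memory size and disk path are invented for illustration, and the authoritative
examples belong in the domain XML format guide referenced above:

  <domain type='xen'>
    <name>demo</name>
    <bootloader>/usr/bin/pygrub</bootloader>
    <memory>262144</memory>
    <vcpu>1</vcpu>
    <os>
      <type>linux</type>
    </os>
    <devices>
      <disk type='file' device='disk'>
        <source file='/var/lib/xen/images/demo.img'/>
        <target dev='xvda' bus='xen'/>
      </disk>
    </devices>
  </domain>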
+ With Xen 3.2.0 or later it is possible to bypass the BIOS and directly
+      boot a Linux kernel and initrd as a fully virtualized domain. This allows
+ for complete automation of OS installation, for example using the Anaconda
+ kickstart support.
+
Handling of errors
+
The main goals of libvirt when it comes to error handling are:
+
+
provide as much detail as possible
+
provide the information as soon as possible
+
don't force the library user into one style of error handling
+
+
As a result the library provides both synchronous, callback-based and
+asynchronous error reporting. When an error happens in the library code, the
+error is logged, allowing it to be retrieved later, and if the user registered an
+error callback it will be called synchronously. Once the call to libvirt ends,
+the error can be detected from the return value and the full information for
+the last logged error can be retrieved.
+
To avoid as much as possible the troubles of a global variable in a
+multithreaded environment, libvirt will, when possible, associate errors with
+the connection they relate to; that way the error is stored in a
+dynamic structure which can be made thread specific. An error callback can be
+set specifically for a connection with virConnSetErrorFunc.
+
So error handling in the code is the following:
+
+
if the error can be associated with a connection, for example when failing
+    to look up a domain:
+
if there is a callback associated to the connection set with virConnSetErrorFunc,
+ call it with the error information
otherwise if there is a global callback set with virSetErrorFunc,
+ call it with the error information
otherwise call virDefaultErrorFunc
+ which is the default error function of the library issuing the error
+ on stderr
save the error in the connection for later retrieval with virConnGetLastError
+
otherwise, such as when failing to create a hypervisor connection:
+
if there is a global callback set with virSetErrorFunc,
+ call it with the error information
otherwise call virDefaultErrorFunc
+ which is the default error function of the library issuing the error
+ on stderr
save the error in the connection for later retrieval with virGetLastError
+
+
In all cases the error information is provided as a virErrorPtr pointer to
+a read-only virError structure containing the
+following fields:
domain: an enum indicating which part of libvirt raised the error see
+ virErrorDomain
+
level: the error level, usually VIR_ERR_ERROR, though there is room for
+ warnings like VIR_ERR_WARNING
+
message: the full human-readable formatted string of the error
+
conn: if available a pointer to the virConnectPtr
+ connection to the hypervisor where this happened
+
dom: if available a pointer to the virDomainPtr domain
+ targeted in the operation
+
+
and then extra raw information about the error, which may be initialized
+to 0 or NULL if unused:
+
+
str1, str2, str3: string information, usually str1 is the error
+ message format
+
int1, int2: integer information
+
+
So usually, setting up specific error handling with libvirt consists of
+registering a handler with virSetErrorFunc or
+with virConnSetErrorFunc,
+checking the code value, taking appropriate action, and if needed letting
+libvirt print the error on stderr by calling virDefaultErrorFunc.
+For asynchronous error handling, set such a function doing nothing to avoid
+the error being reported on stderr, and call virConnGetLastError or
+virGetLastError when an API call returned an error value. It can be a good
+idea to use virResetError or virConnResetLastError
+once an error has been processed fully.
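A minimal C sketch of that asynchronous style, where the domain name "demo" is a hypothetical example:

#include <stdio.h>
#include <libvirt/libvirt.h>
#include <libvirt/virterror.h>

/* Callback invoked synchronously each time libvirt logs an error. */
static void quiet_handler(void *userData, virErrorPtr error)
{
    (void)userData;
    (void)error;
    /* Do nothing, so errors stay off stderr and are examined later. */
}

int main(void) {
    virSetErrorFunc(NULL, quiet_handler);

    virConnectPtr conn = virConnectOpenReadOnly(NULL);
    if (conn == NULL) {
        /* No connection yet, so the error is only available globally. */
        virErrorPtr err = virGetLastError();
        fprintf(stderr, "connection failed: %s\n",
                (err && err->message) ? err->message : "unknown error");
        return 1;
    }

    virDomainPtr dom = virDomainLookupByName(conn, "demo");
    if (dom == NULL) {
        /* The error is associated with the connection it relates to. */
        virErrorPtr err = virConnGetLastError(conn);
        if (err != NULL)
            fprintf(stderr, "lookup failed (code %d): %s\n",
                    err->code, err->message);
        virConnResetLastError(conn);
    } else {
        virDomainFree(dom);
    }

    virConnectClose(conn);
    return 0;
}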
+
At the Python level, there is only a global reporting callback function at
+this point, see the error.py example about it:
+
def handler(ctxt, err):
    # err is a tuple carrying the same fields as virError in C
    print("libvirt error (%s): %s" % (ctxt, err))

libvirt.registerErrorHandler(handler, 'context')
+
The second argument to the registerErrorHandler function is passed as the
+first argument of the callback, like in the C version. The error is a tuple
+containing the same fields as a virError in C, but cast to Python.
This section describes the XML format used to represent domains; there are
-variations on the format based on the kind of domains run and the options
-used to launch them.
The library uses an XML format to describe domains, as input to virDomainCreateLinux()
-and as the output of virDomainGetXMLDesc();
-the following is an example of the format as returned by the shell command
-virsh xmldump fc4, where fc4 was one of the running domains:
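A minimal C sketch of the corresponding call pattern; the embedded XML, kernel path and image path are illustrative assumptions, not the actual fc4 configuration:

#include <stdio.h>
#include <stdlib.h>
#include <libvirt/libvirt.h>

int main(void) {
    /* Illustrative paravirtualized description; names and paths are assumptions. */
    const char *xml =
        "<domain type='xen'>"
        "  <name>example</name>"
        "  <memory>131072</memory>"
        "  <vcpu>1</vcpu>"
        "  <os>"
        "    <type>linux</type>"
        "    <kernel>/boot/vmlinuz-xen-guest</kernel>"
        "    <root>/dev/xvda1</root>"
        "  </os>"
        "  <devices>"
        "    <disk type='file'>"
        "      <source file='/var/lib/xen/images/example.img'/>"
        "      <target dev='xvda'/>"
        "    </disk>"
        "  </devices>"
        "</domain>";

    /* NULL selects the default hypervisor connection. */
    virConnectPtr conn = virConnectOpen(NULL);
    if (conn == NULL)
        return 1;

    /* Boot the transient domain described by the XML document. */
    virDomainPtr dom = virDomainCreateLinux(conn, xml, 0);
    if (dom != NULL) {
        /* Read the live description back, as virsh xmldump would. */
        char *desc = virDomainGetXMLDesc(dom, 0);
        if (desc != NULL) {
            printf("%s\n", desc);
            free(desc);
        }
        virDomainFree(dom);
    }
    virConnectClose(conn);
    return 0;
}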
The root element must be called domain with no namespace, the
-type attribute indicates the kind of hypervisor used, 'xen' is
-the default value. The id attribute gives the domain id at
-runtime (note however that this may change, for example if the domain is saved
-to disk and restored). The domain has a few children whose order is not
-significant:
name: the domain name, preferably ASCII based
-
memory: the maximum memory allocated to the domain in kilobytes
-
vcpu: the number of virtual cpu configured for the domain
-
os: a block describing the Operating System, its content will be
- dependent on the OS type
-
type: indicate the OS type, always linux at this point
-
kernel: path to the kernel on the Domain 0 filesystem
-
initrd: an optional path for the init ramdisk on the Domain 0
- filesystem
-
cmdline: optional command line to the kernel
-
root: the root filesystem from the guest viewpoint, it may be
- passed as part of the cmdline content too
-
-
devices: a list of disk, interface and
- console descriptions in no special order
-
The format of the devices and their type may grow over time, but the
-following should be sufficient for basic use:
A disk device indicates a block device, it can have two
-values for the type attribute either 'file' or 'block' corresponding to the 2
-options available at the Xen layer. It has two mandatory children, and one
-optional one in no specific order:
source with a file attribute containing the path in Domain 0 to the
- file or a dev attribute if using a block device, containing the device
- name ('hda5' or '/dev/hda5')
-
target indicates in a dev attribute the device where it is mapped in
- the guest
-
readonly an optional empty element indicating the device is
- read-only
-
shareable an optional empty element indicating the device
- can be used read/write with other domains
-
An interface element describes a network device mapped on the
-guest, it also has a type whose value is currently 'bridge'; it also has a
-number of children in no specific order:
source: indicating the bridge name
-
mac: the optional mac address provided in the address attribute
-
ip: the optional IP address provided in the address attribute
-
script: the script used to bridge the interface in the Domain 0
-
target: an optional target indicating the device name.
-
A console element describes a serial console connection to
-the guest. It has no children, and a single attribute tty which
-provides the path to the Pseudo TTY on which the guest console can be
-accessed
Life cycle actions for the domain can also be expressed in the XML format,
-they drive what should happen if the domain crashes, is rebooted or is
-powered off. There are various actions possible when this happens:
destroy: The domain is cleaned up (that's the default normal processing
- in Xen)
-
restart: A new domain is started in place of the old one with the same
- configuration parameters
-
preserve: The domain will remain in memory until it is destroyed
- manually, it won't be running but allows for post-mortem debugging
-
rename-restart: a variant of the previous one but where the old domain
- is renamed before being saved to allow a restart
-
The following could be used for a Xen production system:
While the format may be extended in various ways as support for more
-hypervisor types and features are added, it is expected that this core subset
-will remain functional in spite of the evolution of the library.
Here is an example of a domain description used to start a fully
-virtualized (a.k.a. HVM) Xen domain. This requires hardware virtualization
-support at the processor level but allows running unmodified operating
-systems:
There are a few things to notice specifically for HVM domains:
the optional <features> block is used to enable
- certain guest CPU / system features. For HVM guests the following
- features are defined:
-
pae - enable PAE memory addressing
-
apic - enable IO APIC
-
acpi - enable ACPI bios
-
-
the optional <clock> element is used to specify
- whether the emulated BIOS clock in the guest is synced to either
- localtime or utc. In general Windows will
- want localtime while all other operating systems will
- want utc. The default is thus utc
-
the <os> block description is very different, first
- it indicates that the type is 'hvm' for hardware virtualization, then
- instead of a kernel, boot and command line arguments, it points to an os
- boot loader which will extract the boot information from the boot device
- specified in a separate boot element. The dev attribute on
- the boot tag can be one of:
-
fd - boot from first floppy device
-
hd - boot from first harddisk device
-
cdrom - boot from first cdrom device
-
-
the <devices> section includes an emulator entry
- pointing to an additional program in charge of emulating the devices
-
the disk entry indicates in the dev target section that the emulation
- for the drive is the first IDE disk device hda. The list of device names
- supported is dependent on the Hypervisor, but for Xen it can be any IDE
- device hda-hdd, or a floppy device
- fda, fdb. The <disk> element
- also supports a 'device' attribute to indicate what kind of hardware to
- emulate. The following values are supported:
-
floppy - a floppy disk controller
-
disk - a generic hard drive (the default if
- omitted)
-
cdrom - a CDROM device
-
- For Xen 3.0.2 and earlier a CDROM device can only be emulated on the
- hdc channel, while for 3.0.3 and later, it can be emulated
- on any IDE channel.
-
the <devices> section also includes at least one
- entry for the graphic device used to render the os. Currently there are
- just 2 types possible, 'vnc' or 'sdl'. If the type is 'vnc', then an
- additional port attribute will be present indicating the TCP
- port on which the VNC server is accepting client connections.
-
It is likely that the HVM description gets additional optional elements
-and attributes as the support for fully virtualized domain expands,
-especially for the variety of devices emulated and the graphic support
-options offered.
Support for the KVM virtualization
-is provided in recent Linux kernels (2.6.20 and onward). This requires
-specific hardware with acceleration support and the availability of the
-special version of the QEmu binary. Since this
-relies on QEmu for the machine emulation, like fully virtualized guests, the
-XML description is quite similar; here is a simple example:
The networking support in the QEmu and KVM case is more flexible, and
-supports a variety of options:
Userspace SLIRP stack
-
Provides a virtual LAN with NAT to the outside world. The virtual
- network has DHCP & DNS services and will give the guest VM addresses
- starting from 10.0.2.15. The default router will be
- 10.0.2.2 and the DNS server will be 10.0.2.3.
- This networking is the only option for unprivileged users who need their
- VMs to have outgoing access. Example configs are:
Provides a virtual network using a bridge device in the host.
- Depending on the virtual network configuration, the network may be
- totally isolated, NAT'ing to an explicit network device, or NAT'ing to
- the default route. DHCP and DNS are provided on the virtual network in
- all cases and the IP range can be determined by examining the virtual
- network config with 'virsh net-dumpxml <network
- name>'. There is one virtual network called 'default' setup out
- of the box which does NAT'ing to the default route and has an IP range of
- 192.168.22.0/255.255.255.0. Each guest will have an
- associated tun device created with a name of vnetN, which can also be
- overridden with the <target> element. Example configs are:
Provides a bridge from the VM directly onto the LAN. This assumes
- there is a bridge device on the host which has one or more of the hosts
- physical NICs enslaved. The guest VM will have an associated tun device
- created with a name of vnetN, which can also be overridden with the
- <target> element. The tun device will be enslaved to the bridge.
- The IP range / network configuration is whatever is used on the LAN. This
- provides the guest VM full incoming & outgoing net access just like a
- physical machine. Examples include:
Provides a means for the administrator to execute an arbitrary script
- to connect the guest's network to the LAN. The guest will have a tun
- device created with a name of vnetN, which can also be overridden with the
- <target> element. After creating the tun device a shell script will
- be run which is expected to do whatever host network integration is
- required. By default this script is called /etc/qemu-ifup but can be
- overridden.
A multicast group is setup to represent a virtual network. Any VMs
- whose network devices are in the same multicast group can talk to each
- other even across hosts. This mode is also available to unprivileged
- users. There is no default DNS or DHCP support and no outgoing network
- access. To provide outgoing network access, one of the VMs should have a
- 2nd NIC which is connected to one of the first 4 network types and do the
- appropriate routing. The multicast protocol is compatible with that used
- by user mode linux guests too. The source address used must be from the
- multicast address block.
A TCP client/server architecture provides a virtual network. One VM
- provides the server end of the network, all other VMS are configured as
- clients. All network traffic is routed between the VMs via the server.
- This mode is also available to unprivileged users. There is no default
- DNS or DHCP support and no outgoing network access. To provide outgoing
- network access, one of the VMs should have a 2nd NIC which is connected
- to one of the first 4 network types and do the appropriate routing.
To be noted, options 2, 3, 4 are also supported by Xen VMs, so it is
-possible to use these configs to have networking with both Xen &
-QEMU/KVMs connected to each other.
Libvirt support for KVM and QEmu is the same code base with only minor
-changes. The configuration is as a result nearly identical, the only changes
-are related to QEmu's ability to emulate various CPU types and hardware
-platforms, and kqemu support (QEmu's own kernel accelerator when the
-emulated CPU is i686 as well as the target machine):
As new virtualization engine support gets added to libvirt, and to handle
-cases like QEmu supporting a variety of emulations, a query interface has
-been added in 0.2.1 that allows listing the set of supported virtualization
-capabilities on the host:
The value returned is an XML document listing the virtualization
-capabilities of the host and virtualization engine to which
-@conn is connected. One can test it using the virsh
-command line tool's 'capabilities' command, which dumps the XML
-associated with the current connection. For example, in the case of a 64 bit
-machine with hardware virtualization capabilities enabled in the chip and
-BIOS you will see:
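A minimal C sketch, assuming the default hypervisor connection is available, fetches the same document that the virsh 'capabilities' command prints:

#include <stdio.h>
#include <stdlib.h>
#include <libvirt/libvirt.h>

int main(void) {
    virConnectPtr conn = virConnectOpenReadOnly(NULL);  /* default hypervisor */
    if (conn == NULL)
        return 1;

    char *caps = virConnectGetCapabilities(conn);  /* the XML described below */
    if (caps != NULL) {
        printf("%s\n", caps);
        free(caps);
    }
    virConnectClose(conn);
    return 0;
}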
The first block (in red) indicates the host hardware capabilities, currently
-it is limited to the CPU properties but other information may be available,
-it shows the CPU architecture, and the features of the chip (the feature
-block is similar to what you will find in a Xen fully virtualized domain
-description).
The second block (in blue) indicates the paravirtualization support of the
-Xen support, you will see the os_type of xen to indicate a paravirtual
-kernel, then architecture information and potential features.
The third block (in green) gives similar information but when running a
-32 bit OS fully virtualized with Xen using the hvm support.
This section is likely to be updated and augmented in the future, see the
-discussion which led to the capabilities format in the mailing-list
-archives.
+
+
+
diff --git a/docs/formatdomain.html b/docs/formatdomain.html
new file mode 100644
index 0000000000..5239d51c67
--- /dev/null
+++ b/docs/formatdomain.html
@@ -0,0 +1,314 @@
Domain XML format
+
This section describes the XML format used to represent domains; there are
+variations on the format based on the kind of domains run and the options
+used to launch them:
The root element must be called domain with no namespace, the
+type attribute indicates the kind of hypervisor used, 'xen' is
+the default value. The id attribute gives the domain id at
+runtime (note however that this may change, for example if the domain is saved
+to disk and restored). The domain has a few children whose order is not
+significant:
+
name: the domain name, preferably ASCII based
memory: the maximum memory allocated to the domain in kilobytes
vcpu: the number of virtual cpu configured for the domain
os: a block describing the Operating System, its content will be
+ dependent on the OS type
+
type: indicate the OS type, always linux at this point
kernel: path to the kernel on the Domain 0 filesystem
initrd: an optional path for the init ramdisk on the Domain 0
+ filesystem
cmdline: optional command line to the kernel
root: the root filesystem from the guest viewpoint, it may be
+ passed as part of the cmdline content too
devices: a list of disk, interface and
+ console descriptions in no special order
+
The format of the devices and their type may grow over time, but the
+following should be sufficient for basic use:
+
A disk device indicates a block device, it can have two
+values for the type attribute either 'file' or 'block' corresponding to the 2
+options available at the Xen layer. It has two mandatory children, and one
+optional one in no specific order:
+
source with a file attribute containing the path in Domain 0 to the
+ file or a dev attribute if using a block device, containing the device
+ name ('hda5' or '/dev/hda5')
target indicates in a dev attribute the device where it is mapped in
+ the guest
readonly an optional empty element indicating the device is
+ read-only
shareable an optional empty element indicating the device
+ can be used read/write with other domains
+
An interface element describes a network device mapped on the
+guest, it also has a type whose value is currently 'bridge'; it also has a
+number of children in no specific order:
+
source: indicating the bridge name
mac: the optional mac address provided in the address attribute
ip: the optional IP address provided in the address attribute
script: the script used to bridge the interface in the Domain 0
target: an optional target indicating the device name.
+
A console element describes a serial console connection to
+the guest. It has no children, and a single attribute tty which
+provides the path to the Pseudo TTY on which the guest console can be
+accessed
+
Life cycle actions for the domain can also be expressed in the XML format,
+they drive what should happen if the domain crashes, is rebooted or is
+powered off. There are various actions possible when this happens:
+
destroy: The domain is cleaned up (that's the default normal processing
+ in Xen)
restart: A new domain is started in place of the old one with the same
+ configuration parameters
preserve: The domain will remain in memory until it is destroyed
+ manually, it won't be running but allows for post-mortem debugging
rename-restart: a variant of the previous one but where the old domain
+ is renamed before being saved to allow a restart
+
The following could be used for a Xen production system:
While the format may be extended in various ways as support for more
+hypervisor types and features are added, it is expected that this core subset
+will remain functional in spite of the evolution of the library.
There are a few things to notice specifically for HVM domains:
+
the optional <features> block is used to enable
+ certain guest CPU / system features. For HVM guests the following
+ features are defined:
+
pae - enable PAE memory addressing
apic - enable IO APIC
acpi - enable ACPI bios
the optional <clock> element is used to specify
+ whether the emulated BIOS clock in the guest is synced to either
+ localtime or utc. In general Windows will
+ want localtime while all other operating systems will
+ want utc. The default is thus utc
the <os> block description is very different, first
+ it indicates that the type is 'hvm' for hardware virtualization, then
+ instead of a kernel, boot and command line arguments, it points to an os
+ boot loader which will extract the boot information from the boot device
+ specified in a separate boot element. The dev attribute on
+ the boot tag can be one of:
+
fd - boot from first floppy device
hd - boot from first harddisk device
cdrom - boot from first cdrom device
the <devices> section includes an emulator entry
+ pointing to an additional program in charge of emulating the devices
the disk entry indicates in the dev target section that the emulation
+ for the drive is the first IDE disk device hda. The list of device names
+ supported is dependent on the Hypervisor, but for Xen it can be any IDE
+ device hda-hdd, or a floppy device
+ fda, fdb. The <disk> element
+ also supports a 'device' attribute to indicate what kind of hardware to
+ emulate. The following values are supported:
+
floppy - a floppy disk controller
disk - a generic hard drive (the default if
+ omitted)
cdrom - a CDROM device
+ For Xen 3.0.2 and earlier a CDROM device can only be emulated on the
+ hdc channel, while for 3.0.3 and later, it can be emulated
+ on any IDE channel.
the <devices> section also includes at least one
+ entry for the graphic device used to render the os. Currently there are
+ just 2 types possible, 'vnc' or 'sdl'. If the type is 'vnc', then an
+ additional port attribute will be present indicating the TCP
+ port on which the VNC server is accepting client connections.
+
It is likely that the HVM description gets additional optional elements
+and attributes as the support for fully virtualized domain expands,
+especially for the variety of devices emulated and the graphic support
+options offered.
The networking support in the QEmu and KVM case is more flexible, and
+supports a variety of options:
+
Userspace SLIRP stack
+
Provides a virtual LAN with NAT to the outside world. The virtual
+ network has DHCP & DNS services and will give the guest VM addresses
+ starting from 10.0.2.15. The default router will be
+ 10.0.2.2 and the DNS server will be 10.0.2.3.
+ This networking is the only option for unprivileged users who need their
+ VMs to have outgoing access. Example configs are:
Provides a virtual network using a bridge device in the host.
+ Depending on the virtual network configuration, the network may be
+ totally isolated, NAT'ing to an explicit network device, or NAT'ing to
+ the default route. DHCP and DNS are provided on the virtual network in
+ all cases and the IP range can be determined by examining the virtual
+ network config with 'virsh net-dumpxml <network
+ name>'. There is one virtual network called 'default' setup out
+ of the box which does NAT'ing to the default route and has an IP range of
+ 192.168.22.0/255.255.255.0. Each guest will have an
+ associated tun device created with a name of vnetN, which can also be
+ overridden with the <target> element. Example configs are:
Provides a bridge from the VM directly onto the LAN. This assumes
+ there is a bridge device on the host which has one or more of the hosts
+ physical NICs enslaved. The guest VM will have an associated tun device
+ created with a name of vnetN, which can also be overridden with the
+ <target> element. The tun device will be enslaved to the bridge.
+ The IP range / network configuration is whatever is used on the LAN. This
+ provides the guest VM full incoming & outgoing net access just like a
+ physical machine. Examples include:
Provides a means for the administrator to execute an arbitrary script
+ to connect the guest's network to the LAN. The guest will have a tun
+ device created with a name of vnetN, which can also be overridden with the
+ <target> element. After creating the tun device a shell script will
+ be run which is expected to do whatever host network integration is
+ required. By default this script is called /etc/qemu-ifup but can be
+ overridden.
A multicast group is setup to represent a virtual network. Any VMs
+ whose network devices are in the same multicast group can talk to each
+ other even across hosts. This mode is also available to unprivileged
+ users. There is no default DNS or DHCP support and no outgoing network
+ access. To provide outgoing network access, one of the VMs should have a
+ 2nd NIC which is connected to one of the first 4 network types and do the
+ appropriate routing. The multicast protocol is compatible with that used
+ by user mode linux guests too. The source address used must be from the
+ multicast address block.
A TCP client/server architecture provides a virtual network. One VM
+ provides the server end of the network, all other VMS are configured as
+ clients. All network traffic is routed between the VMs via the server.
+ This mode is also available to unprivileged users. There is no default
+ DNS or DHCP support and no outgoing network access. To provide outgoing
+ network access, one of the VMs should have a 2nd NIC which is connected
+ to one of the first 4 network types and do the appropriate routing.
To be noted, options 2, 3, 4 are also supported by Xen VMs, so it is
+possible to use these configs to have networking with both Xen &
+QEMU/KVMs connected to each other.
+
Example configs
+
+    Example configurations for each driver are provided on the
+    driver specific pages listed below.
+
+Although all storage pool backends share the same public APIs and
+XML format, they have varying levels of capabilities. Some may
+allow creation of volumes, others may only allow use of pre-existing
+volumes. Some may have constraints on volume size, or placement.
+
+
The top level tag for a storage pool document is 'pool'. It has
+a single attribute type, which is one of dir,
+fs,netfs,disk,iscsi,
+logical. This corresponds to the storage backend drivers
+listed further along in this document.
+
Providing a name for the pool which is unique to the host.
+This is mandatory when defining a pool
+
uuid
+
Providing an identifier for the pool which is globally unique.
+This is optional when defining a pool, a UUID will be generated if
+omitted
+
allocation
+
Providing the total storage allocation for the pool. This may
+be larger than the sum of the allocation of all volumes due to
+metadata overhead. This value is in bytes. This is not applicable
+when creating a pool.
+
capacity
+
Providing the total storage capacity for the pool. Due to
+underlying device constraints it may not be possible to use the
+full capacity for storage volumes. This value is in bytes. This
+is not applicable when creating a pool.
+
available
+
Providing the free space available for allocating new volumes
+in the pool. Due to underlying device constraints it may not be
+possible to allocate the entire free space to a single volume.
+This value is in bytes. This is not applicable when creating a
+pool.
+
source
+
Provides information about the source of the pool, such as
+the underlying host devices, or remote server
+
target
+
Provides information about the representation of the pool
+on the local host.
Provides the source for pools backed by physical devices.
+May be repeated multiple times depending on backend driver. Contains
+a single attribute path which is the fully qualified
+path to the block device node.
+
directory
+
Provides the source for pools backed by directories. May
+only occur once. Contains a single attribute path
+which is the fully qualified path to the block device node.
+
host
+
Provides the source for pools backed by storage from a
+remote server. Will be used in combination with a directory
+or device element. Contains an attribute name
+which is the hostname or IP address of the server. May optionally
+contain a port attribute for the protocol specific
+port number.
+
format
+
Provides information about the format of the pool. This
+contains a single attribute type whose value is
+backend specific. This is typically used to indicate filesystem
+type, or network filesystem type, or partition table type, or
+LVM metadata type. All drivers are required to have a default
+value for this, so it is optional.
Provides the location at which the pool will be mapped into
+the local filesystem namespace. For a filesystem/directory based
+pool it will be the name of the directory in which volumes will
+be created. For device based pools it will be the name of the directory in which
+devices nodes exist. For the latter /dev/ may seem
+like the logical choice, however, devices nodes there are not
+guaranteed stable across reboots, since they are allocated on
+demand. It is preferable to use a stable location such as one
+of the /dev/disk/by-{path,id,uuid,label} locations.
+
+
permissions
+
Provides information about the default permissions to use
+when creating volumes. This is currently only useful for directory
+or filesystem based pools, where the volumes allocated are simple
+files. For pools where the volumes are device nodes, the hotplug
+scripts determine permissions. It contains 4 child elements. The
+mode element contains the octal permission set. The
+owner element contains the numeric user ID. The group
+element contains the numeric group ID. The label element
+contains the MAC (eg SELinux) label string.
+
+If a storage pool exposes information about its underlying
+placement / allocation scheme, the device element
+within the source element may contain information
+about its available extents. Some pools have a constraint that
+a volume must be allocated entirely within a single free extent
+(eg disk partition pools). Thus the extent information allows an
+application to determine the maximum possible size for a new
+volume
+
+
+For storage pools supporting extent information, within each
+device element there will be zero or more freeExtent
+elements. Each of these elements contains two attributes, start
+and end which provide the boundaries of the extent on the
+device, measured in bytes.
+
Providing a name for the volume which is unique to the pool.
+This is mandatory when defining a volume
+
uuid
+
Providing an identifier for the pool which is globally unique.
+This is optional when defining a pool, a UUID will be generated if
+omitted
+
allocation
+
Providing the total storage allocation for the volume. This
+may be smaller than the logical capacity if the volume is sparsely
+allocated. It may also be larger than the logical capacity if the
+volume has substantial metadata overhead. This value is in bytes.
+If omitted when creating a volume, the volume will be fully
+allocated at time of creation. If set to a value smaller than the
+capacity, the pool has the option of deciding
+to sparsely allocate a volume. It does not have to honour requests
+for sparse allocation though.
+
capacity
+
Providing the logical capacity for the volume. This value is
+in bytes. This is compulsory when creating a volume
+
source
+
Provides information about the underlying storage allocation
+of the volume. This may not be available for some pool types.
+
target
+
Provides information about the representation of the volume
+on the local host.
Provides the location at which the pool will be mapped into
+the local filesystem namespace. For a filesystem/directory based
+pool it will be the name of the directory in which volumes will
+be created. For device based pools it will be the name of the directory in which
+devices nodes exist. For the latter /dev/ may seem
+like the logical choice, however, devices nodes there are not
+guaranteed stable across reboots, since they are allocated on
+demand. It is preferable to use a stable location such as one
+of the /dev/disk/by-{path,id,uuid,label} locations.
+
+
format
+
Provides information about the pool specific volume format.
+For disk pools it will provide the partition type. For filesystem
+or directory pools it will provide the file format type, eg cow,
+qcow, vmdk, raw. If omitted when creating a volume, the pool's
+default format will be used. The actual format is specified via
+the type. Consult the pool-specific docs for the
+list of valid values.
+
permissions
+
Provides information about the default permissions to use
+when creating volumes. This is currently only useful for directory
+or filesystem based pools, where the volumes allocated are simple
+files. For pools where the volumes are device nodes, the hotplug
+scripts determine permissions. It contains 4 child elements. The
+mode element contains the octal permission set. The
+owner element contains the numeric user ID. The group
+element contains the numeric group ID. The label element
+contains the MAC (eg SELinux) label string.
+
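Putting the volume elements described above together, here is a hedged sketch that builds a minimal volume document and feeds it to virStorageVolCreateXML(); the volume name, size and permissions are invented for illustration.

  #include <stdio.h>
  #include <stdlib.h>
  #include <libvirt/libvirt.h>

  /* Hypothetical volume description using the elements documented above. */
  static const char *volxml =
      "<volume>"
      "  <name>scratch.img</name>"
      "  <capacity>1073741824</capacity>"       /* 1 GiB logical size, in bytes */
      "  <allocation>0</allocation>"            /* request sparse allocation */
      "  <target>"
      "    <format type='raw'/>"
      "    <permissions>"
      "      <mode>0600</mode>"
      "      <owner>0</owner>"
      "      <group>0</group>"
      "    </permissions>"
      "  </target>"
      "</volume>";

  int create_scratch_volume(virStoragePoolPtr pool) {
      virStorageVolPtr vol = virStorageVolCreateXML(pool, volxml, 0);
      if (vol == NULL)
          return -1;                             /* creation failed */
      char *path = virStorageVolGetPath(vol);
      if (path != NULL) {
          printf("created volume at %s\n", path);
          free(path);
      }
      virStorageVolFree(vol);
      return 0;
  }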
Structure virConfValue struct _virConfValue {
- virConfType type : the virConfType
- virConfValuePtr next : next element if in a list
- long l : long integer
- char * str : pointer to 0 terminated string
- virConfValuePtr list : list of a list
-}
a handle to lookup settings or NULL if it failed to read or parse the file, use virConfFree() to free the data.
Function: virConfReadMem
virConfPtr virConfReadMem (const char * memory, int len)
-
Reads a configuration file loaded in memory. The string can be zero terminated in which case @len can be 0
-
memory:
pointer to the content of the configuration file
len:
length in byte
Returns:
a handle to lookup settings or NULL if it failed to parse the content, use virConfFree() to free the data.
Function: virConfWriteFile
int virConfWriteFile (const char * filename, virConfPtr conf)
-
Writes a configuration file back to a file.
-
filename:
the path to the configuration file.
conf:
the conf
Returns:
the number of bytes written or -1 in case of error.
Function: virConfWriteMem
int virConfWriteMem (char * memory, int * len, virConfPtr conf)
-
Writes a configuration file back to a memory area. @len is an IN/OUT parameter, it indicates the size available in bytes, and on output the size required for the configuration file (even if the call fails due to insufficient space).
-
memory:
pointer to the memory to store the config file
len:
pointer to the length in byte of the store, on output the size
conf:
the conf
Returns:
the number of bytes written or -1 in case of error.
Macro providing the version of the library as major * 1,000,000 + minor * 1,000 + micro
-
Macro: VIR_COPY_CPUMAP
#define VIR_COPY_CPUMAP
This macro is to be used in conjunction with virDomainGetVcpus() and virDomainPinVcpu() APIs. The VIR_COPY_CPUMAP macro extracts the cpumap of the specified vcpu from the cpumaps array and copies it into cpumap to be used later by the virDomainPinVcpu() API.
-
Macro: VIR_CPU_MAPLEN
#define VIR_CPU_MAPLEN
This macro is to be used in conjunction with virDomainPinVcpu() API. It returns the length (in bytes) required to store the complete CPU map between a single virtual & all physical CPUs of a domain.
-
Macro: VIR_CPU_USABLE
#define VIR_CPU_USABLE
This macro is to be used in conjunction with virDomainGetVcpus() API. VIR_CPU_USABLE macro returns a non zero value (true) if the cpu is usable by the vcpu, and 0 otherwise.
This macro is to be used in conjunction with virDomainGetVcpus() and virDomainPinVcpu() APIs. VIR_GET_CPUMAP macro returns a pointer to the cpumap of the specified vcpu from cpumaps array.
-
Macro: VIR_NODEINFO_MAXCPUS
#define VIR_NODEINFO_MAXCPUS
This macro is to calculate the total number of CPUs supported but not necessarily active in the host.
-
Macro: VIR_UNUSE_CPU
#define VIR_UNUSE_CPU
This macro is to be used in conjunction with virDomainPinVcpu() API. The VIR_UNUSE_CPU macro resets the bit (CPU not usable) of the related cpu in cpumap.
-
Macro: VIR_USE_CPU
#define VIR_USE_CPU
This macro is to be used in conjunction with virDomainPinVcpu() API. The VIR_USE_CPU macro sets the bit (CPU usable) of the related cpu in cpumap.
-
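To make the cpumap conventions concrete, here is a small hedged sketch showing how the macros above cooperate: VIR_CPU_MAPLEN() sizes the byte array and VIR_USE_CPU()/VIR_UNUSE_CPU() toggle individual physical CPUs in it. The CPU count of 8 is only an example value.

  #include <stdlib.h>
  #include <libvirt/libvirt.h>

  /* Build a cpumap allowing only physical CPU 2, assuming 8 host CPUs. */
  unsigned char *build_example_cpumap(void) {
      int ncpus = 8;                        /* example value; normally from virNodeGetInfo() */
      int maplen = VIR_CPU_MAPLEN(ncpus);   /* bytes needed to cover ncpus bits */
      unsigned char *cpumap = calloc(maplen, 1);
      if (cpumap == NULL)
          return NULL;

      VIR_USE_CPU(cpumap, 0);               /* set the bit for CPU 0 */
      VIR_USE_CPU(cpumap, 2);               /* set the bit for CPU 2 */
      VIR_UNUSE_CPU(cpumap, 0);             /* and clear CPU 0 again, for illustration */
      return cpumap;                        /* caller frees; length is maplen */
  }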
Macro: VIR_UUID_BUFLEN
#define VIR_UUID_BUFLEN
This macro provides the length of the buffer required for virDomainGetUUID()
-
Macro: VIR_UUID_STRING_BUFLEN
#define VIR_UUID_STRING_BUFLEN
This macro provides the length of the buffer required for virDomainGetUUIDString()
Structure virConnectCredential struct _virConnectCredential {
- int type : One of virConnectCredentialType constants
- const char * prompt : Prompt to show to user
- const char * challenge : Additional challenge to show
- const char * defresult : Optional default result
- char * result : Result to be filled with user response
- unsigned int resultlen : Length of the result
-}
Structure virDomainBlockStatsStruct struct _virDomainBlockStats {
- long long rd_req : number of read requests
- long long rd_bytes : number of read bytes
- long long wr_req : number of write requests
- long long wr_bytes : number of written bytes
- long long errs : In Xen this returns the mysterious 'oo_req'
-}
Structure virDomainInfo struct _virDomainInfo {
- unsigned char state : the running state, one of virDomainFlag
- unsigned long maxMem : the maximum memory in KBytes allowed
- unsigned long memory : the memory in KBytes used by the domain
- unsigned short nrVirtCpu : the number of virtual CPUs for the domain
- unsigned long long cpuTime : the CPU time used in nanoseconds
-}
- a virDomainInfoPtr is a pointer to a virDomainInfo structure.
-
- A pointer to a virDomainInterfaceStats structure
-
Structure virDomainInterfaceStatsStruct struct _virDomainInterfaceStats {
- long long rx_bytes
- long long rx_packets
- long long rx_errs
- long long rx_drop
- long long tx_bytes
- long long tx_packets
- long long tx_errs
- long long tx_drop
-}
Structure virNodeInfo struct _virNodeInfo {
- char model[32] : string indicating the CPU model
- unsigned long memory : memory size in kilobytes
- unsigned int cpus : the number of active CPUs
- unsigned int mhz : expected CPU frequency
- unsigned int nodes : the number of NUMA cells, 1 for uniform memory access
- unsigned int sockets : number of CPU socket per node
- unsigned int cores : number of core per socket
- unsigned int threads : number of threads per core
-}
- a virNodeInfoPtr is a pointer to a virNodeInfo structure.
-
Structure virSchedParameter struct _virSchedParameter {
- char field[VIR_DOMAIN_SCHED_FIELD_LENGTH] : parameter name
- int type : parameter type
-}
- a virSchedParameterPtr is a pointer to a virSchedParameter structure.
-
Structure virStoragePoolInfo struct _virStoragePoolInfo {
- int state : virStoragePoolState flags
- unsigned long long capacity : Logical size bytes
- unsigned long long allocation : Current allocation bytes
- unsigned long long available : Remaining free space bytes
-}
- a virStoragePoolPtr is a pointer to a virStoragePool private structure, this is the type used to reference a storage pool in the API.
-
Structure virStorageVolInfo struct _virStorageVolInfo {
- int type : virStorageVolType flags
- unsigned long long capacity : Logical size bytes
- unsigned long long allocation : Current allocation bytes
-}
- a virStorageVolPtr is a pointer to a virStorageVol private structure, this is the type used to reference a storage volume in the API.
-
Structure virVcpuInfo struct _virVcpuInfo {
- unsigned int number : virtual CPU number
- int state : value from virVcpuState
- unsigned long long cpuTime : CPU time used, in nanoseconds
- int cpu : real CPU number, or -1 if offline
-}
This function closes the connection to the Hypervisor. This should not be called if further interaction with the Hypervisor is needed, especially if there is a running domain which needs further monitoring by the application.
This returns the system hostname on which the hypervisor is running (the result of the gethostname(2) system call). If we are connected to a remote system, then this returns the hostname of the remote system.
-
conn:
pointer to a hypervisor connection
Returns:
the hostname which must be freed by the caller, or NULL if there was an error.
Function: virConnectGetMaxVcpus
int virConnectGetMaxVcpus (virConnectPtr conn, const char * type)
-
Provides the maximum number of virtual CPUs supported for a guest VM of a specific type. The 'type' parameter here corresponds to the 'type' attribute in the <domain> element of the XML.
-
conn:
pointer to the hypervisor connection
type:
value of the 'type' attribute in the <domain> element
Returns:
the maximum number of virtual CPUs or -1 in case of error.
This returns the URI (name) of the hypervisor connection. Normally this is the same as or similar to the string passed to the virConnectOpen/virConnectOpenReadOnly call, but the driver may make the URI canonical. If name == NULL was passed to virConnectOpen, then the driver will return a non-NULL URI which can be used to connect to the same hypervisor later.
-
conn:
pointer to a hypervisor connection
Returns:
the URI string which must be freed by the caller, or NULL if there was an error.
Function: virConnectGetVersion
int virConnectGetVersion (virConnectPtr conn, unsigned long * hvVer)
-
Get the version level of the Hypervisor running. This may work only with hypervisor call, i.e. with privileged access to the hypervisor, not with a Read-Only connection.
-
conn:
pointer to the hypervisor connection
hvVer:
return value for the version of the running hypervisor (OUT)
Returns:
-1 in case of error, 0 otherwise. If the version can't be extracted for lack of capabilities, returns 0 and @hvVer is 0; otherwise @hvVer value is major * 1,000,000 + minor * 1,000 + release
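The encoded value is easy to split back into its components; this hedged sketch just applies the major * 1,000,000 + minor * 1,000 + release formula in reverse.

  #include <stdio.h>
  #include <libvirt/libvirt.h>

  void print_hypervisor_version(virConnectPtr conn) {
      unsigned long hvVer = 0;
      if (virConnectGetVersion(conn, &hvVer) == 0 && hvVer != 0) {
          /* hvVer is major * 1,000,000 + minor * 1,000 + release */
          printf("hypervisor version %lu.%lu.%lu\n",
                 hvVer / 1000000, (hvVer / 1000) % 1000, hvVer % 1000);
      }
  }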
Function: virConnectListDefinedDomains
int virConnectListDefinedDomains (virConnectPtr conn, char ** const names, int maxnames)
-
list the defined but inactive domains, stores the pointers to the names in @names
-
conn:
pointer to the hypervisor connection
names:
pointer to an array to store the names
maxnames:
size of the array
Returns:
the number of names provided in the array or -1 in case of error
Function: virConnectListDefinedNetworks
int virConnectListDefinedNetworks (virConnectPtr conn, char ** const names, int maxnames)
-
list the inactive networks, stores the pointers to the names in @names
-
conn:
pointer to the hypervisor connection
names:
pointer to an array to store the names
maxnames:
size of the array
Returns:
the number of names provided in the array or -1 in case of error
Function: virConnectListDefinedStoragePools
int virConnectListDefinedStoragePools (virConnectPtr conn, char ** const names, int maxnames)
-
Provides the list of names of inactive storage pools up to maxnames. If there are more than maxnames, the remaining names will be silently ignored.
-
conn:
pointer to hypervisor connection
names:
array of char * to fill with pool names (allocated by caller)
maxnames:
size of the names array
Returns:
0 on success, -1 on error
Function: virConnectListDomains
int virConnectListDomains (virConnectPtr conn, int * ids, int maxids)
-
Collect the list of active domains, and store their IDs in @ids
-
conn:
pointer to the hypervisor connection
ids:
array to collect the list of IDs of active domains
maxids:
size of @ids
Returns:
the number of domain found or -1 in case of error
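A typical calling pattern, sketched under the assumption that virConnectNumOfDomains() is used to size the array first; the connection is assumed to already be open.

  #include <stdio.h>
  #include <stdlib.h>
  #include <libvirt/libvirt.h>

  void list_active_domains(virConnectPtr conn) {
      int max = virConnectNumOfDomains(conn);
      if (max <= 0)
          return;

      int *ids = malloc(max * sizeof(int));
      if (ids == NULL)
          return;

      int n = virConnectListDomains(conn, ids, max);
      for (int i = 0; i < n; i++)
          printf("active domain id %d\n", ids[i]);
      free(ids);
  }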
Function: virConnectListNetworks
int virConnectListNetworks (virConnectPtr conn, char ** const names, int maxnames)
-
Collect the list of active networks, and store their names in @names
-
conn:
pointer to the hypervisor connection
names:
array to collect the list of names of active networks
maxnames:
size of @names
Returns:
the number of networks found or -1 in case of error
Function: virConnectListStoragePools
int virConnectListStoragePools (virConnectPtr conn, char ** const names, int maxnames)
-
Provides the list of names of active storage pools up to maxnames. If there are more than maxnames, the remaining names will be silently ignored.
-
conn:
pointer to hypervisor connection
names:
array of char * to fill with pool names (allocated by caller)
maxnames:
size of the names array
Returns:
0 on success, -1 on error
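The names array is caller-allocated while the name strings must be freed by the caller; a hedged sketch of the pattern, assuming the fixed limit of 16 entries (arbitrary) and that unused slots keep the NULL value they were initialized with.

  #include <stdio.h>
  #include <stdlib.h>
  #include <libvirt/libvirt.h>

  void list_active_pools(virConnectPtr conn) {
      char *names[16] = { NULL };        /* 16 is an arbitrary illustrative limit */
      int i;

      if (virConnectListStoragePools(conn, names, 16) < 0)
          return;

      for (i = 0; i < 16 && names[i] != NULL; i++) {
          printf("active pool: %s\n", names[i]);
          free(names[i]);                /* each returned name string is freed by the caller */
      }
  }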
Function: virConnectNumOfDefinedDomains
int virConnectNumOfDefinedDomains (virConnectPtr conn)
-
Provides the number of defined but inactive domains.
-
conn:
pointer to the hypervisor connection
Returns:
the number of domain found or -1 in case of error
Function: virConnectNumOfDefinedNetworks
int virConnectNumOfDefinedNetworks (virConnectPtr conn)
-
Provides the number of inactive networks.
-
conn:
pointer to the hypervisor connection
Returns:
the number of networks found or -1 in case of error
Function: virConnectNumOfDefinedStoragePools
int virConnectNumOfDefinedStoragePools (virConnectPtr conn)
-
This function should be called first to get a connection to the Hypervisor. If necessary, authentication will be performed fetching credentials via the callback
-
name:
URI of the hypervisor
auth:
Authenticate callback parameters
flags:
Open flags
Returns:
a pointer to the hypervisor connection or NULL in case of error URIs are documented at http://libvirt.org/uri.html
This function should be called first to get a restricted connection to the library functionalities. The set of APIs usable are then restricted on the available methods to control the domains.
-
name:
URI of the hypervisor
Returns:
a pointer to the hypervisor connection or NULL in case of error URIs are documented at http://libvirt.org/uri.html
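A minimal connection sketch; the "xen:///" URI is only an example, and passing NULL instead lets libvirt pick the default hypervisor driver.

  #include <stdio.h>
  #include <libvirt/libvirt.h>

  int main(void) {
      /* "xen:///" is only an example URI; NULL selects the default driver */
      virConnectPtr conn = virConnectOpenReadOnly("xen:///");
      if (conn == NULL) {
          fprintf(stderr, "failed to open a read-only connection\n");
          return 1;
      }
      printf("connected, %d active domain(s)\n", virConnectNumOfDomains(conn));
      virConnectClose(conn);
      return 0;
  }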
Function: virDomainAttachDevice
int virDomainAttachDevice (virDomainPtr domain, const char * xml)
-
This function returns block device (disk) stats for block devices attached to the domain. The path parameter is the name of the block device. Get this by calling virDomainGetXMLDesc and finding the <target dev='...'> attribute within //domain/devices/disk. (For example, "xvda"). Domains may have more than one block device. To get stats for each you should make multiple calls to this function. Individual fields within the stats structure may be returned as -1, which indicates that the hypervisor does not support that particular statistic.
-
dom:
pointer to the domain object
path:
path to the block device
stats:
block device stats (returned)
size:
size of stats structure
Returns:
0 in case of success or -1 in case of failure.
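A hedged usage sketch; "xvda" is just the example device name mentioned above, and fields reported as -1 are skipped since the hypervisor does not support them.

  #include <stdio.h>
  #include <libvirt/libvirt.h>

  void show_disk_stats(virDomainPtr dom) {
      virDomainBlockStatsStruct stats;

      /* "xvda" is an example device name; take it from the domain XML in practice */
      if (virDomainBlockStats(dom, "xvda", &stats, sizeof(stats)) < 0)
          return;

      if (stats.rd_req >= 0)
          printf("read requests: %lld\n", stats.rd_req);
      if (stats.wr_bytes >= 0)
          printf("bytes written: %lld\n", stats.wr_bytes);
  }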
Function: virDomainCoreDump
int virDomainCoreDump (virDomainPtr domain, const char * to, int flags)
-
This method will dump the core of a domain to a given file for analysis. Note that for a remote Xen Daemon the file path will be interpreted on the remote host.
Launch a new Linux guest domain, based on an XML description similar to the one returned by virDomainGetXMLDesc(). This function may require privileged access to the hypervisor.
-
conn:
pointer to the hypervisor connection
xmlDesc:
string containing an XML description of the domain
Destroy the domain object. The running instance is shut down if not down already and all resources used by it are given back to the hypervisor. The data structure is freed and should not be used thereafter if the call does not return an error. This function may require privileged access
-
domain:
a domain object
Returns:
0 in case of success and -1 in case of failure.
Function: virDomainDetachDevice
int virDomainDetachDevice (virDomainPtr domain, const char * xml)
-
Provides the connection pointer associated with a domain. The reference counter on the connection is not increased by this call. WARNING: When writing libvirt bindings in other languages, do not use this function. Instead, store the connection and the domain object together.
Extract information about a domain. Note that if the connection used to get the domain is limited only a partial set of the information can be extracted.
-
domain:
a domain object
info:
pointer to a virDomainInfo structure allocated by the user
Returns:
0 in case of success and -1 in case of failure.
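A hedged sketch printing the virDomainInfo fields documented above; the domain handle is assumed to have been obtained elsewhere.

  #include <stdio.h>
  #include <libvirt/libvirt.h>

  void show_domain_info(virDomainPtr dom) {
      virDomainInfo info;

      if (virDomainGetInfo(dom, &info) < 0)
          return;

      printf("state:    %d\n", info.state);
      printf("max mem:  %lu KB\n", info.maxMem);
      printf("memory:   %lu KB\n", info.memory);
      printf("vcpus:    %hu\n", info.nrVirtCpu);
      printf("cpu time: %llu ns\n", info.cpuTime);
  }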
Function: virDomainGetMaxMemory
unsigned long virDomainGetMaxMemory (virDomainPtr domain)
-
Retrieve the maximum amount of physical memory allocated to a domain. If domain is NULL, then this gets the amount of memory reserved to Domain0 i.e. the domain where the application runs.
-
domain:
a domain object or NULL
Returns:
the memory size in kilobytes or 0 in case of error.
Provides the maximum number of virtual CPUs supported for the guest VM. If the guest is inactive, this is basically the same as virConnectGetMaxVcpus. If the guest is running this will reflect the maximum number of virtual CPUs the guest was booted with.
-
domain:
pointer to domain object
Returns:
the maximum number of virtual CPUs or -1 in case of error.
int virDomainGetVcpus (virDomainPtr domain, virVcpuInfoPtr info, int maxinfo, unsigned char * cpumaps, int maplen)
-
Extract information about virtual CPUs of domain, store it in info array and also in cpumaps if this pointer isn't NULL.
-
domain:
pointer to domain object, or NULL for Domain0
info:
pointer to an array of virVcpuInfo structures (OUT)
maxinfo:
number of structures in info array
cpumaps:
pointer to an bit map of real CPUs for all vcpus of this domain (in 8-bit bytes) (OUT) If cpumaps is NULL, then no cpumap information is returned by the API. It's assumed there is <maxinfo> cpumap in cpumaps array. The memory allocated to cpumaps must be (maxinfo * maplen) bytes (ie: calloc(maxinfo, maplen)). One cpumap inside cpumaps has the format described in virDomainPinVcpu() API.
maplen:
number of bytes in one cpumap, from 1 up to size of CPU map in underlying virtualization system (Xen...).
Returns:
the number of info filled in case of success, -1 in case of failure.
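A hedged sketch that skips the optional cpumaps output (passing NULL and 0) and sizes the info array from virDomainGetInfo().

  #include <stdio.h>
  #include <stdlib.h>
  #include <libvirt/libvirt.h>

  void show_vcpus(virDomainPtr dom) {
      virDomainInfo dinfo;
      if (virDomainGetInfo(dom, &dinfo) < 0)
          return;

      virVcpuInfoPtr info = calloc(dinfo.nrVirtCpu, sizeof(virVcpuInfo));
      if (info == NULL)
          return;

      /* cpumaps == NULL, maplen == 0: no pinning information requested */
      int n = virDomainGetVcpus(dom, info, dinfo.nrVirtCpu, NULL, 0);
      for (int i = 0; i < n; i++)
          printf("vcpu %u on physical cpu %d, %llu ns used\n",
                 info[i].number, info[i].cpu, info[i].cpuTime);
      free(info);
  }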
Function: virDomainGetXMLDesc
char * virDomainGetXMLDesc (virDomainPtr domain, int flags)
-
Provide an XML description of the domain. The description may be reused later to relaunch the domain with virDomainCreateLinux().
This function returns network interface stats for interfaces attached to the domain. The path parameter is the name of the network interface. Domains may have more than one network interface. To get stats for each you should make multiple calls to this function. Individual fields within the stats structure may be returned as -1, which indicates that the hypervisor does not support that particular statistic.
Migrate the domain object from its current host to the destination host given by dconn (a connection to the destination host). Flags may be one or more of the following: VIR_MIGRATE_LIVE Attempt a live migration. If a hypervisor supports renaming domains during migration, then you may set the dname parameter to the new name (otherwise it keeps the same name). If this is not supported by the hypervisor, dname must be NULL or else you will get an error. Since typically the two hypervisors connect directly to each other in order to perform the migration, you may need to specify a path from the source to the destination. This is the purpose of the uri parameter. If uri is NULL, then libvirt will try to find the best method. Uri may specify the hostname or IP address of the destination host as seen from the source. Or uri may be a URI giving transport, hostname, user, port, etc. in the usual form. Refer to driver documentation for the particular URIs supported. The maximum bandwidth (in Mbps) that will be used to do migration can be specified with the bandwidth parameter. If set to 0, libvirt will choose a suitable default. Some hypervisors do not support this feature and will return an error if bandwidth is not 0. To see which features are supported by the current hypervisor, see virConnectGetCapabilities, /capabilities/host/migration_features. There are many limitations on migration imposed by the underlying technology - for example it may not be possible to migrate between different processors even with the same architecture, or between different types of hypervisor.
-
domain:
a domain object
dconn:
destination host (a connection object)
flags:
flags
dname:
(optional) rename domain to this at destination
uri:
(optional) dest hostname/URI as seen from the source host
bandwidth:
(optional) specify migration bandwidth limit in Mbps
Returns:
the new domain object if the migration was successful, or NULL in case of error. Note that the new domain object exists in the scope of the destination connection (dconn).
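A minimal live-migration sketch, assuming both connections are already open; the destination URI is left to libvirt (uri == NULL) and the default bandwidth (0) is used.

  #include <libvirt/libvirt.h>

  /* Returns the new domain handle on the destination, or NULL on failure. */
  virDomainPtr migrate_live(virDomainPtr dom, virConnectPtr dconn) {
      /* dname == NULL keeps the name, uri == NULL lets libvirt pick the path,
         bandwidth == 0 uses the hypervisor default */
      return virDomainMigrate(dom, dconn, VIR_MIGRATE_LIVE, NULL, NULL, 0);
  }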
Function: virDomainPinVcpu
int virDomainPinVcpu (virDomainPtr domain, unsigned int vcpu, unsigned char * cpumap, int maplen)
-
Dynamically change the real CPUs which can be allocated to a virtual CPU. This function requires privileged access to the hypervisor.
-
domain:
pointer to domain object, or NULL for Domain0
vcpu:
virtual CPU number
cpumap:
pointer to a bit map of real CPUs (in 8-bit bytes) (IN) Each bit set to 1 means that corresponding CPU is usable. Bytes are stored in little-endian order: CPU0-7, 8-15... In each byte, lowest CPU number is least significant bit.
maplen:
number of bytes in cpumap, from 1 up to size of CPU map in underlying virtualization system (Xen...). If maplen < size, missing bytes are set to zero. If maplen > size, failure code is returned.
Returns:
0 in case of success, -1 in case of failure.
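Tying the cpumap macros to this call, a hedged sketch pinning virtual CPU 0 of a domain to physical CPUs 0 and 1; the host CPU count of 4 is illustrative, virNodeGetInfo() would give the real value.

  #include <stdlib.h>
  #include <libvirt/libvirt.h>

  int pin_vcpu0(virDomainPtr dom) {
      int ncpus = 4;                          /* example; query virNodeGetInfo() in practice */
      int maplen = VIR_CPU_MAPLEN(ncpus);
      unsigned char *cpumap = calloc(maplen, 1);
      if (cpumap == NULL)
          return -1;

      VIR_USE_CPU(cpumap, 0);                 /* allow physical CPU 0 */
      VIR_USE_CPU(cpumap, 1);                 /* allow physical CPU 1 */

      int ret = virDomainPinVcpu(dom, 0, cpumap, maplen);
      free(cpumap);
      return ret;
  }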
Function: virDomainReboot
int virDomainReboot (virDomainPtr domain, unsigned int flags)
-
Reboot a domain, the domain object is still usable thereafter but the domain OS is being stopped for a restart. Note that the guest OS may ignore the request.
-
domain:
a domain object
flags:
extra flags for the reboot operation, not used yet
Returns:
0 in case of success and -1 in case of failure.
Function: virDomainRestore
int virDomainRestore (virConnectPtr conn, const char * from)
-
This method will restore a domain saved to disk by virDomainSave().
Resume a suspended domain, the process is restarted from the state where it was frozen by calling virDomainSuspend(). This function may require privileged access
-
domain:
a domain object
Returns:
0 in case of success and -1 in case of failure.
Function: virDomainSave
int virDomainSave (virDomainPtr domain, const char * to)
-
This method will suspend a domain and save its memory contents to a file on disk. After the call, if successful, the domain is not listed as running anymore (this may be a problem). Use virDomainRestore() to restore a domain after saving.
-
domain:
a domain object
to:
path for the output file
Returns:
0 in case of success and -1 in case of failure.
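A hedged save-then-restore sketch; the file path is an arbitrary example and the connection is assumed to already be open.

  #include <libvirt/libvirt.h>

  int save_and_restore(virConnectPtr conn, virDomainPtr dom) {
      const char *image = "/var/tmp/mydomain.save";    /* example path */

      if (virDomainSave(dom, image) < 0)               /* suspends and writes memory to disk */
          return -1;

      return virDomainRestore(conn, image);            /* brings the domain back */
  }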
Function: virDomainSetAutostart
int virDomainSetAutostart (virDomainPtr domain, int autostart)
-
Configure the domain to be automatically started when the host machine boots.
-
domain:
a domain object
autostart:
whether the domain should be automatically started 0 or 1
Returns:
-1 in case of error, 0 in case of success
Function: virDomainSetMaxMemory
int virDomainSetMaxMemory (virDomainPtr domain, unsigned long memory)
-
Dynamically change the maximum amount of physical memory allocated to a domain. If domain is NULL, then this changes the amount of memory reserved to Domain0 i.e. the domain where the application runs. This function requires privileged access to the hypervisor.
-
domain:
a domain object or NULL
memory:
the memory size in kilobytes
Returns:
0 in case of success and -1 in case of failure.
Function: virDomainSetMemory
int virDomainSetMemory (virDomainPtr domain, unsigned long memory)
-
Dynamically change the target amount of physical memory allocated to a domain. If domain is NULL, then this changes the amount of memory reserved to Domain0 i.e. the domain where the application runs. This function may require privileged access to the hypervisor.
number of scheduler parameters (this value should be the same as or less than the returned value nparams of virDomainGetSchedulerType)
Returns:
-1 in case of error, 0 in case of success.
Function: virDomainSetVcpus
int virDomainSetVcpus (virDomainPtr domain, unsigned int nvcpus)
-
Dynamically change the number of virtual CPUs used by the domain. Note that this call may fail if the underlying virtualization hypervisor does not support it or if growing the number is arbitrarily limited. This function requires privileged access to the hypervisor.
Shutdown a domain, the domain object is still usable thereafter but the domain OS is being stopped. Note that the guest OS may ignore the request. TODO: should we add an option for reboot, knowing it may not be doable in the general case ?
Suspends an active domain, the process is frozen without further access to CPU resources and I/O but the memory used by the domain at the hypervisor level will stay allocated. Use virDomainResume() to reactivate the domain. This function may require privileged access.
undefine a domain but does not stop it if it is running
-
domain:
pointer to a defined domain
Returns:
0 in case of success, -1 in case of error
Function: virGetVersion
int virGetVersion (unsigned long * libVer, const char * type, unsigned long * typeVer)
-
Provides two pieces of information back, @libVer is the version of the library while @typeVer will be the version of the hypervisor type @type against which the library was compiled. If @type is NULL, "Xen" is assumed, if @type is unknown or not available, an error code will be returned and @typeVer will be 0.
-
libVer:
return value for the library version (OUT)
type:
the type of connection/driver looked at
typeVer:
return value for the version of the hypervisor (OUT)
Returns:
-1 in case of failure, 0 otherwise, and values for @libVer and @typeVer have the format major * 1,000,000 + minor * 1,000 + release.
Function: virInitialize
int virInitialize (void)
-
Initialize the library. It's better to call this routine at startup in multithreaded applications to avoid potential race when initializing the library.
Destroy the network object. The running instance is shut down if not down already and all resources used by it are given back to the hypervisor. The data structure is freed and should not be used thereafter if the call does not return an error. This function may require privileged access
Provides the connection pointer associated with a network. The reference counter on the connection is not increased by this call. WARNING: When writing libvirt bindings in other languages, do not use this function. Instead, store the connection and the network object together.
Undefine a network but does not stop it if it is running
-
network:
pointer to a defined network
Returns:
0 in case of success, -1 in case of error
Function: virNodeGetCellsFreeMemory
int virNodeGetCellsFreeMemory (virConnectPtr conn, unsigned long long * freeMems, int startCell, int maxCells)
-
This call returns the amount of free memory in one or more NUMA cells. The @freeMems array must be allocated by the caller and will be filled with the amount of free memory in kilobytes for each cell requested, starting with startCell (in freeMems[0]), up to either (startCell + maxCells), or the number of additional cells in the node, whichever is smaller.
-
conn:
pointer to the hypervisor connection
freeMems:
pointer to the array of unsigned long long
startCell:
index of first cell to return freeMems info on.
maxCells:
Maximum number of cells for which freeMems information can be returned.
Returns:
the number of entries filled in freeMems, or -1 in case of error.
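A hedged sketch querying free memory for all NUMA cells; the cell count comes from virNodeGetInfo() and the values are reported in kilobytes as documented above.

  #include <stdio.h>
  #include <stdlib.h>
  #include <libvirt/libvirt.h>

  void show_numa_free(virConnectPtr conn) {
      virNodeInfo ninfo;
      if (virNodeGetInfo(conn, &ninfo) < 0)
          return;

      unsigned long long *freeMems = calloc(ninfo.nodes, sizeof(*freeMems));
      if (freeMems == NULL)
          return;

      int n = virNodeGetCellsFreeMemory(conn, freeMems, 0, ninfo.nodes);
      for (int i = 0; i < n; i++)
          printf("cell %d: %llu KB free\n", i, freeMems[i]);
      free(freeMems);
  }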
Function: virNodeGetFreeMemory
unsigned long long virNodeGetFreeMemory (virConnectPtr conn)
-
provides the free memory available on the Node
-
conn:
pointer to the hypervisor connection
Returns:
the available free memory in kilobytes or 0 in case of error
Create a new storage pool based on its XML description. The pool is not persistent, so its definition will disappear when it is destroyed, or if the host is restarted
Destroy an active storage pool. This will deactivate the pool on the host, but keep any persistent config associated with it. If it has a persistent config it can later be restarted with virStoragePoolCreate(). This does not free the associated virStoragePoolPtr object.
Provides the connection pointer associated with a storage pool. The reference counter on the connection is not increased by this call. WARNING: When writing libvirt bindings in other languages, do not use this function. Instead, store the connection and the pool object together.
int virStoragePoolRefresh (virStoragePoolPtr pool, unsigned int flags)
-
Request that the pool refresh its list of volumes. This may involve communicating with a remote server, and/or initializing new devices at the OS layer
-
pool:
pointer to storage pool
flags:
flags to control refresh behaviour (currently unused, use 0)
Returns:
0 if the volume list was refreshed, -1 on failure
Function: virStoragePoolSetAutostart
int virStoragePoolSetAutostart (virStoragePoolPtr pool, int autostart)
-
Provides the connection pointer associated with a storage volume. The reference counter on the connection is not increased by this call. WARNING: When writing libvirt bindings in other languages, do not use this function. Instead, store the connection and the volume object together.
Fetch the storage volume path. Depending on the pool configuration this is either persistent across hosts, or dynamically assigned at pool startup. Consult pool documentation for information on getting the persistent naming
-
vol:
pointer to storage volume
Returns:
the storage volume path, or NULL on error
Function: virStorageVolGetXMLDesc
char * virStorageVolGetXMLDesc (virStorageVolPtr vol, unsigned int flags)
-
Fetch an XML document describing all aspects of the storage volume
Structure virError struct _virError {
- int code : The error code, a virErrorNumber
- int domain : What part of the library raised this error
- char * message : human-readable informative error message
- virErrorLevel level : how consequent is the error
- virConnectPtr conn : the connection if available
- virDomainPtr dom : the domain if available
- char * str1 : extra string information
- char * str2 : extra string information
- char * str3 : extra string information
- int int1 : extra number information
- int int2 : extra number information
- virNetworkPtr net : the network if available
-}
Provide a pointer to the last error caught on that connection. Simpler but may not be suitable for multithreaded accesses, in which case use virConnCopyLastError()
-
conn:
pointer to the hypervisor connection
Returns:
a pointer to the last error or NULL if none occurred.
Provide a pointer to the last error caught at the library level. Simpler but may not be suitable for multithreaded accesses, in which case use virCopyLastError()
-
Returns:
a pointer to the last error or NULL if none occurred.
Set a library global error handling function. If @handler is NULL, it will reset to default printing on stderr. The errors raised there are those which no handler at the connection level could catch.
-
userData:
pointer to the user data provided in the handler callback
handler:
the function to get called in case of error or NULL
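A hedged sketch of installing a global handler with virSetErrorFunc() and inspecting the last connection error; the handler follows the virErrorFunc callback shape and the message formatting is purely illustrative.

  #include <stdio.h>
  #include <libvirt/libvirt.h>
  #include <libvirt/virterror.h>

  /* Matches the virErrorFunc callback type. */
  static void my_error_handler(void *userData, virErrorPtr error) {
      (void)userData;
      fprintf(stderr, "libvirt error %d: %s\n", error->code,
              error->message ? error->message : "(no message)");
  }

  void setup_error_handling(virConnectPtr conn) {
      virSetErrorFunc(NULL, my_error_handler);      /* NULL userData, custom handler */

      /* ... later, after a failed call on conn ... */
      virErrorPtr last = virConnGetLastError(conn);
      if (last != NULL && last->code != 0)
          fprintf(stderr, "last error on connection: %s\n", last->message);
  }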
Network functions are not hypervisor-specific. They require the libvirtd
daemon to be running. Most network functions first appeared in libvirt 0.2.0.
Function                          Since
virConnectNumOfNetworks           0.2.0
virConnectListNetworks            0.2.0
virConnectNumOfDefinedNetworks    0.2.0
virConnectListDefinedNetworks     0.2.0
virNetworkCreate                  0.2.0
virNetworkCreateXML               0.2.0
virNetworkDefineXML               0.2.0
virNetworkDestroy                 0.2.0
virNetworkFree                    0.2.0
virNetworkGetAutostart            0.2.1
virNetworkGetConnect              0.3.0
virNetworkGetBridgeName           0.2.0
virNetworkGetName                 0.2.0
virNetworkGetUUID                 0.2.0
virNetworkGetUUIDString           0.2.0
virNetworkGetXMLDesc              0.2.0
virNetworkLookupByName            0.2.0
virNetworkLookupByUUID            0.2.0
virNetworkLookupByUUIDString      0.2.0
virNetworkSetAutostart            0.2.1
virNetworkUndefine                0.2.0
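As a hedged illustration of the calls listed above, the following sketch looks up the network named "default" (an example name) and prints its bridge device.

  #include <stdio.h>
  #include <stdlib.h>
  #include <libvirt/libvirt.h>

  void show_default_network(virConnectPtr conn) {
      virNetworkPtr net = virNetworkLookupByName(conn, "default");  /* example name */
      if (net == NULL)
          return;

      char *bridge = virNetworkGetBridgeName(net);
      if (bridge != NULL) {
          printf("network 'default' uses bridge %s\n", bridge);
          free(bridge);
      }
      virNetworkFree(net);
  }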
diff --git a/docs/index.html b/docs/index.html
index 348fe59168..607408ea00 100644
--- a/docs/index.html
+++ b/docs/index.html
@@ -1,142 +1,118 @@
libvirt: The virtualization API
what is libvirt?
Libvirt is a C toolkit to interact with the virtualization capabilities
of recent versions of Linux (and other OSes). It is free software available
under the GNU Lesser General Public License. Virtualization of the Linux
Operating System means the ability to run multiple instances of Operating
Systems concurrently on a single hardware system where the basic resources
are driven by a Linux (or Solaris) instance. The library aims at providing
a long term stable C API initially for Xen paravirtualization but it can
also integrate with other virtualization mechanisms. It currently also
supports QEMU, KVM and OpenVZ.
Architecture
Libvirt is a C toolkit to interact with the virtualization capabilities of
recent versions of Linux (and other OSes), but libvirt won't try to provide
all possible interfaces for interacting with the virtualization features.

To avoid ambiguity about the terms used, here are the definitions for
some of the specific concepts used in libvirt documentation:

  a node is a single physical machine
  a hypervisor is a layer of software allowing one to
  virtualize a node in a set of virtual machines with possibly different
  configurations than the node itself
  a domain is an instance of an operating system running
  on a virtualized machine provided by the hypervisor

Now we can define the goal of libvirt: to provide the lowest possible
generic and stable layer to manage domains on a node.

This implies the following:

  the API should not be targeted to a single virtualization environment
  though Xen is the current default, which also means that some very
  specific capabilities which are not generic enough may not be provided as
  libvirt APIs
  the API should allow to do efficiently and cleanly all the operations
  needed to manage domains on a node
  the API will not try to provide high-level multi-node management
  features like load balancing, though they could be implemented on top of
  libvirt
  stability of the API is a big concern, libvirt should isolate
  applications from the frequent changes expected at the lower level of the
  virtualization framework

So libvirt should be a building block for higher level management tools
and for applications focusing on virtualization of a single node (the only
exception being domain migration between node capabilities which may need to
be added at the libvirt level). Where possible libvirt should be extendable
to be able to provide the same API for remote nodes, however this is not the
case at the moment, the code currently handles only local node accesses
(extension for remote access support is being worked on, see the mailing
list discussions about it).
Here is the list of official releases, however since it is early on in the
development of libvirt, it is preferable when possible to just use the CVS
version or snapshot, contact the mailing list and check the ChangeLog to
gauge progress.
0.4.2: Apr 8 2008
-
-
New features: memory operation for QEmu/KVM driver (Cole Robinson),
- new routed networking schemas (Mads Olesen)
Bug fixes: pointer errors in qemu (Jim Meyering), iSCSI login fix
- (Chris Lalancette), well formedness error in test driver capabilities
- (Cole Robinson), fixes cleanup code when daemon exits (Daniel Berrange),
- CD Rom change on live QEmu/KVM domains (Cole Robinson), setting scheduler
- parameter is forbidden for read-only (Saori Fukuta), fixes for TAP
- devices (Daniel Berrange), assorted storage driver fixes (Daniel
- Berrange), Makefile fixes (Jim Meyering), Xen-3.2 hypercall fix,
- fix iptables rules to avoid blocking traffic within virtual network
- (Daniel Berrange), XML output fix for directory pools (Daniel Berrange),
- remove dangling domain/net/conn pointers from error data, do not
- ask polkit auth when root (Daniel Berrange), handling of fork and
- pipe errors when starting the daemon (Richard Jones)
-
Improvements: better validation of MAC addresses (Jim Meyering and
- Hiroyuki Kaguchi),
- virsh vcpupin error report (Shigeki Sakamoto), keep boot tag on
- HVM domains (Cole Robinson), virsh non-root should not be limited to read
- only anymore (Daniel Berrange), switch to polkit-auth from polkit-grant
- (Daniel Berrange), better handling of missing SElinux data (Daniel
- Berrange and Jim Meyering), cleanup of the connection opening logic
- (Daniel Berrange), first bits of Linux Containers support (Dave Leskovec),
- scheduler API support via xend (Saori Fukuta), improvement of the
- testing framework and first tests (Jim Meyering), missing error
- messages from virsh parameters validation (Shigeki Sakamoto),
- improve support of older iscsiadm command (Chris Lalancette),
- move linux container support in the daemon (Dan Berrange), older
- awk implementation support (Mike Gerdts), NUMA support in test
- driver (Cole Robinson), xen and hvm added to test driver capabilities
- (Cole Robinson)
-
Code cleanup: remove unused getopt header (Jim Meyering), mark more
- strings as translatable (Guido Günther and Jim Meyering), convert
- error strings to something meaningful and translatable (Jim Meyering),
- Linux Containers code cleanup, last error initializer (Guido Günther)
-
-
0.4.1: Mar 3 2008
-
-
New features: build on MacOSX (Richard Jones), storage management
- (Daniel Berrange), Xenner - Xen on KVM - support (Daniel Berrange)
-
Documentation: Fix of various typos (Atsushi SAKAI), memory and
- vcpu settings details (Richard Jones), ethernet bridging typo
- (Maxwell Bottiger), add storage APIs documentation (Daniel Berrange)
-
Bug fixes: OpenVZ code compilation (Mikhail Pokidko), crash in
- policykit auth handling (Daniel Berrange), large config files
- (Daniel Berrange), cpumap hypercall size (Saori Fukuta), crash
- in remote auth (Daniel Berrange), ssh args error (Daniel Berrange),
- preserve vif order from config files (Hiroyuki Kaguchi), invalid
- pointer access (Jim Meyering), virDomainGetXMLDesc flag handling,
- device name conversion on stats (Daniel Berrange), double mutex lock
- (Daniel Berrange), config file reading crashes (Guido Guenther),
- xenUnifiedDomainSuspend bug (Marcus Meissner), do not crash if
- /sys/hypervisor/capabilities is missing (Mark McLoughlin),
- virHashRemoveSet bug (Hiroyuki Kaguchi), close-on-exec flag for
- qemud signal pipe (Daniel Berrange), double free in OpenVZ
- (Anton Protopopov), handle mac without addresses (Shigeki Sakamoto),
- MAC addresses checks (Shigeki Sakamoto and Richard Jones),
- allow to read non-seekable files (Jim Meyering)
-
Improvements: Windows build (Richard Jones), KVM/QEmu shutdown
- (Guido Guenther), catch virExec output on debug (Mark McLoughlin),
- integration of iptables and lokkit (Mark McLoughlin), keymap
- parameter for VNC servers (Daniel Hokka Zakrisson), enable debug
- by default using VIR_DEBUG (Daniel Berrange), xen 3.2 fixes
- (Daniel Berrange), Python bindings for VCPU and scheduling
- (Daniel Berrange), framework for automatic code syntax checks
- (Jim Meyering), allow kernel+initrd setup in Xen PV (Daniel Berrange),
- allow change of Disk/NIC of an inactive domains (Shigeki Sakamoto),
- virsh commands to manipulate and create storage(Daniel Berrange),
update use of PolicyKit APIs, better detection of default hypervisor,
- block device statistics for QEmu/KVM (Richard Jones), various improvements
- for Xenner (Daniel Berrange)
-
Code cleanups: avoid warnings (Daniel Berrange), virRun helper
- function (Dan Berrange), iptable code fixes (Mark McLoughlin),
- static and const cleanups (Jim Meyering), malloc and python cleanups
- (Jim Meyering), xstrtol_ull and xstrtol_ll functions (Daniel Berrange),
- remove no-op networking from OpenVZ (Daniel Berrange), python generator
- cleanups (Daniel Berrange), cleanup ref counting (Daniel Berrange),
- remove uninitialized warnings (Jim Meyering), cleanup configure
- for RHEL4 (Daniel Berrange), CR/LF cleanups (Richard Jones),
- various automatic code check and associated cleanups (Jim Meyering),
- various memory leaks (Jim Meyering), fix compilation when building
- without Xen (Guido Guenther), mark translatables strings (Jim Meyering),
- use virBufferAddLit for constant strings (Jim Meyering), fix
- make distcheck (Jim Meyering), return values for python bindings (Cole
- Robinson), trailing blanks fixes (Jim Meyering), gcc-4.3.0 fixes
- (Mark McLoughlin), use safe read and write routines (Jim Meyering),
- refactoring of code dealing with hypervisor capabilities (Daniel
- Berrange), qemudReportError to use virErrorMsg (Cole Robinson),
intermediate library and Makefiles for compiling static and coverage
- rule support (Jim Meyering), cleanup of various leaks (Jim Meyering)
-
-
-
0.4.0: Dec 18 2007
-
-
New features: Compilation on Windows cygwin/mingw (Richard Jones),
- Ruby bindings (David Lutterkort), SASL based authentication for
- libvirt remote support (Daniel Berrange), PolicyKit authentication
- (Daniel Berrange)
-
Documentation: example files for QEMU and libvirtd configurations
- (Daniel Berrange), english cleanups (Jim Paris), CIM and OpenVZ
- references, document <shareable/>, daemon startup when using
- QEMU/KVM, document HV support for new NUMA calls (Richard Jones),
- various english fixes (Bruce Montague), OCaml docs links (Richard Jones),
- describe the various bindings add Ruby link, Windows support page
- (Richard Jones), authentication documentation updates (Daniel Berrange)
-
-
Bug fixes: NUMA topology error handling (Beth Kon), NUMA topology
- cells without CPU (Beth Kon), XML to/from XM bridge config (Daniel
- Berrange), XM processing of vnc parameters (Daniel Berrange), Reset
- migration source after failure (Jim Paris), negative integer in config
- (Tatsuro Enokura), zero terminating string buffer, detect integer
- overflow (Jim Meyering), QEmu command line ending fixes (Daniel Berrange),
- recursion problem in the daemon (Daniel Berrange), HVM domain with CDRom
- (Masayuki Sunou), off by one error in NUMA cpu count (Beth Kon),
- avoid xend errors when adding disks (Masayuki Sunou), compile error
- (Chris Lalancette), transposed fwrite args (Jim Meyering), compile
- without xen and on solaris (Jim Paris), parsing of interface names
- (Richard Jones), overflow for starts on 32bits (Daniel Berrange),
- fix problems in error reporting (Saori Fukuta), wrong call to
- brSetForwardDelay changed to brSetEnableSTP (Richard Jones),
- allow shareable disk in old Xen, fix wrong certificate file (Jim
- Meyering), avoid some startup error when non-root, off-by-1 buffer
- NULL termination (Daniel Berrange), various string allocation fixes
- (Daniel Berrange), avoid problems with vnetXXX interfaces in domain dumps
- (Daniel Berrange), build fixes for RHEL (Daniel Berrange), virsh prompt
should not depend on uid (Richard Jones), fix escaping of '<' (Richard
- Jones), fix detach-disk on Xen tap devices (Saori Fukuta), CPU
- parameter setting in XM config (Saori Fukuta), credential handling
- fixes (Daniel Berrange), fix compatibility with Xen 3.2.0 (Daniel
- Berrange)
-
-
Improvements: /etc/libvirt/qemu.conf configuration for QEMU driver
- (Daniel Berrange), NUMA cpu pinning in config files (DV and Saori Fukuta),
- CDRom media change in KVM/QEMU (Daniel Berrange), tests for
- <shareable/> in configs, pinning inactive domains for Xen 3.0.3
- (Saori Fukuta), use gnulib for portability enhancement (Jim Meyering),
- --without-libvirtd config option (Richard Jones), Python bindings for
- NUMA, add extra utility functions to buffer (Richard Jones),
- separate qparams module for handling query parameters (Richard Jones)
-
-
Code cleanups: remove virDomainRestart from API as it was never used
- (Richard Jones), constify params for attach/detach APIs (Daniel Berrange),
- gcc printf attribute checkings (Jim Meyering), refactoring of device
- parsing code and shell escaping (Daniel Berrange), virsh schedinfo
- parameters validation (Masayuki Sunou), Avoid risk of format string abuse
- (Jim Meyering), integer parsing cleanups (Jim Meyering), build out
- of the source tree (Jim Meyering), URI parsing refactoring (Richard
- Jones), failed strdup/malloc handling (Jim Meyering), Make "make
distcheck" work (Jim Meyering), improve xen internal error reports
- (Richard Jones), cleanup of the daemon remote code (Daniel Berrange),
- rename error VIR_FROM_LINUX to VIR_FROM_STATS_LINUX (Richard Jones),
- don't compile the proxy if without Xen (Richard Jones), fix paths when
- configuring for /usr prefix, improve error reporting code (Jim Meyering),
- detect heap allocation failure (Jim Meyering), disable xen sexpr parsing
- code if Xen is disabled (Daniel Berrange), cleanup of the GetType
- entry point for Xen drivers, move some QEmu path handling to generic
- module (Daniel Berrange), many code cleanups related to the Windows
- port (Richard Jones), disable the proxy if using PolicyKit, readline
- availability detection, test libvirtd's config-processing code (Jim
- Meyering), use a variable name as sizeof argument (Jim Meyering)
-
-
-
-
0.3.3: Sep 30 2007
-
-
New features: Avahi mDNS daemon export (Daniel Berrange),
- NUMA support (Beth Kan)
Bug fixes: memory corruption on large dumps (Masayuki Sunou), fix
- virsh vncdisplay command exit (Masayuki Sunou), Fix network stats
- TX/RX result (Richard Jones), warning on Xen 3.0.3 (Richard Jones),
- missing buffer check in virDomainXMLDevID (Hugh Brock), avoid zombies
- when using remote (Daniel Berrange), xend connection error message
- (Richard Jones), avoid ssh tty prompt (Daniel Berrange), username
- handling for remote URIs (Fabian Deutsch), fix potential crash
- on multiple input XML tags (Daniel Berrange), Solaris Xen hypercalls
- fixup (Mark Johnson)
-
Improvements: OpenVZ support (Shuveb Hussain and Anoop Cyriac),
CD-Rom reload on Xen (Hugh Brock), PXE boot for QEmu/KVM (Daniel
- Berrange), QEmu socket permissions customization (Daniel Berrange),
- more QEmu support (Richard Jones), better path detection for qemu and
- dnsmasq (Richard Jones), QEmu flags are per-Domain (Daniel Berrange),
- virsh freecell command, Solaris portability fixes (Mark Johnson),
- default bootloader support (Daniel Berrange), new virNodeGetFreeMemory
- API, vncpasswd extraction in configuration files if secure (Mark
- Johnson and Daniel Berrange), Python bindings for block and interface
- statistics
New features: KVM migration and save/restore (Jim Paris),
- added API for migration (Richard Jones), added APIs for block device and
- interface statistic (Richard Jones).
-
Documentation: examples for XML network APIs,
- fix typo and schedinfo synopsis in man page (Atsushi SAKAI),
- hypervisor support page update (Richard Jones).
-
Bug fixes: remove a couple of leaks in QEmu/KVM backend (Daniel Berrange),
fix GnuTLS 1.0 compatibility (Richard Jones), --config/-f option
mistake for libvirtd (Richard Jones), remove leak in QEmu backend
(Jim Paris), fix some QEmu communication bugs (Jim Paris), UUID
lookup through proxy fix, setvcpus checking bugs (with Atsushi SAKAI),
int checking in virsh parameters (with Masayuki Sunou), deny devices
attach/detach for < Xen 3.0.4 (Masayuki Sunou), XenStore query
memory leak (Masayuki Sunou), virsh schedinfo cleanup (Saori Fukuta).
-
Improvements: virsh new ttyconsole command, networking API implementation
for test driver (Daniel Berrange), qemu/kvm feature reporting of
ACPI/APIC (David Lutterkort), checking of QEmu architectures (Daniel
Berrange), improve devices XML errors reporting (Masayuki Sunou),
speedup of domain queries on Xen (Daniel Berrange), augment XML dumps
with interface devices names (Richard Jones), internal API to query
drivers for features (Richard Jones).
-
-
Cleanups: Improve virNodeGetInfo implementation (Daniel Berrange),
general UUID code cleanup (Daniel Berrange), fix API generator
file selection.
-
-
-
0.3.1: Jul 24 2007
-
-
Documentation: index to remote page, script to test certificates,
- IPv6 remote support docs (Daniel Berrange), document
- VIRSH_DEFAULT_CONNECT_URI in virsh man page (David Lutterkort),
- Relax-NG early grammar for the network XML (David Lutterkort)
-
Bug fixes: leaks in disk XML parsing (Masayuki Sunou), hypervisor
- alignment call problems on PPC64 (Christian Ehrhardt), dead client
- registration in daemon event loop (Daniel Berrange), double free
- in error handling (Daniel Berrange), close on exec for log file
- descriptors in the daemon (Daniel Berrange), avoid caching problem
- in remote daemon (Daniel Berrange), avoid crash after QEmu domain
- failure (Daniel Berrange)
-
Improvements: checks of x509 certificates and keys (Daniel Berrange),
- error reports in the daemon (Daniel Berrange), checking of Ethernet MAC
- addresses in XML configs (Masayuki Sunou), support for a new
- clock switch between UTC and localtime (Daniel Berrange), early
- version of OpenVZ support (Shuveb Hussain), support for input devices
- on PS/2 and USB buses (Daniel Berrange), more tests especially
- the QEmu support (Daniel Berrange), range check in credit scheduler
- (with Saori Fukuta and Atsushi Sakai), add support for listen VNC
parameter in QEmu and fix command line arg (Daniel Berrange)
-
Cleanups: debug tracing (Richard Jones), removal of --with-qemud-pid-file
- (Richard Jones), remove unused virDeviceMode, new util module for
- code shared between drivers (Shuveb Hussain), xen header location
- detection (Richard Jones)
-
-
0.3.0: Jul 9 2007
-
-
Secure Remote support (Richard Jones).
- See the remote page
- of the documentation
-
Documentation: remote support (Richard Jones), description of
- the URI connection strings (Richard Jones), update of virsh man
- page, matrix of libvirt API/hypervisor support with version
- information (Richard Jones)
-
Bug fixes: examples Makefile.am generation (Richard Jones),
- SetMem fix (Mark Johnson), URI handling and ordering of
- drivers (Daniel Berrange), fix virsh help without hypervisor (Richard
- Jones), id marshalling fix (Daniel Berrange), fix virConnectGetMaxVcpus
- on remote (Richard Jones), avoid a realloc leak (Jim Meyering), scheduler
- parameters handling for Xen (Richard Jones), various early remote
- bug fixes (Richard Jones), remove virsh leaks of domains references
- (Masayuki Sunou), configCache refill bug (Richard Jones), fix
- XML serialization bugs
-
Improvements: QEmu switch to XDR-based protocol (Dan Berrange),
- device attach/detach commands (Masayuki Sunou), OCaml bindings
- (Richard Jones), new entry points virDomainGetConnect and
- virNetworkGetConnect useful for bindings (Richard Jones),
- reunitifaction of remote and qemu daemon under a single libvirtd
- with a config file (Daniel Berrange)
-
Cleanups: parsing of connection URIs (Richard Jones), messages
- from virsh (Saori Fukuta), Coverage files (Daniel Berrange),
- Solaris fixes (Mark Johnson), avoid [r]index calls (Richard Jones),
- release information in Xen backend, virsh cpupin command cleanups
(Masayuki Sunou), xen:/// support as standard Xen URI (Richard Jones and
- Daniel Berrange), improve driver selection/decline mechanism (Richard
- Jones), error reporting on XML dump (Richard Jones), Remove unused
- virDomainKernel structure (Richard Jones), daemon event loop event
- handling (Daniel Berrange), various unifications cleanup in the daemon
- merging (Daniel Berrange), internal file and timer monitoring API
(Daniel Berrange), remove libsysfs dependency, call brctl program
- directly (Daniel Berrange), virBuffer functions cleanups (Richard Jones),
- make init script LSB compliant, error handling on lookup functions
- (Richard Jones), remove internal virGetDomainByID (Richard Jones),
- revamp of xen subdrivers interfaces (Richard Jones)
-
Localization updates
-
-
0.2.3: Jun 8 2007
-
-
Documentation: documentation for upcoming remote access (Richard Jones),
- virConnectNumOfDefinedDomains doc (Jan Michael), virsh help messages
- for dumpxml and net-dumpxml (Chris Wright),
-
Bug fixes: RelaxNG schemas regexp fix (Robin Green), RelaxNG arch bug
- (Mark McLoughlin), large buffers bug fixes (Shigeki Sakamoto), error
- on out of memory condition (Shigeki Sakamoto), virshStrdup fix, non-root
- driver when using Xen bug (Richard Jones), use --strict-order when
- running dnsmasq (Daniel Berrange), virbr0 weirdness on restart (Mark
- McLoughlin), keep connection error messages (Richard Jones), increase
QEmu read buffer on help (Daniel Berrange), rpm dependency on
- dnsmasq (Daniel Berrange), fix XML boot device syntax (Daniel Berrange),
- QEmu memory bug (Daniel Berrange), memory leak fix (Masayuki Sunou),
- fix compiler flags (Richard Jones), remove type ioemu on recent Xen
- HVM for paravirt drivers (Saori Fukuta), uninitialized string bug
- (Masayuki Sunou), allow init even if the daemon is not running,
- XML to config fix (Daniel Berrange)
-
Improvements: add a special error class for the test module (Richard
- Jones), virConnectGetCapabilities on proxy (Richard Jones), allow
- network driver to decline usage (Richard Jones), extend error messages
- for upcoming remote access (Richard Jones), on_reboot support for QEmu
- (Daniel Berrange), save daemon output in a log file (Daniel Berrange),
- xenXMDomainDefineXML can override guest config (Hugh Brock),
- add attach-device and detach-device commands to virsh (Masayuki Sunou
- and Mark McLoughlin and Richard Jones), make virGetVersion case
- insensitive and Python bindings (Richard Jones), new scheduler API
- (Atsushi SAKAI), localizations updates, add logging option for virsh
- (Nobuhiro Itou), allow arguments to be passed to bootloader (Hugh Brock),
- increase the test suite (Daniel Berrange and Hugh Brock)
-
Cleanups: Remove VIR_DRV_OPEN_QUIET (Richard Jones), disable xm_internal.c
- for Xen > 3.0.3 (Daniel Berrange), unused fields in _virDomain (Richard
- Jones), export __virGetDomain and __virGetNetwork for libvirtd only
- (Richard Jones), ignore old VNC config for HVM on recent Xen (Daniel
- Berrange), various code cleanups, -Werror cleanup (Hugh Brock)
-
-
0.2.2: Apr 17 2007
-
-
Documentation: fix errors due to Amaya (with Simon Hernandez),
- virsh uses kB not bytes (Atsushi SAKAI), add command line help to
- qemud (Richard Jones), xenUnifiedRegister docs (Atsushi SAKAI),
strings typos (Nikolay Sivov), localization problem raised by
- Thomas Canniot
-
Bug fixes: virsh memory values test (Masayuki Sunou), operations without
- libvirt_qemud (Atsushi SAKAI), fix spec file (Florian La Roche, Jeremy
- Katz, Michael Schwendt),
- direct hypervisor call (Atsushi SAKAI), buffer overflow on qemu
- networking command (Daniel Berrange), buffer overflow in quemud (Daniel
- Berrange), virsh vcpupin bug (Masayuki Sunou), host PAE detections
and structures size (Richard Jones), Xen PAE flag handling (Daniel
- Berrange), bridged config configuration (Daniel Berrange), erroneous
- XEN_V2_OP_SETMAXMEM value (Masayuki Sunou), memory free error (Mark
- McLoughlin), set VIR_CONNECT_RO on read-only connections (S.Sakamoto),
- avoid memory explosion bug (Daniel Berrange), integer overflow
- for qemu CPU time (Daniel Berrange), QEMU binary path check (Daniel
- Berrange)
-
Cleanups: remove some global variables (Jim Meyering), printf-style
- functions checks (Jim Meyering), better virsh error messages, increase
- compiler checkings and security (Daniel Berrange), virBufferGrow usage
- and docs, use calloc instead of malloc/memset, replace all sprintf by
- snprintf, avoid configure clobbering user's CTAGS (Jim Meyering),
- signal handler error cleanup (Richard Jones), iptables internal code
cleanup (Mark McLoughlin), unified Xen driver (Richard Jones),
- cleanup XPath libxml2 calls, IPTables rules tightening (Daniel
- Berrange),
-
Improvements: more regression tests on XML (Daniel Berrange), Python
- bindings now generate exception in error cases (Richard Jones),
- Python bindings for vir*GetAutoStart (Daniel Berrange),
- handling of CD-Rom device without device name (Nobuhiro Itou),
- fix hypervisor call to work with Xen 3.0.5 (Daniel Berrange),
- DomainGetOSType for inactive domains (Daniel Berrange), multiple boot
- devices for HVM (Daniel Berrange),
-
-
-
0.2.1: Mar 16 2007
-
-
Various internal cleanups (Richard Jones,Daniel Berrange,Mark McLoughlin)
-
Bug fixes: libvirt_qemud daemon path (Daniel Berrange), libvirt
- config directory (Daniel Berrange and Mark McLoughlin), memory leak
- in qemud (Mark), various fixes on network support (Mark), avoid Xen
- domain zombies on device hotplug errors (Daniel Berrange), various
- fixes on qemud (Mark), args parsing (Richard Jones), virsh -t argument
- (Saori Fukuta), avoid virsh crash on TAB key (Daniel Berrange), detect
- xend operation failures (Kazuki Mizushima), don't listen on null socket
- (Rich Jones), read-only socket cleanup (Rich Jones), use of vnc port 5900
- (Nobuhiro Itou), assorted networking fixes (Daniel Berrange), shutoff and
- shutdown mismatches (Kazuki Mizushima), unlimited memory handling
- (Atsushi SAKAI), python binding fixes (Tatsuro Enokura)
-
Build and portability fixes: IA64 fixes (Atsushi SAKAI), dependencies
- and build (Daniel Berrange), fix xend port detection (Daniel
Berrange), compile time warnings (Mark), avoid const related
- compiler warnings (Daniel Berrange), automated builds (Daniel
- Berrange), pointer/int mismatch (Richard Jones), configure time
- selection of drivers, libvirt spec hacking (Daniel Berrange)
-
Add support for network autostart and init scripts (Mark McLoughlin)
-
New API virConnectGetCapabilities() to detect the virtualization
- capabilities of a host (Richard Jones)
-
Minor improvements: qemud signal handling (Mark), don't shutdown or reboot
- domain0 (Kazuki Mizushima), QEmu version autodetection (Daniel Berrange),
- network UUIDs (Mark), speed up UUID domain lookups (Tatsuro Enokura and
- Daniel Berrange), support for paused QEmu CPU (Daniel Berrange), keymap
- VNC attribute support (Takahashi Tomohiro and Daniel Berrange), maximum
number of virtual CPU (Masayuki Sunou), virsh --readonly option (Rich
- Jones), python bindings for new functions (Daniel Berrange)
-
Documentation updates especially on the XML formats
-
-
-
0.2.0: Feb 14 2007
-
-
Various internal cleanups (Mark McLoughlin, Richard Jones,
- Daniel Berrange, Karel Zak)
-
Bug fixes: avoid a crash in connect (Daniel Berrange), virsh args
- parsing (Richard Jones)
-
Add support for QEmu and KVM virtualization (Daniel Berrange)
-
Add support for network configuration (Mark McLoughlin)
-
Minor improvements: regression testing (Daniel Berrange),
- localization string updates
-
-
-
0.1.11: Jan 22 2007
-
-
Finish XML <-> XM config files support
-
Remove memory leak when freeing virConf objects
-
Finishing inactive domain support (Daniel Berrange)
-
Added a Relax-NG schemas to check XML instances
-
-
-
0.1.10: Dec 20 2006
-
-
more localizations
-
bug fixes: VCPU info breakages on xen 3.0.3, xenDaemonListDomains buffer overflow (Daniel Berrange), reference count bug when creating Xen domains (Daniel Berrange).
-
improvements: support graphic framebuffer for Xen paravirt (Daniel Berrange), VNC listen IP range support (Daniel Berrange), support for default Xen config files and inactive domains of 3.0.4 (Daniel Berrange).
-
-
-
0.1.9: Nov 29 2006
-
-
python bindings: release interpreter lock when calling C (Daniel Berrange)
-
don't raise HTTP error when looking up information for a domain
-
some refactoring to use the driver for all entry points
-
better error reporting (Daniel Berrange)
-
fix OS reporting when running as non-root
-
provide XML parsing errors
-
extension of the test framework (Daniel Berrange)
-
fix the reconnect regression test
-
python bindings: Domain instances now link to the Connect to avoid garbage collection and disconnect
-
separate the notion of maximum memory and current use at the XML level
-
Fix a memory leak (Daniel Berrange)
-
add support for shareable drives
-
add support for non-bridge style networking configs for guests (Daniel Berrange)
-
python bindings: fix unsigned long marshalling (Daniel Berrange)
-
new config APIs virConfNew() and virConfSetValue() to build configs from scratch
-
hot plug device support based on Michel Ponceau patch
-
added support for inactive domains, new APIs, various associated cleanup (Daniel Berrange)
-
special device model for HVM guests (Daniel Berrange)
-
add API to dump core of domains (but requires a patched xend)
-
pygrub bootloader information take over <os> information
-
updated the localization strings
-
-
0.1.8: Oct 16 2006
-
-
Fix bug for systems with page size != 4k
-
vcpu number initialization (Philippe Berthault)
-
don't label crashed domains as shut off (Peter Vetere)
-
fix virsh man page (Noriko Mizumoto)
-
blktapdd support for alternate drivers like blktap (Daniel Berrange)
-
memory leak fixes (xend interface and XML parsing) (Daniel Berrange)
-
compile fix
-
mlock/munlock size fixes (Daniel Berrange)
-
improve error reporting
-
-
0.1.7: Sep 29 2006
-
-
fix a memory bug on getting vcpu information from xend (Daniel Berrange)
-
fix another problem in the hypercalls change in Xen changeset
- 86d26e6ec89b when getting domain information (Daniel Berrange)
-
-
0.1.6: Sep 22 2006
-
-
Support for localization of strings using gettext (Daniel Berrange)
-
Support for new Xen-3.0.3 cdrom and disk configuration (Daniel Berrange)
-
Support for setting VNC port when creating domains with new
- xend config files (Daniel Berrange)
-
Fix bug when running against xen-3.0.2 hypercalls (Jim Fehlig)
-
Fix reconnection problem when talking directly to http xend
-
-
0.1.5: Sep 5 2006
-
-
Support for new hypercalls change in Xen changeset 86d26e6ec89b
-
bug fixes: virParseUUID() was wrong, networking for paravirt guests
- (Daniel Berrange), virsh on non-existent domains (Daniel Berrange),
- string cast bug when handling error in python (Pete Vetere), HTTP
- 500 xend error code handling (Pete Vetere and Daniel Berrange)
-
improvements: test suite for SEXPR <-> XML format conversions (Daniel
- Berrange), virsh output regression suite (Daniel Berrange), new environ
- variable VIRSH_DEFAULT_CONNECT_URI for the default URI when connecting
- (Daniel Berrange), graphical console support for paravirt guests
- (Jeremy Katz), parsing of simple Xen config files (with Daniel Berrange),
- early work on defined (not running) domains (Daniel Berrange),
virsh output improvement (Daniel Berrange)
-
-
-
0.1.4: Aug 16 2006
-
-
bug fixes: spec file fix (Mark McLoughlin), error report problem (with
- Hugh Brock), long integer in Python bindings (with Daniel Berrange), XML
generation bug for CDRom (Daniel Berrange), bug when using number() XPath
- function (Mark McLoughlin), fix python detection code, remove duplicate
- initialization errors (Daniel Berrange)
-
improvements: UUID in XML description (Peter Vetere), proxy code
- cleanup, virtual CPU and affinity support + virsh support (Michel
- Ponceau, Philippe Berthault, Daniel Berrange), port and tty information
- for console in XML (Daniel Berrange), added XML dump to driver and proxy
support (Daniel Berrange), extension of boot options with support for
floppy and cdrom (Daniel Berrange), features block in XML to report/ask
PAE, ACPI, APIC for HVM domains (Daniel Berrange), fail side-effect
- operations when using read-only connection, large improvements to test
- driver (Daniel Berrange)
-
documentation: spelling (Daniel Berrange), test driver examples.
-
-
-
0.1.3: Jul 11 2006
-
-
bugfixes: build as non-root, fix xend access when root, handling of
- empty XML elements (Mark McLoughlin), XML serialization and parsing fixes
- (Mark McLoughlin), allow to create domains without disk (Mark
- McLoughlin),
-
improvement: xenDaemonLookupByID from O(n^2) to O(n) (Daniel Berrange),
- support for fully virtualized guest (Jim Fehlig, DV, Mark McLoughlin)
-
documentation: augmented to cover hvm domains
-
-
-
0.1.2: Jul 3 2006
-
-
headers include paths fixup
-
proxy mechanism for unprivileged read-only access by httpu
-
-
-
0.1.1: Jun 21 2006
-
-
building fixes: ncurses fallback (Jim Fehlig), VPATH builds (Daniel P.
- Berrange)
-
driver cleanups: new entry points, cleanup of libvirt.c (with Daniel P.
- Berrange)
-
Cope with API change introduced in Xen changeset 10277
-
new test driver for regression checks (Daniel P. Berrange)
-
improvements: added UUID to XML serialization, buffer usage (Karel
- Zak), --connect argument to virsh (Daniel P. Berrange),
-
bug fixes: uninitialized memory access in error reporting, S-Expr
- parsing (Jim Fehlig, Jeremy Katz), virConnectOpen bug, remove a TODO in
- xs_internal.c
-
documentation: Python examples (David Lutterkort), new Perl binding
- URL, man page update (Karel Zak)
-
-
-
0.1.0: Apr 10 2006
-
-
building fixes: --with-xen-distdir option (Ronald Aigner), out of tree
- build and pkginfo cflag fix (Daniel Berrange)
-
enhancement and fixes of the XML description format (David Lutterkort
- and Jim Fehlig)
-
new APIs: for Node information and Reboot
-
internal code cleanup: refactoring internals into a driver model, more
- error handling, structure sharing, thread safety and ref counting
documentation: updates on architecture, and format, typo fix (Jim
- Meyering)
-
bindings: exception handling in examples (Jim Meyering), perl ones out
- of tree (Daniel Berrange)
-
virsh: more options, create, nodeinfo (Karel Zak), renaming of some
- options (Karel Zak), use stderr only for errors (Karel Zak), man page
- (Andrew Puch)
-
-
-
0.0.6: Feb 28 2006
-
-
add UUID lookup and extract API
-
add error handling APIs both synchronous and asynchronous
-
added minimal hook for error handling at the python level, improved the
- python bindings
-
augment the documentation and tests to cover error handling
-
-
-
0.0.5: Feb 23 2006
-
-
Added XML description parsing, dependency on libxml2, implemented the
- creation API virDomainCreateLinux()
-
new APIs to lookup and name domain by UUID
-
fixed the XML dump when using the Xend access
-
Fixed a few more problems related to the name change
-
Adding regression tests in python and examples in C
-
web site improvement, extended the documentation to cover the XML
- format and Python API
-
Added devhelp help for Gnome/Gtk programmers
-
-
-
0.0.4: Feb 10 2006
-
-
Fix various bugs introduced in the name change
-
-
-
0.0.3: Feb 9 2006
-
-
Switch name from 'libvir' to libvirt
-
Starting infrastructure to add code examples
-
Update of python bindings for completeness
-
-
-
0.0.2: Jan 29 2006
-
-
Update of the documentation, web site redesign (Diana Fong)
-
integration of HTTP xend RPC based on libxend by Anthony Liguori for
- most operations
-
Adding Save and Restore APIs
-
extended the virsh command line tool (Karel Zak)
-
remove xenstore transactions (Anthony Liguori)
-
fix the Python bindings bug when domains and connections were freed
Libvirt is a C toolkit to interact with the virtualization capabilities of
-recent versions of Linux (and other OSes), but libvirt won't try to provide
-all possible interfaces for interacting with the virtualization features.
-
-
To avoid ambiguity about the terms used here, here are the definitions of
some of the specific concepts used in libvirt documentation:
-
-
a node is a single physical machine

a hypervisor is a layer of software that allows a node to be
virtualized into a set of virtual machines, possibly with different
configurations than the node itself

a domain is an instance of an operating system running
on a virtualized machine provided by the hypervisor

-
Now we can define the goal of libvirt: to provide the lowest possible
-generic and stable layer to manage domains on a node.
-
-
This implies the following:
-
-
the API should not be targeted to a single virtualization environment
- though Xen is the current default, which also means that some very
- specific capabilities which are not generic enough may not be provided as
- libvirt APIs
-
the API should allow all the operations needed to manage domains on a
node to be done efficiently and cleanly
-
the API will not try to provide high level multi-node management
features like load balancing, though they could be implemented on top of
libvirt
-
stability of the API is a big concern, libvirt should isolate
- applications from the frequent changes expected at the lower level of the
- virtualization framework
-
-
-
So libvirt should be a building block for higher level management tools
and for applications focusing on virtualization of a single node (the only
exception being domain migration between nodes, a capability which may need to
be added at the libvirt level). Where possible libvirt should be extendable
to provide the same API for remote nodes; however this is not the
case at the moment, as the code currently handles only local node accesses
(extension for remote access support is being worked on, see the mailing list discussions about it).
When running in a Xen environment, programs using libvirt have to execute
in "Domain 0", which is the primary Linux OS loaded on the machine. That OS
kernel provides most if not all of the actual drivers used by the set of
domains. It also runs the Xen Store, a database of information shared by the
hypervisor, the kernels, the drivers and the Xen daemon, Xend. The Xen daemon
supervises the control and execution of the sets of domains. The hypervisor,
drivers, kernels and daemons communicate through a shared system bus
implemented in the hypervisor. The figure below tries to provide a view of
this environment:
-
-
-
The library can be initialized in 2 ways depending on the level of
privilege of the embedding program. If it runs with root access,
virConnectOpen() can be used; it will use three different ways to connect to
the Xen infrastructure:

a connection to the Xen Daemon through an HTTP RPC layer

a read/write connection to the Xen Store

use of Xen Hypervisor calls

when used as non-root, libvirt connects to a proxy daemon running
as root and providing read-only support
-
-
The library will usually interact with the Xen daemon for any operation
changing the state of the system, but for performance and accuracy reasons
it may talk directly to the hypervisor when gathering state information, at
least when possible (i.e. when the running program using libvirt has root
privilege access).
-
-
If it runs without root access, virConnectOpenReadOnly() should be used to
initialize the library. It will then fork a libvirt_proxy
program running as root and providing read-only access to the API; this is
then only useful for reporting and monitoring.
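As a quick illustration of these two entry points, here is a minimal sketch using the Python bindings described later in this document (the URI argument is left as None so the default local hypervisor is used, and error handling is omitted):

import libvirt

# read/write connection: requires the privileges described above
conn = libvirt.open(None)

# read-only connection: usable by unprivileged programs, it goes through
# the read-only channels (proxy daemon / read-only Xen Store socket)
conn_ro = libvirt.openReadOnly(None)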
The model for QEmu and KVM is very similar: basically KVM is based
on QEmu for the process controlling a new domain, and only small details differ
between the two. In both cases the libvirt API is provided by a controlling
process forked by libvirt in the background, which launches and controls the
QEmu or KVM process. That program, called libvirt_qemud, talks through a specific
protocol to the library, and connects to the console of the QEmu process in
order to control and report on its status. Libvirt tries to expose all the
emulation models of QEmu; the selection is done when creating the new
domain, by specifying the architecture and machine type targeted.
-
-
The code controlling the QEmu process is available in the
-qemud/ directory.
As the previous section explains, libvirt can communicate using different
channels with the current hypervisor, and should also be able to use
different kinds of hypervisors. To simplify the internal design and code, ease
maintenance and simplify the support of other virtualization engines, the
internals have been structured as one core component, the libvirt.c module
acting as a front-end for the library API, and a set of hypervisor drivers
defining a common set of routines. That way the Xen Daemon access, the Xen
Store one, and the Hypervisor hypercalls are all isolated in separate C modules
implementing at least a subset of the common operations defined by the
drivers present in driver.h:
-
-
xend_internal: implements the driver functions through the Xen
Daemon

xs_internal: implements the subset of the driver available through the
Xen Store

xen_internal: provides the implementation of the functions possible via
direct hypervisor access

proxy_internal: provides read-only Xen access via a proxy, the proxy code
is in the proxy/ directory.

xm_internal: provides support for Xen defined but not running
domains.

qemu_internal: implements the driver functions for the QEmu and
KVM virtualization engines. It also uses a qemud/ specific daemon
which interacts with the QEmu process to implement the libvirt API.

test: this is a test driver useful for regression tests of the
front-end part of libvirt.
-
-
-
Note that a given driver may only implement a subset of those functions
(for example saving a Xen domain state to disk and restoring it is only
possible through the Xen Daemon); in that case the driver entry points for
unsupported functions are initialized to NULL.
The latest versions of libvirt can be found on the libvirt.org server ( HTTP, FTP). You will find there the released
-versions as well as snapshot
-tarballs updated from CVS head every hour
-
-
Anonymous CVS is also
-available, first register onto the server:
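(presumably with a login command using the same CVS root as the checkout shown below:)

cvs -d :pserver:anoncvs@libvirt.org:2401/data/cvs login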
it will request a password, enter anoncvs. Then you can
-checkout the development tree with:
-
-
cvs -d :pserver:anoncvs@libvirt.org:2401/data/cvs co
-libvirt
-
-
Use ./autogen.sh to configure the local checkout, then make
and make install, as usual. All normal cvs commands are now
available except committing to the base.
This section describes the XML format used to represent domains; there are
variations on the format based on the kind of domains run and the options
used to launch them:
The library uses an XML format to describe domains, as input to virDomainCreateLinux()
and as the output of virDomainGetXMLDesc();
the following is an example of the format as returned by the shell command
virsh dumpxml fc4, where fc4 was one of the running domains:
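The dump would look something like the following (an illustrative sketch rather than a verbatim dump; the paths, id and addresses are made up for the example):

<domain type='xen' id='18'>
  <name>fc4</name>
  <os>
    <type>linux</type>
    <kernel>/boot/vmlinuz-2.6.15-1.43_FC5guest</kernel>
    <initrd>/boot/initrd-2.6.15-1.43_FC5guest.img</initrd>
    <root>/dev/sda1</root>
    <cmdline> ro selinux=0 3</cmdline>
  </os>
  <memory>131072</memory>
  <vcpu>1</vcpu>
  <devices>
    <disk type='file'>
      <source file='/u/fc4.img'/>
      <target dev='sda1'/>
    </disk>
    <interface type='bridge'>
      <source bridge='xenbr0'/>
      <mac address='aa:00:00:00:00:11'/>
      <script path='/etc/xen/scripts/vif-bridge'/>
    </interface>
    <console tty='/dev/pts/5'/>
  </devices>
</domain>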
The root element must be called domain with no namespace; the
type attribute indicates the kind of hypervisor used, 'xen' is
the default value. The id attribute gives the domain id at
runtime (note however that this may change, for example if the domain is saved
to disk and restored). The domain has a few children whose order is not
significant:
-
-
name: the domain name, preferably ASCII based
-
memory: the maximum memory allocated to the domain in kilobytes
-
vcpu: the number of virtual CPUs configured for the domain
-
os: a block describing the Operating System, its content will be
- dependent on the OS type
-
-
type: indicates the OS type, always linux at this point
-
kernel: path to the kernel on the Domain 0 filesystem
-
initrd: an optional path for the init ramdisk on the Domain 0
- filesystem
-
cmdline: optional command line to the kernel
-
root: the root filesystem from the guest viewpoint, it may be
- passed as part of the cmdline content too
-
-
-
devices: a list of disk, interface and
- console descriptions in no special order
-
-
-
The format of the devices and their type may grow over time, but the
-following should be sufficient for basic use:
-
-
A disk device indicates a block device; it can have two
values for the type attribute, either 'file' or 'block', corresponding to the 2
options available at the Xen layer. It has two mandatory children and two
optional ones, in no specific order:
-
-
source with a file attribute containing the path in Domain 0 to the
- file or a dev attribute if using a block device, containing the device
- name ('hda5' or '/dev/hda5')
-
target indicates in a dev attribute the device where it is mapped in
- the guest
-
readonly an optional empty element indicating the device is
- read-only
-
shareable an optional empty element indicating the device
- can be used read/write with other domains
-
-
-
An interface element describes a network device mapped on the
guest; it also has a type whose value is currently 'bridge', and it has a
number of children in no specific order:
-
-
source: indicating the bridge name
-
mac: the optional mac address provided in the address attribute
-
ip: the optional IP address provided in the address attribute
-
script: the script used to bridge the interface in the Domain 0
-
target: an optional target indicating the device name.
-
-
-
A console element describes a serial console connection to
-the guest. It has no children, and a single attribute tty which
-provides the path to the Pseudo TTY on which the guest console can be
-accessed
-
-
Life cycle actions for the domain can also be expressed in the XML format;
they drive what should happen if the domain crashes, is rebooted or is
powered off. There are various possible actions when this happens:
-
-
destroy: The domain is cleaned up (that's the default normal processing
- in Xen)
-
restart: A new domain is started in place of the old one with the same
- configuration parameters
-
preserve: The domain will remain in memory until it is destroyed
- manually, it won't be running but allows for post-mortem debugging
-
rename-restart: a variant of the previous one but where the old domain
- is renamed before being saved to allow a restart
-
-
-
The following could be used for a Xen production system:
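For instance, the following elements inside the domain description (a sketch, assuming the usual on_poweroff/on_reboot/on_crash element names):

<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>rename-restart</on_crash>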
While the format may be extended in various ways as support for more
-hypervisor types and features are added, it is expected that this core subset
-will remain functional in spite of the evolution of the library.
Here is an example of a domain description used to start a fully
virtualized (a.k.a. HVM) Xen domain. This requires hardware virtualization
support at the processor level, but allows running unmodified operating
systems:
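A sketch of such a description follows (paths, UUID and MAC address are illustrative, and the attribute spellings, notably on the clock element, follow the element descriptions given below):

<domain type='xen' id='3'>
  <name>fv0</name>
  <uuid>4dea22b31d52d8f32516782e98ab3fa0</uuid>
  <os>
    <type>hvm</type>
    <loader>/usr/lib/xen/boot/hvmloader</loader>
    <boot dev='hd'/>
  </os>
  <memory>524288</memory>
  <vcpu>1</vcpu>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <features>
    <pae/>
    <acpi/>
    <apic/>
  </features>
  <clock sync='localtime'/>
  <devices>
    <emulator>/usr/lib/xen/bin/qemu-dm</emulator>
    <disk type='file'>
      <source file='/root/fv0'/>
      <target dev='hda'/>
    </disk>
    <disk type='file' device='cdrom'>
      <source file='/root/fc5-x86_64-boot.iso'/>
      <target dev='hdc'/>
      <readonly/>
    </disk>
    <interface type='bridge'>
      <source bridge='xenbr0'/>
      <mac address='00:16:3e:5d:c7:9e'/>
      <script path='vif-bridge'/>
    </interface>
    <graphics type='vnc' port='5904'/>
  </devices>
</domain>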
There are a few things to notice specifically for HVM domains:
-
-
the optional <features> block is used to enable
- certain guest CPU / system features. For HVM guests the following
- features are defined:
-
-
pae - enable PAE memory addressing
-
apic - enable IO APIC
-
acpi - enable ACPI bios
-
-
-
the optional <clock> element is used to specify
- whether the emulated BIOS clock in the guest is synced to either
- localtime or utc. In general Windows will
- want localtime while all other operating systems will
- want utc. The default is thus utc
-
the <os> block description is very different, first
- it indicates that the type is 'hvm' for hardware virtualization, then
- instead of a kernel, boot and command line arguments, it points to an os
- boot loader which will extract the boot information from the boot device
- specified in a separate boot element. The dev attribute on
- the boot tag can be one of:
-
-
fd - boot from first floppy device
-
hd - boot from first harddisk device
-
cdrom - boot from first cdrom device
-
-
-
the <devices> section includes an emulator entry
- pointing to an additional program in charge of emulating the devices
-
the disk entry indicates in the dev target section that the emulation
- for the drive is the first IDE disk device hda. The list of device names
- supported is dependent on the Hypervisor, but for Xen it can be any IDE
- device hda-hdd, or a floppy device
- fda, fdb. The <disk> element
also supports a 'device' attribute to indicate what kind of hardware to
- emulate. The following values are supported:
-
-
floppy - a floppy disk controller
-
disk - a generic hard drive (the default if
omitted)
-
cdrom - a CDROM device
-
- For Xen 3.0.2 and earlier a CDROM device can only be emulated on the
- hdc channel, while for 3.0.3 and later, it can be emulated
- on any IDE channel.
-
the <devices> section also includes at least one
entry for the graphic device used to render the os. Currently there are
just 2 possible types, 'vnc' or 'sdl'. If the type is 'vnc', then an
additional port attribute will be present indicating the TCP
port on which the VNC server is accepting client connections.
-
-
-
It is likely that the HVM description gets additional optional elements
and attributes as the support for fully virtualized domains expands,
-especially for the variety of devices emulated and the graphic support
-options offered.
Support for the KVM virtualization
is provided in recent Linux kernels (2.6.20 and onward). This requires
specific hardware with acceleration support and the availability of the
special version of the QEmu binary. Since this
relies on QEmu for the machine emulation, like fully virtualized guests, the
XML description is quite similar; here is a simple example:
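A sketch of such a KVM domain (paths, UUID and MAC address are illustrative):

<domain type='kvm'>
  <name>demo2</name>
  <uuid>4dea24b3-1d52-d8f3-2516-782e98a23fa0</uuid>
  <memory>131072</memory>
  <vcpu>1</vcpu>
  <os>
    <type>hvm</type>
  </os>
  <devices>
    <emulator>/usr/bin/qemu-kvm</emulator>
    <disk type='file'>
      <source file='/var/lib/libvirt/images/demo2.img'/>
      <target dev='hda'/>
    </disk>
    <interface type='network'>
      <source network='default'/>
      <mac address='24:42:53:21:52:45'/>
    </interface>
    <graphics type='vnc' port='-1'/>
  </devices>
</domain>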
The networking support in the QEmu and KVM case is more flexible, and
supports a variety of options:
-
-
Userspace SLIRP stack
-
Provides a virtual LAN with NAT to the outside world. The virtual
- network has DHCP & DNS services and will give the guest VM addresses
- starting from 10.0.2.15. The default router will be
- 10.0.2.2 and the DNS server will be 10.0.2.3.
- This networking is the only option for unprivileged users who need their
- VMs to have outgoing access. Example configs are:
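An interface using this option might look like the following sketch (type 'user' selects the SLIRP stack):

<interface type='user'>
  <mac address='24:42:53:21:52:45'/>
</interface>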
Provides a virtual network using a bridge device in the host.
- Depending on the virtual network configuration, the network may be
- totally isolated, NAT'ing to an explicit network device, or NAT'ing to
- the default route. DHCP and DNS are provided on the virtual network in
- all cases and the IP range can be determined by examining the virtual
- network config with 'virsh net-dumpxml <network
- name>'. There is one virtual network called 'default' setup out
- of the box which does NAT'ing to the default route and has an IP range of
- 192.168.22.0/255.255.255.0. Each guest will have an
- associated tun device created with a name of vnetN, which can also be
- overridden with the <target> element. Example configs are:
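A sketch of an interface attached to the virtual network named 'default':

<interface type='network'>
  <source network='default'/>
  <mac address='00:16:3e:1a:b3:4a'/>
  <target dev='vnet7'/>
</interface>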
Provides a bridge from the VM directly onto the LAN. This assumes
- there is a bridge device on the host which has one or more of the hosts
- physical NICs enslaved. The guest VM will have an associated tun device
- created with a name of vnetN, which can also be overridden with the
- <target> element. The tun device will be enslaved to the bridge.
- The IP range / network configuration is whatever is used on the LAN. This
- provides the guest VM full incoming & outgoing net access just like a
- physical machine. Examples include:
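A sketch of an interface bridged straight onto the LAN (the bridge name br0 is illustrative):

<interface type='bridge'>
  <source bridge='br0'/>
  <mac address='00:16:3e:1a:b3:4b'/>
  <target dev='vnet7'/>
</interface>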
Provides a means for the administrator to execute an arbitrary script
- to connect the guest's network to the LAN. The guest will have a tun
- device created with a name of vnetN, which can also be overridden with the
- <target> element. After creating the tun device a shell script will
- be run which is expected to do whatever host network integration is
- required. By default this script is called /etc/qemu-ifup but can be
- overridden.
A multicast group is setup to represent a virtual network. Any VMs
- whose network devices are in the same multicast group can talk to each
- other even across hosts. This mode is also available to unprivileged
- users. There is no default DNS or DHCP support and no outgoing network
- access. To provide outgoing network access, one of the VMs should have a
- 2nd NIC which is connected to one of the first 4 network types and do the
- appropriate routing. The multicast protocol is compatible with that used
- by user mode linux guests too. The source address used must be from the
- multicast address block.
A TCP client/server architecture provides a virtual network. One VM
provides the server end of the network, all other VMs are configured as
- clients. All network traffic is routed between the VMs via the server.
- This mode is also available to unprivileged users. There is no default
- DNS or DHCP support and no outgoing network access. To provide outgoing
- network access, one of the VMs should have a 2nd NIC which is connected
- to one of the first 4 network types and do the appropriate routing.
Note that options 2, 3 and 4 are also supported by Xen VMs, so it is
possible to use these configs to have networking with both Xen and
QEMU/KVM guests connected to each other.
Libvirt support for KVM and QEmu is the same code base with only minor
changes. The configuration is as a result nearly identical; the only changes
are related to QEmu's ability to emulate various CPU types and hardware
platforms, and kqemu support (QEmu's own kernel accelerator, used when the
emulated CPU is i686 and so is the target machine):
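A sketch of a plain QEmu (emulated, non-KVM) domain, where the architecture and machine type appear on the os type element (paths and values are illustrative):

<domain type='qemu'>
  <name>QEmu-fedora-i686</name>
  <memory>219200</memory>
  <vcpu>2</vcpu>
  <os>
    <type arch='i686' machine='pc'>hvm</type>
    <boot dev='cdrom'/>
  </os>
  <devices>
    <emulator>/usr/bin/qemu</emulator>
    <disk type='file' device='cdrom'>
      <source file='/home/user/boot.iso'/>
      <target dev='hdc'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <source file='/home/user/fedora.img'/>
      <target dev='hda'/>
    </disk>
    <graphics type='vnc' port='-1'/>
  </devices>
</domain>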
As new virtualization engine support gets added to libvirt, and to handle
cases like QEmu supporting a variety of emulations, a query interface has
been added in 0.2.1 allowing the set of supported virtualization
capabilities on the host to be listed:
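The entry point is the virConnectGetCapabilities() function; the declaration below is the expected prototype (check the libvirt.h of your installed version):

char * virConnectGetCapabilities (virConnectPtr conn);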
The value returned is an XML document listing the virtualization
capabilities of the host and virtualization engine to which
@conn is connected. One can test it using the virsh
command line tool's 'capabilities' command, which dumps the XML
associated with the current connection. For example, in the case of a 64 bit
machine with hardware virtualization capabilities enabled in the chip and
BIOS you will see:
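An abbreviated sketch of the kind of output to expect follows (the colour highlighting referred to below is not reproduced here; the first section describes the host, the second the Xen paravirtualized guest support, the third the fully virtualized 32 bit guest support):

<capabilities>
  <host>
    <cpu>
      <arch>x86_64</arch>
      <features>
        <vmx/>
      </features>
    </cpu>
  </host>

  <guest>
    <os_type>xen</os_type>
    <arch name='x86_64'>
      <wordsize>64</wordsize>
      <domain type='xen'/>
      <emulator>/usr/lib64/xen/bin/qemu-dm</emulator>
    </arch>
  </guest>

  <guest>
    <os_type>hvm</os_type>
    <arch name='i686'>
      <wordsize>32</wordsize>
      <domain type='xen'/>
      <emulator>/usr/lib/xen/bin/qemu-dm</emulator>
      <machine>pc</machine>
      <loader>/usr/lib/xen/boot/hvmloader</loader>
    </arch>
    <features>
      <pae/>
      <acpi/>
      <apic/>
    </features>
  </guest>
</capabilities>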
The first block (in red) indicates the host hardware capabilities; currently
it is limited to the CPU properties, but other information may be available.
It shows the CPU architecture and the features of the chip (the feature
block is similar to what you will find in a Xen fully virtualized domain
description).
-
-
The second block (in blue) indicates the paravirtualization support of the
Xen engine; you will see the os_type of xen indicating a paravirtual
kernel, then architecture information and potential features.
-
-
The third block (in green) gives similar information but when running a
-32 bit OS fully virtualized with Xen using the hvm support.
-
-
This section is likely to be updated and augmented in the future, see the
-discussion which led to the capabilities format in the mailing-list
-archives.
Libvirt comes with bindings to support languages other than
pure C. First, the headers embed the necessary declarations to
allow direct access from C++ code, but there are also bindings for
higher level languages:
-
-
Python: Libvirt comes with direct support for the Python language
- (just make sure you installed the libvirt-python package if not
- compiling from sources). See below for more information about
- using libvirt with python
Support, requests or help for libvirt bindings are welcome on
the mailing
list; as usual try to provide enough background information
and make sure you use a recent version, see the help
page.


The remainder of this page focuses on the Python bindings.
-
-
The Python bindings should be complete and are mostly automatically
generated from the formal description of the API in XML. The bindings are
articulated around 2 classes, virConnect and virDomain, mapping to
the C types. Functions in the C API taking either type as argument then
become methods of the classes; their name is just stripped of the
virConnect or virDomain(Get) prefix and the first letter is converted to
lower case, for example the C functions:
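For instance (an illustrative pairing using two common entry points):

int virConnectNumOfDomains (virConnectPtr conn);
int virDomainSetMaxMemory  (virDomainPtr domain, unsigned long memory);

become, as methods of the corresponding Python objects:

conn.numOfDomains()
domain.setMaxMemory(memory)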
This process is fully automated; you can get a summary of the conversion
in the file libvirtclass.txt present in the python dir or in the docs. There
are a couple of functions which don't map directly to their C counterparts due to
specificities in their argument conversions:
-
-
virConnectListDomains
is replaced by virConnect::listDomainsID(self), which returns
a list of the integer IDs of the currently running domains
-
virDomainGetInfo
- is replaced by virDomain::info() which returns a list of
-
-
state: one of the state values (virDomainState)
-
maxMemory: the maximum memory used by the domain
-
memory: the current amount of memory used by the domain
-
nbVirtCPU: the number of virtual CPU
-
cpuTime: the time used by the domain in nanoseconds
-
-
-
-
-
So let's look at a simple example inspired by the basic.py
test found in python/tests/ in the source tree:
-
import libvirt
import sys

conn = libvirt.openReadOnly(None)
if conn == None:
    print 'Failed to open connection to the hypervisor'
    sys.exit(1)

try:
    dom0 = conn.lookupByName("Domain-0")
except:
    print 'Failed to find the main domain'
    sys.exit(1)

print "Domain 0: id %d running %s" % (dom0.ID(), dom0.OSType())
print dom0.info()

-
There is not much to comment about it; it really is a straight mapping
from the C API. The only points to notice are:
-
-
the import of the module called libvirt
-
getting a connection to the hypervisor, in that case using the
- openReadOnly function allows the code to execute as a normal user.
-
getting an object representing the Domain 0 using lookupByName
-
if the domain is not found a libvirtError exception will be raised
-
extracting and printing some information about the domain using
- various methods
- associated to the virDomain class.
The main goals of libvirt when it comes to error handling are:
-
-
provide as much detail as possible
-
provide the information as soon as possible
-
don't force the library user into one style of error handling
-
-
-
As a result the library provides both synchronous, callback based and
asynchronous error reporting. When an error happens in the library code the
error is logged, allowing it to be retrieved later, and if the user registered an
error callback it will be called synchronously. Once the call to libvirt ends
the error can be detected by the return value and the full information for
the last logged error can be retrieved.
-
-
To avoid as much as possible troubles with a global variable in a
multithreaded environment, libvirt will when possible associate the errors to
the current connection they are related to; that way the error is stored in a
dynamic structure which can be made thread specific. An error callback can be
set specifically for a connection with virConnSetErrorFunc.
-
-
So error handling in the code is the following:
-
-
if the error can be associated to a connection for example when failing
- to look up a domain
-
-
if there is a callback associated to the connection set with virConnSetErrorFunc,
- call it with the error information
-
otherwise if there is a global callback set with virSetErrorFunc,
- call it with the error information
-
otherwise call virDefaultErrorFunc
- which is the default error function of the library issuing the error
- on stderr
-
save the error in the connection for later retrieval with virConnGetLastError
-
-
-
otherwise, for example when failing to create a hypervisor connection:
-
-
if there is a global callback set with virSetErrorFunc,
- call it with the error information
-
otherwise call virDefaultErrorFunc
- which is the default error function of the library issuing the error
- on stderr
-
save the error in the connection for later retrieval with virGetLastError
-
-
-
-
-
In all cases the error information is provided as a virErrorPtr pointer to
a read-only virError structure containing the
following fields:

code: an error number from the virErrorNumber enum

domain: an enum indicating which part of libvirt raised the error, see
virErrorDomain
-
level: the error level, usually VIR_ERR_ERROR, though there is room for
- warnings like VIR_ERR_WARNING
-
message: the full human-readable formatted string of the error
-
conn: if available a pointer to the virConnectPtr
- connection to the hypervisor where this happened
-
dom: if available a pointer to the virDomainPtr domain
- targeted in the operation
-
-
-
and then extra raw information about the error which may be initialized
-to 0 or NULL if unused
-
-
str1, str2, str3: string information, usually str1 is the error
- message format
-
int1, int2: integer information
-
-
-
So usually, setting up specific error handling with libvirt consists of
registering a handler with virSetErrorFunc or
with virConnSetErrorFunc,
checking the value of the code field, taking appropriate action, and if needed letting
libvirt print the error on stderr by calling virDefaultErrorFunc.
For asynchronous error handling, set such a function doing nothing to avoid
the error being reported on stderr, and call virConnGetLastError or
virGetLastError when an API call returns an error value. It can be a good
idea to use virResetError or virConnResetLastError
once an error has been processed fully.
-
-
At the Python level, there is only a global error reporting callback function at
this point, see the error.py example about it:
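A minimal sketch in the spirit of error.py (the actual content of error.py may differ; the handler signature matches the description in the next paragraph):

import libvirt

def handler(ctxt, err):
    # 'err' is a tuple carrying the same fields as the C virError structure
    print "libvirt error caught by the handler:", err

# the second argument is passed back to the handler as 'ctxt' on each call
libvirt.registerErrorHandler(handler, "my-context")

conn = libvirt.openReadOnly(None)
try:
    conn.lookupByName("does-not-exist")
except libvirt.libvirtError:
    pass  # the handler above has already reported the problem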
the second argument to the registerErrorHandler function is passed as the
first argument of the callback, like in the C version. The error is a tuple
containing the same fields as a virError in C, but cast to Python.
There is a mailing-list libvir-list@redhat.com for libvirt,
with an on-line archive. Please subscribe to this list before posting by
visiting the associated Web page and following the instructions. Patches with
explanations, provided as attachments, are really appreciated and will be
discussed on the mailing list. If possible, generate the patches by using
cvs diff -u in a CVS checkout.
-
-
We use Red Hat Bugzilla to track bugs and new feature requests to libvirt.
-If you want to report a bug or ask for a feature, please check the existing open bugs, then if yours isn't a duplicate of
-an existing bug:
Don't forget to attach any patch or extra data that you may have available.
It is always a good idea to also post to the mailing-list, so that everybody
working on the project can see it, thanks !
-
-
Some of the libvirt developers may be found on IRC on the OFTC
-network. Use the settings:
-
-
server: irc.oftc.net
-
port: 6667 (the usual IRC port)
-
channel: #virt
-
-
But there is no guarantee that someone will be watching or able to reply;
use the mailing-list if you don't get an answer on IRC.
-These are the steps to compile libvirt and the other
-tools from source on Windows.
-
-
-
-You will need:
-
-
-
-
MS Windows. Microsoft makes free (as in beer) versions
of some of its operating systems available to
MSDN subscribers.
We used Windows 2008 Server for testing, virtualized under
Linux using KVM-53 (earlier versions of KVM and QEMU won't
run recent versions of Windows because of their lack of full ACPI
support, so make sure you have the latest KVM).
-
A large amount of free disk space to install Cygwin.
-Make sure you have 10 GB free to install most Cygwin packages,
-although if you pare down the list of dependencies you may
-get away with much less.
-
-
A network connection for Windows, since Cygwin downloads packages
-from the net as it installs.
A version of Cygwin sunrpc, patched to support building
- librpc.dll.
- A patch and a binary package are available from
- the download area.
-
-
-
-These are the steps to take to compile libvirt from
-source on Windows:
-
-
-
-
-
Run Cygwin setup.exe.
When it starts up it will show its initial dialog
[screenshot not included in this text version].

Step through the setup program accepting defaults
or making choices as appropriate, until you get to the
screen for selecting packages
[screenshot not included in this text version].

The user interface here is very confusing. You have to
click the "recycling icon" shown in the screenshot
[screenshot not included in this text version],
which takes the package (and all packages in the subtree)
through several states such as "Install", "Reinstall", "Keep",
"Skip", "Uninstall", etc.

You can install "All" (everything) or better select
just the groups and packages needed. Select the following
groups and packages for installation:
-
Once Cygwin has finished installing, start a Cygwin bash shell
- (either click on the desktop icon or look for Cygwin bash shell
- in the Start menu).
-
-
The very first time you start the Cygwin bash shell, you may
- find you need to run the mkpasswd and mkgroup
- commands in order to create /etc/passwd and
- /etc/group files from Windows users. If this
- is needed then a message is printed in the shell.
- Note that you need to do this as Windows Administrator.
-
-
-
-
Install Cygwin sunrpc ≥ 4.0-4 package, patched to include
- librpc.dll.
- To do this, first check to see whether /usr/lib/librpc.dll
- exists. If it does, you're good to go and can skip to the next
- step.
-
-
 If you don't have this file, either install the binary package
 sunrpc-4.0-4.tar.bz2 (just unpack it, as Administrator, in the Cygwin root
 directory), or download the source patch and apply it by hand to the Cygwin
 sunrpc package (eg. using cygport).
-
The configure step will tell you if you have all the
- required parts installed. If something is missing you
- will need to go back through Cygwin setup and install it.
-
If this step is not successful, you should post a full
- report including complete messages to
- the
- libvirt mailing list.
-
-
-
-
-
Test it. If you have access to a remote machine
- running Xen or QEMU/KVM, and the libvirt daemon (libvirtd)
- then you should be able to connect to it and display
- domains using, eg:
-
 Please read more about remote
 support before sending bug reports, to make sure that
 any problems are really Windows problems and not just remote
 configuration / security issues.
-
-
-
-
-
- You may want to install the library and programs by doing:
-
-
-make install
-
-
-
-
-
- The above steps should also build and install Python modules.
- However for reasons which I don't fully understand, Python won't
- look in the
- non-standard /usr/local/lib/python*/site-packages/
- directory by default so you may need to set the environment
- variable PYTHONPATH:
-
-To tell libvirt that you want to access a remote resource,
-you should supply a hostname in the normal URI that is passed
-to virConnectOpen (or virsh -c ...).
-For example, if you normally use qemu:///system
-to access the system-wide QEMU daemon, then to access
-the system-wide QEMU daemon on a remote machine called
-oirase you would use qemu://oirase/system.
-
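As a minimal sketch of what this looks like at the C level (assuming libvirtd
and the TLS certificates are already set up on the remote host oirase):

#include <stdio.h>
#include <libvirt/libvirt.h>

int main(void)
{
    /* The local system QEMU daemon ... */
    virConnectPtr local = virConnectOpen("qemu:///system");
    /* ... and the same daemon on the remote host "oirase", over the
       default tls transport. */
    virConnectPtr remote = virConnectOpen("qemu://oirase/system");

    if (local != NULL) {
        printf("local active domains: %d\n", virConnectNumOfDomains(local));
        virConnectClose(local);
    }
    if (remote != NULL) {
        printf("remote active domains: %d\n", virConnectNumOfDomains(remote));
        virConnectClose(remote);
    }
    return 0;
}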
-From an API point of view, apart from the change in URI, the
-API should behave the same. For example, ordinary calls
-are routed over the remote connection transparently, and
-values or errors from the remote side are returned to you
-as if they happened locally. Some differences you may notice:
-
-
-
-
Additional errors can be generated, specifically ones
-relating to failures in the remote transport itself.
-
Remote calls are handled synchronously, so they will be
-much slower than, say, direct hypervisor calls.
tls
-
TLS 1.0 (SSL 3.1) authenticated and encrypted TCP/IP socket, usually
- listening on a public port number. To use this you will need to
- generate client and
- server certificates.
- The standard port is 16514.
-
-
-
unix
-
Unix domain socket. Since this is only accessible on the
- local machine, it is not encrypted, and uses Unix permissions or
- SELinux for authentication.
- The standard socket names are
- /var/run/libvirt/libvirt-sock and
- /var/run/libvirt/libvirt-sock-ro (the latter
- for read-only connections).
-
-
-
ssh
-
Transported over an ordinary
- ssh (secure shell) connection.
- Requires Netcat (nc) to be installed and libvirtd to be running
- on the remote machine. You should use some sort of
- ssh key management (eg. ssh-agent),
- otherwise programs which use
- this transport will stop and ask for a password.
-
-
ext
-
Any external program which can make a connection to the
- remote machine by means outside the scope of libvirt.
-
-
tcp
-
Unencrypted TCP/IP socket. Not recommended for production
- use, this is normally disabled, but an administrator can enable
- it for testing or use over a trusted network.
- The standard port is 16509.
-
-
-
-
-The default transport, if no other is specified, is tls.
-
-Either the transport or the hostname must be given in order
-to distinguish this from a local URI.
-
-
-
-Some examples:
-
-
-
-
xen+ssh://rjones@towada/ — Connect to a
-remote Xen hypervisor on host towada using ssh transport and ssh
-username rjones.
-
-
-
xen://towada/ — Connect to a
-remote Xen hypervisor on host towada using TLS.
-
-
-
xen://towada/?no_verify=1 — Connect to a
-remote Xen hypervisor on host towada using TLS. Do not verify
-the server's certificate.
-
-
-
qemu+unix:///system?socket=/opt/libvirt/run/libvirt/libvirt-sock —
-Connect to the local qemu instances over a non-standard
-Unix socket (the full path to the Unix socket is
-supplied explicitly in this case).
-
-
-
test+tcp://localhost:5000/default —
-Connect to a libvirtd daemon offering unencrypted TCP/IP connections
-on localhost port 5000 and use the test driver with default
-settings.
-
-Extra parameters can be added to remote URIs as part
-of the query string (the part following ?).
-Remote URIs understand the extra parameters shown below.
-Any others are passed unmodified through to the back end.
-Note that parameter values must be
-URI-escaped.
-
-
-
-
-
Name
-
Transports
-
Meaning
-
-
-
-
name
-
any transport
-
- The name passed to the remote virConnectOpen function. The
- name is normally formed by removing transport, hostname, port
- number, username and extra parameters from the remote URI, but in certain
- very complex cases it may be better to supply the name explicitly.
-
-
-
-
Example: name=qemu:///system
-
-
-
-
command
-
ssh, ext
-
- The external command. For ext transport this is required.
- For ssh the default is ssh.
- The PATH is searched for the command.
-
-
-
-
Example: command=/opt/openssh/bin/ssh
-
-
-
-
socket
-
unix, ssh
-
- The path to the Unix domain socket, which overrides the
- compiled-in default. For ssh transport, this is passed to
- the remote netcat command (see next).
-
netcat
-
ssh
-
- The name of the netcat command on the remote machine.
- The default is nc. For ssh transport, libvirt
- constructs an ssh command which looks like:
-
-
-command -p port [-l username] hostname netcat -U socket
-
-
- where port, username, hostname can be
- specified as part of the remote URI, and command, netcat
- and socket come from extra parameters (or
- sensible defaults).
-
-
-
-
-
Example: netcat=/opt/netcat/bin/nc
-
-
-
-
no_verify
-
tls
-
- If set to a non-zero value, this disables client checks of the
- server's certificate. Note that to disable server checks of
- the client's certificate or IP address you must
- change the libvirtd
- configuration.
-
-
-
-
Example: no_verify=1
-
-
-
-
no_tty
-
ssh
-
- If set to a non-zero value, this stops ssh from asking for
- a password if it cannot log in to the remote machine automatically
- (eg. using ssh-agent etc.). Use this when you don't have access
- to a terminal - for example in graphical programs which use libvirt.
-
-Libvirt supports TLS certificates for verifying the identity
-of the server and clients. There are two distinct checks involved:
-
-
-
-
The client should know that it is connecting to the right
server. This check is done by the client, by matching the certificate
that the server sends against the server's hostname. It may be disabled
by adding ?no_verify=1 to the
remote URI.
-
-
-
The server should know that only permitted clients are
connecting. This can be done based on the client's IP address, or on
the client's IP address and the client's certificate. This check is done
by the server, and may be enabled and disabled in the libvirtd.conf file.
-
-
-
-
For full certificate checking you will need to have certificates
issued by a recognised Certificate
Authority (CA) for your server(s) and all clients. To avoid the
expense of getting certificates from a commercial CA, you can set up
your own CA and tell your server(s) and clients to trust certificates
issued by your own CA. Follow the instructions in the next section.
-
-
-
-Be aware that the default
-configuration for libvirtd allows any client to connect provided
-they have a valid certificate issued by the CA for their own IP
-address. You may want to change this to make it less (or more)
-permissive, depending on your needs.
-
(You can delete the ca.info file now if you want.)
-
-
-
-Now you have two files which matter:
-
-
-
-
-cakey.pem - Your CA's private key (keep this very secret!)
-
-
-cacert.pem - Your CA's certificate (this is public).
-
-
-
-
-cacert.pem has to be installed on clients and
-server(s) to let them know that they can trust certificates issued by
-your CA.
-
-
-
-The normal installation directory for cacert.pem
-is /etc/pki/CA/cacert.pem on all clients and servers.
-
-
-
-To see the contents of this file, do:
-
-
-
-certtool -i --infile cacert.pem
-
-X.509 certificate info:
-
-Version: 3
-Serial Number (hex): 00
-Subject: CN=Red Hat Emerging Technologies
-Issuer: CN=Red Hat Emerging Technologies
-Signature Algorithm: RSA-SHA
-Validity:
- Not Before: Mon Jun 18 16:22:18 2007
- Not After: Tue Jun 17 16:22:18 2008
-[etc]
-
-
-
-This is all that is required to set up your CA. Keep the CA's private
-key carefully as you will need it when you come to issue certificates
-for your clients and servers.
-
-For each server (libvirtd) you need to issue a certificate
-with the X.509 CommonName (CN) field set to the hostname
-of the server. The CN must match the hostname which
-clients will be using to connect to the server.
-
-
-
-In the example below, clients will be connecting to the
-server using a URI of
-xen://oirase/, so the CN must be "oirase".
-
-
-
-Make a private key for the server:
-
-
-
-certtool --generate-privkey > serverkey.pem
-
-
-
-and sign that key with the CA's private key by first
-creating a template file called server.info
-(only the CN field matters, which as explained above must
-be the server's hostname):
-
-
-
-organization = Name of your organization
-cn = oirase
-tls_www_server
-encryption_key
-signing_key
-
For each client (ie. any program linked with libvirt, such as
virt-manager)
you need to issue a certificate with the X.509 Distinguished Name (DN)
set to a suitable name. You can decide this based on company / organisation
policy. For example, I use:
-
-On the server side, run the libvirtd server with
-the '--listen' and '--verbose' options while the
-client is connecting. The verbose log messages should
-tell you enough to diagnose the problem.
-
-
-
-
You can use the pki_check.sh shell script
to analyze the setup on the client or server machines, preferably as root.
It will try to point out possible problems and provide solutions to
fix the setup, up to the point where you have secure remote access.
-Libvirtd (the remote daemon) is configured from a file called
-/etc/libvirt/libvirtd.conf, or specified on
-the command line using -f filename or
---config filename.
-
-
-
-This file should contain lines of the form below.
-Blank lines and comments beginning with # are ignored.
-
-
setting = value
-
The following settings, values and defaults are:
-
-
-
-
Line
-
Default
-
Meaning
-
-
-
-
listen_tls [0|1]
-
1 (on)
-
- Listen for secure TLS connections on the public TCP/IP port.
-
-
-
-
-
listen_tcp [0|1]
-
0 (off)
-
- Listen for unencrypted TCP connections on the public TCP/IP port.
-
-
-
-
-
tls_port "service"
-
"16514"
-
- The port number or service name to listen on for secure TLS connections.
-
-
-
-
-
tcp_port "service"
-
"16509"
-
- The port number or service name to listen on for unencrypted TCP connections.
-
-
-
-
-
mdns_adv [0|1]
-
1 (advertise with mDNS)
-
- If set to 1 then the virtualization service will be advertised over
- mDNS to hosts on the local LAN segment.
-
-
-
-
-
mdns_name "name"
-
"Virtualization Host HOSTNAME"
-
- The name to advertise for this host with Avahi mDNS. The default
- includes the machine's short hostname. This must be unique to the
- local LAN segment.
-
-
-
-
-
unix_sock_group "groupname"
-
"root"
-
- The UNIX group that owns the UNIX domain socket. If the socket permissions allow
- group access, then applications running under the matching group can access the
- socket. Only valid if running as root.
-
-
-
-
-
unix_sock_ro_perms "octal-perms"
-
"0777"
-
- The permissions for the UNIX domain socket for read-only client connections.
- The default allows any user to monitor domains.
-
-
-
-
-
unix_sock_rw_perms "octal-perms"
-
"0700"
-
- The permissions for the UNIX domain socket for read-write client connections.
- The default allows only root to manage domains.
-
-
-
-
-
tls_no_verify_certificate [0|1]
-
0 (certificates are verified)
-
- If set to 1 then if a client certificate check fails, it is not an error.
-
-
-
-
-
tls_no_verify_address [0|1]
-
0 (addresses are verified)
-
- If set to 1 then if a client IP address check fails, it is not an error.
-
-
-
-
-
key_file "filename"
-
"/etc/pki/libvirt/ private/serverkey.pem"
-
- Change the path used to find the server's private key.
- If you set this to an empty string, then no private key is loaded.
-
-
-
-
-
cert_file "filename"
-
"/etc/pki/libvirt/ servercert.pem"
-
- Change the path used to find the server's certificate.
- If you set this to an empty string, then no certificate is loaded.
-
-
-
-
-
ca_file "filename"
-
"/etc/pki/CA/cacert.pem"
-
- Change the path used to find the trusted CA certificate.
- If you set this to an empty string, then no trusted CA certificate is loaded.
-
-
-
-
-
crl_file "filename"
-
(no CRL file is used)
-
- Change the path used to find the CA certificate revocation list (CRL) file.
- If you set this to an empty string, then no CRL is loaded.
-
-
-
-
-
tls_allowed_dn_list ["DN1", "DN2"]
-
(none - DNs are not checked)
-
-
- Enable an access control list of client certificate Distinguished
- Names (DNs) which can connect to the TLS port on this server.
-
-
- The default is that DNs are not checked.
-
-
- This list may contain wildcards such as "C=GB,ST=London,L=London,O=Red Hat,CN=*"
- See the POSIX fnmatch function for the format
- of the wildcards.
-
-
- Note that if this is an empty list, no client can connect.
-
-
- Note also that GnuTLS returns DNs without spaces
- after commas between the fields (and this is what we check against),
- but the openssl x509 tool shows spaces.
-
-
-
-
-
tls_allowed_ip_list ["ip1", "ip2", "ip3"]
-
(none - clients can connect from anywhere)
-
-
- Enable an access control list of the IP addresses of clients
- who can connect to the TLS or TCP ports on this server.
-
-
- The default is that clients can connect from any IP address.
-
-
- This list may contain wildcards such as 192.168.*
- See the POSIX fnmatch function for the format
- of the wildcards.
-
-
- Note that if this is an empty list, no client can connect.
-
The libvirtd service and the libvirt remote client driver both use the
getaddrinfo() functions for name resolution and are
thus fully IPv6 enabled. That is, if a server has an IPv6 address configured,
the daemon will listen for incoming connections on both the IPv4 and IPv6
protocols. If a client has an IPv6 address configured and the DNS
address resolved for a service is reachable over IPv6, then an IPv6
connection will be made, otherwise IPv4 will be used. In summary it
should just 'do the right thing(tm)'.
-
Remote storage: To be fully useful, particularly for
-creating new domains, it should be possible to enumerate
-and provision storage on the remote machine. This is currently
-in the design phase.
-
-
Migration: We expect libvirt will support migration,
-and obviously remote support is what makes migration worthwhile.
-This is also in the design phase. Issues to discuss include
-which path the migration data should follow (eg. client to
-client direct, or client to server to client) and security.
-
-
-
Fine-grained authentication: libvirt in general,
-but in particular the remote case should support more
-fine-grained authentication for operations, rather than
-just read-write/read-only as at present.
-
-
-
-
-Please come and discuss these issues and more on the mailing list.
-
-The current implementation uses XDR-encoded packets with a
-simple remote procedure call implementation which also supports
-asynchronous messaging and asynchronous and out-of-order replies,
-although these latter features are not used at the moment.
-
-
-
-The implementation should be considered strictly internal to
-libvirt and subject to change at any time without notice. If
-you wish to talk to libvirtd, link to libvirt. If there is a problem
-that means you think you need to use the protocol directly, please
-first discuss this on the mailing list.
-
-
-
-The messaging protocol is described in
-qemud/remote_protocol.x.
-
-
-
-Authentication and encryption (for TLS) is done using GnuTLS and the RPC protocol is unaware of this layer.
-
-
-
-Protocol messages are sent using a simple 32 bit length word (encoded
-XDR int) followed by the message header (XDR
-remote_message_header) followed by the message body. The
-length count includes the length word itself, and is measured in
-bytes. Maximum message size is REMOTE_MESSAGE_MAX and to
avoid denial of service attacks on the XDR decoders, strings are
-individually limited to REMOTE_STRING_MAX bytes. In the
-TLS case, messages may be split over TLS records, but a TLS record
-cannot contain parts of more than one message. In the common RPC case
-a single REMOTE_CALL message is sent from client to
-server, and the server then replies synchronously with a single
-REMOTE_REPLY message, but other forms of messaging are
-also possible.
-
-
-
-The protocol contains support for multiple program types and protocol
-versioning, modelled after SunRPC.
-
-When connecting to libvirt, some connections may require client
-authentication before allowing use of the APIs. The set of possible
-authentication mechanisms is administrator controlled, independent
-of applications using libvirt.
-
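From the client side an application does not need to know which mechanism
the administrator chose: it can pass a credential callback to
virConnectOpenAuth and let libvirt ask for whatever the server requires.
A minimal sketch, where the URI is only an example and
virConnectAuthPtrDefault provides a simple terminal prompt:

#include <stdio.h>
#include <libvirt/libvirt.h>

int main(void)
{
    /* virConnectAuthPtrDefault supplies a terminal-based prompt for
       username/password style credentials requested by the server. */
    virConnectPtr conn = virConnectOpenAuth("qemu+tcp://example.com/system",
                                            virConnectAuthPtrDefault, 0);
    if (conn == NULL) {
        fprintf(stderr, "connection or authentication failed\n");
        return 1;
    }

    printf("connected, %d active domains\n", virConnectNumOfDomains(conn));
    virConnectClose(conn);
    return 0;
}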
-The libvirt daemon allows the administrator to choose the authentication
-mechanisms used for client connections on each network socket independently.
-This is primarily controlled via the libvirt daemon master config file in
-/etc/libvirt/libvirtd.conf. Each of the libvirt sockets can
-have its authentication mechanism configured independently. There is
-currently a choice of none, polkit, and sasl.
-The SASL scheme can be further configured to choose between a large
-number of different mechanisms.
-
-
-
UNIX socket permissions/group
-
-
-If libvirt does not contain support for PolicyKit, then access control for
-the UNIX domain socket is done using traditional file user/group ownership
-and permissions. There are 2 sockets, one for full read-write access, the
-other for read-only access. The RW socket will be restricted (mode 0700) to
-only allow the root user to connect. The read-only socket will
-be open access (mode 0777) to allow any user to connect.
-
-
-
To allow non-root users greater access, the libvirtd.conf file
can be edited to change the permissions via the unix_sock_rw_perms
config parameter and to set a user group via the unix_sock_group
parameter. For example, setting the former to mode 0770 and the
latter to wheel would let any user in the wheel group connect to
the libvirt daemon.
-
-
-
UNIX socket PolicyKit auth
-
-
-If libvirt contains support for PolicyKit, then access control options are
-more advanced. The unix_sock_auth parameter will default to
-polkit, and the file permissions will default to 0777
-even on the RW socket. Upon connecting to the socket, the client application
-will be required to identify itself with PolicyKit. The default policy for the
-RW daemon socket will require any application running in the current desktop
-session to authenticate using the user's password. This is akin to sudo
-auth, but does not require that the client application ultimately run as root.
-Default policy will still allow any application to connect to the RO socket.
-
-
-
-The default policy can be overridden by the administrator using the PolicyKit
-master configuration file in /etc/PolicyKit/PolicyKit.conf. The
-PolicyKit.conf(5) manual page provides details on the syntax
-available. The two libvirt daemon actions available are named org.libvirt.unix.monitor
-for the RO socket, and org.libvirt.unix.manage for the RW socket.
-
-
-
As an example, to allow a user fred full access to the RW socket,
-while requiring joe to authenticate with the admin password,
-would require adding the following snippet to PolicyKit.conf.
-
-The plain TCP socket of the libvirt daemon defaults to using SASL for authentication.
-The SASL mechanism configured by default is DIGEST-MD5, which provides a basic
-username+password style authentication. It also provides for encryption of the data
-stream, so the security of the plain TCP socket is on a par with that of the TLS
-socket. If desired the UNIX socket and TLS socket can also have SASL enabled by
-setting the auth_unix_ro, auth_unix_rw, auth_tls
-config params in libvirt.conf.
-
-
-
-Out of the box, no user accounts are defined, so no clients will be able to authenticate
-on the TCP socket. Adding users and setting their passwords is done with the saslpasswd2
-command. When running this command it is important to tell it that the appname is libvirt.
-As an example, to add a user fred, run
-
-
-
-# saslpasswd2 -a libvirt fred
-Password: xxxxxx
-Again (for verification): xxxxxx
-
-
-
-To see a list of all accounts the sasldblistusers2 command can be used.
-This command expects to be given the path to the libvirt user database, which is kept
-in /etc/libvirt/passwd.db
-
-Finally, to disable a user's access, the saslpasswd2 command can be used
-again:
-
-
-
-# saslpasswd2 -a libvirt -d fred
-
-
-
-
Kerberos auth
-
-
-The plain TCP socket of the libvirt daemon defaults to using SASL for authentication.
-The SASL mechanism configured by default is DIGEST-MD5, which provides a basic
-username+password style authentication. To enable Kerberos single-sign-on instead,
-the libvirt SASL configuration file must be changed. This is /etc/sasl2/libvirt.conf.
-The mech_list parameter must first be changed to gssapi
-instead of the default digest-md5. If SASL is enabled on the UNIX
-and/or TLS sockets, Kerberos will also be used for them. Like DIGEST-MD5, the Kerberos
-mechanism provides data encryption of the session.
-
-
-
Some operating systems do not install the SASL Kerberos plugin by default. It
may be necessary to install a sub-package such as cyrus-sasl-gssapi.
To check whether the Kerberos plugin is installed, run the pluginviewer
program and verify that gssapi is listed, eg:

Next it is necessary for the administrator of the Kerberos realm to issue a principal
for the libvirt server. There needs to be one principal per host running the libvirt
daemon. The principal should be named libvirt/full.hostname@KERBEROS.REALM.
This is typically done by running the kadmin.local command on the Kerberos
server, though some Kerberos servers have alternate ways of setting up service principals.
Once created, the principal should be exported to a keytab, copied to the host running
the libvirt daemon and placed in /etc/libvirt/krb5.tab.
-
-
-
-# kadmin.local
-kadmin.local: add_principal libvirt/foo.example.com
-Enter password for principal "libvirt/foo.example.com@EXAMPLE.COM":
-Re-enter password for principal "libvirt/foo.example.com@EXAMPLE.COM":
-Principal "libvirt/foo.example.com@EXAMPLE.COM" created.
-
-kadmin.local: ktadd -k /root/libvirt-foo-example.tab libvirt/foo.example.com@EXAMPLE.COM
-Entry for principal libvirt/foo.example.com@EXAMPLE.COM with kvno 4, encryption type Triple DES cbc mode with HMAC/sha1 added to keytab WRFILE:/root/libvirt-foo-example.tab.
-Entry for principal libvirt/foo.example.com@EXAMPLE.COM with kvno 4, encryption type ArcFour with HMAC/md5 added to keytab WRFILE:/root/libvirt-foo-example.tab.
-Entry for principal libvirt/foo.example.com@EXAMPLE.COM with kvno 4, encryption type DES with HMAC/sha1 added to keytab WRFILE:/root/libvirt-foo-example.tab.
-Entry for principal libvirt/foo.example.com@EXAMPLE.COM with kvno 4, encryption type DES cbc mode with RSA-MD5 added to keytab WRFILE:/root/libvirt-foo-example.tab.
-
-kadmin.local: quit
-
-# scp /root/libvirt-foo-example.tab root@foo.example.com:/etc/libvirt/krb5.tab
-# rm /root/libvirt-foo-example.tab
-
-
-
Any client application wishing to connect to a Kerberos enabled libvirt server
merely needs to run kinit to gain a user principal. This may well
be done automatically when a user logs into a desktop session, if PAM is set up
to authenticate against Kerberos.
-
-Since libvirt supports many different kinds of virtualization
-(often referred to as "drivers" or "hypervisors"), we need a
-way to be able to specify which driver a connection refers to.
-Additionally we may want to refer to a driver on a remote
-machine over the network.
-
-
-
-To this end, libvirt uses URIs as used on the Web and as defined in RFC 2396. This page
-documents libvirt URIs.
-
-To use QEMU support in libvirt you must be running the
-libvirtd daemon (named libvirt_qemud
-in releases prior to 0.3.0). The purpose of this
-daemon is to manage qemu instances.
-
-
-
-The libvirtd daemon should be started by the
-init scripts when the machine boots. It should appear as
-a process libvirtd --daemon running as root
-in the background and will handle qemu instances on behalf
-of all users of the machine (among other things).
-
-
-So to connect to the daemon, one of two different URIs is used:
-
-
-
-
qemu:///system connects to a system mode daemon.
-
qemu:///session connects to a session mode daemon.
-
-
-
-(If you do libvirtd --help, the daemon will print
-out the paths of the Unix domain socket(s) that it listens on in
-the various different modes).
-
-
-
-KVM URIs are identical. You select between qemu, qemu accelerated and
-KVM guests in the guest XML as described
-here.
-
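For example, a small illustrative program can open the system instance
read-only and list the IDs of the running guests:

#include <stdio.h>
#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr conn = virConnectOpenReadOnly("qemu:///system");
    int ids[64];
    int n, i;

    if (conn == NULL)
        return 1;

    /* Ask for up to 64 running domain IDs. */
    n = virConnectListDomains(conn, ids, 64);
    for (i = 0; i < n; i++)
        printf("running domain id %d\n", ids[i]);

    virConnectClose(conn);
    return 0;
}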
-Libvirt allows you to pass a NULL pointer to
-virConnectOpen*. Empty string ("") acts in
-the same way. Traditionally this has meant
-connect to the local Xen hypervisor. However in future this
-may change to mean connect to the best available hypervisor.
-
-
-
-The theory is that if, for example, Xen is unavailable but the
-machine is running an OpenVZ kernel, then we should not try to
-connect to the Xen hypervisor since that is obviously the wrong
-thing to do.
-
-
-
-In any case applications linked to libvirt can continue to pass
-NULL as a default choice, but should always allow the
-user to override the URI, either by constructing one or by allowing
-the user to type a URI in directly (if that is appropriate). If your
-application wishes to connect specifically to a Xen hypervisor, then
-for future proofing it should choose a full xen:/// URI.
-
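A minimal sketch of that pattern: pass NULL by default, but let the user
override the URI, here via the first command line argument:

#include <stdio.h>
#include <libvirt/libvirt.h>

int main(int argc, char **argv)
{
    /* NULL means "the default hypervisor"; any argument overrides it. */
    const char *uri = (argc > 1) ? argv[1] : NULL;
    virConnectPtr conn = virConnectOpen(uri);

    if (conn == NULL) {
        fprintf(stderr, "failed to connect to %s\n",
                uri ? uri : "the default URI");
        return 1;
    }

    printf("connected to a %s hypervisor\n", virConnectGetType(conn));
    virConnectClose(conn);
    return 0;
}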
-If XenD is running and configured in /etc/xen/xend-config.sxp:
-
-
-(xend-http-server yes)
-
-
-
-then it listens on TCP port 8000. libvirt allows you to
-try to connect to xend running on remote machines by passing
-http://hostname[:port]/, for example:
-
-
-virsh -c http://oirase/ list
-
-
-
-This method is unencrypted and insecure and is definitely not
-recommended for production use. Instead use libvirt's remote support.
-
-
-
-Notes:
-
-
-
-
The HTTP client does not fully support IPv6.
-
Many features do not work as expected across HTTP connections, in
- particular, virConnectGetCapabilities.
- The remote support however does work
- correctly.
-
XenD's new-style XMLRPC interface is not supported by
- libvirt, only the old-style sexpr interface known in the Xen
- documentation as "unix server" or "http server".
-Another legacy URI is to specify name as the string
-"xen". This will continue to refer to the Xen
-hypervisor. However you should prefer a full xen:/// URI in all future code.
-
-Libvirt continues to support connections to a separately running Xen
-proxy daemon. This provides a way to allow non-root users to make a
-safe (read-only) subset of queries to the hypervisor.
-
-
-
-There is no specific "Xen proxy" URI. However if a Xen URI of any of
-the ordinary or legacy forms is used (eg. NULL,
-"", "xen", ...) which fails, and the
-user is not root, and the Xen proxy socket can be connected to
-(/tmp/libvirt_proxy_conn), then libvirt will use a proxy
-connection.
-
-Network functions are not hypervisor-specific. For historical
-reasons they require the QEMU daemon to be running (this
-restriction may be lifted in future). Most network functions
-first appeared in libvirt 0.2.0.
-
-The storage management APIs are based around 2 core concepts
-
-
-
-
Volume - a single storage volume which can
-be assigned to a guest, or used for creating further pools. A
-volume is either a block device, a raw file, or a special format
-file.
-
Pool - provides a means for taking a chunk
of storage and carving it up into volumes. A pool can be used to
manage things such as a physical disk, an NFS server, an iSCSI target,
a host adapter, or an LVM group.
-
-
-
-These two concepts are mapped through to two libvirt objects, a
-virStorageVolPtr and a virStoragePoolPtr,
-each with a collection of APIs for their management.
-
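A minimal sketch of how the two objects relate at the C level; the pool and
volume names are whatever the running libvirtd reports, nothing here is
guaranteed to exist:

#include <stdio.h>
#include <stdlib.h>
#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr conn = virConnectOpenReadOnly(NULL);
    char *pools[16];
    int npools, i;

    if (conn == NULL)
        return 1;

    /* Enumerate the active storage pools ... */
    npools = virConnectListStoragePools(conn, pools, 16);
    for (i = 0; i < npools; i++) {
        virStoragePoolPtr pool = virStoragePoolLookupByName(conn, pools[i]);
        if (pool != NULL) {
            /* ... and the volumes carved out of each pool. */
            char *vols[64];
            int nvols = virStoragePoolListVolumes(pool, vols, 64);
            int j;
            printf("pool %s has %d volumes\n", pools[i], nvols);
            for (j = 0; j < nvols; j++)
                free(vols[j]);
            virStoragePoolFree(pool);
        }
        free(pools[i]);
    }

    virConnectClose(conn);
    return 0;
}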
-Although all storage pool backends share the same public APIs and
-XML format, they have varying levels of capabilities. Some may
-allow creation of volumes, others may only allow use of pre-existing
-volumes. Some may have constraints on volume size, or placement.
-
-
-
The top level tag for a storage pool document is 'pool'. It has
a single attribute type, which is one of dir,
fs, netfs, disk, iscsi, or
logical. This corresponds to the storage backend drivers
listed further along in this document.
-
name
-
Providing a name for the pool which is unique to the host.
This is mandatory when defining a pool.
-
-
uuid
-
Providing an identifier for the pool which is globally unique.
-This is optional when defining a pool, a UUID will be generated if
-omitted
-
-
allocation
-
Providing the total storage allocation for the pool. This may
-be larger than the sum of the allocation of all volumes due to
-metadata overhead. This value is in bytes. This is not applicable
-when creating a pool.
-
-
capacity
-
Providing the total storage capacity for the pool. Due to
-underlying device constraints it may not be possible to use the
-full capacity for storage volumes. This value is in bytes. This
-is not applicable when creating a pool.
-
-
available
-
Providing the free space available for allocating new volumes
-in the pool. Due to underlying device constraints it may not be
-possible to allocate the entire free space to a single volume.
-This value is in bytes. This is not applicable when creating a
-pool.
-
-
source
-
Provides information about the source of the pool, such as
-the underlying host devices, or remote server
-
-
target
-
Provides information about the representation of the pool
-on the local host.
device
-
Provides the source for pools backed by physical devices.
May be repeated multiple times depending on backend driver. Contains
a single attribute path which is the fully qualified
path to the block device node.
-
directory
-
Provides the source for pools backed by directories. May
only occur once. Contains a single attribute path
which is the fully qualified path to the backing directory.
-
host
-
Provides the source for pools backed by storage from a
-remote server. Will be used in combination with a directory
-or device element. Contains an attribute name
-which is the hostname or IP address of the server. May optionally
-contain a port attribute for the protocol specific
-port number.
-
format
-
Provides information about the format of the pool. This
-contains a single attribute type whose value is
-backend specific. This is typically used to indicate filesystem
-type, or network filesystem type, or partition table type, or
-LVM metadata type. All drivers are required to have a default
-value for this, so it is optional.
path
-
Provides the location at which the pool will be mapped into
the local filesystem namespace. For a filesystem/directory based
pool it will be the name of the directory in which volumes will
be created. For device based pools it will be the name of the directory in which
device nodes exist. For the latter, /dev/ may seem
like the logical choice, however, device nodes there are not
guaranteed to be stable across reboots, since they are allocated on
demand. It is preferable to use a stable location such as one
of the /dev/disk/by-{path,id,uuid,label} locations.
-
-
permissions
-
Provides information about the default permissions to use
-when creating volumes. This is currently only useful for directory
-or filesystem based pools, where the volumes allocated are simple
-files. For pools where the volumes are device nodes, the hotplug
-scripts determine permissions. It contains 4 child elements. The
-mode element contains the octal permission set. The
-owner element contains the numeric user ID. The group
-element contains the numeric group ID. The label element
-contains the MAC (eg SELinux) label string.
-
If a storage pool exposes information about its underlying
placement / allocation scheme, the device element
within the source element may contain information
about its available extents. Some pools have a constraint that
a volume must be allocated entirely within a single extent
(eg disk partition pools). Thus the extent information allows an
application to determine the maximum possible size for a new
volume.
-
-
-
-For storage pools supporting extent information, within each
-device element there will be zero or more freeExtent
-elements. Each of these elements contains two attributes, start
-and end which provide the boundaries of the extent on the
-device, measured in bytes.
-
name
-
Providing a name for the volume which is unique to the pool.
This is mandatory when defining a volume.
-
-
uuid
-
Providing an identifier for the volume which is globally unique.
This is optional when defining a volume; a UUID will be generated if
omitted.
-
-
allocation
-
Providing the total storage allocation for the volume. This
-may be smaller than the logical capacity if the volume is sparsely
-allocated. It may also be larger than the logical capacity if the
-volume has substantial metadata overhead. This value is in bytes.
-If omitted when creating a volume, the volume will be fully
-allocated at time of creation. If set to a value smaller than the
-capacity, the pool has the option of deciding
-to sparsely allocate a volume. It does not have to honour requests
-for sparse allocation though.
-
-
capacity
-
Providing the logical capacity for the volume. This value is
-in bytes. This is compulsory when creating a volume
-
-
source
-
Provides information about the underlying storage allocation
-of the volume. This may not be available for some pool types.
-
-
target
-
Provides information about the representation of the volume
-on the local host.
path
-
Provides the location at which the volume will be mapped into
the local filesystem namespace. For a filesystem/directory based
pool it will be a file under the directory in which volumes are
created. For device based pools it will be a device node under the directory in which
device nodes exist. For the latter, /dev/ may seem
like the logical choice, however, device nodes there are not
guaranteed to be stable across reboots, since they are allocated on
demand. It is preferable to use a stable location such as one
of the /dev/disk/by-{path,id,uuid,label} locations.
-
-
format
-
Provides information about the pool specific volume format.
For disk pools it will provide the partition type. For filesystem
or directory pools it will provide the file format type, eg cow,
qcow, vmdk, raw. If omitted when creating a volume, the pool's
default format will be used. The actual format is specified via
the type attribute. Consult the pool-specific docs for the
list of valid values.
-
permissions
-
Provides information about the default permissions to use
-when creating volumes. This is currently only useful for directory
-or filesystem based pools, where the volumes allocated are simple
-files. For pools where the volumes are device nodes, the hotplug
-scripts determine permissions. It contains 4 child elements. The
-mode element contains the octal permission set. The
-owner element contains the numeric user ID. The group
-element contains the numeric group ID. The label element
-contains the MAC (eg SELinux) label string.
-
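To tie the element descriptions above to the API, here is a minimal sketch
that creates a volume in an existing directory based pool; the pool name
"default" and the volume details are purely illustrative:

#include <stdio.h>
#include <stdlib.h>
#include <libvirt/libvirt.h>

/* Volume XML using the elements described above: name, capacity (bytes),
   allocation (bytes, 0 requests sparse allocation) and target format. */
static const char *volxml =
    "<volume>"
    "  <name>scratch.img</name>"
    "  <capacity>1073741824</capacity>"
    "  <allocation>0</allocation>"
    "  <target><format type='raw'/></target>"
    "</volume>";

int main(void)
{
    virConnectPtr conn = virConnectOpen(NULL);
    virStoragePoolPtr pool;
    virStorageVolPtr vol;

    if (conn == NULL)
        return 1;

    pool = virStoragePoolLookupByName(conn, "default");
    if (pool != NULL) {
        vol = virStorageVolCreateXML(pool, volxml, 0);
        if (vol != NULL) {
            char *path = virStorageVolGetPath(vol);
            printf("created volume at %s\n", path ? path : "(unknown)");
            free(path);
            virStorageVolFree(vol);
        }
        virStoragePoolFree(pool);
    }

    virConnectClose(conn);
    return 0;
}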
A pool with a type of dir provides the means to manage
files within a directory. The files can be fully allocated raw files,
sparsely allocated raw files, or one of the special disk formats
such as qcow, qcow2, vmdk,
cow, etc. as supported by the qemu-img
program. If the directory does not exist at the time the pool is
defined, the build operation can be used to create it.
-
-The directory pool does not use the pool format type element.
-
-
-
Valid volume format types
-
-
-One of the following options:
-
-
-
-
raw: a plain file
-
bochs: Bochs disk image format
-
cloop: compressed loopback disk image format
-
cow: User Mode Linux disk image format
-
dmg: Mac disk image format
-
iso: CDROM disk image format
-
qcow: QEMU v1 disk image format
-
qcow2: QEMU v2 disk image format
-
vmdk: VMWare disk image format
-
vpc: VirtualPC disk image format
-
-
-
-When listing existing volumes all these formats are supported
-natively. When creating new volumes, only a subset may be
-available. The raw type is guaranteed always
-available. The qcow2 type can be created if
-either qemu-img or qcow-create tools
-are present. The others are dependent on support of the
-qemu-img tool.
-
-
-This is a variant of the directory pool. Instead of creating a
-directory on an existing mounted filesystem though, it expects
-a source block device to be named. This block device will be
-mounted and files managed in the directory of its mount point.
-It will default to allowing the kernel to automatically discover
-the filesystem type, though it can be specified manually if
-required.
-
-This is a variant of the filesystem pool. Instead of requiring
-a local block device as the source, it requires the name of a
-host and path of an exported directory. It will mount this network
-filesystem and manage files within the directory of its mount
-point. It will default to using NFS as the protocol.
-
-This provides a pool based on an LVM volume group. For a
-pre-defined LVM volume group, simply providing the group
-name is sufficient, while to build a new group requires
-providing a list of source devices to serve as physical
-volumes. Volumes will be allocated by carving out chunks
-of storage from the volume group.
-
This provides a pool based on a physical disk. Volumes are created
by adding partitions to the disk. Disk pools have constraints
on the size and placement of volumes. The 'free extents'
information will detail the regions which are available for creating
new volumes. A volume cannot span two different free extents.
-
-This provides a pool based on an iSCSI target. Volumes must be
-pre-allocated on the iSCSI server, and cannot be created via
-the libvirt APIs. Since /dev/XXX names may change each time libvirt
-logs into the iSCSI target, it is recommended to configure the pool
-to use /dev/disk/by-path or /dev/disk/by-id
-for the target path. These provide persistent stable naming for LUNs
-