From 9092c3d491047539e376a496d15202495ac701cf Mon Sep 17 00:00:00 2001 From: "Daniel P. Berrange" Date: Wed, 23 Apr 2008 17:08:31 +0000 Subject: [PATCH] Split website out into one file per page. APply new layout and styling --- ChangeLog | 14 + docs/ChangeLog.xsl | 72 +- docs/FAQ.html | 229 +- docs/FAQ.html.in | 144 + docs/Makefile.am | 94 +- docs/apps.html | 154 + docs/apps.html.in | 108 + docs/archdomain.html | 107 + docs/archdomain.html.in | 5 + docs/architecture.html | 161 +- docs/architecture.html.in | 101 + docs/archnetwork.html | 136 + docs/archnetwork.html.in | 50 + docs/archnode.html | 107 + docs/archnode.html.in | 5 + docs/archstorage.html | 126 + docs/archstorage.html.in | 30 + docs/auth.html | 200 +- docs/auth.html.in | 183 ++ docs/background.png | Bin 267 -> 0 bytes docs/bindings.html | 115 + docs/bindings.html.in | 24 + docs/bugs.html | 143 +- docs/bugs.html.in | 82 + docs/contact.html | 107 + docs/contact.html.in | 37 + docs/deployment.html | 139 + docs/deployment.html.in | 46 + docs/docs.html | 98 + docs/docs.html.in | 5 + docs/downloads.html | 149 +- docs/downloads.html.in | 89 + docs/drivers.html | 125 + docs/drivers.html.in | 27 + docs/drvlxc.html | 113 + docs/drvlxc.html.in | 5 + docs/drvopenvz.html | 113 + docs/drvopenvz.html.in | 5 + docs/drvqemu.html | 191 ++ docs/drvqemu.html.in | 97 + docs/drvremote.html | 113 + docs/drvremote.html.in | 5 + docs/drvtest.html | 113 + docs/drvtest.html.in | 5 + docs/drvxen.html | 307 ++ docs/drvxen.html.in | 221 ++ docs/errors.html | 156 +- docs/errors.html.in | 83 + docs/footer_corner.png | Bin 0 -> 2359 bytes docs/footer_pattern.png | Bin 0 -> 817 bytes docs/format.html | 526 +--- docs/format.html.in | 9 + docs/formatcaps.html | 172 ++ docs/formatcaps.html.in | 70 + docs/formatdomain.html | 314 ++ docs/formatdomain.html.in | 255 ++ docs/formatnetwork.html | 145 + docs/formatnetwork.html.in | 50 + docs/formatnode.html | 109 + docs/formatnode.html.in | 5 + docs/formatstorage.html | 275 ++ docs/formatstorage.html.in | 237 ++ docs/generic.css | 75 + docs/html/book1.html | 3 - docs/html/index.html | 5 +- docs/html/libvirt-conf.html | 44 - docs/html/libvirt-lib.html | 3 - docs/html/libvirt-libvirt.html | 935 +++--- docs/html/libvirt-virterror.html | 173 +- docs/hvsupport.html | 508 +--- docs/hvsupport.html.in | 594 ++++ docs/index.html | 236 +- docs/index.html.in | 68 + docs/intro.html | 142 +- docs/intro.html.in | 46 + docs/libvir.html | 4596 ------------------------------ docs/libvirt-api.xml | 6 +- docs/libvirt-header-bg.png | Bin 0 -> 1136 bytes docs/libvirt-header-logo.png | Bin 0 -> 25945 bytes docs/libvirt-net-logical.fig | 159 ++ docs/libvirt-net-logical.png | Bin 0 -> 7387 bytes docs/libvirt-net-physical.fig | 139 + docs/libvirt-net-physical.png | Bin 0 -> 10666 bytes docs/libvirt-refs.xml | 7 + docs/libvirt.css | 493 ++-- docs/libvirtHeader.png | Bin 6635 -> 0 bytes docs/main.css | 2 + docs/newapi.xsl | 376 ++- docs/news.html | 440 ++- docs/news.html.in | 607 ++++ docs/page.xsl | 124 + docs/python.html | 169 +- docs/python.html.in | 71 + docs/relatedlinks.html | 121 + docs/relatedlinks.html.in | 64 + docs/remote.html | 710 +++-- docs/remote.html.in | 893 ++++++ docs/site.xsl | 407 +-- docs/sitemap.html | 241 ++ docs/sitemap.html.in | 224 ++ docs/storage.html | 917 +++--- docs/storage.html.in | 354 +++ docs/uri.html | 330 ++- docs/uri.html.in | 295 ++ docs/windows.html | 290 +- docs/windows.html.in | 239 ++ 106 files changed, 13380 insertions(+), 8632 deletions(-) create mode 100644 docs/FAQ.html.in create mode 100644 
docs/apps.html create mode 100644 docs/apps.html.in create mode 100644 docs/archdomain.html create mode 100644 docs/archdomain.html.in create mode 100644 docs/architecture.html.in create mode 100644 docs/archnetwork.html create mode 100644 docs/archnetwork.html.in create mode 100644 docs/archnode.html create mode 100644 docs/archnode.html.in create mode 100644 docs/archstorage.html create mode 100644 docs/archstorage.html.in create mode 100644 docs/auth.html.in delete mode 100644 docs/background.png create mode 100644 docs/bindings.html create mode 100644 docs/bindings.html.in create mode 100644 docs/bugs.html.in create mode 100644 docs/contact.html create mode 100644 docs/contact.html.in create mode 100644 docs/deployment.html create mode 100644 docs/deployment.html.in create mode 100644 docs/docs.html create mode 100644 docs/docs.html.in create mode 100644 docs/downloads.html.in create mode 100644 docs/drivers.html create mode 100644 docs/drivers.html.in create mode 100644 docs/drvlxc.html create mode 100644 docs/drvlxc.html.in create mode 100644 docs/drvopenvz.html create mode 100644 docs/drvopenvz.html.in create mode 100644 docs/drvqemu.html create mode 100644 docs/drvqemu.html.in create mode 100644 docs/drvremote.html create mode 100644 docs/drvremote.html.in create mode 100644 docs/drvtest.html create mode 100644 docs/drvtest.html.in create mode 100644 docs/drvxen.html create mode 100644 docs/drvxen.html.in create mode 100644 docs/errors.html.in create mode 100644 docs/footer_corner.png create mode 100644 docs/footer_pattern.png create mode 100644 docs/format.html.in create mode 100644 docs/formatcaps.html create mode 100644 docs/formatcaps.html.in create mode 100644 docs/formatdomain.html create mode 100644 docs/formatdomain.html.in create mode 100644 docs/formatnetwork.html create mode 100644 docs/formatnetwork.html.in create mode 100644 docs/formatnode.html create mode 100644 docs/formatnode.html.in create mode 100644 docs/formatstorage.html create mode 100644 docs/formatstorage.html.in create mode 100644 docs/generic.css delete mode 100644 docs/html/book1.html delete mode 100644 docs/html/libvirt-conf.html delete mode 100644 docs/html/libvirt-lib.html create mode 100644 docs/hvsupport.html.in create mode 100644 docs/index.html.in create mode 100644 docs/intro.html.in delete mode 100644 docs/libvir.html create mode 100644 docs/libvirt-header-bg.png create mode 100644 docs/libvirt-header-logo.png create mode 100644 docs/libvirt-net-logical.fig create mode 100644 docs/libvirt-net-logical.png create mode 100644 docs/libvirt-net-physical.fig create mode 100644 docs/libvirt-net-physical.png delete mode 100644 docs/libvirtHeader.png create mode 100644 docs/main.css create mode 100644 docs/news.html.in create mode 100644 docs/page.xsl create mode 100644 docs/python.html.in create mode 100644 docs/relatedlinks.html create mode 100644 docs/relatedlinks.html.in create mode 100644 docs/remote.html.in create mode 100644 docs/sitemap.html create mode 100644 docs/sitemap.html.in create mode 100644 docs/storage.html.in create mode 100644 docs/uri.html.in create mode 100644 docs/windows.html.in diff --git a/ChangeLog b/ChangeLog index 67e8d6f9fc..0027269431 100644 --- a/ChangeLog +++ b/ChangeLog @@ -1,3 +1,17 @@ +Wed Apr 23 12:18:11 EST 2008 Daniel P. 
Berrange + + * docs/libvir.html, docs/*.html.in: Removed merged HTML docs + and replaced with one file per page + * docs/*.html: Re-generated with new page layout + * docs/page.xsl: New master page template and navigation + * docs/site.xsl, docs/newapi.xsl, docs/ChangeLog.xsl: Updated + to use new page.xsl templates + * libvirt-net-*.{fig,png}: Added diagrams illustrating some + ways of using virtual networking + * docs/*.css: New styles for site + * docs/html/*: Re-generated for new page layout & removed + unused files + 2008-04-21 Jim Meyering Enable 'make syntax-check's sc_changelog rule. diff --git a/docs/ChangeLog.xsl b/docs/ChangeLog.xsl index 00fecf11c0..3adde586b9 100644 --- a/docs/ChangeLog.xsl +++ b/docs/ChangeLog.xsl @@ -3,83 +3,35 @@ - - - - libvirt - - - API Menu - -
- - -
- -
- - - - - -
  • -

    - - - - - + + + +

    -

    - ChangeLog last entries of - - - - - - - - -
    -
    -
    - - - -
    - - + + +

    Log of recent changes to libvirt

    +
    + +
    + + diff --git a/docs/FAQ.html b/docs/FAQ.html index 4dca383f2c..e7f1260676 100644 --- a/docs/FAQ.html +++ b/docs/FAQ.html @@ -1,80 +1,205 @@ -FAQ

    FAQ

    Table of Contents:

    License(s)

    1. Licensing Terms for libvirt -

      libvirt is released under the GNU Lesser + + + + + + + libvirt: FAQ + + + +

      +
      +
      +

      FAQ

      +

      Table of Contents:

      +
      +

      License(s)

      +
      1. + Licensing Terms for libvirt +

        libvirt is released under the GNU Lesser General Public License, see the file COPYING.LIB in the distribution for the precise wording. The only library that libvirt depends upon is the Xen store access library which is also licenced under the LGPL.

        -
      2. -
      3. Can I embed libvirt in a proprietary application ? -

        Yes. The LGPL allows you to embed libvirt into a proprietary +

      4. + Can I embed libvirt in a proprietary application ? +

        Yes. The LGPL allows you to embed libvirt into a proprietary application. It would be graceful to send-back bug fixes and improvements as patches for possible incorporation in the main development tree. It will decrease your maintenance costs anyway if you do so.

        -
      5. -

      Installation

      1. Where can I get libvirt ? +
      +

      + Installation +

      +
      1. Where can I get libvirt ?

        The original distribution comes from ftp://libvirt.org/libvirt/.

        -
      2. -
      3. I can't install the libvirt/libvirt-devel RPM packages due to +
      4. + I can't install the libvirt/libvirt-devel RPM packages due to failed dependencies -

        The most generic solution is to re-fetch the latest src.rpm , and +

        The most generic solution is to re-fetch the latest src.rpm , and rebuild it locally with

        -

        rpm --rebuild libvirt-xxx.src.rpm.

        -

        If everything goes well it will generate two binary rpm packages (one +

        rpm --rebuild libvirt-xxx.src.rpm.

        +

        If everything goes well it will generate two binary rpm packages (one providing the shared libs and virsh, and the other one, the -devel package, providing includes, static libraries and scripts needed to build applications with libvirt that you can install locally.

        -

        One can also rebuild the RPMs from a tarball:

        -

        rpmbuild -ta libdir-xxx.tar.gz

        -

        Or from a configured tree with:

        -

        make rpm

        -
      5. -
      6. Failure to use the API for non-root users -

        Large parts of the API may only be accessible with root privileges, +

        One can also rebuild the RPMs from a tarball:

        +

        + rpmbuild -ta libdir-xxx.tar.gz +

        +

        Or from a configured tree with:

        +

        + make rpm +

        +
      7. + Failure to use the API for non-root users +

        Large parts of the API may only be accessible with root privileges, however the read only access to the xenstore data doesnot have to be forbidden to user, at least for monitoring purposes. If "virsh dominfo" fails to run as an user, change the mode of the xenstore read-only socket with:

        -

        chmod 666 /var/run/xenstored/socket_ro

        -

        and also make sure that the Xen Daemon is running correctly with local +

        + chmod 666 /var/run/xenstored/socket_ro +

        +

        and also make sure that the Xen Daemon is running correctly with local HTTP server enabled, this is defined in /etc/xen/xend-config.sxp which need the following line to be enabled:

        -

        (xend-http-server yes)

        -

        If needed restart the xend daemon after making the change with the +

        + (xend-http-server yes) +

        +

        If needed restart the xend daemon after making the change with the following command run as root:

        -

        service xend restart

        -
      8. -

      Compilation

      1. What is the process to compile libvirt ? -

        As most UNIX libraries libvirt follows the "standard":

        -

        gunzip -c libvirt-xxx.tar.gz | tar xvf -

        -

        cd libvirt-xxxx

        -

        ./configure --help

        -

        to see the options, then the compilation/installation proper

        -

        ./configure [possible options]

        -

        make

        -

        make install

        -

        At that point you may have to rerun ldconfig or a similar utility to +

        + service xend restart +

        +
      +

      + Compilation +

      +
      1. + What is the process to compile libvirt ? +

        As most UNIX libraries libvirt follows the "standard":

        +

        + gunzip -c libvirt-xxx.tar.gz | tar xvf - +

        +

        + cd libvirt-xxxx +

        +

        + ./configure --help +

        +

        to see the options, then the compilation/installation proper

        +

        + ./configure [possible options] +

        +

        + make +

        +

        + make install +

        +

        At that point you may have to rerun ldconfig or a similar utility to update your list of installed shared libs.

        -
      2. -
      3. What other libraries are needed to compile/install libvirt ? -

        Libvirt requires libxenstore, which is usually provided by the xen +

      4. + What other libraries are needed to compile/install libvirt ? +

        Libvirt requires libxenstore, which is usually provided by the xen packages as well as the public headers to compile against libxenstore.

        -
      5. -
      6. I use the CVS version and there is no configure script -

        The configure script (and other Makefiles) are generated. Use the +

      7. + I use the CVS version and there is no configure script +

        The configure script (and other Makefiles) are generated. Use the autogen.sh script to regenerate the configure script and Makefiles, like:

        -

        ./autogen.sh --prefix=/usr --disable-shared

        -
      8. -

      Developer corner

      1. Troubles compiling or linking programs using libvirt -

        To simplify the process of reusing the library, libvirt comes with +

        + ./autogen.sh --prefix=/usr --disable-shared +

        +
      +

      Developer corner

      +
      1. + Troubles compiling or linking programs using libvirt +

        To simplify the process of reusing the library, libvirt comes with pkgconfig support, which can be used directly from autoconf support or via the pkg-config command line tool, like:

        -

        pkg-config libvirt --libs

        -
      2. -

    +

    + pkg-config libvirt --libs +

    + +
    + +
    + + + diff --git a/docs/FAQ.html.in b/docs/FAQ.html.in new file mode 100644 index 0000000000..a436e789eb --- /dev/null +++ b/docs/FAQ.html.in @@ -0,0 +1,144 @@ + + + +

    FAQ

    +

    Table of Contents:

    + +

    License(s)

    +
      +
    1. + Licensing Terms for libvirt +

      libvirt is released under the GNU Lesser + General Public License, see the file COPYING.LIB in the distribution + for the precise wording. The only library that libvirt depends upon is + the Xen store access library which is also licenced under the LGPL.

      +
    2. +
    3. + Can I embed libvirt in a proprietary application ? +

      Yes. The LGPL allows you to embed libvirt into a proprietary + application. It would be graceful to send-back bug fixes and improvements + as patches for possible incorporation in the main development tree. It + will decrease your maintenance costs anyway if you do so.

      +
    4. +
    +

    + Installation +

    +
      +
    1. Where can I get libvirt ? +

      The original distribution comes from ftp://libvirt.org/libvirt/.

      +
    2. +
    3. + I can't install the libvirt/libvirt-devel RPM packages due to + failed dependencies +

      The most generic solution is to re-fetch the latest src.rpm , and + rebuild it locally with

      +

      rpm --rebuild libvirt-xxx.src.rpm.

      +

      If everything goes well it will generate two binary rpm packages (one + providing the shared libs and virsh, and the other one, the -devel + package, providing includes, static libraries and scripts needed to build + applications with libvirt that you can install locally.

      +

      One can also rebuild the RPMs from a tarball:

      +

      + rpmbuild -ta libdir-xxx.tar.gz +

      +

      Or from a configured tree with:

      +

      + make rpm +

      +
    4. +
    5. + Failure to use the API for non-root users +

      Large parts of the API may only be accessible with root privileges, + however the read only access to the xenstore data doesnot have to be + forbidden to user, at least for monitoring purposes. If "virsh dominfo" + fails to run as an user, change the mode of the xenstore read-only socket + with:

      +

      + chmod 666 /var/run/xenstored/socket_ro +

      +

      and also make sure that the Xen Daemon is running correctly with local + HTTP server enabled, this is defined in + /etc/xen/xend-config.sxp which need the following line to be + enabled:

      +

      + (xend-http-server yes) +

      +

      If needed restart the xend daemon after making the change with the + following command run as root:

      +

      + service xend restart +

      +
    6. +
    +

    + Compilation +

    +
      +
    1. + What is the process to compile libvirt ? +

      As most UNIX libraries libvirt follows the "standard":

      +

      + gunzip -c libvirt-xxx.tar.gz | tar xvf - +

      +

      + cd libvirt-xxxx +

      +

      + ./configure --help +

      +

      to see the options, then the compilation/installation proper

      +

      + ./configure [possible options] +

      +

      + make +

      +

      + make install +

      +

      At that point you may have to rerun ldconfig or a similar utility to + update your list of installed shared libs.

      +
    2. +
    3. + What other libraries are needed to compile/install libvirt ? +

      Libvirt requires libxenstore, which is usually provided by the xen + packages as well as the public headers to compile against libxenstore.

      +
    4. +
    5. + I use the CVS version and there is no configure script +

      The configure script (and other Makefiles) are generated. Use the + autogen.sh script to regenerate the configure script and Makefiles, + like:

      +

      + ./autogen.sh --prefix=/usr --disable-shared +

      +
    6. +
    +

    Developer corner

    +
      +
    1. + Troubles compiling or linking programs using libvirt +

      To simplify the process of reusing the library, libvirt comes with + pkgconfig support, which can be used directly from autoconf support or + via the pkg-config command line tool, like:

      +

      + pkg-config libvirt --libs +

      +
    2. +
    + + diff --git a/docs/Makefile.am b/docs/Makefile.am index fc1153f340..8cb5f06bc6 100644 --- a/docs/Makefile.am +++ b/docs/Makefile.am @@ -4,45 +4,45 @@ SUBDIRS= . examples devhelp # The directory containing the source code (if it contains documentation). DOC_SOURCE_DIR=../src -PAGES= index.html bugs.html FAQ.html remote.html - man_MANS= -html = \ - book1.html \ +apihtml = \ index.html \ - libvirt-conf.html \ - libvirt-lib.html \ libvirt-libvirt.html \ libvirt-virterror.html -png = \ +apipng = \ left.png \ up.png \ home.png \ right.png +png = \ + 16favicon.png \ + 32favicon.png \ + et_logo.png \ + footer_corner.png \ + footer_pattern.png \ + libvirHeader.png \ + libvirLogo.png \ + libvirt-header-bg.png \ + libvirt-header-logo.png \ + libvirtLogo.png \ + libvirt-net-logical.png \ + libvirt-net-physical.png \ + madeWith.png \ + windows-cygwin-1.png \ + windows-cygwin-2.png \ + windows-cygwin-3.png + gif = \ Libxml2-Logo-90x34.gif \ architecture.gif \ node.gif \ redhat.gif -dot_html = \ - FAQ.html \ - architecture.html \ - bugs.html \ - downloads.html \ - errors.html \ - format.html \ - hvsupport.html \ - index.html \ - intro.html \ - libvir.html \ - news.html \ - python.html \ - remote.html \ - uri.html +dot_html_in = $(wildcard *.html.in) +dot_html = $(dot_html_in:%.html.in=%.html) xml = \ libvirt-api.xml \ @@ -57,12 +57,16 @@ rng = \ libvirt.rng \ network.rng +fig = \ + libvirt-net-logical.fig \ + libvirt-net-physical.fig + EXTRA_DIST= \ libvirt-api.xml libvirt-refs.xml apibuild.py \ - site.xsl newapi.xsl news.xsl \ - $(dot_html) $(gif) html \ - $(xml) $(rng) \ - virsh.pod + site.xsl newapi.xsl news.xsl page.xsl ChangeLog.xsl \ + $(dot_html) $(dot_html_in) $(gif) html/*.html html/*.png \ + $(xml) $(rng) $(fig) $(png) \ + virsh.pod ChangeLog.awk all: web $(top_builddir)/NEWS $(man_MANS) @@ -73,18 +77,30 @@ virsh.1: virsh.pod api: libvirt-api.xml libvirt-refs.xml $(srcdir)/html/index.html -web: $(PAGES) +web: $(dot_html) -$(PAGES): libvir.html site.xsl - -@(if [ -x $(XSLTPROC) ] ; then \ - echo "Rebuilding the HTML Web pages from libvir.html" ; \ - $(XSLTPROC) --nonet --html $(top_srcdir)/docs/site.xsl $(top_srcdir)/docs/libvir.html > index.html ; fi ); - -@(if [ -x $(XMLLINT) ] ; then \ - echo "Validating the HTML Web pages" ; \ - $(XMLLINT) --nonet --valid --noout $(PAGES) ; fi ); +ChangeLog.xml: ../ChangeLog ChangeLog.awk + awk -f ChangeLog.awk < $< > $@ + +ChangeLog.html.in: ChangeLog.xml ChangeLog.xsl + @(if [ -x $(XSLTPROC) ] ; then \ + echo "Generating $@"; \ + name=`echo $@ | sed -e 's/.tmp//'`; \ + $(XSLTPROC) --nonet $(top_srcdir)/docs/ChangeLog.xsl $< > $@ || (rm $@ && exit 1) ; fi ) + +%.html.tmp: %.html.in site.xsl page.xsl sitemap.html.in + @(if [ -x $(XSLTPROC) ] ; then \ + echo "Generating $@"; \ + name=`echo $@ | sed -e 's/.tmp//'`; \ + $(XSLTPROC) --stringparam pagename $$name --nonet --html $(top_srcdir)/docs/site.xsl $< > $@ || (rm $@ && exit 1) ; fi ) + +%.html: %.html.tmp + @(if [ -x $(XMLLINT) ] ; then \ + echo "Validating $@" ; \ + $(XMLLINT) --nonet --format --valid $< > $@ || : ; fi ); -$(srcdir)/html/index.html: libvirt-api.xml $(srcdir)/newapi.xsl +$(srcdir)/html/index.html: libvirt-api.xml newapi.xsl page.xsl sitemap.html.in -@(if [ -x $(XSLTPROC) ] ; then \ echo "Rebuilding the HTML pages from the XML API" ; \ $(XSLTPROC) --nonet $(srcdir)/newapi.xsl libvirt-api.xml ; fi ) @@ -115,11 +131,11 @@ install-data-local: $(srcdir)/redhat.gif $(srcdir)/Libxml2-Logo-90x34.gif \ $(DESTDIR)$(HTML_DIR) $(mkinstalldirs) $(DESTDIR)$(HTML_DIR)/html - for h in $(html); do \ 
+ for h in $(apihtml); do \ $(INSTALL) -m 0644 $(srcdir)/html/$$h $(DESTDIR)$(HTML_DIR)/html; done - for p in $(png); do \ + for p in $(apipng); do \ $(INSTALL) -m 0644 $(srcdir)/html/$$p $(DESTDIR)$(HTML_DIR)/html; done uninstall-local: - for h in $(html); do rm $(DESTDIR)$(HTML_DIR)/html/$$h; done - for p in $(png); do rm $(DESTDIR)$(HTML_DIR)/html/$$p; done + for h in $(apihtml); do rm $(DESTDIR)$(HTML_DIR)/html/$$h; done + for p in $(apipng); do rm $(DESTDIR)$(HTML_DIR)/html/$$p; done diff --git a/docs/apps.html b/docs/apps.html new file mode 100644 index 0000000000..252103e363 --- /dev/null +++ b/docs/apps.html @@ -0,0 +1,154 @@ + + + + + + + + + libvirt: Applications using libvirt + + + + +
    +
    +

    Applications using libvirt

    +

    + This page provides an illustration of the wide variety of + applications using the libvirt management API. If you know + of interesting applications not listed on this page, send + a message to the mailing list + to request that it be added here. If your application uses + libvirt as its API, the following graphic is available for + your website to advertise support for libvirt: +

    +

    + Made with libvirt

    +

    Command line tools

    +
    virsh
    + An interactive shell, and batch scriptable tool for performing + management tasks on all libvirt managed domains, networks and + storage. This is part of the libvirt core distribution. +
    virt-install
    + Provides a way to provision new virtual machines from an + OS distribution install tree. It supports provisioning from + local CD images, and the network over NFS, HTTP and FTP. +&#13;
    virt-clone
    + Allows the disk image(s) and configuration for an existing + virtual machine to be cloned to form a new virtual machine. + It automates copying of data across to new disk images, and + updates the UUID, Mac address and name in the configuration +
    virt-image
    + Provides a way to deploy virtual appliances. It defines a + simplified portable XML format describing the pre-requisites + of a virtual machine. At time of deployment this is translated + into the domain XML format for execution under any libvirt + hypervisor meeting the pre-requisites. +
    virt-df
    + Examine the utilization of each filesystem in a virtual machine + from the comfort of the host machine. This tool peeks into the + guest disks and determines how much space is used. It can cope + with common Linux filesystems and LVM volumes. +
    virt-top
    + Watch the CPU, memory, network and disk utilization of all + virtual machines running on a host. +
    +

    Desktop applications

    +
    virt-manager
    + A general purpose desktop management tool, able to manage + virtual machines across both local and remotely accessed + hypervisors. It is targeted at home and small office usage, + up to managing 10-20 hosts and their VMs. +&#13;
    virt-viewer
    + A lightweight tool for accessing the graphical console + associated with a virtual machine. It can securely connect + to remote consoles supporting the VNC protocol. Also provides + an optional mozilla browser plugin. +
    +

    Web applications

    +
    oVirt
    + oVirt provides the ability to manage large numbers of virtual + machines across an entire data center of hosts. It integrates + with FreeIPA for Kerberos authentication, and in the future, + certificate management. +
    +

    LiveCD / Appliances

    +
    virt-p2v
    + A tool for converting a physical machine into a virtual machine. It + is a LiveCD which is booted on the machine to be converted. It collects + a little information from the user and then copies the disks over to + a remote machine and defines the XML for a domain to run the guest. +
    +
    + +
    + + + diff --git a/docs/apps.html.in b/docs/apps.html.in new file mode 100644 index 0000000000..31c956b3ae --- /dev/null +++ b/docs/apps.html.in @@ -0,0 +1,108 @@ + + +

    Applications using libvirt

    + +

    + This page provides an illustration of the wide variety of + applications using the libvirt management API. If you know + of interesting applications not listed on this page, send + a message to the mailing list + to request that it be added here. If your application uses + libvirt as its API, the following graphic is available for + your website to advertise support for libvirt: +

    + +

    + Made with libvirt +

    + +

    Command line tools

    + +
    +
    virsh
    +
    + An interactive shell, and batch scriptable tool for performing + management tasks on all libvirt managed domains, networks and + storage. This is part of the libvirt core distribution. +
    +
    virt-install
    +
    + Provides a way to provision new virtual machines from an + OS distribution install tree. It supports provisioning from + local CD images, and the network over NFS, HTTP and FTP. +&#13;
    +
    virt-clone
    +
    + Allows the disk image(s) and configuration for an existing + virtual machine to be cloned to form a new virtual machine. + It automates copying of data across to new disk images, and + updates the UUID, Mac address and name in the configuration +
    +
    virt-image
    +
    + Provides a way to deploy virtual appliances. It defines a + simplified portable XML format describing the pre-requisites + of a virtual machine. At time of deployment this is translated + into the domain XML format for execution under any libvirt + hypervisor meeting the pre-requisites. +
    +
    virt-df
    +
    + Examine the utilization of each filesystem in a virtual machine + from the comfort of the host machine. This tool peeks into the + guest disks and determines how much space is used. It can cope + with common Linux filesystems and LVM volumes. +
    +
    virt-top
    +
    + Watch the CPU, memory, network and disk utilization of all + virtual machines running on a host. +
    +
    + +

    Desktop applications

    + +
    +
    virt-manager
    +
    + A general purpose desktop management tool, able to manage + virtual machines across both local and remotely accessed + hypervisors. It is targeted at home and small office usage, + up to managing 10-20 hosts and their VMs. +&#13;
    +
    virt-viewer
    +
    + A lightweight tool for accessing the graphical console + associated with a virtual machine. It can securely connect + to remote consoles supporting the VNC protocol. Also provides + an optional mozilla browser plugin. +
    +
    + +

    Web applications

    + +
    +
    oVirt
    +
    + oVirt provides the ability to manage large numbers of virtual + machines across an entire data center of hosts. It integrates + with FreeIPA for Kerberos authentication, and in the future, + certificate management. +
    +
    + +

    LiveCD / Appliances

    + +
    +
    virt-p2v
    +
    + A tool for converting a physical machine into a virtual machine. It + is a LiveCD which is booted on the machine to be converted. It collects + a little information from the user and then copies the disks over to + a remote machine and defines the XML for a domain to run the guest. +
    +
    + + + + diff --git a/docs/archdomain.html b/docs/archdomain.html new file mode 100644 index 0000000000..ff8f8e00aa --- /dev/null +++ b/docs/archdomain.html @@ -0,0 +1,107 @@ + + + + + + + + + libvirt: Domain management architecture + + + + +
    +
    +

    Domain management architecture

    +
    + +
    + + + diff --git a/docs/archdomain.html.in b/docs/archdomain.html.in new file mode 100644 index 0000000000..294fecb37a --- /dev/null +++ b/docs/archdomain.html.in @@ -0,0 +1,5 @@ + + +

    Domain management architecture

    + + diff --git a/docs/architecture.html b/docs/architecture.html index 612b2e03b8..49848d4317 100644 --- a/docs/architecture.html +++ b/docs/architecture.html @@ -1,11 +1,44 @@ -libvirt architecture

    libvirt architecture

    Currently libvirt supports 2 kind of virtualization, and its + + + + + + + libvirt: libvirt architecture + + + +

    +
    +
    +

    libvirt architecture

    +

    Currently libvirt supports 2 kind of virtualization, and its internal structure is based on a driver model which simplifies adding new -engines:

    Libvirt Xen support

    When running in a Xen environment, programs using libvirt have to execute +engines:

    + +

    + Libvirt Xen support +

    +

    When running in a Xen environment, programs using libvirt have to execute in "Domain 0", which is the primary Linux OS loaded on the machine. That OS kernel provides most if not all of the actual drivers used by the set of domains. It also runs the Xen Store, a database of information shared by the @@ -13,22 +46,27 @@ hypervisor, the kernels, the drivers and the xen daemon. Xend. The xen daemon supervise the control and execution of the sets of domains. The hypervisor, drivers, kernels and daemons communicate though a shared system bus implemented in the hypervisor. The figure below tries to provide a view of -this environment:

    The Xen architecture

    The library can be initialized in 2 ways depending on the level of +this environment:

    + The Xen architecture +

    The library can be initialized in 2 ways depending on the level of privilege of the embedding program. If it runs with root access, virConnectOpen() can be used, it will use three different ways to connect to -the Xen infrastructure:

    • a connection to the Xen Daemon though an HTTP RPC layer
    • -
    • a read/write connection to the Xen Store
    • -
    • use Xen Hypervisor calls
    • -
    • when used as non-root libvirt connect to a proxy daemon running - as root and providing read-only support
    • -

    The library will usually interact with the Xen daemon for any operation +the Xen infrastructure:

    +
    • a connection to the Xen Daemon though an HTTP RPC layer
    • a read/write connection to the Xen Store
    • use Xen Hypervisor calls
    • when used as non-root libvirt connect to a proxy daemon running + as root and providing read-only support
    +

    The library will usually interact with the Xen daemon for any operation changing the state of the system, but for performance and accuracy reasons may talk directly to the hypervisor when gathering state information at least when possible (i.e. when the running program using libvirt has root -privilege access).

    If it runs without root access virConnectOpenReadOnly() should be used to +privilege access).

    +

    If it runs without root access virConnectOpenReadOnly() should be used to connect to initialize the library. It will then fork a libvirt_proxy program running as root and providing read_only access to the API, this is -then only useful for reporting and monitoring.

    Libvirt QEmu and KVM support

    The model for QEmu and KVM is completely similar, basically KVM is based +then only useful for reporting and monitoring.

    +

    + Libvirt QEmu and KVM support +

    +

    The model for QEmu and KVM is completely similar, basically KVM is based on QEmu for the process controlling a new domain, only small details differs between the two. In both case the libvirt API is provided by a controlling process forked by libvirt in the background and which launch and control the @@ -36,8 +74,13 @@ QEmu or KVM process. That program called libvirt_qemud talks though a specific protocol to the library, and connects to the console of the QEmu process in order to control and report on its status. Libvirt tries to expose all the emulations models of QEmu, the selection is done when creating the new -domain, by specifying the architecture and machine type targeted.

    The code controlling the QEmu process is available in the -qemud/ directory.

    the driver based architecture

    As the previous section explains, libvirt can communicate using different +domain, by specifying the architecture and machine type targeted.

    +

    The code controlling the QEmu process is available in the +qemud/ directory.

    +

    + the driver based architecture +

    +

    As the previous section explains, libvirt can communicate using different channels with the current hypervisor, and should also be able to use different kind of hypervisor. To simplify the internal design, code, ease maintenance and simplify the support of other virtualization engine the @@ -46,22 +89,76 @@ acting as a front-end for the library API and a set of hypervisor drivers defining a common set of routines. That way the Xen Daemon access, the Xen Store one, the Hypervisor hypercall are all isolated in separate C modules implementing at least a subset of the common operations defined by the -drivers present in driver.h:

    • xend_internal: implements the driver functions though the Xen - Daemon
    • -
    • xs_internal: implements the subset of the driver available though the - Xen Store
    • -
    • xen_internal: provide the implementation of the functions possible via - direct hypervisor access
    • -
    • proxy_internal: provide read-only Xen access via a proxy, the proxy code - is in the proxy/directory.
    • -
    • xm_internal: provide support for Xen defined but not running - domains.
    • -
    • qemu_internal: implement the driver functions for QEmu and +drivers present in driver.h:

      +
      • xend_internal: implements the driver functions though the Xen + Daemon
      • xs_internal: implements the subset of the driver available though the + Xen Store
      • xen_internal: provide the implementation of the functions possible via + direct hypervisor access
      • proxy_internal: provide read-only Xen access via a proxy, the proxy code + is in the proxy/directory.
      • xm_internal: provide support for Xen defined but not running + domains.
      • qemu_internal: implement the driver functions for QEmu and KVM virtualization engines. It also uses a qemud/ specific daemon - which interacts with the QEmu process to implement libvirt API.
      • -
      • test: this is a test driver useful for regression tests of the - front-end part of libvirt.
      • -

      Note that a given driver may only implement a subset of those functions, + which interacts with the QEmu process to implement libvirt API.

    • test: this is a test driver useful for regression tests of the + front-end part of libvirt.
    +

    Note that a given driver may only implement a subset of those functions, (for example saving a Xen domain state to disk and restoring it is only possible though the Xen Daemon), in that case the driver entry points for -unsupported functions are initialized to NULL.

    +unsupported functions are initialized to NULL.

    +

    +
    + +
    + + + diff --git a/docs/architecture.html.in b/docs/architecture.html.in new file mode 100644 index 0000000000..8eb64585b5 --- /dev/null +++ b/docs/architecture.html.in @@ -0,0 +1,101 @@ + + +

    libvirt architecture

    +

    Currently libvirt supports 2 kind of virtualization, and its +internal structure is based on a driver model which simplifies adding new +engines:

    + +

    + Libvirt Xen support +

    +

    When running in a Xen environment, programs using libvirt have to execute +in "Domain 0", which is the primary Linux OS loaded on the machine. That OS +kernel provides most if not all of the actual drivers used by the set of +domains. It also runs the Xen Store, a database of information shared by the +hypervisor, the kernels, the drivers and the xen daemon. Xend. The xen daemon +supervise the control and execution of the sets of domains. The hypervisor, +drivers, kernels and daemons communicate though a shared system bus +implemented in the hypervisor. The figure below tries to provide a view of +this environment:

    + The Xen architecture +

    The library can be initialized in 2 ways depending on the level of +privilege of the embedding program. If it runs with root access, +virConnectOpen() can be used, it will use three different ways to connect to +the Xen infrastructure:

    +
      +
    • a connection to the Xen Daemon though an HTTP RPC layer
    • +
    • a read/write connection to the Xen Store
    • +
    • use Xen Hypervisor calls
    • +
    • when used as non-root libvirt connect to a proxy daemon running + as root and providing read-only support
    • +
    +

    The library will usually interact with the Xen daemon for any operation +changing the state of the system, but for performance and accuracy reasons +may talk directly to the hypervisor when gathering state information at +least when possible (i.e. when the running program using libvirt has root +privilege access).

    +

    If it runs without root access virConnectOpenReadOnly() should be used to +connect to initialize the library. It will then fork a libvirt_proxy +program running as root and providing read_only access to the API, this is +then only useful for reporting and monitoring.

    +

    + Libvirt QEmu and KVM support +

    +

    The model for QEmu and KVM is completely similar, basically KVM is based +on QEmu for the process controlling a new domain, only small details differs +between the two. In both case the libvirt API is provided by a controlling +process forked by libvirt in the background and which launch and control the +QEmu or KVM process. That program called libvirt_qemud talks though a specific +protocol to the library, and connects to the console of the QEmu process in +order to control and report on its status. Libvirt tries to expose all the +emulations models of QEmu, the selection is done when creating the new +domain, by specifying the architecture and machine type targeted.

    +

    The code controlling the QEmu process is available in the +qemud/ directory.

    +

    + the driver based architecture +

    +

    As the previous section explains, libvirt can communicate using different +channels with the current hypervisor, and should also be able to use +different kind of hypervisor. To simplify the internal design, code, ease +maintenance and simplify the support of other virtualization engine the +internals have been structured as one core component, the libvirt.c module +acting as a front-end for the library API and a set of hypervisor drivers +defining a common set of routines. That way the Xen Daemon access, the Xen +Store one, the Hypervisor hypercall are all isolated in separate C modules +implementing at least a subset of the common operations defined by the +drivers present in driver.h:

    +
      +
    • xend_internal: implements the driver functions though the Xen + Daemon
    • +
    • xs_internal: implements the subset of the driver available though the + Xen Store
    • +
    • xen_internal: provide the implementation of the functions possible via + direct hypervisor access
    • +
    • proxy_internal: provide read-only Xen access via a proxy, the proxy code + is in the proxy/directory.
    • +
    • xm_internal: provide support for Xen defined but not running + domains.
    • +
    • qemu_internal: implement the driver functions for QEmu and + KVM virtualization engines. It also uses a qemud/ specific daemon + which interacts with the QEmu process to implement libvirt API.
    • +
    • test: this is a test driver useful for regression tests of the + front-end part of libvirt.
    • +
    +

    Note that a given driver may only implement a subset of those functions, +(for example saving a Xen domain state to disk and restoring it is only +possible though the Xen Daemon), in that case the driver entry points for +unsupported functions are initialized to NULL.

    +

    + + diff --git a/docs/archnetwork.html b/docs/archnetwork.html new file mode 100644 index 0000000000..beed44ac3c --- /dev/null +++ b/docs/archnetwork.html @@ -0,0 +1,136 @@ + + + + + + + + + libvirt: Network management architecture + + + + +
    +
    +

    Network management architecture

    +

    Architecture illustration

    +

    + The diagrams below illustrate some of the network configurations + enabled by the libvirt networking APIs +

    +
    • VLAN 1. This virtual network has connectivity + to LAN 2 with traffic forwarded and NATed. +
    • VLAN 2. This virtual network is completely + isolated from any physical LAN. +
    • Guest A. The first network interface is bridged + to the physical LAN 1. The second interface is connected + to a virtual network VLAN 1. +
    • Guest B. The first network interface is connected + to a virtual network VLAN 1, giving it limited NAT + based connectivity to LAN 2. It has a second network interface + connected to VLAN 2. It acts as a router allowing limited + traffic between the two VLANs, thus giving Guest C + connectivity to the physical LAN 2. +&#13;
    • Guest C. The only network interface is connected + to a virtual network VLAN 2. It has no direct connectivity + to a physical LAN, relying on Guest B to route traffic + on its behalf. +
    +

    Logical diagram

    +

    + Logical network architecture

    +

    Physical diagram

    +

    + Physical network architecture

    +
    + +
    + + + diff --git a/docs/archnetwork.html.in b/docs/archnetwork.html.in new file mode 100644 index 0000000000..ab019dbe02 --- /dev/null +++ b/docs/archnetwork.html.in @@ -0,0 +1,50 @@ + + +

    Network management architecture

    + +

    Architecture illustration

    + +

    + The diagrams below illustrate some of the network configurations + enabled by the libvirt networking APIs +

    + +
      +
    • VLAN 1. This virtual network has connectivity + to LAN 2 with traffic forwarded and NATed. +
    • +
    • VLAN 2. This virtual network is completely + isolated from any physical LAN. +
    • +
    • Guest A. The first network interface is bridged + to the physical LAN 1. The second interface is connected + to a virtual network VLAN 1. +
    • +
    • Guest B. The first network interface is connected + to a virtual network VLAN 1, giving it limited NAT + based connectivity to LAN 2. It has a second network interface + connected to VLAN 2. It acts as a router allowing limited + traffic between the two VLANs, thus giving Guest C + connectivity to the physical LAN 2. +&#13;
    • +
    • Guest C. The only network interface is connected + to a virtual network VLAN 2. It has no direct connectivity + to a physical LAN, relying on Guest B to route traffic + on its behalf. +
    • +
    + +

    Logical diagram

    + +

    + Logical network architecture +

    + +

    Physical diagram

    + +

    + Physical network architecture +

    + + + diff --git a/docs/archnode.html b/docs/archnode.html new file mode 100644 index 0000000000..0223e5e46b --- /dev/null +++ b/docs/archnode.html @@ -0,0 +1,107 @@ + + + + + + + + + libvirt: Node device management architecture + + + + +
    +
    +

    Node device management architecture

    +
    + +
    + + + diff --git a/docs/archnode.html.in b/docs/archnode.html.in new file mode 100644 index 0000000000..b3d50dd83d --- /dev/null +++ b/docs/archnode.html.in @@ -0,0 +1,5 @@ + + +

    Node device management architecture

    + + diff --git a/docs/archstorage.html b/docs/archstorage.html new file mode 100644 index 0000000000..d1dfdb01b0 --- /dev/null +++ b/docs/archstorage.html @@ -0,0 +1,126 @@ + + + + + + + + + libvirt: Storage management architecture + + + + +
    +
    +

    Storage management architecture

    +

    + The storage management APIs are based around 2 core concepts +

    +
    1. + Volume - a single storage volume which can + be assigned to a guest, or used for creating further pools. A + volume is either a block device, a raw file, or a special format + file. +
    2. + Pool - provides a means for taking a chunk + of storage and carving it up into volumes. A pool can be used to + manage things such as a physical disk, a NFS server, a iSCSI target, + a host adapter, an LVM group. +
    +

    + These two concepts are mapped through to two libvirt objects, a + virStorageVolPtr and a virStoragePoolPtr, + each with a collection of APIs for their management. +

    +
    + +
    + + + diff --git a/docs/archstorage.html.in b/docs/archstorage.html.in new file mode 100644 index 0000000000..9bdbe53e3b --- /dev/null +++ b/docs/archstorage.html.in @@ -0,0 +1,30 @@ + + +

    Storage management architecture

    + +

    + The storage management APIs are based around 2 core concepts +

    +
      +
    1. + Volume - a single storage volume which can + be assigned to a guest, or used for creating further pools. A + volume is either a block device, a raw file, or a special format + file. +
    2. +
    3. + Pool - provides a means for taking a chunk + of storage and carving it up into volumes. A pool can be used to + manage things such as a physical disk, a NFS server, a iSCSI target, + a host adapter, an LVM group. +
    4. +
    + +

    + These two concepts are mapped through to two libvirt objects, a + virStorageVolPtr and a virStoragePoolPtr, + each with a collection of APIs for their management. +

    + + + diff --git a/docs/auth.html b/docs/auth.html index 43910cfab5..748656f421 100644 --- a/docs/auth.html +++ b/docs/auth.html @@ -1,16 +1,51 @@ -Access control

    Access control

    + + + + + + + libvirt: Access control + + + +

    +
    +
    +

    Access control

    +

    When connecting to libvirt, some connections may require client authentication before allowing use of the APIs. The set of possible authentication mechanisms is administrator controlled, independent of applications using libvirt. -

    Server configuration

    +

    + +

    + Server configuration +

    +

    The libvirt daemon allows the administrator to choose the authentication mechanisms used for client connections on each network socket independently. This is primarily controlled via the libvirt daemon master config file in @@ -19,21 +54,30 @@ have its authentication mechanism configured independently. There is currently a choice of none, polkit, and sasl. The SASL scheme can be further configured to choose between a large number of different mechanisms. -

    UNIX socket permissions/group

    +

    +

    + UNIX socket permissions/group +

    +

    If libvirt does not contain support for PolicyKit, then access control for the UNIX domain socket is done using traditional file user/group ownership and permissions. There are 2 sockets, one for full read-write access, the other for read-only access. The RW socket will be restricted (mode 0700) to only allow the root user to connect. The read-only socket will be open access (mode 0777) to allow any user to connect. -

    +

    +

    To allow non-root users greater access, the libvirtd.conf file can be edited to change the permissions via the unix_sock_rw_perms, config parameter and to set a user group via the unix_sock_group parameter. For example, setting the former to mode 0770 and the latter wheel would let any user in the wheel group connect to the libvirt daemon. -

    UNIX socket PolicyKit auth

    +

    +

    + UNIX socket PolicyKit auth +

    +

    If libvirt contains support for PolicyKit, then access control options are more advanced. The unix_sock_auth parameter will default to polkit, and the file permissions will default to 0777 @@ -43,24 +87,31 @@ RW daemon socket will require any application running in the current desktop session to authenticate using the user's password. This is akin to sudo auth, but does not require that the client application ultimately run as root. Default policy will still allow any application to connect to the RO socket. -

    +

    +

    The default policy can be overridden by the administrator using the PolicyKit master configuration file in /etc/PolicyKit/PolicyKit.conf. The PolicyKit.conf(5) manual page provides details on the syntax available. The two libvirt daemon actions available are named org.libvirt.unix.monitor for the RO socket, and org.libvirt.unix.manage for the RW socket. -

    +

    +

    As an example, to allow a user fredfull access to the RW socket, while requiring joe to authenticate with the admin password, would require adding the following snippet to PolicyKit.conf. -

    +

    +
       <match action="org.libvirt.unix.manage" user="fred">
         <return result="yes"/>
       </match>
       <match action="org.libvirt.unix.manage" user="joe">
         <return result="auth_admin"/>
       </match>
    -

    Username/password auth

    +

    +

    + Username/password auth +

    +

    The plain TCP socket of the libvirt daemon defaults to using SASL for authentication. The SASL mechanism configured by default is DIGEST-MD5, which provides a basic username+password style authentication. It also provides for encryption of the data @@ -68,28 +119,38 @@ stream, so the security of the plain TCP socket is on a par with that of the TLS socket. If desired the UNIX socket and TLS socket can also have SASL enabled by setting the auth_unix_ro, auth_unix_rw, auth_tls config params in libvirt.conf. -

    +

    +

    Out of the box, no user accounts are defined, so no clients will be able to authenticate on the TCP socket. Adding users and setting their passwords is done with the saslpasswd2 command. When running this command it is important to tell it that the appname is libvirt. As an example, to add a user fred, run -

    +

    +
     # saslpasswd2 -a libvirt fred
     Password: xxxxxx
     Again (for verification): xxxxxx
    -

    +

    +

    To see a list of all accounts the sasldblistusers2 command can be used. This command expects to be given the path to the libvirt user database, which is kept in /etc/libvirt/passwd.db -

    +

    +
     # sasldblistusers2 -f /etc/libvirt/passwd.db
     fred@t60wlan.home.berrange.com: userPassword
    -

    +

    +

    Finally, to disable a user's access, the saslpasswd2 command can be used again: -

    +

    +
     # saslpasswd2 -a libvirt -d fred
    -

    Kerberos auth

    +

    +

    + Kerberos auth +

    +

    The plain TCP socket of the libvirt daemon defaults to using SASL for authentication. The SASL mechanism configured by default is DIGEST-MD5, which provides a basic username+password style authentication. To enable Kerberos single-sign-on instead, @@ -98,19 +159,22 @@ The mech_list parameter must first be changed to gssapidigest-md5. If SASL is enabled on the UNIX and/or TLS sockets, Kerberos will also be used for them. Like DIGEST-MD5, the Kerberos mechanism provides data encryption of the session. -

    +

    +

    Some operating systems do not install the SASL kerberos plugin by default. It may be necessary to install a sub-package such as cyrus-sasl-gssapi. To check whether the Kerberos plugin is installed run the pluginviewer program and verify that gssapi is listed,eg: -

    +

    +
     # pluginviewer
     ...snip...
     Plugin "gssapiv2" [loaded],     API version: 4
             SASL mechanism: GSSAPI, best SSF: 56
             security flags: NO_ANONYMOUS|NO_PLAINTEXT|NO_ACTIVE|PASS_CREDENTIALS|MUTUAL_AUTH
             features: WANT_CLIENT_FIRST|PROXY_AUTHENTICATION|NEED_SERVER_FQDN
    -

    +

    +

    Next is is necessary for the administrator of the Kerberos realm to issue a principle for the libvirt server. There needs to be one principle per host running the libvirt daemon. The principle should be named libvirt/full.hostname@KERBEROS.REALM. @@ -118,7 +182,8 @@ This is typically done by running the kadmin.local command on the K server, though some Kerberos servers have alternate ways of setting up service principles. Once created, the principle should be exported to a keytab, copied to the host running the libvirt daemon and placed in /etc/libvirt/krb5.tab -

    +

    +
     # kadmin.local
     kadmin.local: add_principal libvirt/foo.example.com
     Enter password for principal "libvirt/foo.example.com@EXAMPLE.COM":
    @@ -135,9 +200,90 @@ kadmin.local: quit
     
     # scp /root/libvirt-foo-example.tab root@foo.example.com:/etc/libvirt/krb5.tab
     # rm /root/libvirt-foo-example.tab
    -

    +

    +

    Any client application wishing to connect to a Kerberos enabled libvirt server merely needs to run kinit to gain a user principle. This may well be done automatically when a user logs into a desktop session, if PAM is setup to authenticate against Kerberos. -

    +

    +
    + +
    + + + diff --git a/docs/auth.html.in b/docs/auth.html.in new file mode 100644 index 0000000000..73403604c0 --- /dev/null +++ b/docs/auth.html.in @@ -0,0 +1,183 @@ + + +

    Access control

    +

    +When connecting to libvirt, some connections may require client +authentication before allowing use of the APIs. The set of possible +authentication mechanisms is administrator controlled, independent +of applications using libvirt. +

    + +

    Server configuration

    +

    +The libvirt daemon allows the administrator to choose the authentication +mechanisms used for client connections on each network socket independently. +This is primarily controlled via the libvirt daemon master config file in +/etc/libvirt/libvirtd.conf. Each of the libvirt sockets can +have its authentication mechanism configured independently. There is +currently a choice of none, polkit, and sasl. +The SASL scheme can be further configured to choose between a large +number of different mechanisms. +

    +

    UNIX socket permissions/group

    +

    +If libvirt does not contain support for PolicyKit, then access control for +the UNIX domain socket is done using traditional file user/group ownership +and permissions. There are 2 sockets, one for full read-write access, the +other for read-only access. The RW socket will be restricted (mode 0700) to +only allow the root user to connect. The read-only socket will +be open access (mode 0777) to allow any user to connect. +

    +

    +To allow non-root users greater access, the libvirtd.conf file +can be edited to change the permissions via the unix_sock_rw_perms, +config parameter and to set a user group via the unix_sock_group +parameter. For example, setting the former to mode 0770 and the +latter wheel would let any user in the wheel group connect to +the libvirt daemon. +

    +

    UNIX socket PolicyKit auth

    +

    +If libvirt contains support for PolicyKit, then access control options are +more advanced. The unix_sock_auth parameter will default to +polkit, and the file permissions will default to 0777 +even on the RW socket. Upon connecting to the socket, the client application +will be required to identify itself with PolicyKit. The default policy for the +RW daemon socket will require any application running in the current desktop +session to authenticate using the user's password. This is akin to sudo +auth, but does not require that the client application ultimately run as root. +Default policy will still allow any application to connect to the RO socket. +

    +

    +The default policy can be overridden by the administrator using the PolicyKit +master configuration file in /etc/PolicyKit/PolicyKit.conf. The +PolicyKit.conf(5) manual page provides details on the syntax +available. The two libvirt daemon actions available are named org.libvirt.unix.monitor +for the RO socket, and org.libvirt.unix.manage for the RW socket. +

    +

+As an example, to allow a user fred full access to the RW socket, while requiring joe to authenticate with the admin password, would require adding the following snippet to PolicyKit.conf.

    +
    +  <match action="org.libvirt.unix.manage" user="fred">
    +    <return result="yes"/>
    +  </match>
    +  <match action="org.libvirt.unix.manage" user="joe">
    +    <return result="auth_admin"/>
    +  </match>
    +
    +

    Username/password auth

    +

+The plain TCP socket of the libvirt daemon defaults to using SASL for authentication. The SASL mechanism configured by default is DIGEST-MD5, which provides a basic username+password style authentication. It also provides for encryption of the data stream, so the security of the plain TCP socket is on a par with that of the TLS socket. If desired, the UNIX socket and TLS socket can also have SASL enabled by setting the auth_unix_ro, auth_unix_rw and auth_tls config params in libvirtd.conf.
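
As a sketch (assuming these parameters take the same style of values as the other auth settings in the daemon config file described earlier), turning SASL on for all three socket types could look like:

      # /etc/libvirt/libvirtd.conf
      auth_unix_ro = "sasl"
      auth_unix_rw = "sasl"
      auth_tls = "sasl"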

    +

    +Out of the box, no user accounts are defined, so no clients will be able to authenticate +on the TCP socket. Adding users and setting their passwords is done with the saslpasswd2 +command. When running this command it is important to tell it that the appname is libvirt. +As an example, to add a user fred, run +

    +
    +# saslpasswd2 -a libvirt fred
    +Password: xxxxxx
    +Again (for verification): xxxxxx
    +
    +

    +To see a list of all accounts the sasldblistusers2 command can be used. +This command expects to be given the path to the libvirt user database, which is kept +in /etc/libvirt/passwd.db +

    +
    +# sasldblistusers2 -f /etc/libvirt/passwd.db
    +fred@t60wlan.home.berrange.com: userPassword
    +
    +

    +Finally, to disable a user's access, the saslpasswd2 command can be used +again: +

    +
    +# saslpasswd2 -a libvirt -d fred
    +
    +

    Kerberos auth

    +

    +The plain TCP socket of the libvirt daemon defaults to using SASL for authentication. +The SASL mechanism configured by default is DIGEST-MD5, which provides a basic +username+password style authentication. To enable Kerberos single-sign-on instead, +the libvirt SASL configuration file must be changed. This is /etc/sasl2/libvirt.conf. +The mech_list parameter must first be changed to gssapi +instead of the default digest-md5. If SASL is enabled on the UNIX +and/or TLS sockets, Kerberos will also be used for them. Like DIGEST-MD5, the Kerberos +mechanism provides data encryption of the session. +
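
A minimal sketch of that SASL configuration change; the keytab line is an assumption about where the GSSAPI plugin should look for the key described below and may not be required on every installation:

      # /etc/sasl2/libvirt.conf
      mech_list: gssapi
      keytab: /etc/libvirt/krb5.tab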

    +

+Some operating systems do not install the SASL Kerberos plugin by default. It may be necessary to install a sub-package such as cyrus-sasl-gssapi. To check whether the Kerberos plugin is installed, run the pluginviewer program and verify that gssapi is listed, e.g.:

    +
    +# pluginviewer
    +...snip...
    +Plugin "gssapiv2" [loaded],     API version: 4
    +        SASL mechanism: GSSAPI, best SSF: 56
    +        security flags: NO_ANONYMOUS|NO_PLAINTEXT|NO_ACTIVE|PASS_CREDENTIALS|MUTUAL_AUTH
    +        features: WANT_CLIENT_FIRST|PROXY_AUTHENTICATION|NEED_SERVER_FQDN
    +
    +

+Next it is necessary for the administrator of the Kerberos realm to issue a principal for the libvirt server. There needs to be one principal per host running the libvirt daemon. The principal should be named libvirt/full.hostname@KERBEROS.REALM. This is typically done by running the kadmin.local command on the Kerberos server, though some Kerberos servers have alternate ways of setting up service principals. Once created, the principal should be exported to a keytab, copied to the host running the libvirt daemon and placed in /etc/libvirt/krb5.tab

    +
    +# kadmin.local
    +kadmin.local: add_principal libvirt/foo.example.com
    +Enter password for principal "libvirt/foo.example.com@EXAMPLE.COM":
    +Re-enter password for principal "libvirt/foo.example.com@EXAMPLE.COM":
    +Principal "libvirt/foo.example.com@EXAMPLE.COM" created.
    +
    +kadmin.local:  ktadd -k /root/libvirt-foo-example.tab libvirt/foo.example.com@EXAMPLE.COM
    +Entry for principal libvirt/foo.example.com@EXAMPLE.COM with kvno 4, encryption type Triple DES cbc mode with HMAC/sha1 added to keytab WRFILE:/root/libvirt-foo-example.tab.
    +Entry for principal libvirt/foo.example.com@EXAMPLE.COM with kvno 4, encryption type ArcFour with HMAC/md5 added to keytab WRFILE:/root/libvirt-foo-example.tab.
    +Entry for principal libvirt/foo.example.com@EXAMPLE.COM with kvno 4, encryption type DES with HMAC/sha1 added to keytab WRFILE:/root/libvirt-foo-example.tab.
    +Entry for principal libvirt/foo.example.com@EXAMPLE.COM with kvno 4, encryption type DES cbc mode with RSA-MD5 added to keytab WRFILE:/root/libvirt-foo-example.tab.
    +
    +kadmin.local: quit
    +
    +# scp /root/libvirt-foo-example.tab root@foo.example.com:/etc/libvirt/krb5.tab
    +# rm /root/libvirt-foo-example.tab
    +
    +

+Any client application wishing to connect to a Kerberos-enabled libvirt server merely needs to run kinit to gain a user principal. This may well be done automatically when a user logs into a desktop session, if PAM is set up to authenticate against Kerberos.
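
For example, a user fred in the EXAMPLE.COM realm used earlier could obtain and verify a ticket by hand with the standard Kerberos tools:

      $ kinit fred@EXAMPLE.COM
      Password for fred@EXAMPLE.COM:
      $ klist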

+ + diff --git a/docs/background.png b/docs/background.png deleted file mode 100644 index 5dd8735991db1d4b05d10111466c4e6ce5394b1b..0000000000000000000000000000000000000000 GIT binary patch (binary image data omitted) libvirt: Bindings for other languages
    +
    +

    Bindings for other languages

    +

Libvirt comes with bindings to support languages other than pure C. First, the headers embed the necessary declarations to allow direct access from C++ code, but we also have bindings for higher-level languages:

    +
    • Python: Libvirt comes with direct support for the Python language + (just make sure you installed the libvirt-python package if not + compiling from sources). See below for more information about + using libvirt with python
    • Perl: Daniel Berrange provides bindings for + Perl.
    • OCaml: Richard Jones supplies bindings for OCaml.
    • Ruby: David Lutterkort provides bindings for Ruby.
    +

Support, requests or help for libvirt bindings are welcome on the mailing list; as usual, try to provide enough background information and make sure you use a recent version (see the help page).

    +
    + +
    + + + diff --git a/docs/bindings.html.in b/docs/bindings.html.in new file mode 100644 index 0000000000..e68270097e --- /dev/null +++ b/docs/bindings.html.in @@ -0,0 +1,24 @@ + + +

    Bindings for other languages

    +

Libvirt comes with bindings to support languages other than pure C. First, the headers embed the necessary declarations to allow direct access from C++ code, but we also have bindings for higher-level languages:

    +
      +
    • Python: Libvirt comes with direct support for the Python language + (just make sure you installed the libvirt-python package if not + compiling from sources). See below for more information about + using libvirt with python
    • +
    • Perl: Daniel Berrange provides bindings for + Perl.
    • +
    • OCaml: Richard Jones supplies bindings for OCaml.
    • +
    • Ruby: David Lutterkort provides bindings for Ruby.
    • +
    +

Support, requests or help for libvirt bindings are welcome on the mailing list; as usual, try to provide enough background information and make sure you use a recent version (see the help page).

    + + diff --git a/docs/bugs.html b/docs/bugs.html index ca05707cef..534bdabd29 100644 --- a/docs/bugs.html +++ b/docs/bugs.html @@ -1,20 +1,127 @@ -Reporting bugs and getting help

    Reporting bugs and getting help

    There is a mailing-list libvir-list@redhat.com for libvirt, -with an on-line -archive. Please subscribe to this list before posting by visiting the associated Web -page and follow the instructions. Patches with explanations and provided as -attachments are really appreciated and will be discussed on the mailing list. -If possible generate the patches by using cvs diff -u in a CVS checkout.

    We use Red Hat Bugzilla to track bugs and new feature requests to libvirt. -If you want to report a bug or ask for a feature, please check the existing open bugs, then if yours isn't a duplicate of -an existing bug:

    Don't forget to attach any patch or extra data that you may have available. It is always a good idea to also -to post to the mailing-list -too, so that everybody working on the project can see it, thanks !

    Some of the libvirt developers may be found on IRC on the OFTC -network. Use the settings:

    • server: irc.oftc.net
    • -
    • port: 6667 (the usual IRC port)
    • -
    • channel: #virt
    • -

    But there is no guarantee that someone will be watching or able to reply, -use the mailing-list if you don't get an answer there.

    + + + + + + + libvirt: Bug reporting + + + + +
    +
    +

    Bug reporting

    +

    + The Red Hat Bugzilla Server + should be used to report bugs and request features against libvirt. + Before submitting a ticket, check the existing tickets to see if + the bug/feature is already tracked. +

    +

    General libvirt bug reports

    +

+ If you are using official libvirt binaries from a Linux distribution, check below for distribution-specific bug reporting policies first. For general libvirt bug reports from self-built releases, CVS snapshots and any other non-distribution supported builds, enter tickets under the Virtualization Tools product and the libvirt component.

    + +

    Linux Distribution specific bug reports

    + +

    How to file high quality bug reports

    +

+ To increase the likelihood of your bug report being addressed, it is important to provide as much information as possible. When filing libvirt bugs, use this checklist to see if you are providing enough information; example commands for gathering some of it follow the list:

    +
    • The version number of the libvirt build, or date of the CVS + checkout
    • The hardware architecture being used
    • The name of the hypervisor (Xen, QEMU, KVM)
    • The XML config of the guest domain if relevant
    • For Xen hypervisor, the XenD logfile from /var/log/xen
    • For QEMU/KVM, the domain logfile from /var/log/libvirt/qemu
    +
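
For example, the libvirt version and hardware architecture asked for in the first two items can be gathered with commands such as (assuming virsh is installed):

      # virsh version
      # uname -m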

+ If requesting a new feature, attach any available patch to the ticket and also email the patch to the libvirt mailing list for discussion.

    +
    + +
    + + + diff --git a/docs/bugs.html.in b/docs/bugs.html.in new file mode 100644 index 0000000000..0eb723a92c --- /dev/null +++ b/docs/bugs.html.in @@ -0,0 +1,82 @@ + + + + +

    Bug reporting

    + +

    + The Red Hat Bugzilla Server + should be used to report bugs and request features against libvirt. + Before submitting a ticket, check the existing tickets to see if + the bug/feature is already tracked. +

    + +

    General libvirt bug reports

    + +

+ If you are using official libvirt binaries from a Linux distribution, check below for distribution-specific bug reporting policies first. For general libvirt bug reports from self-built releases, CVS snapshots and any other non-distribution supported builds, enter tickets under the Virtualization Tools product and the libvirt component.

    + + + +

    Linux Distribution specific bug reports

    + + + +

    How to file high quality bug reports

    + +

    + To increase the likelihood of your bug report being addressed it is + important to provide as much information as possible. When filing + libvirt bugs use this checklist to see if you are providing enough + information: +

    + +
      +
    • The version number of the libvirt build, or date of the CVS + checkout
    • +
    • The hardware architecture being used
    • +
    • The name of the hypervisor (Xen, QEMU, KVM)
    • +
    • The XML config of the guest domain if relevant
    • +
    • For Xen hypervisor, the XenD logfile from /var/log/xen
    • +
    • For QEMU/KVM, the domain logfile from /var/log/libvirt/qemu
    • +
    + +

+ If requesting a new feature, attach any available patch to the ticket and also email the patch to the libvirt mailing list for discussion.

    + + + diff --git a/docs/contact.html b/docs/contact.html new file mode 100644 index 0000000000..11809832fa --- /dev/null +++ b/docs/contact.html @@ -0,0 +1,107 @@ + + + + + + + + + libvirt: Contacting the development team + + + + +
    +
    +

    Contacting the development team

    +

    Mailing list

    +

+ There is a mailing-list libvir-list@redhat.com for libvirt, with an on-line archive. Please subscribe to this list before posting by visiting the associated Web page and following the instructions. Patches with explanations, provided as attachments, are really appreciated and will be discussed on the mailing list. If possible, generate the patches by using cvs diff -up in a CVS checkout.

    +

    IRC discussion

    +

    + Some of the libvirt developers may be found on IRC on the OFTC IRC + network. Use the settings: +

    +
    • server: irc.oftc.net
    • port: 6667 (the usual IRC port)
    • channel: #virt
    +

    + NB There is no guarantee that someone will be watching or able to reply + promptly, so use the mailing-list if you don't get an answer on the IRC + channel. +

    +
    + +
    + + + diff --git a/docs/contact.html.in b/docs/contact.html.in new file mode 100644 index 0000000000..4b9f532441 --- /dev/null +++ b/docs/contact.html.in @@ -0,0 +1,37 @@ + + + +

    Contacting the development team

    + +

    Mailing list

    + +

+ There is a mailing-list libvir-list@redhat.com for libvirt, with an on-line archive. Please subscribe to this list before posting by visiting the associated Web page and following the instructions. Patches with explanations, provided as attachments, are really appreciated and will be discussed on the mailing list. If possible, generate the patches by using cvs diff -up in a CVS checkout.

    + +

    IRC discussion

    + +

    + Some of the libvirt developers may be found on IRC on the OFTC IRC + network. Use the settings: +

    +
      +
    • server: irc.oftc.net
    • +
    • port: 6667 (the usual IRC port)
    • +
    • channel: #virt
    • +
    +

    + NB There is no guarantee that someone will be watching or able to reply + promptly, so use the mailing-list if you don't get an answer on the IRC + channel. +

    + + + diff --git a/docs/deployment.html b/docs/deployment.html new file mode 100644 index 0000000000..e3601e8267 --- /dev/null +++ b/docs/deployment.html @@ -0,0 +1,139 @@ + + + + + + + + + libvirt: Deployment + + + + +
    +
    +

    Deployment

    +

    Pre-packaged releases

    +

+ The libvirt API is now available in all major Linux distributions, so the simplest deployment approach is to use your distribution's package management software to install the libvirt module.
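
For example, on a Fedora or Red Hat based distribution this might be done with yum (the package names shown are an assumption and vary between distributions and releases):

      # yum install libvirt libvirt-python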

    +

    Self-built releases

    +

+ libvirt uses GNU autotools for its build system, so deployment follows the usual process of configure; make; make install

    +
    +
    +      # ./configure --prefix=$HOME/usr
    +      # make
    +      # make install
    +    
    +

    Built from CVS / GIT

    +

+ When building from CVS it is necessary to generate the autotools support files. This requires having autoconf, automake, libtool and intltool installed. The process can be automated with the autogen.sh script.

    +
    +
    +      # ./autogen.sh --prefix=$HOME/usr
    +      # make
    +      # make install
    +    
    +
    + +
    + + + diff --git a/docs/deployment.html.in b/docs/deployment.html.in new file mode 100644 index 0000000000..3c548e16f6 --- /dev/null +++ b/docs/deployment.html.in @@ -0,0 +1,46 @@ + + +

    Deployment

    + +

    Pre-packaged releases

    + +

+ The libvirt API is now available in all major Linux distributions, so the simplest deployment approach is to use your distribution's package management software to install the libvirt module.

    + +

    Self-built releases

    + +

+ libvirt uses GNU autotools for its build system, so deployment follows the usual process of configure; make; make install

    + +
    +
    +      # ./configure --prefix=$HOME/usr
    +      # make
    +      # make install
    +    
    + +

    Built from CVS / GIT

    + +

+ When building from CVS it is necessary to generate the autotools support files. This requires having autoconf, automake, libtool and intltool installed. The process can be automated with the autogen.sh script.

    + +
    +
    +      # ./autogen.sh --prefix=$HOME/usr
    +      # make
    +      # make install
    +    
    + + + diff --git a/docs/docs.html b/docs/docs.html new file mode 100644 index 0000000000..1c1ee3130e --- /dev/null +++ b/docs/docs.html @@ -0,0 +1,98 @@ + + + + + + + + + libvirt: Documentation + + + + +
    +
    +

    Documentation

    +
    + +
    + + + diff --git a/docs/docs.html.in b/docs/docs.html.in new file mode 100644 index 0000000000..970a79ac1a --- /dev/null +++ b/docs/docs.html.in @@ -0,0 +1,5 @@ + + +

    Documentation

    + + diff --git a/docs/downloads.html b/docs/downloads.html index e7727456bb..23c9240f3f 100644 --- a/docs/downloads.html +++ b/docs/downloads.html @@ -1,10 +1,143 @@ -Downloads

    Downloads

    The latest versions of libvirt can be found on the libvirt.org server ( HTTP, FTP). You will find there the released -versions as well as snapshot -tarballs updated from CVS head every hour

    Anonymous CVS is also -available, first register onto the server:

    cvs -d :pserver:anoncvs@libvirt.org:2401/data/cvs login

    it will request a password, enter anoncvs. Then you can -checkout the development tree with:

    cvs -d :pserver:anoncvs@libvirt.org:2401/data/cvs co -libvirt

    Use ./autogen.sh to configure the local checkout, then make -and make install, as usual. All normal cvs commands are now -available except commiting to the base.

    + + + + + + + libvirt: Downloads + + + + +
    +
    +

    Downloads

    +

    Official Releases

    +

    + The latest versions of the libvirt C library can be downloaded from: +

    + +

    Hourly development snapshots

    +

+ Once an hour, an automated snapshot is made from the latest CVS server source tree. These snapshots should be usable, but we make no guarantees about their stability:

    + +

    CVS repository access

    +

+ The master source repository uses CVS and anonymous access is provided. Prior to accessing the server it is necessary to authenticate using the password anoncvs. This can be accomplished with the cvs login command:

    +
    +
    +      # cvs -d :pserver:anoncvs@libvirt.org:2401/data/cvs login
    +    
    +

    + Once authenticated, a checkout can be obtained using +

    +
    +
    +      # cvs -d :pserver:anoncvs@libvirt.org:2401/data/cvs co libvirt
    +    
    +

+ The libvirt build process uses GNU autotools, so after obtaining a checkout it is necessary to generate the configure script and Makefile.in templates using the autogen.sh command. As an example, to do a complete build and install it into your home directory, run:

    +
    +
    +      ./autogen.sh --prefix=$HOME/usr
    +      make
    +      make install
    +    
    +

    GIT repository mirror

    +

    + The CVS source repository is also mirrored using GIT, and is available + for anonymous access via: +

    +
    +
    +      git clone git://git.et.redhat.com/libvirt.git
    +    
    +

    + It can also be browsed at +

    +
    +
    +      http://git.et.redhat.com/?p=libvirt.git;a=summary
    +    
    +
    + +
    + + + diff --git a/docs/downloads.html.in b/docs/downloads.html.in new file mode 100644 index 0000000000..e1aae438dc --- /dev/null +++ b/docs/downloads.html.in @@ -0,0 +1,89 @@ + + + +

    Downloads

    + +

    Official Releases

    + +

    + The latest versions of the libvirt C library can be downloaded from: +

    + + + +

    Hourly development snapshots

    + +

+ Once an hour, an automated snapshot is made from the latest CVS server source tree. These snapshots should be usable, but we make no guarantees about their stability:

    + + + +

    CVS repository access

    + +

+ The master source repository uses CVS and anonymous access is provided. Prior to accessing the server it is necessary to authenticate using the password anoncvs. This can be accomplished with the cvs login command:

    + +
    +
    +      # cvs -d :pserver:anoncvs@libvirt.org:2401/data/cvs login
    +    
    + +

    + Once authenticated, a checkout can be obtained using +

    + +
    +
    +      # cvs -d :pserver:anoncvs@libvirt.org:2401/data/cvs co libvirt
    +    
    + +

+ The libvirt build process uses GNU autotools, so after obtaining a checkout it is necessary to generate the configure script and Makefile.in templates using the autogen.sh command. As an example, to do a complete build and install it into your home directory, run:

    + +
    +
    +      ./autogen.sh --prefix=$HOME/usr
    +      make
    +      make install
    +    
    + +

    GIT repository mirror

    + +

    + The CVS source repository is also mirrored using GIT, and is available + for anonymous access via: +

    + +
    +
    +      git clone git://git.et.redhat.com/libvirt.git
    +    
    + +

    + It can also be browsed at +

    + +
    +
    +      http://git.et.redhat.com/?p=libvirt.git;a=summary
    +    
    + + + diff --git a/docs/drivers.html b/docs/drivers.html new file mode 100644 index 0000000000..564a063abb --- /dev/null +++ b/docs/drivers.html @@ -0,0 +1,125 @@ + + + + + + + + + libvirt: Internal drivers + + + + +
    +
    +

    Internal drivers

    +

    + The libvirt public API delegates its implementation to one or + more internal drivers, depending on the connection URI + passed when initializing the library. There is always a hypervisor driver + active, and if the libvirt daemon is available there will usually be a + network and storage driver active. +
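
For illustration, here are a few well-known URI forms and the drivers they select when passed to virsh -c or virConnectOpen (see the URI documentation for the full list; "host" is a placeholder):

      xen:///                        Xen hypervisor driver on the local host
      qemu:///system                 system QEMU/KVM driver via the libvirt daemon
      qemu:///session                per-user QEMU/KVM driver
      test:///default                built-in test "mock" driver
      qemu+ssh://root@host/system    remote access to the QEMU driver over SSH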

    +

    Hypervisor drivers

    +

+ The hypervisor drivers currently supported by libvirt are:

    + +
    + +
    + + + diff --git a/docs/drivers.html.in b/docs/drivers.html.in new file mode 100644 index 0000000000..a8298ca55e --- /dev/null +++ b/docs/drivers.html.in @@ -0,0 +1,27 @@ + + +

    Internal drivers

    + +

    + The libvirt public API delegates its implementation to one or + more internal drivers, depending on the connection URI + passed when initializing the library. There is always a hypervisor driver + active, and if the libvirt daemon is available there will usually be a + network and storage driver active. +

    + +

    Hypervisor drivers

    + +

+ The hypervisor drivers currently supported by libvirt are:

    + + + + diff --git a/docs/drvlxc.html b/docs/drvlxc.html new file mode 100644 index 0000000000..3f6cf6392a --- /dev/null +++ b/docs/drvlxc.html @@ -0,0 +1,113 @@ + + + + + + + + + libvirt: LXC container driver + + + + +
    +
    +

    LXC container driver

    +
    + +
    + + + diff --git a/docs/drvlxc.html.in b/docs/drvlxc.html.in new file mode 100644 index 0000000000..d658f1930f --- /dev/null +++ b/docs/drvlxc.html.in @@ -0,0 +1,5 @@ + + +

    LXC container driver

    + + diff --git a/docs/drvopenvz.html b/docs/drvopenvz.html new file mode 100644 index 0000000000..4b5ff6a039 --- /dev/null +++ b/docs/drvopenvz.html @@ -0,0 +1,113 @@ + + + + + + + + + libvirt: OpenVZ container driver + + + + +
    +
    +

    OpenVZ container driver

    +
    + +
    + + + diff --git a/docs/drvopenvz.html.in b/docs/drvopenvz.html.in new file mode 100644 index 0000000000..b002289cf9 --- /dev/null +++ b/docs/drvopenvz.html.in @@ -0,0 +1,5 @@ + + +

    OpenVZ container driver

    + + diff --git a/docs/drvqemu.html b/docs/drvqemu.html new file mode 100644 index 0000000000..177b510ae8 --- /dev/null +++ b/docs/drvqemu.html @@ -0,0 +1,191 @@ + + + + + + + + + libvirt: QEMU/KVM hypervisor driver + + + + +
    +
    +

    QEMU/KVM hypervisor driver

    +

+ The libvirt QEMU driver can manage any QEMU emulator from version 0.8.1 or later. It can also manage anything that provides the same QEMU command line syntax and monitor interaction. This includes KVM and Xenner.

    +

    Deployment pre-requisites

    +
• QEMU emulators: The driver will probe /usr/bin for the presence of qemu, qemu-system-x86_64, qemu-system-mips, qemu-system-mipsel, qemu-system-sparc, qemu-system-ppc. The results of this can be seen from the capabilities XML output (a virsh example follows this list).
• KVM hypervisor: The driver will probe /usr/bin for the presence of qemu-kvm and the /dev/kvm device node. If both are found, then KVM fully virtualized, hardware-accelerated guests will be available.
    • + Xenner hypervisor: The driver will probe /usr/bin + for the presence of xenner and /dev/kvm device + node. If both are found, then Xen paravirtualized guests can be run using + the KVM hardware acceleration. +
    +
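
A quick way to see what the driver actually detected on a given host is to dump the capabilities XML mentioned in the first item, for example with virsh (output abbreviated):

      # virsh capabilities
      <capabilities>
        <host>...</host>
        <guest>...</guest>
      </capabilities>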

    + Example domain XML config +

    +

    QEMU emulated guest on x86_64

    +
    <domain type='qemu'>
    +  <name>QEmu-fedora-i686</name>
    +  <uuid>c7a5fdbd-cdaf-9455-926a-d65c16db1809</uuid>
    +  <memory>219200</memory>
    +  <currentMemory>219200</currentMemory>
    +  <vcpu>2</vcpu>
    +  <os>
    +    <type arch='i686' machine='pc'>hvm</type>
    +    <boot dev='cdrom'/>
    +  </os>
    +  <devices>
    +    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    +    <disk type='file' device='cdrom'>
    +      <source file='/home/user/boot.iso'/>
    +      <target dev='hdc'/>
    +      <readonly/>
    +    </disk>
    +    <disk type='file' device='disk'>
    +      <source file='/home/user/fedora.img'/>
    +      <target dev='hda'/>
    +    </disk>
    +    <interface type='network'>
    +      <source name='default'/>
    +    </interface>
    +    <graphics type='vnc' port='-1'/>
    +  </devices>
    +</domain>
    +

    KVM hardware accelerated guest on i686

    +
    <domain type='kvm'>
    +  <name>demo2</name>
    +  <uuid>4dea24b3-1d52-d8f3-2516-782e98a23fa0</uuid>
    +  <memory>131072</memory>
    +  <vcpu>1</vcpu>
    +  <os>
    +    <type arch="i686">hvm</type>
    +  </os>
    +  <clock sync="localtime"/>
    +  <devices>
    +    <emulator>/usr/bin/qemu-kvm</emulator>
    +    <disk type='file' device='disk'>
    +      <source file='/var/lib/libvirt/images/demo2.img'/>
    +      <target dev='hda'/>
    +    </disk>
    +    <interface type='network'>
    +      <source network='default'/>
    +      <mac address='24:42:53:21:52:45'/>
    +    </interface>
    +    <graphics type='vnc' port='-1'/>
    +  </devices>
    +</domain>
    +

    Xen paravirtualized guests with hardware acceleration

    +
    + +
    + + + diff --git a/docs/drvqemu.html.in b/docs/drvqemu.html.in new file mode 100644 index 0000000000..fd2bb12669 --- /dev/null +++ b/docs/drvqemu.html.in @@ -0,0 +1,97 @@ + + +

    QEMU/KVM hypervisor driver

    + +

+ The libvirt QEMU driver can manage any QEMU emulator from version 0.8.1 or later. It can also manage anything that provides the same QEMU command line syntax and monitor interaction. This includes KVM and Xenner.

    + +

    Deployment pre-requisites

    + +
      +
• QEMU emulators: The driver will probe /usr/bin for the presence of qemu, qemu-system-x86_64, qemu-system-mips, qemu-system-mipsel, qemu-system-sparc, qemu-system-ppc. The results of this can be seen from the capabilities XML output.
    • +
• KVM hypervisor: The driver will probe /usr/bin for the presence of qemu-kvm and the /dev/kvm device node. If both are found, then KVM fully virtualized, hardware-accelerated guests will be available.
    • +
    • + Xenner hypervisor: The driver will probe /usr/bin + for the presence of xenner and /dev/kvm device + node. If both are found, then Xen paravirtualized guests can be run using + the KVM hardware acceleration. +
    • +
    + +

    Example domain XML config

    + +

    QEMU emulated guest on x86_64

    + +
    <domain type='qemu'>
    +  <name>QEmu-fedora-i686</name>
    +  <uuid>c7a5fdbd-cdaf-9455-926a-d65c16db1809</uuid>
    +  <memory>219200</memory>
    +  <currentMemory>219200</currentMemory>
    +  <vcpu>2</vcpu>
    +  <os>
    +    <type arch='i686' machine='pc'>hvm</type>
    +    <boot dev='cdrom'/>
    +  </os>
    +  <devices>
    +    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    +    <disk type='file' device='cdrom'>
    +      <source file='/home/user/boot.iso'/>
    +      <target dev='hdc'/>
    +      <readonly/>
    +    </disk>
    +    <disk type='file' device='disk'>
    +      <source file='/home/user/fedora.img'/>
    +      <target dev='hda'/>
    +    </disk>
    +    <interface type='network'>
    +      <source name='default'/>
    +    </interface>
    +    <graphics type='vnc' port='-1'/>
    +  </devices>
    +</domain>
    + +

    KVM hardware accelerated guest on i686

    + +
    <domain type='kvm'>
    +  <name>demo2</name>
    +  <uuid>4dea24b3-1d52-d8f3-2516-782e98a23fa0</uuid>
    +  <memory>131072</memory>
    +  <vcpu>1</vcpu>
    +  <os>
    +    <type arch="i686">hvm</type>
    +  </os>
    +  <clock sync="localtime"/>
    +  <devices>
    +    <emulator>/usr/bin/qemu-kvm</emulator>
    +    <disk type='file' device='disk'>
    +      <source file='/var/lib/libvirt/images/demo2.img'/>
    +      <target dev='hda'/>
    +    </disk>
    +    <interface type='network'>
    +      <source network='default'/>
    +      <mac address='24:42:53:21:52:45'/>
    +    </interface>
    +    <graphics type='vnc' port='-1'/>
    +  </devices>
    +</domain>
    + +

    Xen paravirtualized guests with hardware acceleration

    + + + + + diff --git a/docs/drvremote.html b/docs/drvremote.html new file mode 100644 index 0000000000..df4ff8d687 --- /dev/null +++ b/docs/drvremote.html @@ -0,0 +1,113 @@ + + + + + + + + + libvirt: Remote management driver + + + + +
    +
    +

    Remote management driver

    +
    + +
    + + + diff --git a/docs/drvremote.html.in b/docs/drvremote.html.in new file mode 100644 index 0000000000..c66526f924 --- /dev/null +++ b/docs/drvremote.html.in @@ -0,0 +1,5 @@ + + +

    Remote management driver

    + + diff --git a/docs/drvtest.html b/docs/drvtest.html new file mode 100644 index 0000000000..848e4a12bb --- /dev/null +++ b/docs/drvtest.html @@ -0,0 +1,113 @@ + + + + + + + + + libvirt: Test "mock" driver + + + + +
    +
    +

    Test "mock" driver

    +
    + +
    + + + diff --git a/docs/drvtest.html.in b/docs/drvtest.html.in new file mode 100644 index 0000000000..f08dd3b4ef --- /dev/null +++ b/docs/drvtest.html.in @@ -0,0 +1,5 @@ + + +

    Test "mock" driver

    + + diff --git a/docs/drvxen.html b/docs/drvxen.html new file mode 100644 index 0000000000..403f1e9984 --- /dev/null +++ b/docs/drvxen.html @@ -0,0 +1,307 @@ + + + + + + + + + libvirt: Xen hypervisor driver + + + + +
    +
    +

    Xen hypervisor driver

    +

    + The libvirt Xen driver provides the ability to manage virtual machines + on any Xen release from 3.0.1 onwards. +

    +

    Deployment pre-requisites

    +

    + The libvirt Xen driver uses a combination of channels to manage Xen + virtual machines. +

    +
• XenD: Access to the Xen daemon is a mandatory requirement for the libvirt Xen driver. It requires that the UNIX socket interface be enabled in the /etc/xen/xend-config.sxp configuration file, specifically the config setting (xend-unix-server yes). This path is usually restricted to only allow the root user access. As an alternative, the HTTP interface can be used; however, this has significant security implications.
    • + XenStoreD: Access to the Xenstore daemon enables + more efficient codepaths for looking up domain information which + lowers the CPU overhead of management. +
    • + Hypercalls: The ability to make direct hypercalls + allows the most efficient codepaths in the driver to be used for + monitoring domain status. +
    • + XM config: When using Xen releases prior to 3.0.4, + there is no inactive domain management in XenD. For such releases, + libvirt will automatically process XM configuration files kept in + the /etc/xen directory. It is important not to place + any other non-config files in this directory. +
    +

    + Example domain XML config +

    +

    + Below are some example XML configurations for Xen guest domains. + For full details of the available options, consult the domain XML format + guide. +

    +

    Paravirtualized guest bootloader

    +

    + Using a bootloader allows a paravirtualized guest to be booted using + a kernel stored inside its virtual disk image +

    +
    <domain type='xen' >
    +  <name>fc8</name>
    +  <bootloader>/usr/bin/pygrub</bootloader>
    +  <os>
    +    <type>linux</type>
    +  </os>
    +  <memory>131072</memory>
    +  <vcpu>1</vcpu>
    +  <devices>
    +    <disk type='file'>
    +      <source file='/var/lib/xen/images/fc4.img'/>
    +      <target dev='sda1'/>
    +    </disk>
    +    <interface type='bridge'>
    +      <source bridge='xenbr0'/>
    +      <mac address='aa:00:00:00:00:11'/>
    +      <script path='/etc/xen/scripts/vif-bridge'/>
    +    </interface>
    +    <console tty='/dev/pts/5'/>
    +  </devices>
    +</domain>
    +

    Paravirtualized guest direct kernel boot

    +

    + For installation of paravirtualized guests it is typical to boot the + domain using a kernel and initrd stored in the host OS +

    +
    <domain type='xen' >
    +  <name>fc8</name>
    +  <os>
    +    <type>linux</type>
    +    <kernel>/var/lib/xen/install/vmlinuz-fedora8-x86_64</kernel>
    +    <initrd>/var/lib/xen/install/initrd-vmlinuz-fedora8-x86_64</initrd>
    +    <cmdline> kickstart=http://example.com/myguest.ks </cmdline>
    +  </os>
    +  <memory>131072</memory>
    +  <vcpu>1</vcpu>
    +  <devices>
    +    <disk type='file'>
    +      <source file='/var/lib/xen/images/fc4.img'/>
    +      <target dev='sda1'/>
    +    </disk>
    +    <interface type='bridge'>
    +      <source bridge='xenbr0'/>
    +      <mac address='aa:00:00:00:00:11'/>
    +      <script path='/etc/xen/scripts/vif-bridge'/>
    +    </interface>
    +    <graphics type='vnc' port='-1'/>
    +    <console tty='/dev/pts/5'/>
    +  </devices>
    +</domain>
    +

Fully virtualized guest BIOS boot

    +

+ Fully virtualized guests use the emulated BIOS to boot off the primary hard disk, CDROM or network PXE ROM.

    +
    <domain type='xen' id='3'>
    +  <name>fv0</name>
    +  <uuid>4dea22b31d52d8f32516782e98ab3fa0</uuid>
    +  <os>
    +    <type>hvm</type>
    +    <loader>/usr/lib/xen/boot/hvmloader</loader>
    +    <boot dev='hd'/>
    +  </os>
    +  <memory>524288</memory>
    +  <vcpu>1</vcpu>
    +  <on_poweroff>destroy</on_poweroff>
    +  <on_reboot>restart</on_reboot>
    +  <on_crash>restart</on_crash>
    +  <features>
    +     <pae/>
    +     <acpi/>
    +     <apic/>
    +  </features>
    +  <clock sync="localtime"/>
    +  <devices>
    +    <emulator>/usr/lib/xen/bin/qemu-dm</emulator>
    +    <interface type='bridge'>
    +      <source bridge='xenbr0'/>
    +      <mac address='00:16:3e:5d:c7:9e'/>
    +      <script path='vif-bridge'/>
    +    </interface>
    +    <disk type='file'>
    +      <source file='/var/lib/xen/images/fv0'/>
    +      <target dev='hda'/>
    +    </disk>
    +    <disk type='file' device='cdrom'>
    +      <source file='/var/lib/xen/images/fc5-x86_64-boot.iso'/>
    +      <target dev='hdc'/>
    +      <readonly/>
    +    </disk>
    +    <disk type='file' device='floppy'>
    +      <source file='/root/fd.img'/>
    +      <target dev='fda'/>
    +    </disk>
    +    <graphics type='vnc' port='5904'/>
    +  </devices>
    +</domain>
    +

Fully virtualized guest direct kernel boot

    +

+ With Xen 3.2.0 or later it is possible to bypass the BIOS and directly boot a Linux kernel and initrd as a fully virtualized domain. This allows for complete automation of OS installation, for example using the Anaconda kickstart support.

    +
    <domain type='xen' id='3'>
    +  <name>fv0</name>
    +  <uuid>4dea22b31d52d8f32516782e98ab3fa0</uuid>
    +  <os>
    +    <type>hvm</type>
    +    <loader>/usr/lib/xen/boot/hvmloader</loader>
    +    <kernel>/var/lib/xen/install/vmlinuz-fedora8-x86_64</kernel>
    +    <initrd>/var/lib/xen/install/initrd-vmlinuz-fedora8-x86_64</initrd>
    +    <cmdline> kickstart=http://example.com/myguest.ks </cmdline>
    +  </os>
    +  <memory>524288</memory>
    +  <vcpu>1</vcpu>
    +  <on_poweroff>destroy</on_poweroff>
    +  <on_reboot>restart</on_reboot>
    +  <on_crash>restart</on_crash>
    +  <features>
    +     <pae/>
    +     <acpi/>
    +     <apic/>
    +  </features>
    +  <clock sync="localtime"/>
    +  <devices>
    +    <emulator>/usr/lib/xen/bin/qemu-dm</emulator>
    +    <interface type='bridge'>
    +      <source bridge='xenbr0'/>
    +      <mac address='00:16:3e:5d:c7:9e'/>
    +      <script path='vif-bridge'/>
    +    </interface>
    +    <disk type='file'>
    +      <source file='/var/lib/xen/images/fv0'/>
    +      <target dev='hda'/>
    +    </disk>
    +    <disk type='file' device='cdrom'>
    +      <source file='/var/lib/xen/images/fc5-x86_64-boot.iso'/>
    +      <target dev='hdc'/>
    +      <readonly/>
    +    </disk>
    +    <disk type='file' device='floppy'>
    +      <source file='/root/fd.img'/>
    +      <target dev='fda'/>
    +    </disk>
    +    <graphics type='vnc' port='5904'/>
    +  </devices>
    +</domain>
    +
    + +
    + + + diff --git a/docs/drvxen.html.in b/docs/drvxen.html.in new file mode 100644 index 0000000000..6853c0a241 --- /dev/null +++ b/docs/drvxen.html.in @@ -0,0 +1,221 @@ + + +

    Xen hypervisor driver

    + +

    + The libvirt Xen driver provides the ability to manage virtual machines + on any Xen release from 3.0.1 onwards. +

    + +

    Deployment pre-requisites

    + +

    + The libvirt Xen driver uses a combination of channels to manage Xen + virtual machines. +

    + +
      +
• XenD: Access to the Xen daemon is a mandatory requirement for the libvirt Xen driver. It requires that the UNIX socket interface be enabled in the /etc/xen/xend-config.sxp configuration file, specifically the config setting (xend-unix-server yes). This path is usually restricted to only allow the root user access. As an alternative, the HTTP interface can be used; however, this has significant security implications.
    • +
    • + XenStoreD: Access to the Xenstore daemon enables + more efficient codepaths for looking up domain information which + lowers the CPU overhead of management. +
    • +
    • + Hypercalls: The ability to make direct hypercalls + allows the most efficient codepaths in the driver to be used for + monitoring domain status. +
    • +
    • + XM config: When using Xen releases prior to 3.0.4, + there is no inactive domain management in XenD. For such releases, + libvirt will automatically process XM configuration files kept in + the /etc/xen directory. It is important not to place + any other non-config files in this directory. +
    • +
    + +

    Example domain XML config

    + +

    + Below are some example XML configurations for Xen guest domains. + For full details of the available options, consult the domain XML format + guide. +

    + +

    Paravirtualized guest bootloader

    + +

    + Using a bootloader allows a paravirtualized guest to be booted using + a kernel stored inside its virtual disk image +

    + +
    <domain type='xen' >
    +  <name>fc8</name>
    +  <bootloader>/usr/bin/pygrub</bootloader>
    +  <os>
    +    <type>linux</type>
    +  </os>
    +  <memory>131072</memory>
    +  <vcpu>1</vcpu>
    +  <devices>
    +    <disk type='file'>
    +      <source file='/var/lib/xen/images/fc4.img'/>
    +      <target dev='sda1'/>
    +    </disk>
    +    <interface type='bridge'>
    +      <source bridge='xenbr0'/>
    +      <mac address='aa:00:00:00:00:11'/>
    +      <script path='/etc/xen/scripts/vif-bridge'/>
    +    </interface>
    +    <console tty='/dev/pts/5'/>
    +  </devices>
    +</domain>
    + +

    Paravirtualized guest direct kernel boot

    + +

    + For installation of paravirtualized guests it is typical to boot the + domain using a kernel and initrd stored in the host OS +

    + +
    <domain type='xen' >
    +  <name>fc8</name>
    +  <os>
    +    <type>linux</type>
    +    <kernel>/var/lib/xen/install/vmlinuz-fedora8-x86_64</kernel>
    +    <initrd>/var/lib/xen/install/initrd-vmlinuz-fedora8-x86_64</initrd>
    +    <cmdline> kickstart=http://example.com/myguest.ks </cmdline>
    +  </os>
    +  <memory>131072</memory>
    +  <vcpu>1</vcpu>
    +  <devices>
    +    <disk type='file'>
    +      <source file='/var/lib/xen/images/fc4.img'/>
    +      <target dev='sda1'/>
    +    </disk>
    +    <interface type='bridge'>
    +      <source bridge='xenbr0'/>
    +      <mac address='aa:00:00:00:00:11'/>
    +      <script path='/etc/xen/scripts/vif-bridge'/>
    +    </interface>
    +    <graphics type='vnc' port='-1'/>
    +    <console tty='/dev/pts/5'/>
    +  </devices>
    +</domain>
    + +

Fully virtualized guest BIOS boot

    + +

+ Fully virtualized guests use the emulated BIOS to boot off the primary hard disk, CDROM or network PXE ROM.

    + +
    <domain type='xen' id='3'>
    +  <name>fv0</name>
    +  <uuid>4dea22b31d52d8f32516782e98ab3fa0</uuid>
    +  <os>
    +    <type>hvm</type>
    +    <loader>/usr/lib/xen/boot/hvmloader</loader>
    +    <boot dev='hd'/>
    +  </os>
    +  <memory>524288</memory>
    +  <vcpu>1</vcpu>
    +  <on_poweroff>destroy</on_poweroff>
    +  <on_reboot>restart</on_reboot>
    +  <on_crash>restart</on_crash>
    +  <features>
    +     <pae/>
    +     <acpi/>
    +     <apic/>
    +  </features>
    +  <clock sync="localtime"/>
    +  <devices>
    +    <emulator>/usr/lib/xen/bin/qemu-dm</emulator>
    +    <interface type='bridge'>
    +      <source bridge='xenbr0'/>
    +      <mac address='00:16:3e:5d:c7:9e'/>
    +      <script path='vif-bridge'/>
    +    </interface>
    +    <disk type='file'>
    +      <source file='/var/lib/xen/images/fv0'/>
    +      <target dev='hda'/>
    +    </disk>
    +    <disk type='file' device='cdrom'>
    +      <source file='/var/lib/xen/images/fc5-x86_64-boot.iso'/>
    +      <target dev='hdc'/>
    +      <readonly/>
    +    </disk>
    +    <disk type='file' device='floppy'>
    +      <source file='/root/fd.img'/>
    +      <target dev='fda'/>
    +    </disk>
    +    <graphics type='vnc' port='5904'/>
    +  </devices>
    +</domain>
    + +

Fully virtualized guest direct kernel boot

    + +

+ With Xen 3.2.0 or later it is possible to bypass the BIOS and directly boot a Linux kernel and initrd as a fully virtualized domain. This allows for complete automation of OS installation, for example using the Anaconda kickstart support.

    + +
    <domain type='xen' id='3'>
    +  <name>fv0</name>
    +  <uuid>4dea22b31d52d8f32516782e98ab3fa0</uuid>
    +  <os>
    +    <type>hvm</type>
    +    <loader>/usr/lib/xen/boot/hvmloader</loader>
    +    <kernel>/var/lib/xen/install/vmlinuz-fedora8-x86_64</kernel>
    +    <initrd>/var/lib/xen/install/initrd-vmlinuz-fedora8-x86_64</initrd>
    +    <cmdline> kickstart=http://example.com/myguest.ks </cmdline>
    +  </os>
    +  <memory>524288</memory>
    +  <vcpu>1</vcpu>
    +  <on_poweroff>destroy</on_poweroff>
    +  <on_reboot>restart</on_reboot>
    +  <on_crash>restart</on_crash>
    +  <features>
    +     <pae/>
    +     <acpi/>
    +     <apic/>
    +  </features>
    +  <clock sync="localtime"/>
    +  <devices>
    +    <emulator>/usr/lib/xen/bin/qemu-dm</emulator>
    +    <interface type='bridge'>
    +      <source bridge='xenbr0'/>
    +      <mac address='00:16:3e:5d:c7:9e'/>
    +      <script path='vif-bridge'/>
    +    </interface>
    +    <disk type='file'>
    +      <source file='/var/lib/xen/images/fv0'/>
    +      <target dev='hda'/>
    +    </disk>
    +    <disk type='file' device='cdrom'>
    +      <source file='/var/lib/xen/images/fc5-x86_64-boot.iso'/>
    +      <target dev='hdc'/>
    +      <readonly/>
    +    </disk>
    +    <disk type='file' device='floppy'>
    +      <source file='/root/fd.img'/>
    +      <target dev='fda'/>
    +    </disk>
    +    <graphics type='vnc' port='5904'/>
    +  </devices>
    +</domain>
    + + + diff --git a/docs/errors.html b/docs/errors.html index a400589f98..98b1bc2ee0 100644 --- a/docs/errors.html +++ b/docs/errors.html @@ -1,54 +1,69 @@ -Handling of errors

    Handling of errors

    The main goals of libvirt when it comes to error handling are:

    • provide as much detail as possible
    • -
    • provide the information as soon as possible
    • -
    • dont force the library user into one style of error handling
    • -

    As result the library provide both synchronous, callback based and + + + + + + + libvirt: Handling of errors + + + +

    +
    +
    +

    Handling of errors

    +

    The main goals of libvirt when it comes to error handling are:

    +
    • provide as much detail as possible
    • provide the information as soon as possible
• don't force the library user into one style of error handling
    +

As a result the library provides both synchronous, callback-based and asynchronous error reporting. When an error happens in the library code the error is logged, allowing it to be retrieved later, and if the user registered an error callback it will be called synchronously. Once the call to libvirt ends, the error can be detected by the return value and the full information for the last logged error can be retrieved.

    To avoid as much as possible troubles with a global variable in a +the last logged error can be retrieved.

    +

    To avoid as much as possible troubles with a global variable in a multithreaded environment, libvirt will associate when possible the errors to the current connection they are related to, that way the error is stored in a dynamic structure which can be made thread specific. Error callback can be -set specifically to a connection with

    So error handling in the code is the following:

    1. if the error can be associated to a connection for example when failing +set specifically to a connection with

      +

      So error handling in the code is the following:

      +
      1. if the error can be associated to a connection for example when failing to look up a domain
        1. if there is a callback associated to the connection set with virConnSetErrorFunc, - call it with the error information
        2. -
        3. otherwise if there is a global callback set with virSetErrorFunc, - call it with the error information
        4. -
        5. otherwise call virDefaultErrorFunc + call it with the error information
        6. otherwise if there is a global callback set with virSetErrorFunc, + call it with the error information
        7. otherwise call virDefaultErrorFunc which is the default error function of the library issuing the error - on stderr
        8. -
        9. save the error in the connection for later retrieval with virConnGetLastError
        10. -
      2. -
      3. otherwise like when failing to create an hypervisor connection: + on stderr
      4. save the error in the connection for later retrieval with virConnGetLastError
    2. otherwise like when failing to create an hypervisor connection:
      1. if there is a global callback set with virSetErrorFunc, - call it with the error information
      2. -
      3. otherwise call virDefaultErrorFunc + call it with the error information
      4. otherwise call virDefaultErrorFunc which is the default error function of the library issuing the error - on stderr
      5. -
      6. save the error in the connection for later retrieval with virGetLastError
      7. -
    3. -

    In all cases the error information is provided as a virErrorPtr pointer to + on stderr

  • save the error in the connection for later retrieval with virGetLastError
  • +

    In all cases the error information is provided as a virErrorPtr pointer to read-only structure virError containing the -following fields:

    • code: an error number from the virErrorNumber - enum
    • -
    • domain: an enum indicating which part of libvirt raised the error see - virErrorDomain
    • -
    • level: the error level, usually VIR_ERR_ERROR, though there is room for - warnings like VIR_ERR_WARNING
    • -
    • message: the full human-readable formatted string of the error
    • -
    • conn: if available a pointer to the virConnectPtr - connection to the hypervisor where this happened
    • -
    • dom: if available a pointer to the virDomainPtr domain - targeted in the operation
    • -

    and then extra raw information about the error which may be initialized -to 0 or NULL if unused

    • str1, str2, str3: string information, usually str1 is the error - message format
    • -
    • int1, int2: integer information
    • -

    So usually, setting up specific error handling with libvirt consist of +following fields:

    +
    • code: an error number from the virErrorNumber + enum
    • domain: an enum indicating which part of libvirt raised the error see + virErrorDomain
    • level: the error level, usually VIR_ERR_ERROR, though there is room for + warnings like VIR_ERR_WARNING
    • message: the full human-readable formatted string of the error
    • conn: if available a pointer to the virConnectPtr + connection to the hypervisor where this happened
    • dom: if available a pointer to the virDomainPtr domain + targeted in the operation
    +

    and then extra raw information about the error which may be initialized +to 0 or NULL if unused

    +
    • str1, str2, str3: string information, usually str1 is the error + message format
    • int1, int2: integer information
    +

    So usually, setting up specific error handling with libvirt consist of registering an handler with with virSetErrorFunc or with virConnSetErrorFunc, check the value of the code value, take appropriate action, if needed let @@ -57,13 +72,74 @@ For asynchronous error handing, set such a function doing nothing to avoid the error being reported on stderr, and call virConnGetLastError or virGetLastError when an API call returned an error value. It can be a good idea to use virResetError or virConnResetLastError -once an error has been processed fully.

    At the python level, there only a global reporting callback function at -this point, see the error.py example about it:

    def handler(ctxt, err):
    +once an error has been processed fully.

    +

    At the python level, there only a global reporting callback function at +this point, see the error.py example about it:

    +
    def handler(ctxt, err):
         global errno
     
         #print "handler(%s, %s)" % (ctxt, err)
         errno = err
     
    -libvirt.registerErrorHandler(handler, 'context') 

    the second argument to the registerErrorHandler function is passed as the +libvirt.registerErrorHandler(handler, 'context')

    +

    the second argument to the registerErrorHandler function is passed as the first argument of the callback like in the C version. The error is a tuple -containing the same field as a virError in C, but cast to Python.

    +containing the same field as a virError in C, but cast to Python.

    +
    + +
    + + + diff --git a/docs/errors.html.in b/docs/errors.html.in new file mode 100644 index 0000000000..57cbe6f122 --- /dev/null +++ b/docs/errors.html.in @@ -0,0 +1,83 @@ + + + +

    Handling of errors

    +

    The main goals of libvirt when it comes to error handling are:

    +
      +
    • provide as much detail as possible
    • +
    • provide the information as soon as possible
    • +
• don't force the library user into one style of error handling
    • +
    +

As a result the library provides both synchronous, callback-based and asynchronous error reporting. When an error happens in the library code the error is logged, allowing it to be retrieved later, and if the user registered an error callback it will be called synchronously. Once the call to libvirt ends, the error can be detected by the return value and the full information for the last logged error can be retrieved.

    +

To avoid as much as possible problems with a global variable in a multithreaded environment, libvirt will when possible associate errors with the current connection they are related to; that way the error is stored in a dynamic structure which can be made thread specific. An error callback can be set specifically for a connection with virConnSetErrorFunc.

    +

    So error handling in the code is the following:

    +
      +
    1. if the error can be associated to a connection for example when failing + to look up a domain +
      1. if there is a callback associated to the connection set with virConnSetErrorFunc, + call it with the error information
      2. otherwise if there is a global callback set with virSetErrorFunc, + call it with the error information
      3. otherwise call virDefaultErrorFunc + which is the default error function of the library issuing the error + on stderr
      4. save the error in the connection for later retrieval with virConnGetLastError
    2. +
3. otherwise, for example when failing to create a hypervisor connection:
      1. if there is a global callback set with virSetErrorFunc, + call it with the error information
      2. otherwise call virDefaultErrorFunc + which is the default error function of the library issuing the error + on stderr
      3. save the error in the connection for later retrieval with virGetLastError
    4. +
    +

    In all cases the error information is provided as a virErrorPtr pointer to +read-only structure virError containing the +following fields:

    +
      +
    • code: an error number from the virErrorNumber + enum
    • +
    • domain: an enum indicating which part of libvirt raised the error see + virErrorDomain
    • +
    • level: the error level, usually VIR_ERR_ERROR, though there is room for + warnings like VIR_ERR_WARNING
    • +
    • message: the full human-readable formatted string of the error
    • +
    • conn: if available a pointer to the virConnectPtr + connection to the hypervisor where this happened
    • +
    • dom: if available a pointer to the virDomainPtr domain + targeted in the operation
    • +
    +

    and then extra raw information about the error which may be initialized +to 0 or NULL if unused

    +
      +
    • str1, str2, str3: string information, usually str1 is the error + message format
    • +
    • int1, int2: integer information
    • +
    +

So usually, setting up specific error handling with libvirt consists of registering a handler with virSetErrorFunc or with virConnSetErrorFunc, checking the value of the code field, taking appropriate action, and if needed letting libvirt print the error on stderr by calling virDefaultErrorFunc. For asynchronous error handling, set such a function doing nothing to avoid the error being reported on stderr, and call virConnGetLastError or virGetLastError when an API call returns an error value. It can be a good idea to use virResetError or virConnResetLastError once an error has been processed fully.

    +

At the Python level, there is only a global reporting callback function at this point; see the error.py example:

    +
    def handler(ctxt, err):
    +    global errno
    +
    +    #print "handler(%s, %s)" % (ctxt, err)
    +    errno = err
    +
    +libvirt.registerErrorHandler(handler, 'context') 
    +

the second argument to the registerErrorHandler function is passed as the first argument of the callback, as in the C version. The error is a tuple containing the same fields as a virError in C, but cast to Python.

    + + diff --git a/docs/footer_corner.png b/docs/footer_corner.png new file mode 100644 (binary PNG image data omitted) diff --git a/docs/footer_pattern.png b/docs/footer_pattern.png new file mode 100644 (binary PNG image data omitted) diff --git a/docs/format.html b/docs/format.html -XML Format

    XML Format

    This section describes the XML format used to represent domains, there are -variations on the format based on the kind of domains run and the options -used to launch them:

    The formats try as much as possible to follow the same structure and reuse -elements and attributes where it makes sense.

    Normal paravirtualized Xen -guests:

    The library uses an XML format to describe domains, as input to virDomainCreateLinux() and as the output of virDomainGetXMLDesc(). The following is an example of the format as returned by the shell command virsh xmldump fc4, where fc4 was one of the running domains:

    <domain type='xen' id='18'>
    -  <name>fc4</name>
    -  <os>
    -    <type>linux</type>
    -    <kernel>/boot/vmlinuz-2.6.15-1.43_FC5guest</kernel>
    -    <initrd>/boot/initrd-2.6.15-1.43_FC5guest.img</initrd>
    -    <root>/dev/sda1</root>
    -    <cmdline> ro selinux=0 3</cmdline>
    -  </os>
    -  <memory>131072</memory>
    -  <vcpu>1</vcpu>
    -  <devices>
    -    <disk type='file'>
    -      <source file='/u/fc4.img'/>
    -      <target dev='sda1'/>
    -    </disk>
    -    <interface type='bridge'>
    -      <source bridge='xenbr0'/>
    -      <mac address='aa:00:00:00:00:11'/>
    -      <script path='/etc/xen/scripts/vif-bridge'/>
    -    </interface>
    -    <console tty='/dev/pts/5'/>
    -  </devices>
    -</domain>
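    Such a dump can also be produced programmatically. The following is a hedged C sketch, not part of the original page: the domain name fc4 is taken from the example above and the read-only connection to the default hypervisor is an assumption.

    #include <stdio.h>
    #include <stdlib.h>
    #include <libvirt/libvirt.h>

    int main(void)
    {
        virConnectPtr conn = virConnectOpenReadOnly(NULL);  /* default hypervisor */
        if (conn == NULL)
            return 1;

        virDomainPtr dom = virDomainLookupByName(conn, "fc4");
        if (dom != NULL) {
            char *xml = virDomainGetXMLDesc(dom, 0);  /* caller must free the string */
            if (xml != NULL) {
                printf("%s\n", xml);
                free(xml);
            }
            virDomainFree(dom);
        }

        virConnectClose(conn);
        return 0;
    }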

    The root element must be called domain with no namespace, the type attribute indicates the kind of hypervisor used, 'xen' is the default value. The id attribute gives the domain id at runtime (note however that this may change, for example if the domain is saved to disk and restored). The domain has a few children whose order is not significant:

    • name: the domain name, preferably ASCII based
    • -
    • memory: the maximum memory allocated to the domain in kilobytes
    • -
    • vcpu: the number of virtual cpu configured for the domain
    • -
    • os: a block describing the Operating System, its content will be - dependent on the OS type -
      • type: indicate the OS type, always linux at this point
      • -
      • kernel: path to the kernel on the Domain 0 filesystem
      • -
      • initrd: an optional path for the init ramdisk on the Domain 0 - filesystem
      • -
      • cmdline: optional command line to the kernel
      • -
      • root: the root filesystem from the guest viewpoint, it may be - passed as part of the cmdline content too
      • -
    • -
    • devices: a list of disk, interface and - console descriptions in no special order
    • -

    The format of the devices and their type may grow over time, but the -following should be sufficient for basic use:

    A disk device indicates a block device, it can have two -values for the type attribute either 'file' or 'block' corresponding to the 2 -options available at the Xen layer. It has two mandatory children, and one -optional one in no specific order:

    • source with a file attribute containing the path in Domain 0 to the - file or a dev attribute if using a block device, containing the device - name ('hda5' or '/dev/hda5')
    • -
    • target indicates in a dev attribute the device where it is mapped in - the guest
    • -
    • readonly an optional empty element indicating the device is - read-only
    • -
    • shareable an optional empty element indicating the device - can be used read/write with other domains
    • -

    An interface element describes a network device mapped on the guest. It also has a type whose value is currently 'bridge', and it has a number of children in no specific order:

    • source: indicating the bridge name
    • mac: the optional mac address provided in the address attribute
    • ip: the optional IP address provided in the address attribute
    • script: the script used to bridge the interface in the Domain 0
    • target: an optional target indicating the device name.

    A console element describes a serial console connection to -the guest. It has no children, and a single attribute tty which -provides the path to the Pseudo TTY on which the guest console can be -accessed

    Life cycle actions for the domain can also be expressed in the XML format; they drive what should happen if the domain crashes, is rebooted or is powered off. There are various actions possible when this happens:

    • destroy: The domain is cleaned up (that's the default normal processing - in Xen)
    • -
    • restart: A new domain is started in place of the old one with the same - configuration parameters
    • -
    • preserve: The domain will remain in memory until it is destroyed - manually, it won't be running but allows for post-mortem debugging
    • -
    • rename-restart: a variant of the previous one but where the old domain - is renamed before being saved to allow a restart
    • -

    The following could be used for a Xen production system:

    <domain>
    -  ...
    -  <on_reboot>restart</on_reboot>
    -  <on_poweroff>destroy</on_poweroff>
    -  <on_crash>rename-restart</on_crash>
    -  ...
    -</domain>

    While the format may be extended in various ways as support for more -hypervisor types and features are added, it is expected that this core subset -will remain functional in spite of the evolution of the library.

    Fully virtualized guests -(added in 0.1.3):

    Here is an example of a domain description used to start a fully virtualized (a.k.a. HVM) Xen domain. This requires hardware virtualization support at the processor level but allows running unmodified operating systems:

    <domain type='xen' id='3'>
    -  <name>fv0</name>
    -  <uuid>4dea22b31d52d8f32516782e98ab3fa0</uuid>
    -  <os>
    -    <type>hvm</type>
    -    <loader>/usr/lib/xen/boot/hvmloader</loader>
    -    <boot dev='hd'/>
    -  </os>
    -  <memory>524288</memory>
    -  <vcpu>1</vcpu>
    -  <on_poweroff>destroy</on_poweroff>
    -  <on_reboot>restart</on_reboot>
    -  <on_crash>restart</on_crash>
    -  <features>
    -     <pae/>
    -     <acpi/>
    -     <apic/>
    -  </features>
    -  <clock sync="localtime"/>
    -  <devices>
    -    <emulator>/usr/lib/xen/bin/qemu-dm</emulator>
    -    <interface type='bridge'>
    -      <source bridge='xenbr0'/>
    -      <mac address='00:16:3e:5d:c7:9e'/>
    -      <script path='vif-bridge'/>
    -    </interface>
    -    <disk type='file'>
    -      <source file='/root/fv0'/>
    -      <target dev='hda'/>
    -    </disk>
    -    <disk type='file' device='cdrom'>
    -      <source file='/root/fc5-x86_64-boot.iso'/>
    -      <target dev='hdc'/>
    -      <readonly/>
    -    </disk>
    -    <disk type='file' device='floppy'>
    -      <source file='/root/fd.img'/>
    -      <target dev='fda'/>
    -    </disk>
    -    <graphics type='vnc' port='5904'/>
    -  </devices>
    -</domain>

    There are a few things to notice specifically for HVM domains:

    • the optional <features> block is used to enable - certain guest CPU / system features. For HVM guests the following - features are defined: -
      • pae - enable PAE memory addressing
      • -
      • apic - enable IO APIC
      • -
      • acpi - enable ACPI bios
      • -
    • -
    • the optional <clock> element is used to specify - whether the emulated BIOS clock in the guest is synced to either - localtime or utc. In general Windows will - want localtime while all other operating systems will - want utc. The default is thus utc
    • -
    • the <os> block description is very different, first - it indicates that the type is 'hvm' for hardware virtualization, then - instead of a kernel, boot and command line arguments, it points to an os - boot loader which will extract the boot information from the boot device - specified in a separate boot element. The dev attribute on - the boot tag can be one of: -
      • fd - boot from first floppy device
      • -
      • hd - boot from first harddisk device
      • -
      • cdrom - boot from first cdrom device
      • -
    • -
    • the <devices> section includes an emulator entry - pointing to an additional program in charge of emulating the devices
    • -
    • the disk entry indicates in the dev target section that the emulation
      for the drive is the first IDE disk device hda. The list of device names
      supported is dependent on the Hypervisor, but for Xen it can be any IDE
      device hda-hdd, or a floppy device fda, fdb. The <disk> element
      also supports a 'device' attribute to indicate what kind of hardware to
      emulate. The following values are supported:
      • floppy - a floppy disk controller
      • disk - a generic hard drive (the default if omitted)
      • cdrom - a CDROM device
      For Xen 3.0.2 and earlier a CDROM device can only be emulated on the
      hdc channel, while for 3.0.3 and later, it can be emulated on any
      IDE channel.
    • -
    • the <devices> section also includes at least one entry for the graphics device used to render the OS. Currently just 2 types are possible, 'vnc' or 'sdl'. If the type is 'vnc', then an additional port attribute will be present indicating the TCP port on which the VNC server is accepting client connections.
    • -

    It is likely that the HVM description gets additional optional elements -and attributes as the support for fully virtualized domain expands, -especially for the variety of devices emulated and the graphic support -options offered.

    KVM domain (added in 0.2.0)

    Support for the KVM virtualization -is provided in recent Linux kernels (2.6.20 and onward). This requires -specific hardware with acceleration support and the availability of the -special version of the QEmu binary. Since this -relies on QEmu for the machine emulation like fully virtualized guests the -XML description is quite similar, here is a simple example:

    <domain type='kvm'>
    -  <name>demo2</name>
    -  <uuid>4dea24b3-1d52-d8f3-2516-782e98a23fa0</uuid>
    -  <memory>131072</memory>
    -  <vcpu>1</vcpu>
    -  <os>
    -    <type>hvm</type>
    -  </os>
    -  <clock sync="localtime"/>
    -  <devices>
    -    <emulator>/home/user/usr/kvm-devel/bin/qemu-system-x86_64</emulator>
    -    <disk type='file' device='disk'>
    -      <source file='/home/user/fedora/diskboot.img'/>
    -      <target dev='hda'/>
    -    </disk>
    -    <interface type='user'>
    -      <mac address='24:42:53:21:52:45'/>
    -    </interface>
    -    <graphics type='vnc' port='-1'/>
    -  </devices>
    -</domain>

    The specific points to note if using KVM are:

    • the top level domain element carries a type of 'kvm'
    • -
    • the <clock> optional is supported as with Xen HVM
    • -
    • the <devices> emulator points to the special qemu binary required - for KVM
    • -
    • networking interface definitions are somewhat different due to a different model from Xen, see below
    • -

    Except for those points, the options should be quite similar to the Xen HVM ones.
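    For completeness, a minimal hedged sketch of selecting the QEmu/KVM driver from C rather than Xen; the qemu:///system URI is the privileged system instance, everything else here is illustrative:

    #include <stdio.h>
    #include <libvirt/libvirt.h>

    int main(void)
    {
        /* "qemu:///system" selects the system-wide QEmu/KVM driver instance,
           "qemu:///session" would select the per-user one. */
        virConnectPtr conn = virConnectOpen("qemu:///system");
        if (conn == NULL) {
            fprintf(stderr, "failed to connect to qemu:///system\n");
            return 1;
        }
        printf("connected to the %s driver\n", virConnectGetType(conn));
        virConnectClose(conn);
        return 0;
    }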

    Networking options for QEmu and KVM (added in 0.2.0)

    The networking support in the QEmu and KVM case is more flexible, and supports a variety of options:

    1. Userspace SLIRP stack -

      Provides a virtual LAN with NAT to the outside world. The virtual - network has DHCP & DNS services and will give the guest VM addresses - starting from 10.0.2.15. The default router will be - 10.0.2.2 and the DNS server will be 10.0.2.3. - This networking is the only option for unprivileged users who need their - VMs to have outgoing access. Example configs are:

      -
      <interface type='user'/>
      -
      -<interface type='user'>
      -  <mac address="11:22:33:44:55:66"/>
      -</interface>
      -    
      -
    2. -
    3. Virtual network -

      Provides a virtual network using a bridge device in the host. - Depending on the virtual network configuration, the network may be - totally isolated, NAT'ing to an explicit network device, or NAT'ing to - the default route. DHCP and DNS are provided on the virtual network in - all cases and the IP range can be determined by examining the virtual - network config with 'virsh net-dumpxml <network - name>'. There is one virtual network called 'default' setup out - of the box which does NAT'ing to the default route and has an IP range of - 192.168.22.0/255.255.255.0. Each guest will have an - associated tun device created with a name of vnetN, which can also be - overridden with the <target> element. Example configs are:

      -
      <interface type='network'>
      -  <source network='default'/>
      -</interface>
      -
      -<interface type='network'>
      -  <source network='default'/>
      -  <target dev='vnet7'/>
      -  <mac address="11:22:33:44:55:66"/>
      -</interface>
      -    
      -
    4. -
    5. Bridge to LAN

      Provides a bridge from the VM directly onto the LAN. This assumes - there is a bridge device on the host which has one or more of the hosts - physical NICs enslaved. The guest VM will have an associated tun device - created with a name of vnetN, which can also be overridden with the - <target> element. The tun device will be enslaved to the bridge. - The IP range / network configuration is whatever is used on the LAN. This - provides the guest VM full incoming & outgoing net access just like a - physical machine. Examples include:

      -
      <interface type='bridge'>
      - <source bridge='br0'/>
      -</interface>
      -
      -<interface type='bridge'>
      -  <source bridge='br0'/>
      -  <target dev='vnet7'/>
      -  <mac address="11:22:33:44:55:66"/>
      -</interface>
      -
    6. -
    7. Generic connection to LAN -

      Provides a means for the administrator to execute an arbitrary script - to connect the guest's network to the LAN. The guest will have a tun - device created with a name of vnetN, which can also be overridden with the - <target> element. After creating the tun device a shell script will - be run which is expected to do whatever host network integration is - required. By default this script is called /etc/qemu-ifup but can be - overridden.

      -
      <interface type='ethernet'/>
      -
      -<interface type='ethernet'>
      -  <target dev='vnet7'/>
      -  <script path='/etc/qemu-ifup-mynet'/>
      -</interface>
      -
    8. -
    9. Multicast tunnel -

      A multicast group is setup to represent a virtual network. Any VMs - whose network devices are in the same multicast group can talk to each - other even across hosts. This mode is also available to unprivileged - users. There is no default DNS or DHCP support and no outgoing network - access. To provide outgoing network access, one of the VMs should have a - 2nd NIC which is connected to one of the first 4 network types and do the - appropriate routing. The multicast protocol is compatible with that used - by user mode linux guests too. The source address used must be from the - multicast address block.

      -
      <interface type='mcast'>
      -  <source address='230.0.0.1' port='5558'/>
      -</interface>
      -
    10. -
    11. TCP tunnel -

      A TCP client/server architecture provides a virtual network. One VM - provides the server end of the network, all other VMS are configured as - clients. All network traffic is routed between the VMs via the server. - This mode is also available to unprivileged users. There is no default - DNS or DHCP support and no outgoing network access. To provide outgoing - network access, one of the VMs should have a 2nd NIC which is connected - to one of the first 4 network types and do the appropriate routing.

      -

      Example server config:

      -
      <interface type='server'>
      -  <source address='192.168.0.1' port='5558'/>
      -</interface>
      -

      Example client config:

      -
      <interface type='client'>
      -  <source address='192.168.0.1' port='5558'/>
      -</interface>
      -
    12. -

    To be noted, options 2, 3, 4 are also supported by Xen VMs, so it is -possible to use these configs to have networking with both Xen & -QEMU/KVMs connected to each other.

    QEmu domain (added in 0.2.0)

    Libvirt support for KVM and QEmu is the same code base with only minor changes. As a result the configuration is nearly identical; the only changes are related to QEmu's ability to emulate various CPU types and hardware platforms, and to kqemu support (QEmu's own kernel accelerator, used when the emulated CPU is i686, as is the target machine):

    <domain type='qemu'>
    -  <name>QEmu-fedora-i686</name>
    -  <uuid>c7a5fdbd-cdaf-9455-926a-d65c16db1809</uuid>
    -  <memory>219200</memory>
    -  <currentMemory>219200</currentMemory>
    -  <vcpu>2</vcpu>
    -  <os>
    -    <type arch='i686' machine='pc'>hvm</type>
    -    <boot dev='cdrom'/>
    -  </os>
    -  <devices>
    -    <emulator>/usr/bin/qemu</emulator>
    -    <disk type='file' device='cdrom'>
    -      <source file='/home/user/boot.iso'/>
    -      <target dev='hdc'/>
    -      <readonly/>
    -    </disk>
    -    <disk type='file' device='disk'>
    -      <source file='/home/user/fedora.img'/>
    -      <target dev='hda'/>
    -    </disk>
    -    <interface type='network'>
    -      <source name='default'/>
    -    </interface>
    -    <graphics type='vnc' port='-1'/>
    -  </devices>
    -</domain>

    The differences here are:

    • the value of type on top-level domain, it's 'qemu' or kqemu if asking - for kernel assisted - acceleration
    • -
    • the os type block defines the architecture to be emulated, and - optionally the machine type, see the discovery API below
    • -
    • the emulator string must point to the right emulator for that - architecture
    • -

    Discovering virtualization capabilities (Added in 0.2.1)

    As new virtualization engine support gets added to libvirt, and to handle -cases like QEmu supporting a variety of emulations, a query interface has -been added in 0.2.1 allowing to list the set of supported virtualization -capabilities on the host:

        char * virConnectGetCapabilities (virConnectPtr conn);

    The value returned is an XML document listing the virtualization -capabilities of the host and virtualization engine to which -@conn is connected. One can test it using virsh -command line tool command 'capabilities', it dumps the XML -associated to the current connection. For example in the case of a 64 bits -machine with hardware virtualization capabilities enabled in the chip and -BIOS you will see

    <capabilities>
    -  <host>
    -    <cpu>
    -      <arch>x86_64</arch>
    -      <features>
    -        <vmx/>
    -      </features>
    -    </cpu>
    -  </host>
    -
    -  <!-- xen-3.0-x86_64 -->
    -  <guest>
    -    <os_type>xen</os_type>
    -    <arch name="x86_64">
    -      <wordsize>64</wordsize>
    -      <domain type="xen"></domain>
    -      <emulator>/usr/lib64/xen/bin/qemu-dm</emulator>
    -    </arch>
    -    <features>
    -    </features>
    -  </guest>
    -
    -  <!-- hvm-3.0-x86_32 -->
    -  <guest>
    -    <os_type>hvm</os_type>
    -    <arch name="i686">
    -      <wordsize>32</wordsize>
    -      <domain type="xen"></domain>
    -      <emulator>/usr/lib/xen/bin/qemu-dm</emulator>
    -      <machine>pc</machine>
    -      <machine>isapc</machine>
    -      <loader>/usr/lib/xen/boot/hvmloader</loader>
    -    </arch>
    -    <features>
    -    </features>
    -  </guest>
    -  ...
    -</capabilities>

    The first block (in red) indicates the host hardware capabilities, currently -it is limited to the CPU properties but other information may be available, -it shows the CPU architecture, and the features of the chip (the feature -block is similar to what you will find in a Xen fully virtualized domain -description).

    The second block (in blue) indicates the paravirtualization support of the -Xen support, you will see the os_type of xen to indicate a paravirtual -kernel, then architecture information and potential features.

    The third block (in green) gives similar information but when running a -32 bit OS fully virtualized with Xen using the hvm support.

    This section is likely to be updated and augmented in the future, see the -discussion which led to the capabilities format in the mailing-list -archives.

    + + + + + + + libvirt: XML Format + + + + +
    +
    +

    XML Format

    +
    + +
    + + + diff --git a/docs/format.html.in b/docs/format.html.in new file mode 100644 index 0000000000..13061088d6 --- /dev/null +++ b/docs/format.html.in @@ -0,0 +1,9 @@ + + + +

    XML Format

    + + + + + diff --git a/docs/formatcaps.html b/docs/formatcaps.html new file mode 100644 index 0000000000..e61e5d81be --- /dev/null +++ b/docs/formatcaps.html @@ -0,0 +1,172 @@ + + + + + + + + + libvirt: Driver capabilities XML format + + + + +
    +
    +

    Driver capabilities XML format

    +

    As new virtualization engine support gets added to libvirt, and to handle +cases like QEmu supporting a variety of emulations, a query interface has +been added in 0.2.1 allowing to list the set of supported virtualization +capabilities on the host:

    +
        char * virConnectGetCapabilities (virConnectPtr conn);
    +

    The value returned is an XML document listing the virtualization +capabilities of the host and virtualization engine to which +@conn is connected. One can test it using virsh +command line tool command 'capabilities', it dumps the XML +associated to the current connection. For example in the case of a 64 bits +machine with hardware virtualization capabilities enabled in the chip and +BIOS you will see

    +
    <capabilities>
    +  <host>
    +    <cpu>
    +      <arch>x86_64</arch>
    +      <features>
    +        <vmx/>
    +      </features>
    +    </cpu>
    +  </host>
    +
    +  <!-- xen-3.0-x86_64 -->
    +  <guest>
    +    <os_type>xen</os_type>
    +    <arch name="x86_64">
    +      <wordsize>64</wordsize>
    +      <domain type="xen"></domain>
    +      <emulator>/usr/lib64/xen/bin/qemu-dm</emulator>
    +    </arch>
    +    <features>
    +    </features>
    +  </guest>
    +
    +  <!-- hvm-3.0-x86_32 -->
    +  <guest>
    +    <os_type>hvm</os_type>
    +    <arch name="i686">
    +      <wordsize>32</wordsize>
    +      <domain type="xen"></domain>
    +      <emulator>/usr/lib/xen/bin/qemu-dm</emulator>
    +      <machine>pc</machine>
    +      <machine>isapc</machine>
    +      <loader>/usr/lib/xen/boot/hvmloader</loader>
    +    </arch>
    +    <features>
    +    </features>
    +  </guest>
    +  ...
    +</capabilities>
    +

    The first block (in red) indicates the host hardware capabilities, currently +it is limited to the CPU properties but other information may be available, +it shows the CPU architecture, and the features of the chip (the feature +block is similar to what you will find in a Xen fully virtualized domain +description).

    +

    The second block (in blue) indicates the paravirtualization support of the +Xen support, you will see the os_type of xen to indicate a paravirtual +kernel, then architecture information and potential features.

    +

    The third block (in green) gives similar information but when running a +32 bit OS fully virtualized with Xen using the hvm support.

    +

    This section is likely to be updated and augmented in the future, see the +discussion which led to the capabilities format in the mailing-list +archives.

    +
    + +
    + + + diff --git a/docs/formatcaps.html.in b/docs/formatcaps.html.in new file mode 100644 index 0000000000..36718d93c1 --- /dev/null +++ b/docs/formatcaps.html.in @@ -0,0 +1,70 @@ + + +

    Driver capabilities XML format

    + +

    As new virtualization engine support gets added to libvirt, and to handle +cases like QEmu supporting a variety of emulations, a query interface has +been added in 0.2.1 allowing to list the set of supported virtualization +capabilities on the host:

    +
        char * virConnectGetCapabilities (virConnectPtr conn);
    +

    The value returned is an XML document listing the virtualization +capabilities of the host and virtualization engine to which +@conn is connected. One can test it using virsh +command line tool command 'capabilities', it dumps the XML +associated to the current connection. For example in the case of a 64 bits +machine with hardware virtualization capabilities enabled in the chip and +BIOS you will see

    +
    <capabilities>
    +  <host>
    +    <cpu>
    +      <arch>x86_64</arch>
    +      <features>
    +        <vmx/>
    +      </features>
    +    </cpu>
    +  </host>
    +
    +  <!-- xen-3.0-x86_64 -->
    +  <guest>
    +    <os_type>xen</os_type>
    +    <arch name="x86_64">
    +      <wordsize>64</wordsize>
    +      <domain type="xen"></domain>
    +      <emulator>/usr/lib64/xen/bin/qemu-dm</emulator>
    +    </arch>
    +    <features>
    +    </features>
    +  </guest>
    +
    +  <!-- hvm-3.0-x86_32 -->
    +  <guest>
    +    <os_type>hvm</os_type>
    +    <arch name="i686">
    +      <wordsize>32</wordsize>
    +      <domain type="xen"></domain>
    +      <emulator>/usr/lib/xen/bin/qemu-dm</emulator>
    +      <machine>pc</machine>
    +      <machine>isapc</machine>
    +      <loader>/usr/lib/xen/boot/hvmloader</loader>
    +    </arch>
    +    <features>
    +    </features>
    +  </guest>
    +  ...
    +</capabilities>
    +

    The first block (in red) indicates the host hardware capabilities, currently +it is limited to the CPU properties but other information may be available, +it shows the CPU architecture, and the features of the chip (the feature +block is similar to what you will find in a Xen fully virtualized domain +description).

    +

    The second block (in blue) indicates the paravirtualization support of the +Xen support, you will see the os_type of xen to indicate a paravirtual +kernel, then architecture information and potential features.

    +

    The third block (in green) gives similar information but when running a +32 bit OS fully virtualized with Xen using the hvm support.

    +

    This section is likely to be updated and augmented in the future, see the +discussion which led to the capabilities format in the mailing-list +archives.
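    In the meantime, here is a minimal hedged C sketch of fetching this capabilities document; the read-only connection to the default hypervisor is an assumption:

    #include <stdio.h>
    #include <stdlib.h>
    #include <libvirt/libvirt.h>

    int main(void)
    {
        virConnectPtr conn = virConnectOpenReadOnly(NULL);
        if (conn == NULL)
            return 1;

        char *caps = virConnectGetCapabilities(conn);  /* XML document, caller frees it */
        if (caps != NULL) {
            printf("%s\n", caps);
            free(caps);
        }

        virConnectClose(conn);
        return 0;
    }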

    + + + diff --git a/docs/formatdomain.html b/docs/formatdomain.html new file mode 100644 index 0000000000..5239d51c67 --- /dev/null +++ b/docs/formatdomain.html @@ -0,0 +1,314 @@ + + + + + + + + + libvirt: Domain XML format + + + + +
    +
    +

    Domain XML format

    +

    This section describes the XML format used to represent domains, there are +variations on the format based on the kind of domains run and the options +used to launch them:

    +

    Normal paravirtualized Xen +guests:

    +

    The root element must be called domain with no namespace, the type attribute indicates the kind of hypervisor used, 'xen' is the default value. The id attribute gives the domain id at runtime (note however that this may change, for example if the domain is saved to disk and restored). The domain has a few children whose order is not significant:

    +
    • name: the domain name, preferably ASCII based
    • memory: the maximum memory allocated to the domain in kilobytes
    • vcpu: the number of virtual cpu configured for the domain
    • os: a block describing the Operating System, its content will be + dependent on the OS type +
      • type: indicate the OS type, always linux at this point
      • kernel: path to the kernel on the Domain 0 filesystem
      • initrd: an optional path for the init ramdisk on the Domain 0 + filesystem
      • cmdline: optional command line to the kernel
      • root: the root filesystem from the guest viewpoint, it may be + passed as part of the cmdline content too
    • devices: a list of disk, interface and + console descriptions in no special order
    +

    The format of the devices and their type may grow over time, but the +following should be sufficient for basic use:

    +

    A disk device indicates a block device, it can have two +values for the type attribute either 'file' or 'block' corresponding to the 2 +options available at the Xen layer. It has two mandatory children, and one +optional one in no specific order:

    +
    • source with a file attribute containing the path in Domain 0 to the + file or a dev attribute if using a block device, containing the device + name ('hda5' or '/dev/hda5')
    • target indicates in a dev attribute the device where it is mapped in + the guest
    • readonly an optional empty element indicating the device is + read-only
    • shareable an optional empty element indicating the device + can be used read/write with other domains
    +

    An interface element describes a network device mapped on the guest. It also has a type whose value is currently 'bridge', and it has a number of children in no specific order:

    +
    • source: indicating the bridge name
    • mac: the optional mac address provided in the address attribute
    • ip: the optional IP address provided in the address attribute
    • script: the script used to bridge the interface in the Domain 0
    • target: an optional target indicating the device name.
    +

    A console element describes a serial console connection to +the guest. It has no children, and a single attribute tty which +provides the path to the Pseudo TTY on which the guest console can be +accessed

    +

    Life cycle actions for the domain can also be expressed in the XML format; they drive what should happen if the domain crashes, is rebooted or is powered off. There are various actions possible when this happens:

    +
    • destroy: The domain is cleaned up (that's the default normal processing + in Xen)
    • restart: A new domain is started in place of the old one with the same + configuration parameters
    • preserve: The domain will remain in memory until it is destroyed + manually, it won't be running but allows for post-mortem debugging
    • rename-restart: a variant of the previous one but where the old domain + is renamed before being saved to allow a restart
    +

    The following could be used for a Xen production system:

    +
    <domain>
    +  ...
    +  <on_reboot>restart</on_reboot>
    +  <on_poweroff>destroy</on_poweroff>
    +  <on_crash>rename-restart</on_crash>
    +  ...
    +</domain>
    +

    While the format may be extended in various ways as support for more +hypervisor types and features are added, it is expected that this core subset +will remain functional in spite of the evolution of the library.

    +

    + Fully virtualized guests +

    +

    There are a few things to notice specifically for HVM domains:

    +
    • the optional <features> block is used to enable + certain guest CPU / system features. For HVM guests the following + features are defined: +
      • pae - enable PAE memory addressing
      • apic - enable IO APIC
      • acpi - enable ACPI bios
    • the optional <clock> element is used to specify + whether the emulated BIOS clock in the guest is synced to either + localtime or utc. In general Windows will + want localtime while all other operating systems will + want utc. The default is thus utc
    • the <os> block description is very different, first + it indicates that the type is 'hvm' for hardware virtualization, then + instead of a kernel, boot and command line arguments, it points to an os + boot loader which will extract the boot information from the boot device + specified in a separate boot element. The dev attribute on + the boot tag can be one of: +
      • fd - boot from first floppy device
      • hd - boot from first harddisk device
      • cdrom - boot from first cdrom device
    • the <devices> section includes an emulator entry + pointing to an additional program in charge of emulating the devices
    • the disk entry indicates in the dev target section that the emulation
      for the drive is the first IDE disk device hda. The list of device names
      supported is dependent on the Hypervisor, but for Xen it can be any IDE
      device hda-hdd, or a floppy device fda, fdb. The <disk> element
      also supports a 'device' attribute to indicate what kind of hardware to
      emulate. The following values are supported:
      • floppy - a floppy disk controller
      • disk - a generic hard drive (the default if omitted)
      • cdrom - a CDROM device
      For Xen 3.0.2 and earlier a CDROM device can only be emulated on the
      hdc channel, while for 3.0.3 and later, it can be emulated on any
      IDE channel.
    • the <devices> section also includes at least one entry for the graphics device used to render the OS. Currently just 2 types are possible, 'vnc' or 'sdl'. If the type is 'vnc', then an additional port attribute will be present indicating the TCP port on which the VNC server is accepting client connections.
    +

    It is likely that the HVM description gets additional optional elements +and attributes as the support for fully virtualized domain expands, +especially for the variety of devices emulated and the graphic support +options offered.

    +

    + Networking interface options +

    +

    The networking support in the QEmu and KVM case is more flexible, and supports a variety of options:

    +
    1. Userspace SLIRP stack +

      Provides a virtual LAN with NAT to the outside world. The virtual + network has DHCP & DNS services and will give the guest VM addresses + starting from 10.0.2.15. The default router will be + 10.0.2.2 and the DNS server will be 10.0.2.3. + This networking is the only option for unprivileged users who need their + VMs to have outgoing access. Example configs are:

      +
      <interface type='user'/>
      +
      +<interface type='user'>
      +  <mac address="11:22:33:44:55:66"/>
      +</interface>
      +    
      +
    2. Virtual network +

      Provides a virtual network using a bridge device in the host. + Depending on the virtual network configuration, the network may be + totally isolated, NAT'ing to an explicit network device, or NAT'ing to + the default route. DHCP and DNS are provided on the virtual network in + all cases and the IP range can be determined by examining the virtual + network config with 'virsh net-dumpxml <network + name>'. There is one virtual network called 'default' setup out + of the box which does NAT'ing to the default route and has an IP range of + 192.168.22.0/255.255.255.0. Each guest will have an + associated tun device created with a name of vnetN, which can also be + overridden with the <target> element. Example configs are:

      +
      <interface type='network'>
      +  <source network='default'/>
      +</interface>
      +
      +<interface type='network'>
      +  <source network='default'/>
      +  <target dev='vnet7'/>
      +  <mac address="11:22:33:44:55:66"/>
      +</interface>
      +    
      +
    3. Bridge to LAN

      Provides a bridge from the VM directly onto the LAN. This assumes + there is a bridge device on the host which has one or more of the hosts + physical NICs enslaved. The guest VM will have an associated tun device + created with a name of vnetN, which can also be overridden with the + <target> element. The tun device will be enslaved to the bridge. + The IP range / network configuration is whatever is used on the LAN. This + provides the guest VM full incoming & outgoing net access just like a + physical machine. Examples include:

      +
      <interface type='bridge'>
      + <source bridge='br0'/>
      +</interface>
      +
      +<interface type='bridge'>
      +  <source bridge='br0'/>
      +  <target dev='vnet7'/>
      +  <mac address="11:22:33:44:55:66"/>
      +</interface>
      +
    4. Generic connection to LAN +

      Provides a means for the administrator to execute an arbitrary script + to connect the guest's network to the LAN. The guest will have a tun + device created with a name of vnetN, which can also be overridden with the + <target> element. After creating the tun device a shell script will + be run which is expected to do whatever host network integration is + required. By default this script is called /etc/qemu-ifup but can be + overridden.

      +
      <interface type='ethernet'/>
      +
      +<interface type='ethernet'>
      +  <target dev='vnet7'/>
      +  <script path='/etc/qemu-ifup-mynet'/>
      +</interface>
      +
    5. Multicast tunnel +

      A multicast group is setup to represent a virtual network. Any VMs + whose network devices are in the same multicast group can talk to each + other even across hosts. This mode is also available to unprivileged + users. There is no default DNS or DHCP support and no outgoing network + access. To provide outgoing network access, one of the VMs should have a + 2nd NIC which is connected to one of the first 4 network types and do the + appropriate routing. The multicast protocol is compatible with that used + by user mode linux guests too. The source address used must be from the + multicast address block.

      +
      <interface type='mcast'>
      +  <source address='230.0.0.1' port='5558'/>
      +</interface>
      +
    6. TCP tunnel +

      A TCP client/server architecture provides a virtual network. One VM + provides the server end of the network, all other VMS are configured as + clients. All network traffic is routed between the VMs via the server. + This mode is also available to unprivileged users. There is no default + DNS or DHCP support and no outgoing network access. To provide outgoing + network access, one of the VMs should have a 2nd NIC which is connected + to one of the first 4 network types and do the appropriate routing.

      +

      Example server config:

      +
      <interface type='server'>
      +  <source address='192.168.0.1' port='5558'/>
      +</interface>
      +

      Example client config:

      +
      <interface type='client'>
      +  <source address='192.168.0.1' port='5558'/>
      +</interface>
      +
    +

    To be noted, options 2, 3, 4 are also supported by Xen VMs, so it is +possible to use these configs to have networking with both Xen & +QEMU/KVMs connected to each other.

    +

    Example configs

    +

    Example configurations for each driver are provided on the driver specific pages listed below

    + +
    + +
    + + + diff --git a/docs/formatdomain.html.in b/docs/formatdomain.html.in new file mode 100644 index 0000000000..aa4a9039fb --- /dev/null +++ b/docs/formatdomain.html.in @@ -0,0 +1,255 @@ + + +

    Domain XML format

    + +

    This section describes the XML format used to represent domains, there are +variations on the format based on the kind of domains run and the options +used to launch them:

    + +

    Normal paravirtualized Xen +guests:

    + +

    The root element must be called domain with no namespace, the type attribute indicates the kind of hypervisor used, 'xen' is the default value. The id attribute gives the domain id at runtime (note however that this may change, for example if the domain is saved to disk and restored). The domain has a few children whose order is not significant:

    +
      +
    • name: the domain name, preferably ASCII based
    • +
    • memory: the maximum memory allocated to the domain in kilobytes
    • +
    • vcpu: the number of virtual cpu configured for the domain
    • +
    • os: a block describing the Operating System, its content will be + dependent on the OS type +
      • type: indicate the OS type, always linux at this point
      • kernel: path to the kernel on the Domain 0 filesystem
      • initrd: an optional path for the init ramdisk on the Domain 0 + filesystem
      • cmdline: optional command line to the kernel
      • root: the root filesystem from the guest viewpoint, it may be + passed as part of the cmdline content too
    • +
    • devices: a list of disk, interface and + console descriptions in no special order
    • +
    +

    The format of the devices and their type may grow over time, but the +following should be sufficient for basic use:

    +

    A disk device indicates a block device, it can have two +values for the type attribute either 'file' or 'block' corresponding to the 2 +options available at the Xen layer. It has two mandatory children, and one +optional one in no specific order:

    +
      +
    • source with a file attribute containing the path in Domain 0 to the + file or a dev attribute if using a block device, containing the device + name ('hda5' or '/dev/hda5')
    • +
    • target indicates in a dev attribute the device where it is mapped in + the guest
    • +
    • readonly an optional empty element indicating the device is + read-only
    • +
    • shareable an optional empty element indicating the device + can be used read/write with other domains
    • +
    +

    An interface element describes a network device mapped on the guest. It also has a type whose value is currently 'bridge', and it has a number of children in no specific order:

    • source: indicating the bridge name
    • mac: the optional mac address provided in the address attribute
    • ip: the optional IP address provided in the address attribute
    • script: the script used to bridge the interface in the Domain 0
    • target: an optional target indicating the device name.

    A console element describes a serial console connection to +the guest. It has no children, and a single attribute tty which +provides the path to the Pseudo TTY on which the guest console can be +accessed

    +

    Life cycle actions for the domain can also be expressed in the XML format; they drive what should happen if the domain crashes, is rebooted or is powered off. There are various actions possible when this happens:

    +
      +
    • destroy: The domain is cleaned up (that's the default normal processing + in Xen)
    • +
    • restart: A new domain is started in place of the old one with the same + configuration parameters
    • +
    • preserve: The domain will remain in memory until it is destroyed + manually, it won't be running but allows for post-mortem debugging
    • +
    • rename-restart: a variant of the previous one but where the old domain + is renamed before being saved to allow a restart
    • +
    +

    The following could be used for a Xen production system:

    +
    <domain>
    +  ...
    +  <on_reboot>restart</on_reboot>
    +  <on_poweroff>destroy</on_poweroff>
    +  <on_crash>rename-restart</on_crash>
    +  ...
    +</domain>
    +
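    A hedged sketch of feeding such a document to libvirt from C; reading the XML from a file given on the command line and defining it persistently with virDomainDefineXML are illustrative choices, not part of the original page:

    #include <stdio.h>
    #include <stdlib.h>
    #include <libvirt/libvirt.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s domain.xml\n", argv[0]);
            return 1;
        }

        /* read the whole domain description into a buffer */
        FILE *fp = fopen(argv[1], "r");
        if (fp == NULL)
            return 1;
        static char xml[65536];
        size_t len = fread(xml, 1, sizeof(xml) - 1, fp);
        xml[len] = '\0';
        fclose(fp);

        virConnectPtr conn = virConnectOpen(NULL);
        if (conn == NULL)
            return 1;

        virDomainPtr dom = virDomainDefineXML(conn, xml);  /* persistent definition */
        if (dom == NULL) {
            fprintf(stderr, "failed to define the domain\n");
        } else {
            virDomainCreate(dom);  /* boot the freshly defined domain */
            virDomainFree(dom);
        }

        virConnectClose(conn);
        return 0;
    }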

    While the format may be extended in various ways as support for more +hypervisor types and features are added, it is expected that this core subset +will remain functional in spite of the evolution of the library.

    + +

    Fully virtualized guests

    +

    There are a few things to notice specifically for HVM domains:

    +
      +
    • the optional <features> block is used to enable + certain guest CPU / system features. For HVM guests the following + features are defined: +
      • pae - enable PAE memory addressing
      • apic - enable IO APIC
      • acpi - enable ACPI bios
    • +
    • the optional <clock> element is used to specify + whether the emulated BIOS clock in the guest is synced to either + localtime or utc. In general Windows will + want localtime while all other operating systems will + want utc. The default is thus utc
    • +
    • the <os> block description is very different, first + it indicates that the type is 'hvm' for hardware virtualization, then + instead of a kernel, boot and command line arguments, it points to an os + boot loader which will extract the boot information from the boot device + specified in a separate boot element. The dev attribute on + the boot tag can be one of: +
      • fd - boot from first floppy device
      • hd - boot from first harddisk device
      • cdrom - boot from first cdrom device
    • +
    • the <devices> section includes an emulator entry + pointing to an additional program in charge of emulating the devices
    • +
    • the disk entry indicates in the dev target section that the emulation
      for the drive is the first IDE disk device hda. The list of device names
      supported is dependent on the Hypervisor, but for Xen it can be any IDE
      device hda-hdd, or a floppy device fda, fdb. The <disk> element
      also supports a 'device' attribute to indicate what kind of hardware to
      emulate. The following values are supported:
      • floppy - a floppy disk controller
      • disk - a generic hard drive (the default if omitted)
      • cdrom - a CDROM device
      For Xen 3.0.2 and earlier a CDROM device can only be emulated on the
      hdc channel, while for 3.0.3 and later, it can be emulated on any
      IDE channel.
    • +
    • the <devices> section also includes at least one entry for the graphics device used to render the OS. Currently just 2 types are possible, 'vnc' or 'sdl'. If the type is 'vnc', then an additional port attribute will be present indicating the TCP port on which the VNC server is accepting client connections.
    • +
    +

    It is likely that the HVM description gets additional optional elements +and attributes as the support for fully virtualized domain expands, +especially for the variety of devices emulated and the graphic support +options offered.

    + +

    + Networking interface options +

    +

    The networking support in the QEmu and KVM case is more flexible, and supports a variety of options:

    +
      +
    1. Userspace SLIRP stack +

      Provides a virtual LAN with NAT to the outside world. The virtual + network has DHCP & DNS services and will give the guest VM addresses + starting from 10.0.2.15. The default router will be + 10.0.2.2 and the DNS server will be 10.0.2.3. + This networking is the only option for unprivileged users who need their + VMs to have outgoing access. Example configs are:

      +
      <interface type='user'/>
      +
      +<interface type='user'>
      +  <mac address="11:22:33:44:55:66"/>
      +</interface>
      +    
      +
    2. +
    3. Virtual network +

      Provides a virtual network using a bridge device in the host. + Depending on the virtual network configuration, the network may be + totally isolated, NAT'ing to an explicit network device, or NAT'ing to + the default route. DHCP and DNS are provided on the virtual network in + all cases and the IP range can be determined by examining the virtual + network config with 'virsh net-dumpxml <network + name>'. There is one virtual network called 'default' setup out + of the box which does NAT'ing to the default route and has an IP range of + 192.168.22.0/255.255.255.0. Each guest will have an + associated tun device created with a name of vnetN, which can also be + overridden with the <target> element. Example configs are:

      +
      <interface type='network'>
      +  <source network='default'/>
      +</interface>
      +
      +<interface type='network'>
      +  <source network='default'/>
      +  <target dev='vnet7'/>
      +  <mac address="11:22:33:44:55:66"/>
      +</interface>
      +    
      +
    4. +
    5. Bridge to LAN

      Provides a bridge from the VM directly onto the LAN. This assumes + there is a bridge device on the host which has one or more of the hosts + physical NICs enslaved. The guest VM will have an associated tun device + created with a name of vnetN, which can also be overridden with the + <target> element. The tun device will be enslaved to the bridge. + The IP range / network configuration is whatever is used on the LAN. This + provides the guest VM full incoming & outgoing net access just like a + physical machine. Examples include:

      +
      <interface type='bridge'>
      + <source bridge='br0'/>
      +</interface>
      +
      +<interface type='bridge'>
      +  <source bridge='br0'/>
      +  <target dev='vnet7'/>
      +  <mac address="11:22:33:44:55:66"/>
      +</interface>
      +
    6. +
    7. Generic connection to LAN +

      Provides a means for the administrator to execute an arbitrary script + to connect the guest's network to the LAN. The guest will have a tun + device created with a name of vnetN, which can also be overridden with the + <target> element. After creating the tun device a shell script will + be run which is expected to do whatever host network integration is + required. By default this script is called /etc/qemu-ifup but can be + overridden.

      +
      <interface type='ethernet'/>
      +
      +<interface type='ethernet'>
      +  <target dev='vnet7'/>
      +  <script path='/etc/qemu-ifup-mynet'/>
      +</interface>
      +
    8. +
    9. Multicast tunnel +

      A multicast group is setup to represent a virtual network. Any VMs + whose network devices are in the same multicast group can talk to each + other even across hosts. This mode is also available to unprivileged + users. There is no default DNS or DHCP support and no outgoing network + access. To provide outgoing network access, one of the VMs should have a + 2nd NIC which is connected to one of the first 4 network types and do the + appropriate routing. The multicast protocol is compatible with that used + by user mode linux guests too. The source address used must be from the + multicast address block.

      +
      <interface type='mcast'>
      +  <source address='230.0.0.1' port='5558'/>
      +</interface>
      +
    10. +
    11. TCP tunnel +

      A TCP client/server architecture provides a virtual network. One VM + provides the server end of the network, all other VMS are configured as + clients. All network traffic is routed between the VMs via the server. + This mode is also available to unprivileged users. There is no default + DNS or DHCP support and no outgoing network access. To provide outgoing + network access, one of the VMs should have a 2nd NIC which is connected + to one of the first 4 network types and do the appropriate routing.

      +

      Example server config:

      +
      <interface type='server'>
      +  <source address='192.168.0.1' port='5558'/>
      +</interface>
      +

      Example client config:

      +
      <interface type='client'>
      +  <source address='192.168.0.1' port='5558'/>
      +</interface>
      +
    12. +
    +

    Note that options 2, 3 and 4 are also supported by Xen VMs, so it is possible to use these configs to have networking with both Xen and QEMU/KVM guests connected to each other.
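    One way these interface descriptions get used outside of the initial domain document is device hot-plug; the following hedged sketch (the domain name "demo" is an assumption, and hot-plug support depends on the driver) attaches the virtual network interface from option 2 above:

    #include <stdio.h>
    #include <libvirt/libvirt.h>

    int main(void)
    {
        /* interface description as shown in option 2 above */
        const char *ifaceXml =
            "<interface type='network'>"
            "  <source network='default'/>"
            "</interface>";

        virConnectPtr conn = virConnectOpen(NULL);  /* default hypervisor */
        if (conn == NULL)
            return 1;

        virDomainPtr dom = virDomainLookupByName(conn, "demo");
        if (dom != NULL) {
            if (virDomainAttachDevice(dom, ifaceXml) < 0)
                fprintf(stderr, "failed to attach the interface\n");
            virDomainFree(dom);
        }

        virConnectClose(conn);
        return 0;
    }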

    + +

    Example configs

    + +

    Example configurations for each driver are provided on the driver specific pages listed below

    + + + + diff --git a/docs/formatnetwork.html b/docs/formatnetwork.html new file mode 100644 index 0000000000..9e8a55437b --- /dev/null +++ b/docs/formatnetwork.html @@ -0,0 +1,145 @@ + + + + + + + + + libvirt: Network XML format + + + + +
    +
    +

    Network XML format

    +

    Example configuration

    +

    NAT based network

    +
    +      <network>
    +	<name>default</name>
    +	<bridge name="virbr0" />
    +	<forward type="nat"/>
    +	<ip address="192.168.122.1" netmask="255.255.255.0">
    +	  <dhcp>
    +	    <range start="192.168.122.2" end="192.168.122.254" />
    +	  </dhcp>
    +	</ip>
    +      </network>
    +

    Routed network config

    +
    +      <network>
    +	<name>local</name>
    +	<bridge name="virbr1" />
    +	<forward type="route" dev="eth1"/>
    +	<ip address="192.168.122.1" netmask="255.255.255.0">
    +	  <dhcp>
    +	    <range start="192.168.122.2" end="192.168.122.254" />
    +	  </dhcp>
    +	</ip>
    +      </network>
    +

    Isolated network config

    +
    +      <network>
    +	<name>private</name>
    +	<bridge name="virbr2" />
    +	<ip address="192.168.152.1" netmask="255.255.255.0">
    +	  <dhcp>
    +	    <range start="192.168.152.2" end="192.168.152.254" />
    +	  </dhcp>
    +	</ip>
    +      </network>
    +
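    A hedged C sketch of loading one of these documents; here the isolated network example above is created as a transient, immediately-started network with virNetworkCreateXML (the default hypervisor connection is an assumption):

    #include <stdio.h>
    #include <libvirt/libvirt.h>

    int main(void)
    {
        /* the isolated network example from above, as a C string */
        const char *xml =
            "<network>"
            "  <name>private</name>"
            "  <bridge name='virbr2'/>"
            "  <ip address='192.168.152.1' netmask='255.255.255.0'>"
            "    <dhcp>"
            "      <range start='192.168.152.2' end='192.168.152.254'/>"
            "    </dhcp>"
            "  </ip>"
            "</network>";

        virConnectPtr conn = virConnectOpen(NULL);
        if (conn == NULL)
            return 1;

        virNetworkPtr net = virNetworkCreateXML(conn, xml);  /* transient network */
        if (net == NULL)
            fprintf(stderr, "failed to create the network\n");
        else
            virNetworkFree(net);

        virConnectClose(conn);
        return 0;
    }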
    + +
    + + + diff --git a/docs/formatnetwork.html.in b/docs/formatnetwork.html.in new file mode 100644 index 0000000000..1932bf061f --- /dev/null +++ b/docs/formatnetwork.html.in @@ -0,0 +1,50 @@ + + +

    Network XML format

    + + +

    Example configuration

    + +

    NAT based network

    + +
    +      <network>
    +	<name>default</name>
    +	<bridge name="virbr0" />
    +	<forward type="nat"/>
    +	<ip address="192.168.122.1" netmask="255.255.255.0">
    +	  <dhcp>
    +	    <range start="192.168.122.2" end="192.168.122.254" />
    +	  </dhcp>
    +	</ip>
    +      </network>
    + +

    Routed network config

    + +
    +      <network>
    +	<name>local</name>
    +	<bridge name="virbr1" />
    +	<forward type="route" dev="eth1"/>
    +	<ip address="192.168.122.1" netmask="255.255.255.0">
    +	  <dhcp>
    +	    <range start="192.168.122.2" end="192.168.122.254" />
    +	  </dhcp>
    +	</ip>
    +      </network>
    + +

    Isolated network config

    + +
    +      <network>
    +	<name>private</name>
    +	<bridge name="virbr2" />
    +	<ip address="192.168.152.1" netmask="255.255.255.0">
    +	  <dhcp>
    +	    <range start="192.168.152.2" end="192.168.152.254" />
    +	  </dhcp>
    +	</ip>
    +      </network>
    + + + diff --git a/docs/formatnode.html b/docs/formatnode.html new file mode 100644 index 0000000000..97045bb048 --- /dev/null +++ b/docs/formatnode.html @@ -0,0 +1,109 @@ + + + + + + + + + libvirt: Node devices XML format + + + + +
    +
    +

    Node devices XML format

    +
    + +
    diff --git a/docs/formatnode.html.in b/docs/formatnode.html.in
    new file mode 100644
    index 0000000000..91882ca71e
    --- /dev/null
    +++ b/docs/formatnode.html.in
    @@ -0,0 +1,5 @@

    Node devices XML format

    diff --git a/docs/formatstorage.html b/docs/formatstorage.html
    new file mode 100644
    index 0000000000..d9aaf3a09b
    --- /dev/null
    +++ b/docs/formatstorage.html
    @@ -0,0 +1,275 @@
    libvirt: Storage pool and volume XML format
    +
    +

    Storage pool and volume XML format

    + +

    + Storage pool XML +

    +

    +Although all storage pool backends share the same public APIs and +XML format, they have varying levels of capabilities. Some may +allow creation of volumes, others may only allow use of pre-existing +volumes. Some may have constraints on volume size, or placement. +

    +

    The top level tag for a storage pool document is 'pool'. It has a single attribute type, which is one of dir, fs, netfs, disk, iscsi or logical. This corresponds to the storage backend drivers listed further along in this document.

    +

    + First level elements +

    +
    name
    Providing a name for the pool which is unique to the host. +This is mandatory when defining a pool
    uuid
    Providing an identifier for the pool which is globally unique. +This is optional when defining a pool, a UUID will be generated if +omitted
    allocation
    Providing the total storage allocation for the pool. This may +be larger than the sum of the allocation of all volumes due to +metadata overhead. This value is in bytes. This is not applicable +when creating a pool.
    capacity
    Providing the total storage capacity for the pool. Due to +underlying device constraints it may not be possible to use the +full capacity for storage volumes. This value is in bytes. This +is not applicable when creating a pool.
    available
    Providing the free space available for allocating new volumes +in the pool. Due to underlying device constraints it may not be +possible to allocate the entire free space to a single volume. +This value is in bytes. This is not applicable when creating a +pool.
    source
    Provides information about the source of the pool, such as +the underlying host devices, or remote server
    target
    Provides information about the representation of the pool +on the local host.
    +
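    To illustrate how these elements fit together, here is a minimal sketch of a directory backed pool definition; only the mandatory name plus a target path are shown, and the pool name and path are hypothetical placeholders:

      <pool type="dir">
        <name>virtimages</name>
        <target>
          <path>/var/lib/virt/images</path>
        </target>
      </pool>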

    + Source elements +

    +
    device
    Provides the source for pools backed by physical devices. +May be repeated multiple times depending on backend driver. Contains +a single attribute path which is the fully qualified +path to the block device node.
    directory
    Provides the source for pools backed by directories. May only occur once. Contains a single attribute path which is the fully qualified path to the backing directory.
    host
    Provides the source for pools backed by storage from a +remote server. Will be used in combination with a directory +or device element. Contains an attribute name +which is the hostname or IP address of the server. May optionally +contain a port attribute for the protocol specific +port number.
    format
    Provides information about the format of the pool. This +contains a single attribute type whose value is +backend specific. This is typically used to indicate filesystem +type, or network filesystem type, or partition table type, or +LVM metadata type. All drivers are required to have a default +value for this, so it is optional.
    +
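    As a hedged sketch, a source block for a pool backed by a remote server could combine the host, directory and format elements described above; the hostname, port and export path here are hypothetical:

      <source>
        <host name="nfs.example.com" port="2049"/>
        <directory path="/var/lib/exports/images"/>
        <format type="nfs"/>
      </source>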

    + Target elements +

    +
    path
    Provides the location at which the pool will be mapped into the local filesystem namespace. For a filesystem/directory based pool it will be the name of the directory in which volumes will be created. For device based pools it will be the name of the directory in which device nodes exist. For the latter /dev/ may seem like the logical choice, however, device nodes there are not guaranteed stable across reboots, since they are allocated on demand. It is preferable to use a stable location such as one of the /dev/disk/by-{path,id,uuid,label} locations.
    permissions
    Provides information about the default permissions to use +when creating volumes. This is currently only useful for directory +or filesystem based pools, where the volumes allocated are simple +files. For pools where the volumes are device nodes, the hotplug +scripts determine permissions. It contains 4 child elements. The +mode element contains the octal permission set. The +owner element contains the numeric user ID. The group +element contains the numeric group ID. The label element +contains the MAC (eg SELinux) label string. +
    +
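    For example, a target block for a directory based pool might look like the following sketch; the path, ownership and label values are hypothetical:

      <target>
        <path>/var/lib/virt/images</path>
        <permissions>
          <mode>0700</mode>
          <owner>0</owner>
          <group>0</group>
          <label>system_u:object_r:virt_image_t:s0</label>
        </permissions>
      </target>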

    + Device extents +

    +

    If a storage pool exposes information about its underlying placement / allocation scheme, the device element within the source element may contain information about its available extents. Some pools have a constraint that a volume must be allocated entirely within a single free extent (eg disk partition pools). Thus the extent information allows an application to determine the maximum possible size for a new volume.

    +

    +For storage pools supporting extent information, within each +device element there will be zero or more freeExtent +elements. Each of these elements contains two attributes, start +and end which provide the boundaries of the extent on the +device, measured in bytes. +

    +
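    A sketch of what this extent information could look like for a single device, using the attributes described above; the device path and byte offsets are hypothetical:

      <device path="/dev/sdb">
        <freeExtent start="1048576" end="10485760"/>
        <freeExtent start="104857600" end="209715200"/>
      </device>

    An application could then pick the largest freeExtent as an upper bound on the size of a new volume.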

    + Storage volume XML +

    +

    +A storage volume will be either a file or a device node. +

    +

    + First level elements +

    +
    name
    Providing a name for the volume which is unique to the pool. This is mandatory when defining a volume
    uuid
    Providing an identifier for the pool which is globally unique. +This is optional when defining a pool, a UUID will be generated if +omitted
    allocation
    Providing the total storage allocation for the volume. This +may be smaller than the logical capacity if the volume is sparsely +allocated. It may also be larger than the logical capacity if the +volume has substantial metadata overhead. This value is in bytes. +If omitted when creating a volume, the volume will be fully +allocated at time of creation. If set to a value smaller than the +capacity, the pool has the option of deciding +to sparsely allocate a volume. It does not have to honour requests +for sparse allocation though.
    capacity
    Providing the logical capacity for the volume. This value is +in bytes. This is compulsory when creating a volume
    source
    Provides information about the underlying storage allocation +of the volume. This may not be available for some pool types.
    target
    Provides information about the representation of the volume +on the local host.
    +

    + Target elements +

    +
    path
    Provides the location at which the pool will be mapped into the local filesystem namespace. For a filesystem/directory based pool it will be the name of the directory in which volumes will be created. For device based pools it will be the name of the directory in which device nodes exist. For the latter /dev/ may seem like the logical choice, however, device nodes there are not guaranteed stable across reboots, since they are allocated on demand. It is preferable to use a stable location such as one of the /dev/disk/by-{path,id,uuid,label} locations.
    format
    Provides information about the pool specific volume format. For disk pools it will provide the partition type. For filesystem or directory pools it will provide the file format type, eg cow, qcow, vmdk, raw. If omitted when creating a volume, the pool's default format will be used. The actual format is specified via the type attribute. Consult the pool-specific docs for the list of valid values.
    permissions
    Provides information about the default permissions to use +when creating volumes. This is currently only useful for directory +or filesystem based pools, where the volumes allocated are simple +files. For pools where the volumes are device nodes, the hotplug +scripts determine permissions. It contains 4 child elements. The +mode element contains the octal permission set. The +owner element contains the numeric user ID. The group +element contains the numeric group ID. The label element +contains the MAC (eg SELinux) label string. +
    +
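    Putting the volume elements together, a minimal sketch of a file based volume definition might look like the following; it assumes volume as the top level tag, and the name, sizes, path and format are hypothetical:

      <volume>
        <name>guest1.img</name>
        <capacity>10737418240</capacity>
        <allocation>0</allocation>
        <target>
          <path>/var/lib/virt/images/guest1.img</path>
          <format type="raw"/>
          <permissions>
            <mode>0600</mode>
            <owner>0</owner>
            <group>0</group>
          </permissions>
        </target>
      </volume>

    With allocation set to 0, a pool that supports sparse allocation may defer allocating the full capacity until data is written.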
    + +
    diff --git a/docs/formatstorage.html.in b/docs/formatstorage.html.in
    new file mode 100644
    index 0000000000..107d8abb64
    --- /dev/null
    +++ b/docs/formatstorage.html.in
    @@ -0,0 +1,237 @@

    Storage pool and volume XML format

    + + + +

    + Storage pool XML +

    +

    +Although all storage pool backends share the same public APIs and +XML format, they have varying levels of capabilities. Some may +allow creation of volumes, others may only allow use of pre-existing +volumes. Some may have constraints on volume size, or placement. +

    +

    The top level tag for a storage pool document is 'pool'. It has a single attribute type, which is one of dir, fs, netfs, disk, iscsi or logical. This corresponds to the storage backend drivers listed further along in this document.

    +

    + First level elements +

    +
    +
    name
    +
    Providing a name for the pool which is unique to the host. +This is mandatory when defining a pool
    +
    uuid
    +
    Providing an identifier for the pool which is globally unique. +This is optional when defining a pool, a UUID will be generated if +omitted
    +
    allocation
    +
    Providing the total storage allocation for the pool. This may +be larger than the sum of the allocation of all volumes due to +metadata overhead. This value is in bytes. This is not applicable +when creating a pool.
    +
    capacity
    +
    Providing the total storage capacity for the pool. Due to +underlying device constraints it may not be possible to use the +full capacity for storage volumes. This value is in bytes. This +is not applicable when creating a pool.
    +
    available
    +
    Providing the free space available for allocating new volumes +in the pool. Due to underlying device constraints it may not be +possible to allocate the entire free space to a single volume. +This value is in bytes. This is not applicable when creating a +pool.
    +
    source
    +
    Provides information about the source of the pool, such as +the underlying host devices, or remote server
    +
    target
    +
    Provides information about the representation of the pool +on the local host.
    +
    +

    + Source elements +

    +
    +
    device
    +
    Provides the source for pools backed by physical devices. +May be repeated multiple times depending on backend driver. Contains +a single attribute path which is the fully qualified +path to the block device node.
    +
    directory
    +
    Provides the source for pools backed by directories. May only occur once. Contains a single attribute path which is the fully qualified path to the backing directory.
    +
    host
    +
    Provides the source for pools backed by storage from a +remote server. Will be used in combination with a directory +or device element. Contains an attribute name +which is the hostname or IP address of the server. May optionally +contain a port attribute for the protocol specific +port number.
    +
    format
    +
    Provides information about the format of the pool. This +contains a single attribute type whose value is +backend specific. This is typically used to indicate filesystem +type, or network filesystem type, or partition table type, or +LVM metadata type. All drivers are required to have a default +value for this, so it is optional.
    +
    +

    + Target elements +

    +
    +
    path
    +
    Provides the location at which the pool will be mapped into the local filesystem namespace. For a filesystem/directory based pool it will be the name of the directory in which volumes will be created. For device based pools it will be the name of the directory in which device nodes exist. For the latter /dev/ may seem like the logical choice, however, device nodes there are not guaranteed stable across reboots, since they are allocated on demand. It is preferable to use a stable location such as one of the /dev/disk/by-{path,id,uuid,label} locations.
    +
    permissions
    +
    Provides information about the default permissions to use +when creating volumes. This is currently only useful for directory +or filesystem based pools, where the volumes allocated are simple +files. For pools where the volumes are device nodes, the hotplug +scripts determine permissions. It contains 4 child elements. The +mode element contains the octal permission set. The +owner element contains the numeric user ID. The group +element contains the numeric group ID. The label element +contains the MAC (eg SELinux) label string. +
    +
    +

    + Device extents +

    +

    If a storage pool exposes information about its underlying placement / allocation scheme, the device element within the source element may contain information about its available extents. Some pools have a constraint that a volume must be allocated entirely within a single free extent (eg disk partition pools). Thus the extent information allows an application to determine the maximum possible size for a new volume.

    +

    +For storage pools supporting extent information, within each +device element there will be zero or more freeExtent +elements. Each of these elements contains two attributes, start +and end which provide the boundaries of the extent on the +device, measured in bytes. +

    +

    + Storage volume XML +

    +

    +A storage volume will be either a file or a device node. +

    +

    + First level elements +

    +
    +
    name
    +
    Providing a name for the volume which is unique to the pool. This is mandatory when defining a volume
    +
    uuid
    +
    Providing an identifier for the pool which is globally unique. +This is optional when defining a pool, a UUID will be generated if +omitted
    +
    allocation
    +
    Providing the total storage allocation for the volume. This +may be smaller than the logical capacity if the volume is sparsely +allocated. It may also be larger than the logical capacity if the +volume has substantial metadata overhead. This value is in bytes. +If omitted when creating a volume, the volume will be fully +allocated at time of creation. If set to a value smaller than the +capacity, the pool has the option of deciding +to sparsely allocate a volume. It does not have to honour requests +for sparse allocation though.
    +
    capacity
    +
    Providing the logical capacity for the volume. This value is +in bytes. This is compulsory when creating a volume
    +
    source
    +
    Provides information about the underlying storage allocation +of the volume. This may not be available for some pool types.
    +
    target
    +
    Provides information about the representation of the volume +on the local host.
    +
    +

    + Target elements +

    +
    +
    path
    +
    Provides the location at which the pool will be mapped into the local filesystem namespace. For a filesystem/directory based pool it will be the name of the directory in which volumes will be created. For device based pools it will be the name of the directory in which device nodes exist. For the latter /dev/ may seem like the logical choice, however, device nodes there are not guaranteed stable across reboots, since they are allocated on demand. It is preferable to use a stable location such as one of the /dev/disk/by-{path,id,uuid,label} locations.
    +
    format
    +
    Provides information about the pool specific volume format. For disk pools it will provide the partition type. For filesystem or directory pools it will provide the file format type, eg cow, qcow, vmdk, raw. If omitted when creating a volume, the pool's default format will be used. The actual format is specified via the type attribute. Consult the pool-specific docs for the list of valid values.
    +
    permissions
    +
    Provides information about the default permissions to use +when creating volumes. This is currently only useful for directory +or filesystem based pools, where the volumes allocated are simple +files. For pools where the volumes are device nodes, the hotplug +scripts determine permissions. It contains 4 child elements. The +mode element contains the octal permission set. The +owner element contains the numeric user ID. The group +element contains the numeric group ID. The label element +contains the MAC (eg SELinux) label string. +
    +
    + + + diff --git a/docs/generic.css b/docs/generic.css new file mode 100644 index 0000000000..2aaac4652d --- /dev/null +++ b/docs/generic.css @@ -0,0 +1,75 @@ + +body { + margin: 0em; + padding: 0px; + color: rgb(0,0,0); + font-family: Verdana, Arial, Helvetica, sans-serif; + font-size: 80%; +// font-size: 83%; +} + +p, ul, ol, dl { + padding: 0px; + margin: 0px; +} + +ol,ul { + margin-left: 3em; +} + +ol,ul,dl,p { + margin-top: 1em; + margin-bottom: 1em; +} + +p:first-line { + margin-right: 1em; +} + +div.body p:first-letter { + font-size: 1.2em; + font-weight: bold; +} + +h1,h2,h3,h4,h5,h6 { + font-weight: bold; + margin: 0px; + padding: 0px; + margin-top: 0.5em; +} + +div.footer { + margin-top: 1em; +} + +h1 { + font-size: 2em; +} +h2 { + font-size: 1.6em; +} +h3 { + font-size: 1.4em; +} +h4 { + font-size: 1.2em; +} +h5 { + font-size: 1em; +} +h6 { + font-size: 0.8em; +} + +dl dt { + margin-left: 1em; + margin-right: 2em; + font-weight: bold; + font-size: larger; +} + +dl dd { + margin-left: 2em; + margin-right: 2em; + margin-bottom: 0.5em; +} diff --git a/docs/html/book1.html b/docs/html/book1.html deleted file mode 100644 index 7c9f206d70..0000000000 --- a/docs/html/book1.html +++ /dev/null @@ -1,3 +0,0 @@ - - -Reference Manual for libvirt

    Reference Manual for libvirt

    Table of Contents

    • libvirt: core interfaces for the libvirt library
    • virterror: error handling interfaces for the libvirt library

    diff --git a/docs/html/index.html b/docs/html/index.html index 7c9f206d70..1647ca43b5 100644 --- a/docs/html/index.html +++ b/docs/html/index.html @@ -1,3 +1,6 @@ -Reference Manual for libvirt

    Reference Manual for libvirt

    Table of Contents

    • libvirt: core interfaces for the libvirt library
    • virterror: error handling interfaces for the libvirt library

    +libvirt: Reference Manual for libvirt

    Reference Manual for libvirt

    Table of Contents

    • libvirt: core interfaces for the libvirt library
    • virterror: error handling interfaces for the libvirt library
    diff --git a/docs/html/libvirt-conf.html b/docs/html/libvirt-conf.html deleted file mode 100644 index bd3a1c8e5e..0000000000 --- a/docs/html/libvirt-conf.html +++ /dev/null @@ -1,44 +0,0 @@ - - -Module conf from libvirt

    Module conf from libvirt

    Table of Contents

    Structure virConf
    struct _virConf -The content of this structure is not made public by the API. -
    Typedef virConf * virConfPtr
    -
    Enum virConfType
    -
    Structure virConfValue
    struct _virConfValue -
    Typedef virConfValue * virConfValuePtr
    -
    int	virConfFree			(virConfPtr conf)
    -
    virConfValuePtr	virConfGetValue		(virConfPtr conf, 
    const char * setting)
    -
    virConfPtr	virConfReadFile		(const char * filename)
    -
    virConfPtr	virConfReadMem		(const char * memory, 
    int len)
    -
    int	virConfWriteFile		(const char * filename, 
    virConfPtr conf)
    -
    int	virConfWriteMem			(char * memory, 
    int * len,
    virConfPtr conf)
    -

    Description

    -

    Structure virConf

    Structure virConf
    struct _virConf { -The content of this structure is not made public by the API. -}
    - a pointer to a parsed configuration file -

    Enum virConfType

    Enum virConfType {
    -    VIR_CONF_NONE = 0 : undefined
    -    VIR_CONF_LONG = 1 : a long int
    -    VIR_CONF_STRING = 2 : a string
    -    VIR_CONF_LIST = 3 : a list
    -}
    -

    Structure virConfValue

    Structure virConfValue
    struct _virConfValue { - virConfType type : the virConfType - virConfValuePtr next : next element if in a list - long l : long integer - char * str : pointer to 0 terminated string - virConfValuePtr list : list of a list -}

    Function: virConfFree

    int	virConfFree			(virConfPtr conf)
    -

    Frees all data associated to the handle

    -
    conf:a configuration file handle
    Returns:0 in case of success, -1 in case of error.

    Function: virConfGetValue

    virConfValuePtr	virConfGetValue		(virConfPtr conf, 
    const char * setting)
    -

    Lookup the value associated to this entry in the configuration file

    -
    conf:a configuration file handle
    setting:
    Returns:a pointer to the value or NULL if the lookup failed, the data associated will be freed when virConfFree() is called

    Function: virConfReadFile

    virConfPtr	virConfReadFile		(const char * filename)
    -

    Reads a configuration file.

    -
    filename:the path to the configuration file.
    Returns:an handle to lookup settings or NULL if it failed to read or parse the file, use virConfFree() to free the data.

    Function: virConfReadMem

    virConfPtr	virConfReadMem		(const char * memory, 
    int len)
    -

    Reads a configuration file loaded in memory. The string can be zero terminated in which case @len can be 0

    -
    memory:pointer to the content of the configuration file
    len:length in byte
    Returns:an handle to lookup settings or NULL if it failed to parse the content, use virConfFree() to free the data.

    Function: virConfWriteFile

    int	virConfWriteFile		(const char * filename, 
    virConfPtr conf)
    -

    Writes a configuration file back to a file.

    -
    filename:the path to the configuration file.
    conf:the conf
    Returns:the number of bytes written or -1 in case of error.

    Function: virConfWriteMem

    int	virConfWriteMem			(char * memory, 
    int * len,
    virConfPtr conf)
    -

    Writes a configuration file back to a memory area. @len is an IN/OUT parameter, it indicates the size available in bytes, and on output the size required for the configuration file (even if the call fails due to insufficient space).

    -
    memory:pointer to the memory to store the config file
    len:pointer to the length in byte of the store, on output the size
    conf:the conf
    Returns:the number of bytes written or -1 in case of error.

    diff --git a/docs/html/libvirt-lib.html b/docs/html/libvirt-lib.html deleted file mode 100644 index 7c9f206d70..0000000000 --- a/docs/html/libvirt-lib.html +++ /dev/null @@ -1,3 +0,0 @@ - - -Reference Manual for libvirt

    Reference Manual for libvirt

    Table of Contents

    • libvirt: core interfaces for the libvirt library
    • virterror: error handling interfaces for the libvirt library

    diff --git a/docs/html/libvirt-libvirt.html b/docs/html/libvirt-libvirt.html index ea9e104aa3..e032c914b2 100644 --- a/docs/html/libvirt-libvirt.html +++ b/docs/html/libvirt-libvirt.html @@ -1,590 +1,377 @@ -Module libvirt from libvirt

    Module libvirt from libvirt

    Provides the interfaces of the libvirt library to handle Xen domains from a process running in domain 0

    Table of Contents

    #define LIBVIR_VERSION_NUMBER
    #define VIR_COPY_CPUMAP
    #define VIR_CPU_MAPLEN
    #define VIR_CPU_USABLE
    #define VIR_DOMAIN_SCHED_FIELD_LENGTH
    #define VIR_GET_CPUMAP
    #define VIR_NODEINFO_MAXCPUS
    #define VIR_UNUSE_CPU
    #define VIR_USE_CPU
    #define VIR_UUID_BUFLEN
    #define VIR_UUID_STRING_BUFLEN
    Structure virConnect
    struct _virConnect -The content of this structure is not made public by the API. -
    Structure virConnectAuth
    struct _virConnectAuth -
    Typedef virConnectAuth * virConnectAuthPtr
    -
    Structure virConnectCredential
    struct _virConnectCredential -
    Typedef virConnectCredential * virConnectCredentialPtr
    -
    Enum virConnectCredentialType
    -
    Enum virConnectFlags
    -
    Typedef virConnect * virConnectPtr
    -
    Structure virDomain
    struct _virDomain -The content of this structure is not made public by the API. -
    Typedef virDomainBlockStatsStruct * virDomainBlockStatsPtr
    -
    Structure virDomainBlockStatsStruct
    struct _virDomainBlockStats -
    Enum virDomainCreateFlags
    -
    Structure virDomainInfo
    struct _virDomainInfo -
    Typedef virDomainInfo * virDomainInfoPtr
    -
    Typedef virDomainInterfaceStatsStruct * virDomainInterfaceStatsPtr
    -
    Structure virDomainInterfaceStatsStruct
    struct _virDomainInterfaceStats -
    Enum virDomainMigrateFlags
    -
    Typedef virDomain * virDomainPtr
    -
    Enum virDomainState
    -
    Enum virDomainXMLFlags
    -
    Structure virNetwork
    struct _virNetwork -The content of this structure is not made public by the API. -
    Typedef virNetwork * virNetworkPtr
    -
    Structure virNodeInfo
    struct _virNodeInfo -
    Typedef virNodeInfo * virNodeInfoPtr
    -
    Structure virSchedParameter
    struct _virSchedParameter -
    Typedef virSchedParameter * virSchedParameterPtr
    -
    Enum virSchedParameterType
    -
    Structure virStoragePool
    struct _virStoragePool -The content of this structure is not made public by the API. -
    Enum virStoragePoolBuildFlags
    -
    Enum virStoragePoolDeleteFlags
    -
    Structure virStoragePoolInfo
    struct _virStoragePoolInfo -
    Typedef virStoragePoolInfo * virStoragePoolInfoPtr
    -
    Typedef virStoragePool * virStoragePoolPtr
    -
    Enum virStoragePoolState
    -
    Structure virStorageVol
    struct _virStorageVol -The content of this structure is not made public by the API. -
    Enum virStorageVolDeleteFlags
    -
    Structure virStorageVolInfo
    struct _virStorageVolInfo -
    Typedef virStorageVolInfo * virStorageVolInfoPtr
    -
    Typedef virStorageVol * virStorageVolPtr
    -
    Enum virStorageVolType
    -
    Structure virVcpuInfo
    struct _virVcpuInfo -
    Typedef virVcpuInfo * virVcpuInfoPtr
    -
    Enum virVcpuState
    -
    Function type: virConnectAuthCallbackPtr
    +libvirt: Module libvirt from libvirt

    Module libvirt from libvirt

    Provides the interfaces of the libvirt library to handle Xen domains from a process running in domain 0

    Table of Contents

    Macros

    #define LIBVIR_VERSION_NUMBER
    +#define VIR_COPY_CPUMAP
    +#define VIR_CPU_MAPLEN
    +#define VIR_CPU_USABLE
    +#define VIR_DOMAIN_SCHED_FIELD_LENGTH
    +#define VIR_GET_CPUMAP
    +#define VIR_NODEINFO_MAXCPUS
    +#define VIR_UNUSE_CPU
    +#define VIR_USE_CPU
    +#define VIR_UUID_BUFLEN
    +#define VIR_UUID_STRING_BUFLEN
    +

    Types

    typedef struct _virConnect virConnect
    +typedef struct _virConnectAuth virConnectAuth
    +typedef virConnectAuth * virConnectAuthPtr
    +typedef struct _virConnectCredential virConnectCredential
    +typedef virConnectCredential * virConnectCredentialPtr
    +typedef enum virConnectCredentialType
    +typedef enum virConnectFlags
    +typedef virConnect * virConnectPtr
    +typedef struct _virDomain virDomain
    +typedef virDomainBlockStatsStruct * virDomainBlockStatsPtr
    +typedef struct _virDomainBlockStats virDomainBlockStatsStruct
    +typedef enum virDomainCreateFlags
    +typedef struct _virDomainInfo virDomainInfo
    +typedef virDomainInfo * virDomainInfoPtr
    +typedef virDomainInterfaceStatsStruct * virDomainInterfaceStatsPtr
    +typedef struct _virDomainInterfaceStats virDomainInterfaceStatsStruct
    +typedef enum virDomainMigrateFlags
    +typedef virDomain * virDomainPtr
    +typedef enum virDomainState
    +typedef enum virDomainXMLFlags
    +typedef struct _virNetwork virNetwork
    +typedef virNetwork * virNetworkPtr
    +typedef struct _virNodeInfo virNodeInfo
    +typedef virNodeInfo * virNodeInfoPtr
    +typedef struct _virSchedParameter virSchedParameter
    +typedef virSchedParameter * virSchedParameterPtr
    +typedef enum virSchedParameterType
    +typedef struct _virStoragePool virStoragePool
    +typedef enum virStoragePoolBuildFlags
    +typedef enum virStoragePoolDeleteFlags
    +typedef struct _virStoragePoolInfo virStoragePoolInfo
    +typedef virStoragePoolInfo * virStoragePoolInfoPtr
    +typedef virStoragePool * virStoragePoolPtr
    +typedef enum virStoragePoolState
    +typedef struct _virStorageVol virStorageVol
    +typedef enum virStorageVolDeleteFlags
    +typedef struct _virStorageVolInfo virStorageVolInfo
    +typedef virStorageVolInfo * virStorageVolInfoPtr
    +typedef virStorageVol * virStorageVolPtr
    +typedef enum virStorageVolType
    +typedef struct _virVcpuInfo virVcpuInfo
    +typedef virVcpuInfo * virVcpuInfoPtr
    +typedef enum virVcpuState
    +

    Functions

    typedef virConnectAuthCallbackPtr
     int	virConnectAuthCallbackPtr	(virConnectCredentialPtr cred, 
    unsigned int ncred,
    void * cbdata) -
    -
    int	virConnectClose			(virConnectPtr conn)
    -
    char *	virConnectGetCapabilities	(virConnectPtr conn)
    -
    char *	virConnectGetHostname		(virConnectPtr conn)
    -
    int	virConnectGetMaxVcpus		(virConnectPtr conn, 
    const char * type)
    -
    const char *	virConnectGetType	(virConnectPtr conn)
    -
    char *	virConnectGetURI		(virConnectPtr conn)
    -
    int	virConnectGetVersion		(virConnectPtr conn, 
    unsigned long * hvVer)
    -
    int	virConnectListDefinedDomains	(virConnectPtr conn, 
    char ** const names,
    int maxnames)
    -
    int	virConnectListDefinedNetworks	(virConnectPtr conn, 
    char ** const names,
    int maxnames)
    -
    int	virConnectListDefinedStoragePools	(virConnectPtr conn, 
    char ** const names,
    int maxnames)
    -
    int	virConnectListDomains		(virConnectPtr conn, 
    int * ids,
    int maxids)
    -
    int	virConnectListNetworks		(virConnectPtr conn, 
    char ** const names,
    int maxnames)
    -
    int	virConnectListStoragePools	(virConnectPtr conn, 
    char ** const names,
    int maxnames)
    -
    int	virConnectNumOfDefinedDomains	(virConnectPtr conn)
    -
    int	virConnectNumOfDefinedNetworks	(virConnectPtr conn)
    -
    int	virConnectNumOfDefinedStoragePools	(virConnectPtr conn)
    -
    int	virConnectNumOfDomains		(virConnectPtr conn)
    -
    int	virConnectNumOfNetworks		(virConnectPtr conn)
    -
    int	virConnectNumOfStoragePools	(virConnectPtr conn)
    -
    virConnectPtr	virConnectOpen		(const char * name)
    -
    virConnectPtr	virConnectOpenAuth	(const char * name, 
    virConnectAuthPtr auth,
    int flags)
    -
    virConnectPtr	virConnectOpenReadOnly	(const char * name)
    -
    int	virDomainAttachDevice		(virDomainPtr domain, 
    const char * xml)
    -
    int	virDomainBlockStats		(virDomainPtr dom, 
    const char * path,
    virDomainBlockStatsPtr stats,
    size_t size)
    -
    int	virDomainCoreDump		(virDomainPtr domain, 
    const char * to,
    int flags)
    -
    int	virDomainCreate			(virDomainPtr domain)
    -
    virDomainPtr	virDomainCreateLinux	(virConnectPtr conn, 
    const char * xmlDesc,
    unsigned int flags)
    -
    virDomainPtr	virDomainDefineXML	(virConnectPtr conn, 
    const char * xml)
    -
    int	virDomainDestroy		(virDomainPtr domain)
    -
    int	virDomainDetachDevice		(virDomainPtr domain, 
    const char * xml)
    -
    int	virDomainFree			(virDomainPtr domain)
    -
    int	virDomainGetAutostart		(virDomainPtr domain, 
    int * autostart)
    -
    virConnectPtr	virDomainGetConnect	(virDomainPtr dom)
    -
    unsigned int	virDomainGetID		(virDomainPtr domain)
    -
    int	virDomainGetInfo		(virDomainPtr domain, 
    virDomainInfoPtr info)
    -
    unsigned long	virDomainGetMaxMemory	(virDomainPtr domain)
    -
    int	virDomainGetMaxVcpus		(virDomainPtr domain)
    -
    const char *	virDomainGetName	(virDomainPtr domain)
    -
    char *	virDomainGetOSType		(virDomainPtr domain)
    -
    int	virDomainGetSchedulerParameters	(virDomainPtr domain, 
    virSchedParameterPtr params,
    int * nparams)
    -
    char *	virDomainGetSchedulerType	(virDomainPtr domain, 
    int * nparams)
    -
    int	virDomainGetUUID		(virDomainPtr domain, 
    unsigned char * uuid)
    -
    int	virDomainGetUUIDString		(virDomainPtr domain, 
    char * buf)
    -
    int	virDomainGetVcpus		(virDomainPtr domain, 
    virVcpuInfoPtr info,
    int maxinfo,
    unsigned char * cpumaps,
    int maplen)
    -
    char *	virDomainGetXMLDesc		(virDomainPtr domain, 
    int flags)
    -
    int	virDomainInterfaceStats		(virDomainPtr dom, 
    const char * path,
    virDomainInterfaceStatsPtr stats,
    size_t size)
    -
    virDomainPtr	virDomainLookupByID	(virConnectPtr conn, 
    int id)
    -
    virDomainPtr	virDomainLookupByName	(virConnectPtr conn, 
    const char * name)
    -
    virDomainPtr	virDomainLookupByUUID	(virConnectPtr conn, 
    const unsigned char * uuid)
    -
    virDomainPtr	virDomainLookupByUUIDString	(virConnectPtr conn, 
    const char * uuidstr)
    -
    virDomainPtr	virDomainMigrate	(virDomainPtr domain, 
    virConnectPtr dconn,
    unsigned long flags,
    const char * dname,
    const char * uri,
    unsigned long bandwidth)
    -
    int	virDomainPinVcpu		(virDomainPtr domain, 
    unsigned int vcpu,
    unsigned char * cpumap,
    int maplen)
    -
    int	virDomainReboot			(virDomainPtr domain, 
    unsigned int flags)
    -
    int	virDomainRestore		(virConnectPtr conn, 
    const char * from)
    -
    int	virDomainResume			(virDomainPtr domain)
    -
    int	virDomainSave			(virDomainPtr domain, 
    const char * to)
    -
    int	virDomainSetAutostart		(virDomainPtr domain, 
    int autostart)
    -
    int	virDomainSetMaxMemory		(virDomainPtr domain, 
    unsigned long memory)
    -
    int	virDomainSetMemory		(virDomainPtr domain, 
    unsigned long memory)
    -
    int	virDomainSetSchedulerParameters	(virDomainPtr domain, 
    virSchedParameterPtr params,
    int nparams)
    -
    int	virDomainSetVcpus		(virDomainPtr domain, 
    unsigned int nvcpus)
    -
    int	virDomainShutdown		(virDomainPtr domain)
    -
    int	virDomainSuspend		(virDomainPtr domain)
    -
    int	virDomainUndefine		(virDomainPtr domain)
    -
    int	virGetVersion			(unsigned long * libVer, 
    const char * type,
    unsigned long * typeVer)
    -
    int	virInitialize			(void)
    -
    int	virNetworkCreate		(virNetworkPtr network)
    -
    virNetworkPtr	virNetworkCreateXML	(virConnectPtr conn, 
    const char * xmlDesc)
    -
    virNetworkPtr	virNetworkDefineXML	(virConnectPtr conn, 
    const char * xml)
    -
    int	virNetworkDestroy		(virNetworkPtr network)
    -
    int	virNetworkFree			(virNetworkPtr network)
    -
    int	virNetworkGetAutostart		(virNetworkPtr network, 
    int * autostart)
    -
    char *	virNetworkGetBridgeName		(virNetworkPtr network)
    -
    virConnectPtr	virNetworkGetConnect	(virNetworkPtr net)
    -
    const char *	virNetworkGetName	(virNetworkPtr network)
    -
    int	virNetworkGetUUID		(virNetworkPtr network, 
    unsigned char * uuid)
    -
    int	virNetworkGetUUIDString		(virNetworkPtr network, 
    char * buf)
    -
    char *	virNetworkGetXMLDesc		(virNetworkPtr network, 
    int flags)
    -
    virNetworkPtr	virNetworkLookupByName	(virConnectPtr conn, 
    const char * name)
    -
    virNetworkPtr	virNetworkLookupByUUID	(virConnectPtr conn, 
    const unsigned char * uuid)
    -
    virNetworkPtr	virNetworkLookupByUUIDString	(virConnectPtr conn, 
    const char * uuidstr)
    -
    int	virNetworkSetAutostart		(virNetworkPtr network, 
    int autostart)
    -
    int	virNetworkUndefine		(virNetworkPtr network)
    -
    int	virNodeGetCellsFreeMemory	(virConnectPtr conn, 
    unsigned long long * freeMems,
    int startCell,
    int maxCells)
    -
    unsigned long long	virNodeGetFreeMemory	(virConnectPtr conn)
    -
    int	virNodeGetInfo			(virConnectPtr conn, 
    virNodeInfoPtr info)
    -
    int	virStoragePoolBuild		(virStoragePoolPtr pool, 
    unsigned int flags)
    -
    int	virStoragePoolCreate		(virStoragePoolPtr pool, 
    unsigned int flags)
    -
    virStoragePoolPtr	virStoragePoolCreateXML	(virConnectPtr conn, 
    const char * xmlDesc,
    unsigned int flags)
    -
    virStoragePoolPtr	virStoragePoolDefineXML	(virConnectPtr conn, 
    const char * xml,
    unsigned int flags)
    -
    int	virStoragePoolDelete		(virStoragePoolPtr pool, 
    unsigned int flags)
    -
    int	virStoragePoolDestroy		(virStoragePoolPtr pool)
    -
    int	virStoragePoolFree		(virStoragePoolPtr pool)
    -
    int	virStoragePoolGetAutostart	(virStoragePoolPtr pool, 
    int * autostart)
    -
    virConnectPtr	virStoragePoolGetConnect	(virStoragePoolPtr pool)
    -
    int	virStoragePoolGetInfo		(virStoragePoolPtr pool, 
    virStoragePoolInfoPtr info)
    -
    const char *	virStoragePoolGetName	(virStoragePoolPtr pool)
    -
    int	virStoragePoolGetUUID		(virStoragePoolPtr pool, 
    unsigned char * uuid)
    -
    int	virStoragePoolGetUUIDString	(virStoragePoolPtr pool, 
    char * buf)
    -
    char *	virStoragePoolGetXMLDesc	(virStoragePoolPtr pool, 
    unsigned int flags)
    -
    int	virStoragePoolListVolumes	(virStoragePoolPtr pool, 
    char ** const names,
    int maxnames)
    -
    virStoragePoolPtr	virStoragePoolLookupByName	(virConnectPtr conn, 
    const char * name)
    -
    virStoragePoolPtr	virStoragePoolLookupByUUID	(virConnectPtr conn, 
    const unsigned char * uuid)
    -
    virStoragePoolPtr	virStoragePoolLookupByUUIDString	(virConnectPtr conn, 
    const char * uuidstr)
    -
    virStoragePoolPtr	virStoragePoolLookupByVolume	(virStorageVolPtr vol)
    -
    int	virStoragePoolNumOfVolumes	(virStoragePoolPtr pool)
    -
    int	virStoragePoolRefresh		(virStoragePoolPtr pool, 
    unsigned int flags)
    -
    int	virStoragePoolSetAutostart	(virStoragePoolPtr pool, 
    int autostart)
    -
    int	virStoragePoolUndefine		(virStoragePoolPtr pool)
    -
    virStorageVolPtr	virStorageVolCreateXML	(virStoragePoolPtr pool, 
    const char * xmldesc,
    unsigned int flags)
    -
    int	virStorageVolDelete		(virStorageVolPtr vol, 
    unsigned int flags)
    -
    int	virStorageVolFree		(virStorageVolPtr vol)
    -
    virConnectPtr	virStorageVolGetConnect	(virStorageVolPtr vol)
    -
    int	virStorageVolGetInfo		(virStorageVolPtr vol, 
    virStorageVolInfoPtr info)
    -
    const char *	virStorageVolGetKey	(virStorageVolPtr vol)
    -
    const char *	virStorageVolGetName	(virStorageVolPtr vol)
    -
    char *	virStorageVolGetPath		(virStorageVolPtr vol)
    -
    char *	virStorageVolGetXMLDesc		(virStorageVolPtr vol, 
    unsigned int flags)
    -
    virStorageVolPtr	virStorageVolLookupByKey	(virConnectPtr conn, 
    const char * key)
    -
    virStorageVolPtr	virStorageVolLookupByName	(virStoragePoolPtr pool, 
    const char * name)
    -
    virStorageVolPtr	virStorageVolLookupByPath	(virConnectPtr conn, 
    const char * path)
    -

    Description

    -

    Macro: LIBVIR_VERSION_NUMBER

    #define LIBVIR_VERSION_NUMBER

    Macro providing the version of the library as version * 1,000,000 + minor * 1000 + micro

    -

    Macro: VIR_COPY_CPUMAP

    #define VIR_COPY_CPUMAP

    This macro is to be used in conjunction with virDomainGetVcpus() and virDomainPinVcpu() APIs. VIR_COPY_CPUMAP macro extract the cpumap of the specified vcpu from cpumaps array and copy it into cpumap to be used later by virDomainPinVcpu() API.

    -

    Macro: VIR_CPU_MAPLEN

    #define VIR_CPU_MAPLEN

    This macro is to be used in conjunction with virDomainPinVcpu() API. It returns the length (in bytes) required to store the complete CPU map between a single virtual & all physical CPUs of a domain.

    -

    Macro: VIR_CPU_USABLE

    #define VIR_CPU_USABLE

    This macro is to be used in conjunction with virDomainGetVcpus() API. VIR_CPU_USABLE macro returns a non zero value (true) if the cpu is usable by the vcpu, and 0 otherwise.

    -

    Macro: VIR_DOMAIN_SCHED_FIELD_LENGTH

    #define VIR_DOMAIN_SCHED_FIELD_LENGTH

    Macro providing the field length of virSchedParameter

    -

    Macro: VIR_GET_CPUMAP

    #define VIR_GET_CPUMAP

    This macro is to be used in conjunction with virDomainGetVcpus() and virDomainPinVcpu() APIs. VIR_GET_CPUMAP macro returns a pointer to the cpumap of the specified vcpu from cpumaps array.

    -

    Macro: VIR_NODEINFO_MAXCPUS

    #define VIR_NODEINFO_MAXCPUS

    This macro is to calculate the total number of CPUs supported but not necessary active in the host.

    -

    Macro: VIR_UNUSE_CPU

    #define VIR_UNUSE_CPU

    This macro is to be used in conjunction with virDomainPinVcpu() API. USE_CPU macro reset the bit (CPU not usable) of the related cpu in cpumap.

    -

    Macro: VIR_USE_CPU

    #define VIR_USE_CPU

    This macro is to be used in conjunction with virDomainPinVcpu() API. USE_CPU macro set the bit (CPU usable) of the related cpu in cpumap.

    -

    Macro: VIR_UUID_BUFLEN

    #define VIR_UUID_BUFLEN

    This macro provides the length of the buffer required for virDomainGetUUID()

    -

    Macro: VIR_UUID_STRING_BUFLEN

    #define VIR_UUID_STRING_BUFLEN

    This macro provides the length of the buffer required for virDomainGetUUIDString()

    -

    Structure virConnect

    Structure virConnect
    struct _virConnect { -The content of this structure is not made public by the API. -}

    Structure virConnectAuth

    Structure virConnectAuth
    struct _virConnectAuth { - int * credtype : List of supported virConnectCredentialT - unsigned int ncredtype - virConnectAuthCallbackPtr cb : Callback used to collect credentials - void * cbdata -}

    Structure virConnectCredential

    Structure virConnectCredential
    struct _virConnectCredential { - int type : One of virConnectCredentialType constan - const char * prompt : Prompt to show to user - const char * challenge : Additional challenge to show - const char * defresult : Optional default result - char * result : Result to be filled with user response - unsigned int resultlen : Length of the result -}

    Enum virConnectCredentialType

    Enum virConnectCredentialType {
    -    VIR_CRED_USERNAME = 1 : Identity to act as
    -    VIR_CRED_AUTHNAME = 2 : Identify to authorize as
    -    VIR_CRED_LANGUAGE = 3 : RFC 1766 languages, comma separated
    -    VIR_CRED_CNONCE = 4 : client supplies a nonce
    -    VIR_CRED_PASSPHRASE = 5 : Passphrase secret
    -    VIR_CRED_ECHOPROMPT = 6 : Challenge response
    -    VIR_CRED_NOECHOPROMPT = 7 : Challenge response
    -    VIR_CRED_REALM = 8 : Authentication realm
    -    VIR_CRED_EXTERNAL = 9 : Externally managed credential More may be added - expect the unexpected
    +
    +int	virConnectClose			(virConnectPtr conn)
    +char *	virConnectGetCapabilities	(virConnectPtr conn)
    +char *	virConnectGetHostname		(virConnectPtr conn)
    +int	virConnectGetMaxVcpus		(virConnectPtr conn, 
    const char * type) +const char * virConnectGetType (virConnectPtr conn) +char * virConnectGetURI (virConnectPtr conn) +int virConnectGetVersion (virConnectPtr conn,
    unsigned long * hvVer) +int virConnectListDefinedDomains (virConnectPtr conn,
    char ** const names,
    int maxnames) +int virConnectListDefinedNetworks (virConnectPtr conn,
    char ** const names,
    int maxnames) +int virConnectListDefinedStoragePools (virConnectPtr conn,
    char ** const names,
    int maxnames) +int virConnectListDomains (virConnectPtr conn,
    int * ids,
    int maxids) +int virConnectListNetworks (virConnectPtr conn,
    char ** const names,
    int maxnames) +int virConnectListStoragePools (virConnectPtr conn,
    char ** const names,
    int maxnames) +int virConnectNumOfDefinedDomains (virConnectPtr conn) +int virConnectNumOfDefinedNetworks (virConnectPtr conn) +int virConnectNumOfDefinedStoragePools (virConnectPtr conn) +int virConnectNumOfDomains (virConnectPtr conn) +int virConnectNumOfNetworks (virConnectPtr conn) +int virConnectNumOfStoragePools (virConnectPtr conn) +virConnectPtr virConnectOpen (const char * name) +virConnectPtr virConnectOpenAuth (const char * name,
    virConnectAuthPtr auth,
    int flags) +virConnectPtr virConnectOpenReadOnly (const char * name) +int virDomainAttachDevice (virDomainPtr domain,
    const char * xml) +int virDomainBlockStats (virDomainPtr dom,
    const char * path,
    virDomainBlockStatsPtr stats,
    size_t size) +int virDomainCoreDump (virDomainPtr domain,
    const char * to,
    int flags) +int virDomainCreate (virDomainPtr domain) +virDomainPtr virDomainCreateLinux (virConnectPtr conn,
    const char * xmlDesc,
    unsigned int flags) +virDomainPtr virDomainDefineXML (virConnectPtr conn,
    const char * xml) +int virDomainDestroy (virDomainPtr domain) +int virDomainDetachDevice (virDomainPtr domain,
    const char * xml) +int virDomainFree (virDomainPtr domain) +int virDomainGetAutostart (virDomainPtr domain,
    int * autostart) +virConnectPtr virDomainGetConnect (virDomainPtr dom) +unsigned int virDomainGetID (virDomainPtr domain) +int virDomainGetInfo (virDomainPtr domain,
    virDomainInfoPtr info) +unsigned long virDomainGetMaxMemory (virDomainPtr domain) +int virDomainGetMaxVcpus (virDomainPtr domain) +const char * virDomainGetName (virDomainPtr domain) +char * virDomainGetOSType (virDomainPtr domain) +int virDomainGetSchedulerParameters (virDomainPtr domain,
    virSchedParameterPtr params,
    int * nparams) +char * virDomainGetSchedulerType (virDomainPtr domain,
    int * nparams) +int virDomainGetUUID (virDomainPtr domain,
    unsigned char * uuid) +int virDomainGetUUIDString (virDomainPtr domain,
    char * buf) +int virDomainGetVcpus (virDomainPtr domain,
    virVcpuInfoPtr info,
    int maxinfo,
    unsigned char * cpumaps,
    int maplen) +char * virDomainGetXMLDesc (virDomainPtr domain,
    int flags) +int virDomainInterfaceStats (virDomainPtr dom,
    const char * path,
    virDomainInterfaceStatsPtr stats,
    size_t size) +virDomainPtr virDomainLookupByID (virConnectPtr conn,
    int id) +virDomainPtr virDomainLookupByName (virConnectPtr conn,
    const char * name) +virDomainPtr virDomainLookupByUUID (virConnectPtr conn,
    const unsigned char * uuid) +virDomainPtr virDomainLookupByUUIDString (virConnectPtr conn,
    const char * uuidstr) +virDomainPtr virDomainMigrate (virDomainPtr domain,
    virConnectPtr dconn,
    unsigned long flags,
    const char * dname,
    const char * uri,
    unsigned long bandwidth) +int virDomainPinVcpu (virDomainPtr domain,
    unsigned int vcpu,
    unsigned char * cpumap,
    int maplen) +int virDomainReboot (virDomainPtr domain,
    unsigned int flags) +int virDomainRestore (virConnectPtr conn,
    const char * from) +int virDomainResume (virDomainPtr domain) +int virDomainSave (virDomainPtr domain,
    const char * to) +int virDomainSetAutostart (virDomainPtr domain,
    int autostart) +int virDomainSetMaxMemory (virDomainPtr domain,
    unsigned long memory) +int virDomainSetMemory (virDomainPtr domain,
    unsigned long memory) +int virDomainSetSchedulerParameters (virDomainPtr domain,
    virSchedParameterPtr params,
    int nparams) +int virDomainSetVcpus (virDomainPtr domain,
    unsigned int nvcpus) +int virDomainShutdown (virDomainPtr domain) +int virDomainSuspend (virDomainPtr domain) +int virDomainUndefine (virDomainPtr domain) +int virGetVersion (unsigned long * libVer,
    const char * type,
    unsigned long * typeVer) +int virInitialize (void) +int virNetworkCreate (virNetworkPtr network) +virNetworkPtr virNetworkCreateXML (virConnectPtr conn,
    const char * xmlDesc) +virNetworkPtr virNetworkDefineXML (virConnectPtr conn,
    const char * xml) +int virNetworkDestroy (virNetworkPtr network) +int virNetworkFree (virNetworkPtr network) +int virNetworkGetAutostart (virNetworkPtr network,
    int * autostart) +char * virNetworkGetBridgeName (virNetworkPtr network) +virConnectPtr virNetworkGetConnect (virNetworkPtr net) +const char * virNetworkGetName (virNetworkPtr network) +int virNetworkGetUUID (virNetworkPtr network,
    unsigned char * uuid) +int virNetworkGetUUIDString (virNetworkPtr network,
    char * buf) +char * virNetworkGetXMLDesc (virNetworkPtr network,
    int flags) +virNetworkPtr virNetworkLookupByName (virConnectPtr conn,
    const char * name) +virNetworkPtr virNetworkLookupByUUID (virConnectPtr conn,
    const unsigned char * uuid) +virNetworkPtr virNetworkLookupByUUIDString (virConnectPtr conn,
    const char * uuidstr) +int virNetworkSetAutostart (virNetworkPtr network,
    int autostart) +int virNetworkUndefine (virNetworkPtr network) +int virNodeGetCellsFreeMemory (virConnectPtr conn,
    unsigned long long * freeMems,
    int startCell,
    int maxCells) +unsigned long long virNodeGetFreeMemory (virConnectPtr conn) +int virNodeGetInfo (virConnectPtr conn,
    virNodeInfoPtr info) +int virStoragePoolBuild (virStoragePoolPtr pool,
    unsigned int flags) +int virStoragePoolCreate (virStoragePoolPtr pool,
    unsigned int flags) +virStoragePoolPtr virStoragePoolCreateXML (virConnectPtr conn,
    const char * xmlDesc,
    unsigned int flags) +virStoragePoolPtr virStoragePoolDefineXML (virConnectPtr conn,
    const char * xml,
    unsigned int flags) +int virStoragePoolDelete (virStoragePoolPtr pool,
    unsigned int flags) +int virStoragePoolDestroy (virStoragePoolPtr pool) +int virStoragePoolFree (virStoragePoolPtr pool) +int virStoragePoolGetAutostart (virStoragePoolPtr pool,
    int * autostart) +virConnectPtr virStoragePoolGetConnect (virStoragePoolPtr pool) +int virStoragePoolGetInfo (virStoragePoolPtr pool,
    virStoragePoolInfoPtr info) +const char * virStoragePoolGetName (virStoragePoolPtr pool) +int virStoragePoolGetUUID (virStoragePoolPtr pool,
    unsigned char * uuid) +int virStoragePoolGetUUIDString (virStoragePoolPtr pool,
    char * buf) +char * virStoragePoolGetXMLDesc (virStoragePoolPtr pool,
    unsigned int flags) +int virStoragePoolListVolumes (virStoragePoolPtr pool,
    char ** const names,
    int maxnames) +virStoragePoolPtr virStoragePoolLookupByName (virConnectPtr conn,
    const char * name) +virStoragePoolPtr virStoragePoolLookupByUUID (virConnectPtr conn,
    const unsigned char * uuid) +virStoragePoolPtr virStoragePoolLookupByUUIDString (virConnectPtr conn,
    const char * uuidstr) +virStoragePoolPtr virStoragePoolLookupByVolume (virStorageVolPtr vol) +int virStoragePoolNumOfVolumes (virStoragePoolPtr pool) +int virStoragePoolRefresh (virStoragePoolPtr pool,
    unsigned int flags) +int virStoragePoolSetAutostart (virStoragePoolPtr pool,
    int autostart) +int virStoragePoolUndefine (virStoragePoolPtr pool) +virStorageVolPtr virStorageVolCreateXML (virStoragePoolPtr pool,
    const char * xmldesc,
    unsigned int flags) +int virStorageVolDelete (virStorageVolPtr vol,
    unsigned int flags) +int virStorageVolFree (virStorageVolPtr vol) +virConnectPtr virStorageVolGetConnect (virStorageVolPtr vol) +int virStorageVolGetInfo (virStorageVolPtr vol,
    virStorageVolInfoPtr info) +const char * virStorageVolGetKey (virStorageVolPtr vol) +const char * virStorageVolGetName (virStorageVolPtr vol) +char * virStorageVolGetPath (virStorageVolPtr vol) +char * virStorageVolGetXMLDesc (virStorageVolPtr vol,
    unsigned int flags) +virStorageVolPtr virStorageVolLookupByKey (virConnectPtr conn,
    const char * key) +virStorageVolPtr virStorageVolLookupByName (virStoragePoolPtr pool,
    const char * name) +virStorageVolPtr virStorageVolLookupByPath (virConnectPtr conn,
    const char * path) +

    Description

    Macros

    LIBVIR_VERSION_NUMBER

    #define LIBVIR_VERSION_NUMBER

    Macro providing the version of the library as major * 1,000,000 + minor * 1000 + micro

    VIR_COPY_CPUMAP

    #define VIR_COPY_CPUMAP

    This macro is to be used in conjunction with the virDomainGetVcpus() and virDomainPinVcpu() APIs. The VIR_COPY_CPUMAP macro extracts the cpumap of the specified vcpu from the cpumaps array and copies it into cpumap to be used later by the virDomainPinVcpu() API.

    VIR_CPU_MAPLEN

    #define VIR_CPU_MAPLEN

    This macro is to be used in conjunction with virDomainPinVcpu() API. It returns the length (in bytes) required to store the complete CPU map between a single virtual & all physical CPUs of a domain.

    VIR_CPU_USABLE

    #define VIR_CPU_USABLE

    This macro is to be used in conjunction with virDomainGetVcpus() API. VIR_CPU_USABLE macro returns a non zero value (true) if the cpu is usable by the vcpu, and 0 otherwise.

    VIR_DOMAIN_SCHED_FIELD_LENGTH

    #define VIR_DOMAIN_SCHED_FIELD_LENGTH

    Macro providing the field length of virSchedParameter

    VIR_GET_CPUMAP

    #define VIR_GET_CPUMAP

    This macro is to be used in conjunction with virDomainGetVcpus() and virDomainPinVcpu() APIs. VIR_GET_CPUMAP macro returns a pointer to the cpumap of the specified vcpu from cpumaps array.

    VIR_NODEINFO_MAXCPUS

    #define VIR_NODEINFO_MAXCPUS

    This macro calculates the total number of CPUs supported by, but not necessarily active in, the host.

    VIR_UNUSE_CPU

    #define VIR_UNUSE_CPU

    This macro is to be used in conjunction with the virDomainPinVcpu() API. The VIR_UNUSE_CPU macro resets the bit (CPU not usable) of the related cpu in cpumap.

    VIR_USE_CPU

    #define VIR_USE_CPU

    This macro is to be used in conjunction with the virDomainPinVcpu() API. The VIR_USE_CPU macro sets the bit (CPU usable) of the related cpu in cpumap.

    VIR_UUID_BUFLEN

    #define VIR_UUID_BUFLEN

    This macro provides the length of the buffer required for virDomainGetUUID()

    VIR_UUID_STRING_BUFLEN

    #define VIR_UUID_STRING_BUFLEN

    This macro provides the length of the buffer required for virDomainGetUUIDString()

    Types

    virConnect

    struct virConnect{
    +
    The content of this structure is not made public by the API
     }
    -

    Enum virConnectFlags

    Enum virConnectFlags {
    -    VIR_CONNECT_RO = 1 : A readonly connection
    +

    virConnectAuth

    struct virConnectAuth{
    +
    int *credtype : List of supported virConnectCredentialType values
    unsigned intncredtype
    virConnectAuthCallbackPtrcb : Callback used to collect credentials
    void *cbdata
     }
    -
    - a virConnectPtr is pointer to a virConnect private structure, this is the type used to reference a connection to the Xen Hypervisor in the API. -

    Structure virDomain

    Structure virDomain
    struct _virDomain { -The content of this structure is not made public by the API. -}
    - A pointer to a virDomainBlockStats structure -

    Structure virDomainBlockStatsStruct

    Structure virDomainBlockStatsStruct
    struct _virDomainBlockStats { - long long rd_req : number of read requests - long long rd_bytes : number of read bytes - long long wr_req : number of write requests - long long wr_bytes : number of written bytes - long long errs : In Xen this returns the mysterious 'oo_ -}

    Enum virDomainCreateFlags

    Enum virDomainCreateFlags {
    -    VIR_DOMAIN_NONE = 0
    +

    virConnectCredential

    struct virConnectCredential{
    +
    inttype : One of virConnectCredentialType constants
    const char *prompt : Prompt to show to user
    const char *challenge : Additional challenge to show
    const char *defresult : Optional default result
    char *result : Result to be filled with user response (or defresult)
    unsigned intresultlen : Length of the result
     }
    -

    Structure virDomainInfo

    Structure virDomainInfo
    struct _virDomainInfo { - unsigned char state : the running state, one of virDomainFlag - unsigned long maxMem : the maximum memory in KBytes allowed - unsigned long memory : the memory in KBytes used by the domain - unsigned short nrVirtCpu : the number of virtual CPUs for the doma - unsigned long long cpuTime : the CPU time used in nanoseconds -}
    - a virDomainInfoPtr is a pointer to a virDomainInfo structure. - - A pointer to a virDomainInterfaceStats structure -

    Structure virDomainInterfaceStatsStruct

    Structure virDomainInterfaceStatsStruct
    struct _virDomainInterfaceStats { - long long rx_bytes - long long rx_packets - long long rx_errs - long long rx_drop - long long tx_bytes - long long tx_packets - long long tx_errs - long long tx_drop -}

    Enum virDomainMigrateFlags

    Enum virDomainMigrateFlags {
    -    VIR_MIGRATE_LIVE = 1 : live migration
    +

    virConnectCredentialType

    enum virConnectCredentialType {
        VIR_CRED_USERNAME = 1 : Identity to act as
        VIR_CRED_AUTHNAME = 2 : Identity to authorize as
        VIR_CRED_LANGUAGE = 3 : RFC 1766 languages, comma separated
        VIR_CRED_CNONCE = 4 : client supplies a nonce
        VIR_CRED_PASSPHRASE = 5 : Passphrase secret
        VIR_CRED_ECHOPROMPT = 6 : Challenge response
        VIR_CRED_NOECHOPROMPT = 7 : Challenge response
        VIR_CRED_REALM = 8 : Authentication realm
        VIR_CRED_EXTERNAL = 9 : Externally managed credential (more may be added - expect the unexpected)
    }

    virConnectFlags

    enum virConnectFlags {
        VIR_CONNECT_RO = 1 : A readonly connection
    }

    virDomain

    struct virDomain {
        The content of this structure is not made public by the API.
    }

    A virDomainPtr is a pointer to a virDomain private structure; this is the type used to reference a Xen domain in the API.

    virDomainBlockStatsStruct

    struct virDomainBlockStatsStruct {
        long long rd_req : number of read requests
        long long rd_bytes : number of read bytes
        long long wr_req : number of write requests
        long long wr_bytes : number of written bytes
        long long errs : In Xen this returns the mysterious 'oo_req'.
    }

    A virDomainBlockStatsPtr is a pointer to a virDomainBlockStats structure.

    virDomainCreateFlags

    enum virDomainCreateFlags {
        VIR_DOMAIN_NONE = 0
    }

    virDomainInfo

    struct virDomainInfo {
        unsigned char state : the running state, one of virDomainState
        unsigned long maxMem : the maximum memory in KBytes allowed
        unsigned long memory : the memory in KBytes used by the domain
        unsigned short nrVirtCpu : the number of virtual CPUs for the domain
        unsigned long long cpuTime : the CPU time used in nanoseconds
    }

    A virDomainInfoPtr is a pointer to a virDomainInfo structure.

    virDomainInterfaceStatsStruct

    struct virDomainInterfaceStatsStruct {
        long long rx_bytes
        long long rx_packets
        long long rx_errs
        long long rx_drop
        long long tx_bytes
        long long tx_packets
        long long tx_errs
        long long tx_drop
    }

    A virDomainInterfaceStatsPtr is a pointer to a virDomainInterfaceStats structure.

    virDomainMigrateFlags

    enum virDomainMigrateFlags {
        VIR_MIGRATE_LIVE = 1 : live migration
    }

    virDomainState

    enum virDomainState {
        VIR_DOMAIN_NOSTATE = 0 : no state
        VIR_DOMAIN_RUNNING = 1 : the domain is running
        VIR_DOMAIN_BLOCKED = 2 : the domain is blocked on resource
        VIR_DOMAIN_PAUSED = 3 : the domain is paused by user
        VIR_DOMAIN_SHUTDOWN = 4 : the domain is being shut down
        VIR_DOMAIN_SHUTOFF = 5 : the domain is shut off
        VIR_DOMAIN_CRASHED = 6 : the domain has crashed
    }

    virDomainXMLFlags

    enum virDomainXMLFlags {
        VIR_DOMAIN_XML_SECURE = 1 : dump security sensitive information too
        VIR_DOMAIN_XML_INACTIVE = 2 : dump inactive domain information
    }

    virNetwork

    struct virNetwork {
        The content of this structure is not made public by the API.
    }

    A virNetworkPtr is a pointer to a virNetwork private structure; this is the type used to reference a virtual network in the API.

    virNodeInfo

    struct virNodeInfo {
        char model[32] : string indicating the CPU model
        unsigned long memory : memory size in kilobytes
        unsigned int cpus : the number of active CPUs
        unsigned int mhz : expected CPU frequency
        unsigned int nodes : the number of NUMA cells, 1 for uniform memory access
        unsigned int sockets : number of CPU sockets per node
        unsigned int cores : number of cores per socket
        unsigned int threads : number of threads per core
    }

    A virNodeInfoPtr is a pointer to a virNodeInfo structure.

    virSchedParameter

    struct virSchedParameter {
        char field[VIR_DOMAIN_SCHED_FIELD_LENGTH] : parameter name
        int type : parameter type
    }

    A virSchedParameterPtr is a pointer to a virSchedParameter structure.

    virStorageVolDeleteFlags

    enum virStorageVolDeleteFlags {
        VIR_STORAGE_VOL_DELETE_NORMAL = 0 : Delete metadata only (fast)
        VIR_STORAGE_VOL_DELETE_ZEROED = 1 : Clear all data to zeros (slow)
    }

    virSchedParameterType

    enum virSchedParameterType {
        VIR_DOMAIN_SCHED_FIELD_INT = 1 : integer case
        VIR_DOMAIN_SCHED_FIELD_UINT = 2 : unsigned integer case
        VIR_DOMAIN_SCHED_FIELD_LLONG = 3 : long long case
        VIR_DOMAIN_SCHED_FIELD_ULLONG = 4 : unsigned long long case
        VIR_DOMAIN_SCHED_FIELD_DOUBLE = 5 : double case
        VIR_DOMAIN_SCHED_FIELD_BOOLEAN = 6 : boolean (character) case
    }

    virStoragePool

    struct virStoragePool {
        The content of this structure is not made public by the API.
    }

    A virStoragePoolPtr is a pointer to a virStoragePool private structure; this is the type used to reference a storage pool in the API.

    virStorageVolInfo

    struct virStorageVolInfo {
        int type : virStorageVolType flags
        unsigned long long capacity : Logical size bytes
        unsigned long long allocation : Current allocation bytes
    }

    virStorageVolType

    enum virStorageVolType {
        VIR_STORAGE_VOL_FILE = 0 : Regular file based volumes
        VIR_STORAGE_VOL_BLOCK = 1 : Block based volumes
    }

    virStoragePoolBuildFlags

    enum virStoragePoolBuildFlags {
        VIR_STORAGE_POOL_BUILD_NEW = 0 : Regular build from scratch
        VIR_STORAGE_POOL_BUILD_REPAIR = 1 : Repair / reinitialize
        VIR_STORAGE_POOL_BUILD_RESIZE = 2 : Extend existing pool
    }

    virStoragePoolDeleteFlags

    enum virStoragePoolDeleteFlags {
        VIR_STORAGE_POOL_DELETE_NORMAL = 0 : Delete metadata only (fast)
        VIR_STORAGE_POOL_DELETE_ZEROED = 1 : Clear all data to zeros (slow)
    }

    virStoragePoolInfo

    struct virStoragePoolInfo {
        int state : virStoragePoolState flags
        unsigned long long capacity : Logical size bytes
        unsigned long long allocation : Current allocation bytes
        unsigned long long available : Remaining free space bytes
    }

    virVcpuInfo

    struct virVcpuInfo {
        unsigned int number : virtual CPU number
        int state : value from virVcpuState
        unsigned long long cpuTime : CPU time used, in nanoseconds
        int cpu : real CPU number, or -1 if offline
    }

    virVcpuState

    enum virVcpuState {
        VIR_VCPU_OFFLINE = 0 : the virtual CPU is offline
        VIR_VCPU_RUNNING = 1 : the virtual CPU is running
        VIR_VCPU_BLOCKED = 2 : the virtual CPU is blocked on resource
    }

    virStoragePoolState

    enum virStoragePoolState {
        VIR_STORAGE_POOL_INACTIVE = 0 : Not running
        VIR_STORAGE_POOL_BUILDING = 1 : Initializing pool, not available
        VIR_STORAGE_POOL_RUNNING = 2 : Running normally
        VIR_STORAGE_POOL_DEGRADED = 3 : Running degraded
    }

    virStorageVol

    struct virStorageVol {
        The content of this structure is not made public by the API.
    }

    A virStorageVolPtr is a pointer to a virStorageVol private structure; this is the type used to reference a storage volume in the API.

    Function type: virConnectAuthCallbackPtr

    int	virConnectAuthCallbackPtr	(virConnectCredentialPtr cred,
    unsigned int ncred,
    void * cbdata)

    cred:
    ncred:
    cbdata:
    Returns:

    Function: virConnectClose

    int	virConnectClose			(virConnectPtr conn)
    -

    This function closes the connection to the Hypervisor. It should not be called if further interaction with the Hypervisor is needed, especially if there is a running domain which needs further monitoring by the application.

    -
    conn:pointer to the hypervisor connection
    Returns:0 in case of success or -1 in case of error.

    Function: virConnectGetCapabilities

    char *	virConnectGetCapabilities	(virConnectPtr conn)
    -

    Provides capabilities of the hypervisor / driver.

    -
    conn:pointer to the hypervisor connection
    Returns:NULL in case of error, or an XML string defining the capabilities. The client must free the returned string after use.

    Function: virConnectGetHostname

    char *	virConnectGetHostname		(virConnectPtr conn)
    -

    This returns the system hostname on which the hypervisor is running (the result of the gethostname(2) system call). If we are connected to a remote system, then this returns the hostname of the remote system.

    -
    conn:pointer to a hypervisor connection
    Returns:the hostname which must be freed by the caller, or NULL if there was an error.

    Function: virConnectGetMaxVcpus

    int	virConnectGetMaxVcpus		(virConnectPtr conn, 
    const char * type)
    -

    Provides the maximum number of virtual CPUs supported for a guest VM of a specific type. The 'type' parameter here corresponds to the 'type' attribute in the <domain> element of the XML.

    -
    conn:pointer to the hypervisor connection
    type:value of the 'type' attribute in the <domain> element
    Returns:the maximum number of virtual CPUs or -1 in case of error.

    Function: virConnectGetType

    const char *	virConnectGetType	(virConnectPtr conn)
    -

    Get the name of the Hypervisor software used.

    -
    conn:pointer to the hypervisor connection
    Returns:NULL in case of error, a static zero terminated string otherwise. See also: http://www.redhat.com/archives/libvir-list/2007-February/msg00096.html

    Function: virConnectGetURI

    char *	virConnectGetURI		(virConnectPtr conn)
    -

    This returns the URI (name) of the hypervisor connection. Normally this is the same as or similar to the string passed to the virConnectOpen/virConnectOpenReadOnly call, but the driver may make the URI canonical. If name == NULL was passed to virConnectOpen, then the driver will return a non-NULL URI which can be used to connect to the same hypervisor later.

    -
    conn:pointer to a hypervisor connection
    Returns:the URI string which must be freed by the caller, or NULL if there was an error.

    Function: virConnectGetVersion

    int	virConnectGetVersion		(virConnectPtr conn, 
    unsigned long * hvVer)
    -

    Get the version level of the running Hypervisor. This may only work with a hypervisor call, i.e. with privileged access to the hypervisor, not with a read-only connection.

    -
    conn:pointer to the hypervisor connection
    hvVer:return value for the version of the running hypervisor (OUT)
    Returns:-1 in case of error, 0 otherwise. If the version can't be extracted due to missing capabilities, 0 is returned and @hvVer is 0; otherwise the @hvVer value is major * 1,000,000 + minor * 1,000 + release

    Function: virConnectListDefinedDomains

    int	virConnectListDefinedDomains	(virConnectPtr conn, 
    char ** const names,
    int maxnames)
    -

    list the defined but inactive domains, stores the pointers to the names in @names

    -
    conn:pointer to the hypervisor connection
    names:pointer to an array to store the names
    maxnames:size of the array
    Returns:the number of names provided in the array or -1 in case of error

    Function: virConnectListDefinedNetworks

    int	virConnectListDefinedNetworks	(virConnectPtr conn, 
    char ** const names,
    int maxnames)
    -

    list the inactive networks, stores the pointers to the names in @names

    -
    conn:pointer to the hypervisor connection
    names:pointer to an array to store the names
    maxnames:size of the array
    Returns:the number of names provided in the array or -1 in case of error

    Function: virConnectListDefinedStoragePools

    int	virConnectListDefinedStoragePools	(virConnectPtr conn, 
    char ** const names,
    int maxnames)
    -

    Provides the list of names of inactive storage pools up to maxnames. If there are more than maxnames, the remaining names will be silently ignored.

    -
    conn:pointer to hypervisor connection
    names:array of char * to fill with pool names (allocated by caller)
    maxnames:size of the names array
    Returns:0 on success, -1 on error

    Function: virConnectListDomains

    int	virConnectListDomains		(virConnectPtr conn, 
    int * ids,
    int maxids)
    -

    Collect the list of active domains, and store their IDs in @ids

    -
    conn:pointer to the hypervisor connection
    ids:array to collect the list of IDs of active domains
    maxids:size of @ids
    Returns:the number of domain found or -1 in case of error
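
    As a usage sketch (assuming conn is an already-open virConnectPtr), the @ids array is normally sized with virConnectNumOfDomains() first:

        #include <stdio.h>
        #include <stdlib.h>
        #include <libvirt/libvirt.h>

        /* List the IDs and names of all active domains on a connection */
        static void list_active(virConnectPtr conn)
        {
            int n = virConnectNumOfDomains(conn);
            if (n <= 0)
                return;
            int *ids = malloc(sizeof(int) * n);
            if (!ids)
                return;
            n = virConnectListDomains(conn, ids, n);
            for (int i = 0; i < n; i++) {
                virDomainPtr dom = virDomainLookupByID(conn, ids[i]);
                if (dom) {
                    printf("%d: %s\n", ids[i], virDomainGetName(dom));
                    virDomainFree(dom);
                }
            }
            free(ids);
        }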

    Function: virConnectListNetworks

    int	virConnectListNetworks		(virConnectPtr conn, 
    char ** const names,
    int maxnames)
    -

    Collect the list of active networks, and store their names in @names

    -
    conn:pointer to the hypervisor connection
    names:array to collect the list of names of active networks
    maxnames:size of @names
    Returns:the number of networks found or -1 in case of error

    Function: virConnectListStoragePools

    int	virConnectListStoragePools	(virConnectPtr conn, 
    char ** const names,
    int maxnames)
    -

    Provides the list of names of active storage pools up to maxnames. If there are more than maxnames, the remaining names will be silently ignored.

    -
    conn:pointer to hypervisor connection
    names:array of char * to fill with pool names (allocated by caller)
    maxnames:size of the names array
    Returns:0 on success, -1 on error

    Function: virConnectNumOfDefinedDomains

    int	virConnectNumOfDefinedDomains	(virConnectPtr conn)
    -

    Provides the number of defined but inactive domains.

    -
    conn:pointer to the hypervisor connection
    Returns:the number of domain found or -1 in case of error

    Function: virConnectNumOfDefinedNetworks

    int	virConnectNumOfDefinedNetworks	(virConnectPtr conn)
    -

    Provides the number of inactive networks.

    -
    conn:pointer to the hypervisor connection
    Returns:the number of networks found or -1 in case of error

    Function: virConnectNumOfDefinedStoragePools

    int	virConnectNumOfDefinedStoragePools	(virConnectPtr conn)
    -

    Provides the number of inactive storage pools

    -
    conn:pointer to hypervisor connection
    Returns:the number of pools found, or -1 on error

    Function: virConnectNumOfDomains

    int	virConnectNumOfDomains		(virConnectPtr conn)
    -

    Provides the number of active domains.

    -
    conn:pointer to the hypervisor connection
    Returns:the number of domain found or -1 in case of error

    Function: virConnectNumOfNetworks

    int	virConnectNumOfNetworks		(virConnectPtr conn)
    -

    Provides the number of active networks.

    -
    conn:pointer to the hypervisor connection
    Returns:the number of network found or -1 in case of error

    Function: virConnectNumOfStoragePools

    int	virConnectNumOfStoragePools	(virConnectPtr conn)
    -

    Provides the number of active storage pools

    -
    conn:pointer to hypervisor connection
    Returns:the number of pools found, or -1 on error

    Function: virConnectOpen

    virConnectPtr	virConnectOpen		(const char * name)
    -

    This function should be called first to get a connection to the Hypervisor and xen store

    -
    name:URI of the hypervisor
    Returns:a pointer to the hypervisor connection or NULL in case of error URIs are documented at http://libvirt.org/uri.html
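
    A minimal sketch of opening and closing a connection (the "qemu:///system" URI is only an example; any URI documented on the page above works):

        #include <stdio.h>
        #include <libvirt/libvirt.h>

        int main(void)
        {
            virConnectPtr conn = virConnectOpen("qemu:///system");
            if (conn == NULL) {
                fprintf(stderr, "failed to connect\n");
                return 1;
            }
            printf("connected to %s\n", virConnectGetType(conn));
            virConnectClose(conn);
            return 0;
        }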

    Function: virConnectOpenAuth

    virConnectPtr	virConnectOpenAuth	(const char * name, 
    virConnectAuthPtr auth,
    int flags)
    -

    This function should be called first to get a connection to the Hypervisor. If necessary, authentication will be performed fetching credentials via the callback

    -
    name:URI of the hypervisor
    auth:Authenticate callback parameters
    flags:Open flags
    Returns:a pointer to the hypervisor connection or NULL in case of error URIs are documented at http://libvirt.org/uri.html
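
    A sketch of a custom credential callback (the username, URI and credential list are placeholders; a real callback would prompt the user instead of returning canned answers):

        #include <string.h>
        #include <stdlib.h>
        #include <libvirt/libvirt.h>

        /* Fill in every requested VIR_CRED_AUTHNAME credential with a canned answer */
        static int auth_cb(virConnectCredentialPtr cred, unsigned int ncred, void *cbdata)
        {
            const char *answer = cbdata;   /* e.g. a username */
            for (unsigned int i = 0; i < ncred; i++) {
                if (cred[i].type == VIR_CRED_AUTHNAME) {
                    cred[i].result = strdup(answer);
                    if (!cred[i].result)
                        return -1;
                    cred[i].resultlen = strlen(cred[i].result);
                }
            }
            return 0;
        }

        static int credtypes[] = { VIR_CRED_AUTHNAME };
        static virConnectAuth auth = { credtypes, 1, auth_cb, (void *)"admin" };

        /* virConnectPtr conn = virConnectOpenAuth("qemu+tcp://host/system", &auth, 0); */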

    Function: virConnectOpenReadOnly

    virConnectPtr	virConnectOpenReadOnly	(const char * name)
    -

    This function should be called first to get a restricted connection to the library functionalities. The set of APIs usable are then restricted on the available methods to control the domains.

    -
    name:URI of the hypervisor
    Returns:a pointer to the hypervisor connection or NULL in case of error URIs are documented at http://libvirt.org/uri.html

    Function: virDomainAttachDevice

    int	virDomainAttachDevice		(virDomainPtr domain, 
    const char * xml)
    -

    Create a virtual device attachment to backend.

    -
    domain:pointer to domain object
    xml:pointer to XML description of one device
    Returns:0 in case of success, -1 in case of failure.

    Function: virDomainBlockStats

    int	virDomainBlockStats		(virDomainPtr dom, 
    const char * path,
    virDomainBlockStatsPtr stats,
    size_t size)
    -

    This function returns block device (disk) stats for block devices attached to the domain. The path parameter is the name of the block device. Get this by calling virDomainGetXMLDesc and finding the <target dev='...'> attribute within //domain/devices/disk. (For example, "xvda"). Domains may have more than one block device. To get stats for each you should make multiple calls to this function. Individual fields within the stats structure may be returned as -1, which indicates that the hypervisor does not support that particular statistic.

    -
    dom:pointer to the domain object
    path:path to the block device
    stats:block device stats (returned)
    size:size of stats structure
    Returns:0 in case of success or -1 in case of failure.
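
    For example (the device name "xvda" is only illustrative and must come from the domain's own XML description):

        #include <stdio.h>
        #include <libvirt/libvirt.h>

        /* Print read/write request counters for one disk of a domain */
        static void show_disk_stats(virDomainPtr dom, const char *dev)
        {
            virDomainBlockStatsStruct stats;
            if (virDomainBlockStats(dom, dev, &stats, sizeof(stats)) == 0)
                printf("%s: rd_req=%lld wr_req=%lld\n", dev, stats.rd_req, stats.wr_req);
        }

        /* show_disk_stats(dom, "xvda"); */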

    Function: virDomainCoreDump

    int	virDomainCoreDump		(virDomainPtr domain, 
    const char * to,
    int flags)
    -

    This method will dump the core of a domain on a given file for analysis. Note that for remote Xen Daemon the file path will be interpreted in the remote host.

    -
    domain:a domain object
    to:path for the core file
    flags:extra flags, currently unused
    Returns:0 in case of success and -1 in case of failure.

    Function: virDomainCreate

    int	virDomainCreate			(virDomainPtr domain)
    -

    Launch a defined domain. If the call succeeds, the domain moves from the defined to the running domains pool.

    -
    domain:pointer to a defined domain
    Returns:0 in case of success, -1 in case of error

    Function: virDomainCreateLinux

    virDomainPtr	virDomainCreateLinux	(virConnectPtr conn, 
    const char * xmlDesc,
    unsigned int flags)
    -

    Launch a new Linux guest domain, based on an XML description similar to the one returned by virDomainGetXMLDesc(). This function may require privileged access to the hypervisor.

    -
    conn:pointer to the hypervisor connection
    xmlDesc:string containing an XML description of the domain
    flags:an optional set of virDomainFlags
    Returns:a new domain object or NULL in case of failure

    Function: virDomainDefineXML

    virDomainPtr	virDomainDefineXML	(virConnectPtr conn, 
    const char * xml)
    -

    define a domain, but does not start it

    -
    conn:pointer to the hypervisor connection
    xml:the XML description for the domain, preferably in UTF-8
    Returns:NULL in case of error, a pointer to the domain otherwise

    Function: virDomainDestroy

    int	virDomainDestroy		(virDomainPtr domain)
    -

    Destroy the domain object. The running instance is shut down if not down already and all resources used by it are given back to the hypervisor. The data structure is freed and should not be used thereafter if the call does not return an error. This function may require privileged access.

    -
    domain:a domain object
    Returns:0 in case of success and -1 in case of failure.

    Function: virDomainDetachDevice

    int	virDomainDetachDevice		(virDomainPtr domain, 
    const char * xml)
    -

    Destroy a virtual device attachment to backend.

    -
    domain:pointer to domain object
    xml:pointer to XML description of one device
    Returns:0 in case of success, -1 in case of failure.

    Function: virDomainFree

    int	virDomainFree			(virDomainPtr domain)
    -

    Free the domain object. The running instance is kept alive. The data structure is freed and should not be used thereafter.

    -
    domain:a domain object
    Returns:0 in case of success and -1 in case of failure.

    Function: virDomainGetAutostart

    int	virDomainGetAutostart		(virDomainPtr domain, 
    int * autostart)
    -

    Provides a boolean value indicating whether the domain is configured to be automatically started when the host machine boots.

    -
    domain:a domain object
    autostart:the value returned
    Returns:-1 in case of error, 0 in case of success

    Function: virDomainGetConnect

    virConnectPtr	virDomainGetConnect	(virDomainPtr dom)
    -

    Provides the connection pointer associated with a domain. The reference counter on the connection is not increased by this call. WARNING: When writing libvirt bindings in other languages, do not use this function. Instead, store the connection and the domain object together.

    -
    dom:pointer to a domain
    Returns:the virConnectPtr or NULL in case of failure.

    Function: virDomainGetID

    unsigned int	virDomainGetID		(virDomainPtr domain)
    -

    Get the hypervisor ID number for the domain

    -
    domain:a domain object
    Returns:the domain ID number or (unsigned int) -1 in case of error

    Function: virDomainGetInfo

    int	virDomainGetInfo		(virDomainPtr domain, 
    virDomainInfoPtr info)
    -

    Extract information about a domain. Note that if the connection used to get the domain is limited only a partial set of the information can be extracted.

    -
    domain:a domain object
    info:pointer to a virDomainInfo structure allocated by the user
    Returns:0 in case of success and -1 in case of failure.
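
    A short sketch of reading and printing the returned structure:

        #include <stdio.h>
        #include <libvirt/libvirt.h>

        /* Print the basic runtime information of a domain */
        static void show_info(virDomainPtr dom)
        {
            virDomainInfo info;
            if (virDomainGetInfo(dom, &info) == 0)
                printf("state=%d vcpus=%d mem=%luKB max=%luKB\n",
                       info.state, info.nrVirtCpu, info.memory, info.maxMem);
        }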

    Function: virDomainGetMaxMemory

    unsigned long	virDomainGetMaxMemory	(virDomainPtr domain)
    -

    Retrieve the maximum amount of physical memory allocated to a domain. If domain is NULL, then this gets the amount of memory reserved for Domain0, i.e. the domain where the application runs.

    -
    domain:a domain object or NULL
    Returns:the memory size in kilobytes or 0 in case of error.

    Function: virDomainGetMaxVcpus

    int	virDomainGetMaxVcpus		(virDomainPtr domain)
    -

    Provides the maximum number of virtual CPUs supported for the guest VM. If the guest is inactive, this is basically the same as virConnectGetMaxVcpus. If the guest is running this will reflect the maximum number of virtual CPUs the guest was booted with.

    -
    domain:pointer to domain object
    Returns:the maximum number of virtual CPUs or -1 in case of error.

    Function: virDomainGetName

    const char *	virDomainGetName	(virDomainPtr domain)
    -

    Get the public name for that domain

    -
    domain:a domain object
    Returns:a pointer to the name or NULL. The string need not be deallocated; its lifetime will be the same as the domain object.

    Function: virDomainGetOSType

    char *	virDomainGetOSType		(virDomainPtr domain)
    -

    Get the type of the domain's operating system.

    -
    domain:a domain object
    Returns:the new string or NULL in case of error, the string must be freed by the caller.

    Function: virDomainGetSchedulerParameters

    int	virDomainGetSchedulerParameters	(virDomainPtr domain, 
    virSchedParameterPtr params,
    int * nparams)
    -

    Get the scheduler parameters, the @params array will be filled with the values.

    -
    domain:pointer to domain object
    params:pointer to scheduler parameter object (return value)
    nparams:pointer to the number of scheduler parameters (this value should be the same as the nparams value returned by virDomainGetSchedulerType)
    Returns:-1 in case of error, 0 in case of success.

    Function: virDomainGetSchedulerType

    char *	virDomainGetSchedulerType	(virDomainPtr domain, 
    int * nparams)
    -

    Get the scheduler type.

    -
    domain:pointer to domain object
    nparams:number of scheduler parameters(return value)
    Returns:NULL in case of error. The caller must free the returned string.

    Function: virDomainGetUUID

    int	virDomainGetUUID		(virDomainPtr domain, 
    unsigned char * uuid)
    -

    Get the UUID for a domain

    -
    domain:a domain object
    uuid:pointer to a VIR_UUID_BUFLEN bytes array
    Returns:-1 in case of error, 0 in case of success

    Function: virDomainGetUUIDString

    int	virDomainGetUUIDString		(virDomainPtr domain, 
    char * buf)
    -

    Get the UUID for a domain as string. For more information about UUID see RFC4122.

    -
    domain:a domain object
    buf:pointer to a VIR_UUID_STRING_BUFLEN bytes array
    Returns:-1 in case of error, 0 in case of success

    Function: virDomainGetVcpus

    int	virDomainGetVcpus		(virDomainPtr domain, 
    virVcpuInfoPtr info,
    int maxinfo,
    unsigned char * cpumaps,
    int maplen)
    -

    Extract information about virtual CPUs of domain, store it in info array and also in cpumaps if this pointer isn't NULL.

    -
    domain:pointer to domain object, or NULL for Domain0
    info:pointer to an array of virVcpuInfo structures (OUT)
    maxinfo:number of structures in info array
    cpumaps:pointer to an bit map of real CPUs for all vcpus of this domain (in 8-bit bytes) (OUT) If cpumaps is NULL, then no cpumap information is returned by the API. It's assumed there is <maxinfo> cpumap in cpumaps array. The memory allocated to cpumaps must be (maxinfo * maplen) bytes (ie: calloc(maxinfo, maplen)). One cpumap inside cpumaps has the format described in virDomainPinVcpu() API.
    maplen:number of bytes in one cpumap, from 1 up to size of CPU map in underlying virtualization system (Xen...).
    Returns:the number of info filled in case of success, -1 in case of failure.

    Function: virDomainGetXMLDesc

    char *	virDomainGetXMLDesc		(virDomainPtr domain, 
    int flags)
    -

    Provide an XML description of the domain. The description may be reused later to relaunch the domain with virDomainCreateLinux().

    -
    domain:a domain object
    flags:an OR'ed set of virDomainXMLFlags
    Returns:a 0 terminated UTF-8 encoded XML instance, or NULL in case of error. the caller must free() the returned value.

    Function: virDomainInterfaceStats

    int	virDomainInterfaceStats		(virDomainPtr dom, 
    const char * path,
    virDomainInterfaceStatsPtr stats,
    size_t size)
    -

    This function returns network interface stats for interfaces attached to the domain. The path parameter is the name of the network interface. Domains may have more than one network interface. To get stats for each you should make multiple calls to this function. Individual fields within the stats structure may be returned as -1, which indicates that the hypervisor does not support that particular statistic.

    -
    dom:pointer to the domain object
    path:path to the interface
    stats:network interface stats (returned)
    size:size of stats structure
    Returns:0 in case of success or -1 in case of failure.

    Function: virDomainLookupByID

    virDomainPtr	virDomainLookupByID	(virConnectPtr conn, 
    int id)
    -

    Try to find a domain based on the hypervisor ID number

    -
    conn:pointer to the hypervisor connection
    id:the domain ID number
    Returns:a new domain object or NULL in case of failure. If the domain cannot be found, then VIR_ERR_NO_DOMAIN error is raised.

    Function: virDomainLookupByName

    virDomainPtr	virDomainLookupByName	(virConnectPtr conn, 
    const char * name)
    -

    Try to lookup a domain on the given hypervisor based on its name.

    -
    conn:pointer to the hypervisor connection
    name:name for the domain
    Returns:a new domain object or NULL in case of failure. If the domain cannot be found, then VIR_ERR_NO_DOMAIN error is raised.

    Function: virDomainLookupByUUID

    virDomainPtr	virDomainLookupByUUID	(virConnectPtr conn, 
    const unsigned char * uuid)
    -

    Try to lookup a domain on the given hypervisor based on its UUID.

    -
    conn:pointer to the hypervisor connection
    uuid:the raw UUID for the domain
    Returns:a new domain object or NULL in case of failure. If the domain cannot be found, then VIR_ERR_NO_DOMAIN error is raised.

    Function: virDomainLookupByUUIDString

    virDomainPtr	virDomainLookupByUUIDString	(virConnectPtr conn, 
    const char * uuidstr)
    -

    Try to lookup a domain on the given hypervisor based on its UUID.

    -
    conn:pointer to the hypervisor connection
    uuidstr:the string UUID for the domain
    Returns:a new domain object or NULL in case of failure. If the domain cannot be found, then VIR_ERR_NO_DOMAIN error is raised.

    Function: virDomainMigrate

    virDomainPtr	virDomainMigrate	(virDomainPtr domain, 
    virConnectPtr dconn,
    unsigned long flags,
    const char * dname,
    const char * uri,
    unsigned long bandwidth)
    -

    Migrate the domain object from its current host to the destination host given by dconn (a connection to the destination host). Flags may be one of more of the following: VIR_MIGRATE_LIVE Attempt a live migration. If a hypervisor supports renaming domains during migration, then you may set the dname parameter to the new name (otherwise it keeps the same name). If this is not supported by the hypervisor, dname must be NULL or else you will get an error. Since typically the two hypervisors connect directly to each other in order to perform the migration, you may need to specify a path from the source to the destination. This is the purpose of the uri parameter. If uri is NULL, then libvirt will try to find the best method. Uri may specify the hostname or IP address of the destination host as seen from the source. Or uri may be a URI giving transport, hostname, user, port, etc. in the usual form. Refer to driver documentation for the particular URIs supported. The maximum bandwidth (in Mbps) that will be used to do migration can be specified with the bandwidth parameter. If set to 0, libvirt will choose a suitable default. Some hypervisors do not support this feature and will return an error if bandwidth is not 0. To see which features are supported by the current hypervisor, see virConnectGetCapabilities, /capabilities/host/migration_features. There are many limitations on migration imposed by the underlying technology - for example it may not be possible to migrate between different processors even with the same architecture, or between different types of hypervisor.

    -
    domain:a domain object
    dconn:destination host (a connection object)
    flags:flags
    dname:(optional) rename domain to this at destination
    uri:(optional) dest hostname/URI as seen from the source host
    bandwidth:(optional) specify migration bandwidth limit in Mbps
    Returns:the new domain object if the migration was successful, or NULL in case of error. Note that the new domain object exists in the scope of the destination connection (dconn).
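
    A sketch of a live migration between two already-open connections (dname, uri and bandwidth are left at their defaults, as described above):

        #include <stdio.h>
        #include <libvirt/libvirt.h>

        /* Live-migrate a domain to the host behind dconn, keeping its name */
        static virDomainPtr migrate_live(virDomainPtr dom, virConnectPtr dconn)
        {
            virDomainPtr newdom = virDomainMigrate(dom, dconn, VIR_MIGRATE_LIVE,
                                                   NULL,   /* dname: keep the same name */
                                                   NULL,   /* uri: let libvirt pick */
                                                   0);     /* bandwidth: default */
            if (newdom == NULL)
                fprintf(stderr, "migration failed\n");
            return newdom;
        }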

    Function: virDomainPinVcpu

    int	virDomainPinVcpu		(virDomainPtr domain, 
    unsigned int vcpu,
    unsigned char * cpumap,
    int maplen)
    -

    Dynamically change the real CPUs which can be allocated to a virtual CPU. This function requires privileged access to the hypervisor.

    -
    domain:pointer to domain object, or NULL for Domain0
    vcpu:virtual CPU number
    cpumap:pointer to a bit map of real CPUs (in 8-bit bytes) (IN) Each bit set to 1 means that corresponding CPU is usable. Bytes are stored in little-endian order: CPU0-7, 8-15... In each byte, lowest CPU number is least significant bit.
    maplen:number of bytes in cpumap, from 1 up to size of CPU map in underlying virtualization system (Xen...). If maplen < size, missing bytes are set to zero. If maplen > size, failure code is returned.
    Returns:0 in case of success, -1 in case of failure.
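
    A sketch of the cpumap handling, using the VIR_USE_CPU macro documented earlier (the vCPU and physical CPU numbers are arbitrary examples):

        #include <string.h>
        #include <libvirt/libvirt.h>

        /* Pin vCPU 0 of a domain so it may only run on physical CPUs 0 and 1 */
        static int pin_vcpu0(virDomainPtr dom)
        {
            unsigned char cpumap[1];   /* one byte covers physical CPUs 0-7 */
            memset(cpumap, 0, sizeof(cpumap));
            VIR_USE_CPU(cpumap, 0);    /* allow CPU 0 */
            VIR_USE_CPU(cpumap, 1);    /* allow CPU 1 */
            return virDomainPinVcpu(dom, 0, cpumap, sizeof(cpumap));
        }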

    Function: virDomainReboot

    int	virDomainReboot			(virDomainPtr domain, 
    unsigned int flags)
    -

    Reboot a domain; the domain object is still usable thereafter, but the domain OS is being stopped for a restart. Note that the guest OS may ignore the request.

    -
    domain:a domain object
    flags:extra flags for the reboot operation, not used yet
    Returns:0 in case of success and -1 in case of failure.

    Function: virDomainRestore

    int	virDomainRestore		(virConnectPtr conn, 
    const char * from)
    -

    This method will restore a domain saved to disk by virDomainSave().

    -
    conn:pointer to the hypervisor connection
    from:path to the input file
    Returns:0 in case of success and -1 in case of failure.

    Function: virDomainResume

    int	virDomainResume			(virDomainPtr domain)
    -

    Resume a suspended domain; the process is restarted from the state where it was frozen by calling virDomainSuspend(). This function may require privileged access.

    -
    domain:a domain object
    Returns:0 in case of success and -1 in case of failure.

    Function: virDomainSave

    int	virDomainSave			(virDomainPtr domain, 
    const char * to)
    -

    This method will suspend a domain and save its memory contents to a file on disk. After the call, if successful, the domain is not listed as running anymore (this may be a problem). Use virDomainRestore() to restore a domain after saving.

    -
    domain:a domain object
    to:path for the output file
    Returns:0 in case of success and -1 in case of failure.
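
    A sketch of the save/restore round trip (the file path is a placeholder; remember it is interpreted on the host running the hypervisor):

        #include <libvirt/libvirt.h>

        /* Suspend a domain to disk and later restore it on the same connection */
        static int save_and_restore(virConnectPtr conn, virDomainPtr dom)
        {
            const char *img = "/var/lib/libvirt/save/demo.img";  /* example path */
            if (virDomainSave(dom, img) < 0)
                return -1;
            return virDomainRestore(conn, img);
        }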

    Function: virDomainSetAutostart

    int	virDomainSetAutostart		(virDomainPtr domain, 
    int autostart)
    -

    Configure the domain to be automatically started when the host machine boots.

    -
    domain:a domain object
    autostart:whether the domain should be automatically started 0 or 1
    Returns:-1 in case of error, 0 in case of success

    Function: virDomainSetMaxMemory

    int	virDomainSetMaxMemory		(virDomainPtr domain, 
    unsigned long memory)
    -

    Dynamically change the maximum amount of physical memory allocated to a domain. If domain is NULL, then this changes the amount of memory reserved for Domain0, i.e. the domain where the application runs. This function requires privileged access to the hypervisor.

    -
    domain:a domain object or NULL
    memory:the memory size in kilobytes
    Returns:0 in case of success and -1 in case of failure.

    Function: virDomainSetMemory

    int	virDomainSetMemory		(virDomainPtr domain, 
    unsigned long memory)
    -

    Dynamically change the target amount of physical memory allocated to a domain. If domain is NULL, then this changes the amount of memory reserved for Domain0, i.e. the domain where the application runs. This function may require privileged access to the hypervisor.

    -
    domain:a domain object or NULL
    memory:the memory size in kilobytes
    Returns:0 in case of success and -1 in case of failure.

    Function: virDomainSetSchedulerParameters

    int	virDomainSetSchedulerParameters	(virDomainPtr domain, 
    virSchedParameterPtr params,
    int nparams)
    -

    Change the scheduler parameters

    -
    domain:pointer to domain object
    params:pointer to scheduler parameter objects
    nparams:number of scheduler parameters (this value should be the same as or less than the nparams value returned by virDomainGetSchedulerType)
    Returns:-1 in case of error, 0 in case of success.

    Function: virDomainSetVcpus

    int	virDomainSetVcpus		(virDomainPtr domain, 
    unsigned int nvcpus)
    -

    Dynamically change the number of virtual CPUs used by the domain. Note that this call may fail if the underlying virtualization hypervisor does not support it or if growing the number is arbitrarily limited. This function requires privileged access to the hypervisor.

    -
    domain:pointer to domain object, or NULL for Domain0
    nvcpus:the new number of virtual CPUs for this domain
    Returns:0 in case of success, -1 in case of failure.

    Function: virDomainShutdown

    int	virDomainShutdown		(virDomainPtr domain)
    -

    Shutdown a domain; the domain object is still usable thereafter, but the domain OS is being stopped. Note that the guest OS may ignore the request. TODO: should we add an option for reboot, knowing it may not be doable in the general case?

    -
    domain:a domain object
    Returns:0 in case of success and -1 in case of failure.

    Function: virDomainSuspend

    int	virDomainSuspend		(virDomainPtr domain)
    -

    Suspends an active domain: the process is frozen without further access to CPU resources and I/O, but the memory used by the domain at the hypervisor level will stay allocated. Use virDomainResume() to reactivate the domain. This function may require privileged access.

    -
    domain:a domain object
    Returns:0 in case of success and -1 in case of failure.

    Function: virDomainUndefine

    int	virDomainUndefine		(virDomainPtr domain)
    -

    Undefine a domain; this does not stop the domain if it is running.

    -
    domain:pointer to a defined domain
    Returns:0 in case of success, -1 in case of error

    Function: virGetVersion

    int	virGetVersion			(unsigned long * libVer, 
    const char * type,
    unsigned long * typeVer)
    -

    Provides two pieces of information: @libVer is the version of the library, while @typeVer will be the version of the hypervisor type @type against which the library was compiled. If @type is NULL, "Xen" is assumed; if @type is unknown or not available, an error code will be returned and @typeVer will be 0.

    -
    libVer:return value for the library version (OUT)
    type:the type of connection/driver looked at
    typeVer:return value for the version of the hypervisor (OUT)
    Returns:-1 in case of failure, 0 otherwise, and values for @libVer and @typeVer have the format major * 1,000,000 + minor * 1,000 + release.
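
    For example, decoding the packed version numbers described above:

        #include <stdio.h>
        #include <libvirt/libvirt.h>

        /* Print library and hypervisor driver versions as major.minor.release */
        static void show_versions(void)
        {
            unsigned long libVer = 0, typeVer = 0;
            if (virGetVersion(&libVer, "Xen", &typeVer) == 0) {
                printf("libvirt %lu.%lu.%lu\n",
                       libVer / 1000000, (libVer / 1000) % 1000, libVer % 1000);
                printf("Xen driver %lu.%lu.%lu\n",
                       typeVer / 1000000, (typeVer / 1000) % 1000, typeVer % 1000);
            }
        }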

    Function: virInitialize

    int	virInitialize			(void)
    -

    Initialize the library. It's better to call this routine at startup in multithreaded applications to avoid potential race when initializing the library.

    -
    Returns:0 in case of success, -1 in case of error

    Function: virNetworkCreate

    int	virNetworkCreate		(virNetworkPtr network)
    -

    Create and start a defined network. If the call succeeds, the network moves from the defined to the running networks pool.

    -
    network:pointer to a defined network
    Returns:0 in case of success, -1 in case of error

    Function: virNetworkCreateXML

    virNetworkPtr	virNetworkCreateXML	(virConnectPtr conn, 
    const char * xmlDesc)
    -

    Create and start a new virtual network, based on an XML description similar to the one returned by virNetworkGetXMLDesc()

    -
    conn:pointer to the hypervisor connection
    xmlDesc:an XML description of the network
    Returns:a new network object or NULL in case of failure

    Function: virNetworkDefineXML

    virNetworkPtr	virNetworkDefineXML	(virConnectPtr conn, 
    const char * xml)
    -

    Define a network, but does not create it

    -
    conn:pointer to the hypervisor connection
    xml:the XML description for the network, preferably in UTF-8
    Returns:NULL in case of error, a pointer to the network otherwise

    Function: virNetworkDestroy

    int	virNetworkDestroy		(virNetworkPtr network)
    -

    Destroy the network object. The running instance is shut down if not down already and all resources used by it are given back to the hypervisor. The data structure is freed and should not be used thereafter if the call does not return an error. This function may require privileged access.

    -
    network:a network object
    Returns:0 in case of success and -1 in case of failure.

    Function: virNetworkFree

    int	virNetworkFree			(virNetworkPtr network)
    -

    Free the network object. The running instance is kept alive. The data structure is freed and should not be used thereafter.

    -
    network:a network object
    Returns:0 in case of success and -1 in case of failure.

    Function: virNetworkGetAutostart

    int	virNetworkGetAutostart		(virNetworkPtr network, 
    int * autostart)
    -

    Provides a boolean value indicating whether the network is configured to be automatically started when the host machine boots.

    -
    network:a network object
    autostart:the value returned
    Returns:-1 in case of error, 0 in case of success

    Function: virNetworkGetBridgeName

    char *	virNetworkGetBridgeName		(virNetworkPtr network)
    -

    Provides a bridge interface name to which a domain may connect a network interface in order to join the network.

    -
    network:a network object
    Returns:a 0 terminated interface name, or NULL in case of error. the caller must free() the returned value.

    Function: virNetworkGetConnect

    virConnectPtr	virNetworkGetConnect	(virNetworkPtr net)
    -

    Provides the connection pointer associated with a network. The reference counter on the connection is not increased by this call. WARNING: When writing libvirt bindings in other languages, do not use this function. Instead, store the connection and the network object together.

    -
    net:pointer to a network
    Returns:the virConnectPtr or NULL in case of failure.

    Function: virNetworkGetName

    const char *	virNetworkGetName	(virNetworkPtr network)
    -

    Get the public name for that network

    -
    network:a network object
    Returns:a pointer to the name or NULL. The string need not be deallocated; its lifetime will be the same as the network object.

    Function: virNetworkGetUUID

    int	virNetworkGetUUID		(virNetworkPtr network, 
    unsigned char * uuid)
    -

    Get the UUID for a network

    -
    network:a network object
    uuid:pointer to a VIR_UUID_BUFLEN bytes array
    Returns:-1 in case of error, 0 in case of success

    Function: virNetworkGetUUIDString

    int	virNetworkGetUUIDString		(virNetworkPtr network, 
    char * buf)
    -

    Get the UUID for a network as string. For more information about UUID see RFC4122.

    -
    network:a network object
    buf:pointer to a VIR_UUID_STRING_BUFLEN bytes array
    Returns:-1 in case of error, 0 in case of success

    Function: virNetworkGetXMLDesc

    char *	virNetworkGetXMLDesc		(virNetworkPtr network, 
    int flags)
    -

    Provide an XML description of the network. The description may be reused later to relaunch the network with virNetworkCreateXML().

    -
    network:a network object
    flags:an OR'ed set of extraction flags, not used yet
    Returns:a 0 terminated UTF-8 encoded XML instance, or NULL in case of error. the caller must free() the returned value.

    Function: virNetworkLookupByName

    virNetworkPtr	virNetworkLookupByName	(virConnectPtr conn, 
    const char * name)
    -

    Try to lookup a network on the given hypervisor based on its name.

    -
    conn:pointer to the hypervisor connection
    name:name for the network
    Returns:a new network object or NULL in case of failure. If the network cannot be found, then VIR_ERR_NO_NETWORK error is raised.

    Function: virNetworkLookupByUUID

    virNetworkPtr	virNetworkLookupByUUID	(virConnectPtr conn, 
    const unsigned char * uuid)
    -

    Try to lookup a network on the given hypervisor based on its UUID.

    -
    conn:pointer to the hypervisor connection
    uuid:the raw UUID for the network
    Returns:a new network object or NULL in case of failure. If the network cannot be found, then VIR_ERR_NO_NETWORK error is raised.

    Function: virNetworkLookupByUUIDString

    virNetworkPtr	virNetworkLookupByUUIDString	(virConnectPtr conn, 
    const char * uuidstr)
    -

    Try to lookup a network on the given hypervisor based on its UUID.

    -
    conn:pointer to the hypervisor connection
    uuidstr:the string UUID for the network
    Returns:a new network object or NULL in case of failure. If the network cannot be found, then VIR_ERR_NO_NETWORK error is raised.

    Function: virNetworkSetAutostart

    int	virNetworkSetAutostart		(virNetworkPtr network, 
    int autostart)
    -

    Configure the network to be automatically started when the host machine boots.

    -
    network:a network object
    autostart:whether the network should be automatically started 0 or 1
    Returns:-1 in case of error, 0 in case of success

    Function: virNetworkUndefine

    int	virNetworkUndefine		(virNetworkPtr network)
    -

    Undefine a network; this does not stop the network if it is running.

    -
    network:pointer to a defined network
    Returns:0 in case of success, -1 in case of error

    Function: virNodeGetCellsFreeMemory

    int	virNodeGetCellsFreeMemory	(virConnectPtr conn, 
    unsigned long long * freeMems,
    int startCell,
    int maxCells)
    -

    This call returns the amount of free memory in one or more NUMA cells. The @freeMems array must be allocated by the caller and will be filled with the amount of free memory in kilobytes for each cell requested, starting with startCell (in freeMems[0]), up to either (startCell + maxCells), or the number of additional cells in the node, whichever is smaller.

    -
    conn:pointer to the hypervisor connection
    freeMems:pointer to the array of unsigned long long
    startCell:index of first cell to return freeMems info on.
    maxCells:Maximum number of cells for which freeMems information can be returned.
    Returns:the number of entries filled in freeMems, or -1 in case of error.
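
    A sketch that queries the first few NUMA cells (the cell count of 4 is an arbitrary example; the return value says how many entries were actually filled):

        #include <stdio.h>
        #include <libvirt/libvirt.h>

        /* Print free memory per NUMA cell, starting at cell 0 */
        static void show_cells(virConnectPtr conn)
        {
            unsigned long long freeMems[4];
            int n = virNodeGetCellsFreeMemory(conn, freeMems, 0, 4);
            for (int i = 0; i < n; i++)
                printf("cell %d: %llu KB free\n", i, freeMems[i]);
        }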

    Function: virNodeGetFreeMemory

    unsigned long long	virNodeGetFreeMemory	(virConnectPtr conn)
    -

    provides the free memory available on the Node

    -
    conn:pointer to the hypervisor connection
    Returns:the available free memory in kilobytes or 0 in case of error

    Function: virNodeGetInfo

    int	virNodeGetInfo			(virConnectPtr conn, 
    virNodeInfoPtr info)
    -

    Extract hardware information about the node.

    -
    conn:pointer to the hypervisor connection
    info:pointer to a virNodeInfo structure allocated by the user
    Returns:0 in case of success and -1 in case of failure.

    Function: virStoragePoolBuild

    int	virStoragePoolBuild		(virStoragePoolPtr pool, 
    unsigned int flags)
    -

    Build the underlying storage pool

    -
    pool:pointer to storage pool
    flags:future flags, use 0 for now
    Returns:0 on success, or -1 upon failure

    Function: virStoragePoolCreate

    int	virStoragePoolCreate		(virStoragePoolPtr pool, 
    unsigned int flags)
    -

    Starts an inactive storage pool

    -
    pool:pointer to storage pool
    flags:future flags, use 0 for now
    Returns:0 on success, or -1 if it could not be started

    Function: virStoragePoolCreateXML

    virStoragePoolPtr	virStoragePoolCreateXML	(virConnectPtr conn, 
    const char * xmlDesc,
    unsigned int flags)
    -

    Create a new storage pool based on its XML description. The pool is not persistent, so its definition will disappear when it is destroyed, or if the host is restarted

    -
    conn:pointer to hypervisor connection
    xmlDesc:XML description for new pool
    flags:future flags, use 0 for now
    Returns:a virStoragePoolPtr object, or NULL if creation failed
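
    A sketch that creates a transient directory-backed pool (the XML shown is only an illustrative example of a pool description; see the storage format documentation for the full schema):

        #include <stdio.h>
        #include <libvirt/libvirt.h>

        static const char *pool_xml =
            "<pool type='dir'>"
            "  <name>scratch</name>"
            "  <target><path>/var/lib/libvirt/scratch</path></target>"
            "</pool>";

        /* Create and start a transient storage pool from an XML description */
        static virStoragePoolPtr make_pool(virConnectPtr conn)
        {
            virStoragePoolPtr pool = virStoragePoolCreateXML(conn, pool_xml, 0);
            if (pool == NULL)
                fprintf(stderr, "pool creation failed\n");
            return pool;
        }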

    Function: virStoragePoolDefineXML

    virStoragePoolPtr	virStoragePoolDefineXML	(virConnectPtr conn, 
    const char * xml,
    unsigned int flags)
    -

    Define a new inactive storage pool based on its XML description. The pool is persistent, until explicitly undefined.

    -
    conn:pointer to hypervisor connection
    xml:XML description for new pool
    flags:future flags, use 0 for now
    Returns:a virStoragePoolPtr object, or NULL if creation failed

    Function: virStoragePoolDelete

    int	virStoragePoolDelete		(virStoragePoolPtr pool, 
    unsigned int flags)
    -

    Delete the underlying pool resources. This is a non-recoverable operation. The virStoragePoolPtr object itself is not free'd.

    -
    pool:pointer to storage pool
    flags:flags for obliteration process
    Returns:0 on success, or -1 if it could not be obliterated

    Function: virStoragePoolDestroy

    int	virStoragePoolDestroy		(virStoragePoolPtr pool)
    -

    Destroy an active storage pool. This will deactivate the pool on the host, but keep any persistent config associated with it. If it has a persistent config it can later be restarted with virStoragePoolCreate(). This does not free the associated virStoragePoolPtr object.

    -
    pool:pointer to storage pool
    Returns:0 on success, or -1 if it could not be destroyed

    Function: virStoragePoolFree

    int	virStoragePoolFree		(virStoragePoolPtr pool)
    -

    Free a storage pool object, releasing all memory associated with it. Does not change the state of the pool on the host.

    -
    pool:pointer to storage pool
    Returns:0 on success, or -1 if it could not be free'd.

    Function: virStoragePoolGetAutostart

    int	virStoragePoolGetAutostart	(virStoragePoolPtr pool, 
    int * autostart)
    -

    Fetches the value of the autostart flag, which determines whether the pool is automatically started at boot time

    -
    pool:pointer to storage pool
    autostart:location in which to store autostart flag
    Returns:0 on success, -1 on failure

    Function: virStoragePoolGetConnect

    virConnectPtr	virStoragePoolGetConnect	(virStoragePoolPtr pool)
    -

    Provides the connection pointer associated with a storage pool. The reference counter on the connection is not increased by this call. WARNING: When writing libvirt bindings in other languages, do not use this function. Instead, store the connection and the pool object together.

    -
    pool:pointer to a pool
    Returns:the virConnectPtr or NULL in case of failure.

    Function: virStoragePoolGetInfo

    int	virStoragePoolGetInfo		(virStoragePoolPtr pool, 
    virStoragePoolInfoPtr info)
    -

    Get volatile information about the storage pool such as free space / usage summary

    -
    pool:pointer to storage pool
    info:pointer at which to store info
    Returns:0 on success, or -1 on failure.

    Function: virStoragePoolGetName

    const char *	virStoragePoolGetName	(virStoragePoolPtr pool)
    -

    Fetch the locally unique name of the storage pool

    -
    pool:pointer to storage pool
    Returns:the name of the pool, or NULL on error

    Function: virStoragePoolGetUUID

    int	virStoragePoolGetUUID		(virStoragePoolPtr pool, 
    unsigned char * uuid)
    -

    Fetch the globally unique ID of the storage pool

    -
    pool:pointer to storage pool
    uuid:buffer of VIR_UUID_BUFLEN bytes in size
    Returns:0 on success, or -1 on error;

    Function: virStoragePoolGetUUIDString

    int	virStoragePoolGetUUIDString	(virStoragePoolPtr pool, 
    char * buf)
    -

    Fetch the globally unique ID of the storage pool as a string

    -
    pool:pointer to storage pool
    buf:buffer of VIR_UUID_STRING_BUFLEN bytes in size
    Returns:0 on success, or -1 on error;

    Function: virStoragePoolGetXMLDesc

    char *	virStoragePoolGetXMLDesc	(virStoragePoolPtr pool, 
    unsigned int flags)
    -

    Fetch an XML document describing all aspects of the storage pool. This is suitable for later feeding back into the virStoragePoolCreateXML method.

    -
    pool:pointer to storage pool
    flags:flags for XML format options (set of virDomainXMLFlags)
    Returns:a XML document, or NULL on error

    Function: virStoragePoolListVolumes

    int	virStoragePoolListVolumes	(virStoragePoolPtr pool, 
    char ** const names,
    int maxnames)
    -

    Fetch list of storage volume names, limiting to at most maxnames.

    -
    pool:pointer to storage pool
    names:array in which to store volume names
    maxnames:size of names array
    Returns:the number of names fetched, or -1 on error

    Function: virStoragePoolLookupByName

    virStoragePoolPtr	virStoragePoolLookupByName	(virConnectPtr conn, 
    const char * name)
    -

    Fetch a storage pool based on its unique name

    -
    conn:pointer to hypervisor connection
    name:name of pool to fetch
    Returns:a virStoragePoolPtr object, or NULL if no matching pool is found

    Function: virStoragePoolLookupByUUID

    virStoragePoolPtr	virStoragePoolLookupByUUID	(virConnectPtr conn, 
    const unsigned char * uuid)
    -

    Fetch a storage pool based on its globally unique id

    -
    conn:pointer to hypervisor connection
    uuid:globally unique id of pool to fetch
    Returns:a virStoragePoolPtr object, or NULL if no matching pool is found

    Function: virStoragePoolLookupByUUIDString

    virStoragePoolPtr	virStoragePoolLookupByUUIDString	(virConnectPtr conn, 
    const char * uuidstr)
    -

    Fetch a storage pool based on its globally unique id

    -
    conn:pointer to hypervisor connection
    uuidstr:globally unique id of pool to fetch
    Returns:a virStoragePoolPtr object, or NULL if no matching pool is found

    Function: virStoragePoolLookupByVolume

    virStoragePoolPtr	virStoragePoolLookupByVolume	(virStorageVolPtr vol)
    -

    Fetch a storage pool which contains a particular volume

    -
    vol:pointer to storage volume
    Returns:a virStoragePoolPtr object, or NULL if no matching pool is found

    Function: virStoragePoolNumOfVolumes

    int	virStoragePoolNumOfVolumes	(virStoragePoolPtr pool)
    -

    Fetch the number of storage volumes within a pool

    -
    pool:pointer to storage pool
    Returns:the number of storage volumes, or -1 on failure

    Function: virStoragePoolRefresh

    int	virStoragePoolRefresh		(virStoragePoolPtr pool, 
    unsigned int flags)
    -

    Request that the pool refresh its list of volumes. This may involve communicating with a remote server, and/or initializing new devices at the OS layer

    -
    pool:pointer to storage pool
    flags:flags to control refresh behaviour (currently unused, use 0)
    Returns:0 if the volume list was refreshed, -1 on failure

    Function: virStoragePoolSetAutostart

    int	virStoragePoolSetAutostart	(virStoragePoolPtr pool, 
    int autostart)
    -

    Sets the autostart flag

    -
    pool:pointer to storage pool
    autostart:new flag setting
    Returns:0 on success, -1 on failure

    Function: virStoragePoolUndefine

    int	virStoragePoolUndefine		(virStoragePoolPtr pool)
    -

    Undefine an inactive storage pool

    -
    pool:pointer to storage pool
    Returns:0 on success, or -1 on failure

    Function: virStorageVolCreateXML

    virStorageVolPtr	virStorageVolCreateXML	(virStoragePoolPtr pool, 
    const char * xmldesc,
    unsigned int flags)
    -

    Create a storage volume within a pool based on an XML description. Not all pools support creation of volumes

    -
    pool:pointer to storage pool
    xmldesc:description of volume to create
    flags:flags for creation (unused, pass 0)
    Returns:the storage volume, or NULL on error

    Function: virStorageVolDelete

    int	virStorageVolDelete		(virStorageVolPtr vol, 
    unsigned int flags)
    -

    Delete the storage volume from the pool

    -
    vol:pointer to storage volume
    flags:future flags, use 0 for now
    Returns:0 on success, or -1 on error

    Function: virStorageVolFree

    int	virStorageVolFree		(virStorageVolPtr vol)
    -

    Release the storage volume handle. The underlying storage volume continues to exist

    -
    vol:pointer to storage volume
    Returns:0 on success, or -1 on error

    Function: virStorageVolGetConnect

    virConnectPtr	virStorageVolGetConnect	(virStorageVolPtr vol)
    -

    Provides the connection pointer associated with a storage volume. The reference counter on the connection is not increased by this call. WARNING: When writing libvirt bindings in other languages, do not use this function. Instead, store the connection and the volume object together.

    -
    vol:pointer to a storage volume
    Returns:the virConnectPtr or NULL in case of failure.

    Function: virStorageVolGetInfo

    int	virStorageVolGetInfo		(virStorageVolPtr vol, 
    virStorageVolInfoPtr info)
    -

    Fetches volatile information about the storage volume such as its current allocation

    -
    vol:pointer to storage volume
    info:pointer at which to store info
    Returns:0 on success, or -1 on failure

    Function: virStorageVolGetKey

    const char *	virStorageVolGetKey	(virStorageVolPtr vol)
    -

    Fetch the storage volume key. This is globally unique, so the same volume will have the same key no matter what host it is accessed from

    -
    vol:pointer to storage volume
    Returns:the volume key, or NULL on error

    Function: virStorageVolGetName

    const char *	virStorageVolGetName	(virStorageVolPtr vol)
    -

    Fetch the storage volume name. This is unique within the scope of a pool

    -
    vol:pointer to storage volume
    Returns:the volume name, or NULL on error

    Function: virStorageVolGetPath

    char *	virStorageVolGetPath		(virStorageVolPtr vol)
    -

    Fetch the storage volume path. Depending on the pool configuration this is either persistent across hosts, or dynamically assigned at pool startup. Consult pool documentation for information on getting the persistent naming

    -
    vol:pointer to storage volume
    Returns:the storage volume path, or NULL on error

    Function: virStorageVolGetXMLDesc

    char *	virStorageVolGetXMLDesc		(virStorageVolPtr vol, 
    unsigned int flags)
    -

    Fetch an XML document describing all aspects of the storage volume

    -
    vol:pointer to storage volume
    flags:flags for XML generation (unused, pass 0)
    Returns:the XML document, or NULL on error

    Function: virStorageVolLookupByKey

    virStorageVolPtr	virStorageVolLookupByKey	(virConnectPtr conn, 
    const char * key)
    -

    Fetch a pointer to a storage volume based on its globally unique key

    -
    conn:pointer to hypervisor connection
    key:globally unique key
    Returns:a storage volume, or NULL if not found / error

    Function: virStorageVolLookupByName

    virStorageVolPtr	virStorageVolLookupByName	(virStoragePoolPtr pool, 
    const char * name)
    -

    Fetch a pointer to a storage volume based on its name within a pool

    -
    pool:pointer to storage pool
    name:name of storage volume
    Returns:a storage volume, or NULL if not found / error

    Function: virStorageVolLookupByPath

    virStorageVolPtr	virStorageVolLookupByPath	(virConnectPtr conn, 
    const char * path)
    -

    Fetch a pointer to a storage volume based on its locally (host) unique path

    -
    conn:pointer to hypervisor connection
    path:locally unique path
    Returns:a storage volume, or NULL if not found / error
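
    A minimal sketch resolving a host path back to a volume and printing its pool-local name and global key; the path shown in the usage comment is a hypothetical example.

        #include <stdio.h>
        #include <libvirt/libvirt.h>

        /* Look up a volume by its host path and describe it. */
        static void describe_volume(virConnectPtr conn, const char *path)
        {
            virStorageVolPtr vol = virStorageVolLookupByPath(conn, path);
            if (vol == NULL) {
                fprintf(stderr, "no volume found for %s\n", path);
                return;
            }
            printf("name: %s key: %s\n",
                   virStorageVolGetName(vol), virStorageVolGetKey(vol));
            virStorageVolFree(vol);
        }

        /* e.g. describe_volume(conn, "/var/lib/libvirt/images/demo.img"); */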

    +

    virStorageVolDeleteFlags

    enum virStorageVolDeleteFlags {
    +
    VIR_STORAGE_VOL_DELETE_NORMAL = 0 : Delete metadata only (fast)
    VIR_STORAGE_VOL_DELETE_ZEROED = 1 : Clear all data to zeros (slow)
    }
    +

    virStorageVolInfo

    struct virStorageVolInfo{
    +
    int type : virStorageVolType flags
    unsigned long long capacity : Logical size bytes
    unsigned long long allocation : Current allocation bytes
    +}
    +

    virStorageVolType

    enum virStorageVolType {
    +
    VIR_STORAGE_VOL_FILE = 0 : Regular file based volumes
    VIR_STORAGE_VOL_BLOCK = 1 : Block based volumes
    }
    +

    virVcpuInfo

    struct virVcpuInfo{
    +
    unsigned int number : virtual CPU number
    int state : value from virVcpuState
    unsigned long long cpuTime : CPU time used, in nanoseconds
    int cpu : real CPU number, or -1 if offline
    +}
    +

    virVcpuState

    enum virVcpuState {
    +
    VIR_VCPU_OFFLINE = 0 : the virtual CPU is offline
    VIR_VCPU_RUNNING = 1 : the virtual CPU is running
    VIR_VCPU_BLOCKED = 2 : the virtual CPU is blocked on resource
    }
    +

    Functions

    virConnectAuthCallbackPtr

    typedef int	(*virConnectAuthCallbackPtr)	(virConnectCredentialPtr cred, 
    unsigned int ncred,
    void * cbdata) +

    cred:
    ncred:
    cbdata:
    Returns:

    virConnectClose

    int	virConnectClose			(virConnectPtr conn)
    +

    This function closes the connection to the Hypervisor. This should not be called if further interaction with the Hypervisor is needed, especially if there is a running domain which needs further monitoring by the application.

    conn:pointer to the hypervisor connection
    Returns:0 in case of success or -1 in case of error.

    virConnectGetCapabilities

    char *	virConnectGetCapabilities	(virConnectPtr conn)
    +

    Provides capabilities of the hypervisor / driver.

    conn:pointer to the hypervisor connection
    Returns:NULL in case of error, or an XML string defining the capabilities. The client must free the returned string after use.

    virConnectGetHostname

    char *	virConnectGetHostname		(virConnectPtr conn)
    +

    This returns the system hostname on which the hypervisor is running (the result of the gethostname(2) system call). If we are connected to a remote system, then this returns the hostname of the remote system.

    conn:pointer to a hypervisor connection
    Returns:the hostname which must be freed by the caller, or NULL if there was an error.

    virConnectGetMaxVcpus

    int	virConnectGetMaxVcpus		(virConnectPtr conn, 
    const char * type)
    +

    Provides the maximum number of virtual CPUs supported for a guest VM of a specific type. The 'type' parameter here corresponds to the 'type' attribute in the <domain> element of the XML.

    conn:pointer to the hypervisor connection
    type:value of the 'type' attribute in the <domain> element
    Returns:the maximum number of virtual CPUs, or -1 in case of error.

    virConnectGetType

    const char *	virConnectGetType	(virConnectPtr conn)
    +

    Get the name of the Hypervisor software used.

    conn:pointer to the hypervisor connection
    Returns:NULL in case of error, a static zero terminated string otherwise. See also: http://www.redhat.com/archives/libvir-list/2007-February/msg00096.html

    virConnectGetURI

    char *	virConnectGetURI		(virConnectPtr conn)
    +

    This returns the URI (name) of the hypervisor connection. Normally this is the same as or similar to the string passed to the virConnectOpen/virConnectOpenReadOnly call, but the driver may make the URI canonical. If name == NULL was passed to virConnectOpen, then the driver will return a non-NULL URI which can be used to connect to the same hypervisor later.

    conn:pointer to a hypervisor connection
    Returns:the URI string which must be freed by the caller, or NULL if there was an error.

    virConnectGetVersion

    int	virConnectGetVersion		(virConnectPtr conn, 
    unsigned long * hvVer)
    +

    Get the version level of the Hypervisor running. This may work only with hypervisor call, i.e. with privileged access to the hypervisor, not with a Read-Only connection.

    conn:pointer to the hypervisor connection
    hvVer:return value for the version of the running hypervisor (OUT)
    Returns:-1 in case of error, 0 otherwise. If the version cannot be extracted due to missing capabilities, 0 is returned and @hvVer is 0; otherwise the @hvVer value is major * 1,000,000 + minor * 1,000 + release
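
    A minimal sketch decoding the packed version number using the formula above; conn is assumed to be an open connection.

        #include <stdio.h>
        #include <libvirt/libvirt.h>

        /* Print the hypervisor version as major.minor.release. */
        static void show_hv_version(virConnectPtr conn)
        {
            unsigned long hvVer;

            if (virConnectGetVersion(conn, &hvVer) < 0)
                return;
            printf("%s %lu.%lu.%lu\n", virConnectGetType(conn),
                   hvVer / 1000000, (hvVer / 1000) % 1000, hvVer % 1000);
        }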

    virConnectListDefinedDomains

    int	virConnectListDefinedDomains	(virConnectPtr conn, 
    char ** const names,
    int maxnames)
    +

    list the defined but inactive domains, stores the pointers to the names in @names

    conn:pointer to the hypervisor connection
    names:pointer to an array to store the names
    maxnames:size of the array
    Returns:the number of names provided in the array or -1 in case of error

    virConnectListDefinedNetworks

    int	virConnectListDefinedNetworks	(virConnectPtr conn, 
    char ** const names,
    int maxnames)
    +

    list the inactive networks, stores the pointers to the names in @names

    conn:pointer to the hypervisor connection
    names:pointer to an array to store the names
    maxnames:size of the array
    Returns:the number of names provided in the array or -1 in case of error

    virConnectListDefinedStoragePools

    int	virConnectListDefinedStoragePools	(virConnectPtr conn, 
    char ** const names,
    int maxnames)
    +

    Provides the list of names of inactive storage pools up to maxnames. If there are more than maxnames, the remaining names will be silently ignored.

    conn:pointer to hypervisor connection
    names:array of char * to fill with pool names (allocated by caller)
    maxnames:size of the names array
    Returns:0 on success, -1 on error

    virConnectListDomains

    int	virConnectListDomains		(virConnectPtr conn, 
    int * ids,
    int maxids)
    +

    Collect the list of active domains, and store their IDs in @ids

    conn:pointer to the hypervisor connection
    ids:array to collect the list of IDs of active domains
    maxids:size of @ids
    Returns:the number of domains found or -1 in case of error
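
    A minimal sketch of the usual pattern, sizing the array with virConnectNumOfDomains before calling virConnectListDomains.

        #include <stdio.h>
        #include <stdlib.h>
        #include <libvirt/libvirt.h>

        /* Print the IDs of all active domains; 'conn' is assumed to be open. */
        static void list_active_domains(virConnectPtr conn)
        {
            int ndom = virConnectNumOfDomains(conn);
            if (ndom < 0)
                return;

            int *ids = calloc(ndom, sizeof(int));
            if (ids == NULL)
                return;

            int got = virConnectListDomains(conn, ids, ndom);
            for (int i = 0; i < got; i++)
                printf("active domain id: %d\n", ids[i]);
            free(ids);
        }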

    virConnectListNetworks

    int	virConnectListNetworks		(virConnectPtr conn, 
    char ** const names,
    int maxnames)
    +

    Collect the list of active networks, and store their names in @names

    conn:pointer to the hypervisor connection
    names:array to collect the list of names of active networks
    maxnames:size of @names
    Returns:the number of networks found or -1 in case of error

    virConnectListStoragePools

    int	virConnectListStoragePools	(virConnectPtr conn, 
    char ** const names,
    int maxnames)
    +

    Provides the list of names of active storage pools up to maxnames. If there are more than maxnames, the remaining names will be silently ignored.

    conn:pointer to hypervisor connection
    names:array of char * to fill with pool names (allocated by caller)
    maxnames:size of the names array
    Returns:0 on success, -1 on error

    virConnectNumOfDefinedDomains

    int	virConnectNumOfDefinedDomains	(virConnectPtr conn)
    +

    Provides the number of defined but inactive domains.

    conn:pointer to the hypervisor connection
    Returns:the number of domains found or -1 in case of error

    virConnectNumOfDefinedNetworks

    int	virConnectNumOfDefinedNetworks	(virConnectPtr conn)
    +

    Provides the number of inactive networks.

    conn:pointer to the hypervisor connection
    Returns:the number of networks found or -1 in case of error

    virConnectNumOfDefinedStoragePools

    int	virConnectNumOfDefinedStoragePools	(virConnectPtr conn)
    +

    Provides the number of inactive storage pools

    conn:pointer to hypervisor connection
    Returns:the number of pools found, or -1 on error

    virConnectNumOfDomains

    int	virConnectNumOfDomains		(virConnectPtr conn)
    +

    Provides the number of active domains.

    conn:pointer to the hypervisor connection
    Returns:the number of domains found or -1 in case of error

    virConnectNumOfNetworks

    int	virConnectNumOfNetworks		(virConnectPtr conn)
    +

    Provides the number of active networks.

    conn:pointer to the hypervisor connection
    Returns:the number of networks found or -1 in case of error

    virConnectNumOfStoragePools

    int	virConnectNumOfStoragePools	(virConnectPtr conn)
    +

    Provides the number of active storage pools

    conn:pointer to hypervisor connection
    Returns:the number of pools found, or -1 on error

    virConnectOpen

    virConnectPtr	virConnectOpen		(const char * name)
    +

    This function should be called first to get a connection to the Hypervisor and xen store

    name:URI of the hypervisor
    Returns:a pointer to the hypervisor connection or NULL in case of error URIs are documented at http://libvirt.org/uri.html
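
    A minimal open/close sketch; "qemu:///system" is just one example URI (see the URI documentation linked above).

        #include <stdio.h>
        #include <libvirt/libvirt.h>

        int main(void)
        {
            virConnectPtr conn = virConnectOpen("qemu:///system");
            if (conn == NULL) {
                fprintf(stderr, "failed to open connection\n");
                return 1;
            }
            printf("connected to %s\n", virConnectGetType(conn));
            virConnectClose(conn);
            return 0;
        }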

    virConnectOpenAuth

    virConnectPtr	virConnectOpenAuth	(const char * name, 
    virConnectAuthPtr auth,
    int flags)
    +

    This function should be called first to get a connection to the Hypervisor. If necessary, authentication will be performed fetching credentials via the callback

    name:URI of the hypervisor
    auth:Authenticate callback parameters
    flags:Open flags
    Returns:a pointer to the hypervisor connection or NULL in case of error URIs are documented at http://libvirt.org/uri.html

    virConnectOpenReadOnly

    virConnectPtr	virConnectOpenReadOnly	(const char * name)
    +

    This function should be called first to get a restricted, read-only connection to the library functionality. The set of usable APIs is then restricted to the methods available for monitoring domains.

    name:URI of the hypervisor
    Returns:a pointer to the hypervisor connection or NULL in case of error URIs are documented at http://libvirt.org/uri.html

    virDomainAttachDevice

    int	virDomainAttachDevice		(virDomainPtr domain, 
    const char * xml)
    +

    Create a virtual device attachment to backend.

    domain:pointer to domain object
    xml:pointer to XML description of one device
    Returns:0 in case of success, -1 in case of failure.

    virDomainBlockStats

    int	virDomainBlockStats		(virDomainPtr dom, 
    const char * path,
    virDomainBlockStatsPtr stats,
    size_t size)
    +

    This function returns block device (disk) stats for block devices attached to the domain. The path parameter is the name of the block device. Get this by calling virDomainGetXMLDesc and finding the <target dev='...'> attribute within //domain/devices/disk. (For example, "xvda"). Domains may have more than one block device. To get stats for each you should make multiple calls to this function. Individual fields within the stats structure may be returned as -1, which indicates that the hypervisor does not support that particular statistic.

    dom:pointer to the domain object
    path:path to the block device
    stats:block device stats (returned)
    size:size of stats structure
    Returns:0 in case of success or -1 in case of failure.
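
    A minimal sketch; "xvda" is the example device name used in the description above and must match a <target dev='...'/> of the domain actually being queried.

        #include <stdio.h>
        #include <libvirt/libvirt.h>

        /* Print read/write request counters for one disk of a domain. */
        static void show_block_stats(virDomainPtr dom)
        {
            virDomainBlockStatsStruct stats;

            if (virDomainBlockStats(dom, "xvda", &stats, sizeof(stats)) < 0)
                return;
            printf("rd_req=%lld wr_req=%lld\n", stats.rd_req, stats.wr_req);
        }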

    virDomainCoreDump

    int	virDomainCoreDump		(virDomainPtr domain, 
    const char * to,
    int flags)
    +

    This method will dump the core of a domain on a given file for analysis. Note that for remote Xen Daemon the file path will be interpreted in the remote host.

    domain:a domain object
    to:path for the core file
    flags:extra flags, currently unused
    Returns:0 in case of success and -1 in case of failure.

    virDomainCreate

    int	virDomainCreate			(virDomainPtr domain)
    +

    Launch a defined domain. If the call succeeds, the domain moves from the defined to the running domains pool.

    domain:pointer to a defined domain
    Returns:0 in case of success, -1 in case of error

    virDomainCreateLinux

    virDomainPtr	virDomainCreateLinux	(virConnectPtr conn, 
    const char * xmlDesc,
    unsigned int flags)
    +

    Launch a new Linux guest domain, based on an XML description similar to the one returned by virDomainGetXMLDesc(). This function may require privileged access to the hypervisor.

    conn:pointer to the hypervisor connection
    xmlDesc:string containing an XML description of the domain
    flags:an optional set of virDomainFlags
    Returns:a new domain object or NULL in case of failure

    virDomainDefineXML

    virDomainPtr	virDomainDefineXML	(virConnectPtr conn, 
    const char * xml)
    +

    Defines a domain, but does not start it

    conn:pointer to the hypervisor connection
    xml:the XML description for the domain, preferably in UTF-8
    Returns:NULL in case of error, a pointer to the domain otherwise

    virDomainDestroy

    int	virDomainDestroy		(virDomainPtr domain)
    +

    Destroy the domain object. The running instance is shut down if not down already and all resources used by it are given back to the hypervisor. The data structure is freed and should not be used thereafter if the call does not return an error. This function may require privileged access

    domain:a domain object
    Returns:0 in case of success and -1 in case of failure.

    virDomainDetachDevice

    int	virDomainDetachDevice		(virDomainPtr domain, 
    const char * xml)
    +

    Destroy a virtual device attachment to backend.

    domain:pointer to domain object
    xml:pointer to XML description of one device
    Returns:0 in case of success, -1 in case of failure.

    virDomainFree

    int	virDomainFree			(virDomainPtr domain)
    +

    Free the domain object. The running instance is kept alive. The data structure is freed and should not be used thereafter.

    domain:a domain object
    Returns:0 in case of success and -1 in case of failure.

    virDomainGetAutostart

    int	virDomainGetAutostart		(virDomainPtr domain, 
    int * autostart)
    +

    Provides a boolean value indicating whether the domain is configured to be automatically started when the host machine boots.

    domain:a domain object
    autostart:the value returned
    Returns:-1 in case of error, 0 in case of success

    virDomainGetConnect

    virConnectPtr	virDomainGetConnect	(virDomainPtr dom)
    +

    Provides the connection pointer associated with a domain. The reference counter on the connection is not increased by this call. WARNING: When writing libvirt bindings in other languages, do not use this function. Instead, store the connection and the domain object together.

    dom:pointer to a domain
    Returns:the virConnectPtr or NULL in case of failure.

    virDomainGetID

    unsigned int	virDomainGetID		(virDomainPtr domain)
    +

    Get the hypervisor ID number for the domain

    domain:a domain object
    Returns:the domain ID number or (unsigned int) -1 in case of error

    virDomainGetInfo

    int	virDomainGetInfo		(virDomainPtr domain, 
    virDomainInfoPtr info)
    +

    Extract information about a domain. Note that if the connection used to get the domain is limited only a partial set of the information can be extracted.

    domain:a domain object
    info:pointer to a virDomainInfo structure allocated by the user
    Returns:0 in case of success and -1 in case of failure.
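
    A minimal sketch dumping the virDomainInfo fields for a domain.

        #include <stdio.h>
        #include <libvirt/libvirt.h>

        /* Print the basic runtime information of a domain. */
        static void show_domain_info(virDomainPtr dom)
        {
            virDomainInfo info;

            if (virDomainGetInfo(dom, &info) < 0)
                return;
            printf("%s: state=%d vcpus=%d memory=%lu KiB cpuTime=%llu ns\n",
                   virDomainGetName(dom), (int)info.state, (int)info.nrVirtCpu,
                   info.memory, info.cpuTime);
        }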

    virDomainGetMaxMemory

    unsigned long	virDomainGetMaxMemory	(virDomainPtr domain)
    +

    Retrieve the maximum amount of physical memory allocated to a domain. If domain is NULL, then this gets the amount of memory reserved for Domain0, i.e. the domain where the application runs.

    domain:a domain object or NULL
    Returns:the memory size in kilobytes or 0 in case of error.

    virDomainGetMaxVcpus

    int	virDomainGetMaxVcpus		(virDomainPtr domain)
    +

    Provides the maximum number of virtual CPUs supported for the guest VM. If the guest is inactive, this is basically the same as virConnectGetMaxVcpus. If the guest is running this will reflect the maximum number of virtual CPUs the guest was booted with.

    domain:pointer to domain object
    Returns:the maximum number of virtual CPUs, or -1 in case of error.

    virDomainGetName

    const char *	virDomainGetName	(virDomainPtr domain)
    +

    Get the public name for that domain

    domain:a domain object
    Returns:a pointer to the name, or NULL. The string need not be deallocated; its lifetime is the same as that of the domain object.

    virDomainGetOSType

    char *	virDomainGetOSType		(virDomainPtr domain)
    +

    Get the type of the domain's operating system.

    domain:a domain object
    Returns:the new string or NULL in case of error, the string must be freed by the caller.

    virDomainGetSchedulerParameters

    int	virDomainGetSchedulerParameters	(virDomainPtr domain, 
    virSchedParameterPtr params,
    int * nparams)
    +

    Get the scheduler parameters, the @params array will be filled with the values.

    domain:pointer to domain object
    params:pointer to scheduler parameter object (return value)
    nparams:pointer to the number of scheduler parameters (this value should be the same as the nparams value returned by virDomainGetSchedulerType)
    Returns:-1 in case of error, 0 in case of success.

    virDomainGetSchedulerType

    char *	virDomainGetSchedulerType	(virDomainPtr domain, 
    int * nparams)
    +

    Get the scheduler type.

    domain:pointer to domain object
    nparams:number of scheduler parameters (return value)
    Returns:NULL in case of error. The caller must free the returned string.

    virDomainGetUUID

    int	virDomainGetUUID		(virDomainPtr domain, 
    unsigned char * uuid)
    +

    Get the UUID for a domain

    domain:a domain object
    uuid:pointer to a VIR_UUID_BUFLEN bytes array
    Returns:-1 in case of error, 0 in case of success

    virDomainGetUUIDString

    int	virDomainGetUUIDString		(virDomainPtr domain, 
    char * buf)
    +

    Get the UUID for a domain as a string. For more information about UUID see RFC4122.

    domain:a domain object
    buf:pointer to a VIR_UUID_STRING_BUFLEN bytes array
    Returns:-1 in case of error, 0 in case of success

    virDomainGetVcpus

    int	virDomainGetVcpus		(virDomainPtr domain, 
    virVcpuInfoPtr info,
    int maxinfo,
    unsigned char * cpumaps,
    int maplen)
    +

    Extract information about virtual CPUs of domain, store it in info array and also in cpumaps if this pointer isn't NULL.

    domain:pointer to domain object, or NULL for Domain0
    info:pointer to an array of virVcpuInfo structures (OUT)
    maxinfo:number of structures in info array
    cpumaps:pointer to a bit map of real CPUs for all vcpus of this domain (in 8-bit bytes) (OUT). If cpumaps is NULL, then no cpumap information is returned by the API. It is assumed there are <maxinfo> cpumaps in the cpumaps array. The memory allocated to cpumaps must be (maxinfo * maplen) bytes (i.e. calloc(maxinfo, maplen)). One cpumap inside cpumaps has the format described in the virDomainPinVcpu() API.
    maplen:number of bytes in one cpumap, from 1 up to size of CPU map in underlying virtualization system (Xen...).
    Returns:the number of info filled in case of success, -1 in case of failure.
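
    A minimal sketch sizing the buffers as described above; it relies on the VIR_CPU_MAPLEN and VIR_NODEINFO_MAXCPUS helper macros from libvirt.h.

        #include <stdio.h>
        #include <stdlib.h>
        #include <libvirt/libvirt.h>

        /* Query per-vcpu state plus the CPU affinity maps of a domain. */
        static void show_vcpus(virConnectPtr conn, virDomainPtr dom)
        {
            virNodeInfo node;
            if (virNodeGetInfo(conn, &node) < 0)
                return;

            int nvcpus = virDomainGetMaxVcpus(dom);
            if (nvcpus < 0)
                return;

            int maplen = VIR_CPU_MAPLEN(VIR_NODEINFO_MAXCPUS(node));
            virVcpuInfoPtr info = calloc(nvcpus, sizeof(virVcpuInfo));
            unsigned char *cpumaps = calloc(nvcpus, maplen);  /* maxinfo * maplen bytes */
            if (info != NULL && cpumaps != NULL) {
                int got = virDomainGetVcpus(dom, info, nvcpus, cpumaps, maplen);
                for (int i = 0; i < got; i++)
                    printf("vcpu %u on physical cpu %d, state %d\n",
                           info[i].number, info[i].cpu, info[i].state);
            }
            free(info);
            free(cpumaps);
        }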

    virDomainGetXMLDesc

    char *	virDomainGetXMLDesc		(virDomainPtr domain, 
    int flags)
    +

    Provide an XML description of the domain. The description may be reused later to relaunch the domain with virDomainCreateLinux().

    domain:a domain object
    flags:an OR'ed set of virDomainXMLFlags
    Returns:a 0 terminated UTF-8 encoded XML instance, or NULL in case of error. The caller must free() the returned value.

    virDomainInterfaceStats

    int	virDomainInterfaceStats		(virDomainPtr dom, 
    const char * path,
    virDomainInterfaceStatsPtr stats,
    size_t size)
    +

    This function returns network interface stats for interfaces attached to the domain. The path parameter is the name of the network interface. Domains may have more than one network interface. To get stats for each you should make multiple calls to this function. Individual fields within the stats structure may be returned as -1, which indicates that the hypervisor does not support that particular statistic.

    dom:pointer to the domain object
    path:path to the interface
    stats:network interface stats (returned)
    size:size of stats structure
    Returns:0 in case of success or -1 in case of failure.

    virDomainLookupByID

    virDomainPtr	virDomainLookupByID	(virConnectPtr conn, 
    int id)
    +

    Try to find a domain based on the hypervisor ID number

    conn:pointer to the hypervisor connection
    id:the domain ID number
    Returns:a new domain object or NULL in case of failure. If the domain cannot be found, then VIR_ERR_NO_DOMAIN error is raised.

    virDomainLookupByName

    virDomainPtr	virDomainLookupByName	(virConnectPtr conn, 
    const char * name)
    +

    Try to lookup a domain on the given hypervisor based on its name.

    conn:pointer to the hypervisor connection
    name:name for the domain
    Returns:a new domain object or NULL in case of failure. If the domain cannot be found, then VIR_ERR_NO_DOMAIN error is raised.

    virDomainLookupByUUID

    virDomainPtr	virDomainLookupByUUID	(virConnectPtr conn, 
    const unsigned char * uuid)
    +

    Try to lookup a domain on the given hypervisor based on its UUID.

    conn:pointer to the hypervisor connection
    uuid:the raw UUID for the domain
    Returns:a new domain object or NULL in case of failure. If the domain cannot be found, then VIR_ERR_NO_DOMAIN error is raised.

    virDomainLookupByUUIDString

    virDomainPtr	virDomainLookupByUUIDString	(virConnectPtr conn, 
    const char * uuidstr)
    +

    Try to lookup a domain on the given hypervisor based on its UUID.

    conn:pointer to the hypervisor connection
    uuidstr:the string UUID for the domain
    Returns:a new domain object or NULL in case of failure. If the domain cannot be found, then VIR_ERR_NO_DOMAIN error is raised.

    virDomainMigrate

    virDomainPtr	virDomainMigrate	(virDomainPtr domain, 
    virConnectPtr dconn,
    unsigned long flags,
    const char * dname,
    const char * uri,
    unsigned long bandwidth)
    +

    Migrate the domain object from its current host to the destination host given by dconn (a connection to the destination host). Flags may be one or more of the following: VIR_MIGRATE_LIVE Attempt a live migration. If a hypervisor supports renaming domains during migration, then you may set the dname parameter to the new name (otherwise it keeps the same name). If this is not supported by the hypervisor, dname must be NULL or else you will get an error. Since typically the two hypervisors connect directly to each other in order to perform the migration, you may need to specify a path from the source to the destination. This is the purpose of the uri parameter. If uri is NULL, then libvirt will try to find the best method. The uri may specify the hostname or IP address of the destination host as seen from the source, or it may be a full URI giving transport, hostname, user, port, etc. in the usual form. Refer to driver documentation for the particular URIs supported. The maximum bandwidth (in Mbps) that will be used to do the migration can be specified with the bandwidth parameter. If set to 0, libvirt will choose a suitable default. Some hypervisors do not support this feature and will return an error if bandwidth is not 0. To see which features are supported by the current hypervisor, see virConnectGetCapabilities, /capabilities/host/migration_features. There are many limitations on migration imposed by the underlying technology - for example it may not be possible to migrate between different processors even with the same architecture, or between different types of hypervisor.

    domain:a domain object
    dconn:destination host (a connection object)
    flags:flags
    dname:(optional) rename domain to this at destination
    uri:(optional) dest hostname/URI as seen from the source host
    bandwidth:(optional) specify migration bandwidth limit in Mbps
    Returns:the new domain object if the migration was successful, or NULL in case of error. Note that the new domain object exists in the scope of the destination connection (dconn).
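
    A minimal live-migration sketch: the destination is identified only by dconn, uri is left NULL so libvirt picks the transport, and bandwidth 0 requests the default.

        #include <stdio.h>
        #include <libvirt/libvirt.h>

        /* Live-migrate 'dom' to the host behind 'dconn'; the returned object
         * belongs to the destination connection. */
        static int migrate_live(virDomainPtr dom, virConnectPtr dconn)
        {
            virDomainPtr migrated = virDomainMigrate(dom, dconn, VIR_MIGRATE_LIVE,
                                                     NULL /* keep name */,
                                                     NULL /* let libvirt choose */,
                                                     0    /* default bandwidth */);
            if (migrated == NULL) {
                fprintf(stderr, "migration failed\n");
                return -1;
            }
            virDomainFree(migrated);
            return 0;
        }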

    virDomainPinVcpu

    int	virDomainPinVcpu		(virDomainPtr domain, 
    unsigned int vcpu,
    unsigned char * cpumap,
    int maplen)
    +

    Dynamically change the real CPUs which can be allocated to a virtual CPU. This function requires privileged access to the hypervisor.

    domain:pointer to domain object, or NULL for Domain0
    vcpu:virtual CPU number
    cpumap:pointer to a bit map of real CPUs (in 8-bit bytes) (IN) Each bit set to 1 means that corresponding CPU is usable. Bytes are stored in little-endian order: CPU0-7, 8-15... In each byte, lowest CPU number is least significant bit.
    maplen:number of bytes in cpumap, from 1 up to size of CPU map in underlying virtualization system (Xen...). If maplen < size, missing bytes are set to zero. If maplen > size, failure code is returned.
    Returns:0 in case of success, -1 in case of failure.

    virDomainReboot

    int	virDomainReboot			(virDomainPtr domain, 
    unsigned int flags)
    +

    Reboot a domain; the domain object is still usable thereafter, but the domain OS is being stopped for a restart. Note that the guest OS may ignore the request.

    domain:a domain object
    flags:extra flags for the reboot operation, not used yet
    Returns:0 in case of success and -1 in case of failure.

    virDomainRestore

    int	virDomainRestore		(virConnectPtr conn, 
    const char * from)
    +

    This method will restore a domain saved to disk by virDomainSave().

    conn:pointer to the hypervisor connection
    from:path to the
    Returns:0 in case of success and -1 in case of failure.

    virDomainResume

    int	virDomainResume			(virDomainPtr domain)
    +

    Resume a suspended domain; the process is restarted from the state where it was frozen by calling virDomainSuspend(). This function may require privileged access

    domain:a domain object
    Returns:0 in case of success and -1 in case of failure.

    virDomainSave

    int	virDomainSave			(virDomainPtr domain, 
    const char * to)
    +

    This method will suspend a domain and save its memory contents to a file on disk. After the call, if successful, the domain is not listed as running anymore (this may be a problem). Use virDomainRestore() to restore a domain after saving.

    domain:a domain object
    to:path for the output file
    Returns:0 in case of success and -1 in case of failure.
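
    A minimal save/restore sketch; the image path is a hypothetical example and must be valid on the host (or on the remote Xen host, as noted above).

        #include <libvirt/libvirt.h>

        /* Suspend a domain to a file on disk, then bring it back later. */
        static int save_and_restore(virConnectPtr conn, virDomainPtr dom)
        {
            const char *image = "/var/lib/libvirt/save/demo.img";  /* example path */

            if (virDomainSave(dom, image) < 0)
                return -1;
            /* ... some time later ... */
            return virDomainRestore(conn, image);
        }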

    virDomainSetAutostart

    int	virDomainSetAutostart		(virDomainPtr domain, 
    int autostart)
    +

    Configure the domain to be automatically started when the host machine boots.

    domain:a domain object
    autostart:whether the domain should be automatically started 0 or 1
    Returns:-1 in case of error, 0 in case of success

    virDomainSetMaxMemory

    int	virDomainSetMaxMemory		(virDomainPtr domain, 
    unsigned long memory)
    +

    Dynamically change the maximum amount of physical memory allocated to a domain. If domain is NULL, then this changes the amount of memory reserved for Domain0, i.e. the domain where the application runs. This function requires privileged access to the hypervisor.

    domain:a domain object or NULL
    memory:the memory size in kilobytes
    Returns:0 in case of success and -1 in case of failure.

    virDomainSetMemory

    int	virDomainSetMemory		(virDomainPtr domain, 
    unsigned long memory)
    +

    Dynamically change the target amount of physical memory allocated to a domain. If domain is NULL, then this changes the amount of memory reserved for Domain0, i.e. the domain where the application runs. This function may require privileged access to the hypervisor.

    domain:a domain object or NULL
    memory:the memory size in kilobytes
    Returns:0 in case of success and -1 in case of failure.

    virDomainSetSchedulerParameters

    int	virDomainSetSchedulerParameters	(virDomainPtr domain, 
    virSchedParameterPtr params,
    int nparams)
    +

    Change the scheduler parameters

    domain:pointer to domain object
    params:pointer to scheduler parameter objects
    nparams:number of scheduler parameters (this value should be the same as or less than the nparams value returned by virDomainGetSchedulerType)
    Returns:-1 in case of error, 0 in case of success.

    virDomainSetVcpus

    int	virDomainSetVcpus		(virDomainPtr domain, 
    unsigned int nvcpus)
    +

    Dynamically change the number of virtual CPUs used by the domain. Note that this call may fail if the underlying virtualization hypervisor does not support it or if growing the number is arbitrarily limited. This function requires privileged access to the hypervisor.

    domain:pointer to domain object, or NULL for Domain0
    nvcpus:the new number of virtual CPUs for this domain
    Returns:0 in case of success, -1 in case of failure.

    virDomainShutdown

    int	virDomainShutdown		(virDomainPtr domain)
    +

    Shut down a domain; the domain object is still usable thereafter, but the domain OS is being stopped. Note that the guest OS may ignore the request. TODO: should we add an option for reboot, knowing it may not be doable in the general case ?

    domain:a domain object
    Returns:0 in case of success and -1 in case of failure.

    virDomainSuspend

    int	virDomainSuspend		(virDomainPtr domain)
    +

    Suspends an active domain; the process is frozen without further access to CPU resources and I/O, but the memory used by the domain at the hypervisor level will stay allocated. Use virDomainResume() to reactivate the domain. This function may require privileged access.

    domain:a domain object
    Returns:0 in case of success and -1 in case of failure.

    virDomainUndefine

    int	virDomainUndefine		(virDomainPtr domain)
    +

    Undefine a domain; this does not stop it if it is running

    domain:pointer to a defined domain
    Returns:0 in case of success, -1 in case of error

    virGetVersion

    int	virGetVersion			(unsigned long * libVer, 
    const char * type,
    unsigned long * typeVer)
    +

    Provides two pieces of information: @libVer is the version of the library, while @typeVer is the version of the hypervisor type @type against which the library was compiled. If @type is NULL, "Xen" is assumed; if @type is unknown or not available, an error code will be returned and @typeVer will be 0.

    libVer:return value for the library version (OUT)
    type:the type of connection/driver looked at
    typeVer:return value for the version of the hypervisor (OUT)
    Returns:-1 in case of failure, 0 otherwise, and values for @libVer and @typeVer have the format major * 1,000,000 + minor * 1,000 + release.

    virInitialize

    int	virInitialize			(void)
    +

    Initialize the library. It's better to call this routine at startup in multithreaded applications to avoid potential race when initializing the library.

    Returns:0 in case of success, -1 in case of error

    virNetworkCreate

    int	virNetworkCreate		(virNetworkPtr network)
    +

    Create and start a defined network. If the call succeeds, the network moves from the defined to the running networks pool.

    network:pointer to a defined network
    Returns:0 in case of success, -1 in case of error

    virNetworkCreateXML

    virNetworkPtr	virNetworkCreateXML	(virConnectPtr conn, 
    const char * xmlDesc)
    +

    Create and start a new virtual network, based on an XML description similar to the one returned by virNetworkGetXMLDesc()

    conn:pointer to the hypervisor connection
    xmlDesc:an XML description of the network
    Returns:a new network object or NULL in case of failure
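
    A minimal sketch starting a transient network from an inline description; the XML, network name, bridge name and addresses are illustrative examples only.

        #include <stdio.h>
        #include <stdlib.h>
        #include <libvirt/libvirt.h>

        /* Create a transient network and report the bridge it uses. */
        static void start_demo_network(virConnectPtr conn)
        {
            const char *xml =
                "<network>"
                "  <name>demo-net</name>"
                "  <bridge name='virbr9'/>"
                "  <ip address='192.168.150.1' netmask='255.255.255.0'/>"
                "</network>";

            virNetworkPtr net = virNetworkCreateXML(conn, xml);
            if (net == NULL) {
                fprintf(stderr, "failed to create network\n");
                return;
            }
            char *bridge = virNetworkGetBridgeName(net);
            printf("network %s uses bridge %s\n", virNetworkGetName(net), bridge);
            free(bridge);
            virNetworkFree(net);
        }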

    virNetworkDefineXML

    virNetworkPtr	virNetworkDefineXML	(virConnectPtr conn, 
    const char * xml)
    +

    Defines a network, but does not create it

    conn:pointer to the hypervisor connection
    xml:the XML description for the network, preferably in UTF-8
    Returns:NULL in case of error, a pointer to the network otherwise

    virNetworkDestroy

    int	virNetworkDestroy		(virNetworkPtr network)
    +

    Destroy the network object. The running instance is shut down if not down already and all resources used by it are given back to the hypervisor. The data structure is freed and should not be used thereafter if the call does not return an error. This function may require privileged access

    network:a network object
    Returns:0 in case of success and -1 in case of failure.

    virNetworkFree

    int	virNetworkFree			(virNetworkPtr network)
    +

    Free the network object. The running instance is kept alive. The data structure is freed and should not be used thereafter.

    network:a network object
    Returns:0 in case of success and -1 in case of failure.

    virNetworkGetAutostart

    int	virNetworkGetAutostart		(virNetworkPtr network, 
    int * autostart)
    +

    Provides a boolean value indicating whether the network is configured to be automatically started when the host machine boots.

    network:a network object
    autostart:the value returned
    Returns:-1 in case of error, 0 in case of success

    virNetworkGetBridgeName

    char *	virNetworkGetBridgeName		(virNetworkPtr network)
    +

    Provides a bridge interface name to which a domain may connect a network interface in order to join the network.

    network:a network object
    Returns:a 0 terminated interface name, or NULL in case of error. The caller must free() the returned value.

    virNetworkGetConnect

    virConnectPtr	virNetworkGetConnect	(virNetworkPtr net)
    +

    Provides the connection pointer associated with a network. The reference counter on the connection is not increased by this call. WARNING: When writing libvirt bindings in other languages, do not use this function. Instead, store the connection and the network object together.

    net:pointer to a network
    Returns:the virConnectPtr or NULL in case of failure.

    virNetworkGetName

    const char *	virNetworkGetName	(virNetworkPtr network)
    +

    Get the public name for that network

    network:a network object
    Returns:a pointer to the name, or NULL. The string need not be deallocated; its lifetime is the same as that of the network object.

    virNetworkGetUUID

    int	virNetworkGetUUID		(virNetworkPtr network, 
    unsigned char * uuid)
    +

    Get the UUID for a network

    network:a network object
    uuid:pointer to a VIR_UUID_BUFLEN bytes array
    Returns:-1 in case of error, 0 in case of success

    virNetworkGetUUIDString

    int	virNetworkGetUUIDString		(virNetworkPtr network, 
    char * buf)
    +

    Get the UUID for a network as a string. For more information about UUID see RFC4122.

    network:a network object
    buf:pointer to a VIR_UUID_STRING_BUFLEN bytes array
    Returns:-1 in case of error, 0 in case of success

    virNetworkGetXMLDesc

    char *	virNetworkGetXMLDesc		(virNetworkPtr network, 
    int flags)
    +

    Provide an XML description of the network. The description may be reused later to relaunch the network with virNetworkCreateXML().

    network:a network object
    flags:an OR'ed set of extraction flags, not used yet
    Returns:a 0 terminated UTF-8 encoded XML instance, or NULL in case of error. The caller must free() the returned value.

    virNetworkLookupByName

    virNetworkPtr	virNetworkLookupByName	(virConnectPtr conn, 
    const char * name)
    +

    Try to lookup a network on the given hypervisor based on its name.

    conn:pointer to the hypervisor connection
    name:name for the network
    Returns:a new network object or NULL in case of failure. If the network cannot be found, then VIR_ERR_NO_NETWORK error is raised.

    virNetworkLookupByUUID

    virNetworkPtr	virNetworkLookupByUUID	(virConnectPtr conn, 
    const unsigned char * uuid)
    +

    Try to lookup a network on the given hypervisor based on its UUID.

    conn:pointer to the hypervisor connection
    uuid:the raw UUID for the network
    Returns:a new network object or NULL in case of failure. If the network cannot be found, then VIR_ERR_NO_NETWORK error is raised.

    virNetworkLookupByUUIDString

    virNetworkPtr	virNetworkLookupByUUIDString	(virConnectPtr conn, 
    const char * uuidstr)
    +

    Try to lookup a network on the given hypervisor based on its UUID.

    conn:pointer to the hypervisor connection
    uuidstr:the string UUID for the network
    Returns:a new network object or NULL in case of failure. If the network cannot be found, then VIR_ERR_NO_NETWORK error is raised.

    virNetworkSetAutostart

    int	virNetworkSetAutostart		(virNetworkPtr network, 
    int autostart)
    +

    Configure the network to be automatically started when the host machine boots.

    network:a network object
    autostart:whether the network should be automatically started 0 or 1
    Returns:-1 in case of error, 0 in case of success

    virNetworkUndefine

    int	virNetworkUndefine		(virNetworkPtr network)
    +

    Undefine a network; this does not stop it if it is running

    network:pointer to a defined network
    Returns:0 in case of success, -1 in case of error

    virNodeGetCellsFreeMemory

    int	virNodeGetCellsFreeMemory	(virConnectPtr conn, 
    unsigned long long * freeMems,
    int startCell,
    int maxCells)
    +

    This call returns the amount of free memory in one or more NUMA cells. The @freeMems array must be allocated by the caller and will be filled with the amount of free memory in kilobytes for each cell requested, starting with startCell (in freeMems[0]), up to either (startCell + maxCells), or the number of additional cells in the node, whichever is smaller.

    conn:pointer to the hypervisor connection
    freeMems:pointer to the array of unsigned long long
    startCell:index of first cell to return freeMems info on.
    maxCells:Maximum number of cells for which freeMems information can be returned.
    Returns:the number of entries filled in freeMems, or -1 in case of error.
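
    A minimal sketch reporting free memory per NUMA cell, sizing the array from the node's cell count as returned by virNodeGetInfo.

        #include <stdio.h>
        #include <stdlib.h>
        #include <libvirt/libvirt.h>

        /* Print the free memory of every NUMA cell on the node. */
        static void show_cells_free_memory(virConnectPtr conn)
        {
            virNodeInfo node;
            if (virNodeGetInfo(conn, &node) < 0)
                return;

            unsigned long long *freeMems = calloc(node.nodes, sizeof(*freeMems));
            if (freeMems == NULL)
                return;

            int got = virNodeGetCellsFreeMemory(conn, freeMems, 0, node.nodes);
            for (int i = 0; i < got; i++)
                printf("cell %d: %llu KiB free\n", i, freeMems[i]);
            free(freeMems);
        }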

    virNodeGetFreeMemory

    unsigned long long	virNodeGetFreeMemory	(virConnectPtr conn)
    +

    provides the free memory available on the Node

    conn:pointer to the hypervisor connection
    Returns:the available free memory in kilobytes or 0 in case of error

    virNodeGetInfo

    int	virNodeGetInfo			(virConnectPtr conn, 
    virNodeInfoPtr info)
    +

    Extract hardware information about the node.

    conn:pointer to the hypervisor connection
    info:pointer to a virNodeInfo structure allocated by the user
    Returns:0 in case of success and -1 in case of failure.

    virStoragePoolBuild

    int	virStoragePoolBuild		(virStoragePoolPtr pool, 
    unsigned int flags)
    +

    Build the underlying storage pool

    pool:pointer to storage pool
    flags:future flags, use 0 for now
    Returns:0 on success, or -1 upon failure

    virStoragePoolCreate

    int	virStoragePoolCreate		(virStoragePoolPtr pool, 
    unsigned int flags)
    +

    Starts an inactive storage pool

    pool:pointer to storage pool
    flags:future flags, use 0 for now
    Returns:0 on success, or -1 if it could not be started

    virStoragePoolCreateXML

    virStoragePoolPtr	virStoragePoolCreateXML	(virConnectPtr conn, 
    const char * xmlDesc,
    unsigned int flags)
    +

    Create a new storage pool based on its XML description. The pool is not persistent, so its definition will disappear when it is destroyed, or if the host is restarted

    conn:pointer to hypervisor connection
    xmlDesc:XML description for new pool
    flags:future flags, use 0 for now
    Returns:a virStoragePoolPtr object, or NULL if creation failed
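
    A minimal sketch creating a transient directory-backed pool; the XML, pool name and target path are illustrative examples only.

        #include <stdio.h>
        #include <libvirt/libvirt.h>

        /* Create and start a transient pool from an inline XML description. */
        static virStoragePoolPtr create_demo_pool(virConnectPtr conn)
        {
            const char *xml =
                "<pool type='dir'>"
                "  <name>demo-pool</name>"
                "  <target><path>/var/lib/libvirt/demo-pool</path></target>"
                "</pool>";

            virStoragePoolPtr pool = virStoragePoolCreateXML(conn, xml, 0);
            if (pool == NULL)
                fprintf(stderr, "failed to create pool\n");
            return pool;
        }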

    virStoragePoolDefineXML

    virStoragePoolPtr	virStoragePoolDefineXML	(virConnectPtr conn, 
    const char * xml,
    unsigned int flags)
    +

    Define a new inactive storage pool based on its XML description. The pool is persistent, until explicitly undefined.

    conn:pointer to hypervisor connection
    xml:XML description for new pool
    flags:future flags, use 0 for now
    Returns:a virStoragePoolPtr object, or NULL if creation failed

    virStoragePoolDelete

    int	virStoragePoolDelete		(virStoragePoolPtr pool, 
    unsigned int flags)
    +

    Delete the underlying pool resources. This is a non-recoverable operation. The virStoragePoolPtr object itself is not free'd.

    pool:pointer to storage pool
    flags:flags for obliteration process
    Returns:0 on success, or -1 if it could not be obliterated

    virStoragePoolDestroy

    int	virStoragePoolDestroy		(virStoragePoolPtr pool)
    +

    Destroy an active storage pool. This will deactivate the pool on the host, but keep any persistent config associated with it. If it has a persistent config it can later be restarted with virStoragePoolCreate(). This does not free the associated virStoragePoolPtr object.

    pool:pointer to storage pool
    Returns:0 on success, or -1 if it could not be destroyed

    virStoragePoolFree

    int	virStoragePoolFree		(virStoragePoolPtr pool)
    +

    Free a storage pool object, releasing all memory associated with it. Does not change the state of the pool on the host.

    pool:pointer to storage pool
    Returns:0 on success, or -1 if it could not be free'd.

    virStoragePoolGetAutostart

    int	virStoragePoolGetAutostart	(virStoragePoolPtr pool, 
    int * autostart)
    +

    Fetches the value of the autostart flag, which determines whether the pool is automatically started at boot time

    pool:pointer to storage pool
    autostart:location in which to store autostart flag
    Returns:0 on success, -1 on failure

    virStoragePoolGetConnect

    virConnectPtr	virStoragePoolGetConnect	(virStoragePoolPtr pool)
    +

    Provides the connection pointer associated with a storage pool. The reference counter on the connection is not increased by this call. WARNING: When writing libvirt bindings in other languages, do not use this function. Instead, store the connection and the pool object together.

    pool:pointer to a pool
    Returns:the virConnectPtr or NULL in case of failure.

    virStoragePoolGetInfo

    int	virStoragePoolGetInfo		(virStoragePoolPtr pool, 
    virStoragePoolInfoPtr info)
    +

    Get volatile information about the storage pool such as free space / usage summary

    pool:pointer to storage pool
    info:pointer at which to store info
    Returns:0 on success, or -1 on failure.

    virStoragePoolGetName

    const char *	virStoragePoolGetName	(virStoragePoolPtr pool)
    +

    Fetch the locally unique name of the storage pool

    pool:pointer to storage pool
    Returns:the name of the pool, or NULL on error

    virStoragePoolGetUUID

    int	virStoragePoolGetUUID		(virStoragePoolPtr pool, 
    unsigned char * uuid)
    +

    Fetch the globally unique ID of the storage pool

    pool:pointer to storage pool
    uuid:buffer of VIR_UUID_BUFLEN bytes in size
    Returns:0 on success, or -1 on error

    virStoragePoolGetUUIDString

    int	virStoragePoolGetUUIDString	(virStoragePoolPtr pool, 
    char * buf)
    +

    Fetch the globally unique ID of the storage pool as a string

    pool:pointer to storage pool
    buf:buffer of VIR_UUID_STRING_BUFLEN bytes in size
    Returns:0 on success, or -1 on error

    virStoragePoolGetXMLDesc

    char *	virStoragePoolGetXMLDesc	(virStoragePoolPtr pool, 
    unsigned int flags)
    +

    Fetch an XML document describing all aspects of the storage pool. This is suitable for later feeding back into the virStoragePoolCreateXML method.

    pool:pointer to storage pool
    flags:flags for XML format options (set of virDomainXMLFlags)
    Returns:an XML document, or NULL on error

    virStoragePoolListVolumes

    int	virStoragePoolListVolumes	(virStoragePoolPtr pool, 
    char ** const names,
    int maxnames)
    +

    Fetch list of storage volume names, limiting to at most maxnames.

    pool:pointer to storage pool
    names:array in which to store volume names
    maxnames:size of names array
    Returns:the number of names fetched, or -1 on error

    virStoragePoolLookupByName

    virStoragePoolPtr	virStoragePoolLookupByName	(virConnectPtr conn, 
    const char * name)
    +

    Fetch a storage pool based on its unique name

    conn:pointer to hypervisor connection
    name:name of pool to fetch
    Returns:a virStoragePoolPtr object, or NULL if no matching pool is found

    virStoragePoolLookupByUUID

    virStoragePoolPtr	virStoragePoolLookupByUUID	(virConnectPtr conn, 
    const unsigned char * uuid)
    +

    Fetch a storage pool based on its globally unique id

    conn:pointer to hypervisor connection
    uuid:globally unique id of pool to fetch
    Returns:a virStoragePoolPtr object, or NULL if no matching pool is found

    virStoragePoolLookupByUUIDString

    virStoragePoolPtr	virStoragePoolLookupByUUIDString	(virConnectPtr conn, 
    const char * uuidstr)
    +

    Fetch a storage pool based on its globally unique id

    conn:pointer to hypervisor connection
    uuidstr:globally unique id of pool to fetch
    Returns:a virStoragePoolPtr object, or NULL if no matching pool is found

    virStoragePoolLookupByVolume

    virStoragePoolPtr	virStoragePoolLookupByVolume	(virStorageVolPtr vol)
    +

    Fetch a storage pool which contains a particular volume

    vol:pointer to storage volume
    Returns:a virStoragePoolPtr object, or NULL if no matching pool is found

    virStoragePoolNumOfVolumes

    int	virStoragePoolNumOfVolumes	(virStoragePoolPtr pool)
    +

    Fetch the number of storage volumes within a pool

    pool:pointer to storage pool
    Returns:the number of storage volumes, or -1 on failure

    virStoragePoolRefresh

    int	virStoragePoolRefresh		(virStoragePoolPtr pool, 
    unsigned int flags)
    +

    Request that the pool refresh its list of volumes. This may involve communicating with a remote server, and/or initializing new devices at the OS layer

    pool:pointer to storage pool
    flags:flags to control refresh behaviour (currently unused, use 0)
    Returns:0 if the volume list was refreshed, -1 on failure

    virStoragePoolSetAutostart

    int	virStoragePoolSetAutostart	(virStoragePoolPtr pool, 
    int autostart)
    +

    Sets the autostart flag

    pool:pointer to storage pool
    autostart:new flag setting
    Returns:0 on success, -1 on failure

    virStoragePoolUndefine

    int	virStoragePoolUndefine		(virStoragePoolPtr pool)
    +

    Undefine an inactive storage pool

    pool:pointer to storage pool
    Returns:0 on success, or -1 on failure

    virStorageVolCreateXML

    virStorageVolPtr	virStorageVolCreateXML	(virStoragePoolPtr pool, 
    const char * xmldesc,
    unsigned int flags)
    +

    Create a storage volume within a pool based on an XML description. Not all pools support creation of volumes

    pool:pointer to storage pool
    xmldesc:description of volume to create
    flags:flags for creation (unused, pass 0)
    Returns:the storage volume, or NULL on error
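
    A minimal sketch allocating a new volume in an existing pool; the XML and volume name are illustrative examples, with the capacity given in bytes.

        #include <stdio.h>
        #include <libvirt/libvirt.h>

        /* Create a 1 GiB volume inside 'pool'. */
        static virStorageVolPtr create_demo_volume(virStoragePoolPtr pool)
        {
            const char *xml =
                "<volume>"
                "  <name>demo.img</name>"
                "  <capacity>1073741824</capacity>"
                "</volume>";

            virStorageVolPtr vol = virStorageVolCreateXML(pool, xml, 0);
            if (vol == NULL)
                fprintf(stderr, "failed to create volume\n");
            return vol;
        }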

    virStorageVolDelete

    int	virStorageVolDelete		(virStorageVolPtr vol, 
    unsigned int flags)
    +

    Delete the storage volume from the pool

    vol:pointer to storage volume
    flags:future flags, use 0 for now
    Returns:0 on success, or -1 on error

    virStorageVolFree

    int	virStorageVolFree		(virStorageVolPtr vol)
    +

    Release the storage volume handle. The underlying storage volume continues to exist

    vol:pointer to storage volume
    Returns:0 on success, or -1 on error

    virStorageVolGetConnect

    virConnectPtr	virStorageVolGetConnect	(virStorageVolPtr vol)
    +

    Provides the connection pointer associated with a storage volume. The reference counter on the connection is not increased by this call. WARNING: When writing libvirt bindings in other languages, do not use this function. Instead, store the connection and the volume object together.

    vol:pointer to a storage volume
    Returns:the virConnectPtr or NULL in case of failure.

    virStorageVolGetInfo

    int	virStorageVolGetInfo		(virStorageVolPtr vol, 
    virStorageVolInfoPtr info)
    +

    Fetches volatile information about the storage volume such as its current allocation

    vol:pointer to storage volume
    info:pointer at which to store info
    Returns:0 on success, or -1 on failure

    virStorageVolGetKey

    const char *	virStorageVolGetKey	(virStorageVolPtr vol)
    +

    Fetch the storage volume key. This is globally unique, so the same volume will have the same key no matter what host it is accessed from

    vol:pointer to storage volume
    Returns:the volume key, or NULL on error

    virStorageVolGetName

    const char *	virStorageVolGetName	(virStorageVolPtr vol)
    +

    Fetch the storage volume name. This is unique within the scope of a pool

    vol:pointer to storage volume
    Returns:the volume name, or NULL on error

    virStorageVolGetPath

    char *	virStorageVolGetPath		(virStorageVolPtr vol)
    +

    Fetch the storage volume path. Depending on the pool configuration this is either persistent across hosts, or dynamically assigned at pool startup. Consult pool documentation for information on getting the persistent naming

    vol:pointer to storage volume
    Returns:the storage volume path, or NULL on error

    virStorageVolGetXMLDesc

    char *	virStorageVolGetXMLDesc		(virStorageVolPtr vol, 
    unsigned int flags)
    +

    Fetch an XML document describing all aspects of the storage volume

    vol:pointer to storage volume
    flags:flags for XML generation (unused, pass 0)
    Returns:the XML document, or NULL on error

    virStorageVolLookupByKey

    virStorageVolPtr	virStorageVolLookupByKey	(virConnectPtr conn, 
    const char * key)
    +

    Fetch a pointer to a storage volume based on its globally unique key

    conn:pointer to hypervisor connection
    key:globally unique key
    Returns:a storage volume, or NULL if not found / error

    virStorageVolLookupByName

    virStorageVolPtr	virStorageVolLookupByName	(virStoragePoolPtr pool, 
    const char * name)
    +

    Fetch a pointer to a storage volume based on its name within a pool

    pool:pointer to storage pool
    name:name of storage volume
    Returns:a storage volume, or NULL if not found / error

    virStorageVolLookupByPath

    virStorageVolPtr	virStorageVolLookupByPath	(virConnectPtr conn, 
    const char * path)
    +

    Fetch a pointer to a storage volume based on its locally (host) unique path

    conn:pointer to hypervisor connection
    path:locally unique path
    Returns:a storage volume, or NULL if not found / error
    diff --git a/docs/html/libvirt-virterror.html b/docs/html/libvirt-virterror.html index aa965d8d8e..87f355fc2a 100644 --- a/docs/html/libvirt-virterror.html +++ b/docs/html/libvirt-virterror.html @@ -1,137 +1,44 @@ -Module virterror from libvirt

    libvirt: Module virterror from libvirt

    Module virterror from libvirt

    Provides the interfaces of the libvirt library to handle errors raised while using the library.

    Table of Contents

    Types

    typedef struct _virError virError
    typedef enum virErrorDomain
    typedef enum virErrorLevel
    typedef enum virErrorNumber
    typedef virError * virErrorPtr

    Functions

    int	virConnCopyLastError		(virConnectPtr conn,
					 virErrorPtr to)
    virErrorPtr	virConnGetLastError	(virConnectPtr conn)
    void	virConnResetLastError		(virConnectPtr conn)
    void	virConnSetErrorFunc		(virConnectPtr conn,
					 void * userData,
					 virErrorFunc handler)
    int	virCopyLastError		(virErrorPtr to)
    void	virDefaultErrorFunc		(virErrorPtr err)
    typedef void	(*virErrorFunc)		(void * userData,
					 virErrorPtr error)
    virErrorPtr	virGetLastError		(void)
    void	virResetError			(virErrorPtr err)
    void	virResetLastError		(void)
    void	virSetErrorFunc			(void * userData,
					 virErrorFunc handler)

    Description

    Types

    virError

    struct virError {
        int code             : The error code, a virErrorNumber
        int domain           : What part of the library raised this error
        char * message       : human-readable informative error message
        virErrorLevel level  : how consequent is the error
        virConnectPtr conn   : connection if available, see note above
        virDomainPtr dom     : domain if available, see note above
        char * str1          : extra string information
        char * str2          : extra string information
        char * str3          : extra string information
        int int1             : extra number information
        int int2             : extra number information
        virNetworkPtr net    : network if available, see note above
    }
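    A sketch of inspecting these fields after a failed call, assuming an already-open connection "conn":

    #include <stdio.h>
    #include <libvirt/libvirt.h>
    #include <libvirt/virterror.h>

    /* Report the last error recorded on a connection, if any. */
    static void report_last_error(virConnectPtr conn)
    {
        virErrorPtr err = virConnGetLastError(conn);
        if (err && err->code != VIR_ERR_OK)
            fprintf(stderr, "libvirt error %d (domain %d, level %d): %s\n",
                    err->code, err->domain, err->level,
                    err->message ? err->message : "no message");
    }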


    virErrorDomain

    enum virErrorDomain {
    VIR_FROM_NONE = 0
    VIR_FROM_XEN = 1 : Error at Xen hypervisor layer
    VIR_FROM_XEND = 2 : Error at connection with xend daemon
    VIR_FROM_XENSTORE = 3 : Error at connection with xen store
    VIR_FROM_SEXPR = 4 : Error in the S-Expression code
    VIR_FROM_XML = 5 : Error in the XML code
    VIR_FROM_DOM = 6 : Error when operating on a domain
    VIR_FROM_RPC = 7 : Error in the XML-RPC code
    VIR_FROM_PROXY = 8 : Error in the proxy code
    VIR_FROM_CONF = 9 : Error in the configuration file handling
    VIR_FROM_QEMU = 10 : Error at the QEMU daemon
    VIR_FROM_NET = 11 : Error when operating on a network
    VIR_FROM_TEST = 12 : Error from test driver
    VIR_FROM_REMOTE = 13 : Error from remote driver
    VIR_FROM_OPENVZ = 14 : Error from OpenVZ driver
    VIR_FROM_XENXM = 15 : Error at Xen XM layer
    VIR_FROM_STATS_LINUX = 16 : Error in the Linux Stats code
    VIR_FROM_LXC = 17 : Error from Linux Container driver
    VIR_FROM_STORAGE = 18 : Error from storage driver
    }

    virErrorLevel

    enum virErrorLevel {
    VIR_ERR_NONE = 0
    VIR_ERR_WARNING = 1 : A simple warning
    VIR_ERR_ERROR = 2 : An error
    }

    virErrorNumber

    enum virErrorNumber {
    VIR_ERR_OK = 0
    VIR_ERR_INTERNAL_ERROR = 1 : internal error
    VIR_ERR_NO_MEMORY = 2 : memory allocation failure
    VIR_ERR_NO_SUPPORT = 3 : no support for this function
    VIR_ERR_UNKNOWN_HOST = 4 : could not resolve hostname
    VIR_ERR_NO_CONNECT = 5 : can't connect to hypervisor
    VIR_ERR_INVALID_CONN = 6 : invalid connection object
    VIR_ERR_INVALID_DOMAIN = 7 : invalid domain object
    VIR_ERR_INVALID_ARG = 8 : invalid function argument
    VIR_ERR_OPERATION_FAILED = 9 : a command to hypervisor failed
    VIR_ERR_GET_FAILED = 10 : an HTTP GET command failed
    VIR_ERR_POST_FAILED = 11 : an HTTP POST command failed
    VIR_ERR_HTTP_ERROR = 12 : unexpected HTTP error code
    VIR_ERR_SEXPR_SERIAL = 13 : failure to serialize an S-Expr
    VIR_ERR_NO_XEN = 14 : could not open Xen hypervisor control
    VIR_ERR_XEN_CALL = 15 : failure doing a hypervisor call
    VIR_ERR_OS_TYPE = 16 : unknown OS type
    VIR_ERR_NO_KERNEL = 17 : missing kernel information
    VIR_ERR_NO_ROOT = 18 : missing root device information
    VIR_ERR_NO_SOURCE = 19 : missing source device information
    VIR_ERR_NO_TARGET = 20 : missing target device information
    VIR_ERR_NO_NAME = 21 : missing domain name information
    VIR_ERR_NO_OS = 22 : missing domain OS information
    VIR_ERR_NO_DEVICE = 23 : missing domain devices information
    VIR_ERR_NO_XENSTORE = 24 : could not open Xen Store control
    VIR_ERR_DRIVER_FULL = 25 : too many drivers registered
    VIR_ERR_CALL_FAILED = 26 : not supported by the drivers (DEPRECATED)
    VIR_ERR_XML_ERROR = 27 : an XML description is not well formed or broken
    VIR_ERR_DOM_EXIST = 28 : the domain already exists
    VIR_ERR_OPERATION_DENIED = 29 : operation forbidden on read-only connections
    VIR_ERR_OPEN_FAILED = 30 : failed to open a conf file
    VIR_ERR_READ_FAILED = 31 : failed to read a conf file
    VIR_ERR_PARSE_FAILED = 32 : failed to parse a conf file
    VIR_ERR_CONF_SYNTAX = 33 : failed to parse the syntax of a conf file
    VIR_ERR_WRITE_FAILED = 34 : failed to write a conf file
    VIR_ERR_XML_DETAIL = 35 : detail of an XML error
    VIR_ERR_INVALID_NETWORK = 36 : invalid network object
    VIR_ERR_NETWORK_EXIST = 37 : the network already exists
    VIR_ERR_SYSTEM_ERROR = 38 : general system call failure
    VIR_ERR_RPC = 39 : some sort of RPC error
    VIR_ERR_GNUTLS_ERROR = 40 : error from a GNUTLS call
    VIR_WAR_NO_NETWORK = 41 : failed to start network
    VIR_ERR_NO_DOMAIN = 42 : domain not found or unexpectedly disappeared
    VIR_ERR_NO_NETWORK = 43 : network not found
    VIR_ERR_INVALID_MAC = 44 : invalid MAC address
    VIR_ERR_AUTH_FAILED = 45 : authentication failed
    VIR_ERR_INVALID_STORAGE_POOL = 46 : invalid storage pool object
    VIR_ERR_INVALID_STORAGE_VOL = 47 : invalid storage vol object
    VIR_WAR_NO_STORAGE = 48 : failed to start storage
    VIR_ERR_NO_STORAGE_POOL = 49 : storage pool not found
    VIR_ERR_NO_STORAGE_VOL = 50 : storage volume not found
    }

    Functions

    virConnCopyLastError

    int	virConnCopyLastError		(virConnectPtr conn,
					 virErrorPtr to)

    Copy the content of the last error caught on that connection. One will need to free the result with virResetError().

    conn: pointer to the hypervisor connection
    to: target to receive the copy
    Returns: 0 if no error was found, the error code otherwise, and -1 in case of parameter error.
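    A sketch of the copy-then-reset pattern suggested for multithreaded callers:

    #include <stdio.h>
    #include <string.h>
    #include <libvirt/libvirt.h>
    #include <libvirt/virterror.h>

    /* Take a private copy of the connection's last error, print it, then
     * release the copied strings with virResetError(). */
    static void log_last_error(virConnectPtr conn)
    {
        virError err;
        memset(&err, 0, sizeof(err));
        if (virConnCopyLastError(conn, &err) > 0)
            fprintf(stderr, "libvirt error %d: %s\n",
                    err.code, err.message ? err.message : "no message");
        virResetError(&err);
    }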

    virConnGetLastError

    virErrorPtr	virConnGetLastError	(virConnectPtr conn)

    Provide a pointer to the last error caught on that connection. Simpler, but may not be suitable for multithreaded accesses, in which case use virConnCopyLastError().

    conn: pointer to the hypervisor connection
    Returns: a pointer to the last error, or NULL if none occurred.

    virConnResetLastError

    void	virConnResetLastError		(virConnectPtr conn)

    Reset the last error caught on that connection.

    conn: pointer to the hypervisor connection

    virConnSetErrorFunc

    void	virConnSetErrorFunc		(virConnectPtr conn,
					 void * userData,
					 virErrorFunc handler)

    Set a connection error handling function. If @handler is NULL it will be reset to the default, which is to pass the error back to the global library handler.

    conn: pointer to the hypervisor connection
    userData: pointer to the user data provided in the handler callback
    handler: the function to get called in case of error, or NULL
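    A sketch of a per-connection handler; the prefix string passed as userData is purely illustrative:

    #include <stdio.h>
    #include <libvirt/libvirt.h>
    #include <libvirt/virterror.h>

    /* A virErrorFunc that prefixes connection errors instead of using the
     * default stderr reporting. */
    static void prefixed_handler(void *userData, virErrorPtr error)
    {
        const char *prefix = userData;
        fprintf(stderr, "%s: %s\n", prefix,
                error && error->message ? error->message : "unknown libvirt error");
    }

    static void install_handler(virConnectPtr conn)
    {
        virConnSetErrorFunc(conn, (void *)"my-app", prefixed_handler);
        /* Passing NULL as the handler later restores the default behaviour. */
    }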

    virCopyLastError

    int	virCopyLastError		(virErrorPtr to)

    Copy the content of the last error caught at the library level. One will need to free the result with virResetError().

    to: target to receive the copy
    Returns: 0 if no error was found, the error code otherwise, and -1 in case of parameter error.

    virDefaultErrorFunc

    void	virDefaultErrorFunc		(virErrorPtr err)

    Default routine reporting an error to stderr.

    err: pointer to the error.

    virErrorFunc

    typedef void	(*virErrorFunc)		(void * userData,
					 virErrorPtr error)

    Signature of a function to use when there is an error raised by the library.

    userData: user provided data for the error callback
    error: the error being raised.

    virGetLastError

    virErrorPtr	virGetLastError		(void)

    Provide a pointer to the last error caught at the library level. Simpler, but may not be suitable for multithreaded accesses, in which case use virCopyLastError().

    Returns: a pointer to the last error, or NULL if none occurred.

    virResetError

    void	virResetError			(virErrorPtr err)

    Reset the error being pointed to.

    err: pointer to the virError to clean up

    virResetLastError

    void	virResetLastError		(void)

    Reset the last error caught at the library level.

    virSetErrorFunc

    void	virSetErrorFunc			(void * userData,
					 virErrorFunc handler)

    Set a library global error handling function. If @handler is NULL, it will reset to the default of printing on stderr. The errors raised there are those which no handler at the connection level could catch.

    userData: pointer to the user data provided in the handler callback
    handler: the function to get called in case of error, or NULL
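    A sketch that silences the default global reporting; errors then remain available through virGetLastError()/virCopyLastError():

    #include <libvirt/libvirt.h>
    #include <libvirt/virterror.h>

    /* Swallow errors globally; callers rely on return codes and the
     * virGetLastError()/virCopyLastError() accessors instead. */
    static void quiet_handler(void *userData, virErrorPtr error)
    {
        (void)userData;
        (void)error;
    }

    int main(void)
    {
        virSetErrorFunc(NULL, quiet_handler);

        virConnectPtr conn = virConnectOpenReadOnly(NULL);  /* any failure is now silent */
        if (conn)
            virConnectClose(conn);
        return 0;
    }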
    diff --git a/docs/hvsupport.html b/docs/hvsupport.html
    index 618bd5faa3..20c395138a 100644
    --- a/docs/hvsupport.html
    +++ b/docs/hvsupport.html

    Hypervisor support

    libvirt: Driver support matrix


    Driver support matrix


    This page documents which libvirt calls work on which libvirt drivers / hypervisors, and which version the API appeared in.

    This information changes frequently. This page was last checked or updated on 2007-08-20.

    Domain functions

    x = not supported; empty cell means no information

    Function Since Xen QEMU KVM Remote
    virConnectClose All All ≥ 0.2.0 ≥ 0.2.0 ≥ 0.3.0
    virConnectGetCapabilities 0.2.1 ≥ 0.2.1 ≥ 0.2.1 ≥ 0.2.1 ≥ 0.3.0
    virConnectGetHostname 0.3.0 ≥ 0.3.0 ≥ 0.3.3 ≥ 0.3.3 ≥ 0.3.0
    virConnectGetMaxVcpus 0.2.1 ≥ 0.2.1 x x ≥ 0.3.0
    virConnectGetType All All ≥ 0.2.0 ≥ 0.2.0 ≥ 0.3.0
    virConnectGetURI 0.3.0 ≥ 0.3.0 ≥ 0.3.0 ≥ 0.3.0 ≥ 0.3.0
    virConnectGetVersion All All ≥ 0.2.0 ≥ 0.2.0 ≥ 0.3.0
    virConnectListDefinedDomains 0.1.5 ≥ 0.1.9 ≥ 0.2.0 ≥ 0.2.0 ≥ 0.3.0
    virConnectListDomains All All ≥ 0.2.0 ≥ 0.2.0 ≥ 0.3.0
    virConnectNumOfDefinedDomains 0.1.5 ≥ 0.1.9 ≥ 0.2.0 ≥ 0.2.0 ≥ 0.3.0
    virConnectNumOfDomains All All ≥ 0.2.0 ≥ 0.2.0 ≥ 0.3.0
    virConnectOpen All All ≥ 0.2.0 ≥ 0.2.0 ≥ 0.3.0
    virConnectOpenReadOnly All All ≥ 0.2.0 ≥ 0.2.0 ≥ 0.3.0
    virDomainAttachDevice 0.1.9 ≥ 0.1.9 x x ≥ 0.3.0
    virDomainBlockStats 0.3.2 ≥ 0.3.2 x x ≥ 0.3.2
    virDomainCoreDump 0.1.9 ≥ 0.1.9 x x ≥ 0.3.0
    virDomainCreate 0.1.5 ≥ 0.1.9 ≥ 0.2.0 ≥ 0.2.0 ≥ 0.3.0
    virDomainCreateLinux All ≥ 0.0.5 x x ≥ 0.3.0
    virDomainDefineXML 0.1.5 ≥ 0.1.9 ≥ 0.2.0 ≥ 0.2.0 ≥ 0.3.0
    virDomainDestroy All All ≥ 0.2.0 ≥ 0.2.0 ≥ 0.3.0
    virDomainDetachDevice 0.1.9 ≥ 0.1.9 x x ≥ 0.3.0
    virDomainFree All All ≥ 0.2.0 ≥ 0.2.0 ≥ 0.3.0
    virDomainGetAutostart 0.2.1 x ≥ 0.2.1 ≥ 0.2.1 ≥ 0.3.0
    virDomainGetConnect 0.3.0 not a HV function
    virDomainGetID All All ≥ 0.2.0 ≥ 0.2.0 ≥ 0.3.0
    virDomainGetInfo All All ≥ 0.2.0 ≥ 0.2.0 ≥ 0.3.0
    virDomainGetMaxMemory All All x x ≥ 0.3.0
    virDomainGetMaxVcpus 0.2.1 ≥ 0.2.1 x x ≥ 0.3.0
    virDomainGetName All All ≥ 0.2.0 ≥ 0.2.0 ≥ 0.3.0
    virDomainGetOSType All All x x ≥ 0.3.0
    virDomainGetSchedulerParameters 0.2.3 ≥ 0.2.3 x x ≥ 0.3.0
    virDomainGetSchedulerType 0.2.3 ≥ 0.2.3 x x ≥ 0.3.0
    virDomainGetUUID 0.1.10 ≥ 0.1.10 ≥ 0.2.0 ≥ 0.2.0 ≥ 0.3.0
    virDomainGetUUIDString 0.1.10 ≥ 0.1.10 ≥ 0.2.0 ≥ 0.2.0 ≥ 0.3.0
    virDomainGetVcpus 0.1.4 ≥ 0.1.4 x x ≥ 0.3.0
    virDomainInterfaceStats 0.3.2 ≥ 0.3.2 x x ≥ 0.3.2
    virDomainGetXMLDesc All All ≥ 0.2.0 ≥ 0.2.0 ≥ 0.3.0
    virDomainLookupByID All All ≥ 0.2.0 ≥ 0.2.0 ≥ 0.3.0
    virDomainLookupByName All All ≥ 0.2.0 ≥ 0.2.0 ≥ 0.3.0
    virDomainLookupByUUID 0.1.10 ≥ 0.1.10 ≥ 0.2.0 ≥ 0.2.0 ≥ 0.3.0
    virDomainLookupByUUIDString 0.1.10 ≥ 0.1.10 ≥ 0.2.0 ≥ 0.2.0 ≥ 0.3.0
    virDomainMigrate 0.3.2 ≥ 0.3.2 x x 0.3.2
    virDomainPinVcpu 0.1.4 ≥ 0.1.4 x x ≥ 0.3.0
    virDomainReboot 0.1.0 ≥ 0.1.0 x x ≥ 0.3.0
    virDomainRestore All All x ≥ 0.3.2 ≥ 0.3.0
    virDomainResume All All ≥ 0.2.0 ≥ 0.2.0 ≥ 0.3.0
    virDomainSave All All x ≥ 0.3.2 ≥ 0.3.0
    virDomainSetAutostart 0.2.1 x ≥ 0.2.1 ≥ 0.2.1 ≥ 0.3.0
    virDomainSetMaxMemory All All x x ≥ 0.3.0
    virDomainSetMemory 0.1.1 ≥ 0.1.1 x x ≥ 0.3.0
    virDomainSetSchedulerParameters 0.2.3 ≥ 0.2.3 x x ≥ 0.3.0
    virDomainSetVcpus 0.1.4 ≥ 0.1.4 x x ≥ 0.3.0
    virDomainShutdown All All ≥ 0.2.0 ≥ 0.2.0 ≥ 0.3.0
    virDomainSuspend All All ≥ 0.2.0 ≥ 0.2.0 ≥ 0.3.0
    virDomainUndefine 0.1.5 ≥ 0.1.9 ≥ 0.2.0 ≥ 0.2.0 ≥ 0.3.0
    virGetVersion All All Returns -1 if HV unsupported.
    virInitialize 0.1.0 not a HV function
    virNodeGetInfo 0.1.0 ≥ 0.1.0 ≥ 0.2.0 ≥ 0.2.0 ≥ 0.3.0
    virNodeGetFreeMemory 0.3.3 ≥ 0.3.3 x x x
    virNodeGetCellsFreeMemory 0.3.3 ≥ 0.3.3 x x x
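    As a sketch, the calls marked as widely supported in the matrix above are enough to enumerate running domains on any of these drivers:

    #include <stdio.h>
    #include <libvirt/libvirt.h>

    int main(void)
    {
        virConnectPtr conn = virConnectOpenReadOnly(NULL);
        if (!conn)
            return 1;

        int ids[64];
        int n = virConnectListDomains(conn, ids, 64);
        for (int i = 0; i < n; i++) {
            virDomainPtr dom = virDomainLookupByID(conn, ids[i]);
            if (!dom)
                continue;
            virDomainInfo info;
            if (virDomainGetInfo(dom, &info) == 0)
                printf("%s: %lu KiB, %hu vcpu(s)\n",
                       virDomainGetName(dom), info.memory, info.nrVirtCpu);
            virDomainFree(dom);
        }
        virConnectClose(conn);
        return 0;
    }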

    Network functions

    Network functions are not hypervisor-specific. They require the libvirtd daemon to be running. Most network functions first appeared in libvirt 0.2.0.

    Function Since
    virConnectNumOfNetworks 0.2.0
    virConnectListNetworks 0.2.0
    virConnectNumOfDefinedNetworks 0.2.0
    virConnectListDefinedNetworks 0.2.0
    virNetworkCreate 0.2.0
    virNetworkCreateXML 0.2.0
    virNetworkDefineXML 0.2.0
    virNetworkDestroy 0.2.0
    virNetworkFree 0.2.0
    virNetworkGetAutostart 0.2.1
    virNetworkGetConnect 0.3.0
    virNetworkGetBridgeName 0.2.0
    virNetworkGetName 0.2.0
    virNetworkGetUUID 0.2.0
    virNetworkGetUUIDString 0.2.0
    virNetworkGetXMLDesc 0.2.0
    virNetworkLookupByName 0.2.0
    virNetworkLookupByUUID 0.2.0
    virNetworkLookupByUUIDString 0.2.0
    virNetworkSetAutostart 0.2.1
    virNetworkUndefine 0.2.0
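    A sketch using a few of these network calls against a running libvirtd:

    #include <stdio.h>
    #include <stdlib.h>
    #include <libvirt/libvirt.h>

    int main(void)
    {
        virConnectPtr conn = virConnectOpenReadOnly(NULL);
        if (!conn)
            return 1;

        int n = virConnectNumOfNetworks(conn);
        if (n > 0) {
            char **names = calloc(n, sizeof(*names));
            if (names) {
                n = virConnectListNetworks(conn, names, n);
                for (int i = 0; i < n; i++) {
                    virNetworkPtr net = virNetworkLookupByName(conn, names[i]);
                    if (net) {
                        char *bridge = virNetworkGetBridgeName(net);
                        printf("%s -> bridge %s\n", names[i], bridge ? bridge : "(unknown)");
                        free(bridge);
                        virNetworkFree(net);
                    }
                    free(names[i]);
                }
                free(names);
            }
        }
        virConnectClose(conn);
        return 0;
    }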

    diff --git a/docs/hvsupport.html.in b/docs/hvsupport.html.in
    new file mode 100644
    index 0000000000..f6f3a778e0
    --- /dev/null
    +++ b/docs/hvsupport.html.in

    Driver support matrix

    This page documents which libvirt calls work on which libvirt drivers / hypervisors, and which version the API appeared in.

    This information changes frequently. This page was last checked or updated on 2007-08-20.

    diff --git a/docs/index.html b/docs/index.html
    index 348fe59168..607408ea00 100644
    --- a/docs/index.html
    +++ b/docs/index.html

    libvirt: The virtualization API

    what is libvirt?

    Libvirt is a C toolkit to interact with the virtualization capabilities of recent versions of Linux (and other OSes). It is free software available under the GNU Lesser General Public License. Virtualization of the Linux Operating System means the ability to run multiple instances of Operating Systems concurrently on a single hardware system where the basic resources are driven by a Linux (or Solaris) instance. The library aims at providing a long term stable C API initially for Xen paravirtualization but it can also integrate with other virtualization mechanisms. It currently also supports QEMU, KVM and OpenVZ.

    (The remaining hunks in this stretch are changes to the XSL templates that generate the API reference pages: "Function type:" entries are now rendered as "typedef ... (* ... )" signatures, and each generated page gains a Table of Contents with Macros, Types, Functions and Description sections.)
    diff --git a/docs/news.html b/docs/news.html
    index 319567b645..9211afaba8 100644
    --- a/docs/news.html
    +++ b/docs/news.html

    libvirt: Releases

    Releases

    Here is the list of official releases. However, since it is early on in the development of libvirt, it is preferable when possible to just use the CVS version or snapshot; contact the mailing list and check the ChangeLog to gauge progress.

    0.4.2: Apr 8 2008

      +
      • New features: memory operation for QEmu/KVM driver (Cole Robinson), new routed networking schemas (Mads Olesen)
      • Documentation: storage documentation fixes (Atsushi Sakai), many typo cleanups (Atsushi Sakai), string fixes (Francesco Tombolini)
      • Bug fixes: pointer errors in qemu (Jim Meyering), iSCSI login fix (Chris Lalancette), well formedness error in test driver capabilities (Cole Robinson), fixes cleanup code when daemon exits (Daniel Berrange), CD Rom change on live QEmu/KVM domains (Cole Robinson), setting scheduler @@ -17,8 +42,7 @@ and check the ChangeLog to gauge progress.

        0 (Daniel Berrange), XML output fix for directory pools (Daniel Berrange), remove dandling domain/net/conn pointers from error data, do not ask polkit auth when root (Daniel Berrange), handling of fork and - pipe errors when starting the daemon (Richard Jones)

      • -
      • Improvements: better validation of MAC addresses (Jim Meyering and + pipe errors when starting the daemon (Richard Jones)
      • Improvements: better validation of MAC addresses (Jim Meyering and Hiroyuki Kaguchi), virsh vcpupin error report (Shigeki Sakamoto), keep boot tag on HVM domains (Cole Robinson), virsh non-root should not be limited to read @@ -33,17 +57,15 @@ and check the ChangeLog to gauge progress.

        0 move linux container support in the daemon (Dan Berrange), older awk implementation support (Mike Gerdts), NUMA support in test driver (Cole Robinson), xen and hvm added to test driver capabilities - (Cole Robinson)

      • -
      • Code cleanup: remove unused getopt header (Jim Meyering), mark more - strings as translatable (Guido Günther and Jim Meyering), convert + (Cole Robinson)
      • Code cleanup: remove unused getopt header (Jim Meyering), mark more + strings as translatable (Guido Günther and Jim Meyering), convert error strings to something meaningful and translatable (Jim Meyering), - Linux Containers code cleanup, last error initializer (Guido Günther)
      • -

      0.4.1: Mar 3 2008

      • New features: build on MacOSX (Richard Jones), storage management - (Daniel Berrange), Xenner - Xen on KVM - support (Daniel Berrange)
      • -
      • Documentation: Fix of various typos (Atsushi SAKAI), memory and + Linux Containers code cleanup, last error initializer (Guido Günther)
      +

      0.4.1: Mar 3 2008

      +
      • New features: build on MacOSX (Richard Jones), storage management + (Daniel Berrange), Xenner - Xen on KVM - support (Daniel Berrange)
      • Documentation: Fix of various typos (Atsushi SAKAI), memory and vcpu settings details (Richard Jones), ethernet bridging typo - (Maxwell Bottiger), add storage APIs documentation (Daniel Berrange)
      • -
      • Bug fixes: OpenVZ code compilation (Mikhail Pokidko), crash in + (Maxwell Bottiger), add storage APIs documentation (Daniel Berrange)
      • Bug fixes: OpenVZ code compilation (Mikhail Pokidko), crash in policykit auth handling (Daniel Berrange), large config files (Daniel Berrange), cpumap hypercall size (Saori Fukuta), crash in remote auth (Daniel Berrange), ssh args error (Daniel Berrange), @@ -57,8 +79,7 @@ and check the ChangeLog to gauge progress.

        0 qemud signal pipe (Daniel Berrange), double free in OpenVZ (Anton Protopopov), handle mac without addresses (Shigeki Sakamoto), MAC addresses checks (Shigeki Sakamoto and Richard Jones), - allow to read non-seekable files (Jim Meyering)

      • -
      • Improvements: Windows build (Richard Jones), KVM/QEmu shutdown + allow to read non-seekable files (Jim Meyering)
      • Improvements: Windows build (Richard Jones), KVM/QEmu shutdown (Guido Guenther), catch virExec output on debug (Mark McLoughlin), integration of iptables and lokkit (Mark McLoughlin), keymap parameter for VNC servers (Daniel Hokka Zakrisson), enable debug @@ -70,8 +91,7 @@ and check the ChangeLog to gauge progress.

        0 virsh commands to manipulate and create storage(Daniel Berrange), update use of PolicyKit APIs, better detection of fedault hypervisor, block device statistics for QEmu/KVM (Richard Jones), various improvements - for Xenner (Daniel Berrange)

      • -
      • Code cleanups: avoid warnings (Daniel Berrange), virRun helper + for Xenner (Daniel Berrange)
      • Code cleanups: avoid warnings (Daniel Berrange), virRun helper function (Dan Berrange), iptable code fixes (Mark McLoughlin), static and const cleanups (Jim Meyering), malloc and python cleanups (Jim Meyering), xstrtol_ull and xstrtol_ll functions (Daniel Berrange), @@ -89,20 +109,19 @@ and check the ChangeLog to gauge progress.

        0 refactoring of code dealing with hypervisor capabilities (Daniel Berrange), qemudReportError to use virErrorMsg (Cole Robinson), intemediate library and Makefiles for compiling static and coverage - rule support (Jim Meyering), cleanup of various leaks (Jim Meyering)

      • -

      0.4.0: Dec 18 2007

      • New features: Compilation on Windows cygwin/mingw (Richard Jones), + rule support (Jim Meyering), cleanup of various leaks (Jim Meyering)
      +

      0.4.0: Dec 18 2007

      +
      • New features: Compilation on Windows cygwin/mingw (Richard Jones), Ruby bindings (David Lutterkort), SASL based authentication for libvirt remote support (Daniel Berrange), PolicyKit authentication - (Daniel Berrange)
      • -
      • Documentation: example files for QEMU and libvirtd configuations + (Daniel Berrange)
      • Documentation: example files for QEMU and libvirtd configuations (Daniel Berrange), english cleanups (Jim Paris), CIM and OpenVZ references, document <shareable/>, daemon startup when using QEMU/KVM, document HV support for new NUMA calls (Richard Jones), various english fixes (Bruce Montague), OCaml docs links (Richard Jones), describe the various bindings add Ruby link, Windows support page (Richard Jones), authentication documentation updates (Daniel Berrange) -
      • -
      • Bug fixes: NUMA topology error handling (Beth Kon), NUMA topology +
      • Bug fixes: NUMA topology error handling (Beth Kon), NUMA topology cells without CPU (Beth Kon), XML to/from XM bridge config (Daniel Berrange), XM processing of vnc parameters (Daniel Berrange), Reset migration source after failure (Jim Paris), negative integer in config @@ -126,8 +145,7 @@ and check the ChangeLog to gauge progress.

        0 parameter setting in XM config (Saori Fukuta), credential handling fixes (Daniel Berrange), fix compatibility with Xen 3.2.0 (Daniel Berrange) -

      • -
      • Improvements: /etc/libvirt/qemu.conf configuration for QEMU driver +
      • Improvements: /etc/libvirt/qemu.conf configuration for QEMU driver (Daniel Berrange), NUMA cpu pinning in config files (DV and Saori Fukuta), CDRom media change in KVM/QEMU (Daniel Berrange), tests for <shareable/> in configs, pinning inactive domains for Xen 3.0.3 @@ -135,8 +153,7 @@ and check the ChangeLog to gauge progress.

        0 --without-libvirtd config option (Richard Jones), Python bindings for NUMA, add extra utility functions to buffer (Richard Jones), separate qparams module for handling query parameters (Richard Jones) -

      • -
      • Code cleanups: remove virDomainRestart from API as it was never used +
      • Code cleanups: remove virDomainRestart from API as it was never used (Richard Jones), constify params for attach/detach APIs (Daniel Berrange), gcc printf attribute checkings (Jim Meyering), refactoring of device parsing code and shell escaping (Daniel Berrange), virsh schedinfo @@ -156,11 +173,10 @@ and check the ChangeLog to gauge progress.

        0 port (Richard Jones), disable the proxy if using PolicyKit, readline availability detection, test libvirtd's config-processing code (Jim Meyering), use a variable name as sizeof argument (Jim Meyering) -

      • -

      0.3.3: Sep 30 2007

      • New features: Avahi mDNS daemon export (Daniel Berrange), - NUMA support (Beth Kan)
      • -
      • Documentation: cleanups (Toth Istvan), typos (Eduardo Pereira),
      • -
      • Bug fixes: memory corruption on large dumps (Masayuki Sunou), fix +
      +

      0.3.3: Sep 30 2007

      +
      • New features: Avahi mDNS daemon export (Daniel Berrange), + NUMA support (Beth Kan)
      • Documentation: cleanups (Toth Istvan), typos (Eduardo Pereira),
      • Bug fixes: memory corruption on large dumps (Masayuki Sunou), fix virsh vncdisplay command exit (Masayuki Sunou), Fix network stats TX/RX result (Richard Jones), warning on Xen 3.0.3 (Richard Jones), missing buffer check in virDomainXMLDevID (Hugh Brock), avoid zombies @@ -168,8 +184,7 @@ and check the ChangeLog to gauge progress.

        0 (Richard Jones), avoid ssh tty prompt (Daniel Berrange), username handling for remote URIs (Fabian Deutsch), fix potential crash on multiple input XML tags (Daniel Berrange), Solaris Xen hypercalls - fixup (Mark Johnson)

      • -
      • Improvements: OpenVZ support (Shuveb Hussain and Anoop Cyriac), + fixup (Mark Johnson)
      • Improvements: OpenVZ support (Shuveb Hussain and Anoop Cyriac), CD-Rom reload on XEn (Hugh Brock), PXE boot got QEmu/KVM (Daniel Berrange), QEmu socket permissions customization (Daniel Berrange), more QEmu support (Richard Jones), better path detection for qemu and @@ -178,46 +193,41 @@ and check the ChangeLog to gauge progress.

        0 default bootloader support (Daniel Berrange), new virNodeGetFreeMemory API, vncpasswd extraction in configuration files if secure (Mark Johnson and Daniel Berrange), Python bindings for block and interface - statistics

      • -
      • Code cleanups: virDrvOpenRemoteFlags definition (Richard Jones), - configure tests and output (Daniel Berrange)
      • -

      0.3.2: Aug 21 2007

      • New features: KVM migration and save/restore (Jim Paris), + statistics
      • Code cleanups: virDrvOpenRemoteFlags definition (Richard Jones), + configure tests and output (Daniel Berrange)
      +

      0.3.2: Aug 21 2007

      +
      • New features: KVM migration and save/restore (Jim Paris), added API for migration (Richard Jones), added APIs for block device and - interface statistic (Richard Jones).
      • -
      • Documentation: examples for XML network APIs, + interface statistic (Richard Jones).
      • Documentation: examples for XML network APIs, fix typo and schedinfo synopsis in man page (Atsushi SAKAI), - hypervisor support page update (Richard Jones).
      • -
      • Bug fixes: remove a couple of leaks in QEmu/KVM backend(Daniel berrange), + hypervisor support page update (Richard Jones).
      • Bug fixes: remove a couple of leaks in QEmu/KVM backend(Daniel berrange), fix GnuTLS 1.0 compatibility (Richard Jones), --config/-f option mistake for libvirtd (Richard Jones), remove leak in QEmu backend (Jim Paris), fix some QEmu communication bugs (Jim Paris), UUID lookup though proxy fix, setvcpus checking bugs (with Atsushi SAKAI), int checking in virsh parameters (with Masayuki Sunou), deny devices attach/detach for < Xen 3.0.4 (Masayuki Sunou), XenStore query - memory leak (Masayuki Sunou), virsh schedinfo cleanup (Saori Fukuta).
      • -
      • Improvement: virsh new ttyconsole command, networking API implementation + memory leak (Masayuki Sunou), virsh schedinfo cleanup (Saori Fukuta).
      • Improvement: virsh new ttyconsole command, networking API implementation for test driver (Daniel berrange), qemu/kvm feature reporting of ACPI/APIC (David Lutterkort), checking of QEmu architectures (Daniel berrange), improve devices XML errors reporting (Masayuki Sunou), speedup of domain queries on Xen (Daniel berrange), augment XML dumps with interface devices names (Richard Jones), internal API to query drivers for features (Richard Jones). -
      • -
      • Cleanups: Improve virNodeGetInfo implentation (Daniel berrange), +
      • Cleanups: Improve virNodeGetInfo implentation (Daniel berrange), general UUID code cleanup (Daniel berrange), fix API generator - file selection.
      • -

      0.3.1: Jul 24 2007

      • Documentation: index to remote page, script to test certificates, + file selection.
      +

      0.3.1: Jul 24 2007

      +
      • Documentation: index to remote page, script to test certificates, IPv6 remote support docs (Daniel Berrange), document VIRSH_DEFAULT_CONNECT_URI in virsh man page (David Lutterkort), - Relax-NG early grammar for the network XML (David Lutterkort)
      • -
      • Bug fixes: leaks in disk XML parsing (Masayuki Sunou), hypervisor + Relax-NG early grammar for the network XML (David Lutterkort)
      • Bug fixes: leaks in disk XML parsing (Masayuki Sunou), hypervisor alignment call problems on PPC64 (Christian Ehrhardt), dead client registration in daemon event loop (Daniel Berrange), double free in error handling (Daniel Berrange), close on exec for log file descriptors in the daemon (Daniel Berrange), avoid caching problem in remote daemon (Daniel Berrange), avoid crash after QEmu domain - failure (Daniel Berrange)
      • -
      • Improvements: checks of x509 certificates and keys (Daniel Berrange), + failure (Daniel Berrange)
      • Improvements: checks of x509 certificates and keys (Daniel Berrange), error reports in the daemon (Daniel Berrange), checking of Ethernet MAC addresses in XML configs (Masayuki Sunou), support for a new clock switch between UTC and localtime (Daniel Berrange), early @@ -225,19 +235,18 @@ and check the ChangeLog to gauge progress.

        0 on PS/2 and USB buses (Daniel Berrange), more tests especially the QEmu support (Daniel Berrange), range check in credit scheduler (with Saori Fukuta and Atsushi Sakai), add support for listen VNC - parameter un QEmu and fix command line arg (Daniel Berrange)

      • -
      • Cleanups: debug tracing (Richard Jones), removal of --with-qemud-pid-file + parameter un QEmu and fix command line arg (Daniel Berrange)
      • Cleanups: debug tracing (Richard Jones), removal of --with-qemud-pid-file (Richard Jones), remove unused virDeviceMode, new util module for code shared between drivers (Shuveb Hussain), xen header location - detection (Richard Jones)
      • -

      0.3.0: Jul 9 2007

      • Secure Remote support (Richard Jones). + detection (Richard Jones)
      +

      0.3.0: Jul 9 2007

      +
      • Secure Remote support (Richard Jones). See the remote page of the documentation
      • Documentation: remote support (Richard Jones), description of the URI connection strings (Richard Jones), update of virsh man page, matrix of libvirt API/hypervisor support with version - information (Richard Jones)
      • -
      • Bug fixes: examples Makefile.am generation (Richard Jones), + information (Richard Jones)
      • Bug fixes: examples Makefile.am generation (Richard Jones), SetMem fix (Mark Johnson), URI handling and ordering of drivers (Daniel Berrange), fix virsh help without hypervisor (Richard Jones), id marshalling fix (Daniel Berrange), fix virConnectGetMaxVcpus @@ -245,14 +254,12 @@ and check the ChangeLog to gauge progress.

        0 parameters handling for Xen (Richard Jones), various early remote bug fixes (Richard Jones), remove virsh leaks of domains references (Masayuki Sunou), configCache refill bug (Richard Jones), fix - XML serialization bugs

      • -
      • Improvements: QEmu switch to XDR-based protocol (Dan Berrange), + XML serialization bugs
[remainder of the diff of docs/news.html: the generated release-notes page is regenerated with the new layout; the rewrapped entries for releases 0.3.0 through 0.0.1 match the content of the docs/news.html.in source added below]
diff --git a/docs/news.html.in b/docs/news.html.in
new file mode 100644
index 0000000000..522476a0b8
--- /dev/null
+++ b/docs/news.html.in
@@ -0,0 +1,607 @@

    Releases

    +

Here is the list of official releases; however, since it is early in the development of libvirt, it is preferable when possible to just use the CVS version or a snapshot, contact the mailing list and check the ChangeLog to gauge progress.

    +

    0.4.2: Apr 8 2008

    +
      +
    • New features: memory operation for QEmu/KVM driver (Cole Robinson), + new routed networking schemas (Mads Olesen)
    • +
    • Documentation: storage documentation fixes (Atsushi Sakai), many + typo cleanups (Atsushi Sakai), string fixes (Francesco Tombolini)
    • +
• Bug fixes: pointer errors in qemu (Jim Meyering), iSCSI login fix (Chris Lalancette), well-formedness error in test driver capabilities (Cole Robinson), fix cleanup code when the daemon exits (Daniel Berrange), CD-ROM change on live QEmu/KVM domains (Cole Robinson), setting scheduler parameters is forbidden for read-only connections (Saori Fukuta), fixes for TAP devices (Daniel Berrange), assorted storage driver fixes (Daniel Berrange), Makefile fixes (Jim Meyering), Xen-3.2 hypercall fix, fix iptables rules to avoid blocking traffic within the virtual network (Daniel Berrange), XML output fix for directory pools (Daniel Berrange), remove dangling domain/net/conn pointers from error data, do not ask polkit auth when root (Daniel Berrange), handling of fork and pipe errors when starting the daemon (Richard Jones)
    • +
    • Improvements: better validation of MAC addresses (Jim Meyering and + Hiroyuki Kaguchi), + virsh vcpupin error report (Shigeki Sakamoto), keep boot tag on + HVM domains (Cole Robinson), virsh non-root should not be limited to read + only anymore (Daniel Berrange), switch to polkit-auth from polkit-grant + (Daniel Berrange), better handling of missing SElinux data (Daniel + Berrange and Jim Meyering), cleanup of the connection opening logic + (Daniel Berrange), first bits of Linux Containers support (Dave Leskovec), + scheduler API support via xend (Saori Fukuta), improvement of the + testing framework and first tests (Jim Meyering), missing error + messages from virsh parameters validation (Shigeki Sakamoto), + improve support of older iscsiadm command (Chris Lalancette), + move linux container support in the daemon (Dan Berrange), older + awk implementation support (Mike Gerdts), NUMA support in test + driver (Cole Robinson), xen and hvm added to test driver capabilities + (Cole Robinson)
    • +
    • Code cleanup: remove unused getopt header (Jim Meyering), mark more + strings as translatable (Guido Günther and Jim Meyering), convert + error strings to something meaningful and translatable (Jim Meyering), + Linux Containers code cleanup, last error initializer (Guido Günther)
    • +
    +

    0.4.1: Mar 3 2008

    +
      +
    • New features: build on MacOSX (Richard Jones), storage management + (Daniel Berrange), Xenner - Xen on KVM - support (Daniel Berrange)
    • +
    • Documentation: Fix of various typos (Atsushi SAKAI), memory and + vcpu settings details (Richard Jones), ethernet bridging typo + (Maxwell Bottiger), add storage APIs documentation (Daniel Berrange)
    • +
    • Bug fixes: OpenVZ code compilation (Mikhail Pokidko), crash in + policykit auth handling (Daniel Berrange), large config files + (Daniel Berrange), cpumap hypercall size (Saori Fukuta), crash + in remote auth (Daniel Berrange), ssh args error (Daniel Berrange), + preserve vif order from config files (Hiroyuki Kaguchi), invalid + pointer access (Jim Meyering), virDomainGetXMLDesc flag handling, + device name conversion on stats (Daniel Berrange), double mutex lock + (Daniel Berrange), config file reading crashes (Guido Guenther), + xenUnifiedDomainSuspend bug (Marcus Meissner), do not crash if + /sys/hypervisor/capabilities is missing (Mark McLoughlin), + virHashRemoveSet bug (Hiroyuki Kaguchi), close-on-exec flag for + qemud signal pipe (Daniel Berrange), double free in OpenVZ + (Anton Protopopov), handle mac without addresses (Shigeki Sakamoto), + MAC addresses checks (Shigeki Sakamoto and Richard Jones), + allow to read non-seekable files (Jim Meyering)
    • +
    • Improvements: Windows build (Richard Jones), KVM/QEmu shutdown + (Guido Guenther), catch virExec output on debug (Mark McLoughlin), + integration of iptables and lokkit (Mark McLoughlin), keymap + parameter for VNC servers (Daniel Hokka Zakrisson), enable debug + by default using VIR_DEBUG (Daniel Berrange), xen 3.2 fixes + (Daniel Berrange), Python bindings for VCPU and scheduling + (Daniel Berrange), framework for automatic code syntax checks + (Jim Meyering), allow kernel+initrd setup in Xen PV (Daniel Berrange), + allow change of Disk/NIC of an inactive domains (Shigeki Sakamoto), + virsh commands to manipulate and create storage(Daniel Berrange), + update use of PolicyKit APIs, better detection of fedault hypervisor, + block device statistics for QEmu/KVM (Richard Jones), various improvements + for Xenner (Daniel Berrange)
    • +
    • Code cleanups: avoid warnings (Daniel Berrange), virRun helper + function (Dan Berrange), iptable code fixes (Mark McLoughlin), + static and const cleanups (Jim Meyering), malloc and python cleanups + (Jim Meyering), xstrtol_ull and xstrtol_ll functions (Daniel Berrange), + remove no-op networking from OpenVZ (Daniel Berrange), python generator + cleanups (Daniel Berrange), cleanup ref counting (Daniel Berrange), + remove uninitialized warnings (Jim Meyering), cleanup configure + for RHEL4 (Daniel Berrange), CR/LF cleanups (Richard Jones), + various automatic code check and associated cleanups (Jim Meyering), + various memory leaks (Jim Meyering), fix compilation when building + without Xen (Guido Guenther), mark translatables strings (Jim Meyering), + use virBufferAddLit for constant strings (Jim Meyering), fix + make distcheck (Jim Meyering), return values for python bindings (Cole + Robinson), trailing blanks fixes (Jim Meyering), gcc-4.3.0 fixes + (Mark McLoughlin), use safe read and write routines (Jim Meyering), + refactoring of code dealing with hypervisor capabilities (Daniel + Berrange), qemudReportError to use virErrorMsg (Cole Robinson), + intemediate library and Makefiles for compiling static and coverage + rule support (Jim Meyering), cleanup of various leaks (Jim Meyering)
    • +
    +

    0.4.0: Dec 18 2007

    +
      +
    • New features: Compilation on Windows cygwin/mingw (Richard Jones), + Ruby bindings (David Lutterkort), SASL based authentication for + libvirt remote support (Daniel Berrange), PolicyKit authentication + (Daniel Berrange)
    • +
    • Documentation: example files for QEMU and libvirtd configuations + (Daniel Berrange), english cleanups (Jim Paris), CIM and OpenVZ + references, document <shareable/>, daemon startup when using + QEMU/KVM, document HV support for new NUMA calls (Richard Jones), + various english fixes (Bruce Montague), OCaml docs links (Richard Jones), + describe the various bindings add Ruby link, Windows support page + (Richard Jones), authentication documentation updates (Daniel Berrange) +
    • +
    • Bug fixes: NUMA topology error handling (Beth Kon), NUMA topology + cells without CPU (Beth Kon), XML to/from XM bridge config (Daniel + Berrange), XM processing of vnc parameters (Daniel Berrange), Reset + migration source after failure (Jim Paris), negative integer in config + (Tatsuro Enokura), zero terminating string buffer, detect integer + overflow (Jim Meyering), QEmu command line ending fixes (Daniel Berrange), + recursion problem in the daemon (Daniel Berrange), HVM domain with CDRom + (Masayuki Sunou), off by one error in NUMA cpu count (Beth Kon), + avoid xend errors when adding disks (Masayuki Sunou), compile error + (Chris Lalancette), transposed fwrite args (Jim Meyering), compile + without xen and on solaris (Jim Paris), parsing of interface names + (Richard Jones), overflow for starts on 32bits (Daniel Berrange), + fix problems in error reporting (Saori Fukuta), wrong call to + brSetForwardDelay changed to brSetEnableSTP (Richard Jones), + allow shareable disk in old Xen, fix wrong certificate file (Jim + Meyering), avoid some startup error when non-root, off-by-1 buffer + NULL termination (Daniel Berrange), various string allocation fixes + (Daniel Berrange), avoid problems with vnetXXX interfaces in domain dumps + (Daniel Berrange), build fixes for RHEL (Daniel Berrange), virsh prompt + should not depend on uid (Richard Jones), fix scaping of '<' (Richard + Jones), fix detach-disk on Xen tap devices (Saori Fukuta), CPU + parameter setting in XM config (Saori Fukuta), credential handling + fixes (Daniel Berrange), fix compatibility with Xen 3.2.0 (Daniel + Berrange) +
    • +
    • Improvements: /etc/libvirt/qemu.conf configuration for QEMU driver + (Daniel Berrange), NUMA cpu pinning in config files (DV and Saori Fukuta), + CDRom media change in KVM/QEMU (Daniel Berrange), tests for + <shareable/> in configs, pinning inactive domains for Xen 3.0.3 + (Saori Fukuta), use gnulib for portability enhancement (Jim Meyering), + --without-libvirtd config option (Richard Jones), Python bindings for + NUMA, add extra utility functions to buffer (Richard Jones), + separate qparams module for handling query parameters (Richard Jones) +
    • +
    • Code cleanups: remove virDomainRestart from API as it was never used + (Richard Jones), constify params for attach/detach APIs (Daniel Berrange), + gcc printf attribute checkings (Jim Meyering), refactoring of device + parsing code and shell escaping (Daniel Berrange), virsh schedinfo + parameters validation (Masayuki Sunou), Avoid risk of format string abuse + (Jim Meyering), integer parsing cleanups (Jim Meyering), build out + of the source tree (Jim Meyering), URI parsing refactoring (Richard + Jones), failed strdup/malloc handling (Jim Meyering), Make "make + distcheck" work (Jim Meyering), improve xen internall error reports + (Richard Jones), cleanup of the daemon remote code (Daniel Berrange), + rename error VIR_FROM_LINUX to VIR_FROM_STATS_LINUX (Richard Jones), + don't compile the proxy if without Xen (Richard Jones), fix paths when + configuring for /usr prefix, improve error reporting code (Jim Meyering), + detect heap allocation failure (Jim Meyering), disable xen sexpr parsing + code if Xen is disabled (Daniel Berrange), cleanup of the GetType + entry point for Xen drivers, move some QEmu path handling to generic + module (Daniel Berrange), many code cleanups related to the Windows + port (Richard Jones), disable the proxy if using PolicyKit, readline + availability detection, test libvirtd's config-processing code (Jim + Meyering), use a variable name as sizeof argument (Jim Meyering) +
    • +
    +

    0.3.3: Sep 30 2007

    +
      +
    • New features: Avahi mDNS daemon export (Daniel Berrange), + NUMA support (Beth Kan)
    • +
    • Documentation: cleanups (Toth Istvan), typos (Eduardo Pereira),
    • +
    • Bug fixes: memory corruption on large dumps (Masayuki Sunou), fix + virsh vncdisplay command exit (Masayuki Sunou), Fix network stats + TX/RX result (Richard Jones), warning on Xen 3.0.3 (Richard Jones), + missing buffer check in virDomainXMLDevID (Hugh Brock), avoid zombies + when using remote (Daniel Berrange), xend connection error message + (Richard Jones), avoid ssh tty prompt (Daniel Berrange), username + handling for remote URIs (Fabian Deutsch), fix potential crash + on multiple input XML tags (Daniel Berrange), Solaris Xen hypercalls + fixup (Mark Johnson)
    • +
    • Improvements: OpenVZ support (Shuveb Hussain and Anoop Cyriac), + CD-Rom reload on XEn (Hugh Brock), PXE boot got QEmu/KVM (Daniel + Berrange), QEmu socket permissions customization (Daniel Berrange), + more QEmu support (Richard Jones), better path detection for qemu and + dnsmasq (Richard Jones), QEmu flags are per-Domain (Daniel Berrange), + virsh freecell command, Solaris portability fixes (Mark Johnson), + default bootloader support (Daniel Berrange), new virNodeGetFreeMemory + API, vncpasswd extraction in configuration files if secure (Mark + Johnson and Daniel Berrange), Python bindings for block and interface + statistics
    • +
    • Code cleanups: virDrvOpenRemoteFlags definition (Richard Jones), + configure tests and output (Daniel Berrange)
    • +
    +

    0.3.2: Aug 21 2007

    +
      +
    • New features: KVM migration and save/restore (Jim Paris), + added API for migration (Richard Jones), added APIs for block device and + interface statistic (Richard Jones).
    • +
    • Documentation: examples for XML network APIs, + fix typo and schedinfo synopsis in man page (Atsushi SAKAI), + hypervisor support page update (Richard Jones).
    • +
    • Bug fixes: remove a couple of leaks in QEmu/KVM backend(Daniel berrange), + fix GnuTLS 1.0 compatibility (Richard Jones), --config/-f option + mistake for libvirtd (Richard Jones), remove leak in QEmu backend + (Jim Paris), fix some QEmu communication bugs (Jim Paris), UUID + lookup though proxy fix, setvcpus checking bugs (with Atsushi SAKAI), + int checking in virsh parameters (with Masayuki Sunou), deny devices + attach/detach for < Xen 3.0.4 (Masayuki Sunou), XenStore query + memory leak (Masayuki Sunou), virsh schedinfo cleanup (Saori Fukuta).
    • +
    • Improvement: virsh new ttyconsole command, networking API implementation + for test driver (Daniel berrange), qemu/kvm feature reporting of + ACPI/APIC (David Lutterkort), checking of QEmu architectures (Daniel + berrange), improve devices XML errors reporting (Masayuki Sunou), + speedup of domain queries on Xen (Daniel berrange), augment XML dumps + with interface devices names (Richard Jones), internal API to query + drivers for features (Richard Jones). +
    • +
• Cleanups: improve virNodeGetInfo implementation (Daniel Berrange), general UUID code cleanup (Daniel Berrange), fix API generator file selection.
    • +
    +

    0.3.1: Jul 24 2007

    +
      +
    • Documentation: index to remote page, script to test certificates, + IPv6 remote support docs (Daniel Berrange), document + VIRSH_DEFAULT_CONNECT_URI in virsh man page (David Lutterkort), + Relax-NG early grammar for the network XML (David Lutterkort)
    • +
    • Bug fixes: leaks in disk XML parsing (Masayuki Sunou), hypervisor + alignment call problems on PPC64 (Christian Ehrhardt), dead client + registration in daemon event loop (Daniel Berrange), double free + in error handling (Daniel Berrange), close on exec for log file + descriptors in the daemon (Daniel Berrange), avoid caching problem + in remote daemon (Daniel Berrange), avoid crash after QEmu domain + failure (Daniel Berrange)
    • +
    • Improvements: checks of x509 certificates and keys (Daniel Berrange), + error reports in the daemon (Daniel Berrange), checking of Ethernet MAC + addresses in XML configs (Masayuki Sunou), support for a new + clock switch between UTC and localtime (Daniel Berrange), early + version of OpenVZ support (Shuveb Hussain), support for input devices + on PS/2 and USB buses (Daniel Berrange), more tests especially + the QEmu support (Daniel Berrange), range check in credit scheduler + (with Saori Fukuta and Atsushi Sakai), add support for listen VNC + parameter un QEmu and fix command line arg (Daniel Berrange)
    • +
    • Cleanups: debug tracing (Richard Jones), removal of --with-qemud-pid-file + (Richard Jones), remove unused virDeviceMode, new util module for + code shared between drivers (Shuveb Hussain), xen header location + detection (Richard Jones)
    • +
    +

    0.3.0: Jul 9 2007

    +
      +
    • Secure Remote support (Richard Jones). + See the remote page + of the documentation +
    • +
    • Documentation: remote support (Richard Jones), description of + the URI connection strings (Richard Jones), update of virsh man + page, matrix of libvirt API/hypervisor support with version + information (Richard Jones)
    • +
    • Bug fixes: examples Makefile.am generation (Richard Jones), + SetMem fix (Mark Johnson), URI handling and ordering of + drivers (Daniel Berrange), fix virsh help without hypervisor (Richard + Jones), id marshalling fix (Daniel Berrange), fix virConnectGetMaxVcpus + on remote (Richard Jones), avoid a realloc leak (Jim Meyering), scheduler + parameters handling for Xen (Richard Jones), various early remote + bug fixes (Richard Jones), remove virsh leaks of domains references + (Masayuki Sunou), configCache refill bug (Richard Jones), fix + XML serialization bugs
    • +
• Improvements: QEmu switch to XDR-based protocol (Dan Berrange), device attach/detach commands (Masayuki Sunou), OCaml bindings (Richard Jones), new entry points virDomainGetConnect and virNetworkGetConnect useful for bindings (Richard Jones), reunification of the remote and qemu daemons under a single libvirtd with a config file (Daniel Berrange)
    • +
    • Cleanups: parsing of connection URIs (Richard Jones), messages + from virsh (Saori Fukuta), Coverage files (Daniel Berrange), + Solaris fixes (Mark Johnson), avoid [r]index calls (Richard Jones), + release information in Xen backend, virsh cpupin command cleanups + (Masayuki Sunou), xen:/// suppport as standard Xen URI (Richard Jones and + Daniel Berrange), improve driver selection/decline mechanism (Richard + Jones), error reporting on XML dump (Richard Jones), Remove unused + virDomainKernel structure (Richard Jones), daemon event loop event + handling (Daniel Berrange), various unifications cleanup in the daemon + merging (Daniel Berrange), internal file and timer monitoring API + (Daniel Berrange), remove libsysfs dependancy, call brctl program + directly (Daniel Berrange), virBuffer functions cleanups (Richard Jones), + make init script LSB compliant, error handling on lookup functions + (Richard Jones), remove internal virGetDomainByID (Richard Jones), + revamp of xen subdrivers interfaces (Richard Jones)
    • +
    • Localization updates
    • +
    +

    0.2.3: Jun 8 2007

    +
      +
    • Documentation: documentation for upcoming remote access (Richard Jones), + virConnectNumOfDefinedDomains doc (Jan Michael), virsh help messages + for dumpxml and net-dumpxml (Chris Wright),
    • +
    • Bug fixes: RelaxNG schemas regexp fix (Robin Green), RelaxNG arch bug + (Mark McLoughlin), large buffers bug fixes (Shigeki Sakamoto), error + on out of memory condition (Shigeki Sakamoto), virshStrdup fix, non-root + driver when using Xen bug (Richard Jones), use --strict-order when + running dnsmasq (Daniel Berrange), virbr0 weirdness on restart (Mark + McLoughlin), keep connection error messages (Richard Jones), increase + QEmu read buffer on help (Daniel Berrange), rpm dependance on + dnsmasq (Daniel Berrange), fix XML boot device syntax (Daniel Berrange), + QEmu memory bug (Daniel Berrange), memory leak fix (Masayuki Sunou), + fix compiler flags (Richard Jones), remove type ioemu on recent Xen + HVM for paravirt drivers (Saori Fukuta), uninitialized string bug + (Masayuki Sunou), allow init even if the daemon is not running, + XML to config fix (Daniel Berrange)
    • +
    • Improvements: add a special error class for the test module (Richard + Jones), virConnectGetCapabilities on proxy (Richard Jones), allow + network driver to decline usage (Richard Jones), extend error messages + for upcoming remote access (Richard Jones), on_reboot support for QEmu + (Daniel Berrange), save daemon output in a log file (Daniel Berrange), + xenXMDomainDefineXML can override guest config (Hugh Brock), + add attach-device and detach-device commands to virsh (Masayuki Sunou + and Mark McLoughlin and Richard Jones), make virGetVersion case + insensitive and Python bindings (Richard Jones), new scheduler API + (Atsushi SAKAI), localizations updates, add logging option for virsh + (Nobuhiro Itou), allow arguments to be passed to bootloader (Hugh Brock), + increase the test suite (Daniel Berrange and Hugh Brock)
    • +
    • Cleanups: Remove VIR_DRV_OPEN_QUIET (Richard Jones), disable xm_internal.c + for Xen > 3.0.3 (Daniel Berrange), unused fields in _virDomain (Richard + Jones), export __virGetDomain and __virGetNetwork for libvirtd only + (Richard Jones), ignore old VNC config for HVM on recent Xen (Daniel + Berrange), various code cleanups, -Werror cleanup (Hugh Brock)
    • +
    +

    0.2.2: Apr 17 2007

    +
      +
• Documentation: fix errors due to Amaya (with Simon Hernandez), virsh uses kB not bytes (Atsushi SAKAI), add command line help to qemud (Richard Jones), xenUnifiedRegister docs (Atsushi SAKAI), string typos (Nikolay Sivov), localization problem raised by Thomas Canniot
• Bug fixes: virsh memory values test (Masayuki Sunou), operations without libvirt_qemud (Atsushi SAKAI), fix spec file (Florian La Roche, Jeremy Katz, Michael Schwendt), direct hypervisor call (Atsushi SAKAI), buffer overflow on qemu networking command (Daniel Berrange), buffer overflow in qemud (Daniel Berrange), virsh vcpupin bug (Masayuki Sunou), host PAE detection and structure sizes (Richard Jones), Xen PAE flag handling (Daniel Berrange), bridged config configuration (Daniel Berrange), erroneous XEN_V2_OP_SETMAXMEM value (Masayuki Sunou), memory free error (Mark McLoughlin), set VIR_CONNECT_RO on read-only connections (S. Sakamoto), avoid memory explosion bug (Daniel Berrange), integer overflow for qemu CPU time (Daniel Berrange), QEMU binary path check (Daniel Berrange)
    • +
• Cleanups: remove some global variables (Jim Meyering), printf-style function checks (Jim Meyering), better virsh error messages, increase compiler checks and security (Daniel Berrange), virBufferGrow usage and docs, use calloc instead of malloc/memset, replace all sprintf by snprintf, avoid configure clobbering the user's CTAGS (Jim Meyering), signal handler error cleanup (Richard Jones), iptables internal code cleanup (Mark McLoughlin), unified Xen driver (Richard Jones), cleanup of XPath libxml2 calls, IPTables rules tightening (Daniel Berrange)
    • +
    • Improvements: more regression tests on XML (Daniel Berrange), Python + bindings now generate exception in error cases (Richard Jones), + Python bindings for vir*GetAutoStart (Daniel Berrange), + handling of CD-Rom device without device name (Nobuhiro Itou), + fix hypervisor call to work with Xen 3.0.5 (Daniel Berrange), + DomainGetOSType for inactive domains (Daniel Berrange), multiple boot + devices for HVM (Daniel Berrange), +
    • +
    +

    0.2.1: Mar 16 2007

    +
      +
• Various internal cleanups (Richard Jones, Daniel Berrange, Mark McLoughlin)
    • +
    • Bug fixes: libvirt_qemud daemon path (Daniel Berrange), libvirt + config directory (Daniel Berrange and Mark McLoughlin), memory leak + in qemud (Mark), various fixes on network support (Mark), avoid Xen + domain zombies on device hotplug errors (Daniel Berrange), various + fixes on qemud (Mark), args parsing (Richard Jones), virsh -t argument + (Saori Fukuta), avoid virsh crash on TAB key (Daniel Berrange), detect + xend operation failures (Kazuki Mizushima), don't listen on null socket + (Rich Jones), read-only socket cleanup (Rich Jones), use of vnc port 5900 + (Nobuhiro Itou), assorted networking fixes (Daniel Berrange), shutoff and + shutdown mismatches (Kazuki Mizushima), unlimited memory handling + (Atsushi SAKAI), python binding fixes (Tatsuro Enokura)
    • +
    • Build and portability fixes: IA64 fixes (Atsushi SAKAI), dependancies + and build (Daniel Berrange), fix xend port detection (Daniel + Berrange), icompile time warnings (Mark), avoid const related + compiler warnings (Daniel Berrange), automated builds (Daniel + Berrange), pointer/int mismatch (Richard Jones), configure time + selection of drivers, libvirt spec hacking (Daniel Berrange)
    • +
    • Add support for network autostart and init scripts (Mark McLoughlin)
    • +
    • New API virConnectGetCapabilities() to detect the virtualization + capabilities of a host (Richard Jones)
    • +
• Minor improvements: qemud signal handling (Mark), don't shut down or reboot domain0 (Kazuki Mizushima), QEmu version autodetection (Daniel Berrange), network UUIDs (Mark), speed up UUID domain lookups (Tatsuro Enokura and Daniel Berrange), support for paused QEmu CPU (Daniel Berrange), keymap VNC attribute support (Takahashi Tomohiro and Daniel Berrange), maximum number of virtual CPUs (Masayuki Sunou), virsh --readonly option (Rich Jones), python bindings for new functions (Daniel Berrange)
    • +
    • Documentation updates especially on the XML formats
    • +
    +

    0.2.0: Feb 14 2007

    +
      +
    • Various internal cleanups (Mark McLoughlin, Richard Jones, + Daniel Berrange, Karel Zak)
    • +
    • Bug fixes: avoid a crash in connect (Daniel Berrange), virsh args + parsing (Richard Jones)
    • +
    • Add support for QEmu and KVM virtualization (Daniel Berrange)
    • +
    • Add support for network configuration (Mark McLoughlin)
    • +
    • Minor improvements: regression testing (Daniel Berrange), + localization string updates
    • +
    +

    0.1.11: Jan 22 2007

    +
      +
    • Finish XML <-> XM config files support
    • +
    • Remove memory leak when freeing virConf objects
    • +
    • Finishing inactive domain support (Daniel Berrange)
    • +
    • Added a Relax-NG schemas to check XML instances
    • +
    +

    0.1.10: Dec 20 2006

    +
      +
    • more localizations
    • +
    • bug fixes: VCPU info breakages on xen 3.0.3, xenDaemonListDomains buffer overflow (Daniel Berrange), reference count bug when creating Xen domains (Daniel Berrange).
    • +
    • improvements: support graphic framebuffer for Xen paravirt (Daniel Berrange), VNC listen IP range support (Daniel Berrange), support for default Xen config files and inactive domains of 3.0.4 (Daniel Berrange).
    • +
    +

    0.1.9: Nov 29 2006

    +
      +
• python bindings: release the interpreter lock when calling C (Daniel Berrange)
    • +
    • don't raise HTTP error when looking information for a domain
    • +
    • some refactoring to use the driver for all entry points
    • +
    • better error reporting (Daniel Berrange)
    • +
    • fix OS reporting when running as non-root
    • +
    • provide XML parsing errors
    • +
    • extension of the test framework (Daniel Berrange)
    • +
    • fix the reconnect regression test
    • +
    • python bindings: Domain instances now link to the Connect to avoid garbage collection and disconnect
    • +
    • separate the notion of maximum memory and current use at the XML level
    • +
    • Fix a memory leak (Daniel Berrange)
    • +
    • add support for shareable drives
    • +
• add support for non-bridge style networking configs for guests (Daniel Berrange)
    • +
    • python bindings: fix unsigned long marshalling (Daniel Berrange)
    • +
    • new config APIs virConfNew() and virConfSetValue() to build configs from scratch
    • +
    • hot plug device support based on Michel Ponceau patch
    • +
    • added support for inactive domains, new APIs, various associated cleanup (Daniel Berrange)
    • +
    • special device model for HVM guests (Daniel Berrange)
    • +
    • add API to dump core of domains (but requires a patched xend)
    • +
• pygrub bootloader information takes over <os> information
    • +
    • updated the localization strings
    • +
    +

    0.1.8: Oct 16 2006

    +
      +
• Bug fix for systems with page size != 4k
    • +
    • vcpu number initialization (Philippe Berthault)
    • +
    • don't label crashed domains as shut off (Peter Vetere)
    • +
    • fix virsh man page (Noriko Mizumoto)
    • +
    • blktapdd support for alternate drivers like blktap (Daniel Berrange)
    • +
    • memory leak fixes (xend interface and XML parsing) (Daniel Berrange)
    • +
    • compile fix
    • +
    • mlock/munlock size fixes (Daniel Berrange)
    • +
    • improve error reporting
    • +
    +

    0.1.7: Sep 29 2006

    +
      +
    • fix a memory bug on getting vcpu information from xend (Daniel Berrange)
    • +
    • fix another problem in the hypercalls change in Xen changeset + 86d26e6ec89b when getting domain information (Daniel Berrange)
    • +
    +

    0.1.6: Sep 22 2006

    +
      +
    • Support for localization of strings using gettext (Daniel Berrange)
    • +
    • Support for new Xen-3.0.3 cdrom and disk configuration (Daniel Berrange)
    • +
    • Support for setting VNC port when creating domains with new + xend config files (Daniel Berrange)
    • +
    • Fix bug when running against xen-3.0.2 hypercalls (Jim Fehlig)
    • +
    • Fix reconnection problem when talking directly to http xend
    • +
    +

    0.1.5: Sep 5 2006

    +
      +
    • Support for new hypercalls change in Xen changeset 86d26e6ec89b
    • +
• bug fixes: virParseUUID() was wrong, networking for paravirt guests (Daniel Berrange), virsh on non-existent domains (Daniel Berrange), string cast bug when handling errors in python (Pete Vetere), HTTP 500 xend error code handling (Pete Vetere and Daniel Berrange)
    • +
    • improvements: test suite for SEXPR <-> XML format conversions (Daniel + Berrange), virsh output regression suite (Daniel Berrange), new environ + variable VIRSH_DEFAULT_CONNECT_URI for the default URI when connecting + (Daniel Berrange), graphical console support for paravirt guests + (Jeremy Katz), parsing of simple Xen config files (with Daniel Berrange), + early work on defined (not running) domains (Daniel Berrange), + virsh output improvement (Daniel Berrange
    • +
    +

    0.1.4: Aug 16 2006

    +
      +
• bug fixes: spec file fix (Mark McLoughlin), error report problem (with Hugh Brock), long integer in Python bindings (with Daniel Berrange), XML generation bug for CDRom (Daniel Berrange), bug when using the number() XPath function (Mark McLoughlin), fix python detection code, remove duplicate initialization errors (Daniel Berrange)
    • +
    • improvements: UUID in XML description (Peter Vetere), proxy code + cleanup, virtual CPU and affinity support + virsh support (Michel + Ponceau, Philippe Berthault, Daniel Berrange), port and tty information + for console in XML (Daniel Berrange), added XML dump to driver and proxy + support (Daniel Berrange), extention of boot options with support for + floppy and cdrom (Daniel Berrange), features block in XML to report/ask + PAE, ACPI, APIC for HVM domains (Daniel Berrange), fail saide-effect + operations when using read-only connection, large improvements to test + driver (Daniel Berrange)
    • +
    • documentation: spelling (Daniel Berrange), test driver examples.
    • +
    +

    0.1.3: Jul 11 2006

    +
      +
    • bugfixes: build as non-root, fix xend access when root, handling of + empty XML elements (Mark McLoughlin), XML serialization and parsing fixes + (Mark McLoughlin), allow to create domains without disk (Mark + McLoughlin),
    • +
    • improvement: xenDaemonLookupByID from O(n^2) to O(n) (Daniel Berrange), + support for fully virtualized guest (Jim Fehlig, DV, Mark McLoughlin)
    • +
    • documentation: augmented to cover hvm domains
    • +
    +

    0.1.2: Jul 3 2006

    +
      +
    • headers include paths fixup
    • +
    • proxy mechanism for unprivileged read-only access by httpu
    • +
    +

    0.1.1: Jun 21 2006

    +
      +
    • building fixes: ncurses fallback (Jim Fehlig), VPATH builds (Daniel P. + Berrange)
    • +
    • driver cleanups: new entry points, cleanup of libvirt.c (with Daniel P. + Berrange)
    • +
    • Cope with API change introduced in Xen changeset 10277
    • +
    • new test driver for regression checks (Daniel P. Berrange)
    • +
    • improvements: added UUID to XML serialization, buffer usage (Karel + Zak), --connect argument to virsh (Daniel P. Berrange),
    • +
    • bug fixes: uninitialized memory access in error reporting, S-Expr + parsing (Jim Fehlig, Jeremy Katz), virConnectOpen bug, remove a TODO in + xs_internal.c
    • +
    • documentation: Python examples (David Lutterkort), new Perl binding + URL, man page update (Karel Zak)
    • +
    +

    0.1.0: Apr 10 2006

    +
      +
    • building fixes: --with-xen-distdir option (Ronald Aigner), out of tree + build and pkginfo cflag fix (Daniel Berrange)
    • +
    • enhancement and fixes of the XML description format (David Lutterkort + and Jim Fehlig)
    • +
    • new APIs: for Node information and Reboot
    • +
    • internal code cleanup: refactoring internals into a driver model, more + error handling, structure sharing, thread safety and ref counting
    • +
    • bug fixes: error message (Jim Meyering), error allocation in virsh (Jim + Meyering), virDomainLookupByID (Jim Fehlig),
    • +
    • documentation: updates on architecture, and format, typo fix (Jim + Meyering)
    • +
    • bindings: exception handling in examples (Jim Meyering), perl ones out + of tree (Daniel Berrange)
    • +
    • virsh: more options, create, nodeinfo (Karel Zak), renaming of some + options (Karel Zak), use stderr only for errors (Karel Zak), man page + (Andrew Puch)
    • +
    +

    0.0.6: Feb 28 2006

    +
      +
    • add UUID lookup and extract API
    • +
    • add error handling APIs both synchronous and asynchronous
    • +
    • added minimal hook for error handling at the python level, improved the + python bindings
    • +
    • augment the documentation and tests to cover error handling
    • +
    +

    0.0.5: Feb 23 2006

    +
      +
    • Added XML description parsing, dependance to libxml2, implemented the + creation API virDomainCreateLinux()
    • +
    • new APIs to lookup and name domain by UUID
    • +
    • fixed the XML dump when using the Xend access
    • +
    • Fixed a few more problem related to the name change
    • +
    • Adding regression tests in python and examples in C
    • +
    • web site improvement, extended the documentation to cover the XML + format and Python API
    • +
    • Added devhelp help for Gnome/Gtk programmers
    • +
    +

    0.0.4: Feb 10 2006

    +
      +
    • Fix various bugs introduced in the name change
    • +
    +

    0.0.3: Feb 9 2006

    +
      +
    • Switch name from from 'libvir' to libvirt
    • +
    • Starting infrastructure to add code examples
    • +
    • Update of python bindings for completeness
    • +
    +

    0.0.2: Jan 29 2006

    +
      +
    • Update of the documentation, web site redesign (Diana Fong)
    • +
    • integration of HTTP xend RPC based on libxend by Anthony Liquori for + most operations
    • +
    • Adding Save and Restore APIs
    • +
    • extended the virsh command line tool (Karel Zak)
    • +
    • remove xenstore transactions (Anthony Liguori)
    • +
    • fix the Python bindings bug when domain and connections where freed
    • +
    +

    0.0.1: Dec 19 2005

    +
      +
    • First release
    • +
    • Basic management of existing Xen domains
    • +
    • Minimal autogenerated Python bindings
    • +
    diff --git a/docs/page.xsl b/docs/page.xsl new file mode 100644 index 0000000000..1c1687e079 --- /dev/null +++ b/docs/page.xsl @@ -0,0 +1,124 @@
    [The XSLT markup of the new page.xsl is not recoverable from this rendering. It marks the current menu entry "active" and the others "inactive", injects the warning "This file is autogenerated from .in / Do not edit this file. Changes will be lost." into generated pages, and sets each page title to "libvirt: " followed by the page's main heading (html/body/h1).]

    There is not much to comment about it; it really is a straight mapping from the C API. The only points to notice are:

    • the import of the module called libvirt
    • getting a connection to the hypervisor, in that case using the openReadOnly function allows the code to execute as a normal user.
    • getting an object representing Domain 0 using lookupByName
    • if the domain is not found a libvirtError exception will be raised
    • extracting and printing some information about the domain using various methods associated to the virDomain class.

    diff --git a/docs/python.html.in b/docs/python.html.in new file mode 100644 index 0000000000..a8c972e038 --- /dev/null +++ b/docs/python.html.in @@ -0,0 +1,71 @@

    Python API bindings

    + +

    The Python bindings should be complete and are mostly automatically generated from the formal description of the API in XML. The bindings are articulated around 2 classes, virConnect and virDomain, mapping to the C types. Functions in the C API taking either type as an argument then become methods of those classes; their name is just stripped of the virConnect or virDomain(Get) prefix and the first letter gets converted to lower case. For example, the C functions:

    +

    int virConnectNumOfDomains(virConnectPtr conn);

    int virDomainSetMaxMemory(virDomainPtr domain, unsigned long memory);

    become

    virConn::numOfDomains(self)

    virDomain::setMaxMemory(self, memory)

    +

    This process is fully automated; you can get a summary of the conversion in the file libvirtclass.txt present in the python dir or in the docs. There are a couple of functions which don't map directly to their C counterparts due to specificities in their argument conversions:

    +
      +
    • virConnectListDomains is replaced by virConnect::listDomainsID(self), which returns a list of the integer IDs of the currently running domains
    • +
    • virDomainGetInfo is replaced by virDomain::info() which returns a list of:
      1. state: one of the state values (virDomainState)
      2. maxMemory: the maximum memory used by the domain
      3. memory: the current amount of memory used by the domain
      4. nbVirtCPU: the number of virtual CPUs
      5. cpuTime: the time used by the domain in nanoseconds
    • +
    +
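    For instance, a minimal sketch of consuming that return value (purely illustrative; dom is assumed to be a virDomain object obtained from an open connection, and no units are implied for the memory fields):

    # Unpack the 5-element list documented above (illustrative only)
    state, maxMemory, memory, nbVirtCPU, cpuTime = dom.info()
    print "state %d maxMem %d mem %d vcpus %d cpuTime %d" % \
          (state, maxMemory, memory, nbVirtCPU, cpuTime)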

    So let's look at a simple example inspired by the basic.py test found in python/tests/ in the source tree:

    +
    import libvirt
    import sys

    conn = libvirt.openReadOnly(None)
    if conn == None:
        print 'Failed to open connection to the hypervisor'
        sys.exit(1)

    try:
        dom0 = conn.lookupByName("Domain-0")
    except:
        print 'Failed to find the main domain'
        sys.exit(1)

    print "Domain 0: id %d running %s" % (dom0.ID(), dom0.OSType())
    print dom0.info()
    +

    There is not much to comment about it; it really is a straight mapping from the C API. The only points to notice are:

    +
      +
    • the import of the module called libvirt
    • +
    • getting a connection to the hypervisor, in that case using the + openReadOnly function allows the code to execute as a normal user.
    • +
    • getting an object representing the Domain 0 using lookupByName
    • +
    • if the domain is not found a libvirtError exception will be raised
    • +
    • extracting and printing some information about the domain using + various methods + associated to the virDomain class.
    • +
    + + diff --git a/docs/relatedlinks.html b/docs/relatedlinks.html new file mode 100644 index 0000000000..ab88ffbe1d --- /dev/null +++ b/docs/relatedlinks.html @@ -0,0 +1,121 @@ + + + + + + + + + libvirt: Related links + + + + +
    +
    +

    Related links

    +

    + This page contains some links of interest in the area of virtualization. + There are separate pages covering applications using libvirt + and language bindings for libvirt. +

    +

    Other library bindings

    + +

    Hypervisors / emulators / containers

    +
    • + The Xen hypervisor +
    • + The QEMU emulator +
    • + The KVM Linux hypervisor +
    • + The LXC Linux container system +
    • + The OpenVZ Linux container system +
    • + The lGuest paravirtualized hypervisor +
    • + The Linux-VServer container system +
    • + The User Mode Linux paravirtualized hypervisor +
    +

    Virtualization technology

    + +
    + +
    + + + diff --git a/docs/relatedlinks.html.in b/docs/relatedlinks.html.in new file mode 100644 index 0000000000..2227e8d846 --- /dev/null +++ b/docs/relatedlinks.html.in @@ -0,0 +1,64 @@ + + +

    Related links

    + +

    + This page contains some links of interest in the area of virtualization. + There are separate pages covering applications using libvirt + and language bindings for libvirt. +

    + +

    Other library bindings

    + + + +

    Hypervisors / emulators / containers

    + +
      +
    • + The Xen hypervisor +
    • +
    • + The QEMU emulator +
    • +
    • + The KVM Linux hypervisor +
    • +
    • + The LXC Linux container system +
    • +
    • + The OpenVZ Linux container system +
    • +
    • + The lguest paravirtualized hypervisor +
    • +
    • + The Linux-VServer container system +
    • +
    • + The User Mode Linux paravirtualized hypervisor +
    • +
    + +

    Virtualization technology

    + + + + + diff --git a/docs/remote.html b/docs/remote.html index bdc5087a49..7f362d2fc3 100644 --- a/docs/remote.html +++ b/docs/remote.html @@ -1,30 +1,74 @@ -Remote support

    Remote support

    + + + + + + + libvirt: Remote support + + + +

    +
    +
    +

    Remote support

    +

    Libvirt allows you to access hypervisors running on remote machines through authenticated and encrypted connections. -

    Basic usage

    +

    + +

    + Basic usage +

    +

    On the remote machine, libvirtd should be running. See the section on configuring libvirtd for more information. -

    +

    +

    To tell libvirt that you want to access a remote resource, you should supply a hostname in the normal URI that is passed to virConnectOpen (or virsh -c ...). @@ -32,42 +76,41 @@ For example, if you normally use qemu:///system to access the system-wide QEMU daemon, then to access the system-wide QEMU daemon on a remote machine called oirase you would use qemu://oirase/system. -

    +

    +

    The section on remote URIs describes in more detail these remote URIs. -

    +

    +

    From an API point of view, apart from the change in URI, the API should behave the same. For example, ordinary calls are routed over the remote connection transparently, and values or errors from the remote side are returned to you as if they happened locally. Some differences you may notice: -

    • Additional errors can be generated, specifically ones -relating to failures in the remote transport itself.
    • -
    • Remote calls are handled synchronously, so they will be -much slower than, say, direct hypervisor calls.
    • -

    Transports

    +

    +
    • Additional errors can be generated, specifically ones +relating to failures in the remote transport itself.
    • Remote calls are handled synchronously, so they will be +much slower than, say, direct hypervisor calls.
    +

    + Transports +

    +

    Remote libvirt supports a range of transports: -

    tls
    -
    TLS +

    +
    tls
    TLS 1.0 (SSL 3.1) authenticated and encrypted TCP/IP socket, usually listening on a public port number. To use this you will need to generate client and server certificates. The standard port is 16514. -
    - -
    unix
    -
    Unix domain socket. Since this is only accessible on the +
    unix
    Unix domain socket. Since this is only accessible on the local machine, it is not encrypted, and uses Unix permissions or SELinux for authentication. The standard socket names are /var/run/libvirt/libvirt-sock and /var/run/libvirt/libvirt-sock-ro (the latter for read-only connections). -
    - -
    ssh
    -
    Transported over an ordinary +
    ssh
    Transported over an ordinary ssh (secure shell) connection. Requires Netcat (nc) @@ -76,104 +119,90 @@ Remote libvirt supports a range of transports: ssh key management (eg. ssh-agent) otherwise programs which use - this transport will stop to ask for a password.
    - -
    ext
    -
    Any external program which can make a connection to the - remote machine by means outside the scope of libvirt.
    - -
    tcp
    -
    Unencrypted TCP/IP socket. Not recommended for production + this transport will stop to ask for a password.
    ext
    Any external program which can make a connection to the + remote machine by means outside the scope of libvirt.
    tcp
    Unencrypted TCP/IP socket. Not recommended for production use, this is normally disabled, but an administrator can enable it for testing or use over a trusted network. The standard port is 16509. -
    -

    +

    +

    The default transport, if no other is specified, is tls. -

    Remote URIs

    +

    +

    + Remote URIs +

    +

    See also: documentation on ordinary ("local") URIs. -

    +

    +

    Remote URIs have the general form ("[...]" meaning an optional part): -

    -driver[+transport]://[username@][hostname][:port]/[path][?extraparameters] -

    +

    +

    driver[+transport]://[username@][hostname][:port]/[path][?extraparameters] +

    +

    Either the transport or the hostname must be given in order to distinguish this from a local URI. -

    +

    +

    Some examples: -

    • xen+ssh://rjones@towada/
      — Connect to a +

      +
      • xen+ssh://rjones@towada/
        — Connect to a remote Xen hypervisor on host towada using ssh transport and ssh username rjones. -
      • - -
      • xen://towada/
        — Connect to a +
      • xen://towada/
        — Connect to a remote Xen hypervisor on host towada using TLS. -
      • - -
      • xen://towada/?no_verify=1
        — Connect to a +
      • xen://towada/?no_verify=1
        — Connect to a remote Xen hypervisor on host towada using TLS. Do not verify the server's certificate. -
      • - -
      • qemu+unix:///system?socket=/opt/libvirt/run/libvirt/libvirt-sock
        — +
      • qemu+unix:///system?socket=/opt/libvirt/run/libvirt/libvirt-sock
        — Connect to the local qemu instances over a non-standard Unix socket (the full path to the Unix socket is supplied explicitly in this case). -
      • - -
      • test+tcp://localhost:5000/default
        — +
      • test+tcp://localhost:5000/default
        — Connect to a libvirtd daemon offering unencrypted TCP/IP connections on localhost port 5000 and use the test driver with default settings. -
      • - -

      Extra parameters

      +

    +

    + Extra parameters +

    +

    Extra parameters can be added to remote URIs as part of the query string (the part following ?). Remote URIs understand the extra parameters shown below. Any others are passed unmodified through to the back end. Note that parameter values must be URI-escaped. -

    - - - - -
    Name Transports Meaning
    name any transport +

    + - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    Name Transports Meaning
    + name + + any transport + The name passed to the remote virConnectOpen function. The name is normally formed by removing transport, hostname, port number, username and extra parameters from the remote URI, but in certain very complex cases it may be better to supply the name explicitly. -
    Example: name=qemu:///system
    command ssh, ext +
    Example: name=qemu:///system
    + command + ssh, ext The external command. For ext transport this is required. For ssh the default is ssh. The PATH is searched for the command. -
    Example: command=/opt/openssh/bin/ssh
    socket unix, ssh +
    Example: command=/opt/openssh/bin/ssh
    + socket + unix, ssh The path to the Unix domain socket, which overrides the compiled-in default. For ssh transport, this is passed to the remote netcat command (see next). -
    Example: socket=/opt/libvirt/run/libvirt/libvirt-sock
    netcat ssh +
    Example: socket=/opt/libvirt/run/libvirt/libvirt-sock
    + netcat + ssh The name of the netcat command on the remote machine. The default is nc. For ssh transport, libvirt constructs an ssh command which looks like: -
    -command -p port [-l username] hostname netcat -U socket
    +
    command -p port [-l username] hostname netcat -U socket
     
    where port, username, hostname can be @@ -181,131 +210,130 @@ Note that parameter values must be and socket come from extra parameters (or sensible defaults). -
    Example: netcat=/opt/netcat/bin/nc
    no_verify tls +
    Example: netcat=/opt/netcat/bin/nc
    + no_verify + tls If set to a non-zero value, this disables client checks of the server's certificate. Note that to disable server checks of the client's certificate or IP address you must change the libvirtd configuration. -
    Example: no_verify=1
    no_tty ssh +
    Example: no_verify=1
    + no_tty + ssh If set to a non-zero value, this stops ssh from asking for a password if it cannot log in to the remote machine automatically (eg. using ssh-agent etc.). Use this when you don't have access to a terminal - for example in graphical programs which use libvirt. -
    Example: no_tty=1

    Generating TLS certificates

    Public Key Infrastructure set up

    +

    Example: no_tty=1
    +

    + Generating TLS certificates +

    +

    + Public Key Infrastructure set up +

    +

    If you are unsure how to create TLS certificates, skip to the next section. -

    - - - - - - - - - - - - - - - - - - - - - - -
    Location Machine Description Required fields
    /etc/pki/CA/cacert.pem Installed on all clients and servers CA's certificate (more info) n/a
    /etc/pki/libvirt/ private/serverkey.pem Installed on the server Server's private key (more info) n/a
    /etc/pki/libvirt/ servercert.pem Installed on the server Server's certificate signed by the CA. - (more info) CommonName (CN) must be the hostname of the server as it - is seen by clients.
    /etc/pki/libvirt/ private/clientkey.pem Installed on the client Client's private key. (more info) n/a
    /etc/pki/libvirt/ clientcert.pem Installed on the client Client's certificate signed by the CA - (more info) Distinguished Name (DN) can be checked against an access +

    + -
    Location Machine Description Required fields
    + /etc/pki/CA/cacert.pem + Installed on all clients and servers CA's certificate (more info) n/a
    + /etc/pki/libvirt/ private/serverkey.pem + Installed on the server Server's private key (more info) n/a
    + /etc/pki/libvirt/ servercert.pem + Installed on the server Server's certificate signed by the CA. + (more info) CommonName (CN) must be the hostname of the server as it + is seen by clients.
    + /etc/pki/libvirt/ private/clientkey.pem + Installed on the client Client's private key. (more info) n/a
    + /etc/pki/libvirt/ clientcert.pem + Installed on the client Client's certificate signed by the CA + (more info) Distinguished Name (DN) can be checked against an access control list (tls_allowed_dn_list). -

    Background to TLS certificates

    +

    +

    + Background to TLS certificates +

    +

    Libvirt supports TLS certificates for verifying the identity of the server and clients. There are two distinct checks involved: -

    • The client should know that it is connecting to the right +

      +
      • The client should know that it is connecting to the right server. Checking done by client by matching the certificate that the server sends to the server's hostname. May be disabled by adding ?no_verify=1 to the remote URI. -
      • - -
      • The server should know that only permitted clients are +
      • The server should know that only permitted clients are connecting. This can be done based on client's IP address, or on client's IP address and client's certificate. Checking done by the server. May be enabled and disabled in the libvirtd.conf file. -
      • -

      +

    +

    For full certificate checking you will need to have certificates issued by a recognised Certificate Authority (CA) for your server(s) and all clients. To avoid the expense of getting certificates from a commercial CA, you can set up your own CA and tell your server(s) and clients to trust certificates issued by your own CA. Follow the instructions in the next section.

    +

    +

    Be aware that the default configuration for libvirtd allows any client to connect provided they have a valid certificate issued by the CA for their own IP address. You may want to change this to make it less (or more) permissive, depending on your needs. -

    Setting up a Certificate Authority (CA)

    +

    +

    + Setting up a Certificate Authority (CA) +

    +

    You will need the GnuTLS certtool program documented here. In Fedora, it is in the gnutls-utils package. -

    +

    +

    Create a private key for your CA: -

    +

    +
     certtool --generate-privkey > cakey.pem
    -

    +

    +

    and self-sign it by creating a file with the signature details called ca.info containing: -

    +

    +
     cn = Name of your organization
     ca
     cert_signing_key
    -
    +
    +
     certtool --generate-self-signed --load-privkey cakey.pem \
       --template ca.info --outfile cacert.pem
    -

    +

    +

    (You can delete ca.info file now if you want). -

    +

    +

    Now you have two files which matter: -

    • -cakey.pem - Your CA's private key (keep this very secret!) -
    • -
    • -cacert.pem - Your CA's certificate (this is public). -
    • -

    -cacert.pem has to be installed on clients and +

    +
    • cakey.pem - Your CA's private key (keep this very secret!) +
    • cacert.pem - Your CA's certificate (this is public). +
    +

    cacert.pem has to be installed on clients and server(s) to let them know that they can trust certificates issued by your CA. -

    +

    +

    The normal installation directory for cacert.pem is /etc/pki/CA/cacert.pem on all clients and servers. -

    +

    +

    To see the contents of this file, do: -

    -certtool -i --infile cacert.pem
    +

    +
    certtool -i --infile cacert.pem
     
     X.509 certificate info:
     
    @@ -318,52 +346,63 @@ Validity:
             Not Before: Mon Jun 18 16:22:18 2007
             Not After: Tue Jun 17 16:22:18 2008
     [etc]
    -

    +

    +

    This is all that is required to set up your CA. Keep the CA's private key carefully as you will need it when you come to issue certificates for your clients and servers. -

    Issuing server certificates

    +

    +

    + Issuing server certificates +

    +

    For each server (libvirtd) you need to issue a certificate with the X.509 CommonName (CN) field set to the hostname of the server. The CN must match the hostname which clients will be using to connect to the server. -

    +

    +

    In the example below, clients will be connecting to the server using a URI of xen://oirase/, so the CN must be "oirase". -

    +

    +

    Make a private key for the server: -

    +

    +
     certtool --generate-privkey > serverkey.pem
    -

    +

    +

    and sign that key with the CA's private key by first creating a template file called server.info (only the CN field matters, which as explained above must be the server's hostname): -

    +

    +
     organization = Name of your organization
     cn = oirase
     tls_www_server
     encryption_key
     signing_key
    -

    +

    +

    and sign: -

    +

    +
     certtool --generate-certificate --load-privkey serverkey.pem \
       --load-ca-certificate cacert.pem --load-ca-privkey cakey.pem \
       --template server.info --outfile servercert.pem
    -

    +

    +

    This gives two files: -

    • -serverkey.pem - The server's private key. -
    • -
    • -servercert.pem - The server's public key. -
    • -

    +

    +
    • serverkey.pem - The server's private key. +
    • servercert.pem - The server's public key. +
    +

    We can examine this certificate and its signature: -

    -certtool -i --infile servercert.pem
    +

    +
    certtool -i --infile servercert.pem
     X.509 certificate info:
     
     Version: 3
    @@ -374,44 +413,47 @@ Signature Algorithm: RSA-SHA
     Validity:
             Not Before: Mon Jun 18 16:34:49 2007
             Not After: Tue Jun 17 16:34:49 2008
    -

    +

    +

    Note the "Issuer" CN is "Red Hat Emerging Technologies" (the CA) and the "Subject" CN is "oirase" (the server). -

    +

    +

    Finally we have two files to install: -

    • -serverkey.pem is +

      +
      • serverkey.pem is the server's private key which should be copied to the server only as /etc/pki/libvirt/private/serverkey.pem. -
      • - -
      • -servercert.pem is the server's certificate +
      • servercert.pem is the server's certificate which can be installed on the server as /etc/pki/libvirt/servercert.pem. -
      • -

      Issuing client certificates

      +

    +

    + Issuing client certificates +

    +

    For each client (ie. any program linked with libvirt, such as virt-manager) you need to issue a certificate with the X.509 Distinguished Name (DN) set to a suitable name. You can decide this on a company / organisation policy. For example, I use: -

    +

    +
     C=GB,ST=London,L=London,O=Red Hat,CN=name_of_client
    -

    +

    +

    The process is the same as for setting up the server certificate so here we just briefly cover the steps. -

    1. +

      +
      1. Make a private key:
         certtool --generate-privkey > clientkey.pem
         
        -
      2. - -
      3. +
      4. Act as CA and sign the certificate. Create client.info containing:
         country = GB
        @@ -429,167 +471,126 @@ certtool --generate-certificate --load-privkey clientkey.pem \
           --load-ca-certificate cacert.pem --load-ca-privkey cakey.pem \
           --template client.info --outfile clientcert.pem
         
        -
      5. - -
      6. +
      7. Install the certificates on the client machine:
         cp clientkey.pem /etc/pki/libvirt/private/clientkey.pem
         cp clientcert.pem /etc/pki/libvirt/clientcert.pem
         
        -
      8. -

      Troubleshooting TLS certificate problems

      failed to verify client's certificate
      -
      -

      +

    +

    + Troubleshooting TLS certificate problems +

    +
    failed to verify client's certificate
    +

    On the server side, run the libvirtd server with the '--listen' and '--verbose' options while the client is connecting. The verbose log messages should tell you enough to diagnose the problem.

    -
    -

    You can use the pki_check.sh shell script + +

    You can use the pki_check.sh shell script to analyze the setup on the client or server machines, preferably as root. It will try to point out the possible problems and provide solutions to -fix the set up up to a point where you have secure remote access.

    libvirtd configuration file

    +fix the set up up to a point where you have secure remote access.

    +

    + libvirtd configuration file +

    +

    Libvirtd (the remote daemon) is configured from a file called /etc/libvirt/libvirtd.conf, or specified on the command line using -f filename or --config filename. -

    +

    +

    This file should contain lines of the form below. Blank lines and comments beginning with # are ignored. -

    setting = value

    The following settings, values and default are:

    - - - - -
    Line Default Meaning
    listen_tls [0|1] 1 (on) +

    +
    setting = value
    +

    The following settings, values and default are:

    + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    Line Default Meaning
    listen_tls [0|1] 1 (on) Listen for secure TLS connections on the public TCP/IP port. -
    listen_tcp [0|1] 0 (off) +
    listen_tcp [0|1] 0 (off) Listen for unencrypted TCP connections on the public TCP/IP port. -
    tls_port "service" "16514" +
    tls_port "service" "16514" The port number or service name to listen on for secure TLS connections. -
    tcp_port "service" "16509" +
    tcp_port "service" "16509" The port number or service name to listen on for unencrypted TCP connections. -
    mdns_adv [0|1] 1 (advertise with mDNS) +
    mdns_adv [0|1] 1 (advertise with mDNS) If set to 1 then the virtualization service will be advertised over mDNS to hosts on the local LAN segment. -
    mdns_name "name" "Virtualization Host HOSTNAME" +
    mdns_name "name" "Virtualization Host HOSTNAME" The name to advertise for this host with Avahi mDNS. The default includes the machine's short hostname. This must be unique to the local LAN segment. -
    unix_sock_group "groupname" "root" +
    unix_sock_group "groupname" "root" The UNIX group to own the UNIX domain socket. If the socket permissions allow group access, then applications running under matching group can access the socket. Only valid if running as root -
    unix_sock_ro_perms "octal-perms" "0777" +
    unix_sock_ro_perms "octal-perms" "0777" The permissions for the UNIX domain socket for read-only client connections. The default allows any user to monitor domains. -
    unix_sock_rw_perms "octal-perms" "0700" +
    unix_sock_rw_perms "octal-perms" "0700" The permissions for the UNIX domain socket for read-write client connections. The default allows only root to manage domains. -
    tls_no_verify_certificate [0|1] 0 (certificates are verified) +
    tls_no_verify_certificate [0|1] 0 (certificates are verified) If set to 1 then if a client certificate check fails, it is not an error. -
    tls_no_verify_address [0|1] 0 (addresses are verified) +
    tls_no_verify_address [0|1] 0 (addresses are verified) If set to 1 then if a client IP address check fails, it is not an error. -
    key_file "filename" "/etc/pki/libvirt/ private/serverkey.pem" +
    key_file "filename" "/etc/pki/libvirt/ private/serverkey.pem" Change the path used to find the server's private key. If you set this to an empty string, then no private key is loaded. -
    cert_file "filename" "/etc/pki/libvirt/ servercert.pem" +
    cert_file "filename" "/etc/pki/libvirt/ servercert.pem" Change the path used to find the server's certificate. If you set this to an empty string, then no certificate is loaded. -
    ca_file "filename" "/etc/pki/CA/cacert.pem" +
    ca_file "filename" "/etc/pki/CA/cacert.pem" Change the path used to find the trusted CA certificate. If you set this to an empty string, then no trusted CA certificate is loaded. -
    crl_file "filename" (no CRL file is used) +
    crl_file "filename" (no CRL file is used) Change the path used to find the CA certificate revocation list (CRL) file. If you set this to an empty string, then no CRL is loaded. -
    tls_allowed_dn_list ["DN1", "DN2"] (none - DNs are not checked) -

    +

    tls_allowed_dn_list ["DN1", "DN2"] (none - DNs are not checked) +

    Enable an access control list of client certificate Distinguished Names (DNs) which can connect to the TLS port on this server.

    -

    +

    The default is that DNs are not checked.

    -

    +

    This list may contain wildcards such as "C=GB,ST=London,L=London,O=Red Hat,CN=*" See the POSIX fnmatch function for the format of the wildcards.

    -

    +

    Note that if this is an empty list, no client can connect.

    -

    +

    Note also that GnuTLS returns DNs without spaces after commas between the fields (and this is what we check against), but the openssl x509 tool shows spaces. -

    tls_allowed_ip_list ["ip1", "ip2", "ip3"] (none - clients can connect from anywhere) -

    +

    +
    tls_allowed_ip_list ["ip1", "ip2", "ip3"] (none - clients can connect from anywhere) +

    Enable an access control list of the IP addresses of clients who can connect to the TLS or TCP ports on this server.

    -

    +

    The default is that clients can connect from any IP address.

    -

    +

    This list may contain wildcards such as 192.168.* See the POSIX fnmatch function for the format of the wildcards.

    -

    +

    Note that if this is an empty list, no client can connect.

    -

    IPv6 support

    +

    +

    + IPv6 support +

    +

    The libvirtd service and libvirt remote client driver both use the getaddrinfo() functions for name resolution and are thus fully IPv6 enabled. ie, if a server has IPv6 address configured @@ -598,42 +599,50 @@ protocols. If a client has an IPv6 address configured and the DNS address resolved for a service is reachable over IPv6, then an IPv6 connection will be made, otherwise IPv4 will be used. In summary it should just 'do the right thing(tm)'. -

    Limitations

    • Remote storage: To be fully useful, particularly for +

      +

      + Limitations +

      +
      • Remote storage: To be fully useful, particularly for creating new domains, it should be possible to enumerate and provision storage on the remote machine. This is currently -in the design phase.
      • - -
      • Migration: We expect libvirt will support migration, +in the design phase.
      • Migration: We expect libvirt will support migration, and obviously remote support is what makes migration worthwhile. This is also in the design phase. Issues to discuss include which path the migration data should follow (eg. client to client direct, or client to server to client) and security. -
      • - -
      • Fine-grained authentication: libvirt in general, +
      • Fine-grained authentication: libvirt in general, but in particular the remote case should support more fine-grained authentication for operations, rather than just read-write/read-only as at present. -
      • -

      +

    +

    Please come and discuss these issues and more on the mailing list. -

    Implementation notes

    +

    +

    + Implementation notes +

    +

    The current implementation uses XDR-encoded packets with a simple remote procedure call implementation which also supports asynchronous messaging and asynchronous and out-of-order replies, although these latter features are not used at the moment. -

    +

    +

    The implementation should be considered strictly internal to libvirt and subject to change at any time without notice. If you wish to talk to libvirtd, link to libvirt. If there is a problem that means you think you need to use the protocol directly, please first discuss this on the mailing list. -

    +

    +

    The messaging protocol is described in qemud/remote_protocol.x. -

    +

    +

    Authentication and encryption (for TLS) is done using GnuTLS and the RPC protocol is unaware of this layer. -

    +

    +

    Protocol messages are sent using a simple 32 bit length word (encoded XDR int) followed by the message header (XDR remote_message_header) followed by the message body. The @@ -647,7 +656,88 @@ a single REMOTE_CALL message is sent from client to server, and the server then replies synchronously with a single REMOTE_REPLY message, but other forms of messaging are also possible. -

    +

    +

    The protocol contains support for multiple program types and protocol versioning, modelled after SunRPC. -

    +

    +
    + +
    + + + diff --git a/docs/remote.html.in b/docs/remote.html.in new file mode 100644 index 0000000000..4803d397bb --- /dev/null +++ b/docs/remote.html.in @@ -0,0 +1,893 @@ + + + +

    Remote support

    +

    +Libvirt allows you to access hypervisors running on remote +machines through authenticated and encrypted connections. +

    + +

    + Basic usage +

    +

    +On the remote machine, libvirtd should be running. +See the section +on configuring libvirtd for more information. +

    +

    +To tell libvirt that you want to access a remote resource, +you should supply a hostname in the normal URI that is passed +to virConnectOpen (or virsh -c ...). +For example, if you normally use qemu:///system +to access the system-wide QEMU daemon, then to access +the system-wide QEMU daemon on a remote machine called +oirase you would use qemu://oirase/system. +

    +

    +The section on remote URIs +describes in more detail these remote URIs. +

    +

    +From an API point of view, apart from the change in URI, the +API should behave the same. For example, ordinary calls +are routed over the remote connection transparently, and +values or errors from the remote side are returned to you +as if they happened locally. Some differences you may notice: +

    +
      +
    • Additional errors can be generated, specifically ones +relating to failures in the remote transport itself.
    • +
    • Remote calls are handled synchronously, so they will be +much slower than, say, direct hypervisor calls.
    • +
    +

    + Transports +

    +

    +Remote libvirt supports a range of transports: +

    +
    +
    tls
    +
    TLS + 1.0 (SSL 3.1) authenticated and encrypted TCP/IP socket, usually + listening on a public port number. To use this you will need to + generate client and + server certificates. + The standard port is 16514. +
    +
    unix
    +
    Unix domain socket. Since this is only accessible on the + local machine, it is not encrypted, and uses Unix permissions or + SELinux for authentication. + The standard socket names are + /var/run/libvirt/libvirt-sock and + /var/run/libvirt/libvirt-sock-ro (the latter + for read-only connections). +
    +
    ssh
    +
    Transported over an ordinary ssh (secure shell) connection. Requires Netcat (nc) installed, and libvirtd should be running on the remote machine. You should use some sort of ssh key management (eg. ssh-agent), otherwise programs which use this transport will stop and ask for a password.
    +
    ext
    +
    Any external program which can make a connection to the + remote machine by means outside the scope of libvirt.
    +
    tcp
    +
    Unencrypted TCP/IP socket. Not recommended for production + use, this is normally disabled, but an administrator can enable + it for testing or use over a trusted network. + The standard port is 16509. +
    +
    +

    +The default transport, if no other is specified, is tls. +

    +

    + Remote URIs +

    +

    +See also: documentation on ordinary ("local") URIs. +

    +

    +Remote URIs have the general form ("[...]" meaning an optional part): +

    +

    driver[+transport]://[username@][hostname][:port]/[path][?extraparameters] +

    +

    +Either the transport or the hostname must be given in order +to distinguish this from a local URI. +

    +

    +Some examples: +

    +
      +
    • xen+ssh://rjones@towada/
      — Connect to a +remote Xen hypervisor on host towada using ssh transport and ssh +username rjones. +
    • +
    • xen://towada/
      — Connect to a +remote Xen hypervisor on host towada using TLS. +
    • +
    • xen://towada/?no_verify=1
      — Connect to a +remote Xen hypervisor on host towada using TLS. Do not verify +the server's certificate. +
    • +
    • qemu+unix:///system?socket=/opt/libvirt/run/libvirt/libvirt-sock
      — +Connect to the local qemu instances over a non-standard +Unix socket (the full path to the Unix socket is +supplied explicitly in this case). +
    • +
    • test+tcp://localhost:5000/default
      — +Connect to a libvirtd daemon offering unencrypted TCP/IP connections +on localhost port 5000 and use the test driver with default +settings. +
    • +
    +
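    As a hedged sketch of what this looks like from the Python bindings (reusing the towada example above; error handling kept minimal), only the URI string changes compared to a local connection:

    import libvirt

    # Open a read-only connection to the remote Xen hypervisor on "towada" over TLS
    conn = libvirt.openReadOnly("xen://towada/")
    if conn is None:
        print 'Failed to open connection to the remote hypervisor'
    else:
        print conn.listDomainsID()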

    + Extra parameters +

    +

    +Extra parameters can be added to remote URIs as part +of the query string (the part following ?). +Remote URIs understand the extra parameters shown below. +Any others are passed unmodified through to the back end. +Note that parameter values must be +URI-escaped. +

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    Name Transports Meaning
    + name + + any transport + + The name passed to the remote virConnectOpen function. The + name is normally formed by removing transport, hostname, port + number, username and extra parameters from the remote URI, but in certain + very complex cases it may be better to supply the name explicitly. +
    + Example: name=qemu:///system
    + command + ssh, ext + The external command. For ext transport this is required. + For ssh the default is ssh. + The PATH is searched for the command. +
    + Example: command=/opt/openssh/bin/ssh
    + socket + unix, ssh + The path to the Unix domain socket, which overrides the + compiled-in default. For ssh transport, this is passed to + the remote netcat command (see next). +
    + Example: socket=/opt/libvirt/run/libvirt/libvirt-sock
    + netcat + ssh + The name of the netcat command on the remote machine. + The default is nc. For ssh transport, libvirt + constructs an ssh command which looks like: + +
    command -p port [-l username] hostname netcat -U socket
    +
    + + where port, username, hostname can be + specified as part of the remote URI, and command, netcat + and socket come from extra parameters (or + sensible defaults). + +
    + Example: netcat=/opt/netcat/bin/nc
    + no_verify + tls + If set to a non-zero value, this disables client checks of the + server's certificate. Note that to disable server checks of + the client's certificate or IP address you must + change the libvirtd + configuration. +
    + Example: no_verify=1
    + no_tty + ssh + If set to a non-zero value, this stops ssh from asking for + a password if it cannot log in to the remote machine automatically + (eg. using ssh-agent etc.). Use this when you don't have access + to a terminal - for example in graphical programs which use libvirt. +
    + Example: no_tty=1
    +

    + Generating TLS certificates +

    +

    + Public Key Infrastructure set up +

    +

    +If you are unsure how to create TLS certificates, skip to the +next section. +

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    Location Machine Description Required fields
    + /etc/pki/CA/cacert.pem + Installed on all clients and servers CA's certificate (more info) n/a
    + /etc/pki/libvirt/ private/serverkey.pem + Installed on the server Server's private key (more info) n/a
    + /etc/pki/libvirt/ servercert.pem + Installed on the server Server's certificate signed by the CA. + (more info) CommonName (CN) must be the hostname of the server as it + is seen by clients.
    + /etc/pki/libvirt/ private/clientkey.pem + Installed on the client Client's private key. (more info) n/a
    + /etc/pki/libvirt/ clientcert.pem + Installed on the client Client's certificate signed by the CA + (more info) Distinguished Name (DN) can be checked against an access + control list (tls_allowed_dn_list). +
    +

    + Background to TLS certificates +

    +

    +Libvirt supports TLS certificates for verifying the identity +of the server and clients. There are two distinct checks involved: +

    +
      +
    • The client should know that it is connecting to the right +server. Checking done by client by matching the certificate that +the server sends to the server's hostname. May be disabled by adding +?no_verify=1 to the +remote URI. +
    • +
    • The server should know that only permitted clients are +connecting. This can be done based on client's IP address, or on +client's IP address and client's certificate. Checking done by the +server. May be enabled and disabled in the libvirtd.conf file. +
    • +
    +

    +For full certificate checking you will need to have certificates issued by a recognised Certificate Authority (CA) for your server(s) and all clients. To avoid the expense of getting certificates from a commercial CA, you can set up your own CA and tell your server(s) and clients to trust certificates issued by your own CA. Follow the instructions in the next section.

    +

    +Be aware that the default +configuration for libvirtd allows any client to connect provided +they have a valid certificate issued by the CA for their own IP +address. You may want to change this to make it less (or more) +permissive, depending on your needs. +

    +

    + Setting up a Certificate Authority (CA) +

    +

    +You will need the GnuTLS +certtool program documented here. In Fedora, it is in the +gnutls-utils package. +

    +

    +Create a private key for your CA: +

    +
    +certtool --generate-privkey > cakey.pem
    +
    +

    +and self-sign it by creating a file with the +signature details called +ca.info containing: +

    +
    +cn = Name of your organization
    +ca
    +cert_signing_key
    +
    +
    +certtool --generate-self-signed --load-privkey cakey.pem \
    +  --template ca.info --outfile cacert.pem
    +
    +

    +(You can delete ca.info file now if you +want). +

    +

    +Now you have two files which matter: +

    +
      +
    • cakey.pem - Your CA's private key (keep this very secret!) +
    • +
    • cacert.pem - Your CA's certificate (this is public). +
    • +
    +

    cacert.pem has to be installed on clients and +server(s) to let them know that they can trust certificates issued by +your CA. +

    +

    +The normal installation directory for cacert.pem +is /etc/pki/CA/cacert.pem on all clients and servers. +

    +

    +To see the contents of this file, do: +

    +
    certtool -i --infile cacert.pem
    +
    +X.509 certificate info:
    +
    +Version: 3
    +Serial Number (hex): 00
    +Subject: CN=Red Hat Emerging Technologies
    +Issuer: CN=Red Hat Emerging Technologies
    +Signature Algorithm: RSA-SHA
    +Validity:
    +        Not Before: Mon Jun 18 16:22:18 2007
    +        Not After: Tue Jun 17 16:22:18 2008
    +[etc]
    +
    +

    +This is all that is required to set up your CA. Keep the CA's private +key carefully as you will need it when you come to issue certificates +for your clients and servers. +

    +

    + Issuing server certificates +

    +

    +For each server (libvirtd) you need to issue a certificate +with the X.509 CommonName (CN) field set to the hostname +of the server. The CN must match the hostname which +clients will be using to connect to the server. +

    +

    +In the example below, clients will be connecting to the +server using a URI of +xen://oirase/, so the CN must be "oirase". +

    +

    +Make a private key for the server: +

    +
    +certtool --generate-privkey > serverkey.pem
    +
    +

    +and sign that key with the CA's private key by first +creating a template file called server.info +(only the CN field matters, which as explained above must +be the server's hostname): +

    +
    +organization = Name of your organization
    +cn = oirase
    +tls_www_server
    +encryption_key
    +signing_key
    +
    +

    +and sign: +

    +
    +certtool --generate-certificate --load-privkey serverkey.pem \
    +  --load-ca-certificate cacert.pem --load-ca-privkey cakey.pem \
    +  --template server.info --outfile servercert.pem
    +
    +

    +This gives two files: +

    +
      +
    • serverkey.pem - The server's private key. +
    • +
    • servercert.pem - The server's public key. +
    • +
    +

    +We can examine this certificate and its signature: +

    +
    certtool -i --infile servercert.pem
    +X.509 certificate info:
    +
    +Version: 3
    +Serial Number (hex): 00
    +Subject: O=Red Hat Emerging Technologies,CN=oirase
    +Issuer: CN=Red Hat Emerging Technologies
    +Signature Algorithm: RSA-SHA
    +Validity:
    +        Not Before: Mon Jun 18 16:34:49 2007
    +        Not After: Tue Jun 17 16:34:49 2008
    +
    +

    +Note the "Issuer" CN is "Red Hat Emerging Technologies" (the CA) and +the "Subject" CN is "oirase" (the server). +

    +

    +Finally we have two files to install: +

    +
      +
    • serverkey.pem is +the server's private key which should be copied to the +server only as +/etc/pki/libvirt/private/serverkey.pem. +
    • +
    • servercert.pem is the server's certificate +which can be installed on the server as +/etc/pki/libvirt/servercert.pem. +
    • +
    +

    + Issuing client certificates +

    +

    +For each client (ie. any program linked with libvirt, such as +virt-manager) +you need to issue a certificate with the X.509 Distinguished Name (DN) +set to a suitable name. You can decide this on a company / organisation +policy. For example, I use: +

    +
    +C=GB,ST=London,L=London,O=Red Hat,CN=name_of_client
    +
    +

    +The process is the same as for +setting up the +server certificate so here we just briefly cover the +steps. +

    +
      +
    1. +Make a private key: +
      +certtool --generate-privkey > clientkey.pem
      +
      +
    2. +
    3. +Act as CA and sign the certificate. Create client.info containing: +
      +country = GB
      +state = London
      +locality = London
      +organization = Red Hat
      +cn = client1
      +tls_www_client
      +encryption_key
      +signing_key
      +
      +and sign by doing: +
      +certtool --generate-certificate --load-privkey clientkey.pem \
      +  --load-ca-certificate cacert.pem --load-ca-privkey cakey.pem \
      +  --template client.info --outfile clientcert.pem
      +
      +
    4. +
    5. +Install the certificates on the client machine: +
      +cp clientkey.pem /etc/pki/libvirt/private/clientkey.pem
      +cp clientcert.pem /etc/pki/libvirt/clientcert.pem
      +
      +
    6. +
    +

    + Troubleshooting TLS certificate problems +

    +
    +
    failed to verify client's certificate
    +
    +

    +On the server side, run the libvirtd server with +the '--listen' and '--verbose' options while the +client is connecting. The verbose log messages should +tell you enough to diagnose the problem. +

    +
    +
    +

    You can use the pki_check.sh shell script to analyze the setup on the client or server machines, preferably as root. It will try to point out the possible problems and provide solutions to fix the setup up to a point where you have secure remote access.

    +

    + libvirtd configuration file +

    +

    +Libvirtd (the remote daemon) is configured from a file called +/etc/libvirt/libvirtd.conf, or specified on +the command line using -f filename or +--config filename. +

    +

    +This file should contain lines of the form below. +Blank lines and comments beginning with # are ignored. +

    +
    setting = value
    +

    The following settings, values and defaults are:

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    Line Default Meaning
    listen_tls [0|1] 1 (on) + Listen for secure TLS connections on the public TCP/IP port. +
    listen_tcp [0|1] 0 (off) + Listen for unencrypted TCP connections on the public TCP/IP port. +
    tls_port "service" "16514" + The port number or service name to listen on for secure TLS connections. +
    tcp_port "service" "16509" + The port number or service name to listen on for unencrypted TCP connections. +
    mdns_adv [0|1] 1 (advertise with mDNS) + If set to 1 then the virtualization service will be advertised over + mDNS to hosts on the local LAN segment. +
    mdns_name "name" "Virtualization Host HOSTNAME" + The name to advertise for this host with Avahi mDNS. The default + includes the machine's short hostname. This must be unique to the + local LAN segment. +
    unix_sock_group "groupname" "root" + The UNIX group to own the UNIX domain socket. If the socket permissions allow + group access, then applications running under matching group can access the + socket. Only valid if running as root +
    unix_sock_ro_perms "octal-perms" "0777" + The permissions for the UNIX domain socket for read-only client connections. + The default allows any user to monitor domains. +
    unix_sock_rw_perms "octal-perms" "0700" + The permissions for the UNIX domain socket for read-write client connections. + The default allows only root to manage domains. +
    tls_no_verify_certificate [0|1] 0 (certificates are verified) + If set to 1 then if a client certificate check fails, it is not an error. +
    tls_no_verify_address [0|1] 0 (addresses are verified) + If set to 1 then if a client IP address check fails, it is not an error. +
    key_file "filename" "/etc/pki/libvirt/ private/serverkey.pem" + Change the path used to find the server's private key. + If you set this to an empty string, then no private key is loaded. +
    cert_file "filename" "/etc/pki/libvirt/ servercert.pem" + Change the path used to find the server's certificate. + If you set this to an empty string, then no certificate is loaded. +
    ca_file "filename" "/etc/pki/CA/cacert.pem" + Change the path used to find the trusted CA certificate. + If you set this to an empty string, then no trusted CA certificate is loaded. +
    crl_file "filename" (no CRL file is used) + Change the path used to find the CA certificate revocation list (CRL) file. + If you set this to an empty string, then no CRL is loaded. +
    tls_allowed_dn_list ["DN1", "DN2"] (none - DNs are not checked) +

    + Enable an access control list of client certificate Distinguished + Names (DNs) which can connect to the TLS port on this server. +

    +

    + The default is that DNs are not checked. +

    +

    + This list may contain wildcards such as "C=GB,ST=London,L=London,O=Red Hat,CN=*" + See the POSIX fnmatch function for the format + of the wildcards. +

    +

    + Note that if this is an empty list, no client can connect. +

    +

    + Note also that GnuTLS returns DNs without spaces + after commas between the fields (and this is what we check against), + but the openssl x509 tool shows spaces. +

    +
    tls_allowed_ip_list ["ip1", "ip2", "ip3"] (none - clients can connect from anywhere) +

    + Enable an access control list of the IP addresses of clients + who can connect to the TLS or TCP ports on this server. +

    +

    + The default is that clients can connect from any IP address. +

    +

    + This list may contain wildcards such as 192.168.* + See the POSIX fnmatch function for the format + of the wildcards. +

    +

    + Note that if this is an empty list, no client can connect. +

    +
    +
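    As a minimal sketch of such a file (illustrative values only; every setting shown is taken from the table above, and any line left out keeps its documented default):

    # Listen for TLS connections (the default) but not for unencrypted TCP
    listen_tls = 1
    listen_tcp = 0
    tls_port = "16514"

    # Only allow clients whose certificate DN matches this wildcard
    tls_allowed_dn_list = ["C=GB,ST=London,L=London,O=Red Hat,CN=*"]

    # Keep the read-write UNIX socket restricted to root (the default)
    unix_sock_rw_perms = "0700"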

    + IPv6 support +

    +

    +The libvirtd service and libvirt remote client driver both use the getaddrinfo() functions for name resolution and are thus fully IPv6 enabled. That is, if a server has an IPv6 address configured, the daemon will listen for incoming connections on both IPv4 and IPv6 protocols. If a client has an IPv6 address configured and the DNS address resolved for a service is reachable over IPv6, then an IPv6 connection will be made, otherwise IPv4 will be used. In summary it should just 'do the right thing(tm)'.

    +

    + Limitations +

    +
      +
    • Remote storage: To be fully useful, particularly for +creating new domains, it should be possible to enumerate +and provision storage on the remote machine. This is currently +in the design phase.
    • +
    • Migration: We expect libvirt will support migration, +and obviously remote support is what makes migration worthwhile. +This is also in the design phase. Issues to discuss include +which path the migration data should follow (eg. client to +client direct, or client to server to client) and security. +
    • +
    • Fine-grained authentication: libvirt in general, +but in particular the remote case should support more +fine-grained authentication for operations, rather than +just read-write/read-only as at present. +
    • +
    +

    +Please come and discuss these issues and more on the mailing list. +

    +

    + Implementation notes +

    +

    +The current implementation uses XDR-encoded packets with a +simple remote procedure call implementation which also supports +asynchronous messaging and asynchronous and out-of-order replies, +although these latter features are not used at the moment. +

    +

    +The implementation should be considered strictly internal to +libvirt and subject to change at any time without notice. If +you wish to talk to libvirtd, link to libvirt. If there is a problem +that means you think you need to use the protocol directly, please +first discuss this on the mailing list. +

    +

    +The messaging protocol is described in +qemud/remote_protocol.x. +

    +

    +Authentication and encryption (for TLS) is done using GnuTLS and the RPC protocol is unaware of this layer. +

    +

    +Protocol messages are sent using a simple 32 bit length word (encoded XDR int) followed by the message header (XDR remote_message_header) followed by the message body. The length count includes the length word itself, and is measured in bytes. Maximum message size is REMOTE_MESSAGE_MAX and, to avoid denial of service attacks on the XDR decoders, strings are individually limited to REMOTE_STRING_MAX bytes. In the TLS case, messages may be split over TLS records, but a TLS record cannot contain parts of more than one message. In the common RPC case a single REMOTE_CALL message is sent from client to server, and the server then replies synchronously with a single REMOTE_REPLY message, but other forms of messaging are also possible.

    +
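    As a purely illustrative sketch of that framing rule (not a supported client; the protocol is internal to libvirt, and the header and body bytes here are assumed to be XDR-encoded already by other code):

    import xdrlib

    def frame_message(header_bytes, body_bytes):
        # 32 bit length word (XDR int) which counts itself (4 bytes),
        # followed by the remote_message_header bytes and the message body
        packer = xdrlib.Packer()
        packer.pack_int(4 + len(header_bytes) + len(body_bytes))
        return packer.get_buffer() + header_bytes + body_bytes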

    +The protocol contains support for multiple program types and protocol +versioning, modelled after SunRPC. +

    + + diff --git a/docs/site.xsl b/docs/site.xsl index 5d447d54e1..cf4bd0141e 100644 --- a/docs/site.xsl +++ b/docs/site.xsl @@ -1,396 +1,25 @@ - - + + + + + - Main Menu - - - - - - intro.html - - - docs.html - - - bugs.html - - - help.html - - - help.html - - - errors.html - - - downloads.html - - - news.html - - - contribs.html - - - format.html - - - architecture.html - - - python.html - - - FAQ.html - - - remote.html - - - uri.html - - - hvsupport.html - - - auth.html - - - windows.html - - - storage.html - - - unknown.html - - - - - - - - - - - - - -
    [Residue of the old site.xsl sidebar markup, not recoverable from this rendering: the hard-coded "main menu", "Main menu", "API menu" and ChangeLog blocks, the "related links" block (Mail archive, Open bugs, virt-manager, Perl bindings, OCaml bindings, Ruby bindings, Xen project, "Made with Libxml2" logo), the "Libvirt the virtualization API" banner and the credit "Graphics and design by Diana Fong".]
    - - - - Generating the Web pages - - - - the virtualization API - - - - diff --git a/docs/sitemap.html b/docs/sitemap.html new file mode 100644 index 0000000000..523f4e981e --- /dev/null +++ b/docs/sitemap.html @@ -0,0 +1,241 @@ + + + + + + + + + libvirt: Sitemap + + + + +
    +
    +

    Sitemap

    +
    +
    • + Home + Front page of the libvirt website +
    • + News + Details of new features and bugs fixed in each release +
      • + Changelog + Latest commit messages from the source repository +
    • + Downloads + Get the latest source releases, binary builds and get access to the source repository +
    • + Documentation + Information for users, administrators and developers +
      • + Deployment + Information about deploying and using libvirt +
        • + URI format + The URI formats used for connecting to libvirt +
        • + Remote access + Enable remote access over TCP +
        • + Authentication + Configure authentication for the libvirt daemon +
        • + Windows port + Access the libvirt daemon from a native Windows client +
      • + Architecture + Overview of the logical subsystems in the libvirt API +
        • + Domains + Managing virtual machines +
        • + Network + Providing isolated networks and NAT based network connectivity +
        • + Storage + Managing storage pools and volumes +
        • + Node Devices + Enumerating host node devices +
      • + XML format + Description of the XML formats used in libvirt +
        • + Domains + The domain XML format +
        • + Networks + The virtual network XML format +
        • + Storage + The storage pool and volume XML format +
        • + Capabilities + The driver capabilities XML format +
        • + Node Devices + The host device XML format +
      • + Drivers + Hypervisor specific driver information +
• + Xen + Driver for the Xen hypervisor +
        • + QEMU / KVM + Driver for QEMU, KQEMU, KVM and Xenner +
        • + Linux Container + Driver for the Linux native container API +
• + Test + Pseudo-driver simulating APIs in memory for test suites +
• + Remote + Driver providing secure remote access to the libvirt APIs +
        • + OpenVZ + Driver for the OpenVZ container technology +
        • + Storage + Driver for the storage management APIs +
      • + API reference + Reference manual for the C public API + +
        • + libvirt + core interfaces for the libvirt library +
        • + virterror + error handling interfaces for the libvirt library +
        • + Driver support + matrix of API support per hypervisor per release +
      • + Language bindings + Bindings of the libvirt API for other languages +
• + Python + overview of the Python API bindings +
    • + Wiki + User contributed content +
    • + FAQ + Frequently asked questions +
    • + Bug reports + How and where to report bugs and request features +
    • + Contact + How to contact the developers via email and IRC +
    • + Related Links + Miscellaneous links of interest related to libvirt +
      • + Applications + Overview of applications using the libvirt APIs +
    • + Sitemap + Overview of all content on the website +
    +
    + +
    + + + diff --git a/docs/sitemap.html.in b/docs/sitemap.html.in new file mode 100644 index 0000000000..a15ab4cad7 --- /dev/null +++ b/docs/sitemap.html.in @@ -0,0 +1,224 @@ + + +

    Sitemap

    + +
    +
      +
    • + Home + Front page of the libvirt website +
    • +
    • + News + Details of new features and bugs fixed in each release +
        +
      • + Changelog + Latest commit messages from the source repository +
      • +
      +
    • +
    • + Downloads + Get the latest source releases, binary builds and get access to the source repository +
    • +
    • + Documentation + Information for users, administrators and developers +
        +
      • + Deployment + Information about deploying and using libvirt +
          +
        • + URI format + The URI formats used for connecting to libvirt +
        • +
        • + Remote access + Enable remote access over TCP +
        • +
        • + Authentication + Configure authentication for the libvirt daemon +
        • +
        • + Windows port + Access the libvirt daemon from a native Windows client +
        • +
        +
      • +
      • + Architecture + Overview of the logical subsystems in the libvirt API +
          +
        • + Domains + Managing virtual machines +
        • +
        • + Network + Providing isolated networks and NAT based network connectivity +
        • +
        • + Storage + Managing storage pools and volumes +
        • +
        • + Node Devices + Enumerating host node devices +
        • +
        +
      • +
      • + XML format + Description of the XML formats used in libvirt +
          +
        • + Domains + The domain XML format +
        • +
        • + Networks + The virtual network XML format +
        • +
        • + Storage + The storage pool and volume XML format +
        • +
        • + Capabilities + The driver capabilities XML format +
        • +
        • + Node Devices + The host device XML format +
        • +
        +
      • +
      • + Drivers + Hypervisor specific driver information +
          +
• + Xen + Driver for the Xen hypervisor +
        • +
        • + QEMU / KVM + Driver for QEMU, KQEMU, KVM and Xenner +
        • +
        • + Linux Container + Driver for the Linux native container API +
        • +
• + Test + Pseudo-driver simulating APIs in memory for test suites +
        • +
• + Remote + Driver providing secure remote access to the libvirt APIs +
        • +
        • + OpenVZ + Driver for the OpenVZ container technology +
        • +
        • + Storage + Driver for the storage management APIs +
        • +
        +
      • +
      • + API reference + Reference manual for the C public API + +
          +
        • + libvirt + core interfaces for the libvirt library +
        • +
        • + virterror + error handling interfaces for the libvirt library +
        • +
        • + Driver support + matrix of API support per hypervisor per release +
        • +
        +
      • +
      • + Language bindings + Bindings of the libvirt API for other languages +
          +
• + Python + overview of the Python API bindings +
        • +
        +
      • +
      +
    • +
    • + Wiki + User contributed content +
    • +
    • + FAQ + Frequently asked questions +
    • +
    • + Bug reports + How and where to report bugs and request features +
    • +
    • + Contact + How to contact the developers via email and IRC +
    • +
    • + Related Links + Miscellaneous links of interest related to libvirt +
        +
      • + Applications + Overview of applications using the libvirt APIs +
      • +
      +
    • +
    • + Sitemap + Overview of all content on the website +
    • +
    +
    + + diff --git a/docs/storage.html b/docs/storage.html index 41a0af3036..9b23ccf8c3 100644 --- a/docs/storage.html +++ b/docs/storage.html @@ -1,531 +1,394 @@ -Storage Management

    Storage Management

    -This page describes the storage management capabilities in + + + + + + + libvirt: Storage Management + + + +

    +
    +
    +

    Storage Management

    +

    +This page describes the backends for the storage management capabilities in libvirt. -

    • Core concepts
    • -
    • Storage pool XML -
    • -
    • Storage volume XML -
    • -
    • Storage backend drivers -

      Core concepts

      - -

-The storage management APIs are based around two core concepts:

      - -
      1. Volume - a single storage volume which can -be assigned to a guest, or used for creating further pools. A -volume is either a block device, a raw file, or a special format -file.
      2. -
3. Pool - provides a means for taking a chunk of storage and carving it up into volumes. A pool can be used to manage things such as a physical disk, an NFS server, an iSCSI target, a host adapter, or an LVM group.
      4. -

      -These two concepts are mapped through to two libvirt objects, a -virStorageVolPtr and a virStoragePoolPtr, -each with a collection of APIs for their management. -
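As a hedged sketch of how these two objects fit together (the NULL connection URI, the pool name "default" and the fixed-size name array are just placeholders for illustration), a client might enumerate the volumes of an existing pool like this:

#include <stdio.h>
#include <stdlib.h>
#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr conn = virConnectOpen(NULL);
    virStoragePoolPtr pool;
    char *names[64];
    int i, count;

    if (conn == NULL)
        return 1;

    pool = virStoragePoolLookupByName(conn, "default");  /* placeholder name */
    if (pool == NULL) {
        virConnectClose(conn);
        return 1;
    }

    count = virStoragePoolListVolumes(pool, names, 64);
    for (i = 0; i < count; i++) {
        virStorageVolPtr vol = virStorageVolLookupByName(pool, names[i]);
        if (vol != NULL) {
            char *path = virStorageVolGetPath(vol);
            printf("%s -> %s\n", names[i], path ? path : "?");
            free(path);
            virStorageVolFree(vol);
        }
        free(names[i]);
    }

    virStoragePoolFree(pool);
    virConnectClose(conn);
    return 0;
}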

      - - -

      Storage pool XML

      - -

      -Although all storage pool backends share the same public APIs and -XML format, they have varying levels of capabilities. Some may -allow creation of volumes, others may only allow use of pre-existing -volumes. Some may have constraints on volume size, or placement. -

      - -

The top level tag for a storage pool document is 'pool'. It has a single attribute type, which is one of dir, fs, netfs, disk, iscsi, or logical. This corresponds to the storage backend drivers listed further along in this document.

      - - -

      First level elements

      - -
      name
      -
      Providing a name for the pool which is unique to the host. -This is mandatory when defining a pool
      - -
      uuid
      -
      Providing an identifier for the pool which is globally unique. -This is optional when defining a pool, a UUID will be generated if -omitted
      - -
      allocation
      -
      Providing the total storage allocation for the pool. This may -be larger than the sum of the allocation of all volumes due to -metadata overhead. This value is in bytes. This is not applicable -when creating a pool.
      - -
      capacity
      -
      Providing the total storage capacity for the pool. Due to -underlying device constraints it may not be possible to use the -full capacity for storage volumes. This value is in bytes. This -is not applicable when creating a pool.
      - -
      available
      -
      Providing the free space available for allocating new volumes -in the pool. Due to underlying device constraints it may not be -possible to allocate the entire free space to a single volume. -This value is in bytes. This is not applicable when creating a -pool.
      - -
      source
      -
      Provides information about the source of the pool, such as -the underlying host devices, or remote server
      - -
      target
      -
      Provides information about the representation of the pool -on the local host.
      -

      Source elements

      - -
      device
      -
      Provides the source for pools backed by physical devices. -May be repeated multiple times depending on backend driver. Contains -a single attribute path which is the fully qualified -path to the block device node.
      -
      directory
      -
      Provides the source for pools backed by directories. May -only occur once. Contains a single attribute path -which is the fully qualified path to the block device node.
      -
      host
      -
      Provides the source for pools backed by storage from a -remote server. Will be used in combination with a directory -or device element. Contains an attribute name -which is the hostname or IP address of the server. May optionally -contain a port attribute for the protocol specific -port number.
      -
      format
      -
      Provides information about the format of the pool. This -contains a single attribute type whose value is -backend specific. This is typically used to indicate filesystem -type, or network filesystem type, or partition table type, or -LVM metadata type. All drivers are required to have a default -value for this, so it is optional.
      -

      Target elements

      - -
      path
      -
Provides the location at which the pool will be mapped into the local filesystem namespace. For a filesystem/directory based pool it will be the name of the directory in which volumes will be created. For device based pools it will be the name of the directory in which device nodes exist. For the latter /dev/ may seem like the logical choice, however, device nodes there are not guaranteed stable across reboots, since they are allocated on demand. It is preferable to use a stable location such as one of the /dev/disk/by-{path,id,uuid,label} locations.
      -
      permissions
      -
      Provides information about the default permissions to use -when creating volumes. This is currently only useful for directory -or filesystem based pools, where the volumes allocated are simple -files. For pools where the volumes are device nodes, the hotplug -scripts determine permissions. It contains 4 child elements. The -mode element contains the octal permission set. The -owner element contains the numeric user ID. The group -element contains the numeric group ID. The label element -contains the MAC (eg SELinux) label string. -
      -

      Device extents

      - -

-If a storage pool exposes information about its underlying placement / allocation scheme, the device element within the source element may contain information about its available extents. Some pools have a constraint that a volume must be allocated entirely within a single free extent (eg disk partition pools). Thus the extent information allows an application to determine the maximum possible size for a new volume.

      - -

      -For storage pools supporting extent information, within each -device element there will be zero or more freeExtent -elements. Each of these elements contains two attributes, start -and end which provide the boundaries of the extent on the -device, measured in bytes. -

      - -

      Storage volume XML

      - -

      -A storage volume will be either a file or a device node. -

      - -

      First level elements

      - -
      name
      -
Providing a name for the volume which is unique to the pool. This is mandatory when creating a volume
      - -
      uuid
      -
      Providing an identifier for the pool which is globally unique. -This is optional when defining a pool, a UUID will be generated if -omitted
      - -
      allocation
      -
      Providing the total storage allocation for the volume. This -may be smaller than the logical capacity if the volume is sparsely -allocated. It may also be larger than the logical capacity if the -volume has substantial metadata overhead. This value is in bytes. -If omitted when creating a volume, the volume will be fully -allocated at time of creation. If set to a value smaller than the -capacity, the pool has the option of deciding -to sparsely allocate a volume. It does not have to honour requests -for sparse allocation though.
      - -
      capacity
      -
      Providing the logical capacity for the volume. This value is -in bytes. This is compulsory when creating a volume
      - -
      source
      -
      Provides information about the underlying storage allocation -of the volume. This may not be available for some pool types.
      - -
      target
      -
      Provides information about the representation of the volume -on the local host.
      -

      Target elements

      - -
      path
      -
Provides the location at which the pool will be mapped into the local filesystem namespace. For a filesystem/directory based pool it will be the name of the directory in which volumes will be created. For device based pools it will be the name of the directory in which device nodes exist. For the latter /dev/ may seem like the logical choice, however, device nodes there are not guaranteed stable across reboots, since they are allocated on demand. It is preferable to use a stable location such as one of the /dev/disk/by-{path,id,uuid,label} locations.
      -
      format
      -
      Provides information about the pool specific volume format. -For disk pools it will provide the partition type. For filesystem -or directory pools it will provide the file format type, eg cow, -qcow, vmdk, raw. If omitted when creating a volume, the pool's -default format will be used. The actual format is specified via -the type. Consult the pool-specific docs for the -list of valid values.
      -
      permissions
      -
      Provides information about the default permissions to use -when creating volumes. This is currently only useful for directory -or filesystem based pools, where the volumes allocated are simple -files. For pools where the volumes are device nodes, the hotplug -scripts determine permissions. It contains 4 child elements. The -mode element contains the octal permission set. The -owner element contains the numeric user ID. The group -element contains the numeric group ID. The label element -contains the MAC (eg SELinux) label string. -
      -

      Storage backend drivers

      - -

      -This section illustrates the capabilities / format for each of -the different backend storage pool drivers -

      - -

      Directory pool

      - -

      -A pool with a type of dir provides the means to manage -files within a directory. The files can be fully allocated raw files, -sparsely allocated raw files, or one of the special disk formats -such as qcow,qcow2,vmdk, -cow, etc as supported by the qemu-img -program. If the directory does not exist at the time the pool is -defined, the build operation can be used to create it. -

      - -
      Example pool input definition
      - -
      -<pool type="dir">
      -  <name>virtimages</name>
      -  <target>
      -    <path>/var/lib/virt/images</path>
      -  </target>
      -</pool>
      -
      - -
      Valid pool format types
      - -

      -The directory pool does not use the pool format type element. -

      - -
      Valid volume format types
      - -

      -One of the following options: -

      - -
      • raw: a plain file
      • -
      • bochs: Bochs disk image format
      • -
      • cloop: compressed loopback disk image format
      • -
      • cow: User Mode Linux disk image format
      • -
      • dmg: Mac disk image format
      • -
      • iso: CDROM disk image format
      • -
      • qcow: QEMU v1 disk image format
      • -
      • qcow2: QEMU v2 disk image format
      • -
      • vmdk: VMWare disk image format
      • -
      • vpc: VirtualPC disk image format
      • -

      -When listing existing volumes all these formats are supported -natively. When creating new volumes, only a subset may be -available. The raw type is guaranteed always -available. The qcow2 type can be created if -either qemu-img or qcow-create tools -are present. The others are dependent on support of the -qemu-img tool. - -

      Filesystem pool

      - -

      -This is a variant of the directory pool. Instead of creating a -directory on an existing mounted filesystem though, it expects -a source block device to be named. This block device will be -mounted and files managed in the directory of its mount point. -It will default to allowing the kernel to automatically discover -the filesystem type, though it can be specified manually if -required. -

      - -
      Example pool input
      - -
      -<pool type="fs">
      -  <name>virtimages</name>
      -  <source>
      -    <device path="/dev/VolGroup00/VirtImages"/>
      -  </source>
      -  <target>
      -    <path>/var/lib/virt/images</path>
      -  </target>
      -</pool>
      -
      - -
      Valid pool format types
      - -

      -The filesystem pool supports the following formats: -

      - -
      • auto - automatically determine format
      • -
      • ext2
      • -
      • ext3
      • -
      • ext4
      • -
      • ufs
      • -
      • iso9660
      • -
      • udf
      • -
      • gfs
      • -
      • gfs2
      • -
      • vfat
      • -
      • hfs+
      • -
      • xfs
      • -
      Valid volume format types
      - -

      -The valid volume types are the same as for the directory -pool type. -

      - -

      Network filesystem pool

      - -

      -This is a variant of the filesystem pool. Instead of requiring -a local block device as the source, it requires the name of a -host and path of an exported directory. It will mount this network -filesystem and manage files within the directory of its mount -point. It will default to using NFS as the protocol. -

      - -
      Example pool input
      - -
      -<pool type="netfs">
      -  <name>virtimages</name>
      -  <source>
      -    <host name="nfs.example.com"/>
      -    <dir path="/var/lib/virt/images"/>
      -  </source>
      -  <target>
      -    <path>/var/lib/virt/images</path>
      -  </target>
      -</pool>
      -
      - -
      Valid pool format types
      - -

      -The network filesystem pool supports the following formats: -

      - -
      • auto - automatically determine format
      • -
      • nfs
      • -
      Valid volume format types
      - -

      -The valid volume types are the same as for the directory -pool type. -

      - -

      Logical volume pools

      - -

      -This provides a pool based on an LVM volume group. For a -pre-defined LVM volume group, simply providing the group -name is sufficient, while to build a new group requires -providing a list of source devices to serve as physical -volumes. Volumes will be allocated by carving out chunks -of storage from the volume group. -

      - -
      Example pool input
      - -
      -<pool type="logical">
      -  <name>HostVG</name>
      -  <source>
      -    <device path="/dev/sda1"/>
      -    <device path="/dev/sdb1"/>
      -    <device path="/dev/sdc1"/>
      -  </source>
      -  <target>
      -    <path>/dev/HostVG</path>
      -  </target>
      -</pool>
      -
      - -
      Valid pool format types
      - -

      -The logical volume pool does not use the pool format type element. -

      - -
      Valid volume format types
      - -

      -The logical volume pool does not use the volume format type element. -

      - - -

      Disk volume pools

      - -

-This provides a pool based on a physical disk. Volumes are created by adding partitions to the disk. Disk pools have constraints on the size and placement of volumes. The 'free extents' information will detail the regions which are available for creating new volumes. A volume cannot span across two different free extents.

      - -
      Example pool input
      - -
      -<pool type="disk">
      -  <name>sda</name>
      -  <source>
      -    <device path='/dev/sda'/>
      -  </source>
      -  <target>
      -    <path>/dev</path>
      -  </target>
      -</pool>
      -
      - -
      Valid pool format types
      - -

      -The disk volume pool accepts the following pool format types, representing -the common partition table types: -

      - -
      • dos
      • -
      • dvh
      • -
      • gpt
      • -
      • mac
      • -
      • bsd
      • -
      • pc98
      • -
      • sun
      • -

      -The dos or gpt formats are recommended for -best portability - the latter is needed for disks larger than 2TB. -

      - -
      Valid volume format types
      - -

      -The disk volume pool accepts the following volume format types, representing -the common partition entry types: -

      - -
      • none
      • -
      • linux
      • -
      • fat16
      • -
      • fat32
      • -
      • linux-swap
      • -
      • linux-lvm
      • -
      • linux-raid
      • -
      • extended
      • -

      iSCSI volume pools

      - -

      -This provides a pool based on an iSCSI target. Volumes must be -pre-allocated on the iSCSI server, and cannot be created via -the libvirt APIs. Since /dev/XXX names may change each time libvirt -logs into the iSCSI target, it is recommended to configure the pool -to use /dev/disk/by-path or /dev/disk/by-id -for the target path. These provide persistent stable naming for LUNs -

      - -
      Example pool input
      - -
      -<pool type="iscsi">
      -  <name>virtimages</name>
      -  <source>
      -    <host name="iscsi.example.com"/>
      -    <device path="demo-target"/>
      -  </source>
      -  <target>
      -    <path>/dev/disk/by-path</path>
      -  </target>
      -</pool>
      -
      - -
      Valid pool format types
      - -

-The iSCSI volume pool does not use the pool format type element. -

      - -
      Valid volume format types
      - -

-The iSCSI volume pool does not use the volume format type element. -

      - - - -

    + +

    + Directory pool +

    +

    + A pool with a type of dir provides the means to manage + files within a directory. The files can be fully allocated raw files, + sparsely allocated raw files, or one of the special disk formats + such as qcow,qcow2,vmdk, + cow, etc as supported by the qemu-img + program. If the directory does not exist at the time the pool is + defined, the build operation can be used to create it. +

    +

    Example pool input definition

    +
    +      <pool type="dir">
    +        <name>virtimages</name>
    +        <target>
    +          <path>/var/lib/virt/images</path>
    +        </target>
    +      </pool>
    +    
    +
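Assuming XML like the example above is held in a string, one possible way to define, build (i.e. create the target directory) and start the pool through the public API is sketched below; error handling is kept minimal and the helper name is made up:

#include <libvirt/libvirt.h>

/* Illustrative helper: turn a pool XML document into a defined,
 * built and running storage pool. */
static virStoragePoolPtr start_pool_from_xml(virConnectPtr conn, const char *xml)
{
    virStoragePoolPtr pool = virStoragePoolDefineXML(conn, xml, 0);
    if (pool == NULL)
        return NULL;

    if (virStoragePoolBuild(pool, 0) < 0 ||     /* create the target directory */
        virStoragePoolCreate(pool, 0) < 0) {    /* activate the pool */
        virStoragePoolFree(pool);
        return NULL;
    }
    return pool;
}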

    Valid pool format types

    +

    + The directory pool does not use the pool format type element. +

    +

    Valid volume format types

    +

    + One of the following options: +

    +
    • raw: a plain file
    • bochs: Bochs disk image format
    • cloop: compressed loopback disk image format
    • cow: User Mode Linux disk image format
    • dmg: Mac disk image format
    • iso: CDROM disk image format
    • qcow: QEMU v1 disk image format
    • qcow2: QEMU v2 disk image format
    • vmdk: VMWare disk image format
    • vpc: VirtualPC disk image format
    +

    + When listing existing volumes all these formats are supported + natively. When creating new volumes, only a subset may be + available. The raw type is guaranteed always + available. The qcow2 type can be created if + either qemu-img or qcow-create tools + are present. The others are dependent on support of the + qemu-img tool. + +

    +

    + Filesystem pool +

    +

    + This is a variant of the directory pool. Instead of creating a + directory on an existing mounted filesystem though, it expects + a source block device to be named. This block device will be + mounted and files managed in the directory of its mount point. + It will default to allowing the kernel to automatically discover + the filesystem type, though it can be specified manually if + required. +

    +

    Example pool input

    +
    +      <pool type="fs">
    +        <name>virtimages</name>
    +        <source>
    +          <device path="/dev/VolGroup00/VirtImages"/>
    +        </source>
    +        <target>
    +          <path>/var/lib/virt/images</path>
    +        </target>
    +      </pool>
    +    
    +

    Valid pool format types

    +

    + The filesystem pool supports the following formats: +

    +
    • auto - automatically determine format
    • + ext2 +
    • + ext3 +
    • + ext4 +
    • + ufs +
    • + iso9660 +
    • + udf +
    • + gfs +
    • + gfs2 +
    • + vfat +
    • + hfs+ +
    • + xfs +
    +

    Valid volume format types

    +

    + The valid volume types are the same as for the directory + pool type. +

    +

    + Network filesystem pool +

    +

    + This is a variant of the filesystem pool. Instead of requiring + a local block device as the source, it requires the name of a + host and path of an exported directory. It will mount this network + filesystem and manage files within the directory of its mount + point. It will default to using NFS as the protocol. +

    +

    Example pool input

    +
    +      <pool type="netfs">
    +        <name>virtimages</name>
    +        <source>
    +          <host name="nfs.example.com"/>
    +          <dir path="/var/lib/virt/images"/>
    +        </source>
    +        <target>
    +          <path>/var/lib/virt/images</path>
    +        </target>
    +      </pool>
    +    
    +

    Valid pool format types

    +

    + The network filesystem pool supports the following formats: +

    +
    • auto - automatically determine format
    • + nfs +
    +

    Valid volume format types

    +

    + The valid volume types are the same as for the directory + pool type. +

    +

    + Logical volume pools +

    +

    + This provides a pool based on an LVM volume group. For a + pre-defined LVM volume group, simply providing the group + name is sufficient, while to build a new group requires + providing a list of source devices to serve as physical + volumes. Volumes will be allocated by carving out chunks + of storage from the volume group. +

    +

    Example pool input

    +
    +      <pool type="logical">
    +        <name>HostVG</name>
    +        <source>
    +          <device path="/dev/sda1"/>
    +          <device path="/dev/sdb1"/>
    +          <device path="/dev/sdc1"/>
    +        </source>
    +        <target>
    +          <path>/dev/HostVG</path>
    +        </target>
    +      </pool>
    +    
    +

    Valid pool format types

    +

    + The logical volume pool does not use the pool format type element. +

    +

    Valid volume format types

    +

    + The logical volume pool does not use the volume format type element. +

    +

    + Disk volume pools +

    +

+ This provides a pool based on a physical disk. Volumes are created + by adding partitions to the disk. Disk pools have constraints + on the size and placement of volumes. The 'free extents' + information will detail the regions which are available for creating + new volumes. A volume cannot span across two different free extents. +

    +

    Example pool input

    +
    +      <pool type="disk">
    +        <name>sda</name>
    +        <source>
    +          <device path='/dev/sda'/>
    +        </source>
    +        <target>
    +          <path>/dev</path>
    +        </target>
    +      </pool>
    +    
    +
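Because volume placement is constrained by the free extents, an application will typically inspect the pool XML before deciding on the size of a new partition volume. A rough sketch, using the pool name from the example above; the rest is illustrative only:

#include <stdio.h>
#include <stdlib.h>
#include <libvirt/libvirt.h>

/* Sketch: dump the pool XML so the freeExtent information reported
 * under the source device can be examined. */
static void dump_pool_xml(virConnectPtr conn)
{
    virStoragePoolPtr pool = virStoragePoolLookupByName(conn, "sda");
    char *xml;

    if (pool == NULL)
        return;

    xml = virStoragePoolGetXMLDesc(pool, 0);
    if (xml != NULL) {
        puts(xml);
        free(xml);
    }
    virStoragePoolFree(pool);
}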

    Valid pool format types

    +

    + The disk volume pool accepts the following pool format types, representing + the common partition table types: +

    +
    • + dos +
    • + dvh +
    • + gpt +
    • + mac +
    • + bsd +
    • + pc98 +
    • + sun +
    +

    + The dos or gpt formats are recommended for + best portability - the latter is needed for disks larger than 2TB. +

    +

    Valid volume format types

    +

    + The disk volume pool accepts the following volume format types, representing + the common partition entry types: +

    +
    • + none +
    • + linux +
    • + fat16 +
    • + fat32 +
    • + linux-swap +
    • + linux-lvm +
    • + linux-raid +
    • + extended +
    +

    + iSCSI volume pools +

    +

    + This provides a pool based on an iSCSI target. Volumes must be + pre-allocated on the iSCSI server, and cannot be created via + the libvirt APIs. Since /dev/XXX names may change each time libvirt + logs into the iSCSI target, it is recommended to configure the pool + to use /dev/disk/by-path or /dev/disk/by-id + for the target path. These provide persistent stable naming for LUNs +

    +

    Example pool input

    +
    +      <pool type="iscsi">
    +        <name>virtimages</name>
    +        <source>
    +          <host name="iscsi.example.com"/>
    +          <device path="demo-target"/>
    +        </source>
    +        <target>
    +          <path>/dev/disk/by-path</path>
    +        </target>
    +      </pool>
    +    
    +
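Since volumes cannot be created in this pool type, a client typically just refreshes the pool and reads back the stable paths of the LUNs it finds; in this sketch the pool name matches the example above and the volume name is a placeholder:

#include <stdio.h>
#include <stdlib.h>
#include <libvirt/libvirt.h>

/* Sketch: rescan an iSCSI pool and print the stable path of one LUN. */
static void show_lun_path(virConnectPtr conn)
{
    virStoragePoolPtr pool = virStoragePoolLookupByName(conn, "virtimages");
    virStorageVolPtr vol;
    char *path;

    if (pool == NULL)
        return;

    virStoragePoolRefresh(pool, 0);                  /* pick up current LUNs */

    vol = virStorageVolLookupByName(pool, "lun-1");  /* placeholder volume name */
    if (vol != NULL) {
        path = virStorageVolGetPath(vol);            /* e.g. under /dev/disk/by-path */
        if (path != NULL) {
            printf("%s\n", path);
            free(path);
        }
        virStorageVolFree(vol);
    }
    virStoragePoolFree(pool);
}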

    Valid pool format types

    +

+ The iSCSI volume pool does not use the pool format type element. +

    +

    Valid volume format types

    +

+ The iSCSI volume pool does not use the volume format type element. +

    +
    + +
    + + + diff --git a/docs/storage.html.in b/docs/storage.html.in new file mode 100644 index 0000000000..40e8e80da5 --- /dev/null +++ b/docs/storage.html.in @@ -0,0 +1,354 @@ + + + +

    Storage Management

    +

    +This page describes the backends for the storage management capabilities in +libvirt. +

    + + +

    Directory pool

    +

    + A pool with a type of dir provides the means to manage + files within a directory. The files can be fully allocated raw files, + sparsely allocated raw files, or one of the special disk formats + such as qcow,qcow2,vmdk, + cow, etc as supported by the qemu-img + program. If the directory does not exist at the time the pool is + defined, the build operation can be used to create it. +

    + +

    Example pool input definition

    +
    +      <pool type="dir">
    +        <name>virtimages</name>
    +        <target>
    +          <path>/var/lib/virt/images</path>
    +        </target>
    +      </pool>
    +    
    + +

    Valid pool format types

    +

    + The directory pool does not use the pool format type element. +

    + +

    Valid volume format types

    +

    + One of the following options: +

    +
      +
    • raw: a plain file
    • +
    • bochs: Bochs disk image format
    • +
    • cloop: compressed loopback disk image format
    • +
    • cow: User Mode Linux disk image format
    • +
    • dmg: Mac disk image format
    • +
    • iso: CDROM disk image format
    • +
    • qcow: QEMU v1 disk image format
    • +
    • qcow2: QEMU v2 disk image format
    • +
    • vmdk: VMWare disk image format
    • +
    • vpc: VirtualPC disk image format
    • +
    +

    + When listing existing volumes all these formats are supported + natively. When creating new volumes, only a subset may be + available. The raw type is guaranteed always + available. The qcow2 type can be created if + either qemu-img or qcow-create tools + are present. The others are dependent on support of the + qemu-img tool. + +

    + +

    Filesystem pool

    +

    + This is a variant of the directory pool. Instead of creating a + directory on an existing mounted filesystem though, it expects + a source block device to be named. This block device will be + mounted and files managed in the directory of its mount point. + It will default to allowing the kernel to automatically discover + the filesystem type, though it can be specified manually if + required. +

    + +

    Example pool input

    +
    +      <pool type="fs">
    +        <name>virtimages</name>
    +        <source>
    +          <device path="/dev/VolGroup00/VirtImages"/>
    +        </source>
    +        <target>
    +          <path>/var/lib/virt/images</path>
    +        </target>
    +      </pool>
    +    
    + +

    Valid pool format types

    +

    + The filesystem pool supports the following formats: +

    +
      +
    • auto - automatically determine format
    • +
    • + ext2 +
    • +
    • + ext3 +
    • +
    • + ext4 +
    • +
    • + ufs +
    • +
    • + iso9660 +
    • +
    • + udf +
    • +
    • + gfs +
    • +
    • + gfs2 +
    • +
    • + vfat +
    • +
    • + hfs+ +
    • +
    • + xfs +
    • +
    + +

    Valid volume format types

    +

    + The valid volume types are the same as for the directory + pool type. +

    + + +

    Network filesystem pool

    +

    + This is a variant of the filesystem pool. Instead of requiring + a local block device as the source, it requires the name of a + host and path of an exported directory. It will mount this network + filesystem and manage files within the directory of its mount + point. It will default to using NFS as the protocol. +

    + +

    Example pool input

    +
    +      <pool type="netfs">
    +        <name>virtimages</name>
    +        <source>
    +          <host name="nfs.example.com"/>
    +          <dir path="/var/lib/virt/images"/>
    +        </source>
    +        <target>
    +          <path>/var/lib/virt/images</path>
    +        </target>
    +      </pool>
    +    
    + +

    Valid pool format types

    +

    + The network filesystem pool supports the following formats: +

    +
      +
    • auto - automatically determine format
    • +
    • + nfs +
    • +
    + +

    Valid volume format types

    +

    + The valid volume types are the same as for the directory + pool type. +

    + + +

    Logical volume pools

    +

    + This provides a pool based on an LVM volume group. For a + pre-defined LVM volume group, simply providing the group + name is sufficient, while to build a new group requires + providing a list of source devices to serve as physical + volumes. Volumes will be allocated by carving out chunks + of storage from the volume group. +

    + +

    Example pool input

    +
    +      <pool type="logical">
    +        <name>HostVG</name>
    +        <source>
    +          <device path="/dev/sda1"/>
    +          <device path="/dev/sdb1"/>
    +          <device path="/dev/sdc1"/>
    +        </source>
    +        <target>
    +          <path>/dev/HostVG</path>
    +        </target>
    +      </pool>
    +    
    + +

    Valid pool format types

    +

    + The logical volume pool does not use the pool format type element. +

    + +

    Valid volume format types

    +

    + The logical volume pool does not use the volume format type element. +

    + + +

    Disk volume pools

    +

+ This provides a pool based on a physical disk. Volumes are created + by adding partitions to the disk. Disk pools have constraints + on the size and placement of volumes. The 'free extents' + information will detail the regions which are available for creating + new volumes. A volume cannot span across two different free extents. +

    + +

    Example pool input

    +
    +      <pool type="disk">
    +        <name>sda</name>
    +        <source>
    +          <device path='/dev/sda'/>
    +        </source>
    +        <target>
    +          <path>/dev</path>
    +        </target>
    +      </pool>
    +    
    + +

    Valid pool format types

    +

    + The disk volume pool accepts the following pool format types, representing + the common partition table types: +

    +
      +
    • + dos +
    • +
    • + dvh +
    • +
    • + gpt +
    • +
    • + mac +
    • +
    • + bsd +
    • +
    • + pc98 +
    • +
    • + sun +
    • +
    +

    + The dos or gpt formats are recommended for + best portability - the latter is needed for disks larger than 2TB. +

    + +

    Valid volume format types

    +

    + The disk volume pool accepts the following volume format types, representing + the common partition entry types: +

    +
      +
    • + none +
    • +
    • + linux +
    • +
    • + fat16 +
    • +
    • + fat32 +
    • +
    • + linux-swap +
    • +
    • + linux-lvm +
    • +
    • + linux-raid +
    • +
    • + extended +
    • +
    + + +

    iSCSI volume pools

    +

    + This provides a pool based on an iSCSI target. Volumes must be + pre-allocated on the iSCSI server, and cannot be created via + the libvirt APIs. Since /dev/XXX names may change each time libvirt + logs into the iSCSI target, it is recommended to configure the pool + to use /dev/disk/by-path or /dev/disk/by-id + for the target path. These provide persistent stable naming for LUNs +

    + +

    Example pool input

    +
    +      <pool type="iscsi">
    +        <name>virtimages</name>
    +        <source>
    +          <host name="iscsi.example.com"/>
    +          <device path="demo-target"/>
    +        </source>
    +        <target>
    +          <path>/dev/disk/by-path</path>
    +        </target>
    +      </pool>
    +    
    + +

    Valid pool format types

    +

+ The iSCSI volume pool does not use the pool format type element. +

    + +

    Valid volume format types

    +

+ The iSCSI volume pool does not use the volume format type element. +

    + + diff --git a/docs/uri.html b/docs/uri.html index adf0b23736..1c82bf6d01 100644 --- a/docs/uri.html +++ b/docs/uri.html @@ -1,171 +1,357 @@ -Connection URIs

    Connection URIs

    + + + + + + + libvirt: Connection URIs + + + +

    +
    +
    +

    Connection URIs

    +

    Since libvirt supports many different kinds of virtualization (often referred to as "drivers" or "hypervisors"), we need a way to be able to specify which driver a connection refers to. Additionally we may want to refer to a driver on a remote machine over the network. -

    +

    +

    To this end, libvirt uses URIs as used on the Web and as defined in RFC 2396. This page documents libvirt URIs. -

    Specifying URIs to libvirt

    +

    + +

    + Specifying URIs to libvirt +

    +

    The URI is passed as the name parameter to virConnectOpen or virConnectOpenReadOnly. For example: -

    +

    +
     virConnectPtr conn = virConnectOpenReadOnly ("test:///default");
    -

    Specifying URIs to virsh, virt-manager and virt-install

    +

    +

    + Specifying URIs to virsh, virt-manager and virt-install +

    +

    In virsh use the -c or --connect option: -

    +

    +
     virsh -c test:///default list
    -

    +

    +

    If virsh finds the environment variable VIRSH_DEFAULT_CONNECT_URI set, it will try this URI by default. -

    +

    +

    When using the interactive virsh shell, you can also use the connect URI command to reconnect to another hypervisor. -

    +

    +

    In virt-manager use the -c or --connect=URI option: -

    +

    +
     virt-manager -c test:///default
    -

    +

    +

    In virt-install use the --connect=URI option: -

    +

    +
     virt-install --connect=test:///default [other options]
    -

    xen:/// URI

    This section describes a feature which is new in libvirt > +

    +

    + xen:/// URI +

    +

    + This section describes a feature which is new in libvirt > 0.2.3. For libvirt ≤ 0.2.3 use "xen". -

    +

    +

    To access a Xen hypervisor running on the local machine use the URI xen:///. -

    qemu:///... QEMU and KVM URIs

    +

    +

    + qemu:///... QEMU and KVM URIs +

    +

    To use QEMU support in libvirt you must be running the libvirtd daemon (named libvirt_qemud in releases prior to 0.3.0). The purpose of this daemon is to manage qemu instances. -

    +

    +

    The libvirtd daemon should be started by the init scripts when the machine boots. It should appear as a process libvirtd --daemon running as root in the background and will handle qemu instances on behalf -of all users of the machine (among other things).

    +of all users of the machine (among other things).

    +

    So to connect to the daemon, one of two different URIs is used: -

    • qemu:///system connects to a system mode daemon.
    • -
    • qemu:///session connects to a session mode daemon.
    • -

    +

    +
    • qemu:///system connects to a system mode daemon.
    • qemu:///session connects to a session mode daemon.
    +

    (If you do libvirtd --help, the daemon will print out the paths of the Unix domain socket(s) that it listens on in the various different modes). -

    +

    +

    KVM URIs are identical. You select between qemu, qemu accelerated and KVM guests in the guest XML as described here. -
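For example, a monitoring tool might open the system instance read-only and simply count the running guests; a minimal sketch:

#include <stdio.h>
#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr conn = virConnectOpenReadOnly("qemu:///system");
    int n;

    if (conn == NULL) {
        fprintf(stderr, "failed to connect to qemu:///system\n");
        return 1;
    }

    n = virConnectNumOfDomains(conn);    /* number of running domains */
    printf("%d running domain(s)\n", n);

    virConnectClose(conn);
    return 0;
}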

    Remote URIs

    +

    +

    + Remote URIs +

    +

    Remote URIs are formed by taking ordinary local URIs and adding a hostname and/or transport name. For example: -

    - - - - - - - - - - - -
    Local URI Remote URI Meaning
    xen:/// xen://oirase/ Connect to the Xen hypervisor running on host oirase - using TLS.
    xen:/// xen+ssh://oirase/ Connect to the Xen hypervisor running on host oirase - by going over an ssh connection.
    test:///default test+tcp://oirase/default Connect to the test driver on host oirase - using an unsecured TCP connection.

    +

    +
    Local URI Remote URI Meaning
    + xen:/// + + xen://oirase/ + Connect to the Xen hypervisor running on host oirase + using TLS.
    + xen:/// + + xen+ssh://oirase/ + Connect to the Xen hypervisor running on host oirase + by going over an ssh connection.
    + test:///default + + test+tcp://oirase/default + Connect to the test driver on host oirase + using an unsecured TCP connection.
    +

    Remote URIs in libvirt offer a rich syntax and many features. We refer you to the libvirt remote URI reference and full documentation for libvirt remote support. -

    test:///... Test URIs

    +

    +

    + test:///... Test URIs +

    +

    The test driver is a dummy hypervisor for test purposes. The URIs supported are: -

    • test:///default connects to a default set of -host definitions built into the driver.
    • -
    • test:///path/to/host/definitions connects to +

      +
      • test:///default connects to a default set of +host definitions built into the driver.
      • test:///path/to/host/definitions connects to a set of host definitions held in the named file. -

      Other & legacy URI formats

      NULL and empty string URIs

      +

    +

    + Other & legacy URI formats +

    +

    + NULL and empty string URIs +

    +

    Libvirt allows you to pass a NULL pointer to virConnectOpen*. Empty string ("") acts in the same way. Traditionally this has meant connect to the local Xen hypervisor. However in future this may change to mean connect to the best available hypervisor. -

    +

    +

    The theory is that if, for example, Xen is unavailable but the machine is running an OpenVZ kernel, then we should not try to connect to the Xen hypervisor since that is obviously the wrong thing to do. -

    +

    +

    In any case applications linked to libvirt can continue to pass NULL as a default choice, but should always allow the user to override the URI, either by constructing one or by allowing the user to type a URI in directly (if that is appropriate). If your application wishes to connect specifically to a Xen hypervisor, then for future proofing it should choose a full xen:/// URI. -
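One possible way for an application to follow that advice (illustrative only; how the override is collected is up to the application) is to treat an empty or missing user setting as the NULL default:

#include <stdio.h>
#include <libvirt/libvirt.h>

/* Sketch: open a connection using a user supplied URI if there is one,
 * otherwise fall back to the NULL default described above. */
static virConnectPtr open_connection(const char *user_uri)
{
    const char *uri = (user_uri != NULL && *user_uri != '\0') ? user_uri : NULL;
    virConnectPtr conn = virConnectOpen(uri);

    if (conn == NULL)
        fprintf(stderr, "unable to connect to %s\n",
                uri ? uri : "the default hypervisor");
    return conn;
}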

    File paths (xend-unix-server)

    +

    +

    + File paths (xend-unix-server) +

    +

    If XenD is running and configured in /etc/xen/xend-config.sxp: -

    +

    +
     (xend-unix-server yes)
    -

    +

    +

    then it listens on a Unix domain socket, usually at /var/lib/xend/xend-socket. You may pass a different path using a file URI such as: -

    +

    +
     virsh -c ///var/run/xend/xend-socket
    -

    Legacy: http://... (xend-http-server)

    +

    +

    + Legacy: http://... (xend-http-server) +

    +

    If XenD is running and configured in /etc/xen/xend-config.sxp: -

    +

    +
     (xend-http-server yes)
    -

    +

    +

    then it listens on TCP port 8000. libvirt allows you to try to connect to xend running on remote machines by passing http://hostname[:port]/, for example: -

    +

    +
     virsh -c http://oirase/ list
    -

    +

    +

    This method is unencrypted and insecure and is definitely not recommended for production use. Instead use libvirt's remote support. -

    +

    +

    Notes: -

    1. The HTTP client does not fully support IPv6.
    2. -
    3. Many features do not work as expected across HTTP connections, in +

      +
      1. The HTTP client does not fully support IPv6.
      2. Many features do not work as expected across HTTP connections, in particular, virConnectGetCapabilities. The remote support however does work - correctly.
      3. -
      4. XenD's new-style XMLRPC interface is not supported by + correctly.
      5. XenD's new-style XMLRPC interface is not supported by libvirt, only the old-style sexpr interface known in the Xen - documentation as "unix server" or "http server".
      6. -

      Legacy: "xen"

      + documentation as "unix server" or "http server".

    +

    + Legacy: "xen" +

    +

    Another legacy URI is to specify name as the string "xen". This will continue to refer to the Xen hypervisor. However you should prefer a full xen:/// URI in all future code. -

    Legacy: Xen proxy

    +

    +

    + Legacy: Xen proxy +

    +

    Libvirt continues to support connections to a separately running Xen proxy daemon. This provides a way to allow non-root users to make a safe (read-only) subset of queries to the hypervisor. -

    +

    +

    There is no specific "Xen proxy" URI. However if a Xen URI of any of the ordinary or legacy forms is used (eg. NULL, "", "xen", ...) which fails, and the user is not root, and the Xen proxy socket can be connected to (/tmp/libvirt_proxy_conn), then libvirt will use a proxy connection. -

    +

    +

    You should consider using libvirt remote support in future. -

    +

    +
    + +
    + + + diff --git a/docs/uri.html.in b/docs/uri.html.in new file mode 100644 index 0000000000..a357ee29ba --- /dev/null +++ b/docs/uri.html.in @@ -0,0 +1,295 @@ + + + +

    Connection URIs

    +

    +Since libvirt supports many different kinds of virtualization +(often referred to as "drivers" or "hypervisors"), we need a +way to be able to specify which driver a connection refers to. +Additionally we may want to refer to a driver on a remote +machine over the network. +

    +

    +To this end, libvirt uses URIs as used on the Web and as defined in RFC 2396. This page +documents libvirt URIs. +

    + +

    + Specifying URIs to libvirt +

    +

    +The URI is passed as the name parameter to virConnectOpen or virConnectOpenReadOnly. For example: +

    +
    +virConnectPtr conn = virConnectOpenReadOnly ("test:///default");
    +
    +

    + Specifying URIs to virsh, virt-manager and virt-install +

    +

    +In virsh use the -c or --connect option: +

    +
    +virsh -c test:///default list
    +
    +

    +If virsh finds the environment variable +VIRSH_DEFAULT_CONNECT_URI set, it will try this URI by +default. +

    +

    +When using the interactive virsh shell, you can also use the +connect URI command to reconnect to another +hypervisor. +

    +

    +In virt-manager use the -c or --connect=URI option: +

    +
    +virt-manager -c test:///default
    +
    +

    +In virt-install use the --connect=URI option: +

    +
    +virt-install --connect=test:///default [other options]
    +
    +

    + xen:/// URI +

    +

    + This section describes a feature which is new in libvirt > +0.2.3. For libvirt ≤ 0.2.3 use "xen". +

    +

    +To access a Xen hypervisor running on the local machine +use the URI xen:///. +

    +

    + qemu:///... QEMU and KVM URIs +

    +

    +To use QEMU support in libvirt you must be running the +libvirtd daemon (named libvirt_qemud +in releases prior to 0.3.0). The purpose of this +daemon is to manage qemu instances. +

    +

    +The libvirtd daemon should be started by the +init scripts when the machine boots. It should appear as +a process libvirtd --daemon running as root +in the background and will handle qemu instances on behalf +of all users of the machine (among other things).

    +

    +So to connect to the daemon, one of two different URIs is used: +

    +
      +
    • qemu:///system connects to a system mode daemon.
    • +
    • qemu:///session connects to a session mode daemon.
    • +
    +

    +(If you do libvirtd --help, the daemon will print +out the paths of the Unix domain socket(s) that it listens on in +the various different modes). +

    +

    +KVM URIs are identical. You select between qemu, qemu accelerated and +KVM guests in the guest XML as described +here. +

    +

    + Remote URIs +

    +

    +Remote URIs are formed by taking ordinary local URIs and adding a +hostname and/or transport name. For example: +

    + + + + + + + + + + + + + + + + + + + + + +
    Local URI Remote URI Meaning
    + xen:/// + + xen://oirase/ + Connect to the Xen hypervisor running on host oirase + using TLS.
    + xen:/// + + xen+ssh://oirase/ + Connect to the Xen hypervisor running on host oirase + by going over an ssh connection.
    + test:///default + + test+tcp://oirase/default + Connect to the test driver on host oirase + using an unsecured TCP connection.
    +

    +Remote URIs in libvirt offer a rich syntax and many features. +We refer you to the libvirt +remote URI reference and full documentation +for libvirt remote support. +

    +

    + test:///... Test URIs +

    +

    +The test driver is a dummy hypervisor for test purposes. +The URIs supported are: +

    +
      +
    • test:///default connects to a default set of +host definitions built into the driver.
    • +
    • test:///path/to/host/definitions connects to +a set of host definitions held in the named file. +
    • +
    +

    + Other & legacy URI formats +

    +

    + NULL and empty string URIs +

    +

    +Libvirt allows you to pass a NULL pointer to +virConnectOpen*. Empty string ("") acts in +the same way. Traditionally this has meant +connect to the local Xen hypervisor. However in future this +may change to mean connect to the best available hypervisor. +

    +

    +The theory is that if, for example, Xen is unavailable but the +machine is running an OpenVZ kernel, then we should not try to +connect to the Xen hypervisor since that is obviously the wrong +thing to do. +

    +

    +In any case applications linked to libvirt can continue to pass +NULL as a default choice, but should always allow the +user to override the URI, either by constructing one or by allowing +the user to type a URI in directly (if that is appropriate). If your +application wishes to connect specifically to a Xen hypervisor, then +for future proofing it should choose a full xen:/// URI. +

    +

    + File paths (xend-unix-server) +

    +

    +If XenD is running and configured in /etc/xen/xend-config.sxp: +

    +
    +(xend-unix-server yes)
    +
    +

    +then it listens on a Unix domain socket, usually at +/var/lib/xend/xend-socket. You may pass a different path +using a file URI such as: +

    +
    +virsh -c ///var/run/xend/xend-socket
    +
    +

    + Legacy: http://... (xend-http-server) +

    +

    +If XenD is running and configured in /etc/xen/xend-config.sxp: + +

    +
    +(xend-http-server yes)
    +
    +

    +then it listens on TCP port 8000. libvirt allows you to +try to connect to xend running on remote machines by passing +http://hostname[:port]/, for example: + +

    +
    +virsh -c http://oirase/ list
    +
    +

    +This method is unencrypted and insecure and is definitely not +recommended for production use. Instead use libvirt's remote support. +

    +

    +Notes: +

    +
      +
    1. The HTTP client does not fully support IPv6.
    2. +
    3. Many features do not work as expected across HTTP connections, in + particular, virConnectGetCapabilities. + The remote support however does work + correctly.
    4. +
    5. XenD's new-style XMLRPC interface is not supported by + libvirt, only the old-style sexpr interface known in the Xen + documentation as "unix server" or "http server".
    6. +
    +

    + Legacy: "xen" +

    +

    +Another legacy URI is to specify name as the string +"xen". This will continue to refer to the Xen +hypervisor. However you should prefer a full xen:/// URI in all future code. +

    +

    + Legacy: Xen proxy +

    +

    +Libvirt continues to support connections to a separately running Xen +proxy daemon. This provides a way to allow non-root users to make a +safe (read-only) subset of queries to the hypervisor. +

    +

    +There is no specific "Xen proxy" URI. However if a Xen URI of any of +the ordinary or legacy forms is used (eg. NULL, +"", "xen", ...) which fails, and the +user is not root, and the Xen proxy socket can be connected to +(/tmp/libvirt_proxy_conn), then libvirt will use a proxy +connection. +

    +

    +You should consider using libvirt remote support +in future. +

    + + diff --git a/docs/windows.html b/docs/windows.html index 18083ad284..70a3185475 100644 --- a/docs/windows.html +++ b/docs/windows.html @@ -1,86 +1,105 @@ -Windows support

    Windows support

    + + + + + + + libvirt: Windows support + + + +

    +
    +
    +

    Windows support

    +

    Instructions for compiling and installing libvirt on Windows. -

    Binaries

    +

    + +

    + Binaries +

    +

    Binaries will be available from the download area (but we don't have binaries at the moment). -

    Compiling from source

    +

    +

    + Compiling from source +

    +

    These are the steps to compile libvirt and the other tools from source on Windows. -

    +

    +

    You will need: -

1. MS Windows. Microsoft makes free (as beer) versions of some of its
   operating systems available to MSDN subscribers. We used Windows 2008
   Server for testing, virtualized under Linux using KVM-53 (earlier
   versions of KVM and QEMU won't run recent versions of Windows because
   of lack of full ACPI support, so make sure you have the latest KVM).
2. Cygwin's setup.exe.
3. A large amount of free disk space to install Cygwin. Make sure you
   have 10 GB free to install most Cygwin packages, although if you pare
   down the list of dependencies you may get away with much less.
4. A network connection for Windows, since Cygwin downloads packages
   from the net as it installs.
5. Libvirt, latest version from CVS.
6. The latest source patch from the download area.
7. A version of Cygwin sunrpc, patched to support building librpc.dll.
   A patch and a binary package are available from the download area.

      + the download area.

    +

    These are the steps to take to compile libvirt from source on Windows: -

    1. -

      Run Cygwin +

      +
      1. +

        Run Cygwin setup.exe. When it starts up it will show a dialog like this:

        - - Cygwin Net Release Setup Program
      2. - -
      3. -

        Step through the setup program accepting defaults + Cygwin Net Release Setup Program

      4. +

        Step through the setup program accepting defaults or making choices as appropriate, until you get to the screen for selecting packages:

        - - Cygwin Select Packages screen

        + Cygwin Select Packages screen

        The user interface here is very confusing. You have to click the "recycling icon" as shown by the arrow:

        - - Cygwin Recycling Icon

        + Cygwin Recycling Icon

        which takes the package (and all packages in the subtree) through several states such as "Install", "Reinstall", "Keep", "Skip", "Uninstall", etc.

        - -
      5. - -
      6. -

        You can install "All" (everything) or better select +

      7. +

        You can install "All" (everything) or better select just the groups and packages needed. Select the following groups and packages for installation:

        - - -
        Groups + - - -
        Groups Archive
        Base
        Devel
        @@ -88,34 +107,25 @@ source on Windows: Mingw
        Perl
        Python
        - Shells
        Packages + Shells
        Packages openssh
        - sunrpc ≥ 4.0-4 (see below)
        - -
      8. -

        Once Cygwin has finished installing, start a Cygwin bash shell + sunrpc ≥ 4.0-4 (see below)

      9. +

        Once Cygwin has finished installing, start a Cygwin bash shell (either click on the desktop icon or look for Cygwin bash shell in the Start menu).

        - -

        The very first time you start the Cygwin bash shell, you may +

        The very first time you start the Cygwin bash shell, you may find you need to run the mkpasswd and mkgroup commands in order to create /etc/passwd and /etc/group files from Windows users. If this is needed then a message is printed in the shell. Note that you need to do this as Windows Administrator.

        -
      10. - -
      11. -

        Install Cygwin sunrpc ≥ 4.0-4 package, patched to include +

      12. +

        Install Cygwin sunrpc ≥ 4.0-4 package, patched to include librpc.dll. To do this, first check to see whether /usr/lib/librpc.dll exists. If it does, you're good to go and can skip to the next step.

        - -

        +

        If you don't have this file, either install the binary package sunrpc-4.0-4.tar.bz2 (just unpack it, as Administrator, in the Cygwin root directory). Or you can download the @@ -123,78 +133,64 @@ source on Windows: and apply it by hand to the Cygwin sunrpc package (eg. using cygport).

        -
      13. - -
      14. -

        +

      15. +

        Check out Libvirt from CVS and apply the latest Windows patch to the source.

        -
      16. - -
      17. -

        Configure libvirt by doing:

        -
        +      
      18. +

        Configure libvirt by doing:

        +
         autoreconf
         ./configure --without-xen --without-qemu
         
        -

        (The autoreconf step is probably optional).

        -

        The configure step will tell you if you have all the +

        (The autoreconf step is probably optional).

        +

        The configure step will tell you if you have all the required parts installed. If something is missing you will need to go back through Cygwin setup and install it.

        -
      19. - -
      20. -

        Rebuild the XDR structures:

        -
        +      
      21. +

        Rebuild the XDR structures:

        +
         rm qemud/remote_protocol.[ch] qemud/remote_dispatch_*.h
         make -C qemud remote_protocol.c
         
        -
      22. - -
      23. -

        Build:

        -
        +      
      24. +

        Build:

        +
         make
         
        -

        If this step is not successful, you should post a full +

        If this step is not successful, you should post a full report including complete messages to the libvirt mailing list.

        -
      25. - -
      26. -

        Test it. If you have access to a remote machine +

      27. +

        Test it. If you have access to a remote machine running Xen or QEMU/KVM, and the libvirt daemon (libvirtd) then you should be able to connect to it and display domains using, eg:

        -
        +        
         src/virsh.exe -c qemu://remote/system list --all
         
        -

        +

        Please read more about remote support before sending bug reports, to make sure that any problems are really Windows and not just with remote configuration / security.

        -
      28. - -
      29. -

        +

      30. +

        You may want to install the library and programs by doing:

        -
        +        
         make install
         
        -
      31. - -
      32. -

        +

      33. +

        The above steps should also build and install Python modules. However for reasons which I don't fully understand, Python won't look in the @@ -202,17 +198,14 @@ make install directory by default so you may need to set the environment variable PYTHONPATH:

        - -
        +        
         export PYTHONPATH=/usr/local/lib/python2.5/site-packages
         
        - -

        +

        (Change the version number to your version of Python). You can test Python support from the command line:

        - -
        +        
         python
         >>> import libvirt
         >>> conn = libvirt.open ("test:///default")
        @@ -222,12 +215,89 @@ python
         >>> dom.XMLDesc (0)
         "<domain type='test' id='1'> ..."
         
        - -

        +

        The most common failure will be with import libvirt which usually indicates that either PYTHONPATH is wrong or a DLL cannot be loaded.

        -
      34. - -

    + +
    + +
    + + + diff --git a/docs/windows.html.in b/docs/windows.html.in new file mode 100644 index 0000000000..55ab19cfd3 --- /dev/null +++ b/docs/windows.html.in @@ -0,0 +1,239 @@ + + + +

    Windows support

    +

    +Instructions for compiling and installing libvirt on Windows. +

    + +

    + Binaries +

    +

    +Binaries will be available from +the download area +(but we don't have binaries at the moment). +

    +

    + Compiling from source +

    +

    +These are the steps to compile libvirt and the other +tools from source on Windows. +

    +

    +You will need: +

    +
      +
1. MS Windows. Microsoft makes free (as beer) versions of some of its
operating systems available to MSDN subscribers. We used Windows 2008
Server for testing, virtualized under Linux using KVM-53 (earlier
versions of KVM and QEMU won't run recent versions of Windows because
of lack of full ACPI support, so make sure you have the latest KVM).
2. Cygwin's setup.exe.
3. A large amount of free disk space to install Cygwin. Make sure you
have 10 GB free to install most Cygwin packages, although if you pare
down the list of dependencies you may get away with much less.
4. A network connection for Windows, since Cygwin downloads packages
from the net as it installs.
5. Libvirt, latest version from CVS.
6. The latest source patch from the download area.
7. A version of Cygwin sunrpc, patched to support building librpc.dll.
A patch and a binary package are available from the download area.
    +

    +These are the steps to take to compile libvirt from +source on Windows: +

    +
      +
    1. +

      Run Cygwin + setup.exe. + When it starts up it will show a dialog like this: +

      + Cygwin Net Release Setup Program +
2. +

      Step through the setup program accepting defaults + or making choices as appropriate, until you get to the + screen for selecting packages:

      + Cygwin Select Packages screen +

      + The user interface here is very confusing. You have to + click the "recycling icon" as shown by the arrow: +

      + Cygwin Recycling Icon +

      + which takes the package (and all packages in the subtree) + through several states such as "Install", "Reinstall", "Keep", + "Skip", "Uninstall", etc. +

      +
3. +

      You can install "All" (everything) or better select + just the groups and packages needed. Select the following + groups and packages for installation: +

+
+ Groups:   Archive, Base, Devel, Editors, Mingw, Perl, Python, Shells
+ Packages: openssh, sunrpc ≥ 4.0-4 (see below)
+
4. +

      Once Cygwin has finished installing, start a Cygwin bash shell + (either click on the desktop icon or look for Cygwin bash shell + in the Start menu).

      +

      The very first time you start the Cygwin bash shell, you may + find you need to run the mkpasswd and mkgroup + commands in order to create /etc/passwd and + /etc/group files from Windows users. If this + is needed then a message is printed in the shell. + Note that you need to do this as Windows Administrator.

      +
5. +

      Install Cygwin sunrpc ≥ 4.0-4 package, patched to include + librpc.dll. + To do this, first check to see whether /usr/lib/librpc.dll + exists. If it does, you're good to go and can skip to the next + step.

      +

      + If you don't have this file, either install the binary package + sunrpc-4.0-4.tar.bz2 (just unpack it, as Administrator, in the Cygwin root directory). + Or you can download the + source patch + and apply it by hand to the Cygwin sunrpc package (eg. using + cygport). +

      +
6. +

      + Check out + Libvirt from CVS and + apply the latest Windows patch + to the source. +

      +
7. +

      Configure libvirt by doing:

      +
      +autoreconf
      +./configure --without-xen --without-qemu
      +
      +

      (The autoreconf step is probably optional).

      +

      The configure step will tell you if you have all the + required parts installed. If something is missing you + will need to go back through Cygwin setup and install it. +

      +
8. +

      Rebuild the XDR structures:

      +
      +rm qemud/remote_protocol.[ch] qemud/remote_dispatch_*.h
      +make -C qemud remote_protocol.c
      +
      +
9. +

      Build:

      +
      +make
      +
      +

      If this step is not successful, you should post a full + report including complete messages to + the + libvirt mailing list. +

      +
10. +

      Test it. If you have access to a remote machine + running Xen or QEMU/KVM, and the libvirt daemon (libvirtd) + then you should be able to connect to it and display + domains using, eg: +

      +
      +src/virsh.exe -c qemu://remote/system list --all
      +
      +

      + Please read more about remote + support before sending bug reports, to make sure that + any problems are really Windows and not just with remote + configuration / security. +

      +
11. +

      + You may want to install the library and programs by doing: +

      +
      +make install
      +
      +
12. +

+ The above steps should also build and install Python modules.
+ However, for reasons which I don't fully understand, Python won't
+ look in the non-standard /usr/local/lib/python*/site-packages/
+ directory by default, so you may need to set the environment
+ variable PYTHONPATH:

      +
      +export PYTHONPATH=/usr/local/lib/python2.5/site-packages
      +
      +

      + (Change the version number to your version of Python). You + can test Python support from the command line: +

      +
      +python
      +>>> import libvirt
      +>>> conn = libvirt.open ("test:///default")
      +>>> conn.listDomainsID ()
      +[1]
      +>>> dom = conn.lookupByID (1)
      +>>> dom.XMLDesc (0)
      +"<domain type='test' id='1'> ..."
      +
      +

+ The most common failure will be with import libvirt,
+ which usually indicates that either PYTHONPATH is wrong
+ or a DLL cannot be loaded (see the sketch below).

      +
    + +
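
As referenced above, a hedged troubleshooting sketch (not part of the
patch) that separates the two failure causes; the Python 2 syntax
matches the python2.5 install mentioned earlier:

import os
import sys

try:
    import libvirt
    print "libvirt bindings loaded OK"
except ImportError, e:
    # A message naming "libvirt" usually means PYTHONPATH is wrong;
    # a message naming a .dll usually means a DLL cannot be loaded.
    print "import failed:", e
    print "PYTHONPATH =", os.environ.get("PYTHONPATH", "(unset)")
    print "sys.path =", sys.path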