<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<body>
<h1>Storage Management</h1>
<p>
Libvirt provides storage management on the physical host through
storage pools and volumes.
</p>
<p>
A storage pool is a quantity of storage set aside by an
administrator, often a dedicated storage administrator, for use
by virtual machines. Storage pools are divided into storage
volumes either by the storage administrator or the system
administrator, and the volumes are assigned to VMs as block
devices.
</p>
<p>
For example, the storage administrator responsible for an NFS
server creates a share to store virtual machines' data. The
system administrator defines a pool on the virtualization host
with the details of the share
(e.g. nfs.example.com:/path/to/share should be mounted on
/vm_data). When the pool is started, libvirt mounts the share
on the specified directory, just as if the system administrator
logged in and executed 'mount nfs.example.com:/path/to/share
/vm_data'. If the pool is configured to autostart, libvirt
ensures that the NFS share is mounted on the directory specified
when libvirt is started.
</p>
<p>
Once the pool is started, the files in the NFS share are
reported as volumes, and the storage volumes' paths may be
queried using the libvirt APIs. The volumes' paths can then be
copied into the section of a VM's XML definition describing the
source storage for the VM's block devices. In the case of NFS,
an application using the libvirt APIs can create and delete
volumes in the pool (files in the NFS share) up to the limit of
the size of the pool (the storage capacity of the share). Not
all pool types support creating and deleting volumes. Stopping
the pool (somewhat unfortunately referred to by virsh and the
API as "pool-destroy") undoes the start operation, in this case,
unmounting the NFS share. The data on the share is not modified
by the destroy operation, despite the name. See man virsh for
more details.
</p>
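<p>
      As an illustrative sketch of this lifecycle with virsh (the
      pool name <code>nfs_data</code> and the volume name and size
      are placeholders for whatever the pool definition uses):
    </p>
    <pre>
virsh pool-start nfs_data
virsh vol-create-as nfs_data guest1.img 10G
virsh pool-destroy nfs_data</pre>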
<p>
A second example is an iSCSI pool. A storage administrator
provisions an iSCSI target to present a set of LUNs to the host
running the VMs. When libvirt is configured to manage that
iSCSI target as a pool, libvirt will ensure that the host logs
into the iSCSI target and libvirt can then report the available
LUNs as storage volumes. The volumes' paths can be queried and
used in VM's XML definitions as in the NFS example. In this
case, the LUNs are defined on the iSCSI server, and libvirt
cannot create and delete volumes.
</p>
<p>
Storage pools and volumes are not required for the proper
operation of VMs. Pools and volumes provide a way for libvirt
to ensure that a particular piece of storage will be available
for a VM, but some administrators will prefer to manage their
own storage and VMs will operate properly without any pools or
volumes defined. On systems that do not use pools, system
administrators must ensure the availability of the VMs' storage
using whatever tools they prefer, for example, adding the NFS
share to the host's fstab so that the share is mounted at boot
time.
</p>
<p>
If at this point the value of pools and volumes over traditional
system administration tools is unclear, note that one of the
features of libvirt is its remote protocol, so it's possible to
manage all aspects of a virtual machine's lifecycle as well as
the configuration of the resources required by the VM. These
operations can be performed on a remote host entirely within the
libvirt API. In other words, a management application using
libvirt can enable a user to perform all the required tasks for
configuring the host for a VM: allocating resources, running the
VM, shutting it down and deallocating the resources, without
requiring shell access or any other control channel.
</p>
<p>
Libvirt supports the following storage pool types:
</p>
<ul id="toc"></ul>
<h2><a id="StorageBackendDir">Directory pool</a></h2>
<p>
A pool with a type of <code>dir</code> provides the means to manage
files within a directory. The files can be fully allocated raw files,
sparsely allocated raw files, or one of the special disk formats
such as <code>qcow</code>, <code>qcow2</code>, <code>vmdk</code>,
<code>cow</code>, etc., as supported by the <code>qemu-img</code>
program. If the directory does not exist at the time the pool is
defined, the <code>build</code> operation can be used to create it.
</p>
<h3>Example pool input definition</h3>
<pre>
&lt;pool type="dir"&gt;
  &lt;name&gt;virtimages&lt;/name&gt;
  &lt;target&gt;
    &lt;path&gt;/var/lib/virt/images&lt;/path&gt;
  &lt;/target&gt;
&lt;/pool&gt;</pre>
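<p>
      For illustration, assuming the definition above is saved in a
      file named <code>pool.xml</code> (a placeholder), the pool could
      be defined, built (creating the directory if missing) and
      started with virsh:
    </p>
    <pre>
virsh pool-define pool.xml
virsh pool-build virtimages
virsh pool-start virtimages</pre>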
<h3>Valid pool format types</h3>
<p>
The directory pool does not use the pool format type element.
</p>
<h3>Valid volume format types</h3>
<p>
One of the following options:
</p>
<ul>
<li><code>raw</code>: a plain file</li>
<li><code>bochs</code>: Bochs disk image format</li>
<li><code>cloop</code>: compressed loopback disk image format</li>
<li><code>cow</code>: User Mode Linux disk image format</li>
<li><code>dmg</code>: Mac disk image format</li>
<li><code>iso</code>: CDROM disk image format</li>
<li><code>qcow</code>: QEMU v1 disk image format</li>
<li><code>qcow2</code>: QEMU v2 disk image format</li>
<li><code>qed</code>: QEMU Enhanced Disk image format</li>
<li><code>vmdk</code>: VMware disk image format</li>
<li><code>vpc</code>: VirtualPC disk image format</li>
</ul>
<p>
When listing existing volumes, all these formats are supported
natively. When creating new volumes, only a subset may be
available. The <code>raw</code> type is guaranteed to always be
available. The <code>qcow2</code> type can be created if
either the <code>qemu-img</code> or <code>qcow-create</code> tool
is present. The others depend on support in the
<code>qemu-img</code> tool.
</p>
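<p>
      For example, assuming <code>qemu-img</code> is present, a
      <code>qcow2</code> volume could be created in the pool above
      with virsh (the volume name and size are placeholders):
    </p>
    <pre>
virsh vol-create-as virtimages guest1.qcow2 10G --format qcow2</pre>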
<h2><a id="StorageBackendFS">Filesystem pool</a></h2>
<p>
This is a variant of the directory pool. Instead of creating a
directory on an existing mounted filesystem though, it expects
a source block device to be named. This block device will be
mounted and files managed in the directory of its mount point.
It will default to allowing the kernel to automatically discover
the filesystem type, though it can be specified manually if
required.
</p>
<h3>Example pool input</h3>
<pre>
&lt;pool type="fs"&gt;
  &lt;name&gt;virtimages&lt;/name&gt;
  &lt;source&gt;
    &lt;device path="/dev/VolGroup00/VirtImages"/&gt;
  &lt;/source&gt;
  &lt;target&gt;
    &lt;path&gt;/var/lib/virt/images&lt;/path&gt;
  &lt;/target&gt;
&lt;/pool&gt;</pre>
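<p>
      Starting this pool mounts the source block device, roughly
      equivalent to running the following by hand:
    </p>
    <pre>
mount /dev/VolGroup00/VirtImages /var/lib/virt/images</pre>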
<h3>Valid pool format types</h3>
<p>
The filesystem pool supports the following formats:
</p>
<ul>
      <li><code>auto</code> - automatically determine format</li>
      <li><code>ext2</code></li>
      <li><code>ext3</code></li>
      <li><code>ext4</code></li>
      <li><code>ufs</code></li>
      <li><code>iso9660</code></li>
      <li><code>udf</code></li>
      <li><code>gfs</code></li>
      <li><code>gfs2</code></li>
      <li><code>vfat</code></li>
      <li><code>hfs+</code></li>
      <li><code>xfs</code></li>
      <li><code>ocfs2</code></li>
    </ul>
<h3>Valid volume format types</h3>
<p>
The valid volume types are the same as for the <code>directory</code>
pool type.
</p>
<h2><a id="StorageBackendNetFS">Network filesystem pool</a></h2>
<p>
This is a variant of the filesystem pool. Instead of requiring
a local block device as the source, it requires the name of a
host and path of an exported directory. It will mount this network
filesystem and manage files within the directory of its mount
point. It will default to using <code>auto</code> as the
protocol, which generally tries a mount via NFS first.
</p>
<h3>Example pool input</h3>
<pre>
&lt;pool type="netfs"&gt;
  &lt;name&gt;virtimages&lt;/name&gt;
  &lt;source&gt;
    &lt;host name="nfs.example.com"/&gt;
    &lt;dir path="/var/lib/virt/images"/&gt;
    &lt;format type='nfs'/&gt;
  &lt;/source&gt;
  &lt;target&gt;
    &lt;path&gt;/var/lib/virt/images&lt;/path&gt;
  &lt;/target&gt;
&lt;/pool&gt;</pre>
<h3>Valid pool format types</h3>
<p>
The network filesystem pool supports the following formats:
</p>
<ul>
<li><code>auto</code> - automatically determine format</li>
<li>
<code>nfs</code>
</li>
<li>
<code>glusterfs</code> - use the glusterfs FUSE file system.
For now, the <code>dir</code> specified as the source can only
be a gluster volume name, as gluster does not provide a way to
directly mount subdirectories within a volume. (To bypass the
file system completely, see
the <a href="#StorageBackendGluster">gluster</a> pool.)
</li>
<li>
<code>cifs</code> - use the SMB (samba) or CIFS file system.
The mount will use "-o guest" to mount the directory anonymously.
</li>
</ul>
<h3>Valid volume format types</h3>
<p>
The valid volume types are the same as for the <code>directory</code>
pool type.
</p>
<h2><a id="StorageBackendLogical">Logical volume pool</a></h2>
<p>
This provides a pool based on an LVM volume group. For a
pre-defined LVM volume group, simply providing the group
name is sufficient; building a new group requires
providing a list of source devices to serve as physical
volumes. Volumes will be allocated by carving out chunks
of storage from the volume group.
</p>
<h3>Example pool input</h3>
<pre>
&lt;pool type="logical"&gt;
  &lt;name&gt;HostVG&lt;/name&gt;
  &lt;source&gt;
    &lt;device path="/dev/sda1"/&gt;
    &lt;device path="/dev/sdb1"/&gt;
    &lt;device path="/dev/sdc1"/&gt;
  &lt;/source&gt;
  &lt;target&gt;
    &lt;path&gt;/dev/HostVG&lt;/path&gt;
  &lt;/target&gt;
&lt;/pool&gt;</pre>
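<p>
      A sketch of carving a volume out of this group with virsh (the
      volume name and size are placeholders):
    </p>
    <pre>
virsh vol-create-as HostVG guest1 20G</pre>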
<h3>Valid pool format types</h3>
<p>
The logical volume pool supports only the <code>lvm2</code> format,
although not supplying a format value will result in automatic
selection of the <code>lvm2</code> format.
</p>
<h3>Valid volume format types</h3>
<p>
The logical volume pool does not use the volume format type element.
</p>
<h2><a id="StorageBackendDisk">Disk pool</a></h2>
<p>
This provides a pool based on a physical disk. Volumes are created
by adding partitions to the disk. Disk pools have constraints
on the size and placement of volumes. The 'free extents'
information will detail the regions which are available for creating
new volumes. A volume cannot span two different free extents.
It will default to using <code>msdos</code> as the pool source format.
</p>
<h3>Example pool input</h3>
<pre>
&lt;pool type="disk"&gt;
  &lt;name&gt;sda&lt;/name&gt;
  &lt;source&gt;
    &lt;device path='/dev/sda'/&gt;
  &lt;/source&gt;
  &lt;target&gt;
    &lt;path&gt;/dev&lt;/path&gt;
  &lt;/target&gt;
&lt;/pool&gt;</pre>
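<p>
      For illustration, the existing partitions (reported as volumes)
      and the remaining capacity can be inspected with virsh:
    </p>
    <pre>
virsh pool-refresh sda
virsh vol-list sda
virsh pool-info sda</pre>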
<h3>Valid pool format types</h3>
<p>
The disk volume pool accepts the following pool format types, representing
the common partition table types:
</p>
<ul>
      <li><code>dos</code></li>
      <li><code>dvh</code></li>
      <li><code>gpt</code></li>
      <li><code>mac</code></li>
      <li><code>bsd</code></li>
      <li><code>pc98</code></li>
      <li><code>sun</code></li>
      <li><code>lvm2</code></li>
    </ul>
<p>
The <code>dos</code> or <code>gpt</code> formats are recommended for
best portability - the latter is needed for disks larger than 2TB.
</p>
<h3>Valid volume format types</h3>
<p>
The disk volume pool accepts the following volume format types, representing
the common partition entry types:
</p>
<ul>
      <li><code>none</code></li>
      <li><code>linux</code></li>
      <li><code>fat16</code></li>
      <li><code>fat32</code></li>
      <li><code>linux-swap</code></li>
      <li><code>linux-lvm</code></li>
      <li><code>linux-raid</code></li>
      <li><code>extended</code></li>
    </ul>
<h2><a id="StorageBackendISCSI">iSCSI pool</a></h2>
<p>
This provides a pool based on an iSCSI target. Volumes must be
pre-allocated on the iSCSI server, and cannot be created via
the libvirt APIs. Since /dev/XXX names may change each time libvirt
logs into the iSCSI target, it is recommended to configure the pool
to use <code>/dev/disk/by-path</code> or <code>/dev/disk/by-id</code>
for the target path. These provide persistent stable naming for LUNs.
</p>
<p>
The libvirt iSCSI storage backend does not resolve the provided
host name or IP address when finding the available target IQNs
on the host; therefore, defining two pools to use the same IQN
on the same host will fail the duplicate source pool checks.
</p>
<h3>Example pool input</h3>
<pre>
&lt;pool type="iscsi"&gt;
  &lt;name&gt;virtimages&lt;/name&gt;
  &lt;source&gt;
    &lt;host name="iscsi.example.com"/&gt;
    &lt;device path="iqn.2013-06.com.example:iscsi-pool"/&gt;
  &lt;/source&gt;
  &lt;target&gt;
    &lt;path&gt;/dev/disk/by-path&lt;/path&gt;
  &lt;/target&gt;
&lt;/pool&gt;</pre>
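<p>
      Starting the pool logs the host into the target, after which
      the LUNs appear as volumes:
    </p>
    <pre>
virsh pool-start virtimages
virsh vol-list virtimages</pre>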
<h3>Valid pool format types</h3>
<p>
The iSCSI volume pool does not use the pool format type element.
</p>
<h3>Valid volume format types</h3>
<p>
The iSCSI volume pool does not use the volume format type element.
</p>
<h2><a id="StorageBackendSCSI">SCSI pool</a></h2>
<p>
This provides a pool based on a SCSI HBA. Volumes are preexisting SCSI
LUNs, and cannot be created via the libvirt APIs. Since /dev/XXX names
aren't generally stable, it is recommended to configure the pool
to use <code>/dev/disk/by-path</code> or <code>/dev/disk/by-id</code>
for the target path. These provide persistent stable naming for LUNs.
<span class="since">Since 0.6.2</span>
</p>
<h3>Example pool input</h3>
<pre>
&lt;pool type="scsi"&gt;
  &lt;name&gt;virtimages&lt;/name&gt;
  &lt;source&gt;
    &lt;adapter name="host0"/&gt;
  &lt;/source&gt;
  &lt;target&gt;
    &lt;path&gt;/dev/disk/by-path&lt;/path&gt;
  &lt;/target&gt;
&lt;/pool&gt;</pre>
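<p>
      After LUNs change on the storage side, the pool's view of the
      available volumes can be refreshed with virsh, for example:
    </p>
    <pre>
virsh pool-refresh virtimages
virsh vol-list virtimages</pre>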
<h3>Valid pool format types</h3>
<p>
The SCSI volume pool does not use the pool format type element.
</p>
<h3>Valid volume format types</h3>
<p>
The SCSI volume pool does not use the volume format type element.
</p>
<h2><a id="StorageBackendMultipath">Multipath pool</a></h2>
<p>
This provides a pool that contains all the multipath devices on the
host. Therefore, only one Multipath pool may be configured per host.
Volume creation is not supported via the libvirt APIs.
The target element is actually ignored, but one is required to appease
the libvirt XML parser.<br/>
<br/>
Configuring multipathing is not currently supported; this just covers
the case where users want to discover all the available multipath
devices, and assign them to guests.
<span class="since">Since 0.7.1</span>
</p>
<h3>Example pool input</h3>
<pre>
&lt;pool type="mpath"&gt;
  &lt;name&gt;virtimages&lt;/name&gt;
  &lt;target&gt;
    &lt;path&gt;/dev/mapper&lt;/path&gt;
  &lt;/target&gt;
&lt;/pool&gt;</pre>
<h3>Valid pool format types</h3>
<p>
The Multipath volume pool does not use the pool format type element.
</p>
<h3>Valid volume format types</h3>
<p>
The Multipath volume pool does not use the volume format type element.
</p>
<h2><a id="StorageBackendRBD">RBD pool</a></h2>
<p>
This storage driver provides a pool which contains all RBD
images in a RADOS pool. RBD (RADOS Block Device) is part
of the Ceph distributed storage project.<br/>
This backend <i>only</i> supports QEMU with RBD support. Kernel RBD
which exposes RBD devices as block devices in /dev is <i>not</i>
supported. RBD images created with this storage backend
can be accessed through kernel RBD if configured manually, but
this backend does not provide mapping for these images.<br/>
Images created with this backend can be attached to QEMU guests
when QEMU is built with RBD support (since QEMU 0.14.0). The
backend supports cephx authentication for communication with the
Ceph cluster. Storing the cephx authentication key is done with
the libvirt secret mechanism. The UUID in the example pool input
refers to the UUID of the stored secret.<br />
The port attribute for a Ceph monitor does not have to be provided;
if omitted, librados will use the default Ceph monitor port.
<span class="since">Since 0.9.13</span>
</p>
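<p>
      As a sketch of that workflow (the usage name is arbitrary, and
      <code>$uuid</code> and <code>$base64key</code> are
      placeholders), a secret such as the following can be registered
      with <code>virsh secret-define secret.xml</code> and the cephx
      key loaded with <code>virsh secret-set-value $uuid
      $base64key</code>; the resulting UUID is what the pool input
      below references:
    </p>
    <pre>
&lt;secret ephemeral='no' private='yes'&gt;
  &lt;usage type='ceph'&gt;
    &lt;name&gt;client.admin secret&lt;/name&gt;
  &lt;/usage&gt;
&lt;/secret&gt;</pre>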
<h3>Example pool input</h3>
<pre>
&lt;pool type="rbd"&gt;
  &lt;name&gt;myrbdpool&lt;/name&gt;
  &lt;source&gt;
    &lt;name&gt;rbdpool&lt;/name&gt;
    &lt;host name='1.2.3.4'/&gt;
    &lt;host name='my.ceph.monitor'/&gt;
    &lt;host name='third.ceph.monitor' port='6789'/&gt;
    &lt;auth username='admin' type='ceph'&gt;
      &lt;secret uuid='2ec115d7-3a88-3ceb-bc12-0ac909a6fd87'/&gt;
    &lt;/auth&gt;
  &lt;/source&gt;
&lt;/pool&gt;</pre>
<h3>Example volume output</h3>
<pre>
&lt;volume&gt;
  &lt;name&gt;myvol&lt;/name&gt;
  &lt;key&gt;rbd/myvol&lt;/key&gt;
  &lt;source&gt;
  &lt;/source&gt;
  &lt;capacity unit='bytes'&gt;53687091200&lt;/capacity&gt;
  &lt;allocation unit='bytes'&gt;53687091200&lt;/allocation&gt;
  &lt;target&gt;
    &lt;path&gt;rbd:rbd/myvol&lt;/path&gt;
    &lt;format type='unknown'/&gt;
    &lt;permissions&gt;
      &lt;mode&gt;00&lt;/mode&gt;
      &lt;owner&gt;0&lt;/owner&gt;
      &lt;group&gt;0&lt;/group&gt;
    &lt;/permissions&gt;
  &lt;/target&gt;
&lt;/volume&gt;</pre>
<h3>Example disk attachment</h3>
<p>RBD images can be attached to QEMU guests when QEMU is built
    with RBD support. Information about attaching an RBD image to a
    guest can be found
    at the <a href="formatdomain.html#elementsDisks">format domain</a>
    page.</p>
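<p>
      A minimal sketch of such a disk element (the monitor address,
      secret UUID and target device are placeholders matching the
      examples above):
    </p>
    <pre>
&lt;disk type='network' device='disk'&gt;
  &lt;driver name='qemu' type='raw'/&gt;
  &lt;source protocol='rbd' name='rbdpool/myvol'&gt;
    &lt;host name='1.2.3.4' port='6789'/&gt;
  &lt;/source&gt;
  &lt;auth username='admin'&gt;
    &lt;secret type='ceph' uuid='2ec115d7-3a88-3ceb-bc12-0ac909a6fd87'/&gt;
  &lt;/auth&gt;
  &lt;target dev='vdb' bus='virtio'/&gt;
&lt;/disk&gt;</pre>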
<h3>Valid pool format types</h3>
<p>
The RBD pool does not use the pool format type element.
</p>
<h3>Valid volume format types</h3>
<p>
The RBD pool does not use the volume format type element.
</p>
<h2><a id="StorageBackendSheepdog">Sheepdog pool</a></h2>
<p>
This provides a pool based on a Sheepdog Cluster.
Sheepdog is a distributed storage system for QEMU/KVM.
It provides highly available block level storage volumes that
can be attached to QEMU/KVM virtual machines.
The cluster must already be formatted.
<span class="since">Since 0.9.13</span>
</p>
<h3>Example pool input</h3>
<pre>
&lt;pool type="sheepdog"&gt;
  &lt;name&gt;mysheeppool&lt;/name&gt;
  &lt;source&gt;
    &lt;name&gt;mysheeppool&lt;/name&gt;
    &lt;host name='localhost' port='7000'/&gt;
  &lt;/source&gt;
&lt;/pool&gt;</pre>
<h3>Example volume output</h3>
<pre>
&lt;volume&gt;
  &lt;name&gt;myvol&lt;/name&gt;
  &lt;key&gt;sheep/myvol&lt;/key&gt;
  &lt;source&gt;
  &lt;/source&gt;
  &lt;capacity unit='bytes'&gt;53687091200&lt;/capacity&gt;
  &lt;allocation unit='bytes'&gt;53687091200&lt;/allocation&gt;
  &lt;target&gt;
    &lt;path&gt;sheepdog:myvol&lt;/path&gt;
    &lt;format type='unknown'/&gt;
    &lt;permissions&gt;
      &lt;mode&gt;00&lt;/mode&gt;
      &lt;owner&gt;0&lt;/owner&gt;
      &lt;group&gt;0&lt;/group&gt;
    &lt;/permissions&gt;
  &lt;/target&gt;
&lt;/volume&gt;</pre>
<h3>Example disk attachment</h3>
<p>Sheepdog images can be attached to QEMU guests.
Information about attaching a Sheepdog image to a
guest can be found
at the <a href="formatdomain.html#elementsDisks">format domain</a>
page.</p>
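<p>
      A minimal sketch of such a disk element (the host, port and
      target device are placeholders matching the examples above):
    </p>
    <pre>
&lt;disk type='network' device='disk'&gt;
  &lt;driver name='qemu' type='raw'/&gt;
  &lt;source protocol='sheepdog' name='myvol'&gt;
    &lt;host name='localhost' port='7000'/&gt;
  &lt;/source&gt;
  &lt;target dev='vdb' bus='virtio'/&gt;
&lt;/disk&gt;</pre>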
<h3>Valid pool format types</h3>
<p>
The Sheepdog pool does not use the pool format type element.
</p>
<h3>Valid volume format types</h3>
<p>
The Sheepdog pool does not use the volume format type element.
</p>
<h2><a id="StorageBackendGluster">Gluster pool</a></h2>
<p>
This provides a pool based on native Gluster access. Gluster is
a distributed file system that can be exposed to the user via
FUSE, NFS or SMB (see the <a href="#StorageBackendNetFS">netfs</a>
pool for that usage); but for minimal overhead, the ideal access
is via native access (only possible for QEMU/KVM compiled with
libgfapi support).
The cluster and storage volume must already be running, and it
is recommended that the volume be configured with <code>gluster
volume set $volname storage.owner-uid=$uid</code>
and <code>gluster volume set $volname
storage.owner-gid=$gid</code> for the uid and gid that qemu will
be run as. It may also be necessary to
set <code>rpc-auth-allow-insecure on</code> for the glusterd
service, as well as <code>gluster volume set $volname
server.allow-insecure on</code>, to allow access to the gluster
volume.
<span class="since">Since 1.2.0</span>
</p>
<h3>Example pool input</h3>
<p>A gluster volume corresponds to a libvirt storage pool. If a
gluster volume could be mounted as <code>mount -t glusterfs
localhost:/volname /some/path</code>, then the following example
will describe the same pool without having to create a local
mount point. Remember that with gluster, the mount point can be
through any machine in the cluster, and gluster will
automatically pick the ideal transport to the actual bricks
backing the gluster volume, even if on a different host than the
one named in the <code>host</code> designation.
The <code>&lt;name&gt;</code> element is always the volume name
(no slash). The pool source also supports an
optional <code>&lt;dir&gt;</code> element with
a <code>path</code> attribute giving the absolute path (interpreted
relative to the root of the gluster volume) of a subdirectory to use
instead of the top-level directory of the volume.</p>
<pre>
&lt;pool type="gluster"&gt;
  &lt;name&gt;myglusterpool&lt;/name&gt;
  &lt;source&gt;
    &lt;name&gt;volname&lt;/name&gt;
    &lt;host name='localhost'/&gt;
    &lt;dir path='/'/&gt;
  &lt;/source&gt;
&lt;/pool&gt;</pre>
<h3>Example volume output</h3>
<p>Libvirt storage volumes associated with a gluster pool
correspond to the files that can be found when mounting the
gluster volume. The <code>name</code> is the path relative to
the effective mount specified for the pool; and
the <code>key</code> is a string that identifies a single volume
uniquely. Currently the <code>key</code> consists of the URI of
the volume, but it may be changed to a UUID of the volume in the
future.</p>
<pre>
&lt;volume&gt;
  &lt;name&gt;myfile&lt;/name&gt;
  &lt;key&gt;gluster://localhost/volname/myfile&lt;/key&gt;
  &lt;source&gt;
  &lt;/source&gt;
  &lt;capacity unit='bytes'&gt;53687091200&lt;/capacity&gt;
  &lt;allocation unit='bytes'&gt;53687091200&lt;/allocation&gt;
&lt;/volume&gt;</pre>
<h3>Example disk attachment</h3>
<p>Files within a gluster volume can be attached to QEMU guests.
Information about attaching a Gluster image to a
guest can be found
at the <a href="formatdomain.html#elementsDisks">format domain</a>
page.</p>
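<p>
      A minimal sketch of such a disk element (the volume, file and
      target device names are placeholders matching the examples
      above):
    </p>
    <pre>
&lt;disk type='network' device='disk'&gt;
  &lt;driver name='qemu' type='raw'/&gt;
  &lt;source protocol='gluster' name='volname/myfile'&gt;
    &lt;host name='localhost'/&gt;
  &lt;/source&gt;
  &lt;target dev='vdb' bus='virtio'/&gt;
&lt;/disk&gt;</pre>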
<h3>Valid pool format types</h3>
<p>
The Gluster pool does not use the pool format type element.
</p>
<h3>Valid volume format types</h3>
<p>
The valid volume types are the same as for the <code>directory</code>
pool type.
</p>
<h2><a id="StorageBackendZFS">ZFS pool</a></h2>
<p>
This provides a pool based on the ZFS filesystem. Initially it was developed
for FreeBSD, and <span class="since">since 1.3.2</span> experimental support
for <a href="http://zfsonlinux.org/">ZFS on Linux</a> version 0.6.4 or newer
is available.
</p>
<p>A pool can either be created manually using the <code>zpool create</code>
    command, with its name then specified in the source section, or
    <span class="since">since 1.2.9</span> source devices can be specified
    so that libvirt creates the pool.
    </p>
<p>Please refer to the ZFS documentation for details on pool creation.</p>
<p><span class="since">Since 1.2.8</span></p>
<h3>Example pool input</h3>
<pre>
&lt;pool type="zfs"&gt;
  &lt;name&gt;myzfspool&lt;/name&gt;
  &lt;source&gt;
    &lt;name&gt;zpoolname&lt;/name&gt;
    &lt;device path="/dev/ada1"/&gt;
    &lt;device path="/dev/ada2"/&gt;
  &lt;/source&gt;
&lt;/pool&gt;</pre>
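<p>
      If the zpool is instead created by hand, the rough shell
      equivalent of the device list above would be (device names are
      placeholders):
    </p>
    <pre>
zpool create zpoolname /dev/ada1 /dev/ada2</pre>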
<h3>Valid pool format types</h3>
<p>
The ZFS volume pool does not use the pool format type element.
</p>
<h3>Valid volume format types</h3>
<p>
The ZFS volume pool does not use the volume format type element.
</p>
<h2><a id="StorageBackendVstorage">Vstorage pool</a></h2>
<p>
This provides a pool based on Virtuozzo Storage. Virtuozzo Storage is
a highly available, distributed, software-defined storage system with
built-in replication and disaster recovery. More detailed information
about Virtuozzo Storage and its management can be found at
<a href="https://openvz.org/Virtuozzo_Storage">Virtuozzo Storage</a>.
</p>
<p>Please refer to the Virtuozzo Storage documentation for details
on storage management and usage.</p>
<h3>Example pool input</h3>
<p>In order to create a storage pool with the Virtuozzo Storage backend,
    you have to provide the cluster name and be authorized within the
    cluster.</p>
<pre>
&lt;pool type="vstorage"&gt;
  &lt;name&gt;myvstoragepool&lt;/name&gt;
  &lt;source&gt;
    &lt;name&gt;clustername&lt;/name&gt;
  &lt;/source&gt;
  &lt;target&gt;
    &lt;path&gt;/mnt/clustername&lt;/path&gt;
  &lt;/target&gt;
&lt;/pool&gt;</pre>
<h3>Valid volume format types</h3>
<p>The valid volume types are the same as for the <code>directory</code>
    pool type.</p>
</body>
</html>