libvirt/po/POTFILES

gnulib/lib/gai_strerror.c
gnulib/lib/getopt.c
gnulib/lib/regcomp.c
src/access/viraccessdriverpolkit.c
src/access/viraccessmanager.c
src/admin/admin_server.c
src/admin/admin_server_dispatch.c
src/admin/admin_server_dispatch_stubs.h
src/bhyve/bhyve_capabilities.c
src/bhyve/bhyve_command.c
src/bhyve/bhyve_device.c
src/bhyve/bhyve_domain.c
src/bhyve/bhyve_driver.c
src/bhyve/bhyve_monitor.c
src/bhyve/bhyve_parse_command.c
src/bhyve/bhyve_process.c
src/conf/capabilities.c
src/conf/checkpoint_conf.c
src/conf/cpu_conf.c
src/conf/device_conf.c
src/conf/domain_addr.c
src/conf/domain_capabilities.c
src/conf/domain_conf.c
src/conf/domain_event.c
src/conf/interface_conf.c
src/conf/netdev_bandwidth_conf.c
src/conf/netdev_vlan_conf.c
src/conf/netdev_vport_profile_conf.c
src/conf/network_conf.c
src/conf/networkcommon_conf.c
src/conf/node_device_conf.c
src/conf/numa_conf.c
src/conf/nwfilter_conf.c
src/conf/nwfilter_params.c
src/conf/object_event.c
src/conf/secret_conf.c
src/conf/snapshot_conf.c
src/conf/storage_adapter_conf.c
src/conf/storage_conf.c
src/conf/virchrdev.c
src/conf/virdomainobjlist.c
src/conf/virnetworkobj.c
src/conf/virnodedeviceobj.c
src/conf/virnwfilterobj.c
src/conf/virsavecookie.c
src/conf/virsecretobj.c
src/conf/virstorageobj.c
src/cpu/cpu.c
src/cpu/cpu_arm.c
src/cpu/cpu_map.c
src/cpu/cpu_ppc64.c
src/cpu/cpu_s390.c
src/cpu/cpu_x86.c
src/datatypes.c
src/driver.c
src/esx/esx_driver.c
src/esx/esx_network_driver.c
src/esx/esx_storage_backend_iscsi.c
src/esx/esx_storage_backend_vmfs.c
src/esx/esx_storage_driver.c
src/esx/esx_stream.c
src/esx/esx_util.c
src/esx/esx_vi.c
src/esx/esx_vi_methods.c
src/esx/esx_vi_types.c
src/hyperv/hyperv_driver.c
src/hyperv/hyperv_util.c
src/hyperv/hyperv_wmi.c
src/interface/interface_backend_netcf.c
src/interface/interface_backend_udev.c
src/internal.h
src/libvirt-admin.c
src/libvirt-domain-checkpoint.c
src/libvirt-domain-snapshot.c
src/libvirt-domain.c
src/libvirt-host.c
src/libvirt-lxc.c
src/libvirt-network.c
src/libvirt-nodedev.c
src/libvirt-nwfilter.c
src/libvirt-qemu.c
src/libvirt-secret.c
src/libvirt-storage.c
src/libvirt-stream.c
src/libvirt.c
src/libxl/libxl_capabilities.c
src/libxl/libxl_conf.c
src/libxl/libxl_domain.c
src/libxl/libxl_driver.c
src/libxl/libxl_migration.c
src/locking/lock_daemon.c
src/locking/lock_daemon_dispatch.c
src/locking/lock_driver_lockd.c
src/locking/lock_driver_sanlock.c
src/locking/lock_manager.c
src/locking/sanlock_helper.c
src/logging/log_daemon.c
src/logging/log_daemon_dispatch.c
src/logging/log_handler.c
src/logging/log_manager.c
src/lxc/lxc_cgroup.c
src/lxc/lxc_conf.c
src/lxc/lxc_container.c
src/lxc/lxc_controller.c
src/lxc/lxc_domain.c
src/lxc/lxc_driver.c
src/lxc/lxc_fuse.c
src/lxc/lxc_hostdev.c
src/lxc/lxc_native.c
src/lxc/lxc_process.c
src/network/bridge_driver.c
src/network/bridge_driver_linux.c
src/network/leaseshelper.c
src/node_device/node_device_driver.c
src/node_device/node_device_hal.c
src/node_device/node_device_udev.c
src/nwfilter/nwfilter_dhcpsnoop.c
src/nwfilter/nwfilter_driver.c
src/nwfilter/nwfilter_ebiptables_driver.c
src/nwfilter/nwfilter_gentech_driver.c
src/nwfilter/nwfilter_learnipaddr.c
src/openvz/openvz_conf.c
src/openvz/openvz_driver.c
src/openvz/openvz_util.c
src/phyp/phyp_driver.c
src/qemu/qemu_agent.c
src/qemu/qemu_alias.c
src/qemu/qemu_block.c
src/qemu/qemu_capabilities.c
src/qemu/qemu_cgroup.c
src/qemu/qemu_command.c
src/qemu/qemu_conf.c
src/qemu/qemu_domain.c
src/qemu/qemu_domain_address.c
src/qemu/qemu_driver.c
src/qemu/qemu_hostdev.c
src/qemu/qemu_hotplug.c
src/qemu/qemu_interface.c
src/qemu/qemu_migration.c
src/qemu/qemu_migration_cookie.c
src/qemu/qemu_migration_params.c
src/qemu/qemu_monitor.c
src/qemu/qemu_monitor_json.c
src/qemu/qemu_monitor_text.c
src/qemu/qemu_process.c
src/qemu/qemu_qapi.c
src/remote/remote_client_bodies.h
src/remote/remote_daemon.c
src/remote/remote_daemon_config.c
src/remote/remote_daemon_dispatch.c
src/remote/remote_daemon_dispatch_stubs.h
src/remote/remote_daemon_dispatch_qemu_stubs.h
src/remote/remote_daemon_stream.c
src/remote/remote_driver.c
src/rpc/virkeepalive.c
src/rpc/virnetclient.c
src/rpc/virnetclientprogram.c
src/rpc/virnetclientstream.c
src/rpc/virnetdaemon.c
src/rpc/virnetlibsshsession.c
src/rpc/virnetmessage.c
src/rpc/virnetsaslcontext.c
src/rpc/virnetserver.c
src/rpc/virnetserverclient.c
src/rpc/virnetserverprogram.c
src/rpc/virnetserverservice.c
src/rpc/virnetsocket.c
src/rpc/virnetsshsession.c
src/rpc/virnettlscontext.c
src/secret/secret_driver.c
src/security/security_apparmor.c
src/security/security_dac.c
src/security/security_driver.c
src/security/security_manager.c
src/security/security_selinux.c
src/security/virt-aa-helper.c
src/storage/parthelper.c
src/storage/storage_backend.c
src/storage/storage_backend_disk.c
src/storage/storage_backend_fs.c
src/storage/storage_backend_gluster.c
src/storage/storage_backend_iscsi.c
src/storage/storage_backend_logical.c
src/storage/storage_backend_mpath.c
src/storage/storage_backend_rbd.c
src/storage/storage_backend_scsi.c
src/storage/storage_backend_sheepdog.c
src/storage/storage_backend_vstorage.c
src/storage/storage_backend_zfs.c
src/storage/storage_driver.c
src/storage/storage_util.c
src/test/test_driver.c
src/util/iohelper.c
src/util/viralloc.c
src/util/virarptable.c
src/util/viraudit.c
src/util/virauth.c
src/util/virauthconfig.c
src/util/virbitmap.c
src/util/virbuffer.c
src/util/vircgroup.c
src/util/virclosecallbacks.c
src/util/vircommand.c
src/util/virconf.c
src/util/vircrypto.c
src/util/virdbus.c
src/util/virdnsmasq.c
src/util/virerror.c
src/util/virerror.h
src/util/vireventpoll.c
src/util/virfcp.c
src/util/virfdstream.c
src/util/virfile.c
src/util/virfilecache.c
src/util/virfirewall.c
src/util/virfirmware.c
src/util/virhash.c
src/util/virhook.c
src/util/virhostcpu.c
src/util/virhostdev.c
src/util/virhostmem.c
src/util/viridentity.c
src/util/virinitctl.c
src/util/viriptables.c
src/util/viriscsi.c
src/util/virjson.c
src/util/virkeyfile.c
src/util/virlease.c
src/util/virlockspace.c
src/util/virlog.c
src/util/virmacmap.c
src/util/virmdev.c
src/util/virnetdev.c
src/util/virnetdevbandwidth.c
src/util/virnetdevbridge.c
src/util/virnetdevip.c
src/util/virnetdevmacvlan.c
src/util/virnetdevmidonet.c
src/util/virnetdevopenvswitch.c
src/util/virnetdevtap.c
src/util/virnetdevveth.c
src/util/virnetdevvportprofile.c
src/util/virnetlink.c
src/util/virnodesuspend.c
src/util/virnuma.c
src/util/virobject.c
src/util/virpci.c
src/util/virperf.c
src/util/virpidfile.c
src/util/virpolkit.c
src/util/virportallocator.c
src/util/virprocess.c
src/util/virqemu.c
src/util/virrandom.c
src/util/virresctrl.c
src/util/virrotatingfile.c
src/util/virscsi.c
src/util/virscsihost.c
src/util/virscsivhost.c
src/util/virsecret.c
src/util/virsocketaddr.c
src/util/virstorageencryption.c
src/util/virstoragefile.c
src/util/virstoragefilebackend.c
src/util/virstring.c
src/util/virsysinfo.c
src/util/virthreadjob.c
src/util/virthreadpool.c
src/util/virtime.c
src/util/virtpm.c
src/util/virtypedparam.c
src/util/viruri.c
src/util/virusb.c
src/util/virutil.c
src/util/virvhba.c
src/util/virxml.c
src/vbox/vbox_MSCOMGlue.c
src/vbox/vbox_XPCOMCGlue.c
src/vbox/vbox_common.c
src/vbox/vbox_driver.c
src/vbox/vbox_network.c
src/vbox/vbox_snapshot_conf.c
src/vbox/vbox_storage.c
src/vbox/vbox_tmpl.c
src/vmware/vmware_conf.c
src/vmware/vmware_driver.c
src/vmx/vmx.c
src/vz/vz_driver.c
src/vz/vz_sdk.c
src/vz/vz_utils.c
src/vz/vz_utils.h
src/xenapi/xenapi_driver.c
src/xenapi/xenapi_utils.c
src/xenconfig/xen_common.c
src/xenconfig/xen_xl.c
src/xenconfig/xen_xm.c
tests/virpolkittest.c
tools/libvirt-guests.sh.in
tools/virsh-checkpoint.c
tools/virsh-console.c
tools/virsh-domain-monitor.c
tools/virsh-domain.c
tools/virsh-edit.c
tools/virsh-host.c
tools/virsh-interface.c
tools/virsh-network.c
tools/virsh-nodedev.c
tools/virsh-nwfilter.c
tools/virsh-pool.c
tools/virsh-secret.c
tools/virsh-snapshot.c
tools/virsh-util.c
tools/virsh-volume.c
tools/virsh.c
tools/virt-admin.c
tools/virt-host-validate-bhyve.c
tools/virt-host-validate-common.c
tools/virt-host-validate-lxc.c
tools/virt-host-validate-qemu.c
tools/virt-host-validate.c
tools/virt-login-shell.c
tools/vsh.c
tools/vsh.h