We were not correctly accounting for the internal representation of
some fields, and were just attempting a naive string comparison. We
need to be a bit smarter than that.
Fixes: https://github.com/virt-manager/virt-manager/issues/356
Signed-off-by: Cole Robinson <crobinso@redhat.com>
Strip back the logic to:
* Only try to toggle source_type=memfd and access_mode=shared
* Disable the field if guest has any <numa> config
* Disable the field if domcaps does not report virtiofs and memfd
This is the simplest future-proof case, though it will exclude some
legit guest configs and some libvirt+qemu back compat.
My feeling is the <numa> stuff in particular is pretty advanced, so if
users have it configured they can toggle shared memory via the XML
without too much trouble.
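A minimal sketch of that enable/disable logic (the guest/domcaps helper
names here are illustrative, not virt-manager's actual API):

    def can_toggle_shared_memory(guest, domcaps):
        # any <numa> config disables the field entirely
        if guest.has_numa_config():
            return False
        # require domcaps to report both virtiofs and memfd
        return (domcaps.supports_virtiofs() and
                domcaps.supports_memfd())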
Signed-off-by: Cole Robinson <crobinso@redhat.com>
The virtiofs flag in domcapabilities is used as a proxy to tell us
whether libvirt is new enough to allow bare memory access mode=shared,
so we enable/disable this checkbox according to it.
When we configure shared memory access, if 'memfd' is available in
domcaps, we configure the VM to use it as the memory backend, because
it doesn't need additional host setup for vhost-user devices;
otherwise we use 'file' as the backend.
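For reference, the resulting memory backing XML looks roughly like this
(see the libvirt domain XML documentation for the authoritative format):

    <memoryBacking>
      <source type="memfd"/>
      <access mode="shared"/>
    </memoryBacking>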
If all NUMA nodes explicitly define memAccess=shared, we mark this
checkbox as checked even if virtiofs isn't exposed in domcapabilities.
In this case:
- It doesn't matter what the memoryBacking access mode is, because it
  will be overridden per NUMA node by the memAccess attribute.
- Although the checkbox is disabled, the checked checkbox presents the
  actual shared memory access status to users.
Signed-off-by: Lin Ma <lma@suse.com>
The Linux memfd memory backend doesn't require any host setup, so we
prefer it as the simplest memory XML adjustment to make virtiofs work.
Signed-off-by: Lin Ma <lma@suse.com>
Check whether virtiofs is exposed in domcapabilities. We can use it as
a proxy for 'libvirt is new enough to allow bare memory access
mode=shared' as well.
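A hedged sketch of such a probe; the element layout is assumed from
libvirt's domcapabilities format and should be checked against real
output:

    import xml.etree.ElementTree as ET

    def domcaps_reports_virtiofs(domcaps_xml):
        root = ET.fromstring(domcaps_xml)
        for enum in root.findall("./devices/filesystem/enum"):
            if enum.get("name") == "driverType":
                return any(v.text == "virtiofs"
                           for v in enum.findall("value"))
        return False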
Signed-off-by: Lin Ma <lma@suse.com>
Both these Windows versions are no longer supported, and UEFI isn't
the default, so I don't think this hack is needed much anymore.
Signed-off-by: Cole Robinson <crobinso@redhat.com>
The bare metal world is moving to a situation where UEFI is going to be
the only supported firmware and there will be a strong expectation for
TPM and SecureBoot support.
With this in mind, if we're enabling UEFI on a VM, it makes sense to
also provide a TPM alongside it.
Since this requires swtpm to be installed we can't do this
unconditionally. The forthcoming libvirt release expands the domain
capabilities to report whether TPMs are supported, so we check that.
The user can disable the default TPM by requesting --tpm none
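Hypothetical invocations (other arguments elided):

    virt-install --boot uefi ...            # TPM added by default when supported
    virt-install --boot uefi --tpm none ... # explicitly opt out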
https://github.com/virt-manager/virt-manager/issues/310
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
This address string decomposition is strictly a virt-* CLI feature.
Move it to cli.py to make that explicit.
Signed-off-by: Cole Robinson <crobinso@redhat.com>
Apparently nodedev drm XML can link to a parent device that we can't
look up?
We shouldn't be trying to do the full address string compare anyways,
so just try the name lookup, which would improve the error here too
Fixes: #328
Signed-off-by: Cole Robinson <crobinso@redhat.com>
https://listman.redhat.com/archives/virt-tools-list/2022-January/msg00012.html
On xen, a guest reboot will trigger a non-error viewer-disconnected
signal, but we treat it like an error, which makes it difficult to
reconnect to the VM console.
If there's no error message raised, treat the disconnect like a
non-error case.
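A rough sketch of the handling; the callback and helper names are
illustrative:

    def _viewer_disconnected_cb(self, errmsg):
        if not errmsg:
            # e.g. xen guest reboot: disconnect without an error message
            self._disconnected_cleanly()
            return
        self._disconnected_with_error(errmsg)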
Signed-off-by: Cole Robinson <crobinso@redhat.com>
... with the 'hypervisor default' address. In this case, we need to
force set port=-1 in the XML, to make the changes actually stick
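i.e. something like the following, with the graphics type chosen
arbitrarily for illustration:

    <graphics type="vnc" port="-1"/>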
Signed-off-by: Cole Robinson <crobinso@redhat.com>
The sync_vcpus_topology method will sometimes set the self.vcpus prop,
but other times leave it unset. This is confusing and unhelpful
behaviour. Both callers have logic to set the self.vcpus prop to a
default value if sync_vcpus_topology failed to do so. It makes more
sense to just pass this default value in.
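A hedged before/after sketch from the caller's side (signature
illustrative):

    # before: each caller papered over the possibly-unset prop
    guest.sync_vcpus_topology()
    if not guest.vcpus:
        guest.vcpus = defvcpus

    # after: the default is passed in, so the prop is always set
    guest.sync_vcpus_topology(defvcpus)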
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
When setting CPU defaults we want to force create the topology even if
the user has not specified anything. In particular this allows for
overriding the QEMU defaults, to expose vCPUs as cores instead of
sockets which is a much saner default for Windows.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
In real-world silicon, though, it is rare to have high socket/die
counts, but common to have huge core counts. Some OSes will even
refuse to use sockets over a certain count. Thus we prefer to expose
cores to the guest rather than sockets as the default for missing
fields: for example, 8 vCPUs with no explicit topology are exposed as
1 socket with 8 cores rather than 8 sockets.
This matches a recent change made in QEMU for new machine types:
  commit 4a0af2930a4e4f64ce551152fdb4b9e7be106408
  Author: Yanan Wang <wangyanan55@huawei.com>
  Date:   Wed Sep 29 10:58:09 2021 +0800
    machine: Prefer cores over sockets in smp parsing since 6.2
Closes: https://github.com/virt-manager/virt-manager/issues/155
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
The product of sockets * dies * cores * threads must be equal to the
vCPU count. While libvirt and QEMU will report this error scenario,
it makes sense to catch it in virt-install, so we can test our local
logic for setting defaults for topology.
This exposes some inconsistent configurations in the test suite.
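A minimal sketch of the check (virt-install's real field handling
differs):

    def validate_topology(vcpus, sockets=1, dies=1, cores=1, threads=1):
        if sockets * dies * cores * threads != vcpus:
            raise ValueError(
                "CPU topology is incompatible with the vcpu count: "
                f"{sockets} * {dies} * {cores} * {threads} != {vcpus}")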
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Any missing values in the topology need to be calculated based on the
other values which are set.
We can take account of the fact that 'total_vcpus' treats any unset
values as being 1 to simplify the way we set topology defaults.
This ensures that topology defaulting takes account of dies.
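A hedged sketch of the defaulting, treating unset values as 1 the way
'total_vcpus' does and giving the leftover factor to cores, per the
earlier change:

    def fill_topology(vcpus, sockets=None, dies=None, cores=None,
                      threads=None):
        total = (sockets or 1) * (dies or 1) * (cores or 1) * (threads or 1)
        if cores is None and vcpus % total == 0:
            cores = vcpus // total
        return ((sockets or 1), (dies or 1), (cores or 1), (threads or 1))

    # fill_topology(8) -> (1, 1, 8, 1)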
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
It is always permitted to set dies==1 regardless of architecture or
machine type. The only constraint is around setting values > 1, for
archs/machines that don't support the dies concept.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Although using --cpu topology.XXX is the preferred way to set topology,
it is still possible via the --vcpus parameter. For consistency, this
should support the full set of parameters, so dies needs to be added.
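For example (hypothetical invocation; the product of the fields must
match the vcpu count):

    virt-install --vcpus 8,sockets=1,dies=2,cores=2,threads=2 ...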
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
The MDEV devices listed in the "Add New Virtual Hardware" page are a
concatenation of the parent device name and the MDEV device name, e.g.:
css_0_0_0014 mdev_b204c698_6731_4f25_b5f4_894614a05ec0_0_0_0014. The
parent name is duplicated here, as the MDEV device name itself includes
part of the parent name in libvirt version 7.8.0 and later. So, this
patch changes the MDEVs listed in the "Add New Virtual Hardware" page
to only display the MDEV device name
(e.g. mdev_b204c698_6731_4f25_b5f4_894614a05ec0_0_0_0014) when the new
naming convention is used.
Signed-off-by: Shalini Chellathurai Saroja <shalini@linux.ibm.com>
We should only be returning a driver_type value for volumes that
report support_format(), meaning they support file type formats like
qcow2. Any other reported format should be ignored.
Dropping the check for the 'unknown' value changes one test case a
bit, but it hardcodes raw, which is what libvirt gives us anyways, so
it's okay.
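Roughly (the support_format() name is from this message; the rest is
illustrative):

    def get_driver_type(vol):
        if vol.support_format():
            return vol.format  # file formats like qcow2
        return None  # pool formats like 'none' or 'linux' are not driver types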
Signed-off-by: Cole Robinson <crobinso@redhat.com>
Usually, when a storage volume is attached as a disk and the disk XML
is filled with default values, the "<driver type=...>" value is copied
from the volume's "<format type=...>". This makes sense for volumes in
a storage pool of type "dir", where the format types include "raw,
qcow2...".
However, the same approach cannot be used for storage pools of type
"disk". In that case, the format types include "none, linux, fat16,
fat32...". Such formats cannot be used for the disk's "<driver
type=...>".
Therefore, when generating the disk XML for a volume of a storage pool
of type "disk", the driver type should always be "raw".
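A hedged illustration of the resulting disk XML (pool and volume names
invented):

    <disk type="volume" device="disk">
      <driver name="qemu" type="raw"/>
      <source pool="disk-pool" volume="sda1"/>
    </disk>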
Accidentally changed in
  commit 302ef1f096
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Fri Nov 26 18:51:49 2021 +0000
    virtinst/osdict: add a property for the OsinfoDb object
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Now that we've removed all internal aliases, there is no longer any
reason to keep a cache of OS objects internally. We can directly
query the DB when we need it.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>