mirror of https://gitee.com/openkylin/libvirt.git
numa_conf: Properly check for caches in virDomainNumaDefValidate()
When adding support for HMAT, in f0611fe883
I've introduced a check which aims to validate /domain/cpu/numa/interconnects. As a part of that, there is a loop which checks whether every <latency/> with a @cache attribute refers to an existing cache level. For instance:

  <cpu mode='host-model' check='partial'>
    <numa>
      <cell id='0' cpus='0-5' memory='512000' unit='KiB' discard='yes'>
        <cache level='1' associativity='direct' policy='writeback'>
          <size value='8' unit='KiB'/>
          <line value='5' unit='B'/>
        </cache>
      </cell>
      <interconnects>
        <latency initiator='0' target='0' cache='1' type='access' value='5'/>
        <bandwidth initiator='0' target='0' type='access' value='204800' unit='KiB'/>
      </interconnects>
    </numa>
  </cpu>

This XML defines that accessing the L1 cache of node #0 from node #0 has a latency of 5 ns.

However, the loop was not written properly: the check inside it always compared against the first cache of the target node, never the rest. Therefore, the following example errors out:

  <cpu mode='host-model' check='partial'>
    <numa>
      <cell id='0' cpus='0-5' memory='512000' unit='KiB' discard='yes'>
        <cache level='3' associativity='direct' policy='writeback'>
          <size value='10' unit='KiB'/>
          <line value='8' unit='B'/>
        </cache>
        <cache level='1' associativity='direct' policy='writeback'>
          <size value='8' unit='KiB'/>
          <line value='5' unit='B'/>
        </cache>
      </cell>
      <interconnects>
        <latency initiator='0' target='0' cache='1' type='access' value='5'/>
        <bandwidth initiator='0' target='0' type='access' value='204800' unit='KiB'/>
      </interconnects>
    </numa>
  </cpu>

This errors out even though it is a valid configuration: the L1 cache under node #0 is still present.

Fixes: f0611fe883
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Laine Stump <laine@redhat.com>
This commit is contained in:
parent b3204e820f
commit e41ac71fca
@@ -1421,7 +1421,7 @@ virDomainNumaDefValidate(const virDomainNuma *def)
         if (l->cache > 0) {
             for (j = 0; j < def->mem_nodes[l->target].ncaches; j++) {
-                const virDomainNumaCache *cache = def->mem_nodes[l->target].caches;
+                const virDomainNumaCache *cache = &def->mem_nodes[l->target].caches[j];
 
                 if (l->cache == cache->level)
                     break;