virtio-mem: more precise calculation in virtio_mem_mb_state_prepare_next_mb()

We actually need one byte less (next_mb_id is exclusive, first_mb_id is
inclusive). While at it, compact the code.

Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Jason Wang <jasowang@redhat.com>
Cc: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Link: https://lore.kernel.org/r/20201112133815.13332-3-david@redhat.com
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
David Hildenbrand 2020-11-12 14:37:48 +01:00 committed by Michael S. Tsirkin
parent 6725f21157
commit 347202dc04
1 changed file with 2 additions and 4 deletions

@@ -257,10 +257,8 @@ static enum virtio_mem_mb_state virtio_mem_mb_get_state(struct virtio_mem *vm,
  */
 static int virtio_mem_mb_state_prepare_next_mb(struct virtio_mem *vm)
 {
-	unsigned long old_bytes = vm->next_mb_id - vm->first_mb_id + 1;
-	unsigned long new_bytes = vm->next_mb_id - vm->first_mb_id + 2;
-	int old_pages = PFN_UP(old_bytes);
-	int new_pages = PFN_UP(new_bytes);
+	int old_pages = PFN_UP(vm->next_mb_id - vm->first_mb_id);
+	int new_pages = PFN_UP(vm->next_mb_id - vm->first_mb_id + 1);
 	uint8_t *new_mb_state;
 
 	if (vm->mb_state && old_pages == new_pages)