From: David Hildenbrand <[email protected]>

If we want to trap every access to a section, we might not have a slot.
So let's just tolerate if we don't have one.

Signed-off-by: David Hildenbrand <[email protected]>
Message-Id: <[email protected]>
Tested-by: Joe Clifford <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
---
 accel/kvm/kvm-all.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index fae1eca..f5fa3e2 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -394,8 +394,8 @@ static int kvm_section_update_flags(KVMMemoryListener *kml,
 
     mem = kvm_lookup_matching_slot(kml, start_addr, size);
     if (!mem) {
-        fprintf(stderr, "%s: error finding slot\n", __func__);
-        abort();
+        /* We don't have a slot if we want to trap every access. */
+        return 0;
     }
 
     return kvm_slot_update_flags(kml, mem, section->mr);
@@ -470,8 +470,8 @@ static int kvm_physical_sync_dirty_bitmap(KVMMemoryListener *kml,
     if (size) {
         mem = kvm_lookup_matching_slot(kml, start_addr, size);
         if (!mem) {
-            fprintf(stderr, "%s: error finding slot\n", __func__);
-            abort();
+            /* We don't have a slot if we want to trap every access. */
+            return 0;
         }
 
         /* XXX bad kernel interface alert
-- 
1.8.3.1
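
For context, a small self-contained sketch of the pattern the hunks above switch to. This is not the QEMU source; the Slot type and lookup_matching_slot() helper are simplified, hypothetical stand-ins for KVMSlot and kvm_lookup_matching_slot(). The point it illustrates is that a missing slot is now treated as a harmless no-op (return 0) rather than a fatal error, because a section whose accesses are all trapped legitimately has no slot backing it.

/*
 * Minimal illustration (assumed names, not QEMU code): look up a slot
 * covering a region and tolerate the case where none exists.
 */
#include <stdio.h>
#include <stddef.h>

typedef struct {
    unsigned long start_addr;
    unsigned long size;
} Slot;

/* A toy "slot table" standing in for the per-listener slot array. */
static Slot slots[] = {
    { 0x0000, 0x1000 },
    { 0x2000, 0x1000 },
};

/* Return the slot exactly covering [start_addr, start_addr + size), or NULL. */
static Slot *lookup_matching_slot(unsigned long start_addr, unsigned long size)
{
    for (size_t i = 0; i < sizeof(slots) / sizeof(slots[0]); i++) {
        if (slots[i].start_addr == start_addr && slots[i].size == size) {
            return &slots[i];
        }
    }
    return NULL;
}

static int update_flags(unsigned long start_addr, unsigned long size)
{
    Slot *mem = lookup_matching_slot(start_addr, size);

    if (!mem) {
        /* We don't have a slot if we want to trap every access. */
        return 0;
    }

    printf("updating flags for slot at 0x%lx\n", mem->start_addr);
    return 0;
}

int main(void)
{
    update_flags(0x0000, 0x1000);  /* slot exists: flags updated */
    update_flags(0x4000, 0x1000);  /* no slot: silently tolerated */
    return 0;
}

The design choice mirrors the patch: callers that only update dirty-logging flags or sync the dirty bitmap have nothing to do for a slot-less section, so returning success is safer than aborting the whole VM.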
