2015-10-23 23:43+0200, Laszlo Ersek:
> Commit b10d92a54dac ("KVM: x86: fix RSM into 64-bit protected mode")
> reordered the rsm_load_seg_64() and rsm_enter_protected_mode() calls,
> relative to each other. The argument that said commit made was correct,
> however, putting rsm_enter_protected_mode() first wholesale violated the
> following (correct) invariant from em_rsm():
>
> * Get back to real mode, to prepare a safe state in which to load
> * CR0/CR3/CR4/EFER. Also this will ensure that addresses passed
> * to read_std/write_std are not virtual.
Nice catch.
> Namely, rsm_enter_protected_mode() may re-enable paging, *after* which
>
> rsm_load_seg_64()
> GET_SMSTATE()
> read_std()
>
> will try to interpret the (smbase + offset) address as a virtual one. This
> will result in unexpected page faults being injected to the guest in
> response to the RSM instruction.
I think this is a good time to introduce a read_phys helper -- the one
we previously avoided adding by relying on that assumption.
> Split rsm_load_seg_64() in two parts:
>
> - The first part, rsm_stash_seg_64(), shall call GET_SMSTATE() while in
> real mode, and save the relevant state off SMRAM into an array local to
> rsm_load_state_64().
>
> - The second part, rsm_load_seg_64(), shall occur after entering protected
> mode, but the segment details shall come from the local array, not the
> guest's SMRAM.
>
> Fixes: b10d92a54dac ("KVM: x86: fix RSM into 64-bit protected mode")
> Cc: Paolo Bonzini <[email protected]>
> Cc: Radim Krčmář <[email protected]>
> Cc: Jordan Justen <[email protected]>
> Cc: Michael Kinney <[email protected]>
> Cc: [email protected]
> Signed-off-by: Laszlo Ersek <[email protected]>
> ---
The code would be cleaner if we had a different approach, but this works
too and is safer for stable. In case you prefer to leave the rewrite for
a future victim,
Reviewed-by: Radim Krčmář <[email protected]>