On 15/08/2014 23:49, Hulin, Patrick - 0559 - MITLL wrote:
>>> In this case, the write is 8 bytes and unaligned, so it gets split
>>> into 8 single-byte writes. In stock QEMU, these writes are done in
>>> reverse order (see the loop in softmmu_template.h, line 402). The
>>> third decryption xor from Kernel Patch Protection should hit 4 bytes
>>> that are in the current TB and 4 bytes in the TB afterwards in linear
>>> order. Since this happens in reverse order, and the last 4 bytes of
>>> the write do not intersect the current TB, those writes happen
>>> successfully and QEMU's memory is modified. The 4th byte in linear
>>> order (the 5th in temporal order) then triggers the
>>> current_tb_modified flag and cpu_restore_state, longjmp'ing out.
>>>
>> Would it work to just call tb_invalidate_phys_page_range before the
>> helper_ret_stb loop?
>
> Maybe. I think there's another issue, which is that QEMU is ending up
> in the I/O read/write code instead of the normal memory RW. This could
> be QEMU messing up, it could be PatchGuard doing something weird, or it
> could be me misunderstanding what's going on. I never really figured out
> how the control flow works here.
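The reverse-order splitting described in the quote can be sketched as follows. This is a minimal stand-in for the slow-path loop in softmmu_template.h, not QEMU's actual code; the function name, the order-recording parameters, and the fixed little-endian layout are illustrative assumptions.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical sketch of the softmmu slow path for an unaligned
 * 8-byte store: it is split into 8 single-byte stores issued in
 * reverse order, i.e. the highest address is written first.  The
 * "order" array is only here to make the temporal order observable. */
static void unaligned_store8(uint8_t *mem, size_t addr, uint64_t val,
                             size_t *order, size_t *n)
{
    for (int i = 7; i >= 0; i--) {           /* reverse temporal order */
        mem[addr + i] = (uint8_t)(val >> (i * 8)); /* little-endian byte i */
        if (order) {
            order[(*n)++] = addr + i;        /* record which address when */
        }
    }
}
```

With this ordering, if bytes addr+4..addr+7 lie past the end of the current TB, they are written (and committed to guest memory) before the byte that finally trips the current_tb_modified check, which matches the partial-write behavior described above.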
That's okay.  Everything that's in the slow path goes down
io_mem_read/write (in this case TLB_NOTDIRTY is set for dirty-page
tracking and causes QEMU to choose that path).

Try making a self-contained test case using the kvm-unit-tests harness
(git://git.kernel.org/pub/scm/virt/kvm/kvm-unit-tests.git).

Paolo
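The dispatch Paolo describes can be illustrated with a small sketch. In QEMU's softmmu fast path, the store address is compared against the cached TLB entry with the flag bits included, so setting a flag like TLB_NOTDIRTY makes the comparison fail and forces the access down the slow I/O path where dirty tracking runs. The bit value, macro names, and comparison below are simplified assumptions, not QEMU's exact definitions.

```c
#include <stdint.h>
#include <stdbool.h>

#define TLB_NOTDIRTY      (1u << 4)              /* assumed bit position */
#define TARGET_PAGE_MASK  (~(uintptr_t)0xfff)    /* assumed 4 KiB pages */

/* Illustrative fast-path check: the page comparison deliberately keeps
 * TLB_NOTDIRTY in the mask, so a dirty-tracked page never matches and
 * the store falls through to io_mem_write instead of plain RAM access. */
static bool store_takes_io_path(uintptr_t addr, uintptr_t tlb_addr)
{
    return (addr & TARGET_PAGE_MASK)
        != (tlb_addr & (TARGET_PAGE_MASK | TLB_NOTDIRTY));
}
```

So ending up in the I/O read/write code is not necessarily QEMU "messing up": any write to a page whose TLB entry carries TLB_NOTDIRTY is routed that way on purpose.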
