On 12/01/2018 17:19, Steven Seeger wrote:
> A good example of this would be that say I have an interrupt that occurs
> every second. If I were to print out the virtual time that interrupt
> occurs in the device model, I should see a time of:
>
> 1.000000
> 2.000000
> 3.000000
> 4.000000
>
> etc
>
> Instead, I see:
>
> 1.000000
> 2.000013
> 3.000074
> 4.000022
What is the guest doing in the meanwhile?

> When the timer function is called in the device model, I arm the timer
> again with qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + 1000000000ULL and
> expect this time to be exactly 1 second of virtual time later.
>
> Either the virtual time is increasing without instructions executing, or
> the granularity of when the timer is serviced relative to virtual time
> is not exact. I think the latter is happening. Is this because a TCG
> code block must execute completely, and this causes increases in virtual
> time based on the number of instructions in that block, and the number
> of instructions varies?

Virtual time increases only when instructions are executed, or when the
CPUs are idle (in the latter case, behavior depends on "-icount sleep": if
on, it increases at the same pace as real time; if off, it jumps
immediately to the next deadline).

Paolo
