On 29.11.2010 23:47, Giovanni Trematerra wrote:
I got it on QEMU and assumed that QEMU was not doing a proper job of
distributing run time amongst cores (so VirtualBox too?).
I figured out that sched_tick() is being passed a huge number of elapsed
ticks for the CPU at startup, in my case by hardclock_anycpu() (kern_clock.c).

The problem with many ticks at CPU startup should be fixed by r214987.

I don't have a real patch, only a dirty hack: it clamps the tick count
passed to sched_tick() so that we never account for more than 5 s of solid
run time at once, which is something ULE can still handle.

Hope this helps.

diff -r d16464301129 sys/kern/kern_clock.c
--- a/sys/kern/kern_clock.c     Thu Sep 23 11:56:35 2010 -0400
+++ b/sys/kern/kern_clock.c     Sun Oct 03 17:53:39 2010 -0400
@@ -525,7 +525,7 @@ hardclock_anycpu(int cnt, int usermode)
               PROC_SUNLOCK(p);
       }
       thread_lock(td);
-       sched_tick(cnt);
+       sched_tick((cnt < (hz*10)/2) ? cnt : (hz*10)/2);
       td->td_flags |= flags;
       thread_unlock(td);

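To make the numbers concrete, here is a minimal user-space sketch of the
clamp above (not kernel code; the helper name clamp_ticks and the hz value
are mine, assuming the usual default of kern.hz = 1000). With hz = 1000,
(hz*10)/2 = 5000 ticks, i.e. at most 5 seconds of run time per call:

#include <stdio.h>

/* Same clamp as in the hack: never pass more than (hz*10)/2 ticks on. */
static int
clamp_ticks(int cnt, int hz)
{
        int max = (hz * 10) / 2;        /* 5 s worth of ticks */

        return (cnt < max ? cnt : max);
}

int
main(void)
{
        int hz = 1000;                  /* assumed default kern.hz */

        printf("%d -> %d\n", 70000, clamp_ticks(70000, hz));   /* clamped to 5000 */
        printf("%d -> %d\n", 123, clamp_ticks(123, hz));       /* passed through unchanged */
        return (0);
}

So even if hardclock_anycpu() reports a huge backlog of ticks at CPU
startup, sched_tick() only ever sees at most 5 s worth of them.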
--
Giovanni Trematerra


--
Alexander Motin
