On Wed, 2014-12-10 at 08:42 +0100, Sebastian Huber wrote:
> On 10/12/14 02:53, Nick Withers wrote:
> > On Wed, 2014-12-03 at 08:09 +0100, Sebastian Huber wrote:
> >> Hello Nick,
> >>
> >> to disable TCR[ARE] is not the right fix.  You have to check the
> >> TSR[DIS] register in the nanoseconds extension.  See
> >> lpc-clock-config.c for an example.
> > Hi Sebastian,
> >
> > Thanks for your help, 'fraid I need to lean on you a little more...
> >
> > I don't understand how the proposed solution (or that in
> > lpc-clock-config.c) can work properly in the case where the nanoseconds
> > extension could be queried multiple times with an interrupt lock held
> > (which I believe disables interrupts), preventing the tick from being
> > incremented, as I believe happens in spnsext01.
> >
> > lpc-clock-config.c's lpc_clock_nanoseconds_since_last_tick(), if I'm
> > understanding it correctly, says "Ah, there's an interrupt pending, so
> > we need to add another tick period to our current timer value". Cool
> > bananas.
> 
> Yes, this is exactly how it works.
> 
> > But what if it's called again with the same interrupt still
> > pending? The timer counter may have wrapped again and we could return an
> > earlier time.
> >
> > ...Couldn't we? What am I missing?
> >
> > Cheers :-)
> 
> If you disable interrupts for more than one clock tick interval, then 
> you have a bigger problem than a non-monotonic uptime.

Yeah, that's probably fair...
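(For the archives, here's a toy model of the non-monotonicity I was
worried about, with made-up numbers — this is not the actual
lpc-clock-config.c or c_clock.c code, just the "add one tick period if
an interrupt is pending" scheme in miniature:)

```c
#include <stdint.h>

/* Toy model of the pending-interrupt compensation scheme.
 * PERIOD is a made-up decrementer reload value (ticks per clock tick). */
#define PERIOD 16000000u

/* Ticks since the last reload, plus a whole extra period if the tick
 * interrupt has been generated but not yet serviced. */
static uint32_t toy_elapsed(uint32_t current, int pending)
{
  return (PERIOD - current) + (pending ? PERIOD : 0u);
}
```

With interrupts masked: just after the first wrap the counter might read
12000000 (a quarter of a period into the new count), so
toy_elapsed(12000000, 1) reports 20000000 ticks. If the counter wraps a
second time while the interrupt is still pending and now reads 14400000,
the same call reports only 17600000 — a smaller value, i.e. time went
backwards.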

> If you disable the auto-reload, then the decrementer stops, so you have 
> a time freeze.

I suppose that didn't worry me as much as non-monotonic time. Probably
because I'm a selfish so-and-so and not personally writing a real-time
application with strict periodic tasking requirements (and don't
particularly care about the uptime drifting) :-P

A new patch is attached, thanks for your help and guidance!

It's a little more convoluted than strictly necessary because I wasn't
confident that every relevant BSP's Clock_Decrementer_value (some of
which are set at run time) was small enough to avoid overflowing the
uint32_t tmp variable in the intermediate calculations if I simply
doubled the range of values it might need to hold (to cover the extra
clock tick period that can now be returned). The downside is that this
potentially adds another run-time division. I could just use a
uint64_t...?
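(For what it's worth, the uint64_t variant I'm alluding to would look
something like this — just a sketch, where DECREMENTER_RELOAD and
TIMEBASE_HZ are hypothetical stand-ins for the run-time values of
Clock_Decrementer_value and BSP_bus_frequency / BSP_time_base_divisor:)

```c
#include <stdint.h>

/* Hypothetical stand-ins for the BSP-provided run-time values. */
#define DECREMENTER_RELOAD 250000u   /* decrementer ticks per clock tick (10 ms) */
#define TIMEBASE_HZ        25000000u /* BSP_bus_frequency / BSP_time_base_divisor */

/* With a 64-bit intermediate the product cannot overflow, even with an
 * extra tick period folded in, and only one division is needed. */
static uint32_t ns_since_last_tick(uint32_t clicks, int tick_pending)
{
  uint64_t elapsed = DECREMENTER_RELOAD - clicks;

  if (tick_pending)
    elapsed += DECREMENTER_RELOAD; /* account for the wrapped period */

  return (uint32_t)((elapsed * 1000000000ULL) / TIMEBASE_HZ);
}
```

E.g. half-way through a 10 ms tick (clicks == 125000) this returns
5000000 ns, or 15000000 ns with a tick pending — no split multiply by
1000 needed, since the 64-bit product can't go negative.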


P.S.: For my education (because no one has anything better to do,
right?)... Is there a more general solution? Maybe servicing the tick
increment in the nanoseconds call if an interrupt's been generated (is
servicing an interrupt out-of-band "usually" legitimate across
architectures / PICs?), while deferring e.g. scheduling until interrupts
are re-enabled?

If not, I guess it should be safe to guarantee / assume that interrupts
are never blocked for longer than x cycles (when users can't disable
them themselves)?

Would the FreeBSD clock code you mentioned earlier handle it any better?


P.P.S.: Clock_driver_nanoseconds_since_last_tick() also seems broken in
the non-Book-E case if the decrementer value is (one's complement)
negative.
-- 
Nick Withers

Embedded Systems Programmer
Department of Nuclear Physics, Research School of Physics and Engineering
The Australian National University (CRICOS: 00120C)
From 48a370b95558aa60449a16b4c5f4799013e6d75c Mon Sep 17 00:00:00 2001
From: Nick Withers <nick.with...@anu.edu.au>
Date: Fri, 12 Dec 2014 12:12:21 +1100
Subject: [PATCH] PowerPC Book E: Account for an extra tick period if a tick
 increment's pending

---
 c/src/lib/libcpu/powerpc/mpc6xx/clock/c_clock.c | 36 +++++++++++++++++++++----
 1 file changed, 31 insertions(+), 5 deletions(-)

diff --git a/c/src/lib/libcpu/powerpc/mpc6xx/clock/c_clock.c b/c/src/lib/libcpu/powerpc/mpc6xx/clock/c_clock.c
index f14acab..c3e2cf5 100644
--- a/c/src/lib/libcpu/powerpc/mpc6xx/clock/c_clock.c
+++ b/c/src/lib/libcpu/powerpc/mpc6xx/clock/c_clock.c
@@ -22,6 +22,7 @@
 #include <rtems.h>
 #include <rtems/libio.h>
 #include <rtems/clockdrv.h>
+#include <stdbool.h>
 #include <stdlib.h>                     /* for atexit() */
 #include <assert.h>
 #include <libcpu/c_clock.h>
@@ -196,17 +197,42 @@ void Clock_exit( void )
 static uint32_t Clock_driver_nanoseconds_since_last_tick(void)
 {
   uint32_t clicks, tmp;
+  bool tick_pending;
 
   PPC_Get_decrementer( clicks );
 
   /*
-   * Multiply by 1000 here separately from below so we do not overflow
-   * and get a negative value.
+   * Check whether a clock tick interrupt is pending and the
+   * decrementer wrapped.
+   *
+   * If so, we'll compensate by returning a time one tick period longer.
+   *
+   * We have to check (Book E) interrupt status after reading the
+   * decrementer. If we don't, we may miss an interrupt and read a
+   * wrapped decrementer value without compensating for it.
    */
-  tmp = (Clock_Decrementer_value - clicks) * 1000;
-  tmp /= (BSP_bus_frequency/BSP_time_base_divisor);
+  tick_pending = ppc_cpu_is_bookE() && (_read_BOOKE_TSR() & BOOKE_TSR_DIS);
+
+  if ( tick_pending )
+  {
+    /*
+     * Re-read the decrementer: the tick interrupt may have been
+     * generated and the decrementer may have wrapped between our
+     * first read and the interrupt status check.
+     */
+    PPC_Get_decrementer( clicks );
+  }
+
+  tmp = ((Clock_Decrementer_value - clicks) * 1000U)
+          / (BSP_bus_frequency/BSP_time_base_divisor);
+
+  if ( tick_pending )
+  {
+    tmp += (Clock_Decrementer_value * 1000U)
+             / (BSP_bus_frequency/BSP_time_base_divisor);
+  }
 
-  return tmp * 1000;
+  return tmp * 1000U;
 }
 
 /*
-- 
2.1.2

_______________________________________________
devel mailing list
devel@rtems.org
http://lists.rtems.org/mailman/listinfo/devel
