On 5/31/19 7:06 AM, Sebastian Huber wrote:
> On 27/05/2019 11:03, Jiri Gaisler wrote:
>> On 5/23/19 1:47 PM, Sebastian Huber wrote:
>>> On 22/05/2019 22:34, Jiri Gaisler wrote:
>>>> Adding a pseudo-random delay of 0 - 15 clocks to each trap/interrupt 
>>>> causes the test to pass on all CPU configurations with the default time 
>>>> slice (50)! I am not sure what this means - it could be a hidden race 
>>>> condition, the algorithm might need some jitter to work, or it could still 
>>>> be a simulator issue.
>>> Adding a pseudo-random delay to interrupts is probably not enough. 
>>> Sometimes the atomic instructions are carried out with interrupts disabled. 
>>> Would it be possible to add a pseudo-random delay to each of the 
>>> instruction cycles per core?
>>>
>> I have pushed a patch to my sis repository at rtems.org that adds an 
>> emulated L1 cache to SMP configurations and also improves the timing of a few 
>> instructions. The epoch.exe test now runs on both SPARC and RISC-V for 3 and 
>> 4 CPUs. To pass on 2 CPUs, a smaller time slice in the simulator is needed, 
>> e.g. -d 17. Feel free to test...
>
> Thanks for your update, I cloned the repository and it builds fine on my 
> machine. The instruction trace is really nice. If I have a bit of time I will 
> try to figure out why the EBR algorithm ends up in the endless loop.

Good to hear. Note that epoch01 now passes for both SPARC and RISC-V even 
without the simulated L1 cache. This was done by adding proper timing to all 
MUL/DIV instructions and by adding a more random delay to the trap response time.
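
For anyone curious, the jitter idea is roughly as follows. This is only a 
minimal sketch with made-up names (lcg_next, trap_delay), not the actual sis 
code:

    #include <stdint.h>

    static uint32_t lcg_state = 12345;  /* per-simulator PRNG state */

    /* Cheap linear congruential generator; any simple PRNG works,
       the point is just to decorrelate the cores' trap timing. */
    static uint32_t lcg_next(void)
    {
        lcg_state = lcg_state * 1103515245u + 12345u;
        return lcg_state >> 16;
    }

    /* Base trap latency plus 0 - 15 clocks of pseudo-random jitter. */
    static uint32_t trap_delay(uint32_t base_clocks)
    {
        return base_clocks + (lcg_next() & 0xF);
    }

Without the jitter, the cores step through traps in lock-step and can retry a 
contended atomic in the exact same cycle forever; a bounded random delay breaks 
that symmetry the way real hardware timing variation would.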

I have pushed a small patch to fix some build problems on Cygwin and FreeBSD. I 
don't know how fast we want to progress, but I could prepare a patch for RSB to 
use the new sis repository and skip the sis patches for gdb. I guess this would 
also have to include updating any documentation that refers to the old sis ...

Jiri.

