On Thu, Oct 15, 2020 at 10:23 PM Richi Dubey <richidu...@gmail.com> wrote:
>>
>> Your scheduler is a unique piece of software. It may be making assumptions 
>> that are not checked in the generic scheduler code. And checks in other 
>> schedulers are of no use to you.
>> There may not be any but this is something to consider. There are existing 
>> checks scattered throughout the source. Do any need to be in your scheduler?
>
> I understand. Thanks.
>
>>
>> Simulators tend to run slowly. They also may spend a long time when testing 
>> RTEMS executing the idle thread waiting for time to pass. Fast idle just 
>> says if a clock tick occurs while the idle thread is running, call clock 
>> tick over and over until another thread is unblocked and preempts idle.
>
> This is hard to understand.
>
I'll try ;)

The fast idle option is available for simulators. When it is enabled,
the simulator can "tick" time forward in RTEMS at a faster rate than
wall time. This is used to pass time faster when the only thing
running is the idle thread. It allows for tests to run faster.
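
Conceptually, the clock driver's interrupt handler behaves like the sketch
below. This is an illustration, not the exact RTEMS code;
only_idle_thread_is_ready() is a made-up stand-in for the check the shared
clock driver shell performs when a BSP defines CLOCK_DRIVER_USE_FAST_IDLE:

#include <stdbool.h>
#include <rtems.h>

/* A BSP clock driver would define this before pulling in the shared
 * clock driver shell; defined here so the sketch is self-contained. */
#define CLOCK_DRIVER_USE_FAST_IDLE 1

/* Hypothetical helper, not an RTEMS API: stands in for the driver's
 * real "is the idle thread the only thing to run?" check. */
extern bool only_idle_thread_is_ready( void );

static void clock_interrupt_handler( void )
{
  rtems_clock_tick(); /* the normal once-per-interrupt tick */

#if CLOCK_DRIVER_USE_FAST_IDLE
  /* Burn through idle time: keep announcing clock ticks until a
   * thread other than idle becomes ready and would preempt it. */
  while ( only_idle_thread_is_ready() ) {
    rtems_clock_tick();
  }
#endif
}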

>> Looking across all failing tests usually helps. For some reason, one tends 
>> to be easier to debug than the others.
>
> Got it. I'll read through all the failing tests.
>
>> Also some of the tests have a lot of code up front that doesn't impact what 
>> you are testing. It may be possible to disable early parts of sp16 to reduce 
>> what you have to step through and compare schedulers.
>
> This is great advice, thanks. How do I disable the early unwanted parts? I 
> can't comment them out, nor can I set an appropriate breakpoint. What should I do?
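
One trick that usually works is the preprocessor: wrap the early sections in
#if 0 / #endif, which removes them even where C block comments cannot nest.
A sketch with made-up scenario names standing in for the real sp16 code:

#include <tmacros.h> /* RTEMS test support; provides TEST_BEGIN()/TEST_END() */

/* Hypothetical helpers, not actual sp16 functions. */
extern void run_early_scenario( void );
extern void run_failing_scenario( void );

rtems_task Init( rtems_task_argument argument )
{
  (void) argument;

  TEST_BEGIN();

#if 0
  /* The early setup you do not want to step through: #if 0 disables
   * it wholesale, even if it already contains block comments. */
  run_early_scenario();
#endif

  run_failing_scenario(); /* the part you actually want to debug */

  TEST_END();
  rtems_test_exit( 0 );
}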
>
>> > That's a bad typo or auto-correct. Probably should have been tactic.
>> tracing
>
> I understand. Thanks.
>
> On Tue, Oct 13, 2020 at 11:06 PM Gedare Bloom <ged...@rtems.org> wrote:
>>
>> On Mon, Oct 12, 2020 at 11:19 AM Joel Sherrill <j...@rtems.org> wrote:
>> >
>> >
>> >
>> > On Mon, Oct 12, 2020 at 10:47 AM Richi Dubey <richidu...@gmail.com> wrote:
>> >>>
>> >>> There are existing checks scattered throughout the source. Do any need 
>> >>> to be in your scheduler?
>> >>
>> >> I don't understand. If there are already checks scattered throughout, why do 
>> >> I need more checks in my scheduler? Are these checks independent of the 
>> >> checks I might need in the scheduler? Please explain.
>> >
>> >
>> > Your scheduler is a unique piece of software. It may be making assumptions 
>> > that are not checked in the generic scheduler code. And checks in other 
>> > schedulers are of no use to you.
>> >
>> > There may not be any but this is something to consider.
>> >
>> >>
>> >>
>> >>> + Looks like the bsp has fast idle on but that should not impact 
>> >>> anything.
>> >>
>> >> What's fast idle? I found this. :p How can time run as fast as possible?
>> >
>> >
>> > Simulators tend to run slowly. They also may spend a long time when 
>> > testing RTEMS executing the idle thread waiting for time to pass. Fast 
>> > idle just says if a clock tick occurs while the idle thread is running, 
>> > call clock tick over and over until another thread is unblocked and 
>> > preempts idle.
>> >>
>> >>
>> >>> + Run this with another scheduler and see if you can identify when 
>> >>> that scheduler makes the decision you are missing. There has to be one 
>> >>> of the scheduler hooks that is making a different decision. Run the test 
>> >>> side by side with two different schedulers. Alternate forward motion in 
>> >>> the two and compare the behaviour.
>> >>
>> >> This is genius. Thanks a lot. I'm gonna work on this.
>> >>
>> >>> + Adding trading might help but is probably more trouble to set up than 
>> >>> just comparing good and bad schedulers in parallel.
>> >>
>> >> What's trading?
>> >
>> >
>> > That's a bad typo or auto-correct. Probably should have been tactic.
>> tracing
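
For reference, tracing here can be as light as a printk() call in each
scheduler operation, so that runs under two schedulers can be diffed. A
sketch with made-up names rather than the real Strong APA entry points:

#include <rtems/bspIo.h>        /* printk() */
#include <rtems/score/thread.h> /* Thread_Control */

/* Hypothetical helper: logs which scheduler operation ran on which
 * thread, so two runs can be compared line by line. */
static void trace_sched_op( const char *op, const Thread_Control *thread )
{
  printk( "sched: %s thread=0x%08x\n", op, (unsigned) thread->Object.id );
}

/* Illustrative call site; the real hook names live in the scheduler. */
void example_scheduler_block( Thread_Control *thread )
{
  trace_sched_op( "block", thread );
  /* ... the existing block logic of the scheduler ... */
}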
>>
>> >>
>> >>
>> >>> + Look at what every failing test is doing. It may be a common issue, and 
>> >>> one test may be easier to debug than the others.
>> >>
>> >> Thanks. I'll check this.
>> >
>> >
>> > Looking across all failing tests usually helps. For some reason, one tends 
>> > to be easier to debug than the others.
>> >
>> > Also some of the tests have a lot of code up front that doesn't impact 
>> > what you are testing. It may be possible to disable early parts of sp16 to 
>> > reduce what you have to step through and compare schedulers.
>> >
>> > --joel
>> >>
>> >>
>> >>
>> >> On Sun, Oct 11, 2020 at 5:08 AM Joel Sherrill <j...@rtems.org> wrote:
>> >>>
>> >>>
>> >>>
>> >>> On Sat, Oct 10, 2020, 10:47 AM Richi Dubey <richidu...@gmail.com> wrote:
>> >>>>
>> >>>> Hi Mr. Huber,
>> >>>>
>> >>>> Thanks for checking in.
>> >>>>
>> >>>>> I suggested to enable your new scheduler implementation as the default
>> >>>>> to check if it is in line with the standard schedulers. I would first
>> >>>>> get some high level data. Select a BSP with good test results on a
>> >>>> simulator (for example sparc/leon3 or arm/realview_pbx_a9_qemu). Run
>> >>>> the tests and record the test data. Then enable the SMP EDF scheduler
>> >>>> as the default, run the tests, record the data. Then enable your scheduler as
>> >>>>> the default, run the tests, record the data. Then get all tests which
>> >>>>> fail only with your scheduler.
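
For a single test or application, a scheduler can be selected through
confdefs.h; switching the default for the whole test suite is done in the
RTEMS sources instead, as suggested above. A sketch of the per-application
form, assuming the stock CONFIGURE_SCHEDULER_* option names (verify them
against the confdefs headers in your tree):

#include <rtems.h>

rtems_task Init( rtems_task_argument argument );

#define CONFIGURE_APPLICATION_NEEDS_CLOCK_DRIVER
#define CONFIGURE_APPLICATION_NEEDS_CONSOLE_DRIVER

#define CONFIGURE_MAXIMUM_PROCESSORS 4

/* Pick exactly one scheduler: */
#define CONFIGURE_SCHEDULER_EDF_SMP
/* #define CONFIGURE_SCHEDULER_STRONG_APA */

#define CONFIGURE_RTEMS_INIT_TASKS_TABLE
#define CONFIGURE_MAXIMUM_TASKS 8

#define CONFIGURE_INIT
#include <rtems/confdefs.h>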
>> >>>>
>> >>>> Yes, this is something I've already done based on your previous 
>> >>>> suggestion. I set SCHEDULER_STRONG_APA (the current RTEMS master's 
>> >>>> version) as the default scheduler for both sp and SMP and ran the tests 
>> >>>> (on both sparc/leon3 and arm/realview_pbx_a9_qemu). Then I set 
>> >>>> SCHEDULER_STRONG_APA (my version) as the default scheduler for both sp 
>> >>>> and SMP, ran the tests, and compared the results with the master's 
>> >>>> Strong APA run. The following (extra) tests failed:
>> >>>>
>> >>>>  sp02.exe
>> >>>>  sp16.exe
>> >>>>  sp30.exe
>> >>>>  sp31.exe
>> >>>>  sp37.exe
>> >>>>  sp42.exe
>> >>>>  spfatal29.exe
>> >>>>  tm24.exe
>> >>>>
>> >>>>> Do a high level analysis of all failing tests. Try to figure out a 
>> >>>>> new scenario for the test smpstrongapa01.
>> >>>>
>> >>>> Okay, I will look into this. This is a great suggestion, thanks!
>> >>>>
>> >>>>
>> >>>>> Do all the development with RTEMS_DEBUG enabled!
>> >>>>> Add _Assert() stuff to your scheduler. Check pre- and post-conditions 
>> >>>>> of all operations. Check invariants.
>> >>>>
>> >>>> How do I check postconditions? Using _Assert() or by manually debugging 
>> >>>> each function call?
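
_Assert() from <rtems/score/assert.h> is the usual tool; it only compiles to
a check when RTEMS_DEBUG is enabled, so it costs nothing in release builds.
A sketch of pre- and postcondition checks around a scheduler operation, with
made-up helper names:

#include <stdbool.h>
#include <rtems/score/assert.h>        /* _Assert(), active with RTEMS_DEBUG */
#include <rtems/score/schedulernode.h> /* Scheduler_Node */

/* Hypothetical helpers standing in for real scheduler internals. */
extern bool node_is_queued( const Scheduler_Node *node );
extern void do_enqueue( Scheduler_Node *node );

static void example_enqueue( Scheduler_Node *node )
{
  /* Precondition: the node must not already be on a queue. */
  _Assert( !node_is_queued( node ) );

  do_enqueue( node );

  /* Postcondition: the operation left the node queued. */
  _Assert( node_is_queued( node ) );
}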
>> >>>
>> >>>
>> >>> There are existing checks scattered throughout the source. Do any need 
>> >>> to be in your scheduler?
>> >>>
>> >>> Random thoughts:
>> >>>
>> >>> + Looks like the bsp has fast idle on but that should not impact 
>> >>> anything.
>> >>>
>> >>> + Run this with another scheduler and see if you can identify when 
>> >>> that scheduler makes the decision you are missing. There has to be one 
>> >>> of the scheduler hooks that is making a different decision. Run the test 
>> >>> side by side with two different schedulers. Alternate forward motion in 
>> >>> the two and compare the behaviour.
>> >>>
>> >>> + Adding trading might help but is probably more trouble to set up than 
>> >>> just comparing good and bad schedulers in parallel.
>> >>>
>> >>> + Look at what every failing test is doing. It may be a common issue, and 
>> >>> one test may be easier to debug than the others.
>> >>>
>> >>> --joel
>> >>>>
>> >>>>
>> >>>>
>> >>>>
>> >>>> On Sat, Oct 10, 2020 at 6:09 PM Sebastian Huber 
>> >>>> <sebastian.hu...@embedded-brains.de> wrote:
>> >>>>>
>> >>>>> Hello Richi,
>> >>>>>
>> >>>>> I suggested to enable your new scheduler implementation as the default
>> >>>>> to check if it is in line with the standard schedulers. I would first
>> >>>>> get some high level data. Select a BSP with good test results on a
>> >>>>> simulator (for example sparc/leon3 or arm/realview_pbx_a9_qemu). Run
>> >>>>> the tests and record the test data. Then enable the SMP EDF scheduler
>> >>>>> as the default, run the tests, record the data. Then enable your scheduler as
>> >>>>> the default, run the tests, record the data. Then get all tests which
>> >>>>> fail only with your scheduler. Do a high level analysis of all failing
>> >>>>> tests. Try to figure out a new scenario for the test smpstrongapa01.
>> >>>>>
>> >>>>> Do all the development with RTEMS_DEBUG enabled!
>> >>>>>
>> >>>>> Add _Assert() stuff to your scheduler. Check pre- and post-conditions 
>> >>>>> of all operations. Check invariants.
>> >>>>>
>> >
_______________________________________________
devel mailing list
devel@rtems.org
http://lists.rtems.org/mailman/listinfo/devel
