New thread priority management

2016-09-21 Thread Sebastian Huber

Hello,

I checked in a patch set today that reworks the thread priority 
management. This improves, for example, the priority inheritance protocol.


https://git.rtems.org/rtems/commit/?id=300f6a481aaf9e6d29811faca71bf7104a01492c

Previously, each thread used a resource count that reflected the number 
of mutexes owned by the thread. A thread could temporarily get a higher 
priority via priority inheritance due to mutexes which it owned 
directly. However, it kept this temporary high priority until it 
released its last mutex (the resource count changes from 1 to 0). So, in 
case of nested mutexes, it could still have a high priority even though 
it had already released the corresponding mutex.


Let's consider this scenario. We have a file system instance protected 
by one mutex (e.g. JFFS2) and a dynamic memory allocator protected by 
another mutex. A low priority thread writes some log data into a file, 
thus it acquires the file system instance mutex. The file system 
allocates dynamic memory. Now a high priority thread interrupts and 
tries to allocate dynamic memory. The allocator mutex is already owned, 
so the priority of the low priority thread is raised to the priority of 
the high priority thread. The memory allocation completes and the 
allocator mutex is released. Since the low priority thread still owns 
the file system instance mutex, it continues to execute with the high 
priority (the high priority thread is not scheduled). It may now perform 
complex and long file system operations (e.g. garbage collection, polled 
flash erase and write functions) with a high priority.
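
To make the scenario concrete, here is a minimal sketch using the 
classic API. The mutex and function names are made up for illustration; 
only the semaphore directives and attributes are real API.

#include <rtems.h>

/* Illustrative sketch of the scenario above; all names are made up. */
rtems_id fs_mutex;     /* protects the file system instance (e.g. JFFS2) */
rtems_id alloc_mutex;  /* protects the dynamic memory allocator */

static void create_mutexes(void)
{
  (void) rtems_semaphore_create(
    rtems_build_name('F', 'S', 'M', 'X'), 1,
    RTEMS_BINARY_SEMAPHORE | RTEMS_PRIORITY | RTEMS_INHERIT_PRIORITY,
    0, &fs_mutex
  );
  (void) rtems_semaphore_create(
    rtems_build_name('A', 'L', 'M', 'X'), 1,
    RTEMS_BINARY_SEMAPHORE | RTEMS_PRIORITY | RTEMS_INHERIT_PRIORITY,
    0, &alloc_mutex
  );
}

/* Low priority thread: writes log data and, nested inside the file
 * system mutex, obtains the allocator mutex. */
static void low_priority_log_write(void)
{
  (void) rtems_semaphore_obtain(fs_mutex, RTEMS_WAIT, RTEMS_NO_TIMEOUT);
  (void) rtems_semaphore_obtain(alloc_mutex, RTEMS_WAIT, RTEMS_NO_TIMEOUT);
  /* allocate memory for the write; meanwhile the high priority thread
   * blocks on alloc_mutex and raises our priority */
  (void) rtems_semaphore_release(alloc_mutex);
  /* Old behaviour: the inherited priority was kept here, because
   * fs_mutex is still owned (resource count > 0).  New behaviour: the
   * priority drops back immediately, since no thread waits for
   * fs_mutex. */
  /* ... long file system operations (garbage collection, flash I/O) ... */
  (void) rtems_semaphore_release(fs_mutex);
}

/* High priority thread: only needs the allocator mutex. */
static void high_priority_alloc(void)
{
  (void) rtems_semaphore_obtain(alloc_mutex, RTEMS_WAIT, RTEMS_NO_TIMEOUT);
  /* allocate memory */
  (void) rtems_semaphore_release(alloc_mutex);
}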


In the new implementation the thread priorities are tracked accurately 
and priority changes take place immediately, i.e. priority changes are 
not deferred until the thread releases its last resource. The actual 
priority of a thread is now an aggregation of priority nodes. The thread 
priority aggregation for the home scheduler instance of a thread 
consists of at least one priority node, which is normally the real 
priority of the thread. The locking protocols (e.g. priority ceiling 
and priority inheritance), rate-monotonic period objects and the POSIX 
sporadic server add, change and remove priority nodes.
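
Conceptually, the aggregation can be pictured as a set of priority 
contributions whose minimum (lower numbers mean higher priorities in 
RTEMS) is what the scheduler sees. The following is a simplified model 
with made-up names, not the actual score implementation:

#include <stdint.h>
#include <stddef.h>

/* Simplified model, not the RTEMS score API: one node represents one
 * contribution to a thread's priority, e.g. the real priority, an
 * inherited priority or a ceiling priority. */
typedef struct priority_node {
  uint64_t              priority;   /* lower value = higher priority */
  struct priority_node *next;
} priority_node;

typedef struct {
  priority_node *contributions;     /* contains at least the real priority */
} priority_aggregation;

/* A locking protocol, rate-monotonic period or sporadic server adds a
 * node on entry and removes it again on exit; the change is visible
 * immediately. */
static void add_contribution(priority_aggregation *agg, priority_node *node)
{
  node->next = agg->contributions;
  agg->contributions = node;
}

/* The effective priority is the highest (numerically smallest)
 * contribution currently attached to the thread. */
static uint64_t effective_priority(const priority_aggregation *agg)
{
  uint64_t best = UINT64_MAX;
  const priority_node *n;

  for (n = agg->contributions; n != NULL; n = n->next) {
    if (n->priority < best) {
      best = n->priority;
    }
  }
  return best;
}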


Priority inheritance now works recursively: a thread T0 owns mutex A, a 
thread T1 owns mutex B and waits for mutex A, and a thread T2 which 
wants to obtain mutex B propagates its priority to the direct owner 
thread T1 and to the indirect owner thread T0.
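
A sketch of this chain with the classic API (mutex_A and mutex_B are 
priority inheritance mutexes created as in the sketch further above; 
task and mutex names are again made up):

#include <rtems.h>

rtems_id mutex_A;  /* priority inheritance mutex, created as above */
rtems_id mutex_B;  /* priority inheritance mutex, created as above */

static void task_T0(void)   /* lowest priority, owns A */
{
  (void) rtems_semaphore_obtain(mutex_A, RTEMS_WAIT, RTEMS_NO_TIMEOUT);
  /* once T2 blocks on B, T0 indirectly inherits T2's priority via T1 */
  (void) rtems_semaphore_release(mutex_A);
}

static void task_T1(void)   /* medium priority, owns B, waits for A */
{
  (void) rtems_semaphore_obtain(mutex_B, RTEMS_WAIT, RTEMS_NO_TIMEOUT);
  (void) rtems_semaphore_obtain(mutex_A, RTEMS_WAIT, RTEMS_NO_TIMEOUT);
  (void) rtems_semaphore_release(mutex_A);
  (void) rtems_semaphore_release(mutex_B);
}

static void task_T2(void)   /* highest priority, wants B */
{
  /* blocking here raises T1 directly and T0 indirectly */
  (void) rtems_semaphore_obtain(mutex_B, RTEMS_WAIT, RTEMS_NO_TIMEOUT);
  (void) rtems_semaphore_release(mutex_B);
}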


In addition, timeouts are supported (nothing new; however, this is 
quite complex on SMP configurations) and we have deadlock detection.
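
Both can be exercised with the classic API; a small sketch (again with 
made-up names, and without asserting the exact status code returned for 
a detected deadlock):

#include <rtems.h>

/* mutex_A and mutex_B are priority inheritance mutexes as above. */
extern rtems_id mutex_A;
extern rtems_id mutex_B;

static void obtain_with_timeout(void)
{
  rtems_status_code sc;

  (void) rtems_semaphore_obtain(mutex_A, RTEMS_WAIT, RTEMS_NO_TIMEOUT);

  /* Bounded wait: if mutex_B cannot be obtained within one second the
   * directive returns RTEMS_TIMEOUT instead of blocking forever. */
  sc = rtems_semaphore_obtain(
    mutex_B, RTEMS_WAIT, rtems_clock_get_ticks_per_second()
  );
  if (sc == RTEMS_TIMEOUT) {
    (void) rtems_semaphore_release(mutex_A);
    return;
  }

  (void) rtems_semaphore_release(mutex_B);
  (void) rtems_semaphore_release(mutex_A);
}

static void obtain_in_opposite_order(void)
{
  rtems_status_code sc;

  /* Obtaining the mutexes in the opposite order can close the cycle
   * "owner of A waits for B, owner of B waits for A".  With the
   * deadlock detection the second obtain fails with an error status
   * instead of hanging. */
  (void) rtems_semaphore_obtain(mutex_B, RTEMS_WAIT, RTEMS_NO_TIMEOUT);
  sc = rtems_semaphore_obtain(mutex_A, RTEMS_WAIT, RTEMS_NO_TIMEOUT);
  if (sc != RTEMS_SUCCESSFUL) {
    (void) rtems_semaphore_release(mutex_B);
    return;
  }
  (void) rtems_semaphore_release(mutex_A);
  (void) rtems_semaphore_release(mutex_B);
}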


All these features are not free. The additional storage space is 
negligible and affects only the thread control block and the thread 
stack. A non-recursive mutex object with support for priority 
inheritance consists of an SMP spin lock and two pointers (e.g. 12 bytes 
on a 32-bit machine in case of MCS spin locks). The implementation of 
the common case of a mutex obtain/release with no previous owner/waiting 
thread is next to optimal. The run-time of mutex obtain/release 
operations now depends on the mutex dependency graph, which is defined 
by the application. Complex and deeply nested mutex dependencies lead to 
long mutex obtain/release sequences. However, even without the accurate 
priority tracking, the overall run-time behaviour in scenarios with 
complex dependencies is pretty bad, due to the production of parasitic 
high priority threads or incomplete priority inheritance. Complex 
dependencies are a problem in themselves and it is the duty of an 
application designer to avoid them.
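
For illustration of the storage claim (this is not the actual 
declaration, just the idea):

/* Illustrative layout only, not the actual RTEMS declaration: one
 * MCS-style SMP spin lock (a single pointer-sized word) plus the owner
 * and the first waiting thread, i.e. three 32-bit words = 12 bytes on
 * a 32-bit machine. */
typedef struct {
  void *mcs_lock_tail;   /* SMP spin lock */
  void *owner;           /* owning thread, NULL if the mutex is free */
  void *first_waiter;    /* head of the wait queue, NULL if empty */
} pi_mutex_layout;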


--
Sebastian Huber, embedded brains GmbH

Address : Dornierstr. 4, D-82178 Puchheim, Germany
Phone   : +49 89 189 47 41-16
Fax : +49 89 189 47 41-09
E-Mail  : sebastian.hu...@embedded-brains.de
PGP : Public key available on request.

This message is not a business communication within the meaning of the EHUG.

___
devel mailing list
devel@rtems.org
http://lists.rtems.org/mailman/listinfo/devel

Re: [PATCH] cpukit: Add nxp-sc16is752 serial device driver

2016-09-21 Thread Sebastian Huber



On 20/09/16 13:06, Chris Johns wrote:

On 20/09/2016 5:24 PM, Sebastian Huber wrote:

On 20/09/16 09:12, Chris Johns wrote:

On 20/09/2016 17:03, Sebastian Huber wrote:

We have a Termios device driver chapter:

https://docs.rtems.org/doc-current/share/rtems/html/bsp_howto/Console-Driver.html#Console-Driver


I was thinking of a few points to look at when updating a BSP.

Maybe we should add a chapter like "Getting a BSP up to date" to this
guide.


Does a user manual need something that is about transitioning BSPs?


I think this stuff belongs to the BSP guide.



The BSP Howto has been converted ...

https://ftp.rtems.org/pub/rtems/people/chrisj/docs/bsp_howto/

.. so you could make a branch, make your changes and upload a copy for 
us to use. :)


I have several tickets open that need a documentation update so that I 
can close them. I prefer to work on this in one go.


--
Sebastian Huber, embedded brains GmbH

Address : Dornierstr. 4, D-82178 Puchheim, Germany
Phone   : +49 89 189 47 41-16
Fax : +49 89 189 47 41-09
E-Mail  : sebastian.hu...@embedded-brains.de
PGP : Public key available on request.

This message is not a business communication within the meaning of the EHUG.

___
devel mailing list
devel@rtems.org
http://lists.rtems.org/mailman/listinfo/devel


SMP Next Steps

2016-09-21 Thread Sebastian Huber

Hello,

with the reworked thread priority management we now have all the 
information available to feed the schedulers and implement the last 
step of the OMIP locking protocol.


https://devel.rtems.org/ticket/2556

The remaining work is mostly SMP only. We have two major locking 
domains for threads: one is the thread wait domain (thread queues) and 
the other is the thread state (scheduler-related) domain. The thread 
wait domain needs to pass the thread priority information to the thread 
state domain. This will be done via a dedicated low-level SMP lock 
(Thread_Control::Scheduler::Lock) and disabled thread dispatching. Each 
thread may use multiple scheduler instances (clustered scheduling) due 
to the locking protocols (MrsP and OMIP).


The first approach to implementing MrsP had several severe design 
flaws. I will replace it and use a thread queue instead. So, the 
Resource Handler will be removed entirely 
(https://git.rtems.org/rtems/tree/cpukit/score/include/rtems/score/resource.h). 
The first scheduler helping protocol used a return value to indicate 
threads in need of help. There are three problems with this approach. 
Firstly, you can return at most one thread, but multiple threads in 
need of help may be produced by one operation, e.g. due to affinity 
constraints. Secondly, the thread that initiated the operation must 
carry out the ask-for-help operation. This is bad in terms of scheduler 
instance isolation, e.g. a high priority thread H in instance A that 
sends a message to a worker thread in instance B must execute scheduler 
code of instance B. Thirdly, it complicates the SMP low-level locking. 
I will change this to use an SMP message processed via an 
inter-processor interrupt.
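
The direction is roughly the following (a hypothetical sketch, all 
names are made up; only the idea of recording requests per processor 
and processing them from the inter-processor interrupt follows the 
description above):

#include <stddef.h>

/* Hypothetical sketch, not RTEMS code: per-processor ask-for-help
 * requests processed from the inter-processor interrupt. */
typedef struct help_request {
  struct help_request *next;
  void                *thread;   /* thread in need of help */
} help_request;

typedef struct {
  help_request *head;            /* protected by a per-CPU lock (omitted) */
} per_cpu_help_queue;

/* Called by a scheduler operation that produces a thread in need of
 * help: instead of returning that thread to the caller, enqueue a
 * request for the processor that owns the relevant scheduler instance
 * and notify it.  Any number of requests can be produced by one
 * operation. */
static void ask_for_help(per_cpu_help_queue *cpu, help_request *req)
{
  req->next = cpu->head;
  cpu->head = req;
  /* send an inter-processor interrupt to that processor (omitted) */
}

/* Runs on the target processor from the inter-processor interrupt, so
 * only that processor executes the scheduler code of its own instance
 * and the initiating thread never runs foreign scheduler code. */
static void process_help_requests(per_cpu_help_queue *cpu)
{
  while (cpu->head != NULL) {
    help_request *req = cpu->head;

    cpu->head = req->next;
    /* hand req->thread to the scheduler instance (omitted) */
    (void) req;
  }
}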


Once the OMIP implementation is done, the SMP support for RTEMS is 
feature complete from my point of view. There are still some 
interesting things to do, but the groundwork is done and I think we 
have a state-of-the-art feature set in the real-time SMP systems area.


--
Sebastian Huber, embedded brains GmbH

Address : Dornierstr. 4, D-82178 Puchheim, Germany
Phone   : +49 89 189 47 41-16
Fax : +49 89 189 47 41-09
E-Mail  : sebastian.hu...@embedded-brains.de
PGP : Public key available on request.

This message is not a business communication within the meaning of the EHUG.

___
devel mailing list
devel@rtems.org
http://lists.rtems.org/mailman/listinfo/devel

Re: New thread priority management

2016-09-21 Thread Pavel Pisa
Hello Sebastian,

On Wednesday 21 of September 2016 09:08:25 Sebastian Huber wrote:
> Hello,
>
> I checked in a patch set today that reworks the thread priority
> management. This improves the priority inheritance protocol for example.

Big congratulations and thanks. I have checked out master today and my 
tests are happy with the change. I have tested on i386 under QEMU in a 
UP configuration for now.

I have had a test for checking priority release timing included in my 
RTEMS templates for a long time.

This one is based on the classic API:

  
http://rtime.felk.cvut.cz/gitweb/rtems-devel.git/tree/HEAD:/rtems-tests/prioinh_check

The second is POSIX based, and the actual test file 
"prio_inherit_test.c" can be compiled for Linux, VxWorks, etc.

  
http://rtime.felk.cvut.cz/gitweb/rtems-devel.git/tree/HEAD:/rtems-tests/prioinh_posix
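
The portable core of such a test is just a PTHREAD_PRIO_INHERIT mutex 
and threads at three priorities. A minimal sketch (not the actual 
prio_inherit_test.c) of the mutex setup:

#include <pthread.h>
#include <stdio.h>

/* Minimal sketch, not the actual prio_inherit_test.c: create the kind
 * of priority inheritance mutex the test uses on RTEMS, Linux (with RT
 * capabilities) and other POSIX systems. */
static pthread_mutex_t slo_mutex;

static int create_pi_mutex(pthread_mutex_t *mutex)
{
  pthread_mutexattr_t attr;
  int err;

  err = pthread_mutexattr_init(&attr);
  if (err != 0)
    return err;

  /* request the priority inheritance protocol for this mutex */
  err = pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
  if (err == 0)
    err = pthread_mutex_init(mutex, &attr);

  (void) pthread_mutexattr_destroy(&attr);
  return err;
}

int main(void)
{
  if (create_pi_mutex(&slo_mutex) != 0) {
    fprintf(stderr, "cannot create PI mutex\n");
    return 1;
  }
  /* ... create the TLO/TMID/THI threads with SCHED_FIFO priorities and
   * record the order of their messages, as in the logs below ... */
  return 0;
}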

I am attaching logs. The POSIX test log for the current RTEMS 
implementation, "prioinh_posix-new.log", matches 
"prioinh_posix-linux-1-cpu.log", i.e. the Linux one when limited to one 
CPU:

  taskset -c 0 ./prioinh_posix-linux

(On Linux, the test has to be run by root or by a user with RT capabilities.)

The difference between the previous RTEMS implementation
  prioinh_posix-old.log
and the current one
  prioinh_posix-new.log
in the form of a diff:

 RTEMS v 4.11.99
 Starting application prioinh_check

 RTEMS Shell on /dev/console. Use 'help' to list commands.
 [/] # *** Starting up Task_1 ***
 THI created
 TMID created
 LO created
 1 obtained SLO
 1 going to release RLO
 1 obtained SHI
 1 going to release RHI
 THI released (RHI)
 TLO released (RLO)
 TLO released (RLO)
 1 going to release RMID
 1 going to release SHI
-1 going to release SLO
 THI obtained SHI
 THI going to release SHI
 THI released SHI
 MID released (RMID)
 MID going to sleep
-1 released both SHI and SLO
-1 going to sleep
-TLO obtained SLO
-TLO going to release SLO
-TLO released SLO
-1 obtained SLO
-1 going to release RLO
-1 obtained SHI
-1 going to release RHI
-THI released (RHI)
-TLO released (RLO)
-1 going to release RMID
-1 going to release SHI
 1 going to release SLO
-THI obtained SHI
-THI going to release SHI
-THI released SHI
-MID released (RMID)
-MID going to sleep
 1 released both SHI and SLO
 1 going to sleep

The important point is what happens when the low priority thread
releases the semaphore/mutex shared with the high priority thread:
whether it keeps the priority gained from the high priority
thread that blocks on that semaphore.

RLO and RHI are additional semaphores which
are controlled by the test controlling task "1".
This way it is ensured that the semaphores/mutexes SHI
and SLO are obtained by the TLO and THI tasks in such an
order that the THI task has to wait for the TLO task.

Interpreting data recorded on an SMP system is much
more complex and I do not remember whether the sequence
remains deterministic in that case.
It is different for sure, because outputs can
go in parallel. But the check can be used even on SMP
to check for late priority release.

Best wishes,

 Pavel
*** Starting up Task_1 ***
THI created
TMID created
LO created
1 obtained SLO
1 going to release RLO
1 obtained SHI
1 going to release RHI
THI released (RHI)
TLO released (RLO)
1 going to release RMID
1 going to release SHI
THI obtained SHI
THI going to release SHI
THI released SHI
MID released (RMID)
MID going to sleep
1 going to release SLO
1 released both SHI and SLO
1 going to sleep
TLO obtained SLO
TLO going to release SLO
TLO released SLO
1 obtained SLO
1 going to release RLO
1 obtained SHI
1 going to release RHI
THI released (RHI)
TLO released (RLO)
1 going to release RMID
1 going to release SHI
THI obtained SHI
THI going to release SHI
THI released SHI
MID released (RMID)
MID going to sleep
1 going to release SLO
1 released both SHI and SLO
1 going to sleep
TLO obtained SLO
TLO going to release SLO
TLO released SLO
1 obtained SLO
1 going to release RLO
1 obtained SHI
1 going to release RHI
THI released (RHI)
TLO released (RLO)
1 going to release RMID
1 going to release SHI
THI obtained SHI
THI going to release SHI
THI released SHI
MID released (RMID)
MID going to sleep
1 going to release SLO
1 released both SHI and SLO
1 going to sleep
TLO obtained SLO
TLO going to release SLO
TLO released SLO
1 obtained SLO
1 going to release RLO
1 obtained SHI
1 going to release RHI
THI released (RHI)
TLO released (RLO)
1 going to release RMID
1 going to release SHI
THI obtained SHI
THI going to release SHI
THI released SHI
MID released (RMID)
MID going to sleep
1 going to release SLO
1 released both SHI and SLO
1 going to sleep
TLO obtained SLO
TLO going to release SLO
TLO released SLO
1 obtained SLO
1 going to release RLO
1 obtained SHI
1 going to release RHI
THI released (RHI)
TLO released (RLO)
1 going to release RMID
1 going to release SHI
THI obtained SHI
THI going to release SHI
THI released SHI
MID released (RMID)
MID going to sleep
1 going to release SLO
1 released both SHI and SLO
1 going to sleep
TLO obtained SLO
TLO going to release SLO
TLO released SLO
1 obtained SLO
1 going to release RLO
1 obtained SHI
1 going to relea

Re: SMP Next Steps

2016-09-21 Thread Gedare Bloom
Thank you for the update and great progress. The plan to resolve MrsP
issues looks sound.

On Wed, Sep 21, 2016 at 8:15 AM, Sebastian Huber
 wrote:
> Hello,
>
> with the reworked thread priority management we have now all information
> available to feed the schedulers and implement the last step of the OMIP
> locking protocol.
>
> https://devel.rtems.org/ticket/2556
>
> The remaining work is mostly SMP only. We have two major locking domains for
> threads, one is the thread wait domain (thread queues) and the other is the
> thread state (scheduler related) domain. The thread wait domain needs to
> pass the thread priority information to the thread state domain. This will
> be done via a dedicated low-level SMP lock (Thread_Control::Scheduler::Lock)
> and disabled thread dispatching. Each thread may use multiple scheduler
> instances (clustered scheduling) due to the locking protocols (MrsP and
> OMIP).
>
> The first approach to implement MrsP had several severe design flaws. I will
> replace it and use a thread queue instead. So, the Resource Handler will be
> removed entirely
> (https://git.rtems.org/rtems/tree/cpukit/score/include/rtems/score/resource.h).
> The first scheduler helping protocol used a return value to indicate threads
> in need for help. There are three problems with this approach. Firstly, you
> can return at most one thread, but multiple threads in need for help may be
> produced by one operation, e.g. due to affinity constraints. Secondly, the
> thread that initiated the operation must carry out the ask-for-help
> operation. This is bad in terms of scheduler instance isolation, e.g. high
> priority thread H in instance A sends a message to worker thread in instance
> B, H must execute scheduler code of instance B. Thirdly, it complicates the
> SMP low-level locking. I will change this to use an SMP message processed
> via an inter-processor-interrupt.
>
> Once the OMIP implementation is done, the SMP support for RTEMS is feature
> complete from my point of view. There are still some interesting things to
> do, but the ground work is done and I think we have a state-of-the-art
> feature set in real-time SMP systems area.
>
> --
> Sebastian Huber, embedded brains GmbH
>
> Address : Dornierstr. 4, D-82178 Puchheim, Germany
> Phone   : +49 89 189 47 41-16
> Fax : +49 89 189 47 41-09
> E-Mail  : sebastian.hu...@embedded-brains.de
> PGP : Public key available on request.
>
> This message is not a business communication within the meaning of the EHUG.
>
> ___
> devel mailing list
> devel@rtems.org
> http://lists.rtems.org/mailman/listinfo/devel
___
devel mailing list
devel@rtems.org
http://lists.rtems.org/mailman/listinfo/devel

Re: [PATCH 0/5] arm/tms570: include hardware initialization and selftests to run RTEMS directly from Flash without loader on TMS570LS3137 HDK.

2016-09-21 Thread Gedare Bloom
I only added a couple of very minor inline comments. Looks good; I
think you can push directly, no one else needs to re-read this
BSP-specific stuff.

On Sun, Sep 18, 2016 at 8:33 PM, Gedare Bloom  wrote:
> I will try to read your GitHub diffs sometime this week.
>
> Thanks
>
> On Sat, Sep 17, 2016 at 6:34 PM, Pavel Pisa  wrote:
>> Hello Gedare,
>>
>> I have tried to update the code according to your review.
>> I have even enabled/uncommented and added the implementation for
>> some more tests, especially the ECC-integrated TCRAM single error
>> correction and reporting test. I have commented on why the double
>> error abort exception based test is commented out: it would require
>> significantly more work. I have left the original commits unchanged for now
>>
>>   1a98178 arm/tms570: define base addresses of all TMS570LS3137 SPI 
>> interfaces.
>>   9d06f3b arm/tms570: include hardware initialization and selftest based on 
>> Ti HalCoGen generated files.
>>   1ef345e arm/tms570: include TMS570_USE_HWINIT_STARTUP option to select 
>> bare metal startup and selftest.
>>   74b20a1 arm/tms570: document BSP setup with included hardware 
>> initialization.
>>   b38ecb7 arm/tms570: update bootstrap generated preinstall.am
>>
>> I think that even for you it is easier to read diffs against the reviewed 
>> state for now. I plan to collapse/rearrange the commits then. I can send an 
>> updated series to the mailing list then if a complete V2 review is preferred.
>>
>> The new updates
>>
>>   472a561 arm/tms570: include HalCoGen generated MPU initialization.
>>  used to test MPU and SDRAM on real HW; this is not mainline material 
>> for this round
>>
>>   b092a83 bsps/arm: Export bsp_start_hook_0_done symbol from ARM start.S.
>>   b204502 arm/tms570: Enable more self-tests including integrated TCRAM and 
>> its ECC functionality testing.
>>  this is a bit like dancing on eggs, but it works
>>   aed1d28 arm/tms570: Add data barriers to external SDRAM setup code.
>>  there are still some problems with the first configuration/reset cycle 
>> after power-on reset
>>   c4cbfe8 arm/tms570: Update comments, document functions and add references 
>> to the corresponding HalCoGen functions.
>>   45fda72 arm/tms570: readme and configure help extended a little.
>>   9f83ada arm/tms570: change ESM parity selftests related functions/types 
>> prefix from tms570_partest_ to tms570_selftest_par_ .
>>
>> The commits, diffs can be read at GitHub
>>
>>   https://github.com/AoLaD/rtems/commits/tms570-bsp
>>
>> Best wishes,
>>
>>Pavel
___
devel mailing list
devel@rtems.org
http://lists.rtems.org/mailman/listinfo/devel