Re: warning for __atomic_compare_exchange_4

2014-07-08 Thread Sebastian Huber

Hello Joel,

see libatomic_i.h in the GCC sources:

/* All sized operations are implemented in hidden functions prefixed with
   "libat_".  These are either renamed or aliased to the expected prefix
   of "__atomic".  Some amount of renaming is required to avoid hiding or
   conflicting with the builtins of the same name, but this additional
   use of hidden symbols (where appropriate) avoids unnecessary PLT entries
   on relevant targets.  */

A proper fix would be to implement libatomic for SPARCv8.
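For reference, the GCC manual documents the generic form of this built-in as
__atomic_compare_exchange_n(); the _4 suffix is the 4-byte specialization the
compiler falls back to.  The declaration below is only a sketch of the shape an
implementation has to match to avoid the conflicting-types warning; the exact
parameter types GCC expects for the size-suffixed symbol are an assumption here
and should be checked against builtins.def and libatomic_i.h.

  #include <stdbool.h>
  #include <stdint.h>

  /*
   * Documented generic built-in (GCC manual, "__atomic Builtins"):
   *
   *   bool __atomic_compare_exchange_n (type *ptr, type *expected,
   *                                     type desired, bool weak,
   *                                     int success_memorder,
   *                                     int failure_memorder);
   *
   * Assumed shape of the 4-byte specialization -- the pointer
   * qualifiers and integer types are a guess, not taken from the
   * GCC sources.
   */
  bool __atomic_compare_exchange_4(
    volatile void *mem,
    void          *expected,
    uint32_t       desired,
    bool           weak,
    int            success_memorder,
    int            failure_memorder
  );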

On 2014-07-07 23:21, Joel Sherrill wrote:


Hi

Saw this today in a build log and couldn't find the prototype
so I am hoping someone else can fix it.

sparc-rtems4.11-gcc --pipe -DHAVE_CONFIG_H   -I../../..
-I../../../../cpukit/../../../sis/lib/include   -mcpu=cypress -O2 -g
-ffunction-sections -fdata-sections -Wall -Wmissing-prototypes
-Wimplicit-function-declaration -Wstrict-prototypes -Wnested-externs -MT
libscorecpu_a-sparcv8-atomic.o -MD -MP -MF
.deps/libscorecpu_a-sparcv8-atomic.Tpo -c -o
libscorecpu_a-sparcv8-atomic.o `test -f 'sparcv8-atomic.c' || echo
'../../../../../../../../rtems/c/src/../../cpukit/score/cpu/sparc/'`sparcv8-atomic.c
mv -f .deps/libscorecpu_a-cpu_asm.Tpo .deps/libscorecpu_a-cpu_asm.Po
../../../../../../../../rtems/c/src/../../cpukit/score/cpu/sparc/sparcv8-atomic.c:106:6:
warning: conflicting types for built-in function
'__atomic_compare_exchange_4' [enabled by default]
  bool __atomic_compare_exchange_4(




--
Sebastian Huber, embedded brains GmbH

Address : Dornierstr. 4, D-82178 Puchheim, Germany
Phone   : +49 89 189 47 41-16
Fax : +49 89 189 47 41-09
E-Mail  : sebastian.hu...@embedded-brains.de
PGP : Public key available on request.

This message is not a business communication within the meaning of the EHUG.


rtems_io_lookup_name deprecated?

2014-07-08 Thread Marcos Díaz
Hi, we are developing a CAN driver for a BSP, and at first we are
developing it to use it with the RTEMS I/O manager. For this I need
the function rtems_io_lookup_name, which is deprecated (it works but
gives a warning message). I've been looking and it says to use stat()
instead, but stat() returns a different struct, and I need the major
and minor numbers that rtems_io_lookup_name returns.
So, why has that function been deprecated? What is the correct way to
develop a driver like this? Thanks!

-- 
__


Marcos Díaz

Software Engineer


San Lorenzo 47, 3rd Floor, Office 5

Córdoba, Argentina


Phone: +54 351 4217888 / +54 351 4218211/ +54 351 7617452

Skype: markdiaz22

Re: Fwd: [PATCH] Add const qualifier in

2014-07-08 Thread Sebastian Huber

On 2014-07-07 23:27, Joel Sherrill wrote:

Hi Chris,

I added this patch to rtems-tools. Attached is a patch to
RSB to use it for the sparc tools. If this looks OK, should
I make a similar update to all applicable 4.11 targets?


Why don't we move to another Newlib snapshot which includes this patch?  Pavel
also needs the recent memchr() fix for his ARM tool chain.


--
Sebastian Huber, embedded brains GmbH

Address : Dornierstr. 4, D-82178 Puchheim, Germany
Phone   : +49 89 189 47 41-16
Fax : +49 89 189 47 41-09
E-Mail  : sebastian.hu...@embedded-brains.de
PGP : Public key available on request.

This message is not a business communication within the meaning of the EHUG.


[PATCH 2/2] smptests/smpmrsp01: Add and update test cases

2014-07-08 Thread Sebastian Huber
---
 testsuites/smptests/smpmrsp01/init.c|  428 +---
 testsuites/smptests/smpmrsp01/smpmrsp01.scn |  477 +++
 2 files changed, 720 insertions(+), 185 deletions(-)

diff --git a/testsuites/smptests/smpmrsp01/init.c b/testsuites/smptests/smpmrsp01/init.c
index b93f196..ce22c25 100644
--- a/testsuites/smptests/smpmrsp01/init.c
+++ b/testsuites/smptests/smpmrsp01/init.c
@@ -21,7 +21,9 @@
 
 #include 
 #include 
+#include 
 #include 
+#include 
 
 #define TESTS_USE_PRINTK
 #include "tmacros.h"
@@ -32,27 +34,43 @@ const char rtems_test_name[] = "SMPMRSP 1";
 
 #define MRSP_COUNT 32
 
+#define SWITCH_EVENT_COUNT 32
+
 typedef struct {
   uint32_t sleep;
   uint32_t timeout;
   uint32_t obtain[MRSP_COUNT];
+  uint32_t cpu[CPU_COUNT];
 } counter;
 
 typedef struct {
+  uint32_t cpu_index;
+  const Thread_Control *executing;
+  const Thread_Control *heir;
+  const Thread_Control *heir_node;
+  Priority_Control heir_priority;
+} switch_event;
+
+typedef struct {
   rtems_id main_task_id;
+  rtems_id migration_task_id;
   rtems_id counting_sem_id;
   rtems_id mrsp_ids[MRSP_COUNT];
   rtems_id scheduler_ids[CPU_COUNT];
   rtems_id worker_ids[2 * CPU_COUNT];
-  rtems_id timer_id;
   volatile bool stop_worker[CPU_COUNT];
   counter counters[2 * CPU_COUNT];
+  uint32_t migration_counters[CPU_COUNT];
   Thread_Control *worker_task;
   SMP_barrier_Control barrier;
+  SMP_lock_Control switch_lock;
+  size_t switch_index;
+  switch_event switch_events[32];
 } test_context;
 
 static test_context test_instance = {
-  .barrier = SMP_BARRIER_CONTROL_INITIALIZER
+  .barrier = SMP_BARRIER_CONTROL_INITIALIZER,
+  .switch_lock = SMP_LOCK_INITIALIZER("test instance switch lock")
 };
 
 static void barrier(test_context *ctx, SMP_barrier_State *bs)
@@ -60,14 +78,27 @@ static void barrier(test_context *ctx, SMP_barrier_State *bs)
   _SMP_barrier_Wait(&ctx->barrier, bs, 2);
 }
 
-static void assert_prio(rtems_id task_id, rtems_task_priority expected_prio)
+static rtems_task_priority get_prio(rtems_id task_id)
 {
   rtems_status_code sc;
   rtems_task_priority prio;
 
   sc = rtems_task_set_priority(task_id, RTEMS_CURRENT_PRIORITY, &prio);
   rtems_test_assert(sc == RTEMS_SUCCESSFUL);
-  rtems_test_assert(prio == expected_prio);
+
+  return prio;
+}
+
+static void wait_for_prio(rtems_id task_id, rtems_task_priority prio)
+{
+  while (get_prio(task_id) != prio) {
+/* Wait */
+  }
+}
+
+static void assert_prio(rtems_id task_id, rtems_task_priority expected_prio)
+{
+  rtems_test_assert(get_prio(task_id) == expected_prio);
 }
 
 static void change_prio(rtems_id task_id, rtems_task_priority prio)
@@ -85,6 +116,78 @@ static void assert_executing_worker(test_context *ctx)
   );
 }
 
+static void switch_extension(Thread_Control *executing, Thread_Control *heir)
+{
+  test_context *ctx = &test_instance;
+  SMP_lock_Context lock_context;
+  size_t i;
+
+  _SMP_lock_ISR_disable_and_acquire(&ctx->switch_lock, &lock_context);
+
+  i = ctx->switch_index;
+  if (i < SWITCH_EVENT_COUNT) {
+switch_event *e = &ctx->switch_events[i];
+Scheduler_SMP_Node *node = _Scheduler_SMP_Thread_get_node(heir);
+
+e->cpu_index = rtems_get_current_processor();
+e->executing = executing;
+e->heir = heir;
+e->heir_node = _Scheduler_Node_get_owner(&node->Base);
+e->heir_priority = node->priority;
+
+ctx->switch_index = i + 1;
+  }
+
+  _SMP_lock_Release_and_ISR_enable(&ctx->switch_lock, &lock_context);
+}
+
+static void reset_switch_events(test_context *ctx)
+{
+  SMP_lock_Context lock_context;
+
+  _SMP_lock_ISR_disable_and_acquire(&ctx->switch_lock, &lock_context);
+  ctx->switch_index = 0;
+  _SMP_lock_Release_and_ISR_enable(&ctx->switch_lock, &lock_context);
+}
+
+static size_t get_switch_events(test_context *ctx)
+{
+  SMP_lock_Context lock_context;
+  size_t events;
+
+  _SMP_lock_ISR_disable_and_acquire(&ctx->switch_lock, &lock_context);
+  events = ctx->switch_index;
+  _SMP_lock_Release_and_ISR_enable(&ctx->switch_lock, &lock_context);
+
+  return events;
+}
+
+static void print_switch_events(test_context *ctx)
+{
+  size_t n = get_switch_events(ctx);
+  size_t i;
+
+  for (i = 0; i < n; ++i) {
+switch_event *e = &ctx->switch_events[i];
+char ex[5];
+char hr[5];
+char hn[5];
+
+rtems_object_get_name(e->executing->Object.id, sizeof(ex), &ex[0]);
+rtems_object_get_name(e->heir->Object.id, sizeof(hr), &hr[0]);
+rtems_object_get_name(e->heir_node->Object.id, sizeof(hn), &hn[0]);
+
+printf(
+  "[%" PRIu32 "] %4s -> %4s (prio %3" PRIu32 ", node %4s)\n",
+  e->cpu_index,
+  &ex[0],
+  &hr[0],
+  e->heir_priority,
+  &hn[0]
+);
+  }
+}
+
 static void obtain_and_release_worker(rtems_task_argument arg)
 {
   test_context *ctx = &test_instance;
@@ -134,9 +237,8 @@ static void obtain_and_release_worker(rtems_task_argument arg)
   rtems_test_assert(0);
 }
 
-static void test_mrsp_obtain_and_release(void)
+static void test_mrsp_o

Re: Fwd: [PATCH] Add const qualifier in

2014-07-08 Thread Joel Sherrill

On 7/8/2014 9:59 AM, Sebastian Huber wrote:
> On 2014-07-07 23:27, Joel Sherrill wrote:
>> Hi Chris,
>>
>> I added this patch to rtems-tools. Attached is a patch to
>> RSB to use it for the sparc tools. If this looks OK, should
>> I make a similar update to all applicable 4.11 targets?
> Why don't we move to another Newlib snapshot which includes this patch?  
> Pavel 
> needs also the recent memchr() fix for its ARM tool chain.
>
The default version is 2.1.0 with a few patches. I don't mind if
the version gets bumped or if we add the memchr() patch.

But Chris is on holiday for a few more days. Whatever he wants
is OK with me.

There is value in moving to a CVS snapshot given that it is mid-year
but I don't know if that negatively impacts any odd targets.

-- 
Joel Sherrill, Ph.D.              Director of Research & Development
joel.sherr...@oarcorp.com         On-Line Applications Research
Ask me about RTEMS: a free RTOS   Huntsville AL 35805
Support Available                 (256) 722-9985



Re: [PATCH 1/2] score: Implement scheduler helping protocol

2014-07-08 Thread Sebastian Huber

Hello Gedare,

thanks for reviewing this huge patch.

On 07/08/2014 05:22 PM, Gedare Bloom wrote:

diff --git a/cpukit/score/include/rtems/score/thread.h b/cpukit/score/include/rtems/score/thread.h
>index a9a3f9f..4d758fb 100644
>--- a/cpukit/score/include/rtems/score/thread.h
>+++ b/cpukit/score/include/rtems/score/thread.h
>@@ -461,19 +461,72 @@ typedef struct {
>Thread_Control*terminator;
>  } Thread_Life_control;
>
>+#if defined(RTEMS_SMP)
>+/**
>+ * @brief The thread state with respect to the scheduler.
>+ */
>+typedef enum {
>+  /**
>+   * @brief This thread is blocked with respect to the scheduler.
>+   *
>+   * This thread uses no scheduler nodes.
>+   */
>+  THREAD_SCHEDULER_BLOCKED,
>+
>+  /**
>+   * @brief This thread is scheduled with respect to the scheduler.
>+   *
>+   * This thread executes using one of its scheduler nodes.  This could be its
>+   * own scheduler node or in case it owns resources taking part in the
>+   * scheduler helping protocol a scheduler node of another thread.
>+   */
>+  THREAD_SCHEDULER_SCHEDULED,
>+
>+  /**
>+   * @brief This thread is ready with respect to the scheduler.
>+   *
>+   * None of the scheduler nodes of this thread is scheduled.
>+   */
>+  THREAD_SCHEDULER_READY
>+} Thread_Scheduler_state;
>+#endif
>+
>  /**
>   * @brief Thread scheduler control.
>   */
>  typedef struct {
>  #if defined(RTEMS_SMP)
>/**
>+   * @brief The current scheduler state of this thread.
>+   */
>+  Thread_Scheduler_state state;

Should be named "State" for an embedded structure.


It's an enum, so it should be lower case.




>+
>+  /**
>+   * @brief The own scheduler control of this thread.
>+   *
>+   * This field is constant after initialization.
>+   */
>+  const struct Scheduler_Control *own_control;
>+
>+  /**
> * @brief The current scheduler control of this thread.
>+   *
>+   * The scheduler helping protocol may change this field.
> */
>const struct Scheduler_Control *control;

Perhaps current_control is better now that there are two of these pointers?


Hm, I would rather remove the "current" from the @brief.  This "current" 
is probably a bit misleading.





>+
>+  /**
>+   * @brief The own scheduler node of this thread.
>+   *
>+   * This field is constant after initialization.
>+   */
>+  struct Scheduler_Node *own_node;
>  #endif
>
>/**
> * @brief The current scheduler node of this thread.
>+   *
>+   * The scheduler helping protocol may change this field.

but only for SMP schedulers? There is no helping protocol for UP right?


Ok, good catch.  I will clarify that this helping protocol is only used
in SMP configurations.





> */
>struct Scheduler_Node *node;

Again current_node might be better now.


>
diff --git a/cpukit/score/include/rtems/score/threadimpl.h b/cpukit/score/include/rtems/score/threadimpl.h
>index 4971e9d..cb7d5fe 100644
>--- a/cpukit/score/include/rtems/score/threadimpl.h
>+++ b/cpukit/score/include/rtems/score/threadimpl.h
>@@ -828,6 +828,16 @@ RTEMS_INLINE_ROUTINE bool _Thread_Owns_resources(
>return owns_resources;
>  }
>
>+#if defined(RTEMS_SMP)
>+RTEMS_INLINE_ROUTINE Thread_Control *_Thread_Resource_node_to_thread(
>+  Resource_Node *node
>+)
>+{
>+  return (Thread_Control *)
>+( (char *) node - offsetof( Thread_Control, Resource_node ) );
>+}

We should include some generic container_of function in rtems instead
of reproducing it in multiple places.


In  we have:

/*
 * Given the pointer x to the member m of the struct s, return
 * a pointer to the containing structure.  When using GCC, we first
 * assign pointer x to a local variable, to check that its type is
 * compatible with member m.
 */
#if __GNUC_PREREQ__(3, 1)
#define __containerof(x, s, m) ({ \
  const volatile __typeof__(((s *)0)->m) *__x = (x); \
  __DEQUALIFY(s *, (const volatile char *)__x - __offsetof(s, m)); \
})
#else
#define __containerof(x, s, m) \
  __DEQUALIFY(s *, (const volatile char *)(x) - __offsetof(s, m))
#endif

What about adding a similar

 _Container_of()

or

rtems_container_of()

to ?




>+#endif
>+
>  RTEMS_INLINE_ROUTINE void _Thread_Debug_set_real_processor(
>Thread_Control  *the_thread,
>Per_CPU_Control *cpu
diff --git a/cpukit/score/src/schedulerchangeroot.c b/cpukit/score/src/schedulerchangeroot.c
>new file mode 100644
>index 000..bdb7b30
>--- /dev/null
>+++ b/cpukit/score/src/schedulerchangeroot.c
>@@ -0,0 +1,85 @@
>+/*
>+ * Copyright (c) 2014 embedded brains GmbH.  All rights reserved.
>+ *
>+ *  embedded brains GmbH
>+ *  Dornierstr. 4
>+ *  82178 Puchheim
>+ *  Germany
>+ *
>+ *
>+ * The license and distribution terms for this file may be
>+ * found in the file LICENSE in this distribution or at
>+ *http://www.rtems.org/license/LICENSE.
>+ */
>+
>+#if HAVE_CONFIG_H
>+  #include "config.h"
>+#endif
>+
>+#include 
>+
>+typedef struct {
>+  Thread_Control *root;
>+  Thread_Control *needs_help;
>+} Scheduler_Set_root_context

Re: [PATCH 1/2] score: Implement scheduler helping protocol

2014-07-08 Thread Gedare Bloom
On Tue, Jul 8, 2014 at 2:20 PM, Sebastian Huber
 wrote:
>>> >
>>> >diff --git a/cpukit/score/include/rtems/score/threadimpl.h b/cpukit/score/include/rtems/score/threadimpl.h
>>> >index 4971e9d..cb7d5fe 100644
>>> >--- a/cpukit/score/include/rtems/score/threadimpl.h
>>> >+++ b/cpukit/score/include/rtems/score/threadimpl.h
>>> >@@ -828,6 +828,16 @@ RTEMS_INLINE_ROUTINE bool _Thread_Owns_resources(
>>> >return owns_resources;
>>> >  }
>>> >
>>> >+#if defined(RTEMS_SMP)
>>> >+RTEMS_INLINE_ROUTINE Thread_Control *_Thread_Resource_node_to_thread(
>>> >+  Resource_Node *node
>>> >+)
>>> >+{
>>> >+  return (Thread_Control *)
>>> >+( (char *) node - offsetof( Thread_Control, Resource_node ) );
>>> >+}
>>
>> We should include some generic container_of function in rtems instead
>> of reproducing it multiple places.
>
>
> In  we have:
>
> /*
>  * Given the pointer x to the member m of the struct s, return
>  * a pointer to the containing structure.  When using GCC, we first
>  * assign pointer x to a local variable, to check that its type is
>  * compatible with member m.
>  */
> #if __GNUC_PREREQ__(3, 1)
> #define __containerof(x, s, m) ({ \
> const volatile __typeof__(((s *)0)->m) *__x = (x); \
> __DEQUALIFY(s *, (const volatile char *)__x - __offsetof(s, m)); \
> })
> #else
> #define __containerof(x, s, m) \
> __DEQUALIFY(s *, (const volatile char *)(x) - __offsetof(s, m))
> #endif
>
> What about adding a similar
>
>  _Container_of()
>
> or
>
> rtems_container_of()
>
> to ?
>
Probably it should be _Container_of() for supercore visible code.
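A rough sketch of what such a helper could look like, and how the open-coded
cast from the patch would then read.  The macro name, its header location, and
the omission of the GCC type check are all assumptions at this point:

  #include <stddef.h>

  /* Hypothetical _Container_of(): a stripped-down variant of the
   * __containerof() quoted above, without the GCC type check. */
  #define _Container_of( _node, _type, _member ) \
    ( (_type *) ( (char *) (_node) - offsetof( _type, _member ) ) )

  /* The cast in _Thread_Resource_node_to_thread() would then become
   * (assuming the usual <rtems/score/threadimpl.h> context): */
  RTEMS_INLINE_ROUTINE Thread_Control *_Thread_Resource_node_to_thread(
    Resource_Node *node
  )
  {
    return _Container_of( node, Thread_Control, Resource_node );
  }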

>>> >+
>>> >+  _Resource_Node_set_root( resource_node, &root->Resource_node );
>>> >+
>>> >+  needs_also_help = ( *scheduler->Operations.ask_for_help )(
>>> >+scheduler,
>>> >+offers_help,
>>> >+needs_help
>>> >+  );
>>> >+
>>> >+  if ( needs_also_help != needs_help && needs_also_help != NULL ) {
>>> >+_Assert( ctx->needs_help == NULL );
>>> >+ctx->needs_help = needs_also_help;
>>> >+  }
>>> >+
>>> >+  return false;
>>> >+}
>>> >+
>>> >+void _Scheduler_Thread_change_resource_root(
>>> >+  Thread_Control *top,
>>> >+  Thread_Control *root
>>> >+)
>>> >+{
>>> >+  Scheduler_Set_root_context ctx = { root, NULL };
>>> >+  Thread_Control *offers_help = top;
>>> >+  Scheduler_Node *offers_help_node;
>>> >+  Thread_Control *offers_also_help;
>>> >+  ISR_Level level;
>>> >+
>>> >+  _ISR_Disable( level );
>>> >+
>>> >+  offers_help_node = _Scheduler_Thread_get_node( offers_help );
>>> >+  offers_also_help = _Scheduler_Node_get_owner( offers_help_node );
>>> >+
>>> >+  if ( offers_help != offers_also_help ) {
>>> >+_Scheduler_Set_root_visitor( &offers_also_help->Resource_node, &ctx
>>> > );
>>> >+_Assert( ctx.needs_help == offers_help );
>>> >+ctx.needs_help = NULL;
>>> >+  }
>>> >+
>>> >+  _Scheduler_Set_root_visitor( &top->Resource_node, &ctx );
>>> >+  _Resource_Iterate( &top->Resource_node, _Scheduler_Set_root_visitor,
>>> > &ctx );
>>> >+
>>
>> Does this iterate() with disabled interrupts have bad implications for
>> schedulability / worst-case latency?
>>
>
> Yes, the worst-case latency depends now on the resource tree size. I don't
> think its easy to avoid this.  You have at least the following options.
>
> 1. Use one lock or a hierarchy of locks to freeze the tree state and thus
> enable a safe iteration.
>
> 2. Partially lock the tree and somehow provide safe iteration. How?  Is this
> possible with fine grained locking at all?
>
> 3. Organize the tree so that the interesting elements are the min/max nodes.
> I don't know how this can be done.  Each scheduler state change may result
> in updates of all resource trees in the system.
>
> This implementation was done with fine grained locking in mind.   So I did
> choose 1.  We can use the tree to get a partial order of per-resource locks
> necessary to avoid deadlocks.
>
Ok, this is a tricky problem, and it should definitely be documented.
I don't have a good idea right now about how the resource tree grows.
Perhaps the size of the tree is bounded such that the cost isn't too
bad.

-Gedare


Re: [PATCH 1/2] score: Implement scheduler helping protocol

2014-07-08 Thread Joel Sherrill

On 7/8/2014 10:05 AM, Sebastian Huber wrote:

I reviewed the code and didn't spot anything obvious.

Does the Doxygen have any warnings or do we need to wait
to see the automated build?

I am sure once the SMP dust settles, we will have to work
on coverage with SMP enabled. 
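For context, the helping protocol documented in the quoted patch below is what
backs MrsP semaphores at the Classic API level.  A minimal usage sketch,
assuming the RTEMS_MULTIPROCESSOR_RESOURCE_SHARING attribute and
rtems_semaphore_set_priority() as described in the RTEMS SMP documentation
(verify the names against your tree):

  #include <rtems.h>

  /* Sketch only: create an MrsP semaphore.  The ceiling argument applies
   * to the scheduler instance of the creating task; other scheduler
   * instances would get their ceilings via rtems_semaphore_set_priority(). */
  rtems_id create_mrsp_semaphore( rtems_task_priority ceiling )
  {
    rtems_id          id = 0;
    rtems_status_code sc;

    sc = rtems_semaphore_create(
      rtems_build_name( 'M', 'R', 'S', 'P' ),
      1,
      RTEMS_BINARY_SEMAPHORE | RTEMS_PRIORITY
        | RTEMS_MULTIPROCESSOR_RESOURCE_SHARING,
      ceiling,
      &id
    );

    return sc == RTEMS_SUCCESSFUL ? id : 0;
  }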

> diff --git a/doc/user/smp.t b/doc/user/smp.t
> index dd84c37..239a544 100644
> --- a/doc/user/smp.t
> +++ b/doc/user/smp.t
> @@ -147,6 +147,79 @@ another processor.  So if we enable interrupts during 
> this transition we have
>  to provide an alternative task independent stack for this time frame.  This
>  issue needs further investigation.
>
> +@subsection Scheduler Helping Protocol
> +
> +The scheduler provides a helping protocol to support locking protocols like
> +@cite{Migratory Priority Inheritance} or the @cite{Multiprocessor Resource
> +Sharing Protocol}.  Each ready task can use at least one scheduler node at a
> +time to gain access to a processor.  Each scheduler node has an owner, a user
> +and an optional idle task.  The owner of a scheduler node is determined a 
> task
a -> at
> +creation and never changes during the life time of a scheduler node.  The 
> user
> +of a scheduler node may change due to the scheduler helping protocol.  A
> +scheduler node is in one of the four scheduler help states:
> +
> +@table @dfn
> +
> +@item help yourself
> +
> +This scheduler node is solely used by the owner task.  This task owns no
> +resources using a helping protocol and thus does not take part in the 
> scheduler
> +helping protocol.  No help will be provided for other tasks.
> +
> +@item help active owner
> +
> +This scheduler node is owned by a task actively owning a resource and can be
> +used to help out tasks.
> +
> +In case this scheduler node changes its state from ready to scheduled and the
> +task executes using another node, then an idle task will be provided as a 
> user
> +of this node to temporarily execute on behalf of the owner task.  Thus lower
> +priority tasks are denied access to the processors of this scheduler 
> instance.
> +
> +In case a task actively owning a resource performs a blocking operation, then
> +an idle task will be used also in case this node is in the scheduled state.
> +
> +@item help active rival
> +
> +This scheduler node is owned by a task actively obtaining a resource 
> currently
> +owned by another task and can be used to help out tasks.
help out other(?) tasks which...?
> +The task owning this node is ready and will give away its processor in case 
> the
> +task owning the resource asks for help.
> +
> +@item help passive
> +
> +This scheduler node is owned by a task obtaining a resource currently owned 
> by
> +another task and can be used to help out tasks.
> +
> +The task owning this node is blocked.
> +
> +@end table
> +
> +The following scheduler operations return a task in need for help
> +
> +@itemize @bullet
> +@item unblock,
> +@item change priority,
> +@item yield, and
> +@item ask for help.
> +@end itemize
> +
> +A task in need for help is a task that encounters a scheduler state change 
> from
> +scheduled to ready or a task that cannot be scheduled in an unblock 
> operation.
> +Such a task can ask tasks which depend on resources owned by this task for
> +help.
> +
> +In case it is not possible to schedule a task in need for help, then
> +the corresponding scheduler node will be placed into the set of ready
> +scheduler nodes of the scheduler instance.  Once a state change from
> +ready to scheduled happens for this scheduler node it may be used to
> +schedule the task in need for help.
> +
> +The ask for help scheduler operation is used to help tasks in need for help
May want to use @code{ask for help} where appropriate to clearly
indicate a method.
> +returned by the operations mentioned above.  This operation is also used in
> +case the root of a resource sub-tree owned by a task changes.
> +
>  @subsection Critical Section Techniques and SMP
>
>  As discussed earlier, SMP systems have opportunities for true parallelism
> --
> 1.7.7
>

-- 
Joel Sherrill, Ph.D.              Director of Research & Development
joel.sherr...@oarcorp.com         On-Line Applications Research
Ask me about RTEMS: a free RTOS   Huntsville AL 35805
Support Available                 (256) 722-9985



Re: [PATCH 1/2] score: Implement scheduler helping protocol

2014-07-08 Thread Joel Sherrill

On 7/8/2014 1:28 PM, Gedare Bloom wrote:
> On Tue, Jul 8, 2014 at 2:20 PM, Sebastian Huber
>  wrote:
> diff --git a/cpukit/score/include/rtems/score/threadimpl.h b/cpukit/score/include/rtems/score/threadimpl.h
> index 4971e9d..cb7d5fe 100644
> --- a/cpukit/score/include/rtems/score/threadimpl.h
> +++ b/cpukit/score/include/rtems/score/threadimpl.h
> @@ -828,6 +828,16 @@ RTEMS_INLINE_ROUTINE bool _Thread_Owns_resources(
>return owns_resources;
>  }
>
> +#if defined(RTEMS_SMP)
> +RTEMS_INLINE_ROUTINE Thread_Control *_Thread_Resource_node_to_thread(
> +  Resource_Node *node
> +)
> +{
> +  return (Thread_Control *)
> +( (char *) node - offsetof( Thread_Control, Resource_node ) );
> +}
>>> We should include some generic container_of function in rtems instead
>>> of reproducing it multiple places.
>>
>> In  we have:
>>
>> /*
>>  * Given the pointer x to the member m of the struct s, return
>>  * a pointer to the containing structure.  When using GCC, we first
>>  * assign pointer x to a local variable, to check that its type is
>>  * compatible with member m.
>>  */
>> #if __GNUC_PREREQ__(3, 1)
>> #define __containerof(x, s, m) ({ \
>> const volatile __typeof__(((s *)0)->m) *__x = (x); \
>> __DEQUALIFY(s *, (const volatile char *)__x - __offsetof(s, m)); \
>> })
>> #else
>> #define __containerof(x, s, m) \
>> __DEQUALIFY(s *, (const volatile char *)(x) - __offsetof(s, m))
>> #endif
>>
>> What about adding a similar
>>
>>  _Container_of()
>>
>> or
>>
>> rtems_container_of()
>>
>> to ?
>>
> Probably it should be _Container_of() for supercore visible code.
>
> +
> +  _Resource_Node_set_root( resource_node, &root->Resource_node );
> +
> +  needs_also_help = ( *scheduler->Operations.ask_for_help )(
> +scheduler,
> +offers_help,
> +needs_help
> +  );
> +
> +  if ( needs_also_help != needs_help && needs_also_help != NULL ) {
> +_Assert( ctx->needs_help == NULL );
> +ctx->needs_help = needs_also_help;
> +  }
> +
> +  return false;
> +}
> +
> +void _Scheduler_Thread_change_resource_root(
> +  Thread_Control *top,
> +  Thread_Control *root
> +)
> +{
> +  Scheduler_Set_root_context ctx = { root, NULL };
> +  Thread_Control *offers_help = top;
> +  Scheduler_Node *offers_help_node;
> +  Thread_Control *offers_also_help;
> +  ISR_Level level;
> +
> +  _ISR_Disable( level );
> +
> +  offers_help_node = _Scheduler_Thread_get_node( offers_help );
> +  offers_also_help = _Scheduler_Node_get_owner( offers_help_node );
> +
> +  if ( offers_help != offers_also_help ) {
> +_Scheduler_Set_root_visitor( &offers_also_help->Resource_node, &ctx
> );
> +_Assert( ctx.needs_help == offers_help );
> +ctx.needs_help = NULL;
> +  }
> +
> +  _Scheduler_Set_root_visitor( &top->Resource_node, &ctx );
> +  _Resource_Iterate( &top->Resource_node, _Scheduler_Set_root_visitor,
> &ctx );
> +
>>> Does this iterate() with disabled interrupts have bad implications for
>>> schedulability / worst-case latency?
>>>
>> Yes, the worst-case latency depends now on the resource tree size. I don't
>> think its easy to avoid this.  You have at least the following options.
>>
>> 1. Use one lock or a hierarchy of locks to freeze the tree state and thus
>> enable a safe iteration.
>>
>> 2. Partially lock the tree and somehow provide safe iteration. How?  Is this
>> possible with fine grained locking at all?
>>
>> 3. Organize the tree so that the interesting elements are the min/max nodes.
>> I don't know how this can be done.  Each scheduler state change may result
>> in updates of all resource trees in the system.
>>
>> This implementation was done with fine grained locking in mind.   So I did
>> choose 1.  We can use the tree to get a partial order of per-resource locks
>> necessary to avoid deadlocks.
>>
> Ok, this is a tricky problem, and it should definitely be documented.
> I don't have a good idea right now about how the resource tree grows.
> Perhaps the size of the tree is bounded such that the cost isn't too
> bad.
Documented for sure but "I don't have a good idea right now..."
is a good thing for us to remember.

I have thought a lot in the background about how much effort we
should put into optimizing for the theoretical cases in SMP.  We
know on paper that a lot of things are potentially bad like coarse
grained locking, O(XXX) where XXX is non-constant, etc.

But in the back of my head, a little voice keeps saying avoid optimizing
too early. Let's not make stupid decisions when a good one is possible
but optimizing too early is an easy trap to step into. 

I wrote a lot more but decided to delete it. The short version is that
the use of SMP is going to be new to RTEMS users and other traditional
single addre

Adding capture support to score

2014-07-08 Thread Jennifer Averett
The attached patches are a starting point for discussions about adding
capture support to core objects.  We started to write notes based on our
discussions, but the text was harder to follow than just writing some code
and commenting on it.  It is hard to see the impact of "word changes"
versus real changes.

The initial cut has one method call per Score Handler with an enumerated
parameter indicating points of interest.  Most take a thread pointer and
an object id as arguments, which can be interpreted based on the event
indicated by the enumerated first argument.

Some of the items we would like to discuss:

  1) Should the number of methods be expanded to one method per point of
  interest?

  Having one method per "event" or doing it this way is a no-win choice.
  The callout table will grow larger with more entries if we go to one
  function per event, and more methods will be in the capture engine.  If
  we stick with the current way, there is a smaller callout table and
  fewer capture methods, but the capture methods will likely need some
  decode logic.

  2) This design focuses on pass paths for points of interest and ignores
  failure paths.  Is this the direction we wish to follow?

  There are potentially LOTS of failure points, and the errors will be at
  the supercore level.  Worse, a "try lock" which doesn't get the mutex
  could end up in a busy loop swamping the capture engine.

  3) We think the callout table should be completely populated.  The table
  is either full of methods or not; it is an error not to install a handler
  for every event.  This simplifies checks at run-time.

  4) We discussed having the Score own a table with stubs so the calls are
  always present, but that could negatively impact the minimum footprint.
  This only adds a single pointer and the code to make the call.  This
  point is arguable if we stick to the small number of handlers approach.
  Eliminating the "if table installed" check and always making the
  subroutine call to a stub MIGHT be better, but we thought avoiding a
  subroutine call in critical paths was best.

So this is basically some code using one approach for discussion purposes.
It is only known to compile, and this may not be enough for you to compile
it. :)

We want to discuss the design approach and see where to take this.
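To make the shape of the proposed interface concrete, here is a minimal sketch
of a capture callout matching the CORE_capture_Object_event_f typedef from the
attached patch; the registration call at the end is hypothetical and only
illustrates item 3 (a fully populated callout table, so the capture points
need no NULL checks):

  #include <rtems/score/corecapture.h>  /* from the attached patch */

  /* Sketch of a user-provided callout for core object events.  The
   * per-event decode logic is exactly the trade-off raised in item 1. */
  static void my_object_event_handler(
    CORE_capture_Core_object_event  event,
    Objects_Id                      id,
    Thread_Control                 *thread
  )
  {
    switch ( event ) {
      case CORE_CAPTURE_OBJECT_ALLOCATE:
        /* record: thread allocated object id */
        break;
      case CORE_CAPTURE_OBJECT_FREE:
        /* record: thread freed object id */
        break;
    }
    (void) id;
    (void) thread;
  }

  /* Hypothetical registration -- not part of the attached patch:
   *   _CORE_capture_Set_object_handler( my_object_event_handler );
   */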

Jennifer Averett
On-Line Applications Research
From 5714fc8121c2c1b7ce821a9f0a3e24f844aba2fa Mon Sep 17 00:00:00 2001
From: Jennifer Averett 
Date: Wed, 2 Jul 2014 08:28:27 -0500
Subject: [PATCH 1/7] score: core capture initial start point.

---
 cpukit/score/include/rtems/score/corecapture.h | 190 +
 cpukit/score/include/rtems/score/corecaptureimpl.h |  57 +++
 cpukit/score/src/corecapture.c |  54 ++
 3 files changed, 301 insertions(+)
 create mode 100644 cpukit/score/include/rtems/score/corecapture.h
 create mode 100644 cpukit/score/include/rtems/score/corecaptureimpl.h
 create mode 100644 cpukit/score/src/corecapture.c

diff --git a/cpukit/score/include/rtems/score/corecapture.h b/cpukit/score/include/rtems/score/corecapture.h
new file mode 100644
index 000..be3eb8b
--- /dev/null
+++ b/cpukit/score/include/rtems/score/corecapture.h
@@ -0,0 +1,190 @@
+/**
+ *  @file  rtems/score/corecapture.h
+ *
+ *  @brief Data Associated with the Core Capture options
+ *
+ */
+
+/*
+ *  COPYRIGHT (c) 2014.
+ *  On-Line Applications Research Corporation (OAR).
+ *
+ *  The license and distribution terms for this file may be
+ *  found in the file LICENSE in this distribution or at
+ *  http://www.rtems.org/license/LICENSE.
+ */
+
+#ifndef _RTEMS_SCORE_CORECAPTURE_H
+#define _RTEMS_SCORE_CORECAPTURE_H
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include 
+
+/**
+ *  @defgroup ScoreCoreCapture Supercore Capture Handler
+ *
+ *  @ingroup Score
+ *
+ *  This handler encapsulates functionality which provides the foundation
+ *  for capture of core information to be used in debugging.
+ */
+/**@{*/
+
+
+/**
+ * List of core object handler events of interest
+ */
+typedef enum {
+  /**
+   * Indicates when an object is allocated (e.g. created).
+   * When the capture method is invoked,  
+   * 
+   *   @a id will indicate the object allocated
+   *   @a thread will be the thread executing
+   */
+  CORE_CAPTURE_OBJECT_ALLOCATE,
+
+  /**
+   * Indicates when an object is freed (e.g. deleted).
+   * When the capture method is invoked,  
+   * 
+   *   @a id will indicate the object to be freed
+   *   @a thread will be the thread executing
+   */
+  CORE_CAPTURE_OBJECT_FREE
+} CORE_capture_Core_object_event;
+
+/**
+ * Method invoked by Core Object Handler on events
+ * of interest.
+ */
+typedef void (*CORE_capture_Object_event_f)(
+  CORE_capture_Core_object_event  event,
+  Objects_Id  id,
+  Thread_Control *thread
+);
+
+/**
+ * List of core object handler events of interest
+ */
+typedef enum {
+
+  /**
+   * Indicates when a mutex is acquired
+   * When 

[PATCH 1/6] semdelete.c: Correct spacing

2014-07-08 Thread Joel Sherrill
---
 cpukit/rtems/src/semdelete.c |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/cpukit/rtems/src/semdelete.c b/cpukit/rtems/src/semdelete.c
index 52bb14e..a805ac6 100644
--- a/cpukit/rtems/src/semdelete.c
+++ b/cpukit/rtems/src/semdelete.c
@@ -82,7 +82,7 @@ rtems_status_code rtems_semaphore_delete(
   SEMAPHORE_MP_OBJECT_WAS_DELETED,
   CORE_SEMAPHORE_WAS_DELETED
 );
- }
+  }
 
   _Objects_Close( &_Semaphore_Information, &the_semaphore->Object );
 
-- 
1.7.1



[PATCH 6/6] Thread Queue: Merge discipline subroutines into main methods

2014-07-08 Thread Joel Sherrill
There was a lot of duplication between the discipline subroutines.
With the transition to RBTrees for the priority discipline, only a
few lines of source code manipulate the data structure for FIFO and
priority, so it made sense to fold these back into the main methods.

As part of doing this, all of the tests for discipline were changed
to be in the same order.
---
 cpukit/score/Makefile.am   |   10 +-
 cpukit/score/include/rtems/score/threadq.h |6 +-
 cpukit/score/include/rtems/score/threadqimpl.h |  175 +++
 cpukit/score/src/threadchangepriority.c|   49 +---
 cpukit/score/src/threadq.c |7 +-
 cpukit/score/src/threadqdequeue.c  |   72 --
 cpukit/score/src/threadqdequeuefifo.c  |   61 
 cpukit/score/src/threadqdequeuepriority.c  |   67 -
 cpukit/score/src/threadqenqueue.c  |   38 --
 cpukit/score/src/threadqenqueuefifo.c  |   59 
 cpukit/score/src/threadqenqueuepriority.c  |   55 
 cpukit/score/src/threadqextract.c  |   48 ++--
 cpukit/score/src/threadqextractfifo.c  |   61 
 cpukit/score/src/threadqextractpriority.c  |   73 --
 cpukit/score/src/threadqfirst.c|   29 +++-
 cpukit/score/src/threadqfirstfifo.c|   32 -
 cpukit/score/src/threadqfirstpriority.c|   34 -
 cpukit/score/src/threadqrequeue.c  |   62 +
 18 files changed, 238 insertions(+), 700 deletions(-)
 delete mode 100644 cpukit/score/src/threadqdequeuefifo.c
 delete mode 100644 cpukit/score/src/threadqdequeuepriority.c
 delete mode 100644 cpukit/score/src/threadqenqueuefifo.c
 delete mode 100644 cpukit/score/src/threadqenqueuepriority.c
 delete mode 100644 cpukit/score/src/threadqextractfifo.c
 delete mode 100644 cpukit/score/src/threadqextractpriority.c
 delete mode 100644 cpukit/score/src/threadqfirstfifo.c
 delete mode 100644 cpukit/score/src/threadqfirstpriority.c
 create mode 100644 cpukit/score/src/threadqrequeue.c

diff --git a/cpukit/score/Makefile.am b/cpukit/score/Makefile.am
index 6caefb5..353d5e9 100644
--- a/cpukit/score/Makefile.am
+++ b/cpukit/score/Makefile.am
@@ -296,13 +296,9 @@ endif
 
 ## THREADQ_C_FILES
 libscore_a_SOURCES += src/threadq.c src/threadqdequeue.c \
-src/threadqdequeuefifo.c src/threadqdequeuepriority.c \
-src/threadqenqueue.c src/threadqenqueuefifo.c \
-src/threadqenqueuepriority.c src/threadqextract.c \
-src/threadqextractfifo.c src/threadqextractpriority.c \
-src/threadqextractwithproxy.c src/threadqfirst.c src/threadqfirstfifo.c \
-src/threadqfirstpriority.c src/threadqflush.c \
-src/threadqprocesstimeout.c src/threadqtimeout.c
+src/threadqenqueue.c src/threadqextract.c src/threadqrequeue.c \
+src/threadqextractwithproxy.c src/threadqfirst.c \
+src/threadqflush.c src/threadqprocesstimeout.c src/threadqtimeout.c
 
 ## TIMESPEC_C_FILES
 libscore_a_SOURCES += src/timespecaddto.c src/timespecfromticks.c \
diff --git a/cpukit/score/include/rtems/score/threadq.h b/cpukit/score/include/rtems/score/threadq.h
index 35e0f1d..6dcdf41 100644
--- a/cpukit/score/include/rtems/score/threadq.h
+++ b/cpukit/score/include/rtems/score/threadq.h
@@ -33,9 +33,9 @@ extern "C" {
  *
  *  @ingroup Score
  *
- *  This handler defines the data shared between the thread and thread
- *  queue handlers.  Having this handler define these data structure
- *  avoids potentially circular references.
+ *  This handler provides the capability to have threads block in
+ *  ordered sets. The sets may be ordered using the FIFO or priority
+ *  discipline.
  */
 /**@{*/
 
diff --git a/cpukit/score/include/rtems/score/threadqimpl.h b/cpukit/score/include/rtems/score/threadqimpl.h
index 4e5a498..de98077 100644
--- a/cpukit/score/include/rtems/score/threadqimpl.h
+++ b/cpukit/score/include/rtems/score/threadqimpl.h
@@ -60,6 +60,9 @@ typedef void ( *Thread_queue_Timeout_callout )(
  *  the_thread_queue.  The selection of this thread is based on
  *  the discipline of the_thread_queue.  If no threads are waiting
  *  on the_thread_queue, then NULL is returned.
+ *
+ *  - INTERRUPT LATENCY:
+ *+ single case
  */
 Thread_Control *_Thread_queue_Dequeue(
   Thread_queue_Control *the_thread_queue
@@ -123,6 +126,9 @@ void _Thread_queue_Extract(
  *  @param[in] the_thread is the pointer to a thread control block that
  *  is to be removed
  *  @param[in] return_code specifies the status to be returned.
+ *
+ *  - INTERRUPT LATENCY:
+ *+ single case
  */
 void _Thread_queue_Extract_with_return_code(
   Thread_queue_Control *the_thread_queue,
@@ -194,158 +200,6 @@ void _Thread_queue_Initialize(
 );
 
 /**
- *  @brief Removes a thread from the specified PRIORITY based
- *  threadq, unblocks it, and cancels its timeout timer.
- *
- *  This routine removes a thread from the specified PRIORITY based
- *  threadq, unblocks 

[PATCH 5/6] Thread Queue Priority Discipline Reimplemented with RBTree

2014-07-08 Thread Joel Sherrill
---
 cpukit/configure.ac|6 -
 cpukit/score/include/rtems/score/thread.h  |4 +
 cpukit/score/include/rtems/score/threadq.h |   23 +---
 cpukit/score/include/rtems/score/threadqimpl.h |   61 +++---
 cpukit/score/src/threadq.c |   28 -
 cpukit/score/src/threadqdequeuepriority.c  |   68 ++-
 cpukit/score/src/threadqenqueuepriority.c  |  152 ++-
 cpukit/score/src/threadqextractpriority.c  |   48 +---
 cpukit/score/src/threadqfirstpriority.c|   21 +---
 9 files changed, 87 insertions(+), 324 deletions(-)

diff --git a/cpukit/configure.ac b/cpukit/configure.ac
index e87aa4a..19e5b81 100644
--- a/cpukit/configure.ac
+++ b/cpukit/configure.ac
@@ -248,12 +248,6 @@ RTEMS_CPUOPT([__RTEMS_DO_NOT_INLINE_CORE_MUTEX_SEIZE__],
   [1],
   [disable inlining _Thread_Enable_dispatch])
 
-## This improves both the size and coverage analysis.
-RTEMS_CPUOPT([__RTEMS_DO_NOT_UNROLL_THREADQ_ENQUEUE_PRIORITY__],
-  [test x"${RTEMS_DO_NOT_UNROLL_THREADQ_ENQUEUE_PRIORITY}" = x"1"],
-  [1],
-  [disable inlining _Thread_queue_Enqueue_priority])
-
 ## This gives the same behavior as 4.8 and older
 RTEMS_CPUOPT([__RTEMS_STRICT_ORDER_MUTEX__],
   [test x"${ENABLE_STRICT_ORDER_MUTEX}" = x"1"],
diff --git a/cpukit/score/include/rtems/score/thread.h b/cpukit/score/include/rtems/score/thread.h
index 28844c3..20fcddf 100644
--- a/cpukit/score/include/rtems/score/thread.h
+++ b/cpukit/score/include/rtems/score/thread.h
@@ -304,6 +304,8 @@ typedef struct {
 typedef struct {
   /** This field is the object management structure for each proxy. */
   Objects_Control  Object;
+  /** This field is used to enqueue the thread on RBTrees. */
+  RBTree_Node  RBNode;
   /** This field is the current execution state of this proxy. */
   States_Control   current_state;
   /** This field is the current priority state of this proxy. */
@@ -483,6 +485,8 @@ typedef struct {
 struct Thread_Control_struct {
   /** This field is the object management structure for each thread. */
   Objects_Control  Object;
+  /** This field is used to enqueue the thread on RBTrees. */
+  RBTree_Node  RBNode;
   /** This field is the current execution state of this thread. */
   States_Control   current_state;
   /** This field is the current priority state of this thread. */
diff --git a/cpukit/score/include/rtems/score/threadq.h b/cpukit/score/include/rtems/score/threadq.h
index c8f2aa4..35e0f1d 100644
--- a/cpukit/score/include/rtems/score/threadq.h
+++ b/cpukit/score/include/rtems/score/threadq.h
@@ -8,7 +8,7 @@
  */
 
 /*
- *  COPYRIGHT (c) 1989-2008.
+ *  COPYRIGHT (c) 1989-2014.
  *  On-Line Applications Research Corporation (OAR).
  *
  *  The license and distribution terms for this file may be
@@ -22,6 +22,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #ifdef __cplusplus
 extern "C" {
@@ -48,22 +49,6 @@ typedef enum {
 }   Thread_queue_Disciplines;
 
 /**
- *  This is one of the constants used to manage the priority queues.
- *
- *  There are four chains used to maintain a priority -- each chain
- *  manages a distinct set of task priorities.  The number of chains
- *  is determined by TASK_QUEUE_DATA_NUMBER_OF_PRIORITY_HEADERS.
- *  The following set must be consistent.
- *
- *  The set below configures 4 headers -- each contains 64 priorities.
- *  Header x manages priority range (x*64) through ((x*64)+63).  If
- *  the priority is more than half way through the priority range it
- *  is in, then the search is performed from the rear of the chain.
- *  This halves the search time to find the insertion point.
- */
-#define TASK_QUEUE_DATA_NUMBER_OF_PRIORITY_HEADERS 4
-
-/**
  *  This is the structure used to manage sets of tasks which are blocked
  *  waiting to acquire a resource.
  */
@@ -74,8 +59,8 @@ typedef struct {
   union {
 /** This is the FIFO discipline list. */
 Chain_Control Fifo;
-/** This is the set of lists for priority discipline waiting. */
-Chain_Control Priority[TASK_QUEUE_DATA_NUMBER_OF_PRIORITY_HEADERS];
+/** This is the set of threads for priority discipline waiting. */
+RBTree_Control Priority;
   } Queues;
   /** This field is used to manage the critical section. */
   Thread_blocking_operation_States sync_state;
diff --git a/cpukit/score/include/rtems/score/threadqimpl.h b/cpukit/score/include/rtems/score/threadqimpl.h
index 91f0938..4e5a498 100644
--- a/cpukit/score/include/rtems/score/threadqimpl.h
+++ b/cpukit/score/include/rtems/score/threadqimpl.h
@@ -8,7 +8,7 @@
  */
 
 /*
- *  COPYRIGHT (c) 1989-2009.
+ *  COPYRIGHT (c) 1989-2014.
  *  On-Line Applications Research Corporation (OAR).
  *
  *  The license and distribution terms for this file may be
@@ -37,18 +37,6 @@ extern "C" {
 #define THREAD_QUEUE_WAIT_FOREVER  WATCHDOG_NO_TIMEOUT
 
 /**
- *  This is one of the constants used to manage the priority queues.
- *  @ref TASK_QUEUE_DA

[PATCH 3/6] sp59: Fix typos

2014-07-08 Thread Joel Sherrill
---
 testsuites/sptests/sp59/init.c   |4 ++--
 testsuites/sptests/sp59/sp59.scn |2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/testsuites/sptests/sp59/init.c b/testsuites/sptests/sp59/init.c
index 2e68b9f..4120761 100644
--- a/testsuites/sptests/sp59/init.c
+++ b/testsuites/sptests/sp59/init.c
@@ -94,7 +94,7 @@ rtems_task Init(
   rtems_region_get_segment(
 Region,
 ALLOC_SIZE,
-RTEMS_PRIORITY,
+RTEMS_DEFAULT_OPTIONS,
 RTEMS_NO_TIMEOUT,
 &address_1
   );
@@ -104,7 +104,7 @@ rtems_task Init(
   status = rtems_task_wake_after( RTEMS_MILLISECONDS_TO_TICKS(1000) );
   directive_failed( status, "rtems_task_wake_after" );
 
-  puts( "Init - rtems_region_get_segment - return segment" );
+  puts( "Init - rtems_region_return_segment - return segment" );
   status = rtems_region_return_segment( Region, address_1 );
   directive_failed( status, "rtems_region_return_segment" );
 
diff --git a/testsuites/sptests/sp59/sp59.scn b/testsuites/sptests/sp59/sp59.scn
index 3c539db..2ec32d7 100644
--- a/testsuites/sptests/sp59/sp59.scn
+++ b/testsuites/sptests/sp59/sp59.scn
@@ -6,7 +6,7 @@ Init - rtems_region_create - OK
 Init - rtems_region_get_segment - get segment to consume memory
 Init - rtems_task_wake_after - let other task block - OK
 Blocking_task - wait for memory
-Init - rtems_region_get_segment - return segment
+Init - rtems_region_return_segment - return segment
 Init - rtems_task_wake_after - let other task run again - OK
 Blocking_task - Got memory segment after freed
 Blocking_task - delete self
-- 
1.7.1



[PATCH 4/6] spintrcritical20: Fix incorrect assumption

2014-07-08 Thread Joel Sherrill
The test assumed that the thread would have enough time to block
and become enqueued. In fact, the thread could still be in the
ready state and not blocked on the semaphore, so the Wait
sub-structure in the TCB would not be in the expected state.
The simple solution was to continue when the thread is still
in the ready state.
---
 testsuites/sptests/spintrcritical20/init.c |4 
 1 files changed, 4 insertions(+), 0 deletions(-)

diff --git a/testsuites/sptests/spintrcritical20/init.c b/testsuites/sptests/spintrcritical20/init.c
index cae8fdb..209c9e5 100644
--- a/testsuites/sptests/spintrcritical20/init.c
+++ b/testsuites/sptests/spintrcritical20/init.c
@@ -20,6 +20,7 @@
 #include 
 #include 
 #include 
+#include 
 
 const char rtems_test_name[] = "SPINTRCRITICAL 20";
 
@@ -108,6 +109,9 @@ static void Init(rtems_task_argument ignored)
   ++resets;
 }
 
+if (ctx->semaphore_task_tcb->current_state == STATES_READY)
+  continue;
+
 _Thread_Disable_dispatch();
 
 rtems_test_assert(
-- 
1.7.1



[PATCH 2/6] scheduleredfunblock.c: Correct spacing

2014-07-08 Thread Joel Sherrill
---
 cpukit/score/src/scheduleredfunblock.c |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/cpukit/score/src/scheduleredfunblock.c b/cpukit/score/src/scheduleredfunblock.c
index 469655e..308a691 100644
--- a/cpukit/score/src/scheduleredfunblock.c
+++ b/cpukit/score/src/scheduleredfunblock.c
@@ -27,7 +27,7 @@ Scheduler_Void_or_thread _Scheduler_EDF_Unblock(
   Thread_Control  *the_thread
 )
 {
-  _Scheduler_EDF_Enqueue( scheduler, the_thread);
+  _Scheduler_EDF_Enqueue( scheduler, the_thread );
   /* TODO: flash critical section? */
 
   /*
-- 
1.7.1



Misc on RBTree Thread Queue Priority Discipline Changes

2014-07-08 Thread Joel Sherrill
Hi

If you take the patches in their entirety, most of the tests
appear to be about 500 bytes smaller on the erc32.

None of the tmtests do priority based blocking so I can't
report any changes there.

Historically each threadq call invoked a discipline-specific subroutine.
Using the RBTree, only 3-5 lines of code in those discipline files were
unique to the discipline; the other 20+ lines were duplicated. The last
patch folds those subroutines into the main methods.

There is still some room for clean up since the code that is in
threadblockingoperationcancel.c is open coded there and in
three other places. So there are a total of four copies of this
code in the tree.

+ rtems/src/eventsurrender.c
+ score/src/threadqdequeue.c
+ score/src/threadqextract.c
+ score/src/threadblockingoperationcancel.c

Two of those need the debug check code and two do not.
Also, the method name isn't right for all four cases: it reflects
the "resource released in ISR" usage, not the normal
"clean up after blocking" case.

Suggestions on a new name, and on whether this should be
a real subroutine or a static inline, are appreciated. Then I
can rework and reduce the code duplication.
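One possible shape for the shared routine, as a static inline with a
placeholder name; this is purely a strawman, since the name and whether the
debug checks belong here are exactly the open questions above:

  /* Strawman only -- assumes the usual <rtems/score/threadimpl.h> and
   * <rtems/score/watchdogimpl.h> context.  Stop a pending timeout, make
   * the thread ready again, and free a proxy for remote threads. */
  RTEMS_INLINE_ROUTINE void _Thread_blocking_operation_Finish(
    Thread_Control *the_thread
  )
  {
    (void) _Watchdog_Remove( &the_thread->Timer );
    _Thread_Unblock( the_thread );

  #if defined(RTEMS_MULTIPROCESSING)
    if ( !_Objects_Is_local_id( the_thread->Object.id ) )
      _Thread_MP_Free_proxy( the_thread );
  #endif
  }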

-- 
Joel Sherrill, Ph.D.              Director of Research & Development
joel.sherr...@oarcorp.com         On-Line Applications Research
Ask me about RTEMS: a free RTOS   Huntsville AL 35805
Support Available                 (256) 722-9985



Re: rtems_io_lookup_name deprecated?

2014-07-08 Thread Sebastian Huber

On 2014-07-08 15:49, Marcos Díaz wrote:

Hi, we are developing a CAN driver for a BSP, and at first we are
developing it to use it with the RTEMS I/O manager. For this I need
the function rtems_io_lookup_name, which is deprecated (it works but
gives a warning message). I've been looking and it says to use stat()
instead, but stat() returns a different struct, and I need the major
and minor numbers that rtems_io_lookup_name returns.


The device number field st_dev in struct stat contains the major and minor 
number.
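
A minimal sketch of the stat() approach, assuming the dev_t helper macros
rtems_filesystem_dev_major_t()/rtems_filesystem_dev_minor_t() from
<rtems/libio.h> are available in your tree (otherwise decode st_dev by hand):

  #include <sys/stat.h>
  #include <rtems.h>
  #include <rtems/libio.h>

  rtems_status_code can_lookup_device(
    const char                *path,
    rtems_device_major_number *major,
    rtems_device_minor_number *minor
  )
  {
    struct stat st;

    if ( stat( path, &st ) != 0 ) {
      return RTEMS_UNSATISFIED;
    }

    /* st_dev encodes both numbers, see above. */
    *major = rtems_filesystem_dev_major_t( st.st_dev );
    *minor = rtems_filesystem_dev_minor_t( st.st_dev );

    return RTEMS_SUCCESSFUL;
  }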


So, why has that function been deprecated? What is the correct way to
develop a driver like this? Thanks!



If you want a low-overhead driver, then you can also use the IMFS generic nodes:

http://www.rtems.org/onlinedocs/doxygen/cpukit/html/group__IMFSGenericNodes.html

--
Sebastian Huber, embedded brains GmbH

Address : Dornierstr. 4, D-82178 Puchheim, Germany
Phone   : +49 89 189 47 41-16
Fax : +49 89 189 47 41-09
E-Mail  : sebastian.hu...@embedded-brains.de
PGP : Public key available on request.

This message is not a business communication within the meaning of the EHUG.