On 27/08/2014 11:05 pm, Joel Sherrill wrote:
> On August 27, 2014 7:33:28 AM CDT, Daniel Hellstrom <dan...@gaisler.com> wrote:
>> Hi,
>>
>> On 08/27/2014 02:39 AM, Chris Johns wrote:
>>> On 27/08/2014 3:50 am, Jennifer Averett wrote:
>>>> We suggest to remove the ref_count of the task structures to save
>>>> time and locking. Otherwise we need atomic counters here?
>>> You need a means of tracking the logged references to the task data
>>> in the trace log. The log can exist well past the lifetime of the
>>> task and the data.
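
To expand on what I mean by tracking references, a rough sketch only;
untested, and the names are made up rather than the current capture
engine API:

  #include <rtems.h>
  #include <stdatomic.h>
  #include <stdlib.h>

  /* Captured task details; this can outlive the task itself. */
  typedef struct {
    rtems_id    id;
    rtems_name  name;
    atomic_uint ref_count;  /* one reference per log record */
  } capture_task_data;

  static inline void capture_task_data_ref(capture_task_data* td)
  {
    atomic_fetch_add_explicit(&td->ref_count, 1, memory_order_relaxed);
  }

  static inline void capture_task_data_unref(capture_task_data* td)
  {
    /* Free only once the reader has consumed the last log record
       that references this task's data. */
    if (atomic_fetch_sub_explicit(&td->ref_count, 1,
                                  memory_order_acq_rel) == 1)
      free(td);
  }

The atomic keeps the SMP lock off the record path; the freeing cost
moves to the reader side.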
>> Yes, I thought that was in line with what we suggested.
>>>> We suggest to free task structures when reading; by recording the
>>>> time of the task deletion event we know when the last event
>>>> referencing the task occurred,
>>> I do not like this approach. Allocating and freeing memory in the
>>> trace recording path introduces jitter when adding records, and it
>>> requires taking an SMP lock.
>>>> when that time has passed it should be okay to free the task
>>>> structure? Can anyone see a problem with this? My question is:
>>>> will this always work based on the zombie state of the thread? I
>>>> thought I saw
>>> You cannot remove the ref_counts without an equivalent. The data is
>>> referenced beyond the life of a task.
>> Do you mean that there will be references to the task struct after
>> the task delete record has been read? I assumed that the task delete
>> was the last record referencing the task struct?
There should not be references.
>>> Another approach could involve adding the task data to the trace
>>> buffer when the first reference to the task is added to the trace
>>> buffer. You would add the task data, tagging it with some sort of
>>> "id". When adding events related to the task, the "id" is used. The
>>> trace buffer decoder extracts the task data, building a suitable
>>> table to decode the following "id" references. There is a small
>>> extra bump filling the trace buffer with the task details but this
>>> is the cost of tracing in software. This approach helps solve the
>>> problem of off-board streaming of the trace data.
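
To make the record side of this concrete, two record kinds would do
it. A sketch only; the layouts and names are hypothetical:

  #include <stdint.h>

  /* Emitted once, on the first reference to a task. */
  typedef struct {
    uint32_t id;                /* trace id handed out at create */
    uint32_t name;              /* rtems_name packed into 32 bits */
    uint32_t initial_priority;
    /* ... any other static task details ... */
  } trace_task_record;

  /* Every later event carries only the id. */
  typedef struct {
    uint64_t timestamp;
    uint32_t id;                /* refers back to a trace_task_record */
    uint32_t event;             /* e.g. switch in/out, state change */
  } trace_event_record;

The decoder on the host keeps a table mapping each id to the task
details and resolves every following event record against it.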
>> I like this approach much better, but I assumed it would be harder
>> to implement. Then we could basically remove the task struct. That
>> would solve the allocation/freeing of the task structs too?
> This is impossible, or at least very complicated, to implement. The
> trace buffers are a circular queue and you would end up with gaps in
> the middle which would have to be preserved across reads of records
> and wrapping.
Why would there be gaps? The assumption here is the buffer does not
loop until it is read. The host reads and keeps the details, piecing
the picture back together again.
The idea here is that the first reference commits to the log the basic
data for the thread that is static. After that we incrementally add
the details about the change of state for the thread. For example, on
each context switch you log the priorities (if changed). Also, there
is currently time monitoring on each task. This could be avoided
because the time stamps for the context switches are maintained.

If we need to hold some per-thread data, maybe we register an
extension to do this, which is done during the create.

This change would break the load picture but that is ok.
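
For example, a user extension along these lines; an untested sketch,
and the trace_* helpers are the hypothetical record writers from the
sketch above:

  #include <rtems.h>
  #include <stdbool.h>

  /* Hypothetical helpers writing the records sketched earlier. */
  void trace_emit_task_record(rtems_tcb* tcb);
  void trace_emit_switch_event(rtems_tcb* heir);

  static bool trace_thread_create(rtems_tcb* executing,
                                  rtems_tcb* created)
  {
    (void) executing;
    /* The static details are committed once, at create time. */
    trace_emit_task_record(created);
    return true;
  }

  static void trace_thread_switch(rtems_tcb* executing,
                                  rtems_tcb* heir)
  {
    (void) executing;
    /* The record's time stamp replaces per-task time monitoring;
       priorities would be logged here only when they change. */
    trace_emit_switch_event(heir);
  }

  static const rtems_extensions_table trace_extensions = {
    .thread_create = trace_thread_create,
    .thread_switch = trace_thread_switch
  };

  static rtems_id trace_extension_id;

  rtems_status_code trace_install(void)
  {
    return rtems_extension_create(
      rtems_build_name('T', 'R', 'C', 'E'),
      &trace_extensions,
      &trace_extension_id
    );
  }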
> Using an id may require a global lock but we can use the same basic
> algorithm as the id look up and be deterministic.
What if the id was an incrementing thread create count and placed in
the TCB?
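
For example (again a sketch; the TCB trace id field and its setter are
hypothetical and would need to be added, or live in a per-thread
extension slot):

  #include <rtems.h>
  #include <stdatomic.h>
  #include <stdbool.h>
  #include <stdint.h>

  void tcb_set_trace_id(rtems_tcb* tcb, uint32_t id); /* hypothetical */
  void trace_emit_task_record(rtems_tcb* tcb);        /* hypothetical */

  static atomic_uint_fast32_t trace_id_counter;

  static bool trace_id_thread_create(rtems_tcb* executing,
                                     rtems_tcb* created)
  {
    (void) executing;
    /* An atomic increment is deterministic and needs no global lock.
       Unlike the object id, the create count is never reused, so the
       host decoder can key its table on it safely. */
    uint32_t id = (uint32_t) atomic_fetch_add(&trace_id_counter, 1) + 1;
    tcb_set_trace_id(created, id);
    trace_emit_task_record(created);
    return true;
  }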
Chris
_______________________________________________
devel mailing list
devel@rtems.org
http://lists.rtems.org/mailman/listinfo/devel