Re: Tickets: Milestone vs. Version

2018-08-11 Thread Chris Johns
On 11/8/18 6:31 am, Gedare Bloom wrote:
> On Fri, Aug 10, 2018 at 2:10 AM, Chris Johns  wrote:
>> On 10/08/2018 15:41, Sebastian Huber wrote:
>>> On 10/08/18 07:38, Chris Johns wrote:
 On 10/08/2018 15:03, Sebastian Huber wrote:
> we want a ticket for each milestone in which it is resolved. What is now the
> meaning of the version field?
>
 A ticket may be assigned to a branch but not a milestone. Milestones let us
 select which tickets we fix on a branch. Once all tickets on a milestone are
 closed, the release can be made.

 We do not work that way at the moment. I use the milestones when making
 releases to move tickets scheduled for a release that are not closed to the
 next release.
>>>
>>> This doesn't explain the version field. Is version the same as branch from
>>> your point of view?
>>>
>>
>> The branch is the version of RTEMS released from that branch. In Trac it is
>> called version, i.e. 4.11, 4.10, 5, etc. The term version is more accurate;
>> the use of branch is actually a VC implementation detail.
>>
> 
> I had understood we should use 'version' field in Trac to indicate
> when the bug first appeared. 

If a bug appears in 4.11 and we say the bug is no longer present in 5 because
things have changed, do we close the bug even though it is still present in 4.11?

If a bug is present in 4.11 and raised against it, but is fixed in 5, is
closing that bug valid while it is still present in 4.11?

What happens if someone finds a bug in 5 that is also present in 4.11, etc.,
which is what started this thread, and it is only fixed on 4.11?

> If this is not the case, then definitely
> (a) we need more guidance, 

I think this discussion highlights that we need to improve what we have. Thank
you for questioning what is being said. The page I did was focused on the
release process at the time. It is far from complete.

> and (b) we probably need a way to indicate
> (our best guess about) when a bug appeared.

Do we? If we decide what I have said above is correct, which is not a given,
then we would need a ticket on each version (branch) it is present on. The
tickets already record their creation date.

My understanding of Trac is that the relationships are fairly direct, so I am
not sure there is a way to view the complexity of a bug the way we see it in
its database. Also, I am fine with Trac. I suspect increasing a tool's
complexity to handle what we want brings its own set of issues.

Maybe it would be helpful to list what I see we need:

1. View open tickets on any version of RTEMS.
2. View closed tickets on any version of RTEMS.
3. Machine generated release notes.
4. ??

I see viewing open tickets on a version as a query for that version of RTEMS for
any tickets that are not closed. Viewing closed tickets is a pretty simple
query. Release note generation is keyed off the milestone.
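For what it's worth, items 1 and 2 look doable with stock Trac queries;
something like the following should be close (syntax from memory and the
version value is just an example, so it is worth checking against the
TracQuery documentation before relying on it):

```
Open tickets on a version:    query:version=4.11&status!=closed
Closed tickets on a version:  query:version=4.11&status=closed

Or embedded in a release wiki page as a macro:
[[TicketQuery(version=4.11&status!=closed, format=table, col=summary|milestone|status)]]
```

Release note generation could then be a similar query keyed on the milestone
instead of the version.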

I am not saying what we have is perfect, optimal, etc., and it does mean we
need to do more work cloning tickets when back-porting fixes.

Chris
___
devel mailing list
devel@rtems.org
http://lists.rtems.org/mailman/listinfo/devel


[GSoC - x86_64] Pre-merge issues (at -O2 optimization level) and WIP review

2018-08-11 Thread Amaan Cheval
Hi!

In the process of cleaning my work up, I've run into an odd problem
which only shows up when I set the optimization level to -O2. At -O0,
it's perfectly fine.

The issue is that somehow execution ends up at address 0x0.

This likely happens due to a _CPU_Context_switch, where the rsp is set
to a corrupted value, leading to a corrupt (i.e. 0) return address at
the end of the context switch.

What's curious is that this corruption _seems_ to occur in
_ISR_Handler's call to _Thread_Dispatch, somehow messing up the value
of rsp - I honestly don't know this for sure because gdb says one
thing (i.e. that rsp = 0), but setting up some code (cmpq $0, %rsp) to
check this seems to say rsp is non-zero, at least.

This is an odd heisenbug I'd like to investigate for sure - I just
thought I'd shoot this email out because:

- If I can't figure it out soon (i.e. tomorrow), I'll just drop it so I can
create more logical commits to send as patches upstream (thereby
leaving -O0 upstream, at least temporarily)

- If anyone's seen an odd stack corruption like this, or has any
advice on debugging it, could you let me know? I suspect something
like interrupting tasks which ought not to be interrupted (perhaps I
forgot to implement some kind of "CPU_ISR_Disable") - is there
anything of that sort you can think of?

Also, here's a GitHub PR like last time with all the work (just for
the overall changes, not the specific commits!). I'd appreciate a
quick review if anyone could - sorry about sending this out over the
weekend! I've had a surprising share of heisenbugs with QEMU in the
past week.

https://github.com/AmaanC/rtems-gsoc18/pull/3/files


Re: [GSoC - x86_64] Pre-merge issues (at -O2 optimization level) and WIP review

2018-08-11 Thread Amaan Cheval
Figured it out; it turns out my code to align the stack so I could make
calls without raising exceptions was messing up and corrupting the
stack pointer.

Running the -O2 code now makes the clock run a bit too quickly - the
calibration may have a minor issue. I'll fix that up and send patches
tomorrow or Monday, hopefully.

I'll be traveling Tuesday, so I'd appreciate it if we can get them merged
upstream on Monday itself - I'm okay to have a call and walk someone
through the patches and whatnot if need be.

Cheers!

On Sun, Aug 12, 2018 at 1:25 AM, Amaan Cheval  wrote:
> Hi!
>
> In the process of cleaning my work up, I've run into an odd problem
> which only shows up when I set the optimization level to -O2. At -O0,
> it's perfectly fine.
>
> The issue is that somehow execution ends up at address 0x0.
>
> This likely happens due to a _CPU_Context_switch, where the rsp is set
> to a corrupted value, leading to a corrupt (i.e. 0) return address at
> the end of the context switch.
>
> What's curious is that this corruption _seems_ to occur in
> _ISR_Handler's call to _Thread_Dispatch, somehow messing up the value
> of rsp - I honestly don't know this for sure because gdb says one
> thing (i.e. that rsp = 0), but setting up some code (cmpq $0, %rsp) to
> check this seems to say rsp is non-zero, at least.
>
> This is an odd heisenbug I'd like to investigate for sure - I just
> thought I'd shoot this email out because:
>
> - If I can't figure it out soon (i.e. tomorrow), I'll just drop it so I can
> create more logical commits to send as patches upstream (thereby
> leaving -O0 upstream, at least temporarily)
>
> - If anyone's seen an odd stack corruption like this, or has any
> advice on debugging it, could you let me know? I suspect something
> like interrupting tasks which ought not to be interrupted (perhaps I
> forgot to implement some kind of "CPU_ISR_Disable") - is there
> anything of that sort you can think of?
>
> Also, here's a GitHub PR like last time with all the work (just for
> the overall changes, not the specific commits!). I'd appreciate a
> quick review if anyone could - sorry about sending this out over the
> weekend! I've had a surprising share of heisenbugs with QEMU in the
> past week.
>
> https://github.com/AmaanC/rtems-gsoc18/pull/3/files