Re: [lldb-dev] lldb_private::RegisterContext vs lldb_private::RegisterInfoInterface

2017-09-21 Thread Ramana via lldb-dev
Sorry, I could not respond yesterday as I was out of office.

> Interesting. There are two ways to accomplish this:
> 1 - Treat the CPU as one target and the GPU as another.
> 2 - Treat the CPU and GPU as one target
>
> The tricky things with solution #1 is how to manage switching the targets
> between the CPU and GPU when events happen (CPU stops, or GPU stops while
> the other is running or already stopped). We don't have any formal
> "cooperative targets" yet, but we know they will exist in the future
> (client/server, vm code/vm debug of vm code, etc) so we will be happy to
> assist with questions if and when you get there.

I was going along with option #1 and will definitely post here with more
questions as I progress, thank you. Fortunately, the way the OpenVX APIs
work, after off-loading tasks to the GPU, the CPU waits for the GPU to
complete those tasks before continuing. And in our case, the CPU and GPU
can be controlled separately. Given that, do you think I still need to
worry much about "cooperative targets"?

> GPU debugging is tricky since they usually don't have a kernel or anything
> running on the hardware. Many examples I have seen so far will set a
> breakpoint in the program at some point by compiling the code with a
> breakpoint inserted, run to that breakpoint, and then if the user wants to
> continue, you recompile with breakpoints set at a later place and re-run the
> entire program again. Is your GPU any different?

> We also discussed how to single step in a GPU program. Since multiple cores
> on the GPU are concurrently running the same program, there was discussion
> on how single stepping would work. If you are stepping and run into an
> if/then statement, do you walk through the if and the else at all times? One
> GPU professional was saying this is how GPU folks would want to see single
> stepping happen. So I think there is a lot of stuff we need to think about
> when debugging GPUs in general.

Thanks for sharing that. Yeah, ours is a little different. Basically,
from the top level, the affinity in our case is per core of the GPU. I
am not there yet to discuss this further.

> So we currently have no cooperative targets in LLDB. This will be the first.
> We will need to discuss how hand off between the targets will occur and many
> other aspects. We will be sure to comment when and if you get to this point.

Thank you. Will post more when I get there.

Regards,
Ramana

On Tue, Sep 19, 2017 at 8:56 PM, Greg Clayton  wrote:
>
> On Sep 19, 2017, at 3:32 AM, Ramana  wrote:
>
> Thank you so much Greg for your comments.
>
> What architecture and OS are you looking to support?
>
>
> The OS is Linux and the primary use scenario is remote debugging.
> Basically http://lists.llvm.org/pipermail/lldb-dev/2017-June/012445.html
> is what I am trying to achieve and unfortunately that query did not
> get much attention from the members.
>
>
> Sorry about missing that. I will attempt to address this now:
>
> I have to implement a debugger for our HW, which comprises a CPU+GPU: the
> GPU is programmed in OpenCL and is driven through the OpenVX API from a C++
> application running on the CPU. Our requirement is that we should be able
> to debug the code running on both the CPU and GPU simultaneously within
> the same LLDB debug session.
>
>
> Interesting. There are two ways to accomplish this:
> 1 - Treat the CPU as one target and the GPU as another.
> 2 - Treat the CPU and GPU as one target
>
> There are tricky areas for both, but for sanity I would suggest option #1.
>
> The tricky things with solution #1 is how to manage switching the targets
> between the CPU and GPU when events happen (CPU stops, or GPU stops while
> the other is running or already stopped). We don't have any formal
> "cooperative targets" yet, but we know they will exist in the future
> (client/server, vm code/vm debug of vm code, etc) so we will be happy to
> assist with questions if and when you get there.
>
> Option #2 would be tricky as this would be the first target that has
> multiple architectures within one process. If the CPU and GPU can be
> controlled separately, then I would go with option #1 as LLDB currently
> always stops all threads in a process when any thread stops. You would also
> need to implement different register contexts for each thread within such a
> target. It hasn't been done yet, other than through the OS plug-ins that can
> provide extra threads to show in case you are doing some sort of user space
> threading.
>
> GPU debugging is tricky since they usually don't have a kernel or anything
> running on the hardware. Many examples I have seen so far will set a
> breakpoint in the program at some point by compiling the code with a
> breakpoint inserted, run to that breakpoint, and then if the user wants to
> continue, you recompile with breakpoints set at a later place and re-run the
> entire program again. Is your GPU any different? Since they will be used in
> an OpenCL context maybe your solu
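[Editor's note: as an aside on the OS plug-in route mentioned above, LLDB can load a Python `OperatingSystem` plug-in that reports extra threads (for example, one per GPU core), each with its own register context. Below is a minimal sketch; the thread IDs, register layout, and PC values are made up for illustration, and only the method names and dictionary shapes follow the plug-in interface.]

```python
# Hedged sketch of an LLDB Python OperatingSystem plug-in that surfaces
# one LLDB "thread" per GPU core.  All concrete values are hypothetical.
import struct


class GPUOperatingSystemPlugIn:
    """Reports one LLDB thread per GPU core (illustrative only)."""

    def __init__(self, process):
        # When loaded by LLDB this receives an lldb.SBProcess.
        self.process = process

    def get_thread_info(self):
        # One dictionary per GPU core; 'tid' values are invented here.
        return [
            {"tid": 0x1001, "name": "gpu-core-0", "queue": "gpu",
             "state": "stopped", "stop_reason": "breakpoint"},
            {"tid": 0x1002, "name": "gpu-core-1", "queue": "gpu",
             "state": "running", "stop_reason": "none"},
        ]

    def get_register_info(self):
        # Describe a (hypothetical) GPU register set: a single 64-bit PC.
        return {
            "sets": ["GPU General Purpose Registers"],
            "registers": [
                {"name": "pc", "bitsize": 64, "offset": 0,
                 "encoding": "uint", "format": "hex", "set": 0,
                 "gcc": 16, "dwarf": 16, "generic": "pc"},
            ],
        }

    def get_register_data(self, tid):
        # Return the raw register bytes for the given core; fake PCs here.
        fake_pc = {0x1001: 0x4000, 0x1002: 0x4080}.get(tid, 0)
        return struct.pack("<Q", fake_pc)
```

In a real plug-in the register data would be fetched from the GPU debug stub rather than hard-coded, but the interface (thread-info dictionaries plus per-thread packed register blobs) is how LLDB attaches distinct register contexts to such synthetic threads.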

Re: [lldb-dev] lldb_private::RegisterContext vs lldb_private::RegisterInfoInterface

2017-09-21 Thread Greg Clayton via lldb-dev

> On Sep 21, 2017, at 5:15 AM, Ramana  wrote:
> 
> Sorry, I could not respond yesterday as I was out of office.
> 
>> Interesting. There are two ways to accomplish this:
>> 1 - Treat the CPU as one target and the GPU as another.
>> 2 - Treat the CPU and GPU as one target
>> 
>> The tricky things with solution #1 is how to manage switching the targets
>> between the CPU and GPU when events happen (CPU stops, or GPU stops while
>> the other is running or already stopped). We don't have any formal
>> "cooperative targets" yet, but we know they will exist in the future
>> (client/server, vm code/vm debug of vm code, etc) so we will be happy to
>> assist with questions if and when you get there.
> 
> I was going along with option #1 and will definitely post here with more
> questions as I progress, thank you. Fortunately, the way the OpenVX APIs
> work, after off-loading tasks to the GPU, the CPU waits for the GPU to
> complete those tasks before continuing. And in our case, the CPU and GPU
> can be controlled separately. Given that, do you think I still need to
> worry much about "cooperative targets"?

If you just want to make two targets that know nothing about each other, then 
that is very easy. Is that what you were asking?
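[Editor's note: two completely independent targets can already coexist in one LLDB session. A hypothetical interactive sketch, assuming the GPU side exposes a GDB-remote debug stub; the file names, triple, and port are invented:]

```
(lldb) target create a.out                        # CPU-side program
(lldb) process launch --stop-at-entry
(lldb) target create --arch gpu-unknown-linux gpu_kernel.elf
(lldb) gdb-remote 5555                            # attach to the GPU debug stub
(lldb) target list                                # shows both targets
(lldb) target select 0                            # switch back to the CPU target
```

Each target owns its own process and stop state; the user switches between them explicitly with `target select`, which is exactly the hand-off that a future "cooperative targets" mechanism would automate.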

> 
>> GPU debugging is tricky since they usually don't have a kernel or anything
>> running on the hardware. Many examples I have seen so far will set a
>> breakpoint in the program at some point by compiling the code with a
>> breakpoint inserted, run to that breakpoint, and then if the user wants to
>> continue, you recompile with breakpoints set at a later place and re-run the
>> entire program again. Is your GPU any different?
> 
>> We also discussed how to single step in a GPU program. Since multiple cores
>> on the GPU are concurrently running the same program, there was discussion
>> on how single stepping would work. If you are stepping and run into an
>> if/then statement, do you walk through the if and the else at all times? One
>> GPU professional was saying this is how GPU folks would want to see single
>> stepping happen. So I think there is a lot of stuff we need to think about
>> when debugging GPUs in general.
> 
> Thanks for sharing that. Yeah, ours is a little different. Basically,
> from the top level, the affinity in our case is per core of the GPU. I
> am not there yet to discuss this further.

OK, let me know when you are ready to ask more questions.

> 
>> So we currently have no cooperative targets in LLDB. This will be the first.
>> We will need to discuss how hand off between the targets will occur and many
>> other aspects. We will be sure to comment when and if you get to this point.
> 
> Thank you. Will post more when I get there.

Sounds good.
> 
> Regards,
> Ramana
> 
> On Tue, Sep 19, 2017 at 8:56 PM, Greg Clayton  wrote:
>> 
>> On Sep 19, 2017, at 3:32 AM, Ramana  wrote:
>> 
>> Thank you so much Greg for your comments.
>> 
>> What architecture and OS are you looking to support?
>> 
>> 
>> The OS is Linux and the primary use scenario is remote debugging.
>> Basically http://lists.llvm.org/pipermail/lldb-dev/2017-June/012445.html
>> is what I am trying to achieve and unfortunately that query did not
>> get much attention from the members.
>> 
>> 
>> Sorry about missing that. I will attempt to address this now:
>> 
>> I have to implement a debugger for our HW, which comprises a CPU+GPU: the
>> GPU is programmed in OpenCL and is driven through the OpenVX API from a C++
>> application running on the CPU. Our requirement is that we should be able
>> to debug the code running on both the CPU and GPU simultaneously within
>> the same LLDB debug session.
>> 
>> 
>> Interesting. There are two ways to accomplish this:
>> 1 - Treat the CPU as one target and the GPU as another.
>> 2 - Treat the CPU and GPU as one target
>> 
>> There are tricky areas for both, but for sanity I would suggest option #1.
>> 
>> The tricky things with solution #1 is how to manage switching the targets
>> between the CPU and GPU when events happen (CPU stops, or GPU stops while
>> the other is running or already stopped). We don't have any formal
>> "cooperative targets" yet, but we know they will exist in the future
>> (client/server, vm code/vm debug of vm code, etc) so we will be happy to
>> assist with questions if and when you get there.
>> 
>> Option #2 would be tricky as this would be the first target that has
>> multiple architectures within one process. If the CPU and GPU can be
>> controlled separately, then I would go with option #1 as LLDB currently
>> always stops all threads in a process when any thread stops. You would also
>> need to implement different register contexts for each thread within such a
>> target. It hasn't been done yet, other than through the OS plug-ins that can
>> provide extra threads to show in case you are doing some sort of user space
>> threading.
>> 
>> GPU debugging is tricky since they usually don't have a kernel or anything
>> running on the hardwa

Re: [lldb-dev] lldb_private::RegisterContext vs lldb_private::RegisterInfoInterface

2017-09-21 Thread Ted Woodward via lldb-dev
We've done something similar in-house, running on a Snapdragon with an ARM core
and a Hexagon DSP. We used Android Studio to debug an app on the ARM that sends
work down to the Hexagon, which runs an app under Linux. On the Hexagon we ran
LLDB, and were able to debug both apps talking to each other.

--
Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux 
Foundation Collaborative Project

> -----Original Message-----
> From: lldb-dev [mailto:lldb-dev-boun...@lists.llvm.org] On Behalf Of Greg
> Clayton via lldb-dev
> Sent: Thursday, September 21, 2017 10:10 AM
> To: Ramana 
> Cc: lldb-dev@lists.llvm.org
> Subject: Re: [lldb-dev] lldb_private::RegisterContext vs
> lldb_private::RegisterInfoInterface
> 
> 
> > On Sep 21, 2017, at 5:15 AM, Ramana 
> wrote:
> >
> > Sorry, I could not respond yesterday as I was out of office.
> >
> >> Interesting. There are two ways to accomplish this:
> >> 1 - Treat the CPU as one target and the GPU as another.
> >> 2 - Treat the CPU and GPU as one target
> >>
> >> The tricky things with solution #1 is how to manage switching the
> >> targets between the CPU and GPU when events happen (CPU stops, or GPU
> >> stops while the other is running or already stopped). We don't have
> >> any formal "cooperative targets" yet, but we know they will exist in
> >> the future (client/server, vm code/vm debug of vm code, etc) so we
> >> will be happy to assist with questions if and when you get there.
> >
> > I was going along with option #1 and will definitely post here with more
> > questions as I progress, thank you. Fortunately, the way the OpenVX APIs
> > work, after off-loading tasks to the GPU, the CPU waits for the GPU to
> > complete those tasks before continuing. And in our case, the CPU and GPU
> > can be controlled separately. Given that, do you think I still need to
> > worry much about "cooperative targets"?
> 
> If you just want to make two targets that know nothing about each other, then
> that is very easy. Is that what you were asking?
> 
> >
> >> GPU debugging is tricky since they usually don't have a kernel or
> >> anything running on the hardware. Many examples I have seen so far
> >> will set a breakpoint in the program at some point by compiling the
> >> code with a breakpoint inserted, run to that breakpoint, and then if
> >> the user wants to continue, you recompile with breakpoints set at a
> >> later place and re-run the entire program again. Is your GPU any different?
> >
> >> We also discussed how to single step in a GPU program. Since multiple
> >> cores on the GPU are concurrently running the same program, there was
> >> discussion on how single stepping would work. If you are stepping and
> >> run into an if/then statement, do you walk through the if and the
> >> else at all times? One GPU professional was saying this is how GPU
> >> folks would want to see single stepping happen. So I think there is a
> >> lot of stuff we need to think about when debugging GPUs in general.
> >
> > Thanks for sharing that. Yeah, ours is a little different. Basically,
> > from the top level, the affinity in our case is per core of the GPU. I
> > am not there yet to discuss this further.
> 
> OK, let me know when you are ready to ask more questions.
> 
> >
> >> So we currently have no cooperative targets in LLDB. This will be the 
> >> first.
> >> We will need to discuss how hand off between the targets will occur
> >> and many other aspects. We will be sure to comment when and if you get to
> this point.
> >
> > Thank you. Will post more when I get there.
> 
> Sounds good.
> >
> > Regards,
> > Ramana
> >
> > On Tue, Sep 19, 2017 at 8:56 PM, Greg Clayton 
> wrote:
> >>
> >> On Sep 19, 2017, at 3:32 AM, Ramana 
> wrote:
> >>
> >> Thank you so much Greg for your comments.
> >>
> >> What architecture and OS are you looking to support?
> >>
> >>
> >> The OS is Linux and the primary use scenario is remote debugging.
> >> Basically
> >> http://lists.llvm.org/pipermail/lldb-dev/2017-June/012445.html
> >> is what I am trying to achieve and unfortunately that query did not
> >> get much attention from the members.
> >>
> >>
> >> Sorry about missing that. I will attempt to address this now:
> >>
> >> I have to implement a debugger for our HW, which comprises a CPU+GPU:
> >> the GPU is programmed in OpenCL and is driven through the OpenVX API
> >> from a C++ application running on the CPU. Our requirement is that we
> >> should be able to debug the code running on both the CPU and GPU
> >> simultaneously within the same LLDB debug session.
> >>
> >>
> >> Interesting. There are two ways to accomplish this:
> >> 1 - Treat the CPU as one target and the GPU as another.
> >> 2 - Treat the CPU and GPU as one target
> >>
> >> There are tricky areas for both, but for sanity I would suggest option #1.
> >>
> >> The tricky things with solution #1 is how to manage switching the
> >> targets between the CPU and GPU when events happen (CPU stops, or GPU

[lldb-dev] IMPORTANT: *.llvm.org certificates being renewed (impacts SVN) FRIDAY Sept 22

2017-09-21 Thread Tanya Lattner via lldb-dev
All,

The following llvm.org certificates will be renewed
tomorrow, Friday Sept 22nd at ~7:30AM PDT.

bugs.llvm.org 
clang-analyzer.llvm.org 
clang.llvm.org 
compiler-rt.llvm.org 
dragonegg.llvm.org 
git.llvm.org 
klee.llvm.org 
libclc.llvm.org 
libcxx.llvm.org 
libcxxabi.llvm.org 
lists.llvm.org 
lld.llvm.org 
lldb.llvm.org 
llvm.org 
new-www.llvm.org 
openmp.llvm.org 
polly.llvm.org 
svn.llvm.org 
vmkit.llvm.org 
www.llvm.org 

As a consequence, you will need to accept the new certificate when using SVN.

-Tanya

___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev