[lldb-dev] LLVM 7.0.0 Release
I am pleased to announce that LLVM 7 is now available. Get it here: https://llvm.org/releases/download.html#7.0.0

The release contains the work on trunk up to SVN revision 338536 plus work on the release branch. It is the result of the community's work over the past six months, including:

- function multiversioning in Clang with the 'target' attribute for ELF-based x86/x86_64 targets
- improved PCH support in clang-cl
- preliminary DWARF v5 support
- basic support for OpenMP 4.5 offloading to NVPTX
- OpenCL C++ support
- MSan, X-Ray, and libFuzzer support for FreeBSD
- early UBSan, X-Ray, and libFuzzer support for OpenBSD
- UBSan checks for implicit conversions
- many long-tail compatibility issues fixed in lld, which is now production ready for ELF, COFF, and MinGW
- new tools: llvm-exegesis, llvm-mca, and diagtool

And as usual, many optimizations, improved diagnostics, and bug fixes. For more details, see the release notes:

https://llvm.org/releases/7.0.0/docs/ReleaseNotes.html
https://llvm.org/releases/7.0.0/tools/clang/docs/ReleaseNotes.html
https://llvm.org/releases/7.0.0/tools/clang/tools/extra/docs/ReleaseNotes.html
https://llvm.org/releases/7.0.0/tools/lld/docs/ReleaseNotes.html

Thanks to everyone who helped with filing, fixing, and code reviewing the release-blocking bugs! Special thanks to the release testers and packagers: Bero Rosenkränzer, Brian Cain, Dimitry Andric, Jonas Hahnfeld, Lei Huang, Michał Górny, Sylvestre Ledru, Takumi Nakamura, and Vedant Kumar.

For questions or comments about the release, please contact the community on the mailing lists.

Onwards to LLVM 8!

Cheers,
Hans

___ lldb-dev mailing list lldb-dev@lists.llvm.org http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
[lldb-dev] [RFC] LLDB Reproducers
Hi everyone,

We all know how hard it can be to reproduce an issue or crash in LLDB. There are a lot of moving parts, and subtle differences can easily add up. We want to make this easier by generating reproducers in LLDB, similar to what clang does today.

The core idea is as follows: during normal operation we capture whatever information is needed to recreate the current state of the debugger. When something goes wrong, this becomes available to the user. Someone else should then be able to reproduce the same issue with only this data, for example on a different machine.

It's important to note that we want to replay the debug session from the reproducer, rather than just recreating the current state. This ensures that we have access to all the events leading up to the problem, which are usually far more important than the error state itself.

# High Level Design

Concretely, we want to extend LLDB in two ways:

1. We need to add infrastructure to _generate_ the data necessary for reproducing.
2. We need to add infrastructure to _use_ the data in the reproducer to replay the debugging session.

Different parts of LLDB will have different definitions of what data they need to reproduce their path to the issue. For example, capturing the commands executed by the user is very different from tracking the dSYM bundles on disk. Therefore, we propose to have each component deal with its needs in a localized way. This has the advantage that the functionality can be developed and tested independently.

## Providers

We'll call a combination of (1) and (2) for a given component a `Provider`. For example, we'd have a provider for user commands and a provider for dSYM files. A provider will know how to keep track of its information, how to serialize it as part of the reproducer, as well as how to deserialize it again and use it to recreate the state of the debugger.
With one exception, the lifetime of the provider coincides with that of the `SBDebugger`, because that is the scope of what we consider here to be a single debug session. The exception would be the provider for the global module cache, because it is shared between multiple debuggers. Although it would be conceptually straightforward to add a provider for the shared module cache, this significantly increases the complexity of the reproducer framework because of its implications for the lifetime and everything related to that.

For now we will ignore this problem, which means we will not replay the construction of the shared module cache but rather build it up during replaying, as if the current debug session were the first and only one using it. The impact of doing so is significant, as no issue caused by the shared module cache will be reproducible, but it does not limit reproducing any issue unrelated to it.

## Reproducer Framework

To coordinate between the data from different components, we'll need to introduce a global reproducer infrastructure. We have a component responsible for reproducer generation (the `Generator`) and one for using the reproducer (the `Loader`). They are essentially two ways of looking at the same unit of replayable work.

The Generator keeps track of its providers and whether or not we need to generate a reproducer. When a problem occurs, LLDB will request the Generator to generate a reproducer. When LLDB finishes successfully, the Generator cleans up anything it might have created during the session. Additionally, the Generator populates an index, which is part of the reproducer and is used by the Loader to discover what information is available.

When a reproducer is passed to LLDB, we want to use its data to replay the debug session. This is coordinated by the Loader. Through the index created by the Generator, different components know what data (Providers) is available and how to use it.
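To make the Provider/Generator/Loader split concrete, here is a minimal, hypothetical sketch in Python. All class and method names are illustrative assumptions for this sketch, not LLDB's actual C++ API; the point is only the shape of the design: providers serialize themselves, the Generator bundles payloads behind an index, and the Loader uses the index to discover and replay them.

```python
import json

class Provider:
    """Base class: each component records its own data and knows how to
    serialize/deserialize it (names are illustrative, not LLDB's API)."""
    name = "base"
    def record(self, item): ...
    def serialize(self): ...
    def deserialize(self, data): ...

class CommandProvider(Provider):
    """Captures user commands so a session can be replayed in order."""
    name = "commands"
    def __init__(self):
        self.commands = []
    def record(self, command):
        self.commands.append(command)
    def serialize(self):
        return json.dumps(self.commands)
    def deserialize(self, data):
        self.commands = json.loads(data)
        return self.commands

class Generator:
    """Tracks providers and, on request, emits a reproducer: an index
    naming each provider plus that provider's serialized payload."""
    def __init__(self):
        self.providers = {}
    def register(self, provider):
        self.providers[provider.name] = provider
    def generate(self):
        payloads = {n: p.serialize() for n, p in self.providers.items()}
        return {"index": {"providers": sorted(payloads)}, "payloads": payloads}

class Loader:
    """Uses the index to discover which providers have data to replay."""
    def __init__(self, reproducer):
        self.reproducer = reproducer
    def replay(self, provider_factories):
        results = {}
        for name in self.reproducer["index"]["providers"]:
            provider = provider_factories[name]()
            results[name] = provider.deserialize(
                self.reproducer["payloads"][name])
        return results

# Capture a session...
gen = Generator()
cmds = CommandProvider()
gen.register(cmds)
cmds.record("breakpoint set -n main")
cmds.record("run")
repro = gen.generate()

# ...and replay it elsewhere from the reproducer alone.
loader = Loader(repro)
replayed = loader.replay({"commands": CommandProvider})
print(replayed["commands"])  # ['breakpoint set -n main', 'run']
```

Because each provider owns its own serialization, a new component (say, a dSYM tracker) plugs in by registering one more provider, without the Generator or Loader changing.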
It's important to note that in order to create a complete reproducer, we will require data from our dependencies (llvm, clang, swift) as well. This means either (a) that the infrastructure needs to be accessible from our dependencies, or (b) that an API is provided that allows us to query this. We plan to address this issue when it arises for the respective Generator.

# Components

We have identified a list of minimal components needed to make reproducing possible. We've divided those into two groups: explicit and implicit inputs.

Explicit inputs are inputs from the user to the debugger.

- Command line arguments
- Settings
- User commands
- Scripting Bridge API

In addition to the components listed above, LLDB has a bunch of inputs that are not passed explicitly. It's often these that make reproducing an issue complex.

- GDB Remote Packets
- Files containing debug information (object files, dSYM bundles)
- Clang headers
- Swift modules

Every component would have its own provider and is free to implement it as it sees fit. For example, as we expect to have a large number of GDB remote packets, the provider might choos
Re: [lldb-dev] [cfe-dev] [7.0.0 Release] The final tag is in
Hi Hans,

> The final version of 7.0.0 has been tagged from the branch at r342370.

I'd like to fork from 7.0.0 final, but I got confused: the tip of the release_70 branch is still r341805, which is identical to rc3. Shouldn't it be r342370 instead? Or does the final (r342370) not go into the release branch? Or does it just take some more time?

Thanks,
Gabor

On Mon, Sep 17, 2018 at 1:42 PM Hans Wennborg via cfe-dev wrote:
>
> Dear testers,
>
> The final version of 7.0.0 has been tagged from the branch at r342370.
> It is identical to rc3 modulo release notes and docs changes.
>
> Please build the final binaries and upload to the sftp.
>
> For those following along: this means 7.0.0 is done, but it will take
> a few days to get all the tarballs ready and published on the web
> page. I will send the announcement once everything is ready.
>
> Thanks again everyone for your work!
>
> Hans
> ___
> cfe-dev mailing list
> cfe-...@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-dev
[lldb-dev] skip some tests with "check-lldb"
Hi,

I'd like to skip some tests when I run "ninja check-lldb", because they fail. I am on the release_70 branch. I know I could use dotest.py directly, but that would exercise only one thread. Is there a way to execute the tests in parallel on all cores and at the same time skip some of the tests?

Thanks,
Gabor
Re: [lldb-dev] skip some tests with "check-lldb"
I just realized that `dotest.py` has a --thread option. Is that the one used during the lit tests (`ninja check-lldb`)?

On Wed, Sep 19, 2018 at 6:00 PM Gábor Márton wrote:
>
> Hi,
>
> I'd like to skip some tests when I run "ninja check-lldb", because they fail.
> I am on release_70 branch.
> I know I could use dotest.py directly, but that would exercise only one thread.
> Is there a way to execute the tests parallel on all cores and in the
> same time skip some of the tests?
>
> Thanks,
> Gabor
Re: [lldb-dev] [cfe-dev] [7.0.0 Release] The final tag is in
Hi Gabor,

The revision Hans mentioned is essentially the revision that created the SVN tag. Content-wise it is equal to the tip of the release_70 branch.

On Wed, Sep 19, 2018 at 6:21 PM Gábor Márton via cfe-dev wrote:
>
> Hi Hans,
>
> > The final version of 7.0.0 has been tagged from the branch at r342370.
>
> I'd like to fork from 7.0.0 final but I got confused:
> The tip of release_70 branch is still r341805, which is identical to
> rc3. This should be r342370 instead, shouldn't it? Or the final
> (r342370) does not go into the release branch? Or it just takes some
> more time?
>
> Thanks,
> Gabor
> On Mon, Sep 17, 2018 at 1:42 PM Hans Wennborg via cfe-dev
> wrote:
> >
> > Dear testers,
> >
> > The final version of 7.0.0 has been tagged from the branch at r342370.
> > It is identical to rc3 modulo release notes and docs changes.
> >
> > Please build the final binaries and upload to the sftp.
> >
> > For those following along: this means 7.0.0 is done, but it will take
> > a few days to get all the tarballs ready and published on the web
> > page. I will send the announcement once everything is ready.
> >
> > Thanks again everyone for your work!
> >
> > Hans

--
With best regards, Anton Korobeynikov
Department of Statistical Modelling, Saint Petersburg State University
Re: [lldb-dev] skip some tests with "check-lldb"
Unless you pass --no-multiprocess to dotest, it should detect how many cores your system has and use them.

--
Ted Woodward
Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project

-----Original Message-----
From: lldb-dev On Behalf Of Gábor Márton via lldb-dev
Sent: Wednesday, September 19, 2018 11:04 AM
To: lldb-dev@lists.llvm.org
Subject: Re: [lldb-dev] skip some tests with "check-lldb"

I just realized that `dotest.py` has a --thread option. Is that the one which is used during the lit test (`ninja check-lldb`) ?

On Wed, Sep 19, 2018 at 6:00 PM Gábor Márton wrote:
>
> Hi,
>
> I'd like to skip some tests when I run "ninja check-lldb", because they fail.
> I am on release_70 branch.
> I know I could use dotest.py directly, but that would exercise only one thread.
> Is there a way to execute the tests parallel on all cores and in the
> same time skip some of the tests?
>
> Thanks,
> Gabor
Re: [lldb-dev] [RFC] LLDB Reproducers
Sounds like a fantastic idea. How would this work when the behavior of the debugee process is non-deterministic? On Wed, Sep 19, 2018 at 6:50 AM, Jonas Devlieghere via lldb-dev < lldb-dev@lists.llvm.org> wrote: > Hi everyone, > > We all know how hard it can be to reproduce an issue or crash in LLDB. > There > are a lot of moving parts and subtle differences can easily add up. We > want to > make this easier by generating reproducers in LLDB, similar to what clang > does > today. > > The core idea is as follows: during normal operation we capture whatever > information is needed to recreate the current state of the debugger. When > something goes wrong, this becomes available to the user. Someone else > should > then be able to reproduce the same issue with only this data, for example > on a > different machine. > > It's important to note that we want to replay the debug session from the > reproducer, rather than just recreating the current state. This ensures > that we > have access to all the events leading up to the problem, which are usually > far > more important than the error state itself. > > # High Level Design > > Concretely we want to extend LLDB in two ways: > > 1. We need to add infrastructure to _generate_ the data necessary for > reproducing. > 2. We need to add infrastructure to _use_ the data in the reproducer to > replay > the debugging session. > > Different parts of LLDB will have different definitions of what data they > need > to reproduce their path to the issue. For example, capturing the commands > executed by the user is very different from tracking the dSYM bundles on > disk. > Therefore, we propose to have each component deal with its needs in a > localized > way. This has the advantage that the functionality can be developed and > tested > independently. > > ## Providers > > We'll call a combination of (1) and (2) for a given component a > `Provider`. For > example, we'd have an provider for user commands and a provider for dSYM > files. 
> A provider will know how to keep track of its information, how to > serialize it > as part of the reproducer as well as how to deserialize it again and use > it to > recreate the state of the debugger. > > With one exception, the lifetime of the provider coincides with that of the > `SBDebugger`, because that is the scope of what we consider here to be a > single > debug session. The exception would be the provider for the global module > cache, > because it is shared between multiple debuggers. Although it would be > conceptually straightforward to add a provider for the shared module cache, > this significantly increases the complexity of the reproducer framework > because > of its implication on the lifetime and everything related to that. > > For now we will ignore this problem which means we will not replay the > construction of the shared module cache but rather build it up during > replaying, as if the current debug session was the first and only one > using it. > The impact of doing so is significant, as no issue caused by the shared > module > cache will be reproducible, but does not limit reproducing any issue > unrelated > to it. > > ## Reproducer Framework > > To coordinate between the data from different components, we'll need to > introduce a global reproducer infrastructure. We have a component > responsible > for reproducer generation (the `Generator`) and for using the reproducer > (the > `Loader`). They are essentially two ways of looking at the same unit of > repayable work. > > The Generator keeps track of its providers and whether or not we need to > generate a reproducer. When a problem occurs, LLDB will request the > Generator > to generate a reproducer. When LLDB finishes successfully, the Generator > cleans > up anything it might have created during the session. Additionally, the > Generator populates an index, which is part of the reproducer, and used by > the > Loader to discover what information is available. 
> > When a reproducer is passed to LLDB, we want to use its data to replay the > debug session. This is coordinated by the Loader. Through the index > created by > the Generator, different components know what data (Providers) are > available, > and how to use them. > > It's important to note that in order to create a complete reproducer, we > will > require data from our dependencies (llvm, clang, swift) as well. This means > that either (a) the infrastructure needs to be accessible from our > dependencies > or (b) that an API is provided that allows us to query this. We plan to > address > this issue when it arises for the respective Generator. > > # Components > > We have identified a list of minimal components needed to make reproducing > possible. We've divided those into two groups: explicit and implicit > inputs. > > Explicit inputs are inputs from the user to the debugger. > > - Command line arguments > - Settings > - User commands > - Scripting Bridge API > > In addition to the com
Re: [lldb-dev] [RFC] LLDB Reproducers
> On Sep 19, 2018, at 6:49 PM, Leonard Mosescu wrote: > > Sounds like a fantastic idea. > > How would this work when the behavior of the debugee process is > non-deterministic? All the communication between the debugger and the inferior goes through the GDB remote protocol. Because we capture and replay this, we can reproduce without running the executable, which is particularly convenient when you were originally debugging something on a different device for example. > > On Wed, Sep 19, 2018 at 6:50 AM, Jonas Devlieghere via lldb-dev > mailto:lldb-dev@lists.llvm.org>> wrote: > Hi everyone, > > We all know how hard it can be to reproduce an issue or crash in LLDB. There > are a lot of moving parts and subtle differences can easily add up. We want to > make this easier by generating reproducers in LLDB, similar to what clang does > today. > > The core idea is as follows: during normal operation we capture whatever > information is needed to recreate the current state of the debugger. When > something goes wrong, this becomes available to the user. Someone else should > then be able to reproduce the same issue with only this data, for example on a > different machine. > > It's important to note that we want to replay the debug session from the > reproducer, rather than just recreating the current state. This ensures that > we > have access to all the events leading up to the problem, which are usually far > more important than the error state itself. > > # High Level Design > > Concretely we want to extend LLDB in two ways: > > 1. We need to add infrastructure to _generate_ the data necessary for > reproducing. > 2. We need to add infrastructure to _use_ the data in the reproducer to > replay > the debugging session. > > Different parts of LLDB will have different definitions of what data they need > to reproduce their path to the issue. For example, capturing the commands > executed by the user is very different from tracking the dSYM bundles on disk. 
> Therefore, we propose to have each component deal with its needs in a > localized > way. This has the advantage that the functionality can be developed and tested > independently. > > ## Providers > > We'll call a combination of (1) and (2) for a given component a `Provider`. > For > example, we'd have an provider for user commands and a provider for dSYM > files. > A provider will know how to keep track of its information, how to serialize it > as part of the reproducer as well as how to deserialize it again and use it to > recreate the state of the debugger. > > With one exception, the lifetime of the provider coincides with that of the > `SBDebugger`, because that is the scope of what we consider here to be a > single > debug session. The exception would be the provider for the global module > cache, > because it is shared between multiple debuggers. Although it would be > conceptually straightforward to add a provider for the shared module cache, > this significantly increases the complexity of the reproducer framework > because > of its implication on the lifetime and everything related to that. > > For now we will ignore this problem which means we will not replay the > construction of the shared module cache but rather build it up during > replaying, as if the current debug session was the first and only one using > it. > The impact of doing so is significant, as no issue caused by the shared module > cache will be reproducible, but does not limit reproducing any issue unrelated > to it. > > ## Reproducer Framework > > To coordinate between the data from different components, we'll need to > introduce a global reproducer infrastructure. We have a component responsible > for reproducer generation (the `Generator`) and for using the reproducer (the > `Loader`). They are essentially two ways of looking at the same unit of > repayable work. > > The Generator keeps track of its providers and whether or not we need to > generate a reproducer. 
When a problem occurs, LLDB will request the Generator > to generate a reproducer. When LLDB finishes successfully, the Generator > cleans > up anything it might have created during the session. Additionally, the > Generator populates an index, which is part of the reproducer, and used by the > Loader to discover what information is available. > > When a reproducer is passed to LLDB, we want to use its data to replay the > debug session. This is coordinated by the Loader. Through the index created by > the Generator, different components know what data (Providers) are available, > and how to use them. > > It's important to note that in order to create a complete reproducer, we will > require data from our dependencies (llvm, clang, swift) as well. This means > that either (a) the infrastructure needs to be accessible from our > dependencies > or (b) that an API is provided that allows us to query this. We plan to > address > this issue when it arises for the respective
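The record-and-replay of gdb-remote traffic described above can be illustrated with a small, hypothetical sketch. The class names and the `FakeLiveConnection` stand-in are inventions for this example (real lldb traffic uses the full gdb-remote packet framing), but the mechanism is the same: log every request/response pair during the live session, then serve the logged responses without any target attached, failing loudly if the replayed debugger diverges from the recording.

```python
class PacketRecorder:
    """Wraps a live gdb-remote connection and logs every request/response
    pair in order, so the session can later be replayed without a target."""
    def __init__(self, connection):
        self.connection = connection
        self.log = []
    def send(self, packet):
        response = self.connection.send(packet)
        self.log.append((packet, response))
        return response

class PacketReplayer:
    """Serves recorded responses instead of talking to a real process."""
    def __init__(self, log):
        self.log = list(log)  # copy, so the recording stays intact
    def send(self, packet):
        recorded_packet, recorded_response = self.log.pop(0)
        if packet != recorded_packet:
            raise RuntimeError(
                f"replay divergence: sent {packet!r}, "
                f"recording expects {recorded_packet!r}")
        return recorded_response

class FakeLiveConnection:
    """Stands in for a debug server in this self-contained example."""
    def send(self, packet):
        return {"qProcessInfo": "pid:2a01;", "g": "00aa11bb"}[packet]

# Record a "session" against the live connection...
recorder = PacketRecorder(FakeLiveConnection())
recorder.send("qProcessInfo")
recorder.send("g")

# ...then replay it from the log alone, with no target attached.
replayer = PacketReplayer(recorder.log)
print(replayer.send("qProcessInfo"))  # pid:2a01;
```

This also shows why non-determinism in the inferior is largely a non-issue for replay: the reproducer replays the responses the debugger actually saw, not a fresh run of the executable.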
Re: [lldb-dev] skip some tests with "check-lldb"
That's okay, but is it possible to skip a few tests when using lit? I was thinking about moving the test files I want to skip, but that has obvious drawbacks. Also, --filter does not seem so useful in this case.

On Wed, 19 Sep 2018, 18:46, wrote:
> Unless you pass --no-multiprocess to dotest, it should detect how many
> cores your system has and use them.
>
> --
> Ted Woodward
> Qualcomm Innovation Center, Inc.
> Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux
> Foundation Collaborative Project
>
> -----Original Message-----
> From: lldb-dev On Behalf Of Gábor Márton via lldb-dev
> Sent: Wednesday, September 19, 2018 11:04 AM
> To: lldb-dev@lists.llvm.org
> Subject: Re: [lldb-dev] skip some tests with "check-lldb"
>
> I just realized that `dotest.py` has a --thread option. Is that the one
> which is used during the lit test (`ninja check-lldb`) ?
>
> On Wed, Sep 19, 2018 at 6:00 PM Gábor Márton wrote:
> >
> > Hi,
> >
> > I'd like to skip some tests when I run "ninja check-lldb", because they fail.
> > I am on release_70 branch.
> > I know I could use dotest.py directly, but that would exercise only one thread.
> > Is there a way to execute the tests parallel on all cores and in the
> > same time skip some of the tests?
> >
> > Thanks,
> > Gabor
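For what it's worth, dotest.py has (in at least some versions) an `--excluded <file>` option that reads a file listing tests to skip; whether that flag exists on release_70 and is reachable through `ninja check-lldb` is an assumption to verify with `dotest.py --help`. The filtering such an exclusion file performs is simple enough to sketch standalone:

```python
def parse_exclusions(text):
    """Parse an exclusion file: one test name per line,
    blank lines and '#' comments ignored."""
    names = set()
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            names.add(line)
    return names

def filter_tests(tests, exclusions):
    """Keep only the tests not named in the exclusion set."""
    return [t for t in tests if t not in exclusions]

# Hypothetical exclusion-file contents for known failures.
skip_file = """
# known failures on release_70
TestDataFormatterObjC.py
TestMultithreaded.py
"""

tests = ["TestHelloWorld.py", "TestMultithreaded.py", "TestBreakpoints.py"]
print(filter_tests(tests, parse_exclusions(skip_file)))
# ['TestHelloWorld.py', 'TestBreakpoints.py']
```

The test names above are made up for the example; the point is only that an exclusion file keeps the skip list out of the test sources, unlike moving test files around.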
Re: [lldb-dev] [RFC] LLDB Reproducers
Great, thanks. This means that the lldb-server issues are not in scope for this feature, right? On Wed, Sep 19, 2018 at 10:09 AM, Jonas Devlieghere wrote: > > > On Sep 19, 2018, at 6:49 PM, Leonard Mosescu wrote: > > Sounds like a fantastic idea. > > How would this work when the behavior of the debugee process is > non-deterministic? > > > All the communication between the debugger and the inferior goes through > the > GDB remote protocol. Because we capture and replay this, we can reproduce > without running the executable, which is particularly convenient when you > were > originally debugging something on a different device for example. > > > On Wed, Sep 19, 2018 at 6:50 AM, Jonas Devlieghere via lldb-dev < > lldb-dev@lists.llvm.org> wrote: > >> Hi everyone, >> >> We all know how hard it can be to reproduce an issue or crash in LLDB. >> There >> are a lot of moving parts and subtle differences can easily add up. We >> want to >> make this easier by generating reproducers in LLDB, similar to what clang >> does >> today. >> >> The core idea is as follows: during normal operation we capture whatever >> information is needed to recreate the current state of the debugger. When >> something goes wrong, this becomes available to the user. Someone else >> should >> then be able to reproduce the same issue with only this data, for example >> on a >> different machine. >> >> It's important to note that we want to replay the debug session from the >> reproducer, rather than just recreating the current state. This ensures >> that we >> have access to all the events leading up to the problem, which are >> usually far >> more important than the error state itself. >> >> # High Level Design >> >> Concretely we want to extend LLDB in two ways: >> >> 1. We need to add infrastructure to _generate_ the data necessary for >> reproducing. >> 2. We need to add infrastructure to _use_ the data in the reproducer to >> replay >> the debugging session. 
>> >> Different parts of LLDB will have different definitions of what data they >> need >> to reproduce their path to the issue. For example, capturing the commands >> executed by the user is very different from tracking the dSYM bundles on >> disk. >> Therefore, we propose to have each component deal with its needs in a >> localized >> way. This has the advantage that the functionality can be developed and >> tested >> independently. >> >> ## Providers >> >> We'll call a combination of (1) and (2) for a given component a >> `Provider`. For >> example, we'd have an provider for user commands and a provider for dSYM >> files. >> A provider will know how to keep track of its information, how to >> serialize it >> as part of the reproducer as well as how to deserialize it again and use >> it to >> recreate the state of the debugger. >> >> With one exception, the lifetime of the provider coincides with that of >> the >> `SBDebugger`, because that is the scope of what we consider here to be a >> single >> debug session. The exception would be the provider for the global module >> cache, >> because it is shared between multiple debuggers. Although it would be >> conceptually straightforward to add a provider for the shared module >> cache, >> this significantly increases the complexity of the reproducer framework >> because >> of its implication on the lifetime and everything related to that. >> >> For now we will ignore this problem which means we will not replay the >> construction of the shared module cache but rather build it up during >> replaying, as if the current debug session was the first and only one >> using it. >> The impact of doing so is significant, as no issue caused by the shared >> module >> cache will be reproducible, but does not limit reproducing any issue >> unrelated >> to it. >> >> ## Reproducer Framework >> >> To coordinate between the data from different components, we'll need to >> introduce a global reproducer infrastructure. 
We have a component >> responsible >> for reproducer generation (the `Generator`) and for using the reproducer >> (the >> `Loader`). They are essentially two ways of looking at the same unit of >> repayable work. >> >> The Generator keeps track of its providers and whether or not we need to >> generate a reproducer. When a problem occurs, LLDB will request the >> Generator >> to generate a reproducer. When LLDB finishes successfully, the Generator >> cleans >> up anything it might have created during the session. Additionally, the >> Generator populates an index, which is part of the reproducer, and used >> by the >> Loader to discover what information is available. >> >> When a reproducer is passed to LLDB, we want to use its data to replay the >> debug session. This is coordinated by the Loader. Through the index >> created by >> the Generator, different components know what data (Providers) are >> available, >> and how to use them. >> >> It's important to note that in order to create a complete reproducer, we >> will >>
Re: [lldb-dev] [RFC] LLDB Reproducers
I assume that reproducing race conditions is out of scope?

Also, will it be possible to incorporate these reproducers into the test suite somehow? It would be nice if we could create a tar file similar to a linkrepro, check in the tar file, and then have a test where you don't have to write any Python code, any Makefile, any source code, or anything else for that matter. It just enumerates all of these repro tar files in a certain location and runs that test.

On Wed, Sep 19, 2018 at 10:48 AM Leonard Mosescu via lldb-dev <lldb-dev@lists.llvm.org> wrote:
> Great, thanks. This means that the lldb-server issues are not in scope for this feature, right?
>
> On Wed, Sep 19, 2018 at 10:09 AM, Jonas Devlieghere <jdevliegh...@apple.com> wrote:
>>
>>> On Sep 19, 2018, at 6:49 PM, Leonard Mosescu wrote:
>>>
>>> Sounds like a fantastic idea.
>>>
>>> How would this work when the behavior of the debugee process is non-deterministic?
>>
>> All the communication between the debugger and the inferior goes through the GDB remote protocol. Because we capture and replay this, we can reproduce without running the executable, which is particularly convenient when you were originally debugging something on a different device, for example.
>>
>>> On Wed, Sep 19, 2018 at 6:50 AM, Jonas Devlieghere via lldb-dev <lldb-dev@lists.llvm.org> wrote:
>>>
>>> [...]
>>>
>>> With one exception, the lifetime of the provider coincides with that of the `SBDebugger`, because that is the scope of what we consider here to be a single debug session. The exception would be the provider for the global module cache, because it is shared between multiple debuggers. Although it would be conceptually straightforward to add a provider for the shared module cache, this significantly increases the complexity of the reproducer framework because of its implications for the lifetime and everything related to that.
>>>
>>> For now we will ignore this problem, which means we will not replay the construction of the shared module cache but rather build it up during replay, as if the current debug session were the first and only one using it. The impact of doing so is significant, as no issue caused by the shared module cache will be reproducible, but it does not limit reproducing any issue unrelated to it.
>>>
>>> ## Reproducer Framework
>>>
>>> To coordinate between the data from different components, we'll need to introduce a global reproducer infrastructure. We have a component responsible for reproducer generation (the `Generator`) and one for using the reproducer (the `Loader`). They are essentially two ways of looking at the same unit of replayable work.
>>>
>>> The Generator keeps track of its providers and whether or not we need to generate a reproducer.
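The Provider concept described in the RFC — each component knows how to capture its own data, serialize it into the reproducer, and deserialize it again on replay — could be sketched roughly as below. All names (`Provider`, `CommandProvider`, `Keep`, `Load`) are illustrative assumptions, not the actual LLDB API.

```cpp
#include <fstream>
#include <string>
#include <vector>

// Hypothetical sketch of a Provider: a component-local capture/replay unit.
class Provider {
public:
  virtual ~Provider() = default;
  // Serialize captured state into the reproducer directory.
  virtual void Keep(const std::string &root) = 0;
  // Load previously captured state from a reproducer directory.
  virtual void Load(const std::string &root) = 0;
};

// A provider that captures the commands typed by the user.
class CommandProvider : public Provider {
public:
  void Record(const std::string &command) { commands_.push_back(command); }

  void Keep(const std::string &root) override {
    std::ofstream out(root + "/commands.txt");
    for (const auto &c : commands_)
      out << c << '\n';
  }

  void Load(const std::string &root) override {
    commands_.clear();
    std::ifstream in(root + "/commands.txt");
    std::string line;
    while (std::getline(in, line))
      commands_.push_back(line);
  }

  const std::vector<std::string> &Commands() const { return commands_; }

private:
  std::vector<std::string> commands_;
};
```

A dSYM provider would follow the same interface but copy files into the reproducer root instead of writing a command log, which is what lets each component develop and test its capture logic independently.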
Re: [lldb-dev] [RFC] LLDB Reproducers
By the way, several weeks / months ago I had an idea for exposing a debugger object model. That would be one very powerful way to create reproducers, but it would be a large effort. The idea is that if every important part of your debugger is represented by some component in a debugger object model, and all interactions (including internal interactions) go through the object model, then you can record every state change to the object model and replay it.

On Wed, Sep 19, 2018 at 10:59 AM Zachary Turner wrote:
> I assume that reproducing race conditions is out of scope?
>
> Also, will it be possible to incorporate these reproducers into the test suite somehow?
>
> [...]
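The object-model idea above — route every mutation through a central model that journals each change, then replay by re-applying the journal — could be sketched as follows. The names (`DebuggerModel`, `StateChange`, `Apply`) are hypothetical, purely to illustrate the recording/replay shape.

```cpp
#include <string>
#include <vector>

// A journaled state change; in a real model this would be typed per
// component (process, thread, target, ...).
struct StateChange {
  std::string component; // e.g. "process", "thread"
  std::string event;     // e.g. "launched", "stopped"
};

// Hypothetical central object model: every interaction goes through
// Apply(), which journals the change before updating component state.
class DebuggerModel {
public:
  void Apply(const StateChange &change) {
    journal_.push_back(change);
    // ... update the actual component state here ...
  }

  // Replay a previously recorded journal into a fresh model.
  void Replay(const std::vector<StateChange> &journal) {
    for (const auto &c : journal)
      Apply(c);
  }

  const std::vector<StateChange> &Journal() const { return journal_; }

private:
  std::vector<StateChange> journal_;
};
```

The power of this design is that replay is just re-application of the journal, but as noted in the thread it only works if *all* interactions, including internal ones, go through the model.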
Re: [lldb-dev] [RFC] LLDB Reproducers
There are a couple of problems with using these reproducers in the testsuite. The first is that we make no commitments that a future lldb will implement the "same" session with the same sequence of gdb-remote packet requests. We often monkey around with lldb's sequences of requests to make things go faster. So some future lldb will end up making a request that wasn't in the data from the reproducer, and at that point we won't really know what to do. The Provider for gdb-remote packets should record the packets it receives - not just the answers it gives - so it can detect this error and not go off the rails. But I'm pretty sure it isn't worth the effort to try to get lldb to maintain all the old sequences it used in the past in order to support keeping the reproducers alive. But this does mean that this is an unreliable way to write tests.

The second is that the reproducers as described have no notion of "expected state". They are meant to go along with a bug report where the "x was wrong" part is not contained in the reproducer. That would be an interesting thing to think about adding, but I think the problem space here is complicated enough already... You can't write a test if you don't know the correct end state.

Jim

> On Sep 19, 2018, at 10:59 AM, Zachary Turner via lldb-dev wrote:
>
> I assume that reproducing race conditions is out of scope?
>
> Also, will it be possible to incorporate these reproducers into the test suite somehow?
>
> [...]
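Jim's point that the gdb-remote provider should record the requests it receives, not just the replies it gives, so replay can detect when a newer lldb diverges from the recorded sequence, might look like this sketch (hypothetical names, not the actual LLDB implementation):

```cpp
#include <optional>
#include <string>
#include <utility>
#include <vector>

// Records each gdb-remote request together with its reply. On replay, a
// request is only answered if it matches the request recorded at the same
// position in the session; otherwise we report divergence instead of
// silently returning the wrong reply.
class PacketProvider {
public:
  void Record(const std::string &request, const std::string &reply) {
    log_.emplace_back(request, reply);
  }

  // Replay mode: returns the recorded reply, or nullopt if the incoming
  // request diverges from the recording (or the recording is exhausted).
  std::optional<std::string> Replay(const std::string &request) {
    if (pos_ >= log_.size() || log_[pos_].first != request)
      return std::nullopt; // sequence diverged from the recording
    return log_[pos_++].second;
  }

private:
  std::vector<std::pair<std::string, std::string>> log_;
  size_t pos_ = 0;
};
```

Returning "no answer" on mismatch is exactly the detectable failure Jim describes: the replayed session stops with a clear divergence instead of going off the rails when a future lldb reorders its packet sequence.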
Re: [lldb-dev] [RFC] LLDB Reproducers
Yes, I think that would be pretty cool. It is along the same lines we've been talking about with using "ProcessMock", "ThreadMock" etc. plugins. However, I think you need both. For instance, if we bobble a gdb-remote packet, you will see that in a bad state of one of these higher level state descriptions, but without the actual packet traffic you wouldn't have that much help figuring out what actually went wrong. OTOH, things like packet level recording will likely be much less stable than capturing state at a higher level.

Jim

> On Sep 19, 2018, at 11:10 AM, Zachary Turner via lldb-dev wrote:
>
> By the way, several weeks / months ago I had an idea for exposing a debugger object model. That would be one very powerful way to create reproducers, but it would be a large effort.
>
> [...]
[lldb-dev] [LLD] How to get rid of debug info of sections deleted by garbage collector
Hi, After compiling an example.cpp file with "-c -ffunction-sections" and linking with "--gc-sections" (used ld.lld), I am still seeing debug info for the sections deleted by the garbage collector in the generated executable. Are there any compiler/linker options and/or other tools in LLVM to get rid of the above-mentioned unneeded debug info? If such options do not exist, what needs to be changed in the linker (lld)? Thanks, Ramana