Re: patch 3/3 debuginfod client interruptability
On 11/18/19 2:50 AM, Frank Ch. Eigler wrote: >> Attached is a variant that adds debuginfod_begin and debuginfod_end >> (names matching elf/dwarf_begin/end) and adds a debuginfod_client >> handle to each other function. Thanks much for doing this! > > Sure, if you like. Would you be sympathetic to supporting a > client=NULL entrypoint to the lookup functions, ergo no begin/end, for > applications that don't want a progressfn callback? That way the > simple case looks simple. I'm not sure that's a good idea. Creating/destroying the client object doesn't seem onerous to me. A client=NULL entrypoint would mask bugs where the client pointer becomes NULL by mistake. As the API evolves, it'll very likely gain more state. It also seems simpler from a documentation and maintenance perspective to have everyone use the API in the same way. My 2c. Thanks, Pedro Alves
Re: patch 2/2 debuginfod server etc.
Hi, On Thu, 2019-11-14 at 07:29 -0500, Frank Ch. Eigler wrote: > > > -bin_PROGRAMS = debuginfod-find > > > +bin_PROGRAMS = debuginfod debuginfod-find > > > +debuginfod_SOURCES = debuginfod.cxx > > > +debuginfod_cxx_CXXFLAGS = -Wno-unused-functions > > > > Should that be debuginfod_CXXFLAGS? > > Why are there unused functions? > > I don't think there are in debuginfod itself. OK, let's get rid of this then. > > > +# g++ 4.8 on rhel7 otherwise errors gthr-default.h / > > > __gthrw2(__gthrw_(__pthread_key_create) undefs > > > > Could you explain this comment a bit more? > > Not sure, these might have been notes from diagnosing > the argp.h bug that messed with function attributes, > or something else. Let's remove it then. It is somewhat confusing. > > [...] > > So you give it directories with executables and directories with debug > > files. They are indexed if they have a build-id. You never provide a > > sources dir, but if something is indexed as a debug file then any > > source file referenced from it can be requested. > > Yes. > > > In theory you only need to provide the executable dirs, from that you > > can normally get the separate .debug files. > > That's if those debug files are installed in the proper system > location. A build tree mid-stripping is not like that. And > in the case of RPMs, it's not an option at all. > > > Why didn't you do that (only providing executable dirs)? > > And why do you allow sources to be found indirectly? > > Because there's no need to search/index source files. We can already > tell instantly whether they exist (when we parse the dwarf file) via a > stat(2). We index debuginfo/source rpms for their content because > that instant check is not possible. > > > Would it make sense to restrict the sources (prefix) dir? > > I don't see any reason, because ... > > > The questions above are simply because it makes me nervous that some, > > but not all, files that can be requested can indirectly be found > > anywhere. 
If you have an explicit debug dir, then it might make sense > > to also have an explicit sources dir. > > If the executables we find are trusted, then the source file paths > inside are trustworthy too. (We don't have an explicit debug dir.) > > > In general I think it would be good to think about a selinux security > > context for debuginfod. This is not urgent, but I think we should see > > if we can get someone to do a security audit. > > Will keep it in mind. > > > One concern I have is that if a process is run in a restricted security > > context in which it might not be able to access certain files, but it > > might have access to a local socket to debuginfod, then it can get > > access to files it normally wouldn't. > > Note that the systemd service runs under an unprivileged userid. > > > And if it could somehow write into any directory that debuginfod > > reads then it could construct a .debug file that points to any file > > on the system by adding it to the DWARF debug_lines file list. All > > this might be unlikely because this is all locally running > > processes. But it would be good to have someone who is a bit > > paranoid take a look if we aren't accidentally "leaking" files. > > I see where you're going with it, it's a reasonable concern. For now, > we can consider it covered by the "debuginfod should be given > trustworthy binaries" general rule. This does keep me slightly worried. Even "trustworthy binaries" could be produced by buggy compilers. I really think we should have some way to restrict which files on the file system can be served up. IMHO the default should be that no files outside directories explicitly given to debuginfod should be allowed to be provided to the client. It then also makes sense to provide extra source dirs, so you can check that any references to source files are actually correct/intended. 
> > > +.B "\-I REGEX" "\-\-include=REGEX" "\-X REGEX" "\-\-exclude=REGEX" > > > +Govern the inclusion and exclusion of file names under the search > > > +paths. [...] > > > > Could/Should this default to -I/all/paths/given/* so only files from > > under those paths are included and no files from outside the search > > paths? > > fullpath(3) canonicalization (and in a later patch, symlink following > mode) would mess that up. I know you're probably thinking about the > evil-source-file-escape problem, but I wouldn't handle that with an > index-time constraint like this. I envisioned using this more for > optimization purposes, if for example one wants to index only .x86_64. > rpms in a tree that has other architectures. Aha, yes, this is at a different level/time than what I was thinking of. > > > +and means that only one initial scan should performed. The default > > > +rescan time is 300 seconds. Receiving a SIGUSR1 signal triggers a new > > > +scan, independent of the rescan time (including if it was zero). > > > > 300 seconds seem somewhat arbitrary. But I don't have a better num
Re: patch 2/2 debuginfod server etc.
Hi Frank, On Fri, 2019-11-15 at 16:00 -0500, Frank Ch. Eigler wrote: > > > + string popen_cmd = string("/usr/bin/rpm2cpio " + > > > shell_escape(b_source0)); > > > > Why the hardcoded path? > > Could you check at startup if rpm2cpio is in the PATH? > > Hm, this is run under popen(), so it'll do $PATH searching for > us. Checking whether it is present at runtime ... hmmm ugh ... how > about an autoconf test instead? The worst thing that happens with > the > current code is that on a non-rpm system, if it does find .rpm files, > the code will print errors and otherwise ignore RPMs. OK, I think how you did it on the branch, just remove the path, let the shell find rpm2cpio, is fine for now. > > > + // extract this file to a temporary file > > > + char tmppath[PATH_MAX] = "/tmp/debuginfod.XX"; // XXX: > > > $TMP_DIR etc. > > > > Some other code uses: > > const char *tmpdir = getenv ("TMPDIR") ?: P_tmpdir; > > static const char suffix[] = "/debuginfod.XX"; > > Also PATH_MAX? > > OK -- and yeah if we getenv() we might need PATH_MAX. > Will try asprintf() here and in the client instead. It would be really nice if this code at least respected TMPDIR. > > Also I prefer checking against NULL, it is slightly more obvious (0 > > returns often means success). > > ... the C++ tradition is 0 for null pointers, but if you insist... No, if that is the C++ way then that is fine. > > > + Dwarf_Off offset = 0; > > > + Dwarf_Off old_offset; > > > + size_t hsize; > > > + > > > + while (dwarf_nextcu (dbg, old_offset = offset, &offset, &hsize, NULL, > > > NULL, NULL) == 0) > > > > These days I would prefer dwarf_get_units (). It is slightly higher > > level and immediately gives you the cudie and unit_type. > > Will look into that later. 
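The $TMPDIR handling Mark asks for, combined with Frank's plan to switch to asprintf(), might look roughly like this. A hypothetical sketch, not the patch's actual code: the helper name and the mkstemp(3) template suffix are assumptions.

```c
#define _GNU_SOURCE        /* for asprintf and the ?: extension */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Build a mkstemp(3) template under $TMPDIR (falling back to P_tmpdir,
   usually "/tmp") instead of a hardcoded "/tmp" path.  asprintf sizes
   the buffer itself, so PATH_MAX never enters into it.  Returns an
   open fd and stores the path in *tmppath, or returns -1.  */
static int make_tmp_file (char **tmppath)
{
  const char *tmpdir = getenv ("TMPDIR") ?: P_tmpdir;
  if (asprintf (tmppath, "%s/debuginfod.XXXXXX", tmpdir) < 0)
    return -1;
  int fd = mkstemp (*tmppath);   /* replaces the XXXXXX in place */
  if (fd < 0)
    free (*tmppath);
  return fd;
}
```

The caller owns both the fd and the returned path string, and should unlink the file when done.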
> > > > +{ > > > + Dwarf_Die cudie_mem; > > > + Dwarf_Die *cudie = dwarf_offdie (dbg, old_offset + hsize, > > > &cudie_mem); > > > + > > > + if (cudie == NULL) > > > +continue; > > > + if (dwarf_tag (cudie) != DW_TAG_compile_unit) > > > +continue; > > > + > > > + const char *cuname = dwarf_diename(cudie) ?: "unknown"; > > > + > > > + Dwarf_Files *files; > > > + size_t nfiles; > > > + if (dwarf_getsrcfiles (cudie, &files, &nfiles) != 0) > > > +continue; > > > > So you are really only interested in the file/line tables. > > In that case you could also use dwarf_next_lines which iterates through > > the debug line units directly, so you don't need to do the whole CU DIE > > tree iteration yourself (and it handles CUless tables). > > Ditto. I believe the code works as is, so it isn't urgent. I just think it would simplify things a bit. > > > + string waldo; > > > + if (hat[0] == '/') // absolute > > > +waldo = (string (hat)); > > > + else // comp_dir relative > > > +waldo = (string (comp_dir) + string("/") + string (hat)); > > > > Do you have to think about/handle a comp_dir that ends with a / ? > > Old debugedit truncated some strings by adding /// (to fill up the > > spaces till the '\0'...) Yes, terrible :{ > > It should just work(tm) if the debugger uses the documented convention > and preserves those extra slashes (just as if it preserved ../ and > such). See also the other email/review. So, we always add an '/' between comp_dir and the file. That should probably be explicitly documented because I wouldn't be surprised if some code doesn't when comp_dir already ends in a slash. I just double checked that dwarf_getsrclines always (unconditionally) adds a '/', so the use of dwarf_filesrc here does the right thing. Thanks, Mark
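The comp_dir joining convention Mark double-checked above — an unconditional '/' between comp_dir and a relative file name, doubled slashes and all — can be sketched like this. The `waldo` variable mirrors the quoted patch; `resolve_source_path` itself is a made-up name for illustration.

```c
#define _GNU_SOURCE        /* for asprintf */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Resolve a DWARF file name against comp_dir: absolute names are taken
   as-is, relative names get comp_dir + "/" prepended unconditionally,
   matching what dwarf_getsrclines does.  A comp_dir that already ends
   in '/' just yields a harmless doubled slash -- "a//b" and "a/b" name
   the same file to the kernel.  Returns a malloc'd string or NULL.  */
static char *resolve_source_path (const char *comp_dir, const char *name)
{
  char *waldo;
  if (name[0] == '/')                            /* absolute */
    waldo = strdup (name);
  else if (asprintf (&waldo, "%s/%s", comp_dir, name) < 0)  /* comp_dir relative */
    waldo = NULL;
  return waldo;
}
```

The doubled-slash case is worth keeping in mind when comparing these paths as strings rather than opening them.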
Re: patch 2/2 debuginfod server etc.
Hi, On Fri, 2019-11-15 at 14:54 -0500, Frank Ch. Eigler wrote: > > > +set -x > > > > Is this really necessary? > > It makes the log files very verbose. > > Or is that the intention? > > I've found it tricky to debug failing tests without this. OK, but note that you can place echo "starting phase x..." in the test and those will show up in the logs. > > > +# find an unused port number > > > +while true; do > > > +PORT1=`expr '(' $RANDOM % 1000 ')' + 9000` > > > +ss -atn | fgrep ":$PORT1" || break > > > +done > > > > Which package does ss come from? > > iproute-5.2.0-1.fc30.x86_64 > > > Make sure it is listed as a BuildRequires. > > OK, because of %check in the RPM? Yes, thanks. > > > +env LD_LIBRARY_PATH=$ldpath DEBUGINFOD_URLS= > > > ${abs_builddir}/../debuginfod/debuginfod - -d $DB \ > > > +-p $PORT1 -t0 -g0 R F & > > > +PID1=$! > > > +sleep 3 > > > > Is there no better way to test the server has started? > > This one may not be needed. > > > > +mv prog F > > > +mv prog.debug F > > > +kill -USR1 $PID1 > > > +sleep 3 # give enough time for scanning pass > > > > Hmm that hardcoded sleep 3 again :{ > > Well, here's the problem. We want to assert that the scan was > successful. We can't just reject a 404 response because the scan may > not have finished yet. So we'd have to race/loop/poll. But then we'd > need a timeout (how long?). It turns out to be the same thing, just > more complicated. > > > > > +mv testfile-debuginfod-0.rpm R > > > +mv testfile-debuginfod-1.rpm R > > > +mv testfile-debuginfod-2.rpm R > > > +kill -USR1 $PID1 > > > +sleep 10 > > > > Why 10? > > To give extra time for scanning RPMs. > > > > +kill -USR1 $PID1 # two hits of SIGUSR1 may be needed to resolve > > > .debug->dwz->srefs > > > +sleep 10 > > > > And another :{ > > Yes, again, same reasons as above. You can either have a timeout-poll > loop, or a time sleep and a single judgement poll. Now that we have the metrics maybe we can poll those to see if the new files have been indexed? 
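As an alternative to the random-port-plus-ss probing loop quoted above, the kernel can be asked directly for an unused port by binding port 0. This is a sketch of that technique, not what the test script does; it still leaves a small race window between close() and the server's own bind().

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Bind a TCP socket to port 0 so the kernel assigns a free port, then
   read the assigned port back with getsockname(2).  Returns the port
   number, or -1 on failure.  */
static int pick_free_port (void)
{
  struct sockaddr_in sa;
  socklen_t len = sizeof sa;
  int fd = socket (AF_INET, SOCK_STREAM, 0);
  if (fd < 0)
    return -1;
  memset (&sa, 0, sizeof sa);
  sa.sin_family = AF_INET;
  sa.sin_addr.s_addr = htonl (INADDR_LOOPBACK);
  sa.sin_port = 0;                       /* 0 = let the kernel choose */
  int port = -1;
  if (bind (fd, (struct sockaddr *) &sa, sizeof sa) == 0
      && getsockname (fd, (struct sockaddr *) &sa, &len) == 0)
    port = ntohs (sa.sin_port);
  close (fd);
  return port;
}
```

It avoids depending on the ss binary (and hence the iproute BuildRequires), at the cost of the close/rebind race.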
The reason I am complaining about this is that make check -j8 on my system takes (without valgrind): real 3m6.749s user 0m42.627s sys 0m31.588s Of which 2m seem to just be sleeps in run-debuginfod-find.sh. > > > +# Trigger a cache clean and run the tests again. The clients should be > > > unable to > > > +# find the target. > > > +echo 0 > $DEBUGINFOD_CACHE_PATH/cache_clean_interval_s > > > +echo 0 > $DEBUGINFOD_CACHE_PATH/max_unused_age_s > > > + > > > +testrun ${abs_builddir}/debuginfod_build_id_find -e F/prog 1 > > > + > > > +testrun ${abs_top_builddir}/debuginfod/debuginfod-find debuginfo > > > $BUILDID2 && false || true > > > > OK. But that means zero means never cache/always clean? > > I would have expected 0 to mean "forever". > > I see the man page doesn't specifically disclose the interpretation of > zero. A "no retention of prior results" purpose is useful, and is > consistent with 0 as per the text. A "retain forever" setting would > have to be a different value. Could you add the current interpretation of zero to the manual page? We could have some other setting (-1?) for "forever", but that doesn't seem urgent. Cheers, Mark
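The timeout-poll loop both sides keep weighing against a fixed sleep has this general shape. A generic sketch, not proposed test-suite code; the 100 ms probe interval, the helper names, and the toy predicate are all arbitrary.

```c
#include <stdbool.h>
#include <time.h>

/* Poll a readiness predicate until it returns true or 'deadline_s'
   seconds pass.  Returns the predicate's final verdict, so the caller
   still gets exactly one pass/fail judgement at the end -- Frank's
   point that polling merely relocates the timeout, it doesn't remove
   it.  */
static bool poll_until (bool (*ready) (void), int deadline_s)
{
  time_t deadline = time (NULL) + deadline_s;
  while (time (NULL) < deadline)
    {
      if (ready ())
        return true;
      struct timespec ts = { 0, 100 * 1000 * 1000 };  /* 100 ms probe gap */
      nanosleep (&ts, NULL);
    }
  return ready ();
}

/* Toy predicate for demonstration: becomes ready on the third probe.
   A real test would instead check a served file or a /metrics value.  */
static int probe_count;
static bool ready_after_three (void) { return ++probe_count >= 3; }
```

The upside over a fixed sleep is that the common case finishes as soon as the server is ready, which is what the 2m-of-sleeps complaint is about.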
Re: patch 2/2 debuginfod server etc.
Hi - > Now that we have the metrics maybe we can poll those to see if the new > files have been indexed? Sort of indirectly. But then we're back to polling, which itself needs a timeout, so the logic is made at least as complicated. > The reason I am complaining about this is that make check -j8 on my > system takes (without valgrind): > > real 3m6.749s > user 0m42.627s > sys 0m31.588s > > Of which 2m seem to just be sleeps in run-debuginfod-find.sh. (I wonder how that could be, considering the total sleep is 45 seconds.) > > I see the man page doesn't specifically disclose the interpretation of > > zero. A "no retention of prior results" purpose is useful, and is > > consistent with 0 as per the text. A "retain forever" setting would > > have to be a different value. > > Could you add the current interpretation of zero to the manual page? Done. - FChE
Re: patch 5 debuginfod: prometheus metrics
Hi, On Fri, 2019-11-15 at 12:57 -0500, Frank Ch. Eigler wrote: > Could you also add a reference to the Prometheus Exposition format. I > > see it is already in a comment in the code. Best to also add it as See > > also in the docs. > > OK. Thanks, that would be good. > > > +control. The \fI/metrics\fP webapi endpoint is probably not > > > +appropriate for disclosure to the public. > > > > So, should there be an option to turn it off? > > IMHO not necessary. The security section already advises against > exposing an unprotected debuginfod server to the public. A front-end > reverse-proxy would easily filter requests to /metrics. I think defense in depth is not a bad thing. You already have local users to which it is exposed. And it would also make the server do slightly less work. > > > +#ifdef __linux__ > > > +#define gettid() syscall(SYS_gettid) > > > +#else > > > +#define gettid() pthread_self() > > > +#endif > > > > You might want to rename this since newer glibc might expose gettid(). > > OK. Note that the current code defines tid () as syscall(SYS_getpid). Should be SYS_gettid. Cheers, Mark
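The rename Mark suggests could look like the following; `my_tid` is a placeholder name (newer glibc, 2.30+, exports a real gettid(2), hence the clash), and the non-Linux fallback to pthread_self() mirrors the quoted #ifdef.

```c
#include <pthread.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Renamed wrapper so it cannot clash with glibc's own gettid().
   Note SYS_gettid, not SYS_getpid -- the bug Mark points out above:
   with SYS_getpid every thread would report the same id.  */
#ifdef __linux__
#define my_tid() ((long) syscall (SYS_gettid))
#else
#define my_tid() ((long) pthread_self ())
#endif
```

In a single-threaded process the main thread's tid equals the pid, which gives an easy sanity check that the right syscall number is used.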
Re: patch 4 debuginfod: symlink following mode
Hi, On Fri, 2019-11-15 at 13:31 -0500, Frank Ch. Eigler wrote: > > > In order to support file/rpm archives that are organized via symlink > > > trees, add an "-L" option to debuginfod, meaning about the same as for > > > find(1) or ls(1): to traverse rather than ignore symlinks. > > > > Could you give an example of when exactly this is necessary? > > Because some file/rpm archives are organized via symlink trees. :-) > For example, a build system may have > >/packages/N/V/R/ARCH/*.rpm files, which are a legion > plus /compose/F30/SPIN/ARCH/ directories, which contain >symlinks into the former > or package archives stored over multiple NFS mounts, whose > contents are interleaved via symlinks. > > > I assume that it isn't really necessary for rpm archives, you probably > > don't want to follow any symlinks from an archive that point to > > something outside the archive, and you will see all files in the > > archive anyway. So following symlinks doesn't seem helpful there. > > These are not symlinks WITHIN rpms. These are symlinks encountered > during filesystem traversal looking for RPMs. > > > Also why combine symlink following with cross-device searches? > > Shouldn't that be separate options? > > They could be. They seemed to go well together in the cases I've seen > and thought about. Would you like another option now? No, I understand the use case now. And you are right it makes sense to combine them for such cases. But I still think it would be better to split the directory full of rpms case from the directory full of individual files case. The option makes perfect sense for the case you show, but I think it is not really appropriate to mix it with the individual exec/debug and source files case. Cheers, Mark
Re: patch 5 debuginfod: prometheus metrics
Hi - > > > see it is already in a comment in the code. Best to also add it as See > > > also in the docs. > > > > OK. > > Thanks, that would be good. Done. > > > > +control. The \fI/metrics\fP webapi endpoint is probably not > > > > +appropriate for disclosure to the public. > > > > > > So, should there be an option to turn it off? > > > > IMHO not necessary. The security section already advises against > > exposing an unprotected debuginfod server to the public. A front-end > > reverse-proxy would easily filter requests to /metrics. > > I think defense in depth is not a bad thing. > You already have local users to which it is exposed. Local users can already run "ps awux" to see the same semi-sensitive command line arguments. > And it would also make the server do slightly less work. Maybe, but if it's a serious/public enough installation to worry about configuration privacy, then it's also bound to be important enough to be monitored, so its admin would not turn this off. > Note that the current code defines tid () as syscall(SYS_getpid). > Should be SYS_gettid. OK. - FChE
Re: patch 1/2 debuginfod client
Hi, On Sat, 2019-11-16 at 13:52 -0500, Frank Ch. Eigler wrote: > Yeah, maybe a different automake version made it work after my earlier > failing tests. (dead horse PS. It remains my opinion that we should > commit auto* generated files into git.) > > > > Possible, but since these bits are just getting transcribed straight > > > to a debuginfod URL, an insane unclean hex will just get rejected at > > > that time. There is no lost-error case there. > > > > But since the user won't see the URL generated they might not notice > > what is really going on. They will just see that something wasn't > > found, won't they? > > Yes, so the only benefit would be the generation of a different error > message for impossible buildids. But if there are multiple server URLs it might not be clear which/where the error came from. Since all this is done through a very simple web api I think it is useful for the user to get informed about what the actual request URL was that failed. Then they could debug what exactly went wrong. Maybe my use case is not a common one. But I assume that people who use debuginfod-find directly/by hand are doing it already because they are interested in what is really going on. If we go with the client connection context idea for the API we could add an extra function that would tell you the last URL tried maybe? > > > Yeah ... if a user were to set DEBUGINFOD_CACHE_DIR to $HOME, she > > > will > > > get a good thorough cleaning, or a root user were to set it to /. > > > PEBCAK IMHO; don't see easy ways of protecting themselves against it. > > > > It might be unlikely the user accidentally set the environment > > variables to something odd, > > but they might be tricked into running it on a debug file that was > > doctored. Which might produce lookups for odd source files. It might > > even just be a buggy compiler that created a few ../.. too many. It > > would be bad if that would cause havoc. 
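The "different error message for impossible buildids" weighed earlier in this message would amount to a client-side syntax check before any URL is built. A hypothetical sketch — the client under review does no such check, and the helper name and the even-length/lowercase-hex policy are assumptions based on how build-ids are conventionally printed.

```c
#include <string.h>

/* Reject build-id strings that cannot possibly be valid before they
   are transcribed into a request URL: nonempty, even length (hex
   encodes whole bytes), lowercase hex digits only.  Failing here gives
   a clearer local error than a remote 404.  */
static int valid_buildid_hex (const char *s)
{
  size_t n = strlen (s);
  return n > 0 && n % 2 == 0
         && strspn (s, "0123456789abcdef") == n;
}
```

Whether the clearer message is worth the extra code is exactly the trade-off the thread is debating.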
> > Source file names do not survive into the client side cache - you > already found the "#" escaping bits. Yes, you are right, this works on a different level than I assumed. At the client request level it seems fine. I still am concerned about the indexing level though. There a buggy ../.. in some debug file line table might cause the indexer to include files which aren't really sources at all. > > That doesn't make me very happy. > > It looks like using libcurl from another library is not really very > > safe if the program or another library it links against are also > > libcurl users. > > > > Do you know how other libraries that use libcurl deal with this? > > I looked over some other libcurl users in fedora. Some don't worry > about the issue at all, relying on implicit initialization, which is > only safe if single-threaded. Others (libvirtd) do an explicitly > invoked initialization function, which is also only safe if invoked > from a single-threaded context. > > I think actually our way here, running the init function from the > shared library constructor, is about the best possible. As long as > the ld.so process is done as normal, it should be fine. Programs that > use the elfutils trick of manual dlopen'ing libdebuginfod.so should do > so only if they are single-threaded. But they cannot really control that... Since they might not know (and really shouldn't care) that libdw lazily tries to dlopen libdebuginfod.so which then pulls in libcurl and calls that global init function... Could we try to do the dlopen and global curl initialization from libdw _init, or a ctor, to make sure it is done as early as possible? 
And if so, which trusted roots does it > > use? And can that be disabled or changed? > > Whatever the compiled-in defaults are, /etc paths etc. IMHO we > shouldn't worry about them. I think it is fine to just mention in the docs that resolving and authenticating https URLs is done according to whatever the defaults are. But I think it should be documented what those are. > > > I suppose that wouldn't make any sense. Anything will take a nonzero > > > amount of time. :-) Maybe it could be a floating point number or > > > something; worth it? > > > > I was more thinking zero == infinity (no timeout). > > An unset environment variable should do that. Are you sure? If DEBUGINFOD_TIMEOUT isn't set, then it seems it defaults to 5 seconds: /* Timeout for debuginfods, in seconds. This env var must be set for debuginfod-client to run. */ static const char *server_timeout_envvar = DEBUGINFOD_TIMEOUT_ENV_VAR; static int server_timeout = 5; [...] if (getenv(server_timeout_envvar)) server_timeout = atoi (getenv(server_timeout_envvar));
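The quoted default-timeout logic, plus the thread's tentative "empty but set means no timeout" idea, can be condensed into one small function. A sketch only: the empty-string branch is purely the proposal under discussion, not the client's current behavior, and the function name is made up.

```c
#include <stdlib.h>

/* Interpret the DEBUGINFOD_TIMEOUT value the way the quoted client
   code does: unset falls back to the compiled-in 5-second default,
   anything else goes through atoi().  The empty-string case encodes
   the thread's proposal that "set but empty" could mean no timeout
   (0 standing in for infinity here).  */
static int parse_server_timeout (const char *env)
{
  if (env == NULL)
    return 5;            /* compiled-in default */
  if (env[0] == '\0')
    return 0;            /* hypothetical: empty = no timeout */
  return atoi (env);
}
```

Note that atoi() of garbage like "abc" silently yields 0, which is one argument for the stricter parsing the thread hints at.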
[Bug libdw/25173] dwarf_getsrc_die fails for rust code
https://sourceware.org/bugzilla/show_bug.cgi?id=25173 Josh Stone changed: What|Removed |Added CC||jistone at redhat dot com --- Comment #3 from Josh Stone --- (In reply to Mark Wielaard from comment #1) > It might help if you could attach the binary. Locally (Fedora 31, rustc > 1.38.0) I cannot replicate. Neither binutils addr2line, nor elfutils > eu-addr2line seem able to resolve addresses to source lines. Note that rpmbuild strips Rust's rlibs (static archives), so you won't have any debuginfo for pre-compiled code from the standard library. You should still be able to resolve the user's own code though, as well as generic std code that gets monomorphized in the user's compilation. $ bat main.rs ───┬── │ File: main.rs ───┼── 1 │ fn main() { 2 │ println!("Hello, world!"); 3 │ } ───┴── $ rustc -g main.rs $ nm main | grep main U __libc_start_main@@GLIBC_2.2.5 53d0 T main 5390 t _ZN4main4main17h6b07ac6c6f06cdd6E $ addr2line -e main 5390 /tmp/hello/src/main.rs:1 $ eu-addr2line -e main 5390 ??:0 -- You are receiving this mail because: You are on the CC list for the bug.
[Bug libdw/25173] dwarf_getsrc_die fails for rust code
https://sourceware.org/bugzilla/show_bug.cgi?id=25173 --- Comment #4 from Josh Stone --- I can also reproduce this with Clang (clang-9.0.0-1.fc31.x86_64), so it seems to be a more general problem with LLVM as the producer. $ bat main.c ───┬── │ File: main.c ───┼── 1 │ #include <stdio.h> 2 │ int main() { 3 │ printf("Hello, world!\n"); 4 │ return 0; 5 │ } ───┴── $ clang -g main.c -o main $ nm main | grep main U __libc_start_main@@GLIBC_2.2.5 00401130 T main $ addr2line -e main 401130 /tmp/hello/src/main.c:2 $ eu-addr2line -e main 401130 ??:0 When compiled with GCC, eu-addr2line resolves it fine. -- You are receiving this mail because: You are on the CC list for the bug.
[Bug libdw/25173] dwarf_getsrc_die fails for rust code
https://sourceware.org/bugzilla/show_bug.cgi?id=25173 --- Comment #5 from Mark Wielaard --- I suspect this is the same as https://sourceware.org/bugzilla/show_bug.cgi?id=22288 If so then rustc -Cllvm-args=-generate-arange-section should resolve it. -- You are receiving this mail because: You are on the CC list for the bug.
Re: patch 2/2 debuginfod server etc.
Hi - > > I see where you're going with it, it's a reasonable concern. For now, > > we can consider it covered by the "debuginfod should be given > > trustworthy binaries" general rule. > > This does keep me slightly worried. Even "trustworthy binaries" could > be produced by buggy compilers. Those would be untrustworthy binaries. > I really think we should have some way to restrict which files on > the file system can be served up. IMHO the default should be that no > files outside directories explicitly given to debuginfod should be > allowed to be provided to the client. It then also makes sense to > provide extra source dirs, so you can check that any references to > source files are actually correct/intended. It's not so easy. For build trees, source files can include /usr/include/*, which do not contain executables and so don't need indexing. The use of symlinks in some configury/build systems makes filtering after the fact difficult too - the realpath of object files and source trees need not even be near, or in a single place. Would you be satisfied if the -F / -R flags were restored, so that -F would be required in order to start file-scanning threads (and similar for -R)? Then all this does not arise, because people who don't trust their compilers wouldn't run debuginfod in -F mode. > > I was thinking [300s] it's about right for a developer's live build tree. > > OK, but those aren't included by default. A concern about the systemd service defaults can be addressed at the systemd service / sysconfig level. Would you prefer a day for that? > > > So basically never, ever use [-G]? :) > > > Can you add a hint when you should use it? > > > > See also the DATA MANAGEMENT section. :-) > > Which says: > >There is also an optional one-shot \fImaximal grooming\fP pass >available. It removes debuginfo-unrelated data from the >RPM content index, but it is slow and temporarily requires up to >twice the database size as free space. 
Worse: it may result in >missing source-code info if the RPM traversals were >interrupted. Use it rarely to polish a complete index. > > Which suggests using it to polish by removing debuginfo-unrelated data. > I am still not sure what that unrelated data is or when I really want > to do this. It doesn't really list any quantifiable benefits. OK, elaborating this point in the man page. > > The text says that debuginfod does not provide https service at all, > > so this is not relevant. A front-end https reverse-proxy would have > > to deal with this stuff. I plan to cover this in a blog post later > > on, and probably in the form of software baked into a container image. > > But debuginfod might use HTTPS services itself for fallback. I think it > is important to describe how/if those https ssl/tls connections are > authenticated. OK, will add a blurb. - FChE
[Bug libdw/25173] dwarf_getsrc_die fails for rust code
https://sourceware.org/bugzilla/show_bug.cgi?id=25173 --- Comment #6 from Josh Stone --- Aha, I thought this was familiar. I guess I should try to implement that aranges change in rustc after all. However, I think elfutils should probably still try to deal without aranges, on par with binutils. -- You are receiving this mail because: You are on the CC list for the bug.
Re: patch 1/2 debuginfod client
Hi - > > > But since the user won't see the URL generated they might not notice > > > what is really going on. They will just see that something wasn't > > > found, won't they? > > > > Yes, so the only benefit would be the generation of a different error > > message for impossible buildids. > > But if there are multiple server URLs it might not be clear which/where > the error came from. (My comment was about detecting even number of chars in the hex code.) > Since all this is done through a very simple web > api I think it is useful for the user to get informed about what the > actual request URL was that failed. [...] > If we go with the client connection context idea for the API we could > add an extra function that would tell you the last URL tried maybe? Yeah, maybe. They are tried in parallel. We could also hook up to libcurl's own progress-notification callbacks. > > > Do you know how other libraries that use libcurl deal with this? > > > > I looked over some other libcurl users in fedora. Some don't worry > > about the issue at all, relying on implicit initialization, which is > > only safe if single-threaded. Others (libvirtd) do an explicitly > > invoked initialization function, which is also only safe if invoked > > from a single-threaded context. > > > > I think actually our way here, running the init function from the > > shared library constructor, is about the best possible. As long as > > the ld.so process is done as normal, it should be fine. Programs that > > use the elfutils trick of manual dlopen'ing libdebuginfod.so should do > > so only if they are single-threaded. > > But they cannot really control that... Since they might not know (and > really shouldn't care) that libdw lazily tries to dlopen > libdebuginfod.so which then pulls in libcurl and calls that global init > function... > > Could we do try to do the dlopen and global curl initialization from > libdw _init, or a ctor, to make sure it is done as early as possible? 
Doing a redundant initialization later is not a problem; there is a counter in there. The problematic case would be - a multithreaded application - loading debuginfod.so multiply concurrently somehow - and calling the solib ctor concurrently somehow - and all of this concurrently enough to defeat libcurl's init-counter IMHO, not worth worrying about. Someday libcurl will do the right thing (tm) and plop this initialization into their solib ctor. > > > I was more thinking zero == infinity (no timeout). > > > > An unset environment variable should do that. > > Are you sure? If DEBUGINFOD_TIMEOUT isn't set, then it seems it > defaults to 5 seconds: > > /* Timeout for debuginfods, in seconds. >This env var must be set for debuginfod-client to run. */ > static const char *server_timeout_envvar = DEBUGINFOD_TIMEOUT_ENV_VAR; > static int server_timeout = 5; > [...] > > if (getenv(server_timeout_envvar)) > server_timeout = atoi (getenv(server_timeout_envvar)); OK, hm, we could make an -empty- but set environment variable mean 'infinity'. Then again, a user can also say =9. - FChE
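The once-only initialization property being debated here — libcurl's plain init counter versus racing threads — is exactly what pthread_once(3) provides: the init function runs exactly once even if several threads race into it. A self-contained sketch in which a counter stands in for curl_global_init(), since this is the pattern under discussion rather than the library's actual code:

```c
#include <pthread.h>

/* pthread_once guarantees do_global_init runs exactly once, no matter
   how many threads call ensure_initialized concurrently -- the
   guarantee a bare "if (!initialized) init();" counter lacks.  */
static int init_calls;                       /* stands in for libcurl's state */
static pthread_once_t once = PTHREAD_ONCE_INIT;

static void do_global_init (void)
{
  init_calls++;    /* a real client would call curl_global_init here */
}

static void ensure_initialized (void)
{
  pthread_once (&once, do_global_init);
}
```

Calling ensure_initialized() from a shared-library constructor, as the thread's client does, then makes later redundant calls harmless by construction.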