Re: [PATCH 0/4] libdwfl: make dwfl_addrmodule work for Linux kernel modules
On Thu, 2019-12-12 at 21:03 -0800, Omar Sandoval wrote:
> Reading the kernel code, the main sections are indeed contiguous. So
> this was entirely my bug. Sorry for the noise!
>
> On the bright side, patch 2 ("libdwfl: remove broken coalescing logic in
> dwfl_report_segment") does seem like a legitimate bug.

It does indeed. How curious this code never worked as advertised.

I agree that it is probably best to just document that IDENT should be
NULL and is ignored. It comes with a testcase and removes various extra
fields from Dwfl_Module.

Applied to master.

Thanks,

Mark
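For context, a minimal sketch of how a caller reports a segment with
dwfl_report_segment, passing NULL for the IDENT argument that the thread
agrees should be documented as ignored. The helper name and the way the
phdr/bias values arrive are illustrative assumptions, not elfutils code:

  #include <elfutils/libdwfl.h>

  /* Report one program header as a segment.  IDENT is documented to be
     ignored, so always pass NULL.  Returns the segment index actually
     assigned, or -1 on error (see dwfl_errmsg (-1)).  */
  static int
  report_one_segment (Dwfl *dwfl, int ndx, const GElf_Phdr *phdr,
                      GElf_Addr bias)
  {
    return dwfl_report_segment (dwfl, ndx, phdr, bias, NULL);
  }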
Re: rfc/patch: debuginfod client $DEBUGINFOD_PROGRESS env var
Hi Frank,

On Thu, 2019-12-12 at 12:18 -0500, Frank Ch. Eigler wrote:
> > I like this idea but have a bit of a security concern about the ability
> > to set it to any file. If someone manages to set it then they can
> > overwrite anything a process using the library can write to.
>
> That's not that serious a category of concern. Environment variables
> are not under control of untrusted agents. FWIW, $DEBUGINFOD_CACHE is
> considerably more dangerous in that regard (cache cleaning!).

You have a way of making me more scared of security issues, not less :)

It would actually be pretty bad if a user set DEBUGINFOD_CACHE to e.g.
their home directory by mistake. Could we have some extra safeguard
there? For example, if the directory already exists, check whether it is
completely empty, or, if it isn't empty, whether it contains a
cache_clean_interval file? Or at least only delete files that follow our
creation pattern: /[debuginfo|executable|source]?

This doesn't need to be in your patch, but if you think something like
that isn't insane, the paranoid in me would like to add some extra
safeguards so we don't wipe out someone's whole home dir by mistake.

> > I'd rather it would just use stderr always when set since it is only
> > meant as a human debugging device.
>
> That default makes sense. Making it configurable gives a way for a
> larger app to still capture output (by having the client direct it to
> a pipe file descriptor or something), but I'll just hardcode for now
> STDERR_FILENO if the env var is set.

That seems like a good default. Thanks.

> > > Some larger usage experience (systemtap/kernels) indicates
> > > the default timeout is too short, so bumped it to 30s.
>
> > My first thought is that we might need two timeouts. The original 5s
> > timeout is a good inactivity timeout [...] But for larger data we
> > might want a timeout for the total download time, maybe 30 seconds
> > is too low for that. Given how large kernel debuginfo is it might
> > take several minutes.
>
> OK, redrafting $DEBUGINFOD_TIMEOUT as a two-part variable:
>
> $DEBUGINFOD_TIMEOUT=x,y
> x - connection timeout, default 5
> y - optional, overall timeout, default unlimited

I would call the first one progress_timeout, since I think it is about
making progress, not just about connecting, but let me comment on the
actual patch. Maybe we actually need 3 values :)

> > [...]
> > This seems to mean that if there is an error then the progress function
> > is never called, is that intended? Also for CURLM_CALL_MULTI_PERFORM we
> > continue/skip the progress function.
>
> If we have an error outcome, there is no more progress to report, so
> that should be OK.

Yeah, I guess if the first thing is an error, then no progress was ever
really made, so we just never report it. It is just a tiny change in
behavior, although I doubt anybody relies on the progress function being
called at least once.

Cheers,

Mark
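A minimal sketch of how the proposed two-part $DEBUGINFOD_TIMEOUT=x,y
variable could be parsed, assuming the defaults discussed above (5 seconds
for the first value, unlimited for the second). The function and variable
names are illustrative, not the actual debuginfod-client code:

  #include <stdlib.h>

  /* Illustrative defaults mirroring the proposal: 5s for the
     connection/progress timeout, no limit on the overall transfer.  */
  static long connect_timeout = 5;
  static long transfer_timeout = -1;   /* -1 means unlimited.  */

  /* Parse "$DEBUGINFOD_TIMEOUT=x[,y]": x is the connection/progress
     timeout in seconds, y (optional) is the overall transfer timeout.  */
  static void
  parse_timeout_env (void)
  {
    const char *s = getenv ("DEBUGINFOD_TIMEOUT");
    if (s == NULL || *s == '\0')
      return;

    char *end = NULL;
    long x = strtol (s, &end, 10);
    if (end != s)
      connect_timeout = x;

    if (end != NULL && *end == ',')
      transfer_timeout = strtol (end + 1, NULL, 10);
  }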
Re: rfc/patch: debuginfod client $DEBUGINFOD_PROGRESS env var
Hi Frank,

On Fri, 2019-12-13 at 11:57 -0500, Frank Ch. Eigler wrote:
> Pushed the following additional patch to fche/debuginfod-progress-env.
> Hand-tested on a ginormous download with various length timeouts; hard
> to test reliably within the elfutils testsuite (esp after removal of
> that induced debuginfod-side wait).

The code looks good in general. Do note that if you rebase/squash it
onto master there is a slight conflict with the curl_res/curlm_res
fixlet. Since GCC10 isn't out yet, just yell if this gives you trouble
and I'll do/test it for you.

> +static const long default_connect_timeout = 5;
> +static const long default_transfer_timeout = -1; /* unlimited */
> +

I would call it default_progress_timeout, because...

> @@ -492,7 +502,10 @@ debuginfod_query_server (debuginfod_client *c,
>                          CURLOPT_WRITEFUNCTION,
>                          debuginfod_write_callback);
>        curl_easy_setopt(data[i].handle, CURLOPT_WRITEDATA, (void*)&data[i]);
> -      curl_easy_setopt(data[i].handle, CURLOPT_TIMEOUT, (long) server_timeout);
> +      if (connect_timeout >= 0)
> +        curl_easy_setopt(data[i].handle, CURLOPT_CONNECTTIMEOUT, connect_timeout);
> +      if (transfer_timeout >= 0)
> +        curl_easy_setopt(data[i].handle, CURLOPT_TIMEOUT, transfer_timeout);
>        curl_easy_setopt(data[i].handle, CURLOPT_FILETIME, (long) 1);
>        curl_easy_setopt(data[i].handle, CURLOPT_FOLLOWLOCATION, (long) 1);
>        curl_easy_setopt(data[i].handle, CURLOPT_FAILONERROR, (long) 1);

I would add something like:

  /* Make sure there is at least some progress,
     try to get at least 1K per progress timeout seconds.  */
  curl_easy_setopt(curl, CURLOPT_LOW_SPEED_TIME, progress_timeout);
  curl_easy_setopt(curl, CURLOPT_LOW_SPEED_LIMIT, 1024L);

The idea being that if we didn't at least get 1K per 5 seconds, then the
connection is so bad that it doesn't make sense to wait for it to finish,
since that will most likely take forever (or feel like it for the user).
This might or might not be a separate timeout from the connect timeout,
but I think they are kind of the same thing.

Cheers,

Mark
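Putting the pieces of this exchange together, a self-contained sketch of
how the three knobs map onto libcurl options for a single easy handle.
The helper name and the way the timeouts are passed in are assumptions
for illustration; only the CURLOPT_* semantics are libcurl facts:

  #include <curl/curl.h>

  /* Apply the timeouts discussed above to one libcurl easy handle.
     Timeouts are in seconds; a negative value means "leave the libcurl
     default / unlimited".  */
  static void
  apply_timeouts (CURL *curl, long connect_timeout, long transfer_timeout,
                  long progress_timeout)
  {
    if (connect_timeout >= 0)
      curl_easy_setopt (curl, CURLOPT_CONNECTTIMEOUT, connect_timeout);

    if (transfer_timeout >= 0)
      curl_easy_setopt (curl, CURLOPT_TIMEOUT, transfer_timeout);

    /* Abort if the transfer stays below 1K/s for progress_timeout
       seconds: CURLOPT_LOW_SPEED_LIMIT is bytes per second,
       CURLOPT_LOW_SPEED_TIME is how long it may stay that slow.  */
    if (progress_timeout > 0)
      {
        curl_easy_setopt (curl, CURLOPT_LOW_SPEED_LIMIT, 1024L);
        curl_easy_setopt (curl, CURLOPT_LOW_SPEED_TIME, progress_timeout);
      }
  }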
Re: rfc/patch: debuginfod client $DEBUGINFOD_PROGRESS env var
Hi -

> > That's not that serious a category of concern. Environment variables
> > are not under control of untrusted agents. FWIW, $DEBUGINFOD_CACHE is
> > considerably more dangerous in that regard (cache cleaning!).
>
> You have a way of making me more scared of security issues, not less :)
>
> It would actually be pretty bad if a user set DEBUGINFOD_CACHE to e.g.
> their home directory by mistake.

Yeah, those bothering to override the default path have to be careful.

> Could we have some extra safeguard there? For example, if the directory
> already exists, check whether it is completely empty, or, if it isn't
> empty, whether it contains a cache_clean_interval file? Or at least only
> delete files that follow our creation pattern:
> /[debuginfo|executable|source]?

This sounds prudent. Will work on that.

- FChE
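One possible form of the safeguard being agreed on here: refuse to clean a
cache directory that does not carry the cache_clean_interval marker file
the thread mentions. The function name and the exact policy are
illustrative assumptions, not what was ultimately implemented:

  #include <limits.h>
  #include <stdbool.h>
  #include <stdio.h>
  #include <sys/stat.h>

  /* Refuse to clean DIR unless it carries the marker file that the
     client itself creates, so a mis-set $DEBUGINFOD_CACHE (e.g. $HOME)
     is never swept.  */
  static bool
  cache_dir_looks_ours (const char *dir)
  {
    char marker[PATH_MAX];
    struct stat st;

    if (snprintf (marker, sizeof marker, "%s/%s",
                  dir, "cache_clean_interval") >= (int) sizeof marker)
      return false;             /* Path too long; play it safe.  */

    return stat (marker, &st) == 0 && S_ISREG (st.st_mode);
  }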
Re: rfc/patch: debuginfod client $DEBUGINFOD_PROGRESS env var
Hi -

> The code looks good in general. Do note that if you rebase/squash it
> onto master there is a slight conflict with the curl_res/curlm_res
> fixlet. Since GCC10 isn't out yet, just yell if this gives you trouble
> and I'll do/test it for you.

I'll figure it out and merge.

> [...]
> I would add something like:
>
>   /* Make sure there is at least some progress,
>      try to get at least 1K per progress timeout seconds.  */
>   curl_easy_setopt(curl, CURLOPT_LOW_SPEED_TIME, progress_timeout);
>   curl_easy_setopt(curl, CURLOPT_LOW_SPEED_LIMIT, 1024L);
>
> The idea being that if we didn't at least get 1K per 5 seconds, then the
> connection is so bad that it doesn't make sense to wait for it to finish,
> since that will most likely take forever (or feel like it for the user).

The problem with that is that, for a large download such as a kernel, it
can take almost a minute just to decompress the kernel-debuginfo rpm far
enough to start streaming the vmlinux file. (In the presence of caching
somewhere in the http proxy tree, it gets much better the second+ time.)
So any small default would be counterproductive for e.g. systemtap users:
they'd be forced to override this for basic usage.

- FChE