Quoting Michael Niedermayer (2024-03-27 22:54:14)
> On Fri, Mar 22, 2024 at 11:29:31AM +0100, Anton Khirnov wrote:
> > Quoting Michael Niedermayer (2024-03-22 03:25:25)
> [...]
> > > alternative is "wont fix" for all such cases,
> >
> > IMO it's not, in general, a bug, so EWONTFIX is the appropriate
> > response. If the user does not want us to do arbitrarily large
> > allocations, they have the appropriate OS-level mechanisms (e.g.
> > ulimit, cgroups on Linux) or av_max_alloc().
>
> You misunderstand the issue.
>
> The issue is coverage in the fuzzer.
>
> If the full 32bit channel number range is allowed, then in some
> decoders and demuxers you will in 99.9% of the cases never get beyond
> the channel processing code, because it will time out or hit OOM.
>
> Your suggestion of ulimits, cgroups and other limits doesn't help.
> We already have both time and space limits in the fuzzers.
>
> The below simplifies things a bit:
>
> If 99.9% of the random 32bit channel numbers die in the channel
> processing code because of the current limits, then making the limits
> tighter will only increase that percentage further.
>
> If you want better coverage you need a channel limit that stops us
> before a resource-intensive channel processing loop.
>
> You can also write down a model of this problem in a more formal way:
> Ht   = time spent reading the header
> Ct   = time spent processing each channel after the header
> Cmax = maximum number of channels that will continue execution after
>        the header
>
> You will see that a Cmax = 2^32 will never be able to do what a
> Cmax = 512 can do, no matter what external limits you apply.
>
> Because if you set really high external limits, then 99.9% of the
> time will be spent in the channel processing code: most of the time
> the channel number will be very large and nothing will stop it, so
> little time will be left for coverage afterwards.
>
> OTOH if you set a medium outside memory/time limit, then most channel
> cases will hit that limit, but only after running the full length of
> the time limit; here 99.9% of the cases will time out and take a lot
> of time, leaving no resources for coverage after the channel code.
>
> And if you set a really small outside memory/time limit, then maybe
> you will quickly stop the channel code, but now 99.999% of the cases
> will time out in the channel loop and what remains will not have
> enough time left to even execute all the code after the loop.
>
> So again, if you want fuzzer coverage there is a need for a channel
> limit of some sort.
>
> The alternative is to tell everyone that we will not fix this and
> then have bad fuzzer coverage for some cases.
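[Editor's note: a numeric sketch of the Ht/Ct/Cmax model quoted above,
not code from the thread. The constants Ht, Ct and T are made-up
illustrative values, channel counts are assumed uniform over the 32bit
range, memory limits are ignored, and a real fuzzer's corpus evolution
(which skews the input distribution) is not modeled. An input reaches
the code after the channel loop only if Ht + C*Ct <= T, so the metric
computed is deep-coverage inputs per CPU second of fuzzing.]

    /* Toy model of the coverage argument quoted above (editor's
     * sketch, not FFmpeg code; all constants are illustrative). */
    #include <stdio.h>

    int main(void)
    {
        const double Ht = 1e-4;         /* seconds to read the header      */
        const double Ct = 1e-6;         /* seconds per channel in the loop */
        const double T  = 10.0;         /* external per-input timeout      */
        const double N  = 4294967296.0; /* 2^32 possible channel counts    */
        const double cmax_vals[2] = { 512.0, N };

        for (int i = 0; i < 2; i++) {
            double cmax   = cmax_vals[i];
            double p_pass = cmax / N;      /* enters the channel loop      */
            double c_fit  = (T - Ht) / Ct; /* largest count finishing in T */
            double c_done = c_fit < cmax ? c_fit : cmax;
            double p_deep = c_done / N;    /* reaches code after the loop  */
            /* Expected wall time per input: inputs with C > Cmax are
             * rejected right after the header at cost Ht, inputs that
             * finish the loop cost Ht + C*Ct (mean C = c_done/2), and
             * the rest burn the full timeout T. */
            double e_time = (1.0 - p_pass) * Ht
                          + p_deep * (Ht + Ct * c_done / 2.0)
                          + (p_pass - p_deep) * T;
            printf("Cmax = %10.0f: deep-coverage inputs per CPU second = %g\n",
                   cmax, p_deep / e_time);
        }
        return 0;
    }

[In this toy model the tight limit wins by roughly a factor of
Cmax*Ct/Ht (about 5x with these numbers), independently of the timeout
T: with Cmax = 512 almost every input is rejected at cost Ht, while
with Cmax = 2^32 almost every input burns the full timeout inside the
channel loop, matching the cases described in the mail.]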
I understand that this is done for fuzzers, I just disagree that we
should introduce arbitrary limits into our code in order to appease
them. They should be tools for our benefit, not vice versa.

-- 
Anton Khirnov
