On Wed, Sep 5, 2012 at 12:44 PM, Rich Freeman <ri...@gentoo.org> wrote:
> On Wed, Sep 5, 2012 at 3:19 AM, Fabio Erculiani <lx...@gentoo.org> wrote:
>> The complexity would become:
>> O((b + r + p) * P)
>> b = number of build-time dependencies
>> r = number of runtime dependencies
>> p = number of post-dependencies
>> P = number of packages I need to apply the filter to
>>
>> Instead of just:
>> O(b * P)
>
> Well, actually, in both cases the complexity is O(n) in the total
> number of dependencies, assuming you only need to make a constant
> number of passes through the deps per package.
>
> The whole point of O notation is that it is about how an algorithm
> scales, not how long it takes.  On a dataset of any given size, an
> O(n) algorithm can actually be slower than an O(n^n) algorithm.
> The key, however, is that once the dataset gets large enough, the
> O(n) algorithm will always become faster.
>
> I tend to agree with Ciaran though - the time for some code to run
> through the dependencies array and do something isn't going to be
> very high.  And if a piece of code has to do it many times, there is
> nothing that says the package manager can't index it.
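
If by "index it" you mean memoizing the parsed dependency lists, then
something like the toy sketch below is presumably what you have in
mind (the metadata layout and all the names are invented here, purely
for illustration):

from functools import lru_cache

# Toy metadata store standing in for the real ebuild cache; the
# layout and the entries are made up.
METADATA = {
    "app-misc/foo-1.0": {
        "DEPEND": "dev-libs/libbar sys-libs/zlib",
        "RDEPEND": "dev-libs/libbar",
        "PDEPEND": "",
    },
    "app-misc/baz-2.1": {
        "DEPEND": "sys-libs/zlib",
        "RDEPEND": "sys-libs/zlib app-misc/foo",
        "PDEPEND": "app-misc/foo",
    },
}

@lru_cache(maxsize=None)
def parsed_deps(cpv, dep_type):
    # Parse once per (package, dep type); every later filter pass
    # is a cache hit, so running many filters stays cheap.
    return tuple(METADATA[cpv].get(dep_type, "").split())

def packages_depending_on(atom, dep_type="DEPEND"):
    # One linear scan over the P packages; the parse cost is paid
    # only on the first pass, no matter how many filters follow.
    return [cpv for cpv in METADATA
            if atom in parsed_deps(cpv, dep_type)]

print(packages_depending_on("sys-libs/zlib"))            # both packages
print(packages_depending_on("app-misc/foo", "PDEPEND"))  # baz only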

I don't want to diverge (again) from the main topic, but I think
you're oversimplifying it. When parsing an ebuild is hidden behind a
lot of abstraction layers, O(n) vs O(n/2) is a big difference, even
if both, once normalized, are still O(n). I would never design an API
on the assumption that O(n/2) equals O(n), because you don't know how
it is going to be used by the upper layers today, tomorrow, or in
five years.
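
As a toy illustration of that constant factor (the dep lists below
are made up, and the numbers will vary by machine):

import timeit

# One package's dependency lists; both scans below are O(n), but the
# second walks three lists instead of one.
b = ["dep%d" % i for i in range(1000)]   # build-time deps
r = list(b)                              # runtime deps
p = list(b)                              # post-deps

def scan_build_only(pred):
    # the O(b * P) case: filter build-time deps only
    return [d for d in b if pred(d)]

def scan_everything(pred):
    # the O((b + r + p) * P) case: walk all three lists
    return [d for d in b + r + p if pred(d)]

pred = lambda d: d.endswith("7")
print(timeit.timeit(lambda: scan_build_only(pred), number=1000))
print(timeit.timeit(lambda: scan_everything(pred), number=1000))
# Same complexity class, roughly triple the time per pass: exactly
# the kind of factor that gets hidden once you normalize to O(n).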

>
> Rich
>



-- 
Fabio Erculiani
