On Thu, Apr 12, 2007 at 01:14:08PM +0200, Loïc Minier <[EMAIL PROTECTED]> was heard to say:
> On Wed, Apr 11, 2007, Daniel Burrows wrote:
> >   One temporary solution for you might (I haven't tested this) be to
> > jack up the single-step bonus.  Change Aptitude::ProblemResolver::StepScore
> > from its default (10) to something huge (say, 1000) and aptitude should
> > start behaving more like a depth-first search than it is right now.
> > That is, it'll get a large bonus for chasing down a search tree, rather
> > than a modest improvement.  That tends to help it over "hills", but might
> > give you worse results than you'd otherwise get.  I'd be interested in
> > knowing how far you have to increase it before aptitude terminates and
> > whether the solution you get sucks.
>
>  This went impressively well: the resolver immediately found a solution
> and happily installed a set of packages which satisfied the
> dependencies of pbuilder-satisfydepends-dummy!  This was with the
> suggested score of 1000.
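  (For anyone following along: the override can be passed on the command
line or made persistent in apt's configuration.  The option name comes
from the discussion above; the file path and the use of `-o` for a
one-off test are assumptions about the usual apt conventions, so treat
this as a sketch.)

```
// One-off test from the command line (assuming aptitude's -o option):
//   aptitude -o Aptitude::ProblemResolver::StepScore=1000 install <package>

// Or persistently, e.g. in a file under /etc/apt/apt.conf.d/ (path is
// an assumption), using standard apt.conf syntax:
Aptitude::ProblemResolver::StepScore "1000";
```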
  Cool :)

> (I confirmed that not setting the score or using a score of 10 would
> still take a lot of time (lots of migration from experimental to
> unstable might change this).)
>
> Because I have a fast machine and I imagine slower machines, and
> because I saw how fast the resolver can go when the score is
> sufficiently high, my tests were based on the number of "Resolving
> dependencies..." lines I would see: after 5, I would consider the test
> a failure.
>
> I dropped the score in multiple steps down to 50, which went equally
> well (i.e., super fast with correct deps).  45 still required more time
> than I wanted to give.  48 went with 4 "Resolving dependencies..."
> lines and installed satisfying dependencies.

  Wow, that's less than I expected (the search behavior may be well
defined, but it's not always easy to predict).

> I suppose it would be nice to bump the default score a little; I
> imagine the scores in the above example are completely random and only
> have a value in this particular example.  Perhaps using a default value
> of 50 or 100 would give better results for people who, like me, use
> experimental?

  Well, it looked like you had a pretty pathological case.  I wonder
about increasing the default a bit, though, to, say, 50 or 100.
Unfortunately, I don't really have a good corpus of problems from past
archives and desired results (i.e., I have no good testing apparatus to
find out whether I made old results get worse).

  One option would be to make the per-step score a function of the size
of a solution (probably a capped exponential), so aptitude is willing to
sift around a little in the first few steps, then gets serious about
chasing down individual branches.
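  The capped-exponential idea could look something like this toy sketch
(names, constants, and the formula are purely illustrative; this is not
aptitude's actual code):

```python
def step_bonus(depth, base=10, growth=1.5, cap=100):
    """Per-step bonus that grows with search depth but saturates at `cap`.

    Shallow steps get roughly the old flat bonus, so the resolver can
    still sift around near the root; deep steps approach `cap`, pushing
    the search toward depth-first behavior on long branches.
    """
    return min(cap, base * growth ** depth)

# Near the root the bonus stays modest...
print(step_bonus(0))   # 10
print(step_bonus(3))   # 33.75
# ...but deep in a branch it saturates at the cap.
print(step_bonus(20))  # 100
```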
> >   Another interesting possibility would be to decrease the penalty for
> > broken dependencies (e.g., set BrokenScore to -10 instead of -100), so
> > aptitude doesn't go "oh my god, I have 30 broken packages, this solution
> > must suck, let's try something else" when it tries to install, say,
> > epiphany-browser.  (Setting it too low could also lead aptitude to spin
> > if you don't increase StepScore, because it would end up spending lots
> > of time on short solutions that haven't had time to accumulate other
> > penalties yet.)
>
> Should I run separate tests on this?  Which values should I start with
> and which should I change?

  Sure, start with the default and then set
Aptitude::ProblemResolver::BrokenScore to -10, or whatever.  (Heck, what
happens if you set it to 0?  I don't know -- the only problem I can
actually imagine is that you'd be able to, say, remove libgtk without a
penalty for all the stuff you break, but the removal itself has a
penalty anyway.)

> >   I'll see when I get a chance to look at the log more closely whether
> > there's any clever way to cut down on the amount of work being done.
>
> That would be nice!

  In the little bit of examination yesterday, I didn't see anything
obvious.  The general technique for speeding up a search like this is to
share information between branches; for instance, if we can detect that
"installing these two packages doesn't conflict immediately but will
conflict eventually", lots of useless searches can often be curtailed
(aptitude does a little of this, and it helps in some cases).
Unfortunately, it looks to me like the different branches are
more-or-less nonoverlapping.

  The one thing that would be interesting would be if we could somehow
detect that two branches are "almost" the same and can be merged, at
least for now.
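  A rough back-of-the-envelope calculation shows why the default
BrokenScore makes the resolver flee from the 30-broken-packages case
(the linear steps-times-StepScore plus broken-times-BrokenScore formula
here is a simplification for illustration, not aptitude's real scoring):

```python
def rough_score(steps, broken, step_score=10, broken_score=-100):
    """Crude approximation: reward search depth, penalize broken deps."""
    return steps * step_score + broken * broken_score

# One step that breaks 30 packages, with the default scores:
print(rough_score(1, 30))                    # -2990: looks awful, gets abandoned
# The same step with BrokenScore relaxed to -10:
print(rough_score(1, 30, broken_score=-10))  # -290: bad, but worth exploring
```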
  e.g., if I install libarts version X and that other branch there
installed version Y, then there's no reason to split the computation
unless and until this changes the list of broken deps and resolvers for
those deps.  That seems to be a general theme in the log, but I don't
see a trivial way of fixing this (especially when more than one package
is involved, in which case we might need to merge branches that have
diverged temporarily).

  Daniel