On Wed, Apr 11, 2007 at 08:34:23AM +0200, Loïc Minier <[EMAIL PROTECTED]> was heard to say:
> On Tue, Apr 10, 2007, Daniel Burrows wrote:
> >   Hrm, the resolver has a cutoff to stop thinking after "a while", but
> > it looks like you've overridden that (-y).  Maybe I should add some
> > additional logic that aborts the search regardless of -y if the
> > resolver's data structures get too big (this means we have a problem
> > that's "too hard" to solve).
>
>  I don't know; I used "-y" for aptitude to work alone in non-interactive
> mode, but perhaps there's something lighter which has the same effect?
> If not, I suppose it would be nice to stop the resolver after a while
> despite -y indeed.
  It might be good to have -y fail by default on that prompt.

> >   I bet that
> > you have a cache with several copies of a large dependency tree (e.g.
> > in testing, unstable & experimental) and where some dependencies in
> > the tree can be satisfied from any repository but others conflict
> > across repositories.  That's the hardest case for the resolver, and I
> > don't think it handles it very gracefully.
>
>  Yes, I have the full sid and experimental repositories, and a local
> almost empty one, and the situation probably looks awful.

  The reason this is a problem is that if aptitude sees 2 versions of,
say, libxul-dev, it'll consider *both* versions as candidates.  The
scores are rigged so that it tries to search depth-wise before
breadth-wise in the main archive, but sometimes it runs into trouble
when there are lots of solutions that all look temporarily bad (i.e.,
it has to generate broken dependencies to get to a solution).

  The "multiple choices" problem is much worse with 2 repositories
instead of 1, because there are fewer opportunities for forcing
dependencies.  Normally, when you add a new package to a solution, you
can trivially trace out its dependency tree (minus virtual packages and
ORed dependencies) without losing any solutions; large dependency trees
tend to lead to a "branch" every time you add a package, and the
penalty for broken dependencies often causes the resolver to start
searching "crosswise" (like a breadth-first search) rather than
"depthwise".

  It looks like this is happening here: a lot of the log is aptitude
trying various equally valid ways of fulfilling dependencies that might
be mutually compatible (e.g., two versions of libarts).  Because the
resolution process is driven by score, if there's a penalty for picking
any later install target (e.g., because it increases the number of
broken dependencies), aptitude will explore both branches before it
tries to resolve the next dependency.
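  To make the score-driven behaviour above concrete, here's a toy
sketch (made-up action/broken counts, not aptitude's actual code) of
why a deep partial solution with temporarily broken dependencies loses
to a shallow sibling under the default weights, and why a bigger step
bonus flips that:

```python
# Toy scoring sketch for a best-first resolver frontier.  The default
# constants mirror the values mentioned in this mail (StepScore=10,
# BrokenScore=-100); the package counts are hypothetical.

def score(num_actions, num_broken, step_score=10, broken_score=-100):
    # Bonus for each action already taken, penalty per broken dependency.
    return step_score * num_actions + broken_score * num_broken

# A deep partial solution that temporarily breaks two dependencies...
deep = score(num_actions=6, num_broken=2)     # 60 - 200 = -140
# ...versus a one-step sibling that breaks nothing (yet).
shallow = score(num_actions=1, num_broken=0)  # 10

# The shallow branch scores higher, so it's expanded first: the
# "crosswise" (breadth-first-like) searching described above.
assert shallow > deep

# A much larger step bonus flips the ordering, pushing the search
# depth-first over the "hill" of temporarily broken dependencies.
assert score(6, 2, step_score=1000) > score(1, 0, step_score=1000)
```

  The asymmetry is the whole story: the per-step bonus grows linearly
with depth, while each broken dependency costs ten steps' worth of
bonus, so any branch that must pass through a broken state looks worse
than almost any fresh sibling.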
  One temporary workaround for you might be (I haven't tested this) to
jack up the single-step bonus.  Change
Aptitude::ProblemResolver::StepScore from its default (10) to something
huge (say, 1000), and aptitude should start behaving more like a
depth-first search than it does right now.  That is, it'll get a large
bonus for chasing down a search tree, rather than a modest improvement.
That tends to help it over "hills", but might give you worse results
than you'd otherwise get.  I'd be interested in knowing how far you
have to increase it before aptitude terminates, and whether the
solution you get sucks.

  Another interesting possibility would be to decrease the penalty for
broken dependencies (e.g., set BrokenScore to -10 instead of -100), so
aptitude doesn't go "oh my god, I have 30 broken packages, this
solution must suck, let's try something else" when it tries to install,
say, epiphany-browser.  (Setting it too low could also lead aptitude to
spin if you don't increase StepScore, because it would end up spending
lots of time on short solutions that haven't had time to accumulate
other penalties yet.)

  I'll see, when I get a chance to look at the log more closely,
whether there's any clever way to cut down on the amount of work being
done.  Maybe I should finally open up the AI book that's been sitting
on my shelf for a while.  (I bet someone has solved this problem
already...)

  I might also want to tweak the log-generating code; the output is
huge and hard to analyze.  I think the idea was to always provide full
context, but it might be better to generate more modest output that can
be easily parsed by a more sophisticated tool than "less". :)

  Daniel