On 1 September 2014 12:24, László Böszörményi (GCS) <g...@debian.org> wrote:
> Hi Carnë,
>
> On Mon, Sep 1, 2014 at 12:48 PM, Carnë Draug <carandr...@octave.org> wrote:
>> It's been almost 5 years since this bug was reported, it blocks two
>> other bugs, it has a patch (a one-line change to the Makefile), and
>> even the upstream maintainer has pitched in saying that the build
>> should be changed as proposed.
>>
>> Could the package maintainer please, at least, comment on this?
> I'm the current maintainer, but I came late in the package's life. Yes,
> changing the quantum depth is just a switch for the configure script.
> On the other hand, it changes at least two important aspects of the
> programs using GraphicsMagick.
> The first one is the in-memory usage of graphics handling. It will
> double the memory needed for the unpacked pictures (i.e., don't compare
> it with the on-disk size of the image). Yes, newer architectures like
> PPC64 will have several gigabytes of RAM to handle this. But on older
> architectures like armel/armhf it may render the package itself and
> related ones useless because of memory issues.
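To put the memory point in rough numbers: a minimal sketch, assuming an uncompressed buffer of 4 channels (RGBA) at quantum-depth/8 bytes per channel for a hypothetical 12-megapixel image. The real per-pixel layout depends on the GM build, so treat these as ballpark figures only.

```shell
# Rough in-memory size of a decoded 4000x3000 image at each quantum depth.
# Assumes 4 channels of depth/8 bytes each; actual layout may differ per build.
width=4000; height=3000; channels=4
for depth in 8 16 32; do
  bytes=$(( width * height * channels * depth / 8 ))
  echo "Q${depth}: $(( bytes / 1024 / 1024 )) MiB"
done
# prints: Q8: 45 MiB, Q16: 91 MiB, Q32: 183 MiB
```

Under these assumptions, Q8 to Q16 doubles the footprint (the armel/armhf concern), and Q16 to Q32 doubles it again.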
This proposal is only to increase it to 16, which is a compromise. People with such limited machines (needing quantum-depth=8), and people with such specialized data (needing quantum-depth=32), would have to build it themselves. Making everyone happy is not possible (well, there is a new GM feature that would allow three separate packages, one per configuration, but that may be more trouble than it's worth), but it would seem that 16 bit would make the most people, and the typical user, happier.

I would guess that older systems are probably dealing with smaller images, where doubling the in-memory size will be a small increase. But sacrificing all the users who need 16 (and that's not me, I need 32) to save the ones who need 8 seems unfair.

> The second one is: can the dependent packages handle the changed
> in-memory representation of the image? Who can test those in every
> aspect on at least two architectures (a little- and a big-endian one)?
> Fixes may be necessary for those, and if their upstream is busy with
> other things, the packages may be broken for a long time.
> I can do a limited quick test on amd64, but the Release Team will be
> in a position to allow this change or not.

At least Fedora (and Fedora-based distros) and Arch Linux have been using quantum-depth 16. If there were any problems, they should have appeared there as well (and hopefully have already been fixed). Even the upstream recommendation is to use 16, so dependent packages should at least be aware of, and prepared for, GM being built with different options. And when using GM, one is usually abstracted from that; I'd be surprised if they had hardcoded a limitation to a specific depth somewhere. In Octave we recommend that users rebuild everything from source using 16 (or 32 for special cases), so the Octave package at least has already been well tested for this change.

Finally, the only way to be sure is to actually test.
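For reference, this is the build-time switch being discussed, along with a quick way to check which depth an installed build uses. This assumes GraphicsMagick's standard configure script and the `gm` front end; on typical builds the first line of `gm version` carries a Q8/Q16/Q32 tag.

```shell
# Build-time: select the pixel component size (the proposed one-line change).
./configure --with-quantum-depth=16

# Run-time: check what an installed build was compiled with. The first line
# usually looks like "GraphicsMagick 1.3.x ... Q16 http://www...".
gm version | head -n 1
```

Dependent-package maintainers could use the run-time check to confirm which depth their test environment actually has before reporting results.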
I see no reason to think there will be any problem with this change, but that's why it will first go into unstable, and then into testing. It is best if this change goes in soon, well before the Jessie freeze (November 5th), so that there's plenty of time to find any possible problem.

Please consider this change.

Thank you,
Carnë