Re: Rebuilding the entire Debian archive twice on arm64 hardware for fun and profit

2019-01-07 Thread Luke Kenneth Casson Leighton
On Sun, Jan 6, 2019 at 11:46 PM Steve McIntyre  wrote:
>
> [ Please note the cross-post and respect the Reply-To... ]
>
> Hi folks,
>
> This has taken a while in coming, for which I apologise. There's a lot
> of work involved in rebuilding the whole Debian archive, and many many
> hours spent analysing the results. You learn quite a lot, too! :-)
>
> I promised way back before DC18 that I'd publish the results of the
> rebuilds that I'd just started. Here they are, after a few false
> starts. I've been rebuilding the archive *specifically* to check if we
> would have any problems building our 32-bit Arm ports (armel and
> armhf) using 64-bit arm64 hardware. I might have found other issues
> too, but that was my goal.

 very cool.

 steve, this is probably as good a time as any to mention a very
specific issue with binutils (ld) that has been slowly and inexorably
creeping up on *all* distros - both 64 and 32 bit - where the 32-bit
arches are beginning to hit the issue first.

 it's a 4GB variant of the "640k should be enough for anyone" problem,
as applied to linking.

 i spoke with dr stallman a couple of weeks ago and confirmed that in
the original version of ld that he wrote, he very very specifically
made sure that it ONLY allocated memory up to the maximum *physical*
resident available amount (i.e. only went into swap as an absolute
last resort), and secondly that the number of object files loaded into
memory was kept, again, to the minimum that the amount of spare
resident RAM could handle.
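for reference, the two numbers such a policy needs — total and currently-available physical RAM — are exposed on Linux through sysconf(3), which the getconf tool wraps; a minimal sketch (the `_PHYS_PAGES`/`_AVPHYS_PAGES` names are glibc extensions, so treat this as Linux-specific):

```shell
# Query the same values a memory-aware linker could use to cap its
# allocations before they spill into swap.
page=$(getconf PAGESIZE)
phys=$(getconf _PHYS_PAGES)
avail=$(getconf _AVPHYS_PAGES)
echo "physical RAM:   $(( page * phys / 1048576 )) MiB"
echo "currently free: $(( page * avail / 1048576 )) MiB"
```

a linker following the policy described above would keep its working set somewhere below the second figure rather than letting allocations wander into swap.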

 some... less-experienced people, somewhere in the late 1990s, ripped
all of that code out ["what's all this crap, why are we not just
relying on swap, 4GB swap will surely be enough for anybody"]

 by 2008 i experienced a complete melt-down on a 2GB system when
compiling webkit.  i tracked it down to having accidentally enabled
"-g -g -g" in the Makefile, which i had done specifically for one
file, forgot about it, and accidentally recompiled everything.

 that resulted in an absolute thrashing meltdown that nearly took out
the entire laptop.

 the problem is that the linker phase in any application is so heavy
on cross-references that the moment the memory allocated by the linker
goes outside of the boundary of the available resident RAM it is
ABSOLUTELY GUARANTEED to go into permanent sustained thrashing.

 i cannot emphasise enough how absolutely critical it is for
EVERY distribution to get this fixed.

resources world-wide are being completely wasted (power, time, and the
destruction of HDDs and SSDs) because systems which should only really
take an hour to do a link are instead often taking FIFTY times longer
due to swap thrashing.

not only that, but the poor design of ld is beginning to stop certain
packages from even *linking* on 32-bit systems!  firefox i heard now
requires SEVEN GIGABYTES during the linker phase!

and it's down to this very short-sighted decision to remove code
written by dr stallman, back in the late 1990s.

it would be extremely useful to confirm that 32-bit builds can in fact
be completed, simply by adding "-Wl,--no-keep-memory" to any 32-bit
builds that are failing at the linker phase due to lack of memory.

however *please do not make the mistake of thinking that this is
specifically a 32-bit problem*.  resources are being wasted on 64-bit
systems by them going into massive thrashing, just as much as they are
on 32-bit ones: it's just that if it happens on a 32-bit system a hard
error occurs.

somebody needs to take responsibility for fixing binutils: the
maintainer of binutils needs help as he does not understand the
problem.  https://sourceware.org/bugzilla/show_bug.cgi?id=22831

l.



Conflicting lintian warnings when using debian/tests/control.autodep8 or debian/tests/control

2019-01-07 Thread Andreas Tille
Hi,

in several r-cran packages I used debian/tests/control.autodep8
in addition to the definition in d/control

  Testsuite: autopkgtest-pkg-r

since lintian otherwise warns about

  source: unnecessary-testsuite-autopkgtest-field

Now there is a new warning

  source: debian-tests-control-autodep8-is-obsolete 
debian/tests/control.autodep8

which recommends not to use debian/tests/control.autodep8 any
more.  With these two warnings I do not see a chance to deliver
a lintian-warning-free package that runs two kinds of tests

  1. autopkgtest-pkg-r (general test for all R packages)
  2. manually crafted test for a specific package

Any idea what to do (except overriding one of the lintian
warnings)?
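For reference, overriding one of the tags would look something like this in debian/source/lintian-overrides (a sketch of the override mechanism only, using the tag name quoted above; not a recommendation for which tag to silence):

```
# debian/source/lintian-overrides
source: unnecessary-testsuite-autopkgtest-field
```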

Kind regards

   Andreas.

-- 
http://fam-tille.de



Bug#918549: ITP: ruby-csv: CSV Reading and Writing

2019-01-07 Thread Lucas Kanashiro
Package: wnpp
Owner: Lucas Kanashiro 
Severity: wishlist
X-Debbugs-CC: debian-devel@lists.debian.org

* Package name    : ruby-csv
  Version : 3.0.2
  Upstream Author : Kouhei Sutou 
* URL : https://github.com/ruby/csv
* License : BSD-2-clause
  Programming Lang: Ruby
  Description : CSV Reading and Writing

The CSV library provides a complete interface to CSV files and data. It offers
tools to enable you to read and write to and from Strings or IO objects, as
needed.

This package will be maintained under the umbrella of the Debian Ruby team.

-- 
Lucas Kanashiro



introduction of x-www-browser virtual package

2019-01-07 Thread Jonathan Dowland

Hello,

We don't seem to have an x-www-browser virtual package name,
corresponding to the x-www-browser alternative name. Introducing one
would fix bugs like #833268 (package ships .desktop file which
references x-www-browser but does not correctly depend upon package(s)
that implement the x-www-browser alternative name).

I plan to propose introducing one accordingly, but I was surprised
enough at its absence (I could have sworn we had one) that I wondered
whether it was deliberately removed.

I thought I'd post here to see if anyone had any information first.


Best wishes

--

⢀⣴⠾⠻⢶⣦⠀
⣾⠁⢠⠒⠀⣿⡁ Jonathan Dowland
⢿⡄⠘⠷⠚⠋⠀ https://jmtd.net
⠈⠳⣄ Please do not CC me, I am subscribed to the list.



Re: Rebuilding the entire Debian archive twice on arm64 hardware for fun and profit

2019-01-07 Thread Luke Kenneth Casson Leighton
(hi edmund, i'm reinstating debian-devel on the cc list as this is not
a debian-arm problem, it's *everyone's* problem)

On Mon, Jan 7, 2019 at 12:40 PM Edmund Grimley Evans
 wrote:

> >  i spoke with dr stallman a couple of weeks ago and confirmed that in
> > the original version of ld that he wrote, he very very specifically
> > made sure that it ONLY allocated memory up to the maximum *physical*
> > resident available amount (i.e. only went into swap as an absolute
> > last resort), and secondly that the number of object files loaded into
> > memory was kept, again, to the minimum that the amount of spare
> > resident RAM could handle.
>
> How did ld back then determine how much physical memory was available,
> and how might a modern reimplemention do it?

 i don't know: i haven't investigated the code.  one clue: gcc does
exactly the same thing (or, used to: i believe that someone *may* have
tried removing the feature from recent versions of gcc).

 ... you know how gcc stays below the radar of available memory, never
going into swap-space except as a last resort?

> Perhaps you use sysconf(_SC_PHYS_PAGES) or sysconf(_SC_AVPHYS_PAGES).
> But which? I have often been annoyed by how "make -j" may attempt
> several huge linking phases in parallel.

 on my current laptop, which was one of the very early quad-core i7
Skylakes with 2400MHz DDR4 RAM, the PCIe bus actually shuts down if
too much data goes over it (too high a power draw occurs).

 consequently, if swap-thrashing occurs, it's extremely risky, as it
causes the NVMe SSD to go *offline*, re-initialise, and come back on
again after some delay.

 that means that i absolutely CANNOT allow the linker phase to go into
swap-thrashing, as it will result in the loadavg shooting up to over
120 within just a few seconds.


> Would it be possible to put together a small script that demonstrates
> ld's inefficient use of memory? It is easy enough to generate a big
> object file from a tiny source file, and there are no doubt easy ways
> of measuring how much memory a process used, so it may be possible to
> provide a more convenient test case than "please try building Firefox
> and watch/listen as your SSD/HDD gets t(h)rashed".
>
> extern void *a[], *b[];
> void *c[1000] = { &a };
> void *d[1000] = { &b };
>
> If we had an easy test case we could compare GNU ld, GNU gold, and LLD.

 a simple script that auto-generated tens of thousands of functions in
a couple of hundred c files, with each function making tens to
hundreds of random cross-references (calls) to other functions across
the entire range of auto-generated c files should be more than
adequate to make the linker phase go into near-total meltdown.

 the evil kid in me really *really* wants to give that a shot...
except it would be extremely risky to run on my laptop.

 i'll write something up. mwahahah :)

l.



Re: Would it be possible to have a ".treeinfo" file added to the installers' page?

2019-01-07 Thread Bastian Blank
On Fri, Dec 07, 2018 at 10:45:31AM +0100, Fabiano Fidêncio wrote:
> Although the subject says it all, let me explain the background of the
> change so you all can get the idea of why it'd help a few projects
> and/or even come up with a better solution than adding a  ".treeinfo"
> file.

I'm not exactly sure what you expect from this.  Maybe it would be
easier if you provide a complete example, including information for at
least one non-mainstream architecture.

> [1]: 
> http://download.opensuse.org/pub/opensuse/distribution/leap/15.0/repo/oss/.treeinfo
> [2]: http://mirror.vutbr.cz/fedora/releases/29/Server/x86_64/os/.treeinfo
> [3]: http://mirror.centos.org/centos-7/7/os/x86_64/.treeinfo

How do you detect those special paths?  Because you already need to know
them somehow.

Bastian

-- 
Hailing frequencies open, Captain.



Bug#918603: ITP: recap -- Generates reports of various information about the server

2019-01-07 Thread Darshaka Pathirana
Package: wnpp
Severity: wishlist
Owner: Darshaka Pathirana 

* Package name: recap
  Version : 1.3.1
  Upstream Author : Rackspace US, Inc.
* URL : https://github.com/rackerlabs/recap
* License : GPL-2.0+
  Description : Generates reports of various information about the server

This program is intended to be used as a companion for the reporting provided
by sysstat. It will create a set of reports summarizing hardware resource
utilization. The script also provides optional reporting on a web server,
MySQL and network connections.



Bug#918607: ITP: kthresher -- Purge Unused Kernels

2019-01-07 Thread Darshaka Pathirana
Package: wnpp
Severity: wishlist
Owner: Darshaka Pathirana 

* Package name: kthresher
  Version : 1.3.1
  Upstream Author : Rackspace US, Inc.
* URL : https://github.com/rackerlabs/kthresher
* License : Apache
  Programming Lang: Python
  Description : Purge Unused Kernels

Tool to remove unused kernels that were installed automatically
This tool removes those kernel packages marked as candidate for autoremoval.
Those packages are generally installed via Unattended upgrade or
meta-packages. By default the latest kernel and manual installations are
marked to Never Auto Remove.



Re: Conflicting lintian warnings when using debian/tests/control.autodep8 or debian/tests/control

2019-01-07 Thread Paul Gevers
Hi Andreas,

On 07-01-2019 11:37, Andreas Tille wrote:
> Any idea what to do?

File a bug against lintian, it's not perfect you know. If you let me
know (maybe in private) the bug number, I may create a merge request for
lintian.

As the creator of the latter warning and as the one that implemented the
code in autodep8 and autopkgtest to make that d/t/control.autodep8 file
obsolete, I am telling you that it's really the latter warning that you
want to fix.

Paul





Re: deduplicating jquery/

2019-01-07 Thread Nicholas D Steeves
Dear Java Team,

Do you have any suggestions for working with the following?: (please
reply to -devel)

On Sun, Jan 06, 2019 at 10:34:50PM +0100, Rene Engelhard wrote:
> On Sat, Jan 05, 2019 at 09:20:34PM +0100, Samuel Thibault wrote:
> > Sean Whitton, on Sat 05 Jan 2019 19:48:35 +, wrote:
> > > Forgive my ignorance of the specifics of this package, but why can't you
> > > add symlinks to the files shipped by libjs-jquery?  That is the standard
> > > solution.
> > 
> > openjdk's javadoc not only includes libjs-jquery, but also jszip,
> > jszip-utils, some images, etc. It'd be better to have a central provider
> > for whatever javadoc needs to have its search functionality working,
> > rather than each package maintainers doing it.
> 
> Similar with doxygen...
> 
> And then there's "jquery but not really jquery" appearing the wild or
> stuff which will not work with newer/older jquerys, ttbomk.
> 
> Short: the JS mess.

In the case of jquery, does a tool exist that can check for obsolete
methods?  For example, calibre ships with an embedded copy of jquery
1.4.2, this should be unbundled, but 3.2.1 in buster obviously has
breaking changes between major versions.  Would a clever pattern
search be enough to screen for this?  I feel like this might be a tool
that already exists ;-)
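A crude approximation that may already catch a lot is grepping for APIs that were removed in jQuery 1.9 (`.live()`, `.die()`, `jQuery.sub()`, `$.browser`). A rough sketch — the pattern list is illustrative, not exhaustive, and `legacy.js` is an invented sample:

```shell
# Create a sample file using two long-removed jQuery APIs, then scan
# for them; extend the pattern list for other breaking changes.
cat > legacy.js <<'EOF'
$("a").live("click", handler);
if ($.browser.msie) { fixup(); }
EOF
grep -nE '\.(live|die|sub)\(|\$\.browser' legacy.js
```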

Cheers!
Nicholas




Re: Conflicting lintian warnings when using debian/tests/control.autodep8 or debian/tests/control

2019-01-07 Thread Ondrej Novy
Hi,

On Mon, 7 Jan 2019 at 11:37, Andreas Tille  wrote:

> Any idea what to do (except overriding one of the lintian
> warnings)?
>

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=918621

-- 
Best regards
 Ondřej Nový

Email: n...@ondrej.org
PGP: 3D98 3C52 EB85 980C 46A5  6090 3573 1255 9D1E 064B


Re: deduplicating jquery/

2019-01-07 Thread Emmanuel Bourg
Hi Nicholas,

On 07/01/2019 at 21:13, Nicholas D Steeves wrote:

> Do you have any suggestions for working with the following?: (please
> reply to -devel)

We've discussed this topic in #903428 and the consensus is roughly that
it's a waste of time and we would rather drop the mostly unused javadoc
packages than implement this.

If someone has a clever idea that doesn't involve patching the 480
javadoc packages in unstable, nor deviating the OpenJDK tools too much
from upstream, the Java Team would be happy to discuss and review the
patches provided.
patches provided.

Emmanuel Bourg



Re: deduplicating jquery/

2019-01-07 Thread Samuel Thibault
Emmanuel Bourg, on Mon 07 Jan 2019 22:25:35 +0100, wrote:
> On 07/01/2019 at 21:13, Nicholas D Steeves wrote:
> > Do you have any suggestions for working with the following?: (please
> > reply to -devel)
> 
> We've discussed this topic in #903428 and the consensus is roughly that
> it's a waste of time and we would rather drop the mostly unused javadoc
> packages than implementing this.

I'd rather cripple the documentation a bit than remove it :)

Could jh_build perhaps just drop the embedded jquery copy to avoid the
issue? AFAIK, jquery is only used to implement the "search" feature,
which can sometimes be convenient, but can be done by users with greps
and such.

Samuel



Re: Conflicting lintian warnings when using debian/tests/control.autodep8 or debian/tests/control

2019-01-07 Thread Chris Lamb
Ondrej,

> > Any idea what to do (except overriding one of the lintian
> > warnings)?
> 
> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=918621

Merged & uploaded to unstable in lintian 2.5.120; thanks.


Regards,

-- 
  ,''`.
 : :'  : Chris Lamb
 `. `'`  la...@debian.org / chris-lamb.co.uk
   `-



Re: deduplicating jquery/

2019-01-07 Thread Emmanuel Bourg
On 07/01/2019 at 23:02, Samuel Thibault wrote:

> I'd rather cripple the documentation a bit than removing it :)

The issue is, we keep getting more and more javadoc-related issues with
each OpenJDK upgrade. This jquery "issue" is a bit of a straw that
breaks the camel's back, and we would rather cut our losses now than
invest even more time in these low-popcon packages. The Java Team is
understaffed; we struggle to keep up with the JDK upgrades and update
the important packages, so the documentation issues are really
low-priority items.


> Could jh_build perhaps just drop the embedded jquery copy to just avoid
> the issue? AFAIK, jquery is only used to implement the "search" feature,
> which can sometimes be convenient, but can be done by users with greps &
> such.

jh_build is only part of the picture. Most javadoc packages are
generated by Maven, so the maven-javadoc-plugin would have to be patched
as well.

Emmanuel Bourg



python-socketio x gevent-socketio

2019-01-07 Thread Paulo Henrique Santana
Hi, could you give me some guidance on my issue below?

A while ago I packaged [1] a piece of software named Flask-SocketIO [2].
It depends on another piece of software named python-socketio, and I found a
package by that name in Debian at the time [3]. In fact, that package is the
gevent-socketio software [4].

But as you can see in this bug report [5], Flask-SocketIO can't use
gevent-socketio.

From flask_socketio/__init__.py:
if gevent_socketio_found:
    print('The gevent-socketio package is incompatible with this version of '
          'the Flask-SocketIO extension. Please uninstall it, and then '
          'install the latest version of python-socketio in its place.')
    sys.exit(1)

The upstream author (Miguel) who developed Flask-SocketIO has also developed
another python-socketio [6].  So I thought of packaging this python-socketio
from Miguel to close the Flask-SocketIO bug.

gevent-socketio seems to have been stopped since 2016 and does not seem to
have been updated to Python 3. When I install it, it lives under the tree:
/usr/lib/python2.7/dist-packages/socketio

My package of python-socketio (from Miguel) uses the "socketio" namespace
too, and it would live under the tree:
/usr/lib/python3/dist-packages/socketio

So, what is a good solution to this?
Keep my python-socketio in the Python 3 tree and gevent-socketio in the
Python 2.7 tree?  Use a Conflicts field to force the user to remove
gevent-socketio before installing python-socketio?  Or some other
solution?
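For the record, the hard-conflict route would be a stanza along these lines in the new package's debian/control (a sketch only, assuming the old binary package really is named python-socketio); though since the two install into different Python trees, a new binary name such as python3-socketio with no Conflicts at all may well be the cleaner answer:

```
Package: python3-socketio
Conflicts: python-socketio
Replaces: python-socketio
```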

[1] https://tracker.debian.org/pkg/flask-socketio
[2] https://github.com/miguelgrinberg/Flask-SocketIO
[3] https://tracker.debian.org/pkg/gevent-socketio
[4] https://pypi.org/project/gevent-socketio/#files
[5] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=879631
[6] http://github.com/miguelgrinberg/python-socketio

Best regards,


-
Paulo Henrique de Lima Santana (phls) 
Curitiba - Brasil 
Debian Maintainer
Diretor do Instituto para Conservação de Tecnologias Livres
Membro da Comunidade Curitiba Livre
Site: http://www.phls.com.br 
GNU/Linux user: 228719  GPG ID: 0443C450

Organizador da DebConf19 - Conferência Mundial de Desenvolvedores(as) Debian
Curitiba - 21 a 28 de julho de 2019
http://debconf19.debconf.org





Re: Rebuilding the entire Debian archive twice on arm64 hardware for fun and profit

2019-01-07 Thread Mike Hommey
On Mon, Jan 07, 2019 at 10:28:31AM +, Luke Kenneth Casson Leighton wrote:
> [...]
> it would be extremely useful to confirm that 32-bit builds can in fact
> be completed, simply by adding "-Wl,--no-keep-memory" to any 32-bit
> builds that are failing at the linker phase due to lack of memory.

Note that Firefox is built with --no-keep-memory
--reduce-memory-overheads, and that was still not enough for 32-bit
builds. GNU gold instead of BFD ld was also given a shot. That didn't
work either. Presently, to make things link at all on 32-bit platforms,
debug info is entirely disabled. I still need to figure out what minimal
debug info can be enabled without incurring too much memory usage
during linking.

Mike



Re: Rebuilding the entire Debian archive twice on arm64 hardware for fun and profit

2019-01-07 Thread Luke Kenneth Casson Leighton
On Tuesday, January 8, 2019, Mike Hommey  wrote:

> .
>
> Note that Firefox is built with --no-keep-memory
> --reduce-memory-overheads, and that was still not enough for 32-bts
> builds. GNU gold instead of BFD ld was also given a shot. That didn't
> work either. Presently, to make things link at all on 32-bits platforms,
> debug info is entirely disabled. I still need to figure out what minimal
> debug info can be enabled without incurring too much memory usage
> during linking.


Dang. Yes, removing debug symbols was the only way I could get webkit to
link without thrashing; it's a temporary fix though.

So the removal of the algorithm in ld that Dr Stallman wrote, dating back
to the 1990s, has already resulted in a situation that's worse than I
feared.

At some point apps are going to become so insanely large that not even
disabling debug info will help.

At which point perhaps it is worth questioning the approach of having an
app be a single executable in the first place.  Even on a 64-bit system,
if an app doesn't fit into 4GB of RAM, something has gone drastically
awry.



-- 
---
crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68


Re: Rebuilding the entire Debian archive twice on arm64 hardware for fun and profit

2019-01-07 Thread Mike Hommey
On Mon, Jan 07, 2019 at 11:46:41PM +, Luke Kenneth Casson Leighton wrote:
> On Tuesday, January 8, 2019, Mike Hommey  wrote:
> 
> > .
> >
> > Note that Firefox is built with --no-keep-memory
> > --reduce-memory-overheads, and that was still not enough for 32-bts
> > builds. GNU gold instead of BFD ld was also given a shot. That didn't
> > work either. Presently, to make things link at all on 32-bits platforms,
> > debug info is entirely disabled. I still need to figure out what minimal
> > debug info can be enabled without incurring too much memory usage
> > during linking.
> 
> 
> Dang. Yes, removing debug symbols was the only way I could get webkit to
> link without thrashing, it's a temporary fix though.
> 
> So the removal of the algorithm in ld Dr Stallman wrote, dating back to the
> 1990s, has already resulted in a situation that's worse than I feared.
> 
> At some point apps are going to become so insanely large that not even
> disabling debug info will help.

That's less likely, I'd say. Debug info *is* getting more and
more complex for the same amount of executable weight, and linking it
is making things worse and worse. But having enough code to actually be
a problem without debug info is probably not so close.

There are solutions to still keep full debug info, but the Debian
packaging side doesn't support that presently: using split-dwarf. It
would probably be worth investing in supporting that.

Mike



Re: Rebuilding the entire Debian archive twice on arm64 hardware for fun and profit

2019-01-07 Thread Luke Kenneth Casson Leighton
On Tuesday, January 8, 2019, Mike Hommey  wrote:

> On Mon, Jan 07, 2019 at 11:46:41PM +, Luke Kenneth Casson Leighton
> wrote:
>
> > At some point apps are going to become so insanely large that not even
> > disabling debug info will help.
>
> That's less likely, I'd say. Debug info *is* getting incredibly more and
> more complex for the same amount of executable weight, and linking that
> is making things worse and worse. But having enough code to actually be
> a problem without debug info is probably not so close.
>
>
It's a slow-boil problem: it took 10 years to get bad, and will take
another 10 to get really bad. It needs strategic planning. Right now
things are not being tackled except in a reactive way, which
unfortunately takes time, as everyone is a volunteer. That exacerbates
the problem and leaves drastic "solutions" such as "drop all 32-bit
support".


> There are solutions to still keep full debug info, but the Debian
> packaging side doesn't support that presently: using split-dwarf. It
> would probably be worth investing in supporting that.
>
>
Sounds very reasonable; I've always wondered why debug symbols are not
separated out at build/link time. Would that buy maybe another decade?



-- 
---
crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68


Re: deduplicating jquery/

2019-01-07 Thread Wookey
On 2019-01-04 20:16 +0100, Samuel Thibault wrote:
> Hello,
> 
> Quite a few packages have jquery/ embedded in documentation generated by
> javadoc. This yields to
> 
> Could openjdk perhaps build a package that would ship jquery/ in a known
> place, and packages would just depend on it and the generated jquery/
> directory be replaced with a symlink to the known place?

This would be very nice. I noticed yesterday that rebuilding one of my
packages in unstable adds a pile of jquery and jszip files that weren't
there before, taking up 1.2MB of space (unpacked) and making the
deb nearly twice as big (540K vs 300K). This does seem pretty
ridiculous.

Wookey
-- 
Principal hats:  Linaro, Debian, Wookware, ARM
http://wookware.org/




Re: introduction of x-www-browser virtual package

2019-01-07 Thread Paul Wise
On Mon, Jan 7, 2019 at 8:03 PM Jonathan Dowland wrote:

> I thought I'd post here to see if anyone had any information first.

I noticed that this idea came up in 2010 and 2014 so I think we never
had x-www-browser, only www-browser.

https://lists.debian.org/20141117130332.ga9...@free.fr
https://lists.debian.org/4b432943.3080...@leat.rub.de

--
bye,
pabs

https://wiki.debian.org/PaulWise



Re: Rebuilding the entire Debian archive twice on arm64 hardware for fun and profit

2019-01-07 Thread Luke Kenneth Casson Leighton
$ python evil_linker_torture.py 2000 50 100 200

ok so it's pretty basic, and arguments of "2000 50 10 100"
resulted in around a 10-15 second linker phase, which top showed to be
getting up to around the 2-3GB resident memory range.  "2000 50 100
200" should make even a system with 64GB RAM start to feel the
pain.

evil_linker_torture.py N M O P generates N files, each with M functions
calling O randomly-selected functions, where each file contains a
static char array of size P that is *deliberately* put into the data
segment by being initialised with a non-zero value, exactly and
precisely as you should never do because... surpriiise! it adversely
impacts the binary size.

i'm just running the above, will hit "send" now in case i can't hit
ctrl-c in time on the linker phase... goodbye world... :)

l.
#!/usr/bin/env python

import sys
import random

maketemplate = """\
CC := gcc
CFILES:=$(shell ls | grep "\.c")
OBJS:=$(CFILES:%.c=%.o)
DEPS := $(CFILES:%.c=%.d)
CFLAGS := -g -g -g
LDFLAGS := -g -g -g

%.d: %.c
	$(CC) $(CFLAGS) -MM -o $@ $<

%.o: %.c
	$(CC) $(CFLAGS) -o $@ -c $<

#	$(CC) $(CFLAGS) -include $(DEPS) -o $@ $<

main: $(OBJS)
	$(CC) $(OBJS) $(LDFLAGS) -o main
"""

def gen_makefile():
    with open("Makefile", "w") as f:
        f.write(maketemplate)

def gen_headers(num_files, num_fns):
    for fnum in range(num_files):
        with open("hdr{}.h".format(fnum), "w") as f:
            for fn_num in range(num_fns):
                f.write("extern int fn_{}_{}(int arg1);\n".format(fnum, fn_num))

def gen_c_code(num_files, num_fns, num_calls, static_sz):
    for fnum in range(num_files):
        with open("src{}.c".format(fnum), "w") as f:
            for hfnum in range(num_files):
                f.write('#include "hdr{}.h"\n'.format(hfnum))
            f.write('static char data[%d] = {1};\n' % static_sz)
            for fn_num in range(num_fns):
                f.write("int fn_%d_%d(int arg1)\n{\n" % (fnum, fn_num))
                f.write("\tint arg = arg1 + 1;\n")
                for nc in range(num_calls):
                    cnum = random.randint(0, num_fns-1)
                    cfile = random.randint(0, num_files-1)
                    f.write("\targ += fn_{}_{}(arg);\n".format(cfile, cnum))
                f.write("\treturn arg;\n")
                f.write("}\n")
            if fnum != 0:
                continue
            f.write("int main(int argc, char *argv[])\n{\n")
            f.write("\tint arg = 0;\n")
            for nc in range(num_calls):
                cnum = random.randint(0, num_fns-1)
                cfile = random.randint(0, num_files-1)
                f.write("\targ += fn_{}_{}(arg);\n".format(cfile, cnum))
            f.write("\treturn 0;\n")
            f.write("}\n")

if __name__ == '__main__':
    num_files = int(sys.argv[1])
    num_fns = int(sys.argv[2])
    num_calls = int(sys.argv[3])
    static_sz = int(sys.argv[4])
    gen_makefile()
    gen_headers(num_files, num_fns)
    gen_c_code(num_files, num_fns, num_calls, static_sz)


Re: Rebuilding the entire Debian archive twice on arm64 hardware for fun and profit

2019-01-07 Thread Luke Kenneth Casson Leighton
On Tue, Jan 8, 2019 at 6:27 AM Luke Kenneth Casson Leighton
 wrote:

> i'm just running the above, will hit "send" now in case i can't hit
> ctrl-c in time on the linker phase... goodbye world... :)

$ python evil_linker_torture.py 2000 50 100 200
$ make -j8

oh, err... whoopsie... is this normal? :)  it was only showing around
600MB during the linker phase anyway. will keep hunting. where is this
best discussed (i.e. not such a massive cc list)?

/usr/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/7/crtbeginS.o: in function
`deregister_tm_clones':
crtstuff.c:(.text+0x3): relocation truncated to fit: R_X86_64_PC32
against `.tm_clone_table'
/usr/bin/ld: crtstuff.c:(.text+0xb): relocation truncated to fit:
R_X86_64_PC32 against symbol `__TMC_END__' defined in .data section in
main
/usr/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/7/crtbeginS.o: in function
`register_tm_clones':
crtstuff.c:(.text+0x43): relocation truncated to fit: R_X86_64_PC32
against `.tm_clone_table'
/usr/bin/ld: crtstuff.c:(.text+0x4a): relocation truncated to fit:
R_X86_64_PC32 against symbol `__TMC_END__' defined in .data section in
main
/usr/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/7/crtbeginS.o: in function
`__do_global_dtors_aux':
crtstuff.c:(.text+0x92): relocation truncated to fit: R_X86_64_PC32
against `.bss'
/usr/bin/ld: crtstuff.c:(.text+0xba): relocation truncated to fit:
R_X86_64_PC32 against `.bss'
collect2: error: ld returned 1 exit status
make: *** [main] Error 1



Re: Rebuilding the entire Debian archive twice on arm64 hardware for fun and profit

2019-01-07 Thread Luke Kenneth Casson Leighton
$ python evil_linker_torture.py 3000 100 100 50

ok so that managed to get up to 1.8GB resident memory, paused for a
bit, then doubled it to 3.6GB, and a few seconds later successfully
output a binary.

i'm going to see if i can get above the 4GB mark by modifying the
Makefile to do 3,000 shared libraries instead of 3,000 static object
files.

l.



Re: Rebuilding the entire Debian archive twice on arm64 hardware for fun and profit

2019-01-07 Thread Luke Kenneth Casson Leighton
On Tue, Jan 8, 2019 at 7:01 AM Luke Kenneth Casson Leighton
 wrote:

> i'm going to see if i can get above the 4GB mark by modifying the
> Makefile to do 3,000 shared libraries instead of 3,000 static object
> files.

 fail.  shared libraries link extremely quickly.  reverted to static,
trying this:

$ python evil_linker_torture.py 3000 400 200 50

so that's 4x the number of functions per file, and 2x the number of
calls *in* each function.

just the compile phase requires 1GB per object file (gcc 7.3.0-29),
which, on "make -j8", ratcheted up the loadavg to the point where...
well... *when* it recovered, it reported a loadavg of over 35, with 95%
usage of the 16GB swap space...

running with "make -j4" is going to take a few hours.

l.