On 6/26/25 6:57 AM, 1...@110110.net wrote:
Basically, even though there's a huge overhead when it comes to distributing
static packages via apt, it could save time, bandwidth and CPU resources,
because you're only downloading one or two packages instead of 20, sometimes
700, packages at a time. That saves resources! The flip side is the overhead of
downloading the static packages themselves, running the binaries, and
re-downloading the whole thing if a download fails.


I figure if OpenOffice, for example, has 700 dependencies, then instead of 700
separate download processes, a single process downloading one large binary with
all the dependencies compiled directly into it could in fact save bandwidth and
possibly network resources, both local and remote.

I figure an 80 MB static package is just as good as 50 MB of shared packages,
maybe even better.

It falls apart when it comes to upgrading packages, though, since the
shared-library model is the de facto standard of open-source Linux: to update
one part of a program you'd sadly have to re-download the entire static binary,
and risk a botched update, instead of fetching a single library. I don't know,
I tried to make an equation for it but it didn't work out, lol.
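
Roughly, the comparison I was going for looks something like the sketch below.
Every number in it is made up, just to show the shape of the trade-off, not
measured from any real package.

/* Hypothetical back-of-the-envelope comparison of cumulative download
 * cost for a static package vs. a shared-library package across updates.
 * All sizes and update counts are made-up illustration values, not
 * measurements of any real Debian package. */
#include <stdio.h>

int main(void)
{
    const double static_pkg_mb = 80.0; /* assumed size of one static binary */
    const double shared_pkg_mb = 50.0; /* assumed size of app plus shared deps */
    const double lib_update_mb = 3.0;  /* assumed size of one patched library */
    const int    updates       = 10;   /* assumed number of library fixes */

    /* Static model: every library fix means re-downloading the whole binary. */
    double static_total = static_pkg_mb + updates * static_pkg_mb;

    /* Shared model: only the patched library is downloaded for each fix. */
    double shared_total = shared_pkg_mb + updates * lib_update_mb;

    printf("static model: %.0f MB downloaded over %d updates\n",
           static_total, updates);
    printf("shared model: %.0f MB downloaded over %d updates\n",
           shared_total, updates);
    return 0;
}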

I personally think Debian doesn't distribute static packages because a
statically compiled program takes up a lot more hard drive space and RAM,
seeing as each process carries its own copy of the library code in memory
instead of sharing the pages, and there are a ton of users that run Debian on
old computing devices.

I personally would like Debian to research a version of Debian for
high-performance computers, or at least a fork of Debian optimized for them:
one that is ready to occupy large amounts of RAM and disk space and to fully
utilize the new technology in x86-family processors made after 2020 or so,
where large memory configurations (64 GB+) can simply be occupied up to, say,
35% for performance reasons, such as caching and keeping performance-critical
code resident.

And yeah, within this month I'll try to fork apt and make a patch for the src
command, possibly with the help of LDAP, and store it on some kind of local
server (I guess rsync or NFS would be used?).

But I'd prefer it if someone else did, because I'm swamped at work at the
moment.

Oh man, not to act all crazy, but could you and the Debian team talk about
LDAP integration at Debian? (Or debian.org, lol; imagine getting a debian.org
domain set up on your network, haha.)

On Jun 21, 2025, at 10:13 AM, IOhannes m zmölnig <umlae...@debian.org> wrote:

On 21 June 2025 at 18:05:54 CEST, 1...@110110.net wrote:
I *think* distributing stuff like Apache or even OpenOffice in static form
might save a significant amount of bandwidth


why do you think so?
(and why do you think Debian does not do this?)


mfh.her.fsr
IOhannes

Hello,

First of all, you're mistaken about the advantages of statically compiling everything, because shared libraries do much more than just share code. Some libraries, like glibc, hold state and load parts of themselves dynamically at run time (IIRC, some network features crash and burn in fully static builds; see [0] for context). In layman's terms, some libraries are designed to be shared, and they make everyone's and everything's life easier when they are used as shared libraries.
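
To make the glibc case concrete, here is a minimal sketch: even a binary linked with -static still needs glibc's shared NSS plugins at run time to resolve host names, and gcc warns about exactly that at link time.

/* Toy example, not from the thread: name resolution goes through glibc's
 * Name Service Switch (NSS), which is loaded as shared plugins at run time.
 * Linking this with "gcc -static resolve.c -o resolve" makes the linker warn
 * that getaddrinfo() will still require, at run time, the shared glibc
 * libraries of the version used for linking. */
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>

int main(void)
{
    struct addrinfo hints, *res;

    memset(&hints, 0, sizeof hints);
    hints.ai_family   = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;

    int err = getaddrinfo("www.debian.org", "https", &hints, &res);
    if (err != 0) {
        fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(err));
        return 1;
    }
    printf("resolved www.debian.org\n");
    freeaddrinfo(res);
    return 0;
}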

As you duly noted, you would need to download a whole new image, and rebuild everything in the archive that embeds a given library, whenever a single shared library gets patched. This is very wasteful and time consuming. Also, HTTP's overhead is negligible, and the resources used for parallel file downloads (apt is very conservative at that, BTW) are practically non-existent, even on severely underpowered hardware like a first-generation Raspberry Pi.

As for your HPC argument, I strongly disagree with you, as both an HPC admin and a high-performance programmer. Debian's kernel is already well tuned for large, even very large systems (think 16-socket behemoths) and large clusters. What you need is to compile your code with a modern GCC and the correct flags ("-O3 -march=native -mtune=native" is a good starting point). I have successfully saturated systems from different eras, hitting low cache thrashing, high IPC and high retire rates while getting the full memory bandwidth available from the system, on both vanilla Debian and some RedHat derivatives.
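
As a small sketch of what I mean (a toy example, with an arbitrary array size), a plain Debian GCC already auto-vectorizes a simple streaming loop like the one below when given those flags; no special distribution build is needed for that.

/* Toy streaming-add loop.  Building it with
 *   gcc -O3 -march=native -mtune=native stream_add.c -o stream_add
 * on stock Debian lets GCC auto-vectorize the loops for whatever SIMD the
 * local CPU offers.  The array size is an arbitrary illustration value. */
#include <stdio.h>
#include <stdlib.h>

#define N (1 << 22) /* ~4M doubles per array, arbitrary */

int main(void)
{
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c);
    if (!a || !b || !c)
        return 1;

    for (size_t i = 0; i < N; i++) {
        a[i] = 1.0;
        b[i] = 2.0;
    }

    /* The kind of loop -O3 -march=native turns into SIMD code. */
    for (size_t i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    printf("c[0] = %f\n", c[0]);
    free(a);
    free(b);
    free(c);
    return 0;
}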

If you want to test your hypotheses, it's easy to do so. You can get some source packages, compile them statically, bump their versions with a "+static" suffix, create a repository (aptly is practical for this), and either install the packages on a standard Debian installation or create a new image that pulls them from your repository. It's a couple of days of work; we managed to build a Debian derivative as a two-person team at work.

Then, you can share what you have found with the Debian folks, and it can be discussed.

Lastly, Debian works on merit, and is very much a proof-of-work group. Nobody can force anyone to do anything, but if you come with code and the related, required work around it, it can be considered. I personally don't see any upside in replacing a working authentication system with LDAP, but somebody more knowledgeable than me can answer that question better.

As for the work you want to do, I'd love to help, but I'm neither interested in a "Static Debian" nor do I have the time to do the work.

Hope this helps a bit,

Cheers,

H.

[0]: https://www.reddit.com/r/haskell/comments/vqqq7x/trying_to_build_a_statically_linked_binary/
