For what it's worth, I tried booting the kernel without an initrd at all, by setting root=/dev/sda1 and rootfstype=ext4 as per:
https://freedesktop.org/wiki/Software/systemd/Optimizations/

... and that fails miserably:

[    0.899628] List of all partitions:
[    0.900512] No filesystem could mount root, tried:
[    0.900513]
[    0.901740] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
[    0.903441] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.17.0-2-amd64 #1 Debian 5.17.6-1
[    0.905578] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014
[    0.907604] Call Trace:
[    0.908281]  <TASK>
[    0.908850]  dump_stack_lvl+0x48/0x5e
[    0.909820]  panic+0xfa/0x2c6
[    0.910584]  mount_block_root+0x1c6/0x1d5
[    0.911607]  prepare_namespace+0x136/0x165
[    0.912701]  kernel_init_freeable+0x258/0x282
[    0.913736]  ? rest_init+0xd0/0xd0
[    0.914620]  kernel_init+0x16/0x120
[    0.915512]  ret_from_fork+0x22/0x30
[    0.916413]  </TASK>
[    0.917421] Kernel Offset: 0x3bc00000 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffffbfffffff)
[    0.920182] ---[ end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0) ]---

... so it looks like the base kernel doesn't have enough drivers built in to find the root filesystem, which is a shame and a bit ridiculous considering that even GRUB has that support built in...

An alternative I have tried is this:

sed -i 's/^MODULES=.*/MODULES=dep/' /etc/initramfs-tools/initramfs.conf
update-initramfs -u

MODULES=dep makes initramfs-tools include only the modules needed for the hardware actually present, instead of the "most" default. This shaves a few seconds off the initramfs load, going from 5-6 seconds to 2-3, which is pretty nice. In fact, it even shaves off about 75% of the initrd size right there:

root@host:~# ls -lh /boot/initrd.img-5.17.0-*
-rw-r--r-- 1 root root  30M May  2 04:02 /boot/initrd.img-5.17.0-1-amd64
-rw-r--r-- 1 root root 7.4M May 26 19:06 /boot/initrd.img-5.17.0-2-amd64

But I bet it could be lowered even further...

Experimenting with systemd-networkd does remove some delays. Remember this?

> The entire boot operation seems to take about 12-13 seconds, which is
> really noticeable.
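(An aside on the initrd-less panic above: it is consistent with how Debian builds its kernels, where ext4 and the usual disk-controller drivers are modules ("=m") rather than built-ins ("=y"), so without an initrd there is nothing left to mount root with. A rough sketch of the check, simulated here on a sample config fragment; on a real system you would grep /boot/config-$(uname -r) directly:)

```shell
# Simulated check: a sample of the relevant Debian kernel config options.
# On a live system, grep /boot/config-$(uname -r) instead of this file.
# "=m" means the driver is a module (initrd required to load it);
# "=y" would mean it is built into the kernel image.
cat > /tmp/kconfig-sample <<'EOF'
CONFIG_EXT4_FS=m
CONFIG_ATA_PIIX=m
CONFIG_SATA_AHCI=m
EOF
grep -E '^CONFIG_(EXT4_FS|ATA_PIIX|SATA_AHCI)=' /tmp/kconfig-sample
```

All three come back "=m", which matches the "Unable to mount root fs" panic.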
> Here's the top 10 offenders according to systemd:
>
> root@host:~# systemd-analyze blame | head -10
>  1.526s dev-sda1.device
>  1.152s ifupdown-pre.service
>   855ms systemd-journal-flush.service
>   323ms networking.service

Switching to systemd-networkd removes close to a second of boot time here. The magic recipe is:

apt purge ifupdown
cat > /etc/systemd/network/80-dhcp.network <<EOF
[Match]
Name=eth*

[Network]
DHCP=yes
EOF
systemctl enable systemd-networkd

... and that's it, really. Then the top ten becomes:

root@host:~# systemd-analyze blame | head -10
1.165s dev-sda1.device
 850ms systemd-journal-flush.service
 759ms systemd-networkd.service
 252ms systemd-sysctl.service
 198ms systemd-sysusers.service
 192ms systemd-random-seed.service
 184ms apparmor.service
 151ms systemd-udev-trigger.service
 148ms systemd-tmpfiles-setup-dev.service
 144ms systemd-journald.service

The critical chain now looks like:

root@host:~# systemd-analyze critical-chain
The time when unit became active or started is printed after the "@" character.
The time the unit took to start is printed after the "+" character.

graphical.target @1.654s
└─multi-user.target @1.654s
  └─getty.target @1.654s
    └─serial-getty@ttyS0.service @1.653s
      └─systemd-user-sessions.service @1.638s +10ms
        └─network.target @1.634s
          └─systemd-networkd.service @874ms +759ms
            └─systemd-udevd.service @734ms +137ms
              └─systemd-tmpfiles-setup-dev.service @575ms +148ms
                └─systemd-sysusers.service @373ms +198ms
                  └─systemd-remount-fs.service @278ms +91ms
                    └─systemd-journald.socket @254ms
                      └─-.mount @249ms
                        └─-.slice @249ms

Previously, that chain looked like this:

root@host:~# systemd-analyze critical-chain
The time when unit became active or started is printed after the "@" character.
The time the unit took to start is printed after the "+" character.
graphical.target @1.943s
└─multi-user.target @1.942s
  └─getty.target @1.940s
    └─serial-getty@ttyS0.service @1.938s
      └─systemd-user-sessions.service @1.922s +9ms
        └─network.target @1.919s
          └─networking.service @1.569s +348ms
            └─ifupdown-pre.service @855ms +706ms
              └─systemd-udev-trigger.service @583ms +261ms
                └─systemd-udevd-kernel.socket @450ms
                  └─system.slice @357ms
                    └─-.slice @357ms

So from what I can tell, we shave off about 600-700ms here.

At some point I also found systemd-journal-flush.service in the critical chain, with a full second spent spinning around for no good reason on a throwaway system, so:

systemctl mask systemd-journal-flush.service

And now the critical chain looks like this:

root@host:~# systemd-analyze critical-chain
The time when unit became active or started is printed after the "@" character.
The time the unit took to start is printed after the "+" character.

graphical.target @1.334s
└─multi-user.target @1.334s
  └─systemd-logind.service @889ms +425ms
    └─basic.target @805ms
      └─sockets.target @805ms
        └─dbus.socket @805ms
          └─sysinit.target @803ms
            └─systemd-udevd.service @669ms +134ms
              └─systemd-tmpfiles-setup-dev.service @614ms +37ms
                └─systemd-sysusers.service @387ms +224ms
                  └─systemd-remount-fs.service @285ms +89ms
                    └─systemd-journald.socket @254ms
                      └─system.slice @249ms
                        └─-.slice @249ms

So that's pretty good. Anything else in there has less than 500ms of run time on its own, so I'm getting diminishing returns. That's what I've got for now. :)

-- 
The destiny of Earthseed is to take root among the stars.
 - Octavia Butler
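P.S.: for anyone who wants to keep hunting below the 500ms mark, a rough little filter over the blame output helps. This is a hypothetical helper of my own, not a systemd feature; it is sketched here against a pasted sample, but on a live system you would pipe the real `systemd-analyze blame` output into the awk instead:

```shell
# Hypothetical filter: keep only units that took >= 500ms to start.
# Entries on the seconds scale always qualify; ms entries are compared
# numerically. Sample input pasted from the blame output above.
slow=$(awk '
  { t = $1 }
  t ~ /^[0-9.]+s$/ { print; next }               # seconds-scale: always slow
  { sub(/ms$/, "", t); if (t + 0 >= 500) print } # ms-scale: keep >= 500ms
' <<'EOF'
 1.165s dev-sda1.device
  850ms systemd-journal-flush.service
  252ms systemd-sysctl.service
EOF
)
echo "$slow"
```

On a real system: `systemd-analyze blame | awk '...'` with the same program.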