On Mon, May 1, 2023 at 8:43 PM Samuel Thibault <samuel.thiba...@gnu.org> wrote:
> > How do we proceed? I don't know enough about rump to get it building;
>
> We can easily cross-build debian packages thanks to the rebootstrap
> scripts from Helmut:
>
> https://salsa.debian.org/helmutg/rebootstrap.git
I didn't just mean the build system; I'm not familiar with rump at all.
I hardly understand what it is, other than that it's some code extracted
from NetBSD and intended to somehow be embeddable in a broad range of
environments. I've skimmed their whitepapers, but I still don't have any
idea how they accomplish what they claim to. So I'm clearly not the right
person to port rump to x86_64-gnu -- unless you're saying that it's
largely architecture-independent and will just build.

> > so enabling the in-kernel Linux drivers seems preferable.
> > Any tips on how I would do that? Are the Linux drivers ancient enough
> > to require their own porting to x86_64, or will they just work?
>
> They will just *not* work with 64bit. The glue code is not ready at all,
> and I think it's really not useful to spend time on that when we have
> the rump drivers which should be working fine.

I was secretly hoping that the Linux drivers wouldn't be fully abandoned
with the rise of rump and lwip, but that the Linux and non-Linux
implementations would both stay available as alternatives -- and that
someone would upgrade the bundled Linux code (6.3 is out now...), perhaps
a future project for myself.

> Now, that being said, you can also use an initrd, see debian's
> 50_initrd.patch (which nobody has yet taken the time to clean according
> the discussion at the time). That's enough to run a full Hurd system,
> you can for instance take the initrd files from the debian installation
> CD.

Yes! An initrd sounds exactly like what I need, thank you! Well, I'm
unlikely to need whatever Debian puts there, but the general idea of a
ramdisk is exactly it.

As for the best way to implement this... I have always thought gnumach
should have a bootscript directive to expose a multiboot module as a
memobj. Then there would be a task, /hurd/ramdisk, that would expose it
over the device (and possibly I/O) protocols:

  module /boot/initrd.img $(initrd-image=vm-object-create)
  module /hurd/ramdisk.static ramdisk --vm-object-port=${initrd-image} $(task-create) $(task-resume)

This is a lot like the data-task-create that has been proposed in the
"bootshell" discussion, but without repurposing tasks for this. A memobj
is, IMO, a much cleaner way to represent a literal uninterpreted piece of
memory handed to us by the bootloader.

Ideally, gnumach wouldn't even know how to parse bootscripts or load
ELFs; that, too, would be done by some pre-init task. The only things we
need Mach to do are to pass the memory regions it has received from the
bootloader over to userspace, as memobjects, and to start the initial
task, somehow. From there on, the initial task would parse the script,
parse the ELFs, create the tasks, map the ELF segments into their memory
(with vm_map, since these are memobjects, rather than with a simple copy
as it's done now), and insert the ports and argv according to the script
-- all in userspace (see the rough sketch at the end of this mail). The
question is of course how to start this initial task, but that needs to
be discussed in the context of my broader ideas for how to reengineer
bootstrap...

Anyway, I'll see if I can apply the initrd patch and get it working.
Thank you.

Sergey
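
P.S. To make the vm_map part a bit more concrete, here's a very rough,
untested sketch of the two mappings I have in mind. Everything in it is
made up for illustration -- the function names, the assumption that the
ramdisk learns the image size from somewhere (another bootscript
variable, say), and the glossing over of page alignment, BSS and error
handling. It's only meant to show that plain vm_map on a memory object
port already covers both cases.

  #include <mach.h>
  #include <elf.h>

  /* Roughly what /hurd/ramdisk would do with the memory object port it
     received via --vm-object-port: map the whole image into its own
     address space, then serve device_read/device_write by copying out
     of / into that mapping.  */
  static kern_return_t
  map_ramdisk_image (memory_object_t image, vm_size_t image_size,
                     vm_address_t *addr)
  {
    *addr = 0;
    return vm_map (mach_task_self (), addr, image_size,
                   0 /* mask */, TRUE /* anywhere */,
                   image, 0 /* offset */, FALSE /* don't copy */,
                   VM_PROT_READ | VM_PROT_WRITE,
                   VM_PROT_READ | VM_PROT_WRITE,
                   VM_INHERIT_NONE);
  }

  /* Roughly what the pre-init loader would do instead of copying: map a
     single ELF PT_LOAD segment from the memobj holding the executable
     directly into a freshly created child task, copy-on-write.  */
  static kern_return_t
  map_elf_segment (task_t child, memory_object_t exe, const Elf64_Phdr *ph)
  {
    vm_address_t addr = ph->p_vaddr;
    vm_prot_t prot = (((ph->p_flags & PF_R) ? VM_PROT_READ : 0)
                      | ((ph->p_flags & PF_W) ? VM_PROT_WRITE : 0)
                      | ((ph->p_flags & PF_X) ? VM_PROT_EXECUTE : 0));

    return vm_map (child, &addr, ph->p_memsz,
                   0 /* mask */, FALSE /* map at p_vaddr exactly */,
                   exe, ph->p_offset, TRUE /* copy-on-write */,
                   prot, VM_PROT_ALL, VM_INHERIT_COPY);
  }

The appealing part is that both sides are the exact same call; the only
thing that changes is which task port gets passed as the target.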