On Thu, Sep 15, 2011 at 10:48 AM, Joost Roeleveld <jo...@antarean.org> wrote:
> On Thursday, September 15, 2011 10:32:50 AM Michael Mol wrote:
>> On Thu, Sep 15, 2011 at 10:11 AM, Joost Roeleveld <jo...@antarean.org> wrote:
>> >> I'm not entirely convinced this is the case, because it feels like
>> >> some situations like network devices (nbd, iSCSI) or loopback would
>> >> require userland tools to bring them up once networking is up.
>> >
>> > Yes, but the kernel events referencing those devices won't appear until
>> > after the networking is brought up.
>> > The scripts that "udev" starts are run *after* a device-event is
>> > created. If the device itself has not been spotted by the kernel (for
>> > instance, the networking doesn't exist yet), the event won't be
>> > triggered yet.
>> >
>> > This situation does not require udev to start all these tools when
>> > network-devices appear.
>> >
>> > I hope the following makes my thoughts a bit clearer:
>> >
>> > 1) kernel boots
>> >
>> > 2) kernel detects a network device and places a "network-device-event"
>> > in the queue
>> >
>> > 3) further things happen and the kernel places relevant events in the
>> > queue (some other events may also already be in the queue before step 2)
>> >
>> > 4) udev starts and begins processing the queue
>> >
>> > 5) for each event, udev creates the corresponding device-entry and
>> > places action-entries in a queue
>> >
>> > 6) system init scripts mount all local filesystems
>> >
>> > 7) udev-actions starts (I made up this name)
>> >
>> > 8) udev-actions processes all the entries in the action-queue
>> >
>> > From step 4, udev will keep processing further events it gets, which
>> > means that if any action taken by "udev-actions" causes further devices
>> > to become available, "udev" will create the device-entries and place
>> > the actions in the action-queue.
>>
>> So, if I read this correctly, there are two classes of events being
>> processed: kernel events and scripted actions. Here's rough pseudocode
>> describing what I think you're saying. (Or perhaps what I'm hoping
>> you're saying.)
>>
>> while (wait_for_event())
>> {
>>     kevent* pkEvent = NULL;
>>     if (get_waiting_kernel_event(&pkEvent)) // returns true if an event was waiting
>>     {
>>         process_kernel_event(pkEvent);
>>     }
>>     else
>>     {
>>         aevent* pAction = NULL;
>>         if (get_waiting_action(&pAction)) // returns true if there's an action waiting
>>         {
>>             process_action(pAction);
>>         }
>>     }
>> }
>
> This is, sort of, what I feel should happen. But currently, in pseudocode,
> the following seems to happen:
>
> while (wait_for_event())
> {
>     kevent* pkEvent = NULL;
>     if (get_waiting_kernel_event(&pkEvent)) // returns true if an event was waiting
>     {
>         process_kernel_event(pkEvent);
>     }
> }
>
> I would prefer to see 2 separate processes:
>
> --- process 1 ---
> while (wait_for_event())
> {
>     kevent* pkEvent = NULL;
>     if (get_waiting_kernel_event(&pkEvent)) // returns true if an event was waiting
>     {
>         aevent* action_event = process_kernel_event(pkEvent);
>         if (action_event != NULL)
>         {
>             put_action_event(action_event);
>         }
>     }
> }
> ------
>
> --- process 2 ---
> while (wait_for_event())
> {
>     aevent* paEvent = NULL;
>     if (get_waiting_action_event(&paEvent)) // returns true if an event was waiting
>     {
>         process_action_event(paEvent);
>     }
> }
> -------
>
>> So, udev processes one event at a time, and always processes kernel
>> events with a higher priority than the resulting scripts. This makes a
>> certain amount of sense; an action could launch, e.g.
>> nbdclient, which would cause a new kernel event to get queued.
>
> Yes, except that udev ONLY handles kernel events and doesn't process any
> "actions" itself.
> These are placed on a separate queue for a separate process.
>
>> > If anyone has a setup where /usr cannot be mounted easily, it won't
>> > work currently either, and an init* would be necessary anyway.
>> > (Am thinking of NFS, CIFS, iSCSI, NBD, special raid-drivers, ...
>> > hosting /usr or other required filesystems)
>>
>> I don't see how this is relevant to actually fixing udev. (See below.)
>>
>> > But anyone with a currently working environment should be able to
>> > expect a currently working environment. If it fails to boot after
>> > only updating versions, it's a regression. And one of the worst
>> > kinds of all.
>>
>> I agree that the direction udev is going is a regression. There aren't
>> very many people active in this thread who would disagree with that
>> point. So let's just drop it and focus on what a good, general
>> solution would look like. (And anyone who says something amounting to
>> 'status quo' for udev needs another explanation of why the udev
>> developer sees the current scenario as broken. And he's right; the
>> current scenario is architecturally unsound. I just think he's wrong
>> about the solution.)
>
> I agree he is wrong about the solution as well.
>
> I have actually just posted my idea to the gentoo-dev list to see how the
> developers actually feel about possibly splitting udev into 2 parts.
>
> I'm not a good enough programmer to do this myself. But if anyone who can
> code also agrees that my idea for a solution is a good one, please let me
> know and let's see how far we can get with implementing it.
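For what it's worth, the two queues map almost one-to-one onto two
processes connected by a pipe. Just to make the idea concrete, here is a
minimal sketch in C, assuming a plain pipe as the action queue; struct
aevent, fake_kernel_event() and the strings are invented for illustration,
not taken from udev's actual code:

#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

struct aevent {
    char action[128];   /* a serialized "action" record (made up) */
};

/* Stand-in for the netlink socket: pretend the kernel reported
 * three new devices, then no more. */
static int fake_kernel_event(char *buf, size_t len)
{
    static int n = 0;
    if (n >= 3)
        return 0;
    snprintf(buf, len, "add@/devices/fake%d", n++);
    return 1;
}

int main(void)
{
    int q[2];   /* q[0] = read end, q[1] = write end: the action queue */

    if (pipe(q) < 0) {
        perror("pipe");
        return 1;
    }

    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }

    if (pid == 0) {
        /* --- process 2: the action runner ("udev-actions") --- */
        close(q[1]);
        struct aevent ev;
        while (read(q[0], &ev, sizeof ev) == (ssize_t)sizeof ev)
            printf("udev-actions: handling \"%s\"\n", ev.action);
        return 0;   /* EOF on the pipe: the listener is done */
    }

    /* --- process 1: the kernel-event listener (udev proper) --- */
    close(q[0]);
    char kev[64];
    struct aevent ev;
    while (fake_kernel_event(kev, sizeof kev)) {
        printf("udev: created device node for %s\n", kev);
        /* queue the follow-up action instead of running it */
        snprintf(ev.action, sizeof ev.action, "run-script %s", kev);
        if (write(q[1], &ev, sizeof ev) != (ssize_t)sizeof ev)
            perror("write");
    }
    close(q[1]);   /* EOF tells the action runner to exit */
    wait(NULL);
    return 0;
}

The property the split buys is visible in the sketch: the listener only
ever writes small records into the pipe, so a slow or blocking action
script stalls the runner, never the kernel-event side.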
Now we are talking. I am really, *REALLY* interested to hear what the
devs have to say on the matter.

Regards.
--
Canek Peláez Valdés
Posgrado en Ciencia e Ingeniería de la Computación
Universidad Nacional Autónoma de México