Please remove me from this mailing list, as I can no longer actively use the
Hurd.
----- Original Message -----
From: Brent Fulgham <[EMAIL PROTECTED]>
To: Brent Fulgham <[EMAIL PROTECTED]>; OKUJI Yoshinori
<[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Friday, September 22, 2000 2:25 PM
Subject: RE: [Bug-hurd] Problem with dev/glue/block.c (More Ramblings)
> > The problem seems to be here:
> >
> > dev/glue/block.c, in init_partition (roughly line 1005):
> >   if (slice >= gd->max_p
> >       || gd->part[MINOR (*dev) & mask].start_sect < 0
> >       || gd->part[MINOR (*dev) & mask].nr_sects <= 0)
> >     return D_NO_SUCH_DEVICE;
> >
> > What's happening is that the &mask causes the offset into gd->part
> > to drop from 127 down to 63. I think that the Mach routines that
> > set this up are placing partitions on the primary slave IDE
> > interface (hd1) above offset 64. If I take the &mask out in all
> > the places where it is used, my offset stays at 127, GNU Mach can
> > find the valid partition (with start_sect >= 0 and nr_sects > 0),
> > and all is well.
> >
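For reference, here is a minimal standalone sketch of the minor-number
arithmetic being described. The macro names and the 64-minors-per-drive
layout are assumptions based on the usual Linux IDE scheme, not code
copied from ide.c or glue/block.c:

#include <stdio.h>

#define PARTN_BITS 6                        /* assumed: 64 minors per IDE drive */
#define PART_MASK  ((1 << PARTN_BITS) - 1)  /* == 63, the "mask" above */

int
main (void)
{
  int minor = 64 + 1;   /* hd1s1: drive 1 (base minor 64), partition 1 */

  /* gd->part[] is indexed by the full minor, so hd1's entries occupy
     slots 64-127.  Masking the whole minor with 63 folds slot 65 back
     to slot 1, i.e. onto hd0's partition table, which is why the
     lookup above ends up examining the wrong entry.  */
  printf ("full slot:   %d\n", minor);              /* prints 65 */
  printf ("masked slot: %d\n", minor & PART_MASK);  /* prints 1  */
  return 0;
}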
> Indeed, according to linux/drivers/block/ide.c, this is the
> correct behavior (I can see that hdb [hd1 in the Hurd] should
> have minor value 64).
>
> So perhaps the problem is that the code at line 952 in
> glue/block.c stops prematurely, either because gd->max_p is too
> low, or because the assignment d->inode.i_rdev = *dev | i; isn't
> doing what it should in the case of slave drives.
>
> I.e., instead of counting 0-63 (which is what it does right
> now), it should count 64-127 to pick up hd1.
>
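To make the counting claim concrete, here is a small hedged sketch. It
is not the actual loop from glue/block.c (only the i_rdev assignment
above is quoted from it); it just shows which slots "*dev | i" can reach
depending on whether the drive bits are still present in *dev:

#include <stdio.h>

int
main (void)
{
  int dev_hd1 = 64;   /* minor of the whole hd1 device (drive 1, partition 0) */
  int max_p   = 64;   /* assumed gd->max_p: partition slots per drive */
  int i;

  /* With the drive bits intact, *dev | i walks slots 64-127; with
     them masked off first, the same loop only ever visits slots
     0-63, i.e. hd0's entries, so hd1's partitions are never found.  */
  for (i = 0; i < max_p; i++)
    printf ("i = %2d   with drive bits: %3d   masked first: %3d\n",
            i, dev_hd1 | i, (dev_hd1 & 63) | i);
  return 0;
}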
> So I'm going to assume for now that the Linux driver code is
> correct, since it has not been modified, and since it seems to
> be filling the slots as defined (the primary slave should reside
> at minor values 64 and above).
>
> I'll instead investigate why our "init_partition" routine is
> not properly accessing slots 64+ when the Hurd is initializing
> a slave drive.
>
> In fact, this explains why people with the Hurd on hd0 are able
> to boot successfully, while those with it on hd1 cannot.
> Similarly, I would expect people on hd2 to be okay, but hd3 to
> have problems. The anecdotal evidence so far bears this out,
> since all three of us experiencing the problem have the Hurd
> located on hd1.
>
> Thanks,
>
> -Brent
_______________________________________________
Bug-hurd mailing list
[EMAIL PROTECTED]
http://mail.gnu.org/mailman/listinfo/bug-hurd