On Fri, 28 Jun 19, 11:26:43, Dennis Wicks wrote:
> andreimpope...@gmail.com wrote on 6/24/19 2:09 AM:
> > On Tue, 14 May 19, 16:38:37, Dennis Wicks wrote:
> > > 
> > > How do I prevent the mounts from failing and make the system continue on
> > > with the boot process?
> > 
> > You could start by attaching your /etc/fstab and copy-pasting the output
> > of 'lsblk -f' with all partitions mounted.
> > 
> > It would also be useful to know what init system you are using
> > (ls -l /sbin/init) and if your mounts involve anything special (LVM,
> > encryption, NFS, RAID, etc.), basically anything besides plain extX filesystems
> > mounted from internal drives.
> > 
> > Kind regards,
> > Andrei
> > 
> No need for all that!

Hmm...
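
For what it's worth, gathering that information only takes a couple of
commands, something along these lines (the exact output will of course
depend on your setup):

    cat /etc/fstab
    lsblk -f            # filesystem type, LABEL and UUID of each partition
    ls -l /sbin/init    # a link to /lib/systemd/systemd means systemd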
 
> All my mounts are local PATA and SATA drives. The SATA drives are on an
> adapter card. All the file systems are xfs, ext2, ext4 or swap and use
> either /dir/dir, LABEL= or UUID=.
> All very vanilla. No LVM, encrypted, NFS, RAID, etc. Doesn't make any
> difference as *all* of the mounts are failing on the first pass!
> 
> I found a workaround on a forum. Put "nofail" in the options field of
> fstab. So now my entries contain "defaults,nofail" or "sw,pri=100,nofail" in
> the options field.
> 
> Doesn't make any difference though. All the
>    Dependency failed for ...
>    Timeout waiting for ...
> messages still occur; they just don't stop the boot process, and the mounts
> get done successfully later on. (??)
> 
> I can't tell what might have caused this as I don't reboot after every
> update, just when an update to the kernel occurs. I think it was about the
> time that systemd was implemented as the boot screen looked different when
> the mount failures started happening.
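
As far as I understand, "nofail" only makes a failed mount non-fatal; it
doesn't silence the timeout messages if a device is slow to show up. For
reference, a typical entry would look something like this (the label and
mount point here are just placeholders, adjust to your setup):

    LABEL=data  /srv/data  ext4  defaults,nofail,x-systemd.device-timeout=30  0  2

The x-systemd.device-timeout= option (see systemd.mount(5)) limits how
long systemd waits for the device before giving up on the entry, which
should shorten those waits.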

There are a lot of eyes on this list and someone might spot something
you wouldn't even think could have an impact.

But then it's your system, your rules ;)

Kind regards,
Andrei
-- 
http://wiki.debian.org/FAQsFromDebianUser
