andreimpope...@gmail.com wrote on 6/24/19 2:09 AM:
> On Tue, 14 May 19, 16:38:37, Dennis Wicks wrote:
> > How do I prevent the mounts from failing and make the system
> > continue on with the boot process?
>
> You could start by attaching your /etc/fstab and copy-pasting the output
> of 'lsblk -f' with all partitions mounted.
>
> It would also be useful to know what init system you are using
> (ls -l /sbin/init) and whether your mounts involve anything special
> (LVM, encryption, NFS, RAID, etc.), basically anything besides plain
> extX filesystems mounted from internal drives.
>
> Kind regards,
> Andrei
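For anyone collecting the same information, the diagnostics Andrei asks for can be gathered with a few read-only commands (the guards are only there in case a tool is missing; nothing here modifies the system):

```shell
# Show filesystems, LABELs and UUIDs for every partition
lsblk -f 2>/dev/null || true

# The current mount configuration
cat /etc/fstab

# If /sbin/init is a symlink into /lib/systemd/, systemd is PID 1
ls -l /sbin/init 2>/dev/null || true
```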
No need for all that!
All my mounts are on local PATA and SATA drives; the SATA
drives are on an adapter card. All the file systems are xfs,
ext2, ext4 or swap, and the fstab entries use either a
device path, LABEL= or UUID=. All very vanilla: no LVM,
encryption, NFS, RAID, etc. It doesn't make any difference,
as *all* of the mounts are failing on the first pass!
I found a workaround on a forum: put "nofail" in the
options field of fstab. So now my entries contain
"defaults,nofail" or "sw,pri=100,nofail" in the options field.
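For readers unfamiliar with the option, entries of the kind described above look something like this (the devices, labels, UUIDs and mount points here are made up for illustration, not copied from the actual fstab):

```
# <device>                                <mount>   <type>  <options>          <dump> <pass>
UUID=0a1b2c3d-1111-2222-3333-444455556666 /data     xfs     defaults,nofail    0      2
LABEL=backup                              /backup   ext4    defaults,nofail    0      2
/dev/sdb2                                 none      swap    sw,pri=100,nofail  0      0
```

With "nofail", systemd still attempts the mount but no longer treats a failure as fatal for the boot.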
It doesn't silence the errors, though. All the
    Dependency failed for ...
    Timeout waiting for ...
messages still occur; they just no longer stop the boot
process, and the mounts get done successfully later on. (??)
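If it helps anyone hitting the same messages, systemd's own view of the mounts can be inspected after boot with standard commands (guarded with '|| true' since not every system runs systemd; the mount point below is just an example):

```shell
# List all mount units and their current state
systemctl list-units --type=mount --all 2>/dev/null || true

# Detailed status of one mount; the unit name is the mount point
# with '/' translated to '-', e.g. /home/data -> home-data.mount
systemctl status home-data.mount 2>/dev/null || true

# Boot-time messages about mounting from the journal
journalctl -b 2>/dev/null | grep -i 'mount' || true
```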
I can't tell what might have caused this, as I don't reboot
after every update, just when the kernel is updated. I think
it was about the time that systemd was introduced, as the
boot screen looked different when the mount failures started
happening.
Just an update in case someone else runs into the problem.
In fact I am surprised that no one else has. There must be
something different about my system that I don't know about
and that isn't obvious! (Something from SysV that is
incompatible with systemd?)
Regards,
Dennis