I have a suspend.sh script that aims to take three cache devices offline before the computer sleeps:

% grep -v -e '# ' /etc/rc.suspend | uniq | grep -B 3 -A 2 suspend.sh
#!/bin/sh
#

/usr/local/sbin/suspend.sh

        echo "Usage: $0 [apm|acpi] [standby,suspend|1-4]"
% grep -v -e '# ' /usr/local/sbin/suspend.sh | uniq
#!/bin/sh

while mount | grep Transcend 2>&1; do
   zpool export Transcend
   sleep 5
done

zpool offline august gpt/cache1-august
zpool offline august gpt/cache2-august
zpool offline august gpt/cache3-august

sync

killall pulseaudio

while fstat | grep -e dsp -e mixer 2>&1; do
   fstat | grep -e dsp -e mixer | cut -w -f 3 | while read pid;
      do kill -15 "$pid"
   done
done

sysctl hw.snd.default_unit=1

%


From the logs below, it seems that sleep fails if a device has not yet been detached. (Possibly the failure occurs when the offlining itself does not succeed, although I did check the pool status shortly before suspending.)

How can I more reliably ensure detachment before /etc/rc.suspend proceeds?
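
To illustrate what I mean, this is roughly the kind of check I imagine at the end of suspend.sh, in place of the three plain zpool offline lines (only a sketch; the 30-second limit is arbitrary, and I realise that OFFLINE appearing in zpool status output might still be earlier than the kernel's vdev_geom detach):

for dev in gpt/cache1-august gpt/cache2-august gpt/cache3-august; do
   zpool offline august "$dev"
done

# wait, at most 30 seconds, for all three cache devices to report OFFLINE
n=0
until [ "$(zpool status august | grep -c OFFLINE)" -ge 3 ]; do
   sleep 1
   n=$((n + 1))
   [ "$n" -ge 30 ] && exit 1
done

sync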

Alternatively (ideally), is it possible for /etc/rc.suspend to _not_ proceed if detachment does not occur?
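
In rc.suspend itself, I suppose the call could become something like the line below (again just a sketch; I don't know whether returning early from /etc/rc.suspend actually prevents the sleep, given the "suspend request timed out, forcing sleep now" message in the log below):

/usr/local/sbin/suspend.sh || exit 1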


Final lines in /var/log/messages before a forced stop of the computer:

Aug 31 17:37:01 mowa219-gjp4-8570p-freebsd kernel: ugen1.8: <EE Ogima> at usbus1 (disconnected)
Aug 31 17:38:46 mowa219-gjp4-8570p-freebsd kernel: vdev_geom_close_locked:352[1]: Closing access to gpt/cache1-august.
Aug 31 17:38:46 mowa219-gjp4-8570p-freebsd kernel: vdev_geom_detach:315[1]: Detaching from gpt/cache1-august.
Aug 31 17:38:46 mowa219-gjp4-8570p-freebsd kernel: vdev_geom_detach:326[1]: Destroying consumer for gpt/cache1-august.
Aug 31 17:38:53 mowa219-gjp4-8570p-freebsd kernel: acpi0: suspend request timed out, forcing sleep now
Aug 31 17:38:56 mowa219-gjp4-8570p-freebsd kernel: vdev_geom_close_locked:352[1]: Closing access to gpt/cache2-august.
Aug 31 17:38:56 mowa219-gjp4-8570p-freebsd kernel: vdev_geom_detach:315[1]: Detaching from gpt/cache2-august.
Aug 31 17:38:56 mowa219-gjp4-8570p-freebsd kernel: vdev_geom_detach:326[1]: Destroying consumer for gpt/cache2-august.


Extract from /var/log/console.log after the next start:

Aug 31 18:29:26 mowa219-gjp4-8570p-freebsd kernel: Enter full pathname of shell or RETURN for /bin/sh:
Aug 31 18:29:26 mowa219-gjp4-8570p-freebsd kernel: # mount -uw /
Aug 31 18:29:26 mowa219-gjp4-8570p-freebsd kernel: # zfs mount -a
Aug 31 18:29:26 mowa219-gjp4-8570p-freebsd kernel: # zpool status -x
Aug 31 18:29:26 mowa219-gjp4-8570p-freebsd kernel:   pool: august
Aug 31 18:29:26 mowa219-gjp4-8570p-freebsd kernel:  state: ONLINE
Aug 31 18:29:26 mowa219-gjp4-8570p-freebsd kernel: status: One or more devices has been taken offline by the administrator.
Aug 31 18:29:26 mowa219-gjp4-8570p-freebsd kernel:      Sufficient replicas exist for the pool to continue functioning in a
Aug 31 18:29:26 mowa219-gjp4-8570p-freebsd kernel:      degraded state.
Aug 31 18:29:26 mowa219-gjp4-8570p-freebsd kernel: action: Online the device using 'zpool online' or replace the device with
Aug 31 18:29:26 mowa219-gjp4-8570p-freebsd kernel:      'zpool replace'.
Aug 31 18:29:26 mowa219-gjp4-8570p-freebsd kernel:   scan: scrub repaired 0B in 11:06:38 with 0 errors on Mon Jun 12 01:56:37 2023
Aug 31 18:29:26 mowa219-gjp4-8570p-freebsd kernel: config:
Aug 31 18:29:26 mowa219-gjp4-8570p-freebsd kernel:
Aug 31 18:29:26 mowa219-gjp4-8570p-freebsd kernel:      NAME                 STATE     READ WRITE CKSUM
Aug 31 18:29:26 mowa219-gjp4-8570p-freebsd kernel:      august               ONLINE       0     0     0
Aug 31 18:29:26 mowa219-gjp4-8570p-freebsd kernel:        ada0p3.eli         ONLINE       0     0     0
Aug 31 18:29:26 mowa219-gjp4-8570p-freebsd kernel:      cache
Aug 31 18:29:26 mowa219-gjp4-8570p-freebsd kernel:        gpt/cache2-august  OFFLINE      0     0     0
Aug 31 18:29:26 mowa219-gjp4-8570p-freebsd kernel:        gpt/cache3-august  ONLINE       0     0     0
Aug 31 18:29:26 mowa219-gjp4-8570p-freebsd kernel:        gpt/cache1-august  OFFLINE      0     0     0
Aug 31 18:29:26 mowa219-gjp4-8570p-freebsd kernel:
Aug 31 18:29:26 mowa219-gjp4-8570p-freebsd kernel: errors: No known data errors

