** Changed in: zfs-linux (Ubuntu)
Assignee: Colin Ian King (colin-king) => (unassigned)
--
OK, I'll close this bug for now. If it still bites, please feel free to
re-open the bug and I'll get back onto it.
** Changed in: zfs-linux (Ubuntu)
Status: Incomplete => Fix Released
--
Thanks, Colin. I intend to retest with 20.10, but for now, with
deduplication disabled, things seem stable.
--
@Tyson, hopefully the WBT changes helped to resolve this issue. If so,
please let me know and I can close this bug.
--
That warning message is from the Raspberry Pi firmware's Broadcom
get_property sysfs interface: some user-space program has read an old,
deprecated sysfs node and the kernel is just warning it to use the hwmon
sysfs interface instead. It has no bearing on the ZFS or block WBT settings.
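(For reference, the current way to read the SoC temperature is via hwmon;
the exact hwmon index varies between boots, so this is only an illustrative
one-liner:)
  cat /sys/class/hwmon/hwmon*/temp1_input   # value is in millidegrees Celsius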
--
Curiously, it worked fine for quite some time, but I did manage to get
this crash with tar earlier:
[0.00] Booting Linux on physical CPU 0x00 [0x410fd083]
[0.00] Linux version 5.4.0-1018-raspi (buildd@bos02-arm64-052) (gcc
version 9.3.0 (Ubuntu 9.3.0-10ubuntu2)) #20-Ubu
That's great news. Let's keep this bug open for the moment as
"incomplete" and if you don't report back after ~6 weeks or so it will
automatically be closed.
--
And, after 18+ hours, I was able to archive a ZFS snapshot using
BorgBackup, with the WBT settings suggested earlier:
tyson@ubuntu:~$ sudo zfs send Yaesu@Crucial-2TB-1951E22FA633 | borg create --stats BorgStore::Yaesu@Crucial-2TB-1951E22FA633 -
It may also be a good idea to disable any power management on USB too.
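(A hedged sketch of what disabling USB power management could look like;
the sysfs paths are real, but treat the exact loop as illustrative:)
  # keep every USB device fully powered, i.e. no runtime autosuspend
  for f in /sys/bus/usb/devices/*/power/control; do echo on | sudo tee "$f"; done
  # or, persistently, add usbcore.autosuspend=-1 to the kernel command line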
--
It needs more time, in terms of long-running I/O, but changing the WBT
setting at least makes retrieving snapshots a little faster with "zfs
list -t all".
--
All of the drives in the pool are SSDs (2x Crucial BX500 (2TB), 1x
SanDisk Ultra 3D (4TB), 1x Samsung 870 QVO (4TB)), but they obviously
don't align perfectly in terms of performance characteristics. I'll
test the WBT tuneable to see if it makes a difference.
I'm also suspecting that some power
Set the WBT value to 0 for all of the devices whilst another archive
run takes place. The last one ran for just over 20 hours before hitting
the I/O problem.
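(For anyone following along, a sketch of doing that for every device at
once; the sd* glob is an assumption about how the pool members are named:)
  for q in /sys/block/sd*/queue/wbt_lat_usec; do
      echo 0 | sudo tee "$q"
  done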
--
I'm using 16GB of swap on a ZVOL in the pool, but I can also test with
swap on the internal MicroSD card's EXT4 root partition, if it helps.
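(For context, a commonly-cited recipe for a swap ZVOL looks roughly like
the following; the pool name Yaesu is taken from earlier in the thread and
the property choices are illustrative, not necessarily what is in use here:)
  sudo zfs create -V 16G -b $(getconf PAGESIZE) -o compression=off \
      -o logbias=throughput -o sync=always -o primarycache=metadata Yaesu/swap
  sudo mkswap /dev/zvol/Yaesu/swap
  sudo swapon /dev/zvol/Yaesu/swap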
--
Here's a full "dmesg" log, if it helps:
tyson@ubuntu:~$ dmesg
[0.00] Booting Linux on physical CPU 0x00 [0x410fd083]
[0.00] Linux version 5.4.0-1018-raspi (buildd@bos02-arm64-052) (gcc
version 9.3.0 (Ubuntu 9.3.0-10ubuntu2)) #20-Ubuntu SMP Sun Sep 6 05:11:16 UTC
2020 (Ubu
So it may be that the write-back throttling (WBT) for the underlying
devices is getting confused about what the exact throttle rates for these
devices are and somehow getting stuck. It may be worth experimenting by
disabling the throttling and seeing if this gets I/O working again.
For example, to disable it per device:
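(The example is truncated above; a hedged reconstruction of the usual
sysfs commands follows, with sda as an illustrative device name:)
  # disable write-back throttling for one device
  echo 0 | sudo tee /sys/block/sda/queue/wbt_lat_usec
  # writing -1 later restores the kernel's default latency target
  echo -1 | sudo tee /sys/block/sda/queue/wbt_lat_usec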
..and does your raspi have swap enabled?
** Bug watch added: github.com/openzfs/zfs/issues #10522
https://github.com/openzfs/zfs/issues/10522
--
Same issue as reported here: https://github.com/openzfs/zfs/issues/10522
** Also affects: zfs via
https://github.com/openzfs/zfs/issues/10522
Importance: Unknown
Status: Unknown
--
Is it possible to have the full dmesg, from boot to the point where you
see the "INFO... blocked for more than 120 seconds" message?
--
Looks like I could archive about 1.3TB of a 1.64TB snapshot before
things started to go bad again:
[110079.681102] INFO: task z_wr_iss_h:2171 blocked for more than 120 seconds.
[110079.688123] Tainted: P C OE 5.4.0-1018-raspi #20-Ubuntu
[110079.695167] "echo 0 > /proc/sys/kern
Thanks.
I was able to archive and delete 400GB of data without problems earlier
today, which reduced the "REFER" of my data set a little. However, it
looks like I probably need to focus on archiving and removing some of
the older snapshots if I want to trim down the memory utilisation o
You could shrink the DDT by making a copy of the files in place (with
dedup off) and deleting the old file. That only requires enough extra
space for a single file at a time. This assumes no snapshots.
If you need to preserve snapshots, another option would be to send|recv
a dataset at a time. If
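(A hedged sketch of the send|recv variant, assuming a dataset named
Yaesu/data whose snapshots need preserving; the names are illustrative,
and the old dataset is only destroyed once the copy has been verified:)
  sudo zfs snapshot -r Yaesu/data@migrate
  sudo zfs send -R Yaesu/data@migrate | sudo zfs receive -o dedup=off Yaesu/data-nodedup
  # after verifying the copy:
  # sudo zfs destroy -r Yaesu/data && sudo zfs rename Yaesu/data-nodedup Yaesu/data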
Unfortunately, I haven't got a storage device large enough to contain
all of the data from the pool, and much of it cannot be recreated or
restored from another source, so I won't be able to nuke the pool and
rebuild it.
--
Did you destroy and recreate the pool after disabling dedup? Otherwise
you still have the same dedup table and haven’t really accomplished
much.
--
Also received this when trying to archive a large directory:
[ 2055.147509] INFO: task z_wr_iss_h:2169 blocked for more than 120 seconds.
[ 2055.154450] Tainted: P C OE 5.4.0-1018-raspi #20-Ubuntu
[ 2055.161401] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this
me
With deduplication disabled, it eventually gets further with the backup,
but still stalls after leaving it overnight:
[12084.274242] INFO: task z_wr_iss_h:2157 blocked for more than 120 seconds.
[12084.281171] Tainted: P C OE 5.4.0-1018-raspi #20-Ubuntu
[12084.288126] "echo 0 >
In the meantime, I'll see if I can temporarily disable ZPool-level
deduplication (command noted below) and retry the backup run with the
Windows machine.
Whilst it's not the perfect long-term solution, I might look into using
offline deduplication for older, infrequently-accessed data in the pool
(probably with
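(For reference, that is a one-line property change; Yaesu is the pool
name from earlier in the thread, and blocks already written stay in the
dedup table until they are rewritten:)
  sudo zfs set dedup=off Yaesu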
It looks like writing snapshots with "zfs snapshot" will sometimes
stall, even though other commands, like "zfs status", occasionally
appear to work, too.
--
Thanks for your reply (I expected this to go into the ether, since it
seems to be a very common issue with I/O on this hardware platform).
I did try switching from LZJB to LZ4, and got slightly better
performance and reliability, at least from testing with backing up a
Windows 10 machine, usin
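(For reference, the compression switch itself is a one-liner; Yaesu is
again the assumed pool name, and only newly-written blocks pick up the
new algorithm:)
  sudo zfs set compression=lz4 Yaesu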
It is worth noting that using deduplication does consume a lot of
memory.
Deduplication tables consume memory and eventually spill over and
consume disk space - this causes extra read and write operations for
every block of data on which deduplication is attempted, leading to more
I/O waits.
A system
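(A couple of hedged ways to see how large the dedup table actually is;
the pool name Yaesu is assumed from earlier in the thread:)
  sudo zpool status -D Yaesu   # prints a DDT summary/histogram
  sudo zdb -DD Yaesu           # more detailed DDT statistics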