On Thu, 4 Dec 2014, Nikolaus Rath wrote:
On 12/04/2014 10:07 AM, Shannon Dealy wrote:
Attached is the log file for a failed unmount.
While my previous attempts were simple umount.s3ql commands, this time a
couple of additional commands were run after rsync completed and before
the umount:
s3qlctrl flushcache /media/server-external
s3qlctrl upload-meta /media/server-external
umount.s3ql /media/server-external
The log looks as if you also sent a SIGUSR1 signal to mount.s3ql. Is that
correct?
No, it ran to completion on its own (though it took forever). I forgot to
mention that I issued this command just before running umount.s3ql:
setfattr -n fuse_stacktrace /media/server-external
though I am not sure whether that makes any difference: the same command
had previously been issued for that mount point while a different file
system (the other one we have been debugging) was mounted there, and I have
no idea whether FUSE retains this setting across unmounts and remounts of a
given mount point.
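For what it's worth, whether or not FUSE remembers the earlier setting
probably shouldn't matter much, since I can simply re-issue it against the
currently mounted file system and then check the log (this assumes the
default log location of ~/.s3ql/mount.log):

setfattr -n fuse_stacktrace /media/server-external
tail -n 50 ~/.s3ql/mount.log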
After the umount.s3ql command, what are the contents of
/root/.s3ql/local:=2F=2F=2Fmedia=2FTransferDrive=2FS3QL_server-external-cache?
Can you confirm that this directory did not exist when calling mount.s3ql?
No, in fact based on the timestamps I can confirm that this directory did
exist: it contains over 4000 files with November 30th timestamps spanning
roughly half an hour. I never gave this directory a thought, since the
mount indicates that the cache is out of date, so I assumed any old data
lying around would be discarded when it downloads the current file system
information.
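For reference, the check above was nothing more elaborate than something
like this (sorting the cache directory by mtime and counting entries):

ls -lt /root/.s3ql/local:=2F=2F=2Fmedia=2FTransferDrive=2FS3QL_server-external-cache | head
ls /root/.s3ql/local:=2F=2F=2Fmedia=2FTransferDrive=2FS3QL_server-external-cache | wc -l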
It should be noted that the fsck on this file system was run not from my
local machine but from the remote server, so when I mount it on my local
machine, it says something to the effect that the local cache is out of
date and then proceeds to download and unpack the current file system
information from the remote server. I realize this discards any chance of
recovering data that might have been transferred before the crash, but past
experience has shown me that in most cases it is significantly faster to
run fsck at the other end and resend the data.
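To be concrete, "running fsck at the other end" just means logging into the
server and running fsck.s3ql directly against the local storage directory
rather than through sshfs; the storage URL below is my reconstruction from
the cache directory name above, so treat the exact path as a guess:

fsck.s3ql local:///media/TransferDrive/S3QL_server-external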
The problem is that fsck.s3ql is incredibly slow for my mounts using
sshfs - well over an order of magnitude slower than my S3 mounts.
Not sure what the issue is with s3ql over sshfs. The file system
performance, while a bit on the slow side, seems reasonable until I try to
fsck or umount the file system; then it takes forever (though maybe I just
notice it more because those are more interactive uses of the file system).
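One thing I may try, to separate raw sshfs throughput from whatever s3ql is
doing on top of it, is a crude write test against the sshfs mount itself
(assuming /media/TransferDrive is the sshfs mount point; the file name is
just a scratch file):

time dd if=/dev/zero of=/media/TransferDrive/ddtest bs=1M count=100 conv=fsync
rm /media/TransferDrive/ddtest

If that is also painfully slow, the problem is below s3ql rather than in it.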
Regards,
Shannon C. Dealy | DeaTech Research Inc.
de...@deatech.com | - Custom Software Development -
USA Phone: +1 800-467-5820 | - Natural Building Instruction -
numbers : +1 541-929-4089 | www.deatech.com