Feel free to close this. The bug has not affected me in over a decade.
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/317781
Title:
Ext4 data loss
Shouldn't this problem be closed by now if this bug was fixed?
As far as I understand, the problem has been fixed for overwriting by
rename and overwriting by truncate. Is it an issue at all for just
overwriting part of a file, without truncating it first?
I realize that there are basically no guarantees when fsync() is not
used, but will such writes to alrea
Can someone point me toward documentation for "data=alloc_on_commit"?
I am getting 0 byte files after system freezes on Ubuntu 10.04.01
(amd64) with kernel version 2.6.32-25. Just want to understand how one
uses alloc_on_commit and how it works before I use it, and I can't find
any proper document
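For reference while proper documentation is lacking: ext4 data modes are set as mount options, so assuming your kernel supports the mode under this name, it would go in /etc/fstab like any other option (the UUID and mount point below are placeholders):

  UUID=xxxx-xxxx  /home  ext4  defaults,data=alloc_on_commit  0  2

After a remount, /proc/mounts shows whether the kernel actually accepted the option.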
Installed Karmic with ext4 on a new PC today. Installed FGLRX
afterwards. All of a sudden the PC froze completely. No mouse-movement,
no keyboard. Hard reset. After reboot lots of configuration files that
were recently changed had zero length. The system became unusable due to
this (lots of error m
Hi,
I found a workaround to the problem of determining the cleartext filenames.
*Before* you delete the zero-byte files, back 'em up:
1) find .Private -size 0b | xargs tar -czvf zerofiles.tgz
2) Unmount your encrypted home
3a) Temporarily move the "good" files away:
mv .Private .Privat
I have added a separate bug for the problem of (de-)crypting filenames,
see https://bugs.launchpad.net/ecryptfs/+bug/493779
Yours,
Steffen
Hi,
I am also bitten by the above ecryptfs messages slowly filling my /var/log,
and have a follow-up question about the cleanup workaround presented by
Dustin in comment #57 of this bug:
Is there any way to determine (=decrypt) which files have been messed up,
so I know if there is anything importa
Ted Ts'o:
"You can opine all you want, but the problem is that POSIX does not
specify anything ..."
I'll opine that POSIX needs to be updated.
The use of the create-new-file-write-rename design pattern is pervasive,
and it is expected that after a crash either the new contents or the old
contents of t
Would it be possible to create sync policies (per distribution, per
user, per application) and in this way offer a flexible compromise that
every user could choose or change?
Ok, I'll try installing 2.6.30 final for Ubuntu and report a new bug. As for
the fsck, the only time I didn't boot into single-user mode and run fsck by
hand was that one. My fstab entry is simple - "LABEL=Home /home ext4
relatime,defaults 0 0" - and most errors I had were dat
Jose, please open a separate bug, as this is an entirely different
problem. (I really hate Ubuntu bugs that have a generic description,
because it seems to generate "Ubuntu Launchpad Syndrome" --- a problem
which seems to cause users to search for bugs, see something that looks
vaguely similar, a
And after another clean shutdown and a reboot, I finally had to reformat
my home partition and restore it from a backup, as the fsck gave a huge
amount of errors and unlinked inodes. Gone back to ext3, will wait for
2.6.30 final before new tests. Here is the final dmesg from just after the
fsck. As w
After another reboot some more problems with kwallet, here is dmesg.
** Attachment added: "dmesg.txt"
http://launchpadlibrarian.net/27107305/dmesg.txt
Now I didn't even have a crash, but on reboot my kdewallet.kwl file was empty. I
removed it, and in syslog I got the following:
"EXT4-FS warning (device mmcblk0p1): ext4_unlink: Deleting nonexistent file
(274), 0
I just subscribed to this bug as I started seeing this behaviour with 2.6.30-rc6
on my Aspire One. First it was the 0-length files after a crash (the latest
intel drivers still hang sometimes at suspend/resume or at logout/shutdown, and
only the magic REISUB gets you out of it), and once I saw my /
No worries, André! Some more feedback on 2.6.30: I've been using
2.6.30-rc3 then 2.6.30-rc5 without problems in Jaunty for several weeks
now (I used to get kernel panics at least twice a week with 2.6.28) and
am now trying 2.6.30-rc6. Still so far so good.
Hi
Thanks for your answers, Rocko. Today I have installed the Karmic Koala
Alpha 1 with Kernel 2.6.30-5-generic, and it seems that all the former
problems with ext4 are gone. For testing purposes I have created 5 big
DVD ISO files (together about 30 GB of data), moved them around in the
system, co
@André: you might be experiencing one or two different bugs that are
possibly related to ext4 in the Jaunty kernel - see
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/348731 and
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/330824. The latter
happens when you try and delete lots of fi
Hello everyone
Just reporting some observations after making a brand new installation
of Ubuntu 9.04 with ext4 as default file system on my Sony Vaio VGN-
FS195VP. Since the installation some days ago I have again had four hard
locks, but luckily - unlike my experiences some weeks ago - without any
da
@Theo: would it be hard to implement something like I suggested, i.e.
storing rename backup metadata for crash recovery? I think in the
discussion on your blog someone says that reiserfs already does this via
'save links' (comment 120).
Alternatively, if there were a barrier to all renames instead o
@Theo
Sorry for the false alarm. Filed it as soon as I found the 0 byte file while
still investigating the source. I've since created and submitted a patch (via
launchpad, https://bugs.launchpad.net/ubuntu/+source/gajim/+bug/349661) that I
believe should correct gajim's behavior in this area.
I agree with Daniel - consistency should be a primary objective of any
journaling file system.
Would it be possible to do something like store both the old and new
inodes when a rename occurs, and to remove the old inode when the data
is written? This way it could operate like it is currently, exc
First of all, the program under discussion got it wrong. It shouldn't
have unlinked the destination filename. But the scenario it unwittingly
created is *identical* to the first-time creation of a filename via a
rename, and that's a very important case. EVERY program will encounter
it the first tim
On Fri, 2009-03-27 at 22:55 +, Daniel Colascione wrote:
> The risk isn't data loss; if you forgo fsync, you accept the risk of
> some data loss. The issue that started this whole debate is consistency.
>
> The risk here is of the system ending up in an invalid state with zero-
> length files *
The risk isn't data loss; if you forgo fsync, you accept the risk of
some data loss. The issue that started this whole debate is consistency.
The risk here is of the system ending up in an invalid state with zero-
length files *THAT NEVER APPEARED ON THE RUNNING SYSTEM* suddenly
cropping up. A zer
"If you accept that it makes sense to allocate on rename commits for
overwrites of *existing* files, it follows that it makes sense to commit
on *all* renames."
Renaming a new file over an existing one carries the risk of destroying
*old* data. If I create a new file and don't rename it to anythi
If you accept that it makes sense to allocate on rename commits for
overwrites of *existing* files, it follows that it makes sense to commit
on *all* renames. Otherwise, users can still see zero-length junk files
when writing a file out for the first time. If an application writes out
a file using
"The filesystem should be fixed to allocate blocks on *every* commit,
not just ones overwriting existing files."
alloc_on_commit mode has been added. Those who want to use it (and take
the large associated performance hit) can use it. It's a tradeoff that
is and should be in the hands of the ind
@Daniel,
Note that if you don't call fsync(), and hence you don't check the
error returns from fsync(), your application won't be notified about any
possible I/O errors. So that means if the new file doesn't get written
out due to media errors, the rename may also end up wiping out the
exist
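To make the error-checking point concrete, here is a minimal sketch (the function name and paths are invented for illustration) of what checking fsync()'s return buys you:

  /* sketch: only replace the old file if the new data is known to be
     on disk; an unchecked fsync() would silently swallow an EIO here */
  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  int safe_commit(int fd, const char *tmppath, const char *path)
  {
      if (fsync(fd) != 0) {            /* check the error return! */
          perror("fsync");             /* e.g. EIO on a media error */
          close(fd);
          unlink(tmppath);             /* keep the existing file intact */
          return -1;
      }
      close(fd);
      return rename(tmppath, path);    /* new data is on disk; safe to swap */
  }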
What that code does is stupid, yes. It shouldn't remove the original
unless the platform is win32. *Windows* (except with Transactional NTFS)
doesn't support an atomic rename, so it's no surprise that Python under
Windows doesn't either.
You're seeing a zero-length file because Ts'o's fix for ext4
That looks like it removes the file before it does the rename, so it
misses the special overwrite-by-rename workaround. This is slightly
unsafe on any filesystem, since you might be left with no config file
with the correct name if the system crashes in a small window, fsync()
or no. Seemingly Py
@Theo,
Been digging through the source to track down how it does it. Managed
to find it. It does use a central consistent method, which does use a
tempfile. However, it does not (as of yet) force a sync. I'm working
on getting that added to the code now. Here's the python routine it
uses:
se
@Jamin,
We'd have to see how gajim is rewriting the application file. If it is
doing open/truncate/write/close, there will always be the chance that
the file would be lost if you crash right after the truncate. This is
true with both ext3 and ext4. With the workaround, the chances of
losing th
@Rocko,
If you really want this, you can disable delayed allocation via the
mount option, "nodelalloc". You will take a performance hit and your
files will be more fragmented. But if you have applications which
don't call fsync(), and you have an unstable system, then you can use
the mount opti
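For anyone who wants to try this, nodelalloc can be tested with a remount before being made permanent (device and mount point here are placeholders):

  mount -o remount,nodelalloc /home    # try it on the running system
  # to keep it, add nodelalloc to the options field in /etc/fstab, e.g.:
  # UUID=xxxx-xxxx  /home  ext4  defaults,nodelalloc  0  2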
@Theo: I vote for what (I think) lots of people are saying: if the file
system delays writing of data to improve performance, it should delay
renames and truncates as well so you don't get *complete* data loss in
the event of a crash... Why have a journaled file system if it allows
you to lose both
@Theo
The file in question was a previously existing configuration file for my IM
client (gajim). All IM accounts and preferences were lost. Not a huge deal,
but definitely a preexisting file. The system kernel panicked (flashing caps
lock) while chatting. The kernel panic is a separate issu
@189: Jamin,
The fix won't protect against a freshly written new file (configuration
or otherwise); it only protects against a file which is replaced via
rename or truncate. But if it was a file that previously didn't exist,
then you can still potentially get a zero-length file --- just as you
c
I know this report claims that a fix is already in Jaunty for this
issue. However, I just found myself with a 0 byte configuration file
after a system lockup (flashing caps lock).
$ uname -ra
Linux odin 2.6.28-11-generic #37-Ubuntu SMP Mon Mar 23 16:40:00 UTC 2009 x86_64
GNU/Linux
Daniel Phillips, developer of the Tux3 filesystem, wants to make sure that
renames come after the file data is written, even when delayed writing of
metadata is introduced to it:
http://mailman.tux3.org/pipermail/tux3/2009-March/000829.html
Linus made some comments about the filesystem's behaviour:
http://lkml.org/lkml/2009/3/24/415
http://lkml.org/lkml/2009/3/24/460
The problem seems to be 2.6.28-11. My system is stable with 2.6.28-9. I have
reported bug #346691.
@Carey
Sorry, I did read a fair amount of the comments but realized that my
problem was slightly different...
I already investigated the hardware side, and all kinds of tests (short,
long, and I don't remember the third kind) returned no errors! I also
reinstalled everything again on an ext3 sam
@nicobrainless: Sounds like a hardware failure to me. I'd suggest
investigating the smartctl utility (in the package 'smartmontools') to
check on the general health of the drive.
Note that this isn't a troubleshooting forum, nor is 'too many
comments' really a good excuse for not reading them.
BTW I am running a fully updated Jaunty with kernel 2.6.28-11... and it
started about 5 days ago
I am experiencing the exact same problem... I was using an ext3 partition
converted to ext4, and ended up reinstalling everything as data loss killed
all of /etc...
My fresh Jaunty on a fresh ext4 has already given me 'read-only file system'
twice, and now I don't know what to do...
Olli Salonen wrote
In this post, Ts'o writes: "Since there is no location on disk, there is
no place to write the data on a commit; but it also means that there is
no security problem." Well, this means that the specific security
problem identified, exposure of information to those who are not
authorized to see it,
Shit, I'm not used to Launchpad: I didn't see all the comments it hid by
default, I read the entire first page and didn't see my comment was
answered. Ignore what I wrote, it's been covered already.
Theodore, you're a bright guy, but this really means that you can't use
EXT4 for anything at all.
fsync() is slow on many (most?) filesystems, and it grinds the entire
system to a halt. What you're saying is that those applications have
to know the details of the filesystem implementation, and
Graziano: please open another bug for your issues. The only files
touched by the patches for this bug are in fs/ext4, and nothing else in
the source tree.
http://kernel.ubuntu.com/git?p=ubuntu/ubuntu-jaunty.git;a=commit;h=f305d27b95849da130c3319e51054309c371e92a
http://kernel.ubuntu.com/git?p=ubun
Well, at least it seems that in tweaking the kernel to make ext4 behave
nicely, something has been broken on the JFS side.
My box uses JFS for the root FS, and all worked OK at least up to two days ago
(2.6.28-9, IIRC).
With both 2.6.28-10 and 2.6.28-11, all sorts of filesystem corruption started
to pop up, f
Wow, this thing sure is being actively discussed. I might as well weigh
in:
- I side with those that say "lots of small files are a good idea" --
the benefits of having many small human-readable config files are self-
evident to anyone who's ever had to deal with the Windows registry.
Suggesting t
There have been a lot of the same arguments (and more than a few
misconceptions) made over and over again in this thread,
so I've created a rather long blog post, "Don't fear the fsync!", which
will hopefully answer a number of the questions people have raised:
http://thunk.org/
@Volodymyr
I finished recompiling the kernel with Theodore Ts'o's patches, and reran
Volodymyr's test cases with the patched kernel. The results are:
File System   Method   Performance (Typical, Minimum, Maximum)   #Lost   %Lost
ext4patch     1        0.44, 0.41, 0.50                          1       1.00%
ext4patch
This bug was fixed in the package linux - 2.6.28-10.32
---
linux (2.6.28-10.32) jaunty; urgency=low
[ Amit Kucheria ]
* Delete prepare-ppa-source script
[ Andy Isaacson ]
* SAUCE: FSAM7400: select CHECK_SIGNATURE
* SAUCE: LIRC_PVR150: depends on VIDEO_IVTV
- LP: #3414
Guys, see comment 45 and comment 154. A workaround is going to be
committed to 2.6.30 and has already been committed to Jaunty. The bug
is fixed. There will be no data loss in these applications when using
ext4; it will automatically fsync() in these cases (truncate then
recreate, create new and
KDE has a framework for reading and writing application settings. So the
solution should be simple: switch on the fsync call in the same Ubuntu
release where ext4 becomes the default file system. Does anybody know what
the situation is in the GNOME environment? Does a similar switch exist?
Of course
To conclude everything:
No distribution should ship with ext4 as the default, because ext3's
behaviour - broken though it was - is what everyone relies on. And it
should not be shipped with ext4 as default for the same reason people are
warned about XFS: potential data loss on crashes.
So as this me
@Tom
I might not be good at making my point sometimes, but you clearly sum
things up very well. Way better than I do.
@Aryeh
In ext3, too many applications use fsync; I think that is a holdover from
the ext2 era, when not syncing could lead to corrupt filesystems, not
just empty files. Same with
@CowBoyTim
I agree with you. I work with real-time industrial systems, where the
shop floor systems are considered unreliable. We have all the same
issues as a regular desktop user, except our users have bigger hammers.
The attraction of ext3 was the journalling with the ordered data mode.
If po
@Volodymyr
I did some experimenting with your test cases. My results so far are:
File System   Method   Performance (Typical, Minimum, Maximum)   #Lost   %Lost
ext3          1        0.43, 0.42, 0.50                          1       1.00%
ext3          2        0.32, 0.30, 0.33                          0       0.00%
ext3          3        0.19, 0.16, 0.
@CowBoyTim
Power failure during fsync() will result in a half-written file, but
that's why the correct sequence is
1) Create new temp file
2) Write to new temp file
3) fsync() new temp file
4) rename() over old file
If there's a power failure before or during step 3, the temp file will
be partia
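Spelled out in C, the sequence above looks roughly like this (a minimal sketch: the function name and file modes are made up, and short writes and fsync of the containing directory are glossed over):

  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  int replace_file(const char *path, const char *tmppath,
                   const char *buf, size_t len)
  {
      int fd = open(tmppath, O_WRONLY | O_CREAT | O_TRUNC, 0644); /* 1 */
      if (fd < 0)
          return -1;
      if (write(fd, buf, len) != (ssize_t) len                    /* 2 */
          || fsync(fd) != 0) {                                    /* 3 */
          close(fd);
          unlink(tmppath);  /* the old file is still intact */
          return -1;
      }
      close(fd);
      return rename(tmppath, path);                               /* 4 */
  }

If the crash happens before or during step 3, only the temp file is damaged and the old file survives; once step 4 completes, the new contents are what you see after recovery.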
@helios
I fail to see how this would ever be a KDE bug. fsync only
*helps*; it will never ever make sure that things are solved
permanently for good. The reason a rename() is used with a temp
file is that *nothing* can get 100% durability (even using fsync). App
developers want at
For the application side of things I filed a bug report for KDE:
https://bugs.kde.org/show_bug.cgi?id=187172
I updated my "Lame test case" to include more scenarios and added a
version implemented in C.
You can use it to estimate the reliability of the ext3/4 filesystems.
I will post my results (much) later - I will be busy with sport dancing
on Sun-Mon.
Can anybody prepare a small QEMU or VMWare image with a recent patched
Ted,
I am not sure if this was covered yet but if so I apologize in advance
for beating a dead horse.
The current opinion is that if you want to make metadata (e.g. the out-of-
order rename case) reliable you must use sync... OK, that would require
a huge number of changes to many programs, but it i
I wonder if KDE really rewrites its config files on startup? Why write
anything at all on startup? Maybe a KDE dev can comment on this...
@Adrian Cox
I'm sorry but I was in the middle of work so I just quickly restored a
backup without really looking at what happened.
Some MYD files were truncated to 0 but I didn't take the time to
investigate the cause. It was a standard Jaunty MySQL 5.0.x install
using MyISAM tables.
@Bogdan Gribincea
While the discussion here has concentrated on rewriting config files,
you also report a loss of MySQL databases. What table configuration were
you using, and were the underlying files corrupted or reduced to 0
length?
InnoDB is intended to be ACID compliant, and takes care to ca
@mkluwe:
Filesystems are databases by definition (they are structured sets of data).
But of course not all databases are alike or work the same way, because
they serve different purposes...
Maybe it would be good to amend POSIX to provide an (optional?)
mechanism for guaranteed transactional integrity for some
@Olaf
From the manpage, fsync() transfers "all modified in-core data of the
file referred to by the file descriptor fd". So it should really be all
pending writes, not just the writes that take place using the current
fd.
I cannot really reboot any of my machines right now, but it does make
sense
Just in case this has not been done yet: I have experienced this »data
loss problem« with XFS, losing the larger part of my gnome settings,
including the evolution ones (uh-oh).
Alas, filesystems are not databases. Obviously, there's some work to be
done in application space.
@Theodore Ts'o
> 3.a) open and read file ~/.kde/foo/bar/baz
> 3.b) fd = open("~/.kde/foo/bar/baz.new", O_WRONLY|O_TRUNC|O_CREAT)
> 3.c) write(fd, buf-of-new-contents-of-file, size-of-new-contents-of-file)
> 3.d) fsync(fd) --- and check the error return from the fsync
> 3.e) close(fd)
> 3.f) rename
@Michel Salim
> You can only fsync given a file descriptor, but I think writing an
> fsync binary that opens the file read-only, fsync on the descriptor,
> and close the file, should work.
Wouldn't that only guarantee the updates through that descriptor (none)
are synced?
Per Ted's suggestion
(https://bugs.edge.launchpad.net/ubuntu/+source/linux/+bug/317781/comments/56),
I've applied the following 3 commits in order to preserve some ext3
semantics.
http://kernel.ubuntu.com/git?p=ubuntu/ubuntu-jaunty.git;a=commit;h=0903d3a2925f3cffb78ca611c4e3356ac7ffef8a
http://ker
I wrote something similar, but with one change -- it turns out you must
have write access to the file you want to fsync (or fdatasync).
It seems to work, but I have not had time to do a power loss simulation.
Would be useful performance-wise on any system but ext3 (where calling
this is identical
Just for reference:
http://flamingspork.com/talks/2007/06/eat_my_data.odp
> You can only fsync given a file descriptor, but I think writing an
> fsync binary that opens the file read-only, fsync on the descriptor,
> and close the file, should work.
Use this little program to verify your assumptions (I have no time right
now):
#include <fcntl.h>   /* open(), O_RDONLY */
#include <unistd.h>  /*
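Since the listing above was truncated in the archive, here is a minimal self-contained version of the same idea (a sketch, not the original program): open the named file read-only, fsync() it, and report the result.

  #include <fcntl.h>   /* open(), O_RDONLY */
  #include <stdio.h>   /* fprintf(), perror() */
  #include <stdlib.h>  /* EXIT_SUCCESS, EXIT_FAILURE */
  #include <unistd.h>  /* fsync(), close() */

  int main(int argc, char *argv[])
  {
      if (argc != 2) {
          fprintf(stderr, "usage: %s FILE\n", argv[0]);
          return EXIT_FAILURE;
      }
      int fd = open(argv[1], O_RDONLY);
      if (fd < 0) {
          perror("open");
          return EXIT_FAILURE;
      }
      /* per the fsync() manpage, this flushes all dirty in-core data of
         the file, not just writes made through this descriptor */
      if (fsync(fd) != 0) {
          perror("fsync");
          close(fd);
          return EXIT_FAILURE;
      }
      close(fd);
      return EXIT_SUCCESS;
  }

Note the earlier observation in this thread that some setups require write access for fsync(); if so, O_RDONLY would need to become O_RDWR.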
@Volodymyr,
You can only fsync given a file descriptor, but I think writing an fsync
binary that opens the file read-only, fsync on the descriptor, and close
the file, should work.
I created a lame test case to catch the bug. Numbers:
Filesystem, Method, Performance, Percentage of data loss
ext3, (1), 0.50, 1% (one file is partial)
ext3, (2), 0.44, 0% (one temporary file is partial)
ext3, (3), 0.37, 0% (one temporary file is partial)
ext4, (1), 0.50, 102% (all files are zeroed
@Theo: thanks for addressing this point.
> So it's not as simple as "delaying the truncate"; you can delay
> committing all operations in the journal, but you can't just delay one
> transaction but not another.
I think this is the overall idea people are expressing here, delaying
the entire journal o
Theo, does that then imply that setting the writeback time to the
journal commit time (5 seconds) would also largely eliminate the
unpopular behavior?
How much of the benefit of delayed allocation do we lose by waiting a
couple seconds rather than minutes or tens of seconds? Any large
write could
@pablomme,
Well, until the journal has been committed, none of the modified meta-
data blocks are allowed to be written to disk --- so any changes to the
inode table, block allocation bitmaps, inode allocation bitmaps,
indirect blocks, extent tree blocks, directory blocks, all have to be
pinned in
@Carey,
>Theo, does that then imply that setting the writeback time to the
>journal commit time (5 seconds) would also largely eliminate the
>unpopular behavior?
You'd need to set it to be substantially smaller than the journal commit
time, (half the commit time or smaller), since the two timers
>But why can't the metadata writes be delayed as
>well? Why do they have to be written every five seconds
>instead of much later, whenever the data happens to get written?
Fundamentally the problem is "entangled commits". Normally there are
multiple things happening all at once in a filesystem
@Chris
I hate to keep repeating myself, but the 2.6.30 patches will cause open-
write-close-rename (what I call "replace via rename") to have the
semantic you want. It will do that by forcing a block allocation on
the rename, and then when you do the journal commit, it will block
waiting for the
@Arial
> No. If I overwrite a file the filesystem MUST guarantee that either
the old version will be there or the new one.
Err, no, it's perfectly fine for a filesystem to give you a zero-byte
file if you truncate, then write over the truncated file. Why should the
filesystem try to guess the futu
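For clarity, the risky pattern being described is the in-place rewrite sketched below (the file name is invented for illustration); between the truncating open and the delayed writeback there is a window in which a crash leaves a zero-length file:

  #include <fcntl.h>
  #include <unistd.h>

  void rewrite_in_place(const char *buf, size_t len)
  {
      /* old contents are destroyed immediately... */
      int fd = open("config", O_WRONLY | O_TRUNC);
      /* ...while the new contents may sit in the page cache for a while */
      write(fd, buf, len);
      close(fd);  /* no fsync(), no rename(): a crash here => empty file */
  }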
@Hiten
I agree with your comment, I think. I was about to make that same post.
I would even dare to say that fsync(fd) is an evil call that never
should be used by any application. The reason for this is very simple:
it doesn't make a difference: if fsync(fd) needs to write 100MB to disk
and a po
Theo, I have tremendous respect for the work you did, but you are wrong.
> If you are going to be using a single monolithic config, then you
> really want to fsync() it each time you write it out. If some of the
> changes are bullsh*t ones where it really doesn't matter if you lose
> the last
>
@Theodore,
As a scalable server developer with 25 years experience, I am fully
aware of the purpose of fsync, fdatasync and use them if and only if the
semantics I want are "really commit to disk right now". To use them at
any other time would be an implementation error.
I further agree delayed
@Theodore,
> Well, that's not how I would describe it, although I admit in practice it has
> that effect. What's happening is that the journal is still being committed
> every 5 seconds, but dirty pages in the page cache do not get flushed out if
> they don't have a block allocation assigned to th
@Kai,
>While that may be true (and I suppose it is ;-)) what happens
>to all those users sticking to ext3 or similar fs' when "suddenly"
>all apps to fsync() on every occassion?
>
>It may not hurt ext4 performance that much but probably other
>fs' performance.
Actually the problem with fsync() be
@Theodore:
Please stop blaming this on binary drivers; they are not the only reason for
this happening: open source drivers aren't magically bug-free, power losses
happen, and hardware breaks or starts behaving flaky...
@Theo
Sorry, but you seem to avoid the actual point people are making. No one
says that delayed allocation is bad in general or questions its benefit,
but that reordering the operations on a file and then delaying the data-
writing part, but not the renaming part, is prone to serious data loss
if
@Theodore,
> Note that the "fsync causes performance problems meme got" started
> precisely because of ext3's "data=ordered" mode. This is what causes
> all dirty blocks to be pushed out to disks on an fsync(). Using ext4 with
> delayed allocation, or ext3 with data=writeback, fsync() is actually
As for a configuration registry:
Filesystems are about files in exactly the same way that SQLite and other
databases are about records. Actually, files could be treated as a sort of
record in a very specific kind of database (if we ignore some specifics).
And we, the users, expect BOTH databases a
> The reason why the write operation is delayed is because of
> performance.
Yup, I understand that and I'm all for it. Delay writing for hours if
that improves performance further, that's great. But the question
remains: why is the _truncate_ operation not delayed as well? The gap
between the trunc
@Kai,
>But you can imagine what happens to fs performance if
>every application does fsyncs after every write or before
>every close. Performance would suffer badly.
Note that the "fsync causes performance problems meme got" started
precisely because of ext3's "data=ordered" mode. This is what
@Brett,
Servers generally run on UPS's, and most server applications generally
are set up to use fsync() where it is needed. So you should be able to
use ext4 in good health. :-) I'm using a GNOME desktop myself, but I
use a very minimalistic set of GNOME applications, and so I don't see a
la
So servers, presumably running with CLI and not GNOME or KDE should be
relatively unaffected by this, correct?
I've been using ext4 on my desktop (/, not /home) for quite some time
and have seen no problems. I like the performance boost it gives me and
would like to give it to my servers as well..