Sure, it should be.

We just ran into the same problem in our (unrelated) project. On a modern
64-bit system it is not at all unusual to have files > 2 GB, and we found
that writing such a file with a single write() call does not succeed.
The reason is, as I said, the note in 'man 2 write' about the maximum
transfer size. We fixed our project with code similar to the one you
suggest (similar, because our project is written in C++; the logic is
the same, but the words are different).

One of our customers complained about crashes; we asked him to provide
a crash dump, which was taken by Apport on Ubuntu. Unfortunately, it
was broken (corrupted). Fortunately, it was exactly 2,147,479,552 bytes
in size, and, fortunately, I remembered that very number from our
recent fix.

The only problem is that I don't know how 'os.write' is implemented
inside Python; but that very number strongly suggests it simply calls
'write()' from libc, with all its 'features' and 'limits'.

I think the simplest live test, such as making a 3,000,000,000-byte
blob (in Python) and trying to write it with a single os.write(),
should promptly confirm the limit or refute my guess about it.
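A sketch of that test (the helper name single_write_size is mine; the expected cap is my guess based on 'man 2 write', not a verified result):

```python
import os
import tempfile

def single_write_size(nbytes):
    """Return how many bytes ONE os.write() call actually transfers
    when asked to write an nbytes-sized buffer to a temporary file."""
    blob = b"\0" * nbytes
    fd, path = tempfile.mkstemp()
    try:
        return os.write(fd, blob)
    finally:
        os.close(fd)
        os.remove(path)

# On Linux, if the guess is right, probing with a 3 GB buffer should
# return 0x7ffff000 == 2,147,479,552 rather than the full size:
#     single_write_size(3_000_000_000)
```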

In case this is a Linux 'feature' (I don't know whether such a
limitation exists on Windows), I think it would be good to report the
issue to the Python core developers as well (at a minimum, the docs
could be brought up to date).

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2109979

Title:
  max size of user coredump is 0x7ffff000

To manage notifications about this bug go to:
https://bugs.launchpad.net/apport/+bug/2109979/+subscriptions
