On 2015-09-29 1:47 PM, Gregory Szorc wrote:
>> You'd be surprised. :-)
>>
>> Windows doesn't really have a notion of open file limits similar to
>> Unix. File handles opened using _open() can go up to a maximum of
>> 2048. fopen() has a cap of 512, which can be raised to 2048 using
>> _setmaxstdio(). *But* these are just CRT limits; if you use Win32
>> directly, you can open up to 2^24 handles all at once
>> <https://technet.microsoft.com/en-us/library/bb896645.aspx>. Since
>> we will never need to open that many file handles, you may very well
>> be able to use this approach.
> I experimented with a background thread just for processing file
> closes. This drastically increases performance! However, the queue
> periodically accumulates, and I was seeing "too many open files"
> errors despite using CreateFile()! We do make a call to
> _open_osfhandle() after CreateFile(). I'm guessing the limit is on
> file descriptors (not handles), and _open_osfhandle() triggers the
> default ceiling of 512?
Yeah, that would go through the CRT, so you would be subject to the CRT
limit of 512 (which you can bump up to 2048, as I said before).
> This call is necessary because Python file objects speak in terms of
> file descriptors. Not calling _open_osfhandle() would mean
> re-implementing Python's file object, which I'm going to say is too
> much work for the interim.
Ugh. In that case, closing file handles on a background thread is the
best you can do, I think.
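For reference, the descriptor dependency looks roughly like this in
Python. This is an illustrative sketch, not Mercurial's actual code: on
Windows the descriptor would come from _open_osfhandle() (exposed in
Python as msvcrt.open_osfhandle()), which is the step that re-enters the
CRT's descriptor table; the portable equivalent below obtains the
descriptor from tempfile.mkstemp() instead.

```python
import os
import tempfile

def file_object_from_fd(fd, mode="r+b"):
    # Python file objects are built on top of integer file descriptors.
    # os.fdopen() wraps an existing descriptor in a full file object
    # without reopening the file -- this is why a raw Win32 HANDLE must
    # first be converted to a descriptor (via _open_osfhandle) before
    # Python can use it, and why the CRT descriptor limit still applies.
    return os.fdopen(fd, mode)

fd, path = tempfile.mkstemp()
try:
    with file_object_from_fd(fd, "w+b") as fh:
        fh.write(b"hello")
        fh.seek(0)
        assert fh.read() == b"hello"
finally:
    os.unlink(path)
```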
> Buried in that last paragraph is the fact that a background thread
> closing files resulted in significant performance wins: ~5:00 wall
> time on an operation that previously took ~16:00 wall time! And I'm
> pretty sure it would go faster if multiple closing threads were used.
> Still not as fast as Linux, but much better than the 3x increase from
> before.
Cool!
_______________________________________________
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform