On Fri, Feb 23, 2018 at 2:48 AM, CM <[email protected]> wrote:
> Hi,
>
>
> Q1. There is a certain idiom used in libuv code (see this) -- namely using
> unprotected static variables to mark missing features (like O_CLOEXEC). In a
> multi-threaded environment these spots have a race condition, which in the
> C++ world is UB (undefined behavior), and the compiler is allowed to do
> interesting stuff that could break your app (in C++ the compiler is allowed
> to assume that your code has no race conditions). I am not so sure about C,
> though -- hence the question: is this ok? Shouldn't these static variables
> be replaced with the C equivalent of std::atomic<...>?

Libuv is a C89 code base and can't use anything from stdatomic.h.

C89 has no concept of concurrency, so you can't really talk about
whether it's UB or not.  It comes down to what the compilers and
architectures we support actually do, and I feel reasonably confident
saying hand-rolled atomic ops won't make a practical difference.

If you're not convinced, think about what the CPU or compiler could
realistically do that would make a difference.  Worst case, libuv
makes a few unnecessary system calls until the store settles (and in
practice system calls are pretty effective memory barriers).

> Another consideration is the possibility of a 'false negative'. For example,
> in the link below, if an invalid mask somehow gets passed and open() returns
> EINVAL -- it will activate the flag, which will stay on until the process
> exits. Not sure if open() can return EINVAL on a bad 'path'.

uv__open_cloexec() is only used internally with known-good flags.  If
you can get libuv to erroneously conclude that O_CLOEXEC isn't
available when it is, please file a bug.

> Q2. libuv tries to deal with EMFILE errors in accept() by closing a "reserve
> fd", accepting the connection, closing it, and reopening the reserve fd
> again. It isn't guaranteed to work in a multithreaded environment. What is
> the recommendation for client code to deal with an EMFILE error that may
> bubble up?

Increase the fd limit and try again. :-)

> As I see it I can:
> - stop listening for a while and try uv_accept()'ing again. But then there
> is no point in having the (mandatory) "EMFILE-handling" trick -- I am dealing
> with this myself anyway, so no point in wasting one fd per event loop
> - tear down the listening socket and try to restart it repeatedly (with some
> delay) until it works -- again, no point in having that trick built in and
> mandatory
>
> Maybe it makes sense to make that trick optional? Unless there is a better
> way of handling this situation...

I think you may be misunderstanding what libuv tries to achieve.  It's
a best-effort attempt to prevent busy-looping where:

1. The listen socket repeatedly wakes up because there are pending
connections, but
2. Said connections can't be accepted because the file descriptor
limit has been reached.

Empirical evidence suggests almost no application handles this
scenario; that's why libuv tries to take care of it.  But again, it's
a best effort.

>
> Q3. The EMFILE-handling trick employs a "reserve" loop->emfile_fd which is
> populated by uv__open_cloexec("/", O_RDONLY) or
> uv__open_cloexec("/dev/null", O_RDONLY). Isn't that a blocking operation?
> Why not dup(STDOUT) instead?

The file descriptor may not be open, and even if it is, the UNIX
semantics of tty file descriptors are such that you don't want to keep
them open unnecessarily.

Apropos blocking: in a technical sense, yes; in a practical sense, no.
/dev/null is not an on-disk entity, and neither, effectively, is / --
its inode data is essentially always in memory.

-- 
You received this message because you are subscribed to the Google Groups 
"libuv" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at https://groups.google.com/group/libuv.
For more options, visit https://groups.google.com/d/optout.