Frankly, having fought the probe-at-startup versus let-the-user-configure
battle many times, there comes a point where the user bears some
responsibility. This strikes me as one of those times.
For us, at least, creating another startup delay while libevent probes timers
would be more than annoying. It is completely r
The problem I see is that you can't assume that system A will be faster with
timing mechanism X, or system B with timing mechanism Y. This can vary from
device to device.
I believe this is not something that can be reliably solved without querying
the actual system at runtime.
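
To make "querying the actual system at runtime" concrete, the probe I have
in mind is only a few dozen lines. This is a rough sketch, not libevent
code; the iteration count and the particular set of clocks compared are my
own assumptions:

/* Time a burst of calls to each candidate clock and compare.  Purely
 * illustrative; a real probe would also want to guard against preemption,
 * warm-up effects, and so on. */
#include <stdio.h>
#include <time.h>
#include <sys/time.h>

#define PROBE_ITERATIONS 100000

static double probe_gettimeofday(void)
{
    struct timeval tv, start, end;
    int i;

    gettimeofday(&start, NULL);
    for (i = 0; i < PROBE_ITERATIONS; i++)
        gettimeofday(&tv, NULL);
    gettimeofday(&end, NULL);
    return (end.tv_sec - start.tv_sec) * 1e6 + (end.tv_usec - start.tv_usec);
}

static double probe_clock(clockid_t id)
{
    struct timespec ts;
    struct timeval start, end;
    int i;

    gettimeofday(&start, NULL);
    for (i = 0; i < PROBE_ITERATIONS; i++)
        clock_gettime(id, &ts);
    gettimeofday(&end, NULL);
    return (end.tv_sec - start.tv_sec) * 1e6 + (end.tv_usec - start.tv_usec);
}

int main(void)
{
    printf("gettimeofday:           %8.0f us\n", probe_gettimeofday());
    printf("CLOCK_MONOTONIC:        %8.0f us\n", probe_clock(CLOCK_MONOTONIC));
#ifdef CLOCK_MONOTONIC_COARSE
    printf("CLOCK_MONOTONIC_COARSE: %8.0f us\n",
        probe_clock(CLOCK_MONOTONIC_COARSE));
#endif
    return 0;
}
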
See
a) http://docs.redha
On Thu, Aug 4, 2011 at 18:36 UTC, Nick Mathewson wrote:
> Hmmm. So as near as I can tell, coming out of this whole discussion, I
> can see a few options for Libevent 2.1:
>
> * Make no changes in the code, but suggest to people that they should
> be using EVENT_BASE_FLAG_NO_CACHE_TIME if they are
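
For anyone following along who has not used that flag: it is set through an
event_config before the base is created, so the cost of opting in is small.
A minimal usage sketch (the wrapper function name is mine, not libevent's):

#include <event2/event.h>

struct event_base *make_uncached_base(void)
{
    struct event_config *cfg = event_config_new();
    struct event_base *base;

    if (!cfg)
        return NULL;
    /* Re-check the clock whenever the cached time is needed, instead of
     * caching it once per event loop iteration. */
    event_config_set_flag(cfg, EVENT_BASE_FLAG_NO_CACHE_TIME);
    base = event_base_new_with_config(cfg);
    event_config_free(cfg);
    return base;
}
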
On Thu, Jul 28, 2011 at 11:32 AM, Leonid Evdokimov wrote:
> Also, FYI, I tried to use event_base_init_common_timeout with trivial
> timeout callback and got following numbers:
>
> 18% of single CPU core usage with 65536 250ms persistent timeouts
> 7% of single CPU core usage with 65536 250ms persi
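
For context, here is roughly the setup I understand that measurement to
describe: many persistent timers sharing one "common timeout". The 65536
count, the 250ms interval, and the trivial callback come from the message
above; the function names and scaffolding are my own sketch:

#include <event2/event.h>

static void trivial_cb(evutil_socket_t fd, short what, void *arg)
{
    (void)fd; (void)what; (void)arg;   /* do nothing, just measure overhead */
}

int setup_common_timeouts(struct event_base *base)
{
    struct timeval tv = { 0, 250 * 1000 };   /* 250ms */
    const struct timeval *common;
    int i;

    /* Ask libevent for a "common timeout" so all the timers share one
     * queue instead of each occupying a slot in the main timeout heap. */
    common = event_base_init_common_timeout(base, &tv);
    if (!common)
        return -1;

    for (i = 0; i < 65536; i++) {
        struct event *ev = event_new(base, -1, EV_PERSIST, trivial_cb, NULL);
        if (!ev)
            return -1;
        event_add(ev, common);
    }
    return 0;
}
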
On Thu, Jul 28, 2011 at 3:35 AM, Nicholas Marriott wrote:
> Interesting.
>
> We have seen problems in the past with gettimeofday() not being very
> deterministic on Linux, with occasional spikes in the time it takes,
> although I don't have the numbers to hand (50-100 microseconds or so are
> the