On Mon, Aug 8, 2011 at 06:12, Dongsheng Song wrote:
> On Mon, Aug 8, 2011 at 06:15, Leonid Evdokimov wrote:
>>
>> 2.6.24 result is really surprising me.
>
> Maybe you forgot that the CPUs are not the same:
That's why I have the `memcpy` value: to estimate memory/CPU speed. The
case that surprised me was mono
On Aug 4, 2011, at 9:36 PM, Nick Mathewson wrote:
On Thu, Jul 28, 2011 at 3:35 AM, Nicholas Marriott wrote:
Interesting.
We have seen problems in the past with gettimeofday() not being very
deterministic on Linux, with occasional spikes in the time it takes,
although I don't have the numbers
On Mon, Aug 8, 2011 at 06:15, Leonid Evdokimov wrote:
> 2.6.24 result is really surprising me.
>
Maybe you forgot that the CPUs are not the same:
$ dmesg | grep GHz
[0.056379] CPU0: Intel(R) Xeon(TM) CPU 3.20GHz stepping 04
[0.144076] CPU1: Intel(R) Xeon(TM) CPU 3.20GHz stepping 04
[0.240057]
On Thu, Aug 4, 2011 at 22:36, Nick Mathewson wrote:
> * Make no changes in the code, but suggest to people that they should
> be using EVENT_BASE_FLAG_NO_CACHE_TIME if they are running on a
> platform where time checking is very quick and they care about very
> accurate timing.
I've measured gett
Frankly, having fought the test vs user-configure battle many times, there
comes a point where the user bears some responsibility. This strikes me as one
of those times.
For us, at least, creating another startup delay while libevent probes timers
would be more than annoying. It is completely r
The problem I see is that you can't assume system A would be faster with
timing mechanism X, or system B with timing mechanism Z. This may vary from
device to device.
I believe this is not something that can be reliably solved without querying
the actual system at runtime.
See
a) http://docs.redha
On Thu, Aug 4, 2011 at 18:36 UTC, Nick Mathewson wrote:
> Hmmm. So as near as I can tell, coming out of this whole discussion, I
> can see a few options for Libevent 2.1:
>
> * Make no changes in the code, but suggest to people that they should
> be using EVENT_BASE_FLAG_NO_CACHE_TIME if they are
On Thu, Jul 28, 2011 at 11:32 AM, Leonid Evdokimov wrote:
> Also, FYI, I tried to use event_base_init_common_timeout with a trivial
> timeout callback and got the following numbers:
>
> 18% of single CPU core usage with 65536 250ms persistent timeouts
> 7% of single CPU core usage with 65536 250ms persi
On Thu, Jul 28, 2011 at 3:35 AM, Nicholas Marriott wrote:
> Interesting.
>
> We have seen problems in the past with gettimeofday() not being very
> deterministic on Linux, with occasional spikes in the time it takes,
> although I don't have the numbers to hand (50-100 microseconds or so are
> the
On Wed, Jul 27, 2011 at 22:49, Nick Mathewson wrote:
> so that your event is being scheduled relative to the time when
> epoll returned, not quite relative to the time when event_add() is
> called.
Nick, thanks! That was exactly the extra pair of eyes I was looking for!
I added relative delta betwe
Interesting.
We have seen problems in the past with gettimeofday() not being very
deterministic on Linux, with occasional spikes in the time it takes,
although I don't have the numbers to hand (50-100 microseconds or so are
the numbers I remember offhand).
If this is to be used as a basis for wha
On Wed, Jul 27, 2011 at 11:00:45PM -0400, Nick Mathewson wrote:
> On Wed, Jul 27, 2011 at 10:35 PM, William Ahern wrote:
>
> If you happen to know, is it the same story with clock_gettime()
> performance? I ask because Libevent uses that function in preference
> to gettimeofday() when it's ava
On Wed, Jul 27, 2011 at 10:35 PM, William Ahern wrote:
If you happen to know, is it the same story with clock_gettime()
performance? I ask because Libevent uses that function in preference
to gettimeofday() when it's available.
yrs,
--
Nick
On Wed, Jul 27, 2011 at 07:03:33PM -0700, William Ahern wrote:
> However, the difference between vDSO support and no vDSO support was nothing
> compared to the OpenBSD v. Linux differences. OpenBSD system calls are
> incredibly slow compared to Linux. A million gettimeofday calls on OpenBSD
> take
On Wed, Jul 27, 2011 at 10:10:36PM +0100, Nicholas Marriott wrote:
> On Wed, Jul 27, 2011 at 01:10:05PM -0700, William Ahern wrote:
> > Has anyone tested the performance gain of caching? Linux's gettimeofday()
> > doesn't trap into the kernel; it effectively just reads a shared memory
> > page. II
AFAIK this optimization in Linux is only available on some architectures
and there is a knob with several possible settings.
Without this, gettimeofday() can be quite expensive.
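The knob in question can be inspected from sysfs on Linux; a sketch (path is the standard Linux clocksource location, but whether the vDSO fast path is used for a given source is architecture-dependent):

```shell
# With "tsc" as the current clocksource, the vDSO can usually satisfy
# gettimeofday() without a syscall; "hpet" or "acpi_pm" generally
# means a real trap into the kernel. Linux-only paths.
d=/sys/devices/system/clocksource/clocksource0
cat "$d/current_clocksource" 2>/dev/null
cat "$d/available_clocksource" 2>/dev/null
```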
On Wed, Jul 27, 2011 at 01:10:05PM -0700, William Ahern wrote:
> On Wed, Jul 27, 2011 at 02:49:26PM -0400, Nick Mathew
On Wed, Jul 27, 2011 at 02:49:26PM -0400, Nick Mathewson wrote:
> So because I'm at a conference this week, I can't examine the code too
> closely. But my guess is that this is happening because Libevent
> caches its calls to gettimeofday()/clock_gettime() during its event
> loop, so that your ev
On Wed, Jul 27, 2011 at 11:24 AM, Leonid Evdokimov wrote:
> Hello all,
>
> I assumed that a timeout is never(!) scheduled a bit earlier than
> requested, and today I see that either my assumption is wrong, or I
> became a bit insane while writing the code.
>
> I attached a simple test for the asserti
Hello all,
I assumed that a timeout is never(!) scheduled a bit earlier than
requested, and today I see that either my assumption is wrong, or I
became a bit insane while writing the code.
I attached a simple test for the assertion. For example, I've got the
following strange results:
$ ./event-timeout