On 19 October 2011 19:21, Gregory Collins <[email protected]> wrote:
> On Wed, Oct 19, 2011 at 10:13 AM, Kazu Yamamoto <[email protected]> wrote:
>> Hello,
>>
>> I'm measuring the performance of the insertion operation of red-black
>> trees. For input, three kinds of [Int] are prepared: increasing
>> order, decreasing order, and random.
>>
>> The random case is 4 or 5 times slower than the others. I'm afraid
>> that my benchmark also measures the cost of random Int generation.
>>
>> My benchmark code can be found at:
>>
>>        
>> https://github.com/kazu-yamamoto/llrbtree/blob/master/bench/insert/Bench.hs
>>
>> Would anyone kindly take a look and tell me whether or not my criterion
>> code measures the cost of random Int generation? If so, could you
>> suggest how to avoid it?
>
> The code looks OK to me -- you've deepseq'ed the list, and forcing it
> to whnf should force the deepseq. Also, criterion runs your benchmark
> many times; if your code were measuring the RNG time, that cost would
> be paid only once and would show up in the criterion output as an
> unusually large outlier.
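The pattern Gregory describes -- fully evaluate the input list first, then
time only the function applied to it -- can be sketched without criterion
using just base and deepseq. The names here (nextSeed, randomInts, timeWhnf)
are illustrative, and the hand-rolled LCG merely stands in for System.Random
so the sketch is self-contained:

```haskell
import Control.DeepSeq (NFData, deepseq)
import Control.Exception (evaluate)
import Data.List (foldl')
import System.CPUTime (getCPUTime)

-- A tiny hand-rolled LCG stands in for System.Random here, so the
-- example needs only base and deepseq (both ship with GHC).
nextSeed :: Int -> Int
nextSeed s = 6364136223846793005 * s + 1442695040888963407

-- n pseudo-random Ints from a starting seed.
randomInts :: Int -> Int -> [Int]
randomInts n seed = take n (iterate nextSeed seed)

-- Force the input fully, then time only (f x) to whnf -- the same
-- separation criterion achieves when the list is deepseq'ed up front.
timeWhnf :: NFData a => (a -> b) -> a -> IO Double
timeWhnf f x = do
  _  <- evaluate (x `deepseq` ())  -- pay for generation before the clock starts
  t0 <- getCPUTime
  _  <- evaluate (f x)             -- whnf of the result, like criterion's whnf
  t1 <- getCPUTime
  return (fromIntegral (t1 - t0) * 1e-12)  -- picoseconds -> seconds

main :: IO ()
main = do
  secs <- timeWhnf (foldl' (+) 0) (randomInts 100000 42)
  print secs
```

If the RNG cost were being measured, it would appear once, in the first
timed run only -- exactly the outlier behaviour described above.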

Wouldn't it be better to write a sensible NFData instance than to
explicitly use deepseq?
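An NFData instance for a red-black tree might look like the following. This
is a sketch against a generic RBTree type -- the actual constructors in the
llrbtree package may differ:

```haskell
import Control.DeepSeq (NFData (..), deepseq)

-- A generic red-black tree; the real type in llrbtree may be shaped
-- differently, but the instance follows the same pattern.
data Color = R | B

data RBTree a = Leaf
              | Node Color (RBTree a) a (RBTree a)

instance NFData Color where
  rnf c = c `seq` ()

instance NFData a => NFData (RBTree a) where
  rnf Leaf           = ()
  rnf (Node c l x r) = rnf c `seq` rnf l `seq` rnf x `seq` rnf r

main :: IO ()
main = do
  let t = Node B (Node R Leaf 1 Leaf) 2 Leaf :: RBTree Int
  t `deepseq` putStrLn "fully evaluated"
```

With such an instance in scope, criterion's nf can force the result tree
directly instead of pairing an explicit deepseq with whnf.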

However, you could try to fix this by using a fresh seed (e.g. via
newStdGen) rather than a fixed one from mkStdGen; this would give you
_really_ random values and let you see whether the run times actually
differ.
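Assuming the random package, swapping the fixed seed for a fresh one is a
one-line change -- a minimal sketch (the list length is arbitrary):

```haskell
import Control.DeepSeq (deepseq)
import System.Random (newStdGen, randoms)

main :: IO ()
main = do
  gen <- newStdGen  -- seeded from the environment, unlike mkStdGen 42
  let xs = take 100000 (randoms gen) :: [Int]
  -- Force the whole list before any timing, as in the benchmark code.
  xs `deepseq` print (length xs)
```

Note that with a fresh seed each run benchmarks a different input, so
run-to-run variance will include genuine input variation, not just noise.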

>
> G
> --
> Gregory Collins <[email protected]>
>
> _______________________________________________
> Haskell-Cafe mailing list
> [email protected]
> http://www.haskell.org/mailman/listinfo/haskell-cafe
>



-- 
Ivan Lazar Miljenovic
[email protected]
IvanMiljenovic.wordpress.com

