I do not really doubt your numbers, Tibor. But I think there's a huge
difference between "real world" application benchmarks and just
measuring what can be done in a constructed test case. Both are of
course valuable, but they have somewhat different applicability. And I
think SSDs make a huge difference, probably more of a difference than
most of us have considered.

With an SSD you seem to be much more sensitive to keeping the device
busy. With spinning media your code would almost always be "ahead" of
the actual disk, in the sense that you'd be pushing I/O to the OS but
ultimately waiting for that disk to spin around at 7200 rpm and for
that head to move. You can park a bus in those time slots without any
problem. In the "gather" phase, when we're just trying to push maximum
I/O, it's quite easy to see that a 50 ms period every second spent not
doing I/O reduces overall throughput by 5%. So a GC pause will likely
kill your I/O throughput.
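
To make the arithmetic explicit, with a device number picked purely for
illustration: if the SSD could sustain 500 MB/s but the process is off
doing GC for 50 ms out of every second, the device is only being fed for
95% of the wall clock, so throughput tops out around 500 * 0.95 = 475 MB/s,
no matter how fast the hardware is.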

I can also somewhat confirm what you're saying, Tibor: lots of 2-byte
I/O seems like a really bad idea with SSDs, even with
RandomAccessFile. Unfortunately it's not always trivial to restructure
code to batch writes, and I can confirm there is not "infinite" room
for copying data around to make bigger buffers; at some point the
copying overhead exceeds the gain.
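
For the record, the kind of restructuring I mean looks roughly like this
(a sketch only; the class and method names are mine, not from
plexus-archiver):

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.WritableByteChannel;

    /** Batches many tiny writes into one big channel write. Illustrative only. */
    final class CoalescingWriter {
        private final WritableByteChannel channel;
        private final ByteBuffer buffer = ByteBuffer.allocate(256 * 1024);

        CoalescingWriter(WritableByteChannel channel) {
            this.channel = channel;
        }

        /** A 2-byte write lands in memory; the channel only sees 256 KB chunks. */
        void putShort(short value) throws IOException {
            if (buffer.remaining() < 2) {
                flush();
            }
            buffer.putShort(value);
        }

        void flush() throws IOException {
            buffer.flip();
            while (buffer.hasRemaining()) {
                channel.write(buffer);
            }
            buffer.clear();
        }
    }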

Kristian



2015-01-11 13:54 GMT+01:00 Tibor Digana <tibordig...@apache.org>:
> Guys, if there are any doubts, I can give you a Word document with the jar
> and test instructions.
> I was testing this with a Russian guy on Unix while I tested on
> Windows. We presented this in a discussion on LinkedIn.
>
> NIO works nicely with the big chunks you are about to write; but if you want
> to write just one integer at a time, then NIO is just the wrong path.
> Simply collect writes into a 256 KB buffer and write it out when the buffer
> fills up.
> You will see differences between native and heap buffers. The typical heap
> ByteBuffer is backed by an array, which costs two operations per read or write:
> first the array range check, then the actual read/write. The
> native (direct) ByteBuffer is a bit riskier, but about twice as fast as a heap buffer.
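>
> A naive way to see the heap vs. direct difference yourself - only a
> sketch, with no JIT warmup, so treat any numbers as rough:
>
>     import java.nio.ByteBuffer;
>
>     public class BufferBench {
>         public static void main(String[] args) {
>             ByteBuffer heap   = ByteBuffer.allocate(256 * 1024);       // byte[]-backed, range check per access
>             ByteBuffer direct = ByteBuffer.allocateDirect(256 * 1024); // native memory, no backing array
>             for (ByteBuffer buf : new ByteBuffer[] { heap, direct }) {
>                 long t0 = System.nanoTime();
>                 for (int pass = 0; pass < 10_000; pass++) {
>                     buf.clear();
>                     while (buf.remaining() >= 4) {
>                         buf.putInt(pass); // fill the whole buffer with ints
>                     }
>                 }
>                 System.out.printf("%s buffer: %d ms%n",
>                         buf.isDirect() ? "direct" : "heap",
>                         (System.nanoTime() - t0) / 1_000_000);
>             }
>         }
>     }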
>
> Remember that MappedByteBuffer maps the file into memory, and the JVM
> had 256 MB reserved for native byte buffers. Thanks to DMA,
> MappedByteBuffer.put(256 KB ByteBuffer) is very fast - not because of the file
> system, but because of direct memory access.
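>
> In sketch form, the copy loop was essentially this (illustrative, not
> the exact test code):
>
>     import java.io.IOException;
>     import java.nio.MappedByteBuffer;
>     import java.nio.channels.FileChannel;
>     import java.nio.file.Path;
>     import java.nio.file.StandardOpenOption;
>
>     public class MappedCopy {
>         // Copy src to dst by mapping the source file (assumed < 2 GB, as in
>         // the test) and writing it out in 256 KB slices.
>         static void copy(Path src, Path dst) throws IOException {
>             try (FileChannel in = FileChannel.open(src, StandardOpenOption.READ);
>                  FileChannel out = FileChannel.open(dst,
>                          StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
>                 MappedByteBuffer source =
>                         in.map(FileChannel.MapMode.READ_ONLY, 0, in.size());
>                 final int CHUNK = 256 * 1024; // the 256 KB mentioned above
>                 while (source.position() < source.capacity()) {
>                     source.limit(Math.min(source.capacity(),
>                             source.position() + CHUNK));
>                     out.write(source); // kernel reads straight from the mapping
>                 }
>             }
>         }
>     }
>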
> So what I did: I took a file (600 MB and 1 GB) and copied it into another
> file. The entire application consumed 8 MB (4 MB just for the JVM) and performed at
> 400 MB/s. As an experiment I used different buffer sizes:
> 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192 KB
> and on Windows 256 KB gave the minimum latency and the best throughput.
> The measurement was very simple: I used ESET, which hooks into the kernel, and
> its "Watch activity" tool gave me the measurement results.
> The same test was done on a Unix/Sun machine with RAID, and there the
> latency minimum was not as significant - the results were even worse than on Windows.