2016-02-15 9:11 GMT+01:00 Mark Thomas <ma...@apache.org>:
> > Direct buffers help OpenSSL a lot (for example, set socket.directBuffer
> > and socket.directSslBuffer to true). Also, one important item is to make
> > sure the tests all use the same cipher, especially with ab (JSSE might
> > not use the same cipher as OpenSSL); something like ab -k -Z
> > "AES128-GCM-SHA256" forces testing of this common AES-GCM cipher. Newer
> > and more secure ciphers are often way slower, no surprise there.
>
> Good point. I'll double check that. What I was really after was some
> numbers to back up a (probably over-simplified) "drop in the native
> library and turbo-charge your TLS performance" claim.
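For reference, a rough sketch of the kind of setup being discussed; the port,
keystore details, request count and concurrency below are placeholders, not a
tested configuration:

    <Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol"
               SSLEnabled="true" scheme="https" secure="true"
               socket.directBuffer="true" socket.directSslBuffer="true"
               keystoreFile="conf/keystore.jks" keystorePass="changeit" />

    # Pin the cipher so the JSSE and OpenSSL runs are comparable
    ab -k -n 10000 -c 40 -Z "AES128-GCM-SHA256" https://localhost:8443/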
Ok! I also started with "good enough" benchmarks initially, which indicated
OpenSSL was worthwhile, and then realized the results could be more accurate.

> > Last, APR is still significantly faster for me, which is rather normal.
> > It's not that critical at this performance level, probably, but it's
> > here to stay.
>
> An in-depth comparison between the three options would be useful at some
> point.
>
> Have you done much performance tuning of NIO + OpenSSL?

The code looks ok to me, as long as direct buffers are used. It needs to copy
data to the OpenSSL BIO and back; that is the main cost besides encryption and
decryption, and I don't think it can be optimized away. My thinking is that
APR is faster because it skips those two copy operations.

However, the SSL behavior simply destroys the fancier IO APIs (the async
scatter/gather I wanted to introduce); see the small sketch below:
- JSSE only wants big buffers (room for a whole TLS record) ...
- OpenSSL prefers direct buffers (to avoid copying between the heap and
  native memory)

So much flexibility! :(

Rémy
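P.S. A minimal sketch of the two buffer constraints above, using only the
standard javax.net.ssl.SSLEngine API; this is illustration, not the Tomcat
internals, and the class name and output are just placeholders:

    // Sketch only, not Tomcat code: shows the buffer constraints mentioned
    // above using the standard javax.net.ssl.SSLEngine API.
    import java.nio.ByteBuffer;
    import javax.net.ssl.SSLContext;
    import javax.net.ssl.SSLEngine;
    import javax.net.ssl.SSLSession;

    public class TlsBufferSketch {
        public static void main(String[] args) throws Exception {
            SSLEngine engine = SSLContext.getDefault().createSSLEngine();
            engine.setUseClientMode(true);
            SSLSession session = engine.getSession();

            // "JSSE only wants big buffers": wrap()/unwrap() need room for a
            // whole TLS record, so buffers are sized from the session.
            int packetSize = session.getPacketBufferSize();
            int appSize = session.getApplicationBufferSize();
            System.out.println("packet buffer size:      " + packetSize);
            System.out.println("application buffer size: " + appSize);

            // "OpenSSL prefers direct buffers": a direct ByteBuffer can be
            // handed to native code (the OpenSSL BIO) without an extra
            // heap-to-native copy, which is what the socket.directBuffer /
            // socket.directSslBuffer settings are about.
            ByteBuffer netBuffer = ByteBuffer.allocateDirect(packetSize);
            ByteBuffer appBuffer = ByteBuffer.allocateDirect(appSize);
            System.out.println("direct buffers allocated: "
                    + netBuffer.isDirect() + " / " + appBuffer.isDirect());
        }
    }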