Hi all,

First off, I realize this question has been asked here and elsewhere before, but I can't seem to find any recent relevant numbers on this.
I am setting up a system with an Intel octo-core Avoton, which has AES-NI support. After some crude benchmarking with dd, I am surprised by the huge performance penalty that full-disk encryption apparently imposes on read/write throughput. In short, the write speed plummets to around 160 MB/s, as opposed to 270 MB/s on the bare partition; read speed is at 115 MB/s (slower than writing, no idea why), as opposed to 465 MB/s on the bare partition. (I've pasted the results below.)

I encrypted the partition with aes-xts-plain64, SHA-512 and a 512-bit key, but I also tried a 256-bit key with similar results. The drive in question is a Samsung 840 Pro SSD, but I've fiddled with a couple of spinning drives before, and the performance penalty was similarly bad.

The system will be used as a home file server, and the results with drive encryption are still acceptable - but I'm curious whether they are to be expected, or whether there is an obvious culprit for the performance hit. Is it possible that I'm not actually using the hardware AES? (I've sketched the checks I intend to run at the end of this mail, after the results.)

Thanks,
- Dave.

Encrypted drive setup:

cryptsetup luksFormat -c aes-xts-plain64 --hash sha512 --iter-time 2000 --use-random -s 512 /dev/sdd6

Results w/o encryption:

# dd bs=1M count=256 if=/dev/zero of=/dev/sdd6 conv=fdatasync
256+0 records in
256+0 records out
268435456 bytes (268 MB) copied, 0.990527 s, 271 MB/s

# dd bs=1M count=512 if=/dev/sdd6 of=/dev/null
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 1.15489 s, 465 MB/s

Results with encryption:

# dd bs=1M count=512 if=/dev/zero of=/dev/mapper/test conv=fdatasync
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 3.26955 s, 164 MB/s

# dd bs=1M count=512 if=/dev/mapper/test of=/dev/null
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 4.66179 s, 115 MB/s
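
For completeness, here is how I plan to verify that the kernel is actually taking the AES-NI path - just a sketch, I haven't run these on the box yet, and I'm assuming the accelerated module is called aesni_intel and that my cryptsetup is new enough to have the benchmark command:

# Does the CPU advertise the AES-NI instruction set?
grep -m1 -o aes /proc/cpuinfo

# Is the accelerated cipher module loaded?
lsmod | grep aesni

# What throughput does the kernel crypto API reach for my exact cipher?
# (For XTS, -s 512 means two 256-bit keys.)
cryptsetup benchmark -c aes-xts -s 512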
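
Likewise, to rule out the mapping having silently ended up with something other than what I asked for, I would check what dm-crypt is actually using ("test" is the name I gave the mapping at luksOpen time):

# Cipher, key size and backing device of the open mapping:
cryptsetup status test

# The on-disk LUKS header, for comparison:
cryptsetup luksDump /dev/sdd6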
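
One more thing I mean to try: the odd read figure may be skewed by readahead/page-cache interaction with dm-crypt rather than by the cipher itself, so I'll re-run the read test both with dropped caches and with O_DIRECT (again just a sketch, an assumption on my part):

# Drop the page cache so the read really hits the device:
sync && echo 3 > /proc/sys/vm/drop_caches
dd bs=1M count=512 if=/dev/mapper/test of=/dev/null

# Or bypass the cache entirely:
dd bs=1M count=512 if=/dev/mapper/test of=/dev/null iflag=direct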