On 03/18/2015 07:57 PM, Ming Lei wrote:
On Thu, Mar 19, 2015 at 2:28 AM, Maxim Patlasov <[email protected]> wrote:
On 01/13/2015 07:44 AM, Ming Lei wrote:
Part of the patch is based on Dave's previous post.

This patch submits I/O to the fs via kernel aio, and we
can obtain the following benefits:

         - double caching in both the loop file system and the backing
         file is avoided
         - context switches are reduced a lot, and consequently CPU
         utilization drops as well
         - cached memory is reduced a lot

One main side effect is that throughput decreases when
accessing the raw loop block device (not through a filesystem) with kernel aio.

This patch has passed xfstests (./check -g auto), with both
test and scratch devices being loop block devices and ext4 as the filesystem.

The results of two fio tests follow:

1. fio test inside ext4 file system over loop block
1) How to run
         - linux kernel base: 3.19.0-rc3-next-20150108 (loop-mq merged)
         - loop over SSD image 1 in ext4
         - linux psync, 16 jobs, size 200M, ext4 over loop block
         - test result: IOPS from fio output (a sketch of a matching
         fio job file follows the table below)

2) Throughput result (IOPS):
         -------------------------------------------------------------
         test cases          |randread   |read   |randwrite  |write  |
         -------------------------------------------------------------
         base                |16799      |59508  |31059      |58829  |
         -------------------------------------------------------------
         base+kernel aio     |15480      |64453  |30187      |57222  |
         -------------------------------------------------------------
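
For reference, a job file along these lines should reproduce the setup
above (a sketch only; the mount point, block size, and job layout are
assumptions not stated in the post):

; sketch of an fio job file matching the description above
[global]
ioengine=psync
size=200M
numjobs=16
directory=/mnt/loop-ext4   ; assumed ext4-on-loop mount point
bs=4k                      ; assumed block size
group_reporting

[randread]
rw=randread

Dropping numjobs to 1 in the same file would give the single-submitter
comparison discussed further down.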

Ming, it's important to understand the overhead of the aio_kernel_()
implementation. So could you please add test results for the raw SSD device
to the table above next time (in v3 of your patches)?
What aio_kernel_() does is just call ->read_iter()/->write_iter(),
so it should not introduce extra overhead.
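
In other words, the submission boils down to something like the sketch
below (hypothetical and simplified, not the actual patch; the function
name is made up, and the exact kiocb/iov_iter helpers and flags vary
between kernel versions):

#include <linux/bvec.h>
#include <linux/fs.h>
#include <linux/uio.h>

/*
 * Hypothetical sketch of a kernel-aio style write submission: describe
 * the request's pages with an iov_iter and hand them straight to the
 * backing file's ->write_iter(), instead of blocking in vfs_write().
 */
static ssize_t sketch_submit_write(struct file *file, struct kiocb *iocb,
				   struct bio_vec *bvec, unsigned int nr_bvec,
				   size_t bytes, loff_t pos)
{
	struct iov_iter iter;

	/* signature of iov_iter_bvec() differs across kernel versions */
	iov_iter_bvec(&iter, WRITE, bvec, nr_bvec, bytes);

	iocb->ki_filp = file;
	iocb->ki_pos  = pos;
	/* completion-callback setup omitted: that is what makes it async */

	return file->f_op->write_iter(iocb, &iter);
}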

From a performance point of view, the effect comes only from switching to
O_DIRECT. With O_DIRECT, double caching is avoided; at the same time both
page cache usage and CPU utilization are reduced.
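
As a userspace analogue of that point (a sketch only; the image path and
the 4 KiB alignment are assumptions), opening the backing file with
O_DIRECT makes reads bypass the page cache, so the data is held in memory
only once:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
	void *buf;
	int fd = open("backing.img", O_RDONLY | O_DIRECT);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* O_DIRECT needs block-aligned buffers, offsets and lengths */
	if (posix_memalign(&buf, 4096, 4096))
		return 1;

	/* data lands only in buf, without a second copy in the page cache */
	if (pread(fd, buf, 4096, 0) < 0)
		perror("pread");

	free(buf);
	close(fd);
	return 0;
}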

The way you reused the loop_queue_rq() --> queue_work() functionality (added earlier, by commit b5dd2f604) may affect the performance of O_DIRECT operations. This can be easily demonstrated on a ram-drive, but measurements on real storage h/w would be more convincing.
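
To make that concrete, the pattern in question looks roughly like the
sketch below (hypothetical, with made-up names, not the driver's actual
code): every request is bounced to a kworker before the backing-file I/O
is issued, which costs an extra context switch per request:

#include <linux/blkdev.h>
#include <linux/workqueue.h>

struct sketch_cmd {
	struct work_struct work;
	struct request *rq;
};

static struct workqueue_struct *sketch_wq;		/* driver workqueue */

static void sketch_do_backing_io(struct request *rq);	/* hypothetical helper */

/* runs later in kworker context, not in the submitter's context */
static void sketch_handle_cmd(struct work_struct *work)
{
	struct sketch_cmd *cmd = container_of(work, struct sketch_cmd, work);

	sketch_do_backing_io(cmd->rq);
}

/* called from the blk-mq ->queue_rq() path: only queues the work item */
static void sketch_queue_cmd(struct sketch_cmd *cmd)
{
	INIT_WORK(&cmd->work, sketch_handle_cmd);
	queue_work(sketch_wq, &cmd->work);
}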

Btw, when you wrote "linux psync, 16 jobs, size 200M, ext4 over loop block" -- does that mean there were 16 userspace threads submitting I/O concurrently? If so, a throughput comparison for a single-job test would also be useful to look at.

Thanks,
Maxim