The reported speed is inversely proportional to the size of the data
being read. See the output:

  spi-nand: spi_nand nand@0: Micron SPI NAND was found.
  spi-nand: spi_nand nand@0: 256 MiB, block size: 128 KiB, page size: 2048, OOB size: 128
  ...
  => mtd read.benchmark spi-nand0 $loadaddr 0 0x40000
  Reading 262144 byte(s) (128 page(s)) at offset 0x00000000
  Read speed: 63kiB/s
  => mtd read.benchmark spi-nand0 $loadaddr 0 0x20000
  Reading 131072 byte(s) (64 page(s)) at offset 0x00000000
  Read speed: 127kiB/s
  => mtd read.benchmark spi-nand0 $loadaddr 0 0x10000
  Reading 65536 byte(s) (32 page(s)) at offset 0x00000000
  Read speed: 254kiB/s

In the spi-nand case 'io_op.len' is not the same as 'len', so we end up
dividing the size of a single block by the total time. This is wrong;
we should divide it by the time spent on a single block.
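
To illustrate with the numbers above (a sketch, assuming io_op.len is a
single 2048-byte page and the whole 0x40000 request takes T microseconds
in total):

  old: speed = (2048 * 1000000 / T) / 1024
       -> doubling the request doubles T, so the reported speed halves

  new: block_time = T / (0x40000 / 2048) = T / 128
       speed      = (2048 * 1000000 / block_time) / 1024
       -> independent of the request size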

Signed-off-by: Mikhail Kshevetskiy <[email protected]>
---
 cmd/mtd.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/cmd/mtd.c b/cmd/mtd.c
index 2520b89eed2..deac7d1f002 100644
--- a/cmd/mtd.c
+++ b/cmd/mtd.c
@@ -469,7 +469,7 @@ static int do_mtd_io(struct cmd_tbl *cmdtp, int flag, int argc,
 {
        bool dump, read, raw, woob, benchmark, write_empty_pages, has_pages = false;
        u64 start_off, off, len, remaining, default_len;
-       unsigned long bench_start, bench_end;
+       unsigned long bench_start, bench_end, block_time;
        struct mtd_oob_ops io_op = {};
        uint user_addr = 0, npages;
        const char *cmd = argv[0];
@@ -594,9 +594,10 @@ static int do_mtd_io(struct cmd_tbl *cmdtp, int flag, int argc,
 
        if (benchmark && bench_start) {
                bench_end = timer_get_us();
+               block_time = (bench_end - bench_start) / (len / io_op.len);
                printf("%s speed: %lukiB/s\n",
                       read ? "Read" : "Write",
-                      ((io_op.len * 1000000) / (bench_end - bench_start)) / 1024);
+                      ((io_op.len * 1000000) / block_time) / 1024);
        }
 
        led_activity_off();
-- 
2.50.1
