>> AFAIK the bogus 128TB drives do properly report such ridiculous sizes: >> the reality only hits when you try to actually store that amount of >> information on them. >> [ I'm not sure how it works under the hood, but since SSDs store their >> data "anywhere" in the flash, they can easily pretend to have any size >> they want, and allocate the physical flash blocks only on-the-fly as >> logical blocks are being written. >> Also, some Flash controllers use compression, so if you store data >> that compresses well, they can let you store a lot more than if you >> store already compressed data. ] >> IOW, to really check, try to save 2TB of videos (or other already >> compressed data), and then try to read it back. >> > Sounds like a lawsuit to me, if I can get Alexander's script from a few days > back to run. Is bash not actually bash these days? It is not doing for > loops for me.
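The write-then-verify check suggested above can be sketched roughly as follows. This is only an illustration of the principle, run against an ordinary file so it is safe to try anywhere; testing a real drive means filling the actual device (which is what the `f3` tools below do for you, and which will destroy any data on it):

```python
import hashlib
import os
import tempfile

BLOCK = 1 << 20   # 1 MiB per block
NBLOCKS = 8       # a real test would fill the whole device

def block_data(i: int) -> bytes:
    # Each block's content is a deterministic function of its index,
    # so a drive that drops or aliases blocks fails the read-back.
    # SHA-256 output is also effectively incompressible, which defeats
    # controllers that cheat via transparent compression.
    seed = hashlib.sha256(i.to_bytes(8, "big")).digest()
    return (seed * (BLOCK // len(seed) + 1))[:BLOCK]

def write_pattern(path: str) -> None:
    with open(path, "wb") as f:
        for i in range(NBLOCKS):
            f.write(block_data(i))

def verify_pattern(path: str) -> int:
    # Read everything back and count blocks that don't match.
    bad = 0
    with open(path, "rb") as f:
        for i in range(NBLOCKS):
            if f.read(BLOCK) != block_data(i):
                bad += 1
    return bad

fd, path = tempfile.mkstemp()
os.close(fd)
try:
    write_pattern(path)
    print(verify_pattern(path), "corrupted blocks")
finally:
    os.remove(path)
```

On an honest medium this reports 0 corrupted blocks; on a fake-capacity drive, blocks beyond the real flash come back wrong or repeated.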
As discussed in related threads, there's the `f3` package in Debian designed specifically for that. You can try `f3probe /dev/sdX` (or use `f3write` and `f3read` if you prefer to test at the filesystem level rather than at the block level).

Stefan