Hi,

Andrew F Comly wrote:
> You said earlier that 512 is the default block size.

The default of the program "dd", to be exact.

Block size is normally a property of devices and filesystems.
Many devices offer direct random access only at the granularity of
their block size.
E.g. a CD can be read in chunks of 2 KiB. If software needs to address
in finer steps then it has to load full blocks and pick from them
the bytes which are desired.

Filesystems use block addresses to point from name to data. It is
technically desirable that file system block size is an integer multiple
of the device block size. (Multiplication factor 1 is normal.)
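On GNU/Linux one can inquire both sizes. (The paths below are just examples; blockdev needs privileges.)

```shell
# Fundamental block size of the filesystem holding / :
stat -f -c 'FS block size: %S bytes' /
# Block size of a device (path /dev/sda is only an example):
# blockdev --getbsz /dev/sda
```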


> I looked up that "bs" also refers to block size.

Option bs= is the combination of dd options ibs= and obs=. The program
uses these sizes for single read or write operations. (One may experience
surprises if the input does not deliver full blocks or the output demands
larger ones.)
The advice to use a large bs= when copying data from a device or file
is merely about improving the read and write speed. Copying many small
chunks can impose substantial overhead.
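For example (file names are invented; /dev/zero serves as dummy input):

```shell
# Make a 4 MiB test file, then copy it with one large block size.
dd if=/dev/zero of=src.img bs=1M count=4 2>/dev/null
# A large bs= means few read/write operations; a small bs= means many.
dd if=src.img of=dst.img bs=1M 2>/dev/null
cmp src.img dst.img && echo "copies are identical"
```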


> So it appears that we have two "block sizes".

In the case of buffered i/o on a USB stick or DVD you may more or less
use any block size which divides the desired amount of bytes without
remainder.
(Giant sizes should be avoided because they bring no improvement while
 filling giant memory buffers.)
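As a sketch with a data file instead of a stick or DVD (file names invented): 6 MiB can be copied as 6 blocks of 1 MiB or as 12288 blocks of 512 bytes, with identical result:

```shell
dd if=/dev/zero of=demo.img bs=1M count=6 2>/dev/null
dd if=demo.img of=by_1m.img  bs=1M  count=6     2>/dev/null
dd if=demo.img of=by_512.img bs=512 count=12288 2>/dev/null
cmp by_1m.img by_512.img && echo "same bytes either way"
```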


> $ fdisk -l

fdisk seems to assume block size 512 with data files.
With devices it obviously inquires their block size.
E.g. with a CD:

  $ /sbin/fdisk -lu /dev/sr4
  ...
  Units: sectors of 1 * 2048 = 2048 bytes


> 255 heads, 63 sectors/track, 1988 cylinders,

The Cylinder/Head/Sector addresses of disk-like devices are quite bogus
nowadays. They are upheld for legacy software (and to confuse programmers
like me).
Regrettably MBR partition tables show these numbers, so one sometimes has
to find the factors sectors-per-track and heads-per-cylinder by comparing
Logical Block Addresses with CHS addresses.
(Mathematically this is equivalent to the task of finding those points of
 a 2D-parabola which have integer coordinates.)
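The usual conversion formula, shown here with the hypothetical geometry of 255 heads per cylinder and 63 sectors per track:

```shell
# LBA = (C * heads_per_cylinder + H) * sectors_per_track + (S - 1)
# Sectors S count from 1; cylinders C and heads H count from 0.
C=2 ; H=3 ; S=4
echo $(( (C * 255 + H) * 63 + (S - 1) ))      # prints 32322
```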

255 heads (per cylinder) and 63 sectors/track is a poor guess, of course,
as 16358768640 = 2^16 * 3^3 * 5 * 43^2.
The numbers 255 = 3 * 5 * 17 and 63 = 3^2 * 7 contribute the prime factors
7 and 17, which spoil any attempt to match the above big number by integer
multiplication.
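GNU coreutils offers factor(1) to check such factorizations:

```shell
factor 16358768640
# prints: 16358768640: 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 5 43 43
```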

The reason for 255, 63 is simply that they are the largest values
possible for heads-per-cyl and sec-per-head in the CHS address fields
of a MBR partition table.
The range of CHS addressing ends at 8,422,686,720 bytes, because MBR
partition table entries can only record up to 1024 cylinders.
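That limit is simply the product of the maximal field values and the sector size:

```shell
# 1024 cylinders * 255 heads * 63 sectors/track * 512 bytes/sector
echo $(( 1024 * 255 * 63 * 512 ))      # prints 8422686720
```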

So with your 32 GB stick, CHS is a mere ornament.


> Is my hypothesis that the "512 block size" is the "I/O" or "sector" size
> correct?

Roughly yes. But mainly in the case of fdisk, not so much in the case of dd.
With fdisk the number 512 stems either from device properties or from
fdisk's own default, if no device properties can be inquired (as with a
data file).

Unless the input of dd is a device which may deliver less than the block
size in a single read operation, you are free to choose a dd blocksize.
(It can become tricky with pipes as input. Even if you choose a block size
 lower than the pipe buffer size, you may get fewer input bytes per block
 than expected.)
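With GNU dd, iflag=fullblock avoids that surprise; the pipe below is just a demonstration (file name invented):

```shell
# Without iflag=fullblock, count=16 could count short reads from the
# pipe as blocks. With it, dd re-reads until each 64 KiB block is full.
head -c 1048576 /dev/zero | dd of=pipe.img bs=64K count=16 iflag=fullblock 2>/dev/null
wc -c < pipe.img      # prints 1048576
```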


Have a nice day :)

Thomas
