Hi Jayden, Imre,

>> - defrag does FAT32 (you have to select the right defragmentation
>> method from the menu)

Which options do / do not work for FAT32? Could the menu
system and dialogs be improved to help with that? For
example, instead of saying "I cannot do X on FAT32", it
could say "I cannot do X, but I can do Y on FAT32" :-)

>> - chkdsk does not do FAT32, you have to use DOSFSCK for FAT32.
>> There is not enough memory in 16 bit to check the disk in a
>> reasonable amount of time

That is correct. I have discussed this with sparky before...

>> sparky4 wrote on 15/01/2015 at 17:24:
>>> I noticed they do not exist!
>>> 
>>> i hope they get made soon!

One of my main points was:

> In particular, the check for lost and cross linked clusters has the
> issue that I do not even know an obvious method for doing it without
> having at least size-of-one-FAT of RAM space, because temp files
> are no option for chkdsk imho.

... and with FAT32, one FAT can be up to 1 gigabyte in size.
With FAT16, the maximum size of a FAT was 128 kilobytes:
2^28 clusters * 32 bit versus 2^16 clusters * 16 bit. In
more realistic cases, the FAT can still be many MB in size.
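
To make the arithmetic explicit, a back-of-envelope sketch
in C (round numbers only; the real limits differ by a few
reserved cluster values):

  #include <stdio.h>

  int main(void)
  {
      /* back-of-envelope maximums, ignoring the few reserved
         cluster values, so round numbers only */
      unsigned long fat16 = 65536UL * 2UL;      /* 2^16 entries * 2 bytes */
      unsigned long fat32 = 268435456UL * 4UL;  /* 2^28 entries * 4 bytes */
      printf("FAT16 max FAT size: %lu KB\n", fat16 >> 10);  /* 128  */
      printf("FAT32 max FAT size: %lu MB\n", fat32 >> 20);  /* 1024 */
      return 0;
  }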

SOME of the checks can be done without using much RAM. The
question to all of you is: Would you like a tool for a low-
RAM (or even 16 bit) PC which does only the low-RAM checks?



I made this list of things that dosfsck (from dosfstools) can do.

First the low RAM things:

- read the boot sector and check the data there

- read the first FAT to see which clusters are used

- read the second FAT to see if they are both the same
  (a chunked comparison sketch follows this list)

- optionally read all data sectors to test for bad sectors

- recursively check all directory entries for bad content
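
As an example of the low-RAM style, comparing the two FATs
only needs one small fixed buffer pair, no matter how large
the FATs are. A minimal sketch; read_sectors() and the
parameter names are invented for illustration:

  /* Sketch: compare FAT #1 and FAT #2 sector by sector using one
     small fixed buffer pair. read_sectors() is an invented helper
     that reads 'count' sectors starting at 'lba' into 'buf' and
     returns 0 on success. */
  #include <string.h>

  #define SECTOR_SIZE 512

  extern int read_sectors(unsigned long lba, unsigned count, void *buf);

  /* returns 0 if both FAT copies match, 1 at the first mismatch,
     -1 on a read error */
  int compare_fats(unsigned long fat1_lba, unsigned long fat2_lba,
                   unsigned long fat_sectors)
  {
      static unsigned char buf1[SECTOR_SIZE], buf2[SECTOR_SIZE];
      unsigned long i;

      for (i = 0; i < fat_sectors; i++) {
          if (read_sectors(fat1_lba + i, 1, buf1) != 0 ||
              read_sectors(fat2_lba + i, 1, buf2) != 0)
              return -1;
          if (memcmp(buf1, buf2, SECTOR_SIZE) != 0)
              return 1;
      }
      return 0;
  }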



Now the high RAM things:

- for each directory entry, check which clusters it uses
  (as far as I know, the expected number of clusters is
  not strictly specified)

- check if some clusters are used by no directory entries
  (lost clusters) or by several (cross linked files); a
  counting sketch follows this list

- check that each directory can only be reached in one way

- check if the free cluster statistics are correct
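
The straightforward method for the lost / cross linked
check is one use counter per cluster, which is exactly what
breaks the RAM budget. A sketch of the idea, with invented
names (use_count, fat_entry_used), not the dosfsck
implementation:

  /* Sketch: one 8 bit use counter per cluster. With up to 2^28
     clusters on FAT32, this array alone can reach 256 MB, which
     is why the check does not fit a 16 bit PC. */
  #include <stdio.h>

  extern unsigned char *use_count;      /* one byte per cluster */
  extern unsigned long total_clusters;  /* from the boot sector */
  extern int fat_entry_used(unsigned long cl);   /* hypothetical */

  /* step 1: walk every directory entry's cluster chain and do
     use_count[cl]++ for each cluster cl in the chain */

  /* step 2: compare the counters against the FAT itself */
  void report_problems(void)
  {
      unsigned long cl;
      /* data clusters are numbered starting at 2 */
      for (cl = 2; cl < total_clusters + 2; cl++) {
          if (use_count[cl] == 0 && fat_entry_used(cl))
              printf("lost cluster %lu\n", cl);   /* used, no owner */
          else if (use_count[cl] > 1)
              printf("cluster %lu is cross linked (%u users)\n",
                     cl, (unsigned)use_count[cl]); /* several owners */
      }
  }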



Things which would really benefit from some caching of data:

- check if all long file name fragments form working name
  chains, without truncations or orphaned fragments

- check if entries have too few or too many clusters as
  compared to the specified file size (a sketch of the
  arithmetic follows this list)

- check if all "." and ".." directory entries make sense

- for any of the checks, try to repair any errors, either
  automatically or with help of user decisions
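
The size comparison at least has simple arithmetic behind
it: a file of N bytes should own ceil(N / bytes-per-cluster)
clusters, and a zero byte file should own none. A sketch:

  /* Sketch: expected cluster count for a file of 'size' bytes;
     bytes_per_cluster = bytes per sector * sectors per cluster.
     A zero byte file should own no clusters at all. */
  unsigned long expected_clusters(unsigned long size,
                                  unsigned long bytes_per_cluster)
  {
      if (size == 0)
          return 0;
      return (size + bytes_per_cluster - 1) / bytes_per_cluster;
  }

  /* a chain shorter than this means truncated data, a longer
     chain means clusters wasted past the end of the file */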



I do not know how exactly the long file name checking
works, but it probably can be done with a buffer which
holds only one directory at a time. Still, even a single
directory can reach arbitrary size.
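
One part of it is well documented, though: every long name
fragment stores a checksum of the 8.3 short name it belongs
to, so fragments can at least be matched to their short
entry within one directory. The checksum routine from
Microsoft's FAT specification:

  /* Checksum over the 11 byte 8.3 short name, as stored in each
     long file name entry (the ChkSum() routine from Microsoft's
     FAT specification): rotate right one bit, add next byte. */
  unsigned char lfn_checksum(const unsigned char *short_name)
  {
      unsigned char sum = 0;
      int i;
      for (i = 0; i < 11; i++)
          sum = ((sum & 1) << 7) + (sum >> 1) + short_name[i];
      return sum;
  }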

In particular, checking for lost or cross linked cluster
chains needs at least the size of one FAT of temp data;
dosfsck needs even more, because it will tell you WHICH
files are cross linked instead of only saying that some
cluster has N (instead of the expected 0 or 1) users.
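
To actually name the colliding files, the per-cluster
counter has to grow into an owner reference, roughly like
this (just the shape of the problem, with invented names,
not dosfsck's real data structure):

  /* Sketch: to report WHICH files collide, store an owner
     reference per cluster instead of a bare counter. With a
     4 byte pointer per cluster, a maxed out FAT32 volume can
     need 2^28 * 4 = 1 GB for this table alone. */
  #include <stddef.h>

  struct file_ref;                    /* opaque handle to a file */
  extern struct file_ref **owner;     /* one pointer per cluster */
  extern void report_cross_link(struct file_ref *a,
                                struct file_ref *b,
                                unsigned long cl);

  void claim_cluster(struct file_ref *f, unsigned long cl)
  {
      if (owner[cl] != NULL)
          report_cross_link(owner[cl], f, cl); /* can name both files */
      else
          owner[cl] = f;
  }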

A number of other checks will end up reading most of the
FAT and directory metadata spread throughout the disk,
more or less once per check, so performance might still
be acceptable with some EMS based disk cache. Note that
both UIDE and LBACACHE only support XMS, which is only
commonly available on computers with a 32 bit capable CPU.

Regards, Eric


