Simon Peyton-Jones <[EMAIL PROTECTED]> wrote,
> I'd be interested to hear people's opinion about the lazy-file read
> question.
> I'm not prepared to add new functions to Haskell 98, but I think
> the clarification of (1) or (2) below would be useful. (2) is nice
> but it makes *all* file reading more expensive, perhaps significantly
> so (e.g. making a complete copy of the file).
I don't agree that it makes all file reading more expensive.
My proposal is to do the unlink game on Unix (no extra
memory costs) and actually read the file only on OSes that
can't do the unlink trick, and *only* when the file is
written to (i.e., only when we have a conflict). In other
words, only in the situation which currently causes file
corruption and would be completely outlawed in (1), extra
resources are required. So, we have additional costs only
when needed.
As pointed out by Ketil, the costs on Unix are negligible.
Maybe there are also ways to handle this gracefully on NT.
Moreover, programs that do not want to run the risk of
increased memory use on legacy OSes can still copy the file
(something they would have to do currently anyway).
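For concreteness, here is a small sketch of why the unlink trick is safe on POSIX systems (the file name is made up for the example; I'm using the modern `System.Directory` API to illustrate the idea, not a proposed implementation):

```haskell
import System.IO
import System.Directory (removeFile)

-- On POSIX systems, removing a file's directory entry does not destroy
-- its data while a handle to it is still open, so a new file can be
-- created under the same name without corrupting a lazy read in progress.
demo :: IO (String, String)
demo = do
  writeFile "demo.txt" "old contents"
  h <- openFile "demo.txt" ReadMode
  removeFile "demo.txt"                -- unlink; the open handle keeps the data alive
  writeFile "demo.txt" "new contents"  -- a fresh file under the old name
  old <- hGetContents h
  length old `seq` hClose h            -- force the lazy read, then close
  new <- readFile "demo.txt"
  return (old, new)

main :: IO ()
main = do
  (old, new) <- demo
  putStrLn old   -- the original data, untouched by the overwrite
  putStrLn new
```

On Unix, `demo` returns the old contents intact alongside the new file's contents, which is exactly the behaviour (2) asks for, at no extra memory cost.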
In fact, the bigger problem is to recognise the case where
we write to a file that is semi-closed (different file names
can point to the same physical file, e.g., when symbolic
links are used). On Unix, this is again quite easy, because
we can compare the inode numbers of the two files.[1] On
other OSes, we can at least fall back to comparing file
names.
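The inode comparison could be sketched like this (this needs the POSIX bindings; the function name is made up for the example):

```haskell
import System.Posix.Files
  (getFileStatus, fileID, deviceID, createSymbolicLink)

-- Two paths denote the same physical file iff they are on the same
-- device and share an inode number. getFileStatus follows symbolic
-- links, so a symlink compares equal to its target.
sameFile :: FilePath -> FilePath -> IO Bool
sameFile a b = do
  sa <- getFileStatus a
  sb <- getFileStatus b
  return (deviceID sa == deviceID sb && fileID sa == fileID sb)

main :: IO ()
main = do
  writeFile "orig.txt" "x"
  writeFile "other.txt" "y"
  createSymbolicLink "orig.txt" "link.txt"
  sameFile "orig.txt" "link.txt"  >>= print  -- True: same inode via symlink
  sameFile "orig.txt" "other.txt" >>= print  -- False: distinct files
```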
So, I think, it is quite clear how to implement the proposed
functionality and I don't see significant costs for the
general case.
Manuel
[1] In cases - like NFS - where this doesn't work, the
underlying file system usually already makes only very
weak statements about consistency. So, I think, we
don't have to worry about this.
> | -----Original Message-----
> | From: Manuel M. T. Chakravarty [mailto:[EMAIL PROTECTED]]
> | Sent: 05 September 2000 02:10
> | To: [EMAIL PROTECTED]
> | Subject: lazy file reading in H98
> |
> |
> | In an assignment in my class, we came across a lack of
> | specification of the behaviour of `Prelude.readFile' and
> | `IO.hGetContents' and IMHO also a lack of functionality. As
> | both operations read a file lazily, subsequent writes to the
> | same file are potentially disastrous. In this assignment,
> | the file was used to make a Haskell data structure
> | persistent over multiple runs of the program - ie,
> |
> | readFile fname >>= return . read
> |
> | at the start of the program and
> |
> | writeFile fname . show
> |
> | at the end of the program. For certain inputs, where the
> | data structure stored in the file was only partially used,
> | the file was overwritten before it was fully read.
> |
> | H98 doesn't really specify what happens in this situation.
> | I think, there are two ways to solve that:
> |
> | (1) At least, the definition should say that the behaviour
> | is undefined if a program ever writes to a file that it
> | has read with `readFile' or `hGetContents' before.
> |
> | (2) Alternatively, it could demand more sophistication from
> | the implementation and require that upon opening of a
> | file for writing that is currently semi-closed, the
> | implementation has to make sure that the contents of the
> | semi-closed file are not corrupted before it is fully
> | read.[1]
> |
> | In the case that solution (1) is chosen, I think, we should
> | also have something like `strictReadFile' (and
> | `hStrictGetContents') which reads the whole file before
> | proceeding to the next IO action. Otherwise, in situations
> | like in the mentioned assignment, you have to resort to
> | reading the file character by character, which seems very
> | awkward.
> |
> | So, overall, I think solution (2) is more elegant.
> |
> | Cheers,
> | Manuel
> |
> | [1] On Unix-like (POSIX?) systems, unlinking the file and
> | then opening the writable file would be sufficient. On
> | certain legacy OSes, the implementation would have to
> | read the rest of the file into memory before creating
> | a new file under the same name.
> |
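A `strictReadFile`, as suggested in the quoted message, could look like this (a sketch; the name follows the proposal and is not a standard function):

```haskell
-- Force the entire contents before returning, so the handle is closed
-- and a later write to the same path cannot interfere with a read
-- still in progress.
strictReadFile :: FilePath -> IO String
strictReadFile path = do
  contents <- readFile path
  length contents `seq` return contents  -- forces the whole file into memory

main :: IO ()
main = do
  writeFile "state.txt" "persistent structure"
  s <- strictReadFile "state.txt"
  writeFile "state.txt" "next run"       -- safe: s is already fully read
  putStrLn s
```

This avoids reading the file character by character, at the price of keeping the whole contents in memory, which is exactly the trade-off the assignment scenario calls for.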