Hi,

Just some thoughts...


fgetc will definitely be much faster than an fread of 16K followed by an fseek
back to the end-of-line position.

Note: fgetc is already buffered and not too slow on average, but only if you do
not fseek around; in that case the buffered file-stream data is lost.

Well, reading 16K in one piece _could_ be faster than fgetc, but only if you
read the data once and do not memcpy or memmove the buffer afterwards.

But if you read 16K starting from offset 0, then read 16K starting from line 1,
and so on, you will be really, really slow. Did you ever profile your proposed
patch?

The easiest way to do something for the performance would be in
location_get_source_line:

If you remember the last file and position, and keep the file open, just in
case the next request is for the same or a following line of the same file,
you can continue reading from where you stopped. That would be a really
simple optimization.
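A minimal sketch of that caching idea (all names here are made up for
illustration; the real location_get_source_line in GCC has a different
signature and would need proper buffer management):

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical cache: remember the last file we read from and which
   line the stream is positioned at, so a request for the same or a
   later line keeps reading forward instead of re-scanning from 0.  */
static FILE *cached_fp;
static char cached_name[1024];
static int cached_next_line;   /* 1-based line the stream points at */

/* Read LINE (1-based) of FILE_NAME into BUF (size BUFSZ).
   Returns 0 on success, -1 on failure.  Simplified sketch only.  */
static int
get_source_line (const char *file_name, int line, char *buf, size_t bufsz)
{
  if (!cached_fp
      || strcmp (cached_name, file_name) != 0
      || line < cached_next_line)
    {
      /* Cache miss, or a request for an earlier line: (re)open.  */
      if (cached_fp)
        fclose (cached_fp);
      cached_fp = fopen (file_name, "r");
      if (!cached_fp)
        return -1;
      strncpy (cached_name, file_name, sizeof cached_name - 1);
      cached_name[sizeof cached_name - 1] = '\0';
      cached_next_line = 1;
    }

  /* Cache hit for a same-or-later line: just keep reading forward.  */
  while (cached_next_line <= line)
    {
      if (!fgets (buf, (int) bufsz, cached_fp))
        return -1;
      if (strchr (buf, '\n'))
        cached_next_line++;
      /* else: a line longer than BUFSZ; keep reading to its end.  */
    }
  return 0;
}
```

Requesting lines in ascending order then becomes one sequential pass over
the file, which is exactly the common case for a source listing.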

Maybe that would be too simple? I do not know.

> fseek (fp, prev_pos + cur_len, SEEK_SET);
Furthermore, this seek can position the stream in the middle of a \r\n
sequence on Windows.


I am furthermore not sure how gcov.c would react to a NUL char in the input
stream. Maybe in a comment somewhere? Could it be possible that this reaches
gcov.c?

In that case, would the same OOM occur, because the memory size is always
doubled after an incomplete line?

And gcov.c uses printf ("..%s\n") to print a source listing, right?

If the line contains NUL chars, the listing will only contain the part of the
line up to the first NUL char, which is not what we want?

So in that case it would be better to replace NUL chars in the line
buffer with SPACEs, to get a printable C string? What do you think?

Isn't a C string always easier to handle than a buffer with embedded
zero chars?
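Something like this hypothetical helper (not existing gcov.c code) would be
all it takes to make such a buffer printable:

```c
#include <string.h>

/* Replace every NUL in the first LEN bytes of BUF with a space, so the
   buffer can safely be printed with printf ("%s").  Hypothetical helper,
   not taken from gcov.c.  */
static void
sanitize_line (char *buf, size_t len)
{
  char *p = buf;
  char *end = buf + len;
  while ((p = memchr (p, '\0', (size_t) (end - p))) != NULL)
    *p++ = ' ';
}
```

The caller would pass the number of bytes actually read for the line, and
keep one extra terminating NUL after that range.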


Regards
Bernd.
