On September 13, 2014 8:32:53 AM CDT, Peter Dufault <dufa...@hda.com> wrote:
>
>On Sep 11, 2014, at 10:03 , Sebastian Huber
><sebastian.hu...@embedded-brains.de> wrote:
>
>>> In nfs_dir_read() I see:
>>> 
>>>     /* align + round down the buffer */
>>>     count &= ~ (DIRENT_HEADER_SIZE - 1);
>>>     di->len = count;
>>> 
>>> Then later:
>>>     if (count > NFS_MAXDATA)
>>>         count = NFS_MAXDATA;
>>> 
>>>     di->readdirargs.count = count;
>>> 
>>> Can someone who understands this comment on it?
>> 
>> Sorry, Peter.  I don't have time to look at this at the moment.  The
>NFS code 
>> is hard to understand.  Since it works for me I have no urgent need
>to spend 
>> time on this.
>
>Can you answer one question?  If nfsStBlksize is set to zero so that
>this:
>
>    /* Set to "preferred size" of this NFS client implementation */
>    buf->st_blksize = nfsStBlksize ? nfsStBlksize : fa->blocksize;
>
>returns 4096, then is it a bug if transfers larger than 4K are
>attempted?
>
>Limiting all transfers to values other than the default 8K can result in
>memory corruption.  512B works, 1K works, 2K and 4K corrupt
>things.  The changes are minor (attached).
>
>If it's a bug for transfers to be larger than 4K when fa->blocksize=4K,
>I can look at that; that's easy to trap when it happens.

Speaking purely from a defensive programming stance, with no knowledge of what 
is possible in NFS, I would put the trap in. This is a case where the code will 
do bad things if it happens, so it is either a case of "this should never 
happen" or of an odd configuration the code does not support. Either way, it is 
better to detect and report than to tromp on memory.
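
As a sketch of what I mean (untested; check_xfer_size and the EINVAL error 
path are names I made up, not anything in the RTEMS NFS client), the trap 
could be as simple as:

    #include <errno.h>
    #include <stddef.h>

    /* Refuse transfers larger than the block size the server reported,
     * rather than letting an oversized transfer run past the buffer. */
    static int
    check_xfer_size(size_t count, size_t blocksize)
    {
        if (count > blocksize) {
            errno = EINVAL;  /* detect and report, don't corrupt memory */
            return -1;
        }
        return 0;
    }

Callers in the read/write path would bail out when this returns -1.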

If your work simplifies the code, that's always a good thing, but it is 
certainly not a requirement for addressing the issues.
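
For what it's worth, the masking in the quoted nfs_dir_read() snippet only 
rounds count down to a multiple of DIRENT_HEADER_SIZE if that constant is a 
power of two. A standalone illustration (the value 16 is assumed here purely 
for demonstration, it is not taken from the RTEMS source):

    #include <assert.h>
    #include <stdio.h>

    #define DIRENT_HEADER_SIZE 16u  /* assumed power of two */

    int main(void)
    {
        unsigned count = 1000u;
        /* ~(N - 1) clears the low-order bits, rounding count down to a
         * multiple of N; this trick only works when N is a power of two. */
        count &= ~(DIRENT_HEADER_SIZE - 1u);
        assert(count % DIRENT_HEADER_SIZE == 0);
        printf("%u\n", count);  /* prints 992 */
        return 0;
    }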

>Peter
>-----------------
>Peter Dufault
>HD Associates, Inc.      Software and System Engineering

--joel