On 02.06.22 16:19, Joel Sherrill wrote:
On Thu, Jun 2, 2022 at 8:58 AM Christian MAUDERER
<christian.maude...@embedded-brains.de> wrote:
On 02.06.22 15:49, Gedare Bloom wrote:
> On Thu, Jun 2, 2022 at 2:28 AM Sebastian Huber
> <sebastian.hu...@embedded-brains.de> wrote:
>>
>> On 02/06/2022 09:27, Christian MAUDERER wrote:
>>>
>>> On 01.06.22 14:46, Gedare Bloom wrote:
>>>> On Mon, May 23, 2022 at 6:21 AM Christian Mauderer
>>>> <christian.maude...@embedded-brains.de> wrote:
>>>>>
>>>>> Typical embedded systems don't have that much memory. Reduce the
>>>>> buffer size to something more sensible for the usual type of
>>>>> application.
>>>>> ---
>>>>> freebsd/sys/dev/ffec/if_ffec.c | 8 ++++++++
>>>>> 1 file changed, 8 insertions(+)
>>>>>
>>>>> diff --git a/freebsd/sys/dev/ffec/if_ffec.c b/freebsd/sys/dev/ffec/if_ffec.c
>>>>> index 47c0f770..4c1e147b 100644
>>>>> --- a/freebsd/sys/dev/ffec/if_ffec.c
>>>>> +++ b/freebsd/sys/dev/ffec/if_ffec.c
>>>>> @@ -139,9 +139,17 @@ static struct ofw_compat_data compat_data[] = {
>>>>>  /*
>>>>>   * Driver data and defines.  The descriptor counts must be a power of two.
>>>>>   */
>>>>> +#ifndef __rtems__
>>>>> #define RX_DESC_COUNT 512
>>>>> +#else /* __rtems__ */
>>>>> +#define RX_DESC_COUNT 64
>>>>> +#endif /* __rtems__ */
>>>>
>>>> Do we need some way to control this parameter? Or, how will this
>>>> appear if it breaks something?
>>>
>>> I don't expect that there will be any problems. But I can take a
>>> look at how I can make that a parameter.
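>>>
>>> For example, a guarded default could keep it a compile-time constant
>>> while still allowing a build-system override (untested sketch, not
>>> the posted patch):
>>>
>>> #ifdef __rtems__
>>> #ifndef RX_DESC_COUNT /* allow e.g. CFLAGS += -DRX_DESC_COUNT=128 */
>>> #define RX_DESC_COUNT 64
>>> #endif
>>> #else /* !__rtems__ */
>>> #define RX_DESC_COUNT 512
>>> #endif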
>>
>> Can we please keep this a compile-time constant as it is? The 64
>> descriptors should be more than enough.
>>
> I don't mind the reduction of the constant, but it would be good to
> predict what behavior might indicate this was exceeded. I guess it
> should be some kind of errno on an allocation request though? So it
> should be fine, but if a user hits this limit, I guess they have
> pretty limited options to overcome it.
Reducing the limit won't cause errors. It only means that if you flood
the target with network packets, it will cache fewer packets and start
dropping them earlier. That means:
On a short packet burst, some packets will be dropped and (for TCP)
some have to be re-transmitted. So for short bursts it can be a slight
disadvantage.
In a constant overload situation it doesn't really make a difference,
because the target wouldn't be able to process the packets anyway. It
might even be an advantage, because the processor doesn't have to
process packets that are already outdated and may be re-transmitted
anyway.
How much RAM does this save versus having control over the size of the
UDP and TCP RX/TX buffers like we had in the legacy stack? I recall
that being able to control the various buffer sizes saved a LOT of
memory on applications where I used these parameters.
There we had four configuration values. Any chance FreeBSD has an
equivalent now, or that we can provide the same tuning?
rtems_set_udp_buffer_sizes(
    rtems_bsdnet_config.udp_tx_buf_size,
    rtems_bsdnet_config.udp_rx_buf_size
);
rtems_set_tcp_buffer_sizes(
    rtems_bsdnet_config.tcp_tx_buf_size,
    rtems_bsdnet_config.tcp_rx_buf_size
);
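In the legacy stack those were plain fields in the application's
network configuration, roughly like this (sketch from memory, the
values are placeholders):

#include <rtems/rtems_bsdnet.h>

struct rtems_bsdnet_config rtems_bsdnet_config = {
    /* ... interface, hostname and server configuration ... */
    .udp_tx_buf_size = 8 * 1024,
    .udp_rx_buf_size = 8 * 1024,
    .tcp_tx_buf_size = 16 * 1024,
    .tcp_rx_buf_size = 16 * 1024,
};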
Are you sure that this is the same buffer? The parameter in this patch
is a driver-specific ring buffer of RX descriptors. The parameters that
you mention sound more like general network stack buffers (although I
have to say that I don't know these functions of the old stack).
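If you mean the socket buffer sizes of the stack: in the FreeBSD-based
stack those are normally tuned via sysctl. A rough sketch (untested;
the sysctl names are the stock FreeBSD ones, the values are only
placeholders):

#include <sys/types.h>
#include <sys/sysctl.h>

static void
tune_socket_buffers(void)
{
    int tcp_space = 16 * 1024;
    int udp_space = 8 * 1024;

    /* These set the defaults for newly created sockets only. */
    (void)sysctlbyname("net.inet.tcp.sendspace", NULL, NULL,
        &tcp_space, sizeof(tcp_space));
    (void)sysctlbyname("net.inet.tcp.recvspace", NULL, NULL,
        &tcp_space, sizeof(tcp_space));
    (void)sysctlbyname("net.inet.udp.recvspace", NULL, NULL,
        &udp_space, sizeof(udp_space));
}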
Regarding the sizes:
The driver allocates one mbuf for each buffer. It's a bit tricky to
tell exactly how big one mbuf is; FreeBSD does a lot of abstraction
there. But a debugger tells me that after the initialization one buffer
is:

sc->rxbuf_map[0].mbuf->m_len = 2048

That means that I reduced the buffers that are cached in the driver for
the receive direction from 512 * 2 KiB = 1 MiB to 64 * 2 KiB = 128 KiB.
Note that our default size for all mbufs in the stack is 8 MiB
(RTEMS_BSD_ALLOCATOR_DOMAIN_PAGE_MBUF_DEFAULT), so 1 MiB is a relevant
part of that. And that's only for one direction!
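Spelled out with the 2 KiB cluster size from above:

    512 descriptors * 2048 B = 1048576 B = 1 MiB
     64 descriptors * 2048 B =  131072 B = 128 KiB

So the change saves roughly 896 KiB on the RX side alone.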
The TX buffers only have some management information allocated. They
will get buffers as soon as there is something to send. But if the
device can't send fast enough to get rid of the data, it will most
likely be a similar amount of memory.
Again: that's only the buffers in the driver, not any buffers on higher
layers.
Best regards
Christian