On Oct 2, 2019, at 2:16 PM, Mario Rugiero <mrugi...@gmail.com> wrote:

> A new `pcap_set_buffer_size1` call is created, taking a `size_t`
> instead of an `int`, allowing for buffers as big as the platform
> allows.

Perhaps pcap_set_buffer_size_ext (Windows-style) would be better - a 1 at the 
end 1) is a bit unclear about what it means and 2) may look too much like a 
lowercase "l" (I first thought it *was* an "l", for "long", but maybe that's 
just the particular fixed-width font that's the default in macOS).

(Or pcap_set_buffer_size_size_t, but that may be a bit awkward.)
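
For concreteness, whatever name we pick, the prototype would presumably look 
something like this (a sketch, using the _ext name - nothing with this name 
exists in libpcap today):

    /*
     * Hypothetical prototype, not current libpcap API: same semantics as
     * pcap_set_buffer_size(), but the size is a size_t, so the buffer
     * isn't limited to INT_MAX bytes.
     */
    int pcap_set_buffer_size_ext(pcap_t *p, size_t buffer_size);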

> Due to some contexts requiring smaller maximum buffers, a new field
> named `max_buffer_size` of type `size_t` was added to the same structure
> to account for that.

There should probably be an API to get the maximum buffer size as well, for the 
benefit of 1) programs that want "the biggest buffer they can get" and 2) GUI 
programs that might have a "buffer size" field implemented as a spinbox.
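
That could look something like this (pcap_get_max_buffer_size is a made-up 
name, purely for illustration, as is the _ext setter above):

    #include <stdio.h>
    #include <pcap/pcap.h>

    /* Hypothetical additions - neither routine exists in libpcap today. */
    size_t pcap_get_max_buffer_size(pcap_t *p);
    int pcap_set_buffer_size_ext(pcap_t *p, size_t buffer_size);

    /* "Give me the biggest buffer I can get." */
    static int
    use_biggest_buffer(pcap_t *p)
    {
        size_t biggest = pcap_get_max_buffer_size(p);

        if (pcap_set_buffer_size_ext(p, biggest) != 0) {
            fprintf(stderr, "%s\n", pcap_geterr(p));
            return (-1);
        }
        return (0);
    }

and a GUI program could use the same getter to set the upper bound of its 
spinbox.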

Should pcap_set_buffer_size also check the requested size against the maximum, 
and clamp it to the maximum if it's larger?
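
I.e., something along these lines (a sketch against the internal options 
structure, with the proposed max_buffer_size field; the real 
pcap_set_buffer_size may differ in detail):

    int
    pcap_set_buffer_size(pcap_t *p, int buffer_size)
    {
        if (pcap_check_activated(p))
            return (PCAP_ERROR_ACTIVATED);
        if (buffer_size <= 0) {
            /* Silently ignore invalid values, as today. */
            return (0);
        }
        /* Clamp to the per-platform maximum instead of failing. */
        if ((size_t)buffer_size > p->opt.max_buffer_size)
            buffer_size = (int)p->opt.max_buffer_size;
        p->opt.buffer_size = buffer_size;
        return (0);
    }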

> This field is initialized by default to `INT_MAX` to preserve the
> behaviour of the older API.
> Then, each driver is expected, but not mandated, to fix it to a more
> appropriate value for the platform.
> In this RFC, Linux and DPDK are used as examples.

Is there a maximum buffer size > INT_MAX for Linux?

At least in macOS, and possibly in other BSD-flavored OSes, the sysctl variable 
debug.bpf_maxbufsize will indicate the maximum size.
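
So the BPF code could presumably initialize max_buffer_size with something 
like this (a sketch for macOS; sysctlbyname() is real, but I haven't checked 
whether other BSDs spell the sysctl the same way):

    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <limits.h>

    /* Sketch: fetch the BPF buffer-size limit on macOS. */
    static size_t
    bpf_max_buffer_size(void)
    {
        int maxbuf;
        size_t len = sizeof(maxbuf);

        if (sysctlbyname("debug.bpf_maxbufsize", &maxbuf, &len,
            NULL, 0) == -1)
            return (INT_MAX);    /* fall back to the old limit */
        return ((size_t)maxbuf);
    }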