if no one else is going to try this, then i think it should go in.

On 28 Aug 2014, at 20:32, David Gwynne <da...@gwynne.id.au> wrote:

> 
> On 28 Aug 2014, at 3:02 am, Mike Belopuhov <m...@belopuhov.com> wrote:
> 
>> On 27 August 2014 08:25, Brad Smith <b...@comstyle.com> wrote:
>>> Looking for some testing of the following diff to add Jumbo support for the
>>> BCM5714 / BCM5780 and BCM5717 / BCM5719 / BCM5720 / BCM57765 / BCM57766
>>> chipsets.
>>> 
>>> 
>> 
>> i have tested this on "Broadcom BCM5719" rev 0x01, unknown BCM5719 
>> (0x5719001),
>> APE firmware NCSI 1.1.15.0  and "Broadcom BCM5714" rev 0xa3, BCM5715
>> A3 (0x9003).
>> 
>> it works, however i'm not strictly a fan of switching the cluster pool to a
>> larger one for the 5714.  wasting another 8k page (on sparc, for example) for
>> every rx cluster in 90% of cases sounds kinda wrong to me.  but ymmv.
> 
> this is what MCLGETI was invented to solve though. comparing pre mclgeti to 
> what this does:
> 
> a 5714 right now without jumbos would have 512 ring entries with a 2048 byte 
> cluster on each. 2048 * 512 is 1024k of ram. if we bumped the std ring up to 
> jumbos by default, 9216 * 512 would eat 4608k of ram.
> 
> my bge boxes with mclgeti generally sit around 40 clusters, but sometimes 
> end up around 80. 80 * 9216 is 720k. we can have jumbos and still come out 
> ahead.
> 
> if you compare the nics with split rings: 512 * 2048 + 256 * 9216 is ~3.3M. 
> the same chip with mclgeti and only doing a 1500 byte workload would be 
> 80 * 2048 + 17 * 9216, or ~313k.
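
(to put those numbers side by side, back-of-the-envelope, using the 2048/9216
byte cluster sizes and the cluster counts quoted above:

    512 * 2048                = 1024k   std ring, no mclgeti
    512 * 9216                = 4608k   std ring bumped to jumbos, no mclgeti
    512 * 2048 + 256 * 9216   = ~3.3M   split rings, no mclgeti
    80 * 9216                 =  720k   mclgeti, jumbos on the std ring
    80 * 2048 + 17 * 9216     = ~313k   mclgeti, split rings, 1500 byte traffic

even the worst mclgeti case stays well under the static allocations.)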
> 
>> 
>> apart from that there's a deficiency in the diff itself.  you probably want
>> to change MCLBYTES in bge_rxrinfo to bge_rx_std_len, otherwise the statistics
>> look wrong.
> 
> yeah.
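
for anyone else following along, the change mike is pointing at is roughly
this in bge_rxrinfo() (a sketch from memory rather than the actual diff;
bge_rx_std_len is the per-softc cluster length the diff introduces, and the
other field names are my best recollection of if_bge.c):

int
bge_rxrinfo(struct bge_softc *sc, struct if_rxrinfo *ifri)
{
	struct if_rxring_info ifr[2];
	u_int n = 0;

	memset(ifr, 0, sizeof(ifr));

	if (ISSET(sc->bge_flags, BGE_RXRING_VALID)) {
		/* report the cluster size actually in use, not MCLBYTES */
		ifr[n].ifr_size = sc->bge_rx_std_len;
		strlcpy(ifr[n].ifr_name, "std", sizeof(ifr[n].ifr_name));
		ifr[n].ifr_info = sc->bge_std_ring;
		n++;
	}

	if (ISSET(sc->bge_flags, BGE_JUMBO_RXRING_VALID)) {
		ifr[n].ifr_size = BGE_JLEN;
		strlcpy(ifr[n].ifr_name, "jumbo", sizeof(ifr[n].ifr_name));
		ifr[n].ifr_info = sc->bge_jumbo_ring;
		n++;
	}

	return (if_rxr_info_ioctl(ifri, n, ifr));
}

with that, the ring sizes reported via the rxr info ioctl (what systat mbufs
shows) should match what bge is actually filling the ring with.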
> 
> i have tested both 1500 and 9000 mtus on a 5714 and it is working well. as 
> you say, the 5719 seems to be fine too, but i've only tested it with mtu 1500. 
> i'll test 9k tomorrow.
> 
> it needs tests on older chips too though.

