On Mon, May 18, 2015 at 04:12:01PM -0400, David Miller wrote:
>
> Ok, this of course depends upon the distribution of the input data
> and the strength/suitability of the hash function.
>
> I'm a little bit disappointed in what Thomas found. I would expect
> the distribution to be at least a litt
From: Herbert Xu
Date: Sun, 17 May 2015 09:38:29 +0800
> On Sat, May 16, 2015 at 06:09:46PM -0400, David Miller wrote:
>>
>> Obviously something like 50 or 100 is too much.
>>
>> Perhaps something between 5 and 10.
>
> You are even more parsimonious than I :) Because the maximum chain
> length
From: Herbert Xu
Date: Fri, 15 May 2015 14:30:57 +0800
> On Thu, May 14, 2015 at 11:46:15PM -0400, David Miller wrote:
>>
>> We wouldn't fail these inserts in any other hash table in the kernel.
>>
>> Would we stop making new TCP sockets if the TCP ehash chains are 3
>> entries deep? 4? 5? T
From: Herbert Xu
Date: Fri, 15 May 2015 11:06:23 +0800
> On Thu, May 14, 2015 at 10:22:17PM -0400, David Miller wrote:
>>
>> In my opinion, up to at least 2 X max_size, it's safe to allow the
>> insert. Assuming a well chosen hash function and a roughly even
>> distribution.
>
> OK I can make
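The ceiling being discussed could be sketched like this (a hypothetical
`ht_insert_allowed` helper with a toy struct, not the actual rhashtable
interface): refuse inserts once the element count reaches twice the configured
maximum table size.

```c
/* Sketch of a 2 * max_size insert ceiling (hypothetical helper, not
 * the rhashtable API). Assumes a well-chosen hash keeps the chains
 * roughly even, so up to 2x utilisation is tolerable. */
#include <errno.h>

struct ht {
	unsigned int nelems;	/* current number of elements */
	unsigned int max_size;	/* configured table-size ceiling; 0 = none */
};

static int ht_insert_allowed(const struct ht *ht)
{
	if (ht->max_size && ht->nelems >= 2 * ht->max_size)
		return -E2BIG;	/* over the agreed 2x ceiling */
	return 0;
}
```

A caller would check this before linking a new element in, e.g.
`ht_insert_allowed(&t)` returns 0 while under the ceiling and `-E2BIG` once
`nelems` reaches `2 * max_size`.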
From: Herbert Xu
Date: Wed, 13 May 2015 16:06:40 +0800
> We currently have no limit on the number of elements in a hash table.
> This is a problem because some users (tipc) set a ceiling on the
> maximum table size and when that is reached the hash table may
> degenerate. Others may encounter OO
From: Herbert Xu
Date: Fri, 24 Apr 2015 16:22:11 +0800
> On Fri, Apr 24, 2015 at 09:15:39AM +0100, Thomas Graf wrote:
>>
>> You are claiming that the rhashtable conversion removed a cap. I'm
>> not seeing such a change. Can you point me to where netlink_insert()
>> enforced a cap pre-rhashtable?
On Fri, Apr 24, 2015 at 09:15:39AM +0100, Thomas Graf wrote:
>
> You are claiming that the rhashtable conversion removed a cap. I'm
> not seeing such a change. Can you point me to where netlink_insert()
> enforced a cap pre-rhashtable?
OK you are right. We never had such a limit. In that case I
On 04/24/15 at 04:12pm, Herbert Xu wrote:
> On Fri, Apr 24, 2015 at 09:06:08AM +0100, Thomas Graf wrote:
> >
> > Which users are you talking about? Both Netlink and TIPC still
> > have an upper limit. nft sets are controlled by privileged users.
>
> There is no limit in netlink apart from UINT_MAX
On Fri, Apr 24, 2015 at 09:06:08AM +0100, Thomas Graf wrote:
>
> Which users are you talking about? Both Netlink and TIPC still
> have an upper limit. nft sets are controlled by privileged users.
There is no limit in netlink apart from UINT_MAX AFAICS. Allowing
UINT_MAX entries into a hash table
On 04/24/15 at 08:57am, Herbert Xu wrote:
> It seems that I lost track somewhere along the line. I meant
> to add an explicit limit on the overall number of entries since
> that was what users like netlink expected but never got around
> to doing it. Instead it seems that we're currently relying
On Fri, Apr 24, 2015 at 09:01:10AM +0200, Johannes Berg wrote:
>
> > As allowing >100% utilisation is potentially dangerous, the name
> > contains the word insecure.
>
> Not sure I get this. So rhashtable is trying to actually never have
> collisions? How could that possibly work?
Of course it's
On Fri, 2015-04-24 at 08:57 +0800, Herbert Xu wrote:
> It seems that I lost track somewhere along the line. I meant
> to add an explicit limit on the overall number of entries since
> that was what users like netlink expected but never got around
> to doing it. Instead it seems that we're curren