On 9/21/2015 4:05 PM, David Miller wrote:
> From: Santosh Shilimkar <[email protected]>
> Date: Sat, 19 Sep 2015 19:04:42 -0400
>
>> Even with the per-bucket locking scheme, on a massively parallel
>> system with active RDS sockets that can number well in excess of
>> 10K, the rds_bind_lookup() workload is significant because of the
>> small hash-table size. Testing showed a modest but still nice
>> reduction in rds_bind_lookup() with bigger tables:
>>
>> Hashtable    Baseline(1k)    Delta
>>  2048        8.28%           -2.45%
>>  4096        8.28%           -4.60%
>>  8192        8.28%           -6.46%
>> 16384        8.28%           -6.75%
>>
>> Based on this data, we set 8K as the bind hash-table size.
>>
>> Signed-off-by: Santosh Shilimkar <[email protected]>
>> Signed-off-by: Santosh Shilimkar <[email protected]>
>
> Like others, I would strongly prefer that you use a dynamically
> sized hash table. Eating 8K just because a module happened to get
> loaded is really not appropriate. Many other places already use
> such a scheme; one example is the AF_NETLINK socket hash table.
OK. Thanks for the AF_NETLINK pointer, I will look it up. A rough
sketch of that approach is included below the signature.

Regards,
Santosh
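P.S. For reference, here is a minimal sketch of what a resizable bind
table could look like using the kernel's rhashtable API, which is the
same facility the AF_NETLINK socket hash uses. This is only an
illustration of the approach, not the posted patch; the key layout and
the names rds_bind_key, rds_bound_sock, rds_bind_table and the helper
functions are assumptions made up for the example.

/*
 * Illustrative sketch only -- not the posted patch.  The key layout,
 * struct names and helpers below are assumptions for the example.
 */
#include <linux/types.h>
#include <linux/rhashtable.h>

struct rds_bind_key {
	__be32	addr;
	__be16	port;
} __packed;			/* packed so key_len covers no padding */

struct rds_bound_sock {
	struct rds_bind_key	key;
	struct rhash_head	node;
};

static const struct rhashtable_params rds_bind_params = {
	.key_len	= sizeof(struct rds_bind_key),
	.key_offset	= offsetof(struct rds_bound_sock, key),
	.head_offset	= offsetof(struct rds_bound_sock, node),
	.automatic_shrinking = true,	/* let the table shrink when idle */
};

static struct rhashtable rds_bind_table;

static int rds_bind_table_init(void)
{
	/* starts small and resizes as sockets bind, instead of a fixed 8K */
	return rhashtable_init(&rds_bind_table, &rds_bind_params);
}

static int rds_bind_insert(struct rds_bound_sock *rs)
{
	return rhashtable_insert_fast(&rds_bind_table, &rs->node,
				      rds_bind_params);
}

static struct rds_bound_sock *rds_bind_lookup(__be32 addr, __be16 port)
{
	struct rds_bind_key key = { .addr = addr, .port = port };

	return rhashtable_lookup_fast(&rds_bind_table, &key,
				      rds_bind_params);
}

The point of the sketch is that the table grows (and, with
automatic_shrinking, shrinks) with the number of bound sockets, so a
loaded-but-idle module does not pin a large table in memory.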
