On Tue, 01 Aug 2017, Guillaume Knispel wrote:

On Mon, Jul 31, 2017 at 08:45:58AM -0700, Davidlohr Bueso wrote:
On Mon, 31 Jul 2017, Guillaume Knispel wrote:
>ipc_findkey() scanned all objects to look for the wanted key. This is
>slow when using a high number of keys; for example, on an i5 laptop the
>following loop took 17 s, with the last semget() calls taking ~1 ms each.

I would argue that this is not the common case.

Well, Linux allows up to 32000 objects, and if you want to allocate them
all with keys, this initial cost (possibly spread out over time) is
incompressible, and it is O(n²): creating the i-th object scans the i-1
existing ones, so the total is ~n²/2 scans.

Besides, I maintain a program which, in some of its versions, uses tens
of thousands of semaphore sets with keys, and destroys and creates new
ones all the time.

Not impossible, just not the common case.

On 4.13-rc3, without and with the patch, the following loop takes the
times below on my laptop (measured with clock_gettime(CLOCK_MONOTONIC)
calls, sketched right after the loop), for each value of KEYS, starting
right after a reboot with initially 0 semaphore sets:

   for (int i = 0, k = 0x424242; i < KEYS; ++i)
       semget(k++, 1, IPC_CREAT | 0600);
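
A minimal sketch of the timing harness, assuming KEYS is supplied at
compile time (e.g. -DKEYS=1000); the now_us() helper and the per-call
accounting are only an illustration, not the exact code used:

    /* Hypothetical harness; error checking omitted for brevity. */
    #include <stdio.h>
    #include <time.h>
    #include <sys/ipc.h>
    #include <sys/sem.h>

    static double now_us(void)
    {
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1e6 + ts.tv_nsec / 1e3;
    }

    int main(void)
    {
        double total = 0.0, max_single = 0.0;

        for (int i = 0, k = 0x424242; i < KEYS; ++i) {
            double t0 = now_us();

            semget(k++, 1, IPC_CREAT | 0600);

            double dt = now_us() - t0;
            total += dt;
            if (dt > max_single)
                max_single = dt;
        }
        printf("total %.1f us, max single call %.1f us\n",
               total, max_single);
        return 0;
    }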

                total       total          max single  max single
  KEYS        without        with        call without   call with

     1            3.5         4.9   µs            3.5         4.9
    10            7.6         8.6   µs            3.7         4.7
    32           16.2        15.9   µs            4.3         5.3
   100           72.9        41.8   µs            3.7         4.7
  1000        5,630.0       502.0   µs             *           *
 10000    1,340,000.0     7,240.0   µs             *           *
 31900   17,600,000.0    22,200.0   µs             *           *

Repeating the test immediately (without an intervening reboot) for the
same value of KEYS gives the times without creation (lookup only):

                total       total          max single  max single
  KEYS        without        with        call without   call with

     1            2.1         2.5   µs            2.1         2.5
    10            4.5         4.8   µs            2.2         2.3
    32           13.0        10.8   µs            2.3         2.8
   100           82.9        25.1   µs             *          2.3
  1000        5,780.0       217.0   µs             *           *
 10000    1,470,000.0     2,520.0   µs             *           *
 31900   17,400,000.0     7,810.0   µs             *           *

*: unreliable measurement (high variance)

This is both on a laptop and within a VM, so even where I have not noted
high variance, the figures are not very precise (especially for the long
runs), but we can still see the tendencies.

I did one last benchmark, this time running each semget() in a new
process (and still measuring only the time taken by the syscall itself),
and got the following figures (from a single run on each kernel) in µs;
a sketch of this per-process variant follows the two tables:

creation:
                total       total
  KEYS        without        with

     1            3.7         5.0   µs
    10           32.9        36.7   µs
    32          125.0       109.0   µs
   100          523.0       353.0   µs
  1000       20,300.0     3,280.0   µs
 10000    2,470,000.0    46,700.0   µs
 31900   27,800,000.0   219,000.0   µs

lookup-only:
                total       total
  KEYS        without        with

     1            2.5         2.7   µs
    10           25.4        24.4   µs
    32          106.0        72.6   µs
   100          591.0       352.0   µs
  1000       22,400.0     2,250.0   µs
 10000    2,510,000.0    25,700.0   µs
 31900   28,200,000.0   115,000.0   µs
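
A sketch of this per-process variant, assuming the same key sequence;
reporting each timing via stdout is a simplification for illustration,
not the exact code used:

    /* Each semget() runs in a fresh child; only the syscall is timed. */
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/wait.h>
    #include <sys/ipc.h>
    #include <sys/sem.h>

    int main(void)
    {
        for (int i = 0, k = 0x424242; i < KEYS; ++i, ++k) {
            pid_t pid = fork();

            if (pid == 0) {
                struct timespec t0, t1;

                clock_gettime(CLOCK_MONOTONIC, &t0);
                semget(k, 1, IPC_CREAT | 0600);
                clock_gettime(CLOCK_MONOTONIC, &t1);
                printf("%.1f us\n",
                       (t1.tv_sec - t0.tv_sec) * 1e6 +
                       (t1.tv_nsec - t0.tv_nsec) / 1e3);
                _exit(0);   /* child reports one timing and exits */
            }
            waitpid(pid, NULL, 0);
        }
        return 0;
    }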

My provisional conclusion is that on my system this patch improves
performance consistently from about n ~= 30 objects, and below 30 the
slowdown, if any, is more than reasonable; it should be inconsequential
for properly written programs and of limited impact on programs doing
lots of <ipc>get() calls on small sets of ipc objects.

I agree: for smaller numbers of keys the overhead is negligible, and the
O(n) lookup cost kicks in quickly, to the point that this patch also
helps in more normal scenarios, where we don't have unrealistically high
numbers of keys.

Thanks,
Davidlohr
