Nice tricks,

I have added an index for my search filter and am repopulating the entries
so the index is applied.
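
For anyone following along, a minimal sketch of such an index setup (the
attribute names and index types here are just examples, not my real config):

```
# slapd.conf sketch -- attribute names and index types are examples
index objectClass eq
index sn,mail     eq,sub

# after changing index directives, rebuild the indices with slapd
# stopped, then restart:
#   slapindex -f /etc/openldap/slapd.conf
```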

Regarding shm_key, I have a couple of silly questions:

- I understand I should create a shared memory region. Could you please
point me in the right direction? I've been googling but found nothing about
creating one for OpenLDAP.
- I am afraid not all of the database will fit in a shared memory region.
The machine has 16GB of RAM, and I think all the entries together would be
about ~22GB. What may happen if the cache grows and surpasses that limit?
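
To frame the first question: from what I gather from slapd-bdb(5), shm_key
is just a line in the database section of slapd.conf, something like this
(a sketch only; the suffix, directory and key value are made up):

```
# slapd.conf excerpt -- 42 is an arbitrary SysV IPC key
database   bdb
suffix     "dc=example,dc=com"
directory  /var/lib/ldap
shm_key    42
```

My understanding is that the region size itself still comes from
set_cachesize in DB_CONFIG, and that on Linux the kernel SysV limits
(kernel.shmmax, kernel.shmall) may need raising via sysctl. Please correct
me if I've got that wrong.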

Thanks a lot for your assistance.

2010/3/12 Dieter Kluenter <[email protected]>

> Echedey Lorenzo <[email protected]> writes:
>
> > Thanks Dieter,
> >
> > On point 3. load the whole database into shared memory, I guess I do this
> setting a big set_cachesize isn't
> > it?
>
> No, man slapd-bdb(5), shm_key
>
> -Dieter
>
> >
> > 2010/3/12 Dieter Kluenter <[email protected]>
> >
> >     Echedey Lorenzo <[email protected]> writes:
> >
> >     > Hi,
> >     >
> >     > Soon I'll have a x64 Suse machine with 16GB RAM and 4 Intel Cores.
> We
> >     > need our OpenLDAP Server to be as fast as possible. I wonder which
> >     > DB_CONFIG values are suitable for this. Maybe...
> >     >
> >     > set_cachesize 14 0 4
> >     > set_lg_regionmax 262144
> >     > set_lg_bsize 2097152
> >     > set_flags DB_LOG_AUTOREMOVE
> >     >
> >     > ...?
> >     >
> >     > The directory will have around 6~8 million entries. I'm not sure
> >     > whether they will all fit in the RAM cache, but caching at least
> >     > part of them should improve speed, I think.
> >     >
> >     > Any help will be appreciated, since my experience with OpenLDAP
> >     > performance compared to Sun Directory Server 5.2 has not been
> >     > good: it takes several seconds to answer any LDAP request :(
> >
> >     huh, time ldapsearch -x -LLL -H ldap://localhost -b
> >     ou=benchmark,o=avci,c=de -s one sn=xxx telephonenumber mail
> >     real    0m0.006s
> >     user    0m0.004s
> >     sys     0m0.000s
> >
> >     1. configure your LDAP clients properly, that is, reduce searches
> >       to one-level scope and unbind cleanly,
> >     2. put the transaction logs onto a separate disk,
> >     3. load the whole database into shared memory,
> >     4. use a separate partition for the database files and format with
> >       ext2 fs, we don't need a journaling filesystem,
> >     5. set loglevel 0,
> >     6. implement a log database in order to control write operations
> >     7. check the number of threads
> >
> >     -Dieter
> >     --
> >     Dieter Klünter | Systemberatung
> >     http://dkluenter.de
> >     GPG Key ID:8EF7B6C6
> >     53°37'09,95"N
> >     10°08'02,42"E
>
>



-- 
--------------------------------------------
| Echedey Lorenzo Arencibia  |
--------------------------------------------
