[Kolab-devel] LDAP performance

Dieter Kluenter dieter at dkluenter.de
Mon May 23 20:37:54 CEST 2005


Hi,

"Andreas Gungl" <a.gungl at gmx.de> writes:

> Hi, 
>  
> after having managed the import of about 18.000 addresses into our Kolab 2 
> (test) server, I would like to provide some feedback and ask some 
> questions. 
>  
> The many contacts have been added as addresses (not users). Importing via 
> ldapadd was very slow. It took me more than 5 hours on our PIII-500 test 
> machine with 256 MB RAM plus slow HDD. But hey, import needs to be done 
> only once. :-)

congratulations!
Just a hint for your next bulk load into your directory server:
1. stop slapd
2. add the following lines to DB_CONFIG

set_flags DB_TXN_NOSYNC
set_flags DB_TXN_NOT_DURABLE

3. slapadd(8) your ldif file
4. remove (or comment out) these two lines again afterwards,
5. start slapd.

This will load your ldif file within a few minutes. :-)
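
For a Kolab 2 box the whole sequence might look roughly like this
(the rc command, the binary paths and the file names are assumptions
about a standard Kolab/OpenPKG layout, so adjust them to your
installation; contacts.ldif is just a placeholder):

  # stop the directory server
  /kolab/bin/openpkg rc openldap stop

  # add the two set_flags lines to
  # /kolab/var/openldap/openldap-data/DB_CONFIG, then bulk load offline
  /kolab/sbin/slapadd -f /kolab/etc/openldap/slapd.conf -l contacts.ldif

  # remove the two lines from DB_CONFIG again and restart
  /kolab/bin/openpkg rc openldap start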
 

> Handling the data in the admin interface is problematic, there are issues 
> in the tracker (admin interface can't handle more addresses than in the 
> LDAP limit AFAICT). Well, I've set up some Perl scripts to support the 
> update of these addresses. So I don't have to care. 
>  
> The processing of queries is very different. I experience _very_ fast 
> searches out of KAddressbook / Kontact (less than 0.5 sec for several 
> names), while Thunderbird / Mozilla are really slow (10-15 sec for the 
> same queries). Using TB/Mozilla, the CPU load is above 90% for quite a 
> while. I think that's because of the way how those tools define the lookup 
> ("name plus email", but they search e.g. the givenName as well, at least I 
> think so after looking at it in ethereal).

Index the attributes Mozilla is searching for, point TB/Mozilla to
the subtree cn=external,dc=your domain,dc=tld and reduce the search
scope to one. Otherwise Mozilla will waste resources by searching the
whole tree.
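
Something like the following in slapd.conf.template should cover it
(the attribute list is an assumption; check in ethereal which
attributes the Mozilla filter really touches):

  # equality and substring indexes for typical address book lookups
  index cn,sn,givenName,mail eq,sub

Remember to run slapindex(8) with slapd stopped after adding index
directives, otherwise the existing entries are not indexed. You can
then test the scoped search from the command line, e.g.

  ldapsearch -x -b "cn=external,dc=your domain,dc=tld" -s one \
    "(|(cn=*miller*)(mail=*miller*))" cn mail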
>  
> Following the thread about tweaking the LDAP and DB backend configuration, 
> I've set (on Kolab 2 RC 1) in /kolab/var/openldap/openldap-data/DB_CONFIG 
>   set_cachesize 0 65000000 1 
>   set_lg_bsize  2097152 
>  
> I expect it to be 64 MB cache, is that okay this way? I must admit that I 
> couldn't completely follow the explanations of Dieter. Hm, I blame it to 
> my English. 
> Another change was made in /kolab/etc/kolab/templates/slapd.conf.template 
>   idlecachesize 10000 
>   sizelimit 20000 
> That should cache ~50% of the data, while I can query all records without 
> getting to the limit. Does that look fine? 

These are rather high settings. While the cachesize in DB_CONFIG is
OK, I don't think you will need permanent caching of 10.000 entries.
sizelimit restricts the number of entries returned by a search; the
default is 500, so you should set 'sizelimit unlimited' to achieve
your task.
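
In slapd.conf.template the concrete change is just (a minimal sketch;
note that stock OpenLDAP spells the IDL cache directive idlcachesize,
and whether you keep or lower its value is the memory trade-off
below):

  # return all matching entries instead of the default limit of 500
  sizelimit unlimited
  # permanent IDL cache, value only illustrative
  idlcachesize 5000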
   
> I'm really unsure about the config values. But changing them and trying to 
> measure is not as easy as it seems in the first place, so I thought I 
> might ask here. 

If you can spare this amount of memory, stick to your settings; if
you feel that other applications lack performance due to a shortage
of memory, reduce them.

[...]

-Dieter

-- 
Dieter Klünter | Systemberatung
http://www.dkluenter.de
GPG Key ID:01443B53



