[Kolab-devel] OpenLDAP replication issues: slurpd vs syncrepl

Fabio Pietrosanti lists at pietrosanti.it
Mon Feb 13 12:02:00 CET 2006


Martin Konold wrote:
> Am Sonntag, 12. Februar 2006 12:33 schrieb Fabio Pietrosanti:
>
> Hi Fabio,
>
> I don't think that syncrepl is a benefit even with 80 000 users currently. If 
> time tells that syncrepl is much more robust than slurpd (which could be the 
> case because the architecture is more robust) I am willing to have a look at 
> it for this single reason.
>
> In short the amount of LDAP data is small, mostly read and seldom written and 
> is required in all  locations with the current Kolab model.
>   
I agree that time is needed to evaluate which replication method is
more robust; however, I really don't like the fact that slurpd's only
interaction with slapd is reading the replication log files.

I would like to take this opportunity to discuss some Kolab design
aspects that I don't fully understand and that I consider limitations
for the project on its route to the enterprise market.

>> This would give many improvements:
>> - security
>>   With syncrepl it is possible to specify what has to be replicated
>> and where.
>>     
>
> Currently Kolab needs all data anyway.
>   
Let's discuss why, and how we should reduce/rationalize the data
needed by Kolab.
>>    It should be possible to replicate to slave server B only the users
>> that have KolabHomeServer: B .
>>     
>
> No, e.g. for public folders access control all users are required on all 
> servers.
>   
OK, I understand the need.

I'm wondering whether there is some approach that could reduce the
LDAP data replicated to the slave servers.

Would it be possible to introduce the concept of an LDAP referral for
non-local users, or to copy not all the attributes but only the cn and
the data needed for public folder ACL management?
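
Syncrepl's consumer configuration can already express this kind of
partial replication. A minimal sketch (the attribute list, DNs and
hostnames below are examples, not the actual Kolab schema):

```
# Hypothetical consumer stanza in the slave's slapd.conf: pull only the
# attributes needed for public folder ACL management, not full entries.
syncrepl rid=001
         provider=ldap://master.example.com
         type=refreshOnly
         interval=00:00:05:00
         searchbase="dc=example,dc=com"
         scope=sub
         attrs="cn,uid,mail"
         bindmethod=simple
         binddn="cn=replicator,dc=example,dc=com"
         credentials=secret
```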

>>    Or the whole LDAP database could be replicated, but without the
>> passwords of non-local users.
>>     
>
> If you don't trust in the physical security of Kolab servers you are in 
> trouble anyway. 
>
> In general you may though avoid the password hashes entirely when relying on a 
> third party authentication mechanism. (SASL is your friend.)
>   
Consider the typical scenario of a large organization (a bank, a postal
service, or any company with an HQ and branch offices of 5 to 20 people
each).
In a Microsoft scenario each branch office has a BDC along with a small
Exchange server.
Does the HQ trust the IT staff at the branch office enough to give them
complete directory access? No!
Does the HQ trust the physical security at the branch office? No!

So why should they replicate the whole directory, including the
passwords and other sensitive data, to the branch office?

IMHO this is a realistic scenario that would severely limit the
adoption of Kolab in the enterprise (large organization) market.
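
With syncrepl, leaving the password hashes out of the branch office
replica would be a one-line matter. A sketch, assuming OpenLDAP 2.3's
exattrs keyword in the consumer stanza (hostname is an example):

```
# Hypothetical fragment of the branch office consumer's syncrepl stanza:
# replicate everything from HQ except the password hashes; local
# authentication would then rely on SASL / a central service.
syncrepl rid=002
         provider=ldap://hq.example.com
         searchbase="dc=example,dc=com"
         exattrs="userPassword"
```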

>> - network performance
>>   Only the data needed to allow a slave server to work should be
>> replicated.
>>     
>
> I don't buy this mainly because the amount of data for LDAP replication is 
> negligible.
>   
When OpenLDAP starts generating 2.0 GB of transaction logs for a 300 MB
database, and those logs are replicated between all Kolab servers, the
network performance problem will show itself.

I don't know whether this could be solved by disabling the OpenLDAP
behaviour of creating log.xxxxxx files inside the openldap-data
directory; however, after the import of 78k users OpenLDAP created more
than 2 GB of transaction logs, which were part of the replica and
caused network congestion in the infrastructure I was setting up.

For this reason, carefully selecting and replicating only the needed
data would dramatically improve network performance.
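
As an aside on the log.xxxxxx files: those are transaction logs of the
Berkeley DB backend, not LDAP data, and (assuming the bdb/hdb backend
with Berkeley DB 4.2 or later) they can be pruned automatically with a
DB_CONFIG file in the database directory. A sketch, with example values:

```
# DB_CONFIG in the openldap-data directory (sketch)
# Remove transaction log files as soon as they are no longer needed,
# so they do not pile up to gigabytes after a large import.
set_flags DB_LOG_AUTOREMOVE
# Example cache size: 256 MB (0 GB + 268435456 bytes, 1 segment)
set_cachesize 0 268435456 1
```

This would at least keep the transaction logs out of the replicated
data, independently of the slurpd vs syncrepl question.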

>> - cyrus performance
>>   Only mailboxes of local users should be created.
>>     
>
> For proper ACLs it is beneficial if the cyrus imapd knows about all users. 
> Performance should not be affected by this. Can you provide any measurements?
>   
When you have a lot of users, the following cause severe performance
problems, because kolabd and Cyrus have to do *a lot* of unneeded work:
- the creation of thousands of unused mailboxes
- the verification and setup of all ACLs for each mailbox at each
kolabd restart
- every kolabd LDAP-related activity that needs to crawl the directory
(such as Postfix transport/virtual map creation)

Crawling the LDAP directory, loading 78k users into memory, creating
mailboxes, and comparing the ACLs for every user (even when it is not
needed because the user is not local) causes, trust me, severe
performance problems.

I don't think that Microsoft Exchange and Active Directory replicate
all mailboxes along with all LDAP objects to every slave server.

For performance reasons, only the local mailboxes and only the needed
local LDAP objects should be replicated from the master server.
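
Expressed as replication policy, this would be a filter on the home
server attribute. A hypothetical sketch (the object class and hostnames
are examples, not necessarily the real Kolab schema):

```
# In the syncrepl stanza of slave B: pull only users homed on B,
# plus whatever objects are needed globally (sketch).
         filter="(|(kolabHomeServer=slave-b.example.com)(objectClass=kolabGroupOfNames))"
```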

Imho kolab should follow that way, at least for cyrus mailboxes.

If there is a valid reason for Kolab to keep replicating all Cyrus
mailboxes across all slave servers, let me know; I'm quite new to the
Kolab project and don't know all the design decisions that were made
in the past.


>> - kolab design simplicity enhancements
>>    Slurpd should be used only for kolabd notification but not for
>> replica, leaving this task to the more feature rich syncrepl.
>>     
>
> How will this simplify the Kolab design?
>   
When you manage a huge database, you should really reduce the amount
of data that each component handles.
Slurpd doesn't allow careful selection of what needs to be replicated;
syncrepl does.

Additionally, we all know that OpenLDAP is not a very robust product;
it often crashes on misconfiguration or data integrity problems.
Having two processes (slapd + slurpd) means taking care of recovering
two different processes from crashes instead of one.

In the infrastructure I set up, slurpd has had some stability problems,
and we need to restart it after "unknown" crashes, otherwise
replication does not work properly.

And when you do get inconsistencies with slurpd, you are in trouble and
have to do some manual and non-intuitive work to recover the situation.
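
This is exactly where syncrepl's pull model helps: the consumer runs
inside slapd itself, so there is no second process to babysit, and it
resumes on its own after a failure. A sketch of the relevant knob
(values and names below are examples):

```
# Hypothetical consumer stanza: single process, automatic recovery.
syncrepl rid=003
         provider=ldap://master.example.com
         type=refreshAndPersist
         # on connection loss: retry every 60s for 10 tries,
         # then every 300s indefinitely
         retry="60 10 300 +"
         searchbase="dc=example,dc=com"
         bindmethod=simple
         binddn="cn=replicator,dc=example,dc=com"
         credentials=secret
```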

Those are the main reasons that make me work on syncrepl as an
alternative for data replication.


I hope we can have a profitable discussion on these subjects.

Regards

Fabio
