[Kolab-devel] Expiring Kolab objects cache in Roundcube

Jeroen van Meeuwen vanmeeuwen at kolabsys.com
Mon Jun 11 14:11:42 CEST 2012


On Saturday, June 09, 2012 02:39:04 PM Thomas Brüderli wrote:
> Hello
> 

Hi Thomas,

> The new storage layer for Kolab 3.0 in Roundcube is slowly growing up, and
> it also provides caching of Kolab objects in the local (My)SQL database.
> The cache is persistent and synchronized with the IMAP mailbox on every
> access. This means that all objects of every resource ever accessed through
> the web client have a copy in the local cache. You can easily imagine how
> that cache grows. So we need a strategy for expiring and removing cache
> objects in a way that keeps a good balance between access speed and storage
> volume. Here are a few thoughts on how that might be achieved:
> 
> 1) Clear user's cache when a user terminates a session
> 2) Cronjob which removes cache objects older than T - <cache-lifetime>
> 3) Same as 2) but triggered randomly on web client requests (no cronjob)
> 
> Of course every one of these approaches has its pros and cons. Even the
> removal of objects that have been in the cache for a long time isn't always
> desired. Imagine contacts in an address book which is accessed daily. But
> that's probably a trade-off we have to live with because, on the other
> hand, the web client will not be notified when a user is removed from the
> IMAP backend and would therefore never access the cached objects again. So
> removing objects added to the cache some time ago is mandatory, even if
> they'll be re-added a short time afterwards.
> 
> Also 1) isn't perfect for several reasons: it would require a complete
> re-sync on every login, which can easily take longer than just a few
> seconds. And, even more importantly, caches are shared amongst users. If
> user A has shared a folder with user B, its objects are only cached once
> and both users access the same records. Thus clearing the cache when user A
> logs out would cause user B's session to re-sync the entire folder.
> 
> I'd therefore propose either 2) or 3). Depending on how easy or complicated
> it would be to install a cron job within the Kolab setup, 2) would
> definitely be better because it's entirely independent of web server
> requests. For 3) I had in mind to call a decoupled shell command, in order
> not to block the HTTP request which triggered the cache cleanup process.
> 
> Any comments are welcome!
> 

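As a sketch of how 3)'s decoupled trigger could avoid blocking the request
(Python pseudocode; the probability knob and the cleanup command are
assumptions for illustration, not actual Roundcube code):

```python
import random
import subprocess

# Probability that any given web request kicks off a cleanup run
# (a hypothetical tuning knob, not an existing Roundcube setting).
CLEANUP_PROBABILITY = 0.001

def maybe_trigger_cleanup(command, probability=CLEANUP_PROBABILITY):
    """Randomly spawn the cleanup command as a detached process so the
    HTTP request that triggered it is not blocked. Returns True if a
    cleanup run was started."""
    if random.random() >= probability:
        return False
    # start_new_session detaches the child from our process group, and
    # we deliberately never wait() for it.
    subprocess.Popen(
        command,
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
        start_new_session=True,
    )
    return True
```

The spawned command would be whatever ends up implementing the expiry query;
the point is only that Popen() without a wait() returns immediately, so the
triggering request isn't held up.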
Concerning user mutations (a change of the primary unique ID / result 
attribute used as the identifier in Roundcube, currently the first 'mail' 
attribute value, or user deletion), I'm already in the process of building a 
Roundcube plugin to the Kolab daemon that can hook itself up to the database 
and make those changes.

It would be easiest if we were able to develop a command-line script that 
synchronizes cache* table entries with what exists in the users table at that 
moment, so that the daemon can restrict itself to updating a limited number 
of columns in the users table, and to a simple delete on the users table.

I think this might help both with pre-seeding the caches for a user's (large) 
folders and with expiring cache* table entries.
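As a rough sketch of what such a synchronization script could do, here is the 
deletion side: dropping cache entries whose owner is gone from the users 
table (SQLite stands in for the actual (My)SQL database, and all table and 
column names are placeholders, not Roundcube's real schema):

```python
import sqlite3

def expire_orphaned_cache(db):
    """Delete cache entries whose owner no longer exists in the users
    table -- what a command-line sync script could run periodically."""
    cur = db.execute(
        "DELETE FROM cache WHERE user_id NOT IN (SELECT user_id FROM users)"
    )
    db.commit()
    return cur.rowcount

# Placeholder schema for demonstration only.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (user_id INTEGER PRIMARY KEY, username TEXT)")
db.execute("CREATE TABLE cache (user_id INTEGER, data TEXT)")
db.execute("INSERT INTO users VALUES (1, 'userA')")
db.executemany(
    "INSERT INTO cache VALUES (?, ?)",
    [(1, "kept"), (2, "orphaned"), (2, "also orphaned")],  # user 2 was deleted
)
removed = expire_orphaned_cache(db)  # removes the two orphaned rows
```

The daemon would then only ever touch the users table, and a run of this 
script brings the cache* tables back in line.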

I think a user's last login timestamp could also be taken into account, if 
expiry beyond a plain cache lifetime is sought.
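Taking the last login into account, the lifetime-based expiry from 2)/3) 
could spare the records of recently active users, roughly like this (again 
SQLite with placeholder table and column names, purely illustrative):

```python
import sqlite3

CACHE_LIFETIME = 10 * 24 * 3600  # hypothetical lifetime: ten days in seconds

def expire_stale_cache(db, now):
    """Remove cache entries older than CACHE_LIFETIME, but only for users
    whose last login is also older than the lifetime, so the caches of
    recently active users stay warm."""
    cutoff = now - CACHE_LIFETIME
    cur = db.execute(
        "DELETE FROM cache WHERE created < ? AND user_id IN "
        "(SELECT user_id FROM users WHERE last_login < ?)",
        (cutoff, cutoff),
    )
    db.commit()
    return cur.rowcount

# Placeholder schema for demonstration only.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (user_id INTEGER PRIMARY KEY, last_login REAL)")
db.execute("CREATE TABLE cache (user_id INTEGER, created REAL)")
now = 1_000_000_000.0
old = now - 2 * CACHE_LIFETIME
db.execute("INSERT INTO users VALUES (1, ?)", (now,))  # active user
db.execute("INSERT INTO users VALUES (2, ?)", (old,))  # idle user
db.execute("INSERT INTO cache VALUES (1, ?)", (old,))  # old entry, active user: kept
db.execute("INSERT INTO cache VALUES (2, ?)", (old,))  # old entry, idle user: removed
removed = expire_stale_cache(db, now)
```

The same query works whether it runs from a cron job (option 2) or from a 
decoupled command spawned by a web request (option 3).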

Kind regards,

Jeroen van Meeuwen

-- 
Systems Architect, Kolab Systems AG

e: vanmeeuwen at kolabsys.com
m: +44 74 2516 3817
w: http://www.kolabsys.com

pgp: 9342 BF08