Basic rationale of the KEP #2 design

Georg C. F. Greve greve at
Fri Mar 11 20:30:56 CET 2011

Dear Florian,

On Friday 11 March 2011 16.41:08 Florian v. Samson wrote:
> That is one reason why specifications must never follow or adapt to broken
> implementations.

Indeed. Which is why KEP #2 does not follow or adapt to the current broken 
specification and implementation, but proposes a clean solution that the 
implementations can then adapt to.

> Please be clear: which "several clients"?  I only read Kontact-developers
> stating that on this list.

Joon said that the Toltec connector also does not preserve nested XML tags, so 
both implementations that are mentioned as the reference implementations for 
version 2 of the format do not show the behaviour upon which you suggest we 
rely. IIRC, Gunnar also said the same was true for Horde.

Other clients have not commented, but may be having similar issues.

> IMO it must be fixed in those clients, and loosening the specification is
> an "anti-fix", as it calls all clients to use more leeway.

As stated before, I agree with your suggestion to fix all clients to start 
preserving all XML tags. But as we know this is not currently something they 
do, we cannot rely upon it for this particular KEP.

This is in no way "loosening" the specification: it leaves the status quo 
untouched and follows the (IMHO) sound principle of not changing all aspects in 
a single KEP. In particular, it would be years before we could rely on 
tightened XML preservation rules being available everywhere, and the issue 
that KEP 2 addresses is more pressing than that.

> Exactly, but not Synckolab!
> Synckolab is bound to the APIs Thunderbird and Lightning offer, if one does
> not intend to come up with an awkward and clumsy design.

So you're saying with absolute certainty that the Thunderbird and Lightning 
APIs offer neither RFC3339 parsing nor timezone management?
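Whether those APIs exist I cannot claim to know either way, but RFC3339 
timestamps are regular enough that a platform without a dedicated call can 
still handle them with very little code. A minimal sketch in Python (the 
function name is mine, purely illustrative, not any client's API):

```python
# Minimal RFC3339 parsing sketch; illustrative only, the client APIs
# discussed above may well provide their own equivalents.
from datetime import datetime

def parse_rfc3339(s: str) -> datetime:
    """Parse an RFC3339 timestamp such as '2011-03-11T20:30:56+01:00'."""
    # fromisoformat handles the numeric-offset form directly; normalize
    # a trailing 'Z' (UTC) first for older Python versions.
    if s.endswith("Z"):
        s = s[:-1] + "+00:00"
    return datetime.fromisoformat(s)

ts = parse_rfc3339("2011-03-11T20:30:56+01:00")
print(ts.utcoffset())  # 1:00:00
```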

> The same applies to kolab-evolution, which is limited to the APIs a
> camel-provider can use, if one avoids invasive changes in Evolution which
> will not be accepted upstream, anyway.

So you're 100% certain that Evolution does not offer APIs for tz support?

> > The concept has been tried in iCalendar, likely because they also could
> > not agree on one approach and thus allowed both, and leads to
> > inconsistent client behaviour. 

> Please name your "one", "both" and "inconsistent client behaviour"
> specifically.

There are two ways of encoding DST data.

One is to encode it statically, e.g. "this time zone always switches to summer 
time on the 3rd of March every year and switches back on the 5th of October". 
The other is to encode it dynamically, through a database lookup, e.g. "give 
me the DST rules for Europe/Berlin as they currently are". Either approach on 
its own is likely to yield quite consistent client behaviour.

From the user's perspective, the static encoding is going to be consistently 
wrong once the real-world rules have changed, but at least it is consistent.

The dynamic approach is likely to be more correct over time, but has the 
potential pitfall that users with very old/outdated systems may be shown the 
wrong time.

If BOTH time zone identifier and static encoding are stored, they necessarily 
drift apart as soon as the DST rules change.
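The drift can be sketched concretely with Python's zoneinfo module, which 
reads the system's tz ("Olson") database (an illustration only, not anything 
a Kolab client would ship):

```python
# Static-vs-dynamic illustration using Python's zoneinfo, which reads the
# system's tz ("Olson") database. Europe/Berlin is UTC+1 in winter (CET)
# and UTC+2 in summer (CEST).
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

berlin = ZoneInfo("Europe/Berlin")
winter = datetime(2011, 1, 15, 12, 0, tzinfo=berlin)
summer = datetime(2011, 7, 15, 12, 0, tzinfo=berlin)

print(winter.utcoffset())  # 1:00:00 (CET)
print(summer.utcoffset())  # 2:00:00 (CEST)

# A client relying on a statically stored "+01:00" would keep applying it
# in July, ending up one hour off once DST (or a rule change) is in force.
static = timedelta(hours=1)
print(summer.utcoffset() - static)  # 1:00:00 of drift
```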

The data stored is now inconsistent, and clients must choose: follow the 
static encoding, which they know to be wrong but possibly consistent with 
what other clients do, or follow the dynamic encoding, which they know to be 
correct, while other clients may be betting on the static data.

There's been substantial discussion on this list about this issue, and about 
the fact that clients increasingly follow the dynamic approach, thus ignoring 
the (inconsistent and outdated) static data.

But because the static data is there, and okay to use according to the 
standard, which allows both approaches, there will be inconsistencies between 
clients depending on which path they choose.

> You correctly provided a reason, why we must agree on and specify timezone
> identifiers (you call them geographical identifiers here): interoperability
> on the format-level.

I am glad we agree that storing timezone identifiers is the right approach.

> But you always lacked and still do lack any reason why ...
> a. the *source* for the timezone data must be a database

What else would that source be, in your view?

All systems I know of use a system-wide database for this kind of 
information, often in the form of some ASCII table with system libraries that 
make it easy for all applications to access.

Which other methodology do you propose?
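As one concrete example of such a system-wide database: on most Unix systems 
it lives under /usr/share/zoneinfo, and Python's zoneinfo module exposes it 
directly (a sketch; actual client libraries would use their own bindings):

```python
# The "system-wide database" in question: Python's zoneinfo module reads
# the installed tz database and can enumerate its identifiers.
from zoneinfo import ZoneInfo, available_timezones

zones = available_timezones()
print(len(zones) > 400)          # the tz database carries hundreds of ids
print("Europe/Berlin" in zones)  # True

# Each identifier resolves to the full DST rule set for that zone:
print(ZoneInfo("Europe/Berlin").key)  # Europe/Berlin
```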

> b. this *source* must be the same database for all (actually this point was
> weakened by you, lately)

That point was never made, actually, as for instance under Windows the KEP 
always proposed that the time zone would be converted to the Windows time 
zone, and be displayed by making use of the Windows system database for time 
zone information.

The narrowing down of the Olson set has always been about providing an 
encompassing, yet limited, set of tzids, for the reasons already discussed 
and to which you agreed.

> > Not all communication takes place in email on list, as explained.
> Consequently they do not exist for the readers of this list.

That would be inappropriately exclusive, I believe.

People should have the right to participate in this process in any way they 
consider workable for them, and I'd rather get more input than preclude it by 
trying to force everyone to formulate their input in full on the list.

But everyone is following the discussion, I believe, so they could have 
spoken up, and still can, if they felt their input was somehow misrepresented.

> Yes we do.  Any of us needed some time to fully comprehend the various
> aspects and depth of the timezone issue.  Sure, that applies to Hendrik as
> well, and it was RFC3339 limited to UTC only: please do read his mail you
> referenced.

I did. Quoting from the mail sent by Hendrik on 21 December 2010:

 "I think storing in local time would be the best solution.
  If we look at this article:
  It shows that it is in fact possible that the standard time somewhere might 
  So it is not enough to store the UTC time and a flag for DST to calculate the 
  intended local time. It is needed to store the offset to UTC that was active 
  at this time.
  As i looked at RFC3339 i see that its not true that only UTC is possible.
  Also local times like this are possible:


 Perhaps this is a solution with which all are satisfied?"

> Sorry, that is not true: the proposal for RFC3339 came from you.

Please re-read the above. 

> This is a nice example how many of us slowly came to the conclusion that
> UTC together with an tz-id (he modelled it as an simple offset at that
> time, which turned out to be insufficient) is a good solution.
> This is exactly what Joon and I propose.

And for the reasons outlined, and meanwhile understood by the vast majority, 
that will not resolve the issue, or would create the need for substantially 
more metainformation and more complex calculations.

It was in fact your own team which strongly advocated storage of local time as 
the best solution for this issue for the Evolution Kolab Connector, finishing 
with their post of 28 October 2010:

Unfortunately the understanding of most other people had not yet reached the 
point of seeing that storing local time is indeed the best solution. Most of 
us, myself included, were fiercely of the opinion that UTC + time zone should 
be sufficient to model this issue.

I will openly admit that Hendrik was right and I was wrong.

I did not understand the full depth of the issue until December when, having 
exhausted all the other options and having spent a lot of time trying to make 
UTC + TZID work, I finally had to realize that the issue was more complex 
than that.

This resulted in my post of 20 December 2010 to try and explain the rationale 
of storing local time, after all:

Hendrik immediately re-affirmed that he was still seeing storage of local time 
as the best option in his mail of 21 December 2010 quoted above.

So both on storing local time, as well as using RFC3339 for that storage, we 
have ultimately followed the proposals made by the Evolution team.

In other aspects we have taken into account points raised by other clients, 
e.g. the use of an attribute, rather than a tag, for the timezone id.
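For illustration only -- the element and attribute names below are my own 
assumptions, not the normative KEP #2 syntax -- the shape of "local time in 
the element text, timezone id as an attribute" looks roughly like this:

```python
# Hypothetical fragment; the element name "start-date" and attribute "tz"
# are assumptions for illustration, not the normative KEP #2 syntax.
import xml.etree.ElementTree as ET
from datetime import datetime

fragment = '<start-date tz="Europe/Berlin">2011-03-11T20:30:56</start-date>'
elem = ET.fromstring(fragment)

local = datetime.fromisoformat(elem.text)  # naive local wall-clock time
tzid = elem.get("tz")                      # timezone id, kept separately

print(local.hour, tzid)  # 20 Europe/Berlin
```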

> I read his (very few) mails on this list and his statements there were
> different from what you purport.

Like the mails from Hendrik, we seem to be reading them very differently.

But when you said you had understood him differently, I indeed took the time to 
ensure I had understood him correctly even though he prefers not to be drawn 
into this debate. So yes, KEP 2 has Gunnar's support as it stands.

> Shawn's statements have been generally negative as I read them.

Once more we seem to be reading different things.

I read his mail as not being overjoyed at having to put in the work to 
address this issue, but at the same time not seeing a better approach, and 
thus saying that KEP #2 would be implemented as it stands if it is approved 
in this form.

That is certainly not an enthusiastic welcome, but then none of us is happy 
about having to put in extra work to resolve an issue that was introduced by 
the authors of the format some years back. There is just no alternative.

Best regards,

Georg C. F. Greve
Chief Executive Officer

Kolab Systems AG
Zürich, Switzerland

e: greve at
t: +41 78 904 43 33

pgp: 86574ACA Georg C. F. Greve
