KEP 2: Modification of datetime type, introduction of 'tz' sub-tag
wrobel at kolabsys.com
Tue Dec 7 22:36:03 CET 2010
Sorry for the double post - Horde gave up on me ;)
Zitat von Gunnar Wrobel <wrobel at kolabsys.com>:
> Zitat von "Georg C. F. Greve" <greve at kolabsys.com>:
>> Hi Hendrik,
>> On Tuesday 30 November 2010 10.52:45 Hendrik Helwich wrote:
>>> For backward compatibility I agree that clients should be able to read
>>> RFC3339 for the first Kolab format version. But for writing I think clients
>>> MUST write a clear, normalized datetime format like the Zulu format and
>>> also do not need to read RFC3339.
>> If clients can read it for ONE Kolab version, they can read it for all of
>> them. I don't think anyone would really want to have multiple
>> parsers for two
>> formats where one format is a limited subset of the other.
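To illustrate why a single parser suffices: the Zulu form is a strict subset of RFC3339, so one RFC3339 parser reads both. A minimal Python sketch (using the standard library's `fromisoformat`, which handles these common shapes though not every corner of RFC3339):

```python
from datetime import datetime

def parse_rfc3339(s: str) -> datetime:
    # fromisoformat on older Pythons does not accept the trailing 'Z',
    # so map it to the equivalent '+00:00' offset first.
    return datetime.fromisoformat(s.replace("Z", "+00:00"))

# The Zulu form and the full offset form parse with the same function...
zulu = parse_rfc3339("2010-12-07T22:36:03Z")
offset = parse_rfc3339("2010-12-07T23:36:03+01:00")

# ...and both denote the same instant.
assert zulu == offset
```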
>>> This is because I see no real benefit for Kolab in the complex RFC3339
>>> datetime format. That format can carry partial timezone information, and
>>> there is no need for it. Why not omit this unused information to make the
>>> format clearer?
>> As another discussion demonstrated, it is not actually a time zone, but
>> just an offset. While - as long as all times are always stored without DST
>> in effect - it allows extrapolating some idea about the meridian, it does
>> NOT allow extrapolating the correct DST regime.
>> So it is a pure offset, and *not* a redundancy.
>> But as an offset it is fairly harmless, and parsers can be expected to have
>> no issues dealing with it. So it does not introduce any risk or instability.
>> But you are right it is also not strictly necessary.
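A small illustration of the offset-versus-zone distinction: the same `+01:00` offset is shared by zones with different DST regimes (Europe/Berlin observes DST, Africa/Lagos does not), so the offset alone cannot say which rules apply when an event recurs. A Python sketch using the standard `zoneinfo` module (assumes a tzdata installation; the zone names are only examples):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# A fixed-offset RFC3339 timestamp...
stamp = datetime.fromisoformat("2010-12-07T23:36:03+01:00")

# ...matches wall time in both of these zones in December,
berlin = datetime(2010, 12, 7, 23, 36, 3, tzinfo=ZoneInfo("Europe/Berlin"))
lagos = datetime(2010, 12, 7, 23, 36, 3, tzinfo=ZoneInfo("Africa/Lagos"))

# yet Berlin shifts to +02:00 in summer while Lagos stays at +01:00.
# The offset alone cannot distinguish the two DST regimes.
assert stamp == berlin == lagos
```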
>> If anyone has an idea why it ended up in the format, I'd be
>> interested in the
>> story. My guess is that it somehow came from ISO 8601, of which RFC3339 was
>> then defined as a profile, IIRC.
>>> If we have the possibility to change things in version 1.1, I would like to
>>> pick up Andrew's idea (mail from 12.11.2010) to specify times directly in
>>> local time, e.g. for the element "last-modification-date" in the Zulu
>>> format and for the element "start-date" optionally in a local time like
>>> this: [...]
>>> So the suffix 'Z' could indicate that it is a UTC time. For a local time,
>>> the timezone which is specified in the Kolab format XML must be used.
>> Is this RFC3339 compliant?
>> It seems like it would establish a third, RFC3339-incompatible format, which
>> would be very easy to mistake for an RFC3339-compatible format, because it
>> would probably defeat the expectation of many client implementors that
>> someone would go and create yet another time format when RFC3339 is widely
>> used for this purpose.
>> That seems like a rather risky proposition to me.
>>> You suggested that the Zulu format might not be sufficient. I think you are
>>> referring to the milliseconds. Do you see a use case where this could be
>>> relevant?
>> We are planning to extend Kolab integration into the area of real-time
>> collaboration technologies, including collaborative editing, among other
>> things.
>> It is entirely foreseeable that such applications would be based
>> upon RFC3339,
>> so if RFC3339 is not supported it would necessitate translation which would
>> introduce yet another potential source for error with no gain, and we might
>> find that some of these applications actually make use of the milliseconds.
>> So yes, I would like to not close that door.
>> As a compromise proposal - because strict Zulu UTC with the time zone
>> information is sufficient for the purposes of all existing Kolab
>> objects - we
>> could say that
>> * Clients *MUST* parse datetime values as RFC3339
>> * As a general rule, Clients *SHOULD* always write datetime in the simplest
>> possible format
>> * For all existing objects individually we specify UTC Zulu *MUST* be used.
>> This way we'd keep backward compatibility, and older clients will be able to
>> continue reading the timestamps unless they have implemented the
>> implicit "do
>> not read versions above your level" rule.
>> At the same time we ensure that we have a smooth path to integrate other
>> technologies, including those for which the current approach would fall
>> short.
>> Because there is nothing yet that uses it, clients would gain a grace period
>> to switch to full RFC3339 parsing where they aren't already using it.
>> What do you think?
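In code, the compromise could look roughly like this; a hedged Python sketch in which the function names are illustrative and `fromisoformat` stands in for a full RFC3339 parser:

```python
from datetime import datetime, timezone

def read_datetime(s: str) -> datetime:
    # Clients MUST accept any RFC3339 datetime on read.
    return datetime.fromisoformat(s.replace("Z", "+00:00"))

def write_datetime(dt: datetime) -> str:
    # Clients SHOULD write the simplest form: normalize to UTC Zulu.
    return dt.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

# A timestamp with an offset is read, then written back as plain Zulu UTC.
roundtrip = write_datetime(read_datetime("2010-12-07T23:36:03+01:00"))
assert roundtrip == "2010-12-07T22:36:03Z"
```

Older clients that only write Zulu stay valid under this rule, since their output round-trips unchanged.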
>>> So in fact I think we have to do a trade-off here and decide what is more
>>> important:
>>> (a) to use actual time zone data
>>> (b) to assure that all people arrive at a meeting at the same time
>> You are right, there is a trade-off here. But this is not it.
>> With static time zone specifications, some people will still miss
>> their 11:00
>> meeting because they know it's at 11:00, it has been there for
>> years. So when
>> the computer tells them it is now at 10:00, they will probably ignore the
>> computer, and still go at 11:00, just like they'll ignore their car
>> that tells them to turn right when they know they need to go straight.
>> If that same guy's time zone file is out of date, but everyone
>> else's is still
>> up to date, he'll just as gladly go to the meeting at 11:00, and
>> meet everyone
>> as planned, and all is well.
>> I am not trying to say that ALL people would behave like this; some people
>> will behave differently. What I am trying to explain is that *NONE* of
>> these options can *GUARANTEE* that everyone will be at the right time at
>> the right place.
>> The questions are:
>> (a) Which method has the better chance of providing correct information?
>> (b) Which method is more robust against future developments?
>> and, of course,
>> (c) Who will tech support (and consequently the user) blame for
>> the failure?
>> The answers to these questions are fairly straightforward.
>> Answer to (a): As long as DST rules do not change, both perform
>> equally well.
>> Once DST rules change, the static encoding *will* break, while the database
>> *may* break, but only if the user has not updated their system in quite a
>> while, because such changes are prepared politically, then communicated,
>> incorporated into the database, and made available quickly.
>> So in most cases there is a substantial update window which would only be
>> realistically missed in an unmaintained, unserviced and essentially orphaned
>> installation of Kolab. I'd expect those users to have much bigger problems
>> than a recurring meeting that switched one week early or late.
>> Answer to (b): The database.
>> Because there is an RFC in the works that will do for DST
>> information what RFC
>> 1305 did for time synchronization. Just like there are very few people today
>> who use atomic clock receivers to set their system time and instead rely on
>> NTP, only very few people will use database updates, and instead
>> rely upon the
>> network service.
>> Answer to (c):
>> For the static approach: Kolab.
>> For the database: The platform provider.
>> So the database scores better on every single issue. That is why,
>> even though
>> it is not (yet) perfect, I see it as the better of two imperfect choices.
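The database approach amounts to storing the instant in UTC together with an Olson zone name, and resolving the local wall time only at display time against whatever tzdata the system currently carries, so a rules update fixes the display without touching stored objects. A Python sketch with the standard `zoneinfo` module (zone name and date chosen purely for illustration):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Stored in the Kolab object: the UTC instant plus the Olson zone name.
stored_utc = datetime(2011, 7, 4, 9, 0, tzinfo=timezone.utc)
stored_zone = "Europe/Berlin"

# Resolved only at display time, against the system's current tzdata;
# if DST rules change, updating tzdata corrects the display everywhere,
# with no rewrite of stored objects.
local = stored_utc.astimezone(ZoneInfo(stored_zone))
assert local.isoformat() == "2011-07-04T11:00:00+02:00"
```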
> Finally found the time to read through the Olson/static part and make
> my mind up about it. This part from Georg has been the perfect summary
> for me and Olson definitely gets a big +1 from my side.
> To add the PHP/Kolab server perspective: The Olson db has been part of
> PHP since 2005 and is being updated regularly. If you don't update PHP,
> or respectively your Kolab server, for three to five years (as has been
> discussed here), the Olson database and recurring events crossing DST
> are indeed the least of your problems ;) I would say the same holds
> true for clients.
> Still I can imagine that people might actually keep a system without
> updates for years. "Never change a running system" is sensible advice,
> after all. *But* ... if this is causing problems then it can be easily
> fixed by updating the problematic system. Static information on DST in
> the format would mean that you could only fix the problem by trying to
> fix problematic events in the Kolab IMAP storage. This is *way* harder
> and nothing you want to go into.
>>> Additionally it could be allowed for clients to update the timezone data in
>>> the Kolab item if they notice that it's outdated.
>> That only seems to combine the weaknesses of both approaches, and none of
>> the strengths, while increasing the burden on client implementors.
>> It adds many questions to which we'll have to discuss answers, such as:
>> When does one client determine that the static information is out of touch
>> with reality? How does it ensure its update is the better one, and won't be
>> overwritten by another client which may or may not correctly see
>> that the first
>> client was wrong in its assumption? How do we ensure that all clients have
>> received the latest updates before displaying the event to their user?
>>> > So it will likely be supported by an NTP-like service to update DST
>>> > information in the future, which will give us maximum reliability and
>>> > assurance of correct display, with no change to the storage format.
>>> To be honest I have doubts that this will work correctly in practice.
>> This is a much simpler problem than NTP.
>> Considering that NTP seems to work pretty well in practice, I have little
>> doubt this can be made to work, to be honest.
>>> All people on all the different systems always need to have a database
>>> which is fully mappable to the Olson database and which also needs to be
>>> in the same state. How can it be assured that all people always update
>>> their database at the same time?
>> Firstly, they don't need the SAME database. They only need a version of the
>> database that correctly gives DST rules for the current instance.
>> If rules have not changed in the region in question, but only in
>> another which
>> is irrelevant to the calculation at hand, different versions will
>> deliver the
>> same result.
>> Secondly, when limiting it to "versions of the database that are
>> correct with
>> regards all the zones relevant to the calculation", not everyone
>> will need to
>> have them, only those providing that service.
>> A cron job importing the latest version once a day ought to do just fine
>> for that. Remember: The Olson database changes very infrequently, and for
>> events that are months in the future. So as long as there is at least one
>> update per half-year, you're typically on the safe side.
>> Best regards,
>> Georg C. F. Greve
>> Chief Executive Officer
>> Kolab Systems AG
>> Zürich, Switzerland
>> e: greve at kolabsys.com
>> t: +41 78 904 43 33
>> w: http://kolabsys.com
>> pgp: 86574ACA Georg C. F. Greve
> Gunnar Wrobel