Rationale of the KEP2 design: central in-format TZ-data vs. individual local TZ-databases and more (Part 2)
Georg C. F. Greve
greve at kolabsys.com
Fri Mar 25 16:46:20 CET 2011
Allow me to skip over the ad-hominem and unconstructive parts and focus
on the substantive issues, leaving out the issues with updating semi-static
encoding, which Bernhard already explained.
On Tuesday 22 March 2011 11.48:27 Florian v. Samson wrote:
> I clearly answered that: Any source, which is providing sufficient data,
> e.g. the web-service you named more than once.
Ok. So it seems there was a misunderstanding, and we do agree that it makes
sense to use TZ data sources such as the system-wide databases or the web
service that is currently in RFC drafting.
Thank you for clarifying that.
> I merely tried to suggest that the *reasoning* in these discussions must be
> publicly documented, not just the results. This is exactly what PEP / KEP
> demands, but I cannot find much of the technical reasoning which is being
> discussed here in KEP2.
Ah, on the reasoning we agree.
That is why it has been explained at great length on the public mailing list,
and the most important aspects have been summarized into the KEP.
There is a balance between readability and completeness, though. If the
entire discussion with all its fringe aspects were put into the KEP, it
would quickly become quite unreadable, again increasing the barrier to entry.
> As you have strong opinions in this matter it is also a bit problematic
> that it is you claiming to be transporting many opinions, doing the
> evaluation of opinions and majorities, stating your own opinion, and
> developing the KEP2.
KEP 2 is actually very far from what I personally thought was the solution in
the beginning. The problem with an issue as controversial and complex as this
is getting to any result at all.
That is why, in my experience, it is necessary to defend the most
comprehensive arguments and the deepest understanding of the issue as if they
were your own idea, which for KEP 2 I clearly cannot claim.
Otherwise you end up in an eternal editing loop, where two or more groups keep
at an editing war that goes nowhere.
> > As explained in
> > http://kolab.org/pipermail/kolab-format/2010-December/001178.html
> > UTC + TZ requires knowledge about the DST assumption of the client that
> > made the appointment.
> There is no "DST assumption": UTC has no DST and the TZ-data implicitly
> includes DST, as it defines the delta between UTC and the local time (which
> includes DST).
So how do you know what the UTC time is for 13:00 in Europe/Berlin on
30.3.2015?
My understanding is that you would apply the same rules you have today, as
those are the best information available, and calculate UTC based on those
rules. But this is merely an assumption: you do not know whether those rules
are going to change between now and 2015.
So in order to correctly interpret UTC + TZID, you also need to know the DST
rule assumption the original client made when writing the event. That
ultimately boils down to a pre-processing step that checks whether the stored
UTC was calculated correctly and can be relied upon, or needs to be adjusted.
> > So we then really store in a time zone which you might call UTCWND ("UTC
> > With No DST") and store that using RFC3339. This would however mean that
> > we're not actually RFC3339 compliant, as RFC3339 specifies UTC and not
> > UTCWND.
> UTC == "UTCWND".
Considering that 10:00 in Europe/Berlin
* in the winter translates to 09:00 UTC (and 09:00 UTCWND),
* in the summer translates to 08:00 UTC during DST, but 09:00 UTCWND during
  DST,
and considering that 08:00 != 09:00, I am not sure how UTC == UTCWND.
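The difference can be checked directly. In this sketch (plain Python zoneinfo again; the fixed UTC+1 offset stands in for the hypothetical "UTCWND", i.e. Berlin's standard offset applied year-round with no DST), a summer timestamp yields different hours under the two schemes:

```python
from datetime import datetime, timezone, timedelta
from zoneinfo import ZoneInfo

berlin = ZoneInfo("Europe/Berlin")
# "UTCWND": Berlin's standard offset (UTC+1) applied year-round, no DST.
utcwnd = timezone(timedelta(hours=1))

winter = datetime(2011, 1, 15, 10, 0, tzinfo=berlin)  # CET, UTC+1
summer = datetime(2011, 7, 15, 10, 0, tzinfo=berlin)  # CEST, UTC+2

print(winter.astimezone(timezone.utc).hour)  # 9 -- UTC and UTCWND agree
print(summer.astimezone(timezone.utc).hour)  # 8 -- real UTC during DST
print(summer.astimezone(utcwnd).hour)        # 9 -- "UTCWND" during DST
```

During DST the two schemes disagree by one hour, which is exactly why UTC and "UTCWND" cannot be treated as the same thing.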
Georg C. F. Greve
Chief Executive Officer
Kolab Systems AG
e: greve at kolabsys.com
t: +41 78 904 43 33
pgp: 86574ACA Georg C. F. Greve