Rationale of the KEP2 design: central in-format TZ-data vs. individual local TZ-databases and more (Part 2)

Bernhard Reiter bernhard at intevation.de
Fri Apr 1 10:19:54 CEST 2011


thanks for rereading this and your answer. My short answer to your
email is that I agree with your points and that we should split this out
into three topics:

a) How often does tz-data change? 
b) Pros and cons of re-writing the objects when tz-data changes.
c) Which failure mode is better: all clients showing a wrong but consistent
   time, or clients being inconsistent, with some showing the right time?

Maybe we should even document some of this in the wiki,
to make it easier for others and for KEPs to find and refer to.

On Thursday, 31 March 2011 at 19:09:38, Florian v. Samson wrote:
> > > 1. The DST switching dates for this specific TZ are not defined yet.
> > > 2. The DST rules for this specific TZ are redefined (i.e. they are
> > > altered), but they have been defined (although differently) before.
> > > Which one did you intend to address?
> >
> > Both, in both cases we cannot be sure that the tz-data stays the same.
> Well "not sure" is a very weak statement, which can mean anything
> from "extremely unlikely" to "almost guaranteed".

Yes, it needs discussion. My point in that context was that it is not
necessary to distinguish between the different reasons why it changes.
-> a)

> > > And thanks for pointing out this big advantage of in-format TZ-data:
> > > For case 1 above (TZ-data not yet preassigned by legal authorities) a
> > > single source of TZ-data in the Kolab-object would prevent that every
> > > client may do its own (potentially deviating) extrapolation of the
> > > (definitely deviating) local sources of TZ-data (e.g. a local
> > > installation of the Olson database).
> >
> > It has a good and a bad side effect. For me it comes out less favorable.
> Well this is a matter to discuss in order to be aware of all aspects;
> even then still people may conclude different evaluation results.

Yes. We have touched on many aspects by now, but maybe it needs a new overview.
-> b)

> > It is true, tz-data within the object will make all clients display the
> > same, but in the case that the tz-data changes, this will not be what the
> > user wants.
> So you think when there is a change in TZ-data, it is less critical to have
> different clients accessing one Kolab-object displaying different times
> (some of them wrong as well), instead of all clients showing the same time,
> although that time might be wrong?

Yes. Though it took me a while. I first thought it was the other way around.

> I am not inclined to follow that assessment, as this is also "not what the
> users want".

-> c)

> I believe there is no right or wrong in many of the discussed topics and
> issues on this mailing list, hence I believe it is crucial to thoroughly
> evaluate these issues.

I completely agree: this is a design decision in the end, leading to a
solution that leans in one direction or the other.

> > So the tz-data would need updating, which means rewriting the
> > object with the drawbacks coming with it.
> Georg once mentioned, that you listed these drawbacks in detail, concluding
> that my proposition has been rebutted, but I was unable to find that.
> Do you have a pointer / web-link at hand?

I need to search for it. In any case, a new summary using the better
terminology that we reached together will probably be helpful.
-> b) 

> > > > which ultimately boils down to a pre-processing step whether the
> > > > stored UTC was correctly calculated and can be relied upon, or needs
> > > > to be adjusted.
> > >
> > > No, not at all!  By definition of this process, the stored UTC *is*
> > > correctly calculated against the TZ-data used and *must* be relied upon
> > > (what else would you rely on?).
> >
> > I agree with Georg: Using the "old" tz-data will lead to an appointment
> > which is not where the user wanted it to be. So you need to start
> > guessing or update the tz-data.
> This is what I wrote a couple of times:

Yes, you did, so did Georg. I think you were highlighting different aspects.

> "Do update the TZ-data in this 
> case, and the Kolab-client doing so may have to perform some additional
> changes in this Kolab-object".  But the old, now stale TZ-data and (in case
> of storing UTC) the stored date-time fields _were_ correctly calculated by
> definition; one needs them to correctly calculate the intended local time.
> Then the fresh TZ-data is used to calculate the new UTC equivalent of the
> intended local time, which is stored together with the new TZ-data in the
> Kolab-object, replacing the old, stale data there:

If we want this, it would be an outcome of b).
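The recalculation Florian describes above can be sketched in a few lines of
Python. The fixed offsets here are assumptions standing in for full TZ rules
(real Olson data also carries DST transition dates), and the function name is
hypothetical:

```python
from datetime import datetime, timedelta, timezone

# Assumed offsets, standing in for the stale and the fresh TZ rules.
OLD_TZ = timezone(timedelta(hours=2))   # rule stored in the Kolab-object
NEW_TZ = timezone(timedelta(hours=3))   # freshly published rule

def refresh_stored_utc(stored_utc: datetime) -> datetime:
    """Recover the intended local wall-clock time via the old TZ-data
    (the stored UTC *was* correct against it by definition), then
    recompute the UTC equivalent under the new rule."""
    intended_local = stored_utc.astimezone(OLD_TZ).replace(tzinfo=None)
    return intended_local.replace(tzinfo=NEW_TZ).astimezone(timezone.utc)

stored = datetime(2011, 4, 1, 8, 0, tzinfo=timezone.utc)  # 10:00 local under OLD_TZ
print(refresh_stored_utc(stored))  # -> 2011-04-01 07:00:00+00:00 (still 10:00 local)
```

The point of the sketch is that the old TZ-data must be kept around until the
rewrite: without it the intended local time cannot be recovered from the
stored UTC.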

> this has to be an 
> "atomic update" from the perspective of other Kolab-clients accessing that
> Kolab-object (i.e. first both changes have to be applied, then the updated
> object is written).

Technical information: all Kolab object writes over IMAP are atomic: you write
the new email and then delete the old. (In case of extreme trouble you end up
with both, and the user has to resolve the situation.)
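That write-then-delete sequence can be sketched with Python's standard
`imaplib`; the folder name and UID arguments here are hypothetical, and a real
Kolab client would take them from its account configuration:

```python
import imaplib

def rewrite_kolab_object(imap: imaplib.IMAP4, folder: str,
                         old_uid: bytes, new_message: bytes) -> None:
    """Append the updated object first, then delete the stale copy.
    If the client dies between the two steps, both copies remain in
    the folder and the duplicate must be resolved later."""
    imap.select(folder)
    imap.append(folder, None, None, new_message)          # 1: write new email
    imap.uid('STORE', old_uid, '+FLAGS', r'(\Deleted)')   # 2: flag old one
    imap.expunge()                                        # 3: purge flagged
```

Other clients reading the folder thus see either the old object or the new
one (or, in the failure case, both), but never a half-written object.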


Managing Director - Owner: www.intevation.net       (Free Software Company)
Deputy Coordinator Germany: fsfe.org. Board member: www.kolabsys.com.
Intevation GmbH, Osnabrück, DE; Amtsgericht Osnabrück, HRB 18998
Geschäftsführer Frank Koormann, Bernhard Reiter, Dr. Jan-Oliver Wagner