Re: [Python-Dev] PEP 447 (type.__getdescriptor__)

2015-07-26 Thread Ronald Oussoren

> On 25 Jul 2015, at 17:39, Mark Shannon  wrote:
> 
> Hi,
> 
> On 22/07/15 09:25, Ronald Oussoren wrote:
>> Hi,
>> 
>> Another summer with another EuroPython, which means it's time again to 
>> try to revive PEP 447…
>> 
> 
> IMO, there are two main issues with the PEP and implementation.
> 
> 1. The implementation as outlined in the PEP is infinitely recursive, since 
> the
> lookup of "__getdescriptor__" on type must necessarily call
> type.__getdescriptor__.
> The implementation (in C) special cases classes that inherit 
> "__getdescriptor__"
> from type. This special casing should be mentioned in the PEP.

Sure.  An alternative is to slightly change the PEP: use __getdescriptor__
when present and directly peek into __dict__ when it is not, and then remove
the default __getdescriptor__.

The reason I didn’t do this in the PEP is that I prefer a programming model 
where
I can explicitly call the default behaviour. 
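
In code, roughly (a sketch assuming the hook signature proposed in the PEP,
not the actual patch):

class DynamicMeta(type):
    def __getdescriptor__(cls, name):
        # Synthesize dyn_* attributes on the fly; everything else falls
        # back to the default lookup, called explicitly via super().
        if name.startswith("dyn_"):
            return property(lambda self: name[4:])
        return super().__getdescriptor__(name)

Being able to write that super() call is exactly the explicit default
behaviour I mean.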

> 
> 2. The actual implementation in C does not account for the case where the 
> class
> of a metaclass implements __getdescriptor__ and that method returns a value 
> when
> called with "__getdescriptor__" as the argument.

Isn’t that the same problem as with all slots, even when using
__getattribute__? That is, a metaclass that implements __getattribute__ to
return implementations for (say) __getitem__ won’t work, because the
interpreter won’t call __getattribute__ to get that implementation unless it
already knows that the attribute is present.  Class creation and __setattr__
on type will not only fill __dict__, but also set slots in the type
structure as appropriate.  The interpreter then uses those slots to
determine whether a special method is present.

In code:

class Meta1 (type):
    def __getitem__(self, key):
        return "<{} {}>".format(self.__name__, key)

class Class1 (metaclass=Meta1):
    pass



class Meta2 (type):
    def __getattribute__(self, name):
        if name == "__getitem__":
            return lambda key: "<{} {}>".format(self.__name__, key)

        return super().__getattribute__(name)

class Class2 (metaclass=Meta2):
    pass

print(Class1.__getitem__("hello"))
print(Class1["hello"])

print(Class2.__getitem__("hello"))
print(Class2["hello"])

The last line causes an exception:

Traceback (most recent call last):
  File "demo-getattr.py", line 24, in 
print(Class2["hello"])
TypeError: 'Meta2' object is not subscriptable

I agree that this should be mentioned in the PEP as it can be confusing.

> 
> 
> 
> Why was "__getattribute_super__" rejected as an alternative? No reason is 
> given.
> 
> "__getattribute_super__" has none of the problems listed above.

Not really. I initially used __getattribute_super__ as the name, but IIRC
with the same semantics.

> Making super(t, obj) delegate to t.__super__(obj) seems consistent with other
> builtin method/classes and doesn't add corner cases to the already complex
> implementation of PyType_Lookup().

A disadvantage of delegation is that t.__super__ would then have to
reproduce the logic dealing with the MRO, while my proposal lets the
metaclass deal only with lookup in a specific class object.
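
A rough pure-Python model of the difference (assuming the PEP's semantics;
the real PyType_Lookup is in C):

def pytype_lookup(cls, name):
    # The MRO walk stays in the interpreter; each metaclass hook is only
    # asked about a single class's namespace.
    for klass in cls.__mro__:
        try:
            return type(klass).__getdescriptor__(klass, name)
        except AttributeError:
            pass
    raise AttributeError(name)

A __super__-style hook would have to contain the equivalent of that loop
itself.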

Implementation complexity is an issue, but it seems to be acceptable so far. 
The main
problem w.r.t. additional complexity is that PyType_Lookup can now fail
with an exception other than an implied AttributeError and that causes
changes elsewhere in the implementation.

BTW. The patch for that part is slightly uglier than it needs to be: I
currently test for PyErr_Occurred() instead of using return codes in a
number of places, to minimise the number of lines changed and make code
review easier.  That needs to be changed before the code could actually be
committed.

Ronald

P.S. Are you at the EP sprints? I’ll be there until early in the afternoon.

> 
> Cheers,
> Mark



Re: [Python-Dev] Status on PEP-431 Timezones

2015-07-26 Thread Lennart Regebro
On Sun, Jul 26, 2015 at 8:05 AM, Tim Peters  wrote:
> The Python docs also are quite clear that all arithmetic within
> a single timezone is "naive".  That was intentional.  The _intended_
> way to do "aware" arithmetic was always to convert to UTC, do the
> arithmetic, then convert back.

We can't explicitly implement incorrect timezone aware arithmetic and
then expect people to not use it. We can make the arithmetic correct,
and we can raise an error when doing tz-aware arithmetic in a
non-fixed timezone. But having an implementation we know is incorrect
and telling people "don't do that" doesn't seem like a good solution
here.

Why do we even have timezone aware datetimes if we don't intend them
for usage? There could just be naive datetimes, and timezones, and let
strftime take a timezone that is used when formatting. And we could
make date-time creation into a function that parses data including a
timezone, and returns the UTC time of that data.

But then again, if we do that, we could just as well have that
timezone as an attribute on the datetime object, and let strftime use
it so it doesn't have to be passed in. And we could let the __init__ of
the datetime take a timezone and do that initial conversion to UTC.

> Python's datetime never intended to support that directly.

I think it should. It's expected that it supports it, and there is no
real reason not to support it. The timezone handling becomes
complicated if you base yourself on localtime, and simple if you base
yourself on UTC.

As you agree, we recommend that people use UTC at all times, and only
use timezones for input and output. Well, what I'm now proposing is to
take that recommendation to heart, and change datetime's
implementation so it does exactly that.

I saw the previous mention of "pure" vs "practical", and that is often
a concern. Here it clearly is not. This is a choice between impure,
complicated and impractical, and pure, simple and practical.

> Is it the case that pytz also "fails" in the cases your attempts "fail"?

No, that is not the case. And if you wonder why I just don't do it
like pytz does it, it's because that leads to infinite recursion, much
as discussions on this mailing list do. ;-) And this is because we
need to normalize the datetime after arithmetic, but normalizing is
itself arithmetic.

> "Batteries included" has some attractions all on its own.  On top of
> that, adding is_dst-like flags to appropriate methods may have major
> attractions.

> Ah, but it already happens that way

No, in fact it does not. Pytz makes that happen only through a
separate explicit normalize() call (and some deep cleverness to keep
track of which timezone offset it is located in). dateutil.tz can't
guarantee these things to be true, because it doesn't keep track of
ambiguous times. So no, it does not already happen that way.

>>> from dateutil.zoneinfo import gettz
>>> from datetime import *
>>> est = gettz('US/Eastern')
>>> dt = datetime(2015, 11, 1, 0, 30, tzinfo=est)
>>> dt2 = dt + timedelta(hours=1)

>>> utc = gettz('Etc/UTC')
>>> dtutc = dt.astimezone(utc)
>>> dt2utc = dt2.astimezone(utc)
>>> (dt2utc-dtutc).total_seconds()
7200.0

You add one hour, and you get a datetime that happens two hours later.
So no, it does not already happen that way.
In pytz the datetime will be adjusted after you do the normalize call.

> I apologize if I've come off as unduly critical - I truly have been
> _only_ trying to find out what "the problem" is.  That helps!  Thank
> you.  Note that I've had nothing to do with datetime (except to use
> it) for about a decade.  I have no idea what you, or anyone else, has
> said about it for years & years until this very thread caught my
> attention this week.  Heck, for all I know, Guido _demanded_ that
> datetime arithmetic be changed - although I doubt it ;-)

It's not a question of changing datetime arithmetic per se. The PEP
does indeed mean it has to be changed, but only to support ambiguous
and non-existent times.

It's helpful to me to understand, which I hadn't done before, that
this was never intended to work. That helps me argue for changing
datetime's internal implementation, once I get time to do that. (I'm
currently moving, renovating a new house, trying to fix up a garden that
has been neglected for years, and, insanely, writing my own code editor,
all at the same time, so it won't be anytime soon.)

> There's more than one decision affecting this.  In cases where a single
> local time corresponds to more than one UTC time (typically at the end
> of DST, when a local hour repeats), datetime never did give any clear
> way to do "the intended" conversion from that local time _to_ UTC.
> But resolving such ambiguities has nothing to do with how arithmetic
> works:  it's utterly unsolvable by any means short of supplying new
> info ("which UTC value is intended?" AKA is_dst).

The "changing arithmetic" discussion is a  red herring.

Now my wife insists I help her pack, so this is the end of this
discussion for me. If I continue I…

Re: [Python-Dev] PEP 447 (type.__getdescriptor__)

2015-07-26 Thread Ronald Oussoren

> On 26 Jul 2015, at 09:14, Ronald Oussoren  wrote:
> 
> 
>> On 25 Jul 2015, at 17:39, Mark Shannon wrote:
>> 
>> Hi,
>> 
>> On 22/07/15 09:25, Ronald Oussoren wrote:
>>> Hi,
>>> 
>>> Another summer with another EuroPython, which means it's time again to 
>>> try to revive PEP 447…
>>> 
>> 
>> IMO, there are two main issues with the PEP and implementation.
>> 
>> 1. The implementation as outlined in the PEP is infinitely recursive, since 
>> the
>> lookup of "__getdescriptor__" on type must necessarily call
>> type.__getdescriptor__.
>> The implementation (in C) special cases classes that inherit 
>> "__getdescriptor__"
>> from type. This special casing should be mentioned in the PEP.
> 
> Sure.  An alternative is to slightly change the PEP: use 
> __getdescriptor__ when
> present and directly peek into __dict__ when it is not, and then remove the 
> default
> __getdescriptor__. 
> 
> The reason I didn’t do this in the PEP is that I prefer a programming model 
> where
> I can explicitly call the default behaviour. 

I’m not sure there is a problem after all (but am willing to use the 
alternative I describe above),
although that might be because I’m too much focussed on CPython semantics.

The __getdescriptor__ method is a slot in the type object, and because of
that the normal attribute lookup mechanism is side-stepped for methods
implemented in C. A __getdescriptor__ that is implemented in Python is
looked up the normal way by the C function that gets added to the type
struct for such methods, but that’s not a problem for type itself.

That’s not new for __getdescriptor__ but happens for most other special
methods as well, as I noted in my previous mail, and also happens for the
__dict__ lookup that’s currently used (t.__dict__ is an attribute and
should be looked up using __getattribute__, …)

Ronald


Re: [Python-Dev] Burning down the backlog.

2015-07-26 Thread Paul Moore
On 25 July 2015 at 20:28, Robert Collins  wrote:
> Those charts doesn't show patches in 'commit-review' -
> http://bugs.python.org/issue?%40columns=title&%40columns=id&stage=5&%40columns=activity&%40sort=activity&status=1&%40columns=status&%40pagesize=50&%40startwith=0&%40sortdir=on&%40action=search
>
> There are only 45 of those patches.
>
> AIUI - and I'm very new to core here - anyone in triagers can get
> patches up to commit-review status.
>
> I think we should set a goal to keep inventory low here - e.g. review
> and either bounce back to patch review, or commit, in less than a
> month. Now - a month isn't super low, but we have lots of stuff
> greater than a month.

I'm not actually clear what "Commit Review" status means. I did do a
quick check of the dev guide, and couldn't come up with anything, but
looking at the first issue on that list
(http://bugs.python.org/issue24585) the change has been committed but
it can't practically be tested until it's released in a beta. The
second on the list also seems to have been committed.

While post-commit reviews are useful, I wouldn't classify not getting
them as a pressing problem (after all, the change is in, so in the
worst case we'll get bug reports if there *were* any issues). Getting
patches to a state where they can be committed, and more importantly
actually committing them, is the bigger problem.

Looking at "Issues with patches" for example, I find
http://bugs.python.org/issue21279. That is a simple doc patch, and
there's a pretty lengthy discussion on getting the exact wording right
(plus six revisions of the patch). That seems excessive, but
nevertheless...

My problem is that in order to commit that patch (which seems to be
the next step - I see no reason not to) I'd need to go through working
out all of the commit/merge process, and make sure I got it all right.
That's a lot of work (and some level of risk) - particularly compared
to working on pip, where I hit the "merge" button, and I'm done. So
that patch languishes until someone other than me, who's more familiar
with the commit process, merges it.

Of course, I could learn the patch process, but my time for working on
tracker items is limited, and my brain is getting old, so the
likelihood is that I'll forget some of the details before next time,
so that learning time isn't a one-off cost.

This is basically what the improved dev workflow peps are about, so
people are working on a solution, but to me that's the big problem -
we need a big red "Commit" button that provides a zero-effort means
for a core dev to point at a patch on the tracker that's already been
fully reviewed, and just say "do it". Personally, if I were actually
expecting to do enough commits to justify the effort, I'd write a
script to do that (and I'd probably isolate my build environment in a
VM, somehow, as I rebuild my main PC often enough that even having a
build env present on my PC isn't a given).

Paul


Re: [Python-Dev] PEP 447 (type.__getdescriptor__)

2015-07-26 Thread Mark Shannon
> On 26 July 2015 at 10:41 Ronald Oussoren  wrote:
> 
> 
> 
> > On 26 Jul 2015, at 09:14, Ronald Oussoren  wrote:
> > 
> > 
> >> On 25 Jul 2015, at 17:39, Mark Shannon wrote:
> >> 
> >> Hi,
> >> 
> >> On 22/07/15 09:25, Ronald Oussoren wrote:
> >>> Hi,
> >>> 
> >>> Another summer with another EuroPython, which means it's time again to 
> >>> try to revive PEP 447…
> >>> 
> >> 
> >> IMO, there are two main issues with the PEP and implementation.
> >> 
> >> 1. The implementation as outlined in the PEP is infinitely recursive, since
> >> the
> >> lookup of "__getdescriptor__" on type must necessarily call
> >> type.__getdescriptor__.
> >> The implementation (in C) special cases classes that inherit
> >> "__getdescriptor__"
> >> from type. This special casing should be mentioned in the PEP.
> > 
> > Sure.  An alternative is to slightly change the PEP: use
> > __getdescriptor__ when
> > present and directly peek into __dict__ when it is not, and then remove the
> > default
> > __getdescriptor__. 
> > 
> > The reason I didn’t do this in the PEP is that I prefer a programming model
> > where
> > I can explicitly call the default behaviour. 
> 
> I’m not sure there is a problem after all (but am willing to use the
> alternative I describe above),
> although that might be because I’m too much focussed on CPython semantics.
> 
> The __getdescriptor__ method is a slot in the type object and because of that
> the
>  normal attribute lookup mechanism is side-stepped for methods implemented in
> C. A
> __getdescriptor__ that is implemented on Python is looked up the normal way by
> the 
> C function that gets added to the type struct for such methods, but that’s not
> a problem for
> type itself.
> 
> That’s not new for __getdescriptor__ but happens for most other special
> methods as well,
> as I noted in my previous mail, and also happens for the __dict__ lookup
> that’s currently
> used (t.__dict__ is an attribute and should be lookup up using
> __getattribute__, …)


"__getdescriptor__" is fundamentally different from "__getattribute__" in that
is defined in terms of itself.

object.__getattribute__ is defined in terms of type.__getattribute__, but
type.__getattribute__ just does 
dictionary lookups. However defining type.__getattribute__ in terms of
__descriptor__ causes a circularity as
__descriptor__ has to be looked up on a type.

So not only must the cycle be broken by special-casing "type";
"__getdescriptor__" can also be defined not just by a subclass, but by a
metaclass that uses "__getdescriptor__" to define "__getdescriptor__" on
the class (and so on for meta-metaclasses, etc.).
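
The regress is easy to see in a pure-Python model of the lookup (this
deliberately never terminates; breaking it requires the special case for
"type" that does plain __dict__ access):

def find_descriptor(cls, name):
    for klass in cls.__mro__:
        meta = type(klass)
        # Finding meta's __getdescriptor__ is itself an attribute lookup,
        # i.e. another call to find_descriptor() on the metaclass's type,
        # and so on up the meta-chain.
        getdesc = find_descriptor(type(meta), '__getdescriptor__')
        try:
            return getdesc(klass, name)
        except AttributeError:
            pass
    raise AttributeError(name)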

Cheers,
Mark


[Python-Dev] [RELEASED] Python 3.5.0b4 is now available

2015-07-26 Thread Larry Hastings
On behalf of the Python development community and the Python 3.5 release
team, I'm delighted to announce the availability of Python 3.5.0b4.  Python
3.5.0b4 is scheduled to be the last beta release; the next release will be
Python 3.5.0rc1, or Release Candidate 1.

Python 3.5 has now entered "feature freeze".  By default new features may
no longer be added to Python 3.5.

This is a preview release, and its use is not recommended for production
settings.

An important reminder for Windows users about Python 3.5.0b4: if installing
Python 3.5.0b4 as a non-privileged user, you may need to escalate to
administrator privileges to install an update to your C runtime libraries.


You can find Python 3.5.0b4 here:

https://www.python.org/downloads/release/python-350b4/

Happy hacking,


*/arry*


Re: [Python-Dev] [RELEASED] Python 3.5.0b4 is now available

2015-07-26 Thread Stephane Wirtel
\o/

> On 26 July 2015, at 4:37 PM, Larry Hastings wrote:
> 
> 
> On behalf of the Python development community and the Python 3.5 release 
> team, I'm delighted to announce the availability of Python 3.5.0b4.  Python 
> 3.5.0b4 is scheduled to be the last beta release; the next release will be 
> Python 3.5.0rc1, or Release Candidate 1.
> 
> Python 3.5 has now entered "feature freeze".  By default new features may no 
> longer be added to Python 3.5.
> 
> This is a preview release, and its use is not recommended for production 
> settings.
> 
> An important reminder for Windows users about Python 3.5.0b4: if installing 
> Python 3.5.0b4 as a non-privileged user, you may need to escalate to 
> administrator privileges to install an update to your C runtime libraries.
> 
> 
> You can find Python 3.5.0b4 here:
> https://www.python.org/downloads/release/python-350b4/
> Happy hacking,
> 
> 
> /arry


Re: [Python-Dev] PyCapsule_Import semantics, relative imports, module names etc.

2015-07-26 Thread Larry Hastings
PyCapsule_Import() is a simple helper function, a slightly-updated analogue
to PyCObject_Import().  It's not particularly sophisticated, and I'm not
surprised it's bewildered by complicated scenarios like relative imports in
subpackages.  For now all I can recommend is that you not try and torture
PyCapsule_Import().  And, as always... patches welcome.


/arry

On Sat, Jul 25, 2015 at 1:41 AM, John Dennis  wrote:

> While porting several existing CPython extension modules that form a
> package to be 2.7 and 3.x compatible the existing PyObject_* API was
> replaced with PyCapsule_*. This introduced some issues the existing CPython
> docs are silent on. I'd like clarification on a few issues and wish to
> raise some questions.
>
> 1. Should an extension module name as provided in PyModule_Create (Py3) or
> Py_InitModule3 (Py2) be fully package qualified or just the module name? I
> believe it's just the module name (see item 5 below) Yes/No?
>
> 2. PyCapsule_Import does not adhere to the general import semantics. The
> module name must be fully qualified, relative imports are not supported.
>
> 3. PyCapsule_Import requires the package (e.g. __init__.py) to import
> *all* of its submodules which utilize the PyCapsule mechanism, preventing
> lazy on demand loading. This is because PyCapsule_Import only imports the
> top level module (e.g. the package). From there it iterates over each of
> the module names in the module path. However the parent module (e.g.
> globals) will not contain an attribute for the submodule unless it's
> already been loaded. If the submodule has not been loaded into the parent
> PyCapsule_Import throws an error instead of trying to load the submodule.
> The only apparent solution is for the package to load every possible
> submodule whether required or not just to avoid a loading error. The
> inability to load modules on demand seems like a design flaw and change in
> semantics from the prior use of PyImport_ImportModule in combination with
> PyObject. [One of the nice features with normal import loading is setting
> the submodule name in the parent, the fact this step is omitted is what
> causes PyCapsule_Import to fail unless all submodules are unconditionally
> loaded). Shouldn't PyCapsule_Import utilize PyImport_ImportModule?
>
> 4. Relative imports seem much more useful for cooperating submodules in a
> package as opposed to fully qualified package names. Being able to import a
> C_API from the current package (the package I'm a member of) seems much
> more elegant and robust for cooperating modules but this semantic isn't
> supported (in fact the leading dot syntax completely confuses
> PyCapsule_Import, doc should clarify this).
>
> 5. The requirement that a module specifies its name as unqualified when
> it is initializing but then also has to use a fully qualified package name
> for PyCapsule_New, both of which occur inside the same initialization
> function, seems like an odd inconsistency (documentation clarification would
> help here). Also, depending on your point of view package names could be
> considered a deployment/packaging decision; a module obtains its fully
> qualified name by virtue of its position in the filesystem, something the
> module will not be aware of at compile time, another reason why relative
> imports make sense. Note the identical comment regarding _Py_PackageContext
> in modsupport.c (Py2) and moduleobject.c (Py3) regarding how a module
> obtains its fully qualified package name (see item 1).
>
> Thanks!
>
> --
> John


Re: [Python-Dev] Status on PEP-431 Timezones

2015-07-26 Thread Nick Coghlan
On 26 July 2015 at 18:12, Lennart Regebro  wrote:
> On Sun, Jul 26, 2015 at 8:05 AM, Tim Peters  wrote:
>> The Python docs also are quite clear that all arithmetic within
>> a single timezone is "naive".  That was intentional.  The _intended_
>> way to do "aware" arithmetic was always to convert to UTC, do the
>> arithmetic, then convert back.
>
> We can't explicitly implement incorrect timezone aware arithmetic and
> then expect people to not use it. We can make the arithmetic correct,
> and we can raise an error when doing tz-aware arithmetic in a
> non-fixed timezone. But having an implementation we know is incorrect
> and telling people "don't do that" doesn't seem like a good solution
> here.
>
> Why do we even have timezone aware datetimes if we don't intend them
> for usage? There could just be naive datetimes, and timezones, and let
> strftime take a timezone that is used when formatting. And we could
> make date-time creation into a function that parses data including a
> timezone, and returns the UTC time of that data.
>
> But then again, if we do that, we could just as well have that
> timezone as an attribute on the datetime object, and let strftime use
> it so it doesn't have to be passed in. And we could let the __init__ of
> the datetime take a timezone and do that initial conversion to UTC.

I think we need to make sure to separate out the question of the
semantic model presented to users from the internal implementation
model here.

As a user, if the apparent semantics of time zone aware date time
arithmetic are accurately represented by "convert time to UTC ->
perform arithmetic -> convert back to stated timezone", then I *don't
care* how that is implemented internally.

This is the aspect Tim is pointing out is a change from the original
design of the time zone aware arithmetic in the datetime module. I
personally think it's a change worth making that reflects additional
decades of experience with time zone aware datetime arithmetic, but
the PEP should be clear that it *is* a change.

As Alexander points out, the one bit of information which needs to be
provided by me as a *user* of such an API (rather than its
implementor), is how to handle ambiguities in the initial conversion
to UTC (whether to interpret any ambiguous time reference I supply as
a pre-rollback or post-rollback time). Similarly, the API needs to
tell *me* whether a returned time in a period of ambiguity is
pre-rollback or post-rollback. At the moment the "pre-rollback" flag
is specifically called "is_dst", since rolling clocks back at the end
of DST period is the most common instance of ambiguous times. That
then causes confusion since "DST" in common usage refers to the entire
period from the original roll forward to the eventual roll back, but
the extra bit is only relevant to time zone arithmetic during the
final two overlapping hours when the clocks are rolled back each year
(and is in fact relevant any time a clock rollback occurs, even if the
reason for the rollback has nothing to do with DST).
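
For concreteness, here is that ambiguity with pytz's spelling of the flag
(pytz usage is an illustrative assumption; the thread discusses the flag in
the abstract):

import pytz
from datetime import datetime

eastern = pytz.timezone('US/Eastern')
ambiguous = datetime(2015, 11, 1, 1, 30)            # occurs twice that day
first = eastern.localize(ambiguous, is_dst=True)    # pre-rollback: EDT, -0400
second = eastern.localize(ambiguous, is_dst=False)  # post-rollback: EST, -0500
print(first.utcoffset(), second.utcoffset())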

The above paragraphs represent the full extent of my *personal*
interest in the matter of the datetime module changing the way it
handles timezones - I think there's a right answer from a usability
perspective, and I think it involves treating UTC as the universal
time zone used for all datetime arithmetic, and finding a less
confusing name for the "isdst" flag (such as "prerollback", or
inverting the sense of it to "postrollback", such that 0/False
referred to the first time encountered, and 1/True referred to the
second time encountered).

There's a *separate* discussion, which relates to how best to
*implement* those semantics, given the datetime module implementation
we already have. For the original decimal module, we went with the
approach of storing the data in a display friendly format, and then
converting it explicitly as needed to and from a working
representation for arithmetic purposes. While it seems plausible to me
that such an approach may also work well for datetime arithmetic that
presents the appearance of all datetime arithmetic taking place in
terms of UTC, that's a guess based on general principles, not
something based on a detailed knowledge of datetime in particular
(and, in particular, with no knowledge of the performance
consequences, or if we have any good datetime focused benchmarks akin
to the telco benchmark that guided the original decimal module
implementation).

Regards,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia


Re: [Python-Dev] Burning down the backlog.

2015-07-26 Thread Berker Peksağ
On Sun, Jul 26, 2015 at 2:58 PM, Paul Moore  wrote:
> On 25 July 2015 at 20:28, Robert Collins  wrote:
>> Those charts doesn't show patches in 'commit-review' -
>> http://bugs.python.org/issue?%40columns=title&%40columns=id&stage=5&%40columns=activity&%40sort=activity&status=1&%40columns=status&%40pagesize=50&%40startwith=0&%40sortdir=on&%40action=search
>>
>> There are only 45 of those patches.
>>
>> AIUI - and I'm very new to core here - anyone in triagers can get
>> patches up to commit-review status.
>>
>> I think we should set a goal to keep inventory low here - e.g. review
>> and either bounce back to patch review, or commit, in less than a
>> month. Now - a month isn't super low, but we have lots of stuff
>> greater than a month.
>
> I'm not actually clear what "Commit Review" status means. I did do a
> quick check of the dev guide, and couldn't come up with anything,

https://docs.python.org/devguide/triaging.html#stage


Re: [Python-Dev] PyCapsule_Import semantics, relative imports, module names etc.

2015-07-26 Thread Nick Coghlan
On 27 July 2015 at 01:21, Larry Hastings  wrote:
>
> PyCapsule_Import() is a simple helper function, a slightly-updated analogue
> to PyCObject_Import().  It's not particularly sophisticated, and I'm not
> surprised it's bewildered by complicated scenarios like relative imports in
> subpackages.  For now all I can recommend is that you not try and torture
> PyCapsule_Import().  And, as always... patches welcome.

In this case, there are actually a lot of limitations related to the
fact that extension modules generally have far more limited
information about where they live in the package hierarchy than normal
Python modules do. PEP 489 addressed quite a few of those with
multi-phase initialisation.

> On Sat, Jul 25, 2015 at 1:41 AM, John Dennis  wrote:
>>
>> While porting several existing CPython extension modules that form a
>> package to be 2.7 and 3.x compatible the existing PyObject_* API was
>> replaced with PyCapsule_*. This introduced some issues the existing CPython
>> docs are silent on. I'd like clarification on a few issues and wish to raise
>> some questions.
>>
>> 1. Should an extension module name as provided in PyModule_Create (Py3) or
>> Py_InitModule3 (Py2) be fully package qualified or just the module name? I
>> believe it's just the module name (see item 5 below) Yes/No?

Fully qualified is generally better (if you know the ultimate
location), but it's mainly for introspection support, so most things
will work fine even if you set the name to something like "".

>> 2. PyCapsule_Import does not adhere to the general import semantics. The
>> module name must be fully qualified, relative imports are not supported.

Correct, as it has no knowledge of the current module name to anchor a
relative import.

>> 3. PyCapsule_Import requires the package (e.g. __init__.py) to import
>> *all* of its submodules which utilize the PyCapsule mechanism, preventing
>> lazy on demand loading. This is because PyCapsule_Import only imports the
>> top level module (e.g. the package). From there it iterates over each of the
>> module names in the module path. However the parent module (e.g. globals)
>> will not contain an attribute for the submodule unless it's already been
>> loaded. If the submodule has not been loaded into the parent
>> PyCapsule_Import throws an error instead of trying to load the submodule.
>> The only apparent solution is for the package to load every possible
>> submodule whether required or not just to avoid a loading error. The
>> inability to load modules on demand seems like a design flaw and change in
>> semantics from the prior use of PyImport_ImportModule in combination with
>> PyObject. [One of the nice features with normal import loading is setting
>> the submodule name in the parent, the fact this step is omitted is what
>> causes PyCapsule_Import to fail unless all submodules are unconditionally
>> loaded). Shouldn't PyCapsule_Import utilize PyImport_ImportModule?

This sounds like it may be a bug in PyCapsule_Import, but I don't know
the capsule API very well myself (I've never had a reason to use it -
all the extension modules I've worked with personally have been
self-contained).
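
A pure-Python model of what PyCapsule_Import("pkg.mod.attr", 0) roughly
does, based on the behaviour John describes (a sketch, not the actual C
source):

import importlib

def capsule_import(path):
    parts = path.split('.')
    obj = importlib.import_module(parts[0])  # only the top module is imported
    for name in parts[1:]:
        # plain getattr: raises AttributeError if a submodule
        # hasn't been loaded into its parent yet
        obj = getattr(obj, name)
    return obj

Replacing the getattr() steps with importlib.import_module() calls (falling
back to getattr for the final, non-module attribute) is essentially the fix
John is suggesting.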

>> 4. Relative imports seem much more useful for cooperating submodules in a
>> package as opposed to fully qualified package names. Being able to import a
>> C_API from the current package (the package I'm a member of) seems much more
>> elegant and robust for cooperating modules but this semantic isn't supported
>> (in fact the leading dot syntax completely confuses PyCapsule_Import, doc
>> should clarify this).

Until PEP 489 (multi-phase initialisation) was implemented for Python
3.5, extension modules didn't know their actual runtime place in the
module hierarchy, so there was no easy way to provide a module name to
the API to anchor relative lookups.

Given PEP 489, it may be feasible to offer a PyCapsule_ImportRelative
for 3.6+, but it would require someone interested in working through
the details of such an API.

>> 5. The requirement that a module specifies its name as unqualified when
>> it is initializing but then also has to use a fully qualified package name
>> for PyCapsule_New, both of which occur inside the same initialization
>> function, seems like an odd inconsistency (documentation clarification would
>> help here). Also, depending on your point of view package names could be
>> considered a deployment/packaging decision; a module obtains its fully
>> qualified name by virtue of its position in the filesystem, something the
>> module will not be aware of at compile time, another reason why relative
>> imports make sense. Note the identical comment regarding _Py_PackageContext
>> in modsupport.c (Py2) and moduleobject.c (Py3) regarding how a module
>> obtains its fully qualified package name (see item 1).

Yes, these weird limitations were the genesis of Petr Viktorin's
efforts in implementing a new approach to import extension modules for
Python 3.5: https://www.python.or

Re: [Python-Dev] Status on PEP-431 Timezones

2015-07-26 Thread Alexander Belopolsky
On Sun, Jul 26, 2015 at 11:33 AM, Nick Coghlan  wrote:

> As a user, if the apparent semantics of time zone aware date time
> arithmetic are accurately represented by "convert time to UTC ->
> perform arithmetic -> convert back to stated timezone", then I *don't
> care* how that is implemented internally.
>
> This is the aspect Tim is pointing out is a change from the original
> design of the time zone aware arithmetic in the datetime module. I
> personally think it's a change worth making that reflects additional
> decades of experience with time zone aware datetime arithmetic, but
> the PEP should be clear that it *is* a change.
>

These semantics are already available in python 3:

>>> t = datetime(2015, 3, 7, 17, tzinfo=timezone.utc).astimezone()
>>> t.strftime('%D %T %z %Z')
'03/07/15 12:00:00 -0500 EST'
>>> (t+timedelta(1)).strftime('%D %T %z %Z')
'03/08/15 12:00:00 -0500 EST'   # a valid time, but not what you see on the wall clock
>>> (t+timedelta(1)).astimezone().strftime('%D %T %z %Z')
'03/08/15 13:00:00 -0400 EDT'   # this is what the wall clock would show

Once CPython starts vendoring a complete timezone database, it would be
trivial to extend .astimezone() so that things like
t.astimezone('US/Eastern')
work as expected.

What is somewhat more challenging is implementing a tzinfo subclass that
can be used
to construct datetime instances with the following behavior:

>>> t = datetime(2015, 3, 7, 12, tzinfo=timezone('US/Eastern'))
>>> t.strftime('%D %T %z %Z')
'03/07/15 12:00:00 -0500 EST'
>>> (t + timedelta(1)).strftime('%D %T %z %Z')
'03/08/15 12:00:00 -0400 EDT'

The solution to this problem has been provided as a documentation example
[1] for many years,
but also for many years it contained a subtle bug [2] which illustrates
that one has to be careful
implementing those things.

Although the examples [1] in the documentation only cover simple US
timezones, they do cover a case of changing DST rules, and changing STD
rules can be implemented similarly.
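
A minimal sketch of such a subclass, loosely in the spirit of the
tzinfo-examples.py file (post-2007 US Eastern rules hard-coded; the edge
cases around the transitions, i.e. the subtle bug [2], are deliberately
ignored here):

from datetime import tzinfo, timedelta, datetime

class USEastern(tzinfo):
    def utcoffset(self, dt):
        return timedelta(hours=-5) + self.dst(dt)

    def dst(self, dt):
        # DST runs from the second Sunday in March to the first in November.
        start = self._nth_sunday(dt.year, 3, 2).replace(hour=2)
        end = self._nth_sunday(dt.year, 11, 1).replace(hour=2)
        if start <= dt.replace(tzinfo=None) < end:
            return timedelta(hours=1)
        return timedelta(0)

    def tzname(self, dt):
        return "EDT" if self.dst(dt) else "EST"

    @staticmethod
    def _nth_sunday(year, month, n):
        first = datetime(year, month, 1)
        offset = (6 - first.weekday()) % 7   # days until the first Sunday
        return first + timedelta(days=offset + 7 * (n - 1))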

Whether we want such tzinfo implementations in the stdlib is a valid
question, but it should be completely orthogonal to the question of
vendoring a TZ database.

If we agree that vendoring a TZ database is a good thing, we can make
.astimezone() understand how to construct a fixed offset timezone from a
location
and call it a day.

[1]:
https://hg.python.org/cpython/file/default/Doc/includes/tzinfo-examples.py
[2]: http://bugs.python.org/issue9063


Re: [Python-Dev] Burning down the backlog.

2015-07-26 Thread Alexander Belopolsky
On Sun, Jul 26, 2015 at 11:39 AM, Berker Peksağ 
wrote:

> > I'm not actually clear what "Commit Review" status means. I did do a
> > quick check of the dev guide, and couldn't come up with anything,
>
> https://docs.python.org/devguide/triaging.html#stage


What is probably missing from the devguide is an explanation that stages
do not necessarily happen in linear order.  For example, a committer may
reset the stage back to "needs a patch" if the patch does not pass a
"commit review".


Re: [Python-Dev] Burning down the backlog.

2015-07-26 Thread Paul Moore
On 26 July 2015 at 16:39, Berker Peksağ  wrote:
>> I'm not actually clear what "Commit Review" status means. I did do a
>> quick check of the dev guide, and couldn't come up with anything,
>
> https://docs.python.org/devguide/triaging.html#stage

Thanks, I missed that. The patches I checked seemed to have been
committed and were still at commit review, though. Doesn't the roundup
robot update the stage when there's a commit? (Presumably not, and
people forget to do so too).

Paul


Re: [Python-Dev] Status on PEP-431 Timezones

2015-07-26 Thread Paul Moore
On 26 July 2015 at 16:33, Nick Coghlan  wrote:
> As a user, if the apparent semantics of time zone aware date time
> arithmetic are accurately represented by "convert time to UTC ->
> perform arithmetic -> convert back to stated timezone", then I *don't
> care* how that is implemented internally.
>
> This is the aspect Tim is pointing out is a change from the original
> design of the time zone aware arithmetic in the datetime module. I
> personally think it's a change worth making that reflects additional
> decades of experience with time zone aware datetime arithmetic, but
> the PEP should be clear that it *is* a change.

I think the current naive semantics are useful and should not be
discarded lightly. At an absolute minimum, there should be a clear,
documented way to get the current semantics under any changed
implementation.

As an example, consider an alarm clock. I want it to go off at 7am
each morning. I'd feel completely justified in writing tomorrows_alarm
= todays_alarm + timedelta(days=1).

If the time changes to DST overnight, I still want the alarm to go off
at 7am. Even though +1 day is in this case actually + 25 (or is it
23?) hours. That's the current semantics.
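
In code (using zoneinfo, a later stdlib addition, purely to make the
example runnable today):

from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

todays_alarm = datetime(2015, 3, 28, 7, 0, tzinfo=ZoneInfo("Europe/London"))
tomorrows_alarm = todays_alarm + timedelta(days=1)
print(tomorrows_alarm)   # 2015-03-29 07:00:00+01:00 -- still 7am wall time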

To be honest, I would imagine, from experience with programmers
writing naive algorithms, that the current semantics is a lot less
prone to error when used by such people. People forget about timezones
until they are bitten by them, and if they are using the convert to
UTC->calculate->convert back model, their code ends up with
off-by-1-hour bugs. Certainly such mistakes can be fixed, and the
people who make them educated, but I like the fact that Python's
typical behaviour is to do what a non-expert would expect. By all
means have the more sophisticated approach available, but if it's the
default then naive users have to either (1) learn the subtleties of
timezones, or (2) learn how to code naive datetime behaviour in Python
before they can write their code. If the current behaviour remains the
default, then *when* the naive user learns about the subtleties of
timezones, they can switch to the TZ-aware datetime - but that's a
single learning step, and it can be taken when the user is ready.

Paul

PS I don't think the above is particularly original - IIRC, it's
basically Guido's argument for naive datetimes from when they were
introduced. I think his example was checking his watch while on a
transatlantic plane flight, but the principle is the same.


Re: [Python-Dev] Burning down the backlog.

2015-07-26 Thread Mark Lawrence

On 26/07/2015 22:59, Paul Moore wrote:
> On 26 July 2015 at 16:39, Berker Peksağ wrote:
>>> I'm not actually clear what "Commit Review" status means. I did do a
>>> quick check of the dev guide, and couldn't come up with anything,
>>
>> https://docs.python.org/devguide/triaging.html#stage
>
> Thanks, I missed that. The patches I checked seemed to have been
> committed and were still at commit review, though. Doesn't the roundup
> robot update the stage when there's a commit? (Presumably not, and
> people forget to do so too).
>
> Paul


I wouldn't know.  I certainly believe that the more time we spend 
assisting Cannon, Coghlan & Co on the core workflow, the quicker, in the 
medium to long term, we put the backlog of issues to bed.


--
My fellow Pythonistas, ask not what our language can do for you, ask
what you can do for our language.

Mark Lawrence



Re: [Python-Dev] Burning down the backlog.

2015-07-26 Thread R. David Murray
On Sun, 26 Jul 2015 22:59:51 +0100, Paul Moore  wrote:
> On 26 July 2015 at 16:39, Berker Peksağ  wrote:
> >> I'm not actually clear what "Commit Review" status means. I did do a
> >> quick check of the dev guide, and couldn't come up with anything,
> >
> > https://docs.python.org/devguide/triaging.html#stage
> 
> Thanks, I missed that. The patches I checked seemed to have been
> committed and were still at commit review, though. Doesn't the roundup
> robot update the stage when there's a commit? (Presumably not, and
> people forget to do so too).

Yes, it is manual.  Making it automatic would be nice.  Patches accepted
:) Writing a Roundup detector for this shouldn't be all that hard once
you figure out how detectors work.  See:


http://www.roundup-tracker.org/docs/customizing.html#detectors-adding-behaviour-to-your-tracker

The steep part of the curve there is testing your work, which is
something some effort has been made to simplify, but unfortunately I'm
not up on the details of that currently.
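
As a purely hypothetical sketch of such a detector (the reactor pattern is
from the Roundup docs linked above; the stage ids and the "New changeset"
heuristic are assumptions about our tracker, not verified values):

COMMIT_REVIEW = '5'   # assumed id of the "commit review" stage
RESOLVED = '6'        # assumed id of the "resolved" stage

def commit_closes_review(db, cl, nodeid, oldvalues):
    # If the changeset robot just added a "New changeset" message to an
    # issue sitting in commit review, advance the stage automatically.
    if db.issue.get(nodeid, 'stage') != COMMIT_REVIEW:
        return
    for msgid in db.issue.get(nodeid, 'messages'):
        if 'New changeset' in db.msg.get(msgid, 'content'):
            db.issue.set(nodeid, stage=RESOLVED)
            return

def init(db):
    db.issue.react('set', commit_closes_review)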

In the meantime, this is a service triagers could perform: look at the
commit review issues and make sure that really is the correct stage.

Now, as for the original question:

First, a little history so that the old timers and the newer committers
are on the same page.  When 'commit review' was originally introduced, it
was used for what was then a required "second committer" review during the
RC phase.  I have recently (in the last two years?), with agreement of the
workflow list and with no objection from committers, shifted this to the
model currently documented in the devguide.

However, I agree that what is currently in the devguide is not
sufficient.  Here is my actual intent for the workflow:

1) Issue is opened.  Triager/committer sets it to 'patch needed' if they
believe the bug should be fixed/feature implemented.  (A committer may
choose to override a triager decision and close the bug, explaining why
for the benefit of the triager and all onlookers.)

2) Once we have a patch, one or more triage or committer people work
with the submitter or whoever is working on the patch (who may have no
special role or be a triager or be a committer) in a patch
review-and-update cycle until a triager or a committer thinks it is
ready for commit.

3) If the patch was submitted and/or worked on by a committer, the patch
can be committed.

4) If the patch is not by a committer, the stage should be set to
'commit review' by a triager.  Committers with time available should, as
Robert suggests, look for patches in 'commit review' status *and review
them*.  The wording of "a quick once over" in the devguide isn't
*wrong*, but it does give the impression the patch is "ready to commit",
whereas the goal here is to review the work of the *triager*.  If the
patch is not actually commit ready for whatever reason, it gets bounced
back to patch review with an explanation as to why.

5) Eventually (hopefully the first time or quickly thereafter most of
the time!) the patch really is ready to commit and gets committed.

An here, to my mind, is the most important bit:

6) When the patches moved to commit ready by a given triager are
consistently committable after the step 4 review, it is time to offer
that triager commit privileges.

My goal here is to *transfer* the knowledge of what makes a good review
process and a good patch from the existing committers to new committers,
and therefore acquire new committers.

Now, the problem that Paul cites about not feeling comfortable with the
*commit* process is real (although I will say that at this point I can
go months without doing a commit and I still remember quite clearly how
to do one; it isn't that complicated).  Improving the tooling is one way
to attack that.  I think there can be two types of tooling:  the large
scale problem the PEPs are working toward, and smaller scale helper
scripts such as Paul mentions that one or more committers could develop
and publish (patchcheck on steroids).

Before that, though, it is clear that the devguide needs a "memory
jogger" cheat sheet on how to do a multi-branch commit, linked from
the quicklinks section.

So, I'm hoping Carol will take what I've written above and turn it into
updates for the devguide (assuming no one disagrees with what I've said :)

--David


Re: [Python-Dev] Status on PEP-431 Timezones

2015-07-26 Thread Tim Peters
[Paul Moore ]
> I think the current naive semantics are useful and should not be
> discarded lightly. At an absolute minimum, there should be a clear,
> documented way to get the current semantics under any changed
> implementation.

Realistically, default arithmetic behavior can't change in Python 3
(let alone Python 2).  Pushing for a different design is fine, but
that can't be sold on the grounds that current behavior is "a bug" -
it's working as designed, as intended, and as documented, and hasn't
materially changed in the dozen-or-so years since it was introduced.
It's not even that the proposed alternative arithmetic is "better",
either:  while it's certainly more suitable for some applications,
it's certainly worse for others.  Making an incompatible change would
be (& should be) a hard sell even if there were a much stronger case
for it than there is here.

But that's just arithmetic.  Some way to disambiguate local times, and
support for most zoneinfo time zones, are different issues.


> As an example, consider an alarm clock. I want it to go off at 7am
> each morning. I'd feel completely justified in writing tomorrows_alarm
> = todays_alarm + timedelta(days=1).
>
> If the time changes to DST overnight, I still want the alarm to go off
> at 7am. Even though +1 day is in this case actually + 25 (or is it
> 23?) hours. That's the current semantics.

There was a long list of use cases coming to the same conclusion.  The
current arithmetic allows uniform patterns in local time to be coded
in uniform, straightforward ways.  Indeed, in "the obvious" ways.  The
alternative behavior favors uniform patterns in UTC, but who cares?
;-)  Few local clocks show UTC.  Trying to code uniform local-time
behaviors using "aware arithmetic" (which is uniform in UTC. but may
be "lumpy" in local time) can be a nightmare.

The canonical counterexample is a nuclear reactor that needs to be
vented every 24 hours.  To which the canonical rejoinder is that the
programmer in charge of that system is criminally incompetent if
they're using _any_ notion of time other than UTC ;-)

> To be honest, I would imagine, from experience with programmers
> writing naive algorithms, that the current semantics is a lot less
> prone to error when used by such people. People forget about timezones
> until they are bitten by them, and if they are using the convert to
> UTC->calculate->convert back model, their code ends up with
> off-by-1-hour bugs. Certainly such mistakes can be fixed, and the
> people who make them educated, but I like the fact that Python's
> typical behaviour is to do what a non-expert would expect. By all
> means have the more sophisticated approach available, but if it's the
> default then naive users have to either (1) learn the subtleties of
> timezones, or (2) learn how to code naive datetime behaviour in Python
> before they can write their code. If the current behaviour remains the
> default, then *when* the naive user learns about the subtleties of
> timezones, they can switch to the TZ-aware datetime - but that's a
> single learning step, and it can be taken when the user is ready.

There is a design flaw here, IMO:  when they switch to a TZ-aware
datetime, they _still_ get "naive" arithmetic within that time zone.
It's at best peculiar that such a datetime is _called_ "aware" yet
still ignores the time zone rules when doing arithmetic.  I would have
preferred a sharper distinction, like "completely naive" (tzinfo
absent) versus "completely aware" (tzinfo present).  But, again, it's
working as designed, intended and documented.

One possibility to get "the other" behavior in a backward-compatible
way:  recognize a new magic attribute on a tzinfo instance, say,
__aware_arithmetic__.  If it's present, arithmetic on a datetime with
such a tzinfo member "acts as if" arithmetic were done by converting
to UTC first, doing the arithmetic, then converting back.  Otherwise
(magic new attribute not present) arithmetic remains naive.  Bonus:
then you could stare at datetime code and have no idea which kind of
arithmetic is being used ;-)
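
Emulated in pure Python, the idea looks something like this (the attribute
name and the subclass are illustrations of the suggestion, not a real API):

from datetime import datetime, timezone

class AwareDateTime(datetime):
    def __add__(self, delta):
        tz = self.tzinfo
        if tz is not None and getattr(tz, '__aware_arithmetic__', False):
            # "act as if" we convert to UTC, add, then convert back
            as_utc = self.astimezone(timezone.utc)
            return (as_utc + delta).astimezone(tz)
        return super().__add__(delta)   # naive arithmetic, as today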

> PS I don't think the above is particularly original - IIRC, it's
> basically Guido's argument for naive datetimes from when they were
> introduced. I think his example was checking his watch while on a
> transatlantic plane flight, but the principle is the same.

Yup, your account is fair (according to me ;-) ).  Here's Guido's
first message on the topic:

https://mail.python.org/pipermail/python-dev/2002-March/020648.html


Re: [Python-Dev] Burning down the backlog.

2015-07-26 Thread Nick Coghlan
On 27 July 2015 at 11:37, R. David Murray  wrote:
> My goal here is to *transfer* the knowledge of what makes a good review
> process and a good patch from the existing committers to new committers,
> and therefore acquire new committers.

+1000 :)

A few years back, I wrote this patch review guide for work:
https://beaker-project.org/dev/guide/writing-a-patch.html#reviewing-a-patch

Helping to create a similarly opinionated guide to assist reviewers
for CPython has been kicking around on my todo list ever since, but
it's a lot easier to create that kind of guide when you're the
appointed development lead on a relatively small project produced
almost entirely through paid development - I wrote it one afternoon,
uploaded it to gerrit.beaker-project.org, and the only folks I needed
to get to review it were the other full-time developers on the Beaker
team.

I don't think that would be the right way to create such a guide for a
volunteer driven project like CPython, but steering a more
collaborative community discussion would require substantially more
time than it took me to put the Beaker one together.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia


Re: [Python-Dev] Status on PEP-431 Timezones

2015-07-26 Thread Tim Peters
[Tim]
>> The Python docs also are quite clear that all arithmetic within
>> a single timezone is "naive".  That was intentional.  The _intended_
>> way to do "aware" arithmetic was always to convert to UTC, do the
>> arithmetic, then convert back.

[Lennart]
> We can't explicitly implement incorrect timezone aware arithmetic and
> then expect people to not use it.

Python didn't implement timezone-aware arithmetic at all within a
single time zone.  Read what I wrote just above.  It implements naive
arithmetic within a single time zone.

> We can make the arithmetic correct,

The naive arithmetic within a timezone is already correct, by its own
internal criteria.  It's also useful (see the original discussions, or
Paul Moore's recent brief account).  That it's not the arithmetic you
want doesn't make it "incorrect", it makes it different from what you
want.  That's fine - you're allowed to want anything ;-)  But it's a
dozen years too late to change that decision.  Maybe for Python 4.

> and we can raise an error when doing tz-aware arithmetic in a
> non-fixed timezone.

Sorry, I don't know what that means.  Under any plausible
interpretation, I don't see any need to raise an exception.

> But having an implementation we know is incorrect

You really have to get over insisting it's incorrect.  It's
functioning exactly the way it was intended to function.  It's
_different_ from what you favor.  Note that I'm not calling what you
favor "incorrect".  It's different.  Both kinds of arithmetic are
useful for different purposes, although I still agree with Guido's
original belief that the current arithmetic is most useful most often
for most programmers.

> and telling people "don't do that" doesn't seem like a good solution
> here.

We don't tell people "don't do that".  It's perfectly usable exactly
as-is for many applications.  Not all.  For those applications needing
the other kind of arithmetic, the convert-to/from-UTC dance was the
intended solution.
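
That dance, as a minimal sketch (stdlib names only; assumes dt is aware
and its tzinfo converts correctly):

from datetime import timezone

def aware_add(dt, delta):
    as_utc = dt.astimezone(timezone.utc)           # convert to UTC
    return (as_utc + delta).astimezone(dt.tzinfo)  # add, convert back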

> Why do we even have timezone aware datetimes if we don't intend them
> for usage?

They are intended for usage.  But a single way of using them is not
suitable for all possible applications.

>> ...
>> Python's datetime never intended to support that directly.

> I think it should.

Ya, I picked that up ;-)  I don't, but it's too late to break backward
compatibility regardless.

> It's expected that it supports it,

By some people, yes.  Not by all.

> and there is no real reason not to support it.

Backward compatibility is a gigantic reason to continue with the
status quo.  See Paul Moore's post for a start on why naive arithmetic
was picked to begin with.

> The timezone handling becomes complicated if you base yourself on
> localtime, and simple if you base yourself on UTC.

That's an implementation detail unrelated (in principle) to how
arithmetic works.  Although as a practical matter it cuts both ways:
naive local-time arithmetic is complicated if the internal time is
stored in UTC, but simple if stored in local time.

> As you agree, we recommend to people to use UTC at all times,

I recommend people don't use tzinfo at all if they can avoid it.
Beyond that, there are many attractions to using UTC, and to
explicitly use UTC.  Not all applications need to care, though.

> and only use timezones for input and output. Well, what I'm now
> proposing is to take that recommendation to heart, and change
> datetime's implementation so it does exactly that.

Suppose I'm correct in my belief that there's scant chance of getting
approval for changing the default datetime arithmetic in Python 3 (or
Python 2).  Would you still be keen to replace the internals with UTC
format?  Note that there are many consequences to that implementation
detail.  For example, it was an explicit requirement of the datetime
design that the month, day, hour, minute and second components be very
cheap to extract.  If you have to do conversion every time one is
accessed, it's much slower; if you cache the "local time" components
separately, the memory burden increases.  Etc.

> I saw the previous mention of "pure" vs "practical", and that is often
> a concern. Here it clearly is not. This is a choice between impure,
> complicated and impractical, and pure, simple and practical.

There is nothing in the datetime world simpler than naive arithmetic
;-)  "Practical" is relevant to a specific application's specific
needs, and neither kind of arithmetic is "practical" for all
applications.  Guido believed naive arithmetic is most practical
overall.  But even believing that too, datetime certainly "should be"
beefed up to solve the _other_ problems:  like resolving ambiguous
times, and supporting the full range of zoneinfo possibilities.

>> Is it the case that pytz also "fails" in the cases your attempts "fail"?

> No, that is not the case. And if you wonder why I just don't do it
> like pytz does it, it's because that leads to infinite recursion, much
> as discussions on this mailing list do. ;-)

Re: [Python-Dev] Status on PEP-431 Timezones

2015-07-26 Thread Lennart Regebro
On Mon, Jul 27, 2015 at 12:15 AM, Paul Moore  wrote:
> I think the current naive semantics are useful and should not be
> discarded lightly. At an absolute minimum, there should be a clear,
> documented way to get the current semantics under any changed
> implementation.
>
> As an example, consider an alarm clock. I want it to go off at 7am
> each morning. I'd feel completely justified in writing tomorrows_alarm
> = todays_alarm + timedelta(days=1).

That's a calendar operation made with a timedelta. The "days"
attribute here is indeed confusing as it doesn't mean 1 day, it means
24 hours.
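
A pytz illustration of the difference (pytz usage is an assumption here;
the thread discusses it abstractly):

import pytz
from datetime import datetime, timedelta

tz = pytz.timezone('Europe/London')
dt = tz.localize(datetime(2015, 3, 28, 7, 0))   # day before DST starts
naive = dt + timedelta(days=1)                  # wall clock still 07:00
aware = tz.normalize(dt + timedelta(days=1))    # 08:00 BST: 24 elapsed hours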

//Lennart


Re: [Python-Dev] Status on PEP-431 Timezones

2015-07-26 Thread Lennart Regebro
On Mon, Jul 27, 2015 at 4:04 AM, Tim Peters  wrote:
> Realistically, default arithmetic behavior can't change in Python 3
> (let alone Python 2).

Then we can't implement timezones in a reasonable way with the current
API, but have to have something like pytz's normalize() function or
similar.

I'm sorry I've wasted everyones time with this PEP.

//Lennart