Re: [Python-Dev] PEP 471 -- os.scandir() function -- a better and faster directory iterator

2014-06-26 Thread Tim Delaney
On 27 June 2014 09:28, MRAB  wrote:

> Personally, I'd prefer the name 'iterdir' because it emphasises that
> it's an iterator.


Exactly what I was going to post (with the added note that there's an
obvious symmetry with listdir).

+1 for iterdir rather than scandir

Other than that:

+1 for adding scandir to the stdlib
-1 for windows_wildcard (it would be an attractive nuisance to write
windows-only code)

Tim Delaney
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 471 -- os.scandir() function -- a better and faster directory iterator

2014-06-30 Thread Tim Delaney
On 1 July 2014 03:05, Ben Hoyt  wrote:

> > So, here's my alternative proposal: add an "ensure_lstat" flag to
> > scandir() itself, and don't have *any* methods on DirEntry, only
> > attributes.
> ...
> > Most importantly, *regardless of platform*, the cached stat result (if
> > not None) would reflect the state of the entry at the time the
> > directory was scanned, rather than at some arbitrary later point in
> > time when lstat() was first called on the DirEntry object.
>

I'm torn between whether I'd prefer the stat fields to be populated on
Windows if ensure_lstat=False or not. There are good arguments each way,
but overall I'm inclining towards having it consistent with POSIX - don't
populate them unless ensure_lstat=True.

+0 for stat fields to be None on all platforms unless ensure_lstat=True.
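For comparison, the API that eventually shipped in Python 3.5 went a different way: scandir() grew no ensure_lstat flag, and DirEntry.stat() fetches lazily and caches its result on the entry. A minimal sketch of that behaviour:

```python
import os
import tempfile

# Sketch of the shipped API (3.5+): no ensure_lstat flag; DirEntry.stat()
# is a method whose result is cached on the DirEntry after the first call.
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, 'f.txt'), 'w') as f:
        f.write('hello')
    for entry in os.scandir(d):
        st = entry.stat(follow_symlinks=False)  # cached on the entry
        assert st.st_size == 5
```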


> Yeah, I quite like this. It does make the caching more explicit and
> consistent. It's slightly annoying that it's less like pathlib.Path
> now, but DirEntry was never pathlib.Path anyway, so maybe it doesn't
> matter. The differences in naming may highlight the difference in
> caching, so maybe it's a good thing.
>

See my comments below on .fullname.


> Two further questions from me:
>
> 1) How does error handling work? Now os.stat() will/may be called
> during iteration, so in __next__. But it's hard to catch errors because
> you don't call __next__ explicitly. Is this a problem? How do other
> iterators that make system calls or raise errors handle this?
>

I think it just needs to be documented that iterating may throw the same
exceptions as os.lstat(). It's a little trickier if you don't want the
scope of your exception to be too broad, but you can always wrap the
iteration in a generator to catch and handle the exceptions you care about,
and allow the rest to propagate.

def scandir_accessible(path='.'):
    gen = os.scandir(path)

    while True:
        try:
            yield next(gen)
        except StopIteration:
            return  # required under PEP 479 (Python 3.7+)
        except PermissionError:
            pass

> 2) There's still the open question in the PEP of whether to include a
> way to access the full path. This is cheap to build, it has to be
> built anyway on POSIX systems, and it's quite useful for further
> operations on the file. I think the best way to handle this is a
> .fullname or .full_name attribute as suggested elsewhere. Thoughts?
>

+1 for .fullname. The earlier suggestion to have __str__ return the name is
killed I think by the fact that .fullname could be bytes.

It would be nice if pathlib.Path objects were enhanced to take a DirEntry
and use the .fullname automatically, but you could always call
Path(direntry.fullname).
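As it turned out, the attribute shipped as DirEntry.path rather than .fullname, and since Python 3.6 DirEntry implements os.PathLike, so pathlib.Path accepts one directly. A sketch:

```python
import os
import tempfile
from pathlib import Path

# Sketch of the shipped API: DirEntry.path holds the full path, and
# PEP 519 (os.PathLike, 3.6+) lets Path() take the entry itself.
with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, 'example.txt'), 'w').close()
    entry = next(os.scandir(d))
    assert Path(entry.path) == Path(entry)  # both spell the same path
```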

Tim Delaney


Re: [Python-Dev] PEP 471 -- os.scandir() function -- a better and faster directory iterator

2014-06-30 Thread Tim Delaney
On 1 July 2014 08:38, Ethan Furman  wrote:

> On 06/30/2014 03:07 PM, Tim Delaney wrote:
>
>> I'm torn between whether I'd prefer the stat fields to be populated
>> on Windows if ensure_lstat=False or not. There are good arguments each
>>  way, but overall I'm inclining towards having it consistent with POSIX
>> - don't populate them unless ensure_lstat=True.
>>
>> +0 for stat fields to be None on all platforms unless ensure_lstat=True.
>>
>
> If a Windows user just needs the free info, why should s/he have to pay
> the price of a full stat call?  I see no reason to hold the Windows side
> back and not take advantage of what it has available.  There are plenty of
> posix calls that Windows is not able to use, after all.
>

On Windows, ensure_lstat would be either a NOP (if the fields are
always populated) or simply determine whether the fields get populated. No
extra stat call.

On POSIX it's the difference between an extra stat call or not.

Tim Delaney


Re: [Python-Dev] Updates to PEP 471, the os.scandir() proposal

2014-07-09 Thread Tim Delaney
On 10 July 2014 10:23, Victor Stinner  wrote:

> 2014-07-09 17:26 GMT+02:00 Paul Moore :
> > On 9 July 2014 16:05, Victor Stinner  wrote:
> >> The PEP says that DirEntry should mimic pathlib.Path, so I think that
> >> DirEntry.is_dir() should work as os.path.isdir(): if the entry is a
> >> symbolic link, you should follow the symlink to get the status of the
> >> linked file with os.stat().
> >
> > (...)
> > As a Windows user with only a superficial understanding of how
> > symlinks should behave, (...)
>
> FYI Windows also supports symbolic links since Windows Vista. The
> feature is unknown because it is restricted to the administrator
> account. Try the "mklink" command in a terminal (cmd.exe) ;-)
> http://en.wikipedia.org/wiki/NTFS_symbolic_link
>
> ... To be honest, I never created a symlink on Windows. But since it
> is supported, you need to know it to write correctly your Windows
> code.
>

Personally, I create them all the time on Windows - mainly via the Link
Shell Extension <
http://schinagl.priv.at/nt/hardlinkshellext/linkshellextension.html>. It's
the easiest way to ensure that my directory structures are as I want them
whilst not worrying about where the files really are, e.g. code on SSD,
GB+-sized data files on rusty metal - symlinks make it look like it's the
same directory structure. The same thing can be done with junctions if you're
only dealing with directories, but symlinks work with files as well.

I work cross-platform, and have a mild preference for option #2 with
similar semantics on all platforms.

Tim Delaney


Re: [Python-Dev] Updates to PEP 471, the os.scandir() proposal

2014-07-10 Thread Tim Delaney
On 10 July 2014 17:04, Paul Moore  wrote:

> On 10 July 2014 01:23, Victor Stinner  wrote:
> >> As a Windows user with only a superficial understanding of how
> >> symlinks should behave, (...)
> >
> > FYI Windows also supports symbolic links since Windows Vista. The
> > feature is unknown because it is restricted to the administrator
> > account. Try the "mklink" command in a terminal (cmd.exe) ;-)
> > http://en.wikipedia.org/wiki/NTFS_symbolic_link
> >
> > ... To be honest, I never created a symlink on Windows. But since it
> > is supported, you need to know it to write correctly your Windows
> > code.
>
> I know how symlinks *do* behave, and I know how Windows supports them.
> What I meant was that, because Windows typically makes little use of
> symlinks, I have little or no intuition of what feels natural to
> people using an OS where symlinks are common.
>
> As someone (Tim?) pointed out later in the thread,
> FindFirstFile/FindNextFile doesn't follow symlinks by default (and nor
> do the dirent entries on Unix).


It wasn't me (I didn't even see it - lost in the noise).


> So whether or not it's "natural", the
> "free" functionality provided by the OS is that of lstat, not that of
> stat. Presumably because it's possible to build symlink-following code
> on top of non-following code, but not the other way around.
>

For most uses the most natural thing is to follow symlinks (e.g. opening a
symlink in a text editor should open the target). However, I think not
following symlinks by default is better approach for exactly the reason
Paul has noted above.

Tim Delaney


Re: [Python-Dev] Remaining decisions on PEP 471 -- os.scandir()

2014-07-13 Thread Tim Delaney
On 14 July 2014 10:33, Ben Hoyt  wrote:

> If we go with Victor's link-following .is_dir() and .is_file(), then
> we probably need to add his suggestion of a follow_symlinks=False
> parameter (defaults to True). Either that or you have to say
> "stat.S_ISDIR(entry.lstat().st_mode)" instead, which is a little bit
> less nice.
>

Absolutely agreed that follow_symlinks is the way to go, disagree on the
default value.


> Given the above arguments for symlink-following is_dir()/is_file()
> methods (have I missed any, Victor?), what do others think?
>

I would say whichever way you go, someone will assume the opposite. IMO not
following symlinks by default is safer. If you follow symlinks by default
then everyone has the following issues:

1. Crossing filesystems (including onto network filesystems);

2. Recursive directory structures (symlink to a parent directory);

3. Symlinks to non-existent files/directories;

4. Symlink to an absolutely huge directory somewhere else (very annoying if
you just wanted to do a directory sizer ...).

If follow_symlinks=False by default, only those who opt-in have to deal
with the above.

Tim Delaney


Re: [Python-Dev] Remaining decisions on PEP 471 -- os.scandir()

2014-07-13 Thread Tim Delaney
On 14 July 2014 12:17, Nick Coghlan  wrote:
>
> I think os.walk() is a good source of inspiration here: call the flag
> "followlink" and default it to False.
>
Actually, that's "followlinks", and I'd forgotten that os.walk() defaulted
to not follow - definitely behaviour to match IMO :)

Tim Delaney


Re: [Python-Dev] PEP 471 Final: os.scandir() merged into Python 3.5

2015-03-08 Thread Tim Delaney
On 8 March 2015 at 13:31, Ben Hoyt  wrote:

> Thanks for committing this, Victor! And fixing the d_type issue on funky
> platforms.
>
> Others: if you want to benchmark this, the simplest way is to use my
> os.walk() benchmark.py test program here:
> https://github.com/benhoyt/scandir -- it compares the built-in os.walk()
> implemented with os.listdir() with a version of walk() implemented with
> os.scandir(). I see huge gains on Windows (12-50x) and modest gains on my
> Linux VM (3-5x).
>
> Note that the actual CPython version of os.walk() doesn't yet use
> os.scandir(). I intend to open a separate issue for that shortly (or Victor
> can). But that part should be fairly straight-forward, as I already have a
> version available in my GitHub project.
>

I think it would be a good idea to report the type of drive/mount along
with the results. I imagine that there might be significant differences
between solid state drives, hard drives and network mounts.
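A self-contained way to see the effect on a particular drive (a small sketch, separate from Ben's benchmark.py; timings will vary with drive type as suggested above):

```python
import os
import tempfile
import timeit

def count_files_listdir(path):
    # os.listdir() returns names only, so classifying each entry
    # costs an extra stat() per file
    return sum(os.path.isfile(os.path.join(path, n)) for n in os.listdir(path))

def count_files_scandir(path):
    # is_file() is usually answered from data the OS already returned
    # during directory iteration - no extra stat()
    return sum(e.is_file() for e in os.scandir(path))

with tempfile.TemporaryDirectory() as d:
    for i in range(100):
        open(os.path.join(d, 'f%d' % i), 'w').close()
    t_list = timeit.timeit(lambda: count_files_listdir(d), number=20)
    t_scan = timeit.timeit(lambda: count_files_scandir(d), number=20)
    print('listdir+stat: %.4fs  scandir: %.4fs' % (t_list, t_scan))
```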

Tim Delaney


Re: [Python-Dev] Guarantee ordered dict literals in v3.7?

2017-11-05 Thread Tim Delaney
On 6 November 2017 at 07:50, Peter Ludemann via Python-Dev <
python-dev@python.org> wrote:

> Isn't ordered dict also useful for **kwargs?
>

**kwargs is already specified as insertion-ordered as of Python 3.6.

https://www.python.org/dev/peps/pep-0468/
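The PEP 468 guarantee is easy to observe:

```python
def capture(**kwargs):
    return list(kwargs)

# Python 3.6+: **kwargs preserves the keyword order of the call site.
assert capture(zebra=1, apple=2, mango=3) == ['zebra', 'apple', 'mango']
```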

Tim Delaney


Re: [Python-Dev] Guarantee ordered dict literals in v3.7?

2017-11-05 Thread Tim Delaney
On 6 November 2017 at 06:09, Serhiy Storchaka  wrote:

> 05.11.17 20:39, Stefan Krah wrote:
>
>> On Sun, Nov 05, 2017 at 01:14:54PM -0500, Paul G wrote:
>>
>>> 2. Someone invents a new arbitrary-ordered container that would improve
>>> on the memory and/or CPU performance of the current dict implementation
>>>
>>
>> I would think this is very unlikely, given that the previous dict
>> implementation
>> has always been very fast. The new one is very fast, too.
>>
>
> The modification of the current implementation that doesn't preserve the
> initial order after deletion would be more compact and faster.


I would personally be happy with this as the guarantee (it covers dict
literals and handles PEP 468), but it might be more confusing. "dicts are
in arbitrary order" and "dicts maintain insertion order" are fairly simple
to explain, "dicts maintain insertion order up to the point that a key is
deleted" is less so.
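For reference, the guarantee that ultimately landed in 3.7 does preserve order across deletion; a re-inserted key simply moves to the end:

```python
d = {'a': 1, 'b': 2, 'c': 3}
del d['b']
assert list(d) == ['a', 'c']   # remaining keys keep their order
d['b'] = 2                     # re-insertion goes to the end, not the old slot
assert list(d) == ['a', 'c', 'b']
```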

Tim Delaney


Re: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default

2017-11-05 Thread Tim Delaney
On 6 November 2017 at 13:05, Nick Coghlan  wrote:

> As part of this though, I'd suggest amending the documentation for
> DeprecationWarning [1] to specifically cover how to turn it off
> programmatically (`warnings.simplefilter("ignore",
> DeprecationWarning)`), at the command line (`python -W
> ignore::DeprecationWarning ...`), and via the environment
> (`PYTHONWARNINGS=ignore::DeprecationWarning`).
>

I'm wondering if it would be sensible to recommend only disabling the
warnings if running with a known version of Python e.g.

if sys.version_info < (3, 8):
    # simplefilter() isn't a context manager - wrap it in catch_warnings()
    # so the filter is restored afterwards
    with warnings.catch_warnings():
        warnings.simplefilter('ignore', DeprecationWarning)
        import module

The idea here is to prompt the developer to refactor to not use the
deprecated functionality early enough that users aren't impacted.

Tim Delaney


Re: [Python-Dev] Proposal: go back to enabling DeprecationWarning by default

2017-11-07 Thread Tim Delaney
On 8 November 2017 at 03:55, Barry Warsaw  wrote:

> On Nov 7, 2017, at 05:44, Paul Moore  wrote:
>
> > If you're a user and your application developer didn't do (1) or a
> > library developer developing one of the libraries your application
> > developer chose to use didn't do (2), you're hosed. If you're a user
> > who works in an environment where moving to a new version of the
> > application is administratively complex, you're hosed.
>
> “hosed” feels like too strong of a word here.  DeprecationWarnings usually
> don’t break anything.  Sure, they’re annoying but they can usually be
> easily ignored.
>
> Yes, there are some situations where DWs do actively break things (as I’ve
> mentioned, some Debuntu build/test environments).  But those are also
> relatively easier to silence, or at least the folks running those
> environments, or writing the code for those environments, are usually more
> advanced developers for whom setting an environment variable or flag isn’t
> that big of a deal.
>

One other case would be if you've got an application with no stderr (e.g. a
GUI application) - with enough deprecation warnings the stderr buffer could
become full and block, preventing the application from progressing. I've
just had a similar issue where a process was running as a service and used
subprocess.check_output() - stderr was written to the parent's stderr,
which didn't exist and caused the program to hang.
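One defensive option when running headless is to give the child an explicit sink for stderr instead of letting it inherit a missing or blocked one. A sketch:

```python
import subprocess
import sys

# Run a child that writes to both streams; discard stderr explicitly so a
# parent with no console can never block on it.
result = subprocess.run(
    [sys.executable, '-c',
     'import sys; print("data"); print("warning!", file=sys.stderr)'],
    stdout=subprocess.PIPE,
    stderr=subprocess.DEVNULL,  # don't inherit the parent's (possibly absent) stderr
)
assert result.stdout.strip() == b'data'
```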

However, I'm definitely +1 on enabling DeprecationWarning by default, but
with mechanisms or recommendations for the application developer to silence
them selectively for the current release.

Tim Delaney


Re: [Python-Dev] Subtle difference between f-strings and str.format()

2018-03-28 Thread Tim Delaney
On 29 March 2018 at 07:39, Eric V. Smith  wrote:

> I’d vote #3 as well.
>
> > On Mar 28, 2018, at 11:27 AM, Serhiy Storchaka 
> wrote:
> >
> > There is a subtle semantic difference between str.format() and
> "equivalent" f-string.
> >
> >'{}{}'.format(a, b)
> >f'{a}{b}'
> >
> > In most cases this doesn't matter, but when implementing the optimization
> > that transforms the former expression to the latter one ([1], [2]) we
> > have to make a decision about what to do with this difference.
>


Re: [Python-Dev] Subtle difference between f-strings and str.format()

2018-03-28 Thread Tim Delaney
On 29 March 2018 at 08:09, Tim Delaney  wrote:

> On 29 March 2018 at 07:39, Eric V. Smith  wrote:
>
>> I’d vote #3 as well.
>>
>> > On Mar 28, 2018, at 11:27 AM, Serhiy Storchaka 
>> wrote:
>> >
>> > There is a subtle semantic difference between str.format() and
>> "equivalent" f-string.
>> >
>> >'{}{}'.format(a, b)
>> >f'{a}{b}'
>> >
>> > In most cases this doesn't matter, but when implementing the optimization
>> > that transforms the former expression to the latter one ([1], [2]) we
>> > have to make a decision about what to do with this difference.
>>
>
Sorry about that - finger slipped and I sent an incomplete email ...

If I'm not mistaken, #3 would result in the optimiser changing str.format()
into an f-string in-place. Is this correct? We're not talking here about
people manually changing the code from str.format() to f-strings, right?

I would argue that any optimisation needs to have the same semantics as the
original code - in this case, that all arguments are evaluated before the
string is formatted.

I also assumed (not having actually used an f-string) that all its
formatting arguments were evaluated before formatting.
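The difference is observable with side effects: str.format() evaluates all its arguments before any formatting happens, while a CPython f-string evaluates and formats each field in turn. A sketch:

```python
events = []

class Traced:
    def __init__(self, name):
        self.name = name
    def __format__(self, spec):
        events.append('format ' + self.name)
        return self.name

def make(name):
    events.append('eval ' + name)
    return Traced(name)

'{}{}'.format(make('a'), make('b'))
order_format = list(events)    # all evaluation, then all formatting

events.clear()
f"{make('a')}{make('b')}"
order_fstring = list(events)   # each field evaluated and formatted in turn

assert order_format == ['eval a', 'eval b', 'format a', 'format b']
assert order_fstring == ['eval a', 'format a', 'eval b', 'format b']
```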

So my preference would be (if my understanding in the first line is
correct):

1: +0
2a: +0.5
2b: +1
3: -1

Tim Delaney


Re: [Python-Dev] PEP-498: Literal String Formatting

2015-08-08 Thread Tim Delaney
On 8 August 2015 at 11:39, Eric V. Smith  wrote:

> Following a long discussion on python-ideas, I've posted my draft of
> PEP-498. It describes the "f-string" approach that was the subject of
> the "Briefer string format" thread. I'm open to a better title than
> "Literal String Formatting".
>
> I need to add some text to the discussion section, but I think it's in
> reasonable shape. I have a fully working implementation that I'll get
> around to posting somewhere this weekend.
>
> >>> def how_awesome(): return 'very'
> ...
> >>> f'f-strings are {how_awesome()} awesome!'
> 'f-strings are very awesome!'
>
> I'm open to any suggestions to improve the PEP. Thanks for your feedback.
>

I'd like to see an alternatives section, in particular listing alternative
prefixes and why they weren't chosen over f. Off the top of my head, ones
I've seen listed are:

!
$

Tim Delaney


Re: [Python-Dev] PEP: Collecting information about git

2015-09-12 Thread Tim Delaney
On 13 September 2015 at 04:42, Oleg Broytman  wrote:

>There are too many things that I personally can do with git but can't
> do with hg. Because of that I switched all my development from hg to git
> and I am willing to help those who want to follow.
>

Slightly off-topic, but personally I'd love to know what those are. I've
yet to find anything in Git that I haven't been able to do at least as well
with Mercurial (or an extension), and there are things Mercurial
supports that I use extensively (in particular named branches and phases)
where the concept doesn't even exist in Git.

I switched all of my development to Mercurial, and use hg-git and
hgsubversion when I need to interact with those systems.

Tim Delaney


Re: [Python-Dev] possibility of shaving a stat call from imports

2013-10-18 Thread Tim Delaney
On 19 October 2013 03:53, Brett Cannon  wrote:

> importlib.machinery.FileFinder does a stat call to check if a path is a
> file if the package check failed. Now I'm willing to bet that the check is
> rather redundant as the file extension should be a dead give-away that
> something in a directory is a file and not some non-file type. The import
> would still fail even if this is the case in the loader when attempting to
> read from the file, but it happens a little later and it means finders
> would be more permissive in claiming they found a loader.
>
> Does anyone see a good reason not to take the more optimistic route in the
> finder? As I said, the only thing I see breaking user code is if they have
> a directory or something named spam.py and so the finder claims it found a
> module when in fact it didn't and thus stopping the search for the module
> from continuing.
>

Whilst directories with extensions are unusual on Windows, they're fairly
common on UNIX-based systems. For example, blah.rc directories. And I
personally often create directories with extensions - usually a timestamp
of some kind.

If the extension is specifically an extension that Python uses (e.g.
.py[cod]) then I think it would be reasonable to make the assumption and
let the import fail at the loader instead. Would the extension check be
faster or slower than another stat() call?

As an alternative, is there another stat call later that could be bypassed
if you temporarily cached the result of this stat call? And if so, when
should the cached value be cleared?
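The trade-off being weighed - a pure string suffix check versus another stat() - can be sketched like this (the suffix list here is illustrative only, not importlib's actual one):

```python
# Illustrative suffixes only - importlib derives the real list at runtime.
_SUFFIXES = ('.py', '.pyc')

def plausibly_a_module(filename):
    # A pure string comparison: far cheaper than a stat() call, but it
    # will claim 'spam.py' even when spam.py is actually a directory.
    return filename.endswith(_SUFFIXES)

assert plausibly_a_module('spam.py')
assert not plausibly_a_module('blah.rc')        # UNIX-style dir with extension
assert not plausibly_a_module('data.20131019')  # timestamped directory
```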

Tim Delaney


Re: [Python-Dev] Hashes on same site as download?

2013-10-21 Thread Tim Delaney
On 22 October 2013 12:21, Dan Stromberg  wrote:

>
> I may be missing something, but it seems the Python tarballs and hashes
> are on the same host, and this is not an entirely good thing for security.
>
> The way things are now, an attacker breaks into one host, doctors up a
> tarball, changes the hashes in the same host, and people download without
> noticing, even if they verify hashes.
>
> If you put the hashes on a different host from the tarballs, the attacker
> has to break into two machines.  In this scenario, the hashes add more
> strength.
>

I'm not a security expert, but I can't see how that gives any more security
than the current system (I tried to find whatever article you're talking
about, but failed). It doesn't matter if you provide downloads in one place
and direct people to get the hashes from elsewhere. An attacker has no need
to compromise the server where the hashes are stored - they only need to
compromise the server that tells you where to get the downloads and hashes.

Then the attacker can simply change the download page to direct you to the
malicious downloads, hashes and keys (which they can place on the same
server, so everything looks legit).

Off the top of my head, one way that would give more security would be to
store a hash of the download page itself elsewhere (preferably multiple
places) and periodically compare that with the live version. Any changes to
the live page would be noticed (eventually) unless the attacker also
compromised all those other machines.
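The idea is just a content digest of the page itself, stored and checked from elsewhere. A sketch with hashlib (the page contents here are invented for illustration):

```python
import hashlib

def fingerprint(page: bytes) -> str:
    """Digest of the download page itself, to be mirrored on other hosts."""
    return hashlib.sha256(page).hexdigest()

# Hypothetical page contents - any tampering changes the stored digest.
original = fingerprint(b'<a href="Python-3.tgz">download</a> sha256: abc123')
tampered = fingerprint(b'<a href="evil.tgz">download</a> sha256: def456')
assert original != tampered
```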

Tim Delaney


[Python-Dev] py.ini documentation improvement

2013-11-28 Thread Tim Delaney
I was just getting Jython working with py.exe, and I think the
documentation can be made a bit more friendly. In particular, I think we can
make it easier for people to determine the correct folder by changing this
line in 3.4.4.1 Customization via INI files:

Two .ini files will be searched by the launcher - py.ini in the current
user’s "application data" directory (i.e. the directory returned by calling
the Windows function SHGetFolderPath with CSIDL_LOCAL_APPDATA) and py.ini
in the same directory as the launcher.

to

Two .ini files will be searched by the launcher - py.ini in the current
user’s "application data" directory (i.e. the directory returned by
executing `echo %LOCALAPPDATA%` in a command window) and py.ini in the same
directory as the launcher.

%LOCALAPPDATA% should always contain the same value as would be returned
from SHGetFolderPath with CSIDL_LOCAL_APPDATA.

Tim Delaney


Re: [Python-Dev] py.ini documentation improvement

2013-11-28 Thread Tim Delaney
On 29 November 2013 08:34, Glenn Linderman  wrote:

>  On 11/28/2013 1:04 PM, Tim Delaney wrote:
>
>
> %LOCALAPPDATA% should always contain the same value as would be returned
> from SHGetFolderPath with CSIDL_LOCAL_APPDATA.
>
> Except when it gets changed. Documentation should reflect the actual use
> in the code. If it uses the SHGetFolderPath, that is what should be
> documented; if it uses the environment variable, that is what should get
> documented. Users can easily change the environment variable (which might
> be a reason to use it, instead of SHGetFolderPath).
>

Didn't think of that - good point.

Tim Delaney


Re: [Python-Dev] py.ini documentation improvement

2013-11-28 Thread Tim Delaney
On 29 November 2013 10:59, Terry Reedy  wrote:

> On 11/28/2013 5:35 PM, mar...@v.loewis.de wrote:
>
>>
>> Quoting Terry Reedy :
>>
>>> 'Two .ini files will be searched by the launcher' sort of implies to
>>> me that the files exist. On my Win7 machine, echo %LOCALAPPDATA%
>>> returns C:\Users\Terry\AppData\Local. If I go to Users/Terry with
>>> Explorer, there is no AppData. (Same with admin user.)
>>>
>>
> I initially intended to write "there is no AppData *visible*". Then I
> switched to my win7 admin account and there was still no AppData visible,
> for any user. This is unlike with XP where things not visible to 'users'
> become visible as admin.


By default in Win7, AppData is a hidden folder - you need to go to Tools |
Folder Options | View | Show hidden files, folders and drives to see it in
Explorer (no matter what user you're logged in as).

If the py.ini location does become defined in terms of %LOCALAPPDATA% then
suggesting to use that value in the Explorer address bar would probably be
the easiest way for people to get to the correct directory.

Tim Delaney


Re: [Python-Dev] py.ini documentation improvement

2013-11-28 Thread Tim Delaney
On 29 November 2013 13:17, Terry Reedy  wrote:

> On 11/28/2013 7:06 PM, Tim Delaney wrote:
>
>  By default in Win7 AppData is a hidden folder - you need to go to  Tools
>>
>
> On my system, that is Control Panel, not Tools.


Sorry - was referring to the Explorer "Tools" menu, which is likewise
hidden by default ... so many things that I change from the defaults when
setting up a Windows machine - it's become so automatic that I forget what
is hidden from most users.

Tim Delaney


Re: [Python-Dev] RFC: PEP 460: Add bytes % args and bytes.format(args) to Python 3.5

2014-01-06 Thread Tim Delaney
I've just posted about PEP 460 and this discussion on the mercurial-devel
mailing list.

Tim Delaney


Re: [Python-Dev] PEP 463: Exception-catching expressions

2014-02-21 Thread Tim Delaney
On 22 February 2014 02:03, Chris Angelico  wrote:

> Oops, hit the wrong key and sent that half-written.
>
> ... and simply require that the statement form be used. But the
> whelming opinion of python-dev seems to be in favour of the parens
> anyway, and since they give us the possibility of future expansion
> effectively for free, I've gone that way. Parens are now required; the
> syntax is:
>
> value = (expr except Exception: default)
>

Let me add my congratulations on a fine PEP.

I think it's much more readable with the parens (doesn't look as much like
there's a missing newline). I'd also strongly argue for permanently
disallowing multiple exceptions - as you said, this is intended to be a
simple, readable syntax.

Even with the parens, I share the bad gut feeling others are having with
the colon in the syntax. I also don't think "then" is a good fit (even if
it didn't add a keyword).

Unfortunately, I can't come up with anything better ...

Tim Delaney


Re: [Python-Dev] PEP 463: Exception-catching expressions

2014-02-21 Thread Tim Delaney
On 22 February 2014 10:29, Greg Ewing  wrote:

> Antoine Pitrou wrote:
>
>>> lst = [1, 2]
>>> value = lst[2] except IndexError: "No value"
>>
>> the gain in concision is counterbalanced by a loss in
>> readability,
>
> This version might be more readable:
>
>    value = lst[2] except "No value" if IndexError
>

+1 - it's readable, clear, and only uses existing keywords.
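Since PEP 463 was ultimately rejected, the statement form - or a small helper function like the hypothetical one below - remains the way to spell this today:

```python
def subscript_or(seq, index, default):
    # What `(lst[2] except IndexError: "No value")` would have meant,
    # written as an ordinary helper.
    try:
        return seq[index]
    except IndexError:
        return default

lst = [1, 2]
assert subscript_or(lst, 2, 'No value') == 'No value'
assert subscript_or(lst, 1, 'No value') == 2
```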

Tim Delaney


Re: [Python-Dev] PEP 463: Exception-catching expressions

2014-02-22 Thread Tim Delaney
On 23 February 2014 02:29, Nick Coghlan  wrote:

> On 22 Feb 2014 22:15, "Stephen J. Turnbull"  wrote:
> > Antoine Pitrou writes:
> >  > Chris Angelico  wrote:
> >  > > hasattr(x,"y") <-> (x.y or True except AttributeError: False)
> >  > But it's not the same. hasattr() returns a boolean, not an arbitrary
> >  > value.
> > I think he meant
> > hasattr(x,"y") <-> (x.y and True except AttributeError: False)
>
> With PEP 463, the explicit equivalent of hasattr() would be something like
> :
>
> hasattr(x,"y") <-> (bool(x.y) or True except AttributeError: False)
>
That would work, but I think I'd prefer:

hasattr(x,"y") <-> bool(x.y or True except AttributeError: False)

Makes it clearer IMO that the entire expression will always return a
boolean.

If exception expressions already existed in the language, I would think
there would be a strong argument for a library function hasattr(), but
probably not a builtin.
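Spelled out as a statement form, the semantics everyone is approximating
here are those of hasattr() itself:

```python
def hasattr_equiv(x, name):
    # What hasattr() actually does in Python 3: call getattr() and report
    # whether AttributeError was raised, discarding the looked-up value.
    try:
        getattr(x, name)
        return True
    except AttributeError:
        return False

assert hasattr_equiv("abc", "upper") == hasattr("abc", "upper") == True
assert hasattr_equiv("abc", "nope") == hasattr("abc", "nope") == False
```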

Tim Delaney


Re: [Python-Dev] GSoC 2014 - Status for Core Python

2014-02-26 Thread Tim Delaney
On 27 February 2014 05:53, Terry Reedy  wrote:

>
> PSF acts as an umbrella organization for multiple Python projects
> https://wiki.python.org/moin/SummerOfCode/2014
> Core Python is the first listed of about 15.
>

I'm guessing Mercurial will appear under the umbrella in the not too distant
future (Mercurial was rejected as a sponsor organisation - Giovanni
Gherdovich is liaising with the PSF about it).

Tim Delaney


Re: [Python-Dev] Internal representation of strings and Micropython

2014-06-05 Thread Tim Delaney
On 5 June 2014 22:01, Paul Sokolovsky  wrote:

>
> All these changes are what let me dream on and speculate on
> possibility that Python4 could offer an encoding-neutral string type
> (which means based on bytes)
>

To me, an "encoding neutral string type" means roughly "characters are
atomic", and the best representation we have for a "character" is a Unicode
code point. Through any interface that provides "characters" each
individual "character" (code point) is indivisible.

To me, Python 3 has exactly an "encoding-neutral string type". It also has
a bytes type that is just that - bytes which can represent anything at
all. It might be the UTF-8 representation of a string, but you have the
freedom to manipulate it however you like - including making it no longer
valid UTF-8.

Whilst I think O(1) indexing of strings is important, I don't think it's as
important as the property that "characters" are indivisible and would be
quite happy for MicroPython to use UTF-8 as the underlying string
representation (or some more clever thing, several ideas in this thread) so
long as:

1. It maintains a string type that presents code points as indivisible
elements;

2. The performance consequences of using UTF-8 are documented, as well as
any optimisations, tricks, etc that are used to overcome those consequences
(and what impact if any they would have if code written for MicroPython was
run in CPython).
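The indivisibility property is easy to verify in CPython, independent of
the internal representation:

```python
s = "caf\u00e9"                    # 4 code points
assert len(s) == 4                 # length counts code points, not bytes
assert s[3] == "\u00e9"            # indexing yields one whole code point

b = s.encode("utf-8")              # bytes: an encoding-specific view
assert len(b) == 5                 # the accented e takes two bytes in UTF-8
assert b.decode("utf-8") == s      # round-trips back to the same code points
```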

Cheers,

Tim Delaney


Re: [Python-Dev] Internal representation of strings and Micropython

2014-06-06 Thread Tim Delaney
On 6 June 2014 21:34, Paul Sokolovsky  wrote:

>
> On Fri, 06 Jun 2014 20:11:27 +0900
> "Stephen J. Turnbull"  wrote:
>
> > Paul Sokolovsky writes:
> >
> >  > That kinda means "string is atomic", instead of your "characters
> >  > are atomic".
> >
> > I would be very surprised if a language that behaved that way was
> > called a "Python subset".  No indexing, no slicing, no regexps, no
> > .split(), no .startswith(), no sorted() or .sort(), ...!?
> >
> > If that's not what you mean by "string is atomic", I think you're
> > using very confusing terminology.
>
> I'm sorry if I didn't mention it, or didn't make it clear enough - it's
> all about layering.
>
> On level 0, you treat strings verbatim, and can write some subset of
> apps (my point is that even this level allows to write lot enough
> apps). Let's call this set A0.
>
> On level 1, you accept that there's some universal enough conventions
> for some chars, like space or newline. And you can write set of
> apps A1 > A0.
>

At heart, this is exactly what the Python 3 "str" type is. The universal
convention is "code points". It's got nothing to do with encodings, or
bytes. A Python string is simply a finite sequence of atomic code points -
it is indexable, and it has a length. Once you have that, everything is
layered on top of it. How the code points themselves are implemented is
opaque and irrelevant other than the memory and performance consequences of
the implementation decisions (for example, a string could be indexable by
iterating from the start until you find the nth code point).

Similarly the "bytes" type is a sequence of 8-bit bytes.

Encodings are simply a way to transport code points via a byte-oriented
transport.

Tim Delaney


Re: [Python-Dev] Internal representation of strings and Micropython

2014-06-06 Thread Tim Delaney
On 7 June 2014 00:52, Paul Sokolovsky  wrote:

> > At heart, this is exactly what the Python 3 "str" type is. The
> > universal convention is "code points".
>
> Yes. Except for one small detail - Python3 specifies these code points
> to be Unicode code points. And Unicode is a very bloated thing.
>
> But if we drop that "Unicode" stipulation, then it's also exactly what
> MicroPython implements. Its "str" type consists of codepoints, we don't
> have pet names for them yet, like Unicode does, but their numeric
> values are 0-255. Note that it in no way limits encodings, characters,
> or scripts which can be used with MicroPython, because just like
> Unicode, it support concept of "surrogate pairs" (but we don't call it
> like that) - specifically, smaller code points may comprise bigger
> groupings. But unlike Unicode, we don't stipulate format, value or
> other constraints on how these "surrogate pairs"-alikes are formed,
> leaving that to users.


I think you've missed my point.

There is absolutely nothing conceptually bloaty about what a Python 3
string is. It's just like a 7-bit ASCII string, except each entry can be
from a larger table. When you index into a Python 3 string, you get back
exactly *one valid entry* from the Unicode code point table. That plus the
length of the string, plus the guarantee of immutability gives everything
needed to layer the rest of the string functionality on top.

There are no surrogate pairs - each code point is standalone (unlike code
*units*). It is conceptually very simple. The implementation may be
difficult (if you're trying to do better than 4 bytes per code point) but
the concept is dead simple.
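A quick check in CPython (3.3+, any build) showing both properties -
indexing always yields a whole code point, and surrogates exist only at the
encoding layer, never in the string itself:

```python
s = "a\U0001F600b"                # U+1F600 is outside the BMP
assert len(s) == 3                # one code point, not a surrogate pair
assert ord(s[1]) == 0x1F600       # indexing returns the whole code point

# Surrogates appear only when the code point is *encoded*:
assert s[1].encode("utf-16-le") == b"\x3d\xd8\x00\xde"
```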

If the MicroPython string type requires people *using* it to deal with
surrogates (i.e. indexing could return a value that is not a valid Unicode
code point) then it will have broken the conceptual simplicity of the
Python 3 string type (and most certainly can't be considered in any way
compatible).

Tim Delaney


Re: [Python-Dev] How to update namedtuple asdict() to use dict instead of OrderedDict

2019-01-30 Thread Tim Delaney
On Thu, 31 Jan 2019 at 15:46, Raymond Hettinger 
wrote:

>
> > Would it be practical to add deprecated methods to regular dict for the
> OrderedDict reordering methods that raise with an error suggesting "To use
> this method, convert dict to OrderedDict." (or some better wording).
>
> That's an interesting idea.  Regular dicts aren't well suited to the
> reordering operations (like lists, repeated inserts at the front of the
> sequence wouldn't be performant relative to OrderedDict which uses
> double-linked lists internally).  My instinct is to leave regular dicts
> alone so that they can focus on their primary task (being good a fast
> lookups).
>

Alternatively, would it be viable to make OrderedDict work in a way that so
long as you don't use any reordering operations it's essentially just a
very thin layer on top of a dict, but if you do use any reordering
operations, it adds in the additional heavyweight structure required to
support that?

I'm pretty sure something similar has been considered before, but thought I
should bring it up in the context of this discussion (if only to have it
shot down).
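For context, the reordering operations in question are OrderedDict-specific
(e.g. move_to_end() and popitem(last=False)); a plain dict, which preserves
insertion order since 3.7, deliberately lacks them:

```python
from collections import OrderedDict

d = OrderedDict(a=1, b=2, c=3)
d.move_to_end("a")                # O(1) via the internal doubly-linked list
assert list(d) == ["b", "c", "a"]

# Plain dicts keep insertion order but have no reordering API at all:
assert not hasattr({}, "move_to_end")
```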

Tim Delaney


Re: [Python-Dev] Is XML serialization output guaranteed to be bytewise identical forever?

2019-03-19 Thread Tim Delaney
On Tue, 19 Mar 2019 at 23:13, David Mertz  wrote:

> In a way, this case makes bugs worse because they are not only a Python
> internal matter. XML is used to communicate among many tools and
> programming languages, and relying on assumptions those other tools will
> not follow us a bad habit.
>

I have a recent example I encountered where the 3.7 behaviour (sorting
attributes) results in a third-party tool behaving incorrectly, whereas
maintaining attribute order works correctly. The particular case was using
HTML <meta> tags for importing into Calibre for converting to an ebook. The
most common symptom was that series indexes were sometimes being correctly
imported, and sometimes not. Occasionally other <meta> tags would also fail
to be correctly imported.

Turns out that <meta name=... content=...> gave consistently
correct results, whilst <meta content=... name=...> was
erratic. And whilst I'd specified the <meta> tags with the name attribute
first, I was then passing the HTML through BeautifulSoup, which sorted the
attributes.

Now Calibre is definitely in the wrong here - it should be able to import
regardless of the order of attributes. But the fact is that there are a lot
of tools out there that are semi-broken in a similar manner.

This to me is an argument to default to maintaining order, but provide a
way for the caller to control the order of attributes when formatting e.g.
pass an ordering function. If you want sorted attributes, pass the built-in
sorted function as your ordering function. But I think that's getting
beyond the scope of this discussion.
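For the record, Python 3.8 ended up preserving attribute order in
ElementTree serialisation (earlier versions sorted by name), and callers who
want sorted output can simply sort before setting. A sketch - the attribute
names and values here are illustrative, not Calibre's actual metadata keys:

```python
import xml.etree.ElementTree as ET

# Since Python 3.8, ElementTree serialises attributes in the order set.
elem = ET.Element("meta")
elem.set("name", "series_index")   # set name first, as in the post
elem.set("content", "3")

xml_out = ET.tostring(elem, encoding="unicode")
assert xml_out == '<meta name="series_index" content="3" />'
```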

Tim Delaney


Re: [Python-Dev] Is XML serialization output guaranteed to be bytewise identical forever?

2019-03-19 Thread Tim Delaney
On Wed, 20 Mar 2019 at 00:29, Serhiy Storchaka  wrote:

> 19.03.19 15:10, Tim Delaney wrote:
> > Now Calibre is definitely in the wrong here - it should be able to
> > import regardless of the order of attributes. But the fact is that there
> > are a lot of tools out there that are semi-broken in a similar manner.
>
> Is not Calibre going to seat on Python 2 forever? This makes it
> non-relevant to the discussion about Python 3.8.
>

I was simply using Calibre as an example of a tool I'd encountered recently
that works correctly with input files with attributes in one order, but not
the other. That it happens to be using Python (of any vintage) is
irrelevant - could have been written in C, Go, Lua ... same problem that
XML libraries that arbitrarily sort (or otherwise manipulate the order of)
attributes can result in files that may not work with third-party tools.

Tim Delaney


[Python-Dev] Re: Proposal: declare "unstable APIs"

2021-06-03 Thread Tim Delaney
On Fri, 4 Jun 2021 at 03:13, Guido van Rossum  wrote:

> This is not a complete thought yet, but it occurred to me that while we
> have deprecated APIs (which will eventually go away), and provisional APIs
> (which must mature a little before they're declared stable), and stable
> APIs (which everyone can rely on), it might be good to also have something
> like *unstable* APIs, which will continually change without ever going away
> or stabilizing. Examples would be the ast module (since the AST structure
> changes every time the grammar changes) and anything to do with code
> objects and bytecode (since we sometimes think of better ways to execute
> Python).
>

Perhaps "living API" analogous to "living document". Much more positive
connotations ...

Tim Delaney


Re: [Python-Dev] [Python-checkins] peps: Pre-alpha draft for PEP 435 (enum). The name is not important at the moment, as

2013-02-25 Thread Tim Delaney
On 26 February 2013 07:32, Barry Warsaw  wrote:

> One thing I've been thinking about is allowing you to override the
> EnumValue
> class that the metaclass uses.  In that case, if you really wanted ordered
> comparisons, you could override __lt__() and friends in a custom enum-value
> class.  I haven't quite worked out in my mind how that would look, but I
> have
> a bug to track the feature request:
>
> https://bugs.launchpad.net/flufl.enum/+bug/1132976
>
> Heck, that might even allow you to implement int-derived enum values if you
> really wanted them .
>

You're starting to tread in an area that I investigated, did an
implementation of, and then started moving away from due to a different
approach (delegating to the methods in the owning Enum class when accessing
EnumValue attribtues).

I haven't touched my implementation for a couple of weeks now - been busy
with other stuff and I got a bit fatigued with the discussion so I decided
to wait until things had settled a bit. Hasn't happened yet ... ;)

I'm actually in a quandry about what way I want my enums to go. I think
each enum should have an ordinal based on the order it is defined, and
should be ordered by that ordinal. But (whether or not it inherits from int
- I'm ignoring string enums here) should __int__ and __index__ return the
ordinal, or the assigned int value (if it has one)? There are arguments
both ways. My current implementation doesn't have an ordinal at all (except
by accident in the trivial case). That was when I decided to put it aside
for a while and see where the discussion went.

Tim Delaney


Re: [Python-Dev] [Python-checkins] peps: Pre-alpha draft for PEP 435 (enum). The name is not important at the moment, as

2013-02-26 Thread Tim Delaney
On 27 February 2013 01:50, Terry Reedy  wrote:

> On 2/25/2013 12:35 PM, Ethan Furman wrote:
>
>  But this I don't, and in both mine, Ted's, and Alex's versions enums
>> from different groups do not compare equal, regardless of the underlying
>> value.  Of course, this does have the potential problem of `green == 1
>> == bee` but not `green == bee` which would be a problem with set and
>> dicts -- but I'm the only one who has brought up that issue.
>>
>
> I have not been following the discussion in detail so I missed that
> before. Breaking transitivity of equality a no-no. It is basic to thought
> and logic.
>
> decimal(0) == 0 == 0.0 != decimal(0)
> was a problem we finally fixed by removing the inequality above.
> http://bugs.python.org/issue4087
> http://bugs.python.org/issue4090
>
> We should NOT knowingly re-introduce the same problem again! If color and
> animal are isolated from each other, they should each be isolated from
> everything, including int.


FWIW the only reason I made my enums int-based (and comparable with ints)
was because I read somewhere that Guido had said that any stdlib enum would
have to be an int subclass.

I have no problems with having int-like enums that:

1. Are not int subclasses;

2. Do not compare equal with ints unless explicitly converted.

I do think an int-like enum should implement both __int__ and __index__.

Tim Delaney


Re: [Python-Dev] PEP 405 (venv) - why does it copy the DLLs on Windows

2013-03-23 Thread Tim Delaney
On 23 March 2013 23:55, Antoine Pitrou  wrote:

> On Sat, 23 Mar 2013 12:57:02 +
> Richard Oudkerk  wrote:
>
> > Also, couldn't hard links be used instead of copying?  (This will fail
> > if not on the same NTFS partition, but then one can copy as a fallback.)
>
> Hard links are generally hard to discover and debug (at least under
> Unix, but I suppose the same applies under Windows).
>

(Slightly OT, but I think useful in this case.)

That's what the Link Shell Extension <
http://schinagl.priv.at/nt/hardlinkshellext/hardlinkshellext.html> is for.

Makes it very easy to work with Hardlinks, Symbolic links, Junctions and
Volume Mountpoints. It gives different overlays for each to icons in
Explorer (and Save/Open dialogs) and adds a tab to the properties of any
link which gives details e.g. for hardlinks it displays the reference count
and all the hardlinks to the same file.

There's also a command-line version - ln <
http://schinagl.priv.at/nt/ln/ln.html>.

Highly recommended.
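Antoine's discoverability point is visible from Python itself - without a
tool like the above, the only trace of a hard link is the inode's link
count:

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "a.txt")
    dst = os.path.join(d, "b.txt")
    with open(src, "w") as f:
        f.write("data")
    os.link(src, dst)                  # hard link; same filesystem only
    nlink = os.stat(src).st_nlink      # the link count is the only hint
    same = os.path.samefile(src, dst)  # same inode reached via two names

assert nlink == 2
assert same is True
```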

Tim Delaney


Re: [Python-Dev] Semantics of __int__(), __index__()

2013-04-04 Thread Tim Delaney
On 5 April 2013 02:16, Ethan Furman  wrote:

> On 04/04/2013 08:01 AM, Chris Angelico wrote:
>
>> On Fri, Apr 5, 2013 at 1:59 AM, Guido van Rossum 
>> wrote:
>>
>>> On Thu, Apr 4, 2013 at 7:47 AM, Chris Angelico  wrote:
>>>
>>>> Is there any argument that I can pass to Foo() to get back a Bar()?
>>>> Would anyone expect there to be one? Sure, I could override __new__ to
>>>> do stupid things, but in terms of logical expectations, I'd expect
>>>> that Foo(x) will return a Foo object, not a Bar object. Why should int
>>>> be any different? What have I missed here?
>>>>
>>>
>>>
>>> A class can define a __new__ method that returns a different object. E.g.
>>> (python 3):
>>>
>>
>> Right, I'm aware it's possible. But who would expect it of a class?
>>
>
> FTR I'm in the int() should return an int camp, but to answer your
> question: my dbf module has a Table class, but it returns either a
> Db3Table, FpTable, VfpTable, or ClpTable depending on arguments (if
> creating a new one) or the type of the table in the existing dbf file.
>

I fall into:

1. int(), float(), str() etc should return that exact class (and
operator.index() should return exactly an int).

2. It could sometimes be useful for __int__() and __index__() to return a
subclass of int.

So, for the int constructor, I would have the following logic (assume
appropriate try/catch):

def __new__(cls, obj):
    i = obj.__int__()

    if type(i) is int:
        return i

    return i._internal_value
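For reference, the two protocols under discussion look like this (the
_internal_value attribute in the sketch above is pseudocode from the post;
this example only demonstrates the hooks themselves):

```python
import operator

class Ordinal:
    def __init__(self, n):
        self._n = n
    def __int__(self):          # used by int()
        return self._n
    def __index__(self):        # used for sequence indexing, slicing, hex()
        return self._n

o = Ordinal(2)
assert int(o) == 2
assert operator.index(o) == 2
assert "abc"[o] == "c"          # __index__ lets the object act as an index
```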

Tim Delaney


Re: [Python-Dev] PEP 435 -- Adding an Enum type to the Python standard library

2013-04-12 Thread Tim Delaney
On 13 April 2013 08:32, Barry Warsaw  wrote:

> On Apr 12, 2013, at 04:52 PM, R. David Murray wrote:
>
> >You are right, the problem of comparison of disparate types makes ordering
> >a non-starter.  But by the same token that means you are going to have to
> >be consistent and give up on having a sorted iteration and a stable repr:
>
> Why do you make me cry?
>

Just using definition order as the stable iteration order would do the
trick - no need for any comparisons at all. Subclasses (e.g. IntEnum) can
then override it.

You could then easily have a subclass that implemented comparisons defined
based on iteration order. It makes sense not to have this in the base Enum
class (it would be confusing).

On a related note, I really would like to have the ordinal exposed if this
were added.

Tim Delaney


Re: [Python-Dev] PEP 435 -- Adding an Enum type to the Python standard library

2013-04-20 Thread Tim Delaney
On 21 April 2013 04:10, Barry Warsaw  wrote:

> On Apr 13, 2013, at 08:37 AM, Tim Delaney wrote:
>
> >Just using definition order as the stable iteration order would do the
> >trick - no need for any comparisons at all. Subclasses (e.g. IntEnum) can
> >then override it.
>
> I think this isn't possible if we want to keep backward compatibility with
> earlier Pythons, which I want to do.


Do you want it compatible with Python 2.x? In that case I don't see a way
to do it - getting definition order relies on __prepare__ returning an
ordered dict, and __prepare__ of course is only available in 3.x.
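The mechanism being referred to, as a minimal sketch (the class and
attribute names here are mine, purely illustrative):

```python
from collections import OrderedDict

class DefOrderMeta(type):
    @classmethod
    def __prepare__(metacls, name, bases):
        return OrderedDict()        # Python 3 only: capture definition order

    def __new__(metacls, name, bases, classdict):
        cls = super().__new__(metacls, name, bases, dict(classdict))
        # Record member definition order, skipping dunders etc.
        cls._order = [k for k in classdict if not k.startswith("_")]
        return cls

class Days(metaclass=DefOrderMeta):
    Monday = 1
    Tuesday = 2
    Sunday = 7

assert Days._order == ["Monday", "Tuesday", "Sunday"]
```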

Tim Delaney


Re: [Python-Dev] PEP 435 -- Adding an Enum type to the Python standard library

2013-04-21 Thread Tim Delaney
On 21 April 2013 21:02, Greg Ewing  wrote:

> Barry Warsaw wrote:
>
>> On Apr 13, 2013, at 12:51 PM, Steven D'Aprano wrote:
>>
>
>>> class Insect(Enum):
>>>     wasp = wsap = 1
>>>     bee = 2
>>>     ant = 3
>>>
>>> What's the justification for this restriction? I have looked in the PEP,
>>> and
>>> didn't see one.
>>>
>>
>> If you allowed this, there would be no way to look up an enumeration item
>> by
>> value.  This is necessary for e.g. storing the value in a database.
>>
>
> Hm. What you really want there isn't two enum objects with
> the same value, but two names bound to the same enum object.
> Then looking it up by value would not be a problem.


If there were some way to identify the canonical name a lookup by value
would be unambiguous. If we have iteration in definition order, I'd say the
first defined name for a value should be the canonical name, and any other
name for the value should be considered an alias.

That would preclude the syntax above, but the following might be workable:

class Insect(Enum):
    wasp = 1
    bee = 2
    ant = 3

    # aliases
    wsap = wasp
    waps = 1

In the above, looking up by the value 1 would always return Insect.wasp.
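This is essentially the model the stdlib enum module eventually shipped:
the first name bound to a value is canonical, and later bindings become
aliases that resolve to the canonical member and are skipped in iteration:

```python
from enum import Enum

class Insect(Enum):
    wasp = 1
    bee = 2
    ant = 3
    wsap = 1               # duplicate value: becomes an alias, not a member

assert Insect.wsap is Insect.wasp      # alias resolves to the canonical name
assert Insect(1) is Insect.wasp        # lookup by value is unambiguous
assert [m.name for m in Insect] == ["wasp", "bee", "ant"]
```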

Tim Delaney


Re: [Python-Dev] PEP 435 -- Adding an Enum type to the Python standard library

2013-04-21 Thread Tim Delaney
On 22 April 2013 09:02, Nick Coghlan  wrote:

>
> On 22 Apr 2013 07:50, "Barry Warsaw"  wrote:
> >
> > On Apr 20, 2013, at 07:10 PM, R. David Murray wrote:
> >
> > >It seems strange to limit a new Python3 feature to the Python2 feature
> > >set.  Just saying :)
> >
> > For a critical feature sure, but I don't put __repr__ or enum item
> iteration
> > order in that category.  There's no need for gratuitous incompatibility
> > either, and attribute name order is just fine.
>
> Iteration order matters a lot if you don't want people complaining about
> enums being broken:
>
>   class Days(enum.Enum):
>       Monday = 1
>       Tuesday = 2
>       Wednesday = 3
>       Thursday = 4
>       Friday = 5
>       Saturday = 6
>       Sunday = 7
>
I'm fine with iteration order being by sorted name by default, so long as
it's easily overrideable by enum subclasses or metaclasses e.g. an IntEnum
should probably iterate in value order.

For definition order, a 3.x-only metaclass could be provided:

class Days(enum.Enum, metaclass=enum.DefinitionOrder):
    Monday = 1
    Tuesday = 2
    Wednesday = 3
    Thursday = 4
    Friday = 5
    Saturday = 6
    Sunday = 7
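For the record, the enum module as released resolved this by iterating all
Enums in definition order by default (no opt-in metaclass needed), with
IntEnum members additionally comparable as ints:

```python
from enum import IntEnum

class Days(IntEnum):
    Monday = 1
    Tuesday = 2
    Sunday = 7

# Iteration follows definition order, not name or value order
assert [d.name for d in Days] == ["Monday", "Tuesday", "Sunday"]
assert Days.Monday < Days.Sunday       # IntEnum members order like ints
```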

Tim Delaney


Re: [Python-Dev] PEP 435 -- Adding an Enum type to the Python standard library

2013-04-21 Thread Tim Delaney
On 22 April 2013 10:31, Barry Warsaw  wrote:

> On Apr 22, 2013, at 09:31 AM, Tim Delaney wrote:
>
> >I'm fine with iteration order being by sorted name by default, so long as
> >it's easily overrideable by enum subclasses or metaclasses e.g. an IntEnum
> >should probably iterate in value order.
>
> It does. :)


I knew it *did*, but wasn't sure if with the current discussion it was
going to continue to do so.


> >For definition order, a 3.x-only metaclass could be provided:
> >
> >class Days(enum.Enum, metaclass=enum.DefinitionOrder):
> >Monday = 1
> >Tuesday = 2
> >Wednesday = 3
> >Thursday = 4
> >Friday = 5
> >Saturday = 6
> >Sunday = 7
>
> Yep, that's how it works.  From flufl.enum:
>
> class IntEnumMetaclass(EnumMetaclass):
>     # Define an iteration over the integer values instead of the attribute
>     # names.
>     def __iter__(cls):
>         for key in sorted(cls._enums):
>             yield getattr(cls, cls._enums[key])
>

Would it be worthwhile storing a sorted version of the enum keys here? Or
do you think the current space vs speed tradeoff is better?

I need to grab the current flufl.enum code and see if I can easily extend
it to do some more esoteric things that my enum implementation supports
(*not* bare names, but maybe the name = ... syntax, which of course
requires the definition order metaclass). I'm in the middle of a release
cycle, so my time is somewhat limited right now :(

Tim Delaney


Re: [Python-Dev] Enumeration items: mixed types?

2013-04-30 Thread Tim Delaney
On 1 May 2013 02:27, Eli Bendersky  wrote:

>
>
>
> On Mon, Apr 29, 2013 at 5:38 PM, Greg Ewing 
> wrote:
>
>> Ethan Furman wrote:
>>
>>> I suppose the other option is to have `.value` be whatever was assigned
>>> (1, 'really big country', and (8273.199, 517) ),
>>>
>>
>> I thought that was the intention all along, and that we'd
>> given up on the idea of auto-assigning integer values
>> (because it would require either new syntax or extremely
>> dark magic).
>>
>
> Yes, Guido rejected the auto-numbering syntax a while back. The only case
> in which auto-numbering occurs (per PEP 435) is the "convenience syntax":
>
> Animal = Enum('Animal', 'fox dog cat')
>

Actually, since Guido has pronounced that definition order will be the
default, there's no reason each Enum instance couldn't have an "ordinal"
attribute.
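As shipped, the convenience syntax auto-numbers from 1 in the order the
names are given, so the value itself effectively serves as the ordinal:

```python
from enum import Enum

Animal = Enum("Animal", "fox dog cat")   # auto-numbers from 1, in order
assert [(a.name, a.value) for a in Animal] == [
    ("fox", 1), ("dog", 2), ("cat", 3)]
```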

Tim Delaney


Re: [Python-Dev] PEP-435 reference implementation

2013-05-01 Thread Tim Delaney
On 2 May 2013 02:18, Tres Seaver  wrote:

> On 05/01/2013 12:14 PM, Guido van Rossum wrote:
> > But we'd probably have to give up something else, e.g. adding methods
> > to enums, or any hope that the instance/class/subclass relationships
> > make any sense.
>
> I'd be glad to drop both of those in favor of subclassing:  I think the
> emphasis on "class-ness" makes no sense, given the driving usecases for
> adopting enums into the stdlib in the first place.   IOW, I would vote
> that real-world usecases trump hypothetical purity.
>

I have real-world use cases of enums (in java) that are essentially classes
and happen to use the enum portion purely to obtain a unique name without
explicitly supplying an ID.

In the particular use case I'm thinking of, the flow is basically like this:

1. An Enum where each instance describes the shape of a database query.
2. Wire protocol where the Enum instance name is passed.
3. At one end, the data for performing the DB query is populated.
4. At the other end, the data is extracted and the appropriate enum is used
to perform the query.

Why use an enum? By using the name in the wire protocol I'm guaranteed a
unique ID that won't change across versions (there is a requirement to only
add to the enum) but does not rely on people setting it manually - the
compiler will complain if there is a conflict, as opposed to setting
values. And having the behaviour be part of the class simplifies things
immensely.

Yes, I could do all of this without an enum (have class check that each
supplied ID is unique, etc) but the code is much clearer using the enum.

I am happy to give up subclassing of enums in order to have behaviour on
enum instances. I've always seen enums more as a container for their
instances. I do want to be able to find out what enum class a particular
enum belongs to (I've used this property in the past) and it's nice that
the enum instance is an instance of the defining class (although IMO not
required).

I see advantages to enums being subclassable, but also significant
disadvantages. For example, given the following:

class Color(Enum):
    red = 1

class MoreColor(Color):
    blue = 2

class DifferentMoreColor(Color):
    green = 2

then the only reasonable way for it to work IMO is that MoreColor contains
both (red, blue) and DifferentMoreColor contains both (red, green) and that
red is not an instance of either MoreColor or DifferentMoreColor. If you
allow subclassing, at some point either something is going to be
intuitively backwards to some people (in the above that Color.red is not an
instance of MoreColor), or is going to result in a contravariance violation.
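The stdlib resolved exactly this tension by disallowing subclassing of any
enum that defines members (only member-less mix-in enums may be
subclassed):

```python
from enum import Enum

class Color(Enum):
    red = 1

try:
    class MoreColor(Color):   # extending an enum that already has members
        blue = 2
except TypeError:
    extended = False          # the stdlib rejects this outright
else:
    extended = True

assert extended is False
```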

Tim Delaney


Re: [Python-Dev] PEP 435 - requesting pronouncement

2013-05-04 Thread Tim Delaney
Typo line 171: 

One thing I'd like to be clear in the PEP about is whether enum_type and
_EnumDict._enum_names should be documented, or whether they're considered
implementation details.

I'd like to make a subclass of Enum that accepts ... for auto-valued enums
but that requires subclassing the metaclass and access to
classdict._enum_names. I can get to enum_type via type(Enum), but
_EnumDict._enum_names requires knowing the attribute. It would be sufficient
for my purposes if it was just documented that the passed classdict had a
_enum_names attribute.

In testing the below, I've also discovered a bug in the reference
implementation - currently it will not handle an __mro__ like:

(, , , ,
)

Apply the following patch to make that work:

diff -r 758d43b9f732 ref435.py
--- a/ref435.py Fri May 03 18:59:32 2013 -0700
+++ b/ref435.py Sun May 05 09:23:25 2013 +1000
@@ -116,7 +116,11 @@
             if bases[-1] is Enum:
                 obj_type = bases[0]
             else:
-                obj_type = bases[-1].__mro__[1]    # e.g. (IntEnum, int, Enum, object)
+                for base in bases[-1].__mro__:
+                    if not issubclass(base, Enum):
+                        obj_type = base
+                        break
+
         else:
             obj_type = object
         # save enum items into separate mapping so they don't get baked into

My auto-enum implementation follows (it uses the above patch - without it
you can get essentially the same results with class AutoIntEnum(int, Enum,
metaclass=auto_enum)).

class auto_enum(type(Enum)):
def __new__(metacls, cls, bases, classdict):
temp = type(classdict)()
names = set(classdict._enum_names)
i = 0

for k in classdict._enum_names:
v = classdict[k]

if v is Ellipsis:
v = i
else:
i = v

i += 1
temp[k] = v

for k, v in classdict.items():
if k not in names:
temp[k] = v

return super(auto_enum, metacls).__new__(metacls, cls, bases, temp)

class AutoNumberedEnum(Enum, metaclass=auto_enum):
pass

class AutoIntEnum(IntEnum, metaclass=auto_enum):
pass

class TestAutoNumber(AutoNumberedEnum):
a = ...
b = 3
c = ...

class TestAutoInt(AutoIntEnum):
a = ...
b = 3
c = ...

print(TestAutoNumber, list(TestAutoNumber))
print(TestAutoInt, list(TestAutoInt))

-- Run ----------
 [, ,
]
 [, ,
]

Tim Delaney
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 435 - requesting pronouncement

2013-05-04 Thread Tim Delaney
On 5 May 2013 10:49, Eli Bendersky  wrote:

>
> On Sat, May 4, 2013 at 4:27 PM, Tim Delaney 
> wrote:
>
>> Typo line 171: 
>>
>>
> Fixed, thanks.
>
>
>
>> One thing I'd like to be clear in the PEP about is whether enum_type and
>> _EnumDict._enum_names should be documented, or whether they're considered
>> implementation details.
>>
>>
> No, they should not. Not only are they implementation details, they are
> details of the *reference implementation*, not the actual stdlib module.
> The reference implementation will naturally serve as a basis for the stdlib
> module, but it still has to undergo a review in which implementation
> details can change. Note that usually we do not document implementation
> details of stdlib modules, but this doesn't prevent some people from using
> them if they really want to.
>

I think it would be useful to have some guaranteed method for a
sub-metaclass to get the list of enum keys before calling the base class
__new__. Not being able to do so removes a large number of possible
extensions (like auto-numbering).


> In testing the below, I've also discovered a bug in the reference
>> implementation - currently it will not handle an __mro__ like:
>>
>
> Thanks! Tim - did you sign the contributor CLA for Python? Since the
> reference implementation is aimed for becoming the stdlib enum eventually,
> we'd probably need you to sign that before we can accept patches from you.
>

I have now (just waiting on the confirmation email). Haven't submitted a
patch since the CLAs were started ...

Tim Delaney
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] CLA link from bugs.python.org

2013-05-04 Thread Tim Delaney
It appears there's no obvious link from bugs.python.org to the contributor
agreement - you need to go via the unintuitive link Foundation ->
Contribution Forms (and from what I've read, you're prompted when you add a
patch to the tracker).

I'd suggest that if the "Contributor Form Received" field is "No" in user
details, there be a link to http://www.python.org/psf/contrib/.

Tim Delaney
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 435 - requesting pronouncement

2013-05-04 Thread Tim Delaney
On 5 May 2013 11:22, Tim Delaney  wrote:

> On 5 May 2013 10:49, Eli Bendersky  wrote:
>
>>
>> On Sat, May 4, 2013 at 4:27 PM, Tim Delaney 
>> wrote:
>>
>>> Typo line 171: 
>>>
>>>
>> Fixed, thanks.
>>
>>
>>
>>> One thing I'd like to be clear in the PEP about is whether enum_type and
>>> _EnumDict._enum_names should be documented, or whether they're considered
>>> implementation details.
>>>
>>>
>> No, they should not. Not only are they implementation details, they are
>> details of the *reference implementation*, not the actual stdlib module.
>> The reference implementation will naturally serve as a basis for the stdlib
>> module, but it still has to undergo a review in which implementation
>> details can change. Note that usually we do not document implementation
>> details of stdlib modules, but this doesn't prevent some people from using
>> them if they really want to.
>>
>
> I think it would be useful to have some guaranteed method for a
> sub-metaclass to get the list of enum keys before calling the base class
> __new__. Not being able to do so removes a large number of possible
> extensions (like auto-numbering).
>

 I've been able to achieve the auto-numbering without relying on the
internal implementation at all (with a limitation), with a single change to
enum_type.__new__. My previous patch was slightly wrong - fix below as
well. All existing tests pass. BTW, for mix-ins it's required that they
have __slots__ = () - might want to mention that in the PEP.

diff -r 758d43b9f732 ref435.py
--- a/ref435.py Fri May 03 18:59:32 2013 -0700
+++ b/ref435.py Sun May 05 13:10:11 2013 +1000
@@ -116,7 +116,17 @@
 if bases[-1] is Enum:
 obj_type = bases[0]
 else:
-obj_type = bases[-1].__mro__[1] # e.g. (IntEnum, int, Enum, object)
+obj_type = None
+
+for base in bases:
+for c in base.__mro__:
+if not issubclass(c, Enum):
+obj_type = c
+break
+
+if obj_type is not None:
+break
+
 else:
 obj_type = object
 # save enum items into separate mapping so they don't get baked into
@@ -142,6 +152,7 @@
 if obj_type in (object, Enum):
 enum_item = object.__new__(enum_class)
 else:
+value = obj_type.__new__(obj_type, value)
 enum_item = obj_type.__new__(enum_class, value)
 enum_item._value = value
 enum_item._name = e

Implementation:

class AutoInt(int):
__slots__ = ()  # Required

def __new__(cls, value):
if value is Ellipsis:
try:
i = cls._auto_number
except AttributeError:
i = cls._auto_number = 0

else:
i = cls._auto_number = value

cls._auto_number += 1
return int.__new__(cls, i)

class AutoIntEnum(AutoInt, IntEnum):
pass

class TestAutoIntEnum(AutoIntEnum):
a = ...
b = 3
c = ...

print(TestAutoIntEnum, list(TestAutoIntEnum))

-- Run --
 [, ,
]

The implementation is not quite as useful - there's no immediately-obvious
way to have an auto-numbered enum that is not also an int enum e.g. if you
define class AutoNumberedEnum(AutoInt, Enum) it's still an int subclass.
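
For the record, an auto-numbered enum that is not an int subclass did turn
out to be possible with the enum API as it eventually shipped in Python 3.4 -
this is essentially the AutoNumber recipe from the stdlib enum documentation,
included here only as a sketch (it is not the reference implementation being
discussed):

```python
import enum

class AutoNumber(enum.Enum):
    # Sketch, assuming the stdlib enum module (Python 3.4+): a custom
    # __new__ must set _value_ itself; members are numbered in
    # definition order, and instances are NOT int subclasses.
    def __new__(cls):
        value = len(cls.__members__) + 1  # next ordinal
        obj = object.__new__(cls)
        obj._value_ = value
        return obj

class Color(AutoNumber):
    red = ()
    green = ()
    blue = ()
```

With this, Color.red.value == 1 and Color.red is not an int.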

Tim Delaney
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 435 - requesting pronouncement

2013-05-04 Thread Tim Delaney
On 5 May 2013 13:11, Tim Delaney  wrote:

> @@ -142,6 +152,7 @@
>  if obj_type in (object, Enum):
>  enum_item = object.__new__(enum_class)
>  else:
> +value = obj_type.__new__(obj_type, value)
>  enum_item = obj_type.__new__(enum_class, value)
>  enum_item._value = value
>  enum_item._name = e
>

Bugger - this is wrong (it didn't feel right to me) - I'm sure it's only
working for me by accident. Need to think of something better.

Tim Delaney
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 435 - requesting pronouncement

2013-05-04 Thread Tim Delaney
On 5 May 2013 13:32, Ethan Furman  wrote:

> On 05/04/2013 08:11 PM, Tim Delaney wrote:
>
>>
>>   I've been able to achieve the auto-numbering without relying on the
>> internal implementation at all (with a
>> limitation), with a single change to enum_type.__new__. My previous patch
>> was slightly wrong - fix below as well. All
>> existing tests pass. BTW, for mix-ins it's required that they have
>> __slots__ = () - might want to mention that in the PEP.
>>
>
> What happens without `__slots__ = ()` ?
>

Traceback (most recent call last):
  File "D:\Development\ref435\ref435.py", line 311, in 
class AutoIntEnum(AutoInt, IntEnum):
  File "D:\Development\ref435\ref435.py", line 138, in __new__
enum_class = type.__new__(metacls, cls, bases, classdict)
TypeError: multiple bases have instance lay-out conflict
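The conflict comes from each base class trying to add its own instance
layout (a __dict__ slot) on top of int's. A minimal sketch, independent of
enum, showing why `__slots__ = ()` on the mix-in avoids it:

```python
class WithDict(int):
    pass                # adds a __dict__ slot to the instance layout

class Slotted(int):
    __slots__ = ()      # adds nothing - layout stays exactly int's

class Other(int):
    pass                # also adds a __dict__ slot

try:
    # Two bases that each extend int's layout cannot be combined.
    class Bad(WithDict, Other):
        pass
except TypeError as e:
    conflict = str(e)   # "multiple bases have instance lay-out conflict"

# A mix-in with empty __slots__ is layout-compatible with the other base.
class Good(Slotted, Other):
    pass
```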

Tim Delaney
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] PEP 435 - reference implementation discussion

2013-05-04 Thread Tim Delaney
Split off from the PEP 435 - requesting pronouncement thread.

Think I've come up with a system that works for my auto-numbering case
without knowing the internals of enum_type. Patch passes all existing test
cases. The patch does two things:

1. Finds the first non-Enum class on the MRO of the new class and uses that
as the enum type.

2. Instead of directly setting the _name and _value of the enum_item, it
lets the Enum class do it via Enum.__init__(). Subclasses can override
this. This gives Enums a 2-phase construction just like other classes.

diff -r 758d43b9f732 ref435.py
--- a/ref435.py Fri May 03 18:59:32 2013 -0700
+++ b/ref435.py Sun May 05 13:43:56 2013 +1000
@@ -116,7 +116,17 @@
 if bases[-1] is Enum:
 obj_type = bases[0]
 else:
-obj_type = bases[-1].__mro__[1] # e.g. (IntEnum, int, Enum, object)
+obj_type = None
+
+for base in bases:
+for c in base.__mro__:
+if not issubclass(c, Enum):
+obj_type = c
+break
+
+if obj_type is not None:
+break
+
 else:
 obj_type = object
 # save enum items into separate mapping so they don't get baked into
@@ -143,8 +153,7 @@
 enum_item = object.__new__(enum_class)
 else:
 enum_item = obj_type.__new__(enum_class, value)
-enum_item._value = value
-enum_item._name = e
+enum_item.__init__(e, value)
 enum_map[e] = enum_item
 enum_class.__aliases__ = aliases  # non-unique enums names
 enum_class._enum_names = enum_names # enum names in definition order
@@ -232,6 +241,10 @@
 return enum
 raise ValueError("%s is not a valid %s" % (value, cls.__name__))

+def __init__(self, name, value):
+self._name = name
+self._value = value
+
 def __repr__(self):
 return "<%s.%s: %r>" % (self.__class__.__name__, self._name,
self._value)

Auto-int implementation:

class AutoInt(int):
__slots__ = ()

def __new__(cls, value):
if value is Ellipsis:
try:
i = cls._auto_number
except AttributeError:
i = cls._auto_number = 0

else:
i = cls._auto_number = value

cls._auto_number += 1
return int.__new__(cls, i)

class AutoIntEnum(AutoInt, IntEnum):
def __init__(self, name, value):
super(AutoIntEnum, self).__init__(name, int(self))

class TestAutoIntEnum(AutoIntEnum):
a = ...
b = 3
c = ...

class TestAutoIntEnum2(AutoIntEnum):
a = ...
b = ...
c = ...

print(TestAutoIntEnum, list(TestAutoIntEnum))
print(TestAutoIntEnum2, list(TestAutoIntEnum2))

------ Run ------
 [, ,
]
 [, , ]
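
As it happens, two-phase construction is essentially what the stdlib enum
ended up with: each member's __init__ is called with the arguments that
produced its value. A sketch against the shipped API (Python 3.4+), based on
the Planet example from the enum documentation - not the reference
implementation patched above:

```python
import enum

class Planet(enum.Enum):
    MERCURY = (3.303e+23, 2.4397e6)
    EARTH = (5.976e+24, 6.37814e6)

    def __init__(self, mass, radius):
        # Second phase: the value tuple is unpacked into __init__,
        # which can set extra per-member state.
        self.mass = mass        # in kilograms
        self.radius = radius    # in meters
```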

Tim Delaney
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 435 - reference implementation discussion

2013-05-05 Thread Tim Delaney
On 5 May 2013 16:17, Ethan Furman  wrote:

> On 05/04/2013 10:59 PM, Ethan Furman wrote:
>
>> On 05/04/2013 08:50 PM, Tim Delaney wrote:
>>
>>> 2. Instead of directly setting the _name and _value of the enum_item, it
>>> lets the Enum class do it via Enum.__init__().
>>>
>> Subclasses can override this. This gives Enums a 2-phase construction
>>> just like other classes.
>>>
>>
>> Not sure I care for this.  Enums are, at least in theory, immutable
>> objects, and immutable objects don't call __init__.
>>
>
> Okay, still thinking about `value`, but as far as `name` goes, it should
> not be passed -- it must be the same as it was in the class definition
>

Agreed - name should not be passed.

I would have preferred to use __new__, but Enum.__new__ doesn't get called
at all from enum_type (and the implementation wouldn't be at all
appropriate anyway).

Tim Delaney
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 435: initial values must be specified? Yes

2013-05-05 Thread Tim Delaney
On 6 May 2013 06:09, Ethan Furman  wrote:

> On 05/05/2013 10:07 AM, � wrote:
>> I'm chiming in late, but am I the only one who's really bothered by the
>> syntax?
>
>>
>> class Color(Enum):
>>  red = 1
>>  green = 2
>>  blue = 3
>>
>
> No, you are not only one that's bothered by it.  I tried it without
> assignments until I discovered that bugs are way too easy to introduce.
>  The problem is a successful name lookup looks just like a name failure,
> but of course no error is raised and no new enum item is created:
>
> --> class Color(Enum):
> ... red, green, blue
> ...
>
> --> class MoreColor(Color):
> ... red, orange, yellow
> ...
>
> --> type(MoreColor.red) is MoreColor
> False
>
> --> MoreColor.orange
># value should be 5
>

Actually, my implementation at  https://bitbucket.org/magao/enum (the one
mentioned in the PEP) does detect MoreColor.red as a duplicate. It's
possible to do it, but it's definitely black magic and also involves use of
sys._getframe() for more than just getting module name.

>>> from enum import Enum
>>> class Color(Enum):
... red, green, blue
...
>>> class MoreColor(Color):
... red, orange, yellow
...
Traceback (most recent call last):
  File "", line 1, in 
  File ".\enum.py", line 388, in __new__
raise AttributeError("Duplicate enum key '%s.%s' (overriding '%s')" %
(result.__name__, v.key, keys[v.key]))
AttributeError: Duplicate enum key 'MoreColor.red' (overriding 'Color.red')
>>>
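
In Python 3 the same duplicate check can be done without frame magic, by
having the metaclass's __prepare__ dict reject names already defined in a
base class. A sketch only (this is not the bitbucket implementation, which
also supports the bare-name syntax quoted above):

```python
class _CheckedDict(dict):
    """Class-body dict that rejects redefinition of inherited enum keys."""
    def __init__(self, inherited):
        super().__init__()
        self.inherited = set(inherited)

    def __setitem__(self, key, value):
        if not key.startswith('_') and (key in self or key in self.inherited):
            raise AttributeError("Duplicate enum key %r" % key)
        super().__setitem__(key, value)

class DupCheckMeta(type):
    @classmethod
    def __prepare__(metacls, name, bases):
        # Seed the class-body dict with the names of all base enum keys.
        inherited = set()
        for base in bases:
            inherited.update(getattr(base, '_enum_names', ()))
        return _CheckedDict(inherited)

    def __new__(metacls, name, bases, classdict):
        cls = super().__new__(metacls, name, bases, dict(classdict))
        cls._enum_names = ({k for k in classdict if not k.startswith('_')}
                           | classdict.inherited)
        return cls

class Color(metaclass=DupCheckMeta):
    red = 1
    green = 2
```

With this, `class MoreColor(Color): red = 4` raises AttributeError at
class-definition time, matching the behaviour shown in the traceback above.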

So long as I can get one of the requirements documented to implement an
auto-number syntax I'll be happy enough with stdlib enums I think.

class Color(AutoIntEnum):
red = ...
green = ...
blue = ...

Not as pretty, but ends up being less magical.
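
That syntax is achievable against the enum API as it eventually shipped,
without touching the metaclass, via a custom __new__ - a sketch only
(assuming the Python 3.4+ stdlib enum, not the reference implementation):

```python
import enum

class AutoIntEnum(int, enum.Enum):
    # Sketch: `...` means "previous value + 1", starting at 0, matching
    # the AutoIntEnum semantics above. A custom __new__ must set _value_.
    def __new__(cls, value):
        if value is Ellipsis:
            value = getattr(cls, '_auto_next', 0)
        obj = int.__new__(cls, value)
        obj._value_ = value
        cls._auto_next = value + 1  # remember where to continue from
        return obj

class Color(AutoIntEnum):
    red = ...
    green = 3
    blue = ...
```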

Tim Delaney
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 435: initial values must be specified? Yes

2013-05-05 Thread Tim Delaney
On 6 May 2013 08:00, Guido van Rossum  wrote:

> On Sun, May 5, 2013 at 2:55 PM, Tim Delaney 
> wrote:
> > So long as I can get one of the requirements documented to implement an
> > auto-number syntax I'll be happy enough with stdlib enums I think.
>
> Specifically what do you want the PEP to promise?
>

It was mentioned in the other threads, but the requirement is either:

1. That the dictionary returned from .__prepare__ provide a
way to obtain the enum instance names once it's been populated (e.g. once
it's been passed as the classdict to __new__). The reference implementation
provides a _enum_names list attribute. The enum names need to be available
to a metaclass subclass before calling the base metaclass __new__.

OR

2. A way for subclasses of Enum to modify the value before it's assigned to
the actual enum - see the PEP 435 reference implementation - discussion
thread where I modified the reference implementation to give enum instances
2-phase construction, passing the value to Enum.__init__. This way is more
limited, as you need to use an appropriate mix-in type which puts certain
constraints on the behaviour of the enum instances (e.g. they *have* to be
int instances for auto-numbering). The implementation is also more complex,
and as noted in that thread, __init__ might not be appropriate for an Enum.

Tim Delaney
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 435: initial values must be specified? Yes

2013-05-05 Thread Tim Delaney
On 6 May 2013 08:55, Eli Bendersky  wrote:

> 1. That the dictionary returned from .__prepare__ provide
> a way to obtain the enum instance names once it's been populated (e.g. once
> it's been passed as the classdict to __new__). The reference implementation
> provides a _enum_names list attribute. The enum names need to be available
> to a metaclass subclass before calling the base metaclass __new__.
>
>> So your preferred solution is (1), which requires exposing the metaclass
>> and an attribute publicly? I have to ask - to what end? What is the goal of
>> this? To have an AutoNumberedEnum which is guaranteed to be compatible with
>> stdlib's Enum?
>>
>
My preferred solution is 1 (for the reason mentioned above) but it does not
require exposing the metaclass publically (that's obtainable via
type(Enum)). It does require a way to get the enum names before calling the
base metaclass __new__, but that does not necessarily imply that I'm
advocating exposing _enum_names (or at least, not directly).

My preferred way would probably be a note that the dictionary returned from
the enum metaclass __prepare__ implements an enum_names() or maybe
__enum_names__() method which returns an iterator over the enum instance
names in definition order. The way this is implemented by the dictionary
would be an implementation detail.

The enum metaclass __new__ needs access to the enum instance names in
definition order, so I think making it easily available to enum metaclass
subclasses as well just makes sense.
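
The mechanism itself is only a few lines of metaclass code. A generic sketch
(deliberately not the reference implementation's _EnumDict) of a __prepare__
dict that records definition order and exposes it to metaclass subclasses:

```python
class NamesDict(dict):
    """Class-body dict that records plain-name assignments in order."""
    def __init__(self):
        super().__init__()
        self.member_names = []

    def __setitem__(self, key, value):
        if not key.startswith('_') and key not in self:
            self.member_names.append(key)
        super().__setitem__(key, value)

class OrderedMeta(type):
    @classmethod
    def __prepare__(metacls, name, bases):
        return NamesDict()

    def __new__(metacls, name, bases, classdict):
        # A metaclass subclass can read classdict.member_names *before*
        # calling super().__new__() - e.g. to auto-number the members.
        cls = super().__new__(metacls, name, bases, dict(classdict))
        cls._enum_names = list(classdict.member_names)
        return cls

class Color(metaclass=OrderedMeta):
    red = 1
    green = 2
    blue = 3
```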

Tim Delaney
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 435 - reference implementation discussion

2013-05-05 Thread Tim Delaney
On 5 May 2013 21:58, Tim Delaney  wrote:

> On 5 May 2013 16:17, Ethan Furman  wrote:
>
>> On 05/04/2013 10:59 PM, Ethan Furman wrote:
>>
>>> On 05/04/2013 08:50 PM, Tim Delaney wrote:
>>>
>>>> 2. Instead of directly setting the _name and _value of the enum_item,
>>>> it lets the Enum class do it via Enum.__init__().
>>>>
>>> Subclasses can override this. This gives Enums a 2-phase construction
>>>> just like other classes.
>>>>
>>>
>>> Not sure I care for this.  Enums are, at least in theory, immutable
>>> objects, and immutable objects don't call __init__.
>>>
>>
>> Okay, still thinking about `value`, but as far as `name` goes, it should
>> not be passed -- it must be the same as it was in the class definition
>>
>
> Agreed - name should not be passed.
>
> I would have preferred to use __new__, but Enum.__new__ doesn't get called
> at all from enum_type (and the implementation wouldn't be at all
> appropriate anyway).
>

*If* I can manage to convince Guido and Eli over in that other (initial
values) thread, I think it's still probably worthwhile calling __init__ on
the enum instance, but with no parameters. That would allow more
behaviour-based enums to set up any other initial state they require.

Tim Delaney
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 435: initial values must be specified? Yes

2013-05-06 Thread Tim Delaney
On 7 May 2013 12:29, Ethan Furman  wrote:

> On 05/05/2013 02:55 PM, Tim Delaney wrote:
>
>>
>> So long as I can get one of the requirements documented to implement an
>> auto-number syntax I'll be happy enough with
>> stdlib enums I think.
>>
>> class Color(AutoIntEnum):
>>  red = ...
>>  green = ...
>>  blue = ...
>>
>>
> Will this do?
>
> class AutoNumber(Enum):
> def __new__(cls):
> value = len(cls.__enum_info__) + 1
> obj = object.__new__(cls)
> obj._value = value
> return obj
> def __int__(self):
> return self._value
> class Color(AutoNumber):
> red = ()
> green = ()
> blue = ()


Considering that doesn't actually work with the reference implementation
(AutoNumber.__new__ is never called) ... no.

print(Color.red._value)
print(int(Color.red))

-- Run Python3 --
()
Traceback (most recent call last):
  File "D:\home\repos\mercurial\ref435\ref435.py", line 292, in 
print(int(Color.red))
TypeError: __int__ returned non-int (type tuple)

Plus I would not want to use the empty tuple for the purpose - at least ...
implies something ongoing.

Tim Delaney
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 435: initial values must be specified? Yes

2013-05-06 Thread Tim Delaney
On 7 May 2013 15:14, Tim Delaney  wrote:

> D'oh! I had my default path being my forked repo ... so didn't see the
> changes. BTW I can't see how that exact implementation passes ... not
> enough parameters declared in AutoNumber.__new__ ...
>

Sorry - my fault again - I'd already changed () to ...

Tim Delaney
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 435: initial values must be specified? Yes

2013-05-06 Thread Tim Delaney
On 7 May 2013 15:14, Tim Delaney  wrote:

> Unfortunately, if you subclass AutoNumber from IntEnum it breaks.
>
> -- Run Python3 --
> Traceback (most recent call last):
>   File "D:\home\repos\mercurial\ref435\ref435.py", line 346, in 
> class Color(AutoNumber):
>   File "D:\home\repos\mercurial\ref435\ref435.py", line 184, in __new__
> enum_item = __new__(enum_class, *args)
> TypeError: int() argument must be a string or a number, not 'ellipsis'
>

Or using your exact implementation, but subclassing AutoNumber from IntEnum:

class AutoNumber(IntEnum):
def __new__(cls):
value = len(cls.__enum_info__) + 1
obj = object.__new__(cls)
obj._value = value
return obj
def __int__(self):
return self._value
class Color(AutoNumber):
red = ()
green = ()
blue = ()

print(repr(Color.red))

-- Run Python3 --


Tim Delaney
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 435: initial values must be specified? Yes

2013-05-06 Thread Tim Delaney
On 7 May 2013 14:54, Ethan Furman  wrote:

> On 05/06/2013 07:58 PM, Tim Delaney wrote:
>
>>
>> Considering that doesn't actually work with the reference implementation
>> (AutoNumber.__new__ is never called) ... no.
>>
>
> Two points:
>
>   1) Did you grab the latest code?  That exact implementation passes in
> the tests.
>

D'oh! I had my default path being my forked repo ... so didn't see the
changes. BTW I can't see how that exact implementation passes ... not
enough parameters declared in AutoNumber.__new__ ...


>   2) You can write your __new__ however you want -- use ... !  ;)


class AutoNumber(Enum):
def __new__(cls, value):
if value is Ellipsis:
try:
value = cls._auto_number
except AttributeError:
value = cls._auto_number = 0
else:
cls._auto_number = int(value)

obj = object.__new__(cls)
obj._value = value
cls._auto_number += 1
return obj

def __int__(self):
return self._value

class Color(AutoNumber):
red = ...
green = 3
blue = ...

print(repr(Color.red))
print(repr(Color.green))
print(repr(Color.blue))

-- Run Python3 --




Unfortunately, if you subclass AutoNumber from IntEnum it breaks.

-- Run Python3 --
Traceback (most recent call last):
  File "D:\home\repos\mercurial\ref435\ref435.py", line 346, in 
class Color(AutoNumber):
  File "D:\home\repos\mercurial\ref435\ref435.py", line 184, in __new__
enum_item = __new__(enum_class, *args)
TypeError: int() argument must be a string or a number, not 'ellipsis'

I would probably also suggest 2 changes:

1. Set enum_item._name before calling enum_item.__init__.

2. Don't pass any arguments to enum_item.__init__ - the value should be set
in enum_item.__new__.

Tim Delaney
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Status of 3.2 in Hg repository?

2013-08-21 Thread Tim Delaney
On 22 August 2013 05:34, Tim Peters  wrote:

> Anyone know a reason not to do:
>
> hg -y merge --tool=internal:fail 3.2
>
> instead?  I saw that idea on some Hg wiki.


That would be from
http://mercurial.selenic.com/wiki/TipsAndTricks#Keep_.22My.22_or_.22Their.22_files_when_doing_a_merge.
I think it's a perfectly reasonable approach.

I expanded on it a little to make it more general (to choose which parent
to discard) in
http://stackoverflow.com/questions/14984793/mercurial-close-default-branch-and-replace-with-a-named-branch-as-new-default/14991454#14991454
.

Tim Delaney
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Add a "transformdict" to collections

2013-09-11 Thread Tim Delaney
On 12 September 2013 02:03, Ethan Furman  wrote:

> On 09/11/2013 08:49 AM, Victor Stinner wrote:
>
>> 2013/9/11 Ethan Furman :
>>
>>> He isn't keeping the key unchanged (notice no white space in MAPPING),
>>> he's
>>> merely providing a function that will automatically strip the whitespace
>>> from key lookups.
>>>
>>
>> transformdict keeps the key unchanged, see the first message:
>>
>> >>> d = transformdict(str.lower)
>> >>> d['Foo'] = 5
>> >>> d['foo']
>> 5
>> >>> d['FOO']
>> 5
>> >>> list(d)
>> ['Foo']
>>
>
That seems backwards to me. I would think that retrieving the keys from the
dict would return the transformed keys (I'd call them canonical keys). That
way there's no question about which key is stored - it's *always* the
transformed key.

In fact, I think this might get more traction if it were referred to as a
canonicalising dictionary (bikeshedding, I know).
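
A canonicalising dict is a small amount of code - a sketch (CanonicalDict
here is hypothetical, and deliberately differs from the proposed
transformdict in that only the canonical key is ever stored or returned):

```python
class CanonicalDict(dict):
    """dict that applies a canonicalising function to every key."""
    def __init__(self, canonicalise):
        super().__init__()
        self._canon = canonicalise

    def __setitem__(self, key, value):
        super().__setitem__(self._canon(key), value)

    def __getitem__(self, key):
        return super().__getitem__(self._canon(key))

    def __contains__(self, key):
        return super().__contains__(self._canon(key))

    def get(self, key, default=None):
        return super().get(self._canon(key), default)

    def setdefault(self, key, default=None):
        return super().setdefault(self._canon(key), default)
```

So with d = CanonicalDict(str.lower), after d['Foo'] = 5 both d['FOO'] and
d['foo'] are 5, and list(d) is ['foo'] - the canonical key, not 'Foo'.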

Tim Delaney
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Add a "transformdict" to collections

2013-09-12 Thread Tim Delaney
On 13 September 2013 07:29, Tim Delaney  wrote:

>
> In this case though, there are two pieces of information:
>
> 1. A canonical key (which may or may not equal the original key);
>
> 2. The original key.
>
> It seems to me then that TransformDict is a specialised case of
> CanonicalDict, where the canonical key is defined to be the first key
> inserted. It would in fact be possible (though inefficient) to implement
> that using a canonicalising callable that maintained state - something like
> (untested):
>
> class OriginalKeys:
> def __init__(self):
> self.keys = CanonicalDict(str.lower)
>
> def __call__(self, key):
> return self.keys.setdefault(key, key)
>
> class OriginalKeyDict(CanonicalDict):
> def __init__(self):
> super().__init__(OriginalKeys())
>

Bah - got myself mixed up with original key and case preserving there ...
try this:

class OriginalKeys:
def __init__(self, func):
self.keys = CanonicalDict(func)

def __call__(self, key):
return self.keys.setdefault(key, key)

class OriginalKeyDict(CanonicalDict):
def __init__(self, func):
super().__init__(OriginalKeys(func))

class IdentityDict(OriginalKeyDict):
def __init__(self):
super().__init__(id)

class CasePreservingDict(OriginalKeyDict):
def __init__(self):
super().__init__(str.lower)

Tim Delaney
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Add a "transformdict" to collections

2013-09-12 Thread Tim Delaney
On 13 September 2013 01:40, Antoine Pitrou  wrote:

> Le Thu, 12 Sep 2013 08:05:44 -0700,
> Ethan Furman  a écrit :
> > On 09/12/2013 07:43 AM, Antoine Pitrou wrote:
> > >
> > > Yeah, so this is totally silly. What you're basically saying is "we
> > > don't need TransformDict since people can re-implement it
> > > themselves".
> >
> > No, what I'm saying is that the "case-preserving" aspect of
> > transformdict is silly.  The main point of transformdict is to
> > enable, for example, 'IBM', 'Ibm', and 'ibm' to all match up as the
> > same key.  But why?  Because you don't trust the user data.  And if
> > you don't trust the user data you have to add the correct version of
> > the key yourself before you ever process that data, which means you
> > already have the correct version stored somewhere.
>
> That's assuming there is an a priori "correct" version. But there might
> not be any. Keeping the original key is important for different reasons
> depending on the use case:
>
> - for case-insensitive dicts, you want to keep the original key for
>   presentation, logging and debugging purposes (*)
>
> - for identity dicts, the original key is mandatory because the id()
>   value in itself is completely useless, it's just used for matching
>
> (*) For a well-known example of such behaviour, think about Windows
> filesystems.
>

In this case though, there are two pieces of information:

1. A canonical key (which may or may not equal the original key);

2. The original key.

It seems to me then that TransformDict is a specialised case of
CanonicalDict, where the canonical key is defined to be the first key
inserted. It would in fact be possible (though inefficient) to implement
that using a canonicalising callable that maintained state - something like
(untested):

class OriginalKeys:
def __init__(self):
    self.keys = CanonicalDict(str.lower)

def __call__(self, key):
return self.keys.setdefault(key, key)

class OriginalKeyDict(CanonicalDict):
def __init__(self):
super().__init__(OriginalKeys())

Tim Delaney
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] On the dangers of giving developers the best resources

2013-10-08 Thread Tim Delaney
On 9 October 2013 03:35, Guido van Rossum  wrote:

> On Tue, Oct 8, 2013 at 8:33 AM, R. David Murray wrote:
>
>> PS: I have always thought it sad that the ready availability of memory,
>> CPU speed, and disk space tends to result in lazy programs.  I understand
>> there is an effort/value tradeoff, and I make those tradeoffs myself
>> all the time...but it still makes me sad.  Then, again, in my early
>> programming days I spent a fair amount of time writing and using Forth,
>> and that probably colors my worldview. :)
>>
>
> I never used or cared for Forth, but I have the same worldview. I remember
> getting it from David Rosenthal, an early Sun reviewer. He stated that
> engineers should be given the smallest desktop computer available, not the
> largest, so they would feel their users' pain and optimize appropriately.
> Sadly software vendors who are also hardware vendors have incentives going
> in the opposite direction -- they want users to feel the pain so they'll
> buy a new device.
>

I look at it a different way. Developers should be given powerful machines
to speed up the development cycle (especially important when prototyping
and in the code/run unit test cycle), but everything should be tested on
the smallest machine available.

It's also a good idea for each developer to have a resource-constrained
machine for developer testing/profiling/etc. Virtual machines work quite
well for this - you can modify the resource constraints (CPU, memory, etc)
to simulate different scenarios.

I find that this tends to better promote the methodology of "make it right,
then make it fast (small, etc)", which I subscribe to. Optimising too early
leads to all your code being complicated, rather than just the bits that
need it.

Tim Delaney


Re: [Python-Dev] On the dangers of giving developers the best resources

2013-10-08 Thread Tim Delaney
On 9 October 2013 07:38, Guido van Rossum  wrote:

> Let's agree to disagree then. I see your methodology used all around me
> with often problematic results. Maybe devs should have two machines -- one
> monster that is *only* usable to develop fast, one small that where they
> run their own apps (email, web browser etc.).
>

I've done that before too - it works quite well (especially if you set them
up to use a single keyboard/mouse).

I suspect the main determination of whether a fast machine as the primary
development machine works better depends heavily on the developer and what
their background is. I've also worked in resource-constrained environments,
so I'm always considering the impact of my choices, even when I go for the
less complicated option initially.

I've also been fortunate to mainly work in places where software
development was considered a craft, with pride in what we produced.
However, I think I should probably reconsider my viewpoint in light of my
current employment ... I despair at some of the code I see ...

Tim Delaney


Re: [Python-Dev] On the dangers of giving developers the best resources

2013-10-08 Thread Tim Delaney
On 9 October 2013 09:10, Guido van Rossum  wrote:

> It's not actually so much the extreme waste that I'm looking to expose,
> but rather the day-to-day annoyances of stuff you use regularly that slows
> you down by just a second (or ten), or things that gets slower at each
> release.
>

Veering off-topic (but still related) ...

There's a reason I turn off all animations when I set up a machine for
someone ... I've found turning off the animations is the quickest way to
make a machine feel faster - even better than adding an SSD. The number of
times I've fixed a "slow" machine by this one change ...

I think everyone even remotely involved in the existence of animations in
the OS should be forced to have the slowest animations turned on at all
times, no matter the platform (OSX, Windows, Linux ...). Which comes back
to the idea of developers having slow machines so they feel the pain ...

Tim Delaney


Re: [Python-Dev] cpython: Rename contextlib.ignored() to contextlib.ignore().

2013-10-15 Thread Tim Delaney
On 16 October 2013 05:17, Alexander Belopolsky <
alexander.belopol...@gmail.com> wrote:

> On Tue, Oct 15, 2013 at 12:45 PM, Ethan Furman  wrote:
> > with trap(OSError) as cm:
> > os.unlink('missing.txt')
> > if cm.exc:
> > do_something()
>
> .. and why is this better than
>
> try:
>os.unlink('missing.txt')
> except OSError as exc:
>do_something()


It would allow you to perform a series of operations and then process any
exceptions all together, e.g.

with trap(OSError) as cm1:
    os.unlink('missing.txt')

with trap(OSError) as cm2:
    os.unlink('other_missing.txt')

with trap(OSError) as cm3:
    os.unlink('another_missing.txt')

for cm in (cm1, cm2, cm3):
    if cm.exc:
        do_something(cm.exc)

An equivalent implementation would be:

exceptions = []

try:
    os.unlink('missing.txt')
except OSError as exc:
    exceptions.append(exc)

try:
    os.unlink('other_missing.txt')
except OSError as exc:
    exceptions.append(exc)

try:
    os.unlink('another_missing.txt')
except OSError as exc:
    exceptions.append(exc)

for exc in exceptions:
    if exc:
        do_something(exc)
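
contextlib never grew a ``trap``, but a minimal sketch of the context manager
being discussed (the class name and ``.exc`` attribute follow the example
above; this is illustrative, not a real stdlib API) could be:

```python
class trap:
    """Record a matching exception on .exc instead of propagating it."""

    def __init__(self, *exc_types):
        self.exc_types = exc_types
        self.exc = None

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        if exc_type is not None and issubclass(exc_type, self.exc_types):
            self.exc = exc_value   # remember the exception for later
            return True            # suppress it
        return False               # let unrelated exceptions propagate


with trap(ZeroDivisionError) as cm:
    1 / 0

assert isinstance(cm.exc, ZeroDivisionError)
```

The stdlib eventually settled on ``contextlib.suppress``, which discards the
exception rather than recording it.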

Tim Delaney


Re: [Python-Dev] PEP 487: Simpler customization of class creation

2016-06-20 Thread Tim Delaney
On 21 June 2016 at 06:12, Guido van Rossum  wrote:

> OK, basically you're arguing that knowing the definition order of class
> attributes is often useful when (ab)using Python for things like schema or
> form definitions. There are a few ways to go about it:
>
> 1. A hack using a global creation counter
> <https://github.com/GoogleCloudPlatform/datastore-ndb-python/blob/master/ndb/model.py#L888>
> 2. Metaclass with __prepare__
> <https://docs.python.org/3/reference/datamodel.html#prepare>
> 3. PEP 520 <https://www.python.org/dev/peps/pep-0520/>
> 4a. Make all dicts OrderedDicts in CPython
> <http://bugs.python.org/issue27350>
> 4b. Ditto in the language standard
>
> If we can make the jump to (4b) soon enough I think we should skip PEP
> 520; if not, I am still hemming and hawing about whether PEP 520 has enough
> benefits over (2) to bother.
>
> Sorry Eric for making this so hard. The better is so often the enemy of
> the good. I am currently somewhere between -0 and +0 on PEP 520. I'm not
> sure if the work on (4a) is going to bear fruit in time for the 3.6
> feature freeze <https://www.python.org/dev/peps/pep-0494/#schedule>; if
> it goes well I think we should have a separate conversation (maybe even a
> PEP?) about (4b). Maybe we should ask for feedback from the Jython
> developers? (PyPy already has this IIUC, and IronPython
> <https://github.com/IronLanguages/main> seems moribund?)
>
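
Option (2) above is already enough for ordered class bodies in Python 3; a
minimal sketch (names like ``OrderedMeta`` and ``_field_order`` are
illustrative, not from any PEP):

```python
from collections import OrderedDict

class OrderedMeta(type):
    @classmethod
    def __prepare__(mcls, name, bases, **kwds):
        # The class body executes in this ordered namespace
        return OrderedDict()

    def __new__(mcls, name, bases, ns, **kwds):
        cls = super().__new__(mcls, name, bases, dict(ns))
        # Record definition order, skipping dunder entries
        cls._field_order = [k for k in ns if not k.startswith('__')]
        return cls

class Form(metaclass=OrderedMeta):
    name = 1
    email = 2
    age = 3

assert Form._field_order == ['name', 'email', 'age']
```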

Although not a Jython developer, I've looked into the code a few times.

The major stumbling block as I understand it will be that Jython uses a
ConcurrentHashMap as the underlying structure for a dictionary. This would
need to change to a concurrent LinkedHashMap, but there's no such thing in
the standard library. The best option would appear to be
https://github.com/ben-manes/concurrentlinkedhashmap.

There are also plenty of other places that use maps and all of them would
need to be looked at. In a lot of cases they're things like IdentityHashMap
which may also need an ordered equivalent.

There is a repo for Jython 3.5 development:
https://github.com/jython/jython3 but it doesn't seem to be very active -
only 11 commits in the last year (OTOH that's also in the last 3 months).

Tim Delaney


Re: [Python-Dev] Python 3.6 dict becomes compact and gets a private version; and keywords become ordered

2016-09-08 Thread Tim Delaney
On 9 September 2016 at 07:45, Chris Angelico  wrote:

> On Fri, Sep 9, 2016 at 6:22 AM, Victor Stinner 
> wrote:
> > A nice "side effect" of compact dict is that the dictionary now
> > preserves the insertion order. It means that keyword arguments can now
> > be iterated by their creation order:
> >
>
> This is pretty sweet! Of course, there are going to be 1172 complaints
> from people who's doctests have been broken, same as when hash
> randomization came in, but personally, I don't care. Thank you for
> landing this!
>

Are sets also ordered by default now? None of the PEPs appear to mention it.

Tim Delaney


Re: [Python-Dev] Python 3.6 dict becomes compact and gets a private version; and keywords become ordered

2016-09-08 Thread Tim Delaney
On 9 September 2016 at 15:34, Benjamin Peterson  wrote:

> On Thu, Sep 8, 2016, at 22:33, Tim Delaney wrote:
> > On 9 September 2016 at 07:45, Chris Angelico  wrote:
> >
> > > On Fri, Sep 9, 2016 at 6:22 AM, Victor Stinner <
> victor.stin...@gmail.com>
> > > wrote:
> > > > A nice "side effect" of compact dict is that the dictionary now
> > > > preserves the insertion order. It means that keyword arguments can
> now
> > > > be iterated by their creation order:
> > > >
> > >
> > > This is pretty sweet! Of course, there are going to be 1172 complaints
> > > from people whose doctests have been broken, same as when hash
> > > randomization came in, but personally, I don't care. Thank you for
> > > landing this!
> > >
> >
> > Are sets also ordered by default now? None of the PEPs appear to mention
> > it.
>
> No.
>

That's an unfortunate inconsistency - I can imagine a lot of people making
the assumption that if dict is ordered (esp. if documented as such) then
sets would be as well. Might need a big red warning in the docs that it's
not the case.

Tim Delaney


Re: [Python-Dev] Python 3.6 dict becomes compact and gets a private version; and keywords become ordered

2016-09-12 Thread Tim Delaney
On 10 September 2016 at 03:17, Guido van Rossum  wrote:

> I've been asked about this. Here's my opinion on the letter of the law in
> 3.6:
>
> - keyword args are ordered
> - the namespace passed to a metaclass is ordered by definition order
> - ditto for the class __dict__
>
> A compliant implementation may ensure the above three requirements
> either by making all dicts ordered, or by providing a custom dict
> subclass (e.g. OrderedDict) in those three cases.
>

I'd like to add one more documented constraint - that dict literals
maintain definition order (so long as the dict is not further modified).
This allows defining a dict literal and then passing it as **kwargs.

Hmm - again, there's no mention of dict literals in the PEPs. I'm assuming
that dict literals will preserve their definition order with the new
implementation, but is that a valid assumption? Guess I can test it now
3.6.0b1 is out.
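
For what it's worth, a quick check of both properties (this holds in CPython
3.6, and became a language guarantee for dicts in 3.7):

```python
d = {'b': 1, 'a': 2, 'c': 3}
assert list(d) == ['b', 'a', 'c']   # literal definition order preserved

def f(**kwargs):
    return list(kwargs)

assert f(**d) == ['b', 'a', 'c']    # and it survives **kwargs
```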

Tim Delaney


Re: [Python-Dev] Python 3.6 dict becomes compact and gets a private version; and keywords become ordered

2016-09-12 Thread Tim Delaney
On 13 September 2016 at 10:28, Brett Cannon  wrote:

>
>> I'd like to add one more documented constraint - that dict literals
>> maintain definition order (so long as the dict is not further modified).
>> This allows defining a dict literal and then passing it as **kwargs.
>>
>
> That would require all dictionaries keep their insertion order which we
> are explicitly not doing (at least yet). If you look at the PEPs that are
> asking for definition order they specify an "ordered mapping", not a dict.
> Making dict literals do this means dict literals become "order mapping
> literals" which isn't what they are; they are dict literals. I don't think
> we should extend this guarantee to literals any more than any other
> dictionary.
>

I'm not sure I agree with you, but I'm not going to argue too strongly
either (it can always be revisited later). I will note that a conforming
implementation could be that the result of evaluating a dict literal is a
frozen ordered dict which transparently changes to be a mutable dict as
required. There could well be performance and/or memory benefits from such
a dict implementation.

Personally I expect all Python 3.6 implementations will have
order-preserving dict as that's the easiest way to achieve the existing
guarantees. And that enough code will come to depend on an order-preserving
dict that eventually the decision will be made to retrospectively guarantee
the semantics.

Tim Delaney


Re: [Python-Dev] Python 3.6 dict becomes compact and gets a private version; and keywords become ordered

2016-09-14 Thread Tim Delaney
On 15 September 2016 at 05:02, Terry Reedy  wrote:

>
> We already have compact mutable collection types that can be kept
> insert-ordered if one chooses -- lists and collections.deque -- and they
> are not limited to hashables.  Before sets were added, either lists or
> dicts with None values were used as sets.  The latter is obsolete but lists
> are still sometimes used for their generality, as in a set of lists.  We
now also have enums for certain small frozensets where the set operations
> are not needed.


One use case that isn't covered by any of the above is removing duplicates
whilst retaining order (of the first of the matching elements). With an
OrderedSet (or ordered by default sets) it would be as simple as:

a = OrderedSet(iterable)

Probably the best current option would be:

a = list(OrderedDict((k, None) for k in iterable))

The other use I have for an ordered set is to build up an iterable of
unique values whilst retaining order. It's a lot more efficient than doing
a linear search on a list when adding each element to see if it's already
present.
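
As a sketch, both use cases reduce to a few lines with an ordered mapping
(``unique`` here is an illustrative helper, not a stdlib function):

```python
from collections import OrderedDict

def unique(iterable):
    """De-duplicate, keeping the first occurrence of each element."""
    # Membership testing is O(1) via the dict, unlike a linear list search
    return list(OrderedDict((k, None) for k in iterable))

assert unique([3, 1, 3, 2, 1]) == [3, 1, 2]
assert unique('abracadabra') == ['a', 'b', 'r', 'c', 'd']
```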

In many cases the order is primarily important for debugging purposes, but
I can definitely find cases in my current java codebase where I've used the
pattern (LinkedHashSet) and the order is important to the semantics of the
code.

Tim Delaney


Re: [Python-Dev] Helping contributors with chores (do we have to?)

2017-06-25 Thread Tim Delaney
On 26 June 2017 at 07:39, Paul Moore  wrote:

> On 25 June 2017 at 18:31, Donald Stufft  wrote:
> >
> > I have used it. I don’t use it every day but I’ve never had it fail on me
> > unless the contributor has unchecked the flag. I just ``git remote add
> > <name> <url>`` then checkout their branch, add more
> > commits, and push to their branch.
>
> The decision to move to git/github has been made. It's not up for
> debate whether core devs need to learn to deal with it. But we do need
> to acknowledge that there's a significant learning curve, and while
> core devs are contributing from their free time, learning the new
> tooling is a major distraction from what they actually want to do,
> which is work on Python code.
>

I went through this transition a few years ago when I changed employment
(and didn't enjoy it in the slightest). Coming from Mercurial, Git feels
primitive (I mean that literally - common functionality often requires
low-level, arcane invocations). I can keep all of the Mercurial
command-line that I use regularly in my head, whereas with Git I always
have to go back to the manual even for things that I use all the time, and
I'm often unsure if I'll get the result I expect. As a result, I've avoided
using Git directly as much as possible.

Instead, my personal recommendation for people who are experienced with
Mercurial but not Git is to use Mercurial with the hggit plugin. It's not
ideal, but since Mercurial functionality is almost a superset of Git
functionality, it works so long as you don't use things that Git can't
handle.

The most important things to be aware of IMO are:

1. Avoid the use of named branches and instead use bookmarks (a workflow I
personally hate, but it's the closest match to git branches, and I know I'm
an anomaly in preferring named branches).

2. Last I checked hggit can't force-push to a git repository after
history-modifying actions (e.g. rebase), so after such actions it's
necessary to delete any existing branch in a local git repo, hg push to
that, then force-push to GitHub. This will update any PR for that branch to
the new branch head.

So the workflow I use for working with Github is (after enabling hggit):

git clone <github-url> <repo>.git
hg clone git+<repo>.git <repo>.hg

cd <repo>.hg
...

cd <repo>.git
git branch -d <branch>

cd <repo>.hg
hg push -B <branch> <repo>.git

cd <repo>.git
git push --force

Although I use a Git GUI to avoid having to remember the git commands ...

Tim Delaney


Re: [Python-Dev] Helping contributors with chores (do we have to?)

2017-06-25 Thread Tim Delaney
On 26 June 2017 at 08:20, Tim Delaney  wrote:

>
> 2. Last I checked hggit can't force-push to a git repository after
> history-modifying actions (e.g. rebase) so after such actions it's
> necessary to delete any existing branch in a local git repo, hg push to
> that then force-push to Github. This wnew branch head.
>

Not sure what happened there - that last line should have been:

This will update any PR for that branch to the new branch head.

Tim Delaney


Re: [Python-Dev] PEP 3147: PYC Repository Directories

2010-01-31 Thread Tim Delaney
On 1 February 2010 00:34, Nick Coghlan  wrote:

>
> __file__ would always point to the source files
> __file_cached__ would always point to the relevant compiled file (either
> pre-existing or newly created)
>
>
>
I like this solution combined with having a single cache directory and a few
other things I've added below.

The pyc/pyo files are just an optimisation detail, and are essentially
temporary. Given that, if they were to live in a single directory, it seems
obvious to me that the default location should be the system temporary
directory. I can immediately think of the following advantages:

1. No one really complains too much about putting things in /tmp unless it
starts taking up too much space, in which case they delete it, and if it
gets reused, it gets recreated.

2. /tmp is often on volatile memory. If it is (e.g. my Windows system temp
dir is on a RAMdisk) then it seems wise to respect the obvious desire to
throw away temporary files on shutdown.

3. It removes the need for people in general to even think about the
existence of pyc/pyo files. They could then be relegated to even more of an
implementation detail (probably while explaining the command-line options).

4. No need (in fact it's undesirable) to make it a hidden directory.

If you wanted to package up the pyc/pyo files, I've got an idea that
combines well with executing a zip file containing __main__.py (see other
thread):

1. Delete /tmp/__pycache__.
2. Compile all your source files with the versions you want to support (so
long as they support this mechanism).
3. Add a __main__.py which sets the cache directory to the directory (zip
file) that __main__.py is in. __main__.py (as the initial script) doesn't
use the cache.
4. Zip up the contents of /tmp/__pycache__.

Note that for this to work properly it would either require an __init__.py
to be automatically created in the __pycache__ module subdirectory, or have
the subdirectory be named as a .pyr to indicate it's a cached module (and
thus should be importable).

/tmp/__pycache__
    __main__.py
    foo.pyr/
        foo.py32.pyc
        foo.py33.pyc
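
(As it turned out, PEP 3147 was accepted with the cache kept beside the
source rather than in /tmp; the mapping from source file to cached file is
exposed today via ``importlib``. The path below is illustrative:)

```python
import importlib.util

# Where CPython actually caches bytecode for a given source file
path = importlib.util.cache_from_source('/project/foo.py')
# e.g. '/project/__pycache__/foo.cpython-312.pyc' (tag varies by interpreter)
assert '__pycache__' in path
assert path.endswith('.pyc')
```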

Tim Delaney


Re: [Python-Dev] Mercurial migration readiness (was: Taking over the Mercurial Migration)

2010-07-01 Thread Tim Delaney
On 2 July 2010 08:07, Barry Warsaw  wrote:

>
> Other than that, while I sometimes review patches in email, I do not think
> patches in a tracker are the best way to manage these.  A dvcs's biggest
> strength is in branches, so we should use those as much as possible.
>
>
I changed my team over from ClearCase to Mercurial in the last 3 months
(just before we were made redundant ... :) Obviously our usage was coloured
by our previous use of ClearCase, but the workflow I found that worked best
was along the lines of the following.

1. Central server for the team, with a repository for each version (e.g.
2.6, 2.7, 3.2) where the default branch is the "trunk". Later versions get
cloned from a previous version at a particular tag, and merges between
versions are always forwards (if there's a need to merge backwards, we
cherry-pick the change by doing an hg export of the appropriate versions).

2. Each developer made at least one clone of the appropriate repo, but could have
as many as they liked. Generally would have one local repo for production
work, and others for prototyping, exploratory work, etc - stuff that you may
not want in the central repo.

3. Each change is developed in its own named branch, named for the task
number. Local commits are performed as often as possible - no requirement
that any particular commit was compilable so long as it was on the task's
named branch.

4. Changesets are pushed to the central repository (requires forcing, as Hg
doesn't like you pushing new named branches). The named branch should be
compilable, pass unit tests, etc. at this point. We generally preferred not
to edit the history, but would if there was something egregious.

5. Other developers pulled from the central repository and reviewed
(integration with ReviewBoard or Rietveld was planned, but we ran out of
time). This often led to a pair programming session on the reviewer's
machine where comments were addressed directly.

6. Named branch was closed and merged to the main/trunk branch (in our case,
default). In our case this was usually done by the author - for Python this
would be done by a committer.

7. Merged default branch was pushed to the central repo, which triggered a
continuous build.

This approach is quite contrary to that favoured by the Mercurial
developers, but we found it had the following advantages:

a. Central team repo made backup and continuous build simple.

b. Named branches for tasks made the whole "what do I do with these
unfinished changes?" question moot - you commit them to your named branch.

c. Switching between tasks is incredibly simple - you already have your
workspace set up (e.g. Eclipse) and just update to the appropriate named
branch. No need to set up multiple workspaces.

d. Similarly, switching to someone else's named branch is just as easy (for
review, pair programming, etc).

e. Named branches make it very obvious what needs to be merged for any
particular task.

f. Easier to follow in the log/graphical log as everything was "tagged" with
what task it was against.

The only issue was that if a branch was abandoned, it would show up in "hg
branches" unless it was closed - but closing wasn't a problem since you can
always reopen if necessary.

Tim Delaney


Re: [Python-Dev] Mercurial migration readiness (was: Taking over the Mercurial Migration)

2010-07-01 Thread Tim Delaney
On 2 July 2010 09:08, Tim Delaney  wrote:

> On 2 July 2010 08:07, Barry Warsaw  wrote:
>
>>
>> Other than that, while I sometimes review patches in email, I do not think
>> patches in a tracker are the best way to manage these.  A dvcs's biggest
>> strength is in branches, so we should use those as much as possible.
>>
>>
> 7. Merged default branch was pushed to the central repo, which triggered a
> continuous build.
>
Clarification here - I mean that a committer would merge it to default,
then pull it into the main python repo - there would be one that anyone
could push changes to, and those named branches would then be cherry-picked
to be merged and pulled into the main repo by committers. Or something along
those lines.

Tim Delaney


Re: [Python-Dev] notifications form Hg to python-checkins

2010-07-14 Thread Tim Delaney
On 14 July 2010 18:26, Dirkjan Ochtman  wrote:

> On Wed, Jul 14, 2010 at 10:15, Georg Brandl  wrote:
> > I also don't think we will see pushes like Tarek's 100+ one very often
> for
> > Python.  Usually changes will be bug fixes, and unless the committer is
> > offline I expect them to be pushed immediately.
>
> Depends. If we do feature branches in separate clones, the merges for
> those can potentially push large numbers of csets at once.
>
> Presumably, we could push some more complexity into the hook, and have
> it send emails per-group for groups of 5 and larger and per-cset for
> smaller groups.
>
> > No, I think we agreed to put the (first line of the) commit message
> there,
> > which I think tells potential reviewers much better if they want to look
> > at that changeset.
>
> Not sure there was actual consensus on that, but I still stand behind
> this point.
>

If development work was done in named branches, the hook could send one
email per branch that was delivered, and to be safe, one email per changeset
added to a main feature branch.

Tim Delaney


Re: [Python-Dev] [Python-3000] Warning for 2.6 and greater

2007-01-12 Thread Tim Delaney
Georg Brandl wrote:

> Martin v. Löwis schrieb:
>>
>> What does that mean for the example James gave: if dict.items is
>> going to be an iterator in 3.0, what 2.x version can make it return
>> an iterator, when it currently returns a list?
>>
>> There simply can't be a 2.x version that *introduces* the new way,
>> as it is not merely a new API, but a changed API.
>
> Well, that is one of the cases in which that won't be possible ;)

Yes - but dict.items() *isn't* going to just return an iterator - it will 
return a view. For most uses of dict.items(), this means there will not need 
to be any code change.

I'm wondering if we might be going the wrong way about warning about 
compatibility between 2.x and 3.x. Perhaps it might be better if the 3.0 
alpha had a 2.x compatibility mode command-line flag, which is removed late 
in the beta cycle.

Tim Delaney 



Re: [Python-Dev] dict.items as attributes [Was: The bytes type]

2007-01-16 Thread Tim Delaney
Phillip J. Eby wrote:

> To be honest, the items() and keys() thing personally baffles me.  If
> they're supposed to be *views* on the underlying dictionary, wouldn't
> it
> make more sense for them to be *attributes* instead of methods?  I.e.
> dict.items and dict.keys.  Then, we could provide that feature in
> 2.6, and
> drop the availability of the callable forms in 3.0.
>
> Then you could write code like:
>
> for k,v in somedict.items:
> ...
>
> And have it work in 2.6 and 3.0.  Meanwhile, .items() would still
> return a
> list in 2.6 (but be warnable about with a -3 switch), but go away
> entirely
> in 3.0.

I think this comes down to whether or not the views returned have any 
independent state. There's something that tells me that attributes (even 
properties) should not return different objects with independent state - 
working on two views obtained from the same dictionary property should 
either work identically to working on one view bound to two names, or they 
should not be obtained from a property.

But unless I'm mistaken, everything done to a view would pass through to the 
dict, or result in another object that has independent state (e.g. iter()) 
so the effect of working on two views of a dict *would* be identical to 
working on two names to the same view. The only case I can think of for 
which we might want to hold state in the view is for detecting concurrent 
modification - I know that iterators should throw exceptions in this case, 
but I can't remember what (if anything) was decided for views.
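
As it happened, Python 3's views do pass everything through, with no
per-view state - two views of one dict are indistinguishable from two names
for one view:

```python
d = {'a': 1}
v1 = d.keys()
v2 = d.keys()
d['b'] = 2   # mutate after both views were created

# Both views observe the mutation immediately
assert set(v1) == set(v2) == {'a', 'b'}
assert len(v1) == len(v2) == 2
```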

Tim Delaney 



Re: [Python-Dev] [Python-3000] Pre-pre PEP for 'super' keyword

2007-04-30 Thread Tim Delaney
From: "Delaney, Timothy (Tim)" <[EMAIL PROTECTED]>

> Sorry - this is related to my proposal that the following two bits of
> code behave the same:
>
>class A(object):
>def f(self, *p, **kw):
>super.f(*p, **kw)
>
>class A(object):
>def f(self, *p, **kw):
>super(*p, **kw)
>
> But as has been pointed out, this creates an ambiguity with:
>
>class A(object):
>def f(self, *p, **kw):
>super.__call__(*p, **kw)
>
> so I want to see if I can resolve it.

A 'super' instance would be callable, without being able to access its
__call__ method (because super.__call__ would refer to the base class method
of that name).

But I find I really don't care. The only place where that would really 
matter IMO is if you want to find out if a 'super' instance is callable. 
Calling a base class __call__ method would not be ambiguous - the following 
two classes would work the same:

class A(object):
def __call__(self, *p, **kw):
return super.__call__(*p, **kw)

class A(object):
def __call__(self, *p, **kw):
return super(*p, **kw)

So, I guess my question is whether the most common case of calling the base 
class method with the same name is worth having some further syntactic sugar 
to avoid repetition? I think it is, but that would be your call Guido.
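
(Historical note: the sugar that eventually landed was PEP 3135's
zero-argument ``super()`` in Python 3.0, which handles the common same-name
case, though not the ``super(*p, **kw)`` shorthand proposed here:)

```python
class A:
    def f(self):
        return 'A.f'

class B(A):
    def f(self):
        # PEP 3135: the compiler supplies __class__ and the instance
        return 'B wraps ' + super().f()

assert B().f() == 'B wraps A.f'
```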

Cheers,

Tim Delaney 



Re: [Python-Dev] [Python-3000] Pre-pre PEP for 'super' keyword

2007-04-30 Thread Tim Delaney
From: "Calvin Spealman" <[EMAIL PROTECTED]>

> I believe the direction my PEP took with all this is a good bit
> primitive compared to this approach, although I still find value in it
> because at least a prototype came out of it that can be used to test
> the waters, regardless of if a more direct-in-the-language approach
> would be superior.

I've been working on improved super syntax for quite a while now - my 
original approach was 'self.super' which used _getframe() and mro crawling 
too. I hit on using bytecode hacking to instantiate a super object at the 
start of the method to gain performance, which required storing the class in 
co_consts, etc. It turns out that using a metaclass then makes this a lot 
cleaner.

> However, I seem to think that if the __this_class__ PEP goes through,
> your version can be simplified as well. No tricky stuffy things in
> cells would be needed, but we can just expand the super 'keyword' to
> __super__(__this_class__, self), which has been suggested at least
> once. It seems this would be much simpler to implement, and it also
> brings up a second point.
>
> Also, I like that the super object is created at the beginning of the
> function, which my proposal couldn't even do. It is more efficient if
> you have multiple super calls, and gets around a problem I completely
> missed: what happens if the instance name were rebound before the
> implicit lookup of the instance object at the time of the super call?

You could expand it inline, but I think your second point is a strong 
argument against it. Also, sticking the super instance into a cell means 
that inner classes get access to it for free. Otherwise each inner class 
would *also* need to instantiate a super instance, and __this_class__ (or 
whatever it's called) would need to be in a cell for them to get access to 
it instead.

BTW, one of my test cases involves multiple super calls in the same method - 
there is a *very* large performance improvement by instantiating it once.

>> I think it would be very rare to need
>> super(ThisClass), although it makes some sense that that would be what
>> super means in a static method ...
>
> Does super mean anything in a static method today?

Well, since all super instantiations are explicit currently, it can mean 
whatever you want it to.

    class A(object):

        @staticmethod
        def f():
            print super(A)
            print super(A, A)

Cheers,

Tim Delaney 

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] PEP 367: New Super

2007-05-14 Thread Tim Delaney
Here is my modified version of PEP 367. The reference implementation in it 
is pretty long, and should probably be split out to somewhere else (esp. 
since it can't fully implement the semantics).

Cheers,

Tim Delaney


PEP: 367
Title: New Super
Version: $Revision$
Last-Modified: $Date$
Author: Calvin Spealman <[EMAIL PROTECTED]>
Author: Tim Delaney <[EMAIL PROTECTED]>
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 28-Apr-2007
Python-Version: 2.6
Post-History: 28-Apr-2007, 29-Apr-2007 (1), 29-Apr-2007 (2), 14-May-2007

Abstract
========

This PEP proposes syntactic sugar for use of the ``super`` type to 
automatically
construct instances of the super type binding to the class that a method was
defined in, and the instance (or class object for classmethods) that the 
method
is currently acting upon.

The premise of the new super usage suggested is as follows::

    super.foo(1, 2)

to replace the old::

    super(Foo, self).foo(1, 2)

and the current ``__builtin__.super`` be aliased to 
``__builtin__.__super__``
(with ``__builtin__.super`` to be removed in Python 3.0).

It is further proposed that assignment to ``super`` become a 
``SyntaxError``,
similar to the behaviour of ``None``.


Rationale
=========

The current usage of super requires an explicit passing of both the class 
and
instance it must operate from, requiring a breaking of the DRY (Don't Repeat
Yourself) rule. This hinders any change in class name, and is often 
considered
a wart by many.


Specification
=============

Within the specification section, some special terminology will be used to
distinguish similar and closely related concepts. "super type" will refer to
the actual builtin type named "super". A "super instance" is simply an 
instance
of the super type, which is associated with a class and possibly with an
instance of that class.

Because the new ``super`` semantics are not backwards compatible with Python
2.5, the new semantics will require a ``__future__`` import::

    from __future__ import new_super

The current ``__builtin__.super`` will be aliased to 
``__builtin__.__super__``.
This will occur regardless of whether the new ``super`` semantics are 
active.
It is not possible to simply rename ``__builtin__.super``, as that would 
affect
modules that do not use the new ``super`` semantics. In Python 3.0 it is
proposed that the name ``__builtin__.super`` will be removed.

Replacing the old usage of super, calls to the next class in the MRO (method
resolution order) can be made without explicitly creating a ``super``
instance (although doing so will still be supported via ``__super__``). 
Every
function will have an implicit local named ``super``. This name behaves
identically to a normal local, including use by inner functions via a cell,
with the following exceptions:

1. Assigning to the name ``super`` will raise a ``SyntaxError`` at compile 
time;

2. Calling a static method or normal function that accesses the name 
``super``
   will raise a ``TypeError`` at runtime.

Every function that uses the name ``super``, or has an inner function that
uses the name ``super``, will include a preamble that performs the 
equivalent
of::

    super = __builtin__.__super__(<class>, <instance>)

where ``<class>`` is the class that the method was defined in, and
``<instance>`` is the first parameter of the method (normally ``self`` for
instance methods, and ``cls`` for class methods). For static methods and
normal functions, ``<class>`` will be ``None``, resulting in a ``TypeError``
being raised during the preamble.
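Written out by hand in today's Python, the preamble corresponds to an explicit two-argument ``super`` call at the top of the method (a sketch; ``Base`` and ``Derived`` are illustrative names, not part of the proposal):

```python
class Base(object):
    def greet(self):
        return "Base"

class Derived(Base):
    def greet(self):
        # Hand-written equivalent of the implicit preamble the PEP
        # describes, i.e. super = __builtin__.__super__(<class>, <instance>)
        sup = super(Derived, self)  # <class> = Derived, <instance> = self
        return "Derived->" + sup.greet()

print(Derived().greet())  # -> Derived->Base
```

The proposal removes the repetition of ``Derived`` at every call site, which is the DRY violation identified in the Rationale.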

Note: The relationship between ``super`` and ``__super__`` is similar to 
that
between ``import`` and ``__import__``.

Much of this was discussed in the thread of the python-dev list, "Fixing 
super
anyone?" [1]_.


Open Issues
-----------


Determining the class object to use
'''''''''''''''''''''''''''''''''''

The exact mechanism for associating the method with the defining class is 
not
specified in this PEP, and should be chosen for maximum performance. For
CPython, it is suggested that the class instance be held in a C-level 
variable
on the function object which is bound to one of ``NULL`` (not part of a 
class),
``Py_None`` (static method) or a class object (instance or class method).


Should ``super`` actually become a keyword?
'''''''''''''''''''''''''''''''''''''''''''

With this proposal, ``super`` would become a keyword to the same extent that
``None`` is a keyword. It is possible that further restricting the ``super``
name may simplify implementation, however some are against

Re: [Python-Dev] [Python-3000] PEP 367: New Super

2007-05-19 Thread Tim Delaney
Phillip J. Eby wrote:
> At 05:23 PM 5/14/2007 +1000, Tim Delaney wrote:
>> Determining the class object to use
>> '''''''''''''''''''''''''''''''''''
>>
>> The exact mechanism for associating the method with the defining
>> class is not
>> specified in this PEP, and should be chosen for maximum performance.
>> For CPython, it is suggested that the class instance be held in a
>> C-level variable
>> on the function object which is bound to one of ``NULL`` (not part
>> of a class),
>> ``Py_None`` (static method) or a class object (instance or class
>> method).
>
> Another open issue here: is the decorated class used, or the
> undecorated class?

Sorry I've taken so long to get back to you about this - had email problems.

I'm not sure what you're getting at here - are you referring to the 
decorators for classes PEP? In that case, the decorator is applied after the 
class is constructed, so it would be the undecorated class.

Are class decorators going to update the MRO? I see nothing about that in 
PEP 3129, so using the undecorated class would match the current super(cls, 
self) behaviour.

Tim Delaney 



Re: [Python-Dev] [Python-3000] PEP 367: New Super

2007-05-19 Thread Tim Delaney
Tim Delaney wrote:
> Phillip J. Eby wrote:
>> At 05:23 PM 5/14/2007 +1000, Tim Delaney wrote:
>>> Determining the class object to use
>>> '''''''''''''''''''''''''''''''''''
>>>
>>> The exact mechanism for associating the method with the defining
>>> class is not
>>> specified in this PEP, and should be chosen for maximum performance.
>>> For CPython, it is suggested that the class instance be held in a
>>> C-level variable
>>> on the function object which is bound to one of ``NULL`` (not part
>>> of a class),
>>> ``Py_None`` (static method) or a class object (instance or class
>>> method).
>>
>> Another open issue here: is the decorated class used, or the
>> undecorated class?
>
> Sorry I've taken so long to get back to you about this - had email
> problems.
> I'm not sure what you're getting at here - are you referring to the
> decorators for classes PEP? In that case, the decorator is applied
> after the class is constructed, so it would be the undecorated class.
>
> Are class decorators going to update the MRO? I see nothing about
> that in PEP 3129, so using the undecorated class would match the
> current super(cls, self) behaviour.

Duh - I'm an idiot. Of course, the current behaviour uses name lookup, so it 
would use the decorated class.

So the question is, should the method store the class, or the name? Looking 
up by name could pick up a totally unrelated class, but storing the 
undecorated class could miss something important in the decoration.

Tim Delaney 



Re: [Python-Dev] [Python-3000] PEP 367: New Super

2007-05-20 Thread Tim Delaney
Nick Coghlan wrote:
> Tim Delaney wrote:
>> So the question is, should the method store the class, or the name?
>> Looking up by name could pick up a totally unrelated class, but
>> storing the undecorated class could miss something important in the
>> decoration. 
> 
> Couldn't we provide a mechanism whereby the cell can be adjusted to
> point to the decorated class? (heck, the interpreter has access to
> both classes after execution of the class statement - it could
> probably arrange for this to happen automatically whenever the
> decorated and undecorated classes are different).

Yep - I thought of that. I think that's probably the right way to go.

Tim Delaney


Re: [Python-Dev] [Python-checkins] cpython (trunk): Close the "trunk" branch.

2011-02-26 Thread Tim Delaney
On 27 February 2011 03:02, "Martin v. Löwis"  wrote:

> > Committing reopened it
>
> So what's the point of closing it, then? What effect does that
> achieve?


http://stackoverflow.com/questions/4099345/is-it-possible-to-reopen-a-closed-branch-in-mercurial/4101279#4101279

The closed flag is just used to filter out closed branches from hg branches
and hg heads unless you use the --closed option - it doesn't prevent you
from using the branches.

Basically, it reduces the noise, especially if you have very branchy
development like I personally prefer (a named branch per task/issue). If you
only use anonymous branches, except for your feature branches (e.g. 3.2,
2.7) then you'd probably never close a branch.

Tim Delaney


Re: [Python-Dev] [Python-checkins] cpython: improve license

2011-02-26 Thread Tim Delaney
On 27 February 2011 05:12, Barry Warsaw  wrote:

> I guess it's possible for change notifications to encompass multiple named
> branches though, right?  I'm not sure what to do about that, but it seems
> like
> a less common use case.
>

Are the change notifications per-commit? If so, there's no way that a single
change notification could be for more than one named branch.

If the notifications are per-pull/per-push, then yes it could be for
multiple branches.

In either case, it should definitely be possible to put the name(s) of the
branches in the change notifications - in either type of hook you can
inspect the changesets and determine what branch they are on.
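Since a push delivers a contiguous range of revisions, a hook can collect the branch names with a simple scan. A pure-Python sketch of that logic, where the hypothetical ``branch_of`` callable stands in for Mercurial's ``repo[rev].branch()`` (this is an illustration, not the hook python.org actually runs):

```python
def branches_in_push(branch_of, first_rev, last_rev):
    """Return the sorted, de-duplicated branch names touched by
    revisions first_rev..last_rev (inclusive) of a push."""
    return sorted({branch_of(rev) for rev in range(first_rev, last_rev + 1)})

# Example: revs 0-2 committed on 'default', revs 3-5 on '3.2'
branch_map = {0: 'default', 1: 'default', 2: 'default',
              3: '3.2', 4: '3.2', 5: '3.2'}
print(branches_in_push(branch_map.get, 1, 4))  # -> ['3.2', 'default']
```

A per-commit hook would call this with ``first_rev == last_rev`` and always get a single branch; a per-push hook may get several.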

Tim Delaney


Re: [Python-Dev] pymigr: Ask for hgeol-checking hook.

2011-02-26 Thread Tim Delaney
On 27 February 2011 05:23, "Martin v. Löwis"  wrote:

> It actually happened to me, so please trust me that it's not a legend.
> Yes, I could fix it with hg commands, and a lot of text editing.
> It took me a day, I considered the repository corrupted so that I
> actually had to branch from the last ok revision, and redo all checkins
> since (I also discarded changes which I didn't chose to redo). It was
> a real catastrophe to me.
>
> Since the changes actually changed all lines, "hg blame" became useless,
> which was unacceptable.
>

I'd disagree that that is catastrophic repository corruption - it's fixable
by creating a new clone from before the "corruption", discarding the old one
and redoing the changes (possibly via cherry-picking).

Catastrophic corruption of a mercurial repository happens when the history
itself becomes corrupted. This should never happen under normal usage, but
it is possible to happen if you commit using an older version of hg to a
repo that's been created (or modified) by a newer version.

You can pull from a newer version repo using the older version, but you
shouldn't ever commit to it (including pushing) except through the "remote"
interfaces (where the remote hg is the one doing the actual commits).

Tim Delaney


Re: [Python-Dev] Mercurial conversion repositories

2011-02-26 Thread Tim Delaney
On 27 February 2011 01:40, Nick Coghlan  wrote:

> On Sat, Feb 26, 2011 at 4:34 PM, Georg Brandl  wrote:
> >> Would it be possible to name "trunk" as "2.x" instead? Otherwise I
> >> could see people getting confused and asking why trunk was closed,
> >> and/or not the same as "default".
> >
> > Problem is, you then have 1.5.2 released from the 2.x branch :)
>
> In that case, "legacy-trunk" would seem clearer.


+1

Exactly what I was about to suggest.

Tim Delaney


Re: [Python-Dev] .hgignore (was: Mercurial conversion repositories)

2011-03-05 Thread Tim Delaney
On 6 March 2011 00:44, R. David Murray  wrote:

> On Fri, 04 Mar 2011 13:01:02 -0800, Santoso Wijaya <
> santoso.wij...@gmail.com> wrote:
> > As a mercurial user, I thank you for this effort! One question, where/how
> do
> > I send suggestion to what to add into .hgignore file? In particular, I
> found
> > these dynamically generated files after a build in Windows (3.2) that
> > probably should be entered as .hgignore entries:
> >
> > ? PC/python_nt_d.h
> > ? PC/pythonnt_rc_d.h
>
> I, on the other hand, would like to see .rej and .orig removed from
> the ignore list.  I don't like having these polluting my working
> directory, and 'hg status' is the easiest way to find them (if
> they aren't ignored).
>
> Or if there's some way to configure my personal .hgrc to ignore
> those particular ignore lines, that would be fine too :)


If those were to be removed from .hgignore then there would be a high
likelihood of someone doing "hg addremove" and inadvertently tracking them.
The purpose of .hgignore is to prevent inadvertently tracking files that
shouldn't be tracked.

"hg status -i" will list all ignored files that are present in your working
directory. For other options, "hg help status".

Tim Delaney


Re: [Python-Dev] Hg: inter-branch workflow

2011-03-20 Thread Tim Delaney
working on (which
you would need to do if you're using separate clones for each task). If you
have each task on a named branch, you can just hg update 1234 and your
existing workspace is now ready to work on another task (you might want to
hg purge as well to get rid of generated artifacts such as .pyc files).

I've worked extensively with this workflow, and it was *really easy*. The
entire team was working happily in about a week, and we really found no
reason to change how we used Mercurial once we started doing this. Yes - you
end up with a much branchier workflow, but I found that to be an advantage,
rather than a disadvantage, because I could easily isolate the changes that
composed any particular task.

Tim Delaney


Re: [Python-Dev] Hg: inter-branch workflow

2011-03-20 Thread Tim Delaney
On 21 March 2011 08:16, Tim Delaney  wrote:

>
> For the second and later merges:
>
> hg update 1234_merged_with_3.2
> hg merge 3.2
> hg commit -m "Merged 3.2 to 1234_merged_with_3.2"
> hg merge 1234
> hg commit -m "Merged 1234 to 1234_merged_with_3.2"
>

Of course, you should probably do the "hg merge 1234" before "hg merge 3.2"
to avert the case that you actually "hg update 1234" here ...

Tim Delaney


Re: [Python-Dev] Hg: inter-branch workflow

2011-03-21 Thread Tim Delaney
On 2011-03-22, Ben Finney  wrote:

> That seems to me the ideal: preserve all revision history for those
> cases when some user will care about it, but *present* history cleanly
> by default.
>
> Whether adding support in Mercurial or Git for similar
> clean-presentation-by-default would obviate the need for rewriting
> history, I can't tell.

That's my thought as well - it's the presentation that makes things
difficult for people. I'm used to it (having used ClearCase for many
years before Mercurial) but I do find the presentation suboptimal.

I've recently been thinking about prototyping a "mainline" option for
hgrc that the various hg commands would follow (starting with hg log
and glog). Something like:

mainline = default, 3.3, 3.2, 2.7, 3.1, 3.0, 2.6, 2.5

defaulting to:

mainline = default

All hg commands would acquire an "operate on all branches" option.

The algorithm for hg log would be fairly trivial to change, but hg
glog would be a significant departure (and so would the hgweb log view
- I've played with this before and it's non-trivial).

The idea for glog and hgweb log would be to keep straight lines for the
mainlines wherever possible (multiple heads on the same mainline
branch would obviously cause deviations). The order the branches are
listed in the "mainline" option would be the order to display the
branches (so you could ensure that your current version was displayed
first). Merges would be indicated with a separate symbol and the name
of the branch that was merged. Likewise, when viewing all branches,
keeping straight lines would be important.

You'd end up using more horizontal space, but we all seem to have
widescreen monitors these days.


Re: [Python-Dev] Hg: inter-branch workflow

2011-03-21 Thread Tim Delaney
On 2011-03-22, Steven D'Aprano  wrote:
> Tim Delaney wrote:
>
>> You'd end up using more horizontal space, but we all seem to have
>> widescreen monitors these days.
>
> Not even close to "we all".

Fair enough - that was a fairly stupid statement on my part. Blame it
on being on dial-up (26kbps!) for the last 24 hours.

I do heartily recommend getting one though. If nothing else, it really
helps with visualising the interrelationships between multiple
branches (named or anonymous), even with the current sub-optimal (IMO)
layout.

Tim Delaney


Re: [Python-Dev] Extending os.chown() to accept user/group names

2011-05-25 Thread Tim Delaney
2011/5/26 Victor Stinner 

> Le mercredi 25 mai 2011 à 18:46 +0200, Charles-François Natali a écrit :
> > While we're at it, adding a "recursive" argument to this shutil.chown
> > could also be useful.
>
> I don't like the idea of a recursive flag. I would prefer a "map-like"
> function to "apply" a function on all files of a directory. Something
> like shutil.apply_recursive(shutil.chown)...
>
> ... maybe with options to choose between deep-first search and
> breadth-first search, filter (filenames, file size, files only,
> directories only, other attributes?), directory before files (may be
> need for chmod(0o000)), etc.


Pass an iterable to shutil.chown()? Then you could call it like:

shutil.chown(os.walk(path))

Then of course you have the difficulty of wanting to pass either an iterator
or a single path - probably prefer two functions e.g.:

shutil.chown(path)
shutil.chown_many(iter)
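A sketch of what that two-function split might look like (hypothetical names; the ``chown`` parameter is injectable purely so the traversal logic can be exercised without actually changing file ownership):

```python
import os

def chown_many(paths, uid, gid, chown=os.chown):
    """Apply a chown to every path yielded by an iterable."""
    for path in paths:
        chown(path, uid, gid)

def walk_paths(top):
    """Flatten os.walk() output into the plain paths chown_many expects,
    yielding each directory before the files it contains."""
    for dirpath, dirnames, filenames in os.walk(top):
        yield dirpath
        for name in filenames:
            yield os.path.join(dirpath, name)

# Recursive chown of a tree would then be:
#     chown_many(walk_paths('/srv/data'), 1000, 1000)
```

Note that os.walk() yields (dirpath, dirnames, filenames) triples rather than paths, so an adapter like walk_paths() is needed either way; depth-first versus breadth-first ordering could be handled there rather than in chown_many() itself.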

Tim Delaney


[Python-Dev] Patch #1731330 - pysqlite_cache_display - missing Py_DECREF

2007-06-05 Thread Tim Delaney
I've added patch #1731330 to fix a missing Py_DECREF in 
pysqlite_cache_display. I've attached the diff to this email.


I haven't actually been able to test this - haven't been able to get 
pysqlite compiled here on cygwin yet. I just noticed it when taking an 
example of using PyObject_Print ...


Cheers,

Tim Delaney 


sqlite_cache.diff
Description: Binary data

