Re: [Python-Dev] [python-committers] Enabling deprecation warnings feature code cutoff

2017-11-11 Thread Antoine Pitrou
On Sat, 11 Nov 2017 07:34:05 +
Brett Cannon  wrote:
> 
> One problem with that is I don't want e.g. mypy to start spewing out
> warnings while checking my code.

It's rather trivial for mypy (or any other code analysis tool) to turn
warnings off when importing the code under analysis.  And since there
are other warnings out there than DeprecationWarnings, it should do it
anyway even if we don't change DeprecationWarning's default behaviour.
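
For illustration, a minimal sketch of that kind of suppression (the
module name here is a placeholder for whatever is being analysed):

    import importlib
    import warnings

    # Import the code under analysis with all warnings silenced, so the
    # analysis tool's own output stays clean.
    with warnings.catch_warnings():
        warnings.simplefilter("ignore")
        module = importlib.import_module("code_under_analysis")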

Regards

Antoine.




Re: [Python-Dev] [python-committers] Enabling deprecation warnings feature code cutoff

2017-11-11 Thread Nick Coghlan
On 11 November 2017 at 17:34, Brett Cannon  wrote:
> On Thu, Nov 9, 2017, 17:33 Nathaniel Smith,  wrote:
>> Some more ideas to throw out there:
>>
>> - if an envvar CI=true is set, then by default make deprecation warnings
>> into errors. (This is an informal standard that lots of CI systems use.
>> Error instead of "once" because most people don't look at CI output at all
>> unless there's an error.)
>
>
> One problem with that is I don't want e.g. mypy to start spewing out
> warnings while checking my code. That's why I like Victor's idea of a -X
> option that also flips on other test/debug features. Yes, this would also
> trigger for test runners, but that's at least a smaller amount of affected
> code.

For mypy itself, the CLI is declared as a console_scripts entry point,
so none of mypy's own code actually runs in __main__ - it's all part
of an imported module.

And given that one of the key benefits of static analysis is that it
*doesn't* run the code, I'd be surprised if mypy could ever trigger a
runtime warning in the code under test :)

For other test runners that do import the code under test, I think
that *our* responsibility is to make it clearer that the default
warning state isn't something that test runner designers should
passively inherit from the interpreter - deciding what the default
warning state should be (and how to get subprocesses to inherit that
setting by default) is part of the process of designing the test
runner.
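
As a sketch of that kind of explicit decision, assuming a runner that
wants deprecation warnings on (PYTHONWARNINGS is the standard mechanism
for getting subprocesses to inherit the setting):

    import os
    import warnings

    # Opt in to deprecation warnings deliberately, rather than
    # inheriting the interpreter's default filters...
    warnings.simplefilter("default", DeprecationWarning)

    # ...and propagate that choice to any subprocesses the tests spawn.
    os.environ["PYTHONWARNINGS"] = "default::DeprecationWarning"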

Cheers,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia


Re: [Python-Dev] [python-committers] Enabling deprecation warnings feature code cutoff

2017-11-11 Thread Nathaniel Smith
On Fri, Nov 10, 2017 at 11:34 PM, Brett Cannon  wrote:
> On Thu, Nov 9, 2017, 17:33 Nathaniel Smith,  wrote:
>> - if an envvar CI=true is set, then by default make deprecation warnings
>> into errors. (This is an informal standard that lots of CI systems use.
>> Error instead of "once" because most people don't look at CI output at all
>> unless there's an error.)
>
> One problem with that is I don't want e.g. mypy to start spewing out
> warnings while checking my code. That's why I like Victor's idea of a -X
> option that also flips on other test/debug features. Yes, this would also
> trigger for test runners, but that's at least a smaller amount of affected
> code.

Ah, yeah, you're right -- often CI systems use Python programs for
infrastructure, beyond the actual code under test. pip is maybe a more
obvious example than mypy -- we probably don't want pip to stop
working in CI runs just because it happens to use a deprecated API
somewhere :-). So this idea doesn't work.
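
For the record, the withdrawn idea amounted to something like the
following at interpreter startup; any Python program run under CI,
pip included, would have inherited it:

    import os
    import warnings

    # Escalate DeprecationWarning to an error whenever the informal
    # CI=true convention is detected.  Every tool that happens to run
    # in CI is affected, which is exactly the problem conceded above.
    if os.environ.get("CI", "").lower() == "true":
        warnings.simplefilter("error", DeprecationWarning)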

-n

-- 
Nathaniel J. Smith -- https://vorpus.org


Re: [Python-Dev] Analog of PEP 448 for dicts (unpacking in assignment with dict rhs)

2017-11-11 Thread Joao S. O. Bueno
Ben, I have a small package which enables one to do:

with MapGetter(my_dictionary):
from my_dictionary import a, b, parameter3

If this interests you, contributions so it can get hardened for
mainstream acceptance are welcome.
https://github.com/jsbueno/extradict
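
For comparison, the closest standard-library spelling today is
operator.itemgetter, which avoids repeating the dictionary but still
spells each name twice:

    from operator import itemgetter

    my_dictionary = {'a': 1, 'b': 2, 'parameter3': 3}

    # Fetch several keys at once, without the context manager.
    a, b, parameter3 = itemgetter('a', 'b', 'parameter3')(my_dictionary)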

On 11 November 2017 at 04:26, Ben Usman  wrote:
> Got it, thank you. I'll go and check it out!
>
> On Nov 11, 2017 01:22, "Jelle Zijlstra"  wrote:
>>
>>
>>
>> 2017-11-10 19:53 GMT-08:00 Ben Usman :
>>>
>>> The following works now:
>>>
>>> seq = [1, 2]
>>> d = {'c': 3, 'a': 1, 'b': 2}
>>>
>>> (el1, el2) = seq
>>> el1, el2 = seq
>>> head, *tail = seq
>>>
>>> seq_new = (*seq, *tail)
>>> dict_new = {**d, **{'c': 4}}
>>>
>>> def f(arg1, arg2, a, b, c):
>>>     pass
>>>
>>> f(*seq, **d)
>>>
>>> It seems like dict unpacking syntax would not be fully coherent with
>>> list unpacking syntax without something like:
>>>
>>> {b, a, **other} = **d
>>>
>>> Because iterables have syntax both for function-call unpacking and
>>> for "RHS in assignment" unpacking, while dicts have only function-call
>>> unpacking syntax.
>>>
>>> I was not able to find any PEPs that suggest this (search keywords:
>>> "PEP 448 dicts", "dictionary unpacking assignment", checked PEP-0),
>>> however, let me know if I am wrong.
>>>
>> It was discussed at great length on Python-ideas about a year ago. There
>> is a thread called "Unpacking a dict" from May 2016.
>>
>>>
>>> The main use case, in my understanding, is getting shortcuts to
>>> elements of a dictionary if they are going to be used more than
>>> once later in the scope. A made-up example is using a config to
>>> initialize a bunch of things with many config arguments with long
>>> names that overlap in the keywords used for initialization.
>>>
>>> One should either write long calls like
>>>
>>> start_a(config['parameter1'], config['parameter2'],
>>> config['parameter3'], config['parameter4'])
>>>
>>> start_b(config['parameter3'], config['parameter2'],
>>> config['parameter3'], config['parameter4'])
>>>
>>> many times, or use the comprehension-based solution mentioned below.
>>>
>>> It becomes even worse (in terms of readability) with nested structures.
>>>
>>> start_b(config['group2']['parameter3'], config['parameter2'],
>>> config['parameter3'], config['group2']['parameter3'])
>>>
>>>
>>> ## Rationale
>>>
>>> Right now this problem is often solved using [list] comprehensions,
>>> but this is somewhat verbose:
>>>
>>> a, b = (d[k] for k in ['a', 'b'])
>>>
>>> or direct per-instance assignment (looks simple with
>>> single-character keys, but often becomes very verbose with
>>> real-world long key names)
>>>
>>> a = d['a']
>>> b = d['b']
>>>
>>> Alternatively one could have a very basic method/function
>>> get_n() or __getitem__() accepting more than a single argument
>>>
>>> a, b = d.get_n('a', 'b')
>>> a, b = get_n(d, 'a', 'b')
>>> a, b = d['a', 'b']
>>>
>>> All these approaches require verbose double-mentioning of the same
>>> key. It becomes even worse if you have nested structures
>>> of dictionaries.
>>>
>>> ## Concerns and questions:
>>>
>>> 0. This is the most troubling part, imho; the other questions
>>> are more like general thoughts. It seems (to put it mildly)
>>> weird that execution flow depends on the names of local variables.
>>>
>>> For example, one cannot easily refactor these variable names. However,
>>> the same is true for dictionary keys anyway: you cannot suddenly decide
>>> to refactor your code to expect dictionaries with keys 'c' and
>>> 'd' while your entire system still expects you to use dictionaries
>>> with keys 'a' and 'b'. A counter-objection is that this specific
>>> scenario is usually handled with record/struct-like classes with
>>> fixed members rather than dicts, so this is not an issue.
>>>
>>> Quite a few languages (Clojure and JavaScript, to name a few) seem
>>> to have this feature now, and it seems like they did not suffer too
>>> much from refactoring hell. This does not mean that their approach
>>> is good, just that it is "manageable".
>>>
>>> 1. This line seems coherent with sequence syntax, but redundant:
>>> {b, a, **other} = **d
>>>
>>> and the following use of a "throwaway" variable just looks poor visually
>>> {b, a, **_} = **d
>>>
>>> could it be less verbose like this
>>> {b, a} = **d
>>>
>>> but it is not very coherent with list behavior.
>>>
>>> E.g. what if that line did not raise something like "ValueError:
>>> Too many keys to unpack, got an unexpected keyword argument 'c'".
>>>
>>> 2. Unpacking in other contexts
>>>
>>> {self.a, b, **other} = **d
>>>
>>> should it be interpreted as
>>> self.a, b = d['a'], d['b']
>>>
>>> or
>>>
>>> self.a, b = d['self.a'], d['b']
>>>
>>> probably the first, but what I am saying is that these name-extraction
>>> rules should be strictly specified, and that might not be trivial.
>>>
>>> ---
>>> Ben
>>>

Re: [Python-Dev] Analog of PEP 448 for dicts (unpacking in assignment with dict rhs)

2017-11-11 Thread Koos Zevenhoven
Oops, forgot to reply to the list.

On Nov 12, 2017 03:35, "Koos Zevenhoven"  wrote:

On Nov 12, 2017 02:12, "Joao S. O. Bueno"  wrote:

Ben, I have a small package which enables one to do:

with MapGetter(my_dictionary):
from my_dictionary import a, b, parameter3

If this interests you, contributions so it can get hardened for
mainstream acceptance are welcome.
https://github.com/jsbueno/extradict


Your VersionDict in fact has some similarities to what I have thought of
implementing using the PEP 555 machinery, but it is also a bit different.
Interesting...

-- Koos (mobile)




Re: [Python-Dev] [python-committers] Enabling deprecation warnings feature code cutoff

2017-11-11 Thread Guido van Rossum
On Sat, Nov 11, 2017 at 3:29 PM, Nick Coghlan  wrote:

> And given that one of the key benefits of static analysis is that it
> *doesn't* run the code, I'd be surprised if mypy could ever trigger a
> runtime warning in the code under test :)
>

Actually there are a few cases where mypy *will* generate deprecation
warnings: when the warning is produced by the standard Python parser.
Mypy's parser (typed_ast) is a fork of the stdlib ast module and it
preserves the code that generates such warnings. I found two cases in
particular that generate them:

- In Python 2 code, the `<>` operator gives "DeprecationWarning: <> not
supported in 3.x; use !="
- In Python 3 code, using `\u` escapes in a b'...' literal gives
"DeprecationWarning: invalid escape sequence '\u'"

In both cases these warnings are currently only generated if you run mypy
with these warnings enabled, e.g. `python3 -Wd -m mypy `. But this
means that mypy would start generating these by default if those warnings
were enabled everywhere by default (per Antoine's preference). And while
it's debatable whether they are useful, there should at least be a way to
turn them off (e.g. when checking Python 2 code that's never going to be
ported). Running mypy in the above way is awkward; mypy would likely have
to grow a new flag to control this.
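
The bytes-literal case is easy to observe with the stdlib compile() on
3.6/3.7, where the category is still DeprecationWarning (later releases
changed it to SyntaxWarning):

    import warnings

    # Compiling source that contains b'\u0394' triggers the warning for
    # the invalid \u escape inside a bytes literal.
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        compile(r"b'\u0394'", "<example>", "exec")

    for w in caught:
        print(w.category.__name__, w.message)
    # DeprecationWarning invalid escape sequence '\u'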

-- 
--Guido van Rossum (python.org/~guido)


Re: [Python-Dev] Analog of PEP 448 for dicts (unpacking in assignment with dict rhs)

2017-11-11 Thread Joao S. O. Bueno
On 11 November 2017 at 23:40, Koos Zevenhoven  wrote:
> Oops, forgot to reply to the list.
>
>
> On Nov 12, 2017 03:35, "Koos Zevenhoven"  wrote:
>
> On Nov 12, 2017 02:12, "Joao S. O. Bueno"  wrote:
>
> Ben, I have a small package which enables one to do:
>
> with MapGetter(my_dictionary):
> from my_dictionary import a, b, parameter3
>
> If this interests you, contributions so it can get hardened for
> mainstream acceptance are welcome.
> https://github.com/jsbueno/extradict
>
>
> Your VersionDict in fact has some similarities to what I have thought of
> implementing using the PEP 555 machinery, but it is also a bit different.
> Interesting...
>

My main issue with that VersionDict is that after I wrote it, I never
had a real case in which to use it. So it has never been used beyond
the unit tests and some playing around.
(I wrote it when dicts in Python got versioning, which was only
visible from the C API.)

I remember that the idea of retrieving versioned values occurred to me
quite naturally when I wrote it, so it is probably close to the
"OOWTDI".



> -- Koos (mobile)
>
>


Re: [Python-Dev] PEP 563: Postponed Evaluation of Annotations

2017-11-11 Thread Guido van Rossum
On Fri, Nov 10, 2017 at 11:02 PM, Nick Coghlan  wrote:

> On 11 November 2017 at 01:48, Guido van Rossum  wrote:
> > I don't mind the long name. Of all the options so far I really only like
> > 'string_annotations' so let's go with that.
>
> +1 from me.
>

I'd like to reverse my stance on this. We had `from __future__ import
division` for many years in Python 2, and nobody argued that it implied
that Python 2 doesn't have division -- it just meant to import the future
*version* of division. So I think the original idea, `from __future__
import annotations` is fine. I don't expect there will be *other* things
related to annotations that we'll be importing from the future.
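
For reference, a small sketch of the behaviour the future import enables
once implemented (Python 3.7+): annotations are stored as strings rather
than evaluated at definition time, so forward references just work:

    from __future__ import annotations

    def greet(user: User) -> str:   # User is not defined yet; that's fine
        return "hello, " + user.name

    class User:
        def __init__(self, name: str) -> None:
            self.name = name

    print(greet.__annotations__)    # {'user': 'User', 'return': 'str'}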

-- 
--Guido van Rossum (python.org/~guido)