Re: [Python-Dev] Multiline with statement line continuation

2014-08-13 Thread yoav glazner
On Aug 13, 2014 7:04 PM, "Akira Li" <4kir4...@gmail.com> wrote:
>
> Nick Coghlan  writes:
>
> > On 12 August 2014 22:15, Steven D'Aprano  wrote:
> >> Compare the natural way of writing this:
> >>
> >> with open("spam") as spam, open("eggs", "w") as eggs, frobulate("cheese") as cheese:
> >>     # do stuff with spam, eggs, cheese
> >>
> >> versus the dynamic way:
> >>
> >> with ExitStack() as stack:
> >>     spam, eggs = [stack.enter_context(open(fname, mode))
> >>                   for fname, mode in zip(("spam", "eggs"), ("r", "w"))]
> >>     cheese = stack.enter_context(frobulate("cheese"))
> >>     # do stuff with spam, eggs, cheese
> >
> > You wouldn't necessarily switch at three. At only three, you have lots
> > of options, including multiple nested with statements:
> >
> > with open("spam") as spam:
> >     with open("eggs", "w") as eggs:
> >         with frobulate("cheese") as cheese:
> >             # do stuff with spam, eggs, cheese
> >
> > The "multiple context managers in one with statement" form is there
> > *solely* to save indentation levels, and overuse can often be a sign
> > that you may have a custom context manager trying to get out:
> >
> > @contextlib.contextmanager
> > def dish(spam_file, egg_file, topping):
> >     with open(spam_file) as spam, open(egg_file, 'w') as eggs, frobulate(topping) as cheese:
> >         yield spam, eggs, cheese
> >
> > with dish("spam", "eggs", "cheese") as (spam, eggs, cheese):
> >     # do stuff with spam, eggs & cheese
> >
> > ExitStack is mostly useful as a tool for writing flexible custom
> > context managers, and for dealing with context managers in cases where
> > lexical scoping doesn't necessarily work, rather than being something
> > you'd regularly use for inline code.
> >
> > "Why do I have so many contexts open at once in this function?" is a
> > question developers should ask themselves in the same way it's worth
> > asking "why do I have so many local variables in this function?"
>
> Multiline with-statement can be useful even with *two* context
> managers. Two is not many.
>
> Saving indentation levels alone is a worthy goal. It can affect
> readability and the perceived complexity of the code.
>
> Here's how I'd like the code to look:
>
>   with (open('input filename') as input_file,
>         open('output filename', 'w') as output_file):
>       # code with list comprehensions to transform input file into output file
>
> Even one additional unnecessary indentation level may force you to split
> list comprehensions across several lines (less readable) and/or use
> shorter names (less readable). Or it may force you to move the inline code
> into a separate named function prematurely, solely to preserve the
> indentation level (which may also be less readable), i.e.,
>
>   with ... as input_file:
>       with ... as output_file:
>           ... #XXX indentation level is lost for no reason
>
>   with ... as infile, ... as outfile: #XXX shorter names
>       ...
>
>   with ... as input_file:
>       with ... as output_file:
>           transform(input_file, output_file) #XXX unnecessary function
>
> And (nested() can be implemented using ExitStack):
>
>   with nested(open(..),
>               open(..)) as (input_file, output_file):
>       ... #XXX less readable
>
> Here's an example where nested() won't help:
>
>   def get_integers(filename):
>       with (open(filename, 'rb', 0) as file,
>             mmap.mmap(file.fileno(), 0, access=mmap.ACCESS_READ) as mmapped_file):
>           for match in re.finditer(br'\d+', mmapped_file):
>               yield int(match.group())
>
> Here's another:
>
>   with (open('log' + 'some expression that generates filename', 'a') as logfile,
>         redirect_stdout(logfile)):
>       ...
>
Just a thought, wouldn't it be a bit weird that:
with (a as b, c as d): "works"
with (a, c): "boom"
with (a as b, c): ?
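
For reference, the nested() helper mentioned above can be sketched on top of
ExitStack along these lines (a minimal, untested sketch; the name nested is
simply the one used in the quoted mail):

    from contextlib import ExitStack, contextmanager

    @contextmanager
    def nested(*managers):
        # Enter each manager in order; ExitStack unwinds whatever was already
        # entered if a later __enter__ fails, and again on normal exit.
        with ExitStack() as stack:
            yield tuple(stack.enter_context(m) for m in managers)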

>
> --
> Akira
>


Re: [Python-Dev] PEP 492: No new syntax is required

2015-04-26 Thread yoav glazner
How do you implement "async for"?
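
For concreteness, here is roughly what such a loop has to look like without
dedicated syntax, written in the pre-3.5 generator-coroutine style with
"yield from" standing in for "await". This is a sketch only: the Ticker class
is made up for illustration, and it needs an interpreter (3.5-3.10) where both
asyncio.coroutine and StopAsyncIteration exist.

    import asyncio

    class Ticker:
        """Toy asynchronous iterable in the generator-coroutine style."""
        def __init__(self, n):
            self.i, self.n = 0, n

        @asyncio.coroutine
        def __anext__(self):
            if self.i >= self.n:
                raise StopAsyncIteration
            yield from asyncio.sleep(0)   # pretend to wait for the next item
            self.i += 1
            return self.i

    @asyncio.coroutine
    def consume(ticker):
        # The spelled-out equivalent of:  async for item in ticker: print(item)
        while True:
            try:
                item = yield from ticker.__anext__()
            except StopAsyncIteration:
                break
            print(item)

    asyncio.get_event_loop().run_until_complete(consume(Ticker(3)))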

On Sun, Apr 26, 2015 at 11:21 PM, Mark Shannon  wrote:

> Hi,
>
> I was looking at PEP 492 and it seems to me that no new syntax is required.
>
> Looking at the code, it does four things; all of which, or a functional
> equivalent, could be done with no new syntax.
> 1. Make a normal function into a generator or coroutine. This can be done
> with a decorator.
> 2. Support a parallel set of special methods starting with 'a' or 'async'.
> Why not just use the current set of special methods?
> 3. "await". "await" is an operator that takes one argument and produces a
> single result, without altering flow control, and can thus be replaced by a
> function.
> 4. Asynchronous with statement. The PEP lists the equivalent as "with
> (yield from xxx)" which doesn't seem so bad.
>
> Please don't add unnecessary new syntax.
>
> Cheers,
> Mark.
>
> P.S. I'm not objecting to any of the other new features proposed, just the
> new syntax.


Re: [Python-Dev] python 3 niggle: None < 1 raises TypeError

2014-02-14 Thread yoav glazner
On Feb 14, 2014 1:13 PM, "Oleg Broytman"  wrote:
>
> > On Fri, Feb 14, 2014 at 09:54:35PM +1100, Chris Angelico  wrote:
> > So definitely SQL's handling of NULL should not be any sort of guide
> > as regards Python's treatment of None.
>
>Why not? Just make the order different for CPython and PyPy, Cython
> and Jython. ;-)
>
> Oleg.
> --
>  Oleg Broytman            http://phdru.name/            p...@phdru.name
>            Programmers don't die, they just GOSUB without RETURN.

Unicode + bytes explodes in order to get the error sooner, i.e.
not waiting for the killer 8-bit string.

So None < 1 is the opposite!
sort(somelist) is a ticking bomb in py3.
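
For illustration only, a made-up snippet: the failure surfaces when the list
finally gets sorted, possibly far away from where the None slipped in.

    values = [3, 1, None, 2]   # the None might come from a .get() far away
    values.sort()              # Python 3 raises TypeError here: NoneType and
                               # int are not orderable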


Re: [Python-Dev] Python startup time

2013-10-09 Thread yoav glazner
I'm not sure dropping imports is the best way to go, since every Python
script/app will import common modules right at the start, and it will still
seem like the interpreter boot is slow.

Making modules load faster seems like a better approach.




On Thu, Oct 10, 2013 at 3:18 AM, Eric Snow wrote:

> On Wed, Oct 9, 2013 at 8:30 AM, Christian Heimes  wrote:
> > The os module imports MutableMapping from collections.abc. That import
> > adds collections, collections.abc and eight more modules. I'm not sure
> > if we can do anything about it, though.
>
> Well, that depends on how much we want to eliminate those 10 imports.
> :)  Both environ and environb could be turned into lazy wrappers
> around an _Environ-created-when-needed.  If we used a custom module
> type for os [1], then adding descriptors for the two attributes is a
> piece of cake.  As it is, with a little metaclass magic (or even with
> explicit wrapping of the various dunder methods), we could drop those
> 10 imports from startup.
>
> -eric
>
> [1] This probably wouldn't be a great idea considering that
> undoubtedly some code depends on "type(os) is types.ModuleType".
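
A rough sketch of that descriptor idea, assuming an interpreter where a
module's __class__ can be reassigned (CPython 3.5+); the attribute below uses
a plain dict as a stand-in for the real os._Environ, so this is illustrative
rather than a drop-in patch:

    import sys
    import types

    class _LazyModule(types.ModuleType):
        @property
        def environ(self):
            try:
                return self.__dict__['_cached_environ']
            except KeyError:
                # A real os module would import collections.abc and build
                # _Environ here, on first access, instead of at startup.
                value = self.__dict__['_cached_environ'] = {}
                return value

    # Swap this module's class so the property above takes effect.
    sys.modules[__name__].__class__ = _LazyModule

    import __main__            # when run as a script, this module is __main__
    print(__main__.environ)    # built lazily on first access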


Re: [Python-Dev] Moving forward with the concurrent package

2011-08-11 Thread yoav glazner
On Thu, Aug 11, 2011 at 10:56 AM, Nick Coghlan  wrote:

> On Thu, Aug 11, 2011 at 5:07 PM, Antoine Pitrou  wrote:
> > On Thu, 11 Aug 2011 09:03:35 +1000,
> > Nick Coghlan  wrote:
> >> On Thu, Aug 11, 2011 at 4:55 AM, Brian Curtin  wrote:
> >> > Now that we have concurrent.futures, is there any plan for
> >> > multiprocessing to follow suit? PEP 3148 mentions a hope to add or
> move
> >> > things in the future [0], which would be now.
> >>
> >> As Jesse said, moving multiprocessing or threading wholesale was never
> >> part of the plan. The main motivator of that comment in PEP 3148 was
> >> the idea of creating 'concurrent.pool', which would provide a
> >> concurrent worker pool API modelled on multiprocessing.Pool that
> >> supported either threads or processes as the back end, just like the
> >> executor model in concurrent.futures.
> >
> > Executors *are* pools, so I don't know what you're talking about.
>

Also, the Pool from multiprocessing "works" for both threads and processes:

from multiprocessing.pool import Pool as ProcessPool
from multiprocessing.dummy import Pool as ThreadPool
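
A small, self-contained illustrative sketch (the worker function and pool size
are made up) showing that the same calls work with either back end:

    from multiprocessing.pool import Pool as ProcessPool
    from multiprocessing.dummy import Pool as ThreadPool   # thread-backed, same interface

    def square(x):
        return x * x

    if __name__ == '__main__':
        for PoolClass in (ProcessPool, ThreadPool):
            pool = PoolClass(4)
            try:
                print(pool.map(square, range(10)))   # identical API for both pools
            finally:
                pool.close()
                pool.join()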