[Tim]
>> But it's a fact that they _are_ the same in naive time, which Python's
>> datetime single-timezone arithmetic implements:
>>
>> - A minute is exactly 60 seconds.
>> ...
[Chris Angelico ]
> No leap second support, presumably. Also feature?
Absolutely none, and absolutely "a feature", but
[Chris Barker]
> ...
> and in fact, everything Tim said can also apply to UTC time. We've had a lot
> of discussion on the numpy list about the difference between UTC and "naive"
> times, but for practical purposes, they are exactly the same -- until you
> try to convert to a known time zone anyway
[Terry Reedy ]
> To me, having 1 day be 23 or 25 hours of elapsed time on the DST transition
> days, as in Paul's alarm example, hardly ignores the transition point.
It's 2:56PM. What time will it be 24 hours from now? If your answer
is "not enough information to say, but it will be some minute
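Tim's naive-arithmetic point can be made concrete with the stdlib zoneinfo module (Python 3.9+); the specific date below is my own illustration of a US fall-back transition, not from the thread:

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

tz = ZoneInfo("America/New_York")
# 2:56 PM the day before the 2024 US fall-back transition (Nov 3, 2024).
dt = datetime(2024, 11, 2, 14, 56, tzinfo=tz)
later = dt + timedelta(hours=24)   # naive (wall-clock) arithmetic

print(later.time())    # 14:56:00 -- the same clock reading, as naive time promises
print(later - dt)      # 1 day, 0:00:00 -- same-zone subtraction is also naive
# The real elapsed time is 25 hours, because the clocks fell back an hour:
print(later.astimezone(timezone.utc) - dt.astimezone(timezone.utc))  # 1 day, 1:00:00
```

This is exactly the trade-off under discussion: within one zone, arithmetic ignores the transition; converting to UTC exposes it.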
[Tim]
>> Python didn't implement timezone-aware arithmetic at all within a
>> single time zone. Read what I wrote just above. It implements naive
>> arithmetic within a single time zone.
[Jon Ribbens ]
> This usage of "time zone" is confusing.
Ha! _All_ usages of "time zone" are confusing ;-)
[Paul Moore]
> ...
> I think the following statements are true. If they aren't, I'd
> appreciate clarification. I'm going to completely ignore leap seconds
> in the following - I hope that's OK, I don't understand leap seconds
> *at all* and I don't work in any application areas where they are
> re
[Brett Cannon ]
> Alexander and Tim, you okay with moving this conversation to a datetime-sig
> if we got one created?
Fine by me!
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
[Tres Seaver ]
> "Naive" alarm clocks (those which don't know from timezones) break human
> expectations twice a year, because their users have to be awake to fix
> them (or make the clock itself out-of-whack with real civil time for the
> hours between fixing and the actual transition). For confir
[Ronald Oussoren]
>> Treating time as UTC with conversions at the application edge might
>> be "cleaner" in some sense, but can make code harder to read for
>> application domain experts.
>>
>> It might be nice to have time zone aware datetime objects with the
>> right(TM) semantics, but those can
[Mark Lawrence ]
> To me a day is precisely 24 hours, no more, no less. I have no interest in
> messing about with daylight savings of 30 minutes, one hour, two hours or
> any other variant that I've not heard about.
>
> In my mission critical code, which I use to predict my cashflow, I use code
>
[Paul Moore]
>>>
[Tim]
>> Guido will never allow any aspect of "leap seconds" into the core,
[Chris Barker]
> really? that is a shame (and odd) -- it's a trick, because we don't know
> what leap seconds will be needed in the future, but other than that, it's
> not really any different than leap yea
[Tim]
>> timedelta objects only store days, seconds, and microseconds,
[Lennart Regebro ]
> Except that they don't actually store days. They store 24 hour
> periods,
Not really. A timedelta is truly an integer number of microseconds,
and that's all. The internal division into days, seconds and
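A quick illustration of what Tim describes here: the days/seconds/microseconds triple is just a normalized view of one signed microsecond count.

```python
from datetime import timedelta

td = timedelta(minutes=-1)
# The normalized internal form: days may be negative, but seconds and
# microseconds are always non-negative.
print(td.days, td.seconds, td.microseconds)   # -1 86340 0
# Conceptually it is a single signed count of microseconds:
print(td // timedelta(microseconds=1))        # -60000000
```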
[Ronald Oussoren]
> I totally agree with that, having worked on applications
> that had to deal with time a lot and including some where the
> end of a day was at 4am the following day. That app never
> had to deal with DST because not only are the transitions at
> night, they are also during the
[delightful new insight elided, all summarized by what remains ;-) ]
[Tim]
>> What somedatetime+timedelta really does is simpler than that: it
>> adds the number of microseconds represented by the timedelta to
>> somedatetime,
[Lennart]
> No it doesn't.
Lennart, I wrote the code. Both the Pyt
[Lennart Regebro ]
> Of course, I meant datetime objects.
> In everything else, I stand by my original claim. If you want naive
> datetime objects, you should use naive datetime objects.
That's tautological ("if you want X, you should use X"). I'm not sure
what you intended to say. But it's a fa
[Łukasz Rekucki ]
>> Maybe instead of trying to decide who is "wrong" and which approach is
>> "broken", Python just needs a more clear separation between timezone
>> aware objects and "naive" ones?
[Lennart Regebro ]
> Well, the separation is pretty clear already.
I preemptively ;-) agreed with
[Lennart Regebro]
>>> I have yet to see a use case for that.
[Tim]
>> Of course you have. When you address them, you usually dismiss them
>> as "calendar operations" (IIRC).
[Lennart]
> Those are not use cases for this broken behaviour.
>
> I agree there is a usecase for where you want to add
[Random832 ]
> ...
>
> Also, can someone explain why this:
> >>> ET = pytz.timezone("America/New_York")
> >>> datetime.strftime(datetime.now(ET) + timedelta(days=90),
> ... "%Y%m%d %H%M%S %Z %z")
> returns '20151210 214526 EDT -0400'
pytz lives in its own world here, and only use
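For contrast, a sketch of the same 90-day jump using the stdlib zoneinfo (Python 3.9+), which recomputes the UTC offset on access instead of freezing it at localize() time the way pytz does:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

ET = ZoneInfo("America/New_York")
# A summer (EDT) starting point; 90 days later falls in winter (EST).
dt = datetime(2015, 9, 11, 21, 45, 26, tzinfo=ET)
later = dt + timedelta(days=90)
# zoneinfo looks up the offset for the *result*, so %Z/%z come out right:
print(later.strftime("%Y%m%d %H%M%S %Z %z"))  # 20151210 214526 EST -0500
```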
[Chris Angelico ]
> What I'd like to hear (but maybe this won't be possible) would be
> "less-than is transitive if and only if <condition>", where <condition> might be
> something like "all of the datetimes are in the same timezone" or
> "none of the datetimes fall within a fold" or something. That would at
> least mak
[Nick Coghlan]
>>> Based on the UTC/local diagram from the "Mind the Gap" section, am I
>>> correct in thinking that the modified invariant that also covers times
>>> in a gap is:
>>>
>>> dt ==
>>> datetime.fromtimestamp(dt.astimezone(utc).astimezone(dt.tzinfo).timestamp())
>>>
>>> That is, for
[Tim]
> ...
> The
> top-level operation on the RHS is datetime.fromtimestamp(). However,
> it didn't pass a tzinfo, so it creates a naive datetime. Assuming dt
> was aware to begin with, the attempt to compare will always (gap or
> not) raise an exception.
Oops! In current Python, comparing nai
[Tim]
>>> ...
>>> The
>>> top-level operation on the RHS is datetime.fromtimestamp(). However,
>>> it didn't pass a tzinfo, so it creates a naive datetime. Assuming dt
>>> was aware to begin with, the attempt to compare will always (gap or
>>> not) raise an exception.
[Tim]
>> Oops! In current
[Tim]
>> Sure - no complaint. I was just saying that in the specific,
>> complicated, contrived expression Nick presented, that it always
>> returns False (no matter which aware datetime he starts with) would be
>> more of a head-scratcher than if it raised a "can't compare naive and
>> aware date
[Guido]
>>> I think we should change this in the PEP, except I can't find where
>>> the PEP says == should raise an exception in this case.
[Tim]
>> It doesn't - the only comparison behavior changed by the PEP is in
>> case of interzone comparison when at least one comparand is a "problem
>> time"
[Guido]
>> it is broken, due to the confusion about classic vs. timeline arithmetic
>> -- these have different needs but there's only one > operator.
[Alex]
> I feel silly trying to defend a design against its author. :-)
"Design" may be an overstatement in this specific case ;-)
I remember impl
[Nick Coghlan ]
> ...
> Sorry, what I wrote in the code wasn't what I wrote in the text, but I
> didn't notice until Guido pointed out the discrepancy. To get the
> right universal invariant, I should have normalised the LHS, not the
> RHS:
>
> dt.astimezone(utc).astimezone(dt.tzinfo) ==
>
[Steven D'Aprano]
>> ...
>> I think it is fair to say that out of the three functions, there is
>> consensus that randbelow has the most useful functionality in a crypto
>> context. Otherwise, people seem roughly equally split between the three
>> functions. There doesn't seem to be any use-case fo
[Alexander Belopolsky]
>> ...
>> but I would really like to see a change in the repr of negative
>> timedeltas:
>>
>> >>> timedelta(minutes=-1)
>> datetime.timedelta(-1, 86340)
>>
>> And str() is not much better:
>>
>> >>> print(timedelta(minutes=-1))
>> -1 day, 23:59:00
>>
>> The above does not qu
[Tim]
>> But I wouldn't change repr() - the internal representation is fully
>> documented, and it's appropriate for repr() to reflect documented
>> internals as directly as possible.
[Alex]
> Note that in the case of float repr, the consideration of user convenience
> did win over "reflect docume
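A tiny helper along the lines of what Alexander is asking for (illustrative only; `signed_str` is not a stdlib API):

```python
from datetime import timedelta

def signed_str(td):
    # Render negative timedeltas with a leading minus sign instead of
    # the normalized "-1 day, 23:59:00" form.
    if td < timedelta(0):
        return "-" + str(-td)
    return str(td)

print(str(timedelta(minutes=-1)))         # -1 day, 23:59:00
print(signed_str(timedelta(minutes=-1)))  # -0:01:00
```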
[Greg Ewing ]
> The Mersenne Twister is no longer regarded as quite state-of-the art
> because it can get into states that produce long sequences that are
> not very random.
>
> There is a variation on MT called WELL that has better properties
> in this regard. Does anyone think it would be a good
[Facundo Batista ]
> I'm seeing that our code increases the reference counting to Py_None,
> and I find this a little strange: isn't Py_None eternal and will never
> die?
Yes, but it's immortal in CPython because its reference count never
falls to 0 (it's created with a reference count of 1 to beg
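From the Python side you can watch the (tracked) count; note that since CPython 3.12's immortal objects (PEP 683), the number reported for None is a fixed sentinel rather than a live count:

```python
import sys

# None's refcount is huge but, pre-3.12, fully tracked; consistent
# Py_INCREF/Py_DECREF bookkeeping in C is what keeps it from hitting 0.
print(sys.getrefcount(None))
```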
[Brett Cannon]
>> And if we didn't keep its count accurately it would eventually hit
>> zero and constantly have its dealloc function checked for.
[Armin Rigo]
> I think the idea is really consistency. If we wanted to avoid all
> "Py_INCREF(Py_None);", it would be possible: we could let the refc
[Guido]
> After a fruitful discussion on python-ideas I've decided that it's fine to
> break lines *before* a binary operator. It looks better and Knuth recommends
> it.
> ...
> Therefore it is permissible to break before or
> after a binary operator, as long as the convention is consistent
> local
You may be interested in this seemingly related bug report:
http://bugs.python.org/issue26601
[Neil Schemenauer ]
> I was running Python 2.4.11 under strace and I noticed some odd
> looking system calls:
>
> mmap(NULL, 262144, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> 0x7f9
[Brett Cannon ]
> Can someone disable this person's subscription?
Done.
> On Mon, 25 Apr 2016 at 14:15 Kenny via Python-Dev
> wrote:
>>
>>
>> fopen Terminal.app.python.
>> 3.5.0.()
>>
>> def fopen Termina.app.python.3.5.0.()
>>
>> %add.%data(CDATA[])::true||false
>>
>> fclose();
>>
>> end Termi
[Tim Golden , on Kenny the "thingy" guy]
> Not subscribed; probably via gmane.
They were subscribed, but I already did the unsub.
> I've added him to a hold list via spam filter. See if that works.
So now we're doubly safe ;-)
On Sat, Oct 26, 2013 at 6:29 AM, A.M. Kuchling wrote:
> On Sat, Oct 26, 2013 at 05:34:05AM +0200, tim.peters wrote:
>> Fiddled Thread.join() to be a little simpler. Kinda ;-)
>> -# else it's a negative timeout - precise behavior isn't documented
>> -# then, but historically .joi
I was surprised to find that "this works": if you want to find all
_overlapping_ matches for a regexp R, wrap it in
(?=(R))
and feed it to (say) finditer. Here's a very simple example, finding
all overlapping occurrences of "xx":
pat = re.compile("(?=(xx))")
for it in pat.finditer
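A runnable sketch of the trick (the input string here is illustrative, not from the original message):

```python
import re

# Wrapping the pattern in a zero-width lookahead means each "match"
# consumes no characters, so the scanner tries again at every position,
# while the inner group still captures the overlapping occurrence.
pat = re.compile(r"(?=(xx))")
print([(m.start(), m.group(1)) for m in pat.finditer("xxxx")])
# [(0, 'xx'), (1, 'xx'), (2, 'xx')]
```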
[Tim]
>> Is that a feature? Or an accident? It's very surprising to find a
>> non-empty match inside an empty match (the outermost lookahead
>> assertion).
[Paul Moore]
> Personally, I would read (?=(R))" as finding an empty match at a point
> where R starts. There's no implication that R is in
[Terry Reedy]
> Should stdlib code use assert at all?
Of course, and for exactly the same reasons we use `assert()` in
Python's C code: to verify preconditions, postconditions, and
invariants that should never fail. Assertions should never be used
to, e.g., verify user-supplied input (or anythin
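A minimal sketch of that guideline (a hypothetical function, not stdlib code): raise real exceptions for bad input, and reserve assert for conditions that cannot fail unless the code itself is wrong.

```python
def chunked(seq, n):
    if n <= 0:
        # User-supplied input: validate with a real exception, never an
        # assert, since asserts vanish under "python -O".
        raise ValueError("n must be positive")
    out = [seq[i:i + n] for i in range(0, len(seq), n)]
    # Internal postcondition: nothing was lost in the chunking.
    assert sum(len(c) for c in out) == len(seq)
    return out

print(chunked([1, 2, 3, 4, 5], 2))  # [[1, 2], [3, 4], [5]]
```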
[Antoine Pitrou]
> Alexandre Vassalotti (thanks a lot!) has recently finalized his work on
> the PEP 3154 implementation - pickle protocol 4.
>
> I think it would be good to get the PEP and the implementation accepted
> for 3.4. As far as I can say, this has been a low-controvery proposal,
> and it
[Guido]
>> Clearly the framing is the weakest point of the PEP (== elicits the most
>> bikeshedding). I am also unsure about the value of framing when pickles are
>> written to strings.
[Antoine]
> It hasn't much value in that case,
It has _no_ value in that case, yes? It doesn't appear to have
[Tim]
>> But I wonder why it isn't done with a new framing opcode instead (say,
>> FRAME followed by 8-byte count). I suppose that would be like the
>> "prefetch" idea, except that framing opcodes would be mandatory
>> (instead of optional) in proto 4. Why I initially like that:
>>
>> - Uniform d
[Tim]
>> ...
>> It was already annoying when the PROTO opcode was introduced,
>> and the size of small pickles increased by 2 bytes. That
>> added up too :-(
[Antoine]
> Are very small pickles that size-sensitive? I have the impression that
> if 8 bytes vs. e.g. 15 bytes makes a difference for yo
[Tim]
>> But it has a different kind of advantage: PREFETCH was optional. As
>> Guido said, it's annoying to bloat the size of small pickles (which
>> may, although individually small, occur in great numbers) by 8 bytes
>> each. There's really no point to framing small chunks of data, right?
[A
[Antoine]
> Well, sending oceans of tiny integers will also incur many system calls
> and additional synchronization costs, since sending data on a
> multiprocessing Queue has to acquire a semaphore. So it generally
> sounds like a bad idea, IMHO.
>
> That said, I agree with:
>> Since pickle intend
[Antoine]
> Yet another possibility: keep framing but use a variable-length
> encoding for the frame size:
>
> - first byte: bits 7-5: N (= frame size bytes length - 1)
> - first byte: bits 4-0: first 5 bits of frame size
> - remaining N bytes: remaining bits of frame size
>
> With this scheme, very
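A sketch of how that variable-length scheme could be implemented (assuming the trailing N bytes are big-endian; Antoine's message doesn't pin down the byte order):

```python
def encode_size(n):
    # First byte: bits 7-5 hold N (number of extra bytes), bits 4-0 hold
    # the most significant 5 bits of the size; N more bytes follow.
    nbits = max(n.bit_length(), 1)
    extra = max(-(-(nbits - 5) // 8), 0)     # ceil((nbits - 5) / 8), min 0
    out = bytearray([(extra << 5) | (n >> (8 * extra))])
    out += (n & ((1 << (8 * extra)) - 1)).to_bytes(extra, "big")
    return bytes(out)

def decode_size(buf):
    extra = buf[0] >> 5
    n = buf[0] & 0x1F
    for b in buf[1:1 + extra]:
        n = (n << 8) | b
    return n

print(encode_size(31).hex(), encode_size(40).hex())  # 1f 2028
```

Sizes up to 31 fit in a single byte; Tim's example of 40 (six significant bits) needs one extra byte under this reading.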
[Guido]
> Food for thought: maybe we should have variable-encoding lengths for all
> opcodes, rather than the current cumbersome scheme?
Yes, but not for protocol 4 - time's running out fast for that. When
we "only" had the XXX1, XXX2, and XXX4 opcodes, it was kinda silly,
but after adding XXX8 f
[Richard Oudkerk]
> I tried using multiprocessing.Pipe() and send_bytes()/recv_bytes() to send
> messages between processes:
>
> 8 bytes messages -- 525,000 msgs/sec
> 15 bytes messages -- 556,000 msgs/sec
>
> So the size of small messages does not seem to make much difference.
To the contrar
[Antoine]
>>> - first byte: bits 7-5: N (= frame size bytes length - 1)
>>> - first byte: bits 4-0: first 5 bits of frame size
>>> - remaining N bytes: remaining bits of frame size
[Tim]
>> I'm unclear on how that would work for, e.g., encoding 40 =
>> 0b000101000. That has 6 significant bits. Wo
[Antoine]
> ...
> Well, it's a question of cost / benefit: does it make sense to optimize
> something that will be dwarfed by other factors in real world
> situations?
For most of my career, a megabyte of RAM was an unthinkable luxury.
Now I'm running on an OS that needs a gigabyte of RAM just to
[Guido]
> So using an opcode for framing is out? (Sorry, I've lost track of the
> back-and-forth.)
It was never in ;-) I'd *prefer* one, but not enough to try to block
the PEP. As is, framing is done at a "lower level" than opcode
decoding. I fear this is brittle, for all the usual "explicit is
[Tim]
>> ...
>> better than implicit" kinds of reasons. The only way now to know that
>> you're looking at a frame size is to keep a running count of bytes
>> processed and realize you've reached a byte offset where a frame size
>> "is expected".
[Antoine]
> That's integrated to the built-in buff
[Tim]
>> ...
>> But if some _other_ implementation of unpickling didn't give a hoot
>> about framing, having an explicit opcode means that implementation
>> could ignore the whole scheme very easily: just implement the FRAME
>> opcode in *its* opcode-decoding loop to consume the FRAME argument,
>>
[Antoine]
>>> Ahah, ok, I see where you're going. But how many other implementations
>>> of unpickling are there?
[Tim]
>> That's something you should have researched when writing the PEP ;-)
>> How many implementations of Python aren't CPython? That's probably
>> the answer. I'm not an expert o
[Martin v. Löwis]
> ...
> AFAICT, the real driving force is the desire to not read-ahead
> more than the pickle is long. This is what complicates the code.
> The easiest (and most space-efficient) solution to that problem
> would be to prefix the entire pickle with a data size field
> (possibly in
[Tim]
>> BTW, I'm not a web guy: in what way is HTTP chunked transfer mode
>> viewed as being flawed? Everything I ever read about it seemed to
>> think it was A Good Idea.
[Martin]
> It just didn't work for some time, see e.g.
>
> http://bugs.python.org/issue1486335
> http://bugs.python.org/iss
[Antoine]
> I have made two last-minute changes to the PEP:
>
> - addition of the FRAME opcode, as discussed with Tim, and keeping a
> fixed 8-byte frame size
Cool!
> - addition of the MEMOIZE opcode, courtesy of Alexandre, which replaces
> PUT opcodes in protocol 4 and helps shrink the size
[Alexandre Vassalotti]
> Looking at the different options available to us:
>
> 1A. Mandatory framing
> (+) Allows the internal buffering layer of the Unpickler to rely
> on the presence of framing to simplify its implementation.
> (-) Forces all implementations of pickle to in
[Christian Heimes]
> the buildbots are flaky because two repr() tests for userdict and
> functools.partial fail every now and then. The test cases depend on a
> fixed order of keyword arguments in the representation of userdict and
> partial instances. The improved hash randomization of PEP 456 shows
[guido]
> http://hg.python.org/cpython/rev/6bee0fdcba39
> changeset: 87468:6bee0fdcba39
> user: Guido van Rossum
> date: Sat Nov 23 15:09:16 2013 -0800
> summary:
> asyncio: Change bounded semaphore into a subclass, like
> threading.[Bounded]Semaphore.
>
> files:
> Lib/asyncio
[Brett]
> On 2008-12-03, Python 3.0.0 was released by Barry.
Dang - nobody ever tells me anything. Congratulations! It's about
time 3.0.0 was released ;-)
> ...
> Thanks to those in the community who stuck by the dev team and had faith
> we knew what we were doing and have continued to help eve
[Barry]
> ...
> I don't think the API *has* to change in a backward incompatible way either.
> The methods could be given **kws with a bit of hackery to figure out whether
> the old API was being used (keys: int, default, maxwidth) or the new API was
> being used (keys: _int and _maxwidth). Yeah i
[Daniel Holth]
> But who could forget nzjrs' wasp UAV software line 107, using
> int=float?
> https://github.com/nzjrs/wasp/blob/master/sw/groundstation/wasp/__init__.py#L107
I could forget it ;-) The remarkable thing about the two instances of:
random.randrange(0.0,1.0, int=float)
in tha
[Dan Stromberg]
> I keep hearing naysayers, nay saying about Python 3.x.
>
> Here's a 9 question, multiple choice survey I put together about
> Python 2.x use vs Python 3.x use.
>
> I'd be very pleased if you could take 5 or 10 minutes to fill it out.
If you run Python 3 while filling out the surv
[Benjamin Peterson]
> ...
> This is the first time I ever installed a version of Python which
> caused something called "MSIEXEC.EXE"
msiexec.exe is not part of the Python download. msiexec.exe is part
of the Windows operating system, and is precisely the program that
installs .msi files (which
[Bob Hanson]
> ...
> Didn't think this likely, but I have now quintuple-checked
> everything again. Everything says I have the real McCoy
> msiexec.exe in its proper location -- just upgraded another app
> which used MSI installers and it went as per normal.
That sounds most likely to me too ;-)
[Bob Hanson]
> Forgive me, but I'm an old man with very poor vision. Using my
> magnifying glass, I see it is two very long URLs ending with
> something like after the blah-blah: < ... akametechnology.com>
>
> More precisely, these two IP addresses:
> 23.59.190.113:80
> 23.59.190.106:80
So
[Bob Hanson]
>> ... magnifying glass, I see it is two very long URLs ending with
>> something like after the blah-blah: < ... akametechnology.com>
[Stephen J. Turnbull]
> I suppose you tried cutting and pasting? Note that you don't need to
> be exact as long as you're pretty sure you got the whol
The behavior of None in comparisons is intentional in Python 3. You
can agitate to change it, but it will probably die when Guido gets
wind of it ;-)
The catch-all mixed-type comparison rules in Pythons 1 and 2 were only
intended to be "arbitrary but consistent". Of course each specific
release
[M.-A. Lemburg]
> ...
> None worked as "compares less than all other objects" simply due
> to the fact that None is a singleton and doesn't implement the
> comparison slots (which for all objects not implementing rich
> comparisons, meant that the fallback code triggered in Python 2).
And the fall
[Tim]
>> Guido wanted to drop all the "arbitrary but consistent" mixed-type
>> comparison crud for Python 3.
[Greg Ewing]
> Nobody is asking for a return to the arbitrary-but-
> [in]consistent mess of Python 2, only to bring
> back *one* special case, i.e. None comparing less
> than everything els
[Aahz ]
> [I'm nomail -- Cc me if you care whether I see followups]
>
> https://github.com/BonzaiThePenguin/WikiSort/tree/master
>
> WikiSort is a stable bottom-up in-place merge sort based on the work
> described in "Ratio based stable in-place merging", by Pok-Son Kim and
> Arne Kutzner
There's been a bit of serious study on this. The results are still
open to interpretation, though ;-) Here's a nice summary:
http://whathecode.wordpress.com/2011/02/10/camelcase-vs-underscores-scientific-showdown/
of-course-dashes-are-most-natural-ly y'rs - tim
On Thu, Apr 24, 2014 at 11:25 A
[Tim]
>> There's been a bit of serious study on this. The results are still
>> open to interpretation, though ;-) Here's a nice summary:
>>
>>
>> http://whathecode.wordpress.com/2011/02/10/camelcase-vs-underscores-scientific-showdown/
[Terry Reedy]
> The linked poll is almost evenly split, 52% t
[Raymond Hettinger]
> ...
> I'm not at all comfortable with the wording of the second sentence.
> I was the author of the SystemRandom() class and I only want
> to guarantee that it provides access to the operating system's
> source of random numbers. It is a bold claim to guarantee that
> it is cr
[Nick Coghlan]
>> I'm OK with a target scope declaration construct having
>> lexical-scope-dependent behaviour - exactly what "nonlocal NAME" will
>> do depends on both the nature of the current scope,
[Greg Ewing]
> Yes, but my point is that having an explicit "parentlocal" scope
> declar
[Tim]
>> If the parent has a matching parentlocal declaration for the same
>> name then the original really refers to the grandparent - and so on.
[Greg]
> Ah, I missed that part, sorry -- I withdraw that particular
> objection.
Good! I have another reply that crossed in the mail.
[Guido]
> ...
> Given that definition of `__parentlocal`, in first approximation the
> scoping rule proposed by PEP 572 would then be: In comprehensions
> (which in my use in the PEP 572 discussion includes generator
> expressions) the targets of inline assignments are automatically
> endowed with a
[Chris Barker]
> ...
> So what about:
>
> l = [x:=i for i in range(3)]
>
> vs
>
> g = (x:=i for i in range(3))
>
> Is there any way to keep these consistent if the "x" is in the regular
> local scope?
I'm not clear on what the question is. The list comprehension would bind
`l` to [0, 1, 2] and le
[Chris Barker]
>>> So what about:
>>>
>>> l = [x:=i for i in range(3)]
>>>
>>> vs
>>>
>>> g = (x:=i for i in range(3))
>>>
>>> Is there any way to keep these consistent if the "x" is in the regular
>>> local scope?
[Tim]
>> I'm not clear on what the question is. The list comprehension would
>> bind `
[Chris]
> yes, it was a contrived example, but the simplest one I could think of off
> the top of my head that re-bound a name in the loop -- which was what I
> thought was the entire point of this discussion?
But why off the top of your head? There are literally hundreds & hundreds
of prior mess
[Tim]
>> Regardless of how assignment expressions work in listcomps and genexps,
>> this example (which uses neither) _will_ rebind the containing block's
>> `x`:
>>
>>     [x := 1]
[Chris Barker]
> This reinforces my point that it’s not just about comprehensions,
I agree, it's not at all - and
[Nick Coghlan]
> ...
> "NAME := EXPR" exists on a different level of complexity, since it
> adds name binding in arbitrary expressions for the sake of minor
> performance improvement in code written by developers that are
> exceptionally averse to the use of vertical screen real esta
[Nick Coghlan]
>>> "NAME := EXPR" exists on a different level of complexity, since it
>>> adds name binding in arbitrary expressions for the sake of minor
>>> performance improvement in code written by developers that are
>>> exceptionally averse to the use of vertical screen real estate,
>>> ...
I think I'll bow out of this now. It's just too tedious.
Like here:
[Nick]
> I never said the motivation was to gain performance relative to the
> two-statement version - I said the motivation given in the PEP is to
> gain performance relative to the *repeated subexpression* version,
> *without*
[Tim]
>> ...
>> So, ya, when someone claims [assignment expressions will] make
>> Python significantly harder to teach, I'm skeptical of that claim.
[Michael Selik]
> I don't believe anyone is making that claim.
I haven't seen it in this specific thread, but the larger discussion has
been going o
[Michael Selik]
>>> My worry is that assignment expressions will add about 15 to 20
>>> minutes to my class and a slight discomfort.
[Tim]
>> So not intractable - which is my high-order bit ;-)
>>
>> For those who want more bits of precision (perhaps Guido), while
>> quantification is good, it nee
[Rob Cliffe]
> It's late to raise this,
By months, yes ;-)
> but what exactly are the objections to the syntax
>     expr -> name  # or variations such as expr => name
> instead of
>     name := expr
>
> The PEP mentions that this syntax does not have a problem that "as"
[Chris Barker]
> However, generator expressions (why don’t we call them generator
> comprehensions?)
Because nobody really liked the "iterator comprehensions" or "accumulator
displays" they were variously called at the start.
https://mail.python.org/pipermail/python-dev/2003-October/03
[INADA Naoki]
> ...
> On the other hand, I understand PEP 572 allows clever code
> simplifies tedious code. It may increase readability of non-dirty code.
The latter is the entire intent, of course. We can't force people to write
readable code, but I don't understand the widespread assumption th
[Tim]
>> I really don't know what Guido likes best about this, but for me it's
>> the large number of objectively small wins in `if` and `while`
>> contexts. They add up. That conclusion surprised me. That there are
>> occasionally bigger wins to be had is pure gravy.
[Serhiy Storch
Just a quickie:
[Steve Dower]
> The PEP uses the phrase "an assignment expression
> occurs in a comprehension" - what does this mean?
It's about static analysis of the source code, at compile-time, to
establish scopes. So "occurs in" means your eyeballs see an assignment
expression in a compr
[Serhiy Storchaka]
> Sorry, this PEP was rewritten so many times that I missed your
> [Tim's] Appendix.
>
>> while total != (total := total + term):
>>     term *= mx2 / (i*(i+1))
>>     i += 2
>> return total
>
> This code looks cleverer than the original while loop with a br
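The quoted fragment is part of a series-summation loop; a self-contained reconstruction (the initializations and the cosine series are my assumptions, not necessarily the Appendix's exact code) shows the "loop until the total stops changing" idiom being debated:

```python
import math

def cos_series(x):
    # Sum 1 - x**2/2! + x**4/4! - ... until adding the next term no longer
    # changes the float total. The walrus makes the old/new comparison and
    # the update happen inside one condition: the left operand is the old
    # total, the right operand rebinds it.
    mx2 = -x * x
    total, term, i = 0.0, 1.0, 1
    while total != (total := total + term):
        term *= mx2 / (i * (i + 1))
        i += 2
    return total

print(cos_series(1.0))   # ~0.5403023058681398, i.e. cos(1)
```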
[Yury Selivanov]
> Wow, I gave up on this example before figuring this out (and I also
> stared at it for a good couple of minutes). Now it makes sense. It's
> funny that this super convoluted snippet is shown as a good example
> for PEP 572. Although almost all PEP 572 examples are que
[Steve Dower]
> Okay, so as far as the specification goes, saying "assignment
> expressions in comprehensions get or create a cell variable in the
> defining scope and update its value" satisfies me just fine (or some
> other wording that more closely mirrors the actual behaviour - all my
> "x, y := ..." is invalid. It can be tricked using "while (x_y :=...)[0]:
> x, y = x_y; ...". IMHO it's not worth it.
Indeed, it's quite worth _not_ doing it :-)
> (B)
>
> while True:
>     coeff = _dlog10(c, e, places)
>     # assert len(str(abs(coef
[Steve Dower]
> In that case, please provide more examples of how it should work when
> the assignment expression appears to define a variable in a scope that
> is not on the call stack.
>
Sorry, I'm not clear what you're asking about. Python's scopes are
determined statically, at compile-time -
[Ivan Pozdeev]
> Victor Stinner in "Assignment expression and coding style: the while
> True case" and others have brought to attention
>
> that the AE as currently written doesn't support all the capabilities of
> the assignment statement, namely:
>
> * tuple unpacking
> * augmented assignment
>
[Victor Stinner]
> I propose to start the discussion about "coding style" (where are
> assignment expressions appropriate or not?) with the "while True"
> case.
[Steven D'Aprano]
> We don't even have an official implementation yet, and you already want
> to start prescribing coding styl
[Victor Stinner]
> FYI I'm trying to use assignment expressions on the stdlib because
> *all* examples of the PEP 572 look artificial to me.
>
All the examples in my Appendix A were derived from real, pre-existing code
(although mostly my own). It's turned out that several others with your
compl