[Alexander Belopolsky]
> ...
> I also think that the dreadfulness of mistyping = where == is expected
> is exaggerated.
There are a number of core devs who would be rabidly opposed to allowing
that confusion in Python, due to still-remembered real-life nightmares in
C. For example, me ;-) It o
[Alexander Belopolsky]
>>> ...
>>> I also think that the dreadfulness of mistyping = where == is expected
>>> is exaggerated.
[Tim]
>> There are a number of core devs who would be rabidly opposed
>> to allowing that confusion in Python, due to still-remembered
>> real-life nightmares in C. For
[Alexander Belopolsky]
> > Do we want to protect users who
> > cannot tell = from == so much that we are willing to cause Python to be
> > the first language with two non-interchangeable assignment operators?
>
[Steven D'Aprano]
Not even close to the first. Go beat us to it -- it has both = an
[Tim]
> It doesn't even exist yet, but Googling on
> python operator :=
>
> already returns a directly relevant hit on the first page for me:
>
>
> https://stackoverflow.com/questions/26000198/what-does-colon-equal-in-python-mean
> ...
>
[Guido, who later apologized for unclear quoting, hi
[Steven D'Aprano]
> I'd just like to point out that
> given the existence of float NANs, there's a case to be made for having
> separate <> and != operators with != keeping the "not equal" meaning and
> the <> operator meaning literally "less than, or greater than".
>
> py> NAN != 23
> True
>
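Steven's distinction can be checked directly; a quick runnable sketch (the value 23 is from his example, the rest is assumed):

```python
nan = float("nan")
# != keeps its "not equal" meaning: NaN compares unequal to everything
assert (nan != 23) is True
# but a literal "less than, or greater than" predicate answers differently
assert (nan < 23 or nan > 23) is False
```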
[Guido]
> ...
> As to why you might want to use := in a function call, I could imagine
> writing
>
> if validate(name := re.search(pattern, line).group(1)):
>     return name
>
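A runnable version of Guido's sketch. `pattern` and `validate` here are hypothetical stand-ins (his snippet names them but doesn't define them), and his chain assumes `re.search()` matches, so a guard is added:

```python
import re

def validate(s):
    # hypothetical validator, assumed for illustration
    return s.isidentifier()

def first_valid_name(line, pattern=r"name=(\w+)"):
    m = re.search(pattern, line)
    # the walrus binds `name` inside the call, as in Guido's example
    if m and validate(name := m.group(1)):
        return name
    return None
```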
When I was staring at my code, I never mentioned the very first plausible
use I bumped into (in code I was activ
>
> [Tim]
> > When I was staring at my code, I never mentioned the very first
> > plausible use I bumped into (in code I was actively working on at the
> > time):
> >
> > while not probable_prime(p := randrange(lo, hi)):
> >     pass
> > # and now `p` is likely a random prime in range
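Tim's loop runs as-is given some `probable_prime()`; a self-contained sketch (the trial-division body below is assumed, his real code used a probabilistic test):

```python
from random import randrange

def probable_prime(n):
    # stand-in via trial division; the name is from Tim's snippet,
    # the body is an assumption for illustration
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

lo, hi = 10_000, 20_000
while not probable_prime(p := randrange(lo, hi)):
    pass
# `p` is now a prime in [lo, hi)
```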
>
[Terry Ree
[Eric V. Smith]
>> there is at least one place
>> where the grammar does forbid you from doing something that would
>> otherwise be allowable: decorators.
>
[Greg Ewing]
> And that was a controversial issue at the time. I don't remember
there being much of an objective argument for the
[Radim Řehůřek ]
> one of our Python projects calls for pretty heavy, low-level optimizations.
> We went down the rabbit hole and determined that having access to
> PyList_GET_ITEM(list), PyInt_AS_LONG(int) and PyDict_GetItem(dict, unicode)
> on Python objects **outside of GIL** might be a good-
[Tim]
>> There is no intention to support GIL-free access to any Python
>> objects. So that's the contract: "All warranties are null & void if
>> you do just about anything while not holding the GIL".
>
[Antoine]
> Actually, accessing a Py_buffer that's been obtained in the regular way
(e.g
[Serhiy Storchaka]
> Recently Barry shown an example:
>
> assert len(subdirs := list(path.iterdir())) == 0, subdirs
>
> It looks awful to me. It looks even worse than using asserts for
> validating the user input. The assert has a side effect, and it depends
> on the interpreter option (-O).
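Serhiy's objection can be demonstrated without rerunning the interpreter, by compiling at different optimization levels (`optimize=1` corresponds to `-O`; the `[1, 2]` payload is assumed in place of `path.iterdir()`):

```python
src = "assert len(subdirs := [1, 2]) == 0, subdirs"

ns_normal = {}
try:
    exec(compile(src, "<demo>", "exec", optimize=0), ns_normal)
except AssertionError:
    pass  # the assertion fired, but the walrus binding already happened

ns_optimized = {}
exec(compile(src, "<demo>", "exec", optimize=1), ns_optimized)

assert "subdirs" in ns_normal         # side effect occurred
assert "subdirs" not in ns_optimized  # whole statement stripped under -O
```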
[Tim]
>> Same as the docs, I use "Python object" to mean a pointer to PyObject.
>> In that sense, a Py_buffer is no more a "Python object" than, e.g., is a
>> Py_ssize_t.
>
[Antoine]
> Yes, and the payload of an int object is not a "Python object".
> The OP mentioned PyInt_AS_LONG as an exam
[Barry Warsaw]
> Thanks! I thought it was cute. It was just something that occurred to me
> as I was reviewing some existing code. The intent wasn’t to use `subdirs`
> outside of the assert statement, but I’m warm to it because it means I
> don’t have to do wasted work outside of the assert sta
[Radim Řehůřek ]
> Thanks Tim. That's one unambiguous answer.
I'm glad that part came across ;-)
> I did consider posting to python-list, but this seemed somewhat
> python-devish.
>
I agree it's on the margin - I don't mind.
Any appetite for documenting which foundational functions are const
I wrote Python's sort, so I may know something about it ;-) To my
eyes, no, there's not "an issue" here, but a full explanation would be
very involved.
For some sorting algorithms, it's possible to guarantee a redundant
comparison is never made. For example, a pure insertion sort.
But Python's
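Tim's point about pure insertion sort can be made concrete by counting comparisons (this sketch is assumed for illustration, not from the thread): each comparison pairs the element being inserted with a distinct already-placed element, so no pair is ever compared twice.

```python
def insertion_sort(a):
    # Pure insertion sort with a comparison counter. No pair of
    # elements is compared more than once, so the count is at most
    # n*(n-1)//2 for n elements.
    a = list(a)
    n_cmp = 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            n_cmp += 1
            if a[j - 1] <= a[j]:
                break
            a[j - 1], a[j] = a[j], a[j - 1]
            j -= 1
    return a, n_cmp

out, n_cmp = insertion_sort([5, 2, 4, 6, 1, 3])
```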
[Joseph C. Sible
> I'm used to signing CLA's that require nothing beyond a name and a check
> box. When I went to sign the PSF Contributor Agreement so I can submit
> a PR for CPython, I was surprised to see that it wants my address. Why
> does the Python Software Foundation need this, especially
[MRAB]
>> If I want to cache some objects, I put them in a dict, using the id as
>> the key. If I wanted to locate an object in a cache and didn't have
>> id(), I'd have to do a linear search for it.
[Greg Ewing ]
> That sounds dangerous. An id() is only valid as long as the object
> it came from
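Greg's warning, sketched (assumed code): an `id()` is unique only among objects alive at the same moment, so keying a cache on a bare id is safe only while the cache also holds a reference to the object.

```python
class Thing:
    pass

a = Thing()
cache = {id(a): a}   # value keeps `a` alive, so the key stays valid
assert cache[id(a)] is a

# The danger: a stale id may be recycled by an unrelated object.
stale = id(a)
del cache[stale]
del a                # object freed; its id is up for grabs...
b = Thing()          # ...and in CPython is often reused immediately,
                     # so `stale == id(b)` can silently hold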
[Gregory P. Smith ]
> ...
> A situation came up the other day where I believe this could've helped.
>
> Scenario (admittedly not one most environments run into): A Python process
> with a C++ extension module implementing a threaded server (threads
> spawned by C++) that could call back into CPytho
[Gregory P. Smith ]
> Good point, I hadn't considered that it was regular common ref
> count 0 dealloc chaining.
It pretty much has to be whenever you see a chain of XXX_dealloc
routines in a stack trace. gcmodule.c never even looks at a
tp_dealloc slot directly, let alone directly invoke a deall
[Larry Hastings ]
> Guido just stopped by--we're all at the PyCon 2019 dev sprints--and we had
> a chat about it. Guido likes it but wanted us to restore a little of the
> magical
> behavior we had in "!d": now, = in f-strings will default to repr (!r), unless
> you specify a format spec. If you
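The behavior Larry describes landed in 3.8; a quick check of both cases:

```python
import math

x = math.pi
# bare `=` in an f-string defaults to repr (!r)...
assert f"{x=}" == f"x={x!r}"
# ...unless a format spec is given, which switches to format()
assert f"{x=:.2f}" == "x=3.14"
```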
[Jordan Adler ]
> Through the course of work on the future polyfills that mimic the behavior
> of Py3 builtins across versions of Python, we've discovered that the
> equality check behavior of at least some builtin types do not match the
> documented core data model.
>
> Specifically, a comparison
There's a Stackoverflow report[1] I suspect is worth looking into, but
it requires far more RAM (over 80GB) than I have. The OP whittled it
down to a reasonably brief & straightforward pure Python 3 program.
It builds a ternary search tree, with perhaps a billion nodes. The
problem is that it "t
[Inada Naoki ]
> ...
> 2. This loop is clearly hot:
> https://github.com/python/cpython/blob/51aa35e9e17eef60d04add9619fe2a7eb938358c/Objects/obmalloc.c#L1816-L1819
Which is 3 lines of code plus a closing brace. The OP didn't build
their own Python, and the source from which it was compiled wasn't
It seems pretty clear now that the primary cause is keeping arenas
sorted by number of free pools. As deallocation goes on, the number of
distinct "# of free pools" values decreases, leaving large numbers of
arenas sharing the same value. Then, e.g., if there are 10,000 arenas
with 30 free pools r
[Inada Naoki ]
> For the record, result for 10M nodes, Ubuntu 18.04 on AWS r5a.4xlarge:
I'm unclear on what "nodes" means. If you mean you changed 27M to 10M
in this line:
for token in random_strings(27_000_000):
that's fine, but there are about 40 times more than that `Node`
objects create
[Tim]
> I'll note that the approach I very briefly sketched before
> (restructure the list of arenas to partition it into multiple lists
> partitioned by number of free pools) "should make" obmalloc
> competitive with malloc here ...
But it's also intrusive, breaking up a simple linked list into a
[Tim]
> Key invariants:
> ...
> 2. nfp2lasta[pa->nfreepools] == pa if and only if pa is the only arena
> in usable_arenas with that many free pools.
Ack! Scratch that. I need a nap :-(
In fact if that equality holds, it means that nfp2lasta entry has to
change if pa is moved and pa->prevarena h
[Larry Hastings ]
> I have a computer with two Xeon CPUs and 256GB of RAM. So, even
> though it's NUMA, I still have 128GB of memory per CPU. It's running a
> "spin" of Ubuntu 18.10.
>
> I compiled a fresh Python 3.7.3 --with-optimizations. I copied the sample
> program straight off the StackOve
I made a pull request for this that appears to work fine for my 10x
smaller test case (reduces tear-down time from over 45 seconds to over
7). It implements my second earlier sketch (add a vector of search
fingers, to eliminate searches):
https://github.com/python/cpython/pull/13612
It would be
The PR for this looks good to go:
https://github.com/python/cpython/pull/13612
But, I still have no idea how it works for the OP's original test
case. So, if you have at least 80 GB of RAM to try it, I added
`arena.py` to the BPO report:
https://bugs.python.org/issue37029
That adds code to the
[Tim]
>> I'm keen to get feedback on this before merging the PR, because this
>> case is so very much larger than anything I've ever tried that I'm
>> wary that there may be more than one "surprise" lurking here. ...
[Inada Naoki ]
> I started r5a.4xlarge EC2 instance and started arena.py.
> I wil
[Antoine Pitrou, replying to Thomas Wouters]
> Interesting that a 20-year simple allocator (obmalloc) is able to do
> better than the sophisticated TCMalloc.
It's very hard to beat obmalloc (O) at what it does. TCMalloc (T) is
actually very similar where they overlap, but has to be more complex
b
[Antoine Pitrou ]
> The interesting thing here is that in many situations, the size is
> known up front when deallocating - it is simply not communicated to the
> deallocator because the traditional free() API takes a sole pointer,
> not a size. But CPython could communicate that size easily if we
[Antoine Pitrou ]
> But my response was under the assumption that we would want obmalloc to
> deal with all allocations.
I didn't know that. I personally have no interest in that: if we
want an all-purpose allocator, there are several already to choose
from. There's no reason to imagine we coul
[Tim]
>> But I don't know what you mean by "access memory in random order to
>> iterate over known objects". obmalloc never needs to iterate over
>> known objects - indeed, it contains no code capable of doing that.
>> Our cyclic gc does, but that's independent of obmalloc.
[Antoine]
> It's not.
To be clearer, while knowing the size of allocated objects may be of
some use to some other allocators, "not really" for obmalloc. That
one does small objects by itself in a uniform way, and punts
everything else to the system malloc family. The _only_ thing it
wants to know on a free/realloc is
[Tim]
> For the current obmalloc, I have in mind a different way ...
> Not ideal, but ... captures the important part (more objects
> in a pool -> more times obmalloc can remain in its
> fastest "all within the pool" paths).
And now there's a PR that removes obmalloc's limit on pool sizes, and,
fo
[Neil Schemenauer ]
> I've done a little testing the pool overhead. I have an application
> that uses many small dicts as holders of data. The function:
>
> sys._debugmallocstats()
>
> is useful to get stats for the obmalloc pools. Total data allocated
> by obmalloc is 262 MB. At the 4*PAGE
[Inada Naoki, to Neil S]
> Oh, do you mean your branch doesn't have headers in each page?
That's probably right ;-) Neil is using a new data structure, a radix
tree implementing a sparse set of arena addresses. Within obmalloc
pools, which can be of any multiple-of-4KiB (on a 64-bit box) size,
[Neil Schemenauer ]
> ...
> BTW, the current radix tree doesn't even require that pools are
> aligned to POOL_SIZE. We probably want to keep pools aligned
> because other parts of obmalloc rely on that.
obmalloc relies on it heavily. Another radix tree could map block
addresses to all the necess
[Tim, to Neil]
>> Moving to bigger pools and bigger arenas are pretty much no-brainers
>> for us, [...]
[Antoine]
> Why "no-brainers"?
We're running tests, benchmarks, the Python programs we always run,
Python programs that are important to us, staring at obmalloc stats
... and seeing nothing bad
[Tim]
>> At the start, obmalloc never returned arenas to the system. The vast
>> majority of users were fine with that.
[Neil]
> Yeah, I was totally fine with that back in the day. However, I
> wonder now if there is a stronger reason to try to free memory back
> to the OS. Years ago, people wo
[Antoine]
> We moved from malloc() to mmap() for allocating arenas because of user
> requests to release memory more deterministically:
>
> https://bugs.python.org/issue11849
Which was a good change! As was using VirtualAlloc() on Windows.
None of that is being disputed. The change under discuss
[Inada Naoki ]
> obmalloc is very nice at allocating small (~224 bytes) memory blocks.
> But it seems current SMALL_REQUEST_THRESHOLD (512) is too large to me.
For the "unavoidable memory waste" reasons you spell out here,
Vladimir deliberately set the threshold to 256 at the start. As
things tur
[Inada Naoki]
>> Increasing pool size is one obvious way to fix these problems.
>> I think 16KiB pool size and 2MiB (huge page size of x86) arena size is
>> a sweet spot for recent web servers (typically, about 32 threads, and
>> 64GiB), but there is no evidence about it.
[Antoine]
> Note that the
[Tim]
> ...
> Here are some stats from running [memcrunch.py] under
> my PR, but using 200 times the initial number of objects
> as the original script:
>
> n = 2000 #number of things
>
> At the end, with 1M arena and 16K pool:
>
> 3362 arenas * 1048576 bytes/arena = 3,525,312,512
> # b
Heh. I wasn't intending to be nasty, but this program makes our arena
recycling look _much_ worse than memcrunch.py does. It cycles through
phases. In each phase, it first creates a large randomish number of
objects, then deletes half of all objects in existence. Except that
every 10th phase, i
[Tim]
> ...
> Now under 3.7.3. First when phase 10 is done building:
>
> phase 10 adding 9953410
> phase 10 has 16743920 objects
>
> # arenas allocated total = 14,485
> # arenas reclaimed = 2,020
> # arenas highwater mark =
And one more random clue.
The memcrunch.py attached to the earlier-mentioned bug report does
benefit a lot from changing to a "use the arena with the smallest
address" heuristic, leaving 86.6% of allocated bytes in use by objects
at the end (this is with the arena-thrashing fix, and the current
25
[Tim]
> - For truly effective RAM releasing, we would almost certainly need to
> make major changes, to release RAM at an OS page level. 256K arenas
> were already too fat a granularity.
We can approximate that closely right now by using 4K pools _and_ 4K
arenas: one pool per arena, and mmap()/
[Tim]
>> - For truly effective RAM releasing, we would almost certainly need to
>> make major changes, to release RAM at an OS page level. 256K arenas
>> were already too fat a granularity.
[also Tim]
> We can approximate that closely right now by using 4K pools _and_ 4K
> arenas: one pool per
[Tim]
>> I don't think we need to cater anymore to careless code that mixes
>> system memory calls with O calls (e.g., if an extension gets memory
>> via `malloc()`, it's its responsibility to call `free()`), and if not
>> then `address_in_range()` isn't really necessary anymore either, and
>> then
[Thomas]
>>> And what would be an efficient way of detecting allocations punted to
>>> malloc, if not address_in_range?
[Tim]
>> _The_ most efficient way is the one almost all allocators used long
>> ago: use some "hidden" bits right before the address returned to the
>> user to store info about
[Tim]
> The radix tree generally appears to be a little more memory-frugal
> than my PR (presumably because my need to break "big pools" into 4K
> chunks, while the tree branch doesn't, buys the tree more space to
> actually store objects than it costs for the new tree).
It depends a whole lot on
[Antoine Pitrou ]
> For the record, there's another contender in the allocator
> competition now:
> https://github.com/microsoft/mimalloc/
Thanks! From a quick skim, most of it is addressing things obmalloc doesn't:
1) Efficient thread safety (we rely on the GIL).
2) Directly handling requests
[Antoine Pitrou ]
>> Ah, interesting. Were you able to measure the memory footprint as well?
[Inada Naoki ]
> Hmm, it is not good. mimalloc uses MADV_FREE so it may affect some
> benchmarks. I will look into it later.
>
> ```
> $ ./python -m pyperf compare_to pymalloc-mem.json mimalloc-mem.json
[Victor Stinner ]
> I guess that INADA-san used pyperformance --track-memory.
>
> pyperf --track-memory doc:
> "--track-memory: get the memory peak usage. it is less accurate than
> tracemalloc, but has a lower overhead. On Linux, compute the sum of
> Private_Clean and Private_Dirty memory mappings
[Pablo Galindo Salgado ]
> Recently, we moved the optimization for the removal of dead code of the form
>
> if 0:
>
>
> to the ast so we use JUMP bytecodes instead (being completed in PR14116). The
> reason is that currently, any syntax error in the block will never be
> reported.
> For exa
[Inada Naoki , trying mimalloc]
>>> Hmm, it is not good. mimalloc uses MADV_FREE so it may affect some
>>> benchmarks. I will look into it later.
>> ...
>> $ ./python -m pyperf compare_to pymalloc-mem.json mimalloc-mem.json -G
>> Slower (60):
>> - logging_format: 10.6 MB +- 384.2 kB -> 27.2 MB +-
[Inada Naoki ,
looking into why mimalloc did so much better on spectral_norm]
> I compared "perf" output of mimalloc and pymalloc, and I succeeded to
> optimize pymalloc!
>
> $ ./python bm_spectral_norm.py --compare-to ./python-master
> python-master: . 199 ms +- 1 ms
> python:
[Inada Naoki]
>> So I tried to use LIKELY/UNLIKELY macro to teach compiler hot part.
>> But I need to use
>> "static inline" for pymalloc_alloc and pymalloc_free yet [1].
[Neil Schemenauer]
> I think LIKELY/UNLIKELY is not helpful if you compile with LTO/PGO
> enabled.
I like adding those regardl
https://github.com/python/cpython/pull/13482
is a simple doc change for difflib, which I approved some months ago.
But I don't know the current workflow well enough to finish it myself.
Like:
- Does something special need to be done for doc changes?
- Since this is a 1st-time contributor, do
[Mariatta ]
>> - Since this is a 1st-time contributor, does it need a change to the ACKS
>> file?
>
> I think the change is trivial enough, the misc/acks is not necessary.
>
>> - Anything else?
>
> 1. Does it need to be backported? If so, please add the "needs backport to
> .." label.
>
> 2. Add the "🤖
[Brett Cannon ]
> We probably need to update https://devguide.python.org/committing/ to
> have a step-by-step list of how to make a merge works and how to
> handle backports instead of the wall of text that we have. (It's already
> outdated anyway, e.g. `Misc/ACKS` really isn't important as git its
I don't plan on making a series of these posts, just this one, to give
people _some_ insight into why the new algorithm gets systematic
benefits the current algorithm can't. It splits the needle into two
pieces, u and v, very carefully selected by subtle linear-time needle
preprocessing (and it's
[Tim Peters, explains one of the new algorithm's surprisingly
effective moving parts]
[Chris Angelico ]
> Thank you, great explanation. Can this be added to the source code
> if/when this algorithm gets implemented?
No ;-) While I enjoy trying to make hard things clear(er)
[Tim]
>> Note that no "extra" storage is needed to exploit this. No character
>> lookups, no extra expenses in time or space of any kind. Just "if we
>> mismatch on the k'th try, we can jump ahead k positions".
[Antoine Pitrou ]
> Ok, so that means that on a N-character haystack, it'll always do
[Tim]
> ...
> Alas, the higher preprocessing costs leave the current PR slower in "too
> many" cases too, especially when the needle is short and found early
> in the haystack. Then any preprocessing cost approaches a pure waste
> of time.
But that was this morning. Since then, Dennis changed the
[Julien Danjou]
> ...
> Supposedly PyObject_Malloc() returns some memory space to store a
> PyObject. If that was true all the time, that would allow anyone to
> introspect the allocated memory and understand why it's being used.
>
> Unfortunately, this is not the case. Objects whose types are trac
I'm guessing it's time to fiddle local CPython clones to account for
master->main renaming now?
If so, I've seen two blobs of instructions, which are very similar but
not identical:
Blob 1 ("origin"):
"""
You just need to update your local clone after the branch name changes.
From the local clo
FYI, I just force-unsubscribed this member (Hoi Lam Poon) from
python-dev. Normally I don't do things like that, since, e.g, we have
no way to know whether the sender address was spoofed in emails we
get. But in this case Hoi's name has come up several times as the
sender of individual spam, and
[Dan Stromberg ]
> ...
> Timsort added the innovation of making mergesort in-place, plus a little
> (though already common) O(n^2) sorting for small sublists.
Actually, both were already very common in mergesorts. "timsort" is
much more a work of engineering than of insight ;-) That is, it
combin
[Ethan Furman]
> A question [1] has arisen about the viability of `random.SystemRandom` in
> Pythons before and after the secrets module was introduced
> (3.5 I think) -- specifically
>
> does it give independent and uniform discrete distribution for
> cryptographic purposes across CPytho
Sorry, all! This post was pure spam - I clicked the wrong button on
the moderator UI. The list has already been set to auto-reject any
future posts from this member.
On Mon, Aug 9, 2021 at 10:51 AM ridhimaortiz--- via Python-Dev
wrote:
>
> It is really nice post. https://bit.ly/3fsxwwl
>
[Marco Sulla ]
> Oh, this is enough. The sense of the phrase was very clear and you all
> have understood it.
Sincerely, I have no idea what "I pretend your immediate excuses."
means, in or out of context.
> Remarking grammatical errors is a gross violation
> of the Netiquette. I ask __immediatel
[Marco Sulla ]
> It's the Netiquette, Chris. It's older than Internet. It's a gross
> violation of the Netiquette remarking grammatical or syntactical
> errors. I think that also the least advanced AI will understand what I
> meant.
As multiple people have said now, including me, they had no idea
[Marco Sulla ]
> I repeat, even the worst AI will understand from the context what I
> meant.
Amazingly enough, the truth value of a proposition does not increase
via repetition ;-)
>>> bool(True * 1_000_000_000)
True
>>> bool(False * 1_000_000_000)
False
> But let me do a very rude example:
> W
Various variations on:
> ... I am also considering unsubscribing if someone doesn't step in and stop
> the mess going on between Brett and Marco. ...
Overall, "me too!" pile-ons _are_ "the [bulk of the] mess" to most
list subscribers.
It will die out on its own in time. Dr. Brett should know by
[me]
> If you want more active moderation, volunteer for the job. I'd happily
> give it up, and acknowledge that my laissez-faire moderation approach
> is out of style.
But, please, don't tell _me_ off-list that you volunteer. I want no
say in who would become a new moderator - I'm already doing t
[Laurent Lyaudet ]
> ...
> My benchmarks could be improved but however I found that Shivers' sort
> and adaptive Shivers' sort (aka Jugé's sort) performs better than
> Tim's sort.
Cool! Could you move this to the issue report already open on this?
Replace list sorting merge_collapse()?
ht
[Raymond Hettinger]
> I'm hoping that core developers don't get caught-up in the "doctests are bad
> meme".
>
> Instead, we should be clear about their primary purpose which is to test
> the examples given in docstrings.
I disagree.
> In many cases, there is a great deal of benefit to docstrin
verify -v
repository uses revlog format 1
checking changesets
checking manifests
crosschecking files in changesets and manifests
checking files
warning: copy source of 'a.txt' not in parents of 9c2205c187bf
2 files, 3 changesets, 2 total revisions
1 warnings encountered!
What the warning
>> ...
>> $ hg verify
>> repository uses revlog format 1
>> checking changesets
>> checking manifests
>> crosschecking files in changesets and manifests
>> checking files
>> warning: copy source of 'Modules/_threadmodule.c' not in parents of
>> 60ad83716733
>> warning: copy source of 'Objects/byte
> hg branches
default                85277:4f7845be9e23
2.7                    85276:7b867a46a8b4
3.2                    83826:b9b521efeba3
3.3                    85274:7ab07f15d78c (inactive)
2.6                    82288:936621d33c38 (inactive)
3.1                    80967:
[Tim]
warning: copy source of 'Modules/_threadmodule.c' not in parents of
60ad83716733
warning: copy source of 'Objects/bytesobject.c' not in parents of
64bb1d258322
warning: copy source of 'Objects/stringobject.c' not in parents of
357e268e7c5f
[Armin]
> I've seen
[Tim]
>> > hg log -r 3.2
>> changeset: 83826:b9b521efeba3
>> branch: 3.2
>> parent: 83739:6255b40c6a61
>> user:        Antoine Pitrou
>> date:        Sat May 18 17:56:42 2013 +0200
>> summary: Issue #17980: Fix possible abuse of ssl.match_hostname()
>> for denial of service using c
[Tim]
> > hg log -r 3.2
> changeset: 83826:b9b521efeba3
> branch: 3.2
> parent: 83739:6255b40c6a61
> user:        Antoine Pitrou
> date:        Sat May 18 17:56:42 2013 +0200
> summary: Issue #17980: Fix possible abuse of ssl.match_hostname()
> for d
[Tim, wondering why the 3.2 branch isn't "inactive"]
>> ...
>> What is gained by _not_ merging here? I don't see it.
[Antoine Pitrou]
> Perhaps Georg doesn't like merges? ;-)
> I suppose what's gained is "one less command to type".
So let's try a different question ;-) Would anyone _object_ to
[Tim, wondering why the 3.2 branch isn't "inactive"]
>> ...
>> So let's try a different question ;-) Would anyone _object_ to
>> completing the process described in the docs: merge 3.2 into 3.3,
>> then merge 3.3 into default? I'd be happy to do that. I'd throw away
>> all the merge changes exc
[Brett]
> ...
> After reading that sentence I realize there is a key "not" missing: "I see
> no reason NOT to help visibly shutter the 3.2. branch ...". IOW I say do the
> null merge. Sorry about that.
No problem! Since I've been inactive for a long time, it's good for
me to practice vigorously d
[Tim]
>> BTW, it's not quite a null-merge. The v3.2.5 release tag doesn't
>> currently exist in the 3.3 or default .hgtags files. So long as 3.2
>> has a topological head, people on the 3.3 and default branches won't
>> notice (unless they look directly at .hgtags - they can still use
>> "v3.2.5"
[Tim]
>>> ...
>>> Here's what I intend to do (unless an objection appears):
>>>
>>> hg up 3.3
>>> hg merge 3.2
>>> # merge in the v3.2.5 tag definition from .hgtags,
>>> # but revert everything else
>>> hg revert -a -X .hgtags -r .
>>> hg resolve -a -m
>>> hg diff # to ensure that only the v3.2.5
[Tim, wondering why the 3.2 branch isn't "inactive"]
[Georg Brandl]
> FWIW I have no real objections, I just don't see the gain.
I'm glad it's OK! Especially because it's already been done ;-)
Two gains:
1. "hg branches" output now matches what the developer docs imply it
should be. It didn't
In
http://bugs.python.org/issue18843
a user reported a debug PyMalloc "bad leading pad byte" memory
corruption death while running their code. After some thrashing, they
decided to rebuild Python, and got the same kind of error while
rebuilding Python. See
http://bugs.python.org/msg196
[R. David Murray ]
> Emerge uses Python, and 2.7 is the default system python on Gentoo,
> so unless he changed his default, that error almost certainly came from
> the existing Python he was having trouble with.
Yes, "The Python used to run `emerge` here was a --with-pydebug Python
the bug report
[Nikolaus Rath]
>> Aeh, what the tin says is "return random bytes".
[Larry Hastings]
> What the tin says is "urandom", which has local man pages that dictate
> exactly how it behaves. On Linux the "urandom" man page says:
>
> A read from the /dev/urandom device will not block waiting for more
[David Mertz]
> OK. My understanding is that Guido ruled out introducing an os.getrandom()
> API in 3.5.2. But would you be happy if that interface is added to 3.6?
>
> It feels to me like the correct spelling in 3.6 should probably be
> secrets.getrandom() or something related to that.
secrets.
[Tim]
>> secrets.token_bytes() is already the way to spell "get a string of
>> messed-up bytes", and that's the dead obvious (according to me) place
>> to add the potentially blocking implementation.
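For reference, the spelling Tim means is already this simple (the token length 16 is an arbitrary choice here):

```python
import secrets

# "get a string of messed-up bytes" from the OS CSPRNG
tok = secrets.token_bytes(16)
assert isinstance(tok, bytes) and len(tok) == 16
```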
[Sebastian Krause]
> I honestly didn't think that this was the dead obvious function to
> use. To
[Random832]
> So, I have a question. If this "weakness" in /dev/urandom is so
> unimportant to 99% of situations... why isn't there a flag that can be
> passed to getrandom() to allow the same behavior?
Isn't that precisely the purpose of the GRND_NONBLOCK flag?
http://man7.org/linux/man-pages/ma
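Python exposes that flag directly on Linux (3.6+); a guarded sketch, since `os.getrandom` and `os.GRND_NONBLOCK` don't exist on all platforms:

```python
import os

if hasattr(os, "getrandom"):
    # GRND_NONBLOCK: raise BlockingIOError rather than block if the
    # kernel entropy pool isn't initialized yet (Linux-only)
    data = os.getrandom(16, os.GRND_NONBLOCK)
else:
    data = os.urandom(16)  # portable fallback
assert len(data) == 16
```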
[Guido]
> ...
> An alternative would be to keep the secrets module linked to SystemRandom,
> and improve the latter. Its link with os.random() is AFAIK undocumented. Its
> API is clumsy but for code that needs some form of secret-ish bytes and
> requires platform and Python version independence it