> There is now a “newcomer friendly” keyword in bpo.
>
> My hope is that going forward, we can tag issues that are suitable for first
> time contributors with this keyword.
Hmmm... I haven't looked lately, but didn't there used to be an "easy"
tag which purported to serve roughly the same purpose?
Victor's experiments into a register-based virtual machine live here:
https://hg.python.org/sandbox/registervm
I'd like to revive them, if for no other reason than to understand what he
did. I see no obvious way to collect them all as a massive diff. For
the moment, I downloaded each commit and am app
> I think this might work:
>
> $ hg diff -r fb80df16c4ff -r tip
>
> Not sure fb80df16c4ff is the correct base revision. It seems to be
> the base of Victor's work. I put the resulting patch file here:
>
> http://python.ca/nas/python/registervm-victor.txt
Thanks, Neil. I barely remembered
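A rough sketch of the same idea, with the base revision taken from Neil's
message; the exact commands here are a suggestion, unverified against that old
sandbox repo:

$ hg log -r fb80df16c4ff                          # confirm the presumed branch point
$ hg diff -r fb80df16c4ff -r tip > registervm-all.diff    # whole series as one patch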
On Thu, Oct 29, 2020, 6:32 PM Gregory P. Smith wrote:
> I agree, remove Solaris support. Nobody willing to contribute seems
> interested.
>
*sniff* I spent a lot of professional time in front of SunOS and Solaris
screens. But yes, I agree. It seems time to give Solaris the boot.
Skip
I'm still messing around with my register VM stuff (off-and-on). I'm
trying to adjust to some changes made a while ago, particularly (but
probably not exclusively) after RERAISE acquired an argument. As a
result, at least the expected_opinfo_jumpy list changed in a
substantial way. I can manually w
> Maybe these lines in test_dis.py?
> ```
> #print('expected_opinfo_jumpy = [\n ',
> #',\n '.join(map(str, _instructions)), ',\n]', sep='')
> ```
Thanks, I'll take a look. I was expecting there'd be a standalone
script somewhere. Hadn't considered that comments would be hiding
code.
Skip
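For context, a minimal sketch of what those commented-out lines do: disassemble
the sample function and print each Instruction so the result can be pasted back
into the expected list. The function here is a stand-in, not the actual code in
test_dis.py.

```
import dis

def _jumpy(a):                      # stand-in for test_dis's sample function
    while a >= 0:
        a -= 1
    return a

_instructions = list(dis.get_instructions(_jumpy))
print('expected_opinfo_jumpy = [\n  ',
      ',\n  '.join(map(str, _instructions)), ',\n]', sep='')
```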
Guido> Maybe these lines in test_dis.py?
...
Skip> Thanks, I'll take a look. I was expecting there'd be a standalone
Skip> script somewhere. Hadn't considered that comments would be hiding
Skip> code.
Indeed, that did the trick. However, I'm a bit uncomfortable with
the methodology. It seems tes
> The problem is not that dis.get_instructions can't be trusted, but that
> the test isn't testing the dis module at all. It is testing whether the
> output from the compiler has changed.
> A lot of the tests in test_dis do that.
Thanks. Perhaps such tests belong in a different test_* module? (I a
A note to webmas...@python.org from an astute user named Hiromi in
Japan referred
us to Guido's shell archives for the 0.9.1 release from 1991. As that
wasn't listed in the historical releases README file:
https://legacy.python.org/download/releases/src/README
I pulled the shar files (and a patc
>> Wow. Was white-space not significant in this release of Python? I see the
>> lack of indentation in the first Python programs.
>
> Indentation most certainly was significant from day 0. I suspect what
> happened is that these files got busted somehow by the extraction process
> used by Skip
> If someone knows how to get the original Usenet messages from what Google
> published, let me know.
Seems the original shar is there, buried in a Javascript string toward
the end of the file. I think I've got a handle on it, though it will
take a Python script to massage it back into the correct format
> Also mind
> http://www.dalkescientific.com/writings/diary/archive/2009/03/27/python_0_9_1p1.html
> for result comparison.
Thanks, Paul. I had lost track of Andrew. Good to know he's still out
there. I wonder why his tar file was never sucked up into the
historical releases page.
Whew! My stupid
This is getting a bit more off-topic for python-dev than I'd like. I
will make a couple comments though, then hopefully be done with this
thread.
> The original ones are here:
> http://ftp.fi.netbsd.org/pub/misc/archive/alt.sources/volume91/Feb/
> Look at http://ftp.fi.netbsd.org/pub/misc/archive/
> If we can get a clean copy of the original sources I think we should put them
> up under the Python org on GitHub for posterity.
Did that earlier today:
https://github.com/python/pythondotorg/issues/1734
Skip
> In conversation with Dan, I have fixed my conda package (but overwritten
> the same version). I needed to add this to the build:
>
> # sudo apt-get install gcc-multilib
> CC='gcc -m32' make python
Thanks. That fixes it for me as well. I never even looked at intobject.c,
since it compiled out of t
Consider this little session from the tip of the spear:
>>> sys.version
'3.10.0a6+ (heads/master:0ab152c6b5, Mar 15 2021, 17:24:38) [GCC 10.2.0]'
>>> def while2(a):
... while a >= 0:
... a -= 1
... return a
...
>>> dis.dis(while2)
  2           0 LOAD_FAST                0 (a)
> co_lnotab has had negative deltas since 3.6.
Thanks. I'm probably misreading Objects/lnotab_notes.txt.
Skip
Can I distract people for a moment to ask a couple procedural questions
about this change? I maintain my own fork of
https://github.com/python/cpython, but don't yet see a main branch on
python/cpython.
- When is the new main branch supposed to appear
- Once it does, what will I need to do
>
> Perhaps there's some history in the python-dev archives that would inform
> you of previous discussions and help you avoid repeating already-considered
> arguments.
>
This topic has come up a few times over the years. Maybe it would be
worthwhile to have an informational PEP which documents the vari
> Practically speaking, one issue I have is how easy it is to write
> isinstance or issubclass checks. It has historically been much more
> difficult to write and maintain a check that something looks like a duck.
>
> `if hasattr(foo, 'close') and hasattr(foo, 'seek') and hasattr(foo,
> 'read'):`
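A hedged sketch of how that duck-type check can be made about as convenient as
an isinstance() call, using typing.Protocol; the class and method signatures
here are illustrative, not from the original message.

```
from typing import Protocol, runtime_checkable

@runtime_checkable
class ReadableStream(Protocol):
    def close(self) -> None: ...
    def seek(self, pos: int, whence: int = 0) -> int: ...
    def read(self, size: int = -1) -> bytes: ...

def consume(foo):
    # isinstance() with a runtime_checkable Protocol only checks that the
    # methods exist -- essentially the hasattr() chain above, in one line.
    if isinstance(foo, ReadableStream):
        ...
```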
(Sorry, this is probably not really python-dev material, but I'm stuck
trying to bring my fork into sync with python/cpython.)
I don't know if I did something to my fork or if the master->main
change did something to me, but I am unable to sync my
smontanaro/cpython main with the python/cpython ma
Thanks for the recipe to fix my problem.
> Your main branch in GitHub has some commits that are not in python/cpython.
> https://github.com/smontanaro/cpython/commits/main
Is there a way to easily tell how they differ? My (obvious to me, but
wrong) guess was
git diff upstream/main origin/main
Th
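One way (a suggestion, not from the thread) to see the differing commits rather
than a textual diff:

$ git log --oneline upstream/main..origin/main    # commits only on the fork
$ git log --oneline origin/main..upstream/main    # commits only upstream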
> Your main branch in GitHub has some commits that are not in python/cpython.
> https://github.com/smontanaro/cpython/commits/main
Regarding this: how else am I to keep my fork in sync with
python/cpython other than by the occasional pull upstream/push origin
process? That's what all those merges
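For reference, a minimal sketch of that occasional-sync workflow, assuming
"upstream" points at python/cpython and "origin" at the personal fork:

$ git fetch upstream
$ git checkout main
$ git merge --ff-only upstream/main    # refuses if local main has stray commits
$ git push origin main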
> Maybe others have different workflows, but I don't see much of a need for
> keeping your fork's main branch up to date. My workflow is something like
> this:
>
> % git remote -v
> origin g...@github.com:JelleZijlstra/cpython.git (fetch)
> origin g...@github.com:JelleZijlstra/cpython.git (push)
I'm having a hard time debugging some virtual machine code because GDB
won't break where it's supposed to. Here's my breakpoint #2:
2       breakpoint     keep y   0x556914fd
        ceval_reg.h:_PyEval_EvalFrameDefault:TARGET_JUMP_IF_FALSE_REG
        breakpoint already hit 1 time
            p/x oparg
On Fri, May 21, 2021 at 2:48 PM Guido van Rossum wrote:
> I suspect that you're running into the issue where compiler optimizations
> are *forced* on for ceval.c.
>
> There's a comment near the top about this. Just comment out this line:
>
> #define PY_LOCAL_AGGRESSIVE
>
> We tried to define that
> I strongly suggest building Python with -O0 only when using gdb. -Og
> enables too many optimizations, which makes gdb less usable.
Thanks, Victor. It never made sense to me that you would want any
optimizations enabled when truly debugging code (as opposed to wanting
debug symbols and a sane tra
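A sketch of one way to get a truly unoptimized build for gdb work, by
overriding OPT at configure time; treat the exact variable name as an
assumption to check against your CPython version's configure:

$ ./configure --with-pydebug OPT='-g -O0'
$ make -j
$ gdb ./python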
> Just turn off optimisation when you want to single-step.
But I don't just want to single-step. I want to break at the target
label associated with a specific opcode. (I am - in fits and starts -
working on register-based virtual machine instructions). If I'm
working on, for example, the register
me> I'm having a hard time debugging some virtual machine code because GDB
won't break where it's supposed to.
Here's a quick follow-up. I tried a number of different values of OPT
during configuration and compilation, but nothing changed the result. I
could never (and still can't) get GDB to brea
>
> I'm having a hard time debugging some virtual machine code because GDB
> won't break where it's supposed to.
>
A quick follow-up. The GDB folks were able to reproduce this in an XUbuntu
20.04 VM. I don't know if they tried straight Ubuntu, but as the main
difference between the two is the user
>
> However, it has become a de facto standard for all Python code, and in the
> document itself, there is frequent wording akin to "Identifiers used in the
> standard library must be ASCII compatible ...", and even advice for third
> party libraries.
>
> Which I think is acknowledging that PEP 8 i
>
> For the record, my personal arrangement for years has been to read most
> open source mailing-lists using GMane, on a NNTP reader separate from my
> main mail client. This works fine when I don't want to read open
> source-related e-mails :-)
>
And if you're not an NNTP person (anymore), filt
> > I'm working on it. The patches need to be discussed as they break
> > backward compatibility and AFAIK XML standards, too.
>
> That's not very good. XML parsers are supposed to parse XML according
> to standards. Is the goal to have them actually do that, or just
> address DDOS issues?
Having
>> If it was just once or twice, sure, but I use them as names for ints,
>> which means I use them as ints, which means I would have a boat load of
>> int() calls.
>
>
> Personally I don't see "name for ints" as being the main use case for enums.
Ethan seems to have a use case, even if it is using
> Besides "we just don't need them int-based in these use-cases" what are the
> reasons for the strong desire to have them be valueless?
Not sure about other people, but piggybacking off C semantics, while
convenient, reflects the limitations of the C implementation more than
anything else. An en
On Mon, Mar 4, 2013 at 10:30 AM, Brian Curtin wrote:
> The full announcement is at
> http://blog.python.org/2013/03/introducing-electronic-contributor.html,
> but a summary follows.
> ...
Brian,
Do you want old-timers like me who have a wet-signed fax gathering dust in
a box at PSF World Head
On Mon, Mar 18, 2013 at 8:50 AM, Neal Becker wrote:
> def F(x):
>     return x
>
> x = 2
> F(x) = 3
>
> F(x) = 3
> SyntaxError: can't assign to function call
>
> Do we really need this restriction? There do exist other languages without
> it.
I think this belongs on python-ideas before laun
I started writing this last night before the flurry of messages which
arrived overnight. I thought originally, "Oh, Skip, you're being too
harsh." But now I'm not so sure. I think you are approaching the
issue of 2.7's EOL incorrectly. Of those discussing the end of Python
2.7, how many of you s
Obviously SourceForge doesn't think the current release interval is short
enough. (Emphasis mine.)
:-)
Skip
-- Forwarded message --
From: SourceForge.net
Date: Mon, Apr 8, 2013 at 1:09 PM
Subject: SourceForge Project Upgrade Notification
To: nore...@in.sf.net
Dear SourceForge
> If from the start you use:
> - six
...
There's the rub. We are not blessed with Guido's time machine where I
work. Much of the Python code we run was written long before six was
a gleam in anybody's eye. Heck, some of it was probably written
before some active members of python-dev graduated
> Would it make sense to think about adding this in the scope of the argument
> clinic work, or is it too unrelated? This seems like a commonly needed thing
> for large parts of the stdlib (where the C accelerator overrides Python
> code).
Or maybe separate doc strings from both code bases altoget
> Some pieces of code are still guarded by:
> #ifdef HAVE_FSTAT
> ...
> #endif
Are there other guards for similarly common libc functions? If so,
perhaps each one should be removed in a series of change sets, one per
guard.
Skip
> On Sat, May 18, 2013 at 10:27 PM, Raymond Hettinger
> wrote:
>> BTW, I'm +1 on the idea for ordering keyword-args. It makes
>> it easier to debug if the arguments show-up in the order they
>> were created. AFAICT, no purpose is served by scrambling them
>> (which is exacerbated by the new rand
> But one thing that often confuses people : function naming. The standard
> library is kind of inconsistent. Some functions are separated by underscores
> and others aren't.
I think there are a number of reasons for this:
* Despite PEP 8's age, significant chunks of the standard library predate
I encountered this disconcerting message yesterday on a Linux system
running Python 2.7.2:
*** glibc detected *** /opt/local/bin/python: corrupted double-linked
list: 0x03b01c90 ***
Of course, no core file or other information about where the problem
occurred was left behind, just the raw
> You may want to try running the process under valgrind.
Thanks. I'm trying that but have been so far unable to get valgrind
to report any problems within Python or our libraries before that
message, just a couple things at startup which seem to occur before
Python gets going:
==21374== Invalid
> I'd like to urge stdlib contributors and core devs to
> heed it -- or explain why you can't.
Where I can, I do; however, I often find it difficult to come up with a
one-liner, especially one that mentions the parameters to functions.
If the one-line rule is going to be violated, I go whole hog an
> http://mail.python.org/pipermail/python-list/2013-July/653046.html
One correspondent objected that I was artificially biasing my histogram
because wrapped lines are, more-or-less by definition, going to be <
80 characters. Off-list I responded with a modified version of my
graph where I eliminate
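A rough sketch (my own, not the script from the original post) of the kind of
line-length tally being discussed; feed it source files on the command line:

```
import collections
import fileinput

counts = collections.Counter(len(line.rstrip("\n")) for line in fileinput.input())
for length in sorted(counts):
    print(f"{length:4d} {'*' * counts[length]}")
```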
> It is also likely that the mentor gets overworked after the GSoC period is
> over, is unable to finalize the patch and push it...
Given that Python development is done using a good DVCS now, it seems
that if each manageable chunk of changes is done on a separate branch,
the likelihood of acce
> We then realized that it isn't really used by anyone (pydoc uses it but it
> should have been using textwrap). Looking at the history of the module it
> has just been a magnet for cleanup revisions and not actual usage or
> development since Guido added it back in 1995.
Note that it is/was used
>> On Thu, Sep 05, 2013 at 02:35:16PM -0400, Donald Stufft
>> wrote:
>>> Persona is the logical successor to OpenID.
>>
>> OpenID lived a short life and died a quiet death. I'm afraid Persona
>> wouldn't live even that much. Dead-born idea, in my so humble opinion.
>
> I don't think there's muc
> I think Persona is just too new to see it around much yet. Or maybe Mozilla
> needs better PR.
The Persona site touts: "Signing in using Persona requires only a
valid email address; allowing you to provide personal information on
as-needed basis, when and where you think it’s appropriate."
The
> Whether a given site chooses to authorize an
> authenticated-but-otherwise-unknown user to do anything meaningful is
> logically distinct.
But the least they could have done was pick a demo site that didn't do
exactly what they contend you shouldn't need to do: cough up all sorts
of personal inf
> I have spent a very long time on a patch for Dtrace support in most
> platforms with dtrace available. Currently working under Solaris and
> derivatives, and MacOS X. Last time I checked, it would crash FreeBSD
> because of bugs in the dtrace port, but that was a long time ago.
I looked at this sev
> However, it's common in economic statistics to have a rectangular
> array, and extract both certain rows (tuples of observations on
> variables) and certain columns (variables). For example you might
> have data on populations of American states from 1900 to 2012, and
> extract the data on New E
> (case-insensitive but case-preserving, as the best filesystems are ;-))
> I have a sweet spot for "transformdict" myself.
Antoine,
"Transform" does not remind me of "case-insensitive but
case-preserving". If this is important enough to put into the
collections module (I'm skeptical), shouldn't
> Seriously, I'm curious: what needs to mature, according to you?
In my mind, its availability on PyPI along with demonstrated use in
the wild (plus corresponding votes to demonstrate that people use/like
it) would help. That you can find several implementations at this
doesn't mean it's necessar
>> Note: Because dir() is supplied primarily as a convenience for
>> use at an interactive prompt [...]
This was always my interpretation of its intent. In fact, I use a
customized dir() for my own needs which would probably break inspect
(elides _-prefixed functions by default, notes modules or
> I don't really understand why the releases should be manually listed.
> Is it some kind of defensive coding?
I think it's to give people who care about such things all the
information they need to make informed decisions. As I recall, the 1.6
series was problematic, because it wasn't actually op
As a MacBook Pro user running Snow Leopard/10.6.8, I would find the
lack of a binary release problematic, were it not for the fact that I
routinely build from source (and don't do anything Mac-specific with
Python). For those not familiar with the platform, it's perhaps worth
noting that upgrade is
> That's why I get my Python (for Snow Leopard) from MacPorts.
Unless things have changed, that probably doesn't support Mac-specific
stuff, does it?
I was thinking more of non-developer users who are likely to need/want
Mac-specific interfaces for tools which are written in Python. That
might ju
>> It would be great if the docstring contained a link to the online
>> documentation.
>
> That would have to be a feature of help(), not hardcoded in each docstring.
That *is* a feature of the help function:
>>> help(sys)
Help on built-in module sys:

NAME
    sys
FILE
    (built-in)
MODULE DO
> https://github.com/python/cpython is now live as a semi-official, *read
> only* Github mirror of the CPython Mercurial repository. Let me know if you
> have any problems/concerns.
Thanks for this, Eli. I use git regularly at work, so I'm getting much
more comfortable with it. Do you have a sugge
Splitting into two pieces also means you can implement it for 3.4
first and identify possible problems caused by preexisting pip
installs before deciding whether to add it to 2.7 and 3.3.
Skip
> If this love-in continues I'll prep a release tonight and commit it in the
> morning... just before my flight home across the Atlantic.
You've got it backwards. You're supposed to release, then leave town
for two weeks, not come home to field complaints. I think it might
even be in the Zen of Py
I'm sure this has a simple explanation (and is probably only of historical
interest at this point), but ...
While starting to port the Python Sybase module to Python 3, among other
hurdles, I noticed that RO is no longer defined. Looking in structmember.h,
I see that most of the macros defined the
spect, and one of
that set of flags does have a "PY_" prefix. Why didn't these flags
(and the T_* flags in structmember.h) get swept up in all the hubbub
around the grand renaming?
Skip
On Mon, Oct 3, 2016 at 10:39 AM, Victor Stinner
wrote:
> 2016-10-03 15:37 GMT+02:00 Skip Mo
I've recently run into a problem building the math and cmath modules
for 2.7. (I don't rebuild very often, so this problem might have been
around for a while.) My hg repos look like this:
* My cpython repo pulls from https://hg.python.org/cpython
* My 2.7 repo (and other non-tip repos) pulls from
On Thu, Oct 20, 2016 at 6:47 AM, Skip Montanaro
wrote:
> Is it possible that the fix wasn't propagated to
> the 2.7 branch? Or perhaps I've fouled up my hg repo relationships?
Either way, I went ahead and opened a ticket:
http://bugs.python.o
On Thu, Oct 20, 2016 at 7:35 AM, Victor Stinner
wrote:
>
> Are you on the 2.7 branch or the default branch?
>
> You might try to cleanup your checkout:
>
> hg up -C -r 2.7
> make distclean
> hg purge # WARNING! it removes *all* files not tracked by Mercurial
> ./configure && make
>
> You should al
On Fri, Oct 21, 2016 at 1:12 PM, Brett Cannon wrote:
>> in first cpython, then 2.7 repos I should be up-to-date, correct?
>
>
> Nope, you need to execute the same steps in your 2.7 checkout
"repos" == "checkout" in my message.
So the hg up -C solved my problem, but I'm still a bit confused
(noth
I need to do a little 2.6 spelunking. I don't see a 2.6 branch in the
output of "hg branches". Is "hg clone v2.6.9" the proper incantation to get
the latest version (or perhaps "v2.6")?
Thx,
Skip
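As the follow-up below hints, old release branches are closed and hidden by
default, so something along these lines should work; treat it as a sketch,
unverified against that repo:

$ hg branches --closed          # closed branches, e.g. 2.6, show up here
$ hg update -r v2.6.9           # check out the tagged release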
Cool, thanks to Ned and Zach. Hg never gets allocated very many neurons in
my brain. Then there's the whole brain-in-neutral aspect of things which
makes me fail to consider there might be help and/or closed branches which
aren't displayed... Sorry for the distraction.
Skip
I just got burned (wasted a good day or so) by the fact that PyDateTimeAPI
wasn't initialized. The datetime.rst doc states (emphasis mine):
Before using any of these functions, the header file :file:`datetime.h`
must be included in your source (note that this is not included by
:file:`Python.h`),
Alexander> I find them useful. I've never had success with python-gdb.py.
As the original author, and occasional user (just in the last week or
two) I still find the current crude hack useful. I tried to get the
Python support in GDB working a couple years ago, but gave up in
frustration. I hope
On Tue, Feb 28, 2017 at 2:33 AM, Victor Stinner
wrote:
> 2017-02-28 3:00 GMT+01:00 Skip Montanaro :
>> Alexander> I find them useful. I've never had success with python-gdb.py.
>
> Alexander, Skip: Oh, which kind of issues do you have with
> python-gdb.py? It doesn
> First, I had to rename python-gdb.py ...
Okay, I found a copy of python-gdb.py at the top level in a 2.7.13
snapshot I have at-hand. I have no idea where it came from. A file
search on GitHub in the python/cpython repo on both master and 2.7
branches yielded nothing.
It does seem to be working
By the way, maybe we can also start to list vendors (Linux vendors?)
who plan to offer commercial extended ...
Delurking ever so briefly...
Might be worthwhile to list published vendor EOL dates no matter if they
are before or after the 2020 EOL date. Different Linux distros have
different focu
On Mon, Jun 5, 2017 at 12:41 AM, Serhiy Storchaka wrote:
> Barry and Victor prefer moving a brace on a new line in all multiline
> conditional cases. I think that it should be done only when the condition
> continuation lines and the following block of the code have the same
> indentation (as in t
> I have a core file (produced via the gcore command) of a linux python2.6
> process. I need to extract the byte code and de-compile it.
Following on Steve's comment, you might want to take a look at
Misc/gdbinit for some GDB command inspiration. You are correct, you
won't have a running process
Emacs has been unexec'ing for as long as I can remember (which is longer
than I can remember Python :). I know that it's been problematic and there
have been many efforts over the years to replace it, but I think it's been a
fairly successful technique in practice, at least on platforms that suppo
Mohamed> I love everything about this - but I expect some hesitancy
due to this "Multithreaded programs are prone to concurrency bugs.".
Paul> The way I see it, the concurrency model to be used is selected
by developers. They can choose between ...
I think the real intent of the statement Mohamed
Guido> To be clear, Sam’s basic approach is a bit slower for
single-threaded code, and he admits that. But to sweeten the pot he has
also applied a bunch of unrelated speedups that make it faster in general,
so that overall it’s always a win. But presumably we could upstream the
latter easily, sepa
>
> Did you try running the same code with stock Python?
>
> One reason I ask is the IIUC, you are using numpy for the individual
> vector operations, and numpy already releases the GIL in some
> circumstances.
>
I had not run the same code with stock Python (but see below). Also, I only
used nu
Skip> 1. I use numpy arrays filled with random values, and the output array
is also a numpy array. The vector multiplication is done in a simple for
loop in my vecmul() function.
CHB> probably doesn't make a difference for this exercise, but numpy arrays
make lousy replacements for a regular list
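A hypothetical reconstruction (names, sizes, and thread count invented) of the
kind of benchmark described above; the multiply itself is a pure-Python loop,
so it holds the GIL even though the operands are numpy arrays:

```
import threading
import numpy as np

def vecmul(a, b, out):
    # element-wise multiply in a plain Python loop (no numpy vectorization)
    for i in range(len(a)):
        out[i] = a[i] * b[i]

def run(nthreads=4, n=100_000):
    a = np.random.random(n)
    b = np.random.random(n)
    outs = [np.empty(n) for _ in range(nthreads)]
    threads = [threading.Thread(target=vecmul, args=(a, b, o)) for o in outs]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

if __name__ == "__main__":
    run()
```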
> Remember that pystone is a terrible benchmark.
I understand that. I was only using it as a spot check. I was surprised at
how much slower my (threaded or unthreaded) matrix multiply was on nogil vs
3.9+. I went into it thinking I would see an improvement. The Performance
section of Sam's design
Sam> I think the performance difference is because of different
versions of NumPy.
Thanks all for the help/input/advice. It never occurred to me that two
relatively recent versions of numpy would differ so much for the
simple tasks in my script (array creation & transform). I confirmed
this by rem
> Many operations involving two literals are optimized (to a certain level). So
> it sort of surprises me that literal comparisons are not optimized and
> literal contains only converts the right operand to a constant if possible.
> I'd like to implement optimizations for these especially for the
> That is not entirely true:
> https://github.com/python/cpython/pull/29639#issuecomment-974146979
The only places I've seen "if 0:" or "if False:" in live code was for
debugging. Optimizing that hardly seems necessary. In any case, the
original comment was about comparisons of two constants. I s
It might be worth (re)reviewing Sam Gross's nogil effort to see how he
approached this:
https://github.com/colesbury/nogil#readme
He goes into plenty of detail in his design document about how he deals
with immortal objects. From that document:
Some objects, such as interned strings, small inte
> Is anyone else also getting multiple subscription notices?
>
Yup. In an earlier thread (here? discuss.python.org?) I thought it was
established that someone was working on something related to Python bug
tracking in GitHub. Or something like that. I've just been deleting them.
Skip
Perhaps I missed it, but maybe an action item would be to add a
buildbot which configures for 15-bit PyLong digits.
Skip
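For reference, the knob such a buildbot would flip is a standard configure
option (at least in recent CPython):

$ ./configure --enable-big-digits=15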
> ... make sense of what they’re reading.
Some of us have that problem with type-embellished code now. I'm not sure a
little language would be such a bad idea. 🤔 Fortunately, my relationship
to the working world allows me to simply ignore explicit typing. 😉
Way, way BITD I recall leaning on a cru
>
> So if you hate type annotations because they are unreadable, then you
> hate Python because Python is unreadable.
>
That seems rather harsh. I suspect if those of us who are uncomfortable
with the typing subsystem actually hated Python we would have found our way
to the exits long ago. Typing
> Here is the type hint for `len`, taken from the stub file in typeshed:
>
> def len(__obj: Sized) -> int: ...
>
> Putting the mysterious double underscore naming convention aside, I do
> not find it credible that anyone capable of programming Python beyond a
> beginner level can find that "unr
>
> It would not be nice if the traceback module API started providing
> text with embedded escape sequences without a way to turn them off in the
> API.
>
I think fobj.isatty() would give the traceback module a good idea whether
it's writing to a display device or not. There are a number of other
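A minimal sketch of the isatty() idea; this is a hypothetical helper, not
anything the traceback module actually provides:

```
import sys

def maybe_colorize(text, stream=sys.stderr):
    # Emit ANSI red only when the stream looks like an interactive terminal.
    if hasattr(stream, "isatty") and stream.isatty():
        return "\x1b[31m%s\x1b[0m" % text
    return text
```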
>
> One thing I would mention though is people who can reproduce it check if
> you have any extensions enabled or other tools that can block network
> traffic. Sometimes privacy based extensions and tools can have false
> positives and block resources required to render sites correctly.
>
(I have
Dang auto-correct... I meant "anti-tracking," in case it wasn't obvious.
Skip
On Wed, Mar 16, 2022, 10:19 AM Skip Montanaro
wrote:
> One thing I would mention though is people who can reproduce it check if
>> you have any extensions enabled or other tools that can
Barry writes (in part):
> We could still distribute “sumo” releases which include all the
> batteries, but develop and maintain them outside the cpython repo,
> and even release them separately on PyPI. It’s *possible* but I
> don’t know if it’s *practical*.
to which Stephen responds (in part):
> What happens when the new maintainer puts malware in the next release of
> a package in sumo.txt?
> Will core devs be blamed for listing it?
> As a user, how do I determine if I can trust the packages there? (This
> is easily the hardest part of finding and installing a package from
> PyPI, thoug