New submission from Tony :
After installing Python 3.8.1 64-bit on Windows 10 64-bit version 1909, the
system needs to be rebooted to validate all settings in the registry. Otherwise,
a lot of exceptions occur, such as "Path not found".
--
components: Installation
messages
New submission from Tony :
It would be more practical to name the main Windows registry keys 'python',
with for example 'python32' or 'python64'. This would make searching the
registry for registered Python versions (single- and/or multi-user) a lot
easier.
Tony added the comment:
Hello Steve,
I just read PEP 514.
Thank you for pointing this out.
However, when installing the latest version (3.8.1), the multi-user install is
registered under key
“HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\” as the PEP describes.
The key “HKEY_LOCAL_MACHINE
Tony added the comment:
The attachment I forgot.
Greetings, Tony.
Van: Steve Dower
Verzonden: zaterdag 11 januari 2020 17:30
Aan: factoryx.c...@gmail.com
Onderwerp: [issue39296] Windows register keys
Steve Dower added the comment:
Have you read PEP 514? Does that help?
If not, can you
Tony added the comment:
Hi Steve,
Thank you for this.
I know how WOW64 works and about the redirection to the
(HKEY_LOCAL_MACHINE) ..\Wow6432Node, which is explained in the Microsoft docs.
The HKEY_CURRENT_USER redirection is not well explained, and it appears (from
Google) that I'm not the only one who
New submission from Tony :
At the >>> prompt, type:
>>> 717161 * 0.01
7171.6101
The same goes for:
>>> 717161.0 * 0.01
7171.6101
You can easily find more numbers with a similar problem:
for i in range(100):
    if len(str(i * 0.01)) > 12:
        print(i, i * 0.01)
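As an aside (a minimal sketch using the standard decimal module, not part of the original report), the underlying cause is that 0.01 and 0.1 have no exact binary representation, so products and sums drift from their decimal values:

```python
from decimal import Decimal

# Binary floats cannot represent 0.1 or 0.01 exactly, so arithmetic
# on them drifts away from the decimal result:
float_equal = (0.1 + 0.2 == 0.3)   # False for binary floats

# decimal.Decimal works in base 10 and has no such surprise:
decimal_equal = (Decimal('0.1') + Decimal('0.2') == Decimal('0.3'))   # True
```

This is the behavior the tutorial's "Floating Point Arithmetic: Issues and Limitations" chapter describes; it is not a bug in the multiplication itself.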
New submission from Tony :
Currently, calling BaseServer's shutdown() function will not make
serve_forever() return immediately from its select().
I suggest adding a new function called server_shutdown() that will make
serve_forever() shut down immediately.
Then in TCPServer(BaseS
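The idea can be sketched with a self-pipe (hypothetical names mirroring the proposal; the real socketserver internals differ): the serving loop also select()s on an internal socket pair, and server_shutdown() writes one byte to it so the blocking select returns immediately:

```python
import selectors
import socket
import threading

class WakeableServer:
    """Sketch only: wake a blocking select() loop on demand."""

    def __init__(self):
        self._rsock, self._wsock = socket.socketpair()
        self._sel = selectors.DefaultSelector()
        self._sel.register(self._rsock, selectors.EVENT_READ)
        self._running = True

    def serve_forever(self):
        while self._running:
            for key, _ in self._sel.select():   # blocks until I/O or wake-up
                if key.fileobj is self._rsock:
                    self._rsock.recv(1)          # drain the wake-up byte

    def server_shutdown(self):
        self._running = False
        self._wsock.send(b"\x00")               # wakes the select() at once

# Demo: the loop stops promptly even though no client ever connects.
srv = WakeableServer()
t = threading.Thread(target=srv.serve_forever)
t.start()
srv.server_shutdown()
t.join(timeout=5)
stopped = not t.is_alive()
```

Without the wake-up socket, shutdown would only take effect after the select() timeout expires, which is exactly the delay the proposal removes.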
Tony added the comment:
By the way, I have to ask: since I want this feature to be merged (this is my
first PR), should I make a PR against 3.6/3.7/3.8/3.9 and master, or against
master only?
Thanks.
--
___
Python tracker
<ht
Change by Tony :
--
keywords: +patch
pull_requests: +20259
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/21093
Change by Tony :
--
pull_requests: +20260
pull_request: https://github.com/python/cpython/pull/21094
Tony added the comment:
Just want to note that this fixes an issue in all TCPServers, not only
http.server.
--
title: BaseServer's server_forever() shutdown immediately when calling
shutdown() -> TCPServer's server_forever() shutdown immediately when call
Tony added the comment:
This still leaves the open issue of UDPServer not shutting down immediately
though
Tony added the comment:
poke
New submission from Tony :
In IocpProactor I saw that the calls to the functions recv, recv_into,
recvfrom, sendto, send and sendfile all pass the same callback function for
when the overlapped operation is done.
I just wanted cleaner code, so I made a static function inside the class
Change by Tony :
--
keywords: +patch
pull_requests: +20547
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/21399
New submission from Tony :
There is a cache variable for the running loop holder, but once
set_running_loop is called the variable was set to NULL so the next time
get_running_loop would have to query a dictionary to receive the running loop
holder.
I thought why not always cache the latest
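The caching idea can be sketched in pure Python (hypothetical names; the real CPython code keeps the cache in a C-level variable inside the asyncio extension module): update the cached holder on set instead of invalidating it, so the fast path skips the dictionary lookup:

```python
import threading

_cache = threading.local()   # stands in for the per-thread C cache variable

def set_running_loop(loop):
    # Update the cache rather than clearing it, so the next
    # get_running_loop() call avoids the dictionary lookup.
    _cache.loop = loop

def get_running_loop():
    loop = getattr(_cache, "loop", None)
    if loop is None:
        raise RuntimeError("no running event loop")
    return loop

set_running_loop("sentinel-loop")   # any object works for the sketch
current = get_running_loop()
```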
Change by Tony :
--
title: asyncio module better caching for set and get_running_loop ->
asyncio.set_running_loop() cache running loop holder
Change by Tony :
--
keywords: +patch
pull_requests: +20550
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/21401
Tony added the comment:
bump
Change by Tony :
--
pull_requests: +20555
pull_request: https://github.com/python/cpython/pull/21406
New submission from Tony :
Using recv_into instead of recv in the transport _loop_reading will speed up
the process.
From what I checked, it's about a 120% performance increase.
This is only because there should not be a new buffer allocated each time we
call recv, it's re
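The optimization in a nutshell (a standalone sketch, not the transport code itself): recv() allocates a fresh bytes object on every call, while recv_into() fills a buffer allocated once up front:

```python
import socket

a, b = socket.socketpair()
a.sendall(b"hello")

buf = bytearray(64 * 1024)   # allocated once, reused for every read
n = b.recv_into(buf)         # writes into buf in place, returns byte count
data = bytes(buf[:n])

a.close()
b.close()
```

In a tight read loop the per-call allocation (and the garbage it creates) is what the reported speedup comes from.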
Change by Tony :
--
nosy: +tontinton
nosy_count: 3.0 -> 4.0
pull_requests: +20585
pull_request: https://github.com/python/cpython/pull/21439
Change by Tony :
--
keywords: +patch
pull_requests: +20588
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/21439
Change by Tony :
--
pull_requests: +20589
pull_request: https://github.com/python/cpython/pull/21442
Change by Tony :
--
pull_requests: +20590
pull_request: https://github.com/python/cpython/pull/21442
New submission from Tony :
This will greatly increase performance; in my internal tests it was about
150% on Linux.
Using read_into instead of read will make it so we do not allocate a new buffer
each time data is received.
--
messages: 373526
nosy: tontinton
priority: normal
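The mechanism asyncio exposes for this kind of buffer reuse is BufferedProtocol; a sketch (hypothetical class name) of the idea, where the event loop asks for a buffer via get_buffer() and fills it in place instead of handing data_received() a newly allocated bytes object:

```python
import asyncio

class ReusedBufferProtocol(asyncio.BufferedProtocol):
    def __init__(self):
        self._buf = bytearray(64 * 1024)   # one upfront allocation
        self.received = bytearray()

    def get_buffer(self, sizehint):
        return self._buf                   # the loop reads directly into this

    def buffer_updated(self, nbytes):
        # Only the first nbytes of the buffer were just filled.
        self.received += self._buf[:nbytes]

# The callback pair can be exercised without a running event loop:
proto = ReusedBufferProtocol()
view = proto.get_buffer(5)
view[:5] = b"hello"
proto.buffer_updated(5)
```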
Change by Tony :
--
pull_requests: +20593
pull_request: https://github.com/python/cpython/pull/21446
Change by Tony :
--
keywords: +patch
pull_requests: +20594
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/21446
Tony added the comment:
I feel like the metadata is not really a concern here. I like it when there is
no code duplication :)
New submission from Tony :
Add a StreamReader.readinto(buf) function.
Exactly like StreamReader.read() with *n* being equal to the length of buf.
Instead of allocating a new buffer, copy the read buffer into buf.
--
messages: 373702
nosy: tontinton
priority: normal
severity: normal
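The proposed semantics can be sketched as a free function on top of the existing API (hypothetical helper, not the actual patch): behave like reader.read(len(buf)) but copy the result into a caller-supplied buffer and return the byte count:

```python
import asyncio

async def readinto(reader: asyncio.StreamReader, buf: bytearray) -> int:
    """Sketch of the proposed StreamReader.readinto(buf)."""
    data = await reader.read(len(buf))
    n = len(data)
    buf[:n] = data          # copy into the caller's buffer
    return n

async def demo():
    reader = asyncio.StreamReader()
    reader.feed_data(b"abc")
    reader.feed_eof()
    buf = bytearray(8)
    n = await readinto(reader, buf)
    return n, bytes(buf[:n])

result = asyncio.run(demo())
```

A real implementation would copy straight out of the reader's internal buffer and skip the intermediate bytes object; the sketch only shows the interface.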
Change by Tony :
--
pull_requests: +20633
pull_request: https://github.com/python/cpython/pull/21491
Change by Tony :
--
keywords: +patch
pull_requests: +20634
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/21491
Tony added the comment:
OK.
I'm interested in learning about the new API.
Is it documented somewhere?
Tony added the comment:
Ah it's trio...
Tony added the comment:
OK, actually that sounds really important; I am interested.
But to begin doing something like this I need to know what's the general design.
Is it simply combining stream reader and stream writer into a single object and
changing the write() function to always
Tony added the comment:
> Which brings me to the most important point: what we need it not coding it
> (yet), but rather drafting the actual proposal and posting it to
> https://discuss.python.org/c/async-sig/20. Once a formal proposal is there
> we can proceed with the im
Tony added the comment:
By the way, if we eventually combine StreamReader and StreamWriter, won't
this function (readinto) be useful then?
Maybe we should consider adding it right now.
Tell me your thoughts on this.
Tony added the comment:
I see, I'll start working on a fix soon
Tony added the comment:
OK, so I checked, and the PR I currently have under review fixes this issue:
https://github.com/python/cpython/pull/21446
Do you want me to make a separate PR tomorrow that fixes this specific issue,
to get it into master faster, or is it OK to wait a bit
Tony added the comment:
If the error is not resolved yet, I would prefer that we revert this change.
The new PR is kind of big; I don't know when it will be merged.
New submission from Tony :
When calling a function, a stack is allocated via va_build_stack.
There is a leak if do_mkstack fails inside it.
--
messages: 375267
nosy: tontinton
priority: normal
severity: normal
status: open
title: Bugfix: va_build_stack leaks the stack if
Change by Tony :
--
keywords: +patch
pull_requests: +20974
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/21847
Tony added the comment:
bump
--
title: Convert StreamReaderProtocol to a BufferedProtocol -> Add a
StreamReaderBufferedProtocol
Tony added the comment:
bump
Tony added the comment:
bump
New submission from Tony:
The source code for ctw (CurseTheWeather) can be found here:
https://github.com/tdy/ctw
Running `ctw USCA0987` or `ctw --nometric USCA0987` (happens regardless of
location) results in an attribute error with Python 3.4.3. Running `ctw` by
itself does print a
New submission from Tony Rice :
datetime.datetime.utcnow()
returns a timezone naive datetime, this is counter-intuitive since you are
logically dealing with a known timezone. I suspect this was implemented this
way for fidelity with the rest of datetime.datetime (which returns timezone
Tony Rice added the comment:
This enhancement request should be reconsidered.
Yes, it is the documented behavior, but that doesn't mean it's the right
behavior. Functions should work as expected not just in the context of the
module they are implemented in, but in the context of t
Tony Rice added the comment:
I would argue that PEP 20 should win over backward compatibility; in addition
to the points I hinted at above:
practicality beats purity
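For comparison (a sketch, not part of the original report): datetime.now(timezone.utc) already returns the timezone-aware value the reporter expected, while utcnow() attaches no tzinfo:

```python
from datetime import datetime, timezone

naive = datetime.utcnow()           # utcnow() attaches no tzinfo
aware = datetime.now(timezone.utc)  # the aware equivalent

print(naive.tzinfo)   # None
print(aware.tzinfo)   # UTC
```

Comparing or mixing the two styles is where naive UTC values typically bite: `aware - naive` raises TypeError because one operand has no timezone.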
Changes by Tony Meyer :
--
nosy: +anadelonbrin
New submission from Tony Wallace :
Change to documentation preamble for csv module:
From:
There is no “CSV standard”, so the format is operationally defined by the many
applications which read and write it. The lack of a standard means that subtle
differences often exist in the data produced
New submission from Tony Wallace <[EMAIL PROTECTED]>:
[EMAIL PROTECTED] Python-2.5.1]$ ./configure
--prefix=/home/tony/root/usr/local/python-2.5.2 --enable-shared
--enable-static
[EMAIL PROTECTED] bin]$ file python
python: ELF 64-bit LSB executable, AMD x86-64, version 1 (SYSV), for
GNU
Tony Wallace <[EMAIL PROTECTED]> added the comment:
> how do you know
Here is the story; sorry I skipped it before - I was at work then.
I was doing the basic build-from-source on RHEL (Centos) Linux, because
I don't have root and I need to install it in $HOME/something. I don
Tony Wallace <[EMAIL PROTECTED]> added the comment:
> are you using gcc 4.3
No, I don't think so.
[EMAIL PROTECTED] tony]$ gcc --version
gcc (GCC) 3.2.3 20030502 (Red Hat Linux 3.2.3-53)
Copyright (C) 2002 Free Software Foundation, Inc.
This is free software; see the sour
Tony Wallace <[EMAIL PROTECTED]> added the comment:
> are you willing
Yes, so long as I don't need root; I can follow instructions OK.
By the way, the same thing (memory leak, 2.5.2) occurred on CentOS 4.6, a
different Linux box. Let's proceed on that CentOS 4.6 box. Here are t
Tony Wallace <[EMAIL PROTECTED]> added the comment:
make test not only fails "test_list", it also fails "test_tuple" and
"test_userlist". In all cases, the behavior looks the same -- memory
expands to > 90% and you kill it.
Tony Wallace <[EMAIL PROTECTED]> added the comment:
in test_list.py, the following shows where it hit the memory leak:
[EMAIL PROTECTED] Python-2.5.2]$
LD_LIBRARY_PATH=/home/tony/src/Python-2.5.2/Lib/:$LD_LIBRARY_PATH
./python -v Lib/test/test_list.py
# installing zipimport hook
Tony Wallace <[EMAIL PROTECTED]> added the comment:
tried again with
./configure --prefix=/home/tony/root/usr/local/python-2.5.2 --with-tcl
--disable-shared
No change
But I noticed this when it recompiled. Maybe it is related.
gcc -pthread -c -fno-strict-aliasing -DNDEBUG -g -fwrapv -O3
Tony Wallace <[EMAIL PROTECTED]> added the comment:
> Objects/obmalloc.c:529: warning:
> comparison is always false due to limited range of data type
This compile complaint was definitely introduced in 2.5.2 by source
changes from 2.5.1. So, there's a minor problem that could
Tony Wallace <[EMAIL PROTECTED]> added the comment:
It worked- I took a patch of r65334, as
svn diff -c 65334
"http://svn.python.org/projects/python/branches/release25-maint";
and applied that patch ONLY to a clean release 2.5.2 source, ignoring
the patch failure in Misc/NEW
Tony Locke added the comment:
I've created a patch for parse.py against the py3k branch, and I've also
included ndim's test cases in that patch file.
When returning the host name of an IPv6 literal, I don't include the
surrounding '[' and ']'. Fo
Changes by Tony Locke :
Removed file: http://bugs.python.org/file16886/parse.py.patch
Tony Locke added the comment:
Regarding the RFC list issue, I've posted a new patch with a new RFC list that
combines ndim's list and the comments from #5650.
Pitrou argues that http://dead:beef::]/foo/ should fail because it's a
malformed URL. My response would be that the p
Tony Nelson added the comment:
If I understand RFC2822 3.2.2. Quoted characters (heh), unquoting must
be done in one pass, so the current replace().replace() is wrong. It
will change '\\"' to '"', but it should become '\"' when unquoted.
This se
Tony Meyer added the comment:
None of my arguments have really changed since 2.4. I still believe
that this is a poor choice of default behaviour (and if it is meant to
be overridden to be usable, then 'pass' or 'raise
NotYetImplementedError' would be a better c
Tony Cappellini added the comment:
I'm still seeing hangs with subprocess.run() in Python 3.7.4
Unfortunately, it involves talking to an NVME SSD on Linux, so I cannot
easily submit code to duplicate it.
--
nosy: +cappy
Tony Cappellini added the comment:
Using Python 3.7.4, I'm calling subprocess.run() with the following arguments.
.run() still hangs even though a timeout is being passed in.
subprocess.run(cmd_list,
stdout=subprocess
New submission from Tony Zhou :
In the 3.10.0 Documentation » The Python Tutorial » 15. Floating Point
Arithmetic: Issues and Limitations,
the link "The Perils of Floating Point" takes the user to https://www.hmbags.tw/
I don't think this is right. Please check.
--
messag
Tony Zhou added the comment:
OK, I see. I found the PDF. Thank you for that anyway.
Change by Tony Hirst :
--
components: Library (Lib)
nosy: Tony Hirst
priority: normal
severity: normal
status: open
title: json serialiser errors with numpy int64
versions: Python 3.7
New submission from Tony Hirst :
import json
import numpy as np
json.dumps({'int64': np.int64(1)})
# raises: TypeError: Object of type int64 is not JSON serializable
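A common workaround (sketched here with a hypothetical stand-in class instead of numpy, so the example stays self-contained) is to pass default=int: json.dumps calls the `default` callable for any object it cannot serialize, and the same default=int works for real numpy.int64 values:

```python
import json

class Int64Like:
    """Hypothetical stand-in for numpy.int64: not JSON serializable
    directly, but converts cleanly via int()."""
    def __init__(self, value):
        self.value = value
    def __int__(self):
        return self.value

# default=int is invoked only for objects the encoder cannot handle.
encoded = json.dumps({"int64": Int64Like(1)}, default=int)
```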
Tony Hirst added the comment:
Apologies - this is probably strictly a numpy issue.
See: https://github.com/numpy/numpy/issues/12481
Tony Hirst added the comment:
Previously posted issue: https://bugs.python.org/issue22107
Tony Hirst added the comment:
Argh: the previous link was the wrong associated issue. The correct one is:
https://bugs.python.org/issue24313
New submission from Tony Ladd :
The expression "1 and 2" evaluates to 2. Actually, for most combinations of
data types it returns the second object. Of course it's a senseless
construction (a beginning student wrote it), but why no exception?
--
components: Interpreter Cor
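For reference (documented behavior, not part of the original report): `and` and `or` short-circuit and return one of their operands, not a strict bool, so no exception is raised:

```python
# `and` returns the first falsy operand, else the last operand:
and_truthy = (1 and 2)   # 1 is truthy, so the second operand is the result
and_falsy = (0 and 2)    # 0 is falsy: short-circuit, first operand returned

# `or` mirrors this: first truthy operand, else the last operand:
or_falsy = (0 or 3)
or_truthy = (1 or 3)     # short-circuits without evaluating 3
```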
Tony Ladd added the comment:
Dennis,
Thanks for the explanation. Sorry for posting an invalid report. Python is
relentlessly logical, but sometimes confusing.
New submission from Tony Lykke :
I submitted this to the python-ideas mailing list early last year:
https://mail.python.org/archives/list/python-id...@python.org/thread/7ZHY7HFFQHIX3YWWCIJTNB4DRG2NQDOV/.
Recently I had some time to implement it (it actually turned out to be pretty
trivial
Change by Tony Lykke :
--
keywords: +patch
pull_requests: +23269
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/24478
Tony Lykke added the comment:
Perhaps the example I added to the docs isn't clear enough and should be
changed, because you're right: that specific one can be served by store_const.
It turns out coming up with examples that are minimal but not too contrived is
hard! Let me try ag
Tony Lykke added the comment:
Sorry, there's a typo in my last comment.
--store --foo a
Namespace(foo=['a', 'b', 'c'])
from the first set of examples should have been
--store --foo c
Tony Reix added the comment:
On Fedora32/PPC64LE (5.7.9-200.fc32.ppc64le), with a little change:
libc = CDLL('/usr/lib64/libc.so.6')
I get the correct answer:
b'def'
b'def'
b'def'
# python3 --version
Python 3.8.3
libffi : 3.1-24
On Fedora32/x86_6
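A minimal sketch of calling memchr safely through ctypes (assuming a Unix libc is loadable; not the reporter's Pb.py): declare the prototype so ctypes does not guess the types. Without `restype = c_void_p`, the returned pointer is truncated to a C int, a classic source of SIGSEGV on 64-bit platforms:

```python
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Declare the C prototype explicitly; ctypes defaults restype to c_int,
# which truncates 64-bit pointers.
libc.memchr.argtypes = [ctypes.c_void_p, ctypes.c_int, ctypes.c_size_t]
libc.memchr.restype = ctypes.c_void_p

buf = ctypes.create_string_buffer(b"abcdef")
found = libc.memchr(buf, ord("d"), 6)     # address of b"d", as an int
offset = found - ctypes.addressof(buf)    # index of b"d" in the buffer
```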
Tony Reix added the comment:
Fedora32/x86_64
[root@destiny10 tmp]# gdb /usr/bin/python3.8 core
...
Core was generated by `python3 ./Pb.py'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0 0x7f898a02a1d8 in __memchr_sse2 () from /lib64/libc.so.6
Missing separate debug
Tony Reix added the comment:
On AIX:
root@castor4## gdb /opt/freeware/bin/python3
...
(gdb) run -m pdb Pb.py
...
(Pdb) n
b'def'
> /home2/freeware/src/packages/BUILD/Python-3.8.5/32bit/Pb.py(35)()
-> print(
(Pdb) n
> /home2/freeware/src/packages/BUILD/Python-3
Tony Reix added the comment:
On Fedora/x86_64, in order to get the core, one must do:
coredumpctl -o /tmp/core dump /usr/bin/python3.8
Tony Reix added the comment:
On Fedora/PPC64LE, where it is OK, the same debug with gdb gives:
(gdb) where
#0 0x77df03b0 in __memchr_power8 () from /lib64/libc.so.6
#1 0x7fffea167680 in ?? () from /lib64/libffi.so.6
#2 0x7fffea166284 in ffi_call () from /lib64/libffi.so.6
Tony Reix added the comment:
On AIX in 32bit, we have:
Thread 2 hit Breakpoint 2, 0xd01407e0 in memchr () from /usr/lib/libc.a(shr.o)
(gdb) where
#0 0xd01407e0 in memchr () from /usr/lib/libc.a(shr.o)
#1 0xd438f480 in ffi_call_AIX () from /opt/freeware/lib/libffi.a(libffi.so.6)
#2
Tony Reix added the comment:
AIX: difference between 32bit and 64bit.
After the second print, the stack is:
32bit:
#0 0xd01407e0 in memchr () from /usr/lib/libc.a(shr.o)
#1 0xd438f480 in ffi_call_AIX () from /opt/freeware/lib/libffi.a(libffi.so.6)
#2 0xd438effc in ffi_call () from /opt
Tony Reix added the comment:
# pwd
/opt/freeware/src/packages/BUILD/libffi-3.2.1
# grep -R ffi_closure_ASM *
powerpc-ibm-aix7.2.0.0/.libs/libffi.exp: ffi_closure_ASM
powerpc-ibm-aix7.2.0.0/include/ffitarget.h:void * code_pointer; /*
Pointer to ffi_closure_ASM */
src/powerpc
Tony Reix added the comment:
On AIX 7.2, with libffi compiled with -O0 -g, I have:
1) Call to memchr thru memchr_args_hack
#0 0x091b0d60 in memchr () from /usr/lib/libc.a(shr_64.o)
#1 0x0900058487a0 in ffi_call_DARWIN () from
/opt/freeware/lib/libffi.a(libffi.so.6)
#2
Tony Reix added the comment:
After adding traces and rebuilding Python and libffi with -O0 -g -gdwarf,
it appears that the bug is still there in 64-bit, but that ffi_call_AIX is now
called instead of ffi_call_DARWIN from the ffi_call() routine of
../src/powerpc/ffi_darwin.c (lines
Tony Reix added the comment:
Fedora32/x86_64 : Python v3.8.5 has been built.
The issue is still there, but differs between debug and optimized mode.
Thus, the change done in https://bugs.python.org/issue22273 did not fix it.
./Pb-3.8.5-debug.py :
#!/opt/freeware/src/packages/BUILD/Python-3.8.5
Change by Tony Reix :
--
versions: +Python 3.8 -Python 3.7
Tony Reix added the comment:
Fedora32/x86_64 : Python v3.8.5 : optimized : uint type.
If the Pb.py program uses the uint type instead of ulong, the issue is
different (see below).
This means the issue depends on the length of the data.
BUILD=optimized
TYPE=int
export
Tony Reix added the comment:
After more investigation, we (Damien and I) think there are several issues
in Python 3.8.5:
1) Documentation.
a) AFAIK, the only place where the Python ctypes documentation talks about
how arrays in a structure are managed is at:
https
Tony Reix added the comment:
I do agree that the example with memchr is not correct.
About your suggestion, I've done it. With 32. And that works fine.
All 3 values are passed by value.
# cat Pb-3.8.5.py
#!/usr/bin/env python3
from ctypes import *
mine = CDLL('./MemchrAr
New submission from Tony Reix :
Python master of 2020/08/11
Test test_maxcontext_exact_arith (test.test_decimal.CWhitebox) checks that
Python correctly handles a case where an object of size 421052631578947376 is
created.
maxcontext = Context(prec=C.MAX_PREC, Emin=C.MIN_EMIN, Emax
Tony Reix added the comment:
Some more explanations.
On AIX, the memory is controlled by the ulimit command.
"Global memory" comprises the physical memory and the paging space, associated
with the Data Segment.
By default, both Memory and Data Segment are limited:
# ulimit -a
dat
Tony Reix added the comment:
Hi Pablo,
I'm only surprised that the maximum size generated in the test is always lower
than PY_SSIZE_T_MAX. This appears both on AIX and on Linux, which both
compute the same values.
On AIX, it appears (I've just discovered this now) that mal
Tony Reix added the comment:
Is it a 64-bit AIX? Yes, AIX has been 64-bit by default for ages, but it runs
32-bit applications as well as 64-bit applications.
The experiments were done with 64-bit Python executables on both AIX and Linux.
The AIX machine has 16GB Memory and 16GB Paging
Tony Reix added the comment:
I forgot to say that this behavior was not present in stable version 3.8.5.
Sorry.
On 2 machines AIX 7.2, testing Python 3.8.5 with:
+ cd /opt/freeware/src/packages/BUILD/Python-3.8.5
+ ulimit -d unlimited
+ ulimit -m unlimited
+ ulimit -s unlimited
+ export