[issue39293] Windows 10 64-bit needs reboot
New submission from Tony : After installing Python 3.8.1 64-bit on Windows 10 64-bit version 1909, the system needs to be rebooted to validate all settings in the registry. Otherwise it will raise many exceptions, such as "Path not found". -- components: Installation messages: 359756 nosy: ToKa priority: normal severity: normal status: open title: Windows 10 64-bit needs reboot type: behavior versions: Python 3.8 ___ Python tracker <https://bugs.python.org/issue39293> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue39296] Windows register keys
New submission from Tony : It would be more practical to name the Windows main registry keys 'python', with for example 'python32' or 'python64'. This would make searching the registry for registered python versions (single and/or multi users) a lot easier. -- components: Windows messages: 359765 nosy: ToKa, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Windows register keys versions: Python 3.8 ___ Python tracker <https://bugs.python.org/issue39296> ___
[issue39296] Windows register keys
Tony added the comment: Hello Steve, I just read PEP 514. Thank you for pointing this out. However, when installing the latest version (3.8.1), the multi-user install is registered under the key “HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\” as the PEP describes. The key “HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Python\PythonCore\3.8-32” is a bit confusing: since I already know to search “..\WOW6432Node” for 32-bit apps installed on a 64-bit OS, the “-32” version suffix seems redundant in my opinion, and I have to write extra code to extract just the version number, as you can see in the screenshot of the registry (see attachment). The single-user option, however, is registered differently. Its key is not under “..\WOW6432Node” like the multi-user option, but under “HKEY_CURRENT_USER\Software\”, while I would expect the same “..\WOW6432Node” layout as the multi-user option. I hope I explained enough about this. Greetings, Tony. From: Steve Dower Sent: Saturday, 11 January 2020 17:30 To: factoryx.c...@gmail.com Subject: [issue39296] Windows register keys Steve Dower added the comment: Have you read PEP 514? Does that help? If not, can you provide specific suggestions in terms of that standard to help us understand what you are suggesting? -- ___ Python tracker <https://bugs.python.org/issue39296> ___ -- ___ Python tracker <https://bugs.python.org/issue39296> ___
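The extra string handling Tony complains about can be illustrated with a small, hypothetical helper (not part of any installer or registry API) that splits a PEP 514-style tag such as "3.8-32" into the version and the architecture suffix:

```python
def split_tag(tag):
    """Split a PEP 514-style tag like '3.8-32' into (version, arch).

    Hypothetical helper, shown only to illustrate the parsing the
    '-32' suffix forces on registry-scanning code.
    """
    version, _, arch = tag.partition("-")
    return version, arch or None

print(split_tag("3.8-32"))  # ('3.8', '32')
print(split_tag("3.8"))     # ('3.8', None)
```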
[issue39296] Windows register keys
Tony added the comment: I forgot the attachment. Greetings, Tony. From: Steve Dower Sent: Saturday, 11 January 2020 17:30 To: factoryx.c...@gmail.com Subject: [issue39296] Windows register keys Steve Dower added the comment: Have you read PEP 514? Does that help? If not, can you provide specific suggestions in terms of that standard to help us understand what you are suggesting? -- ___ Python tracker <https://bugs.python.org/issue39296> ___ -- Added file: https://bugs.python.org/file48836/python-reg1.jpg ___ Python tracker <https://bugs.python.org/issue39296> ___
[issue39296] Windows register keys
Tony added the comment: Hi Steve, Thank you for this. I know how WOW64 works and about the redirection to (HKEY_LOCAL_MACHINE) ..\Wow6432Node, which is explained in the Microsoft docs. The HKEY_CURRENT_USER redirection is not as well explained, and it appears (from a quick Google search) that I'm not the only one who was confused by this behavior. So, again, many thanks for your explanation! Tony Kalf. From: Steve Dower Sent: Monday, 13 January 2020 19:49 To: factoryx.c...@gmail.com Subject: [issue39296] Windows register keys Steve Dower added the comment: You should read the version number from the Version or SysVersion values, rather than from the tag. Having -32 in the key name is a compatibility requirement. Without it, if you installed 32-bit and 64-bit versions for the current user (which is now the default), they would overwrite each other. The Wow6432Node key is due to Windows, not Python. We don't decide the name or when it is used, and Windows determined that HKEY_CURRENT_USER is not subject to registry redirection. That's why you don't see it there. Hope that helps clarify what's going on. -- ___ Python tracker <https://bugs.python.org/issue39296> ___ -- ___ Python tracker <https://bugs.python.org/issue39296> ___
[issue43291] elementary multiplication by 0.01 error
New submission from Tony : At the >>> prompt type: >>>717161 * 0.01 7171.6101 The same goes for >>>717161.0 * 0.01 7171.6101 You can easily find more numbers with a similar problem: for i in range(100): if len(str(i * 0.01)) > 12: print(i, i * 0.01) I am sure that this problem was found before and circumvented by: >>>717161 / 100 7171.61 but this is hardly a workaround one wants to rely on in code. This is the Python version I use: Python 3.7.2 (tags/v3.7.2:9a3ffc0492, Dec 23 2018, 23:09:28) [MSC v.1916 64 bit (AMD64)] on win32 -- messages: 387485 nosy: tonys_0 priority: normal severity: normal status: open title: elementary multiplication by 0.01 error type: behavior versions: Python 3.7 ___ Python tracker <https://bugs.python.org/issue43291> ___
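This is standard binary floating-point behavior rather than a multiplication bug: 0.01 has no exact base-2 representation, so the product carries a tiny rounding error. A short sketch of two common ways to get the decimal result the reporter expects:

```python
from decimal import Decimal

# 0.01 cannot be represented exactly in binary floating point,
# so the product differs slightly from the decimal value 7171.61.
product = 717161 * 0.01
print(product)  # carries a small representation error

# Rounding to the intended number of decimal places hides the error:
print(round(product, 2))  # 7171.61

# decimal.Decimal performs exact base-10 arithmetic:
print(Decimal("717161") * Decimal("0.01"))  # 7171.61
```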
[issue41093] BaseServer's server_forever() shutdown immediately when calling shutdown()
New submission from Tony : Currently, calling BaseServer's shutdown() function will not make serve_forever() return immediately from its select(). I suggest adding a new function called server_shutdown() that makes serve_forever() shut down immediately. Then in TCPServer(BaseServer), all we need to do is call self.socket.shutdown(socket.SHUT_RDWR) in server_shutdown()'s implementation. To test this I made a simple script: import threading import time from functools import partial from http.server import HTTPServer, SimpleHTTPRequestHandler def serve_http(server): server.serve_forever(poll_interval=2.5) def main(): with HTTPServer(('', 8000), SimpleHTTPRequestHandler) as server: t = threading.Thread(target=partial(serve_http, server)) t.start() time.sleep(3) start = time.time() print('shutdown') server.shutdown() print(f'time it took: {time.time() - start}') if __name__ == "__main__": main() -- components: Library (Lib) messages: 372194 nosy: tontinton priority: normal severity: normal status: open title: BaseServer's server_forever() shutdown immediately when calling shutdown() type: enhancement versions: Python 3.7, Python 3.8, Python 3.9 ___ Python tracker <https://bugs.python.org/issue41093> ___
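The proposed server_shutdown() can be sketched as a subclass of the stock HTTPServer. This is a hypothetical model of the idea, not the patch itself; the method name comes from the report, and it assumes that shutting down the listening socket wakes the select() inside serve_forever() (which is the case on Linux):

```python
import socket
from http.server import HTTPServer


class WakeableHTTPServer(HTTPServer):
    """Sketch of the proposed server_shutdown(): shutting down the
    listening socket makes the blocking select() inside serve_forever()
    return at once instead of waiting out poll_interval."""

    def server_shutdown(self):
        try:
            self.socket.shutdown(socket.SHUT_RDWR)
        except OSError:
            pass  # socket may already be closed
```

Usage: call the stock shutdown() from another thread first (it sets the internal flag), then server_shutdown() to wake the poll loop so the flag is observed immediately.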
[issue41093] BaseServer's server_forever() shutdown immediately when calling shutdown()
Tony added the comment: By the way, I have to ask: since I want this feature to be merged (this is my first PR), should I make a PR to 3.6/3.7/3.8/3.9 and master, or should I create a PR to master only? Thanks -- ___ Python tracker <https://bugs.python.org/issue41093> ___
[issue41093] BaseServer's server_forever() shutdown immediately when calling shutdown()
Change by Tony : -- keywords: +patch pull_requests: +20259 stage: -> patch review pull_request: https://github.com/python/cpython/pull/21093 ___ Python tracker <https://bugs.python.org/issue41093> ___
[issue41093] BaseServer's server_forever() shutdown immediately when calling shutdown()
Change by Tony : -- pull_requests: +20260 pull_request: https://github.com/python/cpython/pull/21094 ___ Python tracker <https://bugs.python.org/issue41093> ___
[issue41093] TCPServer's server_forever() shutdown immediately when calling shutdown()
Tony added the comment: Just want to note that this fixes an issue in all TCPServers and not only http.server -- title: BaseServer's server_forever() shutdown immediately when calling shutdown() -> TCPServer's server_forever() shutdown immediately when calling shutdown() ___ Python tracker <https://bugs.python.org/issue41093> ___
[issue41093] TCPServer's server_forever() shutdown immediately when calling shutdown()
Tony added the comment: This still leaves the open issue of UDPServer not shutting down immediately though -- ___ Python tracker <https://bugs.python.org/issue41093> ___
[issue41093] TCPServer's server_forever() shutdown immediately when calling shutdown()
Tony added the comment: poke -- ___ Python tracker <https://bugs.python.org/issue41093> ___
[issue41246] IOCP Proactor same socket overlapped callbacks
New submission from Tony : In IocpProactor I saw that recv, recv_into, recvfrom, sendto, send and sendfile each define an identical nested callback for when the overlapped operation completes. I just wanted cleaner code, so I made a single static function inside the class that I pass to each of these operations as the overlapped callback. -- messages: 373324 nosy: tontinton priority: normal severity: normal status: open title: IOCP Proactor same socket overlapped callbacks ___ Python tracker <https://bugs.python.org/issue41246> ___
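The refactor is plain deduplication. A toy model of the pattern (this is not the real IocpProactor or _overlapped API; all names here are illustrative): several I/O wrappers registering one shared completion callback instead of each defining its own identical closure.

```python
class FakeProactor:
    """Toy model: wrappers share one static completion callback
    rather than duplicating a nested function per method."""

    @staticmethod
    def _finish_socket_op(result):
        # Single place to turn a completed "overlapped" operation
        # into the value handed back to the caller.
        return result

    def _register(self, result, callback):
        # Stand-in for registering an overlapped op; just invokes the
        # callback as if the operation had completed.
        return callback(result)

    def recv(self, data):
        return self._register(data, self._finish_socket_op)

    def send(self, data):
        return self._register(len(data), self._finish_socket_op)
```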
[issue41246] IOCP Proactor same socket overlapped callbacks
Change by Tony : -- keywords: +patch pull_requests: +20547 stage: -> patch review pull_request: https://github.com/python/cpython/pull/21399 ___ Python tracker <https://bugs.python.org/issue41246> ___
[issue41247] asyncio module better caching for set and get_running_loop
New submission from Tony : There is a cache variable for the running loop holder, but once set_running_loop is called the variable is set to NULL, so the next call to get_running_loop has to query a dictionary to find the running loop holder. I thought: why not always cache the holder from the latest set_running_loop call? The only issue I thought of is in the details of the implementation: I have too little experience in Python to know whether a context switch to get_running_loop could happen while set_running_loop is running. If a context switch is possible there, this issue would be much harder to solve, but still solvable. -- messages: 37 nosy: tontinton priority: normal severity: normal status: open title: asyncio module better caching for set and get_running_loop ___ Python tracker <https://bugs.python.org/issue41247> ___
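A pure-Python model of the proposed change may make it concrete. The real cache is C-level state in _asynciomodule.c and is keyed per thread; this sketch ignores threading entirely and all names are illustrative:

```python
# Illustrative model only: the actual cache lives in C inside
# _asynciomodule.c; this sketch ignores the per-thread dimension.
_running_loop_holders = {}  # stand-in for the holder dictionary
_cached_holder = None       # the cache the report wants to keep warm


def set_running_loop(loop):
    global _cached_holder
    holder = {"loop": loop}
    _running_loop_holders["main"] = holder
    _cached_holder = holder  # proposed: cache instead of clearing to NULL


def get_running_loop():
    if _cached_holder is not None:  # fast path: no dictionary lookup
        return _cached_holder["loop"]
    return _running_loop_holders["main"]["loop"]
```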
[issue41247] asyncio.set_running_loop() cache running loop holder
Change by Tony : -- title: asyncio module better caching for set and get_running_loop -> asyncio.set_running_loop() cache running loop holder ___ Python tracker <https://bugs.python.org/issue41247> ___
[issue41247] asyncio.set_running_loop() cache running loop holder
Change by Tony : -- keywords: +patch pull_requests: +20550 stage: -> patch review pull_request: https://github.com/python/cpython/pull/21401 ___ Python tracker <https://bugs.python.org/issue41247> ___
[issue41093] TCPServer's server_forever() shutdown immediately when calling shutdown()
Tony added the comment: bump -- ___ Python tracker <https://bugs.python.org/issue41093> ___
[issue41247] asyncio.set_running_loop() cache running loop holder
Change by Tony : -- pull_requests: +20555 pull_request: https://github.com/python/cpython/pull/21406 ___ Python tracker <https://bugs.python.org/issue41247> ___
[issue41273] asyncio: proactor read transport: use recv_into instead of recv
New submission from Tony : Using recv_into instead of recv in the transport's _loop_reading will speed things up. From what I checked, it's about a 120% performance increase. This is simply because a new buffer should not be allocated on every recv call; that is really wasteful. -- messages: 373483 nosy: tontinton priority: normal severity: normal status: open title: asyncio: proactor read transport: use recv_into instead of recv ___ Python tracker <https://bugs.python.org/issue41273> ___
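The allocation difference the report describes can be shown with a plain socket pair: recv() returns a freshly allocated bytes object on every call, while recv_into() reuses a buffer owned by the caller.

```python
import socket

# recv() allocates a new bytes object per call; recv_into() fills a
# preallocated, caller-owned buffer instead.
a, b = socket.socketpair()
try:
    a.sendall(b"hello")

    buf = bytearray(65536)   # allocated once, reused for every read
    n = b.recv_into(buf)     # no new bytes object created here
    data = bytes(buf[:n])
finally:
    a.close()
    b.close()
```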
[issue41270] NamedTemporaryFile is not its own iterator.
Change by Tony : -- nosy: +tontinton nosy_count: 3.0 -> 4.0 pull_requests: +20585 pull_request: https://github.com/python/cpython/pull/21439 ___ Python tracker <https://bugs.python.org/issue41270> ___
[issue41273] asyncio: proactor read transport: use recv_into instead of recv
Change by Tony : -- keywords: +patch pull_requests: +20588 stage: -> patch review pull_request: https://github.com/python/cpython/pull/21439 ___ Python tracker <https://bugs.python.org/issue41273> ___
[issue41273] asyncio: proactor read transport: use recv_into instead of recv
Change by Tony : -- pull_requests: +20589 pull_request: https://github.com/python/cpython/pull/21442 ___ Python tracker <https://bugs.python.org/issue41273> ___
[issue41270] NamedTemporaryFile is not its own iterator.
Change by Tony : -- pull_requests: +20590 pull_request: https://github.com/python/cpython/pull/21442 ___ Python tracker <https://bugs.python.org/issue41270> ___
[issue41279] Convert StreamReaderProtocol to a BufferedProtocol
New submission from Tony : This will greatly increase performance; from my internal tests it was about 150% on Linux. Using read_into instead of read means we do not allocate a new buffer each time data is received. -- messages: 373526 nosy: tontinton priority: normal severity: normal status: open title: Convert StreamReaderProtocol to a BufferedProtocol ___ Python tracker <https://bugs.python.org/issue41279> ___
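The asyncio.BufferedProtocol interface the report wants to adopt works by handing the transport a caller-owned buffer via get_buffer()/buffer_updated(), so no new bytes object is allocated per read. A minimal sketch (not the StreamReaderProtocol patch itself):

```python
import asyncio


class ReadIntoProtocol(asyncio.BufferedProtocol):
    """Minimal sketch: the transport writes into our buffer in place
    instead of allocating a new bytes object for each data chunk."""

    def __init__(self, size=65536):
        self._buffer = bytearray(size)  # allocated once
        self.received = bytearray()

    def get_buffer(self, sizehint):
        # The transport fills this memoryview directly.
        return memoryview(self._buffer)

    def buffer_updated(self, nbytes):
        # The first nbytes of the buffer now hold fresh data.
        self.received += self._buffer[:nbytes]
```

The two callbacks replace data_received() and can be exercised without an event loop, which is how the test below drives them.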
[issue41273] asyncio: proactor read transport: use recv_into instead of recv
Change by Tony : -- pull_requests: +20593 pull_request: https://github.com/python/cpython/pull/21446 ___ Python tracker <https://bugs.python.org/issue41273> ___
[issue41279] Convert StreamReaderProtocol to a BufferedProtocol
Change by Tony : -- keywords: +patch pull_requests: +20594 stage: -> patch review pull_request: https://github.com/python/cpython/pull/21446 ___ Python tracker <https://bugs.python.org/issue41279> ___
[issue41246] IOCP Proactor same socket overlapped callbacks
Tony added the comment: I feel like the metadata is not really a concern here. I like when there is no code duplication :) -- ___ Python tracker <https://bugs.python.org/issue41246> ___
[issue41305] Add StreamReader.readinto()
New submission from Tony : Add a StreamReader.readinto(buf) function, exactly like StreamReader.read() with *n* equal to the length of buf. Instead of allocating a new buffer, copy the read data into buf. -- messages: 373702 nosy: tontinton priority: normal severity: normal status: open title: Add StreamReader.readinto() ___ Python tracker <https://bugs.python.org/issue41305> ___
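The proposed API can be approximated today with a hypothetical helper (this is not the proposed implementation: a real readinto() would copy out of the reader's internal buffer directly rather than allocating an intermediate bytes object, which is the whole point of the proposal):

```python
import asyncio


async def readinto(reader: asyncio.StreamReader, buf: bytearray) -> int:
    """Hypothetical helper approximating the proposed
    StreamReader.readinto(): read up to len(buf) bytes and copy
    them into the caller's buffer, returning the count read."""
    data = await reader.read(len(buf))  # intermediate allocation the
    buf[: len(data)] = data             # real method would avoid
    return len(data)
```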
[issue41279] Convert StreamReaderProtocol to a BufferedProtocol
Change by Tony : -- pull_requests: +20633 pull_request: https://github.com/python/cpython/pull/21491 ___ Python tracker <https://bugs.python.org/issue41279> ___
[issue41305] Add StreamReader.readinto()
Change by Tony : -- keywords: +patch pull_requests: +20634 stage: -> patch review pull_request: https://github.com/python/cpython/pull/21491 ___ Python tracker <https://bugs.python.org/issue41305> ___
[issue41305] Add StreamReader.readinto()
Tony added the comment: OK. I'm interested in learning about the new API. Is it documented somewhere? -- ___ Python tracker <https://bugs.python.org/issue41305> ___
[issue41305] Add StreamReader.readinto()
Tony added the comment: Ah it's trio... -- ___ Python tracker <https://bugs.python.org/issue41305> ___
[issue41305] Add StreamReader.readinto()
Tony added the comment: OK, actually that sounds really important; I am interested. But to begin doing something like this I need to know the general design. Is it simply combining StreamReader and StreamWriter into a single object and changing the write() function to always await the write (thus deprecating drain()), and that's it? If there is not more to it, I can probably do this pretty quickly; it seems easy on the surface. If there is more to it, I would like a more thorough explanation. Maybe we should chat about this. -- ___ Python tracker <https://bugs.python.org/issue41305> ___
[issue41305] Add StreamReader.readinto()
Tony added the comment: > Which brings me to the most important point: what we need is not coding it > (yet), but rather drafting the actual proposal and posting it to > https://discuss.python.org/c/async-sig/20. Once a formal proposal is there > we can proceed with the implementation. Posted: https://discuss.python.org/t/discussion-on-a-new-api-for-asyncio/4725 By the way, I know it's unrelated, but I would like a code review on https://github.com/python/cpython/pull/21446 as I think it's also very important. -- ___ Python tracker <https://bugs.python.org/issue41305> ___
[issue41305] Add StreamReader.readinto()
Tony added the comment: By the way, if we eventually combine StreamReader and StreamWriter, won't this function (readinto) be useful then? Maybe we should consider adding it right now. Tell me your thoughts on this. -- ___ Python tracker <https://bugs.python.org/issue41305> ___
[issue41273] asyncio: proactor read transport: use recv_into instead of recv
Tony added the comment: I see, I'll start working on a fix soon -- ___ Python tracker <https://bugs.python.org/issue41273> ___
[issue41273] asyncio: proactor read transport: use recv_into instead of recv
Tony added the comment: OK, I checked, and the PR I currently have under review fixes this issue: https://github.com/python/cpython/pull/21446 Do you want me to make a separate PR tomorrow that fixes this specific issue, to get it into master faster, or is it OK to wait a bit? -- ___ Python tracker <https://bugs.python.org/issue41273> ___
[issue41273] asyncio: proactor read transport: use recv_into instead of recv
Tony added the comment: If the error is not resolved yet, I would prefer that we revert this change. The new PR is kind of big; I don't know when it will be merged. -- ___ Python tracker <https://bugs.python.org/issue41273> ___
[issue41533] Bugfix: va_build_stack leaks the stack if do_mkstack fails
New submission from Tony : When calling a function, a stack is allocated via va_build_stack. The stack is leaked if do_mkstack fails inside it. -- messages: 375267 nosy: tontinton priority: normal severity: normal status: open title: Bugfix: va_build_stack leaks the stack if do_mkstack fails ___ Python tracker <https://bugs.python.org/issue41533> ___
[issue41533] Bugfix: va_build_stack leaks the stack if do_mkstack fails
Change by Tony : -- keywords: +patch pull_requests: +20974 stage: -> patch review pull_request: https://github.com/python/cpython/pull/21847 ___ Python tracker <https://bugs.python.org/issue41533> ___
[issue41279] Add a StreamReaderBufferedProtocol
Tony added the comment: bump -- title: Convert StreamReaderProtocol to a BufferedProtocol -> Add a StreamReaderBufferedProtocol ___ Python tracker <https://bugs.python.org/issue41279> ___
[issue41533] Bugfix: va_build_stack leaks the stack if do_mkstack fails
Tony added the comment: bump -- ___ Python tracker <https://bugs.python.org/issue41533> ___
[issue41246] IOCP Proactor same socket overlapped callbacks
Tony added the comment: bump -- ___ Python tracker <https://bugs.python.org/issue41246> ___
[issue25253] AttributeError: 'Weather' object has no attribute 'dom'
New submission from Tony: The source code for ctw (CurseTheWeather) can be found here: https://github.com/tdy/ctw Running `ctw USCA0987` or `ctw --nometric USCA0987` (happens regardless of location) results in an attribute error with Python 3.4.3. Running `ctw` by itself does print a *Welcome to "Curse the Weather" Version 0.6* message. Traceback (most recent call last): File "/usr/bin/ctw", line 378, in curses.wrapper(main) File "/usr/lib/python3.4/curses/__init__.py", line 94, in wrapper return func(stdscr, *args, **kwds) File "/usr/bin/ctw", line 283, in main update(stdscr) File "/usr/bin/ctw", line 250, in update weather = weatherfeed.Weather(location, metric) File "/usr/lib/python3.4/weatherfeed.py", line 40, in __init__ self.dom = parseString(self._getData()) File "/usr/lib/python3.4/xml/dom/minidom.py", line 1970, in parseString return expatbuilder.parseString(string) File "/usr/lib/python3.4/xml/dom/expatbuilder.py", line 925, in parseString return builder.parseString(string) File "/usr/lib/python3.4/xml/dom/expatbuilder.py", line 223, in parseString parser.Parse(string, True) xml.parsers.expat.ExpatError: not well-formed (invalid token): line 64, column 26 Exception ignored in: > Traceback (most recent call last): File "/usr/lib/python3.4/weatherfeed.py", line 44, in __del__ self.dom.unlink() AttributeError: 'Weather' object has no attribute 'dom' I did notice the API URL in weatherfeed.py gives a Bad Request error for: urlHandle = urllib.request.urlopen('http://xoap.weather.com/weather/local/%s?cc=1&dayf=5&prod=xoap&link=xoap&unit=%s&par=1003666583&key=4128909340a9b2fc' I also noticed the weather.com API now redirects to wunderground.com so I registered a new API and updated the URL in weatherfeed.py only to still get the same AttributeError. 
The new API is something like http://api.wunderground.com/api/APIKEY/conditions/q/CA/San_Francisco.json -- components: Library (Lib), XML messages: 251742 nosy: tc priority: normal severity: normal status: open title: AttributeError: 'Weather' object has no attribute 'dom' type: crash versions: Python 3.4 ___ Python tracker <http://bugs.python.org/issue25253> ___
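The second traceback is a follow-on failure, not the root cause: parseString() raised before self.dom was assigned, so __del__ ran on a partially initialized object. A hedged sketch of a defensive fix in the third-party weatherfeed.py (the class body here is reduced to the two methods in the traceback):

```python
from xml.dom.minidom import parseString


class Weather:
    """Reduced sketch of weatherfeed.Weather, showing only the
    lifecycle bug from the traceback."""

    def __init__(self, xml_text):
        # If the feed returns malformed XML, parseString() raises here
        # and self.dom is never assigned...
        self.dom = parseString(xml_text)

    def __del__(self):
        # ...so guard the cleanup instead of assuming __init__ finished.
        dom = getattr(self, "dom", None)
        if dom is not None:
            dom.unlink()
```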
[issue46319] datetime.utcnow() should return a timezone aware datetime
New submission from Tony Rice : datetime.datetime.utcnow() returns a timezone-naive datetime; this is counter-intuitive since you are logically dealing with a known timezone. I suspect this was implemented this way for fidelity with the rest of datetime.datetime (which returns timezone-naive datetime objects). The workaround (see below) is to replace the missing tzinfo. Recommendation: By default datetime.datetime.utcnow() should return a timezone-aware datetime (with tzinfo of UTC, of course), or at least offer this behavior as an option, e.g.: datetime.datetime.utcnow(timezone_aware=True) Workaround: dt = datetime.utcnow().replace(tzinfo=timezone.utc) -- components: Library (Lib) messages: 410160 nosy: rtphokie priority: normal severity: normal status: open title: datetime.utcnow() should return a timezone aware datetime type: behavior versions: Python 3.10, Python 3.11, Python 3.7, Python 3.8, Python 3.9 ___ Python tracker <https://bugs.python.org/issue46319> ___
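The naive/aware difference and both remedies fit in a few lines; datetime.now(timezone.utc) is the form the datetime documentation recommends for an aware UTC timestamp:

```python
from datetime import datetime, timezone

naive = datetime.utcnow()           # tzinfo is None: "naive"
aware = datetime.now(timezone.utc)  # tzinfo set: "aware", recommended form
patched = datetime.utcnow().replace(tzinfo=timezone.utc)  # the workaround

print(naive.tzinfo)    # None
print(aware.tzinfo)    # UTC
print(patched.tzinfo)  # UTC
```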
[issue12756] datetime.datetime.utcnow should return a UTC timestamp
Tony Rice added the comment: This enhancement request should be reconsidered. Yes, it is the documented behavior, but that doesn't mean it's the right behavior. Functions should work as expected not just in the context of the module they are implemented in, but also in the context of the problem they are solving. The suggested workaround of essentially nesting the specified UTC timezone via datetime.now(timezone.utc) is ugly rather than beautiful, complex rather than simple, and nested instead of flat. The suggestion that now is preferred over utcnow loses sight of the fact that UTC is not like other timezones. A lot has changed since Python 2.7 was released in 2010: UTC is the default timezone of cloud infrastructure. -- nosy: +rtphokie ___ Python tracker <https://bugs.python.org/issue12756> ___
[issue12756] datetime.datetime.utcnow should return a UTC timestamp
Tony Rice added the comment: I would argue that PEP 20 should win over backward compatibility; in addition to the points I hinted at above, practicality beats purity. -- ___ Python tracker <https://bugs.python.org/issue12756> ___
[issue4661] email.parser: impossible to read messages encoded in a different encoding
Changes by Tony Meyer : -- nosy: +anadelonbrin ___ Python tracker <http://bugs.python.org/issue4661> ___
[issue11456] Documentation csv RFC4180
New submission from Tony Wallace : Change to the documentation preamble for the csv module. From: There is no “CSV standard”, so the format is operationally defined by the many applications which read and write it. The lack of a standard means that subtle differences often exist in the data produced and consumed by different applications. To: CSV was in use for many years before attempts to standardise it in RFC 4180. This has resulted in subtle differences often existing in the data produced and consumed by different applications. -- assignee: docs@python components: Documentation messages: 130469 nosy: docs@python, tonywallace priority: normal severity: normal status: open title: Documentation csv RFC4180 type: feature request versions: Python 2.5, Python 2.6, Python 2.7, Python 3.1, Python 3.2, Python 3.3 ___ Python tracker <http://bugs.python.org/issue11456> ___
[issue3107] memory leak in make test (in "test list"), 2.5.2 not 2.5.1, Linux 64bit
New submission from Tony Wallace <[EMAIL PROTECTED]>: [EMAIL PROTECTED] Python-2.5.1]$ ./configure --prefix=/home/tony/root/usr/local/python-2.5.2 --enable-shared --enable-static [EMAIL PROTECTED] bin]$ file python python: ELF 64-bit LSB executable, AMD x86-64, version 1 (SYSV), for GNU/Linux 2.4.0, dynamically linked (uses shared libs), not stripped [EMAIL PROTECTED] bin]$ uname -a Linux gossamer.ambric.local 2.4.21-40.ELsmp #1 SMP Wed Mar 15 13:46:01 EST 2006 x86_64 x86_64 x86_64 GNU/Linux [EMAIL PROTECTED] bin]$ cat /etc/redhat-release CentOS release 3.6 (Final) -- components: Demos and Tools messages: 68188 nosy: hushp1pt severity: normal status: open title: memory leak in make test (in "test list"), 2.5.2 not 2.5.1, Linux 64bit type: resource usage versions: Python 2.5 ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3107> ___
[issue3107] memory leak in make test (in "test list"), 2.5.2 not 2.5.1, Linux 64bit
Tony Wallace <[EMAIL PROTECTED]> added the comment:

> how do you know

Here is the story; sorry I skipped it before, I was at work then. I was doing the basic build-from-source on RHEL (CentOS) Linux, because I don't have root and I need to install it in $HOME/something. I don't try to change anything except the install location. See the ./configure command given before. I tried 2.5.2 because it was the latest + greatest. When I ran "make test", it ran OK until it got to "test_list". Then it got stuck until I killed it. It's very repeatable. In another window, "top" shows the python process's memory growing and growing, until it owns basically all available memory (8GB) after less than a minute or so. Then I tried python 2.5.1, built exactly the same way, and make test works OK: test_list completes in seconds.

___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3107> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3107] memory leak in make test (in "test list"), 2.5.2 not 2.5.1, Linux 64bit
Tony Wallace <[EMAIL PROTECTED]> added the comment:

> are you using gcc 4.3

No, I don't think so.

[EMAIL PROTECTED] tony]$ gcc --version
gcc (GCC) 3.2.3 20030502 (Red Hat Linux 3.2.3-53)
Copyright (C) 2002 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

And make is definitely using gcc, not something else.

> prompt 2.5.1

Good eye. I grabbed the file-python info from python 2.5.1 after I installed it (because 2.5.1 passed make test, I am planning to use that). It's true, I should have got that from the python 2.5.2, which I never did install, but unfortunately, it was deleted too soon. If it is important I can rebuild 2.5.2 again.

___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3107> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3107] memory leak in make test (in "test list"), 2.5.2 not 2.5.1, Linux 64bit
Tony Wallace <[EMAIL PROTECTED]> added the comment:

> are you willing

Yes, so long as I don't need root, I can follow instructions OK. By the way, the same thing (memory leak in 2.5.2) occurred on CentOS 4.6, a different Linux box. Let's proceed on that CentOS 4.6 box. Here are the particulars:

Python-2.5.2]$ file python
python: ELF 64-bit LSB executable, AMD x86-64, version 1 (SYSV), for GNU/Linux 2.4.0, dynamically linked (uses shared libs), not stripped
[EMAIL PROTECTED] Python-2.5.2]$ gcc --version
gcc (GCC) 3.4.6 20060404 (Red Hat 3.4.6-9)
Copyright (C) 2006 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
[EMAIL PROTECTED] Python-2.5.2]$ cat /etc/[EMAIL PROTECTED]
CentOS release 4.6 (Final)
[EMAIL PROTECTED] Python-2.5.2]$ uname -a
Linux hathi.ambric.local 2.6.9-67.0.15.ELsmp #1 SMP Thu May 8 10:50:20 EDT 2008 x86_64 x86_64 x86_64 GNU/Linux

(after make test)
test_largefile
test_linuxaudiodev
test_linuxaudiodev skipped -- Use of the `audio' resource not enabled
test_list
(stalls here so I dumped it)
make: *** [test] Quit

(another window, just before I dumped it)
top - 01:36:31 up 4 days, 15:07, 5 users, load average: 2.70, 0.84, 0.32
Tasks: 87 total, 1 running, 86 sleeping, 0 stopped, 0 zombie
Cpu(s): 3.8% us, 9.7% sy, 0.0% ni, 0.0% id, 86.0% wa, 0.0% hi, 0.5% si
Mem: 15639112k total, 15610836k used, 28276k free, 280k buffers
Swap: 24579440k total, 1533172k used, 23046268k free, 57676k cached

  PID USER PR NI %CPU   TIME+ %MEM  VIRT RES  SHR S COMMAND
17190 tony 24  0   17 1:05.49 90.9 32.2g 13g 7200 D python
   69 root 16  0    6 0:12.97  0.0     0   0    0 D kswapd0

___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3107> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3107] memory leak in make test (in "test list"), 2.5.2 not 2.5.1, Linux 64bit
Tony Wallace <[EMAIL PROTECTED]> added the comment: make test not only fails "test_list", it also fails "test_tuple" and "test_userlist". In all cases, the behavior looks the same -- memory expands to > 90% and you kill it. ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3107> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3107] memory leak in make test (in "test list"), 2.5.2 not 2.5.1, Linux 64bit
Tony Wallace <[EMAIL PROTECTED]> added the comment: in test_list.py, the following shows where it hit the memory leak: [EMAIL PROTECTED] Python-2.5.2]$ LD_LIBRARY_PATH=/home/tony/src/Python-2.5.2/Lib/:$LD_LIBRARY_PATH ./python -v Lib/test/test_list.py # installing zipimport hook import zipimport # builtin <<...>> Python 2.5.2 (r252:60911, Jun 14 2008, 01:31:25) [GCC 3.4.6 20060404 (Red Hat 3.4.6-9)] on linux2 Type "help", "copyright", "credits" or "license" for more information. # /home/tony/src/Python-2.5.2/Lib/unittest.pyc matches /home/tony/src/Python-2.5.2/Lib/unittest.py import unittest # precompiled from /home/tony/src/Python-2.5.2/Lib/unittest.pyc dlopen("/home/tony/src/Python-2.5.2/build/lib.linux-x86_64-2.5/time.so", 2); import time # dynamically loaded from /home/tony/src/Python-2.5.2/build/lib.linux-x86_64-2.5/time.so # /home/tony/src/Python-2.5.2/Lib/traceback.pyc matches /home/tony/src/Python-2.5.2/Lib/traceback.py import traceback # precompiled from /home/tony/src/Python-2.5.2/Lib/traceback.pyc import test # directory /home/tony/src/Python-2.5.2/Lib/test # /home/tony/src/Python-2.5.2/Lib/test/__init__.pyc matches /home/tony/src/Python-2.5.2/Lib/test/__init__.py import test # precompiled from /home/tony/src/Python-2.5.2/Lib/test/__init__.pyc # /home/tony/src/Python-2.5.2/Lib/test/test_support.pyc matches /home/tony/src/Python-2.5.2/Lib/test/test_support.py import test.test_support # precompiled from /home/tony/src/Python-2.5.2/Lib/test/test_support.pyc # /home/tony/src/Python-2.5.2/Lib/test/list_tests.pyc has bad mtime import test.list_tests # from /home/tony/src/Python-2.5.2/Lib/test/list_tests.py # wrote /home/tony/src/Python-2.5.2/Lib/test/list_tests.pyc # /home/tony/src/Python-2.5.2/Lib/test/seq_tests.pyc matches /home/tony/src/Python-2.5.2/Lib/test/seq_tests.py import test.seq_tests # precompiled from /home/tony/src/Python-2.5.2/Lib/test/seq_tests.pyc dlopen("/home/tony/src/Python-2.5.2/build/lib.linux-x86_64-2.5/itertools.so", 2); import 
itertools # dynamically loaded from /home/tony/src/Python-2.5.2/build/lib.linux-x86_64-2.5/itertools.so test_addmul (__main__.ListTest) ... ok test_append (__main__.ListTest) ... ok Terminated === in test_tuple.py, the following shows where it hit the memory leak: [EMAIL PROTECTED] Python-2.5.2]$ LD_LIBRARY_PATH=/home/tony/src/Python-2.5.2/Lib/:$LD_LIBRARY_PATH ./python -v Lib/test/test_tuple.py # installing zipimport hook import zipimport # builtin <<...>> Python 2.5.2 (r252:60911, Jun 14 2008, 01:31:25) [GCC 3.4.6 20060404 (Red Hat 3.4.6-9)] on linux2 Type "help", "copyright", "credits" or "license" for more information. # /home/tony/src/Python-2.5.2/Lib/unittest.pyc matches /home/tony/src/Python-2.5.2/Lib/unittest.py import unittest # precompiled from /home/tony/src/Python-2.5.2/Lib/unittest.pyc dlopen("/home/tony/src/Python-2.5.2/build/lib.linux-x86_64-2.5/time.so", 2); import time # dynamically loaded from /home/tony/src/Python-2.5.2/build/lib.linux-x86_64-2.5/time.so # /home/tony/src/Python-2.5.2/Lib/traceback.pyc matches /home/tony/src/Python-2.5.2/Lib/traceback.py import traceback # precompiled from /home/tony/src/Python-2.5.2/Lib/traceback.pyc import test # directory /home/tony/src/Python-2.5.2/Lib/test # /home/tony/src/Python-2.5.2/Lib/test/__init__.pyc matches /home/tony/src/Python-2.5.2/Lib/test/__init__.py import test # precompiled from /home/tony/src/Python-2.5.2/Lib/test/__init__.pyc # /home/tony/src/Python-2.5.2/Lib/test/test_support.pyc matches /home/tony/src/Python-2.5.2/Lib/test/test_support.py import test.test_support # precompiled from /home/tony/src/Python-2.5.2/Lib/test/test_support.pyc # /home/tony/src/Python-2.5.2/Lib/test/seq_tests.pyc matches /home/tony/src/Python-2.5.2/Lib/test/seq_tests.py import test.seq_tests # precompiled from /home/tony/src/Python-2.5.2/Lib/test/seq_tests.pyc dlopen("/home/tony/src/Python-2.5.2/build/lib.linux-x86_64-2.5/itertools.so", 2); import itertools # dynamically loaded from 
/home/tony/src/Python-2.5.2/build/lib.linux-x86_64-2.5/itertools.so test_addmul (__main__.TupleTest) ... ok Terminated === in test_userlist.py, the following shows where it hit the memory leak: [EMAIL PROTECTED] Python-2.5.2]$ LD_LIBRARY_PATH=/home/tony/src/Python-2.5.2/Lib/:$LD_LIBRARY_PATH ./python -v Lib/test/test_userlist.py # installing zipimport hook import zipimport # builtin <<...>> Python 2.5.2 (r252:60911, Jun 14 2008, 01:31:25) [GCC 3.4.6 20060404 (Red Hat 3.4.6-9)] on linux2 Type "help", "copyright", "credits" or "license" for more information. # /home/tony/src/Python-2.5.2/Lib/UserList.pyc matches /home/tony/src/Python-2.5.2/Lib/UserList.py import UserList # precompiled from /home/tony/src/Python-2.5.2/Lib/UserList.pyc # /home/tony/src/Python-2.5.2/Lib/unittest.pyc matches /home/tony
[issue3107] memory leak in make test (in "test list"), 2.5.2 not 2.5.1, Linux 64bit
Tony Wallace <[EMAIL PROTECTED]> added the comment:

Tried again with ./configure --prefix=/home/tony/root/usr/local/python-2.5.2 --with-tcl --disable-shared

No change. But I noticed this when it recompiled. Maybe it is related.

gcc -pthread -c -fno-strict-aliasing -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -IInclude -I./Include -DPy_BUILD_CORE -o Objects/obmalloc.o Objects/obmalloc.c
Objects/obmalloc.c: In function `new_arena':
Objects/obmalloc.c:529: warning: comparison is always false due to limited range of data type

(code fragment)

/* Double the number of arena objects on each allocation.
 * Note that it's possible for `numarenas` to overflow.
 */
numarenas = maxarenas ? maxarenas << 1 : INITIAL_ARENA_OBJECTS;
if (numarenas <= maxarenas)
    return NULL;                                /* overflow */
if (numarenas > PY_SIZE_MAX / sizeof(*arenas))  /* line 529 here */
    return NULL;                                /* overflow */
nbytes = numarenas * sizeof(*arenas);
arenaobj = (struct arena_object *)realloc(arenas, nbytes);
if (arenaobj == NULL)
    return NULL;
arenas = arenaobj;

___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3107> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3107] memory leak in make test (in "test list"), 2.5.2 not 2.5.1, Linux 64bit
Tony Wallace <[EMAIL PROTECTED]> added the comment: > Objects/obmalloc.c:529: warning: > comparison is always false due to limited range of data type This compile complaint was definitely introduced in 2.5.2 by source changes from 2.5.1. So, there's a minor problem that could be fixed, anyway. However, replacing the 2.5.2 version of obmalloc.c with the 2.5.1 version and rebuilding (with incremental make this time) did NOT help the memory leak in test_list - I still get it. ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3107> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3107] test_list uses unreasonable amounts of memory on 64-bit Linux
Tony Wallace <[EMAIL PROTECTED]> added the comment:

It worked. I took a patch of r65334, as

svn diff -c 65334 "http://svn.python.org/projects/python/branches/release25-maint"

and applied that patch ONLY to a clean release 2.5.2 source, ignoring the patch failure in Misc/NEWS. Built it all over again the same as before (that is,

CentOS release 4.4 (Final)
Linux manfred 2.6.9-42.0.8.ELsmp #1 SMP Tue Jan 30 12:18:01 EST 2007 x86_64 x86_64 x86_64 GNU/Linux
./configure --prefix=/home/tools/sqa/amd64_rhel4/Python-2.5.2 --enable-shared --build=x86_64-redhat-linux --enable-static

). Now, make test runs all the way through with no difficulty. Thanks, everyone. I consider this closed. Maintainer, please dispose of this bug as you think appropriate.

___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3107> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue2987] RFC2732 support for urlparse (e.g. http://[::1]:80/)
Tony Locke added the comment: I've created a patch for parse.py against the py3k branch, and I've also included ndim's test cases in that patch file. When returning the host name of an IPv6 literal, I don't include the surrounding '[' and ']'. For example, parsing http://[::1]:5432/foo/ gives the host name '::1'. -- nosy: +tlocke versions: +Python 3.2 Added file: http://bugs.python.org/file16886/parse.py.patch ___ Python tracker <http://bugs.python.org/issue2987> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue2987] RFC2732 support for urlparse (IPv6 addresses)
Changes by Tony Locke : Removed file: http://bugs.python.org/file16886/parse.py.patch ___ Python tracker <http://bugs.python.org/issue2987> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue2987] RFC2732 support for urlparse (IPv6 addresses)
Tony Locke added the comment: Regarding the RFC list issue, I've posted a new patch with a new RFC list that combines ndim's list and the comments from #5650. Pitrou argues that http://dead:beef::]/foo/ should fail because it's a malformed URL. My response would be that the parse() function has historically assumed that a URL is well formed, and so this change to accommodate IPv6 should continue to assume the URL is well formed. I'd say that a separate bug should be raised if it's thought that parse() should be changed to check that any URL is well-formed. -- Added file: http://bugs.python.org/file16888/parse.py.patch ___ Python tracker <http://bugs.python.org/issue2987> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
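The bracket-stripping behaviour described in these comments is how urllib.parse works in current Python 3, so it can be checked directly (a quick illustration, not part of the original patch):

```python
from urllib.parse import urlsplit

# RFC 2732-style IPv6 literal: the brackets delimit the host inside the
# netloc, but .hostname returns the address without them, as the patch
# author describes for http://[::1]:5432/foo/.
parts = urlsplit("http://[::1]:5432/foo/")
print(parts.netloc)    # '[::1]:5432'
print(parts.hostname)  # '::1'
print(parts.port)      # 5432
print(parts.path)      # '/foo/'
```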
[issue795081] email.Message param parsing problem II
Tony Nelson added the comment: If I understand RFC2822 3.2.2. Quoted characters (heh), unquoting must be done in one pass, so the current replace().replace() is wrong. It will change '\\"' to '"', but it should become '\"' when unquoted. This seems to work: re.sub(r'\\(.)',r'\1',s) I haven't encountered a problem with this; I just came across it while looking at the file Utils.py (Python 2.4, but unchanged in trunk). I will submit a new bug if desired. -- nosy: +tony_nelson Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue795081> ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
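The difference between one-pass and two-pass unquoting can be shown concretely. This is an illustrative sketch (the helper names are mine, not the email package's):

```python
import re

def unquote_one_pass(s):
    # One pass over the string: each backslash escapes exactly the next
    # character, as RFC 2822 quoted-pairs require.
    return re.sub(r"\\(.)", r"\1", s)

def unquote_two_pass(s):
    # The replace().replace() approach under discussion: the second pass
    # can re-interpret a backslash that the first pass produced.
    return s.replace("\\\\", "\\").replace('\\"', '"')

s = '\\\\"'                 # the three characters:  \ \ "
print(unquote_one_pass(s))  # \"  (escaped backslash, then a bare quote)
print(unquote_two_pass(s))  # "   (the backslash is lost, as the comment says)
```

This reproduces the report exactly: the sequence should unquote to `\"`, but two sequential replaces collapse it to `"`.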
[issue1161031] Neverending warnings from asyncore
Tony Meyer added the comment: None of my arguments have really changed since 2.4. I still believe that this is a poor choice of default behaviour (and if it is meant to be overridden to be useable, then 'pass' or 'raise NotYetImplementedError' would be a better choice). However, my impression is that nothing I say will convince you of that, so I'll give up on that. It would have been interesting to hear from Andrew the reason for the code. However, this is definitely a documentation bug (as outlined previously), and so should be fixed, not closed. I can provide a patch if it stands a chance of being accepted. ___ Python tracker <http://bugs.python.org/issue1161031> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue37424] subprocess.run timeout does not function if shell=True and capture_output=True
Tony Cappellini added the comment: I'm still seeing hangs with subprocess.run() in Python 3.7.4 Unfortunately, it involves talking to an NVME SSD on Linux, so I cannot easily submit code to duplicate it. -- nosy: +cappy ___ Python tracker <https://bugs.python.org/issue37424> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue37424] subprocess.run timeout does not function if shell=True and capture_output=True
Tony Cappellini added the comment: Using Python 3.7.4, I'm calling subprocess.run() with the following arguments. .run() still hangs even though a timeout is being passed in. subprocess.run(cmd_list, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=False, timeout=timeout_val, check=True, universal_newlines=True) cmd_list contains the name of the bash script below, which is ./rescan.sh -- #!/usr/bin/bash echo Rescanning system for PCIe devices echo "Rescan device" echo 1 > /sys/bus/pci/rescan sleep 5 if [ `lspci | grep -ic "Non-Volatile memory controller"` -gt 0 ] then echo "Device Detected after Rescan" else echo "Device NOT detected after Rescan" exit 1 fi echo Rescan Done This script is scanning for NVME SSDs, so duplicating the issue is not as straightforward as submitting a python script. The OS is CentOS 7. uname -a shows 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux I know the Kernel is old, but we have a restriction against updating it. -- ___ Python tracker <https://bugs.python.org/issue37424> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
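One known way a timeout can appear to "not work" is when the child spawns its own children that inherit the stdout pipe: after the timeout fires and the direct child is killed, reading the pipe can still block while a grandchild holds it open. A POSIX-only workaround sketch (the function name and structure are mine, not a stdlib recipe) is to run the child in its own session so the whole process group can be killed:

```python
import os
import signal
import subprocess

def run_with_hard_timeout(cmd_list, timeout_val):
    # start_new_session=True puts the child (and any grandchildren it
    # spawns) into a new process group, so on timeout we can SIGKILL the
    # entire group instead of only the direct child.
    proc = subprocess.Popen(cmd_list,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT,
                            universal_newlines=True,
                            start_new_session=True)
    try:
        out, _ = proc.communicate(timeout=timeout_val)
    except subprocess.TimeoutExpired:
        os.killpg(os.getpgid(proc.pid), signal.SIGKILL)
        out, _ = proc.communicate()  # now guaranteed to drain and return
    return proc.returncode, out
```

Whether this matches the NVMe-rescan hang above is an assumption; it addresses the pipe-held-by-grandchild failure mode, not kernel-level blocking in the rescan itself.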
[issue45916] documentation link error
New submission from Tony Zhou :

In the 3.10.0 Documentation » The Python Tutorial » 15. Floating Point Arithmetic: Issues and Limitations, the link "The Perils of Floating Point" brings the user to https://www.hmbags.tw/. I don't think this is right. Please check.

-- messages: 407200 nosy: cookiez6 priority: normal severity: normal status: open title: documentation link error type: security ___ Python tracker <https://bugs.python.org/issue45916> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue45916] documentation link error
Tony Zhou added the comment: OK, I see. I found the PDF, thank you for that anyway. -- ___ Python tracker <https://bugs.python.org/issue45916> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue39258] json serialiser errors with numpy int64
Change by Tony Hirst : -- components: Library (Lib) nosy: Tony Hirst priority: normal severity: normal status: open title: json serialiser errors with numpy int64 versions: Python 3.7 ___ Python tracker <https://bugs.python.org/issue39258> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue39258] json serialiser errors with numpy int64
New submission from Tony Hirst :

import json
import numpy as np
json.dumps( {'int64': np.int64(1)})

TypeError: Object of type int64 is not JSON serializable

---
TypeError                                 Traceback (most recent call last)
in
> 1 json.dumps( {'int64': np.int64(1)})

/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/lib/python3.7/json/__init__.py in dumps(obj, skipkeys, ensure_ascii, check_circular, allow_nan, cls, indent, separators, default, sort_keys, **kw)
    229         cls is None and indent is None and separators is None and
    230         default is None and not sort_keys and not kw):
--> 231         return _default_encoder.encode(obj)
    232     if cls is None:
    233         cls = JSONEncoder

/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/lib/python3.7/json/encoder.py in encode(self, o)
    197         # exceptions aren't as detailed. The list call should be roughly
    198         # equivalent to the PySequence_Fast that ''.join() would do.
--> 199         chunks = self.iterencode(o, _one_shot=True)
    200         if not isinstance(chunks, (list, tuple)):
    201             chunks = list(chunks)

/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/lib/python3.7/json/encoder.py in iterencode(self, o, _one_shot)
    255             self.key_separator, self.item_separator, self.sort_keys,
    256             self.skipkeys, _one_shot)
--> 257         return _iterencode(o, 0)
    258
    259     def _make_iterencode(markers, _default, _encoder, _indent, _floatstr,

/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/lib/python3.7/json/encoder.py in default(self, o)
    177
    178         """
--> 179         raise TypeError(f'Object of type {o.__class__.__name__} '
    180                         f'is not JSON serializable')
    181

TypeError: Object of type int64 is not JSON serializable

-- ___ Python tracker <https://bugs.python.org/issue39258> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue39258] json serialiser errors with numpy int64
Tony Hirst added the comment: Apols - this is probably strictly a numpy issue. See: https://github.com/numpy/numpy/issues/12481 -- ___ Python tracker <https://bugs.python.org/issue39258> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue39258] json serialiser errors with numpy int64
Tony Hirst added the comment: Previously posted issue: https://bugs.python.org/issue22107 -- ___ Python tracker <https://bugs.python.org/issue39258> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue39258] json serialiser errors with numpy int64
Tony Hirst added the comment: Argh: previous was incorrect associated issue: correct issue: https://bugs.python.org/issue24313 -- ___ Python tracker <https://bugs.python.org/issue39258> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
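Since the thread concludes this is a NumPy-side issue, the usual application-level workaround is the `default=` hook of json.dumps. A sketch (the function name is mine; it duck-types on the `.item()` method that NumPy scalar types expose, rather than importing NumPy):

```python
import json

def np_default(o):
    # NumPy scalars (np.int64, np.float64, ...) provide .item(), which
    # returns the nearest native Python value, so a generic 'default'
    # hook can convert them without a hard NumPy dependency.
    if hasattr(o, "item"):
        return o.item()
    raise TypeError(f"Object of type {o.__class__.__name__} "
                    f"is not JSON serializable")

# With NumPy installed this reads:
#     json.dumps({'int64': np.int64(1)}, default=np_default)
# A tiny stand-in keeps the sketch runnable without NumPy:
class FakeInt64:
    def __init__(self, value):
        self.value = value
    def item(self):
        return self.value

print(json.dumps({"int64": FakeInt64(1)}, default=np_default))  # {"int64": 1}
```

An `isinstance(o, np.generic)` check is the stricter alternative when NumPy is a guaranteed dependency.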
[issue43130] Should this construct throw an exception?
New submission from Tony Ladd :

The expression "1 and 2" evaluates to 2. Actually, for most combinations of data types it returns the second object. Of course it's a senseless construction (a beginning student made it), but why no exception?

-- components: Interpreter Core messages: 386496 nosy: tladd priority: normal severity: normal status: open title: Should this construct throw an exception? type: behavior ___ Python tracker <https://bugs.python.org/issue43130> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue43130] Should this construct throw an exception?
Tony Ladd added the comment: Dennis Thanks for the explanation. Sorry to post a fake report. Python is relentlessly logical but sometimes confusing. -- ___ Python tracker <https://bugs.python.org/issue43130> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
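For reference, the explanation is the documented short-circuit semantics: `and` and `or` return one of their operands rather than a bool, so no exception is warranted. Concretely:

```python
# "x and y" evaluates y only if x is truthy; the expression's value is the
# last operand evaluated.  "x or y" mirrors this for truthy x.
print(1 and 2)           # 2 -> what the student saw
print(0 and 2)           # 0 -> short-circuits on the first falsy operand
print(1 or 2)            # 1 -> short-circuits on the first truthy operand
print([] or "fallback")  # fallback -> common default-value idiom
```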
[issue43160] argparse: add extend_const action
New submission from Tony Lykke : I submitted this to the python-ideas mailing list early last year: https://mail.python.org/archives/list/python-id...@python.org/thread/7ZHY7HFFQHIX3YWWCIJTNB4DRG2NQDOV/. Recently I had some time to implement it (it actually turned out to be pretty trivial), so thought I'd put forward a PR. Here's the summary from the mailing list submission: I have found myself a few times in a position where I have a repeated argument that uses the append action, along with some convenience arguments that append a specific const to that same dest (eg: --filter-x being made equivalent to --filter x via append_const). This is particularly useful in cli apps that expose some kind of powerful-but-verbose filtering capability, while also providing shorter aliases for common invocations. I'm sure there are other use cases, but this is the one I'm most familiar with. The natural extension to this filtering idea are convenience args that set two const values (eg: --filter x --filter y being equivalent to --filter-x-y), but there is no extend_const action to enable this. While this is possible (and rather straight forward) to add via a custom action, I feel like this should be a built-in action instead. append has append_const, it seems intuitive and reasonable to expect extend to have extend_const too (my anecdotal experience the first time I came across this need was that I simply tried using extend_const without checking the docs, assuming it already existed). Here's an excerpt from the docs I drafted for this addition that hopefully convey the intent and use case clearly. +* ``'extend_const'`` - This stores a list, and extends each argument value to the list. + The ``'extend_const'`` action is typically useful when you want to provide an alias + that is the combination of multiple other arguments. 
For example::

+
+    >>> parser = argparse.ArgumentParser()
+    >>> parser.add_argument('--str', dest='types', action='append_const', const=str)
+    >>> parser.add_argument('--int', dest='types', action='append_const', const=int)
+    >>> parser.add_argument('--both', dest='types', action='extend_const', const=(str, int))
+    >>> parser.parse_args('--str --int'.split())
+    Namespace(types=[<class 'str'>, <class 'int'>])
+    >>> parser.parse_args('--both'.split())
+    Namespace(types=[<class 'str'>, <class 'int'>])

-- components: Library (Lib) messages: 386614 nosy: rhettinger, roganartu priority: normal severity: normal status: open title: argparse: add extend_const action type: enhancement versions: Python 3.10 ___ Python tracker <https://bugs.python.org/issue43160> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue43160] argparse: add extend_const action
Change by Tony Lykke : -- keywords: +patch pull_requests: +23269 stage: -> patch review pull_request: https://github.com/python/cpython/pull/24478 ___ Python tracker <https://bugs.python.org/issue43160> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue43160] argparse: add extend_const action
Tony Lykke added the comment:

Perhaps the example I added to the docs isn't clear enough and should be changed, because you're right, that specific one can be served by store_const. Turns out coming up with examples that are minimal but not too contrived is hard! Let me try again with a longer example that hopefully shows more clearly how the existing actions' behaviours differ from my patch.

parser = argparse.ArgumentParser()
parser.add_argument("--foo", action="append", default=[])
parser.add_argument("--append", action="append_const", dest="foo", const=["a", "b"])
parser.add_argument("--store", action="store_const", dest="foo", const=["a", "b"])

When run on master the following behaviour is observed:

--foo a --foo b --foo c
Namespace(foo=['a', 'b', 'c'])
--foo c --append
Namespace(foo=['c', ['a', 'b']])
--foo c --store
Namespace(foo=['a', 'b'])
--store --foo a
Namespace(foo=['a', 'b', 'c'])

If we then add the following:

parser.add_argument("--extend", action="extend_const", dest="foo", const=["a", "b"])

and then run it with my patch the following can be observed:

--foo c --extend
Namespace(foo=['c', 'a', 'b'])
--extend --foo c
Namespace(foo=['a', 'b', 'c'])

store_const is actually a pretty close fit, but the way it makes order significant (specifically in that it will silently drop prev values) seems like it'd be rather surprising to users and makes it a big enough footgun for this use case that I don't think it's a satisfactory alternative.

> I suspect users of your addition will get a surprise if they aren't careful to provide a list or tuple 'const'

I did consider that, but I don't think they'd get any more of a surprise than for doing the same with list.extend vs list.append.

-- ___ Python tracker <https://bugs.python.org/issue43160> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue43160] argparse: add extend_const action
Tony Lykke added the comment: Sorry, there's a typo in my last comment. --store --foo a Namespace(foo=['a', 'b', 'c']) from the first set of examples should have been --store --foo c Namespace(foo=['a', 'b', 'c']) -- ___ Python tracker <https://bugs.python.org/issue43160> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
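As the original proposal notes, the behaviour is straightforward to get today with a custom action. A sketch approximating the proposed extend_const (the class name and filter-style options are mine, not the patch's):

```python
import argparse

class ExtendConstAction(argparse.Action):
    """Append every element of `const` to the dest list (an approximation
    of the proposed 'extend_const' action, written as a user-level Action)."""

    def __init__(self, option_strings, dest, const, default=None, **kwargs):
        # nargs=0: the option takes no argument on the command line.
        super().__init__(option_strings, dest, nargs=0, const=const,
                         default=default, **kwargs)

    def __call__(self, parser, namespace, values, option_string=None):
        items = list(getattr(namespace, self.dest, None) or [])
        items.extend(self.const)  # extend, not append: no nested list
        setattr(namespace, self.dest, items)

parser = argparse.ArgumentParser()
parser.add_argument("--filter", dest="filters", action="append", default=[])
parser.add_argument("--filter-x-y", dest="filters", action=ExtendConstAction,
                    const=["x", "y"])

args = parser.parse_args(["--filter", "z", "--filter-x-y"])
print(args.filters)  # ['z', 'x', 'y']
```

Unlike store_const, previously accumulated values survive, and unlike append_const the const's elements land in the list individually rather than as one nested list.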
[issue38628] Issue with ctypes in AIX
Tony Reix added the comment:

On Fedora32/PPC64LE (5.7.9-200.fc32.ppc64le), with a little change:
    libc = CDLL('/usr/lib64/libc.so.6')
I get the correct answer:
    b'def' b'def' b'def'
# python3 --version
Python 3.8.3
libffi: 3.1-24

On Fedora32/x86_64 (5.7.9-200.fc32.x86_64), with a little change:
    libc = CDLL('/usr/lib64/libc-2.31.so')
that crashes:
    b'def'
    Segmentation fault (core dumped)
# python3 --version
Python 3.8.3
libffi: 3.1-24

AIX: libffi-3.2.1

On AIX 7.2, with Python 3.8.5 compiled with XLC v13, in 64bit:
    b'def' b'def' None
On AIX 7.2, with Python 3.8.5 compiled with GCC 8.4, in 64bit:
    b'def' b'def' None
On AIX 7.2, with Python 3.8.5 compiled with XLC v13, in 32bit: ( libc = CDLL('libc.a(shr.o)') )
    b'def' b'def' b'def'
On AIX 7.2, with Python 3.8.5 compiled with GCC 8.4, in 32bit:
    b'def' b'def' b'def'

Preliminary conclusions:
- this is a 64bit issue on AIX and it is independent of the compiler
- it is worse on Fedora/x86_64
- it works perfectly on Fedora/PPC64LE

What a mess.

-- nosy: +T.Rex ___ Python tracker <https://bugs.python.org/issue38628> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue38628] Issue with ctypes in AIX
Tony Reix added the comment:

Fedora32/x86_64

[root@destiny10 tmp]# gdb /usr/bin/python3.8 core
...
Core was generated by `python3 ./Pb.py'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0 0x7f898a02a1d8 in __memchr_sse2 () from /lib64/libc.so.6
Missing separate debuginfos, use: dnf debuginfo-install python3-3.8.3-2.fc32.x86_64
(gdb) where
#0 0x7f898a02a1d8 in __memchr_sse2 () from /lib64/libc.so.6
#1 0x7f898982caf0 in ffi_call_unix64 () from /lib64/libffi.so.6
#2 0x7f898982c2ab in ffi_call () from /lib64/libffi.so.6
#3 0x7f8989851ef1 in _ctypes_callproc.cold () from /usr/lib64/python3.8/lib-dynload/_ctypes.cpython-38-x86_64-linux-gnu.so
#4 0x7f898985ba2f in PyCFuncPtr_call () from /usr/lib64/python3.8/lib-dynload/_ctypes.cpython-38-x86_64-linux-gnu.so
#5 0x7f8989d6c7a1 in _PyObject_MakeTpCall () from /lib64/libpython3.8.so.1.0
#6 0x7f8989d69111 in _PyEval_EvalFrameDefault () from /lib64/libpython3.8.so.1.0
#7 0x7f8989d62ec4 in _PyEval_EvalCodeWithName () from /lib64/libpython3.8.so.1.0
#8 0x7f8989dde109 in PyEval_EvalCodeEx () from /lib64/libpython3.8.so.1.0
#9 0x7f8989dde0cb in PyEval_EvalCode () from /lib64/libpython3.8.so.1.0
#10 0x7f8989dff028 in run_eval_code_obj () from /lib64/libpython3.8.so.1.0
#11 0x7f8989dfe763 in run_mod () from /lib64/libpython3.8.so.1.0
#12 0x7f8989cea81b in PyRun_FileExFlags () from /lib64/libpython3.8.so.1.0
#13 0x7f8989cea19d in PyRun_SimpleFileExFlags () from /lib64/libpython3.8.so.1.0
#14 0x7f8989ce153c in Py_RunMain.cold () from /lib64/libpython3.8.so.1.0
#15 0x7f8989dd1bf9 in Py_BytesMain () from /lib64/libpython3.8.so.1.0
#16 0x7f8989fb7042 in __libc_start_main () from /lib64/libc.so.6
#17 0x557a1f3c407e in _start ()

-- ___ Python tracker <https://bugs.python.org/issue38628> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue38628] Issue with ctypes in AIX
Tony Reix added the comment: On AIX:

root@castor4## gdb /opt/freeware/bin/python3
...
(gdb) run -m pdb Pb.py
...
(Pdb) n
b'def'
> /home2/freeware/src/packages/BUILD/Python-3.8.5/32bit/Pb.py(35)()
-> print(
(Pdb) n
> /home2/freeware/src/packages/BUILD/Python-3.8.5/32bit/Pb.py(36)()
-> CFUNCTYPE(c_char_p, MemchrArgsHack2,
(Pdb)
Thread 2 received signal SIGINT, Interrupt.
[Switching to Thread 1]
0x0916426c in __fd_select () from /usr/lib/libc.a(shr_64.o)
(gdb) b ffi_call
Breakpoint 1 at 0x1217918
(gdb) c
...
(Pdb) n
Thread 2 hit Breakpoint 1, 0x090001217918 in ffi_call () from /opt/freeware/lib/libffi.a(libffi.so.6)
(gdb) where
#0 0x090001217918 in ffi_call () from /opt/freeware/lib/libffi.a(libffi.so.6)
#1 0x090001217780 in ffi_prep_cif_machdep () from /opt/freeware/lib/libffi.a(libffi.so.6)
#2 0x090001216fb8 in ffi_prep_cif_var () from /opt/freeware/lib/libffi.a(libffi.so.6)
..
(gdb) b memchr
Breakpoint 2 at 0x91b0d60
(gdb) c
Continuing.
Thread 2 hit Breakpoint 2, 0x091b0d60 in memchr () from /usr/lib/libc.a(shr_64.o)
(gdb) i register
r0  0x91b0d60648518346343124320
r1  0xfffc8d01152921504606832848
r2  0x9001000a008e8b8648535941212334264
r3  0xa3669e0720575940382845408
r4  0x64    100
r5  0x0     0
r6  0x9001000a04ee730648535941216921392
r7  0x0     0
...
(gdb) x/s $r3
0xa3669e0: "abcdef"

So:
- the string is passed in r3.
- r4 contains "d" = 0x64 = 100
- but the size 7 is missing

Anyway, it seems that ffi does not pass the pointer, but values. However, the length 7 is missing: not in r5, and nowhere in the other registers.
[issue38628] Issue with ctypes in AIX
Tony Reix added the comment: On Fedora/x86_64, in order to get the core, one must do:

coredumpctl -o /tmp/core dump /usr/bin/python3.8
[issue38628] Issue with ctypes in AIX
Tony Reix added the comment: On Fedora/PPC64LE, where it is OK, the same debug with gdb gives:

(gdb) where
#0 0x77df03b0 in __memchr_power8 () from /lib64/libc.so.6
#1 0x7fffea167680 in ?? () from /lib64/libffi.so.6
#2 0x7fffea166284 in ffi_call () from /lib64/libffi.so.6
#3 0x7fffea1a7fdc in _ctypes_callproc () from /usr/lib64/python3.8/lib-dynload/_ctypes.cpython-38-ppc64le-linux-gnu.so
..
(gdb) i register
r0  0x7fffea167614    140737120728596
r1  0x7fffc490        140737488340112
r2  0x7fffea187f00    140737120861952
r3  0x7fffea33a140    140737122640192
r4  0x6464            25700
r5  0x7               7
r6  0x0               0
r7  0x7fffea33a147    140737122640199
r8  0x7fffea33a140    140737122640192
(gdb) x/s 0x7fffea33a140
0x7fffea33a140: "abcdef"

r3: the string
r4: 0x6464 : "d" ??
r5: 7 : the length of the string !!!
[issue38628] Issue with ctypes in AIX
Tony Reix added the comment: On AIX in 32bit, we have:

Thread 2 hit Breakpoint 2, 0xd01407e0 in memchr () from /usr/lib/libc.a(shr.o)
(gdb) where
#0 0xd01407e0 in memchr () from /usr/lib/libc.a(shr.o)
#1 0xd438f480 in ffi_call_AIX () from /opt/freeware/lib/libffi.a(libffi.so.6)
#2 0xd438effc in ffi_call () from /opt/freeware/lib/libffi.a(libffi.so.6)
(gdb) i register
r0  0xd01407e0    3490973664
r1  0x2ff20f80    804392832
r2  0xf07a3cc0    4034542784
r3  0xb024c558    2955199832
r4  0x64          100
r5  0x7           7
r6  0x0           0
...
(gdb) x/s 0xb024c558
0xb024c558: "abcdef"

r5 is OK.
[issue38628] Issue with ctypes in AIX
Tony Reix added the comment: AIX: difference between 32bit and 64bit. After the second print, the stack is:

32bit:
#0 0xd01407e0 in memchr () from /usr/lib/libc.a(shr.o)
#1 0xd438f480 in ffi_call_AIX () from /opt/freeware/lib/libffi.a(libffi.so.6)
#2 0xd438effc in ffi_call () from /opt/freeware/lib/libffi.a(libffi.so.6)
#3 0xd14979bc in ?? ()
#4 0xd148995c in ?? ()
#5 0xd20fd5d8 in _PyObject_MakeTpCall () from /opt/freeware/lib/libpython3.8.so

64bit:
#0 0x091b0d60 in memchr () from /usr/lib/libc.a(shr_64.o)
#1 0x090001217f00 in ffi_closure_ASM () from /opt/freeware/lib/libffi.a(libffi.so.6)
#2 0x090001217aac in ffi_prep_closure_loc () from /opt/freeware/lib/libffi.a(libffi.so.6)
#3 0x09d30900 in ?? ()
#4 0x09d22b6c in ?? ()
#5 0x09ebbc18 in _PyObject_MakeTpCall () from /opt/freeware/lib64/libpython3.8.so

So the execution does not go through the same ffi routines in 32bit and in 64bit. Bug? It would be interesting to do the same with Python3 and libffi built with -O0 -g, maybe.
[issue38628] Issue with ctypes in AIX
Tony Reix added the comment:

# pwd
/opt/freeware/src/packages/BUILD/libffi-3.2.1
# grep -R ffi_closure_ASM *
powerpc-ibm-aix7.2.0.0/.libs/libffi.exp: ffi_closure_ASM
powerpc-ibm-aix7.2.0.0/include/ffitarget.h:  void * code_pointer;  /* Pointer to ffi_closure_ASM */
src/powerpc/aix_closure.S:.globl ffi_closure_ASM
src/powerpc/darwin_closure.S:.globl _ffi_closure_ASM
src/powerpc/ffi_darwin.c:  extern void ffi_closure_ASM (void);
src/powerpc/ffi_darwin.c:  *((unsigned long *)&tramp[2]) = (unsigned long) ffi_closure_ASM; /* function */
src/powerpc/ffitarget.h:  void * code_pointer;  /* Pointer to ffi_closure_ASM */
# grep -R ffi_call_AIX *
powerpc-ibm-aix7.2.0.0/.libs/libffi.exp: ffi_call_AIX
src/powerpc/aix.S:.globl ffi_call_AIX
src/powerpc/ffi_darwin.c:  extern void ffi_call_AIX(extended_cif *, long, unsigned, unsigned *,

In 64bit, I see that ffi_darwin.c is compiled and used for building libffi.so.6. Same in 32bit. The code of file src/powerpc/ffi_darwin.c seems able to handle both FFI_AIX and FFI_DARWIN, dynamically, based on cif->abi. The code looks VERY complex! The hypothesis is that the 64bit code has a bug vs the 32bit version.
[issue38628] Issue with ctypes in AIX
Tony Reix added the comment: On AIX 7.2, with libffi compiled with -O0 -g, I have:

1) Call to memchr thru memchr_args_hack

#0 0x091b0d60 in memchr () from /usr/lib/libc.a(shr_64.o)
#1 0x0900058487a0 in ffi_call_DARWIN () from /opt/freeware/lib/libffi.a(libffi.so.6)
#2 0x090005847eec in ffi_call (cif=0xfff, fn=0xca90, rvalue=0xfff, avalue=0xca80) at ../src/powerpc/ffi_darwin.c:31
#3 0x0900058f9900 in ?? ()
#4 0x0900058ebb6c in ?? ()
#5 0x09000109fc18 in _PyObject_MakeTpCall () from /opt/freeware/lib64/libpython3.8.so

r3  0xa3659e0720575940382841312
r4  0x64    100
r5  0x7     7
(gdb) x/s $r3
0xa3659e0: "abcdef"

2) Call to memchr thru memchr_args_hack2

#0 0x091b0d60 in memchr () from /usr/lib/libc.a(shr_64.o)
#1 0x0900058487a0 in ffi_call_DARWIN () from /opt/freeware/lib/libffi.a(libffi.so.6)
#2 0x090005847eec in ffi_call (cif=0xfff, fn=0xca90, rvalue=0xfff, avalue=0xca80) at ../src/powerpc/ffi_darwin.c:31
#3 0x0900058f9900 in ?? ()
#4 0x0900058ebb6c in ?? ()
#5 0x09000109fc18 in _PyObject_MakeTpCall () from /opt/freeware/lib64/libpython3.8.so

r3  0xa3659e0720575940382841312
r4  0x64    100
r5  0x0     0

So it looks like, when libffi is compiled not with -O but with -O0 -g, ffi_call_DARWIN() is called in 64bit in both cases (memchr_args_hack and memchr_args_hack2). However, as seen previously, that was not the case with libffi built with -O. Moreover, we have in the source code:

switch (cif->abi)
  {
  case FFI_AIX:
    ffi_call_AIX(&ecif, -(long)cif->bytes, cif->flags, ecif.rvalue, fn, FFI_FN(ffi_prep_args));
    break;
  case FFI_DARWIN:
    ffi_call_DARWIN(&ecif, -(long)cif->bytes, cif->flags, ecif.rvalue, fn, FFI_FN(ffi_prep_args), cif->rtype);

Why is ffi_call_DARWIN called instead of ffi_call_AIX? Hummm. Will rebuild libffi and python both with gcc -O0 -g -gdwarf and look at the details.
[issue38628] Issue with ctypes in AIX
Tony Reix added the comment: After adding traces and after rebuilding Python and libffi with -O0 -g -gdwarf, it appears that, still in 64bit, the bug is still there, but ffi_call_AIX is now called instead of ffi_call_DARWIN from the ffi_call() routine of ../src/powerpc/ffi_darwin.c (lines 915...). ???

# ./Pb.py
TONY: libffi: src/powerpc/ffi_darwin.c : FFI_AIX
TONY: libffi: cif->abi: 1 -(long)cif->bytes : -144 cif->flags : 8 ecif.rvalue : fffd1f0 fn: 9001000a0082640 FFI_FN(ffi_prep_args) : 9001000a0483be8
b'def'
TONY: libffi: src/powerpc/ffi_darwin.c : FFI_AIX
TONY: libffi: cif->abi: 1 -(long)cif->bytes : -144 cif->flags : 8 ecif.rvalue : fffd220 fn: 9001000a0082640 FFI_FN(ffi_prep_args) : 9001000a0483be8
b'def'
TONY: libffi: src/powerpc/ffi_darwin.c : FFI_AIX
TONY: libffi: cif->abi: 1 -(long)cif->bytes : -144 cif->flags : 8 ecif.rvalue : fffd220 fn: 9001000a0082640 FFI_FN(ffi_prep_args) : 9001000a0483be8
None

In 32bit with the same build environment, a different code path is run, since the traces are not printed. Thus, 32bit and 64bit are managed very differently.
[issue38628] Issue with ctypes in AIX
Tony Reix added the comment: Fedora32/x86_64: Python v3.8.5 has been built. The issue is still there, but different in debug and optimized modes. Thus, the change done in https://bugs.python.org/issue22273 did not fix this issue.

./Pb-3.8.5-debug.py :
#!/opt/freeware/src/packages/BUILD/Python-3.8.5/build/debug/python
...
./Pb-3.8.5-optimized.py :
#!/opt/freeware/src/packages/BUILD/Python-3.8.5/build/optimized/python

BUILD=debug
export LD_LIBRARY_PATH=/opt/freeware/src/packages/BUILD/Python-3.8.5/build/debug:/usr/lib64:/usr/lib
export PYTHONPATH=/opt/freeware/src/packages/BUILD/Python-3.8.5/build/debug/Modules
./Pb-3.8.5-debug.py
b'def'
None
None

BUILD=optimized
export LD_LIBRARY_PATH=/opt/freeware/src/packages/BUILD/Python-3.8.5/build/optimized:/usr/lib64:/usr/lib
export PYTHONPATH=/opt/freeware/src/packages/BUILD/Python-3.8.5/build/optimized/Modules
+ ./Pb-3.8.5-optimized.py
b'def'
Pb-3.8.5.sh: line 6: 103569 Segmentation fault (core dumped) ./Pb-3.8.5-$BUILD.py
[issue38628] Issue with ctypes in AIX
Change by Tony Reix : -- versions: +Python 3.8 -Python 3.7
[issue38628] Issue with ctypes in AIX
Tony Reix added the comment: Fedora32/x86_64 : Python v3.8.5 : optimized : uint type. If, instead of the ulong type, the Pb.py program makes use of uint, the issue is different: see below. This means that the issue depends on the length of the data.

BUILD=optimized
TYPE=int
export LD_LIBRARY_PATH=/opt/freeware/src/packages/BUILD/Python-3.8.5/build/optimized:/usr/lib64:/usr/lib
export PYTHONPATH=/opt/freeware/src/packages/BUILD/Python-3.8.5/build/optimized/Modules
./Pb-3.8.5-int-optimized.py
b'def'
None
None

# cat ./Pb-3.8.5-int-optimized.py
#!/opt/freeware/src/packages/BUILD/Python-3.8.5/build/optimized/python
# #!/opt/freeware/src/packages/BUILD/Python-3.8.5/python
# #!/usr/bin/env python3

from ctypes import *

libc = CDLL('/usr/lib64/libc-2.31.so')

class MemchrArgsHack(Structure):
    _fields_ = [("s", c_char_p), ("c", c_uint), ("n", c_uint)]

memchr_args_hack = MemchrArgsHack()
memchr_args_hack.s = b"abcdef"
memchr_args_hack.c = ord('d')
memchr_args_hack.n = 7

class MemchrArgsHack2(Structure):
    _fields_ = [("s", c_char_p), ("c_n", c_uint * 2)]

memchr_args_hack2 = MemchrArgsHack2()
memchr_args_hack2.s = b"abcdef"
memchr_args_hack2.c_n[0] = ord('d')
memchr_args_hack2.c_n[1] = 7

print( CFUNCTYPE(c_char_p, c_char_p, c_uint, c_uint, c_void_p)(('memchr', libc))(b"abcdef", c_uint(ord('d')), c_uint(7), None))
print( CFUNCTYPE(c_char_p, MemchrArgsHack, c_void_p)(('memchr', libc))(memchr_args_hack, None))
print( CFUNCTYPE(c_char_p, MemchrArgsHack2, c_void_p)(('memchr', libc))(memchr_args_hack2, None))
[issue38628] Issue with ctypes in AIX
Tony Reix added the comment: After more investigations, we (Damien and I) think that there are several issues in Python 3.8.5:

1) Documentation.
   a) AFAIK, the only place where the Python ctypes documentation talks about how arrays in a structure are managed is: https://docs.python.org/3/library/ctypes.html#arrays
   b) the size of the structure in the example given there is much greater than in our case.
   c) the documentation does NOT say that a structure <= 16 bytes and a structure greater than 16 bytes are managed differently. That's a bug in the documentation vs the code.

2) Tests. Looking at the tests, there are NO tests about our case.

3) There is a bug in Python. About the issue here, we see with gdb that Python provides libffi with a description saying that our case is passed as pointers. However, Python does NOT provide libffi with pointers for the array c_n, but with values.

4) libffi obeys the Python directives given in the description, thinking that it deals with 2 pointers, and thus it pushes only 2 values in registers R3 and R4.

= Bug in Python:

- 1) gdb

(gdb) b ffi_call
Breakpoint 1 at 0x900016fab80: file ../src/powerpc/ffi_darwin.c, line 919.
(gdb) run
Starting program: /home2/freeware/bin/python3 /tmp/Pb_damien2.py
Thread 2 hit Breakpoint 1, ffi_call (cif=0xfffd108, fn=@0x9001000a0082640: 0x91b0d60 , rvalue=0xfffd1d0, avalue=0xfffd1c0)
(gdb) p *(ffi_cif *)$r3
$9 = {abi = FFI_AIX, nargs = 2, arg_types = 0xfffd1b0, rtype = 0xa435cb8, bytes = 144, flags = 8}
(gdb) x/2xg 0xfffd1b0
0xfffd1b0: 0x0a43ca48  0x08001000a0002a10
(gdb) p *(ffi_type *)0x0a43ca48
$11 = {size = 16, alignment = 8, type = 13, elements = 0xa12eed0}   <= 13 == FFI_TYPE_STRUCT ; size == 16 on AIX!!! == 24 on Linux
(gdb) p *(ffi_type *)0x08001000a0002a10
$12 = {size = 8, alignment = 8, type = 14, elements = 0x0}   <= FFI_TYPE_POINTER
(gdb) x/3xg *(long *)$r6
0xa436050: 0x0a152200  0x0064
0xa436060: 0x0007   <= 7 is present in avalue[2]
(gdb) x/s 0x0a152200
0xa152200: "abcdef"

- 2) prints in libffi: AIX : aix_adjust_aggregate_sizes()

TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() s->size: 8 s->type:14 : FFI_TYPE_POINTER
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() s->size:24 s->type:13 : FFI_TYPE_STRUCT
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() FFI_TYPE_STRUCT Before s->size:24
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() s->size: 8 s->type:14 : FFI_TYPE_POINTER
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() p->size: 8 s->size: 8
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() s->size: 8 s->type:14 : FFI_TYPE_POINTER
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() p->size: 8 s->size:16
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() After ALIGN s->size:16
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() s->size: 8 s->type:14 : FFI_TYPE_POINTER
TONY: libffi: src/powerpc/ffi_darwin.c: ffi_call: FFI_AIX
TONY: libffi: cif->abi: 1 -(long)cif->bytes : -144 cif->flags : 8 ecif.rvalue : fffd200 fn: 9001000a0227760 FFI_FN(ffi_prep_args) : 9001000a050a108
s element : char pointer: a153d40 abcdef
c_n element 0: a Long: 100   <= 0X64 = 100, instead of a pointer
c_n element 1: a Long: 0

libffi obeys the description given by Python and pushes to R4 only what it thinks is a pointer (100 instead), and nothing in R5.

Summary:
- the Python documentation is incomplete vs the code
- the Python code gives libffi a description about pointers, but provides libffi with values
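The 16-byte threshold discussed above is easy to see from Python alone: on an LP64 platform the hack structure is 24 bytes, which puts it over the small-struct limit that the ctypes internals (Modules/_ctypes/stgdict.c) use when describing the aggregate to libffi. A minimal sketch, assuming a 64-bit build where c_char_p and c_ulong are both 8 bytes:

```python
from ctypes import Structure, c_char_p, c_ulong, sizeof

class MemchrArgsHack2(Structure):
    # one 8-byte pointer + an array of two 8-byte unsigned longs
    _fields_ = [("s", c_char_p), ("c_n", c_ulong * 2)]

print(sizeof(MemchrArgsHack2))  # 24 on LP64: over the 16-byte small-struct limit
```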
[issue38628] Issue with ctypes in AIX
Tony Reix added the comment: I do agree that the example with memchr is not correct. About your suggestion, I've done it, with MAX_STRUCT_SIZE = 32. And that works fine: all 3 values are passed by value.

# cat Pb-3.8.5.py
#!/usr/bin/env python3

from ctypes import *

mine = CDLL('./MemchrArgsHack2.so')

class MemchrArgsHack2(Structure):
    _fields_ = [("s", c_char_p), ("c_n", c_ulong * 2)]

memchr_args_hack2 = MemchrArgsHack2()
memchr_args_hack2.s = b"abcdef"
memchr_args_hack2.c_n[0] = ord('d')
memchr_args_hack2.c_n[1] = 7

print( "sizeof(MemchrArgsHack2): ", sizeof(MemchrArgsHack2) )
print( CFUNCTYPE(c_char_p, MemchrArgsHack2, c_void_p) (('my_memchr', mine)) (memchr_args_hack2, None) )

# cat MemchrArgsHack2.c
#include <stdio.h>
#include <string.h>

struct MemchrArgsHack2 {
    char *s;
    unsigned long c_n[2];
};

extern char *my_memchr(struct MemchrArgsHack2 args)
{
    printf("s element : char pointer: %p %s\n", args.s, args.s);
    printf("c_n element 0: a Long: %ld\n", args.c_n[0]);
    printf("c_n element 1: a Long: %ld\n", args.c_n[1]);
    return(args.s + 3);
}

TONY Modules/_ctypes/stgdict.c: MAX_STRUCT_SIZE=32

sizeof(MemchrArgsHack2): 24
TONY: libffi: src/powerpc/ffi_darwin.c : ffi_prep_cif_machdep()
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() s->size: 8 s->type:14 : FFI_TYPE_POINTER
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() s->size:24 s->type:13 : FFI_TYPE_STRUCT
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() FFI_TYPE_STRUCT Before s->size:24
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() s->size: 8 s->type:14 : FFI_TYPE_POINTER
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() p->size: 8 s->size: 8
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() s->size:16 s->type:13 : FFI_TYPE_STRUCT
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() FFI_TYPE_STRUCT Before s->size:16
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() s->size: 8 s->type:11 : FFI_TYPE_UINT64
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() p->size: 8 s->size: 8
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() s->size: 8 s->type:11 : FFI_TYPE_UINT64
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() p->size: 8 s->size:16
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() After ALIGN s->size:16
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() p->size:16 s->size:24
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() After ALIGN s->size:24
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() s->size: 8 s->type:14 : FFI_TYPE_POINTER
TONY: libffi: src/powerpc/ffi_darwin.c: ffi_call: FFI_AIX
TONY: libffi: cif->abi: 1 -(long)cif->bytes : -144 cif->flags : 8 ecif.rvalue : fffd210 fn: 9001000a0227760 FFI_FN(ffi_prep_args) : 9001000a050a108
s element : char pointer: a0000154d40 abcdef
c_n element 0: a Long: 100
c_n element 1: a Long: 7    <<<< Correct value appears.
b'def'

With the regular version (MAX_STRUCT_SIZE=16), I have:

sizeof(MemchrArgsHack2): 24
TONY: libffi: src/powerpc/ffi_darwin.c : ffi_prep_cif_machdep()
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() s->size: 8 s->type:14 : FFI_TYPE_POINTER
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() s->size:24 s->type:13 : FFI_TYPE_STRUCT
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() FFI_TYPE_STRUCT Before s->size:24
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() s->size: 8 s->type:14 : FFI_TYPE_POINTER
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() p->size: 8 s->size: 8
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() s->size: 8 s->type:14 : FFI_TYPE_POINTER
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() p->size: 8 s->size:16
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() After ALIGN s->size:16
TONY: libffi: src/powerpc/ffi_darwin.c : aix_adjust_aggregate_sizes() s->size: 8 s->type:14 : FFI_TYPE_POINTER
TONY: libffi: src/powerpc/ffi_darwin.c: ffi_call: FFI_AIX
TONY: libffi: cif->abi: 1 -(long)cif->bytes : -144 cif->flags : 8 ecif.rvalue : fffd210 fn: 9001000a0227760 FFI_FN(ffi_prep_args) : 9001000a050a108
s element : char pointer: a154d40 abcdef
c_n element 0: a Long: 100
c_n element 1: a Long: 0    <<< Python pushed nothing for this.
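For comparison, the portable way to call memchr through ctypes, without packing the arguments into a structure, is to declare the three parameters individually; c_size_t for the length matches the C prototype. A sketch for a glibc system (ctypes.util.find_library is used to locate libc, an assumption that holds on common Linux distributions):

```python
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"))
libc.memchr.argtypes = [ctypes.c_char_p, ctypes.c_int, ctypes.c_size_t]
libc.memchr.restype = ctypes.c_char_p   # returned pointer is decoded as bytes

print(libc.memchr(b"abcdef", ord("d"), 7))  # b'def'
```

Declaring argtypes this way lets ctypes marshal each scalar separately, so no by-value aggregate ever reaches libffi.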
[issue41540] Test test_maxcontext_exact_arith (_decimal) consumes all memory on AIX
New submission from Tony Reix : Python master of 2020/08/11.

Test test_maxcontext_exact_arith (test.test_decimal.CWhitebox) checks that Python correctly handles a case where an object of size 421052631578947376 is created:

maxcontext = Context(prec=C.MAX_PREC, Emin=C.MIN_EMIN, Emax=C.MAX_EMAX)

Both on Linux and AIX, we have:

Context(prec=99, rounding=ROUND_HALF_EVEN, Emin=-99, Emax=99, capitals=1, clamp=0, flags=[], traps=[InvalidOperation, DivisionByZero, Overflow])

The test appears in Lib/test/test_decimal.py:

5665 def test_maxcontext_exact_arith(self):

and the issue (on AIX) exactly appears at:

self.assertEqual(Decimal(4) / 2, 2)

The issue is due to code in Objects/obmalloc.c:

void *
PyMem_RawMalloc(size_t size)
{
    /*
     * Limit ourselves to PY_SSIZE_T_MAX bytes to prevent security holes.
     * Most python internals blindly use a signed Py_ssize_t to track
     * things without checking for overflows or negatives.
     * As size_t is unsigned, checking for size < 0 is not required.
     */
    if (size > (size_t)PY_SSIZE_T_MAX)
        return NULL;
    return _PyMem_Raw.malloc(_PyMem_Raw.ctx, size);

Both on Fedora/x86_64 and AIX, we have:

size: 421052631578947376
PY_SSIZE_T_MAX: 9223372036854775807

thus size < PY_SSIZE_T_MAX, and _PyMem_Raw.malloc() is called. However, on Linux, malloc() returns a NULL pointer in that case, and Python handles this and correctly runs the test. On AIX, malloc() tries to allocate the requested memory, and the OS gets stuck till the Python process is killed by the OS.

Either the size is too small, or PY_SSIZE_T_MAX is not correctly computed. ./Include/pyport.h:

/* Largest positive value of type Py_ssize_t. */
#define PY_SSIZE_T_MAX ((Py_ssize_t)(((size_t)-1)>>1))

Anyway, the following code, added in PyMem_RawMalloc() before the call to _PyMem_Raw.malloc() (which in turn calls malloc()):

if (size == 421052631578947376) {
    printf("TONY: 421052631578947376: --> PY_SSIZE_T_MAX: %ld \n", PY_SSIZE_T_MAX);
    return NULL;
}

does fix the issue on AIX. However, it is simply a way to show where the issue can be fixed. Another solution (fix size < PY_SSIZE_T_MAX) is needed.

-- components: C API messages: 375302 nosy: T.Rex priority: normal severity: normal status: open title: Test test_maxcontext_exact_arith (_decimal) consumes all memory on AIX type: crash versions: Python 3.10

___ Python tracker <https://bugs.python.org/issue41540> ___
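The guard in PyMem_RawMalloc() can be checked from Python itself: sys.maxsize is exactly PY_SSIZE_T_MAX, and the size this test allocates is well below it, so the guard never rejects the request. A small sketch, assuming a 64-bit build:

```python
import sys

size = 421052631578947376      # allocation size from test_maxcontext_exact_arith
print(size <= sys.maxsize)     # True: the PY_SSIZE_T_MAX guard lets it through
print(sys.maxsize)             # 9223372036854775807 on a 64-bit build
```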
[issue41540] Test test_maxcontext_exact_arith (_decimal) consumes all memory on AIX
Tony Reix added the comment: Some more explanations. On AIX, the memory is controlled by the ulimit command. "Global memory" comprises the physical memory and the paging space, associated with the Data Segment. By default, both Memory and Data Segment are limited:

# ulimit -a
data seg size (kbytes, -d) 131072
max memory size (kbytes, -m) 32768
...

However, it is possible to remove the limit, like:

# ulimit -d unlimited

Now, when the "data seg size" is limited, the malloc() routine checks whether enough memory/paging-space is available, and it immediately returns a NULL pointer. But when the "data seg size" is unlimited, the malloc() routine first tries to allocate, and quickly consumes the paging space, which is much slower than acquiring memory since it consumes disk space. And it nearly hangs the OS. Thus, in that case, it does NOT check whether enough memory or data segments are available. Bad.

So this issue appears on AIX only if we have:

# ulimit -d unlimited

Anyway, the test:

if (size > (size_t)PY_SSIZE_T_MAX)

in Objects/obmalloc.c: PyMem_RawMalloc() seems weird to me, since the max of size is always lower than PY_SSIZE_T_MAX.

-- nosy: -facundobatista, mark.dickinson, pablogsal, rhettinger, skrah
[issue41540] Test test_maxcontext_exact_arith (_decimal) consumes all memory on AIX
Tony Reix added the comment: Hi Pablo, I'm only surprised that the maximum size generated in the test is always lower than PY_SSIZE_T_MAX. And this appears both on AIX and on Linux, which both compute the same values. On AIX, it appears (I've just discovered this now) that malloc() does not ALWAYS check that there is enough memory to allocate before starting to claim memory (and thus paging space). This happens when the Data Segment size is unlimited. On Linux/Fedora, I had no limit either. But it behaves differently, and malloc() always checks that the size is correct.
[issue41540] Test test_maxcontext_exact_arith (_decimal) consumes all memory on AIX
Tony Reix added the comment:

> Is it a 64bit AIX?

Yes. AIX has been 64bit by default (and 64bit only) for ages, but it manages 32bit applications as well as 64bit applications. The experiments were done with 64bit Python executables on both AIX and Linux. The AIX machine has 16GB of memory and 16GB of paging space. The Linux Fedora32/x86_64 machine has 16GB of memory and 8269820 kB of paging space (swapon -s). Yes, I agree that the behavior of AIX malloc() under "ulimit -d unlimited" is... surprising. And the manual of malloc() does not talk about this. Anyway, was the test:

if (size > (size_t)PY_SSIZE_T_MAX)

aimed at preventing calls to malloc() with such a huge size? If yes, that does not work.
[issue41540] Test test_maxcontext_exact_arith (_decimal) consumes all memory on AIX
Tony Reix added the comment: I forgot to say that this behavior was not present in stable version 3.8.5. Sorry. On 2 AIX 7.2 machines, testing Python 3.8.5 with:

+ cd /opt/freeware/src/packages/BUILD/Python-3.8.5
+ ulimit -d unlimited
+ ulimit -m unlimited
+ ulimit -s unlimited
+ export LIBPATH=/opt/freeware/src/packages/BUILD/Python-3.8.5/64bit:/usr/lib64:/usr/lib:/opt/lib
+ export PYTHONPATH=/opt/freeware/src/packages/BUILD/Python-3.8.5/64bit/Modules
+ ./python Lib/test/regrtest.py -v test_decimal
...

gave:

507 tests in 227 items.
507 passed and 0 failed.
Test passed.

So this issue with v3.10 (master) appeared to me as a regression. However, after hours debugging the issue, I forgot to say so in this defect, sorry. (Previously, I was using limits for -d, -m and -s: max 4GB. However, that appeared to be an issue when running tests with the Python test option -M12Gb, which requires up to, and maybe more than, 12GB of my 16GB-memory machine in order to run a large part of the Python big-memory tests. And thus I unlimited these 3 resources, with no problem at all with version 3.8.5.)