[issue23979] Multiprocessing Pool.map pickles arguments passed to workers
New submission from Luis:

Hi, I've seen odd behavior from multiprocessing Pool on Linux/macOS:

```python
import multiprocessing as mp
from sys import getsizeof
import numpy as np

def f_test(x):
    print('process has received argument %s' % x)
    r = x[:100]  # the return value goes into a queue; for objects > 4GB pickle complains
    return r

if __name__ == '__main__':
    # 2**28 runs OK, 2**29 or bigger breaks pickle
    big_param = np.random.random(2**29)

    # Process + big parameter: OK
    proc = mp.Process(target=f_test, args=(big_param,))
    res = proc.start()  # note: start() returns None, hence "size 16" below
    proc.join()
    print('size of process result', getsizeof(res))

    # Pool + big parameter: BREAKS
    pool = mp.Pool(1)
    res = pool.map(f_test, (big_param,))
    print('size of Pool result', getsizeof(res))
```

```
$ python bug_mp.py
process has received argument [ 0.65282086  0.34977429  0.64148342 ...,  0.79902495  0.31427761  0.02678803]
size of process result 16
Traceback (most recent call last):
  File "bug_mp.py", line 26, in <module>
    res = pool.map(f_test, (big_param,))
  File "/usr/local/Cellar/python3/3.4.3/Frameworks/Python.framework/Versions/3.4/lib/python3.4/multiprocessing/pool.py", line 260, in map
    return self._map_async(func, iterable, mapstar, chunksize).get()
  File "/usr/local/Cellar/python3/3.4.3/Frameworks/Python.framework/Versions/3.4/lib/python3.4/multiprocessing/pool.py", line 599, in get
    raise self._value
  File "/usr/local/Cellar/python3/3.4.3/Frameworks/Python.framework/Versions/3.4/lib/python3.4/multiprocessing/pool.py", line 383, in _handle_tasks
    put(task)
  File "/usr/local/Cellar/python3/3.4.3/Frameworks/Python.framework/Versions/3.4/lib/python3.4/multiprocessing/connection.py", line 206, in send
    self._send_bytes(ForkingPickler.dumps(obj))
  File "/usr/local/Cellar/python3/3.4.3/Frameworks/Python.framework/Versions/3.4/lib/python3.4/multiprocessing/reduction.py", line 50, in dumps
    cls(buf, protocol).dump(obj)
OverflowError: cannot serialize a bytes object larger than 4 GiB
```

There's another flavor of error seen in a similar scenario:

```
...
struct.error: 'i' format requires -2147483648 <= number <= 2147483647
```

Tested in:
Python 3.4.2 |Anaconda 2.1.0 (64-bit)| (default, Oct 21 2014, 17:16:37) [GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux
and in:
Python 3.4.3 (default, Apr 9 2015, 16:03:56) [GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.51)] on darwin

Pool.map creates a "task queue" to feed the workers, and by doing so it forces any arguments passed to the workers to be pickled. Process works OK: no queue is created, it just forks. My expectation was that since we are on POSIX and forking, we shouldn't have to worry about arguments being pickled; if this is expected behavior, it should be warned about/documented (I hope I haven't missed this in the docs). For small arguments, pickling/unpickling may not be an issue, but for big ones it is (I am aware of the Array and shared-memory options). Has anybody seen something similar? Is this a hard requirement of Pool.map, or am I missing the point altogether?

--
messages: 241289
nosy: kieleth
priority: normal
severity: normal
status: open
title: Multiprocessing Pool.map pickles arguments passed to workers
type: behavior
versions: Python 3.4

___ Python tracker <http://bugs.python.org/issue23979> ___
___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue23979] Multiprocessing Pool.map pickles arguments passed to workers
Luis added the comment:

Thanks for the answer, although I still think I haven't made myself fully understood here; allow me to paraphrase:

> "...You need some means of transferring objects between processes, and pickling is the Python standard serialization method"

Yes, but the question still stands: why does Pool have to use a multiprocessing.Queue to load and spin up the workers (thereby pickling and unpickling their arguments), when it could let the workers inherit the arguments at fork time and create a queue only for the workers' return values? This applies to the "fork" start method, not to "spawn", and I'm not sure about "forkserver". Also, I'm not trying to avoid inheritance; I'm trying to use it with Pool and large arguments, as forking theoretically allows, and at the moment I'm instead forced to use Process with a Queue for the results, as shown in the code above. "OverflowError: cannot serialize a bytes object larger than 4 GiB" is just what exposes this behavior, because Pool pickles the arguments without, in my opinion, having to do so.

--
___ Python tracker <http://bugs.python.org/issue23979> ___
[issue23979] Multiprocessing Pool.map pickles arguments passed to workers
Luis added the comment:

Thanks for the information and explanations. The option of writing a tweaked serialization mechanism for Pool's queue and implementing shared memory sounds like fun; I'm not sure the pure copy-on-write of forking can be achieved though. It would be nice to know whether it is actually possible (the project mentioned in issue17560 still needs to "dump" the arrays to the filesystem). As a quick fix for us, I've created a simple wrapper around Pool and its map: it creates a Queue for the results and uses Process to start the workers, and this works just fine. Simplicity and consistency are great, but I still believe that Pool on Linux-based systems, by serializing arguments, duplicates data and works inefficiently, and this could be avoided. Obviously it's not me who takes the decisions and I don't have the time to investigate it further, so, after this petty rant, should we close this bug? :>

--
___ Python tracker <http://bugs.python.org/issue23979> ___
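A wrapper of the kind described might look like the sketch below. This is not the actual wrapper from the message; the names run_forked/_worker are made up, and it assumes the POSIX "fork" start method (so the large arguments are inherited rather than pickled) and results small enough to come back through a Queue:

```python
import multiprocessing as mp

def run_forked(func, args_list):
    """Pool.map stand-in: fork one Process per argument. Large arguments
    are inherited through fork; only the (small) results are pickled into
    the Queue on the way back."""
    ctx = mp.get_context('fork')  # POSIX only; closures survive fork
    q = ctx.Queue()

    def _worker(i, a):
        q.put((i, func(a)))

    procs = [ctx.Process(target=_worker, args=(i, a))
             for i, a in enumerate(args_list)]
    for p in procs:
        p.start()
    # drain the queue before join() to avoid blocking on large results
    results = [q.get() for _ in procs]
    for p in procs:
        p.join()
    return [r for _, r in sorted(results)]
```

For example, `run_forked(lambda x: x * 2, [1, 2, 3])` returns `[2, 4, 6]`; note it forks one process per argument rather than reusing a fixed pool of workers.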
[issue10141] SocketCan support
Changes by Miguel Luis:

--
nosy: +mluis
___ Python tracker <http://bugs.python.org/issue10141> ___
[issue38260] asyncio.run documentation does not mention its return value
New submission from Luis E.:

The documentation for asyncio.run (https://docs.python.org/3/library/asyncio-task.html#asyncio.run) does not mention the function's return value, or the lack of one. Looking at the source, it's clear it returns the passed coroutine's value via loop.run_until_complete, but neither the documentation nor the provided example makes this clear.

--
assignee: docs@python
components: Documentation, asyncio
messages: 353033
nosy: asvetlov, docs@python, edd07, yselivanov
priority: normal
severity: normal
status: open
title: asyncio.run documentation does not mention its return value
type: enhancement
versions: Python 3.7
___ Python tracker <https://bugs.python.org/issue38260> ___
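For the record, a minimal example of the behavior the docs leave implicit: asyncio.run() returns whatever the coroutine returns.

```python
import asyncio

async def main():
    return 42

result = asyncio.run(main())  # the coroutine's return value is passed through
print(result)  # 42
```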
[issue44191] Getting an ImportError DLL load failed while importing _ssl
New submission from Luis González:

Good morning everyone. First of all, I would like to apologize for my poor English; I'm very new to programming in Python. I'm getting an "ImportError: DLL load failed while importing _ssl" (it can't find _ssl.pyd) from my EXE file created by the command "python setup.py py2exe". The file _ssl.pyd is in the same folder as the .exe file. Thanks in advance.

--
assignee: christian.heimes
components: SSL
messages: 394033
nosy: christian.heimes, lnegger
priority: normal
severity: normal
status: open
title: Getting an ImportError DLL load failed while importing _ssl
type: crash
versions: Python 3.9
___ Python tracker <https://bugs.python.org/issue44191> ___
[issue45576] Cannot import modules from Zip64 files
New submission from Luis Franca:

I've tried to import a module from a fat jar file and got a "ModuleNotFoundError: No module named ..." error. I checked that the jar file had more than 65k files and was created using Zip64. When I unzip the file, Python is capable of importing the modules. I was able to reproduce the error on a simple project, such as:

```
simplePackage/
    __init__.py
    a/
        __init__.py
        moduleA.py
```

where:
- I'm using Python 3.9.4
- the __init__.py files are empty
- moduleA.py only has a printA() function

I ran the following tests:

1. Importing from the folder works:

```python
>>> import sys
>>> sys.path.append('C:\\Users\\...\\simplePackage')
>>> from a.moduleA import printA
>>> printA()
I'm module a
```

2. Zipping the folder also works:

```python
>>> import sys
>>> sys.path.append('C:\\Users\\...\\simplePackage.zip')
>>> from a.moduleA import printA
>>> printA()
I'm module a
```

3. Forcing Zip64 does not work. On Linux:

```
zip -fzr simple64Package.zip .
```

```python
>>> import sys
>>> sys.path.append('C:\\Users\\...\\simple64Package.zip')
>>> from a.moduleA import printA
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'a'
```

Is this expected behavior? Am I missing something? Thanks!

--
components: Library (Lib)
messages: 404792
nosy: lfamorim
priority: normal
severity: normal
status: open
title: Cannot import modules from Zip64 files
type: behavior
versions: Python 3.9
___ Python tracker <https://bugs.python.org/issue45576> ___
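A small harness along these lines can reproduce the working cases 1-2 portably (the helper name importable_from_zip is made up for illustration; building a genuine Zip64 archive with more than 65k entries is omitted for brevity):

```python
import os
import sys
import tempfile
import zipfile

def importable_from_zip(zip_path, module, attr):
    """Return True if `module` exposing `attr` can be imported from zip_path."""
    sys.path.insert(0, zip_path)
    try:
        mod = __import__(module, fromlist=[attr])
        return hasattr(mod, attr)
    finally:
        sys.path.remove(zip_path)
        # forget cached modules so later checks re-read the archive
        for name in (module, module.split('.')[0]):
            sys.modules.pop(name, None)

# build a tiny package zip mirroring the simplePackage layout above
zpath = os.path.join(tempfile.mkdtemp(), 'simplePackage.zip')
with zipfile.ZipFile(zpath, 'w') as zf:
    zf.writestr('a/__init__.py', '')
    zf.writestr('a/moduleA.py', 'def printA():\n    return "I\'m module a"\n')

print(importable_from_zip(zpath, 'a.moduleA', 'printA'))  # True
```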
[issue40025] enum: _generate_next_value_ is not called if its definition occurs after calls to auto()
New submission from Luis E.:

I ran into this issue when attempting to add a custom _generate_next_value_ method to an existing Enum. Adding the method definition at the bottom of the class causes it to not be called at all:

```python
from enum import Enum, auto

class E(Enum):
    A = auto()
    B = auto()
    def _generate_next_value_(name, *args):
        return name

E.B.value  # Returns 2; E._generate_next_value_ is not called

class F(Enum):
    def _generate_next_value_(name, *args):
        return name
    A = auto()
    B = auto()

F.B.value  # Returns 'B', as intended
```

I do not believe that the order of method/attribute definition should affect the behavior of the class, or at least it should be mentioned in the documentation.

--
assignee: docs@python
components: Documentation, Library (Lib)
messages: 364665
nosy: docs@python, edd07
priority: normal
severity: normal
status: open
title: enum: _generate_next_value_ is not called if its definition occurs after calls to auto()
type: behavior
versions: Python 3.7
___ Python tracker <https://bugs.python.org/issue40025> ___
[issue40025] enum: _generate_next_value_ is not called if its definition occurs after calls to auto()
Change by Luis E.:

--
components: -Documentation
___ Python tracker <https://bugs.python.org/issue40025> ___
[issue13756] Python3.2.2 make fail on cygwin
Luis Marsano added the comment:

The README file implies support:

```
[⋮]
Build Instructions
------------------
On Unix, Linux, BSD, OSX, and Cygwin:
[⋮]
```

--
components: +Build -Installation
nosy: +Luis.Marsano
___ Python tracker <http://bugs.python.org/issue13756> ___
[issue13756] Python3.2.2 make fail on cygwin
Luis Marsano added the comment:

Got it to build. Unpack the Python (3.2.2) source package and apply this patch to get a package that builds on Cygwin (1.7.9), e.g.:

```
xz -d patch.xz && tar -xJf Python-3.2.2.tar.xz && patch -p0 -i patch
```

Changes:
(1) The Makefile, makesetup, and the distutils.UnixCCompiler and distutils.command.build_ext modules set values for locating Cygwin's Python library that didn't agree or make sense at build time, so I revised them to agree and to use build options that work.
(2) configure and setup.py couldn't locate Cygwin's ncurses headers, so I revised them to do that. I don't think I made that change as portable as possible, so someone please check it and find a better way. Your input is welcome.

--
Added file: http://bugs.python.org/file24395/patch.xz
___ Python tracker <http://bugs.python.org/issue13756> ___
[issue36483] Missing line in documentation example
New submission from Luis Muñoz:

Hi, at https://docs.python.org/3/tutorial/controlflow.html#break-and-continue-statements-and-else-clauses-on-loops the example is missing a break at the end of the else statement. This is my first time reporting here; if there is an error in formatting or anything else, please accept my apologies.

Luis Muñoz

--
assignee: docs@python
components: Documentation
messages: 339184
nosy: Luis Muñoz, docs@python
priority: normal
severity: normal
status: open
title: Missing line in documentation example
type: behavior
versions: Python 3.9
___ Python tracker <https://bugs.python.org/issue36483> ___
[issue36483] Missing line in documentation example
Luis Muñoz added the comment:

My bad. Sorry for the inconvenience.

--
___ Python tracker <https://bugs.python.org/issue36483> ___
[issue20908] Memory leak in Reg2Py()
New submission from Luis G.F:

A memory leak can happen in Reg2Py(), losing the reference to the str pointer. See file PC/winreg.c, line 947.

--
components: Extension Modules, Windows
messages: 213384
nosy: luisgf
priority: normal
severity: normal
status: open
title: Memory leak in Reg2Py()
type: behavior
versions: Python 3.4
___ Python tracker <http://bugs.python.org/issue20908> ___
[issue20908] Memory leak in Reg2Py()
Luis G.F added the comment:

Attached is a patch for the 3.3.5 version.

--
keywords: +patch
versions: +Python 3.3
Added file: http://bugs.python.org/file34394/winreg_leak_v33.patch
___ Python tracker <http://bugs.python.org/issue20908> ___
[issue23230] Bug parsing integers with zero padding
New submission from Luis G.F:

The Python 3.4 interpreter fails to parse an integer that has zero padding, whereas Python 2.7 works:

```
Python 2.7.6 (default, Mar 22 2014, 22:59:56)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> int(001)
1
```

```
Python 3.4.0 (default, Apr 11 2014, 13:05:11)
[GCC 4.8.2] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> int(001)
  File "<stdin>", line 1
    int(001)
          ^
SyntaxError: invalid token
```

--
components: Interpreter Core
messages: 233928
nosy: luisgf
priority: normal
severity: normal
status: open
title: Bug parsing integers with zero padding
versions: Python 3.4
___ Python tracker <http://bugs.python.org/issue23230> ___
[issue23230] Bug parsing integers with zero padding
Luis G.F added the comment:

Thanks for the response, but in my case 001 is not an octal literal; it's a zero-padded base-10 number coming from parsing an IP-like string such as 111.000.222.333, where the numbers are all base-10 integers. The solution for parsing that seems to be to treat '000' as a string and use int('000', base=10).

--
resolution: -> not a bug
___ Python tracker <http://bugs.python.org/issue23230> ___
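In other words, once the octets arrive as strings, parsing them in base 10 is safe regardless of zero padding; only numeric *literals* with leading zeros are rejected by Python 3:

```python
octets = '111.000.222.333'.split('.')
print([int(o, 10) for o in octets])  # [111, 0, 222, 333]
print(int('001'))  # 1 -- int() on a string accepts leading zeros
```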
[issue32846] Deletion of large sets of strings is extra slow
Luis Pedro Coelho added the comment:

I think some of this conversation is going off-topic, but there is no disk-swapping in my case. I realize ours is not a typical setup, but our normal machines have 256GB of RAM and the "big memory" compute nodes are >=1TB. Normally, swap is outright disabled. This really is an impressive case study on how much difference cache-locality can make.

--
___ Python tracker <https://bugs.python.org/issue32846> ___
[issue33105] os.isfile returns false on Windows when file path is longer than 260 characters
New submission from Luis Conejo-Alpizar:

Windows has a maximum path length limitation of 260 characters. This limitation, however, can be bypassed in the scenario described below. When this occurs, os.path.isfile() will return False even when the affected file does exist. On Windows, the behavior should be for os.path.isfile() to raise an exception in this case, indicating that the maximum path length has been exceeded.

Sample scenario:

1. Let's say you have a folder named F1, located on your local machine at this path: C:\tc\proj\MTV\cs_fft\Milo\Fries\STL\BLNA\F1\
2. Inside that folder, you have a log file with this name: This_is_a_really_long_file_name_that_by_itself_is_not_capable_of_exceeding_the_path_length_limitation_Windows_has_in_pretty_much_every_single_version_of_Wind.log
3. The combined length of the path and the file name is exactly 260 characters, so Windows lets you get away with it when the file is initially created and/or placed there.
4. Later, you decide to make the F1 folder available on your network, under this name: \\tst\tc\proj\MTV\cs_fft\Milo\Fries\STL\BLNA\F1\
5. Your log file is still in the folder, but its full network path is now 263 characters, effectively violating the maximum path length limitation.
6. If you use os.listdir() on the networked folder, the log file will come up.
7. Now, if you try os.path.isfile(os.path.join(networked_path, logfile_name)), it returns False, even though the file is indeed there and is indeed a file.

--
components: Library (Lib)
messages: 314109
nosy: ldconejo
priority: normal
severity: normal
status: open
title: os.isfile returns false on Windows when file path is longer than 260 characters
type: behavior
versions: Python 2.7
___ Python tracker <https://bugs.python.org/issue33105> ___
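A workaround often applied on Windows (not a fix for the reported behavior) is the extended-length path prefix, which lifts the 260-character limit for most file APIs. A hedged sketch; the helper name isfile_longpath is made up, and the UNC variant (\\?\UNC\server\share\...) that the scenario above would actually need is only noted in a comment:

```python
import os

MAX_PATH = 260  # the classic Windows limit described above

def isfile_longpath(path):
    """os.path.isfile with a best-effort extended-length prefix on Windows.

    Note: network paths need the '\\\\?\\UNC\\server\\share\\...' form,
    which this sketch does not handle.
    """
    if (os.name == 'nt' and len(path) >= MAX_PATH
            and not path.startswith('\\\\?\\')):
        path = '\\\\?\\' + os.path.abspath(path)
    return os.path.isfile(path)
```

On non-Windows systems the helper simply falls through to a plain os.path.isfile() call.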
[issue5259] smtplib is broken in Python3
José Luis Cáceres added the comment:

There is a similar problem that I found with encode_cram_md5 in smtplib.py's SMTP.login() method. I used the solution proposed by miwa for both PLAIN and CRAM-MD5 authentication. Additionally, for the latter, I had to introduce a second correction and byte-encode the password string when passing it to hmac.HMAC. I do not know if I did things correctly, but just in case it helps, here is the complete patch that I used; it worked well with the two AUTH methods. I keep the original and modified lines for clarity.

```python
def encode_cram_md5(challenge, user, password):
    challenge = base64.decodestring(challenge)
    #response = user + " " + hmac.HMAC(password, challenge).hexdigest()
    response = user + " " + hmac.HMAC(password.encode(), challenge).hexdigest()
    #return encode_base64(response)
    return encode_base64(response.encode('ascii'), eol='')

def encode_plain(user, password):
    #return encode_base64("\0%s\0%s" % (user, password))
    return encode_base64(("\0%s\0%s" % (user, password)).encode('ascii'), eol='')
```

--
nosy: +j.l.caceres
___ Python tracker <http://bugs.python.org/issue5259> ___
[issue17297] Issue with return in recursive functions
New submission from Luis López Lázaro:

Sorry if I am raising something naive; perhaps I am doing something wrong, as I am both an amateur programmer and a newcomer to Python, but version 3.3 appears to have an issue with the return statement in recursive functions. When implementing a fruitful recursive function in Python 3.3 (specifically Python 3.3.0 (v3.3.0:bd8afb90ebf2, Sep 29 2012, 10:57:17) [MSC v.1600 64 bit (AMD64)] on win32) that depends on conditionals, it only returns the result value if the conditions are met in the first iteration. I am attaching a file in which I have implemented the Euclidean algorithm in two slightly different ways. The print statements produce the expected results (for instance with 1000, 75), and a print statement placed in the if branch containing the return statement shows the indicator sentence, but no value is returned by the function. I have also copied an implementation obtained from a website (function Euclid_LP, from the wiki Literate Programs, http://en.literateprograms.org/Euclidean_algorithm_%28Python%29) and it does not work either. For the tests, I initially ran the program with F5 and invoked the functions from the Python shell. Later I added a main part to the program that prompts for the numbers, calls the functions, and then displays the results, with no change in the outcome.

--
components: Regular Expressions
files: Chapter 6 MCD Euclidean.py
messages: 182966
nosy: ezio.melotti, luislopezlazaro, mrabarnett
priority: normal
severity: normal
status: open
title: Issue with return in recursive functions
versions: Python 3.3
Added file: http://bugs.python.org/file29236/Chapter 6 MCD Euclidean.py
___ Python tracker <http://bugs.python.org/issue17297> ___
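The attached file isn't shown here, but the symptom described (prints fire inside the if branch, yet the function returns nothing) is the classic footprint of forgetting to return the recursive call. A hedged guess at what is going on, using the same Euclidean example:

```python
def gcd_broken(a, b):
    if b == 0:
        return a
    gcd_broken(b, a % b)   # recursion result is discarded -> caller gets None

def gcd_fixed(a, b):
    if b == 0:
        return a
    return gcd_fixed(b, a % b)  # the result must be returned at every level

print(gcd_broken(1000, 75))  # None
print(gcd_fixed(1000, 75))   # 25
```

In the broken version, only a call whose very first invocation hits the base case (b == 0) returns a value, which matches "it only returns the result value if the conditions are met in the first iteration".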
[issue21258] Add __iter__ support for mock_open
Changes by José Luis Lafuente:

--
nosy: +José.Luis.Lafuente
___ Python tracker <http://bugs.python.org/issue21258> ___
[issue10863] zlib.compress() fails with string
New submission from Jose-Luis Fernandez-Barros:

In "The Python Tutorial", section 10.9, Data Compression (http://docs.python.org/py3k/tutorial/stdlib.html#data-compression):

```python
>>> import zlib
>>> s = 'witch which has which witches wrist watch'
...
>>> t = zlib.compress(s)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: must be bytes or buffer, not str
```

Possible solutions (sorry, newbie) are:

```python
>>> s = b'witch which has which witches wrist watch'
```

or

```python
>>> s = 'witch which has which witches wrist watch'.encode("utf-8")
```

In "The Python Standard Library", section 12, Data Compression and Archiving (http://docs.python.org/py3k/library/zlib.html#module-zlib), the signature apparently still presents a string as correct:

```
zlib.compress(string[, level])
```

--
assignee: d...@python
components: Documentation
messages: 125702
nosy: d...@python, joseluisfb
priority: normal
severity: normal
status: open
title: zlib.compress() fails with string
type: compile error
versions: Python 3.1
___ Python tracker <http://bugs.python.org/issue10863> ___
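For completeness, the round trip with the encode() fix applied works as expected in Python 3:

```python
import zlib

s = 'witch which has which witches wrist watch'
t = zlib.compress(s.encode('utf-8'))          # bytes in, bytes out
assert zlib.decompress(t).decode('utf-8') == s
print(len(s.encode('utf-8')), '->', len(t))   # the repeated words compress well
```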
[issue10863] zlib.compress() fails with string
Jose-Luis Fernandez-Barros added the comment:

Thanks for your answer. The error remains in the development docs, "The Python Standard Library", section 12, Data Compression and Archiving (http://docs.python.org/dev/py3k/library/zlib.html#module-zlib): zlib.compress(string[, level])

--
resolution: fixed ->
status: closed -> open
___ Python tracker <http://bugs.python.org/issue10863> ___
[issue37577] ModuleNotFoundError: No module named '_sysconfigdata__linux_x86_64-linux-gnu'
New submission from Luis Alejandro Martínez Faneyth:

Hello everyone, I've been building some minimal Python Docker images for a while, and a few days ago an error popped up in my CI when building Python 3.8 on Debian sid. The error happens when trying to install pip with the usual:

```
curl -fsSL https://bootstrap.pypa.io/get-pip.py | python3.8 - setuptools
```

The message:

```
ERROR: Exception:
Traceback (most recent call last):
  File "/tmp/tmprv6tur0m/pip.zip/pip/_internal/cli/base_command.py", line 178, in main
    status = self.run(options, args)
  File "/tmp/tmprv6tur0m/pip.zip/pip/_internal/commands/install.py", line 405, in run
    installed = install_given_reqs(
  File "/tmp/tmprv6tur0m/pip.zip/pip/_internal/req/__init__.py", line 54, in install_given_reqs
    requirement.install(
  File "/tmp/tmprv6tur0m/pip.zip/pip/_internal/req/req_install.py", line 919, in install
    self.move_wheel_files(
  File "/tmp/tmprv6tur0m/pip.zip/pip/_internal/req/req_install.py", line 440, in move_wheel_files
    move_wheel_files(
  File "/tmp/tmprv6tur0m/pip.zip/pip/_internal/wheel.py", line 318, in move_wheel_files
    scheme = distutils_scheme(
  File "/tmp/tmprv6tur0m/pip.zip/pip/_internal/locations.py", line 180, in distutils_scheme
    i.finalize_options()
  File "/usr/lib/python3.8/distutils/command/install.py", line 306, in finalize_options
    (prefix, exec_prefix) = get_config_vars('prefix', 'exec_prefix')
  File "/usr/lib/python3.8/distutils/sysconfig.py", line 501, in get_config_vars
    func()
  File "/usr/lib/python3.8/distutils/sysconfig.py", line 461, in _init_posix
    _temp = __import__(name, globals(), locals(), ['build_time_vars'], 0)
ModuleNotFoundError: No module named '_sysconfigdata__linux_x86_64-linux-gnu'
```

You can check the full CI output[0] or the build script[1] if you need to. I've checked for similar bugs and found #28046, but I don't know whether this is related or not. Thanks for the great work, and I'm looking forward to helping you fix this issue.
Luis

[0] https://travis-ci.org/LuisAlejandro/dockershelf/jobs/557990064
[1] https://github.com/LuisAlejandro/dockershelf/blob/master/python/build-image.sh

--
messages: 347765
nosy: luisalejandro
priority: normal
severity: normal
status: open
title: ModuleNotFoundError: No module named '_sysconfigdata__linux_x86_64-linux-gnu'
versions: Python 3.8
___ Python tracker <https://bugs.python.org/issue37577> ___
[issue37577] ModuleNotFoundError: No module named '_sysconfigdata__linux_x86_64-linux-gnu'
Luis Alejandro Martínez Faneyth added the comment:

New information on this: python3-distutils for 3.8 exists in Debian (experimental), but python3 (which is a kind of meta-package) for 3.8 doesn't exist; it depends on python3.8 or python3.7, resulting in the installation of python3.7. Perhaps this is a bug to report on Debian instead of here, I don't know.

--
type: -> crash
___ Python tracker <https://bugs.python.org/issue37577> ___
[issue37577] ModuleNotFoundError: No module named '_sysconfigdata__linux_x86_64-linux-gnu'
Luis Alejandro Martínez Faneyth added the comment:

Thanks Christian for the suggestion, and Matthias.

--
___ Python tracker <https://bugs.python.org/issue37577> ___
[issue36647] TextTestRunner doesn't honour "buffer" argument
New submission from José Luis Segura Lucas:

When using buffer=True in a TextTestRunner, the test result behaviour doesn't change at all. This is because TextTestRunner.stream is initialised using a decorator (_WritelnDecorator). When "buffer" is passed, the TestResult base class will try to redirect stdout and stderr to two different io.StringIO objects. Since TextTestRunner.stream is initialised before that redirection, all the self.stream.write calls end up using the original stream (stderr by default), resulting in no buffering at all.

--
components: Tests
messages: 340398
nosy: José Luis Segura Lucas
priority: normal
severity: normal
status: open
title: TextTestRunner doesn't honour "buffer" argument
type: behavior
versions: Python 3.7
___ Python tracker <https://bugs.python.org/issue36647> ___
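To illustrate what buffer=True is supposed to do (and does do for output produced inside the tests themselves, as opposed to the runner's own stream writes), a self-contained check; the run() helper here is made up for the demonstration:

```python
import contextlib
import io
import unittest

class T(unittest.TestCase):
    def test_noise(self):
        print('noise from a passing test')  # should be swallowed with buffer=True

def run(buffer):
    captured = io.StringIO()
    with contextlib.redirect_stdout(captured):  # stand-in for the real stdout
        runner = unittest.TextTestRunner(stream=io.StringIO(),
                                         buffer=buffer, verbosity=0)
        runner.run(unittest.defaultTestLoader.loadTestsFromTestCase(T))
    return captured.getvalue()

print('noise' in run(buffer=False))  # True: the print leaks to stdout
print('noise' in run(buffer=True))   # False: discarded for passing tests
```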
[issue18512] sys.stdout.write does not allow bytes in Python 3.x
New submission from Juan Luis Boya García:

Sometimes developers need to write text to stdout, and it's nice to have on-the-fly Unicode to UTF-8 conversion (or whatever matches the platform), but sometimes they also need to output binary blobs: text encoded in codecs other than the system default, binary files, etc. Python 2 does this more or less right and allows writing both text and binary. I think Python 3 should also accept both.

--
components: Library (Lib)
messages: 193394
nosy: ntrrgc
priority: normal
severity: normal
status: open
title: sys.stdout.write does not allow bytes in Python 3.x
type: behavior
versions: Python 3.3
___ Python tracker <http://bugs.python.org/issue18512> ___
[issue18512] sys.stdout.write does not allow bytes in Python 3.x
Juan Luis Boya García added the comment:

Sorry for the late response; GMail's spam filter ate the replies. The main issue is sys.stdout being opened as text instead of binary. This fact is stated in the docs: http://docs.python.org/3/library/sys.html#sys.stdout

In any case, there are some caveats worth noting:

> You can do sys.stdout.buffer.write(b"hello")

This is problematic if both the buffer and the TextIOWrapper are used. For example:

```python
print("Hello", end="")
sys.stdout.buffer.write(b"World")
```

That may write WorldHello instead of HelloWorld (and it does indeed, at least on Linux). Yes, an application should not do this in Python 3, but using print() and writing to stdout were OK in Python 2, which makes porting programs harder. A workaround is to call sys.stdout.flush() before sys.stdout.buffer.write().

> (from the docs) Using io.TextIOBase.detach(), streams can be made binary by default. sys.stdout = sys.stdout.detach()

This should help in cases where most output is binary, but it's worth noting that interactive shells (such as the builtin Python shell or IPython) and debuggers (both pdb and ipdb) stop working when this is used. Also, it will probably break every function that relies on sys.stdout being Unicode or binary depending only on the Python version.

--
___ Python tracker <http://bugs.python.org/issue18512> ___
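The interleaving caveat can be demonstrated deterministically with an in-memory TextIOWrapper standing in for sys.stdout:

```python
import io

# A TextIOWrapper over a binary buffer, like sys.stdout over sys.stdout.buffer.
raw = io.BytesIO()
text = io.TextIOWrapper(raw, encoding='utf-8')

text.write('Hello')      # buffered inside the text layer
raw.write(b'World')      # goes straight to the binary layer
text.flush()             # 'Hello' only arrives now
print(raw.getvalue())    # b'WorldHello' -- wrong order

# The workaround from the comment above: flush the text layer first.
raw2 = io.BytesIO()
text2 = io.TextIOWrapper(raw2, encoding='utf-8')
text2.write('Hello')
text2.flush()            # push buffered text out before binary writes
raw2.write(b'World')
print(raw2.getvalue())   # b'HelloWorld'
```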