[issue37837] add internal _PyLong_FromUnsignedChar() function
New submission from Sergey Fedoseev:

When compiled with default NSMALLPOSINTS, _PyLong_FromUnsignedChar() is significantly faster than the other PyLong_From*() functions:

$ python -m perf timeit -s "from collections import deque; consume = deque(maxlen=0).extend; b = bytes(2**20)" "consume(b)" --compare-to=../cpython-master/venv/bin/python
/home/sergey/tmp/cpython-master/venv/bin/python: . 7.10 ms +- 0.02 ms
/home/sergey/tmp/cpython-dev/venv/bin/python: . 4.29 ms +- 0.03 ms
Mean +- std dev: [/home/sergey/tmp/cpython-master/venv/bin/python] 7.10 ms +- 0.02 ms -> [/home/sergey/tmp/cpython-dev/venv/bin/python] 4.29 ms +- 0.03 ms: 1.66x faster (-40%)

It's mostly useful for bytes/bytearray, but it can also be used in several other places.

--
components: Interpreter Core
messages: 349540
nosy: sir-sigurd
priority: normal
severity: normal
status: open
title: add internal _PyLong_FromUnsignedChar() function
type: performance
versions: Python 3.9

___ Python tracker <https://bugs.python.org/issue37837> ___
___ Python-bugs-list mailing list
Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
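A note on the benchmark harness used throughout this thread: `deque(maxlen=0).extend` consumes an iterable while discarding every element, so the timing isolates the per-item cost of iteration. A minimal sketch (same names as the benchmark setup above):

```python
from collections import deque

# extend() on a zero-length deque iterates its argument and discards
# every element, so only the per-item iteration cost is measured --
# for bytes, that is the creation of one int object per byte.
consume = deque(maxlen=0).extend

b = bytes(2**20)   # 1 MiB of zero bytes
consume(b)         # creates (and immediately discards) 2**20 ints

# Every value yielded by a bytes iterator is an unsigned char, i.e.
# always in range(256) -- the case _PyLong_FromUnsignedChar() targets.
assert all(0 <= v < 256 for v in bytes(16))
```

This is why the proposed function only needs to handle values 0-255, all of which sit in the small-int cache with default NSMALLPOSINTS.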
[issue37837] add internal _PyLong_FromUnsignedChar() function
Change by Sergey Fedoseev:

--
keywords: +patch
pull_requests: +14971
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/15251

___ Python tracker <https://bugs.python.org/issue37837> ___
[issue37840] bytearray_getitem() handles negative index incorrectly
New submission from Sergey Fedoseev:

bytearray_getitem() adjusts a negative index, though that's already done by PySequence_GetItem(). This makes PySequence_GetItem(bytearray(1), -2) return 0 instead of raising IndexError.

--
components: Interpreter Core
messages: 349545
nosy: sir-sigurd
priority: normal
severity: normal
status: open
title: bytearray_getitem() handles negative index incorrectly
type: behavior

___ Python tracker <https://bugs.python.org/issue37840> ___
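The double adjustment can be illustrated with a hypothetical Python model of the two C functions (the names `psequence_getitem` and `sq_item_buggy` are mine, not CPython's):

```python
def sq_item_buggy(items, i):
    # models bytearray_getitem(): adjusts the negative index a second time
    if i < 0:
        i += len(items)
    if not 0 <= i < len(items):
        raise IndexError('index out of range')
    return items[i]

def psequence_getitem(items, i):
    # models PySequence_GetItem(): already adjusts negative indices
    # before calling the type's sq_item slot
    if i < 0:
        i += len(items)
    return sq_item_buggy(items, i)

# With one element, index -2 is out of range, but the double adjustment
# turns -2 into -1 and then into 0, silently returning the first item:
assert psequence_getitem([0], -2) == 0   # should have raised IndexError
```

Removing the adjustment from the sq_item model restores the expected IndexError for out-of-range negative indices.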
[issue37840] bytearray_getitem() handles negative index incorrectly
Change by Sergey Fedoseev:

--
keywords: +patch
pull_requests: +14972
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/15250

___ Python tracker <https://bugs.python.org/issue37840> ___
[issue37842] Initialize Py_buffer variables more efficiently
New submission from Sergey Fedoseev:

Argument Clinic generates a `{NULL, NULL}` initializer for Py_buffer variables. Such an initializer zeroes all Py_buffer members, but as I understand it, only the `obj` and `buf` members really have to be initialized. Avoiding the unneeded initialization provides a tiny speed-up:

$ python -m perf timeit -s "replace = b''.replace" "replace(b'', b'')" --compare-to=../cpython-master/venv/bin/python --duplicate=1000
/home/sergey/tmp/cpython-master/venv/bin/python: . 43.0 ns +- 0.5 ns
/home/sergey/tmp/cpython-dev/venv/bin/python: . 41.8 ns +- 0.4 ns
Mean +- std dev: [/home/sergey/tmp/cpython-master/venv/bin/python] 43.0 ns +- 0.5 ns -> [/home/sergey/tmp/cpython-dev/venv/bin/python] 41.8 ns +- 0.4 ns: 1.03x faster (-3%)

--
components: Argument Clinic
messages: 349582
nosy: larry, sir-sigurd
priority: normal
severity: normal
status: open
title: Initialize Py_buffer variables more efficiently
type: performance
versions: Python 3.9

___ Python tracker <https://bugs.python.org/issue37842> ___
[issue37842] Initialize Py_buffer variables more efficiently
Change by Sergey Fedoseev:

--
keywords: +patch
pull_requests: +14975
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/15254

___ Python tracker <https://bugs.python.org/issue37842> ___
[issue37907] speed-up PyLong_As*() for large longs
New submission from Sergey Fedoseev:

PyLong_As*() functions compute the result for large longs like this:

    size_t x, prev;
    x = 0;
    while (--i >= 0) {
        prev = x;
        x = (x << PyLong_SHIFT) | v->ob_digit[i];
        if ((x >> PyLong_SHIFT) != prev) {
            *overflow = sign;
            goto exit;
        }
    }

It can be rewritten like this:

    size_t x = 0;
    while (--i >= 0) {
        if (x > (size_t)-1 >> PyLong_SHIFT) {
            goto overflow;
        }
        x = (x << PyLong_SHIFT) | v->ob_digit[i];
    }

This provides some speed-up:

PyLong_AsSsize_t()
$ python -m perf timeit -s "from struct import Struct; N = 1000; pack = Struct('n'*N).pack; values = (2**30,)*N" "pack(*values)" --compare-to=../cpython-master/venv/bin/python
/home/sergey/tmp/cpython-master/venv/bin/python: . 9.69 us +- 0.02 us
/home/sergey/tmp/cpython-dev/venv/bin/python: . 8.61 us +- 0.07 us
Mean +- std dev: [/home/sergey/tmp/cpython-master/venv/bin/python] 9.69 us +- 0.02 us -> [/home/sergey/tmp/cpython-dev/venv/bin/python] 8.61 us +- 0.07 us: 1.12x faster (-11%)

PyLong_AsSize_t()
$ python -m perf timeit -s "from struct import Struct; N = 1000; pack = Struct('N'*N).pack; values = (2**30,)*N" "pack(*values)" --compare-to=../cpython-master/venv/bin/python
/home/sergey/tmp/cpython-master/venv/bin/python: . 10.5 us +- 0.1 us
/home/sergey/tmp/cpython-dev/venv/bin/python: . 8.19 us +- 0.17 us
Mean +- std dev: [/home/sergey/tmp/cpython-master/venv/bin/python] 10.5 us +- 0.1 us -> [/home/sergey/tmp/cpython-dev/venv/bin/python] 8.19 us +- 0.17 us: 1.29x faster (-22%)

PyLong_AsLong()
$ python -m perf timeit -s "from struct import Struct; N = 1000; pack = Struct('l'*N).pack; values = (2**30,)*N" "pack(*values)" --compare-to=../cpython-master/venv/bin/python
/home/sergey/tmp/cpython-master/venv/bin/python: . 9.68 us +- 0.02 us
/home/sergey/tmp/cpython-dev/venv/bin/python: . 8.48 us +- 0.22 us
Mean +- std dev: [/home/sergey/tmp/cpython-master/venv/bin/python] 9.68 us +- 0.02 us -> [/home/sergey/tmp/cpython-dev/venv/bin/python] 8.48 us +- 0.22 us: 1.14x faster (-12%)

PyLong_AsUnsignedLong()
$ python -m perf timeit -s "from struct import Struct; N = 1000; pack = Struct('L'*N).pack; values = (2**30,)*N" "pack(*values)" --compare-to=../cpython-master/venv/bin/python
/home/sergey/tmp/cpython-master/venv/bin/python: . 10.5 us +- 0.1 us
/home/sergey/tmp/cpython-dev/venv/bin/python: . 8.41 us +- 0.26 us
Mean +- std dev: [/home/sergey/tmp/cpython-master/venv/bin/python] 10.5 us +- 0.1 us -> [/home/sergey/tmp/cpython-dev/venv/bin/python] 8.41 us +- 0.26 us: 1.25x faster (-20%)

The mentioned pattern is also used in PyLong_AsLongLongAndOverflow(), but I left it untouched since the proposed change doesn't seem to affect its performance.

--
components: Interpreter Core
messages: 350091
nosy: sir-sigurd
priority: normal
severity: normal
status: open
title: speed-up PyLong_As*() for large longs
type: performance
versions: Python 3.9

___ Python tracker <https://bugs.python.org/issue37907> ___
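The two overflow checks above are equivalent; a small Python sketch can show this, modeling a 64-bit size_t with a mask and CPython's 30-bit digits (the helper names are mine):

```python
SHIFT = 30            # PyLong_SHIFT for 30-bit digits
MASK64 = 2**64 - 1    # models a 64-bit size_t

def as_size_t_old(digits):
    # original pattern: shift first, then detect that bits were lost
    x = 0
    for d in reversed(digits):
        prev = x
        x = ((x << SHIFT) | d) & MASK64   # C size_t wraps; emulate with mask
        if (x >> SHIFT) != prev:
            raise OverflowError
    return x

def as_size_t_new(digits):
    # proposed pattern: check before shifting, no extra 'prev' variable
    x = 0
    for d in reversed(digits):
        if x > MASK64 >> SHIFT:
            raise OverflowError
        x = (x << SHIFT) | d
    return x

digits = [1, 2, 3]    # little-endian digits, like ob_digit[]
assert as_size_t_old(digits) == as_size_t_new(digits) == (3 << 60) | (2 << 30) | 1

for fn in (as_size_t_old, as_size_t_new):
    try:
        fn([0, 0, 0, 1])   # 2**90 does not fit in 64 bits
    except OverflowError:
        pass
    else:
        raise AssertionError('overflow not detected')
```

The new pattern compares `x` against a compile-time constant before the shift, which is presumably what lets the compiler generate tighter code.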
[issue37907] speed-up PyLong_As*() for large longs
Change by Sergey Fedoseev:

--
keywords: +patch
pull_requests: +15074
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/15363

___ Python tracker <https://bugs.python.org/issue37907> ___
[issue27961] remove support for platforms without "long long"
Change by Sergey Fedoseev:

--
pull_requests: +15094
pull_request: https://github.com/python/cpython/pull/15385

___ Python tracker <https://bugs.python.org/issue27961> ___
[issue27961] remove support for platforms without "long long"
Change by Sergey Fedoseev:

--
pull_requests: +15095
pull_request: https://github.com/python/cpython/pull/15386

___ Python tracker <https://bugs.python.org/issue27961> ___
[issue27961] remove support for platforms without "long long"
Change by Sergey Fedoseev:

--
pull_requests: +15098
pull_request: https://github.com/python/cpython/pull/15388

___ Python tracker <https://bugs.python.org/issue27961> ___
[issue37938] refactor PyLong_As*() functions
Change by Sergey Fedoseev:

--
keywords: +patch
pull_requests: +15152
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/15457

___ Python tracker <https://bugs.python.org/issue37938> ___
[issue37938] refactor PyLong_As*() functions
New submission from Sergey Fedoseev:

PyLong_As*() functions have a lot of duplicated code; I'm creating a draft PR with an attempt to deduplicate them.

--
components: Interpreter Core
messages: 350367
nosy: sir-sigurd
priority: normal
severity: normal
status: open
title: refactor PyLong_As*() functions

___ Python tracker <https://bugs.python.org/issue37938> ___
[issue37837] add internal _PyLong_FromUnsignedChar() function
Sergey Fedoseev added the comment:

$ gcc -v 2>&1 | grep 'gcc version'
gcc version 8.3.0 (Debian 8.3.0-19)

Using ./configure --enable-optimizations --with-lto:

$ python -m perf timeit -s "from collections import deque; consume = deque(maxlen=0).extend; b = bytes(2**20)" "consume(b)" --compare-to=../cpython-master/venv/bin/python
/home/sergey/tmp/cpython-master/venv/bin/python: . 6.71 ms +- 0.09 ms
/home/sergey/tmp/cpython-dev/venv/bin/python: . 6.71 ms +- 0.08 ms
Mean +- std dev: [/home/sergey/tmp/cpython-master/venv/bin/python] 6.71 ms +- 0.09 ms -> [/home/sergey/tmp/cpython-dev/venv/bin/python] 6.71 ms +- 0.08 ms: 1.00x slower (+0%)

Using ./configure --enable-optimizations:

$ python -m perf timeit -s "from collections import deque; consume = deque(maxlen=0).extend; b = bytes(2**20)" "consume(b)" --compare-to=../cpython-master/venv/bin/python
/home/sergey/tmp/cpython-master/venv/bin/python: . 6.73 ms +- 0.17 ms
/home/sergey/tmp/cpython-dev/venv/bin/python: . 4.28 ms +- 0.01 ms
Mean +- std dev: [/home/sergey/tmp/cpython-master/venv/bin/python] 6.73 ms +- 0.17 ms -> [/home/sergey/tmp/cpython-dev/venv/bin/python] 4.28 ms +- 0.01 ms: 1.57x faster (-36%)

--

___ Python tracker <https://bugs.python.org/issue37837> ___
[issue37802] micro-optimization of PyLong_FromSize_t()
Sergey Fedoseev added the comment:

Previous benchmark results were obtained with a non-LTO build. Here are results for an LTO build:

$ python -m perf timeit -s "from itertools import repeat; _len = repeat(None, 0).__length_hint__" "_len()" --compare-to=../cpython-master/venv/bin/python --duplicate=1000
/home/sergey/tmp/cpython-master/venv/bin/python: . 14.9 ns +- 0.2 ns
/home/sergey/tmp/cpython-dev/venv/bin/python: . 13.1 ns +- 0.5 ns
Mean +- std dev: [/home/sergey/tmp/cpython-master/venv/bin/python] 14.9 ns +- 0.2 ns -> [/home/sergey/tmp/cpython-dev/venv/bin/python] 13.1 ns +- 0.5 ns: 1.13x faster (-12%)

$ python -m perf timeit -s "from itertools import repeat; _len = repeat(None, 2**10).__length_hint__" "_len()" --compare-to=../cpython-master/venv/bin/python --duplicate=1000
/home/sergey/tmp/cpython-master/venv/bin/python: . 22.1 ns +- 0.1 ns
/home/sergey/tmp/cpython-dev/venv/bin/python: . 20.9 ns +- 0.4 ns
Mean +- std dev: [/home/sergey/tmp/cpython-master/venv/bin/python] 22.1 ns +- 0.1 ns -> [/home/sergey/tmp/cpython-dev/venv/bin/python] 20.9 ns +- 0.4 ns: 1.05x faster (-5%)

$ python -m perf timeit -s "from itertools import repeat; _len = repeat(None, 2**30).__length_hint__" "_len()" --compare-to=../cpython-master/venv/bin/python --duplicate=1000
/home/sergey/tmp/cpython-master/venv/bin/python: . 23.3 ns +- 0.0 ns
/home/sergey/tmp/cpython-dev/venv/bin/python: . 21.6 ns +- 0.1 ns
Mean +- std dev: [/home/sergey/tmp/cpython-master/venv/bin/python] 23.3 ns +- 0.0 ns -> [/home/sergey/tmp/cpython-dev/venv/bin/python] 21.6 ns +- 0.1 ns: 1.08x faster (-8%)

$ python -m perf timeit -s "from itertools import repeat; _len = repeat(None, 2**60).__length_hint__" "_len()" --compare-to=../cpython-master/venv/bin/python --duplicate=1000
/home/sergey/tmp/cpython-master/venv/bin/python: . 24.4 ns +- 0.1 ns
/home/sergey/tmp/cpython-dev/venv/bin/python: . 22.7 ns +- 0.1 ns
Mean +- std dev: [/home/sergey/tmp/cpython-master/venv/bin/python] 24.4 ns +- 0.1 ns -> [/home/sergey/tmp/cpython-dev/venv/bin/python] 22.7 ns +- 0.1 ns: 1.08x faster (-7%)

--

___ Python tracker <https://bugs.python.org/issue37802> ___
[issue37837] add internal _PyLong_FromUnsignedChar() function
Sergey Fedoseev added the comment:

These last results are invalid :-) I thought that I was checking _PyLong_FromUnsignedChar() on top of GH-15192, but that wasn't true. So the correct results for an LTO build are:

$ python -m perf timeit -s "from collections import deque; consume = deque(maxlen=0).extend; b = bytes(2**20)" "consume(b)" --compare-to=../cpython-master/venv/bin/python
/home/sergey/tmp/cpython-master/venv/bin/python: . 6.93 ms +- 0.04 ms
/home/sergey/tmp/cpython-dev/venv/bin/python: . 3.96 ms +- 0.01 ms
Mean +- std dev: [/home/sergey/tmp/cpython-master/venv/bin/python] 6.93 ms +- 0.04 ms -> [/home/sergey/tmp/cpython-dev/venv/bin/python] 3.96 ms +- 0.01 ms: 1.75x faster (-43%)

But the most important thing is that using PyLong_FromUnsignedLong() instead of _PyLong_FromUnsignedChar() on top of GH-15192 produces the same results: striter_next() uses small_ints[] directly. However, that's not true for bytearrayiter_next(): PyLong_FromUnsignedLong() is called there. I think that's due to how the code is profiled, so I'm more or less satisfied with these results. I'm closing the existing PR and will probably close this issue soon after GH-15192 is merged.

--

___ Python tracker <https://bugs.python.org/issue37837> ___
[issue37973] improve docstrings of sys.float_info
New submission from Sergey Fedoseev:

In [8]: help(sys.float_info)
...
 |
 |  dig
 |      DBL_DIG -- digits
 |

This is not very helpful; https://docs.python.org/3/library/sys.html#sys.float_info is more verbose, so the docstrings should probably be updated from there.

--
assignee: docs@python
components: Documentation
messages: 350703
nosy: docs@python, sir-sigurd
priority: normal
severity: normal
status: open
title: improve docstrings of sys.float_info
type: enhancement

___ Python tracker <https://bugs.python.org/issue37973> ___
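For reference, the fields in question mirror C float.h macros; the concrete values below assume IEEE-754 binary64 doubles, which CPython uses on all current platforms (a sketch, not guaranteed by the language spec):

```python
import sys

# Each sys.float_info field mirrors a float.h macro; the terse docstring
# only names the macro, while the sys docs explain the meaning.
assert sys.float_info.dig == 15        # DBL_DIG: decimal digits of precision
assert sys.float_info.mant_dig == 53   # DBL_MANT_DIG: bits in the mantissa
assert sys.float_info.max_exp == 1024  # DBL_MAX_EXP
assert sys.float_info.radix == 2       # FLT_RADIX
```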
[issue37974] zip() docstring should say 'iterator' instead of 'object with __next__()'
New submission from Sergey Fedoseev:

In [3]: help(zip)
class zip(object)
 |  zip(*iterables) --> zip object
 |
 |  Return a zip object whose .__next__() method returns a tuple where
 |  the i-th element comes from the i-th iterable argument.  The .__next__()
 |  method continues until the shortest iterable in the argument sequence
 |  is exhausted and then it raises StopIteration.

This description is awkward and should use the term 'iterator', as https://docs.python.org/3/library/functions.html#zip does. The same applies to chain(), count() and zip_longest() from itertools.

--
assignee: docs@python
components: Documentation
messages: 350704
nosy: docs@python, sir-sigurd
priority: normal
severity: normal
status: open
title: zip() docstring should say 'iterator' instead of 'object with __next__()'

___ Python tracker <https://bugs.python.org/issue37974> ___
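The glossary sense of 'iterator' that the docstring should use is easy to check: `iter(it) is it` and the object has `__next__()`. A quick demonstration for the objects named above:

```python
from itertools import chain, count, zip_longest

# zip() returns an iterator in the glossary sense:
it = zip('ab', [1, 2])
assert iter(it) is it and hasattr(it, '__next__')
assert next(it) == ('a', 1)

# The same holds for the itertools objects mentioned in the report:
for obj in (chain('ab'), count(), zip_longest('ab', 'cd')):
    assert iter(obj) is obj
```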
[issue37976] zip() shadows TypeError raised in __iter__() of source iterable
New submission from Sergey Fedoseev:

zip() shadows a TypeError raised in __iter__() of the source iterable:

In [21]: class Iterable:
    ...:     def __init__(self, n):
    ...:         self.n = n
    ...:     def __iter__(self):
    ...:         return iter(range(self.n))
    ...:

In [22]: zip(Iterable('one'))
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
in ()
> 1 zip(Iterable('one'))

TypeError: zip argument #1 must support iteration

--
messages: 350763
nosy: sir-sigurd
priority: normal
severity: normal
status: open
title: zip() shadows TypeError raised in __iter__() of source iterable

___ Python tracker <https://bugs.python.org/issue37976> ___
[issue37976] zip() shadows TypeError raised in __iter__() of source iterable
Change by Sergey Fedoseev:

--
keywords: +patch
pull_requests: +15268
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/15592

___ Python tracker <https://bugs.python.org/issue37976> ___
[issue37976] zip() shadows TypeError raised in __iter__() of source iterable
Sergey Fedoseev added the comment:

Maybe it's not clear from the description, but the traceback only shows the line with zip(), so it doesn't help localize the source of the exception at all. You only see that 'argument #N must support iteration', but that argument has __iter__(), i.e. it supports iteration.

--

___ Python tracker <https://bugs.python.org/issue37976> ___
[issue37976] zip() shadows TypeError raised in __iter__() of source iterable
Sergey Fedoseev added the comment:

Also, using this example class:

In [5]: iter(Iterable('one'))
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
in ()
> 1 iter(Iterable('one'))

in __iter__(self)
      3         self.n = n
      4     def __iter__(self):
> 5         return iter(range(self.n))
      6

TypeError: range() integer end argument expected, got str.

--

___ Python tracker <https://bugs.python.org/issue37976> ___
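The contrast can be reproduced directly: calling iter() on the example class surfaces the real TypeError from range(), with the failing line in the traceback (a sketch; the exact message text varies across Python versions, so only the exception type is asserted):

```python
# The example class from the report: __iter__() itself raises TypeError
# because range() rejects a str argument.
class Iterable:
    def __init__(self, n):
        self.n = n
    def __iter__(self):
        return iter(range(self.n))

# A well-formed instance iterates normally:
assert list(Iterable(3)) == [0, 1, 2]

# iter() surfaces the real error from inside __iter__():
raised = False
try:
    iter(Iterable('one'))
except TypeError:
    raised = True
assert raised
```

zip()'s replacement message hides exactly this information, which is what the attached PR addresses.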
[issue37976] zip() shadows TypeError raised in __iter__() of source iterable
Sergey Fedoseev added the comment:

> map() does not have anything special.

Just for the reference: in Python 2.7 map() has the same behavior, and it caused many problems for me and other people working with/developing Django.

--

___ Python tracker <https://bugs.python.org/issue37976> ___
[issue37986] Improve performance of PyLong_FromDouble()
New submission from Sergey Fedoseev:

This patch simplifies the fast path for floats that fit into a C long and moves it from float.__trunc__ to PyLong_FromDouble().

+---------------------+---------------------+-------------------------------+
| Benchmark           | long-from-float-ref | long-from-float               |
+=====================+=====================+===============================+
| int(1.)             | 39.5 ns             | 37.3 ns: 1.06x faster (-6%)   |
| int(2.**20)         | 46.4 ns             | 45.6 ns: 1.02x faster (-2%)   |
| int(2.**30)         | 52.5 ns             | 49.0 ns: 1.07x faster (-7%)   |
| int(2.**60)         | 50.0 ns             | 49.2 ns: 1.02x faster (-2%)   |
| int(-2.**63)        | 76.6 ns             | 48.6 ns: 1.58x faster (-37%)  |
| int(2.**80)         | 77.1 ns             | 72.5 ns: 1.06x faster (-6%)   |
| int(2.**120)        | 91.5 ns             | 87.7 ns: 1.04x faster (-4%)   |
| math.ceil(1.)       | 57.4 ns             | 32.9 ns: 1.74x faster (-43%)  |
| math.ceil(2.**20)   | 60.5 ns             | 41.3 ns: 1.47x faster (-32%)  |
| math.ceil(2.**30)   | 64.2 ns             | 43.9 ns: 1.46x faster (-32%)  |
| math.ceil(2.**60)   | 66.3 ns             | 42.3 ns: 1.57x faster (-36%)  |
| math.ceil(-2.**63)  | 67.7 ns             | 43.1 ns: 1.57x faster (-36%)  |
| math.ceil(2.**80)   | 66.6 ns             | 65.6 ns: 1.01x faster (-1%)   |
| math.ceil(2.**120)  | 79.9 ns             | 80.5 ns: 1.01x slower (+1%)   |
| math.floor(1.)      | 58.4 ns             | 31.2 ns: 1.87x faster (-47%)  |
| math.floor(2.**20)  | 61.0 ns             | 39.6 ns: 1.54x faster (-35%)  |
| math.floor(2.**30)  | 64.2 ns             | 43.9 ns: 1.46x faster (-32%)  |
| math.floor(2.**60)  | 62.1 ns             | 40.1 ns: 1.55x faster (-35%)  |
| math.floor(-2.**63) | 64.1 ns             | 39.9 ns: 1.61x faster (-38%)  |
| math.floor(2.**80)  | 62.2 ns             | 62.7 ns: 1.01x slower (+1%)   |
| math.floor(2.**120) | 77.0 ns             | 77.8 ns: 1.01x slower (+1%)   |
+---------------------+---------------------+-------------------------------+

I'm going to speed up conversion of larger floats in a follow-up PR.

--
components: Interpreter Core
files: bench-long-from-float.py
messages: 350861
nosy: sir-sigurd
priority: normal
pull_requests: 15285
severity: normal
status: open
title: Improve performance of PyLong_FromDouble()
type: performance
versions: Python 3.9
Added file: https://bugs.python.org/file48573/bench-long-from-float.py

___ Python tracker <https://bugs.python.org/issue37986> ___
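The benchmark's choice of inputs can be read off from the C long range; a sketch of the boundary cases, assuming a 64-bit platform where a C long spans [-2**63, 2**63):

```python
import math

# -2.0**63 is exactly representable as a double and fits a C long
# (which is presumably why the benchmark singles it out), while 2.0**63
# is the smallest power of two that does not fit.
assert int(-2.0**63) == -2**63
assert math.floor(2.0**20) == 2**20
assert math.ceil(2.0**60) == 2**60

# Beyond the C long range the conversion is still exact, just off the
# fast path -- powers of two are exactly representable as doubles:
assert int(2.0**80) == 2**80
```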
[issue38015] inline function generates slightly inefficient machine code
Change by Sergey Fedoseev:

--
pull_requests: +15372
pull_request: https://github.com/python/cpython/pull/15718

___ Python tracker <https://bugs.python.org/issue38015> ___
[issue38015] inline function generates slightly inefficient machine code
Sergey Fedoseev added the comment:

I added a similar patch that replaces get_small_int() with a macro version, since it also induces unnecessary casts and makes the machine code less efficient. Example assembly can be checked at https://godbolt.org/z/1SjG3E.

This change produces a tiny but measurable speed-up for handling small ints:

$ python -m pyperf timeit -s "from collections import deque; consume = deque(maxlen=0).extend; r = range(256)" "consume(r)" --compare-to=../cpython-master/venv/bin/python --duplicate=1000
/home/sergey/tmp/cpython-master/venv/bin/python: . 1.03 us +- 0.08 us
/home/sergey/tmp/cpython-dev/venv/bin/python: . 973 ns +- 18 ns
Mean +- std dev: [/home/sergey/tmp/cpython-master/venv/bin/python] 1.03 us +- 0.08 us -> [/home/sergey/tmp/cpython-dev/venv/bin/python] 973 ns +- 18 ns: 1.05x faster (-5%)

--

___ Python tracker <https://bugs.python.org/issue38015> ___
[issue38015] inline function generates slightly inefficient machine code
Sergey Fedoseev added the comment:

I use GCC 9.2.

--

___ Python tracker <https://bugs.python.org/issue38015> ___
[issue38079] _PyObject_VAR_SIZE should avoid arithmetic overflow
Change by Sergey Fedoseev:

--
keywords: +patch
pull_requests: +15471
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/14838

___ Python tracker <https://bugs.python.org/issue38079> ___
[issue38094] unneeded assignment to wb.len in PyBytes_Concat using buffer protocol
Change by Sergey Fedoseev:

--
keywords: +patch
pull_requests: +15517
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/15274

___ Python tracker <https://bugs.python.org/issue38094> ___
[issue38147] add macro for __builtin_unreachable
New submission from Sergey Fedoseev:

GCC (along with Clang and ICC) has the __builtin_unreachable() builtin and MSVC has __assume(), which can be used to optimize out unreachable conditions. So we could add a macro like this:

#ifdef Py_DEBUG
#  define Py_ASSUME(cond) (assert(cond))
#else
#  if defined(_MSC_VER)
#    define Py_ASSUME(cond) (__assume(cond))
#  elif defined(__GNUC__)
#    define Py_ASSUME(cond) (cond ? (void)0 : __builtin_unreachable())
#  else
#    define Py_ASSUME(cond) ((void)0)
#  endif
#endif

Here's a pair of really simple examples showing how it can optimize code: https://godbolt.org/z/g9LYXF

Real-world example: _PyLong_Copy() [1] calls _PyLong_New() [2]. _PyLong_New() checks the size so that overflow does not occur. This check is redundant when _PyLong_New() is called from _PyLong_Copy(). We could add a function that bypasses that check, but in an LTO build PyObject_MALLOC() is inlined into _PyLong_New() and it also checks the size. Adding Py_ASSUME((size_t)size <= MAX_LONG_DIGITS) allows bypassing both checks.

[1] https://github.com/python/cpython/blob/3a4f66707e824ef3a8384827590ebaa6ca463dc0/Objects/longobject.c#L287-L309
[2] https://github.com/python/cpython/blob/3a4f66707e824ef3a8384827590ebaa6ca463dc0/Objects/longobject.c#L264-L283

--
messages: 352228
nosy: sir-sigurd
priority: normal
severity: normal
status: open
title: add macro for __builtin_unreachable

___ Python tracker <https://bugs.python.org/issue38147> ___
[issue38205] Python no longer compiles without small integer singletons
Sergey Fedoseev added the comment:

I believe that the problem is caused by the change in Py_UNREACHABLE() (https://github.com/python/cpython/commit/3ab61473ba7f3dca32d779ec2766a4faa0657923). Before the mentioned commit Py_UNREACHABLE() was an expression; now it's a block.

The Py_UNREACHABLE() macro is public (see https://docs.python.org/3/c-api/intro.html#c.Py_UNREACHABLE), so this change can cause similar problems outside of CPython (i.e. the change was breaking).

--
nosy: +sir-sigurd

___ Python tracker <https://bugs.python.org/issue38205> ___
[issue38205] Python no longer compiles without small integer singletons
Sergey Fedoseev added the comment:

Also, a quote from the Py_UNREACHABLE() docs:

> Use this in places where you might be tempted to put an assert(0) or abort() call.

https://github.com/python/cpython/commit/6b519985d23bd0f0bd072b5d5d5f2c60a81a19f2 does exactly that: it replaces assert(0) with Py_UNREACHABLE().

--

___ Python tracker <https://bugs.python.org/issue38205> ___
[issue38211] clean up type_init()
New submission from Sergey Fedoseev:

I wrote a patch that cleans up type_init():

1. Removes conditions already checked by assert()
2. Removes an object_init() call that effectively creates an empty tuple and checks that this tuple is empty

--
messages: 352710
nosy: sir-sigurd
priority: normal
severity: normal
status: open
title: clean up type_init()

___ Python tracker <https://bugs.python.org/issue38211> ___
[issue38211] clean up type_init()
Change by Sergey Fedoseev:

--
keywords: +patch
pull_requests: +15852
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/16257

___ Python tracker <https://bugs.python.org/issue38211> ___
[issue38205] Py_UNREACHABLE() no longer behaves as a function call
Sergey Fedoseev added the comment:

FWIW, I proposed adding a Py_ASSUME() macro that uses __builtin_unreachable() in bpo-38147.

--

___ Python tracker <https://bugs.python.org/issue38205> ___
[issue38147] add macro for __builtin_unreachable
Sergey Fedoseev added the comment:

> If you care of _PyLong_Copy() performance, you should somehow manually inline _PyLong_New() inside _PyLong_Copy().

It doesn't solve this:

> We could add a function that bypass that check, but in LTO build PyObject_MALLOC() is inlined into _PyLong_New() and it also checks the size. Adding Py_ASSUME((size_t)size <= MAX_LONG_DIGITS) allows to bypass both checks.

Here's an example: https://github.com/sir-sigurd/cpython/commit/c8699d0c614a18d558216ae7d432107147c95c28. I attach some disassembly from this example compiled with LTO, to demonstrate how the proposed macro affects the generated code.

--
Added file: https://bugs.python.org/file48614/disasm.txt

___ Python tracker <https://bugs.python.org/issue38147> ___
[issue35696] remove unnecessary operation in long_compare()
Sergey Fedoseev added the comment:

These warnings are caused by https://github.com/python/cpython/commit/c6734ee7c55add5fdc2c821729ed5f67e237a096. I'd fix them, but I'm not sure if we are going to restore CHECK_SMALL_INT() ¯\_(ツ)_/¯

--
nosy: +sir-sigurd

___ Python tracker <https://bugs.python.org/issue35696> ___
[issue38517] functools.cached_property should support partial functions and partialmethod's
Sergey Fedoseev added the comment:

issue38524 is related.

--
nosy: +sir-sigurd

___ Python tracker <https://bugs.python.org/issue38517> ___
[issue38524] functools.cached_property is not supported for setattr
Change by Sergey Fedoseev:

--
nosy: +sir-sigurd

___ Python tracker <https://bugs.python.org/issue38524> ___
[issue39759] os.getenv documentation is misleading
Change by Sergey Fedoseev:

--
nosy: +sir-sigurd

___ Python tracker <https://bugs.python.org/issue39759> ___
[issue44022] urllib http client possible infinite loop on a 100 Continue response
Change by Sergey Fedoseev:

--
nosy: +sir-sigurd
nosy_count: 8.0 -> 9.0
pull_requests: +25593
pull_request: https://github.com/python/cpython/pull/27033

___ Python tracker <https://bugs.python.org/issue44022> ___
[issue37347] Reference-counting problem in sqlite
Change by Sergey Fedoseev:

--
pull_requests: +16894
pull_request: https://github.com/python/cpython/pull/17413

___ Python tracker <https://bugs.python.org/issue37347> ___
[issue27961] remove support for platforms without "long long"
Change by Sergey Fedoseev : -- pull_requests: +17021 pull_request: https://github.com/python/cpython/pull/17539 ___ Python tracker <https://bugs.python.org/issue27961> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue40302] Add pycore_byteswap.h internal header file with _Py_bswap32() function
Change by Sergey Fedoseev : -- nosy: +sir-sigurd nosy_count: 3.0 -> 4.0 pull_requests: +19338 pull_request: https://github.com/python/cpython/pull/15659 ___ Python tracker <https://bugs.python.org/issue40302> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue41141] remove unneeded handling of '.' and '..' from pathlib.Path.iterdir()
New submission from Sergey Fedoseev : Currently pathlib.Path.iterdir() filters out '.' and '..'. It's unneeded since pathlib.Path.iterdir() uses os.listdir() under the hood, which returns neither '.' nor '..'. https://docs.python.org/3/library/os.html#os.listdir -- components: Library (Lib) messages: 372465 nosy: sir-sigurd priority: normal severity: normal status: open title: remove unneeded handling of '.' and '..' from pathlib.Path.iterdir() ___ Python tracker <https://bugs.python.org/issue41141> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
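The claim in the report can be checked from pure Python: os.listdir() is documented to omit the special entries, so the filtering in Path.iterdir() never triggers. A small sketch (temporary-directory layout is illustrative):

```python
import os
import pathlib
import tempfile

# os.listdir() never returns '.' or '..', so Path.iterdir(), which is
# built on top of it, has nothing to filter out.
with tempfile.TemporaryDirectory() as d:
    os.mkdir(os.path.join(d, "sub"))
    entries = os.listdir(d)
    assert "." not in entries and ".." not in entries
    names = [p.name for p in pathlib.Path(d).iterdir()]
    assert names == ["sub"]
```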
[issue41141] remove unneeded handling of '.' and '..' from pathlib.Path.iterdir()
Change by Sergey Fedoseev : -- keywords: +patch pull_requests: +20336 stage: -> patch review pull_request: https://github.com/python/cpython/pull/21179 ___ Python tracker <https://bugs.python.org/issue41141> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue41415] duplicated signature of dataclass in help()
New submission from Sergey Fedoseev : In [191]: import dataclasses, pydoc In [192]: @dataclass ...: class C: ...: pass ...: In [193]: print(pydoc.render_doc(C)) Python Library Documentation: class C in module __main__ class C(builtins.object) | C() -> None | | C() | | Methods defined here: | It's duplicated because dataclass __doc__ defaults to signature: In [195]: C.__doc__ Out[195]: 'C()' -- components: Library (Lib) messages: 374461 nosy: sir-sigurd priority: normal severity: normal status: open title: duplicated signature of dataclass in help() type: behavior versions: Python 3.10 ___ Python tracker <https://bugs.python.org/issue41415> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
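The duplication is reproducible outside IPython; a dataclass's __doc__ defaults to its signature text, so pydoc renders the signature and then repeats it as the docstring (behavior as of the Python versions around this report):

```python
import dataclasses
import pydoc

@dataclasses.dataclass
class C:
    pass

# __doc__ defaults to the signature text ("C()"), so render_doc() shows
# it once as the signature and again as the docstring.
assert C.__doc__.startswith("C(")
text = pydoc.render_doc(C)
assert "C()" in text
```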
[issue41415] duplicated signature of dataclass in help()
Change by Sergey Fedoseev : -- keywords: +patch pull_requests: +20793 stage: -> patch review pull_request: https://github.com/python/cpython/pull/21652 ___ Python tracker <https://bugs.python.org/issue41415> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue26834] Add truncated SHA512/224 and SHA512/256
Change by Sergey Fedoseev : -- nosy: +sir-sigurd ___ Python tracker <https://bugs.python.org/issue26834> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue35276] Document thread safety
Change by Sergey Fedoseev : -- nosy: +sir-sigurd ___ Python tracker <https://bugs.python.org/issue35276> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue33239] tempfile module: functions with the 'buffering' option are incorrectly documented
Change by Sergey Fedoseev : -- nosy: +sir-sigurd nosy_count: 4.0 -> 5.0 pull_requests: +21279 pull_request: https://github.com/python/cpython/pull/21763 ___ Python tracker <https://bugs.python.org/issue33239> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue31862] Port the standard library to PEP 489 multiphase initialization
Change by Sergey Fedoseev : -- pull_requests: +13420 ___ Python tracker <https://bugs.python.org/issue31862> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue34488] improve performance of BytesIO.writelines() by avoiding creation of unused PyLongs
Sergey Fedoseev added the comment: `BytesIO.write()` and `BytesIO.writelines()` are independent of each other. -- ___ Python tracker <https://bugs.python.org/issue34488> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue37802] micro-optimization of PyLong_FromSize_t()
New submission from Sergey Fedoseev : Currently PyLong_FromSize_t() uses PyLong_FromLong() for values < PyLong_BASE. It's suboptimal because PyLong_FromLong() needs to handle the sign. Removing PyLong_FromLong() call and handling small ints directly in PyLong_FromSize_t() makes it faster: $ python -m perf timeit -s "from itertools import repeat; _len = repeat(None, 2).__length_hint__" "_len()" --compare-to=../cpython-master/venv/bin/python --duplicate=1 /home/sergey/tmp/cpython-master/venv/bin/python: . 18.7 ns +- 0.3 ns /home/sergey/tmp/cpython-dev/venv/bin/python: . 16.7 ns +- 0.1 ns Mean +- std dev: [/home/sergey/tmp/cpython-master/venv/bin/python] 18.7 ns +- 0.3 ns -> [/home/sergey/tmp/cpython-dev/venv/bin/python] 16.7 ns +- 0.1 ns: 1.12x faster (-10%) $ python -m perf timeit -s "from itertools import repeat; _len = repeat(None, 2**10).__length_hint__" "_len()" --compare-to=../cpython-master/venv/bin/python --duplicate=1 /home/sergey/tmp/cpython-master/venv/bin/python: . 26.2 ns +- 0.0 ns /home/sergey/tmp/cpython-dev/venv/bin/python: . 25.0 ns +- 0.7 ns Mean +- std dev: [/home/sergey/tmp/cpython-master/venv/bin/python] 26.2 ns +- 0.0 ns -> [/home/sergey/tmp/cpython-dev/venv/bin/python] 25.0 ns +- 0.7 ns: 1.05x faster (-5%) $ python -m perf timeit -s "from itertools import repeat; _len = repeat(None, 2**30).__length_hint__" "_len()" --compare-to=../cpython-master/venv/bin/python --duplicate=1 /home/sergey/tmp/cpython-master/venv/bin/python: . 25.6 ns +- 0.1 ns /home/sergey/tmp/cpython-dev/venv/bin/python: . 25.6 ns +- 0.0 ns Mean +- std dev: [/home/sergey/tmp/cpython-master/venv/bin/python] 25.6 ns +- 0.1 ns -> [/home/sergey/tmp/cpython-dev/venv/bin/python] 25.6 ns +- 0.0 ns: 1.00x faster (-0%) This change makes PyLong_FromSize_t() consistently faster than PyLong_FromSsize_t(). So it might make sense to replace PyLong_FromSsize_t() with PyLong_FromSize_t() in __length_hint__() implementations and other similar cases. 
For example: $ python -m perf timeit -s "_len = iter(bytes(2)).__length_hint__" "_len()" --compare-to=../cpython-master/venv/bin/python --duplicate=1 /home/sergey/tmp/cpython-master/venv/bin/python: . 19.4 ns +- 0.3 ns /home/sergey/tmp/cpython-dev/venv/bin/python: . 17.3 ns +- 0.1 ns Mean +- std dev: [/home/sergey/tmp/cpython-master/venv/bin/python] 19.4 ns +- 0.3 ns -> [/home/sergey/tmp/cpython-dev/venv/bin/python] 17.3 ns +- 0.1 ns: 1.12x faster (-11%) $ python -m perf timeit -s "_len = iter(bytes(2**10)).__length_hint__" "_len()" --compare-to=../cpython-master/venv/bin/python --duplicate=1 /home/sergey/tmp/cpython-master/venv/bin/python: . 26.3 ns +- 0.1 ns /home/sergey/tmp/cpython-dev/venv/bin/python: . 25.3 ns +- 0.2 ns Mean +- std dev: [/home/sergey/tmp/cpython-master/venv/bin/python] 26.3 ns +- 0.1 ns -> [/home/sergey/tmp/cpython-dev/venv/bin/python] 25.3 ns +- 0.2 ns: 1.04x faster (-4%) $ python -m perf timeit -s "_len = iter(bytes(2**30)).__length_hint__" "_len()" --compare-to=../cpython-master/venv/bin/python --duplicate=1 /home/sergey/tmp/cpython-master/venv/bin/python: . 27.6 ns +- 0.1 ns /home/sergey/tmp/cpython-dev/venv/bin/python: . 26.0 ns +- 0.1 ns Mean +- std dev: [/home/sergey/tmp/cpython-master/venv/bin/python] 27.6 ns +- 0.1 ns -> [/home/sergey/tmp/cpython-dev/venv/bin/python] 26.0 ns +- 0.1 ns: 1.06x faster (-6%) -- components: Interpreter Core messages: 349285 nosy: sir-sigurd priority: normal severity: normal status: open title: micro-optimization of PyLong_FromSize_t() type: performance versions: Python 3.9 ___ Python tracker <https://bugs.python.org/issue37802> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
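The benchmarks above exercise __length_hint__, which returns its result through PyLong_FromSize_t()/PyLong_FromSsize_t() at the C level; from Python the same hook can be poked directly or via operator.length_hint():

```python
from itertools import repeat
import operator

# __length_hint__ is the hook timed in the benchmarks above.
r = repeat(None, 2)
assert r.__length_hint__() == 2
assert operator.length_hint(iter(bytes(4))) == 4
```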
[issue37802] micro-optimization of PyLong_FromSize_t()
Change by Sergey Fedoseev : -- keywords: +patch pull_requests: +14924 stage: -> patch review pull_request: https://github.com/python/cpython/pull/15192 ___ Python tracker <https://bugs.python.org/issue37802> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue30856] unittest.TestResult.addSubTest should be called immediately after subtest finishes
New submission from Sergey Fedoseev: Currently TestResult.addSubTest() is called just before TestResult.stopTest(), but docs says that addSubTest is "Called when a subtest finishes". IMO that means that it will be called immediately after subtest finishes, but not after indefinite time. Test is attached. -- files: test_subtest.py messages: 297756 nosy: sir-sigurd priority: normal severity: normal status: open title: unittest.TestResult.addSubTest should be called immediately after subtest finishes type: behavior Added file: http://bugs.python.org/file46990/test_subtest.py ___ Python tracker <http://bugs.python.org/issue30856> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue30856] unittest.TestResult.addSubTest should be called immediately after subtest finishes
Changes by Sergey Fedoseev : -- components: +Library (Lib) ___ Python tracker <http://bugs.python.org/issue30856> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue31108] add __contains__ for list_iterator (and others) for better performance
Changes by Sergey Fedoseev : -- pull_requests: +3025 ___ Python tracker <http://bugs.python.org/issue31108> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue31108] add __contains__ for list_iterator (and others) for better performance
New submission from Sergey Fedoseev: > python -mtimeit -s "l = list(range(10))" "l[-1] in l" 1000 loops, best of 3: 1.34 msec per loop > python -mtimeit -s "l = list(range(10))" "l[-1] in iter(l)" > 1000 loops, best of 3: 1.59 msec per loop -- messages: 299666 nosy: sir-sigurd priority: normal severity: normal status: open title: add __contains__ for list_iterator (and others) for better performance type: performance ___ Python tracker <http://bugs.python.org/issue31108> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
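The slowdown comes from the fallback path: without a dedicated __contains__, `x in iterator` uses the generic iteration protocol, advancing (and consuming) the iterator item by item. A quick illustration:

```python
# `in` on an iterator consumes everything up to and including the match.
it = iter([1, 2, 3, 4])
assert 3 in it
assert list(it) == [4]   # 1, 2, 3 were consumed by the search

# A list answers membership without being consumed.
l = [1, 2, 3, 4]
assert 3 in l
assert l == [1, 2, 3, 4]
```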
[issue33234] Improve list() pre-sizing for inputs with known lengths
Change by Sergey Fedoseev : -- pull_requests: +10490 ___ Python tracker <https://bugs.python.org/issue33234> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue36030] add internal API function to create tuple without items array initialization
New submission from Sergey Fedoseev : PyTuple_New() fills items array with NULLs to make usage of Py_DECREF() safe even when array is not fully filled with real items. There are multiple cases when this initialization step can be avoided to improve performance. For example it gives such speed-up for PyList_AsTuple(): Before: $ python -m perf timeit -s "l = [None] * 10**6" "tuple(l)" . Mean +- std dev: 4.43 ms +- 0.01 ms After: $ python -m perf timeit -s "l = [None] * 10**6" "tuple(l)" . Mean +- std dev: 4.11 ms +- 0.03 ms -- messages: 335897 nosy: sir-sigurd priority: normal severity: normal status: open title: add internal API function to create tuple without items array initialization type: performance versions: Python 3.8 ___ Python tracker <https://bugs.python.org/issue36030> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue36030] add internal API function to create tuple without items array initialization
Change by Sergey Fedoseev : -- keywords: +patch pull_requests: +11952 stage: -> patch review ___ Python tracker <https://bugs.python.org/issue36030> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue36031] add internal API function to effectively convert just created list to tuple
New submission from Sergey Fedoseev : There are several cases in CPython sources when PyList_AsTuple() is used with just created list and immediately after that this list is Py_DECREFed. This operation can be performed more effectively since refcount of items is not changed. For example it gives such speed-up for BUILD_TUPLE_UNPACK: Before: $ python -m perf timeit -s "l = [None]*10**6" "(*l,)" . Mean +- std dev: 8.75 ms +- 0.10 ms After: $ python -m perf timeit -s "l = [None]*10**6" "(*l,)" . Mean +- std dev: 5.41 ms +- 0.07 ms -- messages: 335901 nosy: sir-sigurd priority: normal severity: normal status: open title: add internal API function to effectively convert just created list to tuple type: performance versions: Python 3.8 ___ Python tracker <https://bugs.python.org/issue36031> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue36031] add internal API function to effectively convert just created list to tuple
Change by Sergey Fedoseev : -- keywords: +patch pull_requests: +11953 stage: -> patch review ___ Python tracker <https://bugs.python.org/issue36031> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue36031] add internal API function to effectively convert just created list to tuple
Sergey Fedoseev added the comment: Does this look like a more real-world example? Before: $ python -m perf timeit -s "t = (1, 2, 3)" "(*t, 4, 5)" . Mean +- std dev: 95.7 ns +- 2.3 ns After: $ python -m perf timeit -s "t = (1, 2, 3)" "(*t, 4, 5)" . Mean +- std dev: 85.1 ns +- 0.6 ns -- ___ Python tracker <https://bugs.python.org/issue36031> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue36030] add internal API function to create tuple without items array initialization
Change by Sergey Fedoseev : -- pull_requests: +11979 ___ Python tracker <https://bugs.python.org/issue36030> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue36030] add internal API function to create tuple without items array initialization
Sergey Fedoseev added the comment: Here's benchmarks for PyTuple_FromArray() PR: $ python -m perf timeit -s "l = [None]*0; tuple_ = tuple" "tuple_(l)" --duplicate=100 --compare-to=../cpython-master/venv/bin/python /home/sergey/tmp/cpython-master/venv/bin/python: . 50.5 ns +- 1.0 ns /home/sergey/tmp/cpython-dev/venv/bin/python: . 45.6 ns +- 1.2 ns Mean +- std dev: [/home/sergey/tmp/cpython-master/venv/bin/python] 50.5 ns +- 1.0 ns -> [/home/sergey/tmp/cpython-dev/venv/bin/python] 45.6 ns +- 1.2 ns: 1.11x faster (-10%) $ python -m perf timeit -s "l = [None]*1; tuple_ = tuple" "tuple_(l)" --duplicate=100 --compare-to=../cpython-master/venv/bin/python /home/sergey/tmp/cpython-master/venv/bin/python: . 61.8 ns +- 1.1 ns /home/sergey/tmp/cpython-dev/venv/bin/python: . 54.8 ns +- 0.8 ns Mean +- std dev: [/home/sergey/tmp/cpython-master/venv/bin/python] 61.8 ns +- 1.1 ns -> [/home/sergey/tmp/cpython-dev/venv/bin/python] 54.8 ns +- 0.8 ns: 1.13x faster (-11%) $ python -m perf timeit -s "l = [None]*5; tuple_ = tuple" "tuple_(l)" --duplicate=100 --compare-to=../cpython-master/venv/bin/python /home/sergey/tmp/cpython-master/venv/bin/python: . 68.2 ns +- 1.3 ns /home/sergey/tmp/cpython-dev/venv/bin/python: . 61.8 ns +- 1.5 ns Mean +- std dev: [/home/sergey/tmp/cpython-master/venv/bin/python] 68.2 ns +- 1.3 ns -> [/home/sergey/tmp/cpython-dev/venv/bin/python] 61.8 ns +- 1.5 ns: 1.10x faster (-9%) $ python -m perf timeit -s "l = [None]*10; tuple_ = tuple" "tuple_(l)" --duplicate=100 --compare-to=../cpython-master/venv/bin/python /home/sergey/tmp/cpython-master/venv/bin/python: . 88.1 ns +- 2.3 ns /home/sergey/tmp/cpython-dev/venv/bin/python: . 
78.9 ns +- 3.1 ns Mean +- std dev: [/home/sergey/tmp/cpython-master/venv/bin/python] 88.1 ns +- 2.3 ns -> [/home/sergey/tmp/cpython-dev/venv/bin/python] 78.9 ns +- 3.1 ns: 1.12x faster (-10%) $ python -m perf timeit -s "l = [None]*100; tuple_ = tuple" "tuple_(l)" --duplicate=100 --compare-to=../cpython-master/venv/bin/python /home/sergey/tmp/cpython-master/venv/bin/python: . 477 ns +- 7 ns /home/sergey/tmp/cpython-dev/venv/bin/python: . 452 ns +- 6 ns Mean +- std dev: [/home/sergey/tmp/cpython-master/venv/bin/python] 477 ns +- 7 ns -> [/home/sergey/tmp/cpython-dev/venv/bin/python] 452 ns +- 6 ns: 1.05x faster (-5%) -- ___ Python tracker <https://bugs.python.org/issue36030> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue36062] move index normalization from list_slice() to PyList_GetSlice()
New submission from Sergey Fedoseev : list_slice() is used by PyList_GetSlice(), list_subscript() and for list copying. In list_subscript() slice indices are already normalized with PySlice_AdjustIndices(), so slice normalization currently performed in list_slice() is only needed for PyList_GetSlice(). Moving this normalization from list_slice() to PyList_GetSlice() provides minor speed-up for list copying and slicing: $ python -m perf timeit -s "copy = [].copy" "copy()" --duplicate=1000 --compare-to=../cpython-master/venv/bin/python /home/sergey/tmp/cpython-master/venv/bin/python: . 26.5 ns +- 0.5 ns /home/sergey/tmp/cpython-dev/venv/bin/python: . 25.7 ns +- 0.5 ns Mean +- std dev: [/home/sergey/tmp/cpython-master/venv/bin/python] 26.5 ns +- 0.5 ns -> [/home/sergey/tmp/cpython-dev/venv/bin/python] 25.7 ns +- 0.5 ns: 1.03x faster (-3%) $ python -m perf timeit -s "l = [1]" "l[:]" --duplicate=1000 --compare-to=../cpython-master/venv/bin/python /home/sergey/tmp/cpython-master/venv/bin/python: . 71.5 ns +- 1.4 ns /home/sergey/tmp/cpython-dev/venv/bin/python: . 70.2 ns +- 0.9 ns Mean +- std dev: [/home/sergey/tmp/cpython-master/venv/bin/python] 71.5 ns +- 1.4 ns -> [/home/sergey/tmp/cpython-dev/venv/bin/python] 70.2 ns +- 0.9 ns: 1.02x faster (-2%) -- messages: 336184 nosy: sir-sigurd priority: normal severity: normal status: open title: move index normalization from list_slice() to PyList_GetSlice() type: performance versions: Python 3.8 ___ Python tracker <https://bugs.python.org/issue36062> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
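The normalization being moved is the usual clamping of out-of-range slice bounds; the Python-level analogue of PySlice_AdjustIndices() is slice.indices(), which shows what list_slice() no longer needs to repeat:

```python
# slice.indices() clamps bounds to the sequence length, the same
# normalization PySlice_AdjustIndices() performs at the C level.
s = slice(-100, 100)
assert s.indices(3) == (0, 3, 1)   # clamped to a length-3 sequence

l = [1, 2, 3]
assert l[-100:100] == [1, 2, 3]    # out-of-range bounds are harmless
```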
[issue36062] move index normalization from list_slice() to PyList_GetSlice()
Change by Sergey Fedoseev : -- keywords: +patch pull_requests: +11993 stage: -> patch review ___ Python tracker <https://bugs.python.org/issue36062> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue36063] replace PyTuple_SetItem() with PyTuple_SET_ITEM() in long_divmod()
New submission from Sergey Fedoseev : This change produces minor speed-up: $ python-other -m perf timeit -s "divmod_ = divmod" "divmod_(1, 1)" --duplicate=1000 --compare-to=../cpython-master/venv/bin/python python: . 64.6 ns +- 4.8 ns python-other: . 59.4 ns +- 3.2 ns Mean +- std dev: [python] 64.6 ns +- 4.8 ns -> [python-other] 59.4 ns +- 3.2 ns: 1.09x faster (-8%) -- messages: 336194 nosy: sir-sigurd priority: normal severity: normal status: open title: replace PyTuple_SetItem() with PyTuple_SET_ITEM() in long_divmod() type: performance versions: Python 3.8 ___ Python tracker <https://bugs.python.org/issue36063> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue36063] replace PyTuple_SetItem() with PyTuple_SET_ITEM() in long_divmod()
Change by Sergey Fedoseev : -- keywords: +patch pull_requests: +11998 stage: -> patch review ___ Python tracker <https://bugs.python.org/issue36063> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue36072] str.translate() behave differently for ASCII-only and other strings
New submission from Sergey Fedoseev : In [186]: from itertools import cycle In [187]: class ContainerLike: ...: def __init__(self): ...: self.chars = cycle('12') ...: def __getitem__(self, key): ...: return next(self.chars) ...: In [188]: 'aa'.translate(ContainerLike()) Out[188]: '11' In [189]: 'ыы'.translate(ContainerLike()) Out[189]: '121212' It seems that behavior was changed in https://github.com/python/cpython/commit/89a76abf20889551ec1ed64dee1a4161a435db5b. At least it should be documented. -- messages: 336279 nosy: sir-sigurd priority: normal severity: normal status: open title: str.translate() behave differently for ASCII-only and other strings ___ Python tracker <https://bugs.python.org/issue36072> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
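The divergence only shows up with a container-like object whose __getitem__ has side effects; with a plain dict keyed by ordinals, as the documentation describes, ASCII and non-ASCII input behave identically:

```python
# str.translate() with a proper mapping of Unicode ordinals: both the
# ASCII fast path and the general path give the same answer.
table = {ord('a'): '1', ord('ы'): '2'}
assert 'aa'.translate(table) == '11'
assert 'ыы'.translate(table) == '22'
```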
[issue36072] str.translate() behaves differently for ASCII-only and other strings
Change by Sergey Fedoseev : -- title: str.translate() behave differently for ASCII-only and other strings -> str.translate() behaves differently for ASCII-only and other strings ___ Python tracker <https://bugs.python.org/issue36072> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue36073] sqlite crashes with converters mutating cursor
New submission from Sergey Fedoseev : It's somewhat similar to bpo-10811, but for converter function: In [197]: import sqlite3 as sqlite ...: con = sqlite.connect(':memory:', detect_types=sqlite.PARSE_COLNAMES) ...: cur = con.cursor() ...: sqlite.converters['CURSOR_INIT'] = lambda x: cur.__init__(con) ...: ...: cur.execute('create table test(x foo)') ...: cur.execute('insert into test(x) values (?)', ('foo',)) ...: cur.execute('select x as "x [CURSOR_INIT]", x from test') ...: [1]25718 segmentation fault python manage.py shell Similar to bpo-10811, proposed patch raises ProgrammingError instead of crashing. -- components: Extension Modules messages: 336283 nosy: sir-sigurd priority: normal severity: normal status: open title: sqlite crashes with converters mutating cursor type: crash ___ Python tracker <https://bugs.python.org/issue36073> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
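For context, here is the same PARSE_COLNAMES machinery used normally (a well-behaved converter instead of one that re-initializes the cursor mid-fetch); the crash in the report comes from the converter mutating the cursor it is feeding:

```python
import sqlite3

# Normal PARSE_COLNAMES usage: the converter named in the 'x [TYPE]'
# column alias is applied to each fetched value.
sqlite3.register_converter("UPPER", lambda b: b.decode().upper())

con = sqlite3.connect(":memory:", detect_types=sqlite3.PARSE_COLNAMES)
cur = con.cursor()
cur.execute("create table test(x text)")
cur.execute("insert into test(x) values (?)", ("foo",))
cur.execute('select x as "x [UPPER]", x from test')
row = cur.fetchone()
assert row == ("FOO", "foo")
```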
[issue36073] sqlite crashes with converters mutating cursor
Change by Sergey Fedoseev : -- keywords: +patch pull_requests: +12008 stage: -> patch review ___ Python tracker <https://bugs.python.org/issue36073> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue36030] add internal API function to create tuple without items array initialization
Change by Sergey Fedoseev : -- pull_requests: +12062 ___ Python tracker <https://bugs.python.org/issue36030> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue36030] add internal API function to create tuple without items array initialization
Change by Sergey Fedoseev : -- pull_requests: +12078 ___ Python tracker <https://bugs.python.org/issue36030> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue36030] add internal API function to create tuple without items array initialization
Sergey Fedoseev added the comment: I added WIP PR with discussed micro-optimization, here are benchmark results: $ python -m perf compare_to --min-speed 1 -G master.json tuple-untracked.json Slower (1): - sqlite_synth: 5.16 us +- 0.10 us -> 5.22 us +- 0.08 us: 1.01x slower (+1%) Faster (19): - python_startup: 12.9 ms +- 0.7 ms -> 12.2 ms +- 0.0 ms: 1.06x faster (-5%) - python_startup_no_site: 8.96 ms +- 0.29 ms -> 8.56 ms +- 0.03 ms: 1.05x faster (-4%) - raytrace: 882 ms +- 11 ms -> 854 ms +- 12 ms: 1.03x faster (-3%) - mako: 27.9 ms +- 0.8 ms -> 27.1 ms +- 0.3 ms: 1.03x faster (-3%) - scimark_monte_carlo: 176 ms +- 4 ms -> 171 ms +- 5 ms: 1.03x faster (-3%) - logging_format: 17.7 us +- 0.4 us -> 17.2 us +- 0.3 us: 1.03x faster (-3%) - telco: 11.0 ms +- 0.2 ms -> 10.8 ms +- 0.4 ms: 1.02x faster (-2%) - richards: 123 ms +- 2 ms -> 120 ms +- 2 ms: 1.02x faster (-2%) - pathlib: 35.1 ms +- 0.7 ms -> 34.6 ms +- 0.5 ms: 1.01x faster (-1%) - scimark_sparse_mat_mult: 6.97 ms +- 0.20 ms -> 6.88 ms +- 0.29 ms: 1.01x faster (-1%) - scimark_sor: 327 ms +- 6 ms -> 323 ms +- 3 ms: 1.01x faster (-1%) - scimark_fft: 570 ms +- 5 ms -> 562 ms +- 4 ms: 1.01x faster (-1%) - float: 184 ms +- 2 ms -> 182 ms +- 2 ms: 1.01x faster (-1%) - logging_simple: 15.8 us +- 0.4 us -> 15.6 us +- 0.3 us: 1.01x faster (-1%) - deltablue: 12.6 ms +- 0.2 ms -> 12.5 ms +- 0.3 ms: 1.01x faster (-1%) - crypto_pyaes: 186 ms +- 2 ms -> 184 ms +- 2 ms: 1.01x faster (-1%) - hexiom: 17.3 ms +- 0.1 ms -> 17.2 ms +- 0.1 ms: 1.01x faster (-1%) - sqlalchemy_declarative: 253 ms +- 4 ms -> 251 ms +- 3 ms: 1.01x faster (-1%) - spectral_norm: 225 ms +- 2 ms -> 223 ms +- 3 ms: 1.01x faster (-1%) -- ___ Python tracker <https://bugs.python.org/issue36030> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue36030] add internal API function to create tuple without items array initialization
Sergey Fedoseev added the comment: This optimization also can be used for BUILD_TUPLE opcode and in pickle module, if it's OK to add _PyTuple_StealFromArray() function :-) -- ___ Python tracker <https://bugs.python.org/issue36030> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue36030] add internal API function to create tuple without items array initialization
Sergey Fedoseev added the comment: > Can you please convert msg336142 into a perf script? > And then run again these benchmarks on PR 12052.

+--------------------+---------+------------------------------+
| Benchmark          | ref     | untracked                    |
+====================+=========+==============================+
| list_as_tuple(0)   | 50.7 ns | 45.5 ns: 1.12x faster (-10%) |
+--------------------+---------+------------------------------+
| list_as_tuple(1)   | 64.5 ns | 56.5 ns: 1.14x faster (-12%) |
+--------------------+---------+------------------------------+
| list_as_tuple(5)   | 72.0 ns | 62.6 ns: 1.15x faster (-13%) |
+--------------------+---------+------------------------------+
| list_as_tuple(10)  | 86.3 ns | 77.1 ns: 1.12x faster (-11%) |
+--------------------+---------+------------------------------+
| list_as_tuple(100) | 469 ns  | 450 ns: 1.04x faster (-4%)   |
+--------------------+---------+------------------------------+

-- ___ Python tracker <https://bugs.python.org/issue36030> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue36030] add internal API function to create tuple without items array initialization
Sergey Fedoseev added the comment: >> This optimization also can be used for BUILD_TUPLE opcode and in pickle module, if it's OK to add _PyTuple_StealFromArray() function :-) > I would like to see a micro-benchmark showing that it's faster.

+-----------------+-----------------+------------------------------+
| Benchmark       | build_tuple_ref | build_tuple_untracked        |
+=================+=================+==============================+
| (a, )           | 19.9 ns         | 19.4 ns: 1.03x faster (-3%)  |
+-----------------+-----------------+------------------------------+
| (a, 1)          | 24.0 ns         | 22.6 ns: 1.06x faster (-6%)  |
+-----------------+-----------------+------------------------------+
| (a, 1, 1)       | 28.2 ns         | 25.9 ns: 1.09x faster (-8%)  |
+-----------------+-----------------+------------------------------+
| (a, 1, 1, 1)    | 31.0 ns         | 29.0 ns: 1.07x faster (-6%)  |
+-----------------+-----------------+------------------------------+
| (a, 1, 1, 1, 1) | 34.7 ns         | 32.2 ns: 1.08x faster (-7%)  |
+-----------------+-----------------+------------------------------+

-- ___ Python tracker <https://bugs.python.org/issue36030> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue36030] add internal API function to create tuple without items array initialization
Change by Sergey Fedoseev : Added file: https://bugs.python.org/file48174/list_as_tuple_bench.py ___ Python tracker <https://bugs.python.org/issue36030> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue36030] add internal API function to create tuple without items array initialization
Change by Sergey Fedoseev : Added file: https://bugs.python.org/file48173/build_tuple_bench.py ___ Python tracker <https://bugs.python.org/issue36030> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue32147] improve performance of binascii.unhexlify() by using conversion table
New submission from Sergey Fedoseev : Before: $ ./python -m timeit -s "from binascii import unhexlify; b = b'aa'*2**20" "unhexlify(b)" 50 loops, best of 5: 5.68 msec per loop After: $ ./python -m timeit -s "from binascii import unhexlify; b = b'aa'*2**20" "unhexlify(b)" 100 loops, best of 5: 2.06 msec per loop -- messages: 307053 nosy: sir-sigurd priority: normal severity: normal status: open title: improve performance of binascii.unhexlify() by using conversion table type: performance ___ Python tracker <https://bugs.python.org/issue32147> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
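A rough Python model of the C-level idea (illustrative only, not the patch itself): a 256-entry table maps each input byte to its hex-digit value, replacing per-character range checks with a single lookup.

```python
import binascii

# -1 marks non-hex bytes; valid digits map to their value 0..15.
TABLE = [-1] * 256
for i, c in enumerate(b"0123456789abcdef"):
    TABLE[c] = i
for i, c in enumerate(b"0123456789ABCDEF"):
    TABLE[c] = i

def unhexlify_table(data: bytes) -> bytes:
    if len(data) % 2:
        raise binascii.Error("Odd-length string")
    out = bytearray()
    for i in range(0, len(data), 2):
        hi, lo = TABLE[data[i]], TABLE[data[i + 1]]
        if hi < 0 or lo < 0:
            raise binascii.Error("Non-hexadecimal digit found")
        out.append(hi * 16 + lo)
    return bytes(out)

assert unhexlify_table(b"aaff00") == binascii.unhexlify(b"aaff00")
```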
[issue32147] improve performance of binascii.unhexlify() by using conversion table
Change by Sergey Fedoseev : -- keywords: +patch pull_requests: +4508 stage: -> patch review ___ Python tracker <https://bugs.python.org/issue32147> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue32147] improve performance of binascii.unhexlify() by using conversion table
Change by Sergey Fedoseev : -- components: +Library (Lib) ___ Python tracker <https://bugs.python.org/issue32147> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue32147] improve performance of binascii.unhexlify() by using conversion table
Sergey Fedoseev added the comment: Serhiy, did you use the same benchmark as mentioned here? -- ___ Python tracker <https://bugs.python.org/issue32147> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue32147] improve performance of binascii.unhexlify() by using conversion table
Sergey Fedoseev added the comment: > OS x86_64 GNU/Linux > compiler gcc version 7.2.0 (Debian 7.2.0-16) > CPU Architecture:x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 4 On-line CPU(s) list: 0-3 Thread(s) per core: 2 Core(s) per socket: 2 Socket(s): 1 NUMA node(s):1 Vendor ID: GenuineIntel CPU family: 6 Model: 58 Model name: Intel(R) Core(TM) i5-3210M CPU @ 2.50GHz Stepping:9 CPU MHz: 2494.521 CPU max MHz: 3100, CPU min MHz: 1200, BogoMIPS:4989.04 Virtualization: VT-x L1d cache: 32K L1i cache: 32K L2 cache:256K L3 cache:3072K NUMA node0 CPU(s): 0-3 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer xsave avx f16c rdrand lahf_lm cpuid_fault epb tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms xsaveopt dtherm ida arat pln pts > Do you build 32- or 64-bit Python? I'm not sure about that, I guess 64 is default on 64 OS? > Do you build in a debug or release mode? I tried with --enable-optimizations, --with-pydebug and without any flags. Numbers are different, but magnitude of a change is the same. -- ___ Python tracker <https://bugs.python.org/issue32147> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue32147] improve performance of binascii.unhexlify() by using conversion table
Sergey Fedoseev added the comment: Is there anything I can do to push this forward? BTW, Serhiy, what are your OS, compiler and CPU? -- ___ Python tracker <https://bugs.python.org/issue32147> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue8488] Docstrings of non-data descriptors "ignored"
Change by Sergey Fedoseev : -- nosy: +sir-sigurd ___ Python tracker <https://bugs.python.org/issue8488> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue33576] Make exception wrapping less intrusive for __set_name__ calls
Change by Sergey Fedoseev : -- nosy: +sir-sigurd ___ Python tracker <https://bugs.python.org/issue33576> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue21145] Add the @cached_property decorator
Change by Sergey Fedoseev : -- nosy: +sir-sigurd ___ Python tracker <https://bugs.python.org/issue21145> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue34018] SQLite converters are documented to be sensitive to the case of type names, but they're not
New submission from Sergey Fedoseev : SQLite converters are documented to be sensitive to the case of type names, but they're not.

In [50]: import sqlite3
    ...: sqlite3.converters.clear()
    ...: sqlite3.register_converter('T', lambda x: 'UPPER')
    ...: sqlite3.register_converter('t', lambda x: 'lower')
    ...:
    ...: con = sqlite3.connect(':memory:', detect_types=sqlite3.PARSE_DECLTYPES)
    ...: cur = con.cursor()
    ...: cur.execute('create table test(upper T, lower t)')
    ...: cur.execute('insert into test values (?, ?)', ('X', 'x'))
    ...: cur.execute('select * from test')
    ...: cur.fetchone()
Out[50]: ('lower', 'lower')

In [51]: sqlite3.converters
Out[51]: {'T': <function <lambda> at ...>}

Original commit in pysqlite that made converters case-insensitive: https://github.com/ghaering/pysqlite/commit/1e8bd36be93b7d7425910642b72e4152c77b0dfd -- assignee: docs@python components: Documentation messages: 320855 nosy: docs@python, sir-sigurd priority: normal severity: normal status: open title: SQLite converters are documented to be sensitive to the case of type names, but they're not ___ Python tracker <https://bugs.python.org/issue34018> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
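The session above can be distilled into a smaller demonstration: both registrations collapse into a single entry because the name is upper-cased before it is stored, so the second registration silently overwrites the first (the type name `MYTYPE` here is just an illustrative placeholder):

```python
import sqlite3

# Register the "same" type name in two different cases. If converters
# were case-sensitive as documented, both entries would survive; in
# practice the name is upper-cased on registration, so they collide.
sqlite3.register_converter('MYTYPE', lambda b: 'UPPER')
sqlite3.register_converter('mytype', lambda b: 'lower')

print('MYTYPE' in sqlite3.converters)
print('mytype' in sqlite3.converters)
```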
[issue34018] SQLite converters are documented to be sensitive to the case of type names, but they're not
Change by Sergey Fedoseev : -- keywords: +patch pull_requests: +7651 stage: -> patch review ___ Python tracker <https://bugs.python.org/issue34018> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue34041] add *deterministic* parameter to sqlite3.Connection.create_function()
New submission from Sergey Fedoseev : SQLite 3.8.3 and higher allows marking created functions as deterministic, which enables additional query optimizations. There is currently no way to do this from Python. https://sqlite.org/c3ref/create_function.html -- components: Extension Modules messages: 321027 nosy: sir-sigurd priority: normal severity: normal status: open title: add *deterministic* parameter to sqlite3.Connection.create_function() type: enhancement ___ Python tracker <https://bugs.python.org/issue34041> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
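At the C level this maps to passing the SQLITE_DETERMINISTIC flag to sqlite3_create_function(). A Python-level API along the lines of the proposal could look like this (the `deterministic` keyword shown here is the parameter being proposed; it requires an SQLite runtime of at least 3.8.3):

```python
import sqlite3

con = sqlite3.connect(':memory:')

# `deterministic=True` promises SQLite that the function always returns
# the same result for the same arguments, so the engine may cache and
# reuse results, e.g. when the function appears in an index or a WHERE
# clause evaluated once per row.
con.create_function('double', 1, lambda x: x * 2, deterministic=True)

print(con.execute('SELECT double(21)').fetchone()[0])
```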
[issue34041] add *deterministic* parameter to sqlite3.Connection.create_function()
Change by Sergey Fedoseev : -- keywords: +patch pull_requests: +7687 stage: -> patch review ___ Python tracker <https://bugs.python.org/issue34041> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue34052] sqlite's create_function() raises exception on unhashable callback, but creates function
New submission from Sergey Fedoseev :

In [1]: import sqlite3

In [2]: con = sqlite3.connect(':memory:')

In [3]: con.execute('SELECT f()')
---------------------------------------------------------------------------
OperationalError                          Traceback (most recent call last)
<ipython-input-3-...> in <module>()
----> 1 con.execute('SELECT f()')
OperationalError: no such function: f

In [4]: con.create_function('f', 0, [])
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-4-...> in <module>()
----> 1 con.create_function('f', 0, [])
TypeError: unhashable type: 'list'

In [5]: con.execute('SELECT f()')
---------------------------------------------------------------------------
OperationalError                          Traceback (most recent call last)
<ipython-input-5-...> in <module>()
----> 1 con.execute('SELECT f()')
OperationalError: user-defined function raised exception

It seems that something like this can cause a segmentation fault, but I can't reproduce it. Some other similar sqlite functions are also affected. They can easily be modified to accept unhashable objects, but that should probably be done in another issue. -- components: Extension Modules messages: 321094 nosy: sir-sigurd priority: normal severity: normal status: open title: sqlite's create_function() raises exception on unhashable callback, but creates function type: behavior versions: Python 2.7, Python 3.8 ___ Python tracker <https://bugs.python.org/issue34052> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
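The symptom suggests an ordering problem: the function is registered with SQLite first, and only afterwards is the callback stored in a hash-based container, where hashing the list raises, leaving a half-registered function behind. A hypothetical Python sketch of that ordering (all names here are illustrative, not the actual C implementation):

```python
class Connection:
    """Toy model of the registration ordering, not real sqlite3 code."""

    def __init__(self):
        self._functions = {}      # keeps callback objects alive (needs hashing)
        self._registered = set()  # stands in for SQLite's own function registry

    def create_function_buggy(self, name, func):
        self._registered.add(name)    # SQLite registration succeeds first...
        self._functions[func] = name  # ...then hashing an unhashable func raises

    def create_function_fixed(self, name, func):
        self._functions[func] = name  # hash (and possibly fail) up front
        self._registered.add(name)    # only then register with SQLite

con = Connection()
try:
    con.create_function_buggy('f', [])
except TypeError:
    pass
print('f' in con._registered)  # True: a broken 'f' was left registered
```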
[issue34052] sqlite's create_function() raises exception on unhashable callback, but creates function
Change by Sergey Fedoseev : -- keywords: +patch pull_requests: +7702 stage: -> patch review ___ Python tracker <https://bugs.python.org/issue34052> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue34151] use malloc() for better performance of some list operations
New submission from Sergey Fedoseev : Currently list concatenation, slicing and repeating operations use PyList_New(), which allocates memory for the items with calloc(). malloc() could be used instead, since the allocated memory is overwritten by the mentioned operations.

I made benchmarks with this script:

NAME=list-malloc-master.json
python -m perf timeit --name slice0 -s "l = [None]*100" "l[:0]" --duplicate=2048 --append $NAME
python -m perf timeit --name slice1 -s "l = [None]*100" "l[:1]" --duplicate=1024 --append $NAME
python -m perf timeit --name slice2 -s "l = [None]*100" "l[:2]" --duplicate=1024 --append $NAME
python -m perf timeit --name slice3 -s "l = [None]*100" "l[:3]" --duplicate=1024 --append $NAME
python -m perf timeit --name slice100 -s "l = [None]*100" "l[:100]" --append $NAME
python -m perf timeit --name cat0 -s "l = [None]*0" "l + l" --duplicate=1024 --append $NAME
python -m perf timeit --name cat1 -s "l = [None]*1" "l * 1" --duplicate=1024 --append $NAME
python -m perf timeit --name cat2 -s "l = [None]*2" "l * 1" --duplicate=1024 --append $NAME
python -m perf timeit --name cat3 -s "l = [None]*3" "l * 1" --duplicate=1024 --append $NAME
python -m perf timeit --name cat100 -s "l = [None]*100" "l * 1" --append $NAME
python -m perf timeit --name 1x0 -s "l = [None]" "l * 0" --duplicate=1024 --append $NAME
python -m perf timeit --name 1x1 -s "l = [None]" "l * 1" --duplicate=1024 --append $NAME
python -m perf timeit --name 1x2 -s "l = [None]" "l * 2" --duplicate=1024 --append $NAME
python -m perf timeit --name 1x3 -s "l = [None]" "l * 3" --duplicate=1024 --append $NAME
python -m perf timeit --name 1x100 -s "l = [None]" "l * 100" --append $NAME

Here's a comparison table:

+-----------+--------------------+------------------------------+
| Benchmark | list-malloc-master | list-malloc                  |
+===========+====================+==============================+
| slice1    | 84.5 ns            | 59.6 ns: 1.42x faster (-30%) |
+-----------+--------------------+------------------------------+
| slice2    | 71.6 ns            | 61.8 ns: 1.16x faster (-14%) |
+-----------+--------------------+------------------------------+
| slice3    | 74.4 ns            | 63.6 ns: 1.17x faster (-15%) |
+-----------+--------------------+------------------------------+
| slice100  | 4.39 ms            | 4.08 ms: 1.08x faster (-7%)  |
+-----------+--------------------+------------------------------+
| cat0      | 23.9 ns            | 24.9 ns: 1.04x slower (+4%)  |
+-----------+--------------------+------------------------------+
| cat1      | 73.2 ns            | 51.9 ns: 1.41x faster (-29%) |
+-----------+--------------------+------------------------------+
| cat2      | 61.6 ns            | 53.1 ns: 1.16x faster (-14%) |
+-----------+--------------------+------------------------------+
| cat3      | 63.0 ns            | 54.3 ns: 1.16x faster (-14%) |
+-----------+--------------------+------------------------------+
| cat100    | 4.38 ms            | 4.08 ms: 1.07x faster (-7%)  |
+-----------+--------------------+------------------------------+
| 1x0       | 27.1 ns            | 27.7 ns: 1.02x slower (+2%)  |
+-----------+--------------------+------------------------------+
| 1x1       | 72.9 ns            | 51.9 ns: 1.41x faster (-29%) |
+-----------+--------------------+------------------------------+
| 1x2       | 60.9 ns            | 52.9 ns: 1.15x faster (-13%) |
+-----------+--------------------+------------------------------+
| 1x3       | 62.5 ns            | 54.8 ns: 1.14x faster (-12%) |
+-----------+--------------------+------------------------------+
| 1x100     | 2.67 ms            | 2.34 ms: 1.14x faster (-12%) |
+-----------+--------------------+------------------------------+

Not significant (1): slice0

-- components: Interpreter Core messages: 321905 nosy: sir-sigurd priority: normal severity: normal status: open title: use malloc() for better performance of some list operations type: performance ___ Python tracker <https://bugs.python.org/issue34151> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
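The underlying difference between the two allocators can be observed from Python via ctypes: calloc() zero-fills the block while malloc() does not, and when every slot is overwritten immediately afterwards (as list slicing, concatenation and repetition do), that zero-fill is pure overhead. A small sketch (assumes a Unix-like libc loadable via `ctypes.CDLL(None)`):

```python
import ctypes

# Load the C library the interpreter is linked against (Unix assumption).
libc = ctypes.CDLL(None)
libc.calloc.restype = ctypes.c_void_p
libc.malloc.restype = ctypes.c_void_p
libc.free.argtypes = [ctypes.c_void_p]

n = 16

# calloc() guarantees zeroed memory...
p = libc.calloc(n, 1)
zeroed = ctypes.string_at(p, n)
libc.free(p)
print(zeroed == b'\x00' * n)

# ...malloc() makes no such guarantee; its contents are indeterminate,
# which is fine when the caller writes every byte before reading any.
q = libc.malloc(n)
libc.free(q)
```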
[issue34151] use malloc() for better performance of some list operations
Change by Sergey Fedoseev : -- keywords: +patch pull_requests: +7869 stage: -> patch review ___ Python tracker <https://bugs.python.org/issue34151> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com