Marco Sulla added the comment:
Well, the fact is that, for the other libraries, you don't have to re-run
`configure`. You only have to install the missing C libraries and redo `make`.
This works, for example, for zlib, lzma, ctypes, sqlite3, readline, and bzip2.
Furthermore, it happened to
Marco Sulla added the comment:
Ah, well, this is not possible. I was banned from the mailing list. I wrote my
"defense" to conduct...@python.org on 2019-12-29, and I'm still waiting
for a response...
--
___
Python
Marco Sulla added the comment:
I see that many breaking changes were made in recent releases. These are just
the ones for `asyncio` in Python 3.8:
https://bugs.python.org/issue36921
https://bugs.python.org/issue36373
https://bugs.python.org/issue34790
https://bugs.python.org/issue32528
https
Marco Sulla added the comment:
I think in this case the error is more trivial: `Programs/_testembed.c`
is simply compiled with g++ when it should be compiled with gcc.
Indeed, there are many gcc-only options in the compilation of
`Programs/_testembed.c`, and g++ complains about them
New submission from Marco Sulla :
I noticed that `__contains__()` and `__getitem__()` of subclasses of `dict` are
much slower. I asked why on Stack Overflow, and a user seems to have found the
reason.
The problem, according to them, is that `dict` implements `__contains__()` and
`__getitem__
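The slowdown is easy to observe with a minimal sketch like the following (the subclass name and benchmark setup are mine, not from the original report):

```python
from timeit import timeit

class DictSub(dict):
    pass  # no overrides: any slowdown comes purely from subclassing

plain = {i: i for i in range(100)}
sub = DictSub(plain)

# `in` goes through dict's fast path for `plain`, but through slower
# generic lookup machinery for the subclass
t_plain = timeit("50 in d", globals={"d": plain})
t_sub = timeit("50 in d", globals={"d": sub})
print(f"dict: {t_plain:.3f}s  subclass: {t_sub:.3f}s")
```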
Change by Marco Sulla :
--
resolution: -> duplicate
stage: -> resolved
status: open -> closed
___
Python tracker
<https://bugs.python.org/issue39754>
___
___
Marco Sulla added the comment:
I asked why on Stack Overflow, and a user seems to have found the reason. The
problem, according to them, is in `update_one_slot()`.
`dict` implements `__contains__()` and `__getitem__()` directly. Usually,
`sq_contains` and `mp_subscript` are wrapped to implement
Change by Marco Sulla :
--
resolution: not a bug -> rejected
___
Python tracker
<https://bugs.python.org/issue39698>
___
___
Python-bugs-list mailing list
Un
Marco Sulla added the comment:
> I also distinctly remember seeing code (and writing such code myself) that
> performs computation on timeouts and does not care if the end value goes
> below 0.
This is not good statistics. Frankly, we can't measure the impact of the
cha
New submission from Marco Sulla :
I think a tuple comprehension could be very useful.
Currently, the only way to efficiently create a tuple from a comprehension is
to create a list comprehension (generator expressions are slower) and
convert it with `tuple()`.
A tuple comprehension
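The comparison described above can be sketched like this (timings vary by machine):

```python
from timeit import timeit

# The two current ways to build a tuple from a comprehension-like expression:
t_list = timeit("tuple([x * x for x in range(1000)])", number=10_000)
t_gen = timeit("tuple(x * x for x in range(1000))", number=10_000)

# The list-comprehension route is typically faster than the generator route.
print(f"tuple(list comp): {t_list:.3f}s  tuple(genexp): {t_gen:.3f}s")
```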
New submission from Marco Sulla :
(venv_3_9) marco@buzz:~/sources/python-frozendict$ python
Python 3.9.0a0 (heads/master-dirty:d8ca2354ed, Oct 30 2019, 20:25:01)
[GCC 9.2.1 20190909] on linux
Type "help", "copyright", "credits" or "license" for more inf
Marco Sulla added the comment:
Sorry, but I can't figure out what code this change could break. Integers are
implicitly converted to floats in operations with floats. How can this change
break old code?
> if you are worried about the performance
No, I'm worried about the ex
Marco Sulla added the comment:
All the examples you mentioned seem to me to fix code rather than break it.
About 1e300**1, it's not a bug at all. No one can stop you from filling your RAM
in many other ways :-D
About conventions, it does not seem to me that Python cares about
Marco Sulla added the comment:
> >>> int(1e100)
> 1159028911097599180468360808563945281389781327557747838772170381060813469985856815104
.
Oh my God... I'm just more convinced than before :-D
> Ya, this change will never be made - give up gracef
New submission from Marco Sulla :
During `make test`, I get the error in the title.
(venv_3_9) marco@buzz:~/sources/cpython_test$ ll /dev/tty
crw-rw-rw- 1 root tty 5, 0 Mar 1 15:24 /dev/tty
--
components: Tests
messages: 363063
nosy: Marco Sulla
priority: normal
severity: normal
Marco Sulla added the comment:
The problem is here:
Programs/_testembed.o: $(srcdir)/Programs/_testembed.c
$(MAINCC) -c $(PY_CORE_CFLAGS) -o $@ $(srcdir)/Programs/_testembed.c
`MAINCC` in my Makefile is `g++-9`. Probably, MAINCC is set to the value of
`--with-cxx-main`, if
Change by Marco Sulla :
--
keywords: +patch
pull_requests: +18079
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/18721
___
Python tracker
<https://bugs.python.org/issu
Marco Sulla added the comment:
https://github.com/python/cpython/pull/18721
--
___
Python tracker
<https://bugs.python.org/issue39697>
___
___
Marco Sulla added the comment:
Mmmm... wait a moment. It seems the behavior is intended:
https://bugs.python.org/issue1324762
I quote:
The patch contains the following changes:
[...]
2) The compiler used to translate python's main() function is
stored in the configure / Mak
Marco Sulla added the comment:
OS: Lubuntu 18.04.4
Steps to reproduce:
sudo apt-get install git libbz2-dev liblzma-dev uuid-dev libffi-dev
libsqlite3-dev libreadline-dev libssl-dev libgdbm-dev libgdbm-compat-dev tk-dev
libncurses5-dev
git clone https://github.com/python/cpython.git
cd
Marco Sulla added the comment:
Okay... if I have understood correctly, the problem is with C++ extensions.
Some questions:
1. does this problem still exist?
2. if yes, does Python maybe have to wrap python.c and _testembed.c so they can
also be compiled with a C++ compiler?
3. --with-cxx-main is
Marco Sulla added the comment:
Furthermore, there's one thing I have not understood: if I understand correctly,
--with-cxx-main is used on _some_ platforms that have problems with C++
extensions. What platforms? Is there somewhere a unit test that checks whether
Python compiled on one of these platforms with
New submission from Marco Sulla :
I suggest adding an implementation of bracketed paste mode to the REPL.
Currently, if you, for example, copy and paste a piece of Python code to see if
it works, and the code has a blank line without indentation while the previous
and next lines are indented,
Marco Sulla added the comment:
> Is this even possible in a plain text console?
Yes. See Jupyter Console (aka IPython).
--
___
Python tracker
<https://bugs.python.org/issu
Marco Sulla added the comment:
Please read the message of Terry J. Reedy:
https://bugs.python.org/issue38747#msg356345
I quote the relevant part below
> Skipping the rest of your post, I will just restate why I closed this
> issue.
>
> 1. It introduces too many features
Marco Sulla added the comment:
I agree with Pablo Galindo Salgado: https://bugs.python.org/issue35912#msg334942
The "quick and dirty" solution is to change MAINCC to CC, for _testembed.c AND
python.c (g++ fails with both).
After that, _testembed.c and python.c should be changed s
Marco Sulla added the comment:
Excuse me, but my original "holistic" proposal was rejected, and it was
suggested that I propose only relevant changes, one per issue. Now you
say exactly the contrary. I feel a bit confused.
PS: yes, I can, and do, use IPython. But IMHO IPytho
New submission from Marco Sulla :
In the `string` module, there's a little-known class, `Template`. It implements
a very simple template, but it has an interesting method: `safe_substitute()`.
`safe_substitute()` permits you to not fill the entire Template at one time. On
the contrar
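A minimal sketch of the partial-substitution behavior (the template text is my example):

```python
from string import Template

t = Template("$greeting, $name!")

# safe_substitute() leaves unknown placeholders in place instead of raising
partial = t.safe_substitute(greeting="Hello")
print(partial)  # Hello, $name!

# The partially filled string can be used as a new template later
full = Template(partial).substitute(name="world")
print(full)     # Hello, world!
```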
Marco Sulla added the comment:
IMHO such a feature is useful for sysadmins who do not have a graphical
interface, such as Debian without an X server. That's why vi is (unluckily)
still very popular in 2020. IDLE can't be used in these cases.
Windows users can't remotely log in withou
New submission from Marco Sulla :
I got this warning. I suppose that `distutils` can use any iterable.
--
components: Distutils
messages: 363354
nosy: Marco Sulla, dstufft, eric.araujo
priority: normal
severity: normal
status: open
title: Warning: 'classifiers' should be a
Marco Sulla added the comment:
> Do you have some concrete use case for this?
Yes, for EWA:
https://marco-sulla.github.io/ewa/
Since it's a code generator, it uses templates a lot, and many times I have felt
the need for a partial substitution. In the end I solved it with some ugl
Marco Sulla added the comment:
This is IMHO broken.
1. _ensure_list() allows strings because, as the documentation says, they are
split in finalize_options(). But finalize_options() only splits keywords and
platforms. It does _not_ split classifiers.
2. there's no need that key
Change by Marco Sulla :
--
resolution: -> duplicate
stage: -> resolved
status: open -> closed
type: -> behavior
___
Python tracker
<https://bugs.python
Marco Sulla added the comment:
> What would "{} {}".partial_format({}) return?
`str.partial_format()` was proposed exactly to avoid such tricks.
> It is not possible to implement a "safe" variant of str.format(),
> because in difference to Template it can call ar
Marco Sulla added the comment:
@Eric V. Smith: thank you for your effort, but I'll never use an API that is
marked as private and is furthermore undocumented.
--
___
Python tracker
<https://bugs.python.org/is
New submission from Marco Sulla :
This is a little PR with some micro-optimizations to the PySequence_Tuple()
function. Mainly, it simply adds a support variable new_n_tmp_1 instead of
reassigning newn multiple times.
--
components: Interpreter Core
messages: 363974
nosy: Marco Sulla
Marco Sulla added the comment:
The PR will probably be rejected... you can do something like this:
1. in the venv on your machine, run `pip freeze`. This gives you the whole list
of installed dependencies
2. download all the packages using `pip download`
3. copy all the packages to the cloud
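The steps above can be sketched roughly as follows (paths and file names are my examples):

```shell
# 1. capture the installed dependencies of the active venv
pip freeze > requirements.txt

# 2. download all the packages (and their dependencies) locally
pip download -r requirements.txt -d ./packages

# 3. copy ./packages to the target machine, then install offline there:
pip install --no-index --find-links ./packages -r requirements.txt
```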
Marco Atzeri added the comment:
The analysis is correct.
Removing the test for CYGWIN and always including the
solved the problem of building all Python (3.6, 3.7, 3.8) packages
https://sourceware.org/pipermail/cygwin-apps/2020-December/040845.html
https://sourceware.org/pipermail/cygwin
New submission from Marco Franzo :
It would be better to write this at the end of the program:
os.system('stty sane')
because when you import readline, the console remains unusable at the end of
the program
--
assignee: docs@python
components: Documentation
messages: 384379
Marco Franzo added the comment:
So, I use Ubuntu 20.10, and the terminal is the one distributed with the system.
I think this problem arises from my code here:
def generate_input():
    while True:
        str = input().strip()
        yield helloworld_pb2.Operazione(operazione = str)
I think
New submission from Marco Barisione :
The generation of pickle files in load_grammar in lib2to3/pgen2/driver.py is
racy as other processes may end up reading a half-written pickle file.
This is reproducible with the command line tool, but it's easier to reproduce
by importing lib2to3
Marco Paolini added the comment:
Hello Thomas,
do you need any help fixing the conflicts in your PR?
Even if Lib/warnings.py has changed a little in the last 2 years, your PR is
still good!
--
nosy: +mpaolini
___
Python tracker
<ht
New submission from Marco Trevisan :
Webbrowser uses env variables such as GNOME_DESKTOP_SESSION_ID that have been
dropped by GNOME in recent releases
--
components: Library (Lib)
messages: 374806
nosy: Trevinho
priority: normal
severity: normal
status: open
title: webbrowser uses
Change by Marco Trevisan :
--
keywords: +patch
pull_requests: +20875
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/21731
___
Python tracker
<https://bugs.python.org/issu
Marco Paolini added the comment:
This happens because the default value for the start argument is zero , hence
the first operation is `0 + 'a'`
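A quick sketch of what happens:

```python
# sum() starts from 0 by default, so the first operation is 0 + 'a'
try:
    sum(['a', 'b', 'c'])
except TypeError as exc:
    print(exc)  # unsupported operand type(s) for +: 'int' and 'str'
```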
--
nosy: +mpaolini
___
Python tracker
<https://bugs.python.o
Marco Paolini added the comment:
Also worth noting: the start argument, by contrast, is type-checked. Maybe we
could apply the same check to the items of the iterable?
python3 -c "print(sum(('a', 'b', 'c'), start='d'))"
Traceback (most recent
Marco Paolini added the comment:
I was thinking of just clarifying a bit the error message that results from
PyNumber_Add. This won't make the "hot" path slower.
Doing something like (not compile-tested, sorry):
--- a/Python/bltinmodule.c
+++ b/Python/bltinmodule.c
@@ -
New submission from Marco Sulla :
I've done a PR that speeds up the vectorcall creation of a dict using keyword
arguments. In practice, the PR creates an insertdict_init(), a specialized
version of insertdict. I quote the comment to the function:
Same to insertdict but specialize
Marco Sulla added the comment:
> `dict(**o)` is not common use case. Could you provide some other benchmarks?
You can do
python -m timeit -n 200 "dict(key1=1, key2=2, key3=3, key4=4, key5=5,
key6=6, key7=7, key8=8, key9=9, key10=10)"
or with pyperf. In this case, sinc
New submission from Marco Sulla :
All pickle error messages in typeobject.c were a generic "cannot pickle 'type'
object". I added a specific explanation for each individual error.
--
components: Interpreter Core
messages: 377747
nosy: Marco Sulla
priority: normal
pull_requ
Marco Sulla added the comment:
I do not remember the problem I had, but when I experimented with frozendict I
got one of these errors. I failed to understand the problem, so I added the
additional info.
Maybe add an assert in debug mode? It would be visible only to devs
Marco Sulla added the comment:
I closed it for this reason:
https://github.com/python/cpython/pull/22438#issuecomment-702794261
--
stage: -> resolved
status: open -> closed
___
Python tracker
<https://bugs.python.org/i
New submission from Marco Castelluccio :
Shelve is currently defaulting to Pickle protocol 3, instead of using Pickle's
default protocol for the Python version in use.
This way, Shelve's users don't benefit from improvements introduced in newer
Pickle protocols, unless the
Change by Marco Castelluccio :
--
keywords: +patch
pull_requests: +21713
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/22751
___
Python tracker
<https://bugs.python.org/issu
Marco Sulla added the comment:
Another bench:
python -m pyperf timeit --rigorous "dict(ihinvdono='doononon',
gowwondwon='nwog', bdjbodbob='nidnnpn', nwonwno='vndononon',
dooodbob='iohiwipwgpw', doidonooq='ndwnnpnpnp', fn
Marco Sulla added the comment:
@methane: well, to be honest, I don't see much difference between the two
PRs. The major difference is that you merged insertdict_init into
dict_merge_init.
But I kept insertdict_init separate on purpose, because this function can be
used in other f
Marco Sulla added the comment:
@Mark.Shannon I tried to run pyperformance, but wheel does not work for Python
3.10. I get the error:
AssertionError: would build wheel with unsupported tag ('cp310', 'cp310',
'linux_x86_64')
--
Marco Sulla added the comment:
I commented out sqlalchemy in the requirements.txt in the pyperformance source
code, and it worked. I also had to skip tornado:
pyperformance run -r
-b,-sqlalchemy_declarative,-sqlalchemy_imperative,-tornado_http -o
../perf_master.json
This is my result
New submission from Marco Sulla :
The PR #22948 is an augmented version of #22346. It speeds up also the creation
of:
1. dicts from other dicts that are not "perfect" (combined and without holes)
2. fromkeys
3. copies of dicts with many holes
4. dict from keywords, as in #22346
Marco Sulla added the comment:
I'm quite sure I didn't reinvent the wheel :) but I think it's a good improvement:
| pathlib | 35.8 ms | 35.1 ms | 1.02x faster | Significant (t=13.21) |
| scimark_monte_carlo | 176 ms | 172 ms
Marco Sulla added the comment:
Note that this time I see no slowdown in the macro benchmarks, since I used
normal builds, not optimized ones. I suppose an optimized build will show a
slowdown because the new functions are not in the test ba
Marco Sulla added the comment:
The fact is that, IMHO, PGO will "falsify" the results, since it's quite
improbable that the test battery includes a test of the creation of a dict from
another dict with a hole. It seems to me that the comparison between the
normal builds
Marco Sulla added the comment:
Well, after a second thought I think you're right, there's no significant
advantage and too much duplicated code.
--
stage: -> resolved
status: open -> closed
___
Python tracker
<https://bugs.py
Change by Marco Castelluccio :
--
nosy: +marco-c
nosy_count: 6.0 -> 7.0
pull_requests: +21928
pull_request: https://github.com/python/cpython/pull/22751
___
Python tracker
<https://bugs.python.org/issu
Marco Castelluccio added the comment:
I've opened https://github.com/python/cpython/pull/22751 to fix this, I know
there was already a PR, but it seems to have been abandoned.
--
___
Python tracker
<https://bugs.python.org/is
Marco Sulla added the comment:
Well, following your example, since split dicts seem to no longer be supported,
I decided to be more drastic. If you look at the last push in PR 22346, I no
longer check but always resize, so the dict is always combined. This seems to
be especially good for
Marco Sulla added the comment:
Well, actually Serhiy is right: it does not seem that the macro benchmarks
showed anything significant. Maybe the code can be used in other parts of
CPython, for example in _pickle, where dicts are loaded. But it also needs to be
exposed, maybe internally only
Marco Sulla added the comment:
I did PGO+LTO... --enable-optimizations --with-lto
--
___
Python tracker
<https://bugs.python.org/issue41835>
___
___
New submission from Marco Sulla :
I'm talking about python3 -m venv VIRTUALENV_NAME, not about the virtualenv
binary.
Some remarks:
1. the `VIRTUAL_ENV` variable in the `activate` script is the absolute path of
the virtualenv folder
2. A symlink to the `python3` bin of the machine is cr
Marco Sulla added the comment:
Well, I didn't know `--copy`. I think I'll use it. :)
What about VIRTUAL_ENV="$(dirname "$(dirname "$(readlink -nf "$0")")")"? In
`bash` and in `sh` it works.
--
__
New submission from Marco Sulla :
It's really useful and easy to have a requirements.txt. It also integrates with
GitHub, which tells you if you're specifying a version of a library with
security issues.
I don't understand why this flag is missing in Windows builds. It seems
Marco Sulla added the comment:
Excuse me, after a
python -m pip install --upgrade setuptools
python -m pip install --upgrade pip
it works like a charm.
--
resolution: -> works for me
stage: test needed -> resolved
status: pending -> closed
versions: +Python 3.6 -P
Marco Sulla added the comment:
> if you have entry points installed then moving them to another
> machine would break their shebang lines.
Not if you port it on the same OS using, for example
#!/usr/bin/env python3
> And even if you do it on your local machine there'
Marco Sulla added the comment:
Well, what about the modification to VIRTUAL_ENV? I think it's small and
harmless.
--
___
Python tracker
<https://bugs.python.org/is
Marco Sulla added the comment:
> Changing VIRTUAL_ENV will break code
VIRTUAL_ENV stays the same if you don't move the venv. Moving it would be an
unofficial, unsupported bonus, and if your code is bad and it doesn't work
Marco Sulla added the comment:
Well, I repeat, not with
#!/usr/bin/env python3
--
___
Python tracker
<https://bugs.python.org/issue36964>
___
___
Marco Sulla added the comment:
The previous post was for Laurie Opperman.
"upset people with requiring everyone to update their code"
I don't know why they would have to be upset. Until now they couldn't move the
folder. They want to move the folder? They have to change their co
Marco Sulla added the comment:
Furthermore, if you destroy an old virtual env and recreate it with the new
method, it continues to work as before, since VIRTUAL_ENV points to the same
folder. We don't force anyone to change their code if they continue to use
virtual environments a
Marco Sulla added the comment:
Please, Mr. Cannon, can you read my last posts? I think they describe not a mad
idea, but something reasonable.
--
nosy: +brett.cannon
___
Python tracker
<https://bugs.python.org/issue36
Marco Sulla added the comment:
> I don't like the idea of changing what VIRTUAL_ENV gets set to when I
> believe you should recreate the virtual environment as necessary and
> risk surprising people who expect VIRTUAL_ENV to function as it does
> today and has for years.
New submission from Marco Dickert :
I guess I found a bug in the documented Queue.join() example [1].
The problem is the break condition for the while loop of the worker. If the
item is None, the loop breaks, but the worker never calls item.task_done().
Thus the q.join() statement never
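A minimal sketch of the pattern and of the fix being described, calling task_done() for the sentinel too (the worker and sentinel names are mine):

```python
import queue
import threading

q = queue.Queue()

def worker():
    while True:
        item = q.get()
        if item is None:
            q.task_done()  # without this, q.join() hangs when join() runs after the sentinel is queued
            break
        print(f"processing {item}")
        q.task_done()

t = threading.Thread(target=worker)
t.start()
for i in range(3):
    q.put(i)
q.put(None)  # sentinel telling the worker to stop
q.join()     # returns only once every queued item, sentinel included, is marked done
t.join()
```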
Marco Dickert added the comment:
Sorry, I missed that q.join() is executed *before* the "None" item is added to
the queue.
In my real-world case I called q.join() *after* I added the "None" item.
--
resolution: -> not a bug
stage: -> resol
New submission from Marco Paolini :
I analysed the performance of json.loads in one production workload we have.
py-spy tells me the majority of the time is spent in the C json module (see
events.svg)
Digging deeper, Linux perf tells me the hottest loop (where 20%+ of the time is
spent) is in
Change by Marco Paolini :
--
keywords: +patch
pull_requests: +14547
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/14752
___
Python tracker
<https://bugs.python.org/issu
Change by Marco Paolini :
--
nosy: +ezio.melotti, rhettinger
___
Python tracker
<https://bugs.python.org/issue37587>
___
___
Marco Paolini added the comment:
Also on my real workload (loading a 60GB jsonl file containing mostly strings) I
measured a 10% improvement
--
___
Python tracker
<https://bugs.python.org/issue37
Marco Paolini added the comment:
Here's the real world example
$ ls -hs events-100k.json
84M events-100k.json
+---+-+-+
| Benchmark | vanilla-bpo-events-100k | patched-bpo-events
Marco Paolini added the comment:
On gcc, running the tests above, the only change that is relevant for the
speedup is switching around the strict check. Removing the extra MOV related to
the outer "c" variable is not significant (at least on gcc, in the few tests I
did).
Unfortunatel
Marco Paolini added the comment:
I am also working on a different patch that uses the "pcmpestri" SSE4 processor
instruction, it looks like this for now.
While at it I realized there is (maybe) another potential speedup: avoiding the
ucs4lib_find_max_char we do for each chunk of
Marco Paolini added the comment:
I forgot to mention, I was inspired by @christian.heimes 's talk at EuroPython
2019
https://ep2019.europython.eu/talks/es2pZ6C-introduction-to-low-level-profiling-and-tracing/
(thanks!)
--
___
Python tr
Marco Paolini added the comment:
@steve.dower yes, that's what made me discard that experiment we did during the
sprint.
Ok will test your new patch soon
--
___
Python tracker
<https://bugs.python.org/is
Change by Marco Sulla :
--
nosy: Marco Sulla
priority: normal
severity: normal
status: open
title: Propose to deprecate ignore_errors,
___
Python tracker
<https://bugs.python.org/issue37
New submission from Marco Sulla :
I propose to mark the parameters `ignore_errors` and `onerror` of
`shutil.rmtree()` as deprecated, and to raise a warning if they are used.
The reason is that I feel they are unpythonic. To emulate `ignore_errors=True`,
the code can be written simply with a `try-except` that
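A sketch of the emulation being suggested (the path is my example):

```python
import shutil

# Emulating ignore_errors=True with a plain try/except
try:
    shutil.rmtree("/tmp/some-missing-directory")  # example path
except OSError:
    pass  # swallow any removal error, as ignore_errors=True would
```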
New submission from Marco Sulla :
Currently, even if two `Element`s elem1 and elem2 are different objects whose
trees are identical, elem1 == elem2 returns False. The only effective way to
compare two `Element`s is
ElementTree.tostring(elem1) == ElementTree.tostring(elem2)
Furthermore, from
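The behavior being described can be demonstrated like this:

```python
import xml.etree.ElementTree as ElementTree

e1 = ElementTree.fromstring("<root><child a='1'>text</child></root>")
e2 = ElementTree.fromstring("<root><child a='1'>text</child></root>")

# Element does not define structural equality, so identity is compared
print(e1 == e2)  # False

# Serializing both trees is the workaround mentioned above
print(ElementTree.tostring(e1) == ElementTree.tostring(e2))  # True
```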
Marco Buttu added the comment:
Or maybe: "tuple of names of global variables used in the bytecode"
--
nosy: +marco.buttu
___
Python tracker
<http://bugs.python.o
Marco Buttu added the comment:
Terry, thanks for opening this issue.
The title of the FAQ makes me think that the section wants to clarify why -22
// 10 returns -3. I am a bit confused, maybe because -22//10 == -3 does not
surprise me, and so I do not understand the point :(
This seems to
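For context, Python's floor division rounds toward negative infinity, and the usual divmod invariant holds:

```python
# -22 / 10 is -2.2, and flooring it gives -3
print(-22 // 10)  # -3
print(-22 % 10)   # 8

# The invariant a == (a // b) * b + a % b always holds
assert -22 == (-22 // 10) * 10 + (-22 % 10)
```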
New submission from Marco Buttu:
In the doc there are several references [*] to frozen modules, but there is no
definition of them. Do you think we should add a "frozen module" definition to
the glossary?
* Doc/library/importlib.rst, Doc/library/imp.rst, Doc/reference/
Changes by Marco Buttu :
--
pull_requests: +356
___
Python tracker
<http://bugs.python.org/issue16355>
___
___
Changes by Marco Buttu :
--
nosy: +marco.buttu
___
Python tracker
<http://bugs.python.org/issue29716>
___
___
Changes by Marco Buttu :
--
pull_requests: +499
___
Python tracker
<http://bugs.python.org/issue27200>
___
___