[issue12638] urllib.URLopener prematurely deletes files on cleanup

2011-07-25 Thread Carl

New submission from Carl :

urllib.URLopener (or urllib.request.URLopener for Python 3) and user defined 
classes that inherit from these prematurely delete files upon cleanup.  Any 
temporary files downloaded using the .retrieve() method are deleted when an 
instance of a URLopener is garbage collected.

I feel this is a violation of the caller's expectations, since the filename is 
returned to the caller and the file is then silently deleted.  It is possible 
to simply override the .cleanup() method, but I do not think that is a good 
solution.
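As a rough illustration of the reported behavior (this is a sketch, not the 
attached bug2.py; the URL is only an example; Python 2 assumed):

import os
import urllib

opener = urllib.URLopener()
# retrieve() with no filename argument downloads to a temporary file
filename, headers = opener.retrieve('http://www.example.com/')
print os.path.exists(filename)   # True: the caller is handed a real path

del opener                       # collecting the opener runs cleanup()
print os.path.exists(filename)   # False: the file has been silently removed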

--
components: None
files: bug2.py
messages: 141094
nosy: carlbook
priority: normal
severity: normal
status: open
title: urllib.URLopener prematurely deletes files on cleanup
type: behavior
versions: Python 2.6, Python 2.7, Python 3.2
Added file: http://bugs.python.org/file22750/bug2.py

___
Python tracker 
<http://bugs.python.org/issue12638>
___



[issue12638] urllib.URLopener prematurely deletes files on cleanup

2011-07-25 Thread Carl

Carl  added the comment:

@orsenthil, that is the correct behavior if you do not want to override any of 
URLopener's handlers for error codes.  In my case, I wanted to subclass 
FancyURLopener (a child class of URLopener) to override its HTTP 401 behavior.  
Using urlretrieve is not suitable in that case.

I have also included Python 3.2 code; I didn't test 3.1.
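A sketch of that use case (shown here with Python 2's urllib; the same pattern 
applies to urllib.request in 3.2 -- this is illustrative, not the attached 
bug3.py, and the URL is an example):

import urllib

class MyOpener(urllib.FancyURLopener):
    def http_error_401(self, url, fp, errcode, errmsg, headers, data=None):
        # Replace FancyURLopener's basic-auth retry loop with custom handling.
        raise IOError('authentication required for %s' % url)

opener = MyOpener()
filename, headers = opener.retrieve('http://www.example.com/')
# As reported, the temporary file behind `filename` disappears once `opener`
# is garbage collected, because URLopener.cleanup() deletes it.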

--
Added file: http://bugs.python.org/file22752/bug3.py

___
Python tracker 
<http://bugs.python.org/issue12638>
___



[issue33258] Unable to install 3.6.5 on Windows Server 2008

2018-04-10 Thread Carl

New submission from Carl :

Hello,

I am trying to install Python 3.6.5 on a Windows Server 2008 R2 SP1 server.

I have tried both the python-3.6.5.exe and python-3.6.5-amd64.exe installers.

Neither will run on the server, whether launched from the GUI or from a command 
prompt with admin privileges. The installer appears to start, but nothing 
happens; no response or error is given. Are there any other methods to install 
this version?

--
components: Windows
messages: 315173
nosy: hpo0016, paul.moore, steve.dower, tim.golden, zach.ware
priority: normal
severity: normal
status: open
title: Unable to install 3.6.5 on Windows Server 2008
versions: Python 3.6

___
Python tracker 
<https://bugs.python.org/issue33258>
___



[issue33258] Unable to install 3.6.5 on Windows Server 2008

2018-04-10 Thread Carl

Carl  added the comment:

Steve,
there was no information and no log files were created in %TEMP%, but you would 
think some kind of message dialog or log would be displayed or created.

And I am aware of the end of life for Windows Server 2008 R2 Enterprise, but 
this organization is still running that version on some of its servers.

--

___
Python tracker 
<https://bugs.python.org/issue33258>
___



[issue33258] Unable to install 3.6.5 on Windows Server 2008

2018-04-25 Thread Carl

Carl  added the comment:

The Windows Server 2008 machine is in the process of being updated to 2012. 
Thanks for all the feedback.

--
stage:  -> resolved
status: open -> closed

___
Python tracker 
<https://bugs.python.org/issue33258>
___



[issue32103] Inconsistent text at TypeError in concatenation

2017-11-21 Thread Carl

New submission from Carl :

>>> a = b"jan"
>>> b = "jan"
>>> a+b
Traceback (most recent call last):
  File "", line 1, in 
TypeError: can't concat str to bytes
>>> b+a
Traceback (most recent call last):
  File "", line 1, in 
TypeError: must be str, not bytes
>>>

IMHO, the latter TypeError text should be "TypeError: can't concat bytes to str".

--
components: Interpreter Core
messages: 306642
nosy: wolfc01
priority: normal
severity: normal
status: open
title: Inconsistent text at TypeError in concatenation
type: enhancement
versions: Python 3.6

___
Python tracker 
<https://bugs.python.org/issue32103>
___



[issue46201] PEP 495 misnames PyDateTime_DATE_GET_FOLD

2021-12-30 Thread Carl Drougge


New submission from Carl Drougge :

PEP 495 names one of the accessor macros PyDateTime_GET_FOLD but the code names 
it PyDateTime_DATE_GET_FOLD.

The FOLD macros are also missing from 
https://docs.python.org/3/c-api/datetime.html (and the versioned copies of that 
page).

--
assignee: docs@python
components: Documentation
messages: 409354
nosy: docs@python, drougge
priority: normal
severity: normal
status: open
title: PEP 495 misnames PyDateTime_DATE_GET_FOLD
type: behavior
versions: Python 3.10, Python 3.11, Python 3.6, Python 3.7, Python 3.8, Python 
3.9

___
Python tracker 
<https://bugs.python.org/issue46201>
___



[issue46896] add support for watching writes to selecting dictionaries

2022-03-01 Thread Carl Meyer

New submission from Carl Meyer :

CPython extensions providing optimized execution of Python bytecode (e.g. the 
Cinder JIT), or even CPython itself (e.g. the faster-cpython project), may wish 
to inline-cache access to frequently-read and rarely-changed namespaces, e.g. 
module globals. Rather than requiring a dict version guard on every cached 
read, the best-performing way to do this is to mark the dictionary as 
“watched” and set a callback on writes to watched dictionaries. This optimizes 
the cached-read fast path at a small cost to the (relatively infrequent and 
usually less performance-sensitive) write path.

We have an implementation of this in Cinder ( 
https://docs.google.com/document/d/1l8I-FDE1xrIShm9eSNJqsGmY_VanMDX5-aK_gujhYBI/edit#heading=h.n2fcxgq6ypwl
 ), used already by the Cinder JIT and its specializing interpreter. We would 
like to make the Cinder JIT available as a third-party extension to CPython ( 
https://docs.google.com/document/d/1l8I-FDE1xrIShm9eSNJqsGmY_VanMDX5-aK_gujhYBI/
 ), and so we are interested in adding dict watchers to core CPython.

The intention in this issue is not to add any specific optimization or cache 
(yet); just the ability to mark a dictionary as “watched” and set a write 
callback.

The callback will be global, not per-dictionary (no extra function pointer 
stored in every dict). CPython will track only one global callback; it is a 
well-behaved client’s responsibility to check if a callback is already set when 
setting a new one, and daisy-chain to the previous callback if so. Given that 
multiple clients may mark dictionaries as watched, a dict watcher callback may 
receive events for dictionaries that were marked as watched by other clients, 
and should handle this gracefully.

There is no provision in the API for “un-watching” a watched dictionary; such 
an API could not be used safely in the face of potentially multiple 
dict-watching clients.

The Cinder implementation marks dictionaries as watched using the least bit of 
the dictionary version (so version increments by 2); this also avoids any 
additional memory usage for marking a dict as watched.

Initial proposed API, comments welcome:

// Mark given dictionary as "watched" (global callback will be called if it
// is modified)
void PyDict_Watch(PyObject* dict);

// Check if given dictionary is already watched
int PyDict_IsWatched(PyObject* dict);

typedef enum {
  PYDICT_EVENT_CLEARED,
  PYDICT_EVENT_DEALLOCED,
  PYDICT_EVENT_MODIFIED
} PyDict_WatchEvent;

// Callback to be invoked when a watched dict is cleared, dealloced, or
// modified. In clear/dealloc case, key and new_value will be NULL.
// Otherwise, new_value will be the new value for key, NULL if key is being
// deleted.
typedef void (*PyDict_WatchCallback)(PyDict_WatchEvent event, PyObject* dict,
                                     PyObject* key, PyObject* new_value);

// Set new global watch callback; supply NULL to clear callback
void PyDict_SetWatchCallback(PyDict_WatchCallback callback);

// Get existing global watch callback
PyDict_WatchCallback PyDict_GetWatchCallback(void);

The callback will be called immediately before the modification to the dict 
takes effect, thus the callback will also have access to the prior state of the 
dict.
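To make the intended semantics concrete, here is a pure-Python illustration of 
the callback-before-write idea (this is not the proposed C API; all names below 
are hypothetical):

WATCH_EVENT_CLEARED = "cleared"
WATCH_EVENT_MODIFIED = "modified"

_watch_callback = None            # single global callback, as proposed

def set_watch_callback(callback):
    global _watch_callback
    _watch_callback = callback

def get_watch_callback():
    return _watch_callback

class WatchedDict(dict):
    """Stand-in for a dict that has been marked as watched."""
    def __setitem__(self, key, new_value):
        if _watch_callback is not None:
            # Fired before the write, so the callback sees the prior state.
            _watch_callback(WATCH_EVENT_MODIFIED, self, key, new_value)
        dict.__setitem__(self, key, new_value)

    def clear(self):
        if _watch_callback is not None:
            _watch_callback(WATCH_EVENT_CLEARED, self, None, None)
        dict.clear(self)

def invalidate_caches(event, d, key, new_value):
    # A JIT-like client would drop inline caches keyed on (d, key) here.
    print("invalidate cached reads of", key, "; old value:", d.get(key))

set_watch_callback(invalidate_caches)
ns = WatchedDict(x=1)
ns["x"] = 2     # callback fires before the write and can observe x == 1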

--
components: C API
messages: 414307
nosy: carljm, dino.viehland, itamaro
priority: normal
severity: normal
status: open
title: add support for watching writes to selecting dictionaries
versions: Python 3.11

___
Python tracker 
<https://bugs.python.org/issue46896>
___



[issue46896] add support for watching writes to selected dictionaries

2022-03-01 Thread Carl Meyer


Change by Carl Meyer :


--
title: add support for watching writes to selecting dictionaries -> add support 
for watching writes to selected dictionaries

___
Python tracker 
<https://bugs.python.org/issue46896>
___



[issue46896] add support for watching writes to selected dictionaries

2022-03-03 Thread Carl Meyer


Carl Meyer  added the comment:

Thanks gps! Working on a PR and will collect pyperformance data as well.

We haven't observed any issues in Cinder with the callback just being called at 
shutdown, too, but if there are problems with that it should be possible to 
just have CPython clear the callback at shutdown time.

--

___
Python tracker 
<https://bugs.python.org/issue46896>
___



[issue46896] add support for watching writes to selected dictionaries

2022-03-03 Thread Carl Meyer


Carl Meyer  added the comment:

> Could we (or others) end up with unguarded stale caches if some buggy 
> extension forgets to chain the calls correctly?

Yes. I can really go either way on this. I initially opted for simplicity in 
the core support at the cost of asking a bit more of clients, on the theory 
that a) there are lots of ways for a buggy C extension to cause crashes with 
bad use of the C API, and b) I don't expect there to be very many extensions 
using this API. But it's also true that the consequences of a mistake here 
could be hard to debug (and easily blamed on the wrong place), and there might 
turn out to be more clients for dict-watching than I expect! If the consensus 
is to prefer CPython tracking an array of callbacks instead, we can try that.

> when you say "only one global callback": does that mean per-interpreter, or 
> per-process?

Good question! The currently proposed API suggests per-process, but it's not a 
question I've given a lot of thought to yet; open to suggestions. It seems like 
in general the preference is to avoid global state and instead tie things to an 
interpreter instance? I'll need to do a bit of research to understand exactly 
how that would affect the implementation. Doesn't seem like it should be a 
problem, though it might make the lookup at write time to see if we have a 
callback a bit slower.
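On the daisy-chaining expectation discussed above, a well-behaved client would 
do something like the following (again a pure-Python sketch that reuses the 
hypothetical set_watch_callback/get_watch_callback helpers from the earlier 
illustration, not the proposed C API):

my_cache = {}

previous = get_watch_callback()        # whatever callback is already installed

def my_callback(event, d, key, new_value):
    my_cache.pop(key, None)            # this client's own invalidation
    if previous is not None:
        previous(event, d, key, new_value)   # pass the event along unchanged

set_watch_callback(my_callback)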

--

___
Python tracker 
<https://bugs.python.org/issue46896>
___



[issue46896] add support for watching writes to selected dictionaries

2022-03-04 Thread Carl Meyer

Carl Meyer  added the comment:

Thanks for the feedback!

> Why so coarse?

Simplicity of implementation is a strong advantage, all else equal :) And the 
coarse version is a) at least somewhat proven as useful and usable already by 
Cinder / Cinder JIT, and b) clearly doable without introducing memory or 
noticeable CPU overhead to unwatched dicts. Do you have thoughts about how 
you'd do a more granular version without overhead?

> Getting a notification for every change of a global in a module is likely to 
> make the use of global variables extremely expensive.

It's possible. We haven't ever observed this as an issue in practice, but we 
may have just not observed enough workloads with heavy writes to globals. I'd 
like to verify this problem with a real representative benchmark before making 
design decisions based on it, though. Calling a callback that is uninterested 
in a particular key doesn't need to be super-expensive if the callback is 
reasonably written, and this expense would occur only on the write path, for 
cases where the `global` keyword is used to rebind a global. I don't think it's 
common for idiomatic Python code to write to globals in perf-sensitive paths. 
Let's see how this shows up in pyperformance, if we try running it with all 
module globals dicts watched.

> For example, we could just tag the low bit of any pointer in a dictionary’s 
> values that we want to be notified of changes to

Would you want to tag the value, or the key? If value, does that mean if the 
value is changed it would revert to unwatched unless you explicitly watched the 
new value?

I'm a bit concerned about the performance overhead this would create for use of 
dicts outside the write path, e.g. the need to mask off the watch bit of 
returned value pointers on lookup.

> What happens if a watched dictionary is modified in a callback?

It may be best to document that this isn't supported; it shouldn't be necessary 
or advisable for the intended uses of dict watching. That said, I think it 
should work fine if the callback can handle re-entrancy and doesn't create 
infinite recursion. Otherwise, I think it's a case of "you broke it, you get to 
keep all the pieces."

> How do you plan to implement this? Steal a bit from `ma_version_tag`

We currently steal the low bit from the version tag in Cinder; my plan was to 
keep that approach.

> You'd probably need a PEP to replace PEP 509, but I think this may need a PEP 
> anyway.

I'd prefer to avoid coupling this to removal of the version tag. Then we get 
into issues of backward compatibility that this proposal otherwise avoids.

I don't think the current proposal is of a scope or level of user impact that 
should require a PEP, but I'm happy to write one if needed.

--

___
Python tracker 
<https://bugs.python.org/issue46896>
___



[issue1130] Idle - Save (buffer)

2007-09-07 Thread Carl Trachte

Changes by Carl Trachte:


--
components: IDLE
severity: normal
status: open
title: Idle - Save (buffer)
type: behavior
versions: Python 3.0

__
Tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue1130>
__



[issue1130] Idle - Save (buffer) - closes IDLE and does not save file (Windows XP)

2007-09-07 Thread Carl Trachte

Changes by Carl Trachte:


--
title: Idle - Save (buffer) -> Idle - Save (buffer) - closes IDLE and does not 
save file (Windows XP)

__
Tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue1130>
__



[issue13049] distutils2 should not allow packages

2011-09-26 Thread Carl Meyer

New submission from Carl Meyer :

As discussed at 
http://groups.google.com/group/the-fellowship-of-the-packaging/browse_frm/thread/3b7a8ddd307d1020
 , distutils2 should not allow a distribution to install files into a top-level 
package that is already installed from a different distribution.
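The kind of check described could look roughly like this today (a sketch using 
importlib.metadata from Python 3.8+ rather than distutils2; the package name is 
just an example):

from importlib.metadata import distributions

def owners_of_top_level(name):
    """Installed distributions that ship files under the given top-level package."""
    owners = set()
    for dist in distributions():
        for f in dist.files or []:
            if f.parts and f.parts[0] == name:
                owners.add(dist.metadata["Name"])
                break
    return owners

# An installer could refuse to install a distribution whose files would land
# in a top-level package already owned by a different distribution:
print(owners_of_top_level("spam"))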

--
assignee: tarek
components: Distutils2
messages: 144542
nosy: alexis, carljm, eric.araujo, tarek
priority: normal
severity: normal
status: open
title: distutils2 should not allow packages
type: behavior

___
Python tracker 
<http://bugs.python.org/issue13049>
___



[issue2945] bdist_rpm does not list dist files (should effect upload)

2011-10-06 Thread Carl Robben

Carl Robben  added the comment:

I found that bdist_rpm wasn't registering distributions with dist.dist_files at 
all.  The attached patch should be all that's needed to fix this.

--
keywords: +patch
nosy: +crobben
Added file: http://bugs.python.org/file2/bdist_rpm.patch

___
Python tracker 
<http://bugs.python.org/issue2945>
___



[issue2945] bdist_rpm does not list dist files (should effect upload)

2011-10-10 Thread Carl Robben

Carl Robben  added the comment:

Here's a patch for test_bdist_rpm.py, to check the contents of 
dist.dist_files.

--
Added file: http://bugs.python.org/file23363/test_bdist_rpm.patch

___
Python tracker 
<http://bugs.python.org/issue2945>
___



[issue2945] bdist_rpm does not list dist files (should effect upload)

2011-10-10 Thread Carl Robben

Carl Robben  added the comment:

Adding a patch for 2.7

--
Added file: http://bugs.python.org/file23364/bdist_rpm-2.7.patch

___
Python tracker 
<http://bugs.python.org/issue2945>
___



[issue2945] bdist_rpm does not list dist files (should effect upload)

2011-10-10 Thread Carl Robben

Carl Robben  added the comment:

Yeah I installed rpm and have run the tests successfully.

--

___
Python tracker 
<http://bugs.python.org/issue2945>
___



[issue12405] packaging does not record/remove directories it creates

2011-10-17 Thread Carl Meyer

Carl Meyer  added the comment:

> Carl: Can you tell us how pip removes directories?

In short - pip would _love_ to have directories recorded as well as files, 
exactly as Vinay has proposed. We don't have that info (even the distutils 
--record option currently doesn't record directories, thus installed-files.txt 
doesn't contain directories), so we are reduced to some nasty things like 
referring to top-level.txt in order to avoid lots of empty directories hanging 
about, which in itself was the subject of recent controversy re Twisted's 
custom namespace packages implementation.

Please, let's have directories recorded in RECORD, and yes, if a directory 
would have been created but already existed, it should also be recorded (so 
that shared directories are in the RECORD file for both/all of the sharing 
distributions).

--

___
Python tracker 
<http://bugs.python.org/issue12405>
___



[issue12405] packaging does not record/remove directories it creates

2011-10-18 Thread Carl Meyer

Carl Meyer  added the comment:

> This is what I proposed earlier: we’d need to record all directories that 
> would have been created, but I’m not sure if it will be possible.  For 
> example, if one uses --prefix /tmp/usr and pysetup install creates /tmp/usr, 
> /tmp/usr/lib, /tmp/usr/lib/python2.7, /tmp/usr/lib/python2.7/site-packages, 
> /tmp/usr/lib/python2.7/site-packages/spam and 
> /tmp/usr/lib/python2.7/site-packages/Spam-0.1.dist-info, then when we run 
> "pysetup remove Spam", should packaging remove only the package and dist-info 
> directories, or also the site-packages, python2.7, lib and usr directories?

I think it would make sense to draw a distinction between "creating the prefix 
directories (including site-packages)" and "creating the distribution-specific 
directories within the prefix directories." And only record the latter in 
RECORD for the given installed distribution.

If I use --prefix and install some things, and then uninstall them, I would not 
consider it a bug to find the empty site-packages directory still remaining 
under that prefix. (In fact, I'd be surprised if it were removed).

> Okay, so I will champion a patch to PEP 376.

Thank you!

--

___
Python tracker 
<http://bugs.python.org/issue12405>
___



[issue13304] test_site assumes that site.ENABLE_USER_SITE is True

2011-10-31 Thread Carl Meyer

New submission from Carl Meyer :

If the test suite is run with PYTHONNOUSERSITE=true, the test_s_option test in 
test_site fails, because it implicitly assumes that site.ENABLE_USER_SITE is 
True and that site.USER_SITE should unconditionally be in sys.path.

This is a practical problem in the reference implementation for PEP 404, as the 
tests should pass when run from within a virtual environment, but a 
system-isolated virtual environment disables user-site (i.e. has the same 
effect as PYTHONNOUSERSITE).

I think the correct fix here is to conditionally skip that test if 
site.ENABLE_USER_SITE is not True.

I also think the module-level conditional check at the top of the file, which, 
if site.USER_SITE does not exist, creates site.USER_SITE and calls 
site.addsitedir() on it, should only run if site.ENABLE_USER_SITE is True.
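A sketch of the proposed conditional skip (the class and test names follow the 
existing Lib/test/test_site.py layout as I understand it; the test body is 
elided):

import unittest
import site

class HelperFunctionsTests(unittest.TestCase):

    @unittest.skipUnless(site.ENABLE_USER_SITE,
                         "user site-packages is disabled, e.g. PYTHONNOUSERSITE")
    def test_s_option(self):
        # existing test body: assert that site.USER_SITE is on sys.path when
        # the interpreter is launched without -s
        pass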

--
components: Tests
messages: 146722
nosy: carljm
priority: normal
severity: normal
status: open
title: test_site assumes that site.ENABLE_USER_SITE is True

___
Python tracker 
<http://bugs.python.org/issue13304>
___



[issue13304] test_site assumes that site.ENABLE_USER_SITE is True

2011-10-31 Thread Carl Meyer

Carl Meyer  added the comment:

Added a patch implementing my proposed fix.

--
hgrepos: +87

___
Python tracker 
<http://bugs.python.org/issue13304>
___



[issue13304] test_site assumes that site.ENABLE_USER_SITE is True

2011-10-31 Thread Carl Meyer

Changes by Carl Meyer :


--
keywords: +patch
Added file: http://bugs.python.org/file23575/cea40c2d7323.diff

___
Python tracker 
<http://bugs.python.org/issue13304>
___



[issue13304] test_site assumes that site.ENABLE_USER_SITE is True

2011-10-31 Thread Carl Meyer

Changes by Carl Meyer :


Removed file: http://bugs.python.org/file23575/cea40c2d7323.diff

___
Python tracker 
<http://bugs.python.org/issue13304>
___



[issue13304] test_site assumes that site.ENABLE_USER_SITE is True

2011-10-31 Thread Carl Meyer

Changes by Carl Meyer :


Added file: http://bugs.python.org/file23576/d851c64c745a.diff

___
Python tracker 
<http://bugs.python.org/issue13304>
___



[issue11574] TextIOWrapper: Unicode Fallback Encoding on Python 3.3

2011-12-14 Thread Carl Meyer

Carl Meyer  added the comment:

Here's an example real-world case where the only solution I could find was to 
simply avoid non-ASCII characters entirely (which is obviously not a real 
solution): https://github.com/pypa/virtualenv/issues/201#issuecomment-3145690

distutils/distribute require long_description to be a string, not bytes (so 
they can rfc822-escape it, and use string methods to do so), but they do not 
explicitly set an output encoding when writing egg-info. This means that a 
developer has the choice of either a) breaking installation of their package on 
any system with an ASCII default locale, or b) not using any non-ASCII 
characters in long_description.

One might say, "ok, this is a bug in distutils/distribute, it should explicitly 
specify UTF-8 encoding when writing egg-info." But if this is a sensible thing 
for distutils/distribute to do, regardless of user locale, why would it not be 
equally sensible for Python itself to have the default output encoding always 
be UTF-8 (with the ability for a developer who wants to support arbitrary user 
locale to explicitly do so)?
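A sketch of the narrower fix mentioned above, writing metadata with an explicit 
encoding instead of the locale default (the file name and content are only 
illustrative):

import io

long_description = u'Descri\u00e7\u00e3o longa com acentos'

with io.open('PKG-INFO', 'w', encoding='utf-8') as f:
    # An explicit encoding works regardless of the user's locale; relying on
    # the locale default raises UnicodeEncodeError under an ASCII locale.
    f.write(long_description)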

--
nosy: +carljm

___
Python tracker 
<http://bugs.python.org/issue11574>
___



[issue12168] SysLogHandler incorrectly appents \000 to messages

2011-05-24 Thread Carl Crowder

New submission from Carl Crowder :

logging.handlers.SysLogHandler contains this class variable and comment:

# curious: when talking to the unix-domain '/dev/log' socket, a
# zero-terminator seems to be required.  this string is placed
# into a class variable so that it can be overridden if
# necessary.
log_format_string = '<%d>%s\000'

And separately, in emit:

msg = self.format(record) + '\000'

The assumption here is that a null character must be appended to delimit the 
syslog message. In RFC5424, there is no mention of a message delimiter, and in 
fact the previous syslog RFC, RFC3164, specifically states:

>  The MSG part will fill the remainder of the syslog packet.  This will 
> usually contain some additional information of the process that generated the 
> message, and then the text of the message.  There is no ending delimiter to 
> this part.

I believe this comment and behaviour are due to an older version of syslogd. 
The manpage for an older version of rsyslog, for example, includes this piece 
of information [1]:

> There is probably one important consideration when installing rsyslogd. It is 
> dependent on proper formatting of messages by the syslog function. The 
> functioning of the syslog function in the shared libraries changed somewhere 
> in the region of libc.so.4.[2-4].n.   The specific change was to 
> null-terminate the message before transmitting it to the /dev/log socket. 
> Proper functioning of this  version of rsyslogd is dependent on 
> null-termination of the message.

I'm running Ubuntu 11.04 with rsyslogd 4.6.4 (that is, the standard version). 
In the manpage for this version of rsyslogd, there is no reference to 
null-terminators. Removing "+ '\000'" from the SysLogHandler results in 
messages still being received correctly.

Problem behaviour:
1) When running any RFC compliant syslog receiver that handles syslog messages, 
such as flume[2], this null character is not stripped as it is not expected to 
be present. Current versions of syslog cope because previously they assumed it 
existed.
2) The log_format_string class variable is not actually used anywhere, so it 
cannot be overridden usefully.

Removing the null terminator will cause typical older versions of syslogd to 
fail to receive messages; however, including it causes any normal receiver that 
does not implement the non-standard behaviour to receive an additional unwanted 
null byte.

My suggestion for a fix is either to actually use the log_format_string class 
variable, or to add an optional "append_null" argument to the SysLogHandler 
constructor. By default this should be True, so that the main use case, Unix 
syslog daemons, continues to work. Having the option will allow other use cases 
to also use SysLogHandler.

[1] http://manpages.ubuntu.com/manpages/hardy/man8/rsyslogd.8.html#contenttoc8
[2] http://www.cloudera.com/blog/category/flume/
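For what it's worth, later versions of the logging module expose an opt-out 
along these lines as a handler attribute; a sketch (the attribute name is worth 
verifying against your Python version, and the address is an example):

import logging
import logging.handlers

handler = logging.handlers.SysLogHandler(address=('localhost', 514))
handler.append_nul = False      # do not append the trailing '\000' byte

logger = logging.getLogger('flume-bound')
logger.addHandler(handler)
logger.warning('no trailing NUL on this message')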

--
components: Library (Lib)
messages: 136743
nosy: Carl.Crowder
priority: normal
severity: normal
status: open
title: SysLogHandler incorrectly appents \000 to messages
type: behavior
versions: Python 2.7

___
Python tracker 
<http://bugs.python.org/issue12168>
___



[issue12168] SysLogHandler incorrectly appends \000 to messages

2011-05-25 Thread Carl Crowder

Carl Crowder  added the comment:

Flume certainly could avoid parsing certain values. However, while a syslog 
application "should avoid octet values below 32", they are still "legal" [1]. I 
don't think that adjusting flume to reject legal values due to legacy behaviour 
in some unix syslog daemons is the Right Thing™ here.

[1] http://tools.ietf.org/html/rfc5424#section-6.4

--
title: SysLogHandler incorrectly appents \000 to messages -> SysLogHandler 
incorrectly appends \000 to messages

___
Python tracker 
<http://bugs.python.org/issue12168>
___



[issue12168] SysLogHandler incorrectly appends \000 to messages

2011-05-25 Thread Carl Crowder

Carl Crowder  added the comment:

Oh, I understand. Flume doesn't break, it handles the \0 just fine, the problem 
is that I ended up with a message with that additional byte on the end. Sorry 
for the confusion!

--

___
Python tracker 
<http://bugs.python.org/issue12168>
___



[issue8668] Packaging: add a 'develop' command

2011-07-11 Thread Carl Meyer

Carl Meyer  added the comment:

Can someone post a link here to the page of use cases that Michael just 
reviewed? I think the link came through on the Fellowship mailing list, but I'm 
not quickly finding it...

--

___
Python tracker 
<http://bugs.python.org/issue8668>
___



[issue8668] Packaging: add a 'develop' command

2011-07-11 Thread Carl Meyer

Carl Meyer  added the comment:

On 07/11/2011 09:17 AM, Michael Mulich wrote:
> * Cases 2, 3, 5 and 6 are strongly related. I'd suggest you condense them 
> into a single use case. I agree with case 2 and 6 most, but have questions:
> ** Why wouldn't one simply use a virtualenv? 

I don't know. I don't consider case 3 useful, because I don't consider
"I don't want to use a virtualenv" (without some clearer technical
justification) to be a prejudice the develop feature needs to support;
especially if supporting it essentially means re-implementing a
less-capable version of virtualenv within the develop command.

> -- Case 5 touches on this topic, but if we are installing in-place, who cares 
> if we can place a development package in the global site-packages directory?

Several of these stories make the assumption that even the "in-place"
installation will require placing a file in the installation location (a
.pth file, if we follow the current setuptools implementation strategy).
I think this is probably true, given the requirements in case 6 (which I
agree with). So if you want an in-place install that's globally
accessible, you'd need write access to global site-packages.

> ** After the package has been installed in-place (using the develop command), 
> how does one identify it as an in development project (or in development 
> mode)? -- Case 3 and 6 touch on this topic (case 3 is a little vague at this 
> time), but doesn't explain what type of action is intended. So if we install 
> in-place (aka, develop), how does the python interpreter find the package? 
> Are we using PYTHONPATH at this point (which would contradict a 
> requirement in case 6)?

These use cases (probably intentionally) don't touch on specific
implementation strategies, but as I mentioned there's an implicit
assumption that a .pth file is the most likely strategy.

> * Case 4 is a bit unclear. Is Carl, the actor, pulling unreleased remote 
> changes (hg pull --update) for these mercurial server plugins then running 
> the develop command on them? 

Right, although the requirement for that story is that you don't have to
re-run the develop command after every pull; if you develop-install it
once, you can simply pull more code changes in and they'll immediately
be available. I've added a line to that story to make it more clear.

> * Case 1 is good and very clear, but I'd consider it a feature rather than 
> required. Perhaps it should not be focused on first (priority). Thoughts?

I agree that's a second-level feature (or, perhaps more accurately, a
bug in the existing setuptools feature that I was hoping could be
addressed in the d2 version), not a primary requirement.
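To make the .pth-file strategy discussed above concrete, here is a rough sketch 
(the paths are hypothetical, and a real implementation would go through the 
install command's location logic rather than hard-coding site-packages):

import os
import site

project_src = '/home/carl/src/myproject'       # the working copy to develop
site_packages = site.getsitepackages()[0]      # a writable site-packages dir

with open(os.path.join(site_packages, 'myproject-develop.pth'), 'w') as f:
    f.write(project_src + '\n')

# At the next interpreter startup, site.py appends each existing directory
# listed in the .pth file to sys.path, so edits in the working copy are
# picked up without reinstalling.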

--

___
Python tracker 
<http://bugs.python.org/issue8668>
___



[issue12279] Add build_distinfo command to packaging

2011-07-11 Thread Carl Meyer

Carl Meyer  added the comment:

You guys are more familiar with the codebase than I am, but it seems to me that 
the RECORD file should clearly either be not present or empty when metadata has 
been built but not yet installed. I don't really think the "invalid PEP 376" 
issue is a problem: PEP 376 describes the metadata for installed distributions; 
it has nothing to say about built metadata for a distribution which has not yet 
been installed.

For purposes of the develop command, if a pth file is used to implement 
develop, then ideally when develop is run a RECORD file would be added 
containing only the path to that pth file, as that's the only file that has 
actually been installed (and the only one that should be removed if the 
develop-installed package is uninstalled).

--
nosy: +carljm

___
Python tracker 
<http://bugs.python.org/issue12279>
___



[issue12279] Add build_distinfo command to packaging

2011-07-12 Thread Carl Meyer

Carl Meyer  added the comment:

>> I don't really think the "invalid PEP 376" issue is a problem: PEP
>> 376 describes the metadata for installed distributions; it has
>> nothing to say about built metadata for a distribution which has not
>> yet been installed.
> The problem is that develop is a kind of install.

Right, I was simply referring to "build_distinfo" leaving it
empty/missing; I'd want "develop" to add a (very short) RECORD file as
specified below.

>> For purposes of the develop command, if a pth file is used to
>> implement develop, then ideally when develop is run a RECORD file
>> would be added containing only the path to that pth file, as thats
>> the only file that has actually been installed
> Yeah!
> 
>> (and the only one that should be removed if the develop-installed
>> package is uninstalled).
> Are you saying that such a RECORD file would allow any installer compatible 
> with PEP 376 to undo a develop install?  Clever!

Yeah, that's the idea. I don't see any actual use case for having all of
the Python modules etc included in the RECORD file for a
develop-install, because they haven't been installed anywhere: what we
really want to know is "what has been placed in the installation
location that we need to keep track of?"

--

___
Python tracker 
<http://bugs.python.org/issue12279>
___



[issue8668] Packaging: add a 'develop' command

2011-07-12 Thread Carl Meyer

Carl Meyer  added the comment:

> Ah, higery’s code already has an answer for me: it writes *two* paths in the 
> .pth file, one to the build dir (so that .dist-info is found) and one to the 
> modules root (for modules, built in place).  Anyone sees a problem with that? 
>  (For example huge sys.path.)
> 
> In this scheme, when Python modules are edited, changes are visible 
> instantly, when C modules are edited, a call to build_ext is required, and 
> when the metadata is edited, build_distinfo is required.  Does that sound 
> good?

That sounds reasonable to me. I'm not worried about that making sys.path
too long: whatever we do we aren't going to challenge buildout in that
department ;-)

--

___
Python tracker 
<http://bugs.python.org/issue8668>
___



[issue8668] Packaging: add a 'develop' command

2011-07-12 Thread Carl Meyer

Carl Meyer  added the comment:

> I’ve reviewed the last patch.  It looks like the code only installs
> to the global site-packages, and there is no support to install to
> the user site-packages or to another arbitrary location.
> 
> On Windows, normal users seem to be able to write to the global
> site-packages (see #12260), but on other OSes with a proper rights
> model  that won’t do.  Luckily, PEP 370 brings us user
> site-packages (currently poorly documented, see #8617 and #10745),
> but only for 2.6, 2.7 and 3.x.  It looks like Tarek is ready to drop
> 2.4 compatibility for distutils2, so the question is: what to do
> under 2.5?
> 
> Generally, I don’t see why develop could not install to any
> directory.  We want a default invocation without options to Just
> Work™, finding a writable directory already on sys.path and writing
> into it, but that doesn’t exclude letting the user do what they
> want.

I don't see why the installation-location-finding for develop should be
any different than for a normal "pysetup install". Does "pysetup
install" install to global site-packages by default, or try to find
somewhere it can install without additional privileges? Whatever it does
by default, develop should do the same. If "develop" can install to
arbitrary locations, then "install" should be able to as well (though I
don't really see the value in "arbitrary locations", since you then have
to set up PYTHONPATH manually anyway). There is no reason for them to
have different features in this area, it just adds confusion.

Certainly "develop" should support PEP 370, ideally with the same
command-line flag as a regular install.

--

___
Python tracker 
<http://bugs.python.org/issue8668>
___



[issue8668] Packaging: add a 'develop' command

2011-07-20 Thread Carl Meyer

Carl Meyer  added the comment:

> Éric Araujo  added the comment:
> 
> [Carl]
>> there's an implicit assumption that a .pth file is the most likely
>> strategy.
> If you have other ideas, please share them.

No, I think that's the most promising strategy. The "implicit
assumption" comment was not criticism, just explanation for Michael.

--

___
Python tracker 
<http://bugs.python.org/issue8668>
___



[issue9869] long_subtype_new segfault in pure-Python code

2010-09-16 Thread Carl Witty

New submission from Carl Witty :

PyNumber_Long() (and hence long_new()) are willing to return ints, rather than 
longs.  However, when long_subtype_new() calls long_new(), it casts the result 
to PyLongObject* without a check.  (Well, there is an assertion, so if 
assertions are enabled you'd get an assertion failure instead of a potential 
segmentation fault.)

The attached program segfaults for me; returning smaller numbers than 100 
sometimes gives bad answers instead of crashing.
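The attachment is not reproduced here, but based on the description the trigger 
plausibly has this shape (a hedged sketch for Python 2; instantiating a long 
subclass reaches long_subtype_new(), and __long__ returning an int exercises 
the unchecked cast described above):

class MyLong(long):
    pass

class ReturnsInt(object):
    def __long__(self):
        return 100          # an int, not a long

# long_new() happily returns the int, and long_subtype_new() then treats it
# as a PyLongObject, giving garbage values or a crash as described above.
print MyLong(ReturnsInt())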

--
components: Interpreter Core
files: python_long_bug.py
messages: 116514
nosy: cwitty
priority: normal
severity: normal
status: open
title: long_subtype_new segfault in pure-Python code
type: crash
versions: Python 2.5, Python 2.6, Python 2.7
Added file: http://bugs.python.org/file18899/python_long_bug.py

___
Python tracker 
<http://bugs.python.org/issue9869>
___



[issue11296] Possible error in What's new in Python 3.2 : duplication of rsplit() mention

2011-02-22 Thread Carl Chenet

New submission from Carl Chenet :

Hi,

Is the rsplit() method perhaps mentioned twice by mistake in the following 
sentence of the current "What's new in Python 3.2"?

"The fast-search algorithm in stringlib is now used by the split(), rsplit(), 
splitlines() and replace() methods on bytes, bytearray and str objects. 
Likewise, the algorithm is also used by rfind(), rindex(), rsplit() and 
rpartition()."

Regards,
Carl Chenet

--
assignee: docs@python
components: Documentation
messages: 129146
nosy: chaica_, docs@python
priority: normal
severity: normal
status: open
title: Possible error in What's new in Python 3.2 : duplication of rsplit() 
mention
versions: Python 3.2

___
Python tracker 
<http://bugs.python.org/issue11296>
___



[issue9878] Avoid parsing pyconfig.h and Makefile by autogenerating extension module

2011-03-13 Thread Carl Meyer

Changes by Carl Meyer :


--
nosy: +carljm

___
Python tracker 
<http://bugs.python.org/issue9878>
___



[issue11591] "python -S" should be robust against e.g. "from site import addsitedir"

2011-03-17 Thread Carl Meyer

New submission from Carl Meyer :

If python is run with the -S flag, that declares the intent of the user to not 
have site-specific additions to sys.path.

However, some code in that process may have a legitimate need for a function 
defined in site.py - for instance, addsitedir. But the act of importing 
site.py, as a side effect, adds the standard site-specific directories to 
sys.path.

python -S would be more useful and reliable if it prevented importing site from 
automatically making the sys.path additions. There is no loss of flexibility 
here, as user code could still explicitly call site.main() to achieve all of 
the current side-effects of "import site".

The fix is a one-liner, and is in the linked hg repository.
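A sketch of what this allows: code that runs under -S but explicitly opts in to 
the site machinery, with the guard keyed on sys.flags.no_site:

import sys
import site   # with the proposed change, this import alone adds nothing to sys.path under -S

if sys.flags.no_site:
    # Started with -S: site-specific paths were not added automatically;
    # opt in explicitly only if this program really wants them.
    site.main()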

--
components: Library (Lib)
hgrepos: 4
messages: 131281
nosy: carljm
priority: normal
severity: normal
status: open
title: "python -S" should be robust against e.g. "from site import addsitedir"
type: behavior

___
Python tracker 
<http://bugs.python.org/issue11591>
___



[issue11591] "python -S" should be robust against e.g. "from site import addsitedir"

2011-03-17 Thread Carl Meyer

Changes by Carl Meyer :


--
keywords: +patch
Added file: http://bugs.python.org/file21274/87df1d37c88e.diff

___
Python tracker 
<http://bugs.python.org/issue11591>
___



[issue11591] "python -S" should be robust against e.g. "from site import addsitedir"

2011-03-17 Thread Carl Meyer

Carl Meyer  added the comment:

Adding a test is easier said than done. The behavior change here depends on 
python being run with -S. Currently test_site skips itself if the test suite is 
run with -S, and if I remove that skip it crashes under -S.

Options as I see it:

1. Declare this one-liner correct by inspection. It doesn't break any existing 
tests.

2. Add a new test file (test_no_site.py?) that only runs with -S and tests that 
importing something from site doesn't trigger sys.path additions. This seems 
like the most reasonable test, but I'm not sure how useful it is, since I doubt 
most people ever try running the test suite with -S.

3. Make the fix more complicated such that it uses an intermediary variable 
which can be mocked (unlike sys.flags.no_site, which is read-only), and then 
add a test which mocks this variable, temporarily removes "site" from 
sys.modules, tries importing it again, and checks whether main() is called. 
This creates a complex test which is highly coupled to the implementation in 
site.py, but would be run under normal conditions (without -S).

Which option do you prefer?

--

___
Python tracker 
<http://bugs.python.org/issue11591>
___



[issue11598] missing afxres.h error when building bdist_wininst in Visual Studio 2008 Express

2011-03-18 Thread Carl Meyer

New submission from Carl Meyer :

By opening up pcbuild.sln in VS2008 Express, I was able to successfully build 
python and pythonw, but when I tried to build bdist_wininst it failed with 
"Fatal Error RC1015: cannot open include file afxres.h"

Googling turned up a number of comments about how this file is part of MFC, 
which is really not supposed to be used with VS2008. The recommended "fix" that 
seemed to work for most people online was to replace "afxres.h" with 
"windows.h" in the rc file. I did this in PC/bdist_wininst/install.rc, and then 
it failed with a different error about a missing IDC_STATIC token.

I have very little experience with Windows, so it's entirely possible I'm just 
doing something wrong, but I was asked in #python-dev to file a bug here.

--
components: Build, Windows
messages: 131351
nosy: carljm
priority: normal
severity: normal
status: open
title: missing afxres.h error when building bdist_wininst in Visual Studio 2008 
Express
versions: Python 3.3

___
Python tracker 
<http://bugs.python.org/issue11598>
___



[issue11603] Python crashes or hangs when rebinding __repr__ as __str__

2011-03-18 Thread Carl Banks

New submission from Carl Banks :

The issue was raised by J Peyret on the following c.l.python thread:

http://groups.google.com/group/comp.lang.python/browse_frm/thread/459e5ec433e7dcab?hl=en#

Several posters reported that the following code either hangs or crashes Python 
(versions 2.7, 2.6, and 3.2, on Windows and Linux) were tested:

-
class Foo(object):
    pass

Foo.__repr__ = Foo.__str__

foo = Foo()
print(str(foo))
-

--
components: Interpreter Core
messages: 131364
nosy: aerojockey
priority: normal
severity: normal
status: open
title: Python crashes or hangs when rebinding __repr__ as __str__
type: crash
versions: Python 2.6, Python 2.7, Python 3.2

___
Python tracker 
<http://bugs.python.org/issue11603>
___



[issue11591] "python -S" should be robust against e.g. "from site import addsitedir"

2011-03-21 Thread Carl Meyer

Carl Meyer  added the comment:

Added documentation to Doc/library/site.rst and Misc/NEWS.

--
hgrepos: +5

___
Python tracker 
<http://bugs.python.org/issue11591>
___



[issue11591] "python -S" should be robust against e.g. "from site import addsitedir"

2011-03-21 Thread Carl Meyer

Changes by Carl Meyer :


Added file: http://bugs.python.org/file21327/ebe5760afa08.diff

___
Python tracker 
<http://bugs.python.org/issue11591>
___



[issue11591] "python -S" should be robust against e.g. "from site import addsitedir"

2011-03-21 Thread Carl Meyer

Carl Meyer  added the comment:

> Did you have to manually click “Create Patch” to make roundup generate it?  

Yes - the first time too.

> Did you try first to click on the button of the existing repo before adding a 
> new repo entry?

That would probably have worked fine. The "Remote hg repo" field was just empty 
when I made my latest comment, so I filled it in again. Wasn't sure if it would 
duplicate, or be smart enough to tell they were the same repo, or what. I guess 
it duplicated :/

--

___
Python tracker 
<http://bugs.python.org/issue11591>
___



[issue6087] distutils.sysconfig.get_python_lib gives surprising result when used with a Python build

2011-04-01 Thread Carl Meyer

Changes by Carl Meyer :


--
nosy: +carljm

___
Python tracker 
<http://bugs.python.org/issue6087>
___



[issue11810] _socket fails to build on OpenIndiana

2011-04-10 Thread Carl Brewer

Carl Brewer  added the comment:

I know this is closed etc... but Plone (the CMS I use) is tied to various 
versions of Python, in particular 2.6 at this time.  Having it not build on 
Open[Solaris/Indiana] means I can't install current versions of Plone/Zope on 
this platform.  Any chance it could be fixed?

--
nosy: +Bleve

___
Python tracker 
<http://bugs.python.org/issue11810>
___



[issue11810] _socket fails to build on OpenIndiana

2011-04-10 Thread Carl Brewer

Carl Brewer  added the comment:

Plone ships with a "universal installer" which expects particular versions of 
python (and PIL etc etc) which makes it easy to build on, for example, many 
Linux distros, but it's just not working on Open[Solaris|Indiana] and also 
NetBSD (pkgsrc's python2.6 is broken too, but we're working on that).  The only 
time the installer gets bumped is when new versions of Plone get released, 
which means that only the bleeding edge might work.  This is a problem for many 
integrators who are tied to older versions of Plone|Zope that are unlikely to 
get migrated to more recent releases in any sort of a reasonable timeframe.

Is it really not possible to fix up python2.6 to solve this issue?

--

___
Python tracker 
<http://bugs.python.org/issue11810>
___



[issue11868] Minor word-choice improvement in devguide "lifecycle of a patch" opening paragraph

2011-04-18 Thread Carl Meyer

New submission from Carl Meyer :

The opening paragraph of the "lifecycle of a patch" devguide page contains a 
confusing parenthetical aside implying that an "svn-like" workflow would mean 
never *saving* anything to your working copy and using "hg diff" to generate a 
patch. This is obviously wrong given the usual meaning of "save": if you never 
save anything to your working copy, "hg diff" will be empty.

Patch attached with proposed alternative wording.

--
components: Devguide
files: svn-like-wording.diff
keywords: patch
messages: 133978
nosy: carljm
priority: normal
severity: normal
status: open
title: Minor word-choice improvement in devguide "lifecycle of a patch" opening 
paragraph
versions: 3rd party
Added file: http://bugs.python.org/file21707/svn-like-wording.diff

___
Python tracker 
<http://bugs.python.org/issue11868>
___



[issue1346874] httplib simply ignores CONTINUE

2011-04-24 Thread Carl Nobile

Carl Nobile  added the comment:

I have run into this same issue. It does violate RFC2616 in section 4.3 "All 
1xx (informational), 204 (no content), and 304 (not modified) responses MUST 
NOT include a message-body. All other responses do include a message-body, 
although it MAY be of zero length."

The embedded while loop is looking for entity data coming back from the server 
which will never be seen. In my tests the code dies with an exception. I don't 
see why anything special is being done for a 100 CONTINUE at all. My fix was to 
eliminate the code previously quoted and replace it with a single line of code 
so that it would now look like the code snippet below.

    def begin(self):
        if self.msg is not None:
            # we've already started reading the response
            return
        version, status, reason = self._read_status()
        self.status = status
        self.reason = reason.strip()

A note on providing a patch, as requested previously:

Having this restriction on providing a patch is a large deterrent to people. I 
spent a lot of time myself finding the cause of the issues I was having. I 
don't really have the time to fix tests and documentation also. I understand 
the reason for asking, but it certainly is a discouragement to helping when 
bugs are found.

--
nosy: +Carl.Nobile

___
Python tracker 
<http://bugs.python.org/issue1346874>
___



[issue11234] Possible error in What's new Python3.2(rc3) documentation (sysconfig.get_config_var)

2011-02-17 Thread Carl Chenet

New submission from Carl Chenet :

Hi,

There seems to be a mistake in the "What's new in Python 3.2" (rc3) 
documentation, in the sysconfig.get_config_var('SO') example:

>>> sysconfig.get_config_var('SO')   # find the full filename extension
'cpython-32mu.so'

On my system (Debian GNU/Linux, Python 3.2rc3), the same command gives:
 
>>> sysconfig.get_config_var('SO')
'.cpython-32m.so'

A dot at the beginning of the string could be missing in the example of the 
current documentation. This dot also appears in the example of the PEP 3149.

Regards,
Carl Chenet

--
assignee: docs@python
components: Documentation
messages: 128747
nosy: chaica_, docs@python
priority: normal
severity: normal
status: open
title: Possible error in What's new Python3.2(rc3) documentation 
(sysconfig.get_config_var)
versions: Python 3.2

___
Python tracker 
<http://bugs.python.org/issue11234>
___



[issue3331] Possible inconsistency in behavior of list comprehensions vs. generator expressions

2008-07-10 Thread Carl Johnson

New submission from Carl Johnson <[EMAIL PROTECTED]>:

Compare the following behaviors:

Python 3.0a5 (r30a5:62856, May 10 2008, 10:34:28)
[GCC 4.0.1 (Apple Inc. build 5465)] on darwin
Type "help", "copyright", "credits" or "license" for more 
information.
>>> def f(x):
...  if x > 5: raise StopIteration
...
>>> [x for x in range(100) if not f(x)]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 1, in <listcomp>
  File "<stdin>", line 2, in f
StopIteration
>>> list(x for x in range(100) if not f(x))
[0, 1, 2, 3, 4, 5]

One might object that the behavior of the list comprehension is 
identical to that of a for-loop:

>>> r = []
>>> for x in range(100):
...  if not f(x):
...   r.append(x)
... 
Traceback (most recent call last):
  File "", line 2, in 
  File "", line 2, in f
StopIteration

However, it can be argued that in Python 3 list comprehensions should be 
thought of as "syntatic sugar" for ``list(generator expression)`` not a 
for-loop with an accumulator. (This seems to be the motivation for no 
longer "leaking" variables from list comprehensions into their enclosing 
namespace.)

One interesting question that this raises (for me at least) is whether 
the for-loop should also behave like a generator expression. Of course, 
the behavior of the generator expression can already be simulated by 
writing:

>>> r = []
>>> for x in range(100):
...  try:
...   if not f(x):
...    r.append(x)
...  except StopIteration:
...   break
... 
>>> r
[0, 1, 2, 3, 4, 5]

This raises the question: do we need both a ``break`` statement and 
``raise StopIteration``? Can the former just be made into syntactic sugar 
for the latter?

--
components: Interpreter Core
messages: 69496
nosy: carlj
severity: normal
status: open
title: Possible inconsistency in behavior of list comprehensions vs. generator 
expressions
type: behavior
versions: Python 3.0

___
Python tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue3331>
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1284316] Win32: Security problem with default installation directory

2007-12-01 Thread Carl Karsten

Carl Karsten added the comment:

Another reason to fix: perception.  Installing to the root looks like a
hack.  Installing to the proper place* looks professional.

As for it being hard to type, either add it to PATH or put a .bat file
in the path.  I think Vista even supports some sort of symlink, so that
might be best.

As for easy_install.exe and others breaking when they hit a space, they
should be fixed too.  Avoiding fixing them means people who try to force
the installer to do the right thing end up with a headache, which is evil.

* proper place isn't always "C:\Program Files" - the installer builder
should have an option to determine what it should be.  The environment
var %ProgramFiles% holds the correct path.  There is an API call too.
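
As a minimal sketch of what such an installer option (or any script) could do, 
using only the standard library and falling back to the conventional path when 
the variable is unset:

    import os

    # %ProgramFiles% is set by Windows; fall back to the usual default otherwise.
    program_files = os.environ.get("ProgramFiles", r"C:\Program Files")
    print(program_files)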

--
nosy: +carlfk

_
Tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue1284316>
_
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1713] posixpath.ismount() claims symlink to .. is mountpoint.

2007-12-30 Thread Carl Drougge

Changes by Carl Drougge:


--
components: Library (Lib)
nosy: drougge
severity: minor
status: open
title: posixpath.ismount() claims symlink to .. is mountpoint.
type: behavior
versions: Python 2.4, Python 2.5

__
Tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue1713>
__
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1713] posixpath.ismount() claims symlink to a mountpoint is a mountpoint.

2007-12-30 Thread Carl Drougge

New submission from Carl Drougge:

Sorry, this happened to me in /tmp, where it's actually true, except I 
don't expect symlinks to be considered mountpoints, so I still consider 
it a bug. Should have tested more though.

--
title: posixpath.ismount() claims symlink to .. is mountpoint. -> 
posixpath.ismount() claims symlink to a mountpoint is a mountpoint.

__
Tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue1713>
__
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue2244] urllib and urllib2 decode userinfo multiple times

2008-03-06 Thread Carl Meyer

New submission from Carl Meyer:

Both urllib and urllib2 call urllib.unquote() multiple times on data in
the userinfo section of an FTP URL.  One call occurs at the end of the
urllib.splituser() function.  In urllib, the other call appears in
URLOpener.open_ftp().  In urllib2, the other two occur in
FTPHandler.ftp_open() and Request.get_host().

The effect of this is that if the userinfo section of an FTP url should
need to contain a literal % sign followed by two digits, the % sign must
be double-encoded as %2525 (for urllib) or triple-encoded as %252525
(for urllib2) in order for the URL to be accessed.
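
A minimal sketch of the effect (Python 2 urllib shown, since this report is 
against 2.5; the Python 3 equivalent is urllib.parse.unquote):

    from urllib import unquote

    s = "%2525"
    print(unquote(s))           # '%25' -- decoded once, as the RFC allows
    print(unquote(unquote(s)))  # '%'   -- a second decode misreads the data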

The proper behavior would be to only ever unquote a given data segment
once.  The URI Generic Syntax RFC, RFC 3986
(http://gbiv.com/protocols/uri/rfc/rfc3986.html), addresses this very
issue in section 2.4 (When to Encode or Decode): "Implementations must
not percent-encode or decode the same string more than once, as decoding
an already decoded string might lead to misinterpreting a percent data
octet as the beginning of a percent-encoding, or vice versa in the case
of percent-encoding an already percent-encoded string."

The solution would be to standardize where in urllib and urllib2 the
unquoting happens, and then make sure it happens nowhere else.  I'm not
familiar enough with the libraries to know where it should be removed
without possibly breaking other behavior.  It seems that just removing
the map/unquote call in urllib.splituser() would fix the problem in
urllib.  I would guess the call in urllib2 Request.get_host() should
also be removed, as the RFC referenced above says clearly that only
individual data segments of the URL should be decoded, not larger
portions that might contain delimiters (: and @).

I've attached a patchset for these suggested changes.  Very superficial
testing suggests that the patch doesn't break anything obvious, but I
make no guarantees.

--
components: Library (Lib)
files: urllib-issue.patch
keywords: patch
messages: 63324
nosy: carljm
severity: normal
status: open
title: urllib and urllib2 decode userinfo multiple times
type: behavior
versions: Python 2.5
Added file: http://bugs.python.org/file9621/urllib-issue.patch

__
Tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue2244>
__
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue4627] Add Mac OS X Disk Images to Python.org homepage

2008-12-10 Thread Carl Johnson

New submission from Carl Johnson <[EMAIL PROTECTED]>:

As recently as Python 2.6.0's release, Python.org had a link to download a 
disk image with a special newb-friendly installer for OS X. See 
http://www.python.org/download/releases/2.6/

Now, it's gone in Python 2.6.1, and it was never there for Python 3.0. 
Which is a pain, because it's really hard to get "readline" working when 
installing just with configure/make/install.

So, whoever is in charge of making that disk image should make it again.

--
components: Macintosh
messages: 77591
nosy: carlj
severity: normal
status: open
title: Add Mac OS X Disk Images to Python.org homepage
type: compile error
versions: Python 2.6, Python 3.0

___
Python tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue4627>
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue5062] Rlcompleter.Completer does not use __dir__ magic method

2009-01-25 Thread Carl Johnson

New submission from Carl Johnson :

The documentation at http://docs.python.org/library/rlcompleter.html
claims that

Completer.complete(text, state)

   Return the *state*th completion for *text*.

   If called for text that doesn’t include a period character ('.'), it
will complete from names currently defined in __main__, __builtin__ and
keywords (as defined by the keyword module).

   If called for a dotted name, it will try to evaluate anything without
obvious side-effects (functions will not be evaluated, but it can
generate calls to __getattr__()) up to the last part, and find matches
for the rest via the dir() function. Any exception raised during the
evaluation of the expression is caught, silenced and None is returned.

In other words, it claims to use dir(obj) as part of the tab completion
process. This is not true (using Python 2.6.1 on OS X):

>>> class B(object):
...  def __dir__(self): return dir(u"") #Makes B objects look like strings
...
>>> b = B()
>>> dir(b)
['__add__', '__class__', '__contains__', '__delattr__', '__doc__',
'__eq__', '__format__', '__ge__', '__getattribute__', '__getitem__',
'__getnewargs__', '__getslice__', '__gt__', '__hash__', '__init__',
'__le__', '__len__', '__lt__', '__mod__', '__mul__', '__ne__',
'__new__', '__reduce__', '__reduce_ex__', '__repr__', '__rmod__',
'__rmul__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__',
'_formatter_field_name_split', '_formatter_parser', 'capitalize',
'center', 'count', 'decode', 'encode', 'endswith', 'expandtabs', 'find',
'format', 'index', 'isalnum', 'isalpha', 'isdecimal', 'isdigit',
'islower', 'isnumeric', 'isspace', 'istitle', 'isupper', 'join',
'ljust', 'lower', 'lstrip', 'partition', 'replace', 'rfind', 'rindex',
'rjust', 'rpartition', 'rsplit', 'rstrip', 'split', 'splitlines',
'startswith', 'strip', 'swapcase', 'title', 'translate', 'upper', 'zfill']
>>> c = rlcompleter.Completer()
>>> c.complete("b.", 0) #Notice that it does NOT return __add__
u'b.__class__('
>>> c.matches #Notice that this list is completely different from the
list given by dir(b)
[u'b.__class__(', u'b.__delattr__(', u'b.__doc__', u'b.__format__(',
u'b.__getattribute__(', u'b.__hash__(', u'b.__init__(', u'b.__new__(',
u'b.__reduce__(', u'b.__reduce_ex__(', u'b.__repr__(',
u'b.__setattr__(', u'b.__sizeof__(', u'b.__str__(',
u'b.__subclasshook__(', u'b.__class__(', u'b.__class__(',
u'b.__delattr__(', u'b.__dict__', u'b.__dir__(', u'b.__doc__',
u'b.__format__(', u'b.__getattribute__(', u'b.__hash__(',
u'b.__init__(', u'b.__module__', u'b.__new__(', u'b.__reduce__(',
u'b.__reduce_ex__(', u'b.__repr__(', u'b.__setattr__(',
u'b.__sizeof__(', u'b.__str__(', u'b.__subclasshook__(',
u'b.__weakref__', u'b.__class__(', u'b.__delattr__(', u'b.__doc__',
u'b.__format__(', u'b.__getattribute__(', u'b.__hash__(',
u'b.__init__(', u'b.__new__(', u'b.__reduce__(', u'b.__reduce_ex__(',
u'b.__repr__(', u'b.__setattr__(', u'b.__sizeof__(', u'b.__str__(',
u'b.__subclasshook__(']

Suggested course of action: 

* Change the documentation for Python 2.6/3.0.
* Update Completer to use __dir__ in Pythons 2.7/3.1 and revert the
documentation.

See
http://mail.python.org/pipermail/python-dev/2009-January/thread.html#85471

--
assignee: georg.brandl
components: Documentation, Extension Modules
messages: 80556
nosy: carlj, georg.brandl
severity: normal
status: open
title: Rlcompleter.Completer does not use __dir__ magic method
type: behavior
versions: Python 2.6, Python 3.0

___
Python tracker 
<http://bugs.python.org/issue5062>
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue5062] Rlcompleter.Completer does not use __dir__ magic method

2009-01-26 Thread Carl Johnson

Carl Johnson  added the comment:

It seems to me that it isn't tab completion's place to out-think the
__dir__ method. A) Because the documentation doesn't tell you that it
does (although you are warned that it may call some stuff), and B)
because if someone set up a __dir__ method, they probably are listing
the things that they want listed for a particular reason. I think that
it would be less confusing for rlcompleter to follow the __dir__ method
when it exists and only do its own poking and prodding when it does not.

___
Python tracker 
<http://bugs.python.org/issue5062>
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue5062] Rlcompleter.Completer does not use __dir__ magic method

2009-01-26 Thread Carl Johnson

Carl Johnson  added the comment:

I think that checking to see which things really exist with
getattr/hasattr made sense back in the days before the __dir__, since in
those days the real API for an object could diverge wildly from what was
reported by dir(object), but nowadays, if someone goes to the trouble of
defining the __dir__ method, then we should just trust that as being
"the API" and not do any other checking.

___
Python tracker 
<http://bugs.python.org/issue5062>
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue5062] Rlcompleter.Completer does not use __dir__ magic method

2009-01-26 Thread Carl Johnson

Carl Johnson  added the comment:

Ah, I see. It does a dir(obj), then tests things to see which are
callable, and while it is at that, it removes the names that don't really
exist according to getattr.

Actually, can we go back to the Python 2.5 behavior? I really hate those
auto-added parentheses. For one thing, it screws things up when you do
"help(name<TAB>". Am I missing some really obvious switch that would
turn the behavior back to the old style of ignoring the
callable/non-callable thing?
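
For illustration, here is a rough sketch of one way to approximate the old 
behaviour by overriding Completer.attr_matches -- it trusts dir() (and hence 
__dir__) and skips the callable check, so no '(' is appended. This is only a 
sketch, not the stdlib implementation, and it omits the global-name handling:

    import re
    import rlcompleter

    class PlainCompleter(rlcompleter.Completer):
        def attr_matches(self, text):
            m = re.match(r"(\w+(\.\w+)*)\.(\w*)", text)
            if not m:
                return []
            expr, attr = m.group(1), m.group(3)
            try:
                # self.namespace is set up by Completer.complete(), as in the stdlib
                obj = eval(expr, self.namespace)
            except Exception:
                return []
            return ["%s.%s" % (expr, word)
                    for word in dir(obj) if word.startswith(attr)]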

___
Python tracker 
<http://bugs.python.org/issue5062>
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue4627] Add Mac OS X Disk Images to Python.org homepage

2009-02-13 Thread Carl Johnson

Carl Johnson  added the comment:

Is it possible to reopen this bug? Python 3.0.1 still has no Mac installer…

--
versions:  -Python 2.6

___
Python tracker 
<http://bugs.python.org/issue4627>
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue4627] Add Mac OS X Disk Images to Python.org homepage

2009-02-13 Thread Carl Johnson

Carl Johnson  added the comment:

What's German for "the squeaky wheel gets the grease"? ;-)

___
Python tracker 
<http://bugs.python.org/issue4627>
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue4627] Add Mac OS X Disk Images to Python.org homepage

2009-02-14 Thread Carl Johnson

Carl Johnson  added the comment:

Fair enough. In this case though, I'm not complaining for myself, since
I can build from source with configure/make/install (although I don't know how
to build a Mac installer, or else I would just do it). I'm complaining on
behalf of all the AppleScript users and others who have heard about this
"Python 3" and are interested, but don't go in for using the command
line (yet). It would be a shame to discourage them from learning Python
when they don't see a Mac download link.

___
Python tracker 
<http://bugs.python.org/issue4627>
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue46896] add support for watching writes to selected dictionaries

2022-03-09 Thread Carl Meyer


Change by Carl Meyer :


--
keywords: +patch
pull_requests: +29891
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/31787

___
Python tracker 
<https://bugs.python.org/issue46896>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue46896] add support for watching writes to selected dictionaries

2022-03-09 Thread Carl Meyer


Carl Meyer  added the comment:

Draft PR is up for consideration. Perf data in 
https://gist.github.com/carljm/987a7032ed851a5fe145524128bdb67a

Overall it seems like the base implementation is perf neutral -- maybe a slight 
impact on the pickle benchmarks? With all module global dicts (uselessly) 
watched, there are a few more benchmarks with small regressions, but also some 
with small improvements (just noise I guess?) -- overall still pretty close to 
neutral.

Comments welcome!

--

___
Python tracker 
<https://bugs.python.org/issue46896>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue46896] add support for watching writes to selected dictionaries

2022-03-09 Thread Carl Meyer


Carl Meyer  added the comment:

Hi Dennis, thanks for the questions!

> A curiosity: have you considered watching dict keys rather than whole dicts?

There's a bit of discussion of this above. A core requirement is to avoid any 
memory overhead and minimize CPU overhead on unwatched dicts. Additional memory 
overhead seems like a nonstarter, given the sheer number of dict objects that 
can exist in a large Python system. The CPU overhead for unwatched dicts in the 
current PR consists of a single added `testb` and `jne` (for checking if the 
dict is watched), in the write path only; I think that's effectively the 
minimum possible.

It's not clear to me how to implement per-key watching under this constraint. 
One option Brandt mentioned above is to steal the low bit of a `PyObject` 
pointer; in theory we could do this on `me_key` to implement per-key watching 
with no memory overhead. But then we are adding bit-masking overhead on every 
dict read and write. I think we really want the implementation here to be 
zero-overhead in the dict read path.

Open to suggestions if I've missed a good option here!

> That way, changing global values would not have to de-optimize, only adding 
> new global keys would.

> Indexing into dict values array wouldn't be as efficient as embedding direct 
> jump targets in JIT-generated machine code, but as long as we're not doing 
> that, maybe watching the keys is a happy medium?

But we are doing that, in the Cinder JIT. Dict watching here is intentionally 
exposed for use by extensions, including hopefully  in future the Cinder JIT as 
an installable extension. We burn exact pointer values for module globals into 
generated JIT code and deopt if they change (we are close to landing a change 
to code-patch instead of deopting.) This is quite a bit more efficient in the 
hot path than having to go through a layer of indirection.

I don't want to assume too much about how dict watching will be used in future, 
or go for an implementation that limits its future usefulness. The current PR 
is quite flexible and can be used to implement a variety of caching strategies. 
The main downside of dict-level watching is that a lot of notifications will be 
fired if code does a lot of globals-rebinding in modules where globals are 
watched, but this doesn't appear to be a problem in practice, either in our 
workloads or in pyperformance. It seems likely that a workable strategy if this 
ever was observed to be a problem would be to notice at runtime that globals 
are being re-bound frequently in a particular module and just stop watching 
that module's globals.

--

___
Python tracker 
<https://bugs.python.org/issue46896>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue46896] add support for watching writes to selected dictionaries

2022-03-09 Thread Carl Meyer


Carl Meyer  added the comment:

> have you considered watching dict keys rather than whole dicts?

Just realized that I misunderstood this suggestion; you don't mean per-key 
watching necessarily, you just mean _not_ notifying on dict values changes. Now 
I understand better how that connects to the second part of your comment! But 
yeah, I don't want this limitation on dict watching use cases.

--

___
Python tracker 
<https://bugs.python.org/issue46896>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue46896] add support for watching writes to selected dictionaries

2022-03-10 Thread Carl Meyer


Carl Meyer  added the comment:

Thanks for outlining the use cases. They make sense.

The current PR provides a flexible generic API that fully supports all three of 
those use cases (use cases 2 and 3 are strict subsets of use case 1.) Since the 
callback is called before the dict is modified, all the necessary information 
is available to the callback to decide whether the event is interesting to it 
or not.

The question is how much of the bookkeeping to classify events as "interesting" 
or "uninteresting" should be embedded in the core dispatch vs being handled by 
the callback.

One reason to prefer keeping this logic in the callback is that with 
potentially multiple chained callbacks in play, the filtering logic must always 
exist in the callback, regardless. E.g. if callback A wants to watch only 
keys-version changes to dict X, but callback B wants to watch all changes to 
it, events will fire for all changes, and callback A must still disregard 
"uninteresting" events that it may receive (just like it may receive events for 
dicts it never asked to watch at all.) So providing API for different "levels" 
of watching means that the "is this event interesting to me" predicate must 
effectively be duplicated both in the callback and in the watch level chosen.

The proposed rationale for this complexity and duplication is the idea that 
filtering out uninteresting events at dispatch will provide better performance. 
But this is hypothetical: it assumes the existence of perf-bottleneck code 
paths that repeatedly rebind globals. The only benchmark workload with this 
characteristic that I know of is pystone, and it is not even part of the 
pyperformance suite, I think precisely because it is not representative of 
real-world code patterns. And even assuming that we do need to optimize for 
such code, it's also not obvious that it will be noticeably cheaper in practice 
to filter on the dispatch side.

It may be more useful to focus on API. If we get the API right, internal 
implementation details can always be adjusted in future if a different 
implementation can be shown to be noticeably faster for relevant use cases. And 
if we get existing API right, we can always add new API if we have to. I don't 
think anything about the proposed simple API precludes adding 
`PyDict_WatchKeys` as an additional feature, if it turns out to be necessary.

One modification to the simple proposed API that should improve the performance 
(and ease of implementation) of use case #2 would be to split the current 
`PyDict_EVENT_MODIFIED` into two separate event types: `PyDict_EVENT_MODIFIED` 
and `PyDict_EVENT_NEW_KEY`. Then the callback-side event filtering for use case 
#2 would just be `event == PyDict_EVENT_NEW_KEY` instead of requiring a lookup 
into the dict to see whether the key was previously set or not.

--

___
Python tracker 
<https://bugs.python.org/issue46896>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue46896] add support for watching writes to selected dictionaries

2022-03-10 Thread Carl Meyer


Carl Meyer  added the comment:

I've updated the PR to split `PyDict_EVENT_MODIFIED` into separate 
`PyDict_EVENT_ADDED`, `PyDict_EVENT_MODIFIED`, and `PyDict_EVENT_DELETED` event 
types. This allows callbacks only interested in e.g. added keys (case #2) to 
more easily and cheaply skip uninteresting events.
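
A pure-Python illustration (not the C-level API in the PR, just a sketch of the 
event classification) of how separate ADDED/MODIFIED/DELETED events let a 
callback cheaply ignore changes it does not care about:

    class WatchedDict(dict):
        """Toy stand-in for a watched dict: notify a callback before each write."""
        def __init__(self, callback, *args, **kwargs):
            super().__init__(*args, **kwargs)
            self._callback = callback

        def __setitem__(self, key, value):
            event = "MODIFIED" if key in self else "ADDED"
            self._callback(event, self, key, value)
            super().__setitem__(key, value)

        def __delitem__(self, key):
            self._callback("DELETED", self, key, None)
            super().__delitem__(key)

    def watch_new_keys(event, d, key, value):
        if event != "ADDED":  # use case #2: only newly added keys are interesting
            return
        print("new key:", key)

    d = WatchedDict(watch_new_keys)
    d["x"] = 1   # prints: new key: x
    d["x"] = 2   # MODIFIED -- filtered out by the callback
    del d["x"]   # DELETED  -- also filtered out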

--

___
Python tracker 
<https://bugs.python.org/issue46896>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue46896] add support for watching writes to selected dictionaries

2022-03-15 Thread Carl Meyer


Carl Meyer  added the comment:

> There should not be much of a slowdown for this code when watching `CONST`:

How and when (and based on what data?) would the adaptive interpreter make the 
decision that for this code sample the key `CONST`, but not the key `var`, 
should be watched in the module globals dict? It's easy to contrive an example 
in which it's beneficial to watch one key but not another, but this is 
practically irrelevant unless it's also feasible for an optimizer to 
consistently make the right decision about which key(s) to watch.

The code sample also suggests that the module globals dict for a module is 
being watched while that module's own code object is being executed. In module 
body execution, writing to globals (vs reading them) is relatively much more 
common, compared to any other Python code execution context, and it's much less 
common for the same global to be read many times. Given this, how frequently 
would watching module globals dictionaries during module body execution be a 
net win at all? Certainly cases can be contrived in which it would be, but it 
seems unlikely that it would be a net win overall. And again, unless the 
optimizer can reliably (and in advance, since module bodies are executed only 
once) distinguish the cases where it's a win, it seems the example is not 
practically relevant.

> Another use of this is to add watch points in debuggers.
> To that end, it would better if the callback were a Python object.

It is easy to create a C callback that delegates to a Python callable if 
someone wants to implement this use case, so the vectorcall overhead is paid 
only when needed. The core API doesn't need to be made more complex for this, 
and there's no reason to impose any overhead at all on low-level 
interpreter-optimization use cases.

--

___
Python tracker 
<https://bugs.python.org/issue46896>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue46896] add support for watching writes to selected dictionaries

2022-03-15 Thread Carl Meyer


Carl Meyer  added the comment:

Thanks for the extended example.

I think in order for this example to answer the question I asked, a few more 
assumptions should be made explicit:

1) Either `spam_var` and/or `eggs_var` are frequently re-bound to new values in 
a hot code path somewhere. (Given the observations above about module-level 
code, we should assume for a relevant example this takes place in a function 
that uses `global spam_var` or `global eggs_var` to allow such rebinding.)

2) But `spam_var` and `eggs_var` are not _read_ in any hot code path anywhere, 
because if they were, then the adaptive interpreter would be just as likely to 
decide to watch them as it is to watch `EGGS_CONST`, in which case any benefit 
of per-key watching in this example disappears. (Keep in mind that with 
possibly multiple watchers around, "unwatching" anything on the dispatch side 
is never an option, so we can't say that the adaptive interpreter would decide 
to unwatch the frequently-re-bound keys after it observes them being re-bound. 
It can  always "unwatch" them in the sense of no longer being interested in 
them in its callback, though.)

It is certainly possible that this case could occur, where some module contains 
both a frequently-read-but-not-written global and also a global that is 
re-bound using `global` keyword in a hot path, but rarely read. But it doesn't 
seem warranted to pre-emptively add a lot of complexity to the API in order to 
marginally improve the performance of this quite specific case, unsupported by 
any benchmark or sample workload demonstrating it.

> This might not be necessary for us right now

I think it's worth keeping in mind that `PyDict_WatchKey` API can always be 
added later without disturbing or changing semantics of the `PyDict_Watch` API 
added here.

--

___
Python tracker 
<https://bugs.python.org/issue46896>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue36990] test_asyncio.test_create_connection_ipv6_scope fails(in mock test?)

2019-09-09 Thread Carl Jacobsen


Change by Carl Jacobsen :


--
nosy: +CarlRJ

___
Python tracker 
<https://bugs.python.org/issue36990>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue43520] Fraction only handles regular slashes ("/") and fails with other similar slashes

2021-03-16 Thread Carl Anderson

New submission from Carl Anderson :

Fraction works with a regular slash:

>>> from fractions import Fraction
>>> Fraction("1/2")
Fraction(1, 2)

but there are other, similar slashes, such as the fraction slash ⁄ (0x2044), 
with which it throws an error:

>>> Fraction("0⁄2")
Traceback (most recent call last):
  File "", line 1, in 
  File "/opt/anaconda3/lib/python3.7/fractions.py", line 138, in __new__
numerator)
ValueError: Invalid literal for Fraction: '0⁄2'


This seems to come from the (?:/(?P<denom>\d+))? section of the regex 
_RATIONAL_FORMAT in fractions.py

--
components: Library (Lib)
messages: 388865
nosy: weightwatchers-carlanderson
priority: normal
severity: normal
status: open
title: Fraction only handles regular slashes ("/") and fails with other similar 
slashes
type: enhancement
versions: Python 3.7

___
Python tracker 
<https://bugs.python.org/issue43520>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue43520] Fraction only handles regular slashes ("/") and fails with other similar slashes

2021-03-16 Thread Carl Anderson

Carl Anderson  added the comment:

from https://en.wikipedia.org/wiki/Slash_(punctuation) there is

U+002F / SOLIDUS
U+2044 ⁄ FRACTION SLASH
U+2215 ∕ DIVISION SLASH
U+29F8 ⧸ BIG SOLIDUS
U+FF0F ／ FULLWIDTH SOLIDUS (fullwidth version of solidus)
U+1F67C 🙼 VERY HEAVY SOLIDUS

In XML and HTML, the slash can also be represented with the character entity 
&sol; or &#47; or &#x2F;.[42]

there are a couple more listed here:

https://unicode-search.net/unicode-namesearch.pl?term=SLASH

--

___
Python tracker 
<https://bugs.python.org/issue43520>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue43520] Fraction only handles regular slashes ("/") and fails with other similar slashes

2021-03-16 Thread Carl Anderson

Carl Anderson  added the comment:

I guess if we are doing slashes, then the division sign ÷ (U+00F7) should be 
included too. 

There are at least 2 minus signs too (U+002D, U+02D7).

--

___
Python tracker 
<https://bugs.python.org/issue43520>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue43562] test_ssl.NetworkedTests.test_timeout_connect_ex fails if network is unreachable

2021-03-19 Thread Carl Meyer


New submission from Carl Meyer :

In general it seems the CPython test suite takes care to not fail if the 
network is unreachable, but `test_timeout_connect_ex` fails because the result 
code of the connection is checked without any exception being raised that would 
reach `support.transient_internet`.

--
components: Tests
messages: 389113
nosy: carljm
priority: normal
severity: normal
status: open
title: test_ssl.NetworkedTests.test_timeout_connect_ex fails if network is 
unreachable
type: behavior
versions: Python 3.10, Python 3.8, Python 3.9

___
Python tracker 
<https://bugs.python.org/issue43562>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue43562] test_ssl.NetworkedTests.test_timeout_connect_ex fails if network is unreachable

2021-03-19 Thread Carl Meyer


Change by Carl Meyer :


--
keywords: +patch
pull_requests: +23697
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/24937

___
Python tracker 
<https://bugs.python.org/issue43562>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue43564] some tests in test_urllib2net fail instead of skipping on unreachable network

2021-03-19 Thread Carl Meyer


New submission from Carl Meyer :

In general it seems the CPython test suite takes care to skip instead of 
failing networked tests when the network is unavailable (c.f. 
`support.transient_internet` test helper).

In this case of the 5 FTP tests in `test_urllib2net` (that is, `test_ftp`, 
`test_ftp_basic`, `test_ftp_default_timeout`, `test_ftp_no_timeout`, and 
`test_ftp_timeout`), even though they use `support_transient_internet`, they 
still fail if the network is unavailable.

The reason is that they make calls which end up raising an exception in the 
form `URLError("ftp error: OSError(101, 'Network is unreachable')")` -- the 
original OSError is flattened into the exception string message, but is 
otherwise not in the exception args. This means that `transient_network` does 
not detect it as a suppressable exception.

It seems like many uses of `URLError` in urllib pass the original `OSError` 
directly to `URLError.__init__()`, which means it ends up in `args` and the 
unwrapping code in `transient_internet` is able to find the original `OSError`. 
But the ftp code instead directly interpolates the `OSError` into a new message 
string.
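
A minimal sketch of the difference between the two wrapping styles (illustrative 
only; run against urllib.error.URLError):

    from urllib.error import URLError

    os_err = OSError(101, "Network is unreachable")

    wrapped = URLError(os_err)                         # OSError kept in args/reason
    flattened = URLError("ftp error: %r" % (os_err,))  # only a string survives

    print(wrapped.args)    # (OSError(101, 'Network is unreachable'),)
    print(flattened.args)  # ("ftp error: OSError(101, 'Network is unreachable')",)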

--
components: Tests
messages: 389115
nosy: carljm
priority: normal
severity: normal
status: open
title: some tests in test_urllib2net fail instead of skipping on unreachable 
network
type: behavior
versions: Python 3.10

___
Python tracker 
<https://bugs.python.org/issue43564>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue43564] ftp tests in test_urllib2net fail instead of skipping on unreachable network

2021-03-19 Thread Carl Meyer


Change by Carl Meyer :


--
title: some tests in test_urllib2net fail instead of skipping on unreachable 
network -> ftp tests in test_urllib2net fail instead of skipping on unreachable 
network

___
Python tracker 
<https://bugs.python.org/issue43564>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue43564] ftp tests in test_urllib2net fail instead of skipping on unreachable network

2021-03-19 Thread Carl Meyer


Change by Carl Meyer :


--
keywords: +patch
pull_requests: +23699
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/24938

___
Python tracker 
<https://bugs.python.org/issue43564>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue43564] ftp tests in test_urllib2net fail instead of skipping on unreachable network

2021-03-19 Thread Carl Meyer


Carl Meyer  added the comment:

Created a PR that fixes this by being more consistent in how urllib wraps 
network errors. If there are backward-compatibility concerns with this change, 
another option could be some really ugly regex-matching code in 
`test.support.transient_internet`.

--

___
Python tracker 
<https://bugs.python.org/issue43564>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue43520] Make Fraction(string) handle non-ascii slashes

2021-03-22 Thread Carl Anderson


Carl Anderson  added the comment:

>Carl: can you say more about the problem that motivated this issue?

@mark.dickinson

I was parsing a large corpus of ingredient strings from web-scraped recipes. 
My code to interpret strings such as "1/2 cup sugar" would fall over every so 
often due to this issue, because the recipes used the fraction slash and other 
visually similar characters.

--

___
Python tracker 
<https://bugs.python.org/issue43520>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue43520] Make Fraction(string) handle non-ascii slashes

2021-03-23 Thread Carl Anderson

Carl Anderson  added the comment:

>The proposal I like is for a unicode numeric normalization functions that 
>return the ascii equivalent to exist.

@Gregory P. Smith 
this makes sense to me. That does feel like the cleanest solution. 
I'm currently doing s = s.replace("⁄","/"), but it would be good to have a 
well-maintained normalization method that contains all the relevant 
mappings; using it as an independent preprocessing step before Fraction would 
work well.
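
A minimal sketch of such a preprocessing step, using the slash variants listed 
earlier in this thread (the mapping is illustrative, not exhaustive):

    from fractions import Fraction

    SLASH_VARIANTS = "\u2044\u2215\u29f8\uff0f"  # fraction, division, big, fullwidth
    _TO_ASCII = str.maketrans({c: "/" for c in SLASH_VARIANTS})

    def parse_fraction(text):
        return Fraction(text.translate(_TO_ASCII))

    print(parse_fraction("1⁄2"))  # Fraction(1, 2)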

--

___
Python tracker 
<https://bugs.python.org/issue43520>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue17088] ElementTree incorrectly refuses to write attributes without namespaces when default_namespace is used

2021-06-20 Thread Carl Schaefer


Change by Carl Schaefer :


--
nosy: +carlschaefer

___
Python tracker 
<https://bugs.python.org/issue17088>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue45384] Accept Final as indicating ClassVar for dataclass

2021-10-10 Thread Carl Meyer


Carl Meyer  added the comment:

> Are Final default_factory fields real fields or pseudo-fields? (i.e. are they 
> returned by dataclasses.fields()?)

They are real fields, returned by `dataclasses.fields()`.

In my opinion, the behavior change proposed in this bug is a bad idea all 
around, and should not be made, and the inconsistency with PEP 591 should 
rather be resolved by explicitly specifying the interaction with dataclasses in 
a modification to the PEP.

Currently the meaning of:

```
@dataclass
class C:
    x: Final[int] = 3
```

is well-defined, intuitive, and implemented consistently both in the runtime 
and in type checkers. It specifies a dataclass field of type `int`, with a 
default value of `3` for new instances, which can be overridden with an init 
arg, but cannot be modified (per type checker; runtime doesn't enforce Final) 
after the instance is initialized.

Changing the meaning of the above code to be "a dataclass with no fields, but 
one final class attribute of value 3" is a backwards-incompatible change to a 
less useful and less intuitive behavior.

I argue the current behavior is intuitive because in general the type 
annotation on a dataclass attribute applies to the eventual instance attribute, 
not to the immediate RHS -- this is made very clear by the fact that 
typecheckers happily accept `x: int = dataclasses.field(...)` which in a 
non-dataclass context would be a type error. Therefore the Final should 
similarly be taken to apply to the eventual instance attribute, not to the 
immediate assignment. And therefore it should not (in the case of dataclasses) 
imply ClassVar.
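
A minimal runtime illustration of the current behaviour described above (the 
runtime does not enforce Final; only type checkers do):

    from dataclasses import dataclass, fields
    from typing import Final

    @dataclass
    class C:
        x: Final[int] = 3

    print([f.name for f in fields(C)])  # ['x'] -- a real field
    print(C().x, C(5).x)                # 3 5   -- default, overridable via __init__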

I realize that this means that if we want to allow final class attributes on 
dataclasses, it would require wrapping an explicit ClassVar around Final, which 
violates the current text of PEP 591. I would suggest this is simply because 
that PEP did not consider the specific case of dataclasses, and the PEP should 
be amended to carve out dataclasses specifically.

--
nosy: +carljm

___
Python tracker 
<https://bugs.python.org/issue45384>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue45384] Accept Final as indicating ClassVar for dataclass

2021-10-10 Thread Carl Meyer

Carl Meyer  added the comment:

Good idea to check with the PEP authors. 

I don’t think allowing both ClassVar and Final in dataclasses requires general 
intersection types. Neither ClassVar nor Final are real types; they aren’t part 
of the type of the value.  They are more like special annotations on a name, 
which are wrapped around a type as syntactic convenience. You’re right that it 
would require more than just amendment to the PEP text, though; it might 
require changes to type checkers, and it would also require changes to the 
runtime behavior of the `typing` module to special-case allowing 
`ClassVar[Final[…]]`. And the downside of this change is that it couldn’t be 
context sensitive to only be allowed in dataclasses. But I think this isn’t a 
big problem; type checkers could still error on that wrapping in non dataclass 
contexts if they want to. 

But even if that change can’t be made, I think backwards compatibility still 
precludes changing the interpretation of `x: Final[int] = 3` on a dataclass, 
and it is more valuable to be able to specify Final instance attributes 
(fields) than final class attributes on dataclasses.

--

___
Python tracker 
<https://bugs.python.org/issue45384>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue39318] NamedTemporaryFile could cause double-close on an fd if _TemporaryFileWrapper throws

2020-01-13 Thread Carl Harris


Change by Carl Harris :


--
nosy: +hitbox

___
Python tracker 
<https://bugs.python.org/issue39318>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue39428] allow creation of "symtable entry" objects from Python

2020-01-22 Thread Carl Meyer


New submission from Carl Meyer :

Currently the "symtable entry" extension type (PySTEntry_Type) defined in 
`Python/symtable.c` defines no `tp_new` or `tp_init`, making it impossible to 
create instances of this type from Python code.

I have a use case for pickling symbol tables (as part of a cache subsystem for 
a static analyzer), but the inability to create instances of symtable entries 
from attributes makes this impossible, even with custom pickle support via 
dispatch_table or copyreg.

If the idea of making instances of this type creatable from Python is accepted 
in principle, I can submit a PR for it.

Thanks!

--
messages: 360522
nosy: carljm
priority: normal
severity: normal
status: open
title: allow creation of "symtable entry" objects from Python
type: enhancement

___
Python tracker 
<https://bugs.python.org/issue39428>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue35799] fix or remove smtpd.PureProxy

2020-02-05 Thread Carl Harris


Change by Carl Harris :


--
nosy: +hitbox

___
Python tracker 
<https://bugs.python.org/issue35799>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue3950] turtle.py: bug in TurtleScreenBase._drawimage

2020-02-10 Thread Carl Tyndall


Change by Carl Tyndall :


--
pull_requests: +17809
pull_request: https://github.com/python/cpython/pull/18435

___
Python tracker 
<https://bugs.python.org/issue3950>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue40255] Fixing Copy on Writes from reference counting

2020-04-14 Thread Carl Meyer


Carl Meyer  added the comment:

> Anything that is touched by the immortal object will be leaked. This can also 
> happen in obscure ways if reference cycles are created.

I think this is simply expected behavior if you choose to create immortal 
objects, and not really an issue. How could you have an immortal object that 
doesn't keep its strong references alive?

> this does not fully cover all cases as objects that become tracked by the GC 
> after they are modified (for instance, dicts and tuples that only contain 
> immutable objects). Those objects will still participate in reference 
> counting after they start to be tracked.

I think the last sentence here is not quite right. An immortalized object will 
never start participating in reference counting again after it is immortalized.

There are two cases. If at the time of calling `immortalize_heap()` you have a 
non-GC-tracked object that is also not reachable from any GC-tracked container, 
then it will not be immortalized at all, so will be unaffected. This is a side 
effect of the PR using the GC to find objects to immortalize.

If the non-GC-tracked object is reachable from a GC-tracked object (I believe 
this is by far the more common case), then it will be immortalized. If it later 
becomes GC-tracked, it will start participating in GC (but the immortal bit 
causes it to appear to the GC to have a very high reference count, so GC will 
never collect it or any cycle it is part of), but that will not cause it to 
start participating in reference counting again.

> if immortal objects are handed to extension modules compiled with the other 
> version of the macros, the reference count can be corrupted

I think the word "corrupted" makes this sound worse than it is in practice. 
What happens is just that the object is still effectively immortal (because the 
immortal bit is a very high bit), but the copy-on-write benefit is lost for the 
objects touched by old extensions.

> 1.17x slower on logging_silent or unpickle_pure_python is a very expensive 
> price

Agreed. It seems the only way this makes sense is under an ifdef and off by 
default. CPython does a lot of that for debug features; this might be the first 
case of doing it for a performance feature?

> I would be more interested by an experiment to move ob_refcnt outside 
> PyObject to solve the Copy-on-Write issue

It would certainly be interesting to see results of such an experiment. We 
haven't tried that for refcounts, but in the work that led to `gc.freeze()` we 
did try relocating the GC header to a side location. We abandoned that because 
the memory overhead of adding a single indirection pointer to every PyObject 
was too large to even consider the option further. I suspect that this memory 
overhead issue and/or likely cache locality problems will make moving refcounts 
outside PyObject look much worse for performance than this immortal-instances 
patch does.

--
nosy: +carljm

___
Python tracker 
<https://bugs.python.org/issue40255>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue40255] Fixing Copy on Writes from reference counting

2020-04-14 Thread Carl Meyer


Carl Meyer  added the comment:

> An immortalized object will never start participating in reference counting 
> again after it is immortalized.

Well, "passed to an extension compiled with no-immortal headers" is an 
exception to this.

But for the "not GC tracked but later becomes GC tracked" case, it will not 
re-enter reference counting, only the GC.

--

___
Python tracker 
<https://bugs.python.org/issue40255>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue40255] Fixing Copy on Writes from reference counting

2020-04-14 Thread Carl Meyer


Carl Meyer  added the comment:

> This may break the garbage collector algorithm that relies on the balance 
> between strong references between objects and its reference count to do the 
> calculation of the isolated cycles.

I don't think it really breaks anything. What happens is that the immortal 
object appears to the GC to have a very large reference count, even after 
adjusting for within-cycle references. So cycles including an immortal object 
are always kept alive, which is exactly the behavior one should expect from an 
immortal object.

--

___
Python tracker 
<https://bugs.python.org/issue40255>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue40255] Fixing Copy on Writes from reference counting

2020-04-14 Thread Carl Meyer


Carl Meyer  added the comment:

I think the concerns about "perfect" behavior in corner cases are in general 
irrelevant here.

In the scenarios where this optimization matters, there is no quantitative 
change that occurs at 100% coverage. Preventing 99% of CoW is 99% as good as 
preventing 100% :) So the fact that a few objects here and there in special 
cases could still trigger CoW just doesn't matter; it's still a massive 
improvement over the status quo. (That said, I wouldn't _mind_ improving the 
coverage, e.g. if you can suggest a better way to find all heap objects instead 
of using the GC.)

And similarly, gps is right that the concern that immortal objects can keep 
other objects alive (even via references added after immortalization) is a 
non-issue in practice. There really is no other behavior one could prefer or 
expect instead.

> if said objects (isolated and untracked before and now tracked) acquire 
> strong references to immortal objects, those objects will be visited when the 
> gc starts calculating the isolated cycles and that requires a balanced 
> reference count to work.

I'm not sure what you mean here by "balanced ref count" or by "work" :) What 
will happen anytime an immortal object gets into the GC, for any reason, is 
that the GC will "subtract" cyclic references and see that the immortal object 
still has a large refcount even after that adjustment, and so it will keep the 
immortal object and any cycle it is part of alive. This behavior is correct and 
should be fully expected; nothing breaks. It doesn't matter at all to the GC 
that this large refcount is "fictional," and it doesn't break the GC algorithm, 
it results only in the desired behavior of maintaining immortality of immortal 
objects.

It is perhaps slightly weird that this behavior falls out of the immortal bit 
being a high bit rather than being more explicit. I did do some experimentation 
with trying to explicitly prevent immortal instances from ever entering GC, but 
it turned out to be hard to do that in an efficient way. And motivation to do 
it is low, because there's nothing wrong with the behavior in the existing PR.

--

___
Python tracker 
<https://bugs.python.org/issue40255>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue40255] Fixing Copy on Writes from reference counting

2020-04-15 Thread Carl Meyer


Carl Meyer  added the comment:

> Is it a common use case to load big data and then fork to use preloaded data?

A lot of the "big data" in question here is simply lots of Python 
module/class/code objects resulting from importing lots of Python modules.

And yes, this "pre-fork" model is extremely common for serving Python web 
applications; it is the way most Python web application servers work. We 
already have an example in this thread of another large Python web application 
(YouTube) that had similar needs and considered a similar approach.
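
For context, a minimal sketch of that pre-fork pattern (POSIX only), using the 
existing gc.freeze() API mentioned earlier in this thread; gc.freeze() addresses 
CoW triggered by GC bookkeeping, not the refcount writes this issue targets:

    import gc
    import os

    # Preload whatever the workers will need (modules, config, caches).
    import json  # stands in for the application's real imports

    gc.disable()  # avoid collections while the shared heap is being built
    gc.freeze()   # move surviving objects to the permanent generation (3.7+)

    children = []
    for _ in range(2):      # pre-fork a couple of worker processes
        pid = os.fork()
        if pid == 0:
            gc.enable()     # each child collects only its own new garbage
            # ... a real worker would serve requests from the preloaded data ...
            os._exit(0)
        children.append(pid)

    for pid in children:
        os.waitpid(pid, 0)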

--

___
Python tracker 
<https://bugs.python.org/issue40255>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue40255] Fixing Copy on Writes from reference counting

2020-04-15 Thread Carl Meyer


Carl Meyer  added the comment:

> I would be interested to hear the answer to Antoine's question which is 
> basically: why not using the multiprocessing fork server?

Concretely, because for a long time we have used the uWSGI application server 
and it manages forking worker processes (among other things), and AFAIK nobody 
has yet proposed trying to replace that with something built around the 
multiprocessing module. I'm actually not aware of any popular Python WSGI 
application server built on top of the multiprocessing module (but some may 
exist).

What problem do you have in mind that the fork server would solve? How is it 
related to this issue? I looked at the docs and don't see that it does anything 
to help sharing Python objects' memory between forked processes without CoW.

--

___
Python tracker 
<https://bugs.python.org/issue40255>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com


