[issue42369] Reading ZipFile not thread-safe

2022-01-03 Thread Thomas


Thomas  added the comment:

@khaledk I finally got some time off, so here you go 
https://github.com/1/ParallelZipFile

I cannot offer any support for a more correct implementation of the zip 
specification due to time constraints, but maybe the code is useful to you 
anyway.

--

___
Python tracker 
<https://bugs.python.org/issue42369>
___



difflib.SequenceMatcher fails for larger strings

2007-03-12 Thread Thomas
I'm trying to write a program to test a persons typing speed and show
them their mistakes. However I'm getting weird results when looking
for the differences in longer strings:

import difflib

a = '01234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789'

# now with a few mistakes
b = '012345W7890123456789012W456789012345678901W3456789012345678901234567890W234567890123456789012345W789012345678901234567890123W567890123456W89012345678901234567W90123456789012W4567890123456W890123456789'

s = difflib.SequenceMatcher(None, a, b)

print s.get_matching_blocks()
print s.get_opcodes()

Is this a known bug? Would it just take too long to calculate?
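[Editorial note: this is most likely SequenceMatcher's documented "automatic junk" heuristic rather than a bug: in sequences longer than 200 elements, characters occurring in more than 1% of the sequence are treated as junk, which is exactly the case for long digit strings. A minimal sketch; the autojunk parameter assumes a newer Python than this 2007 post (it was added in 2.7.1/3.1):

import difflib

a = "0123456789" * 21          # > 200 characters, so every digit is "popular"
b = a[:50] + "W" + a[51:]      # one deliberate mistake

# With the default heuristic, most characters are treated as junk and the
# matching blocks look odd.
print(difflib.SequenceMatcher(None, a, b).get_opcodes())

# Disabling the heuristic restores the expected equal/replace/equal opcodes.
print(difflib.SequenceMatcher(None, a, b, autojunk=False).get_opcodes())
]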



[issue43477] from x import * behavior inconsistent between module types.

2021-03-11 Thread Thomas


New submission from Thomas :

I'm looking for clarification as to how `from x import *` should operate when 
importing file/directory-based modules versus when importing a sub-module from 
within a directory-based module.

While looking into a somewhat related issue with pylint, I noticed that `from x 
import *` appears to behave inconsistently when called from within a 
directory-based module on a sub-module. Whereas normally `from x import *` 
intentionally does not cause `x` to be added to the current namespace, when 
called within a directory-based module to import from a sub-module (so, `from 
.y import *` in an `__init__.py`, for example), the sub-module (let's say, `y`) 
*does* end up getting added to the importing namespace. From what I can tell, 
this should not be happening. If this oddity has been documented somewhere, I 
may have just missed it, so please let me know if it has been.
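A minimal illustration of the behavior described above, using hypothetical module names rather than the attached example:

# pkg/__init__.py
from .sub import *        # star-import from a sub-module of this package

# pkg/sub.py
value = 1

# main.py
import pkg
print(pkg.value)          # expected: names from sub are re-exported
print(pkg.sub)            # surprising: the sub-module itself is also reachable,
                          # even though nothing ever did "import pkg.sub"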

This inconsistency is actually setting off pylint (and confusing its AST 
handling code) when you use the full path to reference any member of the 
`asyncio.subprocess` submodule (for example, `asyncio.subprocess.Process`) 
because, according to `asyncio`'s `__init__.py` file, no explicit import of the 
`subprocess` sub-module ever occurs, and yet you can draw the entire path all 
the way to it, and its members. I've attached a generic example of the 
different behaviors (tested with Python 3.9) using simple modules, including a 
demonstration of the sub-module import.

Thomas

--
components: Interpreter Core
files: example.txz
messages: 388530
nosy: kaorihinata
priority: normal
severity: normal
status: open
title: from x import * behavior inconsistent between module types.
type: behavior
versions: Python 3.9
Added file: https://bugs.python.org/file49871/example.txz

___
Python tracker 
<https://bugs.python.org/issue43477>
___



[issue43477] from x import * behavior inconsistent between module types.

2021-03-13 Thread Thomas


Thomas  added the comment:

I've spent a bit of time building (and rebuilding) Python 3.9 with a modified 
`Lib/importlib/_bootstrap.py` and a regenerated `importlib.h` to get some extra 
logging, and I believe the answer I was looking for is `_find_and_load_unlocked`. 
It appears to load the module in question and always attaches it to the parent 
regardless of the contents of `fromlist` (`_find_and_load_unlocked` isn't even 
aware of `fromlist`). The only real condition seems to be "is there a parent/are 
we in a package?". `Lib/importlib/_bootstrap.py` is sparsely documented, so it's 
not immediately obvious whether some other piece of `importlib` depends on this 
behavior. If the author is known, they may be able to give some insight into why 
this decision was made and what the best solution would be.
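For reference, a paraphrased sketch of that parent-attachment step (hypothetical helper name, not the actual CPython source):

import sys

def _attach_to_parent(name, module):
    """Paraphrase of the step in _find_and_load_unlocked discussed above:
    bind a freshly loaded submodule onto its parent package whenever a parent
    exists; the caller's fromlist is never consulted here."""
    parent = name.rpartition('.')[0]
    if parent and parent in sys.modules:
        child = name.rpartition('.')[2]
        setattr(sys.modules[parent], child, module)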

--

___
Python tracker 
<https://bugs.python.org/issue43477>
___



[issue43477] from x import * behavior inconsistent between module types.

2021-03-13 Thread Thomas


Thomas  added the comment:

Ahh, I always forget about blame.

Though the form was different, the initial commit of `importlib` (authored by 
Brett, so the nosy list seems fine for the moment) behaved the same way, and 
had an additional comment noting that the section in question was included to 
maintain backwards compatibility. I checked with Python 2.x and can confirm 
that this was how Python 2.x behaved as well (so I assume that's what the 
comment was for.)

I've tested simply commenting out that section (at a glance, I don't believe it 
has any effect on explicit imports), and for the few scripts I tested with, the 
backtraces were pretty clear: a lot of places in the standard library 
accidentally rely on this quirk. `collections` doesn't import `abc`, `importlib` 
doesn't import `machinery`, `concurrent` doesn't import `futures`, and so on.

The easy, temporary fix would be to just add the necessary imports, then worry 
about `importlib`'s innards when the time comes to cross that bridge. That 
said, I know of only a few of the modules which will need imports added (the 
ones above, essentially), so I can't really say what the full scale of the work 
will be.
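A minimal sketch of that temporary fix for a package relying on the quirk (hypothetical package and module names):

# pkg/__init__.py
from . import sub       # explicit import: pkg.sub is now intentionally available
from .sub import *      # keep re-exporting sub's public names as before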

--

___
Python tracker 
<https://bugs.python.org/issue43477>
___



[issue42369] Reading ZipFile not thread-safe

2021-06-30 Thread Thomas


Thomas  added the comment:

The monkey patch works for me! Thank you very much! (I have only tested 
reading, not writing).

However, the lock contention of Python's ZipFile is so bad that using multiple 
threads actually makes the code run _slower_ than single-threaded code when 
reading a zip file with many small files. For this reason, I am not using 
ZipFile any longer. Instead, I have implemented a subset of the zip spec 
without locks, which gives me a speedup of over 2500% for reading many small 
files compared to ZipFile.

I think that the architecture of ZipFile should be reconsidered, but this 
exceeds the scope of this issue.

--

___
Python tracker 
<https://bugs.python.org/issue42369>
___



[issue34451] docs: tutorial/introduction doesn't mention toggle of prompts

2021-07-12 Thread Thomas


Change by Thomas :


--
keywords: +patch
nosy: +thmsdnnr
nosy_count: 6.0 -> 7.0
pull_requests: +25650
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/27105

___
Python tracker 
<https://bugs.python.org/issue34451>
___



[issue44961] @classmethod doesn't set __wrapped__ the same way as functool's update_wrapper

2021-08-20 Thread Thomas


New submission from Thomas :

@classmethod defines a __wrapped__ attribute that always points to the innermost 
function in a decorator chain, while functools' update_wrapper has been fixed to 
set the wrapper.__wrapped__ attribute after updating the wrapper.__dict__ 
(see https://bugs.python.org/issue17482), so .__wrapped__ points to the next 
decorator in the chain.
This results in inconsistent values of the .__wrapped__ attribute.

Consider this code:

from functools import update_wrapper


class foo_deco:
    def __init__(self, func):
        self._func = func
        update_wrapper(self, func)

    def __call__(self, *args, **kwargs):
        return self._func(*args, **kwargs)


class bar_deco:
    def __init__(self, func):
        self._func = func
        update_wrapper(self, func)

    def __call__(self, *args, **kwargs):
        return self._func(*args, **kwargs)


class Foo:
    @classmethod
    @foo_deco
    def bar_cm(self):
        pass

    @bar_deco
    @foo_deco
    def bar_bar(self):
        pass


print(Foo.bar_cm.__wrapped__)
# 
print(Foo.bar_bar.__wrapped__)
# <__main__.foo_deco object at 0x7fb025445fd0>

# The foo_deco object is available on bar_cm this way though
print(Foo.__dict__['bar_cm'].__func__)
# <__main__.foo_deco object at 0x7fb025445fa0>

It would be more consistent if the fix that was applied to update_wrapper was 
ported to classmethod's construction (or classmethod could invoke 
update_wrapper directly, maybe). It's also worth noting that @staticmethod 
behaves the same and @property doesn't define a .__wrapped__ attribute. For 
@property, I don't know if this is by design or if it was just never ported, 
but I believe it would be a great addition just to be able to go down a 
decorator chain without having to special-case the code.
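For reference, a small demonstration of the functools behavior referred to above (bpo-17482): update_wrapper copies __dict__ first and assigns __wrapped__ last, so each wrapper points one level down the chain rather than to the innermost function. The deco helper below is illustrative, not taken from the report:

from functools import update_wrapper

def deco(func):
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return update_wrapper(wrapper, func)

@deco
@deco
def f():
    pass

print(f.__wrapped__)              # the singly wrapped f: one level down the chain
print(f.__wrapped__.__wrapped__)  # the original, undecorated f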

--
components: Extension Modules
messages: 399965
nosy: Thomas701
priority: normal
severity: normal
status: open
title: @classmethod doesn't set __wrapped__ the same way as functool's 
update_wrapper
type: behavior
versions: Python 3.9

___
Python tracker 
<https://bugs.python.org/issue44961>
___



[issue34451] docs: tutorial/introduction doesn't mention toggle of prompts

2021-09-20 Thread Thomas


Thomas  added the comment:

I added a pull request to attempt to fix this issue. It received a label but no 
review and has gone stale, so I am sending out a ping.

--

___
Python tracker 
<https://bugs.python.org/issue34451>
___



[issue45259] No _heappush_max()

2021-09-21 Thread Thomas


New submission from Thomas :

There is no heappush function for a max heap, even though the supporting helper 
functions (e.g. _siftdown_max()) are already implemented.
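A minimal sketch of what the missing helper could look like, mirroring heapq.heappush but using the private helpers the report mentions (underscore-prefixed, so not a public API):

import heapq

def heappush_max(heap, item):
    """Push item onto a max-heap, keeping the max-heap invariant."""
    heap.append(item)
    heapq._siftdown_max(heap, 0, len(heap) - 1)

# usage with the existing private max-heap helpers
h = [5, 3, 4]
heapq._heapify_max(h)
heappush_max(h, 10)
print(h[0])   # 10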

--
components: Library (Lib)
messages: 402351
nosy: ThomasLee94
priority: normal
severity: normal
status: open
title: No _heappush_max()
versions: Python 3.9

___
Python tracker 
<https://bugs.python.org/issue45259>
___



[issue45259] No _heappush_max()

2021-09-21 Thread Thomas


Change by Thomas :


--
nosy: +rhettinger, stutzbach -ThomasLee94

___
Python tracker 
<https://bugs.python.org/issue45259>
___



[issue45259] No _heappush_max()

2021-09-21 Thread Thomas


Change by Thomas :


--
versions: +Python 3.10, Python 3.11, Python 3.6, Python 3.7, Python 3.8

___
Python tracker 
<https://bugs.python.org/issue45259>
___



[issue39247] dataclass defaults and property don't work together

2021-10-21 Thread Thomas


Thomas  added the comment:

Hello everyone,

A quick look at SO and Google, plus this Python issue and this blog post and its 
comments:
https://florimond.dev/en/posts/2018/10/reconciling-dataclasses-and-properties-in-python/
shows that this is still a problem where dataclass users keep hitting a wall.

The gist here seems to be that there are two ways to solve this:
- Have descriptors be treated differently when found as default values in the 
__init__. I like this solution. The argument against it is that users might want 
to have the descriptor object itself as an instance attribute, and this solution 
would prevent them from doing so. I'd argue that, if the user's intention was to 
have the descriptor object as a default value, the current dataclass 
implementation only allows it in a weird way: as shown above, it actually sets 
and gets the descriptor using the descriptor as its own getter/setter (this 
makes sense once one thinks about how dataclasses are implemented, specifically 
"when" the dataclass modifies the class, but it is nonetheless jarring at first 
glance).

- Add an "alias/name/public_name/..." keyword to the field constructor so that 
we could write _bar: int = field(default=4, alias="bar"). The idea here limits 
the use of the alias to the __init__ method, but I'd go further: the alias 
should be used everywhere we need to show the public API of the dataclass 
(repr, str, to_dict, ...). Basically, if a field has an alias, we only ever 
show / give access to the alias and essentially treat the original attribute 
name as a private name (i.e. if the dataclass maintainer changes the attribute 
name, none of the user code should break).

I like both solutions for the given problem but I still have a preference for 
the first, as it covers more cases that are not shown by the example code: what 
if the descriptor doesn't delegate to a private field on the class? It is a bit 
less common, but one could want to have a field in the init that delegates to a 
resource that is not a field on the dataclass. The first solution allows that, 
the second doesn't.

So I'd like to propose a variation of the first solution that, hopefully, also 
solves the counter argument to that solution:

@dataclass
class FileObject:
    _uploaded_by: str = field(init=False)

    @property
    def uploaded_by(self):
        return self._uploaded_by

    @uploaded_by.setter
    def uploaded_by(self, uploaded_by):
        print('Setter Called with Value ', uploaded_by)
        self._uploaded_by = uploaded_by

    uploaded_by: str = field(default=None, descriptor=uploaded_by)


Basically, add an argument to the field constructor that lets developers tell 
the dataclass machinery that this field requires special handling: in the 
__init__, it should use the default value as it would for normal fields, but at 
the class level it should install the descriptor instead of the default value.

What do you think?

--
nosy: +Thomas701

___
Python tracker 
<https://bugs.python.org/issue39247>
___



[issue39247] dataclass defaults and property don't work together

2021-10-21 Thread Thomas


Thomas  added the comment:

Thinking a little more about this, maybe a different solution would be to 
install default values at the class level without overwriting them in the 
__init__, as happens today. default_factory values should keep being set in the 
__init__, as is the case today.

With this approach:

@dataclass
class Foo:
    bar = field(default=4)
    # assigns 4 to Foo.bar but not to foo.bar (bonus: __init__ will be faster)

    bar = field(default=some_descriptor)
    # assigns some_descriptor to Foo.bar, so Foo().bar does a __get__ on the descriptor

    bar = field(default_factory=SomeDescriptor)
    # assigns a new SomeDescriptor instance to every instance of Foo

    bar = field(default_factory=lambda: some_descriptor)
    # assigns the same descriptor object to every instance of Foo

I don't think this change would break a lot of existing code, as the attribute 
overwrite that happens at the instance level in the __init__ is essentially an 
implementation detail. It also seems this would solve the current problem and 
allow for a cleaner way to assign a descriptor object as a default value. Am I 
missing some obvious problem here?

--

___
Python tracker 
<https://bugs.python.org/issue39247>
___



[issue39247] dataclass defaults and property don't work together

2021-10-21 Thread Thomas


Thomas  added the comment:

Scratch that last one, it leads to problems when mixing descriptors with actual 
default values:

@dataclass
class Foo:
    bar = field(default=some_descriptor)
    # Technically this is a descriptor field without a default value, or at the
    # very least the dataclass constructor can't know, because it doesn't know
    # what field, if any, this delegates to. This means it will show up as
    # optional in the __init__ signature, but it might not be.

    bar = field(default=some_descriptor, default_factory=lambda: 4)
    # This could solve the above problem: the dataclass constructor would
    # install the descriptor at the class level and assign 4 to the instance
    # attribute in the __init__. It still doesn't tell the constructor whether
    # a field is optional when its default value is a descriptor and no
    # default_factory is passed, and it feels a lot more like a hack than
    # anything else.


So ignore my previous message. I'm still 100% behind the "descriptor" arg in 
the field constructor, though :)

PS: Sorry for the noise, I just stumbled onto this problem for the nth time 
and I can't get my brain to shut off.

--

___
Python tracker 
<https://bugs.python.org/issue39247>
___



[issue39247] dataclass defaults and property don't work together

2021-10-21 Thread Thomas


Thomas  added the comment:

Agreed on everything but that last part, which I'm not sure I understand:
> If we allow descriptor to accept an iterable as well you could have multiple 
> descriptors just like normal.
Could you give an example of what you mean with a regular class?

I've had a bit more time to think about this and I think one possible solution 
would be to mix the idea of a "descriptor" argument to the field constructor 
and the idea of not applying regular defaults at __init__ time.


Basically, at dataclass construction time (when the @dataclass decorator 
inspects and enhances the class), apply regular defaults at the class level, 
unless the field has a descriptor argument, in which case install the descriptor 
there instead. At __init__ time, apply only default_factories, unless the field 
has a descriptor argument, in which case apply the regular default value as well.

If the implementation changed in these two ways, we'd have code like this work 
exactly as expected:

from dataclasses import dataclass, field


@dataclass
class Foo:
    _bar: int = field(init=False)

    @property
    def bar(self):
        return self._bar

    @bar.setter
    def bar(self, value):
        self._bar = value

    # field is required,
    # uses descriptor bar for get/set
    bar: int = field(descriptor=bar)

    # field is optional,
    # default of 5 is set at __init__ time
    # using the descriptor bar for get/set
    bar: int = field(descriptor=bar, default=5)

    # field is optional,
    # default value is the descriptor instance,
    # it is set using regular attribute setter
    bar: int = field(default=bar)

Not only does this allow descriptors to be used with dataclasses, it also fixes 
the use case of trying to have a descriptor instance as a default value, because 
the descriptor wouldn't be used to get/set itself.

Although I should say, at this point I'm clearly seeing this with blinders on, 
trying to solve this particular problem... It's probable this solution breaks 
something somewhere that I'm not seeing. Fresh eyes appreciated :)

--

___
Python tracker 
<https://bugs.python.org/issue39247>
___



[issue39247] dataclass defaults and property don't work together

2021-10-21 Thread Thomas


Thomas  added the comment:

Just to rephrase, because the explanation in my last message can be ambiguous:

At dataclass construction time (when the @dataclass decorator inspects and 
enhances the class):

for field in fields:
    if descriptor := getattr(field, 'descriptor'):
        setattr(cls, field.name, descriptor)
    elif default := getattr(field, 'default'):
        setattr(cls, field.name, default)


Then at __init__ time:

for field in fields:
    if (
        (descriptor := getattr(field, 'descriptor'))
        and (default := getattr(field, 'default'))
    ):
        setattr(self, field.name, default)
    elif default_factory := getattr(field, 'default_factory'):
        setattr(self, field.name, default_factory())

Now, this is just pseudo-code to illustrate the point; I know the dataclass 
implementation generates the __init__ on the fly by building its code as a 
string and then exec'ing it. This logic would have to be applied to that 
generated code.

I keep thinking I'm not seeing some obvious problem here, so if something jumps 
out, let me know.

--

___
Python tracker 
<https://bugs.python.org/issue39247>
___



[issue39247] dataclass defaults and property don't work together

2021-10-21 Thread Thomas


Thomas  added the comment:

> An example of multiple descriptors would be to have:
> @cached_property
> @property
> def expensive_calc(self):
>     # Do something expensive

That's decorator chaining. The example you gave is not working code (try to 
return something from expensive_calc and print(obj.expensive_calc()), you'll 
get a TypeError). Correct me if I'm wrong, but I don't think you can chain 
descriptors the way you want unless the descriptors themselves have knowledge 
that they're acting on descriptors. E.g., given:

class Foo:
    @descriptorA
    @descriptorB
    def bar(self):
        return 5

You would need descriptorA to be implemented such that its __get__ method 
returns the .__get__() of whatever it was wrapping (in this case descriptorB).
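For illustration, a minimal sketch of such a chaining-aware descriptor (hypothetical class, not code from this issue):

class descriptorA:
    """Sketch of a 'chaining-aware' wrapper: its __get__ delegates to the
    wrapped object's own __get__ when that object is itself a descriptor."""
    def __init__(self, wrapped):
        self._wrapped = wrapped

    def __get__(self, obj, objtype=None):
        inner = self._wrapped
        if hasattr(inner, "__get__"):            # e.g. a property / descriptorB
            return inner.__get__(obj, objtype)
        return inner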

Either way, at the class level (I mean the Foo class, the one we'd like to make 
a dataclass), all of this doesn't matter because it only sees the outer 
descriptor (descriptorA). Assuming the proposed solution is accepted, you would 
be able to do this:

@dataclass
class Foo:
    @descriptorA
    @descriptorB
    def bar(self):
        return some_value

    @bar.setter
    def bar(self, value):
        ...  # store value

    bar: int = field(descriptor=bar)

and, assuming descriptorA is compatible with descriptorB on both .__get__ and 
.__set__, as stated above, it would work the way you intend it to.

--

___
Python tracker 
<https://bugs.python.org/issue39247>
___



[issue42369] Reading ZipFile not thread-safe

2020-11-16 Thread Thomas


New submission from Thomas :

According to https://docs.python.org/3.5/whatsnew/changelog.html#id108 
bpo-14099, reading multiple ZipExtFiles should be thread-safe, but it is not.

I created a small example where two threads try to read files from the same 
ZipFile simultaneously, which crashes with a Bad CRC-32 error. This is 
especially surprising since all files in the ZipFile only contain 0-bytes and 
have the same CRC.

My use case is a ZipFile with 82000 files. Creating multiple ZipFiles from the 
same "physical" zip file is not a satisfactory workaround because it takes 
several seconds each time. Instead, I open it only once and clone it for each 
thread:

import zipfile

with zipfile.ZipFile("/tmp/dummy.zip", "w") as dummy:
    pass

def clone_zipfile(z):
    z_cloned = zipfile.ZipFile("/tmp/dummy.zip")
    z_cloned.NameToInfo = z.NameToInfo
    z_cloned.fp = open(z.fp.name, "rb")
    return z_cloned

This is a much better solution for my use case than locking. I am using 
multiple threads because I want to finish my task faster, but locking defeats 
that purpose.

However, this cloning is somewhat of a dirty hack and will break when the file 
is not a real file but rather a file-like object.

Unfortunately, I do not have a solution for the general case.
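For what it's worth, a usage sketch of the clone approach with one clone per worker thread (hypothetical helper names; it relies on clone_zipfile above and on the archive being backed by a real file path):

import threading
from concurrent.futures import ThreadPoolExecutor

_local = threading.local()

def read_member(z, name):
    clone = getattr(_local, "clone", None)
    if clone is None:
        clone = _local.clone = clone_zipfile(z)   # reuse the parsed central directory
    return clone.read(name)

def read_all(z, names, workers=8):
    with ThreadPoolExecutor(workers) as pool:
        return list(pool.map(lambda n: read_member(z, n), names))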

--
files: test.py
messages: 381090
nosy: Thomas
priority: normal
severity: normal
status: open
title: Reading ZipFile not thread-safe
versions: Python 3.7, Python 3.8
Added file: https://bugs.python.org/file49601/test.py

___
Python tracker 
<https://bugs.python.org/issue42369>
___



[issue42369] Reading ZipFile not thread-safe

2020-11-16 Thread Thomas


Change by Thomas :


--
components: +Library (Lib)
type:  -> crash

___
Python tracker 
<https://bugs.python.org/issue42369>
___



[issue42369] Reading ZipFile not thread-safe

2020-11-16 Thread Thomas


Thomas  added the comment:

I have not observed any segfaults yet. Only zipfile.BadZipFile exceptions so 
far.

The exact file at which it crashes is fairly random. It even crashes if all 
threads try to read the same file multiple times.

I think the root cause of the problem is that the reads of zef_file in 
ZipFile.read are not locked properly.

https://github.com/python/cpython/blob/c79667ff7921444911e8a5dfa5fba89294915590/Lib/zipfile.py#L1515

The underlying file object is shared between all ZipExtFiles. Every time a 
thread makes a call to ZipFile.read, a new lock is created in _SharedFile, but 
that lock only protects against multiple threads reading the same ZipExtFile. 
Multiple threads reading different ZipExtFiles with the same underlying file 
object will cause trouble. The locks do nothing in this scenario because they 
are individual to each thread and not shared.

--

___
Python tracker 
<https://bugs.python.org/issue42369>
___



[issue42369] Reading ZipFile not thread-safe

2020-11-16 Thread Thomas


Thomas  added the comment:

Scratch what I said in the previous message. I thought that the lock was 
created in _SharedFile and did not notice that it was passed as a parameter.

--

___
Python tracker 
<https://bugs.python.org/issue42369>
___



[issue42369] Reading ZipFile not thread-safe

2020-11-16 Thread Thomas


Thomas  added the comment:

I have simplified the test case a bit more:

import multiprocessing.pool, zipfile

# Create a ZipFile with two files and same content
with zipfile.ZipFile("test.zip", "w", zipfile.ZIP_STORED) as z:
    z.writestr("file1", b"0"*1)
    z.writestr("file2", b"0"*1)

# Read file1 with two threads at once
with zipfile.ZipFile("test.zip", "r") as z:
    pool = multiprocessing.pool.ThreadPool(2)
    while True:
        pool.map(z.read, ["file1", "file1"])

Two files are sufficient to cause the error. It does not matter which files are 
read or which content they have.

I also narrowed down the point of failure a bit. After

self._file.seek(self._pos)

in _SharedFile.read ( 
https://github.com/python/cpython/blob/c79667ff7921444911e8a5dfa5fba89294915590/Lib/zipfile.py#L742
 ), the following assertion should hold:

assert(self._file.tell() == self._pos)

The issue occurs when seeking to position 35 (size of header + length of name). 
Most of the time, self._file.tell() will then be 35 as expected, but sometimes 
it is 8227 instead, i.e. 35 + 8192.

I am not sure how this can happen since the file object should be locked.

--

___
Python tracker 
<https://bugs.python.org/issue42369>
___



[issue24882] ThreadPoolExecutor doesn't reuse threads until #threads == max_workers

2019-05-18 Thread Thomas


Thomas  added the comment:

We ran into this issue in the context of asyncio which uses an internal 
ThreadPoolExecutor to provide an asynchronous getaddrinfo / getnameinfo.

We observed that an async application spawned more and more threads over several 
reconnects. With a maximum of 5 x CPUs, these were dozens of threads, which 
easily looked like a resource leak.

At least in this scenario I would strongly prefer to correctly reuse idle 
threads.

Spawning all possible threads on initialization in such a transparent case 
would be quite bad. Imagine having a process-parallel daemon that runs an 
apparently single-threaded asyncio loop but then gets these executors for doing 
a single asyncio.getaddrinfo. If you run 80 instances on an 80-core machine, 
you get 32,000 extra implicit threads.

Now you can argue whether the default executor in asyncio is good as is, but if 
the executors properly reused idle threads, this would be quite unlikely to be a 
practical problem.
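A small reproduction sketch of the behavior discussed here, assuming a Python version without idle-thread reuse (each submit() starts a new worker until max_workers is reached, even though earlier workers are already idle):

import threading
import time
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=400)   # e.g. 5 x CPUs on a big machine
for _ in range(20):
    executor.submit(lambda: None).result()       # each task finishes before the next starts
    time.sleep(0.1)
    print("threads:", threading.active_count())  # keeps growing despite idle workers
executor.shutdown()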

--
nosy: +tilsche

___
Python tracker 
<https://bugs.python.org/issue24882>
___



[issue24964] Add tunnel CONNECT response headers to httplib / http.client

2015-09-04 Thread Thomas

Thomas added the comment:

Martin: Thanks for your quick answer (and sorry for sending the whole file)!
I think it is indeed a good idea to detach the proxy connection and treat it as 
any other connection, as you did in your patch. It would be great if you were 
able to dig it up!

--

___
Python tracker 
<http://bugs.python.org/issue24964>
___



[issue26628] Segfault in cffi with ctypes.union argument

2016-03-23 Thread Thomas

New submission from Thomas:

Passing ctypes.Union types as arguments crashes python.

Attached is a minimal example to reproduce. Due to undefined behavior, you may 
have to increase the union _fields_ to reproduce. I tested with 3.5.1 and 
2.7.11.

It seems that cffi treats the union as a normal struct. In classify_argument, 
it loops through the type->elements. The byte_offset increases for each union 
element until pos exceeds enum x86_64_reg_class classes[MAX_CLASSES], causing 
an invalid write here:

size_t pos = byte_offset / 8;
classes[i + pos] = merge_classes (subclasses[i], classes[i + pos]);

I am quite scared considering the lack of any index checks in this code. At 
this point I'm not yet sure whether this is a bug in ctypes or libffi.

#0  classify_argument (type=0xce41b8, classes=0x7fffb4e0, byte_offset=8) at 
Python-3.5.1/Modules/_ctypes/libffi/src/x86/ffi64.c:248
#1  0x76bc6409 in examine_argument (type=0xce41b8, 
classes=0x7fffb4e0, in_return=false, pngpr=0x7fffb4dc, 
pnsse=0x7fffb4d8)
at Python-3.5.1/Modules/_ctypes/libffi/src/x86/ffi64.c:318
#2  0x76bc68ce in ffi_call (cif=0x7fffb590, fn=0x7751d5a0, 
rvalue=0x7fffb660, avalue=0x7fffb640) at 
Python-3.5.1/Modules/_ctypes/libffi/src/x86/ffi64.c:462
#3  0x76bb589e in _call_function_pointer (flags=4353, 
pProc=0x7751d5a0, avalues=0x7fffb640, atypes=0x7fffb620, 
restype=0xcdd488, resmem=0x7fffb660, argcount=1)
at Python-3.5.1/Modules/_ctypes/callproc.c:811
#4  0x76bb6593 in _ctypes_callproc (pProc=0x7751d5a0, 
argtuple=0xc8b3e8, flags=4353, argtypes=0xcb2098, restype=0xcdcd38, checker=0x0)
at Python-3.5.1/Modules/_ctypes/callproc.c:1149
#5  0x76baf84f in PyCFuncPtr_call (self=0xcf3708, inargs=0xc8b3e8, 
kwds=0x0) at Python-3.5.1/Modules/_ctypes/_ctypes.c:3869
#6  0x0043b66a in PyObject_Call (func=0xcf3708, arg=0xc8b3e8, kw=0x0) 
at ../../Python-3.5.1/Objects/abstract.c:2165

--
components: ctypes
files: unioncrash.py
messages: 262307
nosy: tilsche
priority: normal
severity: normal
status: open
title: Segfault in cffi with ctypes.union argument
type: crash
versions: Python 2.7, Python 3.5
Added file: http://bugs.python.org/file42263/unioncrash.py

___
Python tracker 
<http://bugs.python.org/issue26628>
___



[issue26628] Segfault in cffi with ctypes.union argument

2016-03-23 Thread Thomas

Thomas added the comment:

Note [http://www.atmark-techno.com/~yashi/libffi.html]

> Although ‘libffi’ has no special support for unions or bit-fields, it is 
> perfectly happy passing structures back and forth. You must first describe 
> the structure to ‘libffi’ by creating a new ffi_type object for it.

--

___
Python tracker 
<http://bugs.python.org/issue26628>
___



[issue26628] Undefined behavior calling C functions with ctypes.Union arguments

2016-03-24 Thread Thomas

Thomas added the comment:

So after some more pondering about the issue I read the documentation again:

> Warning ctypes does not support passing unions or structures with bit-fields 
> to functions by value.

Previously I always read this as 'does not support passing unions with 
bit-fields'... I guess it is meant otherwise. IMHO this should be formulated 
more clearly, like: "does not support passing structures with bit-fields or 
unions to functions by value.".

Also, I would strongly argue for generally prohibiting this with an exception 
instead of just trying whether libffi happens to handle it on the current 
architecture. libffi clearly does not support unions; this just introduces 
subtle bugs.

See also: https://github.com/atgreen/libffi/issues/33

--
title: Segfault in cffi with ctypes.union argument -> Undefined behavior 
calling C functions with ctypes.Union arguments

___
Python tracker 
<http://bugs.python.org/issue26628>
___



[issue26657] Directory traversal with http.server and SimpleHTTPServer on windows

2016-03-28 Thread Thomas

New submission from Thomas:

SimpleHTTPServer and http.server allow directory traversal on Windows.
To exploit this vulnerability, replace all ".." in URLs with "c:c:c:..".


Example:
Run
python -m http.server
and visit
127.0.0.1:8000/c:c:c:../secret_file_that_should_be_secret_but_is_not.txt


There is a warning in the module docs that those modules are not secure,
but for some reason it does not appear in the online docs:
https://docs.python.org/3/library/http.server.html
https://docs.python.org/2/library/simplehttpserver.html


It would be nice if that warning was as apparent as for example here:
https://docs.python.org/2/library/xml.etree.elementtree.html


There are a lot of other URLs that are insecure as well, which can all
be traced back to here:
https://hg.python.org/cpython/file/tip/Lib/http/server.py#l766


The splitdrive and split functions, which should make sure that the
final output is free of "..", are only called once, which leads to this
control flow:
---
path = "c:/secret/public"
word = "c:c:c:.."

_, word = os.path.splitdrive(word) # word = "c:c:.."
_, word = os.path.split(word) # word = "c:.."
path = os.path.join(path, word) # path = "c:/secret/public\\.."
---


Iterating splitdrive and split seems safer:
---
for word in words:
    # Call split and splitdrive multiple times until
    # word does not change anymore.
    has_changed = True
    while has_changed:
        previous_word = word
        _, word = os.path.split(word)
        _, word = os.path.splitdrive(word)
        has_changed = word != previous_word
---






There is another weird thing which I am not quite sure about here:
https://hg.python.org/cpython/file/tip/Lib/http/server.py#l761

---
path = posixpath.normpath(path)
words = path.split('/')
---

posixpath.normpath does not do anything with backslashes and then the
path is split by forward slashes, so it may still contain backslashes.
Maybe replacing posixpath.normpath with os.path.normpath and then
splitting by os.sep would work, but I don't have enough different
operating systems to test this, so someone else should have a look.





I have attached a simple fuzzing test that tries a few weird URLs and
checks whether they lead where they shouldn't.
Disclaimer: it might still contain other bugs.

--
components: Library (Lib)
files: fuzz.py
messages: 262572
nosy: Thomas
priority: normal
severity: normal
status: open
title: Directory traversal with http.server and SimpleHTTPServer on windows
type: security
versions: Python 3.6
Added file: http://bugs.python.org/file42315/fuzz.py

___
Python tracker 
<http://bugs.python.org/issue26657>
___



[issue26657] Directory traversal with http.server and SimpleHTTPServer on windows

2016-03-29 Thread Thomas

Thomas added the comment:

Martin Panter: Regarding the warning, you appear to be correct.
However, reading the source of http.server again made me notice
_url_collapse_path(path)
which seems to have some overlap with translate_path. Also it
crashes with an IndexError if path contains '..'.

Also, yes, python 2.7's SimpleHTTPServer is affected as well.

Discarding weird paths instead of trying to repair them would change semantics, 
but from a user perspective, it would be easier to understand what is going on, 
so I'd agree with that change.

Further, I agree that it would be nice if there was some library function to 
safely handle path operations.
The function you proposed in https://bugs.python.org/issue21109#msg216675 and 
https://bitbucket.org/vadmium/pyrescene/src/34264f6/rescene/utility.py#cl-217 
leaves handling path separators to the user. Maybe that should be handled as 
well?
The function withstood my fuzzing tests on windows, so it might be correct.
There is probably a good reason for disallowing paths that contain /dev/null 
but I don't know why. Could you add a word or two of documentation to explain?

A really high-level solution would be to do away with all the strings and 
handle paths properly as the structure that they represent instead of trying to 
fake all kinds of things with strings, but that is probably beyond the scope of 
this issue.

--
versions: +Python 2.7

___
Python tracker 
<http://bugs.python.org/issue26657>
___



[issue26657] Directory traversal with http.server and SimpleHTTPServer on windows

2016-04-02 Thread Thomas

Thomas added the comment:

Looks OK to me security-wise. But I just noticed that the trailing slash is 
inconsistent on Windows, e.g.:

translate_path('asdf/')
==
'C:\\Users\\User\\Desktop\\temp\\asdf/' <- this slash

because path += '/' is used instead of os.path.sep. But apparently nobody 
complained about this yet, so it probably is not an issue.

--

___
Python tracker 
<http://bugs.python.org/issue26657>
___



[issue26628] Undefined behavior calling C functions with ctypes.Union arguments

2016-04-05 Thread Thomas

Changes by Thomas :


Added file: http://bugs.python.org/file42372/libfoo.c

___
Python tracker 
<http://bugs.python.org/issue26628>
___



[issue26628] Undefined behavior calling C functions with ctypes.Union arguments

2016-04-05 Thread Thomas

Thomas added the comment:

Thanks Eryk for the additional explanation. I added a more elaborate example 
that doesn't abuse a standard C function (which doesn't actually expect a union):

 % gcc -shared -fPIC libfoo.c -o libfoo.so -Wall
 % python pyfoo.py 
*** stack smashing detected ***: python terminated
[1]28463 segmentation fault (core dumped)  python pyfoo.py

The underlying issue is exactly the same as previously described.

I still argue that ctypes should refuse to attempt such a call, and the 
documentation should be clarified, as long as libffi does not support unions.

--
Added file: http://bugs.python.org/file42373/pyfoo.py

___
Python tracker 
<http://bugs.python.org/issue26628>
___



[issue26799] gdb support fails with "Invalid cast."

2016-04-18 Thread Thomas

New submission from Thomas:

Trying to use any kind of python gdb integration results in the following error:

(gdb) py-bt
Traceback (most recent call first):
Python Exception  Invalid cast.: 
Error occurred in Python command: Invalid cast.

I have tracked it down to the _type_... globals, and I am able to fix it with 
the following commands:

(gdb) pi
>>> # Look up the gdb.Type for some standard types:
... _type_char_ptr = gdb.lookup_type('char').pointer() # char*
>>> _type_unsigned_char_ptr = gdb.lookup_type('unsigned char').pointer() # unsigned char*
>>> _type_void_ptr = gdb.lookup_type('void').pointer() # void*
>>> _type_unsigned_short_ptr = gdb.lookup_type('unsigned short').pointer()
>>> _type_unsigned_int_ptr = gdb.lookup_type('unsigned int').pointer()

After this, it works correctly. I was able to work around it by making a 
fix_globals helper that resets the globals on each gdb.Command. I do not 
understand why the originally initialized types are not working properly. It 
feels like gdb inception: trying to debug Python within a gdb that debugs 
CPython while executing Python code.

I have tried this using hg/default cpython (--with-pydebug --without-pymalloc 
--with-valgrind --enable-shared) with:
1) a system install of gdb 7.11, linked against the system libpython 3.5.1;
2) a custom install of gdb 7.11.50.20160411-git, linked against the debug 
cpython I am trying to debug.

--
components: Demos and Tools
messages: 263690
nosy: tilsche
priority: normal
severity: normal
status: open
title: gdb support fails with "Invalid cast."
type: crash
versions: Python 3.6

___
Python tracker 
<http://bugs.python.org/issue26799>
___



[issue26799] gdb support fails with "Invalid cast."

2016-04-19 Thread Thomas

Thomas added the comment:

I have done a bit more digging, and it turns out it is actually no problem at 
all to debug python in gdb with a gdb that has python support (at least using a 
fixed python-gdb.py).

It turns out the type->length of the globally initialized ptr types is wrong: 
it is 4 instead of 8, causing the cast to fail. I suspect the initialization is 
done before the executable is loaded and gdb is using some default. To verify, 
I have put two prints in the global initialization and in a command invocation:

GOBAL INITIALIZATION: gdb.lookup_type('char').pointer().sizeof == 4
COMMAND INVOKE: gdb.lookup_type('char').pointer().sizeof == 8

I guess that, to be fully portable, those types need to be looked up at least 
whenever gdb changes its binary, but I have no idea if there is a hook for that. 
It seems reasonable to just replace those globals with on-demand lookup 
functions or properties.
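A minimal sketch of the on-demand idea (hypothetical helpers; the real globals live in the gdb support script, e.g. _type_char_ptr and friends):

import gdb

def _type_char_ptr():
    # Looked up lazily, at command time, instead of at module import time when
    # gdb may still report a default (wrong) pointer size for the target.
    return gdb.lookup_type('char').pointer()

def _type_unsigned_char_ptr():
    return gdb.lookup_type('unsigned char').pointer()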

If you are interested in the actual python/c stack traces for the error:

Thread 1 "gdb" hit Breakpoint 1, value_cast (type=type@entry=0x2ef91e0, 
arg2=arg2@entry=0x32b13f0) at ../../gdb/valops.c:571
571   error (_("Invalid cast."));
(gdb) py-bt
Traceback (most recent call first):
  
  File "[...]/python-gdb.py", line 1151, in proxyval
field_str = field_str.cast(_type_unsigned_char_ptr)
  File "[...]/python-gdb.py", line 945, in print_traceback
% (self.co_filename.proxyval(visited),
  File "[...]/python-gdb.py", line 1578, in print_traceback
pyop.print_traceback()
  File "[...]/python-gdb.py", line 1761, in invoke
frame.print_traceback()
(gdb) bt
#0  value_cast (type=type@entry=0x2ef91e0, arg2=arg2@entry=0x32b13f0) at 
../../gdb/valops.c:571
#1  0x0052261f in valpy_do_cast (self=, 
args=, op=UNOP_CAST) at ../../gdb/python/py-value.c:525
#2  0x7fc7ce2141de in PyCFunction_Call (func=func@entry=, args=args@entry=(,), kwds=kwds@entry=0x0)
at ../../Python-3.5.1/Objects/methodobject.c:109
#3  0x7fc7ce2c8887 in call_function 
(pp_stack=pp_stack@entry=0x7c81cec8, oparg=oparg@entry=1) at 
../../Python-3.5.1/Python/ceval.c:4655
#4  0x7fc7ce2c57ac in PyEval_EvalFrameEx (

--

___
Python tracker 
<http://bugs.python.org/issue26799>
___



[issue26799] gdb support fails with "Invalid cast."

2016-04-20 Thread Thomas

Thomas added the comment:

The second option seems like the safest choice; attached is a patch that 
addresses just that.

--
keywords: +patch
Added file: http://bugs.python.org/file42538/gdb-python-invalid-cast.patch

___
Python tracker 
<http://bugs.python.org/issue26799>
___



[issue26799] gdb support fails with "Invalid cast."

2016-04-21 Thread Thomas

Thomas added the comment:

Thank you for the quick integration and for fixing the return. I signed the 
electronic form yesterday.

--

___
Python tracker 
<http://bugs.python.org/issue26799>
___



[issue26833] returning ctypes._SimpleCData objects from callbacks

2016-04-23 Thread Thomas

New submission from Thomas:

If a callback function returns a ctypes._SimpleCData object, it will fail with 
a TypeError and complain that it expects a basic type.

Using the qsort example:

def py_cmp_func(a, b):
    print(a.contents, b.contents)
    return c_int(0)

> TypeError: an integer is required (got type c_int)
> Exception ignored in: 

This is somewhat surprising, as it is totally fine to pass a c_int (or an int) 
as a c_int argument. But this is really an issue for subclasses of fundamental 
data types:

(sticking with qsort for simplicity, full example attached)

class CmpRet(c_int):
    pass

cmp_ctype = CFUNCTYPE(CmpRet, POINTER(c_int), POINTER(c_int))

def py_cmp_func(a, b):
    print(a.contents, b.contents)
    return CmpRet(0)

> TypeError: an integer is required (got type CmpRet)
> Exception ignored in: 

This is inconsistent with the no transparent argument/return type conversion 
rule for subclasses.

Consider for instance an enum with a specific underlying type. A subclass (with 
__eq__ on value) from the corresponding ctype can be useful to provide a 
typesafe way to pass / receive those from C. Due to the described behavior, 
this doesn't work for callbacks.

This is related to #5710, that discusses composite types.

--
files: callback_ret_sub.py
messages: 264056
nosy: tilsche
priority: normal
severity: normal
status: open
title: returning ctypes._SimpleCData objects from callbacks
type: behavior
versions: Python 3.5
Added file: http://bugs.python.org/file42575/callback_ret_sub.py

___
Python tracker 
<http://bugs.python.org/issue26833>
___



[issue21461] Recognize -pthread

2021-12-10 Thread Thomas Klausner


Change by Thomas Klausner :


--
pull_requests: +28257
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/30032

___
Python tracker 
<https://bugs.python.org/issue21461>
___



[issue21461] Recognize -pthread

2021-12-10 Thread Thomas Klausner


Thomas Klausner  added the comment:

gcc supports this flag. According to the man page:

You should use this option consistently for both compilation and linking. This 
option is supported on GNU/Linux targets, most other Unix derivatives, and also 
on x86 Cygwin and MinGW targets.

On NetBSD, using -pthread is the recommended method to enable thread support.

clang on NetBSD also supports this flag. I don't have access to clang on other 
systems.

--

___
Python tracker 
<https://bugs.python.org/issue21461>
___



[issue21461] Recognize -pthread

2021-12-10 Thread Thomas Klausner


Thomas Klausner  added the comment:

I must confess, I don't know.
This patch has been in pkgsrc since at least the import of the first python 2.7 
package in 2011, and I haven't dug deeper.

If you think it is unnecessary, I'll trust you. I've just removed it from the 
python 3.10 package in pkgsrc.

--
stage: patch review -> resolved
status: open -> closed

___
Python tracker 
<https://bugs.python.org/issue21461>
___



[issue21459] DragonFlyBSD support

2021-12-10 Thread Thomas Klausner


Thomas Klausner  added the comment:

Not interested in this any longer, and DragonFly's DPorts doesn't carry this 
patch, so it's probably no longer needed.

--
stage:  -> resolved
status: open -> closed

___
Python tracker 
<https://bugs.python.org/issue21459>
___



[issue46045] NetBSD: do not use POSIX semaphores

2021-12-11 Thread Thomas Klausner


New submission from Thomas Klausner :

On NetBSD, by default, the following tests do not finish even after more than an hour:

1:07:13 load avg: 0.00 running: test_compileall (1 hour 7 min), 
test_multiprocessing_fork (1 hour 7 min), test_concurrent_futures (1 hour 6 min)

Defining HAVE_BROKEN_POSIX_SEMAPHORES fixes this, and they finish:

0:00:32 load avg: 10.63 [408/427/17] test_compileall passed ...
...
0:02:37 load avg: 3.04 [427/427/22] test_concurrent_futures passed (2 min 33 
sec)

The last one fails:
test_multiprocessing_fork

with most of the subtests failing like this:

ERROR: test_shared_memory_SharedMemoryServer_ignores_sigint 
(test.test_multiprocessing_fork.WithProcessesTestSharedMemory)
--
Traceback (most recent call last):
  File 
"/scratch/lang/python310/work/Python-3.10.1/Lib/test/_test_multiprocessing.py", 
line 4006, in test_shared_memory_SharedMemoryServer_ignores_sigint
sl = smm.ShareableList(range(10))
  File 
"/scratch/lang/python310/work/Python-3.10.1/Lib/multiprocessing/managers.py", 
line 1372, in ShareableList
sl = shared_memory.ShareableList(sequence)
  File 
"/scratch/lang/python310/work/Python-3.10.1/Lib/multiprocessing/shared_memory.py",
 line 327, in __init__
self.shm = SharedMemory(name, create=True, size=requested_size)
  File 
"/scratch/lang/python310/work/Python-3.10.1/Lib/multiprocessing/shared_memory.py",
 line 92, in __init__
self._fd = _posixshmem.shm_open(
OSError: [Errno 86] Not supported: '/psm_b1ec903a'

I think this is a separate issue, so I'd like to define 
HAVE_BROKEN_POSIX_SEMAPHORES for now.

This has been done in pkgsrc since at least Python 2.7 (in 2011); I haven't dug 
deeper.

--
components: Interpreter Core
messages: 408291
nosy: wiz
priority: normal
severity: normal
status: open
title: NetBSD: do not use POSIX semaphores
type: behavior
versions: Python 3.11

___
Python tracker 
<https://bugs.python.org/issue46045>
___



[issue46045] NetBSD: do not use POSIX semaphores

2021-12-11 Thread Thomas Klausner


Change by Thomas Klausner :


--
keywords: +patch
pull_requests: +28272
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/30047

___
Python tracker 
<https://bugs.python.org/issue46045>
___



[issue46053] NetBSD: ossaudio support incomplete

2021-12-11 Thread Thomas Klausner


New submission from Thomas Klausner :

When compiling Python on NetBSD, the ossaudio module is not enabled:
1. the code tries to export some #defines that are not in the public OSS API 
(but that some other implementations provide);
2. on NetBSD, you need to link against libossaudio when using OSS.

--
components: Extension Modules
messages: 408349
nosy: wiz
priority: normal
severity: normal
status: open
title: NetBSD: ossaudio support incomplete
type: enhancement
versions: Python 3.11

___
Python tracker 
<https://bugs.python.org/issue46053>
___



[issue46053] NetBSD: ossaudio support incomplete

2021-12-11 Thread Thomas Klausner


Change by Thomas Klausner :


--
keywords: +patch
pull_requests: +28285
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/30065

___
Python tracker 
<https://bugs.python.org/issue46053>
___



[issue30512] CAN Socket support for NetBSD

2021-12-11 Thread Thomas Klausner


Change by Thomas Klausner :


--
pull_requests: +28286
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/30066

___
Python tracker 
<https://bugs.python.org/issue30512>
___



[issue46083] PyUnicode_FSConverter() has confusing reference semantics

2021-12-15 Thread Thomas Wouters


New submission from Thomas Wouters :

The PyUnicode_FSConverter function has confusing reference semantics, and 
confusing documentation.

https://docs.python.org/3/c-api/unicode.html#c.PyUnicode_FSConverter says the 
output argument "must be a PyBytesObject* which must be released when it is no 
longer used." That seems to suggest one must pass a PyBytesObject to it, and 
indeed one of the error paths assumes an object was passed 
(https://github.com/python/cpython/blob/main/Objects/unicodeobject.c#L4116; 
'addr' is called 'result' in the docs). Not passing a valid object would result 
in trying to DECREF NULL, or garbage. However, the function doesn't actually 
use the object, and later in the function overwrites the value *without* 
DECREFing it, so passing a valid object would in fact cause a leak.

I understand the function signature is the way it is so it can be used with 
PyArg_ParseTuple's O& format, but there are reasons to call it directly (e.g. 
with METH_O functions), and it would be nice if the semantics were more clear.

--
components: C API
messages: 408604
nosy: twouters
priority: normal
severity: normal
status: open
title: PyUnicode_FSConverter() has confusing reference semantics
versions: Python 3.10, Python 3.11, Python 3.8, Python 3.9

___
Python tracker 
<https://bugs.python.org/issue46083>
___



[issue45996] Worse error from asynccontextmanager in Python 3.10

2021-12-19 Thread Thomas Grainger


Thomas Grainger  added the comment:

> Actually I don't agree with Thomas's logic... his argument feels like 
> consistency for its own sake.

Do you expect sync and async contextmanagers to act differently?

Why would sync contextmanagers raise AttributeError and async contextmanagers 
raise a RuntimeError?

If it's sensible to guard against invalid re-entry for async contextmanagers 
then I think it's sensible to apply the same guard to sync contextmanagers.

--

___
Python tracker 
<https://bugs.python.org/issue45996>
___



[issue34624] -W option and PYTHONWARNINGS env variable does not accept module regexes

2021-12-20 Thread Thomas Gläßle

Thomas Gläßle  added the comment:

OK, it seems at least the incorrect documentation has been fixed in the 
meantime.

I'm going to close this as there seems to be no capacity to deal with this.

--

___
Python tracker 
<https://bugs.python.org/issue34624>
___



[issue34624] -W option and PYTHONWARNINGS env variable does not accept module regexes

2021-12-20 Thread Thomas Gläßle

Change by Thomas Gläßle :


--
stage: patch review -> resolved
status: open -> closed

___
Python tracker 
<https://bugs.python.org/issue34624>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue46150] test_pathlib assumes "fakeuser" does not exist as user

2021-12-22 Thread Thomas Wouters


New submission from Thomas Wouters :

test_pathlib contains, in PosixPathTest.test_expanduser, a check that 
expanduser on a nonexistent user will raise RuntimeError. Leaving aside the 
question why that's a RuntimeError (which is probably too late to fix anyway), 
the test performs this check by assuming 'fakeuser' is a nonexistent user. This 
test will fail when such a user does exist. (The test already uses the pwd 
module for other reasons, so it certainly could check that first.)
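
Since the test already imports pwd, here is a hedged sketch of the kind of
check it could perform (an assumption about a possible fix, not the actual
patch):

```
import pwd

# Derive a user name that is guaranteed not to exist on this system,
# instead of hard-coding 'fakeuser'.
existing = {entry.pw_name for entry in pwd.getpwall()}
fakeuser = "fakeuser"
while fakeuser in existing:
    fakeuser += "1"
# fakeuser is now provably absent and safe to use in the RuntimeError check.
```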

--
components: Tests
messages: 409030
nosy: twouters
priority: normal
severity: normal
status: open
title: test_pathlib assumes "fakeuser" does not exist as user
versions: Python 3.10, Python 3.11, Python 3.9

___
Python tracker 
<https://bugs.python.org/issue46150>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue38415] @asynccontextmanager decorated functions are not callable like @contextmanager

2021-12-22 Thread Thomas Grainger


Change by Thomas Grainger :


--
nosy: +graingert
nosy_count: 3.0 -> 4.0
pull_requests: +28454
pull_request: https://github.com/python/cpython/pull/30233

___
Python tracker 
<https://bugs.python.org/issue38415>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue38415] @asynccontextmanager decorated functions are not callable like @contextmanager

2021-12-22 Thread Thomas Grainger


Thomas Grainger  added the comment:

actually it was already done in 13 months!

--

___
Python tracker 
<https://bugs.python.org/issue38415>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue46308] Unportable test(1) operator in configure script

2022-01-08 Thread Thomas Klausner


New submission from Thomas Klausner :

The configure script uses the test(1) '==' operator, which is only supported by 
bash. The standard comparison operator is '='.

--
components: Installation
messages: 410120
nosy: wiz
priority: normal
severity: normal
status: open
title: Unportable test(1) operator in configure script
type: compile error
versions: Python 3.11

___
Python tracker 
<https://bugs.python.org/issue46308>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue46308] Unportable test(1) operator in configure script

2022-01-08 Thread Thomas Klausner


Change by Thomas Klausner :


--
keywords: +patch
pull_requests: +28693
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/30490

___
Python tracker 
<https://bugs.python.org/issue46308>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue34602] python3 resource.setrlimit strange behaviour under macOS

2022-01-08 Thread Thomas Klausner


Change by Thomas Klausner :


--
nosy: +wiz
nosy_count: 8.0 -> 9.0
pull_requests: +28694
pull_request: https://github.com/python/cpython/pull/30490

___
Python tracker 
<https://bugs.python.org/issue34602>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue46053] NetBSD: ossaudio support incomplete

2022-01-14 Thread Thomas Klausner


Thomas Klausner  added the comment:

ping - this patch needs a review

--

___
Python tracker 
<https://bugs.python.org/issue46053>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue46045] NetBSD: do not use POSIX semaphores

2022-01-14 Thread Thomas Klausner


Thomas Klausner  added the comment:

ping - this patch needs a review

--

___
Python tracker 
<https://bugs.python.org/issue46045>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue46415] ipaddress.ip_{address, network, interface} raise TypeError instead of ValueError if given a tuple as address

2022-01-17 Thread Thomas Cellerier


New submission from Thomas Cellerier :

`IPv*Network` and `IPv*Interface` constructors accept a 2-tuple of (address 
description, netmask) as the address parameter.
When the tuple-based address is used, errors are not propagated correctly 
through the `ipaddress.ip_*` helpers, because the %-formatting interprets the 
tuple as multiple format arguments:

In [7]: ipaddress.ip_network(("192.168.100.0", "fooo"))

---
TypeError Traceback (most recent call 
last)
 in 
> 1 ipaddress.ip_network(("192.168.100.0", "fooo"))

/usr/lib/python3.8/ipaddress.py in ip_network(address, strict)
 81 pass
 82
---> 83 raise ValueError('%r does not appear to be an IPv4 or IPv6 
network' %
 84  address)
 85

TypeError: not all arguments converted during string formatting

Compared to:

In [8]: ipaddress.IPv4Network(("192.168.100.0", "foo"))

---
NetmaskValueError Traceback (most recent call 
last)
 in 
> 1 ipaddress.IPv4Network(("192.168.100.0", "foo"))

/usr/lib/python3.8/ipaddress.py in __init__(self, address, strict)
   1453
   1454 self.network_address = IPv4Address(addr)
-> 1455 self.netmask, self._prefixlen = self._make_netmask(mask)
   1456 packed = int(self.network_address)
   1457 if packed & int(self.netmask) != packed:

/usr/lib/python3.8/ipaddress.py in _make_netmask(cls, arg)
   1118 # Check for a netmask or hostmask in 
dotted-quad form.
   1119 # This may raise NetmaskValueError.
-> 1120 prefixlen = cls._prefix_from_ip_string(arg)
   1121 netmask = 
IPv4Address(cls._ip_int_from_prefix(prefixlen))
   1122 cls._netmask_cache[arg] = netmask, prefixlen

/usr/lib/python3.8/ipaddress.py in _prefix_from_ip_string(cls, ip_str)
516 ip_int = cls._ip_int_from_string(ip_str)
517 except AddressValueError:
--> 518 cls._report_invalid_netmask(ip_str)
519
520 # Try matching a netmask (this would be /1*0*/ as a 
bitwise regexp).

/usr/lib/python3.8/ipaddress.py in _report_invalid_netmask(cls, 
netmask_str)
472 def _report_invalid_netmask(cls, netmask_str):
473 msg = '%r is not a valid netmask' % netmask_str
--> 474 raise NetmaskValueError(msg) from None
475
476 @classmethod

NetmaskValueError: 'foo' is not a valid netmask
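
For reference, the tuple form itself works; only the error path mishandles it.
A hedged sketch of the intended usage (values chosen purely for illustration):

```
import ipaddress

# A valid (address, netmask-or-prefix) 2-tuple goes through the same helper.
net = ipaddress.ip_network(("192.168.100.0", 24))
print(net)  # 192.168.100.0/24
```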

--
components: Library (Lib)
messages: 410798
nosy: thomascellerier
priority: normal
severity: normal
status: open
title: ipaddress.ip_{address,network,interface} raise TypeError instead of 
ValueError if given a tuple as address
type: behavior
versions: Python 3.8

___
Python tracker 
<https://bugs.python.org/issue46415>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue46415] ipaddress.ip_{address, network, interface} raise TypeError instead of ValueError if given a tuple as address

2022-01-17 Thread Thomas Cellerier


Change by Thomas Cellerier :


--
keywords: +patch
pull_requests: +28845
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/30642

___
Python tracker 
<https://bugs.python.org/issue46415>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue46415] ipaddress.ip_{address, network, interface} raises TypeError instead of ValueError if given a tuple as address

2022-01-17 Thread Thomas Cellerier


Change by Thomas Cellerier :


--
title: ipaddress.ip_{address,network,interface} raise TypeError instead of 
ValueError if given a tuple as address -> 
ipaddress.ip_{address,network,interface} raises TypeError instead of ValueError 
if given a tuple as address

___
Python tracker 
<https://bugs.python.org/issue46415>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue46045] NetBSD: do not use POSIX semaphores

2022-01-18 Thread Thomas Klausner


Thomas Klausner  added the comment:

Thanks for merging this, @serhiy.storchaka!

--

___
Python tracker 
<https://bugs.python.org/issue46045>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue46522] concurrent.futures.__getattr__ raises the wrong AttributeError message

2022-01-25 Thread Thomas Grainger


New submission from Thomas Grainger :

>>> import types
>>> types.ModuleType("concurrent.futures").missing_attribute
Traceback (most recent call last):
  File "", line 1, in 
AttributeError: module 'concurrent.futures' has no attribute 'missing_attribute'
>>> import concurrent.futures
>>> concurrent.futures.missing_attribute
Traceback (most recent call last):
  File "", line 1, in 
  File 
"/home/graingert/miniconda3/lib/python3.9/concurrent/futures/__init__.py", line 
53, in __getattr__
raise AttributeError(f"module {__name__} has no attribute {name}")
AttributeError: module concurrent.futures has no attribute missing_attribute
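
A minimal sketch of a module-level __getattr__ whose message matches the
default wording (an assumption about the shape of a fix, not the merged
patch); the only difference from the line shown in the traceback is the !r
conversions:

```
# Hypothetical module-level hook for illustration.
def __getattr__(name):
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
```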

--
messages: 411611
nosy: graingert
priority: normal
pull_requests: 29069
severity: normal
status: open
title: concurrent.futures.__getattr__ raises the wrong AttributeError message
versions: Python 3.10, Python 3.11, Python 3.9

___
Python tracker 
<https://bugs.python.org/issue46522>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue46522] concurrent.futures.__getattr__ raises the wrong AttributeError message

2022-01-25 Thread Thomas Grainger


Thomas Grainger  added the comment:

this also applies to io and _pyio

--

___
Python tracker 
<https://bugs.python.org/issue46522>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue44863] Allow TypedDict to inherit from Generics

2022-02-03 Thread Thomas Grainger


Thomas Grainger  added the comment:

there's a thread on typing-sig for this now: 
https://mail.python.org/archives/list/typing-...@python.org/thread/I7P3ER2NH7SENVMIXK74U6L4Z5JDLQGZ/#I7P3ER2NH7SENVMIXK74U6L4Z5JDLQGZ

--
nosy: +graingert

___
Python tracker 
<https://bugs.python.org/issue44863>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue42752] multiprocessing Queue leaks a file descriptor associated with the pipe writer (#33081 still a problem)

2022-02-17 Thread Thomas Grainger


Change by Thomas Grainger :


--
nosy: +graingert, vstinner

___
Python tracker 
<https://bugs.python.org/issue42752>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue46824] use AI_NUMERICHOST | AI_NUMERICSERV to skip getaddrinfo thread in asyncio

2022-02-22 Thread Thomas Grainger


New submission from Thomas Grainger :

Now that the getaddrinfo lock has been removed on all platforms, the 
numeric-only host resolution in asyncio could be moved back into 
BaseEventLoop.getaddrinfo.
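
A hedged sketch of the numeric-only fast path (an illustration, not asyncio's
code; the helper name is made up and it assumes the platform exposes
AI_NUMERICSERV): the flags make getaddrinfo fail immediately for non-literal
input instead of doing DNS, so no executor thread is needed for literals.

```
import socket

def resolve_numeric(host, port, family=socket.AF_UNSPEC,
                    type=socket.SOCK_STREAM, proto=0):
    """Return getaddrinfo results for literal host/port, else None."""
    try:
        return socket.getaddrinfo(
            host, port, family=family, type=type, proto=proto,
            flags=socket.AI_NUMERICHOST | socket.AI_NUMERICSERV)
    except socket.gaierror:
        return None  # not numeric: fall back to the threaded lookup
```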

--
components: asyncio
messages: 413699
nosy: asvetlov, graingert, yselivanov
priority: normal
severity: normal
status: open
title: use AI_NUMERICHOST | AI_NUMERICSERV to skip getaddrinfo thread in asyncio
type: enhancement

___
Python tracker 
<https://bugs.python.org/issue46824>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue46824] use AI_NUMERICHOST | AI_NUMERICSERV to skip getaddrinfo thread in asyncio

2022-02-22 Thread Thomas Grainger


Change by Thomas Grainger :


--
keywords: +patch
pull_requests: +29627
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/31497

___
Python tracker 
<https://bugs.python.org/issue46824>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue46824] use AI_NUMERICHOST | AI_NUMERICSERV to skip getaddrinfo thread in asyncio

2022-02-22 Thread Thomas Grainger


Thomas Grainger  added the comment:

Hello, it's actually a bit of a roundabout context, but it was brought up on a 
tornado issue where I was attempting to port the asyncio optimization to 
tornado: 
https://github.com/tornadoweb/tornado/issues/3113#issuecomment-1041019287

I think it would be better to use this AI_NUMERICHOST | AI_NUMERICSERV 
optimization from trio everywhere instead

--

___
Python tracker 
<https://bugs.python.org/issue46824>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue46827] asyncio SelectorEventLoop.sock_connect fails with a UDP socket

2022-02-22 Thread Thomas Grainger


New submission from Thomas Grainger :

the following code:

import socket
import asyncio

async def amain():
with socket.socket(family=socket.AF_INET, proto=socket.IPPROTO_UDP, 
type=socket.SOCK_DGRAM) as sock:
sock.setblocking(False)
await asyncio.get_running_loop().sock_connect(sock, ("google.com", 
"443"))

asyncio.run(amain())


fails with:

Traceback (most recent call last):
  File "/home/graingert/projects/test_foo.py", line 9, in 
asyncio.run(amain())
  File "/usr/lib/python3.10/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
  File "/usr/lib/python3.10/asyncio/base_events.py", line 641, in 
run_until_complete
return future.result()
  File "/home/graingert/projects/test_foo.py", line 7, in amain
await asyncio.get_running_loop().sock_connect(sock, ("google.com", "443"))
  File "/usr/lib/python3.10/asyncio/selector_events.py", line 496, in 
sock_connect
resolved = await self._ensure_resolved(
  File "/usr/lib/python3.10/asyncio/base_events.py", line 1395, in 
_ensure_resolved
return await loop.getaddrinfo(host, port, family=family, type=type,
  File "/usr/lib/python3.10/asyncio/base_events.py", line 855, in getaddrinfo
return await self.run_in_executor(
  File "/usr/lib/python3.10/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
  File "/usr/lib/python3.10/socket.py", line 955, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -7] ai_socktype not supported

--
components: asyncio
messages: 413709
nosy: asvetlov, graingert, yselivanov
priority: normal
severity: normal
status: open
title: asyncio SelectorEventLoop.sock_connect fails with a UDP socket
versions: Python 3.10, Python 3.11, Python 3.9

___
Python tracker 
<https://bugs.python.org/issue46827>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue46827] asyncio SelectorEventLoop.sock_connect fails with a UDP socket

2022-02-22 Thread Thomas Grainger


Change by Thomas Grainger :


--
keywords: +patch
pull_requests: +29629
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/31499

___
Python tracker 
<https://bugs.python.org/issue46827>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue45390] asyncio.Task doesn't propagate CancelledError() exception correctly.

2022-02-23 Thread Thomas Grainger


Thomas Grainger  added the comment:

there could be multiple messages here

perhaps it could be:

```
finally:
# Must reacquire lock even if wait is cancelled
cancelled = []
while True:
try:
await self.acquire()
break
except exceptions.CancelledError as e:
cancelled.append(e)

if len(cancelled) > 1:
raise ExceptionGroup("Cancelled", cancelled)
if cancelled:
raise cancelled[0]
```

--

___
Python tracker 
<https://bugs.python.org/issue45390>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue46885] Ensure PEP 663 changes are reverted from 3.11

2022-02-28 Thread Thomas Wouters


Change by Thomas Wouters :


--
nosy: +twouters

___
Python tracker 
<https://bugs.python.org/issue46885>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue43923] Can't create generic NamedTuple as of py3.9

2022-03-05 Thread Thomas Grainger

Thomas Grainger  added the comment:

The main advantage for my use case is support for heterogeneous unpacking.
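
A hedged sketch of what the requested feature enables (it does not run on 3.9,
the version this issue targets; class and variable names are illustrative):
because a NamedTuple keeps per-field types, unpacking stays heterogeneous for
a type checker, which a plain dataclass cannot offer.

```
from typing import Generic, NamedTuple, TypeVar

T = TypeVar("T")
U = TypeVar("U")

class Pair(NamedTuple, Generic[T, U]):  # rejected by 3.9 -- the point of this issue
    first: T
    second: U

left, right = Pair(1, "x")  # a checker sees int and str, not a single union
```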

On Sat, Mar 5, 2022, 6:04 PM Alex Waygood  wrote:

>
> Alex Waygood  added the comment:
>
> I sense we'll have to agree to disagree on the usefulness of NamedTuples
> in the age of dataclasses :)
>
> For me, I find the simplicity of the underlying idea behind namedtuples —
> "tuples with some properties bolted on" — very attractive. Yes, standard
> tuples are more performant, but it's great to have a tool in the arsenal
> that's essentially the same as a tuple (and is backwards-compatible with a
> tuple, for APIs that require a tuple), but can also, like dataclasses, be
> self-documenting. (You're right that DoneAndNotDoneFutures isn't a great
> example of this.)
>
> But I agree that this shouldn't be a priority if it's hard to accomplish;
> and there'll certainly be no complaints from me if energy is invested into
> making dataclasses faster.
>
> --
>
> ___
> Python tracker 
> <https://bugs.python.org/issue43923>
> ___
>

--

___
Python tracker 
<https://bugs.python.org/issue43923>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1039] Assertion in Windows debug build

2007-08-27 Thread Thomas Heller

New submission from Thomas Heller:

In a Windows debug build, an assertion is triggered when os.execvpe is
called with an empty argument list:

self.assertRaises(OSError, os.execvpe, 'no such app-', [], None)

The same problem is present in the trunk version.
Attached is a patch that fixes this, with a test.

--
components: Windows
files: os.diff
messages: 55350
nosy: theller
severity: normal
status: open
title: Assertion in Windows debug build
type: crash
versions: Python 3.0

__
Tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue1039>
__Index: Lib/test/test_os.py
===
--- Lib/test/test_os.py	(revision 57596)
+++ Lib/test/test_os.py	(working copy)
@@ -441,6 +441,9 @@
 def test_execvpe_with_bad_program(self):
 self.assertRaises(OSError, os.execvpe, 'no such app-', [], None)
 
+def test_execvpe_with_bad_arglist(self):
+self.assertRaises(ValueError, os.execvpe, 'notepad', [], None)
+
 class Win32ErrorTests(unittest.TestCase):
 def test_rename(self):
 self.assertRaises(WindowsError, os.rename, test_support.TESTFN, test_support.TESTFN+".bak")
Index: Modules/posixmodule.c
===
--- Modules/posixmodule.c	(revision 57596)
+++ Modules/posixmodule.c	(working copy)
@@ -2834,6 +2834,11 @@
 PyMem_Free(path);
 		return NULL;
 	}
+	if (argc < 1) {
+		PyErr_SetString(PyExc_ValueError, "execv() arg 2 must not be empty");
+		PyMem_Free(path);
+		return NULL;
+	}
 
 	argvlist = PyMem_NEW(char *, argc+1);
 	if (argvlist == NULL) {
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1043] test_builtin failure on Windows

2007-08-27 Thread Thomas Heller

New submission from Thomas Heller:

test test_builtin failed -- Traceback (most recent call last):
  File "c:\svn\py3k\lib\test\test_builtin.py", line 1473, in test_round
self.assertEqual(round(1e20), 1e20)
AssertionError: 0 != 1e+020

--
components: Windows
messages: 55355
nosy: theller
severity: normal
status: open
title: test_builtin failure on Windows
versions: Python 3.0

__
Tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue1043>
__
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1042] test_glob fails with UnicodeDecodeError

2007-08-27 Thread Thomas Heller

New submission from Thomas Heller:

Unicode errors in various tests - not only in test_glob:

  test_glob
  test test_glob failed -- Traceback (most recent call last):
File "c:\svn\py3k\lib\test\test_glob.py", line 87, in
test_glob_directory_names
  eq(self.glob('*', '*a'), [])
File "c:\svn\py3k\lib\test\test_glob.py", line 41, in glob
  res = glob.glob(p)
File "c:\svn\py3k\lib\glob.py", line 16, in glob
  return list(iglob(pathname))
File "c:\svn\py3k\lib\glob.py", line 42, in iglob
  for name in glob_in_dir(dirname, basename):
File "c:\svn\py3k\lib\glob.py", line 56, in glob1
  names = os.listdir(dirname)
  UnicodeDecodeError: 'utf8' codec can't decode bytes in position 27-31:
unexpected end of data

--
components: Windows
messages: 55354
nosy: theller
severity: normal
status: open
title: test_glob fails with UnicodeDecodeError
versions: Python 3.0

__
Tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue1042>
__
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1040] Unicode problem with TZ

2007-08-27 Thread Thomas Heller

Thomas Heller added the comment:

BTW, setting the environment variable TZ to, say, 'GMT' makes the
problem go away.

__
Tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue1040>
__
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1040] Unicode problem with TZ

2007-08-27 Thread Thomas Heller

New submission from Thomas Heller:

In my German version of WinXP SP2, python3 cannot import the time module:

c:\svn\py3k\PCbuild>python_d
Python 3.0x (py3k:57600M, Aug 28 2007, 07:58:23) [MSC v.1310 32 bit
(Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import time
Traceback (most recent call last):
  File "", line 1, in 
UnicodeDecodeError: 'utf8' codec can't decode bytes in position 9-11:
invalid data
[36719 refs]
>>> ^Z

The problem is that the libc '_tzname' variable contains umlauts.  For
comparison, here is what Python2.5 does:

c:\svn\py3k\PCbuild>\python25\python
Python 2.5.1 (r251:54863, Apr 18 2007, 08:51:08) [MSC v.1310 32 bit
(Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import time
>>> time.tzname
('Westeurop\xe4ische Normalzeit', 'Westeurop\xe4ische Normalzeit')
>>>

--
components: Windows
messages: 55351
nosy: theller
severity: normal
status: open
title: Unicode problem with TZ
versions: Python 3.0

__
Tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue1040>
__
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1041] io.py problems on Windows

2007-08-27 Thread Thomas Heller

New submission from Thomas Heller:

Running the PCBuild\rt.bat script fails when it compares the expected output
with the actual output.  Some inspection shows that the comparison fails
because
there are '\n' linefeeds in the expected and '\r\n' linefeeds in the
actual output:

  c:\svn\py3k\PCbuild>python_d  -E -tt ../lib/test/regrtest.py
  test_grammar
  test test_grammar produced unexpected output:
  **
  *** mismatch between line 1 of expected output and line 1 of actual
output:
  - test_grammar
  + test_grammar
  ? +
  (['test_grammar\n'], ['test_grammar\r\n'])
  ... and so on ...

(The last line is printed by some code I added to Lib\regrtest.py.)

It seems that this behaviour was introduced by r57186:

  New I/O code from Tony Lownds implement newline feature correctly,
  and implements .newlines attribute in a 2.x-compatible fashion.


The patch at http://bugs.python.org/issue1029 apparently fixes this problem.

--
components: Windows
messages: 55353
nosy: theller
severity: normal
status: open
title: io.py problems on Windows
versions: Python 3.0

__
Tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue1041>
__
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1617699] slice-object support for ctypes Pointer/Array

2007-08-28 Thread Thomas Wouters

Thomas Wouters added the comment:

I'd like to check this into the trunk, without the non-step-1 support
for now, so that we can remove simple slicing from the py3k branch. We
can always add non-step-1 support later (all the sooner if someone who
isn't me volunteers to do the painful bits of that support, probably by
copy-pasting from the array module ;-)

_
Tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue1617699>
_
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1617687] specialcase simple sliceobj in list (and bugfixes)

2007-08-28 Thread Thomas Wouters

Thomas Wouters added the comment:

I prefer the current method, as it's more obviously walking in two
strides across the same array. I also dislike hiding the final memmove()
of the tail bit inside the loop. As for which is more obvious, I would
submit neither is obvious, as it took me quite a bit of brainsweat to
figure out how either version was supposed to work after not looking at
the code for months :)

Committed revision 57619.

--
assignee:  -> twouters
resolution:  -> fixed
status: open -> closed

_
Tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue1617687>
_
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1617702] extended slicing for buffer objects

2007-08-28 Thread Thomas Wouters

Thomas Wouters added the comment:

Committed revision 57619.

--
assignee:  -> twouters
resolution:  -> fixed
status: open -> closed

_
Tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue1617702>
_
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1617701] extended slicing for structseq

2007-08-28 Thread Thomas Wouters

Thomas Wouters added the comment:

Committed revision 57619.

--
assignee:  -> twouters
resolution:  -> fixed
status: open -> closed

_
Tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue1617701>
_
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1617698] Extended slicing for array objects

2007-08-28 Thread Thomas Wouters

Thomas Wouters added the comment:

Committed revision 57619.

--
assignee:  -> twouters
resolution:  -> fixed
status: open -> closed

_
Tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue1617698>
_
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1617682] specialcase simple sliceobj in tuple/str/unicode

2007-08-28 Thread Thomas Wouters

Thomas Wouters added the comment:

Committed revision 57619.

--
assignee:  -> twouters
resolution:  -> fixed
status: open -> closed

_
Tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue1617682>
_
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1617691] Extended slicing for UserString

2007-08-28 Thread Thomas Wouters

Thomas Wouters added the comment:

Committed revision 57619.

--
assignee:  -> twouters
resolution:  -> fixed
status: open -> closed

_
Tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue1617691>
_
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1617700] slice-object support for mmap

2007-08-28 Thread Thomas Wouters

Thomas Wouters added the comment:

Committed revision 57619.

--
assignee:  -> twouters
resolution:  -> fixed
status: open -> closed

_
Tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue1617700>
_
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1056] test_cmd_line starts python without -E

2007-08-29 Thread Thomas Wouters

New submission from Thomas Wouters:

test_cmd_line tests various things by spawning sys.executable.
Unfortunately it does so without passing the -E argument (which 'make
test' does do) so environment variables like PYTHONHOME and PYTHONPATH
can cause the test to fail.
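
A minimal sketch of the intended behaviour (using today's subprocess API
purely for illustration, not the test's actual code): spawning with -E makes
the child ignore those environment variables.

```
import subprocess
import sys

# -E sets sys.flags.ignore_environment in the child, so PYTHONHOME and
# PYTHONPATH from the parent environment cannot skew the test.
result = subprocess.run(
    [sys.executable, "-E", "-c",
     "import sys; print(sys.flags.ignore_environment)"],
    capture_output=True, text=True, check=True)
assert result.stdout.strip() == "1"
```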

--
assignee: ncoghlan
components: Tests
messages: 55418
nosy: twouters
priority: high
severity: normal
status: open
title: test_cmd_line starts python without -E
type: crash
versions: Python 2.6

__
Tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue1056>
__
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1617699] slice-object support for ctypes Pointer/Array

2007-08-29 Thread Thomas Wouters

Thomas Wouters added the comment:

Added tests (by duplicating any slicing operations in the test suite
with extended slice syntax, to force the use of slice-objects ;)

_
Tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue1617699>
_Index: Lib/ctypes/test/test_cast.py
===
--- Lib/ctypes/test/test_cast.py	(revision 57617)
+++ Lib/ctypes/test/test_cast.py	(working copy)
@@ -50,12 +50,16 @@
 def test_other(self):
 p = cast((c_int * 4)(1, 2, 3, 4), POINTER(c_int))
 self.failUnlessEqual(p[:4], [1,2, 3, 4])
+self.failUnlessEqual(p[:4:], [1,2, 3, 4])
 c_int()
 self.failUnlessEqual(p[:4], [1, 2, 3, 4])
+self.failUnlessEqual(p[:4:], [1, 2, 3, 4])
 p[2] = 96
 self.failUnlessEqual(p[:4], [1, 2, 96, 4])
+self.failUnlessEqual(p[:4:], [1, 2, 96, 4])
 c_int()
 self.failUnlessEqual(p[:4], [1, 2, 96, 4])
+self.failUnlessEqual(p[:4:], [1, 2, 96, 4])
 
 def test_char_p(self):
 # This didn't work: bad argument to internal function
Index: Lib/ctypes/test/test_buffers.py
===
--- Lib/ctypes/test/test_buffers.py	(revision 57617)
+++ Lib/ctypes/test/test_buffers.py	(working copy)
@@ -15,6 +15,7 @@
 self.failUnless(type(b[0]) is str)
 self.failUnlessEqual(b[0], "a")
 self.failUnlessEqual(b[:], "abc\0")
+self.failUnlessEqual(b[::], "abc\0")
 
 def test_string_conversion(self):
 b = create_string_buffer(u"abc")
@@ -23,6 +24,7 @@
 self.failUnless(type(b[0]) is str)
 self.failUnlessEqual(b[0], "a")
 self.failUnlessEqual(b[:], "abc\0")
+self.failUnlessEqual(b[::], "abc\0")
 
 try:
 c_wchar
@@ -41,6 +43,7 @@
 self.failUnless(type(b[0]) is unicode)
 self.failUnlessEqual(b[0], u"a")
 self.failUnlessEqual(b[:], "abc\0")
+self.failUnlessEqual(b[::], "abc\0")
 
 def test_unicode_conversion(self):
 b = create_unicode_buffer("abc")
@@ -49,6 +52,7 @@
 self.failUnless(type(b[0]) is unicode)
 self.failUnlessEqual(b[0], u"a")
 self.failUnlessEqual(b[:], "abc\0")
+self.failUnlessEqual(b[::], "abc\0")
 
 if __name__ == "__main__":
 unittest.main()
Index: Lib/ctypes/test/test_arrays.py
===
--- Lib/ctypes/test/test_arrays.py	(revision 57617)
+++ Lib/ctypes/test/test_arrays.py	(working copy)
@@ -95,6 +95,7 @@
 p = create_string_buffer("foo")
 sz = (c_char * 3).from_address(addressof(p))
 self.failUnlessEqual(sz[:], "foo")
+self.failUnlessEqual(sz[::], "foo")
 self.failUnlessEqual(sz.value, "foo")
 
 try:
@@ -106,6 +107,7 @@
 p = create_unicode_buffer("foo")
 sz = (c_wchar * 3).from_address(addressof(p))
 self.failUnlessEqual(sz[:], "foo")
+self.failUnlessEqual(sz[::], "foo")
 self.failUnlessEqual(sz.value, "foo")
 
 if __name__ == '__main__':
Index: Lib/ctypes/test/test_structures.py
===
--- Lib/ctypes/test/test_structures.py	(revision 57617)
+++ Lib/ctypes/test/test_structures.py	(working copy)
@@ -236,7 +236,9 @@
 
 # can use tuple to initialize array (but not list!)
 self.failUnlessEqual(SomeInts((1, 2)).a[:], [1, 2, 0, 0])
+self.failUnlessEqual(SomeInts((1, 2)).a[::], [1, 2, 0, 0])
 self.failUnlessEqual(SomeInts((1, 2, 3, 4)).a[:], [1, 2, 3, 4])
+self.failUnlessEqual(SomeInts((1, 2, 3, 4)).a[::], [1, 2, 3, 4])
 # too long
 # XXX Should raise ValueError?, not RuntimeError
 self.assertRaises(RuntimeError, SomeInts, (1, 2, 3, 4, 5))
Index: Lib/ctypes/test/test_strings.py
===
--- Lib/ctypes/test/test_strings.py	(revision 57617)
+++ Lib/ctypes/test/test_strings.py	(working copy)
@@ -121,6 +121,7 @@
 def XX_test_initialized_strings(self):
 
 self.failUnless(c_string("ab", 4).raw[:2] == "ab")
+self.failUnless(c_string("ab", 4).raw[:2:] == "ab")
 self.failUnless(c_string("ab", 4).raw[-1] == "\000")
 self.failUnless(c_string("ab", 2).raw == "a\000")
 
Index: Lib/ctypes/test/test_memfunctions.py
===
--- Lib/ctypes/test/test_memfunctions.p

[issue1617699] slice-object support for ctypes Pointer/Array

2007-08-29 Thread Thomas Wouters

Changes by Thomas Wouters:


_
Tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue1617699>
_
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1040] Unicode problem with TZ

2007-08-29 Thread Thomas Heller

Thomas Heller added the comment:

IMO the very best would be to avoid as many conversions as possible by
using the wide apis on Windows.  Not for _tzname maybe, but for env
vars, sys.argv, sys.path, and so on.  Not that I would have time to work
on that...

__
Tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue1040>
__
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1617699] slice-object support for ctypes Pointer/Array

2007-08-29 Thread Thomas Wouters

Changes by Thomas Wouters:


_
Tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue1617699>
_
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1039] Assertion in Windows debug build

2007-08-30 Thread Thomas Heller

Thomas Heller added the comment:

Applied in rev. 57731.

--
resolution: accepted -> fixed
status: open -> closed

__
Tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue1039>
__
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1617699] slice-object support for ctypes Pointer/Array

2007-08-30 Thread Thomas Heller

Thomas Heller added the comment:

Set to accepted.  As pointed out in private email, please apply it to
the trunk.

Your thoughts about the 'length' of pointers make sense, and are very
similar to what I had in mind when I implemented pointer indexing.

For indexing pointers, negative indices (in the C sense, not the usual
Python sense) absolutely are needed, IMO.  For slicing, missing indices 
do not really have a meaning - would it be possible to disallow them?

--
resolution:  -> accepted

_
Tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue1617699>
_
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1617699] slice-object support for ctypes Pointer/Array

2007-08-30 Thread Thomas Heller

Changes by Thomas Heller:


--
assignee: theller -> twouters

_
Tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue1617699>
_
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1617699] slice-object support for ctypes Pointer/Array

2007-08-30 Thread Thomas Wouters

Thomas Wouters added the comment:

Well, that's not quite how I implemented the slicing, and it's also not
how the existing simple-slicing was implemented: A negative start index
is taken to mean 0, and a stop index below the start index is taken to
mean 'the start index' (leading to an empty slice).

However, it isn't too hard to do what I think you want done: a negative
index means indexing before the pointer, not from the end of the
pointer, and missing indices are only okay if they clearly mean '0'
('start' when step > 0, 'stop' when step < 0.)

So:
 P[5:10] would slice from P[5] up to but not including P[10],
 P[-5:5] would slice from P[-5] up to but not including P[5],
 P[:5] would slice from P[0] up to but not including P[5],
 P[5::-1] would slice from P[5] down to *and including* P[0]
but the following would all be errors:
 P[5:]
 P[:5:-1]
 P[:]
 P[::-1]

Does that sound like what you wanted?
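
For concreteness, a small, hedged illustration with modern ctypes of the
slicing this patch enables (explicit bounds, optional step); the stricter
rules above are still only a proposal:

```
from ctypes import POINTER, c_int, cast

arr = (c_int * 4)(1, 2, 3, 4)
p = cast(arr, POINTER(c_int))
print(p[0:4])    # [1, 2, 3, 4] -- simple slice with explicit bounds
print(p[0:4:2])  # [1, 3]       -- stepped slice added by slice-object support
```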

_
Tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue1617699>
_
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1617699] slice-object support for ctypes Pointer/Array

2007-08-30 Thread Thomas Heller

Thomas Heller added the comment:

Yes.

But looking at your examples I think it would be better to forbid
missing indices completely instead of allowing them only where they
clearly mean 0.

Writing (and reading!) a 0 is faster than thinking about whether a missing
index is allowed or what it means.

_
Tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue1617699>
_
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue1617699] slice-object support for ctypes Pointer/Array

2007-08-30 Thread Thomas Wouters

Thomas Wouters added the comment:

Hmmm... Well, that's fine by me, but it changes current behaviour, and
in a way that ctypes' own testsuite was testing, even ;) (it does, e.g.,
'p[:4]' in a couple of places.) Requiring the start always would
possibly break a lot of code. We could make only the start (and step)
optional, and the start only if the step is positive, perhaps? That
would change no existing, sane behaviour.

_
Tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue1617699>
_
___
Python-bugs-list mailing list 
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com


