Changes by Yury Selivanov :
--
nosy: +yselivanov
Python tracker
<http://bugs.python.org/issue21326>
Yury Selivanov added the comment:
FWIW, this can also be resolved by fixing Queue.full to do
"self.qsize() >= self._maxsize" instead of "self.qsize() == self._maxsize".
I generally don't like implicit casts as they break duck typing.
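A minimal sketch of the suggested check (assuming a queue with the usual
_maxsize attribute and qsize() method; this only shows the shape of the change,
not the actual patch):

def full(self):
    # Treat the queue as full when qsize() meets *or exceeds* maxsize,
    # so a non-integer maxsize (e.g. 1.5) still trips the check.
    if self._maxsize <= 0:
        return False
    return self.qsize() >= self._maxsize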
--
New submission from Yury Selivanov:
TestInsort.test_vsBuiltinSort is a bit broken, as it doesn't test insorting
`list` objects.
Patch attached.
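The kind of check the test is meant to exercise (a rough sketch, not the
attached patch): insort into a plain list and compare against sorted().

import bisect
import random

data = []
values = [random.randrange(25) for _ in range(200)]
for v in values:
    bisect.insort(data, v)          # keep `data` sorted as we insert
assert data == sorted(values)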
--
files: test_bisect.patch
keywords: patch
messages: 174365
nosy: christian.heimes, georg.brandl, rhettinger, yselivanov
priority: n
New submission from Yury Selivanov:
Right now decimal.py defines 'ROUND_DOWN' as 'ROUND_DOWN' (a string), whereas the C
version defines it as 1 (an integer).
While using constant values directly in your code is not a good idea, there is
another case where it doesn't work:
Yury Selivanov added the comment:
Well, I don't care about 3.2 & 3.3 pickle compatibility in this particular
issue. This one is about compatibility of the Python and C decimal modules in 3.3.
--
Python tracker
<http://bugs.pytho
Yury Selivanov added the comment:
Right ;)
Is there any chance we can fix that in the next 3.3 point release or in 3.4?
--
Python tracker
<http://bugs.python.org/issue16
Changes by Yury Selivanov :
--
nosy: +yselivanov
Python tracker
<http://bugs.python.org/issue16431>
Yury Selivanov added the comment:
I'll take a look later today.
--
Python tracker
<http://bugs.python.org/issue17071>
Yury Selivanov added the comment:
Thanks Antoine, the patch looks good to me.
The only thing I would have done differently myself is to name "self" as
"__bind_self__" (with two underscores at the end).
Could you pl
Yury Selivanov added the comment:
This broke a lot of our code; I think the priority needs to be raised to
urgent.
--
nosy: +yselivanov
Python tracker
<http://bugs.python.org/issue19
Yury Selivanov added the comment:
This behaviour is indeed a bug. However, I think that the solution you propose
is wrong.
If we ignore invalid contents of __signature__ we are masking a bug or
incorrect behaviour. In this case, you should have checked the requested
attribute name in
Yury Selivanov added the comment:
That's the intended and documented behaviour, see
https://docs.python.org/3/library/inspect.html#inspect.BoundArguments.arguments.
You can easily implement the functionality you need by iterating through
Signature.parameters and copying defaults t
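A hedged sketch of that recipe (Python 3.5 later added
BoundArguments.apply_defaults(), which does essentially this):

import inspect

def apply_defaults(bound):
    # Walk the signature and copy defaults for anything that wasn't bound.
    for name, param in bound.signature.parameters.items():
        if name not in bound.arguments and param.default is not param.empty:
            bound.arguments[name] = param.default

def f(a, b=10, *, c='spam'):
    pass

ba = inspect.signature(f).bind(1)
apply_defaults(ba)
print(ba.arguments)    # a=1, b=10, c='spam'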
Yury Selivanov added the comment:
Since 3.4, help() uses signature.
Closing this one.
--
nosy: +yselivanov
resolution: -> rejected
status: open -> closed
Python tracker
<http://bugs.python.org/i
Changes by Yury Selivanov :
--
assignee: -> yselivanov
keywords: +needs review
versions: +Python 3.4
Python tracker
<http://bugs.python.org/issu
Yury Selivanov added the comment:
Ryan,
Can you explain the use case for it? What's the problem you're trying to solve?
--
Python tracker
<http://bugs.python.o
Yury Selivanov added the comment:
Fixed in 3.4 and 3.5.
Thanks for the bug report!
--
Python tracker
<http://bugs.python.org/issue21801>
Yury Selivanov added the comment:
>@Guido, @Yury: What do you think of log_destroyed_pending_task.patch? Does it
>sound correct?
Premature task garbage collection is indeed hard to debug. But at least, with
your patch, one gets an exception and has a chance to track the bug down. So
Yury Selivanov added the comment:
> But I can't think of any use case when it would be undesirable to include the
> extra parameters
One use case is that you are actually losing information about what arguments
Signature.bind() was called with when defaults are included. In some
Yury Selivanov added the comment:
Thanks, Antony, this is a good catch. Your suggestion seems like a good idea.
I'll look into this more closely soon.
--
Python tracker
<http://bugs.python.org/is
Changes by Yury Selivanov :
--
nosy: +yselivanov
Python tracker
<http://bugs.python.org/issue22203>
Yury Selivanov added the comment:
Antonie, I'm attaching a patch (issue20334-2.01.patch) to this issue which
should fix the problem. Please review.
--
Added file: http://bugs.python.org/file36607/issue20334-2.01.patch
Yury Selivanov added the comment:
Antony, I've tweaked the patch a bit and it's now in default branch. Thank you!
--
Python tracker
<http://bugs.python.o
Yury Selivanov added the comment:
Vinay,
Please take a look at the second patch -- 'logging_02.patch' -- with updated
docs
--
Added file: http://bugs.python.org/file36614/logging_02.patch
Python tracker
<http://bugs.python.o
Yury Selivanov added the comment:
It's not that it doesn't work after fork, right? Should we add a recipe with
pid monitoring and self-pipe re-initialization?
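A hypothetical sketch of what such a recipe could look like (the helper names
_close_self_pipe/_make_self_pipe and the _self_pipe_pid attribute are
assumptions for illustration, not necessarily asyncio's real internals):

import os

class ForkAwareLoopMixin:
    # Remember which pid created the self-pipe; if we notice we are running
    # in a forked child, tear the inherited pipe down and build a new one.
    def _check_self_pipe(self):
        if os.getpid() != self._self_pipe_pid:
            self._close_self_pipe()
            self._make_self_pipe()
            self._self_pipe_pid = os.getpid()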
--
Python tracker
<http://bugs.python.o
Yury Selivanov added the comment:
> Is there a use case for sharing an event loop across forking?
I don't think so. I use forking mainly for the following two use-cases:
- Socket sharing for web servers. Solution: if you want to have shared
sockets between multiple child process
New submission from Yury Selivanov:
While writing a lexer for the JavaScript language, I managed to hit the limit of
named groups in one regexp; it's 100. The check is in sre_compile.py's compile()
function, and there is even an XXX comment on this.
Unfortunately, I'm not an expert in t
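For reference, a quick way to reproduce the limit on the affected versions
(sre_compile raised AssertionError once a pattern had more than 100 groups;
the exact exception text may vary by version):

import re

# Build a pattern with 101 named groups, one past the documented limit.
pattern = ''.join('(?P<g%d>x)' % i for i in range(101))
try:
    re.compile(pattern)
except AssertionError as exc:
    print('hit the group limit:', exc)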
Yury Selivanov added the comment:
Serhiy,
This is awesome!
Is it possible to split the patch in two, and commit the part that just
increases the groups limit to 3.4 as well?
Thank you
--
Python tracker
<http://bugs.python.org/issue22
Yury Selivanov added the comment:
Guys, when you update asyncio code, please make sure you sync your changes with
its upstream here: https://code.google.com/p/tulip/ to avoid commits like
5f001ad90373.
The goal is to have a single source base for 3.4 and 3.5 in the CPython repo and
for 3.3 in
Yury Selivanov added the comment:
Hm, strange, usually the Roundup robot closes issues. Anyway, closed now. Thanks
again, Joshua.
--
resolution: -> fixed
status: open -> closed
Python tracker
<http://bugs.python.org/i
Yury Selivanov added the comment:
Thanks for the patch.
I've committed this to 3.5 only, as there is a slight chance that it breaks
backwards compatibility for some scripts.
--
resolution: -> fixed
status: open -> closed
Yury Selivanov added the comment:
The problem is that map & filter are classes, and their __init__ and __new__
methods do not provide any __text_signature__, hence signature() uses the one
from object.__init__.
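A quick way to observe the symptom (hedged: the exact outcome depends on the
CPython build, since later releases added __text_signature__ data to many
builtins):

import inspect

try:
    print(inspect.signature(map))   # may be an unhelpful '()' on affected builds
except ValueError as exc:
    print('no usable signature for map:', exc)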
--
Changes by Yury Selivanov :
--
nosy: +larry
Python tracker
<http://bugs.python.org/issue22203>
Yury Selivanov added the comment:
I'm fine with either one, Serhiy. The static one looks good to me.
--
Python tracker
<http://bugs.python.org/is
Yury Selivanov added the comment:
I left some comments in the codereview.
I think that having some half-baked solution is great, but I really would like
to see a proper fix, i.e. with remove_header and other methods fixed. Ideally,
you should add a UserDict implementation for headers that
Yury Selivanov added the comment:
I left some comments in the codereview.
I think that having some half-baked solution is great, but I really would like
to see a proper fix, i.e. with remove_header and other methods fixed. Ideally,
you should add a UserDict implementation for headers that
Yury Selivanov added the comment:
Oops, my previous comment is related to issue #5550; wrong window.
--
Python tracker
<http://bugs.python.org/issue17
Yury Selivanov added the comment:
Ideally, we should just wait until PEP 455 lands, so we can use TransformDict
for headers.
Also, I don't think we can land a fix for this in any of the Pythons already out
there; I would focus on making this right i
Yury Selivanov added the comment:
A second version of the patch (tempfile_02), fixing more tempfile functions to
properly support relative paths. Please review.
--
nosy: +serhiy.storchaka
Added file: http://bugs.python.org/file36740/tempfile_02.patch
Changes by Yury Selivanov :
--
resolution: -> fixed
status: open -> closed
Python tracker
<http://bugs.python.org/issue21397>
Yury Selivanov added the comment:
Thanks for the bug report and patch! Committed to 3.5.
--
resolution: -> fixed
status: open -> closed
Python tracker
<http://bugs.python.org/iss
Yury Selivanov added the comment:
@Berker Peksag: The patch looks fine, although I would rename 'redirect_stream'
-> '_redirect_stream' or '_RedirectStream'
--
nosy: +yselivanov
Yury Selivanov added the comment:
@Berker Peksag: Also, please update the docs.
--
Python tracker
<http://bugs.python.org/issue22389>
Yury Selivanov added the comment:
Antony, I agree regarding the poor naming of the '_sanitize_dir()' helper. As for
your other suggestion, I think such a refactoring will actually make the code
harder to follow (plus it's more invasive). Generally, I'm in favour of
transforming
Yury Selivanov added the comment:
> Note that abspath() can return incorrect result in case of symbolic links to
> directories and pardir components. I.e. abspath('symlink/..').
Good catch. Should I use os.path.realpath?
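For context, a tiny illustration of the difference (POSIX-only; the temporary
directory layout is invented for the example):

import os
import os.path
import tempfile

base = tempfile.mkdtemp()
os.makedirs(os.path.join(base, 'real', 'sub'))
os.symlink(os.path.join(base, 'real', 'sub'), os.path.join(base, 'link'))

path = os.path.join(base, 'link', os.pardir)
print(os.path.abspath(path))    # .../base       -- purely textual '..' removal
print(os.path.realpath(path))   # .../base/real  -- resolves the symlink first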
--
Yury Selivanov added the comment:
> IMO it makes the code simpler and easier to understand.
But it's a tad slower, like 2-3% ;) You can test it yourself; we only tested it
on a huge task list of 1M items.
FWIW, I'm not opposed
Yury Selivanov added the comment:
Victor,
During the code review we tried the single-loop approach. In the end Joshua
wrote a small benchmark to test whether it's really faster to do it in one loop
or not. It turned out that the single-loop approach is not faster than
loop+comprehension (but it
Yury Selivanov added the comment:
Victor,
I've done some additional testing. Here's a test that Joshua wrote for the code
review: https://gist.github.com/1st1/b38ac6785cb01a679722
It appears that the single-loop approach works a bit faster for smaller
collections of tasks. On a lis
Yury Selivanov added the comment:
Victor,
Here are the updated benchmark results:
NUMBER_OF_TASKS 1
ITERATIONS -> 2000 out of 2000
2 loops: 0.004267875499863294
1 loop: 0.007916624497738667
TOTAL_BENCH_TIME 15.975227117538452
NUMBER_OF_TASKS 10
IT
Yury Selivanov added the comment:
typo:
> 2 loops is always about 30-40% slower.
2 loops is always about 30-40% faster.
--
Python tracker
<http://bugs.python.org/issu
Yury Selivanov added the comment:
Eh, I knew something was wrong. Thanks.
NUMBER_OF_TASKS 10
ITERATIONS -> 2000 out of 2000
2 loops: 0.045037232999675325
1 loop: 0.045182990999819594
TOTAL_BENCH_TIME 91.36706805229187
Please commit your change to the tulip repo
Changes by Yury Selivanov :
--
resolution: -> fixed
status: open -> closed
Python tracker
<http://bugs.python.org/issue22448>
Changes by Yury Selivanov :
--
nosy: +yselivanov
Python tracker
<http://bugs.python.org/issue22540>
New submission from Yury Selivanov:
Can you propose a format for it?
I'm not sure that including all arguments and their reprs is a good idea, as it
will make BA's repr too long.
--
nosy: +yselivanov
Python tracker
<http://bu
Yury Selivanov added the comment:
How about we just list the names of the bound arguments, without values?
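A hypothetical sketch of what that could look like, written as a standalone
helper rather than the real BoundArguments.__repr__:

import inspect

def ba_repr(ba):
    # Show only the names that were actually bound, not their values.
    return '<BoundArguments bound: {}>'.format(', '.join(ba.arguments))

def f(a, b=2, *, c=3):
    pass

print(ba_repr(inspect.signature(f).bind(1)))          # <BoundArguments bound: a>
print(ba_repr(inspect.signature(f).bind(1, 2, c=4)))  # <BoundArguments bound: a, b, c>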
--
Python tracker
<http://bugs.python.org/issue22547>
Yury Selivanov added the comment:
Yes and no ;)
You can have partially bound args, you can bind just one argument and use
defaults for the rest, etc. I agree that it's not an ideal solution, but it is
a sane compromise.
--
Yury Selivanov added the comment:
> Another thing I proposed in python-ideas is to have `__getitem__` delegate to
> `.arguments`, so this proposal is similar in spirit, because I want to have
> `__repr__` show information from `.arguments`.
Big -1 on __getitem__
> To be honest
Yury Selivanov added the comment:
> @Yury do you agree with this?
I think it's a perfectly normal behaviour. OSError is raised for valid kinds of
objects, and TypeError is raised when you're passing something weird. That's a
pretty common
Changes by Yury Selivanov :
--
resolution: -> not a bug
status: open -> closed
Python tracker
<http://bugs.python.org/issue19472>
Yury Selivanov added the comment:
I don't think it's a bug or that it's possible to do something about it.
Reloading modules in Python should usually just be avoided by all means.
--
nosy: +brett.cannon, yselivanov
Yury Selivanov added the comment:
I think that the main problem is that '_stop_server' is called from the main
thread (by unittest machinery via addCleanup), whereas the loop runs in another
thread. asyncio code is not thread-safe in general.
If I change your code slightly to a
New submission from Yury Selivanov:
The new and handy functools.partialmethod doesn't fully support
inspect.signature. For instance, for the following code:
import functools

class Spam:
    def say(self, a, b=1):
        print(a)
    hello = functools.partialmethod(say, 'hello')
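Continuing the snippet above, what one would expect (hedged: the exact pre-fix
behaviour varied) is that the argument already bound by the partialmethod
disappears from the reported signature:

import inspect

# The bound 'a' argument should be absorbed by the partialmethod:
print(inspect.signature(Spam().hello))   # ideally: (b=1)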
Yury Selivanov added the comment:
Larry,
Congrats on the amazing job you did with Argument Clinic.
And if you need any assistance with 'inspect.signature' I'd be glad to help.
--
nosy: +yselivanov
Python tracker
<http
Yury Selivanov added the comment:
Hi Eric,
I'm not sure why you want this. Having "Signature.from_callable" does not
allow you to change the behaviour of the 'inspect.signature' function. Moreover,
it creates confusion about what API should be used
Yury Selivanov added the comment:
Please consider the attached patch (getargsspec_01.patch).
It modifies 'getargspec' and 'getfullargspec' to use the 'inspect.signature'
API. The entire test suite passes just fine.
This will also address issue #16490.
I can
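For illustration, a very rough sketch of the idea (it ignores the many corner
cases the real getargsspec_01.patch has to handle, such as builtins without
signatures and __wrapped__ chains):

import inspect

def argspec_from_signature(func):
    # Derive getfullargspec-style data from inspect.signature.
    sig = inspect.signature(func)
    args, defaults, kwonlyargs, kwonlydefaults = [], [], [], {}
    varargs = varkw = None
    for name, p in sig.parameters.items():
        if p.kind in (p.POSITIONAL_ONLY, p.POSITIONAL_OR_KEYWORD):
            args.append(name)
            if p.default is not p.empty:
                defaults.append(p.default)
        elif p.kind is p.VAR_POSITIONAL:
            varargs = name
        elif p.kind is p.KEYWORD_ONLY:
            kwonlyargs.append(name)
            if p.default is not p.empty:
                kwonlydefaults[name] = p.default
        elif p.kind is p.VAR_KEYWORD:
            varkw = name
    return args, varargs, varkw, tuple(defaults) or None, kwonlyargs, kwonlydefaults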
Yury Selivanov added the comment:
This is related to issue #17481
--
nosy: +yselivanov
Python tracker
<http://bugs.python.org/issue16490>
Yury Selivanov added the comment:
> The difference is that inspect.signature is not friendly to Signature
> subclasses. Without from_callable you can't customize the behavior in
> inspect.signature easily.
OK, suppose you have "Signature.from_callable". You then creat
Yury Selivanov added the comment:
Eric,
Moreover, 'Signature.from_function' and the newly added 'Signature.from_builtin'
are private API, or an implementation detail (they are not part of the PEP and
not mentioned in the docs). If at some point it is needed to rewrite Signature
in C
Yury Selivanov added the comment:
OK, got it now.
Green light from me. Looking through the code, I saw that 'from_builtin'
doesn't use 'Signature._parameter_cls' (as from_function does). That
probably needs to be fixed as w
Yury Selivanov added the comment:
Larry, just a small thing: could you please add something like "Parameter =
cls._parameter_cls" in the "from_builtin" method? (see the discussion in #17373)
--
Python tracker
<http://bug
Changes by Yury Selivanov :
--
nosy: +yselivanov
Python tracker
<http://bugs.python.org/issue20230>
Changes by Yury Selivanov :
--
components: +Interpreter Core, Library (Lib)
versions: +Python 3.5
Python tracker
<http://bugs.python.org/issue20230>
Yury Selivanov added the comment:
> But this complicates life for inspect.Signature, which needs to not publish
> the "self" parameter when it's been bound.
That's already supported, isn't it?
>>> str(inspect.signature(F.a))
'(se
New submission from Yury Selivanov:
Can we remove the debug timing around "self._selector.select(timeout)"
(or at least make it configurable) from BaseEventLoop?
Right now the code is:
# TODO: Instrumentation only in debug mode?
t0 = self.time()
event_list = self._selector.select(timeout)
Yury Selivanov added the comment:
And I'd be happy to help with the patch.
--
Python tracker
<http://bugs.python.org/issue20275>
Yury Selivanov added the comment:
> What part of the debug timing is more expensive than the select call itself?
Well, there is nothing really expensive there, but:
- time.monotonic() is still a syscall
- logging.log executes a fair amount of code
Yury Selivanov added the comment:
I wrote a small micro-benchmark that creates 100K tasks and executes them.
With the debug code the execution time is around 2.8-3.1s; with the debug code
commented out it's around 2.3-2.4s.
Again, it's a micro-benchmark, and in a real application the
Yury Selivanov added the comment:
The micro-benchmark I used is here: https://gist.github.com/1st1/8446175
--
Python tracker
<http://bugs.python.org/issue20
Yury Selivanov added the comment:
> Wow, that's impressive that such minor syscalls can take so much times!
Apparently, it's not syscalls, it's logging.
Actually, with the "logging.log" code commented out I can't see a difference
whether there are calls to mono
Yury Selivanov added the comment:
> Regarding the microbench, please count and report how many times it actually
> calls select().
I'm on MacOS X, so it's KqueueSelector. Its 'select' method (and
self._kqueue.control respectively) is called twice as many times.
Yury Selivanov added the comment:
Victor,
Re your patch: since it's not really the time syscalls, and Guido said he's
using this debug code, how about we just have something like:
t0 = self.time()
event_list = self._selector.select(timeout)
t1 = self.time()
Yury Selivanov added the comment:
And I think that "asyncio._DEBUG" or even "asyncio.DEBUG" would be a great
idea.
--
Python tracker
<http://bug
Yury Selivanov added the comment:
Victor, Guido,
Please take a look at the attached one.
I believe it's slightly better than "logger_is_enabled_for.patch", since it
doesn't log stuff that was faster than 1 second
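Roughly, the shape of the change is as follows (a hedged sketch only; the real
code is in the attached greater_than_1sec_logging.patch, and self.time() /
self._selector are assumed to exist as in BaseEventLoop):

import logging

logger = logging.getLogger('asyncio')

class _LoopSketch:
    def _poll_once(self, timeout):
        t0 = self.time()
        event_list = self._selector.select(timeout)
        dt = self.time() - t0
        # Slow polls get logged at INFO; everything else only at DEBUG,
        # so the common fast path stays quiet unless debugging is on.
        level = logging.INFO if dt >= 1.0 else logging.DEBUG
        if logger.isEnabledFor(level):
            logger.log(level, 'poll took %.3f seconds', dt)
        return event_list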
Changes by Yury Selivanov :
Added file: http://bugs.python.org/file33487/greater_than_1sec_logging.patch
Python tracker
<http://bugs.python.org/issue20275>
Changes by Yury Selivanov :
Removed file: http://bugs.python.org/file33487/greater_than_1sec_logging.patch
Python tracker
<http://bugs.python.org/issue20275>
Changes by Yury Selivanov :
Added file: http://bugs.python.org/file33488/greater_than_1sec_logging.patch
Python tracker
<http://bugs.python.org/issue20275>
Yury Selivanov added the comment:
> I really want to log the time every time if level == DEBUG and only if > 1
> sec for other levels, so maybe all you need to do is remove the comment? :-)
> (Or maybe logger.isEnabledFor(logging.INFO) is faster than logger.log() when
> noth
Yury Selivanov added the comment:
Can somebody review the patch? It'd be cool if this lands in 3.4.
--
nosy: +larry
Python tracker
<http://bugs.python.org/is
Changes by Yury Selivanov :
--
nosy: +terry.reedy
Python tracker
<http://bugs.python.org/issue17481>
Yury Selivanov added the comment:
Larry,
getargspec uses getfullargspec, so it's covered.
--
Python tracker
<http://bugs.python.org/issue17481>
Yury Selivanov added the comment:
Larry,
I saw your message on the tracker regarding adding support for parameter
groups to the signature object. Would you mind if I joined the discussion with
my ideas of how this feature might be implemented?
Yury
On Sunday, January 19, 2014 at 5:44 AM
Yury Selivanov added the comment:
Terry,
Thanks.
> When responding by email, please snip the quotation and footer, except
> possibly a line of the quoted message.
My email client played an evil (and a bit embarrassing) trick on me, showing
Larry's name without an actual em
Yury Selivanov added the comment:
Well, the current code looks for __init__ or __new__. The only ones it can find
are 'object.__init__' and 'object.__new__', which are blacklisted because they
are in C. The question is: do (or will) 'object.__new__' or '__init__' have a sign
Yury Selivanov added the comment:
> Otherwise we run the risk of introducing unexpected exceptions into
> introspection code.
That's a good catch. I'll make a new patch, keeping the old implementation of
getfullargspec intact, and falling back to it if no signat
Yury Selivanov added the comment:
In this case it would probably be best to just special-case classes that don't
have __init__ or __new__ defined to return an empty signature without
parameters.
I can also make a special case for the object.__init__ and object.__new__
functions, if someone
Yury Selivanov added the comment:
>> Otherwise we run the risk of introducing unexpected exceptions into
>> introspection code.
> That's a good catch. I'll make a new patch, keeping the old implementation of
> getfullargsspec intact, and falling back to it if no s
Yury Selivanov added the comment:
Please take a look at the attached patch (signature_plain_cls_01.patch).
Now, the patch addresses two kinds of classes:
class C: pass
and
class C(type): pass
For metaclasses, signature will return a signature with three positional-only
parameters - (name
Yury Selivanov added the comment:
A couple of thoughts:
1. "(object_or_name, [bases, dict])" is the signature for the "type" function,
and yes, we need to agree on how that should look. Maybe exactly as you
proposed, as it is what it is after all.
2. For user-defined
Yury Selivanov added the comment:
When in doubt, import this ;)
Agreed. So the best course would be: make a patch for plain classes (not
metaclasses). Fix signatures for metaclasses without __init__/__new__ when we
have group support for parameters, so that we can have (obj_or_name, [bases
Yury Selivanov added the comment:
Attached is a stripped-down patch that special-cases classes without
__init__/__new__. Not metaclasses; for those we can start a new issue.
--
Added file: http://bugs.python.org/file33557/signature_plain_cls_02.patch
Yury Selivanov added the comment:
> Yury: fire away, either here or in a new issue as is best. (Sorry, brain
> mildly fried, not sure what would be best issue-tracker hygiene.)
Larry, I ended up with a relatively big core dump of my thoughts, so I decided
to post it on python-dev.