>> IMO supporting nanosecond in datetime and timedelta is an orthogonal issue.
>
> Not if you use it to cast them aside for this issue. ;)
Hum, yes. I meant that even if we don't keep datetime as a
supported type for time.time(), we can still patch the type to make it
support nanosecond resolution
2012/2/14 Barry Warsaw :
> On Feb 13, 2012, at 07:33 PM, Victor Stinner wrote:
>
>>Oh, I forgot to mention my main concern about datetime: many functions
>>returning timestamps have an undefined starting point (and no timezone
>>information), and so cannot be converted to datetime
(Oops, I sent my email by mistake, here is the end of my email)
> (...) Ah, timedelta case is different. But I already replied to Nick in this
> thread about timedelta. You can also
see arguments against timedelta in PEP 410.
Victor
> I agree with Barry here (despite having voiced support for using Decimal
> before): datetime.datetime *is* the right data type to represent time
> stamps. If it means that it needs to be improved before it can be used
> in practice, then so be it - improve it.
Maybe I missed the answer, but how do you handle timestamps with an
unspecified starting point like os.times() or time.clock()? Should we
leave these functions unchanged?
2012/2/15 "Martin v. Löwis" :
> I agree with Barry here (despite having voiced support for using Decimal
> before): datetime.datetime *is* the right data type to represent time
> stamps. If it means that it needs to be improved before it can be used
> in practice, then so be it - improve it.
Decim
> I'd like to remind people what the original point of the PEP process
> was: to avoid going in cycles in discussions. To achieve this, the PEP
> author is supposed to record all objections in the PEP, even if he
> disagrees (and may state rebuttals for each objection that people
> brought up).
>
>
2012/2/15 Guido van Rossum :
> I just came to this thread. Having read the good arguments on both
> sides, I keep wondering why anybody would care about nanosecond
> precision in timestamps.
Python 3.3 exposes C functions that return a timespec structure. This
structure contains a timestamp with a resolution of 1 nanosecond.
>> Linux supports nanosecond timestamps since Linux 2.6, Windows supports
>> 100 ns resolution since Windows 2000 or maybe before. It doesn't mean
>> that the Windows system clock is accurate: in practice, it's hard to get
>> something better than 1 ms :-)
>
> Well, do you think the Linux system clock
2012/2/16 "Martin v. Löwis" :
>> Maybe an alternative PEP could be written that supports the filesystem
>> copying use case only, using some specialized ns APIs? I really think
>> that all you need is st_{a,c,m}time_ns fields and os.utime_ns().
>
> I'm -1 on that, because it will make people write
Most users don't need a truly ACID write; they implement their own
best-effort function instead. Rather than having a different
implementation in each project, Python could provide something better,
especially when the OS provides low-level functions to implement such a feature.
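For illustration, a best-effort atomic write can be sketched in a few
lines (a hypothetical helper, assuming POSIX rename semantics; not the
API proposed here):

    import os
    import tempfile

    def write_atomic(path, data):
        # Write to a temporary file in the same directory, flush it to
        # disk, then replace the target: on POSIX, rename() over an
        # existing file is atomic.
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
        try:
            with os.fdopen(fd, "wb") as f:
                f.write(data)
                f.flush()
                os.fsync(f.fileno())
            os.rename(tmp, path)
        except BaseException:
            os.unlink(tmp)
            raise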
Victor
2012/2/16 "Martin v. Löwis" :
2012/2/15 Guido van Rossum :
> So using floats we can match 100ns precision, right?
Nope, not to store an Epoch timestamp newer than January 1987:
>>> x=2**29; (x+1e-7) != x # no loss of precision
True
>>> x=2**30; (x+1e-7) != x # lose precision
False
>>> print(datetime.timedelta(seconds=2**29))
6213 days, 18:48:32
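For comparison, Decimal (the type proposed by PEP 410) keeps full
nanosecond resolution at that magnitude; a quick check:

>>> from decimal import Decimal
>>> x = Decimal(2**30)
>>> (x + Decimal("1e-9")) != x   # no loss of precision
True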
> A data point on this specific use case. The following code throws its
> assert ~90% of the time in Python 3.2.2 on a modern Linux machine (assuming
> "foo" exists and "bar" does not):
>
> import shutil
> import os
> shutil.copy2("foo", "bar")
> assert os.stat("foo").st_mtime == os.stat("bar").st_mtime
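For what it's worth, the integer st_mtime_ns field added to os.stat()
results in Python 3.3 allows the same comparison without any float
rounding (a sketch, with the same "foo"/"bar" assumption as above):

    import os
    import shutil

    shutil.copy2("foo", "bar")
    # integer nanoseconds on both sides: the comparison is exact
    print(os.stat("foo").st_mtime_ns == os.stat("bar").st_mtime_ns)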
> PEP author Victor asked
> (in http://mail.python.org/pipermail/python-dev/2012-February/116499.html):
>
>> Maybe I missed the answer, but how do you handle timestamps with an
>> unspecified starting point like os.times() or time.clock()? Should we
>> leave these functions unchanged?
>
> If *all* yo
> The way Linux does that is to use the time-stamping counter of the
> processor (the rdtsc instructions), which (originally) counts one unit
> per CPU clock. I believe current processors use slightly different
> counting schemes (e.g. through the APIC), but still: you get a resolution
> within the clock
> If I run your snippet and inspect modification times using `stat`, the
> difference is much smaller (around 10 ns, not 1 ms):
>
> $ stat test | \grep Modify
> Modify: 2012-02-16 13:51:25.643597139 +0100
> $ stat test2 | \grep Modify
> Modify: 2012-02-16 13:51:25.643597126 +0100
The loss of precision is not constant: it depends on the timestamp value.
>> > $ stat test | \grep Modify
>> > Modify: 2012-02-16 13:51:25.643597139 +0100
>> > $ stat test2 | \grep Modify
>> > Modify: 2012-02-16 13:51:25.643597126 +0100
>>
>> The loss of precision is not constant: it depends on the timestamp value.
>
> Well, I've tried several times and I can't reproduce
2012/2/16 Guido van Rossum :
> On Thu, Feb 16, 2012 at 2:04 PM, Victor Stinner
> wrote:
>> It doesn't change anything for the Makefile issue: if timestamps are
>> different by a single nanosecond, they are seen as different by make
>> (or by another program comparing the
>> The problem is that shutil.copy2() sometimes produces an *older*
>> timestamp :-/ (...)
>
> Have you been able to reproduce this with an actual Makefile? What's
> the scenario?
Hum. I asked the Internet who uses shutil.copy2() and I found an "old"
issue (Decimal('43462967.173053') seconds ago):
Py
Congratulations to Kerrick Staley and Nick Coghlan, the authors of the
PEP! It's good to hear that the "python", "python2" and "python3"
symlinks are now standardized in a PEP. I hope that most Linux
distributions will follow this PEP :-)
Victor
> Maybe it's okay to wait a few years on this, until either 128-bit
> floats are more common or cDecimal becomes the default floating point
> type? In the mean time for clock freaks we can have a few specialized
> APIs that return times in nanoseconds as a (long) integer.
I don't think that the de
PEP: 410
Title: Use decimal.Decimal type for timestamps
Version: $Revision$
Last-Modified: $Date$
Author: Victor Stinner
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 01-February-2012
Python-Version: 3.3
Abstract
Decimal becomes the official type for high-resolution timestamps
As asked by Martin, I tried to list *all* objections and alternatives.
> * A: (numerator, denominator)
>
> * value = numerator / denominator
> * resolution = 1 / denominator
> * denominator > 0
> (...)
> Tuples of integers have been rejected because they don't support
> arithmetic operations
>>> We must do better than Ruby: support arbitrary precision! :-D
>>
>> Seriously, I do consider that a necessary requirement for the PEP (which
>> the Decimal type actually meets). (...)
>
> (...)
> Not-quite-sure-how-seriously-you-intend-supporting-yoctoseconds-ly y'rs,
The point is not support
Python 3.3 has two new functions in the time module: monotonic() and
wallclock().
Victor
I rejected datetime.datetime because I want to get nanosecond
resolution for the time and os modules, not only for the os module. If we
choose to only patch the os module (*stat() and *utime*() functions),
datetime.datetime would be meaningful (e.g. it's easier to format a
datetime for a human than an Epoch timestamp)
Oh sorry and thanks for the fix.
Victor
2012/2/24 Antoine Pitrou :
> On Fri, 24 Feb 2012 00:49:31 +0100
> victor.stinner wrote:
>> http://hg.python.org/cpython/rev/f89e2f4cda88
>> changeset: 75231:f89e2f4cda88
>> user: Victor Stinner
>> date:
> Scratch that, *I* don't agree. timedelta is a pretty clumsy type to
> use. Have you ever tried to compute the number of seconds between two
> datetimes? You can't just use the .seconds field, you have to combine
> the .days and .seconds fields. And negative timedeltas are even harder
> due to the
Rationale
=========
A frozendict type is a common request from users and there are various
implementations. There are two main Python implementations:
* "blacklist": frozendict inheriting from dict and overriding methods
to raise an exception when trying to modify the frozendict
* "whitelist":
> This may be an issue at the C level (I'm not sure), but since this would
> be a Python 3-only collection, "user" code (in Python) should/would
> generally be using abstract base classes, so type-checking would not
> be an issue (as in Python code performing `isinstance(a, dict)` checks
> naturall
>> The blacklist implementation has a major issue: it is still possible
>> to call write methods of the dict class (e.g.
>> dict.__setitem__(my_frozendict, key, value)).
>
> It is also possible to use ctypes and violate even more invariants.
> For most purposes, this falls under "consenting adults".
My pr
>> A frozendict type is a common request from users and there are various
>> implementations. There are two main Python implementations:
>
> Perhaps this should also detail why namedtuple is not a viable alternative.
It doesn't have the same API. Example: frozendict[key] vs
namedtuple.attr (namedtu
> I think you need to elaborate on your use cases further, ...
A frozendict can be used as a member of a set or as a key in a dictionary.
For example, frozendict is indirectly needed when you want to use an
object as a key of a dict, while one attribute of this object is a
dict. Use a frozendict
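A short illustration (assuming a frozendict type with a dict-style
constructor):

    key = frozendict({'host': 'localhost', 'port': 8080})
    cache = {key: 'connection state'}   # hashable, so usable as a dict key
    assert cache[frozendict({'host': 'localhost', 'port': 8080})] == 'connection state'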
>> A frozendict can be used as a member of a set or as a key in a dictionary.
>>
>> For example, frozendict is indirectly needed when you want to use an
>> object as a key of a dict, while one attribute of this object is a
>> dict.
>
> It isn't. You just have to define __hash__ correctly.
Define
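(For context, "defining __hash__ correctly" for an object carrying a
dict attribute would look something like this; hypothetical class,
sketch only:)

    class Options:
        def __init__(self, settings):
            self._settings = dict(settings)  # the dict-valued attribute

        def __eq__(self, other):
            return isinstance(other, Options) and self._settings == other._settings

        def __hash__(self):
            # hash the *content* through an immutable snapshot of the items
            # (assumes the values themselves are hashable)
            return hash(frozenset(self._settings.items()))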
Updated patch and more justifications.
New patch:
- dict doesn't inherit from frozendict anymore
- frozendict is a subclass of collections.abc.Mutable
- more tests
> * frozendict.__hash__ computes hash(frozenset(self.items())) and
> caches the result in its private hash attribute
hash(frozen
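In pure Python, the caching described above might look roughly like this
(a sketch, not the patch itself):

    class frozendict(dict):
        _hash = None

        def __hash__(self):
            if self._hash is None:
                # compute once, cache in a private attribute;
                # requires all values to be hashable, which is
                # exactly the point discussed just below
                self._hash = hash(frozenset(self.items()))
            return self._hash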
>> * frozendict values must be immutable, as dict keys
>
> Why? That may be useful, but an immutable dict whose values
> might mutate is also useful; by forcing that choice, it starts
> to feel too specialized for a builtin.
Hum, I realized that calling hash(my_frozendict) on a frozendict
instan
PEP: 416
Title: Add a frozendict builtin type
Version: $Revision$
Last-Modified: $Date$
Author: Victor Stinner
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 29-February-2012
Python-Version: 3.3
Abstract
Add a new frozendict builtin type.
Rationale
=========
A frozendict mapping cannot be changed, but its values can be mutable
(not hashable). A frozendict is hashable and so immutable if all
values are hashable (immutable).
> It would (apparently) help Victor to fix issues in his pysandbox
> project. I don't know if a secure Python sandbox is an important
> enough concept to warrant core changes to make it possible.
Ok, let's talk about sandboxing and security.
The main idea of pysandbox is to reuse most of CPython but hide
"dangerous" functions and run untrusted code in a separate namespace.
>> A frozendict type is a common request from users and there are various
>> implementations.
>
> ISTM, this request is never from someone who has a use case.
One of my colleagues recently implemented his own frozendict class
(with the "frozendict" name ;-)). He tried to implement something
like
>> Problem: if you implement a frozendict type inheriting from dict in
>> Python, it is still possible to call dict methods (e.g.
>> dict.__setitem__()). To fix this issue, pysandbox removes all dict
>> methods modifying the dict: __setitem__, __delitem__, pop, etc. This
>> is a problem because unt
>> The main idea of pysandbox is to reuse most of CPython but hide
>> "dangerous" functions and run untrusted code in a separated namespace.
>> The problem is to create the sandbox and ensure that it is not
>> possible to escape from this sandbox. pysandbox is still a
>> proof-of-concept, even if i
>> Rationale
>> =========
>>
>> A frozendict mapping cannot be changed, but its values can be mutable
>> (not hashable). A frozendict is hashable and so immutable if all
>> values are hashable (immutable).
> The wording of the above seems very unclear to me.
>
> Do you mean "A frozendict has a cons
>> An immutable mapping can be implemented using frozendict::
>>
>>     class immutabledict(frozendict):
>>         def __new__(cls, *args, **kw):
>>             # ensure that all values are immutable
>>             for key, value in itertools.chain(args, kw.items()):
>>                 if not isinstance(value, Hashable):  # collections.Hashable
>>                     raise TypeError("value %r is not hashable" % (value,))
>>             return frozendict.__new__(cls, *args, **kw)
> A builtin frozendict type "compatible" with the PyDict C API is very
> convenient for pysandbox because using this type for core features
> like builtins requires very few modifications. For example, using
> frozendict for __builtins__ only requires modifying 3 lines in
> frameobject.c.
See the fro
> Here are my real-world use cases. Not for security, but for safety and
> performance reasons (I've built my own RODict and ROList modeled after
> dictproxy):
>
> - Global, but immutable containers, e.g. as class members
I attached type_final.patch to the issue #14162 to demonstrate how
frozendic
> But I will try to suggest another approach. `frozendict` inherits from
> `dict`, but the data is not stored in the parent, but in an internal
> dictionary. And even if dict.__setitem__ is used, it will have no visible
> effect.
>
> class frozendict(dict):
>     def __init__(self, values={}):
>         # keep the data in a private dict; the parent dict storage stays empty
>         self._values = dict(values)
>     def __getitem__(self, key):
>         return self._values[key]
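With that layout, writes through the parent class only touch the unused
dict storage, at least at the Python level (C code calling
PyDict_GetItem would still see them):

    fd = frozendict({'a': 1})
    dict.__setitem__(fd, 'a', 2)   # writes the unused parent storage
    assert fd['a'] == 1            # lookup goes through _values, unchanged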
>> > Here are my real-world use cases. Not for security, but for safety and
>> > performance reasons (I've built my own RODict and ROList modeled after
>> > dictproxy):
>> >
>> > - Global, but immutable containers, e.g. as class members
>>
>> I attached type_final.patch to the issue #14162 to demon
> In App Engine's case, an attacker who broke out of the sandbox would have
> access to the inside of Google's datacenter, which would obviously be
> bad -- that's why Google has developed its own sandboxing
> technologies.
This is not specific to Google: if an attacker breaks a sandbox,
he/she ha
Hi,
The frozendict discussion switched somewhere to sandboxing, and so I
prefer to start a new thread.
There are various ways to implement a sandbox, but I would like to
expose here how I implemented pysandbox to have your opinion.
pysandbox is written to quickly execute a short untrusted function
>> I challenge anyone to try to break pysandbox!
>
> Can you explain precisely how a frozendict will help pysandbox? Then
> I'll be able to beat this challenge :-)
See this email:
http://mail.python.org/pipermail/python-dev/2012-February/117011.html
The issue #14162 has also two patches: one to m
> I challenge anyone to break pysandbox! I would be happy if anyone
> breaks it because it would make it stronger.
Hum, I should give some rules for such a contest:
- the C module (_sandbox) must be used
- you have to get access to an object outside the sandbox, like a real
module, or get acce
On 01/03/2012 19:07, Guido van Rossum wrote:
What other use cases are there?
frozendict could be used to implement "read-only" types: it is not
possible to add or remove an attribute or set an attribute value, but
an attribute value can be a mutable object. Example of an enum with my
type_fi
> I think you should provide stronger arguments in each case why the
> data needs to be truly immutable or read-only, rather than just using
> a convention or an "advisory" API (like __private can be circumvented
> but clearly indicates intent to the reader).
I only know one use case for "truly im
Hi,
On 03/03/2012 20:13, Armin Rigo wrote:
I challenge anyone to break pysandbox! I would be happy if anyone
breaks it because it would make it stronger.
I tried to run the files from Lib/test/crashers and --- kind of
obviously --- I found at least two of them that still segfault
execfile.py, sometimes with minor edits
On 29/02/2012 19:21, Victor Stinner wrote:
Rationale
=========
(...) Use cases of frozendict: (...)
I updated the PEP to list use cases described in the other related
mailing list thread.
---
Use cases:
* frozendict lookup can be done at compile time instead of runtime
because the
> Is your implementation (adapted to a standalone type) something you
> could put up on the cheeseshop?
Short answer: no.
My implementation (attached to the issue #14162) reuses most of the
private PyDict functions which are not exported, and these functions
have to be modified to accept a frozendict a
>>> You can't solve the "too much time" problem without solving the halting problem,
>>
>> Not sure what you mean by that. It seems to me that it's particularly
>> easy to do in a roughly portable way, with alarm() for example on all
>> UNIXes.
>
> What time should you set the alarm for? How much time is e
2012/3/5 Serhiy Storchaka :
> 05.03.12 11:09, Victor Stinner wrote:
>
>> pysandbox uses SIGALRM with a timeout of 5 seconds by default. You can
>> change this timeout or disable it completely.
>>
>> pysandbox doesn't provide a function to limit the memory
>>> I challenge anyone to break pysandbox! I would be happy if anyone
>>> breaks it because it would make it stronger.
>
> I tried to run the files from Lib/test/crashers and --- kind of
> obviously --- I found at least two of them that still segfault
> execfile.py, sometimes with minor edits
> Just forbid the sandboxed code from using the signal module, and set
> the signal to the default action (abort).
Ah yes, good idea. It may be an option because depending on the use
case, failing with abort is not always the best option.
The signal module is not allowed by the default policy.
>
> For a comparison, the PyPy sandbox is a program compiled from a
> higher-level language, which by design does not have the sorts of
> problems described. The amount of code you need to carefully review is
> very minimal (as compared to the entire CPython interpreter). It does
> not mean it has no bugs, b
> In the array module the 'u' specifier previously meant "2-bytes, on wide
> builds 4-bytes". Currently in 3.3 the 'u' specifier is mapped to UCS4.
>
> I think it would be nice for Python3.3 to implement the PEP-3118
> suggestion:
>
> 'c' -> UCS1
>
> 'u' -> UCS2
>
> 'w' -> UCS4
A Unicode string is
Hi,
During the Language Summit 2011 (*), it was discussed that PyPy and
Jython don't support non-string keys in type dicts. An issue was opened
to emit a warning on such dicts, but the patch has not been committed yet.
I'm trying to fix Lib/test/crashers/losing_mro_ref.py: I wrote a patch
fixing the specif
> During the Language Summit 2011 (*), it was discussed that PyPy and
> Jython don't support non-string keys in type dicts. An issue was opened
> to emit a warning on such dicts, but the patch has not been committed yet.
It's the issue #11455. As written in the issue, there are two ways to
create such a type
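For illustration, one of those ways is passing a non-string key
directly to type() (this works in CPython; PyPy and Jython don't
support it):

>>> Weird = type('Weird', (object,), {42: "non-string key"})
>>> Weird.__dict__[42]
'non-string key'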
On 07/03/2012 16:33, Mark Shannon wrote:
It should also help with sandboxing, as it would make it easier to
analyse and thus control access to builtins, since the execution context
of all code would be easier to determine.
pysandbox patches __builtins__ in:
- the caller frame
- the interprete
On 05/03/2012 23:11, Victor Stinner wrote:
3 tests are crashing pysandbox:
- modify a dict during a dict lookup: I proposed two different fixes
in issue #14205
- type MRO changed during a type lookup (modify __bases__ during the
lookup): I proposed a fix in issue #14199 (keep a reference to
On 01/03/2012 22:59, Victor Stinner wrote:
I challenge anyone to break pysandbox! I would be happy if anyone
breaks it because it would make it stronger.
Results, one week later. Nobody found a vulnerability giving access to
the filesystem or to the sandbox.
Armin Rigo complained that
If you use your own comparison function (__eq__) for objects used as
dict keys, you may now get RuntimeError with Python 3.3 if the dict is
modified during a dict lookup in a multithreaded application. You
should use a lock on the dict to avoid this exception.
Said differently, a dict lookup is no longer atomic.
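Concretely, the suggested locking looks like this (a minimal sketch):

    import threading

    _lock = threading.Lock()
    shared = {}

    def lookup(key):
        # hold the lock for the whole lookup, so no other thread can
        # mutate the dict while a custom __eq__ is running
        with _lock:
            return shared.get(key)

    def store(key, value):
        with _lock:
            shared[key] = value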
> The reason I am proposing this here rather than on python-ideas is that
> treating the triple of [locals, globals, builtins] as a single
> "execution context" can be implemented in a really nice way.
>
> Internally, the execution context of [locals, globals, builtins]
> can be treated as a single im
On 09/03/2012 22:32, Jim Jewett wrote:
I do not believe the change set below is valid.
As I read it, the new test verifies that one particular type of Nasty
key will provoke a RuntimeError -- but that particular type already
did so, by hitting the recursion limit. (It doesn't even really
mutate
On 09/03/2012 20:44, Thomas Wouters wrote:
(...) it would be nice to still
run the tests that can be run without _testcapi. Any objections to
fixing the tests to use test.support.import_module() for _testcapi and a
'needs_testcapi' skipping decorator?
test.support.import_module() looks fine fo
On 01/03/2012 14:49, Paul Moore wrote:
Just avoid using the term "immutable" at all:
You're right; I removed the mention of mutable/immutable from the PEP.
Victor
> benjamin.peterson wrote:
>> http://hg.python.org/cpython/rev/3877bf2e3235
>> changeset: 75542:3877bf2e3235
>> user: Benjamin Peterson
>> date: Mon Mar 12 09:46:44 2012 -0700
>> summary:
>> give the AST class a __dict__
>
> This seems to have broken the Windows buildbots.
http
Hi,
I added two functions to the time module in Python 3.3: wallclock()
and monotonic(). I'm unable to explain the difference between these
two functions, even if I wrote them :-) wallclock() is supposed to be
more accurate than time() but has an unspecified starting point.
monotonic() is similar e
On 14/03/2012 01:18, Nadeem Vawda wrote:
So wallclock() falls back to a not-necessarily-monotonic time source
if necessary, while monotonic() raises an exception in that case? ISTM
that these don't need to be separate functions - rather, we can have
one function that takes a flag (called
require_
> I agree that it's better to have only one of these. I also think if we
> offer it we should always have it -- if none of the implementations
> are available, I guess you could fall back on returning time.time(),
> with some suitable offset so people don't think it is always the same.
> Maybe it c
2012/3/14 Antoine Pitrou :
> On Wed, 14 Mar 2012 02:03:42 +0100
> Victor Stinner wrote:
>>
>> We may merge both functions with a flag to be able to disable the
>> fallback. Example:
>>
>> - time.realtime(): best-effort monotonic, with a fallback
>>
On 14/03/2012 00:57, Victor Stinner wrote:
I added two functions to the time module in Python 3.3: wallclock()
and monotonic(). (...)
I merged the two functions into one function: time.steady(strict=False).
time.steady() should be monotonic most of the time, but may use a fallback
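Usage of that proposed API would look like this (a sketch only;
time.steady() never shipped in this form, Python 3.3 ended up providing
time.monotonic() instead):

    import time

    t0 = time.steady()           # monotonic if available, else time.time() fallback
    time.sleep(0.1)
    elapsed = time.steady() - t0 # close to 0.1, never negative with a monotonic clock

    time.steady(strict=True)     # monotonic clock or an exception, never a fallback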
> I merged the two functions into one function: time.steady(strict=False).
I opened the issue #14309 to deprecate time.clock():
http://bugs.python.org/issue14309
time.clock() is a different clock type depending on the OS (Windows vs
UNIX) and so is confusing. You should now decide between time.ti
2012/3/15 Matt Joiner :
> Victor, I think that steady can always be monotonic, there are time sources
> enough to ensure this on the platforms I am aware of. Strict in this sense
> refers to not being adjusted forward, i.e. CLOCK_MONOTONIC vs
> CLOCK_MONOTONIC_RAW.
I don't think that CLOCK_MONOTON
> I have to agree with Georg. Looking at the code, it appears OSError can
> be raised with both strict=True and strict=False (since floattime() can
> raise OSError).
This is an old bug in floattime(): I opened the issue #14368 to remove
the unused exception. In practice, it never happens (or it is
>>> This is not clear to me. Why wouldn't it raise OSError on error even with
>>> strict=False? Please clarify which exception is raised in which case.
>>
>> It seems clear to me. It doesn't raise exceptions when strict=False because
>> it falls back to a non-monotonic clock. If strict is True an
2012/3/20 Jim J. Jewett :
>
>
> In http://mail.python.org/pipermail/python-dev/2012-March/117762.html
> Georg Brandl posted:
>
>>> + If available, a monotonic clock is used. By default, if *strict* is
>>> False,
>>> + the function falls back to another clock if the monotonic clock failed
>>>
> I think this discussion has veered off a bit into the overly-theoretical.
> Python cannot really "guarantee" anything here
That's why the function name was changed from time.monotonic() to
time.steady(strict=True). If you want to change something, you should
change the documentation to list OS
pysandbox is a Python sandbox. By default, untrusted code executed in
the sandbox cannot modify the environment (write a file, use print or
import a module). But you can configure the sandbox to choose exactly
which features are allowed or not, e.g. importing the sys module and
reading the /etc/issue file.
http
>> http://hg.python.org/cpython/rev/730d5357
>> changeset: 75850:730d5357
>> user: Stefan Krah
>> date: Wed Mar 21 18:25:23 2012 +0100
>> summary:
>> Issue #7652: Integrate the decimal floating point libmpdec library to speed
>> up the decimal module. Performance gains of
2012/3/22 Guido van Rossum :
> To close the loop, I've rejected the PEP, adding the following rejection
> notice:
>
> """
> I'm rejecting this PEP. (...)
Hum, you may want to specify who "I" is in the PEP.
Victor
> My proposed syntax is x(:)
Changing the Python syntax is not a good start. You can already
experiment with your idea using the slice() type.
> We would have to do something like this.
> sum(x[:-20:2])
Do you know the itertools module? It looks like itertools.islice().
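For example (islice takes the same start/stop/step values as a slice,
but works lazily on any iterable):

>>> import itertools
>>> x = list(range(100))
>>> sum(x[:-20:2]) == sum(itertools.islice(x, 0, len(x) - 20, 2))
True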
Victor
> On the other hand, exposing the existing read-only dict proxy as a
> built-in type sounds good to me. (It would need to be changed to
> allow calling the constructor.)
I wrote a small patch to implement this request:
http://bugs.python.org/issue14386
I also opened the following issue to suppor
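(That dictproxy exposure did land in Python 3.3, as
types.MappingProxyType; a quick illustration of the final API:)

>>> import types
>>> proxy = types.MappingProxyType({'x': 1})
>>> proxy['x']
1
>>> proxy['x'] = 2
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'mappingproxy' object does not support item assignment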
Hi,
I created the following issue to expose the dictproxy as a builtin type.
http://bugs.python.org/issue14386
It would be interesting to accept any mapping type, not only dict.
dictproxy implementation supports any mapping, even list or tuple, but
I don't want to support sequences because a missi
By the way, how much faster is cdecimal? 72x or 80x?
http://docs.python.org/dev/whatsnew/3.3.html#decimal
Victor
2012/3/23 Stefan Krah :
> Georg Brandl wrote:
>> >>> Issue #7652: Integrate the decimal floating point libmpdec library to
>> >>> speed
>> >>> up the decimal module. Performance gai
> I want time.steady(strict=True), and I'm glad you're providing it and
> I'm willing to use it this way, although it is slightly annoying
> because "time.steady(strict=True)" really means
> "time.steady(i_really_mean_it=True)". Else, I would have used
> "time.time()".
>
> I am aware of a large num
2012/3/23 Yury Selivanov :
> Why can't I use select & threads? You mean that if a platform does not
> support monotonic clocks it also does not support threads and select sys
> call?
Python 3.3 now uses time.steady(strict=False) in the threading and
queue modules. If we replace it by time.steady(
Hi,
time.steady(strict=True) looks to be confusing for most people, some
of them don't understand the purpose of the flag and others don't like
a flag changing the behaviour of the function.
I propose to replace time.steady(strict=True) by time.monotonic().
That would avoid the need of an ugly No
> This seems like it should have been a PEP, or maybe should become a PEP.
I replaced time.wallclock() by time.steady(strict=False) and
time.monotonic() by time.steady(strict=True). This change solved the
naming issue of time.wallclock(), but it was a bad idea to merge
monotonic() feature into tim
>> time.steady() is something like:
>>
>> try:
>>     return time.monotonic()
>> except (NotImplementedError, OSError):
>>     return time.time()
>
> Is the use of weak monotonic time so wide-spread in the stdlib that we
> need the 'steady()' function? If it's just two modules then it's not
> worth adding
> Question: under what circumstances will monotonic() exist but raise OSError?
On Windows, OSError is raised if QueryPerformanceFrequency fails.
Extract of Microsoft doc:
"If the function fails, the return value is zero. To get extended
error information, call GetLastError. For example, if the in
>> - time.clock(): monotonic clock on Windows, CPU time on UNIX
>
>
> Actually, I think that is not correct. Or at least *was* not correct in
> 2006.
>
> http://bytes.com/topic/python/answers/527849-time-clock-going-backwards
Oh, I was not aware of this issue. Do you suggest not to use
QueryPerformanceCounter()?
> Does this mean that there are circumstances where monotonic will work for a
> while, but then fail?
No. time.monotonic() always works or always fails. If monotonic()
failed, steady() doesn't call it again.
> Otherwise, we would only need to check monotonic once, when the time module
> is first loaded